
The AI Genie is out of the Bottle!

Self-modifying Systems Are on the Loose!


APIs can be implemented in a way that produces self-modifying code that could lead to new cybersecurity vulnerabilities.

As many have done, I have queried ChatGPT on topics ranging from competitive products to cybersecurity strategies. Recently I have focused on code generation. I asked it for source code and APIs on various topics, and that led me to ask about CI/CD (Continuous Integration and Continuous Delivery) systems with integrations to ChatGPT. At that moment I became concerned about how easily self-modifying system code could be exploited. AI governance must adapt quickly to address this risk.

To validate my concerns, I asked ChatGPT: Can CI/CD systems auto-generate new code using ChatGPT's source code, and thereby create self-modifying systems?

ChatGPT’s response, summarized:

While CI/CD systems can certainly automate the process of building and deploying code, they cannot generate new code from scratch. And ChatGPT may not necessarily create executable code.

CI/CD systems can automate many aspects of software development, but they cannot generate entirely new code on their own. Similarly, while self-modifying systems exist, they typically rely on predefined rules or algorithms rather than generating entirely new code.

My answer to ChatGPT: Hogwash!

If the capability exists, it will be exploited! And we need to address it now!

As CTO of a software firm with CI/CD capabilities and a software bus that connects to other CI/CD solutions and ChatGPT (see figure 1), I have the building blocks for an automated system that can query ChatGPT for code. Using our Pipeline-as-Code tools, we can automate integration and delivery to the field, and we can have this recur in a closed-loop system where rules and algorithms are modified without human oversight.
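To make the closed loop concrete, here is a minimal sketch of one pipeline iteration. All names (`generate_patch`, `run_tests`, `pipeline_iteration`) are hypothetical, and the LLM call is stubbed; a real setup would issue an API request to a code-generation model and hand the result to a CI/CD tool.

```python
# Minimal sketch of one iteration of a closed-loop "self-modifying" pipeline.
# generate_patch, run_tests, and pipeline_iteration are hypothetical names;
# the LLM call is stubbed out for illustration.

def generate_patch(prompt: str) -> str:
    """Stand-in for an LLM call that returns new source code."""
    # In a real pipeline this would be an API request to a code-generation model.
    return "def greet():\n    return 'hello from generated code'\n"

def run_tests(source: str) -> bool:
    """Stand-in for the CI stage: execute the generated code and check it."""
    namespace: dict = {}
    try:
        exec(source, namespace)  # executing untrusted generated code -- the core risk
        return namespace["greet"]() == "hello from generated code"
    except Exception:
        return False

def pipeline_iteration(prompt: str) -> bool:
    """One loop of generate -> test -> (deploy), with no human in the loop."""
    source = generate_patch(prompt)
    if run_tests(source):
        # A deploy step here would push the code to the field; the loop then repeats.
        return True
    return False

print(pipeline_iteration("write a greet() function"))
```

The point of the sketch is the shape of the loop, not the stub: once generation, testing, and deployment are wired together with no human gate, the system can keep rewriting itself.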

How Can Companies Address These Risks?

As enterprise CISOs, CTOs, and CIOs assess their CI/CD solutions, their pipeline-as-code solutions, and their automation workflows, they must take extra care to select only vendors with process compliance and cybersecurity risk management support built into their solution sets.

My recommendation is to select vendors whose architecture supports testing and cybersecurity vulnerability scanning in their CI/CD workflows and pipelines, as in the following diagram, ideally with shift-left testing that addresses issues before they reach the field.

[Figure: CI/CD workflow with integrated testing and vulnerability scanning]

Source: Kovair
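A shift-left gate can be as simple as failing the pipeline when the scanner reports findings above an allowed severity. The sketch below is illustrative: the findings are stubbed, and the severity scale is an assumption; a real gate would parse output from an actual SAST or dependency-scanning tool.

```python
# Illustrative shift-left gate: block promotion to the next pipeline stage
# when a vulnerability scan reports findings above a severity threshold.
# The findings and severity scale are stubbed assumptions for this sketch.

SEVERITY_ORDER = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def gate(findings: list, max_allowed: str = "low") -> bool:
    """Return True if the build may proceed to the next stage."""
    threshold = SEVERITY_ORDER[max_allowed]
    return all(SEVERITY_ORDER[f["severity"]] <= threshold for f in findings)

# Example: one high-severity finding blocks the build.
findings = [
    {"id": "CWE-78", "severity": "high"},  # hypothetical scanner finding
    {"id": "CWE-20", "severity": "low"},
]
print(gate(findings))  # blocked
print(gate([]))        # clean scan proceeds
```

Wiring a gate like this into the pipeline is what makes testing "shift-left": generated code that fails the scan never leaves the build stage.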

But since we know from experience that ChatGPT- or Generative AI-generated source code will likely get deployed anyway, post-deployment monitoring is also needed, from vendors like Splunk, Red Hat, New Relic, Selenium, and Veracode.

[Figure: Post-deployment monitoring tools]

Source: Kovair.
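One concrete form of post-deployment monitoring for self-modification is integrity drift detection: fingerprint the deployed artifacts at release time, re-hash them periodically, and alert when they change outside a sanctioned deployment. The sketch below uses in-memory byte strings as stand-in artifacts; a real monitor would hash files on disk and feed alerts into a tool such as those named above.

```python
# Sketch of post-deployment drift detection: hash deployed artifacts at
# release time, re-hash later, and flag anything that changed in place.
# The in-memory "artifacts" are stand-ins for files on a deployed host.

import hashlib

def fingerprint(artifacts: dict) -> dict:
    """Map artifact name -> SHA-256 hex digest of its contents."""
    return {name: hashlib.sha256(data).hexdigest() for name, data in artifacts.items()}

def detect_drift(baseline: dict, current: dict) -> list:
    """Return names of artifacts whose contents changed since release."""
    return [name for name, digest in current.items() if baseline.get(name) != digest]

release = {"app.py": b"print('v1')"}
baseline = fingerprint(release)

# Later, the running system's code has been modified in place:
running = {"app.py": b"print('v1, patched by generated code')"}
print(detect_drift(baseline, fingerprint(running)))  # the modified artifact
```

A drift alert does not say whether the change was malicious, but it restores the human oversight that a closed-loop pipeline removes: someone is told the deployed code is no longer what was released.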

Connecting all the dots! 

[Figure: Connecting CI/CD, testing, and post-deployment monitoring]

Source: Kovair

Implications for Business Leaders

Achieving the software agility desired with ChatGPT and Generative AI in software development requires new, holistic thinking that combines shift-left testing, post-deployment monitoring, and closed-loop automation. This will require new processes, methods, and tools supporting new workflows, analytics, and dashboards, along with orchestration solutions, leading to converged AIOps and DevOps.

With the above, engineers will be able to build app-awareness and security-awareness into a common workflow engine, using a common software bus to platforms supporting agile applications across multi-clouds. By leveraging a common data lake for analytics, newer AIOps, and closed-loop automation, with common dashboards supporting predictive analytics, newer efficient services such as intent-based solutions can emerge with ChatGPT and Generative AI in general.

— Akshay Sharma

Advisor at Lionfish Tech Advisors


Aladdin's lamp image by macrovector_official on Freepik