The security risks of not adopting AI and secure open-source
Defence needs to take advantage of AI-assisted software development and secure open source.
Modern software supply chains are vast, interdependent and vulnerable because of their reliance on open-source software. With AI-enhanced adversaries now able to search, modify and weaponise open-source software at unprecedented speed, the risk is accelerating. Security researchers have already demonstrated that AI can be used to insert subtle vulnerabilities or malware into software and mask the threat well enough to evade standard security scanners. In controlled environments, AI has been used to create and hide backdoors in open-source libraries in ways that mimic legitimate contributions.
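To make that threat concrete, consider the kind of change such an attack depends on. The Python sketch below is a constructed illustration, not drawn from any reported incident: a one-line 'simplification' that passes every functional test yet reintroduces a timing side channel an attacker can use to recover a secret token byte by byte.

```python
import hmac

# A maintainer's original, constant-time token check.
def verify_token(expected: bytes, supplied: bytes) -> bool:
    return hmac.compare_digest(expected, supplied)

# The "tidied up" version a malicious contribution might offer.
# It behaves identically in unit tests, but '==' short-circuits on the
# first mismatched byte, so response timing leaks how much of the
# supplied token is correct: a classic, scanner-evading weakness.
def verify_token_simplified(expected: bytes, supplied: bytes) -> bool:
    return expected == supplied
```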
On the defensive side, however, agentic AI models can also continuously analyse code, detect potential weaknesses or vulnerabilities and propose remediations far faster than human teams can manage. AI-assisted pipelines offer a way to scale assurance while reducing human error, enabling engineers to focus on the security decisions that really matter. In an environment where exploitation cycles are compressing from months to days, or even hours, the security case for AI-assisted software development should be compelling.
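What such an assurance step might look like inside a pipeline is sketched below. Everything here is hypothetical: query_model stands in for whichever approved LLM endpoint an organisation uses, and the JSON contract is invented for illustration. The point is the shape of the control: the model flags, the pipeline gates, and a human adjudicates.

```python
import json
import sys

def query_model(prompt: str) -> str:
    """Stand-in for an approved LLM endpoint (hypothetical).

    Returns a JSON list of findings; stubbed so the sketch runs.
    """
    return json.dumps([
        {"severity": "high", "summary": "possible hard-coded credential"},
    ])

def review_diff(diff: str) -> list[dict]:
    prompt = f"Review this diff for security weaknesses:\n{diff}"
    return json.loads(query_model(prompt))

if __name__ == "__main__":
    diff = sys.stdin.read() if not sys.stdin.isatty() else "example diff"
    findings = review_diff(diff)
    high = [f for f in findings if f["severity"] == "high"]
    for finding in high:
        print(f"BLOCKED: {finding['summary']}")
    # Fail the pipeline stage so a human engineer must make the call.
    sys.exit(1 if high else 0)
```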
As threat actors use AI at scale to accelerate their supply chain attacks, Defence needs to take advantage of AI-assisted software development and secure open source, not only to write quality code at the speed of relevance but also to minimise, detect and mitigate these new forms of attack.
AI-assisted software development, however, has had a lot of bad press because it’s often confused with vibe coding. Vibe coding is the practice of prompting generative AI (often by an enthusiastic amateur rather than a software engineer) to generate code and accepting the output without the rigour of design, testing or verification. On the other hand, AI-assisted software development embeds AI within a structured development pipeline where skilled engineers remain in control, using the technology to enhance quality, security and speed rather than replace judgement. Across industry, AI-assisted software development is now becoming the norm, and the adoption of secure open-source software is further reducing these supply chain threats.
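The difference is easiest to see in miniature. In the hedged sketch below, the function plays the role of AI-generated output (its body is invented for illustration); what makes the workflow AI-assisted rather than vibe coding is that nothing is accepted until it survives tests an engineer wrote and understands.

```python
import re

# Role-play: imagine this function body came back from a code assistant.
def suggested_parse_semver(tag: str) -> tuple[int, int, int]:
    match = re.fullmatch(r"v?(\d+)\.(\d+)\.(\d+)", tag)
    if match is None:
        raise ValueError(f"not a semantic version: {tag!r}")
    major, minor, patch = match.groups()
    return int(major), int(minor), int(patch)

# Engineer-authored acceptance tests: the suggestion is rejected unless
# it handles the cases the engineer cares about, including bad input.
def test_suggestion() -> None:
    assert suggested_parse_semver("v1.2.3") == (1, 2, 3)
    assert suggested_parse_semver("10.0.7") == (10, 0, 7)
    try:
        suggested_parse_semver("1.2")  # incomplete version string
    except ValueError:
        pass
    else:
        raise AssertionError("malformed input must be rejected")

if __name__ == "__main__":
    test_suggestion()
    print("suggestion passed the gate")
```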
By contrast, most of Australian Defence is still working towards DevSecOps maturity – a worthwhile goal a decade ago but now only a starting point. Chasing yesterday’s technologies is understandable in a risk-averse environment, but it leaves little bandwidth to consider where others in the AUKUS alliance and the wider defence sector have already moved. Defence doesn’t need to replace its engineers or compromise its standards. It needs to augment its people with tools that help them keep pace with the scale and speed of modern software development while protecting the mission and its capabilities.
Our research on the socio-technical aspects of software development shows that trustworthiness comes not from technology alone but from the behaviours, processes and governance frameworks that surround it. AI-assisted software development succeeds when the AI is embedded in clear, structured pipelines; when human engineers remain in control of decisions; when outputs are testable, auditable and transparent; when guardrails and verification steps are well-defined; and when developers are trained in both the technology and the underlying engineering principles. This is the opposite of vibe coding. AI-assisted software development is not a leap into the unknown but a logical extension of modern engineering practice.
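Well-defined guardrails can be very plain machinery. The sketch below is a minimal, assumed example of one such guardrail: Python’s standard ast module is used to reject any proposed change that calls constructs the project has banned, before that change ever reaches human review.

```python
import ast

# Constructs this (hypothetical) project refuses to accept from any
# contributor, human or AI, without an explicit security review.
BANNED_CALLS = {"eval", "exec", "compile"}

def guardrail_violations(source: str) -> list[str]:
    """Return a description of every banned call in the proposed code."""
    violations = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in BANNED_CALLS):
            violations.append(f"line {node.lineno}: call to {node.func.id}()")
    return violations

proposed = "result = eval(user_input)\n"
for violation in guardrail_violations(proposed):
    print("REJECTED:", violation)
```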
The most important step Defence can take is to adopt AI-assisted software development across its enterprise. Doing so will require selecting secure AI-enabled development platforms and ensuring that developers are trained to integrate these tools effectively into their workflows. It also involves establishing clear governance for the use of large language models, putting appropriate guardrails in place and piloting agentic code-scanning tools within a controlled Defence program to demonstrate their value safely. As AI takes on more routine analytical work, human expertise can be redirected toward the high-judgement security and engineering decisions where it adds the greatest value. This approach reduces reliance on large teams of scarce security engineers by augmenting their work through automation. Taken together, these steps provide a safe and manageable entry point to AI-assisted software development, avoiding the risks of ungoverned AI use while delivering immediate security and productivity benefits.
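That controlled-pilot pattern can itself be sketched in a few lines. Here scan and propose_fix are stand-ins for whatever agentic tooling a pilot evaluates (both are stubbed and hypothetical); the governance point is structural: the agent proposes, an engineer decides, and every decision is logged.

```python
from datetime import datetime, timezone

def scan(path: str) -> list[str]:
    # Stand-in for an agentic scanner (hypothetical); stubbed output.
    return ["unpinned dependency in requirements.txt"]

def propose_fix(finding: str) -> str:
    # Stand-in for an agent-drafted remediation (hypothetical).
    return f"pin the dependency flagged by: {finding}"

def pilot_review(path: str) -> None:
    for finding in scan(path):
        proposal = propose_fix(finding)
        decision = input(f"Apply '{proposal}'? [y/N] ").strip().lower()
        stamp = datetime.now(timezone.utc).isoformat()
        # Nothing is applied automatically; the engineer's call is
        # recorded so the pilot produces an auditable trail.
        print(f"{stamp} finding={finding!r} approved={decision == 'y'}")

if __name__ == "__main__":
    pilot_review("example-repo")
```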
The opportunity for Defence is significant. AI-assisted development with secure open-source software can strengthen sovereign capability, reduce workforce pressure, improve security and shorten delivery times. With the right governance, training and guardrails it is entirely within reach.
But the window is narrowing. Adversaries are already using AI to exploit software vulnerabilities at scale, and Defence must accelerate its own adoption of technologies such as AI and secure open-source software to counter those threats. If Defence doesn’t take this step soon, Australia risks falling behind not just in capability but in the ability to secure its own digital infrastructure against these increasing threats.
AI-assisted software development is not a threat to be feared – it is a tool to be tested, trusted and adopted.