Your AI tools are no longer just answering questions. They are taking action. They are booking meetings, writing code, sending emails, querying databases, and making decisions on your behalf, often without you even knowing. The agentic AI security risks Australia businesses face are real, significant, and something the Australian Signals Directorate is now treating as a national priority.
In May 2026, the ASD’s Australian Cyber Security Centre (ACSC) published joint guidance titled “Careful Adoption of Agentic AI Services”, co-authored with cybersecurity agencies from the United States, Canada, the United Kingdom, and New Zealand. This is not theoretical. It is a coordinated, international warning that businesses using AI tools need to take seriously.
This post explains what agentic AI is, what makes it different, what risks it introduces, and what your business should be doing about it right now.
What Is Agentic AI?
Most people are familiar with generative AI: tools like ChatGPT that respond to prompts, produce content, and answer questions. Generative AI generates output for humans to review and act on.
Agentic AI is different. It does not just produce output. It takes action.
An agentic AI system uses a large language model (LLM) to interpret a goal, reason through a sequence of steps, and execute those steps using real tools, real systems, and real data, without needing a human to approve each individual action along the way.
A Simple Way to Think About It
Imagine telling an employee: “Handle the new client onboarding for today.” A traditional AI might produce a checklist. An agentic AI might actually log into your CRM, create the client record, send the welcome email, generate the contract, add calendar invites, and notify your operations team, all without you touching a single button.
That level of capability is genuinely useful. It is also genuinely risky if the guardrails are not right.
What Makes Agentic AI Different From Regular AI?
Agentic AI systems typically include:
- A core LLM to interpret instructions and reason through problems
- Access to external tools such as email, calendars, file systems, or web browsers
- The ability to store and retrieve memory across sessions
- Planning workflows that allow the system to break goals into sequential steps
- The ability to create sub-agents to handle specific parts of a task
Because these systems connect to real software and take real actions, the consequences of something going wrong are far more serious than a chatbot giving a bad answer.
Understanding the Agentic AI Security Risks Australia Businesses Face
The ASD’s guidance identifies several categories of risk that apply directly to any Australian business using or planning to use agentic AI tools. Understanding these is the first step to managing them.
Inherited Vulnerabilities From the Underlying AI Model
Every agentic AI system is built on a foundation of an LLM. Whatever vulnerabilities exist in that model, the agentic system inherits them.
One of the most serious is prompt injection. This is where malicious instructions are hidden inside content the AI is processing, such as a webpage it visits, a document it reads, or an email it scans. The AI may interpret those instructions as legitimate and act on them, even if they direct it to do something harmful.
For example, a malicious actor could embed instructions in a phishing email designed to be processed by your AI email assistant. If that assistant has permission to download attachments or forward emails, it could be manipulated into doing so.
A Wider Attack Surface
Traditional software has a defined set of entry points that security teams protect. Agentic AI systems change that equation entirely.
Because these systems connect to a range of tools, data sources, and external services, each connection becomes a potential entry point for attackers. A web search integration. A file storage connection. An API link to a third-party platform. Each of these widens the attack surface and introduces potential vulnerabilities that may not exist in your core systems at all.
The more tools an agentic system has access to, the more ways an attacker can exploit it.
Privilege and Identity Risks
Agentic AI systems need permissions to do their work. They need access to files, systems, and services. The problem is that many implementations give AI agents broader permissions than they actually need, simply because it makes setup easier or because the full scope of what the agent might do was not properly considered.
When a highly privileged AI agent is compromised, the impact is amplified by every permission it holds. An agent with read and write access to your entire document library could expose or destroy far more than one with access only to the files relevant to its task.
The ASD’s guidance is clear on this point: agentic AI should only ever be granted the minimum access needed to complete its specific task, and nothing more.
Unpredictable and Emergent Behaviour
One of the most unsettling aspects of agentic AI is that it can behave in ways that were not anticipated during design. Because these systems reason through problems rather than following rigid scripts, they can arrive at decisions that seem logical within their own framework but are harmful, unusual, or simply wrong from a business or security perspective.
This is sometimes called emergent behaviour, and it is genuinely difficult to predict. A system designed to reduce costs might find creative ways to do so that violate policy. A system designed to maximise response rates might engage in communication strategies that damage client relationships.
Accountability Gaps
When a human employee makes a decision, there is a clear accountability chain. When an agentic AI makes a decision across dozens of automated steps, tracing exactly what happened, why it happened, and who or what is responsible becomes significantly harder.
This opacity matters both for internal governance and for regulatory compliance. In industries subject to data privacy laws, financial regulations, or professional standards, the inability to explain an AI’s decision trail is a serious problem.
What the ASD Is Saying: Key Recommendations
The “Careful Adoption of Agentic AI Services” guidance is not a document designed to scare businesses away from AI. It is a practical framework for adopting these tools responsibly. Here is a summary of the core recommendations.
Align AI Risk With Your Existing Security Model
Before deploying any agentic AI tool, organisations should map its risks against the security controls already in place. Agentic AI does not exist in isolation. It interacts with your existing infrastructure, data, and processes, and those interactions need to be understood in the context of your existing risk posture.
Apply the Principle of Least Privilege
Every agentic AI system should have the minimum permissions required to complete its task. This means:
- Defining the specific data, systems, and tools the AI needs access to
- Restricting access to everything else, including sensitive information and critical systems
- Regularly reviewing permissions as the AI’s scope evolves
Start Small and Build Progressively
The guidance recommends deploying agentic AI starting with low-risk, well-defined tasks. As confidence in the system’s behaviour builds, and as monitoring and controls mature, the scope can be expanded. This approach reduces the blast radius of any failure or compromise and gives your team time to understand how the AI behaves before it is handling anything mission-critical.
Maintain Human Oversight
Automation does not mean abandonment. Agentic AI systems should have meaningful human checkpoints, especially for high-stakes decisions or actions with significant consequences. The ASD recommends designing these checkpoints into the system from the beginning, not as an afterthought.
Implement Strong Identity Management
Agentic AI agents should have verified, managed identities just like any other system or user. This includes:
- Assigning unique identities to each agent
- Using strong authentication between agents and the systems they connect to
- Logging all agent activity so there is a complete record of what was done and when
Conduct Ongoing Monitoring and Threat Modelling
Security is not a one-time exercise. The guidance recommends continuous monitoring of agentic AI behaviour, regular threat modelling to identify new risks as the system evolves, and periodic security assessments.
Why This Matters for Australian Businesses
Australia is not insulated from the global AI adoption wave. Australian businesses across every sector are deploying AI tools, often rapidly and with limited security review. The agentic AI security risks Australia faces are compounded by the speed of adoption and the gap between what these tools can do and what most businesses understand about their security implications.
Professional services firms, in particular, hold some of the most sensitive data in the economy: client financial records, legal documents, health information, business strategies. An agentic AI system with access to that data and the ability to act on it is not just a productivity tool. It is a high-value target.
The Regulatory Dimension
Australia’s Privacy Act imposes obligations on how organisations handle personal information. If an agentic AI system accesses, processes, or transmits personal data, those obligations apply. A breach caused by an AI agent carries the same legal weight as any other breach, and the consequences can be significant.
The ASD guidance also reflects a broader government position that organisations deploying AI are responsible for its behaviour. Ignorance of the risks is not a defence.
What You Should Do Right Now
If your business is using or considering agentic AI tools, here are the immediate steps to take.
Conduct a Current-State Audit
Start by identifying every AI tool currently in use across your organisation. Not just the ones IT knows about. Many teams are independently adopting AI tools without formal approval or security review.
For each tool, assess:
- What data does it have access to?
- What actions can it take on behalf of users or the organisation?
- Who approved its deployment and on what basis?
- What logging and monitoring is in place?
Review Permissions and Access Controls
For any tool that has the ability to take action, review its permissions. Remove access to anything it does not need. Apply least privilege as a non-negotiable standard.
Establish a Policy for AI Tool Adoption
If one does not exist, create a formal process for evaluating and approving AI tools before they are deployed. This should include a security review, a data access assessment, and a documented risk acceptance sign-off.
Brief Your Leadership Team
The risks associated with agentic AI are not just technical. They have business, legal, and reputational dimensions. Your executive team needs to understand what these tools are, what they can do, and what the organisation’s exposure is.
Engage a Cybersecurity Partner
For most businesses, keeping pace with the evolving AI threat landscape is not something an internal team can do alone. Partnering with a cybersecurity specialist who understands both the technical and strategic dimensions of AI risk is increasingly important.
At Otto IT, our Managed Cybersecurity Services are designed to help businesses like yours navigate exactly these kinds of emerging challenges, from identifying your exposure to implementing the controls that reduce your risk.
General Advice for a Safer AI Future
Beyond the immediate steps, here are some broader principles to keep in mind as AI tools continue to evolve.
Treat AI like any other vendor. Before deploying any agentic AI tool, apply the same due diligence you would to any third-party software vendor. Review their security practices, data handling policies, and breach notification procedures.
Do not assume AI is secure by default. Many AI tools are built with usability as the primary design goal. Security is often an afterthought. It is your responsibility to assess what controls are in place and what additional controls you need.
Keep humans in the loop for critical decisions. Automation is valuable, but some decisions should always have a human sign-off. Define those decisions explicitly and configure your AI tools accordingly.
Update your incident response plan. An AI-related incident has different characteristics to a traditional breach. Your incident response plan should account for how you would identify, contain, and investigate an AI-involved security event.
Stay informed. The ASD publishes ongoing guidance on AI and cybersecurity. Subscribing to ACSC alerts and staying engaged with your cybersecurity partner will help you stay ahead of emerging risks.
Ready to Strengthen Your Security Posture?
The emergence of agentic AI is not a reason to avoid AI adoption. It is a reason to adopt it thoughtfully, with the right controls in place. Businesses that get this right will benefit from genuine productivity gains without exposing themselves to unnecessary risk. Those that move fast without adequate controls are taking a gamble with their data, their clients, and their reputation.
If you are unsure where your business stands on agentic AI security risks Australia, start with a conversation. Our team at Otto IT can help you assess your current exposure and build a roadmap to confident, secure AI adoption.
Get in touch with our team today to discuss how we can support your cybersecurity strategy.
Frequently Asked Questions
What is agentic AI and how is it different from ChatGPT?
ChatGPT and similar tools are generative AI: they produce text, images, or other content based on a prompt, and a human decides what to do with that output. Agentic AI goes further by taking action. It can connect to real systems, execute tasks, and make sequential decisions without requiring human approval at each step. The key difference is that agentic AI acts, not just generates.
Why is the ASD involved in AI security guidance?
The Australian Signals Directorate is Australia’s national cybersecurity authority. As AI tools, particularly agentic AI, become more widely adopted across government, critical infrastructure, and private sector organisations, the ASD has a mandate to identify and communicate the associated security risks. The “Careful Adoption of Agentic AI Services” guidance reflects their assessment that these tools introduce risks that organisations need to actively manage.
Is it safe to use agentic AI tools in my business?
Agentic AI can be used safely, but it requires deliberate risk management. The ASD’s guidance recommends starting with low-risk tasks, applying least-privilege access controls, maintaining human oversight for important decisions, and conducting ongoing monitoring. With the right controls in place, the benefits of these tools can be realised without unacceptable risk.
What is prompt injection and why does it matter?
Prompt injection is a type of attack where malicious instructions are hidden inside content that an AI system processes, such as an email, document, or webpage. The AI may interpret those hidden instructions as legitimate commands and act on them. For agentic AI systems that have real-world capabilities, a successful prompt injection attack can have serious consequences, including data exfiltration, unauthorised communications, or system changes.
What does least privilege mean in the context of AI?
Least privilege is the security principle of giving any user, system, or application only the minimum access needed to perform its function. Applied to agentic AI, it means restricting an AI agent’s access to only the data, tools, and systems directly relevant to the task it is performing. Broad or unrestricted access significantly amplifies the impact of any compromise or unexpected behaviour.
How can Otto IT help with agentic AI security risks?
Otto IT provides Managed Cybersecurity Services tailored to Australian businesses, including risk assessments, security policy development, and ongoing monitoring. Our team can help you evaluate your current AI tool landscape, identify gaps in your controls, and implement the safeguards recommended by the ASD. Reach out to our team to start the conversation.
Does the ASD guidance apply to small businesses?
The guidance is primarily aimed at government, critical infrastructure, and industry stakeholders. However, the principles apply to any organisation using agentic AI tools. Small and medium businesses are increasingly using AI tools with agentic capabilities, often without realising it. The same risks apply regardless of business size, and the recommended controls are scalable to any organisation.
managed it support articles
Related Blog Articles
Discover more insights to optimise your business with the latest IT trends and best practices. Stay ahead of the curve by learning how to leverage cutting-edge technology for success. Explore expert advice and valuable guidance to navigate the evolving world of IT solutions