"Blame the intern" is not a security strategy for Agent AI: Balancing AI autonomy and safety
Hi everyone. This is Jon. AI technology is rapidly evolving, and one area that has recently been gaining attention is "agentic AI." This refers to AI that is not simply a chatbot, but performs tasks autonomously based on user instructions. For example, it can automatically send emails or perform data analysis. However, this convenient AI is creating new risks in terms of cybersecurity. The InfoWorld article "'Blame the intern' is not an agentic AI security strategy" points out that blaming human error, such as "blaming the intern," is not a true security strategy. This means that we need a solid strategy to ensure security while leveraging AI's autonomy. In this article, I will explain the security challenges of agentic AI in an easy-to-understand manner, based on the latest trends.
Recommended for those who want to start automating with no coding!
With Make.com (formerly Integromat)...
📌 Integrate major tools like email, Slack, Google Sheets, and Notion all at once
📌 Automate complex tasks with just drag and drop
📌 A free plan is also available, so you can try it out for yourself.
If you're interested, here's the details:
What is Make.com (formerly Integromat)? How to Use It, Pricing, Reviews, and Latest Information [2025 Edition]
What is Agentic AI? A simple explanation for beginners
Let's start by reviewing the basics of Agentic AI. Agentic AI is a technology in which AI behaves like an "agent." While traditional AI simply responds to user queries, Agentic AI receives instructions, then makes plans and takes action on its own. For example, if you ask it to book a trip, it will automatically search for flights and book a hotel. As of 2025, this technology is attracting particular attention in the field of cybersecurity, and CrowdStrike's blog, "The Dawn of Agentic SOC: Reimagining AI Era Cybersecurity" (published September 26, 2025), describes it as a technology that will transform security operations for the AI era.
However, while this autonomy brings convenience, it also increases security risks. Because AI makes its own decisions and acts on its own, it is more susceptible to manipulation by malicious attackers. There has also been active discussion on posts on X (formerly Twitter) about vulnerabilities in Agentic AI, with an attack technique called prompt injection becoming a hot topic. This is a method of hijacking an AI's behavior by mixing malicious instructions into the AI's input prompts. In a post by Vercel (June 9, 2025), he advises limiting access to tools and not trusting their output to prevent such attacks.
The Latest Trends and Challenges in Agent AI Security in 2025
As we enter 2025, Agentic AI is beginning to be fully utilized in the field of cybersecurity. According to an SC Media article, "Cybersecurity in 2025: Agentic AI to change enterprise security and business operations in a year ahead" (published January 9, 2025), experts predict that AI will not just be a supplementary tool, but will take the lead in coding and threat detection. Additionally, a WatchGuard blog post, "Agentic AI is changing cybersecurity" (published September 27, 2025), describes a future in which AI-driven defense systems will combat tireless "AI hackers."
However, some issues have also become apparent. The relevant InfoWorld article (published on September 30, 2025) criticizes the "Blame the Intern" approach. This refers to the approach of blaming junior staff or human error when a security incident occurs, but argues that this is not a fundamental solution. In fact, the key to the security of Agentic AI lies in the design and governance of the AI itself; simply blaming human error is insufficient. An old post by Henning Kilset (October 4, 2021) on X has also attracted renewed attention, pointing out that the intern's mistake was a problem with organizational processes.
Specific security threats include:
- Plan Injection: An attack that contaminates the internal planning of an AI. This attack was introduced in a post by Sentient (July 13, 2025) and in collaboration with Princeton researchers, and has been shown to have a high success rate of 46-63% in bypassing existing defenses. Secure memory and semantic checks are recommended as countermeasures.
- Prompt Injection: Manipulating AI prompts to cause malicious behavior. In a post by Andy Zou (July 29, 2025), he reported that 44 AI agents were tested, resulting in 62,000 breaches out of 1.8 million attempts. Examples of email leaks and financial losses were cited.
- Shadow AI Risk: Unauthorized use of AI will increase within the enterprise, creating security holes. In a post by Shah Sheikh (September 30, 2025), he introduces the Entro Security platform as a tool to prevent this.
These trends mean that security strategies for 2025 will move toward limiting AI autonomy while increasing oversight. A CSO Online article, "Agentic AI in IT security: Where expectations meet reality" (published 14 hours ago), points out that while AI automates repetitive tasks, issues of reliability, affordability, and oversight remain. A WebProNews article, "Agentic AI in IT Security: Automation Promises and Real Challenges" (published 2 days ago), also warns of new vulnerabilities such as false positives and prompt injection.
By the way, I recommend Gamma as a tool that brings AI closer to you. Gamma is a new standard tool that uses AI to instantly create documents, slides, and websites. It's easy to use even for beginners, and perfect for improving work efficiency. Check out this article for more details:What is Gamma? A new standard for instantly creating documents, slides, and websites with AI.
How should businesses and individuals respond? Practical advice
For practical tips to strengthen the security of agentic AI, consider the following: The ScienceDirect paper, "Transforming cybersecurity with agentic AI to combat emerging cyber threats" (published July 1, 2025), proposes using AI to enhance threat response.
- Restrict tool access: Minimize the data that AI can access and require human confirmation for critical operations.
- Regular audits: A system has been introduced to check the AI's behavioral logs and detect abnormalities.
- Education and process improvement: Share AI risks across the organization and prioritize systemic measures over "blame the intern."
- Utilize the latest tools: Consider a platform that integrates data lakes and AI security, such as the evolved version of Microsoft Sentinel (announced September 30, 2025).
A post by Sigil AI on X (May 30, 2025) emphasizes the importance of a decentralized AI security layer and warns of the risk that one bad link could cause everything to be lost. A post by Cloud Security Podcast (September 24, 2025) cites an example of data leakage due to indirect prompt injection and points out the danger of autonomous actions without user confirmation.
Jon's Summary
While Agentic AI will revolutionize security in 2025, it also creates new attack vectors. Rather than taking a "Blame the Intern" approach, it's important to incorporate security into AI from the design stage. When using AI tools, make sure you get information from trusted sources and use them with safety as your number one priority. The future of AI is bright, but using it wisely is key.
If you want to streamline your documentation with AI, try Gamma:What is Gamma? A new standard for instantly creating documents, slides, and websites with AI.
Reference sources
- InfoWorld: 'Blame the intern' is not an agentic AI security strategy (2025-09-30) – https://www.infoworld.com/article/4064222/blame-the-intern-is-not-an-agentic-ai-security-strategy.html
- SC Media: Cybersecurity in 2025: Agentic AI to change enterprise security and business operations in year ahead (2025-01-09)
- CrowdStrike Blog: The Dawn of Agentic SOC: Reimagining AI Era Cybersecurity (2025-09-26)
- WatchGuard Blog: Agentic AI is changing cybersecurity (2025-09-27)
- ScienceDirect: Transforming cybersecurity with agentic AI to combat emerging cyber threats (2025-07-01)
- CSO Online: Agentic AI in IT security: Where expectations meet reality (14 hours ago)
- WebProNews: Agentic AI in IT Security: Automation Promises and Real Challenges (2 days ago)
- X Posts: Various discussions on agentic AI vulnerabilities (eg, Sentient, Andy Zou, Vercel, etc., 2025 posts)
