Image
Kleiner weißer Roboter

05.05.2026 | News Anyone who hires AI agents must also manage them

When introducing AI agents, one should not think of it as a chatbot project, but rather as onboarding a new, highly privileged employee. Otherwise, digital assistants can become uncontrollable actors. A commentary by IntraFind CEO Franz Kögl.

Companies are currently bringing digital employees into their organizations—and this is a tremendous opportunity. AI agents can accelerate processes, prepare decisions, and noticeably lighten the load on teams. But precisely because they have such a significant impact, they also need to be managed. After all, AI agents are far more than just tools: they are always-available digital employees with system privileges. That is exactly what makes them so productive—but also a security concern. If manipulated, they can influence operational business processes, such as initiating payments, changing permissions, or accessing sensitive data.

The fact that agents are controlled through human language creates a gateway. So-called prompt injections—instructions inserted into emails, tickets, websites, or documents—can cause agents to bypass rules, disclose sensitive data, or misuse tools. Agents don’t just read about something; they are often guided by what they read. This is particularly problematic when agents have access to multiple systems. In such cases, a single manipulation can trigger several downstream actions.

However, risks do not arise solely from attacks. It is enough if an agent is permitted to open the wrong doors. One example of this is knowledge and research agents that access internal documents. Here, security hinges on a seemingly trivial question—namely, whether the system actually uses every document strictly within the existing permissions. The responses must contain only what the user is authorized to see; otherwise, there is a risk that the AI will willingly divulge secrets. 

Powerful AI agents take on repetitive tasks, accelerate processes, and free up employees for more value-added activities. However, performance alone is not enough; AI agents must also be controllable. Companies must be able to understand and trace what an agent is doing, what data it accesses, what tools it uses, and how it arrives at its decisions. In critical processes, AI agents must not be a black box. They require clear permissions, fixed boundaries, controlled interfaces, and comprehensive logging. And companies should be able to stop them at any time in an emergency.

Those introducing agent-based AI should not think of it as a chatbot project, but rather as onboarding a new, highly privileged employee—and integrate them securely into business processes: with minimal access rights, clear responsibilities, continuous logging, and defined shutdown and escalation paths. Ultimately, the success of AI agents depends not only on their intelligence, but also on how well we integrate them into our processes and use them responsibly.

The author

Franz Kögl
CEO
Franz Kögl co-founded IntraFind Software AG with Bernhard Messer in 2000. Together, they built the company into an established provider of enterprise search software. He regularly gives talks and writes specialist articles on topics such as artificial intelligence, machine learning, and cognitive search.
Image
Autor Franz Kögl IntraFind