
24.03.2026 | News
Four key factors for safe AI use
AI has become part of everyday working life – unfortunately, not always through official channels or with clear guidelines for use. This creates data protection and security risks that companies and government agencies must address if they want to avoid data leaks, compliance violations, or data breaches. These are the most important measures:
Key Factor #1: Ensuring technological security for AI
For generative AI to be used productively in companies and government agencies, its use must be designed to be technologically secure. This applies regardless of whether it involves AI chats based on internal data or general GenAI applications such as summaries, translations, or research tasks. Companies should therefore provide their employees with AI tools that enable these use cases securely, in a controlled manner, and without data leakage, while also being operated on-premises or in a controlled private cloud environment. In the case of chats involving internal data, these systems should ensure that only authorized information is processed and returned with verifiable source references. In addition, organizations should prevent the use of unauthorized external AI services, for example through web gateways and Cloud Access Security Brokers (CASB), as well as by restricting installation rights on end devices. This helps with the critical task of preventing shadow AI, meaning the use of unapproved AI tools by employees.
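To make the access-control requirement more concrete, the following Python snippet is a minimal, hypothetical sketch of how an internal AI chat could filter retrieved documents against a user's group memberships and return verifiable source references. The Document class, the retrieve_for_user function, and the group names are purely illustrative assumptions and do not refer to any specific product.

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    doc_id: str
    text: str
    source: str                                        # source reference, e.g. a file path or URL
    allowed_groups: set = field(default_factory=set)   # access rights attached at indexing time


def retrieve_for_user(query: str, user_groups: set, index: list, top_k: int = 3) -> list:
    """Return only documents the user may see, each together with its source reference."""
    # Access rights are enforced before ranking, so unauthorized content
    # never reaches the language model or the answer.
    permitted = [d for d in index if d.allowed_groups & user_groups]
    # Naive keyword overlap stands in for a real vector or full-text search.
    scored = sorted(
        permitted,
        key=lambda d: sum(w in d.text.lower() for w in query.lower().split()),
        reverse=True,
    )
    return [{"text": d.text, "source": d.source} for d in scored[:top_k]]


if __name__ == "__main__":
    index = [
        Document("1", "Travel expense policy: the daily allowance is 28 EUR.", "intranet/hr/travel.pdf", {"staff"}),
        Document("2", "Draft memo on the confidential project Falcon.", "intranet/board/falcon.docx", {"board"}),
    ]
    # A regular employee only receives documents their groups permit, plus the source reference.
    print(retrieve_for_user("daily travel allowance", {"staff"}, index))
```

The point of the sketch is the ordering: permissions are checked before any ranking or prompt construction, so content a user is not cleared for can never appear in a generated answer, and every returned passage carries the reference needed to verify it.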
Key Factor #2: Creating the right framework for AI
To ensure secure AI use, companies and government agencies first need clear, binding rules for using AI tools and handling confidential data. Companies and government agencies should also appoint AI officers who, together with data protection officers, enforce structured governance and establish official reporting channels for errors, risks, and misconduct.
Key Factor #3: Building employee AI competence
Once the foundation for AI has been laid in the form of concrete rules and guidelines, companies should work to strengthen their employees’ ability to work with AI. This begins with imparting a basic understanding of how AI works and of the concepts involved, as well as raising employees’ awareness of the requirements of the AI Regulation. In practical training sessions, employees should also learn which AI tools they will be working with, how to use them, and what data they are permitted to enter into those tools. Ideally, trainers in these sessions will avoid IT jargon and will answer questions about system security and about what happens to the data entered in plain language that even IT novices can understand. Awareness training also promotes critical and analytical thinking and helps employees identify warning signs of hallucinations, such as unverifiable figures or data. For companies and government agencies, the following applies: the higher the level of competence among the workforce, the lower the risk of misuse and legal violations – and the greater the chance of targeted, efficient use.
Key Factor #4: Leading by example
Anyone who wants to convince employees to use AI safely should lead by example. This includes, among other things, using only the AI tools officially approved by the company. Managers should also complete the same training as their employees. Combined with good, transparent communication about the strengths and weaknesses of artificial intelligence, this increases general acceptance of the technology. One way to lower potential barriers to AI use among some employees is to designate so-called “power users.” These individuals have already gained experience with AI and can directly assist colleagues as they get started, answer questions, and help build trust and confidence in the new technology among their peers.
“Companies and government agencies looking to implement AI within a secure framework should consider lightweight AI assistants—that is, chatbots or agents designed for specific tasks—as a pilot project,” emphasizes Franz Kögl, a member of the executive board at IntraFind. “Such solutions can be deployed with relatively little effort, offer immediate value through easy access to information, and support employees in everyday tasks such as summarizing, translating, or generating text. It is important that the software seamlessly integrates both internal and external data sources, consistently enforces access rights, and does not disclose sensitive information to external parties in an uncontrolled manner. Supplementary GenAI features should be flexible and intuitive to use, while simultaneously meeting all security and compliance guidelines. In this way, organizations create a secure, productive framework for AI use while ensuring high-quality results.”
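As a purely illustrative example of the last point – not disclosing sensitive information to external parties in an uncontrolled manner – the following Python sketch shows one possible pre-send guard that masks obviously sensitive patterns before a prompt leaves the organization. The guard_outbound function and the patterns are assumptions for illustration, not features of any particular product.

```python
import re

# Hypothetical patterns for data that must not leave the organization uncontrolled.
SENSITIVE_PATTERNS = {
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}


def guard_outbound(prompt: str) -> str:
    """Mask sensitive content before a prompt may be sent to an external GenAI service."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        # Masking keeps the request usable; raising an exception here would block it entirely.
        prompt = pattern.sub(f"[{label} removed]", prompt)
    return prompt


if __name__ == "__main__":
    text = "Summarize this invoice for max.mustermann@example.com, IBAN DE89370400440532013000."
    print(guard_outbound(text))
```

In practice such checks would sit in the web gateway or AI platform itself rather than in application code, but the principle is the same: sensitive content is detected and masked or blocked before any request reaches an external service.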