

17.02.2026 | Blog | Introducing GenAI in the Public Sector: secure and trusted
What employees need answers to first when it comes to GenAI
Before generative AI is actually used in everyday government work, employees often ask the same basic questions: Do we retain control over data and access? Can I trust the results – and who is responsible for them? And: Does this help me in my everyday work or does it create additional work? These questions arise not only with cloud offerings, but whenever a new system is perceived as a "black box."
Control over data and access: Employees want to know what information they are allowed to enter, where processing and storage take place, whether content is used for training, and who has administrative access. Without clear answers, the willingness to try the tool at all decreases.
Responsibility and quality of results: Even with AI, employees remain professionally and legally responsible. This means there is a correspondingly high level of concern about hallucinations, inaccurate or outdated statements – and the risk of unnoticed errors being carried over.
Confidence in policies and compliance: Many hesitate not so much out of rejection as out of fear of accidentally violating data protection, confidentiality, or internal guidelines. Without clear, simple rules, GenAI becomes a source of uncertainty rather than a relief.
Role and workplace: Where automation becomes possible, questions arise about task shifts, qualifications, and prospects. It is important to send a clear message: GenAI provides support – and creates space for more value-adding activities.
Suitability for everyday use: If its use requires switching between tools, time-consuming preparatory work, or additional documentation, its benefits quickly evaporate. Acceptance arises where GenAI fits seamlessly into processes and brings quick, visible relief.
Taking these points seriously is the starting point for successful implementation. The next section looks at how public authorities can build trust and motivation in concrete terms – technically, organizationally, and communicatively.
How public authorities motivate employees for GenAI and build trust
1. Implement data-sovereign AI
When generative AI is operated entirely within your own IT environment, without data leakage, employees' perceptions change and the barrier to adoption drops significantly. On-premises solutions operated in your own data center and sovereign AI models build trust – among employees, data protection officers, and staff councils.
2. Simple language instead of IT jargon
Instead of complicated IT terms, users should be given simple answers and practical examples – this increases their willingness to use the system. Project goals must be clearly communicated – ongoing dialogue ensures acceptance.
Employees want to understand:
- What data am I allowed to enter?
- How secure is the system?
- What happens to my entries?
3. Training sessions that provide confidence in using the system
This information can be conveyed in training sessions – one of the most effective levers for acceptance. Formats that combine two levels work well:
- legal and organizational guidelines as well as
- practical application training with real cases from the specialist department
The key here is to convey a sense of security. The simple phrase "You can't break anything" removes many people's inhibitions about taking the first step.
4. Managers as role models
When managers use GenAI themselves and talk openly about it, this has a stronger effect than any official communication. Role models signal that use is desirable, legitimate, and part of everyday work.
5. Use multipliers
There are always tech-savvy power users on the team who can help colleagues get started. Peer support lowers the inhibition threshold for asking questions and builds acceptance on an equal footing.
6. Make quick wins visible
Abstract promises of benefits are rarely convincing, but concrete successes are – for example, "20 minutes saved per case," "higher-quality texts," or "faster information retrieval." When employees see concrete benefits in their everyday work, even skeptics become curious about new tools.
These could be pilot projects for generative AI in the public sector
Clearly defined use cases are particularly suitable for pilot projects, e.g.:
- Creation and revision of notices, emails, statements
- Internal knowledge research on files, documents, and specialist systems
- Summaries of complex texts or legal bases
- AI-supported application processing
- Citizen service chatbot
It is important to have a manageable framework in which it is possible to try things out, learn, and adapt. It makes sense to start small – in a clearly defined pilot area with sovereign, data protection-compliant generative AI – and to scale up once employees feel confident and the authority has achieved clear benefits.
Conclusion: Acceptance comes from sovereign AI and good communication
In addition to secure technology, introducing GenAI requires transparent communication and a culture that encourages curiosity and learning from mistakes rather than one driven by concern. It is crucial that employees experience early on how GenAI actually improves their everyday work. Authorities that take concerns seriously, build trust, and introduce practical use cases step by step create the ideal conditions for a successful and sustainable rollout.
IntraFind as a partner for trustworthy generative AI
Are you planning an AI project? We can advise you on secure GenAI solutions for your agency and implement them with you.
The author
Franz Kögl
