
13.10.2025 | Blog

25 years of IntraFind: Where Enterprise Search meets Generative AI

The German enterprise search provider and AI specialist IntraFind Software AG has been on the market for 25 years. What began as an innovative start-up has grown into a leading provider of intelligent search and analytics solutions, used by organizations around the world. Christoph Goller, Director Research and one of the company’s first employees, looks back on an exciting technological journey.

The early years and some very good choices that shaped the journey

I’ve been working with IntraFind for almost 25 years, and it has truly been the adventure of a lifetime. When Franz Kögl and Bernhard Messer founded the company, I didn’t know them yet, but I met them shortly afterward. In the very beginning, it was just Franz, Bernhard, Andreas, and me. Andreas, who today is the technical lead of our Professional Services team, was still a student intern at the time, while Bernhard and I focused on consulting projects for various companies. I’m quite sure that the name IntraFind, as well as the name of our flagship product iFinder, came entirely from Franz - and well before the iPhone made the “i-” prefix famous. Today, that naming tradition continues with products like iAssistant and iHub.

Starting out with Enterprise Search

From the very beginning, the company’s focus was on Enterprise Search - a truly visionary decision in hindsight. As Google familiarized the world with internet search, we were providing organizations with powerful tools to search their internal content. The core mission of iFinder has always been to boost employee productivity. 

Another brilliant decision - this time from Bernhard - was to base our products on the open-source library Lucene (and later Elasticsearch / OpenSearch). This meant we didn’t need to develop our own search technology from scratch but could instead build on the strength of an active community. Both Bernhard and I became Lucene committers, which was not only a privilege but also a lot of fun. During those years, I was in close contact with Doug Cutting, the creator of Lucene and Hadoop. When it came to adopting open-source software, we were true pioneers - at least in Germany. At that time, many large German companies were still skeptical about open source. In fact, in the early years, we didn’t actively promote that our products were built on Lucene and open-source foundations.

Early on with AI and Deep Learning

I earned my PhD in AI and like to think of myself as a deep learning pioneer - perhaps just a bit ahead of my time. From the very beginning, my role at IntraFind has been to apply AI to improve the quality of search. The early 2000s, however, fell into what was known as an “AI winter” (not the first, and maybe not the last). Back then, even mentioning AI was almost taboo. In Germany, opportunities to work in AI outside publicly funded university research were scarce. Very few of my PhD colleagues managed to build careers in AI. Most ended up in uninspiring industry roles, while a fortunate few, like Sepp Hochreiter, went on to become renowned professors.

When I developed the first version of our Topic Finder for text classification, neural networks were not considered reliable enough for production use. To avoid scaring off customers, we would frame our work in terms of “optimization methods” or “support vector machines (SVMs)” rather than neural networks. Strictly speaking, in the simplest case, an SVM is identical to a neural network - so it wasn’t untrue. Even in those challenging times for AI, IntraFind continued to invest in the technology and successfully implemented AI-driven solutions. In that sense, we were true AI pioneers within the German industry.
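The equivalence is easy to make concrete: a linear SVM and a single-layer neural network compute the same decision function, sign(w · x + b); only the training objective differs (hinge loss with margin regularization for the SVM). The toy sketch below, with made-up data, trains that shared decision function by subgradient descent on the hinge loss - illustrative only, not Topic Finder code:

```python
# A linear SVM and a one-layer neural network share the same decision
# function: sign(w . x + b). Training the SVM by subgradient descent on
# the hinge loss is gradient training of that single layer.

def decision(w, b, x):
    """The 'network': a single linear unit, sign(w . x + b)."""
    s = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if s >= 0 else -1

def train_linear_svm(data, dim, epochs=200, lr=0.1, lam=0.01):
    """Subgradient descent on hinge loss plus L2 regularization."""
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, y in data:
            margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
            if margin < 1:  # inside the margin: hinge-loss subgradient step
                w = [wi - lr * (lam * wi - y * xi) for wi, xi in zip(w, x)]
                b = b + lr * y
            else:           # outside the margin: only the regularizer acts
                w = [wi - lr * lam * wi for wi in w]
    return w, b

# Hypothetical linearly separable toy data: (features, label).
data = [([2.0, 1.0], 1), ([1.5, 2.0], 1), ([-1.0, -1.5], -1), ([-2.0, -0.5], -1)]
w, b = train_linear_svm(data, dim=2)
print([decision(w, b, x) for x, _ in data])
```

Swapping the hinge loss for a logistic loss in the same code would turn the “SVM” into logistic regression - the single-layer architecture never changes.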

The rise and fall of tech hypes

Over the past 25 years, we’ve witnessed countless technological shifts - many of them driven by big hypes. At times, these trends even seemed to threaten our business model: when customers chased after hype technologies, there was naturally less budget left for our products.

One of the most striking examples was the rise of big data and NoSQL databases. For decades, SQL databases had been the unquestioned standard. Then suddenly, everyone was turning to NoSQL, full of big promises and even bigger expectations. But here’s the twist: search engines were the first NoSQL databases. They had always coexisted with SQL systems and were built to handle “big data” from the very beginning. In that sense, IntraFind was already a NoSQL pioneer - long before the hype even began.

I vividly remember a funny episode from the height of the Big Data hype. At the time, large companies were pouring money into massive Hadoop clusters. Everyone was talking about MapReduce - and then scrambling to find actual use cases. One research team from a major company proudly demonstrated their achievement: they had around 700,000 documents - a database of support cases, which happens to be a classic use case for IntraFind. With their massive Hadoop cluster and MapReduce, they managed to pre-compute the similarity between all documents in just a few hours. We then showed them an iFinder running on some old, outdated hardware. It could instantly deliver the most similar support cases for any given one - in less than a second. And that was the real use case they needed, not pre-computing every possible similarity in advance. It was a perfect example of the saying: when all you have is a hammer, everything looks like a nail.

Other examples of such hypes include blockchain and agile software development (such as Scrum) as an organizational model. At IntraFind, we had never followed the traditional waterfall approach; from the beginning, we worked in an agile way - guided mostly by common sense. For a period, we applied Scrum strictly “by the book,” but over time we adapted elements of Kanban, dropped much of Scrum, and found a way of working that truly fit our needs.

On the AI front, the semantic web was a challenge that haunted me for many years. Being more of a neural network enthusiast, I never took to it wholeheartedly. Honestly, all the semantic web projects I was involved in - mostly research initiatives - were complete failures. Ontologies and knowledge graphs could handle only small demo cases; building the necessary knowledge manually was simply impossible. With the recent advances in generative AI, however, the situation has changed dramatically. The general knowledge that the semantic web promised is now accessible through the APIs of tools like Perplexity and ChatGPT - we just need to put it to use.

The AI revolution: Generative models for new applications

The release of ChatGPT by OpenAI in December 2022 marked a major breakthrough. While generative language models had existed for some time, advances such as instruction tuning and reinforcement learning made them genuinely practical. ChatGPT not only engages in conversation but can also answer a wide range of questions, leveraging extensive general knowledge from its training data. I don’t see this as just another tech hype. Generative models are opening the door to AI applications that seemed unattainable only a few years ago. That said, there are aspects of hype involved - expectations are extremely high, so some level of disappointment is inevitable. Still, I don’t anticipate another AI winter, at least not anytime soon.

Past, present, future of our technology

Over the past 25 years, our products have undergone three major technological transformations. The first iFinder was built directly on the Lucene library and essentially consisted of a set of Java applications. While we implemented a form of distributed search, it could only handle moderate datasets - up to around 50 million documents.

The landscape changed with the arrival of open-source solutions for horizontally scalable distributed search: Solr in 2006 and Elasticsearch in 2010, both based on Lucene. Search technology became immensely popular, the Lucene community grew significantly, and competition increased. We adapted and emerged stronger by reimplementing our products on Elasticsearch (now OpenSearch). By 2012 we were able to handle terabytes of data, enabling us to execute massive Enterprise Search projects for some of Germany’s largest companies.

A perfect match: Search Engines and Generative AI

We have now entered the third generation of our products. With recent breakthroughs in AI (generative models from OpenAI, and meanwhile also from Europe and China), many thought generative models might replace search engines entirely. In reality, they complement each other perfectly. Search engines provide factual, up-to-date information, while generative models formulate answers based on that knowledge - a combination known as Retrieval-Augmented Generation (RAG). We integrated this approach into iFinder through a RAG component called iAssistant. We are currently working on reasoning models (agentic AI) that decide for themselves which queries to send to which search engine or service for a given user request.
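The RAG pattern can be sketched in a few lines: retrieve the best-matching passages for a question, then hand them to a generative model as grounding context. In the sketch below the retriever is a toy keyword scorer and `generate` is a stub standing in for an LLM call; all names and data are illustrative, not the actual iAssistant implementation:

```python
def retrieve(query, documents, k=2):
    """Toy retriever: rank documents by the number of shared query terms.
    A real system would query a search engine (e.g. OpenSearch) here."""
    q_terms = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_terms & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def generate(prompt):
    """Stub standing in for a call to a generative model."""
    return f"[LLM answer grounded in prompt of {len(prompt)} characters]"

def rag_answer(query, documents):
    """Retrieval-Augmented Generation: retrieve, then generate from context."""
    context = "\n".join(retrieve(query, documents))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return generate(prompt)

# Hypothetical enterprise documents.
docs = [
    "iFinder indexes enterprise content via connectors.",
    "iAssistant combines iFinder search results with a generative model.",
    "The cafeteria opens at 11:30.",
]
print(rag_answer("How does iAssistant answer questions?", docs))
```

The design point is that the model only sees retrieved text, so answers stay tied to current, factual enterprise content rather than to whatever the model memorized during training.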

Our iHub extends these capabilities with applications such as text generation, translation, summarization, and file analysis, all delivered via user-friendly micro-apps in a secure, data-protection-compliant environment. Moreover, by leveraging models (or APIs, e.g. from OpenAI or Perplexity, that have search-engine access to general world knowledge) as knowledge bases, the long-standing vision of the semantic web may finally be realized.

These new capabilities are set to give IntraFind another growth spurt and open up exciting opportunities. After all, alongside delivering reliable Enterprise Search software, we have been an AI company from the very beginning.


The author

Dr. Christoph Goller
Director Research
Christoph Goller holds a PhD in computer science from the Technical University of Munich, where he conducted research in deep learning. He is an Apache Lucene committer with extensive experience in information retrieval, natural language processing, and AI. Since 2002 he has been leading the research department at IntraFind.