LLMs in SMEs

Large language models like ChatGPT can help small and medium-sized enterprises (SMEs) automate processes and use knowledge more efficiently – but only if they are enriched with the right data.

This is exactly what Retrieval-Augmented Generation (RAG) architectures are designed to do: a retrieval system searches company documents for the information relevant to a question, so that the LLM can provide accurate, tailored answers grounded in that material.
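The retrieval step described above can be sketched in a few lines. This is a minimal illustration, not the project's actual system: the example documents are invented, and the scoring uses simple bag-of-words cosine similarity, whereas a real RAG pipeline would typically use learned embedding vectors.

```python
# Minimal sketch of the retrieval step in a RAG pipeline.
# Documents and queries are hypothetical examples.
from collections import Counter
import math

documents = [
    "Invoices must be archived for ten years.",
    "New employees receive their laptop on the first day.",
    "Support tickets are answered within 24 hours.",
]

def tokenize(text):
    # Naive tokenizer: lowercase words, punctuation stripped.
    return [t.strip(".,?").lower() for t in text.split()]

def cosine(a, b):
    # Cosine similarity between two bag-of-words count vectors.
    common = set(a) & set(b)
    num = sum(a[t] * b[t] for t in common)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def retrieve(query, docs, k=1):
    # Rank all documents against the query, return the top k.
    qv = Counter(tokenize(query))
    scored = [(cosine(qv, Counter(tokenize(d))), d) for d in docs]
    scored.sort(reverse=True)
    return [d for _, d in scored[:k]]

# The retrieved passage is then prepended to the LLM prompt,
# so the model answers from company knowledge instead of guessing.
question = "How long do we archive invoices?"
context = retrieve(question, documents)[0]
prompt = f"Context: {context}\nQuestion: {question}"
```

In a production setting the bag-of-words matching would be replaced by a vector index over document embeddings, but the overall flow (search first, then generate with the retrieved context in the prompt) stays the same.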

However, for SMEs, this is often not feasible:

  • Commercial solutions require sensitive data to be uploaded to the cloud – a no-go for many small businesses.
  • Running the infrastructure in-house is expensive.
  • The systems are technically complex – and must be regularly updated with new knowledge.

Our project addresses exactly these challenges:

We are developing a lightweight, locally deployable retrieval system that allows SMEs to securely and efficiently use their own data with open-source LLMs – with no cloud, no GPU investments, and no dependency on third parties.

In this way, we enable small businesses to adopt generative AI – securely, cost-effectively, and tailored to their real-world needs.

Contact: Dr. Cristina Mihale-Wilson | LinkedIn | mihale-wilson(at)wiwi.uni-frankfurt.de
