
Improving RAG performance: Introducing GraphRAG

Introducing GraphRAG: a RAG architecture built in a graph database that improves on the performance and responses of traditional RAG architectures, leading to better outcomes for companies leveraging AI in their internal systems.


Introduction

Large Language Models (LLMs) like ChatGPT have revolutionized the way work is done today. Using LLMs to generate and analyze content still feels new, yet adoption is spreading across industries at a frenetic rate. LLMs are trained on a large corpus of general knowledge, but they often lack the domain-specific knowledge a company needs, and companies do not want to expose proprietary data by feeding it into an LLM.

Retrieval Augmented Generation (RAG) is a way to combine the power of LLMs with external data sources to create chat-based responses. As you might expect, RAG has become a powerful tool, allowing companies to build chatbots on top of their internal knowledge. Innovations to RAG architectures are also moving at a rapid pace.

Forbes reports that 73% of companies use AI for chatbots and 41% use AI for data aggregation. These organizations need their own data added to the AI accurately and completely, and RAG architectures are one of the main tools they use to do it.

In this post, we introduce GraphRAG: a RAG architecture built in a graph database. GraphRAG improves on the performance and responses of traditional RAG architectures, improving outcomes for companies that are leveraging AI in their internal systems.


Typical RAG Architectures

A typical RAG pipeline takes unstructured data, splits it into chunks, and converts each chunk into a vector (a large array of numbers) stored in a vector database. A user's query is embedded the same way, and the stored vectors with the highest cosine similarity to the query vector (their dot product normalized by their magnitudes) are retrieved as the most likely related content. When the retrieved content is only loosely related to the query, the LLM can hallucinate: it provides a confident-sounding answer that is not correct.
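To make this concrete, here is a minimal sketch of cosine-similarity retrieval. The chunk vectors and the query vector are hard-coded placeholders; in a real system an embedding model produces them and a vector database stores and searches them.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Dot product of the two vectors, normalized by their magnitudes.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(query_vec: np.ndarray, chunk_vecs: dict[str, np.ndarray], k: int = 3) -> list[str]:
    # Rank stored chunks by cosine similarity to the query and keep the top k.
    scored = sorted(
        chunk_vecs.items(),
        key=lambda item: cosine_similarity(query_vec, item[1]),
        reverse=True,
    )
    return [chunk_id for chunk_id, _ in scored[:k]]

# Illustrative vectors only; real embeddings have hundreds or thousands of dimensions.
chunk_vecs = {
    "chunk_a": np.array([0.9, 0.1, 0.0]),
    "chunk_b": np.array([0.1, 0.8, 0.1]),
}
query_vec = np.array([0.85, 0.15, 0.0])
print(retrieve(query_vec, chunk_vecs, k=1))  # -> ['chunk_a']
```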

This approach works well for data that can be expressed directly as a vector, which means RAG databases are good at comparing discrete terms or objects. However, it also means that abstraction, complex reasoning, and analysis of trends are harder, because a vector-only RAG cannot make those connections. By adding ontologies, the RAG can discover connections between pieces of data that are not visible in a simple vector RAG.

With a RAG architecture, private and proprietary data can be used without the worry of exposing that data to the LLM itself.

GraphRAG

GraphRAG keeps all of the advantages of a vector-based RAG, but stores the data in a graph database. Graph databases store additional context about the data: information is stored in nodes, and edges describe the relationships and links that connect that information.
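To illustrate nodes and edges, here is a small sketch using the networkx library. The entities and relationships are invented for the example and are not tied to any particular graph database.

```python
import networkx as nx

# A tiny knowledge graph: nodes hold entities, edges hold the relationship between them.
graph = nx.DiGraph()
graph.add_node("Acme Corp", type="Company")
graph.add_node("Project Atlas", type="Project")
graph.add_node("Jane Doe", type="Employee")

graph.add_edge("Acme Corp", "Project Atlas", relation="runs")
graph.add_edge("Jane Doe", "Project Atlas", relation="works_on")
graph.add_edge("Jane Doe", "Acme Corp", relation="employed_by")

# The edges make connections explicit that a vector store only captures implicitly.
for subject, obj, data in graph.edges(data=True):
    print(f"{subject} --{data['relation']}--> {obj}")
```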

How can we identify the nodes and edges of a graph database? GraphRAG uses ontologies. Ontologies are formal representations of the concepts and relationships in your data. By defining the common connections and representations in the dataset, we can build a more accurate and complete graph database.
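Continuing the toy example, here is a minimal sketch of what an ontology could look like and how it can keep extracted relationships consistent. The classes and relations are illustrative assumptions, not Lettria's actual schema.

```python
# Illustrative ontology: which entity classes exist and which relations are allowed between them.
ONTOLOGY = {
    "classes": ["Company", "Project", "Employee"],
    "relations": {
        "runs": ("Company", "Project"),
        "works_on": ("Employee", "Project"),
        "employed_by": ("Employee", "Company"),
    },
}

def is_valid_edge(relation: str, subject_class: str, object_class: str) -> bool:
    # Only accept extracted triples that conform to the ontology.
    expected = ONTOLOGY["relations"].get(relation)
    return expected == (subject_class, object_class)

print(is_valid_edge("works_on", "Employee", "Project"))   # True
print(is_valid_edge("works_on", "Company", "Employee"))   # False, rejected
```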

In addition to vector-based comparisons, our RAG can use the power of the graph connections to further refine and improve the results of each query. This provides a stronger holistic response for high-level queries, greater accuracy (fewer hallucinations), and fewer tokens spent on each query.
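A rough, self-contained sketch of the idea: vector retrieval selects the closest chunks, and the graph supplies the facts connected to the entities those chunks mention, so the LLM gets a smaller but more relevant prompt. The chunk-to-entity mapping and the toy graph are assumptions made for the illustration, not Lettria's implementation.

```python
import networkx as nx

def graph_context(graph: nx.DiGraph, entities: list[str], hops: int = 1) -> list[str]:
    # Collect facts (edges) within a given number of hops of the entities
    # mentioned in the vector-retrieved chunks.
    facts, frontier = [], set(entities)
    for _ in range(hops):
        next_frontier = set()
        for node in frontier:
            if node not in graph:
                continue
            for _, neighbor, data in graph.out_edges(node, data=True):
                facts.append(f"{node} {data['relation']} {neighbor}")
                next_frontier.add(neighbor)
        frontier = next_frontier
    return facts

# Toy graph reused from the earlier sketch.
graph = nx.DiGraph()
graph.add_edge("Jane Doe", "Project Atlas", relation="works_on")
graph.add_edge("Acme Corp", "Project Atlas", relation="runs")

retrieved_chunks = ["chunk_a"]              # from cosine-similarity retrieval
chunk_entities = {"chunk_a": ["Jane Doe"]}  # illustrative entity links
entities = [e for c in retrieved_chunks for e in chunk_entities.get(c, [])]
prompt_context = retrieved_chunks + graph_context(graph, entities)
print(prompt_context)  # ['chunk_a', 'Jane Doe works_on Project Atlas']
```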

With the additional context in a graph database, our RAG gains additional powers - hence the name GraphRAG.

GraphRAG vs. RAG

When testing GraphRAG against a traditional RAG, Lettria found:

  • 30% lower token usage. GraphRAG is able to create accurate and complete results with less input.
  • Holistic answers. In addition to "in the weeds" responses, GraphRAG connects scattered facts through graph relationships, so it is better able to see the forest for the trees.
  • Fewer hallucinations. If the vectors match but the graph does not, GraphRAG is better able to avoid hallucinations, ensuring higher accuracy in the results (see the sketch below).
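As a rough illustration of the last point, a retrieved chunk can be kept only when the entities involved are actually connected in the graph. The helper below is an invented sketch of that cross-check, not Lettria's filtering logic.

```python
import networkx as nx

def supported_by_graph(graph: nx.DiGraph, question_entity: str,
                       chunk_entity: str, max_hops: int = 2) -> bool:
    # A vector match is only trusted if the two entities are also connected in the graph.
    undirected = graph.to_undirected()
    if question_entity not in undirected or chunk_entity not in undirected:
        return False
    try:
        return nx.shortest_path_length(undirected, question_entity, chunk_entity) <= max_hops
    except nx.NetworkXNoPath:
        return False

graph = nx.DiGraph()
graph.add_edge("Jane Doe", "Project Atlas", relation="works_on")

# High cosine similarity alone is not enough: "Project Zeus" is not connected to "Jane Doe".
print(supported_by_graph(graph, "Jane Doe", "Project Atlas"))  # True
print(supported_by_graph(graph, "Jane Doe", "Project Zeus"))   # False, likely hallucination
```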

Conclusion

GraphRAG is the next step in RAG for AI: faster responses, fewer tokens, fewer hallucinations, and better results. GraphRAG is supported by the top graph databases.


Ready to revolutionize your RAG?


Get started with GraphRAG in 2 minutes
Talk to an expert ->