Humans in the loop
Lettria’s human-in-the-loop AI process isn’t just a recognition of technological advances that still need to be made. It’s a reflection of how humans have used machines for centuries.
To give an example, take a look at your local construction site.
You don’t see masses of people lifting, sawing, hammering. Instead, you’ll find a few people directing some powerful machines. None of these machines are fully autonomous, left to their own devices; they’re all being guided by at least one and oftentimes two or three people.
That’s because a construction site reflects a simple truth that we’ve long known: Machines can be powerful, precise, fast, or whatever else a given need calls for. But they also need some level of human intervention and supervision to ensure they’re executing the task properly.
Over the past few years, that truth has often been tossed aside in how people think about (and use!) new AI-based tools like LLMs. These tools have been marketed as miracles – and on the surface, they even sometimes perform in ways that make it look like that’s the case.
But in the engineering corner of machine learning, keeping a human in the loop has been standard practice for a long time. Procedures such as active learning and reinforcement learning from human feedback (RLHF) demonstrate the benefits of having human guardrails as machines are being developed and put to practical use.
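To make the idea concrete, here’s a minimal sketch of one active-learning round in Python. The model object, its methods, and the reviewer callback are hypothetical placeholders for illustration, not any particular library’s API:

```python
def active_learning_round(model, unlabeled_batch, labeled_pool, ask_human,
                          confidence_threshold=0.75):
    """One round: predict, route uncertain items to a human reviewer, retrain.

    `model`, `ask_human`, and their methods are illustrative stand-ins.
    """
    for item in unlabeled_batch:
        prediction, confidence = model.predict_with_confidence(item)
        if confidence < confidence_threshold:
            # Low confidence: a human supplies the ground-truth label.
            label = ask_human(item, suggested=prediction)
        else:
            # High confidence: accept the model's own prediction.
            label = prediction
        labeled_pool.append((item, label))

    # The growing pool of human-verified examples feeds the next training pass.
    model.retrain(labeled_pool)
    return model, labeled_pool
```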
At Lettria, our NLP and GraphRAG technologies pass through continuous layers of testing: queries are run and the answers are evaluated against a quality scale (correct, partially correct, partially incorrect, and so on).

That information is used to keep fine-tuning the ontologies and knowledge graphs, both for baseline uses and to match our individual clients’ needs and use cases. All this effort has paid off: our GraphRAG technology outperforms traditional VectorRAG by up to 30% in accuracy while eliminating hallucinations.
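As a rough sketch of what that test-and-refine loop can look like in code (the callables here are hypothetical stand-ins for illustration, not Lettria’s actual tooling):

```python
# Quality scale used by human reviewers when grading answers.
QUALITY_SCALE = ("correct", "partially_correct", "partially_incorrect", "incorrect")

def evaluation_pass(queries, graph, run_query, review_answer, refine_graph):
    """One pass of query -> human grade -> knowledge-graph refinement."""
    revisions = []
    for query in queries:
        answer = run_query(graph, query)      # GraphRAG query against the knowledge graph
        grade = review_answer(query, answer)  # human reviewer picks a value from QUALITY_SCALE
        if grade != "correct":
            # Anything short of "correct" becomes a concrete work item for the
            # next round of ontology / knowledge-graph fine-tuning.
            revisions.append({"query": query, "answer": answer, "grade": grade})

    refine_graph(graph, revisions)
    return revisions
```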
But AI-based tools are still developing, and a lot of ongoing human supervision is still needed for them to work well. Tasks like correctly parsing data and documents and mastering multi-hop reasoning are now being achieved, but it’s still a good idea to have someone there to verify the output – just as happens at construction sites, factories, farms, and other machine-heavy sectors around the world. That’s why every Lettria answer is accompanied by a detailed graph and snippets that show the machine’s work.
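For illustration, a traceable answer of that kind might be shaped something like the structure below. This is an assumed example of the general idea, not Lettria’s actual response format, and the contract details in it are invented:

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    document: str   # source document the snippet was taken from
    snippet: str    # verbatim passage supporting part of the answer

@dataclass
class GraphEdge:
    source: str     # subject node in the knowledge graph
    relation: str   # labeled relation between the two nodes
    target: str     # object node in the knowledge graph

@dataclass
class TraceableAnswer:
    answer: str
    subgraph: list[GraphEdge] = field(default_factory=list)  # reasoning path through the graph
    evidence: list[Evidence] = field(default_factory=list)   # snippets a reviewer can check

# A reviewer can follow the edges and read the snippets to verify the claim.
response = TraceableAnswer(
    answer="Clause 4.2 caps liability at twelve months of fees.",
    subgraph=[
        GraphEdge("Clause 4.2", "limits", "Liability"),
        GraphEdge("Liability", "capped_at", "Twelve months of fees"),
    ],
    evidence=[
        Evidence("master_services_agreement.pdf",
                 "In no event shall liability exceed the fees paid in the preceding twelve (12) months."),
    ],
)
```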

And that’s also why you can’t just sign up for Lettria as a self-service SaaS. Our technology is designed for clients who are running complex businesses where the margin for error is tiny, if it’s there at all. That’s why each of our success stories follows the same pattern:
Pilot: Roughly 12 weeks of defining the problem, document intake and parsing, graph and node creation, testing, verification and correction. This serves two purposes: the daily users within your business develop a fine-tuned understanding of the tool, and your company leaders aren’t asked to blindly trust yet another AI solution.
Deployment: After a successful pilot, further use cases within the company are identified and sent through the same kind of process. For each new application, people – both within your company and Lettria’s AI engineers – are there at every step to make sure the machine is working properly.
The bottom line
AI doesn’t need to be a bunch of black boxes marketed as miracle solutions. Trust has to be earned, just like in any other relationship. That’s why we’re more than happy to put our machine’s reasoning out there in plain sight for easy verification. And if there’s a problem, our team is right there: after all, our goal is to build AI that actually works.