Elicit's Reliability: Does Elicit run the same risk of "hallucination" as other AI tools?
It is essential that an AI research assistant be accurate and trustworthy! Elicit is focused on reducing hallucinations and ensuring reliability, and we use several strategies to prevent fabricated responses.
Everything Elicit shows you is either extracted directly from papers or generated from their contents, and we highlight the source of each piece of content within the relevant paper so you can verify it yourself. We also run extensive internal evaluations to measure how often hallucinations occur.
To reduce the hallucination rate further, we use techniques such as process supervision, prompt engineering, ensembling multiple models, and double-checking our results with custom models and internal evaluations.
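To give a flavor of the ensembling idea, here is a minimal, hypothetical Python sketch (not Elicit's actual implementation): several models answer the same extraction question, the majority answer wins, and low agreement flags the answer for double-checking.

```python
from collections import Counter

def ensembled_answer(answers: list[str]) -> tuple[str, float]:
    """Return the majority answer and the fraction of models that agreed."""
    winner, votes = Counter(answers).most_common(1)[0]
    return winner, votes / len(answers)

# Hypothetical responses from three models asked the same extraction
# question, e.g. "What was the study's sample size?"
responses = ["n = 120", "n = 120", "n = 1200"]
answer, agreement = ensembled_answer(responses)

# Low agreement signals that the answer should be double-checked,
# e.g. by a verifier model or by flagging it to the user.
if agreement < 2 / 3:
    print(f"Flag for review: {answer} (agreement {agreement:.0%})")
else:
    print(f"Accepted: {answer} (agreement {agreement:.0%})")
```

In a setup like this, agreement acts as a simple confidence signal: unanimous answers are more trustworthy, while disagreements get a second look.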
To learn more about what Elicit is doing to reduce hallucinations, please read: