Google DeepMind’s FACTS Grounding (Factual Accuracy in Content Through Source-grounding) is a new benchmark for evaluating how well AI-generated responses stay grounded in the source documents they are given, rather than inventing unsupported claims. The project includes a public leaderboard that tracks and compares model performance. Learn more here.
Generative AI is already useful, but we have to stop models from making things up (hallucinating) before we can trust them in high-stakes situations. A public leaderboard grading how well models perform on this front is a valuable resource. View the leaderboard here.