
AI ‘hallucinates’ constantly, but there’s a solution


The main problem with big tech’s experiment with artificial intelligence (AI) is not that it could take over humanity. It’s that large language models (LLMs) like OpenAI’s ChatGPT, Google’s Gemini and Meta’s Llama continue to get things wrong, and the problem is intractable.

These errors are known as hallucinations. Perhaps the most prominent example was the case of US law professor Jonathan Turley, who was falsely accused of sexual harassment by ChatGPT in 2023.
