Would you employ an assistant who was dazzlingly quick and resourceful, but who regularly—and confidently—told you things that weren’t true? If you use a chatbot or other artificial intelligence, you already do.

AI programs make associations among the data they’re trained on in ways that can be unpredictable and that occasionally produce bizarre results. It’s easy to see when generative AI makes visual mistakes, like a hand with too many fingers, but errors in text can be tougher to spot. When AI produces material that doesn’t match reality, it’s called an AI hallucination.

How often do AI hallucinations happen?

According to research from Vectara, a company that evaluates AI performance, some AI models hallucinate in as many as 30% of their responses. Some platforms are much more reliable than others, but all of them can produce false or misleading results.

Some famous examples: ChatGPT made up legal cases, which caused trouble for the lawyer who cited the fake precedents in court, and Google’s Bard chatbot falsely claimed in its first demonstration that the James Webb Space Telescope took the first pictures of a planet outside our solar system. In fact, another observatory captured the first image of an exoplanet in 2004, years before Webb launched.

What does this mean for using AI in your business?

AI can be a wonderful time-saver for repetitive tasks, but ultimately you are responsible for any of its output that you use. Remember, TREC rules specify that license holders must exercise “prudence and caution so as to avoid misrepresentation, in any way, by acts of commission or omission.” They also prohibit advertisements that are misleading, that are likely to deceive the public, or that create a misleading impression. Article 12 of the Code of Ethics likewise charges you with presenting a true picture in your advertising, marketing, and other representations. So make sure you fully supervise and vet anything your AI “assistant” produces.