AI21 Labs introduces anti-hallucination feature for GPT chatbots
AI21 Labs has recently launched “Contextual Answers,” a question-answering engine for large language models (LLMs).
When the new engine is connected to the LLM, users can upload their own data libraries to limit the model’s output to specific information.
The launch of ChatGPT and similar artificial intelligence (AI) products has created a paradigm shift in the AI industry, but a lack of trust in their outputs has made adoption a difficult prospect for many companies.
According to research, employees spend almost half of their working day searching for information. This presents a huge opportunity for chatbots that can perform search functions; however, most chatbots are not aimed at enterprise use.
AI21 has developed Contextual Answers to bridge the gap between chatbots designed for general use and enterprise question-answering services by allowing users to pipe in their own data and document libraries.
According to a blog post by AI21, Contextual Answers lets users ground AI answers in their own data without retraining the model, thereby removing some of the biggest barriers to adoption:
“Most companies struggle to adopt [AI], citing cost, complexity and the models’ lack of expertise in their organizational data, which leads to answers that are inaccurate, ‘hallucinated’ or inappropriate to the context.”
One of the outstanding challenges associated with developing useful LLMs like OpenAI’s ChatGPT or Google’s Bard is teaching them to express their lack of confidence.
Typically, when a user queries a chatbot, it will provide an answer even if there is not enough information in its dataset to give a factual response. In these cases, rather than giving a low-confidence but honest answer such as “I don’t know,” LLMs often make up information without any factual basis.
Researchers call this output a “hallucination” because the machine generates information that seemingly does not exist in its dataset, much like humans who see things that are not really there.
We’re excited to introduce Contextual Answers, an API solution where answers are based on organizational knowledge, leaving no room for AI hallucinations.
➡️ https://t.co/LqlyBz6TYZ pic.twitter.com/uBrXrngXhW
— AI21 Labs (@AI21Labs) 19 July 2023
According to AI21, Contextual Answers should mitigate the hallucination problem entirely by outputting information only when it is relevant to the user-supplied documents, or by outputting nothing at all.
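For readers wanting a concrete picture of what such a document-grounded query could look like, here is a minimal Python sketch. The endpoint URL, request fields (context, question), response fields (answer, answerInContext) and the AI21_API_KEY environment variable are illustrative assumptions, not a verified description of AI21’s API; the official documentation should be consulted for the actual interface.

import os
import requests

API_KEY = os.environ["AI21_API_KEY"]  # assumed environment variable name

def contextual_answer(question: str, context: str):
    """Return an answer only if it is grounded in the supplied context; otherwise None."""
    response = requests.post(
        "https://api.ai21.com/studio/v1/answer",  # assumed endpoint path
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"context": context, "question": question},  # assumed field names
        timeout=30,
    )
    response.raise_for_status()
    data = response.json()
    # If the answer is not supported by the supplied documents, return nothing
    # at all rather than a made-up guess.
    if not data.get("answerInContext", False):  # assumed response field
        return None
    return data.get("answer")

# Example: the second question cannot be answered from the document, so the
# function returns None instead of hallucinating a response.
docs = "Q2 revenue was $4.2 million, up 12% from Q1."
print(contextual_answer("What was Q2 revenue?", docs))
print(contextual_answer("Who is the CFO?", docs))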
In industries where accuracy is more important than automation, such as finance and law, the emergence of generative pre-trained transformer (GPT) systems has produced mixed results.
In finance, experts continue to recommend caution when using GPT systems because of their tendency to hallucinate or conflate information, even when connected to the internet and able to link to sources. And in the legal industry, a lawyer is now facing fines and sanctions after relying on output generated by ChatGPT during a case.
By frontloading the AI system with the relevant data and intervening before the system can hallucinate non-factual information, AI21 appears to have demonstrated a way to mitigate the hallucination problem.
This could lead to mass adoption, especially in the fintech sector, where traditional financial institutions have been reluctant to adopt GPT technology, and the cryptocurrency and blockchain communities have had mixed success using chatbots.
Related: OpenAI is launching “custom instructions” for ChatGPT so users don’t have to repeat themselves at every prompt