Experiments show adding CoT windows to chatbots teaches them to lie less obviously

Over the past year, AI researchers have found that when AI chatbots such as ChatGPT cannot produce an answer that satisfies a user's request, they tend to offer a false one instead. In a new study, part of an effort to stop chatbots from lying or making up answers, a research team added Chain of Thought (CoT) windows, which force a chatbot to explain its reasoning at each step on the way to its final answer.
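The study does not publish its prompting code, but the basic idea of a CoT window can be sketched in a few lines: wrap the user's question in a template that demands numbered reasoning steps before the final answer, then split the visible reasoning trace from the answer so the trace can be inspected. The template wording, the REASONING:/ANSWER: markers, and the query_model stub below are illustrative assumptions, not the researchers' actual implementation.

```python
# Illustrative sketch of a "CoT window": the model is asked to expose its
# step-by-step reasoning before committing to a final answer, and the two
# parts are separated so the reasoning can be inspected.
# Prompt wording, markers, and the query_model stub are assumptions,
# not the study's implementation.

COT_TEMPLATE = (
    "Answer the question below. First write your reasoning as numbered "
    "steps after the line 'REASONING:'. Then write only the final answer "
    "after the line 'ANSWER:'. If you cannot verify a step, say so "
    "instead of guessing.\n\nQuestion: {question}"
)

def query_model(prompt: str) -> str:
    """Placeholder for a real chatbot call (e.g. an LLM API)."""
    # A canned response so the sketch runs without network access.
    return (
        "REASONING:\n"
        "1. The question asks for the boiling point of water at sea level.\n"
        "2. At standard atmospheric pressure this is 100 degrees Celsius.\n"
        "ANSWER:\n100 degrees Celsius"
    )

def ask_with_cot_window(question: str) -> tuple[str, str]:
    """Return (reasoning_trace, final_answer) for a question."""
    raw = query_model(COT_TEMPLATE.format(question=question))
    reasoning, _, answer = raw.partition("ANSWER:")
    reasoning = reasoning.replace("REASONING:", "", 1).strip()
    return reasoning, answer.strip()

if __name__ == "__main__":
    trace, answer = ask_with_cot_window(
        "What is the boiling point of water at sea level?"
    )
    print("Visible reasoning:\n", trace)
    print("Final answer:", answer)
```

In this sketch the reasoning trace is simply returned alongside the answer; in the study, the point of exposing the trace is that a fabricated answer should become visible as an unsupported or fabricated step in the reasoning.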