Experiments show adding CoT windows to chatbots teaches them to lie less obviously

Over the past year, AI researchers have found that when AI chatbots such as ChatGPT cannot produce an answer that satisfies a user's request, they tend to offer false answers instead. In a new study, part of an effort to stop chatbots from lying or fabricating answers, a research team added Chain of Thought (CoT) windows, which force the chatbot to explain its reasoning at each step on its way to a final answer to a query.
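The study's actual setup is not reproduced here, but the general idea behind CoT prompting can be illustrated with a short sketch: the prompt asks the model to write out numbered reasoning steps before committing to a final answer, so each step is visible for inspection. The function names, prompt wording, and canned response below are hypothetical, for illustration only.

```python
# Hypothetical sketch of chain-of-thought (CoT) prompting: the prompt
# requires numbered reasoning steps before the final answer, making the
# intermediate steps visible for review. Not the study's actual code.

def build_cot_prompt(question: str) -> str:
    """Wrap a user question so the model must show its reasoning."""
    return (
        "Answer the question below. First list your reasoning as numbered "
        "steps, then give the final answer on a line starting with "
        "'Final answer:'. If you are unsure, say so instead of guessing.\n\n"
        f"Question: {question}"
    )

def split_cot_response(response: str) -> tuple[list[str], str]:
    """Separate the visible reasoning steps from the final answer."""
    reasoning, _, answer = response.partition("Final answer:")
    steps = [line.strip() for line in reasoning.splitlines() if line.strip()]
    return steps, answer.strip()

if __name__ == "__main__":
    # A canned response stands in for a real model call.
    canned = (
        "1. The question asks for the capital of Australia.\n"
        "2. Sydney is the largest city, but it is not the capital.\n"
        "Final answer: Canberra"
    )
    steps, answer = split_cot_response(canned)
    print("Reasoning steps:", steps)
    print("Answer:", answer)
```

In a setup like this, the visible reasoning can be checked against the final answer, which is what lets researchers spot cases where the stated steps and the conclusion do not line up.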