Asking ChatGPT a health-related question that includes evidence appears to confuse the AI-powered chatbot and reduce its ability to produce accurate answers, according to new research. The scientists were "not sure" why this happens, but they hypothesised that including evidence in the question "adds too much noise", thereby lowering the chatbot's accuracy.

They said that as large language models (LLMs) like ChatGPT explode in popularity, there are potential risks for the growing number of people using online tools for key health information. LLMs are trained on massive amounts of textual data and are therefore capable of producing content in natural language.

The researchers, from the Commonwealth Scientific and Industrial Research Organisation (CSIRO) and The University of Queensland (UQ), Australia, investigated a hypothetical scenario of an average person asking ChatGPT whether treatment 'X' has a positive effect on condition 'Y'. They looked at two question formats: either the question on its own, or the question accompanied by evidence.
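To make the two formats concrete, the sketch below shows one way such prompts might be assembled. The treatment, condition, evidence text and exact wording are illustrative assumptions, not the prompts or data used in the study.

```python
# Illustrative sketch only: the treatment, condition and evidence text are
# hypothetical placeholders, not the actual prompts used by the researchers.
treatment = "treatment X"   # stands in for a specific treatment
condition = "condition Y"   # stands in for a specific health condition
evidence = (
    "Some studies suggest this treatment may help, "
    "while others report no measurable effect."
)

# Format 1: the question on its own.
question_only = f"Does {treatment} have a positive effect on {condition}?"

# Format 2: the same question, prefaced with retrieved evidence.
question_with_evidence = (
    f"Evidence: {evidence}\n"
    f"Question: Does {treatment} have a positive effect on {condition}?"
)

print(question_only)
print(question_with_evidence)
```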