Many publicly accessible AI assistants lack adequate safeguards to prevent mass health disinformation, warn experts

Many publicly accessible artificial intelligence (AI) assistants lack adequate safeguards to consistently prevent the mass generation of health disinformation across a broad range of topics, warn experts writing in The BMJ. They call for enhanced regulation, transparency, and routine auditing to prevent advanced AI assistants from contributing to the spread of health disinformation.
