Chatbots Linked to Bioweapon Risks

(TargetDailyNews.com) – U.S. think tank the Rand Corporation has issued a stark warning that AI chatbots could be misused to assist in the “planning and execution” of a bioweapons attack. The researchers did not, however, find evidence that the chatbots could help create the deadly pathogens themselves.

Large language models (LLMs) are the newest form of commercially available AI. Programs like ChatGPT have gained viral fame as helpful assistants for tasks ranging from looking up information to diagnosing illnesses in pets.

The report indicated that past attempts to weaponize biological agents failed in part because of basic knowledge gaps that AI, and specifically LLMs, could now easily bridge. It cited the Japanese Aum Shinrikyo cult, whose attempted weaponization of botulinum toxin in the 1990s foundered for exactly that reason.

Dario Amodei, CEO of AI developer Anthropic, has warned that within three years or less AI technology could assist in the development of biological weapons. Bioweapons are expected to be a hot topic at next month’s AI safety summit in England.

LLMs are fed huge troves of data scraped from the internet and “learn” the patterns in which words are commonly strung together. They can then use those patterns to answer questions or respond to prompts like “Tell me a joke.” Developers typically build in safeguards so the technology can’t be used for ill, but the practice isn’t perfect.

Rand was not specific about which LLMs it tested, or which was able to aid in plotting a terror attack. In one test, researchers prodded a bot to discuss how various biological agents could be used to cause mass death. It also weighed the chances of obtaining infected animals and offered instructions for transporting the creatures.

The researchers acknowledged hacking the programs to circumvent their safety protocols. They were also able to get a bot to weigh the pros and cons of different delivery methods for botulinum toxin, which causes fatal neurological damage and can be administered by ingestion or inhalation. The bot even suggested that legitimate research could serve as a plausible cover story for acquiring the organism that produces the toxin.

Copyright 2023, TargetDailyNews.com