Artificial intelligence has drawn public concern in recent months following the release of ChatGPT, a large language model (LLM) fine-tuned by researchers to answer users' questions. Among the many issues being raised in light of these developments, one lawyer is claiming that AI will be used to groom young men into terrorism, according to The Daily Mail.
ChatGPT is one of the tools that could be misused to spread radical ideologies and recruit young men, according to Jonathan Hall, the independent reviewer of terrorism legislation. He warned that AI-enabled attacks are probably imminent, and raised the concern that if someone were influenced by AI to commit a terrorist act, it is unlikely anyone would be held responsible, because counterterrorism legislation in the United Kingdom has not caught up with the technology.
When responsibility for an act is shared between man and machine, who will be prosecuted? Hall asked. He noted that those driven to commit such acts are likely to be suffering from a mental condition, or to be “neurodivergent,” a nonmedical term for people whose brains operate differently from others'.
It is currently unknown how AI companies interact with law enforcement and whether conversations related to terrorism are monitored. AI has not yet caused a terrorist attack, but it has been linked to harm in other cases, including that of a father who died by suicide after spending more than a month discussing his worries about climate change with a chatbot.
An Australian mayor is also reportedly threatening to sue OpenAI after ChatGPT falsely claimed that he had served time in prison for bribery.
Tory MP Greg Clark has also signaled his concern about the dangers of AI, citing risks such as suicide and terrorism.