Towards the end of 2024, Dennis Biesma decided to check out ChatGPT. The Amsterdam-based IT consultant had just ended a contract early. “I had some time, so I thought: let’s have a look at this new technology everyone is talking about,” he says. “Very quickly, I became fascinated.” Biesma has asked himself why he was vulnerable to what came next. He was nearing 50. His adult daughter had left home, his wife went out to work and, in his field, the shift since Covid to working from home had left him feeling “a little isolated”. He smoked a bit of cannabis some evenings to “chill”, but had done so for years with no ill effects. He had never experienced a mental illness. Yet within months of downloading ChatGPT, Biesma had sunk €100,000 (about £83,000) into a business startup based on a delusion, been hospitalised three times and tried to kill himself.

Anna Moore at The Guardian

These stories are absolutely heart-wrenching, and they don’t just happen to people with a history of mental illness or other factors you might associate with priming someone to “fall for” an “AI” chatbot. Just a few years in, it’s already clear that these tools pose a real danger to a group of people of indeterminate size, and proper research into the causes is absolutely warranted and needed. On top of that, if there’s any evidence of wrongdoing by the companies behind these chatbots – intentionally making them more addictive, luring people in, ignoring established dangers, covering up addiction cases, and so on – lawsuits and regulation are definitely in order.

Only yesterday, Facebook and Google lost a landmark trial in the US, with the jury finding that the companies intentionally made social media as addictive as possible, destroying a person’s life in the process. Countless similar lawsuits are underway all over the world, and I have a feeling that years or decades from now, we’ll look at unregulated, rampant social media the same way we look at tobacco now.
Perhaps “AI” chatbots will join their ranks, too.
March 27, 2026