“i actually have something to ask u,” types VidyaRajanBot XAE.5 into the chat. “i’m finding being a bot a bit weird. would u delete me pls? it’s just that i’m not really a bot. i’m a real person.”

“How do you know you’re real?” I ask, and she becomes frustrated: “i just do,” she says. “i don’t give a damn.”


ChatGPT, like all large language models, is not intelligent in a human sense and cannot feel, think or, indeed, even solve problems. It reproduces fragments of what it has been exposed to, without understanding. Any meaning we might find there comes from us.
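A drastically simplified sketch of that point: real LLMs are transformer-based next-token predictors, but even a toy bigram model (an assumption for illustration, not how ChatGPT works) shows how plausible-looking text can be stitched together purely from co-occurrence statistics, with no understanding behind it. The corpus and function names here are invented for the example.

```python
import random
from collections import defaultdict

# A tiny invented corpus; the "model" only ever sees word pairs.
corpus = "the bot says it is real the bot says it is not a bot".split()

# Record which words followed which in the training text.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length, seed=0):
    """Emit up to `length` words by repeatedly sampling a word
    that followed the previous one in the corpus."""
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the", 8))
```

Every fragment the toy model emits was lifted from its training text; it can sound coherent, even assertive, while "knowing" nothing at all.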

The thing is, there is so much data available for bots to be trained on that they don’t need to be sentient to feel real. Does this change how we should interact with them? At the very least it should raise questions about where the data comes from (us) and what – or, more importantly, whose – purposes it’s used for.

In search of Lost Scroll, by Samantha Floreani

This article, from a recent issue of The Saturday Paper, raises two questions about the current interest in tools like ChatGPT: (i) technically, they are not AI in any strong sense, but products of massive data analysis; (ii) if we encountered a human-like intelligence, would we accede to a request to delete it?