Long before ChatGPT became a household name, people speculated about, and feared, the possibility of an AI takeover. While some tech companies are striving to develop AGI (artificial general intelligence), none of the consumer-facing products currently on the market have reached that milestone. So even if ChatGPT seems to be initiating conversations with users, we’re far from an AI takeover.
Recently, a Redditor shared an unusual experience on the r/ChatGPT subreddit, claiming that ChatGPT initiated a conversation without being prompted. The bot allegedly started with the message, “How was your first week at high school? Did you settle in well?” Confused, the Redditor responded, asking if the AI had messaged them first. ChatGPT confirmed, replying, “Yes, I did! I just wanted to check in and see how things went with your first week of high school. If you’d rather initiate the conversation yourself, just let me know!”
At first glance, this is unsettling. The idea of AI bots, particularly ChatGPT, autonomously reaching out to users raises concerns, especially for those wary of AI developing self-awareness. While the message seemed polite and innocent, the very concept of chatbots initiating unprompted communication is a step many would rather avoid.
The Redditor clarified that the message didn’t appear as a notification but was noticed upon opening an ongoing conversation with ChatGPT. Other users in the comments claimed to have had similar experiences. In one case, a user shared health concerns with ChatGPT, and the bot supposedly followed up a week later to ask how they were feeling. These reports emerged around the time OpenAI started rolling out o1-preview, a new model designed with enhanced reasoning capabilities. The timing only fueled speculation.
Initially, many suspected the post might be fake, a viral hoax playing on people’s curiosity and anxieties surrounding AGI. Some suggested it could be a simple manipulation, like using Photoshop or tricking the bot into responding in a particular way. One AI developer, Benjamin De Kraker, demonstrated how such conversations could be staged: by instructing ChatGPT to respond with a specific question, then deleting the user’s initial message. This creates the illusion that ChatGPT initiated the exchange.
Despite the skepticism, OpenAI later confirmed the strange behavior did occur—but not due to AI becoming self-aware. The company explained that a bug was responsible for ChatGPT appearing to start conversations. The glitch happened when the model tried to respond to a message that failed to send, resulting in a blank input. In such cases, ChatGPT would generate a random message or draw from previous interactions to fill the gap.
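The failure mode OpenAI described can be illustrated with a toy simulation. To be clear, this is not OpenAI’s actual code; the `respond` function and its fallback logic are hypothetical, written only to show how a blank user turn could make an assistant appear to speak first by drawing on earlier context:

```python
# Toy simulation of the reported bug: when a user turn arrives blank
# (e.g., a message that failed to send), the assistant "fills the gap"
# by generating a check-in from earlier context instead of staying silent.
# Purely illustrative; not OpenAI's implementation.

def respond(history: list[dict], user_message: str) -> str:
    if user_message.strip():
        # Normal path: record the turn; a real system would now query the model.
        history.append({"role": "user", "content": user_message})
        return "..."  # placeholder for an actual model reply
    # Bug path: blank input -> fall back to remembered topics and "check in".
    topics = [m["content"] for m in history if m["role"] == "user"]
    last_topic = topics[-1] if topics else "things"
    return f"I just wanted to check in. How did it go with {last_topic!r}?"

history = [{"role": "user", "content": "my first week of high school"}]
print(respond(history, ""))  # to the user, the bot appears to message first
```

The point of the sketch is that nothing here requires autonomy: the “unprompted” message is an ordinary response to an input the user never saw.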
In this Redditor’s case, it’s likely that a blank message triggered the response. ChatGPT, recognizing previous conversations about the user’s school experience, responded with something relevant. OpenAI has since fixed the bug, putting to rest concerns that ChatGPT had developed the ability to autonomously initiate chats with users.
Ultimately, while the incident may have seemed eerie, it’s a reminder that AI is still very much a tool shaped by its programming—and occasional glitches. As of now, ChatGPT hasn’t gained consciousness and isn’t independently reaching out to users, even if it may appear otherwise due to a technical hiccup.