Unsettling Experience with AI: When a Simple Query Turns into a Privacy Concern
Recently, I had an alarming encounter with an AI tool while trying to find a straightforward answer to a question about sandpaper. Instead of the expected guidance, the tool produced something quite unsettling: a detailed account of a stranger’s medical information, including drug test results and personal signatures.
This unexpected revelation has left me feeling anxious and uncertain about how to handle the situation. I initially considered sharing the chat transcript to seek advice, but I quickly became hesitant, concerned about inadvertently spreading someone else’s private details further.
In an attempt to clarify my experience, I posted a brief comment in a Reddit community describing the predicament. I shared most of the transcript but omitted any sections that could expose personal information about myself. While I understand that AI tools, including ChatGPT, can ‘hallucinate’ information, I verified the names mentioned and found that they matched real individuals in specific locations, which heightened my discomfort.
For those wondering what name the AI used, it referred to itself as “Atlas.” This has sparked some curiosity and skepticism in discussions surrounding the incident, with some questioning my motives or the authenticity of my claims.
If you’re interested in my full account, I’ve provided a link to the specific thread where I discussed this perplexing situation: Link to Reddit Comment.
As I navigate the implications of this unusual experience, I’m left contemplating the responsibilities we have when using AI technologies and the potential risks to privacy that come with them.