
ChatGPT and privacy: why you shouldn't trust AI with your secrets
ChatGPT, the trendy AI-powered chatbot, also has an appetite for your personal data, which it requires before you can use it. With your email address and phone number, OpenAI – like most major Internet services – can already learn a lot about you, including the country you live in and the name of your mobile operator. But most importantly, ChatGPT keeps a record of everything you exchange with it.
Only two months after its launch, ChatGPT had already reached an impressive 100 million active users. Never before had a consumer tool grown so fast. This growth is driven by the remarkable capabilities of the artificial intelligence behind ChatGPT, which lets users automate many tasks. Many professions, and even entire industries, are being disrupted.
And while many people are discovering business ideas and easy ways to make money with ChatGPT, such as using it for marketing or writing, others are asking what it means for our privacy, and how the service uses and protects our personal data.
An AI fueled by our personal data
OpenAI, the company behind ChatGPT, trained its AI on billions of words from the Internet – books, articles, documents, essays. All kinds of information available online was fed to the AI, without the authors' consent. If you've ever posted anything online, whether a blog post or a simple comment under an article, it's very likely that ChatGPT used that content in its training.
Anything you ask ChatGPT can help it learn more about you: your interests, your beliefs, your best and worst sides, depending on how far you take the conversation. For example, we've learned that Microsoft staff read users' conversations with the Bing chatbot in order to respond to “inappropriate behavior”. As a result, some of the sensitive data collected by this AI could be used to identify or locate you. Notably, even Microsoft and Amazon have warned their own employees not to share sensitive data with ChatGPT.
Another concern is the potential for ChatGPT to be used for nefarious purposes. As an AI model that generates natural-language responses, ChatGPT is already being used to create fake news, generate spam messages, and even impersonate individuals in online interactions. And although the model was trained on a large corpus of data, its responses can contain blatant errors and biases, or perpetuate harmful stereotypes. After all, the Internet is full of propaganda, lobbying and opinion pieces.
In addition, individuals have no way to know whether ChatGPT is using their information, nor to have it removed from the training data. This raises the question of whether ChatGPT complies with domestic privacy regulations such as the European GDPR.
Finally, you should know that, according to the company's privacy policy, OpenAI collects users' IP addresses, browser types and settings, as well as data about users' interactions with the site, including the type of content viewed, the features used and the actions taken. It even collects data about browsing patterns and the websites you visit. OpenAI also states that it may share the information collected with third parties.
So, yes, ChatGPT is an amazing piece of software that is useful in many situations and will probably become even more so in the future. But keep in mind that, like most Internet services, it collects a lot of data, and even if it claims not to reuse or store it, doubts can persist. As a general rule, avoid entrusting anything personal to an AI. ChatGPT is a tool; it is neither your confidant nor your friend.