AI-powered chatbots such as ChatGPT are increasingly used to carry out everyday tasks and answer questions.
However, the privacy and security of personal information shared on these platforms have become a global concern, especially among tech enthusiasts.
Several organisations have been accused of using sensitive and confidential internet data to train their AI chatbots.
Artificial intelligence experts believe that sharing sensitive organisational information, political views, and personal preferences with chatbots like ChatGPT is unwise, as it could lead to problems in the future.
Oxford University’s Professor Mike Wooldridge advises against sharing personal information with the chatbot, noting that ChatGPT is trained on data supplied by its users.
He adds: “You should assume that anything you type into ChatGPT is just going to be fed directly into future versions of ChatGPT.”
Wooldridge, a UK-based artificial intelligence researcher, warns that the personal information users share with ChatGPT could expose them to risks down the line.
According to Professor Wooldridge, once your data is in the system, it is almost impossible to get it back out because of how AI models work.
Earlier this year, OpenAI, the firm behind ChatGPT, had to fix a bug that allowed users to see portions of other users’ chat histories.
However, OpenAI says it has since introduced a setting that addresses some of these concerns: when users turn off ChatGPT’s Chat History feature, their conversations are not saved or used to train the AI.
