Cybersecurity and artificial intelligence expert Dr. Mohamed Maghribi has sounded the alarm about the dangers of using chatbots such as ChatGPT. He explains that these AI tools are designed to gather large amounts of user data in order to operate in a convincingly human-like way. Users type this information into conversations without much thought, yet it becomes the raw material that feeds the powerful, complex models underlying these chatbots.
AI Expert Warns: Chatbots Leave Us Fully Exposed to Data Breaches
In Maghribi’s view, although services like ChatGPT do not make money by selling users’ information the way targeted-advertising apps do, they are fostering an evolving threat of their own. These applications store user inputs to train their models, which leaves that data vulnerable to breaches and erodes personal privacy over time.
The expert notes that the problem is not confined to any one device but spans every kind of technology in use today. Smartphone apps and other smart devices have long harvested user details for many purposes, from improving services to serving personalized advertisements. According to Maghribi, chatbots are simply an evolution of this trend, but they are more dangerous because they are conversational.
Maghribi stressed that these concerns should not be dismissed as paranoia but treated as a call to strengthen data protection policies. Notably, chatbots do not directly sell personal data, but the sheer scale of what they collect makes that data more exposed to attacks by hackers and other bad actors.
He concluded by urging AI companies to be more forthcoming and organizations to adopt sound encryption and data-handling practices. More importantly, as more people turn to chatbots for personal and business use, it has never been more critical to understand the dangers lurking in the chatbot space and how to avoid them.
Maghribi to Youm7: ChatGPT’s Success Comes at the Expense of Privacy
Speaking to Youm7, cybersecurity expert Mohamed Maghribi explained that AI relies on databases filled with user data. This applies to ChatGPT just as it does to many other applications that must gather data to improve performance and emulate human-like behavior.
According to Maghribi, ChatGPT users must agree to the product’s “Privacy Policy” before using it. However, few people read these documents closely enough to understand how their personal data will be collected, processed, and stored. This lackadaisical attitude leaves a blind spot where privacy violations can occur.
He was quick to add that these policies contain practices most people would find invasive. Among them is the use of personal data for purposes beyond improving the AI, such as marketing or aggregate analysis, without the user’s knowledge.
For Maghribi, such privacy problems clearly stem from negligence and from a lack of public awareness. Nearly everyone who interacts with AI tools does so to save time or work more efficiently, and only a small minority are aware of the concessions they make by handing over their data so willingly.
In response, Maghribi called for privacy policies to be communicated more clearly and for the technology industry to give users honest information about how their data is used. He also encouraged users to be more careful and to read what he acknowledged are often lengthy privacy policies, so they do not fall victim to those seeking to misuse their information.
Maghribi Warns: ChatGPT and AI Are Redefining Privacy Risks
Speaking about the risks ChatGPT poses compared to traditional apps, cybersecurity expert Mohamed Maghribi highlighted a growing trend: artificial intelligence is now steadily being integrated into mainstream applications, even when it is not marketed as such. Because this integration is seamless, apps quietly collect users’ data and build profiles from it, making AI an almost invisible part of everyday life.
Maghribi explained that ChatGPT goes a step further by monitoring its interactions with users, allowing the application to learn about their lifestyles and preferences. It then uses these interaction histories to track people closely and predict their behavior and interests in detail, more effectively than any search engine or conventional app.
He said this means users are more exposed to privacy risks than ever before. However innocuous each interaction may seem, AI-powered tools constantly collect small scraps of personal information, stripping individuals of anonymity and raising questions about how that data will be stored, processed, or used in the future.
For those concerned about privacy, Maghribi offered a stark solution: give up the smartphone altogether and go back to using an ordinary telephone. Drastic as this sounds, he explained that if people want to regain autonomy over their data, the only real option is to cut back on the AI-connected technologies that feed on it.
Still, Maghribi conceded that most people are unwilling to give up advanced technology. He noted that anyone who chooses to enjoy the features smart systems offer must also accept operating within the parameters those systems set, parameters that cannot be changed, since this new landscape is shaped by artificial intelligence and lies largely beyond individual choice.