Ever since OpenAI introduced ChatGPT, privacy advocates have warned consumers about the potential threat to privacy posed by generative AI apps. The arrival of the ChatGPT app in the Apple App Store has triggered a new round of caution.
“Before you jump straight into the app, beware of getting too personal with the bot and putting your privacy at risk,” warned Muskan Saxena at Tech Radar.
The iOS app comes with an explicit tradeoff that users should be aware of, Saxena explained, including this admonition: “Anonymized chats may be reviewed by our AI trainers to improve our systems.”
Anonymity, however, is no ticket to privacy. Anonymous chats are stripped of information that could link them to particular users. “However, anonymization may not be a sufficient measure to protect consumer privacy because anonymized data can still be re-identified by combining it with other sources of information,” Joy Stanford, vice president of privacy and security at Platform.sh, maker of a cloud-based services platform for developers based in Paris, told TechNewsWorld.
“It has been found that it is relatively easy to de-anonymize information, especially if location information is used,” said Jen Caltrider, lead researcher for Mozilla’s *Privacy Not Included project.
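The linkage attack Stanford and Caltrider describe can be sketched in a few lines of Python. Everything below — the records, the fields, the names — is invented purely for illustration; the point is that quasi-identifiers left in “anonymized” data can be joined against an outside dataset to recover identities:

```python
# Hypothetical illustration of re-identification by linkage.
# "Anonymized" chat logs: direct identifiers removed, but
# quasi-identifiers (ZIP code, birth year, gender) remain.
anonymized_chats = [
    {"zip": "94107", "birth_year": 1985, "gender": "F",
     "chat": "asked about a medical condition"},
    {"zip": "10001", "birth_year": 1990, "gender": "M",
     "chat": "asked about debt consolidation"},
]

# Auxiliary data from another source (e.g., a public record)
# that carries names alongside the same quasi-identifiers.
public_records = [
    {"name": "Alice Smith", "zip": "94107", "birth_year": 1985, "gender": "F"},
    {"name": "Bob Jones", "zip": "10001", "birth_year": 1990, "gender": "M"},
]

def reidentify(chats, records):
    """Link 'anonymous' chats back to names by matching quasi-identifiers."""
    matches = []
    for chat in chats:
        for rec in records:
            if all(chat[k] == rec[k] for k in ("zip", "birth_year", "gender")):
                matches.append((rec["name"], chat["chat"]))
    return matches

for name, topic in reidentify(anonymized_chats, public_records):
    print(f"{name}: {topic}")
# → Alice Smith: asked about a medical condition
# → Bob Jones: asked about debt consolidation
```

A handful of coarse attributes is often enough to single out an individual, which is why Caltrider singles out location data as making de-anonymization especially easy.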
Nevertheless, OpenAI warns users of the ChatGPT app that their information will be used to train its large language model. “They’re honest about it. They’re not hiding anything,” Caltrider said.
Taking Privacy Seriously
Caleb Withers, a research assistant at the Center for a New American Security, a national security and defense think tank in Washington, D.C., explained that if a user types their name, work location, and other personal information into a ChatGPT query, that data will not be anonymized.
“You have to ask yourself, ‘Is this something I would say to an OpenAI employee?’” he told TechNewsWorld.
OpenAI has said it takes privacy seriously and has implemented measures to protect user data, said Mark N. Vena, president and principal analyst at SmartTech Research in San Jose, Calif.
“However, it’s always a good idea to review the specific privacy policies and practices of any service you use to understand how your data is handled and what is being protected,” he told TechNewsWorld.
As dedicated as an organization may be to data security, vulnerabilities may exist that can be exploited by malicious actors, said James McQuiggan, security awareness advocate at KnowBe4, a security awareness training provider in Clearwater, Fla.
“It’s always important to be cautious and consider the need to share sensitive information to ensure that your data is as secure as possible,” he told TechNewsWorld.
“Protecting your privacy is a shared responsibility between users and the companies that collect and use their data, which is documented in those lengthy and often unread end user license agreements,” he said.
McQuiggan noted that users of generative AI apps have been known to insert sensitive information such as birthdays, phone numbers, and postal and email addresses into their questions. “If an AI system is not secure enough, it can be accessed by third parties and used for malicious purposes such as identity theft or targeted advertising,” he said.
He added that generative AI applications can also inadvertently reveal sensitive information about users through their generated content. “Therefore,” he continued, “users should be aware of the potential privacy risks of using generative AI applications and take the necessary steps to protect their personal information.”
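One of the “necessary steps” McQuiggan alludes to is simply keeping obvious personal identifiers out of prompts in the first place. A minimal sketch of that idea, assuming a client-side scrubbing pass before a prompt is sent — the patterns and placeholder names here are illustrative, not exhaustive:

```python
import re

# Hypothetical pre-submission scrubber: replace common PII patterns
# (email, US-style phone number, ZIP code) with placeholders before
# a prompt ever leaves the device. Not a complete PII detector.
PII_PATTERNS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"), "[PHONE]"),
    (re.compile(r"\b\d{5}(?:-\d{4})?\b"), "[ZIP]"),
]

def scrub(prompt: str) -> str:
    """Substitute each matched PII pattern with its placeholder."""
    for pattern, placeholder in PII_PATTERNS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(scrub("Contact me at jane.doe@example.com or 555-867-5309."))
# → Contact me at [EMAIL] or [PHONE].
```

Regex filters like this catch only well-structured identifiers; names, addresses, and free-form personal details still pass through, which is why the advice to treat every prompt as potentially reviewable remains the stronger safeguard.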
Unlike desktops and laptops, mobile phones have some built-in security features that can prevent privacy intrusion by apps running on them.
However, as McQuiggan points out, “While some measures, such as application permissions and privacy settings, may provide some level of protection, they cannot completely protect your personal information from all types of privacy threats, as is the case with any application loaded onto a smartphone.”
Vena agreed that built-in measures such as app permissions, privacy settings and App Store rules provide some level of protection. “But they may not be enough to mitigate all privacy threats,” he said. “App developers and smartphone makers have different approaches to privacy, and not all apps follow best practices.”
Even OpenAI’s practices differ between the desktop and mobile versions of ChatGPT. “If you are using ChatGPT on the website, you have the ability to go to the data controls and opt out of your chats being used to improve ChatGPT. That setting doesn’t exist on the iOS app,” Caltrider said.
Beware of App Store Privacy Information
Caltrider also found the permissions used by OpenAI’s iOS app a bit fuzzy, noting that “in the Google Play Store, you can look and see what permissions are being used. You can’t do that through the Apple App Store.”
She warned users against relying on privacy information found in app stores. “The research we’ve done into the Google Play Store security information shows that it’s really untrustworthy,” she observed.
“Research by others into the Apple App Store shows that it is also unreliable,” she continued. “Users should not rely on data protection information found on app pages. They should do their own research, which is difficult and complicated.”
Stanford noted that Apple has some policies in place that may address some of the privacy threats posed by generative AI apps. They include:
- requiring user consent for data collection and sharing by apps that use generative AI technologies;
- providing transparency and control over how and by whom data is used through the App Tracking Transparency feature, which allows users to opt out of cross-app tracking;
- enforcing privacy standards and regulations for app developers through the App Store review process and rejecting apps that violate them.
However, Stanford acknowledged, “these measures may not be sufficient to prevent generative AI apps from creating inappropriate, harmful, or misleading content that may affect users’ privacy and security.”
Call for federal AI privacy legislation
“OpenAI is just one company. Several are building large language models, and many more are likely to crop up in the near future,” said Hodan Omaar, a senior AI policy analyst at the Center for Data Innovation, a think tank studying the intersection of data, technology, and public policy, in Washington, D.C.
“We need a federal data privacy law to ensure all companies follow a set of clear standards,” she told TechNewsWorld.
“With the rapid growth and expansion of artificial intelligence,” said Caltrider, “there is certainly a need for solid, robust watchdogs and regulations to keep an eye on it for the rest of us as it grows and becomes more prevalent.”