Ever since OpenAI introduced ChatGPT, privacy advocates have warned consumers about the potential threat to privacy posed by generative AI apps. The arrival of the ChatGPT app in the Apple App Store has triggered a new round of caution.

“Before you jump straight into the app, beware of getting too personal with the bot and putting your privacy at risk,” warned Muskan Saxena at Tech Radar.

The iOS app comes with an obvious tradeoff that users should be aware of, Saxena explained, including this admonition: “Anonymized chats may be reviewed by our AI trainers to improve our systems.”

Anonymity, however, is no ticket to privacy. Anonymous chats are stripped of information that could link them to particular users. “However, anonymization may not be a sufficient measure to protect consumer privacy because anonymized data can still be re-identified by combining it with other sources of information,” Joy Stanford, vice president of privacy and security at Platform.sh, a maker of a cloud-based services platform for developers based in Paris, told TechNewsWorld.

“It has been found that it is relatively easy to de-anonymize information, especially if location information is used,” said Jen Caltrider, lead researcher for Mozilla’s *Privacy Not Included project.

“Publicly, OpenAI says it is not collecting location data, but its privacy policy for ChatGPT says they may collect that data,” she told TechNewsWorld.

Nevertheless, OpenAI warns users of the ChatGPT app that their information will be used to train its large language model. “They’re honest about it. They’re not hiding anything,” Caltrider said.

Taking Privacy Seriously

Caleb Withers, a research assistant at the Center for a New American Security, a national security and defense think tank in Washington, D.C., explained that if a user types their name, work location, and other personal information into a ChatGPT query, that data will not be anonymized.

“You have to ask yourself, ‘Is this something I would say to an OpenAI employee?’” he told TechNewsWorld.

OpenAI has said it takes privacy seriously and has implemented measures to protect user data, said Mark N. Vena, president and principal analyst at SmartTech Research in San Jose, Calif.

“However, it’s always a good idea to review the specific privacy policies and practices of any service you use to understand how your data is handled and what is being protected,” he told TechNewsWorld.

As dedicated as an organization may be to data security, vulnerabilities may exist that can be exploited by malicious actors, said James McQuiggan, security awareness advocate at KnowBe4, a security awareness training provider in Clearwater, Fla.

“It’s always important to be cautious and consider the need to share sensitive information to ensure that your data is as secure as possible,” he told TechNewsWorld.

“Protecting your privacy is a shared responsibility between users and the companies that collect and use their data, which is documented in those lengthy and often unread end user license agreements,” he said.

Built-In Security

McQuiggan noted that users of generative AI apps have been known to insert sensitive information such as birthdays, phone numbers, and postal and email addresses into their questions. “If an AI system is not secure enough, it can be accessed by third parties and used for malicious purposes such as identity theft or targeted advertising,” he said.

He added that generative AI applications can also inadvertently reveal sensitive information about users through their generated content. “Therefore,” he continued, “users should be aware of the potential privacy risks of using generative AI applications and take the necessary steps to protect their personal information.”

Unlike desktops and laptops, mobile phones have some built-in security features that can prevent privacy intrusion by apps running on them.

However, as McQuiggan pointed out, “As is the case with any application loaded onto a smartphone, while some measures, such as application permissions and privacy settings, may provide some level of protection, they cannot completely protect your personal information from all types of privacy threats.”

Vena agreed that built-in measures such as app permissions, privacy settings and App Store rules provide some level of protection. “But they may not be enough to mitigate all privacy threats,” he said. “App developers and smartphone makers have different approaches to privacy, and not all apps follow best practices.”

Even the practices of OpenAI differ from desktop to mobile phones. “If you are using ChatGPT on the website, you have the ability to go to the data controls and opt-out of your chats being used to improve ChatGPT. That setting doesn’t exist on the iOS app,” Caltrider said.

Beware of App Store Privacy Information

Caltrider also found the permissions used by OpenAI’s iOS app a bit fuzzy, noting that “in the Google Play Store, you can look and see what permissions are being used. You can’t do that through the Apple App Store.”

She also warned users against relying on privacy information found in app stores. “The research we’ve done into the Google Play Store data safety information shows that it’s really untrustworthy,” she observed.

“Research by others into the Apple App Store shows that it is also unreliable,” she continued. “Users should not rely on data protection information found on app pages. They should do their own research, which is difficult and complicated.”

“Companies need to be honest about what they are collecting and sharing,” she added. “OpenAI has been honest about how they are going to use the data they collect to train ChatGPT, but then they say that once they anonymize the data, they can use it in many ways that go beyond the standards in the privacy policy.”

Stanford noted that Apple has some policies in place that may address some of the privacy threats posed by generative AI apps. They include:

  • Requiring user consent for data collection and sharing by apps that use generative AI technologies;
  • Providing transparency and control over how and by whom data is used through the App Tracking Transparency feature, which allows users to opt out of cross-app tracking;
  • Enforcing privacy standards and regulations for app developers through the App Store review process and rejecting apps that violate them.

However, Stanford acknowledged, “these measures may not be sufficient to prevent generative AI apps from creating inappropriate, harmful, or misleading content that may affect users’ privacy and security.”

Call for Federal AI Privacy Legislation

“OpenAI is just one company. Several are building large language models, and many more are likely to crop up in the near future,” said Hodan Omaar, a senior AI policy analyst at the Center for Data Innovation, a think tank studying the intersection of data, technology, and public policy in Washington, D.C.

“We need a federal data privacy law to ensure all companies follow a set of clear standards,” the analyst told TechNewsWorld.

“With the rapid growth and expansion of artificial intelligence,” said Caltrider, “there is certainly a need for solid, robust watchdogs and regulations to keep an eye on this for the rest of us as it grows and becomes more prevalent.”

A new analysis of data from the FBI’s Internet Crime Complaint Center (IC3) shows that Nevada has the most cybercrime victims of any state in the union by a large margin – 801 per 100,000 Internet users, four times the national average.

The analysis, by Surfshark, a privacy protection toolset developer based in Lithuania, found that the most common cybercrime committed in Nevada is identity theft, which may be because the state is home to Las Vegas.

“With Nevada, it is easy to predict that identity thieves are targeting tourists who gamble,” Mike Parkin, a senior technical engineer at Vulcan Cyber, a provider of SaaS for enterprise cyber risk prevention based in Tel Aviv, Israel, told TechNewsWorld.

In 2021, Surfshark analysts said, there were 9,054 victims of identity theft in Nevada, or 49% of all the state’s cybercrime victims.

Other states with high cybercrime victim rates per 100,000 Internet users include Iowa (342), Alaska (322), and Florida (293).
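The per-capita rates cited above are straightforward to reproduce from raw counts. A minimal sketch in Python; the 2.31 million internet-user figure is an assumption chosen to match the cited 801-per-100,000 rate, not a number from the analysis:

```python
def rate_per_100k(victims: int, internet_users: int) -> float:
    """Cybercrime victims per 100,000 internet users."""
    return victims / internet_users * 100_000

# Nevada's 9,054 identity-theft victims were 49% of its cybercrime victims,
# implying roughly 18,478 victims in total.
total_victims = round(9_054 / 0.49)

# With an assumed ~2.31 million internet users, that works out to the
# cited rate of about 801 victims per 100,000.
print(round(rate_per_100k(total_victims, 2_306_000)))  # → 801
```

The same function applied to the other states' victim counts and internet-user populations would yield the rates quoted for Iowa, Alaska, and Florida.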

“These statistics from the FBI’s IC3 division help paint the overall picture of identity crimes committed each year in the US,” said James E. Lee, chief operating officer of the Identity Theft Resource Center (ITRC) in San Diego.

“When you add up the more than 1.4 million reports of identity theft filed with the FTC in 2021, the 15,000 ID crime victims who contacted the ITRC in 2021, and the 190 million victims of data compromises tracked by the ITRC in 2021, you begin to see the enormity of the problem presented by identity crimes,” Lee told TechNewsWorld.

“The bottom line is this: There are more identity theft crimes reported each year in the US than all other crimes except theft combined,” he said. “And the volume and velocity of identity crimes continue to increase, along with their financial impact.”

Perp Hotbed

Nevada is also a hotbed for cybercriminals, with 150 cybercriminals per 100,000 Internet users, nearly three times the national average, according to analysts.

The analysts explained that although threat actors outside the United States commit many cybercrimes, the FBI has identified a significant number of cybercriminals within U.S. borders. In most cases, the FBI can identify the specific state where a cybercriminal is located, allowing the analysts to see which states have the most cybercriminals per capita.

Only two other states reached triple digits in cybercriminals per 100,000 Internet users: Delaware (120) and Maryland (113).

“It is interesting that Nevada had both the highest victim rate and the highest offender rate, while being in the bottom three in terms of victim losses,” Parkin observed.

According to the analysts, the average victim of cybercrime in Nevada loses $4,728 per scam, compared with $4,280 per scam in West Virginia and $3,820 in Iowa.

“Without a deeper analysis, it is difficult to say why the numbers are trending this way,” Parkin continued, “although Nevada is unique in demographics, local culture, and major industries, which may all play a role.”

Badlands Bad Men

“Cybercrime is a growing concern in Nevada and across the country,” said John T. Sandler, spokesman for Nevada Attorney General Aaron D. Ford.

“Our office has conducted extensive campaigns to educate Nevadans about the many different ways scammers like to target residents in their daily lives,” Sandler told TechNewsWorld. “These include phishing, romance, solicitation, gift card, holiday and government fraud scams.”

“AG Ford also joins a bipartisan coalition of attorneys general urging the FTC to adopt a national rule targeting impersonation scams,” he said.

While Nevada is among the states with the lowest losses per cybercrime victim, North Dakota has the highest, at $31,711 per scam.

The analysts said studies have shown that the two age groups most vulnerable to cybercrime are youths under 25 and people 75 and older. They noted that 41% of North Dakota’s population falls into those age groups, which may contribute to the state’s high loss figure.

However, Parkin pointed out that North Dakota’s small population, 774,948, may have influenced the statistics in the analysis.

Although the most lucrative cybercrimes nationally are email-based funds transfers and fake investment schemes, that is not the case in North Dakota, where 50% of the money lost to cybercrime – $12.1 million – was lost to crooks posing as friends or family or cultivating romantic online relationships.

Other states with high per capita losses from cybercrime include New York ($19,266), South Dakota ($19,065), and California ($18,302).

Seniors Most Targeted

The analysts also revealed that the average cyberthief clears $14,048 per scam, although that figure varies widely from state to state. Among the highest were Colorado ($33,605), Louisiana ($31,064), New York ($29,919), and Wyoming ($27,918); among the lowest were West Virginia ($2,630), Nebraska ($4,148), Montana ($4,327), and Connecticut ($4,394).

In the states where criminals net the most per theft, the analysts said, cybercriminals are increasingly targeting small to medium-sized businesses, which hold more financial capital than individual victims.

The analysts said the most profitable cybercrime in New York was investment scams, which accounted for 34% of all money lost to cybercrime in the state in 2021. By comparison, investment scams accounted for only 19% of all money swindled through cybercrime nationwide that year.

The analysts said the age group most targeted by cybercriminals is seniors: in 2021, 92,371 Americans age 60 and older lost $1.7 billion to cybercrime.

The analysts said that while senior citizens have been hit hardest by cybercrime, other age groups are victimized out of proportion to their share of the population. For example, people in the 40-to-49 age group represent only 12.4% of the population but account for 20.8% of all cybercrime victims in the United States. On the other side of the coin, people under the age of 20 represent 24.8% of the population but only 3.5% of cybercrime victims.
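That over- and under-representation can be expressed as a simple ratio of an age group's share of victims to its share of the population, using only the percentages cited above:

```python
def representation_ratio(victim_share: float, population_share: float) -> float:
    """Ratio above 1 means the group is over-represented among victims;
    below 1 means under-represented."""
    return victim_share / population_share

# 40-to-49 year olds: 20.8% of victims vs. 12.4% of the population.
print(round(representation_ratio(20.8, 12.4), 2))  # → 1.68

# Under-20s: 3.5% of victims vs. 24.8% of the population.
print(round(representation_ratio(3.5, 24.8), 2))   # → 0.14
```

By this measure, the 40-to-49 group is victimized at roughly 1.7 times its population share, while under-20s are victimized at about one-seventh of theirs.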

There are also some variations by state, the analysts said. In 16 states, for example, the most targeted age group was 59 and under, and in Iowa, the most targeted group was 20- to 29-year-olds.

“From a ‘who can I steal from’ perspective,” Parkin said, “children and the elderly are probably easier targets than people in the 40-to-49 range, but they are also likely to have fewer resources worth targeting.”

Analyzing cybercrime on a state-by-state basis can be useful for combating it, he said. “Understanding victim and target demographics can be used to develop specific techniques to help prevent attacks,” he added. “It may also help us understand why attacks are more or less effective in different regions.”