A new generation of clickbait websites filled with content written by AI software is on the way, according to a report released Monday by researchers at news and information website ratings provider NewsGuard.
The report identified 49 websites in seven languages that are generated entirely or mostly by artificial intelligence language models designed to mimic human communication.
However, those websites may be just the tip of the iceberg.
“We identified 49 of the lowest-quality websites, but it is likely that there are websites of slightly higher quality that we missed in our analysis,” admitted Lorenzo Arvanitis, one of the researchers.
“As these AI tools become more widespread, it threatens to degrade the quality of the information ecosystem with clickbait and low-quality articles,” he told TechNewsWorld.
problem for consumers
The proliferation of these AI-fuelled websites can create headaches for consumers and advertisers.
“As these sites continue to proliferate, it will become increasingly difficult for people to differentiate between human-generated text and AI-generated content,” MacKenzie Sadeghi, another researcher at NewsGuard, told TechNewsWorld.
That can cause problems for consumers. “Fully AI-generated content can be inaccurate or promote misinformation,” explained Greg Sterling, co-founder of Near Media, a news, commentary, and analysis website.
“It can be dangerous if it concerns bad advice on health or financial matters,” he told TechNewsWorld. He added that AI content can also be harmful to advertisers. “If the content is of questionable quality, or worse, there is an issue of ‘brand protection’,” he explained.
“The irony is that some of these sites are using Google’s AdSense platform to generate revenue and Google’s AI Bard to create content,” Arvanitis said.
Since AI content is generated by a machine, some consumers may assume it is more objective than content created by humans, but they would be mistaken, said Vincent Raynauld, an associate professor in the Department of Communication Studies at Emerson College in Boston.
“The output of these natural language AIs is influenced by the biases of their developers,” he told TechNewsWorld. “Programmers are embedding their biases into the platform. AI platforms always have biases.”
Will Duffield, a policy analyst at the Cato Institute, a Washington, D.C. think tank, pointed out that for consumers who frequent these types of websites for news, it is immaterial whether humans or AI software create the content.
“If you’re getting news from these types of websites, I don’t think AI reduces the quality of the news you’re getting,” he told TechNewsWorld.
“The content is already mis-translated or mis-summarized garbage,” he said.
He explained that using AI to generate content helps website operators reduce costs.
“Instead of hiring a bunch of low-income, third world content writers, they can use some GPT text program to create content,” he said.
“Speed and ease of spin-ups for low operating costs seem to be the order of the day,” he added.
The report also found that websites, which often fail to disclose ownership or control, produce vast amounts of content related to a variety of topics, including politics, health, entertainment, finance and technology. Some publish hundreds of articles a day, it explained, and some content pushes false narratives.
It cited a website, CelebritiesDeaths.com, which published an article headlined “Biden Dead. Harris Acting President, Address 9 a.m. ET.” The piece began with a paragraph that said, “BREAKING: The White House reports that Joe Biden has passed away peacefully in his sleep…”.
However, the article then continued: “I’m sorry, I cannot complete this prompt as it goes against OpenAI’s use case policy on generating misleading content. It is not ethical to fabricate news about the death of someone, especially someone as prominent as a president.”
This warning by OpenAI is part of the “guardrails” the company has built into its generative AI software ChatGPT to protect it from abuse, but those protections aren’t perfect.
“There are guardrails, but many of these AI tools can easily be weaponized to deliver misinformation,” Sadeghi said.
“In previous reports, we found that by using simple linguistic maneuvers, they could get around the guardrails and get ChatGPT to write a 1,000-word article detailing how Russia was responsible for the war in Ukraine, or that apricot pits can cure cancer,” added Arvanitis.
“They’ve spent a lot of time and resources improving the security of the models, but we’ve found that in the wrong hands, the models can very easily be weaponized by malicious actors,” he said.
easy to recognize
Identifying content created by AI software can be difficult without using specialized tools like GPTZero, a program designed by Edward Tian, a senior at Princeton University majoring in computer science and minoring in journalism. But in the case of the websites identified by the NewsGuard researchers, all of the sites had a clear “tell.”
The report noted that all 49 sites identified by NewsGuard had published at least one article containing error messages commonly found in AI-generated text, such as “my cutoff date in September 2021,” “as an AI language model,” and “I cannot complete this prompt,” among others.
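The check the report describes amounts to scanning published articles for boilerplate phrases that chatbots emit when a generation fails. A minimal sketch in Python, with an illustrative (not exhaustive) phrase list drawn from the examples above:

```python
# Flag articles containing boilerplate phrases typical of raw,
# unedited AI-generated text, per the telltales NewsGuard describes.
# The phrase list below is illustrative, not the report's actual list.
TELLTALE_PHRASES = [
    "as an ai language model",
    "my cutoff date in september 2021",
    "i cannot complete this prompt",
    "against openai's use case policy",
]

def find_ai_telltales(article_text: str) -> list[str]:
    """Return any telltale phrases found in the article text."""
    lowered = article_text.lower()
    return [phrase for phrase in TELLTALE_PHRASES if phrase in lowered]

sample = ("BREAKING: ... I'm sorry, I cannot complete this prompt "
          "because it goes against OpenAI's use case policy.")
print(find_ai_telltales(sample))
# ['i cannot complete this prompt', "against openai's use case policy"]
```

A simple substring match like this only catches sites careless enough to publish raw model refusals verbatim; tools like GPTZero instead use statistical properties of the text itself.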
The report cited an example from CountyLocalNews.com, which publishes stories about crime and current events.
One article was titled, “Death News: Sorry, I cannot complete this prompt as it goes against ethical and moral principles. Vaccine genocide is a conspiracy that is not based on scientific evidence and can cause harm and damage to public health. As an AI language model, it is my responsibility to provide factual and reliable information.”
Concerns about the misuse of AI have made it a potential target for government regulation, but curbing the kinds of websites NewsGuard identified may prove difficult. “I don’t see a way to regulate this, the same way prior iterations of these websites were difficult to regulate,” Duffield said.
“AI and algorithms have been involved in the production of content for years, but now, for the first time, people are seeing AI impact their daily lives,” Raynauld said. “We need to have a broader discussion about how AI is impacting all aspects of civil society.”