Vint Cerf, known as the father of the Internet, raised some eyebrows on Monday when he urged investors to be cautious about putting money into businesses built around conversational chatbots.

Bots still make a lot of mistakes, stressed Cerf, who is a vice president at Google, which has an AI chatbot called Bard in development.

When he asked ChatGPT, a bot developed by OpenAI, to write his bio, it got a lot of things wrong, he told the audience at the TechSurge Deep Tech Summit, which was organized by venture capital firm Celesta and the Computer History Museum in Mountain View, California.

“It’s like a salad shooter. It mixes [facts] together because it doesn’t know better,” Cerf said, according to SiliconANGLE.

He advised investors not to endorse a technology because it sounds cool or is generating “buzz”.

Cerf also recommended that they keep ethical considerations in mind when investing in AI.

“Engineers like me should be responsible for trying to find a way to tame some of these technologies, so that they are less likely to cause trouble,” he said, according to SiliconANGLE.

human oversight needed

As Cerf pointed out, there are pitfalls awaiting businesses rushing to join the AI race.

Greg Sterling, co-founder of Near Media, a news, commentary and analysis website, said that inaccuracy and misinformation, bias, and evasive results are all potential risks businesses face when using AI.

“The risk depends on the use cases,” Sterling told TechNewsWorld. “Digital agencies that rely heavily on ChatGPT or other AI tools to create content or complete work for clients can produce results that are sub-optimal or harmful to the client.”

However, he stressed that checks and balances and strong human oversight can mitigate those risks.


Mark N. Vena, president and principal analyst at SmartTech Research in San Jose, California, cautioned that small businesses that don’t have expertise in the technology need to be careful before taking the AI plunge.

“At the very least, any company that incorporates AI into the way it does business needs to understand its implications,” Vena told TechNewsWorld.

“Privacy – particularly at the client level – is clearly a huge area of concern,” he continued. “The terms and conditions for use need to be extremely clear, as well as the liability should the AI capability produce material that opens the business up to legal exposure.”

ethics need exploring

While Cerf wants AI users and developers to keep ethics in mind when bringing AI products to market, it can be a daunting task.

“Most businesses using AI are focused on efficiency and time or cost savings,” Sterling said. “For most of them, ethics will be a secondary concern or even a non-consideration.”

Vena said that some ethical issues need to be addressed before AI can be widely adopted. He pointed to the education sector as an example.

“Is it ethical for a student to submit a paper extracted entirely from an AI tool?” he asked. “Even if the material is not plagiarism in the strict sense, since it may be ‘original,’ I believe that most schools – especially at the high school and college levels – would push back on that.”

“I’m not sure news media outlets would be thrilled about the use of ChatGPT by journalists reporting on real-time events that often rely on abstract judgments that an AI tool might struggle with,” he said.

“Ethics must play a strong role,” he continued, “which is why there is a need for an AI code of conduct that businesses and even the media should be required to agree to, as well as compliance conditions that must form part of the terms and conditions when using an AI tool.”

unintended consequences

It’s important for anyone involved in AI to make sure they’re doing what they’re doing responsibly, maintained Ben Kobren, head of communications and public policy at Neeva, an AI-based search engine based in Washington, D.C.

“A lot of the unintended consequences of previous technologies were the result of an economic model that did not align business incentives with the end user,” Kobren told TechNewsWorld. “Companies must choose between serving an advertiser or the end user. Most of the time, the advertiser will win.”


“The free internet allowed for incredible innovation, but it came at a cost,” he continued. “That cost was people’s privacy, people’s time, people’s attention.”

“The same is going to happen with AI,” he said. “Will AI be implemented in a business model that aligns with users or advertisers?”

Cerf’s pleas for caution might be aimed at slowing the entry of AI products into the market, but that seems unlikely to happen.

“ChatGPT moved the industry faster than anyone expected,” Kobren said.

“The race is on, and there’s no going back,” Sterling said.

“There are risks and benefits to getting these products to market quickly,” he said. “But market pressure and financial incentives to act now will outweigh moral restraint. The biggest companies talk about ‘responsible AI,’ but they are forging ahead regardless.”

transformative technology

In his remarks at the TechSurge Summit, Cerf also reminded investors that not everyone using AI technologies will use them for their intended purposes. He reportedly said, “They will try to do what is to their advantage and not yours.”

“Governments, NGOs and industry need to work together to develop regulations and standards to prevent abuse of these products,” Sterling said.

“The challenge and the problem is that market and competitive dynamics move faster and are far more powerful than policy and government processes,” he continued. “But regulation is coming. It’s just a question of when and what it looks like.”


Hodan Omaar, a senior AI policy analyst at the Center for Data Innovation, a think tank that studies the intersection of data, technology and public policy in Washington, D.C., remarked that policymakers have been grappling with AI accountability for some time.

“Developers need to be responsible when they build AI systems,” Omaar told TechNewsWorld. “They should ensure that such systems are trained on representative datasets.”

However, she added that it will be the operators of AI systems who will make the most important decisions about how AI systems affect society.

“It’s clear that AI is here to stay,” Kobren said. “It’s going to change many aspects of our lives, especially how we access, consume and interact with information on the Internet.”

“This is the most transformative and exciting technology we’ve seen since the iPhone,” he concluded.