OpenAI CTO Mira Murati waded into the debate over government oversight of artificial intelligence on Sunday when she acknowledged in an interview with Time magazine that the technology needs to be regulated.
“It’s important for OpenAI and companies like ours to bring this into the public consciousness in a controlled and responsible way,” Murati told TIME. “But we are a small group of people, and we need a ton more input in this system, and a lot more input that goes beyond the technologies — certainly the regulators and the governments and everybody else.”
Asked whether government involvement at this stage of AI’s development could hinder innovation, she replied: “It’s never too early. Given the impact of these technologies, it’s very important for everyone to be involved.”
Greg Sterling, co-founder of Near Media, a news, commentary, and analysis website, agreed, saying that since the market provides incentives for abuse, some regulation is probably necessary.
“Deliberately designed disincentives against unethical behavior can reduce the potential misuse of AI,” Sterling told TechNewsWorld, “but regulation can also be poorly designed and fail to prevent any of that.”
He acknowledged, though, that regulating AI too early or too heavily could hurt innovation and limit its benefits.
“Governments should convene AI experts and industry leaders to jointly draw up a framework for possible future regulation. This should probably also happen internationally,” Sterling said.
Consider Existing Laws
Artificial intelligence, like many technologies and tools, can be used for a wide variety of purposes, explained Jennifer Huddleston, a technology policy research fellow at the Cato Institute, a Washington, DC think tank.
Many of these uses are positive, and consumers are already experiencing beneficial uses of AI, such as real-time translation and better traffic navigation, she continued. “Before seeking new regulations, policymakers should consider how existing laws around issues such as discrimination may already address concerns,” Huddleston told TechNewsWorld.
Artificial intelligence should be regulated, but how it is already regulated needs to be considered as well, added Mason Kortz, a clinical instructor at the Cyberlaw Clinic at Harvard Law School in Cambridge, Mass.
“We have a lot of general rules that make things legal or illegal, regardless of whether they’re done by humans or AI,” Kortz told TechNewsWorld.
“We need to look at the ways in which existing laws are already sufficient to regulate AI, and the ways in which they are not, where we need to innovate and be creative,” he said.
For example, he said, there is no general rule on liability for autonomous vehicles. If an autonomous vehicle causes an accident, however, there are still plenty of areas of law to consider, such as negligence law and product liability law. These, he explained, are potential ways to regulate this use of AI.
Need a Light Touch
However, Kortz acknowledged that many of the current rules came into play after the fact. “So, in a way, they’re like second best,” he said. “But they are an important measure when we develop the rules.”
“We should try to be proactive in regulation where we can,” he said. “After harm is done, there is recourse through the legal system. It is better not to be harmed.”
However, Mark N. Vena, president and principal analyst at SmartTech Research in San Jose, Calif., argues that heavy regulation could stifle the booming AI industry.
“At this early stage, I’m not a big fan of government regulation of AI,” Vena told TechNewsWorld. “AI can have a lot of benefits, and government interference can eliminate them.”
Such a stifling effect on the internet was averted in the 1990s, he maintained, through “light touch” regulation such as Section 230 of the Communications Decency Act, which granted online platforms immunity from liability for third-party content displayed on their websites.
However, Kortz believes the government can appropriately put the brakes on something without shutting down an industry.
“People criticize the FDA, say it’s prone to regulatory capture, that it’s run by drug companies, but we’re still in a better world than pre-FDA, when anyone could sell anything and put anything on a label,” he said.
“Is there a good solution that captures only the good aspects of AI and blocks all the bad ones? Probably not,” Kortz continued, “but some structure is better than no structure.”
“It’s not going to do anyone any good to just let the good AI and the bad AI fight it out,” he said. “We can’t guarantee that the good AI is going to win that battle, and the collateral damage could be quite significant.”
Regulation Without Throttling
Daniel Castro, vice president of the Information Technology and Innovation Foundation, a research and public policy organization in Washington, DC, said there are some things policymakers can do to regulate AI without stifling innovation.
“One is to focus on specific use cases,” Castro told TechNewsWorld. “For example, regulating self-driving cars should look different from regulating AI used to generate music.”
“Another is to focus on behavior,” he continued. “For example, it is illegal to discriminate when hiring employees or renting apartments – whether a human or an AI system makes that decision should be irrelevant.”
“But policymakers must be careful not to unfairly hold AI to a different standard or apply rules to AI that make no sense,” he said. “For example, some safety requirements in today’s vehicles, such as steering wheels and rearview mirrors, may not make sense for autonomous vehicles without passengers or drivers.”
Vena would like to see a “transparent” approach to regulation.
“I would prefer regulation requiring AI developers and content producers to be completely transparent about the algorithms they are using,” he said. “They could be reviewed by a third-party body made up of academics and some commercial entities.”
“Being transparent around the algorithms and the sources of content that AI tools derive from should encourage balance and reduce abuse,” he stressed.
Plan for the Worst Case
Kortz said that many people believe that technology is neutral.
“I don’t think technology is neutral,” he said. “We have to think about the bad actors. But we also have to think about the poor decisions of the people who create these things and put them out there in front of the world.”
“I would encourage anyone developing AI for a particular use case to think not only about their intended use, but also what the worst possible use for their technology is,” he concluded.