Some technology insiders want to pause the continued development of artificial intelligence systems before machine learning ventures down paths its human creators never intended. Other computer experts argue that missteps are inevitable and that development must continue.
More than 1,000 tech and AI veterans recently signed a petition calling on the computing industry to adopt a six-month moratorium on the training of AI systems more powerful than GPT-4. Proponents want AI developers to create safety standards and mitigate the potential risks posed by the riskiest AI technologies.
The nonprofit Future of Life Institute organized the petition, which calls for a near-immediate, public, and verifiable pause by all key developers. Otherwise, governments should step in and institute a moratorium. As of this week, the institute says it has collected more than 50,000 signatures that are going through its vetting process.
The letter is not an attempt to halt all AI development in general. Instead, its supporters want developers to back away from a dangerous race toward “ever-larger unpredictable black-box models with emergent capabilities.” During the timeout, AI labs and independent experts should jointly develop and implement a set of shared safety protocols for advanced AI design and development.
“AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal,” the letter states.
Support Is Not Universal
It is doubtful that anyone will stop anything, suggested John Bambenek, principal threat hunter at Netenrich, a security and operations analytics SaaS company. Still, he sees a growing awareness that consideration of the ethical implications of AI projects lags far behind the pace of development.
“I think it’s good to reassess what we’re doing and the profound impact it will have, because we’ve already seen some spectacular failures when it comes to the thoughtless deployment of AI/ML,” Bambenek told TechNewsWorld.
Everything we do to stop things in the AI space is probably just noise, said Andrew Barratt, vice president at cybersecurity advisory services firm Coalfire. Coordinating a halt globally is also impossible.
“AI will be a productivity enhancer for the next few generations. The danger would be seeing it replace search engines and then get monetized by advertisers who ‘discreetly’ place their products in the answers. What’s interesting is that there has been a spike in fear since the recent attention paid to ChatGPT,” Barratt told TechNewsWorld.
Rather than pause, Barratt recommends encouraging knowledge workers worldwide to look at how they can best use the various AI tools that are becoming more consumer-friendly to boost productivity. Those who do not will be left behind.
Security and privacy should remain a top concern for any tech company, whether it is AI-focused or not, according to Dave Gerry, CEO of crowdsourced cybersecurity company Bugcrowd. When it comes to AI, ensuring that models have the necessary safeguards, feedback loops, and mechanisms for surfacing safety concerns is important.
“As organizations increasingly adopt AI for all the efficiency, productivity, and democratization-of-data benefits, it is important to ensure that as concerns are identified, there is a reporting mechanism in place to surface them, in the same way a security vulnerability would be identified and reported,” Gerry told TechNewsWorld.
Highlighting Legitimate Concerns
In what may be an all-too-typical response to the question of regulating AI, machine learning expert Anthony Figueroa, co-founder and CTO of results-driven software development company Rootstrap, supports the regulation of artificial intelligence but doubts that a pause in its development will bring about any meaningful change.
Figueroa uses big data and machine learning to help companies create innovative solutions to monetize their services. But he is skeptical that regulators will move at the right pace and understand the implications of what they ought to regulate. He sees the challenge as similar to the one posed by social media two decades ago.
“I think the letter is worthwhile. We are at a tipping point, and we need to start thinking about progress in a way we haven’t before. I just don’t think that holding anything off for six months, a year, two years, or even a decade is feasible,” Figueroa said.
AI-powered everything is suddenly the next big thing. The virtually overnight success of OpenAI’s ChatGPT product has forced the world to take notice of the immense power and potential of AI and ML technologies.
“We don’t yet know the effects of that technology. What are its dangers? We do know some things that can go wrong with this double-edged sword,” he warned.
Does AI Need Regulation?
TechNewsWorld discussed with Anthony Figueroa the issues surrounding the need for developer controls on machine learning and the potential need for government regulation of artificial intelligence.
TechNewsWorld: Within the computing industry, what guidelines and ethics exist to keep development safely on track?
Anthony Figueroa: You need your own set of personal ethics in your head. But even with that, you can have a lot of unwanted consequences. What we are doing with this new technology, ChatGPT for example, is exposing AI to massive amounts of data.
That data comes from public and private sources and all sorts of different things. We are using a technique called deep learning, which is based on studying how our brain works.
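As a rough illustration of what Figueroa means by deep learning, here is a minimal sketch in Python using PyTorch; the layer sizes and random data are invented for illustration, not drawn from any system discussed here.

```python
# Minimal sketch of a "deep" (multi-layer) neural network in PyTorch.
# Layer sizes and data are invented for illustration.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(100, 64),  # input features feed a layer of artificial "neurons"
    nn.ReLU(),
    nn.Linear(64, 32),   # stacking layers is what makes the network "deep"
    nn.ReLU(),
    nn.Linear(32, 1),    # a single output, e.g., a prediction score
)

x = torch.randn(8, 100)                    # a batch of 8 made-up examples
loss = nn.functional.mse_loss(model(x), torch.randn(8, 1))
loss.backward()                            # learning: nudge weights to shrink error
```

Scale the same pattern up by many layers and vast training data and you get systems whose internal reasoning, as Figueroa notes next, becomes hard to follow.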
How does this affect the ethics and use of the guidelines?
Figueroa: Sometimes we don’t even understand how the AI solves a problem in a particular way. We do not understand the reasoning process inside the AI ecosystem. Add to this a concept called interpretability: you should be able to determine how a decision is made. But with AI, decisions are not always interpretable, and results vary.
How are those factors different with AI?
Figueroa: Interpretable AI is somewhat less powerful because you have more restrictions, but then again you have the ethics question.
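To make that trade-off concrete, here is a minimal sketch using scikit-learn (my example, not from the interview): a shallow decision tree whose rules can be printed and audited, next to a boosted ensemble that is usually stronger but far harder to explain.

```python
# Sketch: interpretable model vs. harder-to-explain model (illustrative only).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(max_depth=3).fit(X_tr, y_tr)   # rules you can read
boost = GradientBoostingClassifier().fit(X_tr, y_tr)         # opaque ensemble

print(export_text(tree))                     # the tree's full decision logic
print("tree accuracy: ", tree.score(X_te, y_te))
print("boost accuracy:", boost.score(X_te, y_te))  # typically a bit higher
```

The restricted, auditable model gives up some accuracy; the opaque one performs better but cannot tell you why, which is exactly the dilemma in the medical example that follows.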
For example, consider doctors addressing a cancer case. They have several treatments available. One treatment is fully interpretable and gives the patient a 60% chance of a cure. Then they have a non-explainable treatment that, based on historical data, would have an 80% cure rate, but they don’t really know why.
That combination of drugs, along with the patient’s DNA and other factors, affects the outcome. So which should the patient take? It is a tough decision.
How do you define “intelligence” in the context of AI development?
Figueroa: We can define intelligence as the ability to solve problems. Computers solve problems in a completely different way from people. We solve them with a combination of consciousness and intelligence, which gives us the ability to feel things and solve problems together.
AI solves problems by focusing on outcomes. A typical example is the self-driving car. What if all the possible outcomes are bad?
A self-driving car will choose the least bad of all possible outcomes. If the AI has to choose a navigational maneuver that will either kill the “passenger-driver” or kill two people crossing the road against a red light, you can make the case either way.
You could argue that the pedestrians are at fault, so the AI should make a moral judgment and strike the pedestrians. Or the AI could try to harm the fewest people possible. There is no correct answer.
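Stripped of its ethics, the “least bad outcome” logic reduces to choosing the action with the lowest expected harm. Here is a minimal, purely hypothetical sketch in Python; the maneuvers, probabilities, and harm scores are all invented:

```python
# Hypothetical "least bad outcome" chooser; all numbers are invented.
# Expected harm = probability the outcome occurs * severity if it does.
maneuvers = {
    "swerve_away": 0.9 * 10,  # very likely harms the passenger-driver
    "brake_hard":  0.5 * 6,   # might harm either party, less severely
    "hold_course": 0.8 * 8,   # likely harms the two pedestrians
}

least_bad = min(maneuvers, key=maneuvers.get)
print(least_bad)  # picks the lowest expected harm: "brake_hard"
```

The real ethical problem, of course, is who assigns those severity numbers in the first place; the arithmetic is trivial, the moral weighting is not.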
What about regulatory issues?
Figueroa: I think AI has to be regulated. But it is not feasible to hold back development or innovation until we have a clear assessment of regulation, and we won’t have that anytime soon. We don’t really know what we are regulating or how to enforce regulation. So we have to create a new way of regulating.
One of the things the OpenAI developers do well is build their technology in plain sight. The developers could have worked on their technology for two more years and come up with something far more sophisticated. Instead, they decided to put the current breakthrough in front of the world, so that people could start thinking about regulation and what kind of regulation could be enforced.
How do you start the evaluation process?
Figueroa: It all starts with two questions. One is, what is regulation? It is a directive created and maintained by an authority. The second question is, who is the authority? An authority is an entity with the power to issue orders, make decisions, and enforce those decisions.
Related to those first two questions is a third: who or what are the candidates? We have governments localized within a single country, and transnational institutions like the United Nations, which can be powerless in these situations.
With industry self-regulation, you can make the case that it is the best way to go. But you will have a lot of bad actors. You could have professional organizations, but then you get into more bureaucracy. Meanwhile, AI is advancing at an astonishing pace.
What do you think is the best way?
Figueroa: It has to be a combination of government, industry, professional organizations, and perhaps non-governmental organizations working together. But I am not very optimistic, and I don’t think they will find an adequate solution for what is coming.
Is there a way to manage AI and ML with stopgap safeguards in place for entities that violate the guidelines?
Figueroa: You can always do this. But one challenge is not being able to predict all the possible consequences of these technologies.
Right now, we have all the big players in the industry, OpenAI, Microsoft, and Google, working on the foundational technology. Many AI companies are also working at a second level of abstraction, building on the technology those players create. But they remain dependent on those foundational players.
So you have a generic brain that can be made to do whatever you want. If you have the proper ethics and procedures in place, you can reduce adverse effects, increase safety, and reduce bias. But you cannot eliminate them entirely. We have to live with that and create some accountability and rules. If an unintended consequence occurs, we must be clear about whose responsibility it is. I think that is key.
What needs to be done now to chart the course for the secure use of AI and ML?
Figueroa: The first step is accepting that we do not know everything and that there are going to be negative consequences. In the long run, the goal is for the positive outcomes to far outweigh the negative ones.
Consider that the AI revolution is unpredictable but, at this point, inevitable. You can make the case that regulations should be put in place, and that it might be good to slow the pace and make sure we are as safe as possible. We should accept that we are going to suffer some negative consequences, in the hope that the long-term effects are far better and give us a better society.