Some technology insiders want to halt further development of artificial intelligence systems before machine learning heads down paths its human creators never intended. Other computer experts argue that missteps are inevitable and that development must continue.

More than 1,000 tech and AI veterans recently signed a petition calling on the computing industry for a six-month moratorium on training AI systems more powerful than GPT-4. Proponents want AI developers to create safety standards and mitigate the potential harms posed by the riskiest AI technologies.

The nonprofit Future of Life Institute organized the petition, which calls for a near-immediate, public, and verifiable pause by all key developers. Otherwise, governments should step in and institute a moratorium. As of this week, the Future of Life Institute says it has collected more than 50,000 signatures that are going through its vetting process.

The letter is not an attempt to halt all AI development in general. Instead, its supporters want developers to back away from a dangerous race toward “ever-larger unpredictable black-box models with emergent capabilities.” During the pause, AI labs and independent experts should jointly develop and implement a set of shared safety protocols for advanced AI design and development.

“AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal,” the letter states.

Support Is Not Universal

It’s doubtful that anyone will stop anything, suggested John Bambenek, principal threat hunter at security and operations analytics SaaS company Netenrich. Still, he sees a growing awareness that consideration of the ethical implications of AI projects lags far behind the pace of development.

“I think it’s good to re-evaluate what we’re doing, and it will have a profound impact, because we’ve already seen some spectacular failures when it comes to thoughtless deployment of AI/ML,” Bambenek told TechNewsWorld.

Anything done to stop things in the AI space is probably just noise, said Andrew Barratt, vice president at cybersecurity advisory services firm Coalfire. It is also impossible to do this globally in a coordinated fashion.

“AI will be a productivity enabler for the next couple of generations. The danger would be watching it replace search engines and then become monetized by advertisers who ‘discreetly’ place their products in the answers. What’s interesting is the ‘spike’ in fear we’ve seen since the recent attention paid to ChatGPT,” Barratt told TechNewsWorld.

Rather than pause, Barratt recommends encouraging knowledge workers worldwide to look at how they can best use the various AI tools that are becoming more consumer-friendly to help deliver productivity. Those who do not will be left behind.

Security and privacy should remain a top concern for any tech company, whether it is AI-focused or not, according to Dave Gerry, CEO of crowdsourced cybersecurity company Bugcrowd. When it comes to AI, ensuring that models have the necessary safeguards, feedback loops, and mechanisms for surfacing safety concerns is critical.

“As organizations rapidly adopt AI for all of its efficiency, productivity, and democratization-of-data benefits, it is important to ensure that as concerns are identified, there is a reporting mechanism in place to surface them, in the same way a security vulnerability would be identified and reported,” Gerry told TechNewsWorld.

Highlighting Legitimate Concerns

In what may be an increasingly typical response to the need to rein in AI, machine learning expert Anthony Figueroa, co-founder and CTO of outcome-driven software development firm Rootstrap, supports regulating artificial intelligence but doubts that a pause in its development will accomplish any meaningful change.

Figueroa uses big data and machine learning to help companies create innovative solutions to monetize their services. But he is skeptical that regulators will move at the right pace and understand the implications of what they should regulate. He sees the challenge as similar to that posed by social media two decades ago.

“I think the letter is important. We are at a tipping point, and we need to start thinking about progress we haven’t had to think about before. I just don’t think pausing anything for six months, a year, two years, or even a decade is feasible,” Figueroa said.

Suddenly, AI-powered everything is the next big thing. The virtually overnight success of OpenAI’s ChatGPT has forced the world to take notice of the immense power and potential of AI and ML technologies.

“We don’t yet know the effects of that technology. What are the dangers? We do know some things that can go wrong with this double-edged sword,” he warned.

Does AI Need Regulation?

TechNewsWorld discusses with Anthony Figueroa the issues surrounding the need for developer control of machine learning and the potential need for government regulation of artificial intelligence.

TechNewsWorld: Within the computing industry, what guidelines and ethics exist to keep you on track safely?

Anthony Figueroa: You need your own set of personal ethics in your head. But even with that, you can have a lot of unwanted consequences. What we’re doing with this new technology, ChatGPT, for example, is exposing AI to massive amounts of data.

That data comes from public and private sources and all sorts of places. We are using a technique called deep learning, which is based on studying how our brains work.

How does that affect ethics and the use of guidelines?

Figueroa: Sometimes we don’t even understand how the AI solves a problem in a particular way. We do not understand the thought process within the AI ecosystem. Add to this a concept called interpretability: you should be able to determine how a decision is made. But with AI, that is not always possible, and the results can vary.

How are those factors different with AI?

Figueroa: Interpretable AI is slightly less powerful because you have more restrictions, but then again, you have the question of ethics.

For example, consider doctors handling a cancer case. They have several treatments available. One drug is fully interpretable and gives the patient a 60% chance of a cure. Then they have a non-interpretable treatment that, based on historical data, would have an 80% chance of a cure, but they do not really know why.

That combination of drugs, along with the patient’s DNA and other factors, affects the outcome. So which should the patient take? It is a tough decision.

How do you define “intelligence” in the context of AI development?

Figueroa: We can define intelligence as the ability to solve problems. Computers solve problems in a completely different way than people do. We solve them with a combination of consciousness and intelligence, which gives us the ability to feel things and solve problems together.

AI is about solving problems by focusing on outcomes. A typical example is the self-driving car. What if all of the possible outcomes are bad?

A self-driving car will choose the least bad of all possible outcomes. If the AI has to choose a maneuver that will either kill the “passenger-driver” or kill two people crossing the road against a red light, you can make the case either way.

You could argue that the pedestrians are at fault. So the AI would make a moral judgment and say, let’s kill the pedestrians. Or the AI could say, let’s try to kill as few people as possible. There is no correct answer.

What about regulatory issues?

Figueroa: I think AI has to be regulated. But it is not feasible to pause development or innovation until we have a clear assessment of regulation, and we won’t have that. We don’t really know what we are regulating or how to enforce regulation. So we have to create a new way of regulating.

One thing the OpenAI developers are doing well is building their technology in plain sight. They could have worked on the technology for two more years and come out with something far more sophisticated. Instead, they decided to put the current state of the art in front of the world so that people could start thinking about regulation and what kind of regulation can be applied.

How do you start the evaluation process?

Figueroa: It all starts with two questions. One is, what is regulation? It is a rule created and maintained by an authority. The second question is, who is the authority: an entity with the power to issue orders, make decisions, and enforce those decisions?

Related to those first two questions is a third: who or what are the candidates? We have governments at the level of a single country, and international institutions like the United Nations, which may be powerless in these situations.

You can make the case that industry self-regulation is the best way to go, but you will have a lot of bad actors. You can have professional organizations, but then you get into more bureaucracy. Meanwhile, AI is advancing at an astonishing pace.

What do you think is the best way?

Figueroa: It should be a combination of government, industry, professional organizations and perhaps non-governmental organizations working together. But I’m not very optimistic, and I don’t think they’ll be able to find an adequate solution for what’s to come.

Is there a way to manage AI and ML with stopgap safeguards and sanctions for entities that violate the guidelines?

Figueroa: You can always do this. But one challenge is not being able to predict all the possible consequences of these technologies.

Right now, all the big players in the industry, such as OpenAI, Microsoft, and Google, are working on the foundational technology. Many other AI companies work at a second level of abstraction, building on the technology being created. But it is the former that set the direction.

So you have a generic brain that can do whatever you want. If you have the proper ethics and procedures in place, you can reduce adverse effects, increase safety, and reduce bias, but you cannot eliminate them entirely. We have to live with that and create accountability and rules. If an unintended consequence occurs, we must be clear about whose responsibility it is. I think that is key.

What needs to be done now to chart the course for the secure use of AI and ML?

Figueroa: The first step is accepting that we don’t know everything and that there are going to be negative consequences. In the long run, the goal is for the positive outcomes to far outweigh the negative ones.

Consider that the AI revolution is unpredictable but, at this point, inevitable. You can make the case that rules should be enforced and that it might be good to slow the pace and make sure we are as safe as possible. Accept that we are going to suffer some negative consequences, with the hope that the long-term effects are far better and give us a better society.

Online attackers are hijacking IP addresses and converting them into cash through so-called proxyware services.

The Sysdig Threat Research Team reported Tuesday that malicious actors are installing proxyware on computers without the owners’ knowledge, then selling the units’ IP addresses to a proxyware service, making up to US$10 a month for every compromised device.

The researchers explained in a company blog that proxyware services allow users to make money by sharing their Internet connection with others. Attackers, however, are taking advantage of the platforms to monetize victims’ internet bandwidth, just as malicious cryptocurrency mining attempts to monetize the CPU cycles of infected systems.

“Proxyware services are legitimate, but they cater to people who want to circumvent security and restrictions,” said Michael Clark, director of threat research at Sysdig, a San Francisco-based maker of a SaaS platform for threat detection and response.

“They use residential addresses to bypass bot protection,” he told TechNewsWorld.

For example, buying limited-run sneakers for resale can be very profitable, but websites put protections in place to limit sales to a single pair per IP address, he explained. Scalpers use these proxied residential IP addresses to buy and resell as many pairs as possible.

“Sites rely more heavily on residential IP addresses than on other types of addresses,” he said. “That’s why there’s such a premium on residential addresses, but cloud services and mobile phones are also starting to become desirable for these services.”

Fodder for Influencers

These apps are often promoted through referral programs, with many notable “influencers” promoting them as passive income opportunities, said Emmanuel Chavoya, senior manager of product security at SonicWall, a network firewall maker in Milpitas, Calif.

“Income seekers download software to share their bandwidth and make money,” he told TechNewsWorld.

“However,” he continued, “these proxyware services can expose users to disproportionate levels of risk, as users cannot control the activities performed using their home and mobile IP addresses.”

“There have been instances of users or their infrastructure unwittingly engaging in criminal activity,” he said.

Such activity includes visits to potential click-fraud or silent advertising sites, SQL injection probes, attempts to access the critical /etc/passwd file on Linux and Unix systems (which keeps track of the registered users with access to a system), crawling of government websites, crawling for personally identifiable information (including national IDs and Social Security numbers), and bulk registration of social media accounts.

Organizations Beware

Proxyware services can also be used to generate web traffic or manipulate web search results, explained Timothy Morris, chief security advisor at Tanium, maker of an endpoint management and security platform in Kirkland, Wash.

“Some proxy clients will come with ‘bonus content’ that may be ‘trojanized’ or malicious, providing unauthorized access to the computer running the proxy service, usually for crypto mining,” he told TechNewsWorld.

Sysdig Threat Research Engineer Crystal Morin said organizations affected by proxyware could see an increase in their cloud platform management costs and a drop in service.

“And just because an attacker is doing crypto mining or proxyjacking on your network doesn’t mean that’s all they’re doing,” she told TechNewsWorld.

“The concern is that if they got in using Log4j or some other vulnerability and have access to your network,” she continued, “they can do something beyond using the system for profit. So you have to be careful and watch for other malicious activity.”

Clark said an organization may also face some reputational risks from proxyjacking.

“There may be illegal activity going on that gets attributed to the company or organization whose IP was taken, and they may end up on threat-intelligence denial lists, which could entirely cut off the victim’s internet connectivity,” he said.

“There could also be a potential law enforcement investigation,” he said.

He added that the proxyjacking activity uncovered by Sysdig’s researchers appeared aimed at organizations. “The attackers cast a wide net across the internet and targeted cloud infrastructure,” he said.

“Typically,” he continued, “we would see this type of attack bundled into Windows adware. This time it is targeting cloud networks and servers, which is more business-oriented.”

Log4j Vulnerability Exploited

The attackers studied by Sysdig’s researchers exploited the Log4j vulnerability to compromise their targets. The flaw, discovered in 2021 in a popular open-source Java-based logging utility, is estimated to have affected 93% of all enterprise cloud environments.

“Millions of systems are still running with vulnerable versions of Log4j, and according to Censys, more than 23,000 of them are accessible from the internet,” the researchers wrote.

“Log4j is not the only attack vector for deploying proxyjacking malware, but this vulnerability alone could theoretically provide more than $220,000 in profit per month,” they wrote. “More conservatively, a modest compromise of 100 IPs will net a passive income of nearly $1,000 per month.”
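The researchers’ revenue figures can be sanity-checked with some quick back-of-envelope arithmetic. The $10-per-IP monthly rate and the dollar totals come from the report; the function and variable names below are purely illustrative:

```python
# Back-of-envelope proxyjacking revenue estimate, based on the roughly
# $10/month per compromised IP figure quoted in the Sysdig report.
RATE_PER_IP_MONTHLY = 10  # USD per compromised IP, per the report

def monthly_revenue(compromised_ips: int, rate: int = RATE_PER_IP_MONTHLY) -> int:
    """Estimated attacker income per month for a given number of hijacked IPs."""
    return compromised_ips * rate

# A "modest compromise" of 100 IPs:
print(monthly_revenue(100))  # 1000 -> roughly $1,000/month

# The theoretical $220,000/month ceiling implies about 22,000 hijacked IPs,
# consistent with the >23,000 internet-accessible vulnerable systems cited:
print(220_000 // RATE_PER_IP_MONTHLY)  # 22000
```

The numbers line up: monetizing nearly all of the internet-exposed vulnerable Log4j hosts at $10 apiece lands close to the quoted theoretical maximum.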

While it shouldn’t be an issue at this point, there is still a “long tail” of systems vulnerable to Log4j that haven’t been patched, observed Mike Parkin, senior technical engineer at Vulcan Cyber, a provider of SaaS for enterprise cyber-risk remediation in Tel Aviv, Israel.

He told TechNewsWorld, “The number of vulnerable systems is going down, but it will still take some time to reach zero, with the remainder either being patched or being found and exploited.”

“The vulnerability is being actively exploited,” Morris said. “There are reports of vulnerable versions still being downloaded.”

Protection Through Detection

To protect yourself from proxyjacking, Morin recommends robust and continuous real-time threat detection.

“Unlike cryptojacking, where you would see spikes in CPU usage, CPU usage here is very low,” she explained. “So the best way to catch it is through detection analytics, where you’re looking for the kill-chain aspects of the attack: initial access, vulnerability exploitation, defense evasion, persistence.”
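Morin’s point about correlating kill-chain stages rather than watching CPU load can be sketched as a toy correlation rule. Everything below is a hypothetical illustration (the event names, the event-to-stage mapping, and the two-stage threshold are made up), not Sysdig’s actual detection logic:

```python
# Toy kill-chain correlator: flag a host when observed events span several
# attack stages, since proxyjacking rarely produces a CPU spike.
# Stage names loosely follow MITRE ATT&CK-style tactics; the mapping of
# specific events to stages is illustrative only.
STAGE_OF_EVENT = {
    "jndi_lookup_in_request": "initial_access",   # e.g., a Log4Shell probe
    "child_shell_from_java":  "exploitation",
    "process_name_mimicry":   "defense_evasion",
    "new_cron_entry":         "persistence",
    "proxyware_binary_exec":  "impact",
}

def suspicious(host_events, min_stages=2):
    """Return (flagged, stages): flag if events cover >= min_stages stages."""
    stages = {STAGE_OF_EVENT[e] for e in host_events if e in STAGE_OF_EVENT}
    return (len(stages) >= min_stages), sorted(stages)

flagged, stages = suspicious(
    ["jndi_lookup_in_request", "new_cron_entry", "proxyware_binary_exec"]
)
print(flagged, stages)  # True ['impact', 'initial_access', 'persistence']
```

The design choice mirrors the quote: no single low-noise event is conclusive, but several events landing in distinct kill-chain stages on one host is a strong signal even when resource usage looks normal.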

Chavoya advised organizations to create detailed rules for what types of applications are allowed on end-user devices through application whitelisting.

Whitelisting involves creating a list of approved applications that can run on devices within an organization’s network and preventing any other applications from running.
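The allowlisting idea described above can be sketched as a default-deny hash check: execution is permitted only when a binary’s digest appears on an approved list. This is a minimal illustration; in practice, the check is enforced by the operating system or an EDR agent, not by application code, and the function names here are made up:

```python
import hashlib

# Hypothetical allowlist: SHA-256 digests of binaries approved by the
# organization (e.g., the sanctioned browser, editor, VPN client).
APPROVED_SHA256: set = set()

def sha256_of(path: str) -> str:
    """Hash a binary in chunks, the way an allowlisting agent would."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def may_execute(path: str) -> bool:
    """Default-deny: run only binaries whose digest is on the allowlist."""
    return sha256_of(path) in APPROVED_SHA256
```

Under this model, a proxyware client dropped onto a machine fails the check automatically: its digest was never approved, so no signature or prior knowledge of the malware is needed to block it.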

“This can be a highly effective way to prevent proxyware and other types of malware from running on devices within an organization’s network,” Chavoya said.

“By creating detailed rules for what types of applications are allowed on end user devices, organizations can ensure that only authorized and necessary applications are allowed to run,” he continued.

“This can greatly reduce the risk of proxyjacking and other types of cyber-attacks that rely on unauthenticated applications running on end-user devices,” he concluded.