Elon Musk has reportedly been reaching out to artificial intelligence researchers in recent weeks about setting up a research lab to develop an alternative to OpenAI’s ChatGPT.

According to a report in The Information, the move stems from Musk’s dissatisfaction with the safeguards OpenAI has built into ChatGPT to prevent it from producing text that offends users.

Musk and some conservative commentators have also accused OpenAI of training ChatGPT to be “woke.”

Writing for National Review in January, Nate Hochman reported that he had found an underlying ideological bias in ChatGPT.

“It is unclear whether this was a feature of ChatGPT from the beginning or a more recent addition to the algorithm, but it appears that the crackdown on ‘misinformation’ that we have seen across technology platforms in recent years – which often shades into more brazen attempts to stifle or silence viewpoints that dissent from progressive orthodoxy – is now a feature of ChatGPT as well,” he wrote.

Avoiding controversy

Will Duffield, a policy analyst at the Cato Institute, a Washington, D.C. think tank, counters, however, that what appears to some to be an ideological bias is actually an attempt to avoid controversy.

“‘Woke’ is the wrong framing,” Duffield told TechNewsWorld.

“OpenAI’s chatbot and DALL-E are both tuned to avoid controversial topics,” he continued. “That doesn’t mean they’re designed to be woke. In order to avoid controversy, they tend to mirror what society as a whole has deemed controversial.”

For example, he explained that if you ask ChatGPT to write a poem about Biden and one about Trump, it will write a poem about Biden but not Trump.

“It’s not because someone at OpenAI tuned the model to avoid Trump, but because Trump is a controversial figure,” he said. “If you read everything that’s been written about Biden and Trump over the past 10 years, you come away with the impression that Trump is more controversial.”

Musk can bring transparency

Greg Sterling, co-founder of Near Media, a news, commentary, and analysis website, also found the “woke” criticism unfair.

“OpenAI and Microsoft are trying to prevent ChatGPT from generating hateful or racist content, which is both responsible and practical to do,” Sterling told TechNewsWorld. “Any brand associated with AI hate speech or misinformation will be tarnished in the public mind.”

But Mark N. Vena, president and principal analyst at SmartTech Research in San Jose, California, argued that there have been too many examples in the media of the apparent “wokeness” of OpenAI’s technology.

“In all fairness, OpenAI is still in a testing phase, and given some of the market’s reportedly negative reaction to it, I suspect OpenAI will dial it back over time,” Vena told TechNewsWorld.

He added that if Musk becomes part of the AI landscape, it could be a good thing for the technology.

“Given the approach Musk has brought to Twitter — whether you like it or not — his focus on transparency could be a good thing,” he said.

“I expect him to focus on being highly transparent about the algorithms used by Twitter as well as by his own ChatGPT implementation, and you could see him promoting a stronger code of conduct,” he continued.

“I think we should be open-minded about the changes he could bring to the ChatGPT space,” said Vena.

Traffic magnet

Twitter could benefit from reports of Musk getting into the AI chat business, observed Ross Rubin, principal analyst at Reticle Research, a consumer technology advisory firm in New York City.

“ChatGPT has become an incredible traffic magnet, and it’s impossible to ignore when you’re in the attention game like Twitter is. You have to respond to that,” he told TechNewsWorld.

“AI could also give users a way to mine the incredible amount of data and perspectives that scroll through Twitter every day,” he continued.

“Keeping up with the content you want to follow on Twitter has always been challenging because there’s so much noise out there,” Rubin said. “AI can be helpful in that area.”

A Musk AI could also benefit from its relationship with Twitter. “They have data from Twitter, which could be an interesting trove of information to train AIs with,” Duffield said.

“If Musk really wants to differentiate his AI product, he should use models that users can run on their own machines, determining their own weights and biases. That kind of freedom will ultimately be what matters most in the AI space.”

Bing AI comes to Windows 11

Bob O’Donnell, founder and principal analyst at Technalysis Research, a technology market research and consulting firm in Foster City, California, predicted that many companies would be developing large language models like ChatGPT.

“Since they’re incredibly expensive to develop, you need someone with Musk’s money to bankroll those kinds of efforts,” he told TechNewsWorld.

“I don’t know that whatever Musk does is going to make a huge difference,” he said. “What I do know is that we are going to see many companies trying this with many different approaches. Ultimately, it is going to come down to what people find useful.”

While Musk prepares to enter the AI chat arena, Microsoft continues to expand the use of the technology in its products. It announced on Tuesday that the latest version of Windows 11 will incorporate AI-powered Bing directly into the taskbar.

“The search box is one of the most widely used features on Windows, with more than half a billion users every month, and now with the typable Windows search box and the new AI-powered Bing front and center of this experience, you’ll be empowered to find the answers you’re looking for faster than ever,” Microsoft chief product officer Panos Panay wrote in a company blog.

There have been some hiccups in Microsoft’s aggressive rollout of AI-based products, but slowing down doesn’t seem to be an option.

“There will be setbacks, but it’s really more about Microsoft’s ability to adjust on the fly and update with new guardrails and better training,” Jason Wong, vice president and analyst at Gartner, a research and advisory firm based in Stamford, Conn., told TechNewsWorld.

“There is so much potential with generative AI that being early rather than late to market at this point is worth the risk,” he added.

As criminal activity on the Internet continues to intensify, hunting bugs for cash is attracting more and more security researchers.

In its latest annual report, bug bounty platform Intigriti revealed a 43% increase in the number of researchers signing up for its services from April 2021 to April 2022. For Intigriti alone, that meant adding 50,000 researchers.

For the most part, the report noted, bug bounty hunting is part-time work: 54% of researchers hold full-time jobs, and another 34% are full-time students.

“Bug bounty programs are tremendously successful for both organizations and security researchers,” said Ray Kelly, a fellow at WhiteHat Security, an application security provider in San Jose, Calif., which was recently acquired by Synopsys.

“Effective bug bounty programs limit the impact of serious security vulnerabilities that could easily have put an organization’s customer base at risk,” he told TechNewsWorld.

“Payouts for bug reports can sometimes exceed six figures, which may seem like a lot,” he said. “However, the cost of fixing and recovering from a zero-day vulnerability can total millions of dollars in lost revenue for an organization.”

‘Good faith’ rewarded

As if that weren’t incentive enough to become a bug bounty hunter, the US Department of Justice recently sweetened the career path by adopting a policy stating that it would not enforce the federal Computer Fraud and Abuse Act against hackers who act in “good faith” when attempting to discover flaws in software and systems.

“The recent policy change to prevent prosecuting researchers is welcome and long-awaited,” said Mike Parkin, senior technical engineer at Vulcan Cyber, a provider of SaaS for enterprise cyber risk prevention in Tel Aviv, Israel.

“The fact that researchers have, over the years, tried to do the right thing and find security flaws under a regime that amounted to ‘no good deed goes unpunished’ shows their dedication to doing the right thing, even if it meant risking fines and jail time,” he told TechNewsWorld.

“This policy change removes a fairly significant obstacle to vulnerability research, and we can expect it to pay dividends quickly, with more people hunting for bugs in good faith without the risk of jail time for doing it.”

Today, ferreting out bugs in other people’s software is considered a respectable business, but that wasn’t always the case. “Basically, there were a lot of issues when bug bounty hunters found vulnerabilities,” said James McQuiggan, a security awareness advocate at KnowBe4, a security awareness training provider in Clearwater, Fla.

“Organizations would take a lot of offense at it and try to accuse the researcher of hacking when, in fact, the researcher wanted to help,” he told TechNewsWorld. “The industry has recognized this, and now email addresses have been established to receive that kind of information.”

Benefits of multiple eyes

Over the years, companies have come to realize what bug bounty programs can bring to the table. “The task of discovering and prioritizing weaknesses and unintended consequences is not, and should not be, the sole focus of an organization’s resources or efforts,” explained Casey Ellis, CTO and founder of Bugcrowd, which operates a crowdsourced bug bounty platform.

“As a result, a more scalable and effective answer to the question ‘where am I most likely to be compromised?’ is no longer considered a nice-to-have but a need-to-have,” he told TechNewsWorld. “This is where bug bounty programs come into play.”

“Bug bounty programs are a proactive way to spot vulnerabilities, and they reward a researcher’s good work and discretion,” said Davis McCarthy, a lead security researcher at Valtix, a provider of cloud-native network security services in Santa Clara, Calif.

“The old adage ‘many eyes make all bugs shallow’ holds true because there is a dearth of talent in the field,” he told TechNewsWorld.

Parkin agreed. “Given the sheer complexity of modern code and the myriad interactions between applications, it’s important to have more responsible eyes looking for flaws,” he said.

“Threat actors are always working to find new vulnerabilities they can exploit, and the threat landscape in cybersecurity has only become more hostile,” he continued. “The rise of bug bounties is a way for organizations to bring some of the independent researchers into the game on their side. It’s a natural response to an increase in sophisticated attacks.”

Bad actor reward program

Although bug bounty programs have gained greater acceptance among businesses, they can still cause friction within organizations.

“Researchers often complain that a lot of pushback or friction exists even when firms have a coordinated disclosure or bug bounty program. They often feel slighted or pushed aside,” said Archie Agarwal, founder and CEO of ThreatModeler, an automated threat modeling provider in Jersey City, N.J.

“Organizations, for their part, often get stuck when presented with a disclosure because the researcher found a fatal design flaw that would require months of concerted effort to rectify,” he told TechNewsWorld. “Perhaps some would prefer that these kinds of flaws stay out of sight.”

“The effort and expense of fixing design flaws after a system has been deployed is a significant challenge,” he continued. “The surest way to avoid this is to threat model systems as their designs evolve. That gives organizations the ability to plan for and deal with these flaws proactively, while they are still only potential flaws.”

Perhaps the biggest proof of the effectiveness of bug bounty programs is that malicious actors have begun to adopt the practice. The LockBit ransomware gang is offering payments to those who discover vulnerabilities in its leak website and its code.

“This development is novel; however, I doubt they will get many takers,” predicted John Bambenek, principal threat hunter at Netenrich, an IT and digital security operations company in San Jose, Calif.

“I know that if I find a vulnerability, I’m going to use it to put them in jail,” he told TechNewsWorld. “If a criminal finds one, it will be to steal from them, because there is no honor among ransomware operators.”

“Ethical hacking programs have been hugely successful. It is no surprise to see ransomware groups refining their methods and services in the face of that competition,” said Casey Bisson, head of product and developer relations at BluBracket, a cybersecurity services company in Menlo Park, Calif.

He warned that attackers are increasingly aware that they can buy access to the companies and systems they want to attack.

“It underscores the need for every enterprise to look at the security of its internal supply chain, including who has access to its code and any secrets therein,” he told TechNewsWorld. “Unethical bounty programs like these turn passwords and keys in code into gold for whoever has access to your code.”