The large multimodal language model GPT-4 is ready for prime time, although, contrary to reports circulating since Friday, it does not support text-to-video generation.

However, GPT-4 can accept image and text inputs and generate text outputs. OpenAI explains on its website that across a range of domains – including documents containing text and photographs, diagrams or screenshots – GPT-4 exhibits capabilities similar to those it shows on text-only inputs.

However, this feature is in “research preview” and is not yet publicly available.

OpenAI explained that GPT-4, while less capable than humans in many real-world scenarios, demonstrated human-level performance on various professional and academic benchmarks.

For example, it passed a simulated bar exam with a score in the top 10% of test takers; by contrast, GPT-3.5’s score was around the bottom 10%.

Surpasses past models

One of the early users of GPT-4 is Casetext, maker of CoCounsel, an AI legal assistant that it says has been able to pass both the multiple-choice and written portions of the Uniform Bar Exam.

“GPT-4 surpasses the power of earlier language models,” Pablo Arredondo, Casetext’s co-founder and chief innovation officer, said in a statement. “The model’s ability not only to generate text, but to interpret it, is nothing less than a new era in the practice of law.”

“Casetext’s CoCounsel is changing how law is done by automating important, time-intensive tasks and freeing up our attorneys to focus on the most impactful aspects of the practice,” Frank Ryan, U.S. president of global law firm DLA Piper, said in the press release.


OpenAI reported that it spent six months aligning GPT-4 using lessons learned from its adversarial testing program as well as from ChatGPT, resulting in its best-ever results – though far from perfect – on factuality, steerability and refusal to go outside its guardrails.

It added that the GPT-4 training run was unprecedentedly stable, making it the company’s first large model whose training performance it could accurately predict ahead of time.

“As we continue to focus on reliable scaling,” it wrote, “we aim to sharpen our methodologies to help predict and prepare for future capabilities — something we consider important for safety.”

Nuance

OpenAI notes that the difference between GPT-3.5 and GPT-4 can be subtle. The difference emerges when the complexity of a task reaches a sufficient threshold, it explained. GPT-4 is more reliable and creative, and can handle more nuanced instructions than GPT-3.5.

GPT-4 is also more customizable than its predecessor. OpenAI explained that instead of the classic ChatGPT personality with a fixed verbosity, tone, and style, developers — and soon ChatGPT users — can now set the style and function of their AI by describing those directions in a “system” message. System messages allow API users to customize their users’ experience significantly, within bounds.
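
To illustrate the idea, here is a minimal sketch of steering the model’s style through a system message using the openai Python client; the persona wording, prompt text, and model name are illustrative assumptions, not anything specified in OpenAI’s announcement.

```python
# Hypothetical sketch: setting an assistant's style via a "system" message
# with the openai Python client. The persona and prompt are made up for
# illustration; GPT-4 API access at launch was gated by a waitlist.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        # The system message describes the desired role, tone, and verbosity.
        {"role": "system",
         "content": "You are a terse legal research assistant. "
                    "Answer in plain English in three sentences or fewer."},
        # The user message carries the actual request.
        {"role": "user",
         "content": "Summarize the elements of negligence."},
    ],
)

print(response.choices[0].message.content)
```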

API users will initially have to wait to try that feature, however, as their access to GPT-4 will be restricted by a waiting list.

OpenAI acknowledged that despite its capabilities, GPT-4 has the same limitations as earlier GPT models. Most importantly, it is still not completely reliable. It “hallucinates” facts and makes logical errors.

Great care should be taken when using language model outputs, especially in high-stakes contexts, OpenAI warned.

GPT-4 can also be confidently wrong in its predictions, not taking care to double-check its work when a mistake is likely.

T2V absent

Anticipation for a new release of GPT was stoked over the weekend after a Microsoft executive in Germany suggested that text-to-video capability would be part of the final package.

“We will introduce GPT-4 next week, where we have multimodal models that will offer completely different possibilities – for example, video,” Andreas Braun, Microsoft’s chief technology officer in Germany, said at a press event on Friday.

Text-to-video will be very disruptive, said Rob Enderle, president and principal analyst at the Enderle Group, an advisory services firm in Bend, Ore.


“It could dramatically change how movies and TV shows are made, and how news programs are formatted, by providing a mechanism for user customization,” he told TechNewsWorld.

Enderle said an early use of the technology could be in creating storyboards from script drafts. “As this technology matures, it will move into something closer to a finished product.”

Video dissemination

The content created by text-to-video applications is still basic, noted Greg Sterling, co-founder of Near Media, a news, commentary and analysis website.

“But text-to-video has the potential to be disruptive in the sense that we’ll see a lot more video content generated for little or almost no cost,” he told TechNewsWorld.

“The quality and effectiveness of that video is a different matter,” he continued. “But I suspect some of it will be decent.”

Explainers and basic how-to information are good candidates for text-to-video, he said.

“I could imagine that some agencies would use this to create videos for SMBs to use on their sites or YouTube for ranking purposes,” he said.

“It won’t do well – at least at first – on any branded content,” he continued. “Social media content is another use case. You’ll see creators on YouTube use it to drive up the volume of content to generate views and ad revenue.”

Not fooled by deepfakes

As was discovered with ChatGPT, there are potential dangers to a technology like text-to-video.

“The most dangerous use cases, like all such tools, are garden-variety scams that target people with relatives or particularly vulnerable individuals or institutions,” said Will Duffield, an analyst at the Cato Institute, a policy think tank in Washington, D.C.

Duffield, however, discounted the idea of using text-to-video to create effective “deepfakes”.

“When we have seen well-resourced attacks, such as the Russian deepfake of Zelensky surrendering last year, they fail because there is enough context and expectation in the world to dismiss the fake,” he explained.

“We have very well defined notions of who public figures are, what they are about, what we can expect them to do,” he continued. “So, when we see that their media is behaving in a way that is unusual, that is not in line with those expectations, we are likely to be very critical or skeptical about it.”

OpenAI CTO Mira Murati on Sunday courted controversy over government oversight of artificial intelligence when she acknowledged in an interview with Time magazine that the technology needs to be regulated.

“It’s important for OpenAI and companies like ours to bring this into the public consciousness in a controlled and responsible way,” Murati told TIME. “But we are a small group of people, and we need a ton more input in this system, and a lot more input that goes beyond the technologies — certainly the regulators and the governments and everybody else.”

Asked whether government involvement at this stage of AI’s development could hinder innovation, she replied: “It’s never too early. Given the impact of these technologies, it’s very important for everyone to be involved.”

Greg Sterling, co-founder of Near Media, a news, commentary and analysis website, agreed, saying that since the market provides incentives for abuse, some regulation is probably necessary.

“Deliberately designed disincentives against unethical behavior can reduce the potential misuse of AI,” Sterling told TechNewsWorld, “but regulation can also be poorly designed and fail to prevent any of it.”

He acknowledged that regulating too early or too heavily could hurt innovation and limit the benefits of AI.

“Governments should convene AI experts and industry leaders to jointly draw up a framework for possible future regulation. This should probably also happen internationally,” Sterling said.

Consider existing laws

Artificial intelligence, like many technologies and tools, can be used for a wide variety of purposes, explained Jennifer Huddleston, a technology policy research fellow at the Cato Institute, a Washington, DC think tank.

Many of these uses are positive, and consumers are already experiencing beneficial uses of AI, such as real-time translation and better traffic navigation, she continued. “Before seeking new regulations, policymakers should consider how existing laws around issues such as discrimination may already address concerns,” Huddleston told TechNewsWorld.


Artificial intelligence should be regulated, but how it is already regulated also needs to be considered, added Mason Kortz, a clinical instructor at the Cyberlaw Clinic at Harvard Law School in Cambridge, Mass.

“We have a lot of general rules that make things legal or illegal, regardless of whether they’re done by humans or AI,” Kortz told TechNewsWorld.

“We need to look at the ways in which existing laws already regulate AI, and the ways in which they do not and we need to innovate and be creative,” he said.

For example, he said there is no general rule regarding autonomous vehicle liability. However, there are still plenty of areas of law to consider if an autonomous vehicle causes an accident, such as negligence law and product liability law. He explained that these are potential ways to regulate the use of AI.

Need a light touch

However, Kortz acknowledged that many of the current rules came into play after the fact. “So, in a way, they’re like second best,” he said. “But they are an important measure when we develop the rules.”

“We should try to be proactive in regulation where we can,” he said. “After harm is done, there is recourse through the legal system. It is better not to be harmed.”

However, Mark N. Vena, president and principal analyst at SmartTech Research in San Jose, Calif., argues that heavy regulation could stifle the booming AI industry.

“At this early stage, I’m not a big fan of government regulation of AI,” Vena told TechNewsWorld. “AI can have a lot of benefits, and government interference can eliminate them.”


Such a suffocating influence on the internet was avoided in the 1990s, he maintained, through “light touch” regulation such as Section 230 of the Communications Decency Act, which granted online platforms immunity from liability for third-party content displayed on their websites.

Kortz, however, believes the government can appropriately put the brakes on something without shutting down an industry.

“People criticize the FDA, say it’s prone to regulatory capture, that it’s run by drug companies, but we’re still in a better world than pre-FDA, when anyone could sell anything and put anything on a label,” he said.

“Is there a good solution that captures only the good aspects of AI and blocks all the bad ones? Probably not,” Vena continued, “but some structure is better than no structure.”

“It’s not going to do anyone any good to let good AI in and bad AI out,” he said. “We can’t guarantee that good AI is going to win that battle, and the collateral damage could be quite significant.”

Regulation without throttling

Daniel Castro, vice president of the Information Technology and Innovation Foundation, a research and public policy organization in Washington, DC, said there are some things policymakers can do to regulate AI without stifling innovation.

“One is to focus on specific use cases,” Castro told TechNewsWorld. “For example, regulating self-driving cars should look different from regulating AI used to generate music.”

“Another is to focus on behavior,” he continued. “For example, it is illegal to discriminate when hiring employees or renting apartments – whether a human or an AI system makes that decision should be irrelevant.”

“But policy makers must be careful not to unfairly hold AI to a different standard or apply incomprehensible rules to AI,” he said. “For example, some safety requirements in today’s vehicles, such as steering wheels and rearview mirrors, may not make sense for autonomous vehicles without passengers or drivers.”


Vena would like to see a “transparent” approach to regulation.

“I would prefer regulation requiring AI developers and content producers to be completely transparent about the algorithms they are using,” he said. “They could be reviewed by a third-party body made up of academics and some commercial entities.”

“The balance of being transparent about the algorithms and the sources of content that AI tools derive from should encourage innovation and reduce abuse,” he stressed.

Plan for the worst case

Kortz said that many people believe that technology is neutral.

“I don’t think technology is neutral,” he said. “We have to think about the bad actors. But we also have to think about the poor decisions of the people who create these things and put them out there in front of the world.”

“I would encourage anyone developing AI for a particular use case to think not only about their intended use, but also what the worst possible use for their technology is,” he concluded.