Microsoft may have ushered in a paradigm shift Tuesday with the release of new versions of its search engine, Bing, and web browser, Edge — both now powered by artificial intelligence.

Available in preview on Bing.com, the new offerings combine browsing and chat into a unified experience that makes both work better. When performing a search, for example, more relevant results are displayed, and for information such as sports scores, stock prices, and weather forecasts, results may appear without leaving the search page.

For more complex questions—such as “What can I substitute for eggs when baking a cake?”—Bing can synthesize an answer from multiple online sources and present a summary response.

Searchers can also chat with Bing to further refine a search and use it to help create content, such as travel itineraries or quizzes for trivia night.

In addition to a facelift, the Edge browser gains AI functions for chat and content creation. You can ask it to summarize long reports, pare them down to the essentials, or create a LinkedIn post from a few prompts.

“AI will fundamentally transform every software category, starting with the biggest category of all — search,” Microsoft Chairman and CEO Satya Nadella said in a statement.

Paradigm Shift

When you integrate AI with search, you can get the best of both worlds, said Bob O’Donnell, founder and principal analyst at Technalysis Research in Foster City, Calif., a technology market research and consulting firm.

“You can have the timeliness of a search index and the intelligence of natural language-based chat and summary tools,” O’Donnell told TechNewsWorld.

This video demos the new Bing Chat experience:

“What they’re doing is ultimately making the computer smarter,” he explained. “It enables them to deliver what they have to say, not necessarily what has been said.”

“It’s going to take some time for people to get used to it, but it’s dramatically better,” he said. “Its time savings and efficiency are off the charts.”

“I think we are in the midst of a paradigm shift,” he said.

Ross Rubin, principal analyst at Reticle Research, a consumer technology advisory firm in New York City, explained that bringing AI into Bing is just the tip of a larger Microsoft strategy.

“It’s not just about Bing, which is the low-hanging fruit for the integration,” Rubin told TechNewsWorld. “They want to integrate AI into a lot of their products — Office, Teams, Azure.”

“It may help Bing in its long-standing competition with Google, but it’s really much more than that,” he said. “They wouldn’t have made this level of investment if it was about making Bing more effective.”

Google's Bard

Microsoft’s action comes on the heels of Google announcing on Monday that it was bringing an AI conversational service called Bard to a group of “trusted testers.” Bard is based on Google’s natural language technology, LaMDA. Microsoft is using OpenAI technology in its offering.

Google and Alphabet CEO Sundar Pichai wrote in a company blog post that Bard seeks to combine the breadth of the world’s knowledge with the power, intelligence, and creativity of Google’s large language models, drawing on information from across the web to provide fresh, high-quality responses.

He explained that Bard will initially be released with a lightweight model version of LaMDA. This much smaller model requires significantly less computing power, allowing Google to scale to more users and gather more feedback.


He added that Google will combine external feedback with its own internal testing to make sure Bard’s responses meet a high bar for quality, safety, and grounding in real-world information.

Pichai wrote that when people think of Google, they often think of quick factual answers, such as “How many keys are on a piano?” But increasingly, people are turning to Google for deeper insight and understanding — like, “Is piano or guitar easier to learn, and how much practice does each require?”

AI can be helpful in these moments, synthesizing insights for questions where there is no one right answer, he continued. Soon, people will see AI-powered features in Search that distill complex information and multiple perspectives into easy-to-digest formats, so they can quickly understand the big picture and learn more from the web: whether that’s seeking out additional perspectives, such as blogs from people who play both piano and guitar, or going deeper on a related topic, such as the steps to get started as a beginner.

Pichai said that these new AI features will start rolling out on Google Search soon.

Leg Up on the Leader

The question is, will “soon” be too late?

“Suddenly, the Microsoft search product is going to be much better than what Google has to offer,” said Rob Enderle, president and principal analyst at the Enderle Group, an advisory services firm in Bend, Ore.

“We’ll see how many people start making the switch,” Enderle told TechNewsWorld. “The switching cost between Bing and Google is non-existent. With switching costs so low, the question will be how many people switch to Bing and how bad will Google hurt?”


“It will take time for Google to catch up,” he said. “In the meantime, people will be establishing habit patterns with Bing, and if people are happy with Bing, why go back to Google?”

He added, “This appears to be a well-executed, dark strategy to battle Google, and Google, for whatever reason, was not adequately prepared.”

Incorporating AI into search helps Microsoft get a leg up on Google, maintained Ed Anderson, research vice president and analyst at Gartner, a research and advisory firm based in Stamford, Conn.

“Microsoft beat Google to the punch in terms of bringing AI-assisted search to Bing and Edge,” Anderson told TechNewsWorld. “How close Google is to doing the same with its search engine and browser remains to be seen.”

Rewriting the Rules of Search

O’Donnell believes the new Bing search could make some headway against Google in the battle for eyeballs. “It’s the kind of thing where, once you start exploring with this new type of engine, it becomes difficult to go back to the old one. It’s so much better,” he said.

“Microsoft is trying to rewrite the rules of the game,” Rubin said. “What is at risk is not only Google’s search leadership, but also its revenue model. Displacing search with an engine that can provide answers without redirecting you somewhere will require rethinking the entire search revenue model.”

However, Greg Sterling, co-founder of Near Media, a news, comment and analysis website, pointed out that not only does Google have a wealth of experience in AI, but it also has extensive resources that it has built up for search over the years.


“What Microsoft revealed is impressive, but what Google shows needs to be better,” Sterling told TechNewsWorld. “It can’t be just a little better. It has to be much better.”

“There is an opportunity here because of concerns about privacy, the user interface, and the quality of search results and ads,” he said. “There is an opening, but Microsoft needs to take advantage of those variables. It remains to be seen whether they can do that.”

Applying artificial intelligence to medical images can be beneficial to clinicians and patients, but developing the tools to do so can be challenging. Google announced on Tuesday that it is ready to take on that challenge with its new medical imaging suite.

“Google pioneered the use of AI and computer vision in Google Photos, Google Image Search, and Google Lens, and we are now making our imaging expertise, tools and technology available to healthcare and life sciences enterprises,” Alissa Hsu Lynch, global lead of Google Cloud MedTech Strategy and Solutions, said in a statement.

Jeff Cribbs, Gartner’s vice president and distinguished analyst, explained that health care providers who are looking to AI for diagnostic imaging solutions are typically forced into one of two choices.

“They can purchase software from a device manufacturer, image store vendor or a third party, or they can build their own algorithms with industry agnostic image classification tools,” he told TechNewsWorld.

“With this release,” he continued, “Google is taking their low-code AI development tooling and adding substantial healthcare-specific acceleration.”

“This Google product provides a platform for AI developers and also facilitates image exchange,” said Ginny Torno, administrative director of innovation and IT clinical, ancillary and research systems at Houston Methodist in Houston.

“It is not unique to this market, but can provide opportunities for interoperability that a smaller provider is not capable of,” she told TechNewsWorld.

Suite Components

According to Google, the medical imaging suite addresses some common pain points when developing AI and machine learning models. Components in the suite include:

  • Cloud Healthcare API, which allows easy and secure data exchange using DICOMweb, an international standard for imaging. The API provides a fully managed, scalable, enterprise-grade development environment with automated DICOM de-identification. Imaging technology partners include NetApp, for seamless on-premises-to-cloud data management, and Change Healthcare, a cloud-native enterprise imaging PACS in clinical use by radiologists. (See the usage sketch after this list.)
  • AI-assisted annotation tools from Nvidia and MONAI to automate the highly manual and repetitive task of labeling medical images, as well as native integration with any DICOMweb viewer.
  • Access to BigQuery and Looker to view and search petabytes of imaging data to perform advanced analysis and create training datasets with zero operational overhead.
  • Vertex AI to accelerate the development of AI pipelines and build scalable machine learning models, with up to 80% fewer lines of code required for custom modeling.
  • Flexible options for cloud, on-premises, or edge deployment to allow organizations to meet diverse sovereignty, data security, and privacy needs – while providing centralized management and policy enforcement with Google Distributed Cloud, enabled by Anthos.
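
To make the first item concrete, here is a minimal sketch of storing a DICOM instance in a Cloud Healthcare API DICOM store over DICOMweb (the STOW-RS “store” transaction). The project, location, dataset, and store names are placeholders rather than anything from Google’s announcement, and production code would add error handling and batching.

```python
# A minimal sketch of pushing a DICOM instance to a Cloud Healthcare API
# DICOM store over DICOMweb (STOW-RS). Project, location, dataset, and
# store names below are placeholders, not values from the article.
import google.auth
import google.auth.transport.requests
import requests

PROJECT, LOCATION = "my-project", "us-central1"       # hypothetical
DATASET, DICOM_STORE = "imaging-dataset", "ct-scans"  # hypothetical

BASE = ("https://healthcare.googleapis.com/v1/"
        f"projects/{PROJECT}/locations/{LOCATION}/"
        f"datasets/{DATASET}/dicomStores/{DICOM_STORE}/dicomWeb")


def upload_dicom(path: str) -> None:
    """Store a single .dcm file using the DICOMweb STOW-RS transaction."""
    credentials, _ = google.auth.default(
        scopes=["https://www.googleapis.com/auth/cloud-platform"])
    credentials.refresh(google.auth.transport.requests.Request())

    with open(path, "rb") as f:
        resp = requests.post(
            f"{BASE}/studies",
            data=f.read(),
            headers={
                "Authorization": f"Bearer {credentials.token}",
                "Content-Type": "application/dicom",
            },
        )
    resp.raise_for_status()
    print("Stored:", path)


if __name__ == "__main__":
    upload_dicom("example.dcm")
```

Because DICOMweb is an open standard, the same request shape works against any conformant server, which is the kind of interoperability benefit Torno points to.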

Full Deck of Tech

“One key differentiator of the medical imaging suite is that we are offering a comprehensive suite of technologies that support the process of delivering AI from start to finish,” Lynch told TechNewsWorld.

The suite offers everything from imaging data ingestion and storage to AI-assisted annotation tools to flexible model deployment options on the edge or in the cloud, she explained.

“We are providing solutions that will make this process easier and more efficient for health care organizations,” she said.

Lynch said the suite takes an open, standardized approach to medical imaging.

“Our integrated Google Cloud services work with a DICOM-standard approach, allowing customers to seamlessly leverage Vertex AI for machine learning and BigQuery for data discovery and analytics,” she added.

“By building everything around this standardized approach, we’re making it easier for organizations to manage their data and make it useful.”
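
As an illustration of the BigQuery side of that approach, the sketch below queries DICOM metadata that a store has been configured to export to BigQuery. The project, dataset, and table names are hypothetical, and the column names assume the export schema’s standard DICOM keywords.

```python
# A small sketch of the kind of data discovery Lynch describes: querying
# DICOM metadata exported from a Cloud Healthcare DICOM store to BigQuery.
# The table name is a placeholder; columns assume DICOM keywords.
from google.cloud import bigquery

client = bigquery.Client()

QUERY = """
SELECT Modality, COUNT(DISTINCT StudyInstanceUID) AS study_count
FROM `my-project.imaging_metadata.dicom_metadata`   -- hypothetical table
GROUP BY Modality
ORDER BY study_count DESC
"""

for row in client.query(QUERY).result():
    print(f"{row.Modality}: {row.study_count} studies")
```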

Image Classification Solutions

The increasing use of medical imaging, coupled with manpower issues, has made the field ready for solutions based on artificial intelligence and machine learning.

Torno said, “As imaging systems get faster, offering higher resolution and capabilities like functional MRI, it is harder for the infrastructure to maintain those systems and, ideally, stay ahead of what is needed.”

“In addition, there is a reduction in the radiology workforce that complicates the personnel side of the workload,” she said.

Google Cloud Medical Imaging Suite

Google Cloud aims to make health care imaging data more accessible, interoperable and useful with its medical imaging suite (Image Credit: Google)


She explained that AI can identify issues found in an image from a learned set of images. “It may recommend a diagnosis that then only needs interpretation and confirmation,” she said.

“If the AI detects a potentially life-threatening situation, it can also push those images to the top of a task queue,” she continued. “AI can also streamline workflows by reading images.”

Machine learning does for medical imaging what it did for facial recognition and image-based search. “Instead of identifying a dog, Frisbee or chair in a photograph, AI is identifying the extent of a tumor, bone fracture or lung lesion in a diagnostic image,” Cribbs explained.

Tools, Not Substitutes

Michael Arrigo, managing partner of No World Borders, a national network of expert witnesses on health care issues in Newport Beach, Calif., agreed that AI could help some overworked radiologists, but only if it is reliable.

“Data should be structured in ways that are usable and consumable by AI,” he told TechNewsWorld. “AI doesn’t work well with highly variable unstructured data in unpredictable formats.”

Torno said that many studies of AI accuracy have been done, and more will follow.

“While there are examples of AI being ‘just as good as’ or even better than a human, there are also examples where the AI misses something important, or isn’t sure what to interpret because there may be multiple issues with the patient,” she observed.

“AI should be seen as an efficiency tool to accelerate image interpretation and assist in emergent cases, but should not completely replace the human element,” she said.

Capacity to Make a Splash

With its resources, Google can have a significant impact on the medical imaging market. “Having a major player like Google in this area could facilitate synergy with other Google products already in place in healthcare organizations, potentially enabling more seamless connectivity to other systems,” Torno said.

“If Google focuses on this market segment, they have the resources to make a splash,” she continued. “There are already many players in this area. It will be interesting to see how this product can take advantage of other Google functionality and pipelines and become a differentiator.”

Lynch pointed out that with the launch of the medical imaging suite, Google hopes to help accelerate the development and adoption of AI for imaging by the health care industry.

“AI has the potential to help reduce the burden for health care workers and improve and even save people’s lives,” she said.

“By offering our imaging tools, products and expertise to healthcare organizations, we are confident that the market and patients will benefit,” she added.

D-ID on Monday announced a new service powered by artificial intelligence that can turn portraits into talking heads.

Called Creative Reality Studio, the self-service application can convert a facial image into video, complete with speech.

The service is aimed at professional content creators – learning and development units, human resources departments, marketers, advertisers and sales teams – but anyone can try out the technique on the D-ID website.

Creative Reality Studio
Video by John P. Mello Jr.


The platform reduces the cost and hassle of creating corporate video content and provides an unlimited variety of presenters, as opposed to a limited set of avatars, based on users’ own photos or any images the company has the right to use, according to the company. D-ID gained notoriety when its technology was used in an app called Deep Nostalgia, which was introduced as a way to animate old photos.

The company said the technology enables customers and users to choose a presenter’s identity, including their ethnicity, gender, age and even their language, accent and tone. “It provides greater representation and diversity, creating a stronger sense of inclusion and belonging, which drives further engagement and interaction with the businesses that use it,” it said in a news release.

Matthew Kershaw, D-ID’s vice president of marketing, told TechNewsWorld that use cases include empowering professional content creators to seamlessly integrate video into digital spaces and presentations via specialized PowerPoint plug-ins, and using customized corporate video narrators to generate more engaging content.

Impressive Services

The quality of these services is impressive and continues to get better, maintained Daniel Castro, vice president of the Information Technology and Innovation Foundation, a research and public policy organization in Washington, D.C.

“The service isn’t at a level where it’s completely replacing a presenter, but there’s no reason not to expect it to be there relatively soon,” he told TechNewsWorld.

D-ID explained that the use of video by businesses has increased dramatically and more of them are integrating it into their training, communication and marketing strategies.

Accelerating this trend, it continued, are the rapidly evolving worlds of avatars and the metaverse, both of which demand a more creative, immersive, and interactive approach to content from digital creators. Production budgets, however, can be prohibitively expensive, and production requires a significant allocation of time and talent.

“The service is an evolution of the avatars and emoji people use today, but can be used in lengthy discussions or presentations,” said Ross Rubin, principal analyst at Reticle Research, a consumer technology consulting firm in New York City.

“The idea is to save time, especially if you were going to read a script,” he told TechNewsWorld. “It can be more engaging for an audience than simply listening to audio or watching slides.”

Democratizing AI

D-ID CEO and co-founder Gil Perry noted in a news release that the company’s technology, until now limited to enterprise customers, has been used to create 100 million videos.

“Now that we are offering our self-service Creative Reality platform, the potential is enormous,” he continued. “It enables large enterprises, small companies, and freelancers alike to create personalized videos for multiple purposes at scale.”

Kershaw said D-ID’s technology will further democratize creativity. “I say ‘further’ because technology has really been democratizing the arts for decades,” he said.

“From the introduction of synthesizers, samplers, and sequencers in music, to Photoshop and Illustrator in photography and illustration, to Premiere and desktop editing in film production and motion graphics, the ability to create high-quality productions outside of specialist high-end studios has been growing since the 1980s,” he said. “This is the latest episode of that long-running series.”

“This is certainly a step forward toward democratizing AI,” agreed Avivah Litan, a security and privacy analyst at Gartner. “It has great use cases in education, healthcare and retail,” she told TechNewsWorld. “It’s a better way to communicate with people. We’re becoming a more visual society. Nobody has time to read anything.”

Deepfake Concerns

With growing concern over the use of “deepfakes” to spread misinformation and take social engineering to new heights, there is always the potential for misuse of new synthetic media solutions such as D-ID.

“As with any technology, it can be used for ill by bad actors, but our platform is aimed at legitimate businesses that would have no interest in that kind of use,” Kershaw said.

“Plus,” he continued, “we’re not making deepfakes. We don’t put one person’s face on someone else’s body, and we’re not trying to make anyone appear to say something they didn’t say.”

“Within D-ID’s platform, we have put in place a number of security measures to ensure that our technology is not used in this manner,” he said. “We do not replicate the voices of celebrities, or of any person without their permission.”

The company also filters abusive and racist comments, and prohibits the platform from being used to make political videos.

“D-ID is putting guardrails on its platform, but we all know that guardrails are never perfect,” Litan said.

“It is a good tool for spreading misinformation because the social media sites are not ready for deepfakes,” she said. “Even if social media sites get good at detecting deepfakes, it will never be enough. It’s like spam. Spam always gets through. This will get through too, but the consequences will be worse.”

Need for Origin

Detecting deepfakes is a losing proposition in the long run, Litan said. Even today, detection algorithms typically cannot catch more than 70% of deepfakes.

She added that determined adversaries will keep pace with deepfake detection by using generative adversarial networks, so the detection rate will eventually fall toward 50%.

She predicts that in 2023, 20% of successful account takeover attacks will use deepfakes to socially engineer users into turning over sensitive data or transferring funds to criminal accounts.

“Many safeguards need to be implemented industry-wide, which is why we are also working with industry bodies and regulators to implement legal safeguards that will make the industry more secure and reliable in general,” said Kershaw. “We think that having an industry-wide system for watermarking content invisibly through the use of steganography, in particular, would get rid of almost all potential issues.”

“You will be able to look at a piece of media and click a button to see where it came from and what’s in it,” he said. “Transparency is the solution.”
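
Kershaw’s idea of invisible watermarking via steganography can be illustrated with a toy example: tucking a short provenance tag into the least-significant bits of an image’s pixels. This is only a sketch of the general technique, not D-ID’s scheme, and real systems use far more robust, tamper-resistant methods.

```python
# Toy illustration of invisible watermarking by steganography: embed a short
# provenance tag in the least-significant bits of an image's pixel values.
# Sketch of the general idea only; not any vendor's actual scheme.
import numpy as np
from PIL import Image


def embed_tag(in_path: str, out_path: str, tag: str) -> None:
    """Hide tag bytes in the LSB of each pixel channel; save losslessly."""
    pixels = np.array(Image.open(in_path).convert("RGB"), dtype=np.uint8)
    bits = np.unpackbits(np.frombuffer(tag.encode("utf-8"), dtype=np.uint8))
    flat = pixels.reshape(-1)
    if bits.size > flat.size:
        raise ValueError("tag too long for this image")
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # rewrite LSBs
    Image.fromarray(flat.reshape(pixels.shape)).save(out_path, "PNG")


def extract_tag(path: str, length: int) -> str:
    """Read back `length` bytes of tag from the LSBs."""
    flat = np.array(Image.open(path).convert("RGB"), dtype=np.uint8).reshape(-1)
    bits = flat[: length * 8] & 1
    return np.packbits(bits).tobytes().decode("utf-8")


# Example (hypothetical file names):
# embed_tag("portrait.png", "tagged.png", "origin:studio-demo")
# print(extract_tag("tagged.png", len("origin:studio-demo")))
```

Extraction simply reads the same bits back, which hints at the “click a button to see where it came from” experience Kershaw describes; a real deployment would add keys, signatures, and robustness to compression, which is where the industry-wide standards he mentions come in.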

“There are many ways to deal with fakes, but the most important is to know the origin and authenticity of the media,” Castro said.