Microsoft may have ushered in a paradigm shift Tuesday with the release of new versions of its search engine, Bing, and web browser, Edge — both now powered by artificial intelligence.

Available in preview on Bing.com, the new offerings combine browsing and chat into a unified experience that makes both work better. When performing a search, for example, more relevant results are displayed, and for information such as sports scores, stock prices, and weather forecasts, results may appear without leaving the search page.

For more complex questions, such as “What can I substitute for eggs when baking a cake?”, Bing can synthesize an answer from multiple online sources and present a summary response.

Searchers can also chat with Bing to further refine a search and use it to help create content, such as travel itineraries or quizzes for trivia night.

The Edge browser, in addition to a facelift, gains an AI function for chatting and content creation. You can ask it to summarize long reports, pare them down to the essentials, or create a LinkedIn post from a few prompts.

“AI will fundamentally transform every software category, starting with the biggest category of all — search,” Microsoft Chairman and CEO Satya Nadella said in a statement.

Paradigm Shift

When you integrate AI with search, you can get the best of both worlds, said Bob O’Donnell, founder and principal analyst at Technalysis Research in Foster City, Calif., a technology market research and consulting firm.

“You can have the timeliness of a search index and the intelligence of natural language-based chat and summary tools,” O’Donnell told TechNewsWorld.

This video demos the new Bing Chat experience:

“What they’re doing is ultimately making the computer smarter,” he explained. “It enables it to deliver what you mean, not necessarily just what was said.”

“It’s going to take some time for people to get used to it, but it’s dramatically better,” he said. “Its time savings and efficiency are off the charts.”

“I think we are in the midst of a paradigm shift,” he said.

Ross Rubin, principal analyst at Reticle Research, a consumer technology advisory firm in New York City, explained that bringing AI into Bing is just the tip of a larger Microsoft strategy.

“It’s not just about Bing, which is the low-hanging fruit for the integration,” Rubin told TechNewsWorld. “They want to integrate AI into a lot of their products — Office, Teams, Azure.”

“It may help Bing in its long-standing competition with Google, but it’s really much more than that,” he said. “They wouldn’t have made this level of investment if it was about making Bing more effective.”

Google’s Bard

Microsoft’s action comes on the heels of Google announcing on Monday that it was bringing an AI conversational service called Bard to a group of “trusted testers.” Bard is based on Google’s natural language technology, LaMDA. Microsoft is using OpenAI technology in its offering.

Google and Alphabet CEO Sundar Pichai wrote in a company blog post that Bard “seeks to combine the breadth of the world’s knowledge with the power, intelligence and creativity of our large language models. It draws on information from the web to provide fresh, high-quality responses.”

He explained that Bard will initially be released with a lightweight model version of LaMDA. This much smaller model requires significantly less computing power, allowing Google to scale to more users and gather more feedback.


He added that Google will combine external feedback with its own internal testing to ensure that Bard’s responses meet a high bar for quality, safety, and groundedness in real-world information.

Pichai wrote that when people think of Google, they often think of quick factual answers, such as “How many keys are on a piano?” But increasingly, people are turning to Google for deeper insight and understanding — like, “Is piano or guitar easier to learn, and how much practice does each require?”

AI can be helpful in these moments, synthesizing insights for questions where there is no one right answer, he continued. Soon, AI-powered features in Search will distill complex information and multiple perspectives into easy-to-digest formats, so users can quickly understand the big picture and learn more from the web, whether that’s seeking out additional perspectives, such as blogs from people who play both piano and guitar, or going deeper on a related topic, such as the steps to get started as a beginner.

Pichai said that these new AI features will start rolling out on Google Search soon.

A Leg Up on the Leader

The question is, will “soon” be too late?

“Suddenly, the Microsoft search product is going to be much better than what Google has to offer,” said Rob Enderle, president and principal analyst at the Enderle Group, an advisory services firm in Bend, Ore.

“We’ll see how many people start making the switch,” Enderle told TechNewsWorld. “The switching cost between Bing and Google is non-existent. With switching costs so low, the question will be how many people switch to Bing and how badly Google will be hurt.”


“It will take time for Google to catch up,” he said. “In the meantime, people will be establishing habit patterns with Bing, and if people are happy with Bing, why go back to Google?”

He added, “This appears to be a well-executed stealth strategy to battle Google, and Google, for whatever reason, was not adequately prepared.”

Incorporating AI into search helps Microsoft get a leg up on Google, maintained Ed Anderson, research vice president and analyst at Gartner, a research and advisory firm based in Stamford, Conn.

“Microsoft beat Google to the punch in terms of bringing AI-assisted search to Bing and Edge,” Anderson told TechNewsWorld. “How quickly Google follows with its own search engine and browser remains to be seen.”

Rewriting the Rules of Search

O’Donnell believes the new Bing could make some headway against Google in the contest for eyeballs. “It’s the kind of thing that once you start exploring with this new type of engine, it becomes difficult to go back to the old one. It’s so much better,” he said.

“Microsoft is trying to rewrite the rules of the game,” Rubin said. “What is at risk is not only Google’s search leadership, but also its revenue model. Displacing search with an engine that can provide answers without redirecting you somewhere will require rethinking the entire search revenue model.”

However, Greg Sterling, co-founder of Near Media, a news, comment and analysis website, pointed out that not only does Google have a wealth of experience in AI, but it also has extensive resources that it has built up for search over the years.


“What Microsoft revealed is impressive, but it needs to be better than what Google shows users,” Sterling told TechNewsWorld. “It can’t be a little better. It has to be much better.”

“There is an opportunity here because of concerns about privacy, the user interface, and the quality of search results and ads,” he said. “There is an opening, but Microsoft needs to take advantage of those variables. It remains to be seen whether they can do that.”

Applying artificial intelligence to medical images can be beneficial to clinicians and patients, but developing the tools to do so can be challenging. Google announced on Tuesday that it is ready to take on that challenge with its new medical imaging suite.

“Google pioneered the use of AI and computer vision in Google Photos, Google Image Search, and Google Lens, and we are now making our imaging expertise, tools and technology available to healthcare and life science enterprises,” Alissa Hsu Lynch, global lead of Google Cloud MedTech Strategy and Solutions, said in a statement.

Jeff Cribbs, Gartner’s vice president and distinguished analyst, explained that health care providers who are looking to AI for diagnostic imaging solutions are typically forced into one of two choices.

“They can purchase software from a device manufacturer, image store vendor or a third party, or they can build their own algorithms with industry-agnostic image classification tools,” he told TechNewsWorld.

“With this release,” he continued, “Google is taking their low-code AI development tooling and adding substantial healthcare-specific acceleration.”

“This Google product provides a platform for AI developers and also facilitates image exchange,” said Ginny Torno, administrative director of innovation and IT clinical, ancillary and research systems at Houston Methodist in Houston.

“It is not unique to this market, but can provide opportunities for interoperability that a smaller provider is not capable of,” she told TechNewsWorld.

Suite Components

According to Google, the medical imaging suite addresses some common pain points when developing AI and machine learning models. Components in the suite include:

  • Cloud Healthcare API, which allows easy and secure data exchange using DICOMweb, an international standard for imaging. The API provides a fully managed, scalable, enterprise-grade development environment with automated DICOM de-identification. Imaging technology partners include NetApp, for seamless on-premises-to-cloud data management, and Change Healthcare, a cloud-native enterprise imaging PACS in clinical use by radiologists.
  • AI-assisted annotation tools from Nvidia and MONAI to automate the highly manual and repetitive task of labeling medical images, as well as native integration with any DICOMweb viewer.
  • Access to BigQuery and Looker to view and search petabytes of imaging data, perform advanced analytics, and create training datasets with zero operational overhead.
  • Vertex AI to accelerate the development of AI pipelines, building scalable machine learning models with up to 80% fewer lines of code required for custom modeling.
  • Flexible options for cloud, on-premises, or edge deployment to allow organizations to meet diverse sovereignty, data security, and privacy needs, while providing centralized management and policy enforcement with Google Distributed Cloud, enabled by Anthos.
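For developers, the common thread in these components is the DICOMweb standard: the Cloud Healthcare API exposes each DICOM store through predictable REST paths. As a rough sketch (not an official Google sample; the project, location, dataset, and store names below are hypothetical, and a real call requires Google Cloud credentials), a QIDO-RS "search for studies" request is simply an authenticated GET against a URL built from those resource names:

```python
# Sketch: building a DICOMweb (QIDO-RS) study-search URL for the
# Cloud Healthcare API. All resource names here are hypothetical.
BASE = "https://healthcare.googleapis.com/v1"

def dicomweb_studies_url(project: str, location: str, dataset: str, store: str) -> str:
    """Return the QIDO-RS 'search for studies' endpoint for a DICOM store."""
    return (
        f"{BASE}/projects/{project}/locations/{location}"
        f"/datasets/{dataset}/dicomStores/{store}/dicomWeb/studies"
    )

url = dicomweb_studies_url("my-project", "us-central1", "imaging-ds", "radiology-store")
print(url)

# A real request would attach an OAuth 2.0 bearer token, for example:
#   requests.get(url, headers={"Authorization": f"Bearer {token}",
#                              "Accept": "application/dicom+json"})
```

Because every store follows this path scheme, annotation tools, BigQuery exports, and Vertex AI pipelines can address imaging data uniformly.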

Full Deck of Tech

“One key differentiator of the medical imaging suite is that we are offering a comprehensive suite of technologies that support the process of delivering AI from start to finish,” Lynch told TechNewsWorld.

The suite offers everything from imaging data ingestion and storage to AI-assisted annotation tools to flexible model deployment options on the edge or in the cloud, she explained.

“We are providing solutions that will make this process easier and more efficient for health care organizations,” she said.

Lynch said the suite takes an open, standardized approach to medical imaging.

“Our integrated Google Cloud services work with a DICOM-standard approach, allowing customers to seamlessly leverage Vertex AI for machine learning and BigQuery for data discovery and analytics,” she added.

“By building everything around this standardized approach, we’re making it easier for organizations to manage their data and make it useful.”

Image Classification Solutions

The increasing use of medical imaging, coupled with manpower issues, has made the field ready for solutions based on artificial intelligence and machine learning.

“As imaging systems get faster, offering higher resolution and capabilities like functional MRI, it is harder for the infrastructure to maintain those systems and, ideally, stay ahead of what is needed,” Torno said.

“In addition, there is a reduction in the radiology workforce that complicates the personnel side of the workload,” she said.

Google Cloud Medical Imaging Suite

Google Cloud aims to make health care imaging data more accessible, interoperable and useful with its medical imaging suite (Image Credit: Google)


She explained that AI can identify issues found in an image from a learned set of images. “It may recommend a diagnosis that then only needs interpretation and confirmation,” she said.

“If the image detects a potentially life-threatening situation, it can also project the images to the top of a task queue,” she continued. “AI can also streamline workflows by reading images.”

Machine learning does for medical imaging what it did for facial recognition and image-based search. “Instead of identifying a dog, Frisbee or chair in a photograph, AI is identifying the extent of a tumor, bone fracture or lung lesion in a diagnostic image,” Cribbs explained.

Tools, Not Substitutes

Michael Arrigo, managing partner of No World Borders, a national network of expert witnesses on health care issues in Newport Beach, Calif., agreed that AI could help some overworked radiologists, but only if it is reliable.

“Data should be structured in ways that are usable and consumable by AI,” he told TechNewsWorld. “AI doesn’t work well with highly variable unstructured data in unpredictable formats.”

Torno said that many studies of AI accuracy have been done, with more to come.

“While there are examples of AI catching something a human didn’t, or being ‘just as good’ as a human, there are also examples where an AI misses something important, or isn’t sure what to interpret because there may be multiple problems with the patient,” she observed.

“AI should be seen as an efficiency tool to accelerate image interpretation and assist in emergent cases, but should not completely replace the human element,” she said.

Capacity for a Big Splash

With its resources, Google can have a significant impact on the medical imaging market. “Having a major player like Google in this area could facilitate synergy with other Google products already in place in healthcare organizations, potentially enabling more seamless connectivity to other systems,” Torno said.

“If Google focuses on this market segment, they have the resources to make a splash,” she continued. “There are already many players in this area. It will be interesting to see how this product can take advantage of other Google functionality and pipelines and become a differentiator.”

Lynch pointed out that with the launch of the medical imaging suite, Google hopes to help accelerate the development and adoption of AI for imaging by the health care industry.

“AI has the potential to help reduce the burden for health care workers and improve and even save people’s lives,” she said.

“By offering our imaging tools, products and expertise to healthcare organizations, we are confident that the market and patients will benefit,” she added.

Apple on Wednesday refreshed its iPhone, Watch and AirPods product lines at an online event, as well as introduced a new Ultra watch for activity in challenging environments.

The Apple Watch Ultra is designed to operate in extreme cold and hot environments as well as under 130 feet of water. It’s housed in a rugged titanium case and its face is made of tough sapphire crystal.

The Ultra also has an “action” button that is programmable, an 85-decibel siren for emergencies, and its controls are designed for use with gloves. It has three microphones for voice call clarity, even in windy conditions.

Apple Watch Ultra Programmable Action Button

Apple Watch Ultra Programmable Action Button (Image Credit: Apple)


Cellular is built into the Ultra, which has a battery life of 36 hours on a single charge, though Apple promised that would be increased to 60 hours with new battery optimization software, available this fall.

In addition, the Ultra supports dual-frequency GPS and can be turned into a scuba diving computer with the Oceanic+ app.

The Ultra will sell for US$799 and will be available on September 23.

“I’m a scuba diver, and I’ve had several dive watches that were over $1,000,” said Tim Bajarin, president of Creative Strategies, a technology consulting firm in Campbell, Calif.

“Here you’re getting a dive watch and everything else, plus cellular, for $799,” he told TechNewsWorld. “It’s a really good deal.”

“It will be a huge hit among extreme sports enthusiasts and scuba divers,” he said.

Crash Detection

Ross Rubin, principal analyst at Reticle Research, a consumer technology consulting firm in New York City, calls Ultra a “statement product.”

“This shows that the Apple Watch can be used in harsh environments by people engaged in extreme activities,” he told TechNewsWorld.

“It’s not something that mainstream users need, but you can see how some of the technology, like the action button, could filter down to future iterations of other models,” he said.

Apple also rolled out its Series 8 Apple Watch at its pre-taped event. The watch sports a new temperature sensor, which women can use to track ovulation. Ovulation is usually measured with a thermometer and a journal. The new watch makes that task a lot easier and more convenient by providing automatic retrospective ovulation estimates.

“The Apple Watch has turned into a digital-health wearable solution,” Mark N. Vena, president and principal analyst at SmartTech Research in San Jose, Calif., told TechNewsWorld.

The watch, as well as the new iPhones, also has a new crash detection feature. When an accident is detected, the watch will automatically connect to emergency services, provide the location of the accident, and notify emergency contacts on the watch.

To enable the feature, Apple added two new motion sensors to the Series 8: an improved three-axis gyroscope and a higher G-force accelerometer. The accelerometer can measure up to 256 Gs, allowing it to detect the extreme forces of a crash.

Apple Watch Crash Detection Feature

Crash Detection uses an advanced sensor-fusion algorithm and a new, more powerful accelerometer and gyroscope in the Apple Watch to detect and deliver car crash alerts. If the user does not respond after a 10-second countdown, the device will dial emergency services. (Image Credit: Apple)


This feature operates only when a user is in a moving vehicle and collects data only around the time of a potential accident.

“The Apple Watch is not only for health, but also for safety,” Bajarin said.

Series 8 watches have a battery life of 18 hours, but with the new Low Power Mode feature, that can be extended to up to 36 hours.

The Series 8 will be available on September 16. The GPS version costs $399, and the cellular version costs $499.

Satellite SOS

During its online event, Apple also introduced iPhone 14 Pro with 6.1-inch display and Pro Max with 6.7-inch display.

The Pro models, as well as the 14 and 14 Plus models, support a new emergency SOS service via satellite. The new iPhone models have special antennas built into them so they can send text messages to high-flying satellites.

To aid in the connection, software on the phone shows users where to point their phone to connect to a satellite and how to keep it aimed as the satellite moves across the sky.

The iPhone 14 line features emergency SOS via satellite

The iPhone 14 lineup offers Emergency SOS via satellite. This feature enables users to message with emergency services when cellular or Wi-Fi coverage is not available. (Image Credit: Apple)


Due to the narrow bandwidth involved in satellite communications, Apple created a custom, short-text compression algorithm to reduce the average size of a message by a factor of three.

Initially, satellite service will be limited to the United States and Canada, where it will be offered free of charge for two years.

“The satellite phone feature will have tremendous value for a lot of people,” Vena said.

“It would have been the perfect call for Tom Hanks in the movie Cast Away,” he joked. “If he had the iPhone 14, though, it would have been a very short film.”

Bajarin noted that Apple is blazing a new path in personal safety and security. “It’s going to be a big part of their DNA going forward,” he said.

“The satellite feature in the iPhone 14 Pro sets a new precedent in smartphones,” he added.

DSLR Challenger

Apple has also upgraded the cameras in the new Pro model. The main camera at the rear of the unit houses a 48-megapixel sensor with support for a fast f/1.78 lens, while the front camera houses a 12MP sensor with an f/1.9 lens. The main camera also supports the ProRaw format.

The Pro model also has a new chip, the A16. “With the A16, Apple has the most powerful smartphone camera on the market right now,” Bajarin said.

“The 48 megapixels and ProRaw capabilities bring the iPhone 14 Pro series very close to a true DSLR,” he said.

Vena noted that Apple is aiming beyond parity with digital single-lens reflex cameras. “It’s clear that Apple doesn’t want the camera in its phone to merely be on par with a DSLR; they want it to be better than a DSLR in many ways,” he observed.

He said the upgrade to the front camera on the new iPhone represents another change in Apple’s thinking about the cameras in its phones. “It used to be that the back camera was the most important camera,” he said. “They now view the front camera and the back camera with equal importance.”

Apple iPhone 14 Pro and iPhone 14 Pro Max 48MP Camera

The iPhone 14 Pro and iPhone 14 Pro Max have the first 48MP camera on an iPhone. (Image Credit: Apple)


Apple has made great strides in video stabilization with the Pro and Max models. “You can literally take video with very little image movement,” Rubin said.

The much-maligned notch on the front of the Pro and Pro Max models has been transformed into a dynamic notification center. “They’ve turned it into an asset rather than a liability,” Rubin said.

The iPhone 14 and iPhone 14 Plus will be available in five colors: Midnight, Blue, Starlight, Purple and (Product) Red. The iPhone 14 Pro and iPhone 14 Pro Max will be available in four colors: Space Black, Silver, Gold and Deep Purple.

The iPhone 14 Pro will sell for $999; the Pro Max for $1,099. The iPhone 14 will sell for $799; the 14 Plus for $899. All will be available on September 16.

Better AirPods

Apple also introduced a new version of its AirPods Pro, which features a new H2 chip, longer battery life, and noise cancellation technology that’s up to two times more efficient than its predecessor at blocking out noise.

It’s also improved the earbuds’ “Transparency” mode by filtering out some of the disruptive noise that would normally be heard with noise-canceling. “I haven’t seen other earbuds do this,” Rubin observed. “It addresses one of the disadvantages of transparency mode. You let the whole world in, including harsh noises.”

The new AirPods Pro will go on sale on September 23 for $249.

Apple AirPods Pro

AirPods Pro brings upgrades in transparency mode, spatial audio, and convenience features while canceling twice as much noise as its predecessor. A new low-distortion audio driver and custom amplifier deliver rich bass and crystal-clear sound. A new extra-small ear tip could provide a better fit for more users. (Image Credit: Apple)


“Most of these devices are just upgrades,” said Jim McGregor, founder and principal analyst at Tirias Research, a high-tech research and advisory firm in Phoenix.

“There’s really only one new usage model, and that’s the Apple Watch Ultra,” he continued. “Some of the features they’re advertising on the iPhone 14 and iPhone 14 Plus have been on Android phones for three or four generations.”

“It’s becoming increasingly difficult for Apple and everyone else to justify these annual upgrades,” he said. “However, I will give Apple credit. I am happy to see that they are offering these products in a reasonable price range, and they are not trying to offer a $2,000 smartphone.”