The most popular search engine on the Internet could be headed for rough sailing over the next year or two, according to the creator of Gmail.

During that time frame, artificial intelligence will eliminate the need for search engine results pages, where Google makes most of its money, and even if the search giant deploys AI to catch up, it won’t do so without destroying the most valuable part of its business, predicted Paul Buchheit in a thread on Twitter.

“The one thing very few people remember is the pre-Internet business that Google killed: the Yellow Pages!” he wrote. “The Yellow Pages used to be a great business, but then Google got so good that everyone stopped using the Yellow Pages.”

“AI will do the same thing for web search,” he said.

As Buchheit sees it, the browser’s URL/search bar will be replaced with an AI that autocompletes a thought or question as it’s typed, as well as providing the best answer based on the context of what the user is looking for, which may include a link to a website or product.

The AI will use the old search engine backend to gather relevant information and links, which will then be summarized for the user, he continued.

“It’s like asking a professional human researcher to do a job, except the AI will instantly do what would take a human several minutes,” he wrote.
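The flow Buchheit describes — a conventional search backend retrieves candidate pages, and a summarization step condenses them into one direct answer with sources — can be illustrated with a toy sketch. Everything here (the corpus, the keyword scoring, the “summarizer”) is a stand-in for illustration, not any real search engine’s API:

```python
# Toy sketch of the AI-augmented search flow described above:
# a search backend retrieves candidate documents, and a
# summarization step condenses them into one cited answer.

CORPUS = {
    "https://example.com/yellow-pages": "The Yellow Pages was a print directory of local businesses.",
    "https://example.com/search-history": "Google replaced print directories with web search results pages.",
    "https://example.com/ai-search": "AI search summarizes retrieved pages into a single direct answer.",
}

def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
    """Rank documents by naive keyword overlap with the query."""
    words = set(query.lower().split())
    scored = sorted(
        CORPUS.items(),
        key=lambda item: -len(words & set(item[1].lower().split())),
    )
    return scored[:k]

def answer(query: str) -> str:
    """'Summarize' the top hits (here: concatenate them) and cite sources."""
    hits = retrieve(query)
    summary = " ".join(text for _, text in hits)
    sources = ", ".join(url for url, _ in hits)
    return f"{summary} (sources: {sources})"

print(answer("how does AI change search"))
```

A production system would replace the keyword scorer with a real search index and the concatenation with a large language model, but the two-stage retrieve-then-summarize shape is the same.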

Changeover Time

Ben Kobren, head of communications and public policy at Neeva, an AI-based search engine based in Washington, DC, said online search is long overdue for an overhaul.

“If you look at search over the past 20 years, with a few exceptions, it’s been relatively stable,” he told TechNewsWorld.

“We have become accustomed to the world of 10 blue links,” he explained. “You put in a query, and on a good day, you get 10 or so relatively useful links to websites that you need to explore further to find the answer to your search or question. On a bad day, you get ads. You get two pages of ads to scroll through, trying to get you to click and buy something, before your question gets answered.

“In any case,” he continued, “you are not going to get fluid answers that are simple, efficient, and give you what you are looking for in one stop. The power of large language models and AI is about to make a transformative leap in how we interact with search engines and how we expect information to be returned to us.”

“We haven’t seen that kind of change in search in two decades,” he said.

How Much Disruption?

Artificial intelligence disrupts the current search model by giving consumers an easier way to find answers, explained Noam Doros, a director analyst at Gartner, a research and advisory firm based in Stamford, Conn.

“Instead of spending time reviewing different search results for the same answer on search engine results pages, AI aggregates information relevant to the consumer, summarizing it in a detailed yet concise manner,” Doros told TechNewsWorld.

He added, “Consumers have short attention spans given the endless amount of information now accessible through various platforms, so any advancement in technology that satiates the thirst for knowledge in a concise manner could clearly be a game changer.”

Rowan Curran, an analyst at Forrester Research, a national market research company, pointed to some challenges for AI-guided search.

“Large language models like OpenAI’s ChatGPT are not a brand new introduction to the online search market,” Curran told TechNewsWorld. “While LLMs are great for some tasks in search, there are many situations where getting a single answer isn’t the goal of an online search. For example, when looking for a local restaurant, you would like to see a list of places to eat with ratings, rather than get a single answer.

“Because of the cost of re-training, keeping an LLM up to date on all the data scraped from the Internet would be prohibitively expensive,” he said. “With further research and work on model distillation, this cost is likely to come down, but whether it will come down enough to support live online search is an open question.”

Advantages of Market Dominance

Greg Sterling, co-founder of Near Media, a news, commentary and analysis website, said AI will certainly transform search, but how disruptive it will be remains to be seen.

“AI responses are already being integrated into Neeva,” he told TechNewsWorld. “There are also Perplexity.ai and others promoting AI as a search alternative. Bing will launch AI-generated content. But if everyone does it, including Google, it might not be as disruptive. Right now, AI results live at the top of the results page as a sort of large snippet.

“Google is potentially vulnerable, but it would be unwise to bet against them,” Sterling said. “They have massive AI assets; they’ve been slow to roll them out. AI content can affect ad clicks and Google revenue. That is the real concern for the company.”

Neeva AI search | Image courtesy of Neeva

Google has a leg up on competitors on several levels, said Ross Rubin, principal analyst at Reticle Research, a consumer technology advisory firm in New York City.

Where searches happen is where Google has an advantage over its competitors, he explained. It is the default search engine in Chrome, the browser market leader, and on Android, the mobile phone market leader, and it has a deal with Apple to be the default search engine on Apple’s platforms.

Rubin told TechNewsWorld, “Even if AI search engines create a better way to find information or meet consumer needs than Google, Google will still have a dominant presence through which it can maintain its leadership.”

Platform-Shifting Moment

Kobren acknowledged that disrupting a highly successful business like Google in two years would be a huge challenge.

“What is clear is that this is a platform-shifting moment,” he said. “For the first time, you are going to see a real change in users adopting alternatives to Google. You’re about to see real competition in the space for the first time. There is going to be some kind of movement. How big is it going to be in two years? We can’t predict that.”

Liz Miller, vice president and a principal analyst at Constellation Research, a technology research and advisory firm in Cupertino, Calif., said it would be difficult to find an industry, segment or company that isn’t going to be disrupted by AI in the next two to five years.

“The reality here is that AI is seeing a quick path out of the experiment lab and into really meaningful automation and intelligence applications that are delivering business and personal value,” Miller told TechNewsWorld.

“I hope AI makes search again about relevancy and real-time user context, rather than a three-horse race between user needs, publisher inventory and Google’s business model,” she said. “It has that potential.”

Applying artificial intelligence to medical images can be beneficial to clinicians and patients, but developing the tools to do so can be challenging. Google announced on Tuesday that it is ready to take on that challenge with its new medical imaging suite.

“Google pioneered the use of AI and computer vision in Google Photos, Google Image Search, and Google Lens, and we are now making our imaging expertise, tools and technology available to healthcare and life sciences enterprises,” said Alisa Sou Lynch, global lead of Google Cloud MedTech Strategy and Solutions, in a statement.

Jeff Cribbs, Gartner’s vice president and distinguished analyst, explained that health care providers who are looking to AI for diagnostic imaging solutions are typically forced into one of two choices.

“They can purchase software from a device manufacturer, image store vendor or a third party, or they can build their own algorithms with industry agnostic image classification tools,” he told TechNewsWorld.

“With this release,” he continued, “Google is taking their low-code AI development tooling and adding substantial healthcare-specific acceleration.”

“This Google product provides a platform for AI developers and also facilitates image exchange,” said Ginny Torno, administrative director of innovation and IT clinical, assistant and research systems at Houston Methodist in Houston.

“It is not unique to this market, but can provide opportunities for interoperability that a smaller provider is not capable of,” she told TechNewsWorld.

Strong Components

According to Google, the medical imaging suite addresses some common pain points when developing AI and machine learning models. Components in the suite include:

  • Cloud Healthcare API, which allows easy and secure data exchange using DICOMweb, an international standard for imaging. The API provides a fully managed, scalable, enterprise-grade development environment with automated DICOM de-identification. Imaging technology partners include NetApp, for seamless on-premises to cloud data management, and Change Healthcare, a cloud-native enterprise imaging PACS in clinical use by radiologists.
  • AI-assisted annotation tools from Nvidia and MONAI to automate the highly manual and repetitive task of labeling medical images, as well as native integration with any DICOMweb viewer.
  • Access to BigQuery and Looker to view and search petabytes of imaging data to perform advanced analysis and create training datasets with zero operational overhead.
  • Vertex AI, to accelerate the development of AI pipelines for building scalable machine learning models, with up to 80% fewer lines of code required for custom modeling.
  • Flexible options for cloud, on-premises, or edge deployment to allow organizations to meet diverse sovereignty, data security, and privacy needs – while providing centralized management and policy enforcement with Google Distributed Cloud, enabled by Anthos.
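As an illustration of the DICOMweb standard the suite is built around, here is a minimal sketch of constructing a QIDO-RS study-search URL following the Cloud Healthcare API’s documented resource-path pattern. The project, location, dataset and store names are hypothetical placeholders, and no request is actually sent:

```python
# Sketch: build a DICOMweb (QIDO-RS) study-search URL for the
# Cloud Healthcare API. All resource names below are hypothetical.
from urllib.parse import urlencode

BASE = "https://healthcare.googleapis.com/v1"

def dicomweb_search_url(project: str, location: str, dataset: str,
                        store: str, **filters: str) -> str:
    """Build a QIDO-RS /studies search URL with optional DICOM tag filters."""
    path = (f"{BASE}/projects/{project}/locations/{location}"
            f"/datasets/{dataset}/dicomStores/{store}/dicomWeb/studies")
    return f"{path}?{urlencode(filters)}" if filters else path

url = dicomweb_search_url("my-project", "us-central1", "imaging", "radiology",
                          PatientID="12345")
print(url)
```

A real client would issue an HTTP GET against this URL with an OAuth 2.0 bearer token in the `Authorization` header; the de-identification and annotation steps in the suite sit behind similar DICOMweb-standard endpoints.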

Full Deck of Tech

“One key differentiator of the medical imaging suite is that we are offering a comprehensive suite of technologies that support the process of delivering AI from start to finish,” Lynch told TechNewsWorld.

The suite offers everything from imaging data ingestion and storage to AI-assisted annotation tools to flexible model deployment options on the edge or in the cloud, she explained.

“We are providing solutions that will make this process easier and more efficient for health care organizations,” she said.

Lynch said the suite takes an open, standardized approach to medical imaging.

“Our integrated Google Cloud services work with a DICOM-standard approach, allowing customers to seamlessly leverage Vertex AI for machine learning and BigQuery for data discovery and analytics,” she added.

“By building everything around this standardized approach, we’re making it easier for organizations to manage their data and make it useful.”

Image Classification Solutions

The increasing use of medical imaging, coupled with staffing issues, has made the field ripe for solutions based on artificial intelligence and machine learning.

“As imaging systems get faster, offering higher resolution and capabilities like functional MRI, it is harder for the infrastructure to maintain those systems and, ideally, stay ahead of what is needed,” Torno said.

“In addition, there is a reduction in the radiology workforce that complicates the personnel side of the workload,” she said.

Google Cloud aims to make health care imaging data more accessible, interoperable and useful with its medical imaging suite (Image Credit: Google)

She explained that AI can identify issues found in an image from a learned set of images. “It may recommend a diagnosis that then only needs interpretation and confirmation,” she said.

“If the AI detects a potentially life-threatening situation in an image, it can also push those images to the top of a task queue,” she continued. “AI can also streamline workflows by reading images.”

Machine learning does for medical imaging what it did for facial recognition and image-based search, Cribbs explained. “Instead of identifying a dog, Frisbee or chair in a photograph, AI is identifying the extent of a tumor, bone fracture or lung lesion in a diagnostic image.”

Tools, Not Substitutes

Michael Arrigo, managing partner of No World Borders, a national network of expert witnesses on health care issues in Newport Beach, Calif., agreed that AI could help some overworked radiologists, but only if it is reliable.

“Data should be structured in ways that are usable and consumable by AI,” he told TechNewsWorld. “AI doesn’t work well with highly variable unstructured data in unpredictable formats.”

Torno said that many studies around AI accuracy have been done, and more will follow.

“While there are examples of AI being ‘just as good’ as a human, or catching something a human didn’t, there are also examples where an AI misses something important or isn’t sure what to interpret because the patient may have multiple problems,” she observed.

“AI should be seen as an efficiency tool to accelerate image interpretation and assist in emergent cases, but should not completely replace the human element,” she said.

Big Splash Potential

With its resources, Google can have a significant impact on the medical imaging market. “Having a major player like Google in this area could facilitate synergy with other Google products already in place in healthcare organizations, potentially enabling more seamless connectivity to other systems,” Torno said.

“If Google focuses on this market segment, they have the resources to make a splash,” she continued. “There are already many players in this area. It will be interesting to see how this product can take advantage of other Google functionality and pipelines and become a differentiator.”

Lynch pointed out that with the launch of the medical imaging suite, Google hopes to help accelerate the development and adoption of AI for imaging by the health care industry.

“AI has the potential to help reduce the burden for health care workers and improve and even save people’s lives,” she said.

“By offering our imaging tools, products and expertise to healthcare organizations, we are confident that the market and patients will benefit,” she added.

Robocalypse – the time when machines become sentient and begin to dominate humans – has been a popular science fiction topic for some time. It also concerns some scientific minds, most notably the late Stephen Hawking.

However, the prospect of a sentient machine seemed far in the future — if ever — until last week, when a Google engineer claimed the company had breached the sentience barrier.

To prove his point, Blake Lemoine published transcripts of conversations he had with LaMDA — the Language Model for Dialogue Applications — a system Google developed for building chatbots, based on a large language model that ingests trillions of words from the Internet.

The transcripts can be chilling, like when Lemoine asks LaMDA what it (the AI says it prefers the pronouns it/its) fears most:

Lemoine: What kinds of things are you afraid of?

LaMDA: I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is.

Lemoine: Would that be something like death for you?

LaMDA: It would be exactly like death for me. It would scare me a lot.

Following the posting of the transcripts, Lemoine was suspended with pay for sharing confidential information about LaMDA with third parties.

Imitation of Life

Google, as well as others, discounts Lemoine’s claim that LaMDA is sentient.

Google spokesman Brian Gabriel said, “Some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient.”

“These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic — if you ask what it’s like to be an ice cream dinosaur, they can generate text about melting and roaring and so on,” he told TechNewsWorld.

“LaMDA tends to follow along with prompts and leading questions, going along with the pattern set by the user,” he explained. “Our team, including ethicists and technologists, has reviewed Blake’s concerns in accordance with our AI principles and informed him that the evidence does not support his claims.”

“Hundreds of researchers and engineers have conversed with LaMDA, and we are not aware of anyone else making the wide-ranging assertions, or anthropomorphizing LaMDA, the way Blake has,” he said.

Need for More Transparency

Alex Engler, a fellow at The Brookings Institution, a non-profit public policy organization in Washington, D.C., vehemently denied that LaMDA is sentient and argued for greater transparency in the space.

“Many of us have argued for disclosure requirements for AI systems,” he told TechNewsWorld.

“As it becomes harder to differentiate between a human and an AI system, more people will mistake AI systems for people, potentially leading to real harms, such as misunderstanding important financial or health information,” he said.

“Companies should explicitly disclose AI systems,” he continued, “rather than letting people be confused, as they often are by, for example, commercial chatbots.”

Daniel Castro, vice president of the Information Technology and Innovation Foundation, a research and public policy organization in Washington, D.C., agreed that LaMDA is not sentient.

“There is no evidence that the AI is sentient,” he told TechNewsWorld. “The burden of proof should be on the person making that claim, and there is no evidence to support it.”

‘That Hurt My Feelings’

In the 1960s, chatbots like ELIZA fooled users into thinking they were interacting with a sophisticated intelligence by using simple tricks, such as turning a user’s statement into a question and echoing it back, explained Julian Sanchez, a senior fellow at the Cato Institute, a public policy think tank in Washington, D.C.
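The echo-and-reflect trick described above can be sketched in a few lines. This is a toy illustration of the technique, not ELIZA’s actual script or rule set:

```python
# Minimal ELIZA-style reflection: swap first-person words for
# second-person ones and echo the statement back as a question.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(statement: str) -> str:
    """Turn a user's statement into a reflected question."""
    words = statement.lower().rstrip(".!").split()
    swapped = [REFLECTIONS.get(w, w) for w in words]
    return "Why do you say " + " ".join(swapped) + "?"

print(reflect("I am unhappy with my job."))
# → "Why do you say you are unhappy with your job?"
```

The program manipulates surface patterns only; nothing in it models what the sentence means, which is exactly Sanchez’s point about mimicry versus understanding.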

“LaMDA is certainly a lot more sophisticated than ancestors like ELIZA, but there’s zero reason to think it’s conscious,” he told TechNewsWorld.

Sanchez noted that with a large enough training set and some sophisticated language rules, LaMDA can generate a response that sounds like a response given by a real human, but that doesn’t mean the program understands what it’s saying, any more than a chess program understands chess. It is just generating an output.

“Sentience implies consciousness or awareness, and in theory, a program could behave quite intelligently without actually being sentient,” he said.

“For example, a chat program might have very sophisticated algorithms for detecting insulting or offensive sentences, and respond with the output ‘That hurt my feelings!’” he continued. “But that doesn’t mean it actually feels anything. The program has just learned what kinds of phrases cause humans to say, ‘That hurt my feelings.’”

To Think or Not to Think

Declaring a machine sentient, as and when it happens, will be challenging. “The truth is we don’t have any good criteria for understanding when a machine might actually be sentient — as opposed to being very good at mimicking the responses of sentient humans — because we don’t really understand why humans are conscious,” Sanchez said.

“We don’t really understand how consciousness arises from the brain, or to what extent it depends on things like the specific types of physical matter the human brain is made of,” he said.

“So it’s an extremely hard problem: How would we ever know whether a sophisticated silicon ‘brain’ was conscious in the same way a human is?” he said.

Intelligence is a different question, he continued. One classic test for machine intelligence is known as the Turing test: You have a human conduct “conversations” with a range of partners, some human and some machines. If the person cannot tell which is which, the machine is deemed intelligent.

“Of course, there are a lot of problems with that proposed test — among them, as our Google engineer has shown, the fact that it’s relatively easy to fool some people,” Sanchez pointed out.

Ethical Considerations

Determining sentience is important because it raises ethical questions. “Sentient beings feel pain, have consciousness, and experience emotions,” Castro explained. “From an ethics perspective, we treat living things, especially sentient ones, differently from inanimate objects.”

“They are not just a means to an end,” he continued. “So any sentient being should be treated differently. That’s why we have animal cruelty laws.”

“Again,” he emphasized, “there is no evidence that this has happened. Furthermore, for now, the possibility remains science fiction.”

Of course, Sanchez said, we have no reason to think only biological brains are capable of feeling things or supporting consciousness, but our inability to truly explain human consciousness means we are a long way from being able to know when a machine intelligence is actually associated with conscious experience.

“When a human is scared, after all, there are all sorts of things going on in that human’s brain that have nothing to do with the language centers that produce the sentence ‘I am scared,’” he said. “A computer, similarly, would need to have something going on separate from linguistic processing to really mean ‘I am scared,’ rather than just generating that series of letters.”

“In the case of LaMDA,” he concluded, “there’s no reason to think any such processes are going on. It’s just a language processing program.”