For the past few years, Qualcomm has aggressively developed and integrated generative AI capabilities across its extensive semiconductor line. For anyone who has missed the news cycle, generative AI uses machine learning algorithms to produce new, original content, such as photos, illustrations, video and music, based on pre-existing data.

Qualcomm’s generative AI strategy leverages this technology to improve various aspects of its products and services. The company says its technologies can execute a wide range of compelling use cases, and that doing so locally on a smartphone adds far more value, especially from a cost-per-query and scalability perspective.

With that as a backdrop, let’s discuss Qualcomm’s ability to create hybrid AI functions that extend from device to cloud.

Qualcomm began talking about this capability earlier this year at Mobile World Congress, where it demonstrated Stable Diffusion, a deep-learning text-to-image model released in 2022, running on a smartphone. Getting there required specific, specialized hardware modifications and substantial software adjustments.

The main application of Stable Diffusion is generating detailed images from text descriptions. It can also be used for other tasks, such as image repair (inpainting) and extending AI-generated imagery beyond the boundaries of the original image (outpainting).

It is important to note that parameters are the fundamental building blocks of the machine learning models that enable functional Gen-AI applications; they are the values a model learns from its training data. In general, the relationship between parameter count and sophistication holds surprisingly well in the language domain. The number of parameters historically estimated as necessary for Gen-AI-style apps was in the 10 billion region.
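
To make the term concrete, here is a minimal sketch, using PyTorch, of how parameters are counted in a toy neural network. The model and layer sizes are invented for illustration; production Gen-AI models reach billions of parameters by stacking far larger layers of the same kind.

```python
# A toy network to show what "parameters" means in practice.
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(512, 2048),  # weight matrix (512 x 2048) plus 2048 biases
    nn.ReLU(),
    nn.Linear(2048, 512),
)

total = sum(p.numel() for p in model.parameters())
print(f"{total:,} trainable parameters")  # ~2.1 million for this toy model
```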

Qualcomm’s AI silicon brings artificial intelligence capabilities to edge devices, including mobile phones, tablets and PCs. (Image credit: Qualcomm)


Stable Diffusion for On-Device AI

According to Qualcomm, its implementation of Stable Diffusion requires only about 1 billion parameters, which squeezes into a device the size of a smartphone. This enables users to enter a text query and generate an image locally, without using the smartphone’s internet connection.
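
For readers who want to see what that workflow looks like in code, here is a rough sketch of running the open-source Stable Diffusion release locally with Hugging Face’s diffusers library. This is not Qualcomm’s on-device stack, which relies on its own quantized Snapdragon runtime; it only illustrates the same generate-an-image-from-text-locally idea on a PC.

```python
# Illustrative only: the open-source Stable Diffusion pipeline, not
# Qualcomm's Snapdragon port. The model id may have moved since writing.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # the ~1B-parameter class of model
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # or "cpu"; once the weights are downloaded,
                        # generation needs no network connection

image = pipe("a mountain lake at sunrise, watercolor style").images[0]
image.save("result.png")
```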

Since Qualcomm’s demo was operating in airplane mode, all the data needed to create that image from the text query had to be stored on the device. Stable Diffusion is the go-to model for Qualcomm because of its sheer size and its training on huge amounts of data: it understands concepts vast in scope and is not limited to a particular or narrow set of topics.

Currently, Qualcomm claims to be the only firm to get this model working on Android-based devices. Model parameter counts are also shrinking, enabling compelling Gen-AI apps to operate on a single device. Extend this trend, and comparable generative AI use cases can be demonstrated on all types of mobile devices.

From a platform perspective, scalability is the name of the game for Qualcomm, as few other businesses have a comparable legacy in the end-user device ecosystem. Qualcomm’s installed Snapdragon base is now more than 2 billion devices, many without internet connectivity.

Gen AI can now run on mobile devices without internet connectivity. (Image credit: Qualcomm)


Benefits of Qualcomm’s Generative AI Approach

Qualcomm has distinct advantages thanks to its history in the smartphone industry, even though Nvidia often dominates the news in the AI sector.

Qualcomm can use its generative AI to create more immersive and realistic content that improves the user experience. For example, augmented reality (AR) applications can generate high-quality photos and videos that make the experience richer and more interactive.

Additionally, Qualcomm’s capabilities give businesses essential advantages in product testing and development. Using generative AI, Qualcomm can simulate and build realistic models for testing, which can speed up the design process, save costs, and increase the effectiveness of product development.

In addition, Qualcomm’s OEMs can benefit from the untapped potential of personalization in the realm of AI, while Qualcomm solutions can provide consumers with tailored experiences that leverage generative AI.

It’s easy to see how Qualcomm’s solutions could contribute by creating specialized suggestions, unique user interfaces, or customizable answers based on individual preferences and behavior patterns.

Qualcomm Should Tell Us More

As most of my readers know, I have been raising awareness of the ethical issues surrounding generative artificial intelligence. Generative AI raises a number of ethical issues, particularly in light of deepfakes and the potential exploitation of AI-generated content. Qualcomm must ensure that users of its generative AI technology act ethically and within the limits of the law.

There are reasons to worry.

When I recently asked the CEO of a text-to-image Gen-AI program whether company terms and conditions mandated that created content include permanent watermarks or metatag fingerprints, he shrugged and answered in the negative.


At a recent technology conference, a prominent CEO touted the possibility of Gen-AI-style applications handling the laborious task of employee performance assessments. The number of lawsuits that would follow is unimaginable.

Still, on a recent analyst call with Qualcomm, the company seemed to understand that it needs to take on an ethical leadership role in this area, suggesting that it will discuss the topic in significantly more detail at subsequent conferences.

The company acknowledges that it wants consumers to maximize the Gen-AI capabilities on its devices. Yet it also asserts how important it is to differentiate between original content and content modified by generative AI.

It’s not hard to imagine facial authentication, for example, playing an important role on this front, and other biometric hardware features could also be useful.

A Brave New World, but Will We Be Safe?

It’s undeniable that Qualcomm’s emphasis on AI and continued work to integrate this capability into the company’s extensive silicon portfolio has the potential to completely transform the tech landscape as we know it. The productivity and time-saving benefits are real, significant, and almost incalculable.

The potential is enormous because Qualcomm can now robustly run these types of apps on smartphones and other mobile devices, including PCs, without an internet connection. Unfortunately, the potential for information misuse and privacy invasion is just as evident.

To mitigate these concerns, Qualcomm must protect user data, comply with strict privacy laws, and ensure that any personally identifiable information (PII) in the data used to develop or deploy generative AI models is appropriately anonymized.
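
As a rough illustration of what such anonymization might look like in practice, here is a minimal sketch that redacts a few common PII patterns before text is reused. The patterns and placeholder tags are my own assumptions; real pipelines use far more thorough, NER-based redaction.

```python
import re

# Illustrative PII redaction: patterns and tags are assumptions, not any
# vendor's actual pipeline.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def anonymize(text: str) -> str:
    # Replace each matched pattern with a neutral placeholder tag.
    for tag, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{tag}]", text)
    return text

print(anonymize("Reach Jane at jane.doe@example.com or 555-867-5309."))
# -> "Reach Jane at [EMAIL] or [PHONE]."
```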

In addition, before collecting or using user data for generative AI purposes, Qualcomm must obtain the user’s explicit consent. Open communication about data use, sharing and collection processes is essential to maintaining users’ trust.

Safety and Ethical Challenges

Qualcomm must implement robust security measures to protect user data from unwanted access, breaches and potential misuse, especially in the context of generative AI. Access restrictions, encryption, and regular security audits are all part of this. Qualcomm can increase user trust and ensure that its Gen-AI solutions respect user privacy by including a thorough privacy plan.

I also advocate that Qualcomm mandate that its OEM partners, who incorporate its generative AI solutions into their consumer goods, disclose to consumers when AI creates any content on such devices.


There will be a tendency to put the burden of this disclosure entirely on the device manufacturers, who would in turn push that obligation onto end users. Still, I’d like to see Qualcomm take a public leadership position on this topic.

Sadly, over-reliance on generative AI technology may lead to an undervaluation of human creativity and intuition.

I am horrified by the prospect that images and videos created by generative AI will likely be used by both sides of the aisle in the upcoming presidential election, making it nearly impossible to tell fact from fiction.

Qualcomm must strike a balance between automation and human engagement to ensure the creation of novel and valuable solutions. This aspect of generative AI is an opportunity for Qualcomm.

The adoption of generative AI is potentially more significant than the introduction of the internet. The technology is still in its infancy in most creative efforts and is not as efficient as it will be by the end of the decade.

Generative AI will force us to rethink how we communicate, how we collaborate, how we create, how we solve problems, how we govern, and even how we travel – and this is far from an exhaustive list. I expect that once this technology matures, the list of things it hasn’t changed will be much smaller than the list of things it has.

This week, I want to focus on three things we should start discussing that represent some of the biggest risks generative AI poses. I am not against the technology, nor am I so foolish as to suggest it be stopped, because stopping it now would be impossible.

What I suggest is that we start looking at mitigating these problems before they do substantial damage. The three problems are data center loading, security, and damage to interpersonal relationships.

We’ll end with our product of the week, which just might be the best electric SUV to hit the market. I’m suddenly in the market for a new electric car, but more on that later.

Data Center Loading

Despite all the hype, few people are yet using generative AI, let alone harnessing its full potential. The technology is processor- and data-intensive, and it is also very individual-centric, so it cannot feasibly live entirely in the cloud, mainly because the size, cost and resulting latency would be unsustainable.

Much like we’ve done with other data- and performance-intensive applications, the best approach will probably be a hybrid where processing power is placed closer to the user. Still, the massive amount of data, which would require aggressive updating, would need to be loaded and accessed more centrally to protect the limited storage capacity of client devices such as smartphones and PCs.

But we’re talking about increasingly intelligent systems, some of which – like those used for gaming, translation or conversation – require very low latency. How the load is divided without harming performance will likely determine whether a particular implementation is successful.
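
To make the trade-off concrete, here is a purely hypothetical sketch of such a hybrid router. Every name and threshold in it (run_on_device, run_in_cloud, the latency budget) is invented for illustration; it is not Qualcomm’s architecture.

```python
# Hypothetical hybrid edge/cloud routing: all names and numbers assumed.
def run_on_device(prompt: str) -> str:
    # Stand-in for a small quantized on-device model.
    return f"[on-device model] {prompt[:40]}"

def run_in_cloud(prompt: str) -> str:
    # Stand-in for a large cloud-hosted model.
    return f"[cloud model] {prompt[:40]}"

def route_query(prompt: str, latency_budget_ms: int, online: bool) -> str:
    LOCAL_MAX_WORDS = 512    # assumed capacity of the on-device model
    LATENCY_FLOOR_MS = 500   # below this, a cloud round trip is too slow

    needs_local = latency_budget_ms < LATENCY_FLOOR_MS or not online
    fits_local = len(prompt.split()) <= LOCAL_MAX_WORDS

    if needs_local and fits_local:
        return run_on_device(prompt)   # low latency, works offline
    if online:
        return run_in_cloud(prompt)    # heavier model, higher latency
    # Offline and too big for the device: degrade gracefully.
    return run_on_device(" ".join(prompt.split()[:LOCAL_MAX_WORDS]))

print(route_query("translate this sentence to French", 100, online=False))
```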


Achieving low latency will get easier as wireless technology improves, yet wireless can be unreliable due to weather, the placement of towers or users, maintenance outages, man-made or natural disasters, and less-than-complete global coverage. AI must work both online and offline while limiting data traffic and avoiding catastrophic outages.

Even if we could centralize all of this, the cost would be exorbitant, although the unused performance in our individual devices could reduce that expense. Qualcomm was one of the first firms to identify this problem and is doing a lot to fix it. Still, its efforts risk being too little, too late, given how quickly AI is progressing and how slowly such technology is developed and brought to market.

Security

I was once an internal auditor specializing in security and a competitive analyst trained in the legal ways to get around it. I learned that if someone can get enough data, they can make surprisingly accurate inferences about data they don’t have access to.

For example, if you know the average number of cars in a company’s parking lot, you can estimate with reasonable accuracy how many employees a firm has. You can generally scan social media and find out the interests of the firm’s key employees, and view job openings to determine the types of future products the company will offer.
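
As a toy illustration of that parking-lot estimate, the arithmetic might look like this; every ratio here is invented for the example.

```python
# Toy inference from proxy data: headcount from parked cars.
AVG_CARS_IN_LOT = 220
RIDERS_PER_CAR = 1.1   # assumed average occupancy
ON_SITE_SHARE = 0.7    # assumed fraction of staff on site on a given day

estimated_employees = AVG_CARS_IN_LOT * RIDERS_PER_CAR / ON_SITE_SHARE
print(f"~{estimated_employees:.0f} employees")  # ~346
```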

These large language models collect massive amounts of data, and I expect that many of the things being scanned into these LLMs are, or should be, confidential. Furthermore, if enough information is collected, whatever was not scanned can be rapidly inferred from what was.


This scenario doesn’t just apply to corporate information. With personal information readily available, we will be able to determine much more about users’ private lives.

Employers will be able to detect whistleblowers, disgruntled or disloyal employees, poor employee behavior, and employees who are illegally taking advantage of the firm with greater accuracy. Meanwhile, a hostile entity could obtain confidential information about you, your company, or even your government with far greater accuracy than I ever enjoyed as an auditor or competitive analyst, and protecting against that will be difficult.

The best defense is likely to be creating enough misinformation that the tools can’t tell what’s real and what’s not. However, this path would make connected AI systems less reliable overall, which would be fine if only competitors used them. Unfortunately, it would also compromise the systems of the company deploying the defense, resulting in an increasing number of bad decisions.

Interpersonal Relationships

Companies like Mindverse, with its MindOS, and Suki, with its employee-complementing avatars, are demonstrating the future personal use of generative AI as a tool that can pose as you. As we progressively use such tools, our ability to determine what is real and what is digital will diminish significantly, and our opinions of the people using these tools will increasingly reflect the tool more than the person.

Imagine your digital twin doing a virtual interview, becoming the face of your presence on a dating app or taking over much of your daily virtual interactions. The tool will try to be responsive to the person it is interacting with; it will never get tired or grumpy, and it will be trained to present you in the best possible light. However, as it progresses down this path, it will become less and less like who you really are – and possibly far more interesting, engaging, and even-tempered.


This will create problems because, much like dating an actor and expecting the character that actor once played, reality will eventually lead to a breakup and a loss of trust.

The easiest solution would be to either learn how to behave like your avatar or use avatars when interacting with friends and colleagues. I doubt we will do either, but these are the two most viable approaches to mitigating this otherwise insurmountable problem.

Wrapping Up

Generative AI is amazing and will improve productivity tremendously as it ramps into the market and users reach critical mass. Yet there are significant problems that will need to be addressed, including the excessive data center loading that should drive hybrid solutions, the ability to derive secrets from these massive language models, and a substantial reduction in interpersonal trust.

Understanding these impending risks should help us avoid them. However, our track record isn’t great, suggesting that we’ll end up regretting some of the unintended consequences of using this technology.

Tech Product of the Week

Fisker Ocean

My Jaguar I-Pace was effectively destroyed last month in a towing accident that damaged its battery pack. The result: an estimated $100,000 repair on a car now worth closer to $40,000. I expect USAA, my insurance carrier, to total the car. So I’m looking at replacement electric cars, and availability sucks across the board.

I’m likely to get another Jaguar I-Pace, mainly because I don’t want to wait months or even years for a car again. Currently, I’m sharing my wife’s Volvo XC60 and running into a lot of scheduling problems where we both need the car at the same time. I went shopping for a new electric SUV, and the best I could find was the Fisker Ocean.

The all-electric Fisker Ocean (Image credit: Fisker)


With most electrics, the wait for a new one is months, and I can’t stand that wait. Among the electric cars available this year, the Fisker Ocean ticks all the boxes. Its features include:

  • 350-mile range (my bar is 300 miles)
  • Your smartphone can be the key to your vehicle
  • Reverse charging, so your car can power your home during a power outage
  • Impressive 0-60 time of around 3.6 seconds (I love the performance)
  • A solar panel roof to increase range and supply emergency power
  • A convertible-like mode (that really opens up the car)
  • One of the cleanest designs on the market

The Fisker Ocean is an impressive car. If I could wait until the end of the year to get one, I’d order it in a jiffy. Sadly, I can’t. Nevertheless, the Fisker Ocean is still my product of the week.

The opinions expressed in this article are those of the author and do not necessarily reflect the views of ECT News Network.

Human brainpower is no match for hackers unleashing digital smash-and-grab attacks via AI-powered email hoaxes. As a result, cybersecurity protection must be guided by AI solutions that know hackers’ strategies better than hackers do.

This approach of fighting AI with better AI emerged as an ideal strategy in research conducted in March by cybersecurity firm Darktrace to sniff out insights into human behavior around email. The survey reaffirmed the need for new cyber tools to combat AI-driven hacker threats targeting businesses.

The study sought a better understanding of how employees react to potential security threats globally. It also underscored their growing knowledge of the need for better email security.

Darktrace’s global survey of 6,711 employees in the US, UK, France, Germany, Australia and the Netherlands accompanied a finding of a 135% increase in “novel social engineering attacks” across thousands of active Darktrace email customers from January to February 2023. The results were consistent with the widespread adoption of ChatGPT.

These novel social engineering attacks use sophisticated linguistic techniques, including increased text volume, punctuation and sentence length, with no links or attachments. The trend suggests that generative AI tools such as ChatGPT are giving threat actors a way to devise sophisticated, targeted attacks at speed and scale, according to the researchers.
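
A minimal sketch of extracting the kind of linguistic signals the researchers describe might look like the following. The feature set is only suggestive; how a real detector weights such signals is well beyond this example.

```python
import re

# Illustrative linguistic features: text volume, sentence length,
# punctuation density, and link count.
def linguistic_features(email_body: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+\s*", email_body) if s]
    words = email_body.split()
    return {
        "word_count": len(words),
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "punctuation_density":
            sum(c in ",;:!?" for c in email_body) / max(len(email_body), 1),
        "link_count": len(re.findall(r"https?://\S+", email_body)),
    }

print(linguistic_features("Please review the attached figures today. Thanks!"))
```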

According to Max Heinemeyer, chief product officer of Darktrace, one of the three most important findings from the research is that most employees are concerned about the threat of AI-generated emails.

“This is not surprising, as these emails are often indistinguishable from legitimate communications, and some of the signs employees commonly look for to spot a ‘fake,’ such as poor spelling and grammar, are things chatbots are proving extremely efficient at avoiding,” he told TechNewsWorld.

Research Highlights

Darktrace asked employees at retail, catering and leisure companies how concerned they were that hackers could use generative AI to create scam emails indistinguishable from real communications. Eighty-two percent said they were worried.

Respondents also indicated what makes them think an email is a phishing attack. The top three signs were invitations to click a link or open an attachment (68%), unknown senders or unexpected content (61%), and poor spelling and grammar (61%).


This is significant and troubling, as 45% of Americans surveyed noted that they had been the victim of a fraudulent email, according to Heinemeyer.

“It is unsurprising that employees are concerned about their ability to verify the legitimacy of email communications in a world where AI chatbots are increasingly able to mimic real-world conversations and generate emails that lack all the usual signs of a phishing attack, such as malicious links or attachments,” he said.

Other key results of the survey include the following:

  • 70% of global employees have seen an increase in the frequency of scam emails and texts over the past six months
  • 87% of global workers are concerned about the amount of personal information about themselves available online that could be used in phishing and other email scams
  • 35% of respondents have tried ChatGPT or other generative AI chatbots

Human Error Guardrails

The wider reach of generative AI tools like ChatGPT and the increasing sophistication of nation-state actors mean email scams are more credible than ever, noted Heinemeyer.

Innocent human error and threats from within remain an issue. Misdirecting an email is a risk for every employee and every organization. Nearly two out of five people have sent an important email to the wrong recipient with a similar-looking surname, either by mistake or because of autocomplete. This rate rises to over half (51%) in the financial services industry and to 41% in the legal sector.
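
As a sketch of how such a slip might be caught automatically, the following flags a recipient whose address closely resembles a known contact’s without matching it. The contact list and similarity threshold are assumptions for illustration, not Darktrace’s method.

```python
from difflib import SequenceMatcher

# Hypothetical contact history for one sender.
KNOWN_CONTACTS = {"a.svensson@partnerco.com", "j.martin@clientcorp.com"}

def looks_misdirected(recipient: str, threshold: float = 0.85) -> bool:
    # A known recipient is fine; a near-match to one is likely a slip.
    if recipient in KNOWN_CONTACTS:
        return False
    best = max(
        SequenceMatcher(None, recipient, known).ratio()
        for known in KNOWN_CONTACTS
    )
    return best >= threshold

print(looks_misdirected("a.svenson@partnerco.com"))  # True: one letter off
print(looks_misdirected("newvendor@otherfirm.com"))  # False: nothing similar
```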

Regardless of the fault, such human errors add another layer of security risk that is not malicious. A self-learning system can spot this error before sensitive information is shared incorrectly.

In response, Darktrace unveiled a significant update to its globally deployed email solution. This helps strengthen email security tools as organizations continue to rely on email as their primary collaboration and communication tool.

“Email security tools that rely on knowledge of past threats are failing to future-proof organizations and their people against email threats,” he said.

Darktrace’s latest email capability includes behavioral detection for misdirected emails that prevent intellectual property or confidential information from being sent to the wrong recipient, according to Heinemeyer.

AI Cyber Security Initiative

By understanding what’s normal, AI security can determine what doesn’t belong in a particular person’s inbox. Email protection systems often get it wrong, with 79% of respondents saying their company’s spam/security filters wrongfully block important legitimate email from reaching their inboxes.

With a deep understanding of the organization and how the individuals within it interact with their inbox, AI can determine for each email whether it is suspicious and should be acted upon or if it is legitimate and should be left untouched.
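
A drastically simplified sketch of that per-person baseline idea: score an incoming email against the sender’s history and flag large deviations. Real systems model hundreds of signals; this single-feature z-score is purely illustrative.

```python
from statistics import mean, pstdev

# Toy behavioral baseline: how unusual is this email's length for
# this particular sender?
def anomaly_score(history_word_counts: list[int], new_word_count: int) -> float:
    mu = mean(history_word_counts)
    sigma = pstdev(history_word_counts) or 1.0  # avoid division by zero
    return abs(new_word_count - mu) / sigma

history = [45, 60, 52, 48, 55]      # typical short notes from this sender
print(anomaly_score(history, 50))   # small score: looks normal
print(anomaly_score(history, 400))  # large score: worth a closer look
```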

“Tools that work from knowledge of historical attacks will be no match for AI-generated attacks,” Heinemeyer offered.


Analysis of these attacks shows significant linguistic deviations, both semantic and syntactic, compared to other phishing emails. This leaves little doubt that traditional email security tools, which operate from knowledge of historical threats, will fall short in picking up on the subtle indicators of these attacks, he explained.

Reinforcing this, research from Darktrace has shown that email security solutions, including native cloud and static AI tools, take an average of 13 days from the time a victim is attacked until the breach is detected.

“That leaves defenders vulnerable for about two weeks if they rely solely on these tools. AI defense that understands the business will be critical to detecting these attacks,” he said.

Need for AI-Human Partnership

Heinemeyer believes that the future of email security lies in a partnership between AI and humans. In this arrangement, algorithms are responsible for determining whether a communication is malicious or benign, thereby shifting the burden of responsibility away from humans.

“Training on good email security practices is important, but will not be enough to stop AI-generated threats that look like perfectly benign communications,” he warned.

One of the revolutions AI is enabling in the email space is a deeper understanding of “you.” Rather than trying to predict attacks, an understanding of each employee’s normal behavior should be built from their email inbox, their relationships, tone of voice, sentiment and hundreds of other data points, he argued.

“By leveraging AI to address email security threats, we not only mitigate risk but revitalize organizational trust and contribute to business outcomes. In this scenario, humans are freed up to work at a higher level, on more strategic practices,” he said.

Not an Insurmountable Cyber Security Problem

The threat of offensive AI has been researched on the defensive side for a decade. Attackers will inevitably use AI to enhance their operations and maximize ROI, noted Heinemeyer.

“But it’s not something we consider insurmountable from a defense perspective. The irony is that generative AI may supercharge the social engineering challenge, but AI that knows you can parry it,” he predicted.

Darktrace tests offensive AI prototypes against the company’s technology to continually probe the efficacy of its defenses ahead of this inevitable evolution in the attack landscape. The company is confident that AI coupled with a deep understanding of the business will be the most powerful way to combat these threats as they continue to evolve.