Researchers at an Israeli security firm revealed Tuesday how hackers can turn the “hallucinations” of generative AI into a nightmare for an organization’s software supply chain.

In a blog post on the Vulcan Cyber website, researchers Bar Lanyado, Ortal Keizman, and Yair Divinsky explained how false information generated by ChatGPT about open-source software packages could be exploited to slip malicious code into a development environment.

They explained that they have seen ChatGPT generate URLs, references, and even code libraries and functions that do not actually exist.

If ChatGPT fabricates code libraries or packages, attackers could use these hallucinations to spread malicious packages without resorting to familiar, detectable techniques such as typosquatting or masquerading, they noted.

If an attacker can publish a package under the name of a “fake” package recommended by ChatGPT, the researchers continued, they might be able to get victims to download and use it.

That scenario is becoming increasingly likely, they maintained, as more and more developers migrate from traditional online search domains for code solutions, like Stack Overflow, to AI solutions like ChatGPT.

already generating malicious packages

“The authors predict that as generative AI becomes more popular, it will begin to receive developer questions that once went to Stack Overflow,” said Daniel Kennedy, research director of information security and networking at 451 Research, part of S&P Global Market Intelligence, a global market research company.

“The answers to those questions generated by AI may not be correct or may refer to packages that no longer exist or may never have existed,” he told TechNewsWorld. “A bad actor seeing that could create a code package under that name to contain malicious code, and it could then be consistently recommended to developers by generative AI tools.”

“Vulcan’s researchers took it a step further by pulling the most frequently asked questions on Stack Overflow, putting them to the AI, and seeing whether packages that don’t exist were recommended,” he said.

According to the researchers, they queried Stack Overflow to get the most common questions asked about more than 40 topics, and used the first 100 questions for each topic.

Then, they asked ChatGPT, via its API, all the questions they had collected. They used the API to replicate an attacker’s approach to obtain as many non-existent package recommendations as possible in the shortest amount of time.

In each answer, they looked for a pattern in the package-installation command and extracted the recommended package. They then checked whether the recommended package existed. If it did not, they tried to publish it themselves.
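The checking step the researchers describe can be sketched in a few lines of Python. This is a minimal illustration, not Vulcan’s actual tooling: the answer text and package name are invented, and the PyPI JSON endpoint is used as one way to test whether a name is already registered.

```python
import re
import urllib.request
import urllib.error

def extract_packages(text):
    """Pull package names out of 'pip install <name>' patterns in an answer."""
    return re.findall(r"pip install ([A-Za-z0-9_.\-]+)", text)

def exists_on_pypi(name):
    """Check whether a name is registered on PyPI (200 = exists, 404 = free to claim)."""
    try:
        with urllib.request.urlopen(f"https://pypi.org/pypi/{name}/json", timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False

# Hypothetical ChatGPT answer; the package name is invented.
answer = "You can solve this with: pip install totally-made-up-pkg"
print(extract_packages(answer))  # each name found would then be checked with exists_on_pypi()
```

Any extracted name that turns out to be unregistered is exactly the kind of “fake” package an attacker could claim.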

gluing software together

Malicious packages generated with code from ChatGPT have already been observed in the PyPI and npm package registries, said Henrik Plate, a security researcher at Endor Labs, a dependency management company in Palo Alto, Calif.

“Large language models can also aid attackers in building malware variants that implement the same logic but have different forms and structures, for example by distributing malicious code across different functions, changing identifiers, creating fake comments and dead code, or similar techniques,” he told TechNewsWorld.

The problem with software today is that it is not written independently, observed Ira Winkler, chief information security officer at CYE, a global provider of automated software security technologies.

“It’s basically a lot of software already cobbled together,” he told TechNewsWorld. “It’s very efficient, so a developer doesn’t have to write a simple function from scratch.”

However, this can result in developers importing code without properly vetting it.

“Users of ChatGPT are receiving instructions to install open-source software packages that, while appearing legitimate, could install a malicious package,” said Jossef Harush, head of software supply chain security at Checkmarx, an application security company in Tel Aviv, Israel.

“In general,” he told TechNewsWorld, “a culture of copy-paste-exec is dangerous. Doing this blindly from sources like ChatGPT can lead to supply chain attacks, as the Vulcan research team has demonstrated.”

know your code sources

Melissa Bischoping, director of endpoint security research at Tanium, a converged endpoint management provider in Kirkland, Wash., also warned about lax use of third-party code.

“You should never download and execute code that you don’t understand and haven’t tested by grabbing it from a random source – like an open-source GitHub repo or, now, ChatGPT recommendations,” she told TechNewsWorld.

“Any code you intend to run should be assessed for security, and you should have private copies of it,” she advised. “Don’t import directly from public repositories, such as those used in the Vulcan attack.”
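One lightweight way to act on that advice is to record a checksum for every vendored artifact and verify it before use. A minimal sketch, with an invented file standing in for a downloaded package archive:

```python
import hashlib
import os
import tempfile

def sha256_of(path):
    """Compute the SHA-256 digest of a file, reading in chunks to handle large archives."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify(path, expected_digest):
    """Refuse an artifact unless it matches the digest recorded in your private mirror."""
    actual = sha256_of(path)
    if actual != expected_digest:
        raise ValueError(f"checksum mismatch for {path}: got {actual}")
    return True

# Demo: a throwaway file standing in for a downloaded package archive.
artifact = os.path.join(tempfile.gettempdir(), "pkg.tar.gz")
with open(artifact, "wb") as f:
    f.write(b"example artifact bytes")

known_good = hashlib.sha256(b"example artifact bytes").hexdigest()
print(verify(artifact, known_good))
```

A package swapped out upstream would fail this check even if its name and version looked right.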

She said that attacking supply chains through shared or imported third-party libraries is not new.

“This strategy will continue to be used,” she warned, “and the best defense is to employ secure coding practices and thoroughly test and review code – especially code developed by third parties – intended for use in production environments.”

“Don’t blindly trust every library or package you find on the internet or in a chat with an AI,” she cautioned.

Know the source of your code, said Dan Lorenc, CEO and co-founder of Chainguard, a maker of software supply chain security solutions in Seattle.

“Developer authenticity, verified through signed commits and packages, and obtaining open-source artifacts from a source or vendor you can trust, are the only real long-term prevention mechanisms against these Sybil-style attacks,” he told TechNewsWorld.

opening innings

Authenticating code, however, isn’t always simple, said Bud Broomhead, CEO of Viakoo, a developer of cyber and physical security software solutions in Mountain View, Calif.

“In many types of digital assets – and especially in IoT/OT devices – firmware still lacks digital signatures or other forms of establishing trust, which makes exploitation possible,” he told TechNewsWorld.

“We are in the early innings of generative AI being used for both cybercrime and defense. Credit to Vulcan and other organizations that are tuning language models toward spotting new threats in a timely manner and preventing this type of exploitation,” he said.

“Remember,” he continued, “it was only a few months ago that you could tell ChatGPT to create a new piece of malware, and it would. Now it takes very specific and directed guidance to get it to create one inadvertently, and hopefully even that approach will soon be thwarted by the AI engines themselves.”

Apple took the wraps off its highly anticipated mixed reality headset on Monday in a video presentation at its Worldwide Developers Conference on the Apple campus in Cupertino, California.

The $3,499 Apple Vision Pro headset won’t be available until next year, but promises to usher in a new era of “spatial computing.”

“This is a hugely important product in the history of computing,” declared Tim Bajarin, president of Creative Strategies, a technology advisory firm in San Jose, California.

“Apple gave us a Mac with a graphical interface,” he told TechNewsWorld. “Then it gave us the iPhone with pocket computing. It gave us the iPad with tablet computing.”

“Each one broke new ground,” he continued. “Now it’s giving the world another new interface with gestures, eye tracking and speech recognition.”

“The technology surpasses anything we’ve seen,” he said. “There isn’t a headset in the virtual reality world that comes close to this. It’s a full computer in a headset.”

Best-in-class hardware

The Vision Pro, which looks like a pair of ski goggles, is packed with technology, including sensors that enable a user to control a virtual display with their eyes, hands and voice, and a 3D camera.

Apple Vision Pro Headset with Battery

The Apple Vision Pro is designed for high-performance tasks and is capable of working for up to two hours on a single charge. (Image credit: Apple)

“It looks like it will comfortably take its place as the best hardware in its category – certainly with a price point to match,” said Eric Abbruzzese, director of research at ABI Research, a technology advisory firm headquartered in Oyster Bay, N.Y.

“Eye tracking, dedicated silicon, high-pixel-density screens, and substantial sensor arrays all make for a great VR headset,” he told TechNewsWorld. “I don’t believe there is a headset as feature-complete as the Vision Pro, but the price highlights why that is.”

“It’s also interesting that the device carries the Pro moniker as a first-gen product – there’s usually at least one iteration of a ‘normal’ device before the Pro branding hits, but it looks like that ‘normal’ product has been deliberately skipped, with the Pro slotting in on both price and performance,” he said.

The long-awaited product lived up to expectations, said Mark N. Vena, president and principal analyst at SmartTech Research in San Jose, Calif.

“It’s a spatial computer that you wear on your head,” he told TechNewsWorld.

Vena said Apple demonstrated several compelling use cases for the device, ranging from viewing entertainment and games to enhancing productivity to extending the desktop through virtual displays.

Apple VisionOS for Vision Pro

The VisionOS spatial operating system lets Vision Pro users experience digital content that blends into the user’s physical world. (Image credit: Apple)

the excitement of disney

Disney CEO Robert A. Iger, who appeared at the Apple presentation, was enthusiastic about the Vision Pro.

“We are constantly looking for new ways to entertain, inform and inspire our fans, combining great technology with exceptional creativity to create truly remarkable experiences,” he said. “And we believe Apple Vision Pro is a revolutionary platform that can make our vision a reality.”

“The first time I tried Apple Vision Pro, what impressed me most was how it will allow us to bring our fans closer to their favorite characters and create deeply personal experiences that immerse them more deeply in our stories,” he continued. “This platform will allow us to bring Disney to our fans in ways that were previously impossible.”

The device will also enhance the visibility of augmented reality technology.

“There is less familiarity with the term augmented reality in the consumer space,” wrote Kristen Hanich, an analyst with Parks Associates, a Dallas market research and consulting company, in its Connected Consumer newsletter.

“However, the majority of consumers use popular applications such as Pokémon Go, Snapchat, and Instagram – smartphone apps that are being used not because they are augmented reality, but because the core experiences resonate with consumers,” she continued.

“Apple’s expected announcement today in this category will help drive awareness and adoption given the power of the brand, content ecosystem, developer relationships, and Apple’s focus on premium experiences,” she added.

Straddling Worlds

Gartner analyst Tuong Nguyen said virtual reality gets a lot of press because it promises to transport you to another time, another place.

“With the Vision Pro, because it does pass-through as well – which few other devices do – it keeps you rooted in the physical world while adding digital elements,” he told TechNewsWorld.

“You can use a headset and still be in the moment,” explained Ben Arnold, an analyst at global market research firm Circana (formerly NPD).

“To me, it’s different than what we’re seeing in the market now,” he told TechNewsWorld.

Apple’s announcement reflects where mixed reality is today, said Ross Rubin, principal analyst at Reticle Research, a New York City-based consumer technology advisory firm.

“One commonality that we’re seeing among mixed reality makers is heavy leverage of existing content and applications, because there isn’t a lot of mixed reality or augmented reality content out there today,” he told TechNewsWorld.

“So if you’re Apple, and you have this incredible library of applications,” he continued, “it makes a lot of sense to have it on a device and build some additional value around it, whether it’s immersion or having multiple apps running on tiles in front of you.”

new 15 inch air

In addition to introducing the Vision Pro, Apple made several other hardware and software announcements.

The expected 15-inch MacBook Air based on the M2 silicon will be available next week for $1,299. The 13-inch M2 Air will sell for $1,099 in its base configuration, and the 13-inch M1 Air will remain part of the lineup at $999.

macbook air 15 inch front and side view

Showcasing its vibrant Starlight, Space Grey, Silver and Midnight hues, the 15-inch MacBook Air, along with its MagSafe charging and two Thunderbolt accessory ports, also includes a 3.5mm headphone jack. (Image credit: Apple)

Milestone All Apple Silicon Shift

Apple also announced that it would make its Mac Studio computers with its M2 Ultra and M2 Max silicon. Those models will also be available next week, starting at $1,999.

The Mac Pro also got an upgrade to the M2 Ultra silicon. It will sell for $6,999 and will be available next week.

Apple also refreshed its iOS, iPadOS and watchOS software lines.

“Most of the other announcements seem to be nice-to-haves rather than important products,” Abbruzzese observed.

“The fact that all Apple products are now on Apple silicon is a milestone that everyone knew was coming, but took a fair amount of time to happen,” he said.

“To have a dedicated XR chipset alongside the M2 in the Vision Pro is not surprising but interesting nonetheless,” he continued. “Qualcomm has dominated the XR chipset market, and now it has a strong competitor for 2024 and beyond.”

When I joined IBM in the 1980s, I was tasked with helping build what eventually became one of the very first CRM applications. As was typical at that time, I had to work with MIS (now called IT), and the result was terrible. Instead of making things easier by automating many repetitive manual tasks, the app required more labor, was incredibly annoying to use, and revealed a disconnect between what I thought I had asked for and what MIS delivered.

This experience was far from unusual, because even though I could code, the people at MIS didn’t understand the business. They tended to make decisions in a vacuum, which undoubtedly made their jobs easier in terms of building apps, but made users’ jobs much harder because users had surprisingly little to do with the process.

Well, AI is about to change that by slowly turning users into programmers. Microsoft is leading the way with its efforts to add AI capabilities to Windows, Office and the Microsoft Store.

Let’s dive into how Microsoft AI will benefit collaboration and user experiences, and we’ll end with our product of the week: a new set of headphones from Dell’s Alienware unit that look like nothing you’ve ever seen.

bad developer joke

A Facebook post said something like, “Giving users the ability to work with AI to code will mean end users need to clarify what they want, so your jobs are safe.”

The implication was that users generally don’t know what they want, so giving them the ability to create directly with AI would end badly. However, both my experience and this joke highlight the underlying issue that programmers and users lack training in how to collaborate with each other.

Part of the underlying problem is that programmers generally have little interest in business operations, and operations staff have little interest in coding. Since neither side usually wants to learn the nitty-gritty of the other, this can lead to some disgruntled users and very frustrated programmers.

AI has the potential to overcome this problem because, as it progresses, it will naturally learn about the user and, over time, should be able to provide a result closer to what the user needs.

I say “should” because, in my experience, one of the problems often encountered when building an app is that users haven’t fully thought through what they want. It is only after seeing a draft of the app that they suddenly realize that what they got is not what they need.

AI sidesteps this problem by not having a personality, so it doesn’t get irritated, angry, or frustrated. It learns through repetition and is willing to iterate indefinitely to meet unmet user needs.

But users and programmers will still need to develop competence with the tools. Otherwise, they will likely become frustrated with the endless iterations that result from users not being able to fully articulate what they want, and specifically what they do not want, in the new app.

windows 11 baseline

By placing generative AI in Windows, Microsoft creates a forcing function whereby users will learn to work with generative AI to achieve better results. Users need to learn to fully articulate their needs to reduce the number of annoying iterations it takes for the AI to understand those needs, and most importantly, the AI needs to develop the capability to understand what users want and communicate that understanding back to them.

We’ve had mixed results with this sort of thing before. Boolean logic is what the internet has long used to refine searches. Those who learned Boolean logic found that they could get the results they wanted much more quickly than those who did not. Still, we’re not up to our necks in Boolean logic users on the web, showing that the weak link remains users who refuse to learn the skills needed to become more proficient.

However, the difference with AI is that it can learn what makes a particular user unique and attempt to bridge the gaps in knowledge and experience. Unlike Boolean logic, which is static, AI will evolve into a more personalized interface for the user and substantially reduce the need for a unique AI communication skill set.

Users who put in the effort to learn how to work better with AI will benefit, and with AI being in the operating system, they will get plenty of opportunities to practice. Still, I expect most of the communication heavy lifting will come from the AI, not the user, as shown in Microsoft’s Windows Copilot introduction video:

Microsoft is blending AI into the Windows 11 platform to make it easier to use, easier to find apps, easier for developers to offer those apps through the Windows Store, and faster in every aspect of the OS and user experience.

wrapping up

The move to aggressively place AI in all aspects of Windows will dramatically change the user experience over time. Just as we started with a command-line interface, then moved to a graphical user interface (GUI), and are now moving to AI interfaces, each step should improve productivity, reduce user frustration, and bring users closer to the process of developing their apps.

We are at the beginning of this technology’s development, so expect growing pains as it matures. However, it marks a significant early departure from the traditional view of technology, which forced users to acquire new skill sets in order to take advantage of it. We are now developing AI systems that will learn how to work with users, effectively flipping this dynamic on its head and leading to far more interesting, less frustrating results.

While there are plenty of concerns surrounding AI, for now, these moves from Microsoft represent little risk but promise significant improvements in productivity and user satisfaction.

tech product of the week

Alienware Tri-Mode Wireless Gaming Headset AW920H – Lunar Light

Alienware products don’t come cheap, so when Dell sent me a set of its AW920H headphones, priced at a very reasonable $179.99 for tri-mode wireless headphones (I’ve found them for as low as $159), I was interested. That’s because most headphones I find in this class are priced in the $250+ range.

These are Dolby Atmos headphones, so you get virtual surround sound. They have up to 55 hours of battery life, and a 15-minute charge with a USB-C fast charger yields up to six hours of use. They share the design ID of Dell’s Alienware Aurora R13 gaming desktop, and they come with a mini-phone cable, so you can use them on an airplane or with a device that doesn’t support Bluetooth.

Alienware Tri-Mode Wireless Gaming Headset AW920H - Lunar Light

The Alienware Tri-Mode Wireless Gaming Headset AW920H supports Dolby Atmos and provides up to 55 hours of playtime on a full charge. (Image credit: Dell)

One cool feature is that if you have an Alienware PC or laptop, they’ll sync the colors of the LEDs on the headset with the LEDs on your PC. Like most headphones in this price range, they have active AI noise canceling on both inbound sound and the microphone (I use Discord when I can, and it’s annoying when game sound bleeds into the voice stream).

I still haven’t found a way to successfully play games on a plane. There’s often not enough bandwidth on plane Wi-Fi, and there isn’t enough space on a small plane table for a gaming PC, let alone a gaming PC and mouse – and most of the games I play don’t work properly with a gaming controller. However, Dell showed off a gaming controller prototype at CES that may finally fix that.

The Dell Alienware Tri-Mode AW920H headphones are a bargain for what they do, and while they’re primarily focused on gaming, they should be fine for movies and music as well, and they’re my product of the week.

If you are asking, “What is an SBOM?” you’ll need to catch up fast. A software bill of materials is the first line of defense against software vulnerabilities that may be lying in wait, like unlocked backdoors into your network, ready to let hackers in.

An SBOM, like any bill of materials, lists the components of the finished product so that, in case of a problem, developers can zero in on the cause and address it with as little disruption as possible. The SBOM is key to supply chain security, enabling more secure DevOps and better threat intelligence to maintain a more resilient network.

Two years after a ransomware gang disrupted US fuel deliveries by attacking a pipeline operator, supply chain attacks remain a major annoyance for security professionals. In the wake of the attack and the discovery of the Log4J vulnerability, SBOMs have gone mainstream as security professionals struggle to prevent future attacks.

Dominance of SBOMs and Federal Guidance

SBOMs are having a moment. During the recent RSA Conference, the federal government’s Cybersecurity and Infrastructure Security Agency (CISA) issued guidance on the different types of SBOMs available and their uses.

CISA has been a particular promoter of SBOM use since Executive Order 14028 and the Office of Management and Budget’s memo M-22-18, which required the development of a reporting form for software developers serving the federal government. CISA also organizes SBOM-a-Rama meetings that bring industry players together to support SBOM development.

The CISA document is the result of a group effort launched in 2018, and like many group efforts, it can be cumbersome. The document’s introduction acknowledges as much, stating, “The different ways in which SBOM data can be collected can vary tool outputs and provide value in different use cases.” With this in mind, it is worthwhile to clarify the types of SBOMs available and the use cases likely to be most useful to an organization.

Decoding the 6 Main Types of SBOM

There are six main types of SBOM in use today, corresponding to the stages of the software development life cycle:

  • Design: An SBOM of this type is created for future or planned software and includes components that may or may not end up present. It is usually developed based on an RFP, a concept, or specifications. While theoretically possible, it is hard to envision how this could generate a machine-readable document that would meet the standards endorsed by the federal government.

    One possible use case for this type of SBOM is to alert developers to licensing issues that may arise when considering using certain components that will affect intellectual property or distribution of the finished product. This can help the SBOM development team identify incompatible elements prior to purchase and define a list of accepted and recommended components. This type of SBOM may also enable the team to source the best open-source components from a business perspective.

  • Source: Similar to a build-type SBOM, it is generated in a development environment and includes all the source files and dependencies needed to build an artifact, but it leaves the build tools out of the process. It is usually generated by software composition analysis (SCA) tools, with some annotations added manually.

    It’s hard to see a use case for this type instead of the more general build-type SBOM. Still, this SBOM can spot vulnerable components that are never run after deployment, giving the team a view into the dependency tree of the components involved. Therefore, it enables remediation of known vulnerabilities at the source, early in the development process.

    On the downside, it may lack the details of other types of SBOMs that involve runtime, plugin, or dynamic components, such as app server libraries.

  • Build: The most commonly used type of SBOM, it is a more complete list generated as part of the process of building the software into the final artifact. This approach uses data such as source files, dependencies, built components, ephemeral build-process data, and previous design and source SBOMs. It relies on resolving all dependencies in the build system and scanning them on the build machine.

    Because actual files are scanned, this type of SBOM creates a more complete record with rich data about each file, such as its hash and source. Providing greater visibility beyond what is available from the source code instills confidence that the SBOM accurately represents the development process. This trust stems from integrating SBOM and finished product into a single workflow.

    On the downside, it is heavily dependent on the build environment, which may sometimes need to be changed in order to generate the SBOM.

  • Analyzed: Sometimes referred to as a “third-party SBOM” or binary SCA, this type relies on scanning the artifact as distributed to work out its components, using third-party tools to analyze artifacts such as packages, containers, and virtual machine images. It does not require access to the build environment, and it can double-check SBOM data from other sources to find hidden dependencies that SBOM build tools may have missed.

    Since it essentially reverse-engineers the components of the artifact, it can be a useful tool for software consumers who do not have an SBOM available or who want to verify an existing SBOM.

    On the downside, this type of SBOM often relies on heuristics and context-dependent guesses to identify components, so the analysis can produce some false positives. But it is also more likely to find libraries, such as OpenSSL or libc, that get linked in from the environment without developers realizing it and that build-type SBOM tools often miss.

  • Deployed: As its name suggests, this is a list of the software deployed on a system, usually generated by combining configuration information with the SBOMs of installed artifacts. It can combine analysis of configuration options with examination of execution behavior in a deployed environment. It is useful for investigating software components, including the configurations and system components that run the applications.

    Generating this type of SBOM may require changing installation and deployment procedures, and may not always reflect the runtime environment of the artifact as some components may be inaccessible. But the wide scope of this type of SBOM makes it an attractive option.

  • Runtime: Sometimes called an “instrumented” or “dynamic” SBOM, this type addresses the blind spots in a deployed SBOM. In this case, tools interact with the system and record the artifacts used in the running environment and loaded into memory during execution. This procedure helps avoid false positives from unused components.

    This type of SBOM gives developers visibility into dynamically loaded components and external connections and can tell them which components are active and which parts are in use. It adds overhead, though, since the analysis has to be done while the system is running, and because the system has to run for some time to exercise its full functionality, gathering detailed information may take a while.
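Whichever type an organization generates, the result is a machine-readable document that downstream tools can consume. As a minimal sketch of that consumption step, here is an invented CycloneDX-style JSON fragment being parsed to list components and flag any without a pinned version:

```python
import json

# Invented CycloneDX-style SBOM fragment, trimmed to the fields this sketch uses.
sbom_json = """
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.4",
  "components": [
    {"type": "library", "name": "log4j-core", "version": "2.14.1"},
    {"type": "library", "name": "example-lib", "version": ""}
  ]
}
"""

def components(doc):
    """Return (name, version) pairs for every component; None marks a missing version."""
    return [(c.get("name"), c.get("version") or None) for c in doc.get("components", [])]

sbom = json.loads(sbom_json)
for name, version in components(sbom):
    note = "" if version else "  <- no version pinned"
    print(f"{name} {version or '?'}{note}")
```

The same traversal is how a scanner would have matched component entries against the Log4j advisories mentioned earlier.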

Final Thoughts on Selecting the SBOM

With these details in mind, selecting the right type or combination of SBOMs to meet your organization’s needs involves more consideration than simply choosing the first SBOM-generating tool available for compliance purposes.

Given the support of the federal government, SBOM is undoubtedly here to stay, and it could establish a solid foundation while introducing order into the sometimes chaotic process of securing software products.

Qualcomm has aggressively developed and integrated generative AI capabilities across its extensive semiconductor line over the past few years. For those few people completely off the information grid, generative AI uses intelligent algorithms to produce new and original content, such as photos, illustrations, movies, and music, based on pre-existing data.

Qualcomm’s generative AI strategy leverages this technology to improve various aspects of its products and services. The company says its technologies can execute a wide range of exceptional use cases, but doing so locally on a smartphone adds far more value, especially from a cost-per-query and scalability perspective.

With that as a backdrop, let’s discuss Qualcomm’s ability to create hybrid AI functions that extend from device to cloud.

Qualcomm began talking about this capability earlier this year at Mobile World Congress. The approach requires specific, specialized hardware modifications and substantial software adjustments to run Stable Diffusion, a deep-learning text-to-image model released in 2022, on a device.

The main application of Stable Diffusion is generating detailed images from text descriptions. Nevertheless, it can also be used for other tasks, such as image repair (inpainting) and extending AI-generated images beyond the boundaries of the original image (outpainting).

It is important to note that parameters are the fundamental building blocks of the machine learning models that enable functional gen-AI applications. They form part of a model trained on past data. In general, the relationship between parameter count and sophistication holds surprisingly well in the language domain. The amount previously estimated as necessary for gen-AI-style apps was in the 10-billion-parameter region.

qualcomm general ai 10 billion parameter infographic

Qualcomm’s AI silicon brings artificial intelligence capabilities to edge devices, including mobile phones, tablets and PCs. (Image credit: Qualcomm)

Stable Diffusion for On-Device AI

According to Qualcomm, its implementation of Stable Diffusion requires only about 1 billion parameters, which squeezes into a device the size of a smartphone. This feature enables users to enter a text query and create a picture locally, without using the smartphone’s internet connection.
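As rough back-of-the-envelope math (my own estimate, not a Qualcomm figure), a model’s memory footprint is roughly its parameter count times the bytes stored per parameter, which is why a roughly 1-billion-parameter model can fit on a phone while 10-billion-parameter models are a stretch:

```python
def model_size_gb(params, bytes_per_param):
    """Approximate in-memory model size in gigabytes (decimal GB)."""
    return params * bytes_per_param / 1e9

# A ~1-billion-parameter model at common precisions.
for label, bytes_per in [("fp32", 4), ("fp16", 2), ("int8", 1)]:
    print(f"{label}: ~{model_size_gb(1_000_000_000, bytes_per):.0f} GB")
```

At 8-bit precision, 1 billion parameters come to about 1 GB, well within a modern phone’s memory; the same math puts a 10-billion-parameter model an order of magnitude higher.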

Since Qualcomm’s demo was operating in airplane mode, all the data needed to create an image from the text query was stored on the device. Stable Diffusion is the go-to model for Qualcomm because of its sheer size and training on huge amounts of data – it can understand concepts that are incredibly vast in scope and is not limited to a particular or small set of topics.

Currently, Qualcomm claims to be the only firm to have gotten this model working on Android-based devices. Models are getting smaller and smaller, enabling compelling Gen-AI apps to run on a single device. Extend this idea, and comparable generative AI use cases could be demonstrated on all types of mobile devices.

From a platform perspective, scalability is the name of the game for Qualcomm, as few other businesses have a comparable legacy in the end-user device ecosystem. Qualcomm’s installed Snapdragon base now exceeds 2 billion devices, many without internet connectivity.

Qualcomm Snapdragon Is Running Fully Generative AI on a Device

Gen AI can now run on mobile devices without internet connectivity. (Image credit: Qualcomm)

Benefits of Qualcomm’s Generative AI Approach

Qualcomm has distinct advantages thanks to its history in the smartphone industry, even though Nvidia often dominates the news in the AI sector.

Qualcomm can use its generative AI to create more immersive and realistic content that improves the user experience. For example, augmented reality (AR) applications can create high-quality photos and videos, making the experience richer and more interactive.

Additionally, Qualcomm’s capabilities provide businesses with essential advantages for product testing and development. Qualcomm can simulate and build realistic models for testing and development using generative AI, which can speed up the design process, save costs, and increase the effectiveness of product development.

In addition, Qualcomm’s OEMs can benefit from the untapped potential of personalization in the realm of AI, with Qualcomm solutions providing consumers tailored experiences that leverage generative AI.

It’s easy to see how Qualcomm’s solutions could contribute by creating specialized suggestions, unique user interfaces, or customizable answers based on individual preferences and behavior patterns.

Qualcomm Should Tell Us More

As most of my readers know, I have been raising awareness of the ethical issues surrounding generative artificial intelligence. Generative AI raises a number of ethical issues, particularly in light of deepfakes and the potential exploitation of AI-generated content. Qualcomm must ensure that users of its generative AI technology act ethically and within the limits of the law.

There are reasons to worry.

When I recently asked the CEO of a text-to-image Gen-AI program whether his company’s terms and conditions mandated that created content include permanent watermarks or metatag fingerprints, he shrugged and answered in the negative.

At a recent technology conference, a prominent CEO touted the possibility of Gen-AI-style applications handling performance assessments of “laborious” employees. The number of lawsuits that would follow is unimaginable.

Still, on a recent analyst call, Qualcomm seemed to understand that it needs to take an ethical leadership role in this area, suggesting that it will discuss the topic, and reveal significantly more information, at subsequent conferences.

The company acknowledges that it wants consumers to maximize the Gen-AI capabilities on its devices. Yet it also asserts how important it is to differentiate between original content and content modified by generative AI.

It’s not hard to imagine that facial authentication, for example, could play an important role in mitigating the issue on this front. Other biometric hardware features could also prove useful.

A Brave New World, but Will We Be Safe?

It’s undeniable that Qualcomm’s emphasis on AI, and its continued work to integrate this capability into the company’s extensive silicon portfolio, has the potential to completely transform the tech landscape as we know it. The productivity and time-saving benefits are real, significant, and almost incalculable.

The potential is enormous because Qualcomm can now robustly run these types of apps on smartphones and other mobile devices, including PCs, without an internet connection. But the potential for information extortion and privacy invasion is also clearly evident.

To mitigate these concerns, Qualcomm must protect user data and comply with strict privacy laws, ensuring that any personally identifiable information (PII) in the data used to develop or deploy generative AI models is appropriately anonymized.

In addition, before collecting or using user data for generative AI purposes, Qualcomm must obtain the user’s explicit consent. Open communication about data use, sharing and collection processes is essential to maintaining users’ trust.

Safety and Ethical Challenges

Qualcomm must implement robust security measures to protect user data from unwanted access, breaches, and potential misuse, especially in the context of generative AI. Access restrictions, encryption, and regular security audits are all part of this. By including a thorough privacy plan, Qualcomm can increase user trust and ensure that its Gen-AI solutions respect user privacy.

I also advocate that Qualcomm require its OEM partners, who incorporate next-generation artificial intelligence solutions into their consumer goods, to disclose to consumers when AI creates any content on such devices.

There will be a tendency to put the burden for this disclosure entirely on the equipment manufacturers, who will in turn expect end users to bear that obligation. Still, I’d like to see Qualcomm take a public leadership position on this topic.

Sadly, over-reliance on generative AI technology may lead to an undervaluation of human creativity and intuition.

I am horrified by the prospect of images and videos created by generative AI likely to be used by both sides of the aisle in the upcoming presidential election because they will make it nearly impossible to tell fact from fiction.

Qualcomm must strike a balance between automation and human engagement to ensure the creation of novel and valuable solutions. This aspect of generative AI is an opportunity for Qualcomm.

A new report from a human resources analytics firm found that artificial intelligence threatens to replace a disproportionate number of jobs typically held by women.

According to the researchers at Revelio Labs, their findings reflect societal biases that have trapped women in roles ripe for AI replacement, such as administrative assistants and secretaries.

Revelio reached its conclusions by identifying about two dozen jobs most likely to be replaced by AI, based on a National Bureau of Economic Research study. Then it identified the gender breakdown in those jobs.

Women held many of those jobs, it noted, including bill and account collectors, payroll clerks, and executive secretaries.

“Women, as well as people of color, are overrepresented in occupations that are repetitive in nature when it comes to tasks. This means they are going to be disproportionately affected by any jobs that are fully automated,” said Nicole Turner Lee, director of the Center for Technology Innovation and a senior fellow in governance studies at the Brookings Institution, a nonprofit public policy organization in Washington, D.C.

“Those jobs have already seen a decline as a result of new technologies,” she told TechNewsWorld. “However, AI is more likely to be involved in highly repetitive roles that can be automated. That automation often lends itself to low-level workers being ousted.”

Need People in the Loop

Will Duffield, a policy analyst at the Cato Institute, a Washington, D.C. think tank, explained that if more women than men hold the jobs most exposed to AI, they will be more affected by AI displacement. However, he was skeptical that all of the jobs listed in the Revelio report involve only repetitive skills.

“It seems ludicrous to expect paralegals to be replaced by AI,” he told TechNewsWorld.

“The same is true for copy editors and auditors because, at the end of the day, you need humans to avoid making mistakes,” he said.

“AI may make workers more efficient, so there may be fewer jobs,” he continued, “but the idea that jobs will be completely replaced is quite speculative and highly publicized.”

“To replace people, AI has to become more reliable, rather than just another tool in their repertoire, where they decide how much to trust it,” he said.

“That’s not to say AI won’t be more reliable in the future,” he acknowledged, “but right now, it’s all pretty speculative.”

“There always needs to be some human in the loop to make sure the AI isn’t causing unnecessary biases or inefficiencies,” Turner Lee said. “You still need people to manage it.”

Facing Severe Disruption

Revelio’s warning about AI’s impact on women’s jobs parallels one issued by the International Monetary Fund in 2018. At the time, the IMF estimated that 11% of jobs held by women, a higher share than of jobs held by men, risked elimination due to AI and other digital technologies.

In financial services, for example, women represent about 50% of the workforce, but they hold only 25% of senior management positions, according to a report by Boston Consulting Group. The report notes that senior management positions are generally insulated from shocks caused by automation.

Women employed in this sector predominate in clerical and administrative jobs that are at high risk of attrition, such as bank tellers, who are 85% female.

The pattern also holds true in female-dominated industries such as health care and education, which are less at risk from automation, the report said.

BCG predicted that AI will disrupt employment patterns in a big way in the coming years. It stressed that companies, governments and individual women must be prepared to invest in new skills for the new generation of jobs.

However, Duffield recommended that workers think about the present rather than the future. “For the worker, it’s now much less about worrying what new job you should train for because AI will replace you, and much more about learning how to use AI in the job you’re doing now,” he said.

Hyped Job Impact

Workers who adopt AI may be surprised by its productivity gains. “It’s saving my company time and money,” said Deidre Diamond, founder and CEO of CyberSN, a cybersecurity recruiting and career resource firm in Framingham, Mass.

“I haven’t replaced people,” she told TechNewsWorld. “I’ve been able to expedite projects, expedite work.”

Ida Bird-Hill, CEO and founder of Automation Workz, a reskilling and diversity consulting firm in Detroit, also praised her productivity gains using ChatGPT. “I wrote a proposal that normally takes 100 hours in 11 hours,” she told TechNewsWorld.

Tales of productivity gains, however, are being overshadowed by grim — and somewhat distorted — predictions about AI’s impact on the workforce.

“The news cycle is filled with claims about how generative AI systems will affect jobs,” said Hodan Omaar, a senior AI policy analyst at the Center for Data Innovation, a think tank in Washington, D.C. that studies the intersection of data, technology, and public policy.

“The perceived impact varies wildly from outlet to outlet, but the central message of the news media is clear: AI is here to take almost all jobs, not just blue-collar ones but white-collar ones, too,” she told TechNewsWorld.

‘Hokum’ claims

Omaar called many of the claims “hokum.” She cited a recent news article titled “OpenAI Research Says 80% of US Workers’ Jobs Will Be Affected by GPT.”

“The headline is eye-catching, emotionally resonant and easily repeatable, but it is narrowly true and broadly misleading,” she argued. “The figure comes from a research paper by OpenAI, but the paper does not say that 80% of jobs will be affected. It says that about 80% of the US workforce could have at least 10% of their work tasks affected.”

“This means that the real statistic is that large language models may affect at least eight percent of the work in the US economy,” she continued. “That’s a far less dramatic picture of the research findings, but a more honest one.”
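Omaar’s correction is, at bottom, simple arithmetic: a share of tasks within a share of workers, not a share of jobs. A minimal sketch of the calculation:

```python
# The OpenAI paper's claim: "about 80% of the US workforce" could have
# "at least 10% of their work tasks" affected. Multiplying the two gives
# the lower bound on the share of all work affected, not 80% of jobs.
workforce_share = 0.80   # share of workers with any exposure
task_share = 0.10        # minimum share of those workers' tasks

overall_work_share = workforce_share * task_share
print(f"at least {overall_work_share:.0%} of work, not 80% of jobs")
```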

Omaar explained that the concern about AI taking jobs is based on the “lump of labor” fallacy: the idea that there is a fixed amount of work, so productivity growth, such as from automation, will reduce the number of jobs. But the data tell a different story, she continued. Labor productivity has grown steadily over the past century, even if that growth has slowed recently, and unemployment is at an all-time low.

“It is becoming more and more difficult to wade through the hogwash of claims about AI, but if readers, and more importantly policy makers, are not prudent, they will make decisions based on unfounded fear or hype,” she warned.

Until recently, I was puzzling over where our next funky Linux flight of fancy should take us, as I sometimes do. Many pixels have been spilled on the subject of Linux, or even just desktop Linux, and many more are surely to follow. Still, I strive for a unique perspective.

That’s when I realized why it’s so challenging to think of something new to say about the Linux desktop: it’s so intuitive, stable, and versatile that it doesn’t make me think about it at all until I actually try.

Micro-epiphany in hand, I wanted to figure out what it is about Linux that makes it “just work” so well that users forget they’ve even selected an alternative to the desktop operating system monopoly.

Ride so smooth, you’ll feel like you’re floating

Most of what most of us do happens in a browser. Since Linux lets you use almost any browser you can think of, your browsing experience on Linux is instantly familiar.

Among the browsers that run like a dream on Linux are Chrome, Firefox, Opera, and Vivaldi. Better yet, because system resource usage is generally lower on Linux than on Windows and macOS, the browser is one of the few things, if not the only thing, using a notable amount of CPU or memory. That leaves more room for the browser to claim the resources it needs.

Rarely does the fan spin up on my Linux box, but when it does, I can almost always trace the culprit to a single browser tab (which, if you didn’t know, you can do with the browser’s task manager).

If Music and Videos Be Penguin Food, Play On

Another major desktop activity is media consumption. Get between a user and their favorite movie, TV show, or music, and they’ll leave your OS behind in a hurry.

There’s a reason VLC is known for its versatility: it can play practically any media format, audio or video, desktop or mobile. Better still, installing VLC on Linux couldn’t be easier. Every desktop distribution I’ve tried has VLC in its repository; just open the package manager, search for “VLC,” and install.

As mentioned above, VLC isn’t afraid of any file format, and “file” in this case can even mean reading live from a DVD. It can also cast to Chromecast devices. Bet you didn’t know that, did you? If you did, my hat’s off to you!

For those who prefer streaming to cramming their drives with files, Linux remains unobstructed. For video streaming, the browser has you covered. For music, Spotify builds a player client for Linux. Personally, I’ve had more problems with my Spotify Android app than with the Spotify Linux desktop client, and the former is a far bigger focus for its developers.

harmony emerges from discord

Discord has emerged as one of the most popular communication apps out there, which is an impressive feat considering how many options there are.

During the pandemic, I relied heavily on Discord to keep up tabletop RPG gaming with friends, so I installed it on my system without hesitation. During our weekly four-hour sessions, friends on Windows systems would have trouble, while my .deb-packaged “public test build” of Discord didn’t miss a beat.

Old-Fashioned Web Apps Never Looked Better

It will come as no surprise to longtime readers that I view the Linux desktop as the better desktop. I could sing its praises until my fingers blistered, but I won’t. I am a professional.

What I will say in that vein on this particular topic, namely that Linux provides such a refined experience that one forgets one is running Linux, is that some Linux-centric software makes web apps feel even more polished.

On Linux, web apps need not be confined to a normal browser window. I was first exposed to this new horizon on Linux Mint, my current daily driver, through its WebApp Manager program. The program also plays nice with other Debian-based distributions. If Debian-style Linux isn’t your speed, there are comparable options for other Linux families. Linux wouldn’t be Linux without choice, after all.

Once you’ve configured a web app, it pops into the apps menu of your desktop environment and behaves like a regular standalone app. It only needs an already-installed browser to run, which every desktop user has. Have multiple browsers? You’re in luck: you get to choose which browser hosts the app. You can also choose whether to use your browser’s private browsing mode.

Best of all, WebApp Manager lets you stay logged into your account, so you’re ready to roll at app launch. The managed web app respects this preference even if the browser is configured to clear all authentication tokens and cookies on exit. This way, you can keep your sensitive accounts secure while staying logged into non-sensitive ones, such as news readers.

Runs like a dream but sometimes causes nightmares

Nothing is perfect. Unfortunately, this also applies to Linux distributions. Certain scenarios are more likely to result in a less-than-ideal Linux desktop experience.

You might get a headache if you try to install Linux on overly obscure or shiny new hardware. Since becoming a Linux diehard, I’ve only bought computers with generic chipsets. If you try the latest and greatest on a Mac or PC with non-standard internals, the installation, and normal booting afterward, may get messy.

If you’re a big-time gamer, Linux will mean sacrifices. Custom gaming peripherals, streaming programs, and your favorite triple-A titles may not play nicely with Linux. I’ve heard Linux has made great strides, Steam Deck being arguably the most notable example, but all the serious PC gamers I know stand by their Windows rigs, as much as we all dream of a day when Linux conquers gaming.

If you’re wedded to certain proprietary desktop software, Linux’s offerings will feel unfamiliar. For my part, I’m perfectly happy to trade Word for LibreOffice, GarageBand for Audacity, and Photoshop for GIMP. However, if you’re not willing to learn these highly capable alternatives, you’ll miss their Windows/Mac counterparts, which just aren’t available on Linux.

None of these programs are ones I use on a daily basis, which is why I haven’t featured them before. But if you’re a creature of habit in your digital workflow, feel free to pass on Linux. I’m not angry.

Embrace Linux

It speaks to the maturity of the Linux desktop that these hang-ups are the exception, not the rule. I know more than one person in my social circles who has made the switch to Linux and hasn’t looked back.

Seeing as how Linux only continues to climb higher, I look forward to the days when I won’t realize I’m on Linux most of the time.

Many technology leaders agree that while AI can be hugely beneficial to humans, it can also be misused or, through negligence, harm humanity. But looking to governments to solve this problem without guidance would be foolish because politicians often don’t understand even the technology they’ve used for years, let alone something that has just come to market.

As a result, when governments act to mitigate a problem, they may do more harm than good. For example, it was right to punish the old Standard Oil Company for its abuses, but breaking up the company shifted control of oil from the United States to parts of the world that are not friendly to America. The same was true of consumer electronics regulation, which shifted that market from the US to Japan.

The US holds onto technological leadership by the skin of its teeth, and there is no doubt in my mind that if governments act without guidance on how to regulate AI, they will shift that opportunity to China. That’s why Microsoft’s recent report, titled “Governing AI: A Blueprint for the Future,” is so important.

The Microsoft report defines the problem, outlines a reasonable path forward that won’t undermine US competitiveness, and addresses concerns surrounding AI.

Let’s talk about Microsoft’s blueprint for AI governance. Then we’ll close with our Product of the Week, a new line of trackers that can help us keep tabs on the things we often have trouble finding.

EEOC Example

It is foolish to demand regulation without context. When a government reacts tactically to something it knows little about, it can do more harm than good. I opened with a couple of cautionary examples, but perhaps the ugliest example involved the Equal Employment Opportunity Commission (EEOC).

Congress established the EEOC in 1964 to rapidly address the very real problem of racial discrimination in employment. Workplace discrimination had two basic causes. The most obvious was racial discrimination in the workplace, which the EEOC could and did address. But an even bigger problem existed in education, which the EEOC didn’t address.

Businesses that had hired based on merit, using whatever methodology industry had scientifically developed at the time to reward employees with positions, raises, and promotions based on education and achievement, were asked to improve their companies’ diversity with programs that often hired inexperienced minorities.

The system failed minorities by placing inexperienced people in jobs they weren’t well trained for, which only reinforced the belief that minorities were somehow inadequate when, in fact, they had not been given equal opportunities for education and counseling. This was true not only for people of color but also for women, regardless of color.

Looking back now, we can see that the EEOC didn’t really fix anything, but it did transform HR from an organization focused on caring for and nurturing employees into one focused on compliance, which often meant covering up employee issues rather than addressing them.

Brad Smith Steps Up

Microsoft President Brad Smith strikes me as one of the few technology leaders who thinks broadly. Instead of focusing almost exclusively on tactical responses to strategic problems, he thinks strategically.

Microsoft’s blueprint is such a case. While most people are going to the government saying, “You should do something,” which can lead to other long-term problems, Smith has set out what he thinks a solution should look like, and it turns out to be an elegant five-point plan.

He begins with a provocative statement, “Don’t ask what computers can do, ask what they should do,” which reminds me of John F. Kennedy’s famous line, “Ask not what your country can do for you; ask what you can do for your country.” Smith’s statement comes from a book he co-authored in 2019, where it was referred to as one of the defining questions of this generation.

This statement brings into context the importance and need of protecting human beings and makes us think about the implications of new technology to ensure that our use of it is beneficial and not harmful.

Smith continues to talk about how we should use technology to improve the human condition as a priority, not just reduce costs and increase revenue. Like IBM, which has undertaken a similar effort, Smith and Microsoft believe that technology should be used to improve people, not replace them.

He also, and this is very rare these days, talks about the need to anticipate where technology will be in the future so that we can address problems proactively and strategically rather than just reacting to them. The needs for transparency, accountability, and assurance that the technology is being used legally are all important to this effort and are well defined.

5-Point Blueprint Analysis

Smith’s first point is to implement and build on government-led AI safety frameworks. Too often, governments fail to realize that they already have some of the tools needed to solve a problem and waste a lot of time effectively reinventing the wheel.

Influential work has been done by the US National Institute of Standards and Technology (NIST) in the form of the AI Risk Management Framework (AI RMF). It’s a good, though incomplete, framework. Smith’s first point is to implement and build on it.

Smith’s second point is the need for effective safety brakes for AI systems that control critical infrastructure. If an AI controlling critical infrastructure goes off the rails, it could cause massive damage or even mass deaths.

We must ensure that those systems receive extensive testing and thorough human oversight and that they are tested against not only likely but also unlikely problem scenarios to make sure the AI doesn’t jump in and make things worse.

The government would define the classes of systems that require guardrails, provide direction on the nature of those protective measures, and require that the relevant systems meet certain security requirements, such as being deployed only in data centers tested and licensed for such use.
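The “safety brake” idea can be pictured as a thin guardrail layer between an AI’s decisions and the physical system it drives. The sketch below is purely illustrative; the class names, limits, and scenario are invented for this example and are not drawn from Microsoft’s blueprint. The AI’s output is clamped to a pre-approved envelope, and a human-operated emergency stop refuses all further actions.

```python
# Illustrative "safety brake" pattern: an AI-proposed action passes
# through guardrails that enforce hard limits and an emergency stop.
# All names and limits here are hypothetical.

class EmergencyStop(Exception):
    """Raised when the system has been halted by an operator."""

class GuardedController:
    def __init__(self, min_output: float, max_output: float):
        self.min_output = min_output
        self.max_output = max_output
        self.halted = False  # flipped by a human operator or watchdog

    def emergency_stop(self) -> None:
        self.halted = True

    def apply(self, ai_decision: float) -> float:
        """Pass an AI-proposed setting through the guardrails."""
        if self.halted:
            raise EmergencyStop("system halted by operator")
        # Clamp the AI's output to a pre-approved safe envelope.
        return max(self.min_output, min(self.max_output, ai_decision))

# Example: a hypothetical grid controller limited to 0-100 MW.
ctl = GuardedController(min_output=0.0, max_output=100.0)
print(ctl.apply(250.0))  # clamped to 100.0
ctl.emergency_stop()
# From here, any further AI action is refused, regardless of content.
```

The design choice worth noting is that the brake lives outside the AI: it does not rely on the model behaving well, only on a simple, auditable layer the operator controls.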

Smith’s third point is to develop a broad legal and regulatory framework for AI based on the technology’s architecture. AI is going to make mistakes. People may not like the decisions an AI makes even when they are correct, and people may blame the AI for things it had no control over.

In short, there will be a lot of litigation to come. Without a legal framework covering responsibility, rulings are likely to be varied and contradictory, making any resulting remedy difficult and very costly.

Thus, a legal framework is needed so that people understand their responsibilities, risks, and rights, helping them avoid future problems and find a quick legal remedy when a problem does occur. This alone could reduce what would otherwise become a massive litigation load, since AI currently has very little legal precedent behind it.

Smith’s fourth point is to promote transparency and ensure academic and nonprofit access to AI. It makes sense: how can you trust something you don’t fully understand? People don’t trust AI today, and without transparency, they won’t trust it tomorrow. In fact, I would argue that without transparency, you shouldn’t trust AI because you can’t verify that it will do what you want.

In addition, we need academic access to AI to ensure that people entering the workforce understand how to use this technology properly and to ensure that nonprofits, especially organizations focused on improving the human condition, have effective access to the technology for good.

Smith’s fifth point is to advance new public-private partnerships to use AI as an effective tool to address inevitable societal challenges. AI will have a massive impact on society, and ensuring that this impact is beneficial and not harmful will require focus and oversight.

He explains that AI may be a sword, but it can also be used effectively as a shield more powerful than any existing sword on the planet. It should be used everywhere to protect democracy and the fundamental rights of the people.

Smith cites Ukraine as an example where the public and private sectors have come together effectively to create a powerful defense. He believes, as do I, that we must emulate Ukraine’s example to ensure that AI reaches its potential to help move the world toward a better tomorrow.

Finale: A Better Tomorrow

Microsoft isn’t just going to governments and asking them to act to solve a problem that governments don’t yet fully understand.

It is laying out a framework for that solution, one meant to assure that we mitigate the risks surrounding the use of AI and have the tools and remedies in place to address problems when they occur, not the least of which is an emergency stop switch that allows a derailed AI program to be gracefully terminated.

Whether you’re a company or an individual, Microsoft is providing an excellent lesson here in how to show leadership on a problem, not just toss it at the government and ask it to fix it. Microsoft has outlined the problem and provided a well-thought-out solution so that the problem doesn’t become bigger than it already is.

Nicely done!

Tech Product of the Week

Pebblebee Trackers

Like most people, my wife and I often misplace stuff, which most often happens when we rush out of the house and put something down without thinking about where we put it.

Plus, we have three cats, which means regular trips to the vet to take care of them. Our cats have found unique and creative hiding places to avoid being nabbed and crated. So, we use trackers like Tile and AirTag.

But the problem with AirTags is that they really only work if you have an iPhone, as my wife does, which means she can track things, but I can’t because I have an Android phone. With Tiles, you must either replace the device when its battery dies or replace the battery, which is a pain. So, when we need to find something, the battery has often run out.

The Pebblebee works like those other devices but differs in that it’s rechargeable and works with either Pebblebee’s app, which runs on both iOS and Android, or the native apps in those operating systems: Apple Find My and Google Find My Device. Sadly, it won’t do both at the same time, but at least you get a choice.

Pebblebee Trackers

Pebblebee Trackers: clips for keys, bags, and more; tags for luggage, jackets, etc.; and cards for wallets and other narrow places. (Image credit: Pebblebee)

When you’re trying to locate the tracking device, it beeps and lights up, making it easier to find things at night and less like a bad game of Marco Polo. (I wish smoke detectors did this.)

Because the Pebblebee works with both Apple and Android, and you can recharge the battery, it serves my personal needs better than the Tile or Apple’s AirTag, and it’s my product of the week.

The opinions expressed in this article are those of the author and do not necessarily reflect the views of ECT News Network.

Every six months, I re-evaluate the configuration of my home office workstation for improvements and enhancements. Admittedly, my specific needs skew higher-end than those of the typical home office worker. I host a video podcast every week and need simultaneous, convenient access to both my MacBook Pro and my Dell minitower.

While I spend most of my day on the MacBook Pro for video editing, blog creation, presentation development, and other productive work, I use the Dell minitower because its Nvidia video card and the awesome Broadcast app are currently the only solution on the market for correcting eye gaze during video podcasts.

From my point of view, the ability to make eye contact is essential to enhancing my professionalism. No comparable solution exists in the macOS world, although Apple may address it at its upcoming WWDC event in early June. For now, I have to use a Windows-based system with a suitable Nvidia graphics card.

Still, switching display inputs back and forth between these two systems using the manual physical buttons on the back of my existing 38-inch monitor is a significant pain point. The inconvenience is compounded by the need for a second keyboard and mouse to operate the Windows system, which creates considerable clutter in my office.

Enter HP with its new E45c G5 DQHD Curved Monitor, which I received about a week ago. It’s hard to overstate the positive impact this extra-large display has had on my overall work productivity after only a week of use.

The HP E45c is usefully ultrawide

One screen really isn’t enough for many productivity power users. An ultrawide monitor may be enough to meet your demands, but many home users need regular access to multiple PCs, whether it’s a home laptop and a company-supplied laptop or, in my case, both a Mac and a Windows PC.

The latest HP flagship display, the E45c G5 DQHD Curved Monitor, features a 45-inch dual QHD display. It includes a single curved widescreen panel with a resolution of 5,120 by 1,440 pixels, essentially two 2,560-by-1,440-pixel displays combined. You’ll need a significant amount of desktop space to use it, preferably in a corner of your home office.

The monitor acts like two 24-inch displays, without the split or bezel in between that you’d find with a traditional twin-display arrangement. The size, shape, and 32:9 aspect ratio of this monitor offer plenty of screen area that lends itself easily to really useful multitasking.

The 1500R curvature of the E45c helps you become more immersed in your work by taking in a larger field of view. However, HP’s Super Screen has other tricks up its sleeve beyond its sheer width.

It is at this point that the fun begins. Although ultrawide displays are nothing new, some are designed to work with laptops, particularly those that employ Thunderbolt 4 or DisplayPort over USB-C.

The HP E45c is the first device of its size and resolution to support dual-display input via a single USB-C connection. When plugged into a wall socket, it supplies power to the laptop via the USB connection, charging it while it works.

HP E45c G5 DQHD Curved Display Rear Ports

The HP E45c G5 DQHD Curved Monitor has two USB-C ports that provide up to 65W of power each to two computers, or 100W of power to one computer and 30W to a tablet or phone. (Photo by the author)

When I plugged both my MacBook Pro and Dell PC into the monitor’s respective USB-C and HDMI ports, both system desktops automatically appeared in a 24-inch side-by-side format (as shown above), with no need to fiddle with back-panel buttons, as is the case with other widescreen displays.

But it does more.

Device Bridge is a home run

The HP E45c also features Device Bridge 2.0, an updated version of a function that was previously only available on HP’s premium display range. Device Bridge is a version of what the industry calls KVM (keyboard, video, mouse) functionality, although I’ve never seen it implemented so smoothly and seamlessly.

Clearly, HP is showing off its software development and implementation chops. Using a single keyboard and mouse, I could operate the desktops of the different computers displayed on the screen. I transferred files and data between my MacBook Pro and Dell PC by dragging and dropping them between the side-by-side displays. Additionally, the update has a security feature that disables Device Bridge when necessary.

Using this functionality, you can control two Windows PCs, a macOS system, or one of each machine.

Although I haven’t tried this configuration, HP claims you can daisy-chain another ultrawide monitor to mimic up to four displays across two screens.

Sonos Era 100 speakers level up your home workstation

Truth be told, Sonos speakers were never targeted at the PC market. When used with a TV or as part of an entertainment system, the company’s soundbars, subwoofers, and even portable Roam speakers sound quite enjoyable, as do the older Play series speakers.

However, the technology hidden behind the Sonos speaker’s grille is its major selling point. Its multiroom system functionality is the most practical way to hear everything, everywhere. It boasts connectivity with Alexa, Apple, Google and practically any music streaming service.

Now, with the new Era 100, Sonos finally crushes rivals from a sound quality perspective.

Sonos Era 100 speaker

The Sonos Era 100 is a single speaker that easily competes with anything but more expensive two-speaker systems thanks to dual tweeters and more refined room adjustment capability. (Photo by the author)

When used as a pair, the Era 100 is undeniably the most incredible little all-in-one speaker I’ve ever heard.

I was concerned that its Bluetooth connectivity would create latency challenges, but I never experienced any video/audio synchronization issues streaming video or editing for my podcasts, even when I didn’t have the speakers connected directly to my MacBook Pro’s audio port.

The Era 100 speaker is available in black or white. You don’t need to take out your phone to perform basic functions as it has a volume slider and a play/pause button. The rubberized bottom of the speaker is an improvement as it adheres to almost any surface and helps reduce acoustic vibrations.

A button on the back of the speaker next to the USB-C connector allows you to manually turn off the built-in microphone if you don’t like the voice assistant.

Easy Setup Impresses

Setting up and pairing two Era 100s is fairly simple. Both speakers work together when paired, though only one needs to be physically connected to the 3.5mm audio port on my MacBook Pro with the Sonos USB-C to Audio Port Adapter, which costs $19 and is sold separately.

Easy setup only required taking out my phone, installing the Sonos app, and registering the speaker with my account. The Sonos app allows you to connect to all your favorite streaming services, create groups of multiple speakers, and specify where the Era 100 is located in your home.

I appreciate the simple integration with my streaming provider, Spotify, and my voice assistant, Alexa. The Era 100 replaced an older Amazon Echo speaker in my nearby kitchen and picked up my voice commands better than the Echo did, even from farther away. The speaker also has a great mic for voice control.

Like the improved microphones, many of the Era 100’s best enhancements are hidden from view. Still, you’ll notice them as soon as you start playing music and participating in video conference calls.

Sonos boosted the woofer by 25% and added two angled tweeters for authentic stereo sound. Previously, Sonos speakers of a similar size and shape could only play music in mono.

The speaker includes a 47% faster CPU, which may extend how long this speaker continues to receive software upgrades compared with previous models.

Interestingly, Sonos claims to have “over-built” the processing into these speakers to potentially improve performance in the future. Although I’ve tested several Sonos speaker models side-by-side over the years without ever detecting any latency, it’s comforting to know there’s room for improvement.

closing thoughts

While the overall PC market continues to struggle with top-line unit growth, a slump that won’t subside for several quarters, if not a year, the peripherals category for large widescreen displays and docking stations remains a bright spot.

Manufacturers are beginning to understand that workers need multisystem setups at home. While multiple displays may not be uncommon in an office, home users have limited desk space in their home offices and prefer to avoid cable clutter.

At $1,099, the HP E45c G5 DQHD Curved Monitor is more affordable than you might think when considering the typical cost of two premium 24-inch displays. This monitor’s superb Device Bridge functionality, which eliminates the need for a secondary keyboard and mouse, is something multisystem workers will drool over.

The plethora of integrated ports on the HP E45c will provide docking station-like capabilities for all but the most advanced users. This monitor has changed the way I work at home and has dramatically increased my productivity.

As for the Sonos Era 100 speakers, I didn’t anticipate how their great sound would enhance my overall work experience. Especially during the regular videoconferencing calls I participate in over several hours of the day, I found the Era 100 speakers delivered surprisingly distinct and clear sound from each speaker.

On top of all those items, the Sonos Era 100 is more than satisfactory for streaming music and video. The speaker has some features to adjust its output depending on the acoustics of your room, as is the case with most premium speakers on the market, and its clean and balanced sound makes it suitable for listening to different music genres.

At $249 each — or $498 for a pair — Sonos is targeting these speakers to compete with Apple’s new $299 HomePod, announced in January. Still, as mentioned above, they are more suitable and functional for home office users.

Frankly, it’s hard to overstate the impact these new HP and Sonos accessories will have on your home work productivity. With hybrid working likely to be with us in the near future, these products are modest investments that make working at home more efficient, affordable, and enjoyable.

Ever since OpenAI introduced ChatGPT, privacy advocates have warned consumers about the potential threat to privacy posed by generative AI apps. The arrival of the ChatGPT app in the Apple App Store has triggered a new round of caution.

“[B]efore you jump straight into the app, beware of getting too personal with the bot and putting your privacy at risk,” warned Muskan Saxena at Tech Radar.

The iOS app comes with an obvious tradeoff that users should be aware of, she explained, including this admonition: “Anonymized chats may be reviewed by our AI trainers to improve our systems.”

Anonymity, however, is no ticket to privacy. Anonymous chats are stripped of information that could link them to particular users. “However, anonymization may not be a sufficient measure to protect consumer privacy because anonymized data can still be re-identified by combining it with other sources of information,” Joy Stanford, vice president of privacy and security at Platform.sh, a Paris-based maker of a cloud-based services platform for developers, told TechNewsWorld.

“It has been found that it is relatively easy to de-anonymize information, especially if location information is used,” said Jen Caltrider, lead researcher for Mozilla’s *Privacy Not Included project.
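The re-identification risk the experts describe can be sketched in a few lines of Python. This toy example (all names, fields, and records here are invented for illustration) shows a simple linkage attack: records stripped of direct identifiers are matched against an auxiliary dataset through leftover quasi-identifiers, such as coarse location and time.

```python
# "Anonymized" chat records: the user's name is removed, but coarse
# location and timestamp (quasi-identifiers) remain.
anonymized_chats = [
    {"chat_id": 1, "city": "Portland", "hour": 9},
    {"chat_id": 2, "city": "Austin", "hour": 22},
]

# Auxiliary data an attacker might already hold (e.g., leaked check-in
# records) linking the same quasi-identifiers back to real names.
checkins = [
    {"name": "Alice", "city": "Portland", "hour": 9},
    {"name": "Bob", "city": "Austin", "hour": 22},
]

def reidentify(chats, aux):
    """Link records by matching quasi-identifiers (a linkage attack)."""
    matches = {}
    for chat in chats:
        candidates = [
            row["name"] for row in aux
            if row["city"] == chat["city"] and row["hour"] == chat["hour"]
        ]
        # A unique match de-anonymizes the record.
        if len(candidates) == 1:
            matches[chat["chat_id"]] = candidates[0]
    return matches

print(reidentify(anonymized_chats, checkins))
# {1: 'Alice', 2: 'Bob'}
```

The point is not that attackers use this exact code, but that removing names alone leaves enough overlapping attributes, location above all, for trivial cross-referencing.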

“Publicly, OpenAI says it is not collecting location data, but its privacy policy for ChatGPT says they may collect that data,” she told TechNewsWorld.

Nevertheless, OpenAI warns users of the ChatGPT app that their information will be used to train its larger language model. “They’re honest about it. They’re not hiding anything,” Caltrider said.

taking privacy seriously

Caleb Withers, a research assistant at the Center for a New American Security, a national security and defense think tank in Washington, D.C., explained that if a user types their name, work location, and other personal information into a ChatGPT query, that data will not be anonymized.

“You have to ask yourself, ‘Is this something I would say to an OpenAI employee?’” he told TechNewsWorld.

OpenAI has said it takes privacy seriously and has implemented measures to protect user data, said Mark N. Vena, president and principal analyst at SmartTech Research in San Jose, Calif.

“However, it’s always a good idea to review the specific privacy policies and practices of any service you use to understand how your data is handled and what is being protected,” he told TechNewsWorld.

As dedicated as an organization may be to data security, vulnerabilities may exist that malicious actors can exploit, said James McQuiggan, security awareness advocate at KnowBe4, a security awareness training provider in Clearwater, Fla.

“It’s always important to be cautious and consider the need to share sensitive information to ensure that your data is as secure as possible,” he told TechNewsWorld.

“Protecting your privacy is a shared responsibility between users and the companies that collect and use their data, which is documented in those lengthy and often unread end user license agreements,” he said.

built-in security

McQuiggan noted that users of generative AI apps have been known to insert sensitive information such as birthdays, phone numbers, and postal and email addresses into their questions. “If an AI system is not secure enough, it can be accessed by third parties and used for malicious purposes such as identity theft or targeted advertising,” he said.

He added that generative AI applications can also inadvertently reveal sensitive information about users through their generated content. “Therefore,” he continued, “users should be aware of the potential privacy risks of using generative AI applications and take the necessary steps to protect their personal information.”

Unlike desktops and laptops, mobile phones have some built-in security features that can prevent privacy intrusion by apps running on them.

However, as McQuiggan pointed out, “While some measures, such as application permissions and privacy settings, may provide some level of protection, they cannot completely protect your personal information from all types of privacy threats, as is the case with any application loaded onto a smartphone.”

Vena agreed that built-in measures such as app permissions, privacy settings and App Store rules provide some level of protection. “But they may not be enough to mitigate all privacy threats,” he said. “App developers and smartphone makers have different approaches to privacy, and not all apps follow best practices.”

Even OpenAI’s practices differ between the desktop and mobile phones. “If you are using ChatGPT on the website, you have the ability to go to the data controls and opt out of your chats being used to improve ChatGPT. That setting doesn’t exist on the iOS app,” Caltrider said.

Beware of App Store Privacy Information

Caltrider also found the permissions used by OpenAI’s iOS app a bit fuzzy, noting that “in the Google Play Store, you can look and see what permissions are being used. You can’t do that through the Apple App Store.”

She cautioned users against relying on privacy information found in the app stores. “The research we’ve done into the Google Play Store security information shows that it’s really untrustworthy,” she observed.

“Research by others into the Apple App Store shows that it is also unreliable,” she continued. “Users should not rely on data protection information found on app pages. They should do their own research, which is difficult and complicated.”

“Companies need to be honest about what they are collecting and sharing,” she added. “OpenAI has been honest about how it is going to use the data it collects to train ChatGPT, but then it says that once it anonymizes the data, it can use it in a number of ways that go beyond the standards in the privacy policy.”

Stanford noted that Apple has some policies in place that may address some of the privacy threats posed by generative AI apps. They include:

  • Requiring user consent for data collection and sharing by apps that use generative AI technologies;
  • Providing transparency and control over how and by whom data is used through the App Tracking Transparency feature, which allows users to opt out of cross-app tracking;
  • Enforcing privacy standards and regulations for app developers through the App Store review process and rejecting apps that violate them.

However, he acknowledged, “these measures may not be sufficient to prevent generative AI apps from creating inappropriate, harmful, or misleading content that may affect users’ privacy and security.”

Call for federal AI privacy legislation

“OpenAI is just one company. Several are building large language models, and many more are likely to crop up in the near future,” said Hodan Omaar, a senior AI policy analyst at the Center for Data Innovation, a think tank studying the intersection of data, technology, and public policy in Washington, D.C.

“We need a federal data privacy law to ensure all companies follow a set of clear standards,” she told TechNewsWorld.

“With the rapid growth and expansion of artificial intelligence,” said Caltrider, “there is certainly a need for solid, robust watchdogs and regulations to keep an eye on it for the rest of us as it grows and becomes more prevalent.”