Robots have been around for decades, but they’ve mostly been stupid. They were either controlled remotely by humans or ran fixed scripts that allowed almost no latitude in how they operated or what they did. Even though I was born in the year Robby the Robot became famous, the robots I grew up with were nothing like Robby. They were as smart as old toasters.

Luckily, that’s changing. Robotics has advanced tremendously over the past decade, partly thanks to the pioneering work Nvidia has done with autonomous vehicles, much of which has translated into autonomous robotics. At CES this year, Nvidia’s technology was behind many of the top robots, starting with a robotic tractor from John Deere and ending with the GlüxKind, an AI-powered baby stroller I want to buy for my aging dog.

Let’s talk about the Nvidia-powered robots at CES this week. We’ll end with our product of the week, a wireless microphone that just might keep me from getting into a fight on my next flight.

John Deere Autonomous Tractor

John Deere won the Best of Innovation award for its robotic tractor.

John Deere fully autonomous tractor | Image credit: Deere & Company


I grew up working on a farm. Driving a tractor was fun in the beginning, but it got tiresome very quickly.

The repetition of driving long rows in the heat was only broken by the excitement of an equipment failure or the prospect of a horrific death should I fall asleep and fall off the tractor. That actually happened years later to my division head at IBM, who died after falling from his tractor into a plow.

John Deere’s autonomous tractors won’t get bored, won’t tire, and won’t die, freeing farmers to work on the other jobs that need doing, which matters because staffing farms has become a big problem lately. Historically, robots couldn’t compete on farms because labor was cheap. But now you can’t find those workers, and it’s just as difficult to get local people to work on the farm.

Therefore, if farmers want to continue operating, they need to automate, suggesting that the farms of the future may be run entirely by increasingly intelligent robots and robotic equipment. This tractor could be the key to keeping food on our tables in the future.

Agrist Harvesting Robot

Another robot was from Agrist. I am not a fan of it, mainly because it was made to harvest capsicum, and capsicum triggers my gag reflex. Just the smell of the things makes me feel sick. Still, if I had to harvest bell peppers (apparently one of my concepts of hell), I’d appreciate a robot like this one that kept my hands, nose, and tongue away from the horrible stuff.

Sometimes you have to grow stuff you don’t like, and this robot would assure me that if I still had a farm, which thankfully I don’t, I could plant and grow bell peppers without ever having to get close to them.

Seriously, this robot is designed to work in indoor factory farms, which will be critical to the survival of countries that will be badly affected by climate change and lose the ability to farm as a result. Such robots will be crucial to sustaining humanity as the climate makes outdoor agriculture obsolete.

Skydio Scout Drone

Drones were also represented, with Skydio showing its Scout drone.

Skydio is an attractive drone company. It also has a docked drone solution that reminds me of the old Green Hornet TV show. Can you imagine putting one of these on your car so you could check what’s causing the traffic jam you’re stuck in? Or imagine a police officer in a high-speed chase launching one of these to autonomously and covertly pursue the suspect, saving the officer from risking life and limb chasing the suspect by car.

Skydio drones are used in law enforcement, fire and rescue, power line inspection, construction, transportation, telecommunications, and defense.

Skydio is a powerful company with an increasingly powerful set of autonomous products that could one day save your life, making its Scout potentially one of the most important products launched at CES this year.

GlüxKind ‘Ella’ AI-Powered Stroller

I was looking for a powered stroller for my aging dog a few weeks ago. When the dog gets tired of walking, we put him in a stroller, but pushing it uphill gets old. When my wife walks all three of our dogs alone, managing the stroller at the same time becomes exhausting and potentially unsafe.

When empty, the GlüxKind Ella stroller will follow you (I don’t want to imagine it running away with a baby inside). When occupied, it’s battery-assisted for going up the hills where my wife often struggles (I’m currently her go-to solution for hills).

Glüxkind was honored with a CES 2023 Innovation Award for its “Ella” smart stroller. | Image credit: Glüxkind Technologies


Sadly, its current configuration won’t work for my dog; otherwise, I would probably have ordered one. Trying to teach a 14-year-old dog to sit like a kid in a stroller is a non-starter, although the attempt does shock passersby a bit. Still, for parents with multiple children, or those looking to walk their dog and child at the same time, this powered stroller could be a winner.

Now, if they would just come out with a pet configuration, I’d be all in.

Neubility Delivery Robot

Neubility’s self-driving robot, named Neubie, is one of the newer delivery robots to hit the market.

I’m a little worried about this class of robot. In tests, children and some adults often abused and broke these robots while they were in use. The Neubie is bigger and more robust from what I’ve seen, but I imagine it may need some sort of defense or high-speed escape capability to work in the real world.

Onboard cameras should capture and record anyone who damages it, but it may take a while before people leave these things alone to do their jobs. For this reason, Neubility is smartly targeting golf courses, where the robots can be better protected. Places such as resorts, hospitals, and factories are where such robots could operate most successfully.

I’ll wait to see if they develop one with a built-in taser before putting too much faith in delivery robots outside controlled environments like golf courses and resorts.

Still, once accepted and protected, robots of this class will likely make home delivery by humans a thing of the past, better ensure you’re home to receive the delivery, and make life very difficult for porch pirates, whom I hate with a newfound obsession after this past Christmas.

Seoul Robotics LV5 Control Tower for Autonomous Parking

Seoul Robotics demonstrated a Level 5 control tower, a departure from the way autonomous cars are currently configured. It uses infrastructure outside the vehicle to manage the automobile, potentially enabling any current-generation car with Level 2 technology that is connected to that grid to operate autonomously.

This variant is interesting because, rather than treating autonomous cars as they are now, it works more like an air traffic controller: it monitors all cars in range and directs them from a central resource. Eventually, this technology could replace things like traffic lights, effectively moving them into the vehicle when it is being driven by a human and making them invisible to people riding in autonomous cars.
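
I have no visibility into Seoul Robotics’ actual protocol, but the air-traffic-controller idea can be sketched as a central tower that tracks every connected car and issues the go/hold decisions that traffic lights make today. This is a toy sketch with entirely hypothetical names, not Seoul Robotics’ software:

```python
class ControlTower:
    """Toy central controller: grants one vehicle at a time access to an
    intersection, standing in for the traffic lights the column says this
    approach could replace. Purely illustrative."""

    def __init__(self):
        self.queue = []        # vehicles waiting, in arrival order
        self.occupant = None   # vehicle currently in the intersection

    def request_entry(self, vehicle_id):
        """Connected car asks the tower for clearance to enter."""
        if self.occupant is None:
            self.occupant = vehicle_id
            return "go"
        self.queue.append(vehicle_id)
        return "hold"          # the car's Level 2 stack simply holds position

    def report_clear(self, vehicle_id):
        """Car reports it has exited; the tower clears the next one, if any."""
        assert vehicle_id == self.occupant
        self.occupant = self.queue.pop(0) if self.queue else None
        return self.occupant

tower = ControlTower()
print(tower.request_entry("car-A"))  # "go"
print(tower.request_entry("car-B"))  # "hold"
print(tower.report_clear("car-A"))   # "car-B" is cleared next
```

The design choice the column describes is visible even in this toy: all coordination logic lives in the infrastructure, so the cars themselves only need the connectivity and basic Level 2 controls they already have.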

Not only could this approach be much cheaper than putting this technology in every car, but it would also shift maintenance from the car owner to the government, which could maintain it better, although this is not always a given.

It could also help prevent catastrophic problems, allow older cars to interoperate better with newer autonomous vehicles, and provide a viable low-cost upgrade path for owners of recent cars that lack autonomous functions. This is arguably the most innovative approach to the autonomous car problem I’ve seen, and I’m thrilled by it.

Whill Autonomous Wheelchair

Finally, Whill presented its autonomous wheelchair designed for people with limited mobility and vision. Older or partially disabled people who cannot see well are heavily dependent on others because the white cane approach does not work in a wheelchair.

Winner of the Best of Innovation Award in the Accessibility category, this wheelchair features unique high-traction tires and a rear bin to hold packages or groceries. It sounds a bit like a science fiction movie.

With 12 miles of range, the ability to climb over 3-inch objects like curbs, and very high stability for rough roads, it could be ideal for aging seniors and those with vision and mobility problems. At 5.5 mph, it’s anything but blazing fast, but if you have mobility and vision problems, you probably don’t want blazing fast.

Weighing in at 250 pounds, it’s lighter than many motorized solutions for people with limited mobility, and its autonomous capability provides freedom that some people might not get any other way.

wrapping up

This list of robots at CES is by no means exhaustive, but I realized that Nvidia’s technology was the brains behind most of the robots I saw, so I thought I’d use that as a theme for this column. The autonomous robot revolution is just getting started; let’s hope we never go far enough to make the book “Robopocalypse” a reality.


Over the next decade, these efforts will unfold at an increasing pace, and Nvidia has placed itself at the heart (okay, maybe more at the brain) of them. After all, we could all end up like George Jetson, with a maid like Rosie who’s autonomous, robotic, and equipped with just the right level of snark.

At CES, I saw my robotic future. I can’t wait until I have my own Rosie!

Tech Product of the Week

Mutalk VR Microphone

Before Christmas, I took my last trip of the year, to New York. Before taking off on the return flight, I had to do a radio interview over the phone. While the person next to me was fine with it, the guy in front of me was not, and it looked like he was about to hit me for talking too loudly. I have a trained media voice, and it carries.

Having a solution for this could be a lifesaver, especially if we get to the point where we’re making inflight phone calls and don’t want to annoy, or accidentally entertain, everyone on the plane with our conversations, let alone accidentally share confidential or personal information.

The Mutalk VR microphone was one of two products launched at CES that can contain your voice when you speak.

I’m choosing the Mutalk because it was also designed to work with the VR rig I play with. The other product, the mask by Skyted, was huge, and while it was apparently designed for inflight use, that didn’t make it more appealing to me. To be honest, I’d be fine with either, and I have to admit that even a Mutalk rig on a plane might be a bit much.

Finally, I have something I can use for making calls in areas with a lot of ambient noise, or in situations where speaking loudly might get me punched. So, the Mutalk leakage voice suppression microphone is my product of the week.

The opinions expressed in this article are those of the author and do not necessarily reflect the views of ECT News Network.

Nvidia announced significant updates to its Isaac Sim robotics simulation tool at the Consumer Electronics Show (CES) on Tuesday.

The Isaac SDK is the first open-source robotic AI development platform with simulation, navigation, and manipulation capabilities. Development partners use its software tools to build and test virtual robots in realistic environments under various operating conditions. Now accessible from the cloud, Isaac Sim is built on Nvidia Omniverse, a platform for building and managing metaverse applications.

The demand for intelligent robots is increasing as more industries adopt automation to address supply chain challenges and labor force shortages. According to ABI Research, the installed base of industrial and commercial robots will grow more than 6.4 times from 3.1 million in 2020 to 20 million in 2030.
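
A quick back-of-the-envelope check of that forecast (the figures are ABI Research’s; the arithmetic is mine):

```python
# ABI Research forecast figures cited above, in millions of units.
base_2020 = 3.1
forecast_2030 = 20.0

growth_factor = forecast_2030 / base_2020            # ~6.45x, i.e., "more than 6.4 times"
cagr = (forecast_2030 / base_2020) ** (1 / 10) - 1   # implied compound annual growth rate

print(f"growth factor: {growth_factor:.2f}x")  # growth factor: 6.45x
print(f"implied CAGR: {cagr:.1%}")             # implied CAGR: 20.5%
```

In other words, the forecast implies the installed base compounding at roughly 20% per year for a decade.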

According to Gerard Andrews, product marketing manager for Nvidia’s robotics developer community, developing, validating and deploying these new AI-based robots requires simulation technology that places them in realistic scenarios.

Isaac Sim enables roboticists to import the robot model of their choice and fully utilize its software stack to create a realistic environment, validate the robot’s physical design, and ensure performance. Users can generate synthetic datasets during simulation to train the AI models used in the robot’s perception system, and researchers can take advantage of the Reinforcement Learning API to train models in the robot’s control stack.
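
Isaac Sim’s Reinforcement Learning API is Nvidia’s own, but the training pattern it serves is the standard reset/step episode loop used across RL tooling. This toy sketch uses a stand-in environment of my own (no Isaac Sim code; all names are hypothetical) just to show that loop:

```python
class ToyEnv:
    """Stand-in for a simulated robot environment (NOT Isaac Sim's API).
    The 'robot' must move from position 0 to position 5."""

    def reset(self):
        self.pos = 0
        return self.pos  # initial observation

    def step(self, action):
        self.pos += action                 # action: +1 (forward) or -1 (back)
        done = self.pos >= 5               # episode ends at the goal
        reward = 1.0 if done else -0.1     # small penalty per step taken
        return self.pos, reward, done

def run_episode(env, policy, max_steps=100):
    """One episode of the generic RL loop: reset, then step until done."""
    obs, total = env.reset(), 0.0
    for _ in range(max_steps):
        obs, reward, done = env.step(policy(obs))
        total += reward
        if done:
            break
    return total

env = ToyEnv()
always_forward = lambda obs: 1
print(round(run_episode(env, always_forward), 2))  # 0.6: four -0.1 penalties, then +1.0 at the goal
```

In a real Isaac Sim workflow, the environment’s `reset`/`step` would be backed by the physics simulation and rendered sensors, and the policy would be a neural network being trained, but the control-stack training loop has this same shape.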

The latest version focuses on improving performance and functionality for manufacturing and logistics robotics use cases. The software now supports adding people and complex conveyor systems to the simulation environment, and more assets and popular robots are pre-integrated to reduce simulation lead times.

Release highlights

Robot Operating System (ROS) developers gain support for ROS 2 Humble and Windows. Robotics researchers get many new capabilities aimed at advancing reinforcement learning, collaborative robot programming, and robot learning.

Systems improvements focus on the needs of humans working with collaborative robots (cobots) or autonomous mobile robots (AMRs). Isaac Sim’s new people simulation capabilities add normal human-like behaviors to the simulation.

For example, developers can now add human characters to simulations of a warehouse or manufacturing facility, tasked with performing common behaviors such as stacking packages or pushing carts. Many of the most common behaviors are already supported using commands.

To reduce the gap between results observed in the real world and those observed in the simulated world, physically accurate sensor models are essential. Nvidia’s RTX technology enables Isaac Sim to render physically accurate sensor data in real time. In the case of RTX-simulated lidar (light detection and ranging), ray tracing provides more accurate sensor data at greater speed under different lighting conditions and in response to reflective materials.

More tools for robotic researchers

Isaac Sim also provides a number of new simulation-ready 3D assets critical to creating physically accurate simulated environments. According to Nvidia, everything from warehouse parts to popular robots is ready, so developers and users can start building quickly.

Three new capabilities strengthen the toolset for robotics researchers:

  • Advancements in Isaac Gym improve reinforcement learning.
  • Isaac Cortex improves collaborative robot programming.
  • A new instrument, Isaac Orbit, provides a simulation operating environment and benchmarks for robot learning and motion planning.

Isaac Sim supports the simulation of warehouse conveyors and people. (Image credit: Nvidia)


Expanded use of robotics underway

According to Nvidia, the robotics ecosystem is already spread across a range of industries from logistics and manufacturing to retail, energy, sustainable farming and more. Its Isaac robotics platform provides advanced AI and simulation software as well as accelerated computing capabilities to the robotics ecosystem. Over a million developers and over a thousand companies rely on one or more parts of it.

Samples of robotic operations include:

  • TeleExistence has deployed beverage restocking robots in 300 convenience stores in Japan.
  • To improve safety, Germany’s national railway company Deutsche Bahn trains AI models to handle important but unpredictable corner cases that rarely happen in the real world – such as luggage falling onto a train track.
  • Sarcos Robotics is developing robots to pick up and place solar panels in renewable energy installations.
  • Festo uses Isaac Cortex to simplify programming for cobots and transfer simulation skills to physical robots.
  • Fraunhofer is developing advanced AMRs using Isaac Sim’s physically accurate and full-fidelity visualization features.
  • Isaac Replicator is being used for flexible synthetic data generation to train AI models.

A lot of work is going into creating the next generation of the web. Much of it centers on the idea that instead of traditional web pages, we will have a vastly different, far more immersive experience. Let’s call it “Web 3D.”

I had the chance to speak with Nvidia CEO Jensen Huang, who shared his thoughts on Web 3D. While it mixes in elements of the metaverse, it is tied more to the AI implementation that will propel the next generation of the web than to any simulation of reality hosted on that new web.

Vague? You are not alone. Let me try to unpack this concept.

Then we’ll look at my product of the week, a very different Amazon Kindle called the Scribe. It shows promise but needs some changes to be a great product.

AI propels the next generation web

Interestingly, I think Microsoft’s Halo game series got it right because Cortana, Microsoft’s fictional AI universal interface, is closest to what Huang hinted at about the future of the web.

In the game and TV series “Halo”, Cortana is the one that Master Chief talks to in order to access the technology around him. Sadly, even though prototypes like the one in this YouTube video have been created, Microsoft has yet to take Cortana to where it might be.

Right now, Cortana lags behind both Apple’s digital assistant and Google Assistant.

Huang believes that an AI front end will become a reality with the next generation of the web. You will be able to design your own AI interface or license an already-created image and personality from the various providers that will rise to the occasion.

For example, if you want the AI to look like your perfect boyfriend or girlfriend, you could start by describing the interface you want, and based on its training, the AI will produce a design that looks that way.

Alternatively (and this is not mutually exclusive), it may design one based on your known interests, drawn from the cookies and web posts you generate over your life. Or you could choose a character from a movie, or an actor, which would come with a recurring fee; that character would then, in character, become your personal interface.

Imagine having Black Widow or Thor as your personal guide to a world of information. They’ll behave just as they do in the “Avengers” movies and give you the information you’re looking for. Instead of viewing a web page, you’ll see your chosen digital assistant magically unfolding metaverse elements to address your questions.

Search in Metaverse Experience

Search as we know it will change, too.

For example, when looking for a new car today, you visit various manufacturers’ websites and explore options. In the future, you could instead ask, “What car should I buy now?” And, based on what the AI knows about you, or on how you answer questions about your lifestyle, it will make a recommendation and pull you into a metaverse experience where you virtually test-drive the car it thinks you’ll like.

During this virtual drive, it will offer other options you might like, and you’ll be able to express your interest, or lack thereof, to arrive at a final choice. In the end, it will recommend where you should buy your car, adapting that recommendation to whether you value things like low prices or good service. These options will include both new and used offerings, based on what the AI knows about your preferences.

The time and effort spent on the project will be massively reduced, while your satisfaction, assuming the AI has accurate information, is maximized. Over time, this Web 3D interface will become a companion and trusted friend more than anything you’ve ever seen on the web.

Once it reaches critical mass, care must be taken to ensure that it isn’t compromised in favor of the interests of a political party, vendor, or bad actor.

This last point is important. It may turn out that instead of being free, as today’s browsers are, the interface ends up being a paid service to ensure that no other entity can take advantage of your trust, since there will be ample opportunity to abuse this new interface. Ensuring that such abuse doesn’t happen should be more of a focus than it is at present.

wrapping up

According to Huang, the future of this front end, call it the next-generation browser, is an increasingly photorealistic avatar based on your personal preferences and interests; one that can behave in character when needed; and one that will offer more focused options and a far more personalized web experience.

Perhaps we should talk about the next generation of the web in terms of both its visual aspects, the 3D part, and its behavioral aspects, the “transhumanist web.” Something to noodle on this week.

Technical Product of the Week

Kindle Scribe

I’ve been using Kindles since they were first released. My first had both a keyboard and a free cellular connection.

They’ve proven to be interesting products when traveling, with all-day battery life and better performance in the sun than LCD-based tablets or smartphones. Some are water resistant, allowing you to use them during water recreation activities. For example, when I float the river near my house, I’ll bring a water-resistant Kindle with me so I can read during the boring parts (for me, the whole float is the boring part).

But they’ve always been limited to reading books and some digital files (you can email .pdf files to Amazon for your Kindle). That just changed with the new Kindle Scribe. It’s similar in size to the 10-inch Amazon Fire tablet and lets you mark up the documents and books you’re reading.

While the Kindle Scribe is still a reading-focused product, this latest version has optional pens that can be used to draw on or comment on the things you’re reviewing and, as with most similar products, will let you sketch pictures if that interests you.

Kindle scribe (Image credit: Amazon)


As with all Kindles, it uses an e-paper display that works well in sunlight, and the larger size means you can adjust the font more finely to address vision problems, potentially removing the need for reading glasses for people with only minor vision loss.

The drawbacks limiting the product are that it doesn’t currently support magazine or newspaper subscriptions, it doesn’t play music (probably better left to your smartphone anyway), and, as with most e-paper technology, the refresh rate is too low for video. It currently doesn’t even do email.

It has a web browser, but that browser does not display web pages as intended. Instead, it lists stories vertically, like a smartphone with a smaller screen would. In practice, it has many page-load problems; for example, I couldn’t bring up the Office 365 or Outlook websites.

Lastly, it doesn’t support handwriting conversion to text, making it less useful for note-taking than other products that have this functionality, but I expect it to improve as the product matures.

The person who will appreciate this product most is one who wants a larger reader and sometimes needs to mark up documents as part of an editing or review process. If you want a more capable tablet, the Amazon Fire tablet is one of the best values on the market, but it won’t work as well outdoors, nor does it have anywhere near the battery life the Kindle Scribe offers.

For the right person, a Kindle Scribe can be a godsend. But for most, the Amazon Fire tablet is likely the better overall choice. In any case, the new Kindle Scribe is my product of the week. At $339, it’s a good value that I expect will get better over time.

The Kindle Scribe will be released on November 30. You can pre-order it on Amazon now.

The opinions expressed in this article are those of the author and do not necessarily reflect the views of ECT News Network.

While Netscape didn’t invent the Internet or HTML, it was the company that made the Internet real. Netscape built on Tim Berners-Lee’s HTML creation and was instrumental in turning it into something that would change the world.

Last week at Siggraph, Nvidia’s opening keynote identified Universal Scene Description (USD), developed by Pixar, a Disney subsidiary, as the HTML equivalent for the metaverse. Since Pixar wouldn’t exist without Steve Jobs, this puts Pixar where Berners-Lee was and Nvidia where Netscape was, but unlike Netscape, Nvidia is very well run and knows how to pick its battles.

Nvidia also talked about the future of the Metaverse, where avatars will become browser-like, creating a whole new level of human/machine interface. Nvidia also announced the concept of Neural Graphics, which is based heavily on AI to create more realistic Metaverse graphical elements with far less work.

This week let’s talk more about what happened at Siggraph — and how Nvidia and Disney can, and should, demonstrate their strengths at the forefront of the Metaverse.

Then we’ll close with our product of the week, HP’s halo product, the Dragonfly laptop, which has just had its third edition released. Halo products showcase the full capabilities of a vendor and draw people to the brand, and this one is well positioned against the best from Apple.

Metaverse and Disney

I’m a former Disney employee and I can’t think of any other company on the content side that would be a better base for building the Metaverse.

Disney has always been about fantasy and trying to make magic real. While the firm has had problems maintaining its innovative leadership over the years, it still outdraws all its peers across all age groups, especially the young, with physical, magical places to visit and with film content.

It is tempting to note that the concept of the multiverse, as illustrated by the Marvel Universe, which Disney also owns, could easily become a metaverse creation, suggesting that as the metaverse moves into the consumer market, Disney could become an even more powerful driver of this new technology for entertainment.

That’s a long way of saying that, given its relationship with USD and entertainment, Disney may be the best-positioned media company to take advantage of this new paradigm and turn its version of the metaverse into something truly amazing. Imagine the potential of metaverse Disney parks that kids could enjoy from their homes during extreme weather events, pandemics, or wars.

Nvidia’s One Metaverse Movement

Right now, the metaverse is a mess. Companies like Meta and Google appear to be creating the kinds of walled-garden experiences that CompuServe and AOL built at the dawn of the Internet, experiences the market ultimately did not want.

The reason those walled-garden efforts didn’t survive is that no single company can meet the needs of every user. Once they gave way to the open Internet, the technology really took off, and AOL and CompuServe largely faded into history.

Nvidia CEO Jensen Huang is a big believer in the metaverse. He refers to it as Web 3.0, the successor to Web 2.0 (the Internet as we know it today, with its shift to the cloud and user-generated content). This concept of a generic metaverse, with elements you can move between seamlessly, requires a great deal of standardization and advancements in physical interfaces like VR goggles.

Huang addressed this during the keynote, speaking of massive advances in headset technology that will in the future bring VR glasses much closer to the size and weight of reading glasses, making them less tedious and annoying. However, recalling our problems with 3D glasses, the industry will still need to address consumers’ overwhelming dislike of prosthetic interfaces if the effort is to reach its full potential.

One of the most interesting parts of the presentation was the concept of neural graphics: graphics enhanced significantly by AI, which reduces the cost and increases the speed of scanning things in the real world and turning them into mirror images in the virtual world. At the event, Nvidia presented about 16 papers on neural graphics, two of which won awards.

Building on Pixar’s Universal Scene Description, Huang explained how, once these virtual elements are created, they will be linked via AI to ensure that they remain in sync with the real world, enabling complex digital twins that can be used for extremely precise simulation for both business and entertainment purposes.

This made me wonder how long it will be before the keynote is delivered by an avatar of Huang rather than by Huang himself. Given the progress in avatar realism and emotion, there will come a time when avatars are far better at such presentations than humans are.

On this point, Huang introduced a concept called Audio2Face, which combines a voice track with an avatar to create realistic facial expressions that convey emotion and are often indistinguishable from an actor’s performance.

To do this realistically, they mapped facial muscles and then let the AI learn how people manipulate those muscles for different emotions, with the ability to edit those emotions after the fact. I have no doubt that the kids of tomorrow will have a lot of fun with this, and that it will create some deeply murky issues we will need to address in the future.

Audio2Face, along with MDL, a content definition language, and NeuralVDB, which can reduce volumetric data sizes by up to 99%, creates a pattern of increasing resolution and realism while reducing the overall cost and effort.

Back to Disney: This technology could allow the company to create more compelling streaming and movie theater content while reducing its production budget, which would be huge for its top and bottom lines.

Finally, Huang talked about a cloud publishing service for Avatars called Omniverse ACE. This could potentially open up a market for avatar creation, which in itself could be a highly profitable new tech industry.

wrapping up

With tremendous gains in USD and multi-age group content, Disney is in a unique position to benefit from our move into the metaverse.

However, the technology company to watch in this space is Nvidia, which is at the forefront of creating this Web 3.0 metaverse that will fast-forward the Internet as we know it and provide us with amazing new experiences, and undoubtedly new problems we haven’t identified yet, much like the Internet did.

In their respective fields, both Nvidia and Disney are forces of nature, and betting against either company has proven unwise. Together, they are creating a metaverse that will surprise, entertain and help solve global problems like climate change.

What is being built for the metaverse is simply amazing.

We are at the forefront of another technological revolution. Once done, the world will become a mixture of the real and the virtual and will be forever changed again.

Technical Product of the Week

HP Elite Dragonfly G3

Halo products are expensive and somewhat exclusive offerings that often show what a company can do, regardless of price.

The HP Elite Dragonfly G3 is the third generation of this Halo product, and it’s a relatively affordable showcase of HP’s laptop capabilities.

Lighter than most of its competitors, including the MacBook, sporting the latest 12th Gen Intel Core processors, and promising up to 22 hours of battery life (video), this 2.2-pound laptop is an impressive piece of kit.

HP Elite Dragonfly G3 | image credit: HP


Interesting features include a mechanical privacy shade for the 5MP front-facing camera that is activated electronically from the keyboard.

The laptop comes in a unique Slate Blue finish, which I think looks awesome. This latest generation was designed for the hybrid world many of us now live in, where we work from home but sometimes go to the office.

It has Wi-Fi 6E for better wireless connectivity and supports 5G WAN for times when Wi-Fi is either too insecure or unavailable.

The Elite Dragonfly G3 has a 3:2 aspect ratio instead of the more typical widescreen display. Widescreen may be better for films, but 3:2 is better for work, and laptops in this class are expected to focus more on content creation than on entertainment. The taller screen also enabled a large touchpad that includes a fingerprint reader for security.

The ports on this unit, which has a 13.5-inch display, are surprisingly complete for one of the thinnest laptops I’ve tested. In addition to two USB-C Thunderbolt ports, it has a full-size USB port and a full-size HDMI port, both of which are unusual but not unheard of in a laptop this small and light.

hp elite dragonfly g3 port

HP Elite Dragonfly G3 Right-Side Ports | image credit: HP


The product is relatively durable, using a magnesium/aluminum frame that is largely from recycled metals and designed to be recycled again as the laptop gets older.

Finally, it is potentially one of the most secure laptops in its class, thanks to the Wolf Pro security option for those who want extra protection. Interestingly, starting at just $2,000, the Wolf Security Edition is also one of the most affordable.

I was at the launch of HP’s first Dragonfly laptop, and I am very impressed with this offering, which is my product of the week. I’m going to hate giving this laptop back.

The opinions expressed in this article are those of the author and do not necessarily reflect the views of ECT News Network.

Last week I listened to a podcast featuring Lucid Motors SVP Mike Bell (ex-Apple, ex-Rivian), who argues that the Lucid Air and the upcoming Lucid SUV, to be named Gravity and due in 2024, are very different from every other car on the road, including Tesla’s.

Given that Tesla was heavily influenced by Apple as well, it will be interesting to see what difference Apple makes in this market when it finally announces its electric car.

The direction of these three companies suggests a shift from the traditional approach of older carmakers to a model that looks more like a tech firm’s. Car companies like Lucid and Tesla are more like Apple than GM or Ford, which I imagine will eventually become a problem for GM and Ford.

I’ve also had updated briefings from Nvidia and Qualcomm on how they’re tackling autonomous driving, which could be complementary approaches for the next generation of EVs.

Let’s talk about the future of electric cars, then close with our product of the week: an update on the Bartesian robotic bartender, this time from, no joke, Black + Decker.

Nvidia and Qualcomm Vehicle Tech

Nvidia’s Drive platform is heavily used in Lucid vehicles. It is a comprehensive suite of offerings that covers the autonomous-car technology stack, from concept and simulation to training to in-car inference. Much of its strength lies in Nvidia’s Omniverse simulation capability, which is widely used in the automotive industry.

Qualcomm is focusing more on the in-car side of this equation, with compelling technology that, on paper, is both cheaper and better than other options.

Given that car companies are laser-focused on margins, you can see a world emerging where Nvidia owns much of the backend and control structure for autonomous cars, while Qualcomm powers the self-driving systems inside most vehicles. Qualcomm has demonstrated that its ability to keep smartphone costs down can translate into in-car solutions that do the same thing.

An increasingly likely future is one where Nvidia supplies much of the autonomous-car backend while Qualcomm provides the in-vehicle, multi-layered computer-vision technology.

car and driver conversation

I rode in the back of a Lucid car (pictured above) last week, and I must admit I found the car fascinating. It really is like no other car on the road. The performance specs and the price are both amazing.

On performance, the top configuration delivers a 2.5-second 0-to-60 time, up to 1,111 hp, a top speed of 168 mph, and a range of 520 miles, which is class-leading at the moment. But that performance will cost you closer to $200K.

Then again, you could argue that you’re basically getting four cars for the price of one: a sports car, a family car, an off-road car, and a hauler (there’s a huge amount of luggage space), all in one vehicle.

Lucid also demonstrates a change in thinking about the interaction between a car and its driver. Until now, the driver has had to learn the car. Every car is so different that most drivers may never learn how to use all its features. For example, I’ve never been able to successfully use my car’s self-parking capability.

The Lucid model learns how to work with you and learns your preferences, and that data can be transferred from car to car, so you never face the problem of being unable to properly use a feature you paid for.

upgrade after purchase

Lucid is at the forefront of offering a solution that is not only software-defined but also potentially easy to update and upgrade over time, which should keep its cars in service longer than would otherwise be the case.

I’ve grown frustrated with more traditional car companies because you can almost bet that right after you buy a new car, they’ll introduce an upgrade that can’t be retrofitted, and you had no way of knowing it was coming.

For example, a few years ago I bought a Mercedes. Sometime between when I ordered the car and when it was delivered, they moved one of the features I had ordered into another package that didn’t exist when I placed my order. Because I hadn’t selected that bundle (again, it wasn’t even available at the time), they removed the feature from the car.

The only way I could get the feature back was to pay three times what it had cost before the change. The huge price increase was because it was far more expensive to add the feature once the car was built.

Both Lucid and Tesla have demonstrated that they can do a better job of providing post-purchase upgrades to their cars. As the industry considers the concept of cars as a service, this ability to change a car’s configuration after it leaves the factory not only opens the door to a stronger used-car opportunity for dealers but also to a longer, happier relationship with the cars we eventually buy.

Instead of replacing a perfectly good car after three years because it has become dated, imagine updating the vehicle so that it remains almost as good as new.

Lucid’s technology- and software-focused approach also means that many of these upgrades could come as part of the service, just like some of the more interesting improvements Tesla has made to its cars over the years. Tesla is one of the few car companies whose drivers look forward to software updates because of the pleasant surprises they bring, and Lucid is looking to overtake Tesla in this regard.

Part of why Lucid may be able to outpace Tesla is its use of Nvidia Drive, which gives small car companies a way to match or exceed the capabilities of larger firms by leveraging Nvidia’s extensive resources. It really is a game changer.

wrapping up

As we move into the middle of the decade, our in-car experiences will change substantially, becoming not only more customizable for the buyer but also delivering a level of personalized, after-sales customization that hasn’t been seen in the tech market, let alone the automotive market.

Once that happens, the technology market may need to pick up some of these automotive advancements to better compete in its own segments, because a product that adapts itself to the unique needs of its user is a competitive revolution.

It is difficult to imagine that any customer, given the option, would ever choose the old-fashioned approach of forced learning and inflexibility in the increasingly smart personal technology, equipment, and vehicles they buy.

Companies like Lucid, Rivian, Tesla, Nvidia, and Qualcomm are leading the automotive market toward a future that is far more responsive to buyers’ needs. That’s good news for our purchasing future, though it probably won’t arrive until the latter half of the decade.

Technical Product of the Week

‘Bev’ by Black + Decker

We were among the first owners of the Bartesian robotic bartender, and we have enjoyed the product in the years since it came out.

However, loading the alcohol into the device was annoying, and filling it with water was a pain that often ended in spills. We also left one bottle uncleaned for too long, and it gummed up. So we went looking for a replacement, only to find that Black + Decker has created a new version of the Bartesian called Bev (with a lowercase b), and it’s awesome!

Let’s start with the fact that the old Bartesian held only four types of alcohol, so we had to swap the rum and gin bottles depending on the drink we were making. The new version holds five different bottles, and it uses the bottles the alcohol comes in, so you no longer have to clean the bottles; you just throw them out when they’re empty. Plus, it provides a sixth bottle for water that you can easily fill under the tap (don’t try to fill it from the refrigerator dispenser; you’ll find water all over the floor).

'Bev' On-Demand Cocktail Maker by Black+Decker

‘Bev’ on-demand cocktail maker (Image Credit: Bartesian)


The unit has lights under the bottles that illuminate as drinks are made, or cycle through colors while it sits unused, making an impressive presentation in your kitchen or bar. Whereas the old Bartesian had a display that walked you through making a drink, the Bev has five buttons: the first four select drink size, and the fifth starts the mixing process, which is much quicker and more fun to watch.

The Bev uses the same pods as the old Bartesian but lacks a water chiller, so you’ll need a supply of ice. But the result looks better, is far less messy (the old Bartesian would leak from time to time when being filled), and so far it has worked flawlessly.

On a hot day, and we’re getting a lot of them, a cold rum punch is a great way to end the day, and sitting outside with a chilled cocktail on the weekend helps make it all worthwhile.

Priced at around $300, the new Bev by Black + Decker is my product of the week. Cheers!
