
I’m still working through material from Nvidia’s GTC conference last month, and two announcements that didn’t get much coverage jumped out at me.

One was how Nvidia’s AI achieved a world record in route optimization. The second was the third-generation OVX computing system that forms the foundation of the industrial metaverse.

Taken together, they’ll help automate everything around us — including our Amazon deliveries — and make drone deliveries more likely.

Let’s talk about this Nvidia-powered future, and we’ll end with our Product of the Week, Medibus, a joint project between Cisco and Germany to provide health care to people in rural areas with poor access to medical services.

Route optimization – why it matters

According to Pitney Bowes, last-mile delivery is where a large portion of the cost of shipping a product is incurred—about 40%. But that’s a big problem for drone delivery because drones have a finite range, and we’re a long way from having a critical mass of them in service.

Especially as we start to shift last-mile delivery from humans to AI, the ability to better manage routes – accounting for things like prevailing winds, weather events, potential bird strikes and other obstacles – becomes critical to ensuring that your package arrives on time and relatively unscathed.


If you add the range limitations of both land- and air-based drones, the problem becomes even more pressing, since solving it reduces lost drones and late packages. Nvidia has amassed huge amounts of road data as it has become a leader in autonomous driving technology, and it is putting that knowledge to work in its route optimization solutions.

A technology Nvidia developed called cuOpt can analyze billions of movements per second and account for a range of environmental issues to help ensure on-time delivery of a package, whether it’s a toy for your child or pizza for your dinner.

Using Nvidia’s supercomputing hardware, such as the A100 Tensor Core GPU, cuOpt can analyze massive amounts of data to generate the most efficient routes in real time – which is how it set the world record.
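To make the problem concrete, here is a deliberately tiny, hypothetical sketch of route ordering. It is not cuOpt’s API: the real solver runs GPU-parallel search over billions of candidate moves and handles constraints like vehicle capacity, time windows and weather, while this greedy nearest-neighbor pass only shows the shape of the problem.

```python
# Hypothetical illustration only: cuOpt exposes GPU-accelerated solver APIs,
# but this tiny nearest-neighbor heuristic shows the class of problem it
# tackles -- ordering delivery stops to reduce total travel distance.
import math

def nearest_neighbor_route(depot, stops):
    """Greedily visit the closest unvisited stop; returns the visit order."""
    route, current = [], depot
    remaining = list(stops)
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(current, p))
        route.append(nxt)
        remaining.remove(nxt)
        current = nxt
    return route

# Example: a depot and four delivery points (x, y in arbitrary units).
print(nearest_neighbor_route((0, 0), [(2, 3), (5, 1), (1, 1), (4, 4)]))
```

Even this toy version makes clear why the search space explodes as stops and constraints are added, and why doing it well in real time takes serious compute.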

OVX Computing System Makes the Industrial Metaverse Real

The other announcement at GTC that didn’t get much coverage was the release of the massive third-generation OVX platform that will host Nvidia’s metaverse product, Omniverse.

Unlike Facebook’s largely failed consumer effort, the industrial metaverse is much more complex and is already in production at large plant sites as part of design and management. Every piece of equipment has a fully functional digital twin, and even the humans on the plant floor can have digital twins to ensure their safety and improve their efficiency.

Done right, the machinery digital twins are so complete that even the machines’ internals are digitized through sensors synchronized with their twins in the metaverse.


Using a variety of cutting-edge technologies, including BlueField 3 DPUs, Nvidia L40 GPUs, ConnectX-7 smart NICs, and its Spectrum Ethernet platform, Nvidia has created the Metaverse computer for those who want an on-premises Metaverse solution.

The effort is also somewhat tied to Nvidia’s autonomous driving efforts as it potentially connects and digitizes robots and trains that deliver to sites where the technology is already implemented. Partnering with Dell Technologies, Nvidia plans to make this technology available later this year.

Wrapping Up

Nvidia announced two under-covered technologies at its GTC conference this year. One focuses on getting goods to you cheaply and efficiently, and it anticipates drone delivery. The second automates the construction and operation of large facilities. The two technologies work together to make the result viable and far more efficient.

Some of these technologies are already in use by Digitale Schiene Deutschland and PepsiCo. While fully automated, metaverse-coupled factories and drone deliveries are not yet in full production, they are on their way. Nvidia is, and will be, shipping the platforms that will make all this work.

Tech Product of the Week

Medibus

One of the most impressive initiatives from any company is Cisco’s Country Digital Acceleration (CDA) program, which is run by Guy Diedrich.

Cisco created the CDA program to help countries digitize. It came to the fore during the pandemic, when politicians, bureaucrats, teachers and students transitioned to working from home, enabling governments and schools to keep operating. Cisco’s CDA efforts were critical in helping children attend school and politicians govern remotely.

CDA’s latest effort, Medibus, is just as fascinating. Working with the German government to address the lack of medical care in remote and often economically distressed locations, the two organizations acquired some buses and converted them into mobile health care clinics.

When war broke out in Ukraine, Cisco persuaded Germany to lend some of these buses to the refugee effort on Ukraine’s borders to bring refugees the health care they needed.

While the service was initially created to help German citizens who live in remote areas where health care professionals are scarce (there is a severe shortage of rural medical clinics in Germany), it has also been used for disaster relief, corporate health care efforts and, lately, to help Ukrainian refugees.

Built on Cisco’s networking and communications technology, each bus carries three people: a driver, a nurse and a doctor linked to multiple remote resources that can be brought in to consult or assist with a procedure as needed.

The effort anticipates a future where these vehicles may be fully automated, using Nvidia route optimization and metaverse solutions to ensure their on-time arrival. Making them autonomous would also help overcome staffing limitations.

Not only does Medibus demonstrate a potential future for medical care in rural areas, disaster recovery, and other remote medical needs such as vaccinations, it could also lead to a roaming, autonomous medical system that ensures care even when doctors are in short supply. As a result, the Cisco Medibus is my product of the week.

Nvidia’s GTC conference is over, but if you want to see what’s coming with AI, including generative AI like ChatGPT, robotics, autonomous electric cars and the metaverse, it’s worth watching the keynote by CEO Jensen Huang.

A large part of Nvidia’s success comes from its work on the metaverse, which comes at a time when Facebook, now renamed Meta, has largely failed to bring a successful metaverse product to market.

Let’s take a look at why Nvidia’s metaverse effort has been wildly successful, while Facebook’s turned out to be one of the costliest failures in tech history. We’ll end with our product of the week, a Chromebook from HP that just might be the best Chromebook ever.

Nvidia’s Metaverse Success

Nvidia has been working on elements of the Metaverse for about 28 years. It has focused almost exclusively on the commercial market as the business sector would derive significant financial benefits from the Metaverse. Not only is the commercial market more willing to pay for an expensive device, but the resulting potential savings will also substantially offset the initial high price of any new technology.

After all, corporate purchases drove PC volumes to begin with. Due to their initially high cost, the consumer market for PCs did not emerge until much later. Microsoft did something similar with Lawrence Livermore National Laboratory and HoloLens, which initially allowed it to outperform peers like Google Glass.

Nvidia – whose metaverse tool is called Omniverse – also quickly realized that it could not build its metaverse alone, so it partnered with several companies to build both the specialized workstations and servers and the critical services required to deploy the results.


Whenever Nvidia talks about its Omniverse success, the conversation involves the vast number of partners that were, are and will be necessary to ensure positive results for an offering that is so highly integrated with the real world.

Mainly through its GTC events, Nvidia has fueled interest and training in this area. Over time, it has built a comprehensive set of tools that help developers create their own metaverse instances and populate them with content. Regarding that content, Nvidia has championed a universal scene description language to enable high-speed creation of the virtual objects needed to fulfill its metaverse vision.

Facebook’s Metaverse Failure

Facebook didn’t really start to ramp into the metaverse until 2019, nearly 25 years after Nvidia began its effort. Facebook has focused more on consumers than businesses with its approach. Consumers are very cost- and content-oriented. You can deploy a corporate tool with only a few uses, but consumers want value and breadth, and unlike businesses, they can’t offset the cost of the product with cost savings, at least not in this area.

To be successful, Facebook would need to be more comprehensive in terms of content, cheaper in terms of hardware and related services, and better than Nvidia in terms of ease of use, because consumers are far less forgiving than people who interact with the technology as part of their job.

Facebook mostly tried to go it alone and quickly incurred the exorbitant costs associated with building a metaverse, which appeared to drive down Facebook’s valuation and eventually led to mass layoffs.

The company demonstrated that the cost of creating a new market is too high for any company to go it alone, even one that was once as profitable as Facebook. You need partners, developers, and others to help cover development costs because no single company has the resources or money needed to build an ecosystem, and the metaverse requires a deep ecosystem.


Since Facebook is primarily funded through advertising, it should be a marketing specialist, but it is not. It hasn’t been able to create demand for its own products, which should be a huge red flag for advertisers because it suggests Facebook isn’t good at marketing. It is like a toolmaker who never uses the tools he makes.

This lack of capability not only crippled efforts like Facebook’s metaverse, but it also hurt related efforts like its VR headsets. Having what amounts to a marketing superpower but not understanding how or when to use it would be uniquely silly of Facebook if it weren’t for the fact that Google has the exact same problem.

Companies that don’t use their own technology aren’t anything new, and they usually underperform, even when, as here, they make crazy profits.

Wrapping Up

So, Nvidia was successful, and Facebook/Meta was not. Nvidia worked on the effort for decades, building a strong and deep partner ecosystem encompassing all aspects of the product, co-developing with the customers who would use it, and using it heavily during the development process itself. Thus, when Omniverse came out, it was a winner because the company had rigorously developed a foundation for that success.

Facebook’s failure resulted from the company trying to go too fast and too alone. It never seemed to offer a product that would be acceptable to its consumer audience, development costs overwhelmed the company’s resources, and it seemed to lose track of its destination.

Launching a marketplace isn’t quick or easy. It may seem that way in hindsight, but it takes decades of work to ultimately ensure success. It took Nvidia time, effort, and an ecosystem-building strategy to arrive at its massive success. Facebook skipped those steps and, even though it arguably had a better sense of what a consumer-oriented metaverse needed, it failed to execute.

Comparing the two companies shows the importance of a long-term strategic plan, partners, and a clear idea of where you want to end up. It also shows that for technologies like the metaverse, the commercial market is a far better place to start than the consumer market.

Tech Product of the Week

HP Dragonfly Pro Chromebook

Last week I talked about the HP Dragonfly Pro Windows notebook, but today, I want to talk about its counterpart, the HP Dragonfly Pro Chromebook.

This Chromebook is arguably the successor to the older Google Pixelbook, which didn’t sell well, but focused on providing a premium Chromebook for those who wanted a more Apple-like experience, but with ChromeOS, not macOS.

Created in close collaboration between Google and Intel, this Chromebook is a one-of-a-kind offering. It’s an Intel Evo device, which means fewer problems and higher reliability thanks to the extra quality-control steps Evo certification promises.

The HP Dragonfly Pro Chromebook in Sparkling Black features a 14″ touch display, 16GB of memory, and a 256GB SSD. (Image credit: HP)


Externally, the sparkling black colored Chromebook looks almost identical to the Dragonfly Pro Windows product we covered last week. It’s got a similar finish, built with a heavy focus on sustainability, as well as:

  • long battery life;
  • good performance – although the Windows AMD-based offering has more power;
  • a high-quality, backlit keyboard;
  • fingerprint recognition;
  • a 1,200-nit outdoor-viewable display; and
  • the same new high-performance charger seen on the Windows laptop. (Be aware that these chargers work poorly on airplanes, so you’ll want a three-prong extension cord for in-flight use.)

HP’s Dragonfly Pro Chromebook has longer battery life and a far brighter display than its Windows peer, but lacks facial recognition, which is common in most mid- to high-end Windows laptops.

This device is for those who really like the ChromeOS experience but are tired of the cheap hardware that surrounds that platform. As a result, the HP Dragonfly Pro Chromebook, with a list price of $999.99, is my product of the week.

The views expressed in this article are those of the author and do not necessarily reflect the views of ECT News Network.

Last week, Mercedes-Benz held a special event covering the future of Mercedes cars and vehicles. Mercedes-Benz is one of the world’s great automotive manufacturers. Renowned for quality, performance, design and technology, Mercedes often stands above its peers. While many people in America believe that Henry Ford created the first automobile, it was actually Karl Benz of Mercedes-Benz who did so.

I have owned two Mercedes, regretted both, and vowed never to buy another. But now, after hearing what Mercedes announced, I think it’s time to reconsider that position, because what I heard potentially solves both of the problems I had with the company. Those issues had nothing to do with the cars themselves (which were awesome) but everything to do with my relationship with the company.

With Nvidia and Google, Mercedes-Benz is building a truly smart car that will create a relationship with the car owner and potentially a far more friendly front-end to the company than people currently have. It promises a relationship with your vehicle that is like a relationship with a well-trained pet and could develop into a long-term friendship.

Let’s talk about how automotive AI should improve our relationships with our cars and their makers. Then we’ll close with our Product of the Week, a prototype car from Audi, another Nvidia partner, that promises more amazing progress.

My Strange Mercedes-Benz Relationship

My two Mercedes-Benz cars were amazing in their own ways. The first was the ML 320, one of the first Mercedes trucks made in the US, and thankfully it was a tank. I was in three accidents while I owned it, none of which were my fault. In the third, I was sandwiched between two brand-new Jeeps when a speeding Mazda RX-7 hit me at a stoplight. The collision was enough to crush both Jeeps, but my truck only needed bumper repairs.

The problem that occurred had nothing to do with my truck. Instead, the problem was with the service advisor I was assigned, who apparently believed I was too poor and low-class to own a Mercedes-Benz, something he made clear every time I came in for service. That experience soured me on the brand, even though it was a dealer relationship problem and had nothing to do with the car.

My other Mercedes, a GLA 45 AMG, was more recent and a great car too. It was super powerful and, with one exception, the best track car I’ve ever owned. But several things happened that soured me on Mercedes-Benz again.


There was one option I made sure I ordered because it was important to me: the HomeLink garage door control. But between when I ordered the car and when it arrived, the company bundled that option into a package that hadn’t existed at the time of ordering and removed the standalone option without notifying me.

When the car arrived without it, I was offered the chance to have it installed by the service department for about 10 times what it would have cost when ordering. This was all on top of taking delivery of the car in Germany and being treated so poorly during the process that I almost returned the car and asked for my money back.

An additional problem was that the car had an automatic parking brake. This is problematic for a track car because when you stop and shut off the car, the brake sets, and after a track run, when the brakes are white hot, the pads adhere to the rotors, which requires a brake job. That isn’t cheap, and the car runs badly until it is done.

AMG had a monthly event where you could call in and ask questions. When I called to inquire about this brake problem, I was again treated very poorly, with the implication that I should just suck it up and sit in the car until the brakes cooled down (15-20 minutes) because there was not, and never would be, a solution.

In both cases, it was not the car’s fault, but the people at Mercedes-Benz, who didn’t take customer service seriously enough for a luxury car. While my bad experiences were with Mercedes-Benz, I know people who have complained about every luxury vehicle with the possible exception of Rolls-Royce.

Instead of dealing with people, what if your car were intelligent enough to be your interface to the company? It could adapt itself to your unique needs and even act proactively to potentially save your life.

The next generation of smart Mercedes-Benz cars

I consider myself a car guy. Like many of you, I anthropomorphize my vehicles, though not as much now as I did when I was young. Until this next generation, cars have not been intelligent, and they have been disappointing friends.

Working with Nvidia and Google, Mercedes-Benz is using technologies like generative AI and Nvidia’s Omniverse to make cars smarter: cars that can talk to you, communicate more precisely what they need, report needed repairs, and advocate for you with Mercedes, in an impressive effort to make you more loyal to the brand and more engaged with your increasingly autonomous vehicle.

From how the next generation of factories are built to how cars are built, Nvidia’s Omniverse will be used to simulate factories, vehicles, lines of different models and the robots and workers who will build and maintain them.

When you order a car, you’ll be able to better track when you’ll receive it and get timely notice if a package changes, so you can adjust your order to make sure it includes what you want, or take advantage of options made available after you placed the order.

If there is a problem with the car, instead of searching the manual or calling the dealership, the vehicle will be able to quickly explain the situation and what you need to do to fix it. That last part can be incredibly important if the problem occurs hundreds of miles from the dealership.

Entertainment, driving options, seating position, ambient lighting, and massaging seats will be set to your preferences when the car recognizes you — which can be done via your phone. Those settings will be able to move from car to car if you stick with the Mercedes-Benz brand.

The cars will have game consoles and will increasingly be able to drive themselves, giving you time to access multiple entertainment options via massive displays. Think of the car as your own rolling home theater/gaming chair that will chew up the time on long journeys or keep you entertained and occupied when stuck in traffic.

You should also be able to make videoconferencing calls in the car at some point, allowing you to participate in meetings even before you get to the office (assuming you ever go to the office).

The holistic application of this technology should lead to better cars and lower costs, which should translate into lower prices, better customer service and a better relationship among the car, the owner and the company. But how would this have changed my two bad experiences with Mercedes-Benz?

Wrapping Up: Automotive AI To The Rescue

With embedded AI, the ML 320 situation I talked about earlier would have played out like this instead:

When I pick up the vehicle and am clearly upset with the way I’ve been treated, the car recognizes I’m upset and asks, “What’s wrong?” I then explain to the embedded generative AI that the service advisor is treating my wife and me poorly and that we are very upset about it.

Next thing I know, I get a call from Mercedes-Benz indicating they’ll take care of it. By the next day, I have a coffee offer with a new service advisor and the head of the dealer’s service department, who tells me to call him if I have any other problems.

Instead of not wanting another Mercedes-Benz, I am now impressed by the level of service and more loyal to the brand as a result. I’m not blowing smoke here. I once did a survey about Dell and Sony and found that even though Sony made better PCs than Dell at the time, people were more loyal to Dell because Dell treated them very well when they had problems, and Sony didn’t.


In the GLA 45’s case, I would have been told that HomeLink had been dropped before the car arrived, I would have been able to make the vehicle aware that I was being treated poorly, and again Mercedes would have been able to address the problem before I soured on the brand.

More importantly, the car could self-fix the problem with the parking brake and report it back to Mercedes-Benz so other track drivers could benefit from my feedback.

Essentially, instead of never wanting another Mercedes, I’d never want to buy anything else, which is the true advantage. Customer churn is a huge expense in any industry. For a car company with Mercedes-Benz’s reputation for quality, improvements in customer engagement and treatment could vastly improve perceived product quality and customer loyalty.

I expect people to want the ability to carry the personality they developed in their old Mercedes-Benz into a new car. Otherwise, they may become so attached to the vehicle that they may never want to get rid of it.

I’ve been a fan of TV shows like “Knight Rider” and “My Mother the Car,” so I look forward to the day when I can have a deeper relationship with my automobile.

Tech Product of the Week

Audi ActiveSphere Concept

Rarely do cars make it from their prototype form to final production, but one recent prototype caught my eye: the Audi ActiveSphere concept.

Audi ActiveSphere Concept | Image credit: Audi

I live in Bend, Oregon, where the weather can go from sunny and warm to icy and dangerous in a single day. I love sports cars, and while my wife’s Jaguar F-Type was scary-dangerous when it was cold, it was one of the most fun cars I’ve ever owned.

I want a car that embodies the concept of a sports car but can, at the push of a button, transform or expand into an off-road vehicle that I can take on Costco runs or use to carry bikes into the hills. Like Mercedes-Benz, Audi is working with Nvidia to provide the same capabilities I mentioned above.

With top-notch performance, track capability, and the ability to transform into an off-road or pickup-truck-like vehicle at the touch of a button (or even automatically as conditions change), Audi’s ActiveSphere is designed for the place I live and how I would like to enjoy my next automobile. Of course, it’s electric; all the advancements to come are in electric cars as the automotive industry goes electric.

The Audi ActiveSphere concept captured my imagination like no other car has before. It’s very attractive, sums up what every other car I’ve owned lacked, and has the entertainment and self-driving capabilities I’ve always wanted but couldn’t get. The only question is whether I can afford it.

Mercedes-Benz is showcasing the personality of the car I want next, and Audi is showcasing the design that meets my needs. With luck, Audi will offer a combination of both, which would open my wallet. I have owned three Audis (two TTs and an S5 Cabriolet) and loved them all. The ActiveSphere might become my fourth, so it’s my product of the week.

The opinions expressed in this article are those of the author and do not necessarily reflect the views of ECT News Network.

Robots have been around for decades, but they’ve mostly been stupid. They were either controlled remotely by humans or ran fixed scripts that allowed almost no latitude in terms of how they operated or what they did. Even though I was born in the year Robbie the Robot became famous, the robots I grew up with were nothing like Robbie. They were as smart as old toasters.

Luckily, that’s changing. Robotics has advanced tremendously over the past decade, partly thanks to the pioneering work Nvidia has done with autonomous vehicles, much of which has translated into autonomous robotics. At CES this year, Nvidia was behind many of the top robots, starting with a robotic tractor from John Deere and ending with the GlüxKind, an AI-powered baby stroller I want to buy for my aging dog.

Let’s talk about the Nvidia-powered robots at CES this week. We’ll end with our product of the week, a wireless microphone that just might keep me from getting into a fight on my next air trip.

John Deere Autonomous Tractor

John Deere won the Best of Innovation award for its robotic tractor.

John Deere fully autonomous tractor | Image credit: Deere & Company


I grew up working on a farm. Driving a tractor was fun in the beginning, but it got tiresome very quickly.

The repetition of driving long rows in the heat was broken only by the excitement of an equipment failure or the prospect of a horrific death should I fall asleep and fall off the tractor. That actually happened years later to my division head at IBM, who died after falling from his tractor into a plow.

John Deere’s autonomous tractors won’t get bored, won’t tire and won’t die, so farmers can focus on other jobs that need to be done, given that staffing farms has become a big problem lately. Historically, robots weren’t cost-effective on farms because labor was cheap. But now you can’t find those workers, and it’s just as difficult to get local people to work on a farm.

Therefore, if farmers want to continue operating, they need to automate, suggesting that the farms of the future may be run largely by increasingly intelligent robots and robotic equipment. This tractor could be key to ensuring there’s food on our tables in the future.

Agrist Harvesting Robot

Another robot came from Agrist. I am not a fan of it, mainly because it was made to harvest bell peppers, and bell peppers trigger my gag reflex. Just the smell of the things makes me feel sick. Still, if I had to harvest bell peppers (apparently one of my concepts of hell), I’d appreciate a robot like this one that kept my hands, nose, and tongue away from the horrible stuff.

Sometimes you have to grow things you don’t like, and this robot would assure me that if I still had a farm (which thankfully I don’t) I could grow and harvest bell peppers without ever getting close to the things.

Seriously, this robot is designed to work in indoor factory farms, which will be critical to the survival of countries that will be badly affected by climate change and lose the ability to farm as a result. Such robots will be crucial to sustaining humanity as the climate makes outdoor agriculture obsolete.

Skydio Scout Drone

Drones were also represented, with Skydio standing out for its Scout drone.

Skydio is a fascinating drone company. It also has a docked drone solution that reminds me of the old Green Hornet TV show. Can you imagine putting one of these on your car so you can check what’s causing the traffic jam you’re stuck in? Or imagine a police officer in a high-speed chase being able to launch one of these and have it autonomously and covertly pursue the suspect, saving the officer from risking life and limb chasing them by car.

Skydio drones are used in law enforcement, fire and rescue, power line inspection, construction, transportation, telecommunications, and defense.

Skydio is a powerful company with an increasingly powerful set of autonomous products that could one day save your life, making it potentially one of the most important products launched at CES this year.

GlüxKind ‘Ella’ AI-Powered Stroller

I was looking for a powered stroller for my aging dog a few weeks ago. When the dog gets tired of walking, we put him in a stroller, but pushing it up hills gets old. When my wife walks all three of our dogs alone, managing the stroller at the same time becomes exhausting and potentially unsafe.

When empty, the GlüxKind Ella stroller will follow you (I don’t want to imagine it running away with a baby inside). When occupied, it’s battery-assisted for going up hills, where my wife often struggles (I’m currently her go-to solution for going up hills).

Glüxkind was honored with a CES 2023 Innovation Award for its “Ella” smart stroller. | Image credit: Glüxkind Technologies


Sadly its current configuration won’t work for my dog. Otherwise, I would probably have ordered one. But trying to teach a 14-year-old dog to sit like a kid in a stroller is a non-starter, although it does shock others a bit when we walk by. Still, for parents with multiple children or those looking to walk their dog and child at the same time, this powered stroller could be a winner.

Now, if they would just come out with a pet configuration, I’d be all in.

Neubility Delivery Robot

Neubility’s self-driving robot, named Neubie, is one of the newer delivery robots to hit the market.

I’m a little worried about this class of robot. In tests, children and some adults often abused and broke these robots when they were in use. The Neubie looks bigger and more robust than what I’ve seen before, but I imagine it may need some sort of defense or high-speed escape capability to work in the real world.

Onboard cameras should capture and record anyone who damages it, but it may take a while before people leave the thing alone to do its job. For this reason, Neubility is smartly targeting golf courses, where the robots can be better protected. Places such as resorts, hospitals and factories would be where such robots could operate most successfully.

I’ll wait to see if they develop one with a built-in taser before putting too much faith in delivery robots outside controlled environments like golf courses and resorts.

Still, once accepted and protected, robots of this class will likely make home delivery by humans a thing of the past, do a better job of ensuring you’re home to receive the delivery, and make life very difficult for porch pirates, whom I hate with a newfound obsession after this past Christmas.

Seoul Robotics LV5 Control Tower for Autonomous Parking

Seoul Robotics demonstrated a Level 5 control tower, a departure from the way autonomous cars are currently configured. It uses infrastructure outside the vehicle to manage the automobile, potentially enabling any current-generation car with Level 2 technology that is connected to that grid to operate autonomously.

This approach is interesting because, rather than making each car autonomous on its own, it works more like an air traffic controller, monitoring all the cars in range and directing them from a central resource. Eventually, this technology could replace things like traffic lights, effectively moving them into the vehicle when a human is driving and making them invisible to people riding in autonomous cars.

Not only could this approach be much cheaper than putting this technology in every car, but it would also shift maintenance from the car owner to the government, which could maintain it better, although this is not always a given.

It could also help prevent catastrophic problems and allow older cars to interoperate better with newer autonomous vehicles, while providing a viable, low-cost upgrade path for owners of recent cars that lack autonomous functions. This is arguably the most innovative approach to the autonomous car problem I’ve seen, and I’m thrilled by it.

Whill Autonomous Wheelchair

Finally, Whill presented its autonomous wheelchair designed for people with limited mobility and vision. Older or partially disabled people who cannot see well are heavily dependent on others because the white cane approach does not work in a wheelchair.

Winner of the Best of Innovation Award in the Accessibility category, this wheelchair features unique high-traction tires and a rear bin to hold packages or groceries. It sounds a bit like a science fiction movie.

With 12 miles of range, the ability to climb over 3-inch objects like curbs, and very high stability for rough roads, it could be ideal for aging seniors and those with vision and mobility problems. At 5.5 mph, it’s anything but blazing fast, but if you have mobility and vision problems, you probably don’t want blazing fast.

Weighing in at 250 pounds, it’s lighter than many motorized solutions for people with limited mobility, and its autonomous capability provides freedom that some people might not get any other way.

Wrapping Up

This list of robots at CES is by no means exhaustive, but I realized that Nvidia was the brains behind most of the robots I saw, so I thought I’d use that as the theme for this column. The autonomous robot revolution is just getting started; let’s hope we never go far enough to make the book “Robopocalypse” a reality.


Over the next decade, these efforts will unfold at an increasing pace, and Nvidia has placed itself at the heart (okay, maybe more at the brain) of them. In the end, we could all be like George Jetson and have a maid like Rosie who’s autonomous, robotic, and has just the right level of snark.

At CES, I saw my robotic future. I can’t wait until I have my own Rosie!

Tech Product of the Week

Mutalk VR Microphone

Before Christmas, I took my last trip of the year to New York. Before taking off on the return flight, I had to do a radio interview over the phone. While the person next to me was fine with it, the guy in front of me was not; I was apparently talking too loudly, because it looked like he was about to hit me. I have a trained media voice, and it carries.

Having a solution like this could be a lifesaver when doing these things, especially if we get to the point where we are making in-flight phone calls and don’t want to annoy, or accidentally entertain, everyone on the plane with our conversations – let alone accidentally share confidential or personal information.

The Mutalk VR microphone was one of two products launched at CES that can contain your voice when you speak.

I’m choosing the Mutalk because it was also designed to work with the VR rigs I play with, while the other product, the mask by Skyted, is larger and apparently designed primarily for in-flight use. To be honest, I’d be fine with either, and I have to admit that even a full Mutalk rig on a plane might be a bit much.

Finally, there’s something I can use for making calls in areas with a lot of ambient noise, or when speaking loudly might otherwise get me punched. So, the Mutalk leakage voice suppression microphone is my product of the week.

The opinions expressed in this article are those of the author and do not necessarily reflect the views of ECT News Network.

Nvidia announced significant updates to its Isaac SIM robotics simulation tool at the Consumer Electronics Show (CES) on Tuesday.

The Isaac SDK is the first open-source robotic AI development platform with simulation, navigation and manipulation. Development partners use its software tools to build and test virtual robots in realistic environments under various operating conditions. Now accessible from the cloud, Isaac Sim is built on Nvidia Omniverse, a platform for building and managing metaverse applications.

The demand for intelligent robots is increasing as more industries adopt automation to address supply chain challenges and labor force shortages. According to ABI Research, the installed base of industrial and commercial robots will grow more than 6.4 times from 3.1 million in 2020 to 20 million in 2030.

According to Gerard Andrews, product marketing manager for Nvidia’s robotics developer community, developing, validating and deploying these new AI-based robots requires simulation technology that places them in realistic scenarios.

Isaac Sim enables roboticists to import the robot model of their choice and fully utilize its software stack to create a realistic environment to validate the physical design of the robot and ensure performance. Users can generate synthetic datasets during simulations to train the robot’s AI models that are used in the robot’s perception system. Researchers can take advantage of the Reinforcement Learning API to train models in the robot’s control stack.
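Isaac Sim’s own Python APIs are not shown here, but the reinforcement learning workflow it supports follows a familiar simulate-act-observe loop. The sketch below is an assumption-laden stand-in using the generic, widely used Gymnasium interface and a random policy in place of a real control model; it illustrates the pattern, not Nvidia’s API.

```python
# Illustration only: a generic simulation/RL loop using the Gymnasium API.
# Isaac Sim's actual Python interfaces differ; the environment name below is
# just a stand-in for a simulated robot task.
import gymnasium as gym

env = gym.make("CartPole-v1")           # stand-in for a simulated robot task
obs, info = env.reset(seed=42)

for step in range(1000):
    action = env.action_space.sample()  # a trained policy would act here
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:         # episode over: reset the simulation
        obs, info = env.reset()

env.close()
```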

The latest version focuses on improving performance and functionality for manufacturing and logistics robotics use cases. The software now supports adding people and complex conveyor systems to the simulation environment, and more assets and popular robots are pre-integrated to reduce simulation lead times.

Release Highlights

Robot Operating System (ROS) developers get support for ROS 2 Humble and Windows. Robotics researchers gain many new capabilities aimed at advancing reinforcement learning, collaborative robot programming and robot learning.

Systems improvements focus on the needs of humans working with collaborative robots (cobots) or autonomous mobile robots (AMRs). Isaac Sim’s new people simulation capabilities add normal human-like behaviors to the simulation.

For example, developers can now add human characters to simulations of a warehouse or manufacturing facility, tasked with performing common behaviors such as stacking packages or pushing carts. Many of the most common behaviors are already supported using commands.

To reduce the gap between results observed in the real world and in the simulated world, physically accurate sensor models are essential. Nvidia’s RTX technology enables Isaac Sim to render physically accurate sensor data in real time. In the case of RTX-simulated lidar (light detection and ranging), ray tracing provides more accurate sensor data under different lighting conditions or in response to reflective materials.
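To show what “simulating a lidar” means at its most basic, here is a toy, purely illustrative 2D version: cast a ray from a sensor and measure the distance to a single circular obstacle. The real RTX pipeline traces millions of rays against full 3D scenes with material and lighting models; this sketch only captures the core geometry.

```python
# Toy 2D "lidar": distance from a sensor along a ray to one circular obstacle.
# Purely illustrative; not Nvidia's RTX lidar model.
import math

def ray_circle_distance(origin, angle, center, radius, max_range=50.0):
    """Distance along the ray to the circle, or max_range if it misses."""
    ox, oy = origin
    dx, dy = math.cos(angle), math.sin(angle)
    cx, cy = center[0] - ox, center[1] - oy
    proj = cx * dx + cy * dy                      # closest approach along ray
    closest_sq = cx * cx + cy * cy - proj * proj  # squared miss distance
    if proj < 0 or closest_sq > radius * radius:
        return max_range                          # ray points away or misses
    return proj - math.sqrt(radius * radius - closest_sq)

# One scan with 8 beams against an obstacle of radius 1 centered at (5, 0).
scan = [ray_circle_distance((0, 0), math.radians(a), (5.0, 0.0), 1.0)
        for a in range(0, 360, 45)]
print([round(d, 2) for d in scan])
```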

More tools for robotic researchers

Isaac Sim also provides a number of new simulation-ready 3D assets critical to creating physically accurate simulated environments. According to Nvidia, everything from warehouse parts to popular robots is ready, so developers and users can start building quickly.

Three new capabilities strengthen the toolset for robotics researchers:

  • Advances in Isaac Gym for reinforcement learning.
  • Isaac Cortex improvements for collaborative robot programming.
  • A new tool, Isaac Orbit, which provides simulation operating environments and benchmarks for robot learning and motion planning.

Isaac Sim supports the simulation of warehouse conveyors and people. (Image credit: Nvidia)


Expanded use of robotics underway

According to Nvidia, the robotics ecosystem is already spread across a range of industries from logistics and manufacturing to retail, energy, sustainable farming and more. Its Isaac robotics platform provides advanced AI and simulation software as well as accelerated computing capabilities to the robotics ecosystem. Over a million developers and over a thousand companies rely on one or more parts of it.

Samples of robotic operations include:

  • Telexistence has deployed beverage-restocking robots in 300 convenience stores in Japan.
  • To improve safety, Germany’s national railway company Deutsche Bahn trains AI models to handle important but unpredictable corner cases that rarely happen in the real world – such as luggage falling onto a train track.
  • Sarcos Robotics is developing robots to pick up and place solar panels in renewable energy installations.
  • Festo uses Isaac Cortex to simplify programming for cobots and transfer simulation skills to physical robots.
  • Fraunhofer is developing advanced AMRs using Isaac Sim’s physically accurate, full-fidelity visualization features.
  • Isaac Replicator is being used for flexible synthetic data generation to train AI models.

A lot of work is going into creating the next generation of the web. Much of it is centered on the idea that instead of traditional web pages, we will have a vastly different, far more immersive experience. Let’s call it “web 3D.”

I had the chance to speak with Nvidia CEO Jensen Huang, who shared his thoughts on web 3D. While it mixes in elements of the metaverse, it is tied more to the AI implementation that will propel the next generation of the web than to a simulation of reality bolted onto that new web.

Vague? You’re not alone. Let me try to clarify the concept.

Then we’ll look at my product of the week, a very different Amazon Kindle called the Scribe. It shows promise but needs some changes to be a great product.

AI propels the next generation web

Interestingly, I think Microsoft’s Halo game series got it right because Cortana, Microsoft’s fictional AI universal interface, is closest to what Huang hinted at about the future of the web.

In the game and TV series “Halo”, Cortana is the one that Master Chief talks to in order to access the technology around him. Sadly, even though prototypes like the one in this YouTube video have been created, Microsoft has yet to take Cortana to where it might be.

Right now, Cortana lags behind both Apple’s digital assistant and Google Assistant.

Huang believes that this AI front end will become a reality with the next generation of the web. You will be able to design your own AI interface or license an already created image and personality from various providers as they rise to the occasion.

For example, if you want the AI to look like your perfect boyfriend or girlfriend, you can describe what you want in an interface, and the AI, based on its training, will design one that looks like that.

Alternatively (and this is not mutually exclusive), it may design the interface based on your known interests, drawn from the cookies and web posts you generate during your life. Or you could choose a character from a movie, or an actor, likely for a recurring fee, who will, in character, become that personal interface.

Imagine having Black Widow or Thor as your personal guide to a world of information. They’ll behave just like they do in the “Avengers” movies and give you the information you’re looking for. Instead of viewing a web page, you’ll see your chosen digital assistant magically unfolding metaverse elements to address your questions.

Search in Metaverse Experience

Search as we know it will change too.

For example, when looking for a new car today, you visit various manufacturers’ websites and explore options. In the future, you could instead ask, “What car should I buy now?” Based on what the AI knows about you, or how you answer questions about your lifestyle, it will provide its recommendation and pull you into a metaverse experience where you virtually test-drive the car it thinks you’ll like.

During this virtual drive, it will present other options you might like, and you will be able to express your interest, or lack thereof, to arrive at a final choice. In the end, it will recommend where you should buy the car, adapting its approach to whether you value things like low prices or good service. These options will include both new and used offerings, based on what the AI knows about your preferences.

The time and effort spent on the project will be massively reduced, while your satisfaction, assuming the AI has accurate information, is maximized. Over time, this web 3D interface will become a companion and trusted friend, more than anything you’ve ever seen on the web.

Once it reaches critical mass, care must be taken to ensure that you are not compromised in favor of the interests of a political party, vendor or bad actor.

This last point is important. It may turn out that, instead of being free as today’s browsers are, the interface ends up being a paid service to ensure that no other entity can take advantage of your trust, since there will be ample opportunity to use this new interface against you. Ensuring that doesn’t happen should be more of a focus than it is at present.

Wrapping Up

According to Huang, the future of this front end – call it the next-generation browser – is an increasingly photorealistic avatar based on your personal preferences and interests; one that can behave in character when needed; and one that will offer more focused options and a far more personalized web experience.

Perhaps we should talk about the next generation of the web in terms of both its visual aspects, the 3D part, and its behavioral aspects, the “transhumanist web.” Something to noodle on this week.

Technical Product of the Week

Kindle Scribe

I’ve been using Kindles since they were first released. My first had both a keyboard and a free cellular connection.

They’ve proven to be interesting products when traveling, have all-day battery life, and perform better in the sun than LCD-based tablets or smartphones. Some are water resistant, allowing you to use them during water recreation activities. For example, when I swim on the river near my house, I’ll bring a water-resistant Kindle with me so I can read during the boring parts (for me, the whole float is the boring part).

But they’ve always been limited to reading books and some digital files (you can email .pdf files to Amazon for your Kindle). That just changed with the new Kindle Scribe. It’s similar in size to the 10-inch Amazon Fire tablet and allows you to mark up the documents and books you’re reading.

While the Kindle Scribe is still a reading-focused product, this latest version has optional pens that can be used to draw on or comment about the things you’re reviewing and, like most similar products, will let you make pictures if that interests you.

Kindle Scribe (Image credit: Amazon)


As with all Kindles, it has an e-paper display that works well in sunlight, and the larger size means you can more finely adjust the font to address vision problems, potentially removing the need for reading glasses for people with only minor vision loss.

The drawbacks limiting the product are that it doesn’t currently support magazine or newspaper subscriptions, it doesn’t play music (probably better left to your smartphone anyway), and, as with all e-paper technology, the refresh rate is too low for video. It currently doesn’t even do email.

It has a web browser, but that browser does not display web pages as intended. Instead, it lists stories vertically, like a smartphone with a smaller screen. In practice, I ran into many page-load problems. For example, I couldn’t bring up the Office 365 or Outlook websites.

Lastly, it doesn’t support handwriting conversion to text, making it less useful for note-taking than other products that have this functionality, but I expect it to improve as the product matures.

The person who will appreciate this product most is someone who wants a larger reading device and sometimes needs to mark up documents as part of an editing or review process. If you want a more capable tablet, the Amazon Fire tablet is one of the best values on the market, but it won’t work as well outdoors, nor does it come anywhere near the battery life the Kindle Scribe offers.

For the right person, the Kindle Scribe can be a godsend, but for most, the Amazon Fire tablet is likely the better overall choice. In any case, the new Kindle Scribe is my product of the week. At $339, it’s a good value that I expect will get better over time.

The Kindle Scribe will be released on November 30. You can pre-order it on Amazon now.

The opinions expressed in this article are those of the author and do not necessarily reflect the views of ECT News Network.

While Netscape didn’t invent the internet or HTML, it was the company that made the internet real. Netscape built on Tim Berners-Lee’s creation of HTML and was instrumental in turning it into something that would change the world.

Last week at Siggraph, Nvidia’s opening keynote identified Universal Scene Description (USD), developed by Pixar, a Disney subsidiary, as the HTML equivalent for the metaverse. Since Pixar wouldn’t exist without Steve Jobs, it’s like putting Pixar where Berners-Lee was and Nvidia where Netscape was, but unlike Netscape, Nvidia is very well run and knows how to pick its battles.
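To make the HTML analogy a little more concrete, here is a minimal sketch of what “authoring” a scene in USD looks like using Pixar’s open-source pxr Python bindings. The file name and scene contents are purely illustrative; real Omniverse scenes compose many such layers authored by different tools.

```python
# Illustration only: a tiny Universal Scene Description (USD) "document,"
# built with Pixar's open-source pxr Python bindings (pip package usd-core).
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateNew("hello_metaverse.usda")
world = UsdGeom.Xform.Define(stage, "/World")           # a transform "container"
sphere = UsdGeom.Sphere.Define(stage, "/World/Sphere")  # a simple geometric prim
sphere.GetRadiusAttr().Set(2.0)
stage.SetDefaultPrim(world.GetPrim())
stage.GetRootLayer().Save()                             # writes human-readable .usda
```

Much as an HTML page can be opened by any browser, a .usda file like this can be opened and extended by any USD-aware tool, which is the interoperability argument Nvidia is making.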

Nvidia also talked about the future of the metaverse, where avatars will become browser-like, creating a whole new level of human/machine interface. Nvidia also announced the concept of neural graphics, which relies heavily on AI to create more realistic metaverse graphical elements with far less work.

This week let’s talk more about what happened at Siggraph — and how Nvidia and Disney can, and should, demonstrate their strengths at the forefront of the Metaverse.

Then we’ll close with our product of the week, HP’s halo laptop, the Dragonfly, which has just entered its third generation. Halo products showcase the full capabilities of a vendor and draw people to the brand, and this one is well positioned against the best from Apple.

Metaverse and Disney

I’m a former Disney employee and I can’t think of any other company on the content side that would be a better base for building the Metaverse.

Disney has always been about fantasy and trying to make magic real. While the firm has had problems maintaining its innovative leadership over the years, it still outdraws its peers across all age groups, especially youth, in terms of physical, magical places to visit and film content.

The concept of the multiverse, as illustrated by the Marvel Universe, which Disney also owns, could easily become a metaverse creation, suggesting that as the metaverse moves into the consumer market, Disney could be an even more powerful driver of this new technology for entertainment.

That’s a long way to say that given its relationship with the USD and entertainment, Disney may be the best-positioned media company to take advantage of this new paradigm and turn its version of the metaverse into something truly amazing. Imagine the potential of Metaverse Disney parks that kids can enjoy from their homes during extreme weather events, pandemics or wars.

Nvidia’s One Metaverse Movement

Right now, the metaverse is a mess. Companies like Meta and Google appear to be creating walled-garden experiences, much as CompuServe and AOL did at the dawn of the internet, which the market ultimately did not want.

The reason those walled-garden efforts didn’t survive is that no single company can meet the needs of every user. Once they gave way to the open internet, the technology really took off, and AOL and CompuServe largely faded into history.

Nvidia CEO Jensen Huang is a big believer in the metaverse. He refers to it as Web 3.0 – the successor to Web 2.0 (the Internet as we know it today, with changes to the cloud and user-generated content). This concept of a generic metaverse, with elements that you can move on seamlessly, requires a great deal of standardization and advancements in physical interfaces like VR goggles.

Huang addressed this during the keynote, speaking of massive advances in headset technology that will eventually bring VR glasses much closer to the size and weight of reading glasses, making them less tedious and annoying. However, recalling our problems with 3D glasses, the industry will still need to address consumers’ overwhelming dislike of prosthetic interfaces if the effort is to reach its full potential.

One of the most interesting parts of the presentation was the concept of neural graphics, or graphics enhanced significantly by AI, which reduce the cost and increase the speed of scanning things in the real world and turning them into mirror images in the virtual world. At the event, Nvidia presented about 16 papers on neural graphics, two of which won awards.

Building on Pixar’s concept of Universal Scene Description, Huang explained how, once these virtual elements are created, they will be linked via AI to ensure that they remain in sync with the real world, enabling complex digital twins that can be used for extremely precise simulation for both business and entertainment purposes.

This made me wonder how long it will be before an avatar of Huang, rather than Huang himself, delivers the keynote. Given the progress in avatar realism and emotion, there will come a time when avatars are far better at such presentations than humans.

To that point, Huang introduced a technology called Audio2Face, which pairs a voice track with an avatar to create realistic facial expressions that convey emotion and are often indistinguishable from an actor’s performance.

To do this realistically, Nvidia mapped facial muscles and then let the AI learn how people manipulate those muscles for different emotions, along with the ability to edit those emotions after the fact. I have no doubt that the kids of tomorrow will have a lot of fun with this, and that it will create some deeply murky issues we will need to address in the future.

Audio2Face, MDL (a material definition language), and NeuralVDB, which can reduce volumetric data file sizes by up to 99%, together create a pattern of increased resolution and realism while reducing the overall cost and effort.

Back to Disney: This technology could allow the company to create more compelling streaming and movie theater content while reducing its production budget, which would be huge for its top and bottom lines.

Finally, Huang talked about a cloud publishing service for avatars called Omniverse ACE (Avatar Cloud Engine). This could open up a market for avatar creation, which in itself could become a highly profitable new tech industry.

Wrapping Up

With its stake in USD and content that spans every age group, Disney is in a unique position to benefit from our move into the metaverse.

However, the technology company to watch in this space is Nvidia, which is at the forefront of creating this Web 3.0 metaverse that will fast-forward the internet as we know it and provide us with amazing new experiences – and undoubtedly new problems we haven’t identified yet – much like the internet did.

In their respective fields, both Nvidia and Disney are forces of nature, and betting against either company has proven unwise. Together, they are creating a metaverse that will surprise, entertain and help solve global problems like climate change.

What is being built for the metaverse is simply amazing.

We are at the forefront of another technological revolution. Once done, the world will become a mixture of the real and the virtual and will be forever changed again.

Technical Product of the Week

HP Elite Dragonfly G3

Halo products are expensive and somewhat exclusive offerings that often show what a company can do, regardless of price.

The HP Elite Dragonfly G3 is the third generation of this Halo product, and it’s a relatively affordable showcase of HP’s laptop capabilities.

Lighter than most of its competitors, including the MacBook, sporting the latest 12th Gen Intel Core processors, and promising up to 22 hours of battery life for video playback, this 2.2-pound laptop is an impressive piece of kit.

HP Elite Dragonfly G3 Notebook

HP Elite Dragonfly G3 | image credit: HP


One interesting feature is a mechanical privacy shutter for the 5MP front-facing camera that is activated electronically from the keyboard.

The laptop comes in a unique Slate Blue finish, which I think looks awesome. This latest generation was designed for the hybrid world many of us now live in, where we mostly work from home but sometimes have to go into the office.

It has Wi-Fi 6E for better wireless connectivity and supports 5G WWAN for times when Wi-Fi is either too insecure or simply unavailable.

The Elite Dragonfly G3 has a 3:2 aspect ratio instead of the more typical widescreen display. Widescreen may be better for films, but 3:2 is better for work, and laptops in this class are expected to focus more on content creation than on entertainment. The taller screen also makes room for a large touchpad that includes a fingerprint reader for security.
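For a rough sense of what the taller ratio buys you, here is a quick back-of-the-envelope comparison (my arithmetic, not HP's spec sheet) of a 13.5-inch diagonal at 3:2 versus 16:9:

```python
# Compare panel dimensions for the same diagonal at two aspect ratios.
import math

def panel_dimensions(diagonal_in: float, ratio_w: int, ratio_h: int):
    """Return (width, height) in inches for a given diagonal and aspect ratio."""
    unit = diagonal_in / math.hypot(ratio_w, ratio_h)
    return ratio_w * unit, ratio_h * unit

for w, h in [(3, 2), (16, 9)]:
    width, height = panel_dimensions(13.5, w, h)
    print(f"{w}:{h} -> {width:.2f} x {height:.2f} in ({width * height:.1f} sq in)")
# 3:2  -> 11.23 x 7.49 in (84.1 sq in)
# 16:9 -> 11.77 x 6.62 in (77.9 sq in)
```

Same diagonal, but the 3:2 panel gives you nearly an inch more height and roughly 8% more area, which is exactly what documents and spreadsheets want.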

The ports on this unit, which has a 13.5-inch display, are surprisingly complete for one of the thinnest laptops I've tested. In addition to two USB-C Thunderbolt ports, it has a full-size USB port and a full-size HDMI port, both of which are unusual, though not unheard of, in a laptop this small and light.

HP Elite Dragonfly G3 ports

HP Elite Dragonfly G3 Right-Side Ports | image credit: HP


The product is relatively durable, using a magnesium/aluminum frame made largely from recycled metals and designed to be recycled again when the laptop reaches the end of its life.

Finally, with the optional HP Wolf Pro Security for those who want extra protection, it is potentially one of the most secure laptops in its class. Interestingly, starting at just $2,000, the Wolf Security Edition is also one of the more affordable.

I was at the launch of HP’s first Dragonfly laptop and I am very impressed with this offering which is my product of the week. I’m going to hate giving this laptop back.

The opinions expressed in this article are those of the author and do not necessarily reflect the views of ECT News Network.

Last week, I heard a podcast with Lucid Motors SVP Mike Bell (ex-Apple, ex-Rivian), who oversees the software in the Lucid Air and in the upcoming Lucid SUV, to be named Gravity and due in 2024. These cars are very different from everything else on the road, including Tesla's.

Given that Tesla, too, was heavily influenced by Apple, it will be interesting to see what difference Apple makes in this market if it finally announces its own electric car.

The direction of these three companies points the same way: a shift from the traditional practices of older carmakers to a model that looks far more like a tech firm's. Car companies like Lucid and Tesla are more like Apple than GM or Ford, which I imagine will eventually become a problem for GM and Ford.

I've also had updated briefings from Nvidia and Qualcomm on how they're tackling autonomous driving, which could be a complementary approach for the next generation of EVs.

Let's talk about the future of electric cars, and we'll close with our product of the week: an update to the Bartesian robotic bartender from, no joke, Black + Decker.

Nvidia and Qualcomm Vehicle Tech

Nvidia's Drive platform is heavily used in Lucid vehicles. It is a comprehensive suite of offerings that covers the autonomous-car technology stack, from concept and simulation to training to inference in the car. Much of its strength lies in Nvidia's Omniverse simulation capability, which is widely used across the automotive industry.

Qualcomm is focused more on the car side of this equation, with compelling in-car technology that, on paper, is both cheaper and better than other options.

Given that car companies are laser-focused on margins, you can see a world emerging where Nvidia owns much of the back end and control structure for autonomous cars, and where Qualcomm provides most cars with their self-driving capabilities. Qualcomm has demonstrated that its ability to keep smartphone costs down can translate into in-car solutions that do the same thing.

An increasingly likely future is one in which Nvidia supplies much of the autonomous-car back end while Qualcomm provides the multi-layered, in-vehicle computer vision technology.

car and driver conversation

I rode in the back of a Lucid car last week, and I must admit I found it fascinating. It really is like no other car on the road, and the performance specs and price are both amazing.

On performance, the top model delivers a 2.5-second 0-to-60 time, up to 1,111 hp, a top speed of 168 mph, and a range of 520 miles, which is class-leading at the moment. But that performance will cost you closer to $200K.
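To put the 2.5-second figure in perspective, here is a quick calculation (mine, not Lucid's) of the average acceleration it implies:

```python
# Average acceleration implied by a 2.5-second 0-to-60 mph run.
MPH_TO_MS = 0.44704          # metres per second per mph
G = 9.80665                  # standard gravity in m/s^2

v = 60 * MPH_TO_MS           # ~26.8 m/s
t = 2.5                      # seconds
a = v / t                    # ~10.7 m/s^2
print(f"{a:.1f} m/s^2, or about {a / G:.2f} g sustained for the whole run")
```

That works out to roughly 1.1 g held for the full two and a half seconds.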

Then again, you could argue that you're basically getting four cars for the price of one: a sports car, a family car, an off-road car, and a hauler (there's a huge amount of luggage space), all in one vehicle.

Lucid also demonstrates a change in thinking about the interaction between the car and its driver. Until now, the driver has had to learn the car. Every car is so different that most drivers may never learn how to use all of its features. For example, I've never been able to successfully use my car's self-parking capability.

The Lucid model learns how to work with you and learns your preferences, and that data can be transferred from car to car, so you never have to face the issue of being unable to properly use a feature you paid for.
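Lucid hasn't published how these preferences are stored or moved between vehicles, but as a purely hypothetical sketch, a portable driver profile might look something like the record below. Every field name here is my invention for illustration:

```python
# Hypothetical, illustrative driver profile that could travel between cars.
# None of these field names come from Lucid; they only show the idea of a
# portable preference record a new vehicle could import on first sign-in.
from dataclasses import dataclass, asdict
import json

@dataclass
class DriverProfile:
    driver_id: str
    seat_position: dict          # e.g. {"height_mm": 320, "recline_deg": 24}
    mirror_angles: dict          # e.g. {"left_deg": -12, "right_deg": 14}
    regen_braking_level: str     # "low" | "standard" | "high"
    preferred_drive_mode: str    # e.g. "smooth", "swift", "sprint"

profile = DriverProfile(
    driver_id="driver-001",
    seat_position={"height_mm": 320, "recline_deg": 24},
    mirror_angles={"left_deg": -12, "right_deg": 14},
    regen_braking_level="high",
    preferred_drive_mode="swift",
)

# Serialize so another vehicle (or the cloud) could restore the same setup.
print(json.dumps(asdict(profile), indent=2))
```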

upgrade after purchase

Lucid is at the forefront of offering a solution that is not only software-defined but also potentially easy to update and upgrade over time, which should keep its cars in service longer than would otherwise be the case.

I've grown frustrated with more traditional car companies because you can almost bet that right after you buy a new car, they'll introduce an upgrade that can't be retrofitted, and you'll have had no way of knowing it was coming.

For example, a few years ago I bought a Mercedes. Sometime between when I ordered the car and when it was delivered, the company moved one of the features I had ordered into another package that didn't exist when I placed my order. Because I hadn't selected that bundle (again, it wasn't even available at the time), they removed the feature from the car.

The only way to get it back was to pay three times what it would have cost before the change, because it was far more expensive to add the feature once the car was built.

Both Lucid and Tesla have demonstrated that they can do a better job of providing post-purchase upgrades to their cars. As the industry explores the concept of cars as a service, this ability to change a car's configuration after it leaves the factory not only opens the door to a stronger used-car opportunity for dealers but also to a longer, happier relationship with the cars we ultimately buy.

Instead of replacing a perfectly good car after three years because it has become outdated, imagine updating the vehicle so that it is almost as good as a new one.

Lucid's technology- and software-focused approach also means that many of these upgrades could come as part of the service, much like some of the more interesting improvements to Tesla cars over the years. Tesla is one of the few car companies whose drivers look forward to software updates because Tesla builds pleasant surprises into them, and Lucid is looking to overtake Tesla in this regard.

Part of why Lucid may be able to outpace Tesla is its use of Nvidia Drive, which gives a small car company a way to match or exceed the capabilities of larger firms by drawing on Nvidia's extensive resources. It really is a game changer.

wrapping up

As we move into the middle of the decade, our in-car experiences will change substantially, becoming not only more customizable for the buyer but also delivering a level of personalized, after-sales customization that hasn't been seen in the tech market, let alone the automotive market.

Once that happens, the technology market may need to adopt some of the automotive industry's advances to compete better in its own segments, because a product that adapts itself to the unique needs of its user is a competitive revolution.

It is difficult to see any customer, given the option, ever choosing the old-fashioned approach of forced learning and inflexibility in the increasingly smart personal technology, appliances, and vehicles they buy.

Companies like Lucid, Rivian, Tesla, Nvidia, and Qualcomm are pulling the automotive market, kicking and screaming, toward a future that is far more responsive to the needs of buyers. That's good news for our car-buying future, though it probably won't arrive until the latter half of the decade.

Technical Product of the Week

‘Bev’ by Black + Decker

We were among the first owners of the Bartesian robotic bartender, and we have enjoyed the product in the years since it came out.

However, it was annoying how the alcohol had to be loaded into the device, and filling the water reservoir was a pain that often ended in spills. We also left one of the liquor bottles uncleaned for too long, and it got gummed up. So we went looking for a replacement, only to find that Black + Decker has created a new version of the Bartesian called Bev (little b), and it's awesome!

Let's start with the fact that with the old Bartesian we had to swap the rum and gin bottles when making drinks because it only held four types of alcohol. The new version holds five different bottles, and it uses the bottles the alcohol comes in, so you no longer have to clean them; you just throw them out when they're empty. Plus, it provides a sixth bottle for water so you can easily fill it under the tap (don't try to fill it at the refrigerator dispenser; you'll find water all over the floor).

'Bev' On-Demand Cocktail Maker by Black+Decker

‘Bev’ on-demand cocktail maker (Image Credit: Bartesian)


The unit has lights under the bottles that illuminate as drinks are made, or that can cycle through colors while it sits unused, making an impressive presentation in your kitchen or bar. Whereas the old Bartesian had a display that walked you through making a drink, the Bev has five buttons. The first four set the drink size, and the final one starts the mixing process, which is much quicker and more fun to watch.

The Bev uses the same pods as the old Bartesian but lacks a water chiller, so you'll need a supply of ice. Still, the result is better looking, far less messy (the old Bartesian would leak from time to time when being filled), and so far it has worked flawlessly.

On a hot day, and we're getting a lot of them, a cold rum punch is a great way to end the day, and sitting outside with a chilled cocktail on the weekend helps make it all worthwhile.

Priced at around $300, the new Bev by Black + Decker is my product of the week. Cheers!

The opinions expressed in this article are those of the author and do not necessarily reflect the views of ECT News Network.