Many technology leaders agree that while AI can be hugely beneficial to humans, it can also be misused or, through negligence, harm humanity. But looking to governments to solve this problem without guidance would be foolish, because politicians often don’t understand the technology they’ve used for years, let alone something that’s just hitting the market.

As a result, when governments act to mitigate a problem, they may do more harm than good. For example, it was right to punish the old Standard Oil Company for its abuses, but breaking up the company shifted control of oil from the United States to parts of the world that are not friendly to America. The same could be said of consumer electronics, where similar actions shifted the market from the US to Japan.

The US has held onto technological leadership by the skin of its teeth, but there is no doubt in my mind that if governments act without guidance on how to regulate AI, they will shift the opportunity to China. That’s why Microsoft’s recent report titled “Governing AI: A Blueprint for the Future” is so important.

The Microsoft report defines the problem, outlines a reasonable path forward that won’t undermine US competitiveness, and addresses concerns surrounding AI.

Let’s talk about Microsoft’s blueprint for AI governance, and we’ll end with our Product of the Week, a new line of trackers that can help us keep track of the things we often have trouble finding.

EEOC Example

It is foolish to demand regulation without context. When a government reacts tactically to something it knows little about, it can do more harm than good. I opened with a couple of cautionary examples, but perhaps the ugliest example was the Equal Employment Opportunity Commission (EEOC).

Congress established the EEOC in 1964 to rapidly address the very real problem of racial discrimination in jobs. There were two basic causes of workplace discrimination. The most obvious was racial discrimination in the workplace that the EEOC could and did address. But an even bigger problem existed when it came to discrimination in education, which the EEOC didn’t address.

Businesses that had hired on merit, using methodologies the industry at the time had scientifically developed to reward employees with positions, raises, and promotions based on education and achievement, were asked to improve their diversity through programs that often hired undertrained minority candidates.

The system failed minorities by placing inexperienced people in jobs they weren’t well trained for, which only reinforced the belief that minorities were somehow inadequate when, in fact, they had not been given equal access to education and mentoring. This was true not only for people of color but also for women, regardless of color.

Looking back now, we can see that the EEOC didn’t really fix anything, but it did transform HR from an organization focused on caring for and nurturing employees into one focused on compliance, which often meant covering up employee issues rather than addressing them.

Brad Smith Steps Up

Microsoft President Brad Smith strikes me as one of the few technology leaders who thinks broadly. Instead of focusing almost exclusively on tactical responses to strategic problems, he thinks strategically.

Microsoft’s blueprint is a case in point. While most people go to the government and say, “You should do something,” which can lead to other long-term problems, Smith set out to define what he thinks a solution should look like, and he lays it out elegantly in a five-point plan.

He begins with a provocative statement, “Don’t ask what computers can do, ask what they should do,” which reminds me of John F. Kennedy’s famous line, “Ask not what your country can do for you; ask what you can do for your country.” Smith’s statement comes from a book he co-authored in 2019, and it has been called one of the defining questions of this generation.

This statement brings into context the importance and need of protecting human beings and makes us think about the implications of new technology to ensure that our use of it is beneficial and not harmful.

Smith goes on to argue that our priority should be using technology to improve the human condition, not just to reduce costs and increase revenue. Like IBM, which has undertaken a similar effort, Smith and Microsoft believe that technology should be used to improve people, not replace them.

He also, and this is very rare these days, talks about the need to anticipate where technology will be in the future so that we can address problems proactively and strategically rather than just respond to them. The need for transparency, accountability, and assurance that the technology is being used legally are all important to this effort and are well defined.

5-Point Blueprint Analysis

Smith’s first point is to implement and build on a government-led AI safety framework. Too often, governments fail to realize that they already have some of the tools needed to address a problem and waste a lot of time effectively reinventing the wheel.

Influential work has been done by the US National Institute of Standards and Technology (NIST) in the form of the AI Risk Management Framework (AI RMF). It’s a good, though incomplete, framework. Smith’s first point is to implement and build on it.

Smith’s second point is the need for effective safety brakes for AI systems that control critical infrastructure. If an AI controlling critical infrastructure gets derailed, it could cause massive damage or even mass death.

We must ensure that those systems undergo extensive testing and thorough human oversight and are tested against not only likely but also unlikely problem scenarios to make sure the AI doesn’t jump in and make things worse.

The government would define the classes of systems that require guardrails, provide direction on the nature of those protective measures, and require that the relevant systems be deployed only in data centers tested and licensed for such use.
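To make the “safety brake” idea concrete, here is a minimal sketch of the pattern: a hard-coded safe envelope that overrides whatever an AI controller proposes and hands control back to humans when a proposal falls outside it. All names, the valve example, and the thresholds are invented for illustration; a real deployment would derive its limits from the system’s certified operating envelope.

```python
from dataclasses import dataclass


@dataclass
class Action:
    """A control action proposed by an AI system (hypothetical example)."""
    valve_percent: float  # 0-100, how far to open a valve


class SafetyBrake:
    """Illustrative 'safety brake': fixed limits that override the AI.

    The thresholds here are placeholders, not real engineering values.
    """

    def __init__(self, min_pct: float = 0.0, max_pct: float = 80.0):
        self.min_pct = min_pct
        self.max_pct = max_pct
        self.tripped = False  # set when the AI proposes an unsafe action

    def check(self, action: Action) -> Action:
        # Clamp the AI's proposal into the safe envelope; trip the brake
        # (flagging the event for human oversight) if it was out of bounds.
        if not (self.min_pct <= action.valve_percent <= self.max_pct):
            self.tripped = True
            safe = min(max(action.valve_percent, self.min_pct), self.max_pct)
            return Action(valve_percent=safe)
        return action


brake = SafetyBrake()
clamped = brake.check(Action(valve_percent=120.0))  # AI proposes an unsafe value
print(clamped.valve_percent, brake.tripped)
```

The key design point, echoed in Smith’s proposal, is that the brake sits outside the AI: no matter what the model outputs, the deterministic layer has the final word.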

Smith’s third point is to develop a broad legal and regulatory framework for AI based on the technology’s architecture. AI is going to make mistakes. People may not like the decisions the AI makes even if they are correct, and people may blame the AI for things it had no control over.

In short, there will be a lot of litigation to come. Without a legal framework covering responsibility, rulings are likely to be varied and contradictory, making any resulting remedy difficult and very costly.

Thus, there is a need for a legal framework so that people understand their responsibilities, risks, and rights, can avoid future problems, and can find a quick legal remedy if a problem does arise. This alone could reduce what would otherwise become a massive litigation load, since AI currently has very little legal precedent behind it.

Smith’s fourth point is to promote transparency and ensure academic and nonprofit access to AI. It makes sense: how can you trust something you don’t fully understand? People don’t trust AI today, and without transparency, they won’t trust it tomorrow. In fact, I would argue that without transparency, you shouldn’t trust AI, because you can’t verify that it will do what you want.

In addition, we need academic access to AI to ensure that people entering the workforce understand how to use this technology properly, and to ensure that nonprofits, especially organizations focused on improving the human condition, have effective access to this technology for good.

Smith’s fifth point is to advance new public-private partnerships to use AI as an effective tool to address inevitable societal challenges. AI will have a massive impact on society, and ensuring that this impact is beneficial and not harmful will require focus and oversight.

He explains that AI may be a sword, but it can also be used effectively as a shield more powerful than any existing sword on the planet. It should be used everywhere to protect democracy and people’s fundamental rights.

Smith cites Ukraine as an example where the public and private sectors have come together effectively to create a powerful defense. He believes, as do I, that we must emulate Ukraine’s example to ensure that AI reaches its potential to help move the world toward a better tomorrow.

Finale: A Better Tomorrow

Microsoft isn’t just going to governments and asking them to act to solve a problem that governments don’t yet fully understand.

It is laying out a framework for that solution, one meant to assure that we mitigate the risks surrounding the use of AI and have the tools and remedies in place to address problems when they do occur, not the least of which is an emergency stop switch that allows a derailed AI program to be gracefully terminated.

Whether you’re a company or an individual, Microsoft is providing an excellent lesson here in how to lead the search for a solution to a problem, not just toss it at the government and ask it to fix it. Microsoft has outlined the problem and provided a well-thought-out solution so that the problem doesn’t become bigger than it already is.

Nicely done!

Tech Product of the Week

Pebblebee Trackers

Like most people, my wife and I often misplace stuff, which most often happens when we rush out of the house and put something down without thinking about where we put it.

Plus, we have three cats, which means regular trips to the vet to take care of them. Our cats have found unique and creative hiding places so they can’t be caught and crated. So, we use trackers like Tile and AirTag.

But the problem with AirTags is that they really only work if you have an iPhone, like my wife, which means she can track things, but I can’t because I have an Android phone. With Tiles, you must either replace the device when its battery dies or replace the battery, which is a pain. As a result, the battery is often dead when we need to find something.

The Pebblebee works like those other devices but differs in that it’s rechargeable and works with either Pebblebee’s app, which runs on both iOS and Android, or the native apps in those operating systems: Apple Find My and Google Find My Device. Sadly, it won’t do both at the same time, but at least you get a choice.


Pebblebee Trackers: Clips for keys, bags, and more; Tags for luggage, jackets, etc.; and Cards for wallets and other narrow spaces. (Image Credit: Pebblebee)

When you’re trying to locate the tracking device, it beeps and lights up, making it easier to find things at night and less like a bad game of Marco Polo. (I wish smoke detectors did this.)

Because the Pebblebee works with both Apple and Android and you can recharge the battery, it serves a personal need better than the Tile or Apple’s AirTag — and it’s my product of the week.

The opinions expressed in this article are those of the author and do not necessarily reflect the views of ECT News Network.

The FBI’s Denver office is warning consumers about using free public charging stations, saying bad actors can use USB ports at juice stops to introduce malware and surveillance software onto devices.

“Take your charger and USB cord and use an electrical outlet instead,” the agency recommended in a recent tweet.

“Juice jacking” has been around for a decade, though no one knows how widespread the practice has become.

“There’s been a lot of talk about it publicly, but not a lot has been caught in public,” said Brian Marcus, CEO of Aries Security, a security research and education company in Wilmington, Del. Marcus and partner Robert Rowley performed the first juice jacking demonstration in 2012.

“Juice jacking chargers are like ATM skimmers,” Marcus told TechNewsWorld. “You hear a lot about them but don’t necessarily see them.”

He explained that anyone who wanted to tamper with a legitimate power charging station could swap the station’s cable for a doctored one containing a chip that installs a remote access trojan, or backdoor, on a phone. The phone can then be attacked at any time over the internet.

“This is particularly prevalent with Android phones running older versions of the operating system,” Marcus said. “That’s why it’s important for users to keep their devices up to date.”

Differing Opinions

There seem to be conflicting opinions in the security community about the danger juice jacking poses to consumers.

“It’s not very common, in part because using a public charging station is not something people do very often,” said Bud Broomhead, CEO of Viakoo, a developer of cyber and physical security software solutions in Mountain View, Calif.

“However, if someone is using a charging system outside their control, the warning issued by the FBI should cause them to change their behavior, as cases are on the rise,” he told TechNewsWorld.

Aviram Jenik, president of Epona Security, a source code security company in Roseville, Calif., said that juice jacking is “extremely common.”

“We don’t have numbers because the devices are in places where people don’t stay for long periods of time, so it’s easy to put a bad device in and then take it out,” he told TechNewsWorld.

“This has been done for years, and the presence of malware-infected charging stations is almost routine,” he said.

“As charging becomes more and more sophisticated — meaning, data travels over the same cable as the charge — it will get worse,” he said. “When the target is of greater value — for example, an EV versus a mobile phone — the stakes will be higher.”

Jenik said another future development would be wireless charging, which would allow attackers to carry out an attack without the physical device used for the breach ever being seen.

Two-Way Comm Problem

Juice jacking is more likely to target persons of interest, such as politicians or intelligence agency employees, said Andrew Barratt, managing principal for solutions and investigations at Coalfire, a Westminster, Colo.-based provider of cybersecurity advisory services.

“For a juice jacking attack to be effective, it has to deliver a very sophisticated payload that can bypass common phone security measures,” he told TechNewsWorld.

“Frankly,” he continued, “I’d be more concerned about an outlet that’s been used so much it could damage my cord or the socket on my phone.”

Juice jacking uses USB technology for malicious purposes. “The problem is that USB ports allow two-way communication, not only for charging power but also for data transmission. That’s how your USB device can send pictures and other data when you plug it in,” explained Roger Grimes, a data-driven defense evangelist at KnowBe4, a security awareness training provider in Clearwater, Fla.

“USB ports were never designed to prevent advanced malicious commands from being sent over the data channel,” he told TechNewsWorld. “USB ports have had many security improvements over the years, but there are still additional avenues of attack, and most USB-enabled devices allow charging ports to be treated as an older version of the USB port standard, so some of the newer security features aren’t available.”

Will EVs Be Next?

JT Keating, senior vice president of strategic initiatives at Zimperium, a mobile security solutions provider in Dallas, cautioned consumers to be wary of free solutions billing themselves as “public” services.

“When hackers trick people into using their fake Wi-Fi networks and power stations, they can compromise devices, install malware and spyware, and steal data,” he told TechNewsWorld.

“This trend will continue and grow as more and more people connect to EV charging stations for their electric vehicles,” he continued. “By compromising an EV charging station, attackers can wreak havoc by stealing payment information or performing a variation of ransomware by disabling the stations and preventing charging.”

Coalfire’s Barratt said EV charging stations have been a concern for some time, though mostly around fee evasion or free use of the stations.

“Longer term,” he said, “I suspect there is a concern that we will continue to see more attacks against these chargers as the world transitions to EV chargers.”

“When we had public phones, there were attacks against them,” he continued. “Attacks regularly occur against ATMs and gas pumps. Anything where value is dispensed in an untraceable environment has potential for a cyber-enabled thief to take advantage of.”

Avoid Becoming a Victim of Juice Jacking

Ever since Marcus and Rowley introduced the world to juice jacking, conditions have improved for attackers. For example, wireless connectivity has been added to the charging port.

“When we first did this, we had a whole laptop hidden in the charging station, and it worked great,” Marcus said. “The amount of compute power needed to do the same thing is now much less.”

The FBI isn’t the only alphabet agency to sound the alarm about juice jacking. The FCC has warned consumers about the practice in the past as well. To avoid becoming a victim of juice jackers, it recommends:

  • Avoid using USB charging stations. Use an AC power outlet instead.
  • When traveling, bring your own AC adapter, car charger, and USB cable.
  • Carry a portable charger or external battery.
  • Consider carrying a charging-only cable, which prevents sending or receiving data while charging, from a trusted supplier.

Google opened its ChatGPT competitor Bard to the public in the United States and the United Kingdom on Tuesday, although entry will require a waiting list.

“Today we’re starting to open up access to Bard, an early experiment that lets you collaborate with generative AI,” wrote Sissie Hsiao, Google vice president of product, and Eli Collins, vice president of research, in a company blog.

They explained that Bard can be used to boost productivity, accelerate the generation of ideas, and increase curiosity.

“We’ve learned a lot testing Bard so far,” they said, “and the next important step in improving it is getting feedback from more people.”

While large language models are an exciting technology, they are not without their faults, the Google executives acknowledged. Because the models learn from a wide range of information that reflects real-world biases and stereotypes, those sometimes show up in their outputs, the pair continued. And the models can provide false, misleading, or inaccurate information while presenting it confidently.

“Our work on Bard is guided by our AI principles, and we continue to focus on quality and safety,” the pair said. “We’re using human feedback and ratings to improve our systems, and we’ve also built in guardrails, like capping the number of exchanges in a dialogue, to try to keep conversations helpful and on topic.”
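The exchange-capping guardrail Google describes is simple to picture in code. The sketch below is illustrative only; the class name, cap value, and stubbed model call are invented, since Google has not published Bard’s implementation.

```python
class ChatSession:
    """Minimal sketch of a dialogue guardrail: cap the number of exchanges.

    The model call is a stub; in a real system it would invoke the
    underlying large language model.
    """

    def __init__(self, max_turns: int = 5):
        self.max_turns = max_turns
        self.turns = 0

    def ask(self, prompt: str) -> str:
        if self.turns >= self.max_turns:
            # Past the cap, end the session rather than letting a long
            # conversation drift off-topic.
            return "This conversation has reached its limit. Please start a new chat."
        self.turns += 1
        return f"(model reply to: {prompt})"  # stand-in for a real LLM response


session = ChatSession(max_turns=2)
print(session.ask("hi"))     # answered normally
print(session.ask("more"))   # answered normally
print(session.ask("again"))  # refused: the cap has been reached
```

The rationale is that long chat sessions were observed (notably with early Bing Chat) to push models off-topic, so bounding the session length is a cheap, deterministic safeguard.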

Chasing Redmond

Since Google unveiled Bard to the world in February, the company has been trying to catch up with Microsoft, which is introducing AI features into its products at a rapid pace.

“Google is in a panic now that Microsoft has beaten them to market, and they’re hemorrhaging users, which puts them in a ‘ready or not, here it comes’ mindset,” said Rob Enderle, president and principal analyst of the Enderle Group, an advisory services firm in Bend, Ore.

“A short while ago, they were convinced it was nowhere close to being ready and appear to have reduced resources, so it is unlikely it is actually ready. But they now need a response and are clearly taking a huge risk to keep Microsoft from bleeding out their search engine,” he told TechNewsWorld.

Undoubtedly, Google is in catch-up mode, maintained Mark N. Vena, president and principal analyst at SmartTech Research in San Jose, Calif.

He told TechNewsWorld, “I think Google is under enormous market pressure to bring Bard to the mainstream market as quickly as possible because of a perception that they were taken aback by the market reception of ChatGPT.”

Google has been talking about its AI and machine-learning work for several years, but, so far, it has reached the consumer in very limited ways, observed Ross Rubin, principal analyst at Reticle Research, a consumer technology advisory firm in New York City.

“ChatGPT really struck a nerve inside Google. This is a potential threat to Google Search,” he told TechNewsWorld.

Cautious Pace

Bard’s rollout remains measured, despite the pressure to close the gap with Microsoft.

“Google’s pace is somewhat more cautious than Microsoft’s,” said Greg Sterling, co-founder of Near Media, a news, comment and analysis website.

“They feel they have more to lose as a brand if Bard becomes widely available and gets derailed,” he told TechNewsWorld.

Rubin explained that Bard is being rolled out slowly because Google has a dominant position in the market and wants to position the chatbot as a continuation of its existing search product.

“Microsoft has a similar rollout with the use of AI in Office,” he said.

At this point, Vena said, the perception that Microsoft is outpacing Google has done its damage, so Google should use its resources to make Bard the best tool on the market and stop worrying about being first.

ChatGPT vs LaMDA

Vena said that creating a waiting list while slowing Bard’s full rollout could benefit the product.

“It reinforces a notion that Bard is not ready for prime time,” he said. “But putting that notion aside, this is probably a wise move on Google’s part, as a staggered release allows them to work out bugs in a measured and deliberate manner, which is a good thing.”

Sterling stressed that waiting lists serve another purpose as well. “They’re trying to control who has access and how the conversation happens around Bard,” he said. “But in fairness, this is often the way tech products are rolled out.”

Hsiao and Collins note that, currently, Bard is powered by a lightweight and optimized version of LaMDA, Google’s research large language model, and that over time the offering will be updated with newer, more capable models.

“Bard doesn’t seem as powerful as GPT-4, which OpenAI recently released, but because it’s connected to the internet, what it can draw on to answer questions makes a difference,” said Will Duffield, a policy analyst at the Cato Institute, a Washington, D.C., think tank.

“Bard functions better as a personal assistant but doesn’t perform as well on deeper analytical tasks, such as giving it a set of patch notes from a video game and asking how they would change the game’s state, or analyzing a Supreme Court transcript,” he told TechNewsWorld.

Multiple Answer Questions

Vena explained that LaMDA is specifically designed for natural language conversations and aims to be more context-aware than previous language models. It was trained on a wide variety of topics and could potentially be used in a variety of conversational applications, such as chatbots, voice assistants, and customer service tools.

Microsoft’s larger language model, on the other hand, he continued, was designed not specifically for dialog applications but for more general language understanding. Microsoft is working on a number of language models that attempt to improve natural language processing and generation in a variety of applications, including translation, sentiment analysis and question-answering.

Bard also departs from ChatGPT by drafting several responses to a question. “This gives users more flexibility to examine multiple query results, and that’s a good thing,” Vena said.
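Producing several drafts of one answer is usually done by sampling the model more than once with different random seeds (or temperatures) and letting the user, or a ranking model, pick the winner. The sketch below illustrates the pattern with a stubbed-out generator; the function names and canned phrasings are invented, not Google’s actual implementation.

```python
import random


def generate_draft(prompt: str, seed: int) -> str:
    """Stand-in for one sampled LLM completion. Real systems vary the seed
    or temperature so repeated calls can produce distinct drafts."""
    rng = random.Random(seed)
    phrasings = [  # canned outputs standing in for model samples
        f"In short, {prompt} works as follows...",
        f"To answer your question about {prompt}: ...",
        f"Here is one way to think about {prompt}: ...",
    ]
    return rng.choice(phrasings)


def generate_drafts(prompt: str, n: int = 3) -> list[str]:
    # Sample once per seed; each draft may differ, and the user (or a
    # ranking step) chooses among them.
    return [generate_draft(prompt, seed) for seed in range(n)]


drafts = generate_drafts("photosynthesis")
print(len(drafts))  # three candidate drafts for one question
```

The trade-off Sterling hints at is visible here: showing all the samples gives users more choice, but it also spreads the risk of any single bad answer across several candidates.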

While offering multiple drafts gives consumers more choice and information, Sterling said, it also appears to be a defense against user criticism.

Overall, he said, Google is more cautious about its messaging and presentation about Bard than Microsoft is about Bing.

“Bing is courageous,” he said. “Microsoft has less to lose and is eager to embrace AI chat as an evolution of Bing.”

“For Google,” he continued, “it seems like it’s a new add-on that will get better over time. They’re downplaying it as a search replacement.” That partially manages user expectations and shapes broader market sentiment.