Many technology leaders agree that while AI can be hugely beneficial to humans, it can also be misused or, through negligence, harm humanity. But looking to governments to solve this problem without guidance would be foolish because politicians often don't understand the technology they've used for years, let alone something that's just hitting the market.
As a result, when governments act to mitigate a problem, they may do more harm than good. For example, it was right to punish the old Standard Oil company for its abuses, but breaking up the company shifted control of oil from the United States to parts of the world that are not friendly to America. Similar government action around consumer electronics shifted that market from the US to Japan.
The US has held onto technological leadership by the skin of its teeth, but there is no doubt in my mind that if governments act without guidance on how to regulate AI, they will shift the opportunity to China. That's why Microsoft's recent report, "Governing AI: A Blueprint for the Future," is so important.
The Microsoft report defines the problem, outlines a reasonable path forward that won’t undermine US competitiveness, and addresses concerns surrounding AI.
Let's talk about Microsoft's blueprint for AI governance, and we'll end with our Product of the Week: a new line of trackers that can help us keep track of the things we often have trouble finding.
EEOC Example
It is foolish to demand regulation without context. When a government reacts tactically to something it knows little about, it can do more harm than good. I opened with a couple of cautionary examples, but perhaps the ugliest is the Equal Employment Opportunity Commission (EEOC).
Congress established the EEOC in 1964 to rapidly address the very real problem of racial discrimination in employment. There were two basic causes of that discrimination. The most obvious was racial discrimination in the workplace itself, which the EEOC could and did address. But an even bigger problem existed in education, which the EEOC didn't address.
Businesses that hired based on merit, using whatever methodology industry had developed at the time to award positions, raises, and promotions based on education and achievement, were asked to improve their diversity through programs that often placed inexperienced minority hires into jobs.
That approach failed minorities by putting inexperienced people into jobs they weren't well trained for, which only reinforced the belief that minorities were somehow inadequate when, in fact, they simply hadn't been given equal access to education and mentoring. This was true not only for people of color but also for women, regardless of color.
Looking back now, we can see that the EEOC didn't really fix anything, but it did transform HR from an organization focused on caring for and nurturing employees into one focused on compliance, which often meant covering up employee issues rather than addressing them.
Brad Smith Steps Up
Microsoft President Brad Smith strikes me as one of the few technology leaders who thinks broadly. Instead of focusing almost exclusively on tactical responses to strategic problems, he thinks strategically.
Microsoft's blueprint is a case in point. While most people go to the government and say, "you should do something," which can lead to other long-term problems, Smith set out to define what he thinks the solution should look like, and he lays it out elegantly in a five-point plan.
He begins with a provocative statement: "Don't ask what computers can do, ask what they should do," which reminds me of John F. Kennedy's famous line, "Ask not what your country can do for you; ask what you can do for your country." Smith's statement comes from "Tools and Weapons," the book he co-authored in 2019, and it has been referred to as one of the defining questions of this generation.
This statement frames the importance of protecting people and makes us think through the implications of new technology to ensure that our use of it is beneficial rather than harmful.
Smith goes on to argue that we should use technology to improve the human condition as a priority, not just to reduce costs and increase revenue. Like IBM, which has undertaken a similar effort, Smith and Microsoft believe technology should be used to augment people, not replace them.
He also, and this is very rare these days, talks about the need to anticipate where technology will be in the future so that we can address problems proactively and strategically rather than merely react to them. The needs for transparency, accountability, and assurance that the technology is being used lawfully are all important to this effort and are well defined.
5-Point Blueprint Analysis
Smith's first point is to implement and build on government-led AI safety frameworks. Too often, governments fail to realize that they already have some of the tools needed to address a problem and waste a lot of time effectively reinventing the wheel.
The US National Institute of Standards and Technology (NIST) has already done influential work here in the form of the AI Risk Management Framework (AI RMF). It's a good, though incomplete, framework, and Smith proposes putting it to use and building on it.
Smith's second point is the need for effective safety brakes for AI systems that control critical infrastructure. If an AI controlling critical infrastructure goes off the rails, it could cause massive damage or even mass casualties.
We must ensure that those systems get extensive testing and thorough human oversight and that they are tested against not only likely but unlikely problem scenarios, so the AI doesn't step in and make a bad situation worse.
The government would define the classes of systems that require guardrails, provide direction on the nature of those protective measures, and require that the relevant systems meet certain safety requirements, such as being deployed only in data centers tested and licensed for such use.
Smith's third point is to develop a broad legal and regulatory framework based on the technology architecture for AI. AI is going to make mistakes. People may not like the decisions an AI makes even when they are correct, and people may blame the AI for things it had no control over.
In short, there will be a lot of litigation to come. Without a legal framework covering responsibility, rulings are likely to be varied and contradictory, making any resulting remedy difficult and very costly.
Thus, there is a need for a legal framework so that people understand their responsibilities, risks, and rights, both to avoid future problems and to find a quick legal remedy when a problem does occur. This alone could reduce what will otherwise become a massive litigation load, as AI is largely a greenfield when it comes to legal precedent.
Smith's fourth point is to promote transparency and ensure academic and nonprofit access to AI. That makes sense: how can you trust something you don't fully understand? People don't trust AI today, and without transparency, they won't trust it tomorrow. In fact, I'd argue that without transparency, you shouldn't trust AI because you can't verify that it will do what you want.
In addition, we need academic access to AI to ensure that people entering the workforce understand how to use the technology properly, and we need to ensure that nonprofits, especially organizations focused on improving the human condition, have effective access to this technology for good.
Smith’s fifth point is to advance new public-private partnerships to use AI as an effective tool to address inevitable societal challenges. AI will have a massive impact on society, and ensuring that this impact is beneficial and not harmful will require focus and oversight.
He explains that while AI can be a sword, it can also be used effectively as a shield more powerful than any existing sword on the planet, and it should be used everywhere to protect democracy and people's fundamental rights.
Smith cites Ukraine as an example where the public and private sectors have come together effectively to create a powerful defense. He believes, as do I, that we must emulate Ukraine’s example to ensure that AI reaches its potential to help move the world toward a better tomorrow.
Finale: A Better Tomorrow
Microsoft isn’t just going to governments and asking them to act to solve a problem that governments don’t yet fully understand.
It is laying out a framework for that solution, one meant to assure that we mitigate the risks surrounding the use of AI and have the tools and remedies in place to address problems when they do occur, not the least of which is an emergency stop switch that allows a derailed AI program to be gracefully terminated.
Whether you're a company or an individual, Microsoft is providing an excellent lesson here in how to lead on solving a problem rather than just tossing it at the government and asking it to fix things. Microsoft has outlined the problem and provided a well-thought-out solution so that the problem doesn't become bigger than it already is.
Nicely done!
Pebblebee Trackers
Like most people, my wife and I often misplace stuff, which most often happens when we rush out of the house and put something down without thinking about where we put it.
Plus, we have three cats, which means regular trips to the vet to take care of them. Over the years, our cats have found unique and creative hiding places so they don't get nabbed and crated. So, we use trackers like Tile and AirTag.
But the problem with AirTags is that they really only work if you have an iPhone, which my wife does, meaning she can track things, but I can't because I have an Android phone. With Tiles, you must either replace the device when it dies or replace the battery, which is a pain, so the battery is often dead just when we need to find something.
The Pebblebee works like those other devices but differs in that it's rechargeable and will work with either Pebblebee's app, which runs on both iOS and Android, or the native apps in those operating systems: Apple Find My and Google Find My Device. Sadly, it won't do both at the same time, but at least you get a choice.
Pebblebee Trackers: the Clip for keys, bags, and more; the Tag for luggage, jackets, etc.; and the Card for wallets and other tight spaces. (Image Credit: Pebblebee)
When you're trying to locate a tracker, it beeps and lights up, making it easier to find things at night and less like a bad game of Marco Polo. (I wish smoke detectors did this.)
Because the Pebblebee works with both Apple and Android, and because you can recharge the battery, it serves our needs better than the Tile or Apple's AirTag, making it my Product of the Week.
The opinions expressed in this article are those of the author and do not necessarily reflect the views of ECT News Network.