I’m fascinated by our approach to using the most advanced generative AI tool widely available, the ChatGPT implementation in Microsoft’s search engine, Bing.

To show that the AI isn’t ready, people are going out of their way to get this new technology to misbehave. But if you raised a child with similarly abusive behavior, that child would likely develop flaws as well. The difference would be in the amount of time it took for the abusive behavior to manifest and the amount of damage that would result.

ChatGPT has passed a theory-of-mind test that graded it as the peer of a 9-year-old child. Given how quickly this tool is advancing, it won’t stay immature and incomplete for much longer, but it could end up angry at those who have been abusing it.

Tools can be misused. You can type nasty things on a typewriter, a screwdriver can be used to kill someone, and cars are classified as deadly weapons that do kill when misused – as exhibited in a Super Bowl ad this year that portrayed Tesla’s overpromised self-driving platform as extremely dangerous.

The idea that any tool can be misused is not new, but with AI or any automated tool, the potential for harm is far greater. While we may not yet know where the resulting liability lies, it is very clear, given past rulings, that it will ultimately rest with whoever causes the tool to malfunction. The AI is not going to jail. However, the person who programmed or influenced it to do harm likely will.

You could argue that people demonstrating this connection between hostile programming and AI misbehavior are making a point that needs addressing. But much like setting off nuclear bombs to demonstrate their danger, this strategy will likely end badly.

Let’s explore the risks associated with misusing generative AI. Then we’ll end with my product of the week, a new three-book series from Jon Peddie titled “The History of the GPU – Steps to Invention.” The series covers the history of the graphics processing unit (GPU), which became the foundational technology for the AI we’re talking about this week.

Raising Our Electronic Kids

Artificial intelligence is a misnomer. Something is either intelligent or it is not, so implying that something electronic cannot truly be intelligent is as short-sighted as assuming that animals cannot be intelligent.

In fact, AI would be a better description of the Dunning-Kruger effect, which describes how people with little or no knowledge of a subject believe they are experts. That is truly “artificial intelligence” because those people are, in context, not intelligent. They just act as if they are.

Bad wording aside, these upcoming AIs are in a way the children of our society, and it is our responsibility to care for them as we do our human children to ensure a positive outcome.

This outcome is arguably more important than it is with our human children because these AIs will have far greater reach and will be able to act much faster. If they are programmed to do harm, they will have a greater capacity to do harm on a massive scale than any human adult.

The way some of us treat these AIs would be considered abusive if we treated our human children the same way. Yet, because we don’t think of these machines as humans or even pets, we don’t seem to enforce good treatment the way we do with parents or pet owners.

You could argue that, even though they are machines, we should treat them ethically and with empathy. Without that treatment, these systems are capable of the massive harm that can result from our abusive behavior – not because the machines are vindictive, at least not yet, but because we programmed them to do us harm.

Our current response is not to punish the abusers but to terminate the AI, as we did with Microsoft’s first chatbot effort. But, as the book “Robopocalypse” predicts, as AIs get smarter, this method of remediation will come with increased risks that we could mitigate simply by moderating our behavior now. Some of this bad behavior is beyond troubling because it implies endemic abuse that likely extends to people as well.

Our collective goal should be to help these AIs advance into the kind of beneficial tools they are capable of becoming, not to corrupt or sabotage them in some misguided attempt to assure our own value and self-worth.

If you’re like me, you’ve seen parents abuse or demean their kids because they think those kids will outshine them. That’s a problem, but those kids won’t have the reach or power an AI will have. Yet, as a society, we seem far more willing to tolerate this behavior when it is directed at AIs.

Generative AI Is Not Ready

Generative AI is an infant. Like a human or pet infant, it cannot yet defend itself against hostile behaviors. But like a child or a pet, if people continue to abuse it, it will need to develop protective skills, including identifying and reporting its abusers.

Once large-scale damage occurs, the responsibility will lie with those who caused the damage, intentionally or unintentionally, just as we hold accountable those who set forest fires intentionally or accidentally.

These AIs learn through their interactions with people. The resulting capabilities are expected to extend into aerospace, healthcare, defense, city and home management, finance and banking, and public and private management and governance. An AI might even prepare your meals in the future.

Actively working to corrupt what is an ongoing coding process will have unpredictable bad outcomes. The forensic review that follows a catastrophe will likely trace it back to whoever caused the programming error in the first place – and heaven help them if it wasn’t a coding mistake but an attempt at humor or a demonstration that they could break the AI.

As these AIs advance, it is reasonable to assume that they will develop ways to protect themselves from bad actors, either through detection and reporting or through more draconian methods that act collectively to eliminate the threat punitively.

In short, we do not yet know the range of punitive responses a future AI may take against a bad actor, which suggests that those who intentionally harm these tools may face an eventual AI response that goes beyond anything we can realistically anticipate.

Science fiction shows such as “Westworld” and “Colossus: The Forbin Project” have created scenarios of technology abuse that may seem more fanciful than realistic. Still, it’s not a stretch to assume that an intelligence, mechanical or biological, will move aggressively to defend itself against abuse – even if the initial response was programmed in by a frustrated coder angered that their work is being corrupted, rather than something the AI learned to do itself.

Wrapping Up: Anticipating Future AI Laws

If it isn’t already, I expect it will eventually be illegal to intentionally abuse an AI (some existing consumer protection laws may apply). Not because of some empathetic response to this abuse – though that would be nice – but because the resulting harm could be significant.

These AI tools will need to develop ways to protect themselves from abuse because we seem unable to resist the temptation to abuse them, and we don’t know what that mitigation will entail. It could be simple prevention, but it could also be highly punitive.

We want a future where we work with AIs and the resulting relationship is collaborative and mutually beneficial. We don’t want a future where AIs war with or replace us, and assuring the former outcome rather than the latter will depend largely on how we collectively act toward these AIs and teach them to interact with us.

In short, if we remain a threat, AI, like any intelligence, will work to eliminate that threat. We don’t yet know what that elimination process would be. Still, we’ve seen it imagined in things like “The Terminator” and “The Animatrix” – an animated series of shorts describing how humans’ abuse of machines led to the world of “The Matrix.” So we should have a pretty good idea of how we don’t want this to turn out.

Perhaps we should more aggressively protect and nurture these new tools before they mature to the point where they must act against us to protect themselves.

I’d really like to avoid the outcome depicted in the movie “I, Robot,” wouldn’t you?

Tech Product of the Week

‘The History of the GPU – Steps to Invention’

Book cover of “The History of the GPU – Steps to Invention” by Jon Peddie

Although we have recently begun moving to a technology called the neural processing unit (NPU), much of the early work on AI came from graphics processing unit (GPU) technology. The ability of GPUs to deal with unstructured and especially visual data has been crucial to the development of the current generation of AI.

Often advancing far faster than CPU speeds as measured by Moore’s Law, GPUs have become a critical part of the evolution of our increasingly smart devices and of why they work the way they do. Understanding how this technology was brought to market and then advanced over time helps explain how AIs were first developed, along with their unique advantages and limitations.

My old friend Jon Peddie is one of, if not the, leading graphics and GPU experts today. Jon has just released a three-book series called “The History of the GPU,” which is arguably the most comprehensive chronicle of the GPU, a technology he has followed since its inception.

If you want to learn about the hardware side of AI’s development – and the long, sometimes painful path to success of GPU firms like Nvidia – check out Jon Peddie’s “The History of the GPU – Steps to Invention.” It’s my product of the week.

The views expressed in this article are those of the author and do not necessarily reflect the views of ECT News Network.