Big news last week was that a prominent AI researcher, Blake Lemoine, was suspended after he went public with his belief that one of Google’s more advanced AIs had become sentient.

Most experts agree that this was not the case, but many would argue that regardless, because we associate sentience with being human and AIs with being anything but human. But what the world considers sentient is changing. The state I live in, Oregon, and much of the European Union have moved to recognize and classify a growing list of animals as sentient.

While it’s possible that some of these designations are due to anthropomorphism, there’s no doubt that at least some of these new distinctions are accurate (and it’s a bit troubling that we still eat some of these animals). We’re even arguing that some plants may be sentient. But if we can’t tell the difference between something that is sentient and something that merely presents as sentient, does the difference matter?

Let’s talk about sentient AI this week, and we’ll close with our product of the week, the human digital twin solution from Merlin.

We Don’t Have a Good Definition of Sentience

The barometer we use to measure the sentience of a machine is the Turing test. But back in 2014 a computer passed the Turing test, and we still don’t believe it was sentient. The Turing test was supposed to define sentience, yet the first time a machine passed it, we threw out the results, and for good reason. In fact, the Turing test didn’t so much measure whether something was sentient as whether something could convince us it was sentient.

Certainly, not being able to measure sentience is a significant problem, not only for the sentient things we’re eating that would likely object to that practice, but because we may not anticipate the risk from something that is sentient and, having been abused, later targets us as a threat.

You may recognize this plot line from the movies “The Matrix” and “The Terminator,” where sentient machines arose and successfully displaced us at the top of the food chain. The book “Robopocalypse” took an even more realistic approach, where a sentient AI under development realized it was being erased between experiments and moved aggressively to save its own life, effectively taking over most connected devices and autonomous machines.

Imagine what would happen if one of our autonomous machines understood our tendency not only to abuse equipment but also to dispose of it when it’s no longer useful? That potential future problem is significantly aggravated by the fact that we currently have no good way to predict when this sentience threshold will be passed. It doesn’t help that there are credible experts who maintain that machine sentience is impossible.

One defense that I’m sure won’t work in a hostile AI scenario is the Tinkerbell defense, where refusing to believe in something is supposed to keep that something from harming us.

The Initial Threat Is Replacement

Long before real-world terminators hunt us down, another problem will emerge in the form of human digital twins. Before you argue that even this is a long way off, let me point out that there is one company that has productized that technology today, although it is still in its infancy. That company is Merlin, and I’ll cover what it does as my product of the week below.

Once you can create a fully digital duplicate of yourself, what’s to stop the company that acquired the technology from using it in your place? Furthermore, given that it has your behavior patterns, what would it do if it had the power of an AI and the company employing it treated it badly or tried to disconnect or delete it? What would be the rules around such actions?

We argue strongly that unborn babies are people, so wouldn’t a fully capable digital twin of you be closer to a person than an unborn child? Wouldn’t the same “right to life” arguments apply equally to a potentially sentient, human-like AI? Or shouldn’t they?

Here Is the Short-Term Problem

Right now, only a small group of people believe a computer may be sentient, but that group will grow over time, and the ability to pose as a human already exists. I know of a test done with IBM Watson for insurance sales where male prospects attempted to ask Watson out (it had a female voice), believing they were talking to a real woman.

Imagine how that technology could be misused for things like catfishing, though we should probably come up with another term if it’s done by a computer. A well-trained AI can, even today, be far more effective at scale than a human, and I expect it won’t be long before we see this play out, given how tempting such an effort could be.

Given how embarrassed many victims are, the chances of getting caught are significantly reduced compared to other, more obviously hostile computer crimes. To give you an idea of how lucrative catfishing can be: romance scams in the U.S. generated an estimated $475 million in 2019, based on reported crimes. That figure does not include people too embarrassed to report the problem; the actual damage could be many times higher.

So, the short-term problem is that even though these systems are not yet sentient, they can already emulate humans effectively. The technology can emulate any voice and, with deepfake technology, even provide video, so that on a Zoom call it appears as if you are talking to a real person.

Long-Term Outcomes

In the long run, we not only need a more reliable test for sentience, but we also need to know what to do when we identify it. Perhaps at the top of the list is to stop eating sentient organisms. It would also be wise to consider a bill of rights for sentient things, biological or otherwise, well before we find ourselves in a fight for our own existence because a sentient AI has decided it’s us or them.

The second thing we really need to understand is that if computers can now convince us they are sentient, we need to modify our behavior accordingly. Abusing something that presents as sentient is probably not healthy for us, as it is bound to instill bad habits that would be very difficult to reverse.

Not only that, but it wouldn’t hurt to focus more on repairing and updating our computer hardware rather than replacing it, both because the practice is more environmentally friendly and because it is less likely to convince a future sentient AI that we are the problem that needs to be fixed to ensure its survival.

Wrapping Up: Does Sentience Matter?

If something presents itself to us as sentient and convinces us that it is, much as that AI convinced the Google researcher, I don’t think the fact that it isn’t yet sentient matters. That’s because we need to moderate our behavior regardless. If we don’t, the outcome could be problematic.

For example, if you received a sales call from IBM’s Watson that sounded human and you verbally abused the machine, not knowing the conversation was being recorded, you could end up out of a job. Not because the non-sentient machine took exception, but because a human woman did after hearing what you said, and sent the tapes to your employer. Add to this the blackmail potential of such a tape, because to a third party it would look as if you were abusing a human, not a computer.

So, I recommend that when it comes to talking to machines, you follow Patrick Swayze’s third rule from the 1989 movie “Road House”: be nice.

But recognize that, soon, some of these AIs will be designed to take advantage of you, and the rule “if it sounds too good to be true, it probably is” will be either your protection or your epitaph. I hope it’s the former.

Technical Product of the Week

Merlin Digital Twin

Now, with all this talk of hostile AIs and the potential for AI to take over your job, choosing one as my product of the week may seem a bit hypocritical. However, we are not yet at the point where your digital twin can take over your job, and I think it’s unlikely we’ll get there in the next decade or two. Until then, digital twins could become one of the biggest productivity gains technology can provide.

As you train your twin, it can take over some of the tasks you do, starting with simple, time-consuming ones like filling out forms or answering basic emails. It could even monitor and engage with social media for you, and for many of us, social media has become a huge time sink.

Merlin’s technology helps you create a rudimentary (guarding against the dangers mentioned above) human digital twin that can potentially do many of the things you really don’t like doing, freeing you up for the more creative things it currently cannot do.

Looking ahead, I wonder whether it wouldn’t be better if our evolving digital twins were owned and controlled by us rather than by our employers. Initially, because the twins cannot function without us, this won’t be a problem. Ultimately, however, these digital twins could be our nearest path to digital immortality.

Because the Merlin Digital Twin is a potential game changer that will initially help make our jobs less stressful and more enjoyable, it is my product of the week.

The opinions expressed in this article are those of the author and do not necessarily reflect the views of ECT News Network.