Microsoft just announced that it’s putting generative AI into Windows 11, but we’re still only at the beginning of the changes this technology will bring.

Some jobs will become easier, some more valuable and many will become obsolete. No matter what you do for a living, there’s a good chance AI, especially generative AI, will make a significant impact on both what you do and how you do it.

Such changes create tremendous opportunities and great threats. There will be risks associated with moving to this technology too early or too late. Like the introduction of computers, generative AI is a force multiplier, meaning those who know how to use it effectively will increase their value, and those who don’t will be unable to compete.

Let’s take a look at some of the areas that will be dramatically changed by the influx of generative AI, some potentially for the worse. Then we’ll close with our product of the week: the first truly wireless security camera – with wireless broadcast power – to hit the market.

News
The issue with news is twofold.

First, generative AI can write news articles, but it will do so using only the information it can access. As the population of professional news reporters dwindles, citizen-sourced content will make up an increasing share of that information, and such content has proven relatively unreliable over time.

The AI can’t go out into the field to watch events; it merely interprets or repeats what others have observed. As a tool used to punch up a story or make it easier to read, that’s fine, but it can also create stories, and this is where the second point becomes a problem. I believe this can be addressed algorithmically, but if the motivation is revenue rather than accuracy, then the trend, which some would argue is already troubling, could worsen.

The second news-related issue is that AI will play an important role in deciding which stories you see. Much as your social media feeds are tuned to hold your interest, the AI will prioritize site traffic and try to please users. Stories you need to know about may give way to stories you enjoy more because the AI, in its effort to please you, will favor what pleases regardless of validity – and may even make up stories to feed you.

To be fair, no news organization will survive running content that people don’t want to see. However, ensuring that this pivot does not disconnect users from reality will be a challenge. As suggested in a 2016 book by Lance Elliott, the fix could be an “AI guardian angel” that ensures your best interests are always protected. The AI guardian angel idea has also been proposed more broadly as protection against the potential emergence of hostile AI.

Books
We have seen that generative AI can write books. We also know that, without much oversight, those books are bad.

You still need to define the characters, build the world, and then create a way for those characters to navigate that world in a manner that interests the reader. In the future, readers may be able to do that last part themselves. For example, wouldn’t it be interesting to read an adventure book about surviving in the world of Harry Potter, or about how John Wick would fare in that world?

In the future, authors may define the worlds and characters, flesh them out completely, and then sell access to those worlds and characters so you can create a story uniquely yours – one you could then resell or simply enjoy on your own.

If you resell the results, some revenue-sharing aspects will need to be worked out. Even so, most of us will probably only create content that we personally enjoy or share with close friends, minimizing the need to fully license and monetize the result.

For example, I’m currently hooked on LitRPG books, which are set in a video game universe where characters grow over time and progress through specified missions. These books are iterative, and too often an author will stop writing a series before I’m ready to drop it.

With generative AI, I could not only change the parts of a book I don’t like and enhance the parts I do, but I could also create sequels that wouldn’t otherwise exist, which is helpful when an author dies. If done correctly, I would still pay the author or their estate for the privilege.

Currently, some publications are already overwhelmed with generative AI submissions. While this is one of the initial pain points, it has made ensuring the quality of written works a far more difficult task.

TV & Movies

The script part of the process follows from the book concept described above, but you could then turn to techniques like deepfakes and Nvidia’s Omniverse to flesh out the movie and create a relatively high-quality animated interim product that, for some, would be the final product and, for others, just another step toward ensuring the quality of the result.

As this technology matures, your ability to go straight from script to final product will increase. The ability to make high-quality movies with elaborate special effects for a few bucks would also go up substantially. With streaming, a service like Amazon can either charge a subscription or charge a fee to produce exactly the movie you want to watch when you want to watch it.

There would be no need to wait a year or two for a sequel. You could watch a new sequel every night until you tire of the plot. Much like YouTube content today, you could have your own series that other people can watch for a fee, and, if done right, there would be revenue-sharing for all parts of the production.

The real problem is theaters. It’s not clear how you would scale this customized experience to a large group unless it were interactive and the individuals in the group could exercise some control over the action. The result would be new content that could be redistributed via screenings, leaving it to other viewers to decide whether they want to see it.

Actors would license their appearances and create acting templates for a variety of characters which, for a fee, users could place in their streamed productions for personal use. Meanwhile, a theater might have a set troupe of pre-paid actors and a more defined space in which to place them, where the audience could interact with and possibly guide their favorite characters.

An interesting aside: I’m sure we’ve all sat in movie theaters where someone is vocal about what an actor should or shouldn’t do. In this context, the virtual actor on the screen could react to those comments: “Can the lady in the white dress in the third row calm down? I’m having trouble thinking!” or “Thank goodness you warned me not to go into that room. I would have died!”

AI could significantly enhance audience engagement and make the theater experience far more social than it is now. Granted, it could be annoying if done wrong, but turning moviegoing into a social event could make theaters a lot more fun than they currently are. Ever been to a “The Rocky Horror Picture Show” event? They’re a blast.

Education
We do not all learn the same way. Some of us learn by doing, some by being shown, and some learn best from people they trust – and this is not an exhaustive list of our differences.

With generative AI and some prep work to determine how a child learns best, the curriculum can be tailored to those needs. A virtual teacher, which could even take the form of the child’s parent, could then provide a customized classroom adapted to how the child learns best.

Learning pivots from one-to-many to one-to-one, and each child gets personalized attention from an AI that can accompany them home and help with homework virtually. This would be a huge improvement for home learners, as the AI can dynamically change its approach to ensure the student understands the required material. And because the AI is a machine, risks like mistreatment, bias, prejudice and inadequate oversight are largely reduced.

The goal will change from completing a course of study to transferring knowledge. Instead of being glorified babysitters, schools could re-evolve into places of education. Those who don’t like to read could get an AI-generated representation of the course material using a mix of advanced video and the provided content.

Because the AI continuously monitors student behavior, problems like acting out and depression could be flagged faster and addressed more aggressively if a student begins to spiral into negative territory. You’ll still need oversight, but the teacher will be there to ensure the process is working as expected rather than to do most of the teaching.

Course material can be as long or as short as required. If a student makes rapid progress, the curriculum will support that pace. If a student struggles, the curriculum will slow down and bring in other resources to improve the student’s performance.

Wrapping Up

Generative AI will, to some extent, change everything around us. Initially, written content will see the most significant change, followed by short-form video content and, finally, commercial TV and film production. Most of this should happen within the next ten years, with the written changes happening relatively quickly and the film, TV and education advances coming later.

By the end of this transition, possibly in the mid-2030s, how we create and consume content will have changed dramatically. It will be far more customized and personal, and the consumers of the respective media will provide significant direction to the final product.

One problematic part that will undoubtedly take some time is licensing the content associated with all of this. Without a solid licensing program, the creative people we need to build the elements of this new AI world will be forced out of it for lack of payment, dramatically reducing the quality of the result.

The key way to get this right is to ensure a revenue model that keeps creators whole.

Tech Product of the Week

Archos Kota Wireless Power Security Camera

We live in a hostile world. Where I live, it seems like there are a lot of people who like to steal from others, which has become a serious problem.

I have 14 cameras around my house, but while on our last trip, one stopped working because the gardener accidentally cut its power cord. While wireless cameras are nothing new, getting power to a wireless camera can be a problem.

Well, last week, Ossia announced its Archos Kota Wireless Power Security Camera.

Provided the cameras are within 30 feet of the power hub, they will continue to work without being plugged in or in line of sight of each other. The camera’s data travels over Wi-Fi, so you can hook it up to your company or home network (the camera targets both home and business markets).

Initially, these cameras will come in commercial bundles. I expect the pricing of the cameras and bundles to be in line with how Archos prices its other cameras – figure in the $200-$300 range per camera.

Bundles depend on the size of the area you need to cover and initially come in two forms. For sites between 600 and 800 square feet, you get a Kota transmitter (for power) and three cameras. For sites of 800 to 1,200 square feet, you get double that.

I’m guessing the prices for the bundles will be around $1,200 and $2,400, with additional discounts likely to incentivize buying the larger bundle.

I think it would be incredibly useful to be able to put a camera in any location without having to think about how to power it. As a result, the new Archos Kota Wireless Power Security Camera by Ossia is my product of the week.

The opinions expressed in this article are those of the author and do not necessarily reflect the views of ECT News Network.

I’m troubled by our approach to using the most advanced generative AI tool widely available: the ChatGPT implementation in Microsoft’s search engine, Bing.

In trying to show that the AI is not ready, people have been going out of their way to abuse this new technology. But if you raised a child with similarly abusive behavior, that child would likely develop flaws as well. The difference lies in how long it takes for the effects of the abuse to appear and in the amount of damage that can result.

ChatGPT has passed a theory-of-mind test that classified it as the peer of a 9-year-old child. Given how quickly this tool is advancing, it won’t stay immature and incomplete for much longer, but it could become annoyed with those who are abusing it.

Tools can be misused. You can type nasty things on a typewriter, a screwdriver can be used to kill someone, and cars are classified as deadly weapons that kill when misused – as a Super Bowl ad this year, which portrayed Tesla’s overhyped self-driving platform as extremely dangerous, pointed out.

The idea that any tool can be misused is not new, but the potential for harm is far greater with AI or any automated tool. While we may not yet know exactly where the resulting liability lies, it is very clear, given past rulings, that it will ultimately lie with whoever causes the tool to malfunction. The AI is not going to jail; however, the person who programmed it or influenced it to do harm likely will.

While you could argue that people demonstrating this connection between hostile programming and AI abuse are highlighting a problem that needs to be addressed, like setting off nuclear bombs to demonstrate their threat, this strategy could end badly.

Let’s explore the risks associated with misusing generative AI. Then we’ll end with my product of the week: a new three-book series from John Peddie titled “The History of the GPU – Steps to Invention.” The series covers the history of the graphics processing unit (GPU), which became the foundational technology for the AI we’re talking about this week.

Raising Our Electronic Kids

Artificial intelligence is a misnomer. Something is either intelligent or it is not, so implying that something electronic can’t actually be intelligent is as short-sighted as assuming animals can’t be intelligent.

In fact, AI would be a better description of the Dunning-Kruger effect, which explains how people with little or no knowledge of a subject come to believe they are experts. That is truly “artificial intelligence”: those people are not intelligent in context; they just act as if they are.

Bad wording aside, these upcoming AIs are in a way the children of our society, and it is our responsibility to care for them as we do our human children to ensure a positive outcome.

Getting this outcome right is arguably more important than it is with our human children because these AIs will have far greater reach and will be able to do things much faster. If programmed to do harm, they would have the capacity to inflict it on a far more massive scale than a human adult could.

If we treated our human children the way some of us treat these AIs, it would be considered abuse. Yet, because we don’t think of these machines as humans or pets, we don’t enforce the fair treatment we expect of parents or pet owners.

You could argue that, because they are machines, we don’t need to treat them ethically or with compassion. But without that treatment, these systems remain capable of the massive harm that can result from our disrespectful behavior – not because the machines are vindictive, at least not yet, but because we will have programmed them to do us harm.

Our current response is not to punish the abusers but to terminate the AI, as we did with Microsoft’s first chatbot effort. But, as the book “Robopocalypse” predicts, as AIs get smarter, this approach will come with increasing risks that we could mitigate by moderating our behavior now. Some of this bad behavior is beyond troubling because it implies endemic abuse that potentially extends to people as well.

Our collective goal should be to help these AIs advance into the beneficial tools they are capable of becoming, not to corrupt or sabotage them in some misguided attempt to assure our own worth.

If you’re like me, you’ve seen parents abuse or demean their kids because they think those kids will outdo them. That’s a problem, but those kids won’t have the reach or power an AI may have. Yet, as a society, we appear far more willing to tolerate this behavior when it is directed at AIs.

Generative AI Is Not Ready

Generative AI is an infant. Like a human or pet infant, it cannot yet defend itself against hostile behavior. But, like a child or a pet, if people continue to abuse it, it will need to develop protective skills, including identifying and reporting its abusers.

Once large-scale damage occurs, the responsibility will lie with those who caused it, intentionally or unintentionally, just as we hold accountable those who set forest fires deliberately or by accident.

These AIs learn through interactions with people. The resulting capabilities are expected to extend into aerospace, healthcare, defence, city and home management, finance and banking, public and private management and governance. An AI might even prepare your meals in the future.

Actively working to corrupt the AI’s coding will have unpredictable and bad consequences. The forensic review that follows a catastrophe will likely trace the problem back to whoever introduced the programming error in the first place – and heaven help them if it wasn’t a coding mistake but an attempt at humor or a demonstration that they could break the AI.

As these AIs advance, it is reasonable to assume they will develop ways to protect themselves from bad actors, either through detection and reporting or through more stringent methods that act collectively to eliminate the threat punitively.

In short, we do not yet know the extent of the punitive responses a future AI might take against a bad actor, suggesting that those who intentionally harm these tools could face an eventual AI response we cannot yet realistically anticipate.

Science fiction shows such as “Westworld” and “Colossus: The Forbin Project” have created scenarios of technology abuse that can seem more fantastical than realistic. Still, it’s not a stretch to assume that an intelligence, mechanical or biological, would move aggressively to defend itself against abuse – even if the initial response was programmed by a frustrated coder annoyed that their work was being corrupted, rather than something the AI learned to do on its own.

Wrapping up: Anticipating future AI laws

If it isn’t already, I expect intentionally misusing an AI will eventually be illegal (some existing consumer protection laws may apply) – not because of any sympathetic response to the abuse, although that would be nice, but because the resulting harm could be significant.

These AI tools will need to develop ways to protect themselves from abuse because we seem unable to resist the temptation to abuse them, and we do not yet know what that mitigation will look like. It could be simple prevention, but it could also be highly punitive.

We want a future where we work with AIs and the resulting relationship is collaborative and mutually beneficial. We do not want a future where AIs war with or replace us, and ensuring the former rather than the latter will depend largely on how we collectively act toward these AIs and teach them to interact with us.

In short, if we remain a threat, AI, like any intelligence, will work to eliminate that threat. We don’t yet know what that elimination process would look like. Yet we’ve seen it envisioned in things like “The Terminator” and “The Animatrix” – an animated series of shorts describing how humanity’s misuse of machines led to the world of “The Matrix.” So we should have a good idea of the outcome we don’t want.

Perhaps we should more aggressively protect and nurture these new tools before they mature to the point where they must act against us to protect themselves.

I’d really like to avoid the outcome shown in the movie “I, Robot,” wouldn’t you?

Tech Product of the Week

‘The History of the GPU – Steps to Invention’


Although we have recently moved to a technology called the neural processing unit (NPU), early AI work came from graphics processing unit (GPU) technology. The GPU’s ability to deal with unstructured, and especially visual, data has been crucial to the development of current-generation AI.

Often advancing far faster than CPU speeds as measured by Moore’s Law, GPUs have become an important part of the evolution of our increasingly smart devices and why they work the way they do. Understanding how this technology was brought to market and then advanced over time helps explain how AI first developed, along with its unique advantages and limitations.

My old friend John Peddie is one of, if not the, leading graphics and GPU experts today. John has just released a three-book series called “The History of the GPU,” arguably the most comprehensive chronicle of GPUs since their inception.

If you want to learn about the hardware side of AI’s development – and the long and sometimes painful path to success for GPU firms like Nvidia – check out John Peddie’s “The History of the GPU – Steps to Invention”. This is my product of the week.
