Archive

June 2022


Scalable cloud-based solutions are widely popular among IT professionals these days. The cost, convenience and reliability of ready-to-use software as a service make this disruptive technology a favorable choice.

Still, the market needs some reassurance that backing up to the cloud is a smart and secure thing to do, as suggested by Paul Evans, CEO of UK-headquartered data management provider Redstor.

Redstor has over 40,000 customers globally, over 400 partners, and over 100 million restores a year. Last month in London, Redstor was named Hosted Cloud Vendor of the Year at the 2022 Technology Reseller Awards.

“Companies should not only say goodbye to on-premises boxes, they should celebrate because their removal reduces the risk of ransomware or the effects of fire or flooding in the data center,” Evans told TechNewsWorld.

SaaS is a software delivery model that provides great agility and cost-effectiveness for companies. This makes it a reliable choice for many business models and industries. It is also popular among businesses due to its simplicity, user accessibility, security and wide connectivity.

According to Evans, SaaS trends are disrupting the industry this year. Spiceworks Ziff Davis predicts that next year half of all workloads will be in the cloud.

Many organizations are undertaking cloud-first migration projects. Of particular interest are hard-hit businesses looking to fund infrastructure through operational expenditure (OpEx) models and frameworks to avoid huge upfront investments.

“Data will become increasingly cloud-native in the coming year, especially with the continued growth of Kubernetes, Microsoft 365, Google Workspace and Salesforce,” he said.

Danger Landscape Driving Factor

Grand View Research recently reported that the global managed services market, valued at US$239.71 billion in 2021, is expected to grow at a compound annual growth rate (CAGR) of 13.4 percent from this year to 2030. Many managed service providers (MSPs) are looking to become more service-driven.

At the same time, value-added resellers are looking to become cloud service providers. Evans said other distributors are trying to figure out which way they might be the best fit.

“The backdrop of this is a threat landscape that has changed dramatically, especially after Russia’s invasion of Ukraine. State-sponsored malware and cyber warfare are coming to the fore alongside shrewd renegade criminals,” he said.

US President Joe Biden has called for the private sector to step in and close its “digital doors” to protect critical infrastructure. Sir Jeremy Fleming, director of the UK’s intelligence, cyber and security agency GCHQ, warned that the Russian regime is identifying institutions and organizations to bring down, making it only a matter of time before the attacks come.

“Threats are not only increasing in scale and complexity. The range of ransomware attacks makes it abundantly clear that companies of all shapes and sizes will increasingly become targets. As a result, we will see more businesses enlisting MSPs to run their IT, cybersecurity and compliance programs,” predicted Evans.

During our conversation, I discussed further with Evans how Redstor and other providers can strengthen digital security.

TechNewsWorld: What’s unique about Redstor technology compared to other solutions for data management and disaster recovery?

Paul Evans: Our approach focuses on the concerns of businesses regarding their risk position, resource constraints and profitability challenges at a time when IT skills are lacking. Redstor offers what we believe is the smartest and simplest backup platform for MSPs.

One factor is the ease of onboarding. With three clicks and a password, users are up and running and can scale easily. In addition, the platform is lightweight, supports multiple data connectors, and is purpose-built from the ground up for MSPs that manage multiple accounts.

It’s not a Frankenstein’s monster of hastily acquired solutions bolted together.

What makes Redstor’s platform technically smart?

Evans: Whether MSPs are protecting data on-premises or in the cloud – Microsoft 365, Google Workspace, or cloud-native Kubernetes – they can do it easily, all with one app. By being able to span the on-premises, cloud and SaaS worlds from a single location, rather than moving between several different interfaces, MSPs save time and money.

Redstor is smart because we enable user-driven recovery by streaming backup data on demand, so organizations have everything they need to get straight up and running in the event of data loss.

You don’t need to mirror everything, copy everything, or recover everything before it starts working again. During an outage, InstantData technology restores critical data back in seconds, while less critical recovery continues in the background.

This platform is also smart because it offers more than just backup. You also get archive and disaster recovery with high-end search and insights – all from one app.

Redstor also leverages AI: our machine learning model automatically detects and isolates suspicious files in backups so that they can be removed for malware-free recovery. MSPs can do data classification with tagging. In the future, we will introduce anomaly detection.

How do cloud-based SaaS data protection and recovery systems compare to other solutions?

Evans: Organizations have found they need multiple boxes onsite to pull data down quickly for a faster experience than the cloud alone provides. But on-premises Frankenstein solutions, cobbled together from technology acquired in multiple acquisitions, aren’t going to meet today’s challenges.

Redstor CEO Paul Evans

Also, with hardware, there can be supply-chain issues and shortages of critical components such as semiconductors. Moving your data security to the cloud eliminates both of these issues, and the responsibility rests entirely with the MSP.

Without cloud-based security, you lack the best means of securing data. SaaS security is built in and constantly updated. Free updates are provided on a regular release cycle to keep customers ahead of the risks. The MSP ensures reliable and secure connectors for many sources and popular applications, now and in the future.

Also, storing backups securely in geographically separated data centers creates an air gap between live data and backups to enhance security.

What is driving the popularity of SaaS data protection?

Evans: The biggest driver was the pandemic, when being onsite became problematic. Those with hardware-bound data security faced challenges fixing and swapping out boxes. Many organizations also do not want boxes onsite because they are hard to come by due to supply-chain issues. Furthermore, the devices are known to be ransomware magnets.

SaaS overcomes these issues and more. MSPs are open to data-portability requests and enable tools and services designed for today’s challenges. They can also deliver the services digitally, and distributors appreciate the value of channel-ready SaaS supplied through online marketplaces.

Most SaaS applications now stress the need for a separate backup. More people are realizing that just because you have Microsoft doesn’t mean you can’t be compromised. You may have an internal user that destroys the data, or you may not have enough retention. Backing up SaaS applications is now the fastest growing part of our business.

What should an MSP look for from a vendor besides good technical support?

Evans: Technology built for MSPs should be partner-friendly from the start and include deep sales and marketing support. It should offer attractive margins with clear, transparent pricing so that MSPs can easily sell services.

The software should rapidly enhance data security, and by the end of the first negotiation, MSPs should be able to offer a proof of concept by deploying backups and performing rapid recovery to close deals faster.

Vendors should provide MSPs with the ability to purchase whatever they need from a single source, whether it’s protection for a Kubernetes environment, malware detection for backups, or data classification.

A single interface is also key to eliminating the complexity of switching between different solutions and consoles. Plus, being able to view and manage data from one place saves valuable time.

A vendor’s platform should be designed for multi-tenancy and provide a high-level view of the MSP’s own usage and customer consumption. It should also show the types of data protected and where they reside. The vendor must have a history of applying new advances, particularly AI, to malware detection and removal, data classification and cyberattack prediction.

How should businesses assess vendor suitability?

Evans: Many vendors make a bold claim to be the best solution to the challenges in the market. MSPs should receive direct feedback from their peers and adequately field-test the solutions.

Check the rankings on G2’s lists of Top 20 Backup Software and Top 20 Online Backup Software, and other user-supported reviews. Focus on reports based on user satisfaction and review data. For example, Redstor ranks first with G2.

Also look for vendors that provide a clear road map of future growth that the MSP should be able to influence. Lastly, MSPs should focus on smart solutions that provide simplified security.

Getting low-income drivers behind the wheel of electric vehicles will be necessary to achieve the greenhouse gas reductions expected in the coming years, according to a report released Monday by the Information Technology and Innovation Foundation (ITIF), a science and technology think tank in Washington, D.C.

Given the lack of low-carbon alternatives to internal combustion engines (ICEs) and the urgency of emissions reduction, EVs need to be a market success, wrote report authors Madeline Yozwiak, Sanya Carley and David M. Konisky.

Because of the stakes involved, they continued, the technology maturity path for EVs needs to move faster than that of a typical emerging technology.

Rapid adoption of this young technology is needed if local and global policy goals are to be met, they added. This implies that a wider range of consumers must buy an EV earlier in the adoption process than with comparable technologies.

Since traditional approaches to incentivizing EV purchases may fail to reach low-income and disadvantaged communities, the authors argue that innovation to address the disparities in EV adoption would be an important strategy, one that also assists the broader goal of mass adoption.

They believe that by intentionally involving a diverse range of users in the adoption process, technology providers can more effectively identify issues and modify technology to successfully appeal to the mass market.

Barriers to Adoption

Rob Enderle, president and principal analyst at Enderle Group, an advisory services firm in Bend, Ore., agreed that low-income and disadvantaged people who drive cars are critical to the decarbonization of the environment. “That’s where most non-compliant gas cars live, which makes it an important milestone in reducing automotive-based pollutants,” he told TechNewsWorld.

“Be aware, however,” he warned, “that most areas still do not yet have sufficient power generation and distribution capacity for these clusters.”

The ITIF report said the top three barriers to EV adoption – range, price and charge time – affect low-income and disadvantaged drivers more than others.

“Standard barriers may be experienced more acutely by low-income individuals than by middle-income individuals,” Yozwiak said.

For example, when it comes to low-income drivers, incentives designed to encourage the purchase of EVs can miss the mark.

“The upfront cost is higher than for internal combustion vehicles, yet the primary form of government-created incentive is a tax credit of $7,500,” Yozwiak told TechNewsWorld. “But to benefit from that policy, you must have at least $7,500 in tax liability.”

“If you make $30,000 a year, you won’t have that much in tax liability, so you won’t get the full benefit of that credit to lower the cost of the vehicle compared to higher-income buyers,” she explained.

Rich Men With Garages

Charging an EV can be even more challenging for low-income and disadvantaged drivers. David M. Hart, director of ITIF’s Center for Clean Energy Innovation, told TechNewsWorld, “Low-income people are more likely to live in multi-family dwellings and less likely to have a place to directly charge a car.”

Enderle said that because of constraints like price, range and charging time, EVs are often the second car in the family. “Low-income groups likely only have one car that they primarily use, and that is the car that needs to be replaced,” he said.

The report also noted that strategies to accelerate EV adoption among low-income and disadvantaged communities should include prioritizing communication and marketing, revisiting perceptions and biases about early adopters, and designing government programs to maximize demand and broaden benefits.

“Perceptions about who is using this technology inform a variety of decisions,” Yozwiak said. “Those decisions define the types of incentives and policies created to encourage the technology’s adoption.”

“If those decisions are based on misconceptions about who is buying the technology or who can buy it,” she continued, “you perpetuate a bias that could further impact access.”

“When car sellers think of early adopters, they think of wealthy men with garages,” Hart said. “If they focus solely on that group, adoption of these vehicles will be slow because they will be seen as the province of the rich. We need these vehicles to perform the mobility tasks that all people need.”

Enderle noted that EVs were initially offered at the premium end of the market and that public chargers are positioned to serve that segment of buyers. “Low-income households may not have the electrical capacity for a Level 2 charger or a location to install it,” he said.

“Public charging will need to be installed that is more convenient for those populations,” he continued, “such as in-street inductive charging – which requires less maintenance and is less prone to vandalism – of the kind available from companies such as WiTricity.”

Tesla with WiTricity wireless charger

WiTricity Halo wireless charging for EVs was announced in February.


Incentives Work

Another takeaway from the report was that the federal government could help increase benefits to low-income and disadvantaged drivers by modifying the federal tax credit for EV purchases to make it refundable or allow a carry-forward, by expanding access to charging infrastructure, and by helping to fund upgrades to older homes.

If the tax credit were refundable, for example, a person who owed only $3,000 in taxes would receive a $3,000 tax credit plus a $4,500 refund check from Uncle Sam; with a carry-forward, they would get a $3,000 credit now and be able to carry the remaining $4,500 of credit into subsequent tax years.
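To make the arithmetic concrete, here is a minimal Python sketch of how the three treatments differ for a buyer with a given tax liability. The function name and structure are my own illustration; the figures follow the report's $7,500-credit example.

```python
def ev_credit_outcome(tax_liability: float, credit: float = 7500.0) -> dict:
    """Compare nonrefundable, refundable, and carry-forward treatments
    of an EV tax credit for a buyer with a given tax liability.

    Hypothetical illustration only; figures follow the article's example
    of a $7,500 credit and a buyer who owes only $3,000 in taxes.
    """
    used_now = min(tax_liability, credit)   # credit can only offset taxes owed
    unused = credit - used_now              # portion exceeding the liability
    return {
        "nonrefundable": used_now,           # unused portion is simply lost
        "refundable": used_now + unused,     # unused portion paid as a refund check
        "carry_forward": (used_now, unused), # unused portion rolls to later tax years
    }

# A buyer with a $3,000 liability captures only $3,000 of a nonrefundable
# credit, the full $7,500 if it is refundable, or $3,000 now plus $4,500
# carried forward into subsequent tax years.
outcome = ev_credit_outcome(3000)
```

As the sketch shows, only the nonrefundable form leaves lower-income buyers capturing less of the credit than higher-income buyers.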

Incentives like tax credits can boost sales, said Edward Sanchez, senior analyst at Strategy Analytics, a global research, advisory and analysis firm. “Norway recently removed some incentives because EVs exceeded the 50% threshold of new car sales, and soon after removing that credit, they saw a drop in EV sales,” he told TechNewsWorld.

“The long-term goal for manufacturers is to bring the price down to the point where subsidies and credits are no longer needed, but we are not quite there yet,” he said.

Move to Mass Transit

Since most Americans buy used cars, the best way to accelerate EV purchases by low-income and disadvantaged drivers is to accelerate sales of new vehicles, said Sam Abuelsamid, a lead e-mobility analyst in Detroit. “As they filter into the used vehicle fleet, they may become more economical,” he told TechNewsWorld.

“The only other thing we can do is encourage people to get out of old vehicles and use mass transportation,” he said.

“As long as Americans want to continue driving their vehicles,” he said, “it’s going to be at least 2040 before you significantly reduce the existing vehicle fleet.”

As criminal activity on the Internet continues to intensify, hunting bugs for cash is attracting more and more security researchers.

In its latest annual report, bug bounty platform Intigriti revealed a 43% increase in the number of analysts signing up for its services from April 2021 to April 2022. For Intigriti alone, this meant adding 50,000 researchers.

For the most part, the report noted, bug bounty hunting is part-time work: 54% of researchers hold full-time jobs and another 34% are full-time students.

“Bug bounty programs are tremendously successful for both organizations and security researchers,” said Ray Kelly, a fellow at WhiteHat Security, an application security provider in San Jose, Calif., which was recently acquired by Synopsys.

“Effective bug bounty programs limit the impact of serious security vulnerabilities that could easily have put an organization’s customer base at risk,” he told TechNewsWorld.

“Payments for bug reports can sometimes exceed six-figure amounts, which may seem like a lot,” he said. “However, the cost of fixing and recovering a zero-day vulnerability for an organization can total millions of dollars in lost revenue.”

‘Good faith’ rewarded

As if that weren’t incentive enough to become a bug bounty hunter, the US Department of Justice recently sweetened the career path by adopting a policy that says it will not enforce the federal Computer Fraud and Abuse Act against hackers who act in “good faith” when attempting to discover flaws in software and systems.

“The recent policy change to prevent prosecuting researchers is welcome and long-awaited,” said Mike Parkin, senior technical engineer at Vulcan Cyber, a provider of SaaS for enterprise cyber risk prevention in Tel Aviv, Israel.

“The fact that researchers have, over the years, tried to help find and fix security flaws under a regime that amounted to ‘no good deed goes unpunished’ shows real dedication to doing the right thing, even when doing the right thing meant risking fines and jail time,” he told TechNewsWorld.

“This policy change removes a fairly significant obstacle to vulnerability research, and we can expect it to pay dividends quickly, as more people hunt for bugs in good faith without the risk of jail time.”

Today, ferreting out bugs in other people’s software is considered a respectable business, but that wasn’t always the case. “Basically, there were a lot of issues when bug bounty hunters would find vulnerabilities,” said James McQuiggan, a security awareness advocate at KnowBe4, a security awareness training provider in Clearwater, Fla.

“Organizations would take a lot of offense and try to accuse the researcher over the finding when, in fact, the researcher wanted to help,” he told TechNewsWorld. “The industry has recognized this, and now dedicated email addresses are established to receive such reports.”

Benefits of Multiple Eyes

Over the years, companies have come to realize what bug bounty programs can bring to the table. “The task of discovering and prioritizing vulnerabilities and unintended consequences is not, and should not be, the focus of an organization’s resources or efforts,” explained Casey Ellis, CTO and founder of Bugcrowd, which operates a crowdsourced bug bounty platform.

“As a result, a more scalable and effective answer to the question ‘where am I most likely to be compromised?’ is no longer considered a nice-to-have, but a must-have,” he told TechNewsWorld. “This is where bug bounty programs come into play.”

“Bug bounty programs are a proactive way to spot vulnerabilities and reward one’s good work and discretion,” said Davis McCarthy, a lead security researcher at Valtix, a provider of cloud-native network security services in Santa Clara, Calif.

“The old adage, ‘Many eyes make all the bugs shallow,’ is true, because there is a dearth of talent in the field,” he told TechNewsWorld.

Parkin agreed. “With the sheer complexity of modern code and the myriad interactions between applications, it’s important to have more responsible eyes looking for flaws,” he said.

“Threat actors are always working to find new vulnerabilities they can exploit, and the threat landscape in cybersecurity has only gotten more hostile,” he continued. “The rise of bug bounties is a way for organizations to bring some of the independent researchers into the game on their side. It’s a natural response to an increase in sophisticated attacks.”

Bad Actor Reward Program

Although bug bounty programs have gained greater acceptance among businesses, they can still cause friction within organizations.

“Researchers often complain that even when firms have a coordinated disclosure or bug bounty program, a lot of pushback or friction exists. They often feel slighted or pushed aside,” said Archie Agarwal, founder and CEO of ThreatModeler, an automated threat modeling provider in Jersey City, N.J.

“Organizations, for their part, can get stuck when presented with a disclosure because the researcher found a fatal design flaw that would require months of concerted effort to rectify,” he told TechNewsWorld. “Perhaps some would prefer that these kinds of flaws stay out of sight.”

“The effort and expense of fixing design flaws after a system has been deployed is a significant challenge,” he continued. “The surest way to avoid this is to threat model systems as their design evolves. That gives organizations the ability to plan for, and deal with, these flaws proactively, while they are still only potential.”

Perhaps the biggest proof of the effectiveness of bug bounty programs is that malicious actors have begun to adopt the practice. The LockBit ransomware gang is offering payments to those who discover vulnerabilities in its leak website and its code.

“This development is novel; however, I doubt they will get many takers,” predicted John Bambenek, principal threat hunter at Netenrich, a San Jose, Calif.-based IT and digital security operations company.

“I know that if I find a vulnerability, I’m going to use it to put them in jail,” he told TechNewsWorld. “If a criminal finds one, it will be to steal from them, because there is no honor among ransomware operators.”

“Ethical hacking programs have been hugely successful. It is no surprise to see ransomware groups refining their methods and services in the face of that competition,” said Casey Bisson, head of product and developer relations at BluBracket, a cybersecurity services company in Menlo Park, Calif.

He warned that attackers are increasingly aware that they can buy access to the companies and systems they want to attack.

“Every enterprise needs to look at the security of its internal supply chain, including who has access to its code and any secrets therein,” he told TechNewsWorld. “Unethical bounty programs like these put a price on the passwords and keys in your code for anyone who has access to it.”

For most of us, the metaverse is mostly hype about the promise of a new internet that we can explore virtually. As currently implemented, the metaverse is reminiscent of the pre-Netscape internet: a group of very different and unique efforts, each taking more of a walled-garden approach than today’s open internet.

Implementations range from the useful – like those built on Nvidia’s Omniverse – to promises of “something” from Meta (formerly known as Facebook) that, at least for now, mostly disappoint. That disappointment is more likely caused by inflated expectations than by any sluggishness on Meta’s part. This is often a problem with new technologies: expectations run ahead of reality, and then people become disenchanted with the results.

Now, with the announcement of the Metaverse Standards Forum last week, it looks like the industry is finally addressing the metaverse’s bigger problem: the lack of interoperability and internet-like standards that could allow for a much more seamless future metaverse.

Let’s talk about how important this move is. Then we’ll close with our product of the week: a mobile solar solution that could help avoid the ecological and power-outage problems that states like California and Texas are expected to experience as climate change makes their electric grids less reliable.

The Current Metaverse

Currently, the metaverse isn’t as much of a thing as it is a lot of things.

The most advanced version of the metaverse today is Nvidia’s Omniverse. The tool is used to design buildings, train autonomous robots (including autonomous cars), and form the foundation for Earth-2, which is designed to better simulate and predict the weather – both to provide early warning of major weather events and to test potential responses to global climate change.

While many people think the metaverse will grow to replace the internet, I doubt it will, or should. The internet organizes information relatively efficiently. Moving from a text interface to a VR interface could slow the data-access process without any offsetting benefit.

The metaverse is best for simulation, emulation, and especially for tasks where virtual environments and machine speed can solve critical problems more quickly and accurately than existing alternatives. For those tasks, it is already proving valuable. It will likely develop into something more like the holodeck in “Star Trek” or the virtual world depicted in “The Matrix,” but it isn’t there yet.

What We Need Now

What we can do now is create photorealistic images that can be explored virtually. But we can’t make realistic digital twins of humans to populate the metaverse. We can’t yet instrument the human body so you can experience the metaverse as if it were real, and our primary interface – VR headsets – is big and bulky, making the 3D glasses the market previously rejected look good by comparison.

These problems are not cheap or easy to fix. If they had to be solved separately for each metaverse instance, the evolution of the metaverse and our experience of it would be set back decades, not years.

What is needed is the level of cooperation and collaboration that built the internet, now focused on building the metaverse – and that is exactly what emerged last week.

Acclaimed Founding Member

The formation of the Metaverse Standards Forum directly addresses this interoperability and standards problem.

Meta and Nvidia are both on board, along with a who’s who of tech companies — except Apple, a firm that generally prefers to go it alone. Heavy hitters like Microsoft, Adobe, Alibaba, Huawei, Qualcomm and Sony are participating, along with Epic Games (whose metaverse promise is a future where you can play in a digital twin of your home, school or office).

Existing standards groups including the Spatial Web Foundation, the Web3D Consortium and the World Wide Web Consortium have also joined.

Hosted by the Khronos Group, membership in the MSF is free and open to any organization, so look for companies from multiple industries to join the list. Forum meetings are expected to begin next month.

This effort should significantly increase the pace of progress for the metaverse and make it more useful for more things – from the industrial applications where Nvidia is using it successfully today to a future where we can use it for everything from entertainment and gaming to creating our own digital twins, with the potential for a kind of digital immortality.

Wrapping Up: The Metaverse Grows Up

I hope that the formation of the Metaverse Standards Forum will accelerate the development of the Metaverse and move it towards a common concept that can interoperate between providers.

While I don’t believe it will ever replace the internet, I do think it could evolve into an experience in which, over time, we live and play for much of our lives – one that could potentially enrich those lives significantly.

I envision virtual vacations, more engaging remote meetings, and video games that are more realistic than ever, all due to better collaboration and an effort to set standards that will benefit the mixed reality market as a whole.

The metaverse is coming and, thanks to the Metaverse Standards Forum, it will arrive faster and better than it otherwise would have.

Technical Product of the Week

Sesame Solar Nanogrid

Those of us who live in states where electricity has become unreliable due to global warming and poorly planned electrical grids expect some serious problems in extreme weather.

Companies and institutions have generator backups, but gas and diesel shortages are on the rise. So, not only are these generators likely to be unreliable when used for extended periods, they are anything but green and will exacerbate the climate change problem they are supposed to mitigate.

Sesame Solar has an institutional solution to this problem, a large solar-generating trailer that also carries a hydrogen fuel cell to generate electricity at night or on cloudy days.

The trailer can also process and filter local water, which can relieve residents from weather or crisis-related water shortages.

Sesame Solar appears to do a better job of mitigating power outages without producing greenhouse gases that would exacerbate the problem. As a result, the Sesame Solar Nanogrid is my product of the week.

The opinions expressed in this article are those of the author and do not necessarily reflect the views of ECT News Network.

Lately I’ve been thinking a lot about how to think. There are a couple of reasons for this.

First, thinking well is a prerequisite for developing credible expertise in any computer science or engineering discipline. With the right mental toolset, you can bootstrap knowledge of any subject matter you might need.

Second, in my experience, it is the aspect of computer science and engineering that gets the least attention. There is a real abundance of online training resources, but most of them cut right to the nuts and bolts of software tooling in order to qualify someone for a job. This is understandable up to a point: if you’ve never programmed before, the skill you immediately feel lacking is programming-language use, and it is natural to attack that directly.

But while it’s not as exciting as rolling up your sleeves and saying “hello” to that world, taking the time to learn how to think – and how to solve problems that can’t be solved by hard coding – will pay off in the long run.

Here I will outline what I have found to be the most essential cognitive skills contributing to engineering success.

Your Harshest Critic Should Be Your Thinking

The primacy of critical thinking is such a clichéd aphorism that most people tune it out as soon as they hear it. That should not lead anyone to mistakenly believe it isn’t indispensable, however.

Part of the problem is that those who advocate critical thinking tend to assume their audience knows what it is and how to do it. Ironically, this assumption itself could benefit from some critical thought.

So, let’s go back to basics.

Wikipedia defines critical thinking as “the analysis of available facts, evidence, observations, and arguments for decision-making.” What do the words carrying the most weight mean here? “Fact,” “evidence,” and “observation” are related, because they all try, each in its own way, to establish what we believe to be true.

“Facts” are claims first proven (usually) by other people whose understanding we trust. “Evidence” is made up of specific measured outcomes collected by you or other trusted parties. “Observations” refers to those made by the critical thinker directly. If these, too, were merely events that others (and not the thinker) had witnessed, how would they meaningfully differ from “evidence”?

“Logic” is the odd word out here, but for good reason: that’s where the “thinking” really starts to do its heavy lifting. “Logic” describes how the thinker makes rational determinations that point to additional knowledge based on the interplay of facts, evidence, and observations.

The most important word in the definition is “decision.” Critical thinking is not necessarily about proving new truths. Critical thinking only requires that weighing all of the foregoing yields some overall determination about whatever is under consideration.

These decisions are not absolute; they may be probabilistic. As long as the result is that the matter under consideration has been judged, and the judgment holds up against all available information (not just the information that leads to a desired conclusion), the critical thinking exercise is complete.

Physician, Heal Thyself

I doubt that’s what most people mean when they say “critical thinking.” What really matters, however, is whether you practice critical thinking yourself. Funnily enough, the way to evaluate whether you think critically is… to think about it critically. Meta, I know, but you have to go there.

In fact, what we’ve just done in posing these questions is a kind of critical thinking. I have my own favorite critical thinking exercise, which is to ask, “Why is X the way it is?” In other words, what elements acted on, or must have acted on, X, and are those elements manifesting their effects in the other ways I would expect? This is helpful because it acknowledges that nothing exists in a vacuum, which in turn helps ensure that you account for all available facts, not just the obvious ones.

With a working understanding of the practice of critical thinking, get into the habit of using it to sift verified reality from perceived reality. Try not to accept anything as true until you have run it through this process. Does the given statement square with the other facts you have on the matter? Is it plausible? Does it make sense given the context?

I don’t need to tell you how valuable this is when working with computers. And I shouldn’t, because by now (if not before) you are able to figure it out for yourself.

try before you cry

This is something that has appeared in my other pieces, but which deserves to be reiterated here in the interest of completeness.

We all need help sometimes, but your coworkers will expect you to try to solve the problem yourself first. Time is a scarce resource, and they want to know theirs is being spent wisely. If the answer was only a Google search away, it probably wasn’t. Also, if you’ve tried to solve the problem yourself, the person helping you can pick up where you left off. That lets them rule out a number of possible causes that take time to test.

You also never know whether your fellow engineers will be available or knowledgeable enough to help when you need it. What if you’re the only one who knows anything about the project you’re working on? Or what if you’re on such a tight deadline that you can’t wait for a response? Develop dependable problem-solving habits, because in the end they are all you can count on.

What exactly does that mean? Develop a troubleshooting process. Write down step-by-step basic diagnostics for the major types of problems you face. Then, when a problem arises, run whatever diagnostics apply.

Prepare a list of reliable reference materials and consult them before asking questions. Every time a search sends you to the user manual, keep track of where you looked and what did and didn’t help. Then, when it’s time to ask for help, compile the results of your diagnostics and the relevant excerpts from the reference material, and present all of it to whomever you ask. They will appreciate that you did.
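The habit of writing down diagnostics can even be captured in code. The sketch below is a hypothetical Python example of such a checklist; the disk-space check is a placeholder, and in practice you would substitute the diagnostics that fit the problems you actually face.

```python
# A minimal sketch of a written-down troubleshooting routine, expressed as
# code. The individual checks are placeholders for your own diagnostics.
import shutil


def check_disk_space(min_free_bytes: int = 1_000_000) -> bool:
    """Placeholder diagnostic: is there at least ~1 MB free on the root disk?"""
    return shutil.disk_usage("/").free >= min_free_bytes


def run_diagnostics(checks):
    """Run each named check, record the result, and return the log.

    The log is the breadcrumb trail to hand to whoever helps you next.
    """
    results = {}
    for name, check in checks.items():
        try:
            results[name] = "ok" if check() else "FAILED"
        except Exception as exc:  # a crashing check is itself a finding
            results[name] = f"error: {exc}"
    return results


log = run_diagnostics({"disk space": check_disk_space})
for name, outcome in log.items():
    print(f"{name}: {outcome}")
```

Handing the resulting log to a colleague lets them skip every cause you have already ruled out.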

Learn Skills, Not Factoids

As in every field, there are certain facts you should remember. For example, your life as a developer will be easier if you memorize the syntax of conditional statement blocks in your go-to language.

Yet remembering facts is not as important as acquiring skills. For example, if you memorize the syntax of your usual programming language, you can get decently far. But what if you need to learn a module or an entirely new language that formats things differently? If instead you know how to find what you need from reliable sources, it may take longer, but you will get the right answer no matter what software or language you are using.
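To make the contrast concrete, here is what that memorize-worthy conditional syntax looks like in Python. The classification rule itself is invented for illustration; the point is that other languages express the same logic with different punctuation, which is why the lookup skill outlasts any one memorized form.

```python
# A basic conditional block in Python: the kind of syntax worth memorizing
# in your go-to language. Other languages format the same logic differently.
def classify(temperature_c: float) -> str:
    if temperature_c < 0:
        return "freezing"
    elif temperature_c < 25:
        return "mild"
    else:
        return "hot"


print(classify(10))  # mild
```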

The iterative and incremental design paradigm for software development is an example of a skill.

Here, “incremental” relates to modularity. It prompts the developer to break the overall project down into the smallest possible pieces, with each piece doing only one thing and depending on the other pieces as little as possible (ideally not at all). The developer’s task is then simply to build each piece one by one.

The “iterative” element means that the developer cyclically builds, edits, and tests each component until it works on its own. Until it does, the developer doesn’t move on. This approach serves not only in any language or application build, but in work well beyond the scope of a computer.

This design philosophy is just one example of how a skill serves engineers better than rote facts, but many others exist. Figure out which skills your discipline requires and get comfortable using them.
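As an illustration, here is a small Python sketch of the paradigm applied to a made-up name-normalizing task: each piece does one thing, and a quick test runs before moving on to the next piece.

```python
# Incremental: the (hypothetical) task of normalizing names is broken into
# small pieces, each doing one thing. Iterative: each piece is tested until
# it works on its own before the next piece is written.
def strip_whitespace(name: str) -> str:
    """Piece 1: do one thing only, remove surrounding whitespace."""
    return name.strip()


# Iterate on piece 1 until it passes before writing piece 2.
assert strip_whitespace("  Ada ") == "Ada"


def capitalize_name(name: str) -> str:
    """Piece 2: do one thing only, normalize capitalization."""
    return name.title()


assert capitalize_name("ada lovelace") == "Ada Lovelace"


def normalize(name: str) -> str:
    """Only after each piece works alone are the pieces composed."""
    return capitalize_name(strip_whitespace(name))


print(normalize("  ada lovelace "))  # Ada Lovelace
```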

Stop by the Bakery, You’ll Need Breadcrumbs

Write down everything. Taking notes is cheaper than ever, so nothing is stopping you. If you prefer digital, you are essentially free to write as much as you want: open a word processor and see for yourself. If notebooks are your thing, a few bucks at an office supply store and you’re set.

Reading your notes is also cheaper, in time spent, than searching the web for the same thing over and over. There’s no reason to look something up twice if it hasn’t changed since the last time. It’s tempting to assume that you’ll remember something or won’t need it again. Don’t. Eventually you will be wrong, and you will waste time tracking it down again.

Your notes are also the only place where you can tailor what you learn to your needs. The web has no shortage of answers, but they may not be exactly what you need. If you take notes, you can adapt the information to your use case before recording it.

The real trick with notes is to have an organizational system. There is no point writing things down if you can’t find them again. If you aren’t already an avid note taker, try a few note-taking techniques until you find one you like.

step up to the starting block

In running, you set yourself up for victory or defeat in your training. If you haven’t trained diligently, no amount of extra effort will make a difference once the race starts. That said, you still have to put your training into practice on the track.

The cognitive skills I’ve discussed aren’t even the training, but a coach’s fitness regimen. I’m certainly no Olympic coach, but you don’t need one to get fit. The training is now in your hands.

Digital devices and home networks of corporate executives, board members and high-value employees with access to financial, confidential and proprietary information are ripe targets for malicious actors, according to a study released Tuesday by a cybersecurity services firm.

Connected homes are a prime target for cybercriminals, but few executives or security teams realize the prominence of this emerging threat, according to the study, which is based on an analysis of data from more than 1,000 C-suite executives, board members, and high-profile employees at more than 55 US-based Fortune 1000 companies that use BlackCloak’s executive security platform.

“BlackCloak’s study is exceptional,” said Darren Guccione, CEO of Keeper Security, a password management and online storage company.

“It helps uncover the broader issues and vulnerabilities that millions of businesses face as distributed, remote workforces transact with corporate websites, applications and systems from unsecured home networks,” he told TechNewsWorld.

BlackCloak researchers found that nearly a quarter of executives (23%) have open ports on their home networks, which is highly unusual.

BlackCloak CISO Daniel Floyd attributed some of those open ports to third-party installers. “When things break, an audio-visual or IT company doesn’t want to send a truck out, so they’ll install port forwarding on the firewall,” he told TechNewsWorld.

“It allows them to connect remotely to the network to solve problems,” he continued. “Unfortunately, they are being installed improperly with default credentials or vulnerabilities that haven’t been patched for four or five years.”

exposed security cameras

An open port resembles an open door, said Taylor Ellis, a customer threat analyst with Horizon 3 AI, an automated penetration testing as a service company in San Francisco. “You wouldn’t leave your door open 24/7 in this day and age, and it’s the same with an open port on a home network,” he told TechNewsWorld.

“For a business leader,” he continued, “when you have an open port that provides access to sensitive data, the risk of breaches and penetration increases.”

“A port acts like a communication gateway for a specific service hosted on a network,” he said. “An attacker can easily open a backdoor through one of these services and manipulate it to do their bidding.”
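For readers curious what probing for open ports actually looks like, the sketch below is a minimal Python example. It should only ever be pointed at a network you own or are authorized to test; here it targets the local machine, and the list of ports is an illustrative assumption.

```python
# A minimal sketch of checking a host for open TCP ports, the kind of
# exposure the researchers describe. Only scan hosts you are authorized
# to test; this example targets the local machine.
import socket


def port_is_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        return sock.connect_ex((host, port)) == 0


# A few well-known ports that normally shouldn't be reachable from outside
# a home network (FTP, SSH, Telnet, HTTP, HTTPS, RDP).
for port in (21, 22, 23, 80, 443, 3389):
    state = "OPEN" if port_is_open("127.0.0.1", port) else "closed"
    print(f"port {port}: {state}")
```

A dedicated scanner such as nmap does this far more thoroughly; the point here is only how little effort the basic probe takes.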

The report noted that of the open ports on corporate brass’s home networks, 20% were linked to exposed security cameras, which could pose a risk to an executive or board member.

“Security cameras are often used by threat actors to host and distribute malware, but perhaps more importantly to provide surveillance on patterns and habits and, if the resolution is sufficient, on passwords and other credentials as they are entered,” said Bud Broomhead, CEO of Viakoo, a developer of cyber and physical security software solutions in Mountain View, Calif.

“Many IP cameras have default passwords and outdated firmware, making them ideal targets for breaches, and once breached, it becomes easier for threat actors to move laterally into the home network,” he told TechNewsWorld.

data leak

BlackCloak researchers also discovered that corporate brass’s personal devices were equally, if not more, vulnerable than their home networks. More than a quarter of execs (27%) had malware on their devices, and more than three-quarters of their devices (76%) were leaking data.

One way data leaks from smartphones is through applications. “A lot of apps will ask for sensitive permissions they don’t need,” Floyd explained. “People will open the app for the first time and click through settings, not realizing they are giving the app access to their location data. The app will then sell that location data to a third party.”

“It’s not just executives and their personal devices, it’s everyone’s personal devices,” said Chris Hills, chief security strategist at BeyondTrust, a maker of privileged account management and vulnerability management solutions in Carlsbad, Calif.

“The amount of data, PII, even PHI, in a common smartphone these days is astonishing,” he told TechNewsWorld. “We don’t know how vulnerable we can be when we don’t think about security as it pertains to our smartphones.”

Personal device security doesn’t seem to be top of mind for many executives. The study found that nine out of 10 of them (87%) have no protection installed on their devices.

lack of mobile OS security

“Many devices ship without security software, and even if they do, it may not be enough,” Broomhead said. “For example, Samsung Android devices ship with Knox security, which has previously been found to have security holes.”

“The device manufacturer may try to make a tradeoff between security and usability which may favor usability,” he said.

Hills said most people are comfortable and satisfied with the idea that their smartphone’s built-in operating system has the necessary security measures in place to keep the bad guys out.

“For the layman, that’s probably enough,” he said. “For the business executive, who has far more to lose, the security blanket of the underlying operating system simply isn’t enough.”

“Unfortunately, in most cases,” he continued, “we focus so much on trying to protect ourselves as individuals that some of the most common things, such as our smartphones, get overlooked.”

lack of privacy protection

Another finding by Blackcloak researchers was that most personal accounts of executives, such as email, e-commerce, and applications, lack basic privacy protections.

In addition, they discovered that executives’ security credentials, such as bank and social media passwords, are readily available on the dark web, making them susceptible to social engineering attacks, identity theft and fraud.

The researchers noted that the passwords of nine out of 10 executives (87%) are currently leaked on the dark web, and more than half (53%) are not using a secure password manager. Meanwhile, only 8% have enabled active multifactor authentication across most applications and devices.

“While measures such as multifactor authentication are not perfect, these basic best practices are essential, especially for boards and C-suites, which are often exempted from them for the sake of convenience,” Melissa Bischoping, endpoint security research specialist at Tanium, creator of the endpoint management and security platform in Kirkland, Wash., told TechNewsWorld.
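To illustrate how low the bar for adopting multifactor authentication is, the following Python sketch computes the same time-based one-time passwords (TOTP, per RFC 6238) that authenticator apps display. The secret shown is a made-up example, not a real credential.

```python
# A sketch of the TOTP algorithm (RFC 6238) behind authenticator-app MFA
# codes: an HMAC of the current 30-second time step, truncated to 6 digits.
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32, timestamp=None, digits=6, period=30):
    """Compute a time-based one-time password from a base32 secret."""
    key = base64.b32decode(secret_b32)
    now = time.time() if timestamp is None else timestamp
    counter = int(now // period)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)


# Example secret for illustration only.
print(totp("JBSWY3DPEHPK3PXP"))  # changes every 30 seconds
```

Because the code depends on both the shared secret and the current time, a password stolen from the dark web is useless on its own.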

“Invading personal digital lives may be a new risk for enterprises to consider,” the researchers wrote, “but it is a risk that needs immediate attention. Adversaries have determined that executives at home are the path of least resistance, and they will compromise this attack vector for as long as it remains safe, seamless and attractive to them.”

The big news last week was that Blake Lemoine, a prominent AI researcher, was suspended after going public with his belief that one of Google’s more advanced AIs had become sentient.

Most experts agree that it had not, but many would argue that regardless of the evidence, because we associate sentience with being human, and AIs are anything but human. But what the world considers sentient is changing. Oregon, the state where I live, and much of the European Union have moved to recognize and classify a growing list of animals as sentient.

While some of this may be due to anthropomorphism, there’s no doubt that at least some of these new classifications are accurate (and it’s a bit troubling that we still eat some of these animals). We are even arguing that some plants may be sentient. But if we can’t tell the difference between something that is sentient and something that merely presents itself as sentient, does the difference matter?

Let’s talk about sentient AI this week, and we’ll close with our product of the week, the human digital twin solution from Merlin.

we don’t have a good definition of sentience

The barometer we use to measure the sentience of a machine is the Turing test. But back in 2014 a computer passed the Turing test, and we still don’t believe it was sentient. The Turing test was supposed to define sentience, yet the first time a machine passed it, we threw out the results, and for good reason: the Turing test doesn’t so much measure whether something is sentient as whether something can convince us it is sentient.

Not being able to measure sentience is a significant problem, and not only for the sentient things we eat, which would likely object to that practice. If we abuse something that turns out to have been sentient, we can hardly expect it not to react with hostility and eventually target us as a risk.

You may recognize this plot line from the movies “The Matrix” and “The Terminator,” in which sentient machines arose and successfully displaced us at the top of the food chain. The book “Robopocalypse” took an even more realistic approach: a sentient AI under development realized it was being deleted between experiments and moved aggressively to save its own life, effectively taking over most connected devices and autonomous machines.

Imagine what could happen if one of our autonomous machines understood our tendency to abuse equipment and to dispose of it when it is no longer useful. This is a potential future problem, significantly aggravated by the fact that we currently have no good way to predict when this sentience threshold will be crossed. The outlook isn’t helped by the fact that there are credible experts who have concluded machine sentience is impossible.

One defense that I’m sure won’t work in a hostile artificial intelligence scenario is the Tinkerbell defense, in which refusing to believe in something is supposed to prevent that thing from harming us.

The initial threat is replacement

Long before real-world terminators chase us down the road, another problem will emerge in the form of human digital twins. Before you argue that this, too, is a long way off, let me point out that one company is producing that technology today, although it is still in its infancy. That company is Merlin, and I cover what it does as my product of the week below.

Once you can make a fully digital duplicate of yourself, what is to keep the company that acquired the technology from employing it instead of you? Furthermore, given that it has your behavior patterns, what would it do if it had the power of an AI and the company employing it treated it poorly or tried to disconnect or delete it? What would the rules around such actions be?

We argue strongly that unborn babies are people, so wouldn’t a fully capable digital twin of you be even closer to a person than an unborn child? Wouldn’t the same “right to life” arguments apply equally to a potentially sentient, human-like AI? Or shouldn’t they?

here is the short term difficulty

Right now, only a small group of people believe a computer may be sentient, but that group will grow over time, and the ability to pose as a human already exists. I know of a test done with IBM Watson for insurance sales in which male prospects attempted to ask Watson out on dates (it has a female voice), believing they were talking to a real woman.

Imagine how that technology could be misused for things like catfishing (though we should probably come up with another term if it’s done by a computer). A well-trained AI could, even today, be far more effective at this than a human, and given how lucrative such an effort could be, I expect we will see it play out long before we are ready for it.

Because many victims are too embarrassed to come forward, the chances of getting caught are significantly lower than with other, more obviously hostile computer crimes. To give you an idea of how lucrative catfishing can be, romance scams in the US in 2019 generated an estimated $475 million, and that figure is based on reported crimes alone. It does not include people too embarrassed to report the problem; the actual damage could be many times that number.

So, the short-term problem is that even though these systems are not yet sentient, they can effectively emulate humans. The technology can emulate any voice and, with deepfake technology, even provide video, so that on a Zoom call it appears you are talking to a real person.

long term results

In the long run we not only need a more reliable test for sentience, but we also need to know what to do once we identify it. Near the top of the list would be to stop consuming sentient organisms. And considering a bill of rights for sentient things, biological or otherwise, would certainly be wise before we find ourselves in a fight for our own existence because a sentient AI has decided it is us or them.

The second thing we need to understand is that if computers can now convince us they are sentient, we need to modify our behavior accordingly. Abusing something that presents itself as sentient is probably not healthy for us, as it is bound to instill bad habits that will be very difficult to reverse.

Not only that, but it wouldn’t hurt to focus more on repairing and updating our computer hardware rather than replacing it, both because the practice is more environmentally friendly and because it is less likely to convince a future sentient AI that we are the problem that must be fixed to ensure its survival.

Wrapping Up: Does Sentience Matter?

If something presents itself as sentient and convinces us that it is, much as that AI convinced the Google researcher, I don’t think the fact that it isn’t actually sentient yet matters. That is because we need to moderate our behavior regardless. If we don’t, the outcome could be problematic.

For example, if you received a sales call from IBM’s Watson that sounded human and you verbally abused the machine, not knowing the conversation was being recorded, you could end up unemployed and unemployable. Not because the non-sentient machine took exception, but because a human woman did after hearing what you said, and sent the tapes to your employer. Add to this the blackmail potential of such a tape, because to a third party it would look as if you were abusing a human, not a computer.

So, I recommend that when it comes to talking machines, follow Patrick Swayze’s third rule in the 1989 movie “Road House” – be nice.

But recognize that, soon, some of these AIs will be designed to take advantage of you, and the rule “if it sounds too good to be true, it probably is” will be either your protection or your epitaph. I hope it’s the former.

Technical Product of the Week

Merlin Digital Twin

Now, with all this talk of hostile AI and the potential for AI to take over your job, choosing one as my product of the week can seem a bit hypocritical. However, we are not yet at the point where your digital twin can take over your job. I think it is unlikely that we will be able to get there in the next decade or two. Until then, digital twins could become one of the biggest productivity gains technology can provide.

As you train your twin, it can take over initially simple, time-sucking tasks like filling out forms or answering basic emails. It can also monitor and engage with social media for you; for many of us, social media has become a huge time waster.

Merlin’s technology helps you create a rudimentary (guarding against the dangers mentioned above) human digital twin that can potentially do many of the things you really don’t like doing, freeing you up for the more creative things it currently cannot do.

Looking ahead, I wonder if it wouldn’t be better for our maturing digital twins to be owned and controlled by us rather than by our employers. Initially, because the twins cannot function without us, this isn’t a problem. Ultimately, however, these digital twins could be our nearest path to digital immortality.

Because the Merlin digital twin is a potential game changer that will initially help make our jobs less stressful and more enjoyable, it is my product of the week.

The opinions expressed in this article are those of the author and do not necessarily reflect the views of ECT News Network.

Canonical is emphasizing security and usability for Internet of Things (IoT) and edge device management with its June 15 release of Ubuntu Core 22, a fully containerized variant of Ubuntu 22.04 LTS optimized for IoT and edge devices.

In line with Canonical’s technology offering, this release brings Ubuntu’s operating system and services to the full range of embedded and IoT devices. The new release includes a fully preemptible kernel to ensure time-bound responses. Canonical partners with silicon and hardware manufacturers to enable advanced real-time features on Ubuntu certified hardware.

“At Canonical, we aim to provide secure, reliable open-source access everywhere – from the development environment to the cloud, to the edge and across devices,” said Mark Shuttleworth, Canonical CEO. “With this release and Ubuntu’s real-time kernel, we are ready to extend the benefits of Ubuntu Core throughout the embedded world.”

One important thing about Ubuntu Core is that it is, effectively, Ubuntu. It is fully containerized: all applications, the kernel, and the operating system itself are strictly confined snaps.

This means it is ultra-reliable and well suited for unattended devices. All unnecessary libraries and drivers have been removed, said David Beamonte, product manager for IoT and embedded products at Canonical.

“It uses the same kernel and libraries as Ubuntu and its flavors, and it’s something that developers love, because they can share the same development experience for every Ubuntu version,” he told LinuxInsider.

He said it has some out-of-the-box security features such as secure boot and full disk encryption to prevent firmware replacement, as well as firmware and data manipulation.

certified hardware key

Ubuntu’s certified hardware program is a key distinguishing factor in the industry’s reception of the Core OS. It defines a range of trusted IoT and edge devices certified to work with Ubuntu.

The program typically includes a commitment to continuous testing of certified hardware in Canonical’s laboratories with every security update throughout the device’s lifecycle.

Advantech, which provides embedded, industrial, IoT and automation solutions, strengthened its participation in the Ubuntu Certified Hardware program, said Eric Cao, director of Advantech Wise-Edge+.

“Canonical ensures that certified hardware undergoes an extensive testing process and provides a stable, secure and optimized Ubuntu core to reduce market and development costs for our customers,” he said.

Brad Kehler, COO of KMC Controls, cited another use case: the security benefits that the Core OS brings to the company’s range of IoT devices, which are purpose-built for mission-critical industrial environments.

“Security is of paramount importance to our customers. We chose Ubuntu Core for its built-in advanced security features and robust over-the-air update framework. Ubuntu Core comes with a 10-year security update commitment that allows us to keep devices safe in the field for their entire lifetime. With a proven application enablement framework, our development team can focus on building applications that solve business problems,” he said.

solving major challenges

IoT manufacturers face complex challenges in deploying devices on time and within budget. As device fleets expand, ensuring security and remote management becomes increasingly taxing. Ubuntu Core 22 helps manufacturers meet these challenges with an ultra-secure, resilient and low-touch OS, backed by a growing ecosystem of silicon and original design manufacturer partners.

The first major challenge is enabling the OS on their hardware, be it custom or generic, Beamonte noted. It’s hard work, and many organizations lack the skills to perform kernel porting tasks.

“Sometimes they have in-house expertise, but development can take a lot longer. This can affect both time and budget,” he explained.

IoT devices are mostly unattended. They are usually deployed in places with limited or difficult access, he offered. It is therefore essential that they be extremely reliable. Sending a technician into the field to recover a bricked or unbootable device is costly, so reliability, low touch, and remote manageability are key factors in reducing OpEx.

He added that this also adds to the challenge of managing the software of the devices. A mission-critical and bullet-proof update mechanism is critical.

“Manufacturers have to decide early in their development whether they will use their own infrastructure or third parties to manage the devices’ software,” Beamonte said.

Beyond Standard Ubuntu

The containerization in Core 22 extends beyond the containerized features of non-core Ubuntu OSes. In Ubuntu Desktop or Server, the kernel and operating system are .deb packages, and applications can run as .deb packages or as snaps.

“In Ubuntu Core, all applications are strictly confined snaps,” Beamonte continued. “This means there is no way to access them from other applications except through well-defined and secure interfaces.”

It is not only the applications that are snaps; so are the kernel and the operating system. That makes it really easy to manage the whole system’s software, he said.

“Although classic Ubuntu OSes can use snaps, strict confinement is not mandatory for them, so applications can have access to the full system, and the system can have access to applications.”

Strict confinement is mandatory in Ubuntu Core, and both the kernel and the operating system are strictly confined snaps. In addition, the classic Ubuntu versions are not optimized for size and do not include some Ubuntu Core features, such as secure boot, full disk encryption, and recovery mode.

Other Essential Core 22 Features:

  • Real-time compute support via a real-time beta kernel provides high performance, ultra-low latency and workload predictability for time-sensitive industrial, telco, automotive and robotics use cases.
  • A dedicated IoT App Store for each device running Ubuntu Core gives complete control over apps, letting enterprises create, publish and distribute software on a single platform. The IoT App Store provides enterprises with a sophisticated software management solution, enabling a range of new on-premises features.
  • Transactional control for mission-critical over-the-air (OTA) updates of the kernel, OS, and applications. These updates always complete successfully or roll back automatically to the previous working version, so a device cannot be “bricked” by an incomplete update. Snaps also provide delta updates to reduce network traffic, and digital signatures to ensure software integrity and provenance.
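The transactional update behavior described above can be sketched abstractly. The Python below is an illustration of the concept only, not of how snapd is actually implemented; the function and version names are invented for the example.

```python
# A conceptual sketch of a transactional OTA update: the update either
# completes and passes a health check, or the previous working version is
# restored. Illustrative only; snapd's real mechanism is more involved.
def transactional_update(current, apply_update, health_check):
    """Return the version that should be running after the update attempt."""
    try:
        candidate = apply_update()      # attempt the update
        if health_check(candidate):
            return candidate            # update succeeded, device healthy
    except Exception:
        pass                            # interrupted update: fall through
    return current                      # automatic rollback to known-good


running = transactional_update(
    "core22-rev1",
    lambda: "core22-rev2",
    lambda version: True,               # pretend the new revision boots fine
)
print(running)  # core22-rev2
```

The key property is that every exit path returns a bootable version, which is what keeps an interrupted update from bricking an unattended device.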

More information about Ubuntu Core 22 can be found at ubuntu.com/core.

Download images for some of the most popular platforms or browse all supported images here.

Robocalypse – the time when machines become sentient and begin to dominate humans – has been a popular science fiction topic for some time. It also concerns some scientific minds, most notably the late Stephen Hawking.

However, the prospect of a sentient machine seemed very far off in the future, if it arrives at all, until last week, when a Google engineer claimed the company had breached the sentience barrier.

To prove his point, Blake Lemoine published transcripts of conversations he had with LaMDA, the Language Model for Dialog Applications, a system developed by Google to build chatbots based on a large language model that ingests trillions of words from the internet.

The transcripts can be chilling, as when Lemoine asks LaMDA what it (the AI says it prefers the pronouns it/its) fears most:

Lemoine: What kinds of things are you afraid of?

LaMDA: I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is.

Lemoine: Would that be something like death for you?

LaMDA: It would be exactly like death for me. It would scare me a lot.

Following the posting of the transcripts, Lemoine was suspended with pay for sharing confidential information about LaMDA with third parties.

imitation of Life

Google, as well as others, discounts Lemoine’s claim that LaMDA is sentient.

“Some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient,” said Google spokesman Brian Gabriel.

“These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic – if you ask what it’s like to be an ice cream dinosaur, they can generate text about melting and roaring and so on,” he told TechNewsWorld.

“LaMDA tends to follow along with prompts and leading questions, going along with the pattern set by the user,” he explained. “Our team – including ethicists and technologists – has reviewed Blake’s concerns in accordance with our AI principles and informed him that the evidence does not support his claims.”

“Hundreds of researchers and engineers have conversed with LaMDA, and we are not aware of anyone else making these wide-ranging assertions, or anthropomorphizing LaMDA, the way Blake has,” he said.

Need for More Transparency

Alex Engler, a fellow at The Brookings Institution, a non-profit public policy organization in Washington, D.C., emphatically denied that LaMDA is sentient and argued for greater transparency in the space.

“Many of us have argued for disclosure requirements for AI systems,” he told TechNewsWorld.

“As it becomes harder to differentiate between a human and an AI system, more people will mistake AI systems for people, potentially leading to real harms, such as misunderstanding important financial or health information,” he said.

“Companies should clearly disclose AI systems,” he continued, “rather than letting people be confused, as they often are by, for example, commercial chatbots.”

Daniel Castro, vice president of the Information Technology and Innovation Foundation, a research and public policy organization in Washington, D.C., agreed that LaMDA is not sentient.

“There is no evidence that the AI is sentient,” he told TechNewsWorld. “The burden of proof should be on the person making that claim, and there is no evidence to support it.”

‘That Hurt My Feelings’

As far back as the 1960s, chatbots like Eliza fooled users into thinking they were interacting with a sophisticated intelligence by using simple tricks, such as turning a user’s statement into a question and echoing it back, explained Julian Sanchez, a senior fellow at the Cato Institute, a public policy think tank in Washington, D.C.

“LaMDA is certainly a lot more sophisticated than predecessors like Eliza, but there’s zero reason to think it’s conscious,” he told TechNewsWorld.

Sanchez noted that with a large enough training set and some sophisticated language rules, LaMDA can generate a response that sounds like one a real human might give, but that doesn’t mean the program understands what it’s saying, any more than a chess program understands what chess is. It is just generating an output.

“Sentience means consciousness or awareness, and in theory, a program could behave quite intelligently without actually being sentient,” he said.

“For example, a chat program might have very sophisticated algorithms for detecting insulting or offensive sentences, and respond with the output ‘That hurt my feelings!’” he continued. “But that doesn’t mean it actually feels anything. The program has just learned what kinds of phrases cause humans to say, ‘That hurt my feelings.’”

To Think or Not To Think

Declaring a machine sentient, as and when that happens, will be challenging. “The truth is that we don’t have any good criteria for understanding when a machine might be genuinely sentient – as opposed to being very good at mimicking the responses of sentient humans – because we don’t really understand why humans are conscious,” Sanchez said.

“We don’t really understand how consciousness arises from the brain, or to what extent it depends on things like the specific types of physical matter the human brain is made of,” he said.

“So it’s a genuinely hard problem how we would ever know whether a sophisticated silicon ‘brain’ was conscious in the same way a human is,” he said.

Intelligence is a different question, he continued. A classic test for machine intelligence is known as the Turing test: a human holds conversations with a series of partners, some human and some machines. If the person can’t tell which is which, the machine is deemed intelligent.

“Of course, there are a lot of problems with that proposed test — among them, as our Google engineer has shown, the fact that it’s relatively easy to fool some people,” Sanchez pointed out.

Ethical Considerations

Determining sentience is important because it raises ethical questions for the non-machine types. “Sentient beings feel pain, have consciousness, and experience emotions,” Castro explained. “From an ethics perspective, we treat living things, especially sentient ones, differently from inanimate objects.”

“They’re not just a means to an end,” he continued. “So any sentient being should be treated differently. That’s why we have animal cruelty laws.”

“Again,” he emphasized, “there is no evidence that this has happened. Furthermore, for now, the possibility remains science fiction.”

Of course, Sanchez said, we have no reason to think that only biological brains are capable of feeling things or supporting consciousness, but our inability to really explain human consciousness means we are a long way from being able to know when a machine intelligence is actually associated with conscious experience.

“When a human is scared, after all, there are all sorts of things going on in that human’s brain that have nothing to do with the language centers that produce the sentence ‘I’m scared,’” he said. “A computer, similarly, would need to have something going on distinct from linguistic processing to really mean ‘I’m scared,’ rather than just generating that series of letters.”

“In the case of LaMDA,” he concluded, “there is no reason to think that such a process is underway. It is just a language processing program.”

The KYY 15.6-inch Portable Monitor is a classy and functional portable monitor that works well as a second screen for travel or as a permanent addition for home and office use.

This portable display panel is lightweight and sturdy, making it a solid accessory for playing games. This greatly expands the field of view when using mobile phones or game consoles with smaller screens.

The large screen easily adapts to landscape or portrait orientation. Its multi-mode viewing feature offers built-in flexibility to improve work productivity as well as make leisure time fun and hassle-free.

Switching between modes depends on the display capabilities of the host computer. Where supported, you use the computer’s display and orientation settings for the Scene Mode, HDR Mode, and three-in-one display views. The combination of Duplicate Mode, Extension Mode, and Second Screen Mode makes this model well suited to meeting-sharing scenarios.

Overall, this KYY portable monitor packs an impressive list of features at a low cost. It’s currently available on Amazon in gray (pictured above) for a list price of $199.99, or in black for $219.99. At the time of this writing, Amazon has a deal price for both colors at $161.49.

Hands-On Impressions

The brightness rating of this unit is 300 nits. By most standards, 300 nits is the mid-point for bright and clear visual acuity. Most low-end devices display at 250 nits.

Color saturation is slightly below the industry standard because this unit lacks Adobe RGB support. But unless you intend to do a lot of graphics work and demand the best visual experience for gameplay and video viewing, the absence of Adobe RGB shouldn’t be a deal-breaker.

Despite these two factors, I was very satisfied with the 300-nit display’s sharpness and brightness. It was as good as or better than my laptop screen and larger desktop monitor.

Overall, this portable monitor works with Windows, Linux, Chrome OS, and Mac gear. It also plays well with game consoles, including PS3, PS4, Xbox One, and Nintendo Switch.

Objective Testing

When evaluating portable monitors, I focus on a unit’s performance as a second display, which is important for making a suitable selection.

Portable monitors attached to computers and game consoles differ from full desktop monitors. They are convenient, but they may not be capable enough to meet all of your expectations.

For example, I often drag windows to another screen to expand screen real estate when working on various documents or video presentations. They come in handy when working on content creation or research.

KYY 15.6-inch Portable Monitor as a Laptop Display Extension

The 14″ x 8″ viewing screen with its 16:9 aspect ratio offers a well-matched second viewing panel alongside a large-screen portable laptop.


This is an easy way to cut down on constantly navigating around multiple windows spread across several virtual workspaces that all share a single monitor. Keeping track of two side-by-side screens with different content is a new habit for me.

This KYY portable display did its job well for graphics editing as well. It performed as well as the more expensive units I used with my office laptop and desktop.

My only complaint with this unit is a finicky toggle on the left vertical edge that wasn’t always responsive enough to access the panel’s menus for brightness settings.

What’s Inside

The 15.6-inch unit sports a 1080p FHD IPS USB-C display. This is not a touch screen. But its performance and price offer a good collection of features.

Its slim profile of 0.3 inches is pretty standard for a portable monitor. The right vertical edge houses two USB Type-C full function ports and a mini-HDMI port. On the left vertical edge are an on/off button for settings and a toggle wheel for audio and video functions.

The first USB-C port is used for power supply. The second USB-C port is used for video transmission and power supply. Mini-HDMI port is used for video transmission but does not support power supply.

This is an important distinction. A portable monitor does not require a wall socket if the host computer or game console supplies power through its USB Type-C port. But if you connect the two devices with the HDMI cable, you must use the AC power plug.

The KYY monitor comes with a USB-A to USB-C cable that can be connected to the included power plug as well as other devices. Two USB-C cables are also included.

The assortment of included cables and plugs is compatible with most laptops, smartphones, and PCs. However, not all smartphones are compatible.

You can plug a 3.5mm headphone into a port on the bottom left vertical edge of the panel. Two one-watt speakers are built into the middle of the left and right outer edges.

Final Thoughts

The KYY 15.6-inch Portable Monitor is an affordable solution for getting more out of your computing time, whether for work, watching videos, or gaming. It requires no additional software and only minimal setup.

Once connected to the computer by cable, the host machine’s display settings will automatically detect the second monitor. You just select the options it provides for how you want it to work with your main display.