May 2022


Titan Linux is not an operating system that casual Linux users – especially new adopters – should install on their primary or only computer. But seasoned Linux distribution hoppers in search of a pleasant new Linux experience shouldn’t pass up the new offering.

Titan is a new distro built on the Debian stable branch. The developers first announced its arrival on April 24. This is a very early beta release, so it’s mostly bare bones. Nevertheless, it is surprisingly stable for this stage of its development.

I looked at version 1.2 and found little to fault in its performance. The new distro’s two-person developer team has a growing community of testers for such a new project – around 60 at last count.

Usually, such small start-up teams cannot keep up with ongoing development, and their Linux distros fall by the wayside. But I am impressed by what this team has achieved so far.

Project leader Matthew Moore readily admits that the success or failure of the new distro will depend on user acceptance and a supportive community. One of the biggest adoption challenges facing Titan Linux is that with no ads or reviews (so far), it’s difficult to attract potential users willing to take a chance on it.

Progress and updates come almost daily. So I would expect Titan to mature more quickly than fledgling releases usually do.

This distro is a fully functional yet minimal KDE Plasma desktop experience with an emphasis on usability and performance. It already has a wide range of hardware support out of the box.

Titan Linux takes a unique approach to the Debian experience. It eliminates the dependency on certain meta-packages to make the system more stable overall.

something old is turning into something new

KDE is a comprehensive desktop environment that offers users a plethora of customization options. It is also a Linux staple that is popular and reliable. However, KDE may put off new users due to its complexity and quirks.

I’ve used KDE Plasma with several distros over the years. I first tried it when the old KDE desktop re-emerged as the revitalized KDE Plasma upgrade. Some of its user interface (UI) issues got in my way as a daily driver.

If Titan moves beyond beta releases, Titan Linux with KDE might make me a happy user again. It all comes down to usability.

work in progress

So far, the developers have trimmed the fat from KDE Plasma to make it less complicated, without the endless customization options. That’s the point of this distro.

Besides being simpler and lighter, in the long run Titan could attract a larger user base with aging, less powerful computers. Keeping KDE as streamlined as possible while offering full hardware support from the Debian catalog are welcome performance goals.

Titan Linux offers something a little slimmer than standard Debian. But according to Moore, it’s more useful than a standard Debian net installation.

Customization is not a bad thing. Linux thrives on having the freedom to customize, tweak, and create a desktop environment suited to individual user preferences.

Part of the simplification is an innovative Titan Toolbox – a work in progress but very promising – by head developer Cobalt Rogue. This set of system management tools will let users maintain the OS with a single click. The toolbox will include a range of software apps tailored to Titan’s distinctive design, rather than one-size-fits-all Debian Linux components.

sharing insider ideas

If you want to find out how the sausage is made, check out the developer’s website for links to both Moore’s and Cobalt Rogue’s YouTube videos on building Titan Linux. Both provide live-streamed discussions of their development efforts.

It is instructive to watch conversations that focus on the team’s goals. The project lead doesn’t want Titan Linux to be just another remix. Moore plans to grow the new distribution into a unique offering with meaningful features.

In a recent video, Moore explained why he decided to build Titan Linux on Debian instead of Arch, which he had used previously: Debian’s longevity between stable releases is more conducive to rapid beta releases.

Debian has long release cycles – in the neighborhood of two years – so Titan’s development doesn’t break because base components change frequently. Arch-based distros, with their rolling releases, are more erratic and often break systems.

Leaner KDE Deployed

KDE is the moniker for the K Desktop Environment, introduced in 1996. It refers both to the organization sponsoring the desktop’s development and to the family of software that runs on the K desktop as well as other desktops.

When the KDE community released a major upgrade from KDE 4, the developers dubbed the new KDE 5 desktop “Plasma.” The name reflected the radical redesign and functionality changes as a kind of KDE rebranding.

Various Linux distros are built around the KDE project. For example, Kubuntu Linux is a version of the Ubuntu family of OSes that uses the KDE desktop. Other popular distros running the KDE desktop environment include KaOS, Manjaro KDE, Fedora KDE Spin, MX Linux KDE, and Garuda Linux.

What makes this brand-new Titan beta OS so remarkable to me is its potential. It can make the K Desktop more productive with streamlined features and better usability.

However, offering a stripped-down version of the KDE desktop isn’t a unique idea in itself. Many other Linux developers have tried to turn KDE into a better working desktop. Some even gave it a new name.

Making a Better K Desktop, Again

Among the hundreds of Linux distributions I’ve reviewed over the years, the improvement efforts differ little. Against literally hundreds of similar-looking Linux distros, rebuilding KDE is rarely productive.

Few desktop environments – and Linux is both blessed and cursed with many – can be inviting enough to meet the needs of every computing scenario. KDE attempts to do just that.

Consider these examples:

  • In late 2019 Feren OS switched from a Cinnamon desktop and a Linux Mint base to a KDE Plasma and Ubuntu base.
  • The KDE Neon distro – not called Plasma – is something unique. It ships KDE components that have not yet been absorbed by other KDE-based distros. It is based on Ubuntu (which itself is based on Debian Linux).
  • The KaOS Linux distro provides a UI-refreshed, KDE-based computing platform. It delivers a better KDE experience without bloated software and cumbersome usability.
  • The Vector Linux family is a small, fast, and lightweight Slackware-based distribution that ships a customized version of KDE to be more user-friendly than other Slackware-style distros.

A glimpse of Titan’s potential

The early beta releases of the new Titan distro are like a partially built framework. The section headings and their supporting elements are enough to give a solid reading of the big picture.

The main parts are in place and working, but many vacancies remain to be filled. The OS works well with what it has. It will work even better as more innovative parts are fitted in.

This view of the Titan Linux desktop shows two key KDE elements: access to virtual desktops via the lower panel, and the unique Activities layout, accessed via a pop-out vertical left column, which provides another kind of virtual computing space.

The widget pop-up panel displays screen and panel apps that add a variety of services and features to the desktop layout.

Pictured at the top left is the Terminal window with its command-line interface (CLI). On the right is the Software Store window, which provides the ability to add or remove software from a complete list of Debian Linux packages, even in this early beta.

Shown here is the simplified system settings panel in Titan Linux.

ground level

Beta versions of Titan Linux are being released at a rapid pace. This development schedule heats up anticipation for the first stable release.

The KDE Plasma desktop as found in most current Linux distros is not lightweight. Beta version 1.2 consumes just 450MB of RAM, making this anticipated new distro much lighter. This means two things: more aging computers running Titan OS may get a revival, and newer computers may outperform more standard KDE integrations.

The Live Session ISO is upgraded several times per week as developers push the envelope to release the first stable version and beyond. The live session environment lets you try out Titan Linux beta releases without making any changes to your current OS or hard drive.

The beta version I tested is already performing surprisingly well. More features and UI changes appear with each new ISO download.

Check it out for yourself on the Titan Linux website.

suggest a review

Is there a Linux software application or distro that you would like to recommend for review? Something you love or want to know?

Email me your thoughts and I’ll consider them for future columns.

And use the Reader Comments feature below to provide your input!

Government organizations and educational institutions, in particular, are increasingly in the crosshairs of hackers as serious web vulnerabilities continue to rise.

Remote code execution (RCE), cross-site scripting (XSS), and SQL injection (SQLi) are all top software offenders. All three keep rising or hovering around the same alarming numbers year after year.
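The report names these attack classes without showing how they work. As a minimal, hypothetical Python sketch (not from the report; the `render_comment` helper is invented for illustration), here is how output encoding blunts an XSS payload:

```python
import html

def render_comment(user_input: str) -> str:
    # Escape user-supplied text before embedding it in HTML so that
    # injected markup is rendered as inert text, not executed as script.
    return "<p>" + html.escape(user_input) + "</p>"

payload = '<script>alert("xss")</script>'
print(render_comment(payload))
# → <p>&lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;</p>
```

The same principle – encode or parameterize untrusted input at every boundary – underlies the standard defenses against all three vulnerability classes.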

RCE, often the end goal of a malicious attacker, was the main cause of the IT scramble in the wake of the Log4Shell exploit. This vulnerability has seen a steady increase since 2018.

Enterprise security firm Invicti last month released its Spring 2022 AppSec Indicator report, which examined web vulnerabilities from 939 of its customers worldwide. The findings come from an analysis of the Invicti AppSec platform’s largest dataset – more than 23 billion customer application scans and 282,000 direct-impact vulnerabilities discovered.

Research from Invicti shows that one-third of both educational institutions and government organizations experienced at least one incident of SQLi in the past year. Data from 23.6 billion security checks underscores the need for a comprehensive application security approach, with governments and education organizations still at risk of SQL injection this year.

Data shows that many common and well-understood vulnerabilities in web applications are on the rise. It also shows that the current presence of these vulnerabilities presents a serious risk to organizations in every industry.

According to Mark Ralls, President and COO of Invicti, even well-known vulnerabilities are still prevalent in web applications. To ensure that security is part of the DNA of an organization’s culture, processes, and tooling, organizations must gain command of their security posture so that innovation and security work together.

“We’ve seen the most serious web vulnerabilities either hold stable or increase in frequency over the past four years,” Ralls told TechNewsWorld.

key takeaways

Ralls said the most surprising aspect of the research was the rapid rise in the incidence of SQL injection among government and education organizations.

Particularly troubling is SQLi, whose frequency has increased five percent over the past four years. This type of web vulnerability allows malicious actors to modify or change the queries an application sends to its database. That is of particular concern to public sector organizations, which often store highly sensitive personal data and information.
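To make that mechanism concrete, here is a small, self-contained Python sketch – illustrative only, using an in-memory SQLite table invented for the example – contrasting string-spliced SQL with a parameterized query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

attacker_input = "nobody' OR '1'='1"

# Vulnerable: the input is spliced directly into the SQL string, so the
# attacker's quote characters rewrite the query itself.
unsafe_rows = conn.execute(
    f"SELECT * FROM users WHERE name = '{attacker_input}'"
).fetchall()
print(unsafe_rows)  # every row leaks

# Safe: a parameterized query treats the input strictly as data.
safe_rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (attacker_input,)
).fetchall()
print(safe_rows)    # no rows match
```

The vulnerable query returns the whole table because the injected `OR '1'='1'` is executed as SQL; the parameterized version returns nothing because no user is literally named `nobody' OR '1'='1`.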

RCE is the crown jewel for any cyberattacker and was the driver behind last year’s Log4Shell scramble. It, too, is up five percent since 2018. XSS saw a six percent increase in frequency.

“These trends were echoed throughout the report’s findings, revealing a worrying situation for cybersecurity,” Ralls said.

Skills gap, talent shortage implicated

Another big surprise for researchers was the increase in the number of vulnerabilities reported by organizations that scan their assets. There can be many reasons, but the shortage of software developers trained in cybersecurity is a major culprit.

“Developers, in particular, may need more education to avoid these errors. We have noticed that vulnerabilities are not being discovered during scanning, even in the early stages of development,” Ralls explained.

When developers don’t address vulnerabilities, they put their organizations at risk. Ralls said automation and integration tools can help developers address these vulnerabilities more quickly and reduce potential costs to the organization.

Don’t Blame Web Apps Alone

Web apps aren’t getting any less secure per se. It’s a matter of developers being tired, overworked, and often short on experience.

Often, organizations hire developers who lack the necessary cybersecurity background and training. According to Ralls, with the continuing push toward digital transformation, businesses and organizations are digitizing and developing apps for ever more aspects of their operations.

“In addition, the number of new web applications entering the market every day means that every additional app is a potential vulnerability,” he said. For example, if a company has ten applications, it is less likely to have one SQLi than if the company has 1,000 applications.

applying remedies

Business teams – whether developing or using software – require both the right paradigm and the right technologies. This involves prioritizing a secure design model that covers all the bases and bakes security into the pre-code processes behind the application architecture.

“Break up the silos between teams,” Ralls advised. “Particularly between security and development – and make sure organization-wide norms and standards are in place and applied universally.”

As for investing in AppSec tools to stem the rising tide of faulty software, Ralls recommends robust tools that:

  • Automate as much as possible;
  • Integrate seamlessly into existing workflows;
  • Provide analysis and reporting to show evidence of success and where more work needs to be done.

Don’t overlook the importance of accuracy. “Tools with low false-positive rates and clear, actionable guidance for developers are essential. Otherwise, you waste time, your team won’t embrace the technology, and your security posture won’t improve,” he concluded.

blind spots in play

Ralls said critical breaches and dangerous vulnerabilities continue to expose organizations’ blind spots. For proof, look at Log4Shell’s tornado-like effects.

Businesses around the world scrambled to test whether they were susceptible to RCE attacks through the widely used Log4j library. Some of these risks are increasing in frequency when they should be going away for good. It comes down to a disconnect between the reality of risk and the strategic mandate for innovation.

“It is not always easy to get everyone on board with security, especially when it appears that security is holding individuals back from project completion or would be too costly to set up,” Ralls said.

A growing array of effective cybersecurity strategies and scanning technologies can reduce persistent threats and make it easier to bridge the gap between security and innovation.

Microsoft Build is Microsoft’s most interesting event because it focuses on the people who build stuff, mostly code, but often, as is the case this year, hardware.

Last week, Microsoft held its latest Build event, and I’m pretty sure it spooked most PC OEMs. That’s because Microsoft announced a new workstation focused on developers, called Project Volterra. It has four processors and is based on ARM, not x86. Coupled with a major effort – aided by Qualcomm – to provide ARM-native code once that code becomes available in late 2022, it should allow the platform to reach its full potential.

But the ARM CPU is only one of the four processors. We still have the GPU, but Microsoft added an NPU and an ACU (Azure Compute Unit) – and that last one isn’t even in the PC. Let’s talk about how Microsoft is radically rethinking the PC for the cloud world, and how disruptive this necessary change is likely to be.

Then we’ll close with our product of the week, which has to be Project Volterra because it reminds me of the old PCjr from IBM, but done right. (IBM crippled the PCjr because it feared the machine would cannibalize sales of the IBM PC – now a textbook product mistake.)

Inside the 4-Processor PC

Today’s PCs contain two processors: a CPU that handles structured, numeric information, and a GPU that focuses more on unstructured data and visual information. Together they define how a PC performs, with the current trend being a transfer of load from CPU to GPU as workloads become less structured and more visually focused, especially in how PCs present their information.

But with the rise of artificial intelligence – and the fact that AI operates very differently from apps designed for CPUs or GPUs, creating decision chains based on neural-network capabilities modeled on how we believe our brains work – these loads run inefficiently on the CPU and only somewhat better on the GPU, begging for a very different hardware architecture designed specifically for those workloads.

Enter the NPU, or Neural Processing Unit. On paper, it can outperform both CPU and GPU on AI loads with far less power, opening the door for developers who want to build applications on a focused and more efficient AI processing platform. This means there will be a lot of focus on AI capabilities going forward, and Microsoft has said that, in the future, all PCs will have NPUs.

But what about the ACU? Well, that’s an acronym I came up with: it stands for Azure Compute Unit. This is the second shoe we have been waiting to drop since Satya Nadella took over Microsoft. It refers to a persistent connection to Azure for additional processing power in the cloud, and it is arguably the first hardware implementation on an endpoint that addresses the hybrid world we live in today.

By hybrid I do not mean working from home and the office, although it does apply to that world too. Nor is it the hybrid cloud as we currently talk about it, which has to do with server loads. It is a new hybrid concept, where the load is transferred between the cloud and the desktop as needed.

Like the PCjr – but in a good way

Project Volterra is a new class of workstation with all four processors, based on ARM and focused on developers who build for ARM-based PCs. As I mentioned earlier, it reminds me of the PCjr (pronounced “PC Junior”) from IBM in the 1980s, but done right.

The PCjr was a revolutionary modular design that was incredibly well priced for the time and provided an easy upgrade path, anticipating the PC-as-a-service concept that arrived decades later.

But someone at IBM raised concerns that the PCjr, which was targeted at consumers, was too good because it made the much more expensive IBM PC look dated and overpriced. So they crippled the PCjr and effectively killed it, learning the lesson that you never cripple a product for being too good. If customers like it, you lean into that preference to ensure that customer needs are prioritized over revenue.

Which brings us back to Project Volterra. It appears to be a high-performance desktop workstation that can be built for far less than a traditional workstation costs. Moreover, like the modular PCjr, it is stackable to add performance. Most important, it is not crippled. While it initially focuses on building ARM-native apps, it anticipates a future where those apps are prevalent and can perform in line with their older x86 versions.

This addresses a major problem for ARM PCs – that x86 apps must run under emulation and thus operate inefficiently, causing them to perform poorly against x86 PCs – and enables them to compete with x86 on an even playing field. None of these machines are on the market yet, and the wave they are building for is still years out. As we approach 2025, I expect ARM-based PCs and workstations will have all these advantages and be able to compete by then.

wrapping up

Microsoft has been one of those companies that drives personal technology and has revolutionized it from time to time. The move to four-processor PCs, with one processor in the cloud and another focused on AI loads, is one of the biggest hardware changes since the PC launched. Demonstrating its in-depth knowledge of where the market is headed, Microsoft gives us a view of its PC future and the pervasive cloud connection it will require.

Now we can look forward to the coming world of hybrid desktop apps, NPCs (non-player characters) that behave just like real people in games, and supporting apps on PCs that deliver productivity gains we can’t even dream of today.

Promising increased collaboration not only with our peers but also with more intelligent computers that can help move and drive our projects, Microsoft Build this year anticipates a very different workplace, a very different employee toolset, and hardware that can look and function very differently from the PCs we have today.

In short, to say that Microsoft Build was disruptive this year would be an understatement.

Technical Product of the Week

Project Volterra

The Surface line of PCs, targeted specifically at Apple, has lacked a workstation or general desktop PC-class product from the start. It includes an all-in-one PC aimed at creative users, but that machine lacks the focused processing performance of a workstation. With the announcement of Project Volterra, that will change.


Project Volterra | Image Credits: Microsoft

While Microsoft showcased a desktop configuration, the form factor presupposes a laptop version – but given the parallel advent of head-mounted displays, that laptop could be a revolutionary design we won’t see until this platform is close to launch.

Initially, Project Volterra will not target traditional workstation workloads such as CAD/CAM, architecture, or large-scale modeling. Instead, it will focus on an area that has had little workstation support so far: high-performance, ARM-based apps that run natively on Windows on ARM without emulation.

But think of this as only a first step. Once those apps exist, workstations like Project Volterra will move into more traditional areas after going through the required certifications and, of course, once they can run the respective applications natively.

Project Volterra is on a critical path to making ARM a true peer to x86, and to creating a new class of PC that embraces AI and the cloud more deeply than ever before – making it an ideal candidate for my Product of the Week.

Plus, it was one of the most surprising things – if not the most surprising – announced at Microsoft Build this year.

The opinions expressed in this article are those of the author and do not necessarily reflect the views of ECT News Network.

Do you know whether your company data is clean and well managed? Why does it matter anyway?

Without a working governance plan, you may have no company to worry about – data-wise.

Data governance is a collection of practices establishing the rules, policies, and procedures that ensure data accuracy, quality, reliability, and security. It ensures the formal management of data assets within an organization.

Everyone in business understands the need to have and use clean data. But making sure it’s clean and usable is a bigger challenge, according to David Kolinek, vice president of product management at Atacama.

This challenge is compounded when business users have to rely on scarce technical resources. Often, no one person oversees data governance, or that person doesn’t have a complete understanding of how the data will be used and how to clean it up.

This is where Atacama comes into play. The company’s mission is to provide a solution that even people without technical knowledge, such as SQL skills, can use to find the data they need, evaluate its quality, understand how to fix any issues, and determine whether that data will serve their purposes.

“With Atacama, business users don’t need to involve IT to manage, access and clean their data,” Kolinek told TechNewsWorld.

Keeping in mind the users

Atacama was founded in 2007 and was originally bootstrapped.

It started as part of a consulting company, Edstra, which is still in business today. However, Atacama focused on software rather than consulting, so management spun it off as a product company addressing data quality issues.

Atacama started with a basic approach: an engine that did basic data cleaning and transformation. But it still required an expert user because the configuration had to be supplied by hand.

“So, we added a visual presentation of the steps, enabling things like data transformation and cleanup. This made it a low-code platform because users were able to do most of the work using just the application user interface. But at that point it was still a fat-client platform,” Kolinek explained.

However, the current version is designed with the non-technical user in mind. The software includes a thin client, a focus on automation, and an easy-to-use interface.

“But what really stands out is the user experience, born of the seamless integration we were able to achieve with the 13th version of our engine. It delivers robust performance that is crafted to perfection,” he offered.

Digging deeper into data management issues

I asked Kolinek to discuss the issues of data governance and quality further. Here is our conversation.

TechNewsWorld: How is Atacama’s concept of centralizing or consolidating data management different from other cloud systems such as Microsoft, Salesforce, AWS and Google Cloud?

David Kolinek: We are platform agnostic and do not target a specific technology. Microsoft and AWS have their own native solutions that work well, but only within their own infrastructure. Our portfolio is wide open so it can serve all use cases that should be included in any infrastructure.

In addition, we have data processing capabilities that not all cloud providers offer. Metadata is useful for automated processing and for generating more metadata, which in turn can be used for additional analysis.

We have developed both these technologies in-house so that we can provide native integration. As a result, we can provide a better user experience and complete automation.

How is this concept different from the notion of standardization of data?

David Kolinek, Vice President of Product Management, Atacama

Kolinek: Standardization is just one of many things we do. Typically, standardization can be easily automated, in the same way that we can automate cleaning or data enrichment. We also provide manual data correction when resolving certain issues, such as missing Social Security numbers.

We cannot generate an SSN, but we can derive a date of birth from other information. So standardization is not separate; it is a subset of the things that improve quality. For us it is not just about standardizing data – it is about having good quality data so the information can be leveraged properly.
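Kolinek’s distinction between automated standardization and derivable versus non-derivable fields can be sketched in a few lines of Python. This is a hypothetical illustration, not Atacama’s actual engine; the field formats and helper names are invented:

```python
import re
from datetime import date

def standardize_phone(raw: str) -> str:
    # Automated standardization: strip formatting noise, then re-emit
    # the value in one canonical shape.
    digits = re.sub(r"\D", "", raw)
    if len(digits) != 10:
        return raw  # leave unrecognized values for manual correction
    return f"({digits[:3]}) {digits[3:6]}-{digits[6:]}"

def derive_birth_year(age: int, today: date) -> int:
    # Enrichment: some missing fields can be derived from others.
    # A missing SSN, by contrast, cannot be reconstructed at all.
    return today.year - age

print(standardize_phone("555.867.5309"))        # (555) 867-5309
print(derive_birth_year(32, date(2022, 5, 1)))  # 1990
```

The point of the sketch is the split in behavior: recognizable values are normalized automatically, while values the rules cannot parse are passed through untouched for a human to resolve.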

How does Atacama’s data management platform benefit users?

Kolinek: User experience is really our biggest advantage, and the platform is ideal for serving multiple personas. Companies need to enable both business users and IT people when it comes to data management. That requires a solution where business and IT can collaborate.

Another great advantage of our platform is the strong synergy between data processing and metadata management that it provides.

Most other data management vendors cover only one of these areas. We also use machine learning and a rules-based approach and validation/standardization, both of which, again, are not supported by other vendors.

Furthermore, because we are technology-agnostic, users can connect to many different technologies from a single platform. With edge processing, for example, you can configure something once in Atacama One, and the platform will translate it for different platforms.

Does Atacama’s platform lock in users the way proprietary software often does?

Kolinek: We developed all the main components of the platform ourselves, and they are tightly integrated. There has been a huge wave of acquisitions in this space lately, with big vendors buying smaller ones to fill gaps. In some cases, you end up buying and managing not one platform, but several.

With Atacama, you can buy just one module, such as Data Quality/Standardization, and later expand to others, such as Master Data Management (MDM). It all works together seamlessly. Just activate our modules as you need them. This makes it easy for customers to start small and expand when the time is right.

Why is the Integrated Data Platform so important in this process?

Kolinek: The biggest advantage of a unified platform is that companies are not looking for a point-to-point solution to a single problem like data standardization. It is all interconnected.

For example, to standardize you must verify the quality of the data, and for that, you must first find and catalog it. If you have an issue, even though it may seem like a discrete problem, it probably involves many other aspects of data management.

The beauty of an integrated platform is that in most use cases, you have a solution with native integration, and you can start using other modules.

What role do AI and ML play today in data governance, data quality and master data management? How is this changing the process?

Kolinek: Machine learning enables customers to be more proactive. With the traditional approach, you first identify and report a problem, then check what went wrong with the data, and then create a data quality rule to prevent a repetition. It’s all reactive, based on something being broken, found, reported, and fixed.

ML, by contrast, lets you be proactive. You give it training data instead of rules. The platform then detects differences in patterns and flags discrepancies, helping you realize there is a problem. This is not possible with a rule-based approach, and it is much easier to scale when you have a large number of data sources. The more data you have, the better the training and its accuracy.
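The contrast Kolinek draws between hand-written rules and learning normality from the data can be illustrated with a deliberately tiny sketch. This is not Atacama’s ML, just a statistical stand-in using a z-score; the sample data is invented:

```python
from statistics import mean, stdev

def flag_anomalies(values, threshold=2.0):
    # No hand-written rule here: the "normal" range is learned from the
    # data itself, and records far outside it are flagged for review.
    mu, sigma = mean(values), stdev(values)
    return [v for v in values if abs(v - mu) > threshold * sigma]

order_totals = [52, 48, 50, 51, 49, 53, 47, 50, 5000]  # one corrupt record
print(flag_anomalies(order_totals))  # [5000]
```

A rule such as “reject totals over 100” would catch this case too, but only if someone anticipated it; the learned threshold adapts as new data arrives, which is what makes the approach proactive and scalable.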

Aside from cost savings, what benefits can enterprises gain from consolidating their data repositories? For example, does it improve security, CX results, etc.?

Kolinek: It improves security and minimizes potential future leaks. For example, we had customers who were storing data that no one was using. In many cases, they didn’t even know the data existed! Now, they are not only consolidating their technology stack, but they can also see all the stored data.

It is also very easy to add newcomers to the platform with consolidated data. The more transparent the environment, the sooner people will be able to use it and start getting value.

It is not so much about saving money as it is about leveraging all your data to generate a competitive advantage and generate additional revenue. It provides data scientists with the means to build things that will drive business forward.

What are the steps in adopting a data management platform?

Kolinek: Start with a preliminary analysis. Focus on the biggest issues the company wants to tackle and select platform modules to address them. It is important to define goals at this stage. Which KPIs do you want to target? What level of data quality do you want to achieve? These are questions you should ask.

Next, you need a champion to drive execution and identify the key stakeholders behind the initiative. This requires extensive communication among those stakeholders, so it is important that someone focuses on educating others about the benefits and onboarding teams to the system. Then comes the implementation phase, where you address the key issues identified in the analysis, followed by the rollout.

Finally, think about the next set of issues that need to be addressed and, if necessary, enable additional platform modules to achieve those goals. The worst approach is buying a tool and deploying it without providing any service, education, or support; that all but ensures a low adoption rate. Education, support, and service are very important during the adoption phase.

The best thing for me about tech-related topics is that they are probably easier than any other to learn online. In fact, that’s exactly how I built the computer science foundation that supports my work. Without an internet full of resources, I would not be where I am today.

Like many who shared my path, I initially devoured every online resource I could get my hands on. But as I invested more years in my career, I increasingly noticed the shortcomings of the material learners are most likely to encounter.

At first, I found that I had to re-learn some concepts I thought I understood. Then, the deeper I went, the more I discovered that even my self-taught peers were disoriented at some point.

This inspired me to investigate how misconceptions spread. Of course, no one gets everything right all the time. It is human to make mistakes, after all. But with so much knowledge available online, in theory, misinformation should not spread widely.

So where does it come from? In short, the same market forces that make computer science-driven fields attractive also provide fertile ground for questionable training material.

To give back to computer science education in my own small way, I want to share my observations on assessing the quality of instructional resources. Hopefully, those of you on a similar path will learn the easy way what I learned the hard way.

Starting our self-dev environment

Before we begin, I want to acknowledge that no one likes being told their work is less than stellar. So I’m definitely not going to name names. For one thing, there are so many names that heuristics are the only practical way to go.

More importantly, instead of just telling you where not to go, I’ll provide you with the tools to evaluate for yourself.

Heuristics are also more likely to point you in the right direction. If I declare that website X has subpar content and I am wrong, then nobody has gained anything. Even worse, you may have missed out on an edifying source of knowledge.

However, if I outline the signs that suggest a website may be off the mark, then while they may occasionally lead you to mistakenly discount a trusted resource, in most cases they will help you draw sound conclusions.

The invisible hand of the market deals a strong hand

To understand where information of questionable quality comes from, we need to dust off our Econ 101 notes.

Why do tech jobs pay so much? High demand meets low supply. The need for software developers is so urgent, and software development trends evolve so rapidly, that tons of resources have been rapidly produced to train the latest wave.

But the market forces don’t stop there. When demand outweighs supply, production comes under pressure. When production speeds up and the price stays the same, quality goes down. Sure, prices could simply go up, but a major selling point of technical training is that much of it is free.

So, if a site can’t cope with the sharp drop in users that comes with moving from free to paid, can you blame it for staying free? Multiply this by even a modest share of all the free training sites, and the result is an overall drop in training quality.

Furthermore, because innovation in software development practices tends to iterate, so does this cycle of declining educational quality. What happens once the hastily prepared training material is consumed? Over time, the employees who consumed it become the new “experts.” Before long, these “experts” produce another generation of resources; and so the cycle continues.

Bootstrap your learning with your own bootstrap

Clearly, I am not asking you to regulate this market. What you can do, however, is learn to identify credible sources on your own. I promised heuristics, so here are some I use to get a rough estimate of the value of a particular resource.

Is the site run by a for-profit company? Its material is probably not that solid, or at least not useful for your specific use case.

At times, these sites are selling something or other to tech-illiterate customers. The information is simplified to appeal to non-technical company leadership, not detailed enough to help the technical rank and file. And even when a site is intended for someone in your shoes, for-profit organizations try to avoid handing out tradecraft for free.

Even if the site is for the technically minded, and even if the company freely shares its practices, its use of a given piece of software, tool, or language may be completely different from how you do, will, or should use it.

Was the site set up by a nonprofit organization? If you’ve chosen the right kind, its material can be super valuable.

Before you believe what you read, make sure the nonprofit is reputable. Then confirm how closely the site is related to what you’re trying to learn. For example, a site administered by the same people who make Python would be a great bet for teaching you Python.

Is the site primarily in the business of providing training? Be cautious here, even if it is a nonprofit.

Such organizations generally prefer to place apprentices in jobs as rapidly as possible. Apprentice quality comes second. Sadly, that’s good enough for most employers, especially if it means they can save a buck on salary.

On the other hand, if the site is run by a mission-driven nonprofit, you can usually rate it more highly. Often these training-driven nonprofits have a mission to build up the field and support its workers—which relies heavily on people being trained properly.

more to consider

There are a few other factors you should take into account before deciding how seriously to take a resource.

If you’re looking at a forum, measure it based on its relevance and reputation.

General-purpose software development forums can be a frustrating waste of time, because having no particular specialty means there is little chance of specialized experts turning up.

If the forum is explicitly intended to serve a particular job role or software user base, chances are you’ll fare better, as it’s more likely that you’ll find an expert there.

For things like blogs and their articles, it all depends on the strength of the author’s background.

Writers who develop or use what you’re learning probably won’t lead you in the wrong direction. You’re probably also in good shape with a developer from a major tech company, as these companies can usually attract and retain top-notch talent.

Be suspicious of writers publishing under the banner of a for-profit company, especially one that isn’t even a developer of what they’re writing about.

summative assessment

If you want to boil this approach down to a mantra, you could put it like this: always think about who is writing the advice, and why.

Obviously, no one sets out to be wrong. But people can only pass on what they know, and a share of the information out there was produced with priorities other than being as accurate as possible.

If you can identify the reasons why a knowledge creator may not have kept textbook accuracy at the front of their mind, you’re in less danger of inadvertently letting their mistakes into your work.

The director of cybersecurity at the National Security Agency inspired some smiles among cyber professionals last week when he told Bloomberg that the new encryption standards his agency is working on with the National Institute of Standards and Technology (NIST) will have no backdoors.

In cyber security parlance, a backdoor is an intentional flaw in a system or software that can be secretly exploited by an attacker. In 2014, it was rumored that an encryption standard developed by the NSA included backdoors, resulting in the algorithm being dropped as a federal standard.

“Backdoors can aid law enforcement and national security, but they also introduce vulnerabilities that can be exploited by hackers and are subject to potential abuse by the agencies they are intended to assist,” John Gunn, CEO of Rochester, NY-based Token, maker of a biometric-based wearable authentication ring, told TechNewsWorld.

“Any backdoor into encryption can and will be discovered by others,” said John Bambenek, principal threat hunter at Netenrich, an IT and digital security operations company in San Jose, Calif.

“You can trust the American intelligence community,” he told TechNewsWorld. “But will you trust the Chinese and the Russians when they get to the back door?”

trust but verify

Lawrence Gasman, president and founder of Inside Quantum Technology of Crozet, Va., said the public has good reason to be skeptical about NSA officials’ comments. “The intelligence community is not known for telling the absolute truth,” he told TechNewsWorld.

“The NSA has some of the best cryptographers in the world, and well-founded rumors have circulated for years about their efforts to put backdoors into encryption software, operating systems, and hardware,” said Mike Parkin, an engineer at Vulcan Cyber, a SaaS provider for enterprise cyber-risk remediation based in Tel Aviv, Israel.

He told TechNewsWorld, “Similar things can be said of software and firmware sourced from other countries, which have their own agencies with a vested interest in seeing what’s in the traffic crossing a network.”

“Whether it’s in the name of law enforcement or national security, officials have a long-standing disdain for encryption,” he said.

When it comes to encryption and security, there should be a trust-but-verify approach, advised Dave Kundiff, CISO at Cyvatar, maker of an automated cybersecurity management platform in Irvine, Calif.

“Organizations may have the best of intentions but fail to fully realize those intentions,” he told TechNewsWorld. “Government entities are bound by law, but that does not guarantee they will not knowingly or unknowingly introduce backdoors.”

“It is imperative for the community at large to test and verify any of these mechanisms to confirm that they cannot be compromised,” he said.

taming prime numbers

One of the drivers behind the new encryption standards is the threat of quantum computing, which has the potential to break the commonly used encryption schemes used today.

“As quantum computers become mainstream, modern public-key encryption algorithms will become obsolete and insufficiently secure, as demonstrated by Shor’s algorithm,” explained Jasmine Henry, director of field security at JupiterOne, a Morrisville, N.C.-based provider of cyber asset management and governance solutions.

Shor’s algorithm is a quantum computer algorithm for computing the prime factors of integers. Prime numbers are the foundation of the encryption used today.

“The encryption depends on how hard it is to work with really large prime numbers,” Parkin explained. “Quantum computing has the potential to make finding the prime factors that encryption relies on trivial. What used to take generations to compute on a conventional computer could be revealed in a matter of moments.”
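To make the point concrete, here is a toy sketch of the classical problem: recovering the two primes behind an RSA-style modulus by brute force. This is purely illustrative (real moduli are hundreds of digits long, far beyond trial division), and it is exactly this search that Shor’s algorithm would collapse to polynomial time on a quantum computer:

```python
# Toy illustration: recovering primes p and q from n = p * q by trial division.
# The search grows roughly with sqrt(n), which is infeasible classically for
# the ~2048-bit moduli used in practice; Shor's algorithm would factor such
# an n in polynomial time on a sufficiently large quantum computer.

def factor(n):
    # Find the smallest prime factor by trial division.
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1  # n itself is prime

p, q = factor(3233)  # 3233 = 53 * 61, a classic textbook RSA modulus
print(p, q)          # 53 61
```

Doubling the number of digits in `n` roughly squares the work for this search, which is why key sizes that are trivial to multiply together are impractical to pull apart classically.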

This is a major threat to today’s public-key encryption technology. “This is why public-key cryptography is often used to exchange the ‘symmetric’ keys that are actually used for the transmission of sensitive data,” explained Andrew Barratt of Coalfire, a Westminster, Colo.-based provider of cybersecurity advisory, solutions, and investigation services.

“This has important implications for almost all encryption transmissions, but also for anything else that requires digital signatures such as the blockchain technologies that support cryptocurrencies like bitcoin,” he told TechNewsWorld.

quantum-resistant algorithms

Gunn said that most people misunderstand what quantum computing is and how it differs from today’s classical computing.

“Quantum computing will never be in your tablet, phone, or wristwatch,” he said. “But for tasks like searching and factoring large numbers using special algorithms for specific applications, the performance improvements are in the millions.”

“Using Shor’s algorithm and the quantum computer of the future, AES-256, the encryption standard that protects everything on the web and all of our online financial transactions, will be breakable in a short period of time,” he said.

Barratt stressed that once quantum computing becomes available for mainstream use, crypto will need to move from prime-number-based mathematics to elliptic curve cryptography-based (ECC) systems. “However,” he continued, “it is only a matter of time before the underlying algorithms that support ECC become vulnerable on the scale of quantum computing, especially by designing quantum systems to break them.”

NIST is developing quantum-resistant algorithms with the help of the NSA. “The requirements for quantum-resistant algorithms may include very large signatures, heavy processing loads, or massive keys that can present challenges for implementation,” Henry told TechNewsWorld.

“Organizations will face new challenges to implement quantum-resistant protocols without running into performance issues,” she said.

time of arrival?

It is unclear when a working quantum computer will be available.

“It doesn’t appear that we’ve hit the inflection point of practical application yet, so we can’t say with any certainty what the timeline is,” Kundiff said.

“However, that inflection point may come tomorrow, allowing us to say quantum computing will be widely available in three years,” he told TechNewsWorld. “But until there is some point of moving beyond the theoretical into the practical, it could still be a decade away.”

Gasman said he thinks the world will see working quantum computers sooner rather than later. “Quantum computing companies say this will happen in 10 to 30 years,” he observed. “I think it will be before 10 years, but not before five.”

Moore’s Law – which predicts that computing power doubles every two years – does not apply to quantum computing, Gasman maintained. “We already know that quantum evolution is proceeding at a rapid pace,” he said.

“I’m saying we’ll have a quantum computer in less than 10 years,” he continued. “You won’t find many people agreeing with me, but I think we should be concerned about it right now – not only because of the NSA, but because there are worse actors than the NSA who want to take advantage of this technology.”

Events streamed live over the internet are growing in popularity among internet households, especially live sports, according to a study released by Parks Associates.

The report, “Livestreaming: The Next Hot Video Market,” reveals that more than 40% of US internet households have streamed live content in the past three months. More than three out of five of those households (61%) watched a streamed sports event.

The study also found that consumers who livestream spend half their online video time watching live events.

“Traditionally, live sports programming has done well,” said Eric Sorensen, senior contributing analyst at Parks Associates.

However, “pre- and post-event programming doesn’t perform nearly as well in terms of ratings as the actual event,” he told TechNewsWorld. “These facts apply to both linear television and live streaming platforms.”

“Games are popular because they are live, and matter little when viewed after the fact,” said Michael Pachter, managing director of equity research at Wedbush Securities in Los Angeles.

“You don’t care about a baseball game that ends 12 – 2 or about a football game that ends 49 – 14, and there’s no point in watching a replay,” he told TechNewsWorld. “Some one-off wins may be worth it if records are broken – Brady’s 500th touchdown or a no-hitter in baseball – but they are largely worth little when viewed after the fact.”

eyeball chase

Sorensen pointed out that live sports programming is migrating to online platforms as more rights become available.

“Many streaming providers continue to outbid each other for coveted sports media rights,” he said. “Sports consumers don’t want to miss ‘water cooler’ moments with their favorite sports teams.”

Professional sports leagues don’t want fans to miss those moments. “Leagues want to be where their audience is and these days, that’s online,” said Michael Goodman, director of digital media strategy at global research, advisory and analytics firm Strategy Analytics.

“Streaming is giving them additional revenue streams,” he told TechNewsWorld. “Amazon is paying a huge amount for Thursday Night Football. Streaming is also raising rights fees as there are new competitors for them.”

Michael Inouye, a principal analyst at ABI Research, said sports has always been the biggest driver of livestreaming due to the nature of the programming, the audience size, and the market potential.

“One issue with livestreaming was latency,” he told TechNewsWorld. “In the past, OTT [over-the-top] services lagged far behind live broadcast. A typical live broadcast is six to eight seconds behind a live event, while livestreaming was 30 to 45 seconds or more behind.”

“We are now seeing more livestreaming hitting the same levels as broadcast – around 10 seconds – so this, too, is making this type of programming more competitive with traditional broadcast channels,” he said.

edge on netflix

Inouye observed that live sports streaming is on the rise as more viewers cut the pay-TV cord. “Securing distribution rights is the biggest hurdle, but streaming is increasingly part of new deals and negotiations, and as direct-to-consumer continues to grow, we will see more content through streaming channels,” he continued.

“The strong growth in video advertising in the streaming markets is also a key driver for bringing sports and other live streaming content to a wider audience,” he said. “It’s still not at traditional broadcast levels, but it’s seen as a major complementary channel, at least for now.”

Neil Macker, an equity analyst at Morningstar, said some online platforms see livestreaming as an edge in the market. “Live streaming is something that companies competing with Netflix are adding to the package to differentiate themselves, not only here in the States, but internationally as well,” he told TechNewsWorld.

Those moves by its competitors cannot be ignored for long by Netflix, which is reportedly considering a livestreaming strategy.

“Streaming is getting more attention from Netflix because it’s having a harder time competing against companies with huge reserves of intellectual property like Disney and Warner Bros. This could be a way to diversify a little bit,” said Ross Rubin, principal analyst at Reticle Research, a consumer technology consulting firm in New York City.

“It’s also interesting, given the recent discussion of Netflix opening up an advertising tier, that live events — news and sports in particular — usually have ads associated with them,” he told TechNewsWorld.

“It is questionable, however, how much investment livestreaming will receive when Netflix wants to cut budgets and be more financially conservative,” he said.

a momentous occasion

Sorensen noted that Hulu with Live TV, Amazon Prime Video, and Disney+ are major providers that now offer livestreamed content, challenging Netflix’s leadership position in the OTT ecosystem.

He added that offering livestreamed content is an opportunity for Netflix not only to gain new subscribers but also to retain existing ones. “Sixty-four percent of Netflix subscribers currently livestream content on other services,” he explained. “By livestreaming, Netflix can maintain longer engagement with its service.”

“This is especially important in light of Netflix’s recent earnings call revealing that they will lose millions of subscribers in 2022,” he said. “There are many opportunities for a service like Netflix to provide eGaming, esports, and red-carpet premiere events as livestreaming entertainment, in addition to sports and news.”

“As people venture away from their homes, Netflix appears to be suffering from higher spending and lower viewership due to increased competition and behavioral changes,” added Charles King, principal analyst at Pund-IT, a technology advisory firm in Hayward, Calif.

“Livestreaming popular events could help the company strengthen its fortunes,” he told TechNewsWorld.

not for netflix

Pachter insisted that Netflix would fail miserably at livestreaming.

“Livestreaming is by appointment, and Netflix is on-demand,” he explained. “Its customers will never associate it with events that are watched live, and I think they’ll give up on the idea after toying with it and failing.”

“Netflix is grasping at straws. Its brand is not built around livestreaming,” said Mark N. Vena, president and principal analyst at SmartTech Research in San Jose, Calif.

“I think many of the mistakes Netflix is making are self-inflicted wounds,” he told TechNewsWorld. “Livestreaming won’t help them get out of their quagmire.”

“The amount of content the average consumer has access to is way too high, but Netflix is acting like it’s 2010, not 2022,” he said. “The amount of content available to users is exponentially higher than it was 10 to 12 years ago, when Netflix didn’t have much competition.”

“Now they have a lot of competition,” he continued. “They’re not going to be able to get themselves out of that situation.”

The first plan of its kind to comprehensively address open source and software supply chain security is awaiting White House support.

The Linux Foundation and the Open Source Software Security Foundation (OpenSSF) on Thursday brought together more than 90 executives from 37 companies, along with government leaders from the NSC, ONCD, CISA, NIST, DOE, and OMB, to reach a consensus on key actions for improving the resiliency and security of open-source software.

A subset of the participating organizations collectively pledged an initial tranche of more than $30 million to implement the plan. Those companies are Amazon, Ericsson, Google, Intel, Microsoft, and VMware. As the plan evolves, further funding will be identified, and work will begin as individual streams are agreed upon.

The Open Source Software Security Summit II, led by the National Security Council of the White House, is a follow-up to the first summit held in January. That meeting, convened by the Linux Foundation and OpenSSF, came on the one-year anniversary of President Biden’s executive order on improving the nation’s cyber security.

As part of this second White House open-source security summit, open-source leaders called on the software industry to standardize on Sigstore developer tools and to support the plan for upgrading the collective cybersecurity resilience of open source and improving trust in software, said Dan Lorenc, CEO and co-founder of Chainguard and a co-creator of Sigstore.

“On the one-year anniversary of President Biden’s executive order, we’re here today to respond with a plan that is actionable, because open source is a critical component of our national security and is fundamental to the billions of dollars being invested in software innovation today,” Jim Zemlin, executive director of the Linux Foundation, announced Thursday during his organization’s press conference.

push the support envelope

Most major software packages contain elements of open-source software, including code and critical infrastructure used by the national security community. Open-source software supports billions of dollars in innovation but also brings unique challenges in managing cybersecurity across its software supply chains.

“This plan represents our unified voice and our common call to action. The most important task ahead of us is leadership,” said Zemlin. “This is the first time I’ve seen a plan, and the industry rallying around a plan, that will work.”

The Summit II plan outlines funding of approximately $150 million over two years to rapidly advance well-tested solutions to the 10 key problems identified by the plan. The 10 streams of investment include concrete action steps to build a strong foundation for more immediate improvements and a more secure future.

“What we are doing here together is converting a bunch of ideas and principles about what is broken and what we can do to fix it into a plan that is the basis for getting started. The plan represents 10 flags in the ground, and we look forward to receiving further input and commitments that take us from plan to action,” said Brian Behlendorf, executive director of the Open Source Security Foundation.


Open Source Software Security Summit II in Washington D.C., May 12, 2022. [L/R] Sarah Novotny, open source lead at Microsoft; Jamie Thomas, enterprise security executive at IBM; Brian Behlendorf, executive director of the Open Source Security Foundation; Jim Zemlin, executive director of The Linux Foundation.

highlight the plan

The proposed plan is based on three primary goals:

  • Securing open-source software production
  • Improving vulnerability discovery and remediation
  • Shortening ecosystem patching response time

The plan as a whole includes elements to achieve those goals. These include security education, which provides a baseline for software development education and certification. Another element is the establishment of a public, vendor-neutral, objective-metrics-based risk assessment dashboard for the top 10,000 (or more) OSS components.

The plan proposes the adoption of digital signatures on software releases and the establishment of the OpenSSF Open Source Security Incident Response Team to assist open source projects during critical times.

Another plan detail focuses on improved code scanning to accelerate the discovery of new vulnerabilities by maintainers and experts through advanced security tools and expert guidance.

Code audits conducted by third-party code reviewers, along with any necessary remedial work, will cover up to 200 of the most critical OSS components once per year.

Coordinated data sharing will improve industry-wide research that helps determine the most critical OSS components. Providing software bills of materials (SBOMs) everywhere will improve tooling and training to drive adoption, and will give build systems, package managers, and distribution systems better supply chain security tools and best practices.

the Sigstore factor

Chainguard, which co-created Sigstore, is committing financial resources to the public infrastructure and network offered by OpenSSF, and will collaborate with industry peers to deepen work on interoperability across the software ecosystem, to ensure Sigstore’s impact is felt in every corner of the software supply chain. This commitment includes at least $1 million per year in support of Sigstore and a pledge to run it on its own node.

Designed and built by maintainers for maintainers, Sigstore has already been widely adopted by millions of developers around the world. Lorenc said now is the time to formalize its role as the de facto standard for digital signatures in software development.
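Conceptually, release signing works like the sketch below: a maintainer signs a digest of the artifact with a private key, and anyone can check it with the matching public key, so tampering with the release (or forging the signature) breaks verification. This toy uses textbook RSA with tiny numbers purely for illustration; it is not Sigstore’s actual keyless-signing design, and keys this small offer no security:

```python
# Conceptual sketch of signed releases (NOT Sigstore's actual keyless design):
# the maintainer signs a digest of the artifact with a private key, and anyone
# can verify it with the public key. Textbook RSA with toy numbers only.
import hashlib

p, q = 61, 53
n = p * q                          # public modulus
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (Python 3.8+)

def digest(data: bytes) -> int:
    # Reduce the SHA-256 digest mod n, since textbook RSA signs values < n.
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

def sign(data: bytes) -> int:
    return pow(digest(data), d, n)         # done by the maintainer (private key)

def verify(data: bytes, sig: int) -> bool:
    return pow(sig, e, n) == digest(data)  # done by anyone (public key)

release = b"example-release-1.0.tar.gz contents"  # hypothetical artifact
sig = sign(release)
print(verify(release, sig))      # True: genuine signature checks out
print(verify(release, sig + 1))  # False: an altered signature fails
```

The property the plan relies on is exactly this asymmetry: only the private key can produce a signature the public key accepts, so distributing the public key lets every downstream consumer verify the release independently.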

“We know the importance of interoperability in the adoption of these critical tools because of our work on the SLSA framework and SBOM. Interoperability is the linchpin in securing software across the supply chain,” he said.

Related Support

Google announced Thursday that it is creating an “open-source maintenance crew” tasked with improving the security of critical open-source projects.

Google also unveiled the Google Cloud Dataset and open-source Insights projects to help developers better understand the structure and security of the software they use.

According to Google, “This dataset provides access to critical software supply chain information for developers, maintainers, and consumers of open-source software.”

“Security risks will continue to plague all software companies and open-source projects, and only an industry-wide commitment involving a global community of developers, governments, and businesses can make real progress. Google will continue to play our part,” said Eric Brewer, vice president of infrastructure at Google Cloud and Google Fellow, at the security summit.

Isolation from friends and other factors during the pandemic contributed to a significant increase in screen use by tweens and teens from pre-pandemic levels.

Common Sense Media – a nonprofit organization dedicated to improving the lives of all children and families – released a detailed report in March showing that screen use grew faster in the two years of the pandemic than in the four years before it. Among tweens, use grew roughly six times faster over the past two years than before the pandemic.

The pandemic was a major contributor to the change in screen usage, according to the study, and the continued growth in popularity of platforms like TikTok may also be driving more usage.

The researchers sought details about whether there were any lasting differences in youth’s use of screen media as societies began to reopen in the fall of 2021. They focused on US tweens (ages eight to 12) and teens (ages 13 to 18) and the amount of time they spent using digital devices in addition to the time they spent doing online classes and homework.

Total entertainment screen use among tweens and adolescents per day, 2015 to 2021

2021 Common Sense Census: Media Use by Tweens and Teens

Entertainment screen use includes time spent watching television and online video, playing video games, using social media, browsing websites, creating content, e-reading, and other digital activities. In 2021, for the first time, time spent reading e-books was included in the total (six minutes among tweens and eight among teens), while time spent watching movies in theaters and using an iPod Touch was not included (these accounted for seven minutes among teens and six among tweens in 2019). Source: Common Sense Media

The results show no dramatic change in the overall pattern of media use by tweens and teens in terms of the types of devices used. The amount of time they devote to non-school screen activities, however, has increased significantly, and social media use has spread somewhat among younger age groups.

Online video has cemented its place at the top of young people’s media hierarchy. Video gaming, however, did not increase dramatically during the pandemic. The top activities remain the same – online video, gaming, and social media – and the general patterns between tweens and teens, and between boys and girls, have continued.

Media can be used both positively and negatively. Some vulnerable children are using media excessively, or using media in ways that contribute to mental health issues, according to Michael Robb, senior director of research at Common Sense Media.

“We need to be able to identify and support those children. But there are also some children who are using media to lift their mood, connect with friends, or support their mental health. We need to make sure we are not demonizing all screen time,” he told TechNewsWorld.

“It really depends on who’s using it, what they’re using, and what needs they’re meeting.”

More media use findings

The report details eight key findings compared with the previous media-use report, conducted in 2019 before the pandemic. According to James P. Steyer, founder and CEO of Common Sense Media, the study is the only nationally representative survey in the United States that tracks media use patterns among a random sample of eight- to 18-year-olds.

Sites teens wouldn’t want to live without, 2021

Among the 79 percent of 13- to 18-year-olds who are regular users of social media and online videos (using them at least once a week), the percentage who chose each site as one they wouldn’t want to live without.

Sites teens won't want to live without, 2021

Source: Common Sense Media

In addition to the results cited above, the researchers found:

  • If forced to choose, teens say YouTube is the site they wouldn’t want to live without. In fact, watching online videos is the preferred media activity among both tweens and teens, among boys and girls, and across racial/ethnic groups and income levels.
  • The use of social media is increasing among eight- to 12-year-olds. Thirty-eight percent of tweens have used social media (up from 31 percent in 2019). Nearly one in five (18 percent) said they now use social media daily (up from 13 percent in 2019).
  • Teens now use social media for about an hour and a half a day, but have conflicting feelings about the medium. Even though teens devote a lot of time to social media, they don’t enjoy it as much as other types of media.
  • The top five social media sites teens have ever used are Instagram (53 percent), Snapchat (49 percent), Facebook (30 percent), Discord (17 percent), and Twitter (16 percent).
  • Tweens and teens alike vary greatly in the amount of screen media they engage with each day. Boys use more screen media than girls. Black and Hispanic/Latino children use more than white children. Children from low-income households use more than those from high-income households.
  • Children consumed more media overall during the pandemic than in 2019, with one exception: reading, which did not increase.
  • Nearly half of all teens listen to podcasts, and one in five said they do so at least once a week. Teens engage with a wide variety of media, including formats based primarily on the spoken word.
  • A large number of Black and Hispanic/Latino children from low-income families still do not have access to a computer at home, one of the most basic building blocks of digital equity.

Dangerous Consequences

Robb was struck by the huge increase in screen time over the past two years compared with the four years before the pandemic. From 2015 to 2019, media use grew only three percent among tweens and 11 percent among teens.

From 2019 to 2021 alone, however, media use among both tweens and teens increased by about 20 percent. That is roughly six times the increase seen for tweens before the pandemic.

“I am also struck by the fact that 38 percent of tweens use social media, despite the fact that most platforms are not meant to be used by people under the age of 13,” he said.

Top entertainment screen media activities among tweens and teens, 2021

Video game refers to a game played on a console, computer, or portable game player. Mobile game refers to a game played on a smartphone or tablet. Source: Common Sense Media

What children do with media is as important as, or more important than, the amount of time they spend with it, Robb offered. If kids are engaging with great content, using technology to socialize with friends, and using it to express themselves, he doesn’t think we need to worry so much about time.

“It’s when media use is replacing important activities, such as socializing, spending quality time with family, or sleeping, that worries me,” he said.

Researchers’ Takeaways

The researchers noted that they were surprised to find no significant expansion of tablet and smartphone ownership among tweens and teens; the survey did not indicate that this had happened.

“We are beginning to see a slight trend toward the use of social media at earlier ages. This is particularly interesting given the ongoing debate about the impact of social media on the well-being of young people,” the researchers wrote.

Another new media product, pushed by Facebook (now Meta), is immersive media accessed through virtual reality. The increases in media time apply only to entertainment media, not to school, distance learning, or homework, Robb clarified.

At this point, the new medium has been slow to catch on; slower, in fact, than the growth of podcasts, the report notes.

“I keep wondering if we’ll reach a limit on media use at some point, but we haven’t yet,” Robb said.

Changing Thoughts on the Impact on Children

A recent study (Rideout & Robb, 2021) shows that many young people used their digital devices during the pandemic to socialize with friends online, learn about things that interest them, and create and share their own content. In the report’s conclusion, Steyer of Common Sense Media wrote that this work suggests parents and teachers should be cautious about simply reducing children’s screen time.

“It clearly played an important role for many tweens and teens during the pandemic,” he said.

This latest survey of children’s media use shows that activities such as content creation, video chatting, and online reading occur frequently among young people and are important and meaningful to them. But that kind of screen time still constitutes a small fraction of overall screen usage, Steyer cautioned.

“Ultimately, young people’s media time is still heavily dominated by content produced by others, whether it is content they watch, read, play, or scroll through. Given how much time children spend with media, it is more important than ever to elevate quality media by creating and highlighting shows, games, apps, and books that educate, inspire, and provide positive representation,” he concluded.

The 2021 Common Sense Census: Media Use by Tweens and Teens report is available here.