For years companies have been allowing their employees to mix business and pleasure on their mobile devices, a move that has raised concerns among cybersecurity professionals. Now a network security organization says it has a way to secure personal mobile devices that could allow cyber warriors to sleep a little easier.

Cloudflare on Monday announced its Zero Trust SIM, which is designed to secure every packet of data leaving a mobile device. Once installed on a device, the ZT SIM routes network traffic from the device to Cloudflare’s cloud, where its zero trust security policies can be applied to the data.

According to a company blog written by Cloudflare Director of Product Matt Silverlock and Head of Innovation James Allworth, by combining software-layer and network-layer security through the ZT SIM, organizations can benefit from:

  • Preventing employees from visiting phishing and malware sites. DNS requests leaving the device can automatically use Cloudflare Gateway for DNS filtering.
  • Reducing common SIM attacks. An eSIM-first approach can prevent SIM-swapping or cloning attacks, and can bring similar security to physical SIMs by locking SIMs to individual employee devices.
  • Rapid deployment. The eSIM can be installed by scanning a QR code with the mobile phone’s camera.
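The DNS-filtering idea in the first bullet can be sketched in miniature: a filtering resolver consults a policy before answering a query. The domain lists and answers below are hypothetical illustrations, not Cloudflare Gateway’s actual policy engine:

```python
# Conceptual sketch of DNS-based filtering as a zero trust gateway might
# apply it. The domain lists here are hypothetical; Cloudflare Gateway's
# real policy engine is far richer than this.

BLOCKED_CATEGORIES = {
    "phish-login-portal.example": "phishing",
    "malware-dropper.example": "malware",
}

def resolve(domain):
    """Answer a DNS query only if policy allows it."""
    category = BLOCKED_CATEGORIES.get(domain)
    if category is not None:
        # A filtering resolver answers with a block page or an
        # NXDOMAIN-style refusal instead of the real record.
        return f"BLOCKED ({category})"
    return "203.0.113.10"  # placeholder answer for an allowed domain

print(resolve("intranet.example"))
print(resolve("phish-login-portal.example"))
```

Because the ZT SIM forces the device’s traffic through Cloudflare’s cloud, this kind of check happens off-device, with nothing for the user to configure.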

Distrust of personal devices

“A lot of organizations don’t trust devices they aren’t managing to access sensitive corporate data,” said analyst Charlie Winkless, senior director at Gartner.

“Most of us are a little less careful with our personal devices than with our business tools,” he told TechNewsWorld. “There are also fewer controls on a personal device than a business device.”

“The Zero Trust SIM is a way to try to allow the corporate network some control over those individual devices as they connect.”

With a distributed workforce, the classic hub-and-spoke model for security has become obsolete, explained Malik Ahmed Khan, an equity analyst at Morningstar in Chicago.

“So, you have employees across the country accessing company resources with a mobile device sitting in their home,” he told TechNewsWorld. “How do you secure their access? That’s a big question for firms to answer.”

The answer to that question for many organizations is installing software agents on their employees’ phones as part of a mobile device management (MDM) system, which can rankle employees.

“It’s inherently difficult to protect anyone’s personal equipment because owners don’t want their equipment to be managed by someone else,” said Roger Grimes, a data-driven defense campaigner at KnowBe4, a security awareness training provider in Clearwater, Fla.

Khan said adoption will be a significant challenge for Cloudflare. “There are two degrees of convincing that need to happen,” he said. “First, Cloudflare needs to convince firms to adopt it, and second, firms need to convince their employees to use the eSIM.”

Hardware limitations

Grimes said there are other roadblocks facing organizations dealing with BYOD. “Phone operating systems simply don’t offer the sophistication needed to enable and implement the protections that are typically applied to regular computers,” he told TechNewsWorld.

“For example,” he continued, “it is very difficult to implement patching so that phones and all their apps are up to date. Many times a phone’s OS will only be patched if the phone’s network provider, such as Verizon or AT&T, decides to push the patch.

“The user can’t just click on an update feature and get a new patch, unless the phone vendor has approved it and decided to allow it to be installed,” he said.

When considering an eSIM solution, it’s important to know what it does and doesn’t do, observed Chris Clements, vice president of solutions architecture at Cerberus Sentinel, a cybersecurity consulting and penetration testing company in Scottsdale, Ariz.

“Cloudflare’s use of the eSIM links the mobile device’s cellular data connection to Cloudflare’s network, where malicious domains or sites not approved by the organization’s policies can be blocked,” he told TechNewsWorld.

“There are also capabilities for logging connections going over cellular data networks that companies typically are not able to monitor,” he said.

MDM complications

He cautioned, however, that there is no end-to-end encryption, and that blocking and logging are limited to cellular data connections only. Wi-Fi data connections, for example, are unaffected by the eSIM offering.

“Cloudflare’s eSIM solution may be cheaper and simpler than deploying a full mobile device management solution and a whole-network VPN that covers both Wi-Fi and cellular data connections, but it does not offer the same level of control and security as those solutions,” he said.

“The ability to reduce user account hijacking by preventing SIM swapping to intercept multifactor authentication codes is useful, but in reality, implementing MFA via SMS codes is no longer a best practice,” he said.

Khan pointed out, however, that the Zero Trust SIM could sidestep problems that plague agent-based solutions. “The problem with these deployments is that they require the user to dig deep into their device’s settings and accept a bunch of certificates and permissions for the agent,” he explained.

“While it is very easy to do this on a company-issued laptop or mobile device – since the agent will be pre-configured – it is quite difficult to do on a BYOD device, as the employee may not set things up properly, leaving the endpoint partially exposed,” he said.

“Imagine having an IT security team for a firm with thousands of employees and each of them trying to follow a series of steps on their individual devices,” he continued. “It can be a nightmare, logistically speaking.”

“Furthermore,” he said, “there may be a problem with updating agents uniformly and with constantly asking employees to stay on the latest operating system.”

Mobile headache

In addition to the ZT SIM introduction, Cloudflare also announced its Zero Trust program for mobile operators, which is designed to let mobile carriers offer their customers access to Cloudflare’s Zero Trust platform.

“When I talk to CISOs, I hear over and over again that effectively securing mobile devices at scale is one of their biggest headaches; it’s the hole in everyone’s Zero Trust deployment,” Cloudflare co-founder and CEO Matthew Prince said in a statement.

“With the Cloudflare Zero Trust SIM,” he said, “we will offer a one-stop solution to secure all device traffic, helping our customers plug this hole in their Zero Trust security posture.”

However, how the market will react to this solution remains to be seen. “I haven’t heard Gartner customers asking for this,” Winkless said. “Maybe they’ve seen something I haven’t seen. So, we’re going to see whether this is an answer to a question no one is asking or a transformative way of providing security.”

As IT workers continue their arduous job of protecting network users from the bad guys, some new tools could help stem the tide of vulnerabilities that continue to pile up in open-source and proprietary software.

Canonical and Microsoft reached a new agreement to keep their two cloud platforms running well together. Meanwhile, Microsoft apologized to open-source software developers. But BitLocker made no apology for shutting out dual-booting Linux users.

Let’s take a look at the latest open-source software industry news.

New open-source tool helps devs spot exploits

Vulnerability management platform firm Rezilion announced on August 12 the availability of its new open-source tool MI-X from its GitHub repository. The CLI tool helps researchers and developers quickly determine whether their containers and hosts are affected by a specific vulnerability, shortening the attack window and enabling an effective remediation plan.

“Cybersecurity vendors, software providers, and CISA are issuing daily vulnerability disclosures alerting the industry to the fact that all software is built with mistakes, which often must be addressed immediately,” said Yotam Perkal, director of vulnerability research at Rezilion.

“With this flow of information, the launch of MI-X provides users with a repository of information to validate the exploitability of specific vulnerabilities, creating greater focus and efficiency around patching efforts,” he added.

“As an active participant in the vulnerability research community, this is an impressive milestone for developers and researchers to collaborate and build together,” Perkal said.

Current tools fail to factor in exploitability, even as organizations grapple with critical and zero-day vulnerabilities and scramble to understand whether they are affected. It’s an ongoing race to figure out the answer before the threat actors do.

To determine this, organizations need to identify a vulnerability in their environment and then find out whether it is actually exploitable, so they can form a mitigation and remediation plan.

Current vulnerability scanners take too long to scan, don’t factor in exploitability, and often miss flaws entirely. This is what happened with the Log4j vulnerability. According to Rezilion, the lack of tooling gives threat actors plenty of time to exploit a flaw and do major damage.
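The triage this kind of tool aims to speed up can be illustrated in miniature: a vulnerable version being installed is only the first test; whether the vulnerable code is actually reachable decides real exploitability. The version numbers and the “component loaded” condition below are invented for illustration and are not MI-X’s actual logic:

```python
# Toy exploitability triage. Merely running an affected version does not
# mean a host is exploitable; the vulnerable code must also be reachable.
# All version ranges and conditions here are hypothetical examples.

def version_tuple(v):
    """Turn "2.14.0" into (2, 14, 0) for ordered comparison."""
    return tuple(int(part) for part in v.split("."))

def is_affected(installed, fixed_in):
    """Vulnerable if the installed version predates the fixed version."""
    return version_tuple(installed) < version_tuple(fixed_in)

def is_exploitable(installed, fixed_in, component_loaded):
    """Exploitable only if affected AND the vulnerable code is reachable."""
    return is_affected(installed, fixed_in) and component_loaded

# A host on a pre-fix version, but with the vulnerable module disabled,
# can be deprioritized; the same version with the module loaded cannot.
print(is_affected("2.14.0", "2.17.1"))            # affected
print(is_exploitable("2.14.0", "2.17.1", False))  # lower priority
print(is_exploitable("2.14.0", "2.17.1", True))   # patch now
```

Separating “affected” from “exploitable” is the distinction the article describes: it lets teams spend their patching window on the hosts attackers can actually reach.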

The launch of MI-X is the first in a series of initiatives to foster a community to detect, prioritize, and address software vulnerabilities.

Linux thrives alongside a growing security crisis

Recent data from monitoring more than 63 million computing devices across 65,000 organizations shows that the Linux OS is alive and well within businesses.

New research from IT asset management software firm Lansweeper shows that even though Linux lacks the widespread popularity of Windows and macOS, a lot of corporate devices still run the Linux operating system.

Scanning data from more than 300,000 Linux devices in approximately 26,000 organizations, Lansweeper also revealed the popularity of each Linux operating system based on the total number of IT assets managed by each organization.

The company released its findings on August 4, noting that around 32.8 million people worldwide use Linux, that it runs about 90% of all cloud infrastructure, and that nearly all of the world’s supercomputers rely on it.

Research by Lansweeper showed that CentOS is the most widely used distribution (25.6%), followed by Ubuntu (20.8%) and Red Hat (15%). The company didn’t break down the percentages for the many other Linux distributions in use today.

Chart showing Linux devices by company size


Lansweeper suggested that businesses exhibit a disconnect between choosing Linux for its enhanced security and proactively putting security processes in place.

Two recent Linux vulnerabilities this year — Dirty Pipe in March and Nimbuspwn in April — plus new data from Lansweeper show that businesses are flying blind when it comes to the security under their roof.

“It is our belief that the majority of devices running Linux are business-critical servers, which are desired targets for cybercriminals, and logic suggests that the larger the company, the more Linux devices need to be protected,” said Roel Decneut, chief strategy officer at Lansweeper.

“With so many versions and ways of installing Linux, IT teams are faced with the complexity of tracking and managing devices as well as trying to keep them safe from cyberattacks,” he explained.

Since its launch in 2004, Lansweeper has been developing a software platform that scans and inventories all types of IT equipment, installed software, and active users on a network, allowing organizations to centrally manage their IT.

BitLocker, Linux Dual Booting Together Isn’t Perfect

Microsoft Windows users who want to install Linux distributions for dual-booting on the same computer are now caught between a technical rock and a Microsoft hard place. They can thank the increased use of Windows BitLocker software for worsening the Linux dual-booting dilemma.

Developers of Linux distros are facing more challenges in supporting Microsoft’s full-disk encryption on Windows 10 and Windows 11 installations. Fedora/Red Hat engineers noted that the problem is made worse by the full-disk encryption key being sealed using Trusted Platform Module (TPM) hardware.

Fedora’s Anaconda installer, like other Linux distribution installers, cannot resize BitLocker volumes. The workaround is to first resize the BitLocker volume within Windows to create enough free space for the Linux volume on the hard drive. This useful detail is not covered in most commonly available installation instructions for dual-booting Linux.

A related problem complicates the process: the sealed BitLocker encryption key imposes another restriction.

To unseal the key, the boot chain measurements recorded in the TPM’s Platform Configuration Registers (PCRs) must match the values present when the key was sealed. Using the default settings for GRUB in the boot chain of a dual-boot setup produces different measurement values.

According to the discussion of the problem on the Fedora mailing list, dual-boot users attempting to boot Windows 10/11 are then left at the BitLocker recovery screen.
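The measurement mismatch comes from the TPM’s PCR-extend operation, which is a simple hash chain: each boot component is folded into the register as PCR = SHA-256(PCR || measurement), so inserting GRUB changes every value that follows it. A simplified sketch with made-up component names (real PCR usage involves specific registers and event logs):

```python
import hashlib

def extend(pcr, measurement):
    # TPM PCR extend: the register can only be updated by hashing the
    # old value together with the new measurement, never overwritten.
    return hashlib.sha256(pcr + measurement).digest()

def measure_chain(components):
    pcr = b"\x00" * 32  # PCRs reset to zeros at power-on
    for component in components:
        pcr = extend(pcr, component)
    return pcr

# Hypothetical boot chains; the component "measurements" are stand-ins.
windows_only = measure_chain([b"firmware", b"windows-bootmgr"])
dual_boot = measure_chain([b"firmware", b"grub", b"windows-bootmgr"])

# The chains diverge, so a key sealed to the Windows-only values will
# not unseal, and BitLocker falls back to the recovery screen.
print(windows_only != dual_boot)
```

Since the extend operation is one-way, GRUB cannot simply “fix up” the register to the expected values, which is why the problem has no easy bootloader-side workaround.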

Microsoft, Canonical: A Case of Opposites Attract

Canonical and Microsoft have tightened the business knot connecting them with the common goal of better securing the software supply chain.

Both software companies announced on August 16 that .NET is now natively available for Ubuntu 22.04 hosts and containers. The collaboration between the .NET and Ubuntu teams provides enterprise-grade support.

The support lets .NET developers install the .NET SDK and the ASP.NET runtime on Ubuntu 22.04 LTS with a single “apt install” command.
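The package names below reflect what was announced for the Ubuntu 22.04 archive at the time; verify them against current Ubuntu or Microsoft documentation before relying on them:

```shell
# Install the .NET 6 SDK and runtime from the Ubuntu 22.04 archive.
# Package names reflect the August 2022 announcement; run
# `apt search dotnet` on your system to confirm what is available.
sudo apt update
sudo apt install dotnet6                  # SDK plus runtime in one package
sudo apt install aspnetcore-runtime-6.0   # ASP.NET Core runtime only
dotnet --list-sdks                        # verify the installation
```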


Microsoft reverses open-source app sales ban

In what could be the latest case of Microsoft putting its marketing foot in its mouth, the company recently rattled software developers by banning the sale of open-source software in its app store. Microsoft has since reversed that decision.

Microsoft had announced new terms for its app store that were to take effect July 16. The new terms stated that sellers may not attempt to profit from open-source or other software that is otherwise generally available at no cost. Many software developers and re-distributors of free and open-source software (FOSS) sell installable versions of their products in the Microsoft Store.

Redmond said the new restrictions would address the problem of “misleading listings”. Microsoft claimed that FOSS licenses allow anyone to post a version of a FOSS program written by others.

However, developers pushed back, noting that the problem is easily solved the same way regular stores solve it – through trademarked names. Pre-existing trademark rules already let consumers distinguish the actual source of a software product from third-party re-packagers.

Microsoft has since relented and removed the references to open-source pricing restrictions from its store policies. The company clarified that the previous policy was intended to “help protect customers from misleading product listings.”

More information is available in the Microsoft Store Policies document.

A new report from a privileged access management (PAM) firm warns that IT security is getting worse as corporations stay stuck deciding what to do and what it will cost.

Delinea, formerly Thycotic and Centrify, on Tuesday released research based on a survey of 2,100 security decision-makers internationally, revealing that 84% of organizations have experienced an identity-related security breach in the past 18 months.

This revelation comes as enterprises are grappling with expanding entry points and more frequent and advanced attack methods from cybercriminals. It also highlights the gap between the perceived and actual effectiveness of security strategies. Despite the high percentage of acknowledged breaches, 40% of respondents believe they have the right strategy.

Several studies have found that credentials are the most common attack vector. Delinea wanted to know what IT security leaders are doing to reduce the risk of attack. The study focused on learning about organizations’ adoption of privileged access management as a security strategy.

Key findings of the report include:

  • 60% of IT security decision-makers have put off working on an IT security strategy due to multiple concerns;
  • Identity security is a priority for security teams, but 63% believe it is not understood by executive leaders;
  • 75% of organizations will fail to protect privileged identities because they fail to receive the support they need.

ID security is a priority, but board buy-in is critical

Corporate commitment that stops short of actually taking action is a growing pattern executives are following when it comes to IT efforts to provide better breach prevention.

Many organizations are eager to make changes, but three-quarters (75%) of IT and security professionals believe those promises of change will fail to protect privileged identities due to a lack of corporate support, according to the researchers.

The report noted that 90% of the respondents said that their organizations fully recognize the importance of identity security in enabling them to achieve their business goals. Nearly the same percentage (87%) said it was one of the most important security priorities for the next 12 months.

However, a lack of budget commitment and executive alignment resulted in a constant stall on improving IT security. Some 63% of respondents said that their company’s board still does not fully understand identity security and its role in enabling better business operations.

“While business leaders acknowledge the importance of identity security, most security teams will not receive the support and budget they need to put critical security controls and solutions in place to mitigate key risks,” said Joseph Carson, chief security scientist and advisory CISO at Delinea.

“This means that most organizations will continue to fall short of protecting privileges, leaving them vulnerable to cybercriminals searching for and abusing privileged accounts,” he said.

Lack of policies puts machine ID at great risk

Despite the good intentions of corporate leaders, companies have a long road ahead when it comes to protecting privileged identities and access. According to the report, less than half (44%) of organizations surveyed have implemented ongoing security policies and procedures for privileged access management.

These missing security protections include password rotation or vaulting, time-based or context-based security, and privileged behavior monitoring such as recording and auditing. Even more worrying, more than half (52%) of all respondents allow privileged users to access sensitive systems and data without requiring multifactor authentication (MFA).

Another alarming lapse came to light in the research. Privileged identities include humans, such as domain and local administrators. They also include non-humans, such as service accounts, application accounts, code, and other types of machine identities that automatically connect to and share privileged information.

However, only 44% of organizations manage and secure machine identities. The majority leave them exposed and open to attack.

Graph: Delinea benchmarking security gaps and privileged access

Source: Delinea global survey of cybersecurity leaders


Cybercriminals look for the weakest link, Carson noted. Ignoring ‘non-human’ identities – especially when these are growing at a faster rate than human users – greatly increases the risk of privilege-based identity attacks.

“When attackers target machine and application identities, they can easily eavesdrop,” he told TechNewsWorld.

They move around the network to determine the best place to strike and inflict the most damage. Organizations need to ensure that machine identities are incorporated into their security strategies and follow best practices for protecting all of their IT ‘superuser’ accounts, which, if compromised, could bring the entire business to a halt, he advised.

The security gap is widening

Perhaps the most important finding from this latest research is that the security gap continues to widen. Many organizations are on the right track to secure their businesses and reduce cyber risk, but large security gaps remain for attackers to exploit. That includes securing privileged identities.

An attacker needs to find only one privileged account. As long as businesses leave many privileged identities vulnerable, such as application and machine identities, attackers will continue to exploit them and hold businesses’ operations hostage for ransom payments.

The good news is that organizations realize the high priority of protecting privileged identities. The bad news is that many privileged identities are still exposed, because securing only human privileged identities is simply not enough, Carson explained.

The security gap is widening not only between businesses and attackers but also between IT leaders and business executives. While this is improving in some industries, the problem still exists.

“Until we address the challenge of communicating the importance of cybersecurity to the executive board and the business, IT leaders will continue to struggle to obtain the resources and budget needed to close the security gap,” he warned.

Cloud whack-a-mole

One of the main challenges to securing identities is that in mobile and cloud environments, identities are everywhere. According to Carson, this increases the complexity of securing them.

Businesses are still trying to secure them with the security technologies they already have in place today, but this results in many security gaps and limitations. Some businesses fall even shorter by trying to check the identity-security box with simple password managers, he said.

“However, this still means relying on business users to make good security decisions,” he concluded. “To secure identities, you must first have a good strategy and plan in place. That means knowing the types of privileged identities that exist in the business, and understanding and using security technology that is designed to find and protect them.”

As criminal activity on the Internet continues to intensify, hunting bugs for cash is attracting more and more security researchers.

In its latest annual report, bug bounty platform Intigriti revealed a 43% increase in the number of researchers signing up for its services from April 2021 to April 2022. For Intigriti alone, that means adding 50,000 researchers.

For the most part, the report noted, bug bounty hunting is part-time work, with 54% of researchers holding full-time jobs and another 34% being full-time students.

“Bug bounty programs are tremendously successful for both organizations and security researchers,” said Ray Kelly, a fellow at WhiteHat Security, an application security provider in San Jose, Calif., which was recently acquired by Synopsys.

“Effective bug bounty programs limit the impact of serious security vulnerabilities that could easily have put an organization’s customer base at risk,” he told TechNewsWorld.

“Payments for bug reports can sometimes exceed six-figure amounts, which may seem like a lot,” he said. “However, the cost of fixing and recovering a zero-day vulnerability for an organization can total millions of dollars in lost revenue.”

‘Good faith’ rewarded

As if that weren’t incentive enough to become a bug bounty hunter, the US Department of Justice recently sweetened the career path by adopting a policy stating it would not enforce the federal Computer Fraud and Abuse Act against hackers who act in “good faith” when attempting to discover flaws in software and systems.

“The recent policy change to prevent prosecuting researchers is welcome and long-awaited,” said Mike Parkin, senior technical engineer at Vulcan Cyber, a provider of SaaS for enterprise cyber risk prevention in Tel Aviv, Israel.

“The fact that researchers have, over the years, tried to help find and fix security flaws under a regime that amounted to ‘no good deed goes unpunished’ shows the dedication it takes to do the right thing, even when doing the right thing meant risking fines and jail time,” he told TechNewsWorld.

“This policy change removes a fairly significant obstacle to vulnerability research, and we can expect it to pay dividends quickly, with more people discovering bugs in good faith without the risk of jail time for doing so.”

Today, ferreting out bugs in other people’s software is considered a respectable business, but that wasn’t always the case. “Basically, there were a lot of issues back when bug bounty hunters would find vulnerabilities,” said James McQuiggan, a security awareness advocate at KnowBe4, a security awareness training provider in Clearwater, Fla.

“Organizations would take a lot of offense and try to accuse the researcher of hacking when, in fact, the researcher wanted to help,” he told TechNewsWorld. “The industry has recognized this, and now email addresses have been established to receive such information.”

Benefits of multiple eyes

Over the years, companies have come to realize what bug bounty programs can bring to the table. “The task of discovering and prioritizing exploitable weaknesses and unintended consequences is not, and should not be, the sole focus of an organization’s resources or efforts,” explained Casey Ellis, CTO and founder of Bugcrowd, which operates a crowdsourced bug bounty platform.

“As a result, a more scalable and effective answer to the question ‘where am I most likely to be compromised’ is no longer considered a nice-to-have, but a must-have,” he told TechNewsWorld. “This is where bug bounty programs come into play.”

“Bug bounty programs are a proactive way to spot vulnerabilities and reward researchers’ good work and discretion,” said Davis McCarthy, a lead security researcher at Valtix, a provider of cloud-native network security services in Santa Clara, Calif.

“The old adage, ‘Many eyes make all bugs shallow,’ rings true because there is a dearth of talent in the field,” he told TechNewsWorld.

Parkin agreed. “With the sheer complexity of modern code and the myriad interactions between applications, it’s important to have more responsible eyes looking for flaws,” he said.

“Threat actors are always working to find new vulnerabilities they can exploit, and the threat landscape in cybersecurity has only gotten more hostile,” he continued. “The rise of bug bounties is a way for organizations to bring some of the independent researchers into the game on their side. It’s a natural response to an increase in sophisticated attacks.”

Bad Actor Reward Program

Although bug bounty programs have gained greater acceptance among businesses, they can still cause friction within organizations.

“Researchers often complain that even when firms have a coordinated disclosure or bug bounty program, a lot of pushback or friction exists. They often feel slighted or pushed away,” said Archie Agarwal, founder and CEO of ThreatModeler, an automated threat modeling provider in Jersey City, N.J.

“Organizations, for their part, often get stuck when presented with a disclosure because the researcher found a fatal design flaw that would require months of concerted effort to rectify,” he told TechNewsWorld. “Maybe some would prefer those kinds of flaws to stay out of sight.”

“The effort and expense of fixing design flaws after a system has been deployed is a significant challenge,” he continued. “The surest way to avoid this is by threat modeling systems as their designs evolve. That provides organizations the ability to plan for and deal with these flaws proactively, before they materialize.”

Perhaps the biggest proof of the effectiveness of bug bounty programs is that malicious actors have begun to adopt the practice. The LockBit ransomware gang is offering payments to those who discover vulnerabilities in its leak website and its code.

“This development is novel; however, I suspect they will get many takers,” predicted John Bambenek, principal threat hunter at Netenrich, a San Jose, Calif.-based IT and digital security operations company.

“I know that if I find a vulnerability, I’m going to use it to put them in jail,” he told TechNewsWorld. “And if a criminal finds one, it will be used to steal from them, because there is no honor among ransomware operators.”

“Ethical hacking programs have been hugely successful. It is no surprise to see ransomware groups refining their methods and services in the face of that competition,” said Casey Bisson, head of product and developer relations at BluBracket, a cybersecurity services company in Menlo Park, Calif.

He warned that attackers are increasingly aware that they can buy access to the companies and systems they want to attack.

“That includes looking at the security of the internal supply chains every enterprise has, including who has access to their code and any secrets therein,” he told TechNewsWorld. “Unethical bounty programs like these turn passwords and keys in code into cash for whoever has access to your code.”

Lately I’ve been thinking a lot about thinking. There are a couple of reasons for this.

First, thinking well is a prerequisite for developing any credible expertise in any kind of computer science or engineering discipline. With the right mental toolset, you can bootstrap knowledge of any subject matter you might need.

Second, in my experience, it is the aspect of computer science and engineering that gets the least attention. There is a real flood of online training resources. But most of them cut right to the nuts and bolts of acquiring basic proficiency with software tooling to qualify someone for a job. This is understandable up to a point. If you’ve never programmed before, the skill you immediately feel lacking is programming language use. In that situation, it is natural to attack it directly.

But while it’s not as exciting as rolling up your sleeves and saying hello to the world, taking the time to learn how to think, and how to solve problems that can’t be solved by hard coding, will pay off in the long run.

Here I will outline what I have found to be the most essential cognitive skills contributing to engineering success.

Your harshest critic should be your thinking

The primacy of critical thinking is such a clichéd aphorism that most of the people I urge to investigate it tune it out. That should not lead anyone to mistakenly believe it is not indispensable, however.

Part of the problem is that it is easy for those who advocate critical thinking to assume that their audience knows what it is and how to do it. Ironically, this notion itself could benefit from some critical thought.

So, let’s go back to basics.

Wikipedia defines critical thinking as “the analysis of available facts, evidence, observations, and arguments for decision-making.” What do the words carrying the most weight mean here? “Facts,” “evidence,” and “observations” are related, because they all try to establish, each in its own way, what we believe to be true.

“Facts” are truths typically established first by other people whose understanding we trust. “Evidence” is made up of specific measured results gathered by you or other trusted persons. “Observations” are those made by the critical thinker himself. If these, too, were events that others (and not the thinker) had witnessed, how would they meaningfully differ from “evidence”?

The inclusion of “arguments” is the odd one out here, but for good reason. That’s where “thinking” really starts to do its heavy lifting. An “argument” describes how the thinker makes rational determinations that point to additional knowledge based on the interplay of facts, evidence, and observations.

The most important word in the definition is “decision.” Critical thinking is not necessarily about trying to prove new truths. It only requires that consideration of all of the foregoing yields some overall judgment about whatever is under consideration.

These decisions need not be absolute; they may be probabilistic. As long as the result is that the matter being considered has been “judged,” and the decision accounts for all available information (not just the information that leads to a desired conclusion), the critical thinking exercise is complete.

Medical Practice

I doubt that is what most people have in mind when they say “critical thinking.” What really matters, however, is whether you practice critical thinking yourself. Funny enough, the way to evaluate whether you think critically… is to think about it critically. Meta, I know, but you have to go there.

In fact, what we have just done in posing these questions is a kind of critical thinking. I have my own go-to critical thinking exercise, which is to ask, “Why is X the way it is?” In other words, what factors acted on, or must have acted on, X, and are those factors producing effects in the other ways I would expect? This is helpful because it acknowledges that nothing exists in a vacuum, which helps ensure that you account for all available facts, not just the obvious ones.

With a working understanding of the practice of critical thinking, get into the habit of using it to sift verified reality from perceived reality. Try not to accept anything as true until you have run it through this process. Does the given statement square with the other facts you have on the matter? Is it plausible? Does it make sense given the context?

I don’t need to tell you how valuable this is when working with computers. I shouldn’t have to, because by now you are able (if you weren’t before) to figure that out for yourself.

Try Before You Cry

This is something that has appeared in my other pieces, but which deserves to be reiterated here in the interest of completeness.

We all need help sometimes, but your coworkers will expect you to try to solve the problem yourself first. Time is a scarce resource, so they want to know that their time is being spent wisely; if the answer you needed was only a Google search away, it probably wasn’t. Also, if you have tried to solve the problem yourself, the person helping you can pick up where you left off. This lets them rule out a number of possible causes that would otherwise take time to test.

You also never know whether your fellow engineers will be available or knowledgeable enough to help when you need it. What if you’re the only one who knows anything about the project you’re working on? Or what if you’re on such a tight deadline that you can’t wait for a response? Develop dependable problem-solving habits, because in the end those are all you can count on.

What does that mean in practice? Develop a troubleshooting process. Write down step-by-step basic diagnostics for the major types of problems you face. Then run whichever diagnostics apply.

Prepare a list of reliable reference materials and consult them before asking questions. Each time one sends you to the user manual, keep track of where you looked and what was and wasn’t helpful. Then, when it is time to ask for help, compile the results of your diagnostics along with excerpts from the reference material, and present everything to whomever you ask. They will appreciate that you did.

Learn Skills, Not Factoids

As in every field, there are certainly facts you should memorize. For example, your life as a developer will be easier if you memorize the syntax of conditional statement blocks in your go-to language.

Yet memorization is not as important as acquiring the skill set. For example, if you remember the syntax of your regular programming languages, you can get decently far. But what if you need to learn a module or an entirely new language that formats things differently? If you instead know how to find what you need from reliable sources, it may take longer, but you will get the right answer no matter what software or language you are using.
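To make the distinction concrete, here is the sort of factoid worth memorizing, sketched in Python (the `classify` function is an invented example; other languages express the same logic with different syntax):

```python
def classify(n: int) -> str:
    # Python marks conditional blocks with a colon and indentation;
    # other languages use braces, "elsif", "fi", and so on.
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    else:
        return "positive"

print(classify(-5))  # negative
```

Memorizing this pattern saves constant lookups, but the transferable skill is knowing where to confirm the equivalent syntax when the language changes.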

The iterative and incremental design paradigm for software development is an example of a skill.

Here, “incremental” relates to modularity. It prompts the developer to break the overall project down into the smallest possible pieces, with each piece doing only one thing and depending on the other pieces as little as possible (ideally not at all). Then the developer’s task is simply to build each piece one by one.

The “iterative” element means that the developer builds, edits, and tests each component cyclically until it works on its own; nobody moves forward until then. This applies not just to any one language or application build, but works well beyond the scope of computing entirely.
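As a rough sketch of the paradigm in code (the order-summary task and function names are invented for illustration), each piece below does one thing, is tested on its own, and only then gets composed into the whole:

```python
# Hypothetical task: total up "item,price" order lines. Each increment
# is verified in isolation before the next one builds on it.

def parse_line(line: str) -> tuple:
    """Parse 'item,price' into a (name, price) pair."""
    name, price = line.split(",")
    return name.strip(), float(price)

def total(prices: list) -> float:
    """Sum prices; trivially testable on its own."""
    return sum(prices)

def summarize(lines: list) -> float:
    """Compose the already-verified pieces."""
    return total([parse_line(line)[1] for line in lines])

# Iterate: test each component before moving forward.
assert parse_line("apple, 1.50") == ("apple", 1.5)
assert total([1.5, 2.0]) == 3.5
assert summarize(["apple,1.50", "pear,2.00"]) == 3.5
```

The decomposition itself, not any Python detail, is the point: the same habit applies to any language and to plenty of work away from a keyboard.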

This design philosophy is just one example of how a skill serves engineers better than a rote process, but many others exist. Figure out what your discipline’s equivalents are and get comfortable using them.

Stop by the Bakery, You’ll Need Breadcrumbs

Write down everything. Taking notes is cheaper than ever, so nothing is stopping you. If you prefer digital, you are basically free to write as much as you want: open a word processor and see for yourself. If notebooks are your thing, a few bucks at an office supply store and you’re set.

Reading notes is also cheaper, in time spent, than trying to find the same thing on the web over and over. There is no reason to have to look something up twice if it hasn’t changed since the last time. It is tempting to assume that you will remember something, or that you won’t need it again. Don’t. Eventually you will be wrong, and it will take unnecessary time to find it again.

Your notes are also the only place where you can customize what you learn to suit your needs. The web has no shortage of answers, but they may not be exactly what you need. If you take notes, you can adapt the information to your use case before recording it.

The real trick with notes is to have an organizational system. Writing things down does you no good if you can’t find them again. Even if you’re already an avid note taker, try a few note-taking techniques until you find one you like.

Step Up to the Starting Block

In running, you set yourself up for victory or defeat in your training. If you haven’t trained diligently, no amount of extra effort will make a difference once the race starts. That said, you still have to put your training into practice on the track.

The cognitive skills I have discussed here are not even the training so much as your coach’s fitness regimen. I’m certainly no Olympic coach, but a regimen beats not having one at all. The training is now in your hands.

Ransomware is the top supply chain risk facing organizations today, according to a survey released Monday by ISACA, an association of IT professionals with 140,000 members in 180 countries.

The survey, based on responses from more than 1,300 IT professionals with supply chain insight, found that nearly three-quarters of respondents (73%) said ransomware was a major concern when considering supply chain risks to their organizations.

Other major concerns included poor information security practices by suppliers (66%), software security vulnerabilities (65%), third-party data storage (61%), and third-party service providers or vendors with physical or virtual access to information systems, software code, or IP (55%).

The heightened concern about ransomware may stem from the double whammy it can deal an organization.

“First, there is the risk of an attacker finding an attack path into an organization from a compromised vendor or software dependency, as we saw with the SolarWinds and Kaseya attacks, which affected a large number of downstream victims through the supply chain,” explained Chris Clements, vice president of solutions architecture at Cerberus Sentinel, a cybersecurity consulting and penetration testing company in Scottsdale, Ariz.

“Then there are secondary effects,” he continued, “where a ransomware gang can steal data stored with a third-party provider and attempt to extort both organizations by threatening to release it publicly if the ransom is not paid.”

“The other side of the coin is that a ransomware attack on an organization’s supply chain can cause significant operational disruption if a third party it depends on is unable to provide services because of a cyberattack,” he told TechNewsWorld.

Leader Ignorance

Attacks on the software supply chain can have a ripple effect on the physical supply chain. “Ransomware contributes to significant disruptions in the already taxed supply chain when the systems that manage the creation and delivery of goods and services are taken offline,” said Erich Kron, security awareness advocate for KnowBe4, a security awareness training provider in Clearwater, Fla.

“This could affect the ordering and tracking of inventory and materials needed to make items, could affect the tracking of the status of orders being filled, and could cause problems with customers receiving materials, creating shortages for their own customers,” he told TechNewsWorld.

“In a world of just-in-time order fulfillment, any delay can affect the supply chain, impacting more and more people along the way,” he said.

Nearly a third of the IT professionals surveyed (30%) disclosed that the leaders of their organizations did not have an adequate understanding of supply chain risk. “The fact that it was only 30% was somewhat encouraging,” ISACA board director Rob Clyde told TechNewsWorld. “A few years ago, that number would have been much higher.”

“I think a lot of the ignorance comes from underestimating the number of dependencies and their criticality to how an organization operates,” Clements said.

“These third-party tools, by their nature, often require administrative rights to many, if not all, of the customer devices they interact with, meaning that the compromise of just one of these vendors can be enough to completely compromise their customers’ environments.”

“Likewise, there is often ignorance of how much organizations rely on third-party vendors,” he added. “Most organizations do not have a ready-to-go fallback plan if a major provider, such as their email communications platform, were to have an extended outage.”

Pessimistic Vein

Even in situations where leaders understand the risks to their supply chains, they do not always err on the side of security. “In situations where companies have to choose between security and growth, every time you see them choosing growth,” said Casey Bisson, head of product and developer relations for BluBracket, a cybersecurity services company in Menlo Park, Calif.

“It comes at the risk of their customers. It comes at the risk of the company itself,” he told TechNewsWorld. “But increasingly, we’re starting to see executives being held accountable for those choices.”

The ISACA survey also found a strong vein of pessimism among IT professionals about the security prospects of their supply chains. Only 44% indicated they had high confidence in the security of their organization’s supply chain, while 53% expected supply chain issues to remain the same or get worse over the next six months.

ISACA Survey Results: Top Supply Chain Risks

Source: ISACA | Understanding Supply Chain Security Gaps | 2022 Global Research Report

One of the more surprising findings of the survey was that 25% of organizations said they had experienced a supply chain attack in the past 12 months. “I didn’t think it would be anywhere near that high,” Clyde said.

“While many organizations have experienced cyberattacks in the past 12 months, I didn’t think that many would blame a supply chain problem. If we had asked this question several years ago, it would have been a much smaller number,” he said.

Meanwhile, more than eight in 10 tech experts (84%) said their supply chains needed better governance than they have now.

“The way we try to authenticate supply chain partners today just doesn’t work,” said Andrew Hay, COO of Lares, an information security consulting firm in Denver.

“We either generate an arbitrary score based on external scan data and IP-based confidence or we try and force them to fill out 100 or more questions on a spreadsheet,” he told TechNewsWorld. “Neither accurately reflects how secure an organization is.”

Need for Auditing

Many factors come into play when trying to secure a supply chain, said Mike Parkin, a senior technical engineer at Vulcan Cyber, a provider of SaaS for enterprise cyber risk prevention in Tel Aviv, Israel.

“Organizations only have full visibility into their own environments, which means they have to trust that their vendors are following best practices,” he told TechNewsWorld. “That means they need contingencies in place for when a third-party vendor breach occurs, or a build process that severely restricts the damage that can occur if it does.”

“It is even more complicated when an organization needs to deal with multiple vendors to compensate for shortages or disruptions,” he continued. “Even with the right risk management tools, it can be difficult to account for everything in play.”

Kron said there must be some level of trust in suppliers; however, a system of auditing should be established if governance is to be extended to verifying what organizations tell us, as opposed to relying on responses to a questionnaire.

“This will inevitably increase costs, something that many organizations work hard to keep as low as possible in order to remain competitive,” he said.

“While this may be easy to justify for critical government or military systems, it can be a hard sell for traditional suppliers,” he said. “To add to the challenges, it may be difficult or impossible to impose a regime on foreign suppliers of goods and materials. This is not an easy challenge to tackle, and it will remain a topic of discussion for a long time.”

Government organizations and educational institutions, in particular, are increasingly in the crosshairs of hackers as serious web vulnerabilities continue to rise.

Remote code execution (RCE), cross-site scripting (XSS), and SQL injection (SQLi) are all top software offenders. All three keep rising, or hovering around the same alarming numbers, year after year.

RCE, often the end goal of a malicious attacker, was the main cause of the IT scramble in the wake of the Log4Shell exploit. This vulnerability has seen a steady increase since 2018.

Enterprise security firm Invicti last month released its Spring 2022 AppSec Indicator report, which revealed Web vulnerabilities from more than 939 of its customers worldwide. The findings come from an analysis of the Invicti AppSec platform’s largest dataset — which has more than 23 billion customer application scans and 282,000 direct-impact vulnerabilities discovered.

Research from Invicti shows that one-third of both educational institutions and government organizations experienced at least one incident of SQLi in the past year. Data from 23.6 billion security checks underscores the need for a comprehensive application security approach, with governments and education organizations still at risk of SQL injection this year.

Data shows that many common and well-understood vulnerabilities in web applications are on the rise. It also shows that the current presence of these vulnerabilities presents a serious risk to organizations in every industry.

According to Mark Ralls, president and COO of Invicti, even well-known vulnerabilities are still prevalent in web applications. To make security part of the DNA of an organization’s culture, processes, and tooling, organizations must gain command of their security posture so that innovation and security work together.

“We’ve seen the most serious web vulnerabilities either hold steady or increase in frequency over the past four years,” Ralls told TechNewsWorld.

Key Takeaways

Ralls said the most surprising aspect of the research was the rapid rise in the incidence of SQL injection among government and education organizations.

Particularly troubling is SQLi, which has increased in frequency by five percent over the past four years. This type of web vulnerability allows malicious actors to modify or change the queries an application sends to its database. That is of particular concern to public sector organizations, which often store highly sensitive personal data and information.
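A minimal sketch of why SQLi works and how parameterization defuses it, using Python’s built-in sqlite3 module (the table and queries are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

def find_user_unsafe(name):
    # String concatenation lets crafted input rewrite the query itself.
    return conn.execute(
        "SELECT role FROM users WHERE name = '" + name + "'"
    ).fetchall()

def find_user_safe(name):
    # A parameterized query treats the input strictly as data.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # [('admin',), ('user',)] -- every row leaks
print(find_user_safe(payload))    # [] -- the payload matches nothing
```

The injected payload turns the unsafe query’s WHERE clause into a tautology; the parameterized version never interprets it as SQL.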

RCE is the crown jewel for any cyberattacker and was the driver behind last year’s Log4Shell scramble. It, too, is up five percent since 2018. XSS saw a six percent increase in frequency.
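XSS, for its part, hinges on untrusted input reaching a page unescaped. A minimal sketch using Python’s standard html module (the render_comment helper is hypothetical):

```python
import html

def render_comment(comment: str) -> str:
    # Escaping user input before interpolation prevents reflected XSS:
    # characters like < and > become inert HTML entities.
    return "<p>" + html.escape(comment) + "</p>"

print(render_comment("<script>alert(1)</script>"))
# <p>&lt;script&gt;alert(1)&lt;/script&gt;</p>
```

Real applications lean on templating engines that escape by default, but the principle is the same: output encoding keeps markup and data separate.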

“These trends were echoed throughout the report’s findings, revealing a worrying situation for cybersecurity,” Ralls said.

Skills Gap, Talent Shortage Implicated

Another big surprise for researchers was the increase in the number of vulnerabilities reported by organizations that scan their assets. There can be many reasons for this, but a lack of developers trained in cybersecurity is a major culprit.

“Developers, in particular, may need more education to avoid these errors. We have noticed that vulnerabilities are not being discovered during scanning, even in the early stages of development,” Ralls explained.

When developers don’t address vulnerabilities, they put their organizations at risk. He said automation and integration tools can help developers address these vulnerabilities more quickly and reduce potential costs to the organization.

Don’t Blame Web Apps Alone

Web apps aren’t getting any less secure per se. It’s a matter of developers being tired, overworked, and often not having enough experience.

Often, organizations hire developers who lack the necessary cybersecurity background and training. According to Ralls, with the continuing push toward digital transformation, businesses and organizations are digitizing and developing apps for ever more aspects of their operations.

“In addition, the number of new web applications entering the market every day means that every additional app is a potential vulnerability,” he said. For example, a company with ten applications is less likely to have a SQLi vulnerability than a company with 1,000 applications.

Applying Treatment

Business teams, whether developing or using software, require both the right paradigm and the right technologies. This involves prioritizing a secure design model that covers all the bases and bakes security into the pre-code processes behind the application architecture.

“Break up the silos between teams,” Ralls advised, “particularly between security and development, and make sure organization-wide norms and standards are in place and adopted universally.”

With regard to investing in AppSec tools to stem the rising tide of faulty software, Ralls recommends using robust tools that:

  • Automate as much as possible;
  • Integrate seamlessly into existing workflows;
  • Provide analysis and reporting to show evidence of success and where more work needs to be done.

Don’t overlook the importance of accuracy. “Tools with low false-positive rates and clear, actionable guidance for developers are essential. Otherwise, you waste time, your team won’t embrace the technology, and your security posture won’t improve,” he concluded.

Blind Spots at Play

Ralls said critical breaches and dangerous vulnerabilities continue to expose organizations’ blind spots. For proof, look no further than Log4Shell’s tornado-like effects.

Businesses around the world scrambled to test whether they were susceptible to RCE attacks via the widely used Log4j library. Some of these risks keep increasing in frequency when they should be gone for good. It comes down to a disconnect between the reality of risk and the strategic mandate for innovation.

“It is not always easy to get everyone on board with security, especially when it appears that security is holding individuals back from project completion or would be too costly to set up,” Ralls said.

Increasingly effective cybersecurity strategies and scanning technologies can reduce persistent threats and make it easier to bridge the gap between security and innovation.

Do you know whether your company’s data is clean and well managed? And why does it matter, anyway?

Without a working governance plan, you may have no company to worry about, at least data-wise.

Data governance is a collection of practices and procedures establishing rules, policies and procedures that ensure data accuracy, quality, reliability and security. It ensures the formal management of data assets within an organization.

Everyone in business understands the need to have and use clean data. But making sure it’s clean and usable is the bigger challenge, according to David Kolinek, vice president of product management at Ataccama.

That challenge is compounded when business users have to rely on scarce technical resources. Often, no single person oversees data governance, or that person lacks a complete understanding of how the data will be used and how to clean it.

This is where Ataccama comes into play. The company’s mission is to provide a solution that even people without technical knowledge, such as SQL skills, can use to find the data they need, evaluate its quality, understand how to fix any issues, and determine whether that data will serve their purposes.

“With Ataccama, business users don’t need to involve IT to manage, access, and clean their data,” Kolinek told TechNewsWorld.

Keeping the User in Mind

Ataccama was founded in 2007 and was originally bootstrapped.

It started as part of a consulting company, Adastra, which is still in business today. However, Ataccama focused on software rather than consulting, so management spun that operation off as a product company addressing data quality issues.

Ataccama started with a basic approach: an engine that did basic data cleansing and transformation. But it still required an expert user because of the user-supplied configuration.

“So, we added a visual presentation of the steps, enabling things like data transformation and cleansing. This made it a low-code platform, because users were able to do most of the work using just the application’s user interface. But at that point it was still a fat-client platform,” Kolinek explained.

However, the current version is designed with the non-technical user in mind. The software includes a thin client, a focus on automation, and an easy-to-use interface.

“But what really stands out is the user experience, thanks to the seamless integration we were able to achieve with the 13th version of our engine. It delivers robust performance that is crafted to perfection,” he offered.

Digging Deeper Into Data Management Issues

I asked Kolinek to discuss the issues of data governance and quality further. Here is our conversation.

TechNewsWorld: How is Ataccama’s concept of centralizing or consolidating data management different from other cloud systems such as Microsoft, Salesforce, AWS, and Google Cloud?

David Kolinek: We are platform agnostic and do not target a specific technology. Microsoft and AWS have their own native solutions that work well, but only within their own infrastructure. Our portfolio is wide open so it can serve all use cases across any infrastructure.

In addition, we have data processing and metadata management capabilities that not all cloud providers have. Metadata is useful for automated processing, and that processing generates more metadata, which can be used for additional analysis.

We developed both of these technologies in-house, so we can provide native integration between them. As a result, we can provide a better user experience and complete automation.

How is this concept different from the notion of standardization of data?

David Kolinek, Vice President of Product Management, Ataccama

Kolinek: Standardization is just one of many things we do. Typically, standardization can be easily automated, in the same way that we can automate cleansing or data enrichment. We also provide for manual data correction when resolving certain issues, such as a missing Social Security number.

We cannot generate an SSN, but we can derive a date of birth from other information. So, standardization is not something separate; it is a subset of the things that improve quality. But for us it is not just about data standardization. It is about having good quality data so the information can be leveraged properly.
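As a toy illustration of the kind of rule-based standardization being described, and not Ataccama’s actual implementation, a cleaner might normalize dates to one format and flag anything it cannot parse for manual correction:

```python
from datetime import datetime

# The accepted input formats here are assumptions for illustration.
KNOWN_FORMATS = ["%m/%d/%Y", "%d-%m-%Y", "%Y-%m-%d"]

def standardize_date(raw):
    """Normalize a date string to ISO 8601, or None to flag for manual review."""
    for fmt in KNOWN_FORMATS:
        try:
            return datetime.strptime(raw.strip(), fmt).strftime("%Y-%m-%d")
        except ValueError:
            continue
    return None  # unresolved: route to manual data correction

print(standardize_date("12/31/1980"))  # 1980-12-31
print(standardize_date("31-12-1980"))  # 1980-12-31
print(standardize_date("not a date"))  # None
```

Automation handles the recognizable cases; the None results are exactly the residue that still needs a human in the loop.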

How does Atacama’s data management platform benefit users?

Kolinek: User experience is really our biggest advantage, and the platform is ideal for serving multiple personas. Companies need to enable both business users and IT people when it comes to data management; that requires a solution that lets business and IT collaborate.

Another great advantage of our platform is the strong synergy between data processing and metadata management that it provides.

Most other data management vendors cover only one of these areas. We also use both machine learning and a rules-based approach to validation and standardization; again, most other vendors support only one or the other.

Furthermore, because we are technology agnostic, users can connect to many different technologies from a single platform. With edge processing, for example, you can configure something once in Ataccama ONE, and the platform will translate it for different platforms.

Does Ataccama’s platform lock in users the way proprietary software often does?

Kolinek: We developed all the main components of the platform ourselves, and they are tightly integrated. There has been a huge wave of acquisitions in this space lately, with big vendors buying smaller vendors to fill gaps. In some cases, you end up buying and managing not one platform, but several.

With Ataccama, you can buy just one module, such as data quality/standardization, and later expand to others, such as master data management (MDM). It all works together seamlessly. Just activate our modules as you need them. This makes it easy for customers to start small and expand when the time is right.

Why is the Integrated Data Platform so important in this process?

Kolinek: The biggest advantage of a unified platform is that companies are not hunting for point solutions to individual problems like data standardization. It is all interconnected.

For example, to standardize you must verify the quality of the data, and for that, you must first find and catalog it. If you have an issue, even though it may seem like a discrete problem, it probably touches many other aspects of data management.

The beauty of an integrated platform is that in most use cases you already have a solution with native integration, and you can simply start using other modules.

What role do AI and ML play today in data governance, data quality and master data management? How is this changing the process?

Kolinek: Machine learning enables customers to be more proactive. Previously, you would identify and report a problem, someone would have to check what went wrong and see whether there was anything wrong with the data, and then you would create a data quality rule to prevent a repetition. It was all reactive, based on something being broken, found, reported, and fixed.

ML lets you be proactive instead. You give it training data rather than rules. The platform then learns the patterns and flags discrepancies to help you realize there is a problem. This is not possible with a rules-based approach, and it is much easier to scale when you have a large number of data sources. The more data you have, the better the training and its accuracy.
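The contrast can be sketched with nothing but Python’s standard statistics module; the hard-coded bound and the sample values are invented for illustration, not Ataccama’s implementation:

```python
import statistics

def rule_check(value):
    # Rules-based: someone saw a past failure and hard-coded a bound.
    return 0 <= value <= 100

def train(samples):
    # "Training": learn what normal looks like from historical data.
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomaly(value, mean, stdev, z=3.0):
    # Flag values far outside the learned pattern; no hand-written rule needed.
    return abs(value - mean) > z * stdev

history = [49.0, 50.0, 51.0, 50.5, 49.5, 50.2, 49.8]
mean, stdev = train(history)
print(rule_check(95.0))               # True: the rule happily passes it
print(is_anomaly(95.0, mean, stdev))  # True: the trained model flags it
```

The value 95.0 satisfies the hand-written rule, yet it is wildly unlike the training data, which is precisely the kind of discrepancy a learned model surfaces before anyone writes a rule for it.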

Aside from cost savings, what benefits can enterprises gain from consolidating their data repositories? For example, does it improve security, CX results, etc.?

Kolinek: It improves security and minimizes potential future leaks. For example, we had customers who were storing data that no one was using. In many cases, they didn’t even know the data existed! Now, they are not only consolidating their technology stack, but they can also see all the stored data.

It is also much easier to onboard newcomers to the platform when the data is consolidated. The more transparent the environment, the sooner people will be able to use it and start getting value.

It is not so much about saving money as it is about leveraging all your data to generate a competitive advantage and generate additional revenue. It provides data scientists with the means to build things that will drive business forward.

What are the steps in adopting a data management platform?

Kolinek: Start with a preliminary analysis. Focus on the biggest issues the company wants to tackle and select the platform modules that address them. It is important to define goals at this stage. Which KPIs do you want to target? What level of data quality do you want to achieve? These are questions you should ask.

Next, you need a champion to drive execution and to identify the key stakeholders behind the initiative. This requires extensive communication among the various stakeholders, so it is important that someone focuses on educating others about the benefits and helping teams onboard onto the system. Then comes the implementation phase, where you address the key issues identified in the analysis, followed by the rollout.

Finally, think about the next set of issues to address and, if necessary, enable additional modules in the platform to achieve those goals. The worst thing you can do is buy a tool and provision it, but provide no service, education, or support; that all but guarantees a low adoption rate. Education, support, and service are very important during the adoption phase.

The best thing for me about tech-related topics is that they are probably easier than any others to learn online. In fact, that’s exactly how I built the computer science foundation that supports my work. Without an internet full of resources, I would not be where I am today.

Like many who shared my path, I initially devoured every online resource I could get my hands on. But as I invest more years in my career, I increasingly notice the shortcomings of the material a learner is most likely to encounter.

At first, I found that I had to relearn some concepts I thought I understood. Then, the further along I got, the more I discovered that even my self-taught peers were disoriented on some point or another.

This inspired me to investigate how misconceptions spread. Of course, not everyone gets everything right all the time. It is human to make mistakes, after all. But with such knowledge available online, in theory, misinformation should not spread widely.

So where did it come from? In short, the same market forces that make computer science-driven fields attractive also provide fertile ground for questionable training material.

To give back to computer science education in a small way, I want to share my observations on judging the quality of instructional resources. Hopefully, those of you on a similar path will learn the easy way what I learned the hard way.

Setting Up Our Self-Dev Environment

Before we begin, I want to acknowledge that no one likes being told their work is less than stellar. I’m definitely not going to name names. For one thing, there are so many names that a heuristic approach is the only practical way to go.

More importantly, instead of just telling you where not to go, I’ll provide you with the tools to evaluate for yourself.

Heuristics are also more likely to point you in the right direction. If I declare that website X has subpar content and I am wrong, nobody gains anything. Even worse, you may have missed out on an edifying source of knowledge.

However, if I outline the signs that suggest a website may be off the mark, then even though they may occasionally lead you to mistakenly discount a trusted resource, in most cases they will still lead you to sound conclusions.

The invisible hand of the market deals a strong hand

To understand where information of questionable quality comes from, we need to dust off our Econ 101 notes.

Why do tech jobs pay so much? High demand meets low supply. The need for software developers is so urgent, and software development trends evolve so rapidly, that tons of resources have been hastily produced to train the latest wave.

But the market forces don’t stop there. When demand outstrips supply, production feels pressure to ramp up. If production picks up and the price stays the same, quality goes down. Sure, prices could simply rise instead, but a major selling point of technical training is that much of it is free.

So, if a site can’t cope with the sharp drop in users that comes with moving from free to paid, can you blame it for staying free? Multiply this by even a modest share of all free training sites and the result is an overall drop in training quality.

Furthermore, because innovation in software development practices tends to iterate, so does this cycle of declining educational quality. What happens once the hastily prepared training material is consumed? Over time, the workers who consumed it become the new “experts.” In short order, these “experts” produce another generation of resources, and so the cycle continues.

Bootstrap your learning with your own bootstrap

Obviously, I am not asking you to regulate this market. What you can do, however, is learn to identify credible sources on your own. I promised heuristics, so here are some I use to get a rough estimate of the value of a particular resource.

Is the site run by a for-profit company? It’s probably not that solid, or at least not useful for your specific use case.

At times, these sites are selling something to non-technical customers. Their information is simplified to appeal to non-technical company leadership, not detailed enough to serve the technical rank and file. Even if the site is intended for someone in your shoes, for-profit organizations tend to avoid handing out tradecraft for free.

Even if the site is for the technically minded, and the company freely shares its practices, its use of a given piece of software, tool, or language may be completely different from how you do, will, or should use it.

Was the site set up by a non-profit organization? If you’ve chosen the right kind, their stuff can be super valuable.

Before you believe what you read, make sure the nonprofit is reputable. Then confirm how closely the site is related to what you’re trying to learn about. For example, python.org, administered by the same people who make Python, would be a great bet for teaching you Python.

Is the site primarily a training operation? Be wary if it is for-profit.

Such organizations generally prioritize placing trainees in jobs as rapidly as possible. Trainee quality comes second. Sadly, that’s good enough for many employers, especially if it means they can save a buck on salaries.

On the other hand, if the site is a training-focused nonprofit, you can usually rely on it. Often these training-driven nonprofits have a mission to build up the field and support its workers, a mission that relies heavily on people being trained properly.

more to consider

There are a few other factors you should take into account before deciding how seriously to take a resource.

If you’re looking at a forum, measure it based on its relevance and reputation.

General-purpose software development forums are often a frustrating waste of time because, with no defined specialty, there is little chance of specialized experts turning up.

If the forum is explicitly intended to serve a particular job role or software user base, your odds improve, as it’s more likely that you’ll find an expert there.

For things like blogs and their articles, it all depends on the strength of the author’s background.

Writers who develop or use what you’re learning probably won’t lead you in the wrong direction. You’re probably also in good shape with a developer from a major tech company, as these entities can usually retain top-notch talent.

Be suspicious of a writer publishing under a for-profit company’s banner who isn’t even a developer.

summative assessment

If you want to distill this approach into a mantra, you could put it like this: always think about who is writing the advice, and why.

Obviously, no one is trying to be wrong. But writers can only pass along what they know, and a given piece of information may have a focus other than being as accurate as possible.

If you can identify the reasons why a knowledge creator might not keep textbook accuracy at the front of their mind, you’re in less danger of inadvertently building your work on their mistakes.

The first plan of its kind to comprehensively address open source and software supply chain security is awaiting White House support.

The Linux Foundation and the Open Source Security Foundation (OpenSSF) on Thursday brought together more than 90 executives from 37 companies and government leaders from the NSC, ONCD, CISA, NIST, DOE, and OMB to reach a consensus on key actions to improve the resiliency and security of open-source software.

A subset of the participating organizations has collectively pledged an initial tranche of funding for the implementation of the plan. Those companies are Amazon, Ericsson, Google, Intel, Microsoft, and VMware, with more than $30 million in pledges. As the plan evolves, further funding will be identified, and work will begin as individual streams are agreed upon.

The Open Source Software Security Summit II, led by the National Security Council of the White House, is a follow-up to the first summit held in January. That meeting, convened by the Linux Foundation and OpenSSF, came on the one-year anniversary of President Biden’s executive order on improving the nation’s cyber security.

As part of this second White House Open Source Security Summit, open source leaders called on the software industry to standardize on Sigstore developer tools and to support the plan to upgrade the collective cybersecurity resilience of open source and improve trust in software, said Dan Lorenc, CEO and co-founder of Chainguard and a co-creator of Sigstore.

“On the one-year anniversary of President Biden’s executive order, we’re here today to respond with a plan that’s actionable, because open source is a critical component of our national security and is fundamental to the billions of dollars being invested in software innovation today,” Jim Zemlin, executive director of the Linux Foundation, announced Thursday during his organization’s press conference.

push the support envelope

Most major software packages contain elements of open-source software, including code in critical infrastructure used by the national security community. Open-source software supports billions of dollars in innovation, but with it comes the unique challenge of managing cybersecurity across its software supply chains.

“This plan represents our unified voice and our common call to action. The most important task ahead of us is leadership,” said Zemlin. “This is the first time I’ve seen a plan, and the industry rallying behind a plan, that will work.”

The Summit II plan outlines funding of approximately $150 million over two years to rapidly advance well-tested solutions to the 10 key problems identified by the plan. The 10 streams of investment include concrete action steps to build a strong foundation for more immediate improvements and a more secure future.

“What we are doing together here is converting a bunch of ideas and principles about what is broken out there, and what we can do to fix it, into a plan. What we have planned is the basis to get started, as represented by 10 flags in the ground. We look forward to receiving further input and commitments that lead us from plan to action,” said Brian Behlendorf, executive director of the Open Source Security Foundation.

Open Source Software Security Summit II in Washington DC, May 12, 2022. [L/R] Sarah Novotny, Open Source Lead at Microsoft; Jamie Thomas, enterprise security executive at IBM; Brian Behlendorf, executive director of the Open Source Security Foundation; Jim Zemlin, executive director of The Linux Foundation.


plan highlights

The proposed plan is based on three primary goals:

  • Securing open-source software production
  • Improving vulnerability discovery and remediation
  • Shortening ecosystem patching response time

The full plan includes elements to achieve those goals. These include security education, which provides a baseline for software development education and certification. Another element is the establishment of a public, vendor-neutral, objective-metrics-based risk assessment dashboard for the top 10,000 (or more) OSS components.

The plan proposes the adoption of digital signatures on software releases and the establishment of the OpenSSF Open Source Security Incident Response Team to assist open source projects during critical times.
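To make the digital-signature element concrete, here is a minimal sketch of detached release signing and verification. It uses plain openssl as a stand-in because it is ubiquitous and easy to demo; Sigstore’s cosign CLI wraps a similar sign/verify flow and adds short-lived identity certificates and a public transparency log. All file names below are illustrative.

```shell
# Stand-in release artifact (illustrative only).
echo "pretend this is a release tarball" > release.tar.gz

# Generate an Ed25519 signing key pair.
openssl genpkey -algorithm ed25519 -out signing.key
openssl pkey -in signing.key -pubout -out signing.pub

# Produce a detached signature over the artifact.
openssl pkeyutl -sign -rawin -inkey signing.key \
  -in release.tar.gz -out release.tar.gz.sig

# Anyone holding the public key can confirm the artifact is untampered.
openssl pkeyutl -verify -rawin -pubin -inkey signing.pub \
  -in release.tar.gz -sigfile release.tar.gz.sig
```

With cosign, the rough equivalents are `cosign sign-blob` and `cosign verify-blob`, with the added benefit that a signature can be tied to an OpenID identity rather than a long-lived private key that must itself be protected.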

Another plan detail focuses on improved code scanning to accelerate the discovery of new vulnerabilities by maintainers and experts through advanced security tools and expert guidance.

Third-party code reviews, along with any necessary remediation work, will audit up to 200 of the most critical OSS components once per year.

Coordinated data sharing will improve industry-wide research that helps determine the most critical OSS components. Delivering Software Bills of Materials (SBOMs) everywhere will involve improved tooling and training to drive adoption, while build systems, package managers, and distribution systems will gain better supply-chain security tools and best practices.
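For readers who haven’t seen one, an SBOM is simply a machine-readable inventory of the components inside a piece of software. Below is a minimal sketch in the CycloneDX JSON format, one of the common SBOM formats alongside SPDX; the component name and version are invented for illustration:

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.4",
  "version": 1,
  "components": [
    {
      "type": "library",
      "name": "example-http-client",
      "version": "2.4.1",
      "purl": "pkg:npm/example-http-client@2.4.1",
      "licenses": [ { "license": { "id": "MIT" } } ]
    }
  ]
}
```

Ecosystem tools can generate such files automatically from a build, which is what the “SBOM everywhere” stream aims to make routine.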

the Sigstore factor

Chainguard, which co-created Sigstore, is committing financial resources to the public infrastructure and network operated by OpenSSF, and will collaborate with industry peers to deepen work on interoperability so that Sigstore’s impact is felt in every corner of the software ecosystem. This commitment includes at least $1 million per year in support of Sigstore and a pledge to run it on its own node.

Designed and built with maintainers for maintainers, Sigstore has already been widely adopted by millions of developers around the world. Lorenc said now is the time to formalize its role as the de facto standard for digital signatures in software development.

“Because of our work on the SLSA framework and SBOMs, we know the importance of interoperability in the adoption of these critical tools. Interoperability is the linchpin in securing software across the supply chain,” he said.

Related Support

Google announced Thursday that it is creating an “open-source maintenance crew” tasked with improving the security of critical open-source projects.

Google also unveiled its Google Cloud Dataset and Open Source Insights projects to help developers better understand the structure and security of the software they use.

According to Google, “This dataset provides access to critical software supply chain information for developers, maintainers, and consumers of open-source software.”

“Security risks will continue to plague all software companies and open-source projects, and only an industry-wide commitment involving a global community of developers, governments, and businesses can make real progress. Google will continue to play our part to make an impact,” said Eric Brewer, vice president of infrastructure at Google Cloud and Google Fellow.