Government organizations and educational institutions, in particular, are increasingly in the crosshairs of hackers as serious web vulnerabilities continue to rise.

Remote code execution (RCE), cross-site scripting (XSS), and SQL injection (SQLi) are all top software offenders. All three keep rising or hovering around the same alarming numbers year after year.

RCE, often the end goal of a malicious attacker, was behind the industry-wide scramble in the wake of the Log4Shell exploit. This class of vulnerability has seen a steady increase since 2018.

Enterprise security firm Invicti last month released its Spring 2022 AppSec Indicator report, which draws on web vulnerability data from more than 939 of its customers worldwide. The findings come from an analysis of the largest dataset on the Invicti AppSec platform: more than 23 billion customer application scans and 282,000 direct-impact vulnerabilities discovered.

Research from Invicti shows that one-third of both educational institutions and government organizations experienced at least one incident of SQLi in the past year. Data from 23.6 billion security checks underscores the need for a comprehensive application security approach, with governments and education organizations still at risk of SQL injection this year.

The data shows that many common and well-understood vulnerabilities in web applications are on the rise. It also shows that the continued presence of these vulnerabilities poses a serious risk to organizations in every industry.

According to Mark Ralls, President and COO of Invicti, even well-known vulnerabilities are still prevalent in web applications. To ensure that security is part of the DNA of an organization’s culture, processes and tooling, organizations must gain command of their security posture so that innovation and security work together.

“We’ve seen the most serious web vulnerabilities remain either stable or increasing in frequency over the past four years,” Ralls told TechNewsWorld.

Key Takeaways

Ralls said the most surprising aspect of the research was the rapid rise in the incidence of SQL injection among government and education organizations.

Particularly troubling is SQLi, which has increased in frequency by five percent over the past four years. This type of web vulnerability allows malicious actors to modify or inject the queries an application sends to its database. That is of particular concern to public sector organizations, which often store highly sensitive personal data and information.
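To make the mechanism concrete, here is a minimal Python sketch (not drawn from the Invicti report) contrasting a query built by string concatenation, which an attacker can rewrite, with a parameterized query that keeps user input out of the SQL text. The table and column names are hypothetical.

    import sqlite3

    def find_user_unsafe(conn, username):
        # Vulnerable: the input is pasted into the SQL text, so a value like
        # "x' OR '1'='1" changes the query the database actually runs.
        query = "SELECT id, role FROM users WHERE name = '" + username + "'"
        return conn.execute(query).fetchall()

    def find_user_safe(conn, username):
        # Parameterized: the driver passes the value separately from the SQL,
        # so attacker-supplied text can never alter the statement itself.
        query = "SELECT id, role FROM users WHERE name = ?"
        return conn.execute(query, (username,)).fetchall()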

RCE is the crown jewel for any cyberattacker and was the driver behind last year’s Log4Shell scramble. It, too, is up five percent since 2018, while XSS saw a six percent increase in frequency.

“These trends were echoed throughout the report’s findings, revealing a worrying situation for cybersecurity,” Ralls said.

Skills Gap, Lack of Talent Cited

Another big surprise for researchers is the increase in the number of vulnerabilities reported by organizations that scan their assets. There can be many reasons, but the shortage of developers trained in cybersecurity is a major culprit.

“Developers, in particular, may need more education to avoid these errors. We have noticed that vulnerabilities are not being discovered during scanning, even in the early stages of development,” Rawls explained.

When developers don’t address vulnerabilities, they put their organizations at risk. He said automation and integration tools can help developers address these vulnerabilities more quickly and reduce potential costs to the organization.

Don’t Blame Web Apps Alone

Web apps aren’t getting any less secure per se. It’s a matter of developers being tired, overworked and often not having enough experience.

Often, organizations hire developers who lack the necessary cybersecurity background and training. According to Ralls, with the continuing push toward digital transformation, businesses and organizations are digitizing and developing apps for more aspects of their operations.

“In addition, the number of new web applications entering the market every day means that every additional app is a potential vulnerability,” he said. For example, if a company has ten applications, it is less likely to have one SQLi than if the company has 1,000 applications.

Applying the Treatment

Business teams – whether developing or using software – require both the right paradigm and the right technologies. This involves prioritizing a secure design model that covers all bases and bakes security into the pre-code processes behind the application architecture.

“Break up the silos between teams,” Ralls advised, “particularly between security and development, and make sure organization-wide norms and standards are in place and applied universally.”

With regard to investing in AppSec tools to stem the rising tide of faulty software, Ralls recommends robust tools that:

  • Automate as much as possible;
  • Integrate seamlessly into existing workflows;
  • Provide analysis and reporting to show evidence of success and where more work needs to be done.

Don’t overlook the importance of accuracy. “Tools with low false-positive rates and clear, actionable guidance for developers are essential. Otherwise, you waste time, your team won’t embrace the technology, and your security posture won’t improve,” he concluded.

Blind Spots at Play

Ralls said critical breaches and dangerous vulnerabilities continue to expose organizations’ blind spots. For proof, look no further than Log4Shell’s whirlwind effects.

Businesses around the world scrambled to test whether they were susceptible to RCE attacks in the widely used Log4j library. Some of these risks are increasing in frequency when they should go away for good. It comes down to a disconnect between the reality of risk and the strategic mandate for innovation.

“It is not always easy to get everyone on board with security, especially when it appears that security is holding individuals back from project completion or would be too costly to set up,” Ralls said.

An increasing number of effective cyber security strategies and scanning technologies can reduce persistent threats and make it easier to bridge the gap between security and innovation.

Do you know whether your company data is clean and well managed? Why does it matter anyway?

Without a working governance plan, you may have no company to worry about – data-wise.

Data governance is a collection of practices and processes establishing the rules and policies that ensure data accuracy, quality, reliability and security. It ensures the formal management of data assets within an organization.

Everyone in business understands the need to have and use clean data. But making sure it’s clean and usable is a bigger challenge, according to David Kolinek, vice president of product management at Ataccama.

This challenge is compounded when business users have to rely on scarce technical resources. Often, no one person oversees data governance, or that person doesn’t have a complete understanding of how the data will be used and how to clean it up.

This is where Ataccama comes into play. The company’s mission is to provide a solution that even people without technical knowledge, such as SQL skills, can use to find the data they need, evaluate its quality, understand how to fix any issues, and determine whether that data will serve their purposes.

“With Ataccama, business users don’t need to involve IT to manage, access and clean their data,” Kolinek told TechNewsWorld.

Keeping Users in Mind

Ataccama was founded in 2007 and was originally bootstrapped.

It started as part of a consulting company, Adastra, which is still in business today. However, Ataccama focused on software rather than consulting, so management spun off that operation as a product company addressing data quality issues.

Ataccama started with a basic approach: an engine that did data cleaning and transformation. But it still required an expert user because of the user-supplied configuration.

“So, we added a visual presentation of the steps enabling things like data transformation and cleanup. This made it a low-code platform because users were able to do most of the work using just the application user interface. But at that point it was also a fat-client platform,” Kolinek explained.

However, the current version is designed with the non-technical user in mind. The software includes a thin client, a focus on automation, and an easy-to-use interface.

“But what really stands out is the user experience, made possible by the seamless integration we were able to achieve with the 13th version of our engine. It delivers robust performance that is crafted to perfection,” he said.

Digging Deeper Into Data Management Issues

I asked Kolinek to discuss the issues of data governance and quality further. Here is our conversation.

TechNewsWorld: How is Ataccama’s concept of centralizing or consolidating data management different from other cloud systems such as Microsoft, Salesforce, AWS and Google Cloud?

David Kolinek: We are platform agnostic and do not target a specific technology. Microsoft and AWS have their own native solutions that work well, but only within their own infrastructure. Our platform is wide open, so it can serve use cases on any infrastructure.

In addition, we have data processing capabilities that not all cloud providers have. Metadata is useful for automating processing and for generating more metadata, which can be used for additional analysis.

We have developed both these technologies in-house so that we can provide native integration. As a result, we can provide a better user experience and complete automation.

How is this concept different from the notion of standardization of data?

David Kolinek, Vice President of Product Management, Ataccama

Kolinek: Standardization is just one of many things we do. Typically, standardization can be easily automated, in the same way that we can automate cleaning or data enrichment. We also provide manual data correction when resolving certain issues, such as missing Social Security numbers.

We cannot generate an SSN, but we can derive a date of birth from other information. So, standardization is no different: it is a subset of things that improve quality. But for us it is not just about data standardization. It is about having good quality data so that the information can be leveraged properly.
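As a rough illustration of the kind of automated standardization Kolinek describes (a generic sketch, not Ataccama’s implementation), the snippet below normalizes names and reduces several date-of-birth formats to one canonical form, leaving anything unrecognized for manual correction. The field names and formats are hypothetical.

    from datetime import datetime

    # Hypothetical raw records with inconsistent formatting.
    RAW = [
        {"name": " alice SMITH ", "dob": "03/14/1985"},
        {"name": "Bob Jones", "dob": "1990-07-02"},
    ]

    DATE_FORMATS = ("%m/%d/%Y", "%Y-%m-%d", "%d.%m.%Y")

    def standardize_dob(value):
        # Try each known input format and emit one canonical ISO date.
        for fmt in DATE_FORMATS:
            try:
                return datetime.strptime(value, fmt).date().isoformat()
            except ValueError:
                continue
        return None  # leave for manual correction rather than guessing

    def standardize_record(rec):
        return {
            "name": " ".join(rec["name"].split()).title(),  # trim and title-case
            "dob": standardize_dob(rec["dob"]),
        }

    print([standardize_record(r) for r in RAW])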

How does Ataccama’s data management platform benefit users?

Kolinek: User experience is really our biggest advantage, and the platform is ideal for serving multiple personas. Companies need to enable both business users and IT people when it comes to data management. This requires a solution that lets business and IT collaborate.

Another great advantage of our platform is the strong synergy between data processing and metadata management that it provides.

Most other data management vendors cover only one of these areas. We also use both machine learning and a rules-based approach to validation and standardization, a combination that, again, not all other vendors support.

Furthermore, because we are technology-agnostic, users can connect to many different technologies from a single platform. With edge processing, for example, you can configure something in Ataccama ONE once, and the platform will translate it for different platforms.

Does Ataccama’s platform lock in users the same way proprietary software often does?

Kolinek: We have developed all the main components of the platform ourselves, and they are tightly integrated. There has been a huge wave of acquisitions in this space lately, with big vendors buying out smaller vendors to fill in the gaps. In those cases, you are actually buying and managing not one platform, but several.

With Ataccama, you can buy just one module, such as Data Quality/Standardization, and later expand to others, such as Master Data Management (MDM). It all works together seamlessly. Just activate our modules as you need them. This makes it easy for customers to start small and expand when the time is right.

Why is an integrated data platform so important in this process?

Kolinek: The biggest advantage of a unified platform is that companies don’t have to look for a point solution to each single problem, like data standardization. It is all interconnected.

For example, to standardize you must verify the quality of the data, and for that, you must first find and catalog it. If you have an issue, even though it may seem like a discrete problem, it probably involves many other aspects of data management.

The beauty of an integrated platform is that in most use cases, you have a solution with native integration, and you can start using other modules.

What role do AI and ML play today in data governance, data quality and master data management? How is this changing the process?

Kolinek: Machine learning enables customers to be more proactive. Previously, you would identify and report a problem, check what went wrong with the data, and then create a data quality rule to prevent a repeat. That is all reactive: based on something breaking, being found, reported and fixed.

With ML, in contrast, you give the platform training data instead of rules. It then detects differences in patterns and flags discrepancies so you realize there is a problem. That is not possible with a rule-based approach, and it scales easily when you have a large number of data sources. The more data you have, the better the training and its accuracy.
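The contrast Kolinek draws can be sketched in a few lines of Python. This is a generic illustration using scikit-learn’s IsolationForest as a stand-in anomaly detector, not Ataccama’s actual machinery; the data and thresholds are made up.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Rule-based check: a hand-written threshold catches only what you anticipated.
    def rule_check(amounts):
        return [a for a in amounts if a < 0 or a > 10_000]

    # ML-based check: learn what "normal" looks like from historical records,
    # then flag new records whose pattern deviates, with no rule written for them.
    history = np.random.default_rng(0).normal(loc=100, scale=20, size=(1000, 1))
    model = IsolationForest(contamination=0.01, random_state=0).fit(history)

    new_batch = np.array([[95.0], [102.0], [480.0]])  # 480 was never seen before
    print(list(zip(new_batch.ravel(), model.predict(new_batch))))  # -1 marks an anomaly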

Aside from cost savings, what benefits can enterprises gain from consolidating their data repositories? For example, does it improve security, CX results, etc.?

Kolinek: It improves security and minimizes potential future leaks. For example, we had customers who were storing data that no one was using. In many cases, they didn’t even know the data existed! Now, they are not only integrating their technology stack, but they can also see all the stored data.

It is also very easy to add newcomers to the platform with consolidated data. The more transparent the environment, the sooner people will be able to use it and start getting value.

It is not so much about saving money as it is about leveraging all your data to generate a competitive advantage and generate additional revenue. It provides data scientists with the means to build things that will drive business forward.

What are the steps in adopting a data management platform?

Kolinek: Start with a preliminary analysis. Focus on the biggest issues the company wants to tackle and select platform modules to address them. It is important to define goals at this stage. Which KPIs do you want to target? What level of data quality do you want to achieve? These are questions you should ask.

Next, you need a champion to drive execution and identify the key stakeholders driving the initiative. This requires extensive communication among those stakeholders, so it is important to focus on educating others about the benefits and onboarding teams onto the system. Then comes the implementation phase, where you address the key issues identified in the analysis, followed by the rollout.

Finally, think about the next set of issues that need to be addressed, and if necessary, enable additional modules in the platform to achieve those goals. The worst thing you can do is buy a tool and deploy it without providing any service, education or support; that all but guarantees a low adoption rate. Education, support and service are critical to the adoption phase.

The best thing for me about tech-related topics is that they are probably easier than any other to learn online. In fact, that’s exactly how I built the computer science foundation that supports my work. Without an internet full of resources, I would not be where I am today.

Like many who shared my path, I initially devoured every online resource I could get my hands on. But as I have invested more years in my career, I have increasingly noticed the shortcomings of the material learners are most likely to encounter.

At first, I found that I had to re-learn some concepts I thought I understood. Then, the more advanced the material became, the more I discovered that even my self-taught peers had been led astray at some point.

This inspired me to investigate how misconceptions spread. Of course, not everyone gets everything right all the time. It is human to make mistakes, after all. But with so much knowledge available online, in theory, misinformation should not spread widely.

So where did it come from? In short, the same market forces that make computer science-driven fields attractive are those that provide fertile ground for questionable training material.

To give back to computer science education in a small way, I want to share my observations about determining the quality of instructional resources. Hopefully, those of you who are on a similar path will learn from the easy way what I learned the hard way.

Setting Up Our Self-Dev Environment

Before we begin, I want to admit that I understand that no one likes to be told that their work is less than stellar. I’m definitely not going to name names. For one thing, there are so many names that a heuristic is the only practical way to go.

More importantly, instead of just telling you where not to go, I’ll provide you with the tools to evaluate for yourself.

Heuristics are also more likely to point you in the right direction. If I declare that website X has subpar content and I am wrong, then nobody has gained anything. Even worse, you might miss out on an edifying source of knowledge.

However, if I outline the signs that suggest a website may be off the mark, then while those signs may occasionally lead you to mistakenly discount a trustworthy resource, in most cases they will help you draw sound conclusions.

The Invisible Hand of the Market Deals a Strong Hand

To understand where information of questionable quality comes from, we need to dust off our Econ 101 notes.

Why do tech jobs pay so much? High demand meets low supply. The need for software developers is so urgent, and software development trends evolve so rapidly, that tons of resources have been hastily produced to train the latest wave of developers.

But the market forces don’t stop there. When demand outstrips supply, production feels pressure to keep pace. If production speeds up and the price stays the same, quality goes down. Sure, prices could simply rise, but a major selling point of technical training is that much of it is free.

So, if a site can’t cope with the sharp drop in users that comes with moving from free to paid, can you blame it for staying free? Multiply this by even a modest share of all free training sites and the result is a drop in quality of training, overall.

Furthermore, because innovation in software development practices tends to iterate, so does this cycle of decline in educational quality. What happens once the hastily prepared training material is consumed? Over time, the workers who consume it become the new “experts.” In short order, these “experts” produce another generation of resources, and the cycle repeats.

Bootstrapping Your Own Learning

Clearly, I am not asking you to regulate this market. What you can do, however, is learn to identify credible sources on your own. I promised heuristics, so here are some I use to get a rough estimate of the value of a particular resource.

Is the site run by a for-profit company? It’s probably not that solid, or at least not useful for your specific use case.

At times, these sites are selling something or other to tech-illiterate customers. The information is simplified to appeal to non-technical company leadership, not detailed enough to help the technical grunts. Even if the site is intended for someone in your shoes, for-profit organizations try to avoid handing out tradecraft for free.

Even if the site is for the technically minded, and the company freely shares its practices, its use of a given piece of software, tool or language may be completely different from how you do, will, or should use it.

Was the site set up by a non-profit organization? If you’ve chosen the right kind, their stuff can be super valuable.

Before you believe what you read, make sure the nonprofit is reputable. Then confirm how closely the site is related to what you’re trying to learn about. For example, python.org, administered by the same people who make Python, would be a great bet for teaching you Python.

Is the site primarily geared toward training? Be cautious, especially if it is for-profit.

Such organizations generally prioritize placing trainees in jobs as quickly as possible; trainee quality comes second. Sadly, that’s good enough for most employers, especially if it means they can save a buck on salary.

On the other hand, if the site belongs to a training-driven nonprofit, you can usually rate it more highly. These nonprofits often have a mission to build up the field and support its workers, which relies heavily on people being trained properly.

More To Consider

There are a few other factors you should take into account before deciding how seriously to take a resource.

If you’re looking at a forum, measure it based on its relevance and reputation.

General-purpose software development forums can be a frustrating waste of time: with no particular focus, there is little chance of specialized experts hanging around.

If the forum is explicitly intended to serve a particular job role or software user base, chances are you’ll fare better, as it’s more likely that you’ll find an expert there.

For things like blogs and their articles, it all depends on the strength of the author’s background.

Writers who develop or use what you’re learning probably won’t lead you in the wrong direction. You’re probably also in good shape with a developer from a major tech company, as these companies can usually attract and retain top-notch talent.

Be suspicious of writers publishing under the banner of a for-profit company who aren’t developers themselves.

Summative Assessment

If you want to distill this approach into a mantra, you can put it like this: always think about who is writing the advice, and why.

Obviously, no one is trying to be wrong. But writers can only pass on what they know, and some of those sharing information may have priorities other than being as accurate as possible.

If you can identify the reasons why a knowledge creator might not keep textbook accuracy front of mind, you’re in less danger of inadvertently building their mistakes into your own work.

The first plan of its kind to comprehensively address open source and software supply chain security is awaiting White House support.

The Linux Foundation and the Open Source Software Security Foundation (OpenSSF) on Thursday brought together more than 90 executives from 37 companies and government leaders from the NSC, ONCD, CISA, NIST, DOE and OMB to reach a consensus on key actions for improving the resiliency and security of open-source software.

A subset of the participating organizations has collectively pledged an initial tranche of funds toward implementation of the plan. Those companies are Amazon, Ericsson, Google, Intel, Microsoft, and VMware, with more than $30 million in pledges. As the plan progresses, more funding will be identified, and work will begin as individual streams are agreed upon.

The Open Source Software Security Summit II, led by the National Security Council of the White House, is a follow-up to the first summit held in January. That meeting, convened by the Linux Foundation and OpenSSF, came on the one-year anniversary of President Biden’s executive order on improving the nation’s cyber security.

As part of this second White House Open Source Security Summit, open source leaders called on the software industry to standardize on Sigstore developer tools and to support the plan to upgrade the collective cybersecurity resilience of open source and improve trust in software, said Dan Lorenc, CEO and co-founder of Chainguard and a co-creator of Sigstore.

“On the one-year anniversary of President Biden’s executive order, we’re here today to respond with a plan that is actionable, because open source is a critical component of our national security and is fundamental to the billions of dollars being invested in software innovation today,” Jim Zemlin, executive director of the Linux Foundation, announced Thursday during his organization’s press conference.

Pushing the Support Envelope

Most major software packages contain elements of open source software, including code and critical infrastructure used by the national security community. Open-source software supports billions of dollars in innovation, but with it comes the unique challenges of managing cybersecurity across its software supply chains.

“This plan represents our unified voice and our common call to action. The most important task ahead of us is leadership,” said Zemlin. “This is the first time I’ve seen a plan, and an industry willingness to advance a plan, that will work.”

The Summit II plan outlines funding of approximately $150 million over two years to rapidly advance well-tested solutions to the 10 key problems identified by the plan. The 10 streams of investment include concrete action steps to build a strong foundation for more immediate improvements and a more secure future.

“What we are doing here together is converting a bunch of ideas and principles about what is broken, and what we can do to fix it, into a plan we can use to get started. The 10 streams represent flags in the ground. We look forward to receiving further input and commitments that take us from plan to action,” said Brian Behlendorf, executive director of the Open Source Security Foundation.

Open Source Software Security Summit II in Washington DC, May 12, 2022. [L/R] Sarah Novotny, Open Source Lead at Microsoft; Jamie Thomas, enterprise security executive at IBM; Brian Behlendorf, executive director of the Open Source Security Foundation; Jim Zemlin, executive director of The Linux Foundation.


Highlights of the Plan

The proposed plan is based on three primary goals:

  • Securing open source software production
  • Improving vulnerability discovery and remediation
  • Shortening ecosystem patching response time

The plan includes elements to achieve those goals. These include security education, which provides a baseline for software development education and certification. Another element is the establishment of a public, vendor-neutral, objective-metrics-based risk assessment dashboard for the top 10,000 (or more) OSS components.

The plan proposes the adoption of digital signatures on software releases and the establishment of the OpenSSF Open Source Security Incident Response Team to assist open source projects during critical times.
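The digital-signature element boils down to a familiar pattern: a maintainer signs the bytes of a release artifact, and consumers verify that signature against a published public key before trusting the download. The Python sketch below (using the third-party "cryptography" package) shows only that basic idea; Sigstore's actual tooling layers keyless certificates and a public transparency log on top of it, and the artifact bytes here are a placeholder.

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Maintainer side: sign the release artifact's bytes with a private key.
    private_key = Ed25519PrivateKey.generate()
    artifact = b"placeholder bytes standing in for release-1.2.3.tar.gz"
    signature = private_key.sign(artifact)

    # Consumer side: verify with the published public key before installing.
    # verify() raises InvalidSignature if the artifact or signature was tampered with.
    public_key = private_key.public_key()
    public_key.verify(signature, artifact)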

Another plan detail focuses on improved code scanning to accelerate the discovery of new vulnerabilities by maintainers and experts through advanced security tools and expert guidance.

Third-party code reviews, along with any necessary remediation work, will audit up to 200 of the most critical OSS components once per year.

Coordinated data sharing will improve industry-wide research that helps determine the most critical OSS components. Providing a Software Bill of Materials (SBOM) everywhere will require improved tooling and training to drive adoption, while build systems, package managers and distribution systems will get better supply chain security tools and best practices.

The Sigstore Factor

Chainguard, which co-created Sigstore, is committing financial resources to the public infrastructure and network operated by OpenSSF and will collaborate with industry peers to deepen work on interoperability, ensuring that Sigstore’s impact is felt in every corner of the software supply chain and the software ecosystem. This commitment includes at least $1 million per year in support of Sigstore and a pledge to run it on its own node.

Designed and built with maintainers for maintainers, Sigstore has already been widely adopted by millions of developers around the world. Lorenc said now is the time to formalize its role as the de facto standard for digital signatures in software development.

“We know the importance of interoperability in the adoption of these critical tools because of our work on the SLSA framework and SBOM. Interoperability is the linchpin in securing software across the supply chain,” he said.

Related Support

Google announced Thursday that it is creating an “open-source maintenance crew” tasked with improving the security of critical open-source projects.

Google also unveiled its Google Cloud dataset and Open Source Insights projects to help developers better understand the structure and security of the software they use.

According to Google, “This dataset provides access to critical software supply chain information for developers, maintainers, and consumers of open-source software.”

“Security risks will continue to plague all software companies and open-source projects, and only an industry-wide commitment involving a global community of developers, governments and businesses can make real progress. Google will continue to play our part to make an impact,” said Eric Brewer, vice president of infrastructure at Google Cloud and Google Fellow, at the security summit.