A recent gathering of global cybersecurity professionals highlighted the latest attack scenarios that hackers use to infiltrate corporate networks. But contrary to what hopeful potential victims might wish, no silver bullet or software guarantee will completely protect them.

RSA Conference (RSAC) presenters focused on the increasing demand for implementing the zero-trust philosophy. They urged network managers to educate their employees about digital identity proofing, including securing the data points needed to deploy digital ID proofing solutions in practice.

Another major cause of network breaches is organizations integrating their on-premises environments into their cloud environments. This makes the cloud prone to attacks that originate on-premises.

“The RSA Conference plays a vital role in bringing the cybersecurity industry closer together. As cyberattacks grow in frequency and sophistication, it is imperative that public and private sector practitioners and experts are able to come together, hear unique perspectives, and help address today’s greatest challenges,” commented RSA Conference Vice President Linda Gray Martin.

RSAC provides a year-round platform for the community to engage with, learn from, and access cybersecurity content, both online and at in-person events.

According to the RSAC, better cybersecurity will come only with a greater focus on threat hunting, along with authentication, identity, and access management.

head in charge

RSA Federal President Kevin Orr oversees the deployment of security, specifically identity and access management tools, for federal and commercial customers. His company has its roots in the early days of cybersecurity.

At this year’s RSA Conference and the related Public Sector Day, he had the opportunity to speak with leaders in the government and enterprise cybersecurity sector. He shared his views on the state of cybersecurity with TechNewsWorld.

RSA Federal is an identity and access management (IAM) solutions firm that was once a cybersecurity division within Dell Technologies. Today, it has contracts with some of the most security-sensitive organizations in the world.

Notably, the tech firm now known as RSA Federal LLC shares its name with one of the leading encryption algorithms. RSA provides security services and solutions to customers throughout the federal public sector ecosystem.

RSA is a public-key encryption technology commercialized by RSA Data Security, which was founded in 1982. The acronym stands for Rivest, Shamir, and Adleman, the three MIT cryptographers who developed RSA public-key cryptography.
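For readers curious about the math behind the name, the sketch below walks through textbook RSA with deliberately tiny primes. It is purely illustrative; real deployments use primes hundreds of digits long along with padding schemes.

```python
# Textbook RSA with toy numbers; never use keys this small in practice.
p, q = 61, 53                    # two small primes
n = p * q                        # 3233, the public modulus
phi = (p - 1) * (q - 1)          # 3120
e = 17                           # public exponent, coprime with phi
d = pow(e, -1, phi)              # 2753, the private exponent (modular inverse)

message = 65                           # any integer smaller than n
ciphertext = pow(message, e, n)        # encrypt with the public key (e, n)
recovered = pow(ciphertext, d, n)      # decrypt with the private key (d, n)
assert recovered == message            # round-trips: 65 -> 2790 -> 65
```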

long-standing convention roots

A series of corporate sales have positioned RSA to capitalize on a growing need for cybersecurity specialists. Security Dynamics bought the company in 1996. RSA became part of EMC in 2006, and then of Dell when Dell acquired EMC in 2016. A consortium of private equity investors led by Symphony Technology Group bought RSA from Dell in 2020.

The sales reflected both RSA’s and Dell’s corporate strategies. They allowed RSA to focus on security-first organizations while Dell pursued its own product strategy, according to Orr.

The annual RSAC event is an important gathering for the computer security community and is considered the world’s leading information security conference and exhibition. Originally scheduled for February 7–10, the conference was rescheduled for June 6–9 at The Moscone Center in San Francisco due to world events.

RSA Federal is not a conference sponsor. However, its representatives participate in panels, showcases and speeches throughout the event.

This year’s 31st annual conference was the first to be held as a standalone, independent business since the investment from Crosspoint Capital Partners in March. The event drew more than 26,000 attendees, along with hundreds of speakers, more than 400 exhibitors, and over 400 members of the media.

notable takeaway

According to Orr, the biggest takeaways for cybersecurity came from the keynote addresses: security has been reshaped by a rapid digital transformation.

That change happened quickly because of the pandemic, which forced organizations to accelerate support for people working away from home.

Disruption in the physical world is now creating a digital ripple across the entire supply chain. Better supply chain security is needed to prevent tampering with the technology moving through it.

“Another major theme was the role played by massive propaganda. We are in a hyper-connected world. The propaganda blurs how people separate fact from fiction,” Orr said. This continues to influence the use of technology.

Perhaps one of the most damaging effects is a worsening talent shortage. He said not enough people have the skills to deal with cybersecurity threats or know what needs to be done within the cybersecurity domain.

Attacks are on the rise, driven by many different factors. In the previous world, we were all sitting behind a corporate firewall, Orr noted, where security teams could keep tabs on the good guys and the bad guys, except perhaps insiders.

“The firewalls disappeared as soon as we went mobile in the pandemic. Your perimeter of security has disappeared. Some of that boundary needs to be built around identity,” he urged.

Identity perimeter protection

From his catbird seat in the world of cybersecurity, Orr sees that preventing identity breaches is now essential. Organizations must know who is connecting to their network. Security teams need to know what device users are on, where they are in the network, and what access they should have. In this globalized world, those developments really changed things.

“The attack vectors have really changed,” Orr said.

Network managers must now look at the danger areas and figure out how and where to spend their money. They also need to learn what techniques are available and, more importantly, recognize how large the attack surface is.

“That means they need additional sets of people or different sets of skills to come across these open issues and address them,” Orr said.

Those decisions also include ROI factors. He added that what really drives the security question is that a corporate expense is generally expected to show a return on investment.

Ransomware Gone Rogue

The rise of ransomware attacks drains money from businesses. Initially, the prevailing strategy was not to pay the ransom demand. From Orr’s point of view, the better strategy now depends on the circumstances.

Either the victims pay the ransom and hope for the best, or they refuse to pay and still hope for the best. Either way, there must be a plan for the worst.

“I think it is a personal decision depending on the situation. Now one size does not fit all. You have to see what the bad guys have and what they value. The big question is how to stop it from happening all the time,” he said.

lack of software options

The cybersecurity industry is not only facing a shortage of talent; advanced tooling may be lacking as well.

“I think there are a lot of basic technologies, so start with the basics first. The truth is that for some types of organizations, cybersecurity isn’t really something you can buy. The first step is learning not to click on phishing attempts,” Orr advised.

The solution starts with education. Then it continues with setting some parameters: determine what your most valuable data is, research how to keep it safe, and decide how you will monitor it.

“Cybersecurity is really a layered approach,” Orr warned.

never trust, always challenge

That was a big topic of the security conference, he continued. Part of the big change is not being able to trust network visitors.

“That is the kind of thing that has really changed: never trust, always verify. Now you are looking at things differently,” he observed.

We are making good progress. The difference is that we are now preparing for a cyberattack, he concluded.

The cost of cleaning up data is often beyond the comfort zone of businesses sitting on potentially dirty data. Data observability paves the way for reliable and compliant corporate data flows.

According to Kyle Kirwan, co-founder and CEO of data observability platform Bigeye, few companies have the resources needed to develop tools for challenges such as large-scale data observability. As a result, many companies are essentially flying blind, reacting when something goes wrong instead of continually addressing data quality.

A data trust provides a legal framework for managing shared data. It promotes cooperation through common rules for data protection, privacy, and confidentiality, and enables organizations to securely connect their data sources to a shared repository.

Bigeye brings together data engineers, analysts, scientists, and stakeholders to build trust in data. Its platform helps companies create SLAs for monitoring and anomaly detection, ensuring data quality and reliable pipelines.

With full API access, a user-friendly interface, and automated yet flexible customization, data teams can monitor quality, consistently detect and resolve issues, and ensure that every user can rely on the data.
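To make the idea concrete, here is a minimal sketch, in plain Python, of the kind of freshness and completeness checks a data quality SLA encodes. It is not Bigeye’s API, which is proprietary; the orders table, its loaded_at and email columns, and the thresholds are all hypothetical.

```python
# Minimal sketch of SLA-style data quality checks; not Bigeye's API.
# Assumes a hypothetical "orders" table whose loaded_at column stores
# ISO-8601 UTC timestamps with an offset (e.g. "2024-05-01T12:00:00+00:00").
import sqlite3
from datetime import datetime, timezone

SLA = {
    "max_hours_since_load": 24,    # freshness: newest row must be under 24h old
    "max_null_rate_email": 0.01,   # completeness: at most 1% NULL emails
}

def check_orders_table(conn: sqlite3.Connection) -> dict:
    """Return each metric plus a pass/fail flag against its SLA threshold."""
    newest, = conn.execute("SELECT MAX(loaded_at) FROM orders").fetchone()
    total, nulls = conn.execute(
        "SELECT COUNT(*), SUM(CASE WHEN email IS NULL THEN 1 ELSE 0 END) FROM orders"
    ).fetchone()

    hours_old = (datetime.now(timezone.utc)
                 - datetime.fromisoformat(newest)).total_seconds() / 3600
    null_rate = (nulls or 0) / max(total, 1)

    return {
        "hours_since_load": round(hours_old, 1),
        "email_null_rate": round(null_rate, 4),
        "freshness_ok": hours_old <= SLA["max_hours_since_load"],
        "completeness_ok": null_rate <= SLA["max_null_rate_email"],
    }
```

In practice, a scheduler would run checks like these against every table and route failures to an alerting channel, which is the kind of work an observability platform automates.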

uber data experience

Two early members of the data team at Uber, Kirwan and Bigeye co-founder and CTO Egor Gryaznov, set out to use what they learned building at Uber’s scale to create easy-to-deploy SaaS tools for data engineers.

Kirwan was one of Uber’s first data scientists and its first metadata product manager. Gryaznov was a staff-level engineer who managed Uber’s Vertica data warehouse and developed a number of internal data engineering tools and frameworks.

They realized that the tools their team was building to manage Uber’s vast data lake and its thousands of internal data users were not available to most data engineering teams.

Automatically monitoring and detecting reliability issues within thousands of tables in a data warehouse is no easy task. Companies like Instacart, Udacity, Docker, and Clubhouse use Bigeye to make their analysis and machine learning work consistently.

a growing area

When they founded Bigeye in 2019, the pair recognized the growing problem enterprises face as they deploy data in operational workflows, machine learning-powered products and services, and high-ROI use cases such as strategic analysis and business intelligence-driven decision-making.

The data observability space saw several entrants in 2021. Bigeye differentiates itself from that pack by giving users the ability to automatically assess customer data quality with over 70 unique data quality metrics.

These metrics feed thousands of anomaly detection models trained to ensure that data quality problems – even the ones that are hardest to detect – never get ahead of data engineers.

Last year, data observability burst onto the scene, with at least ten data observability startups announcing significant funding rounds.

Kirwan predicted that this year, data observability will become a priority for data teams as they seek to balance the demand for managing complex platforms with the need to ensure data quality and pipeline reliability.

solution rundown

Bigeye’s data platform is no longer in beta. Some enterprise-grade features are still on the roadmap, such as full role-based access control. But others, such as SSO and in-VPC deployment, are available today.

The app is closed source, and proprietary models are used for anomaly detection. Bigeye is a big fan of open-source alternatives but decided to develop its own models to achieve internally set performance goals.

Machine learning is used in a few key places to bring a unique mix of metrics to each table in a customer’s connected data sources. Anomaly detection models are trained on each of those metrics to detect abnormal behavior.
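The following is a toy sketch of that per-metric idea, not Bigeye’s proprietary models: a simple detector that flags the latest value of a monitored metric (here, a hypothetical daily row count) when it strays far from its recent history.

```python
# Toy per-metric anomaly detection; Bigeye's actual models are proprietary.
from statistics import mean, stdev

def is_anomalous(history: list[float], latest: float, z_threshold: float = 3.0) -> bool:
    """Flag `latest` when it sits more than `z_threshold` standard deviations
    away from the mean of the metric's recent history."""
    if len(history) < 2:
        return False                   # not enough history to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return latest != mu            # flat history: any change is anomalous
    return abs(latest - mu) / sigma > z_threshold

# Hypothetical example: daily row counts, with today's load suspiciously small.
row_counts = [10_120, 10_340, 9_980, 10_410, 10_250, 10_180, 10_300]
print(is_anomalous(row_counts, latest=4_200))   # True -> raise an alert
```

Production systems replace the fixed z-score with models that learn each metric’s seasonality and trend, which is where the training described above comes in.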

Three features introduced in late 2021 automatically detect and alert on data quality issues and enable data quality SLAs.

The first, deltas, makes it easy to compare and validate multiple versions of any dataset.
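As a minimal sketch of the general idea (not Bigeye’s implementation), the toy function below diffs two in-memory versions of a dataset on row count and per-column null counts; the data is made up.

```python
# Toy "delta" between two dataset versions: row-count and null-count changes.
from collections import Counter

def dataset_delta(old_rows: list[dict], new_rows: list[dict]) -> dict:
    """Summarize how the new version of a dataset differs from the old one."""
    def null_counts(rows):
        counts = Counter()
        for row in rows:
            counts.update(col for col, val in row.items() if val is None)
        return counts

    old_nulls, new_nulls = null_counts(old_rows), null_counts(new_rows)
    columns = {col for row in old_rows + new_rows for col in row}
    return {
        "row_count_change": len(new_rows) - len(old_rows),
        "null_count_change": {c: new_nulls[c] - old_nulls[c] for c in sorted(columns)},
    }

old = [{"id": 1, "email": "a@x.com"}, {"id": 2, "email": "b@x.com"}]
new = [{"id": 1, "email": "a@x.com"}, {"id": 2, "email": None}, {"id": 3, "email": None}]
print(dataset_delta(old, new))
# {'row_count_change': 1, 'null_count_change': {'email': 2, 'id': 0}}
```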

The second, issues, brings together multiple alerts with valuable context about related problems. This makes it easier to document past fixes and speeds up resolution.

The third, dashboards, provides a holistic view of data health, helps identify data quality hotspots, closes gaps in monitoring coverage, and measures a team’s improvement in reliability.

eyeball data warehouse

TechNewsWorld spoke with Kirwan to unpack some of the complexities of his company’s data observability platform and what it offers data scientists.

TechNewsWorld: What makes Bigeye’s approach innovative or cutting edge?

Kyle Kirwan, Bigeye Co-Founder and CEO

Kyle Kirwan: Data observability requires consistent and thorough knowledge of what is happening inside all the tables and pipelines in your data stack. It is similar to how SRE [site reliability engineering] and DevOps teams keep applications and infrastructure working around the clock, but repurposed for the world of data engineering and data science.

While data quality and data reliability have been an issue for decades, data applications are now integral to how many major businesses run, because any data loss, outage, or degradation can quickly result in lost revenue and customers.

Without data observability, data teams must continually react to data quality issues and untangle them as they go about using the data. A better solution is to proactively identify the problems and fix the root causes.

How does data quality affect trust?

Kirwan: Often, problems are discovered by stakeholders, such as executives who do not trust their often-broken dashboards, or users who get confusing results from in-product machine learning models. Data engineers can get ahead of problems and prevent business impact if they are alerted early enough.

How does this concept differ from similar sounding technologies like Integrated Data Management?

Kirwan: Data observability is a core function within data operations (think: data management). Many customers look for best-of-breed solutions for each task within data operations. This is why technologies like Snowflake, Fivetran, Airflow, and dbt are exploding in popularity. Each is considered an important part of the “modern data stack” rather than a one-size-fits-none solution.

Data observability, data SLAs, ETL [extract, transform, load] code version control, data pipeline testing, and other techniques must be used together to keep modern data pipelines working smoothly, just as high-performing software engineering and DevOps teams use their counterpart technologies.
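As one hedged illustration of the pipeline-testing piece of that toolkit, a team might assert basic invariants on a transformed table before publishing it downstream; the table and column names below are hypothetical.

```python
# Hypothetical data pipeline test: check invariants on a transformed table
# before it is published downstream. Table and column names are made up.
import sqlite3

def test_daily_revenue_table(conn: sqlite3.Connection) -> None:
    # The primary key must be unique.
    dupes, = conn.execute(
        "SELECT COUNT(*) - COUNT(DISTINCT order_id) FROM daily_revenue"
    ).fetchone()
    assert dupes == 0, f"{dupes} duplicate order_id values"

    # Revenue should never be negative after the transform step.
    negatives, = conn.execute(
        "SELECT COUNT(*) FROM daily_revenue WHERE revenue_usd < 0"
    ).fetchone()
    assert negatives == 0, f"{negatives} rows with negative revenue"
```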

What role do data pipelines and DataOps play in data observability?

Kirwan: Data observability is closely related to the emerging practices of DataOps and data reliability engineering. DataOps refers to the broad set of operational challenges that data platform owners face. Data reliability engineering is a part, but only a part, of DataOps, just as site reliability engineering is related to, but does not encompass all of, DevOps.

Data security can benefit from data observability, as it can be used to identify unexpected changes in query volume on different tables or changes in the behavior of ETL pipelines. However, data observability by itself will not be a complete data protection solution.

What challenges does this technology face?

Kirwan: These challenges include issues such as data discovery and governance, cost tracking and management, and access control. They also include how to handle the growing number of queries, dashboards, and ML features and models.

Reliability and uptime are certainly challenges many DevOps teams are responsible for, but those teams are also often charged with other aspects such as developer velocity and security. Within these two areas, data observability enables data teams to know whether their data and data pipelines are error free.

What are the challenges of implementing and maintaining data observability technology?

Kirwan: Effective data observability systems must be integrated into the workflows of the data team. This enables them to respond to data issues proactively and focus on growing their data platform rather than putting out data fires. However, poorly tuned data observability systems can result in a flood of false positives.

An effective data observability system should require less maintenance than simple data quality testing because it automatically adapts to changes in the business. A poorly optimized system, however, may fail to keep up with those changes or may require time-consuming manual tuning to stay accurate.

Data observability can also be taxing on a data warehouse if not implemented properly. Bigeye’s team has experience optimizing large-scale data observability to ensure that the platform does not impact data warehouse performance.