Meta is taking a law enforcement intelligence company to court for collecting data about users of its Facebook and Instagram properties.

The lawsuit, filed in federal court in California, alleges that Voyager Labs, an international scraping and monitoring service, improperly collected data from those properties through fake accounts, in violation of the platforms’ terms and conditions of use.

In a January 12 post on Meta’s newsroom site, Jessica Romero, director of Platform Enforcement and Litigation, explained that Voyager’s proprietary software uses fake accounts to scrape data accessible to a user who is logged in to Facebook.

She said Voyager used a diverse system of computers and networks in different countries to hide its activity and thwart Meta’s efforts to verify fake accounts.

Romero wrote that Voyager did not compromise Facebook; instead, it used fake accounts to scrape publicly viewable information.

“Web scraping is legal — if you’re scraping publicly available information,” observed Liz Miller, vice president and a principal analyst at Constellation Research, a technology research and advisory firm in Cupertino, California.

“In Meta’s case against Voyager Labs, the issue is the creation of a fake Facebook account, which was used for the purpose of data collection,” Miller told TechNewsWorld.

Scraping Industry

Romero wrote that Meta is seeking a permanent injunction against Voyager to protect people from scraping-for-hire services.

“Companies like Voyager are part of an industry that provides scraping services to anyone, regardless of who they target and for what purpose, including profiling people for criminal behavior,” she continued.

“This industry secretly collects information that people share with their community, family and friends, without oversight or accountability, and in a way that can affect people’s civil rights,” she said.

“These services operate across multiple platforms and national borders and preventing the misuse of these capabilities requires a collective effort from platforms, policy makers and civil society.”


Voyager was not immediately available for comment on this story. However, a spokesperson told The Guardian in the past: “As a company, we comply with the laws of all countries in which we do business. We also trust those with whom we do business, both public and private organizations, to comply with the law.”

Meta Business Matters

While Meta emphasizes its efforts to protect people, it also has business interests that need to be protected.

“Sadly, from Meta’s point of view the problem is not really about data scraping. The point is that Voyager did not pay Meta to do this,” argued Roger Grimes, data-driven defense evangelist at KnowBe4, a security awareness training provider in Clearwater, Fla.

“If Voyager had paid, Meta would have been very happy,” Grimes told TechNewsWorld.

Vincent Raynauld, an assistant professor in the Department of Communication Studies at Emerson College in Boston, explained that data is at the heart of the business model for social media companies.

“The data that users produce is reused by these platforms for advertising,” Raynauld told TechNewsWorld. “It’s at the core of their business model.”

“With this lawsuit,” he continued, “they are trying to protect their business model. They want to take control of the data they have and prevent other companies from using the data.”

“When they see researchers or other companies scraping data, they see business opportunities go away,” he said.

“There is a clear intention by Meta to protect its assets here,” Raynauld said. “It’s a shot across the bow of marketers and researchers.”

Common Practice, Common Problem

Scraping social media sites for data is a common practice.

“It is common for social media sites, from Facebook and Instagram to Twitter and LinkedIn, to be scraped for publicly available and viewable data,” Miller said.

“Advertisers and marketers commonly use it to track trends, target audiences, or create audience profiles,” she continued. “If you’ve ever compared prices on a site so that you can get a product at the best price, you’ve likely benefited from bot-based web scraping.”
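Price-comparison bots of the kind Miller describes boil down to fetching a page and extracting structured fields from its markup. Here is a minimal, self-contained sketch using only Python’s standard library; the HTML snippet, the `price` class name, and the prices are hypothetical stand-ins for a real fetched page, and a real scraper would also need to respect the site’s terms of service.

```python
from html.parser import HTMLParser

# Hypothetical product-page markup; a real bot would fetch this over HTTP.
PAGE = """
<html><body>
  <div class="item"><span class="name">Widget A</span>
    <span class="price">$19.99</span></div>
  <div class="item"><span class="name">Widget B</span>
    <span class="price">$17.49</span></div>
</body></html>
"""

class PriceScraper(HTMLParser):
    """Collects the text of every element whose class is 'price'."""
    def __init__(self):
        super().__init__()
        self._in_price = False
        self.prices = []

    def handle_starttag(self, tag, attrs):
        # Flag the next text node if this element is a price field.
        if dict(attrs).get("class") == "price":
            self._in_price = True

    def handle_data(self, data):
        if self._in_price:
            self.prices.append(data.strip())
            self._in_price = False

scraper = PriceScraper()
scraper.feed(PAGE)
# Convert "$19.99" -> 19.99 and pick the cheapest listing.
cheapest = min(float(p.lstrip("$")) for p in scraper.prices)
print(cheapest)
```

A production bot would add HTTP fetching, retry logic, and rate limiting, but the core extract-and-compare loop is the same.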


Miller said most social scraping is for rather benign uses, but exceptions exist, such as bots deployed for ad fraud, traffic scams, identity takeover and account hacking.

“The scraping is probably much worse than anyone realized, including Meta,” Grimes said. “I’m sure hundreds if not thousands of data scraping operations are targeting social media sites every day.”

“It’s probably so bad,” he continued, “that Meta only has time to worry about the biggest and most revenue-damaging examples.”

Minimizing Unethical Scraping

Grimes said combating shady data scraping is a big problem. “It’s like phishing and password-guessing,” he said. “Vendors can’t hope to stop it. The best they can try to do is identify the easiest and stop the most prominent examples.”

Miller said that most social media platforms have placed constraints through their terms and conditions of use to reduce malicious scraping.

“But what many also want to curtail is non-malicious scraping, which forces organizations to turn to the platforms themselves, such as Meta, for some of the insights that social scraping can provide,” she said.

Romero wrote that litigation is one of the tools Meta uses to combat scraping. “We have also invested in technical teams and tools that monitor and detect suspicious activity and the use of unauthorized automation for scraping,” she explained.

“This focus on scraping is part of our ongoing work to protect people’s privacy,” she said. “In the coming months, we plan to discuss some of the other measures we are actively using to prevent scraping.”

Legal Whack-a-Mole

Until those additional measures are disclosed to combat malicious scraping, litigation may be the most effective means of cracking down on the practice.

“Being sued is a huge motivator not to do it,” Grimes observed. “Who wants to be sued by a tech giant? You can spend millions for the first day of a court hearing, even if you did nothing wrong and are completely in the right.

“That’s the nature of lawsuits, especially in the US, where the loser often doesn’t have to pay the winner’s fees,” he said.

“Lawsuits are like getting a big hammer when playing whack-a-mole,” Miller said. “You can take one out of the game, but another malicious mole will likely pop back up.”

“But, in the absence of a law or a rule making scraping publicly available data illegal,” she continued, “the goal is to deter scrapers with litigation costs.”

Social media platform TikTok is now facing efforts from Congress to shut down its operations amid ongoing talks with the Biden administration about data security and surveillance.

Following TikTok’s secretive launch of its retail sales integration in November, Sen. Marco Rubio, R-Fla., announced bipartisan legislation on Tuesday to ban the popular Chinese-owned app from operating in the United States.

The new bill follows Rubio’s pushback at the White House in May for not addressing concerns over the app’s ties to China-based parent company ByteDance and the Chinese government’s approach to surveillance through technology. Rubio issued a statement urging President Biden to clarify that the TikTok shop would not be allowed to operate in the United States.

The bill increases pressure on ByteDance, as the US fears the app could be used to spy on Americans and censor content. If approved, it could have far-reaching effects on influencers, social media users, and companies that use the app for marketing.

Rubio’s bill would block all transactions with any social media company in or under the influence of China and Russia, according to a news release from Rubio’s office. The announcement said Republican Mike Gallagher and Democrat Raja Krishnamoorthi sponsored a companion bill in the US House of Representatives.

Cloud Insight Can Clean Up the Data Flow

Congressional hearings on TikTok may reveal how much security and compliance with data privacy rules is at stake with Oracle, which provides cloud storage services for TikTok’s domestic operations.

TikTok executives revealed in June that the platform’s US traffic goes through Oracle’s servers, and they also said the company maintains its own data backups.

Oracle did not respond to inquiries about its compliance practices with TikTok’s data collection. But the company could face significant scrutiny if the congressional hearings pursue clarity about TikTok’s handling of US data distribution and storage, agreed Luke Lintz, CEO of HighKey Enterprises, a Canada-based digital marketing and social media management company.

“TikTok poses a real threat, as it is collecting and storing more data points about users than any other social media platform. A full audit of Oracle’s data storage is likely to take place to look at the exact agreements and policies between ByteDance and Oracle,” Lintz told the E-Commerce Times.

Why Worry

The proposed ban, titled the “Averting the National Threat of Internet Surveillance, Oppressive Censorship and Influence, and Algorithmic Learning by the Chinese Communist Party Act” (ANTI-SOCIAL CCP Act), would protect Americans by blocking and prohibiting all transactions with any social media company in, or under the influence of, China, Russia, and several other foreign countries of concern.

“The federal government has yet to take a single meaningful action to protect American users from the threat of TikTok,” Rubio said in the announcement. “It’s not about creative videos; it’s about an app that is collecting data from hundreds of millions of American children and adults every day. We know it is used to manipulate feeds and influence elections. We know it answers to the People’s Republic of China. There is no more time to waste on meaningless negotiations with a CCP-puppet company. It is time to ban Beijing-controlled TikTok for good.”

Representative Gallagher called TikTok a “digital fentanyl” that is addicting Americans, collecting their data, and censoring their news.

According to Gallagher, TikTok is also an increasingly powerful media company that ultimately reports to the CCP. He said that China is the biggest enemy of America.

“Allowing the app to continue to operate in the US would be like allowing the USSR to buy the New York Times, the Washington Post, and the major broadcast networks during the Cold War,” he said. “No country with even the slightest interest in its own security would allow this to happen, which is why it’s time to ban TikTok and any other CCP-controlled app before it’s too late.”

The Chinese Communist Party and other adversaries abroad seek any advantage against the United States through espionage and mass surveillance, noted Rep. Krishnamoorthi, who said it is imperative not to allow hostile powers to potentially control social media networks, which could easily be weaponized against us.

“The bipartisan ANTI-SOCIAL CCP Act is a strong step forward in protecting our country from the nefarious digital surveillance and influence operations of totalitarian regimes. Recent revelations about the depth of TikTok’s ties to the CCP highlight the need to protect Americans from these risks before it is too late,” he said in the announcement.

Support for TikTok Ban Widens

Recently, Maryland, South Dakota, South Carolina and Texas have banned the use of TikTok on government equipment, citing potential national security threats. On Monday, Alabama and Utah also barred the use of the TikTok app on state government equipment and computer networks due to national security concerns.

Discussing the expansion and security concerns of TikTok Shops, Laura Perez, global director of B2B communications for TikTok, told the E-Commerce Times that the company was in active talks with the US government to address some of its concerns. She added that TikTok was transparent about these issues with its brand and merchant partners.

As of the writing of this article, neither Perez nor other TikTok officials have responded to our request for comment on the proposed ban. However, an unnamed TikTok spokesperson said elsewhere that a politically motivated ban of TikTok was troubling and would not advance US efforts to enhance national security.

Charles King, principal analyst at Pund-IT, suggested that despite growing support for banning TikTok’s operations in the US, negative reactions from the platform’s massive user base could become an issue.

TikTok owner ByteDance’s ties to the Chinese government and the company’s handling of user information have been concerns for years, he agreed. But after the GOP’s below-expectations performance in the midterm elections, apprehension among politicians is finally growing.

“If TikTok users old enough to vote are angry enough, they could voice their displeasure in the 2024 elections,” King told the E-Commerce Times.

Possible Pushback

Banning TikTok would no doubt affect the growing use of social media marketing. It would be hugely impactful and disruptive, especially for companies targeting the younger consumers who make up the vast majority of TikTok users and influencers, King observed.

“It is difficult to predict which alternative platform millions of TikTok users will choose instead,” he added.

Lintz said banning TikTok would have little effect on the trajectory of other outlets. The increasing use of social media year after year will remain unchanged.

“Currently, the average social media user uses 6.6 different platforms, which clearly indicates that users are rarely committed to one platform,” he told the E-Commerce Times.

However, Lintz anticipates a massive backlash, primarily from content creators and companies who have spent hundreds of thousands of hours building their audiences.

“Some businesses and individual brands have paid us hundreds of thousands of dollars over the years to build their brands with TikTok-specific videography, daily posting and growth strategies. If they ban it, it will all disappear overnight, leading to extreme resentment,” he offered.

Concerns Grow Beyond User Responses

Matthew Marsden, vice president at endpoint management company Tanium, said some could argue that TikTok is dangerous because of the influence of social media on the younger generation.

“There is an even more real possibility that the popular platform is backed by the Chinese Communist Party (CCP) and used to conduct influence operations and collect sensitive personal and biometric data,” Marsden told the E-Commerce Times.

TikTok’s privacy policy states that it may collect biometric identifiers and biometric information as defined under US laws, such as faceprints and voiceprints. Marsden said the app can also share the data it collects with others.

“This is incredibly worrying because the CCP can easily coerce China-based companies to share information to support the party’s objectives,” he said. “The Chinese intelligence strategy is focused on long-term objectives and is driven by the continuous collection of data.”

Why Worry?

TikTok’s vast collection of user data now includes commerce and purchase information combined with biometrics and activity tracking. Marsden warned that all of this provides detailed intelligence that can be used in influence operations.

“This data can be leveraged to deliver targeted, timely, and often personalized psychological operations against individuals or groups of citizens. This has been seen in recent years during election cycles and politically charged events,” he explained.

Perhaps an even more important concern is that TikTok users can no longer trust that an app or website will keep their data secure. Proactive measures are necessary as cybersecurity threats become a pervasive aspect of our daily lives, offered Craig Lurey, CTO and co-founder of cybersecurity software firm Keeper Security.

“There are also concerns about who has access to this data, especially when it’s a nation-state,” Lurey told the E-Commerce Times.

Other Actions Needed?

Mike Parkin, senior technical engineer at Vulcan Cyber, suggested that two separate issues may be involved. His company provides SaaS solutions for enterprise cyber risk mitigation.

“Opening an online marketplace is a natural evolution of the TikTok e-commerce space and a fusion of marketplace platform and social media influencers. Whether or not social commerce itself and TikTok getting into that market is a good idea is a different question,” Parkin told the E-Commerce Times.

Lintz doesn’t think banning TikTok is the right solution. The impact on American content creators and businesses that make a living from TikTok could be dire. “I believe the right solution is to set guidelines for a US majority stake in TikTok,” he said.

A new analysis of data from the FBI’s Internet Crime Complaint Center (IC3) shows that Nevada has the most cybercrime victims per capita of any state in the union, by a large margin: 801 per 100,000 Internet users, four times the national average.

An analysis by Surfshark, a privacy protection toolset developer based in Lithuania, states that the most common cybercrime committed in Nevada is identity theft, which may be because it is home to Las Vegas.

“With Nevada, it is easy to imagine that identity thieves are targeting tourists who gamble,” Mike Parkin, a senior technical engineer at Vulcan Cyber, a Tel Aviv, Israel-based provider of SaaS for enterprise cyber risk mitigation, told TechNewsWorld.

In 2021, Surfshark analysts said, there were 9,054 victims of identity theft in Nevada, accounting for 49% of all cybercrime victims in the state.
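The per-100,000 rates cited throughout the analysis are simple normalizations, so the reported Nevada figures can be cross-checked with a few lines of arithmetic. The derived totals below are implied by the article’s numbers, not separately reported figures.

```python
# Back-of-the-envelope check of the Nevada figures reported above.
# Inputs come from the article; the outputs are derived, not reported.
id_theft_victims = 9_054   # identity theft victims in Nevada, 2021
id_theft_share = 0.49      # share of all Nevada cybercrime victims
rate_per_100k = 801        # victims per 100,000 Internet users

# Total victims implied by the identity-theft share.
total_victims = id_theft_victims / id_theft_share

# Internet-user base implied by the per-100,000 rate.
implied_internet_users = total_victims / rate_per_100k * 100_000

print(round(total_victims))           # ~18,478 total victims
print(round(implied_internet_users))  # ~2.3 million Internet users
```

The implied base of roughly 2.3 million Internet users is consistent with a state of Nevada’s size, which is a quick sanity check on the reported rate.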

Other states with high cybercrime victim rates per 100,000 Internet users include Iowa (342), Alaska (322), and Florida (293).

“These statistics from the FBI’s IC3 division help paint the overall picture of identity crimes committed each year in the US,” said James E. Lee, chief operating officer of the Identity Theft Resource Center (ITRC) in San Diego.

“When you add up the more than 1.4 million reports of identity theft filed with the FTC in 2021, the 15,000 ID crime victims who contacted the ITRC in 2021, and the 190 million victims of data compromise tracked by the ITRC in 2021, you begin to see the enormity of the problem presented by identity crimes,” Lee told TechNewsWorld.

“The bottom line is this: There are more identity theft crimes reported each year in the US than all other crimes except theft combined,” he said. “And the volume and velocity of identity crimes continue to increase, along with their financial impact.”

Perp Hotbed

Nevada is also a hotbed for cybercriminals, with 150 cybercriminals per 100,000 Internet users, nearly three times the national average, according to analysts.

The analysts explained that although threat actors outside the United States commit many cybercrimes, the FBI has identified a significant number of cybercriminals within US borders. In most cases, the FBI can identify the specific state where a cybercriminal is located, allowing analysts to see which states have the most cybercriminals per capita.

Only two other states reached triple digits in cybercriminals per 100,000 Internet users: Delaware (120) and Maryland (113).

“It is interesting that Nevada had both the highest victims and highest offenders, while Nevada was in the bottom three in terms of victim harm,” Parkin observed.

According to the analysts, the average victim of cybercrime in Nevada loses $4,728 per scam, while victims lose an average of $4,280 per scam in West Virginia and $3,820 in Iowa.

“Without a deeper analysis, it is difficult to say why the numbers are trending this way,” Parkin continued, “although Nevada is unique in demographics, local culture, and major industries, which may all play a role.”

Badlands Bad Men

“Cybercrime is a growing concern in Nevada and across the country,” said John T. Sandler, spokesman for Nevada Attorney General Aaron D. Ford.

“Our office has conducted extensive campaigns to educate Nevadans about the many different ways scammers like to target residents in their daily lives,” Sandler told TechNewsWorld. “These include phishing, romance, solicitation, gift card, holiday and government fraud scams.”

“AG Ford also joins a bipartisan coalition of attorneys general urging the FTC to adopt a national rule targeting impersonation scams,” he said.

While Nevada has the lowest losses for cybercrime victims, North Dakota has the highest losses at $31,711 per scam.

Analysts said studies have shown that the two age groups most vulnerable to cybercrime are youth under 25 and people 75 and older. They noted that 41% of North Dakota’s population falls in those age groups, which may contribute to that high loss figure.

However, Parkin pointed out that North Dakota’s small population, 774,948, may have influenced the statistics in the analysis.

Although the most profitable cybercrimes nationally are fraudulent fund transfers via email and fake investment schemes, this is not the case in North Dakota, where 50% of the money lost to cybercrime ($12.1 million) was lost to scammers posing as friends or family or feigning romantic online relationships.

Other states with high per capita losses from cybercrime include New York ($19,266), South Dakota ($19,065), and California ($18,302).

Seniors Most Targeted

The analysts also revealed that the average cyberthief clears $14,048 per scam, though that figure varies widely from state to state. Among the highest takes were Colorado ($33,605), Louisiana ($31,064), New York ($29,919), and Wyoming ($27,918). Among the lowest were West Virginia ($2,630), Nebraska ($4,148), Montana ($4,327), and Connecticut ($4,394).

In the states where criminals steal the most per scam, the analysts said, cybercriminals are increasingly targeting small to medium-sized businesses with financial capital.

They said the most profitable cybercrime in New York was investment scams, accounting for 34% of all money lost to cybercrime in the state in 2021. By comparison, investment scams accounted for only 19% of all money swindled through cybercrime nationwide that year.

Analysts said the age group most victimized by cybercrime is seniors. In 2021, 92,371 Americans age 60 and older lost $1.7 billion to cybercriminals.

Analysts say that while senior citizens have been the worst hit by cybercrime, other age groups have been disproportionately victimized. For example, people in the 40 to 49 year old group represent only 12.4% of the population, but account for 20.8% of all cybercrime victims in the United States. On the other side of the coin, people under the age of 20 represent 24.8% of the population, but only 3.5% of cybercrime victims.

There are also some variations by state, analysts said. For example, in 16 states, the most targeted age group was 59 and under, and in Iowa, the most targeted group was 20 to 29-year-olds.

“From a ‘who can I steal from’ perspective,” Parkin said, “children and the elderly are probably easier targets than people in the 40 to 49 range, but they are likely to have fewer resources to target.”

Analyzing cybercrime on a state-by-state basis can be useful in combating criminals, he said. “Understanding victim and target demographics can be used to develop specific techniques to help prevent attacks,” he added. “It may also help to understand why attacks are more or less effective in different regions.”

Sharing high-resolution media online could inadvertently expose sensitive biometric data, according to a report released Tuesday by a cybersecurity company.

This can be especially dangerous, said a 75-page report by Trend Micro, because people do not know they are exposing the information.

The report noted, for example, that images posted under the #EyeMakeup hashtag on Instagram, which has nearly 10 million posts, and the #EyeChallenge hashtag, with more than two billion views, can expose iris patterns detailed enough to pass an iris scanner.

“By publicly sharing certain types of content on social media, we give malicious actors the opportunity to source our biometrics,” the report states. “By posting our voice messages, we expose voice patterns. By posting photo and video content, we expose our faces, retina and iris patterns, ear shapes, and, in some cases, palms and fingerprints.”

“Since such data may be publicly available, we have limited control over its distribution,” it added. “Therefore we do not know who has already accessed the data, nor do we know for how long or for what purposes the data will be kept.”

Not a Panacea

The report covers what types of biometric data can be exposed on social media and outlines more than two dozen attack scenarios.

“The report suggests that biometric identification is not a panacea,” said Will Duffield, a policy analyst at the Cato Institute, a Washington, DC-based think tank.

“As we design detection systems, we need to be aware of technologies going down the pike and potential abuse in the real world,” he told TechNewsWorld.

“Trend Micro raises some valid concerns, but these concerns are not new to biometrics professionals,” Sami Alini, a biometrics specialist with Contrast Security, a maker of self-protection software solutions in Los Altos, Calif., told TechNewsWorld.

He said there are several ways to attack a biometric system, including a “presentation” attack described by the report, which substitutes a photo or other object for the biometric element.

To counter this, he continued, “liveness” must be determined to ensure that the biometric presented is that of a living person and not a “replay” of a previously captured biometric.

Avi Turgman, CEO and co-founder of IronVest, an account and identity security company in New York City, agreed that liveness detection is one key to thwarting attacks on biometric security.

“The Trend Micro report raises concerns about fraudulent biometrics created from social media content,” he told TechNewsWorld. “The real secret to fraud-proof biometrics is liveness detection, something that cannot be recreated from images and videos collected on social media.”

One Factor Not Enough

Even when tested for liveness, biometrics can still be easy to bypass, a security awareness advocate at KnowBe4, a security awareness training provider in Clearwater, Fla., maintained.

“Holding the phone in front of a person’s face while sleeping can unlock the device, especially when they use it with the default settings, and collecting fingerprints is not a difficult task,” he told TechNewsWorld.

“What is even more worrying is that once a biometric factor is compromised, it cannot be changed like a password,” he said. “You can’t easily change your fingerprints or facial structure if they are compromised.”

If the Trend Micro report shows anything, it’s that multi-factor authentication is a necessity, even if one of those factors is biometric.

“When used as a single factor for authentication, it is important to note that biometrics may be subject to failure or manipulation by a malicious user, particularly when that biometric data is publicly available on social media,” said Darren Guccione, CEO of Keeper Security, a password management and online storage company based in Chicago.

“As the capabilities of malicious actors targeting voice or facial biometric authentication continue to grow, it is imperative that all users implement multiple factors of authentication and use strong, unique passwords for their accounts to limit the blast radius if an authentication method is compromised,” he told TechNewsWorld.
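One widely deployed second factor alongside a password is the time-based one-time password (TOTP) defined in RFC 6238, which derives a short code from a shared secret and the current 30-second time window. The sketch below uses only Python’s standard library; the secret shown is the published RFC 6238 test key, not a real credential.

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, digits: int = 6, step: int = 30) -> str:
    """Time-based one-time password per RFC 6238 (HMAC-SHA1 variant)."""
    if for_time is None:
        for_time = int(time.time())
    counter = for_time // step                      # 30-second time window
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: at Unix time 59, the SHA-1 TOTP is 94287082,
# whose final six digits are 287082.
print(totp(b"12345678901234567890", for_time=59))
```

Because the code changes every 30 seconds and is derived from a secret that never crosses the network, a scraped photo or voice clip is useless against it on its own, which is the point Guccione makes about layering factors.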

Metaverse Problems

“I don’t like to put all my eggs in one basket,” said Bill Malik, vice president of infrastructure strategies at Trend Micro. “Biometrics are nice and useful, but having an additional factor of authentication gives me more confidence.”

“For most applications, a biometric and a PIN are fine,” he told TechNewsWorld. “When a biometric is used alone, it’s really easy to fake.”

He stressed that the collection of biometric data will become an even greater problem when the metaverse becomes more popular.

“When you get into the metaverse, it’s going to get worse,” he said. “You’re putting on these $1,500 glasses that are designed not only to give you a realistic view of the world, but also to constantly monitor your subtle expressions to find out what you like and don’t like about the world you see.”

However, he is not concerned about digital desperadoes using additional biometric data to create deepfake clones. “Hackers are lazy, and they get everything they need with simple phishing attacks,” he declared. “So they’re not going to spend a lot of money on a supercomputer so they can clone someone.”

Device-Tied Biometrics

Another way to secure biometric authentication is to tie it to a piece of hardware. With a biometric enrolled on a specific device, it can only be used to authenticate the user with that device.

“This is the way Apple’s and Google’s biometric products work today,” said Reed McGinley-Stempel, co-founder and CEO of Stytch, a passwordless authentication company in San Francisco. “It’s not just the biometric itself that is checked when you use Face ID.”

“When you actually do a Face ID check on your iPhone, it checks that the current biometric check matches the biometric enrollment that’s stored in your device’s secure enclave,” he told TechNewsWorld.

“In this model,” he continued, “someone accessing your photos or your fingerprints doesn’t help them unless they also have control of your physical device, which is a very steep hill to climb given the remote nature in which cyberattackers operate.”
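The device-bound model described above can be sketched abstractly: an enrollment template stays on the device, and verification is a local similarity check against a fresh capture. The class, embeddings, and threshold below are hypothetical illustrations of that idea, not any vendor’s actual matching algorithm, and real systems also add liveness checks and hardware protection of the template.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

class DeviceAuthenticator:
    """Matches a fresh biometric embedding against the locally stored template.

    Illustrative only: real systems (e.g., a secure enclave) store a
    mathematical template, not raw images, and matching never leaves
    the device.
    """
    THRESHOLD = 0.95  # hypothetical acceptance threshold

    def __init__(self, enrollment_embedding):
        self._template = enrollment_embedding  # kept on-device only

    def verify(self, capture_embedding):
        # A scraped photo of the user does not produce an embedding close
        # enough unless the attacker also controls this device.
        return cosine_similarity(self._template, capture_embedding) >= self.THRESHOLD

device = DeviceAuthenticator([0.2, 0.7, 0.1, 0.6])
print(device.verify([0.21, 0.69, 0.12, 0.61]))  # near-identical capture
print(device.verify([0.9, 0.1, 0.8, 0.05]))     # dissimilar capture
```

The key design point is that the template never leaves the device, so stealing biometric imagery from social media yields nothing to compare it against remotely.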

Losing Control of Our Data

The Trend Micro report states that as users, we are losing control over our data and its future uses, and that the average user may not be well aware of the risks posed by the platforms we use every day.

Data from social media networks is already being used by governments and even startups to extract biometrics and create identity models for surveillance cameras, it continued.

The fact that our biometric data cannot be changed means that in the future, such a wealth of data will be increasingly useful to criminals, it added.

Whether that future is five or 20 years away, the data is available now, it said. We owe it to our future selves to take precautions today to protect ourselves in tomorrow’s world.


The Trend Micro report, “Leaked Today, Exploited for Life: How Social Media Biometric Patterns Affect Your Future,” is available here in PDF format. No form needed to be filled out at the time of this publication.

The sentencing of former Uber chief security officer Joseph Sullivan could lead to a quiet re-evaluation of how the chief information security officer (CISO) and the security community handle network breaches going forward.

A San Francisco federal jury convicted Sullivan on October 5 of failing to tell US officials about the 2016 hack of Uber’s database. Judge William H. Orrick did not set a date for sentencing.

Sullivan’s lawyer, David Angeli, said after the verdict was announced that his client’s sole focus was to ensure the security of people’s personal digital data.

Federal prosecutors noted that the case should serve as a warning to companies about how to comply with federal regulations when handling their network breaches.

Officials accused Sullivan of working to hide the data breach from US regulators at the Federal Trade Commission and of taking steps to prevent the hackers from being caught.

At the time, the FTC was already investigating Uber over the 2014 hack. Two years later, the hackers who breached Uber’s network repeatedly emailed Sullivan about the theft of large amounts of data. According to the US Justice Department, they promised to delete the data if Uber paid a ransom.

The conviction is a significant precedent that has already sent shock waves through the CISO community. It highlights the personal liability involved in being a CISO in today’s legal and threat environment, noted Casey Ellis, founder and CTO of Bugcrowd, a crowdsourced cybersecurity platform.

“This calls for clear policy at the federal level around privacy protection and the treatment of user data in the United States, and it emphasizes the fact that a proactive, rather than reactive, approach to handling vulnerability information is an important component of resilience for organizations, their security teams and their shareholders,” he told TechNewsWorld.

problem description

There is a growing tendency for companies hit by ransomware to negotiate with hackers. But the trial showed prosecutors reminding companies to “do the right thing,” according to media accounts.

According to published trial accounts, Sullivan’s employees confirmed widespread data theft, including records of 57 million Uber users and 600,000 driver’s license numbers.

The DOJ reported that Sullivan arranged to pay the hackers US$100,000 in bitcoin. That agreement included the hackers signing a non-disclosure agreement to keep the hack from public knowledge. Uber reportedly disguised the true nature of the payment as a bug bounty.

Only the jury had access to the evidence in the case, so it’s counterproductive to speculate on specific details, said Rick Holland, chief information security officer and vice president of strategy at Digital Shadows, a provider of digital risk protection solutions.

“There are some general conclusions to draw. I am concerned by the unintended consequences of this case,” Holland told TechNewsWorld. “CISOs already have a daunting task, and the outcome of this case has made the CISO a scapegoat.”

important unanswered questions

Holland’s concerns include how the results of this trial could affect the number of leaders willing to take on the potential personal liability of the CISO role. He is also concerned about the prospect of more whistleblower cases, such as the recent one involving Twitter.

He expects more CISOs to negotiate directors and officers (D&O) insurance into their employment contracts. That type of policy provides personal liability coverage for decisions and actions a CISO may take, he explained.

“Furthermore, given the way both the CEO and CFO became accountable for financial wrongdoing on the heels of Sarbanes-Oxley and the Enron scandal, the CISO should not be the only culpable role in the case of wrongdoing around intrusions and breaches,” he suggested.

The Sarbanes-Oxley Act of 2002 is a federal law that established comprehensive auditing and financial regulations for public companies. The Enron scandal, a series of events involving questionable accounting practices, resulted in the bankruptcy of energy, goods and services company Enron Corporation and the dissolution of accounting firm Arthur Andersen.

“CISOs should effectively communicate risks to the company’s leadership team, but should not be solely responsible for cybersecurity risks,” he said.

an ironic twist

Sullivan’s conviction is a kind of ironic role reversal. Earlier in his legal career, he prosecuted cybercrime cases for the United States Attorney’s Office in San Francisco.

The DOJ’s case against Sullivan hinged on obstruction of justice and concealment of a felony from authorities. The resulting conviction could have a long-term impact on how organizations and individual executives approach cyber incident response, particularly where extortion is involved.

Prosecutors argued that Sullivan actively concealed the massive data breach. The jury unanimously agreed with the allegation beyond a reasonable doubt.

Instead of reporting the breach, the jury found, Sullivan, with the knowledge and approval of Uber’s then-CEO, paid the hackers and had them sign a non-disclosure agreement that falsely claimed they had not stolen data from Uber.

A new chief executive who later joined the company reported the incident to the FTC. Current and former Uber executives, lawyers and others testified for the government.

Edward McAndrew, an attorney at BakerHostetler and a former DOJ cybercrime prosecutor and national security cyber specialist, told TechNewsWorld that “Sullivan’s prosecution and now conviction is unprecedented, but it needs to be understood in its proper factual and legal context.”

He said the government has recently adopted a much more aggressive posture toward cybersecurity enforcement. This affects white-collar compliance, where organizations and officers are increasingly cast in the simultaneous and separate roles of crime victim and enforcement target.

“Organizations need to understand how the actions of individual employees can expose them and others to the criminal justice process. And information security professionals need to understand how to avoid becoming personally liable for the actions they take in response to criminal cyberattacks,” warned McAndrew.

Data privacy laws are becoming a major focus globally as businesses scramble to meet new compliance obligations.

Privacy rules generally oblige any business or organization to securely store all data collected or processed by them. What they do with that data is strictly regulated.

According to a Gartner report, by the end of next year about 65% of the world’s population will have their personal data covered under modern privacy rules. Complying with these expanding rules can be challenging.

Over the past 20 years, the harvesting of personal data from electronic transactions and the growing use of the internet have given companies almost free rein.

Many organizations involved in international commerce must modify their procedures in line with the new law. This is a priority for transactions and correspondence involving e-commerce and social media.

Growing consumer mistrust, government action and competition for customers prompted some governments to introduce stricter rules and regulations. The effect is transforming what had been a regulatory no-man’s land that allowed both large companies and small businesses to run rampant with people’s personal data.

“The biggest challenge companies face by far is maintaining the amount of data they manage, which is subject to ever-changing data privacy requirements,” Neil Jones, director of cybersecurity evangelism at Egnyte, told TechNewsWorld.

a patchwork of differing requirements

The European Union has a General Data Protection Regulation (GDPR). According to Jones, in the UK and Continental Europe, data privacy has generally been viewed as a fundamental human right. In the US and Canada, businesses must navigate around a growing patchwork of state and provincial laws.

Data privacy law in the US and Canada has traditionally been more fragmented than in the UK and Europe. Canada’s Quebec, and the United States’ Utah and Connecticut are the latest to enact comprehensive data privacy laws, joining the US states of California, Virginia and Colorado.

By the end of 2023, 10% of states in the US will be covered by data privacy legislation, Jones said. The lack of a universal standard for data privacy has created an artificial layer of business complexity.

In addition, today’s hybrid work environment has created new levels of risk, complicating compliance with myriad privacy requirements.

what’s at stake

To increase productivity, organizations may need to ask employees detailed questions about their behavior and work-from-home arrangements. According to Jones, these types of questions can create unintended privacy implications of their own.

The recent convergence of Personally Identifiable Information (PII) and Protected Health Information (PHI) has put even highly confidential data at risk. This includes confidential test results such as workers’ compensation reports, health records of employees and patients, and COVID-19 information.

“With 65% of the world’s population expected to have personal data covered under privacy regulations by next year, respecting data privacy has never been more important,” Jones said.

cloud privacy barriers

Data privacy and security are the top challenges for implementing a cloud strategy, according to a recent study by IDG, since rebranded as Foundry. In that study, data security stood out as a major concern.

When implementing a cloud strategy, IT decision makers, or ITDMs, face challenges such as controlling cloud costs, data privacy and security, and a lack of cloud security skills and expertise.

With more focus on securing private data, the problem grows as more organizations migrate to the cloud. The two main obstacles the study found were data privacy and security challenges and the cloud security skills shortage.

According to Foundry, spending on cloud infrastructure has increased by about $5 million this year.

“Although enterprise businesses are leading the charge, SMBs are not far behind when it comes to cloud migration,” said Stacey Rapp, marketing and research manager at Foundry, when the report was released.

“As more organizations move toward living entirely in the cloud, IT teams will need the appropriate talent and resources to manage their cloud infrastructure and overcome any security and privacy barriers that may occur in the cloud,” Rapp said.

achieving compliance

Organizations can successfully prepare for data privacy legislation, but doing so requires making data privacy initiatives a “full-time job,” Jones maintained.

“Many organizations view data privacy as a part-time project for their web teams, not a full-time business initiative that can significantly impact customer relationships, employee morale and brand reputation,” he offered.

Beyond that step comes establishing holistic data governance programs that provide greater visibility into a company’s regulated and sensitive data. Added to this is working with trusted business and technology partners who understand the data privacy space and can help you prepare for rapidly evolving regulations.

Jones suggests that perhaps the most dynamic approach is to use advanced privacy and compliance (APC) solutions, which enable organizations to comply with global privacy regulations from one place.

Specifically, APCs can help achieve compliance by:

  • Managing Data Subject Access Requests (DSARs), such as an individual’s right to be notified of the personal data collected about them, the right to opt out of personal information being sold to others, or the right to be forgotten by collecting organizations
  • Assessing the company’s compliance preparedness and scope under specific regulations (e.g., GDPR, CCPA)
  • Creating and reviewing technical assessments of third-party vendors and evaluating potential risks to consumer data
  • Enhancing cookie consent capabilities, such as integrating cookie consent into compliance workflows
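The DSAR rights in the first bullet can be made concrete with a small sketch. This is a hypothetical illustration, not any vendor's API: the `UserStore` class and the three request types are invented names standing in for whatever an APC solution actually exposes.

```python
from dataclasses import dataclass

@dataclass
class DSAR:
    user_id: str
    request_type: str  # "access", "opt_out", or "erasure"

class UserStore:
    """Toy store of personal data that routes the three common DSAR types."""
    def __init__(self):
        self._records = {}    # user_id -> personal data held on that user
        self._opted_out = set()

    def handle(self, req: DSAR):
        if req.request_type == "access":
            # Right to be notified of the personal data collected
            return self._records.get(req.user_id, {})
        if req.request_type == "opt_out":
            # Right to opt out of personal information being sold
            self._opted_out.add(req.user_id)
            return {"opted_out": True}
        if req.request_type == "erasure":
            # Right to be forgotten: delete the record entirely
            self._records.pop(req.user_id, None)
            return {"erased": True}
        raise ValueError(f"unknown request type: {req.request_type}")

store = UserStore()
store._records["u1"] = {"email": "a@example.com"}
print(store.handle(DSAR("u1", "access")))   # the data held on u1
print(store.handle(DSAR("u1", "erasure")))  # u1's record is removed
```

Real APC tooling adds the parts this sketch omits: identity verification, response deadlines, and an audit trail for each request.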

proactive steps

It can be difficult for companies to understand today’s rapidly evolving privacy landscape, as well as how specific rules apply to them, Jones said. However, by taking proactive steps, organizations can stay on top of data privacy regulations in the future.

Those steps include these ongoing tasks:

  • Monitor the status of data privacy regulations in the countries, provinces and states where the customer base resides
  • Create a data privacy task force that can sharpen organizational focus and increase senior executive attention to privacy initiatives
  • Stay aware of new federal data privacy legislation, such as the proposed American Data Privacy and Protection Act (ADPPA)

It is also important to note the long-term benefits of data privacy compliance, particularly the strengthening of the company’s overall cybersecurity protections.

The cost of cleaning up data often lies beyond the comfort zone of businesses awash in potentially dirty data, yet doing so paves the way for reliable and compliant corporate data flows.

According to Kyle Kirwan, co-founder and CEO of data observability platform Bigeye, few companies have the resources to develop in-house tooling for challenges such as large-scale data observability. As a result, many are essentially flying blind, reacting when something goes wrong instead of continuously ensuring data quality.

A data trust provides a legal framework for managing shared data. It promotes cooperation through common rules for data security, privacy and confidentiality, and it enables organizations to securely connect their data sources to a shared repository of data.

Bigeye brings together data engineers, analysts, scientists and stakeholders to build trust in data. Its platform helps companies create SLAs for monitoring and anomaly detection and for ensuring data quality and reliable pipelines.

With full API access, a user-friendly interface, and automated yet flexible customization, data teams can monitor quality, consistently detect and resolve issues, and ensure that every user can rely on the data.
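A data SLA of the kind described above can be reduced to a simple check: a table must be fresher than an agreed threshold and its volume must stay in an expected band. The thresholds and table metadata below are invented for illustration and are not Bigeye's API.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical SLA: the table must be refreshed within 6 hours,
# and its row count must stay within an expected band.
SLA = {"max_staleness": timedelta(hours=6), "min_rows": 900, "max_rows": 1100}

def check_sla(last_loaded_at: datetime, row_count: int, now: datetime) -> list:
    """Return a list of SLA violations (empty when the table is healthy)."""
    violations = []
    if now - last_loaded_at > SLA["max_staleness"]:
        violations.append("stale: last load exceeds the 6-hour freshness SLA")
    if not (SLA["min_rows"] <= row_count <= SLA["max_rows"]):
        violations.append(f"row count {row_count} is outside the expected band")
    return violations

now = datetime(2022, 11, 1, 12, 0, tzinfo=timezone.utc)
ok = check_sla(now - timedelta(hours=2), 1000, now)    # healthy table
late = check_sla(now - timedelta(hours=8), 1500, now)  # stale and oversized
```

In practice such checks run on a schedule against warehouse metadata, and a violation pages the owning data team rather than just returning a list.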

uber data experience

Two early members of the data team at Uber, Kirwan and Bigeye co-founder and CTO Egor Gryaznov, set out to use what they learned building at Uber’s scale to create easy-to-deploy SaaS tools for data engineers.

Kirwan was one of Uber’s first data scientists and its first metadata product manager. Gryaznov was a staff-level engineer who managed Uber’s Vertica data warehouse and developed a number of internal data engineering tools and frameworks.

They realized that the tooling their team was building to manage Uber’s vast data lake and its thousands of internal data users was far beyond what was available to most data engineering teams.

Automatically monitoring and detecting reliability issues within thousands of tables in a data warehouse is no easy task. Companies like Instacart, Udacity, Docker, and Clubhouse use Bigeye to keep their analytics and machine learning working reliably.

a growing area

When they founded Bigeye in 2019, the pair recognized the growing problem of enterprises deploying data in operational workflows, machine learning-powered products and services, and high-ROI use cases such as strategic analytics and business intelligence-driven decision-making.

The data observability space saw several entrants in 2021. Bigeye differentiates itself from the pack by giving users the ability to automatically assess customer data quality with more than 70 unique data quality metrics.

These metrics are tracked by thousands of trained anomaly detection models to ensure that data quality problems, even the most difficult to detect, never slip past data engineers.

Last year, data observability burst onto the scene, with at least ten data observability startups announcing significant funding rounds.

Kirwan predicted that this year data observability will become a priority for data teams as they seek to balance the demands of managing complex platforms with the need to ensure data quality and pipeline reliability.

solution rundown

Bigeye’s data platform is no longer in beta. Some enterprise-grade features are still on the roadmap, such as full role-based access control. But others, such as SSO and in-VPC deployment, are available today.

The app is closed source, and hence proprietary models are used for anomaly detection. Bigeye is a big fan of open-source alternatives but decided to develop its own models to meet internally set performance goals.

Machine learning is used in a few key places to bring a unique mix of metrics to each table in a customer’s connected data sources. Anomaly detection models are trained on each of those metrics to detect abnormal behavior.
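The idea of training an anomaly detection model on a per-table metric can be shown in miniature. This is a toy z-score check on a made-up daily row count, only a sketch of the general shape; Bigeye's actual models are far richer, and none of the numbers here come from the article.

```python
import statistics

def is_anomalous(history: list, observed: float, threshold: float = 3.0) -> bool:
    """Flag an observation more than `threshold` standard deviations
    from the mean of the recent history of the metric."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return observed != mean
    z_score = abs(observed - mean) / stdev
    return z_score > threshold

# Hypothetical metric: a table's daily row count over the past week.
daily_rows = [1000, 1020, 990, 1010, 1005, 995, 1015]
print(is_anomalous(daily_rows, 1008))  # an ordinary day -> False
print(is_anomalous(daily_rows, 4000))  # a sudden 4x jump -> True
```

The same pattern generalizes to other metrics (null rates, freshness, distinct counts), with one model per metric per table, which is why the tooling has to manage thousands of models automatically.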

Three features built in late 2021 automatically detect and alert on data quality issues and enable data quality SLAs.

The first, deltas, makes it easy to compare and validate multiple versions of any dataset.
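A simplified picture of what comparing two versions of a dataset involves: diff the records by key and report what was added, removed or changed. This is only an illustration of the concept; the data and function here are invented, not Bigeye's deltas feature itself.

```python
def dataset_delta(old: dict, new: dict) -> dict:
    """Compare two keyed versions of a dataset and summarize the differences."""
    added = sorted(k for k in new if k not in old)
    removed = sorted(k for k in old if k not in new)
    changed = sorted(k for k in old.keys() & new.keys() if old[k] != new[k])
    return {"added": added, "removed": removed, "changed": changed}

# Hypothetical user records keyed by id: (name, age)
v1 = {"u1": ("Alice", 30), "u2": ("Bob", 25), "u3": ("Cara", 41)}
v2 = {"u1": ("Alice", 31), "u2": ("Bob", 25), "u4": ("Dan", 29)}
print(dataset_delta(v1, v2))
# {'added': ['u4'], 'removed': ['u3'], 'changed': ['u1']}
```

At warehouse scale the comparison runs on aggregates and samples rather than full rows, but the output, a summary of what diverged between versions, serves the same validation purpose.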

The second, issues, brings multiple alerts together along with valuable context about related problems. This makes it easier to document past fixes and speed up resolution.

The third, dashboards, provides a holistic view of the health of the data, helps identify data quality hotspots and gaps in monitoring coverage, and measures a team’s improvement in reliability.

eyeing the data warehouse

TechNewsWorld spoke with Kirwan to uncover some of the complexities of his company’s data observability platform.

TechNewsWorld: What makes Bigeye’s approach innovative or cutting edge?

Kyle Kirwan, Bigeye co-founder and CEO

Kyle Kirwan: Data observability requires consistent and thorough knowledge of what is happening inside all the tables and pipelines in your data stack. It is similar to what SRE [site reliability engineering] and DevOps teams use to keep applications and infrastructure working around the clock, but repurposed for the world of data engineering and data science.

While data quality and data reliability have been an issue for decades, data applications are now critical to how many major businesses run; any loss of data, outage, or degradation can quickly mean lost revenue and customers.

Without data observability, data teams must continually react to data quality issues and untangle problems as they arise. A better solution is to proactively identify problems and fix the root causes.

How does trust affect data?

Kirwan: Often, problems are discovered by stakeholders, such as executives who do not trust their often-broken dashboards, or users who get confusing results from in-product machine learning models. Data engineers can get ahead of problems and prevent business impact if they are alerted early enough.

How does this concept differ from similar sounding technologies like Integrated Data Management?

Kirwan: Data observability is a core function within data operations (think: data management). Many customers look for best-of-breed solutions for each task within data operations. This is why technologies like Snowflake, Fivetran, Airflow and dbt are exploding in popularity. Each is considered an important part of the “modern data stack” rather than a one-size-fits-none solution.

Data observability, data SLAs, ETL [extract, transform, load] code version control, data pipeline testing, and other techniques must be used together to keep modern data pipelines working smoothly, just as high-performing software engineering and DevOps teams use their complementary technologies.

What role do data pipelines and DataOps play in data observability?

Kirwan: Data observability is closely related to the emerging practices of DataOps and data reliability engineering. DataOps refers to the broad set of operational challenges that data platform owners face. Data reliability engineering is a part, but only a part, of DataOps, just as site reliability engineering is related to, but does not encompass, all of DevOps.

Data security can benefit from data observability, as it can be used to identify unexpected changes in query volume on different tables or changes in the behavior of ETL pipelines. However, data observability by itself is not a complete data security solution.

What challenges does this technology face?

Kirwan: These challenges include issues such as data discovery and governance, cost tracking and management, and access control. They also include how to handle the growing numbers of queries, dashboards, and ML features and models.

Reliability and uptime are certainly challenges many DevOps teams are responsible for, but those teams are also often charged with other concerns, such as developer velocity and security. Within these two areas, data observability lets data teams know whether their data and data pipelines are error-free.

What are the challenges of implementing and maintaining data observability technology?

Kirwan: Effective data observability systems must be integrated into the workflows of the data team. This enables them to respond continuously to data issues and focus on growing their data platform rather than putting out data fires. However, poorly tuned data observability systems can result in a flood of false positives.

An effective data observability system should require little maintenance beyond testing for data quality issues, automatically adapting to changes in the business. A poorly optimized system, however, may lose accuracy as the business changes or may demand manual tuning to keep up, which can be time-consuming.

Data observability can also tax a data warehouse if not optimized properly. Bigeye’s team has experience optimizing data observability at scale to ensure that the platform does not impact data warehouse performance.

Do you know whether your company data is clean and well managed? Why does it matter anyway?

Without a working governance plan, you may have no company to worry about – data-wise.

Data governance is a collection of practices and procedures establishing rules, policies and procedures that ensure data accuracy, quality, reliability and security. It ensures the formal management of data assets within an organization.

Everyone in business understands the need to have and use clean data. But ensuring it is clean and usable is a bigger challenge, according to David Kolinek, vice president of product management at Ataccama.

This challenge is compounded when business users have to rely on scarce technical resources. Often, no one person oversees data governance, or that person doesn’t have a complete understanding of how the data will be used and how to clean it up.

This is where Ataccama comes into play. The company’s mission is to provide a solution that even people without technical knowledge, such as SQL skills, can use to find the data they need, evaluate its quality, understand how to fix any issues, and determine whether that data will serve their purposes.

“With Ataccama, business users don’t need to involve IT to manage, access and clean their data,” Kolinek told TechNewsWorld.

Keeping users in mind

Ataccama was founded in 2007 and was originally bootstrapped.

It started as part of a consulting company, Adastra, which is still in business today. However, Ataccama focused on software rather than consulting, so management spun the operation off as a product company that addresses data quality issues.

Ataccama started with a basic approach: an engine that did basic data cleansing and transformation. But it still required an expert user because of the user-supplied configuration.

“So, we added a visual presentation of the steps, enabling things like data transformation and cleansing. This made it a low-code platform because users were able to do most of the work using just the application user interface. But at that point it was still a fat-client platform,” Kolinek explained.

However, the current version is designed with the non-technical user in mind. The software includes a thin client, a focus on automation, and an easy-to-use interface.

“But what really stands out is the user experience, made up of the seamless integration that we were able to achieve with the 13th version of our engine. It delivers robust performance that is crafted to perfection,” he offered.

Digging deeper into data management issues

I asked Kolinek to discuss the issues of data governance and quality further. Here is our conversation.

TechNewsWorld: How is Ataccama’s concept of centralizing or consolidating data management different from other cloud systems such as Microsoft, Salesforce, AWS and Google Cloud?

David Kolinek: We are platform agnostic and do not target a specific technology. Microsoft and AWS have their own native solutions that work well, but only within their own infrastructure. Our portfolio is wide open, so it can serve use cases on any infrastructure.

In addition, we have data processing capabilities that not all cloud providers have. Metadata is useful for automated processing, generating more metadata, which can be used for additional analysis.

We have developed both these technologies in-house so that we can provide native integration. As a result, we can provide a better user experience and complete automation.

How is this concept different from the notion of standardization of data?

David Kolinek, Vice President of Product Management, Ataccama

Kolinek: Standardization is just one of many things we do. Typically, standardization can be easily automated, in the same way we automate cleansing or data enrichment. We also provide manual data correction for resolving certain issues, such as missing Social Security numbers.

We cannot generate an SSN, but we can derive a date of birth from other information. So, standardization is not separate; it is a subset of the things that improve quality. But for us it is not just about standardizing data. It is about having good quality data so the information can be leveraged properly.
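The kind of automated standardization Kolinek describes can be sketched with two toy rules: normalizing dates and phone numbers into one canonical form. The target formats and sample values below are assumptions for illustration, not Ataccama's rules.

```python
import re

def standardize_date(value: str) -> str:
    """Accept 'MM/DD/YYYY' or ISO 'YYYY-MM-DD' and emit ISO 'YYYY-MM-DD'."""
    m = re.fullmatch(r"(\d{2})/(\d{2})/(\d{4})", value)
    if m:
        month, day, year = m.groups()
        return f"{year}-{month}-{day}"
    if re.fullmatch(r"\d{4}-\d{2}-\d{2}", value):
        return value
    raise ValueError(f"unrecognized date format: {value}")

def standardize_phone(value: str) -> str:
    """Strip punctuation and emit a bare 10-digit string."""
    digits = re.sub(r"\D", "", value)
    if len(digits) != 10:
        raise ValueError(f"expected 10 digits, got {len(digits)}")
    return digits

print(standardize_date("07/04/1990"))       # 1990-07-04
print(standardize_phone("(555) 123-4567"))  # 5551234567
```

The `ValueError` branches mark the records that rules alone cannot fix; those are the cases that fall through to the manual correction workflow mentioned above.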

How does Ataccama’s data management platform benefit users?

Kolinek: User experience is really our biggest advantage, and the platform is ideal for serving multiple personas. Companies need to enable both business users and IT people when it comes to data management, which requires a solution that business and IT can collaborate on.

Another great advantage of our platform is the strong synergy between data processing and metadata management that it provides.

Most other data management vendors cover only one of these areas. We also use machine learning and a rules-based approach and validation/standardization, both of which, again, are not supported by other vendors.

Furthermore, because we are technology agnostic, users can connect to many different technologies from a single platform. With edge processing, for example, you can configure something once in Ataccama ONE, and the platform will translate it for different platforms.

Does Ataccama’s platform lock users in the way proprietary software often does?

Kolinek: We have developed all the main components of the platform ourselves, and they are tightly integrated. There has been a huge wave of acquisitions in this space lately, with big vendors buying smaller ones to fill gaps. In some cases, you end up buying and managing not one platform, but several.

With Ataccama, you can buy just one module, such as Data Quality/Standardization, and later expand to others, such as Master Data Management (MDM). It all works together seamlessly. Just activate our modules as you need them. This makes it easy for customers to start small and expand when the time is right.

Why is a unified data platform so important in this process?

Kolinek: The biggest advantage of a unified platform is that companies are no longer looking for point solutions to single problems like data standardization. It is all interconnected.

For example, to standardize you must verify the quality of the data, and for that, you must first find and catalog it. If you have an issue, even though it may seem like a discrete problem, it probably involves many other aspects of data management.

The beauty of an integrated platform is that in most use cases, you have a solution with native integration, and you can start using other modules.

What role do AI and ML play today in data governance, data quality and master data management? How is this changing the process?

Kolinek: Machine learning enables customers to be more proactive. Previously, you would identify and report a problem, check what went wrong, and then create a data quality rule to prevent a recurrence. It is all reactive, based on something breaking, being found, reported and then fixed.

With ML, by contrast, you can be proactive. You give it training data instead of rules. The platform then detects differences in patterns and identifies anomalies to alert you to a problem. This is not possible with a rule-based approach, and it is very easy to scale to a large number of data sources. The more data you have, the better the training and the better its accuracy.
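The contrast Kolinek draws can be shown in miniature: a hand-written rule with fixed limits versus a check that derives its acceptable range from training data. Both functions and all the numbers are toy stand-ins for real data quality tooling.

```python
import statistics

def rule_check(value: float) -> bool:
    """Reactive approach: a fixed rule written after a past incident."""
    return 0 <= value <= 100

def make_learned_check(training: list, k: float = 3.0):
    """Proactive approach: derive the expected range from example data
    instead of hand-set limits (here, mean plus or minus k standard deviations)."""
    mean = statistics.mean(training)
    stdev = statistics.stdev(training)
    return lambda value: abs(value - mean) <= k * stdev

# Hypothetical metric history the model "trains" on.
history = [48, 52, 50, 49, 51, 47, 53]
learned_check = make_learned_check(history)

print(rule_check(95), learned_check(95))  # True False: the fixed rule misses the drift
```

A value of 95 passes the fixed 0 to 100 rule, yet the learned check flags it because it sits far outside the pattern of the training data, which is the proactive behavior described above.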

Aside from cost savings, what benefits can enterprises gain from consolidating their data repositories? For example, does it improve security, CX results, etc.?

Kolinek: It improves security and minimizes potential future leaks. For example, we had customers who were storing data that no one was using. In many cases, they didn’t even know the data existed! Now, they are not only consolidating their technology stack, but they can also see all the stored data.

It is also much easier to onboard newcomers to the platform when data is consolidated. The more transparent the environment, the sooner people can use it and start getting value.

It is not so much about saving money as it is about leveraging all your data to generate a competitive advantage and generate additional revenue. It provides data scientists with the means to build things that will drive business forward.

What are the steps in adopting a data management platform?

Kolinek: Start with a preliminary analysis. Focus on the biggest issues the company wants to tackle and select platform modules to address them. It is important to define goals at this stage. Which KPIs do you want to target? What level of data quality do you want to achieve? These are questions you should ask.

Next, you need a champion to drive execution and identify the key stakeholders behind the initiative. This requires extensive communication among those stakeholders, so it is important that someone focuses on educating others about the benefits and training the teams on the system. Then comes the implementation phase, where you address the key issues identified in the analysis, followed by the rollout.

Finally, think about the next set of issues to address and, if necessary, enable additional modules in the platform to achieve those goals. The worst outcome is to buy a tool and deploy it without providing any service, education or support; that all but guarantees a low adoption rate. Education, support and service are critical during the adoption phase.