A Chinese cyber espionage group is using a fake news site to infect government and energy industry targets in Australia, Malaysia and Europe with malware, according to a blog posted online on Tuesday by Proofpoint and PwC Threat Intelligence.

The group is known by several names, including APT40, Leviathan, TA423 and Red Ladon. Four of its members were indicted by the US Department of Justice in 2021 for hacking several companies, universities and governments in the United States and around the world between 2011 and 2018.

The United States Department of Justice indicted APT40 members in 2021 / Image Credit: FBI


The group is using its fake Australian news site to infect visitors with the Scanbox exploit framework. “Scanbox is a reconnaissance and exploitation framework deployed by an attacker to collect a variety of information, such as the target’s public-facing IP address, the type of web browser used, and its configuration,” explained Sherrod DeGrippo, Proofpoint’s vice president for threat research and detection.

“It serves as a setup for the information-gathering steps that follow, and for potential follow-on exploits or compromises, where malware is deployed to gain persistence on the victim’s system and allow the attacker to carry out espionage activities,” she told TechNewsWorld.

“It creates a picture of the victim’s network that the actors then study to determine the best path forward for further compromise,” she said.
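
Scanbox runs as JavaScript in the visitor’s browser. As a hedged illustration of the kind of profile such a framework can assemble client-side (the interface and field names below are hypothetical, not Scanbox’s actual code), consider this TypeScript sketch:

```typescript
// Hypothetical sketch of the browser reconnaissance DeGrippo describes.
// The interface and field names are illustrative, not Scanbox's code.
interface VisitorProfile {
  userAgent: string;       // browser type and version
  language: string;        // locale hints at the victim's region
  screen: string;          // display geometry aids fingerprinting
  timezoneOffset: number;  // minutes offset from UTC
  plugins: string[];       // installed plugins hint at exploitable software
}

function collectProfile(): VisitorProfile {
  return {
    userAgent: navigator.userAgent,
    language: navigator.language,
    screen: `${window.screen.width}x${window.screen.height}`,
    timezoneOffset: new Date().getTimezoneOffset(),
    plugins: Array.from(navigator.plugins, (p) => p.name),
  };
}

// The profile is then sent to attacker infrastructure, which also logs
// the request's source address: the target's public-facing IP.
```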

“Watering hole” attacks that use Scanbox appeal to hackers because the point of compromise is not within the victim’s organization, added John Bambenek, a principal threat hunter at Netenrich, a San Jose, Calif.-based IT and digital security operations company.

“Therefore, it is difficult to detect that information is being stolen,” he told TechNewsWorld.

modular attack

According to the Proofpoint/PwC blog, the TA423 campaign primarily targeted local and federal Australian government agencies, Australian news media companies, and global heavy-industry manufacturers that maintain fleets of wind turbines in the South China Sea.

It noted that the phishing emails for the campaign were sent from Gmail and Outlook addresses that Proofpoint believes, with moderate confidence, were created by the attackers.

Subject lines in phishing emails included “sick leave,” “user research,” and “request collaboration.”

The threat actors often posed as employees of the fictional media outlet “Australian Morning News,” the blog explained, supplying a URL to their malicious domain and asking targets to view the website or share research material the site planned to publish.

Targets who clicked the URL were redirected to the fake news site and, without their knowledge, served the Scanbox malware. To lend credibility to the fake website, the adversaries lifted content from legitimate news outlets such as the BBC and Sky News.

Scanbox can deliver its code in one of two ways: in a single block, which gives the attacker instant access to the malware’s full functionality, or through a modular, plug-in architecture. The TA423 crew chose the plug-in method.

According to PwC, the modular route helps avoid crashes and errors that could alert a target that its system is under attack. It is also a way to reduce the attack’s visibility to researchers.
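
As a hedged sketch of what that plug-in pattern looks like in practice (the module names and paths are invented for illustration, not taken from Scanbox), each module is fetched only when the attacker decides it is needed:

```typescript
// Illustrative plug-in loader: each module is fetched separately, so no
// single payload exposes the framework's full functionality. Module
// names and paths are hypothetical, not taken from Scanbox.
const MODULES: Record<string, string> = {
  browserRecon: "/m/browser.js",
  pluginRecon: "/m/plugins.js",
  keylogger: "/m/keys.js",
};

function loadModule(name: string): void {
  const script = document.createElement("script");
  script.src = MODULES[name];
  document.head.appendChild(script); // the module executes once it loads
}

// Pull only the modules judged useful for this victim, keeping the
// attack's footprint, and its visibility to defenders, small.
loadModule("browserRecon");
```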

phishing boom

As campaigns like this show, phishing remains the tip of the spear used to break into many organizations and steal their data. “Phishing sites have seen an unprecedented rise in 2022,” said Monia Deng, director of product marketing at Bolster, a provider of automated digital risk protection in Los Altos, Calif.

“Research has shown this problem increasing tenfold in 2022, because this method is easy, effective, and a perfect storm to deploy in the remote-work digital age,” she told TechNewsWorld.

DeGrippo said phishing campaigns continue to work because threat actors adapt. “They use current events and holistic social-engineering techniques, at times preying on targets’ fear and sense of urgency or importance,” she said.

A recent trend among threat actors, she continued, is attempting to increase the effectiveness of their campaigns by building trust with intended victims through extended interactions, either with the individuals themselves or by leveraging existing relationships between coworkers.

Roger Grimes, a data-driven defense evangelist with KnowBe4, a security awareness training provider in Clearwater, Fla., stressed that social-engineering attacks are particularly resistant to technical defenses.

“Try as we might, so far there is no great technical defense that prevents all social-engineering attacks,” he told TechNewsWorld. “This is especially difficult because social-engineering attacks can arrive over email, phone, text message and social media.”

Even though social engineering is involved in 70% to 90% of all successful malicious cyberattacks, it is the rare organization that spends more than 5% of its resources to mitigate it, he continued.

“It’s the number one problem, and we treat it like a small part of the problem,” he said. “It’s the fundamental disconnect that allows attackers and malware to be so successful. Until we see this as the number one problem, it will continue to be the primary way attackers attack us. It’s just math.”

two things to remember

While TA423 used email in its phishing campaign, Grimes noted that adversaries are moving away from that approach.

“Attackers are more often using other methods, such as social media, SMS text messages and voice calls, to do their social engineering,” he explained. “This is because many organizations focus almost exclusively on email-based social engineering, and the training and tools to combat social engineering on other types of media channels are not at the same level of sophistication in most organizations.”

“That’s why it’s important that every organization build an individual and organizational culture of healthy skepticism,” he added, “where everyone is taught how to recognize the signs of a social-engineering attack no matter how it arrives, whether by email, web, social media, SMS message or phone call, and no matter who it appears to have been sent by.”

He explained that most social-engineering attacks have two things in common. First, they arrive unexpectedly: the user was not anticipating the message. Second, they ask the user to do something the sender, whoever it purports to be, has never asked the user to do before.

“It may be a valid request,” he continued, “but all users should be taught that any message with those two traits is at very high risk of being a social-engineering attack, and should be verified using a reliable method, such as calling the apparent sender directly on a known-good phone number.”
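
Grimes’ two-trait test is simple enough to encode as a triage rule. A minimal TypeScript sketch (the message fields here are assumptions for illustration, not from any real filtering product):

```typescript
// Grimes' two-trait heuristic encoded literally. Both inputs are
// judgments the recipient (or a mail-triage tool) must supply; the
// field names are assumptions for illustration.
interface InboundMessage {
  wasExpected: boolean;   // was the recipient anticipating this message?
  novelRequest: boolean;  // does it ask for something this sender never has before?
}

function isHighRisk(msg: InboundMessage): boolean {
  // Unexpected arrival plus a first-time request: treat as high risk.
  return !msg.wasExpected && msg.novelRequest;
}

// Anything flagged here should be verified out-of-band, such as by
// calling the apparent sender on a known-good phone number.
```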

“If more organizations taught those two things to remember,” he said, “the online world would be a much safer place to compute.”

Fake social media accounts are usually associated with bot networks, but some research released Tuesday showed that many social media users are creating fake accounts of their own for a variety of reasons.

According to a survey of 1,500 US social media users conducted by USCasinos.com, one in three US social media users have multiple accounts on the social media platforms they use. About half (48%) of people with multiple accounts have two or more additional accounts.

Reasons for creating additional accounts vary, but the most commonly cited are “sharing my thoughts without judgment” (41%) and “spying on someone else’s profile” (38%).

Other motives behind creating fake accounts include “increasing my chances of winning an online contest” (13%), “increasing likes, followers and other metrics on my real account” (5%), fooling others (2.6%) and scamming others (0.4%).

When asked where they were creating their fake accounts, respondents most often named Twitter (41%), followed by Facebook (31%) and Instagram (28%). “That’s because Twitter is pretty much open by default,” said Will Duffield, a policy analyst at the Cato Institute, a Washington, DC think tank.

“Twitter power users will often have multiple accounts: one for a mass audience, another for smaller groups, one that’s open, one that’s private,” he told TechNewsWorld.

Infographic explains where US residents create fake social media accounts

Infographic Credit: USCasinos.com


The online casino directory site’s research was prompted by events at Twitter, noted study co-author Ines Ferreira. “We started this study primarily because of discussions about Elon Musk and the Twitter deal,” she told TechNewsWorld.

That deal is currently tied up in the courts and hinges on a dispute between Musk and the Twitter board over the number of fake accounts on the platform.

gender-swapping spies

The types of fake accounts in the study, however, differ from the ones vexing Musk. “The survey addresses two completely different issues,” Duffield said.

“On the one hand, you have automated accounts, things operated by machines and often used for spamming. That’s the kind of fake account Elon Musk alleges Twitter has too many of,” he told TechNewsWorld. “On the other hand, there are pseudonymous accounts, which are what’s being surveyed here. They are operated by users who do not wish to use their real names.”

The survey also found that most users (80.9%) kept their own gender when creating fake accounts. The main exception to that practice, the survey noted, is when users want to spy on other accounts; then they favor creating a fake account of the opposite sex. Overall, 13.1% of those surveyed said they switched to the opposite sex when creating fake accounts.

Infographic shows how many fake social media accounts their owners maintain

Infographic Credit: USCasinos.com


“There are a number of reasons why we don’t want everything we do online to be associated with our real name,” Duffield said. “And it doesn’t necessarily have to be cancel culture or anything like that.”

“One of the great things about the internet is that it allows us to explore identities without committing ourselves, or to try on new personas so we can showcase one aspect of ourselves at a time,” he explained.

“It is absolutely normal for people to use pseudonyms online. If anything, using real names is a more contemporary expectation,” he said.

Accounts created with impunity

The study also found that most fake-account creators (53.3%) prefer to keep the practice secret from their inner circle of acquaintances. When they did reveal their fake accounts, they were most likely to tell friends (29.9%), followed by family (9.9%) and partners (7.7%).

The researchers also found that more than half of fake-account owners (53.3%) were millennials. Gen X respondents maintained an average of three fake accounts, and Gen Z respondents an average of two.

According to the study, creators of fake accounts do so largely with impunity. When asked whether their fake accounts had ever been reported on the platforms where they were created, 94% of participants said no.

Infographic describing platforms where fake social media accounts have been reported

Infographic Credit: USCasinos.com


“Even though these platforms release new algorithms to detect these accounts, most of them are never reported,” Ferreira said. “There are so many fake accounts, and you can create them so easily, that it’s really hard to identify them all.”

“After Elon Musk’s deal with Twitter, these platforms are going to be thinking a little bit more about how they’re going to do it,” she said.

However, Duffield downplayed the need for users to police fake accounts. “Creating these accounts is not against the platform rules, so there is no reason for the platform to consider them a problem,” he said.

“Since these accounts are operated by real people, even though they don’t carry real names, they act like real people,” he continued. “They’re messaging one person at a time. They’re taking the time to type things out. They have a typical day/night cycle. They’re not sending messages to 100 different people at once, at all hours of the day, or blasting out a thousand messages.”
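
Duffield’s distinction suggests why simple behavioral signals can separate pseudonymous humans from bots. A rough TypeScript sketch of that logic (the thresholds are invented for illustration, not any platform’s actual detection rules):

```typescript
// Toy heuristic contrasting human rhythm with bot volume. Thresholds
// are invented for illustration, not any platform's real detection rules.
interface AccountActivity {
  messagesPerHour: number[];  // 24 buckets: messages sent in each hour of the day
  distinctRecipients: number; // fan-out over the same period
}

function looksAutomated(a: AccountActivity): boolean {
  const total = a.messagesPerHour.reduce((sum, n) => sum + n, 0);
  const quietHours = a.messagesPerHour.filter((n) => n === 0).length;
  // Humans show idle hours (a day/night cycle) and modest fan-out;
  // bots message large audiences around the clock.
  return quietHours < 4 && (total > 1000 || a.distinctRecipients > 100);
}
```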

harmless fakes?

Duffield stressed that unlike fake accounts created by bots, fake accounts created by users are less harmful to the platforms hosting them.

“There is a theory that people are more abusive when they are using a pseudonymous account or one that is not tied to their real identity,” he observed, “but from a moderation standpoint, banning a pseudonymous account is no different from banning a real person.”

“Facebook has had a real-name policy, although it has received a lot of criticism over the years,” he said. “I’d say it’s intentionally under-enforced at this point.”

“As long as the pseudonymous account is complying with the rules, this is not a problem for the platforms,” he said.

While bot accounts do not contribute to the social media platform’s business model, fake user accounts do.

“If a pseudonymous account is being used by a real human being, they’re still seeing the ads,” Duffield explained. “It’s not like a bot clicking on things without a human involved. Regardless of the name on the account, if they’re seeing the contextual ads they’re being shown, from a platform standpoint it’s not really a problem.”

“Activity is reflected in monthly active user statistics, which is what the platform, advertisers and potential buyers care about,” he continued. “The total number of accounts is a useless statistic because people constantly drop accounts.”

Still, Ferreira argued that any form of fake account undermines the credibility of social media platforms. “At some point,” she said, “there are going to be more fake users than real users, so they need to do something about that now.”

Amazon filed a lawsuit on Tuesday against the administrators of more than 10,000 Facebook groups, accusing them of being part of a broker network that churns out fake product reviews.

In its lawsuit, Amazon alleges that administrators attempted to organize the placement of fake reviews on Amazon in exchange for money or free products. It said groups have been set up in the United States, United Kingdom, Germany, France, Italy, Spain and Japan to recruit people to write fake reviews on Amazon’s online store.

Amazon said in a statement posted online that it would use the information found through the lawsuit to identify bad actors and remove the reviews they commissioned from the retail website.

“Our team stops millions of suspicious reviews before they’re ever seen by customers, and this lawsuit goes a step further to uncover perpetrators operating on social media,” Dharmesh Mehta, Amazon’s vice president of selling partner services, said in the statement. “Proactive legal action targeting bad actors is one of many ways we protect customers by holding bad actors accountable.”

against meta policy

Meta, which owns Facebook, condemned the groups for setting up fake-review mills on its infrastructure. “Groups that solicit or encourage fake reviews violate our policies and are removed,” Meta spokesperson Jen Ridings said in a statement provided to TechNewsWorld.

“We are working with Amazon on this matter and will continue to partner across the industry to address spam and fake reviews,” she said.

According to Meta, it has already removed most of the fraud groups cited in Amazon’s lawsuit and is actively investigating others for violating the company’s policy against fraud and deception.

It noted that it has introduced a number of tools to remove infringing content from its services, tools that use artificial intelligence, machine learning and computer vision to analyze specific instances of rule-breaking content and to identify patterns of abuse across the platform.

Is Facebook doing enough?

Rocio Concha, director of policy and advocacy at Which?, a consumer advocacy group in the UK, praised Amazon’s action but questioned whether Facebook was doing enough to prevent abuse of its platform.

“It is positive that Amazon has taken legal action against some of the fake-review brokers operating on Facebook, a problem that investigations have uncovered time and again,” she said in a statement. “However, it does raise a big question mark about Facebook’s proactive action to crack down on fake-review agents and protect consumers.”

“Facebook must explain why this activity is still prevalent, and the [U.K.] Competition and Markets Authority (CMA) must challenge the company to show that the action it is taking is effective,” she continued. “Otherwise, it should consider tougher action against the platform.”

“The government has announced that it plans to give stronger powers to the CMA to protect consumers from the avalanche of fake reviews,” she said. “These digital markets, competition and consumer reforms should be legislated as a priority.”

In 2019, Which? released a report estimating that 250,000 hotel reviews on the Tripadvisor website were fake. Tripadvisor dismissed that analysis as “simplistic,” but in its own transparency report a year later, the site found nearly one million reviews, or 3.6% of the total, to be fake.

no time for deep dives

“Most consumers don’t have time to dig deep into reviews,” said Ross Rubin, principal analyst at Reticle Research, a consumer technology advisory firm in New York City.

“They take star ratings as a way to build trust in a product and if people are being compensated for posting fake reviews, it undermines trust in reviews,” he told TechNewsWorld.

“Fake reviews not only encourage consumers to buy a substandard product, but they also make it more difficult to differentiate between products,” he said.

“If you have an overwhelming number of products in a category with four-and-a-half or five-star reviews because many of them are participating in these fake-review programs, the value of the reviews themselves is diminished,” he explained.

He acknowledged that fake reviews were a problem everywhere on the Internet. “But,” he continued, “because Amazon has such a strong position in online retailing and is often the first website consumers visit, it is disproportionately targeted by these fake review groups.”

Review mills also use bots to pad product reviews, but Rubin said the technology lacks the effectiveness of using a human. “The reason these groups are using people instead of bots is because bots are easier to detect,” he said. “Amazon uses machine learning techniques to identify when companies are using bots.”

‘Pervasive’ review manipulation

A report released last year by Uberall, an online and offline customer experience platform, termed review manipulation on Amazon “pervasive.”

Amazon claims that only 1% of reviews on its site are fake, but the report disputed that. It cited a 2018 analysis by Fakespot that found high proportions of fake reviews in certain product categories, such as nutritional supplements (64%), beauty (63%), electronics (61%) and athletic sneakers (59%).

“Even if we reduce these numbers by 50%, there will still be a gap between what Amazon and Fakespot report,” Uberall’s report said. Halving Fakespot’s figure for supplements, for example, still leaves 32% fake reviews, more than 30 times Amazon’s claimed 1%.

What can be done to curb fake reviews?

Uberall pointed out that Amazon and some others use a “Verified Buyer” label to signal high trust in reviews. “It is an approach that needs to be used more widely,” it noted, “though it is not foolproof, as Amazon has discovered.”

“Despite specific anti-fraud mechanisms,” it continued, “fake reviews are a problem that needs to be addressed more systematically and vigorously.”

The report identified several paths to address the problem: using greater technical sophistication and aggressive enforcement to bring review fraud down to the low single digits, adopting a review framework that is structurally difficult to defraud, and allowing only genuine, verified buyers to write reviews.

“These are not mutually exclusive approaches,” it explained. “They can and should be used in conjunction with each other.”

“With online reviews there is a huge amount at stake for businesses of all sizes,” the report said. “More and better reviews directly translate into online visibility, brand equity and revenue. This creates powerful incentives for businesses to pursue positive reviews and suppress or remove negative reviews.”