There is nothing more embarrassing than sharing or forwarding something you think is true and then being criticized because whoever received it knows better and thinks you are an idiot.

When I worked for a multinational company, I used to send out an email newsletter of things I found interesting and relevant to our work. Then one day I forwarded something about my company that wasn’t even remotely true because I hadn’t verified the source. That was the end of my newsletter, and suddenly I had a whole lot of executives convinced I was an idiot. It didn’t do my career any good.

This was before we had the internet, though we had email, and things have gotten worse since then. Now we can announce our stupidity not only to our boss and colleagues but to family, friends and thousands of people we have never met on social media.

None of us have time to research or fact-check every piece of information that comes our way. If you’re like me, you come across a lot of things that aren’t true.

Today, fake news covers politics, medicine, investing, cryptocurrencies, science, and even dating (consider the man in Oregon who used dating apps to kidnap and rape women). On top of that, there are frequent, competent attempts to trick us into providing information that could result in identity theft.

How do you fix this problem? Otherweb is a fascinating effort to help people get at the truth, not by moderating fake news, but by helping you fact-check it yourself so you can determine whether or not information is fake as you consume it.

Let’s explore Otherweb this week. Then we’ll close with our product of the week: a new smartphone from Samsung that’s a cut above the rest when it comes to making videos.

fake news

Right now, one of the problems with fake news is that many people use that designation for anything they disagree with or feel bad about, when the term should be reserved for news that is actually false or misleading. Furthermore, there is often disagreement as to whether a given piece of information is incorrect.

For example, a number of stories indicate that Earth’s core has not only stopped rotating but may have started rotating backwards, and recently, another credible source said it’s all BS. Now, if you’ve seen any of the movies suggesting such an event would end the world, the idea of Earth’s core spinning backwards is certainly scary.


Recently, I read an article on Forbes that argued that all other articles are BS because the people who wrote them didn’t understand the study they based their articles on. Can you imagine casually bringing up this topic with someone you were trying to impress, only to have that Forbes article shoved in your face with the implication that you are clueless?

I’m not saying either account is definitive, though since we’re still here, the “end of the world” scenario looks to be off the table (good news for a Monday). But had you known both articles existed, you could have nuanced your comment, chosen another topic, or taken a side and made a more credible argument, rather than getting caught repeating fake news, even if you heard it from what some consider a reliable news source.

An obvious improvement is to stop bringing up topics you don’t know about directly. Sometimes that seems like the safest route. But we still make decisions based on what we read, so knowing the risks of taking what we read as truth allows us to make better choices, not only about what we share but about how we share it.

the otherweb solution

That’s what Otherweb attempts to do. It’s a news-focused network like Twitter, but with an emphasis on making sure you have the information you need to determine whether a story is true. It lets you choose trusted sources to build your feed, and it uses AI to scan relevant news items and correct misleading headlines.

How many times have you clicked a link on Google thinking the story interested you, only to find that the headline has nothing to do with the content?

Otherweb also summarizes each article in bullet form so you get a brief overview of the content, which can save you from wasting time on the site, and you can use sliders to tune the type of content you want.


Unlike most other such services, which use someone else’s search engine (usually Google’s), Otherweb has its own, and at least for now, it is not ad-funded, so it has neither the desire nor the need to optimize search results for advertising revenue. You get closer to what you want to find without having to dig through all those paid and prioritized results that make searching that much harder.

Be aware that, at present, Otherweb has not worked out its revenue model and will wait until its user base reaches critical mass before surveying users to figure out how to monetize the service. This means the firm is limited in terms of funding, and changes are coming that may include fees to use the service or advertising to fund it. It will probably end up as some kind of hybrid model where you can choose to pay and use the service without ads or get the service for free but with annoying ads.

Otherweb will never be the financial powerhouse that Google is, but given that its differentiator is accurate news, it should be able to better balance the needs of advertisers with the needs of users. I would still recommend paying for the service to remove the possibility of your results being contaminated by any attempt to maximize advertising revenue.

wrapping up

In my business, fake news is a career-ender, so I’m always on the lookout for services and sources that can help me identify and avoid it.

Right after 9/11, I saw “Loose Change,” a very well-done conspiracy video that argues convincingly that the US was behind the attack on the twin towers. I almost became a believer because I had never seen such a well-made fake story. Luckily, the one person I spoke to about it immediately set me straight, and I didn’t write a column that would have forever branded me an idiot. But it was a very close call.

While still in its infancy, Otherweb seems to do a pretty good job of helping me determine whether a story is fake, thus protecting my reputation from otherwise silly mistakes.


Another interesting aspect of this service is that it is completely open-source and collaborative, so anyone wanting to do something similar but with a different spin can do so. That shows the people behind this app are less interested in revenue than in fixing the fake news problem.

Check out Otherweb if you get a chance. Log-in is required, so you must sign up with the site in order to use the service. We have very few people and companies focused on making the world a better place, and I want to see that change.

Maybe you, too, can avoid that next embarrassing moment where you face criticism for repeating a fake news story that you didn’t know, but should have known, was a hoax.

tech product of the week

samsung galaxy s23 ultra

Samsung is an interesting company and one of the few with the potential to compete with Apple head-on. To date, Samsung has underperformed that potential because making its products work together didn’t seem to be a priority, until now. That, I think, was the big news at the Samsung Galaxy Unpacked event last week: the company has started working on “better together,” and it’s doing it better than Apple.

The difference is that Samsung products work with other vendors’ products while still working best with other Samsung products, whereas similar offerings from Apple often work only with Apple products, which substantially limits Apple’s total addressable market (TAM).

For example, the Apple Watch, which is still the best smartwatch on the market, won’t work with Android phones, which limits the TAM for that watch to about one-third of what it might otherwise be. Samsung usually avoids that limitation, and its smartwatches are catching up to Apple’s.

But Samsung really hit it out of the park last week with its Galaxy S23 Ultra. The new phone can output up to 200-megapixel photos and 8K video at 30 fps, which is in line with professional cameras, and it can be used to make high-quality, professional-grade movies and pictures in almost any kind of light.


Samsung Galaxy S23 Ultra will be available on February 17, 2023. (Image credit: Samsung)


Its seamless connection to laptops, especially the Book3 series announced at the event, will make professional photographers take notice because it transfers RAW files, not the compressed files you usually end up with. I was a professional photographer, and even my best gear couldn’t do what this phone can do.

The Galaxy S23 Ultra has a Pro-Video mode that opens up all the settings. Assuming you know what you are doing, this allows you to create amazing pictures; if you don’t, it uses AI to do all that for you. I’m pretty sure a non-photographer with this phone could go way beyond what I could do in my days as a pro.

With decent mechanical and digital image stabilization, advanced high-speed focusing, “nightography” for great pictures in low light, and a processor built on Qualcomm’s most advanced technology, a solution designed jointly with Samsung, this phone really stands out.

The performance jump over last year’s phone is pretty extreme, too, with a 34% increase in CPU performance, a 49% increase in NPU (AI) performance, and a 41% increase in graphics, making it a flagship gaming smartphone and showcasing how far the line has come. Oh, and it has a huge, 1,750-nit display that should let you do things in bright sunlight that you can’t do with your current phone.

Granted, as you’d expect, it’s not cheap, with a list price of just under $1,200. But my heart lusts after this phone, and it is my product of the week.

In the past, I’ve been impressed with the amount Samsung spends on launch events but less impressed with the execution. Samsung executed this latest launch event almost perfectly, and credit goes to the team that prepared it. They spent more time pointing out why you would want a certain feature than on device speeds and feeds, which has always been a best practice. Nicely done! It’s worth watching.

The opinions expressed in this article are those of the author and do not necessarily reflect the views of ECT News Network.

A Chinese cyber-espionage group is using a fake news site to infect government and energy-industry targets in Australia, Malaysia, and Europe with malware, according to a blog posted online Tuesday by Proofpoint and PwC Threat Intelligence.

The group is known by several names, including APT40, Leviathan, TA423 and Red Ladon. Four of its members were indicted by the US Department of Justice in 2021 for hacking several companies, universities and governments in the United States and around the world between 2011 and 2018.


The United States Department of Justice indicted APT40 members in 2021 / Image Credit: FBI


The group is using its fake Australian news site to infect visitors with the Scanbox exploitation framework. “Scanbox is a reconnaissance and exploitation framework deployed by an attacker to collect a variety of information, such as the target’s public-facing IP address, the type of web browser used, and its configuration,” explained Sherrod DeGrippo, Proofpoint’s vice president for threat research and detection.

“It serves as a setup for the information-gathering steps that follow and potential follow-up exploits or compromises, where malware is deployed to gain persistence on the victim’s system and allow the attacker to carry out espionage activities,” she told TechNewsWorld.

“It creates a picture of the victim’s network that the actors then study to determine the best path forward for further compromise,” she said.

“Watering hole” attacks that use Scanbox appeal to hackers because the point of compromise is not within the victim’s organization, added John Bambenek, a principal threat hunter at Netenrich, a San Jose, Calif.-based IT and digital security operations company.

“Therefore, it is difficult to detect that information is being stolen,” he told TechNewsWorld.
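One practical first-pass defense against watering-hole payloads is reviewing outbound web traffic for scripts loaded from unfamiliar domains. The sketch below is a minimal, hypothetical illustration; the allowlist, log format, and domain names are all invented for the example and do not come from the Proofpoint/PwC report:

```python
# Flag proxy-log entries that fetched JavaScript from domains not on a
# known-good allowlist -- a crude first pass at spotting watering-hole
# style script injection. All domains below are made up.
ALLOWLIST = {"bbc.co.uk", "sky.com", "intranet.example"}

def suspicious_script_loads(log_entries):
    """Return log entries that loaded .js files from unlisted domains."""
    flagged = []
    for entry in log_entries:
        if entry["path"].endswith(".js") and entry["domain"] not in ALLOWLIST:
            flagged.append(entry)
    return flagged

logs = [
    {"domain": "bbc.co.uk", "path": "/news/widget.js"},
    {"domain": "fakenews-site.example", "path": "/framework.js"},
]
print(suspicious_script_loads(logs))  # only the unlisted domain is flagged
```

An allowlist this simple would be noisy in practice, but it captures the core idea: the malicious script arrives from infrastructure the victim has never contacted before.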

modular attack

According to the Proofpoint/PwC blog, the TA423 campaign primarily targeted local and federal Australian government agencies, Australian news media companies, and global heavy-industry manufacturers that conduct maintenance of wind-turbine fleets in the South China Sea.

It noted that the phishing emails for the campaign were sent from Gmail and Outlook email addresses that Proofpoint believes, with moderate confidence, were created by the attackers.

Subject lines in phishing emails included “sick leave,” “user research,” and “request collaboration.”

The threat actors often posed as employees of the fictional media publication “Australian Morning News,” the blog explained, providing a URL to their malicious domain and asking targets to view their website or share research material that the site was publishing.

Anyone who clicks the target URL is redirected to the fake news site and, without their knowledge, served the Scanbox malware. To lend credibility to their fake website, the adversaries posted content taken from legitimate news sites such as the BBC and Sky News.

Scanbox can deliver its code in one of two ways: in a single block, which gives an attacker instant access to the malware’s full functionality, or through a plug-in, modular architecture. The TA423 crew chose the plug-in method.

According to PwC, the modular route can help avoid crashes and errors that would alert a target that its system is under attack. It also reduces the attack’s visibility to researchers.
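The difference between the two delivery modes can be pictured with a toy plug-in registry: capabilities are instantiated only when requested, so an observer sees just the pieces actually used rather than the whole toolkit. This sketch is purely illustrative, bears no relation to Scanbox’s actual code, and uses invented module names:

```python
# Toy illustration of modular, on-demand loading versus shipping one
# monolithic payload. Module names and behaviors are invented; each
# "module" here just returns a descriptive string.
AVAILABLE_MODULES = {
    "browser_info": lambda: "collect browser type",
    "plugin_list": lambda: "collect installed plugins",
    "ip_lookup": lambda: "collect public-facing IP",
}

class PluginLoader:
    def __init__(self):
        self.loaded = {}  # only modules actually requested end up here

    def load(self, name):
        # In a modular design, only the requested component is ever
        # transferred, so unused capabilities stay invisible.
        if name not in self.loaded:
            self.loaded[name] = AVAILABLE_MODULES[name]
        return self.loaded[name]()

loader = PluginLoader()
loader.load("browser_info")
print(sorted(loader.loaded))  # ['browser_info'] -- the rest never loaded
```

A monolithic delivery would be the equivalent of copying all of `AVAILABLE_MODULES` across at once; the modular route trades convenience for stealth.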

phishing boom

As such campaigns show, phishing remains the tip of the spear used to break into many organizations and steal their data. “Phishing sites have seen an unprecedented increase in 2022,” said Monia Deng, director of product marketing at Bolster, a provider of automated digital risk protection in Los Altos, Calif.

“Research has shown that this problem will increase tenfold in 2022 because this method is easy, effective, and a perfect storm to deploy in the post-pandemic digital age,” she told TechNewsWorld.

DeGrippo said phishing campaigns continue to work as threat actors adapt. “They use current events and holistic social-engineering techniques, at times preying on a target’s fear and sense of urgency or importance,” she said.

A recent trend among threat actors, she continued, is attempting to increase the effectiveness of their campaigns by building trust with intended victims through extended interactions with individuals or by exploiting existing relationships between coworkers.

Roger Grimes, a defense evangelist with KnowBe4, a security awareness training provider in Clearwater, Fla., stressed that social-engineering attacks are particularly resistant to technical defenses.

“Try as we might, there is so far no great technical defense that prevents all social-engineering attacks,” he told TechNewsWorld. “This is especially difficult because social-engineering attacks can come across email, phone, text messages, and social media.”

Even though social engineering is involved in 70% to 90% of all successful malicious cyberattacks, it is the rare organization that spends more than 5% of its resources mitigating it, he continued.

“It’s the number one problem, and we treat it like a small part of the problem,” he said. “It’s the fundamental disconnect that allows attackers and malware to be so successful. Until we see this as the number one problem, it will continue to be the primary way attackers attack us. It’s just math.”

two things to remember

While TA423 used email in its phishing campaign, Grimes noted that adversaries are moving away from that approach.

“Attackers are using other methods, such as social media, SMS text messages, and voice calls, to do their social engineering more often,” he explained. “This is because many organizations focus almost exclusively on email-based social engineering, and the training and tools to combat social engineering on other media channels are not at the same level of sophistication in most organizations.”

“That’s why it’s important that every organization builds an individual and organizational culture of healthy skepticism,” he added, “where everyone is taught how to recognize the signs of a social-engineering attack no matter how it arrives (email, web, social media, SMS messages, or phone calls) and no matter who it appears to be sent by.”

He explained that most social-engineering attacks have two things in common. First, they arrive unexpectedly; the user was not expecting them. Second, they ask the user to do something that the sender, whoever they are pretending to be, has never asked the user to do before.

“It may be a valid request,” he continued, “but all users should be taught that any message with those two traits is at very high risk of being a social-engineering attack and should be verified using a reliable method, such as calling that person directly on a known-good phone number.”

“If more organizations taught those two things to remember,” he said, “the online world would be a much safer place to compute.”

Fake social media accounts are usually associated with bot networks, but some research released Tuesday showed that many social media users are creating fake accounts of their own for a variety of reasons.

According to a survey of 1,500 US social media users conducted by USCasinos.com, one in three US social media users have multiple accounts on the social media platforms they use. About half (48%) of people with multiple accounts have two or more additional accounts.

Reasons for creating additional accounts vary, but the most commonly cited are “sharing my thoughts without judgment” (41%) and “spying on someone else’s profile” (38%).

Other motives behind creating fake accounts include “increasing my chances of winning an online contest” (13%), “increasing likes, followers and other metrics on my real account” (5%), fooling others (2.6%), and scamming others (0.4%).

When asked where they were creating their fake accounts, respondents most often named Twitter (41%), followed by Facebook (31%) and Instagram (28%). “That’s because Twitter is pretty much open by default,” said Will Duffield, a policy analyst at the Cato Institute, a Washington, DC think tank.

“Twitter power users will often have multiple accounts: one for a mass audience and others for smaller groups, one that is public and one that is private,” he told TechNewsWorld.

Infographic explains where US residents create fake social media accounts

Infographic Credit: USCasinos.com


Twitter prompted the research by the online casino directory site, noted study co-author Ines Ferreira. “We started this study primarily because of the discussions about Elon Musk and the Twitter deal,” she told TechNewsWorld.

That deal is currently tied up in the courts and hinges on a dispute between Musk and the Twitter board over the number of fake accounts on the platform.

swapping sexes to spy

The types of fake accounts in the study, however, differ from the ones at issue for Musk. “The survey tackles two completely different issues,” Duffield said.

“On the one hand, you have automated accounts, things operated by machines and often used for spamming. This is the kind of fake account that Elon Musk alleges Twitter has too many of,” he told TechNewsWorld. “On the other hand, there are pseudonymous accounts, which are what is being surveyed here. They are operated by users who do not wish to use their real names.”

The survey also found that most users kept their own gender (80.9%) when creating fake accounts. The main exception to that practice, the survey noted, is when users want to spy on other accounts; then they favor creating a fake account of the opposite sex. Overall, about one in 10 (13.1%) of those surveyed said they used the opposite sex when creating fake accounts.

Infographic reveals how many fake social media accounts owners own

Infographic Credit: USCasinos.com


“There are a number of reasons why we don’t want everything we do online to be associated with our real name,” Duffield said. “And it doesn’t necessarily have to be cancel culture or anything like that.”

“One of the great things about the Internet is that it allows us to explore identities without committing ourselves, or to try on new personas so we can showcase one aspect of ourselves at a time,” he explained.

“It is absolutely normal for people to use pseudonyms online. If anything, using real names is the more contemporary expectation,” he said.

Accounts created with impunity

The study also found that most fake-account creators (53.3%) prefer to keep the practice a secret from their inner circle of acquaintances. When they did reveal their fake accounts, they were most likely to tell friends (29.9%), followed by family (9.9%) and partners (7.7%).

The researchers also found that more than half of the owners of fake accounts (53.3%) were millennials, while Gen X had an average of three fake accounts and Gen Z had an average of two.

According to the study, the creators of fake accounts do so largely with impunity. When asked whether their fake accounts had ever been reported on the platforms where they were created, 94% of the participants responded negatively.

Infographic describing platforms where fake social media accounts have been reported

Infographic Credit: USCasinos.com


“Even as these platforms release new algorithms to catch these accounts, most of them are never reported,” Ferreira said. “There are so many fake accounts, and you can create them so easily, that it’s really hard to identify them all.”

“After Elon Musk’s deal with Twitter, these platforms are going to be thinking a little bit more about how they’re going to do it,” she said.

However, Duffield downplayed the need for platforms to police such fake accounts. “Creating these accounts is not against the platform rules, so there is no reason for the platforms to consider them a problem,” he said.

“Since these accounts are operated by real people, even though they do not carry real names, they act like real people,” he continued. “They’re messaging one person at a time. They’re taking the time to type things out. They have a typical day/night cycle. They’re not sending messages to a hundred different people at once, at all hours of the day, or firing off a thousand messages.”

harmless fake?

Duffield stressed that unlike fake accounts created by bots, fake accounts created by users are less harmful to the platforms hosting them.

“There is a theory that people are more abusive when using a pseudonymous account or one not tied to their real identity, but from a moderation perspective, banning a pseudonymous account is no different from banning a real person,” he observed.

“Facebook has had a real-name policy, although it has received a lot of criticism over the years,” he said. “I’d say it’s intentionally under-enforced at this point.”

“As long as the pseudonymous account is complying with the rules, this is not a problem for the platforms,” he said.

While bot accounts do not contribute to the social media platform’s business model, fake user accounts do.

“If the pseudonymous account is being used by a real human being, they are still seeing the ads,” Duffield explained. “It’s not like a bot clicking on things without a human being involved. Regardless of the name on the account, if the person is seeing the contextual ads being shown, from a platform standpoint it’s not really a problem.”

“Activity is reflected in monthly active user statistics, which is what the platform, advertisers and potential buyers care about,” he continued. “The total number of accounts is a useless statistic because people constantly drop accounts.”

Still, Ferreira argued that any form of fake account undermines the credibility of social media platforms. “At some point,” she said, “there are going to be more fake users than real users, so they need to do something about that now.”

Amazon filed a lawsuit on Tuesday against the administrators of more than 10,000 Facebook groups, accusing them of being part of a broker network that churns out fake product reviews.

In its lawsuit, Amazon alleges that administrators attempted to organize the placement of fake reviews on Amazon in exchange for money or free products. It said groups have been set up in the United States, United Kingdom, Germany, France, Italy, Spain and Japan to recruit people to write fake reviews on Amazon’s online store.

Amazon said in a statement posted online that it would use the information found through the lawsuit to identify bad actors and remove the reviews they commissioned from the retail website.

“Our team intercepts millions of suspicious reviews before they are ever seen by customers, and this lawsuit goes a step further to uncover perpetrators operating on social media,” Dharmesh Mehta, Amazon’s vice president of selling partner services, said in the statement. “Proactive legal action targeting bad actors is one of many ways we protect customers by holding bad actors accountable.”

against meta policy

Meta, which owns Facebook, condemned the groups for setting up fake review mills on its infrastructure. “Groups that solicit or encourage fake reviews violate our policies and are removed,” Meta spokeswoman Jen Riding said in a statement to TechNewsWorld.

“We are working with Amazon on this matter and will continue to partner across the industry to address spam and fake reviews,” she said.

According to Meta, it has already removed most of the fraud groups cited in Amazon’s lawsuit and is actively investigating others for violating the company’s policy against fraud and deception.

It noted that it has introduced a number of tools to remove infringing content from its service, tools that use artificial intelligence, machine learning, and computer vision to analyze specific instances of rule-breaking content and to identify patterns of abuse across the platform.

Is Facebook doing enough?

Rocio Concha, director of policy and advocacy at Which?, a consumer advocacy group in the UK, praised Amazon’s action but questioned whether Facebook was doing enough to prevent abuse of its platform.

“It is positive that Amazon has taken legal action against some of the fake review brokers operating on Facebook, which is a problem our investigations have uncovered time and again,” she said in a statement. “However, it raises a big question mark over Facebook’s proactive efforts to crack down on fake review agents and protect consumers.”

“Facebook needs to explain why this activity is so prevalent, and the [U.K.] Competition and Markets Authority (CMA) must challenge the company to show that the action it is taking is effective,” she continued. “Otherwise, it should consider stronger action against the platform.”

“The government has announced that it plans to give stronger powers to the CMA to protect consumers from the avalanche of fake reviews,” she said. “These digital markets, competition and consumer reforms should be legislated as a priority.”

In 2019, Which? released a report estimating that 250,000 hotel reviews on the Tripadvisor website were fake. Tripadvisor dismissed that report’s analysis as “simplistic,” but in its own “Transparency” report a year later, the site found nearly one million, or 3.6%, of its reviews were fake.

no time for deep dives

“Most consumers don’t have time to dig deep into reviews,” said Ross Rubin, principal analyst at Reticle Research, a consumer technology advisory firm in New York City.

“They take star ratings as a way to build trust in a product and if people are being compensated for posting fake reviews, it undermines trust in reviews,” he told TechNewsWorld.

“Fake reviews not only encourage consumers to buy a substandard product, but they also make it more difficult to differentiate between products,” he said.

“If you have an overwhelming number of products in a category with four-and-a-half or five-star reviews because many of them are participating in these fake review programs, the value of the reviews themselves is diminished,” he explained.

He acknowledged that fake reviews were a problem everywhere on the Internet. “But,” he continued, “because Amazon has such a strong position in online retailing and is often the first website consumers visit, it is disproportionately targeted by these fake review groups.”

Review mills also use bots to pad product reviews, but Rubin said the technology lacks the effectiveness of a human. “The reason these groups are using people instead of bots is that bots are easier to detect,” he said. “Amazon uses machine-learning techniques to identify when companies are using bots.”
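Amazon has not disclosed how its detection works, but one commonly cited class of signal is posting cadence: automated accounts tend to publish reviews in machine-like bursts. The heuristic below is a simplified, hypothetical illustration of that idea; the thresholds and sample data are invented:

```python
# Hypothetical burst-detection heuristic: flag a reviewer whose
# consecutive reviews are posted suspiciously close together.
# Thresholds and example timestamps are invented for illustration.
def looks_automated(timestamps, min_gap_seconds=60, burst_fraction=0.5):
    """Flag a reviewer if most gaps between reviews are under min_gap_seconds."""
    if len(timestamps) < 3:
        return False  # too few reviews to judge
    ts = sorted(timestamps)
    gaps = [b - a for a, b in zip(ts, ts[1:])]
    short = sum(1 for g in gaps if g < min_gap_seconds)
    return short / len(gaps) >= burst_fraction

human_like = [0, 86_400, 200_000, 420_000]  # reviews days apart
bot_like = [0, 10, 25, 31, 50]              # reviews seconds apart
print(looks_automated(human_like), looks_automated(bot_like))  # False True
```

Real systems combine many such signals (text similarity, account age, purchase history) in machine-learning models; a single timing rule like this is trivially evaded, which is one reason review mills recruit humans instead.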

‘Pervasive’ review manipulation

In a report released last year by Uberall, an online and offline customer experience platform, review manipulation on Amazon was termed “pervasive.”

Amazon claims that only 1% of the reviews on its site are fake, but the report disputed that, citing a 2018 analysis by Fakespot that found high rates of fake reviews in certain product categories, such as nutritional supplements (64%), beauty (63%), electronics (61%), and athletic sneakers (59%).

“Even if we reduce these numbers by 50%, there will still be a gap between what Amazon and Fakespot report,” Uberall’s report said.

What can be done to curb fake reviews?

Uberall points out that Amazon and some others use a “Verified Buyer” label to indicate high trust in a review. “It is an approach that needs to be used more widely,” it noted, “though it is not foolproof, as Amazon has discovered.”

“Despite specific anti-fraud mechanisms,” it continued, “fake reviews are a problem that needs to be addressed more systematically and vigorously.”

The paths the report identifies for addressing the problem include using more technical sophistication and aggressive enforcement to bring review fraud down to low single digits, adopting a review framework that is structurally difficult to defraud, and allowing only genuine, verified buyers to write reviews.

“These are not mutually exclusive approaches,” it explained. “They can and should be used in conjunction with each other.”

“With online reviews there is a huge amount at stake for businesses of all sizes,” the report said. “More and better reviews directly translate into online visibility, brand equity and revenue. This creates powerful incentives for businesses to pursue positive reviews and suppress or remove negative reviews.”