The social media boost to the Silicon Valley Bank run sent shock waves throughout the US banking industry, according to a 53-page report released last week by a group of university professors.

In their study, the researchers used Twitter data to show that the SVB failure was preceded by a large spike in public communication on Twitter by apparent depositors, who used the platform to discuss the bank’s troubles and, more importantly, their intention to withdraw their deposits from SVB.

The openness and speed of this coordination around a bank run is unprecedented, the researchers said.

Mark T. Williams, master lecturer in finance at the Questrom School of Business at Boston University, explained that before the advent of social media, bank runs unfolded slowly because individuals communicated through much slower methods, such as mail, phone, or word of mouth.

He told TechNewsWorld, “The impact of influencer tweets on the speed and size of the SVB bank run demonstrates how much social media has accelerated the speed and reach of communication.”

“SVB failed due to poor risk management and a crypto contagion spreading throughout the industry,” he continued. “What Twitter did was hasten the process of failure.”

“When influencers can touch so many people so quickly, it’s dangerous,” he said. “They can move the stock price or the value and stability of the company.”

“But Twitter did not cause the failure of SVB,” he said. “SVB caused it. Twitter forced it.”

Unique Risk Channel

The social-media-fueled run on SVB has serious implications for the banking industry, according to the researchers: J. Anthony Cookson of the University of Colorado, Corbin Fox of James Madison University, Javier Gil-Bazo of Universitat Pompeu Fabra, and Juan F. Imbet of Paris Dauphine University.

The researchers noted that Silicon Valley Bank faced a novel channel of run risk unique to the social media era.

“SVB depositors who were active on social media played a central role in the run on the bank,” the researchers wrote. “These depositors were concentrated and highly networked through the venture capital industry and founder networks on Twitter, exacerbating bank run risk.”

More importantly, they continued, SVB is not the only bank to face this novel risk channel: open communication by depositors via social media has increased the run risk of other banks exposed to such discussions on social media.

“When information travels faster, people can run to the bank faster,” said Will Duffield, a policy analyst at the Cato Institute, a Washington, DC think tank.

However, trying to regulate that information is not a good solution to the problem, he said.

“You want efficient markets. You want people to share information about the health of different firms,” he told TechNewsWorld. “I can’t see the First Amendment tolerating regulation.”

Social Media Passes

Duffield said social media platform operators are not in a position to solve the problem.

“I don’t think social media is in a position to make such calls,” he said. “If you’re Twitter, you don’t know if a bank is solvent. You can’t see their balance sheet.”

“You can suppress any claim that the bank is insolvent,” he continued, “but then you prevent a lot of people from knowing that the bank is in fact insolvent and that they should try to get their money out of it.”

“When a rumor is doing the rounds, social media is in no position to verify its veracity,” he added.

Cookson agreed. “There’s not much that social media outlets can do,” he told TechNewsWorld.

“I don’t think of our paper as a call to action on social media because regulating what users can post, or interrupting communication, seems off limits, even when it is associated with significant real effects,” he explained.

“I don’t think it’s possible to regulate social media,” said Vincent Raynauld, an assistant professor in the Department of Communication Studies at Emerson College in Boston.

“Any attempt to do so would be viewed as an attack on an individual’s right to express themselves,” he told TechNewsWorld.

Dangerous Groups

Mark N. Vena, president and principal analyst at SmartTech Research in San Jose, California, acknowledged that market vulnerabilities certainly exist when social media posts run amok and cause bank runs or even push stocks higher or lower.

However, he added that since social media posts are a form of communication, he doubts that “normal” posts can be regulated in a meaningful way to prevent these actions from occurring.

He told TechNewsWorld, “I could see that company executives and individuals who own shares in the stock could be prevented from making insider-related posts, but existing laws and regulations already cover that, and there are serious legal consequences for individuals who disclose insider information.”

“Where the real danger exists is if groups of individuals come together to create and promote posts that collectively have a stronger effect than if the individuals in the group posted on their own,” he said.

“If the information is intentionally misleading, creating market distortion so that someone can take advantage, then there may be an opportunity for some regulatory work around that,” he added.

Avoiding White-Knuckle Banking

Cookson noted that even in the absence of action by bank regulators to curb the accelerating effect of social media on bank runs, there is much that banks can do to curb runs on their deposits.

“Our results show that social media amplifies existing bank run risks, such as a large percentage of uninsured deposits, so one key change we could see is that banks begin to manage their deposit risks more carefully, because social media and digital banking make it riskier to rely on uninsured deposits,” he said.

Duffield said the Federal Reserve’s bailout procedures could also be improved. For example, he pointed out that there is a 4 p.m. daily cut-off for transfers, even though banking now operates in a world of real-time, global electronic transfers.

“The lenders of last resort in our system need to take a good look at how they can keep pace with the digital world,” he added. “These mechanisms may have worked fine in the 1970s and 1980s when everyone stopped trading at 4 p.m., but now everything moves much faster.”

“That’s a huge shortcoming that has been exposed by all this,” he said. “There’s just a mismatch in speed between the withdrawal side and the backstop side.”

Another lesson learned from the SVB debacle is the difference between East Coast and West Coast banking cultures.

“West Coast capital culture is young,” Duffield said. “What we saw with Silicon Valley Bank was the downside of that: the kind of trust that develops over a very long time wasn’t there when depositors ran for the exits.”

Microsoft just announced that it’s putting generative AI into Windows 11, but we’re still at the beginning of the changes this technology will usher in.

Some jobs will become easier, some more valuable, and many will become obsolete. No matter what you do for a living, there’s a good chance AI, especially generative AI, will make a significant impact on both what you do and how you do it.

Such changes create tremendous opportunities and great threats. There will be risks associated with moving to this technology too early or too late. Like the introduction of computers, generative AI is a force multiplier, meaning those who know how to use it effectively will increase their value, and those who don’t will be unable to compete.

Let’s take a look at five areas that will be dramatically changed by the influx of generative AI, some potentially for the worse. Then we’ll close with our product of the week: the first truly wireless security camera – with wireless broadcast power – to hit the market.


News

The issue with news is twofold.

First, Generative AI can write news articles, but it will do so using information it can access. As the population of professional news reporters dwindles, citizen-sourced content will increase in relative percentage, and that content has proven relatively unreliable over time.

The AI can’t just go out into the field to watch events; it merely interprets or repeats what others have observed. It’s a tool used to punch up a story or make it easier to read, but it can also create stories, and this is where the second point becomes a problem. I believe this can be addressed algorithmically, but if the motivation is revenue and not accuracy, then the trend, which some would argue is already troubling, could worsen.

Another news-related issue is that AI will play an important curating role in journalism. However, the AI will prioritize site traffic and try to please users, just as you’ve seen your social media feeds modified to keep your interest. Stories you may need to know about may give way to stories you enjoy more, because the AI, in its effort to please you, will favor pleasing stories regardless of their validity, and may even make up stories to feed you.

To be fair, no news organization will survive if it carries content that people don’t want to see. However, ensuring that this pivot does not disconnect users from reality will be a challenge. As suggested in a 2016 book by Lance Elliott, the fix could be an “AI guardian angel” to ensure that your best interests are always protected. The AI guardian angel idea has also been proposed more broadly as protection against the potential emergence of hostile AI.


Books

We have seen that generative AI can write books. We also know that, without much oversight, those books are bad.

You still need to define the characters, build the world, and then create a way for the characters you’ve created to navigate that world so that it’s interesting to the reader. Now, readers can do that last part. For example, wouldn’t it be interesting if you could read an adventure book about how to survive in the world of Harry Potter, or how John Wick would interact with that world?

In the future, authors may define the world and characters, flesh them out completely, and then you’ll buy access to those worlds and characters to create a story uniquely yours that you can then resell or enjoy privately.

If you resell the results, some aspects of revenue-sharing will need to be worked out. Even so, most of us will probably only create content that we personally enjoy or share with close friends, minimizing the need for full licensing and monetizing the result.

For example, I’m currently hooked on LitRPG books, which are written in a video game universe where characters grow over time and progress through specified missions. These books are iterative, and too often, an author will stop writing a series before I’ve finished it or I’m ready to drop it.

With generative AI, I can not only change the parts of a book I don’t like and enhance the parts that I do, but I can also create sequels that wouldn’t otherwise have existed, which is helpful when an author dies. If done correctly, I would still pay the author or their estate for the privilege.

Some publications are already overwhelmed with generative AI content. While this is one of the initial pain points, it has already made ensuring the quality of written works a far more difficult task.

TV & Movies

The script part of the process will follow the book concept described above, but you can then turn to technologies like deepfakes and Nvidia’s Omniverse to flesh out the movie and create a relatively high-quality animated interim product that, for some, will be the final product and, for others, just another step toward assuring the quality of the result.

As this technology matures, your ability to go straight from script to final product will increase. The ability to make high-quality movies with elaborate special effects for a few bucks would also go up substantially. With streaming, a service like Amazon can either charge a subscription or charge a fee to produce exactly the movie you want to watch when you want to watch it.

There would be no need to wait a year or two for the sequel. You could watch a sequel every night of the year until you tire of the same plot. Much like YouTube content today, you could have your own series that other people can watch for a fee, and if done right, there will be revenue-sharing for all parts of the production.

The real question concerns theaters. It’s not clear how you would scale this customized experience to a large group unless it was interactive and the individuals in the group could exercise some control over the action. The result would be new content that could be redistributed via screenings, with other users left to decide whether they want to see the result.

The actors would license their appearances and create acting templates for a variety of characters which, for a fee, users would place in their streamed productions for personal use. At the same time, the theater may have a set troupe of pre-paid actors and a more defined space in which to place them where the audience can interact with and possibly guide their favorite characters.

An interesting aside: I’m sure we’ve all sat in movie theaters where someone is vocal about what the actor should or shouldn’t do. In this context, the virtual actor on the screen could react to those vocal comments: “Can the lady in the white dress in the third row calm down? I’m having trouble thinking!” or “Thank goodness you warned me not to go into that room. I would have died!” and so on.

AI can significantly enhance audience engagement and make the theater experience far more social than it is now. Granted, it can be annoying if done wrong, but turning the theater experience into a social event can make going to theaters a lot more fun than it currently is. Ever been to a “The Rocky Horror Picture Show” event? They are very funny.


Education

We generally do not all learn the same way. Some of us learn by doing, some by being shown, and some learn best from people they trust – and this is not an exhaustive list of our differences.

With generative AI and some prep work to determine how a child learns best, the curriculum can be tailored to those needs. A virtual teacher, which could take the form of the child’s parent, could then provide a customized classroom adapted to how the child learns best.

Learning pivots from one-to-many to one-to-one, and each child gets personalized attention from an AI that can accompany them home and help with homework virtually. This will be a huge improvement for home learners as the AI can dynamically change its approach to ensure the student understands the required material. Since AI is a machine, the risks of misuse, bias, prejudice and lack of adequate oversight are largely removed.

The goal will change from completing a course of study to transferring knowledge. Instead of being glorified babysitters, schools could re-evolve into places of education. Those who do not like to read can get an AI-generated representation of the course material using a mix of advanced videos and provided content.

Because the AI continuously monitors student behavior, problems like acting out and depression can be addressed more aggressively and flagged faster if a student begins to spiral into negative territory. You’ll still need oversight, but the teacher will be there to ensure the process is working and won’t need to teach so much as assure that the system performs as expected.

Course material can be as long or as short as required. If a student makes rapid progress, the curriculum will support that rapid progress. If a student struggles, the curriculum will slow down and bring in other resources to improve the student’s performance.

Wrapping Up

Generative AI will, to some extent, change what is around us. Initially, written content will see the most significant change, followed by short-form video content and, finally, commercial TV and film production. Most of this should happen within the next ten years, with written changes happening relatively quickly and film, TV and education advances coming later.

By the end of this transition, possibly in the mid-2030s, how we create and consume content will have changed dramatically. It will be far more customized and personal, and the consumers of the respective media will provide significant direction to the final product.

One of the problematic parts that will undoubtedly take some time is licensing the content associated with all this. If we don’t have a solid licensing program, the creative types we need to build the elements of this new AI world will be forced out of it for lack of payment, dramatically reducing the quality of the result.

The key way to get this right is to ensure a revenue model that keeps creators whole.

Tech Product of the Week

Archos Cota Wireless Power Security Camera


We live in a hostile world. Where I live, it seems like there are a lot of people who like to steal from others, which has become a serious problem.

I have 14 cameras around my house, but while on our last trip, one stopped working because the gardener accidentally cut its power cord. While wireless cameras are nothing new, getting power to a wireless camera can be a problem.

Well, last week, Ossia announced its Archos Cota Wireless Power Security Camera.

Provided the cameras are within 30 feet of the power hub, they will continue to work without being plugged in or in sight of each other. Data from the camera is Wi-Fi compatible, so you can hook it up to your company or home network (the camera’s target market is home and business).

Initially, these cameras will come in commercial bundles. I estimate the pricing of the cameras and bundles to be in line with how Archos prices its other cameras — figure in the $200-$300 range per camera.

Bundles depend on the size of the area you need to cover. Initially, those bundles come in two forms. For sites between 600 and 800 square feet, you get a Cota transmitter (for power) and three cameras. For sites of 800 to 1,200 square feet, you get double that.

I’m guessing the prices for the bundles will be around $1,200 and $2,400, with additional discounts likely to incentivize buying the larger bundle.

I think it would be incredibly useful to be able to put a camera in any location without having to think about how to power it. As a result, the new Archos Cota Wireless Power Security Camera by Ossia is my product of the week.

The opinions expressed in this article are those of the author and do not necessarily reflect the views of ECT News Network.

Sharing high-resolution media online could inadvertently expose sensitive biometric data, according to a report released Tuesday by a cybersecurity company.

This can be especially dangerous, the 75-page report by Trend Micro said, because people do not know they are exposing the information.

The report notes, for example, that the #EyeMakeup hashtag on Instagram, which has nearly 10 million posts, and the #EyeChallenge hashtag, with more than two billion views, expose iris patterns detailed enough to pass an iris scanner.

“By publicly sharing certain types of content on social media, we give malicious actors the opportunity to source our biometrics,” the report states. “By posting our voice messages, we expose voice patterns. By posting photo and video content, we expose our faces, retina, iris, ear shape patterns and, in some cases, palms and fingerprints.”

“Since such data may be publicly available, we have limited control over its distribution,” it added. “Therefore we do not know who has already accessed the data, nor do we know for how long or for what purposes the data will be kept.”

Not a Panacea

The report covers what types of biometric data can be exposed on social media and outlines more than two dozen attack scenarios.

“The report suggests that biometric identification is not a panacea,” said Will Duffield, a policy analyst at the Cato Institute, a Washington, DC-based think tank.

“As we design detection systems, we need to be aware of technologies coming down the pike and potential abuse in the real world,” he told TechNewsWorld.

“Trend Micro raises some valid concerns, but these concerns are not new to biometrics professionals,” Sami Alini, a biometrics specialist with Contrast Security, a maker of self-protecting software solutions in Los Altos, Calif., told TechNewsWorld.

He said there are several ways to attack a biometric system, including a “presentation” attack described by the report, which substitutes a photo or other object for the biometric element.

To counter this, he continued, “liveness” must be determined to ensure that the biometric presented is that of a living person and not a “replay” of a previously captured biometric.

Avi Turgman, CEO and co-founder of IronVest, an account and identity security company in New York City, agreed that liveness detection is one key to thwarting attacks on biometric security.

“The Trend Micro report raises concerns about fraudulent biometrics created through social media content,” he told TechNewsWorld. “The real secret to fraud-proof biometrics is liveness detection, something that cannot be recreated from images and videos collected on social media.”

One Factor Not Enough

Even when tested for liveness, biometrics can still be easy to bypass, a security awareness advocate at KnowBe4, a security awareness training provider in Clearwater, Fla., maintained.

“Holding a phone in front of a person’s face while they sleep can unlock the device, especially when they use it with the default settings, and collecting fingerprints is not a difficult task,” he told TechNewsWorld.

“What is even more worrying is that once a biometric factor is compromised, it cannot be changed like a password,” he said. “You can’t change your fingerprints or facial structure if they’re compromised.”

If the Trend Micro report shows anything, it’s that multi-factor authentication is a necessity, even if one of those factors is biometric.

“When used as a single factor for authentication, it is important to note that biometrics may be subject to failure or manipulation by a malicious user, particularly when that biometric data is publicly available on social media,” said Darren Guccione, CEO of Keeper Security, a password management and online storage company based in Chicago.

“As the capabilities of malicious actors using voice or facial biometric authentication continue to grow, it is imperative that all users implement multiple factors of authentication and use strong, unique passwords for their accounts to limit the blast radius if an authentication method is compromised,” he told TechNewsWorld.

Metaverse Problems

“I don’t like to put all my eggs in one basket,” said Bill Malik, vice president of infrastructure strategies at Trend Micro. “Biometrics are nice and useful, but having an additional factor of authentication gives me more confidence.”

“For most applications, a biometric and a PIN are fine,” he told TechNewsWorld. “When a biometric is used alone, it’s really easy to fake.”

He stressed that the collection of biometric data will become an even greater problem when the metaverse becomes more popular.

“When you get into the metaverse, it’s going to get worse,” he said. “You’re putting on these $1,500 glasses that are designed not only to give you a realistic view of the world, but also to constantly monitor your subtle expressions to find out what you like and don’t like about the world you see.”

However, he is not concerned that the additional biometric data will be used by digital desperados to create deepfake clones. “Hackers are lazy, and they get everything they need with simple phishing attacks,” he declared. “So they’re not going to spend a lot of money on a supercomputer so they can clone someone.”

Device-Tied Biometrics

Another way to secure biometric authentication is to tie it to a piece of hardware. With a biometric enrolled on a specific device, it can only be used to authenticate the user with that device.

Reed McGinley-Stempel, co-founder and CEO of Stytch, a passwordless authentication company in San Francisco, said, “This is the way Apple’s and Google’s biometric products work today — it’s not just the biometric itself that gets checked when you use Face ID.”

“When you actually do a Face ID check on your iPhone, it checks that the current biometric check matches the biometric enrollment that’s stored in your device’s secure enclave,” he told TechNewsWorld.

“In this model,” he continued, “someone accessing your photos or fingerprints doesn’t help them unless they also control your physical device, which is a very steep hill to climb given the remote nature in which cyber attackers operate.”

Losing Control of Our Data

The Trend Micro report states that users are losing control over their data and its future uses, and that the average user may not be well aware of the risks posed by the platforms they use every day.

Data from social media networks is already being used by governments and even startups to extract biometrics and create identity models for surveillance cameras, it continued.

The fact that our biometric data cannot be changed means that in the future, such a wealth of data will be increasingly useful to criminals, it added.

Whether that future is five or 20 years away, the data is available now, it said. We owe it to our future selves to take precautions today to protect ourselves in tomorrow’s world.

The Trend Micro report, “Leaked Today, Exploited for Life: How Social Media Biometric Patterns Affect Your Future,” is available here in PDF format. No form needed to be filled out at the time of this publication.

Fake social media accounts are usually associated with bot networks, but research released Tuesday showed that many social media users are creating fake accounts of their own for a variety of reasons.

According to a survey of 1,500 US social media users conducted by USCasinos.com, one in three US social media users have multiple accounts on the social media platforms they use. About half (48%) of people with multiple accounts have two or more additional accounts.

Reasons for creating additional accounts vary, but the most commonly cited are “sharing my thoughts without judgment” (41%) and “spying on someone else’s profile” (38%).

Other motives behind creating fake accounts include “increasing my chances of winning an online contest” (13%), “increasing likes, followers and other metrics on my real account” (5%), fooling others (2.6%), and scamming others (0.4%).

When asked where they were creating their fake accounts, respondents most often named Twitter (41%), followed by Facebook (31%) and Instagram (28%). “That’s because Twitter is pretty much open by default,” said Will Duffield, a policy analyst at the Cato Institute, a Washington, DC think tank.

“Twitter power users will often have multiple accounts — one for a mass audience, others for smaller groups; one that is open by default, one that is private,” he told TechNewsWorld.

Infographic explains where US residents create fake social media accounts

Infographic Credit: USCasinos.com

Twitter prompted the research by the online casino directory site, noted study co-author Ines Ferreira. “We started this study primarily because of discussions about Elon Musk and the Twitter deal,” she told TechNewsWorld.

That deal is currently tied up in the courts and hinges on a dispute between Musk and the Twitter board over the number of fake accounts on the platform.

Gender-Swapping Spies

The types of fake accounts in the study, however, differ from the ones that confused Musk. “The survey tackles two completely different issues,” Duffield said.

“On the one hand, you have automated accounts – things operated by machines and often used for spamming. That’s the kind of fake account that Elon Musk alleges Twitter has too many of,” he told TechNewsWorld. “On the other hand, there are pseudonymous accounts, which are what’s being surveyed here. They are operated by users who do not wish to use their real names.”

The survey also found that most users retained their own gender (80.9%) when creating fake accounts. The main exception to that practice, the survey noted, is when users want to spy on other accounts; then they favor creating a fake account of the opposite sex. In all, 13.1% of those surveyed said they used the opposite sex when creating fake accounts.

Infographic reveals how many fake social media accounts owners own

Infographic Credit: USCasinos.com

“There are a number of reasons why we don’t want everything we do online to be associated with our real name,” Duffield said. “And it doesn’t necessarily have to be cancel culture or anything like that.”

“One of the great things about the internet is that it allows us to try on new personas without committing ourselves, so that we can showcase one aspect of ourselves at a time,” he explained.

“It is absolutely normal for people to use pseudonyms online. If anything, using real names is a more contemporary expectation,” he said.

Accounts created with impunity

The study also found that most fake account creators (53.3%) prefer to keep the practice secret from their inner circle of acquaintances. When they did mention their fake accounts, they were most likely to tell friends (29.9%), followed by family (9.9%) and partners (7.7%).

The researchers also found that more than half of the owners of fake accounts (53.3%) were millennials, while Gen X had an average of three fake accounts and Gen Z had an average of two.

According to the study, the creators of fake accounts largely do so with impunity. When asked whether their fake accounts had ever been reported on the platforms where they were created, 94% of the participants responded negatively.

Infographic describing platforms where fake social media accounts have been reported

Infographic Credit: USCasinos.com

“Even as these platforms release new algorithms to detect these accounts, most of them are never reported,” Ferreira said. “There are so many fake accounts, and you can create them so easily, it’s really hard to identify them all.”

“After Elon Musk’s deal with Twitter, these platforms are going to be thinking a little bit more about how they’re going to do it,” she said.

However, Duffield downplayed the need for users to police fake accounts. “Creating these accounts is not against the platform rules, so there is no reason for the platform to consider them a problem,” he said.

“Since these accounts are operated by real people, even though they do not have real names, they act like real people,” he continued. “They’re messaging one person at a time. They’re taking the time to type things out. They have a typical day/night cycle. They’re not sending a thousand messages to 100 different people at once at all hours of the day.”

Harmless Fakes?

Duffield stressed that unlike fake accounts created by bots, fake accounts created by users are less harmful to the platforms hosting them.

“There is a theory that people are more abusive when they are using a pseudonymous account or one that is not tied to their real identity, but from a moderation perspective, banning a pseudonymous account is no different from banning a real person,” he observed.

“Facebook has had a real-name policy, although it has received a lot of criticism over the years,” he said. “I’d say it’s intentionally under-enforced at this point.”

“As long as the pseudonymous account is complying with the rules, this is not a problem for the platforms,” he said.

While bot accounts do not contribute to the social media platform’s business model, fake user accounts do.

Duffield explained, “If the pseudonymous account is being used by a real human being, they are still seeing the ads. It’s not like a bot clicking on things without a human involved. Regardless of the name on the account, if they’re seeing the ads they’re being shown, from a platform standpoint, it’s not really a problem.”

“Activity is reflected in monthly active user statistics, which is what the platform, advertisers and potential buyers care about,” he continued. “The total number of accounts is a useless statistic because people constantly drop accounts.”

Still, Ferreira argued that any form of fake account undermines the credibility of social media platforms. “At some point,” she said, “there are going to be more fake users than real users, so they need to do something about that now.”