
As the Before Times started crumbling this spring, brought down by the revelation that the coronavirus was here to stay, the physical world gave way to an altered plane. Changes came quickly: Coffee shop chatter quieted down, hand sanitizer disappeared from shelves, and we even adopted a medical-grade hand-washing technique, thinking it might be enough to keep our bodies virus-free. And then everything shut down.

When quarantine hit, our physical lives were minimized. But our digital lives were ready for this moment, curated over the past decade, a period when social media firms became the largest companies in the world and the most dominant forces in our daily lives.

We logged on in 2020 because we actually needed social media this year. We were glued to our phones because, well, what else did we have left?

We had our close circles of family and friends, only some of whom we could see in person. But really, we just had our digital lives. According to eMarketer estimates, the average U.S. adult spent 23 additional minutes on their smartphone per day in 2020, including 11 more minutes per day on social media alone. Year over year for the third quarter of 2020, Pinterest's daily active user base rose 37%, Twitter's grew 29%, Snapchat's 18%, and Facebook's 12%.

As the economy shifted online, Silicon Valley firms saw their stock prices soar. But they also saw intense pushback from Washington, which peppered the tech sector with inquiries and lawsuits. Even Madison Avenue demanded that social platforms act more responsibly. 

Content moderation intensified

The Covid-19 pandemic brought with it a slew of health-related and political misinformation. In response, social platforms adopted stricter policies to protect their users, labeling and removing inaccurate posts and surfacing reliable information from trusted sources about the pandemic and the election. As it turns out, when access to accurate information is a life-and-death issue, social media companies take it more seriously.

When Twitter’s policies on misinformation and inciting violence were applied to President Donald Trump’s account in late May, and the company began labeling his tweets as rule-breaking or inaccurate, the entire industry reacted. Snapchat stopped promoting Trump’s account, Twitch suspended the president, Reddit banned hate speech, and Facebook was briefly boycotted by more than 1,000 advertisers over its failure to keep hate speech and misinformation off its platform.

Misinformation remains a prevalent issue on nearly every social media platform, but after years of insisting they are not “media companies,” tech companies—including Facebook—have finally taken a more hands-on role in policing their platforms. That said, while policies have changed, platforms are reluctant to share their data with researchers, so it’s difficult to quantify how much safer or better-informed their users are after these changes.

Washington cracked down

While Democrats and Republicans have different reasons for being skeptical of Silicon Valley, both parties have turned their ire toward Big Tech. In the social media space, Democrats largely think platforms aren’t doing enough to curb the spread of hate, extremism and misinformation, while Republicans baselessly allege that platforms have a liberal bias in how they moderate content. Either way, the techlash is in full swing in Washington.

Facebook’s Mark Zuckerberg, Google’s Sundar Pichai and Twitter’s Jack Dorsey have each testified before Congress in recent months about content moderation and, in the case of Zuckerberg and Pichai, allegations of anticompetitive behavior—the government has now brought multiple antitrust lawsuits against Facebook and Google. Facebook’s suit centers on its dominance of the social media space, while Google’s focuses on its dominance of search and digital advertising.
