AI, content moderation and tech giants
As the threat of Covid-19 grows, Facebook and Google are allowing their content moderators to work from home, and the pressure is now on their AI systems
Starting today, I will be writing one post focused on the global market and digital. This won’t be original reporting but a curated story.
This week I want to cover how AI has moved to center stage for content moderation, and the challenges it faces during the tough times of the coronavirus. Let me start with Facebook.
On Wednesday, Facebook announced a portal that aims to be a one-stop-shop for its more than 2.5 billion users to find news and resources about the novel coronavirus, something it said was a step in an effort to combat falsehoods and provide accurate information.
Additionally, the Facebook Journalism Project is partnering with the Lenfest Institute for Journalism and the Local Media Association (LMA) to offer a total of $1 million in grants to support the US and Canadian local news organizations covering the coronavirus. These grants will help fill immediate gaps for resource-constrained newsrooms covering the impact of the coronavirus in their communities.
Company CEO Mark Zuckerberg also said that Facebook would allow thousands of content moderators, who review banned content such as child pornography and terrorism, to work from home. The move, which could challenge the company’s efforts to moderate harmful content, comes after weeks of complaints from contractors, while full-time Facebook employees have been working from home for more than a month.
For a while now, Facebook has been feeling the heat over the notion of free speech. However, Facebook is taking a muscular approach in response to Covid-19. Instagram is removing false information associated with Covid-19 and replacing it with resources from the World Health Organization. WHO information is positioned on top of your Instagram feed.
Facebook has also announced a $100 million grant for small businesses on Facebook and is giving $1,000 to each of its employees to help cover costs during the pandemic.
Facebook is doing what it is supposed to do. But there are finer details in these big announcements. The $1,000 cash bonus Facebook is providing goes only to full-time employees, as first reported by The Intercept. When TechCrunch asked Facebook for a response, this is what the company had to add: “The $1,000 is for full-time employees who are working from home. For contract workers, we are sending them home and paying them in full even if they are unable to work, which is much more meaningful than a one-off payment.”
Facebook has an estimated 15,000 content moderators working through third-party contractors. Not everyone will have the option to work from home. For instance, workers in the Philippines, where Facebook and other tech giants employ thousands of moderators, will not be able to work from home.
Obviously, AI now takes center stage for content moderation, and there will be tremors before the machines and humans adapt to new habits. Earlier this week Facebook users noticed that links to legitimate news outlets and websites, including The Atlantic, USA Today, the Times of Israel, and BuzzFeed, among many others, were being taken down en masse for reportedly violating Facebook’s spam rules.
Facebook attributed the problem to a mundane bug in the platform’s automated spam filter, but some researchers and former Facebook employees worry it’s also a harbinger of what’s to come, Wired reported.
“Facebook sent home content moderators yesterday, who generally can't [work from home] due to privacy commitments the company has made. We might be seeing the start of the [machine learning] going nuts with less human oversight,” said Facebook’s former security chief Alex Stamos.
This isn’t a problem confined to Facebook. YouTube and Twitter have also announced that their contractors would be sent home and that they too would be relying more heavily on automated flagging tools and AI-powered review systems.
“I don’t even know how they are going to do this. [Facebook’s] human reviewers don’t get it right a lot of the time. They are amazingly bad still,” said Kate Klonick, a professor at St. John's University Law School and fellow at Yale’s Information Society Project, where she studies Facebook. But the automatic takedown systems are even worse. “There is going to be a lot of content that comes down incorrectly. It’s really kind of crazy.”
YouTube has already made it clear in a blog post that “users and creators may see increased video removals, including some videos that may not violate policies.” The site’s automated systems are so imprecise that YouTube said it would not be issuing strikes for uploading videos that violate its rules, “except in cases where we have high confidence that it’s violative.”
Google, YouTube's parent company, has ordered most of its nearly 120,000 employees to work remotely. Google has committed to paying contractors and hourly workers whose jobs are affected, and has also set up a fund to offer paid sick leave to anyone who can’t work because of coronavirus symptoms and quarantines.
Twitter has taken a few steps to remove tweets that put people at risk of contracting the novel coronavirus. The safety policy now prohibits tweets that “could place people at a higher risk of transmitting COVID-19.” The new policy bans tweets denying expert guidance on the virus, encouraging “fake or ineffective treatments, preventions, and diagnostic techniques” as well as tweets that mislead users by pretending to be from health authorities or experts.
However, Twitter decided not to take down a tweet from Elon Musk. “Kids are essentially immune, but elderly with existing conditions are vulnerable,” he tweeted. “Family gatherings with close contact between kids & grandparents probably most risky.”
Twitter said that the tweet did not violate the social media platform's new policies about the discussion of the disease, despite the fact that it contradicted information from the Centers for Disease Control and Prevention.
“Take down the wrong information or ban the wrong account and it ends up having repercussions for how people can speak—full stop—because they can't go to a literal public square,” said Klonick. “They have to go somewhere on the internet.”
Machine learning has its own challenges, and YouTube has faced the heat as well. The tech giants will obviously take a lot of heat, and they are well aware of it. But tough times also test you, and as I see it, these developments are signs of new habits forming, hopefully for a better future.
“Those who dare to fail miserably can achieve greatly,” said John F. Kennedy.