- The complicated relationship between states and Big Tech
- Buffalo killings force content moderation back into the spotlight
- Is the future of content, and its moderation, a decentralized one?
Techno-Nationalism vs. Techno-Globalism
Here’s an interesting exploration of the complex interplay between techno-nationalism—how governments protect and emphasize the contributions of their Big Tech players as uniquely valuable—and the growing “techno-globalism” of those Big Tech players themselves. Examples of the latter would be, say, Microsoft collaborating on artificial intelligence with Chinese universities, or the snaking submarine cable networks of Facebook and Google. The authors of the piece (Cecilia Rikap, lecturer in international political economy at City, University of London, and Bengt-Åke Lundvall, professor emeritus at Aalborg University) conclude that end users are the ultimate losers. This shifting landscape of ever-changing interests, priorities, and alliances leaves a growing global divide, they argue.
Beneath that, a much darker game is being played, in which the very web that links us together has become a battlefield. The war in Ukraine has demonstrated how cyberwarfare has become a key weapon in a state’s armory, and how imperative it is to defend against it. Russia’s limited success in its attacks on Ukraine’s digital infrastructure, at least for now, is likely due in part to support from the U.S. in the months before the war. The conflict has also highlighted how cyberwar lacks rules of engagement. In the absence of such norms, Max Smeets, of the Center for Security Studies at ETH Zurich, writes that the United Kingdom has taken it upon itself to define what a responsible cyberpower is.
Domestically the battleground is a civilian one: How much of our data, for example, should Big Tech be required to surrender to a government? The Electronic Frontier Foundation noted that in the U.S., geofence warrants and reverse keyword warrants, which allow police to compel companies to identify the digital devices within a certain geographic area over a given time period or the identities of those who search online for a specific term in a specific area, are so invasive “even Big Tech wants to ban them.”
Who Watches the Content Maker?
For sure, Big Tech may not see any gain in submitting to such laws, especially when a recent Pew Research Center survey found that interest in more tech regulation among Americans has declined in the past year. But perhaps there’s a more urgent battle to be fought, one that has no easy answers, either in technology or regulation: Who watches over the content makers?
Big Tech is in large part about content. Content is sometimes what is being bought, or it is the lure that keeps users scrolling long enough to see ads, reveal their tastes, and indicate their intent. But what happens when the content is deemed offensive, illegal, or dangerous? For some platforms the answer is simple enough. Netflix, among the largest companies creating and commissioning content, has been wrestling with whether its own employees have a say in what it produces or doesn’t produce. The company has just told staff to quit if they don’t want to work on content they disagree with, according to Business Insider.
But the problem gets thornier with user-generated content. Who is responsible for moderating that content, who is responsible for removing it, and how would that work? These remain complex questions. There are “dozens of Facebook groups intentionally spreading Islamophobia” over which “Meta refuses to act,” according to Vice, citing a report by the Center for Countering Digital Hate. The danger is clear, and as platforms compete and diversify with shifting trends, new challenges arise that do not submit to algorithms and tidy rules. The accused Buffalo supermarket gunman is believed to have written a manifesto inspired by the forum 4chan, posted what appeared to be a to-do list ahead of the attack on the messaging platform Discord, and broadcast his attack live on Twitch, which is owned by Amazon. The New York Times explores the problems and questions facing social platforms in light of this latest attack. The New York attorney general’s office has launched an investigation into all three platforms.
Policing Content at the Edge
At the heart of the issue lies a question that may have an impact on the future of Big Tech: Are bigger or smaller platforms better equipped for dealing with the problems of policing content and content creators?
Meta has to some extent bypassed the problem by assigning responsibility to the independent builders and managers of groups who use the platform. In the Buffalo case, Discord has argued that the killer’s writings were viewable by only a small group of people, and only 30 minutes before the attack. Discord has over the past few years used AI and other tools to recover from its damaging association with right-wing groups. Elsewhere there has long been interest in exploring a more decentralized approach to building and managing communities. Entrepreneur Sangeet Paul Choudary, for example, talks about the “dark side of the platform economy” and the need to distribute the functions of creation and moderation more widely. The third and final chapter of his series of blog posts on his “Building Blocks Thesis” concludes that at least some of the Big Tech platforms could better serve their users through a more decentralized approach, one that allows the ecosystem itself to solve problems for consumers.
Some have already reached that conclusion: Twitter executives, according to The New York Times, “believe that decentralizing the social media service will radically shift online power, moving it into the hands of users.”
Jeremy Wagstaff, formerly a technology journalist with Reuters, now works as a writer and consultant. Past clients have included Microsoft, Google, Cisco, Samsung and Facebook. He has no current clients among, or financial interest in, any companies in the Fortune 500.