
Mark As Read

Dall-E, Data Protection for Teens, and Alphabet Criticism

Tech and privacy perspectives from around the globe

Illustration: square vignettes of a pink globe, layered with pixelated eyes and cursors. (Gabriel Hongsdusit)

This week:

  • AI can create an image based on words, but there’s anxiety about AI’s role in the workplace
  • Regulators are getting better at protecting the young from technology, but there’s still much to do
  • Big Tech is withdrawing some of its services in China, and Google is facing heat for its data center plans in Saudi Arabia

Artificial Intelligence and Humans

OpenAI, an AI lab co-founded by Elon Musk and partially funded by Microsoft, is making waves as it expands access to its Generative Pre-trained Transformer 3 (GPT-3), a language predictor that uses deep learning to produce convincingly humanlike text. In April, The New York Times asked, “A.I. Is Mastering Language. Should We Trust What It Says?” OpenAI has also trained a version of GPT-3 to interpret text and convert it into images, with similarly eye-opening results. The company’s Dall-E neural network creates images in response to short text prompts (“Rabbit prison warden, digital art,” for example). Author and journalist Alex Kantrowitz explores the possible use cases—and ramifications—of the tool in a recent blog post: “Face to Face With Dall-E, the AI Artist That Might Change the World.” (Here are some of Kantrowitz’s “creations” in response to prompts from his Twitter feed.)

Elsewhere in AI, privacy experts are asking questions about AI’s potentially negative impact on humans. Sebastião Barros Vale, E.U. policy counsel at the Future of Privacy Forum, recently wrote up his thoughts on the Computers, Privacy and Data Protection (CPDP) conference held in Brussels in late May. (His live-tweeting of the conference can be found here; The Markup has written about the Future of Privacy Forum here.) Discussions covered AI’s impact on people, from the exploitation of vulnerable individuals to the increasingly topical use of AI to monitor workplaces.

The latter issue is also on the mind of David Heinemeier Hansson, a co-founder of Basecamp (itself an early adopter of remote work), who blogs about what he calls the “insecurities and paranoia” of managers adopting employee surveillance software, “which risks turning a working arrangement that should be high on autonomy, flexibility, and creativity into one dominated by suspicion, anxiety, and dread.” While many tech workers have made clear their determination to continue working from home, Hansson wonders whether that will hold as layoffs spread across the tech industry.


Child’s Play

Ireland’s Data Protection Commission has published three short guides designed to raise “awareness among adolescents about data protection and their privacy rights.” The aim is not only to detail children’s rights but to present them in a digestible way. In February, Sens. Richard Blumenthal and Marsha Blackburn introduced the Kids Online Safety Act in the U.S. Senate, a bill that would require companies to, among other things, act in the best interests of minors using their services. The Electronic Frontier Foundation has criticized the bill as overreach, arguing it would force companies to spy on children and to “over-censor to attempt to steer clear of the new liability risks.”

The U.K. was an early mover in requiring Big Tech to boost teens’ privacy, introducing the Age Appropriate Design Code last year. A long-delayed online safety bill, which proposes “significant fines for companies that fail to deal with online abuse as well as possible criminal prosecution for executives,” passed its first reading in the House of Commons in April. The U.K. media regulator Ofcom, in its “Online Nation” report, found that a third of children between the ages of 8 and 15 who go online “have seen worrying or upsetting content online in the past 12 months.” Children also face privacy challenges from their increasingly internet-connected toys, according to The Regulatory Review, which argues that despite some action by the Federal Trade Commission (FTC) under the existing Children’s Online Privacy Protection Act of 1998 (COPPA), there is “room for significant regulatory action to protect children at play.”

At the same time, Big Tech is taking steps to use its technology to better protect vulnerable users, or at least to prevent its exploitation by those who intend harm. Apple’s latest effort is Safety Check, a feature in the new version of its mobile operating system designed to aid people in abusive relationships. It lets users review who has access to their passwords and information and revoke that access, making it easier to cut ties with an abusive partner. But as Apple has learned the hard way with AirTags, which have been used to locate and track people without their knowledge, every technology can be leveraged in ways its creator may not have considered.


Big Tech and a Global Footprint

Several major U.S. tech companies have scaled back operations in China in recent months. Microsoft withdrew LinkedIn’s global platform from China last year, blaming “greater compliance requirements”; LinkedIn had been the only major Western social networking platform in China for nearly seven years. Other Big Tech companies have followed suit, citing commercial setbacks as the reason for dropping some services. Amazon will close its Kindle e-book service in China next year in what appears to be a purely commercial move driven by declining interest in dedicated e-reading devices. That announcement came a week after Airbnb said in late May that it would remove its listings in China, blaming the pandemic.

Google, which has a very limited presence in China, faces criticism elsewhere. Human rights group Amnesty International and others recently asked Alphabet shareholders to demand that the company “uphold Google’s human rights commitments” over plans to build a Google Cloud data center in Saudi Arabia. A proposal raised at Alphabet’s annual general meeting (1:07:10) in early June called for a “human rights report on the siting of data centers in countries with human rights abuses.” The proposal was voted down, along with more than a dozen others dealing with the environment, diversity, privacy, algorithm disclosures, and individual rights. Ranking Digital Rights, an independent research program founded by internet freedom advocate Rebecca MacKinnon, last month called on the U.S. Securities and Exchange Commission to “break down barriers to shareholder advocacy on human rights,” singling out the multi-class share structures used by some Big Tech companies, which it says “undermine investors’ ability to address corporate wrongdoing.”


Jeremy Wagstaff, formerly a technology journalist with Reuters and columnist with The Wall Street Journal, now works as a writer and consultant. Past clients have included Microsoft, Google, Cisco, Samsung, and Facebook. He has no current clients among, or financial interest in, any companies in the Fortune 500.
