A word (or two) on platform power and responsibility

David Kaye
4 min read · Jan 14, 2021


When it comes to the power of social media companies, we seem to live in a Manichaean world, according to which the most dominant platforms either have State-like obligations or none at all. The legal regimes in the United States and Europe encourage that kind of discourse. Section 230 famously (or infamously, depending on your take) immunizes the companies from liability for the content they host while encouraging them to regulate expression without fear of lawsuit. The E-Commerce Directive in Europe promotes a similar kind of immunity, with some variation at the margins and in the implementation in domestic courts across the continent.

Either/or: if the companies have no fear of lawsuits, then, well, they must have no responsibility to monitor harmful content (however ‘harm’ is defined), or protect users and the public from incitement to violence, harassment, and so forth. We don’t seem to have a vocabulary that helps us navigate beyond the ‘must’ of law, or between the obligation of public directives and the discretion of self-regulation. We are stuck in a black-and-white world.

This is an unfortunate place to be, and we do not have to stay here. Whether we are assessing the present moment (what evelyn douek has called Trump's "great deplatforming") or considering the regulatory future, the choice should not be between state content regulation and no regulation at all. Jason Pielemeier and I recently wrote about that possible regulatory future here. In the meantime, we can move the conversation toward reasserting democratic principles without inviting government control over speech.

International human rights law has been grappling with this problem for decades. (If you didn’t think I would raise human rights law, then either you don’t follow me or you are not Daphne Keller.) Namely:

What are the responsibilities of private companies in jurisdictions where they have no domestic legal obligations?

In 2011, the Human Rights Council — the United Nations’ central human rights body — adopted the UN Guiding Principles on Business and Human Rights. The Guiding Principles propose a framework of state protection, company respect, and individual access to remedy. They emphasize that governments have an obligation to promote an environment in which companies protect human rights — which is helpful when thinking about why government pressure on companies to restrict what individuals may post can be so problematic.

But what I want to highlight is how the Guiding Principles propose a framework for thinking about business responsibility to respect human rights in the absence of legal obligation. In a 2018 report to the Human Rights Council, I urged the companies to implement the Guiding Principles and, in doing so, to take steps that would open themselves to greater public oversight. Among other things, I urged that they:

  1. Integrate human rights into their content standards.
  2. Develop strong human rights impact assessments for all products and policy decisions.
  3. Provide better clarity and specificity in their rules.
  4. Be much more transparent about how they implement those rules, including by sharing the factors they assess in determining what action to take against a problematic account. Beyond that, I urged that they adopt what I called decisional transparency, creating a kind of case law around content decisions that researchers and others can access.
  5. Develop industry-wide oversight. Many specialists in freedom of expression are rightly worried about handing government the tools to determine what speech is legitimate and illegitimate online. And yet there is justifiable anxiety that these massive companies exercise enormous power over public discourse. That power should be subject not only to the sunlight of transparency but also to a kind of public grievance mechanism. ARTICLE 19 (disclosure: I am on its Board) has pioneered the idea of "social media councils," which I touched on in that 2018 report.

I wrote about all this, and a bit more, in my book on the subject of global content moderation. You can find it here.

Meanwhile, in the regulation-free zone of the moment, we can be less abstract about what companies should do and root the conversation in existing, widely embraced global principles of business and human rights. (They are not perfect, but that's for another conversation.) Regulation should ultimately convert the shoulds of the UN Guiding Principles into musts, into legal obligation. Until then, nothing prevents the companies, working with civil society, from making all this happen and doing their part to improve public oversight of their platforms. And I will just close there.



David Kaye

Teach law at UC Irvine, former UN Special Rapporteur on freedom of expression, author of Speech Police: The Global Struggle to Govern the Internet.