PlatGovNet, then and now

2021
2023
Since the onset of the COVID-19 pandemic, the global reliance on the services provided by a range of major online platform companies has skyrocketed. Online marketplaces, social networks, cloud providers, streaming services, and service delivery platforms all rake in record and increasing profits as they continue to embed themselves ever more deeply into public and private life. At the same time, dissatisfaction with the platform economy status quo is growing internationally. Across policy areas like content moderation, competition, labor law, and data protection, governments around the world are developing new rules to tackle troubling forms of outsized political, cultural, and infrastructural platform power.
2025
A changing mix of competing platform companies, faced with various efforts to regulate, influence, or control them and their offerings, has become an ever more central feature of many societies. Monolithic services begin to fracture and decentralised platform infrastructures emerge. Some governments assert their power and authority through the agenda of "digital sovereignty". New constellations of actors emerge, and tensions manifest across state, market, and civil society. We witness realignments in the political economy of platforms and societies.

PlatGovNet2025 Summary: Transitions, frictions, and new realities in global platform governance

Emillie de Keulenaar and Diyi Liu

Ever since platforms were understood to have "eaten the world", platform governance research has emerged as an effort to understand the nature of governance by, with, and through the companies and other actors who own and operate them. This makes it an essentially interdisciplinary field, one that borrows and mixes elements of any discipline that helps describe its material, legal, political and social conditions relative to its ever-evolving nature. The Platform Governance Research Network (PlatGovNet) conference is one opportunity to reconvene all of those who take part in this effort and take stock of the state of the field today.

PlatGovNet, way back when

The inaugural PlatGovNet conference in 2021 took place while societies worldwide confronted their dependence on digital platforms. Two years later, the 2023 conference bore witness to the field's growing maturity and its increasingly normative orientation. The evolution of the PlatGovNet conferences over the past four years reminds us which questions have proven enduring, and where new concerns have emerged.

The regulatory landscape has been transformed. While the 2021 conference occurred before major regulatory frameworks took effect, the 2023 conference actively engaged with the EU's Digital Services Act, its implications, limitations, and potential global influence. The conference discourse shifted from whether to regulate platforms to how regulation and governance could work sustainably, inclusively, and democratically.

Meanwhile, alternative and decentralised platforms also became a significant focus, reflecting both emerging platforms like Mastodon and Bluesky and a growing interest in governance models beyond centralised corporate platforms.

Both conference editions laid the foundations for what have become persistent themes. Content moderation remains central and is examined from multiple angles, including community moderation practices, moderation of specific types of content, and the labor of trust and safety workers. The question of how to study platforms empirically persists, with continued attention to transparency, data access, and research methods. Global and comparative perspectives on platform governance are also a through-line, pursued via the understanding of specific regional contexts and the geopolitics of digital infrastructure.

These themes would set the stage for the 2025 conference, as we confronted new realities that both inspired and complicated these enduring concerns.

PlatGovNet in 2025

Last year, our questions adapted to a different landscape.

Governments worldwide are asserting new forms of regulatory authority through formal and informal mechanisms.

In the EU, AI regulatory frameworks have laid down the basic vocabulary by which AI companies are required to define, estimate, and respond to "societal risk". There have been disinformation campaigns during various elections, but also new forms of "risky" content: deepfakes, nudes, and (depending on who you ask) the AI-generated historical revisionism of Grok.

Brazilian legislation has also reaffirmed state authority over a variety of platforms infringing upon local standards meant to safeguard democracy. As in Europe, these laws have not gone uncontested, but in Brazil the contestation plays out amid increasingly politicised platforms that actively take part in it. X and Telegram are examples, but not the only ones.

Alongside these formal regulatory frameworks, states have exerted power over platforms through less visible, informal mechanisms of negotiation and coercion. In several jurisdictions, governments have sought access to encrypted user data in the name of public safety, effectively compelling platforms to choose between market access and resistance. The ongoing pressure in several countries to require technical capabilities that would weaken end-to-end encryption, for instance, illustrates how state authority can be exercised through executive pressure. Similar dynamics are visible in data localisation requirements for messaging and cloud services.

In this context, Trump’s second presidency — only a year old — has contributed to new content moderation philosophies (or indeed a “renewed” return to older forms).

In the profoundly polarised (or at the very least conflicted) environments that platforms host and operate in, there is a sense, especially on the global right, that moderation has become yet another form of censorship, one that colludes with non-universal standards on the left. In that context, platforms like X and alt-tech competitors place themselves in the midst of culture wars where speech moderation controversies remain a central bone of contention. The assumption is that platforms must in a sense return to an ideological "centre ground" for maximum freedom of expression. Although this moderation regime may fairly be called authoritarian, one must also reckon with the polarised politics that gave rise to it.

In this same context, some platforms no longer feel so compelled to comply with the regulatory standards of state actors, and challenge them, via the White House, in the name of American "values" and business interests. Questions of tech sovereignty, typically located in Asia, are now at the heart of the EU.

This also has disciplinary implications. As a field, platform governance may be steeped in profoundly conflicted debates about how and why to moderate public speech. How can research speak both to and beyond the polarised language of content governance? Should we consider political diversity a criterion for "diverse" research, and intellectual and ideological monocultures a lack thereof? And if so, how does one resolve profound political and normative differences within the field? Some of these challenges are closer than they appear, and conferences like these are special opportunities to continue deliberating about our own forms of governance.


Yet, there are newfound opportunities to intervene and pluralise platform markets via decentralised social infrastructures.

Enthusiasm for the fediverse first emerged in the COVID years as part of a move from proposing different platform models towards a marketplace of different protocols. Though user numbers remain somewhat stagnant, we have since seen a maturing terminology that embraces this space as one of opportunity for new platform governance methods. Decentralisation, composable architectures, and a more diverse array of "alt-tech" platforms fragment platform markets and open up new governance interventions.

This fragmentation opens the field to a wider range of actors (community moderators, trusted flaggers, middleware designers, cooperatives, public institutions, etc.) who enter content governance as infrastructural participants. We have seen how this reconfiguration enables various forms of community-based moderation, the negotiation and deliberation of speech norms, and context-sensitive enforcement, while also creating space for public actors to introduce standards through regulation and for grassroots actors to propose alternative designs.

Practically, this may narrow the perennial "governance gap" that keeps workers, users, researchers, and public actors from operationalising their values and demands in the infrastructures that govern their sectors. This raises questions of training, education, and institutional support: how does one foster the skills required to translate normative claims into technical and organisational forms? And what can be done to teach this as a method and a civic capacity?

Disciplinarily, one may also appreciate a potentially "creative" turn in platform governance. That is: besides discussing the nature, implications, and execution of platform governance, there is impatience, especially in the private sector, for more proactive propositions of how else governance might be done. This need not collapse into solutionism; it can instead centre on deliberation around applicable concepts, frameworks, and protocols.

Keynote speakers

The third edition of the PlatGovNet conference brought together 64 contributions from across the world, and featured three keynote conversations with industry and civil society actors.

Aline Os

We started with Aline Os, who brought insights from her experience building the collective platform Señoritas Courier in São Paulo and thoughts on platform cooperativism.

Aline founded her collective as a response to the widespread state of unreported employment ("trabalho informal") in Brazil. Señoritas Courier is a network of delivery bikers whose response to platformised work is to propose a form of cooperativism based on "care before code" (a concept coined by the Disco network) as opposed to corporate platform logic. The cooperative organises itself around internal logics of care, through collective deliberation and the rules it establishes. There is no app to speak of, but a set of internally agreed parameters: how far workers should ride; weight limits on their deliveries; their working hours; the routes they take; and an overall distribution of tasks for balanced earnings.

This process was later crystallised into a method for software design through a collaboration with the Technology Centre of the Homeless Workers' Movement. The goal was to translate the social technology of Señoritas into a digital platform owned by workers. Though the platform is not functioning today, it offers important lessons about the financial costs of maintenance, clashes with proprietary software, the desire to maintain open and community-supported resources, and, most importantly, the "social technology" that subsists without technical interfaces. The potential of these tools resides in the translation of practical and technical know-how shared between workers, social movements, academia, and governments.

There have been efforts from the Brazilian state to formalise a solidarity economy. But given the limitations of public legislation, one may instead rely on public policies to foster incubation spaces for grassroots software prototypes. Universities are an example, provided they understand the lived realities and infrastructures in which gig workers subsist. This remains a question of literacy: a critical awareness of the current realities of platformed work, as well as one's capacity, as a worker in any field, to develop one's own platform models.

Jessica Ashooh

We also heard from Jessica Ashooh, head of Trust & Safety at Reddit. Jessica and Rasmus Kleis Nielsen had a conversation about Reddit's approach to political polarisation, echo chambers, AI, and the various crises afflicting content moderation: of legitimacy (across public and private actors); of consensus (across users with different speech norms); and of geopolitics (across different jurisdictions).

The main proposition highlighted by Jessica, as later by Aaron, was that moderation makes sense locally. Subreddit moderators are "experts in their own right", in that they must understand the contextual basis of their norms and exercise, with local legitimacy, the rules that derive from them. Building on that basis, there are system-wide guardrails designed to "maximise for community consensus of what is valuable content" via voting systems. Does this scale? "There is a saying that the only thing that scales with users is other users." That is: a volunteer moderator system allows users to maintain localised governance and understandings of their culture, with the additional but occasional oversight of the platform.

But Reddit remains a platform where users pre-select the content they consume, with, of course, some contact with broadly "popular" content. The problem is reminiscent of Sunstein's earlier work on echo chambers. So what about the siloing effects of self-selected subreddit content? Reddit does have spaces specifically carved out for cross-partisan dialogue: spaces designed for curiosity-driven dialogue to change one's mind (r/ProveMeWrong); to put questions to political others (r/AskAConservative, r/AskALiberal); or to initiate bottom-up peacebuilding initiatives (r/IsraelPalestine, r/KarabakhConflict, and more). This, perhaps, in a siloed manner of its own.

This also bears on the creeping problem of inauthentic AI slop, a new type of fuel running through social media's veins. Perhaps the question of locality is what creates a "premium", "trusted human space", where the overall guardrails and nature of subreddit conversations must rely on authentic content to make sense. This would differ from the impersonality of vanity-driven engagement logics.

Aaron Rodericks

Finally, Aaron Rodericks, Head of Trust & Safety from Bluesky Social, set the tone for discussions focused on emerging governance models, from federated networks to public interest infrastructures.

Bluesky started as an in-house project at Twitter, when Jack Dorsey considered a decentralised architecture as a means to escape excessive legal compliance burdens and to build a protocol more resilient against "different censorship happening around the world". This entails a fundamentally different moderation architecture than the one charted in centralised platform models. Many of us who have explored the fediverse and related topics will know that Bluesky is built on a composable stack. There are, as on Reddit, "basic defaults": a universal set of norms that apply to the whole platform. But there are also custom filters that apply to moderation choices, open to third-party modification and user preferences. One example is the moderation labels developed by and for specific Bluesky groups.
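To make the layering concrete, composable moderation of this kind can be thought of as platform-wide default rules combined with user-selected label preferences. The snippet below is a deliberately minimal sketch of that idea, not Bluesky's or the AT Protocol's actual implementation; all names (`Post`, `visible`, the label vocabulary) are invented for illustration.

```python
from dataclasses import dataclass

# Hypothetical sketch of composable, label-based moderation:
# platform-wide defaults are applied first, and each user's chosen
# label preferences (e.g. from third-party labelers) layer on top.

@dataclass
class Post:
    text: str
    labels: set  # labels attached by the platform or third-party labelers

# Platform-wide "basic defaults": labels every client must act on.
PLATFORM_DEFAULTS = {"illegal-content": "hide"}

def visible(post: Post, user_prefs: dict) -> bool:
    """Return True if the post should be shown to this user.

    user_prefs maps a label to an action chosen by the user
    (e.g. {"spoiler": "hide"}), emulating a subscription to a
    community-run labeler.
    """
    rules = {**user_prefs, **PLATFORM_DEFAULTS}  # defaults always win
    return all(rules.get(label) != "hide" for label in post.labels)

post = Post("match result inside!", {"spoiler"})
print(visible(post, {"spoiler": "hide"}))  # False: this user hides spoilers
print(visible(post, {}))                   # True: another user sees the post
```

The point of the sketch is the merge order: user preferences customise the experience, but the platform's minimal defaults cannot be overridden, mirroring the "basic defaults plus custom filters" description above.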

In a sense, customisation as the ideological core of Bluesky is a diametrical response to excessive centralisation in other platforms: a centralisation of moderation norms; of the ideological and other vested interests of CEOs like Musk; of attention, fomented by a single algorithmic logic; and of power, obfuscated by the internal decision-making of platform monopolies. While this resembles a return to early federated Web development models, we may speak of a platformised model of distributed governance.

Naturally, there are tensions. One is the calcification of centralised models even within decentralised infrastructures. This is reflected in regulation, which is primarily written for centralised models, as well as in protocols themselves, which still require some degree of centralisation. Users, meanwhile, have been brought up on largely passive, consumer-friendly interfaces where content primarily comes to them. As such, the familiarity of mainstream platforms is Bluesky's preferred aesthetic — though "the subversiveness" remains "baked underneath".

Bluesky's composable model has been taken up by some of us who see in it a possibility of implementing "better feeds", particularly feeds that may at least attenuate the levels of information disorder, perception gaps, and affective polarisation measured online. This is the case for bridging or "prosocial media" models in general. And though some hope rests on these initiatives, Aaron cautions about the limits of good faith in users, almost as if from a Sartre play:

> “users don’t want more control over their own experience — they want to have control over the experience of others.”

The question then becomes how to manage the desire of users for negative interaction. We enter the realm of political philosophy (or perhaps we have never left it), to the extent that governance invokes perennial ideas of human (user) nature.
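As a mechanical aside on what "bridging" means in the prosocial feed models mentioned above: bridging-based ranking scores content by agreement across otherwise divided groups rather than by raw engagement. The sketch below is a hypothetical simplification of that idea, not any platform's actual algorithm; the group partition and the minimum-approval scoring rule are assumptions for illustration.

```python
# Hypothetical sketch of bridging-based ranking: instead of ranking
# by total engagement, score each item by the *minimum* approval rate
# across predefined viewpoint groups, rewarding content that all
# sides find valuable rather than content one side loves.

def bridging_score(votes_by_group: dict) -> float:
    """votes_by_group maps a group name to (upvotes, total_votes)."""
    rates = [up / total for up, total in votes_by_group.values() if total]
    return min(rates) if rates else 0.0

partisan_hit = {"left": (90, 100), "right": (5, 100)}
common_ground = {"left": (60, 100), "right": (55, 100)}

print(bridging_score(partisan_hit))    # 0.05 — loved by one side only
print(bridging_score(common_ground))   # 0.55 — modest but shared approval
```

Under this rule the "common ground" item outranks the partisan one despite lower peak engagement, which is the intuition behind attenuating affective polarisation through feed design.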

What has come and gone around

Legislative and regulatory configurations

The European Union’s Digital Services Act (DSA), Digital Markets Act, and AI Act continue to function as a major reference point for discussion. In Auditing the DSA, participants discussed early implementation dynamics, interrogating the effectiveness of transparency databases, the practical value of mandated reporting, and the institutional fragilities underlying the Act’s risk-based governance model.

One of the main highlights was the gap between regulatory ambition and reality in practice. There are concerns, for example, that the DSA's systemic risk framework has architectural flaws compared to traditional EU risk regulation, such as the regulation of GMOs, chemicals, or food products. There are also legitimacy and transparency gaps. The DSA gives the European Commission exclusive enforcement power without requiring member state input or independent agency oversight. Meanwhile, platforms do not provide sufficiently detailed explanations of content moderation decisions, and researchers cannot properly audit platforms without access to moderated content.

At the same time, the conference gestured beyond Europe's regulatory orbit. The panel Sovereignty and Platform Power highlighted regulatory developments in African and Latin American countries, including comparative perspectives on gender-based violence legislation in Europe and Colombia. These region-specific approaches to sovereignty underscore again the importance of situating regulatory efforts within their locality. At the same time, non-aligned paths to digital autonomy reintroduce centre-periphery perspectives from world-systems theory, and with them creative opportunities for a "post-American" web.

Political economy and infrastructural constraints

The Platform Labor and Political Economy panel pointed to labor arrangements that remain unequal and unevenly visible. This implies examining how algorithmic management, interface design, and outsourcing practices continue to shape the conditions of platform workers, echoing concerns raised in 2021 around gig work and the labor of content moderation.

What emerged more clearly this year was the extent to which regulatory frameworks themselves increasingly rely on (and to some extent reproduce) these labor arrangements, be it through audit practices or risk assessments. There is a sense that governance and labor are commodified through platform infrastructures, with market logics shaping everything from privacy compliance to domestic work arrangements, all while performing accountability and transparency.

Another enduring PlatGovNet concern was revisited in the Global Content Moderation panel: the uneven distribution of moderation capacity and contextual understandings of harm. Again, contributions underscored persistent disparities in moderation resources, language coverage, and platforms' content enforcement priorities across regions, in ways that can systematically marginalise non-Western contexts.

Across the eight languages studied, for example, low removal rates persist regardless of the severity of a hate-related violation, with English remaining the most moderated language and Arabic, in spite of its presence in high-conflict zones, the least. The panel also found that AI tools do not scale equitably, likely because low-resource languages continue to be underserved even in AI training. The shift towards LLMs may in fact mean a higher likelihood of bias, since low-resource languages constitute smaller (and thus less diverse) training data.

Labor-wise, the rise of LLMs for moderation has also meant a shift from vision-related annotation tasks towards hiring workers with a different background, namely in physics and coding. One of the priorities behind this shift is to improve code generation models, which in turn feed into the overall layer of content detection and classification in social media platforms. Here, it is worth noting the delegation of (restricted) normative decision-making of content moderators towards AIs and their underlying training.

Closely related, the Platform Dependencies panel continued discussions on the infrastructural conditions that delimit the scope of platform governance. Dependencies on cloud infrastructures, data centers, payment systems, and security architectures, often controlled by a small number of dominant actors, were shown to constrain both regulatory ambition and institutional alternatives, reinforcing patterns of concentration.

The panel revealed how these dependencies manifest in multiple domains. One is the interdependence of moderation and content dissemination: coordinated deplatforming efforts remain ineffective because of cross-platform dependencies, especially those formed around a supply of and demand for moderated content. Here, only formal state intervention could bypass these structural constraints — but how to enforce it when the demand for moderated content is normalised?

Elsewhere, it was found that the DSA's law-making process was subject to a "governance by emulation", in the sense that a small team of Commission officials was heavily reliant on external expertise and vulnerable to industry capture through rhetorical claims about what is "impossible to regulate". This underscores again the information asymmetry that keeps public officials from making critical and proactive decisions within the tech sector, as public tech actors in their own right.

Yet the panel also offered cautious conclusions. The institutionalisation of public law thinking within private governance structures, while currently weak, may provide a foundation for future reform on a 20–30 year timeline. One question is how to historicise this process.

Discursive and normative foundations of governance

Finally, the panel The Discourse of Platform Governance drew attention to narratives about governance itself. Several papers examined how concepts such as "risk", "innovation", and "safety" circulate across policy documents, corporate communication, and public debate, shaping our understanding of who defines the problems platform governance is meant to solve, and whose interests those definitions ultimately serve.

Research on the AI Act, for example, revealed how the idea of innovation works as a somewhat empty signifier for techno-optimistic positions that tend to see regulation as an obstacle to progress. EU policy experts expressed concern about rapid regulatory backsliding, with American companies becoming increasingly sophisticated at European lobbying and pressure mounting to dilute the DSA, DMA, and AI Act in the name of competitiveness, alternative values, or internal chaos for geopolitical gain.

The panel also discussed how far-right actors tend to appropriate rights-based discourses and free speech rhetoric without long-term commitment to those values, as was observed in the case of X's takeover. This reveals a propensity for political, in-group speech norms that are sometimes generalised across an entire platform. Indeed, the very language of platform governance becomes contested terrain where corporate interests, political movements, and regulatory ambitions struggle to define whose problems matter and what solutions are deemed possible or impossible.

New realities in platform governance

Artificial intelligence

AI has moved from being an object of platform governance to becoming a modality through which governance operates. This shift was most explicit in the panel AI Governance, where discussions foregrounded the integration of AI systems into content moderation, recommendation, and labor management. Generative AI governance emerged as a particularly salient concern: alignment processes were shown to embed normative assumptions about online discourse that may reproduce some social hierarchies. At the same time, several contributions emphasized the infrastructural politics underpinning AI governance. Concentration in cloud computing and AI defense sectors highlighted how access to computational resources conditions who can meaningfully build, deploy, and govern AI systems at scale.

New objects and logics of moderation

While content moderation has long been central to platform governance research, the 2025 conference demonstrated a clear expansion in both where moderation takes place and how it is conceived. The panel New Objects of Moderation articulated this most directly, foregrounding how generative AI systems, agentic accounts, and synthetic media challenge assumptions about what the objects of content moderation look like. Moving away from singular pieces of content, the discussions also pointed to a shift among major social media platforms from moderation at the level of individual posts toward actor- and behavior-based moderation.

This shift illustrates the various movements under way to bypass the incommensurability of moderation decisions — i.e., what does and does not constitute acceptable language — towards a form of content agnosticism, either via behaviour-specific moderation or a politics of "prosociality". The latter defends the position that better user relations (and better affordances for fostering those relations) would dampen the production of hateful or "misinformative" content, since such content is, in the end, a product of polarised "information disorders".

As such, the panels Community-Driven Governance and Prosocial Moderation frame moderation as an essentially mediating and constructive practice that must rely on robust mechanisms for negotiation, dialogue and consensus-building. Especially in the context of federated social media, governance is increasingly framed as a distributed process involving users, communities, and designers. This is all the more necessary when censorial affordances are distributed across platforms and users, resulting in a strange form of "democratisation of censorship".

At the same time, the panel Beyond Content Governance featured a spatial turn in trust and safety research. Papers examining Social XR and virtual environments raised questions about how moderation operates in immersive, embodied settings, and how public sector actors might engage in such spaces while remaining compliant with existing regulatory frameworks. Instead of interpreting moderation as a speech intervention, the discussions emphasized other moderation norms, such as proximity, presence, and spatial interaction.

Emerging institutional arrangements

Beyond formal legislation, the conference devoted attention to how authority and accountability in platform governance are being reconfigured through emerging institutional arrangements. Rather than assuming a clear division between state regulation and platforms' self-governance, discussions in the Authority and Accountability panel examined the dialectical relationships between platforms and states, as well as their alignment with other regulatory intermediaries and technical systems.

In the meantime, there are growing discussions around designing Public Service (Social) Media that seek to reclaim platform infrastructures for the public interest. These contributions revisited public service media traditions in the era of platformisation, focusing on questions of ownership, sustainable business models, and democratic accountability. They emphasised how public service social media initiatives hold the potential to challenge dominant platform business models by prioritising inclusion and collective governance, while having to confront structural constraints such as scale, funding, and dependence on commercial infrastructures.

A shifting political climate — and more methodological considerations

Many discussions were shaped by an increasingly fraught (geo-)political context in which platform governance unfolds.

The panels The Politics of Big Tech and Governmentality positioned platforms not simply as for-profit organizations or the targets of regulation, but as (political) actors whose infrastructural power, market leverage, and discursive practices actively shape the organization of social relations and blur the boundaries between public and private authority. For instance, we heard discussions about the concentration of infrastructural power within large technology firms and their entanglement with the functions of the state, including security and welfare. Independent social media councils (SMCs), long discussed in the field, call for renewed discussion in light of both old and new authoritarian (or "hybrid") regimes, where state regulation risks enabling top-down censorship — or inevitably participating in the polarised language around what constitutes censorship.

The panel Politics and Conflicts situated platform governance within contexts of political crisis, including elections, wartime, and polarised communication environments. Case studies examined platform involvement in ongoing political conflicts alongside platforms' responses to disinformation and political pressure, both of which highlighted how governance decisions are increasingly entangled with unresolved political disagreements. There were also renewed discussions around the rise and growing normalisation of "alternativity" in the form of "alt-tech" ecosystems, which form a platform ecology increasingly fragmented around competing speech norms.

Funding acknowledgement

The 2025 PlatGovNet conference was supported by the Danish National Research Foundation grant DNRF197 for the project “Power over platforms?