Ever since platforms were understood to have “eaten the world”, platform governance research has emerged as an effort to understand the nature of governance by, with, and through the companies and other actors who own and operate them. This makes it an essentially interdisciplinary field, one that borrows and mixes elements from any discipline that helps describe the material, legal, political and social conditions of platforms as they evolve. The Platform Governance Research Network (PlatGovNet) conference is one opportunity to reconvene all those who take part in this effort and to take stock of the state of the field today.
The inaugural PlatGovNet conference in 2021 took place while societies worldwide confronted their dependence on digital platforms. Two years later, the 2023 conference bore witness to the field’s growing maturity and increasingly normative orientation. The evolution of the PlatGovNet conferences over the past four years reminds us which questions have proven enduring, and where new concerns have emerged.
The regulatory landscape has been transformed. While the 2021 conference occurred before major regulatory frameworks took effect, the 2023 edition actively engaged with the EU’s Digital Services Act: its implications, limitations and potential global influence. The conference discourse shifted from whether to regulate platforms to how regulation and governance could work sustainably, inclusively, and democratically.
Meanwhile, alternative and decentralised platforms also became a significant focus, reflecting both emerging platforms like Mastodon and Bluesky and a growing interest in governance models beyond centralised corporate platforms.
Both conference editions laid the foundations for what have become persistent themes. Content moderation remains central and is examined from multiple angles, including community moderation practices, the moderation of specific types of content, and the labor of trust and safety workers. The question of how to study platforms empirically persists, with continued attention to transparency, data access, and research methods. Global and comparative perspectives on platform governance are also a through-line, pursued through the study of specific regional contexts and the geopolitics of digital infrastructure.
These themes would set the stage for the 2025 conference, as we confronted new realities that both inspired and complicated these enduring concerns.
Last year, our questions adapted to a different landscape.
In the EU, AI regulatory frameworks have laid down the basic vocabulary by which AI companies are required to define, estimate and respond to “societal risk”. There have been disinformation campaigns during various elections, but also new (and sometimes inventive) forms of “risky” content: deepfakes, nudes, and the AI-generated historical revisionism of Grok (depending on who you ask).
Brazil has also reaffirmed its legislative authority over a variety of platforms that infringe upon local standards meant to safeguard democracy. Though, as in Europe, this legislation has not gone uncontested, the contestation now unfolds in a context of increasingly politicised platforms that actively take part in it. X and Telegram are examples, but not the only ones.
Alongside these formal regulatory frameworks, states have exerted power over platforms through less visible, informal mechanisms of negotiation and coercion. In several jurisdictions, governments have sought access to encrypted user data in the name of public safety, effectively compelling platforms to choose between market access and resistance. The ongoing pressure in several countries to require technical capabilities that would weaken end-to-end encryption illustrates how state authority can be exercised through executive pressure. Similar dynamics are visible in data localisation requirements that mandate local data storage for messaging and cloud services.
In the polarised (or at the very least conflicted) environments that platforms host and operate in, there is a sense, especially on the global right, that moderation has become yet another form of censorship, one that colludes with non-universal standards from the left. In that context, platforms like X and alt-tech competitors place themselves in the midst of culture wars where speech moderation controversies remain a central bone of contention. The assumption is that platforms should offer lighter moderation as an ideological “centre ground”, selling maximum freedom of expression as a new product feature. Although this moderation regime may fairly be called authoritarian, one must also reckon with the polarised politics that gave rise to it.
In this same context, some platforms no longer feel compelled to comply with the regulatory standards of state actors, and challenge them, via the White House, in the name of American “values” and business interests. The question of tech sovereignty — a debate typically located in Asia — is now at the heart of the EU.
Enthusiasm for the fediverse emerged in the COVID years as part of a move from proposing different platform models towards a marketplace of different protocols. Though user numbers remain somewhat stagnant, we have since seen a maturing terminology that embraces this space as one of opportunity for new platform governance methods. One example is the notion of middleware, which refers to the possibility for third parties to design their own platform mechanisms (feeds, recommenders, etc.) and plug them into modular platform designs such as Bluesky’s — or to push middleware as a regulatory protocol for all platforms to comply with.
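To make the notion concrete, here is a hypothetical sketch of middleware-style feed composition. None of these names correspond to a real API; they only illustrate how a third party could supply a ranking mechanism to a modular platform.

```typescript
// Hypothetical sketch: a third-party "middleware" feed ranker plugged into
// a modular platform. All names and types here are illustrative.

interface Post {
  uri: string;
  authorDid: string;
  createdAt: Date;
}

// A feed generator is a pure function from candidate posts to an ordering,
// which a third party can supply independently of the platform.
type FeedGenerator = (candidates: Post[]) => Post[];

// Example middleware: a reverse-chronological feed that skips posts from
// accounts the user has muted.
function makeChronoFeed(mutedDids: Set<string>): FeedGenerator {
  return (candidates) =>
    candidates
      .filter((p) => !mutedDids.has(p.authorDid))
      .sort((a, b) => b.createdAt.getTime() - a.createdAt.getTime());
}

// The host platform composes whichever generator the user has selected.
const myFeed: FeedGenerator = makeChronoFeed(new Set(["did:example:muted"]));
```

The point of the design is the separation of concerns: the platform holds the data and the defaults, while the ranking logic becomes a swappable, externally authored component.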
In the meantime, a widening array of actors, from content moderators and trusted flaggers to oversight boards and federated communities, is reshaping how governance operates in practice. We see a diversification of actors that enriches (and complicates) the platform governance triangle we use as a roadmap.
Understanding these changing landscapes requires sustained and creative empirical investigation. It also calls for new conceptual frameworks and methodological tools capable of capturing the complexity of contemporary platform ecosystems.
Our theme this year, Transitions, Frictions, and New Realities in Platform Governance, reflects this moment of transformation.
The third edition of the PlatGovNet conference brought together 64 contributions from across the world, and featured three keynote conversations with industry and civil society actors.
We started with Aline Os, who brought insights from her experience building the collective platform Señoritas Courier in São Paulo and thoughts on platform cooperativism.
Aline founded her collective as a response to the widespread state of unreported employment (“trabalho informal”) in Brazil. Señoritas Courier is a network of delivery bikers whose response to platformised work is to propose a form of cooperativism based on “care before code” (a concept coined by the Disco network) as opposed to corporate platform logic. Señoritas’ cooperative is an organisation built on internal logics of care, via collective deliberation and the rules it establishes. There is no app to speak of, but a set of internally agreed parameters: how far workers should ride; the maximum weight of their deliveries; their working hours; the routes they travel; and an overall balance of distributed tasks for balanced earnings.
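Purely as an illustration, such collectively agreed parameters could one day be encoded in a worker-owned platform along these lines; every name, value and rule below is hypothetical, not a description of Señoritas’ actual system.

```typescript
// Illustrative only: a collective's agreed parameters as a data structure.

interface CollectiveRules {
  maxDistanceKm: number; // how far a worker should ride
  maxWeightKg: number;   // limit on delivery weight
  workingHours: { start: string; end: string };
}

interface Courier {
  name: string;
  earningsThisWeek: number;
}

// Balanced earnings via balanced tasks: route the next delivery to the
// courier who has earned least so far, within the agreed limits.
function assignDelivery(
  couriers: Courier[],
  distanceKm: number,
  weightKg: number,
  rules: CollectiveRules
): Courier | undefined {
  if (couriers.length === 0) return undefined;
  if (distanceKm > rules.maxDistanceKm || weightKg > rules.maxWeightKg) {
    return undefined; // the collective's rules refuse the job outright
  }
  return couriers.reduce((least, c) =>
    c.earningsThisWeek < least.earningsThisWeek ? c : least
  );
}
```

What the keynote stressed, however, is that the rules subsist as “social technology” even without such an interface: the code would only crystallise norms already deliberated by the workers themselves.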
This process was later crystallised into a method for software design, based on a collaboration with the Technology Centre of the Homeless Workers’ Movement. The goal was to translate the social technology of Señoritas into a digital platform owned by workers. Though the platform is not functioning today, it offers important lessons about the real financial costs of maintenance, clashes with proprietary software, the desire to maintain open and community-supported resources, and, most importantly, the “social technology” that subsists without technical interfaces. The potential of these tools resides in the translation of practical and technical know-how shared between workers, social movements, academia and governments.
There have been efforts by the Brazilian state to formalise a solidarity economy. But given the limitations of public legislation, one may instead rely on public policies that foster incubation spaces for grassroots software prototypes. Universities are an example, provided they understand the lived realities and infrastructures in which gig workers subsist. This remains a question of literacy: a critical awareness of the current realities of platformed work, as well as one’s capacity, as a worker in any field, to develop one’s own platform models.
We also heard from Jessica Ashooh, head of Trust & Safety at Reddit. Jessica and Rasmus Kleis Nielsen had a conversation about Reddit’s approach to political polarisation, echo chambers, AI and the various crises afflicting content moderation — of legitimacy (across public and private actors); of consensus (across users with different speech norms); and of geopolitics (across different jurisdictions).
On content moderation’s legitimacy crisis, Jessica highlighted Reddit’s community-driven governance as a response to the crises of other platforms’ more centralised models. The local nature of Reddit’s moderation regime would overcome the speech restrictions and normative conflicts typical of top-down centralised models. Localised subreddit moderators must be “experts in their own rights”: they must understand the contextual basis of their norms and exercise, with local legitimacy, the rules that derive from them. On top of that basis sit system-wide guardrails designed to “maximise for community consensus of what is valuable content” via voting systems. Does this scale? “There is a saying that the only thing that scales with users is other users.” That is: a volunteer moderator system allows users to maintain localised governance and understandings of their culture, with the additional but occasional oversight of the platform.
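As a point of reference for how voting systems operationalise “community consensus”, here is a sketch based on the “hot” ranking formula from Reddit’s formerly open-source codebase; the production system has since evolved, so treat this as a historical illustration rather than current practice.

```typescript
// Vote-driven ranking as a consensus mechanism, after the "hot" formula in
// Reddit's formerly open-source code. The log term means the first votes
// matter most; the time term keeps newer posts competitive.
function hotScore(ups: number, downs: number, createdAtSec: number): number {
  const score = ups - downs;
  const order = Math.log10(Math.max(Math.abs(score), 1));
  const sign = score > 0 ? 1 : score < 0 ? -1 : 0;
  const epochSec = 1134028003; // arbitrary reference date in the original code
  return sign * order + (createdAtSec - epochSec) / 45000;
}
```

The design choice is telling: consensus is not adjudicated by a central moderator but aggregated from distributed votes, with the platform supplying only the aggregation rule.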
But Reddit remains a platform where users pre-select the content they consume, with, of course, some contact with broadly “popular” content, and there is no absence of structural conflict on the platform. So what about the siloing effects of self-selected subreddit content? Here Jessica pointed to the distinct nature of Reddit’s cross-cutting spaces, specifically carved out for cross-partisan dialogue. There are spaces designed for curiosity-driven dialogue to change one’s mind (r/ProveMeWrong); to put questions to perceived political others (r/AskAConservative, r/AskALiberal); or to initiate bottom-up peacebuilding initiatives (r/IsraelPalestine, r/KarabakhConflict, and more).
If this logic were extended to the geopolitical level, how would Reddit approach the international politics of speech regulation? Through a positioned or “principled approach” that retains basic norms, which may adapt but not change fundamentally under shifting legislative configurations or national power swings. Though Reddit is obligated to follow the laws of the countries in which it operates, it will use “all [available] lawful methods” to “push back” when it thinks the law is being applied unjustly, or when the law itself may be unjust. Freedom of speech is the name of today’s game.
Lastly, generative AI: how does Reddit remain “the most human place on the Internet” in light of so much AI-generated content? By embracing an emerging role as a “premium” venue of “trusted human spaces”, where the overall guardrails and the nature of subreddit conversations rely on authentic content, unlike the more vanity-driven and impersonal spaces of centralised platforms. The deeply contextual nature of Reddit conversations (where users need to be in the know and abide by basic subreddit norms) would in theory pre-empt any value to impersonal AI slop.
Finally, Aaron Rodericks, Head of Trust & Safety at Bluesky Social, set the tone for discussions focused on emerging governance models, from federated networks to public interest infrastructures. Bluesky started as an in-house project under Twitter, when Jack Dorsey considered a decentralised architecture as a means to get away from excessive legal compliance, and a protocol architecture more resilient against “different censorship happening around the world”.
This entails a fundamentally different moderation architecture from the one we are used to charting in centralised platform models. Many of us who have explored the fediverse and related topics will know that Bluesky is built on a composable stack. There are, as on Reddit, “basic defaults”: a universal set of norms that apply to the whole platform. But there are also custom filters layered over moderation choices, open to third-party modifications and user preferences. One example is moderation labels developed by and for specific Bluesky groups.
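Here is a minimal sketch of how such composable moderation can work, loosely modeled on Bluesky’s labeling system; the field names below are simplified for illustration and are not the actual AT Protocol schema.

```typescript
// Simplified sketch of composable moderation via labels. Third-party
// "labelers" annotate content; users decide what each label should do.

interface Label {
  src: string; // who issued the label: the platform or a community labeler
  uri: string; // the post or account being labeled
  val: string; // e.g. "spam", "graphic-media"
}

type Action = "hide" | "warn" | "show";

// Platform-wide "basic defaults" applying to everyone.
const defaults: Record<string, Action> = { spam: "hide" };

// Resolve what a given user sees: their own preferences (including labels
// from labelers they subscribe to) layered over the platform defaults.
function resolve(labels: Label[], prefs: Record<string, Action>): Action {
  let outcome: Action = "show";
  for (const label of labels) {
    const action = prefs[label.val] ?? defaults[label.val] ?? "show";
    if (action === "hide") return "hide";
    if (action === "warn") outcome = "warn";
  }
  return outcome;
}

// Usage: a user who subscribed to a community labeler and wants warnings
// on graphic media, while the platform default still hides spam outright.
resolve(
  [{ src: "did:example:labeler", uri: "at://post/1", val: "graphic-media" }],
  { "graphic-media": "warn" }
);
```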
In a sense, customisation sits at the ideological core of Bluesky as a diametrical response to the excessive centralisation of other platforms: a centralisation of moderation norms; of the ideological and other vested interests of CEOs like Musk; of attention, fomented by a singular algorithmic logic; and of power, obfuscated by the internal decision-making of platform monopolies. While this resembles a return to early federated Web development models, we may speak of a platformised model of distributed governance.
Of course, there are tensions. One is the calcification of centralised models even within decentralised infrastructures. This is reflected in regulation, which is primarily written for centralised models, as well as in protocols themselves, which still require some degree of centralisation. Users, meanwhile, have been brought up on largely passive, consumer-friendly interfaces where content primarily comes to them. As such, the familiar look of mainstream platforms is Bluesky’s preferred aesthetic — though “the subversiveness” remains “baked underneath”.
Bluesky’s composable model has drawn the attention of those of us who see a possibility of implementing “better feeds”, particularly ones that may at least attenuate the levels of information disorder, perception gaps and affective polarisation measured online. This is the case for bridging or “prosocial media” models in general (a sketch of bridging-style ranking follows below). And though some hope rests on these initiatives, Aaron cautions about the limits of good faith in users, almost as if from a Sartre play:
“users don’t want more control over their own experience — they want to have control over the experience of others.”
The question then becomes how to manage the actual desire of users for negative interactions. We enter the realm of political philosophy (or perhaps we have never left it), to the extent that governance often invokes different ideas of human — user — nature.
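For reference, here is a hedged sketch of bridging-based ranking, one family of prosocial feed designs discussed in this space; how users are clustered into camps is assumed given, and nothing here describes a deployed system.

```typescript
// Bridging-based ranking, sketched: instead of ranking by total engagement,
// rank by the *minimum* approval across opposing user clusters, so only
// content that resonates across divides rises.

interface ClusterRatings {
  clusterA: number[]; // approval ratings (0..1) from one cluster
  clusterB: number[]; // approval ratings (0..1) from the opposing cluster
}

const mean = (xs: number[]) =>
  xs.length === 0 ? 0 : xs.reduce((a, b) => a + b, 0) / xs.length;

// Engagement ranking would score by the overall mean; bridging scores by
// the worse of the two cluster means, rewarding cross-cutting approval.
function bridgingScore(r: ClusterRatings): number {
  return Math.min(mean(r.clusterA), mean(r.clusterB));
}
```

Aaron’s caution applies directly here: such designs assume users want bridging content for themselves, when what many may actually want is to shape what others see.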
Many of the panels in the 2025 conference extended conversations once started in 2021.
The European Union’s Digital Services Act (DSA), Digital Markets Act, and AI Act continue to function as a major reference point for discussion. In Auditing the DSA, participants discussed early implementation dynamics, interrogating the effectiveness of transparency databases, the practical value of mandated reporting, and the institutional fragilities underlying the Act’s risk-based governance model.
One of the main highlights was the gap between regulatory ambition and reality in practice. There are concerns, for example, that the DSA’s systemic risk framework has architectural flaws compared to traditional EU risk regulation, such as the regulation of GMOs, chemicals or food products. There are also legitimacy and transparency gaps. The DSA gives the European Commission exclusive enforcement power without requiring member state input or independent agency oversight. Meanwhile, platforms do not provide sufficiently detailed explanations of content moderation decisions, while researchers cannot properly audit platforms without access to moderated content.
At the same time, the conference gestured beyond Europe’s regulatory orbit. The panel Sovereignty and Platform Power highlighted regulatory developments in African and Latin American countries, including comparative perspectives on gender-based violence legislation in Europe and Colombia. These region-specific approaches to sovereignty underscored the importance of situating regulatory efforts within distinct socio-legal contexts. Meanwhile, discussions around non-aligned paths to digital autonomy reintroduced centre-periphery perspectives from world-systems theory, helping to make sense of the transnational power asymmetries that continue to structure platform governance.
The Platform Labor and Political Economy panel pointed to labor arrangements that remain unequal and unevenly visible. This implies examining how algorithmic management, interface design, and outsourcing practices continue to shape the conditions of platform workers, echoing concerns raised in 2021 around gig work and the labor of content moderation. What emerged more clearly this year was the extent to which regulatory frameworks themselves increasingly rely on (and to some extent reproduce) these labor arrangements, be it through audit practices or risk assessments. There is a sense that governance and labor are both being commodified through platform infrastructures, with market logics shaping everything from privacy compliance to domestic work arrangements, all while performing accountability and transparency.
Another enduring PlatGovNet concern was revisited through the Global Content Moderation panel: the uneven distribution of moderation capacity and of contextual understandings of harm. Contributions highlighted persistent disparities in moderation resources, language coverage, and platforms’ content enforcement priorities across regions, in ways that can systematically marginalize non-Western contexts.
Across the eight languages studied, for example, low removal rates persist regardless of the severity of a hate-related violation, with English remaining the most moderated language and Arabic the least. The panel also found that AI tools have not closed this gap, likely because low-resource languages continue to be underserved even in training. The shift towards LLMs in fact raises the likelihood of bias, since low-resource languages constitute smaller (and thus less diverse) shares of training data. Labor-wise, the rise of LLMs for moderation has meant a shift away from vision-related annotation tasks and towards hiring workers with physics and coding backgrounds to improve code-generation models.
Closely related, the Platform Dependencies panel continued discussions on the infrastructural conditions that delimit the scope of platform governance. Dependencies on cloud infrastructures, data centers, payment systems, and security architectures, often controlled by a small number of dominant actors, were shown to constrain both regulatory ambition and institutional alternatives, and to reinforce patterns of concentration.
The panel revealed how these dependencies manifest in multiple domains. In content moderation, coordinated deplatforming efforts remain ineffective because of cross-platform dependencies; only formal state intervention would bypass these structural constraints. Legally, it was found that the DSA’s law-making process was subject to “governance by emulation”, in the sense that a small team of Commission officials was heavily reliant on external expertise and vulnerable to industry capture through rhetorical claims about what is “impossible to regulate”. Yet the panel also offered cautiously optimistic conclusions: the institutionalization of public law thinking within private governance structures, while currently weak, may provide a foundation for future reform on a 20-30 year timeline.
Finally, a panel on The Discourse of Platform Governance drew attention to the narratives about governance itself. Several papers examined how concepts such as risk, innovation and safety circulate across policy documents, corporate communication, and public debate, and how they shape our understanding of who defines the problems platform governance is meant to solve, and whose interests those definitions ultimately serve.
Research on the AI Act, for example, revealed how the idea of innovation functions as a somewhat empty signifier, while simultaneously serving as a vector of techno-optimism that casts regulation as an obstacle to progress. This did not prevent EU policy experts from expressing deep concern about rapid regulatory backsliding, with American companies becoming increasingly sophisticated at European lobbying and pressure mounting to dilute the DSA, DMA, and AI Act in the name of competitiveness.
The panel also discussed how far-right actors tend to appropriate rights-based discourses and free speech rhetoric without long-term commitment to those values, as was demonstrated in the case of X’s takeover. The findings underscore how the very language of platform governance becomes contested terrain where corporate interests, political movements, and regulatory ambitions struggle to define whose problems matter and what solutions are deemed possible or impossible.
As these ongoing debates make clear, platform governance today is shaped as much by contested narratives and political pressures as by the infrastructures that translate them into practice. The next section turns to the new realities through which platform governance is increasingly enacted.
Beyond extending earlier conversations, these discussions pointed to emerging conditions that are altering the modalities through which platform governance now operates.
This shift was most explicit in the AI Governance panel, where discussions foregrounded the integration of AI systems into content moderation, recommendation, and labor management. Generative AI governance emerged as a particularly salient concern: alignment processes were shown to embed normative assumptions about online discourse, often reproducing existing social hierarchies. At the same time, several contributions emphasized the infrastructural politics underpinning AI governance. Concentration in the cloud computing and AI defense sectors highlighted how access to computational resources conditions who can meaningfully build, deploy, and govern AI systems at scale.
While content moderation has long been central to platform governance research, the 2025 conference demonstrated a clear expansion in both where moderation takes place and how it is conceived. The panel New Objects of Moderation articulated this most directly, foregrounding how generative AI systems, agentic accounts, and synthetic media challenge assumptions about what the objects of content moderation look like. Moving away from singular pieces of content, the discussions also pointed to a shift among major social media platforms from moderation at the level of individual posts towards actor- and behavior-based moderation. This shift foregrounds the mechanisms through which content circulates at scale, while also complicating attribution, responsibility, and enforcement.
At the same time, the panel Beyond Content Governance featured a spatial turn in trust and safety research. Papers examining Social XR and virtual environments raised questions about how moderation operates in immersive, embodied settings, and how public sector actors might engage in such spaces while remaining compliant with existing regulatory frameworks. Instead of interpreting moderation as a speech intervention, the discussions emphasized proximity, presence, and spatial interaction as new challenges.
More significantly, several panels moved away from centralized content removal toward bottom-up and community-driven approaches to content governance. In particular, discussions in Community-Driven Governance and Prosocial Moderation foregrounded moderation as a mediating and constructive practice, rather than a (purely) prohibitive and adjudicative one. Especially in the context of federated social media, governance was framed as a distributed process involving users, communities, and designers, with particular attention to prosocial design strategies aimed at fostering dialogue and bridging polarization, an important driver of “harmful” content.
Beyond formal legislation, the conference devoted attention to how authority and accountability in platform governance are being reconfigured through emerging institutional arrangements. Rather than assuming a clear division between state regulation and platforms’ self-governance, discussions in the Authority and Accountability panel examined the dialectical relationships between platforms and states, as well as their alignment with other regulatory intermediaries and technical systems.
In the meantime, there are increasing discussions around designing Public Service (Social) Media that seek to reclaim platform infrastructures for the public interest. These contributions revisited public service media traditions in the era of platformization, focusing on questions of ownership, sustainable business models, and democratic accountability. They emphasized how public service social media initiatives hold the potential to challenge dominant platform business models by emphasizing inclusion and collective governance, while having to confront structural constraints such as scale, funding, and dependence on commercial infrastructures.
Many discussions were shaped by an increasingly fraught (geo-)political context in which platform governance unfolds.
The panels The Politics of Big Tech and Governmentality positioned platforms not simply as for-profit organizations or targets of regulation, but as (political) actors whose infrastructural power, market leverage, and discursive practices actively shape the organization of social relations and blur the boundaries between public and private authority. For instance, we heard discussions around the concentration of infrastructural power within large technology firms and their entanglement with functions of the state, including security and welfare. Independent social media councils (SMCs), long discussed in the field, were revisited in light of authoritarian and hybrid regimes, where state regulation risks enabling forms of censorship; the councils were framed as attempts to reconcile platform self-governance with external oversight.
Finally, Politics and Conflicts situated platform governance within contexts of political crisis, including elections, wartime, and extremely polarized communication environments. We heard case studies of platforms’ involvement in ongoing political conflicts, alongside analyses of platform responses to disinformation and political pressure, which highlighted how governance decisions are increasingly entangled with unresolved political struggles. Discussions also addressed the rise and growing normalization of “alt-tech” ecosystems, which signal both increasing distrust in mainstream platforms and the emergence of parallel infrastructures that challenge dominant governance models.
Transition points in the larger history of platform governance emerge from these discussions.
Decentralisation, composable architectures, and a more diverse array of “alt-tech” platforms fragment platform markets and create openings for new governance interventions. This fragmentation opens the field to a wider range of actors (community moderators, trusted flaggers, middleware designers, cooperatives, public institutions, etc.) who enter content governance as infrastructural participants. We have seen how this reconfiguration enables new forms of community-based moderation, negotiated norms, and context-sensitive enforcement, while also creating space for public actors to introduce standards through regulation and for grassroots actors to propose alternative designs.
There appears to be a persistent “governance gap” in the capacity of workers, users, researchers, and public actors to operationalise their demands and values within the infrastructures that govern their sectors. This raises questions of training, education, and institutional support: how does one foster the skills required to translate normative claims into technical and organizational forms? And what can be done to teach this as a method and civic capacity?
Discussing the nature, implications and execution of platform governance inevitably lends itself to descriptive or critical research. Works often describe who is involved in governance and how, and critique the ways in which governance falls short across these actors. Yet they may sometimes shy away from proactive propositions towards a form of ethics, i.e., how else governance may be done. This need not land on solutionism, but on deliberation around applicable concepts, frameworks and protocols.
Platform governance is sometimes (perhaps inevitably) steeped in profoundly conflicted debates about how and why to moderate public speech. How can research speak both to and beyond the polarised language of content governance? Should we consider political diversity a criterion for “diverse” research, and intellectual and ideological monocultures a lack thereof? And if so, how does one resolve profound political and normative differences within this field? Some of these challenges are closer to home than they appear, and conferences like these are special opportunities to continue deliberating about our own forms of governance.
The 2025 PlatGovNet conference was supported by the Danish National Research Foundation grant DNRF197 for the project “Power over platforms?