Working Draft

Counted but Not Heard: Platform Democracy and the Noumenal Erosion of Participatory Agency

Section 3

Towards a Normative Response

Platform algorithms operate in two distinct but conjoined modes, and it is this conjunction that allows the participation paradox to persist. What we encounter, as contributors and consumers, is the content-pushing mode (Mode 1): the recommendation, ranking and amplification of content designed to maximise engagement. This is the mode addressed by Pariser’s filter bubble analysis and Sunstein’s account of echo chambers. In this mode, the problem appears at the phenomenal level of algorithmic curation. It produces observable outputs and is therefore open, at least in principle, to regulatory and legal intervention. The growing legal and regulatory record confirms this reach.

The European Commission’s formal proceedings against TikTok (European Commission, 2024) mark an important moment in which the state’s claim to democratic authority takes phenomenal form. Acting to protect its citizens, the Commission claims that TikTok’s impact strikes at the democratic principles to which all citizens are entitled. As a result, TikTok is accountable not only for physical and mental harms but also for democratic harms: in this case, the erosion of the ability to participate. Through its investigation of algorithmic recommendation systems and addictive design features, the Commission is attempting to force digital architectures to comply with visible democratic requirements. In doing so, it acts on the assumption that platform architecture has weakened the citizen’s effective standing as a democratic participant, requiring institutional intervention to restore a relation of justification that the platform itself no longer sustains.

In Weberian terms, the Commission occupies the structural position of the institutional proxy — the rational-legal agent filling the justificatory gap that individual citizens can no longer bridge, demanding compliance on behalf of a public interest (Weber, 1905/2001; Pagano, 2025). Forst names precisely what these enforcement actions seek to reconstruct: justificatory addressability, the creation, in proxy form, of a means through which a claim about platform architecture can reach an addressee with both authority and obligation to respond (Forst, 2017). When TikTok and Meta obstructed researcher access to algorithmic data in the 2025 proceedings (European Commission, 2025), they were not merely committing a compliance failure in the ordinary regulatory sense. They were reasserting Zuboff’s second declaration — the proprietary exemption — against even this minimal reconstruction of justificatory addressability (Zuboff, 2019, p. 178). That reassertion represents noumenal closure made visible at the institutional level: the vault holding firm against the most powerful regulatory actor yet to test it.

The Commission’s findings confirm that the content-pushing mode of these algorithms operates through narrowing circuits that elude individual agency (European Commission, 2024). The verdict in KGM v. Meta Platforms (Los Angeles Superior Court, 25 March 2026; BBC News, 2026) demonstrates the judiciary’s attempt to establish the same standard of accountability. These interventions are important. They signal recognition that platform harms extend beyond individuals to democratic substance itself. However, they are reactive to the phenomenal layer — the created digital reality that Mode 1 sustains. The deeper architecture does more than generate those harms: it systematically reorganises the conditions under which participation can still count as participation, rendering even the most forceful regulatory claims structurally indirect.

The second mode (Mode 2), the closing-off mode, provides a gateway into the noumenal problem. Where the content-pushing mode acts on what we see, the closing-off mode acts on the conditions under which what users see can function as a justificatory claim at all. It is the algorithmic operation through which the space of reasons is pre-determined: the arena in which what is thinkable and contestable as a demand for justification is shaped prior to any act of participation. While architecturally linked to content pushing, the closing-off mode forecloses the conditions under which content could be received as a claim requiring response. Zuboff describes how this mode operates: the extraction of behavioural data without meaningful consent and the proprietary protection of the governing logic that makes it unchallengeable (Zuboff, 2019, p. 178). In this respect, Zuboff captures how the erosion occurs: the mechanics of closure. Krouglov shows what those mechanics produce. The closing-off mode estranges the individual from meaningful participation, transforming agency into a subjectivity built around platform metrics rather than democratic participation (Krouglov, 2025, p. 196). Algorithmic alienation is the process through which platform architecture turns participation into measurable data points; political engagement becomes interaction dictated by metrics. The consequence is a restructuring of democratic subjectivity before the individual arrives at any act of participation. This is what Krouglov terms the “metamorphosis of political subjectivity” (Krouglov, 2025, p. 199).

Focusing on the content-pushing mode alone does not offer a holistic account of algorithmic harm because it does not reach this prior level. Alnemr identifies that limitation, arguing that deliberative democracy’s concerns about algorithmic harms remain incomplete without addressing the structural conditions under which citizens’ capacity to contest those harms is itself foreclosed (Alnemr, 2025, p. 2). The closing-off mode produces Dean’s registration effect, the subjective sense of participation without its democratic substance (Dean, 2005), and explains why indices can measure decline while the phenomenal apparatus of participation remains intact. It is also the mode that the Commission’s proceedings and the KGM verdict cannot reach. The content-pushing mode is addressable, however imperfectly and reactively; those interventions bear witness to that. The closing-off mode is not addressable in the same way, not because it is technically inaccessible, but because it is structurally protected by the proprietary exemption at every level of its operation. This is the noumenal problem, and it is one that regulatory interventions can neither reach nor resolve. The question it leaves is not whether platform architectures can be perfected, but whether they can be held to a minimum justificatory standard at all.

Given the foregoing, can the issue of noumenal erosion be resolved? Not in any thick sense. A fully transparent, fully co-authored, fully deliberative platform architecture would be one in which every user stands as an equal normative author of its governing logic. Such perfection is not just unachievable but conceptually unknowable: it offers no standard by which its own attainment could be confirmed. The appropriate standard must therefore be thinner, deliberately so. As Williams first established, thick ethical concepts carry rich world-guided content while thin concepts operate at higher levels of generality (Williams, 2006, pp. 140–142). Thick standards demand justification of every design choice and outcome as a comprehensive ideal. Such deliberative completeness is neither technically deliverable by platforms nor collectively authorable by citizens. Thin standards, by contrast, name only the structural preconditions without which justificatory agency cannot function at all. Whether everyone is then treated fairly is the subject of a different discussion.

In his exploration of epistemic actorhood, Risse identifies the structural condition required, pointing to the agency rights that digital lifeworlds demand: democracies committed to human rights have failed to evolve their governance systems at the pace of surveillance capitalism, turning technological adaptation into a condition of the survival of those commitments themselves (Risse, 2021, p. 355). The failure is not just institutional but normative. Human rights frameworks, as they stand, cannot protect epistemic actorhood in digital lifeworlds without the upgrade Risse demands.1 Alnemr arrives at a parallel diagnosis from a deliberative democracy perspective: existing accounts of algorithmic harm remain incomplete because they do not address the structural conditions under which citizens’ capacity to contest those harms is itself foreclosed (Alnemr, 2025, p. 2). This paper accepts both diagnoses while pressing further: institutional upgrade alone proves insufficient. What is required is a justificatory standard for platform architecture itself. That standard can be articulated as three minimum conditions that precede any regulatory intervention.

The first is architectural transparency. It functions at the epistemic level, governing what we know and do not know about how a platform works. This demands more than the episodic explainability of current regulatory frameworks. Users must be able to identify the governing logic of platform systems well enough to contest their justificatory structure. Without this, the space of reasons remains occupied by proprietary code rather than opened to democratic claim-making.

The second is justificatory addressability. It functions at the relational level, establishing a relationship of accountability between the individual and the platform. Platforms must provide a real addressee capable of receiving and responding to justificatory claims, not through compliance interfaces or regulatory proxies, but through a structure in which citizens can address those who exercise architectural power over their participation. Absent this, the citizen remains a data point rather than a justifying agent.2

The third is minimal co-authorship. It functions at the structural level, requiring the platform to enable review and change directed by its users. Users must possess some democratic role in constituting the basic rules that organise participation itself, moving beyond reactive consultation to structural involvement in platform governance. This is not collective design of every algorithm, but the right to participate in determining the conditions under which participation retains democratic substance.

Taken together, these conditions restore the three elements Weber regarded as essential to legitimate rational-legal authority: intelligibility, an accountable addressee and publicly constituted rules. Without them, we are left with the digital cage: rule without justification (Weber, 1905/2001, p. 123).

Noumenal erosion constitutes an existential threat to self-rule because it operates below the threshold at which existing democratic tools can intervene. Regulatory attempts to establish accountability, such as the KGM v. Meta verdict, are vital but ultimately limited: they address phenomenal harms such as addiction and disinformation without reaching the proprietary closure that determines what counts as participation in the first place.

This paper has traced the structural paradox of contemporary platform democracy: participation reaches phenomenal saturation while democratic legitimacy erodes beneath it. Section 1 established the noumenal/phenomenal frame, revealing platform architectures as digital cages that seal off the space of reasons from justificatory claim-making. Section 2 demonstrated why backsliding indices and regulatory interventions remain blind to this prior architectural level — measuring outputs while Mode 2 silently reorganises the conditions of democratic subjectivity itself. Section 3 diagnosed the two-mode operation and proposed three minimum conditions for platform legitimacy: architectural transparency, justificatory addressability and minimal co-authorship.

These conditions expose the category error in conventional platform governance. Content moderation, algorithmic transparency duties and liability regimes address phenomenal harms (filter bubbles, disinformation, addiction) but cannot reach the proprietary closure that determines what counts as participation at all. The European Commission proceedings and KGM v. Meta bear witness to regulatory reach, but also to its structural limit: even the most forceful institutional proxies cannot reconstruct what Mode 2 has foreclosed.

The democratic stakes are existential. Participation is not one democratic good among others, but the prior condition of legitimacy itself. Citizens exercise self-government only insofar as they appear to one another as justifying agents capable of contesting the rules that organise their common life. When platform architectures erode this capacity below justificatory visibility, transforming agency into metrics and claims into data points, democracy hollows from within.

This is not a manageable governance defect, but an emergency for self-rule. The tools of democratic accountability presuppose a space of reasons that platforms have already claimed as proprietary enclosure. What began as engineering methodology (“move fast and break things”) has become a structural assault on the foundations of collective agency. The question is no longer how platforms should be governed, but whether their architectures can sustain the justificatory preconditions of democracy at all.

Notes

1 Risse’s demands are more specific than the governance failure claim. His framework of epistemic actorhood identifies four roles in which individuals require strengthened rights protection in digital lifeworlds (as individual knowers, collective knowers, individual knowns and collective knowns) and develops the protections each requires (Risse, 2021, pp. 370–372). Significantly, Risse argues that the exercise of infopower can only be legitimate if rights are in place that generate possibilities of participation in the design of the data episteme itself (Risse, 2021, p. 371), an approach structurally close to the minimal co-authorship condition developed here. Relatedly, his demand for broadly shared control over collected data (Risse, 2021, p. 372) addresses the same structural absence as justificatory addressability, from a data rights rather than a justificatory standpoint.

2 The active dismantling of accountability structures makes the absence of justificatory addressability not merely a structural condition but an accelerating one. X’s dissolution of its Trust and Safety Council in December 2022 (an advisory body of approximately one hundred civil, human rights and other organisations formed to address hate speech, child exploitation and platform safety) was the first major exercise of this logic, the company informing members by email less than an hour before a scheduled meeting that the council was no longer the appropriate structure for external input (NPR, 2022). Meta’s January 2025 announcement ending its eight-year third-party fact-checking programme (framed by Zuckerberg as a return to free expression following a “cultural tipping point” signalled by the Trump election) removed the closest institutional approximation to an external accountability mechanism its platforms had operated (Meta, 2025). The EC’s preliminary finding in October 2025 that TikTok and Meta had obstructed researcher access to algorithmic data demonstrates that the same proprietary exemption is being asserted not only against users but against the most powerful regulatory actor yet to test it (European Commission, 2025). All three decisions were exercises of the proprietary exemption in its most direct form: the unilateral redesign or concealment of the architecture of participation without obligation to account for it to those it governs. That these withdrawals were presented as expansions of free speech — or, in the EC case, as legitimate intellectual property protection — is itself a symptom of the noumenal problem: the governing logic of participation is reasserted as a liberty or property claim precisely at the moment when the structural conditions for contesting it are removed.

References

  1. Alnemr, N. (2025) ‘Deliberative democracy in an algorithmic society: harms, contestations and deliberative capacity in the digital public sphere’, Democratization, published online 2 July 2025.
  2. BBC News (2026) ‘Meta and YouTube ordered to pay $6 million in landmark social media addiction trial’, BBC News, 25 March 2026. Available at: https://www.bbc.co.uk/news/articles/c747x7gz249o (Accessed: 25 March 2026).
  3. Dean, J. (2005) ‘Communicative capitalism: circulation and the foreclosure of politics’, Cultural Politics, 1(1), pp. 51–74.
  4. European Parliament Think Tank (2024) Enforcing the Digital Services Act: State of Play. Brussels: European Parliament. Available at: https://epthinktank.eu/2024/11/21/enforcing-the-digital-services-act-state-of-play/ (Accessed: 8 April 2026).
  5. European Commission (2024) Commission Opens Formal Proceedings Against TikTok Under the Digital Services Act. Brussels: European Commission, 19 February 2024. Available at: https://ec.europa.eu/commission/presscorner/detail/en/ip_24_926 (Accessed: 12 April 2026).
  6. European Commission (2025) Commission Preliminarily Finds TikTok and Meta in Breach of Their Transparency Obligations Under the Digital Services Act. Brussels: European Commission, 24 October 2025. Available at: https://digital-strategy.ec.europa.eu/en/news/commission-preliminarily-finds-tiktok-and-meta-breach-their-transparency-obligations-under-digital (Accessed: 12 April 2026).
  7. Forst, R. (2017) Normativity and Power: Analysing Contexts of Justice. Oxford: Oxford University Press.
  8. Krouglov, A.Y. (2025) ‘Alienation 2.0: the algorithmic commodification of agency in platform capitalism’, Journal of Multicultural Discourses, 19(3), pp. 196–212.
  9. Meta (2025) ‘More speech and fewer mistakes’, About Meta, 7 January 2025. Available at: https://about.fb.com/news/2025/01/meta-more-speech-fewer-mistakes/ (Accessed: 14 April 2026).
  10. NPR (2022) ‘Elon Musk’s Twitter: Trust and Safety Council is dissolved’, NPR, 12 December 2022. Available at: https://www.npr.org/2022/12/12/1142399312/twitter-trust-and-safety-council-elon-musk (Accessed: 14 April 2026).
  11. Risse, M. (2021) ‘The Fourth Generation of Human Rights: Epistemic Rights in Digital Lifeworlds’, Moral Philosophy and Politics, 8(2), pp. 351–378.
  12. Weber, M. (1905/2001) The Protestant Ethic and the Spirit of Capitalism. Translated by T. Parsons. London: Routledge Classics.
  13. Williams, B. (2006) Ethics and the Limits of Philosophy. With a commentary by A.W. Moore. Abingdon: Routledge.
  14. Zuboff, S. (2019) The Age of Surveillance Capitalism: The Fight for a Human Future at the Frontier of Power. London: Profile Books.