By Cliff Potts, CSO and Editor-in-Chief of WPS News
Baybay City, Leyte, Philippines — April 26, 2026
Reporting
Under the Digital Services Act (DSA), very large online platforms are required to conduct regular assessments of systemic risks, including the dissemination of illegal content, negative effects on civic discourse, and impacts on fundamental rights. YouTube has stated that it meets these obligations through internal evaluations and mitigation plans submitted to EU authorities.
What remains unavailable is the evidence needed to independently test those claims.
Public disclosures summarize conclusions but not methods. They describe risks in general terms without detailing assumptions, metrics, or counterfactuals. External researchers, journalists, and civil-society groups are asked to trust that assessments are rigorous while being denied access to the data that would allow verification.
In effect, YouTube reports that it has assessed risk without showing how.
Analysis
A risk assessment that cannot be tested is a corporate assertion, not an accountability mechanism.
Meaningful oversight requires more than assurances. It requires visibility into the indicators used, the thresholds applied, and the trade-offs accepted. Without this information, regulators cannot determine whether mitigation measures address root causes or merely manage appearances.
This opacity reflects incentives shaped at the parent level. Google has long resisted external auditing of its core systems, citing security and proprietary concerns. While some confidentiality is legitimate, blanket opacity prevents independent scrutiny of claims that directly affect public life.
The result is a one-sided process: platforms define risk, evaluate themselves, and report outcomes in summary form. EU oversight is left to review conclusions rather than interrogate evidence.
What Remains Unclear
YouTube does not disclose the specific metrics used to assess systemic risk within EU member states, nor how those metrics vary by language, topic, or election cycle. It also does not publish the results of stress tests showing how changes to recommendations or monetization would alter risk profiles.
Without access to these details, neither regulators nor the public can judge whether risk mitigation is proportionate or effective.
Why This Matters
The DSA was designed to move beyond trust-based governance. Its purpose is to replace assurances with evidence. When platforms provide only summaries, that purpose is undermined.
If risk assessments remain shielded from independent evaluation, enforcement becomes reactive rather than preventive. Harm is identified after it spreads, not before it is amplified.
For EU regulators, the question is straightforward: can a system built on self-assessment deliver public accountability? Until YouTube’s risk evaluations are open to meaningful testing, that question remains unanswered.