Systemic Harm
Broad societal harms affecting institutions, democratic processes, or public trust that emerge from widespread AI deployment.
Systemic harm refers to the broad, society-level consequences that emerge from the widespread deployment of AI systems across critical institutions, infrastructure, and information ecosystems. Unlike harms that affect identifiable individuals, systemic harms are diffuse and cumulative: they erode democratic processes, degrade the epistemic commons, concentrate economic and political power, and undermine the institutional trust upon which civil society depends. These harms are often emergent properties of individually benign applications that, in aggregate, produce structural effects no single deployment intended.
Documented patterns of systemic harm are increasingly visible in the public record. AI-generated disinformation campaigns targeting elections have been identified in multiple countries, exploiting the ability of generative models to produce high-volume, localized, and linguistically diverse propaganda at minimal cost. The concentration of advanced AI capabilities in a small number of technology firms raises concerns about market power, regulatory capture, and the erosion of competitive markets. Widespread adoption of AI in hiring, lending, housing, and criminal justice creates the potential for correlated failures, in which shared training data, common model architectures, or identical vendor systems produce synchronized discriminatory outcomes across entire sectors. The displacement of human expertise by AI decision-making in domains such as journalism, legal analysis, and scientific peer review risks hollowing out the institutional knowledge that societies rely on for self-correction.
Addressing systemic harm requires governance interventions that operate at the structural level rather than at the level of individual systems. Antitrust and competition policy must adapt to the dynamics of AI markets, where data advantages and compute requirements create natural barriers to entry. Election integrity measures need to account for AI-generated content in political advertising, voter targeting, and information operations. Standards for AI supply-chain transparency can help identify single points of failure when critical sectors depend on common underlying models or datasets. International coordination is essential, as systemic AI harms routinely cross national boundaries and no single jurisdiction can address them in isolation. Ongoing investment in public AI literacy, independent auditing capacity, and civil society oversight mechanisms builds the societal resilience needed to detect and respond to systemic harms as they emerge.