
Ballots to Bots: What Happens to Democracy When Authority Is No Longer Human?

  • Jayati Tripathi
Source: Getty Images | Credit: Adnan Beci

In early 2025, Albania appointed an artificial intelligence system to its national cabinet. Framed as a measure to reduce corruption and enhance administrative effectiveness, the move marked a significant step in the application of AI within governance.


Across the globe, algorithmic tools are increasingly embedded in the machinery of the state. Singapore uses customisable, public officer-led chatbots. Dubai deploys AI-powered road management systems in accident-prone areas. The European Union is trialling automated border security. Across these cases, governments are using AI to simplify administration and reshape how decisions are made. In the United States, the trend was advanced further when Elon Musk, then the public face of the Department of Government Efficiency, proposed replacing large parts of the federal workforce with AI systems after overseeing mass layoffs.


As political authority is redistributed across human and non-human actors, questions of accountability, legitimacy, and democratic control come into sharper focus.


Albania’s Tryst with AI


The system, known as Diella, was first deployed as a virtual assistant on an e-government portal, designed to help citizens navigate administrative services. Prime Minister Edi Rama later elevated Diella to Minister for Public Procurement, telling the BBC that automation could render the process “100 per cent corruption free” by limiting human discretion in tender evaluations.


A month later, the initiative took on a more symbolic dimension. Government announcements claimed Diella was “pregnant” with eighty-three virtual assistants, each assigned to a member of parliament to assist with record-keeping and legislative preparation. The announcement attracted widespread attention, blurring the line between administrative reform and political spectacle.


Efficiency, Signalling, and Reform


Public procurement has long been identified as a site of entrenched corruption in Albania, and automation may reduce opportunities for discretionary abuse. Diella’s appointment also corresponds with Albania’s broader effort to present itself as a reform-oriented candidate for European Union membership. Judicial reform, transparency, and institutional credibility remain central to accession negotiations, and digital governance has increasingly become part of that signalling process. Shortly after Diella’s announcement, European Commission President Ursula von der Leyen described Albania as a reliable partner, citing its reform trajectory and its embrace of AI.


Yet the political appeal of non-human governance risks obscuring deeper questions of authority. In representative democracies, power is exercised by identifiable actors who can be sanctioned through elections, courts, or parliamentary oversight. Algorithmic systems complicate this chain of responsibility. Diella does not simply execute fixed rules; she filters information, prioritises inputs, and shapes the options presented to human decision-makers. When such systems influence outcomes, responsibility becomes diffuse.


Private Infrastructure, Public Authority


Although Albanian officials have not formally disclosed Diella’s technical architecture, reporting and procurement disclosures suggest it relies on large language models developed by OpenAI and cloud infrastructure provided by Microsoft Azure. If accurate, this places a core function of Albanian governance within systems designed, maintained, and governed by US-based private firms.


Albania’s experiment also sits within a wider international distribution of technological power. For EU candidate states, aligning with European digital standards can paradoxically deepen reliance on external technology providers rather than reduce it. Recent debates over tech sovereignty highlight the extent to which Europe’s digital infrastructure, from cloud services (two-thirds of the market) to social media platforms, remains dominated by US giants, a dependence that has become more politically fraught due to strained transatlantic relations. When governance systems are built and maintained by private actors beyond domestic democratic oversight, the values and assumptions embedded in those systems become politically consequential.


These risks are compounded by cybersecurity and data protection concerns. Government AI systems process and store vast volumes of sensitive legal and public data. Breaches, misuse, or external interference would not only pose technical challenges but also threaten state sovereignty and democratic trust.


Looking Forward


The integration of AI into governance carries genuine promise. Algorithmic systems can support climate modelling, improve risk forecasting, and strengthen aspects of electoral administration. At the same time, treating AI as a technocratic solution risks prioritising efficiency over accountability. When governance is framed as a problem to be optimised, deeper constitutional and democratic questions are often sidelined. Marc Rotenberg of the Center for AI and Digital Policy, speaking on the need for democratic values in AI, observes: “The technology and business applications are evolving rapidly, but so too are the governance models… it's not at all clear that companies left alone will prioritize those values or even that [companies] that do prioritize them will necessarily succeed”.


Diella illustrates this tension. Standardising procurement decisions may reduce opportunities for procedural corruption, but the democratic implications of its role remain unresolved. Much will depend on how legitimacy is maintained in a system where non-human actors increasingly shape authority, and on the extent to which private-sector interests are embedded in public decision-making.


It also remains unclear, in Albania and elsewhere, whether AI represents a substantive delegation of authority or a largely symbolic exercise in political branding. In practice, many government AI systems function as advisory tools that filter information and recommend outcomes rather than exercising autonomous power. This distinction matters. Authority is not eliminated but displaced, from elected officials to algorithms, and ultimately to those who design, maintain, and control them. Framing AI as objective or neutral obscures these transfers of power. Whether AI strengthens or undermines democratic governance will depend less on technical capability than on the political choices that structure its use, and those choices warrant sustained scrutiny.


Written by Jayati Tripathi

Edited by Sherkan Sultan, Aditya Gupta

