The debate over AI in governance has gained urgency in the wake of recent reports about AI-driven content moderation tools being used in several countries to monitor and restrict online speech. For example, in late 2025, a government in Southeast Asia faced criticism after an AI system automatically flagged and removed social media posts discussing local environmental protests, citing “policy violations.” The incident illustrates the risks of delegating decision-making to AI: such systems may impose external values and biases on diverse societies, raising serious questions about the ethical and legal legitimacy of AI in governance.

It is neither ethical nor legally justified to use artificial intelligence in governance and decision-making. AI is a human-made system trained on data selected by its developers and shaped by their design choices. Since most AI technologies are developed in specific countries with particular ideological, cultural, and political assumptions, their decisions are likely to reflect those biases. As a result, AI-driven governance risks imposing one set of values on diverse societies.
For instance, if an AI system is trained in Western countries where nature is widely viewed as existing primarily to serve human needs, the environmental policies it generates may conflict with Eastern philosophies that regard nature as sacred and deserving of reverence. This shows that AI decisions can be culturally biased and insensitive to local belief systems. Governance, however, must reflect the values and priorities of the people it governs. Delegating such authority to AI is therefore ethically problematic.
Furthermore, legal decision-making involves balancing competing rights and interests. For example, courts often have to weigh the right to free speech against the right to dignity or protection from harm. These decisions require a deep understanding of context, social consequences, and the urgency of the situation. They also require discussions, debates, and deliberation so that multiple perspectives can be considered. AI, limited to the data on which it has been trained, cannot truly engage in such reasoning and risks reproducing the biases embedded in its design.
Democratic governance demands human involvement. Decisions should be taken by expert committees, stakeholder groups, and judicial panels that can interpret social realities and propose context-specific solutions. Such processes allow society to participate in decision-making, which increases public trust and ensures better implementation of laws and policies.
Since governments are accountable to the people, they have an incentive to reflect public will rather than impose external ideologies. Human-led decision-making prevents domination by the worldview of any one country or developer group. In contrast, AI-driven governance risks undermining democratic legitimacy by ignoring the ideological diversity of the population.
In conclusion, AI can create bias in governance and decision-making because it reflects the values of its developers rather than those of the society where it is applied. Therefore, it is neither ethical nor legally justified to allow AI to govern. Decisions made by humans—through experts, judges, and stakeholders—are more legitimate, inclusive, and implementable, ensuring greater public acceptance and trust in governance.