A critical examination of algorithmic governance in a complex and federal regulatory system
Every major technological breakthrough arrives with a tide of enthusiasm that tends to outrun its practical utility. Conferences, consulting firms, and technology advocates amplify the excitement until the new tool appears indispensable across every domain. Artificial Intelligence is currently at the crest of such a wave. Even institutions that ordinarily exercise caution in regulatory matters are swept up in the momentum. The Forum of Indian Regulators (FOIR) has chosen “AI in Regulation” as the theme of its annual conference—a telling illustration of how swiftly this discourse has entered India’s regulatory mainstream.
Artificial Intelligence is undeniably transforming sectors as varied as healthcare diagnostics, financial trading, logistics, and agriculture. It is therefore natural that policymakers are exploring whether similar technologies could strengthen the governance of the electricity sector. On the surface, the proposition seems compelling: the power sector generates enormous volumes of operational and financial data, operates under detailed regulatory frameworks, and involves layered decision-making that appears well-suited to algorithmic analysis.
A closer examination, however, reveals that assigning meaningful regulatory authority to AI systems in India’s electricity sector would be both premature and potentially damaging. The question is not whether AI has a role in the power sector—it clearly does. The real question is whether AI should influence or determine regulatory decisions that shape tariffs, investment incentives, consumer welfare, and system reliability. On that question, the answer must be clear: AI should serve as an instrument of analysis, not a substitute for regulatory judgement.
1. India’s Power Sector Is Not a Clean Data Environment
AI systems perform best when trained on structured, reliable, and consistent data. India’s power sector presents a markedly different landscape. Data across utilities—particularly in the distribution segment—remains fragmented, inconsistently reported, and frequently unreliable. Many distribution companies continue to operate with incomplete metering infrastructure and weak digital systems.
Aggregate Technical and Commercial (AT&C) losses in several states remain elevated, and the underlying figures are often estimated rather than precisely measured. Regulators routinely encounter submissions from utilities that are internally inconsistent or strategically structured to support tariff petitions. When foundational data is itself uncertain, AI models trained on such inputs risk producing outputs that are not merely inaccurate but confidently inaccurate—presenting false precision where genuine uncertainty exists.
Real-World Illustration — Smart Meter Deployment Gap: India’s Revamped Distribution Sector Scheme (RDSS) targets the installation of 250 million smart prepaid meters by 2025–2026. As of 2025, only around 25% had been installed, and deployment remains confined to selected categories of consumers. This means that for a large majority of distribution feeders, consumption data continues to be manually recorded, subject to human error, and periodically estimated. Any AI model trained on this data would be building its analytical foundation on figures that are partly fictitious. A tariff recommendation generated from such a model carries not the rigour of computation but the risk of systematised error at scale.
In a regulatory context where decisions affect billions of rupees of investment and the welfare of millions of consumers, such errors can carry serious consequences. The absence of reliable real-time data is not a temporary gap that deployment will soon close—it is a structural feature of the current distribution landscape that will persist for years.
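The “confidently inaccurate” failure mode described above can be sketched numerically. The figures below are entirely hypothetical: a feeder whose true losses are high and volatile, and a recorded series that is both systematically understated and artificially smooth because most consumption is assessed rather than metered.

```python
import random
import statistics

random.seed(7)

# Hypothetical sketch only -- not actual utility data.
# true_losses: actual monthly AT&C losses (%) on a feeder; noisy reality.
# recorded: what gets reported when most consumption is assessed rather
# than metered -- systematically low AND artificially smooth.
true_losses = [random.gauss(22.0, 3.0) for _ in range(120)]
recorded = [random.gauss(16.0, 0.5) for _ in range(120)]

true_mean, true_sd = statistics.mean(true_losses), statistics.stdev(true_losses)
rec_mean, rec_sd = statistics.mean(recorded), statistics.stdev(recorded)

# A model trained on `recorded` inherits both the bias and the false
# precision: its estimate sits far from reality, yet its spread is tiny,
# so any confidence interval it reports will be narrow -- and wrong.
print(f"true    : mean={true_mean:.1f}%  sd={true_sd:.1f}")
print(f"recorded: mean={rec_mean:.1f}%  sd={rec_sd:.1f}")
```

The point is structural: the recorded series gives a model no internal signal that anything is amiss, which is precisely why garbage-quality inputs produce outputs that look rigorous.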
2. There Is No Legal Framework Authorising AI in Regulatory Decision-Making
Electricity regulation in India is not a purely technical exercise. It is embedded in political economy and public policy. Tariff decisions determine how costs are distributed between industrial consumers, households, and the agricultural sector. Cross-subsidy structures and tariff design involve deliberate distributional choices that extend well beyond economic optimisation.
Regulatory commissions established under the Electricity Act, 2003 function as quasi-judicial bodies whose orders must reflect deliberative reasoning, public consultation, and legal accountability. Their decisions are subject to challenge before the Appellate Tribunal for Electricity (APTEL) and, ultimately, the Supreme Court. The Act contains no provision—express or implied—permitting regulatory decisions to be generated or determined by algorithmic systems.
Real-World Illustration — Absence of Legal Provision for AI in Regulation: The Electricity Act, 2003 vests tariff-setting authority in the Central Electricity Regulatory Commission (CERC) and State Electricity Regulatory Commissions (SERCs). Section 61 requires regulators to be “guided by” specified principles including safeguarding consumer interest and recovery of costs. Section 64 mandates that tariff petitions be publicly advertised and heard. Nowhere does the Act contemplate the delegation of this function—even partially—to an automated system. Similarly, the CERC and SERC (Conduct of Business) Regulations impose procedural obligations of reasoning, hearing, and written orders that no algorithmic process can legally satisfy. India has also not enacted a sectoral AI governance law or issued binding regulations on AI use in public administration that would create a framework for such delegation. Deploying AI as a decision-making tool in this vacuum would not be regulatory innovation; it would be regulatory illegality.
An AI system cannot participate in this constitutional architecture. Algorithms cannot engage meaningfully with stakeholder submissions, respond to public hearings, or justify conclusions in the language of law and public policy. Displacing deliberative judgement with algorithmic outputs would corrode the chain of democratic accountability upon which regulatory legitimacy depends.
3. Mandatory Public Participation Cannot Be Replaced by Algorithmic Processes
The Electricity Act, 2003 and the regulations framed under it impose a non-negotiable procedural requirement: public hearings must be conducted before tariff orders are issued. This is not a formality. It is the mechanism through which affected consumers, civil society organisations, industrial associations, and state governments can place evidence and arguments on the regulatory record. It is also the basis on which regulatory orders acquire democratic legitimacy and survive appellate scrutiny.
Real-World Illustration — Mandatory Public Hearing in Tariff Determination: In the landmark case of ‘BSES Rajdhani Power Ltd. v. DERC’, the Delhi High Court affirmed that failure to hold an adequate public hearing vitiates a tariff order. Similarly, APTEL has in multiple cases remanded tariff orders to SERCs where stakeholder submissions were inadequately considered in the reasoning of the order. In Maharashtra, public hearings before the Maharashtra Electricity Regulatory Commission (MERC) on distribution tariff petitions routinely extend over several sittings, with consumer groups, industrial chambers, and agricultural lobbies placing competing demands before the Commission. The MERC’s final order is required to address each material objection by name and reasoning. No AI model—however sophisticated—can conduct a public hearing, receive oral depositions, weigh the credibility of witnesses, or respond to a legal argument placed on record.
Furthermore, the process of public consultation itself shapes the quality of regulatory outcomes. Regulators often learn from hearings—discovering data discrepancies, receiving ground-level information about supply quality, and understanding the distributional consequences of proposed tariff structures. This iterative, dialogic process of governance cannot be compressed into a training dataset.
4. AI Models Are Trained in Non-Contextual Environments
Electricity is a concurrent subject under India’s constitutional framework, and each state faces regulatory challenges shaped by its specific geography, economic structure, political economy, and resource endowment. The renewable energy curtailment concerns of Rajasthan bear little resemblance to the legacy thermal capacity challenges of Tamil Nadu. The financial distress confronting certain state distribution companies is structurally distinct from the relatively stable systems in states such as Gujarat or Himachal Pradesh.
AI models—including large language models increasingly being explored for regulatory analytics—function by identifying patterns in data on which they are trained and generalising those patterns across new contexts. This is a powerful capability that also generates systematic misjudgements when applied to highly heterogeneous regulatory environments with deep local specificity.
Real-World Illustration — LLMs Trained in Non-Contextual Environments: Leading AI language models such as GPT-5, Claude, Microsoft Copilot, and Google’s Gemini are predominantly trained on English-language internet data, academic corpora, and documents from advanced economies. They have no meaningful training exposure to Hindi or regional-language tariff petitions, to the Rajasthan RERC’s approach to renewable curtailment compensation, or to the specific financial restructuring history of TANGEDCO in Tamil Nadu. When tested with Indian electricity regulatory documents, such models frequently conflate state-specific regulations with central regulations, misread the normative framework of the National Tariff Policy, and produce analysis that would be recognised as incorrect by any experienced regulatory practitioner. The contextual gap is not a software bug to be fixed—it reflects the fundamental reality that India’s electricity regulatory system is the product of decades of state-specific legal evolution that no general-purpose AI model presently captures.
Moreover, the power sector is undergoing rapid transformation. Renewable energy penetration is rising, electricity markets are evolving through mechanisms such as the Real Time Market and the Green Day Ahead Market, and new participants—storage operators, electric vehicles, and distributed generators—are entering the system. Training data from even two or three years ago may no longer adequately reflect current sector dynamics. AI systems that cannot continuously and accurately absorb this change risk anchoring regulatory analysis to an outdated picture of the sector.
5. Tariff Determination Involves Value Choices, Not Just Computation
At the centre of power sector regulation lies tariff determination. Regulators must evaluate the prudence of utility costs, assess capital expenditures, determine reasonable returns on investment, and allocate those costs across different consumer categories. These are not exercises in optimisation. They require contextual judgement: whether certain expenditures are justified given institutional realities, whether inefficiencies should be passed on to consumers or absorbed, and how accumulated regulatory assets should be managed within fiscal constraints.
A persistent and unresolved debate in Indian electricity regulation illustrates this point with clarity: the tension between normative tariffs and cost-reflective tariffs.
Real-World Illustration — Normative Tariff vs. Cost-Reflective Tariff Debate: Under a cost-reflective tariff framework, the tariff charged to each consumer category should approximate the actual cost of supplying power to that category. This approach is economically efficient and is broadly endorsed by the National Tariff Policy. However, in practice, agricultural consumers across most Indian states receive power either free of charge or at heavily subsidised rates—far below the cost of supply. In Telangana, Andhra Pradesh, Punjab, and Tamil Nadu, agricultural tariffs range between zero and one rupee per unit, while the average cost of supply frequently exceeds six to seven rupees per unit. The resulting revenue gap is covered through cross-subsidies from industrial and commercial consumers, or through state government subsidy payments that are often delayed or partial. An AI optimisation model trained on efficiency principles would recommend eliminating these subsidies. But such a recommendation would be politically untenable, socially disruptive, and inconsistent with constitutional obligations toward vulnerable populations. The decision of how far to move toward cost-reflectivity, how quickly, and with what protective measures is a normative and political judgement—not a computational one. No algorithm can or should make that call.
These decisions demand the balancing of precedent with fairness and the application of statutory principles within real-world institutional settings. They require judgement exercised by accountable decision-makers who can explain their reasoning and bear responsibility for the outcome.
6. The Doctrine of Reasoned Orders Limits Algorithmic Decision-Making
Law requires that regulatory decisions be accompanied by transparent, traceable reasoning. A regulatory order must articulate how evidence was evaluated and why particular conclusions were reached. This doctrine of “speaking orders” exists precisely to enable affected parties to challenge decisions through established appellate mechanisms, and for appellate tribunals to assess whether the regulator exercised its discretion lawfully.
Complex machine learning systems frequently operate as black boxes. While explainable AI (XAI) techniques can indicate which input variables most influenced a given output, they cannot replicate the structured legal reasoning that an electricity regulatory order demands. A decision justified by reference to algorithmic output alone would not survive judicial scrutiny.
Real-World Illustration — APTEL’s Emphasis on Reasoned Orders: The Appellate Tribunal for Electricity has consistently held that regulatory orders must contain adequate reasoning to enable meaningful appellate review. In ‘Adani Power Ltd. v. CERC’ and multiple other proceedings, APTEL has set aside CERC and SERC orders that did not adequately explain how specific regulatory determinations were reached—on issues such as the treatment of uncontrollable cost variations, the determination of normative operating parameters, and the disallowance of capital expenditure claims. These cases establish that the standard of reasoning required is high and legally enforceable. If a regulator were to issue an order stating that a tariff was determined “based on AI model outputs,” such an order would almost certainly be challenged and remanded for want of adequate reasoning. No AI system currently available can produce the kind of structured, evidence-linked, principle-applied reasoning that APTEL demands.
Until AI systems can produce explanations equivalent in quality and legal sufficiency to human regulatory reasoning—a capability that remains far from realisation—their direct role in binding regulatory decisions will remain legally untenable.
7. Equity Objectives Cannot Be Reduced to Efficiency Metrics
Indian electricity regulation carries explicit social obligations. Lifeline tariffs for low-income households, subsidised supply for agriculture, and rural electrification policies reflect socio-economic commitments embedded in national policy and, in certain respects, in constitutional directive principles. AI optimisation frameworks are typically designed around efficiency indicators: cost recovery, loss minimisation, and market equilibrium.
When equity considerations are incorporated into such frameworks, they appear as numerical constraints within the optimisation architecture—a representation that inevitably flattens their substance and obscures the value judgements involved in setting them.
Real-World Illustration — BPL Lifeline Tariff and the Limits of Algorithmic Equity: Most SERCs provide a lifeline tariff slab—typically the first 30 to 50 units per month—at deeply subsidised or zero rates for Below Poverty Line (BPL) consumers. The determination of this slab involves qualitative deliberation: What constitutes a dignified minimum level of electricity access? How does consumption behaviour vary across geographies? What is the fiscal capacity of the state to fund the subsidy? These questions were debated extensively by commissions such as the Karnataka Electricity Regulatory Commission (KERC) and the Maharashtra Electricity Regulatory Commission (MERC) when designing their lifeline slabs. Public consultations revealed that BPL households in urban slums had very different consumption profiles from those in rural areas, leading to differentiated policy responses. An AI model optimising for cost recovery would have no inherent mechanism to surface these distinctions—unless they were pre-programmed into it by human designers making exactly the normative choices that the AI is supposed to replace.
Human regulators engage with equity in a qualitative and iterative manner—through public consultations, sensitivity to field realities, and awareness of social context. Reducing such considerations to parameters in a mathematical model risks gradually eroding the social objectives that electricity regulation is meant to protect.
8. Cybersecurity and Energy Sovereignty Concerns
The electricity sector is critical national infrastructure and a persistent target for cyber threats. Introducing AI into regulatory processes would expand the digital attack surface, likely requiring large-scale data infrastructure and, in some cases, reliance on external technology platforms whose data governance and security standards may not align with India’s national interests.
The implications for data security and energy sovereignty are considerable. Regulatory data encompasses sensitive information on grid operations, generation assets, and the financial positions of utilities. Manipulation of AI models or of the data on which they rely could directly influence regulatory outcomes—creating a new vector for interference in India’s energy governance.
Real-World Illustration — RedEcho and the Vulnerability of Power Sector Infrastructure: In 2021, cybersecurity firm Recorded Future identified a campaign—attributed to a China-linked threat actor designated ‘RedEcho’—that had targeted at least ten Indian power sector entities, including regional load dispatch centres operated by Power System Operation Corporation (POSOCO) and several state load dispatch centres. The incident demonstrated that adversaries are specifically interested in infiltrating the operational and data systems of India’s electricity infrastructure. If AI-assisted regulatory processes were to depend on centralised data repositories and cloud-based analytical platforms—as they necessarily would—these systems would represent high-value targets. A compromised AI model could recommend tariff structures or investment parameters that systematically disadvantage certain utilities or technologies, without the manipulation being immediately detectable. The risk is not hypothetical; it is a demonstrated feature of the threat landscape facing India’s power sector.
9. Most SERCs Lack the Resources and Institutional Capacity to Govern AI Systems
Even if technological and legal barriers were resolved, a practical challenge persists: institutional capacity. Many State Electricity Regulatory Commissions operate with limited technical staff, constrained budgets, and high rates of staff turnover. Implementing AI systems requires specialised expertise not only for development but for ongoing auditing, model validation, and failure detection.
Real-World Illustration — SERC Capacity Constraints: A survey of State Electricity Regulatory Commissions reveals significant resource disparities. Several smaller commissions—including those of states such as Meghalaya, Manipur, Mizoram, and Goa—operate with fewer than ten technical staff members and annual administrative budgets that cannot sustain the infrastructure required for any serious AI implementation. Even larger SERCs frequently cite the difficulty of retaining qualified engineers and economists, given compensation structures that cannot compete with the private sector or central regulatory bodies. The Central Electricity Regulatory Commission (CERC), with far greater resources, has itself only recently moved toward digital case management systems. In this environment, deploying AI regulatory tools would not represent capacity enhancement—it would represent capacity substitution, transferring analytical authority to systems that staff cannot interrogate, audit, or correct. The result would be regulatory decisions driven by tools that no one within the institution fully understands.
Deploying complex algorithmic tools without adequate institutional capacity to govern them would not constitute modernisation. It would instead risk transferring responsibility from accountable regulators to opaque systems whose failures may go undetected until significant harm has occurred.
10. The Risk of Technical Capture of Regulation
A less-discussed but serious risk is that of “technical capture”: a situation in which regulatory outcomes are effectively determined by the design choices, assumptions, and datasets embedded in an AI system, with those embedded choices made by technology developers or platform vendors rather than by democratically accountable regulators. This is a new and more insidious form of the classical regulatory capture problem.
Real-World Illustration — Technical Capture Through Algorithmic Design: The experience of algorithmic tools in other regulatory domains is instructive. In financial regulation, automated credit-scoring and risk-assessment tools have been found to embed historical biases—for instance, systematically undervaluing creditworthiness in lower-income postal codes—because they were trained on datasets that reflected past discriminatory lending. The regulators nominally overseeing these tools often lacked the technical staff to audit them independently. In India’s electricity sector, the analogous risk is significant. If a technology company develops an AI tariff-recommendation tool trained primarily on data from privatised, profit-oriented utilities in developed economies, its embedded assumptions about reasonable returns, acceptable losses, and optimal consumer categories will not reflect India’s regulatory philosophy or statutory mandates. Regulators who adopt such tools without deep technical scrutiny may find that their orders are effectively authored by the tool’s designer—a form of regulatory outsourcing that is invisible to consumers, legislatures, and courts. The National Electricity Policy and the Tariff Policy express specific normative commitments; an AI model that silently overrides them through its architecture is a governance failure, not a governance improvement.
Compliance costs compound this risk. Establishing data reporting standards, auditing algorithmic processes, and maintaining regulatory transparency would require substantial financial and institutional resources. For a sector in which many utilities already face fiscal stress, adding such complexity without commensurate and demonstrable benefits risks further straining an ecosystem already under pressure.
11. Where AI Can Legitimately Add Value
Acknowledging these limitations does not amount to rejecting AI from the sector. On the contrary, AI can play a valuable and well-defined supportive role in operational and analytical functions where it enhances analytical capability without displacing human judgement over consequential decisions.
These include: renewable generation forecasting and grid balancing support; demand prediction for capacity planning; anomaly detection in electricity market bidding behaviour for market surveillance; automated processing of consumer complaint data to identify systemic distribution failures; and predictive maintenance analytics for transmission and distribution infrastructure. In each of these applications, AI serves as a tool that informs human decision-making rather than one that replaces it. This is the appropriate and constitutionally sound division of labour.
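One of the supportive roles listed above, anomaly detection for market surveillance, can be sketched in a few lines. The bid prices below are hypothetical; a flagged bid goes to a human analyst for review, which is precisely the human-in-the-loop division of labour the text endorses.

```python
import statistics

# Hypothetical day-ahead bid prices (Rs/unit) from one market participant.
bids = [4.1, 4.3, 3.9, 4.2, 4.0, 4.4, 9.8, 4.1, 4.2, 4.0]

# Median/MAD is used instead of mean/stdev because a large outlier
# inflates the standard deviation and can mask its own detection.
med = statistics.median(bids)
mad = statistics.median(abs(b - med) for b in bids)

# Flag bids deviating from the median by more than 5 x MAD
# for human review -- the tool surfaces anomalies, it decides nothing.
flagged = [(i, b) for i, b in enumerate(bids) if abs(b - med) > 5 * mad]
print(f"median={med:.2f}  MAD={mad:.2f}  flagged={flagged}")
```

The output identifies the Rs 9.8 bid as the sole outlier. Whether that bid reflects market power, a genuine scarcity signal, or a data error is a determination the surveillance staff and, ultimately, the commission must make.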
Conclusion
The case for restraint in deploying AI as a regulatory decision-making instrument in India’s power sector does not rest on technophobia. It rests on a clear-eyed assessment of the specific conditions—legal, institutional, socio-political, and data-related—that govern electricity regulation in India. The evidence from each of the rationales examined above points in the same direction.
Metering infrastructure remains inadequate. There is no legal framework authorising AI in binding regulatory decisions. Public hearings are statutorily mandated and cannot be algorithmically replicated. AI models lack contextual grounding in India’s federal regulatory diversity. The normative debate between cost-reflective and equity-oriented tariffs cannot be resolved by optimisation. APTEL demands legally reasoned orders that no current AI can produce. Social equity objectives require qualitative deliberation. Cybersecurity threats to power sector infrastructure are real and documented. Most SERCs lack the capacity to govern AI systems. And the risk of technical capture is not theoretical—it is a demonstrated pattern in analogous regulatory domains globally.
The appropriate course is not to exclude AI from the regulatory process but to define its boundaries with discipline and clarity. AI should assist regulators in analysing information, modelling scenarios, identifying anomalies, and processing large volumes of data. The authority to make binding regulatory decisions—decisions that affect tariffs paid by 300 million households, investments made by utilities serving a billion consumers, and the financial viability of India’s energy transition—must remain firmly with accountable human institutions.
“In the governance of critical infrastructure, human judgement is not a limitation. It is the safeguard.”