The AI playbook boards can’t ignore

How PM Modi’s MANAV Framework offers a practical playbook for corporate governance

By Rajesh Chhabara, Managing Director, CSRWorks International

A convincing AI-generated voice note arrives on a finance manager’s phone. It sounds exactly like the CEO. The message is urgent, plausible, and specific: authorise a payment now, we will tidy the paperwork later. Ten minutes later, the money is gone. By the time the company confirms it was a deepfake, the reputational damage is already spreading, and the board is asking a painful question: how did we not see this coming?

That question is about more than fraud. It is about confidence in leaders, in systems, in what is real. Credibility is quickly becoming the real currency of the AI era.

At the recently concluded India AI Impact Summit in New Delhi, Prime Minister Narendra Modi made a point that many executives and policymakers have been circling for months, but often fail to operationalise. He argued that AI must remain human-centric and offered a simple framework called MANAV, meaning “human” in Hindi, to anchor how societies and institutions should approach AI. The acronym is memorable, but what matters is its practical potential. It can be turned into a governance playbook for boards and senior leaders who are trying to harness AI without losing control of risk. Boards that wait for regulation or scandal will end up governing in hindsight.

Most organisations still treat responsible AI as an ethics discussion, or as a technical compliance exercise. Both approaches miss the core reality. AI is no longer just a product feature or a productivity tool. It is a decision-making layer that increasingly influences who gets hired, who gets served, who gets flagged, what gets priced, what gets recommended, and what information people accept as true. When that layer fails, the consequences are not abstract. They show up in courtrooms, in regulators’ inboxes, and in public outrage. They also show up in quieter ways: a customer-facing bot making a prohibited claim, or an automated hiring screen filtering out qualified candidates, until the pattern becomes impossible to ignore.

MANAV offers a useful way to see the full landscape without drowning in complexity: Moral and ethical systems (M); Accountable governance (A); National sovereignty (N); Accessible and inclusive (A); Valid and legitimate systems (V). In corporate terms, MANAV translates to five questions boards should be able to answer, with evidence, not intent.

Start with moral and ethical systems. Companies often have values statements and AI principles, but what separates a mature organisation from a slogan is boundaries. What will you not build? What will you not buy? What will you not deploy, even if it is legal and profitable? If management cannot articulate the organisation’s red lines, the company will end up discovering them in public, under pressure, after a preventable incident. Ethics, in the AI age, is not a poster on the wall. It is a set of design constraints, and it only works when someone is accountable for enforcing it.

That takes us to accountable governance, which is where good intentions typically go to die. AI accountability collapses when there is no clear owner for outcomes. Who signs off on deployment? Who owns monitoring? Who owns incident response? Who can stop a system if it begins to misbehave? Boards do not need to understand the model to govern it, but they do need to insist on decision rights, escalation routes, and reporting discipline. If AI is “everyone’s responsibility”, it is no one’s responsibility. When something goes wrong, the organisation will not only face external scrutiny; it will also face internal confusion about who approved what, and why.

The third pillar, national sovereignty, can sound political. For businesses, it is simply about rights and control: data sovereignty and vendor power. Modi’s argument that data belongs to those who generate it is a reminder that AI supply chains are now part of corporate risk. Where does your data go? What are vendors permitted to do with it? Are contracts explicit about training rights, retention, cross-border transfers, auditability, and liability? Many companies are racing into AI partnerships without fully understanding the downstream implications of sharing customer data, employee information, and proprietary content. Vendor lock-in is not just a technology concern; it can become a strategic dependency that constrains future choices and weakens accountability. And control is a precondition for the next pillar, because inclusion is rarely achieved by accident.

That is why accessible and inclusive AI is not optional, or something to address once the technology “works”. Inclusion is product quality and market reach. AI that performs well for a narrow demographic or a single language can quietly fail for everyone else, producing systematic errors that are hard to detect until they explode. In some contexts, those failures become discrimination claims. In others, they become reputational crises. Even before that, they become adoption problems. When users do not believe systems work fairly and reliably, they do not use them, and the promised productivity gains never leave the PowerPoint deck.

Finally, validity and legitimacy. This is the pillar that will define the next chapter for enterprise AI: proof. Can you demonstrate, to an external party, that a system is safe, lawful, monitored, and fit for purpose? Documentation, testing, monitoring, and incident response are not bureaucratic burdens; they are the infrastructure of assurance. The same applies to content authenticity. Deepfakes and synthetic media do not only threaten politics. They threaten corporate communications, customer confidence, and financial controls. If an organisation cannot reliably distinguish what is real in its own channels, it has a governance gap that will be exploited.

Put differently, MANAV is not a philosophical manifesto. It is a checklist for governability.

There is a reason this matters now. Three forces are converging. First, procurement is tightening. Major customers are beginning to demand stronger assurances around AI governance, transparency, and audit rights from suppliers. Second, regulators globally are moving from broad principles to enforcement, and boards will be held accountable for failures of oversight. Third, misinformation and synthetic media are eroding the baseline confidence that markets require. When stakeholders cannot tell what is real, everything becomes more expensive: customer acquisition, risk capital, compliance, crisis management.

So what should boards and senior executives do, immediately, beyond commissioning another policy?

I believe any organisation serious about responsible AI should be able to produce five things: an inventory of where AI is used and which applications are high-risk; a governance charter that names owners and decision rights; a vendor and data-control review that clarifies training rights, retention, cross-border use, and liability; an inclusion plan that covers testing across languages, demographics, and accessibility needs, alongside workforce impacts; and an evidence pack, including documentation, monitoring, provenance controls, and incident response, that could withstand external scrutiny.

That is not an idealised future state. It is basic hygiene for a world in which AI is moving from experimentation to enterprise-scale dependence.

The most important shift is cultural. Companies will be tempted to measure AI success purely in performance metrics: speed, cost savings, accuracy. Those matter, but they are not enough. The defining metric of the next decade will be whether people trust your systems and whether you can prove you deserve that trust. If you cannot evidence it, you do not have it.

MANAV’s strength is its simplicity. It pulls AI back to first principles: technology in service of people, with accountability built in from the start. For boards, that is the real challenge and the real opportunity. AI will reward organisations that move fast. It will favour, in the long run, those that can move fast without breaking credibility. In AI, speed is optional. Accountability is not.
