
When Algorithms Steal: How AI is Rewriting the Rules of Financial Fraud in India

  • Sarandeep Singh
  • 21 minutes ago
  • 9 min read

Introduction

In February 2024, a finance worker at a multinational firm’s Hong Kong office transferred USD 25 million to fraudsters after attending what appeared to be a routine video conference call with the company’s Chief Financial Officer and several colleagues. The catch? Every person on that call, except the victim, was an AI-generated deepfake. It is reportedly one of the largest deepfake fraud cases recorded to date, and it signals a fundamental shift in the landscape of white-collar crime.


As India rapidly digitizes its economy and positions itself as a global technology hub, AI has emerged as both a transformative tool and a sophisticated weapon in the hands of financial criminals. From voice-cloning scams that trick corporate entities into authorizing fraudulent transfers to AI-generated fake identities that slip past even highly sophisticated verification systems, the growing use of artificial intelligence in financial fraud is creating new and serious challenges for businesses, regulators, and law enforcement agencies, both domestically and globally.


This article examines how AI is transforming financial fraud in India, highlights the regulatory and enforcement gaps exposed by these developments, and discusses the institutional and doctrinal reforms necessary to ensure effective enforcement and prevention.

 

The Evolution of AI-Enabled Financial Frauds

Conventional white-collar crimes, such as embezzlement, securities fraud, and money laundering, demanded insider information, elaborate schemes, and, in most cases, considerable time. AI has radically changed this calculus by democratizing sophisticated methods of fraud and significantly lowering the time and expertise needed to carry it out. One of the most significant shifts is the rise of deepfake technology. By enabling the creation of highly realistic audio, video, and even images, deepfakes have moved beyond an experimental capability to become a practical and increasingly effective tool for financial fraud. The technology allows criminals to impersonate C-suite executives with alarming levels of success in the corporate environment.


Indian companies have not been spared. In 2023-24, a series of incidents saw fraudsters use AI-generated voice cloning to impersonate senior executives and order finance teams to make urgent wire transfers. The technology needed to produce such deepfakes is increasingly accessible: some voice-cloning applications cost as little as USD 5-10 a month and require only a few minutes of audio, readily obtainable from recordings of earnings calls, conference presentations, or social media posts.


Perhaps even more pernicious than impersonation is the creation of purely synthetic identities: AI-generated personas that correspond to no real individual but are constructed from a combination of legitimately obtained and fabricated personal data. Such identities are supported by AI-generated photographs, forged or manipulated identity documents, and artificially created digital footprints, such as email accounts, social media activity, and even transaction histories, which allow them to pass standard customer due-diligence and Know Your Customer (“KYC”) verification processes. The Reserve Bank of India’s (“RBI”) Master Directions on KYC, which rely largely on document verification, face significant challenges in detecting these AI-generated forgeries. The framework under the Prevention of Money Laundering Act, 2002 likewise presupposes that identity documents correspond to real people; synthetic identities exploit this assumption by presenting AI-generated or manipulated documentation that formally satisfies KYC requirements.


This has serious implications for India’s banking sector. As digital banking and fintech systems grow rapidly, synthetic identity fraud threatens the integrity of the financial system. In one recent case uncovered by the police, scammers posing as police and Enforcement Directorate officials extorted Rs. 7 crore from a woman and routed the money through around 3,000 mule accounts to avoid leaving a traceable digital trail. Because the underlying identities appear valid under both document-based and digital authentication, such accounts can remain active far longer, and the fraudulent transactions can be repeated without being spotted by financial institutions or regulators such as the RBI.
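The layering pattern described above, in which stolen funds are fanned out across thousands of short-lived mule accounts, is exactly what graph-based transaction monitoring tries to surface. The sketch below is purely illustrative, not any bank’s or the RBI’s actual system; the accounts, amounts, and the 90% pass-through threshold are all hypothetical. It models transfers as a directed graph and flags accounts that forward almost everything they receive.

```python
# Illustrative sketch: flagging possible mule-account clusters in a
# transfer graph. All accounts, amounts, and thresholds are hypothetical.
import networkx as nx

# Each tuple: (sender, receiver, amount in rupees)
transfers = [
    ("victim_acct", "mule_01", 2_500_000),
    ("mule_01", "mule_02", 1_200_000),
    ("mule_01", "mule_03", 1_300_000),
    ("mule_02", "cashout_A", 1_190_000),
    ("mule_03", "cashout_B", 1_290_000),
    ("salary_acct", "grocer_acct", 4_000),  # ordinary, unrelated activity
]

G = nx.DiGraph()
for sender, receiver, amount in transfers:
    G.add_edge(sender, receiver, amount=amount)

def looks_like_mule(node: str) -> bool:
    """Pass-through heuristic: nearly everything received is sent
    onward again, the signature of a layering account."""
    received = sum(d["amount"] for _, _, d in G.in_edges(node, data=True))
    sent = sum(d["amount"] for _, _, d in G.out_edges(node, data=True))
    return received > 100_000 and sent >= 0.9 * received

suspects = [n for n in G.nodes if looks_like_mule(n)]
print("Possible mule accounts:", suspects)  # mule_01, mule_02, mule_03
```

Real systems would add signals such as account age, device fingerprints, and transaction velocity, but the structural idea of following the money as a graph rather than one transaction at a time is the same.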


AI’s analytical power also enables new forms of securities fraud and market manipulation. Machine-learning algorithms can coordinate trading activity across many accounts to simulate artificial market action, manipulate stock prices, or run complex pump-and-dump operations at a scale and speed that human traders could not achieve. The Securities and Exchange Board of India (“SEBI”) has taken cognizance of such risks: its consultation paper on algorithmic trading and the subsequent regulations require brokers and trading members to adopt risk-management systems for algo-trading.
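To make the surveillance side concrete, the sketch below computes one signal an exchange-side system might look at: how often pairs of accounts place orders in the same scrip within seconds of each other, since repeated near-lockstep timing across nominally unrelated accounts is one hallmark of coordinated manipulation. This is a hypothetical illustration, not SEBI’s actual methodology; the order data, window, and threshold are invented.

```python
# Hypothetical coordination signal: how often two accounts trade the
# same scrip within a short window. Data and thresholds are invented.
from collections import Counter
from itertools import combinations

# Each tuple: (account, scrip, timestamp in seconds)
orders = [
    ("A1", "XYZ", 100.0), ("A2", "XYZ", 100.4), ("A3", "XYZ", 100.7),
    ("A1", "XYZ", 160.0), ("A2", "XYZ", 160.3), ("A3", "XYZ", 160.9),
    ("B9", "XYZ", 400.0),  # an uncoordinated trader
]

WINDOW = 2.0  # orders within 2 seconds of each other count as "together"

pair_hits = Counter()
for (acc1, scrip1, t1), (acc2, scrip2, t2) in combinations(orders, 2):
    if acc1 != acc2 and scrip1 == scrip2 and abs(t1 - t2) <= WINDOW:
        pair_hits[frozenset((acc1, acc2))] += 1

# Pairs that repeatedly act in lockstep get escalated for human review.
for pair, hits in pair_hits.items():
    if hits >= 2:
        print(sorted(pair), "co-traded", hits, "times")
```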

 

Regulatory Challenges in the AI Age

India’s regulatory framework for financial fraud was largely formed in a pre-AI era, leaving structural loopholes that are increasingly prone to abuse. Among the most puzzling legal dilemmas posed by AI-based fraud is attribution: who takes responsibility when an AI-based system supports or commits a financial crime? The developer of the AI tool, the individual deploying it, or the organisation that failed to identify its abuse?


Liability is among the most complicated questions posed by AI-driven financial crime. Where an AI system enables or executes a fraudulent transaction, liability could conceivably fall on the system’s developer, its deployer, or the organisation that failed to notice its misuse. Under Indian criminal law, conviction for a fraud offence generally requires proof of mens rea, that is, criminal intent. Criminal responsibility therefore becomes unclear when an AI system is the proximate author of the fraudulent transactions or communications.


Intent is far harder to prove for fraudulent communications or transactions created autonomously or semi-autonomously by AI systems. The Information Technology Act, 2000 (“IT Act”) acknowledges certain kinds of automated activity, but it does not substantially address the case of AI acting not merely as a passive tool but as an active intermediary whose outputs are determined by opaque decision-making processes. In practice, this produces evidentiary deadlocks: law enforcement agencies can establish that harm has occurred but struggle to determine against whom the law should act.


Similarly, corporate-liability provisions such as Section 141 of the Negotiable Instruments Act, 1881, and analogous provisions in other statutes, might be applied to AI-related fraud, but the courts have not yet developed a settled jurisprudence on corporate liability for AI systems that act with considerable autonomy. Conventional methods of forensic accounting and fraud investigation were designed to identify patterns of human behaviour, paper trails, and logical inconsistencies. AI-generated fraud can be internally consistent, leave little traditional evidence, and operate at a speed that conventional detection methods cannot match.


Indian jurisprudence has traditionally recognised vicarious criminal liability only in limited circumstances, chiefly under regulatory statutes. In Sunil Bharti Mittal v. CBI, (2015) 4 SCC 609, the Supreme Court warned against automatically imposing criminal responsibility on a company’s officers absent proof of their active involvement in the wrongdoing. AI-driven fraud complicates this principle further by decentralising decision-making and embedding it within opaque algorithmic processes.


AI-driven fraud also creates major jurisdictional issues. AI tools hosted on servers outside India but causing financial harm in Indian markets raise questions of territorial jurisdiction, admissibility of digital evidence, and cooperation with international agencies. Mutual legal assistance procedures in cross-border inquiries are usually slow and ill-suited to the pace of AI-driven fraud, allowing perpetrators to exploit jurisdictional fragmentation.

 

Reinventing Legal Responses to AI Fraud

AI-enabled financial crime must be met with a transition from reactive enforcement to anticipatory regulation. Indian law has to recognise that AI is not just a tool but a force multiplier that changes the very nature of economic offences. This requires dedicated legal measures establishing criteria of liability for the use of AI in financial systems, including duties of due diligence, explainability, and human supervision.


A solution-oriented response must also pay close attention to the prevention mechanisms companies can build themselves. For Indian companies, especially banks, fintech firms, and large corporates, internal controls and governance structures have to be redefined as AI magnifies fraud risks.


Businesses must fight AI with AI. Financial institutions are implementing machine-learning-based fraud detection systems that analyse transaction patterns, communication metadata, and behavioural anomalies in real time. Unlike older rule-based models, these systems can detect subtle relationships and coordinated behaviour that may indicate AI-driven manipulation even when individual transactions appear above board. A particularly important safeguard is behavioural biometrics: the dynamic measurement of user traits such as typing speed, navigation patterns, and interaction behaviour. Unlike static credentials, these behavioural markers are difficult for AI systems to replicate consistently, making them well suited to detecting account takeovers and synthetic identities. Several European firms have begun deploying such continuous-authentication models to mitigate deepfake-enabled fraud, demonstrating their practicality.
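As a minimal illustration of the anomaly-detection idea, the sketch below scores transactions with an unsupervised isolation forest. The feature set, data, and flagging logic are fabricated for exposition and do not represent any institution’s production system.

```python
# Minimal anomaly-detection sketch using an isolation forest.
# Features and data are fabricated; real systems use far richer signals.
import numpy as np
from sklearn.ensemble import IsolationForest

# Per-transaction features: [amount in rupees, hour of day,
# seconds into the session, payee seen before? (1/0)]
history = np.array([
    [1_200, 10, 340, 1],
    [2_500, 11, 410, 1],
    [  800, 18, 290, 1],
    [3_100, 12, 505, 1],
    [1_700, 19, 380, 1],
])

model = IsolationForest(contamination="auto", random_state=0)
model.fit(history)

# A large transfer at 3 a.m., seconds into the session, to a new payee:
candidate = np.array([[750_000, 3, 12, 0]])
print(model.predict(candidate))            # -1 means flagged as anomalous
print(model.decision_function(candidate))  # lower score = more unusual
```

Behavioural-biometric signals such as typing cadence would enter the same way, as additional feature columns, so the model learns each user’s normal rhythm rather than relying on a fixed rule.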


Technological safeguards must be reinforced by good governance. Boards of directors, and especially audit committees, take centre stage in managing organisational exposure to AI-based fraud. Section 177 of the Companies Act, 2013 charges audit committees with overseeing vigil mechanisms and fraud detection systems. This mandate should extend to the AI context, covering oversight of algorithmic risks, approval of AI-use policies, and regular evaluation of automated decision-making systems.


Beyond corporate practice, statutory reform is imperative for a genuinely anticipatory law on AI-enabled identity manipulation. The IT Act, in particular, needs to be reconsidered with regard to AI systems that operate semi-autonomously to enable fraud. Clarifying the responsibilities of AI deployers, the evidentiary requirements for AI-generated content, and the liabilities attached to creating and managing such systems would reduce uncertainty in enforcement and adjudication.


Regulators such as SEBI should frame mandatory governance norms for AI-based and algorithmic trading systems. Section 12A of the Securities and Exchange Board of India Act, 1992, which prohibits fraudulent and unfair trade practices, can supply a legal framework for prosecution. Nonetheless, proving that an AI algorithm was deliberately designed to rig the markets, as opposed to being merely a poorly performing but legitimate trading algorithm, raises new evidentiary issues.


Such measures would reflect emerging global best practice, drawing conceptually on frameworks such as the EU AI Act and the Digital Services Act without transplanting them wholesale.


Effective reform also requires institutional coordination. Financial regulators, information-security agencies, and law enforcement must be able to share information in real time and audit AI systems jointly. Fragmented oversight creates enforcement gaps that AI-based fraud readily exploits.

 

Conclusion

AI-enabled financial fraud is the emerging face of white-collar crime in India, challenging core tenets of criminal law, regulatory design, and enforcement policy. It is not a hypothetical threat; it is already embedded in India’s fast-growing digital economy. For all of AI’s efficiency and innovation, its abuse within the financial system is exposing structural weaknesses that incremental legal changes cannot resolve. The future of economic crime regulation lies in recognising that technology has fundamentally changed the game. Unless India adapts its legal and institutional frameworks to fight AI-enabled fraud proactively, white-collar crime will keep evolving faster than the law designed to control it.

This article has been authored by Sarandeep Singh, a student at Hidayatullah National Law University, Raipur. It is a part of RSRR's Rolling Blog Series.
