
Combating Manipulative Neuromarketing with Intermediary Regulation

Catastrophe struck on Mother’s Day, 2022, when millions of postpartum mothers were deprived of the nutrition needed to feed their infants after falling prey to the neuromarketing strategies of formula food businesses. In its report on the ‘Marketing of Breast-Milk Substitutes,’ the World Health Organization (WHO) described how profit-seeking companies misused social media to deliver misleading personalized content, via advanced neuromarketing techniques, to women who were new to motherhood. The targeted mothers were manipulated by this marketing, causing demand for the advertised products to surge far beyond the production capacity of their manufacturers. The WHO report directly correlates this exponential growth in demand with the effectiveness of the neuromarketing techniques employed to increase the sales of formula products. Crisis inevitably followed when such businesses failed to meet the demand for their products by a massive margin, direly impacting mothers across the world.


Neuromarketing involves studying the cognitive features of consumers and their subconscious responses to specific marketing scenarios. Modern enterprises use this method to tailor their advertisements to have a long-lasting impact on the desired target group. With the current ease of access to a large consumer base through social media platforms, online intermediaries are being rampantly misused by businesses to manipulate potential buyers.

This is especially dangerous in the Indian economy, where much legislation has become outdated in the face of advances in neuroscience and technology. It is, therefore, necessary to comprehensively improve the Indian legal framework regulating intermediary platforms while recognizing and protecting ‘neurodata’ and ‘neurorights’ in the digital space.

Classifying Neurodata

Neurodata consists of information that enables its possessor to observe processes occurring in the brain of the person from whom it is generated, including the psychological and behavioural patterns exhibited by that person. It may be classified as personal data based on the following inferences drawn from Rule 2(1)(i) and Rule 3 of the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011 (‘SPDR’).

The SPDR define personal information as any information capable of directly or indirectly identifying its corresponding natural person, in combination with other information that is available, or likely to be available, with a body corporate. They classify as sensitive personal data information relating to the ‘physiological and mental health condition,’ ‘sexual orientation,’ etc., of a person, additionally including any such detail provided to a body corporate for providing services, or for processing or storage. The Draft Digital Personal Data Protection Bill, 2022 (‘DPDP Bill’) has adopted a broader approach by defining personal data as ‘any data about an individual who is identifiable by or in relation to such data.’

This identity-based approach has also been provided for under Article 4(1) of the General Data Protection Regulation, 2016 (‘GDPR’), which further includes factors specific to the ‘mental, economic, cultural or social identity’ of a natural person under the ambit of personal data. Likewise, the European Union and its Court of Justice have previously clarified that information relating to the human brain and mind is personal data if it can single out the data subject at stake.

Manipulative Collection and Processing of Neurodata

Most devices that form the Internet of Things are capable of recording input through a myriad of techniques that monitor factors ranging from haptic feedback and simple sleep patterns to complex pupil dilation. Termed ‘digital phenotyping,’ such monitoring, coupled with AI-driven deep learning, can be used to generate neurodata from an individual’s routine tasks with credible precision. In fact, Human Rights Watch, an international non-governmental organization, exposed many EdTech businesses that embedded tracking software in their platforms, allowing third parties to actively monitor the behaviour of children availing their services. In most scenarios, this cognitive surveillance was not backed by the consent of the children or their guardians. The Draft DPDP Bill, 2022 mandates that ‘verifiable parental consent’ be obtained before the personal data of children is processed, but this requirement does not extend to the collection of such personal data.

Neurorights are a special category of human rights ‘specifically aimed at protecting the brain and its activity as neurotechnology advances.’ They include the right to personal identity, the right to agency (i.e., the right to exercise one’s free will without influence by any external factor), the right to mental privacy and the right to protection from algorithmic bias. Manipulative and non-consensual neuromarketing gravely contravenes these neurorights, as evident from the notorious ‘Cambridge Analytica Scandal,’ in which the personal data of over 230 million American voters was harvested and processed to influence their voting behaviour in favour of select political parties. Labelled ‘digital gerrymandering,’ this incident compromised the election process and the sacred spirit of democracy in the United States of America. Similar tactics can be traced back to the 1930s, when Nazi leader Adolf Hitler employed highly influential strategies to mentally instil his propaganda in otherwise free-thinking persons before the Second World War.

Notable Developments in Foreign and International Data Protection Legislation

The right against algorithmic bias protects technology users from being subjected to discriminatory automated decision-making. Such discrimination usually occurs when content is delivered to users after digital phenotyping of their cognitive manifestations. Section 49 of the United Kingdom’s Data Protection Act, 2018 protects this right, and Article 5 of the European Union’s Digital Markets Act, 2022 prohibits the steering of content towards users on the basis of algorithmic decisions. Additionally, the EU recently passed a regulation covering algorithmic decision-making, and similar provisions are presently being deliberated upon by the US Senate. At present, no such protection is afforded by Indian law.

In order to rein in the burgeoning field of neuroscience and technology with regulation, the Government of Chile incorporated neurorights into its Constitution by bestowing neurodata with the same status as an organ, thus preventing its unauthorized collection, sale or processing. No comparable legislation exists in India.

Regulating the Collection and Processing of Data

The Joint Parliamentary Committee (‘JPC’) astutely attempted to protect the right against algorithmic bias in its Report on the Personal Data Protection Bill, 2019. The 44th recommendation in this report provides for transparency over the algorithmic processing of information by data fiduciaries. While this recommendation is limited to procedural transparency, it opens the door for the Government to prescribe and enforce standards of automated data processing and decision-making, thereby setting out a substantive legal framework similar to the above-stated law in the UK. However, this recommendation has not been adopted in the Draft DPDP Bill, 2022.

It is suggested that such a substantive framework be formulated in accordance with the ‘do no harm’ principle adopted by the Supreme Court of India in Justice K.S. Puttaswamy (Retd.) v. Union of India, 2017. In this case, the Court held that the processing of individuals’ biometric and digital data, which lies beyond their scope of access and ability to modify, must be carried out in a manner that prevents any direct or indirect harm to them. It further stated that such processing requires rigorous and persistent oversight. Applying this judgment to the present context, the Government could bring into force laws that prevent the processing of sensitive personal data where the end result of such processing may be misused to manipulate the data principal. It may further impose on data fiduciaries periodic disclosure requirements relating to data processing and algorithmic decision-making.

If such legislation is enacted, it will operate in harmony with the Draft DPDP Bill, 2022. The said Bill proposes to confer upon data principals the right to obtain a summary of their personal data being processed, the processing activities undertaken on it, and the identities of all the data fiduciaries with whom such personal data has been shared. This right is complemented by the right to grievance redressal, as well as the option of approaching the Data Protection Board in case of failed or unsatisfactory grievance redressal.

The Explanatory Note to the Draft DPDP Bill, 2022 lays down the principles which form the genesis of the said draft, of which the following bear relevance:

  1. The usage of personal data must be done in a manner which is lawful, fair and transparent to the individuals concerned.

  2. Personal data is to be used only for the purposes for which it was collected (principle of purpose limitation).

  3. Only those items of personal data required for attaining a specified purpose must be collected (principle of data minimization).

  4. Reasonable safeguards must be taken by data fiduciaries to ensure there is no unauthorized collection or processing of data.

  5. The person who decides the purpose and means of processing personal data should be accountable for such processing.

If these principles are backed by the force of law, the inevitable result would be a ban on the digital phenotyping of neurodata, without which neuromarketing cannot be carried out in a manipulative manner.

Regulating the Transmission of Content Through Intermediaries

At present, the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (the ‘Intermediary Rules, 2021’) impose comprehensive due diligence duties on intermediaries. Moreover, the electronic transmission of obscene and sexually explicit content is penalized u/s. 67, 67A and 67B of the Information Technology Act, 2000 (the ‘Act’). Furthermore, the safe-harbour provision u/s. 79 of the Act extends immunity to an intermediary against liability for unlawful third-party information, data or communication only when such intermediary complies with the above-mentioned due diligence duties and is neither involved in the unlawful act or omission nor has any knowledge of it.

Indian laws in force, as well as the proposed framework under the Draft DPDP Bill, 2022, tend to regulate the access, collection and processing of sensitive personal data by intermediaries, as well as the content transmitted by them. However, they do not safeguard users of digital platforms against algorithmic bias arising out of automated decision-making. It is suggested that the ‘transmission’ of content on all platforms falling under the definition of ‘intermediary’ u/s. 2(1)(w) of the Act be additionally regulated. This is feasible by classifying ‘social media platforms’ as ‘publishers’ in order to attract liability for any micro-targeted delivery of data facilitated by them, as recommended in the JPC’s report.

To further elaborate, Part II of the Schedule to the Intermediary Rules, 2021 mandates that intermediaries issue disclosures to users when the content streamed on their platforms involves ‘discrimination, psychotropic substances, nudity, violence’ and so on. It is suggested that an additional criterion covering potentially manipulative or psychologically influential content be introduced into the Schedule, so that consumers are cautioned against neuro-manipulative content and may choose to avoid being exposed to it.

Freedom of Commercial Speech

Article 19(1)(a) of the Constitution guarantees the fundamental right to freedom of speech and expression. In Tata Press Ltd. v. Mahanagar Telephone Nigam Ltd., 1995, the Supreme Court held that commercial speech falls within the purview of Article 19 when its purpose is to promote the sale of a product. This reasoning rested on the presumption that an increase in sales would decrease the retail price per unit, ultimately benefiting the end consumer.

It is intriguing to note that while neuromarketing may increase the sales of the marketed product, the overall result is detrimental to the interests of consumers when personalized marketing strategies are delivered to targeted groups on the basis of their digital phenotyping. Neuromarketing practices per se enjoy protection under Article 19 so long as they do not harm the consumer, as explained above. Conversely, neuromarketing practices whose end result is harmful to the interests of end consumers are not afforded Constitutional protection.

When neuromarketing merely caters to the existing wants of a consumer, it is lawful. However, when it stimulates cognitive trigger points in the brain in order to create or magnify certain desires in the mind of the consumer, the neuroright to agency (i.e., the right to exercise one’s free will without influence by any external factor) is violated, amounting to blatant misuse of the Constitutionally guaranteed freedom of speech and expression.


Given the exponential growth of neuromarketing over social media intermediary platforms, strict regulation is the need of the hour. Justice Douglas’ dissenting opinion in Osborn v. United States, 1966, elegantly describes the danger of the inevitable invasion of privacy resulting from the rapid progress of science. The relevant passage of his dissent is extracted below:

The time may come when no one can be sure whether his words are being recorded for use at some future time; when everyone will fear that his most secret thoughts are no longer his own … when the most confidential and intimate conversations are always open to eager, prying ears. When that time comes, privacy, and with it liberty, will be gone.

While the Indian Government is industriously framing policies to facilitate the growth of the digital Indian economy with its USD 5 trillion goal in mind, it is critical to frame laws that recognize and protect mental data and the rights flowing from it, so as to preserve the integrity of the Digital Nagrik.


This article has been authored by Aadesh Ramadorai, a student at the Tamil Nadu Dr Ambedkar Law University, Chennai. This blog is a part of RSRR’s Blog Series on “Emerging Technologies: Addressing Issues of Law and Policy,” in collaboration with Ikigai Law.

