Data in Healthcare: Non-Rivalrous, All-Encompassing Realities
How does data flow across networks, and how do we protect it from falling into the wrong hands, where it gets reused and repurposed for interests that do not serve us or, worse, actively bring us harm? This isn't about a trusted third party; there is no such thing in today's world. Since most harms can only be recognized in retrospect, it has been difficult to evaluate the trade-offs we agree to, especially when consent itself becomes a contentious category.
Data, as we know, is a non-rivalrous asset, with no clear destination save for the guiding principles we employ for its many usages. It is important that we pick those principles with great care. Particularly in a sector like healthcare, where trust occupies a crucial place, the ever-expanding networks around data collection and retention need to be reined in with a sense of proportionality. The consequences that arise when sloppy, or worse, absent frameworks govern large-scale technological interventions do not just determine how we allow governments and corporations to treat human autonomy; they are also a deciding factor in the kind of politics that gets to take centre stage. Whether it is psychometric profiling of voter bases or IoT-driven surveillance becoming ubiquitous for the sake of lifestyle convenience in first-world countries, it is sobering to realize how often deliberation is forgone while technology takes centre stage. It is important that we do not allow empty slogans around 'digitization' to be absorbed without criticality, without guardrails, and without scrutiny of who gets exposed and for what purposes.
In this article, the focus will be on healthcare services and the data vulnerabilities stemming from exclusions; the less-than-savoury incentives that mar public-private partnerships; and the best practices adopted globally that use cryptographic and legal safeguards for citizen welfare, bringing more clarity to use-specific data storage and trust-building in governance.
Public Attitudes about Private Concerns
The conversation around the delicate interplay between efficiency and privacy has long lacked uniformity across cultures. The dearth of industry standards for citizen-data protection stems directly from the ambivalence surrounding the material need to protect sensitive data, a need that can seem quite abstract.
In a survey conducted by Ernst & Young for its 2019 Global FinTech Adoption Index report, participants were asked whether they would be willing to share their financial data with intermediaries other than their bank in order to avail of better offers on various financial services. While only 13% of respondents from the Netherlands assented to this, in India a staggering 65% were willing to part with sensitive information for the same. Similar trends in the report pointed towards a correlation between lower per-capita income and citizens' willingness to give away their data. This insight provides a segue into the dearth of healthy scepticism around third-party exploitation, and can be correlated with advances in the digitization of healthcare in India that have the potential to bring about a massive shift in the doctor-patient relationship. This is crucial because information that used to be strictly private now finds itself caught in the web of cross-party sharing, with greater potential for unintended or unauthorized divulgence of sensitive medical records. While this relationship has traditionally enjoyed a confidentiality rooted in the Hippocratic Oath and medical ethics more generally, it is unclear, in the face of new-age ambiguities around the rules of data collection, whether these principles will still hold. Research shows that marginal and low-income groups often become inured to high exposure and a lack of meaningful consent through the transactional nature of their interactions with the state for welfare services. Today, the risks have come to encompass commercial data-mining as well as the gradual corrosion of the citizen's sense of autonomy.
The associated risks remain out of focus in our discourse due to the historically powerful conception of technology as a neutral social force, naively pictured as a prima facie equalizer of our social institutions. Further, it is easy to lose track of the harms accompanying data leakages when one looks at the parallel strides made during the past two years in battling the pandemic. For instance, biotechnology solutions, genome sequencing in particular, have aided the medical community's ability to track mutations and inform government response, but they rely upon public health data that is readily tracked and made available. It is possible to remain steadfast in our adherence to scientific research while not taking a blanket approach towards tech solutions, especially in the face of an ill-prepared demographic such as ours.
The Many Lives of Data
It is true that data has been a force for much good; it can also be leveraged to exclude and exploit. It is important to remember that those 'falling through the cracks' are not numbers but flesh-and-bone human beings. Only 45% of India has access to internet services, and hence the effectiveness of citizen-reliant, tech-driven policy approaches is naturally limited. When India started its vaccination drive in early 2021, the government launched a registration portal, Co-WIN, through which identified beneficiaries initially had to be registered before being vaccinated. This became a hurdle for many sections of the population: those who are not tech-savvy, the elderly, those who do not possess smartphones, and those who live in areas with poor connectivity. Thankfully, the Supreme Court took notice of the digital divide that alienated several sections of people from the immunization process and eased this burden; in June that year, limited on-site registrations were allowed and pre-registration was made optional. Furthermore, the National Digital Health Mission (NDHM), rolled out in August 2020, is meant to ensure seamless linking of relevant citizen data to build an integrative health infrastructure.
The nature of such data collection includes practices that are not just delivering upon pre-determined goals but also reshaping the relationship between the citizen and the welfare state. This is evident from the shirking of responsibility over widespread exclusions in India, where errors in e-authentication of the Aadhaar number, backed by what is considered the largest biometric database in the world, have resulted in the denial of food rations. We already have the experience of centralized databases cutting down on social welfare by creating hurdles through mandatory e-verification; this often gets passed off as 'savings' and touted as a model for greater efficiency, as widely documented in the case of the Public Distribution System (PDS). Ethnographic work done in semi-urban parts of Chandigarh points towards more of the same, with patients being pushed to have their Health ID mandatorily linked with their Aadhaar number.
The Data Security Council of India (DSCI) acknowledges that healthcare data sits within an ecosystem with multiple stakeholders, pharmaceutical companies, health insurance companies, and third-party aggregators among others, where unauthorized disclosure could breach patient confidentiality and expose sensitive health information. Health insurance in particular becomes an area of unique concern, where biometric linkage with Aadhaar allows for extensive potential profiling of individuals on the basis of socio-economic and demographic information, paired with other sensitive medical data. This has an obvious bearing on insurers' ability to fix premium rates and issue claims, subjecting patients to discrimination, as seen in several cases in the U.S., where data exposure from wearable devices carries grave implications.
Security Should Not be an Afterthought
Precautionary design from a computational standpoint is vital; we need safeguards that are forward-looking rather than retrospective. Computational power and capacity have increased at an exponential rate, and similar leaps in cryptographic research mean there are protocols that can be used to embed privacy-preserving technologies in our healthcare systems. One such approach is Secure Multi-Party Computation (SMPC). Introduced by computer scientist Andrew Yao, the technique has been studied in academia for over four decades and has started to find a number of use cases in the real world. From wage-gap analysis in Boston to tax-fraud detection in Estonia, it has allowed different parties to make necessary calculations without revealing their inputs to each other: a win-win for both privacy and data utility.
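To make the idea concrete, here is a minimal sketch of additive secret sharing, one textbook building block of SMPC. It is an illustration rather than a production protocol: each participant splits their private value into random "shares" that sum to it modulo a public prime; parties only ever see shares, and only the aggregate is reconstructed. The clinic scenario and figures are hypothetical.

```python
import secrets

PRIME = 2**61 - 1  # public modulus; all arithmetic is done mod this prime

def share(value, n_parties):
    """Split a private value into n additive shares that sum to it mod PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def secure_sum(private_inputs):
    """Each participant splits their input into shares; each party sums the
    shares it holds locally, and only the combined total is reconstructed."""
    n = len(private_inputs)
    all_shares = [share(v, n) for v in private_inputs]
    # party i holds the i-th share of every input and sums them locally
    partial_sums = [sum(s[i] for s in all_shares) % PRIME for i in range(n)]
    return sum(partial_sums) % PRIME  # only the aggregate ever emerges

# e.g. three clinics pool patient counts without revealing individual figures
print(secure_sum([120, 87, 45]))  # → 252
```

Each individual share is statistically random, so no single party learns another's input, yet the aggregate needed for analysis is exact. Real SMPC deployments, such as those in Boston and Estonia mentioned above, use considerably more sophisticated protocols to handle multiplication, malicious parties, and network communication.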
In this way, data inputs are made invisible by encrypting datasets during collection itself. Take, for example, a scenario where a participant is required to share their DNA so that a clinic can evaluate their susceptibility to a type of diabetes or assess their risk of cardiovascular disease. The anonymity offered through secure computational methods creates an environment where participants are comfortable sharing their data, and the aggregate output can aid public health campaigns or inform clinical practice without intruding upon the individual's right to protect sensitive information. As in the case of France's contact-tracing app, pseudonymization of data is also strongly recommended, even though experts are split on the possibility of re-identification.
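Pseudonymization itself is simple to sketch: a direct identifier is replaced with a keyed hash, so records remain linkable for research while the raw identity is held apart. The identifier, key, and record below are invented for illustration; the sketch also shows why experts remain split on re-identification, since the mapping is deterministic and anyone holding the key can link pseudonyms back to people.

```python
import hmac
import hashlib

# hypothetical secret key; in practice stored separately from the data
SECRET_KEY = b"rotate-me-and-store-apart"

def pseudonymize(patient_id: str) -> str:
    """Replace a direct identifier with a keyed hash (a pseudonym).
    Deterministic, so the same patient maps to the same pseudonym and
    records can still be linked across datasets."""
    digest = hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]

# a hypothetical record before and after pseudonymization
record = {"patient_id": "IN-PAT-1234", "diagnosis": "Type 2 diabetes"}
safe_record = {"pid": pseudonymize(record["patient_id"]),
               "diagnosis": record["diagnosis"]}
```

Because the diagnosis and other attributes travel with the pseudonym, combining enough quasi-identifiers can still single a person out, which is precisely the re-identification risk the French debate turned on.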
Legal frameworks aren't deterministic, but without them we will lack a sense of boundaries; and for boundaries to work effectively, they must be clearly chalked out. While the Information Technology Act, 2000, now two decades old, takes 'sensitive personal data or information' (SPDI) into account, the term 'health data' does not have a clear definition within the law. The newly drafted Personal Data Protection Bill (PDPB 2021), which proposes a Data Protection Authority (DPA) to oversee the implementation of the new regulations, does cover some ground in overhauling the data regulation regime and fine-tuning it to current realities.
Unlike the US, which enacted the Health Insurance Portability and Accountability Act (HIPAA) as far back as 1996, India does not have sector-specific data protection standards. The PDPB should ideally expand with a sub-sectional focus on medical data and the challenges specific to the industry, in tandem with the changes being brought about through national health digitization. A national health data framework is necessary for broader standardization of minimum data protection methods and approaches; organizations such as UNAIDS also recommend country-wide privacy legislation. Legally as well, the PDPB will serve as a bedrock for the policy vision laid out in the Health Data Management Policy (HDMP), brought out in 2020.
The DSCI also recently launched its privacy guidelines for the healthcare sector, in light of these very concerns and growing resistance to data centralization across the world. Even though a consent-based approach is central to the framing of DSCI's best-practice frameworks for data retention, this consent remains compromised in an asymmetrical ecosystem where the data principal is tied down: through a lack of clarity of definitions and, as previously stated, through her dependence on services such as welfare schemes and other important health facilities.
Data privacy standards should serve to minimize the intrusion a patient has to face to access quality healthcare, while demarcating the need for and extent of exposure of sensitive information. Preserving privacy is not merely a standalone effort to retain the dignity of an individual and to protect her from harm, stigma, exclusion, or exploitation, whether direct or indirect. The actions arising from these concerns also bolster a country's identity as a free society, acting as an ex-ante measure against surveillance of the private sphere. Health data can be leveraged for a variety of aims; the key is to make these aims crystal clear and to mitigate actual harms by establishing well-defined safeguards for citizens and building a better environment for meaningful consent.
This blog has been authored by Yusra Khan, a student at Ashoka University. This blog is a part of RSRR’s Right to Privacy and Legality of Surveillance Blog Series, in collaboration with the Centre for Internet Security.