Why the GDPR May Fail to Protect Individuals from Privacy Risks Produced by Artificial Intelligence Applications, and How a New Transparency-Based Approach to AI Governance May Help

By Sylvia Lu | March 1, 2022

In 2021, Frances Haugen, a former Facebook employee, disclosed how the company’s machine-learning-powered services have been exploiting the personal data of millions of users. According to internal documents collected by Haugen during her time at Facebook, the tech giant has intentionally collected consumers’ personal data and misused its artificial intelligence (AI) systems to boost corporate profits, and it has done so without users’ consent. In light of the solid evidence surrounding Facebook’s unethical and unlawful algorithmic practices, the disclosures are worrisome to the public. Even more worrisome is the fact that these algorithmic misuses have covertly become commonplace throughout the world. 

Today, million-dollar firms develop AI-based services to collect and process data under opaque conditions. From social media platforms to the Internet of Things (IoT) to the apps downloaded on our smart devices, machine-learning-based services now penetrate every corner of our private lives. This radical innovation has enabled the digitalization of the world. It has also led digital users to surrender private information, often without warning, placing their privacy at great risk.

Algorithmic Opacity as a Radical Challenge to Data Privacy and Human Rights

As explored in my latest research, Data Privacy, Human Rights, and Algorithmic Opacity, AI business applications involve a large degree of opacity due to the complexities and trade secret protections of AI systems. This in turn creates a form of “algorithmic opacity” that can be harmful to privacy protection, human rights, and other democratic values. Machine-learning-based AI has been used to process personal data, manipulate individuals, and automate decisions absent public scrutiny. Although data privacy regulations may employ privacy policies and notice requirements to hold firms accountable for their use of data, these regulations can hardly compel firms to offer notices and disclosures that explain how they operate algorithms in a way that protects privacy. If data privacy laws had required private companies to disclose their algorithmic processing for stakeholder review and oversight, arguably, individuals would not have suffered from intrusive algorithm-based services.

The GDPR’s Promise

In response to big data concerns and algorithmic threats, the European Union (EU) enacted a leading data protection regulation, the General Data Protection Regulation (GDPR), to strengthen the protection of personal data. Leading privacy experts have observed that the GDPR has increasingly become the global standard for data privacy laws, influencing and catalyzing a new generation of privacy rulemaking activities worldwide. In the US, the California Privacy Rights Act (CPRA) follows many of the GDPR’s privacy rules. Japan and South Korea have also amended their privacy regulations to meet the privacy standards established by the GDPR. The GDPR is considered one of the strongest privacy laws around the globe because of its omnibus, human-rights-based approach, which considers data protection a fundamental right, in contrast to the US’s sectoral approach, which deems information privacy a consumer right.

The GDPR’s Unfulfilled Promise

Nonetheless, even the GDPR cannot stop firms from secretly using AI to excessively access, gather, and trade personal data for unknown purposes. As this research investigates, although the GDPR attempts to address AI perils to privacy by increasing individual control over personal data processed by advanced algorithms, such efforts remain inadequate in addressing the looming dangers to data privacy posed by AI. Based on assessments of legal measures and case studies surrounding Google’s privacy policies and disclosures, this research finds that there are both loopholes in the legal provisions of the GDPR and gaps between the law on the books and the law on the ground. Some GDPR provisions designed to protect individuals against intrusive algorithmic practices have proven to be minimally enforced in managing privacy risks produced by AI and ineffective in regulating opaque AI systems deployed by private companies. Although the technical complexities and trade secret protections of AI systems make it difficult for outsiders to monitor the use of AI owned by industries, the GDPR’s vague language, narrow focus on AI issues, broad exceptions for algorithmic decision-making, and lack of detailed instructions regarding implementation have limited its capacity to secure algorithmic accountability, data protection, and other fundamental rights. As a result, the GDPR has failed to protect individuals from privacy risks produced by AI applications, and policymakers need complementary requirements in support of algorithmic transparency and privacy protection.

Algorithmic Disclosures as a Solution

To address the problems created by algorithmic opacity, I propose a new transparency-based approach to AI governance that increases algorithmic transparency and enhances data privacy protections. As my other recent article argues, disclosure laws can and should be used to enhance algorithmic accountability. While that article focuses on algorithmic disclosures in the US regulatory context, this research illustrates how transparency rules in an EU data protection context may serve as an important vehicle for protecting data privacy, as well as fundamental rights and democratic values.

In the EU, the Non-Financial Reporting Directive (NFRD) requires firms to disclose issues concerning “respect for human rights,” including data protection issues. The NFRD aligns with the GDPR’s fundamental-rights approach and requires large firms to disclose their corporate practices under several topics. This legal instrument can be used to demand algorithmic transparency and thereby enhance data protection and democratic values in today’s GDPR era. In this context, the research proposes a new set of transparency duties for firms using AI. It explains what firms should be required to disclose about their AI practices, as well as who should disclose, when, to whom, and how, for the protection of privacy and other democratic values under the NFRD. Further, it investigates what firms should disclose concerning their machine-learning-based AI systems under five categories—business descriptions, policy and due process, outcomes of policies, major risks and management, and key performance indicators—establishing a set of principles designed for algorithmic disclosures.

Whistleblowing Mechanism as a Solution

Because disclosures made by firms can include inaccurate and irrelevant information, this research demonstrates how whistleblowing regulations may serve as an effective tool to enhance the truthfulness of corporate disclosures and facilitate data protection. The EU’s Whistleblower Protection Directive was passed to protect whistleblowers from organizational reprisals, including retaliation by firms against employees for their whistleblowing activities. With legal protections covering whistleblowing activities in place, employees may play a crucial role in increasing algorithmic transparency, since they are empowered to monitor and report illegal business practices, including inaccurate corporate disclosures and problematic algorithmic practices that compromise data privacy and other fundamental rights.

Algorithmic Transparency as a Destination

With the combination of corporate algorithmic disclosures and whistleblowing mechanisms in place, firms will have stronger incentives to correct their misconduct and address controversial corporate practices before their unethical algorithmic practices are disclosed or reported. Thus, algorithmic practices can be regularly monitored by stakeholders, including shareholders, employees, regulators, and communities. Moreover, transparency in algorithmic practices may help shareholders make informed investment decisions, enable consumers to gain adequate control over their data and the advanced algorithms that process it, and facilitate the innovation of algorithmic practices.

Policy Consideration

Of note, however, are the costs involved in such disclosures, which cannot be neglected. Algorithmic disclosures involve high costs and radical reforms in business structures. Private companies may also fear that their trade secrets will be revealed to the public. A transparency-based approach may therefore be deemed undesirable and face resistance among private companies. At the same time, the current generation has increasingly called for greater transparency and trustworthiness in AI. We should not neglect to consider the costs of opacity, which have led not only to privacy invasions and hefty fines, but also to backlash due to the loss of trust in private AI systems. Since Google’s data privacy scandals were disclosed, it has been voted one of the least trustworthy tech firms. Moreover, since the GDPR went into effect, Google has been fined at least 232 million USD for GDPR violations (current as of January 2022), leading to an international public relations crisis. The proposed algorithmic disclosures require firms to describe how they operate their AI systems in ways that protect privacy and other fundamental rights. They are not complete disclosures of AI systems that surrender or otherwise threaten trade secrets. Rather, these disclosures help firms earn public trust in, and demonstrate accountability for, their algorithmic practices.

Concluding Remark

In today’s AI age, transparency measures must become essential practices for firms to gain legitimacy and truly achieve privacy protections in their use of advanced AI systems. Transparency rules may be not only instrumental to the protection of human rights and democratic values, but also useful for information dissemination and algorithmic innovation. If firms continue to operate their algorithmic systems with opacity, it may become increasingly difficult for consumers to trust them and continue to share their data, more broadly impeding innovation. For these reasons, it is time for policymakers to consider a transparency-based approach built on algorithmic disclosures and whistleblowing mechanisms to encourage responsible changes, strengthen corporate accountability, and realize trustworthy AI innovation in today’s GDPR era. The sooner, the better.

Sylvia Lu is a Doctoral Candidate and a Lloyd M. Robbins Doctoral Fellow at Berkeley Law

This post is adapted from her paper, “Data Privacy, Human Rights, and Algorithmic Opacity,” available on SSRN.

The views expressed in this post are those of the author and do not represent the views of the Global Financial Markets Center or Duke Law.
