Opinion | Privacy regulation should be more comprehensive

By Yutong Zhao, Columnist

Over the past few years, we’ve found ourselves in unnervingly Orwellian scenarios where our democracy seems to be in jeopardy. Think of Cambridge Analytica, a British political consulting firm that used Facebook’s Application Programming Interface to harvest millions of people’s personal data and used that data to influence the 2016 U.S. presidential election and the Brexit referendum. The firm’s involvement cast doubt on the legitimacy of both outcomes.

Although it is unlikely we’ll ever find ourselves in Orwell’s vision of “1984,” the extent to which information can influence our daily lives and the democratic process has become worrisome. In the Information Age, our exposure on the internet directly affects the security of our personal privacy. Yet interestingly, most of our information isn’t controlled by the government but by private corporations. This raises both private and public concerns: What effort should we make against the for-profit exploitation of our personal data?

Before we can answer that question, we have to know what kinds of data should be explicitly protected — after all, not all data are created equal. It should come as no surprise that some personal information is more sensitive than others. For example, people’s gender identity is probably less sensitive than their home address. 

The kind of data that is generating security concerns is known as PII, or Personally Identifiable Information: any data that can potentially identify a specific individual. If breached, PII can enable identity theft and cause great harm to the individual. In terms of usage, three types of privacy are discussed the most because of their security importance and personal value: online privacy, financial privacy and medical privacy.

Online privacy includes all personal information generated by online interactions. These are the kinds of data shared on social media or online questionnaires. The Cambridge Analytica scandal involved the collection of this kind of information through surveys. 

Financial privacy, like credit card information, is especially sensitive because of the threat of financial fraud. 

Medical information, including medical records and, increasingly, DNA data, has long been regulated to prevent misuse by health care practitioners, insurance companies and researchers.

But how is PII being used and regulated? Data binning, the practice of grouping user data into “bins” of shared characteristics, can make it harder to trace information back to individual users, but it doesn’t prevent companies from using PII to target advertisements and propaganda, as was the case with Facebook and Cambridge Analytica. Instead, the regulation of data focuses on keeping a company’s privacy policy fair and on ensuring it doesn’t change without users being informed in good faith. Specifically, regulations mandate privacy requirements for the notification, collection and sharing of information.

However, companies are hardly compliant on their own. A 2016 analysis of 17,991 free apps found that 71% of the apps lacking a privacy policy were legally required to have one, and that apps across the board exhibited an average of 1.83 potential inconsistencies with privacy requirements. It is important to note that not all violations stem from malicious intent; some may simply result from app developers’ uncertainty about vague policy requirements.

This gives rise to the next question: How effective are our existing privacy regulations? Unfortunately, there is no generally applicable data privacy statute at the federal level in the U.S. Only California and Delaware have comprehensive privacy legislation: the California Online Privacy Protection Act (CalOPPA) and Delaware’s Online Privacy and Protection Act (DOPPA).

However, there are federal laws that protect against specific types of privacy breaches. For example, the Health Insurance Portability and Accountability Act (HIPAA) protects patients’ personal health information from unauthorized disclosure, and the Gramm-Leach-Bliley Act (GLBA) requires financial institutions to safeguard their customers’ financial data.

But these acts only guard against specific transgressions of privacy; they don’t explicitly define users’ data privacy rights in general. The European Union’s General Data Protection Regulation, on the other hand, grants users such rights, including explicit opt-in consent, the right to access their data and the right to have their data deleted. It is far more comprehensive than American regulations in that it both gives users positive rights and protects their privacy.

It seems that the enhanced power of information algorithms and statistical learning techniques requires us to take new challenges head-on. We need to find a happy medium where we can maximize the personal benefits of this technology while maintaining the integrity of our democracy.

Yutong is a senior in LAS. 

[email protected]