There is a saying: "when we get something for free, we end up being the product." In the age of information, all the clicks we have left behind, and will leave behind, are being used to analyse us, rate us, package us, and sell us back to ourselves.
In a 2019 article, the New York Times outlined how each of us has "secret consumer scores": hidden ratings that determine how long we wait on hold when calling a business, and what kind of service we receive. A Tinder algorithm of sorts: a low score sends you to the back of the queue, while a high score fetches you elite treatment. The society we live in today is witnessing an enormous flow of data, and with it, a rampant surge in algorithmic systems that make decisions for us without us knowing whether those decisions are fair to us or not.
How Does Big Tech Collect Our Data?
The Big Tech firms, also known by the acronym 'FAAMG' (Facebook, Amazon, Apple, Microsoft, and Google), compete with one another to harvest as much data as they can and sell it to third-party applications and businesses. They have acquired extraordinary amounts of data on individuals through internet browsers, email, weather applications, maps, and satellite navigation. These firms document how we browse, what food we enjoy, where we buy our socks, which music soothes us, what movies we watch, where we travel, and how we let the world know about it.
Google holds 4.14 gigabytes of data on me. (You can download a copy of your own Google data through Google's Takeout service.) When you do, you will see a folder containing multiple subfolders, each with multiple .json files. In a folder labelled Location History, Google kept a record of my monthly location data since 2016, with granular detail about whether I was walking, running, tilting, cycling, or in a vehicle, along with timestamps of the activity, location names, latitudes, and longitudes.
Another folder recorded the ads I may have seen based on the websites I visited. In another, the files contained details of the sites I have visited, the images and videos I have searched for, and the apps I have opened and for how long. Even recordings of my Google voice searches are listed in yet another file, along with dates and times. And this is not just a story about Google.
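Those Takeout .json files are plain, machine-readable data. As a minimal sketch, here is how you might summarise a Location History dump in Python. The sample below mimics the Takeout layout (coordinates stored as degrees × 10⁷ in `latitudeE7`/`longitudeE7` fields), though field names vary across Takeout versions, so treat this as an illustration rather than a parser for every export:

```python
import json

# Hypothetical excerpt mimicking Google Takeout's Location History layout.
sample = """
{
  "locations": [
    {"timestampMs": "1461000000000",
     "latitudeE7": 417749000,
     "longitudeE7": -873985000,
     "activity": [{"activity": [{"type": "ON_FOOT", "confidence": 85}]}]}
  ]
}
"""

def summarise_locations(raw_json):
    """Return (lat, lon, top_activity) tuples from a Takeout-style dump."""
    data = json.loads(raw_json)
    points = []
    for loc in data.get("locations", []):
        lat = loc["latitudeE7"] / 1e7   # E7 fields are degrees * 10^7
        lon = loc["longitudeE7"] / 1e7
        acts = loc.get("activity", [])
        top = acts[0]["activity"][0]["type"] if acts else None
        points.append((lat, lon, top))
    return points

print(summarise_locations(sample))  # → [(41.7749, -87.3985, 'ON_FOOT')]
```

A few dozen lines like these are enough to reconstruct where someone was and what they were doing, which is precisely why this data is so valuable to advertisers.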
The New York Times report found Facebook, too, holding vast amounts of personal data in its databases. Instagram, along with its parent company Facebook, holds data on removed friends, phone contacts, blocked contacts, pictures, chat conversations, and photos and videos sent and received, among other things. Instagram, of course, retains your search history to show targeted ads.
Even Alexa is listening, carefully. According to a 2019 Bloomberg report, Amazon employs thousands of people to help improve the Alexa digital assistant powering its Echo speakers. Amazon claims it uses users' requests to Alexa to train its "speech recognition and natural language understanding systems." While we, the users of web services, may be generating enormous amounts of data, we have no control over it. In turn, the Big Tech firms constantly monitor how we produce data and then recreate us, our choices, to make us better products.
We are in an era of data collection and surveillance, whether we opt into it or not. A panopticon, Michel Foucault might have called it. To gain a competitive edge, these companies hunger for hyper-personalisation: they want to know everything about a consumer (their needs, desires, and behaviours) to make useful recommendations. It is this quest for hyper-personalisation that leads to misuse of user data. A classic case of data misuse?
Cambridge Analytica Scandal
In early March 2018, two leading newspapers, The Guardian and The New York Times, published a chilling report on how the political consulting firm Cambridge Analytica, which worked for the Trump campaign, had harvested the personal data of millions of Facebook users without their consent to "build a powerful software program to predict and influence choices at the ballot box".
The data was collected through an application called thisisyourdigitallife, built by the academic Aleksandr Kogan of Cambridge University. Kogan, in collaboration with Cambridge Analytica, paid hundreds of thousands of users to take a personality test; they agreed to have their data collected for academic use. However, the app also collected the data of the test-takers' Facebook friends, leading to the accumulation of unprecedented amounts of data.
Cambridge Analytica, according to some estimates, harvested private information from the Facebook profiles of more than 50 million users without their consent, making it one of the largest data leaks in social media history. The scandal was a clear exposé of how easily third-party developers could access users' data and sell it on to companies that misused it.
Now, we are in 2021. Since the 2018 Cambridge Analytica scandal, user data privacy has gone mainstream, and these concerns have put Big Tech on the radar of privacy watchdogs. In the last few years, we have seen many instances of Big Tech mishandling consumer data or mining data without user consent. Data privacy concerns do not stop at personal privacy; they encompass a wide array of issues, from what data protection means for democracy to who owns our data.
What Can We Do About Data Privacy?
Data privacy centres on how data is collected, stored, managed, and shared with third-party entities. It focuses on individuals' right to know the purpose of data collection, to set privacy preferences, and on compliance with privacy laws.
There are three ways of dealing with data privacy concerns, each interconnected and overlapping with the others. First, the onus of data privacy lies with individual users. On the personal front, we need to know what is personal to us and share data only when necessary, and only with entities we know we can trust.
When we open our email, we should not click links embedded in unsolicited messages, as they may open an unsecured and harmful webpage. Pay attention to the URL and check that it begins with "https://": the "s" indicates that the connection is encrypted. Do not give unnecessary access to cookies, and delete cookies from your browser from time to time. These are some of the precautionary measures you, the user, can take to secure your data.
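The "https://" check above can even be automated before following a link. A minimal sketch in Python (the function name is mine, and note the caveat in the comment):

```python
from urllib.parse import urlparse

def looks_secure(url):
    """Rough first-pass check: does the link use HTTPS at all?
    HTTPS only means the connection is encrypted -- it does NOT
    mean the site behind it is trustworthy."""
    return urlparse(url).scheme == "https"

print(looks_secure("https://example.com/login"))  # True
print(looks_secure("http://example.com/login"))   # False
```

Password managers and mail filters apply the same kind of check, among many others, before warning you about a suspicious link.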
Second, we need our governments to take the necessary steps to regulate Big Tech and protect individuals' rights to data privacy. The European Union's General Data Protection Regulation (GDPR), enforced since 2018, gives individuals more control over their personal data. Under the law, data controllers must not collect any personal data without the consent of the data subjects; they must disclose any data collection, declare its lawful basis and purpose, and state how long the data is retained and whether it is shared with any third parties or transferred outside the EEA.
In 2020, the California Consumer Privacy Act (CCPA) came into effect in the United States to enhance privacy rights and consumer protection for California's residents. The law empowers residents to know what personal data is being collected about them and whether it is sold to third parties, and to request that businesses delete their personal data. The proposed Personal Data Protection Bill in India seeks to regulate the collection, storage, and handling of personal data; however, there are looming fears about how the law might turn the country into an "Orwellian state", owing to an exemption allowing government bodies to access personal data. More regulation is likely to come in 2021.
Third, we need to innovate new ways of building data products that give primacy to data privacy. For instance, in 2018 the Pittsburgh-based PNC Bank piloted a card with a dynamic CVV, where the card's CVV changes every 30 to 60 minutes. Dynamic CVV technology was created to fight card-not-present fraud, which has been rising for years. In another example, passwords are being replaced with cryptographic keys and multiple layers of biometrics.
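The internals of PNC's pilot are not public, but a dynamic CVV can be sketched in the spirit of TOTP (RFC 6238): a shared secret plus the current time window yields a short-lived code. The function below is purely illustrative; real card schemes use certified hardware and different key handling, and the 30-minute interval is an assumption taken from the pilot's description:

```python
import hashlib
import hmac
import struct
import time

def dynamic_cvv(card_secret, interval_seconds=1800, now=None):
    """Derive a 3-digit code that changes every `interval_seconds`,
    TOTP-style. Illustrative only, not a real card-scheme algorithm."""
    t = int((time.time() if now is None else now) // interval_seconds)
    msg = struct.pack(">Q", t)                       # time-window counter
    digest = hmac.new(card_secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{code % 1000:03d}"

secret = b"demo-card-secret"
# Stable within one 30-minute window, then it rolls over:
print(dynamic_cvv(secret, now=0) == dynamic_cvv(secret, now=100))  # True
```

Because a stolen code expires within the window, a skimmed CVV is worth far less to a fraudster than a static one printed on the card.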
Signal, a California-based messaging application run by a not-for-profit organisation, offers users end-to-end encryption. Signal's "Sealed Sender" feature makes conversations more secure, as the platform cannot access private messages or media, or store them on its servers. While WhatsApp provides end-to-end encryption for messages, it can still access other private information, another good reason to switch.
In another such innovation, Presearch is a decentralised, open-source search engine with enhanced privacy features. Built on a blockchain, Presearch rewards its users with PRE crypto tokens; it does not track or store any information or searches, so users stay in control of their data. Along similar lines, Tim Berners-Lee, creator of the World Wide Web, leads an exciting project called "Solid" (derived from "social linked data") that aims to radically change the way web applications work today, giving users the freedom to choose where their data resides and who is allowed to access it.
Solid is all about PODS, personal online data stores. Each individual has a pod in which all their personal data is stored, and they may choose to host it wherever they wish. Instead of uploading data to remote services, services are granted permission to access the data that lives in your pod.
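The inversion Solid proposes, data stays with the user and services come asking, can be sketched in a few lines. Everything below is invented for illustration (real Solid uses Web Access Control and linked-data vocabularies, not Python classes):

```python
# Toy sketch of the Solid idea: data lives in the user's pod, and each
# service must be explicitly granted read access per resource.
class Pod:
    def __init__(self, owner):
        self.owner = owner
        self._data = {}
        self._grants = {}  # resource -> set of service ids allowed to read

    def store(self, resource, value):
        self._data[resource] = value

    def grant(self, resource, service_id):
        self._grants.setdefault(resource, set()).add(service_id)

    def read(self, resource, service_id):
        if service_id not in self._grants.get(resource, set()):
            raise PermissionError(f"{service_id} may not read {resource}")
        return self._data[resource]

pod = Pod("alice")
pod.store("contacts", ["bob", "carol"])
pod.grant("contacts", "calendar-app")
print(pod.read("contacts", "calendar-app"))  # → ['bob', 'carol']
# pod.read("contacts", "ad-network") would raise PermissionError
```

The crucial difference from today's web: revoking "ad-network" is one local change in your pod, not a request to a company's servers hoping they delete their copy.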
Interesting, isn’t it?
Picture: Yang Jing on Unsplash