This ain’t your granddaddy’s privacy battle.
Times were simpler when postcards were the big privacy invasion scare.
Today, our personal privacy is under siege by veiled government surveillance programs and the countless tech company Trojan Horses.
Privacy, per Merriam-Webster, is defined as the quality or state of being apart from company or observation, or freedom from unauthorized intrusion.
Technical innovations in the past twenty years have blurred the lines of “apart from company” and “unauthorized intrusion,” and now our personal privacy is under attack from multiple fronts.
Our locations are constantly being tracked on our phones, which are borderline inseparable from our bodies. We are under constant surveillance.
Social media platforms know more about us than we should be comfortable with.
Our sensitive information is floating around and being exchanged for a myriad of unauthorized purposes.
Many personal privacy advocates have taken to blockchain and cryptocurrency entrepreneurship to build solutions that address the concerns of our dwindling right to privacy in the digital world.
Technological advancements like blockchain and zero-knowledge proofs have given the pro-privacy debate a new gust of wind. The beauty of these solutions is that they offer encryption, or at least partial obfuscation, on a massive scale.
Privacy coins such as Monero and Zcash give us the freedom to transact without being tracked, but this could come at the prohibitively high cost of empowering and enabling criminal activity.
Blockchain-based browsing and social media platforms like BAT, Steemit, and Sapien offer an escape from a manipulative data-mining browsing and social experience.
The following article explores the evolution of privacy in contemporary society, how the digital world has warped the reality of privacy and the rumbling dangers that come with it, and how blockchain and cryptocurrency projects offer a solution.
A Contemporary Legal History of Privacy
Privacy as we know it is a relatively recent development in human society. Our right to privacy isn’t explicitly stated in our Constitution and has been primarily defined by legal precedents, many of which haven’t accounted for the rapid societal change ushered in by the digital era.
The rise of a private tech oligarchy has posed new paradigms, in which a slow-moving bulwark of a government continually plays a game of iron-fisted catch-up.
The government is in a precarious position when it comes to dishing out judgments against tech companies. These cases require light but decisive footwork to avoid stepping over and stifling private enterprise, while simultaneously protecting civilians from a very real bogeyman in the dark.
The following are a handful of the legal precedents that have helped to dictate where the United States stands on personal privacy today:
- The Fourth Amendment to the United States Constitution (1791): “The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.”
- “The Right to Privacy” (1890): Considered one of the most influential essays in American law, “The Right to Privacy” is one of the first articles advocating for a right to personal privacy, and defined privacy as a “right to be let alone.” One of the authors of the essay, Louis Brandeis, would later become an influential Supreme Court Justice.
- Smith v. Maryland (1979): The case that solidified the “Third Party Doctrine,” Smith v. Maryland affirmed that “a person has no legitimate expectation of privacy in information he voluntarily turns over to third parties.”
This information could be anything from cell-phone location data and bank records to credit card purchases (including where you bought your last cup of coffee) and, technically, anything else handed over to a third party. The government can obtain it all with relative ease.
- United States v. Jones (2012): Police attached a GPS tracking device to Antoine Jones’ Jeep and tracked his movements for weeks, confirming their suspicion that he was a drug dealer. The Supreme Court ruled that the GPS tracking violated Jones’ right to privacy, since the device was physically placed on his property.
The main takeaway here is that there appear to be limitations on the scalability of law enforcement. Louis Menand noted in an article titled “Nowhere to Hide” in The New Yorker that the police could have theoretically trailed Jones’ Jeep by car or helicopter, or better yet stationed an officer on every street corner, and their evidence would have been admissible in court.
The fact that the technology was physically placed on the Jeep matters, but the line starts to blur. Our locations are constantly being tracked on our smartphones and wearables, and we don’t really seem to mind. In fact, it’s quite the value-add to navigate the world by opening an app, or having your watch let you know how much you didn’t exercise today.
Here’s where it gets real: a small loophole in the judgments of Smith v. Maryland and United States v. Jones exposes anyone and everyone to mass surveillance. Your autonomy, privacy, and security seem to hang by a thread if the government (or anyone) can gain access to your location history and current location at any moment.
If the companies you give your location, thumbprint, and other such information to are considered “third parties,” then the government technically should be able to access that information if warranted.
- That brings us to the Apple-FBI skirmish following the San Bernardino massacre in 2015, in which two terrorists, Syed Rizwan Farook and Tashfeen Malik, murdered fourteen people and wounded twenty-two before being killed. When the police retrieved Farook’s iPhone, things got sticky in the digital world yet again, and we saw what CNBC called “one of the highest-profile clashes in the debate over encryption and data privacy between the government and a technology company.”
The National Security Agency wasn’t able to unlock the phone, so the FBI asked Apple to unlock its own device. Apple declined on the basis that the order was “unreasonably burdensome” and that it could lose customers if it allowed third parties to unlock its phones. The case quickly started circulating through the courts, but the FBI found a third party selling an unlocking device and withdrew the case.
This situation is relevant because it shows that while your data may be currently preserved by whichever third party you’ve entrusted it to, this protection is next on the government chopping block.
Situations such as the FBI versus Apple squabble help paint the contest between anonymity and safety. The privacy debate often ends in an unresolved quagmire; a state of stasis that inevitably moves towards the extinction of privacy due to rapid advancements in technology.
To avoid complicating the issue, let’s use Occam’s razor to split the issue of privacy into two simple camps: for (government) power and for (corporate) profit.
The government’s primary utility for surveillance is for control, whether that be protecting its citizens from harm or becoming some dystopian 1984 Orwellian authority.
A corporation’s primary utility for surveillance is to harvest and commoditize the information, whether that be facilitating more profitable advertisements/sales or auctioning off consumer information.
The evolution of data and privacy protection within both groups is interesting, but the case for government power takes the ethical dilemma cake. The search for company profit pales in comparison to the government’s tug of war between its duty of protection and its duty to support its citizens’ rights.
Uncle Sam likely doesn’t give a shit if you bought a slow cooker on Amazon, nor does he want to upsell you a cookbook based on your browsing behavior.
A government has a responsibility to keep its citizens safe, and surveillance and data monitoring have become a critical tool to keep the criminal underworld at bay.
The reality is that the world can be a nasty place, and not everyone wants to hold hands and sing Kumbaya. Human trafficking, child pornography, and terrorism are just a few of the unfortunate realities that governments around the world try to stop and are able to do so with moderate success. Without some sort of public surveillance, the government’s ability to stop the bad guys is substantially undermined.
The guiding question presents itself: how do we keep power (money, resources) away from the bad guys, and simultaneously keep the good guys from infringing on our privacy?
According to a 2016 statement by the Assistant Secretary of the Treasury for Terrorist Financing, Daniel Glaser, ISIL (ISIS) raised a whopping $360 million in revenue per year from taxing, extorting, and other activities.
This money was being used to fund day-to-day activities, as well as to support ISIS terrorist cells around the world. The majority of this money is likely fiat and can potentially be confiscated or throttled when tracked. The faster the money gets traced, the slower terrorism can spread, and the more lives are potentially saved.
However, what if ISIS were to make use of cryptocurrency, an often untraceable monetary asset that can be sent in massive sums from anywhere to anywhere at any time? The ability to send an untraceable amount of money nearly instantly anywhere in the world is an attractive feature of private cryptocurrency but could be catastrophic if utilized by criminals.
Privacy projects are decentralized and don’t have a central authority to shut down any illicit activity. As you can imagine, this poses an enormous issue for counter-terrorism units. Granting the government the ability to track our transactions in exchange for saving our lives seems like a more than fair deal, but it’s a poor hedge against an omnipotent totalitarian regime in the future.
One side of the financial tracking debate views privacy coins as dangerous enablers of chaos and disorder, and rightfully so.
The other side of the debate views privacy coins as what could potentially be our last beacon for future generations’ sovereignty, and rightfully so.
The ability to spend our hard-earned income as we please, within reason, is a critical component of our personal autonomy, and limiting it would throttle our existence.
The more popular examples hover around transactional privacy and include privacy coins such as Monero, Zcash, Dash, and PIVX. The nucleus of the privacy feature is the use of stealth addresses, encryption, or some other sort of identity masking feature to disguise the identity of the user(s).
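For the curious, the core idea behind a stealth address can be sketched in a few lines: the sender derives a fresh, one-time address for every payment, so outside observers can’t link two payments to the same recipient. The snippet below is a deliberately simplified illustration that uses plain hashing where real coins like Monero use elliptic-curve Diffie-Hellman; none of it reflects any coin’s actual code.

```python
import hashlib
import secrets

def one_time_address(recipient_pub: bytes) -> tuple[str, bytes]:
    """Derive a fresh one-time address for a single payment.

    Real stealth addresses use elliptic-curve Diffie-Hellman; plain
    hashing here only illustrates the shape of the scheme.
    """
    nonce = secrets.token_bytes(32)  # sender's per-payment randomness
    shared = hashlib.sha256(nonce + recipient_pub).digest()
    address = hashlib.sha256(shared).hexdigest()
    return address, nonce  # the nonce is published alongside the transaction

# A stand-in for a recipient's public key (invented for illustration).
recipient_pub = hashlib.sha256(b"recipient-key").digest()

# Two payments to the same recipient yield unlinkable addresses.
addr1, _ = one_time_address(recipient_pub)
addr2, _ = one_time_address(recipient_pub)
assert addr1 != addr2
```

Because only the recipient can reproduce the shared secret from the published nonce, only they can recognize and spend funds sent to the one-time address, while the chain shows two unrelated-looking destinations.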
“Privacy may actually be an anomaly”
– Vinton Cerf, Co-creator of the military’s early 1970s Internet prototype and Google’s Chief Internet Evangelist
Today’s companies seem to know us better than we know ourselves; like a creepy neighbor that’s always trying to make enough small talk to sell you something.
There’s little we can do, or should do, to stop businesses attempting to make a profit, but the rapid advances in data collection and audience targeting could have scary unintended consequences.
Companies like Google or Facebook don’t technically sell your data, but they do make it available in ad networks to advertisers that use their ad-buying tools – and generate some meaty profit doing so.
The better data a company has, the more informed sales, marketing, and advertising decisions it can make. Instead of throwing ad spaghetti on a wall and hoping something sticks, advertisers can tailor messages to a specific targeted audience. Since these ads are more relevant to these audiences, they are more likely to purchase the good or service.
“Data is used to better serve more relevant ads. I just got an ad for dog toys, which is great because I spoil my dog. If there wasn’t any data to use, I could be getting something way less relevant like ads for discount oil changes from a repair shop across the country.”
– Troy Osinoff, Founder of digital marketing agency JUICE and former Head of Customer Acquisition at Buzzfeed
While data will always play an essential role in the consumer economy, social media has increased the ability to collect data and raised the rate of collection to unprecedented levels. Since the transition happened in the wake of the enormous value-add of social media, the average person hasn’t really been bothered by how much of their data is constantly being collected.
“People have really gotten comfortable not only sharing more information and different kinds, but more openly and with more people. That social norm is just something that has evolved over time.”
– Facebook CEO Mark Zuckerberg in 2010.
The danger of online companies luring you into new comfort zones and collecting your data is deeper than merely trying to sell you stuff. The peril lies when these large pools of data are mismanaged and fall into the hands of malicious third parties.
In May 2018, an Oregon couple was at home talking about hardwood floors. The husband received a phone call from one of his employees in Seattle, who said he had received an email with the full conversation. The couple’s Amazon Echo (Amazon’s “smart speaker”) had recorded the conversation and sent it over.
Amazon’s explanation of the situation was as follows:
“Echo woke up due to a word in background conversation sounding like ‘Alexa.’ Then, the subsequent conversation was heard as a ‘send message’ request. At which point, Alexa said out loud ‘To whom?’ At which point, the background conversation was interpreted as a name in the customer’s contact list. Alexa then asked out loud, ‘[contact name], right?’ Alexa then interpreted background conversation as ‘right’. As unlikely as this string of events is, we are evaluating options to make this case even less likely.”
While this story alone should be unsettling for anyone with a smart device in their home, that’s just the tip of the iceberg.
All things considered, this could have gone much worse. Once it hears its wake word, Alexa, the Echo activates and starts sending a recording to Amazon’s computers. Woe to be named Alex or Alexa and have an Echo.
As was revealed in the Snowden leaks, the National Security Agency has been able to secretly hack into the main communication links between Google and Yahoo data centers and potentially collect the data from hundreds of millions of user accounts.
What if hackers managed to extract what could be millions of conversations from Amazon’s database?
If this type of coordinated Internet of Things hacking sounds a bit far-fetched, think again.
Lappeenranta is a city in eastern Finland and is home to around 60,000 people. In late October 2016, hackers launched a Distributed Denial of Service (DDoS) attack on the heating systems, leaving the residents of at least two housing blocks without heat in subzero weather.
Now imagine a hack at the scale of millions of IoT devices for intimate conversations/videos, or worse, forcing every smart speaker to play DJ Khaled at the same time.
Unless you were living under a rock in 2018 (you may have been better off!), you’ve probably heard of the Facebook-Cambridge Analytica data scandal.
The scandal revolved around the personally identifiable information of over 87 million Facebook users that was sold to politicians to potentially influence voters’ opinions.
The majority of the information was harvested through personality quizzes that required users to check a box granting the page or site access to everything from their profile information to that of their friends.
To users fueled by a frantic need or pure boredom, this was a bargain.
Lo and behold, millions of profiles ended up in the hands of Cambridge Analytica. The information likely contained the public profile, page likes, and birthdays of users, as well as access to users’ news feeds, timelines, and messages. Cambridge Analytica would then create psychographic profiles of the data subjects, which may have been used to create the most effective advertising that could influence a particular individual for a political event.
The politicians and campaigns who purchased the information were behind the 2015 and 2016 campaigns of Donald Trump and Ted Cruz, as well as the 2016 Brexit vote.
An important distinction many people blur is that the Facebook-Cambridge Analytica scandal wasn’t a hack. People voluntarily consented to give up their information for something as innocuous as a quiz. However, just a glimpse behind the scenes of the impacts and movements of the data economy is all it takes to unnerve a nation.
Even worse, the credit reporting agency Equifax was actually hacked for even more sensitive information (social security numbers, birth dates, addresses, etc.) of 143 million Americans in 2017.
So, not only do we not know who potentially has our information, but this information can directly be used to pry open our bank accounts, take out loans, and make purchases in our name.
In the boardrooms of any publicly traded company, like Facebook and Google, a major conflict of interest exists between maximizing shareholder value and safeguarding their users’ data.
Although the looming threat of advertisers cashing in on our privacy is concerning, the actual danger still lies in third parties that can and will use this information with bad intentions.
Up until now, anyone concerned with their personal privacy has been forced with a dauntingly uncomfortable decision: put up with it and live a normal life, or forego the luxuries afforded by the Internet and social media and go off the grid.
Anonymity and data-privacy focused blockchain projects aim to protect your online activity, account information, and browsing behavior from unknowingly falling into the corporate coffers, personal information data markets, or the hands of malicious third parties.
One such project, the Basic Attention Token (BAT), helps power and incentivize the use of its anonymity-focused browser. BAT’s Brave browser utilizes smart contracts to allow advertisers to send ads with locked payment tokens directly to users. Users can then use their earned BAT on several things like premium articles and products, donations to content creators, data services, or high-resolution pictures.
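To make the “locked payment tokens” idea concrete, here’s a toy sketch of the escrow pattern such a platform could use: an advertiser locks tokens alongside an ad, and the user unlocks them by viewing it. Every name and number below is invented for illustration; this is not BAT’s actual contract logic.

```python
# Toy escrow: advertisers lock tokens with an ad; a user who views the
# ad claims the payment exactly once. All identifiers are invented.
class AdEscrow:
    def __init__(self):
        self.locked = {}  # ad_id -> {"tokens": int, "claimed": bool}

    def fund_ad(self, ad_id: str, tokens: int) -> None:
        """Advertiser locks tokens when publishing an ad."""
        self.locked[ad_id] = {"tokens": tokens, "claimed": False}

    def claim(self, ad_id: str) -> int:
        """User views the ad and unlocks its payment, once."""
        entry = self.locked[ad_id]
        if entry["claimed"]:
            return 0
        entry["claimed"] = True
        return entry["tokens"]

escrow = AdEscrow()
escrow.fund_ad("ad-42", 5)
print(escrow.claim("ad-42"), escrow.claim("ad-42"))  # 5 0
```

The point of the pattern is that payment flows directly from advertiser to viewer, so no intermediary needs to build a profile of the user to broker the transaction.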
BAT, and many other projects with Facebook and Google in their scopes, have business models that revolve around replacing the third-party intermediary component of ad networks. As a result, platforms can offer a browsing or social experience without collecting or storing extensive personal data.
When Data Gets Scary 👻
Remember the precedent set in United States v. Jones (2012), where the government can’t invade your privacy by physically placing a GPS device on you or your property, but all public surveillance is okay?
It’s estimated that there are over 40 million security cameras in the United States, and roughly 245 million professionally installed video surveillance cameras globally. The video surveillance industry is estimated to generate roughly $25 billion worldwide and growing.
The current state of video surveillance essentially creates portholes all over the world. While this near omnipresent range of vision illuminates many parts of the world, the footage must still be watched and sifted by human eyes and squishy brains.
Advances in facial recognition software, artificial intelligence, and machine learning allow for transcending the limitations of the human condition. What would have to be done manually could be aggregated and analyzed by algorithms, revealing all sorts of data and pattern analysis never before possible at scale.
For example, let’s say an alert goes out looking for a white male wearing a red shirt who robbed a gas station and left in a Dodge Durango in Austin, Texas. Instead of police manually scanning through footage and watching all cameras until they find someone who matches these details, an AI/ML-backed system would hypothetically be able to pull up all current matches in real time with a high degree of specificity.
“We found 640,000 ‘white’, 320,000 ‘males’, 20,000 ‘with red shirt’, 40 ‘with Dodge Durango’. One is within two miles of the alert. Identity is Kyle Joseph Mitchell, height 6’2, age 31, last location Chevron 2710 Bee Caves Rd, Austin, TX 78746, USA. Shall we proceed to monitor and notify all local units?”
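Stripped of the AI mystique, the narrowing step in that hypothetical alert is just attribute matching over camera-derived records. The sketch below uses an entirely invented dataset, with every field and name made up for illustration:

```python
# Hypothetical camera-derived sightings; every field is invented.
sightings = [
    {"gender": "male", "shirt": "red", "vehicle": "Dodge Durango", "miles_from_alert": 1.8},
    {"gender": "male", "shirt": "red", "vehicle": "Ford F-150", "miles_from_alert": 0.4},
    {"gender": "male", "shirt": "blue", "vehicle": "Dodge Durango", "miles_from_alert": 5.2},
]

def matches(record, *, gender, shirt, vehicle, radius_miles):
    """True if a sighting fits every attribute in the alert."""
    return (record["gender"] == gender
            and record["shirt"] == shirt
            and record["vehicle"] == vehicle
            and record["miles_from_alert"] <= radius_miles)

hits = [r for r in sightings if matches(r, gender="male", shirt="red",
                                        vehicle="Dodge Durango", radius_miles=2.0)]
print(len(hits))  # 1
```

The hard part of a real system is generating those attribute records from raw video; once they exist, narrowing millions of sightings to one person is a trivial filter like this one.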
Granted, we may be a bit far off from this level of effective analysis and output, but things get tricky if or once it gets here. China’s capital, Beijing, is currently one hundred percent covered by surveillance cameras, according to the Beijing Public Safety Bureau. Sure, the short-term effects might be higher levels of security and safety, but in the hands of an authoritarian or corrupt administration, or of hackers, the future turns dystopian.
Data receives its value from pairing and analysis, and according to security expert Bruce Schneier, something like our location data “reveals where we live, where we work, and how we spend our time. If we all have a location tracker like a smartphone, correlating data reveals who we spend our time with—including who we spend the night with.”
Throw in some behavior analysis and predictions, and the majority of freedoms are immediately disabled.
Machine learning relies on a virtuous cycle where the software improves as it collects more data, and advanced computing allows for rapid data analysis across multiple data sets.
For example, an advanced state of mass surveillance would be able to track something as specific as when and where you’re going to eat before you even know it by analyzing your location, time spent between food transactions, and usual restaurant choices.
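A crude version of that prediction needs nothing fancier than counting past visits. The sketch below, with an invented location history, guesses the most frequent restaurant choice near a given hour of the day:

```python
from collections import Counter

# Invented location history: (hour_of_day, restaurant) pairs.
visits = [(12, "Taco Stand"), (12, "Taco Stand"), (12, "Salad Bar"),
          (19, "Noodle House"), (19, "Noodle House")]

def predict_restaurant(history, hour):
    """Guess the most frequent past choice within an hour of this time."""
    nearby = [place for h, place in history if abs(h - hour) <= 1]
    if not nearby:
        return None
    return Counter(nearby).most_common(1)[0][0]

print(predict_restaurant(visits, 12))  # Taco Stand
```

A real system would fold in far richer signals, but even this frequency count shows how quickly “harmless” location pings become a model of your routine.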
This information seems innocent, and frankly quite useless other than its commercial potential, but its implications on our psychology and freedom are enormous.
In a TED talk by Glenn Greenwald, the journalist best known for his role in publishing a series of reports on government global surveillance programs based on classified documents leaked by Edward Snowden, Greenwald notes,
“When we’re in a state where we can be monitored, where we can be watched, our behavior changes dramatically. The range of behavioral options that we consider when we think we’re being watched is severely reduced.”
Earlier this year, the Chinese government implemented a system of monitoring and grading the behavior of every citizen and assigning them citizen scores.
If a citizen does something viewed as unsatisfactory, such as receiving a parking ticket or protesting the government on social media, they’ll get a few points docked off their score.
If they do something favorable, like a good public deed or helping their family in unusually tough times, they’ll receive a few points.
The high-score all-stars will receive perks like favorable bank loans or discounted heating bills, while the low-score dunces will be barred from buying certain things, such as high-speed train tickets.
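Mechanically, the scheme described above is just a running ledger of point adjustments, with thresholds that gate perks and bans. Here's a toy sketch; the starting score, point values, and cutoffs are all invented, not the real system's:

```python
# Toy citizen-score ledger; all numbers are hypothetical.
class CitizenScore:
    def __init__(self, score: int = 1000):
        self.score = score

    def record(self, event: str, points: int) -> None:
        """Positive points for favorable acts, negative for infractions."""
        self.score += points

    def perks_unlocked(self) -> bool:
        return self.score >= 1000  # hypothetical cutoff for perks

    def travel_restricted(self) -> bool:
        return self.score < 900    # hypothetical cutoff for bans

c = CitizenScore()
c.record("parking ticket", -50)
c.record("public good deed", +30)
print(c.score, c.travel_restricted())  # 980 False
```

The unnerving part isn’t the arithmetic, which is trivial, but the surveillance pipeline feeding `record()` with every parking ticket and social media post.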
The program is currently being rolled out in a few dozen cities and will be put into full gear as a national credit system in 2020.
According to foreignpolicy.com, “the national credit system planned for 2020 will be an ‘ecosystem’ made up of schemes of various sizes and reaches, run by cities, government ministries, online payment providers, down to neighborhoods, libraries, and businesses, say Chinese researchers who are designing the national scheme. It will all be interconnected by an invisible web of information.”
China, a country that will be blanketed with nearly 626 million surveillance cameras by 2020, will have an inordinate amount of data on everything its citizens are doing, and essentially thinking.
“If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place.”
–Former Google CEO Eric Schmidt in a 2009 CNBC special “Inside the Mind of Google”
This seems to be a common sentiment. If you’re not doing anything illegal or wrong, why should you hide? After all, what sort of human being that isn’t a murderer or drug dealer would even want to exist without being watched? The life unexamined (by someone else) is not worth living, right?
The fact that there are fewer and fewer places to hide brings up the question of whether we have a right to hide at all.
Many cryptocurrency and blockchain advocates share an unwavering support for their rights to privacy. The degrees of this privacy range from a desire for data protection to a firm and resolute mission to forever keep their identity off the grid.
Data truly is a toxic asset, where any aggregator like Facebook, Google, Amazon, or even the United States government takes on a huge risk when storing it. Over time, the data deposits become richer and a much more lucrative target for hackers.
Mass surveillance throttles our desire for experimentation, creativity, adventure, and dissent.
The movement for privacy isn’t so much for preventing the Donald Trump campaign from knowing you’re a Hufflepuff when you’ve been telling everyone you’re a Gryffindor. It’s for protecting your future and that of the next generations from being born in a world that is stifled by transgressions gone unaddressed.
If the rapid evolutions in artificial intelligence are any indicator, a future built without a sturdy foundation for personal human privacy is a scary place.
Thankfully, many of us live in countries where we still have a say to argue citizen scores and the like. However, many of the freedoms we would be so quick to fight to protect are slowly escaping us under the veil of cool new social platform features and sporadic government-orchestrated data heists.
Privacy-focused blockchain projects remove the need for a central authority, as well as the burden of security for data. These solutions can prevent another Equifax hack from happening, which is already an enormous value-add.
If there is a demand for greater privacy, competitors will arise to offer it. That is, of course, if that alternative is frictionless to adopt. (*hint* hey blockchain entrepreneurs, spend less time on jargon-infested soap opera whitepapers and more on UI/UX).
However, the current state of privacy blockchain innovation is imperfect at best. According to Ethereum Co-Founder Vitalik Buterin in “Privacy on the Blockchain,”
“It’s much harder to create a ‘holy grail’ technology which allows users to do absolutely everything that they can do right now on a blockchain, but with privacy; instead, developers will in many cases be forced to contend with partial solutions, heuristics and mechanisms that are designed to bring privacy to specific classes of applications.”
For now, the best we can do is monitor and test privacy-focused solutions like little saplings. The more of a demand there is for privacy, the greater the investment in attention and capital there will be to build up a satisfactory alternative.
While our right to privacy is consistently being decided by various court cases, we should be asking ourselves the question of whether we really want it.
In a world where we’re so quick to give up our Facebook profile data for something as meaningless as a Harry Potter character quiz, or our fingerprints to Apple, or even our at-home conversations to Amazon, it’s difficult to envision mass adoption of a privacy alternative for our transactions or browsing.
We’re so easily triggered by the idea of our government overstepping its jurisdiction into our private lives.
- Mandatory thumb-prints? Nuh uh.
- Constant location tracking? No way, Jose.
- A speaker in our home that listens to our conversations? Absolutely not.
However, for Apple, Facebook, Google, and Amazon, we’re quick to volunteer without any additional thoughts.
More important than any immediate privacy solution is the firm understanding of why privacy is too important to lose sight of.
Keep yourself sharp by following monumental privacy cases as they will inevitably continue to appear, educate yourself on what steps you can take today to encrypt your life, and tell Alexa to share this article.
You gotta fight, for your right, to be private.
This article was originally published at CoinCentral by Alex Moskov.