As data trafficking eclipses the value of oil, a new kind of slavery looms. Expanding the definition of personhood may be our only defense.
The most grotesque bits of history have been deemed “lawful” only by a simple trick of semantics—designating victims non-persons. Humans? Perhaps. Persons? Not so much.
The suspect history of “human personhood” is implied in its brazen redundancy. That recognizing a “human” as a “person” could represent such a milestone in law would seem absurd—if it weren’t for this:
Law has long preferred that most humans be something else—property.
Owing to this, the emancipatory movements of the past 800 years, from the Magna Carta to the abolitionists to the suffragettes, have found themselves focused on expanding this legal category of “person” to include their own members.
Of course, any staggered rollout acknowledging selected groups of humans as persons is going to be nauseating. First propertied white men, then all white men, then black men (sort of), then women (sort of), and so on. Nevertheless, personhood today stands as the most ubiquitous legal protection in the world—the guarantee of “equal treatment under the law”.
And yet, despite those centuries of maddening (and ongoing) effort, we are now happily and blithely submitting to a world in which none of us are persons—a world in which we are all, once again, the property of another.
For while the “personhood” of cocaine hippos and living rivers and corporations and artificial intelligence continues to root abstracted blooms into the bones of law, the personhood of humans is quietly, without much notice, vanishing. This vanishing is not a function of erasure, but of migration. It is a paradigmatic migration—it is a digital migration.
We have now migrated a majority of our waking hours beyond the analog jurisdiction of the law of land—to the digital world of the web.
I don’t have to convince you how much of our lives we already live digitally—take a look at your smartphone screen time. Then add in your time in your smart car, your smart home, your Fitbit, your time on Netflix and YouTube, Google and Substack, email, etc. The average American today spends an accumulated 11 hours, 54 minutes daily connected to some form of media. That leaves an average of just 4 waking hours in which we are not being digitized in some sense. But that’s just the beginning.
The intensity of our use is set to increase drastically. Facebook/Meta, which commands 91% of the social media market, has signaled its intention to digitize the entirety of human life: Meta will be focused on turning life itself into a digital interface for the physical world.
As Zuckerberg explains:
It's a virtual environment. We can be present with people in digital spaces. And you can kind of think about this as an embodied Internet that you're inside of rather than just looking at.
An internet that you are inside of—a fully digitized you. And yet this digitized you, which already constitutes the vast majority of “you” to socioeconomic and political paradigms, has no legal status as a person. No “equal protection under law”. No rights.
Here you are not yours, but the property of someone else. A property called data.
My friend Roger McNamee, a key activist in the fight against the commoditization of our personal data, wrote for Time Magazine recently:
At a minimum, Congress must ban third-party use of sensitive data, such as that related to health, location, financial transactions, web browsing and app data.
…just as it banned child labor in 1938.
What follows are philosophical and legal grounds upon which such a ban might base itself.
We Are Not Property.
Over the past months I’ve met with some top tech and human trafficking litigators, and I’ve learned that the following arguments (though somewhat novel) are potentially legally compelling—so you can be reasonably sure you aren’t wasting your time.
***Please note: the following presumes the eventuality of an “internet we are inside of”, wherein the actions of our “self” and those represented by our “data” become impossible to distinguish.
At the heart of both the corporate agenda and the ethical tech movement is the process of “anonymization”, which is said to legally turn our data into property by stripping it of identifiable attributes like name, address, and so on. Most ethical tech advocates like it (think of the Netflix hit “The Social Dilemma”) because anonymization holds the promise of protecting our privacy. Corporations love it because anonymized data ostensibly turns personal information into a commodity. On the surface, then, “anonymized data” might seem the perfect solution for everyone. But a deeper look reveals serious intrinsic problems.
To begin, the corporation’s claim that anonymization imbues our data with the two critical features of a commodity—fungibility and alienability—is false.
Fungibility means that an item (your personal data, in this instance) is indistinguishable from another item (my personal data, in this instance), and therefore interchangeable as a commodity. A dollar bill is fungible because it is effectively identical to—and thus interchangeable with—another dollar bill. A soy bean is fungible because it is interchangeable with another soy bean.
Alienability is the ability of a thing (your identity, in this instance) to be made wholly alien to its prior owner (you, in this instance). In other words, when something of yours is alienable it has the potential to become not yours.
The corporate claim is that once personal data has been anonymized it becomes fungible (“you” become indistinguishable from “me”) and alienable (you are effectively no longer “you”, as “you” have been made anonymous)—and may therefore be treated as a commodity, no longer subject to data protection laws.
This assurance of anonymized data is ubiquitous in privacy policy language across web platforms. Take, for instance, Substack’s own declaration of anonymized data:
“In certain cases, we may anonymize your Personal Information in such a way that you can no longer be identified as an individual, and we reserve the right to use and share such anonymized information to trusted partners not specified here.”
But these claims of anonymized data are demonstrably false.
While I encourage those interested to see the footnotes at bottom for my extended arguments and evidence1, including evidence of the failure of both Europe’s GDPR laws and Google’s FLoC system, the following is perhaps all the convincing anyone should need that anonymization is a false promise:
A remarkably exhaustive study of anonymized data has demonstrated that 99.98% of identities contained in any dataset can be re-identified down to the individual—using just 15 demographic attributes. For context: “data sets such as those traded by brokers for marketing purposes can contain orders of magnitude more [more than 15] attributes per person”.
This suggests that any corporate claim of “anonymized data” is effectively fraud.
(I recommend you try out the study’s interface to demonstrate your own risk of re-identification).
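The mechanics of re-identification are simple enough to sketch in a few lines. The toy records and attributes below are invented for illustration (this is not the cited study’s method): even with names stripped, a handful of demographic attributes acts as a fingerprint.

```python
from collections import Counter

# "Anonymized" records: names removed, demographics retained.
# All values are invented for demonstration.
records = [
    {"zip": "60614", "birth_year": 1984, "sex": "F", "vehicle": "sedan"},
    {"zip": "60614", "birth_year": 1984, "sex": "M", "vehicle": "sedan"},
    {"zip": "60614", "birth_year": 1990, "sex": "F", "vehicle": "suv"},
    {"zip": "73301", "birth_year": 1984, "sex": "F", "vehicle": "truck"},
]

def uniqueness(records, attrs):
    """Fraction of records uniquely pinned down by the given attributes."""
    keys = [tuple(r[a] for a in attrs) for r in records]
    counts = Counter(keys)
    return sum(1 for k in keys if counts[k] == 1) / len(keys)

# One attribute is rarely enough to single anyone out...
print(uniqueness(records, ["zip"]))                       # 0.25
# ...but combining just three re-identifies everyone in this toy set.
print(uniqueness(records, ["zip", "birth_year", "sex"]))  # 1.0
```

Scale this up to 15 attributes across a real population and the study’s 99.98% figure stops being surprising.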
THE SELF-OWN
I recently got into a bit of an argument with someone who could not conceive of anything better than “owning” one’s self. That’s the power of capitalist realism—the height of personal achievement has become the rhetorical “ownership” of one’s self.
It is no wonder, then, that the ethical tech movement has trouble conceiving of any better defense against data trafficking than a more literal self-ownership.
Though self-ownership seems initially empowering, this techno-libertarian move is one big self-own. The stance, taken up by Peter Thiel types (see The Sovereign Individual), is more likely to be the sort of human kryptonite that ensures the eternal slavery despots everywhere only dream about—the kind in which the subjects demand their own enslavement, mistaking slavery for freedom.
In a nutshell, the idea of self-ownership in the context of the web is that if we own our own data we may benefit financially from the trafficking of our digital forms. To this end, Data Unions are popping up to pool and negotiate the sale of our data as “data brokers” to third parties, passing off the benefits to union members.
While this sort of communizing seems initially cool, it almost certainly makes matters far worse.
To begin, if the self-ownership crowd gets their way and our data is something we stand to benefit financially from, it follows that all interaction will be subsumed within transaction. This phase shift would mark what I like to call subcutaneous capital—capital which disappears behind its own ubiquity. While subcutaneous capital could have interesting benefits (to democratic power, among other things), the potential upsides are dramatically outgunned by the certain downsides—especially as they relate to mass manipulation, self-destructive incentive structures, and the effect of ubiquitous transaction upon the sacred and spiritual.
But there are more concrete problems with self-ownership.
The amount of money an individual can make from selling their data is—and is likely to continue to be—paltry. A data union fancifully calling itself REKLAIM states that you may earn up to $10 a week to sell off your data (which I am tempted to just call your “digital body”, as soon enough that term will make all too much sense).
Anyhow, exactly what is being “reklaimed” by earning $10 a week? Humiliation?
To this point it is worth quoting Professor Shoshana Zuboff (coiner of the term “surveillance capitalism”, by which she means data trafficking) at length:
It is obscene to suppose that this harm can be reduced to the obvious fact that users receive no fee for the raw material they supply. That critique is a feat of misdirection that would use a pricing mechanism to institutionalize and therefore legitimate the extraction of human behavior for manufacturing and sale. It ignores the key point that the essence of the exploitation here is the rendering of our lives as behavioral data for the sake of others’ improved control of us.
So long as the movement to instill ethics into social technologies concedes to the sorry framing of data-as-property, property we will desire to be.
Human Data Trafficking.
There is no experiential equivalency between the physical body which is being trafficked and the data body which is being trafficked. Not yet, anyhow. The experience of being physically trafficked—be it debt bondage or forced labor or sex trafficking—is, inarguably, immeasurably more horrific in experiential terms than being data trafficked. Yet when I argue that data traffickers are engaged in a form of human trafficking on technical and philosophical grounds, I do so from a line of legal reasoning.
As we have established, personal data is in fact inalienable—therefore it is the extension of our person. Given this…
There are three main criteria for human trafficking:
Recruitment.
Violence, Coercion or Fraud.
Exploitation of the recruited for financial gain or material benefit.
Considering everything I’ve outlined, let’s take these one by one.
1. Recruitment.
Are we recruited to participate in social media? Obviously, the answer is yes. Advertisement constitutes recruitment.
2. Violence, Coercion, or Fraud.
Fraud: We already understand that the ubiquitous claims of “anonymized data” constitute fraud.
Done.
…But if you’re interested in additional aspects of the potential fraud and/or coercion of tech companies, including the intentional overriding of user agency by app developers, see the second set of footnotes at bottom2.
In the meantime, enjoy the succinct way professor Zuboff puts the bait-and-switch of false agency:
“We think we’re searching Google—Google is actually searching us.”
Which brings us, finally, to number three.
3. Exploitation For Financial Gain or Material Benefit.
Is our personal data being exploited for financial gain or material benefit?
Yeah. Corporate exploitation of personal data, or Surveillance Capitalism, has become the lifeblood of the digital corporation. Facebook/Meta, for instance, gets 63% of its earnings from trafficking in personal data, while such trafficking makes up 80% of Google’s revenue.
To quote The Economist, the world’s most valuable resource is no longer oil, but data. And the data boom has only just begun—and “exploitation for financial or material gain” is the name of the game.
Conclusion:
The recruitment, coercion, fraud, and exploitation of a person’s activities is a straight flush of trafficking and criminal enterprise.
Full Personhood—the protection of personhood extended to our data—should be established to justify punishment and protect the person.
FULL PERSONHOOD
A last note on the term “full personhood”.
There is good reason to choose the term “full personhood” and avoid using the term “digital personhood”. For while there exist arguments that do use the term “digital personhood” to describe an extension of human rights to our data, the same phrase is simultaneously being used by technologists and corporate litigators to argue that “algorithmic entities”—not humans—should be granted “digital personhood”.
If the history of law has taught us anything, it’s that any loophole can and will be exploited by the corporation. At the top of that history is the loopholing of THE “personhood” amendment itself—the 14th Amendment. Intended to grant personhood to the formerly enslaved, the amendment was subsequently used largely to argue for “corporate personhood”, instead. While loopholing cannot be totally obviated due to the elasticity of language, we should, at the very least, not shoot ourselves directly in the feet at the start. Avoid the term “digital personhood”.
How fantasy-land is a proposal for full personhood?
As mentioned, corporations have legal personhood. Algorithms and Artificial Intelligence may already claim legal personhood. Anti-abortion laws have established “prenatal personhood”. Rivers and parks have been granted personhood. Pablo Escobar’s “cocaine hippos” were granted “personhood” by the U.S. District Court for the Southern District of Ohio.
If the law can extend personhood to the faceless corporation or the algorithm, it can extend personhood readily to our data.
And really, they must.
As always, comments and participation welcome.
A DEEPER DIVE INTO THE FAILURE OF ANONYMIZATION.
A study on cellphone data alone states “four spatio-temporal points are enough to uniquely identify 95% of the individuals”.
And how many of our data points may be contained in a data set?
Via TechCrunch:
…data sets such as those traded by brokers for marketing purposes can contain orders of magnitude more attributes per person.
The researchers cite data broker Experian selling Alteryx access to a de-identified data set containing 248 attributes per household for 120 million Americans, for example.
248 data points per household—and only 15 are needed to re-identify you.
Defenders of anonymization from both the ethical-tech activist camps and corporate camps may still insist that such data sets are “weak” anonymizations—and that “hard” anonymizations cannot be re-identified.
Chief among such defenders is Europe’s General Data Protection Regulation (GDPR), which claims to be “the toughest privacy and security law in the world”.
But is it?
Tessian, a company specializing in “keeping companies GDPR-compliant”, released this article citing the 20 biggest GDPR fines of the past 3 years—NONE of the fines were for anonymizations that could lead to re-identification.
Instead, most of the GDPR fines were for simple front-end UX issues—failing to allow users to opt out of cookies, failing to provide extra GDPR-compliant scribble in their rarely-read privacy policies, and so on—while one or two were for failing to secure user data from random hackers, as in the Marriott data breach.
The closest thing to a fine for re-identifiable data was against Vodafone Italia, which shared personal data, along with telephone numbers, with third-party call centers—however, as Tessian states in its assessment:
Vodafone might have avoided this large fine by conducting regular audits of its data and properly documenting all relationships with third-party data processors.
In other words, if only Vodafone Italia had “properly” documented its relationships with third-parties, it would have been able to legally traffic personal data.
Given that 99.98% of the individuals in a commoditized data set can be re-identified—why would “the toughest privacy and security law in the world” fail to find a single re-identifiable anonymized data set across all European activity?
Any speculation would have to suppose that either the GDPR apparatus is simply not sophisticated enough to properly analyze the strength of anonymized data, or that weak anonymization is so ubiquitous that the GDPR simply doesn’t bother. Or both.
So where does that leave today’s data protection law? Utterly ineffectual.
Google to the Rescue?
To curb potential public outcry, Google’s Federated Learning of Cohorts (FLoC) promises increased privacy by keeping our specific interactions on our devices—away from centralized servers—and sending only “anonymized aggregations” of our data, or “cohorts”, back to the corporate servers.
Only this doesn’t improve matters.
To the contrary, researchers and critics have noted that the FLoC system actually increases the re-identifiability of the cohorts it collects. The Electronic Frontier Foundation has called it “a terrible idea” for privacy, additionally noting that “FLoC’s core objective is at odds with other civil liberties”, as even FLoC’s GitHub page itself admits:
…information about an individual's interests may eventually become public.
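A rough back-of-the-envelope sketch shows why a cohort ID helps rather than hinders tracking (all figures below are hypothetical, chosen only to illustrate the arithmetic): knowing which cohort a browser belongs to reveals identifying bits that can be combined with other browser signals.

```python
import math

# Hypothetical figures for illustration only.
population = 250_000_000   # browsers in the tracked population
cohort_size = 2_000        # users sharing a single cohort ID

# Knowing someone's cohort narrows the candidate pool from the whole
# population down to one cohort, revealing this many bits of identity:
cohort_bits = math.log2(population / cohort_size)
print(f"{cohort_bits:.1f} bits")  # ~16.9 bits

# Roughly 33 bits suffice to single out one person among 8 billion,
# so under these assumptions a cohort ID hands trackers
# half of a full fingerprint for free.
```

This is the core of the EFF’s fingerprinting objection: the cohort ID is one more stable signal to multiply against user agent, timezone, screen size, and the rest.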
Some rare honesty.
MORE ON FRAUD AND/OR COERCION.
There is more (if less clearly defined) fraud and/or coercion at work than the fraud of anonymized data alone: the user experience of every social media app presents itself as a world of personal agency—unmolested interactions of conscious participants in command of their personal experience. But that agency is specious. By their own accounts, app designers ensure that our “free agent” experience on social platforms renders us without agency, wholly addicted and predictable.
The manipulation of consumer behavior on behalf of advertisement is one thing. But when the general social agency of a person is invisibly undermined by algorithmic manipulations designed to drive up the value of the platform, a higher order of deception is occurring.
Via Business Insider:
"Three criteria are required to form a habit: sufficient motivation, an action, and a trigger," says Mezyk. The three-pronged approach, which is now standard among app developers, is based on the Fogg Behavior model, established by Stanford professor B.J. Fogg.
Or take Facebook co-founder Sean Parker’s admission that Facebook’s social interface is
“…exactly the kind of thing that a hacker like myself would come up with, because you're exploiting a vulnerability in human psychology."
Or take Facebook’s own study of “emotional contagion” in 2014, or their algorithmic prioritizing of the angry emoji over the like button.