A FRIEND OF mine, who runs a large television production company in the car-mad city of Los Angeles, recently noticed that his intern, an aspiring filmmaker from the People’s Republic of China, was walking to work.
WHEN HE OFFERED to arrange a swifter mode of transportation, she declined. When he asked why, she explained that she “needed the steps” on her Fitbit to sign in to her social media accounts. If she fell below the right number of steps, it would lower her health and fitness rating, which is part of her social rating, which is monitored by the government. A low social rating could prevent her from working or traveling abroad.
China’s social rating system, which was announced by the ruling Communist Party in 2014, will soon be a fact of life for many more Chinese.
By 2020, if the Party’s plan holds, every footstep, keystroke, like, dislike, social media contact, and posting tracked by the state will affect one’s social rating.
Personal “creditworthiness” or “trustworthiness” points will be used to reward and punish individuals and companies by granting or denying them access to public services like health care, travel, and employment, according to a plan released last year by the municipal government of Beijing. High-scoring individuals will find themselves in a “green channel,” where they can more easily access social opportunities, while those who take actions that are disapproved of by the state will be “unable to move a step.”
Big Brother is an emerging reality in China. Yet in the West, at least, the threat of government surveillance systems being integrated with the existing corporate surveillance capacities of big-data companies like Facebook, Google, Microsoft, and Amazon into one gigantic all-seeing eye appears to trouble very few people—even as countries like Venezuela have been quick to copy the Chinese model.
Still, it can’t happen here, right? We are iPhone owners and Amazon Prime members, not vassals of a one-party state. We are canny consumers who know that Facebook is tracking our interactions and Google is selling us stuff.
Yet it seems to me there is little reason to imagine that the people who run large technology companies have any vested interest in allowing pre-digital folkways to interfere with their 21st-century engineering and business models, any more than 19th-century robber barons showed any particular regard for laws or people that got in the way of their railroads and steel trusts.
Nor is there much reason to imagine that the technologists who run our giant consumer-data monopolies have any better idea of the future they’re building than the rest of us do.
Facebook, Google, and other big-data monopolists already hoover up behavioral markers and cues on a scale and with a frequency that few of us understand. They then analyze, package, and sell that data to their partners.
A glimpse into the inner workings of the global trade in personal data was provided in early December in a 250-page report released by a British parliamentary committee that included hundreds of emails between high-level Facebook executives. Among other things, it showed how the company engineered sneaky ways to obtain continually updated SMS and call data from Android phones. In response, Facebook claimed that users must “opt in” for the company to gain access to their texts and calls.
The machines and systems that the techno-monopolists have built are changing us faster than they or we understand. The scale of this change is so vast and systemic that we simple humans can’t do the math—perhaps in part because of the way that incessant smartphone use has affected our ability to pay attention to anything longer than 140 or 280 characters.
As the idea of a “right to privacy,” for example, starts to seem hopelessly old-fashioned and impractical in the face of ever-more-invasive data systems—whose eyes and ears, i.e., our smartphones, follow us everywhere—so does our belief that other individual rights, like freedom of speech, are somehow sacred.
Being wired together with billions of other humans in vast networks mediated by thinking machines is not an experience that humans have enjoyed before. The best guides we have to this emerging reality may be failed 20th-century totalitarian experiments and science fiction. More on that a little later.
The speed at which individual-rights-and-privacy-based social arrangements collapse is likely to depend on how fast Big Tech and the American national security apparatus consummate a relationship that has been growing ever closer for the past decade. While US surveillance agencies do not have regular real-time access to the gigantic amounts of data collected by the likes of Google, Facebook, and Amazon—as far as we know, anyway—there is both anecdotal and hard evidence to suggest that the once-distant planets of consumer Big Tech and American surveillance agencies are fast merging into a single corporate-bureaucratic life-world, whose potential for tracking, sorting, gaslighting, manipulating, and censoring citizens may result in a softer version of China’s Big Brother.
These troubling trends are accelerating in part because Big Tech is increasingly beholden to Washington, which has little incentive to kill the golden goose that is filling its tax and political coffers. One of the leading corporate spenders on lobbying services in Washington, DC, in 2017 was Google’s parent company, Alphabet, which, according to the Center for Responsive Politics, spent more than $18 million. Lobbying Congress and government helps tech companies like Google win large government contracts. Perhaps more importantly, it serves as a shield against attempts to regulate their wildly lucrative businesses.
If anything, measuring the flood of tech dollars pouring into Washington, DC, law firms, lobbying outfits, and think tanks radically understates Big Tech’s influence inside the Beltway. By buying The Washington Post, Amazon’s Jeff Bezos took direct control of Washington’s hometown newspaper. In locating one of Amazon’s two new headquarters in nearby Northern Virginia, Bezos made the company a major employer in the area—with 25,000 jobs to offer.
Who will get those jobs? Last year, Amazon Web Services announced the opening of the new AWS Secret Region, the result of a 10-year, $600 million contract the company won from the CIA in 2014. This made Amazon the sole provider of cloud services across “the full range of data classifications, including Unclassified, Sensitive, Secret, and Top Secret,” according to an Amazon corporate press release.
Once the CIA’s Amazon-administered self-contained servers were up and running, the NSA was quick to follow suit, announcing its own integrated big-data project. Last year the agency moved most of its data into a new classified computing environment known as the Intelligence Community GovCloud, an integrated “big data fusion environment,” as the news site NextGov described it, that allows government analysts to “connect the dots” across all available data sources, whether classified or not.
The creation of IC GovCloud should send a chill up the spine of anyone who understands how powerful these systems can be and how inherently resistant they are to traditional forms of oversight, whose own track record can be charitably described as poor.
Amazon’s IC GovCloud was quickly countered by Microsoft’s secure version of its Azure Government cloud service, tailored for the use of 17 US intelligence agencies. Amazon and Microsoft are both expected to be major bidders for the Pentagon’s secure cloud system, the Joint Enterprise Defense Initiative—JEDI—a winner-take-all contract that will likely be worth at least $10 billion.
With so many pots of gold waiting at the end of the Washington, DC, rainbow, it seems like a small matter for tech companies to turn over our personal data—which, legally speaking, is actually their data—to the spy agencies that guarantee their profits. This is the threat that is now emerging in plain sight. It is something we should reckon with now, before it’s too late.
IN FACT, BIG tech and the surveillance agencies are already partners. According to a 2016 report by Reuters, Yahoo designed custom software to filter its users’ emails and deliver messages that triggered a set of search terms to the NSA.
The company’s security chief quit in protest when he learned of the program. “Yahoo is a law-abiding company, and complies with the laws of the United States,” the company said in a statement, which notably did not deny the activity, while perhaps implying that turning over user data to government spy agencies is legal.
While Google has stated that it will not provide private data to government agencies, that policy does not extend beyond America’s borders. At the same time as Yahoo was feeding user data to the NSA, Google was developing a search engine called Dragonfly in collaboration with the Communist Party of China. In a letter obtained by The Intercept, Google CEO Sundar Pichai told a group of six US senators that Dragonfly could have “broad benefits inside and outside of China” but refused to release other details of the program, which, according to the company’s search engine chief, Ben Gomes, would be released in early 2019.
According to the documents obtained by The Intercept, Dragonfly would restrict access to broad categories of information, banning phrases like “human rights,” “student protest,” and “Nobel Prize,” while linking searches to a user’s phone number and tracking their physical location and movements. All of this would presumably feed into social ratings, or worse—much worse, if you happen to be a Uighur or a member of another Muslim minority group inside China, more than 1 million of whom are now confined in re-education camps. China’s digital surveillance net is a key tool by which Chinese authorities identify and track Muslims and others deemed in need of re-education.
Google is also actively working with the US intelligence and defense complex to integrate its AI capacities into weapons programs. At the same time as it was sending its letter about Dragonfly to Congress, the company was completing an agreement with the Pentagon to pursue Project Maven, which seeks to incorporate elements of AI into weaponized drones—a contract that is expected to be worth at least $250 million a year. (Under pressure from its employees, Google said in June that it would not seek to renew its Project Maven contract when it expires in 2019.)
It doesn’t take a particularly paranoid mind to imagine what future big-ticket collaborations between big-data companies and government surveillance agencies might look like, or to be frightened of where they might lead. “Our own information—from the everyday to the deeply personal—is being weaponized against us with military efficiency,” warned Apple CEO Tim Cook during his keynote speech to the International Conference of Data Protection and Privacy Commissioners in Brussels. “Taken to its extreme, this process creates an enduring digital profile and lets companies know you better than you may know yourself. Your profile is then run through algorithms that can serve up increasingly extreme content, pounding our harmless preferences into hardened convictions.”
Cook didn’t hesitate to name the process he was describing. “We shouldn’t sugarcoat the consequences,” he said. “This is surveillance.”
While Apple makes a point of not unlocking its iPhones and Apple Watches even under pressure from law enforcement and surveillance agencies, companies like Google and Facebook, which earn huge profits from analyzing and packaging user data, face a very different set of incentives.
Amazon, which both collects and analyzes consumer data and sells a wide range of consumer home devices with microphones and cameras in them, may present surveillance agencies with especially tempting opportunities to repurpose their existing microphones, cameras, and data.
The company has already come under legal pressure from judges who have ordered it to turn over recordings from Echo devices that were apparently made without their users’ knowledge. According to a search warrant issued by a judge trying a double-murder case in New Hampshire, and obtained by TechCrunch, the court had “probable cause to believe” that an Echo picked up “audio recordings capturing the attack” as well as “events that preceded or succeeded the attack.” Amazon told the Associated Press that it would not release such recordings “without a valid and binding legal demand properly served on us,” a response that would appear to suggest that the recordings in question exist.
Whether, and under what conditions, Amazon would allow government spy agencies to access consumer data, or to use the company’s vast network of microphones and cameras for surveillance, are questions that remain to be answered. Yet as Washington keeps buying expensive tools and systems from companies like Google and Amazon, it is hard to imagine that technologists on both ends of these relationships aren’t already seeking ways to further integrate their tools, systems, and data.
THE FLIP SIDE of that paranoid vision of an evolving American surveillance state is the dream that the new systems of analyzing and distributing information may be forces for good, not evil. What if Google helped the CIA develop a system that helped filter out fake news, say, or a new Facebook algorithm helped the FBI identify potential school shooters before they massacred their classmates? If human beings are rational calculating engines, won’t filtering the information we receive lead to better decisions and make us better people?
Such fond hopes have a long history. Progressive techno-optimism goes back to the origins of the computer itself, in the correspondence between Charles Babbage, the 19th-century English inventor whose designs for the “difference engine” and the more ambitious Analytical Engine were the first theoretical models for modern computers, and Ada Lovelace, the brilliant futurist and daughter of the English Romantic poet Lord Byron.
“The Analytical Engine,” Lovelace wrote, in one of her notes on Babbage’s work, “might act upon other things besides number, were objects found whose mutual fundamental relations could be expressed by those of the abstract science of operations, and which should be also susceptible of adaptations to the action of the operating notation and mechanism of the engine. Supposing, for instance, that the fundamental relations of pitched sounds in the science of harmony and of musical composition were susceptible of such expression and adaptations, the engine might compose elaborate and scientific pieces of music of any degree of complexity or extent.”
This is a pretty good description of the principles of digitizing sound; it also eerily prefigures the extent to which so much of our personal information, even the stuff we perceive as having distinct natural properties, could be converted to zeros and ones.
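To make that reduction concrete, here is a minimal sketch, in Python, of the move Lovelace is describing: a pitched sound rendered as a list of integers, the basic operation behind pulse-code-modulated (PCM) digital audio. The sample rate, pitch, and duration below are arbitrary values chosen for illustration.

```python
# A "pitched sound" reduced to numbers: sample a 440 Hz tone (concert A)
# 8,000 times per second and quantize each sample to a 16-bit integer.
# All parameter values here are illustrative, not prescriptive.
import math

SAMPLE_RATE = 8000   # samples per second
FREQUENCY = 440.0    # pitch of the tone, in hertz
DURATION = 0.01      # seconds of sound to capture

samples = [
    round(32767 * math.sin(2 * math.pi * FREQUENCY * n / SAMPLE_RATE))
    for n in range(int(SAMPLE_RATE * DURATION))
]

print(samples[:8])  # the "music" is now just a sequence of integers
```

Run through a digital-to-analog converter, that list becomes sound again; images, text, and location traces reduce to numbers in essentially the same way.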
The Victorian techno-optimists who first envisioned the digital landscape we now inhabit imagined that thinking machines would be a force for harmony, rather than evil, capable of creating beautiful music and finding expressions for “fundamental relations” of any kind according to a strictly mathematical calculus.
The idea that social engineering could help produce a more efficient and equitable society was echoed by American progressives at the turn of the 20th century. Unlike 19th- and early 20th-century European socialists, who championed the organic strength of local communities, progressives like Herbert Croly and John Dewey put their faith in the rise of a new class of educated scientist-priests who would re-engineer society from the top down according to a strict utilitarian calculus.
The lineage of these progressives—who are not identical with the “progressive” faction of today’s Democratic Party—runs from Woodrow Wilson to champions of New Deal bureaucracy like Franklin D. Roosevelt’s secretary of the interior, Harold Ickes. The 2008 election of Barack Obama, a well-credentialed technocrat who identified very strongly with the character of Spock from Star Trek, gave the old-time scientistic-progressive religion new currency on the left and ushered in a cozy relationship between the Democratic Party and billionaire techno-monopolists who had formerly fashioned themselves as government-skeptical libertarians.
“Amazon does great things for huge amounts of people,” Senate minority leader Chuck Schumer told Kara Swisher of Recode in a recent interview, in which he also made approving pronouncements about Facebook and Google. “I go to my small tech companies and say, ‘How does Google treat you in New York?’ A lot of them say, ‘Much more fairly than we would have thought.’”
Big Tech companies and executives are happy to return the favor by donating to their progressive friends, including Schumer.
But the cozy relationship between mainstream Democrats and Silicon Valley hit a sizable bump in November 2016, when Donald Trump defeated Hillary Clinton—in part through his mastery of social media platforms like Twitter. Blaming the election result on Russian bots or secret deals with Putin betrayed a shock that what the left had regarded as its cultural property had been turned against it by a right-wing populist whose authoritarian leanings inspired fear and loathing among both the technocratic elite and the Democratic Party base.
Yet in the right hands, progressives continued to muse, information monopolies might be powerful tools for re-wiring societies malformed by racism, sexism, and transphobia. Thinking machines can be taught to filter out bad information and socially negative thoughts. Good algorithms, as opposed to whatever Google and Facebook are currently using, could censor neo-Nazis, purveyors of hate speech, Russian bots, and transphobes while discouraging voters from electing more Trumps.
The crowdsourced wisdom of platforms like Twitter, powered by circles of mutually credentialing blue-checked “experts,” might mobilize a collective will to justice, which could then be enforced on retrograde institutions and individuals. The result might be a better social order, or as data scientist Emily Gorcenski put it, “revolution.”
The dream of centralized control over monopolistic information providers can be put to more prosaic political uses, too—or so politicians confronted by a fractured and tumultuous digital media landscape must hope. In advance of next year’s elections for the European Parliament, which will take place in May, French President Emmanuel Macron signed a deal with Facebook under which officials of his government will meet regularly with Facebook executives to police “hate speech.”
The program, which will continue through the May elections, apparently did little to discourage the fuel-tax riots of the “gilets jaunes,” which have set Paris and other French cities ablaze. Meanwhile, the claim that a change in Facebook’s local news algorithm was responsible for the rioting was quickly picked up by French media figures close to Macron.
At root, the utopian vision of AI-powered information monopolies programmed to advance the cause of social justice makes sense only when you imagine that humans and machines “think” in similar ways. Whether machines can “think,” or—to put it another way, whether people think like machines—is a question that has been hotly debated for the past five centuries. Those debates gave birth to modern liberal societies, whose foundational assumptions and guarantees are now being challenged by the rise of digital culture.
To recap some of that history: In the 17th century, the German philosopher Gottfried Leibniz amused himself with thinking about the nature of thinking. His most eloquent modern American popularizer, the UC Berkeley philosopher John Searle, framed Leibniz’s essential question like this:
Imagine a man who speaks no Chinese locked in a room with a rulebook that tells him which Chinese characters to write out in response to whatever strings of Chinese characters are slipped in through a slot. To the people outside, the cards he slides back out read like the replies of a fluent speaker. Can we say, Searle asks, that there’s anyone or anything in the room that understands Chinese?
If you believe, like Searle and Leibniz, that the answer is no, you understand thinking as a subjective experience, a biological process performed by human brains, which are located in human bodies. By definition, then, the human brain is not a machine, and machines can’t think, even if they can perform computational feats like multiplying large numbers at blinding speeds.
Alan Turing answered the Leibniz/Searle question by elegantly sidestepping it: if a machine’s responses are indistinguishable from a human’s, he argued, the question of whether it “really” thinks becomes meaningless. And since you can build machines that find and fix their own problems—that debug themselves—there is nothing in principle stopping them from evolving until they reach HAL-like proportions.
What does the history of thinking about thinking have to do with dreams of digitally mediated social justice? For Thomas Hobbes, who inspired the social-contract theorist John Locke, thinking was “nothing but reckoning,” meaning mathematical calculation. David Hume, who extended Hobbes’ ideas in his own theory of reason, believed that all of our observations and perceptions were nothing more than atomic-level “impressions” that we couldn’t possibly make sense of unless we interpreted them based on a utilitarian understanding of our needs, meaning the attempt to derive the greatest benefit from a given operation.
If, following Hobbes and Hume, human beings think like machines, then machines can think like human beings, only better. A social order monitored and regulated by machines that have been programmed to be free of human prejudice while optimizing a utilitarian calculus is therefore a plausible-enough way to imagine a good society. Justice-seeking machines would be the better angels of our nature, helping to bend the arc of history toward results that all human beings, in their purest, most rational state, would, or should, desire.
THE ORIGIN OF the utilitarian social calculus and its foundational account of thinking as a form of computation is social contract theory. Not coincidentally, these accounts evolved during the last time western societies were massively impacted by a revolution in communications technology, namely the introduction of the printing press, which brought both the text of the Bible and the writings of small circles of Italian and German humanists to all of Europe. The spread of printing technologies was accompanied by the proliferation of the simple hand mirror, which allowed even ordinary individuals to gaze at a “true reflection” of their own faces, in much the same way that we use iPhones to take selfies.
Nearly every area of human imagination and endeavor—from science to literature to painting and sculpture to architecture—was radically transformed by the double-meteor-like impact of the printing press and the hand mirror, which together helped give rise to scientific discoveries, great works of art, and new political ideas that continue to shape the way we think, live, and work.
The printing press fractured the monopoly on worldly and spiritual knowledge long held by the Roman Catholic Church, bringing the discoveries of Erasmus and the polemics of Martin Luther to a broad audience and fueling the Protestant Reformation, which held that ordinary believers—individuals, who could read their own Bibles and see their own faces in their own mirrors—might have unmediated contact with God. What was once the province of the few became available to the many, and the old social order that had governed the lives of Europeans for the better part of a millennium was largely demolished.
In England, the broad diffusion of printing presses and mirrors helped spark the bloody and ultimately failed anti-monarchical revolution led by Oliver Cromwell. The Thirty Years’ War, fought between Catholic and Protestant believers and their hired armies in Central and Eastern Europe, remains the single most destructive conflict, on a per capita basis, in European history, including the First and Second World Wars.
The information revolution spurred by the advent of digital technologies may turn out to be even more powerful than the Gutenberg revolution; it is also likely to be bloody. Our inability to wrap our minds around a sweeping revolution in the way that information is gathered, analyzed, used, and controlled should scare us. It is in this context that both right- and left-leaning factions of the American elite appear to accept the merger of the US military and intelligence complex with Big Tech as a good thing, even as centralized control over information creates new vulnerabilities for rivals to exploit.
The attempt to subject the American information space to some form of top-down, public-private control was in turn made possible—and perhaps, in the minds of many on both the right and the left, necessary—by the collapse of the 20th-century American institutional press. Only two decades ago, the social and political power of the institutional press was still so great that it was often called “the Fourth Estate”—a meaningful check on the power of government. The term is rarely used anymore, because the monopoly over the printed and spoken word that gave the press its power is now gone.
Why? Because in an age in which every smartphone user has a printing press in their pocket, there is little premium in owning an actual, physical printing press. As a result, the value of “legacy” print brands has plummeted. Where printed words were once scarce relative to the sum of all the words being written, today nearly everything written anywhere is available somewhere online. What’s rare, and therefore worth money, are not printed words but fractions of our attention.
The American media market today is dominated by Google and Facebook, large platforms that together control the attention of readers and therefore the lion’s share of online advertising. That’s why Facebook, probably the world’s premier publisher of fake news, was recently worth $426 billion, why Newsweek changed hands in 2010 for $1, and why many once-familiar magazine titles no longer exist in print at all.
The operative, functional difference between today’s media and the American media of two decades ago is not the difference between old-school New York Times reporters and new-media bloggers who churn out opinionated “takes” from their desks. It is the difference between all of those media people, old and new, and programmers and executives at companies like Google and Facebook. A set of key social functions—communicating ideas and information—has been transferred from one set of companies, operating under one set of laws and values, to another, much more powerful set of companies, which operate under different laws and understand themselves in a different way.
According to Section 230 of the Communications Decency Act, information service providers are protected from expensive libel lawsuits and other forms of risk that publishers face. Those protections allowed Google and Facebook to build their businesses at the expense of “old media” publishers, which in turn now find it increasingly difficult to pay for original reporting and writing.
The media once actively promoted and amplified stories that a plurality or majority of Americans could regard as “true.” That has now been replaced by the creation and amplification of extremes. The overwhelming ugliness of our public discourse is not accidental; it is a feature of the game, which is structured and run for the profit of billionaire monopolists, and which encourages addictive use.
The result has been the creation of a socially toxic vacuum at the heart of American democracy, from which information monopolists like Google and Facebook have sucked out all the profit, leaving their users ripe for top-down surveillance, manipulation, and control.
TODAY, THE PRINTING press and the mirror have combined in the iPhone and other personal devices, which are networked together. Ten years from now, thanks to AI, those networks, and the entities that control them—government agencies, private corporations, or a union of both—may take on a life of their own. Perhaps the best way to foresee how this future may play out is to look back at how some of our most far-sighted science fiction writers have wrestled with the future that is now in front of us.
The idea of intelligent machines rising to compete with the human beings who built them was seldom considered until Samuel Butler’s Erewhon, published in 1872. Riffing on Darwin, Butler proposed that if biological species could evolve at the expense of weaker competitors, so could machines, until they eventually became self-sufficient. Since then, science fiction has provided us with our best guides to what human societies mediated or run by intelligent machines might look like.
How precisely the machines might take over was first imagined in Karel Capek’s R.U.R., the 1921 play that gave us the term robot. Interestingly, Capek’s automatons aren’t machines: They emerge from the discovery of a new kind of bio-matter that differs from our own in that it doesn’t mind abuse or harbor independent desires. In the play, the humans are degenerates who stop procreating and succumb to their most selfish and strange whims—while the robots remain unerring in their calculations and indefatigable in their commitment to work. The machines soon take over, killing all humans except for a single engineer who happens to work and think like a robot.
In the play’s third act, the engineer, ordered by the robots to dissect other robots in order to make them even better, is about to take the knife to two robots, a male and a female, who have fallen in love. They each beg for the other’s life, leading the engineer to understand that they have become human; he spares them, declaring them the new Adam and Eve. This soulful theme of self-awareness being the true measure of humanity was taken up by dozens of later science fiction authors, most notably Philip K. Dick in Do Androids Dream of Electric Sheep?, which became the film Blade Runner.
Yet even classic 20th-century dystopias like Aldous Huxley’s Brave New World or George Orwell’s 1984 tell us little about the dangers posed to free societies by the fusion of big data, social networks, consumer surveillance, and AI.
Perhaps we are reading the wrong books. Instead of going back to Orwell for a sense of what a coming dystopia might look like, we might be better off reading We, written nearly a century ago by the Russian novelist Yevgeny Zamyatin. We is the diary of state mathematician D-503, who experiences the highly disruptive emotion of love for I-330, a woman whose combination of black eyes, white skin, and black hair strikes him as beautiful. This perception, which is also a feeling, draws him into a conspiracy against the centralized surveillance state.
The Only State, where We takes place, is ruled by a highly advanced mathematics of happiness, administered by a combination of programmers and machines. While love has been eliminated from the Only State as inherently discriminatory and unjust, sex has not. According to the Lex Sexualis, the government sex code, “Each number has a right towards every other number as a sex object.” Citizens, or numbers, are issued ration books of pink sex tickets. Once both numbers sign the ticket, they are permitted to spend a “sex hour” together and lower the shades in their glass apartments.
Zamyatin was prescient in imagining both the operation and the underlying moral and intellectual foundations of an advanced modern surveillance state run by engineers. And if 1984 explored the opposition between happiness and freedom, Zamyatin introduced a third term into the equation, one he believed to be more revolutionary and also more inherently human: beauty. The subjective human perception of beauty, Zamyatin argued, along lines that Leibniz and Searle might approve of, is innately human, and therefore not ultimately reconcilable with the logic of machines or with any utilitarian calculus of justice.
In We, the rule of utilitarian happiness is embodied in the Integral, a giant computing machine/spaceship that will “force into the yoke of reason other unknown beings that inhabit other planets, perhaps still in a wild state of freedom.” By eliminating freedom and all causes of inequality and envy, the Only State claims to guarantee infinite happiness to humankind—through a perfect calculus that the Integral will spread throughout the solar system.
In reality, sexual relationships are a locus of envy and inequality in the Only State, where power rests in the hands of an invisible elite that has removed itself somewhere beyond the clouds. But the real threat to the ideal of happiness incarnated in the Integral is not inequality or envy or hidden power. It is beauty, which isn’t rational or equal, and at the same time doesn’t exclude anyone or restrict anyone else’s pleasure, and therefore frustrates and undermines any utilitarian calculus. For D-503, dance is beautiful, mathematics is beautiful, the contrast between I-330’s black eyes and black hair and white skin is also beautiful. Beauty is the answer to D-503’s urgent question, “What is there beyond?”
Beauty is the ultimate example of human un-freedom and un-reason, being a subjectivity that is rooted in our biology, yet at the same time rooted in external absolutes like mathematical ratios and the movement of time. As the critic Giovanni Basile writes in an extraordinarily perceptive critical essay, “The Algebra of Happiness,” the utopia implied by Zamyatin’s dystopia is “a world in which happiness is intertwined with a natural un-freedom that nobody imposes on anyone else: a different freedom from the one with which the Great Inquisitor protects mankind: a paradoxical freedom in which there is no ‘power’ if not in the nature of things, in music, in dance and in the harmony of mathematics.”
Against a centralized surveillance state that imposes a motionless and false order and an illusory happiness in the name of a utilitarian calculus of “justice,” Basile concludes, Zamyatin envisages a different utopia: “In fact, only within the ‘here and now’ of beauty may the equation of happiness be considered fully verified.” Human beings will never stop seeking beauty, Zamyatin insists, because they are human. They will reject and destroy any attempt to reorder their desires according to the logic of machines.
A national or global surveillance network that uses beneficent algorithms to reshape human thoughts and actions in ways that elites believe to be just or beneficial to all mankind is hardly the road to a new Eden. It’s the road to a prison camp. The question now—as in previous such moments—is how long it will take before we admit that the riddle of human existence is not the answer to an equation. It is something that we must each make for ourselves, continually, out of our own materials, in moments whose permanence is only a dream.
David Samuels is a contributing writer at The New York Times Magazine. He is a longtime contributor to Harper’s, N+1, and The New Yorker.