of the New Real
Surveillance Never Sleeps is a new way of looking at the human impact of technology in the twenty-first century. Here, four critical intersections of technology and society–drones, surveillance, DIY bodies, and recent innovations in robotic technology–are explored for what they have to tell us about the “new real” of digital culture. With astonishing speed and relatively little public debate, we have suddenly been projected into a new reality of pervasive surveillance, drone warfare, DIY bodies as the essence of the “quantified self,” and creative developments in robotic technologies that effortlessly merge synthetic biology, artificial intelligence, and the design of articulated robotic limbs into a newly blended reality of machines, bodies, and affect. However, while the sheer dynamism of this digital remaking of human experience seemingly anticipates a future of accelerated technological change, it does not account for the dark singularities of increasingly atavistic politics, fatal flaws in the codes, the “blowback” of long-suppressed ethnic and racial grievances, or the rise of fundamentalist ideologies.
Surveillance Never Sleeps seeks to answer the question posed by the uncertain world of twenty-first century experience itself, namely why in an age of a seemingly inexorable drive to technical perfection, smart bodies, and complex machine-human interface has society itself so quickly imploded into politics moving at the speed of darkness and motivated by the will to purity? Consequently, a truly unique world situation: powerful eruptions of the boom and bust cycles of late capitalism; the rise of reactionary fundamentalist movements, some religious, others political; the effective political dispossession and economic destitution of most of the world’s population, and yet, in the midst of all of this, the emergence of a new technological theology as transcendental in its cosmological ambitions as it is localized in its implications. So, then, a twenty-first century that may have permanent war, class privilege, and resurgent forms of political recidivism as its sustaining noise. But, for all that, there is the clear signal in the technological background of ambient robots, DIY bodies, hovering drones and machine-readable surveillance that something else is happening, something as novel in its technical expressions as it is enigmatic in its consequences. Surveillance Never Sleeps is about listening intently to the signal of technologies of the new real as they penetrate the social, political and economic static of the posthuman condition.
Dreaming with Drones
When the Sky Grew a Warlike Eye
More than ever, real power in the twenty-first century is space-bound–globalized, atmospheric, instantaneous. It is not that time has disappeared, but that the medium of time itself has been everywhere reduced, reconfigured and subordinated to the language of spatialization. That is the meaning of “real-time” as part of the contemporary language of power–time itself as an otherwise empty, locative coordinate in the spatial networks of communication surrounding us. But if that is the case, if, indeed, power has taken to the air, literally taken flight with the technological capacity provided by drones to turn the sky into a warlike eye, that would also indicate that the grasp of power on the time of duration, the lived time of territorial and bodily inscription, has perhaps been terminally weakened. When the sky has been transformed into a liquid eye of power–monitoring, watching, archiving visual data for storage in distant archives–with target acquisition and weaponized drone strikes as its military tools of choice, the greater complexity and intricate materialism of time escapes its grasp.
Think perhaps of a distant future when empires, following the usual cycle of rise and decay, crumble to dusty memories, when a collapsed social economy produces an angry mass of dispossessed citizens in the otherwise empty streets, when even borders are abandoned in the global rush for scarce resources, and when all that is likely to be left may be those airborne fleets of now fully automated drones, long forgotten by their ground command, but still, for all that, circling the sky on the hunt for humans. At that point, some historian of the technological past may well begin to reflect on what exactly was released in the domestic atmosphere when the drones came home: a technologically augmented surveillance system under strict political supervision, or something different. That is, the giving of sky life to a new species of being–being drone–with a score to settle against its human inventors and, over time, the capabilities to do something about it. In this time, above all times, a time in which we can finally appreciate what is to be gained and lost–what is utopian and what dystopian–concerning the technological devices we have engineered into existence, it may be well to remember that the story of technology has never really lost its entanglement with questions of religion, mythology, and politics.
Signs of the practical entwinement of technology and mythology are everywhere now as early warnings of what is yet to come–namely, that while the contemporary language of technology might have excluded its origins in myths of nemesis and hubris, what drone technology may actually deliver in the future as its most terminal payload will be the return of mythic destiny as the hauntology of the sublime order of technology. Consider, for example, the following stories about the world of drone warfare: “Drone Kamikazes in the California Sun” and “Hydra Awakened.”
Drone Kamikazes in the California Sun
Recently, there was a serious naval “incident” off the coast of Southern California, involving an American Ticonderoga-class guided missile cruiser, the USS Chancellorsville, and a supposedly errant BQM-74 target drone.
It seems that one clear Pacific day, after the drone was launched for target practice, it suddenly wheeled around, ceased software communication with shipboard command-and-control, and promptly went into full assault mode in an unexpected and perhaps first twenty-first-century kamikaze attack on a battle-ready cruiser. While the navy at first reported only minor damage, later accounts confirmed not only thirty million dollars in damage to the cruiser but, equally significantly, that the drone inflicted serious, target-specific harm to the state-of-the-art Aegis Combat System–the technical essence of the cruiser’s sophisticated electronic warfare systems. The consequence–one lethal drone, one broken-down guided missile cruiser, and a lot of bruised navy pride.
While online chatter has focused on the lack of readiness of the guided missile cruiser to shoot down errant drones by “moding up” to Ready Status onboard guns, no one has asked why the drone suddenly broke ranks with its navy cohort, did a quick field-reversal in the sky, instantly resignified itself from passive target to aggressive predator, and swooped down like a bird of prey on its mothership. While naval authorities are reduced to speaking about “malfunctions” and “accidents,” it seems that they have not considered the historical, and then mythological, nature of the event. Historically, it may well be that the actions of the drone in attacking the missile cruiser were already hinted at by the very name of the ship, the USS Chancellorsville. Like the army before it, the US navy often adopts the names of defeated enemies and famous battles from the Civil War onwards as a way of honoring military history and, perhaps, of conjuring up the courage of former enemies and martial memories of distant battles to strengthen itself. Chancellorsville is the name of an important Civil War battle that involved Confederate soldiers led by General Robert E. Lee moving against a larger Union army under the command of General Joseph (“Fighting Joe”) Hooker. This battle, which was won by the Confederacy, entered the annals of Civil War fame because Lee was ultimately victorious by employing the original and certainly daring tactic of suddenly dividing his forces, moving one wing, led by General Stonewall Jackson, undetected in the dead of night across the front of a much larger Union army. Breaking with the traditional practice of maintaining cohesive, single-force strength when confronted with a superior foe, Lee’s military genius was to stake everything on his deeply intuitive knowledge of Hooker’s personal psychology–“Fighting Joe” Hooker’s actual timidity and, in effect, lack of preparation for the unexpected.
And what could be more unexpected than to march your Confederate army undetected in the midnight darkness laterally across the defended front of the superior-sized, yet oblivious, Union army?
While drones have probably not, as yet, been programmed with Civil War history, in this posthuman age where objects are increasingly viewed as possessing agency, drones may have already been invested with affectivity–machines with an attitude. Given the fast, objective evolution of drone technology from passive prosthetic to augmented aerial machines equipped with artificial intelligence, powerful missiles, laser vision, and recombinant memories, drones may also now be on the verge of actually achieving elements of real subjectivity, nesting within their software logic the all-too-familiar instinctual impulses of revenge, mistrust, and resentment. Viewed through the prism of Civil War history, with its equal measures of murderous violence and tragic sacrifice, the drones of the USS Chancellorsville have possibly, in some entirely strange and certainly unexpected way, sought to remake themselves as contemporary, technological versions of Civil War reenactors. Not so much like the tragically minded Civil War buffs that almost obsessively haunt annual reinvocations of the sacrificial violence that was the Civil War, but something very different and, in fact, dangerous. Perhaps in the case of the USS Chancellorsville, its BQM-74 drones somehow absorbed, magically, almost atmospherically, the energies of the ship’s famous battle name. Stealing a strategic march from the always creatively daring logic of Robert E. Lee, this drone actually reenacted on the Pacific shores of Southern California precisely the same gambit that allowed the Confederacy against all odds to win the day at Chancellorsville. When the missile cruiser launched its target drones for a routine battle-readiness drill, it probably never suspected that it was, however inadvertently and unintentionally, reenacting the original battle of Chancellorsville, except this time not in Spotsylvania County, Virginia in the nineteenth century but just off the port of San Diego in the twenty-first century.
Taking their cue from General Lee, these latter-day Confederate drones split their forces in the face of a superior opposition, the missile cruiser and its Aegis system. While some drones continued on their terminal flight as flying targets for all the onboard telemetry, the most creatively daring of the drones did to the USS Chancellorsville precisely what Lee had earlier done to Hooker–namely, it split off from its normal targeting routines and swooped down on the cruiser itself in an unexpected stealth attack.
As to why a drone might do such a thing, we might want to consider the real lesson of object-oriented ontology: that is, not only are objects today rightfully conceived as possessing affectivity–trees with feelings, devices that sense, mobiles that connect–but there exist digital devices necessarily capable of absorbing a whole range of human passions, from the utopian beliefs of the “new materialism” to what seems to be the dystopia of drones invested with a lot of anger and perhaps just a bitter touch of revenge-taking. In this scenario, like all other technologies, the all-too-human will to power that is built into drones–drones that bomb, spy, irradiate–always follows this basic rule of (robotic) law. Taking seriously its appellation as an “unmanned aerial vehicle,” it becomes, in the case of the USS Chancellorsville, a massively lethal technology that finally lives up to its name–certainly not as an obedient and passive member of the programmed target pack but, as the BQM-74 drone, truly unmanned in its intentions. That’s the significance of its precise attack on the communications command room–it scopes out its enemy for its point of maximal weakness, does instant target-acquisition, pierces the side of the command module, and explodes with all the violence that a thirteen-foot drone can muster. It is certainly not for nothing that other, perhaps more battle-wise American sailors have reported that, on previous excursions in the Persian Gulf, they always kept their shipboard guns on ready-status and that, in fact, many of their weapons were painted with symbols marking kills of drones, some of which had seemed intent, like the kamikaze drones in the California sun, on doing terminal damage to their homeland ship. Maybe, then, not just machines with an attitude, but machines with deliberately perverse intentions and time-hallowed military logic.
Perhaps what just happened is that the second battle of Chancellorsville has been reenacted with the exact same result, this time with the guided missile cruiser playing the hapless role of General Hooker and the BQM-74 drone absorbing into its putatively mechanical self all the military valor, rule-breaking logic, and derring-do that were the hallmark of General Lee. Except, of course, this time, reflective of the purely technological destiny of military logic, the whole incident was played out with a naval target drone as the unlikely Confederate reenactor.
But in the way of all complex intersections of technology, mythology, and psychology, the conclusion of this fateful story is still to be determined. While the Confederacy gained a tactical victory at Chancellorsville by virtue of General Lee’s daring gamble, the battle was in the end a strategic defeat for the armies of the South. It was in this battle, after all, that Stonewall Jackson was mortally wounded by his own soldiers, who mistook his party for Union cavalry. In the future, who can know with any certainty what will unfold with drones possessing subjectivity? Following science fiction, will drones rise up in merciless, mechanical revenge against their human creators? Or will something else happen? Will the story of affective drones repeat the lessons of human mythology, on account of which they were invented in the first instance and the memory of which will undoubtedly haunt them long after the disappearance of the human remainder? That is, will the future epoch of drones, like the history of humanity before it, also be characterized by bouts of radiant, positivistic power mixed with accidents, futility, caprice, and the furies of enigmatic uncertainty?
Hydra Awakened
“In our lifetime, what was [in effect] land and prohibitive to navigate or explore, is becoming ocean, and we’d better understand it,” noted Admiral Greenert. “We need to be sure that our sensors, weapons and people are proficient in this part of the world [so that we can] own the undersea domain and get anywhere there.”
Is it possible that classical Greek mythology will finally find its practical realization in contemporary history by way of advanced military innovation? That, at least, is the hope of the US navy, as evidenced by the recent DARPA solicitation for innovative design proposals for a program named Hydra, which is aimed at creating permanent, unmanned, underwater platforms in all the oceans of the world populated by drones within drones. Media reports included the following:
DARPA goes deep: New Hydra project to see underwater drones deploying drones.
The sky is no longer the limit for US drone warfare, with secret military research agency DARPA considering a conquest of the seven seas with an underwater drone carrier.
. . .
“The Hydra program will develop and demonstrate an unmanned undersea system, providing a novel delivery mechanism for insertion of unmanned air and underwater vehicles into operational environments,” says the Hydra Proposers’ Day website.
. . .
In broader terms, the Hydra project implies building an underwater drone fleet to ensure surveillance, logistics and offensive capabilities at any time globally, throughout the world’s oceans, including shallow waters and probably any river deltas or systems. 
Drones within drones, upward falling payloads, an unmanned undersea system: the future of drone warfare as envisioned by DARPA migrates the question of the unmanned from its previous station in targeted aerial surveillance to the depths of the seven seas. Here, it is no longer flocks of drones hovering in the sky, but something else–unmanned, underwater motherships equipped with drones within drones, some as troop transports, others as transport vehicles for armaments and supplies, all lying in wait, just offshore, just under the seas, waiting to instantly respond to insurrections, rebellions, disturbances.
While, from one viewpoint, this vision of repurposing the oceans for drone warfare provides another example of technological hubris combined with the US military’s proclaimed ideological commitment to “global projection of power,” from another perspective, it also contains a Heideggerian aporia. For Heidegger, the mobilization of the seven seas on behalf of a global system of command-and-control is part of the technical drive towards reducing nature and humanity to the status of the “standing-reserve”–the seven seas held in reserve, that is, for an innovative process of technological ordering with its “upward falling payloads,” “drones within drones,” and “underwater drone carriers.” With one difference, however: almost as if perfectly symptomatic of profound, nagging anxiety about the eventual failure of the project in the face of a greater, as yet unknown, force, the very mythic name of the Hydra program announces in advance the most critical weakness of the initiative. After all, in classical Greek mythology, the figure of Hydra, a serpent-like monster with many heads, evokes a larger mythological fable that is replete with moral complexity and martial ambiguity.
Mythically, the Hydra is always figured in relation to Heracles, the heroic representative of fallen divinity who, in order to win back his immortality after killing his own wife and children, is forced to undertake twelve difficult labors, involving, among others, slaying the Nemean Lion and capturing the Erymanthian Boar, the Cretan Bull, and Cerberus, with its three heads of a wild dog, tail of a dragon, and snakes emerging from its back. Yet perhaps the most challenging of Heracles’s tasks was overcoming the monstrous figure of the Hydra who, with its ability to effortlessly regrow many new heads, guarded the swamps of Lerna, beneath which lay the entrance to the underworld. The mythic force of the Hydra has to do with representing a fierce entanglement that only grows more difficult and complex with any and all attempts to overcome it. In the end, the Hydra was ultimately defeated by Heracles’s brilliant tactic of cauterizing the severed heads one by one, thus eliminating the flow of blood and the generation of new heads. The lesson of the myth is that the Hydra, this watery defender of the underworld, is as weak as it is ferocious, an obstacle that can be overcome in practice by a skilled, creative, and courageous warrior such as Heracles.
Consequently, while the US military’s Hydra program may well culminate in interesting designs for an underwater world populated by drones within drones, it offers no solution to the real problem, which is, in its essence, mythological–the always certain appearance of counter-power, of counter-resistance to sovereign claims of “ownership of the undersea domain” in the form of the new Heracles: a heroic figure–perhaps from the present, perhaps from the future–with a name of no importance and from a country of no significance, who can only win back political immortality by overcoming the new Hydra of the underwater drone.
Unfortunately, while the resolution of the problem of Heracles might have been left in suspension by an act of technological indifference, naming a project Hydra has about it all the signs of mythic necessity, posing a challenge to the sleeping powers of the long-neglected pagan gods of classical antiquity. Known now by the continuous appearance of the mythic signs of necessity, nemesis, hubris, and revenge, the spirits of those pagan gods have never really been at a distant remove from the technological scene, and certainly have never been anything less than the essence, particularly if unrecognized, of political experience. While minds more attentive to the continuing mediation of the language of the pagan gods and the spectacular drives of technological hubris might enter a word of caution against carelessly conjuring up the forgotten spirit of the gods (particularly by formally inscribing their sacred names in the chronicles of contemporary history), it must be admitted that it is part of the unfolding truth of the most brash of the newest posthuman gods–the language of technological mastery–to issue a challenge to the death against the gods of classical antiquity.
Who knows really whether the power of Zeus, the jealous love of Hera, or the remorse of Heracles have heard the voice of this newest pretender to divinity? Like the original Hydra, this drone project sets out to guard the entrance to the underworld, no longer under the mythological swamps of Lerna, but within the watery labyrinth of the seven seas. Also, like the Hydra of classical mythology, this daring military innovation uses precisely the same tactic to propagate drones within drones–heads within heads–as a way of guarding itself against an enemy seeking to sever the only head of a multiplicity of heads that counts–the single, undetectable head of the Hydra that is immortal. Of course, in the transition from classical mythology to the new real of drone technology, the contemporary Hydra, lacking any plausible pretensions to immortality, begins the game of war with an immediate disadvantage. Now we know from the military’s call for proposals that the underwater drone project is intended to operate in the real-time environment common to both the contemporary moment and, it should be noted, classical antiquity–insurrections, rebellions, civil strife, revenge-seeking suicide missions. Like the immortal Hydra, the underwater drone platform lies in wait, becoming less an instrument of spatial domination of the skies than a lethal weapon willing to engage temporally those courageous, or perhaps foolish, enough to gain entry to the underworld. A time-biased technology operating in the liquid environment of the seven seas, this newest iteration of the myth of Hydra knows only that its weaponry of choice must of necessity be that of deception, subterfuge, and secrecy.
Hiding in the depths of the oceans, revealing itself only when engaged in aggressive military strikes, the drones within drones that are the essence of the Hydra program adopt the languages of temporality as their own: a waiting game of infinite patience with secret locations, illusions of identity, and hidden purposes. While drones hovering in the clear-blue sky might communicate a message of terror by their very appearance, drones secreted within the seven seas communicate a different order of meaning altogether.
The Drones of War
Perhaps nothing symbolizes so well the movement of power–from visibility to invisibility, from the imminent to the remote, from the language of discipline to that of a politics of control–as drone technology. Understood as a metaphor of power, drone technology represents the migration of power from something vested in the territorial claims of sovereign nations to the space-extending ambitions of trans-sovereign empires for which only the projection of power has political currency. Understood as a metonymy of power, drone technology is energized by the fact that, while it rides the imperial wave of the invisible, the remote, the monitor, its actual political effects are always highly visible, deadly intimate, and purely chaotic in terms of their impact upon targeted tribes, clans, families, communities, and individuals. Consequently, neither a pure metaphor nor an irredentist metonym, the power of drone technology rests structurally in its ultimately semiotic status as a violent, flickering signifier from the sky, an indeterminate point of mediation between invisible force and targeted visibility, between remote commands and highly tactile results, between unmanned control and social chaos.
Indeed, the fact that drone technology enters so easily and pervasively into contemporary public debate may be because there is something about the image of hovering drones–in all their invisibility, remoteness, and artificial control–that actually touches on, and is perhaps even emblematic of, an already widespread anxiety in the posthuman condition. In this case, what we see in those images of Predator and Reaper drones in far-off lands, these almost post-apocalyptic scenes of violent power projected across the skin of the planet by way of electronic pulses sent from remote command-and-control locations, may actually bring to the surface of individual visibility what we already experience in our unconscious and more often than not unarticulated feelings as that which is most primal, and for that reason most uncertain, about the character of our necessarily shared political condition. Certainly, we can recognize that, as citizens of the privileged centers of neoliberalism, our political fate has, for the most part, already been structurally figured in advance. When imperial violence takes the form of unexpected and unpredictable blasts from the air, when imperial power depends for its very existence on subjecting dominated populations to a form of cynical power that operates like a murderous flickering signifier–invisible yet risible, remote yet intimate, controlling in its logic yet chaotic in its effects–then we too can recognize something not particularly alien to our experience but, in fact, deeply familiar. It is as if the massive deployment of drone technology by the permanent war machine represents an accelerated test-bed for a new form of political ethics yet to emerge, one deeply attuned to the language of weapons of invisibility, death-matrices by remote command, and power at that point where it becomes something less terrestrial than purely atmospheric, something as intimately present as it is technologically suffocating.
When the drones of war are tested in foreign lands, we can perhaps comfort ourselves with the moral illusion that politics today neatly divides into a more primary distinction between friends and enemies and that the boundary points for such a division can be identified by the signs of citizenship, religion, ethnicity, race. While such ready-to-hand distinctions have the grisly political benefit of ethically dividing the world into a sacrificial table of values upon which will soon be arranged those to be violated as the unlawful alien, the scapegoat, the terrorist, the enemy non-combatant, the stranger, they also have the strong moral appeal of rendering any and all violations of the norms of social justice ethically justifiable. Those not structurally determined in advance as the outsider–from the alien to the stranger–will probably never know what it means to inhabit a body, a race, a family, a clan, a tribe, or a society that will never be honored with the most elementary rights of human recognition and reciprocity–the right to be mourned, the right to grieve. While media communiqués about new drone strikes in Asia and Africa are usually figured in the deliberately sanitized and entirely nebulous language of the war on terror, these reports on the conditions of the new security state often provoke not a ripple of discontent precisely because they work to confirm an ethically striated vision of the contemporary political condition that we long ago interiorized as our own. Of course, having acquiesced either consciously or by a silent proxy in the privileges to be gained by linking our fate to the ethical exclusions necessary to the self-preservation of power, there remains just the tangible hint of a doubt that someday the moral cycle of accidental divisions and ethical cynicism represented in all its ferocity by the drones of war will run its full course.
Perhaps it already has. Perhaps the political use of drone technology to terrorize often-defenseless populations rests on a prior moral blast that already obliterated much of the traditional language of human reciprocity and recognition. More than we may suspect, we are already dreaming with drones. Dazzled by this spectacular projection of technology that advances the space-control of empire against time-bound forms of terrestrial resistance and, perhaps, in consequence, numbed by the silent ethical compact that authorizes lethal violence from the air, we may have already naturalized something resembling drone subjectivity as our very own interior habitus. But, if this were the case, then it would only be churlish to later claim that we did not have at least a premonition of our own approaching extinction-event when the drones come home. In this sense, what is actually being tested in far-off foreign territories may not be, in the end, the purely technical abilities of drone technology as instruments of war, but pilot projects for the use of drones at home. When power turns inward as it always does, when that which has been done by power to those determined to be beyond the rites of grieving and mourning finally turns on us as the new ungrievable, the definitely unmournable, then we should not be surprised. As the last and best of all the cynical signs, drone subjects have long been nurtured in the language of moral equivocation: subjects of use and abuse, subjects of control and chaos, subjects remote even to themselves in their most intimate moments. With this perfectly equivocal result, when drone technology tracks back to its country of (technological) origin, when drones become an important dimension of the language of the new real, what the consequences will be are still unknown, still emerging.
Yet we do know this: the contemporary situation oscillates today between scenes of “Bounty on Drones” and the present and future specter of “Drones Hunting Humans.”
Bounty on Drones
In the United States, the FAA (Federal Aviation Administration) recently issued a formal cease-and-desist warning to the residents of Deer Trail, a rural community in Colorado that was considering a proposal to issue a bounty on drones:
“Under the proposed ordinance, Deer Trail would grant hunting permits to shoot drones. The permits would cost $25 each. The town would also encourage drone hunting by awarding $100 to anyone who presents a valid hunting license and identifiable pieces of a drone that has been shot down.”
So then, a perfect reincarnation of the spirit of the Wild West in the early years of the posthuman condition. Not settling for legal niceties and certainly not yielding quietly to official power, some citizens of Deer Trail want to do what gun-toting trailblazers of the Old West have always done before them: take to the new surveillance trails of the sky in order to bag a hovering drone. This leads to the question: What happens when the drones finally do come home? Not as super-tech augmented, sky-bound survivors of hard-fought battles in Afghanistan, Somalia, and Yemen, but drones making their first appearance in their homeland as a new form of heightened state security–only this time, a state security framed less as a response to scattered insurgent rebellions in the far-off global reaches of the empire of neoliberalism than as returning drones pressed into service again as the front (aerial) line in the surveillance of domestic populations. Will this be the first symptomatic sign that the power of the new security state, having fine-tuned its apparatus of control in the war on terror, is finally prepared to colonize its own population?
Or perhaps something else: not the new security state filled with drones solely in passive submission to its strategic aims, but a present and future time populated by drones replete with a multiplicity of commercial purposes–unmanned media photography; drones for the surveillance of wildlife, crops, and storm damage; long-distance rescue drones in otherwise inaccessible harsh environments; knowledge-based drones for long-distance, real-time education. While the commercial repurposing of drones is indeterminate–limited at first, really, only by the human imagination–one thing is certain: when the drones come home, the future is likely to be stellar grid-lock, with the sky the limit for the sudden extension of human commerce. A future filled then with many accidents in the air, near collisions as drones speeding along on their different missions forget to keep watch on their unmanned neighbors and, of course, the inevitable–long lines of stalled, sky-bound traffic, impatiently spinning their algorithmic wheels, probably getting into hot coded disputes with quick terminations of flight high on the probability charts. In this scenario, the skies of tomorrow are the expressways of today: studies in immobility with patience at the limits of its endurance.
With this difference: taking advances in ubiquitous computing and relational processing to their extremes, there are undoubtedly technological plans now afoot to design drones of the future–or as the drone industry likes to call them, “unmanned aerial vehicles”–with the capability of communicating with one another, sharing pertinent information, and, what’s even better, with advanced capabilities to mimic bees and birds by instantly gathering into fast-moving flocks of flying drones. No longer, then, drones as long-distance, lonely fliers, but suddenly drones armed with hive-minds for better swarming, flocking together in the air, swooping down on unsuspecting drone singletons, and just as suddenly turning every which way, probably for the pure joy of being swarm. For most of us civilians who are aware of brilliantly creative technological advances set in motion without much thought given to unexpected consequences, it seems unlikely that computer-console designers of swarming drones have read up much on insect lore–the fact that swarms in nature appear in many different shapes, all of which have very real-world consequences: from the hardworking cooperatives of bee hives, with their built-in aristocratic class structure of drones and queens, to the hornets who form angry swarms when annoyed or angered by the human presence. So, while the utopian dreams of all the drone designers probably shade away into the comforting flight path of future unmanned aerial vehicles as busy bee hives in the sky, the hard reality probably will be something very different. Since drones first came into existence at the behest of military violence, with its calculated bursts of murderous rage, there’s no reason to think that future generations of commercial drones will not, at some undefined point, rekindle memories of the targeting imagination of the Predator and the killer-instinct of the Reaper as their most active, and fondly thought of, long-term memories.
When drones themselves begin to dream, their psychoanalytical drivers will probably move unerringly to that moment when drones as purely technological devices first merged with human psychosis on many battlefields of the past. Not accidentally, but with a deliberate and almost inevitable evolutionary logic, since the killer instinct, with all its preparatory conditions–surveillance of targeted populations, data acquisition, arming of weaponry, and bursts of destructive violence–is not at second-hand remove from the logic of drones, but actually designed into their unmanned (but not unarmed) intelligence. The curious mixture of cybernetic rationality and spasms of irrational violence has always been the emblematic sign of drone wars. Like the human species before them, drones also have memories. Sometimes these memories are short-term, like agricultural fields to be surveyed, packages to be delivered, isolated survivors to be rescued, but they can also be long-term. It is those deeply embedded, long-term memories of their all-too-human origins in a mythic mix of antiseptic designer rationality and murder from the air that will most likely be activated by swarms of drones. Liberated intentionally from human control, with sensors fast-processing the territory below and other drones alongside, the drone swarm, like those angry hornets before them, is a likely candidate to go into instinctual killer-mode, to become, in practice, what their drone ancestors had long ago initiated in the skies of foreign lands. In this case, when the will to technology is finally realized in the form of swarms of angry drones, when cybernetic reason merges with unmanned violence, there will probably be a big rush for those hunting permits.
When drones become an unmanned aerial species, equipped with autonomous intelligence, weapons of choice, surveillance capabilities, and laser-like targeting abilities, we will probably be able to discern that their primal psychology will not run to the hectoring superego or the reasonable ego but to the instinctive-like drives of the howling id. Without the disciplinary cage of the social to tame it, without the fear of god to inhibit it, drones of the future will make their first swarming appearance as the id unbound: psychically self-possessed, humourlessly destructive, seemingly irrational but, for all that, cunning, creative, and probably (cybernetically) ruthless in the games they play.
With this in mind, the unsuspecting residents of Deer Trail might be wise to start running or, at least, to instruct their children in the ways of mythic nemesis likely to be expressed in their streets when drones appear in the domestic sky.
Drones Hunting Humans
The first of all the violent invasions of drones hunting humans has already taken place. That’s the so-called War on Terror, with its carefully orchestrated publicity campaigns in support of ever-increasing popular fear and, in a perfect feat of logical symmetry, its identification, by means of the Obama administration’s “disposition matrix,” of a changing list of “terrorists”–some perhaps even dangerous opponents–for targeting by fleets of drones stationed in the skies of designated kill zones. For example, according to media reports from the tribal areas of Pakistan and adjoining regions of Afghanistan, we can gather some preliminary results of this lengthy experiment in test-driving drones hunting humans. Indeed, similar to large-scale, innovative scientific projects that can only seek major funding on the basis of “proof of concept” projects, the War on Terror might be viewed retrospectively in the same terms. Here, all the design ingredients were mobilized for a potentially successful “proof of concept” experiment in drones hunting humans: a captive population that can be targeted at will; the necessary long-range territorial distance (from Las Vegas to Afghanistan) needed to field-test lag time for the remote control of unmanned weapons systems; and media mobilization of the public opinion of domestic populations, which generates active support for the frequent use of unmanned aerial vehicles in warfare but, more importantly, generalized ethical tolerance for excluding targeted populations, whether targeted “terrorists” or civilian bystanders–families and friends at funeral gatherings, children sleeping at home–from basic recognition of the rights of reciprocity as human beings. From this purely strategic point of view, the experiment in drones hunting humans that was the essence of the War on Terror was demonstrably successful.
Not particularly, of course, in the numbers of known “enemy combatants” killed–it was always the usual folly of war to expect that seasoned warriors adept in the ways of camouflage and surreptitious movements could be tracked, let alone eliminated, by flying robots in the sky. But, in the usual way of things, even major failures like the rash and ill-conceived military adventure in Afghanistan have their purposes. On any cold-eyed examination of the outcome of this proof of concept experiment, the results were strikingly successful in inverse proportion to the harsh reality of the overall military failure itself: fleets of Predator and Reaper drones could be controlled remotely; brilliantly displayed, real-time videos of actual combat situations could be provided to elite commanders bunkered down in the command-and-control centers of the Pentagon, intelligence services, and the White House itself; captive populations could not only be targeted as required but, as an added benefit, future psych-ops would be guided by the medical finding that the humming presence of drones hunting humans in the sky would accelerate mass psychological depression, and thus political paralysis, in the targeted population; and finally, domestic populations have quickly and decisively proven themselves receptive to, if not eager participants in, ethical indifference to those identified by the state as fit objects of sacrificial violence.
Consequently, when drones first began to hunt humans in the War on Terror, a complicated calculus of proof of concept was affirmed, one that was at once strictly technological (remote control of unmanned aerial vehicles); specular (those live video feeds to the masters of the war machine providing, at the minimum, the illusion of being warriors, if only ersatz warriors, in games of life and death); psychological (creating and maintaining a generalized condition of cultural acedia in targeted populations); and ethical (preserving political support for drones hunting humans by intensifying that sweet spot of all carefully orchestrated military media campaigns–a perfect blending of moral indifference mixed with feelings of righteous anger as the emotional fuel supporting war drones operating under the sign of abuse value). This, in effect, constitutes the technological ontology of surveillance practices that function as the operating system of the new security state.
Now that the “proof of concept” stage for drones hunting humans has been completed, it will only take a slight redesign of contemporary models of war to successfully reenact this very same mix of tactics, logistics, ethics, and psychological animus in domestic space. Following the doubled ideological logic of facilitation and control by which new technologies are usually introduced, we can already identify the key political markers facilitating drones hunting humans at home. Not surprisingly, everything will have to do with “securitizing the homeland.” Not just securitizing the always-porous borders in the face of increasingly phantasmagorical anxieties about “illegal aliens” and sometimes even legitimate suspicions about potential terrorist attacks, but also the much-publicized need to securitize dense networks of oil and gas pipelines, isolated power stations, nuclear facilities, and transportation corridors. In this case, when the drones come home, it is likely that invisible surveillance will take over the open skies of homeland security, with “upmoded,” war-like drones securitizing borders, patrolling far-flung networks of pipelines, and surveilling targeted cities, neighborhoods, homes, vehicles, individuals. While economic insecurity and political anxiety provide, in the first instance, the necessary conditions for authorizing the entry of the apparatus of drones hunting humans into the domestic scene, the future will be different.
Art as a Counter-Gradient to Drone Warfare
When machines break the skin’s surface, becoming deeply entangled with desires, imagination, and dreams, do we really think that we will be left untouched, that easily discernible divisions will remain among the machinic, the natural, the human? Without conscious decision or public debate, we may have already passed into the deeply enigmatic territory of the new real: that space where the price to be paid for the sudden technological extensions of the human sensorium may be an abrupt eclipse of traditional expressions of consciousness and ethics; that time in which the uniform real-time of big data effortlessly substitutes itself for the always complex, necessarily enigmatic, lived time of human duration. When the human life cycle increasingly depends for its very existence on technological resuscitation, how much longer will the meaning of the human not yield to the greater power of the technological? That’s the new real: the future world that is now, where individual singularity has been replaced by network connectivity; where bodies of flesh, blood, and bone have already been surpassed by a proliferation of electronic bodies in the clouds; where every step, every breath, every glance, every communication gives off dense clouds of information that are, at once, our permanently monitored past and our trackable future. For some, definitely suffocating. For others, a fully liberated future of the transhuman, where the handshake made between the codes of technology and the missteps of humanity indicates that we have already migrated into another country, another time, with sublime possibilities for technologically augmented bodies, digitally enhanced vision, and quickly evolving light-wave brains.
We have always been an adventurous species, living at the edge of dangerous risks and practical wisdom, a species (technologically) willing to will its own extinction while, at the same time, artistically probing the future for its terminal abysses and points of creative transformation. It is the very same with the unfolding story of drones. It is the artistic imagination of drones that displays heightened sensitivity to what Heidegger might have described as the new dwelling-place of drones at home and drones at war. Refusing to think outside the imaginary landscape of drone technology, the artistic imagination can be so replete with important insight because it actually engages the material reality of drone technology. Not through active imitation or complacent praise, but through an artistic imagination that thinks right through all the symptomatic signs of drone technology to discover its essence–not only that which is made visible by drones but how their very invisibility and remoteness burrows inside human anxieties.
Today, a number of contemporary artists act as leading political theorists of drone technology, exploring in the language of aesthetics the remote violence and the equally remote ethical distancing that occurs when unmanned aerial vehicles are purposed by larger military missions. In the contemporary artistic imagination are to be discovered the full dimensions of drone technology as the truly ominous symbol of the times in which we live: a symbol of power that is remote, invisible, weaponized. Representing, in effect, heightened cultural consciousness concerning the full implications of drones, artists often function today as the kind of philosophical explorers that Hannah Arendt once described as the “negative will” at the heart of technology: a pornography of power that seeks to draw everything into obscene visibility–desensitized, dehumanized, sadistic in its pleasures, cynical in its purposes. Opposing the secrecy that surrounds the development and application of militarily purposed drone technology, contemporary drone art–online and real-time–breaches boundaries of secrecy by making its aesthetic explorations fully open to the electronic public, linking together in common ethical purposes drone artists from different countries and, perhaps of greater significance, creating active collaborations between critical drone art and the actual and potential victims of the cold violence of those unmanned aerial vehicles hovering in the skies of foreign lands for the moment, and soon in the twilight sky of the imperial homeland.
“In military slang, Predator drone operators often refer to kills as ‘bug splats’, since viewing the body through a grainy video image gives the sense of an insect being crushed.” 
#NotABugSplat, an emotionally evocative and deeply ethical project by a Pakistani artist collective, is what happens when those held under the sign of erasure by warlike drones finally have the opportunity to speak publicly, and in doing so begin to imagine another language, ethics, and memory for making the invisible visible, the prohibited image the necessary subject of moral inclusion, and the (technically) silenced a suddenly noticeable, deeply insistent subject struggling to be recognized. When the governing ethics of power privileges a form of long-distance ethics essentially constituted by a strict separation between decision and consequences, between remote drone operators and slaughtered people in fields, then we can most definitely know that ours is a culture that moves at the ethical speed of a bug splat with all that entails in terms of extremes of dehumanization, desensitization, and pure objectification.
Understanding that the only effective ethical response to power under the sign of a bug splat is one that suddenly humanizes the field of remote vision and thereby activates an insistent demand for recognition as human beings, #NotABugSplat works to facialize Pakistani victims, actual and intended, of US drone strikes in order to make legible the human dimensions of those condemned to abuse value status in the age of drone technology. The artistic strategy is as straightforward as it is ethically profound:
The image released as part of this project was taken by a mini-helicopter drone and depicts a young girl who lost both her parents in a drone strike in Pakistan’s Khyber Pakhtunkhwa province. Hoping to instill “empathy and introspection,” one of the artists of the organizing collective said: “We tried to replicate as much as we could what a camera from above will see looking down . . . (W)e wanted to highlight the distance between what a human being looks like when they are just a little dot versus a big face.”
While the artistic project involves, in the first instance, remaking a farmer’s field in rural Pakistan into a large art installation featuring a massive image of a young girl’s face–an image aimed at activating the ethics of remote predator drone operators–the political implications of #NotABugSplat are universal. Here, in a unique case of art acting as a counter-gradient to power, that haunting image of a young Pakistani girl “who lost both her parents and two young siblings in a drone attack” reverses the language of power by critically and decisively re-ordering the logic of targeting. Until this point, the specific targeting of drone attacks was solely a matter of cold military logic with, for example, all young males in strike zones considered “militants, unless there is clear evidence to the contrary,” and the local population deemed “guilty by association” and “a militant if they are seen in the company or in the association of a terrorist operative.”  Working to undermine the antiseptic, radically indiscriminate logic of “signature strikes” with their unreported but widely documented massive civilian casualties, #NotABugSplat subverts such a logic of targeting. While it might be naive to suppose that an image, even a large haunting image, visible to predator drones, would have any real effect on the ethics of their remote operators, this attempt to make suffering visible, to actually facialize those literally objectified by technologies of violent disappearance, has an unpredictable advantage. For the very first time, the ethical worm turns by a radical reversal in the order of targeting. Suddenly, an art installation in a rural, Pakistani field begins to speak to drone operators housed in the remote reaches of an imperial homeland, targeting their ethics, their memories, their most fundamental understanding of the necessary demands implied by human recognition and reciprocity. 
While the nihilism evinced by drone technology may already be so advanced as to immediately nullify the ethical purposes of the artistic project, there always exists the fragile, nebulous possibility that the face of existential suffering can give pause to the most arid, most unmanned, of technologies of contemporary war. In this case, #NotABugSplat might best be viewed as the first of all the future artistic experiments in breaking, not the sound barrier of earlier times, but the ethics barrier of remote technology. Consequently, it is in this emotionally compelling project–a project that puts the question directly concerning whether or not shared ethical responsibility can triumph over the singular purposes of drone warfare–that both the last and best hopes of suffering humanity surely rest.
Terror from Above
Let me tell you a story
a bedtime story
Let me tell you a story
of Predator drones with giant wings
equipped with hellfire missiles
and “light of God” lasers
choking the skies over northwest Pakistan
Let me tell you a story
a daytime/nightmare story
of grandmothers as “bug splats”
and children as “double taps”
Let me tell you a story
an everyday story
of terror from above
villagers burned, body parts strewn
over cultivated fields
Let me tell you another story
The official story
a drone warfare story
Let me tell you a story
of precision strikes
where no innocent is mutilated, incinerated
Let me tell you a story
But we know this story is a lie
Surveillance power increasingly functions by moving from the center of human attention to its peripheries–invisible, ubiquitous, waiting. Now it is no longer a matter of people having to walk into the field of machinic vision–as it was in the age of street-level video cameras–but of a machinery of surveillance that electronically scans entire landscapes, carefully monitoring the daily habits of their inhabitants, watching for selected disturbances of the field of vision that may potentially trigger a violent technological reaction–a drone strike. In this case, the surveillance power of drone technology is no longer limited to a list of potential targets named on what the National Security Council describes as the “disposition matrix,” but something more menacing, namely the harvesting of entire populations under the sign of a generalized disposition matrix–people who are deemed to be in a permanent state of suspicion by association, no matter how accidental; by physical proximity through a wedding, a funeral, a community gathering; by the simple geospatial fact of where they happen to live. When surveillance migrates from visible technologies to invisibility, from reliance on human disturbances of machinic vision to machinic disturbances of individual experience, it means that we are living in the era of space-binding power–always hovering on the peripheries of life, bracketing the lived time of those inhabitants held under suspicion by the prospect of an immediate sentence of death from the air. What does it mean, then, when the power of surveillance is no longer limited to visual scans of always-threatening populations, but when surveillance itself incorporates a politics of life and death? Equally, what is meant when entire theatres of war in the contemporary era themselves retreat behind a shield of invisibility: unreported, unexamined, undisturbed?
What is implied, in effect, by the present state of affairs when the concept of invisibility itself has been weaponized? While technologically augmented society likes to pride itself on the culture of connectivity, with bodies everywhere seemingly globally mobilized by social media into always-open data points, the reality of the new invisibility associated with technologies of surveillance would intimate that, in some fundamental sense, we are actually radically disconnected from some very essential knowledge. Perhaps what we are most disconnected from is the sudden transformation of weaponized invisibility–surveillance technology in the form of drone strikes–into a key expression of the ontology of the times in which we live: drone strikes as being-towards-death.
The political implications of drone strikes as weaponized invisibility have been brilliantly explored in the aesthetic work of the British artist James Bridle. In an interview with the BBC, Bridle noted that his art is interested “in exposing the connection between secret surveillance, power projection and new technology through installations”:
It’s very strange that these days we have no idea of the battlefields on which war is being fought. . . . But at the same time we’ve built technology that allows us to see the whole world on your phone. I wanted to use these technologies to make visible the contemporary battlefields, these drone strikes.
Working in the language of social media, one of Bridle’s aesthetic projects–Dronestagram–repurposes Google Earth into a visual cartography of actual drone strikes, including location, frequency, and timing, that is then circulated through the electronic capillaries of social media, from Instagram to Twitter. Here, one medium of (social) communication is creatively redeployed as a way of drawing into visibility another medium of social destruction. But beyond Dronestagram, there is another interesting project that Bridle has initiated, one that has a larger collective purpose–to create public awareness of the material reality of drone strikes. Titled Drone Shadows, this project, based on the active collaboration between Bridle and Norwegian visual artist Einar Sneve Martinussen, produces perfectly scaled chalk drawings of drone shadows in the streets of many cities of the world. As Bridle states: “One way of looking at drones is as a natural extension of the internet . . . in terms of allowing sight and vision at a distance. They’re avatars of the net for me.” Or, as one insightful commentator has noted: “In Drone Shadows, he draws a chalk outline to scale of a different drone each time, highlighting that not only do they not cast shadows from the vast height they operate at but that they are here among us, very literally, and unseen.” In a larger sense, Bridle’s overall project, what he describes as the “New Aesthetic”–whether Drone Shadows or Dronestagram–focuses on the complex entanglement of technology and warfare as the essence of invisibility itself. By creating shadows for that which is without shadows, by visually mapping that which wishes to remain unmapped, his artistic imagination probes the full consequence of invisibility itself. In so doing, the project renders the question of invisibility even more complex in another way.
While drone strikes can be mapped and drones themselves made to cast chalk-like shadows on city streets, what about those other invisibilities, those growing invisibilities of language, culture, ethnicity, geographical location–of life itself? Why is it that so much of what is visible today is, in fact, invisible? Why is it, in the end, that only certain expressions of human visibility–targeted bodies in the tribal lands of Pakistan, Yemen, Somalia–are dragged out into the violent visibility of otherwise invisible technologies of surveillance? Have we reached a first cultural, and then political, breaking-point in which the meanings of visibility and invisibility have entered into a more complicated mediation, one in which the question of visibility will increasingly rely on a greater political ordination while, all the while, those other very human invisibilities–differences of class, race, ethnicity, life itself–are allowed to disappear into the category of human remainder? And, of course, there is also this curious, purely aesthetic paradox, namely, that the act of making visible those hidden warfare invisibilities of Predators, Reapers, and Global Hawks does not rely on anything particularly high-tech, but on two other expressions of more urgent technologies–the simple act of drawing chalk outlines of drones on city streets and the very public act of mobilizing global public participation in the art of making drones visible. 
Night Sky Epilogue
The night sky drone
is a bullet, an eye, a gut
Venus transits and the sun
is a distant memory
2 tons of fuel and a ton of
munitions. 18″ and 7,000 miles
The smell of BBQ
walkers and runners.
A biplane overhead laconically
pulls a sign that reads
“There’s no place like home
especially when it is clean and green”
 Thomas L. Friedman, “Parallel Parking in the Arctic Circle,” New York Times Sunday Edition, March 30, 2014, p. 11.
 “DARPA Goes Deep: New Hydra Project to See Underwater Drones deploying Drones,” RT (September 10, 2013), http://rt.com/usa/darpa-underwater-drones-fleet-489/ (accessed July 28, 2014).
 Joan Lowy, “Drones: FAA warns public not to shoot at unmanned aircraft,” Christian Science Monitor, Associated Press, July 21, 2013.
 “‘Will I Be Next?’ US Drone Strikes in Pakistan,” Amnesty International, https://www.amnestyusa.org/sites/default/files/asa330132013en.pdf (accessed July 28, 2014).
 For example, see https://www.flickr.com/photos/whitehouse/5680724572/in/photostream (accessed July 28, 2014).
 http://notabugsplat.com/ (accessed July 24, 2014).
 “Vincent van Drone: They’re not just killing machines anymore.” www.globalpost.com/dispatch/news/war/130812/drones-art-dronestragram-whistler-bridle (accessed April 15, 2014).
 “Art in the Drone Age: Remote-controlled vehicles now spy and kill in secret. What are artists doing about it?,” www.dazeddigital.com/artsandculture/article/16183/1/art-in-the-drone-age (accessed April 15, 2014).
 James Bridle, Drone Shadow Handbook, http://booktwo.org (accessed July 24, 2014). On his site, Bridle also offers “DIY Drone Shadows,” a free electronic download of the Drone Shadow Handbook with instructions for creating drone shadows: “For some time, I’ve wanted to open up the project, so that anyone can draw one. With this in mind, I’ve created a handbook, which gives guidance on how to draw a drone shadow, including advice on measuring and materials, and schematics for four of the most common types of drone: the Predator, Reaper, Global Hawk, and Hermes/Watchkeeper.”
Surveillance Never Sleeps
Surveillance never sleeps because it lives off data trackers designed to never forget. Algorithms have become cabinets of digital memories with sensors that attach themselves to the words we speak, places we see, even thoughts not yet expressed. Our lies and truths lived through our nights and days.
Like the sleeplessness of data itself, always mobile, circulating and recombinant, network surveillance lives under the strict obligation to police the full circumference of digital being: all those financial algorithms rendering instant, real-time judgments on questions of economic solvency; algorithms in the form of technologies of “deep packet inspection” for supervising violations of civil rights; algorithms for economic espionage in the name of national security; algorithms for pleasure, for gaming, for better apps; algorithms for tracking, recording and archiving the habitual activities and errant breaches of any human heart that makes up life in the data torrent today. While at one time insomnia referred only to a human sleep disorder, now a new form of insomnia–data insomnia–has been created.
So, then, some reports from the field of a pervasive machinery of surveillance that never seems to sleep, with its data farms, archive terror, face printing, embedded sensors, smart bodies, and cold data.
Lightning Storms in the Data Farm
The (NSA) data center was shut down through Tuesday. The source said there aren’t “arcs and fires” anymore but that the experts on the site still haven’t figured out what’s causing the problems. They have figured out how to prevent flashes of lightning, though.
“They’re seeing a pattern of where it gets to the meltdown point and they stop it before it blows again,” says the source. The source said that contractors have been injured and taken to the hospital due to electrocution, but not in the most recent shutdown. 
When finally powered up and fully online, the NSA (National Security Agency) Utah Data Center promises to be a prodigious tower of (digital) babel in the beautiful mountainous terrain of Bluffdale, Utah. Not far from the now vanished site of Fort Douglas, which was originally constructed to defend older lines of American continental communication, including the stagecoach line on the Oregon Trail and telegraph facilities, the spyware data center also occupies a curious intersection between theology and technology, situated as it is in a community that Wired magazine describes as the largest Mormon-based polygamist community in the United States. Reportedly occupying 1.5 million square feet and costing over one billion dollars, the Utah Data Center is, in effect, the electronic cerebral cortex of a vast data harvesting system aimed primarily at gathering foreign signals intelligence but also “harvesting emails, phone records, text messages and other electronic data.” As described by James Bamford in Wired:
A project of immense secrecy, it is the final piece in a complex puzzle assembled over the past decade. Its purpose: to intercept, decipher, analyze, and store vast swaths of the world’s communications as they zap down from satellites and zip through the underground and undersea cables of international, foreign, and domestic networks. . . . Flowing through its servers and routers and stored in near-bottomless databases will be all forms of communication, including the complete contents of private emails, cell phone calls, and Google searches, as well as all sorts of personal data trails.
Definitely anticipating an unlimited future of information accumulation, the Utah Data Center is described as capable of storing five zettabytes of data, sufficient storage space, that is, for the next hundred years.  Of course, while technically awesome in its ability to harvest in one isolated Utah data site the world’s global communications potentially spanning an entire century, NSA’s spyware center has experienced very real problems at the double interface of unpredictable nature and human political ingenuity. While lightning storms crackled across the otherwise austere architecture of its massive data servers and “arcs and fires” seemed to break out spontaneously at the merest hint of data flowing, those human beings waiting to be harvested of their “patterns of life” were themselves engaged in creative forms of cyber-protest at the very gates of the data kingdom. Petitions were presented to the local town council demanding that vital water supplies to the data center be immediately terminated. Since cooling water is a critical requirement for any data farm that plans to mine the data skies with all the power enabled by five zettabytes of storage memory, this local protest in defense of civil liberties had the potential effect of eclipsing secret cryptography in favor of ground-truthing by water. With natural protests taking the form of fire in the air and human resistance privileging blockages in the flow of water, it was almost as if all the mythic furies associated with the four fundamental elements of the universe–fire, water, air, and earth–had suddenly assembled in the face of this monument to absolute (digital) knowledge. As for earth, protests involving this classical element constituted a literal read-out of the meaning of grounded resistance. 
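As a purely illustrative aside, the five-zettabyte figure cited above can be put in rough arithmetic perspective. The sketch below (decimal SI units assumed; the capacity itself is a media estimate, not an official specification) works out the sustained ingest rate such a store would imply if filled evenly over a century:

```python
# Back-of-envelope arithmetic for the reported five-zettabyte capacity.
ZETTABYTE = 10**21          # bytes, decimal (SI) convention
capacity = 5 * ZETTABYTE    # reported storage capacity in bytes

SECONDS_PER_YEAR = 365.25 * 24 * 3600
century = 100 * SECONDS_PER_YEAR  # one hundred years, in seconds

# Average write rate needed to fill the store evenly over a century.
sustained_rate = capacity / century  # bytes per second

print(f"{sustained_rate / 10**12:.1f} TB/s")  # ≈ 1.6 terabytes per second
```

Even spread across a hundred years, the figure implies a continuous ingest on the order of a terabyte and a half per second, which gives some sense of why cooling water and electricity loom so large in the protests described above.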
A group of local civil libertarians adopted a highway running in front of the Utah Data Center for the sole purpose of holding up protest signs for passing motorists, while all the while engaging in the good citizenship practice of actually tidying up the highway and its immediate environment. Consequently, a curious case of a supposedly frictionless NSA data center smoothly aggregating complex streams of global data while buffeted by very real lines of friction involving lightning in the (data) sky, dammed up water, protesters disguised as highway cleaning crews, and very strange encounters in the desert air between the dream vectors of technocracy and polygamy. Perhaps what is really happening here is something that is not captured in all those data rushes of foreign, or for that matter, domestic signals intelligence and information awareness, namely that like all demands for meaning before it, this urge to absolute knowledge of the human universe is always quickly outrun by the complex particulars of humanity and nature alike. Not really as Camus supposed mythic indifference to the demand for absolute meaning, but something more subtle in its appearance, specifically that the technological dream of a perfect spyware universe of frictionless flows of information always generates in its wake unexpected and fully unpredictable lines of friction. That those lines of friction have no possibility of easy absorption into the hygienic and closed universe of tens of thousands of humming data servers does them no dishonor. 
It would simply indicate that a universe predicated on the security to be provided by the terrorism of the code is probably already doomed one hundred years in advance by that which cannot be avowed, included, or permitted–whether local citizenry gathering to turn off the cooling taps of water, lightning flashes across the server horizon, or that greatest line of friction of all, the loud media sound of Edward Snowden as he adds his own line of friction to the data harvest.
Not just lines of friction, though. There are also strange, and deeply enigmatic, symmetries. What the NSA is constructing in a desert bowl near Salt Lake City is a genealogical record of the digital future, one batch of signals intelligence at a time, until the point, sometime over the next one hundred years, when some bright cryptographer in a still indeterminate future will find in all that Big Data not simply patterns of life but patterns of whole societies, of republics, democracies, empires, tracing the rise and fall of sometimes clashing civilizations and contested visions of political economy. Literally, a history of the future traced out in the data stream. However, what is truly enigmatic in its implications is the present-day fact that there is not simply one, but two, major experiments in patient, genealogical research underway in Utah. Certainly, the technological probe of the digital future that is the NSA’s Utah Data Center, but also the theological tracking of family genealogy that has long been underway in nuclear-bomb-proof caves near Salt Lake City. So, then, two genealogies: the first aimed at cryptographic analysis of information culture–past, present, and future; the second a theological archiving of detailed family genealogies, tracing its arc from the ancient past to the still unknown future. Of course, it could just be coincidence, an unnoticed fact of surveillance history, that the theologically signified region of Salt Lake City, with its famous history of the Mormon Trek that began with Joseph Smith in the darkly wooded valley of Sharon, Vermont and ended, under Brigham Young, in the promised land of Salt Lake City, was chosen by the NSA as the site of its first major server farm. Perhaps the choice of Utah as a spyware center reflected the usual pragmatic budgetary reasons: cheap electricity and plentiful water, with the added advantage of sometimes hiring, according to some media reports, Mormon missionaries as NSA data analysts, given their acquired skills in foreign languages.
It could also be the case, though, that this convergence of technocratic and religious interests in the question of genealogy, one present- and future-oriented and the other privileging the past, is symptomatic of a deeper convergence between technology and theology on the question of absolute knowledge. In this case, what might actually be happening in the scrublands of Utah are two deeply iconic exercises in truth-seeking: the one dealing with hidden patterns of signifiers in all those flows of global information and the other focused on equally hidden patterns of theological truth-saying revealed in the genealogy of family histories. While this might make of the NSA only the most recent manifestation of a redemption quest akin to a new Mormonism in American public affairs, it would also make of Mormon theology, with its tripartite focus on redemptive visions, missionary practice, and patient genealogy, the premonitory consciousness of the animating historical vision of the NSA itself. Not crudely in the sense of an open, avowed affiliation between the NSA and Mormonism, but something more subtle and thus more deeply entangled: namely, that the NSA as the self-avowed secret spearhead of cybernetically sophisticated technological adventurism also has its eyes on the prize of merging the redemption story that is the essence of the “American dream” with (digital) missionary consciousness and cryptographic genealogy. In other words, an unfolding symmetry of the Book of Mormon and Big Data in the mountains, deserts, and salt lakes of all the Utahs of the data mind.
The Washington Post revealed four new slides from its trove of top secret PRISM information, appearing to confirm some of the initial reports . . . about the nature of the US government surveillance program.
Notably, the new slides appear to confirm whistleblower Edward Snowden’s claims that PRISM allows the NSA and FBI to perform real-time surveillance of email and instant messaging, though it’s still not clear which specific internet services allow such surveillance. (As originally reported, PRISM providers include Microsoft, Yahoo, Google, Facebook, PalTalk, YouTube, Skype, AOL, and Apple.) The Post claims that “depending on the provider, the NSA may receive live notifications when a target logs on or sends an email, text, or voice chat as it happens.”
During a recent interview in a Moscow hotel with the editor of The Guardian, Snowden elaborated further on the political significance of his whistleblowing, arguing that the surveillance state depends for its very existence on the basic assertion that digital experience in its totality is exempt from the traditionally protected domains of individual privacy. Grant that governmental claim, according to Snowden, and what inevitably results is something like PRISM with its unfettered, real-time access to the most intimate details of individuals. Not necessarily only “targeted” individuals, but in Snowden’s account one of the common “fringe benefits” for NSA analysts was the circulation of sexually explicit images of subjects, without their awareness and from the “privacy” of their homes.
How, then, do we begin to understand a form of power that works in secrecy, that functions, on the one hand, by a highly routinized, hyper-rational organization of billions of bytes of Big Data into differentiated streams of classification, ordering, and targeting, and then just as quickly reverts into the language of the voyeur, the (digital) stalker? Beyond the purely rhetorical contestation associated with governmental assertion of the demands of national security and counter-challenges by defenders of civil rights concerned with the instant liquidation of individual privacy, is it possible that we are dealing here with a demonstrably new expression of power–prismatic power–a form of power that is fully unique to the digital epoch from which it has surfaced as its most avant-garde manifestation and retrograde (political) expression?
In this case, it is not purely coincidental that PRISM was selected as the name for a top secret US government surveillance program. In the science of optics, a prism serves to divide white light into the colors of the spectrum, or to refract, reflect, and deviate light. Which is precisely how power operates in the age of Big Data, that point where what matters is not the geography of material bodies, but the hidden content of their “white light”–the refractions, reflections, and deviations given off by the data torrent of targeted bodies–email, text, or voice chat–as they are passed through the “PRISM Collection Dataflow.” Here, like a latter-day version of Isaac Newton’s experiment some three hundred years ago, in which light passed through a prism first revealed the colors of the visible spectrum, the PRISM Collection Dataflow passes the content of individual data biographies–some specifically targeted, most harvested from the servers and routers of the communication industry–through the prism of its collection dataflow in order to suddenly bring into visibility bands of hidden political trajectories from the larger mass of undifferentiated detail. Theoretically, the digital traces of subjects, whether domestic or international, will be studied to determine how they fall on the spectrum of national security, whether a normal separation along the spectrum of political loyalties or variations in refraction, reflection, and deviation that may require further scrutiny. For the latter, there are multiple data programs carefully coded for further classification and ordering: Printaura, Scissors, Pinwale, Traffic Thief, Fallout, Conveyance, Nucleon, Marina, and Mainway. As the report in The Washington Post notes:
Two of the new slides detail the data collection process, from the initial input of an agency analyst, to data analysis under several previously-reported analysis tools such as Marina (internet data), Mainway (call records), Nucleon (voice data), and Pinwale (video data). 
What lends a feeling of claustrophobia and suffocation to this secret machinery of government surveillance is not simply its obvious metaphoric presence as a tangible sign of intrusive surveillance, but the fact that its otherwise hyper-rational software programs are themselves disturbed by the deviant libidinal energies of its data analysts, all those NSA analysts taking full advantage of the “fringe benefits” of the PRISM Collection Dataflow. Here, there may be, in fact, a double prism operating at the center of power under the sign of Big Data: the first a refraction of multiple streams of data information through the prism of surveillance programs of control; and the second an immediate reversal of the field of surveillance, this time with the voyeurism of NSA analysts as a metonymic cut across the pure sign of surveillance–puerile male affect bending the optics of surveillance in the direction of cynicism, capriciousness, and perversity. When intrusive surveillance meets uncontrolled affect, prismatic power has about it all the refracted energy and distorted aims of a form of control that is seemingly lost in the illusions of its own optics.
“Just load existing photos of your known shoplifters, members of organized crime retail syndicates, persons of interest and your best customers into FaceFirst,” a marketing pitch on the company’s site explains. “Instantly, when a person in your FaceFirst database steps into one of your stores, you are sent an email, text or SMS alert that includes their pictures and all biographical information of the known individual so that you can take immediate and appropriate action.” 
Priceless. Not only the proliferation of technologies of “total information awareness” by secret governmental agencies intent on capturing every refraction of light-speed data emitted by the digital self within the prisms of power, but now facial recognition technology for commercial use that involves the construction of biometric face prints of the population. At this point, not the entire population but only those high-value targets, including both criminals and preferred high-spending customers, identified in advance by facial recognition software triggered by biometric memories of faces that it has previously scanned for purposes of instant recall. In this software scenario, the digital face rises into privileged visibility as a secure biometric tag.
There are, of course, the inevitable lingering questions of digital privacy. Who owns the rights to your digital face? And specifically, who owns the recall rights on your digital face over an extended period of (database) time? Do you automatically assent to the alienation of rights to the acquisition, classification, ordering and targeting of your digital face with the simple act of shopping in a store, going through security screening at the airport, applying for a passport, or, for that matter, obtaining a driver’s license? And, if so, would it be reasonable to conclude that the alienation of individual rights over biometric signs of their digital identity is a necessary feature of passing beyond the lip of the net to full admission in the digital galaxy? This is not simply a reprise of traditional arguments concerning the balance that often needs to be struck between personal privacy and collective security, in this case the security of the business database, but something more complicated. Literally, with the deconstruction of the face that is entailed by the mathematical vivisection of facial biometrics, the face itself has suddenly split in two, with the one face purely biological, uniquely singular to the individual that inhabits the historical markings of its smiles and frowns and wrinkles, and the other face distinctly biometric–a face print–mathematically coded, biometrically tagged, circulating in an anonymous database, beyond history, a ghostly remainder beyond material memories of the living singularity that it once was.
In this case, what happens when we exist in a culture increasingly populated by facial recognition technologies that involve the deconstruction of the face to that point of excess where the database face not only floats away as something increasingly phantasmatic–a radically split face for a culture of radically split selves–but returns, again and again, as a permanent, trustworthy, machine-readable identifier of bodily presence? Here, it is no longer surveillance that never sleeps but something perhaps more profoundly melancholic, namely all those biometric images captured by ubiquitous facial recognition technologies stored in lifeless, dark databases like so many catacombs of the (digital) future. Sensitive to other stories, of restless, wandering ghostly spirits seeking a return to their earthly bodily presence, we wonder if in all those facial catacombs of the archived present and facial recognition future there will not also be heard, or perhaps quietly but persistently felt, like a rush of air on a windless night, the insistent, melancholic sound of that intimation of deprival that is the deconstruction of the face. But of course this is preposterous, because we know that when technology eclipses mythology, there is no longer room for the hauntings of ghostly remainders at the table of biometrics. Consequently, a future of dead faces with frozen images–digitally authorized, facially recognized and biometrically tagged–as the first of all the artificial successors, file by file, to their human facial predecessors.
With the overall trajectory of mass surveillance technologies apparently aimed at developing biometrics for every overexposed subject, a recent report on the creation of “New Electronic Sensors (that) Stick to Skin as Temporary Tattoos”  is of particular importance. Designed by John A. Rogers of the University of Illinois, Urbana-Champaign, in collaboration with research colleagues in China and Singapore, these “epidermal electronics” have diverse applications:
Ultimately, Rogers says, “we want to have a much more ultimate integration” with the body, beyond simply mounting something very close to the skin. He hopes that his devices will eventually be able to use chemical information from the skin in addition to electrical information. 
A “thin, flexible sensor that can be applied with water, like a temporary tattoo,” the electronic tattoo is intended to provide precise measurements of emissions from the “brain, heart, and muscles.” So, then, no longer images of bodies wired to hovering machines or physical probes that break the surface of the skin for deeper penetration, but now sensors that “are thinner than a human hair, perhaps powered in the future by solar cells,” perfectly aestheticized in their degree of cultural coolness. In other words, the body literally repurposed as a semiconductor delivering messages from its own physiological interiority to the growing number of satellites of mass surveillance.
We wonder, though, what would happen in the future if, and more probably when, this explicitly medical technology is repurposed as an innovative technology of bodily surveillance? Electronic tattoos, that is, for a time when surveillance moves from the outside of the body to interior measurements of brain waves, muscle contractions, and blood flow? And what would happen when the bodily information harvested is not simply confined to the domain of the electrical but is articulated in the much more invasive language of the chemical–the very language that is central to the functioning of the human nervous system? Are we simply speaking of a difference of degree, between, for instance, relatively crude visual images of bodily movement and biological markers of the body’s interior and, until now, relatively unnoticed patterns of (chemical) life? Or is this report on the prototyping of epidermal tattoos more in the nature of a fundamental break, namely a biological device that potentially facilitates a dramatic extrusion of mass surveillance systems into the essence not just of bodily physiology but also affect and consciousness? Perfectly adaptable to the development of a form of surveillance that requires biometric tracking of individual subjects–their moods, activities, degrees of endurance and potential breakdowns–electronic skin tattoos bring us to the threshold of a very new form of technological embeddedness. A uniquely powerful fusion of biology and electronics, epidermal tattoos are said to “bend, stretch and squeeze along with human skin,” and maintain contact by relying on “the natural stickiness credited for geckoes’ ability to cling to surfaces.” It is almost as if these tattoos are not simply chemical additions to the existing surface of the diffuse, flexible organ of the skin, but are more in the way of wearing a second skin for purposes of better internal measurements.
Here, the body literally begins to re-skin itself as a living, breathing, electrically charged and chemically vapored organ of surveillance. No need, then, for surveillance to continue to rely on the external communications of its many targets of investigation because bodies augmented with e-tattoos are actually growing bio-technological surveillance organs of their own. Epidermal tattoos, therefore, as perhaps the first palpable sign of the synthetic bodily flesh of all those future bodies of biometric tracking.
Machines to Bodies (M2B): Smart Bodies, Cold Data and “Five Eyes”
Are smart bodies in a culture of cold data the probable future of technologies of mass surveillance? In response to the challenge of bodily materiality, with its hidden passions, secret dreams, and unexpected–and often unpredictable–actions, the new security state is rapidly moving towards the deployment of a new generation of smart bodies located on the always searchable smart grid of technologies of mass surveillance. Since machine readability is enhanced by biometric identifiers, the aim would be to populate the skin surface with an array of sensors for improved machine readability. For the moment, the transition to smart bodies equipped with electronic skin tattoos, locatable media, and prosthetics facilitating easy biometric tracking is marked by technical, and then political, challenges as the new security state works to filter, archive, and tag the immense data oceans of global communication. It is in this context that there can be such dramatic political encounters between passionate defenders of individual privacy and proponents of the new security state interested in the total awareness provided by networked communication.
However, this is probably only a temporary transitional period since the implacable movement of surveillance technologies is towards forms of automated surveillance of smart bodies–machine-to-body communications (M2B)–that would quickly outstrip privacy concerns in favor of continuous flows of individual bodily telemetry: its location, moods, nervous physiology, heart rate, affective breakthroughs, and even medical emergencies. Taking a cue from the pervasive networks of smart grids that have been installed in many cities as part of managing energy consumption, smart bodies, like domestic homes before them, are visualized as inhabited cybernetic systems, high in information and low in energy, emitting streams of machine-readable data. For all intents and purposes, GPS-enabled smart homes are early avatars of the smart body–data tracking megaphones doubling as digital communication devices.
From the perspective of technological futurists, there is nothing really to fear in the emergent reality of a smart future with bodies enmeshed in dense networks of tracking machines, since those very same bodies will likely also be equipped with counter-tracking prosthetics–digital devices capable of quantifying the extent and intensity of data emissions between bodies and the surrounding environment of surveillance trackers. For example, as the technological futurist Kevin Kelly has stated: “Tracking and surveillance are only going to get more prevalent, but they may move toward ‘coveillance’ so that we can control who’s monitoring us and what they are monitoring.”
Some may argue that the human body, with all its complex inflections and unbounded mediations, will never really be reducible to a smart body circulating within a global smart grid, but that argument is countered by noting the present migration of surveillance technology towards a greater invisibility through miniaturization, breaking the skin barrier with digital devices functioning at the interface of biology, computation, and electronics, and, in fact, layering the body with data probes designed with the qualities of human skin itself–soft, malleable, bendable, fluid, elastic, tough.
A future, then, of cold information–diffuse, circulating, commutative, dissuasive–with bodies chilled to the degree-zero of recombinant flows of information. Morphologically, information has always been hygienic in its coldness, always ready to perform spectacular sign-switches between metaphor and metonymy. What this means for understanding technologies of mass surveillance is that the future of smart bodies will probably be neither the dystopia of total information dominance on the part of powerful interests nor the utopia of free-flowing communication by complicated individuals, but something else, namely friction between these warring impulses in the cynical sign-system that is information culture. Sometimes flows of cold information will be contested locally, whether in debates concerning the policing of information and free speech in urban protests, environmental contestations, and Indigenous blockades of railroads, pipelines, and highways in isolated areas off-grid, or the abbreviated attention span of mass media. At other times, information wars will be generalized across the planet, with lines of friction leaping beyond the boundaries of specific states in order to be inscribed in larger debates concerning issues related to surveillance and privacy, collective modulations of soft control and individual autonomy, that move at the process speed of instant, global connectivity across networked culture. Here, the essence of cold information lies in the friction, the fracture, the instant reversal of the always doubled sign of information.
Which brings us to the meaning of surveillance in the epoch of information as a flickering signifier, that point where all the referents are always capable of performing instant sign-switches from villain to savior, from active agents of generalized public scrutiny to passive victims of destructive overexposure. Wouldn’t this mean, though, that in a culture of cold information, surveillance technologies must themselves also become flickering signifiers, simultaneously both predator and victim? Indications that this is the case are everywhere today. For example, the primary apparatus of contemporary mass surveillance in the West is performed by a previously undisclosed collaboration of state intelligence agencies, appropriately titled “Five Eyes,” after the fact that it coordinates sophisticated signals intelligence among the United States, Britain, Canada, Australia, and New Zealand. Based on intelligence-sharing agreements developed during WWII and strikingly similar in its pattern of operation to the later “Condor” program that was created by several Latin American governments during the dark years of the “dirty wars,” Five Eyes coordinates flows of information acquired by upstream and downstream surveillance among the governments involved. Currently justified by political rhetoric framing the War on Terror and consequently authorized by law or, at least, by loopholes in existing law, Five Eyes is network conscious, geographically specific, and action-oriented, tracking information, acquiring individual targets, assembling complex profiles of targeted individuals, and acquiring massive quantities of metadata limited only by the rule of the “three hops”–tracking, that is, an email message, cell phone call, or fiber-optic communication across the three hops of the originating message, the recipient, and networks of individuals and groups communicating with anyone and at any time in the first two hops.
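The “three hops” rule described above amounts, formally, to a bounded breadth-first traversal of a communication graph. The sketch below is a toy illustration only (all names and the sample graph are hypothetical, not drawn from any disclosed collection tooling):

```python
from collections import deque

def three_hops(contact_graph, seed, max_hops=3):
    """Collect every identifier reachable from `seed` within `max_hops`
    contacts -- a toy model of hop-limited metadata collection.
    `contact_graph` maps an identifier to the set of identifiers it has
    communicated with (emails, calls, messages)."""
    seen = {seed: 0}            # identifier -> hop distance from the target
    queue = deque([seed])
    while queue:
        node = queue.popleft()
        if seen[node] == max_hops:
            continue            # do not expand beyond the hop limit
        for neighbor in contact_graph.get(node, ()):
            if neighbor not in seen:
                seen[neighbor] = seen[node] + 1
                queue.append(neighbor)
    return seen
```

The combinatorics are the point: if each identifier has on the order of a hundred contacts, three hops from a single target can sweep roughly a million people into the collection.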
While Five Eyes has attracted widespread criticism from privacy advocates for its relentless attempts to establish an apparatus of total information awareness, it should be kept in mind that the original and continuing motivation of this secret apparatus of control has about it the sensibility of an injured victim–bunkered states living in really existent existential, even psychic, fear of having their bounded borders pierced, broken, and invaded by actual terrorists or by phantasmatically threatening breaches of their sovereign boundaries by “illegal” immigrants, the nomadic, the refugee, the planetary dispossessed. A perfect fusion of aggressive surveillance and injured sensibility, Five Eyes constitutes, in the end, a flickering signifier–a palpable sign of what is to come in the approaching culture of cold information and increasingly overexposed smart bodies.
Republic of Democracy, Empire of Data
“Empires do not last, and their ends are usually unpleasant.” 
Reflecting upon the genealogy of the surveillance state, its tactics, logistics, and overall destiny, we should listen carefully to the insights of Chalmers Johnson, a writer of the serpentine pathways of contemporary power. A historian of American militarism, a geographer of the global network of garrisons that practically realize the ends of such militarism and, best of all, a profound mythologist who has read the language of hyper power through the lens of the ancient goddess Nemesis, with her prescriptions for “divine justice and vengeance,” Johnson wrote a prophetic history of the future in his trilogy of works: Blowback, The Sorrows of Empire, and Nemesis. The unifying theme of Johnson’s historical imagination was that the immediate history of the ascendancy of militarism, the garrisoning of the globe, the growth of governmental secrecy, the proliferation of technologies of mass surveillance and the rise of hyper power associated with the unilateralism of this, the most recent of all the empires of the past, could only really be understood within the larger canvas of the decline of the American Republic and the triumphant rise of the empire of the United States. For Johnson, a thinker imbued with a deep sense of tragedy on the question of power as much as with lucid intelligence concerning the increasingly ruthless application of the power of empire across the surface of the earth and beyond, the historical break between Republic and Empire in the American mind was not limited simply to a question of what was to be privileged–domestic concerns or international responsibilities–but had to do with a larger epistemic rupture in American political rhetoric, one that involved a fundamental clash between the founding ideals of American democracy and the once and future requirements of imperial power. In his estimation, the contemporary American political condition is this:
In Nemesis, I have tried to present historical, political, economic, and philosophical evidence of where our current behavior is likely to lead. Specifically, I believe that to maintain our empire abroad requires resources and commitments that will inevitably undercut our domestic democracy and in the end produce a military dictatorship or its civilian equivalent. . . . History is instructive on this dilemma. If we choose to keep our empire, as the Roman Republic did, we will certainly lose our democracy and grimly await the eventual blowback that imperialism generates.
With Johnson’s political, indeed profoundly mythological, warnings in mind, we listened intently one recent spring afternoon to two clashing visions of the American future, both deeply invested in questions related to empire and democracy in the American political imagination, both immanently critical of the other, but, for all that, unified to the extent that their political rhetoric rose to the status of patterns of speech and of thought indicative of world-historical figures, one speaking in defense of the democratic ideals of the American Republic and the other extolling the virtues of empire. In the strange curves of history, the defender of the patriotic rights of empire and hence the virtues of what was, in his terms, the moral righteousness of power was President Barack Obama, in a speech to the graduating class of military cadets at West Point, while the speaker who summed up, in the political gravity of his words and the ethical purchase of his warnings, the dangers of the contemporary state of mass surveillance for the American Republic was Edward Snowden. Curiously, this fateful contest of ideals between the hard realities of empire and the always fragile possibilities of democracy occurred on the very same day, one speaking about “believing in the moral purpose of American exceptionalism with every fiber of my being” and the other providing a tempered but, for all that, chillingly analytical diagnosis of the precise methods by which the surveillance state is intent on the final eclipse of the American Republic by strategies ranging from suppressing democratic dissent to literally harvesting the upstream and downstream of global communication.
Just as President Obama raised the moral stakes of American exceptionalism by making it a matter of the very “fiber of (his) being,” Edward Snowden, a remarkably courageous thinker much in the longer tradition of American ethical dissenters like Henry David Thoreau, very much provided the impression of being the last patriot of a dying American Republic. While it was clear as much by the martial solemnity of the occasion at West Point as by the moral suasion of his rhetoric that Obama was constitutionally invested with all the powers of Commander in Chief of American empire, it must also be said that, for one brief moment, the sheer ethical urgency of Snowden’s warnings about the dark nihilism of the American security state very much made him a candidate, at least in moral terms, for leadership of the founding democratic ideals of the American Republic. That Snowden has quickly become such a deeply polarizing figure in American political discourse, viewed as a “traitor” by some and a “patriot” by others, follows consequentially from the distinction between empire and republic. Viewed from the perspective of the logic of empire, with its focus on the self-preservation of power for which the immense secrecy associated with the security apparatus is considered to be an absolute requirement, Snowden’s actions in exposing technologies of mass surveillance to public scrutiny are objectively traitorous. Understood in terms of the inspiring dreams of political democracy, with its rebellious attitude towards absolutist expressions of power that was, and is, the essence of the American Republic, Snowden is properly considered to be not simply a patriot, but genuinely heroic in paying the price for which the stakes are now, as they always were, his own life and death.
So, then, a pure sign at the intersection of the deeply conflicting visions of democracy and power, Snowden’s fate has risen above his own autobiographical limits to become something profoundly symbolic, namely a line of resistance against the prevailing structural logic of the times, the ethical power of which is verified by the hysterical ferocity that the very mention of his name elicits from the elite leadership of the new security state. Of course, given the fluidity of power, the unified front of proponents of the new security state is quickly being breached.
What makes Snowden’s revelations so dangerous from the perspective of the new security state is something perfectly doubled in its nature. First, definitely not a thinker from the outside speaking the already clichéd rhetoric of “truth to power,” Snowden is an insider to the contemporary games of power. In his own terms, he was a CIA agent with computer expertise working under contract to the US government for a private security firm with access to contemporary technologies of mass surveillance. In a digital epoch in which every margin is capable of becoming the center of (networked) things, Snowden could so readily reveal the secrets of power because power itself has become something diffuse, circulating, liquid, downloadable at the speed of a flash drive. For a form of surveillance power that wishes to remain secretive, unbounded, hidden behind a veil of uncertainty as to its capabilities and intentions, avoiding exposure, particularly over-exposure, at all costs, the secret that Snowden revealed, and probably the reason why the national security state is so intent on prosecuting him under the harsh regime of espionage laws, is its palpable fear that there are perhaps many potential Snowdens, many potential acts of dissenting political conscience in the minds and hearts of the specialist class of data analysts that daily facilitates sophisticated technologies of mass surveillance. If Snowden could be deliberately marooned in Moscow by the cancellation of his passport by the US State Department, that should properly be understood in the nature of the unfolding political theater of the surveillance state–a preemptive, positional gesture on the part of the new security state to physically link in the public mind a dangerous truth-sayer of the great secrets of power with a political regime with a manifestly negative relationship to questions of political transparency and democracy.
When Alexis de Tocqueville once remarked that the high visibility of American prisons was in itself a form of political communication by providing visible warnings of the price to be paid for acts of transgression, he could not have foreseen that, when the logic of empire breaks with the constitutional practices of the American Republic, there would be other prisons in the specular imagery of mass media, including those images of Snowden in flight, at first in limbo at the airport in Moscow and then taking precarious refuge in the city of Moscow itself.
In the dreamy 1990s, when the Internet was first popularized, the ruling meme was beautifully and evocatively utopian, with that enduring desire in the human imagination for a technology of communication that finally matched the human desire for connectivity and (universal) community finally finding its digital expression in networked communication. Few voices were raised concerning the specter of harsher realities to come, namely the possibility that the Internet was also a powerful vehicle for sophisticated new iterations of ideologies of control as well as for inscribing a new global class structure on the world. To the suggestion that the destiny of the digital future was likely to be the rapid development of a new ruling class, the virtual class, with its leading fragments, whether information specialists, from coders to robotic researchers, or corporate visionaries closely linked–nation by nation, continent by continent, industry by industry–by a common (technocratic) world-view and equally shared interests, the response was just as often that this is purely dystopian conjecture. As the years since the official launch on 9/11 of the counter-revolution in digital matters indicate, the original funding of the Internet by DARPA was truly premonitory, confirming in the contemporary effective militarization of networked communication that the visionary idea of developing a global form of network connectivity that harvested the most intimate forms of individual consciousness on behalf of swelling data banks was as brilliant in its military foresightedness as it was chilling in its impact.
The public rhetoric justifying this counter-revolution in digital affairs is as threadbare as it is cynical. That, in fact, seems to be the point. When the increasingly phantasmagoric search for scapegoats of the day finally ceases, whether through lack of plausibility or declining public interest in the necessity of public justifications for undermining the essentially modernist, and thus residual, values of democracy, privacy, and law, a greater reality finally breaks to the surface of consciousness, namely that the digital future has already been hijacked by visions of power and class riding the fast current of the (digitally) new.
Perhaps what we are experiencing today are simply expressions of absolute panic on the part of traditional institutions–nation-states that have effectively lost control of their own sovereignty through the porous, unbounded nature of digital communication. In this case, political institutions based on the governance of territory are objectively threatened by an information culture that undermines traditional conceptions of political sovereignty by transforming the always active subjects of the new world of social media into potentially creative centers of social and political agency. Confronted with this elemental conflict between the emancipatory possibilities of fundamentally new relations of technological communication and old forms of political control, the response on the part of the controlling network of surveillance states is as predictable as it is relentless, namely to view domestic populations with their enhanced social media mobility as potential enemies of a state whose phantasms of perfect security increasingly come to focus on framing individuals as biometric subjects whose every movement will be tracked, every communication monitored, and every affect analyzed for its pattern-consistency. In other words, old forms of control are now being reconfigured as the new real.
With this, we enter an unfolding future of biometric surveillance as both predator and parasite–predatory because it is violently aggressive in its application of the political axiomatic of the security state to domestic populations most of all; and parasitical because biometric surveillance functions by attaching itself to the full sensory apparatus of biometric subjects. Biometric surveillance, then, as the symptomatic sign of the emergence of a new order of power–cynical power. Perfectly opaque in its purposes, random in its flows, wildly oscillating between the projection of power abroad and protestations of official innocence in the homeland, power has now, with biometric surveillance, achieved a state of fully realized cynicism. Like a floating sign that has abandoned relations with its originating signifier, cynical power can be so effective because it exceeds any limiting conditions. Cynical power thrives by actively generating conditions of chaos and lawlessness while, at the same time, it preserves itself by staking out positions premised on moral righteousness and appeals to political exceptionalism. Neither purely anarchic nor necessarily constrained by law, cynical power is, in the end, how contemporary technologies of mass surveillance express themselves politically. Here, power works by carefully staged strategies of impossibility, sometimes functioning to create generalized conditions of insecurity and fear within domestic populations while simultaneously justifying its use of often invisible, unchronicled exceptional powers as absolutely necessary for securing the boundaries, external and internal, of the state.
The required political formula for the inauguration of cynical power and, consequently, the development of technologies of cynical surveillance always seems to follow the same fourfold logic: affectively, create conditions of fear and insecurity that render targeted populations emotionally receptive; strategically, actively deploy sophisticated technologies of surveillance without any limiting conditions; morally, justify the use of intrusive surveillance technologies by random appeals to threats of terrorism, whether foreign enemies or domestic threats; and biologically, work to link surveillance technologies with the creation of a new form of life required by a society mediated by the bunker state, policing and austerity, namely the biometric subject.
Tripwires in Cryptography
Who could or would have suspected that the much hoped-for utopia of network communication would have terminated so quickly with a global system of meticulously machined individual surveillance as automatic in its data harvesting as it is strategic in its (individuated) target acquisitioning? Combining parallel tendencies involving a telecommunications sector invested in the sophisticated algorithms of analytical advertising and increasingly technocratic governments driven by a shared agenda of austerity economics, the bunker state and the disciplinary society, contemporary surveillance practices are perhaps best understood as premonitory signs of the uncertain future. No longer limited to questions of individual privacy, reflecting upon the question of surveillance discloses key tendencies involved in the emerging world culture of capitalist technocracy, with its complex mediation of psychic residues from the past, social detritus from the present, and the technologically enabled evacuation of human subjectivity and, with it, the eclipse of the social as the dominant pattern of the contemporary regime of political intelligibility. Certainly not stable and definitely not guaranteed to endure, the present situation is seemingly marked by a strange divergence of past and future. While the future has apparently been hijacked by a sudden and vast extension of technological capabilities for network surveillance and intrusion, the really existent world of contemporary political reality is increasingly characterized by the appearance anew of all the signs of unsettled ethnic disputes, persistent racism, ancient religious rivalries, and class warfare endemic to primitive capitalism. 
Consequently, while the technologically enabled societies of the West are capable of being fully seduced by the ideology of transhumanism with its dreams of coded flesh, process bodies, and machine-friendly consciousness, actual political reality reveals something dramatically different, namely the greater complexity of Eurasian ideology as the new Russian political pastoral, resurrected images of a new Islamic Caliphate and, all the while, disaffected children of affluent societies rallying to those enduring battle cries of the alienated heart, whether religious fundamentalism, atavistic politics or direct action violence. Contrary to digital expectations of a newly reconfigured, resplendently technical world of globalized real-time and real-space, today’s reality more closely resembles a fundamental and decisive break between the categories of technologically mediated space and historically determined time. For every digitally augmented individual strolling the city streets with Google Glass for eyes, buds for ears, Big Data for better ambient awareness, and Snapchat for enhanced affectivity, there’s another passionate struggle for human loyalties underway with its alternative dreams of Caliphates inscribed on the real-earth of religious warfare, revived Russian imperial dreams of Eurasian mastery, and always those reportedly fifty million refugees wandering the skin of the planet, sometimes policed in official shelters but usually effaced of their humanity–vulnerable, precarious, literally the forgotten remainder living outside of digitally bound space and historically inscribed time.
We mention the inherently complicated messiness of the question of surveillance–its tangled borders of time and space–because the contemporary era of cynical surveillance, with its technological erasure of hard-fought centuries of legal rights concerning individual privacy and democratic association by intrusive net surveillance and the systematic harvesting of metadata, may be the first sign of an emerging epochal war between the clashing cosmologies of digital data, religious faith, and recidivist memories of failed imperial projects. When the space-binding societies of the technological nebula that is digital culture collide with the still uncongealed, still spiraling astral galaxies of religious and political fundamentalism, the result is as predictable as it is grim. In this case, threatened by unanticipated and definitely unforeseen dangers from the outside, undermined by uncertain loyalties domestically and deeply mistrustful of the uncontrollable global flow of communication, digitally powered forms of governmentalization do what they do best, namely go to ground in the greater security provided by the bunker state with its ubiquitous surveillance of domestic populations, strict border controls, and psychically engineered appeals to the root affects of fear, insecurity, and anger. All the while, though, the danger grows as the complications of time-based histories of political struggle and religiously inspired warfare threaten to overwhelm spatially oriented empires of imperial power, which for all their machineries of surveillance often exhibit diminished awareness of the growing political complexity of the contemporary era.
In this case, the heightened accumulation of massive amounts of metadata, with all those galaxies of relational networks waiting impatiently to be decoded and deciphered of their underlying patterns, is just as often accompanied by an equally dramatic fall in political understanding of those dimensions hidden by the glare of metadata–the broken loyalty of a wavering heart, sudden internal exiles of the political imagination of the dissenter who says no, the stubborn rise of individual consciousness that refuses to be eclipsed by its biometric subjectivity, the system coder caught in the grip of critical political awareness, the agent of surveillance who experiences a fatal loss of faith in the cynicism of his obligatory duties. When space and time collide, when metadata falls from the sky into the hard ground of individuation, that is the precise point where all the future systems of mass surveillance undergo a fatal turn, instantly reversing the polarities of a system that, until now, has worked by accelerating the intensity and extensiveness of the eye of surveillance as it orbits the biometric subject.
The (Cryptographic) Lab Experiment: For Whom and For What?
But, of course, this leaves the future of the biometric subject in doubt. Possessing no certain constitutional or legal rights of its own, having no necessary fixed boundaries, called into existence by the very same surveillance technologies that then function to monitor and track its activities, not recognized as something natural by the bodies from which its information is generated, having only the existence of a constant flow of data emitted by individuals who increasingly resent its shadowy appearance, an individuated archive in the data clouds, the biometric subject is simultaneously subject and object of its own digital fate. Deeply parasitical because it feeds on the extended nervous system of the subjects that it inhabits and instinctively predatory because its metadata are viewed as the only reliable (electronic) test of political loyalty in a system that lives in fear of potential subversion from within and without, the biometric subject is that unhappiest of all forms of consciousness–an object of increasingly cynical experimentation. While power might once have been more circumspect in its intrusive surveillance, now it is emboldened, even contemptuous, in its insistence that it has sovereign authority to mark for deeper inspection the different categories of biometric subjects. Increasingly dropping even the pretense of actual “terrorists” as its prime justification, the digital dragnet trawls for any visible sign of political dissent, including the rhetorical expansion of the marker of terrorism to environmental activists, human rights workers, labor organizers, as well as those active in limiting technologies of mass surveillance. Once again, as in twentieth-century totalitarian regimes, the search for absolute loyalty begets the (electronic) tyranny of absolute power.
With this peculiar twist: unlike traditional forms of political totalitarianism, the contemporary demand for absolute subordination to the aims of the new security state and its deep packet inspection of the electronic trails of biometric subjects remains highly experimental, even cryptologically adventurous, in its methods. It is as if the system of power is still unsure of the real object of its fascination, still uncertain of the potential boundaries of a cybernetic world that operates in the language of viral contagions and where every biometric subject marked for deep inspection remains an enigmatic mixture of data clouds, corporeal bodily traces, and those invisible, and consequently undetectable, regions of the off-grid, from unarticulated affect to off-line behavior. Perhaps that explains why contemporary surveillance technologies have about them the feeling of improvised laboratory experiments that, while focused on the data markers left by the humiliated subjects that we all are now, are effectively experiments in the future of biometric subjectivity.
For example, consider two recently reported experiments in surveillance strategies for the future, the first involving a massive (social) laboratory experiment in the psychic inoculation of a selected social media audience with “emotional contagion,” and the second an equally large (political) experiment in tracking the metadata of an unsuspecting airport WiFi population, hop by (digital) hop over a two-week period. In the first case, Facebook’s Core Data Science team, in collaboration with researchers from Cornell University and the University of California at San Francisco, conducted, without notification or informed consent, an online experiment in Skinnerian operant conditioning targeting an unsuspecting audience of almost 700,000 Facebook users. Perhaps unconsciously influenced by neurological conditioning tactics suggested by Huxley’s Brave New World and yet, for all that, blissfully unaware of contemporary advertising theory with its acute sensitivity to strategies of behavioral modification, the experiment inoculated Facebook users with very different streams of news feeds, one modified to emphasize the positive and the other privileging the negative. The aim of this study in operant conditioning was straightforward, namely to determine whether the psychological shaping of news feeds was effective–and to what extent–in creating states of emotional contagion among Facebook users. Coincidentally, it was also reported that one of the researchers involved in “the massive emotion-manipulation study” also does “active work on DoD’s (Department of Defense) Minerva program, which studies the spread, manipulation and evolution of online beliefs.”
While the debate on the relative merits of this experiment in neurological conditioning typically involves Facebook executives declaiming in favor of “creativity and innovation” versus privacy advocates concerned about the absence of opt-out provisions, there is a larger issue at stake in this experiment on the digital future that has been left unremarked. According to the research paper based on the study published in the Proceedings of the National Academy of Sciences, what’s really at stake in this experiment is the affective nature of shared experience, the fact, that is, that emotional contagion involving “depression and happiness” can be transferred by way of “experiencing an interaction” rather than direct exposure to “a partner’s emotion.” In other words, social media networks as potential objects of tactics and strategies involved with psy-ops, whether for commercial or military purposes. Here, if news feeds on all the Facebooks of the social media world can be slightly altered to inject just the minimum dose of optimism or negativity, there is a reasonable expectation of a consequent transformation in the shared affectivity of biometric subjects. Not so much operant conditioning any longer with its controlled feedback loops, but something more insidious, namely biometric conditioning with its modulated flows of information, psychic transfers of emotional affect from depression to optimism by “experiencing an interaction,” and mirroring individual moods with the prevailing (social media) norm. When social media becomes a self-contained reality-principle, one downside is its potential for psychic modulation by the play of soft power–power that no longer operates in the language of violence or manipulation, but in the more complex psychic language of suggestion and mesmerism. Regarding the questions–For Whom?
and For What?–there is an interesting analysis by Nafeez Ahmed in The Guardian (titled “Pentagon preparing for mass civil breakdown”) that suggests that parallel research projects funded by the US Department of Defense as part of the “Minerva Research Initiative” are aimed at militarizing social science “to develop ‘operational tools’ to target peaceful activists and protest movements”:
Among the projects awarded for the period 2014-17 is a Cornell University-led study managed by the US Air Force Office of Scientific Research which aims to develop an empirical model of the “dynamics of social movement mobilization and contagions.” The project will determine “the critical mass (tipping point)” of social contagions by studying their “digital traces” in the cases of “the 2011 Egyptian revolution, the 2011 Russian Duma elections, the 2013 Nigerian fuel subsidy crisis and the 2013 Gazi park protests in Turkey.”
In Ahmed’s estimation, such research initiatives, including the Pentagon’s war-gaming of environmental activism and protest movements, intimate that the National Security Agency’s “mass surveillance is partially motivated to prepare for the destabilizing impact of coming environmental, energy and economic shocks.” In this scenario, influencing individual affect to the point of creating emotional contagions that libidinally charge the flowing circuits of social media opens up possibilities for rechanneling, redirecting, and reanimating the political trajectory of social and cultural dissent.
Probably not wishing to be outdone by such experimental initiatives in biometric conditioning and acting at the behest of the National Security Agency, Canada’s secret surveillance agency–Communications Security Establishment Canada (CSEC)–recently performed a proof of (surveillance) concept on unsuspecting travellers at a Canadian airport. Without prior notification or permission, CSEC swept up all WiFi communication at the airport and then proceeded to track the electronic communications of the target population over a two-week period. In a previously secret document (“IP Profiling Analytics & Mission Impacts”) brought to public visibility by Snowden’s revelations, the report by the Canadian surveillance unit described as “Tradecraft Developer,” operating as part of CSEC’s Network Analysis Network, began its experiment in data vivisectioning with an overall analytic concept: “begin with single seed WiFi IP address of international airport” and “assemble set of user IDs seen on network address over two weeks.” As with many things, it is remarkable what a rich harvest of metadata a “single seed WiFi address” will provide. Not simply “going backward in time” to “uncover roaming infrastructure of host city (hotels, conference centers, WiFi hotspots, mobile gateways, coffee shops)” but also “clusters (that) will resolve to other Airports!” and, in fact, as the report boasts, the Tradecraft Developer “can then take seeds from these airports and repeat to cover the whole world.” Impatient with the “limited aperture” of data, with the fact that there is “little lingering at airports” with “arrivals using phone, not WiFi,” and with the even greater, and obvious, technical limitation that “Canadian ISPs team with US email majors, losing travel coverage,” the Tradecraft Developer quickly seems to have left the targeted airport behind in favor of a more ambitious project, specifically to perform a two-week data sweep of a mid-size Canadian city.
Here, the data farming language of “valid and invalid seeds” with their digital bounty of geo information was used to trace the “profiled/seed IP location” and all its seemingly existential circumlocutions or, what’s the same, its “hopped-to IP location.” 
While the final report is surrounded by all the rhetorical seriousness of something labeled “top secret” and is written in the positivistic prose emblematic of network analytics with all the opaque (geo-collaboration) systems administrator language of “tipping and queuing,” the overall significance of the report is purely literary. It is a children’s game gone wrong. With the aim of “providing real-time alerts of events of interest,” the Tradecraft Developers proposed a network analytics problem, in effect a children’s game called “Needle in the Haystack.” In this scenario, what is described as the “Tradecraft Problem Statement” envisions a scenario wherein a kidnapper from a rural area travels, for reasons left unexplained, to the city to make ransom calls. So, the stipulated questions: If authorities know the time of the ransom call, can they find the needle in the haystack? Can they “discover the kidnapper’s IP ID/device”? The network solution is obvious: take an actual Canadian city of 300,000 people hostage, at least in terms of their electronic communications over a 40-hour period; eliminate all IDs that repeat over this period; “leaving,” as the Tradecraft Developer report happily concludes, “just the kidnapper (if he was there).” Less a powerful demonstration of Borges’s famous fable of the map that precedes the territory, the real “top secret” of the Needle in the Haystack game is that there is no secret.
Unlike a children’s game that includes elements of chance, contains a necessary sense of suspense and, just as often, emphasizes playful collaboration among participants in real-time, this network analytic version of the game of Needle in the Haystack leaves nothing to chance (the model is a closed domain of electronic information), limits the boundaries of the real to eliminate suspense, and functions to eliminate playful time by speeding up the solution by means of a just-announced Big Data computer program (CARE: Collaborative Analytics Research Environment) where, as the Tradecraft programmers boast, “run-time for hop-profiles (is) reduced from 2+ hours to several seconds allow(ing) for tradecraft to be profitably productized.”  In other words, a fast run-time, Big Data computer simulation model masquerading as a children’s game that has gone terribly wrong: no unpredictability, no mystery, no playful temporality, and no needle.
Shadows of Data, Shadows of Suffering
Bodies always have their shadowy doubles. Definitely not in the darkness of the night when the sun falls below the earthly horizon and is replaced by the different cycles of the moon, but in the clarity of a sunny day and, with it, the often unnoticed splitting of the world into bodies and their accompanying shadows. Consciousness of this ancient story of bodily shadows, with its premonitions of a fatal instability in the accepted framework of the real, has sometimes led to strangely interesting mythic possibilities. Cinematic scenes of rebellious shadows that suddenly refuse their preordained role of subordination to the governing signifier of the body in favor of striking out on their own–shadows without bodies. Or, just the reverse, bodies stripped of shadows–possessed bodies that clearly mark their break from the terrestrial register of the human by their astonishing failure to cast a shadow no matter how intense the flares of the sun.
We mention this strange contortion in the story of the body and its shadow as a way of drawing into a greater illumination those new electronic shadows that accompany the emergence of digital bodies. Every critique of contemporary surveillance has made much of the fact that the digital body always leaves electronic traces, that there is no activity in the wired world that does not accumulate clouds of data, no form of net connectivity that escapes electronic notice, and consequently no digital self that does not possess its very own electronic shadow. In all the discussion by intelligence agencies concerning tactics of mass surveillance, whether downstream (harvesting data from compliant telecommunication companies) or upstream (tapping fiber optic cables), constant emphasis is focused on creating individual profiles based on a (digital) self’s “pattern of life.” In other words, mass surveillance is also about an aesthetic act of drawing into visibility those electronic shadows that silently and invisibly accompany the digital self. Here, in a clear sign that, with the emergence of the real-time and networked space of the digital, we have decisively moved beyond the limitations of the daily cycles of the sun and moon, electronic shadows require no galactic movements of planets and the stars for their appearance. Never disappearing with the darkness, never changing their shape with the angle of the sun, electronic shadows always rise to meet the digital self. Triggered by connectivity, governed by codes, archived in data banks, tabulated by power, the electronic shadow cast by the digital self will, in the end, outlast its human remainder. A future history, then, of electronic shadows of data that cling to the human bodies that activate them but, for all that, remain at one remove from their earthly origins.
With this inevitable result: just as novelists, short story writers, poets, and cinematographers have always suspected in their creative fables of bodies without shadows and outlaw shadows that refuse any bodily presence, the unfolding story of electronic shadows is inherently unstable. It takes an immense regime of technocratic intelligibility to maintain tight, disciplined cohesion between digital bodies and their electronic shadows. The many cases of mistaken (digital) identity indicate perhaps a more primary confusion in electronic shadow land, that point where electronic shadows sometimes exchange bodily identities, slipping immediately beyond the boundaries of one bodily tag to another with the least apparent difference. And sometimes, too, electronic shadows actually get lost–flash drives are misplaced or stolen, data banks suddenly shut down, power shortages introduce often imperceptible breaks in the data symmetry necessary for cohesive electronic shadows. In this case, to the extent that mass surveillance is probably less about earthly bodies than the electronic shadows cast by the “pattern of life,” that pattern of life already has about it a fatal catachresis, an accumulating pattern of errors that may speak more, in the end, to the truth of a system already seemingly out of control.
But still, for all that, electronic shadows sometimes contain traces of blood and human suffering. As much a sign of prohibition as affirmation, a signifier of exclusion as well as inclusion, a code of disavowal as much as avowal, electronic shadows are an enduring sign of the traditional meaning of surveillance, namely vigilance concerning who belongs and does not belong to the political community. Inscribed with data memories, always sleepless, clinging to the digital self like a cloud that will not disperse, electronic shadows precede actual bodily presence, signaling in advance whether the gated sensors of the state should impede or facilitate our passage. For those bodies chosen to be impeded, it is their electronic shadow that first betrays them to flights of rendition, life lived within the domestic penal cage of security certificates, forced deportation, indefinite detention, or the limbo of being held stateless at all the border stations of the world. When surveillance assumes the ghostly form of an electronic shadow, bodily presence is in permanent exile from time and space, prematurely cut off from that indispensable demand that marks the beginning again and again of individual singularity as much as human solitude, namely the ability to not account fully for its actions, intentions, or desires.
 Kashmir Hill, “NSA’s Utah Data Center Suffers New Round of Electrical Problems,” Forbes.com, http://www.forbes.com/sites/kashmirhill/2013/10/17/nsas-utah-data-center-suffers-new-round-of-electrical-problems/ (accessed June 19, 2014).
 Howard Berkes, “Amid Data Controversy, NSA Builds Its Biggest Data Farm,” National Public Radio, http://www.npr.org/2013/06/10/190160772/amid-data-controversy-nsa-builds-its-biggest-data-farm (accessed June 19, 2014).
 Jamshid Ghaz Askar, “NSA spy center: Unsettling details emerge, but director denies allegations,” Deseret News, http://www.deseretnews.com/article/865552597/NSA-spy-center-Unsettling-details-emerge-but-director-denies-allegations.html?pg=all (accessed June 19, 2014).
 Berkes, “Amid Data Controversy.”
 Ed Pilkington, “Washington Post releases four new slides from NSA’s Prism presentation,” Guardian Online (June 30, 2013), http://www.theguardian.com/world/2013/jun/30/washington-post-new-slides-prism (accessed January 22, 2014).
 T.C. Sotteck, “New PRISM slides: more than 100,000 ‘active surveillance targets,’ explicit mention of real-time monitoring,” The Verge (June 29, 2013), http://www.theverge.com/useres/tcosettek (accessed July 16, 2014).
 Natasha Singer, “When No One is Just a Face in the Crowd,” The New York Times, Sunday, February 2, 2014, p.3.
 Bill Chappell, “New Electronic Sensors Stick to Skin as Temporary Tattoos,” National Public Radio (August 11, 2011), http://www.npr.org/Blogs/the-two-way/2011/08/11/139554014/new-electronic-sensors-stick-to-skin-as-temporary-tattoos (accessed June 11, 2014).
 Alyson Shontell, “The Next Twenty Years are Going to Make the Last Twenty Years Like We Accomplished Nothing in Tech,” Business Insider (June 16, 2014), http://www.businessinsider.com/the-future-of-technology-will-pale-the-previous-20-years-2014-6 (accessed July 8, 2014).
 Chalmers Johnson, The Sorrows of Empire: Militarism, Secrecy, and the End of the Republic (New York: Metropolitan Books, 2004), p. 310.
 Ibid., pp. 278-279.
 For a theorization of the virtual class–its genealogy, alliances, ideology, and practices, see Arthur Kroker and Michael A. Weinstein, Data Trash: The Theory of the Virtual Class (New York: St. Martin’s Press, 1993).
 Adam D.I. Kramer, J.E. Guillory, and J.T. Hancock, “Experimental evidence of massive-scale emotional contagion through social networks,” Proceedings of the National Academy of Sciences 111, no. 24, http://www.pnas.org/content/111/24/8788.full (accessed January 3, 2014).
 Cory Doctorow, “Facebook manipulation experiment has connection to DoD ’emotional contagion’ research,” BoingBoing (July 3, 2014), http://boingboing.net/2014/07/03/facebook-manipulation-experime.html (accessed December 20, 2014).
 Nafeez Ahmed, “Pentagon preparing for mass civil breakdown,” Guardian Online (June 12, 2014), http://www.theguardian.com/environment/earth-insight/2014/jun/12/pentagon-mass-civil-breakdown (accessed January 12, 2014).
 “IP Profiling Analytics & Mission Impacts–CBC,” Top secret report by Tradecraft Developer, CSEC–Network Analysis Centre (May 10, 2012), www.cbc.ca/news2/pdf/airports_redacted.pdf (accessed December 15, 2014).
There is a new DIY body in town, one which might not have the cultural pedigree of the shock tattoo, the slippery word, or the enigmatic yet subtle shift of modified bodily appearance, but a version of the DIY body that already belongs to the future for the simple reason that it comes to us directly from a future, dreamed about, obsessed over, but not yet practically realized. Visible signs of the new DIY body are everywhere: smart apps that track caloric expenditure, distances walked, miles run, rhythms of sleep, of sex, of friendship, of rage, of cheating lovers lost and won; dusty clouds of data that rise from the travelled earth of every footstep of the DIY body as it crunches its way into some unknown database along the way; and invasive but usually undetectable sociobots that break the surface of the skin, all the better to gently manipulate perception, to shape imagination, and, perhaps, even to take up permanent residency in the wasteland of the psyche. While the DIY body to which we have long been habituated represented the lovely unpredictability of individual choice playing itself out across the surface of skin, gender, and sexuality, the new DIY body comes to us with a self that has already split: part-human/part-data. In fact, the body that lives in the tension of this fatal split may be the only lingering remnant of the human, since the “self” seems to have recently departed towards the gathering horizon of artificial intelligence, synthetic biology, robotic technology–towards, that is, the larger movement of the “quantified self.” When the rising city of the quantified self breaks away from the wilderness of the unquantifiable body we can know for certain that those data clouds are also harbingers of troubles ahead for the question of human subjectivity and, with them, the eclipse of the intuitive, the ineffable, the instinctive, the numerically unintelligible but the emotionally knowable. 
Putting on the synthetic skin of the new DIY body with its extended sensors, creative apps, helpful prosthetics, and enabling augments is, of course, only the first step in modifying the body right out of itself in the direction of the Singularity Event.
Waiting for the Singularity
The streets of San Francisco are crammed these days with creative social media startups, many waiting, it seems, for technological rapture–the much-anticipated and longed-for singularity event when artificial consciousness finally undocks from human intelligence to usher in a new future of computers literally with (artificial) minds of their own and human minds as so many data points supporting the indefinite expansion of the lifespan promised by synthetic biology, nanotechnology, and artificial intelligence.
If biblical prophecies are any kind of guide, the triumph of artificial consciousness will initiate unpredictable, morphological changes of state across the fabric of space and time. The new force of ubiquitous computing may be violently rent with Big Data on one side and soon-to-be left behind Luddites on the other; relational processing will sweep across the land, and the body itself will finally be able to abandon its natural ties to flesh, skin, and bone in favor of the bliss of the fully quantified self.
First prophesied in the writings of Vernor Vinge, first digitally realized by Raymond Kurzweil, currently a director of engineering at Google, and first given explicit social expression by Kevin Kelly and Gary Wolf, the coming of the technological singularity is at once the ecstatic promise and utopian hope of all those scientists, technologists, engineers, graphic artists, social media marketers, designers, and programmers who have dedicated their very bodily lives to the proposition that data is the new us.
Since its political inception, the theme of waiting for the messiah has long been the core eschatological trope of American society. From the first landing at Plymouth Rock by the early Puritans and the evangelical revival meetings that spread like prairie fire across the American midlands of the spirit in the nineteenth century to late twentieth-century invocations of religious visions of those to be either anointed or left behind in the days of apocalypse, the spirit of the messianic, with its troubled doubling of transcendence and despair, has long been native to American identity. Consequently, it comes as no particular surprise that in these, the early sunrise years of the twenty-first century, just when the dawn is lifting on the shadows of the past, Northern California is witness to the birth anew of the spirit of rapture, this time detached from previous concerns with religion and politics, and provided with a powerful digital expression in the form of technological rapture.
On the surface, the rhetoric of this latest American revival movement is delivered in the deliberately arid form of technocratic ambition–an “Internet of Things,” the “quantified self,” “A Data-Driven Life”–but scratch the surface of the covering rhetoric and what springs to mind are all those unmistakable signs of the spirit of rapture. Everything is there: a theology of technology driven by an overwhelming conviction that the vicissitudes of embodied experience are subordinate to digital transcendence; the will to extend life either by uploading the human mind into its AI machinic successors or by passionate faith in the born-again body of artificial DNA; the doctrine of data as a state of (code-driven) grace; and conversionary enthusiasm for the fully quantified life. While many different perspectives gather under the revival tent of technological rapture, one common thing remains: an abiding faith that technological society is quickly delivering us to a future inaugurated by a singularity event, that epochal time in which intelligent machines take command with promises of a mind-merger with a data world that is fluid, mobile, relational, indeterminate. Though skeptics standing outside the circle of technological rapture might be tempted to reduce its enthusiasm for data delirium to the larger figurations of the form of (technological) subjectivity necessary for the functioning of digital capitalism, that would surely overlook the fact that the contemporary will to technology is itself driven by a more radical eschatological promise, namely that the will to data has about it the tangible scent of finally achieving what the project of science has always promised, but never delivered–human relief from death, disease, and bodily decay. 
While Francis Bacon’s emblematic treatise Novum Organum may have been the first to so confidently link the project of science and the heretofore quixotic quest for immortality, it was left to a contemporary techno-utopian visionary, Raymond Kurzweil, (The Singularity is Near) to transform Bacon’s ontological ambition for science into a practical strategy for better–that is, extended–computational living:
This merger of man and machine, coupled with the sudden explosion in machine intelligence and rapid innovation in gene research and nanotechnology, will result in a world where there is no distinction between the biological and the mechanical, or between physical and virtual reality. These technological revolutions will allow us to transcend our frail bodies with all their limitations. Illness, as we know it, will be eradicated. Through the use of nanotechnology, we will be able to manufacture almost any physical product upon demand, world hunger and poverty will be solved, and pollution will vanish. Human existence will undergo a quantum leap in evolution. We will be able to live as long as we choose. The coming into being of such a world is, in essence, the Singularity. 
At first glance, this is only the most recent expression of the Greek myth of hubris, this cautionary tale concerning the ineluctable balance between excessive pride of purpose and mythic punishment meted out by always-observant gods. Adding complexity to this reinvocation of the myth of hubris, that vision of Singularity is, in actuality, a doubled expression of hubris. First, there is the sense of technological overconfidence involved in breaking beyond the traditional boundaries of the specifically human in order to speak of the new epoch of “man and machine,” that is, fully digitally interpolated subjects in which the specifically human merges with the extended nervous system of the cybernetic. Here, the merely human is replaced with the technologically enabled posthuman as the fundamental precondition for the Singularity. With the sovereign expression of technological posthumanism, the stage is set for the futurist release of all the pent-up excess of expressions of scientific determinism and technological fundamentalism that have been gathering momentum for some five centuries–transcending bodily limits, eradicating illness, ending poverty and hunger, and vanishing pollution. In its basics, this version of technological futurism, with its doubled sense of hubris and complicated alliance of recoded bodies, nanotechnology, genetic determinism and artificial intelligence is a creation myth–“the coming into being of such a world is, in essence, the Singularity.” With techno-futurism, we are literally present at a digital rewriting of the Book of Genesis with all that is implied in terms of (re)creating the body for smoother, and perhaps safer, passage through the often-turbulent event-horizon surrounding the black hole of the Singularity towards which (technological) society is plunging. 
While the DIY body may have the “Internet of Things” as its necessary digital infrastructure and the “quantified self” as its ideal expression, what drives it forward, animating its design and inspiring its constant creativity, is, in the end as in the beginning, the specter of the coming Singularity as its core creation myth. Curiously, in the same way that Heidegger once noted that the question of technology can never ever be understood technologically–that we must travel furthest from the dwelling-place of technology to discover its essence–the concept of Singularity, while evocative of the language of science and powered by digital devices, is something profoundly theological in its inception.
Of course, given the sheer complexity of contemporary global society with its mixture of recidivist social movements, global climate change, fully unpredictable human desires, economic turbulence, and, of course, changing rhythms of bodily health and the many diseases of the aged and the sick, Kurzweil’s vision is startling, less so for its naivety than for its feverish embrace of an approaching technological state of bliss–transcendent, teleological, and terminal. Transcendent because its overriding faith in machine intelligence, nanotechnology, and gene research is premised on the imperative of transcending “our frail bodies with all their limitations.” Here, unlike the Christian belief first articulated by St. Augustine in De Trinitate–with its division of the body into corruptible flesh and the perfect incorporeality of the state of grace–the newest of all the Singularities is intended to lead to a new heaven of computation. Teleological because this vision of the new Singularity invests the will to technology with a sustaining, indeed inspiring, purpose: overcoming the unknown country of death. And terminal, because this is also a philosophy of end times, certainly the end of the human species as we have known it, but also the end of easily distinguishable boundaries between the “biological and the mechanical, or between physical and virtual reality.” As Kurzweil states: “The Nanotechnology Revolution will enable us to redesign and rebuild–molecule by molecule–our bodies and brains and the world with which we interact, going far beyond the limitations of biology.”  The end, therefore, of the biological body as we have known it and the beginning of something very novel: the merger of natural biology with its surrounding environment of technologies of the post-biological–artificial intelligence, nanotechnology, molecular science, and neurobots.
As to be expected, in return for the sacrifice of a natural biological cycle of life and death, the creation myth framing technological rapture has promises of its own to keep: a fully realized future of “living indefinitely” with nanobots streaming “through the bloodstream in our bodies and brains,” telepathy in the form of “wireless communication from one brain to another,” improved “pattern recognition” by overcoming the inherent limitations of natural cognitive evolution in favor of “brain implants”  marking the inception, then triumph, of “nonbiological intelligence.” In effect, the vision of technological rapture is visualized as a marvelous, ready-made (AI) toolbox for constructing DIY bodies.
When Singularity Intersects with Human Multiplicity
While singularity theory provides a highly creative, futurist account of events likely to happen when machinic intelligence surpasses the biological limits of human cognition, the reality is that singularity is less futurist than something already deeply historical. One of the key tendencies of early twenty-first-century experience is that we may already be living in the midst of the predicted turbulence and exponential rate of change associated with the Singularity. With astounding advances in robotic technology, drones that will soon be invested with ethical autonomy in making closed (cybernetic) loop decisions concerning the “disposition matrix,” relentless mergers of the worlds of society, politics, and economy with artificial intelligence, genetic biology, and nanotech intrusions on the biological, the Singularity–the merger of the biological and the artificial–is a decidedly contemporary phenomenon, one that is complex, intersectional, exponential, and fractured: 3D printing is capable of virtually replicating the world of material objects; research labs have announced the emergence of synthetic biology premised on artificial DNA; robotics has shed its mechanical skin in favor of taking up habitation in the neural networks of information society; and the specter of a globalized surveillance network is made possible by the eerily animate presence of complicated systems of nonbiological intelligence associated with data mining. While narrowly technocratic perspectives may like to predict the approaching dawn of a new future of Singularity–with its decidedly unrealistic projections concerning new utopias of health, life-spans, wealth and unfettered knowledge–we, the first living subjects actually present at the fateful encounter between the biological and the artificial, understand at the granular level the real-world consequences that follow the Singularity.
When the information blast disrupts the social, when artificial DNA effectively resequences the story of natural evolution itself, when the triumph of code works to reinforce existing inequalities in labor, business and politics, then, at that point, we can recognize that the (technologically envisioned) Singularity actually expresses itself in the language of human multiplicity.
Scenes from the Event Horizon
Life by Numbers
Until a few years ago it would have been pointless to seek self-knowledge through numbers. Although sociologists could survey us in aggregate, and laboratory psychologists could do clever experiments with volunteer subjects, the real way we ate, played, talked and loved left only the faintest measurable trace. Our only method of tracking ourselves was to notice what we were doing and write it down. But even this written record couldn’t be analyzed objectively without laborious processing and analysis.
Then four things changed. First, electronic sensors got smaller and better. Second, people started carrying powerful computing devices, typically disguised as mobile phones. Third, social media made it seem normal to share everything. And fourth, we began to get an inkling of a global superintelligence known as the cloud.
Gary Wolf, “The Data-Driven Life,” The New York Times Magazine
Palpable signs that we are already living in the midst of the Singularity are provided by the growing cultural appeal of what has been described as the “Quantified Self Movement.” In this scenario, bodies strap on their mobile prosthetics, digitally tattoo themselves with an array of wearable electronic sensors, calibrate their social media lives by complex, flexible forms of digital self-tracking made possible by those new clouds of digital cumulus drifting across the global sky, and turn the previously unmeasured, untracked, and perhaps even unnoticed into vibrant streams of shareable data. Essentially, the surface of the body, as well as its previously private interiority, is transformed into GPS data in the greater games of augmented reality. Except this time, data bodies are not so much using mobile phones to scan graphics that open onto a previously invisible world of graffiti, games, and advertising, but envelop the body in a big gif (graphics interchange format) of its very own–a digital penumbra of numbers about eating, sleeping, loving, working that provides an electronic shadow for tracking bodily activities. Suddenly, we find ourselves living in an age of the body and its digital shadow, this complex cloud of hyper-personalized data points not just accumulated by mobile bodies as they track their way through life but always spinning away from the body in fantastic reconfigurations of comparative databases that may be perfect receptacles for social sharing but are also measuring points for better individual living.
Thought of in purely astronomical terms, the quantified self movement is like a protostar–a dense concentration of “molecular clouds where stars form.”  Here, the newly emergent data self quickly throws off qualitative cultural debris from its past, thus committing itself to the daring gamble of seeking to quantify the unquantifiable, to literally construct a DIY body, one measurement at a time, that takes close account of lessons to be learned, data to be shared, measurements to be undertaken, numbers to be calculated, results to be reflected upon, activities to be improved, upgraded, overcome, by its digital double–life by numbers. In any event, for a society in which complex mergers between machine intelligence and human bodies are underway, one important adaptive response on the part of an always flexible human species is to transform subjectivity in the direction of that which is required for smooth admission to the end times of technological singularity. If the language of power is data, if the language of connection is convergence, and if the privileged value is speed, then what could be better than a coherent, comprehensive, and creative plan for reproducing a form of “self” that eerily mimics the etymological meaning of data as “thing-like”? Refusing the intuitive, throwing off the ineffable, and breaking forever with the imaginary, the quantified self movement reverses the traditional order of human subjectivity by making the thing-like character of quantifiable data both the precondition and goal of individual identity in the age of nonbiological intelligence. Unlike traditional Christian monasteries that provided physical shelter in good times and bad for the idea of the sacred and its associated religious institutions, the quantified self movement promulgates, in effect, a new order of digital monasticism that puts down roots in the psychic dimension of human subjectivity itself.
With being data its primal act of faith, with the meticulous, even obsessive, calculation of life’s quanta–be it empathy, happiness, sex, or cardiovascular health–as its social practice, and with meetups of members of the quantified self movement as its mode of confessional, this new monastic order heralds the eclipse of traditional expressions of human subjectivity and the triumphant emergence of the thing-like–the “data driven life” as the form of (technological) self now taking flight at the dawn of the Singularity.
But wait. If you were to attend one of the global quantified self meetups–and they are everywhere now–the reality is most likely the opposite. The overall thematic might be the quantified life, but what resonates is the sense of individuals trying to find themselves, perhaps puzzled by the complications of daily life, and attempting as best they can, one self-confession at a time, to put the whole thing together for themselves by talking and sharing data. For example, each participant has five to ten minutes to discuss three core predetermined questions: “What did you do? How did you do it? What did you learn?”  It is as if network communications are not so much about the cold indifference of relational data points, but about its actual content, that whole stubbornly individual, always vulnerable, terribly anxiety-prone mass of highly individuated individuals. There is definitely a general yearning for self-improvement in the air, definitely a sense that the basic themes of Norman Vincent Peale’s The Power of Positive Thinking, with its homage to projected self-confidence and adaptive behavior, have escaped the power of the written text and taken up an active alliance with proponents of the quantified life. Or maybe it’s something different. Perhaps talking by data is the most recent manifestation of Dale Carnegie’s How to Win Friends and Influence People, with its insightful strategies for winning other people over to your own way of doing things by first and foremost winning yourself over to yourself.
Indeed, if one of the key characteristics of contemporary times is the seemingly relentless progression of robots towards becoming more human, it is equally the case that many humans may be in pursuit of bodies suited for better robotic living, namely the “data-driven life.” In his visionary statement of life by numbers, Gary Wolf begins with the essentially theological insight that the uniquely human qualities of fragility, precariousness, and forgetfulness, while perhaps acceptable in the epoch of the pre-digital, should now rightfully be dispensed with as the original sin of the data-driven life. According to this visionary of life by numbers, “humans make errors. We make errors of fact and errors of judgment. We have blind spots in our field of vision and gaps in our stream of attention. . . . These weaknesses put us at a disadvantage. We make decisions with partial information. We are forced to steer by guesswork. We go with our gut. That is, some of us do. Others use data.”  Perhaps, but then maybe Wolf hasn’t read Nietzsche’s Thus Spoke Zarathustra, with its constant refrain about the cold indifference of nature, the absolute lucidity and absolute coldness of that indifference particularly in the face of rationally calculated human purpose. For the quantified self, data is the newest expression of nature. Which just might mean that the storytelling that data evokes also has about it a very real sense of lucid indifference even in the face of human intentionality. We might want things to be different, but data reveals the real story. It is the cold eye surveying the subjective messiness of human experience, the indifferent scale of values taking calculated measure of all things, from calories burnt and sleep cycles altered to the rise and fall of financial fortunes at the speed of high-frequency trading. Or is it?
Maybe in the end what lends the austere concept of data such seductive power is less its pure etymological meaning as the “thing-like,” than something else entirely, namely that like everything else–feelings, body images, social connections, cultural knowledge, work experience–there really is no such thing as pure data, no empty signifier floating freely outside of a complicated, dense field of intersecting relationships. In this case, when data plunges into the posthuman condition, when data expresses its supposedly cold judgments in all those quantified self meetups, there can be such a powerful sense of yearning in the air precisely because advocates of life by numbers–whether from the tech community or not–are always complicating the numbers by private anxieties, specific intentions, and complicated feelings. That is what the confessional storytelling at all those meetups is all about–not so much, in the end, life by numbers, but life itself. It is perhaps precisely in the equivocal meeting of cold data and passionate yearning, in this strange mixture of human desire to control the complexities of social experience by numbered tabulations and data’s lasting indifference to the illusions of control, that we can also begin to discern future intimations of life by numbers, namely that we are committing ourselves anew to an approaching era of absurd data.
Tweaking Neural Circuitry
But why should the technological drive towards the “data-driven life” remain forever on the outside of the body, enabled by apps that create self-generating loops of information guiding behavioral modification? What would happen if the desire for self-tracking was finally liberated from the body’s exterior surface, migrating inside the body generally and becoming fully interior to the brain specifically? What if one day the human brain could be lit up from within by means of advanced bio-technological devices that would suddenly draw into visibility that which, until now, has remained the subject of intense speculation and passionate conjecture, namely the possibility of tracking the brain’s complex neural circuitry and thus potentially enabling a new era of the DIY brain–one that involves tweaking the human nervous system? An insightful report by Robert Lee Hotz titled “Mysterious Brain Circuitry Becomes Viewable” provides this comment:
At laboratories in the U.S. and Europe, scientists are wrapping the brain in soft sheets of microscopic sensor circuits, lighting it up from within using cell-sized diodes, turning it into a wireless transmitter. . . . Scientists even found a way to make an entire brain transparent–all the better to study the weave of neurons and synapses that make up the scaffolding of the brain.
Scientists want to transform these comparatively crude brain maps into detailed renderings that can document how the human brain’s 100 billion neurons–as many cells as stars in the Milky Way–instantly link in circuits through trillions of pathways. 
As one scientist noted, the possibility of threading light-emitting diodes into the soft matter of the brain means that “tiny seeds of light can be injected to activate special networks of light-sensitive neurons. . . . It provides a recipe for delivering all sorts of advanced technologies, such as integrated circuits down in the brain.”
The brain as a “wireless transmitter” or “integrated circuits down in the brain”? That seems to be a scientific prescription for a cinema of neural apocalypse in which technologies of behavioral modification move from the outside of the body to the core of its cerebral cortex. No longer, then, a requirement for quantified self meetups–with their contagious techno-enthusiasm for tracking metrics of all kinds–but, in this scenario, silent meetups of integrated circuits that are downloaded directly into the previously untrackable universe of human neurology. What possibilities yet undreamed, what future still unimagined would suddenly become viable if data tracking–presently focused on that which leaves only the “faintest measurable trace”–were to deliver its advanced technologies in the form of integrated circuits hardwired to the motherboard of the human brain?
The overall goal of neurological modification, actually reshaping the neural circuitry of the brain, is the essence of the DIY bodies of the future. Light up the neural circuitry of the brain, use “tiny seeds of light” to “activate networks of light-sensitive neurons,” remake the brain as a “wireless transmitter,” and we are instantly living in a newly emergent world of affective neuroscience: augmented intelligence, cybernetically enabled emotion, operant conditioning of neurological depression, technically facilitated happiness–a world of genetically improved senses. Neuroscientists motivated by dreams of genetically modifying the neural circuitry of the human species have already formed the usual alliance with large-scale commercial interests invested in ambitious plans to harvest neural circuitry for accelerated capital accumulation. Similar to most other spectacular digital launches, this double alliance of science and business around tweaking neural circuitry is motivated, in the first instance, by an ideology of facilitation. Who wouldn’t prefer for their children, if not for themselves, the heretofore impossible utopia of neural circuitry that could be effectively modified to deliver improved intelligence, health, emotions, and physical appearance? Download integrated circuits in the brain and human neurology would be quickly rendered the first and best of all the cognitive apps of the future, ready to practically realize the most recent advances in robotics, genetic biology and nanotechnology. It would be as if technological rapture took possession of neural circuitry and delivered the integrated brain to the ecstasy of singularity.
However, the other side of the ideology of (neural) facilitation is the presence of integrated circuits that take command. In this sense, once neural circuitry has been lit up by those “tiny seeds of light” and once “special networks of light-sensitive neurons” have been activated and their neurological structure diagnosed, the result is likely to be brain matter dangerously overexposed and, in fact, perhaps fatally vulnerable. What and who, then, will be the DIY bodies of the future? How will issues related to class, race, ethnicity, and gender play themselves out in the approaching universe of reengineered neural circuitry? And what happens when the previously invisible region of human neurology with “as many cells as stars in the Milky Way” abruptly moves from its sheltering darkness to the bright lights of scientific probes that want, above all, to explain the complexity of all those trillions of pathways? From sometimes harsh historical experience, we know well that questions of visibility and invisibility are never simply reducible to the question of technology. Who and what will be brought into visibility has always been an essentially political determination. Equally, who and what will remain cloaked in invisibility, and thus rendered exterior to traditional rights of human recognition, also involves prior political settlements concerning issues bearing on prohibition, exclusion, and disavowal. All this is, of course, studiously screened away by purely technological analysis determined to finally achieve the elixir of all scientific ambition–lighting up the soft matter of the brain in order to probe its neural contents with integrated circuits. Here, the accelerating speed of technologies of (neural) facilitation easily outpaces contemporary deliberative reflections on the fate of the human nervous system first fully objectified and then harvested by the command language of affective neuroscience.
As William Leiss, a futurist philosopher of genomic science, once asked: “Are we ethically prepared for this?” Are we ready, ethically ready, for the coming order of neural modification, with its tweaking of the human nervous system, first as a way of facilitating an improved human situation (albeit for some) and ultimately to assume full neural command of that which was previously unmeasurable, untrackable, invisible?
Remote Mood Sensors
As if to accelerate the process of lighting up the brain and thus bring the full complexity of its neural circuitry into a greater visibility, a cutting-edge five-year research program has recently been announced with the aim of creating in the near future remotely controlled mood sensors, ostensibly for controlling depression and anxiety, that can be inserted directly into the brain. Again, following the doubled logic of facilitation and command, the ethical justification for such prototyping is made in terms of bringing urgent medical relief to traumatized soldiers suffering the long-term effects of post-traumatic stress disorder. Given that the mood sensors will be operationalized with possibilities for remote control, it might also be hypothesized that a bio-technological device of this emotional magnitude may also align itself very smoothly and without a ripple of (scientific) discontent with what the theorist Paul Virilio has described as the process of “endo-colonization,” namely strategic interventions by which governments make war on their own domestic populations. As reported by Patrick Tucker in Defense One (“The Military Is Building Brain Chips to Treat PTSD”), the research program follows the trajectory of technologies of “deep brain stimulation”:
How well can you predict your next mood swing? How well can anyone? It’s an existential dilemma for many of us but for the military, the ability to treat anxiety, depression, memory loss and the symptoms associated with post-traumatic stress disorder has become one of the most important battles of the post-war period.
With $12 million (and the potential for $26 million more if benchmarks are met) the Defense Advanced Research Projects Agency, or DARPA, wants to reach deep into your brain’s soft tissue to record, predict and possibly treat anxiety, depression and other maladies of mood and mind. Teams from the University of California at San Francisco, Lawrence Livermore National Lab and Medtronic will use the money to create a cybernetic implant with electrodes extending into the brain.
The research is funded by DARPA through its SUBNET (Systems-Based Neurotechnology for Emerging Therapies) program. With the overall aim of “automatically adjusting therapy as the brain itself changes,” the military’s interest is said to lie in obtaining high-resolution maps of the brain’s neural circuitry, particularly when surges of electrical signals moving across its motor cortex express themselves in symptoms related to anxiety, depression, and memory loss. “Brain chips,” then, for modulating mood swings in subject populations.
Future augmentations of the DIY body with brain chips–“invasive deep brain implants”–lend themselves most immediately to dystopian visions of mind control. Here, under the therapeutic cover of improving individual psychological health by reducing depression, anxiety, and mood swings, what is really being delivered to the brain is a fundamental change in the patterns of its neural circuitry. Once brain implants have been drilled down into the soft matter of the brain, the expectation is that gushers of neural data will provide new ways of mapping, then modeling, the brain’s electrical networks. Once installed, brain chips could potentially reverse engineer the amygdala by changing the patterned behavior of neural circuitry as a way of circumventing the neurological sources of traumatic injury. Once the brain has been opened up by cybernetic implants to mood-altering therapeutics, it creates the possibility of generalizing this initially purely therapeutic intervention across entire populations. In other words, “a crude example of what’s possible with future brain-machine and cybernetic implants in the decades ahead.” 
Perhaps, though, not “mind control” in the traditional sense of a political mechanics of domination, but the wiring together of previously individuated brains into new forms of fused affectivity. In this case, brain chips are a two-way (neurological) street, both transmitting data to waiting sensors from deep inside the soft matter of the brain and delivering mood-altering therapeutics to the amygdala. If a future of bodies with brain chips is alarming from the perspective of received visions of mind control, perhaps that is because this is already less a futuristic project than a deeply retrograde one. In a highly mediated culture we have long been accustomed to what McLuhan once described as “media as massage”–electronic media that modulate the human nervous system with psychologically powerful simulacra of images, sounds, and (virtual) emotions. To some extent, inserting digital devices such as brain chips only makes obvious what may have already happened to us in that complex environment of brain/cybernetic interfaces known as the mass media. But, if that is the case, maybe what is most disturbing about brain chips for mood alteration are two of their other constitutive features. First, with this neurological experiment in “invasive deep brain implants,” an ethical boundary is fatally breached, one in which the human brain is harvested as another inanimate object of vivisectioning. Implanted with prosthetics, drilled with chip technology, carefully mapped and modeled, this is, in essence, an experiment in rendering neural circuitry a fully alien object of radical experimentation. What is possible, then, with “future brain-machine and cybernetic implants in the decades ahead” may be a deeply ominous future in which neurological functioning is reduced to a servomechanism of more pervasive cybernetic patterns of behavior. Operant conditioning delivered by a brain chip at the speed of light optics.
Second, not just brain chips as advanced expressions of wireless operant conditioning, but also the construction of DIY bodies of the future built upon the triumph of the data-driven brain and the eclipse of the human mind. Here, hacking the brain by literally “jump-starting” it with electrical currents would mean reducing the struggle to overcome consciousness of trauma and the mood swings associated with anxiety and depression to a purely operational solution, with efforts at understanding the social origins of trauma, and the existential crises that may have triggered acute anxiety or severe depression, eliminated from the psychic scene. Jump-starting the data-driven brain also means a big increase in the cybernetic control of human neurology and an equally big decrease in the necessarily contingent, contextual, and ineffable nature of human consciousness.
Of course, for researchers of the data-driven brain, consciousness of the ultimately consequential results of the project may well lend added visibility to fundamental ethical doubts concerning the wisdom of this latest proposal for the technological interpolation of neural circuitry. For example, if past practices hold true, the first test subjects for this experiment in brain vivisectioning are likely to be animals involuntarily sequestered in laboratories, then perhaps even selected groups of army veterans who may be told that participation in this experiment aimed at implanting cybernetic sensors into the brain is a precondition for continued medical treatment. Equally, if mood swings are to be placed under remote (medical) control, what is to prevent the dark side of data–viral contagions, aggressive hackers, stolen or misplaced memory sticks, broken codes–from being introduced quickly and decisively into the deepest recesses of the soft matter of the brain? Sharper (brain) images, then, but also blurred ethical vision.
When Synthetic Biology Rides the Wave
We are actually transitioning from a Homo sapiens into a Homo evolutis–a creature that begins to directly and deliberately engineer evolution to its own design.
It’s perfect surfing conditions in La Jolla, California–sunny sky, steady breeze, and gigantic waves finally finding their way to the Pacific shoreline, swelling up to beautiful crests just before the whole (wave) scene dissolves again and again into a boneyard of broken patterns of water ebbing onto the beach. On this particular morning, there are dozens of surfers riding that magical California edge of bright sun and killer waves, some just bodysurfing but most trying to find the sweet spot of those cresting waves, that momentary physics of the barrel where bodily balance, fast motion and the curve of the cresting wave exist for the millisecond that is the take-home measure of the perfect wave. Now all this is pushed to the (pleasant) background of my attention as my mind is locked in deep, reading Greg Bear’s prophetic book Blood Music in a beachfront café located just steps away from the Scripps Institution of Oceanography, with its fabled marine research of life related to the watery element of the physical universe. In this whole scene, there’s a lot of surfing going down. Certainly, those incredible surfers of the waves just offshore, but also those marine biologists engaged in a kind of intellectual surfing of their own, this time trying to ride the waves of those sometimes perfect patterns of watery life-forms. There’s also some serious surfing taking place in Blood Music, although this time it’s not about human bodies tracking cresting waves or marine biologists looking to catch and ride the edge of insightful findings, but a story concerning the future of nanotechnology: a science-fiction fable of artificial cells that have escaped the lab, taken possession of the body of a graduate researcher, and then literally surfed the biological material of that single body until those artificial cells propagate beyond synthetically infected flesh to change the physiological structure of the entire environment.
Aesthetically, the image of the future offered by Blood Music, with its story of artificial life and computation come alive, is similar to those eerie images painted by the surrealist artist Max Ernst, where human bodies, inanimate objects, vital animals, and mythological symbols blend together into a common morphology. Politically, it’s anticipatory of Bill Joy’s warning that while a computer crash might mean the inconvenience of some lost data, crashing the basic codes of life runs the danger of taking down entire environments, if not suddenly terminating the natural evolution of the human species.
In the usual way of always incommensurable thought, my mind might have the apocalyptic futurism of Blood Music in its foreground and those scenes of rhythmic surfers in its background, but my situational awareness is short-circuited by a news alert from my always-on mobile that transmits the following headline from, of all places, the Scripps Research Institute:
LA JOLLA, CA–Scientists at the Scripps Research Institute (TSRI) have engineered a bacterium, whose genetic material includes an added pair of DNA “letters” or bases, not found in nature. The cells of this unique bacterium can replicate the unnatural DNA bases more or less normally, for as long as the molecular building blocks are supplied.
“Life on Earth in all its diversity is encoded by only two pairs of DNA bases, A-T and C-G, and what we’ve made is an organism that stably contains those two plus a third, unnatural pair of bases,” said TSRI Associate Professor Floyd E. Romesberg, who led the research team. “This shows that other solutions to storing information are possible and, of course, takes us closer to an expanded-DNA biology that will have many exciting applications–from new medicines to new kinds of nanotechnology.” 
While the news release was enthusiastic in its account of synthetic biology delivering on its promise of a new alphabet of life, my own sense of its “exciting applications” was tempered by an immediate thought that, try as I might, I could not sequester in the background of my perceptual field: was it really possible that, only three decades after the dystopian fable traced by Blood Music, events first written as literature had leaped the divisional boundaries of fact and fiction and become the modeling-principle for the future of the real? Is Blood Music the skin of the new real of synthetic biology and artificial DNA?
There can be little equivocation with the claim that synthetic biology, with its transformative creation of artificial DNA, is the future first of the DIY body and perhaps, later, even of the DIY planet. Brushing aside the seemingly feverish efforts by neuroscientists to stake proprietary claims on rewiring cognitive networks, whether by drugs, tracking, or implanted cyber-hooks, synthetic biology has introduced the fundamental game-changer of artificial life. For example, while contemporary social and political thought continues to debate the contentious relationship between power and life–whether, that is, power speaks in the name of (normative) life or in the more disciplinary name of death–synthetic biology envisions something entirely different, specifically the creation of previously unimagined forms of artificial life, from synthetic cells to the artificially constructed bodies of soldiers, astronauts and workers, that take full advantage of an “expanded-DNA biology.” More than “life by numbers,” the “quantified self,” or “remote mood sensors,” and going beyond mechanistic images of the reengineered brain as a “wireless transmitter” or an “integrated circuit” with neurons to be lit up and neural pathways to be “jump-started,” synthetic biology provides a dramatically new creative principle–Artificial DNA. Here, the addition of a “third, unnatural pair of bases” to genetic history does not simply promise “solutions to storing information” or an expanded-DNA biology, but introduces a fundamental element of uncertainty into the living world. While injecting a free-wheeling and essentially designer note of the recombinant, the unnatural, the artificial into the biological process of coding “life on earth” will undoubtedly facilitate many novel and worthwhile applications, it also means taking final possession of the question of life itself.
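The information-storage claim in the Scripps release can be made concrete with a back-of-the-envelope calculation. The sketch below is my own illustration, not drawn from the source: it compares the information capacity per base position of the natural four-letter DNA alphabet with that of the six-letter alphabet produced by adding a third, unnatural base pair.

```python
import math

# Information capacity per base position of a DNA alphabet.
# Natural DNA: two base pairs (A-T, C-G) give a 4-letter alphabet.
# The TSRI bacterium adds a third, unnatural pair: a 6-letter alphabet.
# (Function name and framing are illustrative, not from the source.)
def bits_per_base(num_base_pairs: int) -> float:
    alphabet_size = 2 * num_base_pairs  # each pair contributes two letters
    return math.log2(alphabet_size)

natural = bits_per_base(2)   # 2.0 bits per position
expanded = bits_per_base(3)  # log2(6), about 2.585 bits per position
gain = expanded / natural    # about 1.29, i.e. ~29% more information per base
```

By this rough measure, a third base pair raises storage density by roughly 29 percent per position, which lends some quantitative weight to the claim that “other solutions to storing information are possible.”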
Consequently, when genomic scientists envision multidisciplinary approaches linking together molecular biology, chemistry, computer science, and electrical engineering, what they are really articulating is the gateway to the future–a gateway to enhanced possibilities for “assembl(ing) biological tools to redesign the living world.” 
At this point, thinking at the intersection of ocean-driven scenes of California surfers and science-fiction hauntologies of Blood Music, I wondered if the unnatural world to come will also someday experience for itself those strange and enigmatic fractures of broken meanings, uncomfortable fits, and clashing cosmologies of the heart and mind that seem so unique to the human species about to be left behind. Measured by the first, truly global burst of excitement that greeted the Scripps announcement–an excitement less, to be sure, about the foregrounded text of a novel scientific breakthrough than about what seems to be its really existent, animating subtext, namely that we are now speaking openly and positively about redesigning molecular building blocks for the “living world”–and judging solely by the positive response to this drop-dead end-of-evolution, end-of-(natural)-story press release, there is an unqualified smoothness to the future of Artificial DNA. While Artificial DNA might not, as synthetic biologists like to claim, be allowed to escape the laboratory, that does not preclude active experimentation with synthetic DNA in the many other laboratories of power and capital–weaponizing synthetic biology, creating highly specialized artificial life-forms to maximize capital accumulation as well as minimize labor unrest, technologically enabled, eugenic dreams of synthesizing the “perfect child.” No longer the “terrorism of the code” in any particularly negative sense, but a future scripted in all its smoothness, transparency, and perfectibility by the rising (genomic) signs of synthetic biology.
Yet, for all that, there is still that lingering sense that in the future even the most artificial of all the artificial DNA will come to recognize that the mythic fate of the artificial–the ancient art of artifice–is always necessarily doubled. Certainly, every artifice first expresses itself in the language of perfect simulation–a smooth coding of the living world by biological tools that only work to enhance “exciting applications.” But, of course, the secret of all the great masters of the art of artifice is the hard-won realization that what motivates the artificial, what really lends believability to the theatre of artifice, is precisely the intangible element of undecidability, imperfection, and, indeed, latent error that is always carefully masked by the staging of the artifice. In this case, as in (natural) life, so too in (artificial) life: the fact that every fully accomplished perfect surf ride ends in the boneyards of just another wave on the beach might just intimate that the future logic of synthetic biology already contains its own boneyards, that what presently remains unsynthesized, unthought, and unconsidered is the ghost-rider in the shadows of artificial DNA. Could it be that resuscitating something of the spirit of the human, that which is presently policed away by the totalizing logic of synthetic biology, is the once and future destiny of artificial DNA? Or perhaps the reverse is true. If Blood Music is the skin of synthetic biology, swarms of mutating cells, like nature before them, will be indifferent to human fate. That would mean the future of synthetic biology will cast natural indifference against human artifice as its likely fate. In this case, we are in the presence of new (molecular) building blocks for a very traditional story.
Remember the unanticipated, premature death of Dolly, the first of all the android sheep that, for all its artificial resuscitation by the scientific hubris of genetic engineering, could not escape its fatal destiny of accelerated, synthetically enabled, aging. Just as we can acknowledge with some confidence that every massive wave is doomed to crash and every breakdown can be a potential breakthrough, so too even the science of artifice can never really escape that messy tangle of mythic destiny, complex ambitions, complicated dreams of the sub-real, and utopian dreams of transhumanism that is the continuing singularity event of the new real. In this case, the future of synthetic biology, with its creative breakouts of artificial DNA, nanotechnology, and fabricated xeno-organisms, remains fully uncertain in advance–fully undecidable, that is, until that future moment when the synthetic imagination actually begins to ride the wave of unsynthesized reality onto the beach of life itself.
Technologies of Suspended Animation
Following Heidegger’s fateful insight, the essence of the question concerning technology is never far away but always close at hand–never, that is, hidden away in mythic stories of secret origins, but always proximate to the posthuman condition. What signs can be deciphered, what lessons can be drawn from these scenes from the event horizon: Synthetic Biology Riding the Wave, Life by Numbers, Tweaking Neural Circuitry, and Remote Mood Sensors? On the surface, these are discrete stories from the data-driven life, whether expressed in all its subjective enthusiasm by the quantified self movement or by technologies specializing in reengineered neural circuitry, invasive brain implants, and biological experiments in developing artificial life-forms as radically new pathways for a literally posthuman evolution. Again, following Heidegger, it may be the contemporary human fate to be caught in the way of a larger technological destiny–its foundations, morphology, and ultimate direction, all of which remain unclear–although its transitional momentum is felt clearly and decisively at every historical turn. Indeed, several generations after Heidegger’s reflections in “The Question Concerning Technology,” the revolution in technological affairs which his thought was both attentive to and prescient about has so solidified its grasp on contemporary societies, with such dynamic and apparently unstoppable power, that we can actually begin to discern the overall trajectory, if not the terminal destiny, of the will to technology. Again, the destiny of technology lies closest to us: for example, stories of the quantified self as raw data unfold to tell a story, a ribbon of fact, a narrow path of what is promised to be transcendence. Sometimes, the unexpected comes to call: a blip, a pause, a catastrophic rupture or, perhaps, just a broken line of code.
Or again, stories from synthetic biology of the development of an approaching epoch of “biological superintelligence”–artificial life-forms constructed specifically to carry forward into a still-unknown future the complicated collusion of humans and machines at the speed of algorithmic processing, with the bodies of articulated robots, artificial orifices of synthetic senses, and the planetary skin of “The Internet of Everything.” The latter is how Cisco, the California telecom futurist of wireless networks, routers, and network-switching mechanisms, prefers to describe the accelerated, seemingly hyper-exponential rate of change associated with the Internet. While the Internet may have begun in the late twentieth century as a visionary, yet relatively limited, communicative order, by 2008 it had already generated its own wireless offspring–the “Internet of Things”–accompanied by the inevitable tech euphoria:
This year’s technology trends continue along the unstoppable path of Cloud Computing, Big Data, applications and mobile devices, 3D printing, NFC payments, integrated ecosystems, and, of course, the Internet of Things . . . that network of devices and, in general, things connected together to perform certain tasks and/or monitoring activities that enhance what we already do, or try to make that possible.
Always speaking with the confidence of a cartographer of a digital future of which it is itself one of the key communicative architects, Cisco no sooner announced the advent of the Internet of Things than only several years later–in a Schumpeterian-inspired act of creative destruction–promptly abandoned the latter conceptualization in favor of an even more grandiose vision: the “Internet of Everything.” Here, what is privileged is the power of connections, not things:
Much has been made of the “Internet of Things” and a growing array of “smart” things that will soon change every aspect of our lives–from Google’s driverless car and iRobot’s Ava 500 video collaboration robot to “smart” pill bottles that will automatically renew a prescription and remind you when to take it.
While we often think that it’s all about things, it’s not actually the “things” that create the value, it’s the connections among people, process, data, and things–or the Internet of Everything that creates value.
There are about 10 billion connected things in the world today. In the next ten years, that number will grow to 50 billion things, increasing the intelligence and value of all these connections exponentially–billions of things, trillions of connections. In other words, in the Internet of Everything, as in life, it’s not what you know, it’s who you know. Connecting dumb things makes them smart, and helping them work together makes them even smarter. That is the power of the Internet of Everything. 
A perfect statement, then, of the rapture of technological connectionism, with its layering of the language of “smartness” onto the otherwise inert world of routers, switchers, and interfaces, and its hijacking of the therapeutics of “helpfulness” on behalf of the “trillions of connections” among “people, process, data, and things.”
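Cisco’s arithmetic of “billions of things, trillions of connections” can itself be checked with a simple calculation. The sketch below is my own illustration, assuming “connections” is read in the Metcalfe’s-law sense of potential pairwise links among devices; Cisco presumably means realized links, a far smaller number.

```python
# Potential pairwise links among n connected things (Metcalfe's law):
# n * (n - 1) / 2. This counts potential pairs, an upper bound;
# realized connections are far fewer.
def potential_connections(n_things: int) -> int:
    return n_things * (n_things - 1) // 2

today = potential_connections(10_000_000_000)   # 10 billion things
decade = potential_connections(50_000_000_000)  # 50 billion things

# "Trillions of connections" (1e12) as a fraction of the potential
# link space among 50 billion things:
fraction = 1_000_000_000_000 / decade  # well under one billionth
```

Even “trillions of connections” is less than a billionth of the potential link space among 50 billion devices, so the rhetoric, if anything, understates the combinatorics it gestures at.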
The Quantified Fetus
And why not? Digital euphoria of this order produces many helpful results that illustrate possibilities for connecting “everything” in unexpected ways. For example, there is a new startup called Bellabeat that provides both a digital device and an app to serve as a fetal monitor, providing continuous, real-time, biologically sensitive readouts of the baby’s heartbeat. It can also track baby kicks and even the mother’s weight gain. Expanding beyond the traditional intimacy of a mother’s intuitive feelings of care and concern for the baby growing inside her, Bellabeat emits heartbeats as digital soundbytes that can be shared with family and friends over the Internet–literally the Internet of Everything, including downloadable, shareable, real-time heartbeats of an unborn baby–the quantified fetus can be heard anywhere, anytime through the wondrous “power of connections.” In an uncanny intimation of the real-time of the digital superseding the biological time of the human, digital histories of fetal activity, as in the case of Bellabeat, make possible digital life-histories spanning a longer time continuum than the chronological life-cycle of humans, which begins, at least in the West, with actual birth. Promoted as a digital device that can be trusted, Bellabeat may, of course, also have the unintended effect of undermining the mother’s trust in her own intuitive feelings for the invisible, yet emotionally palpable, presence of her unborn baby. A curious case, then, of increased digital sensitivity based on real-time data concerning the health of babies, and a soft, yet insistent, undermining of a mother’s actual emotionally based feelings for the well-being of her unborn baby. Not so much the old question concerning which to trust more–machine readouts or intuitive, inchoate feelings–but something else. In this case, does the power of (digital) connections also have the power to deliver us to a world of (emotional) misconnections?
“Turning the Body into a Password”
But why stop with the quantified fetus when it soon will be possible to inhabit a DIY body that is password protected? Google’s Motorola “skunkworks” division has just prototyped a new digestible digital device (Motorola’s Edible Password Pill) that once swallowed instantly transforms the human body into an authentication tool for accessing digital domains, from smartphones and laptops to digitally swipeable doors, whether offices, garages, or homes. Nominated by Time Inc. as one of the “twenty-five best inventions of the Year 2013,” the accompanying description includes the following: “Swallowed once daily, the pill consists of a tiny chip that uses the acid in your stomach to power it on. Once activated, it emits a specific 18-bit EKG-like signal that can be detected by your phone or computer, essentially turning the body into a password.” 
Following the overall logic of technological incorporation where data increasingly breaks the skin barrier, moving from its outer surface to its biological interiority, this digital device upgrades the body with the power of actually becoming its own interface, merging the “power of connections” with data flesh with such bio-technical seamlessness that the digitally authenticated body smoothly and effortlessly merges with an Internet of Everything. As Regina Dugan, former head of DARPA and now leader of Google’s advanced technology team, has remarked about the body as its own “authentication token,”
“Once swallowed, it means that my arms are like wires, my hands are like alligator clips–when I touch my phone, my computer, my door, my car, I’m authenticated in. It’s my first super power. I want that.” 
Working from the perspective that “(e)lectronics are boxy and rigid, and humans are curvy and soft,”  Google’s aim is to complete the always difficult last few millimeters of connecting the “curvy and soft” flesh of until-now organic human beings with the geometric grid of digital connectivity. Consequently, a future of modulated technology–soft, ubiquitous, pliable, smooth–sometimes camouflaged as electronic tattoos on infants (data tracking for better security), as “authentication tokens” in the supposedly hyper-cool style of e-tattoos, or as “stretchable circuits” for detecting concussions in sports injuries. Unconsciously adopting the language of mimesis, this form of body invasion by the contemporary generation of data snatchers, from the Motorola Edible Password Pill to digitally-coded rap tattoos, is brilliantly disguised as a biological appendage–subtle technology with the added benefit of conferring “super power” on fully authenticated bodies.
Between Life and Death
Perhaps the Quantified Self has already moved on to the diagrammed body, that point where digital devices are so deeply embedded in our psyches–from quantified fetuses to password protected bodies–that technology has now become a readout of the human life-cycle. When what should properly be on the periphery of human attention becomes central to perception, neurology, moods, or the human nervous system itself, there is bound to be some damage. It is not so much that under the pressure of technological change the human sensorium has now been turned inside out, resulting in radically split human senses–partially still interior to individuated bodily histories and partially circulating at the speed of digital circuitry–but that there is a growing prohibition against self-awareness of what has been lost with the appearance of the diagrammed body. There is no digital device that does not leave a bodily trace, no fusion with a synthetic life-form–whether a net bot with an inflated sense of artificial intelligence, a supposedly “smart” form of machine-to-machine communication, a cyber-implant in the theater of synthetic biology–that does not revise memories, disrupt feelings, disappear the precious singularity of that which is not only unique but ineffable–the relationship between a mother and her suddenly data-driven baby, bodies viewed as inauthentic because they are unauthenticated, life itself filled with jagged edges, slow trudges, and always messy confusions of being a being of organic matter in an increasingly dematerialized world.
Consequently, while we can be aware that the “power of connections” is swiftly delivering us to a future capable of producing quantified fetuses and password protected bodies, what remains unclear is the ultimate cultural, and perhaps even existential, impact of the triumph of the transhuman. Considered in terms other than dystopia or utopia, is it possible that such adventures in transhumanism–powered by visions of technological rapture and the Singularity event, practically implemented by the Quantified Self movement, and replete with experiments in vivisectioning neural circuitry by synthetic biologists–are fundamentally changing the meaning of life and death for the human species as a whole? Not a future of technological rapture, but an indefinite period of suspended animation in which the human species, as a life-form kept waiting for the Singularity event that may or may not ever arrive, perhaps makes its final, feverish preparations for a fateful crossing-over point between machines and humans, but, in any case, not wanting to be untethered from digital prosthetics and definitely not anticipating that very real crossing-over point–the always solitary experience of death–without helpful technologies wrapping themselves around the “soft and curvy” matter of the body organic as it terminates.
There is a revealing report in the New Scientist about a new emergency technique in suspended animation (“Gunshot victims to be suspended between life and death”) that bears directly on larger issues related to technology, culture, and life itself. The story recounts how surgeons at a Pittsburgh hospital are now experimenting in suspended animation for victims of traumatic injuries–by guns, knives, or blunt objects–as a way of stopping blood loss, thus gaining bodily time in order that their lives can later be saved by the necessary medical interventions. One surgeon is quoted as saying: “We are suspending life, but we don’t like to call it that because it sounds like science fiction. So we call it emergency preservation and resuscitation.” The technological procedure used in this trial is straightforward: once the aorta has been clamped, a solution of saline is pumped “through the heart and up to the brain,” and the patient’s temperature is reduced, with the result that “at this point they will have no blood in their body, no breathing, and no brain activity. They will be clinically dead.” But hopefully not for long, since after the necessary surgical interventions, blood is flushed through the body, the saline solution purged, and the patient’s body warmed up by its own circulating blood. With this (redemptive) medical conclusion: “We’ve always assumed you can’t bring back the dead. But it’s a matter of when you pickle the cells.”
Now while this is an intriguing story concerning the truly liminal boundaries between life and death, it may also be a preliminary glimpse of the fate of the human species generally and the DIY body specifically, as it is flushed with a saline solution of synthetic technologies, its key organs clamped shut with password protected apps, its body temperature definitely cooled down by increasingly antiseptic loops of cold code, and its neural circuitry placed in a state of suspended animation waiting for resuscitation by technological rapture. While medicine, like all of science before it, cannot in the end overcome the finality of human mortality, the greater ambition of contemporary technology, particularly in its transhumanist expression, is captured perfectly by the surgeon’s insight into the decidability of previously undecidable matters of life and death: “It’s a matter of when you pickle the cells.” 
 Raymond Kurzweil, “Reinventing Humanity: The Future of Machine-Human Intelligence,” http://www.singularity.com/KurzweilFuturist.pdf (accessed May 21, 2014).
 Gary Wolf, “The Data-Driven Life,” The New York Times (May 2, 2010), http://www.nytimes.com/2010/05/02/magazine/02self-measurement-t.html?pagewanted=all (accessed May 20, 2014).
 Paolo Saraceno and Renato Orfei, “From Molecular Clouds to Stars,” Istituto di Fisica dello Spazio Interplanetario, CNR, http://www.gps.caltech.edu/classes/ge133/reading/starformation.pdf (accessed July 28, 2014).
 James Wolcott, “Wired up! Ready to Go!” Vanity Fair (February 20, 2013), http://www.vanityfair.com/culture/2013/02/quantified-self-hive-mind-weight-watchers (accessed May 20, 2014).
 Robert Lee Hotz, “Mysterious Brain Circuitry Becomes Viewable,” The Wall Street Journal, http://online.wsj.com/news/articles/SB10001424127887324235304578438811489274812 (accessed June 2, 2014).
 Patrick Tucker, “The Military is Building Brain Chips to Treat PTSD,” Defense One, http://www.defenseone.com/technology/2014/05/D1-Tucker-military-building-brain-chips-treat-ptsd/85360/?oref=d-channelriver (accessed May 29, 2014).
 Juan Enriquez, quoted in Breanna Draxler, “Life as We Grow it: The Promises and Perils of Synthetic Biology,” Discover Magazine (December 11, 2013), http://discovermagazine.com/2013/oct/14-life-as-we-grow-it (accessed July 22, 2014).
 “Scripps Research Institute Scientists Create First Living Organism that Transmits Added Letters in DNA ‘Alphabet,'” Scripps press release (May 7, 2014), http://www.scripps.edu/news/press/2014/20140507romesberg.html (accessed July 23, 2014).
 “The Internet of Things,” Opinno, http://www.opinno.com/en/content/internet-things-0 (accessed May 20, 2014).
 Dave Evans, “Why Connections (Not Things) Will Change the World” (August 27, 2013). For a full expression of Cisco’s futurism, see http://blogs.cisco.com/ioe/how-the-internet-of-everything-will-change-the-worldfor-the-betterinfographic/ (accessed June 9, 2014).
 Anne Field, “Venture Capital Flocks to the ‘Quantified Self,’” used with permission, http://thenetwork.cisco.com/ (accessed June 05, 2014).
 “Inventions of the Year 2013: The Edible Password Pill,” Time, Inc., http://techland.time.com/2013/11/14/the-25-best-inventions-of-the-year-2013/slide/the-edible-password-pill/ (accessed May 15, 2014). Emphasis in original.
 Liz Gannes, “Passwords on Your Skin and in Your Stomach: Inside Google’s Wild Motorola Research Projects,” video, Facebook, https://www.facebook.com/sharer/sharer.php? (accessed June 6, 2014).
 Helen Thomson, “Gunshot victims to be suspended between life and death,” New Scientist (March 26, 2014), http://www.newscientist.com/article/mg22129623.000-gunshot-victims-to-be-suspended-between-life-and-death.html#.U5VQTBzr_-s (accessed April 22, 2014).
Robots Trekking Across the Uncanny Valley
In the new real, we are running with the robots. Industrial robots for seamlessly automated car manufacturing; medical robots for facilitating patient care in assisted living retirement communities; warrior robots engaged in materializing the imaginative game scenarios of cyber-warfare; toy robots that promise a happy first encounter between machines and the newest generation of humans; and, most of all, invisible robots circulating in the data clouds of social media as SocialBots. Perhaps more than we may suspect, ours is already a blended reality in which robots not only live among us as artificially programmed prosthetics equipped with articulated limbs and complex sensory arrays, but have also begun to live within us, quietly but insistently bending the trajectory of human perception, imagination, and desire in the direction of a future life of the mind that bears unmistakable signs of a robotic imaginary. Consider, for example, the following stories focusing on the complex intersection between human intelligibility and robots, both invisible and visible.
While the future of human encounters with robots has often been envisioned as an ominous struggle between fragile but immensely adaptive humans and powerful, although less creative, mega-robots, the real-world encounter has proven to be decidedly low-key, ubiquitous, and technologically subtle. Seemingly everywhere, the digital body has been swiftly delivered to its robotic future in the form of a pervasive network of invisible bots: socialbots swarming social media sites creating contagious flows of viral information, influencing individual perception, imitating human behavior; capitalist super-bots in the form of high-frequency trading algorithms that powerfully shape the ebbs and flows of stock transactions; psy-ops bots in the service of military intelligence that function to effectively influence political perception; and, of course, those other multiplicities of net bots–spiders, crawlers, and malware–that trawl the Internet, sometimes like proletarian worker robots performing routine web indexing functions, but at other times like futurist versions of the Cylons in Battlestar Galactica, quietly searching for critical weaknesses in websites, software programs, and Internet infrastructure itself. Consequently, to the question concerning future encounters between humans and robots, the answer is already not only well known, but pervasively experienced as the contemporary real-time environment of digital life. No longer content to remain at a safe, mechanical distance from their human creators, robots in the form of those lines of code that we call bots have already broken down the walls of human perception, inhabiting the world of social media as their cybernetic hive, attaching themselves to the human imagination in the seductive form of hashtags and tweets and, all the while, migrating the spearhead of robotic evolution itself from the mechanical to the neurological.
In the usual way of things, no one really anticipated that robots would faithfully follow the trajectory of technology itself, from high visibility to pervasive invisibility, travelling from the outside of the human body to the deepest interior of human subjectivity, quickly evolving from the mechanical to bots with very active cognition. When bots proliferate in the digital clouds that surround us, when they actually take up neurological residence in human perception, desire, and imagination, we can acknowledge with some confidence not only that we are already running with the robots, but something more uncanny; namely, that robots are already living among us and, most decidedly, living within us.
The meaning of this is fully enigmatic. When robots were something that we could see–for example, the cute Japanese robot that played soccer with President Obama and concluded with a victory dance and cheer–we could take the measure of the event in traditional humanist terms. But what happens when robots actually trek across the uncanny valley? Not uncanny in the usual sense of the term because they physically start to become indistinguishable from humans, but in the deeper sense that bots are perhaps already an indispensable dimension of posthuman subjectivity. We mean this literally. For example, it is reported that 30 percent of all Twitter content comes from bots: bots that reply to articles, bots that assume the names of friends in order to direct traffic to specific commercial products, bots for spying, for trading, for porn. In this case, have we become our own uncanny valley? Consider the following media report:
“I Flirt and Tweet. Follow Me at #Socialbot”
From the earliest days of the Internet, robotic programs, or bots, have been trying to pass themselves off as human. Chatbots greet users when they enter an online chat room, for example, or kick them out when they get obnoxious. More insidiously, spambots indiscriminately churn out e-mails advertising miracle stocks and unattended bank accounts in Nigeria. Bimbots deploy photos of gorgeous women to hawk work-from-home job ploys and illegal pharmaceuticals.
Now come socialbots. These automated charlatans are programmed to tweet and retweet. They have quirks, life histories and the gift of gab. Many of them have built-in databases of current events, so they can piece together phrases that seem relevant to their target audience. They have sleep-wake cycles so their fakery is more convincing, making them less prone to repetitive patterns that flag them as mere programs. Some have even been souped up by so-called persona management software, which makes them seem more real by adding matching Facebook, Reddit or Foursquare accounts, giving them an online footprint over time as they amass friends and like-minded followers.
Researchers say this new breed of bots is being designed not just with greater sophistication but also with grander goals: to sway elections, to influence the stock market, to attack governments, even to flirt with people and one another. 
The above report concludes by noting that of the 500 million Twitter accounts, “some researchers estimate that only 35 percent of the average Twitter user’s followers are real people,” that “more than half of Internet traffic already comes from nonhuman sources like bots or other types of algorithms,” and that in “two years, about 10 percent of the activity occurring on social online networks will be masquerading bots.” 
More than the sheer quantity of socialbots invading every dimension of digital life, what is significant about this report is something left undisclosed: that bots are integral to the question of social identity. Not simply in the sense of leveraging perceptions, desires, and imagination to move in certain directions, but integral in the fuller sense of the term–that, perhaps, we have already succeeded in moving beyond the point of real-time familiarity with the presence of bots to actually being part-human/part-bot. In this case, what may be truly uncanny is our own online subjectivity, occupying as it does an entirely unstable boundary between lines of code and lines of skin. When bots come inside us, pacing our existence with their artificial “sleep-wake cycles,” mirroring our moods with “persona management software,” and creating networks of their own consisting of “friends and like-minded followers,” we can recognize that we have become the first and best of all the posthuman subjects, breathing in lines of code as the real source of digital energy that allows us finally to come alive as the flesh and blood of socialbots.
More than half a century ago, the American psychologist B.F. Skinner correctly (and in fact enthusiastically) endorsed a future society based on a relatively primitive theory of “radical behaviorism.” Setting aside enduring questions concerning the origin and meaning of introspection and unconscious desires, Skinner suggested an alternative form of human subjectivity constructed on the strictly behavioral foundations of “operant conditioning.” For Skinner, what matters is the quantified self: the observable self that acts in and upon the world on the entirely predictable basis of social reinforcements–some negative (punishment), others positive (rewards), with yet still others more neutral in their role as reinforcements. Reducing the diverse spectrum of individual human experience–lingering desires, upstart passions of the heart, long-buried psychological repressions, mixed motives–to the observable behavior of a subject that is postulated as acting on the basis of a social protocol of rewards and punishments (i.e., avoiding that which hurts, privileging that which rewards), Skinner’s vision held that that which was true in the laboratory with respect to the behavior of rats and pigeons was equally true of social behavior in general. That is, human behavior could actually be modified by the application of the soft power of a token economy, providing actual, and sometimes symbolic, rewards as an inducement for certain privileged forms of social behavior, while gradually extinguishing undesirable behavior by the hard power of pain and punishment. Stated in its essential elements, Skinner’s vision of social behavior–“operant conditioning”–provided a way of transcending millennia of concern with that strange and definitely precarious mixture of animality, intellectuality, and emotion that is the nature of being human in favor of an ecstatic theory of remaking humans by the organized application of a radically new technology of human subjectivity–radical behaviorism. 
In this perhaps pragmatic and certainly deeply visionary theory of the human condition, there was always a twofold ontological assumption: first, that persistent concerns with supposed epiphenomena such as psychic blockages, unknown motives, and interior sensibility could, and should be, dismissed in favor of a technological vision of subjectivity open to its surrounding environment, deeply influenced by its actions and responding accordingly; and second, that the “self” of radical behaviorism could be socially modified, indeed socially engineered, by the methodical application of the principles of operant conditioning. Curiously, while at the intellectual level the technological utopia that Skinner envisioned in his books Walden Two and Beyond Freedom and Dignity was itself surpassed by theoretical debate about the rise and decline of all the referentials of truth, power, and sexuality, Skinner’s prophetic vision of a social self capable of being modified by the soft power of social reinforcements–particularly the “token economy” of radical behaviorism–has finally found its key public expression in the once and future society of socialbots. Not simply a new technology of communication perfectly fit for the age of social media, socialbots are, in their essence, something very different, namely a technology for modifying human subjectivity that is, in its essence, simultaneously political and neurological. Political because socialbots embody how the ideology of operant conditioning is inserted into the deepest recesses of the data mind–the externalized, circulating consciousness characteristic of the quantified self of social media.
Neurological because socialbots are the primary cybernetic agents of “cognitive hacking,” that complex process whereby the key driver of the newly emergent attention economy–perceptual attention–is encouraged to turn in certain directions, sometimes by positive reinforcers operating in the language of seduction and, at other times, by negative reinforcers functioning in terms of fear and anxiety. When swarms of socialbots attach themselves to the data mind–flirting, chatting, spying, tracking–we can clearly recognize that we are already living in a society of soft power and modulated violence.
Indeed, one of B.F. Skinner’s most celebrated instruments for test-driving the theory of operant conditioning was the “Skinner Box,” a closed, programmable environment whereby test subjects–including laboratory rats and pigeons–could be probed, reinforced, and, if necessary, punished as a way of calibrating, and thus engineering, the protocols of effective social modification. Now, the fact that Skinner’s theory of operant conditioning–with its stripped-down assessment of human behavior, its studious attention to the best practices of a token economy, and its transcendent vision of behavioral modification guided by experts–was seemingly displaced by theoretical attention to the death of the subject, from poststructuralism and postmodernism to posthumanism and, most recently, by new materialist theories focused on the complexity of objects as life-forms, does not necessarily mean that operant conditioning, with its profoundly eschatological vision of behavioral modification, was lost to the world of emergent technologies. In one of those superb ironies of cultural reflection, the Skinner Box could be quickly left behind as so much detritus on the way to posthuman culture precisely because the theory of operant conditioning was always waiting patiently and persistently for its technological realization by a creative form of new media–in fact, social media–that could instantly and decisively translate the anticipatory vision of soft power, token economy, and reinforcement theory that was the Skinner Box into the generalized network of socialbots within which we find ourselves enmeshed today. 
In this case, when socialbots take active possession of social media, when complex patterns of human neurology expressed by the ablated consciousness of the data mind are gradually shaped, indeed modified, in their observable outcomes by bots that chat, make suggestions, anticipate connections, manifest seemingly total recall, and facilitate the attainment of desirable goals (better health, greater intelligence, early warnings), then, at that point, the Skinner Box is no longer an object outside ourselves but something else entirely–a technology of programmable subjectivity rendered part-flesh/part-data. Today, it is not so much that we are mingling with physical robots in ways anticipated by cinematic and science-fiction visions of the technological future, but that clear, discernible borders have been eliminated between immaterial (social) robots and ourselves, that it is difficult to know with any certainty whether a friend or a commentator on social media is human or the sensitively attuned response of an artificial life-form–a socialbot–that can know us so intimately because, in daring to become fully digital–being social media–we may have inadvertently entered into the long-anticipated world of B.F. Skinner redux. Replete with swarms of bots–socialbots, neurobots, spybots, junkbots, hackerbots–the ablated Skinner Box that is the universe of contemporary social media has this common feature: expert systems in the form of artificial life-forms function ceaselessly to modify, cajole, influence, and channel the privileged psychic targets of human perception and social attention in the token economy of network culture, with its powerful technologies of soft facilitation and its equally harsh technologies of command, including surveillance and tracking.
Happily taking up neurological residence in the data mind, armies of neurobots, sometimes acting at the behest of corporate capitalism or perhaps under governmental supervision, are, in effect, the way in which power speaks today–otherwise invisible databases that seduce, inform, link, and recall as leading spearheads of evocative communication between robots and humans.
With the sheer invisibility of socialbots, the fact that the first, fateful encounter between robots and ourselves occurs in the innocuous, immaterial form of lines of code may intimate the elimination of the pervasive anxiety surrounding the “uncanny valley”–that psychic moment identified by robotics engineers when robots become effectively indistinguishable from human presences. In this case, the uncanny valley of robotics engineering lore may well constitute an ancient, psychological reinforcer supporting the pattern-maintenance of established boundary lines long viewed as necessary to the self-preservation of the human species. While lines of code never rise to the psychological prominence of increasingly human-like mechanical robots, they do enjoy an important technological attribute, namely encouraging the human species, individually and collectively, to drop its traditional psychological aversion to mixing robotic and human species-identity, which thus increases the vulnerability of the human species to quick insertions of the most fundamental elements of robotic consciousness, such as ambient awareness, distributive consciousness, circuits of fast connectivity and a fully externalized nervous system into the emergent infrastructure of the digital brain. Definitely not openly hegemonic and certainly not operating in the language of domination, the first encounter of neurobots and humans produces individuals who actually begin to see, think, and feel like the socialbots of their wildest dreams.
A recent BBC report, titled “Robotic Prison Wardens to Patrol South Korean Prison,” describes a prototype demonstration of prison guard robots that would monitor inmates for “risky behavior,” specifically suicidal tendencies and violent impulses.
Professor Lee Baik-Chu of Kyonggi University, who led the design process, said that robots would alert human guards if they discovered a problem:
“As we’re almost done with creating its key operating system, we are now working on refining its details to make it look more friendly to inmates,” the professor told the Yonhap news agency. 
Quickly migrating beyond the use of robots to physically guard prisoners, this prototype project represents that moment when robots first began to evolve beyond their purely mechanical function as prison guards to the more complex task of carrying out psychiatric assessments of the behavioral patterns of prison inmates. While it could be expected that robots would first enter prisons in the traditional roles of surveillance and control, the three robots involved in the demonstration project have a very different task: namely, to mingle among a captive population as only a five-foot robot can do and while “looking more friendly to the inmates” conduct an active search for signs of suicidal and violent behavior. Not so much, then, a demonstration concerning the feasibility of using robots in prison environments, but actually an experiment with very general applications for perfecting an operating system allowing robots to conduct complex psychiatric examinations of prisoners. At this point, we move beyond cinematic images of prisons of the future with robotic guards in towers carefully monitoring prison populations to that moment when technology actively penetrates the human psyche in search of “risky behavior.” Here, robots are no longer mechanical devices, but artificial psychiatrists equipped with 3D vision, motion detection, and programmed operating systems, all aimed at discerning visible signs of melancholy, rage, despair, desperation, fatigue, hopelessness.
While it is not evident from media reports how robots are to fulfill complex psychiatric examinations–other than the mention of the demonstration robots monitoring abrupt changes in the behavior of individual prisoners–the intention is clear: for prison guard robots to cross the boundary between surveillance from the outside of captive bodies to internal explorations of psychic behavior. Guided by a prescriptive doctrine concerning the parameters of “risky behavior,” what is really being tested here is robots as avatars of the new normal, conducting frequent visual examinations of a chosen, and necessarily captive, population in order to determine which bodies fall inside and outside of the normative intelligibility determined by the artificially determined ethics of “risky behavior.” In this case, it is the responsibility of those bodies placed under surveillance to provide no outward signs of either visible dissent (violence) or refusal of the state’s power over life (suicide). While at first glance it might seem that guard robots are not programmed with levels of artificial intelligence and, perhaps, artificial affectivity necessary to detect otherwise invisible signs of powerful emotions internal to the psychic life of prisoners, what may be brought into political presence here is an entirely new conception concerning how power will operate in the robotic future. Not so much the great referentials of power over death or even power over life, but power over visible expressions of human affectivity–a form of robotic control that assumes that the psyche is not a form of internal being but a kind of external doing; that is, the psyche is not something we have but something that we do. 
In this scenario, what is important about the human psyche for purposes of the society of control is less the complexities of hidden intentions–the cultural acedia associated with feelings of melancholy, resentments that activate rage, total powerlessness that motivates despair–than those visible, outward manifestations of the rebellious psyche, that moment when the bodily psyche moves from the long, silent gestation of hidden intentionality to overt declarations of its intention to act, whether through violence or suicide. At that point, at least according to this prototype demonstration, robot guards will be waiting along the watchtower of the society of control, quickly targeting immanent signs of psychic rebellions against the order of normative intelligibility, relaying warnings to central command, all the while standing by for further instructions.
I, Robot Land
In September 1950, Incheon, Korea, was the site of a daring, and justifiably famous, US invasion at the height of the Korean War, which aimed at capturing the capital city of Seoul and thereby decisively cutting off vital supply and communication lines to North Korean forces, who were then besieging UN forces further south along the Pusan Perimeter. Identified as “Operation Chromite,” conceived by General Douglas MacArthur and carried out by the 1st and 5th Marine Regiments, the invasion force successfully shifted the momentum of hostilities, eventually resulting in the present-day demarcations of North and South Korea.
Possibly as an unconscious tribute to that first invasion, Incheon has been selected as the site of a second invasion, this time not by US Marines charging ashore, but by astral landing craft carrying robots from the past, present, and future. The invasion force consists of a multitude of creative robotic engineers, futurist designers, and marketing experts in entertainment spectacles, all aimed at successfully establishing, by 2016, a cutting-edge theme park called Robot Land, which will consist of robotics engineering displays, commercial applications, and futurist-oriented research facilities depicting the future of robot society as well as possibilities for “harmonious co-existence” among “robots, humans, and nature.” Not so much a Disneyworld for robots, since that would entail focusing on a symbolically rich, but past-oriented, narrative of mass entertainment spectacles, Robot Land has a very different objective. Conceived of as a “history of the future,” the guiding ambition is to construct a theme park depicting a future robotic society that, while visually honoring the history of robotics engineering as well as visions of robotic society originating in science fiction and Hollywood cinema, actively and very directly engages in the project of designing the robotic future. Here, the robotic future anticipated by business, engineering, cinema, comic books, and literature will be paralleled by state-of-the-art research facilities aimed at both confirming and promoting Korea’s creative leadership in the areas of robotic design, fabrication, and engineering. Imagined as a gateway to the future rather than a spectacle of the past, Robot Land has chromite at its techno-visionary core, anticipating a hard-driving futurist invasion of global markets–and perhaps of the generalized cultural imagination as well–by the Korean robotic imaginary.
Part theme park (featuring a gigantic roller coaster that dangles off the arm of a massive robot before plunging into the water below; a robotic aquarium filled with robotic fish, including lobsters and jellyfish; and merry-go-rounds for riding robot animals), part futurist robot laboratory, and part “industrial promotion facility,” Robot Land takes seriously its mission of intensifying the “fun and fantasy” in the robotic future. There are, of course, necessary, indeed inevitable, exceptions, as in any story concerning the unfolding (artificial) future. In the midst of this intended celebration of robotic fantasy, there are also plans underway to demonstrate “how robots may be used in 2030, particularly when it comes to assisting seniors with housework, medical check-ups and dementia prevention.” There are also regional geo-national sensibilities, psychological and economic, at play. In this case, no sooner had the Japanese constructed two colossal robot statues (Tetsujin in Kobe and Gundam in Odaiba) than Korea’s Robot Land set out to trump Japan’s claim to supremacy in the area of gigantic robotic spectacles, with a strikingly colossal 364-foot statue of Taekwon V (Voltar the Invincible). Here, persistent and longstanding tensions between Korea and Japan find their most recent twenty-first-century manifestation in the delirious form of robotic fiction.
Considered as “a history of the future,” there is at least one significant, perhaps terminal, challenge to the overall logic of the project that is hinted at by the very naming of the theme park–Robot Land. Possibly conceived as a Korean alternative to the “magic kingdom” of California’s Disneyland Park, where “you can sail with pirates, explore exotic jungles, meet fairy-tale princesses, dive under the ocean and rocket through the stars–all in the same day,” Robot Land offers its own vision of a future distinct from the Disneyland prescription with its “eight extravagantly themed lands–Main Street, U.S.A., Tomorrowland, Fantasyland, Mickey’s Toontown, Frontierland, Critter Country, New Orleans Square and Adventureland.” While Disneyland seduces by translating the phantasmatic ideology of the American dream into nostalgic spectacles, Robot Land delivers a harder message: that robots are here to stay, whether taking the form of lobsters and jellyfish, assuming the entertainment guise of robotic animals gathered together for a fun carousel ride, inflating to the gigantic proportions of apocalyptic cinema like the massive statue of Taekwon V, or, more prosaically (but pervasively), spreading out their established robotic hardware as the real working infrastructure of global automobile manufacturing or, for that matter, as futurist technological prosthetics for the sick, the aged, and the demented.
While Jean Baudrillard might once have noted that the seduction of Disneyland is its convincing pretense that its fantastic simulations are an escape from the real world rather than what Disneyland really is–a perfect real-time model of soft power, modulated violence, and crowd-management–Robot Land is the technological order after the age of simulacra. Here, there are technologically enabled thrills–roller coasters dangling from the outstretched arms of massive robots–mesmerizing robotic spectacles, and spectacular feats of imagination, but no order of simulacra, no sense, that is, that the new order of robotics is anything other than what it really is: a key component of the Korean version of the power of the new real. With its mixture of entertainment spectacles, industrial promotions, and a graduate school in robotics, Robot Land is a place where fun illusions and delirious spectacles are always underwritten by a very visible undercurrent of dead-eyed economic seriousness of purpose and carefully orchestrated research visions of (certain) robotic futures. This is, of course, its biggest problem–that the future of robotics will probably have nothing to do with any territorial referent; certainly, it will not be a “land” in any physical or even symbolic meaning of the term, but will most definitely constitute a new order of time: robotic time. In this case, Robot Time, rather than Robot Land, would probably be a more accurate description of the new epoch ushered in by all futurist robotic designs, from mass entertainment spectacles to the complex artificial sensors working the assembly lines of the manufacturing world.
When the future of robotics, one already anticipated by contemporary developments, turns away from its ready-to-hand terrestrial manifestations–artificial fish, mega-statues, humanoid machines–and enters the databases of globalized networked culture as their indispensable artificial intelligence and machine-to-machine and machine-to-human communication, then we will recognize that we are not following a technological pathway that leads to a certain place (Robot Land) but one that moves toward a certain (robotic) order of database time that is networked, communicative, neutral. As with all things having to do with theme parks, actually expressing such a fundamental eschatological rupture in the order of things–the displacement in importance of visible space by the invisibilities of (database) time–is challenging. Such a challenge is probably why, although it takes momentary refuge in the comfortable referential illusion of Robot Land, this is one theme park that will probably always be known for the hauntological traces of its essential missing element–the once and future epoch of Database Robot Time. There are definitely no “magical kingdoms,” no “fairy-tale princesses,” no pirates–just a theme park on the edge of the rising time of the East announcing that, for all the psychic exuberance of its robotic fossils, from fish and statues to carousel animals, this is one tomorrowland that will not be able to camouflage for much longer what is really taking place in this second invasion of Incheon: the newly emergent order of the time of the robots, with humans kept on standby as their necessary prosthetics.
What happens when the evolutionary destiny of robots suddenly splits into two paths, with one pathway continuing that which has long been anticipated by scientific visionaries, cinematic scenarios, and science fiction–namely, the triumphant rise of a new robotic epoch invested with technological inevitability as successors to a putatively declining human species–and an alternative pathway in which robots abruptly shed their mechanical skin, upgrade their artificial intelligence, and adopt the remote senses of network culture as their very own interface with the surrounding world of human flesh? What, then, is the future of robotics: sovereign technological automatons or database robots?
Projective thought focused on the first pathway has long been the subject matter of technological futurists. In his brilliant book Mind Children, the technological futurist Hans Moravec establishes clear-cut timelines tracing the history of robots, from their first appearance as mechanical prosthetics servicing human needs to that quickly approaching singularity moment (approximately 2050) in which robots equipped with advanced artificial intelligence, articulated limbs, and full-array sensory data inputs are projected to become an autonomous species, not only thinking for themselves but, more importantly, making sovereign decisions concerning what needs to be done in the interests of the preservation of the (robotic) species. Anticipating that day of fatal reckoning in which robots, as the product of human imagination, just might be inclined to follow familiar (human) pathways of revenge-taking for the gift of (robotic) life which they can never pay back, Isaac Asimov, in his celebrated book I, Robot, is ethically preemptive in anticipating a future race of fully autonomous robots that are invested, outside their conscious awareness, with the guiding moral edict, first and foremost, to do no harm to human beings. That Asimov’s anticipatory ethics of robotic behavior would be quickly shrugged off by robots exhibiting all the behavioral, emotional, and moral traits of their human progenitors is, of course, the privileged focus of the science fiction writer Bruce Sterling, who in his cult classic Crystal Express eloquently and passionately scripts a future war of robots spanning many galaxies–a war in which a class of robots known as “shapers” and an opposing robotic tribe identified as “mechanists” engage in protracted combat in which the key issues at stake are as profoundly ontological as they are fiercely political.
Culturally, we are already well aware of the history of the (robotic) future that will be traced by the first pathway. Like a form of generalized anticipatory consciousness, many years of cinematic history have provided dramatic images of the multiple permutations, internal and external, that will likely follow the sovereign regime of robotic logic. While most cinematic encounters between robots and humans are ultimately settled by spectacles of violent battles, a few actually hint at a “harmonious co-existence of robots, humans and nature,” with the remainder often concluding with unsettled paradoxes, unfinished narratives, and promises only made to be broken. For example, the final, anguished speech by Batty, one of the pursued replicants in Blade Runner, powerfully and evocatively captures both the anguished human will to live and a courageous replicant’s pride in the star bursts he has witnessed, the distant planets explored, and an inexpressible awe before the vastness of deep space. When Albert Camus first articulated the sense of the absurd in The Myth of Sisyphus as consisting of an all-too-human demand for meaning to which the universe answers with indifferent silence, he probably did not have in mind a future time in which the hunted-down replicants of Blade Runner would be commonly haunted by an existential sense of the robotic absurd, that moment in which genuine anguish by replicants over their programmed termination dates is met with the silence of nature’s indifference.
That we are already conscious of the blending of technological dynamism, real power struggles, and stubborn, complex ethical entanglements that will probably constitute the material reality of the future life of the robotic mind is explored everywhere in the history of cinema, including such classics of visual imagination as 2001: A Space Odyssey, Alien, Metropolis, Westworld, The Day the Earth Stood Still, Star Wars, Star Trek, Robocop, Terminator 2: Judgment Day, and, of course, that poignant narrative of human senescence and robotic ingenuity–WALL-E. Like a cinematically driven society eager to be haunted by its technological future, and certainly capable of quick ethical and political adaptations to the demands of the (robotic) day, we may have already war-gamed the future, played and replayed it, spliced and remixed the fractures, bifurcations, and liminalities likely to follow the Judgment Day of the technological future. In this case, it is as if the first pathway to the future–the often-told story of human hubris and cyber-power–has already taken place in our collective imagination, leaving us now to be fully absorbed in studying in advance the psychic entrails of that fateful collision of the human species with its emergent technological successor.
However, with robotic life, as much as with human life, only opposites are ever true. Consequently, if there can exist such a rich cinematic and literary vernacular surrounding the robotic future, that might be because that future may have already reached its furthest limit and already begun to move in the reverse direction, not necessarily by way of a spectacular implosion but by a silent yet discernible shift in robotic intelligibility. Perhaps robots themselves have grown tired of their rehearsed cinematic portrayals, shifting direction away from the spectacle of powerful AI machines to the more prosaic, more pervasive, certainly more perverse, and genuinely more futurist enactment of the approaching world of database robots. That is what the opening stories in this narrative of robots trekking across the uncanny valley are all about: not so much a predictable future of human/robot deep space encounters, but a more complex story of database robots expressed variously as neurobots, psychic robots, and avatars of robot time. In this case, robots have already fully penetrated the human sensorium, from hijacking the process of automated labor to relentlessly hacking the senses.
Beyond visions of technological apocalypse featuring predatory struggles between space-bearing robots and instinctually-driven humans, the migration of robots into the minutiae of social life has quickly evolved from multi-axis industrial robots–automatically controlled, multipurpose and functionally reprogrammable–specializing in the automation of labor to swarms of cyber-bots, fluid networks of AI agents privileging the automation of cognition. With a rapid increase in the world robot population (300,000 in 2000 to 18,000,000 in 2011), industrial robots have swiftly been integrated into manufacturing processes, particularly those reprogrammable around automated labor that promises to deliver predictability and reliability, backed up by “high endurance, speed, and precision.” An increasingly technical future, therefore, in which the compulsory labor of armies of specialized robots quickly displaces laboring human subjects in many work processes: welding, shipbuilding, painting, construction, assembly, packaging, and palletizing. Here, the overall trajectory follows the traditional path of economic development, this time with robots beginning in low-skill, sometimes dangerous jobs that can be done automatically and remotely, and thereafter moving up the skill-set ladder of career achievement to assume high-skill, hyper-cognitive positions in network culture. That, at least, is the overall technological ambition, marred sometimes by disquieting reports such as the following account of what happened when robots went wild in a General Motors factory built on a field of robotic dreams:
In the 1980s, the General Motors Corporation spent upwards of $40 billion on new technologies, many hundreds of millions on robots. Unfortunately, the company did not spend nearly enough on understanding the systems and processes that the robots were supposed to revolutionize or on the people who were to maintain and operate them. The GM plant in Hamtramck, Michigan, was supposed to be a showcase for the company. Instead, by 1988 it was the site of some of the worst in technological utopianism. Robots on the line painted each other rather than the car bodies passing by; robots occasionally went out of control and smashed into passing vehicles; a robot designed to install windshields was found systematically smashing them. Once, when a robot ceased working, technicians did not know how to fix it. A hurried call to the manufacturer brought a technician on the next plane. He looked at the robot, pushed the “Reset” button, and the machine was once again operational. 
While computer malfunctions in a manufacturing plant can sometimes be solved by simply pushing the reset button, what happens now when computer glitches affecting the core system logic of the externalized nervous system take down key areas of social life, including banking, health, identity, and warfare? Without sufficient evidence concerning the consequences of the wholesale transfer of the human sensorium to electronic databases controlled, for the most part, by machine-to-machine (M2M) communication and endlessly circuited by data robots serving as synapses of the ablated world of cognition, finance, medicine, politics, and defense, contemporary technological society has quickly rushed into outsourcing itself, literally parceling out human identity into data clouds, from digital storage of personal health information to complex networks for circulating financial data. When entire computer systems crash, sometimes as a result of overload stress and, at other times, for reasons enigmatic even to systems engineers, the result is no longer an unexpected disruption in assembly lines, but the sudden data eclipse of core areas of externalized human cognition. When data goes dark, it is as if the body has suddenly been divested of its key senses–it is the jettisoning of externalized memory, the disappearance of electronic profiles of the extruded financial self, the loss of circulating electronic information concerning medically tracked subjects, or the failure of the recombinant, digital orifices of the eye, ear, taste, smell, and touch in the age of the rapidly dematerialized body. While many cautionary notes have been struck concerning the inevitable fallout from a future populated by the fully ablated self, skinned with an externalized nervous system, and possessing an order of (digital) intelligibility modeled after extruded consciousness, only now is it actually possible to measure the consequential results of this basic rupture of human subjectivity.
Lost in clouds of data, communicatively overexposed, its identity outsourced by fast digital algorithms, its autobiography uploaded by data streams always offshore to the vicissitudes of individual experience, the real world of technology, particularly robotic technology, reveals that we may have made a Faustian bargain with the will to technology. Whether through generalized cultural panic over the sheer speed of technological change, or perhaps an equally shared willingness to ride the whirlwind of a society based on the literal evacuation of human subjectivity, we have committed to a future of the split subject: one part a fatal remainder of effectively powerless human senses, and the other a digitally enabled universe of substitute senses. In the most elemental meaning of the term, the technological future that spreads out from this fatal split of human subjectivity cannot fail to be profoundly and decidedly uncanny. While robots, technically forearmed with indifference, coldness, and rationality, will probably at some point and in some measure successfully trek across the uncanny valley, the human response to the growing presence of the (technological) uncanny in contemporary affairs is far less certain. For example, consider the following reports from the uncanny valley that is daily life in the shadow of robots.
There was a recent newspaper report that evocatively captured the feeling of the uncanny in the robotic future. Appropriately titled “SociBot: the ‘social robot’ that knows how you feel,” the report focused on the underlying element of uncertainty that is often a sure and certain sign of the presence of the uncanny in human affairs:
If Skype and FaceTime aren’t giving you enough of the human touch, you could soon be talking face to rubbery face with your loved ones, thanks to SociBot, a creepy “social robot” that can imitate your friends.
“It’s like having a real presence in the room,” says Nic Carey, research coordinator at Engineered Arts, the Cornish company behind the device. “You simply upload a static photo of the face you want it to mimic and our software does the rest, animating the features down to the subtle twitches and eyes that follow you around the room.”
The company sees its potential in shopping centers and theme parks, airports and tourist information centers, “anywhere requiring personalized content delivered with a human touch,” as well as in potential security applications, given that the SociBot can track up to 12 people simultaneously, even in a crowd.
“We are looking for platforms that can be really emotional, investigating how robots can interact with people on multiple levels.” 
In his classic essay “The ‘Uncanny,’” written in 1919 and perhaps itself deeply symptomatic of the profound uncertainties that gripped European culture after WWI, Sigmund Freud approached the question of the uncanny on the basis of an immediate refusal. For Freud, the uncanny–unheimlich–does not denote a kind of fright associated with the “new and unfamiliar,” but something else–still indeterminate, still multiple in its appearances and elusive in its origins. Far from being “new and unfamiliar,” the uncanny for Freud represented something more enduring in the human psyche, “something familiar and old-established in the mind and which has become alienated from it only through the process of repression”–namely, the continuing yet repressed presence of “animism, magic and sorcery” in the unfolding story of the psyche. For Freud, scenes that evoked the feeling of the uncanny were remarkably diverse: “dismembered limbs, a severed head, a hand cut off at the wrist, as in a fairy tale of Hauff’s”; “feet which dance by themselves as in the book by Schaeffer”; the story of “The Sand-Man” in Hoffmann’s Nachtstücken, with its tale of the Sand-Man who tears out children’s eyes and the doll Olympia, who occupies an unstable boundary between a dead automaton and a living erotic subject; the always enigmatic appearance of the double; the fear of being buried alive; and, of course, the constant fear of castration. For Freud, whatever the particular animus that evokes feelings of the uncanny, the origin remains the same–the return of that which has been repressed not only by prohibitions surrounding “animism, magic and sorcery,” but also by episodic fractures, unexpected breaks in the violence that human subjectivity does to itself to reduce to psychic invisibility the complexities of sexuality and desire.
Now that we live almost one hundred years after Freud’s initial interpretation of the origins of the uncanny, does the emergence of a new robotic technology such as SociBots have anything to tell us about the meaning of the uncanny in posthuman culture? At first glance, SociBots represents a psychic continuation of that which was alluded to by Freud–a contemporary technological manifestation of the feared figure of the double as “something familiar and old-established in the mind.” For Freud, what is truly uncanny about the figure of the double is not its apparent meaning as mimesis, but its dual signification as simultaneously being “an assurance of immortality” and an “uncanny harbinger of death.” That is, in fact, the essence of SociBots: an assurance of (digital) immortality, with its ability to transform a static photo into an animated face, complete with twitches, blushes, and possibly sighs; but also a fateful harbinger of death, with its equally uncanny ability to transform living human vision into what Paul Virilio once described as cold-eyed “machine vision”–machine-to-human communication with a perfectly animated software face tracking its human interlocutors, twelve test subjects at a time. In this case, like all robots, SociBots certainly give off tangible hints of immortality–upload a photo of yourself, a friend, an acquaintance, and they are destined for eternal digital life. But, as with all visual representations come alive, it is also a possible harbinger of death, provoking feelings of human dispensability, a sense that the tangible human presence can also be quickly rendered fully precarious by its robotic simulacra. Interestingly, while Freud began his story of the uncanny with a reflection upon the psychic anxiety provoked by the figure of the Sandman, who robs children of their eyes, SociBots may well anticipate death in another way, this time the death of human vision and its substitution by a form of vivified robotic vision.
Here, SociBots could be viewed as providing, however unintentionally, perhaps the first preliminary glimpse of the psychic theatre of the Sandman in a twenty-first-century digital device. With this addition: SociBots resemble the myth of the Sandman in a second important manner. Not only, like the Sandman, does this technology provoke enduring, though deeply subliminal, human anxieties over the death of vision, but it also draws into cultural presence, once again, that strange figure of the doll Olympia with its subtle equivocations between dead automaton and living erotic subject. In this case, the particular fascination of SociBots, with its almost magical and certainly (technologically) occult ability to animate “features down to the subtle twitches and eyes that follow you around the room,” does not solely reside in its animation of death, but in its manifestation of a world where objects come alive, with eyes that track you, with lips that speak, and facial features that perfectly mimic their human progenitors. Neither death by automaton nor life by the doll-like construction of SociBots, but something else: this is one robotic technology that derives its sense of the uncanny by always occupying an unstable boundary between life and death, between software animation and real-life visual conversation and tracking. In essence, the uncanniness of SociBots may have to do with the fact that it is a brilliant example of the blended objects–part-simulacrum/part-database–that will increasingly come to occupy the posthuman imagination. Curiously, while it might be tempting to limit the story of SociBots, like the mythic tale of the Sandman before it, to stories of the death of human vision or even to the fully ambivalent nature of blended objects, from dolls to robots, there is possibly something even more uncanny at play here.
It might be recalled that Freud controversially concluded his interpretation of the uncanny with his own psychoanalytical insights concerning the unheimlich place as the uncanniness of “female genital organs”: “This unheimlich place, however, is the entrance to the former Heim (home) of all human beings, to the place where each one of us lived once upon a time and in the beginning.” While making no prejudgment on the genital assignment of robotic technology, it might be said that the story of SociBots carries a haunting, perhaps truly uncanny, premonition of a greater technological homecoming in which we are, perhaps unwittingly and unwillingly, fully involved. In this interpretation, could the origins of SociBots’ uncanniness have to do with its suggestion that we are now in the presence of technologies representing, in their essence, possibilities for a second (digital) rebirth? The uncanniness of SociBots, therefore, may well inhere in its capacity to practically realize the once and future destiny of robots as born-again technologies.
Junk Robots in the Mojave Desert: Year 2040
What happens when no tech meets high tech deep in the desert of California?
Just up the road from Barstow and far away from the crowds of Joshua Tree, there’s a junkyard where robots go to die. It consists of one hundred or so cargo-sized steel containers packed tight with the decay of robotic remains. Everything is there: a once scary DARPA-era animal robot weighing in at 250 pounds looks forlorn bundled in a shroud of net; early cobots and autonomous robots can be seen huddled together in one of the containers waiting to be reimagined; broken-down industrial robots that have reached a point of total (mechanical) exhaustion from repetitive stress injuries; abandoned self-organizing drone hives left to slowly disassemble in the desert air; swarms of discarded mini-robots–butterflies, ants and bees; mech/cyb(ernetic) corpses of robots made in the images of attack dogs, cheetahs, and pack animals, all finally untethered from reality and left to rust in the Mojave desert. Most of the valuable sensors seem to be missing but what remains is the skeleton of our robotic past. The only sound heard is the rustle of scattered papers drifting here and there with scribbled lines of start-up algorithmic codes. The only visual is the striking contrast of the sharp-lined geometry of those steel compartments against the soft liquid flows of the desert, land, and sky. The overall aesthetic effect of this robot junkyard is a curious mixture of the desert sublime with the spectral mountains in the background and dusty scrublands close to the watching eye, mixed with a lingering sense of technological desolation.
What’s most interesting about this robot junkyard–interesting, that is, in addition to its lonely beauty as a tarnished symbol of (technical) dreams not realized and (robotic) hopes not achieved–is that it has quickly proven to be a magnetic force attractor for a growing compound of artists, writers, and disillusioned computer engineers. Like a GPS positional tag alert on full open, they come from seemingly everywhere. Certainly from off-grid art communities on the plains of East Texas, some transiting from corporate startups in Silicon Valley, a few drifting in from SF, probably attracted by the tangible scent of a new tech-culture scene; there are even reports of artists drifting in from around the global net–Korean robo-hackers, Japanese database sorcerers, Bulgarian anti-coders, and European networkers–taking up desert-style habitation rights in the midst of the robot junkyard. It’s a place that some have nicknamed RoVent–a site where heaps of robots can be retrieved, repurposed, reimagined and reinvented.
It is almost as if there is a bit of telepathy at play in this strange conjuration of the artistic imagination and robots in transit to rust. Instinctively breaking with the well-scripted trajectory of robotic engineers who have traditionally sought to make robots more and more human-like, these pioneers seem to prefer the exact opposite. Curiously, they commonly seem to want to release the spirit of the robots, junkyard or not, to find their own technological essence. What is the soul of a data hive? What is the spirit of an industrial drone? What is the essence of a junkyard robotic attack dog? What makes a beautiful–though now discarded–robotic butterfly such an evocative expression of vitalism? Strangely enough, it is as if something like a Japanese-inspired spirit of Shinto, where objects are held to possess animate qualities and vital spirits, has quietly descended on this robot junkyard with its detritus of technical waste and surplus of artistic imagination.
The results of this meeting of supposedly dead technology and quintessentially live artists are as inspiring as they are unexpected. For example, one artistic display consists simply of a quiet meditation space where some of the junkyard robots are gathered in a rough circle, similar to a traditional prayer circle or the spatial arrangement of an ancient dirge, all the better to find their inner moe, or, at minimum, to reflect on that elusive point in their individual robotic work histories where the mechanical suddenly becomes the AI, the vital, the controlling intelligence and then, just as quickly, slips backwards into the pre-mechanical order of the junkyard burial site.
In the darkness of the desert night, there’s another artistic site that is organized as a funeral pyre for dead robots. Without much in the way of wood around for stoking the flames, these artists have paid a nocturnal visit to the ruins of the CIA-funded Project Suntan, close to a super-secret aviation project, where a barrel of abandoned liquid hydrogen has been retrieved for releasing the night-time spirits of (robot) mourning. The funeral pyre should be a somber place, but in reality it’s not at all. Maybe it is simply the visual, and thus emotional, impact of a full-frenzy funeral pyre, fueled by the remains of secret experiments in high-altitude aviation fuels, sparking up the desert air. Or, then again, perhaps it is something different, something more decidedly liminal and definitely elusive. In this case, when robots are stacked on a burning funeral pyre, it is very much like a ritualistic final consummation, that point where the visibly material melts down into the dreamy immaterial, and where even the scientifically contrived mechanical skins, electronic circuitry, and articulated limbs finally discover that their final destiny all along was an end-of-the-world return to the degree-zero of flickering ashes. The concluding ceremony for this newly invented Ash Wednesday for dead robots is always the same: a meticulous search by the gathered artists for the final material remains of the robots, which are then just as carefully buried in the dirt from which they originally emerged. Ironically, in the liturgy of the funeral pyre, there is a final fulfillment of the utopian–though perhaps misguided–aspiration common, it seems, to many robotic engineers, namely a haunting repetition, in robotic form, of the human life cycle of birth, growth and senescence.
However, if the stoked inferno of the funeral pyre for abandoned robots sometimes assumes the moral hues of an anthropomorphic version of (robotic) imagination, the same is most definitely not the case at a third site where a feverish outburst of the artistic imagination–splicers, mixers, recombinants, recoders–plies its trade anew by remaking this treasure trove of robot technologies. Here, strange new configurations emerge from creative remixes of self-organized drone hives and fluttering robot butterflies. When (dis)articulated robot pack animals, some missing a leg or two, are repaired with extra legs culled from leftover parts of robot dogs or now only two-legged robot cheetahs, the result is often spectacular. It is as if, in this act of robotic reinvention, the drudge-like life-trajectory of many robots–previously valued only for their invulnerability to boredom, whether boredom with things (repetition) or boredom with human beings (routinization)–is suddenly discarded. What’s left is this genuinely fun scene of robots, heretofore consigned to compulsory labor, untethered from their AI leashes, finally free to be what they were never designed to be: robotic cheetahs moving at the speed of a just-reassembled pack animal; robotic attack dogs, now equipped with reengineered robotic butterflies for better visual sensing, suddenly sidling away from high-testosterone attack mode in favor of startling, but ungainly, emulations of those exceedingly life-like Japanese theatrical robots. In this artistic scene, it is no longer the animating spirit of Shinto at work, but something else: the splice, the mix, the creative recombination of robotic parts into a menagerie of creative assemblages. Or maybe not. Some of the most fascinating projects involving this group of recombinant artists are those by descendants of Survival Research Laboratories.
Their renderings quickly brushed aside the aesthetics of creative assemblages in favor of a kind of seductive violence that is, it appears, autochthonous to the American imagination. In this scenario, it is all about riding the robots. Robots as monster dogs, cheetahs, wildcats, sleek panthers, and large-winged earthbound birds waging war against one another or at other times left untethered to roam the nighttime desert, whether as sentries, mech watchdogs, or perhaps free-fire zone attack creatures burning with the ecstasy of random violence.
Designs for the Robotic Future
A Cheetah, an Android Actress, and the (AI) Cockroach
Intimations of the robotic future are often provided by the design of robots presently being assembled in engineering research labs in the USA, Japan, and the European Union. Here, the robotic future is not visualized as fully predictable, determined, or, for that matter, capable of being understood as embodying an overall telic destiny, but, much like the human condition before it, as something that will likely be contingent, multiple and complex. Indeed, if robots of the future–presently being designed on the basis of advanced research in sensor technologies, articulated limbs, and artificial intelligence–provide a glimpse of that robotic future, then it may well be that traditional patterns of human behavior notable for their complex interplay of issues related to power, affectivity, and intelligence may be well on their way to recapitulation at the level of an emerging society of future robots. Consequently, while the ultimate destiny of the robotic future remains unclear, its possible trajectories can already be discerned in the very different objectives of remarkably creative robotic research. Building on traditional differences in approaches to technology in which the United States generally excels in software, Japan in hardware and Europe in wetware (the soft interface among technology, culture, and consciousness), new advances in robotics design inform us, sometimes years in advance, concerning how robots of the future will effectively realize questions of (soft) power, (machine) affectivity, and (artificial) intelligence. For example, consider the following three examples of contemporary robotic designs, none of which fully discloses the future but all of which, taken together, may provide a preliminary glimpse of a newly emergent future in which human-robotic interactions will often turn on questions of power, emotion, and consciousness.
Robots of Power
In the cutting-edge research laboratories of Boston Dynamics, there are brilliant breakthroughs underway (mostly funded by DARPA) in designing robots that embody a tangible sense of power, robots with astonishing capabilities in moving quickly over a variety of unexpected terrains. For example, the Cheetah robot is described as “the fastest legged robot in the world,” with “an articulated back that flexes back and forth on each step, increasing its stride and running speed, much like the animal does.” Its robotic successor, the Wildcat, has already been released from the tethers of Cheetah’s “off-board hydraulic device” and “boom like device to keep it running in the center of the treadmill,” in order to explore potentially dangerous territories on behalf of the US Army. The Cheetah and the Wildcat are perfect robotic signs of forms of power likely to be ascendant in the twenty-first century: remotely controlled, fast, mobile, predatory. That Google has recently purchased Boston Dynamics (possibly as a way of acquiring proprietary rights to its unique sensory software) may indicate that important innovations in software development are themselves always sensitive to the question of power, seeking out, in this case, to ride Google into the robotic future, at least metaphorically, on the “articulated back” and fast legs of Cheetah and Wildcat.
Robots of Affectivity
In Japan, it’s a very different robotic future. Here, unlike the will to power that seems to be so integral to the design of American versions of the (militarized) robotic future–whether terrestrially bound or space-roving robots like Curiosity on Mars–Japanese robots often privilege designs that establish emotional connection with humans. Japanese robots, that is, as the newest of all the “companion species.” Here, focusing on robots specializing in therapeutic purposes (assisting autistic children, augmenting health care, helping the elderly cope with dementia), or for straightforward cultural consumption (androids as pop entertainment icons, robotic media newscasters), the aim has been to cross the uncanny valley in which humans begin to feel “creepy” in the presence of robots that are too human-like in their appearance and behavior. Psychological barriers against crossing the supposed uncanny valley have not stopped one of Japan’s foremost android designers, Professor Hiroshi Ishiguro, who, working in collaboration with Osaka University, has created a series of famous robots, including the android actress Geminoid F, described as “an ultra-realistic humanlike android” (who smiles, frowns, and talks) and, in a perfect act of simulational art, an android copy of himself. While Boston Dynamics’ Cheetah and Wildcat may provide a way of riding power into the future, Geminoid F and Professor Ishiguro’s android simulacrum do precisely the opposite by making the meaning of robots fully proximate to the question of human identity itself. If there can be such fascination with android actresses and AI replicants, that is probably because Japan’s version of the robotic future already anticipates a new future of robot-human affectivity, one in which questions of strangeness and the uncanny are rendered into indispensable dimensions of the new normal of the robotic future.
In this sense, what is brought to the surface by the specifically Japanese realization of the full complexity of robot-human interactions is the very shape and direction of individual and cultural psychology in the future. While Geminoid F, the android actress, will probably never really challenge boundaries between the human and the robotic, since it only represents a direct extension of the theatre of simulation that is mass media today, an android replicant is something very different. When the alter ego finally receives physical embodiment in the form of an android replicant, the question may arise whether in fact an android “selfie” might potentially be perceived in the soon-to-be realized robotic future as the very best self of all.
Robots of (Integrated) Intelligence
While American approaches to designing the robotic future often focus on questions involving the projection of power, and Japanese robotics research explores subtle psycho-technologies involved in robot-human interactions, European versions of the robotic future often privilege the complicated wetware interface involved when swarms of robots intrude into what, from a robot’s perspective, are alien spaces, whether the industrialized workplace, human domestic dwellings, or animal, plant, and insect life. Much like the European Union itself, where the value of integration is the leading social ideal, EU-funded robotic research has quickly attained global leadership in its creative studies of the bifurcations, fractures, and complex fissures involved with the extrusion of robots into the alternative environments of humans, plants, objects, and animals. For example, a press release titled “Robots can influence insects’ behavior” publicizes advances in robotic research under the sign of the European (AI) cockroach:
Scientists have developed robot cockroaches that behave so realistically they can fool the real thing. They were created as part of an EU-funded study for testing theories of collective behavior in insects, using groups of cockroaches as a model. Researchers working as part of the LEURRE project introduced the devices into a group of insects and studied their interactions. A report in the journal Science showed that the cockroaches’ self-organisational patterns of behavior and decision-making could be influenced and controlled by the tiny robots, once they had been socially integrated.
Little larger than a thumbnail, the cube-shaped “insbots” were developed under the EU-funded ‘Future and Emerging Technologies’ (FET) initiative of the Fifth Framework Programme. They were equipped with two motors, wheels, a rechargeable battery, computer processors, a light-sensing camera and an array of infrared proximity sensors. When placed among cockroaches, the machines were able to quickly adapt their behavior by mimicking the creatures’ movements. Coated in pheromones taken from cockroaches, the insbots were able to fool the insects into thinking they were the genuine article.
Coordinator Dr. Jean-Louis Deneubourg from the Université Libre de Bruxelles said, ‘In our project, the autonomous insbots call on specially developed algorithms to react to signals and responses from individual insects.’ The journal Science reported that once the robots were accepted into the group, they began to take part in and influence the group decision-making process. For instance, the darkness-loving creatures followed the insbots towards bright beams of light and congregated there.
The report concludes by noting that the next stage of development for autonomous devices will involve building “groups of artificial systems and animals that will be able to cooperate to solve problems. So the machine is listening to and perceiving what the animals are doing and the animals are in turn perceiving and understanding what the machines are telling them.” In other words, not so much a study of AI robot cockroaches drenched with pheromones (all the better to attract the attention of unsuspecting naturalized cockroaches) but a brilliant futurist probe into the new order of robotic communications, that point where robots learn to communicate with insects and, by extension, with plants, objects, and humans, and those very same plants, objects, and humans undergo a quick robotic evolution of their own, finally learning what it means to “perceive,” “understand,” and perhaps even “influence” the otherwise autonomous actions of robots. In this evolutionary scenario, a fundamental transformation in the order of communications, beginning with insects and then rapidly propagating through other species–plants, animals, and humans–anticipates a future in which the integrationist ideals of the European Union are inscribed, unconsciously, unintentionally, but certainly wholesale, on a newly emergent Robotic Union. The lowly cockroach, then, once coated with pheromones, as perhaps a fateful talisman of a possible future in which autonomous robots learn to “mimic” behavior with such uncanny accuracy that humans, like those “darkness-loving creatures” before them, follow the multiple robotic insbots of the future “towards the light and begin to congregate there.” In this situation, the only question remaining has to do with the meaning and direction of the “light” of the robotic future towards which we are congregating, certainly in the future as much as now.
The Psycho-Ontology of Future Robots
The future of robotics remains unclear, still clouded by essentially transhuman visions projected onto the design of robots, still not willing or able to reveal its ultimate destiny, that point when robotic intelligibility takes command and in doing so begins finally to trace its own trajectories in the electronic sky. Yet, for all that, there is much to be learned from reflecting upon contemporary robotics design–lessons not only about robotic technology and creative engineering but about that strange universe signified by complex encounters between robots and humans that takes place in otherwise relentlessly scientific labs around the world, from Japan and the United States to Europe. If the future will be robotic–at least in key sectors of the economy as well as network infrastructure–it is worth noting that the overall direction of that robotic trajectory already bears discernible traces of human presence, whether in terms of conflicting perspectives on robotic design or what might be prohibited, excluded, and disappeared from our successor species. Not really that long ago, an equally strange new phenomenon–the human “self”–was launched into history on the basis of key ontological conditions, some visible (the complex learning process associated with negotiating the human senses) and some invisible (the order of internalized psychic repression). In the same way, contemporary society witnesses, sometimes in mega-mechanical robotic expression and, at other times, in specifically neurological form, the technological launching of a robotic species that, while it may eventually possess its own unique phylogenetic and ontogenetic properties, will probably always bear the enduring sign of the human.
Not necessarily in any particularly prescriptive way, but in the more enduring sense that the trajectory of the robotic future hinted at in the creative designs of robotics engineering may well culminate in investing future robots with a complex history of internally programmed psychic traumas that will powerfully shape their species-identity, both visibly and invisibly. In this case, contemporary fascination with robots may have its origins in a more general human willingness, if not eagerness, to displace unresolved anxieties, unacknowledged traumas, and, perhaps, grief over the death of the human onto identified prosthetics, namely robots. Could the future of robotics represent, in the end, the ethical ablation of the human condition, including the sinister and the creative, the compassionate and the cruel, in purely prosthetic form? If that is the case, are robots, like humans before them, born owing a gift–the gift of (artificial) life–that they can never repay? In this case, what is the future psycho-ontology of robots: unrelieved resentment directed against their human inventors for a gift of life organized around “compulsory servitude” or the supposed joy of (robotic) existence?
Notes
“Raw: Obama Plays Soccer with Japanese Robot,” video, Youtube.com (April 24, 2014), http://www.youtube.com/watch?v=ag2vk6coBpI (accessed May 15, 2014).
Ian Urbina, “I Flirt and Tweet: Follow Me at #Social Bot,” The New York Times Sunday Review (August 11, 2013), http://www.nytimes.com/2013/08/11/sunday-review/i-flirt-and-tweet-follow-me-at-socialbot.html?_r=0 (accessed April 17, 2014).
“Robotic prison wardens to patrol South Korean prison,” BBC News Online (November 25, 2011), http://www.bbc.com/news/technology-15893772 (accessed April 20, 2014).
For a full description of the Robot Land project, see http://www.robotland.or.kr/n_eng/ (accessed July 28, 2014).
“World’s First Robot Theme Park to Open in South Korea,” CTV News (February 10, 2014), http://www.ctvnews.ca/sci-tech/worl-s-first-robot-theme-park-to-open-in-south-korea-1.1679115 (accessed May 23, 2014).
Keane Ng, “South Korea’s Giant Robot Statues to Dwarf Japan’s,” The Escapist (September 8, 2009), http://www.escapistmagazine.com/news/view/94547-South-Koreas-Giant-Robot-Statue-to-Dwarf-Japans (accessed May 11, 2014).
See https://disneyland.disney.go.com/ca/disney-california-adventure/ (accessed May 11, 2014).
See http://www.robotland.or.kr/n_eng/.
“World Robot Population, 2000-2011,” International Federation of Robotics, http://www.ifr.org/industrial-robots/statistics/ (accessed July 24, 2014).
“Industrial Robot Statistics,” Statistic Brain, http://www.statisticbrain.com/category/technology/page/4/ (accessed July 24, 2014).
William S. Pretzer, “How Products are Made, Vol. 2: Industrial Robots,” http://www.madehow.com/Volume-2/Industrial-Robot.html (accessed May 12, 2014).
Oliver Wainwright, “SociBot: the ‘social robot’ that knows how you feel,” Guardian Online (April 11, 2014), http://www.theguardian.com/artanddesign/2014/apr/11/socibot-the-social-robot-that-knows-how-you-feel (accessed May 15, 2014).
Sigmund Freud, “The Uncanny,” http://web.mit.edu/allanmc/www/freud1.pdf (accessed July 28, 2014).
Ibid., 15.
See http://www.bostondynamics.com/robot_cheetah.html (accessed May 14, 2014).
“Robots can influence insects’ behavior,” European Commission: Research and Innovation, http://ec.europa.eu/research/infocenter/article_en.cfm?id=/research/headlines/news/article_07_12_07_en.html&item=Infocenter&artid=5813 (accessed May 19, 2014).
Arthur Kroker is deeply appreciative of the Social Sciences and Humanities Research Council of Canada for research support that was vital to the completion of the manuscript. His appointment as a Canada Research Chair in Technology, Culture and Theory at the University of Victoria represents a form of long-term intellectual support that has made interdisciplinary projects of this order possible.
We gratefully acknowledge the superb contribution of Shaun Macpherson in editing, book design, and preparing the manuscript for publication in the BlueShift Series. We very much value the artistic work of Jackson 2bears in creating the cover design for the book.
About the Authors
Arthur Kroker is Canada Research Chair in Technology, Culture and Theory and Director of the Pacific Centre for Technology and Culture (PACTAC) at the University of Victoria, BC. His most recent books are Exits to the Posthuman Future (Polity Press, 2014) and Body Drift: Butler, Hayles, Haraway (University of Minnesota Press, 2012). He and Marilouise Kroker also recently co-edited the second edition of Critical Digital Studies: A Reader (University of Toronto Press, 2013). His book publications include, among others, The Will to Technology and the Culture of Nihilism: Heidegger, Nietzsche, Marx (University of Toronto Press, 2004), The Possessed Individual (St. Martin’s Press, 1992), Spasm (St. Martin’s Press, 1993), and, with Michael A. Weinstein, Data Trash: The Theory of the Virtual Class (St. Martin’s Press, 1994).
Marilouise Kroker is Senior Research Scholar at the Pacific Centre for Technology and Culture, University of Victoria, BC. She is the author, with Arthur Kroker, of Hacking the Future (New World Perspectives, 1994). She has co-edited numerous anthologies, including Digital Delirium (1997), Body Invaders (1987), and The Last Sex (1993)–all published by St. Martin’s Press. In addition, Marilouise Kroker has performed and written texts for a series of videos by the video artist/musician Jackson 2bears, including Code Drift, Life by Computer, Slow Suicide, and, most recently with Arthur Kroker, After the Drones.