Myron Krueger is one of the original pioneers of virtual reality and interactive art. Beginning in 1969, Krueger developed the prototypes for what would eventually be called Virtual Reality.
These "responsive environments" responded to the movement and gesture of the viewer through an elaborate system of sensing floors, graphics tablets, and video cameras. Audience members could directly interact with the video projections of others as they interacted within a shared environment. Krueger also pioneered the development of unencumbered, full-body participation in computer-created telecommunication experiences and coined the term "Artificial Reality" in 1973 to describe the ultimate expression of this concept.
Critics of VR often state that virtualization initially seduces the body through its promise of (immersive) escapism but eventually degrades the physical integrity of "meatspace" by retreating into a "false" and "secondary" reality. Others argue that virtual experience is as legitimate as real experience. What is your position?
Your question touches on several issues which I will address separately: seduction of incomplete reality, rejection of the corporeal, escape from reality, and the status of virtual experience.
Virtual reality is incomplete.
It is true that today's virtual reality provides very limited tactile feedback, almost no proprioceptive feedback (as would be provided by walking on a sandy beach or on rough terrain), rare opportunities to smell, and little mobility. However, it is just getting started. Criticizing a new idea because it is not yet fully realized seems unreasonably impatient. On that basis, the caves at Lascaux would never have been painted because we did not have a full palette and could not animate in three dimensions. Give us a few centuries and then revisit this complaint.
Is immersion a rejection of the physical?
Humankind has always inhabited a conceptual universe that is every bit as important to it as the physical world. Language, symbols, myths, beliefs, philosophy, mathematics, scientific theories, organizations, games, sports, and money are completely abstract dimensions but as much a part of our humanity as rocks and trees.
Originally, our conceptual world had no physical or perceptual representation. Later it got worse: reading and writing forced us to immobilize our bodies and to engage only our eyes and brain, rendering the intellect sedentary long before television arrived.
Today, computer graphics allow us to make this kind of information perceptual. Virtual reality goes a step further by engaging the machinery we use to operate in the physical world. Rather than denying the body, virtual reality reconnects it to the life of the mind. I have always pointed to physical participation as the key distinction of virtual reality.
Is immersive VR going to seduce us away from reality?
The flight from reality is not exactly a new issue. My parents were once worried that I was being seduced--by the world of books. Certainly, movies, videos, video games, and the internet have already successfully seduced us from the real world. However, there are no instances of anyone being seduced by current immersion technology. It is not good enough yet.
Is seduction a bad thing?
As we have strived to control Nature, we have also endeavored to compose experience. Music takes us away from the sounds of nature, but it is hardly a pale approximation of the real thing. Whereas everyday experience is jumbled and disconnected, the narrative arts can impose structure and meaning. Since real experience often teaches the wrong lessons, the military is convinced that artificial experience is the best teacher. Only the Taliban would argue that those who have never had artificial experiences are superior to those who have.
Virtual reality not only offers a new dimension in artificial experience, it improves on reality in very important ways. One theme of modern life is the desire to maintain human relationships over distance. From the letters of John and Abigail Adams, to the endless phone calls of my youth, to the electronic chat of today, we strive to feel we are together when we are not. One of my key contributions was the idea that virtual reality provides a context in which we can interact physically as well as verbally with distant companions. Clearly, it is the relationship that is real, the physical ambience at either end is secondary--as it should be, because it is not shared. In the future, our ability to communicate in virtual reality will be so good that we will choose to use it when we are together. It will be better than being there.
One of my first writings included the following sentence: "The result is an artificial reality, a whole new realm of human experience in which the laws of cause and effect are composed by the artist." From the beginning, I cautioned about the "trap of realism" which would limit virtual reality to merely imitating life when it offered the possibility of something completely new. We should celebrate these new realities, explore them, and be confident that the worlds that we create are every bit as valid as the one we started in. Ultimately, reality is whatever we say it is. We should not be intimidated by those who want to restrict us to the missionary position of the meat world when the Kama Sutra of virtual reality awaits. For us to do less than we can is to be less than we are.
Today, many of the new realities being built with developing and speculative technologies are assumed to have exclusively immersive properties. However, all along, you conceived of Virtual Reality (which you then called "Artificial Reality") as being composed primarily of external rather than immersive properties. You also discussed the possibility that extroverted realities could be harmonized with the "real" rather than harvesting it. One could walk in physical space and experience a prototype of what Michael Heim now calls "Exo-Virtuality" (EVR). Are we then seeing a gradual return to your original idea?
In 1970, I considered HMDs (Head Mounted Displays) and rejected them because I thought whatever benefit they provided in visual immersion was offset by the encumbering paraphernalia which I felt would distance participants from the world they were supposed to feel immersed in. When I pondered what the ultimate experience would feel like, I decided that it should be indistinguishable from real experience. It would not be separated from reality by a process of suiting up, wearing gear, and being tethered to a computer by unseen wires. Instead of an alien planet accessed through an airlock, it would be like a doorway to a fantasy world that you could enter simply by attending to it.
Rather than limiting your participation to a single hand-held 3D pointing device, your image would appear in the world and every action of your body could be responded to instantaneously. Whereas the HMD folks thought that 3D scenery was the essence of reality, I felt that the degree of physical involvement was the measure of immersion. Instead of being concerned about the stagecraft, I focussed on the play.
Since I was arguing for convenience, naturalness, and obviousness, my concepts were well positioned for technological advances as they unfolded. Since 99% of applications are 2D and 99% of 3D applications are driven by 2D interfaces, there has been very little immediate interest in HMD immersion systems in the general office environment. Furthermore, given that people spend much of their time communicating with their colleagues, there is little tolerance for a technology that makes users look foolish or cuts them off from their peers.
I postulated an external virtual world in which the computer perceives users visually, listens to what is said, and answers through synthesized speech as well as projected graphics. This "Point and Talk" interface was seen as fanciful by academic researchers, but today, even Bill Gates speaks of "gesture interfaces." Three decades after my first demonstration, this approach is considered mainstream research and is being pursued at most major academic and industrial research labs.
There are two display trends that are fueling the progression towards external realities. The first is the arrival of low cost projectors driven by individual chips. The second is the possibility of inexpensive large scale displays based on organic LEDs that could be built into every available surface. One way or another, we will be surrounded by computer-generated displays that respond to verbal commands and body movements. The virtual will always be with us. The issue will not be escaping to it, but escaping from it.
HMDs are very promising but they will be used only when they provide significant advantages over other display modalities. If HMDs fit into regular eye glasses, do not change the wearers' appearance, and do not cut them off from their colleagues, they will be the least encumbering and cheapest possible displays for mobile applications such as portable computers, cell phones, augmented reality, and entertainment systems.
Providing traditional text and graphic information on see-through eyeglass displays to people as they move around their daily lives will completely change our relationship to information. Augmented reality applications are certainly interesting but will take longer to develop and deploy. The real world can be annotated with driving instructions, virtual billboards, the name of the individual you are speaking to, and the name of the plant or bug you are looking at as you walk through the woods. People you are telecommunicating with will appear as 3D beings in your real space.
Thus, virtuality will be applied to the real world that skeptics are afraid we are withdrawing from. It will make that world much richer with information. Being without this virtual capability will be like taking off your eyeglasses if you are nearsighted--possible but not comfortable.
Has "immersion" become synonymous with "evolution"?
Certainly, virtual reality is part of an evolutionary process that will profoundly blur the boundaries between humans and machines. Biotechnology will do the same because it allows us to understand ourselves as mechanistic expressions of coded (and therefore editable) information. But let me answer in terms of another evolutionary process present in virtual reality at this moment.
I started my graduate work with an interest in Artificial Intelligence. For some reason, AI has been out of favor for decades and no one speaks with the optimism that characterized early efforts. This is puzzling because we are starting to confront simulated characters that can perceive us, understand speech, and respond with speech of their own. In their early stages we will be able to quibble about whether these artificial entities are really intelligent. However in time, we will be no more likely to administer the Turing test to them than we are to scrutinize most of the people we interact with. If they had bodies and body language, we would soon cease to think about them.
But convincing robots are not on the immediate horizon. We are still stuck with the same mechanical vocabulary that we have been using for the last century. Polymer muscles are being experimented with but we are at the beginning of their development.
In the meantime, artificial intelligence can evolve by building on the concept of a microworld, one of its most successful strategies. In this technique, the knowledge required to reason about a limited domain is built into a program and sure enough the program can act like it understands what is going on inside that domain, even though its ignorance of everything outside that domain is total. Most computer applications resemble microworlds and both immersive and external versions of virtual reality certainly do. VR allows people to enter the microworld and enables the computer to perceive their body movements in the context of that world. By finessing the problem of visual perception of the real world, the AI program can focus on the relationships that exist among human participants, synthetic avatars, and the objects and spaces of the virtual world. In this context, real incremental progress in AI can be made.
In fact, the office and the home are also microworlds, inhabited by a small number of people, having a limited number of tasks that the computer can perform for them, and engaging in a limited number of stereotypical activities that could be understood with current or immediately foreseeable computer vision technology. That the cognitive and language skills are minimal does not matter; we are quite adept at communicating with children, with immigrants whose English skills are poor, and with people whose roles allow us to interact with them in restricted ways. This development is inevitable and its result will be an artificial entity that is considered as much a part of the household as a dog or a maid. Once the foot is in the door, there will be a continuing appetite for ever greater intelligence and personality until the result rivals and exceeds our own.
Do you still see your original "Artificial Reality" (AR) installation as "Artificial"? In the near future, what will become the definition of "real"? Is it safe to simply call the new technologically driven realities "Transposed Realities" (TR)?
I saw virtual reality as a metaphor for what was happening throughout our society. My term "artificial reality" referred to this metaphor as much as to any particular means of implementing it. I deliberately made the term provocative and liked the fact that it was an oxymoron.
When I started, the term was more loaded than it is now. Then, the artistic and intellectual default position was outright hostility towards technology and "dehumanizing" was assumed to always precede "technology."
My own bias is exactly the opposite. I view technology as the essence of our humanity. An empty hand signals that our anatomy is incomplete until we pick up a tool. In addition, I consider technology an inevitable consequence of the laws of physics and therefore as natural as the birds and the bees. In fact, when I look at plants and animals, I see incredibly sophisticated technology, not something spiritually different from our own creations. Rather than thinking of myself as inventing technology, I have always believed that it was already there and that I merely discovered it. Rebutting C.P. Snow's idea of two cultures was one of the sources of the passion that I put into my early work. I felt that virtual reality and interactive art could help heal the rift. The spectacular increase in the number of artists now using technology is evidence that this is happening.
Have you given much thought to the ways in which your original "Artificial Reality" installation has influenced contemporary multimedia installation practices? Are there current artists whose work references your early experiments?
I was one of the first few artists to commit themselves to computer-based interactivity on a long-term basis. I was the first to write extensively about the medium in my 1974 dissertation which was published as Artificial Reality in 1983. I not only laid out the medium but also described many ideas for interactive pieces that I wanted to create. Since I anticipated much of what has followed in my writings and in my work, it is natural that I see what seem to be obvious influences everywhere I look.
When I started, there was no concept of an interactive medium. I reasoned that to permit the participant to move around and to dominate his senses, I needed an authoritative but highly composable display. My decision was to use a video projector to display computer images, which I do not believe had been done before and certainly not for interactive experiences. When Dan Sandin visited my first video projection installation in 1970, we both saw the obvious desirability of surrounding the participant with projected images--hence the CAVE.
To maximize interactivity, I reasoned that my work also needed to perceive participants' movements. I built several sensory floors and started on a fifteen year development of specialized computers that would allow me to analyze participants' video images instantaneously. Combining live video images and computer graphics was another novel element that offered a rich set of interactive relationships. Placing geographically remote individuals in the same visual telecommunication space has provided another convention that has proved useful. These decisions have proved powerful and today there are a number of artists and scientists working in identical frameworks or variations on these themes.
Have you visited any OnLive worlds as an avatar? ActiveWorlds.com, for example, is a constantly growing and organically evolving cybergeography. If your early prototype for virtual worlds kept on growing in size since its inception in the 60s and the current idea of VR never came to fruition, what kind of physical and political infrastructure would you imagine having to be in place in order to economically, socially and environmentally sustain the potentially limitless geographical expansion of AR (versus VR) into the limited physical environment?
Actually, I have not done that much with Internet-based interactivity. I live in an electronic ghetto with only a 56Kb connection which is too slow for the kinds of interactivity that I like to work with. In fact, I suspect that if I had DSL or cable access, I would still find the lag between my input and the system's response too slow for my taste. Finally, I gave up interacting with traditional computer interfaces long ago. I find video game controls stultifying and am shocked that the players' input vocabulary has not improved much since Atari.
Actually, the infrastructure required to scale up VIDEOPLACE is not that different from what is needed for other forms of virtual reality. High bandwidth, infinite computing power, and low cost wireless technology would all be useful. However, guaranteed bandwidth and low latency are actually more important than high bandwidth for good interactivity. Today, packet-switched voice over the internet is just starting to be deployed. It will take years for a generalized packet-switched network to evolve in which each kind of data is given a different priority and therefore a different level of service, allowing completely new kinds of services to be easily cobbled together.
Today's assumption that everything will be based on the internet places another kind of barrier in the way. Like the PC, the internet is an expression of the software community's lust for overhead--for features and generality over performance. For virtual worlds based on centralized web sites accessed through PCs, it is hard to imagine a performance increase that will not be completely consumed in software complexity before the virtual reality simulations are run. Thus, unacceptable lags seem inevitable.
Human interaction is like flying. It is not enough to taxi down the runway, you have to do it fast enough to take off. In general, computer scientists have exempted themselves from speed constraints. It is as if aeronautical engineers did not think gravity was interesting.
Oddly, if ISDN had been deployed with enthusiasm by the phone companies ten years ago, it would have been possible to do high speed interactive worlds with speech as well as gesture communication. This service was not promoted aggressively for fear it would cannibalize T1 sales.
The common wisdom is that the internet and telecom worlds collapsed because too much money was spent on them. In fact, not enough was spent to complete the job. Now, it is as if an asteroid hit the earth and killed the mammals instead of the dinosaurs. The surviving companies are not the ones I would have chosen to pin my hopes for the future on.
The economic promise of a new telecom architecture is as great as that of the interstate highway system and the internet itself. Therefore, it is worth reconsidering our aversion to government interference, getting standards set, and building out the infrastructure as fast as possible, perhaps with some kind of bonding. My fear is that none of the myriad applications that are guaranteed to arise yet appears large enough to give the large firms confidence that there is a target big enough for them to survive even if they hit it.
It is worth mentioning that the infrastructure required to support the ultimate integration of the real and virtual worlds requires monitoring human location, direction-of-gaze, physical action, and speech on a moment-by-moment basis. Such omnipresent observation makes Big Brother seem absolutely negligent. That such a technology could be abused is obvious, but I am old enough to be living in the future I was warned about and have faith that this is as bad as it gets. Government and business will do some things that violate our privacy--but it won't be as bad as living in a small town a hundred years ago.
I have read that the first single-molecule transistor has been developed by Bell Labs. From hearing this news, I am reminded of a prediction by Wired that if we progress according to plan, we will have the first commercial nanoassembler on the market by 2004. Given the current rate of technological benchmarks and breakthroughs, do you think that such a prediction by Wired may still be accurate? And if so, when do you think such advances would be effectively put to use towards existing in ER (Enhanced Reality) frameworks?
I am not a giant in nanotechnology, but I am confident that nanoassemblers are not going to be widely available in 2004. Very preliminary self-assembly demonstrations have been done, but a generalized nanoassembler is a decade and I suspect more like two decades away. In the meantime, nanofabrication will be just one manufacturing process that is used for the functions which it can perform and its results will then be integrated with parts that are fabricated by more traditional means. We do not have a generalized macroscale assembler and I do not see why the general problem would be easier at the nanoscale.
Art and the Nanoscale
The term nanoassembler suggests an offline manufacturing device which is not of particular interest to the kind of art I think about. At the same time, I have long contemplated the possibility of a person interacting with organisms and objects that exist on the micro- and nanoscales. Specifically, I suggested interacting with a bacterium or with individual atoms. There is much that could be done in this arena, but it is not enough to demonstrate an interesting idea. It is also necessary to produce an aesthetically pleasing interactive experience which both the human and the bacterium enjoy.
Enormous numbers of self-assembling automata could in principle create palpable three-dimensional objects. Thus, the distinction between the display and the object represented evaporates. Certainly, I think that such a development would be a very exciting virtualization of the physical world. At the same time, I do not think that such a technology could operate rapidly enough to provide the kinds of interactive experiences that I seek to create.
Given this, what do you think will be the first commercial applications of the assembler? Do you envision any immediate artistic/cultural/aesthetic applications of the assembler once it is on the market? Might this be the key to producing a viable "Artificial Reality"?
The ability to make things ever smaller will lead to many culture-defining developments. Today, we are often engaged in conversation with people in other places, even as we offend those around us. It is easy to project miniaturized technology that will allow us to subvocalize instead of speak and later to simply think what we want to say and an implanted RF device will transmit it. A host of other implanted appliances can be expected, powered by the calories we consume or by burning the excess cholesterol in our blood.
We can have smart immune systems that distinguish friend from foe and instantly generate antibodies to combat the latest bioterrorist threat. A DNA nanoassembler will allow us to hijack the behaviors of ants to put them to work ridding our crops of pests one by one. It will allow us to incorporate new mental processing capabilities in our brains as needed and then let those cells be reprogrammed and reconnected for another purpose later. Evolution could be moment to moment rather than generation to generation.
Myron Krueger earned a BA in liberal arts from Dartmouth College and MS and PhD degrees from the University of Wisconsin. His 1974 doctoral dissertation defined human-machine interaction as an art form. It was later published as Artificial Reality (Addison-Wesley, 1983), and significantly updated as Artificial Reality II (Addison-Wesley, 1991).
Krueger's installations have been funded by both the National Endowment for the Arts and the National Science Foundation. In 1990, he received the first Golden NICA from Prix Ars Electronica for interactive computer art. He has also received awards from the scientific community for his work.
Krueger's work has been widely shown throughout the world at art museums and galleries and scientific conferences. It has been noted in many publications including Art News, Newsweek, Stern, Insight, LIFE, OMNI, the New York Times, Investment Business Daily, and the Wall Street Journal. VIDEOPLACE has also been the focus of reports on CNN, CBS Evening News, Nightwatch, Beyond 2000, and Smithsonian World.