When researching and writing about (or designing and producing) hologram simulations, there’s always an initial coming-to-terms with the terms.
When I analyzed the discourses of simulation designers, nearly all of them made some attempt to square and/or pare the language of their field. Designers and artists usually opened interviews with this, eager to make sure I understood that while we call these things “holograms” they’re not actual holography. “The words ‘hologram’ and ‘3D,’ like the word ‘love,’ are some of the most abused words in the industry,” one commercial developer told me. Michel Lemieux at Canada’s 4D Art echoed a common refrain: “A lot of people call it holography. At the beginning, 20 years ago, I was kind of always saying, ‘No, no, it’s not holography.’ And then I said to myself, ‘You know, if you want to call it holography, there’s no problem.’” In my own talks and presentations, I’ve let go of the constant scare-quotes. The Tupac “hologram” has graduated to just being a hologram.

It gets stickier when we begin parsing the myriad and important differences between virtual reality (VR) and augmented reality (AR). Many of us think we have an understanding of both, largely as a result of exposure to special effects in movies and TV — where the concept of a hologram underwent its most radical evolution, from a mere technologically produced semi-static 3D image to a computer-projected, real-time, fully embodied and interactive communication medium — but it’s AR that people usually grasp more than VR. They’ll say “virtual reality,” but they’ll describe Princess Leia’s message, the gestural digital displays in “Minority Report,” or the digital doctor on “Star Trek: Voyager.” None of these is VR, in which the user dons cumbersome gear to transport her presence into a world inside a machine (think William Gibson’s cyberspace or jacking into “The Matrix”); they are AR, which overlays digital information onto existing physical space. Yet both VR and AR refer to technologies requiring the user to wear some sort of eyewear — the physical-reality-blinding goggles of the Oculus Rift (VR) or the physical-reality-enhancing eye-shield of the HoloLens (AR).

Volumetric holograms — fully three-dimensional, projected digital imagery occupying real space — remain a “Holy Grail” (see Poon 2006, xiii) in tech development, and we may need a new term with which to label that experience. One developer just coined one.
Just a slightly nifty post from the “nothing new under the sun” file: All that fuss over the (never available) Google Glass, all the hype over the (still unavailable) Oculus Rift, all my excited bewilderment over the (only demoed) Microsoft HoloLens — yet these head-mounted augmented-reality displays have been on drawing boards since at least the ’60s.
Perfume is a Japanese techno-pop group, a trio of women cranked out of a Hiroshima idol-singer mill nearly 15 years ago; last week they finally made their SXSW debut, after touring the United States for the first time only a year ago. Their performance — an eye-popping, digitally mashed-up overload of projection-mapped spectacle — offers exciting new ways to consider the negotiations between digital and live bodies on stage.
SXSW has supported talent from Japan for most of its run, despite often pigeonholing it in the single Japan Nite showcase — which observed its 20th anniversary this year (I had the fortune of being present for the first back in ’96, featuring the great Lolita No. 18). But as bands from Japan have upped their cultural cachet here, bigger acts have spilled over into the festival’s other venues and showcases. Perfume’s set last week — sandwiched between the end of the festival’s Interactive portion and the start of its bedrock music week — certainly turned some heads.

Finally. After all this time speculating about the boring, antiquated Oculus Rift headset, Microsoft this week demoed a new product that promises an actual step forward in melding virtual-reality computing into everyday living.
CNET’s report says: “Microsoft wants us to imagine a world without screens, where information merely floats in front of you.” This, folks — this is the Kool-Aid I’m chugging.

Just a response to a paper I’ve read related to human-computer interface design — one that hit me where I live, or used to.
“Soylent: A Word Processor with a Crowd Inside” describes a software project that amends the dreaded Microsoft Word with some crowd-sourced editing assistance. “Writing is difficult,” the authors observe — yeah, welcome to my world — before adding: “When we need help with complex cognition and manipulation tasks, we often turn to other people” (1). Sometimes we have support systems in place for this assistance, but sometimes not. The Soylent project crafts just such support for any writer-user, utilizing Mechanical Turk workers to farm out editing, proofreading, and formatting tasks to others. Need someone to read over your paper — because you need suggestions as to what can be cut, because you want to make sure all the proverbial i’s are dotted and t’s crossed, because if you comb through your citations one more time your head will explode — but maybe you’ve called in that favor already or don’t want to risk bothering a colleague? Launch Soylent, which hires its invisible labor force to handle the work for you, perhaps in the dead of a deadline night. What struck me about this project is how it attempts to replicate something electronically that has existed professionally for more than a century: the newsroom. (For the curious, a rough sketch of what one of those Mechanical Turk requests might look like follows below.)

“World Is Mine,” sure, but Hatsune Miku is still working hard for the money in the United States. The Japanese Vocaloid sensation has enjoyed her widest exposure this year stateside, from opening the first leg of Lady Gaga’s summer tour to her recent appearance as the musical guest on “The Late Show with David Letterman.” Is it working to expand her audience here?
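Back to Soylent for a moment. To make the crowd-dispatch idea concrete, here’s a minimal, hypothetical sketch of posting a single proofreading task to Amazon Mechanical Turk via the boto3 client. This is my own illustration, not the paper’s code — the task title, reward, and question wording are all assumptions.

```python
# Hypothetical sketch: post one paragraph to Mechanical Turk for tightening.
# Not Soylent's actual code; title, reward, and wording are my assumptions.
import boto3

mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    # Sandbox endpoint, so experimenting doesn't spend real money.
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

paragraph = "Writing is difficult. When we need help, we often turn to other people."

# MTurk tasks are described in its QuestionForm XML schema.
# (Real code would XML-escape the paragraph before interpolating it.)
question_xml = f"""<?xml version="1.0" encoding="UTF-8"?>
<QuestionForm xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2005-10-01/QuestionForm.xsd">
  <Question>
    <QuestionIdentifier>tighten</QuestionIdentifier>
    <QuestionContent>
      <Text>Shorten this paragraph without losing its meaning: {paragraph}</Text>
    </QuestionContent>
    <AnswerSpecification><FreeTextAnswer/></AnswerSpecification>
  </Question>
</QuestionForm>"""

hit = mturk.create_hit(
    Title="Proofread and tighten a paragraph",
    Description="Suggest cuts and fixes for one paragraph of prose.",
    Reward="0.25",                    # US dollars, passed as a string
    MaxAssignments=3,                 # several workers, so answers can be cross-checked
    LifetimeInSeconds=3600,           # task stays available for an hour
    AssignmentDurationInSeconds=600,  # each worker gets ten minutes
    Question=question_xml,
)
print("HIT posted:", hit["HIT"]["HITId"])
```

Soylent’s actual pipeline is smarter than a one-off task like this — its Find-Fix-Verify pattern splits each edit across multiple workers so that lazy or overzealous answers get caught — but the raw economics above (a small bounty per paragraph, several redundant workers) are the basic ingredients.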
That's no moon. That's the Light Stage X, a complex and utterly cool device created by USC's Paul Debevec and his teams to capture image data from human faces and bodies.
This week I enjoyed a tour of USC's Institute for Creative Technologies, featuring demos of several of their virtual-reality projects, including Bravemind (VR assistance in PTSD therapy), ELITE (VR training for counselors), Gunslinger (a wild simulation of a Western cowpoke showdown that artfully brings narrative into real-time VR interaction), and one of the Graphics Lab's Light Stages.

In my recent research into virtual performance simulations, I've ended conference presentations, my upcoming book chapter, and my thesis with some measured forecasts of the cool technology likely just over the horizon — in digital re/animation, projection systems, and artificial intelligence — all the while keeping in mind that technology forecasts tend to become outdated if not entirely quaint within hours of utterance. This week, however, brought very exciting news that sent me gleefully diving into some revisions.
Princess Leias and 2.0Pacs, I give you: the Ostendo Quantum Photonic Imager.

If your Roomba chews up the fringe on your favorite Oriental rug, is it OK to punch it? If an algorithm recommends a movie to you, and the movie turns out to be crapola, to whom do you direct your online flames? If a drone independently computes a tactical course correction and flies into the wrong airspace, igniting international tensions, would war be averted if our rep stood in the UN assembly and assured everyone, “The drone gravely regrets its error”?
Machine ethics, robot rights — these topics keep popping up in my world. In the last year, I’ve attended three talks addressing various shades of the subject.