That's no moon. That's the Light Stage X, a complex and utterly cool device created by USC's Paul Debevec and his teams to capture image data from human faces and bodies.

This week I enjoyed a tour of USC's Institute for Creative Technologies, featuring demos of several of its virtual-reality projects, including Bravemind (VR assistance in PTSD therapy), ELITE (VR training for counselors), Gunslinger (a wild simulation of a Western cowpoke showdown that artfully brings narrative into real-time VR interaction), and one of the Graphics Lab's Light Stages. The Light Stage was my raison d'être: having written about the things in my master's thesis on virtual performance, I was eager to see one with my own eyes.

A Light Stage is a spherical cage fitted with mounted cameras and LED lighting rigs. When a human being steps into the center, as I got to do yesterday, volunteering with teacher's-pet swiftness, the LEDs illuminate the skin from every possible direction in various patterns and colors, capturing pore-level detail (I am not ready for my subsurface close-up, ahem). The resulting digital data is used to re-create the person's likeness for animation, most often in movies ("Avatar," "...Benjamin Button," "Superman Returns," "Ender's Game," "Maleficent," on and on).

Standing in the center of a Light Stage is kind of intense. The lights, which the lab tech assured me were at only five-percent strength, are incredibly bright. It's like being inside a disco ball during the climax of a slow dance. The lights pulse, they change colors, they flash around and around. It's somewhere between psychedelic and epileptic. Here's a view from inside:

When I interviewed Debevec last year for the thesis (he has also been behind several groundbreaking virtual-human projects, such as Digital Emily and Digital Ira), I wanted to get at some issues surrounding the Uncanny Valley, of which he claimed to have a "refined idea." The familiarity response to artificial lifeforms (or animations), he said, is more a matter of degrees; animators don't have to be 100-percent accurate in their creations. "I’d say you need to be — if you’re past 85 percent or past 90 percent on every aspect — those last few percent to a hundred can be the hardest, of course ... then you’ve got something that people can just be cool with, and belief will be suspended, and you’re golden" (personal communication, May 22, 2013).

Debevec made an especially important point about the relationship between human and virtual performers: at least for now, the latter still require the former. A digital visual effect is still merely a different perspective on a human performance. The bits just filter the atoms, allowing for adjustment of expression and context in the same basic ways as previous film transformations, from Vaseline lenses to green screens. No AI drives the virtual form yet, though the scope of the actual performance (not just the technical production of film and video) now must include animators and programmers, especially when it comes to awards. Digital animators scan faces for 3D computer models, capture motion for Hatsune Miku dances, and record voices or combinations of voices to add authentic-sounding audio. But each process is still merely copying or translating a human action. The machines still rely on a human to produce a successful digital performance.

Debevec: I think the important distinction to draw right now is we’re replacing the visuals; we’re not actually replacing any of the acting.
A performance that’s come from a digital actor — either it’s a successful translation of a performance from a human actor, like Andy Serkis becoming Gollum or King Kong or Caesar, or it’s the result of great animators animating them somewhat laboriously like Woody and Buzz in “Toy Story.” Of course, that’s often done looking at video of what the real actors did and then you have this reference ... So all the acting and performing is still done by real people even if the standing in front of a camera and reflecting light part is done by the digital actors (personal communication, May 22, 2013).

Because, seriously, when is Andy Serkis going to get his Oscar?
ICT is chock-full of interesting VR and related research, much of it funded by military sources. Its findings range from patients being more open to discussing health issues with virtual doctors to yet another step toward realizing 3D, Princess Leia-like projection.
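A footnote for the technically curious, circling back to those pulsing patterns: in published work from Debevec's group on spherical gradient illumination, the ratios between gradient-lit and uniformly lit photographs of a face yield a surface normal at every pixel, which is part of what makes pore-level reconstruction possible. Here's a minimal sketch of that arithmetic in Python, assuming an idealized Lambertian (matte) surface; the function and variable names are mine, for illustration, and this is emphatically not ICT's actual pipeline.

```python
import numpy as np

def normals_from_gradient_images(full, grad_x, grad_y, grad_z, eps=1e-6):
    """Estimate per-pixel surface normals from spherical gradient illumination.

    Assumes a Lambertian surface lit by four LED patterns: uniform ("full-on")
    plus linear gradients along x, y, z. Under the pattern P_i(w) = (w_i + 1)/2,
    the ratio of the gradient image to the full-on image is (n_i + 1)/2, so
    each normal component is n_i = 2 * I_i / I_full - 1.

    All inputs are float arrays of shape (H, W) holding linear (not
    gamma-encoded) intensities.
    """
    denom = np.maximum(full, eps)  # avoid division by zero in shadowed pixels
    n = np.stack([2.0 * grad_x / denom - 1.0,
                  2.0 * grad_y / denom - 1.0,
                  2.0 * grad_z / denom - 1.0], axis=-1)
    # Renormalize each pixel's vector to unit length.
    n /= np.maximum(np.linalg.norm(n, axis=-1, keepdims=True), eps)
    return n

# Toy usage with synthetic frames standing in for the four captured photos:
if __name__ == "__main__":
    h, w = 4, 4
    full = np.ones((h, w))
    gx = np.full((h, w), 0.5)  # flat surface facing the camera: n = (0, 0, 1)
    gy = np.full((h, w), 0.5)
    gz = np.ones((h, w))
    print(normals_from_gradient_images(full, gx, gy, gz)[0, 0])  # -> [0. 0. 1.]
```

(The published technique adds polarized variants of these patterns to separate specular from diffuse reflection, but the ratio trick above is the core idea.)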