Finally. After all this time we've spent speculating about the boring, antiquated Oculus Rift headset, Microsoft this week demoed a new product that promises an actual step forward in melding virtual-reality computing into everyday living.
CNET’s report says: “Microsoft wants us to imagine a world without screens, where information merely floats in front of you.”
This, folks — this is the Kool-Aid I’m chugging.
Just a response to a paper I've read on human-computer interface design — one that hit me where I live, or used to.
“Soylent: A Word Processor with a Crowd Inside” describes a software project that augments the dreaded Microsoft Word with some crowd-sourced editing assistance. “Writing is difficult,” the authors observe — yeah, welcome to my world — before adding: “When we need help with complex cognition and manipulation tasks, we often turn to other people” (1). Sometimes we have support systems in place for this assistance, but sometimes not. The Soylent project crafts just such support for any writer-user, farming out editing, proofreading, and formatting tasks to Amazon Mechanical Turk workers.
Need someone to read over your paper — because you need suggestions as to what can be cut, because you want to make sure all the proverbial i’s are dotted and t’s crossed, because if you comb through your citations one more time your head will explode — but maybe you’ve called in that favor already or don’t want to risk bothering a colleague? Launch Soylent, which hires its invisible labor force to handle the work for you, perhaps in the dead of a deadline night.
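If you're curious what farming a proofreading pass out to Mechanical Turk actually looks like from code, here's a minimal, hypothetical sketch using Amazon's boto3 SDK to post a single micro-task. The title, reward, and paragraph text are invented for illustration; Soylent's real pipeline (its Find-Fix-Verify pattern) splits, checks, and merges work across many such tasks rather than posting just one.

```python
# A minimal, hypothetical sketch of posting one proofreading micro-task
# ("HIT") to Amazon Mechanical Turk with boto3. Soylent's real pipeline
# (Find-Fix-Verify) decomposes and cross-checks work across many such HITs;
# this only shows the basic mechanics of hiring the invisible labor force.
import boto3

# Sandbox endpoint so experiments don't spend real money.
mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

paragraph = "Writing is dificult, especialy in the dead of a deadline night."

question_xml = f"""
<HTMLQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2011-11-11/HTMLQuestion.xsd">
  <HTMLContent><![CDATA[
    <!DOCTYPE html>
    <html><body>
      <form action="https://workersandbox.mturk.com/mturk/externalSubmit" method="post">
        <p>Fix any spelling or grammar errors in this paragraph:</p>
        <blockquote>{paragraph}</blockquote>
        <textarea name="fixedText" rows="4" cols="80"></textarea>
        <input type="hidden" name="assignmentId" value="">
        <input type="submit" value="Submit">
      </form>
      <script>
        // MTurk supplies assignmentId in the query string; copy it into the form.
        document.querySelector('[name=assignmentId]').value =
          new URLSearchParams(window.location.search).get('assignmentId');
      </script>
    </body></html>
  ]]></HTMLContent>
  <FrameHeight>450</FrameHeight>
</HTMLQuestion>
"""

hit = mturk.create_hit(
    Title="Proofread one short paragraph",
    Description="Correct spelling and grammar errors in a paragraph of prose.",
    Keywords="proofreading, editing, writing",
    Reward="0.10",                    # USD, passed as a string
    MaxAssignments=3,                 # ask three workers, then reconcile answers
    LifetimeInSeconds=3600,           # task stays available for an hour
    AssignmentDurationInSeconds=600,  # each worker gets ten minutes
    Question=question_xml,
)

print("Posted HIT:", hit["HIT"]["HITId"])
```

The three-assignment redundancy here is a crude stand-in for the quality control Soylent gets from its verify stage: individual crowd workers are noisy, so you ask several and reconcile their answers.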
What struck me about this project is how it attempts to replicate something electronically that has existed professionally for more than a century: the newsroom.
"World Is Mine," sure, but Hatsune Miku is still working hard for the money in the United States. The Japanese Vocaloid sensation has enjoyed her widest exposure this year stateside, from opening the first leg of Lady Gaga's summer tour to her recent appearance as the musical guest on "The Late Show with David Letterman." Is it working to expand her audience here?
That's no moon. That's the Light Stage X, a complex and utterly cool device created by USC's Paul Debevec and his teams to capture image data from human faces and bodies.
This week I enjoyed a tour of USC's Institute for Creative Technologies, featuring demos of several of its virtual-reality projects, including Bravemind (VR assistance in PTSD therapy), ELITE (VR training for counselors), Gunslinger (a wild simulation of a Western cowpoke showdown that artfully brings narrative into real-time VR interaction), and one of the Graphics Lab's Light Stages.
In my recent research into virtual performance simulations, I've ended conference presentations, my upcoming book chapter, and my thesis with some measured forecasts of the cool technology likely just over the horizon — in digital re/animation, projection systems, and artificial intelligence — all the while keeping in mind that technology forecasts tend to become outdated if not entirely quaint within hours of utterance. This week, however, brought very exciting news that sent me gleefully diving into some revisions.
Princess Leias and 2.0Pacs, I give you: the Ostendo Quantum Photonic Imager.
If your Roomba chews up the fringe on your favorite Oriental rug, is it OK to punch it? If an algorithm recommends a movie to you, and the movie turns out to be crapola, to whom do you direct your online flames? If a drone independently computes a tactical course correction and flies into the wrong airspace, igniting international tensions, would war be averted if our rep stood in the UN assembly and assured everyone, “The drone gravely regrets its error”?
Machine ethics, robot rights — these topics keep popping up in my world. In the last year, I’ve attended three talks addressing various shades of the subject.
I'm Thomas Conner, communication researcher and culture journalist.