Tuesday, April 2, 2013

Creating a Virtual Odor Source: Fluid Dynamics Meets the Fun House


Smell-O-Vision is the hardy perennial of sensory technology. Engineers and artists keep returning to the idea, tweaking it here and there: a virtual reality helmet, a small device for beneath movie theater seats, and so on. Smell-O-Vision is an idea as intrinsically cool as it is goofy. It gets more than its share of giggles: for example, yesterday’s April Fools’ debut of the Google Nose beta. (OK, the “safe search” feature was mildly amusing . . .)

For some reason, most advances in smell-o-vision technology originate in Japan. At the recent IEEE Virtual Reality conference in Orlando, researchers Haruka Matsukura, Tatsuhiro Yoneda, and Hiroshi Ishida from Tokyo University of Agriculture and Technology demonstrated a smell-capable video display. People have jury-rigged such devices before; the new wrinkle here is that the scent seems to emerge from a specific spot on the screen.

How do they do it? With computational fluid dynamics: they aim multiple converging airstreams so that they collide and produce an outward flow from a specific location. As the researchers describe it:

“The proposed system has four fans on the four corners of the screen. The airflows that are generated by these fans collide multiple times to create an airflow that is directed towards the user from a certain position on the screen. By introducing odor vapor into the airflows, the odor distribution is as if an odor source had been placed onto the screen. The generated odor distribution leads the user to perceive the odor as emanating from a specific region of the screen.”
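The team’s actual airflow design came out of full three-dimensional CFD simulations, but the core idea, colliding jets creating an apparent source where they cancel, can be sketched with a much cruder model. Here is a toy Python calculation (my own illustration, not the researchers’ code): each corner fan is idealized as a two-dimensional potential-flow source in the screen plane, and we solve for relative fan strengths that make the in-plane flows cancel at a chosen point. Where the jets cancel in the plane, the colliding air has nowhere to go but outward, toward the viewer. The unit-square fan layout and the potential-flow idealization are assumptions for illustration only.

    import numpy as np

    # Fan positions at the four corners of a unit screen (screen-plane coordinates).
    FANS = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])

    def fan_strengths(target):
        """Solve (least squares) for fan strengths Q_i so the superposed
        in-plane velocity  sum_i Q_i * (target - fan_i)/|target - fan_i|^2
        cancels at `target`, i.e. the four jets collide there."""
        d = target - FANS                 # vector from each fan to the target
        r2 = np.sum(d * d, axis=1)        # squared distances
        A = (d / r2[:, None]).T           # 2x4 matrix of per-fan flow directions
        A = np.vstack([A, np.ones(4)])    # extra row: strengths sum to 1
        b = np.array([0.0, 0.0, 1.0])     # zero net in-plane flow, unit total
        Q, *_ = np.linalg.lstsq(A, b, rcond=None)
        return Q

    def velocity(p, Q):
        """Superposed in-plane velocity at point p given fan strengths Q."""
        d = p - FANS
        r2 = np.sum(d * d, axis=1)
        return np.sum(Q[:, None] * d / r2[:, None], axis=0)

    target = np.array([0.3, 0.7])   # where the virtual odor source should appear
    Q = fan_strengths(target)
    print("relative fan strengths:", np.round(Q, 3))
    print("leftover in-plane velocity:", velocity(target, Q))  # ~ (0, 0)

Slide the target point around and the solver re-balances the four fans, which is essentially the knob that lets the apparent odor source wander across the screen. The toy has obvious limits: it ignores turbulence, the third dimension, and odor transport, and for some target positions a least-squares strength can come out negative, meaning this crude model would need a fan to run in reverse.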
That’s sorta cool, even if it doesn’t live up to the ridiculously overblown headline at ExtremeTech: “Japanese smell-o-vision TV releases scents with per-pixel accuracy.”

The demonstration is a proof of principle that opens up all sorts of interesting applications quite apart from video screens. I could see colliding airstreams being used to create pop-up smells in a walk-through environment. Imagine an olfactory fun house . . .
