Friday, May 02, 2008
More Than "Just" The World's Oldest Cyborg
I recently noticed that Steve Mann, formerly of MIT and now a professor at the University of Toronto, has long been actively involved in some of the things I am interested in. Not just wearable computers, which I am waiting impatiently for, but image processing and interpretation. He has developed a program - Video Orbits - for stitching together video into stills. One property of note: if the video zooms, then the image formed in that region has higher resolution. If the exposure changes, then the dynamic range of the image increases. It's like layering data upon data.
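To get a feel for the exposure-layering idea, here's a toy sketch in Python. This is not Mann's Video Orbits code - just a simple per-pixel weighted blend of two already-aligned exposures, which illustrates how differently exposed frames of the same scene can be combined into one image with more usable dynamic range.

```python
# Toy illustration of "layering data upon data": fuse two registered
# grayscale exposures of the same scene. Pixels near mid-gray are the
# best exposed, so they get the most weight in the blend.
import numpy as np

def fuse_exposures(dark, bright):
    """Blend two registered grayscale frames (float arrays in [0, 1])."""
    def weight(img):
        # Gaussian-style weight peaking at mid-gray (0.5)
        return np.exp(-((img - 0.5) ** 2) / (2 * 0.2 ** 2))

    w_dark, w_bright = weight(dark), weight(bright)
    total = w_dark + w_bright + 1e-8  # avoid division by zero
    return (dark * w_dark + bright * w_bright) / total

# Two simulated exposures of the same gradient scene:
scene = np.linspace(0.0, 1.0, 256)
dark = np.clip(scene * 0.5, 0.0, 1.0)    # underexposed: shadows crushed
bright = np.clip(scene * 2.0, 0.0, 1.0)  # overexposed: highlights clipped
fused = fuse_exposures(dark, bright)
```

The fused result keeps shadow detail from the bright frame and highlight detail from the dark one - a crude stand-in for what happens when video with changing exposure is composited into a single still.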
Professor Mann has been wearing a computer for almost 30 years, as shown in the above illustration. Of most interest to me is the idea of mediated reality, wherein the computer looks at and interprets what you are looking at, and modifies the scene before presenting it to you. These modifications could include directions to a destination ("follow the yellow line"), a name tag for someone you run into whose name you should remember but don't, or any other sort of context-relevant information. It could even present a wildly distorted picture of the world, if that's what you want. Or it could save you from being inundated by external media: it can replace billboards with countryside, or make crowds transparent, so that you don't feel crowded (say, at Disneyland). At the same time it could highlight obstacles you are in danger of colliding with, so you don't keep running into these invisible people!
The sensing/display device that is being studied and (hopefully) developed is called the EyeTap. Within its tiny-enough-to-wear eyeglass frame are both a camera and a display. A mirror sends incoming light to the camera, and also sends the image from a micro-display back into your field of view. Between the camera and display, a wearable computer does all the image analysis, recognition and resynthesis, "mediating" your view of the world. One cool result of this is that head tracking is done directly from analysis of the image; the system doesn't need a gyro!
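The gyro-free tracking idea can be sketched with plain NumPy. A real system recovers full projective motion between frames (that's what "video orbits" refers to); this toy version uses phase correlation to recover only a 2-D translation, but it shows the principle of deriving head motion from the images themselves.

```python
# Minimal sketch of image-based motion tracking: estimate how the view
# shifted between two frames using phase correlation (NumPy FFTs only).
import numpy as np

def estimate_shift(frame_a, frame_b):
    """Return the (row, col) translation taking frame_a to frame_b."""
    F = np.fft.fft2(frame_a)
    G = np.fft.fft2(frame_b)
    cross_power = G * np.conj(F)
    cross_power /= np.abs(cross_power) + 1e-12  # keep phase only
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the midpoint correspond to negative shifts (wrap-around)
    return tuple(p if p <= s // 2 else p - s
                 for p, s in zip(peak, frame_a.shape))

# Simulate a small head turn: frame_b is frame_a shifted 3 px down, 5 px right
rng = np.random.default_rng(0)
frame_a = rng.random((64, 64))
frame_b = np.roll(frame_a, shift=(3, 5), axis=(0, 1))
print(estimate_shift(frame_a, frame_b))  # recovers (3, 5)
```

No inertial sensor involved: the motion estimate falls out of comparing consecutive frames, which is exactly why an EyeTap-style rig can track the head "for free" from the camera it already carries.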
I'm not sure how far anyone has gotten with the tough problem of image understanding, but a quick Google search lists several links to universities involved with it. It involves face and object recognition, 3D perception and probably a lot more. This is all very encouraging!