Tuesday, July 24, 2007

Photo Tourism demo


Here's a Java demo of Photo Tourism, the predecessor to Photosynth. The drag with Photosynth is that it only runs on Windows XP and Vista. Bleccch. Here, all you need is a Java-enabled browser (I'm using Firefox).

Things to do

There is a multitude of things I don't understand that I would like to. I want to move along in converting my photo and video archives into a VR memory-assist library or museum. Initially, this VR space will live in a computer's memory. Eventually, I hope it will be incorporated into my own augmented brain. This seems to be the direction this blog is taking, somewhat without my having planned it that way.
I have begun to realize that, in a way, EVERYTHING is software. That means that without the personal ability to create and modify software, I am missing a very necessary skill set. So I have begun to study the C++ programming language: I downloaded the free text of "Thinking in C++" and I am studying it every day.

I have also discovered many articles about how various groups have converted video streams into panoramic mosaics and even full 3D environments. Imagine being able to video record a scene, then feed that video into an application that converts it into a 3D space. That's what is coming. If you are interested, look up the Photosynth website. This Microsoft Research project correlates any number of photographs of an area into a 3D space, while retaining the ability to navigate to each original image. The demo video is really cool.
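As a concrete taste of what "correlating photographs" involves: systems like Photosynth start by finding points that match between overlapping images. They use sophisticated feature detectors for this; the sketch below uses plain normalized cross-correlation, which is just the simplest version of the idea. This is an illustrative Python sketch, not how Photosynth actually works:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equal-sized patches (-1 to 1)."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def find_patch(image, patch):
    """Slide `patch` over `image`; return the (row, col) of the best match."""
    ph, pw = patch.shape
    ih, iw = image.shape
    best, best_pos = -2.0, (0, 0)
    for r in range(ih - ph + 1):
        for c in range(iw - pw + 1):
            score = ncc(image[r:r + ph, c:c + pw], patch)
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos

# Demo: cut a patch out of a synthetic image, then locate it again.
rng = np.random.default_rng(0)
img = rng.random((20, 20))
loc = find_patch(img, img[5:9, 7:11])
```

Finding many such correspondences between many photos is what lets the software work out where each camera was standing.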

Photosynth

A great video demonstration of Photosynth, a program that does some of what I have been talking about.

Monday, June 11, 2007

Convert Video to VR

I thought of a neat piece of software I would like to see:
This application would extract the background from a video stream that included pans and tilts in the shot.
Say you have a shot of someone running across a field. The software would process the shot into a VR background image. Using pattern recognition to register the frames, each new frame would add to the background image as the camera panned across a new region. Only the background would survive this process; if there was an object being tracked in the shot, it would not become part of the background "plate".
An extension to this process would be the ability to derive a new view of the object moving across the static background, making it possible to view the object from a select range of angles. This part would be harder if the object was near enough to the camera that multiple sides of it were visible. But this could all be sorted out.
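The background-"plate" idea above has a simple core: once the frames are registered to each other, a per-pixel median across them keeps whatever value appears most of the time, and a moving object, which covers any given pixel only briefly, gets voted out. A minimal Python sketch, assuming the registration step (compensating for the pans and tilts) has already been done:

```python
import numpy as np

def background_plate(frames):
    """Per-pixel median over registered frames; a moving foreground
    object is outvoted wherever it covers a pixel in only a minority
    of the frames."""
    return np.median(np.stack(frames), axis=0)

# Demo: a bright 2x2 "runner" crosses a static gradient background.
bg = np.tile(np.arange(10.0), (4, 1))
frames = []
for t in range(5):
    f = bg.copy()
    f[1:3, 2 * t:2 * t + 2] = 255.0   # the moving object, in a new spot each frame
    frames.append(f)

plate = background_plate(frames)       # recovers the clean background
```

The hard part the sketch skips is the registration itself, warping each frame into a common coordinate system as the camera pans, which is where the pattern recognition comes in.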
This idea came out of my thoughts of how my video and image collection could be used to create a virtual world for myself. Sometime in the future, all of my media will be digitized and used to create a space for me to explore my life's history. People from my past will be recreated using AI algorithms and every bit of video of that person. The AI algorithm would be able to simulate the person, so that person would live on in my memory.
Eventually this data will be integrated with my own memories. Even more, my memories will be mined by the software to help me recollect things that are only dim memories for me now.
Then that first kiss, the bee sting I got when I was three - and so on - would be as real as new.

Wednesday, September 06, 2006

Rediscovering Music Synthesis

When I was in my first year of college, one of the things I was fascinated with was the relatively new field of electronic music. I was going to Orange Coast College in Costa Mesa, CA at the time.
I didn't have access to a synthesizer, but I built a couple of kits from PAIA Electronics (Craig Anderton's company??). I couldn't afford a real modular synth, so I settled for the simplest single-voice instrument they made. It had a rubber voltage-divider strip that you could use to generate the control voltage for the oscillator by touching a probe to it along its length. I also built a top-octave-divider keyboard with maybe a 1 1/2 octave range (that you could shift over several octaves). I had a tape echo machine that I bought from one of my brother's friends, Rick Maddox. I think he's a sound engineer somewhere now.
I read all the books the OCC library had about electronics. I invented patches on paper and had to imagine how they sounded. A patch is the routing a signal takes as it is created, modified, and combined with other signals.
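A paper patch can also be written down as a few lines of code, with the output of one module feeding the input of the next. A minimal Python sketch (the module names are my own, not from any synth), showing a classic patch: an LFO wiggling an oscillator's pitch for vibrato, then an envelope shaping the amplitude:

```python
import numpy as np

SR = 44100  # sample rate in Hz

def osc(freq, seconds, sr=SR):
    """Sine oscillator. `freq` may be a scalar or a per-sample array,
    so another signal can modulate the pitch (just like a control
    voltage patched into a VCO)."""
    n = int(seconds * sr)
    freq = np.broadcast_to(np.asarray(freq, dtype=float), (n,))
    phase = 2 * np.pi * np.cumsum(freq) / sr
    return np.sin(phase)

# The patch: LFO -> oscillator pitch, envelope -> amplitude.
seconds = 1.0
lfo = osc(5.0, seconds)                          # 5 Hz control signal
voice = osc(440.0 + 10.0 * lfo, seconds)         # A440, +/-10 Hz vibrato
env = np.linspace(1.0, 0.0, int(seconds * SR))   # simple linear decay
out = voice * env                                # one second of audio samples
```

Swapping a module or rerouting a connection is just editing a line, which is essentially what a modular patch cord did, minus the solder.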

Today, I am revisiting this lost interest. I have downloaded a copy of the Windows program AudioMulch. It has everything I could have wanted back in the day, plus a lot of things that hadn't been invented back then, such as the digital effects I'm just learning about: quantizers and nebulizers and so on.
I'm on a path of discovery. My immediate goal is to provide ambient sound for my Mad Scientist's laboratory for Halloween. But it's just fun relearning the technology, and discovering a lot of new things!

Saturday, March 04, 2006

The Internet as Personal Memory

Well, the Internet is going to become my audio/video/photographic memory. My Flickr account lets me store my photos online. Soon I will start uploading videos to Google Video (or somewhere) so the whole world can watch my home movies...

You know, there's a touch of this in Bruce Sterling's short story, "Maneki Neko". It's a great story, and can be found in his anthology "A Good, Old-Fashioned Future".

I'm continuing my move toward using all open-source software. I have one Linux box up and running. It was rescued from a relative: it had XP Home on it and was almost completely trashed by viruses. Since it came to me for free, I didn't feel like I was risking anything by formatting the disk and starting over.

Friday, April 01, 2005

Eyepiece Fatigue

I often feel like I'm living my most important moments through the eyepiece of a video camera. With what I've said below, I may be able to live them again, free from the eyepiece or video display.