We are building up steam to launch a project that involves solving parts of the plenoptic function for natural landmarks. The current state of the art for field capture of interactive scene data seems to be cylindrical panoramas captured with rotating heads on tripods: http://www.virtualparks.org/
Imagine doing the same thing, but supporting non-deforming geometry, novel viewpoints, and relighting. Now imagine putting all that technology in a package with free software and a sub-$10,000 capital equipment cost, and giving it to the kind of people who already go record this stuff for fun in their free time.
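To make the plenoptic connection concrete: the full plenoptic function is L(x, y, z, theta, phi, lambda, t), and a tripod-mounted cylindrical panorama fixes the viewpoint (x, y, z) and the capture time t, so each pixel is just a sample of a viewing direction. Here is a minimal sketch of that pixel-to-ray mapping; the function name and the 90-degree vertical field of view are illustrative assumptions, not part of any existing tool or of our pipeline.

import math

def panorama_pixel_to_ray(u, v, width, height, vertical_fov_deg=90.0):
    # Azimuth sweeps the full circle across the image width.
    theta = 2.0 * math.pi * (u + 0.5) / width
    # Elevation spans the (assumed) vertical field of view, zero at the middle row.
    half_fov = math.radians(vertical_fov_deg) / 2.0
    phi = (0.5 - (v + 0.5) / height) * 2.0 * half_fov
    # Unit direction vector, y up.
    return (math.cos(phi) * math.cos(theta),
            math.sin(phi),
            math.cos(phi) * math.sin(theta))

# Example: the center pixel of a 4096x1024 panorama looks roughly along theta = pi.
print(panorama_pixel_to_ray(2048, 512, 4096, 1024))

Supporting novel viewpoints and relighting amounts to filling in the dimensions this capture leaves fixed, which is exactly the hard part.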
One concern is that the experience becomes too "Disneyland". That is, the resulting immersive environment is so compelling and complete that the visitor wonders, "Why bother with all the walking? I've already seen it here on my TV." I read it the other way. I think, "Now that I've got a glimpse of the place, I am highly motivated to protect it and to get out there and experience it for real."
We need to think more carefully about this issue.
Friday, August 17, 2007
Monday, August 6, 2007
SIGGRAPH 2007 paper and link follow-up.
We need to follow up on a few SIGGRAPH 2007 papers and ideas.
These are:
- Photo Clip Art. The low-cost approach to modeling the lighting in a single photograph is interesting. Also, take a look at the LabelMe database.
- That paper about removing the columns of pixels that "aren't that interesting" (probably the seam-carving paper), for use in a CS 312 project.
- All of the image-based rendering papers
- Spheron technologies
- HD View, the gigapixel image viewer from MSFT
- Polhemus
- Voodoo camera tracker
- SynthEyes camera tracker (maybe)