Recreate a 3D scene with a High Definition video camcorder

  • 1
  • Idea
  • Updated 11 years ago
It is essential to get more than 200 photos with a digital camera to build a good three-dimensional scene.
Suppose we shoot with a High Definition video camcorder instead: we get 30 images per second.
While filming we can move anywhere, shooting from any angle or capturing any detail, so within a short time we collect plenty of images.
If we then extract every frame from the video as a photo, we end up with thousands of photos. What effect could we get by putting these photos into Photosynth!?
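As a rough sketch of the numbers involved (assuming 30 fps footage and a decimation factor to thin out near-duplicate frames — the function names here are illustrative, not part of any Photosynth tool), a few lines of Python show how quickly the frame count adds up:

```python
# Back-of-the-envelope frame counts for video-to-synth capture.
# Assumption: 30 fps HD footage; we keep every Nth frame to thin the set.

def frame_count(duration_s: float, fps: int = 30) -> int:
    """Total frames captured in a clip of the given length."""
    return int(duration_s * fps)

def decimate(total: int, keep_every: int) -> int:
    """Frames remaining after keeping only every Nth frame."""
    return total // keep_every

clip = frame_count(5 * 60)   # a 5-minute walk-around
print(clip)                  # 9000 frames
print(decimate(clip, 8))     # 1125 frames left to feed into the synther
```

Even after heavy decimation, a short clip yields far more views than the ~200 photos suggested for still cameras.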

Sun-Zheng

  • 2 Posts
  • 0 Reply Likes

Posted 11 years ago

  • 1

Sun-Zheng

  • 2 Posts
  • 0 Reply Likes
http://photosynth.net/view.aspx?cid=8...
This is from a video. Video can build 3D space!!!

Nathanael Lawrence

  • 795 Posts
  • 55 Reply Likes
Congratulations.

Rick Szeliski, Employee

  • 17 Posts
  • 12 Reply Likes
There's already a discussion around this idea under the title "Create synth from HD camera footage".

TorATB

  • 2 Posts
  • 0 Reply Likes
I looked through the posts and comments a bit, but I haven't seen anything about this: why not go a few steps further and use this technology to help robots maneuver? One of the biggest disadvantages for robots is that they don't have a proper 3D model of the environment they operate in. With this technology we could give robots a big leap in how they operate: no bumping into walls, because the robot has a 3D model of the room and can position itself within that model, so it "knows" where it can move, and so on... Just put a couple of camcorders and this software on a robot and have lots of fun programming it to clean up your room ;)

Nathanael Lawrence

  • 795 Posts
  • 55 Reply Likes
Considering the amount of time required for calculating the connections between individual images, you would need to let your robot 'calibrate' in an area and then wait until the synth calculations were complete.

Even then, I wonder how effective this would be. It seems like even after a model was generated, the robot would need to analyse video frames of its current position and check them against those already synthed. At the moment we (the public) do not even have the ability to add a few more photos to our completed synths without recalculating the entire thing. Without the ability to quickly tie in views of current position to the existing model, this idea is still not ready.

I'm sure that you're right about this being used in the future. I just don't think we've figured out how to do it quickly enough to be practical.

TorATB

  • 2 Posts
  • 0 Reply Likes
This technology is very interesting. In 10-20 years, with hardware and software development, this idea might become reality.

Rick Szeliski, Employee

  • 17 Posts
  • 12 Reply Likes
Indeed, the idea of having a robot map out its environment to better perform its tasks (including obstacle avoidance) is a classic problem in robotics that goes by the name of SLAM (Simultaneous Localization and Mapping).

While such systems were traditionally built with sonar or laser scanners, newer variants use video cameras and computer vision techniques closely related to Photosynth.

You can find out more about such techniques by searching for "Visual SLAM" on the Web.

Dane Jasper

  • 8 Posts
  • 1 Reply Like
While I like the idea of MANY more images in a synth, the example you linked to is basically just a video, frame by frame. Where's the upside in that?

bitplane

  • 56 Posts
  • 11 Reply Likes
I've done this with a video of my town's high street with some success: I stuck a video camera to the driver's-side window and recorded to MPG format.
I then opened the video in VirtualDubMod, chose "video -> frame rate -> decimate by -> 8" and "file -> export image sequence", and created a stream of Windows bitmaps.

Then, because Photosynth doesn't support bitmaps, I had to install ImageMagick to convert the images, which provides the command-line tool "convert.exe".

To do this I opened a DOS prompt and used "dir /b *.bmp > textfile.txt" to create a list of files, pasted the list into columns B and C of a spreadsheet (OpenOffice Calc, because I don't own Microsoft Excel, but you could also use Google Docs), replaced ".bmp" with ".jpg" in column C, and put the text "convert -quality 95" into column A.
I then copied the three columns, pasted them into Notepad, saved the result as "converter.bat", and double-clicked it.
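The spreadsheet step above can also be scripted. As a hedged sketch (filenames and the helper function are illustrative; it assumes ImageMagick's `convert` syntax as used in the post), a few lines of Python can generate the same `converter.bat` contents from a list of bitmaps:

```python
# Scripted stand-in for the spreadsheet trick: build one ImageMagick
# "convert" command per bitmap, mirroring the batch file from the post.

def build_converter_lines(filenames, quality=95):
    """Return converter.bat lines: convert each .bmp to a .jpg."""
    lines = []
    for name in sorted(filenames):
        if name.lower().endswith(".bmp"):
            jpg = name[:-4] + ".jpg"
            lines.append(f"convert -quality {quality} {name} {jpg}")
    return lines

# Example frame names as exported by VirtualDubMod (illustrative):
frames = ["frame001.bmp", "frame002.bmp", "notes.txt"]
for line in build_converter_lines(frames):
    print(line)
# convert -quality 95 frame001.bmp frame001.jpg
# convert -quality 95 frame002.bmp frame002.jpg
```

Writing the returned lines to "converter.bat" and running it would perform the same conversion, with no spreadsheet round-trip.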

The results were reasonably good with 640x480 and 900+ images. Going round corners broke the synth because I couldn't aim at a reference point with the camera stuck to my window! Photosynth seemed to think the road was bent in a fish-eye fashion, but I'm assuming more data points would fix this.

However, now that I have the technique, I'll try using it to create the point cloud for a fully synthed area before adding high-quality photos over the top. It would be really cool if I could hide the low-quality images from the viewer and just use the video for mapping.