Friday, March 30, 2012

StereoVision Urgent

This thread is for more urgent questions or discussion, or topics which otherwise ought
to be brought to the attention of the whole class.  Try to use this thread sparingly.  I may
create additional posts if the volume of discussion warrants it.

17 comments:

  1. Here is a test image and the depth map that I've been using. It's a 250x250 png.

    http://imgur.com/a/Lprwg

    1. Here's a screenshot of my current MasterFrame class, using Max's test image and depth map. I'm treating pure black as a depth of 100.0, and the eye positions are: left: -500,0,0, right: +500,0,0.

      http://imgur.com/2eeRI

    2. So you are normalizing the grayscale values, which normally range from 0 to 255, to fall between 1 and 100?

    3. Max, thanks for sharing. Just a quick caution with this image: the depth map doesn't have any zero values. That's fine, but when scaling the depths down to 100 you end up getting some zeros. It also makes testing easier.

      Just an FYI for everyone.

    4. This is way too late to matter, but I'm not sure how scaling the depths down creates zeros? If you're doing the conversion by multiplying the grayscale values by 100/255, the only depths that would later get cast to zero are those that were originally < 2.55.

    5. If you divide two int values, 100 / 255 will always give 0, because you are performing integer division.
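
      Since a few people hit this, here is a quick sketch of the pitfall (plain Java; the variable names are mine, not from the spec):

      ```java
      public class DivisionDemo {
          public static void main(String[] args) {
              // Integer division truncates toward zero: 100 / 255 is 0,
              // so multiplying every grayscale value by it zeroes the whole map.
              int brokenScale = 100 / 255;       // 0
              double fixedScale = 100.0 / 255;   // ~0.392, because one operand is a double
              int gray = 200;                    // an example grayscale value
              System.out.println(gray * brokenScale); // 0 for every pixel
              System.out.println(gray * fixedScale);  // ~78.4, the intended scaled depth
          }
      }
      ```

      Making either operand a double (100.0 / 255, or a cast) is enough to get the real quotient.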

  2. Here's a basic 3D scene with a separate Z-Depth buffer I rendered for testing the program.

    Base image:
    http://imgur.com/unlvs

    Z-Depth map:
    http://imgur.com/40aQ1

    1. This is way better than the one I posted. Thanks!

    2. No prob. Unfortunately my program isn't at a stage where I can test it yet. Let me know if the images work out ok.

    3. I like Brandon's image, but a smaller version, say, 100x100, will be more useful for testing, just to keep the amount of processing down.

    4. Here's what a test of it looks like. Awesome image by the way. Did you make this in 3ds?
      Left eye: left 100
      Right eye: right 100, down 20

      http://imgur.com/STui2

    5. Reply to Dan:
      I thought I had mine working, but when I do the same coordinates as you, my shifts look a lot less dramatic. Any ideas? Thanks. Here's a pic:

      http://imgur.com/1NAcV

    6. A small maximum depth value gives a more dramatic result. I was using a max depth of 20 to make the shifts more noticeable.

  3. I don't know if this should be considered urgent, but it is definitely confusing me. The assignment requires us to load the normal (base) image and then load the depth map at the top right. Using our DepthMap class, we determine how far in the background each particular pixel is. Based on that information, we shift the pixels in the right-eye and left-eye images, which are loaded at the bottom of the JPanel. If that's the case, my pictures are shifting as a whole; how do you move the right pixel to the right spot?

    1. You do it by combining the original pixels, the depth of each pixel, and the equation from the assignment specs. With those, you create a new BufferedImage (the same width and height as the original) and draw each pixel at its computed location.
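
      As a rough sketch of that idea, assuming the spec's rule that a pixel shifts by the eye's shift divided by its depth (the class and method names here are hypothetical, not from the assignment):

      ```java
      import java.awt.image.BufferedImage;

      public class ShiftSketch {
          // Build one eye's view by copying each base pixel to its shifted column.
          // depths[y][x] holds each pixel's depth; pixels shifted off-image are dropped.
          static BufferedImage shiftImage(BufferedImage base, double[][] depths, double eyeShift) {
              int w = base.getWidth(), h = base.getHeight();
              BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
              for (int y = 0; y < h; y++) {
                  for (int x = 0; x < w; x++) {
                      // Deeper pixels shift less: shift = eyeShift / depth.
                      int nx = x + (int) Math.round(eyeShift / depths[y][x]);
                      if (nx >= 0 && nx < w) {
                          out.setRGB(nx, y, base.getRGB(x, y));
                      }
                  }
              }
              return out;
          }
      }
      ```

      A real version would also decide what to leave in the gaps a shift opens up; this only shows the per-pixel relocation.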

    2. You don't have to create a BufferedImage for those panels. If you like, you can just override the paint() method. Either way is fine.

      @Cesar, the way the depthmap is supposed to work, each pixel has a depth, which tells you how many times further it is from your eye than the screen is. So, at a depth of 1.0, the pixel is right at the screen, and it should shift the same amount that your eye does. At a depth of 2.0, the pixel is as far behind the screen as your eye is in front of the screen. This means that the drawing on the screen should only shift half as much. And so on. As Peter pointed out, I've given formulas for this in the spec.

      One point that has caused some confusion is: in a typical 3-D scene, all the depth values could reasonably be in the range [1.0, 2.0]. In this case, obviously they should be stored as doubles, not ints. Since this information is NOT contained in a grayscale image, you need to pass the depth range [minDepth, maxDepth] as parameters to the DepthMap constructor. It is reasonable to use 0.0 or 1.0 as a default value for minDepth. The grayscale values tell you where on the scale from minDepth to maxDepth the individual pixel depths fall. So, with grayscale values from 0 (farthest) to 255 (nearest), the grayscale value of 127 corresponds to roughly (minDepth+maxDepth)/2. (Exactly (127*minDepth + 128*maxDepth)/255)
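
      To make the two rules above concrete, here is a minimal sketch (the method names toDepth and shiftFor are mine, not from the assignment spec):

      ```java
      public class DepthDemo {
          // Map a grayscale value (0 = farthest, 255 = nearest) onto [minDepth, maxDepth].
          // For gray = 127 this gives exactly (127*minDepth + 128*maxDepth)/255.
          static double toDepth(int gray, double minDepth, double maxDepth) {
              return (gray * minDepth + (255 - gray) * maxDepth) / 255.0;
          }

          // A pixel's on-screen shift is the eye's shift divided by the pixel's depth:
          // depth 1.0 shifts the full eye amount, depth 2.0 shifts half, and so on.
          static double shiftFor(double eyeShift, double depth) {
              return eyeShift / depth;
          }

          public static void main(String[] args) {
              double mid = toDepth(127, 1.0, 2.0);       // roughly (1.0 + 2.0) / 2
              System.out.println(mid);
              System.out.println(shiftFor(100.0, mid));  // somewhat less than the eye's 100
          }
      }
      ```

      Note the division happens in doubles, which avoids the integer-division problem discussed earlier in the thread.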

  4. I may not be exact, however // I am not yet done myself
