This assignment will exercise the concepts of two-view stereo and photometric stereo. The project contains three parts. All students are expected to implement parts 1 and 2. 3-credit students are given an executable for part 3 and do not have to implement it, while 4-credit students need to complete a function for this part.
Download the code base from our GitHub repo. You are to implement your code in student.py. Inputs and outputs for each function are specified in student.py.
Execute the following script to download the required datasets. This might take a while depending on your connection, so please be patient. To save download time, we've commented out the datasets you don't need to complete this assignment, but we encourage you to download them and try out many different inputs.
This repository comes with the tentacle dataset. You will need to execute the download script to get the other datasets. For visualizations of the other datasets, please visit these external sites:
You will need ImageMagick, MeshLab and nose. If you are using the class VM then run:
Given a stack of images taken from the same viewpoint under different, known illumination directions, your task is to recover the albedo and normals of the object surface.
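As a point of reference, the standard Lambertian least-squares formulation can be sketched as below. This is an illustrative sketch only: the function name, argument shapes, and grayscale assumption are ours, and the actual signatures you must implement are the ones specified in student.py.

```python
import numpy as np

def photometric_stereo_sketch(images, lights):
    """Recover albedo and normals from a Lambertian image stack.

    images: (N, H, W) grayscale images under N known lights
    lights: (N, 3) unit lighting directions
    Model: I = lights @ G, where G = albedo * normal per pixel.
    (Hypothetical helper; the starter code may use RGB images and a
    different argument order.)
    """
    N, H, W = images.shape
    I = images.reshape(N, -1)                            # (N, H*W)
    # Solve lights @ G = I in the least-squares sense
    G, _, _, _ = np.linalg.lstsq(lights, I, rcond=None)  # (3, H*W)
    albedo = np.linalg.norm(G, axis=0)                   # (H*W,)
    # Normalize to unit normals, guarding against zero-albedo pixels
    normals = np.where(albedo > 1e-7, G / np.maximum(albedo, 1e-7), 0.0)
    return albedo.reshape(H, W), normals.T.reshape(H, W, 3)
```

With at least three non-coplanar lighting directions the system is well determined, which is why the tentacle dataset's 9 lights give a comfortable margin.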
where dataset is in: ('tentacle', 'cat', 'frog', 'hippo', 'lizard', 'pig', 'scholar', 'turtle')
For example, if you use the tentacle dataset
the output will be in output/tentacle_{normals.png,normals.npy,albedo.png}.
The following illustrates the different illuminations for the tentacle dataset. The tentacle is a 3D mesh that has been rendered under 9 different directional illumination settings.
Correct tentacle_normals.png for the tentacle dataset looks like:
Red indicates the normal is pointing to the right (+x direction), green indicates the normal is pointing up (+y direction) and blue indicates the normal is pointing out of the screen (+z direction). We expect you to format your normals in this coordinate frame. Failure to do so will result in incorrect meshes in part 3 of this assignment. The lighting directions we provide are already in this coordinate frame, so the simplest solution should be correct by default.
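The color coding above is just an affine map from normal components in [-1, 1] to pixel values in [0, 255]. A minimal sketch (the function name is ours; your visualization only needs to match the expected image):

```python
import numpy as np

def normals_to_rgb(normals):
    """Map unit normals (H, W, 3) with components in [-1, 1] to an
    RGB image in [0, 255]: +x -> red, +y -> green, +z -> blue."""
    return ((normals + 1.0) / 2.0 * 255).astype(np.uint8)
```

A normal pointing straight out of the screen, (0, 0, 1), maps to (127, 127, 255), the pale blue that dominates flat regions of the reference image.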
Correct tentacle_albedo.png for the tentacle dataset looks like:
Given two calibrated images of the same scene, but taken from different viewpoints, your task is to recover a rough depth map.
where dataset is in: ('tentacle', 'Adirondack', 'Backpack', 'Bicycle1', 'Cable', 'Classroom1', 'Couch', 'Flowers', 'Jadeplant', 'Mask', 'Motorcycle', 'Piano', 'Pipes', 'Playroom', 'Playtable', 'Recycle', 'Shelves', 'Shopvac', 'Sticks', 'Storage', 'Sword1', 'Sword2', 'Umbrella', 'Vintage')
You will need to uncomment and download the other datasets using "download.sh".
For example, if you use the tentacle dataset
the output will be in output/tentacle_{ncc.png,ncc.gif,depth.npy,projected.gif}.
The following illustrates the two views for the tentacle dataset.
Correct tentacle_projected.gif for the tentacle dataset looks like:
This animated gif shows each rendering of the scene as a planar proxy is swept away from the camera.
Correct tentacle_ncc.gif for the tentacle dataset looks like:
This animated gif illustrates slices of the NCC cost volume where each frame corresponds to a single depth. White is high NCC and black is low NCC.
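For intuition about what each cost-volume entry means, here is normalized cross-correlation for a single pair of patches. This is a sketch with our own helper name; in the assignment you compute NCC densely over whole (preprocessed) images rather than one patch at a time.

```python
import numpy as np

def ncc(patch_a, patch_b, eps=1e-8):
    """Zero-mean normalized cross-correlation of two patches.
    Returns a value in [-1, 1]; high means the patches match well,
    which is why white (high NCC) marks the correct depth slice."""
    a = patch_a.ravel().astype(float)
    b = patch_b.ravel().astype(float)
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b) / (denom + eps)
```

Because of the mean subtraction and normalization, NCC is invariant to per-patch brightness and contrast changes, which makes it robust to exposure differences between the two views.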
and correct tentacle_ncc.png for the tentacle dataset looks like:
This illustrates the argmax depth according to the NCC cost volume. White is near and black is far.
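The step from cost volume to depth map is a per-pixel argmax over the depth axis. A minimal sketch, assuming a (D, H, W) volume layout (the starter code's axis order may differ):

```python
import numpy as np

def depth_from_ncc_volume(cost_volume, depths):
    """Pick the depth with the highest NCC score per pixel.

    cost_volume: (D, H, W) NCC scores, one slice per candidate depth
    depths: (D,) candidate depth values
    """
    best = np.argmax(cost_volume, axis=0)  # (H, W) index of best slice
    return depths[best]                    # (H, W) depth map
```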
Protip: Is debugging taking too long on the provided examples? Open dataset.py, where you can edit a couple of arguments. You can decrease the number of depth layers in the cost volume. For example, the Middlebury datasets are configured to use 128 depth layers by default:
Alternatively, you can decrease the resolution of the input images. For example, the Middlebury datasets are downscaled by a factor of 4 by default:
The output image will be of dimensions (height / 2^stereo_downscale_factor, width / 2^stereo_downscale_factor).
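For instance, with the default factor the arithmetic works out as follows (the 1920x1080 resolution here is just an example, not a property of any particular dataset):

```python
# Example: downscaling a hypothetical 1920x1080 input
height, width = 1080, 1920
stereo_downscale_factor = 2
out_h = height // 2 ** stereo_downscale_factor  # 1080 / 4 = 270
out_w = width // 2 ** stereo_downscale_factor   # 1920 / 4 = 480
```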
Given a normal map, depth map, or both, reconstruct a mesh.
where dataset is in: ('tentacle', 'cat', 'frog', 'hippo', 'lizard', 'pig', 'scholar', 'turtle', 'Adirondack', 'Backpack', 'Bicycle1', 'Cable', 'Classroom1', 'Couch', 'Flowers', 'Jadeplant', 'Mask', 'Motorcycle', 'Piano', 'Pipes', 'Playroom', 'Playtable', 'Recycle', 'Shelves', 'Shopvac', 'Sticks', 'Storage', 'Sword1', 'Sword2', 'Umbrella', 'Vintage')
and mode is in: ('normals', 'depth', 'both')
For example, if you use the tentacle dataset
The tentacle dataset is the only one compatible with the both option. Other datasets are compatible with either the normals mode (photometric stereo integration) or the depth mode (mesh from depth).
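To give a feel for the depth mode, here is one simple way a depth map can be turned into a triangle mesh on the pixel grid. This sketch is ours, not the provided combine.py, which also handles camera unprojection and output formats:

```python
import numpy as np

def depth_to_mesh(depth):
    """Build a triangle mesh from a depth map.

    Each pixel becomes a vertex (x, y, depth); each 2x2 block of
    pixels yields two triangles. Returns (verts, faces) arrays.
    """
    H, W = depth.shape
    ys, xs = np.mgrid[0:H, 0:W]
    verts = np.stack([xs.ravel(), ys.ravel(), depth.ravel()], axis=1)
    idx = np.arange(H * W).reshape(H, W)
    faces = []
    for y in range(H - 1):
        for x in range(W - 1):
            a, b = idx[y, x], idx[y, x + 1]
            c, d = idx[y + 1, x], idx[y + 1, x + 1]
            faces.append([a, b, c])  # upper-left triangle
            faces.append([b, d, c])  # lower-right triangle
    return verts, np.array(faces)
```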
The following video illustrates the expected output for the tentacle dataset. Use MeshLab to open and view the mesh.
Protip: Use the Import Mesh button in MeshLab to open your mesh.
Be patient when running combine.py. For reference, it should finish on the tentacle dataset with the both option in under 20 seconds.
Execute nosetests from the project directory to run the provided test cases in tests.py.
When you run nosetests for the first time, you'll see that all the tests are skipped.
We've configured tests.py to skip any tests where a NotImplementedError has been raised. Skipped tests are shown as an S.
After implementing one function, you might see something like this.
Here we have passed three tests, each indicated by a ".".
If you fail a test case, an "F" will be printed. For example:
As you work on implementing your solution, we recommend that you extend tests.py with whatever new test cases you feel will help you debug your code.
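As a starting point, an extra test might look like the sketch below. The class and test names are hypothetical, and the placeholder array stands in for whatever your student.py function actually returns; adapt it to call your own code.

```python
import unittest
import numpy as np

class ExtraNormalsTests(unittest.TestCase):
    """Hypothetical extra checks; replace the placeholder data with
    real output from your student.py implementation."""

    def test_normals_are_unit_length(self):
        # Placeholder standing in for your photometric stereo output:
        # a flat surface whose normals all point out of the screen.
        normals = np.dstack([np.zeros((4, 4)),
                             np.zeros((4, 4)),
                             np.ones((4, 4))])
        lengths = np.linalg.norm(normals, axis=2)
        self.assertTrue(np.allclose(lengths, 1.0))

if __name__ == "__main__":
    unittest.main()
```

Properties like "normals have unit length" or "NCC values lie in [-1, 1]" are cheap to check and catch many common bugs before you inspect images by eye.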
To recap, you must:
To see exactly what to submit, see the instructions on CMS: