In this project, you will implement a system to combine a series of photographs into a 360 degree panorama (see panorama above). You will first detect discriminating features in the images and find the best matching features in the other images, using your code from Project 2 (or SIFT). For this project, you will then automatically align the photographs (determine their overlap and relative positions) and then blend the resulting photos into a single seamless panorama. You will then be able to view the resulting panorama inside an interactive Web viewer. To start your project, you will be supplied with some test images and skeleton code you can use as the basis of your project and instructions on how to use the viewer.
The project will consist of a pipeline of command line EXE programs (Features.exe and Panorama.exe) that will operate on images or intermediate results to produce the final panorama output. You should already be familiar with Features.exe, so we focus on Panorama.exe here.
The steps required to create a panorama are listed below. You will be creating two ways to stitch a panorama: using translations (where you'll need to pre-spherically-warp the input images) and homographies, where you align the input images directly. The steps in square brackets are only used with the spherical warping route:
Step                                              EXE
1. Take pictures on a tripod (or handheld)
2. [Warp to spherical coordinates]                (Panorama.exe)
3. Extract features                               (Features.exe)
4. Match features                                 (Features.exe)
5. Align neighboring pairs using RANSAC           (Panorama.exe)
6. Write out list of neighboring translations     (Panorama.exe)
7. Correct for drift                              (Panorama.exe)
8. Read in [warped] images and blend them         (Panorama.exe)
9. Crop the result and import into a viewer
If you downloaded the code prior to Oct 6, you can get only the modified files and integrate the changes manually (or by overwriting a file if you haven't touched it): Makefile, FeatureSet.cpp, PanoramaMain.cpp, BlendImages.h, BlendImages.cpp
Panorama.exe is a command line program that requires arguments to work properly. Thus you need to run it from the command line, or from a shortcut to the executable that has the arguments specified in the "Target" field of the shortcut properties. (Unlike Features.exe from last time, Panorama.exe has no GUI mode.)
To run from the command line, click the Windows Start button and select "Run". Then enter "cmd" in the "Run" dialog and click "OK". A command window will pop up where you can type DOS commands. Use the DOS "cd" (change directory) command to navigate to the directory where Features.exe or Panorama.exe is located. Then type "Features" or "Panorama" followed by your arguments. If you do not supply any arguments, the program will print out information on what arguments it expects or open the UI in the case of Features.exe.
Another way to pass arguments to a program is to create a shortcut to it. To create a shortcut, right-click on the executable and drag it to the location where you wish to place the shortcut. A menu will pop up when you release the mouse button. From the menu, select "Create Shortcut Here". Now right-click on the shortcut you've created and select "Properties". In the properties dialog, select the "Shortcut" tab and add your arguments after the text in the "Target" field. Your arguments must be outside of the quotation marks and separated with spaces.
You can run the skeleton program from inside Visual Studio. However, you will need to tell Visual Studio what arguments to pass. Here's how:
You will use your feature detection and matching component to align and blend the photographs into the panorama, as described above. To start this component, you will be supplied with some test images and skeleton code. We also provide a Makefile so you can compile the code under Linux and Mac.
Note: The skeleton code includes an image library, ImageLib, that is fairly general and complex. It is NOT necessary for you to peek extensively into this library! We have created some notes for you here.
[TODO] Compute the inverse map to warp the image by filling in the skeleton code in the warpSphericalField routine to:
(Note: You will have to use the focal length f estimates provided above for the half-resolution images (you can either take pictures and save them as small files, or save them as large files and reduce them afterwards). If you use a different image size, remember to scale f according to the image size.)
(Note 2: This step is not used when estimating homographies between images, only translations.)
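As a concrete illustration of the inverse map that warpSphericalField is meant to produce, here is a minimal standalone sketch that maps a destination pixel in the spherical image back to a source pixel in the planar input. The function name, parameter names, centering convention, and the radial distortion coefficients k1/k2 are assumptions for illustration, not the skeleton's actual interface; ignore the distortion terms if your version does not model them.

#include <cmath>

void sphericalToPlanar(double xt, double yt,      // destination (spherical) pixel
                       int width, int height,     // image dimensions
                       double f,                  // focal length in pixels
                       double k1, double k2,      // radial distortion coefficients
                       double &xp, double &yp)    // output: source (planar) pixel
{
    // Center the spherical coordinates and convert to angles.
    double xc = 0.5 * width, yc = 0.5 * height;
    double theta = (xt - xc) / f;   // longitude
    double phi   = (yt - yc) / f;   // latitude

    // Point on the unit sphere.
    double xhat = std::sin(theta) * std::cos(phi);
    double yhat = std::sin(phi);
    double zhat = std::cos(theta) * std::cos(phi);

    // Project onto the z = 1 plane (normalized image coordinates).
    double xn = xhat / zhat;
    double yn = yhat / zhat;

    // Apply radial distortion so the map matches the (distorted) input photo.
    double r2 = xn * xn + yn * yn;
    double d  = 1.0 + k1 * r2 + k2 * r2 * r2;

    // Back to pixel coordinates in the planar source image.
    xp = f * xn * d + xc;
    yp = f * yn * d + yc;
}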
To do this, you will have to implement a feature-based translational motion estimation. The skeleton for this code is provided in FeatureAlign.cpp. The main routines that you will be implementing are:
int alignPair(const FeatureSet &f1, const FeatureSet &f2,
              const vector<FeatureMatch> &matches, MotionModel m, float f,
              int nRANSAC, double RANSACthresh, CTransform3x3 &M);

int countInliers(const FeatureSet &f1, const FeatureSet &f2,
                 const vector<FeatureMatch> &matches, MotionModel m, float f,
                 CTransform3x3 M, double RANSACthresh, vector<int> &inliers);

int leastSquaresFit(const FeatureSet &f1, const FeatureSet &f2,
                    const vector<FeatureMatch> &matches, MotionModel m, float f,
                    const vector<int> &inliers, CTransform3x3 &M);
AlignPair takes two feature sets, f1 and f2, the list of feature matches obtained from the feature detection and matching component (described in the first part of the project), and a motion model (described below), and estimates an inter-image transform matrix M. For this project, the enum MotionModel takes two possible values: eTranslate and eHomography.
AlignPair uses RANSAC (RANdom SAmple Consensus) to pull out a minimal set of feature matches (one match for the case of translations, four for homographies), estimates the corresponding motion (alignment), and then invokes countInliers to count how many of the feature matches agree with the current motion estimate. After repeated trials, the motion estimate with the largest number of inliers is used to compute a least squares estimate for the motion, which is then returned in the motion estimate M.
CountInliers computes the number of matches whose distance, after applying the transformation M, falls below RANSACthresh. It also returns a list of inlier match ids.
LeastSquaresFit computes a least squares estimate for the translation or homography using all of the matches previously estimated as inliers. It returns the resulting translation or homography in the output transform M.
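To make the interaction between these three routines concrete, here is a minimal, self-contained sketch of the translation case only. The Pt/Match structs and helper names are simplified stand-ins, not the skeleton's FeatureSet/FeatureMatch/CTransform3x3 interfaces, and your actual code must also handle the homography case.

#include <cstdlib>
#include <cmath>
#include <vector>

struct Pt    { double x, y; };
struct Match { int id1, id2; };   // indices into the two feature sets

// Count matches that agree with translation (tx, ty) to within thresh pixels.
static int countInliersT(const std::vector<Pt> &f1, const std::vector<Pt> &f2,
                         const std::vector<Match> &matches,
                         double tx, double ty, double thresh,
                         std::vector<int> &inliers)
{
    inliers.clear();
    for (int i = 0; i < (int)matches.size(); i++) {
        const Pt &p = f1[matches[i].id1], &q = f2[matches[i].id2];
        double dx = p.x + tx - q.x, dy = p.y + ty - q.y;
        if (std::sqrt(dx * dx + dy * dy) < thresh)
            inliers.push_back(i);
    }
    return (int)inliers.size();
}

// RANSAC: repeatedly fit a translation to one random match, keep the motion
// with the most inliers, then refit by least squares (here: the mean offset).
void alignPairTranslation(const std::vector<Pt> &f1, const std::vector<Pt> &f2,
                          const std::vector<Match> &matches,
                          int nRANSAC, double RANSACthresh,
                          double &tx, double &ty)
{
    std::vector<int> inliers, best;
    for (int iter = 0; iter < nRANSAC; iter++) {
        const Match &m = matches[std::rand() % matches.size()];
        double cx = f2[m.id2].x - f1[m.id1].x;   // candidate translation
        double cy = f2[m.id2].y - f1[m.id1].y;
        if (countInliersT(f1, f2, matches, cx, cy, RANSACthresh, inliers) > (int)best.size())
            best = inliers;
    }
    // Least-squares refit over all inliers: the average of their displacements.
    tx = ty = 0.0;
    for (size_t k = 0; k < best.size(); k++) {
        const Match &m = matches[best[k]];
        tx += f2[m.id2].x - f1[m.id1].x;
        ty += f2[m.id2].y - f1[m.id1].y;
    }
    if (!best.empty()) { tx /= best.size(); ty /= best.size(); }
}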
[TODO] You will have to fill in the missing code in alignPair to:
(Note 3: In ComputeHomography, you will compute the best-fit homography using the Singular Value Decomposition. From lecture 11: "the solution h is the eigenvector of A'A with smallest eigenvalue." Recall that the SVD decomposes a matrix as A = USV', where U and V contain the left and right singular vectors and S is a diagonal matrix of singular values, conventionally ordered from largest to smallest. Furthermore, there is a very strong connection between singular vectors and eigenvectors. Consider: A'A = (VSU')(USV') = V(S^2)V'. That is, the right singular vectors of A are eigenvectors of A'A, and the eigenvalues of A'A are the squares of the singular values of A. Returning to the problem, this means that the solution h is the right singular vector corresponding to the smallest singular value. For more details, the Wikipedia article on the SVD is very good.)
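For illustration only, the sketch below builds the direct linear transform (DLT) matrix A from matched point pairs and extracts h as the last right singular vector using Eigen's SVD. The Corr struct is a hypothetical stand-in for your match representation, and the skeleton may supply its own SVD routine rather than Eigen.

#include <Eigen/Dense>
#include <vector>

struct Corr { double x, y, xp, yp; };   // a matched pair (x, y) -> (xp, yp)

Eigen::Matrix3d computeHomographyDLT(const std::vector<Corr> &c)
{
    // Each correspondence contributes two rows to A (the DLT constraints).
    Eigen::MatrixXd A(2 * c.size(), 9);
    for (int i = 0; i < (int)c.size(); i++) {
        double x = c[i].x, y = c[i].y, xp = c[i].xp, yp = c[i].yp;
        A.row(2 * i)     << x, y, 1, 0, 0, 0, -xp * x, -xp * y, -xp;
        A.row(2 * i + 1) << 0, 0, 0, x, y, 1, -yp * x, -yp * y, -yp;
    }

    // h is the right singular vector for the smallest singular value
    // (the last column of V, since singular values are sorted descending).
    Eigen::JacobiSVD<Eigen::MatrixXd> svd(A, Eigen::ComputeFullV);
    Eigen::VectorXd h = svd.matrixV().col(8);

    Eigen::Matrix3d H;
    H << h(0), h(1), h(2),
         h(3), h(4), h(5),
         h(6), h(7), h(8);
    return H / H(2, 2);   // normalize so H(2,2) = 1
}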
[TODO] Given the warped images and their relative displacements, figure out how large the final stitched image will be and compute each image's absolute displacement in the panorama (BlendImages).
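One way to do this, sketched below under assumed types (the Xform struct and apply helper are placeholders, not the skeleton's CTransform3x3 API), is to map each image's four corners through its accumulated transform and take the bounding box of all the mapped corners.

#include <algorithm>
#include <cmath>
#include <vector>

struct Xform { double m[3][3]; };   // stand-in for a 3x3 transform

static void apply(const Xform &M, double x, double y, double &xo, double &yo)
{
    double w = M.m[2][0] * x + M.m[2][1] * y + M.m[2][2];
    xo = (M.m[0][0] * x + M.m[0][1] * y + M.m[0][2]) / w;
    yo = (M.m[1][0] * x + M.m[1][1] * y + M.m[1][2]) / w;
}

// Computes the output width/height and the offset that shifts every image
// into positive panorama coordinates.
void panoramaExtent(const std::vector<Xform> &absXforms, int w, int h,
                    int &outW, int &outH, double &offX, double &offY)
{
    double minX = 1e30, minY = 1e30, maxX = -1e30, maxY = -1e30;
    const double cx[4] = {0.0, (double)w - 1, 0.0, (double)w - 1};
    const double cy[4] = {0.0, 0.0, (double)h - 1, (double)h - 1};
    for (size_t i = 0; i < absXforms.size(); i++) {
        for (int c = 0; c < 4; c++) {
            double x, y;
            apply(absXforms[i], cx[c], cy[c], x, y);
            minX = std::min(minX, x); maxX = std::max(maxX, x);
            minY = std::min(minY, y); maxY = std::max(maxY, y);
        }
    }
    offX = -minX; offY = -minY;                 // shift so the top-left is (0, 0)
    outW = (int)std::ceil(maxX - minX) + 1;
    outH = (int)std::ceil(maxY - minY) + 1;
}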
[TODO] Then, resample each image to its final location and blend it with its neighbors (AccumulateBlend, NormalizeBlend). Try a simple feathering function as your weighting function (see mosaics lecture slide on "feathering") (this is a simple 1-D version of the distance map described in [Szeliski & Shum]). For extra credit, you can try other blending functions or figure out some way to compensate for exposure differences. In NormalizeBlend, remember to set the alpha channel of the resultant panorama to opaque!
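A rough illustration of a 1-D feathering weight of this kind is shown below; the function name and blendWidth parameter are ours, not the skeleton's. AccumulateBlend would add weight-scaled pixel values (and the weight itself) into the accumulator, and NormalizeBlend would divide by the total weight and set the alpha channel to opaque.

#include <algorithm>

// Weight in [0, 1] for column x of an image 'width' pixels wide: the weight
// ramps up linearly over 'blendWidth' pixels from each vertical edge.
double featherWeight(int x, int width, int blendWidth)
{
    double distToEdge = std::min(x + 1, width - x);   // pixels to nearest vertical edge
    return std::min(1.0, distToEdge / (double)blendWidth);
}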
[TODO] Crop the resulting image so that the left and right edges match up seamlessly (BlendImages). The horizontal extent can be computed in the previous blending routine, since the first image occurs at both the left and right end of the stitched sequence (draw the "cut" line halfway through this image). Apply a linear warp to the mosaic to remove any vertical "drift" between the first and last image. This warp, of the form y' = y + ax, should transform the y coordinates of the mosaic such that the first image has the same y-coordinate on both the left and right end. Calculate the value of 'a' needed to perform this transformation.
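For example, a minimal computation of the shear coefficient 'a' (the variable names are illustrative): if the copy of the first image at the right end of the mosaic sits some number of pixels lower than its copy at the left end, shearing by a = -drift / width brings the two ends back to the same height.

// yLeft, yRight: y-coordinate of the first image at the left and right ends
// of the mosaic; mosaicWidth: horizontal extent of the cropped panorama.
double driftShearCoefficient(double yLeft, double yRight, double mosaicWidth)
{
    double drift = yRight - yLeft;   // vertical drift accumulated around the loop
    return -drift / mosaicWidth;     // 'a' in y' = y + a * x
}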
Note that you can also use SIFT features to do the alignment, which can be useful if your feature detection and matching code from Project 2 is not working sufficiently well. To do so, add the word sift to the end of the command, as in:

Panorama alignPair warp1.key warp2.key match1to2.txt 200 4 sift
Sample SIFT features and matches have been provided to you with the small test sequence above. To extract SIFT features from a new set of images, you can download David Lowe's SIFT binary here (Windows and Linux binaries available). Note that you will need to convert images to .pgm format before running SIFT on them.
You may also refer to the file stitch2.txt provided along with the skeleton code for the appropriate command line syntax. This command-line interface allows you to debug each stage of the program independently.
You can use the test results included in the images/ folder to check whether your program is running correctly. Comparing your output to that of the sample solution is also a good way of debugging your program.
Here is a list of suggestions for extending the program for extra credit. You are encouraged to come up with your own extensions. We're always interested in seeing new, unanticipated ways to use this program!
First, your source code and executable should be zipped up into an archive called 'code.zip', and uploaded to CMS. In addition, turn in a web page describing your approach and results. In particular:
Panorama Mosaic Stitching
This portion of the web page should contain the following:
The webpage (along with all images in JPEG format) should be uploaded to CMS in a zip file called 'webpage.zip'. If you are unfamiliar with HTML you can use any web-page editor such as FrontPage, Word, or Visual Studio 7.0 to make your web-page.
Last modified on October 6, 2012