
CS 435 - Computational Photography

Final Project - Panoramic Stitching

YOU MAY WORK WITH A PARTNER IF YOU LIKE!!!

But if you do so, look at the additional information you need to provide in your submission (stated at the end of the document).

Introduction

For our final assignment, we’ll attack the problem of creating a panoramic photo. This will require several ideas from this course, including:

• Least Squares Estimate (LSE) for Transformation Matrix Discovery
• Projection
• Blending
• Interest Point Discovery (subsampling, gradients, edges)
• Representation (feature extraction)
• Feature Matching (point correspondences)

You may use functions that you were allowed to use in prior assignments, in particular things like edge, imgradientxy, imgaussfilt, etc. However, you may not use Matlab functions that do the new work in this assignment, in particular functions that find keypoints and/or perform transformations (like imtransform, imregionalmax, imwarp, etc.). In addition, you cannot use anything from the Computer Vision or Machine Learning toolboxes. This is not an exhaustive list, but hopefully you get the idea. If in doubt, ask your instructor!

The Dataset

For the programming component of this assignment, take two pictures, one slightly offset from the other (via rotation and/or translation). Make sure that the two images have significant overlap of content.

Grading

Hard Coded Correspondences                     10pts
Panoramic using hard-coded correspondences     30pts
Image Pyramids                                 10pts
Extrema Points                                 10pts
Keypoint Matching                              10pts
Automatic Stitching                            10pts
Success on Additional Tests                    12pts
Report quality and ease of running code         8pts
TOTAL                                         100pts

Table 1: Grading Rubric

1 (10 points) Hard Coding Point Correspondences

Let’s start off by hard coding some point correspondences. Look at each image and choose four point correspondences. Do not make this process interactive. Hard code the coordinates at the top of your script.

Display the images side-by-side (as one image) with the point correspondences color coded as dots in the image. An example can be found in Figure 1.

Figure 1: Manual Correspondences
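As a starting point, here is a minimal sketch of what this might look like in Matlab. The filenames and coordinate values are placeholders, not values from the assignment.

    % Hard-coded correspondences and a side-by-side visualization (sketch).
    imL = im2double(imread('left.jpg'));    % placeholder filenames
    imR = im2double(imread('right.jpg'));

    % Four hand-picked correspondences, one [x y] pair per row (placeholders).
    pts1 = [120 340; 410 220; 655 480; 300 100];   % points in imL
    pts2 = [ 40 335; 330 215; 570 478; 215  98];   % matching points in imR

    % Pad the shorter image so both fit into one side-by-side array.
    h   = max(size(imL,1), size(imR,1));
    pad = @(I) padarray(I, [h - size(I,1), 0], 0, 'post');
    combo  = [pad(imL) pad(imR)];
    offset = size(imL,2);                  % x-shift for points drawn on imR

    figure; imshow(combo); hold on;
    colors = {'r','g','b','y'};            % one color per correspondence
    for i = 1:4
        plot(pts1(i,1),          pts1(i,2), 'o', 'MarkerFaceColor', colors{i});
        plot(pts2(i,1) + offset, pts2(i,2), 'o', 'MarkerFaceColor', colors{i});
    end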

2 (30 points) Compute Transformation Matrix, Project, and Blend!

Next, use the four points you identified in the previous part to compute the transformation matrix that maps one image to the other. You can determine which image you want to be the “base” image.
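One way to set this up is the standard least-squares system. The sketch below assumes an affine model fit to the hard-coded points pts1/pts2 from Part 1 (a projective/homography model solved the same way is equally valid); it maps the second image into the base image’s frame.

    % Least-squares fit of an affine transformation T mapping points in the
    % second image (pts2) onto points in the base image (pts1). Sketch only.
    n = size(pts2,1);
    A = zeros(2*n, 6);
    b = zeros(2*n, 1);
    for i = 1:n
        x = pts2(i,1); y = pts2(i,2);
        A(2*i-1,:) = [x y 1 0 0 0];
        A(2*i,  :) = [0 0 0 x y 1];
        b(2*i-1)   = pts1(i,1);
        b(2*i)     = pts1(i,2);
    end
    p = A \ b;                              % LSE solution of A*p = b
    T = [p(1) p(2) p(3);
         p(4) p(5) p(6);
         0    0    1   ];

    % To project a point [x y] from the second image into the base frame:
    % q = T * [x; y; 1];  projected = q(1:2)';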

After determining the transformation matrix, we need to determine the dimensions of the new combined image. The height of this image should be the maximum of the base image’s height or the maximum projected y value from the other image. The width will be equal to the maximum of the base image’s width or the maximum projected x value from the other image.

Finally, we need to populate our new image with pixel(s) from the base and projected images. To do this, go through each location in your new image and grab the corresponding pixels from the base and/or projected image (you’ll need to determine where, if anywhere, these come from). If both images map to that location, you’ll want to blend them (using a technique of your choosing).

An example can be found in Figure 2.

Figure 2: Stitched images using manual correspondences

3 (10 points) Create Scale-Space Image Pyramids

Now on to the tough(er) stuff! We want to automate all this!

The first step is to automatically identify locations of interest. To do this we’ll find the stable local maxima in scale-space for each image. And the first step of that is to create image pyramids!

Here are some hyperparameters we’ll use to create our image pyramids:

• Find the extrema in grayscale.

• Create five scales per octave.

• The initial scale will have a standard deviation of σ0 = 1.6.

• Each subsequent scale will have a σ value that is k = 2 times larger than the previous.

• Each Gaussian kernel will have a width and height that is three times the filter’s σ value, i.e. w = 3σ.

• Create four octaves, each 1/4 of the size of the previous octave, obtained by subsampling every other row and column of the previous octave (no interpolation).

In general, given octave n and scale m, you can compute σ as:

σ = 2^(n-1) · k^(m-1) · σ0

In your report, show all the images for each octave for one of your images. Something similar to Figure 3.

Figure 3: Image Pyramid
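A minimal sketch of this construction follows. Note that because each octave is subsampled by a factor of 2, blurring with k^(m-1)·σ0 inside octave n corresponds to 2^(n-1)·k^(m-1)·σ0 in the original image’s coordinates; the rounding of the filter size to an odd integer is an implementation choice, not part of the assignment.

    % Scale-space pyramid: 4 octaves, 5 scales per octave, sigma0 = 1.6, k = 2.
    gray   = rgb2gray(imL);                 % extrema are found in grayscale
    sigma0 = 1.6;  k = 2;
    numOctaves = 4;  numScales = 5;
    pyramid = cell(numOctaves, numScales);

    base = gray;
    for n = 1:numOctaves
        for m = 1:numScales
            sigma = k^(m-1) * sigma0;       % sigma relative to this octave
            w = max(3, round(3*sigma));     % kernel width w = 3*sigma
            w = 2*floor(w/2) + 1;           % imgaussfilt needs an odd size
            pyramid{n,m} = imgaussfilt(base, sigma, 'FilterSize', w);
        end
        base = base(1:2:end, 1:2:end);      % keep every other row/column
    end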

4 (10 points) Finding the Local Maxima

Next, for each octave of each image, locate the local maxima, as discussed in class. These locations then need to be in terms of the original image’s size (i.e. the first octave), which can be done by multiplying their locations by 2^(n-1), where again n is the current octave.

After identifying all the extrema, we want to remove the unstable ones, i.e. those that are edge pixels and/or in areas of low contrast. To do this:

• To find edge pixels, use Matlab’s edge function. This will return a binary image (where a value of one indicates that the pixel is an edge pixel). Use that (perhaps along with Matlab’s find and setdiff functions) to eliminate extrema that are also edge pixels.

• We will also eliminate extrema that are too close to the border of the image. You can determine what “too close” means, but your choice will likely be related to your descriptor decision in Part 5 (and how large of a region around the keypoints you’ll use to form the descriptors).

• Finally, for each remaining extremum, compute the standard deviation of a patch around it. If this standard deviation is less than some threshold, then the patch has low contrast and thus should be eliminated from the extrema list. Once again, you can decide on the size of the patch and the threshold based on experimentation.

For your report, provide two images for each input image: one with all the extrema superimposed on it (indicated by red circles), and one after the unstable extrema were removed. As an example, see Figures 4-5. A sketch of this pruning step follows Figure 5.

Figure 4: All extrema points

Figure 5: Pruned extrema points
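A minimal sketch of the pruning step is below. It assumes extrema is an N-by-2 list of [row col] locations already mapped back to the original image’s resolution, and the border margin, patch size, and contrast threshold are placeholder values to be tuned.

    % Prune unstable extrema: edge pixels, border points, low-contrast patches.
    E = edge(gray, 'canny');                 % binary edge map (1 = edge pixel)
    border    = 8;                           % "too close to the border" margin
    patchRad  = 4;                           % patch is (2*4+1)-by-(2*4+1)
    stdThresh = 0.03;                        % contrast threshold (double image)

    keep = false(size(extrema,1), 1);
    for i = 1:size(extrema,1)
        r = extrema(i,1);  c = extrema(i,2);
        % Reject points too close to the border (so later patches fit).
        if r <= border || c <= border || ...
           r > size(gray,1) - border || c > size(gray,2) - border
            continue;
        end
        % Reject edge pixels.
        if E(r,c)
            continue;
        end
        % Reject low-contrast patches.
        patch   = gray(r-patchRad:r+patchRad, c-patchRad:c+patchRad);
        keep(i) = std(patch(:)) >= stdThresh;
    end
    extrema = extrema(keep, :);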

5 (10 points) Keypoint Description and Matching

For each remaining extremum/keypoint in each image, we’ll want to extract a descriptor and then match the descriptors from one image to ones in the other. To compare keypoints, you will have to determine what distance or similarity measurement to use. Common distance measures are Euclidean and Manhattan; common similarity measures are Cosine, Gaussian, and Histogram Intersection.

The following sections discuss strategies for describing keypoint regions (descriptor extraction) and keypoint matching.

5.1 Descriptors

Given the constraints/assumptions of the problem, describing a patch around a keypoint using the RGB values will likely work well (since it encodes both color and positional information). Thus, if we had a 9 × 9 region around a keypoint, we could describe that keypoint with a vector of size 9 × 9 × 3 = 243 values. However, feel free to experiment with other descriptors (SIFT, local histograms, local GISTs, etc.).
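For example, under those assumptions the raw-RGB descriptor can be built as sketched below; extrema and the color image imL come from the earlier steps, and patchRad matches the 9 × 9 patch size described above.

    % Raw 9x9 RGB patch descriptors, one 243-dimensional row per keypoint.
    patchRad = 4;                                  % (2*4+1) = 9-pixel patch
    desc = zeros(size(extrema,1), (2*patchRad+1)^2 * 3);
    for i = 1:size(extrema,1)
        r = extrema(i,1);  c = extrema(i,2);
        patch = imL(r-patchRad:r+patchRad, c-patchRad:c+patchRad, :);
        desc(i,:) = patch(:)';                     % flatten into a row vector
    end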

5.2 Keypoint Correspondences

To find keypoint correspondences between images, we’ll make a few problem-specific assumptions:

• Correspondences should have roughly the same y value.
• The camera was rotated and/or translated right to obtain the second image.

Our general keypoint matching strategy will be:

1. For each keypoint in the first image, find the best match (using the distance or similarity measurement of your choice) in the second image that satisfies the aforementioned constraints. Call this set C1.

2. For each keypoint in the second image, find the best match (using the distance or similarity measurement of your choice) in the first image that satisfies the aforementioned constraints. Call this set C2.

3. Compute the set intersection of these two sets: C = C1 ∩ C2.

4. Remove from C all correspondences that have a distance above some threshold (or, if you use similarity, below some threshold).
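A minimal sketch of steps 1-4 using Euclidean distance follows. Here desc1/desc2 and kp1/kp2 ([row col] per row) are assumed to be the descriptors and keypoint locations for the two images, and yTol and distThresh are placeholder values to tune; the rightward-motion assumption could be enforced with a similar constraint on x.

    % Mutual best matches with a same-y constraint and a distance threshold.
    yTol = 20;  distThresh = 2.0;            % tune experimentally

    best12 = zeros(size(desc1,1), 1);        % best image-2 match per image-1 keypoint
    for i = 1:size(desc1,1)
        d = sqrt(sum((desc2 - desc1(i,:)).^2, 2));
        d(abs(kp2(:,1) - kp1(i,1)) > yTol) = inf;   % roughly the same y value
        [dmin, j] = min(d);
        if dmin <= distThresh, best12(i) = j; end
    end

    best21 = zeros(size(desc2,1), 1);        % best image-1 match per image-2 keypoint
    for j = 1:size(desc2,1)
        d = sqrt(sum((desc1 - desc2(j,:)).^2, 2));
        d(abs(kp1(:,1) - kp2(j,1)) > yTol) = inf;
        [dmin, i] = min(d);
        if dmin <= distThresh, best21(j) = i; end
    end

    matches = [];                            % C = C1 intersect C2 (mutual matches)
    for i = 1:numel(best12)
        j = best12(i);
        if j > 0 && best21(j) == i
            matches(end+1,:) = [i j];        %#ok<AGROW>
        end
    end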

For visualization (and your report), draw lines between a few matching keypoints, as seen in Figure 6.

Figure 6: Some Point Correspondences

6 (10 points) Find the Transformation Matrix via RANSAC and Stitch

Finally, we want to use the keypoint correspondences to compute a transformation matrix that we can then use to auto-stitch our images.

However, as you may have noticed, many of the point correspondences might not be correct :(. So instead we’ll use a RANSAC (RANdom SAmple Consensus) strategy.

To perform RANSAC for our panoramic stitching:

1. For experiments 1 through N (you choose N):

   (a) Select four correspondences at random.
   (b) Compute the transformation matrix using these correspondences.
   (c) Using the discovered transformation matrix, count how many point correspondences (among all of them) would end up within a few pixels of one another after projection.

2. Keep the transformation matrix resulting in the largest number of point correspondences (among all of them) that ended up within a few pixels of one another after projection.
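A minimal sketch of this loop is below; fitTransform is a hypothetical helper standing in for the least-squares fit from Part 2, matches/kp1/kp2 come from Part 5, and N and the inlier tolerance are placeholder values.

    % RANSAC over the keypoint correspondences (sketch).
    N = 1000;  inlierTol = 3;                % iterations and pixel tolerance (tune)
    p1 = kp1(matches(:,1), [2 1]);           % matched points as [x y] in image 1
    p2 = kp2(matches(:,2), [2 1]);           % matched points as [x y] in image 2

    bestT = eye(3);  bestCount = 0;
    for t = 1:N
        idx = randperm(size(matches,1), 4);          % four random correspondences
        T = fitTransform(p2(idx,:), p1(idx,:));      % hypothetical: LSE fit from Part 2
        proj = (T * [p2, ones(size(p2,1),1)]')';     % project all image-2 points
        proj = proj(:,1:2) ./ proj(:,3);             % (a no-op for an affine T)
        dist = sqrt(sum((proj - p1).^2, 2));
        count = sum(dist < inlierTol);               % consensus size
        if count > bestCount
            bestCount = count;  bestT = T;           % keep the best model so far
        end
    end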

Now use this transformation matrix to stitch your images! In your report include:

• Lines drawn between the keypoint correspondences used to compute your final transformation matrix. See Figure 7.

• Your final stitched image.

Figure 7: Point Correspondences for final transformation matrix

7 (12 points) Additional Tests

For the remaining points we’ll test your code against three other picture pairs. You will get 0-4 points for each, depending on how well they stitched together.

Submission

NOTE: 8 points of your grade is based on being able to run your code easily.

IN ADDITION: With your submission, if you worked with someone else, let me know how evenly the work was split. If each contributed evenly, it would be 50/50. I will use this information to adjust grades for pairs where one partner did more of the work.

For your submission, upload to Blackboard a single zip file containing:

1. A PDF writeup that includes:

   (a) Visualization for Part 1
   (b) Stitched image for Part 2
   (c) Visualization for Part 3
   (d) Visualization for Part 4
   (e) Visualization for Part 5
   (f) Visualization and stitched image for Part 6

2. A README text file (not Word or PDF) that explains:

   • Features of your program
   • Name of your entry-point script
   • Any useful instructions to run your script.

3. Your source files
