How To Make An Augmented Reality Target Shooter Game With OpenCV: Part 3/4

[Image: OpenCV Tutorial]

Defend the world against virtual invaders in this tutorial

This is the third part of a four-part series on implementing Augmented Reality in your games and apps. Check out the first part and the second part of the series here!

Welcome to the third part of this tutorial series! In the first part, you used the AVFoundation classes to create a live video feed for your game, showing video from the rear-facing camera.

In the second part, you learned how to implement the game controls and leverage Core Animation to create some great-looking explosion effects.

Your next task is to implement the target-tracking that brings the Augmented Reality into your app.

If you saved your project from the last part of this tutorial, then you can pick up right where you left off. If you don’t have your previous project, or prefer to start anew, you can download the starter project for this part of the tutorial.

Augmented Reality and Targets

Before you start coding, it’s worth discussing targets for a moment.


From retail shelves to train tickets to advertisements in bus shelters, the humble black-and-white QR code has become an incredibly common sight around the world. QR codes are a good example of what’s technically known as a marker.

Markers are real-world objects placed in the field-of-view of the camera system. Once the computer vision software detects the presence of one or more markers in the video stream, the marker can be used as a point of reference from which to initiate and render the rest of the augmented reality experience.

Marker detection comes in two basic flavors:

  • Marker-Based Object Tracking — The marker must be a black-and-white image composed of geometrically simple shapes such as squares or rectangles, like the QR code above.
  • Markerless Object Tracking — The marker can be pretty much anything you like, including photographs, magazine covers or even human faces or fingertips. You can use almost any color you wish, although color gradients can be difficult for a CV system to classify.

Admittedly the term markerless object tracking is confusing, since you are still tracking an image “marker”, albeit one that is more complicated and colorful than a simple collection of black-and-white squares. To confuse matters even further, you’ll find other authors who lump all of the above image-detection techniques into a single bucket they call “marker-based” object tracking, and who instead reserve the term markerless object tracking for systems where GPS or geolocation services are used to locate and interact with AR resources.

While the distinction between marker-based object tracking and markerless object tracking may seem arbitrary, what it really comes down to is CPU cycles.

Marker-based object tracking systems can utilize very fast edge-detection algorithms running in grayscale mode, so high-probability candidate regions in the video frame — where the marker is most likely to be located — can be quickly identified and processed.

Markerless object tracking, on the other hand, requires far more computational power.

Pattern detection in a markerless object tracking system usually involves three steps:

  1. Feature Detection — The sample image is scanned to identify a collection of keypoints, also called features or points of interest, that uniquely characterize the sample image.
  2. Feature Descriptor Extraction — Once the system has identified a collection of keypoints, it uses a second algorithm to extract a vector of descriptor objects from each keypoint in the collection.
  3. Feature Descriptor Matching — The feature descriptor sets of both the input query image and the reference marker pattern are then compared. The greater the number of matching descriptors that the two sets have in common, the more likely it is that the image regions “match” and that you have “found” the marker you are looking for.

All three stages must be performed on each frame in the video stream, in addition to any other image processing steps needed to adjust for such things as scale- and rotation-invariance of the marker, pose estimation (i.e., the angle between the camera lens and the 2D-plane of the marker), ambient lighting conditions, whether or not the marker is partially occluded, and a host of other factors.

Consequently, marker-based object tracking has generally been the preferred technique on small, hand-held mobile devices, especially early-generation mobile phones. Markerless object tracking, on the other hand, has generally been relegated to larger, iPad-style tablets with their correspondingly greater computational capabilities.
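To make the three markerless steps above a little more concrete, here is a minimal illustrative sketch using OpenCV 2.4's ORB detector and a brute-force Hamming matcher. This is not the technique this tutorial uses, and the function name is made up for illustration; it's only meant to show the shape of a feature-based pipeline:

#include <opencv2/opencv.hpp>
 
// Illustrative only: count descriptor matches between a marker and a video frame
// using ORB keypoints and brute-force Hamming matching (OpenCV 2.4-style API).
int countPatternMatches(const cv::Mat& markerGray, const cv::Mat& frameGray)
{
    cv::ORB orb;  // serves as both the feature detector and the descriptor extractor
    std::vector<cv::KeyPoint> markerKeypoints, frameKeypoints;
    cv::Mat markerDescriptors, frameDescriptors;
 
    // (1) Feature detection and (2) feature descriptor extraction, in one call per image
    orb(markerGray, cv::Mat(), markerKeypoints, markerDescriptors);
    orb(frameGray,  cv::Mat(), frameKeypoints,  frameDescriptors);
 
    // (3) Feature descriptor matching: the more matches survive the cross-check,
    //     the more likely it is that the marker appears somewhere in the frame
    cv::BFMatcher matcher(cv::NORM_HAMMING, true);
    std::vector<cv::DMatch> matches;
    matcher.match(markerDescriptors, frameDescriptors, matches);
    return (int)matches.size();
}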

Designing the Pattern Detector

In this tutorial you’ll take the middle ground between these two standard forms of marker detection.

Your target pattern is more complicated than a simple black-and-white QR code, but not by much. You should be able to cut some corners while still retaining most of the benefits of markerless object tracking.

Take another look at the target pattern you’re going to use as a marker:

[Image: the bull's-eye target marker pattern]

Clearly you don’t have to worry about rotational invariance as the pattern is already rotationally symmetrical. You won’t have to deal with pose estimation in this tutorial as you’ll keep things simple and assume that the target will be displayed on a flat surface with your camera held nearly parallel to the target.

In other words, you won’t need to handle the case where someone prints out a hard copy of the target marker, lays it down on the floor somewhere and tries to shoot it from across the room at weird angles.

The fastest OpenCV API that meets all these requirements is cv::matchTemplate(). It takes the following four arguments:

  1. Query Image — This is the input image which is searched for the target. In your case, this is a video frame captured from the camera.
  2. Template Image — This is the template pattern you are searching for. In your case, this is the bull’s-eye target pattern illustrated above.
  3. Output Array — An output array of floats that range from 0.0 to 1.0. This is the “answer” you’re looking for. Candidate match regions are indicated by areas where these values reach local minima or maxima. Whether the best possible match is indicated by a minimum or maximum is determined by the statistical matching heuristic used to compare the images as explained below.
  4. Matching Method — One of six possible parameters specifying the statistical heuristic to use when comparing the query and template images. In your case, better matches will correspond to higher numerical values in the output array. However, OpenCV supports matching heuristics where better matches are indicated by lower numerical values in the output array as well.

The caller must ensure that the dimensions of the template image fit within those of the query image and that the dimensions of the output array are sized correctly relative to the dimensions of both the query image and the template pattern.

The matching algorithm used by cv::matchTemplate() is based on a Fast Fourier Transform (FFT) of the two images and is highly optimized for speed.

cv::matchTemplate() does what it says on the tin:

  • It “slides” the template pattern over the top of the query image, one pixel at a time.
  • At each pixel increment, it compares the template pattern with the “windowed sub-image” of the underlying query image to see how well the two images match.
  • The quality of the match at that point is normalized on a scale of 0.0 to 1.0 and saved in the output array.

Once the algorithm terminates, an API like cv::minMaxLoc() can be used to identify both the point at which the best match occurs and the quality of the match at that point. You can also set a “confidence level” below which you will ignore candidate matches as simple noise.

A moment’s reflection should convince you that if the dimensions of the query image are (W,H), and the dimensions of the template pattern are (w,h), with 0 < w < W and 0 < h < H, then the dimensions of the output array must be (W-w+1, H-h+1).

The following picture may be worth a thousand words in this regard:

[Image: diagram of a query image of size (W,H), a template pattern of size (w,h), and the resulting output array of size (W-w+1, H-h+1)]

There's one tradeoff you'll make with this API — scale-invariance. If you're searching an input frame for a 200 x 200 pixel target marker, then you're going to have to hold the camera at just the right distance away from the marker so that it fills approximately 200 x 200 pixels on the screen.

The sizes of the two images don't have to match exactly, but the detector won't track the target if your device is too far away from, or too close to the marker pattern.
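To see how the pieces described above fit together before they show up inside the pattern detector, here is a hedged, standalone sketch of the cv::matchTemplate() / cv::minMaxLoc() workflow. The file names are placeholders, and the 0.5 cutoff anticipates the confidence threshold the detector will use later in this tutorial:

#include <opencv2/opencv.hpp>
#include <cstdio>
 
int main()
{
    // Placeholder file names; load both images directly in grayscale
    cv::Mat queryImage    = cv::imread("frame.png",  CV_LOAD_IMAGE_GRAYSCALE);
    cv::Mat templateImage = cv::imread("target.jpg", CV_LOAD_IMAGE_GRAYSCALE);
 
    // Output array dimensions: (W-w+1, H-h+1), exactly as described above
    cv::Mat result(queryImage.rows - templateImage.rows + 1,
                   queryImage.cols - templateImage.cols + 1,
                   CV_32FC1);
 
    cv::matchTemplate(queryImage, templateImage, result, CV_TM_CCOEFF_NORMED);
 
    // With CV_TM_CCOEFF_NORMED, the best candidate match is the maximum value
    double minVal, maxVal;
    cv::Point minLoc, maxLoc;
    cv::minMaxLoc(result, &minVal, &maxVal, &minLoc, &maxLoc);
 
    std::printf("best match %.3f at (%d, %d)\n", maxVal, maxLoc.x, maxLoc.y);
    return (maxVal > 0.5) ? 0 : 1;  // treat anything below 0.5 as "marker not found"
}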

Converting Image Formats

It's time to start integrating the OpenCV APIs into your AR game.

OpenCV uses its own high-performance, platform-independent container for managing image data. Therefore you must implement your own helper methods for converting the image data back and forth between the formats used by OpenCV and UIKit.

This type of data conversion is often best accomplished using categories. The starter project you downloaded contains a UIImage+OpenCV category for performing these conversions; it's located in the Detector group, but it hasn't been implemented yet. That's your job! :]

Open UIImage+OpenCV.h and add the following three method declarations:

@interface UIImage (OpenCV)
 
#pragma mark -
#pragma mark Generate UIImage from cv::Mat
+ (UIImage*)fromCVMat:(const cv::Mat&)cvMat;
 
#pragma mark -
#pragma mark Generate cv::Mat from UIImage
+ (cv::Mat)toCVMat:(UIImage*)image;
- (cv::Mat)toCVMat;
 
@end

The function of these methods is fairly clear from their signatures:

  • The first two declarations are for static class methods that convert an OpenCV image container into a UIImage, and vice-versa.
  • The final declaration is for an instance method that converts a UIImage object directly into an OpenCV image container.

You'll be providing the code for these methods in the next few paragraphs, so be prepared for a few warnings. These warnings will go away once you finish adding all the methods.

Note: If you find the syntax cv::Mat to be an odd way of designating an image reference, you're not alone. cv::Mat is actually a reference to a 2-D algebraic matrix, which is how OpenCV2 stores image data internally for reasons of performance and convenience.

The older, legacy version of OpenCV used two very similar, almost interchangeable data structures for the same purpose: CvMat and IplImage. CvMat is also simply a 2-D matrix, while the "Ipl" in IplImage stands for the Intel Image Processing Library and hints at OpenCV's roots with the chip manufacturing giant.

Open UIImage+OpenCV.mm and add the following code:

+ (cv::Mat)toCVMat:(UIImage*)image
{
    // (1) Get image dimensions
    CGFloat cols = image.size.width;
    CGFloat rows = image.size.height;
 
    // (2) Create OpenCV image container, 8 bits per component, 4 channels
    cv::Mat cvMat(rows, cols, CV_8UC4);
 
    // (3) Create CG context and draw the image
    CGContextRef contextRef = CGBitmapContextCreate(cvMat.data,
                                                    cols,
                                                    rows,
                                                    8,
                                                    cvMat.step[0],
                                                    CGImageGetColorSpace(image.CGImage),
                                                    kCGImageAlphaNoneSkipLast | 
                                                    kCGBitmapByteOrderDefault);
 
    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
    CGContextRelease(contextRef);
 
    // (4) Return OpenCV image container reference
    return cvMat;
}

This static method converts an instance of UIImage into an OpenCV image container. It works as follows:

  1. You retrieve the width and height attributes of the UIImage.
  2. You then construct a new OpenCV image container of the specified width and height. The CV_8UC4 flag indicates that the image consists of 4 color channels — red, green, blue and alpha — and that each channel consists of 8 bits per component.
  3. Next you create a Core Graphics context and draw the image data from the UIImage object into that context.
  4. Finally, return the OpenCV image container reference to the caller.

The corresponding instance method is even simpler.

Add the following code to UIImage+OpenCV.mm:

- (cv::Mat)toCVMat
{
    return [UIImage toCVMat:self];
}

This is a convenience method which can be invoked directly on UIImage objects, converting them to cv::Mat format using the static method you just defined above.

Add the following code to UIImage+OpenCV.mm:

+ (UIImage*)fromCVMat:(const cv::Mat&)cvMat
{
    // (1) Construct the correct color space
    CGColorSpaceRef colorSpace;
    if ( cvMat.channels() == 1 ) {
        colorSpace = CGColorSpaceCreateDeviceGray();
    } else {
        colorSpace = CGColorSpaceCreateDeviceRGB();
    }
 
    // (2) Create image data reference 
    CFDataRef data = CFDataCreate(kCFAllocatorDefault, cvMat.data, (cvMat.elemSize() * cvMat.total()));
 
    // (3) Create CGImage from cv::Mat container
    CGDataProviderRef provider = CGDataProviderCreateWithCFData(data);
    CGImageRef imageRef = CGImageCreate(cvMat.cols,
                                        cvMat.rows,
                                        8,
                                        8 * cvMat.elemSize(),
                                        cvMat.step[0],
                                        colorSpace,
                                        kCGImageAlphaNone | kCGBitmapByteOrderDefault,
                                        provider,
                                        NULL,
                                        false,
                                        kCGRenderingIntentDefault);
 
    // (4) Create UIImage from CGImage
    UIImage * finalImage = [UIImage imageWithCGImage:imageRef];
 
    // (5) Release the references
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CFRelease(data);
    CGColorSpaceRelease(colorSpace);
 
    // (6) Return the UIImage instance
    return finalImage;
}

This static method converts an OpenCV image container into an instance of UIImage as follows:

  1. It first creates a new color space: a grayscale color space if the image has only one color channel, or an RGB color space if it has multiple color channels.
  2. Next, the method creates a new Core Foundation data reference that points to the image container's data. elemSize() returns the size of an image pixel in bytes, while total() returns the total number of pixels in the image. The total size of the byte array to be allocated comes from multiplying these two numbers.
  3. It then constructs a new CGImage reference that points to the OpenCV image container.
  4. Next it constructs a new UIImage object from the CGImage reference.
  5. Then it releases the locally defined Core Foundation objects before exiting the method.
  6. Finally, it returns the newly-constructed UIImage instance to the caller.

Build and run your project; nothing visible has changed with your game, but occasional incremental builds are good practice, if only to validate that newly added code hasn't broken anything. If you'd like a slightly stronger check than a clean build, try the quick sketch below.
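The following is a throwaway sanity check you could temporarily drop into a .mm file such as ViewController.mm (with the category header imported). It assumes target.jpg is in the app bundle, which is the same image the detector will use later in this tutorial:

// Temporary sanity check for the UIImage+OpenCV category; remove it once you're satisfied
UIImage *original = [UIImage imageNamed:@"target.jpg"];
cv::Mat mat = [original toCVMat];               // UIKit -> OpenCV, 4-channel RGBA
cv::Mat gray;
cv::cvtColor(mat, gray, CV_RGBA2GRAY);          // reduce to a single grayscale channel
UIImage *converted = [UIImage fromCVMat:gray];  // OpenCV -> UIKit
NSLog(@"pattern is %d x %d pixels; converted image size is %@",
      mat.cols, mat.rows, NSStringFromCGSize(converted.size));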

Pattern Detection Using OpenCV

Next you'll implement the pattern detector for your AR blaster game.

This class serves as the heart-and-soul for your AR target blaster game, so this section deserves your undivided attention (but I know you'd pay attention anyway!). :]

You're going to write the pattern detector in C++ for two reasons: better performance, and because the OpenCV SDK you'll be interfacing with is also written in C++.

Add the following code to PatternDetector.h:

#include "VideoFrame.h"
 
class PatternDetector
{
#pragma mark -
#pragma mark Public Interface
public:
    // (1) Constructor
    PatternDetector(const cv::Mat& pattern);
 
    // (2) Scan the input video frame
    void scanFrame(VideoFrame frame);
 
    // (3) Match APIs
    const cv::Point& matchPoint();
    float matchValue();
    float matchThresholdValue();
 
    // (4) Tracking API
    bool isTracking();
 
#pragma mark -
#pragma mark Private Members
private:
    // (5) Reference Marker Images
    cv::Mat m_patternImage;
    cv::Mat m_patternImageGray;
    cv::Mat m_patternImageGrayScaled;
 
    // (6) Supporting Members
    cv::Point m_matchPoint;
    int m_matchMethod;
    float m_matchValue;
    float m_matchThresholdValue;
    float m_scaleFactor;
};

Here's what's going on in the interface above:

  1. The constructor takes a reference to the marker pattern to look for. You'll pass a reference to the bull's-eye target marker pattern as an argument into this constructor.
  2. The object provides an API for scanning input video frames and searching those frames for instances of the marker pattern used to initialize the object in the constructor. You'll pass the video frames as they are captured as arguments into this API. Depending on the power of your hardware, you can expect to invoke this API at least 20 to 30 times per second.
  3. The object provides a collection of APIs for reporting match scores, and is able to provide the exact point in the video frame where a candidate match has been identified; the confidence, or match score, with which that candidate match has been made; and the threshold confidence, below which candidate matches will be discarded as spurious noise.
  4. The object provides a boolean API indicating whether or not it is presently tracking the marker. If the current confidence level, or match score, exceeds the threshold level, this API returns TRUE. Otherwise, it returns FALSE.
  5. m_patternImage is a reference to the original marker pattern; in your code, this will be the bull's-eye target marker pattern. m_patternImageGray is a grayscale version of m_patternImage; most image processing algorithms run an order of magnitude faster on grayscale images than on color images. m_patternImageGrayScaled is a smaller version of m_patternImageGray, and is the actual image reference used for pattern detection, with its size optimized for speed.
  6. The remaining elements are simply supporting data members, whose purpose will become clear as you work your way through the rest of this tutorial.

Add the following code to the top of PatternDetector.cpp, just beneath the include directives:

const float kDefaultScaleFactor    = 2.00f;
const float kDefaultThresholdValue = 0.50f;

  • kDefaultScaleFactor is the amount by which m_patternImageGrayScaled will be scaled down from m_patternImageGray. In your code, you'll cut the image dimensions down by a factor of two, improving performance by roughly a factor of four, since the scaled image has about a quarter of the area of the original.
  • Normalized match scores range from 0.0 to 1.0. kDefaultThresholdValue specifies the score below which candidate matches will be discarded as spurious. In your code, you'll discard candidate matches unless the reported confidence of the match is higher than 0.5.

Now add the following definition for the constructor to PatternDetector.cpp:

PatternDetector::PatternDetector(const cv::Mat& patternImage)
{
    // (1) Save the pattern image
    m_patternImage = patternImage;
 
    // (2) Create a grayscale version of the pattern image
    switch ( patternImage.channels() )
    {
        case 4: /* 3 color channels + 1 alpha */
            cv::cvtColor(m_patternImage, m_patternImageGray, CV_RGBA2GRAY);
            break;
        case 3: /* 3 color channels */
            cv::cvtColor(m_patternImage, m_patternImageGray, CV_RGB2GRAY);
            break;
        case 1: /* 1 color channel, grayscale */
            m_patternImageGray = m_patternImage;
            break;
    }
 
    // (3) Scale the gray image
    m_scaleFactor = kDefaultScaleFactor;
    float h = m_patternImageGray.rows / m_scaleFactor;
    float w = m_patternImageGray.cols / m_scaleFactor;
    cv::resize(m_patternImageGray, m_patternImageGrayScaled, cv::Size(w,h));
 
    // (4) Configure the tracking parameters
    m_matchThresholdValue = kDefaultThresholdValue;
    m_matchMethod = CV_TM_CCOEFF_NORMED;
}

  1. You first save a reference to the original marker pattern.
  2. You then convert the original marker pattern to grayscale using the OpenCV function cv::cvtColor() to reduce the number of color channels if necessary.
  3. You reduce the dimensions of the grayscale marker pattern by a factor of m_scaleFactor — in your code, this is set to 2.
  4. CV_TM_CCOEFF_NORMED is one of six possible matching heuristics used by OpenCV to compare images. With this heuristic, increasingly better matches are indicated by increasingly large numerical values (i.e., closer to 1.0).

Add the following definition to PatternDetector.cpp:

void PatternDetector::scanFrame(VideoFrame frame)
{
    // (1) Build the grayscale query image from the camera data
    cv::Mat queryImageGray, queryImageGrayScale;
    cv::Mat queryImage = cv::Mat(frame.height, frame.width, CV_8UC4, frame.data, frame.stride);
    cv::cvtColor(queryImage, queryImageGray, CV_BGR2GRAY);
 
    // (2) Scale down the image
    float h = queryImageGray.rows / m_scaleFactor;
    float w = queryImageGray.cols / m_scaleFactor;
    cv::resize(queryImageGray, queryImageGrayScale, cv::Size(w,h));
 
    // (3) Perform the matching
    int rows = queryImageGrayScale.rows - m_patternImageGrayScaled.rows + 1;
    int cols = queryImageGrayScale.cols - m_patternImageGrayScaled.cols + 1;
    cv::Mat resultImage = cv::Mat(rows, cols, CV_32FC1);
    cv::matchTemplate(queryImageGrayScale, m_patternImageGrayScaled, resultImage, m_matchMethod);
 
    // (4) Find the min/max settings
    double minVal, maxVal;
    cv::Point minLoc, maxLoc;
    cv::minMaxLoc(resultImage, &minVal, &maxVal, &minLoc, &maxLoc, cv::Mat());
    switch ( m_matchMethod ) {
        case CV_TM_SQDIFF:
        case CV_TM_SQDIFF_NORMED:
            m_matchPoint = minLoc;
            m_matchValue = minVal;
            break;
        default:
            m_matchPoint = maxLoc;
            m_matchValue = maxVal;
            break;
    }
}

Here's what you do in the code above:

  1. Construct a new cv::Mat image container from the video frame data. Then convert the image container to grayscale mode to accelerate the speed at which matches are performed.
  2. Reduce the dimensions of the grayscale image container by a factor of m_scaleFactor to further accelerate things.
  3. Invoke cv::matchTemplate() at this point. The calculation used here to determine the dimensions of the output array was discussed earlier. The output array will be populated with floats ranging from 0.0 to 1.0 with higher numbers indicating greater confidence in the candidate match for that point.
  4. Use cv::minMaxLoc() to identify the largest value in the frame, as well as the exact value at that point. For most of the matching heuristics used by OpenCV — including the one you're using — larger numbers correspond to better matches. However, for the matching heuristics CV_TM_SQDIFF and CV_TM_SQDIFF_NORMED, better matches are indicated by lower numerical values; you handle these as special cases in a switch block.

Note: OpenCV documentation frequently speaks of "brightness values" in connection with the values saved in the output array. Larger values are considered "brighter" than the others. In OpenCV, images and matrices share the same data type: cv::Mat.

Since the type of resultImage is cv::Mat, the output array can be rendered on-screen as a black-and-white image where brighter pixels indicate better match points between the two images. This can be extremely useful when debugging.
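For instance, if you exposed the result matrix from the detector through a small accessor of your own, you could display it with the category you wrote earlier. The sketch below is hypothetical: lastResultImage() is not part of the tutorial code, and you would need to store resultImage in a member variable for it to work:

// Hypothetical debugging aid; assumes a `const cv::Mat& lastResultImage();`
// accessor added to PatternDetector that returns the stored result matrix.
cv::Mat result8Bit;
m_detector->lastResultImage().convertTo(result8Bit, CV_8U, 255.0);  // map 0.0-1.0 floats to 0-255 bytes
UIImage *debugImage = [UIImage fromCVMat:result8Bit];
[[self backgroundImageView] setImage:debugImage];  // brighter pixels indicate better matches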

Add the following code to PatternDetector.cpp:

const cv::Point& PatternDetector::matchPoint()
{
    return m_matchPoint;
}
 
float PatternDetector::matchValue()
{
    return m_matchValue;
}
 
float PatternDetector::matchThresholdValue()
{
    return m_matchThresholdValue;
}

These are three simple accessors, nothing more.

Add the following code to PatternDetector.cpp:

bool PatternDetector::isTracking()
{
    switch ( m_matchMethod ) {
        case CV_TM_SQDIFF:
        case CV_TM_SQDIFF_NORMED:
            return m_matchValue < m_matchThresholdValue;
        default:
            return m_matchValue > m_matchThresholdValue;
    }
}

Just as you did above with scanFrame(), the two heuristics CV_TM_SQDIFF and CV_TM_SQDIFF_NORMED must be handled here as special cases.

Using the Pattern Detector

In this section you're going to integrate the pattern detector with the view controller.

Open ViewController.mm and add the following code to the very end of viewDidLoad:

    // Configure Pattern Detector
    UIImage * trackerImage = [UIImage imageNamed:@"target.jpg"];
    m_detector = new PatternDetector([trackerImage toCVMat]); // 1
 
    // Start the Tracking Timer
    m_trackingTimer = [NSTimer scheduledTimerWithTimeInterval:(1.0f/20.0f)
                                                       target:self
                                                     selector:@selector(updateTracking:)
                                                     userInfo:nil
                                                      repeats:YES]; // 2

Taking each comment in turn:

  1. Create a new pattern detector and initialize it with the target marker pattern; you're putting the category image conversion APIs to good use.
  2. Create a repeating NSTimer to manage the tracking state of your app. The timer invokes updateTracking: 20 times per second; you'll implement this method below.
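One housekeeping note: a scheduled, repeating NSTimer retains its target until it is invalidated. If this view controller can ever be dismissed, you may want to tear the timer down as well; a minimal sketch of one place you could do that (exactly where depends on your app's lifecycle):

- (void)viewWillDisappear:(BOOL)animated {
    [super viewWillDisappear:animated];
    [m_trackingTimer invalidate];  // stop the repeating timer so it releases its target
    m_trackingTimer = nil;
}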

Replace the stubbed-out implementation of updateTracking: in ViewController.mm with the following code:

- (void)updateTracking:(NSTimer*)timer {
    if ( m_detector->isTracking() ) {
        NSLog(@"YES: %f", m_detector->matchValue());
    }
    else {
        NSLog(@"NO: %f", m_detector->matchValue());
    }
}

This method is clearly not "game-ready" in its current state; all you're doing here is quickly checking whether or not the detector is tracking the marker.

If you point the camera at a bull's-eye target marker, the match score will shoot up to almost 1.0 and the detector will indicate that it is successfully tracking the marker. Conversely, if you point the camera away from the bull's-eye target marker, the match score will drop to near 0.0 and the detector will indicate that it is not presently tracking the marker.

However, if you were to build and run your app at this point you'd be disappointed to learn that you can't seem to track anything; the detector consistently returns a matchValue() of 0.0, no matter where you point the camera. What gives?

That's an easy one to solve — you're not processing any video frames yet!

Return to ViewController.mm and add the following line to the very end of frameReady:, just after the dispatch_sync() GCD call:

    m_detector->scanFrame(frame);

The full definition for frameReady: should now look like the following:

- (void)frameReady:(VideoFrame)frame {
    __weak typeof(self) _weakSelf = self;
    dispatch_sync( dispatch_get_main_queue(), ^{
        // (1) Construct CGContextRef from VideoFrame
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef newContext = CGBitmapContextCreate(frame.data,
                                                        frame.width,
                                                        frame.height,
                                                        8,
                                                        frame.stride,
                                                        colorSpace,
                                                        kCGBitmapByteOrder32Little |
                                                        kCGImageAlphaPremultipliedFirst);
 
        // (2) Construct CGImageRef from CGContextRef
        CGImageRef newImage = CGBitmapContextCreateImage(newContext);
        CGContextRelease(newContext);
        CGColorSpaceRelease(colorSpace);
 
        // (3) Construct UIImage from CGImageRef
        UIImage * image = [UIImage imageWithCGImage:newImage];
        CGImageRelease(newImage);
        [[_weakSelf backgroundImageView] setImage:image];
    });
 
    m_detector->scanFrame(frame);
}

Previously, frameReady: simply drew video frames on the screen, thereby creating a "real time" video feed as the visual backdrop for the game. Now, each video frame is being passed off to the pattern detector, where the OpenCV APIs scan the frame looking for instances of the target marker.

All right, it's showtime! Build and run your app; open the console in Xcode, and you'll see a long list of "NO" messages indicating the detector can't match the target.

Now, point your camera at the tracker image below:

[Image: the bull's-eye target marker to track]

When the camera is aimed directly at the bull's-eye target and the pattern detector is successfully tracking the marker, you'll see a “YES” message logged to the console along with the corresponding match score.

The threshold confidence level is set at 0.5, so you may need to fiddle with the position of your device until the match scores surpass that value.

Note: The pattern detection method you're using does not support scale invariance. This means the target image has to fill up just the “right” amount of space on your screen in order to track. If you’re pointing the camera at the target, but not able to get it to track, try adjusting the distance between the camera lens and marker until you see a “YES” message in the console.

If you're using an iPhone, you should expect a match when you hold the device at a distance where the height of the bull's-eye image covers a little less than one third of the height of the iPhone screen in landscape orientation.

Your console log should look like the following once the device starts tracking the marker:

2013-12-07 01:45:34.121 OpenCVTutorial[4890:907] NO: 0.243143
2013-12-07 01:45:34.168 OpenCVTutorial[4890:907] NO: 0.243143
2013-12-07 01:45:34.218 OpenCVTutorial[4890:907] NO: 0.264737
2013-12-07 01:45:34.268 OpenCVTutorial[4890:907] NO: 0.270497
2013-12-07 01:45:34.318 OpenCVTutorial[4890:907] NO: 0.270497
2013-12-07 01:45:34.368 OpenCVTutorial[4890:907] YES: 0.835372
2013-12-07 01:45:34.417 OpenCVTutorial[4890:907] YES: 0.834664
2013-12-07 01:45:34.468 OpenCVTutorial[4890:907] YES: 0.834664
2013-12-07 01:45:34.517 OpenCVTutorial[4890:907] YES: 0.842802
2013-12-07 01:45:34.568 OpenCVTutorial[4890:907] YES: 0.841466

Congratulations — you now have a working computer vision system running on your iOS device!

Where To Go From Here?

That's it for this part of the tutorial. You've learned about pattern matching, integrated OpenCV into your code, and managed to output the pattern recognition results to the console.

You can download the completed project for this part as a zipped project file.

The fourth part of this tutorial will combine everything you've completed so far into a working game.

If you have any questions or comments on this tutorial series, please come join the discussion below!
