
Defend the world against virtual invaders in this tutorial
Welcome to the final part of this tutorial series! In the first part of this tutorial, you used the AVFoundation classes to display a live video feed from the rear-facing camera in your game.
In the second part, you learned how to implement the game controls and leverage Core Animation to create some great-looking explosion effects.
In the third part, you integrated the OpenCV framework into your app and printed messages to the console whenever a certain target was recognized.
Your final task is to tie everything together into a bona fide Augmented Reality gaming app.
If you saved your project from the last part of this tutorial, then you can pick up right where you left off. If you don’t have your previous project, or prefer to start anew, you can download the starter project for this part of the tutorial.
Implementing Visual Tracking Cues
Reading off numbers in the console gives you some feedback on how well you’re tracking the marker, but the console is far too clunky for in-game use; it’d be nice if you had something a little more visual to guide your actions during gameplay.
You’ll use OpenCV’s image processing capabilities to generate better visual cues for tracking the marker. You’ll modify the pattern detector to “peek” inside and obtain a real-time video feed of where the processor thinks the marker is. If you’re having trouble tracking the marker, you can use this real-time feed to help you guide the camera into a better position.
Open OpenCVTutorial-Prefix.pch and find the following line:
#define kUSE_TRACKING_HELPER 0
Replace that line with the following code:
#define kUSE_TRACKING_HELPER 1
This activates a Helper button in the lower left portion of the screen; pressing it displays a tracking console that assists with marker tracking.
Build and run your project; you’ll see an orange Helper button in the lower left portion of the screen as shown below:
Tap the Helper button and the tracking console appears…but it’s not yet fully implemented.
Now would be a good time to flesh it out! :]
The tracking console has four main components:
- Most of the console is taken up by a large image view. This is where you will add the real-time “marker” feed from the detector.
- There is a small text label on the right hand side, near the middle of the screen. This is where you will report the “threshold” match confidence level for the detector. This value will remain constant, unless you change it in the source code, recompile the project and restart the app.
- There is a small text label on the right hand side, near the bottom of the screen. This is where you will report the “real-time” match confidence score for the detector. This value will fluctuate wildly as you move the camera around and point the device at different things. In general, the numbers will be close to 0.0 unless you happen to point the camera directly at the target marker, in which case the numbers should be very close to 1.0.
- A close button you can press to dismiss the tracking console.
The close button has already been implemented; press it to dismiss the tracking console.
Open PatternDetector.h and add the following code to the “public” portion of the header file:
const cv::Mat& sampleImage();
Here you declare a public accessor method to obtain a reference to a sample image that furnishes a “peek” at the marker from the perspective of the pattern detector.
Next, add the following code to the “private” portion of the same header file:
cv::Mat m_sampleImage;
Here you declare a private data member that holds the sample image.
The full header file will now look like the following:
#include "VideoFrame.h" class PatternDetector { #pragma mark - #pragma mark Public Interface public: // Constructor PatternDetector(const cv::Mat& pattern); // Scan the input video frame void scanFrame(VideoFrame frame); // Match APIs const cv::Point& matchPoint(); float matchValue(); float matchThresholdValue(); // Tracking API bool isTracking(); // Peek inside the pattern detector to assist marker tracking const cv::Mat& sampleImage(); #pragma mark - #pragma mark Private Members private: // Reference Marker Images cv::Mat m_patternImage; cv::Mat m_patternImageGray; cv::Mat m_patternImageGrayScaled; cv::Mat m_sampleImage; // Supporting Members cv::Point m_matchPoint; int m_matchMethod; float m_matchValue; float m_matchThresholdValue; float m_scaleFactor; }; |
Open PatternDetector.cpp and add the following code to the very end of the file:
const cv::Mat& PatternDetector::sampleImage()
{
    return m_sampleImage;
}
Still working in PatternDetector.cpp, add the following code to the very end of the scanFrame() method, just after the switch statement:
#if kUSE_TRACKING_HELPER
    // (1) copy image
    cv::Mat debugImage;
    queryImageGrayScale.copyTo(debugImage);

    // (2) overlay rectangle
    cv::rectangle(debugImage,
                  m_matchPoint,
                  cv::Point(m_matchPoint.x + m_patternImageGrayScaled.cols,
                            m_matchPoint.y + m_patternImageGrayScaled.rows),
                  CV_RGB(0, 0, 0), 3);

    // (3) save to member variable
    debugImage.copyTo(m_sampleImage);
#endif
This code builds the live debugging display as follows:
- It first copies the input query image to a local variable named debugImage.
- It then overlays a big, black rectangle, drawn with a 3-pixel stroke, on top of the copied image, positioned where OpenCV detects the best candidate match with the template pattern.
- It then saves the resulting image to a member variable.
The code is guarded by the kUSE_TRACKING_HELPER compiler macro; that way you won’t use, or even compile, this code unless the flag is set. This saves you CPU cycles when the help screen is not visible.
Return to ViewController.mm and replace the stubbed-out implementation of updateSample: with the following code:
- (void)updateSample:(NSTimer*)timer
{
    self.sampleView.image = [UIImage fromCVMat:m_detector->sampleImage()];
    self.sampleLabel1.text = [NSString stringWithFormat:@"%0.3f", m_detector->matchThresholdValue()];
    self.sampleLabel2.text = [NSString stringWithFormat:@"%0.3f", m_detector->matchValue()];
}
This method is pretty straightforward:
- In the first line, you obtain the sample “marker” image from the pattern detector, convert it to an instance of UIImage, and set that as the help console’s image.
- In the second line, you set the top label on the console to the threshold match confidence level.
- In the third line, you set the bottom label on the console to the “real-time” match score for that particular frame. This value will change and fluctuate as the camera is moved around and pointed at different objects.
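If you’re curious how the cv::Mat crosses over into UIKit, the fromCVMat: category in the starter project performs a conversion along the lines of the sketch below. This is only an illustration for a single-channel (grayscale) mat like the sample image; the method name here is hypothetical, and the starter project’s actual category may differ.

#import <UIKit/UIKit.h>
#import <opencv2/opencv.hpp>

// Sketch: converting a single-channel cv::Mat into a UIImage.
// Hypothetical category; the starter project's fromCVMat: may differ.
@implementation UIImage (CVMatSketch)

+ (UIImage *)imageFromGrayMat:(const cv::Mat&)mat
{
    // Wrap the mat's pixel buffer in an NSData object
    NSData *data = [NSData dataWithBytes:mat.data
                                  length:mat.elemSize() * mat.total()];

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    CGDataProviderRef provider =
        CGDataProviderCreateWithCFData((__bridge CFDataRef)data);

    // Build a CGImage describing the grayscale pixel layout
    CGImageRef imageRef = CGImageCreate(mat.cols,              // width
                                        mat.rows,              // height
                                        8,                     // bits per component
                                        8 * mat.elemSize(),    // bits per pixel
                                        mat.step[0],           // bytes per row
                                        colorSpace,
                                        kCGBitmapByteOrderDefault | kCGImageAlphaNone,
                                        provider,
                                        NULL,                  // no decode array
                                        false,                 // no interpolation
                                        kCGRenderingIntentDefault);

    UIImage *image = [UIImage imageWithCGImage:imageRef];

    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);

    return image;
}

@end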
You’re now ready to try out your “tracking goggles”!
Build and run your project; press the Helper button to bring up the tracking console and you will see it appear on your screen as shown below:
The pattern detector does its best to identify a “match”, and the candidate “match” region is highlighted by the outline of the black rectangle. However, the detector reports a very low confidence value — only 0.190 in this instance — for this candidate match.
Since this value is below your threshold value of 0.5, the result is discarded and the pattern detector indicates that it is not presently tracking the target marker.
The target marker is reproduced below for your convenience:
Point the camera directly at the target marker, and you’ll see that the pattern detector is able to identify the marker perfectly as indicated by the outlines of the sampling rectangle; in the example below the confidence level is 0.985, which is quite high:
At this point, if you were to query the pattern detector’s isTracking() API, it would report that it is successfully tracking the target marker.
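As a reminder of what drives that result, the isTracking() method you implemented in the previous part boils down to comparing the real-time match score against the threshold. Conceptually, it is a check along these lines (your Part 3 implementation may read slightly differently):

bool PatternDetector::isTracking()
{
    // Tracking is reported only when the best match in the most recently
    // scanned frame beats the confidence threshold
    return m_matchValue > m_matchThresholdValue;
}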
Don’t forget to disable the help screen once you no longer need it by setting the kUSE_TRACKING_HELPER flag back to 0 in the *.pch file.
Toggling Tracking
The next step is to integrate marker tracking more closely with your app’s gameplay.
This requires the following updates to your game:
- When the app is not tracking the marker, the tutorial instruction screen in the upper left portion of the screen should be displayed. Both the scoreboard and the trigger button on the right side of the screen should not be displayed.
- When the app is successfully tracking the marker, both the scoreboard and the trigger button on the right side of the screen should be displayed. The tutorial instruction screen in the upper left portion of the screen should not be displayed.
- When the app loses tracking of the marker, the score should reset back to 0.
Open ViewController.mm and add the following code to the very end of viewDidLoad:
// Start gameplay by hiding panels
[self.tutorialPanel setAlpha:0.0f];
[self.scorePanel setAlpha:0.0f];
[self.triggerPanel setAlpha:0.0f];
Here you specify that the game should start off by hiding all three panels.
Of course, you still want to display the panels at various points in the game in response to changes in the tracking state of the app. Moreover, it’d be great if the presentation of the panels could be smoothly animated to make your game more engaging to end users.
You’re in luck: your starter project already contains a collection of useful animation categories on UIView. You simply have to implement the completion blocks for those animations.
Return to ViewController.mm and take a look at the class extension at the top of the file; you’ll see that two block properties have already been declared in the class extension as follows:
@property (nonatomic, copy) void (^transitioningTrackerComplete)(void);
@property (nonatomic, copy) void (^transitioningTrackerCompleteResetScore)(void);
There are two distinct completion behaviors you need to support:
- Regular clean-up code that runs when the animation finishes.
- Regular clean-up code that runs when the animation finishes and sets the score to zero.
The block properties use the copy property attribute, since a block needs to be copied in order to keep track of its captured state outside the original scope where the block was defined.
Add the following code to the very end of viewDidLoad in ViewController.mm:
// Define the completion blocks for transitions
__weak typeof(self) _weakSelf = self;
self.transitioningTrackerComplete = ^{
    [_weakSelf setTransitioningTracker:NO];
};
self.transitioningTrackerCompleteResetScore = ^{
    [_weakSelf setTransitioningTracker:NO];
    [_weakSelf setScore:0];
};
This code provides implementations for the completion blocks as per the requirements outlined above.
Now you can start animating your views and bringing your game to life.
You’ll want the tutorial panel to display on-screen from the time the game starts until the user gains tracking for the first time.
Add the following method to ViewController.mm just after the definition of viewDidLoad:
- (void)viewDidAppear:(BOOL)animated
{
    // Pop-in the Tutorial Panel
    self.transitioningTracker = YES;
    [self.tutorialPanel slideIn:kAnimationDirectionFromTop
                     completion:self.transitioningTrackerComplete];

    [super viewDidAppear:animated];
}
This code makes the tutorial panel “slide in” from the top as soon as the app starts; slideIn:completion: implements this animation and is a member of an animation category included in your starter project.
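If you’re wondering what such a category looks like under the hood, a minimal “slide in from the top” method could be sketched as follows. The method name and timing here are purely illustrative; the starter project’s category is already written for you and may differ.

#import <UIKit/UIKit.h>

@implementation UIView (SlideAnimationSketch)

// Sketch only: animate the view down into its resting position from above.
- (void)slideInFromTopWithCompletion:(void (^)(void))completion
{
    // Start just above the view's final frame, then animate down into place
    CGRect finalFrame = self.frame;
    CGRect startFrame = finalFrame;
    startFrame.origin.y -= finalFrame.size.height;

    self.frame = startFrame;
    self.alpha = 1.0f;

    [UIView animateWithDuration:0.3
                     animations:^{
                         self.frame = finalFrame;
                     }
                     completion:^(BOOL finished) {
                         if (completion) {
                             completion();
                         }
                     }];
}

@end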
Next you need the panels to react to changes in the tracking state of the app.
The app’s tracking state is presently managed from updateTracking: in ViewController.mm.
Replace updateTracking: in ViewController.mm with the following:
- (void)updateTracking:(NSTimer*)timer
{
    // Tracking Success
    if ( m_detector->isTracking() ) {
        if ( [self isTutorialPanelVisible] ) {
            [self togglePanels];
        }
    }
    // Tracking Failure
    else {
        if ( ![self isTutorialPanelVisible] ) {
            [self togglePanels];
        }
    }
}
The call to isTutorialPanelVisible simply determines whether the tutorial panel is visible; it’s been implemented in the starter project as well.
You do, however, need to provide an implementation for togglePanels.
Replace the stubbed-out implementation of togglePanels in ViewController.mm with the following code:
- (void)togglePanels
{
    if ( !self.transitioningTracker ) {
        self.transitioningTracker = YES;

        if ( [self isTutorialPanelVisible] ) {
            // Adjust panels
            [self.tutorialPanel slideOut:kAnimationDirectionFromTop
                              completion:self.transitioningTrackerComplete];
            [self.scorePanel slideIn:kAnimationDirectionFromTop
                          completion:self.transitioningTrackerComplete];
            [self.triggerPanel slideIn:kAnimationDirectionFromBottom
                            completion:self.transitioningTrackerComplete];

            // Play sound
            AudioServicesPlaySystemSound(m_soundTracking);
        }
        else {
            // Adjust panels
            [self.tutorialPanel slideIn:kAnimationDirectionFromTop
                             completion:self.transitioningTrackerComplete];
            [self.scorePanel slideOut:kAnimationDirectionFromTop
                           completion:self.transitioningTrackerCompleteResetScore];
            [self.triggerPanel slideOut:kAnimationDirectionFromBottom
                             completion:self.transitioningTrackerComplete];
        }
    }
}
Here’s what’s going on in the code above:
- When the tutorial panel is visible and the app calls togglePanels, the tutorial panel disappears, and the scoreboard and trigger button are displayed on the right side of the screen.
- When the tutorial panel is not visible and the app calls togglePanels, the tutorial panel appears, and the scoreboard and trigger button on the right side of the screen disappear.
The completion block that resets the score runs when the score panel slides off the screen; as well, a “tracking sound” plays when the detector first begins tracking to give the user an auditory cue that tracking has commenced.
Build and run your project; point the camera at the target marker, reproduced below:
The scoreboard and trigger button are now only visible when the pattern detector is actually tracking the marker. When the pattern detector is not tracking the marker, the tutorial screen pops back down into view.
Camera Calibration
Compared to the optics in expensive cameras, the camera lens that ships with your iOS device is not especially large or sophisticated. Due to its small size and simple design, imperfections in the lens and camera on your iOS device can end up distorting the images you’re trying to take in several different ways:
- Principal Point – The image “center” is not always located at the width/2 and height/2 point of the iPhone screen where you’d expect to find it.
- Focal Length – Focal length is a measure of how strongly the camera lens focuses light and determines how large a distant object appears onscreen.
- Scaling – Camera pixels are not necessarily square; the width of the pixels may be scaled or distorted differently than the height.
- Skew – The angle between the x and y axes of the pixels may not be exactly 90 degrees.
- Lens Distortion – Some lenses give rise to a “pincushion” or “barrel” effect where image magnification increases or decreases with the distance from the optical axis.
These parameters usually vary — sometimes widely — from one mobile device to another. What’s a developer to do?
You’ll need to implement a mechanism to calibrate the camera on your device; calibration is the process of mathematically estimating these parameters and correcting for them in software. It’s an essential step if you want your AR experience to appear even remotely convincing to the end user.
OpenCV uses the following two data structures to calibrate a camera:
- A 1 x 5 distortion matrix that corrects for distortions arising from imperfections in the shape and placement of the lens.
- A 3 x 3 camera matrix that corrects for distortions arising from imperfections in the physical architecture of the camera itself.
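For reference, here is a sketch of what those two structures look like when built with standard OpenCV types. The values below are placeholders, not measurements, and you won’t need this code for the tutorial:

#include <opencv2/core/core.hpp>

// Sketch only: the two standard OpenCV calibration containers.
// fx, fy are focal lengths in pixels; cx, cy are the principal point;
// k1, k2, k3 are radial and p1, p2 tangential distortion coefficients.
double fx = 600.0, fy = 600.0;          // focal lengths (placeholder values)
double cx = 320.0, cy = 240.0;          // principal point (placeholder values)
double k1 = 0.0, k2 = 0.0, k3 = 0.0;    // radial distortion coefficients
double p1 = 0.0, p2 = 0.0;              // tangential distortion coefficients

// 3 x 3 camera (intrinsic) matrix
cv::Mat cameraMatrix = (cv::Mat_<double>(3, 3) <<
    fx,  0.0, cx,
    0.0, fy,  cy,
    0.0, 0.0, 1.0);

// 1 x 5 distortion coefficient vector
cv::Mat distCoeffs = (cv::Mat_<double>(1, 5) << k1, k2, p1, p2, k3);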
Rather than going through the trouble of estimating numerical values for each of these matrices, you’re going to do something much simpler.
Go to the Video Source group and open CameraCalibration.h.
This file declares a much simpler C struct that represents camera calibration information:
struct CameraCalibration
{
    float xDistortion;
    float yDistortion;
    float xCorrection;
    float yCorrection;
};
The problem you’re tackling with camera calibration is properly mapping and aligning points from the three-dimensional “real world” of the video feed onto the two-dimensional “flat world” of your mobile device screen.
In iOS, device screens come in one of two sizes: 480 x 320 points or 568 x 320 points. Neither of these aspect ratios maps especially well onto the 640 x 480 frames you’re using to capture video data for your target shooter game.
This discrepancy between the aspect ratio of the device screen and the aspect ratio of the video feed is the largest source of “camera error” you’ll need to correct for in this tutorial. Moreover, you can correct for this discrepancy using little more than some simple linear algebra.
Don’t worry — you won’t have to derive all of the math yourself. Did you just breathe a sigh of relief? :]
Instead, the answer will be shown below so you can keep charging toward the end-goal of a fully operational AR target blaster.
Open ViewController.mm and add the following code to the very end of viewDidLoad:
// Numerical estimates for camera calibration
if ( IS_IPHONE_5() ) {
    m_calibration = {0.88f, 0.675f, 1.78, 1.295238095238095};
} else {
    m_calibration = {0.8f, 0.675f, (16.0f/11.0f), 1.295238095238095};
}
Admittedly, these numbers don’t look especially “linear”; there are non-linear eccentricities at play here that were derived through empirical estimation. However, these numbers should be good enough to get your AR target blaster fully operational.
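As a rough sanity check on the first two values: the iPhone 5 screen is 568 x 320 points while the captured frames are 640 x 480 pixels, and 568 / 640 ≈ 0.89 while 320 / 480 ≈ 0.67, which lines up closely with the 0.88 and 0.675 distortion values above. The remaining differences, along with the correction factors, are the empirical tweaks just mentioned.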
Building the AR Visualization Layer
If you’ve been tapping the trigger button in your app, you’ve noticed that it’s still linked to the selectRandomRing test API. You can point your device at the marker, and the pattern detector can find and track it, but scoring is still random and unrelated to the marker pattern being tracked.
The final step is to coordinate the firing of the trigger button with the position of the target marker. In this section, you’re going to build an AR Visualization Layer that will act as the glue between what the computer vision system “sees” out in the real world, and the data model in your game that keeps track of points and scoring.
You’ve come a long way, baby — you’re almost done! :]
AR Layer Basics
Go to the Visualization group and open ARView.h.
Review the header file quickly:
#import "CameraCalibration.h" @interface ARView : UIView #pragma mark - #pragma mark Constructors - (id)initWithSize:(CGSize)size calibration:(struct CameraCalibration)calibration; #pragma mark - #pragma mark Gameplay - (int)selectBestRing:(CGPoint)point; #pragma mark - #pragma mark Display Controls - (void)show; - (void)hide; @end |
ARView is an overlay that is activated whenever your game is tracking the target marker.
The object has two main purposes:
- To “follow” around, or track, the marker in real time as its onscreen position changes.
- To provide a canvas upon which you can “draw” or augment visually in other ways.
Open ARView.m and replace the stubbed-out implementation of show with the code below:
- (void)show
{
    self.alpha = kAlphaShow;
}
Similarly, replace the stubbed-out implementation of hide with the following code:
- (void)hide
{
    self.alpha = kAlphaHide;
}
Open ViewController.mm and add the following code to the very end of viewDidLoad:
// Create Visualization Layer
self.arView = [[ARView alloc] initWithSize:CGSizeMake(trackerImage.size.width, trackerImage.size.height)
                               calibration:m_calibration];
[self.view addSubview:self.arView];
[self.arView hide];

// Save Visualization Layer Dimensions
m_targetViewWidth = self.arView.frame.size.width;
m_targetViewHeight = self.arView.frame.size.height;
Here you create a new instance of the visualization layer as follows:
- You pass to the constructor both the size of the image you want to track as well as the camera calibration data structure.
- You then hide the visualization layer until the pattern detector begins tracking.
- Finally, you save the dimensions of the visualization layer to simplify some later calculations.
Next you need to link the behavior of the AR visualization layer with the tracking state of your game.
Modify updateTracking: in ViewController.mm as follows:
- (void)updateTracking:(NSTimer*)timer
{
    // Tracking Success
    if ( m_detector->isTracking() ) {
        if ( [self isTutorialPanelVisible] ) {
            [self togglePanels];
        }

        // Begin tracking the bullseye target
        cv::Point2f matchPoint = m_detector->matchPoint();    // 1
        self.arView.center = CGPointMake(m_calibration.xCorrection * matchPoint.x + m_targetViewWidth / 2.0f,
                                         m_calibration.yCorrection * matchPoint.y + m_targetViewHeight / 2.0f);
        [self.arView show];
    }
    // Tracking Failure
    else {
        if ( ![self isTutorialPanelVisible] ) {
            [self togglePanels];
        }

        // Stop tracking
        [self.arView hide];    // 2
    }
}
Here’s a quick breakdown:
- This code displays the AR layer and constantly updates its position so that it remains centered over the marker’s location in the video stream.
- This code hides the AR layer if your game loses tracking.
Build and run your app; point the camera at the target marker reproduced below:
Once you’re tracking the marker, your screen will look similar to the following:
The background color of the AR layer is set to dark gray, and the outermost ring is highlighted in blue. The reason for coloring these components is to give you a sense of how the AR layer “tracks” the position of the “real world” marker in the video stream.
Play around with the tracking a bit; try to move the position of the marker around by changing where you point the camera and watch the AR layer “track” the marker and move to the correct position.
Once you’re done waving your device around, open ARView.m and find initWithSize:calibration:.
Find the line in the constructor that reads self.ringNumber = 1 and modify it to read self.ringNumber = 5.
This will select the fifth, or innermost, bull’s-eye for highlighting.
Build and run your app; once you are tracking the target you’ll see something like the following:
Play around and set ringNumber to different values between 1 and 5 to highlight different rings; this can prove useful when trying to debug camera calibration statistics.
Open ARView.m and scroll to the very top of the file. Find the line that reads #define kDRAW_TARGET_DRAW_RINGS 1 and change it so that it reads #define kDRAW_TARGET_DRAW_RINGS 0.
Working in the same file, find the line that reads #define kColorBackground [UIColor darkGrayColor] and change it so that it reads #define kColorBackground [UIColor clearColor].
The top three lines of ARView.m should now read like the following:
#define kDRAW_TARGET_DRAW_RINGS 0
#define kDRAW_TARGET_BULLET_HOLES 1
#define kColorBackground [UIColor clearColor]
This deactivates the highlighting of the rings and sets the background color of the AR layer to a more natural transparent color.
Implementing Scorekeeping
Now that you know how to track the marker, it’s time to finally link up the scoreboard — correctly. :]
Still working in ARView.m, replace the stubbed-out implementation of selectBestRing: with the following code:
- (int)selectBestRing:(CGPoint)point
{
    int bestRing = 0;
    CGFloat dist = distance(point, m_center, m_calibration);

    if ( dist < kRadius5 ) {
        bestRing = 5;
    } else if ( dist < kRadius4 ) {
        bestRing = 4;
    } else if ( dist < kRadius3 ) {
        bestRing = 3;
    } else if ( dist < kRadius2 ) {
        bestRing = 2;
    } else if ( dist < kRadius1 ) {
        bestRing = 1;
    }

    return bestRing;
}
The point where the marker was “hit” by the blast from your game is the single argument to this method. The method then calculates the distance from this point to the center of the AR layer, which also corresponds with the center of the bull’s-eye target you’re aiming for. Finally, it finds the smallest enclosing ring for this distance, and returns that ring as the one that was “hit” by the blast.
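The distance() helper used above ships with the starter project. Conceptually it is a Euclidean distance with the calibration’s per-axis factors applied; a hypothetical sketch might look like the following (the starter project’s actual helper may be implemented differently):

#import <CoreGraphics/CoreGraphics.h>
#include <cmath>
#include "CameraCalibration.h"

// Hypothetical sketch of a calibration-aware distance helper.
static CGFloat distanceSketch(CGPoint a, CGPoint b, struct CameraCalibration calibration)
{
    // Scale each axis before measuring, so the distance accounts for the
    // aspect-ratio mismatch between the video feed and the screen
    CGFloat dx = (a.x - b.x) * calibration.xDistortion;
    CGFloat dy = (a.y - b.y) * calibration.yDistortion;
    return std::sqrt(dx * dx + dy * dy);
}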
Open ViewController.mm and remove the very first line of pressTrigger:, where you call selectRandomRing. Replace it with the following code:
CGPoint hitPoint = [self.arView convertPoint:self.crosshairs.center fromView:self.view];
NSInteger ring = [self.arView selectBestRing:hitPoint];
The full definition for pressTrigger: now reads as follows:
- (IBAction)pressTrigger:(id)sender
{
    CGPoint hitPoint = [self.arView convertPoint:self.crosshairs.center fromView:self.view];
    NSInteger ring = [self.arView selectBestRing:hitPoint];

    switch ( ring ) {
        case 5:
            // Bullseye
            [self hitTargetWithPoints:kPOINTS_5];
            break;
        case 4:
            [self hitTargetWithPoints:kPOINTS_4];
            break;
        case 3:
            [self hitTargetWithPoints:kPOINTS_3];
            break;
        case 2:
            [self hitTargetWithPoints:kPOINTS_2];
            break;
        case 1:
            // Outermost Ring
            [self hitTargetWithPoints:kPOINTS_1];
            break;
        case 0:
            // Miss Target
            [self missTarget];
            break;
    }
}
This method is fairly straightforward:
- The point at which the blast hits is given by the center of the crosshairs; the code translates this location from the local coordinate system to that of the AR layer and stores it in a local variable named hitPoint.
- It then passes hitPoint to the selectBestRing: API you defined previously, which returns the best-fitting ring that encloses the blast point.
The rest of the method works as it did before.
Build and run your app; point your camera at the marker and get a fix on the target below:
Tap the trigger button, and you’ll notice that points are now being tallied more-or-less correctly according to where you’re aiming the crosshairs.
Leaving Your Mark with Sprites
To provide some visual feedback on your marksmanship — and to further augment the user experience — it would be great if you could track the bullet holes you make as you blast into the target pattern.
Fortunately, this is a very simple change.
Open ARView.m and add the following code to selectBestRing:, just before the return statement:
#if kDRAW_TARGET_BULLET_HOLES
    if ( bestRing > 0 ) {
        // (1) Create the UIView for the "bullet hole"
        CGFloat bulletSize = 6.0f;
        UIView * bulletHole = [[UIView alloc] initWithFrame:CGRectMake(point.x - bulletSize/2.0f,
                                                                       point.y - bulletSize/2.0f,
                                                                       bulletSize,
                                                                       bulletSize)];
        bulletHole.backgroundColor = kColorBulletHole;
        [self addSubview:bulletHole];

        // (2) Keep track of state, so it can be cleared
        [self.hits addObject:bulletHole];
    }
#endif
The newly added code lives between the kDRAW_TARGET_BULLET_HOLES compiler guards.
Here’s what you’re doing:
- You’re creating a simple UIView object and laying it down to mark the spot where the blast occurred.
- You then track the UIView in a mutable set so that you can clear away the blast marks when the game resets.
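The hits collection itself is already declared in the starter project. If you were rebuilding it by hand, a lazily initialized mutable set property along these lines would do the job (the property name here simply mirrors the code above):

// Sketch: a lazily initialized mutable collection for the bullet-hole views.
// The starter project already declares something equivalent.
@property (nonatomic, strong) NSMutableSet *hits;

- (NSMutableSet *)hits
{
    if ( !_hits ) {
        _hits = [NSMutableSet set];
    }
    return _hits;
}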
Working in the same file, update the implementation of hide as follows:
- (void)hide
{
    self.alpha = kAlphaHide;

#if kDRAW_TARGET_BULLET_HOLES
    for ( UIView * v in self.hits ) {
        [v removeFromSuperview];
    }
    [self.hits removeAllObjects];
#endif
}
Again, the newly added code sits between the kDRAW_TARGET_BULLET_HOLES compiler guards.
Here you’re simply clearing out the blast marks when the game resets.
Build and run your app one final time; point your camera at the target marker and blast away:
You should see something like the following on your screen:
Congratulations, your target blaster is fully operational!
Remember: Augmented Reality uses up a lot of processor cycles. The faster the hardware you’re running this app on, the better the user experience.
Where To Go From Here?
I hope you had as much fun building the AR Target Shooter Game as I did! You’ve mastered enough of OpenCV to be able to program a pretty cool Augmented Reality Target Shooter Game.
Here is the completed sample project with all of the code from the above tutorial.
If you’d like to keep exploring the fascinating world of computer vision, there are a number of additional resources out there to keep you going:
- Learning OpenCV by Bradski and Kaehler is an excellent reference for OpenCV.
- Introduction to Augmented Reality on the iPhone is a great introduction to the basics of building Augmented Reality apps on iOS.
- Augmented Reality iOS Tutorial: Marker Tracking is a great introduction to marker-based Augmented Reality using the String SDK.
Finally, many of the leading AR toolkits on the market are pretty deeply integrated with the Unity game engine. The tutorial Beginning Unity for iOS on this site is an excellent introduction to Unity if you’ve never been exposed to it before.
If you have any further questions or comments about this tutorial, or about computer vision and augmented reality in general, please join the forum discussion below!