Welcome to the second part of this tutorial series! In the first part, you used the AVFoundation classes to create a live video feed for your game, showing video from the rear-facing camera.
Your task in this stage of the tutorial is to add some HUD overlays to the live video, implement the basic game controls, and dress up the game with some explosion effects. I mean, what gamer doesn’t love cool explosions? :]
If you have the finished project from Part 1 handy, you can start coding right where you left off. Otherwise, you can download the starter project up to this point here and jump right in.
Adding Game Controls
Your first task is to get the game controls up and running.
There’s already a ViewController+GameControls category in your starter project; this category handles all the mundane details relating to general gameplay support. It’s been pre-implemented so you can stay focused on the topics in this tutorial directly related to AR gaming.
Open up ViewController.mm and add the following code to the very end of viewDidLoad:

// Activate Game Controls
[self loadGameControls];
Build and run your project; your screen should look something like the following:
Basic gameplay elements are now visible on top of the video feed you built in the last section.
Here’s a quick tour of the new game control elements:
- The instruction panel is in the upper left portion of the screen.
- A scoreboard is located in the upper right portion of the screen.
- A trigger button to fire at the target can be found in the lower right portion of the screen.
The trigger button is already configured to call pressTrigger: as its action. pressTrigger: is presently stubbed out; it simply logs a brief message to the console. Tap the trigger button a few times to test it; you should see messages like the following show up in the console:
2013-11-15 18:34:25.357 OpenCVTutorial[1953:907] Fire!
2013-11-15 18:34:25.590 OpenCVTutorial[1953:907] Fire!
2013-11-15 18:34:25.827 OpenCVTutorial[1953:907] Fire!
A set of red crosshairs is now visible at the center of the screen; these crosshairs mark the spot in the “real world” where the player will fire at the target.
The basic object of the game is to line up the crosshairs with a “real world” target image seen through the live camera feed and fire away. The closer you are to the center of the target at the moment you fire, the more points you’ll score!
Designing the Gameplay
Take a moment and consider how you want your gameplay to function.
Your game needs to scan the video feed from the camera and search for instances of the following target image:
Once you detect the target image, you then need to track its position on the screen.
That sounds straightforward enough, but there are a few challenges here. The target’s onscreen position will change, or the target may disappear entirely, as the user moves the device back and forth or up and down. Also, the apparent size of the target image on the screen will vary as the user moves the device toward or away from the real world target image.
Shooting things is great and all, but you’ll also need to provide a scoring mechanism for your game:
- If the user aligns the crosshairs with one of the rings on the real world target image and taps the trigger, you’ll record a hit. The number of points awarded depends on how close the user was to the bull’s-eye when they pressed the trigger.
- If the crosshairs are not aligned with any of the five rings on the real world target when the user taps the trigger button, you’ll record a miss.
Finally, you’ll “reset” the game whenever the app loses tracking of the target marker; this should happen when the user moves the device and the target no longer appears in the field-of-view of the camera. A “reset” in this context means setting the score back to 0.
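A minimal sketch of that reset might look like the following (resetGame is a hypothetical helper name for illustration; setScore: is the scoreboard accessor you’ll use later in this part):

// Hypothetical helper, called when the app loses tracking of the target marker
- (void)resetGame {
    // Zero out the scoreboard
    [self setScore:0];
}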
That about covers it; you’ll become intimately familiar with the gameplay logic as you code it in the sections that follow.
Adding Gameplay Simulation
There’s a bit of simulation included in the project to let you exercise the game controls without implementing the AR tracking. Open ViewController+GameControls.m and take a look at selectRandomRing:

- (NSInteger)selectRandomRing {
    // Simulate a 50% chance of hitting the target
    NSInteger randomNumber1 = arc4random() % 100;
    if ( randomNumber1 < 50 ) {
        // Stagger the 5 simulations linearly
        NSInteger randomNumber2 = arc4random() % 100;
        if ( randomNumber2 < 20 ) {
            return 1; /* outermost ring */
        } else if ( randomNumber2 < 40 ) {
            return 2;
        } else if ( randomNumber2 < 60 ) {
            return 3;
        } else if ( randomNumber2 < 80 ) {
            return 4;
        } else {
            return 5; /* bullseye */
        }
    } else {
        return 0;
    }
}
This method simulates a “shot” at the target marker. It returns a random NSInteger between 0 and 5, indicating which ring was hit in the simulation:
- 0 indicates a miss.
- 1 indicates a hit on the outermost ring.
- 2 indicates a hit on the second ring in.
- 3 indicates a hit on the third ring in.
- 4 indicates a hit on the fourth ring in.
- 5 indicates a hit on the inner bull’s-eye.
Open ViewController.h and add the following code to the very top of the file, just after the introductory comments:
static const NSUInteger kPOINTS_1 = 50;
static const NSUInteger kPOINTS_2 = 100;
static const NSUInteger kPOINTS_3 = 250;
static const NSUInteger kPOINTS_4 = 500;
static const NSUInteger kPOINTS_5 = 1000;
These constants represent the number of points awarded if the user hits the target; the closer the hit is to the center bull’s-eye, the greater the points awarded.
Open ViewController.mm and update pressTrigger: as shown below:

- (IBAction)pressTrigger:(id)sender {
    NSInteger ring = [self selectRandomRing];
    switch ( ring ) {
        case 5:
            // Bullseye
            [self hitTargetWithPoints:kPOINTS_5];
            break;
        case 4:
            [self hitTargetWithPoints:kPOINTS_4];
            break;
        case 3:
            [self hitTargetWithPoints:kPOINTS_3];
            break;
        case 2:
            [self hitTargetWithPoints:kPOINTS_2];
            break;
        case 1:
            // Outermost Ring
            [self hitTargetWithPoints:kPOINTS_1];
            break;
        case 0:
            // Miss Target
            [self missTarget];
            break;
    }
}
This method selects a random ring using the test API selectRandomRing discussed above. If a ring is selected, it records a “hit” along with the commensurate number of points. If no ring is selected, it records a “miss”.

You’re abstracting the target hit detection into a separate module so that when it comes time to do away with the simulation and use the real AR visualization layer, all you should need to do is replace the call to selectRandomRing with a call to your AR code.
Still in ViewController.mm, replace the stubbed-out implementation of hitTargetWithPoints: with the code below:

- (void)hitTargetWithPoints:(NSInteger)points {
    // (1) Play the hit sound
    AudioServicesPlaySystemSound(m_soundExplosion);
    // (2) Animate the floating scores
    [self showFloatingScore:points];
    // (3) Update the score
    [self setScore:(self.score + points)];
}
This method triggers when a “hit” is registered in the game. Taking each numbered comment in turn:
- Play an “explosion” sound effect.
- Render the points awarded in an animation using the showFloatingScore: API defined in the GameControls category.
- Update the scoreboard with the new score.
That takes care of the “hit” condition — what about the “miss” condition? That’s even easier.
Replace missTarget in ViewController.mm with the following code:

- (void)missTarget {
    // (1) Play the miss sound
    AudioServicesPlaySystemSound(m_soundShoot);
}
This method triggers when you record a “miss” and simply plays a “miss” sound effect.
Build and run your project; tap the trigger button to simulate a few hits and misses. selectRandomRing returns a hit 50% of the time, and a miss the other 50% of the time.
At this stage in development, the points will just keep accumulating; if you want to reset the scoreboard you’ll have to restart the app.
Adding Sprites to Your Display
Your crosshairs are in place, and your simulated target detection is working. Now all you need are some giant, fiery explosion sprites to appear whenever you hit the target! :]
The images you’ll animate are shown below:
The above explosion consists of 11 separate images concatenated into a single image file explosion.png; each frame measures 128 x 128 pixels and the entire image is 1408 pixels wide. It’s essentially a series of time lapse images of a giant, fiery explosion. The first and last frames in the sequence have intentionally been left blank. In the unlikely event that the animation layer isn’t properly removed after it finishes, using blank frames at the sequence endpoints ensures that the view field will remain uncluttered.
A large composite image composed of many smaller sub-images is often referred to as an image atlas or a texture atlas. This image file has already been included as an art asset in the starter project you downloaded.
You’ll be using Core Animation to animate this sequence of images. A Core Animation layer named SpriteLayer is included in your starter project to save you some time. SpriteLayer implements the animation functionality just described.
Once you cover the basic workings of SpriteLayer, you’ll integrate it with your ViewController in the next section. This will give you the giant, fiery explosions that gamers crave.
SpriteLayer Constructors
Open SpriteLayer.m and look at the initWithImage: constructor:

- (id)initWithImage:(CGImageRef)image {
    self = [super init];
    if ( self ) {
        self.contents = (__bridge id)image;
        self.spriteIndex = 1;
    }
    return self;
}
This constructor sets the layer’s contents property directly, using a __bridge cast to safely convert the pointer from the Core Foundation type CGImageRef to the Objective-C type id. You then start the first frame of the animation at index 1, and keep track of the running value of this index using spriteIndex.

contents is essentially a bitmap that contains the visual information you want to display. When the layer is automatically created for you as the backing for a UIView, iOS will usually manage all the details of setting up and updating your layer’s contents as required. In this case, you’re constructing the layer yourself, and must therefore provide your own contents directly.

Now look at the initWithImage:spriteSize: constructor:
- (id)initWithImage:(CGImageRef)image spriteSize:(CGSize)size {
    self = [self initWithImage:image];
    if ( self ) {
        CGSize spriteSizeNormalized = CGSizeMake(size.width/CGImageGetWidth(image),
                                                 size.height/CGImageGetHeight(image));
        self.bounds = CGRectMake(0, 0, size.width, size.height);
        self.contentsRect = CGRectMake(0, 0, spriteSizeNormalized.width, spriteSizeNormalized.height);
    }
    return self;
}
Your code will call this constructor directly.

The image bitmap you set as the layer’s contents is 1408 pixels wide, but you only need to display one 128-pixel-wide “subframe” at a time. The spriteSize constructor argument lets you specify the size of this display “subframe”; in your case, it’s 128 x 128 pixels to match a single frame of the atlas. You’ll initialize the layer’s bounds to this value as well.
contentsRect acts as this display “subframe” and specifies how much of the layer’s contents bitmap will actually be visible.

By default, contentsRect covers the entire bitmap, like so:
Instead, you need to shrink contentsRect so it covers only a single frame, and then animate it left-to-right as you run your layer through Core Animation, like so:
The trick with contentsRect is that its size is defined using a unit coordinate system, where the value of every coordinate is between 0.0 and 1.0, independent of the size of the frame itself. This is very different from the more common pixel-based coordinate system that you’re likely accustomed to from working with properties like bounds and frame.
Suppose you were to construct an instance of UIView that was 300 pixels wide and 50 pixels high. In the pixel-based coordinate system, the upper-left corner would be at (0,0) while the lower-right corner would be at (300,50).
However, the unit coordinate system puts the upper-left corner at (0.0, 0.0) while the lower-right corner is always at (1.0, 1.0), no matter how wide or high the frame is in pixels. Core Animation uses unit coordinates to represent those properties whose values should be independent of changes in the frame’s size.
If you step through the math in the constructor above, you can quickly convince yourself that you’re initializing contentsRect so that it covers only the first frame of your sprite animation, which is exactly the result you’re looking for.
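To make that concrete, here’s the arithmetic for the explosion atlas described earlier (a worked sketch with illustrative values, not code from the starter project):

// Normalizing one 128 x 128 sprite frame against the 1408 x 128 atlas:
CGSize spriteSizeNormalized = CGSizeMake(128.0f / 1408.0f,  // ~0.0909, one-eleventh of the atlas width
                                         128.0f / 128.0f);  // 1.0, the full atlas height
// So contentsRect starts out as (0, 0, 0.0909, 1.0): exactly the first frame.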
SpriteLayer Animations
Animating a property means showing it changing over time. By this definition, you’re not really animating an image: you’re actually animating spriteIndex.
Fortunately, Core Animation allows you to animate not just familiar built-in properties, like a position or image bitmap, but also user-defined properties like spriteIndex. The Core Animation API treats the property as a “key” of the layer, much like the key of an NSDictionary.
Core Animation will animate spriteIndex when you instruct the layer to redraw its contents whenever the value associated with the spriteIndex key changes. The following method, defined in SpriteLayer.m, accomplishes just that:
+ (BOOL)needsDisplayForKey:(NSString *)key {
    return [key isEqualToString:@"spriteIndex"];
}
But what mechanism do you use to tell the layer how to display its contents based on the spriteIndex?
A clear understanding of the somewhat counterintuitive ways properties change — or how they don’t change — is important here.
Core Animation supports both implicit and explicit animations:
- Implicit Animation: Certain properties of a Core Animation layer — including its bounds, color, opacity, and the contentsRect you’re working with — are known as animatable properties. If you change the value of one of these properties on the layer, Core Animation automatically animates that value change.
- Explicit Animation: Sometimes you must specify an animation by hand and explicitly request that the animation system display it. Creating a CABasicAnimation and adding it to the layer results in an explicit animation.
Working with explicit animations exposes a subtle distinction between changing the property on the layer and seeing an animation that makes it look like the property is changing. When you request an explicit animation, Core Animation only shows you the visual result of the animation; that is, it shows what it looks like when the layer’s property changes from one state to another.
However, Core Animation does not actually modify the property on the layer itself when running explicit animations. Once you perform an explicit animation, Core Animation simply removes the animation object from the layer and redraws the layer using its current property values, which are exactly the same as when the animation started — unless you changed them separately from the animation.
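You can watch this happen with any standard animatable property. Here’s a hedged sketch (not code from the project; layer stands for any CALayer already installed in your layer tree):

// Explicitly animate the layer's opacity from fully visible to invisible
CABasicAnimation *fade = [CABasicAnimation animationWithKeyPath:@"opacity"];
fade.fromValue = @(1.0);
fade.toValue = @(0.0);
fade.duration = 2.0;
[layer addAnimation:fade forKey:@"fade"];

// Even while the fade is visibly running, the model value never changes:
NSLog(@"opacity = %f", layer.opacity);  // still prints 1.0

Once the two seconds elapse, Core Animation removes the fade and the layer snaps back to full opacity, because the opacity property itself was never modified.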
Animations of user-defined layer keys, like spriteIndex, are explicit animations. This means that if you request an animation of spriteIndex from 1 to another number, and at any point during the animation you query SpriteLayer to find the current value of spriteIndex, the answer you’ll get back will still be 1!
So if animating spriteIndex doesn’t actually change the value, then how do you retrieve its value to adjust the position of contentsRect to the correct location and show the animation?
The answer, dear reader, lies in the presentation layer, a shadowy counterpart to every Core Animation layer which represents how that layer appears on-screen, even while an animation is in progress.
Take a look at currentSpriteIndex in SpriteLayer.m:

- (NSUInteger)currentSpriteIndex {
    return ((SpriteLayer *)[self presentationLayer]).spriteIndex;
}
This code returns the value of the spriteIndex attribute associated with the object’s presentation layer, rather than the value of the spriteIndex attribute associated with the object itself. Calling this method returns the correct, in-progress value of spriteIndex while the animation is running.
So now you know how to get the visible, animated value of spriteIndex. But when you change contentsRect, the layer will automatically trigger an implicit animation, which you don’t want to happen. Since you’re going to be changing the value of contentsRect by hand as the animation runs, you need to deactivate this implicit animation by telling SpriteLayer not to produce an animation for the contentsRect key.
Scroll to the definition of defaultActionForKey:, also located in SpriteLayer.m:

+ (id)defaultActionForKey:(NSString *)event {
    if ( [event isEqualToString:@"contentsRect"] ) {
        return (id<CAAction>)[NSNull null];
    }
    return [super defaultActionForKey:event];
}
The class method defaultActionForKey: is invoked by the layer before it initiates an implicit animation. This code overrides the default implementation of this method, and instructs Core Animation to suppress any implicit animations associated with the property key contentsRect.
Finally, take a look at display, which is also defined in SpriteLayer.m:

- (void)display {
    NSUInteger currentSpriteIndex = [self currentSpriteIndex];
    if ( !currentSpriteIndex ) {
        return;
    }
    CGSize spriteSize = self.contentsRect.size;
    self.contentsRect = CGRectMake(((currentSpriteIndex-1) % (int)(1.0f/spriteSize.width)) * spriteSize.width,
                                   ((currentSpriteIndex-1) / (int)(1.0f/spriteSize.width)) * spriteSize.height,
                                   spriteSize.width,
                                   spriteSize.height);
}
The layer automatically calls display as required to update its contents.

Step through the math of the above code and you’ll see that this is where you manually change the value of contentsRect, sliding it along one frame at a time as the current value of spriteIndex advances.
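As a quick sanity check on that math, here’s how the indices map to frames for the explosion atlas (a worked sketch; spriteSize.width is roughly 0.0909, so (int)(1.0f/spriteSize.width) evaluates to 11):

// framesPerRow = (int)(1.0f / 0.0909f) = 11
// currentSpriteIndex = 1  ->  x = ( 0 % 11) * 0.0909 = 0.0      (first frame)
// currentSpriteIndex = 2  ->  x = ( 1 % 11) * 0.0909 = 0.0909   (second frame)
// currentSpriteIndex = 11 ->  x = (10 % 11) * 0.0909 = 0.909    (last frame)
// y is always 0 here, since (currentSpriteIndex-1) / 11 == 0 for indices 1 through 11;
// a taller, multi-row atlas would advance y in the same fashion.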
Implementing Your Sprites
Now that you understand how to create sprites, using them should be a snap!
Open ViewController+GameControls.m and replace the stubbed-out showExplosion with the following code:

- (void)showExplosion {
    // (1) Create the explosion sprite
    UIImage *explosionImageOrig = [UIImage imageNamed:@"explosion.png"];
    CGImageRef explosionImageCopy = CGImageCreateCopy(explosionImageOrig.CGImage);
    CGSize explosionSize = CGSizeMake(128, 128);
    SpriteLayer *sprite = [SpriteLayer layerWithImage:explosionImageCopy spriteSize:explosionSize];
    CFRelease(explosionImageCopy);

    // (2) Position the explosion sprite
    CGFloat xOffset = -7.0f;
    CGFloat yOffset = -3.0f;
    sprite.position = CGPointMake(self.crosshairs.center.x + xOffset,
                                  self.crosshairs.center.y + yOffset);

    // (3) Add to the view
    [self.view.layer addSublayer:sprite];

    // (4) Configure and run the animation
    CABasicAnimation *animation = [CABasicAnimation animationWithKeyPath:@"spriteIndex"];
    animation.fromValue = @(1);
    animation.toValue = @(12);
    animation.duration = 0.45f;
    animation.repeatCount = 1;
    animation.delegate = sprite;
    [sprite addAnimation:animation forKey:nil];
}
Here’s what you do in the above method, step-by-step:
- Create a new instance of SpriteLayer. Prior to iOS 6, a known ARC bug would sometimes cause an instance of UIImage to be released immediately after the object’s CGImage property was accessed. To avoid any untoward effects, you make a copy of the CGImage data before ARC has a chance to accidentally release it, and work with the copy instead.
- You adjust the position of the sprite layer just slightly to align its center with the target crosshairs at the center of the screen. Even though CALayer declares a frame property, its value is derived from bounds and position. To adjust the location or size of a Core Animation layer, it’s best to work directly with bounds and position.
- You then add the sprite layer as a sublayer of the current view.
- You construct a new Core Animation object and add it to the sprite layer.
Sharp-eyed readers will note that the animation runs to index 12, even though there are only 11 frames in the texture atlas. Why would you do this?
Core Animation first converts integers to floats before interpolating them for animation. For example, in the fraction of a second that your animation is rendering frame 1, Core Animation is rapidly stepping through the succession of “float” values between 1.0 and 2.0. When it reaches 2.0, the animation switches to rendering frame 2, and so on. Therefore, if you want the eleventh and final frame to render for its full duration, you need to set the final value for the animation to be 12.
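To put numbers on that (a quick illustrative sketch, not project code):

// fromValue = 1, toValue = 12, duration = 0.45 s, 11 frames in the atlas
CGFloat duration  = 0.45f;
NSUInteger frames = 11;
CGFloat perFrame  = duration / frames;  // ~0.041 s on screen per frame
// Frame n renders while spriteIndex interpolates through [n, n+1),
// so animating only up to 11 would cut the final frame off almost immediately.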
Finally, you need to trigger your shiny new explosions every time you successfully hit the target.

Add the following code to the end of hitTargetWithPoints: in ViewController.mm:

    // (4) Run the explosion sprite
    [self showExplosion];
}
Build and run your project; tap the trigger button and you should see some giant balls of fire light up the scene as below:
Giant fiery explosions! They’re just what you need for an AR target blaster game!
Where To Go From Here?
So far you’ve created a “live” video stream using AVFoundation, and you’ve added some HUD overlays to that video as well as some basic game controls. Oh, yes, and explosions – lots of explosions. :]
You can download the completed project for this part as a zipped project file.
The third part of this tutorial will walk you through AR target detection.
If you have any questions or comments on this tutorial series, please come join the discussion below!