Learn how to use tab bar controllers to display multiple tabs of information in your apps.
The post Video Tutorial: Tab Bar Controllers appeared first on Ray Wenderlich.
Welcome back to our Unity 4.3 2D Tutorial series!
In the first part of the series, you started making a fun game called Zombie Conga. You learned how to add sprites, work with sprite sheets, configure the game view, and animate and move sprites using scripts.
In this second part of the series, you will re-animate the zombie (he is undead, after all), this time using Unity’s built-in animation system. You’ll also add several animations to the cat sprite.
By the time you’re done, you’ll have a great understanding of Unity’s powerful animation system, and Zombie Conga will get its groove on!
First download this starter project, which contains everything from the first installment in this tutorial series, Unity 4.3 2D Tutorial: Getting Started. If you really want to, you can continue with your old project, but it’s better to use this starter project so you’re sure to be in the right place.
Unzip the file and open your scene by double-clicking ZombieConga/Assets/Scenes/CongaScene.unity:
While these assets are the same as the ones you made in Unity 4.3 2D Tutorial: Getting Started, this version of the project has been organized into several folders: Animations, Scenes, Scripts and Sprites. This will help keep things tidy as you add assets to the project.
You’ll store the animations you create in this tutorial in the Animations folder; the other folders have names that clearly describe their contents.
Note: If your folders show up as icons rather than small folders, you can drag the slider in the bottom right of the window to change to the view you see here. I will switch back and forth between the icon view and compressed view in this tutorial, as is convenient.
Solution Inside: Want to learn how to create folders yourself?
The folders in your Unity project are nothing more than directories on your computer’s disk.
You can create folders from within Unity in one of three ways: choose Assets\Create\Folder from Unity's menu bar, click Create\Folder in the upper left of the Project browser, or right-click inside the Project browser's asset area and choose Create\Folder from the context menu.
Keep in mind that when you have multiple folders, Unity adds any new assets you create to whatever folder you have selected in the Project browser, or to the top-level Assets folder if nothing is selected. However, you can always drag assets between folders if you want to rearrange them.
If it has been a while since you completed the first part of this tutorial series, then you may not remember the state of the project. Run the scene now to refresh your memory.
In the first part of this series, you animated the zombie’s walk cycle using a script named ZombieAnimator.cs. That was meant to demonstrate how to access the SpriteRenderer from within your scripts, which you’ll find can be useful at times. However, here you’ll replace that script with Unity’s built-in support for animations.
Select zombie in the Hierarchy and remove the ZombieAnimator (Script) component in the Inspector. To do so, click the gear icon in the upper right of the component and then choose Remove Component from the menu that appears, as shown below:
Note: If you only wanted to stop the animation temporarily, you could instead disable the component by unchecking the box to the left of its name in the Inspector. Of course, disabling components is not just for testing purposes. There will be times when you want to enable or disable components at runtime, which you can do from within your scripts by setting the component’s enabled flag.
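For example, here’s a minimal sketch of toggling a component’s enabled flag from a script. The ComponentToggleExample class and the space-bar binding are illustrative assumptions, not part of Zombie Conga:

using UnityEngine;

// Hypothetical example: press the space bar to toggle the SpriteRenderer on and off.
public class ComponentToggleExample : MonoBehaviour
{
    void Update()
    {
        if (Input.GetKeyDown(KeyCode.Space))
        {
            SpriteRenderer sr = GetComponent<SpriteRenderer>();
            sr.enabled = !sr.enabled;   // flip the component's enabled flag at runtime
        }
    }
}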
You won’t be using ZombieAnimator.cs anymore in this tutorial, so removing it from zombie is the cleaner option.
Run your scene just to make sure rigor mortis has set in. That is, make sure your zombie isn’t moving those limbs!
To start creating animations, open the Animation view by choosing Window\Animation. As you can see in the following image, you can also add this view to your layout by choosing Animation from the Add Tab menu connected to any other tab:
Arrange your interface so that you can see the contents of the Animation view and the Project browser at the same time. For example, it might look something like this:
Inside the Project browser, expand zombie to reveal its Sprites, then select zombie in the Hierarchy.
Your interface should look something like this:
Because you selected zombie in the Hierarchy, the Animation view allows you to edit the zombie’s animations. Currently, it has none.
The Animation view usually continues to operate on whatever you most recently selected in the Hierarchy. That means that later, when you select a Sprite asset in the Project browser and Unity clears the Hierarchy’s selection, the Animation view will still operate on the zombie’s animations. This will continue to be the case until you select something else in the Hierarchy.
The term “usually” was used above because if you select certain types of assets in the Project browser, such as Prefabs, the controls in the Animation view all disable themselves rather than allow you to continue working on your animations.
Before creating any animations, it will help if you understand the following three terms: an Animation Clip stores an animation for a specific GameObject; a curve describes the changes made to a single property of that GameObject over the course of the clip; and a keyframe records a property’s value at a specific point in time, with Unity calculating the values in between keyframes for you.
With those terms defined, it will now be easier to discuss the Animation view. You use this window to create Animation Clips associated with a specific GameObject. Each clip will consist of one or more curves, and each curve will consist of one or more keyframes.
The following image highlights the different parts of the Animation view. This tutorial refers to these areas as the control bar, the clip drop-down, the curves list and the timeline, which can display in either Dope Sheet or Curves mode.
Don’t worry if it isn’t yet clear what each of these areas does – the Animation view’s various components will be described in detail throughout this tutorial.
Inside the Project browser, click zombie_0 and then shift+click zombie_3, resulting in a selection containing all four zombie Sprites, as shown below:
Now drag the selected Sprites over the Animation view. A green icon with a plus sign in it will appear, as shown below:
When you see the + icon, release the mouse button, which will display a dialog titled Create New Animation. This dialog simply allows you to name and choose a location in which to save your new Animation Clip.
Enter ZombieWalk in the field labeled Save As, choose the Animations directory, and click Save. The following image shows the completed dialog:
When the dialog closes, Unity has done several things for you behind the scenes: it saved a new Animation Clip named ZombieWalk in the Animations folder, added an Animator component to zombie, created an Animator Controller named zombie and assigned it to that Animator, added a Sprite curve containing your four Sprites to the clip, and put the Animation view into recording mode.
The Animator Controller is what decides which Animation Clip the Animator should play at any given time and will be covered in the next part of the tutorial.
The other red components are covered in more detail later, but for now, just know that whenever you see a red field or UI component in Unity, you’re currently recording an Animation Clip.
You can only work on one clip at a time in the Animation view. Clicking the clip drop-down allows you to select which Animation Clip to edit from all of the clips associated with a specific GameObject. As you can see in the following image, the zombie currently only has one clip – ZombieWalk.
In the above image, the check mark next to ZombieWalk indicates that this is the current clip selection, which becomes more useful when you have multiple clips in the list. The above image also shows that the menu includes an option labeled [Create New Clip], which allows you to create a new clip associated with the same GameObject. You’ll use this later when animating the cat.
The screenshot below shows how dragging the Sprites into the Animation view automatically added a curve labeled zombie : Sprite:
This means that this animation affects the sprite field of the zombie’s SpriteRenderer component. Sometimes, if the property name is not obvious based on the components attached to the object, the property name will include the name of the component as well. For example, if you wanted to animate the enabled state of the SpriteRenderer, it would be labeled zombie : SpriteRenderer.enabled.
Select zombie in the Hierarchy (or select zombie : Sprite in the Animation view’s curves list, which automatically selects zombie in the Hierarchy), and then look at the Sprite Renderer component in the Inspector. As the following image shows, the Sprite field is tinted red. This not only indicates that you are recording an Animation Clip, but that the clip you are recording specifically affects this field.
In the Animation view’s timeline, you can see that Unity added four keyframes to the Sprite curve. If you can’t see the frames as clearly as you can in the following image, try zooming in on the timeline by scrolling with your mouse wheel or by performing whatever scroll operation your input device supports.
Along the top of the timeline you’ll see labels that consist of a number of seconds followed by a colon and then a frame number.
Values start counting at zero, so 0:02 indicates the third frame of the first second of animation. I wanted to use 1:02 as the example, but I was afraid the phrase “third frame of the second second” might be confusing. ;]
As you can see, Unity placed the four sprites at 0:00, 0:01, 0:02 and 0:03.
Before doing anything else, try running the clip by clicking the Play button in the Animation view’s control bar.
Be sure that the Animation view is visible along with either the Scene view or the Game view so you can see the zombie strutting his stuff. Or, to put it more accurately, watch as he apparently gets electrocuted.
If that were Unity’s idea of good animation, you’d have to resurrect that script you removed earlier. Fortunately, this isn’t a problem with Unity; it’s just a problem with ZombieWalk‘s configuration.
Click the Animation view’s Play button again to stop the preview.
You may recall from the first installment of this series that the zombie’s walk cycle ran at ten frames per second. However, if you look at the field labeled Samples in the Animation view’s control bar, you’ll see it is set to 60:
The Samples field defines an Animation Clip’s frame rate and it defaults to 60 frames per second. Change this value to ten, and notice how the timeline’s labels now go from 0:00 to 0:09 before moving to 1:00:
Preview your animation again by clicking the Animation view’s Play button. You should see the zombie moving at a much more reasonable pace:
The zombie is looking better already, but he’s not quite his old self yet. That’s because the animation you defined only includes the first four frames of the walk cycle, so when it loops it jumps from zombie_3 back to zombie_0. You need to add some more frames to smooth this transition.
Select only zombie_2 in the Project browser and drag it over the Animation view. Position your cursor inside the zombie : Sprite row in the timeline, just to the right of the last keyframe, and release your mouse button, as shown below:
You should now have keyframes at 0:00 through 0:04. However, depending on your zoom level, it’s pretty easy to accidentally place the new keyframe too far to the right. For example:
If this occurs, simply drag the keyframe to the left using the small diamond above the curve, as shown below:
Now repeat the previous process to add zombie_1 to the end of the animation, which is frame 0:05. Your Animation view now looks like this:
Test your animation again by pressing the Play button in the Animation view. Your zombie is finally strutting his stuff.
With your Animation Clip complete, run your scene and make sure the zombie still moves around properly while animating.
You’ve successfully replaced the zombie’s script-based animation with an Animation Clip. That may have seemed like a lot of effort, but it was really just a lot of words describing the UI. If you break down what you actually did, you basically dragged some sprites into the Animation view and set a frame rate. Darn, now I’m kicking myself for not just writing that in the first place.
If you simply select a group of Sprites in the Project browser and drag them directly into the Scene or Hierarchy views, Unity does everything you saw earlier, like creating an Animation Clip, an Animator and an Animator Controller. However, it also creates a GameObject in the Scene and connects everything to it! I’m guessing that by Unity 5.0, you’ll just have to drag your Sprites into the Scene and Unity will automatically create the correct gameplay based on the art style. You heard it here first!
Ok, you’ve gotten your feet wet with animations in Unity, but there is still plenty left to learn. Fortunately, you’ve got a perfectly good cat to experiment on just sitting there.
Unity can animate things other than Sprites. Specifically, Unity can animate values of any of the following types: floats, vectors, quaternions, colors, booleans and object references such as Sprites.
For the zombie, you won’t modify anything other than the Sprite, but over the lifetime of a cat, it will move through the following sequence of animations: the cat scales up into view when it spawns, wiggles in place to attract the zombie, turns green when the zombie touches it, hops along as part of the conga line, and finally spins away, shrinking until it disappears.
There are actually five different Animation Clips at work here: CatSpawn, CatWiggle, CatZombify, CatConga and CatDisappear.
You’ll produce each of those animations without any additional artwork. Instead, you’ll animate the cat’s scale, rotation and color properties.
Select cat in the Hierarchy. Remember, Unity decides where to attach new Animation Clips based on the most recent Hierarchy selection.
In the Animation view, choose [Create New Clip] from the clip drop-down menu in the control bar, shown below:
In the dialog that appears, name the new clip CatSpawn, select the Animations directory, and click Save.
Unity automatically adds an Animator component to the cat when you create the first Animation Clip. As you make each of the following clips, Unity will automatically associate them with this Animator.
Repeat the above process to create four more clips, one for each of the other animations you’re going to make. Name them CatWiggle, CatZombify, CatConga and CatDisappear.
The Animation view’s clip drop-down menu now contains all five clips, as shown below:
Of course, if you created the clips before you read this note for some silly reason, like because those instructions came before this note, well then one of us made a serious mistake in our order of operations, didn’t one of us?
This tutorial will show a few different techniques for setting up Animation Clips. For the zombie’s walk cycle, you dragged Sprites into the Animation view and Unity created all the necessary components automatically. This time, you’ll start with an empty clip to edit and let Unity add the curves for you.
Select cat in the Hierarchy, then select CatSpawn from the Animation view’s clip drop-down menu to begin working on this clip, as shown below:
To enter recording mode, press the Record button in the Animation view, shown below:
Once again, Unity indicates that you are now in recording mode by tinting red the Animation view’s Record button, the scene controls, and the checkbox next to the cat’s Animator component. The Record button while recording is shown below:
For the cat’s spawning animation, you simply want to scale the cat from zero to one. In the Inspector, set the Scale values for X and Y to 0. A 2D object’s scale along its z-axis has no effect, so you can ignore the Scale’s Z value.
Your Transform component in the Inspector now looks like this:
As soon as you adjusted one of the Scale fields, they all turned red in the Inspector to indicate that you are currently recording an animation for the cat that contains a Scale curve. Unity automatically added a curve to your Animation Clip named cat : Scale, shown below:
Look in either your Scene or Game views and you will see that your cat just disappeared!
Technically, the cat is still located right where it was, but without a width and height, you can’t see it. That’s because while recording an animation, your Scene and Game views display the GameObject as it appears in the frame currently selected in the Animation view. But what frame is selected in the Animation view?
Take a look at the Animation view’s timeline, which now includes a single keyframe at 0:00 on the cat : Scale curve, as shown below:
The vertical red line you see in the above image is the scrubber. Its position indicates your current frame in the animation. Any changes you make will be recorded at this frame, creating a new keyframe there if necessary. Also, as was previously mentioned, the Scene and Game views will display your GameObject as it will appear during this frame of animation.
The scrubber appears whenever you are in recording mode, as well as while previewing your animation with the Animation view’s Play button.
The Animation view’s control bar includes a current frame field that indicates the frame at which the scrubber is located. As you can see below, it is currently at frame zero:
Type 15 in the frame field to move the scrubber to frame 15. Because this clip is set to run at 60 frames per second, the scrubber is now at 0:15, as the following image shows:
In the Inspector, set the cat’s X and Y values for Scale to 1, as shown below:
Press the Play button in the Animation view to preview your animation. Inside the Game or Scene views, watch your cat scale up to full size and then disappear, over and over again.
Click the Record button in the Animation view to exit recording mode.
Play your scene and notice that the cat still flashes continuously in and out of existence. Hmm, that wasn’t just a feature of the Animation view’s Preview mode?
Everyone loves a pulsating cat, but that’s not really what you were going for. The problem is that Animation Clips in Unity loop by default, but this should really be a one shot deal.
Select CatSpawn in the Project browser to view the Animation Clip’s properties in the Inspector. Uncheck the box labeled Loop Time to disable looping, as shown here:
Play the scene again and the cat pops in before remaining very, very still.
Note: If it’s too hard to see the animation, you can temporarily decrease the samples to 10 like you did earlier to slow down the animation. Just be sure to set it back when you’re done!
You’d think it would want to draw a little more attention to itself to attract the zombie, right? Well, nothing attracts zombies like a wiggling cat, so it’s time to make it wiggle.
It’s perfectly fine to let Unity add any necessary curves for you based on changes you make while recording a clip, but sometimes you’ll want to add a curve explicitly from within the Animation view.
Select cat in the Hierarchy, and then choose CatWiggle from the clip drop-down menu in the Animation view’s control bar.
Click Add Curve to reveal the following menu:
This menu lists each Component on the associated GameObject. Click the triangle next to Transform to reveal the properties available for animating, and click the + icon on the right side of Rotation to add a Rotation curve, as demonstrated below:
Inside the Animation view, click the triangle to the left of cat : Rotation to expose curves for the x, y and z rotation properties. Unity gives you three curves so you can manipulate each component of the rotation individually, and as you can see in the following image, Unity automatically added keyframes at frames 0 and 60 of each curve it created:
Each of the keyframes Unity added has the same values – whatever ones existed on the GameObject when you added the curve. In this case, the rotations are all zero, as shown below:
Inside the Animation view, make sure the scrubber is at frame zero. Move your cursor over the 0 next to Rotation.z, and notice that it highlights to indicate that it’s actually a text field, as shown below:
Click in the Rotation.z field and change the value to 22.5, as shown below:
Now move the scrubber to frame 30 and set the Rotation.z value to -22.5. Unity automatically creates a new keyframe here to store this new value, as shown below:
Finally, move the scrubber to frame 60 and set the Rotation.z value to 22.5.
Press the Play button in the Animation view to preview the rotation. Be honest, that cat’s soothing oscillation makes you want to tape a cat to a fan, doesn’t it?
To make the cat look more like it’s hopping around rather than just swiveling from side to side, you’ll add a bit of scaling to the animation. Basically, you want the cat to be its normal size at the extremes of its rotation, and a bit larger right in the middle.
With what you’ve practiced already, you should have no problem adding a Scale curve to CatWiggle. Do so now.
Solution Inside: Don't remember how to add a curve?
You can add a Scale curve in either of the following two ways: click Add Curve in the Animation view, expand Transform and click the + to the right of Scale; or, with the Animation view in recording mode, simply change one of the cat’s Scale values in the Inspector and Unity will add the curve for you.
The rotation is at its most extreme values at the keyframes you placed at 0, 30 and 60, so it’s at its midpoint at frames 15 and 45. That means you’ll want to increase the scale at these two frames.
Set the X and Y Scale values to 1.2 at frames 15 and 45, and set them to 1 at frames 0, 30 and 60.
You’ve learned two different ways to set values, so give it a try yourself.
Solution Inside: Need a refresher on setting values for keyframes?
You can set these values in either of the two ways you’ve learned: change the Scale values in the Inspector while recording, or type them directly into the Scale.x and Scale.y fields in the Animation view’s curves list. No matter which method you choose, be sure to move to the appropriate frame before setting each value.
When you’re done, you should have keyframes that look like this:
Preview the animation by clicking the Play button in the Animation view. Your cat should now look quite enticing to a hungry zombie:
Now run your scene. Hmm, the cat still just pops in and then stays perfectly still, like it just saw a ghost or something else equally terrifying.
The problem is that you have more than one Animation Clip associated with the cat, so now you need to make sure the cat plays the correct animation at the correct time. To do that, you need to configure the cat’s Animator Controller, which you’ll learn how to do in the next part of this tutorial!
Here are a couple extra bits of information about the Animation view that you may find useful.
While this tutorial didn’t cover the Animation view’s Curves mode, you’ll probably find you need to use it at some point to fine-tune your animations.
In Curves mode, the timeline displays a graph of the selected curves. The following image shows the Animation view in Curves mode with both the Rotation.z and the Scale.x curves selected:
You can use this mode to add, delete and move keyframes, as well as set values for keyframes like you did in Dope Sheet mode. However, the real power of Curves mode comes from its ability to adjust the values between the keyframes.
If you select any keyframe, you can access a menu of options by either right-clicking on it, which is often tricky to accomplish, or by clicking the diamond for that curve in the curves list, as shown below:
The options in the menu allow you to control the curve between the keyframes. By selecting an option like Free Smooth, Flat, or Broken, you can then click on a keyframe in the timeline and access handles to control how the curve enters/exits that keyframe, as shown below:
Working directly with the curves can be tricky and error prone; it’s certainly not an exact science. But if you aren’t happy with the timing of one of your animations, playing around with these options can sometimes help.
You probably already know that if you make any changes to your GameObjects while playing a scene in the editor, these changes are lost when you stop the scene. However, this is not the case when previewing Animation Clips using the Animation view’s Play button.
That means that you can tweak the values of a clip while it loops in preview mode until you get them just right. This can come in handy sometimes, such as when adjusting curves in Curves mode or when adjusting a clip’s timing by moving keyframes.
In this part of the tutorial you learned about using Unity’s Animation view to create Animation Clips for your 2D games. You can find a copy of the project with everything you did here.
While you can get started making animations with just this information, there are still more details to cover. You’ll get more practice making Animation Clips in the next part of this tutorial, and you’ll learn about using Animator Controllers to transition between different clips.
In the meantime, don’t forget that Unity has great documentation. For more information on the Animation view (along with some stuff that will be covered in the next part of this tutorial), take a look here.
I hope you enjoyed this tutorial and found it useful. As always, please ask questions or leave remarks in the Comments section.
The post Unity 4.3 2D Tutorial: Animations appeared first on Ray Wenderlich.
Welcome back to our Unity 4.3 2D Tutorial series!
In the first part of the series, you started making a fun game called Zombie Conga, learning the basics of Unity 4.3’s built-in 2D support along the way.
In the second part of the series, you learned how to animate the zombie and the cat using Unity’s powerful built-in animation system.
In this third part of the series, you’ll get more practice creating Animation Clips, and you’ll learn how to control the playback of and transition between those clips.
This tutorial picks up where the previous part ended. If you don’t already have the project from that tutorial, download it here.
Just like you did in Part 1, unzip the file and open your scene by double-clicking ZombieConga/Assets/Scenes/CongaScene.unity.
It’s time to make that cat dance!
So far you’ve only been working with Animation Clips, such as ZombieWalk and CatSpawn. You learned in the previous part of this series that Unity uses an Animator component attached to your GameObjects in order to play these clips, but how does the Animator know which clip to play?
To find out, select cat in the Hierarchy and look at its Animator component in the Inspector. The Controller field is set to an object named cat, as shown below:
This object is an Animator Controller that Unity created for you when you made the cat’s first Animation Clip, and it’s what the Animator uses to decide which animation to play.
As you can see in the following image, the Animations folder in the Project browser contains the controller named cat, as well as a controller named zombie, which Unity created for the zombie GameObject:
Open the Animator view by choosing Window\Animator. Don’t let the similar names fool you: this view is different from the Animation view you’ve been using.
Select cat in the Hierarchy to view its Animator Controller in the Animator view, as shown below:
For now, ignore the areas named Layers and Parameters in the upper and lower left corners, respectively. Instead, take a look at the various rectangles filling most of the view.
What you’re looking at are the states of a state machine that determines which Animation Clip should run on the cat.
If you’ve never heard of a state machine, think of it as a set of possible modes or conditions, called states. At any given time, the machine is in one of these known states, and it includes rules to determine when to transition between its states.
These rectangles each represent an Animation Clip, except for the teal one named Any State, which you’ll read about later.
The orange rectangle represents the default state, i.e. the state that runs when the Animator Controller starts, as shown below:
Unity sets the first Animation Clip you associate with an Animator Controller as its default state. Because you created the cat’s clips in order, Unity correctly set CatSpawn as the default animation. However, if you ever want to assign a different default state, simply right-click the new state in the Animator view and choose Set As Default from the popup menu that appears.
The following image shows how you would manually set CatSpawn as the default state:
With the Animator view still visible, play your scene. Notice that a blue progress bar appears at the bottom of the CatSpawn state. This bar shows you the cat’s exact position within the state machine on any given frame.
As you can see, the cat is continuously running through the CatSpawn animation without ever moving on to the next state.
Note: If you don’t see this blue bar, make sure that you still have the cat selected in the Hierarchy view.
You need to provide the Animator Controller with rules for moving between states, so this is the perfect time to segue into a talk about transitions!
A state machine isn’t very useful if it can’t ever change states. Here you’ll set up your Animator Controller to smoothly transition your cat from its spawn animation into the animation defined by CatWiggle.
Inside the Animator window, right click on CatSpawn and choose Make Transition. Now as you move your mouse cursor within the Animator view, it remains connected to the CatSpawn state by a line with an arrow in its middle. Click CatWiggle to connect these two states with a transition, as demonstrated below:
Play your scene and you’ll see the cat pop in and start wiggling. How easy was that?
That was certainly quick to set up, and you might be happy with the results. However, sometimes you’ll want to tweak a state transition. For example, in this case, there is actually something a bit strange and unintended happening, so it’s a good time to learn about editing transitions.
With only one transition defined, the easiest way to edit it is to click directly on the transition line within the Animator view, shown in the following image:
This displays the transition’s properties in the Inspector, as shown below:
When you have multiple transitions, selecting them in the Animator view isn’t always easy. Instead, you can view a specific transition’s properties by selecting the state in the Animator view that starts the transition – in this case, CatSpawn. In the Inspector, click on the appropriate row in the Transitions list to reveal details about that transition, as shown below:
In the Inspector, Unity provides you with a visual representation of how it intends to move between the two animation clips. The following image highlights the most important elements of the transition editor:
As you can see in the previous image, Unity claims it will start the transition between the two clips as soon as CatSpawn starts playing. For the duration of CatSpawn‘s animation, Unity gradually blends the amount by which each clip affects the resulting animation, starting at 100% CatSpawn and 0% CatWiggle, and moving to 100% CatWiggle and 0% CatSpawn.
Unfortunately, it seems that Unity actually triggers this transition after CatSpawn has played through one full time, and then it starts the transition at the beginning of the second run of CatSpawn. You can see it more clearly in this frame-by-frame walk through of the transition:
It seems that starting the transition at the zero percent mark of the first clip causes a problem. I don’t know if this is intended behavior or a bug in Unity, but it’s a good chance to try the transition editor.
With the transition selected, look at the Conditions list in the Inspector, shown below (you might have to scroll down a bit to see it):
This list contains the conditions that trigger this transition. By default, the only condition available is named Exit Time, which is considered true after a specified percentage of the first animation has played.
Exit Time’s value is currently 0.00, meaning it’s set to the start of the clip. You can adjust this value directly in the field next to Exit Time or you can move the Start Marker (>|) described earlier, but it’s easier to be accurate using the field directly.
Change the value for Exit Time to 0.01, as shown in the following image. Exit Time’s value is based on a zero to one scale, so 0.01 means to start the transition after playing just one percent of CatSpawn.
Play your scene again and the cat animates into view, transitioning smoothly into its wiggle animation.
Take a look at the transition in slow motion to see the difference more clearly:
With all this jumping around the cat’s doing, the zombie is sure to notice it! Of course, getting attention from zombies in real life usually leads to zombification, and in this game it’s no different. When the zombie touches them, the cats will turn into demon zombie conga cats. To indicate their undeath, you’ll turn them green.
Switch to CatZombify in the Animation view and add a curve to edit the Sprite Renderer’s color.
Solution Inside: Not sure how to add the Color curve?
Select cat in the Hierarchy and choose CatZombify in the clip drop-down in the Animation view.
Click Add Curve in the Animation view, expand Sprite Renderer, and click the + next to Color.
When you add a new curve, Unity automatically adds keyframes at frames zero and 60. You can move between them like you’ve been doing so far, either using the frame field in the Animation view’s control bar, by selecting a keyframe within the timeline, or by dragging the scrubber to a specific frame. However, the Animation view’s control bar also includes two buttons for moving to the Previous and Next keyframes, as shown below:
Click the Next Keyframe button (>|) to move to frame 60. With the cat : Sprite Renderer.Color curve expanded, change the values for Color.r and Color.b to 0, as shown below:
As you can see in the Scene or Game views, or in the Inspector‘s Preview pane (if you still have cat selected in the Hierarchy), your cat is a very bright green. You could tweak the color values to make the cat appear a more zombie-esque hue, but this will be fine for Zombie Conga.
Preview the clip by pressing the Play button in the Animation window. Uh oh, I think kitty is gonna be sick!
You now need to make the cat transition from CatWiggle to CatZombify. Select cat in the Hierarchy and switch to the Animator view.
Right click on CatWiggle, choose Make Transition, and then click CatZombify.
Play your scene and the cat appears, wiggles a bit, turns green, and then stops moving. However, after turning fully green it resets its color to white and transitions to green again, over and over.
You’ve seen this problem before. Try to solve it yourself, but if you get stuck, check out the Spoiler below for the answer.
Solution Inside: Not sure how to keep a green cat green?
Just like with CatSpawn, you want CatZombify to be a one-shot animation rather than use the default looping behavior. To fix this, select CatZombify in the Project browser and uncheck Loop Time in the Inspector.
Now when the cat turns green, it stays green. Purrfect. Get it? Because it’s perfect, but it’s a cat, and cats purr, so I used “purr” in place of “per”. I don’t want to brag, but I’m pretty sure I just invented that. :]
You have the transition from CatWiggle to CatZombify set up, but it occurs as soon as the cat starts wiggling. In the actual game, you’re going to want the cat to keep wiggling until the zombie touches it, and then you’ll want it to turn green. To do that, you need CatWiggle to loop until a specific condition triggers the transition.
Unity allows you to add to an Animator Controller any number of user-defined variables, called parameters. You can then reference these parameters in the conditions that trigger transitions.
With cat selected in the Hierarchy, open the Animator view. Click the + in the lower left of the window, where it says Parameters, and choose Bool from the popup that appears, as shown below:
This parameter will be a flag you set to indicate whether or not a cat is a member of the conga line, so name it InConga.
Your Animator view’s Parameters should look like the following:
In the Animator view, select the transition you created earlier between CatWiggle and CatZombify.
In the Inspector, click the combo box under Conditions and you’ll see that there are now two options, Exit Time and InConga. Choose InConga, as shown below:
Be sure the combo box that appears to the right of the InConga condition is set to true, as shown below:
Now play your scene and notice how the cat appears and starts wiggling, but doesn’t turn green. In the Animator view, you can see how the cat continues looping in the CatWiggle state:
With the scene running and both the Animator and the Game views visible, click the empty checkbox next to InConga in the lower left corner of the Animator view. As soon as you do, you’ll see the animation state transition to CatZombify, and the cat in your Game view turns green and its wiggling smoothly comes to a halt.
Stop the scene. In the next part of this tutorial series, you’ll set the InConga flag from a script when the zombie touches the cat, but for now, you’ll just finish up the rest of the cat’s animations.
Note: You only used a bool parameter to trigger a state change, but you can add parameters of type float, int and trigger, too. For example, you might have a float parameter named “Speed” and then set up your Animator Controller to transition from a walk to a run animation if Speed exceeds a certain value.
Trigger parameters are similar to bool ones, except when you set a trigger and it initiates a transition, the trigger automatically resets its value once the transition completes.
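As a rough sketch of that float-parameter idea, the snippet below feeds a speed value to an Animator each frame. The WalkRunExample class and the “Speed” parameter are hypothetical and not part of Zombie Conga:

using UnityEngine;

// Hypothetical sketch: feed a "Speed" float parameter to an Animator so its
// controller can transition between walk and run states based on movement speed.
public class WalkRunExample : MonoBehaviour
{
    private Animator anim;
    private Vector3 lastPosition;

    void Start()
    {
        anim = GetComponent<Animator>();
        lastPosition = transform.position;
    }

    void Update()
    {
        // Estimate speed from how far the object moved since the last frame.
        float speed = (transform.position - lastPosition).magnitude / Time.deltaTime;
        lastPosition = transform.position;

        anim.SetFloat("Speed", speed);
    }
}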
Right now the cat can appear, wiggle, and turn into a zombie cat. These are all good things for a cat to do, but you still need it to hop along in the conga line and then go away.
The actual logic to move the cats in a line following the zombie will have to wait for the next part of this series, which focuses on finishing all the code to make this a playable game. For now, you’re only going to create the Animation Clip and set up the transitions.
First, add to the CatConga Animation Clip a curve that adjusts the cat’s Scale. The scale animation should start and end with values of 1, and be 1.1 at its midpoint. The clip should last for 0.5 seconds and should loop.
Now go. Animate.
Solution Inside: Does animating on your own make you nervous?
Select cat in the Hierarchy, then go to the Animation view and select CatConga from the clip drop-down menu.
Click Add Curve, expand Transform, and click the + to the right of Scale. Move to frame 15 and set the Scale.x and Scale.y values to 1.1. Drag the keyframe that is currently at frame 60 to the left to move it to frame 30. You should now have three keyframes, one each at frames 0, 15 and 30. The x and y scales should be set to 1 at frames 0 and 30, and to 1.1 at frame 15.
Preview your clip by pressing Play in the Animation view. If you animated the scale properly, you should see a throbbing cat, as shown below:
You’ll have to use your imagination to envision it moving forward in sync with the animation. It looks like it’s hopping, right? If not, imagine better.
Now create a transition between CatZombify and CatConga. Try doing it yourself, but check the spoiler if you need help.
Solution Inside: Forget how to create a transition?
To create the transition, select cat in the Hierarchy and open the Animator view. Right-click on CatZombify in the Animator view, choose Make Transition and click CatConga.
Play the scene now. While it’s playing, click the InConga check box in the Animator view. Behold, the zombification of a cat! Almost. After turning green, the cat immediately turns white again, like some sort of demon kitty risen from the undead!
The cat turned white again because the CatConga animation doesn’t set a color for the Sprite Renderer, so Unity took it upon itself to change the color back to its default of white. I’m really not sure if that’s a bug or expected behavior, but either way, there’s an easy fix.
Add a curve to CatConga to edit the Sprite Renderer’s color. You should definitely know how to do this one.
Solution Inside: Don't definitely know how to do this one?
Select cat in the Hierarchy and choose CatConga in the clip drop-down in the Animation view.
Click Add Curve in the Animation view, expand Sprite Renderer, and click the + next to Color.
Inside the Animation view, move the scrubber to frame 0 in CatConga and set the Color.r and Color.b values to 0. Then press the Next Keyframe button twice to move to the keyframe at frame 30. Remember, that’s the button in the Animation view’s control bar that looks like >|.
You need to delete this keyframe. You can do so by clicking the diamond for cat : Sprite Renderer.Color in the curves list and choosing Delete Key, as shown below:
Additionally, you can get to these menus by right-clicking on the name of the curve in the curves list, or by right-clicking directly on a keyframe’s diamond-shaped marker in the timeline.
Now you should have a curve with a single keyframe at frame zero. This sets the cat’s color to green as soon as the clip starts and then makes no changes. Your Animation view now looks something like this:
Play your scene again, click the InConga check box in the Animator view, and watch as the cat turns undead and then stays undead, just like nature intended.
The goal of Zombie Conga will be to get a certain number of cats into the zombie’s conga line. When the zombie collides with an old lady, you’ll remove a couple cats from the line and have them spin off and shrink until they disappear, as shown in the following animation:
This is the last Animation Clip you’ll need to make for Zombie Conga, so it’s your last chance to try out this animation business on your own!
Try configuring the CatDisappear Animation Clip as described below: the cat should spin around and shrink down to nothing over the course of the clip, it should stay green the whole time, the clip should play only once rather than loop, and the Animator Controller needs a transition from CatConga to CatDisappear that triggers when InConga is false.
If you get stuck on any of that, the following has you covered.
Solution Inside: Need help making a cat disappear?
Select cat in the Hierarchy, and then select CatDisappear from the clip drop-down menu in the Animation view’s control bar.
Rotate the cat
Shrink the cat
Color the cat green
Disable looping
Transition to CatDisappear
Now test the cat’s full suite of animations. Play the scene and watch your cat appear and start wiggling. Next, click the InConga check box in the Animator view to turn your cat green and make it start hopping. Finally, uncheck the InConga check box in the Animator view to make the cat spin away into nothingness. Bear witness to the sad, beautiful lifecycle of a cat:
Wait, didn’t that say “nothingness”? The cat disappeared from the Game view, sure, but take a look at the Hierarchy and you’ll see the cat is still hiding out in your scene:
You don’t want zombie cats sticking around in your scene after they disappear. That would make them zombie-zombie cats! Of course, you don’t want to remove a cat until you’re sure it’s done its animation, either. It’s a good thing Unity supports Animation Events!
Synchronizing game logic with animations can be tricky. Fortunately, Unity provides you with a built-in events system tied to your Animation Clips!
Animation Clips can trigger events by calling methods on the scripts attached to the animation’s associated GameObject. For example, you can add a script to cat and then call methods on it at specific times from within an Animation Clip.
Select cat in the Hierarchy and add a new C# script to it named CatController. This should be familiar to you from Part 1 of this series, but the following spoiler includes a refresher.
Solution Inside: Can't remember how to add a script?
With cat selected in the Hierarchy, click Add Component in the Inspector. From the menu that appears, choose New Script and enter CatController as the name. Be sure CSharp is selected as the Language and click Create and Add.
Unity will put the new script in the top level Assets folder, but to keep things organized, switch to the Project browser and drag CatController from the Assets folder into the Scripts folder.
Open CatController.cs in MonoDevelop. If you aren’t sure how, just double-click CatController in the Project browser.
Inside MonoDevelop, remove the two empty methods in CatController.cs: Start and Update.
Now add the following method to CatController.cs:
void GrantCatTheSweetReleaseOfDeath()
{
    DestroyObject( gameObject );
}
You are going to configure CatDisappear to call this method when the clip finishes. As its name implies, this method simply destroys the script’s gameObject to release the zombie cat to wherever zombie cats go when they die. Florida?
Save the file (File\Save) and go back to Unity.
Select cat in the Hierarchy and select CatDisappear from the clips drop-down menu in the Animation view.
Move the scrubber to frame 120, and then click the Add Event button in the Animation view’s control bar, as shown below:
This will bring up a dialog that lets you choose a function from a combo box. As you can see in the following screenshot, the default value in the combo box says (No Function Selected) and if you leave it like this, then the event will have no effect.
The combo box lists all of the methods you’ve added in any scripts you’ve attached to the GameObject. That is, it won’t list methods your scripts inherit from MonoBehaviour, such as Start and Update.
In this case, you’ve only added GrantCatTheSweetReleaseOfDeath(), so select it in the combo box and then close the dialog.
The timeline includes a marker for your new event, as shown below:
Hovering the cursor over an event in the timeline shows a tooltip that displays the method this event will call, as you can see in the following image:
Right-clicking on an event brings up a menu that allows you to edit or delete the event, as well as add a new event.
If the method you choose accepts a parameter, the dialog also lets you set the value to pass. For example, the following image shows a function that takes an int:
Your event’s parameter can be of one of the following types: float, string, int, an object reference or an AnimationEvent object.
An AnimationEvent can be used to hold one each of the other types. The following image shows the dialog when editing a method that takes an AnimationEvent parameter:
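To make the parameter options concrete, here’s a hedged sketch of methods a script could expose for Animation Events to call. The class name, method names and log messages are illustrative assumptions, not part of Zombie Conga:

using UnityEngine;

// Hypothetical examples of methods an Animation Event could call.
public class AnimationEventExamples : MonoBehaviour
{
    // An event configured with an int parameter.
    public void OnCongaPosition(int position)
    {
        Debug.Log("This cat is number " + position + " in the conga line.");
    }

    // An event configured with an AnimationEvent parameter, which bundles
    // a float, an int, a string and an object reference together.
    public void OnDetailedEvent(AnimationEvent evt)
    {
        Debug.Log("float: " + evt.floatParameter +
                  ", int: " + evt.intParameter +
                  ", string: " + evt.stringParameter);
    }
}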
Run your scene. Once again, click the InConga check box in the Animator view to zombify the cat, and then click the InConga check box again to uncheck it and watch as the cat disappears into a sandy after-afterlife.
More importantly, notice that cat no longer appears in the Hierarchy, as shown below:
In a real game you would probably want to recycle your cat objects rather than continuously create and destroy them, but this will be fine for Zombie Conga. Besides, do you really want to live in a world that recycles cats?
This tutorial couldn’t cover everything involved with animating in Unity. Unity’s animation system, known as Mecanim, is quite sophisticated and was originally built for creating complex 3D animations. However, as you’ve seen, it works very well for 2D animations, too.
This section includes a few notes that weren’t mentioned anywhere else in this tutorial, but that may be helpful when you’re creating your own animations.
In addition to the states you worked with in the Animator view, there was also a teal rectangle labeled Any State, as shown below:
This is not really a state at all. Instead it is a special endpoint for creating transitions that can occur at any time.
For example, imagine you were writing a game where the player always had the ability to shoot a weapon, no matter what animation might be playing at the time the player tries to fire. Call its shooting animation FireWeapon.
Rather than creating a transition to the FireWeapon state from every one of your other states, you could create a transition to it from Any State instead.
If this transition’s condition is met – in this hypothetical case, maybe a FirePressed bool parameter is true – the transition will trigger, regardless of which state the Animator Controller happened to be executing at the time.
In addition to Animation Clips, states in an Animator Controller can be sub-state machines. Sub-state machines primarily help you keep complicated state machines organized by hiding branches of a state machine under a single state node.
For example, imagine a series of Animation Clips that make up an attack, such as aiming and firing a weapon. Rather than keep them all visible in your Animator Controller, you can combine that series of clips into a sub-state machine. The following image shows a hypothetical Animator Controller for the zombie, where it can transition from walking to attacking:
And the following image shows a simple set of states that might define an attack:
You connect transitions in and out of sub-state machines the same way you would regular states, except a popup menu appears that you use to choose the specific state to which to connect.
For a detailed look at sub-state machines, read this section of Unity’s manual.
Blend Trees are special states you can add to an Animator Controller. They actually blend multiple animations together in order to create a new animation. For example, if you had a walking and a running animation you could use a Blend Tree to create a new animation in between those based on run speed.
Blend Trees are complicated enough to require their own tutorial. To get you started, Unity’s 2D character controller training session includes a great example of using a Blend Tree to choose the appropriate Sprite for a 2D character based on the character’s current velocity.
You can read more about Blend Trees in Unity’s manual.
In the upper-left corner of the Animator view is a section labeled Layers, as shown below:
You can use layers to define complex animations that occur on a 3D character. For example, you might have one layer that controls a 3D character’s legs walking animation while another layer controls the character’s shooting animation, with rules for how to blend the animations.
I don’t know if you can use layers for 2D characters, but if you ever want to learn more about them, check out the documentation here. Please leave a comment if you know of a way they can be used in 2D games. Thanks!
The next part of this series will be mostly scripting, and one of the things you’ll learn is how to access from your scripts the parameters you defined in the Animator view.
However, I’m sure some readers won’t want to wait for the weeks it will probably take before that tutorial comes out. I’m not going to explain it here, but if you just can’t wait, here is some code that will let you access InConga from a script:
// Get the Animator component from your gameObject
Animator anim = GetComponent<Animator>();

// Sets the value
anim.SetBool("InConga", true);

// Gets the value
bool isInConga = anim.GetBool("InConga");
It’s best to cache the Animator component in a class variable rather than getting it every time you need it. Also, if speed is an issue, use Animator.StringToHash to generate an int from the string “InConga”, and then use the versions of these methods that take an int instead of a string.
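Here’s a rough sketch of what that caching might look like; the CatScriptSketch class and its SetInConga method are assumptions for illustration, not the tutorial’s actual CatController:

using UnityEngine;

// Hypothetical sketch: cache the Animator and the hashed parameter id so
// neither lookup happens more than once.
public class CatScriptSketch : MonoBehaviour
{
    private Animator anim;
    private static readonly int InCongaId = Animator.StringToHash("InConga");

    void Awake()
    {
        anim = GetComponent<Animator>();   // cached once, reused for every call
    }

    public void SetInConga(bool inConga)
    {
        anim.SetBool(InCongaId, inConga);  // int overload skips string hashing
    }
}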
This tutorial covered most of what you’ll need to know to make 2D animations in Unity. You can find the completed project here.
The next and final installment of this series will focus mostly on scripting, but will also show you how to implement collision detection using Unity’s new 2D physics engine. By the end of Part 4, you’ll have cats dancing in a conga line, win/lose states and an intro screen. You’ll even throw in some music and sound effects just for fun.
To learn more about building animations in Unity, take a look at these resources:
Please let us know if this tutorial was helpful. Leave remarks or ask questions in the Comments section.
The post Unity 4.3 2D Tutorial: Animation Controllers appeared first on Ray Wenderlich.
This is a reminder that we are having a free live tech talk titled “Sprite Kit vs. Unity 2D vs. Cocos2D Battle Royale” this Tuesday (tomorrow at the time of this post), and you’re all invited! Here are the details:
We hope to see some of you at the tech talk, and we hope you enjoy!
The post Reminder: Free Live Tech Talk (Sprite Kit vs Unity 2D vs Cocos2D Battle Royale) this Tuesday! appeared first on Ray Wenderlich.
About once a year, we open up the ability to apply to the raywenderlich.com tech editing team to the general public. Well, now’s that time for 2014!
Keep reading to find out why you should apply to join our tech editing team, and how to apply!
There are many great reasons to be a technical editor for raywenderlich.com:
This is an informal, part-time position – you’d be editing about 1-3 tutorials per month. We do expect that when you are assigned a tutorial to tech edit, you complete the tech edit within one week.
We are looking for a few expert level iOS developers with excellent English writing skills and a perfectionist streak to join the team.
If you meet the above requirements and want to apply to join the team, please send me a direct email with the answers to the following questions:
For the applicants who look most promising, we will send you an invite to an official tryout process, where you will perform a tech edit of a mock tutorial, and your score will be compared to those of our current editors. Those who pass the tryout will become full members of the team!
Note: The last time we opened up applications to the public, we received over 250 applicants, so please understand we may not have time to respond to everyone. We do promise to read each and every email, though!
Thanks for considering applying to join the raywenderlich.com tech editing team, and we are looking forward to working with you! :]
The post Call for Applicants: raywenderlich.com Tech Editing Team appeared first on Ray Wenderlich.
The first Tuesday of each month, one of the members of the team gives a Tech Talk, and by popular request we’ve started to stream these live.
Today in our March Tech Talk, three Tutorial Team members held an epic debate on which is the best framework for making 2D games on iOS: the Cocos2D vs Sprite Kit vs Unity Battle Royale.
Here were the contenders:
Here’s the video for anyone who didn’t get a chance to watch!
Note: At the point where I start giving the Cocos2D code demo, I forgot to make my screen share visible to viewers. When you get to this point (around 20:30), skip to 25:37 where the issue is resolved. Sorry about that!
Here is the Source Code of the Cat Jump game, made in each framework for easy comparison purposes:
Here are some handy links to learn more about these frameworks:
Thanks again to Marin and Brian for giving a great talk and having the guts to debate in front of a live audience :] And thank you to everyone who attended – we hope you enjoyed it!
Next month, our April Tech talk will be on Reactive Cocoa, with Colin Eberhardt (CTO of Shinobi Controls and Tutorial Team member) and Justin Spahr-Summers (Mac developer at GitHub and co-developer of ReactiveCocoa).
We will be broadcasting this talk live on Tuesday, April 8 at 2:00 PM EST, so if you want to join us, sign up here! As you watch the talk, you can submit any questions you have live.
Hope to see some of you there! :]
The post Cocos2D vs Sprite Kit vs Unity 2D Tech Talk Video appeared first on Ray Wenderlich.
These are all real examples of the type of apps that you can create with Augmented Reality. Augmented Reality (AR) is an exciting technology that blends, or augments, a real-time video stream with computer-generated sensory inputs such as sound, graphics or geolocation information.
Some of the most engaging mobile apps today use Augmented Reality, such as Action FX, SnapShop Showroom and Star Chart. They’ve all been huge hits in their own right and new technologies like Google Glass continue to expand the possibilities of AR in the future.
This tutorial showcases the AR capabilities of iOS in a fun and entertaining AR Target Shooter game. You’ll be using the popular OpenCV computer vision library as the foundation of your app, but you won’t need anything more than a basic familiarity with UIKit, CoreGraphics and some elementary C++ programming.
In this tutorial, you’ll learn how to:
Ready to add the next level of interaction to your iOS apps? Then it’s time to get started!
Download the starter project for this tutorial and extract it to a convenient location.
The first thing you’ll need to do is to integrate the OpenCV SDK with the starter project.
The easiest way to do this is to use CocoaPods, a popular dependency management tool for iOS.
CocoaPods is distributed as a ruby gem; this means that installing it is pretty straightforward. Open up a Terminal window, type in the following command and hit Return:
$ [sudo] gem install cocoapods
You may have to wait for a few moments while the system installs the necessary components.
Once the command has completed and you’ve been returned to the command prompt, type in the following command:
$ pod setup
That’s all there is to installing CocoaPods. Now you’ll need a Podfile to integrate OpenCV with your project.
Using Terminal, cd to the top level directory of the starter project; this is the directory where OpenCVTutorial.xcodeproj lives.
To verify this, type ls and hit Return at the command prompt; you should see some output as shown below:
$ ls
OpenCVTutorial          OpenCVTutorial.xcodeproj
Fire up your favorite text editor and create a file named Podfile in this directory.
Add the following line of code to Podfile:
pod 'OpenCV'
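If you'd like a slightly more explicit Podfile, something like the minimal sketch below also works; the platform line here is an assumption on my part, not a requirement from the tutorial:

# Minimal Podfile sketch -- the platform line is illustrative, not required.
platform :ios, '5.0'

pod 'OpenCV'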
Save Podfile and exit back to the command shell. Then type the following command and hit Return:
$ pod install
After a few moments, you should see some log statements indicating that the necessary dependencies have been analyzed and downloaded and that OpenCV is installed and ready to use.
An example shell session is indicated below:
Type ls again at the command prompt; you should see the list of files below:
$ ls
OpenCVTutorial              Podfile
OpenCVTutorial.xcodeproj    Podfile.lock
OpenCVTutorial.xcworkspace  Pods
OpenCVTutorial.xcworkspace is a new file — what does it do?
Xcode provides workspaces as organizational tools to help you manage multiple, interdependent projects. Each project in a workspace has its own separate identity — even while sharing common libraries with other projects across the entire workspace.
Open OpenCVTutorial.xcworkspace in Xcode by double-clicking on the file in the Finder, or by running the following command in Terminal:
$ open OpenCVTutorial.xcworkspace
Once you have the workspace open in Xcode, take a look at the Navigator. You’ll see the following two projects which are now part of the workspace:
The first project — OpenCVTutorial — is the original starter project that you just downloaded.
The second project — Pods — contains the OpenCV SDK; CocoaPods added this to the workspace for you.
The OpenCV CocoaPod takes care of linking most of the iOS Frameworks required for working with Augmented Reality, including AVFoundation, Accelerate, AssetsLibrary, CoreImage, CoreMedia, CoreVideo and QuartzCore.
However, this list doesn’t contain any Frameworks that support sound effects. You’ll add these yourself.
Add the AudioToolbox Framework to the OpenCVTutorial project as follows:
You should see the following panel:
Click on the + icon at the bottom of the list and add the AudioToolbox Framework to the OpenCVTutorial project.
Your Link Binary With Libraries menu item should now look similar to the following:
Open OpenCVTutorial-Prefix.pch; you can find it under the Supporting Folders group in the Navigator, like so:
Add the following code to OpenCVTutorial-Prefix.pch, just above the line that reads #ifdef __OBJC__:
#ifdef __cplusplus
#include <opencv2/opencv.hpp>
#endif
The *.pch extension indicates that this is a prefix header file. Prefix header files are a feature of many C/C++ compilers, including the compilers used by Xcode.
Any instructions or definitions found in the prefix header file are included automatically by Xcode at the start of every source file. By adding the above preprocessor directive to OpenCVTutorial-Prefix.pch, you've instructed Xcode to add the opencv2/opencv.hpp header file to the top of every C++ file in your project.
When working with C++ in Objective-C files, you must change the filename extension of those files from *.m to *.mm. The *.mm extension tells Xcode to compile the file as Objective-C++, which understands C++, while *.m files are compiled as standard Objective-C.
The #include directive points to the opencv2 headers that CocoaPods installed for you.
Build your project at this point to make sure that everything compiles without any errors.
The first component you’ll add to your game is a “real-time” live video feed.
To do this, you’ll need some type of VideoSource object able to retrieve raw video data from the rear-facing camera and forward it on to OpenCV for processing.
If you haven’t worked much with Apple’s AVFoundation Framework, one way to conceptualize it is like a giant, yummy sandwich:
There's a partially implemented VideoSource object included in your starter project to help get you started. For the remainder of Part 1 of this tutorial, you'll complete the implementation of VideoSource and get your live video feed up and running.
Start by opening up VideoSource.h, found under the Video Source group.
If you're having trouble finding the file, you can try typing VideoSource directly into the Filter bar at the bottom of the Navigator, as shown below:
Only the files whose names match the string you are typing in the Filter bar will be displayed in the Navigator.
Open up VideoSource.h and have a look through the file:
#import <AVFoundation/AVFoundation.h> #import "VideoFrame.h" #pragma mark - #pragma mark VideoSource Delegate @protocol VideoSourceDelegate <NSObject> @required - (void)frameReady:(VideoFrame)frame; @end #pragma mark - #pragma mark VideoSource Interface @interface VideoSource : NSObject @property (nonatomic, strong) AVCaptureSession * captureSession; @property (nonatomic, strong) AVCaptureDeviceInput * deviceInput; @property (nonatomic, weak) id<VideoSourceDelegate> delegate; - (BOOL)startWithDevicePosition:(AVCaptureDevicePosition)devicePosition; @end |
There are a few things in this file worth mentioning:
- There's a strong property named captureSession. captureSession is an instance of AVCaptureSession, described above; its purpose is to coordinate the flow of video data between the rear-facing camera on your iOS device, the output ports that you're going to configure below, and ultimately OpenCV.
- There's a strong property named deviceInput. deviceInput is an instance of AVCaptureDeviceInput and acts as an input port that can attach to the various A/V hardware components on your iOS device. In the next section, you're going to associate this property with the rear-facing camera and add it as an input port for captureSession.
- The header also declares the VideoSourceDelegate protocol. This protocol is the "glue" between the output ports for captureSession and OpenCV. Whenever one of your output ports is ready to dispatch a new video frame to OpenCV, it will invoke the frameReady: callback on the delegate member of VideoSource.
Open VideoFrame.h and have a look through it as well:
#ifndef OpenCVTutorial_VideoFrame_h
#define OpenCVTutorial_VideoFrame_h

#include <cstddef>

struct VideoFrame {
    size_t width;
    size_t height;
    size_t stride;
    unsigned char * data;
};

#endif
This file declares a simple C struct that you'll use to hold your video frame data.
Take the example where you're capturing video at the standard VGA resolution of 640×480 pixels:
- width will be 640.
- height will be 480.
- stride will be the number of bytes per row. In this example, if each frame is 640 pixels wide and 4 bytes are used to represent each pixel, the value of stride will be 2560 (see the short sketch after this list).
- data, of course, is simply a pointer to the actual video data.
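To make those numbers concrete, here's a small sketch (illustrative only, not code from the starter project) that fills in a VideoFrame for a 640×480 BGRA buffer:

#include "VideoFrame.h"

// Illustrative only: a 640x480 frame with 4 bytes (BGRA) per pixel.
// In the real project these values come from the CVPixelBuffer, and the
// stride may include row padding, so it is not always width * 4.
static VideoFrame makeVGAFrame(unsigned char * pixels) {
    VideoFrame frame;
    frame.width  = 640;
    frame.height = 480;
    frame.stride = 640 * 4;   // 2560 bytes per row in this example
    frame.data   = pixels;
    return frame;             // total buffer size = stride * height = 1,228,800 bytes
}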
You’ve been patient — it’s now time to start writing some code!
Open VideoSource.mm and replace the stubbed-out implementation of init with the code below:
- (id)init {
    self = [super init];
    if ( self ) {
        AVCaptureSession * captureSession = [[AVCaptureSession alloc] init];
        if ( [captureSession canSetSessionPreset:AVCaptureSessionPreset640x480] ) {
            [captureSession setSessionPreset:AVCaptureSessionPreset640x480];
            NSLog(@"Capturing video at 640x480");
        } else {
            NSLog(@"Could not configure AVCaptureSession video input");
        }
        _captureSession = captureSession;
    }
    return self;
}
Here the constructor simply creates a new instance of AVCaptureSession and configures it to accept video input at the standard VGA resolution of 640×480 pixels.
If the device is unable to accept video input at this resolution, an error is logged to the console.
Next, add the following definition for dealloc to VideoSource.mm, just after init:
- (void)dealloc {
    [_captureSession stopRunning];
}
When an instance of the VideoSource class is deallocated, it's a good idea to stop captureSession as well.
Replace the stubbed-out cameraWithPosition: in VideoSource.mm with the following code:
- (AVCaptureDevice*)cameraWithPosition:(AVCaptureDevicePosition)position { NSArray * devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo]; for ( AVCaptureDevice * device in devices ) { if ( [device position] == position ) { return device; } } return nil; } |
Most iOS devices these days ship with both front- and rear-facing cameras.
For today’s AR shooter game, you’re only going to be interested in the rear-facing camera.
Nevertheless, it's a good practice to write code that is general enough to be reused in different ways. cameraWithPosition: is a private helper method that lets the caller obtain a reference to the camera device located at the specified position.
You'll pass in AVCaptureDevicePositionBack to obtain a reference to the rear-facing camera.
If no camera device is found at the specified position, the method returns nil.
The next thing you'll need to do is implement the public interface for VideoSource.
Replace the stubbed-out implementation of startWithDevicePosition: in VideoSource.mm with the following code:
- (BOOL)startWithDevicePosition:(AVCaptureDevicePosition)devicePosition { // (1) Find camera device at the specific position AVCaptureDevice * videoDevice = [self cameraWithPosition:devicePosition]; if ( !videoDevice ) { NSLog(@"Could not initialize camera at position %d", devicePosition); return FALSE; } // (2) Obtain input port for camera device NSError * error; AVCaptureDeviceInput *videoInput = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error]; if ( !error ) { [self setDeviceInput:videoInput]; } else { NSLog(@"Could not open input port for device %@ (%@)", videoDevice, [error localizedDescription]); return FALSE; } // (3) Configure input port for captureSession if ( [self.captureSession canAddInput:videoInput] ) { [self.captureSession addInput:videoInput]; } else { NSLog(@"Could not add input port to capture session %@", self.captureSession); return FALSE; } // (4) Configure output port for captureSession [self addVideoDataOutput]; // (5) Start captureSession running [self.captureSession startRunning]; return TRUE; } |
Here's what's going on in this method:
1. First, call the private helper method cameraWithPosition: defined above. The call returns a reference to the camera device located at the specified position, and you save this reference in videoDevice.
2. Next, call the class method deviceInputWithDevice:error: defined on AVCaptureDeviceInput, passing in videoDevice as the argument. The method configures and returns an input port for videoDevice, which you save in a local variable named videoInput. If the port can't be configured, log an error.
3. Add videoInput to the list of input ports for captureSession and log an error if anything fails with this operation.
4. addVideoDataOutput configures the output ports for captureSession. This method is as yet undefined; you're going to implement it in the next section.
5. startRunning is an asynchronous call that starts capturing video data from the camera and dispatching it to captureSession.
It's worth taking a moment to review the finer points of concurrency and multithreading.
Grand Central Dispatch, or GCD, was first introduced in iOS 4 and has become the de facto way to manage concurrency on iOS. Using GCD, developers submit tasks to dispatch queues in the form of code blocks, which are then run on a thread pool managed by GCD. This frees the developer from managing multiple threads by hand and all the requisite headaches that go along with that! :]
GCD dispatch queues come in three basic flavors:
- Serial queues, which execute one block at a time, in the order the blocks were submitted.
- The main queue, a special serial queue whose blocks run on the application's main thread.
- Global dispatch queues, which are concurrent and can execute multiple blocks at once.
While the serial queue may seem like the easiest queue to understand, it’s not necessarily the one iOS developers use most frequently.
The main thread handles the UI of your application; it's the only thread permitted to call many of the crucial methods in UIKit. As a result, it's quite common to dispatch a block to the main queue from a background thread to signal the UI that some non-UI background process has completed, such as a long-running computation or waiting on a response to a network request.
Global dispatch queues — also known as concurrent dispatch queues — aren’t typically used for signalling. These queues are most frequently used by iOS developers for structuring the concurrent execution of background tasks.
However, in this portion of the project you'll use a serial dispatch queue, so you can process video frames on a background thread without blocking the UI while still ensuring that each frame is handled in the order it was received.
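If GCD is new to you, the pattern described above looks roughly like the sketch below. The queue label and function name are made up for illustration; the real code you'll write for VideoSource follows in the next section.

#import <Foundation/Foundation.h>

// Illustrative sketch of the serial-queue pattern described above.
static void processFrame(NSData * frameData) {
    static dispatch_queue_t frameQueue;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        frameQueue = dispatch_queue_create("com.example.frameQueue",
                                           DISPATCH_QUEUE_SERIAL);
    });

    // Blocks on a serial queue run one at a time, in submission order,
    // so frames are processed in the order they were received.
    dispatch_async(frameQueue, ^{
        // ... expensive, non-UI work on the frame goes here ...

        // Hop back to the main queue for anything that touches the UI.
        dispatch_async(dispatch_get_main_queue(), ^{
            NSLog(@"Finished a frame of %lu bytes",
                  (unsigned long)[frameData length]);
        });
    });
}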
In this section you're going to use GCD to configure the output ports for captureSession. These output ports format the raw video buffers as they're captured from the camera and forward them on to OpenCV for further processing.
Before moving forward, you will need to inform the compiler that the VideoSource class adheres to the AVCaptureVideoDataOutputSampleBufferDelegate protocol. This formidable-sounding protocol declares the delegate methods for captureSession. You'll learn more about this protocol further along in the tutorial, but for now, it's all about making the compiler happy.
Update the class extension at the top of VideoSource.mm as follows:
@interface VideoSource () <AVCaptureVideoDataOutputSampleBufferDelegate>
@end
Next, open VideoSource.mm and replace the stubbed-out addVideoDataOutput with the following code:
- (void) addVideoDataOutput { // (1) Instantiate a new video data output object AVCaptureVideoDataOutput * captureOutput = [[AVCaptureVideoDataOutput alloc] init]; captureOutput.alwaysDiscardsLateVideoFrames = YES; // (2) The sample buffer delegate requires a serial dispatch queue dispatch_queue_t queue; queue = dispatch_queue_create("com.raywenderlich.tutorials.opencv", DISPATCH_QUEUE_SERIAL); [captureOutput setSampleBufferDelegate:self queue:queue]; dispatch_release(queue); // (3) Define the pixel format for the video data output NSString * key = (NSString*)kCVPixelBufferPixelFormatTypeKey; NSNumber * value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA]; NSDictionary * settings = @{key:value}; [captureOutput setVideoSettings:settings]; // (4) Configure the output port on the captureSession property [self.captureSession addOutput:captureOutput]; } |
Taking each numbered comment in turn:
1. Instantiate a new video data output object named captureOutput; setting alwaysDiscardsLateVideoFrames to YES gives you improved performance at the risk of occasionally losing late frames.
2. Create a serial dispatch queue and set it as the sample buffer delegate queue; the delegate is invoked on this queue whenever captureSession is ready to vend a new video buffer. The first parameter to dispatch_queue_create() identifies the queue and can be used by tools such as Instruments. The second parameter indicates you wish to create a serial, rather than a concurrent, queue; in fact, DISPATCH_QUEUE_SERIAL is #defined as NULL and it's common to see serial dispatch queues created simply by passing in NULL as the second parameter. You also call dispatch_release() as the starter project has a release target of iOS 5.
3. Define the pixel format for the video data output; here you request 32-bit BGRA pixels.
4. Add captureOutput to the list of output ports for captureSession.
Note: If your deployment target is iOS 6 or later, GCD objects are managed by ARC and you no longer need to balance calls to dispatch_retain and dispatch_release yourself.
It's time to implement the AVCaptureVideoDataOutputSampleBufferDelegate protocol. You will use this protocol to format the raw video buffers from the camera and dispatch them as video frames to OpenCV.
The protocol only declares the following two methods, both optional:
- captureOutput:didOutputSampleBuffer:fromConnection: notifies the delegate that a video frame has successfully arrived from the camera. This is the method you're going to be implementing below.
- captureOutput:didDropSampleBuffer:fromConnection: notifies the delegate that a video frame has been dropped. You won't be needing this method for today's tutorial.
Add the following method definition to the very end of VideoSource.mm:
#pragma mark - #pragma mark Sample Buffer Delegate - (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection { // (1) Convert CMSampleBufferRef to CVImageBufferRef CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer); // (2) Lock pixel buffer CVPixelBufferLockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly); // (3) Construct VideoFrame struct uint8_t *baseAddress = (uint8_t*)CVPixelBufferGetBaseAddress(imageBuffer); size_t width = CVPixelBufferGetWidth(imageBuffer); size_t height = CVPixelBufferGetHeight(imageBuffer); size_t stride = CVPixelBufferGetBytesPerRow(imageBuffer); VideoFrame frame = {width, height, stride, baseAddress}; // (4) Dispatch VideoFrame to VideoSource delegate [self.delegate frameReady:frame]; // (5) Unlock pixel buffer CVPixelBufferUnlockBaseAddress(imageBuffer, 0); } |
Looking at each numbered comment, you'll see the following:
1. Convert the CMSampleBufferRef delivered by the output port into a CVImageBufferRef named imageBuffer.
2. Lock the pixel buffer; the kCVPixelBufferLock_ReadOnly flag is used for added performance.
3. Construct a VideoFrame struct from the buffer's base address, width, height and stride.
4. Dispatch the VideoFrame to the VideoSource delegate. This is the point where OpenCV picks up the frame for further processing.
5. Unlock the pixel buffer.
Now you can turn to ViewController and implement the final delegate callback.
Add the following declaration to the bottom of ViewController.mm:
#pragma mark - #pragma mark VideoSource Delegate - (void)frameReady:(VideoFrame)frame { __weak typeof(self) _weakSelf = self; dispatch_sync( dispatch_get_main_queue(), ^{ // Construct CGContextRef from VideoFrame CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); CGContextRef newContext = CGBitmapContextCreate(frame.data, frame.width, frame.height, 8, frame.stride, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst); // Construct CGImageRef from CGContextRef CGImageRef newImage = CGBitmapContextCreateImage(newContext); CGContextRelease(newContext); CGColorSpaceRelease(colorSpace); // Construct UIImage from CGImageRef UIImage * image = [UIImage imageWithCGImage:newImage]; CGImageRelease(newImage); [[_weakSelf backgroundImageView] setImage:image]; }); } |
frameReady: is the callback method defined in VideoSourceDelegate. It takes a video frame, uses a set of straightforward but tedious Core Graphics calls to convert it into a UIImage instance, then renders it for on-screen display.
There are a couple of things worth noting about the method:
- backgroundImageView is an IBOutlet defined on MainStoryboard. Its type is UIImageView and, as the name suggests, it's configured to form the background image view for the entire game.
- frameReady: runs every time VideoSource dispatches a new video frame, which should happen at least 20 to 30 times per second. Rendering a steady stream of UIImage objects on-screen creates the illusion of fluid video.
Again, as with VideoSource, it's good form to declare which protocols the class conforms to in the class extension.
Update the class extension at the top of ViewController.mm as follows:
@interface ViewController () <VideoSourceDelegate>
Finally, you need to instantiate a new VideoSource object before you can use it.
Update viewDidLoad in ViewController.mm as follows:
- (void)viewDidLoad { [super viewDidLoad]; // Configure Video Source self.videoSource = [[VideoSource alloc] init]; self.videoSource.delegate = self; [self.videoSource startWithDevicePosition:AVCaptureDevicePositionBack]; } |
Here you use AVCaptureDevicePositionBack to capture video frames from the rear-facing camera.
Build and run your project; hold your device up and you’ll see how the virtual A/V “sandwich” has come together to give you “live video” on your device:
You’ve just taken one small step for man, and one giant leap for your AR Target Shooter game!
This concludes the first part of your AR Target Shooter game. You’ve assembled a tasty sandwich from the ingredients of the AVFoundation Framework, and you now have your “live” video feed.
You can download the completed project for this part as a zipped project file.
In Part 2 of this tutorial, you’ll add some HUD overlays to the live video, implement the basic game controls, and dress up the game with some explosion effects. Oh yeah, I said explosions, baby! :]
If you have any questions or comments, please come join the discussion below!
How To Make An Augmented Reality Target Shooter Game With OpenCV: Part 1/4 is a post from: Ray Wenderlich
The post How To Make An Augmented Reality Target Shooter Game With OpenCV: Part 1/4 appeared first on Ray Wenderlich.
Learn how to create view controllers that contain other view controllers. Also learn how view and view controller hierarchies work.
Video Tutorial: Container View Controllers is a post from: Ray Wenderlich
The post Video Tutorial: Container View Controllers appeared first on Ray Wenderlich.
Welcome to the second part of this tutorial series! In the first part of this tutorial, you used the AVFoundation classes to create a live video feed for your game to show the video from the rear-facing camera.
Your task in this stage of the tutorial is to add some HUD overlays to the live video, implement the basic game controls, and dress up the game with some explosion effects. I mean, what gamer doesn’t love cool explosions? :]
If you have the finished project from Part 1 handy, you can start coding right where you left off. Otherwise, you can download the starter project up to this point here and jump right in.
Your first task is to get the game controls up and running.
There’s already a ViewController+GameControls category in your starter project; this category handles all the mundane details relating to general gameplay support. It’s been pre-implemented so you can stay focused on the topics in this tutorial directly related to AR gaming.
Open up ViewController.mm and add the following code to the very end of viewDidLoad:
// Activate Game Controls
[self loadGameControls];
Build and run your project; your screen should look something like the following:
Basic gameplay elements are now visible on top of the video feed you built in the last section.
Here’s a quick tour of the new game control elements:
The trigger button is already configured to use pressTrigger: as its target.
pressTrigger: is presently stubbed out; it simply logs a brief message to the console. Tap the trigger button a few times to test it; you should see messages like the following show up in the console:
2013-11-15 18:34:25.357 OpenCVTutorial[1953:907] Fire!
2013-11-15 18:34:25.590 OpenCVTutorial[1953:907] Fire!
2013-11-15 18:34:25.827 OpenCVTutorial[1953:907] Fire!
A set of red crosshairs is now visible at the center of the screen; these crosshairs mark the spot in the “real world” where the player will fire at the target.
The basic object of the game is to line up the crosshairs with a “real world” target image seen through the live camera feed and fire away. The closer you are to the center of the target at the moment you fire, the more points you’ll score!
Take a moment and consider how you want your gameplay to function.
Your game needs to scan the video feed from the camera and search for instances of the following target image:
Once you detect the target image, you then need to track its position on the screen.
That sounds straightforward enough, but there’s a few challenges here. The onscreen position of the target will change or possibly even disappear as the user moves the device back and forth, or up and down. Also, the apparent size of the target image on the screen will vary as the user moves the device either towards or away from the real world target image.
Shooting things is great and all, but you'll also need to provide a scoring mechanism for your game: the closer a shot lands to the center of the target, the more points it should award, and the running score should be displayed on-screen.
Finally, you’ll “reset” the game whenever the app loses tracking of the target marker; this should happen when the user moves the device and the target no longer appears in the field-of-view of the camera. A “reset” in this context means setting the score back to 0.
That about covers it; you’ll become intimately familiar with the gameplay logic as you code it in the sections that follow.
There's a bit of simulation included in the project to let you exercise the game controls without implementing the AR tracking. Open ViewController+GameControls.m and take a look at selectRandomRing:
- (NSInteger)selectRandomRing { // Simulate a 50% chance of hitting the target NSInteger randomNumber1 = arc4random() % 100; if ( randomNumber1 < 50 ) { // Stagger the 5 simulations linearly NSInteger randomNumber2 = arc4random() % 100; if ( randomNumber2 < 20 ) { return 1; /* outer most ring */ } else if ( randomNumber2 < 40 ) { return 2; } else if ( randomNumber2 < 60 ) { return 3; } else if ( randomNumber2 < 80 ) { return 4; } else { return 5; /* bullseye */ } } else { return 0; } } |
This method simulates a "shot" at the target marker. It returns a random NSInteger between 0 and 5 indicating which ring was hit in the simulation: 0 means the shot missed the target entirely, 1 means it hit the outermost ring, and 5 means it hit the bullseye.
Open ViewController.h and add the following code to the very top of the file, just after the introductory comments:
static const NSUInteger kPOINTS_1 = 50;
static const NSUInteger kPOINTS_2 = 100;
static const NSUInteger kPOINTS_3 = 250;
static const NSUInteger kPOINTS_4 = 500;
static const NSUInteger kPOINTS_5 = 1000;
These constants represent the number of points awarded if the user hits the target; the closer the hit is to the center bull’s-eye, the greater the points awarded.
Open ViewController.mm and update pressTrigger: as shown below:
- (IBAction)pressTrigger:(id)sender { NSInteger ring = [self selectRandomRing]; switch ( ring ) { case 5: // Bullseye [self hitTargetWithPoints:kPOINTS_5]; break; case 4: [self hitTargetWithPoints:kPOINTS_4]; break; case 3: [self hitTargetWithPoints:kPOINTS_3]; break; case 2: [self hitTargetWithPoints:kPOINTS_2]; break; case 1: // Outermost Ring [self hitTargetWithPoints:kPOINTS_1]; break; case 0: // Miss Target [self missTarget]; break; } } |
This method selects a random ring using the test API selectRandomRing discussed above. If a ring is selected, it records a "hit" along with the commensurate number of points; if no ring is selected, it records a "miss".
You're abstracting the target hit detection into a separate module so that when it comes time to do away with the simulation and use the real AR visualization layer, all you should need to do is replace the call to selectRandomRing with a call to your AR code.
Still in ViewController.mm, replace the stubbed-out implementation of hitTargetWithPoints: with the code below:
- (void)hitTargetWithPoints:(NSInteger)points { // (1) Play the hit sound AudioServicesPlaySystemSound(m_soundExplosion); // (2) Animate the floating scores [self showFloatingScore:points]; // (3) Update the score [self setScore:(self.score + points)]; } |
This method triggers when a "hit" is registered in the game. Taking each numbered comment in turn:
1. Play the explosion sound effect for the hit.
2. Animate the floating score text using the showFloatingScore: API defined in the GameControls category.
3. Add the points to the running score.
That takes care of the "hit" condition — what about the "miss" condition? That's even easier.
Replace missTarget in ViewController.mm with the following code:
- (void)missTarget { // (1) Play the miss sound AudioServicesPlaySystemSound(m_soundShoot); } |
This method triggers when you record a “miss” and simply plays a “miss” sound effect.
Build and run your project; tap the trigger button to simulate a few hits and misses. selectRandomRing returns a hit 50% of the time, and a miss the other 50% of the time.
At this stage in development, the points will just keep accumulating; if you want to reset the scoreboard you’ll have to restart the app.
Your crosshairs are in place, and your simulated target detection is working. Now all you need are some giant, fiery explosion sprites to appear whenever you hit the target! :]
The images you’ll animate are shown below:
The above explosion consists of 11 separate images concatenated into a single image file explosion.png; each frame measures 128 x 128 pixels and the entire image is 1408 pixels wide. It’s essentially a series of time lapse images of a giant, fiery explosion. The first and last frames in the sequence have intentionally been left blank. In the unlikely event that the animation layer isn’t properly removed after it finishes, using blank frames at the sequence endpoints ensures that the view field will remain uncluttered.
A large composite image composed of many smaller sub-images is often referred to as an image atlas or a texture atlas. This image file has already been included as an art asset in the starter project you downloaded.
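To make the atlas layout concrete, here's a small illustrative sketch (not code from the starter project) that maps a 1-based frame index to the pixel rectangle it occupies inside explosion.png:

#import <UIKit/UIKit.h>

// explosion.png: 11 frames of 128x128 pixels laid out in a single row,
// so the whole atlas is 1408x128 pixels.
static CGRect pixelRectForFrame(NSUInteger frameIndex /* 1...11 */) {
    const CGFloat frameSide = 128.0f;
    return CGRectMake((frameIndex - 1) * frameSide, 0.0f, frameSide, frameSide);
}

// pixelRectForFrame(1)  -> {    0, 0, 128, 128 }
// pixelRectForFrame(11) -> { 1280, 0, 128, 128 }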
You’ll be using Core Animation to animate this sequence of images. A Core Animation layer named SpriteLayer is included in your starter project to save you some time. SpriteLayer implements the animation functionality just described.
Once you cover the basic workings of SpriteLayer, you’ll integrate it with your ViewController in the next section. This will give you the giant, fiery explosions that gamers crave.
Open SpriteLayer.m and look at the initWithImage: constructor:
- (id)initWithImage:(CGImageRef)image { self = [super init]; if ( self ) { self.contents = (__bridge id)image; self.spriteIndex = 1; } return self; } |
This constructor sets the layer's contents attribute directly, using the __bridge operator to safely cast the pointer from the Core Foundation type CGImageRef to the Objective-C type id. You then index the first frame of the animation starting at 1, and you keep track of the running value of this index using spriteIndex.
Note: contents is essentially a bitmap that contains the visual information you want to display. When the layer is created automatically for you as the backing for a UIView, iOS will usually manage all the details of setting up and updating your layer's contents as required. In this case, you're constructing the layer yourself, and must therefore provide your own contents directly.
Now look at the constructor initWithImage:spriteSize::
- (id)initWithImage:(CGImageRef)image spriteSize:(CGSize)size { self = [self initWithImage:image]; if ( self ) { CGSize spriteSizeNormalized = CGSizeMake(size.width/CGImageGetWidth(image), size.height/CGImageGetHeight(image)); self.bounds = CGRectMake(0, 0, size.width, size.height); self.contentsRect = CGRectMake(0, 0, spriteSizeNormalized.width, spriteSizeNormalized.height); } return self; } |
Your code will call this constructor directly.
The image bitmap that you set as the layer's contents is 1408 pixels wide, but you only need to display one 128-pixel-wide "subframe" at a time. The spriteSize constructor argument lets you specify the size of this display "subframe"; in your case, it will be 128 x 128 pixels to match the size of each frame in the atlas. You'll initialize the layer's bounds to this value as well.
contentsRect acts as the display "subframe" and specifies how much of the layer's contents bitmap will actually be visible.
By default, contentsRect covers the entire bitmap, like so:
Instead, you need to shrink contentsRect so it only covers a single frame and then animate it left-to-right as you run your layer through Core Animation, like so:
The trick with contentsRect is that its size is defined using a unit coordinate system, where the value of every coordinate is between 0.0 and 1.0 and is independent of the size of the frame itself. This is very different from the more common pixel-based coordinate system that you're likely accustomed to from working with properties like bounds and frame.
Suppose you were to construct an instance of UIView that was 300 pixels wide and 50 pixels high. In the pixel-based coordinate system, the upper-left corner would be at (0,0) while the lower-right corner would be at (300,50).
However, the unit coordinate system puts the upper-left corner at (0.0, 0.0) while the lower-right corner is always at (1.0, 1.0), no matter how wide or high the frame is in pixels. Core Animation uses unit coordinates to represent those properties whose values should be independent of changes in the frame’s size.
If you step through the math in the constructor above, you can quickly convince yourself that you're initializing contentsRect so that it only covers the first frame of your sprite animation — which is exactly the result you're looking for.
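If you'd like to see the arithmetic spelled out, here's an illustrative sketch using the explosion atlas dimensions from this tutorial (not code from SpriteLayer itself):

#import <UIKit/UIKit.h>

// Illustrative numbers: each sprite frame is 128x128 pixels and the
// explosion atlas bitmap is 1408x128 pixels.
static CGRect initialContentsRectForAtlas(void) {
    CGSize spriteSize = CGSizeMake(128, 128);
    CGSize atlasSize  = CGSizeMake(1408, 128);

    // In unit coordinates: 128/1408 ≈ 0.0909 wide, 128/128 = 1.0 high.
    CGSize normalized = CGSizeMake(spriteSize.width  / atlasSize.width,
                                   spriteSize.height / atlasSize.height);

    // Covers only the leftmost 1/11th of the bitmap: exactly the first frame.
    return CGRectMake(0, 0, normalized.width, normalized.height);
}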
Animating a property means showing it changing over time. By this definition, you're not really animating an image: you're actually animating spriteIndex.
Fortunately, Core Animation allows you to animate not just familiar built-in properties, like a position or image bitmap, but also user-defined properties like spriteIndex. The Core Animation API treats the property as a "key" of the layer, much like the key of an NSDictionary.
Core Animation will animate spriteIndex when you instruct the layer to redraw its contents whenever the value associated with the spriteIndex key changes. The following method, defined in SpriteLayer.m, accomplishes just that:
+ (BOOL)needsDisplayForKey:(NSString *)key {
    return [key isEqualToString:@"spriteIndex"];
}
But what mechanism do you use to tell the layer how to display its contents based on the spriteIndex?
A clear understanding of the somewhat counterintuitive ways properties change — or how they don’t change — is important here.
Core Animation supports both implicit and explicit animations:
- Implicit animations: many of the built-in layer properties — including the contentsRect you're working with — are known as animatable properties. If you change the value of those properties on the layer, then Core Animation automatically animates that value change.
- Explicit animations: you create an animation object yourself (for example, a CABasicAnimation), configure it, and add it to the layer.
However, Core Animation does not actually modify the property on the layer itself when running explicit animations. Once you perform an explicit animation, Core Animation simply removes the animation object from the layer and redraws the layer using its current property values, which are exactly the same as when the animation started — unless you changed them separately from the animation.
Animations of user-defined layer keys, like spriteIndex, are explicit animations. This means that if you request an animation of spriteIndex from 1 to another number, and at any point during the animation you query SpriteLayer to find the current value of spriteIndex, the answer you'll get back will still be 1!
So if animating spriteIndex doesn't actually change the value, then how do you retrieve its value to adjust the position of contentsRect to the correct location and show the animation?
The answer, dear reader, lies in the presentation layer, a shadowy counterpart to every Core Animation layer which represents how that layer appears on-screen, even while an animation is in progress.
Take a look at currentSpriteIndex in SpriteLayer.m:
- (NSUInteger)currentSpriteIndex {
    return ((SpriteLayer*)[self presentationLayer]).spriteIndex;
}
This code returns the value of the spriteIndex attribute associated with the object's presentation layer, rather than the value of the spriteIndex attribute associated with the object itself. Calling this method will return the correct, in-progress value of spriteIndex while the animation is running.
So now you know how to get the visible, animated value of spriteIndex. But when you change contentsRect, the layer will automatically trigger an implicit animation, which you don't want to happen.
Since you're going to be changing the value of contentsRect by hand as the animation runs, you need to deactivate this implicit animation by telling SpriteLayer not to produce an animation for the key contentsRect.
Scroll to the definition of defaultActionForKey:, also located in SpriteLayer.m:
+ (id)defaultActionForKey:(NSString *)event {
    if ( [event isEqualToString:@"contentsRect"] ) {
        return (id<CAAction>)[NSNull null];
    }
    return [super defaultActionForKey:event];
}
The class method defaultActionForKey: is invoked by the layer before it initiates an implicit animation. This code overrides the default implementation of this method, and instructs Core Animation to suppress any implicit animations associated with the property key contentsRect.
Finally, take a look at display, which is also defined in SpriteLayer.m:
- (void)display { NSUInteger currentSpriteIndex = [self currentSpriteIndex]; if ( !currentSpriteIndex ) { return; } CGSize spriteSize = self.contentsRect.size; self.contentsRect = CGRectMake(((currentSpriteIndex-1) % (int)(1.0f/spriteSize.width)) * spriteSize.width, ((currentSpriteIndex-1) / (int)(1.0f/spriteSize.width)) * spriteSize.height, spriteSize.width, spriteSize.height); } |
The layer automatically calls display as required to update its content.
Step through the math of the above code and you'll see that this is where you manually change the value of contentsRect, sliding it along one frame at a time as the current value of spriteIndex advances.
Now that you understand how to create sprites, using them should be a snap!
Open ViewController+GameControls.m and replace the stubbed-out showExplosion with the following code:
- (void)showExplosion { // (1) Create the explosion sprite UIImage * explosionImageOrig = [UIImage imageNamed:@"explosion.png"]; CGImageRef explosionImageCopy = CGImageCreateCopy(explosionImageOrig.CGImage); CGSize explosionSize = CGSizeMake(128, 128); SpriteLayer * sprite = [SpriteLayer layerWithImage:explosionImageCopy spriteSize:explosionSize]; CFRelease(explosionImageCopy); // (2) Position the explosion sprite CGFloat xOffset = -7.0f; CGFloat yOffset = -3.0f; sprite.position = CGPointMake(self.crosshairs.center.x + xOffset, self.crosshairs.center.y + yOffset); // (3) Add to the view [self.view.layer addSublayer:sprite]; // (4) Configure and run the animation CABasicAnimation *animation = [CABasicAnimation animationWithKeyPath:@"spriteIndex"]; animation.fromValue = @(1); animation.toValue = @(12); animation.duration = 0.45f; animation.repeatCount = 1; animation.delegate = sprite; [sprite addAnimation:animation forKey:nil]; } |
Here's what you do in the above method, step by step:
1. Load explosion.png and create the sprite layer from it. The UIImage is needed only long enough to grab its CGImage property, and ARC is free to release the image once that property has been accessed. To avoid any untoward effects, make a copy of the CGImage data before ARC has a chance to accidentally release it, and work with the copy instead.
2. Position the sprite so it's centered on the crosshairs. A Core Animation layer has no frame property; its value is derived from bounds and position. To adjust the location or size of a Core Animation layer, it's best to work directly with bounds and position.
3. Add the sprite as a sublayer of the view's backing layer.
4. Configure a CABasicAnimation on the spriteIndex key and run it on the sprite.
Sharp-eyed readers will note that the animation runs to index 12, even though there are only 11 frames in the texture atlas. Why would you do this?
Core Animation first converts integers to floats before interpolating them for animation. For example, in the fraction of a second that your animation is rendering frame 1, Core Animation is rapidly stepping through the succession of “float” values between 1.0 and 2.0. When it reaches 2.0, the animation switches to rendering frame 2, and so on. Therefore, if you want the eleventh and final frame to render for its full duration, you need to set the final value for the animation to be 12.
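Put another way, the frame you see at any instant is the integer part of the interpolated value. The following rough sketch illustrates the idea; it's a simplification for explanation purposes, not code from SpriteLayer:

#import <Foundation/Foundation.h>

// Animating spriteIndex from 1 to 12 over 0.45 seconds.
// At elapsed time t, Core Animation presents roughly:
//     value(t) = 1 + 11 * (t / 0.45)
// and display renders frame floor(value(t)), clamped to the range 1...11.
static NSUInteger visibleFrameAtTime(CFTimeInterval t) {
    double value = 1.0 + 11.0 * (t / 0.45);
    NSUInteger frame = (NSUInteger)value;   // integer part
    return MIN(MAX(frame, (NSUInteger)1), (NSUInteger)11);
}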
Finally, you need to trigger your new shiny explosions every time you successfully hit the target.
Add the following code to the end of hitTargetWithPoints: in ViewController.mm:
    // (4) Run the explosion sprite
    [self showExplosion];
}
Build and run your project; tap the trigger button and you should see some giant balls of fire light up the scene as below:
Giant fiery explosions! They’re just what you need for an AR target blaster game!
So far you’ve created a “live” video stream using AVFoundation, and you’ve added some HUD overlays to that video as well as some basic game controls. Oh, yes, and explosions – lots of explosions. :]
You can download the completed project for this part as a zipped project file.
The third part of this tutorial will walk you through AR target detection.
If you have any questions or comments on this tutorial series, please come join the discussion below!
How To Make An Augmented Reality Target Shooter Game With OpenCV: Part 2/4 is a post from: Ray Wenderlich
The post How To Make An Augmented Reality Target Shooter Game With OpenCV: Part 2/4 appeared first on Ray Wenderlich.
Welcome to the third part of this tutorial series! In the first part of this tutorial, you used the AVFoundation classes to create a live video feed for your game to show the video from the rear-facing camera.
In the second part, you learned how to implement the game controls and leverage Core Animation to create some great-looking explosion effects.
Your next task is to implement the target-tracking that brings the Augmented Reality into your app.
If you saved your project from the last part of this tutorial, then you can pick up right where you left off. If you don’t have your previous project, or prefer to start anew, you can download the starter project for this part of the tutorial.
Before you start coding, it’s worth discussing targets for a moment.
From retail shelves to train tickets to advertisements in bus shelters, the humble black-and-white QR code has become an incredibly common sight around the world. QR codes are a good example of what’s technically known as a marker.
Markers are real-world objects placed in the field-of-view of the camera system. Once the computer vision software detects the presence of one or more markers in the video stream, the marker can be used as a point of reference from which to initiate and render the rest of the augmented reality experience.
Marker detection comes in two basic flavors: marker-based object tracking, which relies on simple, high-contrast fiducial patterns such as QR codes, and markerless object tracking, which tracks richer, more natural images.
Admittedly the term markerless object tracking is confusing, since you are still tracking an image “marker”, albeit one that is more complicated and colorful than a simple collection of black-and-white squares. To confuse matters even further, you’ll find other authors who lump all of the above image-detection techniques into a single bucket they call “marker-based” object tracking, and who instead reserve the term markerless object tracking for systems where GPS or geolocation services are used to locate and interact with AR resources.
While the distinction between marker-based object tracking and markerless object tracking may seem arbitrary, what it really comes down to is CPU cycles.
Marker-based object tracking systems can utilize very fast edge-detection algorithms running in grayscale mode, so high-probability candidate regions in the video frame — where the marker is most likely to be located — can be quickly identified and processed.
Markerless object tracking, on the other hand, requires far more computational power.
Pattern detection in a markerless object tracking system usually involves three steps: detecting distinctive keypoints in the video frame, computing a descriptor for each keypoint, and matching those descriptors against the descriptors of the known marker image.
All three stages must be performed on each frame in the video stream, in addition to any other image processing steps needed to adjust for such things as scale- and rotation-invariance of the marker, pose estimation (i.e., the angle between the camera lens and the 2D-plane of the marker), ambient lighting conditions, whether or not the marker is partially occluded, and a host of other factors.
Consequently, marker-based object tracking has generally been the preferred technique on small, hand-held mobile devices, especially early-generation mobile phones. Markerless object tracking, on the other hand, has generally been relegated to the larger, iPad-style tablets with their correspondingly greater computational capabilities.
In this tutorial you’ll take the middle ground between these two standard forms of marker detection.
Your target pattern is more complicated than a simple set of black-and-white QR codes, but it’s not much more complicated. You should be able to cut some corners while still retaining most of the benefits of markerless object tracking.
Take another look at the target pattern you’re going to use as a marker:
Clearly you don’t have to worry about rotational invariance as the pattern is already rotationally symmetrical. You won’t have to deal with pose estimation in this tutorial as you’ll keep things simple and assume that the target will be displayed on a flat surface with your camera held nearly parallel to the target.
In other words, you won’t need to handle the case where someone prints out a hard copy of the target marker, lays it down on the floor somewhere and tries to shoot it from across the room at weird angles.
The fastest OpenCV API that meets all these requirements is cv::matchTemplate(). It takes the following four arguments: the query image to search, the template pattern to search for, an output array to hold the match scores, and a flag selecting the matching heuristic to use.
The caller must ensure that the dimensions of the template image fit within those of the query image and that the dimensions of the output array are sized correctly relative to the dimensions of both the query image and the template pattern.
The matching algorithm used by cv::matchTemplate() is based on a Fast Fourier Transform (FFT) of the two images and is highly optimized for speed.
cv::matchTemplate() does what it says on the tin: it slides the template pattern across the query image, computes a score at each location indicating how well the pattern matches that region of the image, and writes those scores into the output array.
Once the algorithm terminates, an API like cv::minMaxLoc() can be used to identify both the point at which the best match occurs and the quality of the match at that point. You can also set a "confidence level" below which you will ignore candidate matches as simple noise.
A moment's reflection should convince you that if the dimensions of the query image are (W,H), and the dimensions of the template pattern are (w,h), with 0 < w < W and 0 < h < H, then the dimensions of the output array must be (W-w+1, H-h+1).
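Here's a small, self-contained sketch of how those pieces fit together, using made-up dimensions; it's an illustration of the cv::matchTemplate() and cv::minMaxLoc() workflow, not code from the project:

#include <opencv2/opencv.hpp>

// Query image 640x480 (W x H), template 100x100 (w x h)
//   =>  output array is (W-w+1) x (H-h+1) = 541 x 381.
static cv::Point bestMatchSketch()
{
    cv::Mat query(480, 640, CV_8UC1);       // rows = H, cols = W
    cv::Mat pattern(100, 100, CV_8UC1);     // rows = h, cols = w
    cv::Mat result(480 - 100 + 1,           // rows = H - h + 1 = 381
                   640 - 100 + 1,           // cols = W - w + 1 = 541
                   CV_32FC1);

    cv::matchTemplate(query, pattern, result, CV_TM_CCOEFF_NORMED);

    double minVal, maxVal;
    cv::Point minLoc, maxLoc;
    cv::minMaxLoc(result, &minVal, &maxVal, &minLoc, &maxLoc);

    // maxLoc is the top-left corner of the best match; values of maxVal
    // closer to 1.0 indicate a stronger match.
    return maxLoc;
}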
The following picture may be worth a thousand words in this regard:
There's one tradeoff you'll make with this API — scale-invariance. If you're searching an input frame for a 200 x 200 pixel target marker, then you're going to have to hold the camera at just the right distance away from the marker so that it fills approximately 200 x 200 pixels on the screen.
The sizes of the two images don't have to match exactly, but the detector won't track the target if your device is too far away from, or too close to the marker pattern.
It's time to start integrating the OpenCV APIs into your AR game.
OpenCV uses its own high-performance, platform-independent container for managing image data. Therefore you must implement your own helper methods for converting the image data back and forth between the formats used by OpenCV and UIKit.
This type of data conversion is often best accomplished using categories. The starter project you downloaded contains a UIImage+OpenCV category for performing these conversions; it's located in the Detector group, but it hasn't been implemented yet. That's your job! :]
Open UIImage+OpenCV.h and add the following three method declarations:
@interface UIImage (OpenCV) #pragma mark - #pragma mark Generate UIImage from cv::Mat + (UIImage*)fromCVMat:(const cv::Mat&)cvMat; #pragma mark - #pragma mark Generate cv::Mat from UIImage + (cv::Mat)toCVMat:(UIImage*)image; - (cv::Mat)toCVMat; @end |
The function of these methods is fairly clear from their signatures: fromCVMat: converts an OpenCV cv::Mat image container into a UIImage, while the two toCVMat variants (a class method and a convenience instance method) convert a UIImage into a cv::Mat.
You'll be providing the code for these methods in the next few paragraphs, so be prepared for a few warnings. These warnings will go away once you finish adding all the methods.
If you find cv::Mat to be an odd way of designating an image reference, you're not alone. cv::Mat is actually a reference to a 2-D algebraic matrix, which is how OpenCV 2 stores image data internally for reasons of performance and convenience.
The older, legacy version of OpenCV used two very similar, almost interchangeable data structures for the same purpose: cvMat and IplImage. cvMat is also simply a 2-D matrix, while the name IplImage comes from the Intel Image Processing Library and hints at OpenCV's roots with the chip manufacturing giant.
Open UIImage+OpenCV.mm and add the following code:
+ (cv::Mat)toCVMat:(UIImage*)image { // (1) Get image dimensions CGFloat cols = image.size.width; CGFloat rows = image.size.height; // (2) Create OpenCV image container, 8 bits per component, 4 channels cv::Mat cvMat(rows, cols, CV_8UC4); // (3) Create CG context and draw the image CGContextRef contextRef = CGBitmapContextCreate(cvMat.data, cols, rows, 8, cvMat.step[0], CGImageGetColorSpace(image.CGImage), kCGImageAlphaNoneSkipLast | kCGBitmapByteOrderDefault); CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage); CGContextRelease(contextRef); // (4) Return OpenCV image container reference return cvMat; } |
This static method converts an instance of UIImage into an OpenCV image container. It works as follows:
1. Read the image dimensions from the width and height attributes of the UIImage.
2. Create an OpenCV image container with the same width and height. The CV_8UC4 flag indicates that the image consists of 4 color channels — red, green, blue and alpha — and that each channel consists of 8 bits per component.
3. Create a Core Graphics bitmap context backed by the cv::Mat's data buffer and draw the UIImage into it, which copies the pixel data into the OpenCV container.
4. Return the OpenCV image container reference.
The corresponding instance method is even simpler.
Add the following code to UIImage+OpenCV.mm:
- (cv::Mat)toCVMat {
    return [UIImage toCVMat:self];
}
This is a convenience method which can be invoked directly on UIImage objects, converting them to cv::Mat format using the static method you just defined above.
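Once both conversion directions are in place, using the category is a one-liner in each direction. Here's a small usage sketch; it assumes the target.jpg marker image that ships with the starter project:

#import "UIImage+OpenCV.h"

// Usage sketch: round-trip an image through OpenCV's cv::Mat container.
static UIImage * roundTripSketch() {
    UIImage * original  = [UIImage imageNamed:@"target.jpg"];
    cv::Mat   mat       = [original toCVMat];         // UIKit  -> OpenCV
    UIImage * roundTrip = [UIImage fromCVMat:mat];     // OpenCV -> UIKit
    return roundTrip;
}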
Add the following code to UIImage+OpenCV.mm:
+ (UIImage*)fromCVMat:(const cv::Mat&)cvMat { // (1) Construct the correct color space CGColorSpaceRef colorSpace; if ( cvMat.channels() == 1 ) { colorSpace = CGColorSpaceCreateDeviceGray(); } else { colorSpace = CGColorSpaceCreateDeviceRGB(); } // (2) Create image data reference CFDataRef data = CFDataCreate(kCFAllocatorDefault, cvMat.data, (cvMat.elemSize() * cvMat.total())); // (3) Create CGImage from cv::Mat container CGDataProviderRef provider = CGDataProviderCreateWithCFData(data); CGImageRef imageRef = CGImageCreate(cvMat.cols, cvMat.rows, 8, 8 * cvMat.elemSize(), cvMat.step[0], colorSpace, kCGImageAlphaNone | kCGBitmapByteOrderDefault, provider, NULL, false, kCGRenderingIntentDefault); // (4) Create UIImage from CGImage UIImage * finalImage = [UIImage imageWithCGImage:imageRef]; // (5) Release the references CGImageRelease(imageRef); CGDataProviderRelease(provider); CFRelease(data); CGColorSpaceRelease(colorSpace); // (6) Return the UIImage instance return finalImage; } |
This static method converts an OpenCV image container into an instance of UIImage as follows:
1. Construct the correct color space: grayscale if the cv::Mat has a single channel, RGB otherwise.
2. Create a CFData reference wrapping the raw image bytes. elemSize() returns the size of an image pixel in bytes, while total() returns the total number of pixels in the image; the total size of the byte array to be allocated comes from multiplying these two numbers.
3. Create a CGImage from the cv::Mat container via a CGDataProvider.
4. Create a UIImage from the CGImage.
5. Release the intermediate Core Foundation references.
6. Return the UIImage instance.
Build and run your project; nothing visible has changed with your game, but occasional incremental builds are a good practice, if only to validate that newly added code hasn't broken anything.
Next you'll implement the pattern detector for your AR blaster game.
This class serves as the heart-and-soul for your AR target blaster game, so this section deserves your undivided attention (but I know you'd pay attention anyway!). :]
You're going to be writing the pattern detector in C++ for two reasons: better performance, and because you're going to be interfacing with the OpenCV SDK, which is also written in C++.
Add the following code to PatternDetector.h:
#include "VideoFrame.h" class PatternDetector { #pragma mark - #pragma mark Public Interface public: // (1) Constructor PatternDetector(const cv::Mat& pattern); // (2) Scan the input video frame void scanFrame(VideoFrame frame); // (3) Match APIs const cv::Point& matchPoint(); float matchValue(); float matchThresholdValue(); // (4) Tracking API bool isTracking(); #pragma mark - #pragma mark Private Members private: // (5) Reference Marker Images cv::Mat m_patternImage; cv::Mat m_patternImageGray; cv::Mat m_patternImageGrayScaled; // (6) Supporting Members cv::Point m_matchPoint; int m_matchMethod; float m_matchValue; float m_matchThresholdValue; float m_scaleFactor; }; |
Here's what's going on in the interface above:
1. The constructor takes a reference to the marker image you want to track.
2. scanFrame() accepts a video frame from the camera and scans it for instances of the marker.
3. The match APIs report where the best candidate match was found, how strong that match is, and the threshold used to decide whether it counts.
4. isTracking() returns TRUE if the detector believes the marker is currently visible in the video stream. Otherwise, it returns FALSE.
5. m_patternImage is a reference to the original marker pattern. In your code, this will be a reference to the bull's-eye target marker pattern. m_patternImageGray is simply a grayscale version of m_patternImage; most image processing algorithms run an order of magnitude faster on grayscale images than on color images, so in your code this will be a black-and-white version of the bull's-eye target marker pattern.
6. m_patternImageGrayScaled is a smaller version of m_patternImageGray. This is the actual image reference used for pattern detection, with its size optimized for speed; in your code, it will be a small version of the black-and-white bull's-eye target marker pattern.
Add the following code to the top of PatternDetector.cpp, just beneath the include directives:
const float kDefaultScaleFactor    = 2.00f;
const float kDefaultThresholdValue = 0.50f;
- kDefaultScaleFactor is the amount by which m_patternImageGrayScaled will be scaled down from m_patternImageGray. In your code, you'll be cutting the image dimensions down by a factor of two, improving performance by a factor of about four, since the total area of the image will be about a quarter of the size of the original.
- kDefaultThresholdValue specifies the score below which candidate matches will be discarded as spurious. In your code, you'll discard candidate matches unless the reported confidence of the match is higher than 0.5.
Now add the following definition for the constructor to PatternDetector.cpp:
PatternDetector::PatternDetector(const cv::Mat& patternImage) { // (1) Save the pattern image m_patternImage = patternImage; // (2) Create a grayscale version of the pattern image switch ( patternImage.channels() ) { case 4: /* 3 color channels + 1 alpha */ cv::cvtColor(m_patternImage, m_patternImageGray, CV_RGBA2GRAY); break; case 3: /* 3 color channels */ cv::cvtColor(m_patternImage, m_patternImageGray, CV_RGB2GRAY); break; case 1: /* 1 color channel, grayscale */ m_patternImageGray = m_patternImage; break; } // (3) Scale the gray image m_scaleFactor = kDefaultScaleFactor; float h = m_patternImageGray.rows / m_scaleFactor; float w = m_patternImageGray.cols / m_scaleFactor; cv::resize(m_patternImageGray, m_patternImageGrayScaled, cv::Size(w,h)); // (4) Configure the tracking parameters m_matchThresholdValue = kDefaultThresholdValue; m_matchMethod = CV_TM_CCOEFF_NORMED; } |
Taking each numbered comment in the constructor in turn:
1. Save a reference to the original pattern image.
2. Create a grayscale version of the pattern image, using cv::cvtColor() to reduce the number of color channels if necessary.
3. Scale the grayscale image down by m_scaleFactor — in your code, this is set to 2.
4. Configure the tracking parameters. CV_TM_CCOEFF_NORMED is one of six possible matching heuristics used by OpenCV to compare images. With this heuristic, increasingly better matches are indicated by increasingly large numerical values (i.e., values closer to 1.0).
Add the following definition to PatternDetector.cpp:
void PatternDetector::scanFrame(VideoFrame frame) { // (1) Build the grayscale query image from the camera data cv::Mat queryImageGray, queryImageGrayScale; cv::Mat queryImage = cv::Mat(frame.height, frame.width, CV_8UC4, frame.data, frame.stride); cv::cvtColor(queryImage, queryImageGray, CV_BGR2GRAY); // (2) Scale down the image float h = queryImageGray.rows / m_scaleFactor; float w = queryImageGray.cols / m_scaleFactor; cv::resize(queryImageGray, queryImageGrayScale, cv::Size(w,h)); // (3) Perform the matching int rows = queryImageGrayScale.rows - m_patternImageGrayScaled.rows + 1; int cols = queryImageGrayScale.cols - m_patternImageGrayScaled.cols + 1; cv::Mat resultImage = cv::Mat(cols, rows, CV_32FC1); cv::matchTemplate(queryImageGrayScale, m_patternImageGrayScaled, resultImage, m_matchMethod); // (4) Find the min/max settings double minVal, maxVal; cv::Point minLoc, maxLoc; cv::minMaxLoc(resultImage, &minVal, &maxVal, &minLoc, &maxLoc, cv::Mat()); switch ( m_matchMethod ) { case CV_TM_SQDIFF: case CV_TM_SQDIFF_NORMED: m_matchPoint = minLoc; m_matchValue = minVal; break; default: m_matchPoint = maxLoc; m_matchValue = maxVal; break; } } |
Here's what you do in the code above:
1. Build a cv::Mat image container directly from the video frame data, then convert it to grayscale to accelerate the speed at which matches are performed.
2. Scale the grayscale image down by m_scaleFactor to further accelerate things.
3. Run cv::matchTemplate() at this point. The calculation used here to determine the dimensions of the output array was discussed earlier. The output array will be populated with floats ranging from 0.0 to 1.0, with higher numbers indicating greater confidence in the candidate match at that point.
4. Use cv::minMaxLoc() to identify the location of the best match in the frame, as well as the exact value at that point. For most of the matching heuristics used by OpenCV — including the one you're using — larger numbers correspond to better matches. However, for the matching heuristics CV_TM_SQDIFF and CV_TM_SQDIFF_NORMED, better matches are indicated by lower numerical values; you handle these as special cases in a switch block.
Since the type of resultImage is cv::Mat, the output array can be rendered on-screen as a black-and-white image where brighter pixels indicate better match points between the two images. This can be extremely useful when debugging.
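If you want to try that debugging trick, a rough sketch might look like the following. This is illustrative only: you'd need to expose or copy resultImage out of scanFrame(), which the class doesn't currently do, and the helper name is made up:

#import "UIImage+OpenCV.h"

// Illustrative debugging helper: render a CV_32FC1 match-score matrix as a
// grayscale UIImage, where brighter pixels mean better candidate matches.
static UIImage * debugImageFromResult(const cv::Mat& resultImage) {
    cv::Mat gray8;
    // Scale the 0.0 - 1.0 float scores up to the 0 - 255 range of 8-bit gray.
    resultImage.convertTo(gray8, CV_8UC1, 255.0);
    return [UIImage fromCVMat:gray8];
}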
Add the following code to PatternDetector.cpp:
const cv::Point& PatternDetector::matchPoint() { return m_matchPoint; } float PatternDetector::matchValue() { return m_matchValue; } float PatternDetector::matchThresholdValue() { return m_matchThresholdValue; } |
These are three simple accessors, nothing more.
Add the following code to PatternDetector.cpp:
bool PatternDetector::isTracking() { switch ( m_matchMethod ) { case CV_TM_SQDIFF: case CV_TM_SQDIFF_NORMED: return m_matchValue < m_matchThresholdValue; default: return m_matchValue > m_matchThresholdValue; } } |
Just as you did above with scanFrame()
, the two heuristics CV_TM_SQDIFF
and CV_TM_SQDIFF_NORMED
must be handled here as special cases.
In this section you're going to integrate the pattern detector with the view controller.
Open ViewController.mm and add the following code to the very end of viewDidLoad
:
// Configure Pattern Detector UIImage * trackerImage = [UIImage imageNamed:@"target.jpg"]; m_detector = new PatternDetector([trackerImage toCVMat]); // 1 // Start the Tracking Timer m_trackingTimer = [NSTimer scheduledTimerWithTimeInterval:(1.0f/20.0f) target:self selector:@selector(updateTracking:) userInfo:nil repeats:YES]; // 2 |
Taking each comment in turn:
1. Create the PatternDetector, initializing it with the target.jpg tracker image converted to a cv::Mat.
2. Schedule a timer that calls updateTracking: 20 times per second; you'll implement this method below.

Replace the stubbed-out implementation of updateTracking: in ViewController.mm with the following code:
- (void)updateTracking:(NSTimer*)timer { if ( m_detector->isTracking() ) { NSLog(@"YES: %f", m_detector->matchValue()); } else { NSLog(@"NO: %f", m_detector->matchValue()); } } |
This method is clearly not "game-ready" in its current state; all you're doing here is quickly checking whether or not the detector is tracking the marker.
If you point the camera at a bull's-eye target marker, the match score will shoot up to almost 1.0 and the detector will indicate that it is successfully tracking the marker. Conversely, if you point the camera away from the bull's-eye target marker, the match score will drop to near 0.0 and the detector will indicate that it is not presently tracking the marker.
However, if you were to build and run your app at this point you'd be disappointed to learn that you can't seem to track anything; the detector consistently returns a matchValue()
of 0.0, no matter where you point the camera. What gives?
That's an easy one to solve — you're not processing any video frames yet!
Return to ViewController.mm and add the following line to the very end of frameReady:
, just after the dispatch_sync() GCD call:
m_detector->scanFrame(frame); |
The full definition for frameReady:
should now look like the following:
- (void)frameReady:(VideoFrame)frame { __weak typeof(self) _weakSelf = self; dispatch_sync( dispatch_get_main_queue(), ^{ // (1) Construct CGContextRef from VideoFrame CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); CGContextRef newContext = CGBitmapContextCreate(frame.data, frame.width, frame.height, 8, frame.stride, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst); // (2) Construct CGImageRef from CGContextRef CGImageRef newImage = CGBitmapContextCreateImage(newContext); CGContextRelease(newContext); CGColorSpaceRelease(colorSpace); // (3) Construct UIImage from CGImageRef UIImage * image = [UIImage imageWithCGImage:newImage]; CGImageRelease(newImage); [[_weakSelf backgroundImageView] setImage:image]; }); m_detector->scanFrame(frame); } |
Previously, frameReady:
simply drew video frames on the screen, thereby creating a "real time" video feed as the visual backdrop for the game. Now, each video frame is passed off to the pattern detector, where the OpenCV APIs scan the frame looking for instances of the target marker.
All right, it's showtime! Build and run your app; open the console in Xcode, and you'll see a long list of "NO" messages indicating the detector can't match the target.
Now, point your camera at the tracker image below:
When the camera is aimed directly at the bull's-eye target and the pattern detector is successfully tracking the marker, you'll see a “YES” message logged to the console along with the corresponding match confidence value.
The threshold confidence level is set at 0.5, so you may need to fiddle with the position of your device until the match scores surpass that value.
If you're using an iPhone, you should expect a match when you hold the device at a distance where the height of the bull's-eye image covers a little less than one third of the height of the iPhone screen in landscape orientation.
Your console log should look like the following once the device starts tracking the marker:
2013-12-07 01:45:34.121 OpenCVTutorial[4890:907] NO: 0.243143 2013-12-07 01:45:34.168 OpenCVTutorial[4890:907] NO: 0.243143 2013-12-07 01:45:34.218 OpenCVTutorial[4890:907] NO: 0.264737 2013-12-07 01:45:34.268 OpenCVTutorial[4890:907] NO: 0.270497 2013-12-07 01:45:34.318 OpenCVTutorial[4890:907] NO: 0.270497 2013-12-07 01:45:34.368 OpenCVTutorial[4890:907] YES: 0.835372 2013-12-07 01:45:34.417 OpenCVTutorial[4890:907] YES: 0.834664 2013-12-07 01:45:34.468 OpenCVTutorial[4890:907] YES: 0.834664 2013-12-07 01:45:34.517 OpenCVTutorial[4890:907] YES: 0.842802 2013-12-07 01:45:34.568 OpenCVTutorial[4890:907] YES: 0.841466 |
Congratulations — you now have a working computer vision system running on your iOS device!
That's it for this part of the tutorial. You've learned about pattern matching, integrated OpenCV into your code, and managed to output the pattern recognition results to the console.
You can download the completed project for this part as a zipped project file.
The fourth part of this tutorial will combine everything you've completed so far into a working game.
If you have any questions or comments on this tutorial series, please come join the discussion below!
How To Make An Augmented Reality Target Shooter Game With OpenCV: Part 3/4 is a post from: Ray Wenderlich
The post How To Make An Augmented Reality Target Shooter Game With OpenCV: Part 3/4 appeared first on Ray Wenderlich.
The first video in our Table Views in iOS series. Learn the basics of creating table views in the Storyboard editor using prototype cells.
Video Tutorial: Table Views Getting Started is a post from: Ray Wenderlich
The post Video Tutorial: Table Views Getting Started appeared first on Ray Wenderlich.
Welcome to the final part of this tutorial series! In the first part of this tutorial, you used the AVFoundation classes to create a live video feed for your game from the rear-facing camera.
In the second part, you learned how to implement the game controls and leverage Core Animation to create some great-looking explosion effects.
In the third part, you integrated the OpenCV framework into your app and printed messages to the console when a certain target was recognized.
Your final task is to tie everything together into a bona fide Augmented Reality gaming app.
If you saved your project from the last part of this tutorial, then you can pick up right where you left off. If you don’t have your previous project, or prefer to start anew, you can download the starter project for this part of the tutorial.
Reading off numbers in the console gives you some feedback on how well you’re tracking the marker, but the console is far too clunky for in-game use; it’d be nice if you had something a little more visual to guide your actions during gameplay.
You’ll use OpenCV’s image processing capabilities to generate better visual cues for tracking the marker. You’ll modify the pattern detector to “peek” inside and obtain a real-time video feed of where the processor thinks the marker is. If you’re having trouble tracking the marker, you can use this real-time feed to help you guide the camera into a better position.
Open OpenCVTutorial-Prefix.pch and find the following line:
#define kUSE_TRACKING_HELPER 0 |
Replace that line with the following code:
#define kUSE_TRACKING_HELPER 1 |
This activates a Helper button in the lower left portion of the screen; pressing it displays a tracking console that assists with marker tracking.
Build and run your project; you’ll see an orange Helper button in the lower left portion of the screen as shown below:
Tap the Helper button and the tracking console appears…but it’s not yet fully implemented.
Now would be a good time to flesh it out! :]
The tracking console has four main components: a sample image view showing what the pattern detector currently "sees", a label displaying the detector's threshold value, a label displaying the current match value, and a close button.
The close button has already been implemented; press it to dismiss the tracking console.
Open PatternDetector.h and add the following code to the “public” portion of the header file:
const cv::Mat& sampleImage(); |
Here you declare a public accessor method to obtain a reference to a sample image that furnishes a “peek” at the marker from the perspective of the pattern detector.
Next, add the following code to the “private” portion of the same header file:
cv::Mat m_sampleImage; |
Here you declare a private data member that holds the reference to the sample image.
The full header file will now look like the following:
#include "VideoFrame.h" class PatternDetector { #pragma mark - #pragma mark Public Interface public: // Constructor PatternDetector(const cv::Mat& pattern); // Scan the input video frame void scanFrame(VideoFrame frame); // Match APIs const cv::Point& matchPoint(); float matchValue(); float matchThresholdValue(); // Tracking API bool isTracking(); // Peek inside the pattern detector to assist marker tracking const cv::Mat& sampleImage(); #pragma mark - #pragma mark Private Members private: // Reference Marker Images cv::Mat m_patternImage; cv::Mat m_patternImageGray; cv::Mat m_patternImageGrayScaled; cv::Mat m_sampleImage; // Supporting Members cv::Point m_matchPoint; int m_matchMethod; float m_matchValue; float m_matchThresholdValue; float m_scaleFactor; }; |
Open PatternDetector.cpp and add the following code to the very end of the file:
const cv::Mat& PatternDetector::sampleImage() { return m_sampleImage; } |
Still working in PatternDetector.cpp, add the following code to the very end of the scanFrame()
method, just after the switch
statement:
#if kUSE_TRACKING_HELPER // (1) copy image cv::Mat debugImage; queryImageGrayScale.copyTo(debugImage); // (2) overlay rectangle cv::rectangle(debugImage, m_matchPoint, cv::Point(m_matchPoint.x + m_patternImageGrayScaled.cols,m_matchPoint.y + m_patternImageGrayScaled.rows), CV_RGB(0, 0, 0), 3); // (3) save to member variable debugImage.copyTo(m_sampleImage); #endif |
This code builds the live debugging display as follows:

1. Copy the scaled grayscale query image into a temporary debugImage.
2. Overlay a rectangle on the copy, outlining where the pattern detector believes the best match is located.
3. Save the result into the m_sampleImage member variable, where it can be retrieved through the sampleImage() accessor.
The code is guarded by the compiler macro kUSE_TRACKING_HELPER
; that way you won’t use or even compile this code unless the flag is set. This saves you CPU cycles when the help screen is not visible.
Return to ViewController.mm and replace the stubbed-out implementation of updateSample:
with the following code:
- (void)updateSample:(NSTimer*)timer { self.sampleView.image = [UIImage fromCVMat:m_detector->sampleImage()]; self.sampleLabel1.text = [NSString stringWithFormat:@"%0.3f", m_detector->matchThresholdValue()]; self.sampleLabel2.text = [NSString stringWithFormat:@"%0.3f", m_detector->matchValue()]; } |
This method is pretty straightforward: it renders the detector's sample image in the sample view, and updates the two labels with the detector's match threshold value and the current match value, respectively.
You’re now ready to try out your “tracking goggles”!
Build and run your project; press the Helper button to bring up the tracking console and you will see it appear on your screen as shown below:
The pattern detector does its best to identify a “match”, and the candidate “match” region is highlighted by the outline of the black rectangle. However, the detector reports a very low confidence value — only 0.190 in this instance — for this candidate match.
Since this value is below your threshold value of 0.5, the result is discarded and the pattern detector indicates that it is not presently tracking the target marker.
The target marker is reproduced below for your convenience:
Point the camera directly at the target marker, and you’ll see that the pattern detector is able to identify the marker perfectly as indicated by the outlines of the sampling rectangle; in the example below the confidence level is 0.985, which is quite high:
At this point, if you were to query the pattern detector’s isTracking()
API it would respond with an indication that it is successfully tracking the target marker.
Don’t forget to disable the help screen once you no longer need it by setting the kUSE_TRACKING_HELPER
flag back to 0 in the *.pch
file.
The next step is to integrate marker tracking more closely with your app’s gameplay.
This requires the following updates to your game: the tutorial, scoreboard and trigger panels should start out hidden, and they should then be shown or hidden, with smooth animations, as the app's tracking state changes.
Open ViewController.mm and add the following code to the very end of viewDidLoad
:
// Start gameplay by hiding panels [self.tutorialPanel setAlpha:0.0f]; [self.scorePanel setAlpha:0.0f]; [self.triggerPanel setAlpha:0.0f]; |
Here you specify that the game should start off by hiding all three panels.
Of course, you still want to display the panels at various points in the game in response to changes in the tracking state of the app. Moreover, it’d be great if the presentation of the panels could be smoothly animated to make your game more engaging to end users.
You’re in luck: your starter project already contains a collection of useful animation categories on UIView. You simply have to implement the completion blocks for those animations.
Return to ViewController.mm and take a look at the class extension at the top of the file; you’ll see that two block properties have already been declared in the class extension as follows:
@property (nonatomic, copy) void (^transitioningTrackerComplete)(void); @property (nonatomic, copy) void (^transitioningTrackerCompleteResetScore)(void); |
There are two distinct completion behaviors you need to support: one that simply flags the tracker transition as complete, and one that flags the transition as complete and also resets the score. Note that both blocks are declared with the copy
property attribute, since a block needs to be copied in order to keep track of its captured state outside the original scope where the block was defined. (A short illustration of this appears after the next code block.)

Add the following code to the very end of viewDidLoad
in ViewController.mm:
// Define the completion blocks for transitions __weak typeof(self) _weakSelf = self; self.transitioningTrackerComplete = ^{ [_weakSelf setTransitioningTracker:NO]; }; self.transitioningTrackerCompleteResetScore = ^{ [_weakSelf setTransitioningTracker:NO]; [_weakSelf setScore:0]; }; |
This code provides implementations for the completion blocks as per the requirements outlined above.
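As promised above, here is a minimal, hypothetical illustration (not from the starter project) of why block properties are declared with copy: copying moves the block to the heap, so the state it captures outlives the scope that created it.

// Hypothetical example only.
@interface RWGreeter : NSObject
@property (nonatomic, copy) void (^greetingBlock)(void);
@end

@implementation RWGreeter
- (void)configureGreeting {
    int attempts = 3;                       // local, stack-based state
    self.greetingBlock = ^{                 // the copy attribute copies the block
        NSLog(@"attempts = %d", attempts);  // (and its captured state) to the heap
    };
}                                           // 'attempts' goes out of scope here, but
@end                                        // the copied block keeps its own copy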
Now you can start animating your views and bringing your game to life.
You’ll want the tutorial panel to display on-screen from the time the game starts until the user gains tracking for the first time.
Add the following method to ViewController.mm just after the definition of viewDidLoad
:
- (void)viewDidAppear:(BOOL)animated { // Pop-in the Tutorial Panel self.transitioningTracker = YES; [self.tutorialPanel slideIn:kAnimationDirectionFromTop completion:self.transitioningTrackerComplete]; [super viewDidAppear:animated]; } |
This code makes the tutorial panel “slide in” from the top as soon as the app starts; slideIn:completion:
implements this animation and is a member of an animation category included in your starter project.
Next you need the panels to react to changes in the tracking state of the app.
The app’s tracking state is presently being managed from updateTracking:
in ViewController.mm.
Replace updateTracking:
in ViewController.mm with the following:
- (void)updateTracking:(NSTimer*)timer { // Tracking Success if ( m_detector->isTracking() ) { if ( [self isTutorialPanelVisible] ) { [self togglePanels]; } } // Tracking Failure else { if ( ![self isTutorialPanelVisible] ) { [self togglePanels]; } } } |
The call to isTutorialPanelVisible
simply determines if the tutorial panel is visible; it’s been implemented in the starter project as well.
You do, however, need to provide an implementation for togglePanels
.
Replace the stubbed-out implementation of togglePanels
in ViewController.mm with the following code:
- (void)togglePanels { if ( !self.transitioningTracker ) { self.transitioningTracker = YES; if ( [self isTutorialPanelVisible] ) { // Adjust panels [self.tutorialPanel slideOut:kAnimationDirectionFromTop completion:self.transitioningTrackerComplete]; [self.scorePanel slideIn:kAnimationDirectionFromTop completion:self.transitioningTrackerComplete]; [self.triggerPanel slideIn:kAnimationDirectionFromBottom completion:self.transitioningTrackerComplete]; // Play sound AudioServicesPlaySystemSound(m_soundTracking); } else { // Adjust panels [self.tutorialPanel slideIn:kAnimationDirectionFromTop completion:self.transitioningTrackerComplete]; [self.scorePanel slideOut:kAnimationDirectionFromTop completion:self.transitioningTrackerCompleteResetScore]; [self.triggerPanel slideOut:kAnimationDirectionFromBottom completion:self.transitioningTrackerComplete]; } } } |
Here’s what’s going on in the code above:

1. If the tutorial panel is visible when you call togglePanels, the tutorial panel disappears and the scoreboard and trigger button are displayed on the right side of the screen.
2. If the tutorial panel is not visible when you call togglePanels, the tutorial panel reappears and the scoreboard and trigger button on the right side of the screen disappear.

The completion block that resets the score runs when the score panel slides off the screen; as well, a “tracking sound” plays when the detector first begins tracking to give the user an auditory cue that tracking has commenced.
Build and run your project; point the camera at the target marker, reproduced below:
The scoreboard and trigger button are now only visible when the pattern detector is actually tracking the marker. When the pattern detector is not tracking the marker, the tutorial screen pops back down into view.
Compared to the optics in expensive cameras, the camera lens that ships with your iOS device is not especially large or sophisticated. Due to its small size and simple design, imperfections in the lens and camera on your iOS device can end up distorting the images you’re trying to take in several different ways.
These parameters usually vary — sometimes widely — from one mobile device to another. What’s a developer to do?
You’ll need to implement a mechanism to calibrate the camera on your device; calibration is the process of mathematically estimating these parameters and correcting for them in software. It’s an essential step if you want your AR experience to appear remotely convincing to the end user.
OpenCV typically uses two data structures to calibrate a camera: a camera matrix that models the intrinsic parameters of the lens (such as focal length and optical center), and a matrix of distortion coefficients.
Rather than going through the trouble of estimating numerical values for each of these matrices, you’re going to do something much simpler.
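For context only, here is roughly what the full OpenCV route involves. This is a hedged sketch you will not use in this tutorial; it assumes you have already detected the corners of a known calibration pattern (such as a chessboard) in several photos.

// Sketch: estimate the camera matrix and distortion coefficients from
// several views of a known calibration pattern.
std::vector<std::vector<cv::Point3f> > objectPoints;  // known 3D corner positions
std::vector<std::vector<cv::Point2f> > imagePoints;   // detected 2D corners per photo
cv::Size imageSize(640, 480);                         // size of the calibration images

cv::Mat cameraMatrix, distCoeffs;
std::vector<cv::Mat> rvecs, tvecs;
double reprojectionError = cv::calibrateCamera(objectPoints, imagePoints, imageSize,
                                               cameraMatrix, distCoeffs, rvecs, tvecs);
// cameraMatrix and distCoeffs are the two structures described above.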
Go to the Video Source group and open CameraCalibration.h.
This file declares a much simpler C-struct
that represents camera calibration information:
struct CameraCalibration { float xDistortion; float yDistortion; float xCorrection; float yCorrection; }; |
The problem you’re tackling with camera calibration is properly mapping and aligning points from the three-dimensional “real world” of the video feed onto the two-dimensional “flat world” of your mobile device screen.
In iOS, device screens come in one of two aspect ratios, either 480 x 320 or 568 x 320. Neither of these maps especially well onto the 640 x 480 aspect ratio you’re using to capture video data for your target shooter game.
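To put rough numbers on the mismatch (plain arithmetic, not project code):

// Aspect ratios involved:
//   video capture : 640 / 480 ≈ 1.33  (4:3)
//   3.5" screen   : 480 / 320 = 1.5   (3:2)
//   4"  screen    : 568 / 320 ≈ 1.78  (roughly 16:9)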
This discrepancy between the aspect ratio of the device screen and the aspect ratio of the video feed is the largest source of “camera error” you’ll need to correct for in this tutorial. Moreover, you can correct for this discrepancy using little more than some simple linear algebra.
Don’t worry — you won’t have to derive all of the math yourself. Did you just breathe a sigh of relief? :]
Instead, the answer will be shown below so you can keep charging toward the end-goal of a fully operational AR target blaster.
Open ViewController.mm and add the following code to the very end of viewDidLoad
:
// Numerical estimates for camera calibration if ( IS_IPHONE_5() ) { m_calibration = {0.88f, 0.675f, 1.78, 1.295238095238095}; } else { m_calibration = {0.8f, 0.675f, (16.0f/11.0f), 1.295238095238095}; } |
Admittedly, these numbers don’t look especially “linear”; there are non-linear eccentricities at play here that were derived through empirical estimation. However, these numbers should be good enough to get your AR target blaster fully operational.
If you’ve been tapping the trigger button in your app, you’ve noticed that it’s still linked to the selectRandomRing
test API. You can point your device at the marker, the pattern detector can find and track the marker, but scoring is still random and unrelated to the marker pattern being tracked.
The final step is to coordinate the firing of the trigger button with the position of the target marker. In this section, you’re going to build an AR Visualization Layer that will act as the glue between what the computer vision system “sees” out in the real world, and the data model in your game that keeps track of points and scoring.
You’ve come a long way, baby — you’re almost done! :]
Go to the Visualization group and open ARView.h.
Review the header file quickly:
#import "CameraCalibration.h" @interface ARView : UIView #pragma mark - #pragma mark Constructors - (id)initWithSize:(CGSize)size calibration:(struct CameraCalibration)calibration; #pragma mark - #pragma mark Gameplay - (int)selectBestRing:(CGPoint)point; #pragma mark - #pragma mark Display Controls - (void)show; - (void)hide; @end |
ARView is an overlay that is activated whenever your game is tracking the target marker.
The object has two main purposes: it decides which ring of the bull’s-eye a given blast point falls within (the selectBestRing: method), and it shows or hides the AR overlay as tracking starts and stops (the show and hide methods).
Open ARView.m and replace the stubbed-out implementation of show
with the code below:
- (void)show { self.alpha = kAlphaShow; } |
Similarly, replace the stubbed-out implementation of hide
with the following code:
- (void)hide { self.alpha = kAlphaHide; } |
Open ViewController.mm and add the following code to the very end of viewDidLoad
:
// Create Visualization Layer self.arView = [[ARView alloc] initWithSize:CGSizeMake(trackerImage.size.width, trackerImage.size.height) calibration:m_calibration]; [self.view addSubview:self.arView]; [self.arView hide]; // Save Visualization Layer Dimensions m_targetViewWidth = self.arView.frame.size.width; m_targetViewHeight = self.arView.frame.size.height; |
Here you create a new instance of the visualization layer as follows: you initialize the ARView with the dimensions of the tracker image and the camera calibration data, add it as a subview of the main view, hide it until tracking begins, and finally store the layer’s width and height in member variables for later use.
Next you need to link the behavior of the AR visualization layer with the tracking state of your game.
Modify updateTracking:
in ViewController.mm as follows:
- (void)updateTracking:(NSTimer*)timer { // Tracking Success if ( m_detector->isTracking() ) { if ( [self isTutorialPanelVisible] ) { [self togglePanels]; } // Begin tracking the bullseye target cv::Point2f matchPoint = m_detector->matchPoint(); // 1 self.arView.center = CGPointMake(m_calibration.xCorrection * matchPoint.x + m_targetViewWidth / 2.0f, m_calibration.yCorrection * matchPoint.y + m_targetViewHeight / 2.0f); [self.arView show]; } // Tracking Failure else { if ( ![self isTutorialPanelVisible] ) { [self togglePanels]; } // Stop tracking [self.arView hide]; // 2 } } |
Here’s a quick breakdown:

1. When the detector is tracking, you fetch the match point, apply the calibration correction factors to convert it into screen coordinates, center the AR view at that location and then show the view.
2. When tracking is lost, you simply hide the AR view.
Build and run your app; point the camera at the target marker reproduced below:
Once you’re tracking the marker, your screen will look similar to the following:
The background color of the AR layer is set to dark gray, and the outermost ring is highlighted in blue. The reason for coloring these components is to give you a sense of how the AR layer “tracks” the position of the “real world” marker in the video stream.
Play around with the tracking a bit; try to move the position of the marker around by changing where you point the camera and watch the AR layer “track” the marker and move to the correct position.
Once you’re done waving your device around, open ARView.m and find initWithSize:calibration:
.
Find the line in the constructor that reads self.ringNumber = 1
and modify it to read self.ringNumber = 5
.
This will select the fifth, or innermost, bull’s-eye for highlighting.
Build and run your app; once you are tracking the target you’ll see something like the following:
Play around and set ringNumber
to different values between 1 and 5 to highlight different rings; this can prove useful when trying to debug camera calibration statistics.
Open ARView.m, and scroll to the very top of the file. Find the line that reads #define kDRAW_TARGET_DRAW_RINGS 1
.
Change this line so that it reads #define kDRAW_TARGET_DRAW_RINGS 0
.
Working in the same file, find the line that reads #define kColorBackground [UIColor darkGrayColor]
.
Change this line so that it reads #define kColorBackground [UIColor clearColor]
.
The top three lines of ARView.m should now read like the following:
#define kDRAW_TARGET_DRAW_RINGS 0 #define kDRAW_TARGET_BULLET_HOLES 1 #define kColorBackground [UIColor clearColor] |
This deactivates the highlighting of the rings and sets the background color of the AR layer to a more natural transparent color.
Now that you know how to track the marker, it’s time to finally link up the scoreboard — correctly. :]
Still working in ARView.m, replace the stubbed-out implementation of selectBestRing:
with the following code:
- (int)selectBestRing:(CGPoint)point { int bestRing = 0; CGFloat dist = distance(point, m_center, m_calibration); if ( dist < kRadius5 ) { bestRing = 5; } else if ( dist < kRadius4 ) { bestRing = 4; } else if ( dist < kRadius3 ) { bestRing = 3; } else if ( dist < kRadius2 ) { bestRing = 2; } else if ( dist < kRadius1 ) { bestRing = 1; } return bestRing; } |
The point where the marker was “hit” by the blast from your game is the single argument to this method. The method calculates the distance from this point to the center of the AR layer, which also corresponds to the center of the bull’s-eye target you’re aiming for. Finally, it finds the smallest enclosing ring for this distance, and returns that ring as the one that was “hit” by the blast.
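The distance() helper used above is supplied by the starter project. Its exact implementation is not shown here, but conceptually it is a calibration-aware Euclidean distance, along the lines of this hypothetical sketch:

// Hypothetical sketch only; the starter project's distance() may differ.
static CGFloat distance(CGPoint p, CGPoint center, struct CameraCalibration calib)
{
    // Scale each axis by the calibration distortion factors before measuring,
    // so that distances behave consistently in the AR layer's coordinate space.
    CGFloat dx = (p.x - center.x) * calib.xDistortion;
    CGFloat dy = (p.y - center.y) * calib.yDistortion;
    return sqrtf(dx * dx + dy * dy);
}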
Open ViewController.mm and remove the very first line of pressTrigger:
where you call selectRandomRing
. Replace it with the following code:
CGPoint hitPoint = [self.arView convertPoint:self.crosshairs.center fromView:self.view];
NSInteger ring = [self.arView selectBestRing:hitPoint];
The full definition for pressTrigger:
now reads as follows:
- (IBAction)pressTrigger:(id)sender { CGPoint hitPoint = [self.arView convertPoint:self.crosshairs.center fromView:self.view]; NSInteger ring = [self.arView selectBestRing:hitPoint]; switch ( ring ) { case 5: // Bullseye [self hitTargetWithPoints:kPOINTS_5]; break; case 4: [self hitTargetWithPoints:kPOINTS_4]; break; case 3: [self hitTargetWithPoints:kPOINTS_3]; break; case 2: [self hitTargetWithPoints:kPOINTS_2]; break; case 1: // Outermost Ring [self hitTargetWithPoints:kPOINTS_1]; break; case 0: // Miss Target [self missTarget]; break; } } |
This method is fairly straightforward:

1. The blast point is the center of the crosshairs; the code translates this location from the view controller’s coordinate system to that of the AR layer and stores it in a local variable named hitPoint.
2. It then passes hitPoint to the selectBestRing: API you defined previously, which returns the best-fitting ring that encloses the blast point.

The rest of the method works as it did before.
Build and run your app; point your camera at the marker and get a fix on the target below:
Tap the trigger button, and you’ll notice that points are now being tallied more-or-less correctly according to where you’re aiming the crosshairs.
To provide some visual feedback on your marksmanship — and to further augment the user experience — it would be great if you could track the bullet holes you make as you blast into the target pattern.
Fortunately, this is a very simple change.
Open ARView.m and add the following code just before the return
statement:
#if kDRAW_TARGET_BULLET_HOLES if ( bestRing > 0 ) { // (1) Create the UIView for the "bullet hole" CGFloat bulletSize = 6.0f; UIView * bulletHole = [[UIView alloc] initWithFrame:CGRectMake(point.x - bulletSize/2.0f, point.y - bulletSize/2.0f, bulletSize, bulletSize)]; bulletHole.backgroundColor = kColorBulletHole; [self addSubview:bulletHole]; // (2) Keep track of state, so it can be cleared [self.hits addObject:bulletHole]; } #endif |
The newly added code lives between the kDRAW_TARGET_BULLET_HOLES
compiler guards.
Here’s what you’re doing:

1. You create a small UIView to represent the “bullet hole”, position it at the blast point, give it the bullet-hole color and add it as a subview of the AR layer.
2. You add the view to the hits collection so that the bullet holes can be cleared later, when tracking resets.
Working in the same file, update the implementation of hide
as follows:
- (void)hide { self.alpha = kAlphaHide; #if kDRAW_TARGET_BULLET_HOLES for ( UIView * v in self.hits ) { [v removeFromSuperview]; } [self.hits removeAllObjects]; #endif } |
Again, the newly added code sits between the kDRAW_TARGET_BULLET_HOLES
compiler guards.
Here you’re simply clearing out the blast marks when the game resets.
Build and run your app one final time; point your camera at the target marker and blast away:
You should see something like the following on your screen:
Congratulations, your target blaster is fully operational!
Remember: Augmented Reality uses up a lot of processor cycles. The faster the hardware you’re running this app on, the better the user experience.
I hope you had as much fun building the AR Target Shooter Game as I did! You’ve mastered enough of OpenCV to be able to program a pretty cool Augmented Reality Target Shooter Game.
Here is the completed sample project with all of the code from the above tutorial.
If you’d like to keep exploring the fascinating world of computer vision, there are a number of additional resources out there to keep you going.
Finally, many of the leading AR toolkits on the market are pretty deeply integrated with the Unity game engine. The tutorial Beginning Unity for iOS on this site is an excellent introduction to Unity if you’ve never been exposed to it before.
If you have any further questions or comments about this tutorial, or about computer vision and augmented reality in general, please join the forum discussion below!
How To Make An Augmented Reality Target Shooter Game With OpenCV: Part 4/4 is a post from: Ray Wenderlich
The post How To Make An Augmented Reality Target Shooter Game With OpenCV: Part 4/4 appeared first on Ray Wenderlich.
As an iOS developer, nearly every line of code you write is in reaction to some event; a button tap, a received network message, a property change (via Key Value Observing) or a change in user’s location via CoreLocation are all good examples. However, these events are all encoded in different ways; as actions, delegates, KVO, callbacks and others. ReactiveCocoa defines a standard interface for events, so they can be more easily chained, filtered and composed using a basic set of tools.
Sound confusing? Intriguing? … Mind blowing? Then read on :]
ReactiveCocoa combines a couple of programming styles: functional programming, which makes use of higher-order functions that take other functions as arguments, and reactive programming, which focuses on data flows and change propagation.
For this reason, you might hear ReactiveCocoa described as a Functional Reactive Programming (or FRP) framework.
Rest assured, that is as academic as this tutorial is going to get! Programming paradigms are a fascinating subject, but the rest of this ReactiveCocoa tutorial focuses solely on the practical value, with worked examples instead of academic theories.
Throughout this ReactiveCocoa tutorial, you’ll be introducing reactive programming to a very simple example application, the ReactivePlayground. Download the starter project, then build and run to verify you have everything set up correctly.
ReactivePlayground is a very simple app that presents a sign-in screen to the user. Supply the correct credentials, which are, somewhat imaginatively, user for the username, and password for the password, and you’ll be greeted by a picture of a lovely little kitten.
Awww! How cute!
Now is a good time to spend a few minutes looking through the code of this starter project. It is quite simple, so it shouldn’t take long.
Open RWViewController.m and take a look around. How quickly can you identify the condition that results in the enabling of the Sign In button? What are the rules for showing / hiding the signInFailure
label? In this relatively simple example, it might take only a minute or two to answer these questions. For a more complex example, you should be able to see how this same type of analysis might take quite a bit longer.
With the use of ReactiveCocoa, the underlying intent of the application will become a lot clearer. It’s time to get started!
The easiest way to add the ReactiveCocoa framework to your project is via CocoaPods. If you’ve never used CocoaPods before it might make sense to follow the Introduction To CocoaPods tutorial on this site, or at the very least run through the initial steps of that tutorial so you can install the prerequisites.
Note: If for some reason you don’t want to use CocoaPods you can still use ReactiveCocoa, just follow the Importing ReactiveCocoa steps in the documentation on GitHub.
If you still have the ReactivePlayground project open in Xcode, then close it now. CocoaPods will create an Xcode workspace, which you’ll want to use instead of the original project file.
Open Terminal. Navigate to the folder where your project is located and type the following:
touch Podfile open -e Podfile |
This creates an empty file called Podfile and opens it with TextEdit. Copy and paste the following lines into the TextEdit window:
platform :ios, '7.0' pod 'ReactiveCocoa', '2.1.8' |
This sets the platform to iOS, the minimum SDK version to 7.0, and adds the ReactiveCocoa framework as a dependency.
Once you’ve saved this file, go back to the Terminal window and issue the following command:
pod install |
You should see an output similar to the following:
Analyzing dependencies Downloading dependencies Installing ReactiveCocoa (2.1.8) Generating Pods project Integrating client project [!] From now on use `RWReactivePlayground.xcworkspace`. |
This indicates that the ReactiveCocoa framework has been downloaded, and CocoaPods has created an Xcode workspace to integrate the framework into your existing application.
Open up the newly generated workspace, RWReactivePlayground.xcworkspace, and look at the structure CocoaPods created inside the Project Navigator:
You should see that CocoaPods created a new workspace and added the original project, RWReactivePlayground, together with a Pods project that includes ReactiveCocoa. CocoaPods really does make managing dependencies a breeze!
You’ll notice this project’s name is ReactivePlayground
, so that must mean it’s time to play …
As mentioned in the introduction, ReactiveCocoa provides a standard interface for handling the disparate stream of events that occur within your application. In ReactiveCocoa terminology these are called signals, and are represented by the RACSignal
class.
Open the initial view controller for this app, RWViewController.m, and import the ReactiveCocoa header by adding the following to the top of the file:
#import <ReactiveCocoa/ReactiveCocoa.h> |
You aren’t going to replace any of the existing code just yet, for now you’re just going to play around a bit. Add the following code to the end of the viewDidLoad
method:
[self.usernameTextField.rac_textSignal subscribeNext:^(id x) { NSLog(@"%@", x); }]; |
Build and run the application and type some text into the username text field. Keep an eye on the console and look for an output similar to the following:
2013-12-24 14:48:50.359 RWReactivePlayground[9193:a0b] i 2013-12-24 14:48:50.436 RWReactivePlayground[9193:a0b] is 2013-12-24 14:48:50.541 RWReactivePlayground[9193:a0b] is 2013-12-24 14:48:50.695 RWReactivePlayground[9193:a0b] is t 2013-12-24 14:48:50.831 RWReactivePlayground[9193:a0b] is th 2013-12-24 14:48:50.878 RWReactivePlayground[9193:a0b] is thi 2013-12-24 14:48:50.901 RWReactivePlayground[9193:a0b] is this 2013-12-24 14:48:51.009 RWReactivePlayground[9193:a0b] is this 2013-12-24 14:48:51.142 RWReactivePlayground[9193:a0b] is this m 2013-12-24 14:48:51.236 RWReactivePlayground[9193:a0b] is this ma 2013-12-24 14:48:51.335 RWReactivePlayground[9193:a0b] is this mag 2013-12-24 14:48:51.439 RWReactivePlayground[9193:a0b] is this magi 2013-12-24 14:48:51.535 RWReactivePlayground[9193:a0b] is this magic 2013-12-24 14:48:51.774 RWReactivePlayground[9193:a0b] is this magic? |
You can see that each time you change the text within the text field, the code within the block executes. No target-action, no delegates — just signals and blocks. That’s pretty exciting!
ReactiveCocoa signals (represented by RACSignal
) send a stream of events to their subscribers. There are three types of events to know: next, error and completed. A signal may send any number of next events before it terminates after an error, or it completes. In this part of the tutorial you’ll focus on the next event. Be sure to read part two when it’s available to learn about error and completed events.
RACSignal
has a number of methods you can use to subscribe to these different event types. Each method takes one or more blocks, with the logic in your block executing when an event occurs. In this case, you can see that the subscribeNext:
method was used to supply a block that executes on each next event.
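If you also cared about error and completed events, you could use the three-block subscription variant instead. This is just a sketch of the API shape and is not needed for the playground app:

[self.usernameTextField.rac_textSignal subscribeNext:^(id x) {
    NSLog(@"next: %@", x);
} error:^(NSError *error) {
    NSLog(@"error: %@", error);
} completed:^{
    NSLog(@"completed");
}];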
The ReactiveCocoa framework uses categories to add signals to many of the standard UIKit controls so you can add subscriptions to their events, which is where the rac_textSignal
property on the text field came from.
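The same idea extends beyond text fields. For example, any UIControl event can be exposed as a signal with rac_signalForControlEvents:, which you'll meet again later in this tutorial; the volumeSlider outlet below is hypothetical and purely for illustration:

[[self.volumeSlider rac_signalForControlEvents:UIControlEventValueChanged]
    subscribeNext:^(id sender) {
        NSLog(@"slider moved: %@", sender);
    }];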
But enough with the theory, it’s time to start making ReactiveCocoa do some work for you!
ReactiveCocoa has a large range of operators you can use to manipulate streams of events. For example, assume you’re only interested in a username if it’s more than three characters long. You can achieve this by using the filter
operator. Update the code you added previously in viewDidLoad
to the following:
[[self.usernameTextField.rac_textSignal filter:^BOOL(id value) { NSString *text = value; return text.length > 3; }] subscribeNext:^(id x) { NSLog(@"%@", x); }]; |
If you build and run, then type some text into the text field, you should find that it only starts logging when the text field length is greater than three characters:
2013-12-26 08:17:51.335 RWReactivePlayground[9654:a0b] is t 2013-12-26 08:17:51.478 RWReactivePlayground[9654:a0b] is th 2013-12-26 08:17:51.526 RWReactivePlayground[9654:a0b] is thi 2013-12-26 08:17:51.548 RWReactivePlayground[9654:a0b] is this 2013-12-26 08:17:51.676 RWReactivePlayground[9654:a0b] is this 2013-12-26 08:17:51.798 RWReactivePlayground[9654:a0b] is this m 2013-12-26 08:17:51.926 RWReactivePlayground[9654:a0b] is this ma 2013-12-26 08:17:51.987 RWReactivePlayground[9654:a0b] is this mag 2013-12-26 08:17:52.141 RWReactivePlayground[9654:a0b] is this magi 2013-12-26 08:17:52.229 RWReactivePlayground[9654:a0b] is this magic 2013-12-26 08:17:52.486 RWReactivePlayground[9654:a0b] is this magic? |
What you’ve created here is a very simple pipeline. It is the very essence of Reactive Programming, where you express your application’s functionality in terms of data flows.
It can help to picture these flows graphically:
In the above diagram you can see that the rac_textSignal
is the initial source of events. The data flows through a filter
that only allows events to pass if they contain a string with a length that is greater than three. The final step in the pipeline is subscribeNext:
where your block logs the event value.
At this point it’s worth noting that the output of the filter
operation is also an RACSignal
. You could arrange the code as follows to show the discrete pipeline steps:
RACSignal *usernameSourceSignal = self.usernameTextField.rac_textSignal; RACSignal *filteredUsername = [usernameSourceSignal filter:^BOOL(id value) { NSString *text = value; return text.length > 3; }]; [filteredUsername subscribeNext:^(id x) { NSLog(@"%@", x); }]; |
Because each operation on an RACSignal
also returns an RACSignal
it’s termed a fluent interface. This feature allows you to construct pipelines without the need to reference each step using a local variable.
If you updated your code to split it into the various RACSignal
components, now is the time to revert it back to the fluent syntax:
[[self.usernameTextField.rac_textSignal filter:^BOOL(id value) { NSString *text = value; // implicit cast return text.length > 3; }] subscribeNext:^(id x) { NSLog(@"%@", x); }]; |
The implicit cast from id
to NSString
, at the indicated location in the code above, is less than elegant. Fortunately, since the value passed to this block is always going to be an NSString
, you can change the parameter type itself. Update your code as follows:
[[self.usernameTextField.rac_textSignal filter:^BOOL(NSString *text) { return text.length > 3; }] subscribeNext:^(id x) { NSLog(@"%@", x); }]; |
Build and run to confirm this works just as it did previously.
So far this tutorial has described the different event types, but hasn’t detailed the structure of these events. What’s interesting is that an event can contain absolutely anything!
As an illustration of this point, you’re going to add another operation to the pipeline. Update the code you added to viewDidLoad
as follows:
[[[self.usernameTextField.rac_textSignal map:^id(NSString *text) { return @(text.length); }] filter:^BOOL(NSNumber *length) { return [length integerValue] > 3; }] subscribeNext:^(id x) { NSLog(@"%@", x); }]; |
If you build and run you’ll find the app now logs the length of the text instead of the contents:
2013-12-26 12:06:54.566 RWReactivePlayground[10079:a0b] 4 2013-12-26 12:06:54.725 RWReactivePlayground[10079:a0b] 5 2013-12-26 12:06:54.853 RWReactivePlayground[10079:a0b] 6 2013-12-26 12:06:55.061 RWReactivePlayground[10079:a0b] 7 2013-12-26 12:06:55.197 RWReactivePlayground[10079:a0b] 8 2013-12-26 12:06:55.300 RWReactivePlayground[10079:a0b] 9 2013-12-26 12:06:55.462 RWReactivePlayground[10079:a0b] 10 2013-12-26 12:06:55.558 RWReactivePlayground[10079:a0b] 11 2013-12-26 12:06:55.646 RWReactivePlayground[10079:a0b] 12 |
The newly added map operation transforms the event data using the supplied block. For each next event it receives, it runs the given block and emits the return value as a next event. In the code above, the map takes the NSString
input and takes its length, which results in an NSNumber
being returned.
For a stunning graphic depiction of how this works, take a look at this image:
As you can see, all of the steps that follow the map
operation now receive NSNumber
instances. You can use the map
operation to transform the received data into anything you like, as long as it’s an object.
The text.length property returns an NSUInteger, which is a primitive type. In order to use it as the contents of an event, it must be boxed. Fortunately, the Objective-C literal syntax provides an option to do this in a rather concise manner: @(text.length).
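Here is one more throwaway illustration of map before moving on; it isn't needed for the app, and simply shows that the block can return any object type:

[[self.usernameTextField.rac_textSignal map:^id(NSString *text) {
    return [text uppercaseString];   // emit an uppercase NSString instead
}] subscribeNext:^(id x) {
    NSLog(@"%@", x);
}];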
That’s enough playing! It’s time to update the ReactivePlayground app to use the concepts you’ve learned so far. You may remove all of the code you’ve added since you started this tutorial.
The first thing you need to do is create a couple of signals that indicate whether the username and password text fields are valid. Add the following to the end of viewDidLoad
in RWViewController.m:
RACSignal *validUsernameSignal = [self.usernameTextField.rac_textSignal map:^id(NSString *text) { return @([self isValidUsername:text]); }]; RACSignal *validPasswordSignal = [self.passwordTextField.rac_textSignal map:^id(NSString *text) { return @([self isValidPassword:text]); }]; |
As you can see, the above code applies a map
transform to the rac_textSignal
from each text field. The output is a boolean value boxed as a NSNumber
.
The next step is to transform these signals so that they provide a nice background color to the text fields. Basically, you subscribe to this signal and use the result to update the text field background color. One viable option is as follows:
[[validPasswordSignal map:^id(NSNumber *passwordValid) { return [passwordValid boolValue] ? [UIColor clearColor] : [UIColor yellowColor]; }] subscribeNext:^(UIColor *color) { self.passwordTextField.backgroundColor = color; }]; |
(Please don’t add this code, there’s a much more elegant solution coming!)
Conceptually you’re assigning the output of this signal to the backgroundColor
property of the text field. However, the code above is a poor expression of this; it’s all backwards!
Fortunately, ReactiveCocoa has a macro that allows you to express this with grace and elegance. Add the following code directly beneath the two signals you added to viewDidLoad
:
RAC(self.passwordTextField, backgroundColor) = [validPasswordSignal map:^id(NSNumber *passwordValid) { return [passwordValid boolValue] ? [UIColor clearColor] : [UIColor yellowColor]; }]; RAC(self.usernameTextField, backgroundColor) = [validUsernameSignal map:^id(NSNumber *passwordValid) { return [passwordValid boolValue] ? [UIColor clearColor] : [UIColor yellowColor]; }]; |
The RAC
macro allows you to assign the output of a signal to the property of an object. It takes two arguments, the first is the object that contains the property to set and the second is the property name. Each time the signal emits a next event, the value that passes is assigned to the given property.
This is a very elegant solution, don’t you think?
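As another quick example of the RAC macro, you could bind a label's text to the same source signal in exactly the same way; the statusLabel outlet here is hypothetical and purely for illustration:

RAC(self.statusLabel, text) = [self.usernameTextField.rac_textSignal
    map:^id(NSString *text) {
        return [NSString stringWithFormat:@"%lu characters typed",
                (unsigned long)text.length];
    }];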
One last thing before you build and run. Locate the updateUIState
method and remove the first two lines:
self.usernameTextField.backgroundColor = self.usernameIsValid ? [UIColor clearColor] : [UIColor yellowColor]; self.passwordTextField.backgroundColor = self.passwordIsValid ? [UIColor clearColor] : [UIColor yellowColor]; |
That will clean up the non-reactive code.
Build and run the application. You should find that the text fields look highlighted when invalid, and clear when valid.
Visuals are nice, so here is a way to visualize the current logic. Here you can see two simple pipelines that take the text signals, map them to validity-indicating booleans, and then follow with a second mapping to a UIColor
which is the part that binds to the background color of the text field.
Are you wondering why you created separate validPasswordSignal
and validUsernameSignal
signals, as opposed to a single fluent pipeline for each text field? Patience dear reader, the method behind this madness will become clear shortly!
In the current app, the Sign In button only works when both the username and password text fields have valid input. It’s time to do this reactive-style!
The current code already has signals that emit a boolean value to indicate if the username and password fields are valid; validUsernameSignal
and validPasswordSignal
. Your task is to combine these two signals to determine when it is okay to enable the button.
At the end of viewDidLoad
add the following:
RACSignal *signUpActiveSignal = [RACSignal combineLatest:@[validUsernameSignal, validPasswordSignal] reduce:^id(NSNumber *usernameValid, NSNumber *passwordValid) { return @([usernameValid boolValue] && [passwordValid boolValue]); }]; |
The above code uses the combineLatest:reduce:
method to combine the latest values emitted by validUsernameSignal
and validPasswordSignal
into a shiny new signal. Each time either of the two source signals emits a new value, the reduce block executes, and the value it returns is sent as the next value of the combined signal.
RACSignal
combine methods can combine any number of signals, and the arguments of the reduce block correspond to each of the source signals. ReactiveCocoa has a cunning little utility class, RACBlockTrampoline
that handles the reduce block’s variable argument list internally. In fact, there are a lot of cunning tricks hidden within the ReactiveCocoa implementation, so it’s well worth pulling back the covers!
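If the form ever grew a third input, the same method would scale naturally. The sketch below assumes a hypothetical termsAcceptedSignal and is not part of the app:

RACSignal *formValidSignal =
    [RACSignal combineLatest:@[validUsernameSignal,
                               validPasswordSignal,
                               termsAcceptedSignal]
                      reduce:^id(NSNumber *usernameValid,
                                 NSNumber *passwordValid,
                                 NSNumber *termsAccepted) {
        // One block argument per source signal, in the same order as the array.
        return @([usernameValid boolValue] &&
                 [passwordValid boolValue] &&
                 [termsAccepted boolValue]);
    }];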
Now that you have a suitable signal, add the following to the end of viewDidLoad
. This will wire it up to the enabled property on the button:
[signUpActiveSignal subscribeNext:^(NSNumber *signupActive) { self.signInButton.enabled = [signupActive boolValue]; }]; |
Before running this code, it’s time to rip out the old implementation. Remove these two properties from the top of the file:
@property (nonatomic) BOOL passwordIsValid; @property (nonatomic) BOOL usernameIsValid; |
From near the top of viewDidLoad
, remove the following:
// handle text changes for both text fields [self.usernameTextField addTarget:self action:@selector(usernameTextFieldChanged) forControlEvents:UIControlEventEditingChanged]; [self.passwordTextField addTarget:self action:@selector(passwordTextFieldChanged) forControlEvents:UIControlEventEditingChanged]; |
Also remove the updateUIState
, usernameTextFieldChanged
and passwordTextFieldChanged
methods. Whew! That’s a lot of non-reactive code you just disposed of! You’ll be thankful you did.
Finally, make sure to remove the call to updateUIState
from viewDidLoad
as well.
If you build and run, check the Sign In button. It should be enabled whenever the username and password text fields contain valid input, just as before.
An update to the application logic diagram gives the following:
The above illustrates a couple of important concepts that allow you to perform some pretty powerful tasks with ReactiveCocoa: signals can be split, so a single source signal can feed multiple pipelines (as the two text signals do here), and signals can be combined, so multiple signals can be merged into one (as combineLatest:reduce: does for the button state).
The result of these changes is the application no longer has private properties that indicate the current valid state of the two text fields. This is one of the key differences you’ll find when you adopt a reactive style — you don’t need to use instance variables to track transient state.
The application currently uses the reactive pipelines illustrated above to manage the state of the text fields and button. However, the button press handling still uses actions, so the next step is to replace the remaining application logic in order to make it all reactive!
The Touch Up Inside event on the Sign In button is wired up to the signInButtonTouched
method in RWViewController.m
via a storyboard action. You’re going to replace this with the reactive equivalent, so you first need to disconnect the current storyboard action.
Open up Main.storyboard, locate the Sign In button, ctrl-click to bring up the outlet / action connections and click the x to remove the connection. If you feel a little lost, the diagram below kindly shows where to find the delete button:
You’ve already seen how the ReactiveCocoa framework adds properties and methods to the standard UIKit controls. So far you’ve used rac_textSignal
, which emits events when the text changes. In order to handle events you need to use another of the methods that ReactiveCocoa adds to UIKit, rac_signalForControlEvents
.
Returning to RWViewController.m
, add the following to the end of viewDidLoad
:
[[self.signInButton rac_signalForControlEvents:UIControlEventTouchUpInside] subscribeNext:^(id x) { NSLog(@"button clicked"); }]; |
The above code creates a signal from the button’s UIControlEventTouchUpInside
event and adds a subscription to make a log entry every time this event occurs.
Build and run to confirm the message actually logs. Bear in mind that the button will enable only when the username and password are valid, so be sure to type some text into both fields before tapping the button!
You should see messages in the Xcode console similar to the following:
2013-12-28 08:05:10.816 RWReactivePlayground[18203:a0b] button clicked 2013-12-28 08:05:11.675 RWReactivePlayground[18203:a0b] button clicked 2013-12-28 08:05:12.605 RWReactivePlayground[18203:a0b] button clicked 2013-12-28 08:05:12.766 RWReactivePlayground[18203:a0b] button clicked 2013-12-28 08:05:12.917 RWReactivePlayground[18203:a0b] button clicked |
Now that the button has a signal for the touch event, the next step is to wire this up with the sign-in process itself. This presents something of a problem — but that’s good, you don’t mind a problem, right? Open up RWDummySignInService.h
and take a look at the interface:
typedef void (^RWSignInResponse)(BOOL); @interface RWDummySignInService : NSObject - (void)signInWithUsername:(NSString *)username password:(NSString *)password complete:(RWSignInResponse)completeBlock; @end |
This service takes a username, a password and a completion block as parameters. The given block is run when the sign-in is successful or when it fails. You could use this interface directly within the subscribeNext:
block that currently logs the button touch event, but why would you? This is the kind of asynchronous, event-based behavior that ReactiveCocoa eats for breakfast!
Fortunately, it’s rather easy to adapt existing asynchronous APIs to be expressed as a signal. First, remove the current signInButtonTouched:
method from RWViewController.m. You don’t need this logic, as it will be replaced with a reactive equivalent.
Stay in RWViewController.m and add the following method:
-(RACSignal *)signInSignal { return [RACSignal createSignal:^RACDisposable *(id<RACSubscriber> subscriber) { [self.signInService signInWithUsername:self.usernameTextField.text password:self.passwordTextField.text complete:^(BOOL success) { [subscriber sendNext:@(success)]; [subscriber sendCompleted]; }]; return nil; }]; } |
The above method creates a signal that signs in with the current username and password. Now for a breakdown of its component parts.
The above code uses the createSignal:
method on RACSignal
for signal creation. The block that describes this signal is the single argument passed to this method. When this signal has a subscriber, the code within this block executes.
The block is passed a single subscriber
instance that adopts the RACSubscriber
protocol, which has methods you invoke in order to emit events; you may also send any number of next events, terminated with either an error or complete event. In this case, it sends a single next event to indicate whether the sign-in was a success, followed by a complete event.
The return type for this block is an RACDisposable
object, and it allows you to perform any clean-up work that might be required when a subscription is cancelled or trashed. This signal does not have any clean-up requirements, hence nil
is returned.
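For a signal that does need clean-up, the disposable is where that work goes. Here is a hedged sketch, not part of this app, that wraps an NSNotificationCenter observer and removes it when the subscription is disposed:

RACSignal *keyboardSignal =
    [RACSignal createSignal:^RACDisposable *(id<RACSubscriber> subscriber) {
        id observer = [[NSNotificationCenter defaultCenter]
            addObserverForName:UIKeyboardWillShowNotification
                        object:nil
                         queue:[NSOperationQueue mainQueue]
                    usingBlock:^(NSNotification *note) {
                        [subscriber sendNext:note];
                    }];
        return [RACDisposable disposableWithBlock:^{
            // Runs when the subscription is disposed of.
            [[NSNotificationCenter defaultCenter] removeObserver:observer];
        }];
    }];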
As you can see, it’s surprisingly simple to wrap an asynchronous API in a signal!
Now to make use of this new signal. Update the code you added to the end of viewDidLoad
in the previous section as follows:
[[[self.signInButton rac_signalForControlEvents:UIControlEventTouchUpInside] map:^id(id x) { return [self signInSignal]; }] subscribeNext:^(id x) { NSLog(@"Sign in result: %@", x); }]; |
The above code uses the map
method used earlier to transform the button touch signal into the sign-in signal. The subscriber simply logs the result.
If you build and run, then tap the Sign In button, and take a look at the Xcode console, you’ll see the result of the above code …
… and the result isn’t quite what you might have expected!
2014-01-08 21:00:25.919 RWReactivePlayground[33818:a0b] Sign in result: <RACDynamicSignal: 0xa068a00> name: +createSignal: |
The subscribeNext:
block has been passed a signal all right, but not the result of the sign-in signal!
Time to illustrate this pipeline so you can see what’s going on:
The rac_signalForControlEvents
emits a next event (with the source UIButton
as its event data) when you tap the button. The map step creates and returns the sign-in signal, which means the following pipeline steps now receive a RACSignal
. That is what you’re observing at the subscribeNext:
step.
The situation above is sometimes called the signal of signals; in other words an outer signal that contains an inner signal. If you really wanted to, you could subscribe to the inner signal within the outer signal’s subscribeNext:
block. However it would result in a nested mess! Fortunately, it’s a common problem, and ReactiveCocoa is ready for this scenario.
The solution to this problem is straightforward, just change the map
step to a flattenMap
step as shown below:
[[[self.signInButton rac_signalForControlEvents:UIControlEventTouchUpInside] flattenMap:^id(id x) { return [self signInSignal]; }] subscribeNext:^(id x) { NSLog(@"Sign in result: %@", x); }]; |
This maps the button touch event to a sign-in signal as before, but also flattens it by sending the events from the inner signal to the outer signal.
Build and run, and keep an eye on the console. It should now log whether the sign-in was successful or not:
2013-12-28 18:20:08.156 RWReactivePlayground[22993:a0b] Sign in result: 0 2013-12-28 18:25:50.927 RWReactivePlayground[22993:a0b] Sign in result: 1 |
Exciting stuff!
Now that the pipeline is doing what you want, the final step is to add the logic to the subscribeNext
step to perform the required navigation upon successful sign-in. Replace the pipeline with the following:
[[[self.signInButton rac_signalForControlEvents:UIControlEventTouchUpInside] flattenMap:^id(id x) { return [self signInSignal]; }] subscribeNext:^(NSNumber *signedIn) { BOOL success = [signedIn boolValue]; self.signInFailureText.hidden = success; if (success) { [self performSegueWithIdentifier:@"signInSuccess" sender:self]; } }]; |
The subscribeNext:
block takes the result from the sign-in signal, updates the visibility of the signInFailureText
text field accordingly, and performs the navigation segue if required.
Build and run to enjoy the kitten once more! Meow!
Did you notice there is one small user experience issue with the current application? While the sign-in service validates the supplied credentials, it should disable the Sign In button. This prevents the user from repeating the same sign-in request. Furthermore, if a failed sign-in attempt occurred, the error message should be hidden when the user tries to sign in again.
But how should you add this logic to the current pipeline? Changing the button’s enabled state isn’t a transformation, a filter, or any of the other concepts you’ve encountered so far. Instead, it’s what is known as a side-effect: logic you want to execute within a pipeline when a next event occurs, but that does not actually change the nature of the event itself.
Replace the current pipeline with the following:
[[[[self.signInButton rac_signalForControlEvents:UIControlEventTouchUpInside]
  doNext:^(id x) {
    self.signInButton.enabled = NO;
    self.signInFailureText.hidden = YES;
  }]
  flattenMap:^id(id x) {
    return [self signInSignal];
  }]
  subscribeNext:^(NSNumber *signedIn) {
    self.signInButton.enabled = YES;
    BOOL success = [signedIn boolValue];
    self.signInFailureText.hidden = success;
    if (success) {
      [self performSegueWithIdentifier:@"signInSuccess" sender:self];
    }
  }];
You can see how the above adds a doNext:
step to the pipeline immediately after button touch event creation. Notice that the doNext:
block does not return a value, because it’s a side-effect; it leaves the event itself unchanged.
The doNext:
block above sets the button enabled property to NO
, and hides the failure text. The subscribeNext:
block then re-enables the button and either displays or hides the failure text based on the result of the sign-in.
It’s time to update the pipeline diagram to include this side effect. Bask in all its glory:
Build and run the application to confirm the Sign In button enables and disables as expected.
And with that, your work is done – the application is now fully reactive. Woot!
If you got lost along the way, you can download the final project (complete with dependencies), or you can obtain the code from GitHub, where there is a commit to match each build and run step in this tutorial.
Note: Disabling buttons while some asynchronous activity is underway is a common problem, and once again ReactiveCocoa is all over this little snafu. The RACCommand
encapsulates this concept, and has an enabled
signal that allows you to wire up the enabled property of a button to a signal. You might want to give the class a try.
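As a starting point for experimentation, a rough sketch of that approach might look like the following. It is not a drop-in replacement for the pipeline you just built; it simply shows the shape of the API, reusing the signInSignal method from this tutorial:

// Sketch: wrap the sign-in in an RACCommand. While the command's signal is
// executing, the button wired to it is disabled automatically.
RACCommand *signInCommand = [[RACCommand alloc] initWithSignalBlock:^RACSignal *(id input) {
  return [self signInSignal];
}];
self.signInButton.rac_command = signInCommand;

// executionSignals is a signal of sign-in signals; switchToLatest flattens
// it so the subscriber receives the actual sign-in results.
[[signInCommand.executionSignals switchToLatest] subscribeNext:^(NSNumber *signedIn) {
  NSLog(@"Sign in result: %@", signedIn);
}];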
Hopefully this tutorial has given you a good foundation that will help you when starting to use ReactiveCocoa in your own applications. It can take a bit of practice to get used to the concepts, but like any language or framework, once you get the hang of it, it’s really quite simple. At the very core of ReactiveCocoa are signals, which are nothing more than streams of events. What could be simpler than that?
With ReactiveCocoa one of the interesting things I have found is there are numerous ways in which you can solve the same problem. You might want to experiment with this application, and adjust the signals and pipelines to change the way they split and combine.
It’s worth considering that the main goal of ReactiveCocoa is to make your code cleaner and easier to understand. Personally I find it easier to understand what an application does if its logic is represented as clear pipelines, using the fluent syntax.
In the second part of this tutorial series you’ll learn about more advanced subjects such as error handling and how to manage code that executes on different threads. Until then, have fun experimenting!
ReactiveCocoa Tutorial – The Definitive Introduction: Part 1/2 is a post from: Ray Wenderlich
The post ReactiveCocoa Tutorial – The Definitive Introduction: Part 1/2 appeared first on Ray Wenderlich.
In this episode, we chat with special guest Adam Swinden from ios.devtools.me about the best iOS Dev Tools you should know and love!
Here’s what is covered in this episode:
Our Sponsor
News
What’s New on raywenderlich.com
Tech Talk
Contact Us
We hope you enjoyed this podcast! We have an episode each month, so be sure to subscribe in iTunes to get access as soon as it comes out.
We’d love to hear what you think about the podcast, and any suggestions on what you’d like to hear in future episodes. Feel free to drop a comment here, or email us anytime at podcast@raywenderlich.com!
iOS Dev Tools: The raywenderlich.com Podcast Episode 4 is a post from: Ray Wenderlich
The post iOS Dev Tools: The raywenderlich.com Podcast Episode 4 appeared first on Ray Wenderlich.
ReactiveCocoa is a framework that allows you to use Functional Reactive Programming (FRP) techniques within your iOS applications. With the first installment of this two-part ReactiveCocoa tutorial series you learned how to replace standard actions and event handling logic with signals that emit streams of events. You also learned how to transform, split and combine these signals.
In this, the second part of the series, you’re going to learn about more advanced features of ReactiveCocoa, including error handling and how to manage code that executes on different threads.
It’s time to dive in!
The application you’re going to develop throughout this tutorial is called Twitter Instant (modeled on the Google Instant concept), a Twitter search application that updates search results in real-time as you type.
The starter project for this application includes the basic user interface and some of the more mundane code you’ll need to get you started. As with part 1, you’ll need to use CocoaPods to obtain the ReactiveCocoa framework and integrate it with your project. The starter project already includes the necessary Podfile, so open up a terminal window and execute the following command:
pod install
If it executes correctly, you should see output similar to the following:
Analyzing dependencies
Downloading dependencies
Using ReactiveCocoa (2.1.8)
Generating Pods project
Integrating client project
This should have generated an Xcode workspace, TwitterInstant.xcworkspace. Open this up in Xcode and confirm that it contains two projects:
Build and run. The following interface will greet you:
Take a moment to familiarize yourself with the application code. It is a very simple split view controller-based app. The left-hand panel is the RWSearchFormViewController, which has a few UI controls added via the storyboard, and the search text field connected to an outlet. The right-hand panel is the RWSearchResultsViewController, which is currently just a UITableViewController
subclass.
If you open up RWSearchFormViewController.m you can see the viewDidLoad
method locates the results view controller and assigns it to the resultsViewController
private property. The majority of your application logic is going to live within RWSearchFormViewController, and this property will supply search results to RWSearchResultsViewController.
The first thing you’re going to do is validate the search text to ensure its length is greater than two characters. This should be a pleasant refresher if you completed part 1 of this series.
Within RWSearchFormViewController.m add the following method just below viewDidLoad
:
- (BOOL)isValidSearchText:(NSString *)text {
  return text.length > 2;
}
This simply ensures the supplied search string is longer than two characters. With such simple logic you might be asking “Why is this a separate method in the project file?”
The current logic is simple. But what if it needed to be more complex in the future? With the above approach, you would only need to make changes in one place. Furthermore, the separate method makes your code more expressive; it indicates why you’re checking the length of the string. We all follow good coding practices, right?
At the top of the same file, import ReactiveCocoa:
#import <ReactiveCocoa/ReactiveCocoa.h>
Within the same file add the following to the end of viewDidLoad
:
[[self.searchText.rac_textSignal
  map:^id(NSString *text) {
    return [self isValidSearchText:text] ? [UIColor whiteColor] : [UIColor yellowColor];
  }]
  subscribeNext:^(UIColor *color) {
    self.searchText.backgroundColor = color;
  }];
Wondering what that’s all about? The above code takes the text field’s rac_textSignal, maps each text value to a color based on whether the text is a valid search string, and then applies that color to the text field’s backgroundColor
property in the subscribeNext:
block.
Build and run to observe how the text field now indicates an invalid entry with a yellow background if the current search string is too short.
Illustrated graphically, this simple reactive pipeline looks a bit like this:
The rac_textSignal
emits next events containing the current text field’s text each time a change occurs. The map step transforms the text value into a color, while the subscribeNext:
step takes this value and applies it to the text field background.
Of course, you do remember this from the first article, right? If not, you might want to stop right here and at least read through the exercises.
Before adding the Twitter search logic, there are a few more interesting topics to cover.
When you’re delving into formatting ReactiveCocoa code, the generally accepted convention is to have each operation on a new line, and align all of the steps vertically.
In this next image, you can see the alignment of a more complex example, taken from the previous tutorial:
This allows you to see the operations that make up the pipeline very easily. Also, minimize the amount of code in each block; anything more than a couple of lines should be broken out into a private method.
Unfortunately, Xcode doesn’t really like this style of formatting, so you might find yourself battling with its automatic indentation logic!
Considering the code you added to the TwitterInstant app, are you wondering how the pipeline you just created is retained? Surely, as it is not assigned to a variable or property it will not have its reference count incremented and is doomed to destruction?
One of the design goals of ReactiveCocoa was to allow this style of programming, where pipelines can form anonymously. In all of the reactive code you’ve written so far, this should seem quite intuitive.
In order to support this model, ReactiveCocoa maintains and retains its own global set of signals. If it has one or more subscribers, then the signal is active. If all subscribers are removed, the signal can be de-allocated. For more information on how ReactiveCocoa manages this process see the Memory Management documentation.
That leaves one final question: How do you unsubscribe from a signal? After a completed or error event, a subscription removes itself automatically (you’ll learn more about this shortly). Manual removal may be accomplished via RACDisposable
.
The subscription methods on RACSignal
all return an instance of RACDisposable
that allows you to manually remove the subscription via the dispose method. Here is a quick example using the current pipeline:
RACSignal *backgroundColorSignal =
  [self.searchText.rac_textSignal
    map:^id(NSString *text) {
      return [self isValidSearchText:text] ? [UIColor whiteColor] : [UIColor yellowColor];
    }];

RACDisposable *subscription =
  [backgroundColorSignal
    subscribeNext:^(UIColor *color) {
      self.searchText.backgroundColor = color;
    }];

// at some point in the future ...
[subscription dispose];
It is unlikely you’ll find yourself doing this very often, but it is worth knowing the possibility exists.
Note: As a corollary to this, if you create a pipeline but do not subscribe to it, the pipeline never executes; this includes any side-effects such as doNext:
blocks.
While ReactiveCocoa does a lot of clever stuff behind the scenes — which means you don’t have to worry too much about the memory management of signals — there is one important memory-related issue you do need to consider.
If you look at the reactive code you just added:
[[self.searchText.rac_textSignal
  map:^id(NSString *text) {
    return [self isValidSearchText:text] ? [UIColor whiteColor] : [UIColor yellowColor];
  }]
  subscribeNext:^(UIColor *color) {
    self.searchText.backgroundColor = color;
  }];
The subscribeNext:
block uses self
in order to obtain a reference to the text field. Blocks capture and retain values from the enclosing scope; therefore, if a strong reference exists between self
and this signal, it will result in a retain cycle. Whether this matters or not depends on the lifecycle of the self
object. If its lifetime is the duration of the application, as is the case here, it doesn’t really matter. But in more complex applications, this is rarely the case.
In order to avoid this potential retain cycle, the Apple documentation for Working With Blocks recommends capturing a weak reference to self
. With the current code you can achieve this as follows:
__weak RWSearchFormViewController *bself = self; // Capture the weak reference

[[self.searchText.rac_textSignal
  map:^id(NSString *text) {
    return [self isValidSearchText:text] ? [UIColor whiteColor] : [UIColor yellowColor];
  }]
  subscribeNext:^(UIColor *color) {
    bself.searchText.backgroundColor = color;
  }];
In the above code bself
is a reference to self
that has been marked as __weak
in order to make it a weak reference. Notice that the subscribeNext:
block now uses the bself
variable. This doesn’t look terribly elegant!
The ReactiveCocoa framework includes a little trick you can use in place of the above code. Add the following import to the top of the file:
#import "RACEXTScope.h" |
Then replace the above code with the following:
@weakify(self)
[[self.searchText.rac_textSignal
  map:^id(NSString *text) {
    return [self isValidSearchText:text] ? [UIColor whiteColor] : [UIColor yellowColor];
  }]
  subscribeNext:^(UIColor *color) {
    @strongify(self)
    self.searchText.backgroundColor = color;
  }];
The @weakify
and @strongify
statements above are macros defined in the Extended Objective-C library, and they are also included in ReactiveCocoa. The @weakify
macro allows you to create shadow variables that are weak references (you can pass multiple variables if you require multiple weak references), while the @strongify
macro allows you to create strong references to variables that were previously passed to @weakify
.
Note: If you want to see what @weakify
and @strongify
actually do, within Xcode select Product -> Perform Action -> Preprocess “RWSearchFormViewController”. This will preprocess the view controller, expand all the macros and allow you to see the final output.
One final note of caution: take care when using instance variables within blocks. These will also result in the block capturing a strong reference to self
. You can turn on a compiler warning to alert you if your code results in this problem. Search for retain within the project’s build settings to find the options indicated below:
Okay, you survived the theory, congrats! Now you’re much wiser and ready to move on to the fun part: adding some real functionality to your application!
Note: The keen-eyed readers among you who paid attention in the previous tutorial will no doubt have noticed that you can remove the need for the subscribeNext:
block in the current pipeline by making use of the RAC
macro. If you spotted this, make that change and award yourself a shiny gold star!
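In case you’d like to check your answer, a sketch of that change looks roughly like this; the RAC macro binds the mapped signal directly to the text field’s backgroundColor key path, so the explicit subscribeNext: block disappears:

// Sketch: bind the color signal straight to the backgroundColor key path.
RAC(self.searchText, backgroundColor) =
  [self.searchText.rac_textSignal
    map:^id(NSString *text) {
      return [self isValidSearchText:text] ? [UIColor whiteColor] : [UIColor yellowColor];
    }];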
You’re going to use the Social Framework in order to allow the TwitterInstant application to search for Tweets, and the Accounts Framework in order to grant access to Twitter. For a more detailed overview of the Social Framework, check out the chapter dedicated to this framework in iOS 6 by Tutorials.
Before you add this code, you need to input your Twitter credentials into the simulator or the iPad you’re running this app on. Open the Settings app and select the Twitter menu option, then add your credentials on the right hand side of the screen:
The starter project already has the required frameworks added, so you just need to import the headers. Within RWSearchFormViewController.m, add the following imports to the top of the file:
#import <Accounts/Accounts.h>
#import <Social/Social.h>
Just beneath the imports add the following enumeration and constant:
typedef NS_ENUM(NSInteger, RWTwitterInstantError) {
  RWTwitterInstantErrorAccessDenied,
  RWTwitterInstantErrorNoTwitterAccounts,
  RWTwitterInstantErrorInvalidResponse
};

static NSString * const RWTwitterInstantDomain = @"TwitterInstant";
You’re going to be using these shortly to identify errors.
Further down the same file, just beneath the existing property declarations, add the following:
@property (strong, nonatomic) ACAccountStore *accountStore;
@property (strong, nonatomic) ACAccountType *twitterAccountType;
The ACAccountStore
class provides access to the various social media accounts your device can connect to, and the ACAccountType
class represents a specific type of account.
Further down the same file, add the following to the end of viewDidLoad
:
self.accountStore = [[ACAccountStore alloc] init];
self.twitterAccountType = [self.accountStore
  accountTypeWithAccountTypeIdentifier:ACAccountTypeIdentifierTwitter];
This creates the accounts store and Twitter account identifier.
When an app requests access to a social media account, the user sees a pop-up. This is an asynchronous operation, hence it is a good candidate for wrapping in a signal in order to use it reactively!
Further down the same file, add the following method:
- (RACSignal *)requestAccessToTwitterSignal {

  // 1 - define an error
  NSError *accessError = [NSError errorWithDomain:RWTwitterInstantDomain
                                             code:RWTwitterInstantErrorAccessDenied
                                         userInfo:nil];

  // 2 - create the signal
  @weakify(self)
  return [RACSignal createSignal:^RACDisposable *(id<RACSubscriber> subscriber) {
    // 3 - request access to twitter
    @strongify(self)
    [self.accountStore requestAccessToAccountsWithType:self.twitterAccountType
                                               options:nil
                                            completion:^(BOOL granted, NSError *error) {
      // 4 - handle the response
      if (!granted) {
        [subscriber sendError:accessError];
      } else {
        [subscriber sendNext:nil];
        [subscriber sendCompleted];
      }
    }];
    return nil;
  }];
}
This method does the following: it first defines an error to send if the user does not grant access (1), then creates and returns the signal; createSignal:
returns an instance of RACSignal
(2). Inside the block it requests access to the user’s Twitter accounts (3) and handles the response (4), sending an error event if access was denied, or a nil next event followed by a completed event if it was granted.
If you recall from the first tutorial, a signal can emit three different event types: next, completed and error.
Over a signal’s lifetime, it may emit no events, one or more next events followed by either a completed event or an error event.
Finally, in order to make use of this signal, add the following to the end of viewDidLoad
:
[[self requestAccessToTwitterSignal]
  subscribeNext:^(id x) {
    NSLog(@"Access granted");
  } error:^(NSError *error) {
    NSLog(@"An error occurred: %@", error);
  }];
If you build and run, the following prompt should greet you:
If you tap OK, the log message in the subscribeNext:
block should appear in the console, whereas if you tap Don’t Allow, the error block executes and logs the respective message.
The Accounts Framework remembers the decision you made. Therefore to test both paths you need to reset the simulator via the iOS Simulator -> Reset Contents and Settings … menu option. This is a bit of a pain because you also have to re-enter your Twitter credentials!
Once the user has (hopefully!) granted access to their Twitter accounts, the application needs to continuously monitor the changes to the search text field, in order to query Twitter.
The application needs to wait for the signal that requests access to Twitter to emit its completed event, and then subscribe to the text field’s signal. The sequential chaining of different signals is a common problem, but one that ReactiveCocoa handles very gracefully.
Replace your current pipeline at the end of viewDidLoad
with the following:
[[[self requestAccessToTwitterSignal]
  then:^RACSignal *{
    @strongify(self)
    return self.searchText.rac_textSignal;
  }]
  subscribeNext:^(id x) {
    NSLog(@"%@", x);
  } error:^(NSError *error) {
    NSLog(@"An error occurred: %@", error);
  }];
The then
method waits until a completed event is emitted, then subscribes to the signal returned by its block parameter. This effectively passes control from one signal to the next.
Note: You’ve already weakified self
for the pipeline that sits just above this one, so there is no need to precede this pipeline with a @weakify(self)
.
The then
method passes error events through. Therefore the final subscribeNext:error:
block still receives errors emitted by the initial access-requesting step.
When you build and run, then grant access, you should see the text you input into the search field logged in the console:
2014-01-04 08:16:11.444 TwitterInstant[39118:a0b] m
2014-01-04 08:16:12.276 TwitterInstant[39118:a0b] ma
2014-01-04 08:16:12.413 TwitterInstant[39118:a0b] mag
2014-01-04 08:16:12.548 TwitterInstant[39118:a0b] magi
2014-01-04 08:16:12.628 TwitterInstant[39118:a0b] magic
2014-01-04 08:16:13.172 TwitterInstant[39118:a0b] magic!
Next, add a filter
operation to the pipeline to remove any invalid search strings. In this instance, those are strings of fewer than three characters:
[[[[self requestAccessToTwitterSignal]
  then:^RACSignal *{
    @strongify(self)
    return self.searchText.rac_textSignal;
  }]
  filter:^BOOL(NSString *text) {
    @strongify(self)
    return [self isValidSearchText:text];
  }]
  subscribeNext:^(id x) {
    NSLog(@"%@", x);
  } error:^(NSError *error) {
    NSLog(@"An error occurred: %@", error);
  }];
Build and run again to observe the filtering in action:
2014-01-04 08:16:12.548 TwitterInstant[39118:a0b] magi
2014-01-04 08:16:12.628 TwitterInstant[39118:a0b] magic
2014-01-04 08:16:13.172 TwitterInstant[39118:a0b] magic!
Illustrating the current application pipeline graphically, it looks like this:
The application pipeline starts with the requestAccessToTwitterSignal
then switches to the rac_textSignal
. Meanwhile, next events pass through a filter and finally onto the subscription block. You can also see any error events emitted by the first step are consumed by the same subscribeNext:error:
block.
Now that you have a signal that emits the search text, it is time to use this to search Twitter! Are you having fun yet? You should be because now you’re really getting somewhere.
The Social Framework provides access to the Twitter Search API. However, as you might expect, the Social Framework is not reactive! The next step is to wrap the required API method calls in a signal. You should be getting the hang of this process by now!
Within RWSearchFormViewController.m, add the following method:
- (SLRequest *)requestforTwitterSearchWithText:(NSString *)text {
  NSURL *url = [NSURL URLWithString:@"https://api.twitter.com/1.1/search/tweets.json"];
  NSDictionary *params = @{@"q" : text};

  SLRequest *request = [SLRequest requestForServiceType:SLServiceTypeTwitter
                                          requestMethod:SLRequestMethodGET
                                                    URL:url
                                             parameters:params];
  return request;
}
This creates a request that searches Twitter via the v1.1 REST API. The above code uses the q
search parameter to search for tweets that contain the given search string. You can read more about this search API, and other parameters that you can pass, in the Twitter API docs.
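For example, if you want to experiment, a variant of the method above could pass a couple of the other documented v1.1 search parameters. The count and lang values below are illustrative only and not part of the TwitterInstant project:

- (SLRequest *)requestforTwitterSearchWithText:(NSString *)text {
  NSURL *url = [NSURL URLWithString:@"https://api.twitter.com/1.1/search/tweets.json"];

  // Sketch: ask for up to 50 English-language tweets for the search term.
  NSDictionary *params = @{@"q" : text,
                           @"count" : @"50",
                           @"lang" : @"en"};

  return [SLRequest requestForServiceType:SLServiceTypeTwitter
                            requestMethod:SLRequestMethodGET
                                      URL:url
                               parameters:params];
}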
The next step is to create a signal based on this request. Within the same file, add the following method:
- (RACSignal *)signalForSearchWithText:(NSString *)text {

  // 1 - define the errors
  NSError *noAccountsError = [NSError errorWithDomain:RWTwitterInstantDomain
                                                 code:RWTwitterInstantErrorNoTwitterAccounts
                                             userInfo:nil];

  NSError *invalidResponseError = [NSError errorWithDomain:RWTwitterInstantDomain
                                                      code:RWTwitterInstantErrorInvalidResponse
                                                  userInfo:nil];

  // 2 - create the signal block
  @weakify(self)
  return [RACSignal createSignal:^RACDisposable *(id<RACSubscriber> subscriber) {
    @strongify(self);

    // 3 - create the request
    SLRequest *request = [self requestforTwitterSearchWithText:text];

    // 4 - supply a twitter account
    NSArray *twitterAccounts = [self.accountStore
      accountsWithAccountType:self.twitterAccountType];
    if (twitterAccounts.count == 0) {
      [subscriber sendError:noAccountsError];
    } else {
      [request setAccount:[twitterAccounts lastObject]];

      // 5 - perform the request
      [request performRequestWithHandler:^(NSData *responseData,
                                           NSHTTPURLResponse *urlResponse,
                                           NSError *error) {
        if (urlResponse.statusCode == 200) {

          // 6 - on success, parse the response
          NSDictionary *timelineData =
            [NSJSONSerialization JSONObjectWithData:responseData
                                            options:NSJSONReadingAllowFragments
                                              error:nil];
          [subscriber sendNext:timelineData];
          [subscriber sendCompleted];
        } else {
          // 7 - send an error on failure
          [subscriber sendError:invalidResponseError];
        }
      }];
    }

    return nil;
  }];
}
Taking each step in turn: the method first defines the two errors it might emit (1), then creates the signal block (2). Inside the block it creates the search request (3), then obtains the user’s Twitter accounts from the account store, sending an error event if there are none, or attaching the last account to the request otherwise (4). It then performs the request (5); if the response has a 200 status code, it parses the JSON data and sends it as a next event followed by a completed event (6), while for any other status code it sends an error event (7).
Now to put this new signal to use!
In the first part of this tutorial you learnt how to use flattenMap
to map each next event to a new signal that is then subscribed to. It’s time to put this to use once again. At the end of viewDidLoad
update your application pipeline by adding a flattenMap
step at the end:
[[[[[self requestAccessToTwitterSignal]
  then:^RACSignal *{
    @strongify(self)
    return self.searchText.rac_textSignal;
  }]
  filter:^BOOL(NSString *text) {
    @strongify(self)
    return [self isValidSearchText:text];
  }]
  flattenMap:^RACStream *(NSString *text) {
    @strongify(self)
    return [self signalForSearchWithText:text];
  }]
  subscribeNext:^(id x) {
    NSLog(@"%@", x);
  } error:^(NSError *error) {
    NSLog(@"An error occurred: %@", error);
  }];
Build and run, then type some text into the search text field. Once the text is at least three characters long, you should see the results of the Twitter search in the console window.
The following shows just a snippet of the kind of data you’ll see:
2014-01-05 07:42:27.697 TwitterInstant[40308:5403] {
    "search_metadata" = {
        "completed_in" = "0.019";
        count = 15;
        "max_id" = 419735546840117248;
        "max_id_str" = 419735546840117248;
        "next_results" = "?max_id=419734921599787007&q=asd&include_entities=1";
        query = asd;
        "refresh_url" = "?since_id=419735546840117248&q=asd&include_entities=1";
        "since_id" = 0;
        "since_id_str" = 0;
    };
    statuses = (
        {
            contributors = "<null>";
            coordinates = "<null>";
            "created_at" = "Sun Jan 05 07:42:07 +0000 2014";
            entities = {
                hashtags = ...
The signalForSearchWithText:
method also emits error events which the subscribeNext:error:
block consumes. You could take my word for this, but you’d probably like to test it out!
Within the simulator open up the Settings app and select your Twitter account, then delete it by tapping the Delete Account button:
If you re-run the application, it is still granted access to the user’s Twitter accounts, but there are no accounts available. As a result, the signalForSearchWithText:
method will emit an error, which will be logged:
2014-01-05 07:52:11.705 TwitterInstant[41374:1403] An error occurred: Error Domain=TwitterInstant Code=1 "The operation couldn’t be completed. (TwitterInstant error 1.)"
The Code=1
indicates this is the RWTwitterInstantErrorNoTwitterAccounts
error. In a production application, you would want to switch on the error code and do something more meaningful than just log the result.
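For example, a more useful error: block for the pipeline might switch on the domain and code along these lines; this is only a sketch, and the log messages are placeholders rather than anything from the starter project:

error:^(NSError *error) {
  // Sketch: react to the specific error codes defined earlier in this file.
  if ([error.domain isEqualToString:RWTwitterInstantDomain]) {
    switch ((RWTwitterInstantError)error.code) {
      case RWTwitterInstantErrorAccessDenied:
        NSLog(@"Twitter access was denied - ask the user to grant it in Settings.");
        break;
      case RWTwitterInstantErrorNoTwitterAccounts:
        NSLog(@"No Twitter accounts are configured on this device.");
        break;
      case RWTwitterInstantErrorInvalidResponse:
        NSLog(@"Twitter returned an unexpected response - try again later.");
        break;
    }
  } else {
    NSLog(@"An error occurred: %@", error);
  }
}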
This illustrates an important point about error events: as soon as a signal emits an error, it falls straight through to the error-handling block. It is an exceptional flow.
Note: Have a go at exercising the other exceptional flow, when the Twitter request returns an error. Here’s a quick hint: try changing the request parameters to something invalid!
I’m sure you’re itching to wire-up the JSON output of the Twitter search to the UI, but before you do that there is one last thing you need to do. To find out what this is, you need to do a bit of exploration!
Add a breakpoint to the subscribeNext:error:
step at the location indicated below:
Re-run the application, re-enter your Twitter credentials again if needed, and type some text into the search field. When the breakpoint hits you should see something similar to the image below:
Notice that the code where the debugger hit the breakpoint is not executing on the main thread (which appears as Thread 1 in the above screenshot). Keep in mind that it’s paramount you only update the UI from the main thread; therefore, if you want to display the list of tweets in the UI, you’re going to have to switch threads.
This illustrates an important point about the ReactiveCocoa framework. The operations shown above execute on the thread where the signal originally emitted its events. Try adding breakpoints at the other pipeline steps; you might be surprised to find they execute on more than one thread!
So how do you go about updating the UI? The typical approach is to use operation queues (see the tutorial How To Use NSOperations and NSOperationQueues elsewhere on this site for more details), however ReactiveCocoa has a much simpler solution to this problem.
Update your pipeline by adding a deliverOn:
operation just after flattenMap:
as shown below:
[[[[[[self requestAccessToTwitterSignal]
  then:^RACSignal *{
    @strongify(self)
    return self.searchText.rac_textSignal;
  }]
  filter:^BOOL(NSString *text) {
    @strongify(self)
    return [self isValidSearchText:text];
  }]
  flattenMap:^RACStream *(NSString *text) {
    @strongify(self)
    return [self signalForSearchWithText:text];
  }]
  deliverOn:[RACScheduler mainThreadScheduler]]
  subscribeNext:^(id x) {
    NSLog(@"%@", x);
  } error:^(NSError *error) {
    NSLog(@"An error occurred: %@", error);
  }];
Now re-run the app and type some text so your app hits the breakpoint. You should see the log statement in your subscribeNext:error:
block is now executing on the main thread:
What? There’s just one simple operation for marshalling the flow of events onto a different thread? Just how awesome is that!?
You can safely proceed to update your UI!
NOTE: If you take a look at the RACScheduler
class you’ll see that there is quite a range of options for delivering on threads with different priorities, or adding delays into pipelines.
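For instance, purely as a sketch and not something the TwitterInstant pipeline needs, you could delay events and deliver them on a background-priority scheduler like this:

// Sketch: a background-priority scheduler for delivering events off the main thread.
RACScheduler *background =
  [RACScheduler schedulerWithPriority:RACSchedulerPriorityBackground];

[[[self.searchText.rac_textSignal
  delay:0.3]                  // shift each next event 0.3 seconds into the future
  deliverOn:background]       // deliver the delayed events on the background scheduler
  subscribeNext:^(NSString *text) {
    NSLog(@"Received %@ off the main thread", text);
  }];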
It’s time to see those tweets!
If you open RWSearchResultsViewController.h
you’ll see it already has a displayTweets:
method, which will cause the right-hand view controller to render the supplied array of tweets. The implementation is very simple; it’s just a standard UITableView
datasource. The single argument for the displayTweets:
method expects an NSArray
containing RWTweet
instances. You’ll also find the RWTweet
model object was provided as part of the starter project.
The data which arrives at the subscribeNext:error:
step is currently an NSDictionary
, which was constructed by parsing the JSON response in signalForSearchWithText:
. So how do you determine the contents of this dictionary?
If you take a look at the Twitter API documentation you can see a sample response. The NSDictionary
mirrors this structure, so you should find that it has a key named statuses
that is an NSArray
of tweets, which are also NSDictionary
instances.
If you look at RWTweet
it already has a class method tweetWithStatus:
which takes an NSDictionary
in the given format and extracts the required data. So all you need to do is write a for loop, and iterate over the array, creating an instance of RWTweet
for each tweet.
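Written imperatively, that approach would look something like the following sketch (assuming the parsed JSON dictionary is called jsonSearchResult, as it will be in the pipeline you build shortly):

// The hand-rolled alternative: build the array of RWTweet instances in a loop.
NSArray *statuses = jsonSearchResult[@"statuses"];
NSMutableArray *tweets = [NSMutableArray array];
for (NSDictionary *status in statuses) {
  [tweets addObject:[RWTweet tweetWithStatus:status]];
}
[self.resultsViewController displayTweets:tweets];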
However, you’re not going to do that! Oh no, there are much better things in store!
This article is about ReactiveCocoa and Functional Programming. The transformation of data from one format to another is more elegant when you use a functional API. You’re going to perform this task with LinqToObjectiveC.
Close the TwitterInstant workspace, and then open the Podfile that you created in the first tutorial, in TextEdit. Update the file to add the new dependency:
platform :ios, '7.0'

pod 'ReactiveCocoa', '2.1.8'
pod 'LinqToObjectiveC', '2.0.0'
Open up a terminal window in the same folder and issue the following command:
pod update
You should see output similar to the following:
Analyzing dependencies
Downloading dependencies
Installing LinqToObjectiveC (2.0.0)
Using ReactiveCocoa (2.1.8)
Generating Pods project
Integrating client project
Re-open the workspace and verify the new pod is showing as shown in the image below:
Open RWSearchFormViewController.m and add the following imports to the top of the file:
#import "RWTweet.h" #import "NSArray+LinqExtensions.h" |
The NSArray+LinqExtensions.h
header is from LinqToObjectiveC, and adds a number of methods to NSArray
that allow you to transform, sort, group and filter its data using a fluent API.
Now to put this API to use … update the current pipeline at the end of viewDidLoad
as follows:
[[[[[[self requestAccessToTwitterSignal]
  then:^RACSignal *{
    @strongify(self)
    return self.searchText.rac_textSignal;
  }]
  filter:^BOOL(NSString *text) {
    @strongify(self)
    return [self isValidSearchText:text];
  }]
  flattenMap:^RACStream *(NSString *text) {
    @strongify(self)
    return [self signalForSearchWithText:text];
  }]
  deliverOn:[RACScheduler mainThreadScheduler]]
  subscribeNext:^(NSDictionary *jsonSearchResult) {
    NSArray *statuses = jsonSearchResult[@"statuses"];
    NSArray *tweets = [statuses linq_select:^id(id tweet) {
      return [RWTweet tweetWithStatus:tweet];
    }];
    [self.resultsViewController displayTweets:tweets];
  } error:^(NSError *error) {
    NSLog(@"An error occurred: %@", error);
  }];
As you can see above, the subscribeNext:
block first obtains the NSArray of tweets. The linq_select
method transforms the array of NSDictionary
instances by executing the supplied block on each array element, resulting in an array of RWTweet
instances.
Once transformed, the tweets get sent to the results view controller.
Build and run to finally see the tweets appearing in the UI:
Note: ReactiveCocoa and LinqToObjectiveC have similar sources of inspiration. Whilst ReactiveCocoa was modelled on Microsoft’s Reactive Extensions library, LinqToObjectiveC was modelled on their Language Integrated Query APIs, or LINQ, specifically Linq to Objects.
You’ve probably noticed there is a gap to the left of each tweet. That space is there to show the Twitter user’s avatar.
The RWTweet
class already has a profileImageUrl
property that is populated with a suitable URL for fetching this image. In order for the table view to scroll smoothly, you need to ensure the code that fetches this image from the given URL is not executed on the main thread. This can be achieved using Grand Central Dispatch or NSOperationQueue. But why not use ReactiveCocoa?
Open RWSearchResultsViewController.m and add the following method to the end of the file:
- (RACSignal *)signalForLoadingImage:(NSString *)imageUrl {

  RACScheduler *scheduler = [RACScheduler schedulerWithPriority:RACSchedulerPriorityBackground];

  return [[RACSignal createSignal:^RACDisposable *(id<RACSubscriber> subscriber) {
    NSData *data = [NSData dataWithContentsOfURL:[NSURL URLWithString:imageUrl]];
    UIImage *image = [UIImage imageWithData:data];
    [subscriber sendNext:image];
    [subscriber sendCompleted];
    return nil;
  }] subscribeOn:scheduler];
}
You should be pretty familiar with this pattern by now!
The above method first obtains a background scheduler as you want this signal to execute on a thread other than the main one. Next, it creates a signal that downloads the image data and creates a UIImage
when it has a subscriber. The final piece of magic is subscribeOn:
, which ensures that the signal executes on the given scheduler.
Magic!
Now, within the same file update the tableView:cellForRowAtIndexPath:
method by adding the following just before the return
statement:
cell.twitterAvatarView.image = nil;

[[[self signalForLoadingImage:tweet.profileImageUrl]
  deliverOn:[RACScheduler mainThreadScheduler]]
  subscribeNext:^(UIImage *image) {
    cell.twitterAvatarView.image = image;
  }];
The above first resets the image since these cells are reused and could therefore contain stale data. Then it creates the required signal to fetch the image data. The deliverOn:
pipeline step, which you encountered previously, marshals the next event onto the main thread so that the subscribeNext:
block can be safely executed.
Nice and simple!
Build and run to see that the avatars now display correctly:
You might have noticed that every time you type a new character, a Twitter search executes immediately. If you’re a fast typer (or simply hold down the delete key), this can result in the application performing several searches a second. This is not ideal for a couple of reasons: first, you’re hammering the Twitter search API while simultaneously throwing away most of the results; second, you’re constantly updating the results, which is rather distracting for the user!
A better approach would be to perform a search only if the search text is unchanged for a short amount of time, say 500 milliseconds.
As you’ve probably guessed, ReactiveCocoa makes this task incredibly simple!
Open RWSearchFormViewController.m and update the pipeline at the end of viewDidLoad
by adding a throttle step just after the filter:
[[[[[[[self requestAccessToTwitterSignal]
  then:^RACSignal *{
    @strongify(self)
    return self.searchText.rac_textSignal;
  }]
  filter:^BOOL(NSString *text) {
    @strongify(self)
    return [self isValidSearchText:text];
  }]
  throttle:0.5]
  flattenMap:^RACStream *(NSString *text) {
    @strongify(self)
    return [self signalForSearchWithText:text];
  }]
  deliverOn:[RACScheduler mainThreadScheduler]]
  subscribeNext:^(NSDictionary *jsonSearchResult) {
    NSArray *statuses = jsonSearchResult[@"statuses"];
    NSArray *tweets = [statuses linq_select:^id(id tweet) {
      return [RWTweet tweetWithStatus:tweet];
    }];
    [self.resultsViewController displayTweets:tweets];
  } error:^(NSError *error) {
    NSLog(@"An error occurred: %@", error);
  }];
The throttle
operation will only send a next event if another next event isn’t received within the given time period. It’s really that simple!
Build and run to confirm that the search results only update if you stop typing for 500 milliseconds. Feels much better, doesn’t it? Your users will think so too.
And…with that final step your Twitter Instant application is complete. Give yourself a pat on the back and do a happy dance.
If you got lost somewhere in the tutorial you can download the final project (Don’t forget to run pod install
from the project’s directory before opening), or you can obtain the code from GitHub where there is a commit for each Build & Run step in this tutorial.
Before heading off and treating yourself to a victory cup of coffee, it’s worth admiring the final application pipeline:
That’s quite a complicated data flow, all expressed concisely as a single reactive pipeline. It’s a beautiful sight to see! Can you imagine how much more complex this application would be using non-reactive techniques? And how much harder it would be to see the data flows in such an application? Sounds very cumbersome, and now you don’t have to go down that road ever again!
Now you know that ReactiveCocoa is really quite awesome!
One final point: ReactiveCocoa makes it possible to use the Model View ViewModel (MVVM) design pattern, which provides better separation of application logic and view logic. If anyone is interested in a follow-up article on MVVM with ReactiveCocoa, please let me know in the comments. I’d love to hear your thoughts and experiences!
ReactiveCocoa Tutorial – The Definitive Introduction: Part 2/2 is a post from: Ray Wenderlich
The post ReactiveCocoa Tutorial – The Definitive Introduction: Part 2/2 appeared first on Ray Wenderlich.
Learn how to split your data in your table views up into multiple sections.
Video Tutorial: Table Views Multiple Sections is a post from: Ray Wenderlich
The post Video Tutorial: Table Views Multiple Sections appeared first on Ray Wenderlich.
Learn how to delete rows from your table views, by sliding to delete or by switching the table view to editing mode.
Video Tutorial: Table Views Deleting Rows is a post from: Ray Wenderlich
The post Video Tutorial: Table Views Deleting Rows appeared first on Ray Wenderlich.
In the first part of this series, I reviewed several popular app mockup tools, including Photoshop, AppCooker and Blueprint.
These are all great tools, but there are plenty more out there!
So in this second part of the series, we’re going to take a look at three more mockup tools: Briefs, OmniGraffle and Balsamiq Mockups.
The special thing about these particular tools is that they can be used to make all kinds of mockups – even for desktop apps or websites.
I’ll give a review of each app, then at the end of the article I’ll share my personal recommendation on which is the best app mockup tool for you.
And best of all – we have some giveaways at the end! Keep reading to see how you can enter to win.
Briefs is a $199, first-class Mac application that makes mocking up for multiple platforms really easy… and yes, even fun!
Like the other apps highlighted in part 1, you can create simple mockups and deliver iterations to your design. Even sharing your projects with clients can be painless with the free companion iOS app Briefscase.
Briefs is a desktop application, which presents several advantages. For example, you can enjoy the comfort of working with a bigger screen alongside your other desktop utilities. But there’s a drawback as well: you can’t make changes on the fly as you can with Blueprint or AppCooker.
MartianCraft always keeps Briefs up to date with new OS and hardware releases. The app offers several templates both for mobile and desktop platforms.
Briefs’ primary structure is a timeline with multiple scenes, each of which represents a single visual state of the app. In turn, scenes contain many actors to deal with interactions.
I’ve found Briefs’ menus to be intuitive and streamlined, without sacrificing functionality. In the upper right-hand corner, you’ll find the main menu:
It contains:
Inside the Briefs library there are four sub-libraries: Android, Blueprint, Desktop and iOS, so this is great for cross-platform projects. You’re not forced to stick to a single style; it’s possible to mix elements in a single mockup.
With the inspector, it’s easy to tweak the settings on your actors:
You can use a handy toolbar that allows you to flip actors, change their z-order, lock them in place and even group/ungroup them.
If you can’t find a specific actor you need, there is no need to panic or declare failure; Briefs has the solution:
Briefs auto-saves your project, so when it’s re-opened you’ll find the last available project. You can also save your project as a brief file and load it on another computer.
Briefs’ main menu allows you to move around and interact with your mockup in an effective way. You can switch between Timeline, Single Scene and Blueprint.
Timeline, where all your scenes and their connections are highlighted.
Single scene, where you can work in a “single view” you select from the scene list or timeline.
Blueprint, a great tool that will schematize all the app components (per scene) and highlight their properties: sizes, colours, fonts, spacing … both for non-retina and retina display.
It’s a phenomenal method to communicate your design to clients and other developers in a clear way. In addition, versioning features allow you to track documents with ease.
How about this for useful? You can automatically generate a PDF containing all of this information, which can be a super-handy way to communicate pixel-precise details between developers and designers. Just do this: File > Export as PDF.
Example Blueprint PDF file download
You can also add interactivity to your mockups, which you can then preview in the companion app Briefscase. This is a great way to show off your mockups to clients or friends. You can do this in two ways:
Select Actor > Add action
Select Actor > New Hotspot Actor
Then just decide what kind of action to trigger and set up its properties (delay, duration, type of transition …).
It took me 25 minutes to get used to Briefs’ dynamics and finish the mockup of my sample app without watching any tutorials. Initially, as with Groosoft’s Blueprint, I struggled a bit with linking actions and scenes, but I figured it out after a while.
This tool is outstanding and makes sharing projects really easy. Either use classic sharing mediums, like email, IM or AirDrop, or download the free iOS app Briefscase for real-time interaction and the ability to get immediate feedback on your design!
If you want to learn more about Briefs and Briefscase, check out this cool demo video I made showing off the final results from the sample app:
OmniGraffle is so much more than a wireframing tool. You can use it to draft, chart, brainstorm, and much more! It’s one of the four productivity apps from Omni Group.
OmniGraffle comes in two flavors:
Since OmniGraffle is quite vast, for the sake of simplicity I will cover only those functionalities that will help me to build my simple tracking app mockup.
Every time you open the app for a new project, the Resource Browser prompts you to choose a suitable template. Note that you can create your own templates, in order to start with documents configured to meet your needs. For this article, we’ll go with the iOS template.
At first glance, I thought Omni reached the right balance between function, overall aesthetics and usability. However, I personally prefer Briefs’ darker UI colour, because I think it lets you focus more on your mockup contents.
Menus are intuitive, and the main interface can be divided into four principal parts:
The inspector has a Stencils Library
, which contains a plethora of elements for your wireframes.
If you feel inclined to add other elements, simply drag and drop custom images from your desktop. Another option is to go to Graffletopia and subscribe to their service. It costs only $24 a year, and you’ll be able to use hundreds of stencils on both your Mac and iPad. If you’re more of a minimalist, then just buy single stencil kits.
In spite of the variety of stencils, I’ve found the AppCooker and Blueprint libraries more comprehensive for iOS-only projects. However, with OmniGraffle you can reach a higher level of element customization, and integrating elements from outside sources is as easy as dragging and dropping.
All in all, I found working with OmniGraffle rather enjoyable, because the software acted as I expected and it helped me with repetitive tasks. Here is an example that highlights its usefulness, which happened while I was positioning arrows for the table view below:
It might seem to be a simple thing, but try to imagine this kind of behavior in a bigger project, or in every day design tasks — it will save you a lot of time and repetitive click-click-clicking.
Going forward, the sidebar will help you interact quickly with your canvasses:
You can add new canvasses and layers easily, while keeping your whole project organized.
Much like Photoshop, elements layer from back to front; you can hide, lock and edit them granularly. If you need to print the project, you can define if a specific layer should print. One stellar feature is shared layers (represented in orange), so when you add or modify something in that layer, it’s shared automatically across canvasses.
I must admit that I would love to see more iOS components built into OmniGraffle. For instance, it was difficult to find the iOS 7 segmented control, so I had to add it externally. However, OmniGraffle compensates for this deficiency with the ability to customize stencil libraries extensively.
When the canvasses are ready, you can start to link them via actions.
When you click on an element, you’ll find the Actions sub-menu under the advanced properties inside the inspector. That is where you choose which action to perform when an element is clicked, and it allows you to add basic interactivity to your wireframe. You can choose from different actions:
With the help of the option Jumps Elsewhere, it’s possible to create a prototype from your wireframe. For mobile prototypes, it can also be useful to use Show or Hide Layers, for instance, if you need to show an alert view when the user performs a specific action.
Last but not least, OmniGraffle projects are very portable and exportable to a variety of different formats.
As an OmniGraffle newbie, it took me 25 minutes to finish the wireframe. In my opinion, OmniGraffle is one of the most powerful and adaptable pieces of software in the reviewed set, since it can do far more than just app mockups. That said, if your interest is primarily wireframing for mobile apps, you might find the other reviewed tools more useful because they have focused, out-of-the-box features that can take care of your projects.
All the other tools have a companion app or methods that help you share projects with your team and clients. OmniGraffle’s more time-consuming interface is forgivable, since OmniGraffle is a broader, more versatile program.
If you want to learn more about Omnigraffle, check out this cool demo video I made showing off the final results from the sample app:
Balsamiq Mockups is a wireframing tool that allows quick design creation, using simple elements that don’t distract the evaluator or client and keep him or her focused on the content rather than on graphic polish!
Balsamiq offers a desktop app, with a fully functional 7-day trial. At the time of writing this, the price for a lifetime license was $79. Different from the other two tools, Balsamiq Mockups is cross-platform so you can work on Mac, Windows or Linux. Here’s where you can find the download page.
It also offers a web application, myBalsamiq. You can begin with a 30-day trial and, if you want to keep it, it’s $12 per month. The option to share all your work automatically across platforms, including PC (PCs need love too), makes the web app rather tempting.
Now it’s time to dive into the nitty-gritty with Balsamiq Mockups! :]
At first glance, it’s clear you can divide the UI into three main areas: the toolbar, the elements bar and the canvas.
I think the UI component bar is really well organized. However, there was a surprise when I opened the iPhone category; it contained only a few components:
So I Googled and found some community-generated sets of UI components. The crowd saves the day again. Phewww..! :D
For my simple tracking app I will use: iPad Controls, iPad Controls by Raad, iPhone Glossy Bars and iPhone Customizable.
If you still need a custom component, you can drag and drop images inside Balsamiq Mockups. When you add the first custom component, it will create an asset folder on your desktop.
The Project Assets
category will reference that folder.
If you need to customize a component, there is a terrific set of icons ready for you.
When you highlight a specific component, a popup appears to allow you to change its properties.
Different components have different properties inspectors, and you can even change group properties.
I worked with the help of one canvas, and my goal was to quickly copy and paste so that all my mockups would be in the same place. Ten short minutes went by and the next thing I knew, I was done!
Although Balsamiq Mockups is very simple, it gives you a great level of customization. Plus, it’s user-friendly and quick.
Creating an interactive prototype is simple; you only have to follow a few steps:
When you finish the set up, your views should look like the one below, with little red arrows representing links.
It’s possible to export your project in different formats: PNG, PDF and XML. To allow interactivity, I’ve exported it as a PDF file for your reference.
It took me ten minutes to finish the wireframe. Balsamiq Mockups is the easiest-to-use wireframing tool I’ve ever used. It’s intuitive and allows a substantial amount of customization. It’s cross-platform, and this gives it a definite advantage over the other tools reviewed in this article. Furthermore, you may use the browser app to share your work between office and home! :]
This series has covered a ton of app mockup tools, and at this point it might be hard to keep them all straight! So I’ve put together this handy table that sums up the features offered by the various prototyping tools:
So what’s the verdict?
Overall, I think all of these tools are pretty sweet. Each has a unique set of attributes, abilities and characteristics that may appeal to different developers with specific needs.
But I know you might wanna see what I think is the best after reviewing these tools, so I picked a winner in a few different categories.
If you’re looking for the best app solely focused on making app mockups and time and money is a concern, I’d recommend Briefs.
In my opinion, Briefs is a great app because it offers the right amount of features to create mobile and desktop app wireframes. It’s also intuitive and has the most aesthetically pleasing UI; personally, I think the darker color makes it easier to focus on your work. Lastly, Briefscase completes the Briefs package as a remarkably useful and free prototype tester.
If you’re looking for a general purpose app for mockups, brainstorming, charting, and more, I’d recommend OmniGraffle. There’s a lot you can do with OmniGraffle, and app mockups is just one part!
One small drawback is that OmniGraffle is only for Mac. As far as cost goes, the basic version is yours after a one-time purchase of $99.99. You can upgrade to Pro later on, which makes it a less expensive alternative to Photoshop.
If money and the time to make the mockups are not a concern, I advise you to choose Photoshop, which was featured in part 1 of this series, because it is much more than a wireframing tool and almost an indispensable part of an app developer’s toolbox.
Photoshop is not cheap – Photoshop Creative Cloud costs $19.99 per month, and that adds up quickly. It’s also quite time-consuming – it has a steep learning curve, and making mockups in Photoshop is probably slower than in any of the other options (although third-party tools like DevRocket make this more appealing).
However, the benefit is that your mockups will look pixel-perfect, and there’s literally nothing you can’t do. Moreover, Photoshop is widely adopted, so collaborating and sharing files is usually simple.
I really loved AppCooker! It makes it possible to work up detailed mockups quickly and is even fun to use. AppTaster completes the package because it provides a great add-on to this outstanding app. However, I want to note that I did not get to try OmniGraffle for iPad.
Although Blueprint is also a great tool, I had a better overall experience with AppCooker. The fact that it takes only a few minutes to create an app icon, a business plan, and take notes about ideas is why you might find a lot of value in AppCooker.
AppCooker also wins the best looking mockups category. In my opinion, their mockups were the most orderly, customizable and cleanest of all the tools that I tested.
I found Balsamiq Mockups to be the easiest and quickest tool to use. It’s simple, clean and straight to the point. The fact that it is cross-platform and offers a web application makes the tool even more appealing. It took me only ten minutes; compare that to an average of about 25 minutes with the other tools.
I found Briefscase to be the most intuitive companion app. Although you have the option to pass files through Dropbox, the synchronization happened automatically thanks to the software. In addition, it’s easy to use and features a gorgeous and intuitive UI.
And to end this article, we have some good news – a few lucky readers will win a free mockup tool! The developers of the following apps have been kind enough to donate a few promo codes for some lucky readers:
To enter, all you have to do is comment on this post – in 24 hours, we will randomly select the winners.
Also, if you have any questions or comments on your favorite mockup tools, feel free to join the discussion in the forums below. Happy mocking!
App Mockup Tools Reviews Part 2: Briefs, OmniGraffle, and Balsamiq is a post from: Ray Wenderlich
The post App Mockup Tools Reviews Part 2: Briefs, OmniGraffle, and Balsamiq appeared first on Ray Wenderlich.
Learn how to insert new rows into your table view while in editing mode.
Video Tutorial: Table Views Inserting Rows is a post from: Ray Wenderlich
The post Video Tutorial: Table Views Inserting Rows appeared first on Ray Wenderlich.
Update 2/28/14: Updated for iOS 7 and Sprite Kit.
People love to play games, whether they are casual games that are played on the bus ride home or complex ones that people spend hours on. Playing games is inherently a social activity. Players love to share their highscores, achievements and talk about the boss they defeated in the last level (I know for a fact that I do).
In this 2-part Game Center tutorial series, you’ll make a simple 2-player networked game with Sprite Kit and Game Center matchmaking.
The game you’ll be working with is very simple. It’s a racing game with a dog vs. a kid – tap as fast as you can to win!
This Game Center tutorial assumes you are already familiar with the basics of using Sprite Kit (the awesome new game engine added in iOS 7). If you are new to Sprite Kit, you might want to check out some of the other Sprite Kit tutorials on this site first.
Ready? On your mark, get set, GO!
This Game Center tutorial shows you how to add matchmaking and multiplayer capabilities into a simple game.
Since making the game logic itself isn’t the point of this tutorial, I’ve prepared some starter code for you that has the game without any network code.
Download the code and run the project, and you should see a screen like this:
The goal is to tap the screen as fast as you can until you reach the cat. Try it for yourself!
The game is very simple and well commented – go ahead and browse through the code and make sure you understand everything. If you get stuck on anything, check out some of our other Sprite Kit tutorials for more background information.
At this point, you have a simple playable game, except it’s pretty boring since you’re playing all by yourself!
It would be a lot more fun to use Game Center, so you can invite friends to play with you, or use matchmaking to play with random people online.
But before you can start writing any Game Center code, you need to do two things:

1. Create an App ID for the game and enable Game Center in Xcode.
2. Register the app in iTunes Connect and enable Game Center there.
Let’s go through each of these in turn.
Open the project in Xcode, if you haven't already, and switch to the CatRaceStarter target settings. In the General tab, change the Bundle Identifier to something unique (probably based on your name or company name), like this:
It’s also a good idea to restart Xcode at this point, as it often gets confused when you switch Bundle Identifiers like this.
Then select the Capabilities tab at the top.
Next, turn on the switch next to the section titled Game Center. This is a new feature introduced in Xcode 5 that makes it extremely easy to enable Game Center for your apps.
And that's it! With just the click of a button, Xcode has automatically created an App ID and provisioning profile and enabled Game Center for your app – hurray! Next, you need to register your app with iTunes Connect and enable Game Center there.
The next step is to log on to iTunes Connect and create a new entry for your app.
Once you’re logged onto iTunes Connect, select Manage Your Applications, and then click the blue Add New App button in the upper right.
On the first screen, enter a name for your app. I’ve already taken CatRace, so you’ll need to pick something else, like CatRace [Your initials]. Then enter 201 for SKU Number (or anything else you’d like), and select the bundle ID you created earlier, similar to the screenshot below:
Click Continue, and follow the prompts to set up some basic information about your app.
Don't worry about the exact values; they don't really matter and you can change any of this later – you just need to enter something. To make this process simple, I've created a zip file that contains a dummy app icon and screenshots you can upload to iTunes Connect.
When you’re done, click Save, and if all works well you should be in the “Prepare for Upload” stage and will see a screen like this:
Click the blue Manage Game Center button to the upper right, and select the Enable for Single Game button, and click Done. That’s it – Game Center is enabled for your app, and you’re ready to write some code!
By the way – inside the “Manage Game Center” section, you might have noticed some options to set up Leaderboards or Achievements. I won’t be covering Leaderboards and Achievements in this tutorial, but if you are interested I’ve covered this in detail in our book iOS Games by Tutorials.
When your game starts up, the first thing you need to do is authenticate the local player. Without authentication, you cannot use any of the awesome features Game Center provides.
You can think of this as “logging the player into Game Center.” If he’s already logged in, it will say “Welcome back!” Otherwise, it will ask for the player’s username and password.
So, our strategy to authenticate the player will be as follows:

- Create a GameKitHelper singleton class that wraps all of the Game Center code.
- In that class, set the local player's authenticateHandler so GameKit can call back into the game.
- If GameKit hands back a view controller, store it and post a notification so whichever view controller is currently onscreen can present it.
- Otherwise, record whether or not Game Center is enabled, and log any error that occurred.
Now that you’re armed with this plan, let’s try it out!
In the Cat Race Xcode project, right-click on the CatRaceStarter group and select New Group.
Name the group GameKit. Next, right-click on the newly-created group and select New File…, choose the Objective-C class template and click Next. Name the class GameKitHelper, make it a subclass of NSObject, and click Next again. Be sure the CatRaceStarter target is checked and click Create.
Replace GameKitHelper.h with the following:
@import GameKit;

@interface GameKitHelper : NSObject

@property (nonatomic, readonly) UIViewController *authenticationViewController;
@property (nonatomic, readonly) NSError *lastError;

+ (instancetype)sharedGameKitHelper;

@end
This imports the GameKit module and defines two properties – one is a view controller, and the other keeps track of the last error that occurred while interacting with the Game Center APIs. You will learn more about these properties in the sections to come.
Next switch to GameKitHelper.m and add the following right inside the @implementation
section:
+ (instancetype)sharedGameKitHelper
{
    static GameKitHelper *sharedGameKitHelper;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        sharedGameKitHelper = [[GameKitHelper alloc] init];
    });
    return sharedGameKitHelper;
}
The above method is straightforward. All it does is create and return a singleton object, using dispatch_once to guarantee the helper is only ever created once.
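If it helps to see it in use, here's a tiny sketch (not part of the tutorial's code) showing that repeated calls hand back the same instance:

#import "GameKitHelper.h"

// Both variables point at the same object, because dispatch_once only
// ever creates one GameKitHelper.
GameKitHelper *first  = [GameKitHelper sharedGameKitHelper];
GameKitHelper *second = [GameKitHelper sharedGameKitHelper];
NSAssert(first == second, @"sharedGameKitHelper should always return the same instance");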
Next, while still in GameKitHelper.m, add a private instance variable to track whether Game Center is enabled, as shown below:
@implementation GameKitHelper {
    BOOL _enableGameCenter;
}
Also add the following initializer to the implementation section. It simply sets the variable above to YES, so by default you assume Game Center is enabled.
- (id)init
{
    self = [super init];
    if (self) {
        _enableGameCenter = YES;
    }
    return self;
}
Now it’s time to add the method that will authenticate the local player. Add the following to the implementation section.
- (void)authenticateLocalPlayer
{
    //1
    GKLocalPlayer *localPlayer = [GKLocalPlayer localPlayer];

    //2
    localPlayer.authenticateHandler = ^(UIViewController *viewController, NSError *error) {
        //3
        [self setLastError:error];

        if (viewController != nil) {
            //4
            [self setAuthenticationViewController:viewController];
        } else if ([GKLocalPlayer localPlayer].isAuthenticated) {
            //5
            _enableGameCenter = YES;
        } else {
            //6
            _enableGameCenter = NO;
        }
    };
}

- (void)setAuthenticationViewController:(UIViewController *)authenticationViewController
{
}

- (void)setLastError:(NSError *)error
{
}
It seems like a lot of code, so let's go through it step by step to understand how the player is authenticated:

1. Creates an instance of the GKLocalPlayer class. This instance represents the player who is currently authenticated through Game Center on this device. Only one player may be authenticated at a time.
2. Sets the authenticateHandler of the GKLocalPlayer object. GameKit may call this handler multiple times.
3. Stores any error that occurred via the setLastError: method.
4. If GameKit passes a view controller through the authenticateHandler, it is your duty as the game's developer to present it to the user when you think it's feasible. Ideally, you should do this as soon as possible. You will store this view controller in an instance variable using setAuthenticationViewController:. This is an empty method for now, but you'll implement it in a moment.
5. If no view controller is returned and the isAuthenticated property of the GKLocalPlayer object is true, you enable all Game Center features by setting the _enableGameCenter boolean variable to YES.
6. Otherwise, the player is not authenticated, so you disable Game Center by setting _enableGameCenter to NO.

Since the authentication process happens in the background, the game might call the player's authenticateHandler while the user is navigating through the screens of the game, or even while the player is racing.
In a situation like this, you’re going to follow this strategy: whenever the game needs to present the GameKit authentication view controller, you will raise a notification, and whichever view controller is presently onscreen will be responsible for displaying it.
First you need to define the notification name. Add this line at the top of GameKitHelper.m:
NSString *const PresentAuthenticationViewController = @"present_authentication_view_controller";
Now add the following code inside the setAuthenticationViewController: method:
if (authenticationViewController != nil) {
    _authenticationViewController = authenticationViewController;
    [[NSNotificationCenter defaultCenter]
        postNotificationName:PresentAuthenticationViewController
                      object:self];
}
This simply stores the view controller and sends the notification.
The last method you need to fill out is setLastError:, which keeps track of the last error that occurred while communicating with the GameKit service. Replace the empty setLastError: method with the following:
- (void)setLastError:(NSError *)error
{
    _lastError = [error copy];
    if (_lastError) {
        NSLog(@"GameKitHelper ERROR: %@",
              [[_lastError userInfo] description]);
    }
}
This simply logs the error to the console and stores the error for safekeeping.
Next, open GameKitHelper.h and add the following extern
statement above the interface section:
extern NSString *const PresentAuthenticationViewController;
Next, add the following method declaration to the same file.
- (void)authenticateLocalPlayer;
You now have all the code in place to authenticate the local player. All you need to do is call the above method at the appropriate place.
Let’s pause for a moment and think about the architecture of the game. The game currently has two view controllers, one that runs the actual game and the other that shows the end result (i.e. whether the player has won or lost). To navigate between these two screens, the app uses a navigation controller.
To get a better idea about the structure, take a look at Main.storyboard:
Since navigation in the app is controlled by the navigation controller, you're going to add the call to authenticate the local player there. Open GameNavigationController.m and replace its contents with the following:
#import "GameNavigationController.h" #import "GameKitHelper.h" @implementation GameNavigationController - (void)viewDidAppear:(BOOL)animated { [super viewDidAppear:animated]; [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(showAuthenticationViewController) name:PresentAuthenticationViewController object:nil]; [[GameKitHelper sharedGameKitHelper] authenticateLocalPlayer]; } @end |
All you're doing here is registering for the PresentAuthenticationViewController notification and calling the authenticateLocalPlayer method of GameKitHelper.
When the notification is received, you need to present the authentication view controller returned by GameKit. To do this, add the following methods to the same file.
- (void)showAuthenticationViewController
{
    GameKitHelper *gameKitHelper = [GameKitHelper sharedGameKitHelper];

    [self.topViewController presentViewController:
        gameKitHelper.authenticationViewController
                                         animated:YES
                                       completion:nil];
}

- (void)dealloc
{
    [[NSNotificationCenter defaultCenter] removeObserver:self];
}
The showAuthenticationViewController method is invoked when the PresentAuthenticationViewController notification is received. It presents the authentication view controller to the user over the top view controller in the navigation stack.
Time to build and run! If you haven't logged into Game Center before, the game will present the following view:
Enter your Game Center credentials and press Go. The next time you launch the game, Game Center will present a banner similar to the one shown below:
There are two ways to find someone to play with via Game Center: search for a match programmatically, or use the built-in matchmaking user interface.
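For reference, here's a minimal sketch of what the programmatic route looks like, using GKMatchmaker's findMatchForRequest:withCompletionHandler: – you won't use this in the tutorial, and the min/max values are just placeholders:

// Programmatic matchmaking sketch – no built-in UI involved.
GKMatchRequest *request = [[GKMatchRequest alloc] init];
request.minPlayers = 2;
request.maxPlayers = 2;

[[GKMatchmaker sharedMatchmaker] findMatchForRequest:request
                               withCompletionHandler:^(GKMatch *match, NSError *error) {
    if (error) {
        NSLog(@"Programmatic matchmaking failed: %@", error.localizedDescription);
    } else if (match) {
        // Store the match and set its delegate here, just like the
        // GKMatchmakerViewController path you'll build below.
        NSLog(@"Programmatic match found!");
    }
}];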
In this tutorial, you're going to use the built-in matchmaking user interface. The idea is that when you want to find a match, you set up some parameters in a GKMatchRequest object, then create and display an instance of GKMatchmakerViewController.
Let’s see how this works. First make a few changes to GameKitHelper.h:
// Add to top of file right after the @import
@protocol GameKitHelperDelegate
- (void)matchStarted;
- (void)matchEnded;
- (void)match:(GKMatch *)match didReceiveData:(NSData *)data
   fromPlayer:(NSString *)playerID;
@end

// Modify @interface line to support protocols as follows
@interface GameKitHelper : NSObject <GKMatchmakerViewControllerDelegate, GKMatchDelegate>

// Add after @interface
@property (nonatomic, strong) GKMatch *match;
@property (nonatomic, assign) id<GameKitHelperDelegate> delegate;

- (void)findMatchWithMinPlayers:(int)minPlayers
                     maxPlayers:(int)maxPlayers
                 viewController:(UIViewController *)viewController
                       delegate:(id<GameKitHelperDelegate>)delegate;
There's a bunch of new stuff here, so let's go over it bit by bit.

- It declares a protocol named GameKitHelperDelegate that you'll use to notify other objects when important events happen, such as the match starting, ending, or receiving data from the other party. For now, the GameViewController will be implementing this protocol.
- The GameKitHelper object is marked as implementing two protocols. The first is so that the matchmaker user interface can notify this object when a match is found or not. The second is so that Game Center can notify this object when data is received or the connection status changes.
- It adds a property for the current match, a delegate property, and the findMatchWithMinPlayers:maxPlayers:viewController:delegate: method that the GameViewController will call to look for someone to play with.

Next, switch to GameKitHelper.m and make the following changes:
// Add a new private variable to the implementation section
BOOL _matchStarted;

// Add new method, right after authenticateLocalPlayer
- (void)findMatchWithMinPlayers:(int)minPlayers
                     maxPlayers:(int)maxPlayers
                 viewController:(UIViewController *)viewController
                       delegate:(id<GameKitHelperDelegate>)delegate
{
    if (!_enableGameCenter) return;

    _matchStarted = NO;
    self.match = nil;
    _delegate = delegate;
    [viewController dismissViewControllerAnimated:NO completion:nil];

    GKMatchRequest *request = [[GKMatchRequest alloc] init];
    request.minPlayers = minPlayers;
    request.maxPlayers = maxPlayers;

    GKMatchmakerViewController *mmvc =
        [[GKMatchmakerViewController alloc] initWithMatchRequest:request];
    mmvc.matchmakerDelegate = self;

    [viewController presentViewController:mmvc animated:YES completion:nil];
}
This is the method that the view controller will call to find a match. It does nothing if Game Center is not available.
It initializes the match as not started yet, and the match object as nil. It stores away the delegate for later use, and dismisses any previously existing view controllers (in case a GKMatchmakerViewController
is already showing).
Then it moves into the important stuff. The GKMatchRequest
object allows you to configure the type of match you're looking for, such as the minimum and maximum number of players. This method sets those to whatever is passed in (which for this game will be min 2, max 2 players).
Next it creates a new instance of the GKMatchmakerViewController
with the given request, sets its delegate to the GameKitHelper
object, and uses the passed-in view controller to show it on the screen.
The GKMatchmakerViewController
takes over from here, and allows the user to search for a random player and start a game. Once it's done, some callback methods will be called, so let's add those next:
// The user has cancelled matchmaking
- (void)matchmakerViewControllerWasCancelled:(GKMatchmakerViewController *)viewController
{
    [viewController dismissViewControllerAnimated:YES completion:nil];
}

// Matchmaking has failed with an error
- (void)matchmakerViewController:(GKMatchmakerViewController *)viewController
                didFailWithError:(NSError *)error
{
    [viewController dismissViewControllerAnimated:YES completion:nil];
    NSLog(@"Error finding match: %@", error.localizedDescription);
}

// A peer-to-peer match has been found, the game should start
- (void)matchmakerViewController:(GKMatchmakerViewController *)viewController
                    didFindMatch:(GKMatch *)match
{
    [viewController dismissViewControllerAnimated:YES completion:nil];
    self.match = match;
    match.delegate = self;
    if (!_matchStarted && match.expectedPlayerCount == 0) {
        NSLog(@"Ready to start match!");
    }
}
If the user cancelled finding a match or there was an error, it just closes the matchmaker view.
However if a match was found, it squirrels away the match object and sets the delegate of the match to be the GameKitHelper
object so it can be notified of incoming data and connection status changes.
It also runs a quick check to see if it's time to actually start the match. The match object keeps track of how many players still need to finish connecting in its expectedPlayerCount property. If this is 0, everybody's ready to go. Right now you're just going to log that out – later on you'll actually do something interesting here.
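As a rough preview of where that's headed (an assumption on my part, not code you should add now), the eventual version of that check might simply flag the match as started and hand off to the delegate:

// Hypothetical future version of the "everyone is connected" check:
if (!_matchStarted && match.expectedPlayerCount == 0) {
    _matchStarted = YES;        // the race is underway
    [_delegate matchStarted];   // let the GameViewController take it from here
}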
Next, add the implementation of the GKMatchDelegate
callbacks:
#pragma mark GKMatchDelegate

// The match received data sent from the player.
- (void)match:(GKMatch *)match didReceiveData:(NSData *)data
   fromPlayer:(NSString *)playerID
{
    if (_match != match) return;

    [_delegate match:match didReceiveData:data fromPlayer:playerID];
}

// The player state changed (eg. connected or disconnected)
- (void)match:(GKMatch *)match player:(NSString *)playerID
    didChangeState:(GKPlayerConnectionState)state
{
    if (_match != match) return;

    switch (state) {
        case GKPlayerStateConnected:
            // Handle a new player connection.
            NSLog(@"Player connected!");

            if (!_matchStarted && match.expectedPlayerCount == 0) {
                NSLog(@"Ready to start match!");
            }
            break;
        case GKPlayerStateDisconnected:
            // A player just disconnected.
            NSLog(@"Player disconnected!");
            _matchStarted = NO;
            [_delegate matchEnded];
            break;
    }
}

// The match was unable to connect with the player due to an error.
- (void)match:(GKMatch *)match connectionWithPlayerFailed:(NSString *)playerID
    withError:(NSError *)error
{
    if (_match != match) return;

    NSLog(@"Failed to connect to player with error: %@", error.localizedDescription);
    _matchStarted = NO;
    [_delegate matchEnded];
}

// The match was unable to be established with any players due to an error.
- (void)match:(GKMatch *)match didFailWithError:(NSError *)error
{
    if (_match != match) return;

    NSLog(@"Match failed with error: %@", error.localizedDescription);
    _matchStarted = NO;
    [_delegate matchEnded];
}
The match:didReceiveData:fromPlayer: method is called when another player sends data to you. It simply forwards the data on to the delegate, which handles the game-specific logic.
As for match:player:didChangeState:, when a player connects you need to check whether all of the players have connected, so you can start the match once they're all in. Otherwise, if a player disconnects, it marks the match as ended and notifies the delegate.
The final two methods are called when there’s an error with the connection. In either case, it marks the match as ended and notifies the delegate.
OK, now that you have this code to establish a match, let's use it in GameViewController. For the matchmaker view controller to show up, the local player must be authenticated, and since authenticating the local player is an asynchronous process, GameViewController needs to be notified in some way when the user is authenticated. To do this, you're going to use good old notifications. Still in GameKitHelper.m, make the following changes:
// Add to the top of the file
NSString *const LocalPlayerIsAuthenticated = @"local_player_authenticated";

// Add this between the 1st and 2nd step of authenticateLocalPlayer
if (localPlayer.isAuthenticated) {
    [[NSNotificationCenter defaultCenter]
        postNotificationName:LocalPlayerIsAuthenticated object:nil];
    return;
}

// Modify the 5th step of authenticateLocalPlayer
else if ([GKLocalPlayer localPlayer].isAuthenticated) {
    //5
    _enableGameCenter = YES;
    [[NSNotificationCenter defaultCenter]
        postNotificationName:LocalPlayerIsAuthenticated object:nil];
}
Next, switch to GameKitHelper.h and add the following to the top of the file:
extern NSString *const LocalPlayerIsAuthenticated;
With that in place, switch to GameViewController.m and make the following changes:
// Add to top of file
#import "GameKitHelper.h"

// Mark the GameViewController to implement GameKitHelperDelegate
@interface GameViewController () <GameKitHelperDelegate>
@end

// Add to the implementation section
- (void)viewDidAppear:(BOOL)animated
{
    [super viewDidAppear:animated];

    [[NSNotificationCenter defaultCenter]
        addObserver:self
           selector:@selector(playerAuthenticated)
               name:LocalPlayerIsAuthenticated
             object:nil];
}

- (void)playerAuthenticated
{
    [[GameKitHelper sharedGameKitHelper]
        findMatchWithMinPlayers:2
                     maxPlayers:2
                 viewController:self
                       delegate:self];
}

- (void)dealloc
{
    [[NSNotificationCenter defaultCenter] removeObserver:self];
}

// Add new methods to bottom of file
#pragma mark GameKitHelperDelegate

- (void)matchStarted
{
    NSLog(@"Match started");
}

- (void)matchEnded
{
    NSLog(@"Match ended");
}

- (void)match:(GKMatch *)match didReceiveData:(NSData *)data
   fromPlayer:(NSString *)playerID
{
    NSLog(@"Received data");
}
The most important part here is the playerAuthenticated method. It calls the new method you just wrote on GameKitHelper to find a match by presenting the matchmaker view controller.

The rest is just some stub methods for when a match begins or ends, which you'll implement later.
That’s it! Compile and run your app, and you should see the matchmaker view controller start up:
Now run your app on a different device so you have two running at the same time (e.g. your simulator and your iPhone).
Important: Make sure you are using a different Game Center account on each device, or it won’t work!
Click Play Now on both devices, and after a little bit of time, the matchmaker view controller should go away, and you should see something like this in your console log:
CatRace[16440:207] Ready to start match!
Congrats – you've now made a match between two devices! You're on your way to making a networked game!
Here is a sample project with all of the code you’ve developed so far in this Game Center tutorial.
In the second part of the tutorial series, we’ll cover how to send data back and forth between each device in the game, and wrap up the game into an exciting cat vs. kid race!
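As a hedged preview, sending data in part two will likely boil down to GKMatch's sendDataToAllPlayers:withDataMode:error:; the message struct below is purely illustrative and not part of the starter project:

// Illustrative only – the real message format arrives in part 2.
typedef struct {
    uint32_t messageType;   // hypothetical: what kind of message this is
    uint32_t tapCount;      // hypothetical: how many times the player has tapped
} ExampleMessage;

ExampleMessage message = { .messageType = 1, .tapCount = 42 };
NSData *data = [NSData dataWithBytes:&message length:sizeof(ExampleMessage)];

NSError *error = nil;
GKMatch *match = [GameKitHelper sharedGameKitHelper].match;
if (![match sendDataToAllPlayers:data
                    withDataMode:GKMatchSendDataReliable
                           error:&error]) {
    NSLog(@"Error sending data: %@", error.localizedDescription);
}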
In the meantime, if you have any questions or comments please feel free to add comments to the section below.
Game Center Tutorial: How To Make A Simple Multiplayer Game with Sprite Kit: Part 1/2 is a post from: Ray Wenderlich
The post Game Center Tutorial: How To Make A Simple Multiplayer Game with Sprite Kit: Part 1/2 appeared first on Ray Wenderlich.