Welcome back to our Unity 4.3 2D Tutorial series!
Yes, Unity 4.5 was recently released, but this series is about Unity’s 2D features, which were first introduced in version 4.3. Some bugs have been fixed in 4.5 and a few GUI elements have changed slightly. So, keep that in mind if you’re using a newer version of Unity and you see slight discrepancies between your editor and the screenshots here.
Throughout the first, second and third parts of this series, you learned most of what you need to begin working with Unity’s 2D tools, including how to import and animate your sprites.
In the fourth part of the series, you were introduced to Unity’s 2D physics engine and learned one way to deal with different screen sizes and aspect ratios.
By the end of this, the final part of this series, you’ll have cats dancing in a conga line and your player will be able to win or lose the game. You’ll even throw in some music and sound effects just for fun.
This last tutorial is the longest of the series, but it seemed better to post it as one huge chunk rather than to make you wait even one extra day for the second half.
This tutorial picks up where the fourth part of the series left off. If you don’t already have the project from that tutorial, download it here.
Unzip the file (if you needed to download it) and open your scene by double-clicking ZombieConga/Assets/Scenes/CongaScene.unity.
You’ve got most of Zombie Conga’s pieces in place, so now it’s time to do what so many aspiring game developers have trouble doing: finish the game!
Zombie Conga is supposed to be a side-scrolling game, but so far your zombie has been stuck staring at one small section of beach. It’s high time he had someplace to go.
In order to scroll the scene to the left, you’ll move the camera within the game world to the right. That way, the beach, along with the cats, zombies and old ladies that hang out there, will scroll by naturally without you needing to modify their positions yourself.
Select Main Camera in the Hierarchy. Add a new C# script called CameraController. You’ve already created several scripts by this point in the tutorial series, so try it yourself. There are several ways you could create this script and add it to Main Camera; if you need a refresher, look back at the earlier parts of this series.
Open CameraController.cs in MonoDevelop and add the following instance variables:
public float speed = 1f;
private Vector3 newPosition;
You’ll use speed to control how quickly the scene scrolls. You only need to update the x component of the Camera’s position, but the individual components of a Transform’s position are read-only. Rather than repeatedly creating new Vector3 objects every time you update the position, you’ll reuse newPosition.
Because you’ll only be setting newPosition’s x value, you need to initialize the vector’s other components properly. To do so, add the following line inside Start:
newPosition = transform.position;
This copies the camera’s initial position to newPosition.
Now add the following code inside Update:
newPosition.x += Time.deltaTime * speed;
transform.position = newPosition;
This simply adjusts the object’s position as if it were moving speed units per second.

Note: Sticklers will notice this code assumes nothing else ever moves the camera, so that newPosition accurately reflects the camera’s position. If you are one of said sticklers, feel free to replace that line with:

newPosition.x = transform.position.x + Time.deltaTime * speed;
Save the file (File\Save) and switch back to Unity.
Play your scene and things start moving. The zombie and enemies seem to handle it fine, but they quickly run out of beach!
You need to handle the background similarly to how you handled the enemy. That is, when the enemy goes off screen, you’ll change its position so it reenters the scene from the other side of the screen.
Create a new C# script named BackgroundRepeater and add it to background. You’ve done this sort of thing several times now, so if you need a refresher, look back through the tutorial to find it.
Open BackgroundRepeater.cs in MonoDevelop and add the following instance variables:
private Transform cameraTransform;
private float spriteWidth;
You’ll store a reference to the camera’s Transform in cameraTransform. This isn’t absolutely necessary, but you’ll need to access it every time Update runs, so rather than repeatedly finding the same component, you’ll simply find it once and keep using it.

You’ll also need to repeatedly access the sprite’s width, which you’ll cache in spriteWidth because you know you aren’t changing the background’s sprite at runtime.

Initialize these variables by adding the following code in Start:
//1
cameraTransform = Camera.main.transform;
//2
SpriteRenderer spriteRenderer = renderer as SpriteRenderer;
spriteWidth = spriteRenderer.sprite.bounds.size.x;
The above code initializes the variables you added as follows:

1. Gets the main Camera object (which is the only camera in Zombie Conga) and sets cameraTransform to point to the camera’s Transform.
2. Casts the object’s renderer property to a SpriteRenderer in order to access its sprite property, from which it gets the Sprite’s bounds. The Bounds object has a size property whose x component holds the object’s width, which it stores in spriteWidth.

In order to determine when the background sprite is off screen, you could implement OnBecameInvisible, like you did for the enemy. But you already learned about that, so this time you’ll check the object’s position directly.
In Zombie Conga, the camera’s position is always at the center of the screen. Likewise, when you imported the background sprite way back in Part 1 of this tutorial series, you set the origin to the sprite’s center.
Rather than calculate the x position of the left edge of the screen, you’ll estimate by assuming the background has scrolled off screen if it’s at least a full sprite’s width away from the camera. The following image shows how the background sprite is well offscreen when positioned exactly one sprite’s width away from the camera’s position:
The left edge of the screen will be different on different devices, but this trick will work as long as the screen’s width is not larger than the width of the background sprite.
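Because the trick depends on that assumption, you could guard against it silently breaking. The following is a hypothetical sanity check (not part of the original project) you might add to BackgroundRepeater’s Start, using the fact that an orthographic camera’s visible width in world units is twice its orthographicSize multiplied by its aspect ratio:

```
// Hypothetical guard — not in the tutorial's code. An orthographic camera
// shows an area (2 * orthographicSize) units tall and
// (2 * orthographicSize * aspect) units wide, so the repeat trick fails
// if that width ever exceeds spriteWidth.
float screenWidth = 2f * Camera.main.orthographicSize * Camera.main.aspect;
if (screenWidth > spriteWidth)
{
    Debug.LogWarning("Background sprite is narrower than the screen; expect gaps.");
}
```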
Add the following code to Update:
if ( (transform.position.x + spriteWidth) < cameraTransform.position.x )
{
    Vector3 newPos = transform.position;
    newPos.x += 2.0f * spriteWidth;
    transform.position = newPos;
}
The if statement checks whether the object is sufficiently off screen, as described earlier. If so, it calculates a new position that is offset from the current position by twice the width of the sprite.

Why twice the width? By the time this logic determines that the background went offscreen, moving the sprite over by only spriteWidth would pop it into the area viewable by the camera, as shown below:
Save the file (File\Save) and switch back to Unity.
Play the scene and you’ll see that the background goes off screen and eventually comes back into view, as shown in the sped-up sequence below:
That works fine, but you probably don’t want those blue gaps that keep showing up. To fix it, you’ll simply add another background sprite to fill that space.
Right-click on background in the Hierarchy and select Duplicate from the popup menu that appears. Select the duplicated background (if it isn’t already selected) and set the x value of the Transform‘s Position to 20.48, as shown below:
Remember from Part 1 of this series that the background sprite is 2048 pixels wide and you imported it with a ratio of 100 pixels per unit. That means that setting one background sprite’s x position to 20.48 will place it immediately to the right of the other object, whose x position is zero.
You now have a much longer stretch of beach in your Scene view, as shown below:
Play the scene again and now your zombie can spend his entire apocalypse strolling along the beach, as shown in the following sped-up sequence. Don’t let the glitches in this low-quality GIF fool you – in the real game, the background scrolls seamlessly.
While playing the scene, one thing that probably stands out is how utterly devoid of cats that beach is. I don’t know about you, but whenever I go to the beach, I always bring my kitty.
You’ll want new cats to keep appearing on the beach until the player wins or loses the game. To handle this, you’ll create a new script and add it to an empty GameObject.
Create a new empty game object by choosing GameObject\Create Empty in Unity’s menu. Name the new object Kitten Factory.
Create a new C# script called KittyCreator and attach it to Kitten Factory. No more hints for creating new scripts – you can do it! (But if you can’t do it, look back through the earlier parts of the tutorial.)
Open KittyCreator.cs in MonoDevelop and replace its contents with the following code:
using UnityEngine;

public class KittyCreator : MonoBehaviour
{
    //1
    public float minSpawnTime = 0.75f;
    public float maxSpawnTime = 2f;

    //2
    void Start ()
    {
        Invoke("SpawnCat", minSpawnTime);
    }

    //3
    void SpawnCat()
    {
        Debug.Log("TODO: Birth a cat at " + Time.timeSinceLevelLoad);
        Invoke("SpawnCat", Random.Range(minSpawnTime, maxSpawnTime));
    }
}
This code doesn’t actually spawn any cats; it simply lays the groundwork to do so. Here’s what it does:

1. minSpawnTime and maxSpawnTime specify how often new cats appear. After a cat spawns, KittyCreator will wait at least minSpawnTime seconds and at most maxSpawnTime seconds before spawning another cat. You declared them public so you can tweak the spawn rate in the editor later if you’d like.
2. The Invoke method lets you call another method after a specified delay. Start calls Invoke, instructing it to wait minSpawnTime seconds and then to call SpawnCat. This adds a brief period after the scene starts during which no cats spawn.
3. SpawnCat simply logs a message letting you know when it executes and then uses Invoke to schedule another call to SpawnCat. It waits a random amount of time between minSpawnTime and maxSpawnTime, which keeps cats from appearing at predictable intervals.

Save the file (File\Save) and switch back to Unity.
Run the scene and you’ll start seeing logs like the following appear in the Console:
Now that you have your Kitten Factory working on schedule, you need to make it spit out some cats. For that, you’ll be using one of Unity’s most powerful features: Prefabs.
Prefabs reside in your Project rather than in your scene’s Hierarchy. You use a Prefab as a template to create objects in your scene.
However, these instances are not just copies of the original Prefab. Instead, the Prefab defines an object’s default values, and then you are free to modify any part of a specific instance in your scene without affecting any other objects created from the same Prefab.
In Zombie Conga, you want to create a cat Prefab and have Kitten Factory create instances of that Prefab at different locations throughout the scene. But don’t you already have a cat object in your scene, properly configured with all the animations, physics and scripts you’ve set up so far? It sure would be annoying if you had to redo that work to make a Prefab. Fortunately, you don’t have to!
To turn it into a Prefab, simply drag cat from the Hierarchy into the Project browser. You’ll see a new cat Prefab object created in the Project browser, but you should also see the word cat turn blue in the Hierarchy, as shown below:
While working on your own games, remember that objects with blue names in the Hierarchy are instances of Prefabs. When you select one, you will see the following buttons in the Inspector:
These buttons are useful while editing instances of a Prefab. They allow you to do the following:

Select: This button selects the Prefab from which this instance was created, highlighting it in the Project browser.

Revert: This button discards any local changes you’ve made to this instance, restoring the Prefab’s values.

Apply: This button takes any local changes you’ve made to this instance and sets those values back onto the Prefab, making them the default for all Prefab instances. Any existing instances of the Prefab that have not set local overrides for these values will automatically have their values changed to the new defaults.
Important: Clicking Apply affects every GameObject that shares this object’s Prefab in every scene of your project, not just the current scene.
Now you need to get the Kitten Factory to stop polluting your Console with words and start polluting your beach with cats!
Go back to KittyCreator.cs in MonoDevelop and add the following variable to KittyCreator:

public GameObject catPrefab;
You’ll assign your cat Prefab to catPrefab in Unity’s editor and then KittyCreator will use it as a template when creating new cats. But before you do that, replace the Debug.Log line in SpawnCat with the following code:
// 1
Camera camera = Camera.main;
Vector3 cameraPos = camera.transform.position;
float xMax = camera.aspect * camera.orthographicSize;
float xRange = camera.aspect * camera.orthographicSize * 1.75f;
float yMax = camera.orthographicSize - 0.5f;

// 2
Vector3 catPos = new Vector3(cameraPos.x + Random.Range(xMax - xRange, xMax),
                             Random.Range(-yMax, yMax),
                             catPrefab.transform.position.z);

// 3
Instantiate(catPrefab, catPos, Quaternion.identity);
The above code chooses a random position that’s visible to the camera and places a new cat there. Specifically:

1. Gets the main Camera and uses its aspect ratio and orthographic size to calculate xMax, xRange and yMax, which define the area in which cats may spawn.
2. Creates a position using catPrefab’s z position (so all cats appear at the same z-depth), and random values for x and y. These random values are chosen within the area shown in the image that follows, which is slightly smaller than the visible area of the scene.
3. You call Instantiate to create an instance of catPrefab placed in the scene at the position defined by catPos. You pass Quaternion.identity as the new object’s rotation because you don’t want the new object to be rotated at all. Instead, the cat’s rotation will be set by the spawn animation you made in Part 2 of this tutorial series.

Note: If you’d like, you can pass Instantiate a random rotation around the z axis instead of using the identity rotation. However, be advised that this won’t actually work until after you’ve made some changes you’ll read about later in this tutorial.

Save the file (File\Save) and switch back to Unity.
You no longer need the cat in the scene because your factory will create them at runtime. Right-click cat in the Hierarchy and choose Delete from the popup menu that appears, as shown below:
Select Kitten Factory in the Hierarchy. Inside the Inspector, click the small circle/target icon on the right of the Kitty Creator (Script) component’s Cat Prefab field, shown below:
Inside the Select GameObject dialog that appears, choose cat from the Assets tab, as shown in the following image:
Kitten Factory now looks like this in the Inspector:
Don’t worry if your Kitten Factory doesn’t have the same Transform values as those shown here. Kitten Factory only exists to hold the Kitty Creator script component. It has no visual component and, as such, its Transform values are meaningless.
Run the scene again and watch as everywhere you look, the very beach itself appears to be coughing up adorable fur balls.
However, there’s a problem. As you play, notice how a massive list of cats slowly builds up in the Hierarchy, shown below:
This won’t do. If your game lasts long enough, this sort of logic will bring it crashing to a halt. You’ll need to remove cats as they go off screen.
Open CatController.cs in MonoDevelop and add the following method to CatController:

void OnBecameInvisible()
{
    Destroy( gameObject );
}
This simply calls Destroy to destroy gameObject. All MonoBehaviour scripts, such as CatController, have access to gameObject, which points to the GameObject that holds the script. Although this method doesn’t show it, it is safe to execute other code in a method after calling Destroy because Unity doesn’t actually destroy the object right away.
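As a purely illustrative variant (not a change you need to make), code placed after Destroy in the same method still runs during that frame:

```
void OnBecameInvisible()
{
    Destroy( gameObject );
    // Destroy only schedules the object for destruction at the end of the
    // current frame's Update loop, so this line still executes and
    // gameObject is still valid here.
    Debug.Log( gameObject.name + " will be destroyed at the end of this frame." );
}
```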
Note: You may have noticed that GrantCatTheSweetReleaseOfDeath, the other method in CatController, uses DestroyObject for the same purpose as you are now using Destroy. What gives?

To be honest, I’m not sure if there is any difference. Unity’s documentation includes Destroy but not DestroyObject, yet they both seem to have the same effect. I just use whichever one I happen to type and, since the compiler doesn’t complain, I’ve never thought anything of it.
If you know of a difference or why one should be preferred over the other, please mention it in the Comments section. Thanks!
Save the file (File\Save) and switch back to Unity.
Run the scene again. As was mentioned in Part 3, OnBecameInvisible
only gets called once an object is out of sight of all cameras, so be sure the Scene view is not visible while testing this bit.
Now, no matter how long you play, the Hierarchy never contains more than a few cats. Specifically, it contains the same number of objects as there are cats visible in the scene, as shown below:
Note: Creating and destroying objects is fairly expensive in Unity’s runtime environment. If you’re making a game even only slightly more complicated than Zombie Conga, it would probably be worthwhile to reuse objects when possible.
For example, rather than destroying a cat when it exits the screen, you could reuse that object the next time you needed to spawn a new cat. You already do this for the enemy, but for the cats you would need to handle keeping a list of reusable objects and remembering to reset the cat to an initial animation state prior to spawning it.
This technique is known as object pooling and you can find out a bit more about it in this training session from Unity.
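As a rough idea of what pooling might look like here, below is a minimal sketch. The class and method names are invented for illustration, and a real version would also need to reset each reused cat’s Animator to its initial state:

```
// Illustrative object-pool sketch (names are hypothetical, not from the
// tutorial). Instead of Instantiate/Destroy, deactivated cats are kept
// in a queue and reused the next time one is needed.
using System.Collections.Generic;
using UnityEngine;

public class CatPool : MonoBehaviour
{
    public GameObject catPrefab;
    private Queue<GameObject> pool = new Queue<GameObject>();

    public GameObject GetCat(Vector3 position)
    {
        GameObject cat = pool.Count > 0 ? pool.Dequeue()
                                        : (GameObject)Instantiate(catPrefab);
        cat.transform.position = position;
        cat.SetActive(true); // caller must also reset the cat's Animator state
        return cat;
    }

    public void ReturnCat(GameObject cat)
    {
        cat.SetActive(false); // hide the cat instead of destroying it
        pool.Enqueue(cat);
    }
}
```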
Ok, you’ve got a beach filling up with cats and a zombie walking around looking to party. I think you know what time it is.
If you’ve been following along with this tutorial series since Part 1, you’ve probably started wondering why the heck this game is even called Zombie Conga.
It is time.
When the zombie collides with a cat, you’ll add that cat to the conga line. However, you’ll want to handle enemy collisions differently. In order to tell the difference, you’ll assign specific tags to each of them.
Unity allows you to assign a string to any GameObject, called a tag. Newly created projects include a few default tags, like MainCamera and Player, but you are free to add any tags that you’d like.
In Zombie Conga, you could get away with only one tag, because there are only two types of objects with which the zombie can collide. For example, you could add a tag to the cats and then assume if the zombie collides with an object that is missing that tag, it must be an enemy. However, shortcuts like that are a good way to cause bugs when you later decide to change something about your game.
To make your code easier to understand and more maintainable, you’ll create two tags: cat and enemy.
Choose Edit\Project Settings\Tags and Layers from Unity’s menu. The Inspector now shows the Tags & Layers editor. If it’s not already open, expand the Tags list by clicking the triangle to the left of its name, as shown in the following image:
Type cat in the field labeled Element 0. As soon as you start typing, Unity adds a new tag field labeled Element 1. Your Inspector now looks like this:
Select cat in the Project browser and choose cat from the combo box labeled Tag in the Inspector, like this:
When you were adding the cat tag, you could have added the enemy tag, too. However, I wanted to show you another way to create a tag.
Many times you’ll decide you want to tag an object, only to check the Tag combo box in the Inspector and realize the tag you want doesn’t exist yet. Rather than go through Unity’s Editor menu, you can open the Tags and Layers editor directly from the Tags combo box.
Select enemy in the Hierarchy. In the Inspector, choose Add Tag… from the Tag combo box, as shown below:
Once again, the Inspector now shows the Tags & Layers editor. Inside the Tags section, type enemy in the field labeled Element 1. The Inspector now looks like the image below:
With the new tag created, select enemy in the Hierarchy and set its Tag to enemy, as shown below:
Now that your objects are tagged, you can identify them in your scripts. To see how, open ZombieController.cs in MonoDevelop and replace the contents of OnTriggerEnter2D with the following code:

if (other.CompareTag("cat"))
{
    Debug.Log ("Oops. Stepped on a cat.");
}
else if (other.CompareTag("enemy"))
{
    Debug.Log ("Pardon me, ma'am.");
}
You call CompareTag to check if a particular GameObject has the given tag. Only GameObjects can have tags, but calling this method on a Component – like you’re doing here – tests the tag on the Component’s GameObject.
Save the file (File\Save) and switch back to Unity.
Run the scene and you should see the appropriate messages appear in the Console whenever the zombie touches a cat or an enemy.
Now that you know your collisions are set up properly, it’s time to actually make them do something.
Remember all those animations you made in Parts two and three of this series? The cat starts out bobbing happily, like this:
When the zombie collides with a cat, you want the cat to turn into a zombie cat. The following image shows how you accomplished this in the earlier tutorial by setting the cat’s InConga parameter to true in the Animator window.
Now you want to do the same thing, but from within code. To do that, switch back to CatController.cs in MonoDevelop and add the following method to CatController:

public void JoinConga()
{
    collider2D.enabled = false;
    GetComponent<Animator>().SetBool( "InConga", true );
}
The first line disables the cat’s collider. This will keep Unity from sending more than one collision event when the zombie collides with a cat. (Later you’ll solve this problem in a different way for collisions with the enemy.)
The second line sets InConga to true on the cat’s Animator Component. By doing so, you trigger a state transition from the CatWiggle Animation Clip to the CatZombify Animation Clip. You set up this transition using the Animator window in Part 3 of this series.
By the way, notice that you declared JoinConga as public. This lets you call it from other scripts, which is what you’ll do right now.
Save CatController.cs (File\Save) and switch to ZombieController.cs, still in MonoDevelop.
Inside ZombieController, find the following line in OnTriggerEnter2D:

Debug.Log ("Oops. Stepped on a cat.");
And replace it with this line:
other.GetComponent<CatController>().JoinConga();
Now whenever the zombie collides with a cat, it calls JoinConga on the cat’s CatController component.
Save the file (File\Save) and switch back to Unity.
Play the scene and as the zombie walks into the cats, they turn green and start hopping in place. So far, so good.
Nobody wants a bunch of zombie cats scattered across a beach. What you want is for them to join your zombie’s eternal dance, and for that, you need to teach them how to play follow the leader.
You’ll use a List to keep track of which cats are in the conga line.
Go back to ZombieController.cs in MonoDevelop.
First, add the following at the top of the file with the other using statements:

using System.Collections.Generic;
This using statement is similar to an #import statement in Objective-C. It simply gives this script access to the specified namespace and the types it contains. In this case, you need access to the Generic namespace to declare a List with a specific data type.
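If you’re coming from Objective-C, a List<T> behaves much like an NSMutableArray, but with compile-time typing. A quick illustrative fragment (not part of the project):

```
using System.Collections.Generic;

List<int> numbers = new List<int>();
numbers.Add(42);            // append an element
int first = numbers[0];     // indexed access, just like an array
int count = numbers.Count;  // current number of elements
```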
Add the following private variable to ZombieController:

private List<Transform> congaLine = new List<Transform>();
congaLine will store Transform objects for the cats in the conga line. You’re storing Transforms instead of GameObjects because you’ll be dealing mostly with the cats’ positions, and if you ever need access to anything else, you can get to any part of a GameObject from its Transform, anyway.
Each time the zombie touches a cat, you’ll append the cat’s Transform to congaLine. This means that the first Transform in congaLine will represent the cat right behind the zombie, the second Transform in congaLine will represent the cat behind the first, and so forth.
To add cats to the conga line, add the following line to OnTriggerEnter2D in ZombieController, just after the line that calls JoinConga:

congaLine.Add( other.transform );
This line simply adds the cat’s Transform to congaLine.
If you were to run the scene right now, you wouldn’t see any difference from before. You’re maintaining a list of cats, but you haven’t written any code to move the cats from their initial positions when they join the conga line. As conga lines go, this one isn’t very festive.
To fix this, open CatController.cs in MonoDevelop.
The code for moving the cats will be similar to what you wrote to move the zombie in Part 1. Start out by adding the following instance variables to CatController:

private Transform followTarget;
private float moveSpeed;
private float turnSpeed;
private bool isZombie;
You’ll use moveSpeed and turnSpeed to control the cat’s rate of motion, the same way you did for the zombie. You only want the cat to move after it becomes a zombie, so you’ll keep track of that with isZombie. Finally, followTarget will hold a reference to the character (cat or zombie) in front of this cat in the conga line. You’ll use this to calculate a position toward which to move.
The above variables are all private, so you may be wondering how you’ll set them. For the conga line to move convincingly, you’ll base the movement of the cats on the zombie’s movement and turn speeds. As such, you’re going to have the zombie pass this information to each cat during the zombification process.
Inside CatController.cs, replace your implementation of JoinConga with the following code:

//1
public void JoinConga( Transform followTarget, float moveSpeed, float turnSpeed )
{
    //2
    this.followTarget = followTarget;
    this.moveSpeed = moveSpeed;
    this.turnSpeed = turnSpeed;

    //3
    isZombie = true;

    //4
    collider2D.enabled = false;
    GetComponent<Animator>().SetBool( "InConga", true );
}
Here’s a breakdown of this new version of JoinConga:

1. The method now takes three parameters, which allow ZombieController to call JoinConga with the appropriate values.
2. Uses this. to differentiate between the cat’s variables and the method’s parameters of the same names.
3. Sets isZombie to true so the cat knows it should start moving.
4. These are the same two lines from the previous version of JoinConga.

Now add the following implementation of Update to CatController:
void Update ()
{
    //1
    if (isZombie)
    {
        //2
        Vector3 currentPosition = transform.position;
        Vector3 moveDirection = followTarget.position - currentPosition;

        //3
        float targetAngle = Mathf.Atan2(moveDirection.y, moveDirection.x) * Mathf.Rad2Deg;
        transform.rotation = Quaternion.Slerp( transform.rotation,
                                               Quaternion.Euler(0, 0, targetAngle),
                                               turnSpeed * Time.deltaTime );

        //4
        float distanceToTarget = moveDirection.magnitude;
        if (distanceToTarget > 0)
        {
            //5
            if ( distanceToTarget > moveSpeed )
                distanceToTarget = moveSpeed;

            //6
            moveDirection.Normalize();
            Vector3 target = moveDirection * distanceToTarget + currentPosition;
            transform.position = Vector3.Lerp(currentPosition, target, moveSpeed * Time.deltaTime);
        }
    }
}
That may look a bit complicated, but most of it is actually the same as what you wrote to move the zombie. Here’s what it does:

1. Unity calls Update during every frame that the cat is active in the scene. This check ensures the cat doesn’t move until it’s supposed to.
2. Calculates the direction the cat needs to move to reach followTarget’s position.
3. Calculates the angle the cat should face and smoothly rotates it toward that angle.
4. Gets moveDirection’s magnitude – which is the vector’s length, for the non-mathies out there – and checks to see if the cat is not currently at the target.
5. Limits the distance the cat travels so that it moves at no more than moveSpeed units per second.
6. Normalizes moveDirection and then moves the cat toward the target, scaled by Time.deltaTime. This is basically the same code you wrote in ZombieController.cs in Part 1 of this series.

You’re done with CatController.cs for now, so save the file (File\Save).
Because you changed JoinConga’s method signature, you need to change the line that calls this method in ZombieController. Switch back to ZombieController.cs in MonoDevelop.
Inside OnTriggerEnter2D, replace the call to JoinConga with the following code:

Transform followTarget = congaLine.Count == 0 ? transform : congaLine[congaLine.Count-1];
other.GetComponent<CatController>().JoinConga( followTarget, moveSpeed, turnSpeed );
That first, tricky-looking line figures out what object should be in front of this cat in the conga line. If congaLine is empty, it assigns the zombie‘s Transform to followTarget. Otherwise, it uses the last item stored in congaLine.
The next line calls JoinConga, this time passing to it the target to follow along with the zombie’s movement and turn speeds.
Save the file (File\Save) and switch back to Unity.
Run the scene and your conga line is finally in place. Sort of. But not really.
When you played the scene, you may have noted the following problems: cats in the conga line get destroyed as soon as they leave the screen; the cats slide along rather than hop; and the cats refuse to look where they’re going.

These issues happen to be listed in the order of effort required to fix them. The first fix is simple, so start with that.
Go back to CatController.cs inside MonoDevelop.
You already added isZombie to keep track of when the cat is a zombie. Add the following line at the beginning of OnBecameInvisible to avoid deleting the cat while it’s getting its groove on:

if ( !isZombie )
Save the file (File\Save) and switch back to Unity.
Run the scene again, and now cats in the conga line can safely go off screen and later dance right back into view.
To make it look like the cats are hopping along enjoying their undeath, you’ll need to change the logic slightly. Rather than calculating the target position every frame, each cat will choose a point and then hop to it over the course of one CatConga animation cycle. Then the cat will choose another point and hop to it, and so on.

Switch back to CatController.cs in MonoDevelop and add the following variable to CatController:

private Vector3 targetPosition;
This will store the cat’s current target position. The cat will move until it reaches this position, and then find a new target.
Initialize targetPosition by adding this line to JoinConga:

targetPosition = followTarget.position;
Here you set targetPosition to followTarget’s current position. This ensures the cat has someplace to move as soon as it joins the conga line.
Replace the line that declares moveDirection in Update with this line:

Vector3 moveDirection = targetPosition - currentPosition;
This simply calculates moveDirection using the stored targetPosition instead of followTarget’s current position.
Save the file (File\Save) and switch back to Unity.
Run again and bump into some kitty cats. Hmm. There seems to be a problem.
Whenever the zombie hits a cat, that cat heads straight to wherever the last member of the conga line happens to be at the moment of the collision. It then stays right there. Forever.
The problem is that you assign targetPosition when the cat joins the conga line, but you never update it after that! Silly you.
Switch back to CatController.cs in MonoDevelop and add the following method:
void UpdateTargetPosition()
{
    targetPosition = followTarget.position;
}
This method simply updates targetPosition with followTarget’s current position. Update already looks at targetPosition, so you don’t need to write any other code to send the cat toward the new location.
Save the file (File\Save) and switch back to Unity.
Recall from Part 3 of this tutorial series that Animation Clips can trigger events. You’ll add an event that calls UpdateTargetPosition during the first frame of CatConga, allowing the cats to calculate their next target position before each hop.
However, you may also recall from that tutorial that you can only edit animations for a GameObject in your scene rather than a Prefab in your project. So to create the animation event, you first need to temporarily add a cat back into the scene.
Drag the cat Prefab from the Project browser to the Hierarchy.
Select cat in the Hierarchy and switch to the Animation view (Window\Animation).
Choose CatConga from the clips drop-down menu in the Animation window’s control bar.
Press the Animation view’s Record button to enter recording mode and move the scrubber to frame 0, as shown below:
Click the Add Event button shown below:
Choose UpdateTargetPosition() from the Function combo box in the Edit Animation Event dialog that appears, as shown in the following image, and then close the dialog.
With that set up, your cats will update their target in sync with their animation.
Run the scene again, and now the cats hop along from point to point, as you can see in the sped-up animation below:
This works, but the cats are spread out a bit too much. Have these cats ever even been in a conga line?
Switch back to CatController.cs in MonoDevelop.
Inside JoinConga, replace the line that sets this.moveSpeed with the following code:

```csharp
this.moveSpeed = moveSpeed * 2f;
```
Here you set the cat’s speed to twice that of the zombie. This will produce a tighter conga line.
Save the file (File\Save) and switch back to Unity.
Run the scene again and you’ll see the conga line looks a little friendlier, as the following sped-up sequence demonstrates:
If you’d like, experiment with different conga styles by multiplying the zombie’s speed by values other than two. The larger the number, the more quickly the cat gets to its target, giving it a more jumpy feeling.
The cats are moving along nicely, except that they refuse to look where they’re going. What gives? Well, that’s just how Unity works and there’s no way around it. Sorry about that. Tutorial done.
Aaaah. I’m just messing with you. There’s an explanation for what’s going on, and a solution!
Why won’t animated GameObjects respect the changes made to them via scripts? This is a common question, so it’s worth spending some time here to work through the solution.
First, what’s going on? Remember that while the cat hops along in the conga line, it’s playing the CatConga Animation Clip. As you can see in the following image, CatConga adjusts the Scale property in the cat’s Transform:
Important: If you remember only one thing today, make it this next paragraph.
It turns out that if an Animation Clip modifies any aspect of an object’s Transform, it is actually modifying the entire Transform. The cat was pointing to the right when you set up CatConga, so CatConga now ensures that the cat continues to point to the right. Thanks, Unity?
There is a way around this problem, but it’s going to require some refactoring. Basically, you need to make the cat a child of another GameObject. Then, you’ll run the animations on the child, but adjust the parent’s position and rotation.
You’ll need to make a few changes to your code in order to keep it working after you’ve rearranged your objects. Here you’ll go through the process in much the same way you might if you had just encountered this problem in your own project.
First, you need to move the cat Prefab into a parent object.
Create a new empty game object by choosing GameObject\Create Empty in Unity’s menu. Name the new object Cat Carrier.
Inside the Hierarchy, drag cat and release it onto Cat Carrier. I bet that was the least effort you’ve ever expended putting a cat into its carrier. ;]
Your Hierarchy now looks like this:
When you made the enemy spawn point a child of Main Camera in Unity 4.3 2D Tutorial: Physics and Screen Sizes, you learned that the child’s position defines an offset from its parent’s position.
In the case of the cat, you want the child to be centered on the parent, so setting the parent’s position to (X,Y,Z) essentially places the child at (X,Y,Z).
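In script terms, the relationship works like this (an illustrative sketch only — the cat and catCarrier variables here are hypothetical Transform references, and none of this code goes into the project):

```csharp
// A child's world position is its parent's position plus the child's local offset.
cat.localPosition = Vector3.zero;               // center the child on its parent
catCarrier.position = new Vector3(3f, 2f, 0f);  // the cat's world position is now (3, 2, 0) as well
```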
Therefore, select cat in the Hierarchy and ensure its Transform‘s Position is (0, 0, 0), as shown below:
Likewise, select Cat Carrier in the Hierarchy and ensure its Transform‘s Position is (0, 0, 0) as well. In reality, only its z position matters, but it’s always nice to keep things tidy. (I swear I had no intention of making a Tidy Cat pun right there.)
In order to limit the number of changes you need to make to your code, you’ll move CatController from cat to Cat Carrier.
Select cat in the Hierarchy. Inside the Inspector, click the gear icon in the upper-right of the Cat Controller (Script) component. Select Remove Component from the popup menu that appears, as shown below:
Click Apply at the top of the Inspector to ensure this change makes it back to the Prefab, as shown in the following image:
Select Cat Carrier in the Hierarchy. In the Inspector, click Add Component and choose Scripts\Cat Controller from the menu that appears, as demonstrated below:
Now drag Cat Carrier from the Hierarchy into the Project browser to turn it into a Prefab. Just like when you created the cat Prefab, Cat Carrier’s name turns blue in the Hierarchy to indicate it is now an instance of a Prefab, as shown below:
Select Cat Carrier in the Hierarchy and delete it by choosing Edit\Delete from Unity’s menu.
The Hierarchy now looks like this:
Inside the Project browser, you now have a cat Prefab and a Cat Carrier Prefab, which itself contains a cat Prefab, as shown below:
The two cat Prefabs do not refer to the same asset, and you no longer need the un-parented one. To avoid confusion later, right-click the un-parented cat Prefab and choose Delete from the popup menu, then click Delete in the confirmation dialog that appears, as shown below:
Finally, select Kitten Factory in the Hierarchy. As you can see in the following image, the Kitty Creator (Script) component’s Cat Prefab field now says “Missing (GameObject)”:
That’s because Cat Prefab had been set to the asset you just deleted.
Change the Cat Prefab field in the Kitty Creator (Script) component to use Cat Carrier instead of cat. If you don’t remember how to do that, check out the following spoiler.
Solution Inside: Need help setting the Cat Prefab field?

To assign Cat Carrier to the Cat Prefab field, select Kitten Factory in the Hierarchy, then drag Cat Carrier from the Project browser onto the Kitty Creator (Script) component’s Cat Prefab field in the Inspector.
Run the scene. At this point, you’ll see exceptions similar to the following in the Console whenever the zombie collides with a cat.
Double-click one of these exceptions inside the Console and you’ll arrive at the relevant line, highlighted in MonoDevelop, as shown below:
These exceptions occur because ZombieController looks for a CatController component on the GameObject with which it collides, but that component now resides on the cat’s parent, Cat Carrier, rather than the cat itself.
Replace the line highlighted in the image above with the following:
```csharp
other.transform.parent.GetComponent<CatController>().JoinConga( followTarget, moveSpeed, turnSpeed );
```

You now use the cat’s Transform to access its parent, which is the Cat Carrier. From there, the rest of the line remains unchanged from what you already had.
Note: Alternatively, you could have given cat its own JoinConga method that simply passes its parameters to JoinConga in its parent’s CatController component. It really only depends on how you like to organize your code and how much you want different objects to know about each other.

Save the file (File\Save) and switch back to Unity.
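In case you’re curious, that alternative forwarding method might have looked something like the following sketch (this is not a script the tutorial asks you to create, and the class name is made up):

```csharp
using UnityEngine;

// Hypothetical script attached to the cat child object.
public class CatForwarder : MonoBehaviour {

  // Forward the call to the JoinConga method on the parent's CatController.
  public void JoinConga( Transform followTarget, float moveSpeed, float turnSpeed ) {
    transform.parent.GetComponent<CatController>().JoinConga( followTarget, moveSpeed, turnSpeed );
  }
}
```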
Run the scene. Once again, you see exceptions in the Console when the zombie collides with a cat. This time they complain of a missing component, like this:
Double-click one of these exceptions in the Console to arrive at the relevant line in MonoDevelop. As you can see, this time the problem is in CatController.cs:
Inside JoinConga, you attempt to access the object’s Animator component. This no longer works because you moved the script onto Cat Carrier but the Animator is still attached to cat.
You don’t want to move the Animator, so instead you’ll change the code.
Inside CatController.cs, find the following two lines of code in JoinConga:

```csharp
collider2D.enabled = false;
GetComponent<Animator>().SetBool( "InConga", true );
```
Replace those lines with the following code:
```csharp
Transform cat = transform.GetChild(0);
cat.collider2D.enabled = false;
cat.GetComponent<Animator>().SetBool( "InConga", true );
```

This code simply uses Cat Carrier’s Transform to find its first child – indexed from zero. You know Cat Carrier only has one child, which is cat, so this finds the cat. The code then accesses the cat’s Collider2D and Animator components in otherwise the same way you did before.
Note: Rather than calling GetChild(0), you could find the child by name, replacing that line with this one:

```csharp
Transform cat = transform.FindChild("cat");
```
However, that solution relies on you knowing the name of the child you need. Better, but maybe not ideal.
The best solution might be to avoid looking up the object at runtime altogether. To do that, you could add a Transform variable to CatController and assign the cat Prefab to it in Unity’s editor. Such choice!
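That last approach might look something like this sketch (the catChild field name is hypothetical — you’d drag the cat child onto the field in the Inspector; this is not code the tutorial asks you to write):

```csharp
public class CatController : MonoBehaviour {

  // Hypothetical: assign the cat child object to this field in Unity's Inspector.
  public Transform catChild;

  public void JoinConga( Transform followTarget, float moveSpeed, float turnSpeed ) {
    // No runtime lookup required — the reference was set in the editor.
    catChild.collider2D.enabled = false;
    catChild.GetComponent<Animator>().SetBool( "InConga", true );
    // ...rest of JoinConga unchanged...
  }
}
```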
Save the file (File\Save) and switch back to Unity.
Run the scene and now when the zombie collides with a cat…you get this error:
This problem is in your Animation Clip, CatConga. Earlier, you added an event at frame zero that would call the cat’s UpdateTargetPosition. However, you’ve moved CatController.cs onto a different object, so this error is telling you that you’re trying to call a method that doesn’t exist on the target object.
Select Cat Carrier in the Project browser and then open the Animation view (Window\Animation). What’s this? There are no Animation Clips!
This actually makes sense. Remember, you added the Animation Clips to cat, not Cat Carrier. In fact, the whole reason you added Cat Carrier was because Unity’s animation system was interfering with your GameObject’s Transform.
Expand Cat Carrier in the Project browser and select cat, then choose CatConga from the clip drop-down menu in the Animation view’s control bar. Mouse-over the animation event marker in the timeline and you’ll see it says Error!:
Double click the animation event marker and…nothing happens. Pop quiz! Why? Check the spoiler below for the answer.
Solution Inside

Remember, you cannot modify Animation Clips on a Prefab. Double-clicking the event marker should bring up the Edit Animation Event dialog, which you can’t access if you can’t edit the object.

To correct the situation, drag Cat Carrier from the Project browser into the Hierarchy. Then select its child, cat, in the Hierarchy.
Once you’ve corrected the situation, double click the animation event marker again and the following dialog appears, indicating that UpdateTargetPosition is not supported:
Part 4 of this tutorial series alluded to this problem. Animation Events can only access methods on scripts attached to the object associated with the clip. That means you’ll need to add a new script to cat.
Select cat in the Hierarchy and add a new C# script named CatUpdater.cs.
Open CatUpdater.cs in MonoDevelop and replace its contents with the following code:
```csharp
using UnityEngine;

public class CatUpdater : MonoBehaviour {

  private CatController catController;

  // Use this for initialization
  void Start () {
    catController = transform.parent.GetComponent<CatController>();
  }

  void UpdateTargetPosition() {
    catController.UpdateTargetPosition();
  }
}
```
This script includes a method named UpdateTargetPosition that simply calls the identically named method on the CatController component in the cat’s parent. To avoid repeatedly getting the CatController component, the script finds the component in Start and stores a reference to it in catController.
Save the file (File\Save). However, instead of switching back to Unity, open CatController.cs in MonoDevelop.
You called CatController’s UpdateTargetPosition from CatUpdater, but UpdateTargetPosition is not a public method. If you went back to Unity now you’d get an error claiming the method is ‘inaccessible due to its protection level’.
Inside CatController.cs, add public to the beginning of UpdateTargetPosition’s declaration, as shown below:

```csharp
public void UpdateTargetPosition()
```
Save the file (File\Save) and switch back to Unity.
Before moving on, you should verify that your animation events are set up correctly. Select cat in the Hierarchy and choose CatConga from the clip drop-down menu in the Animation view’s control bar. Mouse-over the animation event marker in the timeline and you’ll see it says UpdateTargetPosition():
With cat still selected in the Hierarchy, click Apply in the Inspector to make sure the Prefab includes the script you just added. Then delete Cat Carrier from the scene by right-clicking it in the Hierarchy and choosing Delete from the popup menu.
Run the scene and you, the zombie and the cats can all finally have a dance party.
Now, the zombie can collect cats in his conga line, but the old ladies have no way to defend against this undead uprising. Time to give those ladies a fighting chance!
In Zombie Conga, the player’s goal is to gather a certain number of cats into its conga line before colliding with some number of enemies. Or, that will be the goal once you’ve finished this tutorial.
To make it harder to build the conga line, you’ll remove some cats from the line every time an enemy touches the zombie.
To do so, first open CatController.cs in MonoDevelop and add the following method to the class:
```csharp
public void ExitConga() {
  Vector3 cameraPos = Camera.main.transform.position;
  targetPosition = new Vector3( cameraPos.x + Random.Range(-1.5f, 1.5f),
                                cameraPos.y + Random.Range(-1.5f, 1.5f),
                                followTarget.position.z );

  Transform cat = transform.GetChild(0);
  cat.GetComponent<Animator>().SetBool("InConga", false);
}
```
The first half of this method assigns targetPosition a random position in the vicinity of the camera’s position, which is the center of the screen. The code you already added to Update will automatically move the cat toward this new position.
The method then gets the cat from inside the Cat Carrier and disables its Animator’s InConga flag. Remember from Unity 4.3 2D Tutorial: Animation Controllers that you need to set InConga to false in the Animator in order to move the animation out of the CatConga state. Doing so will trigger the cat to play the CatDisappear animation clip.
Save the file (File\Save).
You maintain the conga line in ZombieController, so that’s where you’ll add a call to ExitConga. Open ZombieController.cs in MonoDevelop now.
Inside the class, find the following line in OnTriggerEnter2D:

```csharp
Debug.Log ("Pardon me, ma'am.");
```
And replace it with this code:
```csharp
for( int i = 0; i < 2 && congaLine.Count > 0; i++ ) {
  int lastIdx = congaLine.Count - 1;
  Transform cat = congaLine[ lastIdx ];
  congaLine.RemoveAt( lastIdx );
  cat.parent.GetComponent<CatController>().ExitConga();
}
```
This for loop may look a little strange, but it’s really not doing much. If there are any cats in the conga line, this loop removes the last two of them, or the last one if there is only one cat in the line.
After removing the cat’s Transform from congaLine, it calls ExitConga, which you just added to CatController.
Save the file (File\Save) and switch back to Unity.
Run the scene and get some cats in your conga line, then crash into an old lady and see what happens!
Unfortunately, when you crashed into the old lady, you crashed right into two more problems.
First, if the conga line had more than two cats when the zombie collided with the enemy, you probably saw every cat spin out of the line. You can see that in the previous animation.
The second problem is yet another exception in the Console:
No receiver, eh? Before fixing the first problem, try debugging the exception yourself. You’ve already solved an identical problem earlier in this tutorial. If you get stuck, check out the following spoiler.
Solution Inside: Cat not receiving your (function) calls?
You can fix this in either of two ways.
Option 1: You could add a method like the following to CatUpdater.cs:
However, for that to work, you need to change the declaration of
Option 2: The easier way to handle this situation is to add a method like the following to CatUpdater.cs:
This simply tells the cat’s parent to remove itself, which in turn removes the cat.
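For reference, Option 2 might boil down to something like the following sketch in CatUpdater.cs (the method name here is hypothetical — it must match whatever method the animation event actually tries to call):

```csharp
// Hypothetical forwarding method for CatUpdater.cs:
// destroying the parent (Cat Carrier) removes the cat along with it.
void DestroyCatCarrier() {
  Destroy( transform.parent.gameObject );
}
```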
With the exception fixed, it’s time to figure out how to keep the enemy from destroying your entire conga line with just one hit.
First, what’s going on? As you saw in Unity 4.3 2D Tutorial: Physics and Screen Sizes, Unity is reporting quite a few collisions as the zombie walks through the enemy.
For the cats, you solved this problem by disabling the cat’s collider when handling the first event. To eliminate redundant enemy collisions, you’ll do something a bit fancier.
You’re going to add a period of immunity after the initial collision. This is common in many games, where contacting an enemy reduces health or points and then blinks the player’s sprite for a second or two, during which time the player can take no damage. And yes, you’re going to make the zombie blink, too!
Open ZombieController.cs in MonoDevelop and add the following variables to the class:

```csharp
private bool isInvincible = false;
private float timeSpentInvincible;
```
As their names imply, you’ll use isInvincible to indicate when the zombie is invincible, and timeSpentInvincible to keep track of how long the zombie has been invincible.
Inside OnTriggerEnter2D, find the following line:

```csharp
else if(other.CompareTag("enemy")) {
```
and replace it with this code:
```csharp
else if(!isInvincible && other.CompareTag("enemy")) {
  isInvincible = true;
  timeSpentInvincible = 0;
```
This change to the if condition causes the zombie to ignore enemy collisions while the zombie is invincible. If a collision occurs while the zombie is not invincible, it sets isInvincible to true and resets timeSpentInvincible to zero.
To let the player know they have a moment of invincibility, as well as to indicate that they touched an enemy, you’ll blink the zombie sprite.
Add the following code to the end of Update:

```csharp
//1
if (isInvincible) {
  //2
  timeSpentInvincible += Time.deltaTime;
  //3
  if (timeSpentInvincible < 3f) {
    float remainder = timeSpentInvincible % .3f;
    renderer.enabled = remainder > .15f;
  }
  //4
  else {
    renderer.enabled = true;
    isInvincible = false;
  }
}
```
Here’s what this code does:

1. The if check verifies that the zombie is currently invincible, because that’s the only time you want to execute the rest of this logic.
2. This adds Time.deltaTime to timeSpentInvincible to keep track of the total time the zombie has been invincible. Remember that you reset timeSpentInvincible to zero when the collision first occurs.
3. This enables or disables the zombie’s renderer based on the value of timeSpentInvincible. This bit of math will blink the zombie on and off about three times per second.
4. Once the zombie has been invincible for three seconds, this enables the renderer and sets isInvincible to false. You enable the renderer here to ensure the zombie doesn’t accidentally stay invisible.

Save the file (File\Save) and switch back to Unity.
Run now and the conga line grows and shrinks as it should.
Ok, the conga line works, but without a way to win or lose, it’s still not a game. (That’s right, I said it. If you can’t win or lose, it’s not a game!) Time to fix that.
Players of Zombie Conga win the game when they build a long enough conga line. You maintain the conga in ZombieController.cs, so open that file in MonoDevelop.
Add the following code to OnTriggerEnter2D, inside the block that handles cat collisions, just after the line that adds other.transform to congaLine:

```csharp
if (congaLine.Count >= 5) {
  Debug.Log("You won!");
  Application.LoadLevel("CongaScene");
}
```
This code checks if the conga line contains at least five cats. If so, it logs a win message to the Console and then calls Application.LoadLevel to reload the current scene, named CongaScene. While it includes “level” in its name, LoadLevel actually loads Unity scenes. See the Application class documentation to find out more about what this class has to offer.
Don’t worry – reloading CongaScene is only for testing. You’ll change this later to show a win screen instead.
Note: Feel free to change the 5 in the if check to any number you’d like.

Save the file (File\Save) and switch back to Unity.
Play the scene. Once you get five cats in your conga line, you’ll see “You won!” in the Console and the scene will reset to its start state.
Winning isn’t as satisfying if there’s no way to lose, so take care of that now.
Switch back to ZombieController.cs in MonoDevelop and add the following variable to the class:

```csharp
private int lives = 3;
```
This value keeps track of how many lives the zombie has remaining. When this reaches zero, it’s Game Over.
Add the following code to OnTriggerEnter2D, inside but at the end of the block of code that handles collisions with enemy objects:

```csharp
if (--lives <= 0) {
  Debug.Log("You lost!");
  Application.LoadLevel("CongaScene");
}
```
This code subtracts one from lives and then checks to see if there are any lives left. If not, it logs a message to the Console and then calls Application.LoadLevel to reload the current scene. Once again, this is only for testing – you’ll change it later to show a lose screen.
Save the file (File\Save) and switch back to Unity.
Play the scene now and hit three old ladies. No, don’t do that. Play the game, and in the game, let three old ladies hit you. You’ll see “You lost!” in the Console and the scene will reset to its start state.
And that’s it! Zombie Conga works, even if it is a bit unpolished. In the remainder of this tutorial, you’ll add a few finishing touches, including additional scenes, some background music and a sound effect or two.
To finish up the game, you’ll add three screens to Zombie Conga: a start screen, a win screen and a lose screen.
So just draw those images and when you’re done, come back and learn how to add them to the game. Shouldn’t take but a minute.
Ok, I really don’t have time to wait for you to do your doodles. Just download and unzip these resources so we can get going.
The file you downloaded includes two folders: Audio and Backgrounds. Ignore Audio for now and look at Backgrounds, which contains a few images created by Mike Berg. You’ll use these images as backgrounds for three new scenes.
You first need to import these new images as Sprites. You learned how to do this way back in Part 1 of this series, so this would be a good time to see how much you remember.
Try creating Sprite assets from the images in Backgrounds. To keep things organized, add these new assets in your project’s Sprites folder. Also, remember to tweak their settings if necessary to ensure they look good!
Solution Inside: Need help creating Sprites?

Creating Sprites was covered extensively in Unity 4.3 2D Tutorial: Getting Started, so there won’t be much detail here. If you need more help, review that tutorial again.

To create a Sprite asset, you first need to add the files to the project. The easiest way is to drag them into the Project browser from your Finder/Explorer. If Unity is still in 2D mode, which you set up back in Part 1, then these images were turned into Sprites automatically. If not, you need to change each asset’s Texture Type to Sprite in the Inspector.

Each of the images is 1136×640 pixels, which is too large for Unity’s default texture size of 1024×1024. To make them look their best, you should adjust each Sprite’s Max Size to 2048. Finally, while they all look fine with the default Format of Compressed, I prefer to set StartUp’s Format to 16 bits.
You should now have three new Sprites in the Project browser, named StartUp, YouWin and YouLose, as shown below:
Before creating your new scenes, make sure you don’t lose anything in the current scene. Save CongaScene by choosing File\Save Scene in Unity’s menu.
Choose File\New Scene to create a new scene. This brings up an empty scene with a Main Camera.
Choose File\Save Scene as…, name the new scene LaunchScene and save it inside Assets\Scenes.
Add a StartUp Sprite to the scene, positioned at (0,0,0). You should have no problem doing this yourself, but the following spoiler will help if you’ve forgotten how.
Solution Inside: Need help adding a sprite?

To add the Sprite, simply drag StartUp from the Project browser into the Hierarchy.

Select StartUp in the Hierarchy and make sure its Transform’s Position is (0,0,0).
With the background in the scene, see if you can set up LaunchScene‘s camera yourself. When you’re finished, your Game view should show the entire StartUp image, like this:
If you need any help, check the following spoiler.
At this point, you’ve set up LaunchScene. Play the scene and you should see the following:
Be honest: how long did you stare at it waiting for something to happen?
You want Zombie Conga to start out showing this screen, but then load CongaScene so the user can actually play. To do that, you’ll add a simple script that waits a few seconds and then loads the next scene.
Create a new C# script named StartGame and add it to Main Camera.
Open StartGame.cs in MonoDevelop and replace its contents with the following code:

```csharp
using UnityEngine;

public class StartGame : MonoBehaviour {

  // Use this for initialization
  void Start () {
    Invoke("LoadLevel", 3f);
  }

  void LoadLevel() {
    Application.LoadLevel("CongaScene");
  }
}
```
This script uses two techniques you saw earlier. Inside Start, it calls Invoke to execute LoadLevel after a three-second delay. In LoadLevel, it calls Application.LoadLevel to load CongaScene.
Save the file (File\Save) and switch back to Unity.
Run the scene. After three seconds, you’ll see the following exception in the Console.
This exception occurs because Unity doesn’t know about your other scene. Why not? It’s right there in the Project browser, isn’t it?
Yes, it’s there, but Unity doesn’t assume that you want to include everything in your project in your final build. This is a good thing, because you’ll surely create many more assets than you ever use in your final game.
In order to tell Unity which scenes are part of the game, you need to add them to the build.
Inside Unity’s menu, choose File\Build Settings… to bring up the Build Settings dialog, shown below:
The lower left of the dialog includes the different platforms for which you can build. Don’t worry if your list doesn’t look the same as the above image.
The current platform for which you’ve been building – most likely, PC, Mac & Linux Standalone – should be highlighted and include a Unity logo to indicate it’s selected.
To add scenes to the build, simply drag them from the Project browser into the upper area of Build Settings, labeled Scenes In Build. Add both LaunchScene and CongaScene to the build, as shown below:
As you can see in the following image, levels in the Scenes In Build list are numbered from zero. You can drag levels to rearrange their order, and when running your game outside of Unity, your player starts at level zero. You can also use index numbers rather than scene names when calling LoadLevel.
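For example, assuming LaunchScene occupies index zero in the Scenes In Build list, these two calls load the same scene:

```csharp
Application.LoadLevel(0);             // load by build index
Application.LoadLevel("LaunchScene"); // load by scene name
```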
Close the dialog and run the scene. This time, the startup screen appears and then the game play starts after three seconds.
You should now create and add to your game two more scenes: WinScene and LoseScene. These should each display the appropriate background image – YouWin and YouLose, respectively. After three seconds, they should reload CongaScene.
Simply repeat the steps you took to create LaunchScene. The difference is that for these two scenes, you can reuse StartGame.cs rather than creating a new script. Or, check out the following spoiler if you’d like a shortcut.
Solution Inside: Want a shortcut for creating your scenes?

Rather than make each new scene, simply duplicate the existing LaunchScene and replace the image.

To do so, first save LaunchScene via File\Save Scene to ensure you don’t lose any of your work. Then save your scene again, but this time use File\Save Scene as… and name it WinScene. Delete StartUp from the Hierarchy and replace it with YouWin from the Project browser. That’s it. Save the scene (File\Save) and then repeat the process to create LoseScene.
After creating your new scenes, add them to the build. Your Build Settings should now look similar to this, although the order of your scenes after LaunchScene really doesn’t matter.
Once these scenes are in place, you need to change your code to launch them rather than print messages to the Console.
Open ZombieController.cs in MonoDevelop.
Inside OnTriggerEnter2D, find the following lines:

```csharp
Debug.Log("You won!");
Application.LoadLevel("CongaScene");
```
And replace them with this line:
```csharp
Application.LoadLevel("WinScene");
```
This will load WinScene instead of just reloading CongaScene.
Now fix OnTriggerEnter2D so it loads LoseScene at the appropriate time.
Solution Inside: Not sure where to load the scene?

Still in ZombieController.cs, find the following lines in OnTriggerEnter2D:

And replace them with this line:

Now, when players lose, they’ll know it.
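Concretely, based on the lose-handling code you added earlier in this tutorial, the change amounts to this:

```csharp
// Replace these two testing lines in OnTriggerEnter2D...
Debug.Log("You lost!");
Application.LoadLevel("CongaScene");

// ...with this single line:
Application.LoadLevel("LoseScene");
```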
Save the file (File\Save) and switch back to Unity.
At this point, you can play the game in its entirety. For the best experience, switch to LaunchScene before playing. After start up, play a few rounds, making sure you win some and you lose some. Hmm. That sounds pretty cool – I should trademark it.
With all your scenes in place, it’s time to get some tunes up in this tut!
Find the folder named Audio in the resources you downloaded earlier. This folder contains music and sound effects made by Vinnie Prabhu for our book, iOS Games by Tutorials.
Add all five files to your project by dragging Audio directly into the Project browser.
Open the Audio folder in the Project browser to reveal your new sound assets, as shown below:
Select congaMusic in the Project browser to reveal the sound’s Import Settings in the Inspector, shown in the following image:
Notice in the image above that Audio Format is disabled. That’s because Unity will not let you choose the format when importing compressed audio clips.
Unity can import .aif, .wav, .mp3 and .ogg files. For .aif and .wav files, Unity lets you choose between using the native format or compressing into an appropriate format for the build target. However, Unity automatically re-encodes .mp3 and .ogg files if necessary to better suit the destination. For example, .ogg files are re-encoded as .mp3 files for iOS.
There is a slight loss of sound quality if Unity needs to convert from one compressed format to another. For that reason, Unity’s documentation recommends that you import audio files in lossless formats like .aif and .wav and let Unity encode them to .mp3 or .ogg as needed. You’re using an .mp3 file here because I didn’t have a lossless version and this one sounds good enough.
For each of the five audio files you imported, you’ll leave most settings with their default values. However, you won’t be placing your sounds in 3D space, so uncheck 3D Sound, as shown below, and then click Apply:
When you hit Apply, Unity reimports the sound clip. If this takes a while, you’ll see a dialog that shows the encoding progress, as shown below:
Disable 3D sound for each of the other four sounds files: hitCat, hitEnemy, loseMusic and winMusic.
With your sound files imported properly, you’ll first add sounds to CongaScene. Save the current scene if necessary and open CongaScene.
To play a sound in Unity, you need to add an Audio Source component to a GameObject. You can add such a component to any GameObject, but you’ll use the camera for Zombie Conga’s background music.
Select Main Camera in the Hierarchy. Add an audio source from Unity’s menu by choosing Component\Audio\Audio Source. The Inspector now displays the Audio Source settings shown below:
Just like how you’ve set assets in fields before, click the small circle/target icon on the right of the Audio Source component’s Audio Clip field to bring up the Select AudioClip dialog. Select congaMusic from the Assets tab, as shown in the following image:
Note that Play On Awake is already checked in the Audio Source component. This instructs Unity to begin playing this audio clip immediately when the scene loads.
This background music should continue to play until the player wins or loses, so check the box labeled Loop, shown below:
This instructs Unity to restart the audio clip when the clip reaches its end.
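If you ever prefer to configure an Audio Source from code rather than in the Inspector, an equivalent setup might look like this sketch (MusicStarter is a hypothetical script name; it assumes an Audio Source component on the same GameObject and a clip assigned in the editor):

```csharp
using UnityEngine;

// Hypothetical alternative to checking the boxes in the Inspector.
public class MusicStarter : MonoBehaviour {

  public AudioClip congaMusic; // assign in the Inspector

  void Start () {
    audio.clip = congaMusic; // requires an Audio Source on this GameObject
    audio.loop = true;       // restart the clip whenever it reaches its end
    audio.Play();            // the scripted equivalent of Play On Awake
  }
}
```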
Play the scene and you’ll finally hear what the cats have been dancing to all this time.
Before you worry about the win and lose scenes, you’ll spice up the gameplay with a few collision sound effects.
Open ZombieController.cs in MonoDevelop and add the following variables to ZombieController:

public AudioClip enemyContactSound;
public AudioClip catContactSound;
These variables store the AudioClips you'll play during specific collisions. You'll assign them later in the editor.
In OnTriggerEnter2D, add the following line inside the block of code that runs when the zombie collides with a cat:

audio.PlayOneShot(catContactSound);
This calls PlayOneShot on audio to play the audio clip stored in catContactSound. But where did audio come from?
Every MonoBehaviour has access to certain built-in fields, like the transform field you've been accessing throughout this tutorial series. If a GameObject contains an AudioSource component, you can access it through the built-in audio field.
Now add the following line to OnTriggerEnter2D, inside the block of code that runs when the zombie collides with an enemy:

audio.PlayOneShot(enemyContactSound);
This code plays enemyContactSound when the zombie collides with an enemy.
Save the file (File\Save) and switch back to Unity.
Select zombie in the Hierarchy. The Zombie Controller (Script) component now contains two new fields in the Inspector:
Set Enemy Contact Sound to the hitEnemy sound asset. Then set Cat Contact Sound to hitCat. If you don’t remember how to set these audio clips, review the steps you used earlier to set congaMusic in the camera’s Audio Source.
Play the scene now and run the zombie into an enemy or a cat. Oops. Unity prints out the following exception each time the zombie collides with someone, letting you know there’s a component missing:
The exception points out the problem and helpfully suggests the solution. ZombieController tried to access the zombie's AudioSource via its audio field, but zombie doesn't currently have an Audio Source.
Correct this now by adding an Audio Source component to zombie. Select zombie in the Hierarchy and choose Component\Audio\Audio Source in Unity’s menu.
The Audio Source's default settings are fine. You won't set an Audio Clip on it because ZombieController provides the clips when it plays them.
Play the scene again and listen as the beach comes to life with Realistic Sound Effects Technology!
Now add some background music to WinScene and LoseScene on your own. Make WinScene play winMusic and make LoseScene play loseMusic. In both cases, make the sound play as soon as the scene starts and do not let it loop.
Open WinScene. Add an Audio Source to Main Camera, then set the Audio Source component’s Audio Clip to winMusic. Be sure Play On Awake is checked and Loop is unchecked.
Open LoseScene. Add an Audio Source to Main Camera, then set the Audio Source component's Audio Clip to loseMusic. Be sure Play On Awake is checked and Loop is unchecked.
And that's it! To get the full Zombie Conga experience, play LaunchScene and then enjoy the music as it kicks in when the gameplay starts. If you win, you'll be rewarded with WinScene's fun image and music, but if you lose you'll see a sad zombie listening to a sad tune. Enjoy!
If you’ve stuck it out through this entire series, congratulations! You’ve made a simple game in Unity, and hopefully along the way you’ve learned a lot about Unity’s new 2D features.
You can download the complete Zombie Conga project here.
To learn more about working with Unity, 2D or otherwise, I recommend taking a look through Unity’s Live Training Archive. Also, take a look through Unity’s documentation, which was recently updated with the release of Unity 4.5.
I hope you enjoyed this series. As usual, please leave any feedback or ask questions in the Comments sections. Or contact me on Twitter.
Now go play some Zombie Conga. And when you’re done playing, go make a game!
Unity 4.3 2D Tutorial: Scrolling, Scenes and Sounds is a post from: Ray Wenderlich
The post Unity 4.3 2D Tutorial: Scrolling, Scenes and Sounds appeared first on Ray Wenderlich.
Your challenge is to make a class named Animal that:

- Has a property called name
- Has an initializer that takes a name parameter and sets the property appropriately
- Has a method called speak()
Then subclass Animal for Dog, Cat, and Fox so that this code:

let animals = [Dog(), Cat(), Fox()]
for animal in animals {
    animal.speak()
}
Has this output:
Woof!
Meow!
Ring-ding-ding-ding-dingeringeding!
Video Tutorial: Introduction to Swift Part 7: Classes is a post from: Ray Wenderlich
The post Video Tutorial: Introduction to Swift Part 7: Classes appeared first on Ray Wenderlich.
Imagine you just took the best selfie of your life. It's spectacular, it's magnificent and worthy of your upcoming feature in Wired. You're going to get thousands of likes, up-votes, karma and re-tweets, because you're absolutely fabulous. Now if only you could do something to this photo to shoot it through the stratosphere…
That’s what image processing is all about! With image processing, you can apply fancy effects to photos such as modifying colors, blending other images on top, and much more.
In this two-part tutorial series, you’re first going to get a basic understanding of image processing. Then, you’ll make a simple app that implements a “spooky image filter” and makes use of four popular image processing methods:
In this first segment of this image processing tutorial, you’ll focus on raw bitmap modification. Once you understand this basic process, you’ll be able to understand what happens with other frameworks. In the second part of the series, you’ll learn about three other methods to make your selfie, and other images, look remarkable.
This tutorial assumes you have basic knowledge of iOS and Objective-C, but you don’t need any previous image processing knowledge.
Before you start coding, it’s important to understand several concepts that relate to image processing. So, sit back, relax and soak up this brief and painless discussion about the inner workings of images.
First things first, meet your new friend who will join you throughout this tutorial… drumroll… Ghosty!
Now, don’t be afraid, Ghosty isn’t a real ghost. In fact, he’s an image. When you break him down, he’s really just a bunch of ones and zeroes. That’s far less frightening than working with an undead subject.
An image is a collection of pixels, and each one is assigned a single, specific color. Images are usually arranged as arrays and you can picture them as 2-dimensional arrays.
Here is a much smaller version of Ghosty, enlarged:
The little “squares” in the image are pixels, and each one shows only one color. When hundreds and thousands of pixels come together, they create a digital image.
There are numerous ways to represent a color. The method that you’re going to use in this tutorial is probably the easiest to grasp: 32-bit RGBA.
As the name entails, 32-bit RGBA stores a color as 32 bits, or 4 bytes. Each byte stores a component, or channel. The four channels are red, green, blue and alpha.
As you probably already know, red, green and blue are a set of primary colors for digital formats. You can create almost any color you want from mixing them the right way.
Since you’re using 8-bits for each channel, the total amount of opaque colors you can actually create by using different RGB values in 32-bit RGBA is 256 * 256 * 256, which is approximately 17 million colors. Whoa man, that’s a lot of color!
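That channel layout and the 17-million-colors arithmetic can be checked in plain C. This is just an illustrative sketch; the function names are mine, not from the tutorial:

```c
#include <stdint.h>

/* Pack 8-bit red, green and blue values into the low three bytes of a
   32-bit pixel, one byte per channel (alpha omitted for simplicity). */
uint32_t pack_rgb(uint8_t r, uint8_t g, uint8_t b) {
    return (uint32_t)r | ((uint32_t)g << 8) | ((uint32_t)b << 16);
}

/* 256 choices per channel gives 256^3 distinct opaque colors. */
uint64_t opaque_color_count(void) {
    return 256ULL * 256ULL * 256ULL;  /* 16,777,216: about 17 million */
}
```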
The alpha channel is quite different from the others. You can think of it as transparency, just like the alpha property of UIView.
The alpha of a color doesn’t really mean anything unless there’s a color behind it; its main job is to tell the graphics processor how transparent the pixel is, and thus, how much of the color beneath it should show through.
You'll dive into this in more depth when you work through the section on blending.
To conclude this section, an image is a collection of pixels, and each pixel is encoded to display a single color. For this lesson, you’ll work with 32-bit RGBA.
Note: Have you ever wondered where the term Bitmap originated? A bitmap is a 2D map of pixels, each one comprised of bits! It’s literally a map of bits. Ah-ha!
So, now you know the basics of representing colors in bytes. There are still three more concepts to cover before you dig in and start coding.
The RGB method to represent colors is an example of a colorspace. It’s one of many methods that stores colors. Another colorspace is grayscale.
As the name entails, all images in the grayscale colorspace are black and white, and you only need to save one value to describe each pixel's color.
The downside of RGB is that it’s not very intuitive for humans to visualize.
For example, what color do you think an RGB of [0, 104, 55] produces?
Taking an educated guess, you might say a teal or skyblue-ish color, which is completely wrong. Turns out it’s the dark green you see on this website!
Two other more popular color spaces are HSV and YUV.
HSV, which stands for Hue, Saturation and Value, is a much more intuitive way to describe colors. You can think of the parts this way: hue is the base color, saturation is how intense or pure that color is, and value is how bright it is.
In this color space, if you found yourself looking at unknown HSV values, it’s much easier to imagine what the color looks like based on the values.
The difference between RGB and HSV is pretty easy to understand, at least once you look at this image:
YUV is another popular color space, because it’s what TVs use.
Television signals came into the world with one channel, Grayscale. Later, two more channels “came into the picture” when color film emerged. Since you’re not going to tinker with YUV in this tutorial, you might want to do some more research on YUV and other color spaces to round out your knowledge. :]
Note: For the same color space, you can still have different representations for colors. One example is 16-bit RGB, which optimizes memory use by using 5 bits for R, 6 bits for G, and 5 bits for B.
Why 6 for green, and 5 for red and blue? This is an interesting question and the answer comes from your eyeball. Human eyes are most sensitive to green and so an extra bit enables us to move more finely between different shades of green.
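As a sketch of that 5-6-5 layout (the function name is mine, not from the tutorial), here is how 16-bit RGB packs the three channels:

```c
#include <stdint.h>

/* Pack into the 16-bit 5-6-5 layout: RRRRRGGGGGGBBBBB.
   Green gets 6 bits (64 levels); red and blue get 5 bits (32 levels). */
uint16_t pack_rgb565(uint8_t r5, uint8_t g6, uint8_t b5) {
    return (uint16_t)(((r5 & 0x1F) << 11) | ((g6 & 0x3F) << 5) | (b5 & 0x1F));
}
```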
Since an image is a 2D map of pixels, you need to specify an origin. Usually it's the top-left corner of the image, with the y-axis pointing downwards, or the bottom-left, with the y-axis pointing upwards.
There’s no “correct” coordinate system, and Apple uses both in different places.
Currently, UIImage and UIView use the top-left corner as the origin, while Core Image and Core Graphics use the bottom-left. This is important to remember so you know where to find the bug when Core Image returns an "upside down" image.
This is the last concept to discuss before coding! With raw images, each pixel is stored individually in memory.
If you do the math on an 8 megapixel image, it would take 8 * 10^6 pixels * 4 bytes/pixel = 32 Megabytes to store! Talk about a data hog!
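The same arithmetic in a quick C check. The 3264×2448 resolution below is my assumption of a typical 8-megapixel sensor, not a figure from the tutorial:

```c
#include <stdint.h>

/* Uncompressed size of a 32-bit RGBA bitmap: 4 bytes per pixel. */
uint64_t raw_image_bytes(uint64_t width, uint64_t height) {
    return width * height * 4ULL;
}
```

For a 3264×2448 image that comes to 31,961,088 bytes, in line with the roughly 32 megabytes quoted above.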
This is where JPEG, PNG and other image formats come into play. These are compression formats for images.
When GPUs render images, they decompress images to their original size, which can take a lot of memory. If your app takes up too much memory, it could be terminated by the OS (which looks to the user like a crash). So be sure to test your app with large images!
Now that you have a basic understanding of the inner workings of images, you’re ready to dive into coding. Today you’re going to work through developing a selfie-revolutionizing app called SpookCam, the app that puts a little Ghosty in your selfie!
Download the starter kit, open the project in Xcode and build and run. On your phone, you should see tiny Ghosty:
In the console, you should see an output like this:
Currently the app is loading the tiny version of Ghosty from the bundle, converting it into a pixel buffer and printing out the brightness of each pixel to the log.
What’s the brightness? It’s simply the average of the red, green and blue components.
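That brightness computation can be sketched in C. The channel layout below assumes red in the lowest byte, matching the bitmap format the starter project uses; the helper name is mine:

```c
#include <stdint.h>

/* Channel extraction for a pixel laid out with red in the lowest byte,
   then green, then blue. */
#define Mask8(x) ((x) & 0xFF)
#define R(x) (Mask8(x))
#define G(x) (Mask8((x) >> 8))
#define B(x) (Mask8((x) >> 16))

/* Brightness: the average of the red, green and blue components. */
double brightness(uint32_t pixel) {
    return (R(pixel) + G(pixel) + B(pixel)) / 3.0;
}
```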
Pretty neat. Notice how the outer pixels have a brightness of 0, which means they should be black. However, since their alpha value is 0, they are actually transparent. To verify this, try setting imageView's background color to red, then build and run again.
Now take a quick glance through the code. You'll notice ViewController.m uses UIImagePickerController to pick images from the album or to take pictures with the camera.
After it selects an image, it calls -setupWithImage:. In this case, it outputs the brightness of each pixel to the log. Locate logPixelsOfImage: inside ViewController.m, and review the first part of the method:
// 1.
CGImageRef inputCGImage = [image CGImage];
NSUInteger width = CGImageGetWidth(inputCGImage);
NSUInteger height = CGImageGetHeight(inputCGImage);

// 2.
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;

UInt32 * pixels;
pixels = (UInt32 *) calloc(height * width, sizeof(UInt32));

// 3.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pixels, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);

// 4.
CGContextDrawImage(context, CGRectMake(0, 0, width, height), inputCGImage);

// 5. Cleanup
CGColorSpaceRelease(colorSpace);
CGContextRelease(context);
Now, a section-by-section recap:

1. Convert the UIImage to a CGImage object, which is needed for the Core Graphics calls. Also, get the image's width and height.
2. Define bytesPerPixel and bitsPerComponent, then calculate the bytesPerRow of the image. Finally, allocate an array pixels to store the pixel data.
3. Create a CGColorSpace and a CGBitmapContext, passing in the pixels pointer as the buffer to store the pixel data this context holds. You'll explore Core Graphics in more depth in a section below.
4. Fill pixels with the pixel data of image in the format you specified when creating context.
5. Clean up colorSpace and context.

Note: When you display an image, the device's GPU decodes the encoding to display it on the screen. To access the data locally, you need to obtain a copy of the pixels, just like you're doing here.
At this point, pixels holds the raw pixel data of image. The next few lines iterate through pixels and print out the brightness:
// 1.
#define Mask8(x) ( (x) & 0xFF )
#define R(x) ( Mask8(x) )
#define G(x) ( Mask8(x >> 8 ) )
#define B(x) ( Mask8(x >> 16) )

NSLog(@"Brightness of image:");

// 2.
UInt32 * currentPixel = pixels;
for (NSUInteger j = 0; j < height; j++) {
    for (NSUInteger i = 0; i < width; i++) {
        // 3.
        UInt32 color = *currentPixel;
        printf("%3.0f ", (R(color)+G(color)+B(color))/3.0);
        // 4.
        currentPixel++;
    }
    printf("\n");
}
Here's what's going on:

1. Define some macros that mask out and shift each 8-bit channel from a 32-bit pixel: red in the lowest byte, then green, then blue.
2. Use two for loops to iterate through the pixels. This could also be done with a single for loop iterating from 0 to width * height, but it's easier to reason about an image that has two dimensions.
3. Dereference currentPixel to get the color of the current pixel and log the brightness of the pixel.
4. Increment currentPixel to move on to the next pixel. If you're rusty on pointer arithmetic, just remember this: since currentPixel is a pointer to UInt32, when you add 1 to the pointer, it moves forward by 4 bytes (32 bits), to bring you to the next pixel.
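Step 4's pointer arithmetic translates directly into plain C; a minimal sketch (function name and buffer contents are mine, for illustration only):

```c
#include <stddef.h>
#include <stdint.h>

/* Walk a width-by-height buffer of 32-bit pixels with a single pointer.
   Each currentPixel++ advances by sizeof(uint32_t) = 4 bytes, i.e. one
   whole pixel, because of the pointer's type. */
size_t walk_pixels(const uint32_t *pixels, size_t width, size_t height) {
    const uint32_t *currentPixel = pixels;
    size_t visited = 0;
    for (size_t j = 0; j < height; j++) {
        for (size_t i = 0; i < width; i++) {
            (void)*currentPixel;  /* read the pixel, as the log loop does */
            currentPixel++;       /* next pixel, 4 bytes ahead */
            visited++;
        }
    }
    return visited;
}
```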
Note: An alternative to the last method is to declare currentPixel as a pointer to an 8-bit type (i.e. char). This way, each time you increment, you move to the next component of the image. By dereferencing it, you get the 8-bit value of that component.
At this point, the starter project is simply logging raw image data, but not modifying anything yet. That’s your job for the rest of the tutorial!
Of the four methods explored in this series, you’ll spend the most time on this one because it covers the “first principles” of image processing. Mastering this method will allow you to understand what all the other libraries do.
In this method, you’ll loop through each pixel, as the starter kit already does, but this time assign new values to each pixel.
The advantage of this method is that it's easy to implement and understand; the disadvantage is that it scales less than elegantly to larger images and more complicated effects.
As you see in the starter app, the ImageProcessor class already exists. Hook it up to the main ViewController by replacing -setupWithImage: with the following code in ViewController.m:
- (void)setupWithImage:(UIImage*)image {
    UIImage * fixedImage = [image imageWithFixedOrientation];
    self.workingImage = fixedImage;

    // Commence with processing!
    [ImageProcessor sharedProcessor].delegate = self;
    [[ImageProcessor sharedProcessor] processImage:fixedImage];
}
Also comment out the following line of code in -viewDidLoad:
// [self setupWithImage:[UIImage imageNamed:@"ghost_tiny.png"]];
Now take a look at ImageProcessor.m. As you can see, ImageProcessor is a singleton object that calls -processUsingPixels: on an input image, then returns the output through the ImageProcessorDelegate.
-processUsingPixels: is currently a copy of the code you looked at previously that gives you access to the pixels of inputImage. Notice the two extra macros A(x) and RGBAMake(r,g,b,a) that are defined for convenience.
Now build and run. Choose an image from your album (or take a photo) and you should see it appear in your view like this:
That looks way too relaxing, time to bring in Ghosty!
Before the return statement in processUsingPixels:, add the following code to get a CGImageRef of Ghosty:

UIImage * ghostImage = [UIImage imageNamed:@"ghost"];
CGImageRef ghostCGImage = [ghostImage CGImage];
Now, do some math to figure out the rect where you want to put Ghosty inside the input image.
CGFloat ghostImageAspectRatio = ghostImage.size.width / ghostImage.size.height;
NSInteger targetGhostWidth = inputWidth * 0.25;
CGSize ghostSize = CGSizeMake(targetGhostWidth, targetGhostWidth / ghostImageAspectRatio);
CGPoint ghostOrigin = CGPointMake(inputWidth * 0.5, inputHeight * 0.2);
This code resizes Ghosty to take up 25% of the input's width, and places his origin (top-left corner) at ghostOrigin.
The next step is to get the pixel buffer of Ghosty, this time with scaling:
NSUInteger ghostBytesPerRow = bytesPerPixel * ghostSize.width;
UInt32 * ghostPixels = (UInt32 *)calloc(ghostSize.width * ghostSize.height, sizeof(UInt32));

CGContextRef ghostContext = CGBitmapContextCreate(ghostPixels, ghostSize.width, ghostSize.height, bitsPerComponent, ghostBytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGContextDrawImage(ghostContext, CGRectMake(0, 0, ghostSize.width, ghostSize.height), ghostCGImage);
This is similar to how you got pixels from inputImage. However, because you draw Ghosty into a context with a smaller width and height, he comes out a little smaller.
Now you’re ready to blend Ghosty into your image, which makes this the perfect time to go over blending.
Blending: As mentioned before, each color has an alpha value that indicates transparency. However, when you’re creating an image, each pixel has exactly one color.
So how do you assign a pixel if it has a background color and a “semi-transparent” color on top of it?
The answer is alpha blending. The color on top uses a formula and its alpha value to blend with the color behind it. Here you treat alpha as a float between 0 and 1:
NewColor = TopColor * TopColor.Alpha + BottomColor * (1 - TopColor.Alpha)
This is the standard linear interpolation equation.
- When TopColor.Alpha is 1, NewColor is equal to TopColor.
- When TopColor.Alpha is 0, NewColor is equal to BottomColor.
- When TopColor.Alpha is between 0 and 1, NewColor is a blend of TopColor and BottomColor.

A popular optimization is to use premultiplied alpha. The idea is to premultiply TopColor by TopColor.Alpha, thereby saving that multiplication in the formula above.
As trivial as that sounds, it offers a noticeable performance boost when iterating through millions of pixels to perform blending.
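Both the plain formula and the premultiplied variant can be sketched in C for a single channel, treating alpha as a float in [0, 1] as above (function names are mine):

```c
/* Standard "over" blend for one channel:
   new = top * alpha + bottom * (1 - alpha). */
float blend(float top, float bottom, float topAlpha) {
    return top * topAlpha + bottom * (1.0f - topAlpha);
}

/* Premultiplied variant: top is stored already multiplied by its alpha,
   saving one multiplication per channel in the inner loop. */
float blend_premultiplied(float premultipliedTop, float bottom, float topAlpha) {
    return premultipliedTop + bottom * (1.0f - topAlpha);
}
```

Both versions produce the same result; the premultiplied one just pays the top * alpha cost once, up front, instead of on every blend.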
Okay, back to Ghosty.
As with most bitmap image processing algorithms, you need some for loops to go through all the pixels. However, you only need to loop through the pixels you need to change.

Add this code to the bottom of processUsingPixels:, again right before the return statement:
NSUInteger offsetPixelCountForInput = ghostOrigin.y * inputWidth + ghostOrigin.x;
for (NSUInteger j = 0; j < ghostSize.height; j++) {
    for (NSUInteger i = 0; i < ghostSize.width; i++) {
        UInt32 * inputPixel = inputPixels + j * inputWidth + i + offsetPixelCountForInput;
        UInt32 inputColor = *inputPixel;

        UInt32 * ghostPixel = ghostPixels + j * (int)ghostSize.width + i;
        UInt32 ghostColor = *ghostPixel;

        // Do some processing here
    }
}
Notice how you only loop through the number of pixels in Ghosty's image, and offset into the input image by offsetPixelCountForInput. Remember that although you're reasoning about images as 2-D arrays, in memory they are actually 1-D arrays.
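That 2-D-to-1-D mapping is worth pinning down; a one-line C sketch (the function name is mine):

```c
#include <stddef.h>

/* A 2-D coordinate (x, y) maps into the 1-D pixel buffer at y * width + x. */
size_t pixel_index(size_t x, size_t y, size_t width) {
    return y * width + x;
}
```

The loop's expression inputPixels + j * inputWidth + i + offsetPixelCountForInput is exactly pixel_index(ghostOrigin.x + i, ghostOrigin.y + j, inputWidth), since offsetPixelCountForInput is ghostOrigin.y * inputWidth + ghostOrigin.x.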
Next, fill in this code after the comment Do some processing here to do the actual blending:
// Blend the ghost with 50% alpha
CGFloat ghostAlpha = 0.5f * (A(ghostColor) / 255.0);
UInt32 newR = R(inputColor) * (1 - ghostAlpha) + R(ghostColor) * ghostAlpha;
UInt32 newG = G(inputColor) * (1 - ghostAlpha) + G(ghostColor) * ghostAlpha;
UInt32 newB = B(inputColor) * (1 - ghostAlpha) + B(ghostColor) * ghostAlpha;

// Clamp, not really useful here :p
newR = MAX(0,MIN(255, newR));
newG = MAX(0,MIN(255, newG));
newB = MAX(0,MIN(255, newB));

*inputPixel = RGBAMake(newR, newG, newB, A(inputColor));
There are two points to note in this part. First, you scale Ghosty's alpha by 0.5 to make him semi-transparent, and then blend with the alpha blend formula previously discussed. Second, you clamp each new channel to the range 0 to 255; the blend can't overflow here, but clamping is a good habit in this kind of pixel code.

To test this code, add this code to the bottom of processUsingPixels:, replacing the current return statement:
// Create a new UIImage
CGImageRef newCGImage = CGBitmapContextCreateImage(context);
UIImage * processedImage = [UIImage imageWithCGImage:newCGImage];

return processedImage;
This creates a new UIImage from the context and returns it. You're going to ignore the potential memory leak here for now.
Build and run. You should see Ghosty floating in your image like, well, a ghost:
Good work so far, this app is going viral for sure!
One last effect to go. Try implementing the black and white filter yourself. To do this, set each pixel’s red, green and blue components to the average of the three channels in the original, just like how you printed out Ghosty’s brightness in the beginning.
Write this code before the // Create a new UIImage comment you added in the previous step.
Think you got it? Check your code here.
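If you'd like to compare notes, here is the averaging idea as a per-pixel sketch in C, using the same channel layout as the R/G/B macros from earlier (the function name is mine, and this is one possible approach, not necessarily the tutorial's exact solution):

```c
#include <stdint.h>

#define Mask8(x) ((x) & 0xFF)

/* Set red, green and blue to the average of the original three channels,
   leaving alpha untouched (red in the lowest byte, alpha in the highest). */
uint32_t to_grayscale(uint32_t pixel) {
    uint32_t r = Mask8(pixel);
    uint32_t g = Mask8(pixel >> 8);
    uint32_t b = Mask8(pixel >> 16);
    uint32_t a = Mask8(pixel >> 24);
    uint32_t avg = (r + g + b) / 3;
    return avg | (avg << 8) | (avg << 16) | (a << 24);
}
```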
The very last step is to clean up your memory. ARC cannot manage CGImageRefs and CGContexts for you. Add this to the end of the function, before the return statement:
// Cleanup!
CGColorSpaceRelease(colorSpace);
CGContextRelease(context);
CGContextRelease(ghostContext);

free(inputPixels);
free(ghostPixels);
Build and run. Be prepared to be spooked out by the result:
Congratulations! You just finished your first image-processing application. You can download a working version of the project at this point here.
That wasn't too hard, right? You can play around with the code inside the for loops to create your own effects; see what else you can come up with.
If you’ve completed the first project, you should have a pretty good grasp on the basic concepts of image processing. Now you can set out and explore simpler and faster ways to accomplish these same effects.
In the next part of the series, you'll replace -processUsingPixels: with three new functions that perform the same task using different libraries. Definitely check it out!
In the meantime, if you have any questions or comments about the series so far, please join the forum discussion below!
Image Processing in iOS Part 1: Raw Bitmap Modification is a post from: Ray Wenderlich
The post Image Processing in iOS Part 1: Raw Bitmap Modification appeared first on Ray Wenderlich.
Welcome to part two of this tutorial series about image processing in iOS!
In the first part of the series, you learned how to access and modify the raw pixel values of an image.
In this second and final part of the series, you’ll learn how to perform this same task by using other libraries: Core Graphics, Core Image and GPUImage to be specific. You’ll learn about the pros and cons of each, so that you can make the best choice for your situation.
This tutorial picks up where the previous one left off. If you don’t have the project already, you can download it here.
If you fared well in part one, you’re going to thoroughly enjoy this part! Now that you understand the principles, you’ll fully appreciate how simple these libraries make image processing.
Core Graphics is Apple’s API for drawing based on the Quartz 2D drawing engine. It provides a low-level API that may look familiar if you’re acquainted with OpenGL.
If you've ever overridden the -drawRect: method for a view, you've interacted with Core Graphics, which provides several functions to draw objects, gradients and other cool stuff to your view.
There are tons of Core Graphics tutorials on this site already, such as this one and this one. So, in this tutorial, you’re going to focus on how to use Core Graphics to do some basic image processing.
Before you get started, you need to get familiar with the concept of a Graphics Context.
Concept: Graphics contexts are common to most types of rendering and are a core concept in OpenGL and Core Graphics. Think of a context simply as a global state object that holds all the information for drawing.
In terms of Core Graphics, this includes the current fill color, stroke color, transforms, masks, where to draw and much more. In iOS, there are also other different types of contexts such as PDF contexts, which allow you to draw to a PDF file.
In this tutorial you’re only going to use a Bitmap context, which draws to a bitmap.
Inside the -drawRect: method, you'll find a context that is ready for use. This is why you can call UIGraphicsGetCurrentContext() and draw to it directly. The system has set this up so that you're drawing directly to the view to be rendered.

Outside of the -drawRect: method, there is usually no graphics context available. You can create one as you did in the first project using CGBitmapContextCreate(), or you can use UIGraphicsBeginImageContext() and grab the created context using UIGraphicsGetCurrentContext().
This is called offscreen-rendering, as the graphics you’re drawing are not directly presented anywhere. Instead, they render to an off-screen buffer.
In Core Graphics, you can then get a UIImage from the context and show it on screen. With OpenGL, you can directly swap this buffer with the one currently rendered to screen and display it directly.
Image processing using Core Graphics takes advantage of this off-screen rendering to render your image into a buffer, apply any effects you want and grab the image from the context once you’re done.
All right, enough concepts. Now it's time to make some magic with code! Add the following new method to ImageProcessor.m:
- (UIImage *)processUsingCoreGraphics:(UIImage*)input {
    CGRect imageRect = {CGPointZero, input.size};
    NSInteger inputWidth = CGRectGetWidth(imageRect);
    NSInteger inputHeight = CGRectGetHeight(imageRect);

    // 1) Calculate the location of Ghosty
    UIImage * ghostImage = [UIImage imageNamed:@"ghost.png"];
    CGFloat ghostImageAspectRatio = ghostImage.size.width / ghostImage.size.height;
    NSInteger targetGhostWidth = inputWidth * 0.25;
    CGSize ghostSize = CGSizeMake(targetGhostWidth, targetGhostWidth / ghostImageAspectRatio);
    CGPoint ghostOrigin = CGPointMake(inputWidth * 0.5, inputHeight * 0.2);
    CGRect ghostRect = {ghostOrigin, ghostSize};

    // 2) Draw your image into the context.
    UIGraphicsBeginImageContext(input.size);
    CGContextRef context = UIGraphicsGetCurrentContext();

    CGAffineTransform flip = CGAffineTransformMakeScale(1.0, -1.0);
    CGAffineTransform flipThenShift = CGAffineTransformTranslate(flip, 0, -inputHeight);
    CGContextConcatCTM(context, flipThenShift);
    CGContextDrawImage(context, imageRect, [input CGImage]);

    CGContextSetBlendMode(context, kCGBlendModeSourceAtop);
    CGContextSetAlpha(context, 0.5);
    CGRect transformedGhostRect = CGRectApplyAffineTransform(ghostRect, flipThenShift);
    CGContextDrawImage(context, transformedGhostRect, [ghostImage CGImage]);

    // 3) Retrieve your processed image
    UIImage * imageWithGhost = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // 4) Draw your image into a grayscale context
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    context = CGBitmapContextCreate(nil, inputWidth, inputHeight, 8, 0, colorSpace, (CGBitmapInfo)kCGImageAlphaNone);
    CGContextDrawImage(context, imageRect, [imageWithGhost CGImage]);
    CGImageRef imageRef = CGBitmapContextCreateImage(context);
    UIImage * finalImage = [UIImage imageWithCGImage:imageRef];

    // 5) Cleanup
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(context);
    CFRelease(imageRef);

    return finalImage;
}
That’s quite a bit of stuff. Let’s go over it section by section.
1) Calculate the location of Ghosty
UIImage * ghostImage = [UIImage imageNamed:@"ghost.png"]; CGFloat ghostImageAspectRatio = ghostImage.size.width / ghostImage.size.height; NSInteger targetGhostWidth = inputWidth * 0.25; CGSize ghostSize = CGSizeMake(targetGhostWidth, targetGhostWidth / ghostImageAspectRatio); CGPoint ghostOrigin = CGPointMake(inputWidth * 0.5, inputHeight * 0.2); CGRect ghostRect = {ghostOrigin, ghostSize}; |
Create a new CGContext
.
As discussed before, this creates an “off-screen” context. Remember how the coordinate system for CGContext
uses the bottom-left corner as the origin, as opposed to UIImage
, which uses the top-left?
Interestingly, if you use UIGraphicsBeginImageContext()
to create a context, the system flips the coordinates for you, resulting in the origin being at the top-left. Thus, you’ll need to apply a transformation to your context to flip it back so your CGImage
will draw properly.
If you drew a UIImage
directly to this context, you don’t need to perform this transformation, as the coordinate systems would match up. Setting the transform to the context will apply this transform to all the drawing you do afterwards.
2) Draw your image into the context.
UIGraphicsBeginImageContext(input.size); CGContextRef context = UIGraphicsGetCurrentContext(); CGAffineTransform flip = CGAffineTransformMakeScale(1.0, -1.0); CGAffineTransform flipThenShift = CGAffineTransformTranslate(flip,0,-inputHeight); CGContextConcatCTM(context, flipThenShift); CGContextDrawImage(context, imageRect, [input CGImage]); CGContextSetBlendMode(context, kCGBlendModeSourceAtop); CGContextSetAlpha(context,0.5); CGRect transformedGhostRect = CGRectApplyAffineTransform(ghostRect, flipThenShift); CGContextDrawImage(context, transformedGhostRect, [ghostImage CGImage]); |
After drawing the image, you set the alpha of your context to 0.5
. This only affects future draws, so the input image drawing uses full alpha.
You also need to set the blend mode to kCGBlendModeSourceAtop
.
This sets up the context so it uses the same alpha blending formula you used before. After setting up these parameters, flip Ghosty’s rect and draw him(it?) into the image.
3) Retrieve your processed image
UIImage * imageWithGhost = UIGraphicsGetImageFromCurrentImageContext(); UIGraphicsEndImageContext(); |
To convert your image to Black and White, you’re going to create a new CGContext
that uses a grayscale colorspace. This will convert anything you draw in this context into grayscale.
Since you’re using CGBitmapContextCreate()
to create this context, the coordinate system has the origin in the bottom-left corner, and you don’t need to flip it to draw your CGImage
.
4) Draw your image into a grayscale context.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray(); context = CGBitmapContextCreate(nil, inputWidth, inputHeight, 8, 0, colorSpace, (CGBitmapInfo)kCGImageAlphaNone); CGContextDrawImage(context, imageRect, [imageWithGhost CGImage]); CGImageRef imageRef = CGBitmapContextCreateImage(context); UIImage * finalImage = [UIImage imageWithCGImage:imageRef]; |
Retrieve your final image. See how you can’t use UIGraphicsGetImageFromCurrentImageContext(), because you never set this grayscale context as the current graphics context? Instead, you created it yourself, so you’ll need to use CGBitmapContextCreateImage() to render the image from this context.
5) Cleanup.
```objc
CGColorSpaceRelease(colorSpace);
CGContextRelease(context);
CFRelease(imageRef);

return finalImage;
```
At the end, you have to release everything you created. And that’s it – you’re done!
Memory Usage: When performing image processing, pay close attention to memory usage. As discussed in part one, an 8 megapixel image takes a whopping 32 megabytes of memory. Try to avoid keeping several copies of the same image in memory at once.
Notice how you need to release the context the second time but not the first? In the first case, you got your context using UIGraphicsGetCurrentContext(). The key word here is ‘get’: ‘get’ means that you’re getting a reference to the current context, but you don’t own it. In the second case, you called CGBitmapContextCreate(), and ‘create’ means that you own the object and have to manage its lifetime. This is also why you need to release imageRef: you created it using CGBitmapContextCreateImage().
Good job! Now, replace the first line in processImage: to call this new method instead of processUsingPixels::
```objc
UIImage * outputImage = [self processUsingCoreGraphics:inputImage];
```
Build and run. You should see the exact same output as before.
Such spookiness! You can download a complete project with the code described in this section here.
In this simple example, it doesn’t seem like using Core Graphics is that much easier to implement than directly manipulating the pixels.
However, imagine performing a more complex operation, such as rotating an image. In pixels, that would require some rather complicated math.
However, with Core Graphics, you just apply a rotation transform to the context before drawing the image. Hence, the more complicated your processing becomes, the more time Core Graphics saves.
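To get a feel for the math a transform spares you, here is a Python sketch of what rotating by hand involves: every pixel coordinate has to be run through the standard 2D rotation matrix, which is essentially what a rotation transform encapsulates for you. The function name here is illustrative:

```python
import math

def rotate_point(x, y, angle_radians):
    """Rotate a point about the origin with the standard 2D rotation
    matrix: (x', y') = (x*cos - y*sin, x*sin + y*cos)."""
    cos_a, sin_a = math.cos(angle_radians), math.sin(angle_radians)
    return (x * cos_a - y * sin_a, x * sin_a + y * cos_a)

# Rotating the point (100, 0) by 90 degrees moves it to (0, 100):
x, y = rotate_point(100, 0, math.pi / 2)
print(round(x), round(y))  # 0 100
```

Doing this per pixel also means handling resampling and out-of-bounds coordinates yourself; with Core Graphics all of that is handled for you.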
Two methods down, and two to go. Next up: Core Image!
There are also several great Core Image tutorials on this site already, such as this one, from the iOS 6 feast. We also have several chapters on Core Image in our iOS by Tutorials series.
Here, you’ll see some discussion of how Core Image compares to the other methods covered in this tutorial.
Core Image is Apple’s solution to image processing. It hides all the low-level pixel manipulation and replaces it with high-level filters.
The best part of Core Image is that it has crazy awesome performance when compared to raw pixel manipulation or Core Graphics. The library uses a mix of CPU and GPU processing to provide near-real-time performance.
Apple also provides a huge selection of pre-made filters. On OSX, you can even create your own filters by using Core Image Kernel Language, which is very similar to GLSL, the language for shaders in OpenGL. Note that at the time of writing this tutorial, you cannot write your own Core Image filters on iOS (only Mac OS X).
There are still some effects that are better to do with Core Graphics. As you’ll see in the code, you get the most out of Core Image by using Core Graphics alongside it.
Add this new method to ImageProcessor.m:
```objc
- (UIImage *)processUsingCoreImage:(UIImage*)input {
  CIImage * inputCIImage = [[CIImage alloc] initWithImage:input];

  // 1. Create a grayscale filter
  CIFilter * grayFilter = [CIFilter filterWithName:@"CIColorControls"];
  [grayFilter setValue:@(0) forKeyPath:@"inputSaturation"];

  // 2. Create your ghost filter
  // Use Core Graphics for this
  UIImage * ghostImage = [self createPaddedGhostImageWithSize:input.size];
  CIImage * ghostCIImage = [[CIImage alloc] initWithImage:ghostImage];

  // 3. Apply alpha to Ghosty
  CIFilter * alphaFilter = [CIFilter filterWithName:@"CIColorMatrix"];
  CIVector * alphaVector = [CIVector vectorWithX:0 Y:0 Z:0.5 W:0];
  [alphaFilter setValue:alphaVector forKeyPath:@"inputAVector"];

  // 4. Alpha blend filter
  CIFilter * blendFilter = [CIFilter filterWithName:@"CISourceAtopCompositing"];

  // 5. Apply your filters
  [alphaFilter setValue:ghostCIImage forKeyPath:@"inputImage"];
  ghostCIImage = [alphaFilter outputImage];
  [blendFilter setValue:ghostCIImage forKeyPath:@"inputImage"];
  [blendFilter setValue:inputCIImage forKeyPath:@"inputBackgroundImage"];
  CIImage * blendOutput = [blendFilter outputImage];
  [grayFilter setValue:blendOutput forKeyPath:@"inputImage"];
  CIImage * outputCIImage = [grayFilter outputImage];

  // 6. Render your output image
  CIContext * context = [CIContext contextWithOptions:nil];
  CGImageRef outputCGImage = [context createCGImage:outputCIImage
                                           fromRect:[outputCIImage extent]];
  UIImage * outputImage = [UIImage imageWithCGImage:outputCGImage];
  CGImageRelease(outputCGImage);

  return outputImage;
}
```
Look at how different this code looks compared to the previous methods.
With Core Image, you set up a variety of filters to process your images – here you use a CIColorControls filter for grayscale, and CIColorMatrix and CISourceAtopCompositing for blending – and then chain them all together.
Now, take a walk through this function to learn more about each step.
1. Create a CIColorControls filter and set its inputSaturation to 0. As you might recall, saturation is a channel in HSV color space that determines how much color there is. Here a value of 0 indicates grayscale.
2. Create a CIColorMatrix filter with its alphaVector set to [0 0 0.5 0]. This will multiply Ghosty’s alpha by 0.5.
3. Use a CISourceAtopCompositing filter to perform alpha blending.
4. Convert the output CIImage to a CGImage and create the final UIImage. Remember to free your memory afterwards.

This method uses a helper function called -createPaddedGhostImageWithSize:, which uses Core Graphics to create a scaled version of Ghosty padded to be 25% the width of the input image. Can you implement this function by yourself?
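As a hint before you peek at the solution, the sizing math is the crux: scale Ghosty so his width is 25% of the input’s width, keep his aspect ratio, and place him in a canvas the size of the input. Here is a Python sketch of just that geometry (centering Ghosty is an assumption of this sketch; the actual solution may position him differently):

```python
def padded_ghost_rect(input_w, input_h, ghost_w, ghost_h):
    """Compute a frame for Ghosty inside a canvas the size of the input
    image: 25% of the input width, aspect ratio preserved, centered."""
    target_w = input_w * 0.25
    target_h = target_w * ghost_h / ghost_w  # preserve aspect ratio
    x = (input_w - target_w) / 2
    y = (input_h - target_h) / 2
    return (x, y, target_w, target_h)

# A 400x300 Ghosty padded for an 800x600 input image:
print(padded_ghost_rect(800, 600, 400, 300))  # (300.0, 225.0, 200.0, 150.0)
```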
Give it a shot. If you get stuck, the solution is included in the downloadable project for this section.
Finally, replace the first line in processImage: to call your new method:
```objc
UIImage * outputImage = [self processUsingCoreImage:inputImage];
```
Now build and run. Again, you should see the same spooky image.
You can download a project with all the code in this section here.
Core Image provides a large number of filters you can use to create almost any effect you want. It’s a good friend to have when you’re processing images.
Now onto the last solution, which is incidentally the only third-party option explored in this tutorial: GPUImage.
GPUImage is a well-maintained library for GPU-based image processing on iOS. It won a place in the top 10 best iOS libraries on this site!
GPUImage hides all of the complex code required for using OpenGL ES on iOS, and presents you with an extremely simple interface to process images at blazing speeds. The performance of GPUImage even beats Core Image on many occasions, though Core Image still wins for a few functions.
To start with GPUImage, you’ll need to include it in your project. This can be done using CocoaPods, by building the static library, or by embedding the source code directly into your project.
The project app already contains a static framework, which was built externally. It’s easy to copy into the project when you follow these steps:
Instructions:
Run build.sh at the command line. The resulting library and header files will go to build/Release-iphone.
You may also change the version of the iOS SDK by changing the IOSSDK_VER variable in build.sh (all available versions can be found using xcodebuild -showsdks).
To embed the source into your project, follow these instructions from the Github repo:
Instructions:
Drag the GPUImage.xcodeproj file into your application’s Xcode project to embed the framework in your project.
GPUImage needs a few other frameworks to link into your application, so you’ll need to add the following as linked libraries in your application target:
CoreMedia
CoreVideo
OpenGLES
AVFoundation
QuartzCore
Then you need to find the framework headers. Within your project’s build settings, set the Header Search Paths to the relative path from your application to the framework/subdirectory within the GPUImage source directory. Make this header search path recursive.
After you add GPUImage to your project, make sure to include the header file in ImageProcessor.m.
If you included the static framework, use #import <GPUImage/GPUImage.h>. If you included the project directly, use #import "GPUImage.h" instead.
Add the new processing function to ImageProcessor.m:
```objc
- (UIImage *)processUsingGPUImage:(UIImage*)input {
  // 1. Create the GPUImagePictures
  GPUImagePicture * inputGPUImage = [[GPUImagePicture alloc] initWithImage:input];

  UIImage * ghostImage = [self createPaddedGhostImageWithSize:input.size];
  GPUImagePicture * ghostGPUImage = [[GPUImagePicture alloc] initWithImage:ghostImage];

  // 2. Set up the filter chain
  GPUImageAlphaBlendFilter * alphaBlendFilter = [[GPUImageAlphaBlendFilter alloc] init];
  alphaBlendFilter.mix = 0.5;

  [inputGPUImage addTarget:alphaBlendFilter atTextureLocation:0];
  [ghostGPUImage addTarget:alphaBlendFilter atTextureLocation:1];

  GPUImageGrayscaleFilter * grayscaleFilter = [[GPUImageGrayscaleFilter alloc] init];
  [alphaBlendFilter addTarget:grayscaleFilter];

  // 3. Process & grab output image
  [grayscaleFilter useNextFrameForImageCapture];
  [inputGPUImage processImage];
  [ghostGPUImage processImage];

  UIImage * output = [grayscaleFilter imageFromCurrentFramebuffer];

  return output;
}
```
Hey! That looks straightforward. Here’s what’s going on:
1. Create the GPUImagePicture objects; use -createPaddedGhostImageWithSize: as a helper again. GPUImage uploads the images into GPU memory as textures when you do this.
2. Set up the filter chain. The GPUImageAlphaBlendFilter takes two inputs, in this case the top image and bottom image, so the texture locations matter. -addTarget:atTextureLocation: sets the textures to the correct inputs.
3. Call -useNextFrameForImageCapture on the last filter in the chain and then -processImage on both inputs. This makes sure the filter knows that you want to grab the image from it and retains it for you.

Finally, replace the first line in processImage: to call this new method:
```objc
UIImage * outputImage = [self processUsingGPUImage:inputImage];
```
And that’s it. Build and run. Ghosty is looking as good as ever!
As you can see, GPUImage is easy to work with. You can also create your own filters by writing your own shaders in GLSL. Check out the documentation for GPUImage here for more on how to use this framework.
Download a version of the project with all the code in this section here.
Congratulations! You’ve implemented SpookCam in four different ways. Here are all the download links again for your convenience:
Of course, there are a few other interesting image processing concepts aside from the solutions presented in this tutorial. For example, face detection: Apple provides the CIDetector class for this process.

Last but not least, no image processing tutorial is complete without mentioning OpenCV.
OpenCV is the de facto library for all things image processing, and it has an iOS build! However, it is far from lightweight, and is best suited to more technical tasks such as feature tracking. Learn all about OpenCV here.
There is also a great tutorial about using OpenCV right here on this site.
The true next step is to pick a method and start creating your very own revolutionary selfie app. Never stop learning!
I really hope you enjoyed this tutorial. If you have any questions or comments, please let us know in the forum discussion below.
Attribution: Photos courtesy of Free Range Stock, by Roxana Gonzalez.
Image Processing in iOS Part 2: Core Graphics, Core Image, and GPUImage is a post from: Ray Wenderlich
The post Image Processing in iOS Part 2: Core Graphics, Core Image, and GPUImage appeared first on Ray Wenderlich.
Today marks an exciting day – we just released our 500th tutorial! :]
This blog is now 4.5 years old! Just think about how much iOS development has changed since then:
Remember manually calling retain, release, and autorelease? Remember writing tons of @synthesize statements, and using a tool called Accessorizer?
I also have some fond memories about this blog:
It’s been an amazing ride, and none of this would have happened without the awesome team of authors, editors, forum subject matter experts, and translators who work hard every day to make this site possible.
It also couldn’t have happened without the support of our readers – like you! Thank you so much for reading this site and being a part of our community.
Remember, 500 tutorials is just the beginning, especially with the advent of Swift!
My cousin Ry Bristow (who also happens to be our first ever summer intern) put together this video to show you what we mean:
Thank you so much for reading and supporting the work we do on this blog – we’ll keep the tutorials coming!
To celebrate, we are giving away some free copies of 1 PDF book of your choice from this site! To enter, simply leave a comment on this post – we’ll select 3 random winners after 48 hours.
Thanks again all, and happy 500!
Our 500th Tutorial: A Reflection and Giveaway! is a post from: Ray Wenderlich
The post Our 500th Tutorial: A Reflection and Giveaway! appeared first on Ray Wenderlich.
Update note: This tutorial was fully updated for iOS 8 and Swift by Caroline Begbie. Original post by Ray Wenderlich.
If you need to detect gestures in your app, such as taps, pinches, pans, or rotations, it’s extremely easy with Swift and the built-in UIGestureRecognizer classes.
In this tutorial, you’ll learn how you can easily add gesture recognizers to your app, both within the Storyboard editor in Xcode and programmatically. You’ll create a simple app where you can move a monkey and a banana around by dragging, pinching, and rotating with the help of gesture recognizers.
You’ll also try out some cool extras like:
This tutorial assumes you are familiar with the basic concepts of Storyboards. If you are new to them, you may wish to check out our Storyboard tutorials first.
I think the monkey just gave us the thumbs up gesture, so let’s get started! :]
Note: At the time of writing this tutorial, our understanding is we cannot post screenshots of Xcode 6 since it is still in beta. Therefore, we are suppressing screenshots in this Swift tutorial until we are sure it is OK.
Open up Xcode 6 and create a new project with the iOS\Application\Single View Application template. For the Product Name enter MonkeyPinch, for the Language choose Swift, and for the Devices choose iPhone. Click Next, choose the folder to save your project, and click Create.
Before you go any further, download the resources for this project and drag the six files into your project. Tick Destination: Copy items if needed, choose Create groups, and click Finish.
Next, open up Main.storyboard. View controllers are now square by default, so that you can use just one storyboard for multiple devices. Generally you will layout your storyboards using constraints and size classes. But because this app is only going to be for the iPhone, you can disable size classes. On the File Inspector panel (View Menu > Utilities > Show File Inspector), untick Use Size Classes. Choose Keep size class data for: iPhone, and click Disable Size Classes.
Your view will now reflect the size and proportions of the iPhone 5.
Drag an Image View into the View Controller. Set the image to monkey.png, and resize the Image View to match the size of the image itself by selecting Editor Menu > Size to Fit Content. Then drag a second image view in, set it to banana.png, and also resize it. Arrange the image views however you like in the view controller. At this point you should have something like this:
That’s it for the UI for this app – now you’ll add a gesture recognizer so you can drag those image views around!
Before you get started, here’s a brief overview of how you use UIGestureRecognizers and why they’re so handy.
In the old days before UIGestureRecognizers, if you wanted to detect a gesture such as a swipe, you’d have to register for notifications on every touch within a UIView – such as touchesBegan, touchesMoved, and touchesEnded. Each programmer wrote slightly different code to detect touches, resulting in subtle bugs and inconsistencies across apps.
In iOS 3.0, Apple came to the rescue with UIGestureRecognizer classes! These provide a default implementation of detecting common gestures such as taps, pinches, rotations, swipes, pans, and long presses. By using them, not only does it save you a ton of code, but it makes your apps work properly too! Of course you can still use the old touch notifications instead, if your app requires them.
Using UIGestureRecognizers is extremely simple. You just perform the following steps:
1. Create a gesture recognizer and attach it to a view. When a gesture occurs on that view, the recognizer detects it.
2. Write a callback function, which the recognizer calls when the gesture starts, changes, or ends.
You can perform these two steps programmatically (which you’ll do later on in this tutorial), but it’s even easier to add a gesture recognizer visually with the Storyboard editor. So now to add your first gesture recognizer into this project!
Still with Main.storyboard open, look inside the Object Library for the Pan Gesture Recognizer, and drag it on top of the monkey Image View. This both creates the pan gesture recognizer, and associates it with the monkey Image View. You can verify you got it connected OK by clicking on the monkey Image View, looking at the Connections Inspector (View Menu > Utilities > Show Connections Inspector), and making sure the Pan Gesture Recognizer is in the gestureRecognizers Outlet Collection.
You may wonder why you associated it to the image view instead of the view itself. Either approach would be OK, it’s just what makes most sense for your project. Since you tied it to the monkey, you know that any touches are within the bounds of the monkey so you’re good to go. The drawback of this method is sometimes you might want touches to be able to extend beyond the bounds. In that case, you could add the gesture recognizer to the view itself, but you’d have to write code to check if the user is touching within the bounds of the monkey or the banana and react accordingly.
Now that you’ve created the pan gesture recognizer and associated it to the image view, you just have to write the callback function so something actually happens when the pan occurs.
Open up ViewController.swift and add the following function inside the ViewController class:
```swift
@IBAction func handlePan(recognizer:UIPanGestureRecognizer) {
  let translation = recognizer.translationInView(self.view)
  recognizer.view.center = CGPoint(x:recognizer.view.center.x + translation.x,
                                   y:recognizer.view.center.y + translation.y)
  recognizer.setTranslation(CGPointZero, inView: self.view)
}
```
The UIPanGestureRecognizer will call this function when a pan gesture is first detected, and then continuously as the user continues to pan, and one last time when the pan is complete (usually the user lifting their finger).
The UIPanGestureRecognizer passes itself as an argument to this function. You can retrieve the amount the user has moved their finger by calling the translationInView: function. Here you use that amount to move the center of the monkey the same amount the finger has been dragged.
It’s extremely important to set the translation back to zero once you are done. Otherwise, the translation will keep compounding each time, and you’ll see your monkey rapidly move off the screen!
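To see why the reset matters, here is a quick Python simulation (illustrative names only): the recognizer reports *cumulative* translation since the gesture began, so if you keep adding that to the center without zeroing it, earlier movement gets re-applied on every callback.

```python
def pan(updates, reset_each_time):
    """Simulate a pan whose recognizer reports cumulative translation.
    Each update we add the reported translation to the view's center;
    without a reset, earlier movement compounds on every callback."""
    center, translation = 0.0, 0.0
    for delta in updates:
        translation += delta      # recognizer accumulates movement
        center += translation     # what handlePan: does with it
        if reset_each_time:
            translation = 0.0     # setTranslation(CGPointZero, inView:)
    return center

moves = [10, 10, 10]  # the finger moves 10 points, three times
print(pan(moves, reset_each_time=True))   # 30.0 - correct
print(pan(moves, reset_each_time=False))  # 60.0 - runaway compounding
```

With only three updates the error is already double the real movement; a real pan delivers dozens of updates per second, which is why the monkey shoots off the screen.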
Note that instead of hard-coding the monkey image view into this function, you get a reference to the monkey image view by calling recognizer.view. This makes your code more generic, so that you can re-use this same routine for the banana image view later on.
OK, now that this function is complete, you will hook it up to the UIPanGestureRecognizer. In Main.storyboard, control drag from the Pan Gesture Recognizer to View Controller. A popup will appear – select handlePan:.
One more thing. If you compile and run, and try to drag the monkey, it will not work yet. The reason is that touches are disabled by default on views that normally don’t accept touches, like Image Views. So select both image views, open up the Attributes Inspector, and check the User Interaction Enabled checkbox.
Compile and run again, and this time you should be able to drag the monkey around the screen!
Note that you can’t drag the banana. This is because gesture recognizers should be tied to one (and only one) view. So go ahead and add another gesture recognizer for the banana, by performing the following steps:
1. Drag a Pan Gesture Recognizer on top of the banana Image View.
2. Control drag from the new Pan Gesture Recognizer to the View Controller and select handlePan:.
Give it a try and you should now be able to drag both image views across the screen. Pretty easy to implement such a cool and fun effect, eh?
In a lot of Apple apps and controls, when you stop moving something there’s a bit of deceleration as it finishes moving. Think about scrolling a web view, for example. It’s common to want to have this type of behavior in your apps.
There are many ways of doing this, but you’re going to do one very simple implementation for a rough but nice effect. The idea is to detect when the gesture ends, figure out how fast the touch was moving, and animate the object moving to a final destination based on the touch speed.
So add the following to the bottom of the handlePan: function in ViewController.swift:
```swift
if recognizer.state == UIGestureRecognizerState.Ended {
  // 1
  let velocity = recognizer.velocityInView(self.view)
  let magnitude = sqrtf((velocity.x * velocity.x) + (velocity.y * velocity.y))
  let slideMultiplier = magnitude / 200
  println("magnitude: \(magnitude), slideMultiplier: \(slideMultiplier)")

  // 2
  let slideFactor = 0.1 * slideMultiplier //Increase for more of a slide

  // 3
  var finalPoint = CGPoint(x:recognizer.view.center.x + (velocity.x * slideFactor),
                           y:recognizer.view.center.y + (velocity.y * slideFactor))

  // 4
  finalPoint.x = min(max(finalPoint.x, 0), self.view.bounds.size.width)
  finalPoint.y = min(max(finalPoint.y, 0), self.view.bounds.size.height)

  // 5
  UIView.animateWithDuration(Double(slideFactor * 2),
    delay: 0,
    // 6
    options: UIViewAnimationOptions.CurveEaseOut,
    animations: {recognizer.view.center = finalPoint },
    completion: nil)
}
```
This is just a very simple function I wrote up for this tutorial to simulate deceleration. It takes the following strategy:
1. Calculate the magnitude of the velocity vector and divide it by 200 to get a slide multiplier – the faster the flick, the larger the multiplier.
2. Use the multiplier to compute a slide factor, which affects both how far the view slides and how long the animation lasts.
3. Compute a final destination point based on the current position, the velocity, and the slide factor.
4. Clamp the final point so it stays within the view’s bounds.
5. Animate the view to its final resting place.
6. Use the “ease out” animation option so the movement slows down over time.
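The numeric core of that strategy can be sketched outside of UIKit like so (a Python sketch using the same tuning constants, 200 and 0.1, as the code above; the function name is made up):

```python
import math

def final_resting_point(center, velocity, bounds):
    """Compute where a decelerating view should come to rest:
    faster flicks slide farther, and the result is clamped to bounds."""
    vx, vy = velocity
    magnitude = math.hypot(vx, vy)          # length of the velocity vector
    slide_factor = 0.1 * (magnitude / 200)  # same constants as the tutorial
    x = center[0] + vx * slide_factor
    y = center[1] + vy * slide_factor
    # Clamp so the view never slides off screen
    x = min(max(x, 0), bounds[0])
    y = min(max(y, 0), bounds[1])
    return (x, y)

# A flick of 600 pt/s straight right from the middle of a 320x568 screen:
print(final_resting_point((160, 284), (600, 0), (320, 568)))
```

With this flick the x coordinate would overshoot to 340, so the clamp pins it to the right edge at 320.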
Compile and run to try it out, you should now have some basic but nice deceleration! Feel free to play around with it and improve it – if you come up with a better implementation, please share in the forum discussion at the end of this article.
Your app is coming along great so far, but it would be even cooler if you could scale and rotate the image views by using pinch and rotation gestures as well!
First, add the code for the callbacks. Add the following functions to ViewController.swift inside the ViewController class:
```swift
@IBAction func handlePinch(recognizer : UIPinchGestureRecognizer) {
  recognizer.view.transform = CGAffineTransformScale(recognizer.view.transform,
    recognizer.scale, recognizer.scale)
  recognizer.scale = 1
}

@IBAction func handleRotate(recognizer : UIRotationGestureRecognizer) {
  recognizer.view.transform = CGAffineTransformRotate(recognizer.view.transform,
    recognizer.rotation)
  recognizer.rotation = 0
}
```
Just like you could get the translation from the UIPanGestureRecognizer, you can get the scale and rotation from the UIPinchGestureRecognizer and UIRotationGestureRecognizer.
Every view has a transform that is applied to it, which you can think of as information on the rotation, scale, and translation that should be applied to the view. Apple has a lot of built in functions to make working with a transform easy, such as CGAffineTransformScale (to scale a given transform) and CGAffineTransformRotate (to rotate a given transform). Here you will use these to update the view’s transform based on the gesture.
Again, since you’re updating the view each time the gesture updates, it’s very important to reset the scale and rotation back to the default state so you don’t have craziness going on.
Now hook these up in the Storyboard editor. Open up Main.storyboard and perform the following steps:
Build and run. Run it on a device if possible, because pinches and rotations are kinda hard to do on the simulator. If you are running on the simulator, hold down the alt key and drag to simulate two fingers, and hold down shift and alt at the same time to move the simulated fingers together to a different position. Now you should be able to scale and rotate the monkey and banana!
You may notice that if you put one finger on the monkey, and one on the banana, you can drag them around at the same time. Kinda cool, eh?
However, you’ll notice that if you try to drag the monkey around, and in the middle of dragging bring down a second finger to attempt to pinch to zoom, it doesn’t work. By default, once one gesture recognizer on a view “claims” the gesture, no others can recognize a gesture from that point on.
However, you can change this by overriding a function in the UIGestureRecognizer delegate.
Open up ViewController.swift and mark the class as implementing UIGestureRecognizerDelegate as shown below:
```swift
class ViewController: UIViewController, UIGestureRecognizerDelegate {
```
Then implement one of the delegate’s optional functions:
```swift
func gestureRecognizer(gestureRecognizer: UIGestureRecognizer,
  shouldRecognizeSimultaneouslyWithGestureRecognizer otherGestureRecognizer: UIGestureRecognizer) -> Bool {
  return true
}
```
This function tells the gesture recognizer whether it is OK to recognize a gesture if another (given) recognizer has already detected a gesture. The default implementation always returns false – here you switch it to always return true.
Next, open Main.storyboard, and for each gesture recognizer connect its delegate outlet to the view controller.
Build and run the app again, and now you should be able to drag the monkey, pinch to scale it, and continue dragging afterwards! You can even scale and rotate at the same time in a natural way. This makes for a much nicer experience for the user.
So far you’ve created gesture recognizers with the Storyboard editor, but what if you wanted to do things programmatically?
It’s just as easy, so you’ll try it out by adding a tap gesture recognizer to play a sound effect when either of these image views are tapped.
To be able to play a sound, you’ll need to access the AVFoundation framework. At the top of ViewController.swift, add:
```swift
import AVFoundation
```
Add the following changes to ViewController.swift just before viewDidLoad:
```swift
var chompPlayer:AVAudioPlayer? = nil

func loadSound(filename:NSString) -> AVAudioPlayer {
  let url = NSBundle.mainBundle().URLForResource(filename, withExtension: "caf")
  var error:NSError? = nil
  let player = AVAudioPlayer(contentsOfURL: url, error: &error)
  if player == nil {
    println("Error loading \(url): \(error?.localizedDescription)")
  } else {
    player.prepareToPlay()
  }
  return player
}
```
Replace viewDidLoad with the following:
```swift
override func viewDidLoad() {
  super.viewDidLoad()
  // 1
  for view:UIView! in self.view.subviews {
    // 2
    let recognizer = UITapGestureRecognizer(target: self, action:Selector("handleTap:"))
    // 3
    recognizer.delegate = self
    view.addGestureRecognizer(recognizer)

    //TODO: Add a custom gesture recognizer too
  }
  self.chompPlayer = self.loadSound("chomp")
}
```
Add to the bottom of the ViewController class:
```swift
func handleTap(recognizer: UITapGestureRecognizer) {
  self.chompPlayer?.play()
}
```
The audio playing code is outside of the scope of this tutorial so I won’t discuss it (although it is incredibly simple).
The important part is in viewDidLoad:
1. You cycle through all of the subviews of the main view – for this app, that’s just the monkey and banana image views.
2. For each subview, you create a UITapGestureRecognizer, specifying the handleTap: callback.
3. You set the recognizer’s delegate programmatically, and add the recognizer to the subview.
That’s it! Compile and run, and now you should be able to tap the image views for a sound effect!
It works pretty well, except there’s one minor annoyance. If you drag an object a very slight amount, it will pan it and play the sound effect. But what you really want is to only play the sound effect if no pan occurs.
To solve this you could remove or modify the delegate callback to behave differently in the case where a tap and a pan coincide. But here is another useful thing you can do with gesture recognizers: setting dependencies.
There’s a function called requireGestureRecognizerToFail: that you can call on a gesture recognizer. Can you guess what it does? ;]
Open Main.storyboard, open up the Assistant Editor, and make sure that ViewController.swift is showing there. Then control-drag from the monkey pan gesture recognizer to below the class declaration, and connect it to an outlet named monkeyPan. Repeat this for the banana pan gesture recognizer, but name the outlet bananaPan.
Then simply add these two lines to viewDidLoad, right before the TODO:
```swift
recognizer.requireGestureRecognizerToFail(monkeyPan)
recognizer.requireGestureRecognizerToFail(bananaPan)
```
Now the tap gesture recognizer will only get called if no pan is detected. Pretty cool eh? You might find this technique useful in some of your projects.
At this point you know pretty much everything you need to know to use the built-in gesture recognizers in your apps. But what if you want to detect some kind of gesture not supported by the built-in recognizers?
Well, you could always write your own! Now you’ll try it out by writing a very simple gesture recognizer to detect if you try to “tickle” the monkey or banana by moving your finger several times from left to right.
Create a new file with the iOS\Source\Swift File template. Name the file TickleGestureRecognizer.
Then replace the contents of TickleGestureRecognizer.swift with the following:
```swift
import UIKit

class TickleGestureRecognizer:UIGestureRecognizer {
  // 1
  let requiredTickles = 2
  let distanceForTickleGesture:Float = 25.0

  // 2
  enum Direction:Int {
    case DirectionUnknown = 0
    case DirectionLeft
    case DirectionRight
  }

  // 3
  var tickleCount:Int = 0
  var curTickleStart:CGPoint = CGPointZero
  var lastDirection:Direction = .DirectionUnknown
}
```
This is what you just declared step by step:
1. The constants that define the gesture: how many direction changes are required, and how far the finger must move to count as one tickle.
2. An enum describing the possible directions a tickle can move in.
3. Three properties to track the gesture in progress: the current tickle count, the point where the current tickle started, and the direction of the last tickle.
Of course, these properties here are specific to the gesture you’re detecting here – you’ll have your own if you’re making a recognizer for a different type of gesture, but you can get the general idea here.
One of the things that you’ll be changing is the state of the gesture – when a tickle is completed, you’ll need to change the state of the gesture to ended. In the original Objective-C UIGestureRecognizer, state is a read-only property, so you will need to create a Bridging Header to be able to redeclare this property.
The easiest way to do this is to create an Objective-C Class, and then delete the implementation part.
Create a new file, using the iOS\Source\Objective-C File template. Call the file Bridging-Header, and click Create. You will then be asked whether you would like to configure an Objective-C bridging header. Choose Yes. Two new files will be added to your project: Bridging-Header.m and MonkeyPinch-Bridging-Header.h.
Delete Bridging-Header.m.
Add this Objective-C code to MonkeyPinch-Bridging-Header.h:
```objc
#import <UIKit/UIGestureRecognizerSubclass.h>
```
Now you will be able to change the UIGestureRecognizer’s state property in TickleGestureRecognizer.swift.
Switch to TickleGestureRecognizer.swift and add the following functions to the class:
```swift
override func touchesBegan(touches: NSSet!, withEvent event: UIEvent!) {
  let touch = touches.anyObject() as UITouch
  self.curTickleStart = touch.locationInView(self.view)
}

override func touchesMoved(touches: NSSet!, withEvent event: UIEvent!) {
  let touch = touches.anyObject() as UITouch
  let ticklePoint = touch.locationInView(self.view)

  let moveAmt = ticklePoint.x - curTickleStart.x
  var curDirection:Direction
  if moveAmt < 0 {
    curDirection = .DirectionLeft
  } else {
    curDirection = .DirectionRight
  }

  //moveAmt is a Float, so self.distanceForTickleGesture needs to be a Float also
  if abs(moveAmt) < self.distanceForTickleGesture {
    return
  }

  if self.lastDirection == .DirectionUnknown ||
    (self.lastDirection == .DirectionLeft && curDirection == .DirectionRight) ||
    (self.lastDirection == .DirectionRight && curDirection == .DirectionLeft) {
    self.tickleCount++
    self.curTickleStart = ticklePoint
    self.lastDirection = curDirection

    if self.state == .Possible && self.tickleCount > self.requiredTickles {
      self.state = .Ended
    }
  }
}

override func reset() {
  self.tickleCount = 0
  self.curTickleStart = CGPointZero
  self.lastDirection = .DirectionUnknown
  if self.state == .Possible {
    self.state = .Failed
  }
}

override func touchesEnded(touches: NSSet!, withEvent event: UIEvent!) {
  self.reset()
}

override func touchesCancelled(touches: NSSet!, withEvent event: UIEvent!) {
  self.reset()
}
```
There’s a lot of code here, but I’m not going to go over the specifics because frankly they’re not that important. What matters is the general idea of how it works: you’re overriding UIGestureRecognizer’s touchesBegan, touchesMoved, touchesEnded, and touchesCancelled functions, and writing custom code to look at the touches and detect the gesture.
Once you’ve found the gesture, you want to send updates to the callback function. You do this by changing the state property of the gesture recognizer. Usually once the gesture begins, you want to set the state to .Began, send any updates with .Changed, and finalize it with .Ended.
But for this simple gesture recognizer, once the user has tickled the object, that’s it – you just mark it as ended. The callback you will add to ViewController.swift will get called and you can implement the code there.
OK, now to use this new recognizer! Open ViewController.swift and make the following changes.
Add to the top of the class:
var hehePlayer:AVAudioPlayer? = nil
In viewDidLoad, right after TODO, add:
let recognizer2 = TickleGestureRecognizer(target: self, action: Selector("handleTickle:"))
recognizer2.delegate = self
view.addGestureRecognizer(recognizer2)
At end of viewDidLoad add:
self.hehePlayer = self.loadSound("hehehe1")
Add at the beginning of handlePan: (gotta turn off pan to recognize tickles):
//comment for panning
//uncomment for tickling
return;
At the end of the class add the callback:
func handleTickle(recognizer:TickleGestureRecognizer) {
  self.hehePlayer?.play()
}
So you can see that using this custom gesture recognizer is as simple as using the built-in ones!
Compile and run and “he he, that tickles!”
Here’s the download for the final project with all of the code from the above tutorial.
Congrats, you’re now a master of gesture recognizers, both built-in and your own custom ones! Touch interaction is such an important part of iOS devices, and UIGestureRecognizer is the key to easy-to-use gestures beyond simple button taps.
If you have any comments or questions about this tutorial or gesture recognizers in general, please join the forum discussion below!
Credits: Artwork by Vicki Wenderlich of Game Art Guppy – Game Art Resources for Indie Game Developers.
Using UIGestureRecognizer with Swift Tutorial is a post from: Ray Wenderlich
A hugely popular app is a double-edged sword. On the one hand you have lots of users. On the other hand, pretty much every edge case will be hit by somebody – which often reveals pesky performance problems or bugs.
Analytics are a good way to keep tabs on users, but how can you track more technical data such as app performance and network lag?
Well, there is now an easy way to do this – thanks to a relatively new app performance monitoring service called Pulse.io.
With Pulse.io, you’ll know if your users are experiencing long waits, poor frame rates, memory warnings or similar problems — and you’ll be able to drill down to find out which parts of your code are responsible for these issues.
In this tutorial, you’ll take a look at an example project called Tourist Helper that has some performance issues, and use Pulse.io to detect and solve them.
Download the sample project, unzip the contents and open the project in Xcode.
You’ll need a Flickr account and Flickr API key to continue. Don’t worry — both the account and the API key are free.
Log in to your Flickr account, or sign up for a new account, then visit https://www.flickr.com/services/apps/create/noncommercial/ to register for an API key.
Once that’s done you’ll receive both the API key and a secret key as hexadecimal strings. Head back to Xcode and modify the kFlickrGeoPhotoAPIKey and kFlickrGeoPhotoSecret constants in FlickrServices.h to match your key and secret strings:
#define kFlickrGeoPhotoAPIKey (@"Your Flickr API Key")
#define kFlickrGeoPhotoSecret (@"Your Flickr API Secret")
That’s all you need for now to work with Flickr — on to integrating Pulse.io into your app.
Head to pulse.io in your browser. Sign up for an account if you don’t already have one; the process is simple as shown by the streamlined signup form below:
As soon as you have filled out and submitted the form your account will be ready to use. You’re now ready to add your app to Pulse.io. After signing up and logging in, click on the New Application button.
Enter TouristHelper for the name of your app, ensure iOS is selected and click Create Application as shown below:
Next you’ll see a page of instructions describing how to install the Pulse.io SDK. Keep this page open in a tab because it contains a link to the SDK and your app’s Pulse.io token.
Adding the Pulse.io framework to your app is quite straightforward, especially if you’ve worked with third-party frameworks before.
Download the latest version of the SDK (there’s a link at the top of the page you landed on after creating your app), unzip it and open the resulting folder. Then find the PulseSDK.framework file and drag it into your Xcode project. Ensure it’s been added to the TouristHelper target and copied into the destination group as shown below:
Keep your project tidy by putting your Pulse.io framework in the Frameworks group of your Xcode project like so:
Pulse.io has a few dependencies of its own. Select the project in the top left of the left-hand pane, select the Build Phases tab and expand Link Binary With Libraries. You should see something similar to the following screenshot:
Pulse.io requires the following frameworks:
Click the “+” icon and begin typing the name of each library. Select each of the above frameworks as you see it appear in the list. If any of these libraries have already been added to the project don’t worry about it for the purposes of this tutorial.
Once you’ve finished adding the appropriate libraries, your project should look like this:
Open main.m and add the following header import:
#import <PulseSDK/PulseSDK.h>
Next, add the following line directly before return UIApplicationMain() inside main:
[PulseSDK monitor:@"Your Pulse.io API key goes here"];
You’ll need to insert your own Pulse.io API key in this space; you can find it on the Pulse.io instructions page you saw earlier, or alternatively from your list of Pulse.io applications that’s displayed when you’re signed in.
Note: If you’ve used other third-party frameworks such as Google Analytics, Crashlytics or TestFlight, you might have expected Pulse.io to start up in your app delegate’s application:didFinishLaunchingWithOptions:. However, Pulse.io hooks directly into various classes using deep Objective-C runtime tools, so you need to initialize the SDK very early in your app’s startup, before UIKit has even started up any part of your UI.
That’s all you need to get started!
Make sure you aren’t running in a 64-bit environment, such as the 64-bit simulator, iPhone 5S or iPad Air. Otherwise you’ll see a message warning you that Pulse.io doesn’t yet run in 64-bit environments. For now, select a different target for building your app.
Build and run your app and watch it for a while as it runs. Notice that it doesn’t do much yet. In fact, the observant reader will notice there appears to be an error logged to the console. More on that shortly.
The app, when working, searches Flickr for some interesting images with a default search tag of “beer” around your location and places pins on the map as photos are found. The app then calculates the route between these places and draws it on the map. Finally, the app fetches thumbnails and large images using the image’s Flickr URL and the instant tour is ready for use.
Head back to the Pulse.io page that you landed on after creating the app (you kept it open like I said, right?!); the message at the bottom of the page will eventually update to let you know the app has successfully contacted Pulse.io.
Excellent! You’re up and running and collecting data. As the message in the app states, it could take a little while for results to appear in the dashboard. While you’re waiting for that you can generate some interesting data by exploring images at other locations around the world.
If you haven’t discovered it yet, find the symbol in the Debugger control bar that looks just like the location symbol in iOS maps. Simply tap it and Xcode presents you with a list of simulated locations around the world; you can even use GPX files to create custom simulated locations.
Change the simulated location a few times while the app is running; you’ll notice that no pictures of any interesting sights appear. Hmm, that’s curious.
It turns out that Flickr doesn’t like connections over regular http and wants everything to be https. The good news is that you’ve just logged a few network errors to analyze later! :]
Open FlickrServices.h and change the kFlickrBaseURL constant as shown below:
#define kFlickrBaseURL (@"https://api.flickr.com")
That should do it!
Build and run your app and set a simulated location in Xcode. Here’s an example of what you might see if you start near Apple headquarters in Cupertino:
It’s likely that your app runs smoothly in the simulator on your nice, fast Mac. However, the story might be quite different on a slower device. Will your app create a memory-packed day for a tourist, or a memory-packed app for iOS to kill after too many allocations?
You have just a few more customizations to make before checking out the full story in your Pulse.io dashboard.
By default, Pulse.io automatically instruments the key parts of your app to check for things such as excessive wait time and network access.
If you have custom classes that perform a lot of work, such as the route calculator object in your app, you can add instrumentation to them using instrumentClass:. Alternatively, you can instrument a single selector using instrumentClass:selector: rather than instrumenting every message the class receives.
Add the following import to the top of main.m:
#import "PathRouter.h"
Next, add the following line immediately after the PulseSDK monitor: call you added in the previous section:
[PulseSDK instrumentClass:[PathRouter class] selector:@selector(orderPathPoints:)];
Pulse.io will now trace every time a PathRouter instance receives the orderPathPoints: message. Adding instrumentation to a method adds around 200-1000ns of execution time to each call of that method. This isn’t a big hit unless you call that method in a tight loop, so plan your instrumentation accordingly.
Take a look at PathRouter.m; you’ll see that orderPathPoints: performs the heavy lifting of finding a reasonable path between points. This is an algorithmically difficult problem, and probably a good spot to add some instrumentation.
The actions reported by Pulse.io are grouped by the user action that triggered them. For example, if the user pans a map, you’ll see methods that result from that pan operation descending from a UIPanGestureRecognizer. It would be a great idea to attach more meaningful names to some operations to make them easier to find and understand.
Open MapSightsViewController.m and add the following Pulse.io header import to the top of the file:
#import <PulseSDK/PulseSDK.h>
Next, add the following line to the beginning of mapView:viewForAnnotation::
[PulseSDK nameUserAction:@"Providing Annotation View"];
Now when you look at the results on the Pulse.io dashboard, you’ll see this method is reported using the friendly name instead.
Build and run the app again to generate some more data to analyze. After a little time has passed, say a minute or so, check the Pulse.io dashboard statistics for your application to see what they show. You should see something similar to that below. If you don’t, wait a little while longer and try again. It can take up to an hour for data to trickle in. That’s not a problem in practice though, because you will want to collect a lot of data and then work on it when it’s all in.
Things look a little sparse with just one user generating data with one app, but you can imagine the distributions would smooth out with many users contributing more data. The overview presented above serves as a signpost to the areas you should examine in more detail.
At the top of the browser window there is a row of buttons which you can use to drill down into each metric, as shown below:
The first item to drill down into is the total spinner time of your app.
Click on the UI Performance\Spinner Time button on the web site for a more detailed view of your data. Pulse.io hooks into UIActivityIndicatorView to determine how much time your users spend looking at that spinning animation while they wait, as shown below:
Er, that’s pretty ugly! The app displays a spinner for an average of nearly two seconds each time it appears. There’s no way a user would find that acceptable. Click on the Providing Annotation View user action to see more detail as illustrated below:
Those large image requests are taking a lot of time. Note that down as one problem to fix once you’re done reviewing the data on other problem areas.
Note: If you display a custom spinner or other view while the user is waiting, you can use startSpinner:trackVisibility: and stopSpinner: to track your spinner events, as detailed in the Tracking a Custom Spinner Pulse.io API documentation.
Next, click on the UI Performance\Frame Rate button to show the detail view like so:
This view shows that there was a total of six seconds of low frame rate when displaying annotations descending from the method you annotated with the name Providing Annotation View. That’s beyond unacceptable for your app!
Note this issue on your fix list as well; you’ll have to revisit how you’re handling annotations in order to fix this.
Next, click on the Network\Speed button to drill down into the data. Then select Network\Errors. Generally speed and error count are the two things to investigate when it comes to networking.
The image below shows an example of Pulse.io catching some network errors:
The normal HTTP status response is 200, meaning OK. However, a couple of times the app received a 403 code, meaning Forbidden. Flickr recently mandated the use of https and started denying plain http requests, which is exactly what your app was making before you modified it to use https exclusively.
It’s good to know that Pulse.io will report any client-side, server-side, or network access issues your app may encounter.
Here’s a sample view of Pulse.io’s view of your networking speed:
Pulse.io reports speed as time per request and breaks the results down into time waiting and time receiving. This is useful data; if you find your user base growing and your server being overwhelmed by traffic you’ll see the wait time slowly creep up as time goes on.
Consider the sheer number of requests shown here from only one user with one app. It looks like there are an excessive number of requests for the limited amount of time you’ve used your app. Making excessive requests eats up bandwidth and keeping the network active wastes battery. Note this as another item to fix.
Below the chart you can see a list titled Aggregate URL Response Times. Select flickr.com/* in this list to see some of the RESTful queries your app made to Flickr.
Take a closer look at the results: there’s some sensitive information there, including your app’s Flickr API key and secret! That should be stripped out of your request before the wrong people see it.
The final button in the dashboard is Memory. Click it and you’ll see your app’s memory usage as demonstrated below:
That feels like a lot of memory for a simple app like yours. iOS is likely to terminate apps using large chunks of memory when resources get low, and making a user restart your app every time they want to use it won’t be a pleasant user experience.
This could be due to the loading of the images as you noted before, so draw a few asterisks next to the item on your fix list that addresses the loading of large images.
Now that you’ve drilled down through the pertinent data points of your app, it’s time to address the issues exposed by Pulse.io.
A likely place to start is the image fetching strategy. Right now the app fetches a thumbnail image for each sight discovered and then pre-fetches the large image in case the user taps on the pin to view it. But not every user will tap on every pin.
What if you could defer the large image fetch until it is actually needed; that is, when the user taps the pin?
Find the implementation of thumbnailImage in Sight.m. You’ll see that you make two network requests: one for the thumbnail and one for the large image.
Replace the current implementation of thumbnailImage with the following:
- (UIImage *)thumbnailImage {
  if (_thumbnail == nil) {
    NSString *urlString = [NSString stringWithFormat:@"%@_s.jpg", _baseURLString];
    AFImageRequestOperation *operation =
      [AFImageRequestOperation imageRequestOperationWithRequest:
         [NSURLRequest requestWithURL:[NSURL URLWithString:urlString]]
        imageProcessingBlock:nil
        success:^(NSURLRequest *request, NSHTTPURLResponse *response, UIImage *image) {
          self.thumbnail = image;
          [_delegate sightDidUpdateAvailableThumbnail:self];
        }
        failure:nil];
    [operation start];
  }
  return _thumbnail;
}
This looks very much like the original method – it contains an AFImageRequestOperation whose success block notifies the delegate MapSightsViewController that the thumbnail is available.
You’ve removed the code that kicks off the full image download. So next, you’ll need to load the large image only when the user drills down into the annotation. Find initiateImageDisplay and replace it with the following code:
- (void)initiateImageDisplay {
  if (_fullImage) {
    [_delegate sightDisplayImage:self];
  } else {
    [_delegate sightBeginningLargeImageDownload];
    NSString *urlString = [NSString stringWithFormat:@"%@_b.jpg", _baseURLString];
    AFImageRequestOperation *operation =
      [AFImageRequestOperation imageRequestOperationWithRequest:
         [NSURLRequest requestWithURL:[NSURL URLWithString:urlString]]
        imageProcessingBlock:nil
        success:^(NSURLRequest *request, NSHTTPURLResponse *response, UIImage *image) {
          [_delegate sightsDownloadEnded];
          self.fullImage = image;
          [_delegate sightDisplayImage:self];
        }
        failure:^(NSURLRequest *request, NSHTTPURLResponse *response, NSError *error) {
          [_delegate sightsDownloadEnded];
        }];
    [operation start];
  }
}
This loads the image the first time it’s requested and caches it for future requests. That should reduce the number of network requests for images — with the added bonus of reduced memory usage. Correcting both of those issues should help reduce the amount of spinner time users need to suffer through! :]
Since you’re already fixing network related items, you may as well strip the sensitive bits from the http requests while you’re at it.
Fortunately this is a simple one-line fix. Add the following line after the call to monitor: in main.m:
[PulseSDK setURLStrippingEnabled:true];
This prevents all query parameters from being logged in the Network dashboard, which helps keep your Flickr API keys secret.
The next thing on your list of things to fix is the algorithm that calculates the route between the various sights. It’s easy to underestimate the complexity of finding the shortest route between an arbitrary number of points. In fact, it is one of the hardest problems encountered in computer science!
Note: Better known as the Travelling Salesman Problem, this type of algorithm is a great example of the class of problems known as NP-hard. In fact, if you find a fast, general solution to this problem while working through this tutorial, there may be a million dollar prize awaiting you!
This app uses a brute force method of finding the shortest route by calculating the complete route many times and saving the shortest one. If you think about it, though, there’s no real requirement to show the shortest route through all points — you can just display any route and let the user vary the route if they feel like it. The time spent waiting for the optimal route just isn’t worth it in this case.
Take a quick look at orderLocationsInRange:byDistanceFromLocation: in PathRouter.m; you can see that it currently orders the discovered paths in a random fashion. A reasonably good route can be found by starting at one point and repeatedly visiting the next closest point until all points are visited.
It’s quite unlikely that this is going to be even close to the optimal route, but the potential gains in performance make this approach your best option.
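In case the nearest-neighbor idea isn’t clear, here’s a minimal sketch of it in Python, using plain 2D points instead of CLLocation objects purely for illustration:

```python
import math

def nearest_neighbor_route(points):
    """Order points by repeatedly hopping to the closest unvisited point.

    Not optimal in general, but a single O(n^2) pass instead of
    thousands of random shuffles."""
    if not points:
        return []
    remaining = list(points)
    route = [remaining.pop(0)]  # start from the first point
    while remaining:
        current = route[-1]
        # pick the unvisited point closest to where we are now
        nearest = min(remaining, key=lambda p: math.dist(current, p))
        remaining.remove(nearest)
        route.append(nearest)
    return route

print(nearest_neighbor_route([(0, 0), (5, 5), (1, 0), (2, 0)]))
# [(0, 0), (1, 0), (2, 0), (5, 5)]
```

The Objective-C version below does the same thing, expressed as a distance-based sort against the current location at each step.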
Inside the else clause in this method, replace the call to sortedArrayUsingComparator: (including the block passed to it) with the following code:
NSArray *sortedRemainingLocations =
  [[self.workingCopy subarrayWithRange:range] sortedArrayUsingComparator:^(id location1, id location2) {
    CLLocationDistance distance1 = [location1 distanceFromLocation:currentLocation];
    CLLocationDistance distance2 = [location2 distanceFromLocation:currentLocation];
    if (distance1 > distance2) {
      return NSOrderedDescending;
    } else if (distance1 < distance2) {
      return NSOrderedAscending;
    } else {
      return NSOrderedSame;
    }
  }];
Now find orderPathPoints: and take a look at the for loop in there. It currently runs 1000 iterations to find the best route.
But this new algorithm only needs one iteration, because it finds a decent route straight away. 1000 iterations down to 1 – nice one! :]
Find the following lines and remove them:
for (int i = 0; i < 1000; i++) {
  if ([locations count] == 0) continue;
Then find the corresponding closing brace and remove it also. (The brace to remove is just above the line that reads // calculation of the path to all the sights, without blocking the main (UI) thread.)
This change cuts the path algorithm down to one iteration and should reduce spinner time even further.
That takes care of the excess spinner time. Next up are those pesky frame rate issues uncovered by Pulse.io.
iOS tries to render a frame once every sixtieth of a second, and your apps should aim for that same performance benchmark. If the code execution to prepare a frame exceeds ~1/60 second (less the actual time to display the frame), then you’ll end up with a reduced frame rate.
If you’re only slowed down by one or two frames per second most users won’t even notice. However, when your frame rate drops to 20 frames/second you can bet most users will find it highly annoying. Using Pulse.io to track your frame rate keeps you ahead of your users and lets you detect slow frame rates before they are noticed by too many users.
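The frame-budget arithmetic behind those claims is worth seeing once. Here it is as a tiny Python sketch, illustrative only:

```python
# At 60 fps, each frame has a budget of 1000/60 ms.
# If preparing a frame takes longer than the budget, the frame rate drops.

def effective_fps(frame_work_ms, target_fps=60):
    """Frame rate actually achieved if each frame takes frame_work_ms."""
    budget_ms = 1000 / target_fps
    return 1000 / max(frame_work_ms, budget_ms)

print(round(1000 / 60, 1))        # 16.7 -- the per-frame budget in ms
print(round(effective_fps(16)))   # 60  -- within budget, full frame rate
print(round(effective_fps(50)))   # 20  -- the "highly annoying" zone
```

In other words, a frame that takes 50 ms of work instead of under 16.7 ms cuts you from 60 to 20 frames per second, which is exactly the level users notice.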
One of the changes you made to the app was adding the label Providing Annotation View to a user action. The dashboard showed that slow frame rates were taking place in this specific user action. Pulse.io tells you exactly what your users are experiencing so you don’t have to guess whether or not smooth scrolling on older devices is something you need to handle in your app design.
Map views like the one in this app require multiple annotation views to work together to provide smooth scrolling performance. Map Kit includes a reuse API since reusing an annotation is much faster than allocating a new one every time. Your app isn’t reusing annotation views at the moment, which might explain at least some of the performance issues.
Open MapSightsViewController.m and find mapView:viewForAnnotation:. Then find the following two lines that allocate new annotation views:
sightAnnotationView = [[MKPinAnnotationView alloc] initWithAnnotation:annotation
                                              reuseIdentifier:kSightAnnotationIdentifier];
sightAnnotationView.canShowCallout = YES;
Replace the above lines with the following implementation:
sightAnnotationView = [mapView dequeueReusableAnnotationViewWithIdentifier:kSightAnnotationIdentifier];
if (sightAnnotationView == nil) {
  sightAnnotationView = [[MKPinAnnotationView alloc] initWithAnnotation:annotation
                                                reuseIdentifier:kSightAnnotationIdentifier];
  sightAnnotationView.canShowCallout = YES;
}
This mechanism is similar to the way table views or collection view cells get reused and should be somewhat familiar. The new implementation attempts to dequeue an existing annotation and only creates a new annotation if it fails to get one from the map view.
While this change has the least dramatic effect of all the changes made so far, you should always reuse objects whenever UIKit offers you the chance.
Now that you’ve completed all of the items on your fix list, you need to generate some more analytics in Pulse.io to see how your app performance has improved.
Build and run the app; pick several simulated locations and scroll around the map as an average user would. The question is — will the Pulse.io results show some improvement? Or has all your hard work been for naught?
Here’s a look at some sample data collected after making the code improvements and playing with the app for a bit:
The dashboard shows that the results above arrived a few hours after the first batch of results. Step One in any performance improvement process is to be sure you’re looking at the right data! :]
At first glance things look like they might have improved, but you can’t really tell until you dig down into the nitty gritty details.
Take a close look at the number of network requests made now that you’ve reduced the excessive image loading:
That’s a step in the right direction — you used the app for roughly the same amount of time and in approximately the same way that you did before, but the app made less than half the network requests it did before.
Drill down into Aggregate URL Response Times again and examine the queries made to flickr.com/* – are those URLs still exposing too much information?
All URLs logged now have the query part of the request stripped, thanks to setURLStrippingEnabled:. You could easily share these results without exposing any details of your web API or compromising other secrets. And even if you didn’t have any secrets to hide, at least URLs in this format are a heck of a lot easier to read! :]
Spinners were particularly worrying — no user wants to waste five seconds of their life staring at a spinner. What does Pulse.io say about the end result of your spinner improvements?
The total spinner time is now 14 seconds, and average spinner time has dropped somewhat but not as dramatically as you might first think. Does that mean your improvements had no effect on average spinner time? How should you interpret these results?
You made two huge reductions to spinner time in the code: first, the new version of the app is making requests for thumbnail (small) images at about the same rate as the first version, but deferring the fetch of the large images to when the user taps on the thumbnail.
Second, you switched the route calculation to a much more sensible next-closest algorithm and only perform it once per set of points.
So the cost of bringing up a spinner and performing an operation has dropped, on average, as you’re requesting smaller pieces of data, but the total number of spinners displayed has dropped dramatically as you’re making far fewer image requests.
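The distinction between total and average matters here, and a small numeric example makes it concrete. The session durations below are hypothetical, not taken from the dashboard, and the sketch is Python purely for illustration:

```python
# Hypothetical spinner sessions, in seconds (NOT real dashboard data).
before = [2.5, 2.0, 3.0, 2.5, 2.0] * 4          # 20 spinners shown
after  = [2.0, 2.0, 1.8, 2.2, 2.0, 1.9, 2.1]    # far fewer spinners

def summarize(sessions):
    """Return (total seconds, average seconds) for a list of spinner times."""
    total = sum(sessions)
    return total, total / len(sessions)

print(summarize(before))  # roughly (48.0, 2.4)
print(summarize(after))   # roughly (14.0, 2.0) -- total plummets, average barely moves
```

Fewer, slightly shorter spinners: the total time collapses even though each individual spinner looks only a little better, which is exactly the pattern the dashboard shows.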
What was that previous result again?
Yep – that’s a huge improvement; the app is now much more usable.
Although you’ve achieved some success with your spinner time, you really should aim to avoid any spinner time at all.
One possible way to cut down on spinner use in your app is to make the initial image request to Flickr and display the points on the map immediately. The route calculation would be performed in the background and leave the UI thread responsive so the user can interact with the map. You would then display the calculated route once the algorithm was done.
This would be a great change to attempt on your own to see the effect it has on spinner use. If you do choose to try this, please share your before and after performance results in the forum discussion below!
All that’s left to check on is the frame rate issue that you solved by re-using annotations. What did Pulse.io detect as your improved frame rate?
The small change you made to reuse annotation views has resulted in some performance gains, as shown below:
The recorded low frame rate time descending from Providing Annotation View has dropped to around four seconds. That’s not a bad improvement in the context of this tutorial, but you should be shooting for no recorded instances at all. On modern hardware, with some code tweaks appropriate to your app, this goal is well within your reach.
Here’s the final version of the sample project that implements the changes discussed in this tutorial. If you use this version of the project, remember to set your own Pulse.io key, Flickr key and secret!
By adding Pulse.io reporting to this sample app, you’ve learned how an app that seems to work well enough on your simulator and test devices can fail to impress your users; in fact, it can lead to a really poor user experience. With Pulse.io in your toolbox, you can gather information about user experience issues on many fronts:
It’s incredibly important to gather metrics and identify fixes for your app before you start seeing these issues mentioned in its reviews. Additionally, you won’t waste time improving areas that, in actual fact, aren’t causing any problems for your users.
Pulse.io is undergoing rapid development and improvement. While there’s an impressive level of detail in the product already, the team is anything but idle. The Pulse.io team recently introduced the Weekly Performance Report as shown below:
This shows you the changes in app usage from week to week. There isn’t much data here with just one user (i.e. the author) and an app still in the development stage, but you can see how useful this would be when your users number in the thousands.
Support for Swift is on the horizon, so keep an eye out for updated versions of the Pulse.io SDK and instructions on integrating Pulse.io into your Swift projects soon.
Have you used Pulse.io in your own apps and found any interesting performance issues or tuning tips? Share your experiences with us in the comments below!
Sponsored Tutorial: Improving Your App’s Performance with Pulse.io Tutorial is a post from: Ray Wenderlich
Learn how to use the create, read, update, and delete operations of SQLite.
Video Tutorial: Saving Data in iOS Part 8: Using SQLite is a post from: Ray Wenderlich
Although Swift has only been out for a little while and is still in beta, many of you have been digging in already.
How far have you come so far? Have you:
If you’ve scored 3 or more on this quiz, then you might be a Swift Ninja.
Well, this 2-part series is going to help you find out for sure!
I’ll give you a series of programming challenges in Swift that you can use to test your Swift knowledge and see if you’re a true Swift Ninja.
And if by any chance you’re not feeling so ninja, you’ll have the chance to learn the craft! No matter whether you’re already advanced or still intermediate in Swift, you’ll likely still learn a thing or two.
Get your shurikens and katana ready – the challenge begins!
Note: This post is for experienced programmers who are well-versed in the Swift language. If you don’t feel quite at ease with it yet, check out the rest of our Swift tutorials.
This series is a bit different in style compared to the ones we usually post on this web site. It will present a series of problems in order of increasing complexity. Some of these problems reuse techniques from previous sections, so working through them in order is essential for your success.
Each of the problems highlights at least one feature, syntax oddity, or clever hack made possible by Swift.
Don’t worry, you’ll not be thrown to the wolves – there’s help when you need it. Each post has two levels of hints, and of course there’s always the Swift books from Apple, or your good friend Stack Overflow.
Each problem is defined by stating what you need to accomplish in code, and what Swift features you can and can’t use. I recommend you use a Playground to work through each challenge.
If you have difficulties, open up the Hints section. Though Hints won’t give you instant gratification, they offer direction.
In case you can’t muster the solution — open up the problem’s Tutorial section. There you’ll find the techniques to use and the code that solves the given problem. In any case, by the end of this series you’ll have the solutions to all the problems.
Oh – and remember to track your score!
Even if you solved the problem yourself, take a few moments to see the solution provided in the Tutorial – it’s always great to compare code!
Some challenges offer an extra shuriken if you do the solution in a certain (more difficult) way.
Keep a piece of paper or your favorite tracking app handy and keep count of how many shurikens you got for each challenge.
Don’t cheat yourself by padding your score. That’s not the way of the noble ninja. At the end of the post you’ll have broadened your mind and moved boldly into the future, even if you don’t collect every single shuriken.
In the Swift book from Apple, there are several examples of a function that swaps the values of two variables. The code always uses the “classic” solution of using an extra variable for storage. But you can do better than that.
Your first challenge is to write a function that takes two variables (of any type) as parameters and swaps their values.
Requirements:

- Don’t use a third, temporary variable for storage.

Give yourself the full shuriken count if you don’t have to crack open the Hints or Tutorial spoilers.
Hints:
Swift tuples are very powerful – you can group variables of any type into a tuple. Additionally, you can assign values to a number of variables in one shot if they are grouped in a tuple. One tuple, two tuples! :] Remember, since you peeked at the hint, you can now claim only a reduced shuriken count for this challenge.
Tutorial:
As a Swift ninja knows, one of the new features in Swift is the tuple, which lets you group several values together. The syntax is easy – just surround the comma-separated list of variables (or constants, expressions, etc.) with parentheses.
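For example (a minimal sketch; the variable names are illustrative):

```swift
// Group values of different types into a tuple with parentheses
let person = ("Marin", 42)

// Decompose a tuple into individual constants in one statement
let (name, number) = person   // name is "Marin", number is 42

// Tuples also work on the left side of an assignment
var a = 10
var b = 20
(a, b) = (b, a)               // a is now 20, b is now 10
```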
A tuple can also appear on the left side of an assignment, which sets the values of all the grouped variables in one shot. Combining these two ideas, you can write a function that takes two variables of any type (as long as they are of the same type) and swaps their values. Here’s the solution to the original problem and the code to test it in a playground:
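A sketch of such a function, in the Swift 1.x syntax of the period (the name swapValues is illustrative):

```swift
// Generic swap: works for any type, as long as both parameters share it
func swapValues<T>(inout a: T, inout b: T) {
    (a, b) = (b, a)   // tuple assignment exchanges the values -- no temporary needed
}

var first = "Hello"
var second = "World"
swapValues(&first, &second)
// first is now "World", second is now "Hello"
```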
You define a generic function that takes both parameters as inout, so the changes it makes are visible to the caller. In the single line of code inside the function, you make a tuple out of the two function parameters and assign their values to another tuple, with the order of the two parameters exchanged – that’s what swaps their values. The code above also declares two variables and calls the function on them, so you can check the result in the playground sidebar.
Give yourself another shuriken if you ran the code in a Playground and learned how to swap values via tuples!
Swift functions are very flexible — they can take a variable number of arguments, return one or more values, return other functions and much more.
For this challenge you’ll test your understanding of function syntax in Swift. Write a function called flexStrings that meets the following requirements:

- The function can take zero, one, or two String parameters.
- It returns the concatenation of the parameters passed in, or the string “none” if no parameters are passed.

Here’s some example usage and output of the function:
flexStrings() //--> "none"
flexStrings(s1: "One") //--> "One"
flexStrings(s1: "One", s2: "Two") //--> "OneTwo"
Take 3 shurikens for solving the problem, and an extra shuriken for a total of 4 if you did it in one line.
Tutorial:
Swift function parameters can have a default value — one of the differences between good old Objective-C and Swift. When a parameter has a default value, you refer to it by name when invoking the function, and you can omit the parameter entirely if you’d like to use the default value. Nice! As for the solution? It’s simple when you know about default parameter values:
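A sketch of the solution using default parameter values:

```swift
// Both parameters default to an empty string, so all three call styles work
func flexStrings(s1: String = "", s2: String = "") -> String {
    return s1 + s2 == "" ? "none" : s1 + s2
}

flexStrings()                      //--> "none"
flexStrings(s1: "One")             //--> "One"
flexStrings(s1: "One", s2: "Two")  //--> "OneTwo"
```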
You define a function that takes two parameters, both of which have a default value of “” (an empty string). This way you can call the function with zero, one, or two arguments. All you do in the function body is check whether both parameters are empty strings and, if so, return “none”; otherwise you return their concatenation. To meet the one-line requirement, note that the ternary operator ?: from Objective-C works in Swift too. Try calling this function with different parameters inside a Playground — make sure you understand how it works. Give yourself a shuriken for doing that.
You’ve already mastered functions with optional parameters in the previous challenge. That was fun!
But with the approach from the previous solution, you can only have a fixed maximum number of parameters. There’s a better approach if you truly want a variable number of input values for a function.
This challenge demonstrates how to best use the built-in Array methods and switch statements. Did you pay attention when you read Apple’s Swift Programming Language book? You’re about to find out. :]
Write a function called sumAny that can take 0 or more parameters of any type. The function should meet the following requirements:

- Return the result as a String, following the rules below.
- If a parameter is a String that represents a positive number (e.g. “10″, not “-5″), add it to the result.
- If a parameter is an Int, add it to the result.
- The body should be a single return statement, and don’t use any loops (i.e. no for or while).

Here’s some example calls to the function with their output, so you can check your solution:
let resultEmpty = sumAny() //--> "0"
let result1 = sumAny(Double(), 10, "-10", 2) //--> "12"
let result2 = sumAny("Marin Todorov", 2, 22, "-3", "10", "", 0, 33, -5) //--> "42"
Hints:
You define a function that takes a variable number of parameters by declaring its last parameter as name: Type... . From the function body you can then access name as a normal Array.

You can use map to turn each parameter into an Int, and reduce to add the values together. Finally, calculate the sum as a number and just convert it to a String.
Tutorial:
This problem uses quite a bit of Swift’s built-in language features, so let’s explore a few separate concepts before approaching the final solution. First look at how to define a function that takes in a variable number of parameters:
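For instance, a tiny sketch (countParams is an illustrative name, not from the original post; Swift 1.x syntax):

```swift
// The trailing "..." makes params variadic: 0 or more Any values
func countParams(params: Any...) -> Int {
    // Inside the body, params behaves like a normal Array
    return params.count
}

countParams()                //--> 0
countParams(1, "two", 3.0)   //--> 3
```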
If you put “…” after the type of a function parameter, Swift will accept 0 or more values of that type when the function is called. When you specify the type as Any, the function accepts parameters of any type whatsoever. Inside the function body, you can treat the variadic parameter as a normal Array.
The example above simply returns the count of elements passed to the function. Next, you need to get the sum of the values. You could loop over the elements of the array and use a separate variable to hold the sum, but that solution won’t give you that extra shuriken :] You’ll employ a different strategy — use map to convert each element to its integer value. Here’s the code that converts each element in the array to its summable value:
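Something along these lines (a sketch using Swift 1.x’s String.toInt(); the params array stands in for the variadic parameter):

```swift
let params: [Any] = ["10", "-5", 7, 3.14]

let numbers = params.map { (item: Any) -> Int in
    switch item {
    case let s as String where s.toInt() > 0:
        return s.toInt()!   // a String holding a positive number
    case let i as Int:
        return i            // an Int passes straight through
    default:
        return 0            // everything else counts as zero
    }
}
// numbers is [10, 0, 7, 0]
```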
If you’re not familiar with advanced switch statement features such as type-casting patterns and where clauses, this is a good example of them in action.
Great! This switch statement converts your assorted values to plain integers. After you call map, you have an array of Int values. Finding the sum of array elements is a perfect application for reduce:
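For example (a minimal sketch):

```swift
let numbers = [3, 5, 34]
// Start from 0 and fold each element into the running total with +
let sum = numbers.reduce(0, combine: +)
// sum is 42
```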
Now combine all techniques discussed above to produce the final solution:
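Here’s a sketch of a complete solution (Swift 1.x syntax; note the single return statement with no loops or variables — if String’s integer initializer isn’t available in your Swift version, string interpolation works too):

```swift
func sumAny(params: Any...) -> String {
    return String(params.map({ (item: Any) -> Int in
        switch item {
        case let s as String where s.toInt() > 0:
            return s.toInt()!
        case let i as Int:
            return i
        default:
            return 0
        }
    }).reduce(0, combine: +))
}

sumAny()                        //--> "0"
sumAny(Double(), 10, "-10", 2)  //--> "12"
```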
How this works: the variadic parameter gathers all arguments into an array, map converts each element to its integer value according to the rules, reduce adds the integers together, and the result is converted to a String for the single return statement. You did not need to use any loops, nor any variables — just the built-in map and reduce functions. If you were able to learn and understand how to use map and reduce, give yourself 1 shuriken.
Write a function countFrom(from: Int, #to: Int) that will produce as output (e.g. via print() or println()) the numbers from from to to. You can’t use any loop operators, variables, nor any built-in Array functions. Assume the value of from is less than to (e.g. the input is valid).
Here’s a sample call and its output:
countFrom(1, to: 5) //--> 12345
Tutorial:
The solution for this will involve recursion. For each number starting with from , you’ll recursively call countFrom while increasing the value of from by 1 each time. When from equals to , you’ll stop the recursion. This will effectively turn the function into a simple loop.
Have a look at the complete solution:
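A sketch of the recursive solution (Swift 1.x syntax, where # makes the second parameter’s external name required and print() doesn’t append a newline):

```swift
func countFrom(from: Int, #to: Int) {
    print(from)                      // print the current number (no newline)
    if from < to {
        countFrom(from + 1, to: to)  // recurse with the next number
    }
}

countFrom(1, to: 5)  //--> 12345
```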
When you call the function, it prints the current value of from and compares it to to; since it’s smaller, the function calls itself with from increased by 1. The second call prints out “2″ and compares 2 with 5, then again calls itself. The function continues to recursively call itself, increasing from each time, until from equals to and the recursion stops. Recursion is a powerful concept, and some of the next problems will require recursion as well, so make sure you understand the solution above! Give yourself a shuriken for learning how to use recursion in Swift.
Ninjas need to take breaks, too. And if you made it this far you're doing a fantastic job -- you deserve some rest!
I'd like to take this time to reflect a bit on the problems and the solutions so far. Swift is very powerful, and is a much more expressive and safe language than Objective-C.
With the first 4 problems in this challenge, I wanted to push you into exploring various areas of Swift. I hope working on them has been a fun and beneficial experience so far.
Let's do a small recap. So far you've used tuples, default parameter values, variadic parameters, recursion, and built-in functions like map, reduce, sort, countElements, etc.

And if you're truly looking to take your mind off programming for a bit, you can watch the classic opening sequence from the game Ninja Gaiden:
Stay tuned for part 2 of the Swift Ninja programming challenge - where you can get your revenge! :]
Programming Challenge: Are You a Swift Ninja? Part 1 is a post from: Ray Wenderlich
Welcome back to our “Are you a Swift Ninja?” Programming Challenge!
In the first part of this series, you got some practice with default values in functions, variadic parameters, map/reduce, advanced switch statement features, and more.
Hopefully, you earned plenty of shurikens along the way!
In this second and final part of the series, you will get 4 more challenges to test your ninja skills.
In addition, this tutorial has a special final challenge, where you will get a chance to compete against other developers for fame and fortune!
The best solution to the final challenge will be featured in this post, and will also get a free copy of our upcoming Swift by Tutorials Bundle, which includes three books about programming in Swift.
Ninja mode activate – the challenge continues!
Stretch those fingers and assume the position. It’s time to do another problem that involves recursion and function syntax.
Write a single function that reverses the text in a string. For example, when passed the string “Marin Todorov”, it will return the string “vorodoT niraM”.
Requirements:

- Use recursion — no loops and no variables.
- Don’t use any built-in Array functions.

Here’s an example of a function call and its output:
reverseString("Marin Todorov") //--> "vorodoT niraM"
Tutorial:
This problem has a solution very similar to the one of Problem #4. The difference is mostly due to the fact that the reverseString function you need to write takes one parameter, while countFrom(from:, to:) takes two. You can easily get around that by using what you’ve learned so far!
Start by defining a function that takes two parameters, the latter being an empty string by default. You’ll use this second parameter to accumulate the result of the function. Every recursive call to your function will move one character from the input string to the accumulator. When there are no more characters in the input text, you’ve moved them all to the result, so you stop recursing. Want to see it? Here’s the complete solution:
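Here’s one way it can look (a sketch in Swift 1.x syntax; the default-valued result parameter acts as the accumulator):

```swift
func reverseString(text: String, result: String = "") -> String {
    if text.isEmpty {
        return result   // nothing left to move -- done
    }
    let head = text[text.startIndex]
    let tail = text.substringFromIndex(text.startIndex.successor())
    // Move the first character of the input to the front of the result
    return reverseString(tail, result: String(head) + result)
}

reverseString("Marin Todorov")  //--> "vorodoT niraM"
```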
First you check whether the input string is empty. If it is, all the characters have been moved into the result, so you return the accumulated result and the recursion ends. If not, you move the first character of the input to the front of the result and recurse with the remainder of the string. To better understand how the function works, look at the parameters of each recursive call:
Give yourself a shuriken for trying out the code in a Playground and producing the text output from above. Note: you’ll need to add a println() call to see the output in the console.
Your next challenge has to do with operator overloading — one of the most powerful features of Swift. I hope you’ve had a chance to look into how to do that :]
Your challenge is to overload the “*” operator so it takes a Character and an Int and produces a String with the character repeated Int times.
Here’s an example usage and output:
"-" * 10 //output is: "----------" |
Make use of everything you learned so far and don’t use any variables, loops, inout parameters, or subscripts.
You might need to define an extra auxiliary function. At the time of writing, Xcode crashes when you try to define a nested function inside an operator overload. Hopefully this is corrected at some point.
Tutorial:
For this solution, write a recursive function that takes the following parameters: a Character to repeat, an Int count, and a String accumulator that defaults to an empty string.
This function is very similar to what you developed in the previous challenge. It keeps recursively calling itself, adding one more character to the accumulator, until the length of the result equals the given count; when you reach the target length, you just return the result. Remember, you need to define this auxiliary function at file scope rather than nesting it inside the operator overload, because of the Xcode crash mentioned above. In order to have your “*” operator work, you have to make it take a Character and an Int as operands. Finally, overload the operator itself and make it use your auxiliary function:
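A sketch of the complete solution (Swift 1.x syntax; repeatChar is an illustrative name for the file-scope auxiliary function):

```swift
// Auxiliary function: recursively builds the string in the accumulator
func repeatChar(char: Character, times: Int, accum: String = "") -> String {
    if times == 0 {
        return accum
    }
    return repeatChar(char, times - 1, accum: accum + String(char))
}

// The overload itself: a Character on the left, an Int on the right
func * (left: Character, right: Int) -> String {
    return repeatChar(left, right)
}

"-" * 10  //--> "----------"
```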
You declare a function whose name is the operator itself (*). You declare that the left side of “*” is a Character, that the right side is an Int, and that the result is a String. For the body of the operator overload, you just call the auxiliary function. Give the new operator a try in a Playground — try fun things like building divider lines for console output.
Give yourself a shuriken for trying the operator out in a Playground.
This challenge, while not necessarily pushing you to write beautiful and optimized code, will lead you to discover (or exercise) another very powerful feature of Swift.
“What’s that?” you might ask. Well, you’ll just have to work through it and figure that out for yourself!
For this challenge you’ll need to use this function:
import Foundation func doWork() -> Bool { return arc4random() % 10 > 5 } |
This function, for the purpose of writing and testing your solution, randomly succeeds or fails (e.g. returns true or false).
Write code (and/or additional functions) that will output the message “success!” when doWork() returns true, and will output “error” when doWork() returns false. Your solution should meet the following requirements:

- Use the provided doWork() function as-is.
- Don’t use if, switch, while, or let.
- Don’t use the ternary operator ?:.
Tutorial:
To solve this problem, use a logical expression instead of a control structure like if or switch. Modern languages don’t evaluate the parts of a logical expression that can’t change the result of the expression as a whole — if the result is already clear halfway through evaluating it, the rest is just ignored. What does that really mean? Consider a chained logical expression:
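For instance (expensiveCheck is an illustrative function, not part of the challenge; Swift 1.x println):

```swift
func expensiveCheck() -> Bool {
    println("expensiveCheck was evaluated")
    return true
}

// (false && true) is false; false || true is already true --
// so the runtime never needs to call expensiveCheck() at all
let result = false && true || true || expensiveCheck()
// result is true; "expensiveCheck was evaluated" is never printed
```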
Let’s start evaluating the values two by two from left to right. At some point partway through, the result of the whole expression is already determined, no matter what the remaining values are. At this point, the runtime stops evaluating the expression and simply takes the result it already has. Since you now understand the basic concept of control flow via expressions, have a look at the solution to the original problem:
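A sketch of the solution (doWork() is the function given in the challenge, repeated here so the snippet is self-contained; Swift 1.x println):

```swift
import Foundation

func doWork() -> Bool {
    return arc4random() % 10 > 5
}

// Both helpers return true so they can short-circuit the rest of the expression
func success() -> Bool {
    println("success!")
    return true
}

func error() -> Bool {
    println("error")
    return true
}

// No if, switch, while, let, or ?: -- just lazy evaluation of && and ||
doWork() && success() || error()
```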
You declare two functions: one prints “success!” and one prints “error”, and each returns a Bool so it can take part in the logical expression. When doWork() returns true, the && operator needs its right-hand side, so success() runs and prints “success!”; the left side of || is then true, so error() is never evaluated. When doWork() returns false, the && operator short-circuits and success() never runs; the left side of || is false, so the runtime evaluates error(), which prints “error”.
Note: If you are interested in how lazy evaluation of logical expressions really works you can have a look at this post on the Apple Swift blog that came out during the editing phase of this article.
Currying is a relatively unexplored area outside of functional languages like ML, SML, and Haskell. But as you may have noticed from the presentations at WWDC, Swift has this feature as well — and the Apple engineers seem to be pretty excited about it.
Are you up to the challenge of using currying and partial function application?
Extend the Array structure and add 3 new functions that you could call like this on an array of any type:

- list.swapElementAtIndex(index: Int)(withIndex: Int): Returns a copy of the original array with the elements at indexes index and withIndex exchanged.
- list.arrayWithElementAtIndexToFront(index: Int): Returns a copy of the original array with the element at index index exchanged with the first element.
- list.arrayWithElementAtIndexToBack(index: Int): Returns a copy of the original array with the element at index index exchanged with the last element.

(The examples above use an array called list.)
Requirements:

- Use the keyword func only one time – to declare swapElementAtIndex.
let list = [1, 4, 5, 6, 20, 50]          //--> [1, 4, 5, 6, 20, 50]
list.arrayWithElementAtIndexToBack(2)    //--> [1, 4, 50, 6, 20, 5]
list.arrayWithElementAtIndexToFront(4)   //--> [20, 4, 5, 6, 1, 50]
Hints:
Your swapElementAtIndex function needs to be curried: it takes the first index and returns a function that takes the second index. Since you can use func only once, declare the other two functions as variables whose values are partial applications of swapElementAtIndex.
Tutorial:
Let’s start by defining swapElementAtIndex in an extension on Array. As you can see from the requirements, the function takes one Int parameter and is then called again with a second Int parameter — in other words, it’s a curried function.
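If currying is new to you, here’s a minimal illustration with a free function first (addTwo is illustrative; Swift 1.x curried-function syntax):

```swift
// The two parameter lists make this a curried function:
// addTwo(1) returns a new function that is still waiting for b
func addTwo(a: Int)(b: Int) -> Int {
    return a + b
}

let addOne = addTwo(1)   // partial application: a is fixed to 1
addOne(b: 5)             //--> 6
addOne(b: 41)            //--> 42
```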
Declared this way, swapElementAtIndex is a curried function: calling it with the first index doesn’t produce an array yet — it returns another function that expects the withIndex parameter. So, how do you return a function from the first application? With the curried syntax, you simply write the body as if both parameter lists were available at once.
The second parameter list takes one parameter, withIndex, and the function returns a copy of the array with the two elements exchanged. To be able to modify the contents of the array you need a mutable copy; just declare a local variable, and the copy is automatically mutable thanks to the var keyword. The next step is to declare the rest of the required functions — but you can’t use func again. Don’t furrow your brow: class and structure variables can be functions too, so you can skip the keyword func entirely. You’re going to pre-fabricate two variables that call swapElementAtIndex with one of its two index parameters already filled in. Add these inside the extension:
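Putting it all together, the extension can look like this (a sketch in Swift 1.x syntax, where Array’s element type is named T and curried methods are still supported):

```swift
extension Array {
    // A curried method: the first call takes index, the second takes withIndex
    func swapElementAtIndex(index: Int)(withIndex: Int) -> [T] {
        var result = self               // `var` gives you a mutable copy
        result[index] = self[withIndex]
        result[withIndex] = self[index]
        return result
    }

    // Partial application: fix one of the two indexes to 0 (the front)...
    var arrayWithElementAtIndexToFront: (Int) -> [T] {
        return swapElementAtIndex(0)
    }

    // ...or to count - 1 (the back). No extra `func` keyword needed.
    var arrayWithElementAtIndexToBack: (Int) -> [T] {
        return swapElementAtIndex(count - 1)
    }
}
```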
What these two new functions do is pre-fill one of the parameters required to call swapElementAtIndex, giving you partially applied functions. And that’s all it takes for this solution. You learned how to create a curried function, and then how to prefabricate other functions out of it using partial application. Good job!
Time for the last challenge. This time there will be no hints and no tutorial. It’s all on you, dear Ninja.
Approach this challenge carefully and design a beautiful solution. Then post your solution in the comments on this post. Note it’s ideal if you post your solution as a gist so it has nice syntax highlighting.
I will select one solution as the winner. The winner will be immortalized in this post as the correct solution for this challenge! Remember to leave your name along the code, so you can live in infamy, forever known as a true Swift Ninja :]
In addition, the winner will receive a free copy of our upcoming Swift by Tutorials Bundle, which includes three books about programming in Swift!
In choosing a winner, I will consider correctness, brevity and use of Swift’s language features. Embrace all the techniques you’ve explored in this post. Should you and another developer post the same winning solution, I’ll choose the one that was posted first.
You have 2 weeks from the time this post goes live. Better get to it!
Let’s get to coding! Here are the enumerations and a struct to get started:

enum Suit {
    case Clubs, Diamonds, Hearts, Spades
}

enum Rank {
    case Jack, Queen, King, Ace
    case Num(Int)
}

struct Card {
    let suit: Suit
    let rank: Rank
}
Write a function called countHand that takes in an array of Card instances and counts the total value of the cards given.
The requirements for your solution are as follows:
Here’s an example of usage and its result. Use this to check your solution:

countHand([
    Card(suit: Suit.Hearts, rank: Rank.Num(10)),
    Card(suit: Suit.Hearts, rank: Rank.Num(6)),
    Card(suit: Suit.Diamonds, rank: Rank.Num(5)),
    Card(suit: Suit.Clubs, rank: Rank.Ace),
    Card(suit: Suit.Diamonds, rank: Rank.Jack)
])  //--> 110
First, check your challenge result!
Calculate how many shurikens you earned — for the final challenge, give yourself 3 shurikens if you solved the problem and none if you didn’t.
How ninja are you?
You can download the complete Playground solution to the first 8 challenges here: NinjaChallenge-Completed.playground
Even if you mastered the challenges, you can always learn more! Check out some additional Swift resources:
Remember to post your solution to the final challenge along with your name, and good luck. Thanks for reading this tutorial, and if you have comments or questions, please join in the forum discussion below!
Credit: All images in this post are from the public domain, and are available at: www.openclipart.org
Programming Challenge: Are You a Swift Ninja? Part 2 is a post from: Ray Wenderlich
Your challenge is to make a function named knockKnockJoke that returns a random knock knock joke as a tuple. The tuple should have two components: a string named who, and a string named punchline.
Here’s some starter code:
import Foundation

func randomIndex(count: Int) -> Int {
    return Int(arc4random_uniform(UInt32(count)))
}

// Your code here! Write knockKnockJoke() function
// Make an array of 3 knock knock jokes
// Return a random joke!

let joke = knockKnockJoke()

println("Knock, knock.")
println("Who's there?")
println("\(joke.who)")
println("\(joke.who) who?")
println("\(joke.punchline)")
If you need some knock-knock jokes, check out this site!
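If you get stuck, here’s one possible shape for a solution (a sketch; the jokes are placeholders — substitute your own, and randomIndex comes from the starter code above):

```swift
import Foundation

func randomIndex(count: Int) -> Int {
    return Int(arc4random_uniform(UInt32(count)))
}

func knockKnockJoke() -> (who: String, punchline: String) {
    // An array of (who, punchline) tuples -- placeholder jokes
    let jokes = [
        (who: "Boo", punchline: "Don't cry, it's only a joke!"),
        (who: "Lettuce", punchline: "Lettuce in, it's cold out here!"),
        (who: "Tank", punchline: "You're welcome!")
    ]
    // Pick one at random
    return jokes[randomIndex(jokes.count)]
}

let joke = knockKnockJoke()
```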
Video Tutorial: Introduction to Swift Part 8: Tuples is a post from: Ray Wenderlich
Update note: This tutorial is an abbreviated version of a chapter from iOS 7 by Tutorials by Colin Eberhardt. It was updated for iOS 8 and Swift by James Frost.
The design goals of iOS encourage you to create digital interfaces that react to touch, gestures, and changes in orientation as if they were physical objects rather than a simple collection of pixels. The end result gives the user a deeper connection with the interface than is possible through skin-deep skeuomorphism.
This sounds like a daunting task, as it is much easier to make a digital interface look real, than it is to make it feel real. However, you have some nifty new tools on your side: UIKit Dynamics and Motion Effects.
When used together, motion and dynamics form a powerhouse of user experience tools that make your digital interfaces come to life. Your users will connect with your app at a deeper level by seeing it respond to their actions in a natural, dynamic way.
Note: At the time of writing this tutorial, our understanding is we cannot post screenshots of iOS 8 since it is still in beta. All the screenshots here are from iOS 7, which should look very close to what things will look like in iOS 8.
UIKit dynamics can be a lot of fun; the best way to start learning about them is to jump in feet-first with some small examples.
Open Xcode, select File / New / Project…, then select iOS Application / Single View Application and name your project DynamicsDemo. Once the project has been created, open ViewController.swift and add the following code to the end of viewDidLoad:

let square = UIView(frame: CGRect(x: 100, y: 100, width: 100, height: 100))
square.backgroundColor = UIColor.grayColor()
view.addSubview(square)
The above code simply adds a square UIView to the interface.
Build and run your app, and you’ll see a lonely square sitting on your screen, as shown below:
If you’re running your app on a physical device, try tilting your phone, turning it upside-down, or even shaking it. What happens? Nothing? That’s right — everything is working as designed. When you add a view to your interface you expect it to remain firmly stuck in place as defined by its frame — until you add some dynamic realism to your interface!
Still working in ViewController.swift, add the following properties above viewDidLoad:

var animator: UIDynamicAnimator!
var gravity: UIGravityBehavior!
These properties are implicitly-unwrapped optionals (as denoted by the ! after the type name). They must be optional because you won’t be initializing them in the class’s init method. You can use implicitly-unwrapped optionals because you know these properties won’t be nil after you’ve initialized them, which saves you from manually unwrapping their values with the ! operator each time.
Add the following to the end of viewDidLoad:

animator = UIDynamicAnimator(referenceView: view)
gravity = UIGravityBehavior(items: [square])
animator.addBehavior(gravity)
I’ll explain this in a moment. For now, build and run your application. You should see your square slowly start to accelerate in a downward motion until it drops off the bottom of the screen, as so:
In the code you just added, there are a couple of dynamics classes at play here:
- UIDynamicAnimator is the UIKit physics engine. This class keeps track of the various behaviors that you add to the engine, such as gravity, and provides the overall context. When you create an instance of an animator, you pass in a reference view that the animator uses to define its coordinate system.
- UIGravityBehavior models the behavior of gravity and exerts forces on one or more items, allowing you to model physical interactions. When you create an instance of a behavior, you associate it with a set of items — typically views. This way you can select which items are influenced by the behavior — in this case, which items the gravitational forces affect.
NOTE: A quick word on units: in the physical world, gravity (g) is expressed in meters per second squared and is approximately equal to 9.8 m/s². Using Newton’s second law, you can compute how far an object will fall under gravity’s influence with the following formula:

distance = 0.5 × g × time²
In UIKit Dynamics, the formula is the same but the units are different. Rather than meters, you work with units of thousands of pixels per second squared. Using Newton’s second law you can still work out exactly where your view will be at any time based on the gravity components you supply.
Do you really need to know all this? Not really; all you really need to know is that a bigger value for g means things will fall faster, but it never hurts to understand the math underneath.
Although you can’t see it, the square continues to fall even after it disappears off the bottom of your screen. In order to keep it within the bounds of the screen you need to define a boundary.
Add another property in ViewController.swift:

var collision: UICollisionBehavior!
Add these lines to the bottom of viewDidLoad:

collision = UICollisionBehavior(items: [square])
collision.translatesReferenceBoundsIntoBoundary = true
animator.addBehavior(collision)
The above code creates a collision behavior, which defines one or more boundaries with which the associated items interact.
Rather than explicitly adding boundary co-ordinates, the above code sets the translatesReferenceBoundsIntoBoundary property to true. This causes the boundary to use the bounds of the reference view supplied to the UIDynamicAnimator.
Build and run; you’ll see the square collide with the bottom of the screen, bounce a little, then come to rest, as so:
That’s some pretty impressive behavior, especially when you consider just how little code you’ve added at this point.
Next up you’ll add an immovable barrier that the falling square will collide and interact with.
Insert the following code into viewDidLoad just after the lines that add the square to the view:

let barrier = UIView(frame: CGRect(x: 0, y: 300, width: 130, height: 20))
barrier.backgroundColor = UIColor.redColor()
view.addSubview(barrier)
Build and run your app; you’ll see a red “barrier” extending halfway across the screen. However, it turns out the barrier isn’t that effective as the square falls straight through the barrier:
That’s not quite the effect you were looking for, but it does provide an important reminder: dynamics only affect views that have been associated with behaviors.
Time for a quick diagram:
UIDynamicAnimator is associated with a reference view that provides the coordinate system. You then add one or more behaviors that exert forces on the items they are associated with. Most behaviors can be associated with multiple items, and each item can be associated with multiple behaviors. The above diagram shows the current behaviors and their associations within your app.
Neither of the behaviors in your current code is “aware” of the barrier, so as far as the underlying dynamics engine is concerned, the barrier doesn’t even exist.
To make the square collide with the barrier, find the line that initializes the collision behavior and replace it with the following:
collision = UICollisionBehavior(items: [square, barrier])
The collision object needs to know about every view it should interact with; therefore adding the barrier to the list of items allows the collision object to act upon the barrier as well.
Build and run your app; the two objects collide and interact, as shown in the following screenshot:
The collision behavior forms a “boundary” around each item that it’s associated with; this changes them from objects that can pass through each other into something more solid.
Updating the earlier diagram, you can see that the collision behavior is now associated with both views:
However, there’s still something not quite right with the interaction between the two objects. The barrier is supposed to be immovable, but when the two objects collide in your current configuration the barrier is knocked out of place and starts spinning towards the bottom of the screen.
Even more oddly, the barrier bounces off the bottom of the screen and doesn’t quite settle down like the square – this makes sense because the gravity behavior doesn’t interact with the barrier. This also explains why the barrier doesn’t move until the square collides with it.
Looks like you need a different approach to the problem. Since the barrier view is immovable, there isn’t any need for the dynamics engine to be aware of its existence. But how will the collision be detected?
Change the collision behavior initialization back to its original form so that it’s only aware of the square:
collision = UICollisionBehavior(items: [square])
Immediately after this line, add the following:
// add a boundary that has the same frame as the barrier
collision.addBoundaryWithIdentifier("barrier", forPath: UIBezierPath(rect: barrier.frame))
The above code adds an invisible boundary that has the same frame as the barrier view. The red barrier remains visible to the user but not to the dynamics engine, while the boundary is visible to the dynamics engine but not the user. As the square falls, it appears to interact with the barrier, but it actually hits the immovable boundary instead.
Build and run your app to see this in action, as below:
The square now bounces off the boundary, spins a little, and then continues its journey towards the bottom of the screen where it comes to rest.
By now the power of UIKit Dynamics is becoming rather clear: you can accomplish quite a lot with only a few lines of code. There’s a lot going on under the hood; the next section shows you some of the details of how the dynamic engine interacts with the objects in your app.
Each dynamic behavior has an action property where you supply a block to be executed with every step of the animation. Add the following code to viewDidLoad:

collision.action = {
    println("\(NSStringFromCGAffineTransform(square.transform)) \(NSStringFromCGPoint(square.center))")
}
The above code logs the center and transform properties for the falling square. Build and run your app, and you’ll see these log messages in the Xcode console window.
For the first ~400 milliseconds you should see log messages like the following:
[1, 0, 0, 1, 0, 0], {150, 236}
[1, 0, 0, 1, 0, 0], {150, 243}
[1, 0, 0, 1, 0, 0], {150, 250}
Here you can see that the dynamics engine is changing the center of the square — that is, its frame — in each animation step.
As soon as the square hits the barrier, it starts to spin, which results in log messages like the following:
[0.99797821, 0.063557133, -0.063557133, 0.99797821, 0, 0] {152, 247}
[0.99192101, 0.12685727, -0.12685727, 0.99192101, 0, 0] {154, 244}
[0.97873402, 0.20513339, -0.20513339, 0.97873402, 0, 0] {157, 241}
Here you can see that the dynamics engine is using a combination of a transform and a frame offset to position the view according to the underlying physics model.
While the exact values that dynamics applies to these properties are probably of little interest, it’s important to know that they are being applied. As a result, if you programmatically change the frame or transform properties of your object, you can expect that these values will be overwritten. This means that you can’t use a transform to scale your object while it is under the control of dynamics.
The method signatures for the dynamic behaviors use the term items rather than views. The only requirement for applying dynamic behavior to an object is that it adopts the UIDynamicItem protocol, like so:
protocol UIDynamicItem : NSObjectProtocol {
    var center: CGPoint { get set }
    var bounds: CGRect { get }
    var transform: CGAffineTransform { get set }
}
The UIDynamicItem
protocol gives dynamics read and write access to the center and transform properties, allowing it to move the items based on its internal computations. It also has read access to bounds, which it uses to determine the size of the item. This allows it to create collision boundaries around the perimeter of the item as well as compute the item’s mass when forces are applied.
This protocol means that dynamics is not tightly coupled to UIView
; indeed there is another UIKit class that isn’t a view but still adopts this protocol: UICollectionViewLayoutAttributes
. This allows dynamics to animate items within collection views.
So far you have added a few views and behaviors then let dynamics take over. In this next step you will look at how to receive notifications when items collide.
Still in ViewController.swift, adopt the UICollisionBehaviorDelegate
protocol by updating the class declaration:
class ViewController: UIViewController, UICollisionBehaviorDelegate {
In viewDidLoad, set the view controller as the delegate just after the collision object is initialized, as follows:
collision.collisionDelegate = self
Next, add an implementation for one of the collision behavior delegate methods to the class:
func collisionBehavior(behavior: UICollisionBehavior!, beganContactForItem item: UIDynamicItem!,
    withBoundaryIdentifier identifier: NSCopying!, atPoint p: CGPoint) {
    println("Boundary contact occurred - \(identifier)")
}
This delegate method is called when a collision occurs. It prints out a log message to the console. In order to avoid cluttering up your console log with lots of messages, feel free to remove the collision.action
logging you added in the previous section.
Build and run; your objects will interact, and you’ll see the following entries in your console:
Boundary contact occurred - barrier
Boundary contact occurred - barrier
Boundary contact occurred - nil
Boundary contact occurred - nil
Boundary contact occurred - nil
Boundary contact occurred - nil
From the log messages above you can see that the square collides twice with the boundary identifier barrier; this is the invisible boundary you added earlier. The nil identifier refers to the outer reference view boundary.
These log messages can be fascinating reading (seriously!), but it would be much more fun to provide a visual indication when the item bounces.
Below the line that sends the message to the log, add the following:
let collidingView = item as UIView
collidingView.backgroundColor = UIColor.yellowColor()
UIView.animateWithDuration(0.3) {
    collidingView.backgroundColor = UIColor.grayColor()
}
The above code changes the background color of the colliding item to yellow, and then fades it back to gray again.
Build and run to see this effect in action:
The square will flash yellow each time it hits a boundary.
So far UIKit Dynamics has automatically set the physical properties of your items (such as mass and elasticity) by calculating them based on your item’s bounds. Next up you’ll see how you can control these physical properties yourself by using the UIDynamicItemBehavior
class.
Within viewDidLoad, add the following to the end of the method:
let itemBehaviour = UIDynamicItemBehavior(items: [square])
itemBehaviour.elasticity = 0.6
animator.addBehavior(itemBehaviour)
The above code creates an item behavior, associates it with the square, and then adds the behavior object to the animator. The elasticity property controls the bounciness of the item; a value of 1.0 represents a completely elastic collision; that is, where no energy or velocity is lost in a collision. You’ve set the elasticity of your square to 0.6, which means that the square will lose velocity with each bounce.
Build and run your app, and you’ll notice that the square now behaves in a bouncier manner, as below:
Note: If you are wondering how I produced the above image with trails that show the previous positions of the square, it was actually very easy! I simply added a block to the action property of one of the behaviors, and every third time the block code was executed, added a new square to the view using the current center and transform from the square.
In the above code you only changed the item's elasticity; however, the item behavior class has a number of other properties that can be manipulated in code, including friction, density, resistance, angularResistance, and allowsRotation.
In its current state, your app sets up all of the behaviors of the system, then lets dynamics handle the physics of the system until all items come to rest. In this next step, you’ll see how behaviors can be added and removed dynamically.
Open ViewController.swift and add the following property, above viewDidLoad:
var firstContact = false
Add the following code to the end of the collision delegate method collisionBehavior(behavior:beganContactForItem:withBoundaryIdentifier:atPoint:):
if (!firstContact) {
    firstContact = true

    let square = UIView(frame: CGRect(x: 30, y: 0, width: 100, height: 100))
    square.backgroundColor = UIColor.grayColor()
    view.addSubview(square)

    collision.addItem(square)
    gravity.addItem(square)

    let attach = UIAttachmentBehavior(item: collidingView, attachedToItem: square)
    animator.addBehavior(attach)
}
The above code detects the initial contact between the barrier and the square, creates a second square and adds it to the collision and gravity behaviors. In addition, you set up an attachment behavior to create the effect of attaching a pair of objects with a virtual spring.
Build and run your app; you should see a new square appear when the original square hits the barrier, as shown below:
While there appears to be a connection between the two squares, you can’t actually see the connection as a line or spring since nothing has been drawn on the screen to represent it.
As you’ve just seen, you can dynamically add and remove behaviours when your physics system is already in motion. In this final section, you’ll add another type of dynamics behaviour, UISnapBehavior
, whenever the user taps the screen. A UISnapBehavior
makes an object jump to a specified position with a bouncy spring-like animation.
Remove the code that you added in the last section: both the firstContact
property and the if
statement in collisionBehavior()
. It’s easier to see the effect of the UISnapBehavior
with only one square on screen.
Add two properties above viewDidLoad:
var square: UIView!
var snap: UISnapBehavior!
This will keep track of your square view, so that you can access it from elsewhere in the view controller. You’ll be using the snap
object next.
In viewDidLoad, remove the let keyword from the declaration of the square, so that it uses the new property instead of a local variable:
square = UIView(frame: CGRect(x: 100, y: 100, width: 100, height: 100))
Finally, add an implementation for touchesEnded, to create and add a new snap behavior whenever the user touches the screen:
override func touchesEnded(touches: NSSet!, withEvent event: UIEvent!) {
    if (snap != nil) {
        animator.removeBehavior(snap)
    }
    let touch = touches.anyObject() as UITouch
    snap = UISnapBehavior(item: square, snapToPoint: touch.locationInView(view))
    animator.addBehavior(snap)
}
This code is quite straightforward. First, it checks if there's an existing snap behavior and removes it. Then it creates a new snap behavior, which snaps the square to the location of the user's touch, and adds it to the animator.
Build and run your app. Try tapping around; the square should zoom into place wherever you touch!
At this point you should have a solid understanding of the core concepts of UIKit Dynamics. You can download the final DynamicsDemo project from this tutorial for further study.
UIKit Dynamics brings the power of a physics engine to your iOS apps. With subtle bounces and springs and gravity, you can bring life to your apps and make them feel more real and immersive for your users.
If you’re interested in learning more about UIKit Dynamics, check out our book iOS 7 By Tutorials. The book takes what you’ve learned so far and goes a step further, showing you how to apply UIKit Dynamics in a real-world scenario:
The user can pull up on a recipe to take a peek at it, and when they release the recipe, it will either drop back into the stack, or dock to the top of the screen. The end result is an application with a real-world physical feel.
I hope you enjoyed this UIKit Dynamics tutorial – we think it’s pretty cool and look forward to seeing the creative ways you use it in your apps. If you have any questions or comments, please join the forum discussion below!
UIKit Dynamics Tutorial in Swift is a post from: Ray Wenderlich
You’ve all seen the all-star Hollywood programmer hacking through the mainframe, fingers racing on the keyboard while terminals fly across the screen. If you’ve ever wanted to be like that, you’re in the right place!
This tutorial will teach you how to be more like that programmer, in Xcode. Call it what you like — magic, mad skillz, pure luck or hacks, there is no doubt you’ll feel much cooler (and have improved Xcode efficiency) after following along this tutorial, and maybe even save the world from destruction with your newly found prowess.
Since coolness is the goal, here is what contributes towards coolness points in this tutorial:
To gain extra ninja points, you can try to accomplish each task without touching the mouse or track-pad. Yes, that’s the equivalent of going pewpew in Xcode.
Your studies will commence with learning a few useful Xcode features. Then, you’ll continue your training by fixing some bugs in CardTilt, which you might be familiar with from this tutorial. Finally, you’ll clean up the code a bit and make the interface pixel accurate.
Keep in mind that this tutorial is not about the final app; this lesson is about learning how to take advantage of Xcode to develop apps with more speed and grace than ever before.
This tutorial assumes a basic understanding of Xcode and focuses on techniques that improve your efficiency as a programmer. Everyone’s coding habits are different, so this tutorial is not meant to force a certain style on you.
Throughout, you’ll see alternatives to certain commands. When following along, just focus on refining and building on your current development style, and try not to let the subtle differences throw you.
Note: If you’re not confident working with Xcode yet, you can check out these tutorials here and here.
Download the CardTilt-starter and get ready to code!
There are a few tasks in Xcode that you perform regularly in most projects. This section will take a close look at some of these tasks and talk about some tricks to tackle them with pizazz. As you progress, you’ll build upon these tricks and discover new ways to use them. These tricks will become the ninja stars and smoke bombs that are a must on your coding tool belt.
Open CardTilt in Xcode, but don’t dive into the code just yet. First, take a moment to match up what you see to the diagram of the Xcode Workspace Window below.
These labels are how this tutorial will refer to the individual parts of the workspace. If your setup doesn’t look exactly like this – don’t worry! In the Hotkeys section below, you’ll learn how to easily show and dismiss these areas, so any inconsistencies won’t matter.
Here is a quick rundown of the individual views that make up the workspace interface:
All of these views are essential to Xcode, and you’ll probably interact with every one of them when you’re working on a project. Usually you won’t need to look at them all at once though, and this next section will teach you how to configure your workspace quickly by using hotkeys.
On this journey of Xcode coolness, you’ll first learn how to master it with hotkeys. The most useful hotkeys are surprisingly easy to remember, thanks to a few patterns.
Prepare to be in awe of the intuitive nature of the following hotkeys.
The first pattern to know is the relation of the common modifier keys to various parts of Xcode.
Here’s a quick breakdown of the most common:
The second pattern to recognize is the relation of number keys
to tabs. Combinations of number keys and the modifiers keys above will let you switch through tabs quickly.
Generally, the numbers correspond to the indexes (starting from one) of the tabs, and zero always hides or shows the area. Can it get any more intuitive?
The most common combinations are:
- Command 1~8 to jump through the Navigators, Command 0 to close the navigation area.
- Command Alt 1~6 to jump through the Inspectors, Command Alt 0 to close the utility area.
- Control Command Alt 1~4 to jump through the Libraries.
- Control 1~6 to bring down tabs in the Jump Bar.

The last, and most simple, pattern is the Enter key. When used with Command, it allows you to switch between editors:
Last, but not least, open and close the debugging area with Command + Shift + Y. To remember this just ask, “Y is my code not working?”
You can find most of these hotkeys under the Navigate menu in Xcode, in case you forget.
To finish off this section, rejoice in knowing that you can make Xcode dance for you by using only your keyboard!
Stats:
Coolness points gained:100
Total Coolness points:100
Ninja points gained:100
Total Ninja points:100
Controlling the interface of Xcode using hotkeys is cool, but do you know what’s even more ninja? Having Xcode transform to the interface you want automatically. Now that’s the epitome of cool.
Fortunately, Xcode provides Behaviors that let you do this very easily. They are simply a defined set of actions that are triggered by a specific event, such as starting a build. Actions range all the way from changing the interface to running a custom script.
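As an example of that last kind of action, a behavior's "Run" script action could point at a small shell script like this one. This is only an illustrative sketch — the log path and message are made up, not from Xcode or the original post:

```shell
#!/bin/sh
# Hypothetical script for a behavior's "Run" action: append a
# timestamped line to a log each time the triggering event fires,
# e.g. to see how often you kick off builds during a session.
LOG_FILE="${HOME}/xcode-behaviors.log"
echo "$(date '+%Y-%m-%d %H:%M:%S') behavior triggered" >> "$LOG_FILE"
```

Remember to mark the script executable (chmod +x) before pointing a behavior at it.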
To see an example you’re familiar with, make a quick modification to CTAppDelegate.m
so that running will generate console output. Replace didFinishLaunchingWithOptions
with the following:
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
    [[UIApplication sharedApplication] setStatusBarHidden:YES];
    // Override point for customization after application launch.
    NSLog(@"Show me some output!");
    return YES;
}
Now build while carefully watching the Debugging Area. You’ll notice a familiar sight – the debugging area appears as soon as the app starts running:
To see what defines this event, open the behavior preferences window by going to Xcode\Behaviors\Edit Behaviors. On the left, you’ll see a list of all the events; on the right, a list of the actions the events can trigger. Click on the Generates output event under Running and you’ll see it is set to show the debugger:
There are two different sets of actions I recommend for the Generates output event, depending on your development environment. If you have a dual-screen environment, try the first one. If you work on a single screen, try jumping ahead to the second method below.
If you work on two or more screens, wouldn’t it be handy to have the debug console in its own window on your second screen? Well, you can set up the actions so it looks like this:
Now build and run, and you’ll see a separate window appear. Position it on your secondary display, and just like that, you have yourself an efficient debugging setup!
If you have a single screen environment, maximize the effective area for the console by hiding the utilities pane and setting the console to take up the entire debug area. To do this, set up the actions like so:
Now run the app, and watch as Xcode automatically follows your commands!
You’ll also want to change the behavior to account for when the project pauses. Go to the Pauses event under Running, and change the settings like this:
Now whenever you hit a breakpoint, you’ll get a new tab named Fix that will display the variables and console views, and automatically navigate to the first issue. You should really think about demonstrating this next time you host a party, because it’s that cool.
Well, maybe not…unless your guests are burgeoning coders.
The last behavior you’ll create for now is the one of my personal favorites. This is a custom behavior that you can assign to a hotkey. When triggered, it transforms Xcode into what I refer to as Dev Mode, which is an optimized layout for developing your next masterpiece.
To do this, create a new event by pressing the + button near the bottom left of the behavior preferences window. Name this behavior Dev Mode.
Double-click the Command symbol (⌘) to the right of the event name, and then type Command . to define a hotkey.
Next, configure the behavior with the following actions:
Now whenever you press Command .
you’ll be greeted with the same, clean development interface.
This is the perfect time to introduce you to Xcode tab names, which work beautifully with behaviors.
Xcode Tab Names: You can rename an Xcode tab simply by double-clicking on its title. This is rather a useless feature by itself, but becomes extremely powerful when you pair it with behaviors.
In the second example above, when modifying the Pauses behavior, you named the tab Fix. This means that when the behavior is triggered, Xcode will use the Fix tab if it exists, or create a new tab if it’s not already there.
Another example is the dual-screen Starts behavior. If a tab named Debug is open from a previous run, it’ll re-use it instead of creating a new one.
You can create some very interesting behaviors by utilizing tab names in this manner.
Okay, take a few minutes to play around with behaviors. Don’t worry, this tutorial will wait for you. :]
Wait, are you back already? Okay, then it’s time for some action!
Stats:
Coolness points gained:400
Total Coolness points:500
Ninja points gained:50
Total Ninja points:150
In the following sections, you’ll learn to put these tricks to a test, and learn some new ones while working on CardTilt.
Build and run CardTilt, and you should see a screen like this:
Not what you’re seeing? Looks like it’s time for some ninja-level bug squashing!
It appears as though the app is having trouble loading the data, and it’s your job to fix it. Open CTMainViewController.m and enter Dev Mode (Command .).
Notice these first few lines in viewDidLoad
:
self.dataSource = [[CTTableViewDataSource alloc] init];
self.view.dataSource = self.dataSource;
self.view.delegate = self;
Looks like CTTableViewDataSource implements UITableViewDataSource
and provides the table with data. Time to use your Xcode skills to confirm this and get to the bottom of the issue.
Hold Command and click on CTTableViewDataSource to open CTTableViewDataSource.h
in your primary editor. CTTableViewDataSource.m
should’ve loaded in your assistant editor, as well. If that’s not the case, use the Jump Bar to change the assistant editor mode to counterparts like this:
Look around, and you’ll see members
holds the data, and loadData
loads it from the bundle. That looks like a great starting point for debugging. Switch CTTableViewDataSource.m
to the primary editor by right clicking anywhere inside the assistant editor, and then choosing Open in Primary Editor.
Below is an animation showing the steps you’ve taken thus far:
Ninja Dojo: For bonus ninja points, you can do all of the above without your mouse by following these steps:
- Enter Dev Mode (Command .).
- Select CTTableViewDataSource and press Command + Control + J to jump to the definition.
- Open CTTableViewDataSource.m in the primary editor.

Remember that being a ninja isn’t always the most efficient route, but you’ll always look cool.
Bonus Ninja Tip: Open Quickly (Command Shift O
) is one of the coolest Xcode ninja tools. Use it and love it.
Stats:
Coolness points gained:100
Total Coolness points:600
Ninja points gained:100
Total Ninja points:250
You need to determine if data made its way into members
, so start by setting a breakpoint right below self.members = json[@"Team"];
and run the project.
Note: If you are new to setting breakpoints and debugging in general, check out our video tutorial series on debugging.
Of the behaviors you looked at earlier, Generates output
will get triggered first, and then Pause
follows immediately after. Because of the custom setup you created for Pause, you’ll get a new tab named Fix with a layout that is perfect for debugging! Now you have another cool trick to show your friends at your next party.
Look at the variable inspector. Do you notice that self.members is nil? That seems a little, um, fishy. In loadData, you can see that self.members is populated like this:
NSDictionary *json = [NSJSONSerialization JSONObjectWithData:data options:kNilOptions error:&error];
self.members = json[@"Team"];
Dig into json
in the variable inspector, so you can determine if the dictionary loaded correctly.
You’ll see the first key in the data is @"RWTeam" instead of @"Team". When loading self.members, the lookup used a key that isn’t in the data. Ah-ha! Eureka, there’s that little bug!
To resolve, you need to correct the data in the source file:
- Enter Dev Mode with Command ..
- Press Command + Option + J to jump to the filter bar and type this: teammember.
- Open TeamMembers.json in the assistant editor.
- Replace "RWTeam" with "Team".

This is how it looks when you follow these steps:
Now remove the breakpoint, and then build and run. You should see this:
Much better, but it looks like there is another bug. See how the descriptions below Ray and Brian’s titles are missing? Well, that’s a good thing because you’ll get to add more points to your coolness and ninja stats by resolving the problem.
Stats:
Coolness points gained:200
Total Coolness points:800
Ninja points gained:100
Total Ninja points:350
Let’s try to utilize more ninja tools for this second bug.
You probably know that UITableViewCells are configured in tableView:cellForRowAtIndexPath:, so navigate there using Open Quickly and following these steps:
- Press Command + Shift + O to bring up Open Quickly.
- Type cellForRow, then press down once to select the instance in CardTilt.

Hold Command and click on setupWithDictionary to navigate to the definition. Look around a bit, and you’ll see some code that appears to be loading descriptions:
NSString *aboutText = dictionary[@"about"]; // Should this be aboot?
self.aboutLabel.text = [aboutText stringByReplacingOccurrencesOfString:@"\\n" withString:@"\n"];
It’s loading the label from data found in dictionary[@"about"].
Now use Open Quickly to bring up TeamMembers.json
. This time, open in the assistant editor by pressing Alt + Enter.
Check for the about
key, and you’ll see someone misspelled it as aboot
—probably a Canadian, like me! To fix this, use global Find and Replace. Sure, you could do this directly in the file, but using the find navigator is infinitely cooler.
Open up the find navigator and change the mode to Replace by clicking on the Jump Bar at the top. Type aboot into the find field and press Enter.
Hmm… there is one other place that uses the word aboot outside of TeamMembers.json.
No worries! Click on CTCardCell.m
in the search results and press Delete. Now you no longer need to concern yourself with replacing it – cool!
Go to the replace field and type in ‘about’, and then press Replace All to finish the job.
Here’s how that looks in action:
Ninja Dojo: This section already gave you tons of ninja points. To obtain even more, you can use Command + Shift + Option + F
to open the Find navigator in replace mode.
If that’s a tad too ninja, you can use Command + Shift + F
to open up the find navigator in Find mode and change the mode to Replace afterwards.
If that’s still too ninja, you can use Command 3
to open up the Find navigator and go from there!
Ninja Tip: There are many, many ways to perform actions in Xcode. Play around and find the way that suits you best :]
Build and run. You should now see all the cells properly loaded, like this:
Stats:
Coolness points gained:200
Total Coolness points:1000
Ninja points gained:50
Total Ninja points:400
That’s it for debugging today. Give yourself a round of applause and pat on the back for getting the app up and running.
Before you go and show it to someone, you want to make sure the app’s interface is flawless. Especially if that someone is your designer, who likes to take his ruler to the screen.
This section will show you a few tricks in Interface Builder to achieve such perfection, and of course, help you become even cooler in the process.
Open up MainStoryboard.storyboard
. Usually, you want the standard editor and the utilities area open when working in interface builder, so create a new custom behavior for this called IB Mode
. Feel free to use the version below, but try to create your own before you look at the solution used to create this tutorial. It’s cool to be different!
You can see that I used Command Option .
as the hotkey for IB Mode.
Now that you’re looking at a comfy Interface Builder, take a look at CTCardCell. First, you want to center mainView inside Content View. Here are two tricks to make this relatively elementary:
Hold Control + Shift and left click anywhere on mainView within the editor.
You’ll see a popup that lets you choose between all the views under your mouse pointer like this:
This allows you to select mainView easily, even though cardbg
is blocking it.
Once you select mainView, hold Alt and move your cursor around the edges of Content View to see the spacing of the view.
Turns out the alignment isn’t much to behold. That’s not very ninja!
To fix this, you’ll need to resize the view. Turn on Editor\Canvas\Live Autoresizing to force subviews to resize when you resize their parent view. Now drag the corners of mainView while holding Alt and adjust until there are 15 points on each side.
Try using the same trick to help align the three labels titleLabel
, locationLabel
and aboutLabel
so that they have zero vertical spacing between them. Hold Alt to monitor the spacing while repositioning the labels with the arrow keys or your mouse.
Did you notice the left edges of these labels are also misaligned?
Your designer will definitely want them to be left-aligned with nameLabel
and webLabel
. To accomplish this easily, you’ll use a Vertical Guide.
Select cardbg and go to Editor\Add Vertical Guide. Take note of the hotkeys, they’re Command -
for horizontal guide and Command |
for a vertical guide.
Those two hotkeys probably make the most visual sense–ever.
Once you have the vertical guide on the screen, drag it to 10 points from the left edge of cardbg
. Now views can snap to this vertical guide and align perfectly. Go ahead and line up those labels.
OK, so Xcode isn’t always perfect, and you may occasionally have issues selecting a guideline right after you create it.
If hovering over it doesn’t work, quickly open a different source file and then flip back to the storyboard. Once it reloads the storyboard, the issues typically resolve themselves.
Bonus Ninja Tip: The best part about vertical and horizontal guides is that all views can snap to them, they don’t have to be in the same hierarchy to align nicely!
Here is a replay of the steps to correct alignment in this scene:
I bet you can’t wait to show your work to your designer now!
Stats:
Coolness points gained:400
Total Coolness points:1400
Ninja points gained:0
Total Ninja points:400
Now that you have a functional app and a happy designer, you just need to do a little code cleanup.
Use Open Quickly to open CTCardCell.m
– you should know how by now! Remember to enter Dev Mode as well.
Just look at that messy list of @properties
at the top of CTCardCell.m:
@property (weak, nonatomic) IBOutlet UILabel *locationLabel;
@property (strong, nonatomic) NSString *website;
@property (weak, nonatomic) IBOutlet UIButton *fbButton;
@property (weak, nonatomic) IBOutlet UIImageView *fbImage;
@property (strong, nonatomic) NSString *twitter;
@property (weak, nonatomic) IBOutlet UIButton *twButton;
@property (weak, nonatomic) IBOutlet UILabel *webLabel;
@property (weak, nonatomic) IBOutlet UIImageView *profilePhoto;
@property (strong, nonatomic) NSString *facebook;
@property (weak, nonatomic) IBOutlet UIImageView *twImage;
@property (weak, nonatomic) IBOutlet UILabel *aboutLabel;
@property (weak, nonatomic) IBOutlet UIButton *webButton;
@property (weak, nonatomic) IBOutlet UILabel *nameLabel;
@property (weak, nonatomic) IBOutlet UILabel *titleLabel;
In this section, you’re going to create a custom service to run the shell commands sort
and uniq
on blocks of code like this.
Note: If you’re not familiar with these shell commands, they’re quite self-explanatory. sort
organizes the lines alphabetically, and uniq
removes any duplicate lines.
uniq
won’t really come in handy here, but is handy when you’re organizing #import
lists!
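To see what the pipeline will do to a selection, here's a quick Terminal demo. The duplicated #import lines are stand-in sample input, with printf playing the role of the selected text:

```shell
# Pipe a messy, duplicated import list through the same commands
# the service will run: sort alphabetizes, uniq drops the repeat.
printf '%s\n' \
  '#import "CTCardCell.h"' \
  '#import "CTAppDelegate.h"' \
  '#import "CTCardCell.h"' \
  | sort | uniq
```

Incidentally, sort -u is a one-command equivalent of sort | uniq.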
Mac OSX allows you to create services you can access throughout the OS. You’ll use this to create a shell script service to use in Xcode.
Follow these steps to set it up:
- Create a new service whose shell script runs sort | uniq over the selected text.
- Press Command S and save your new service as Sort & Uniq.

Here’s what the final window looks like:
Now go back to Xcode and select that messy block of @properties in CTCardCell.m. Right click on the selected code and go to Services -> Sort & Uniq and watch how tidy that rowdy list becomes. You can watch the magic on the big screen here:
Now that is worth at least 800 coolness points.
Stats:
Coolness points gained:801
Total Coolness points:1801
Ninja points gained:0
Total Ninja points:400
That marks the end of basic ninja training and your task of debugging CardTilt – congratulations on getting here! I hope you’ve learned and feel more cool and ninja-like.
Surely, you’re eager to learn even more tricks. Fortunately for you, there is one last trick to share.
You have likely used Xcode’s Code Snippets
before. Some common ones are the forin
snippet and dispatch_after
snippet.
In this section, you’ll learn how to create your own custom snippets and look extremely cool as you re-use common code blocks.
The code snippet you’ll create is the singleton accessor snippet.
Note: If you’re not familiar with the singleton pattern, you can read all about it in this great tutorial.
Below is some boilerplate code you’re likely to use frequently with this pattern:
+ (instancetype)sharedObject
{
    static id _sharedInstance = nil;
    static dispatch_once_t oncePredicate;
    dispatch_once(&oncePredicate, ^{
        _sharedInstance = [[self alloc] init];
    });
    return _sharedInstance;
}
What’s also cool is that this snippet includes the dispatch_once
snippet.
Create a new class in CardTilt called SingletonObject
and make it a subclass of NSObject
. You won’t actually use it for anything, except as a spot from which to drag code to create a snippet.
Follow these steps:
- Add the boilerplate code above to SingletonObject.m, just below the @implementation line.
- Press Command Option Control 2. You should see the library of code snippets that are included in Xcode by default.
- Select the +sharedObject function and drag it into the library.

Note: If you’re having issues dragging code, click on the selected code and hold for a second before starting to drag.
Your new code snippet will automatically show up at the very bottom of the library. You can use it by dragging it from the library into any file – go try it out!
Now double-click on your newly created snippet and press edit.
The fields that display in this popup are particularly useful; in fact they are so valuable that each deserves an explanation:
Fill in the properties like this:
Snippets become especially powerful when you add Tokens, because they allow you to mark code in the snippet that shouldn’t be hard-coded. Tokens make snippets very easy to modify using the Tab key, much as it is with auto-completed methods.
To add a token, simply type <#TokenName#> in your snippet.
Create a token for the Object part of your sharedObject snippet by replacing sharedObject with shared<#ObjectName#>, so the first line looks like this: + (instancetype)shared<#ObjectName#>
Save the snippet by hitting Done and give it a spin.
Type singleton accessor in the SingletonObject implementation file and use the autocomplete when it shows up.
Custom code snippets like this can become very powerful for frequently used patterns. Learning these last few tricks is definitely worth some extra points!
Stats:
Coolness points gained:50000
Total Coolness points:51801
Ninja points gained:2000
Total Ninja points:2400
Congratulations on achieving such a high score!
To sum it up, here’s what you’ve learned and done as you’ve worked through this tutorial:
That was all pretty easy, now wasn’t it? Think of all the cool tricks you have to show your friends and family now! They’ll no doubt completely understand your excitement ;]
There are still plenty of other ways you can improve your Xcode efficiency and up your own personal coolness and ninja factors. A few of them are:
The next step is to go and put your newfound ninja skills to use!
I hope you enjoyed this tutorial. If you have any questions, comments or would like to share a cool trick you know, make sure to leave a comment below!
Supercharging Your Xcode Efficiency is a post from: Ray Wenderlich
The post Supercharging Your Xcode Efficiency appeared first on Ray Wenderlich.
Learn how to use FMDB APIs.
Video Tutorial: Saving Data in iOS Part 9: Introduction to FMDB is a post from: Ray Wenderlich
The post Video Tutorial: Saving Data in iOS Part 9: Introduction to FMDB appeared first on Ray Wenderlich.
As you know, we are always trying to improve our written tutorials, video tutorials, and books on this site.
What we need most is a fresh batch of folks on the team, to contribute your passion, ideas and experience.
So today, I am pleased to announce we are having an open call for applicants for three different teams at raywenderlich.com (click links to jump to that section):
For these teams, we are looking for advanced-level developers only. However, your experience could be in a variety of categories (not just iOS):
Are you an advanced-level developer interested in joining one of these teams? Keep reading for more details on what’s involved, the benefits of joining, and how to apply.
The Tutorial Team is an elite group of app developers and writers who are joining together to make the highest quality developer tutorials available in the world.
By writing tutorials for this site, you can make a huge positive difference in the developer community. Your tutorials will be widely read, and will help a ton of developers learn and grow. You may even help some developers start their careers making apps or games – making dreams come true!
And through the hard work it takes to write these tutorials and the detailed feedback from the editing team, you will become a much better developer and writer yourself.
As a part of the raywenderlich.com Tutorial Team, you’ll receive the following benefits (in addition to learning & helping others, of course):
This is an informal, part-time position – you’d be writing about 3 tutorials per year. We do expect that when you are assigned a tutorial to write, you complete it within 1 month.
Here are the requirements:
To apply, simply send me an email with the following details:
I will be selecting a few of the top applicants to try out for the team by writing their first tutorial. If your tutorial is accepted, you’re in.
Have you ever found a bug, grammar mistake, or technical mistake in one of our tutorials? Well, technical editing might be for you. :]
Our Tech Editors are some of our most experienced developers. We have particularly high standards for what we look for in tech editors and a grueling tryout process.
This is for good reason. As a tech editor, we look to you to “level-up” each tutorial you get your hands on by adding your technical expertise, and make each tutorial as polished as possible.
By improving our tutorials, you make a huge difference in the iOS community by making sure everyone is learning the right stuff. It also really helps our authors learn and improve, and you’ll learn a ton along the way as well – while getting paid. :]
Note: We actually had a call for tech editors just a few months ago. But now that we’ve started a brand new Update Team, our tech editors are getting stretched a bit thin, so we could use a few more. If you applied before but didn’t hear back, feel free to try again.
There are many great reasons to be a technical editor for raywenderlich.com:
This is an informal, part-time position – you’d be editing about 1-3 tutorials per month. We do expect that when you are assigned a tutorial to tech edit, you complete the tech edit within 1 week.
Here are the requirements:
To apply, simply send me an email with the following details:
I will be selecting a few of the top applicants to try out for the team through a multi-phase tryout process – I will send you more details if you’re selected.
The Code Team is for developers who are awesome at writing code, but who are not interested in (or maybe not good at) writing a tutorial about their code.
Your job is to write cool advanced level sample projects demonstrating neat techniques that other developers would be interested in learning about. For example, these are the kinds of projects we’d be looking for:
If some of these projects sound like a fun challenge and something you’d definitely be capable of, you might be a good match for the Code Team. :]
As a part of the raywenderlich.com Code Team, you’ll receive the following benefits:
This is an informal, part-time position – you’d be writing about 3 sample projects per year. We do expect that when you are assigned a sample project to write, you complete it within 1 month.
Here are the requirements:
To apply, just send me an email with the following details:
I will be selecting a few of the top applicants to try out for the team by writing your first sample project. If your sample project is accepted, you’re in.
Thanks so much for your consideration in joining one of our teams!
Please note that we usually get hundreds of emails when we do a public call for applicants, so please understand we may not have time to respond to everyone. We do promise to read each and every email though.
We can’t wait to welcome some of you to our team, and look forward to hanging out with you and getting to know you.
If you have any questions or comments, please join the forum discussion below.
Call for Applicants: Authors, Tech Editors, and Coders! is a post from: Ray Wenderlich
The post Call for Applicants: Authors, Tech Editors, and Coders! appeared first on Ray Wenderlich.
In this pixel shaders tutorial, you’ll learn how to turn your iPhone into a full-screen GPU canvas.
What this means is that you’ll make a low-level, graphics-intensive app that will paint every pixel on your screen individually by combining interesting math equations.
But why? Well, besides being the absolute coolest things in computer graphics, pixel shaders can be very useful in:
Note: The demos linked above use WebGL, which is only fully supported on Chrome and Opera, at least at the time of writing this tutorial. These demos are also pretty intense – so try not to run them in multiple tabs simultaneously.
The shaders you’ll write are not as complex as the ones above, but you’ll get a lot more out of these exercises if you’re familiar with OpenGL ES. If you’re new to the API, then please check out some of our written or video tutorials on the subject first :]
Without further ado, it is my pleasure to get you started with pixel shaders in iOS!
Note: The term “graphics-intensive” is no joke in this tutorial. This app will safely push your iPhone’s GPU to its limit, so use an iPhone 5 or newer version. If you don’t have an iPhone 5 or later, the iOS simulator will work just fine.
First, download the starter pack for this tutorial. Have a look at RWTViewController.m to see the very light GLKViewController implementation, and then build and run. You should see the screen below:
Nothing too fancy just yet, but I’m sure Green Man would approve :]
For the duration of this tutorial, a full green screen means your base shaders (RWTBase.vsh and RWTBase.fsh) are in working order and your OpenGL ES code is set up properly. Throughout this tutorial, green means “Go” and red means “Stop”.
If at any point you find yourself staring at a full red screen, you should “Stop” and verify your implementation, because your shaders failed to compile and link properly. This works because the viewDidLoad method in RWTViewController sets glClearColor() to red.
A quick look at RWTBase.vsh reveals one of the simplest vertex shaders you’ll ever encounter. All it does is calculate a point on the x-y plane, defined by aPosition.
The vertex attribute array for aPosition is a quad anchored to each corner of the screen (in OpenGL ES coordinates), named RWTBaseShaderQuad in RWTBaseShader.m. RWTBase.fsh is an even simpler fragment shader that colors all fragments green, regardless of position. This explains your bright green screen!
Now, to break this down a bit further…
If you’ve taken some of our previous OpenGL ES tutorials, you may have noticed that we talk about vertex shaders for manipulating vertices and fragment shaders for manipulating fragments. Essentially, a vertex shader draws objects and a fragment shader colors them. Fragments may or may not produce pixels depending on factors such as depth, alpha and viewport coordinates.
So, what happens if you render a quad defined by four vertices as shown below?
Assuming you haven’t enabled alpha blending or depth testing, you get an opaque, full-screen cartesian plane.
Under these conditions, after the primitive rasterizes, it stands to reason that each fragment corresponds to exactly one pixel of the screen – no more, no less. Therefore, the fragment shader will color every screen pixel directly, thus earning itself the name of pixel shader :O
Note: By default, GL_BLEND and GL_DEPTH_TEST are disabled. You can see a list of glEnable() and glDisable() capabilities here, and you can query them programmatically using the function glIsEnabled().
Your first pixel shader will be a gentle lesson in computing linear gradients.
Note: In order to conserve space and focus on the algorithms/equations presented in this tutorial, the global GLSL precision value for floats is defined as highp.
The official OpenGL ES Programming Guide for iOS has a small section dedicated to precision hints which you can refer to afterwards for optimization purposes, along with the iOS Device Compatibility Reference.
Remember, for a full-screen iPhone 5, the fragment shader gets called 727,040 (640*1136) times per frame!
The magic behind pixel shaders lies within gl_FragCoord. This fragment-exclusive variable contains the window-relative coordinates of the current fragment.
For a normal fragment shader, “this value is the result of fixed functionality that interpolates primitives after vertex processing to generate fragments”. For pixel shaders, however, just know that the xy swizzle value of this variable maps exactly to one unique pixel on the screen.
Open RWTGradient.fsh and add the following lines just below precision:

// Uniforms
uniform vec2 uResolution;
uResolution comes from the rect variable of glkView:drawInRect: within RWTViewController.m (i.e. the rectangle containing your view).
RWTBaseShader.m handles the width and height of rect and assigns them to the corresponding GLSL uniform in the method renderInRect:atTime:. All this means is that uResolution contains the x-y resolution of your screen.
Many times you’ll greatly simplify pixel shader equations by converting pixel coordinates to the range 0.0 ≤ xy ≤ 1.0, achieved by dividing gl_FragCoord.xy by uResolution. This is a perfect range for gl_FragColor too, so let’s see some gradients!
Add the following lines to RWTGradient.fsh inside main(void):

vec2 position = gl_FragCoord.xy/uResolution;
float gradient = position.x;
gl_FragColor = vec4(0., gradient, 0., 1.);
Next, change your program’s fragment shader source from RWTBase to RWTGradient in RWTViewController.m by changing the following line:

self.shader = [[RWTBaseShader alloc] initWithVertexShader:@"RWTBase" fragmentShader:@"RWTBase"];
to:
self.shader = [[RWTBaseShader alloc] initWithVertexShader:@"RWTBase" fragmentShader:@"RWTGradient"];
Build and run! Your screen should show a really nice black->green gradient from left->right.
Pretty cool, eh? To get the same gradient from bottom->top, change the following line in RWTGradient.fsh:

float gradient = position.x;
to:
float gradient = position.y;
Build and run again to see your gradient’s new direction…
Now it’s time for a challenge! See if you can reproduce the screenshot below by just changing one line of code in your shader.
Hint: Remember that position ranges from 0.0 to 1.0 and so does gl_FragColor.
Well done if you figured it out! If you didn’t, just take a moment to review this section again before moving on. :]
In this section, you’ll learn how to use math to draw simple shapes, starting with a 2D disc/circle and finishing with a 3D sphere.
Open RWTSphere.fsh and add the following lines just below precision:

// Uniforms
uniform vec2 uResolution;
This is the same uniform encountered in the previous section and it’s all you’ll need to generate static geometry. To create a disc, add the following lines inside main(void):

// 1
vec2 center = vec2(uResolution.x/2., uResolution.y/2.);
// 2
float radius = uResolution.x/2.;
// 3
vec2 position = gl_FragCoord.xy - center;
// 4
if (length(position) > radius) {
  gl_FragColor = vec4(vec3(0.), 1.);
} else {
  gl_FragColor = vec4(vec3(1.), 1.);
}
There’s a bit of math here, so here are the explanations of what’s happening:
1. The center of your disc will be located exactly in the center of your screen.
2. The radius of your disc will be half the width of your screen.
3. position is defined by the coordinates of the current pixel, offset by the disc center. Think of it as a vector pointing from the center of the disc to the current pixel.
4. length() calculates the length of a vector, which in this case is defined by the Pythagorean Theorem √(position.x²+position.y²). If this length is greater than the radius, then that particular pixel lies outside the disc area and you color it black; otherwise, you color it white.
For an explanation of this behavior, look to the circle equation, defined as: (x-a)²+(y-b)² = r². Note that r is the radius, ab is the center and xy is the set of all points on the circle.
Since a disc is the region in a plane bounded by a circle, the if-else statement will accurately draw a disc in space!
Before you build and run, change your program’s fragment shader source to RWTSphere in RWTViewController.m:

self.shader = [[RWTBaseShader alloc] initWithVertexShader:@"RWTBase" fragmentShader:@"RWTSphere"];
Now, build and run. Your screen should show a solid white disc with a black background. No, it’s not the most innovative design, but you have to start somewhere.
Feel free to play around with some of the disc’s properties and see how modifications affect your rendering. For an added challenge, see if you can make the circle shape shown below:
Hint: Try creating a new variable called thickness, defined by your radius and used in your if-else conditional.
If you attempted the challenge or modified your GLSL code, please revert back to that basic solid white disc for now (Kudos for your curiosity though!).
Replace your if-else conditional with the following:

if (length(position) > radius) {
  discard;
}
gl_FragColor = vec4(vec3(1.), 1.);
Dear reader, please let me introduce you to discard. discard is a fragment-exclusive keyword that effectively tells OpenGL ES to discard the current fragment and ignore it in the following stages of the rendering pipeline. Build and run to see the screen below:
In pixel shader terminology, discard returns an empty pixel that isn’t written to the screen. Therefore, glClearColor() determines the actual screen pixel in its place.
From this point on, when you see a bright red pixel, it means discard is working properly. But you should still be wary of a full red screen, as it means something in the code is not right.
Now it’s time to put a new spin on things and convert that drab 2D disc to a 3D sphere, and to do that you need to account for depth.
In a typical vertex+fragment shader program, this would be simple. The vertex shader could handle 3D geometry input and pass along any information necessary to the fragment shader. However, when working with pixel shaders you only have a 2D plane on which to “paint”, so you’ll need to fake depth by inferring z values.
Several paragraphs ago you created a disc by coloring any pixels inside a circle defined by:
(x-a)²+(y-b)² = r²
Extending this to the sphere equation is very easy, like so:
(x-a)²+(y-b)²+(z-c)² = r²
Here, c is the z center of the sphere. Since the circle center ab offsets your 2D coordinates and your new sphere will lie on the z origin, this equation can be simplified to:
x²+y²+z² = r²
Solving for z results in the equation:
z = √(r²-x²-y²)
And that’s how you can infer a z value for all fragments, based on their unique position! Luckily enough, this is very easy to code in GLSL. Add the following lines to RWTSphere.fsh just before gl_FragColor:

float z = sqrt(radius*radius - position.x*position.x - position.y*position.y);
z /= radius;
The first line calculates z as per your reduced equation, and the second divides by the sphere radius to contain the range between 0.0 and 1.0.
In order to visualize your sphere’s depth, replace your current gl_FragColor line with the following:

gl_FragColor = vec4(vec3(z), 1.);
Build and run to see your flat disc now has a third dimension.
Since positive z-values are directed outwards from the screen towards the viewer, the closest points on the sphere are white (middle) while the furthest points are black (edges).
Naturally, any points in between are part of a smooth, gray gradient. This piece of code is a quick and easy way to visualize depth, but it ignores the xy values of the sphere. If this shape were to rotate or sit alongside other objects, you couldn’t tell which way is up/down or left/right.
Replace the line:

z /= radius;

With:

vec3 normal = normalize(vec3(position.x, position.y, z));
A better way to visualize orientation in 3D space is with the use of normals. In this example, normals are vectors perpendicular to the surface of your sphere. For any given point, a normal defines the direction that point faces.
In the case of this sphere, calculating the normal for each point is easy. We already have a vector (position) that points from the center of the sphere to the current point, as well as its z value. This vector doubles as the direction the point is facing, or the normal.
If you’ve worked through some of our previous OpenGL ES tutorials, you know that it’s also generally a good idea to normalize() vectors, in order to simplify future calculations (particularly for lighting).
Normalized normals lie within the range -1.0 ≤ n ≤ 1.0, while pixel color channels lie within the range 0.0 ≤ c ≤ 1.0. In order to visualize your sphere’s normals properly, define a normal n to color c conversion like so:

-1.0 ≤ n ≤ 1.0
(-1.0+1.0) ≤ (n+1.0) ≤ (1.0+1.0)
0.0 ≤ (n+1.0) ≤ 2.0
0.0/2.0 ≤ (n+1.0)/2.0 ≤ 2.0/2.0
0.0 ≤ (n+1.0)/2.0 ≤ 1.0
0.0 ≤ c ≤ 1.0
c = (n+1.0)/2.0
Voilà! It’s just that simple
Now, replace the line:

gl_FragColor = vec4(vec3(z), 1.);

With:

gl_FragColor = vec4((normal+1.)/2., 1.);
Then build and run. Prepare to feast your eyes on the round rainbow below:
This might seem confusing at first, particularly when your previous sphere rendered so smoothly, but there is a lot of valuable information hidden within these colors…
What you’re seeing now is essentially a normal map of your sphere. In a normal map, rgb colors represent surface normals which correspond to actual xyz coordinates, respectively. Take a look at the following diagram:
The rgb color values for the circled points are:
p0c = (0.50, 0.50, 1.00)
p1c = (0.50, 1.00, 0.53)
p2c = (1.00, 0.50, 0.53)
p3c = (0.50, 0.00, 0.53)
p4c = (0.00, 0.50, 0.53)
Previously, you calculated a normal n to color c conversion. Using the reverse equation, n = (c*2.0)-1.0, these colors can be mapped to specific normals:
p0n = (0.00, 0.00, 1.00)
p1n = (0.00, 1.00, 0.06)
p2n = (1.00, 0.00, 0.06)
p3n = (0.00, -1.00, 0.06)
p4n = (-1.00, 0.00, 0.06)
Which, when represented with arrows, look a bit like this:
Now, there should be absolutely no ambiguity for the orientation of your sphere in 3D space. Furthermore, you can now light your object properly!
Add the following lines above main(void) in RWTSphere.fsh:

// Constants
const vec3 cLight = normalize(vec3(.5, .5, 1.));
This constant defines the orientation of a virtual light source that illuminates your sphere. In this case, the light gleams towards the screen from the top-right corner.
Next, replace the following line:

gl_FragColor = vec4((normal+1.)/2., 1.);

With:

float diffuse = max(0., dot(normal, cLight));
gl_FragColor = vec4(vec3(diffuse), 1.);
You may recognize this as the simplified diffuse component of the Phong reflection model. Build and run to see your nicely-lit sphere!
Note: To learn more about the Phong lighting model, check out our Ambient, Diffuse, and Specular video tutorials [Subscribers Only].
3D objects on a 2D canvas? Just using math? Pixel-by-pixel? WHOA
This is a great time for a little break so you can bask in the soft, even glow of your shader in all of its glory…and also clear your head a bit because, dear reader, you’ve only just begun.
In this section, you’ll learn all about texture primitives, pseudorandom number generators, and time-based functions – eventually working your way up to a basic noise shader inspired by Perlin noise.
The math behind Perlin Noise is a bit too dense for this tutorial, and a full implementation is actually too complex to run at 30 FPS.
The basic shader here, however, will still cover a lot of noise essentials (with particular thanks to the modular explanations/examples of Hugo Elias and Toby Schachman).
Ken Perlin developed Perlin noise in 1981 for the movie TRON, and it’s one of the most groundbreaking, fundamental algorithms in computer graphics.
It can mimic pseudorandom patterns in natural elements, such as clouds and flames. It is so ubiquitous in modern CGI that Ken Perlin eventually received an Academy Award in Technical Achievement for this technique and its contributions to the film industry.
The award itself explains the gist of Perlin Noise quite nicely:
“To Ken Perlin for the development of Perlin Noise, a technique used to produce natural appearing textures on computer generated surfaces for motion picture visual effects. The development of Perlin Noise has allowed computer graphics artists to better represent the complexity of natural phenomena in visual effects for the motion picture industry.”
So yeah, it’s kind of a big deal… and you’ll get to implement it from the ground up.
But first, you must familiarize yourself with time inputs and math functions.
Open RWTNoise.fsh and add the following lines just below precision highp float;:

// Uniforms
uniform vec2 uResolution;
uniform float uTime;
You’re already familiar with the uResolution uniform, but uTime is a new one. uTime comes from the timeSinceFirstResume property of your GLKViewController subclass, implemented as RWTViewController.m (i.e. the time elapsed since the first time the view controller resumed update events).
RWTBaseShader.m handles this time interval and assigns it to the corresponding GLSL uniform in the method renderInRect:atTime:, meaning that uTime contains the elapsed time of your app, in seconds.
To see uTime in action, add the following lines to RWTNoise.fsh, inside main(void):

float t = uTime/2.;
if (t>1.) {
  t -= floor(t);
}
gl_FragColor = vec4(vec3(t), 1.);
This simple algorithm will cause your screen to repeatedly fade-in from black to white.
The variable t is half the elapsed time and needs converting to fit within the color range 0.0 to 1.0. The function floor() accomplishes this by returning the nearest integer less than or equal to t, which you then subtract from t.
For example, for uTime = 5.50: t = 5.50/2. = 2.75, and 2.75 - floor(2.75) = 0.75; at t = 0.75, your screen will be 75% white.
Before you build and run, remember to change your program’s fragment shader source to RWTNoise in RWTViewController.m:

self.shader = [[RWTBaseShader alloc] initWithVertexShader:@"RWTBase" fragmentShader:@"RWTNoise"];
Now build and run to see your simple animation!
You can reduce the complexity of your implementation by replacing your if statement with the following line:

t = fract(t);
fract() returns the fractional value of t, calculated as t - floor(t). Ahhh, there, that’s much better.
Now that you have a simple animation working, it’s time to make some noise (Perlin noise, that is).
fract() is an essential function in fragment shader programming. It keeps all values within 0.0 and 1.0, and you’ll be using it to create a pseudorandom number generator (PRNG) that will approximate a white noise image.
Since Perlin noise models natural phenomena (e.g. wood, marble), PRNG values work perfectly because they are random-enough to seem natural, but are actually backed by a mathematical function that will produce subtle patterns (e.g. the same seed input will produce the same noise output, every time).
Controlled chaos is the essence of procedural texture primitives!
Note: Computer randomness is a deeply fascinating subject that could easily span dozens of tutorials and extended forum discussions. arc4random() in Objective-C is a luxury for iOS developers. You can learn more about it from NSHipster, a.k.a. Mattt Thompson. As he so elegantly puts it, “What passes for randomness is merely a hidden chain of causality”.
The PRNG you’ll be writing will be largely based on sine waves, since sine waves are cyclical, which is great for time-based inputs. Sine waves are also straightforward – it’s just a matter of calling sin() – and they are easy to dissect. Most other GLSL PRNGs are either great but incredibly complex, or simple but unreliable.
But first, a quick visual recap of sine waves:
You may already be familiar with the amplitude A and wavelength λ. However, if you’re not, don’t worry too much about them; after all, the goal is to create random noise, not smooth waves.
For a standard sine wave, peak-to-peak amplitude ranges from -1.0 to 1.0 and wavelength is equal to 2π (frequency = 1).
In the image above, you are viewing the sine wave from the “front”, but if you view it from the “top” you can use the wave’s crests and troughs to draw a smooth grayscale gradient, where crest = white and trough = black.
Open RWTNoise.fsh and replace the contents of main(void) with the following:

vec2 position = gl_FragCoord.xy/uResolution.xy;
float pi = 3.14159265359;
float wave = sin(2.*pi*position.x);
wave = (wave+1.)/2.;
gl_FragColor = vec4(vec3(wave), 1.);
Remember that sin(2π) = 0, so you are multiplying 2π by the fraction along the x-axis for the current pixel. This way, the far left side of the screen will be the left side of the sine wave, and the far right side of the screen will be the right side of the sine wave.
Also remember that the output of sin() is between -1 and 1, so you add 1 to the result and divide it by 2 to get the output in the range of 0 to 1.
Build and run. You should see a smooth sine wave gradient with one crest and one trough.
Transferring the current gradient to the previous diagram would look something like this:
Now, make that wavelength shorter by increasing its frequency and factoring in the y-axis of the screen.
Change your wave calculation to:

float wave = sin(4.*2.*pi*(position.x+position.y));
Build and run. You should see that your new wave not only runs diagonally across the screen, but also has way more crests and troughs (the new frequency is 4).
So far the equations in your shader have produced neat, predictable results and formed orderly waves. But the goal is entropy, not order, so now it’s time to start breaking things a bit. Of course, this is a calm, controlled kind of breaking, not a bull-in-a-china-shop kind of breaking.
Replace the following lines:

float wave = sin(4.*2.*pi*(position.x+position.y));
wave = (wave+1.)/2.;
With:

float wave = fract(sin(16.*2.*pi*(position.x+position.y)));
Build and run. What you’ve done here is increase the frequency of the waves and use fract()
to introduce harder edges in your gradient. You’re also no longer performing a proper conversion between different ranges, which adds a bit of spice in the form of chaos.
The pattern generated by your shader is still fairly predictable, so go ahead and throw another wrench in the gears.
Change your wave calculation to:

float wave = fract(10000.*sin(16.*(position.x+position.y)));
Now build and run to see a salt & pepper spill.
The 10000 multiplier is great for generating pseudorandom values and can be quickly applied to sine waves using the following table (a in degrees):

Angle a   sin(a)
1.0       .0174
2.0       .0349
3.0       .0523
4.0       .0698
5.0       .0872
6.0       .1045
7.0       .1219
8.0       .1392
9.0       .1564
10.0      .1736
Observe the sequence of numbers for the second decimal place:
1, 3, 5, 6, 8, 0, 2, 3, 5, 7
Now observe the sequence of numbers for the fourth decimal place:
4, 9, 3, 8, 2, 5, 9, 2, 4, 6
A pattern is more apparent in the first sequence, but less so in the second. While this may not always be the case, less significant decimal places are a good starting place for mining pseudorandom numbers.
It also helps that really large numbers may have unintentional precision loss/overflow errors.
At the moment, you can probably still see a glimpse of a wave imprinted diagonally on the screen. If not, it might be time to pay a visit to your optometrist. ;]
The faint wave is simply a product of your calculation giving equal importance to position.x and position.y values. Adding a unique multiplier to each axis will dissipate the diagonal print, like so:

float wave = fract(10000.*sin(128.*position.x+1024.*position.y));
Time for a little clean up! Add the following function, randomNoise(vec2 p), above main(void):

float randomNoise(vec2 p) {
  return fract(6791.*sin(47.*p.x+p.y*9973.));
}
The most random part about this PRNG is your choice of multipliers. I chose the ones above from a list of prime numbers, and you can use them too. If you select your own numbers, I would recommend a small value for p.x, and larger ones for p.y and sin().
Next, refactor your shader to use your new randomNoise function by replacing the contents of main(void) with the following:

vec2 position = gl_FragCoord.xy/uResolution.xy;
float n = randomNoise(position);
gl_FragColor = vec4(vec3(n), 1.);
Presto! You now have a simple sin-based PRNG for creating 2D noise. Build and run, then take a break to celebrate, you’ve earned it.
When working with a 3D sphere, normalizing vectors makes equations much simpler, and the same is true for procedural textures, particularly noise. Functions like smoothing and interpolation are a lot easier if they happen on a square grid. Open RWTNoise.fsh and replace the calculation for position with this:

vec2 position = gl_FragCoord.xy/uResolution.xx;
This ensures that one unit of position is equal to the width of your screen (uResolution.x).
On the next line, add the following if statement:
if ((position.x>1.) || (position.y>1.)) {
  discard;
}
Make sure you give discard a warm welcome back into your code, then build and run to render the image below:
This simple square acts as your new 1×1 pixel shader viewport.
Since 2D noise extends infinitely in x and y, replace your noise input with either of the lines below:

float n = randomNoise(position-1.);
float n = randomNoise(position+1.);
This is what you’ll see:
For any noise-based procedural texture, there is a primitive-level distinction between too much noise and not enough noise. Fortunately, tiling your square grid makes it possible to control this.
Add the following lines to main(void), just before computing n:

float tiles = 2.;
position = floor(position*tiles);
Then build and run! You should see a 2×2 square grid like the one below:
This might be a bit confusing at first, so here’s an explanation:
floor(position*tiles) truncates position*tiles, which lies in the range (0.0, 0.0) to (2.0, 2.0), to the nearest lesser integer in both directions.
Without floor(), this range would be continuously smooth and every fragment position would seed noise() with a different value. However, floor() creates a stepped range with stops at every integer, as shown in the diagram above. Therefore, every position value in-between two integers will be truncated before seeding noise(), creating a nicely-tiled square grid.
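Here's the same truncation modeled in Python (the helper name is mine), showing how every position inside a tile collapses to the same seed:

```python
import math

def tile(pos, tiles):
    # Mirrors position = floor(position * tiles) from the shader,
    # applied component-wise to a 2D position in [0, 1]
    return (math.floor(pos[0] * tiles), math.floor(pos[1] * tiles))

# With tiles = 2., every position in a quadrant collapses to the
# same seed, so the whole quadrant gets one noise value.
print(tile((0.10, 0.20), 2.0))  # (0, 0)
print(tile((0.49, 0.49), 2.0))  # (0, 0)
print(tile((0.51, 0.20), 2.0))  # (1, 0)
print(tile((0.75, 0.90), 2.0))  # (1, 1)
```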
The number of square tiles you choose will depend on the type of texture effect you want to create. Perlin noise adds many grids together to compute its noisy pattern, each with a different number of tiles.
There is such a thing as too many tiles, which often results in blocky, repetitive patterns. For example, the square grid for tiles = 128. looks something like this:
At the moment, your noise texture is a bit too, ahem, noisy. This is good if you wish to texture an old-school TV set with no signal, or maybe MissingNo.
But what if you want a smoother texture? Well, you would use a smoothing function. Get ready to shift gears and move on to image processing 101.
In 2D image processing, pixels have a certain connectivity with their neighbors. An 8-connected pixel has eight neighbors surrounding it; four touching at the edges and four touching at the corners.
You might also know this concept as a Moore neighborhood and it looks something like this, where CC is the centered pixel in question:
Note: To learn more about the Moore neighborhood and image processing in general, check out our Image Processing in iOS tutorial series.
A common use of image smoothing operations is attenuating edge frequencies in an image, which produces a blurred/smeared copy of the original. This is great for your square grid because it reduces harsh intensity changes between neighboring tiles.
For example, if white tiles surround a black tile, a smoothing function will adjust the tiles’ color to a lighter gray. Smoothing functions apply to every pixel when you use a convolution kernel, like the one below:
This is a 3×3 neighborhood averaging filter, which simply smooths a pixel value by averaging the values of its 8 neighbors (with equal weighting). To produce the image above, this would be the code:
p = 0.1
p’ = (0.3+0.9+0.5+0.7+0.2+0.8+0.4+0.6+0.1) / 9
p’ = 4.5 / 9
p’ = 0.5
It's not the most interesting filter, but it's simple, effective and easy to implement! Open RWTNoise.fsh and add the following function just above main(void):

float smoothNoise(vec2 p) {
  vec2 nn = vec2(p.x, p.y+1.);
  vec2 ne = vec2(p.x+1., p.y+1.);
  vec2 ee = vec2(p.x+1., p.y);
  vec2 se = vec2(p.x+1., p.y-1.);
  vec2 ss = vec2(p.x, p.y-1.);
  vec2 sw = vec2(p.x-1., p.y-1.);
  vec2 ww = vec2(p.x-1., p.y);
  vec2 nw = vec2(p.x-1., p.y+1.);
  vec2 cc = vec2(p.x, p.y);

  float sum = 0.;
  sum += randomNoise(nn);
  sum += randomNoise(ne);
  sum += randomNoise(ee);
  sum += randomNoise(se);
  sum += randomNoise(ss);
  sum += randomNoise(sw);
  sum += randomNoise(ww);
  sum += randomNoise(nw);
  sum += randomNoise(cc);
  sum /= 9.;

  return sum;
}
It's a bit long, but also pretty straightforward. Since your square grid is divided into 1×1 tiles, a combination of ±1. in either direction will land you on a neighboring tile. Fragments are batch-processed in parallel by the GPU, so the only way to know about neighboring fragment values in procedural textures is to compute them on the spot.
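As a CPU-side sketch (Python, with my own names), the whole smoothing step is just a nine-sample average over the Moore neighborhood:

```python
import math

def fract(x):
    # GLSL fract(): fractional part in [0, 1)
    return x - math.floor(x)

def random_noise(px, py):
    # Same multipliers as the shader's randomNoise()
    return fract(6791.0 * math.sin(47.0 * px + py * 9973.0))

def smooth_noise(px, py):
    # Average of the 3x3 Moore neighborhood on the 1x1 tile grid
    total = sum(random_noise(px + dx, py + dy)
                for dx in (-1.0, 0.0, 1.0)
                for dy in (-1.0, 0.0, 1.0))
    return total / 9.0

# The averaged value is still in [0, 1), but varies less between
# neighboring tiles than the raw noise does.
print(0.0 <= smooth_noise(3.0, 5.0) < 1.0)  # True
```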
Modify main(void) to use 128 tiles and compute n with smoothNoise(position). After those changes, your main(void) function should look like this:

void main(void) {
  vec2 position = gl_FragCoord.xy/uResolution.xx;
  float tiles = 128.;
  position = floor(position*tiles);
  float n = smoothNoise(position);
  gl_FragColor = vec4(vec3(n), 1.);
}
Build and run! You’ve been hit by, you’ve been struck by, a smooooooth functional. :P
Nine separate calls to randomNoise() for every pixel are quite the GPU load. It doesn't hurt to explore 8-connected smoothing functions, but you can produce a pretty good smoothing function with 4-connectivity, also called the Von Neumann neighborhood.
Neighborhood averaging also produces a rather harsh blur, turning your pristine noise into grey slurry. In order to preserve original intensities a bit more, you’ll implement the convolution kernel below:
This new filter reduces neighborhood averaging significantly by having the pixel in question contribute 50% of the final result, with the other 50% coming from its 4 edge-neighbors. For the image above, this would be:
p = 0.1
p’ = (((0.3+0.5+0.2+0.4) / 4) / 2) + (0.1 / 2)
p’ = 0.175 + 0.050
p’ = 0.225
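The kernel arithmetic is easy to verify; here's a tiny Python sketch (the helper name is mine) of the half-and-half weighting:

```python
def half_neighbor_average(center, edge_neighbors):
    # 50% from the pixel itself, 50% from the mean of its
    # 4 edge-connected (Von Neumann) neighbors
    return center / 2.0 + (sum(edge_neighbors) / len(edge_neighbors)) / 2.0

# The worked example from the text: p = 0.1 with edge
# neighbors 0.3, 0.5, 0.2 and 0.4
print(round(half_neighbor_average(0.1, [0.3, 0.5, 0.2, 0.4]), 3))  # 0.225
```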
Time for a quick challenge! See if you can implement this half-neighbor-averaging filter in smoothNoise(vec2 p).
Hint: Remember to remove any unnecessary neighbors! Your GPU will thank you and reward you with faster rendering and less griping.
Solution Inside: Smooth Noise Filter
If you didn't figure it out, take a look at the code in the spoiler, and replace your existing smoothNoise method with it. Reduce your number of tiles to 8., then build and run.
Your texture is starting to look more natural, with smoother transitions between tiles. Compare the image above (smooth noise) with the one below (random noise) to appreciate the impact of the smoothing function.
Great job so far :]
The next step for your noise shader is to rid the tiles of hard edges by using bilinear interpolation, which is simply linear interpolation on a 2D grid.
For ease of comprehension, the image below shows the desired sampling points for bilinear interpolation within your noise function roughly translated to your previous 2×2 grid:
Tiles can blend into one another by sampling weighted values from their corners at point P. Since each tile is 1×1 unit, the Q points should be sampling noise like so:

Q11 = smoothNoise(0.0, 0.0);
Q12 = smoothNoise(0.0, 1.0);
Q21 = smoothNoise(1.0, 0.0);
Q22 = smoothNoise(1.0, 1.0);
In code, you achieve this with a simple combination of floor() and ceil() functions for p. Add the following function to RWTNoise.fsh, just above main(void):

float interpolatedNoise(vec2 p) {
  float q11 = smoothNoise(vec2(floor(p.x), floor(p.y)));
  float q12 = smoothNoise(vec2(floor(p.x), ceil(p.y)));
  float q21 = smoothNoise(vec2(ceil(p.x), floor(p.y)));
  float q22 = smoothNoise(vec2(ceil(p.x), ceil(p.y)));

  // compute R value

  // return P value
}
GLSL already includes a linear interpolation function called mix(). You'll use it to compute R1 and R2, using fract(p.x) as the weight between two Q points at the same height on the y-axis. Include this in your code by adding the following lines at the bottom of interpolatedNoise(vec2 p):

float r1 = mix(q11, q21, fract(p.x));
float r2 = mix(q12, q22, fract(p.x));
Finally, interpolate between the two R values by using mix() with fract(p.y) as the floating-point weight. Your function should look like the following:

float interpolatedNoise(vec2 p) {
  float q11 = smoothNoise(vec2(floor(p.x), floor(p.y)));
  float q12 = smoothNoise(vec2(floor(p.x), ceil(p.y)));
  float q21 = smoothNoise(vec2(ceil(p.x), floor(p.y)));
  float q22 = smoothNoise(vec2(ceil(p.x), ceil(p.y)));
  float r1 = mix(q11, q21, fract(p.x));
  float r2 = mix(q12, q22, fract(p.x));
  return mix(r1, r2, fract(p.y));
}
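Bilinear interpolation is easy to model off-shader, too. This Python sketch (my own names; any function of the integer corners stands in for smoothNoise) mirrors the floor/ceil/mix structure above:

```python
import math

def fract(x):
    # GLSL fract(): fractional part in [0, 1)
    return x - math.floor(x)

def mix(a, b, t):
    # GLSL mix(): linear interpolation between a and b
    return a * (1.0 - t) + b * t

def bilinear(p, corner_value):
    # corner_value(x, y) stands in for smoothNoise() at integer corners
    q11 = corner_value(math.floor(p[0]), math.floor(p[1]))
    q12 = corner_value(math.floor(p[0]), math.ceil(p[1]))
    q21 = corner_value(math.ceil(p[0]), math.floor(p[1]))
    q22 = corner_value(math.ceil(p[0]), math.ceil(p[1]))
    r1 = mix(q11, q21, fract(p[0]))
    r2 = mix(q12, q22, fract(p[0]))
    return mix(r1, r2, fract(p[1]))

# A plane f(x, y) = x + 2y is reproduced exactly by bilinear sampling:
print(bilinear((0.5, 0.5), lambda x, y: x + 2 * y))  # 1.5
# At integer positions, the result is just the corner value itself:
print(bilinear((2.0, 3.0), lambda x, y: x + 2 * y))  # 8.0
```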
Since your new function requires smooth, floating-point weights and implements floor() and ceil() for sampling, you must remove floor() from main(void).
Replace the lines:
float tiles = 8.;
position = floor(position*tiles);
float n = smoothNoise(position);
With the following:
float tiles = 8.;
position *= tiles;
float n = interpolatedNoise(position);
Build and run. Those hard tiles are gone…
… but there is still a discernible pattern of “stars”, which is totally expected, by the way.
You’ll get rid of the undesirable pattern with a smoothstep function. smoothstep()
is a nicely curved function that uses cubic interpolation, and it’s much nicer than simple linear interpolation.
Add the following line inside interpolatedNoise(vec2 p), at the very beginning:

vec2 s = smoothstep(0., 1., fract(p));
Now you can use s as the smooth-stepped weight for your mix() functions, like so:

float r1 = mix(q11, q21, s.x);
float r2 = mix(q12, q22, s.x);
return mix(r1, r2, s.y);
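For reference, smoothstep(0., 1., t) evaluates the cubic 3t² - 2t³, whose slope is zero at both ends. A quick Python sketch (the helper name is mine):

```python
def smoothstep01(t):
    # GLSL smoothstep(0., 1., t) for t in [0, 1]: the cubic 3t^2 - 2t^3,
    # which has zero slope at t = 0 and t = 1
    return t * t * (3.0 - 2.0 * t)

# It agrees with linear interpolation at the ends and the midpoint...
print(smoothstep01(0.0), smoothstep01(0.5), smoothstep01(1.0))  # 0.0 0.5 1.0
# ...but eases in and out in between, which softens the grid pattern
print(smoothstep01(0.25))  # 0.15625
```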
Build and run to make those stars disappear!
The stars are definitely gone, but there's still a bit of a pattern, almost like a labyrinth. This is simply due to the 8×8 dimensions of your square grid. Reduce tiles to 4., then build and run again!
Much better.
Your noise function is still a bit rough around the edges, but it could serve as a texture primitive for billowy smoke or blurred shadows.
Final stretch! Hope you didn't forget about little ol' uTime, because it's time to animate your noise. Simply add the following line inside main(void), just before assigning n:

position += uTime;
Build and run.
Your noisy texture will appear to be moving towards the bottom-left corner, but what’s really happening is that you’re moving your square grid towards the top-right corner (in the +x, +y direction). Remember that 2D noise extends infinitely in all directions, meaning your animation will be seamless at all times.
Hypothesis: Sphere + Noise = Moon? You’re about to find out!
To wrap up this tutorial, you’ll combine your sphere shader and noise shader into a single moon shader in RWTMoon.fsh. You have all the information you need to do this, so this is a great time for a challenge!
Hint: Your noise tiles will now be defined by the sphere's radius, so replace the following lines:

float tiles = 4.;
position *= tiles;
With a simple:
position /= radius;
Also, I double-dare you to refactor a little bit by completing this function:
float diffuseSphere(vec2 p, float r) {
}
Solution Inside: Werewolves, Beware
Remember to change your program's fragment shader source to RWTMoon in RWTViewController.m:

self.shader = [[RWTBaseShader alloc] initWithVertexShader:@"RWTBase" fragmentShader:@"RWTMoon"];
While you're there, feel free to change your glClearColor() to complement the scene a bit more (I chose xkcd's midnight purple):

glClearColor(.16f, 0.f, .22f, 1.f);
Build and run! Oh yeah, I’m sure Ozzy Osbourne would approve.
Here is the completed project with all of the code and resources for this OpenGL ES Pixel Shaders tutorial. You can also find its repository on GitHub.
Congratulations, you’ve taken a very deep dive into shaders and GPUs, like a daring math-stronaut, testing all four dimensions, as well as the limits of iOS development itself! This was quite a different and difficult tutorial, so I whole-heartedly applaud your efforts.
You should now understand how to use the immense power of the GPU, combined with clever use of math, to create interesting pixel-by-pixel renderings. You should be comfortable with GLSL functions, syntax and organization too.
There wasn’t much Objective-C in this tutorial, so feel free to go back to your CPU and think of cool ways to manipulate your shaders even more!
Try adding uniform variables for touch points, or gyroscope data, or microphone input. Browser + WebGL may be more powerful, but Mobile + OpenGL ES is certainly more interesting :]
There are many paths to explore from here on out, and here are a few suggestions:
In general, I suggest you check out the amazing GLSL Sandbox gallery straight away.
There you can find shaders for all levels and purposes, plus the gallery is edited/curated by some of the biggest names in WebGL and OpenGL ES. They’re the rockstars that inspired this tutorial and are shaping the future of 3D graphics, so a big THANKS to them. (Particularly @mrdoob, @iquilezles, @alteredq.)
If you have any questions, comments or suggestions, feel free to join the discussion below!
OpenGL ES Pixel Shaders Tutorial is a post from: Ray Wenderlich
I've been working from home for over three years now, and while I absolutely love it, one of the things I miss the most about working in an office is the camaraderie you have with fellow developers there.
The good news is that in the past year or so, I’ve found my fix with an online alternative: IRC!
IRC is an internet chat protocol that has been around since the early days of the Internet. You can connect to IRC servers to chat about any subject imaginable – including iOS development, OS X development, and even Swift development.
I believe IRC is a great way to get to know fellow iOS developers, to get help with questions, and to help out others.
That’s why I’m writing this tutorial! This tutorial will help get you started with:
Let’s get chatting!
The first step is to choose, download, and install an OS X IRC client, and then follow the instructions I've provided to connect to a chat room. Here are some of the most popular options:
Again – download and install the client of your choice, and then jump to the appropriate instructions below!
Connecting to an IRC server
Start up Colloquy and go to File\New Connection. For Nickname enter your preferred nickname, for Chat Server enter irc.freenode.net, and click Connect:
Back in your list of connections, after a few moments you should see a lightning bolt icon appear – this indicates you are connected. Note that you can always double click a connection to connect.
Registering your nickname
Click the Console button to reveal a connection to the IRC server itself. This will allow you to send some commands to register your nickname, which is a prerequisite to connecting to some of the iOS development channels.
Enter the following command down in the text field at the bottom of the screen and hit enter:
/msg NickServ REGISTER password youremail@example.com
After a few moments, you should see a reply from NickServ letting you know that it has sent you an email:
Check your email and enter the command that it tells you in the text field and hit enter to continue. You should see a success message from NickServ.
Back in your Connections list, right click your connection and choose Get Info. Enter the password you set in the password field:
Right click on the connection, and choose Disconnect. Then double click to connect again. If you still have your console open, you will see an "authentication successful" message – this means your nickname and password are registered!
Joining a channel
Now for the fun part – joining a chat channel for iOS developers. Click the Join Room button in your Connections window:
Make sure the Connection is set to irc.freenode.net, for the Chat Room enter cocoa-init, and click Join:
And you’re in! You can use the text field at the bottom to chat.
At this point, feel free to skip ahead to the IRC Channels for iOS Developers section to find out about more channels you can join!
Connecting to an IRC server
Start up Adium. If the Setup Assistant appears, click the x button to dismiss it.
Then go to File\Add Account\IRC (Internet Relay Chat). For Nickname enter your preferred nickname, for Hostname enter irc.freenode.net, and click OK:
After a few moments, the green icon next to your name should light up to indicate that you are online. Note that you can always use the dropdown to switch your status to available to connect.
Registering your nickname
Go to File\New Chat, make sure that From is set to
Enter the following command down in the text field at the bottom of the screen and hit enter:
REGISTER password youremail@example.com
After a few moments, you should see a reply from NickServ letting you know that it has sent you an email:
Check your email and enter the command that it tells you in the text field (without the /msg NickServ part) and hit enter to continue. You should see a success message from NickServ.
Close the NickServ window. In the Contacts window, choose the dropdown next to Available and set it to Offline to disconnect. Then set it back to Available to reconnect.
After a few moments, NickServ will ask you for your password, so enter the password you set in the password field:
If you don't see any errors – this means your nickname and password are registered!
Joining a channel
Now for the fun part – joining a chat channel for iOS developers. Go to File\Join Group Chat…, make sure the Account is set to irc.freenode.net, for Channel enter #cocoa-init, and click Join:
And you’re in! You can use the text field at the bottom to chat.
At this point, feel free to skip ahead to the IRC Channels for iOS Developers section to find out about more channels you can join!
Connecting to an IRC server
Irssi is different from the other options so far in that everything is done on the command line!
Start up Irssi and you’ll see the following:
Enter these commands to connect to Freenode:
/set nick yournickname
/network add -whois 1 -msgs 4 -kicks 1 -modes 4 freenode
/server add -auto -network freenode irc.freenode.net 6667
/connect freenode
After a few moments you should see some welcome messages from Freenode – this indicates you are connected.
Registering your nickname
Next you need to send some commands to NickServ to register your nickname, which is a prerequisite to connecting to some of the iOS development channels.
Enter the following command down in the text field at the bottom of the screen and hit enter:
/msg NickServ REGISTER password youremail@example.com
This causes irssi to open a new window – use Command-P to switch to it.
After a few moments, you should see a reply from NickServ letting you know that it has sent you an email in the new window.
Check your email and enter the command that it tells you in the text field (but without the /msg NickServ part) and hit enter to continue. You should see a success message from NickServ.
Hit Command-P to go back to the main window. Enter this command to auto-register with NickServ when you connect from now on:
/network add -autosendcmd "/^msg nickserv identify password;wait 2000" freenode
/save
/quit
Restart irssi, and verify that you automatically connect and register your nickname.
Joining a channel
Now for the fun part – joining a chat channel for iOS developers. Simply enter the following command:
/join #cocoa-init
You will see a list of users in the channel, and you can use the text field at the bottom to chat.
And you're in! For more information, check out the Irssi documentation.
At this point, feel free to skip ahead to the IRC Channels for iOS Developers section to find out about more channels you can join!
Connecting to an IRC server
Start up Textual, click the + button in the lower left, and select Add Server:
For Network Name enter Freenode and for Server Address enter irc.freenode.net:
Switch to the Identity tab, for Nickname enter your preferred nickname, and click Save:
Back in the main window, double click the Freenode entry to connect. You should see a message from the server – this indicates you are connected.
Registering your nickname
Next you need to send some commands to register your nickname, which is a prerequisite to connecting to some of the iOS development channels.
Enter the following command down in the text field at the bottom of the screen and hit enter:
/msg NickServ REGISTER password youremail@example.com
After a few moments, you should see a reply from NickServ letting you know that it has sent you an email:
Check your email and enter the command that it tells you in the text field (without the /msg NickServ part) and hit enter to continue. You should see a success message from NickServ.
Back on the sidebar, right click your Freenode connection and choose Server Properties. In the Identity tab, enter the password you set in the Personal Password field:
Right click on the freenode connection, and choose Disconnect. Then right click and choose Connect to connect again. If you don’t get any errors, this means you’re connected and authenticated successfully!
Joining a channel
Now for the fun part – joining a chat channel for iOS developers. Right click the Freenode entry and choose Join Channel. For Channel enter #cocoa-init, and click Save:
And you’re in! You can use the text field at the bottom to chat.
At this point, feel free to skip ahead to the next section to find out about more channels you can join!
Note: Some IRC channels ban web-based clients like IRCCloud. You may prefer to use one of the other clients to avoid this.
Connecting to an IRC server
Go to irccloud.com and register for a free account. Once you have signed up, you will be automatically directed to the Join a new network screen.
Under Hostname enter irc.freenode.net. For Nickname, enter your preferred nickname. Leave the other values at their defaults and click the Join network button.
Registering your nickname
You will need to register your nickname with the server before you can start chatting. Click on freenode shown towards the right side of window to reveal the server console. Here you can send commands to register your nickname, which is required to connect to some of the iOS development channels.
In the text field shown at the bottom of the screen, enter the following command:

/msg NickServ REGISTER password youremail@example.com
After a few moments, you should see a reply from NickServ, letting you know that it has sent you an email:
Check your email and enter the command that it tells you in the text field and hit enter to continue. You will see a successfully verified message from NickServ.
Now click on freenode towards the right side to select the server and click the Identify Nickname button. Once you are identified successfully, you are good to join channels.
Joining a channel
In the text field shown below, enter the following command:

/join #cocoa-init
You will soon be redirected to the #cocoa-init channel screen. You can use the text field at the bottom of the screen to start chatting.
At this point, feel free to skip ahead to the IRC Channels for iOS Developers section to find out about more channels you can join!
Now that you’ve successfully connected to IRC, you may be wondering what some good channels are to join. Here are our recommendations:
There are a few areas of IRC Etiquette that you should keep in mind.
First, it’s cool to ask questions on IRC, but if you do be sure to try to answer questions and help others as well. Learn the art of asking good questions. If you want to share source code, don’t paste it directly into the channel but use a “pastebin” instead.
Second, note that IRC can be very distracting if you let it. What I personally have found helpful is to simply minimize IRC and ignore it for a while when I get busy or am in the middle of something. Don’t worry, no-one will be insulted if you leave mid-conversation – we all do the same thing :]
Sometimes people who have nothing better to do with their time (usually bored kids) find it funny to troll on IRC. They do this just to get a rise out of people. The best advice is to ignore them. If a troll finds no response, they’ll go away eventually. If the trolling gets really bad, notify one of the channel operators so they can kick the trolls out of the room. Of course, don’t be a troll yourself. ;]
Remember that text — especially in real time chat — lacks the finesse of face-to-face conversation. It’s good to have a thick skin on IRC. It’s easy to get offended — or to offend — and start a flame war, but that spoils the mood for everyone and will get you kicked, or even banned, from the channel. Respect the channel rules.
Tip: Most IRC clients support “tab completion”. So if you want to respond to someone with the nick JonnyAppleseed, just type the first few letters of the nick followed by the tab key, and the IRC app will complete the name for you. Typing “jo<tab>” is a lot quicker than typing the full name.
Be nice, and make friends!
Enjoy! Remember the whole idea is to have an informal place to chat, help each other out, hang out, and have fun – when you have time to spare and need a “water cooler” moment! :]
Many other IRC fans and I hope to get a chance to talk to you soon!
IRC for iOS Developers is a post from: Ray Wenderlich
I thought you’d all go on vacation by now, but I should have known you’d keep making awesome apps instead!
The raywenderlich.com community just can’t stop building new, exciting apps.
This month we’ve got:
What are you waiting for? You’ve got a lot of apps to download from your fellow readers!
Did you think Flappy Bird was hard? You haven’t seen Beelly yet :]
Beelly inverts the controls on you every 8 seconds. One second, tilting right moves you right; the next, tilting right moves you left. As if that wasn't enough to bring the pain, the game's objective is to thread a needle with this cute little bee.
If you dare, steer Beelly through a winding meadow with pinpoint accuracy and see how far you can get. >:]
A truly one of a kind app, Smilophone is an instrument we can all play.
Smilophone creates music based on your facial expressions! Using the camera of your iOS device, Smilophone uses face-tracking technology to figure out if you're smiling, or if you're raising your eyebrows.
You can smile to make high-pitched sounds and raise your eyebrows to change the tune. Sad faces make sad sounds. Your face is all you need to create a musical work of art.
Most of us as developers are all too aware of the pain that is time tracking. Thanks to Sometime, that may no longer be a problem at all.
Sometime makes it easy to track time on multiple projects. Simply set up "buckets" for tracking time. Each bucket can be assigned to a project and client so you can get granular in your tracking. Tap a bucket to start tracking in real time.
Sometime also lets you see an overview timeline of your work for each day. Add in geofencing and calendar integration and you have an app that can track all the time you need right in your pocket. This is definitely a productivity tool worth checking out.
S3nsitive is a puzzle game with a little more to it than meets the eye. It's simple-looking at first: just get from A to B. But there's much more to it than that.
Each step you take eliminates the block beneath you, so you can't backtrack in this tricky puzzle. And don't spend too much time thinking; blocks between platforms can only support you for so long.
With 40 levels, a sweet soundtrack, and GameCenter leaderboards, this puzzler is definitely worth checking out.
Flyover puts you in charge of your own airplane to fly around the world.
With over 260 destinations, you choose where your airplane flies. You can upgrade its speed, capacity, and maximum distance with cash you earn flying passengers between cities.
Keep track of fuel costs for maximum efficiency and watch out for bad weather grounding your planes.
Well, camera apps come and go, but hipster faces live on forever.
Hipster Face Live definitely got a few laughs from my friends. You can select different stamps like hats, glasses, and beards, then see them line up in real time on your face using facial recognition.
You can take snapshots, then save them to your camera roll or share them directly. It's quite a bit of fun to take silly pictures and send them to your mom. Your mom needs more silly hipster pictures, so download this app and give it a go! :]
The Last Ninja standing must fight the undead. Zombies galore on this high-rise of pain.
Zombies are everywhere. Each floor you advance, they get tougher and more cunning. But you're a ninja! Cut them down with slashes and chops. Use special abilities to clear them with style.
Above all, keep jumping! With spikes on the floor growing ever closer, it's death or zombies. You decide. ;]
TheNews is an app that makes catching up on the daily news quick and minimal.
TheNews integrates with Designer News and Hacker News in a clean, simple interface that makes reading a delight. The gesture-based controls you'd expect make managing your list a breeze.
Embedded web browsing, commenting, and sharing keep you in the app without needing to bounce around. Truly a one-stop shop for quick news snippets.
I don’t know about you but juggling is hard. Juggler however, makes it a fun game.
Juggler is about not dropping any of the balls that fall onto the screen. It supports full multitouch, so you can juggle as many at a time as you can handle.
The higher your score, the more balls thrown at you. You've got to be fast if you want to top the GameCenter charts.
AS Test measures your arithmetic skills with speed tests, offers training to improve, and global leaderboards to challenge.
With built in data tracking, AS Test can show you where you struggle on a question heat map. The heat map pinpoints problem areas based on all the questions you’ve been presented. Then using the training section you can hone your skills and battle for highest global score.
Speed counts, but so does difficulty. The more questions you answer within the time limit, and the higher their complexity, the higher you'll rank.
What would you do if you were a poor little devil kicked out of Demon Academy? Go to heaven of course!
Help this cute little devil jump his way to heaven through obstacles and puzzles. Use four powerups: fire, ice, blasting, and shrinking.
The game offers 30 completely free levels of addictive, side-scrolling, platformer-style gameplay.
Each month, tons of our readers submit awesome applications they've made for me to review. While I give every app a try, I don't have time to write about them all. These are still great apps. It's not a popularity contest, or even a favorite-picking contest. I enjoy getting a glimpse of the community through your apps. Take a moment to check out these great apps I just didn't have enough time to share with you.
RebelChick
BubbleTT
Pirate Ring
Ball Smasher: The Big Bang
VBall ScoreMaster
County
Novae Marathon
Filetto RT
Retro Sparkle
Family Life on the Map
Newton’s Playground
Black Screen Video Spy Recorder
Make The Match!
Squirmy Puzzle
ArithMate
Cannon Bird
Banometer
Alchemistas Beyond The Veil
Animal Sounds & Name
Another month come and gone. I love seeing what our community of readers comes up with. The apps you build are the reason we keep writing tutorials. Make sure you tell me about your next one: submit here!
If you’ve never made an app, this is your month! Check out our free tutorials to become an iOS star. What are you waiting for – I want to see your app next month!
Readers’ App Reviews – July 2014 is a post from: Ray Wenderlich
With the introduction of iOS 7, Apple changed all of this. In one fell swoop, developers now had access to a 2D graphics and physics engine, all accessible through Objective-C. Now, developers could focus on making a game as opposed to managing the architecture of one.
The game framework is called Sprite Kit, and in this tutorial you are going to learn how to make a game similar to Cut the Rope, an award-winning physics-based puzzle game. You'll learn how to:
By the end of this tutorial, you’ll be well on your way to using Sprite Kit in your own projects.
Just keep in mind that this is not an entry-level tutorial. If classes such as SKNode or SKAction are entirely new to you, then check out our Sprite Kit Tutorial for Beginners. That tutorial will quickly get you up to speed so you can start feeding pineapples to crocodiles.
Wait. What?
Read on.
In this tutorial, you’ll be creating a game called Cut the Verlet. This game is modeled after Cut the Rope, which tasks you with cutting a rope holding a piece of candy so it drops into the mouth of a rather hungry, albeit impatient, creature. Each level adds additional challenges, such as spiders and buzz saws. A previous tutorial built this game using Cocos2D, but in this tutorial, you’ll build it with Sprite Kit.
So what is a verlet? The name is short for verlet integration, a way to model the trajectories of particles in motion and a great tool for modeling rope physics. Gustavo Ambrozio, the author of the Cocos2D version of this tutorial, provides an excellent overview of verlets and how they apply to this game. Give that section a read before continuing with this tutorial. Think of it as required reading. :]
To get started, first download the starter project for this tutorial. Extract the project to a convenient location on your hard drive and then open it in Xcode for a quick look at how it’s structured.
The project’s contents are in four main folders, as shown below:
In addition, I’ve added all of the necessary #import statements to the starter project, including those in the CutTheVerlet-Prefix.pch file.
Close the Resources and Other Resources folders; you won’t be making any changes in those areas. You’ll work directly only with the files located in the Classes and Helpers folders.
It’s time to begin!
A constant is a variable you can rely upon: once you set its value, that value never changes. Constants are a great way to make your code more manageable. Global constants in particular can make your code easier to read and maintain.
In this project, you’ll create global constants to define sprite texture image names, sound file names, sprite node names, the z-order (zPosition) of your sprites and the category defined for each sprite, which you’ll use for collision detection.

Open TLCSharedConstants.h and add the following code above the @interface line:
typedef NS_ENUM(int, Layer) {
    LayerBackground,
    LayerForeground,
    LayerCrocodile,
    LayerRope,
    LayerPrize
};

typedef NS_OPTIONS(int, EntityCategory) {
    EntityCategoryCrocodile = 1 << 0,
    EntityCategoryRopeAttachment = 1 << 1,
    EntityCategoryRope = 1 << 2,
    EntityCategoryPrize = 1 << 3,
    EntityCategoryGround = 1 << 4
};

extern NSString *const kImageNameForRopeHolder;
extern NSString *const kImageNameForRopeTexture;

extern NSString *const kImageNameForCrocodileBaseImage;
extern NSString *const kImageNameForCrocodileMouthOpen;
extern NSString *const kImageNameForCrocodileMouthClosed;

extern NSString *const kSoundFileNameForCutAction;
extern NSString *const kSoundFileNameForSplashAction;
extern NSString *const kSoundFileNameForBiteAction;
extern NSString *const kSoundFileNameForBackgroundMusic;

extern NSString *const kImageNameForPrize;
extern NSString *const kNodeNameForPrize;
The code above declares two int-based enumerated types, Layer and EntityCategory. You’ll use these to determine the zPosition and collision category of a sprite when you add it to the scene; more about this soon.
The code also declares a group of constant NSString variables using the const keyword. The extern keyword comes in handy when creating global variables, as it allows you to create unambiguous declarations: you can declare a variable here but set its value elsewhere.
Why do programmers name constants with a ‘k’ prefix? There’s a small debate as to why we use k, but the general consensus is that it originates from Hungarian notation, where k designates a constant. Or is it a konstant? :]
Remember that part about declaring a variable here and setting it elsewhere? Well, the elsewhere in this case is TLCSharedConstants.m.
Open TLCSharedConstants.m and add the following code above the @implementation line:
NSString *const kImageNameForRopeHolder = @"ropeHolder";
NSString *const kImageNameForRopeTexture = @"ropeTexture";

NSString *const kImageNameForCrocodileBaseImage = @"croc";
NSString *const kImageNameForCrocodileMouthOpen = @"croc01";
NSString *const kImageNameForCrocodileMouthClosed = @"croc00";

NSString *const kSoundFileNameForCutAction = @"cut.caf";
NSString *const kSoundFileNameForSplashAction = @"splash.caf";
NSString *const kSoundFileNameForBiteAction = @"bite.caf";
NSString *const kSoundFileNameForBackgroundMusic = @"CheeZeeJungle.caf";

NSString *const kImageNameForPrize = @"pineapple";
NSString *const kNodeNameForPrize = @"pineapple";
Here you use string values to set the names of images and sound clips. If you’ve played Cut the Rope, you can probably figure out what the variable names represent. You also set a string value for the sprite node name that you’ll use in the collision detection methods, which the Collision Detection section of this tutorial will explain.
Now that you’ve got your constants in place, you can begin adding nodes to your scene, starting with the scenery itself—the background and foreground!
The starter project provides stub versions of the project’s methods—adding the code is your job. The first steps are to initialize the scene and add a background.
Open TLCMyScene.m and add the following properties to the interface declaration:
@property (nonatomic, strong) SKNode *worldNode;

@property (nonatomic, strong) SKSpriteNode *background;
@property (nonatomic, strong) SKSpriteNode *ground;
@property (nonatomic, strong) SKSpriteNode *crocodile;
@property (nonatomic, strong) SKSpriteNode *treeLeft;
@property (nonatomic, strong) SKSpriteNode *treeRight;
Here you define properties to hold references to the different nodes in the scene.
Now add the following block of code to initWithSize:, just after the comment that reads /* add setup here */:
self.worldNode = [SKNode node];
[self addChild:self.worldNode];

[self setupBackground];
[self setupTrees];
The code above creates an SKNode object and assigns it to the worldNode property. It then adds the node to the scene using addChild:.
It also calls two methods, one for setting up the background and one for setting up the two trees. Because the two methods are almost identical, I’ll explain them together once you’ve added them.
First, locate setupBackground and add the following:
self.background = [SKSpriteNode spriteNodeWithImageNamed:@"background"];
self.background.anchorPoint = CGPointMake(0.5, 1);
self.background.position = CGPointMake(self.size.width/2, self.size.height);
self.background.zPosition = LayerBackground;
[self.worldNode addChild:self.background];

self.ground = [SKSpriteNode spriteNodeWithImageNamed:@"ground"];
self.ground.anchorPoint = CGPointMake(0.5, 1);
self.ground.position = CGPointMake(self.size.width/2, self.background.frame.origin.y);
self.ground.zPosition = LayerBackground;
[self.worldNode addChild:self.ground];

SKSpriteNode *water = [SKSpriteNode spriteNodeWithImageNamed:@"water"];
water.anchorPoint = CGPointMake(0.5, 1);
water.position = CGPointMake(self.size.width/2, self.ground.frame.origin.y + 10);
water.zPosition = LayerBackground;
[self.worldNode addChild:water];
Next, locate setupTrees and add this code:
self.treeLeft = [SKSpriteNode spriteNodeWithImageNamed:@"treeLeft"];
self.treeLeft.anchorPoint = CGPointMake(0.5, 1);
self.treeLeft.position = CGPointMake(self.size.width * .20, self.size.height);
self.treeLeft.zPosition = LayerForeground;
[self.worldNode addChild:self.treeLeft];

self.treeRight = [SKSpriteNode spriteNodeWithImageNamed:@"treeRight"];
self.treeRight.anchorPoint = CGPointMake(0.5, 1);
self.treeRight.position = CGPointMake(self.size.width * .86, self.size.height);
self.treeRight.zPosition = LayerForeground;
[self.worldNode addChild:self.treeRight];
Now that everything is in place, it’s time to explain.
In setupBackground and setupTrees, you create an SKSpriteNode and initialize it using spriteNodeWithImageNamed:, passing in the image name and assigning the sprite to its equivalently-named property. In other words, you initialize the property variables.
You then change each sprite’s anchorPoint from the default value of (0.5, 0.5) to a new value of (0.5, 1).
You also set each sprite’s position (location) and zPosition (depth). For the most part, you only take the scene’s width and height into consideration when setting a sprite’s position.
The ground sprite, however, needs to be positioned directly under the bottom edge of the background. You accomplish this by reading the background’s frame via self.background.frame.origin.y.
Likewise, you want the water sprite, which doesn’t use a declared property, directly under the ground sprite with a little space in between. You achieve this using self.ground.frame.origin.y + 10.
Recall that in TLCSharedConstants.h, you specified some constants for use with the sprites’ zPosition. You use two of them in the code above: LayerBackground and LayerForeground. Since SKSpriteNode inherits from SKNode, you have access to all of SKNode’s properties, including zPosition.
Finally, you add all of the newly created sprites to your world node.
While it’s possible to add sprites directly to a scene, creating a world node to contain things will be better in the long run, especially because you’re going to apply physics to the world.
You’ve got official approval for your first build and run! So… what are you waiting for?
Build and run your project. If you did everything right, you should see the following screen:
It’s a lonely world out there. It’s time to bring out the crocodiles!
Adding the crocodile node is not much different from adding the background and foreground.
Locate setupCrocodile inside of TLCMyScene.m and add the following block of code:
self.crocodile = [SKSpriteNode spriteNodeWithImageNamed:kImageNameForCrocodileMouthOpen];
self.crocodile.anchorPoint = CGPointMake(0.5, 1);
self.crocodile.position = CGPointMake(self.size.width * .75,
                                      self.background.frame.origin.y + (self.crocodile.size.height - 5));
self.crocodile.zPosition = LayerCrocodile;
[self.worldNode addChild:self.crocodile];

[self animateCrocodile];
The code above uses two of the constants you set up earlier: kImageNameForCrocodileMouthOpen and LayerCrocodile. It also sets the position of the crocodile node based on the background node’s frame.origin.y location and the crocodile node’s own size.
Just as before, you set the zPosition to place the crocodile node on top of the background and foreground. By default, Sprite Kit “layers” nodes based on the order in which they’re added; you can choose a node’s depth yourself by giving it a different zPosition.
Now it’s time to animate the crocodile.
Find animateCrocodile and add the following code:
NSMutableArray *textures = [NSMutableArray arrayWithCapacity:1];
for (int i = 0; i <= 1; i++) {
    NSString *textureName = [NSString stringWithFormat:@"%@0%d", kImageNameForCrocodileBaseImage, i];
    SKTexture *texture = [SKTexture textureWithImageNamed:textureName];
    [textures addObject:texture];
}

CGFloat duration = RandomFloatRange(2, 4);

SKAction *move = [SKAction animateWithTextures:textures timePerFrame:0.25];
SKAction *wait = [SKAction waitForDuration:duration];
SKAction *rest = [SKAction setTexture:[textures objectAtIndex:0]];

SKAction *animateCrocodile = [SKAction sequence:@[wait, move, wait, rest]];
[self.crocodile runAction:[SKAction repeatActionForever:animateCrocodile]];
The previous code creates an array of SKTexture objects, which you then animate using SKAction objects. You also use a constant to set the base image name and build the list of animation frames in a for loop; there are only two images in this animation, croc00 and croc01. Finally, you use a series of SKAction objects to animate the crocodile node.
SKAction’s sequence: method allows you to set multiple actions and run them as… you guessed it… a sequence!
Once you’ve established the sequence, you run it on the node using runAction:. In the code above, you use repeatActionForever: to instruct the node to animate indefinitely.
The final step in adding and animating your crocodile is to call setupCrocodile. You’ll do this in initWithSize:.
Toward the top of TLCMyScene.m, locate initWithSize: and add the following line after [self setupTrees];:
[self setupCrocodile];
That’s it! Prepare yourself to see a mean-looking crocodile wildly open and shut its jaws in the hope of eating whatever may be around.
Build and run the project to see this fierce reptile in action!
That’s pretty scary, right? As the player, it’s your job to keep this guy happy with pineapple, which everyone knows is a crocodile’s favorite food. ;]
If your screen doesn’t look like the one above, you may have missed a step somewhere along the way.
You’ve got scenery and you’ve got a player, so let’s institute some ground rules to get this party started—physics!
Sprite Kit makes use of iOS’s packaged physics engine, which in reality is just Box2D under the covers. If you’ve ever used Cocos2D, then you may have used Box2D for managing your physics. The big difference in Sprite Kit is that Apple has encapsulated the library in an Objective-C wrapper, so you won’t need to use C++ to access it.
To get started, locate initWithSize: inside of TLCMyScene.m and add the following three lines after the [self addChild:self.worldNode] line:
self.worldNode.scene.physicsWorld.contactDelegate = self;
self.worldNode.scene.physicsWorld.gravity = CGVectorMake(0.0, -9.8);
self.worldNode.scene.physicsWorld.speed = 1.0;
Also, add the following to the end of the @interface line at the top of the file:

<SKPhysicsContactDelegate>
The code above sets the physics world’s contact delegate, gravity and speed. Remember, the world node will contain all your other nodes, so it makes sense to configure your physics here.
The gravity and speed values above are the defaults for their respective properties. The former specifies the gravitational acceleration applied to physics bodies in the world, while the latter specifies the speed at which the simulation executes. Since they’re the default values, you don’t need to specify them above, but it’s good to know they exist in case you want to tweak your physics.
Both of these properties can be found in the SKPhysicsWorld Class Reference.
Now for the part you’ve been eagerly anticipating… the ropes! Excuse me—I mean, the verlets.
This project uses the TLCGameData class as a means of setting up the ropes. In a production environment, you’d likely use a PLIST or some other data store to configure the levels. In a moment, you’re going to create an array of TLCGameData objects to represent your data store.
Open TLCGameData.h and add the following properties:
@property (nonatomic, assign) int name;
@property (nonatomic, assign) CGPoint ropeLocation;
@property (nonatomic, assign) int ropeLength;
@property (nonatomic, assign) BOOL isPrimaryRope;
These will serve as your data model. Again, in a production environment, you’d benefit from using a PLIST rather than programmatically creating your game data.
Now go back to TLCMyScene.m and add the following after the last #import statement:
#define lengthOfRope1 24
#define lengthOfRope2 18
#define lengthOfRope3 15
Then, add two properties to hold the prize and level data. Add them right after the other properties:
@property (nonatomic, strong) SKSpriteNode *prize;
@property (nonatomic, strong) NSMutableArray *ropes;
Once you’ve done that, locate setupGameData and add the following block of code:
self.ropes = [NSMutableArray array];

TLCGameData *rope1 = [[TLCGameData alloc] init];
rope1.name = 0;
rope1.ropeLocation = CGPointMake(self.size.width * .12, self.size.height * .94);
rope1.ropeLength = lengthOfRope1;
rope1.isPrimaryRope = YES;
[self.ropes addObject:rope1];

TLCGameData *rope2 = [[TLCGameData alloc] init];
rope2.name = 1;
rope2.ropeLocation = CGPointMake(self.size.width * .85, self.size.height * .90);
rope2.ropeLength = lengthOfRope2;
rope2.isPrimaryRope = NO;
[self.ropes addObject:rope2];

TLCGameData *rope3 = [[TLCGameData alloc] init];
rope3.name = 2;
rope3.ropeLocation = CGPointMake(self.size.width * .86, self.size.height * .76);
rope3.ropeLength = lengthOfRope3;
rope3.isPrimaryRope = NO;
[self.ropes addObject:rope3];
The code above sets basic parameters for your ropes. The most important is the isPrimaryRope property, because it determines how the ropes are connected to the prize. When creating your ropes, only one should have this property set to YES.
Finally, add two more calls to initWithSize:: [self setupGameData] and [self setupRopes]. When you’re done, initWithSize: should look like this:
if (self = [super initWithSize:size]) {
    /* Setup your scene here */
    self.worldNode = [SKNode node];
    [self addChild:self.worldNode];

    self.worldNode.scene.physicsWorld.contactDelegate = self;
    self.worldNode.scene.physicsWorld.gravity = CGVectorMake(0.0, -9.8);
    self.worldNode.scene.physicsWorld.speed = 1.0;

    [self setupSounds];
    [self setupGameData];

    [self setupBackground];
    [self setupTrees];
    [self setupCrocodile];
    [self setupRopes];
}
return self;
Now you can build the ropes!
In this section, you’ll begin creating the class that handles the ropes.
Open TLCRope.h. You’re going to add two blocks of code to this file. Add the first block, the delegate’s protocol, before the @interface section:
@protocol TLCRopeDelegate
- (void)addJoint:(SKPhysicsJointPin *)joint;
@end
Add the second block of code, which includes the declaration of a custom init method, after the @interface section:
@property (strong, nonatomic) id<TLCRopeDelegate> delegate;

- (instancetype)initWithLength:(int)length
          usingAttachmentPoint:(CGPoint)point
                        toNode:(SKNode *)node
                      withName:(NSString *)name
                  withDelegate:(id<TLCRopeDelegate>)delegate;

- (void)addRopePhysics;

- (NSUInteger)getRopeLength;
- (NSMutableArray *)getRopeNodes;
Delegation requires one object to define a protocol containing methods to which it expects its delegate to respond. The delegate class must then declare that it follows this protocol and implement the required methods.
Once you’ve finished your header file, open TLCRope.m and add the following properties to the @interface section:
@property (nonatomic, strong) NSString *name;
@property (nonatomic, strong) NSMutableArray *ropeNodes;
@property (nonatomic, strong) SKNode *attachmentNode;
@property (nonatomic, assign) CGPoint attachmentPoint;
@property (nonatomic, assign) int length;
Your next step is to add the code for the custom init. Locate #pragma mark Init Method and add the following block of code:
- (instancetype)initWithLength:(int)length
          usingAttachmentPoint:(CGPoint)point
                        toNode:(SKNode *)node
                      withName:(NSString *)name
                  withDelegate:(id<TLCRopeDelegate>)delegate
{
    self = [super init];
    if (self) {
        self.delegate = delegate;
        self.name = name;
        self.attachmentNode = node;
        self.attachmentPoint = point;
        self.ropeNodes = [NSMutableArray arrayWithCapacity:length];
        self.length = length;
    }
    return self;
}
This is simple enough: you take the values passed into init and use them to set the private properties in your class.
The next two methods you need to add are getRopeLength and getRopeNodes. Locate #pragma mark Helper Methods and add the following:
- (NSUInteger)getRopeLength {
    return self.ropeNodes.count;
}

- (NSMutableArray *)getRopeNodes {
    return self.ropeNodes;
}
The two methods above serve as a means of reading the values of the private properties.
You may have noticed that the code above refers to rope nodes, plural. That’s because in this game, each of your ropes will be made up of many nodes to give it a fluid look and feel. Let’s see how that will work in practice.
Although the next few methods you write will be incomplete, it’s for a good reason: to fully understand what’s happening, it’s important to take things step by step.
The first step is to add the rope parts, minus the physics.
Still working in TLCRope.m, locate #pragma mark Setup Physics and add the following method:
- (void)addRopePhysics
{
    // keep track of the current rope part position
    CGPoint currentPosition = self.attachmentPoint;

    // add each of the rope parts
    for (int i = 0; i < self.length; i++) {
        SKSpriteNode *ropePart = [SKSpriteNode spriteNodeWithImageNamed:kImageNameForRopeTexture];
        ropePart.name = self.name;
        ropePart.position = currentPosition;
        ropePart.anchorPoint = CGPointMake(0.5, 0.5);

        [self addChild:ropePart];
        [self.ropeNodes addObject:ropePart];

        /* TODO - Add Physics Here */

        // set the next rope part position
        currentPosition = CGPointMake(currentPosition.x,
                                      currentPosition.y - ropePart.size.height);
    }
}
In the code above, which you’ll call after the object has been initialized, you create each rope part and add it to the ropeNodes array. You also give each part a name so you can reference it later. Finally, you add each part as a child of the actual TLCRope object using addChild:.
Soon, you’ll replace the TODO comment above with some code to give these rope parts their own physics bodies.
Now that you have everything in place, you’re almost ready to build and run to see your ropes. The final step is to add the ropes and the attached prize to the main game scene, which is exactly what you’re about to do.
Since the project uses a delegate pattern for TLCRope, you need to declare this in whatever class will act as its delegate. In this case, that’s TLCMyScene.
Open TLCMyScene.m and locate the @interface line. Change it to read as follows:
@interface TLCMyScene() <SKPhysicsContactDelegate, TLCRopeDelegate>
You’ll work with three interconnected methods to add the ropes to the scene: setupRopes, addRopeAtPosition:withLength:withName: and setupPrizeUsingPrimaryRope:.
Starting with the first, locate setupRopes and add the following block of code:
// get ropes data
for (int i = 0; i < [self.ropes count]; i++) {
    TLCGameData *currentRecord = [self.ropes objectAtIndex:i];

    // 1
    TLCRope *rope = [self addRopeAtPosition:currentRecord.ropeLocation
                                 withLength:currentRecord.ropeLength
                                   withName:[NSString stringWithFormat:@"%i", i]];
    // 2
    [self.worldNode addChild:rope];
    [rope addRopePhysics];

    // 3
    if (currentRecord.isPrimaryRope) {
        [self setupPrizeUsingPrimaryRope:rope];
    }
}

self.prize.position = CGPointMake(self.size.width * .50, self.size.height * .80);
Here’s the breakdown:

1. Create a TLCRope object using the location, length and name stored in the current game data record.
2. Add the rope to the world node, then give its parts physics by calling addRopePhysics.
3. If the current record is flagged as the primary rope, use it to set up and attach the prize.
Locate addRopeAtPosition:withLength:withName: and replace its current contents with this block of code:
SKSpriteNode *ropeHolder = [SKSpriteNode spriteNodeWithImageNamed:kImageNameForRopeHolder];
ropeHolder.position = location;
ropeHolder.zPosition = LayerRope;

[self.worldNode addChild:ropeHolder];

CGPoint ropeAttachPos = CGPointMake(ropeHolder.position.x, ropeHolder.position.y - 8);

TLCRope *rope = [[TLCRope alloc] initWithLength:length
                           usingAttachmentPoint:ropeAttachPos
                                         toNode:ropeHolder
                                       withName:name
                                   withDelegate:self];
rope.zPosition = LayerRope;
rope.name = name;

return rope;
Essentially, you’re using this method to create the individual ropes to display in your scene.
You’ve seen everything that’s happening here before. First, you initialize an SKSpriteNode using one of the constants for the image name, and then you set its position and zPosition with constants. This SKSpriteNode acts as the “holder” for your rope. The code then initializes your rope object and sets its zPosition and name.
Finally, the last piece of the puzzle gets you the prize! Clever, isn’t it? =]
Locate setupPrizeUsingPrimaryRope: and add the following:
self.prize = [SKSpriteNode spriteNodeWithImageNamed:kImageNameForPrize];
self.prize.name = kNodeNameForPrize;
self.prize.zPosition = LayerPrize;
self.prize.anchorPoint = CGPointMake(0.5, 1);

SKNode *positionOfLastNode = [[rope getRopeNodes] lastObject];
self.prize.position = CGPointMake(positionOfLastNode.position.x,
                                  positionOfLastNode.position.y + self.prize.size.height * .30);

[self.worldNode addChild:self.prize];
You may have noticed the isPrimaryRope property in setupRopes and in the game data’s rope objects. This property lets you loop through the data and select one rope to use as the primary rope for connecting to the prize. When isPrimaryRope is YES, the code above executes and finds the end of the passed-in rope object by getting the last object in the rope’s ropeNodes array, using the getRopeNodes helper method from the TLCRope class.
Note: If you’re wondering why you position the prize relative to the last node of the TLCRope object, it’s due to a bug in Sprite Kit: if a node’s position isn’t set before you assign its physics body, the body will behave unpredictably. The code above uses the last rope part of a single rope (of your choosing) as the prize’s initial position.

And now for the moment you’ve been waiting for… Build and run your project!
Wait… Why isn’t the pineapple attached to the ropes? Why does everything look so stiff?
Don’t worry! The solution to these problems is… more physics!
Remember that TODO comment you added earlier? It’s time to replace that with some physics to get things moving!
Open TLCRope.m and locate addRopePhysics. Replace the TODO comment with the following code:
CGFloat offsetX = ropePart.frame.size.width * ropePart.anchorPoint.x;
CGFloat offsetY = ropePart.frame.size.height * ropePart.anchorPoint.y;

CGMutablePathRef path = CGPathCreateMutable();
CGPathMoveToPoint(path, NULL, 0 - offsetX, 7 - offsetY);
CGPathAddLineToPoint(path, NULL, 7 - offsetX, 7 - offsetY);
CGPathAddLineToPoint(path, NULL, 7 - offsetX, 0 - offsetY);
CGPathAddLineToPoint(path, NULL, 0 - offsetX, 0 - offsetY);
CGPathCloseSubpath(path);

ropePart.physicsBody = [SKPhysicsBody bodyWithPolygonFromPath:path];
ropePart.physicsBody.allowsRotation = YES;
ropePart.physicsBody.affectedByGravity = YES;
ropePart.physicsBody.categoryBitMask = EntityCategoryRope;
ropePart.physicsBody.collisionBitMask = EntityCategoryRopeAttachment;
ropePart.physicsBody.contactTestBitMask = EntityCategoryPrize;

[ropePart skt_attachDebugFrameFromPath:path color:[SKColor redColor]];
CGPathRelease(path);
The code above creates a physics body for each of your rope parts, allowing you to set a series of physical characteristics for each node, like shape, size, mass, gravity and friction effects.
Physics bodies are created using the SKPhysicsBody class method bodyWithPolygonFromPath:, which takes a single parameter: a path. A handy online tool for generating a path is the SKPhysicsBody Path Generator.
Note: Refer to the SKPhysicsBody Class Reference for more information about using physics bodies in your projects.
In addition to creating the physics bodies for each node, the code above also sets some key properties for handling collisions: categoryBitMask, collisionBitMask and contactTestBitMask. Each is assigned one of the constants you defined earlier. The tutorial will cover these properties in depth later.
If you were to run your app right now, each rope component would fall to the bottom of your screen. That’s because you’ve added a physics body to each but have yet to connect them together.
To fuse your rope together, you’re going to use SKPhysicsJoint objects. Add the following method below addRopePhysics:
- (void)addRopeJoints
{
    // setup joint for the initial attachment point
    SKNode *nodeA = self.attachmentNode;
    SKSpriteNode *nodeB = [self.ropeNodes objectAtIndex:0];

    SKPhysicsJointPin *joint = [SKPhysicsJointPin jointWithBodyA:nodeA.physicsBody
                                                           bodyB:nodeB.physicsBody
                                                          anchor:self.attachmentPoint];
    // force the attachment point to be stiff
    joint.shouldEnableLimits = YES;
    joint.upperAngleLimit = 0;
    joint.lowerAngleLimit = 0;

    [self.delegate addJoint:joint];

    // setup joints for the rest of the rope parts
    for (int i = 1; i < self.length; i++) {
        SKSpriteNode *nodeA = [self.ropeNodes objectAtIndex:i - 1];
        SKSpriteNode *nodeB = [self.ropeNodes objectAtIndex:i];

        SKPhysicsJointPin *joint = [SKPhysicsJointPin jointWithBodyA:nodeA.physicsBody
                                                               bodyB:nodeB.physicsBody
                                                              anchor:CGPointMake(CGRectGetMidX(nodeA.frame),
                                                                                 CGRectGetMinY(nodeA.frame))];
        // allow joint to rotate freely
        joint.shouldEnableLimits = NO;
        joint.upperAngleLimit = 0;
        joint.lowerAngleLimit = 0;

        [self.delegate addJoint:joint];
    }
}
This method connects all of the rope parts using the SKPhysicsJointPin class, which allows two connected bodies to rotate independently around an anchor point, resulting in a rope-like feel. You connect (anchor) the first rope part to the attachmentNode at the attachmentPoint, and then link each subsequent node to the one before it.
Note: Instead of going through the delegate, you could call [self.scene.physicsWorld addJoint:joint]; to accomplish the same thing.

Now add a call to this new method at the very bottom of addRopePhysics:
[self addRopeJoints];
Build and run. Ack!
While you have some nice fluid ropes, they don’t contribute much if they just fall off the screen. :] That’s because you haven’t set up the physics bodies on the nodes in TLCMyScene.m. It’s time to add physics bodies to the prize and the rope holders!
Open TLCMyScene.m and locate addRopeAtPosition:withLength:withName:. Right below the [self.worldNode addChild:ropeHolder]; line, add the following block of code:
CGFloat offsetX = ropeHolder.frame.size.width * ropeHolder.anchorPoint.x;
CGFloat offsetY = ropeHolder.frame.size.height * ropeHolder.anchorPoint.y;

CGMutablePathRef path = CGPathCreateMutable();
CGPathMoveToPoint(path, NULL, 0 - offsetX, 6 - offsetY);
CGPathAddLineToPoint(path, NULL, 6 - offsetX, 6 - offsetY);
CGPathAddLineToPoint(path, NULL, 6 - offsetX, 0 - offsetY);
CGPathAddLineToPoint(path, NULL, 0 - offsetX, 0 - offsetY);
CGPathCloseSubpath(path);

ropeHolder.physicsBody = [SKPhysicsBody bodyWithPolygonFromPath:path];
ropeHolder.physicsBody.affectedByGravity = NO;
ropeHolder.physicsBody.dynamic = NO;
ropeHolder.physicsBody.categoryBitMask = EntityCategoryRopeAttachment;
ropeHolder.physicsBody.collisionBitMask = 0;
ropeHolder.physicsBody.contactTestBitMask = EntityCategoryPrize;

[ropeHolder skt_attachDebugFrameFromPath:path color:[SKColor redColor]];
CGPathRelease(path);
Here you add an SKPhysicsBody for each of the rope holders and set their collision properties. You want the holders to act as solid anchors, which you achieve by disabling their affectedByGravity and dynamic properties.
Next, locate the delegate method, addJoint:, and add this line:
[self.worldNode.scene.physicsWorld addJoint:joint];
The above method adds the joints you just created in TLCRope.m to the scene. This is the line that holds the rope parts together!
The next step is to add a physics body to the prize and set its collision detection properties.
Locate setupPrizeUsingPrimaryRope:. Before the [self.worldNode addChild:self.prize]; line, add the following block of code:
CGFloat offsetX = self.prize.frame.size.width * self.prize.anchorPoint.x;
CGFloat offsetY = self.prize.frame.size.height * self.prize.anchorPoint.y;

CGMutablePathRef path = CGPathCreateMutable();
CGPathMoveToPoint(path, NULL, 18 - offsetX, 75 - offsetY);
CGPathAddLineToPoint(path, NULL, 5 - offsetX, 65 - offsetY);
CGPathAddLineToPoint(path, NULL, 3 - offsetX, 55 - offsetY);
CGPathAddLineToPoint(path, NULL, 4 - offsetX, 34 - offsetY);
CGPathAddLineToPoint(path, NULL, 8 - offsetX, 7 - offsetY);
CGPathAddLineToPoint(path, NULL, 21 - offsetX, 2 - offsetY);
CGPathAddLineToPoint(path, NULL, 33 - offsetX, 4 - offsetY);
CGPathAddLineToPoint(path, NULL, 38 - offsetX, 20 - offsetY);
CGPathAddLineToPoint(path, NULL, 34 - offsetX, 53 - offsetY);
CGPathAddLineToPoint(path, NULL, 36 - offsetX, 62 - offsetY);
CGPathCloseSubpath(path);

self.prize.physicsBody = [SKPhysicsBody bodyWithPolygonFromPath:path];
self.prize.physicsBody.allowsRotation = YES;
self.prize.physicsBody.affectedByGravity = YES;
self.prize.physicsBody.density = 1;
self.prize.physicsBody.dynamic = NO;
self.prize.physicsBody.categoryBitMask = EntityCategoryPrize;
self.prize.physicsBody.collisionBitMask = 0;
self.prize.physicsBody.contactTestBitMask = EntityCategoryRope;

[self.prize skt_attachDebugFrameFromPath:path color:[SKColor redColor]];
CGPathRelease(path);
Just like before, you add a physics body and set its collision detection properties.
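The offset math at the top of that block deserves a closer look: the polygon points are measured in the prize texture's coordinate space (origin at the bottom-left corner), while a physics body path must be expressed relative to the sprite's anchor point. Subtracting width * anchorPoint.x and height * anchorPoint.y performs that conversion. Here is a minimal plain-C sketch of the same arithmetic; the Point type and function name are purely illustrative:

```c
#include <stdio.h>

typedef struct { double x, y; } Point;

/* Convert a point from texture coordinates (origin at the sprite's
   bottom-left corner) into node-local coordinates, where (0, 0) sits
   at the sprite's anchor point. */
Point texture_to_local(Point p, double width, double height,
                       double anchorX, double anchorY) {
    Point out = { p.x - width * anchorX, p.y - height * anchorY };
    return out;
}
```

With a 40x80-point sprite and the default (0.5, 0.5) anchor, the texture point (18, 75) maps to (-2, 35) in node-local space.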
To connect the prize to the end of the ropes, you need to do two things.

First, locate setupRopes. At the end of the for loop, add the following so that it's the last line in the loop:

// connect the other end of the rope to the prize
[self attachNode:self.prize toRope:rope];
Then, locate attachNode:toRope: and add the following block of code:

SKNode *previous = [[rope getRopeNodes] lastObject];
node.position = CGPointMake(previous.position.x,
                            previous.position.y + node.size.height * .40);

SKSpriteNode *nodeAA = [[rope getRopeNodes] lastObject];
SKPhysicsJointPin *jointB =
    [SKPhysicsJointPin jointWithBodyA:previous.physicsBody
                                bodyB:node.physicsBody
                               anchor:CGPointMake(CGRectGetMidX(nodeAA.frame),
                                                  CGRectGetMinY(nodeAA.frame))];
[self.worldNode.scene.physicsWorld addJoint:jointB];

The code above gets the last node from the TLCRope object and creates a new SKPhysicsJointPin to attach the prize to it.
Build and run the project. If all your joints and nodes are set up properly, you should see a screen similar to the one below.
It looks good, right? Hmm… Maybe it’s a little stiff? Then again, maybe that’s the effect you want in your game. If not, you can give it a more fluid appearance.
Go to the top of TLCMyScene.m and add the following line below your other #define statements:

#define prizeIsDynamicsOnStart YES

Then, locate setupRopes and change the last two lines to this:

// reset prize position and set if dynamic; depends on your game play
self.prize.position = CGPointMake(self.size.width * .50, self.size.height * .80);
self.prize.physicsBody.dynamic = prizeIsDynamicsOnStart;
Build and run the project again.
Notice how much more fluid the ropes feel? Of course, if you prefer it the other way, change the value of prizeIsDynamicsOnStart to NO. It's your game, after all! :]
Since you’ve already got physics bodies on your mind, it makes sense to set them up for the player and water nodes. Once you have those configured, you’ll be primed to start work on collision detection.
In TLCMyScene.m, locate setupCrocodile and add the following block of code just before the [self.worldNode addChild:self.crocodile]; line:

CGFloat offsetX = self.crocodile.frame.size.width * self.crocodile.anchorPoint.x;
CGFloat offsetY = self.crocodile.frame.size.height * self.crocodile.anchorPoint.y;

CGMutablePathRef path = CGPathCreateMutable();
CGPathMoveToPoint(path, NULL, 47 - offsetX, 77 - offsetY);
CGPathAddLineToPoint(path, NULL, 5 - offsetX, 51 - offsetY);
CGPathAddLineToPoint(path, NULL, 7 - offsetX, 2 - offsetY);
CGPathAddLineToPoint(path, NULL, 78 - offsetX, 2 - offsetY);
CGPathAddLineToPoint(path, NULL, 102 - offsetX, 21 - offsetY);
CGPathCloseSubpath(path);

self.crocodile.physicsBody = [SKPhysicsBody bodyWithPolygonFromPath:path];
self.crocodile.physicsBody.categoryBitMask = EntityCategoryCrocodile;
self.crocodile.physicsBody.collisionBitMask = 0;
self.crocodile.physicsBody.contactTestBitMask = EntityCategoryPrize;
self.crocodile.physicsBody.dynamic = NO;

[self.crocodile skt_attachDebugFrameFromPath:path color:[SKColor redColor]];
CGPathRelease(path);
Just as with the rope nodes, you establish a path for your player node’s physics body and set its collision detection properties, each of which I’ll explain momentarily.
Last but not least, the water also needs a physics body so you can detect when the prize lands there rather than in the mouth of the hungry crocodile.
Locate setupBackground. Before the [self.worldNode addChild:water]; line, add the following block of code:

// make the size a little shorter so the prize will look like it's landed in the water
CGSize bodySize = CGSizeMake(water.frame.size.width, water.frame.size.height - 100);
water.physicsBody = [SKPhysicsBody bodyWithRectangleOfSize:bodySize];
water.physicsBody.dynamic = NO;
water.physicsBody.categoryBitMask = EntityCategoryGround;
water.physicsBody.collisionBitMask = EntityCategoryPrize;
water.physicsBody.contactTestBitMask = EntityCategoryPrize;
Once again, you add a physics body, but this time you use bodyWithRectangleOfSize:. You also set the body's collision detection properties.

Notice that you assign EntityCategoryGround as the categoryBitMask for the water object. In reality, EntityCategoryGround represents the point of failure for your fruit rather than the physical ground. If you wanted to include additional traps, such as spinning buzz saws, you would assign them the EntityCategoryGround bit mask as well.
You may have noticed that you've been calling skt_attachDebugFrameFromPath: for most of the physics bodies. This is a method from SKNode+SKTDebugDraw, which is part of a group of Sprite Kit utilities developed by Razeware. This particular method helps with debugging physics bodies. To turn it on, open SKNode+SKTDebugDraw.m and change the line BOOL SKTDebugDrawEnabled = NO; to BOOL SKTDebugDrawEnabled = YES;. This will draw a shape that represents each physics body. Don't forget to turn it off when you're done!

It can't be named Cut the Verlet if your verlets have no fear of being cut, right?
In this section, you’re going to learn how to work with the touch methods that will allow your players to cut those ropes. The first step is to define some basic variables.
Still working in TLCMyScene.m, add the following properties to the @interface section:

@property (nonatomic, assign) CGPoint touchStartPoint;
@property (nonatomic, assign) CGPoint touchEndPoint;
@property (nonatomic, assign) BOOL touchMoving;
You’ll need these for tracking the user’s touches.
Then, add your final definition at the top of TLCMyScene.m:

#define canCutMultipleRopesAtOnce NO
This will be useful if you want to make changes to the way the game functions.
iOS includes a few methods that deal with handling touch events. You'll be working with three: touchesBegan:withEvent:, touchesEnded:withEvent: and touchesMoved:withEvent:.
Locate touchesBegan:withEvent: and add the following code:

self.touchMoving = NO;
for (UITouch *touch in touches) {
    self.touchStartPoint = [touch locationInNode:self];
}
The code above resets touchMoving and records the location of the user's touch in touchStartPoint.
Next, locate touchesEnded:withEvent: and add this:

for (UITouch *touch in touches) {
    if (touches.count == 1 && self.touchMoving) {
        self.touchEndPoint = [touch locationInNode:self];

        if (canCutMultipleRopesAtOnce) {
            /* allow multiple ropes to be cut */
            [self.worldNode.scene.physicsWorld
                enumerateBodiesAlongRayStart:self.touchStartPoint
                                         end:self.touchEndPoint
                                  usingBlock:^(SKPhysicsBody *body, CGPoint point,
                                               CGVector normal, BOOL *stop) {
                [self checkRopeCutWithBody:body];
            }];
        } else {
            /* allow only one rope to be cut */
            SKPhysicsBody *body =
                [self.worldNode.scene.physicsWorld bodyAlongRayStart:self.touchStartPoint
                                                                 end:self.touchEndPoint];
            [self checkRopeCutWithBody:body];
        }
    }
}
self.touchMoving = NO;
This code does a few things. First, it makes sure the user is touching the screen with only one finger, and then it determines if the user is moving that finger. Finally, it retrieves and sets the touchEndPoint property. With that information, you can take the appropriate action based on whether you're allowing one rope or multiple ropes to be cut with a single swipe.

To cut multiple ropes, you use SKPhysicsWorld's enumerateBodiesAlongRayStart:end:usingBlock: method to find every physics body along the swipe. To cut a single rope, you use bodyAlongRayStart:end: to get only the first body hit. Either way, you pass the result to the custom method checkRopeCutWithBody:.
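Conceptually, these ray queries test the straight line of the swipe against each physics body in its path. If you ever need a similar check without the physics engine, say against rope segments you model yourself, the standard cross-product orientation test for two line segments does the job. This is a self-contained C sketch of that general technique, not Sprite Kit's actual implementation:

```c
#include <stdbool.h>

typedef struct { double x, y; } Vec2;

/* Signed area test: > 0 when b lies counterclockwise of segment o->a. */
static double cross(Vec2 o, Vec2 a, Vec2 b) {
    return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
}

/* Returns true when segment p1-p2 properly crosses segment p3-p4,
   i.e. each segment's endpoints lie on opposite sides of the other. */
bool segments_intersect(Vec2 p1, Vec2 p2, Vec2 p3, Vec2 p4) {
    double d1 = cross(p3, p4, p1);
    double d2 = cross(p3, p4, p2);
    double d3 = cross(p1, p2, p3);
    double d4 = cross(p1, p2, p4);
    return ((d1 > 0) != (d2 > 0)) && ((d3 > 0) != (d4 > 0));
}
```

A swipe from (0, 0) to (10, 10) crosses a rope segment from (0, 10) to (10, 0), so the test reports an intersection; collinear touches are deliberately ignored in this simplified version.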
Finally, locate touchesMoved:withEvent: and add this code:

if (touches.count == 1) {
    for (UITouch *touch in touches) {
        NSString *particlePath =
            [[NSBundle mainBundle] pathForResource:@"TLCParticle" ofType:@"sks"];
        SKEmitterNode *emitter = [NSKeyedUnarchiver unarchiveObjectWithFile:particlePath];
        emitter.position = [touch locationInNode:self];
        emitter.zPosition = LayerRope;
        emitter.name = @"emitter";
        [self.worldNode addChild:emitter];

        self.touchMoving = YES;
    }
}
Technically, you don't need most of the code above, but it does provide a really cool effect when your users swipe the screen. You do, however, need to set the touchMoving property to YES, as the code above does. As you saw earlier, you evaluate this variable to determine if the user is moving her finger.

So, what does the rest of the code do? It loads a particle file, uses it to create an SKEmitterNode that generates awesome green particles whenever the user swipes, and adds the emitter to the worldNode. Particle emitters are beyond the scope of this tutorial, but now that you know they exist… you've got something else to do later. =]
With the touch events complete, it's time to finish the method that you call in touchesEnded:withEvent:.

Locate checkRopeCutWithBody: and add the following block of code:

SKNode *node = body.node;

if (body) {
    self.prize.physicsBody.affectedByGravity = YES;
    self.prize.physicsBody.dynamic = YES;

    [self.worldNode enumerateChildNodesWithName:node.name
                                     usingBlock:^(SKNode *node, BOOL *stop) {
        for (SKPhysicsJoint *joint in body.joints) {
            [self.worldNode.scene.physicsWorld removeJoint:joint];
        }
        SKSpriteNode *ropePart = (SKSpriteNode *)node;
        SKAction *fadeAway = [SKAction fadeOutWithDuration:0.25];
        SKAction *removeNode = [SKAction removeFromParent];
        SKAction *sequence = [SKAction sequence:@[fadeAway, removeNode]];
        [ropePart runAction:sequence];
    }];
}
The code above enumerates through all the child nodes in worldNode that share the cut body's name, removing any joints attached to that body along the way. Rather than remove the rope parts abruptly, it uses an SKAction sequence to fade each node out before removing it.

Build and run the project. You should be able to swipe and cut all three ropes, as well as the prize (for now). Toggle the canCutMultipleRopesAtOnce setting to see how the behavior differs. By the way, aren't those particles awesome?
You’re almost done! You’ve got swiping in place, physics out of the way, and you’ve specified all of the collision properties—but what exactly do they mean? How do they work? And, more importantly, how do they work with one another?
Here are a few key things to note. For contact detection, you need four things:

1. TLCMyScene must act as a contact delegate by conforming to the SKPhysicsContactDelegate protocol.
2. The scene must register itself as the delegate: self.worldNode.scene.physicsWorld.contactDelegate = self.
3. Each physics body needs a categoryBitMask, a collisionBitMask and a contactTestBitMask.
4. The delegate must implement the contact methods.

You did the first two when you set up the physics for worldNode. You took care of the third when you set up the rest of the node's physics bodies. That was excellent foresight on your part! =]
That leaves number four on the list: implement the methods. Before doing so, however, you need to create a few properties.
In the @interface section of TLCMyScene.m, add the following:

@property (nonatomic, assign) BOOL scoredPoint;
@property (nonatomic, assign) BOOL hitGround;
Now you’re ready to modify the delegate method.
Locate didBeginContact: and add the following block of code:

SKPhysicsBody *other = (contact.bodyA.categoryBitMask == EntityCategoryPrize
                            ? contact.bodyB : contact.bodyA);

if (other.categoryBitMask == EntityCategoryCrocodile) {
    if (!self.hitGround) {
        NSLog(@"scoredPoint");
        self.scoredPoint = YES;
    }
    return;
} else if (other.categoryBitMask == EntityCategoryGround) {
    if (!self.scoredPoint) {
        NSLog(@"hitGround");
        self.hitGround = YES;
        return;
    }
}
The code above executes anytime the scene's physicsWorld detects a contact. It checks the other body's categoryBitMask and, based on its value, either scores a point or registers a ground hit.
The three settings, categoryBitMask, collisionBitMask and contactTestBitMask, all work in tandem with one another:

- categoryBitMask sets the category to which the sprite belongs.
- collisionBitMask sets the categories with which the sprite may collide.
- contactTestBitMask defines which categories trigger notifications to the delegate.

Check the SKPhysicsBody Class Reference for more detailed information.
This game uses five categories, as defined within the TLCSharedConstants class. Open TLCSharedConstants.m and take another look. You will see the collision categories you set up earlier in the tutorial:

EntityCategoryCrocodile      = 1 << 0,
EntityCategoryRopeAttachment = 1 << 1,
EntityCategoryRope           = 1 << 2,
EntityCategoryPrize          = 1 << 3,
EntityCategoryGround         = 1 << 4
You want to detect when the prize collides with the crocodile node and when the prize collides with the water. You’re not going to award points or end the game based on any contact with the rope nodes, but setting categories for the rope and rope attachment points will help make the rope look more realistic at its attachment point.
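To make the mask interplay concrete: Sprite Kit reports a contact to the delegate when one body's categoryBitMask overlaps the other body's contactTestBitMask. The following plain-C sketch mirrors that rule using the tutorial's category constants; the Body struct and helper function are illustrative, not part of the project:

```c
#include <stdbool.h>
#include <stdint.h>

/* The same category constants the tutorial defines in TLCSharedConstants.m. */
enum {
    EntityCategoryCrocodile      = 1 << 0,
    EntityCategoryRopeAttachment = 1 << 1,
    EntityCategoryRope           = 1 << 2,
    EntityCategoryPrize          = 1 << 3,
    EntityCategoryGround         = 1 << 4
};

typedef struct {
    uint32_t categoryBitMask;
    uint32_t contactTestBitMask;
} Body;

/* A contact is reported when either body's category overlaps the other
   body's contactTestBitMask. */
bool contact_reported(Body a, Body b) {
    return (a.categoryBitMask & b.contactTestBitMask) != 0 ||
           (b.categoryBitMask & a.contactTestBitMask) != 0;
}
```

With the crocodile's contactTestBitMask set to EntityCategoryPrize, a prize/crocodile pairing triggers the delegate, while a crocodile/ground pairing does not.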
Build and run the project. When you cut the ropes, you should see log statements corresponding with where the prize lands.
While you do have the faint outline of a game, users are not going to stare at the console to see the win or fail condition. Also, if you cut the correct rope, the fruit falls through the crocodile instead of being eaten. Users will expect to see the crocodile munch down that pineapple.
It's time to fulfill that expectation with animation. To do this, you'll modify nomnomnomActionWithDelay:.

In TLCMyScene.m, find nomnomnomActionWithDelay: and add the following block of code:

[self.crocodile removeAllActions];

SKAction *openMouth =
    [SKAction setTexture:[SKTexture textureWithImageNamed:kImageNameForCrocodileMouthOpen]];
SKAction *wait = [SKAction waitForDuration:duration];
SKAction *closeMouth =
    [SKAction setTexture:[SKTexture textureWithImageNamed:kImageNameForCrocodileMouthClosed]];
SKAction *nomnomnomAnimation = [SKAction sequence:@[openMouth, wait, closeMouth]];
[self.crocodile runAction:[SKAction repeatAction:nomnomnomAnimation count:1]];

if (!self.scoredPoint) {
    [self animateCrocodile];
}
The code above removes any animation currently running on the crocodile node using removeAllActions. It then creates a new animation sequence that opens and closes the crocodile's mouth and runs this sequence on the crocodile. At that point, if the player hasn't scored a point, it runs animateCrocodile, which resets the random opening and closing of the crocodile's jaw.
Next, locate checkRopeCutWithBody: and, after the self.prize.physicsBody.dynamic = YES; line, add the following line of code:

[self nomnomnomActionWithDelay:1];

This code executes every time the user cuts a rope. It runs the method you just created, and the animation gives the illusion that the crocodile is opening its mouth in the hope that something yummy will fall into it.
You also need to run this method in didBeginContact: so that when the prize touches the crocodile, he opens his mouth to eat it.

In didBeginContact:, after the self.scoredPoint = YES; line, add the following line:

[self nomnomnomActionWithDelay:.15];
Just as before, you run nomnomnomActionWithDelay:, except this time you run it when the prize collides with the crocodile. This makes the crocodile appear to eat the prize.
Build and run.
The food falls right through the crocodile. You can fix this by making a few simple changes.
Locate checkForScore and add the following block of code:

if (self.scoredPoint) {
    self.scoredPoint = NO;

    SKAction *shrink = [SKAction scaleTo:0 duration:0.08];
    SKAction *removeNode = [SKAction removeFromParent];
    SKAction *sequence = [SKAction sequence:@[shrink, removeNode]];
    [self.prize runAction:sequence];
}
The code above checks the value of the scoredPoint property. If it is set to YES, the code sets it back to NO, shrinks the prize down to nothing and then removes it from the scene using an SKAction sequence.
You want this code to execute continually to keep track of your variable. To make that happen, you need to modify update:.

Locate update: and add the following line:

[self checkForScore];

update: runs before each frame of animation renders, so here is where you call the method that checks if the player scored a point.
The next thing you need to do is check for a ground hit. Locate checkForGroundHit and add the following block of code:

if (self.hitGround) {
    self.hitGround = NO;

    SKAction *shrink = [SKAction scaleTo:0 duration:0.08];
    SKAction *removeNode = [SKAction removeFromParent];
    SKAction *sequence = [SKAction sequence:@[shrink, removeNode]];
    [self.prize runAction:sequence];
}
Almost identical to checkForScore, this code checks the value of hitGround. If the value is YES, the code resets it to NO, shrinks the prize and then removes it from the scene using an SKAction sequence.
Once again, you need to call this method from update:. Locate update: and add the following line:

[self checkForGroundHit];
With everything in place, build and run the project.
You should see all of the fabulous things you added. But you'll also notice that once you score a point or miss the crocodile's mouth, the game just hangs there. You can fix that!
In TLCMyScene.m, find switchToNewGameWithTransition: and add the following block of code:

SKView *skView = (SKView *)self.view;
TLCMyScene *scene = [[TLCMyScene alloc] initWithSize:self.size];
[skView presentScene:scene transition:transition];
The code above uses SKView's presentScene:transition: method to present the next scene. In this case, you present a new TLCMyScene, passing in a transition built with the SKTransition class.
You need to call this method in two places: checkForScore and checkForGroundHit.

In checkForGroundHit, add the following code at the end of the if statement (within the braces):

SKTransition *sceneTransition = [SKTransition fadeWithDuration:1.0];
[self performSelector:@selector(switchToNewGameWithTransition:)
           withObject:sceneTransition
           afterDelay:1.0];
Next, in checkForScore, add the following code, also at the end of the if statement (within the braces):

/* Various kinds of scene transitions */
NSArray *transitions = @[[SKTransition doorsOpenHorizontalWithDuration:1.0],
                         [SKTransition doorsOpenVerticalWithDuration:1.0],
                         [SKTransition doorsCloseHorizontalWithDuration:1.0],
                         [SKTransition doorsCloseVerticalWithDuration:1.0],
                         [SKTransition flipHorizontalWithDuration:1.0],
                         [SKTransition flipVerticalWithDuration:1.0],
                         [SKTransition moveInWithDirection:SKTransitionDirectionLeft duration:1.0],
                         [SKTransition pushWithDirection:SKTransitionDirectionRight duration:1.0],
                         [SKTransition revealWithDirection:SKTransitionDirectionDown duration:1.0],
                         [SKTransition crossFadeWithDuration:1.0],
                         [SKTransition doorwayWithDuration:1.0],
                         [SKTransition fadeWithColor:[UIColor darkGrayColor] duration:1.0],
                         [SKTransition fadeWithDuration:1.0]];

int randomIndex = arc4random_uniform((int)transitions.count);
[self performSelector:@selector(switchToNewGameWithTransition:)
           withObject:transitions[randomIndex]
           afterDelay:1.0];
The code above stores all of the available transitions in an NSArray and then selects one at random using the arc4random_uniform function. The random transition is passed to switchToNewGameWithTransition:, so you should see a different transition after each game.
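In case you're wondering why the code uses arc4random_uniform rather than the common arc4random() % count idiom: a plain modulo biases results toward smaller values whenever the generator's range isn't an exact multiple of count. arc4random_uniform avoids that bias via rejection sampling. Here is a portable C sketch of the same idea built on rand(); the function name is made up for illustration:

```c
#include <stdlib.h>
#include <stdint.h>

/* Return a uniformly distributed value in [0, n) without modulo bias,
   by rejecting the partial "tail" of rand()'s range. n must be > 0. */
uint32_t uniform_below(uint32_t n) {
    uint32_t limit = RAND_MAX - ((uint32_t)RAND_MAX % n); /* multiple of n */
    uint32_t r;
    do {
        r = (uint32_t)rand();
    } while (r >= limit);
    return r % n;
}
```

For the 13 transitions above, each index then comes up with equal probability.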
Now build and run the project.
You should see the scene transition to a new one whenever the player scores a point or loses the prize.
You need to make one final modification to handle the case where the prize leaves the screen. This can happen if the user cuts the ropes in such a way as to "throw" the prize off the screen.

To handle this case, add the following code to checkForPrize:

[self.worldNode enumerateChildNodesWithName:kNodeNameForPrize
                                 usingBlock:^(SKNode *node, BOOL *stop) {
    if (node.position.y <= 0) {
        [node removeFromParent];
        self.hitGround = YES;
    }
}];
The code above enumerates through the child nodes in the worldNode to find one whose name matches the specified constant, which in this case is the name of the prize node. If that node has fallen below the bottom of the screen, the code removes it from the scene and sets hitGround to YES, since the prize clearly never made contact with the crocodile.
Again, you need to add a call to checkForPrize in update:. Locate update: and add the following line:

[self checkForPrize];
Finally, remember that your user can still swipe the prize itself to score an easy victory. You may have noticed this in your testing. I call this the Cheater Bug. To fix it, locate checkRopeCutWithBody: and add the following just above the for loop (for (SKPhysicsJoint *joint in body.joints) {):

if ([node.name isEqualToString:kNodeNameForPrize]) {
    return;
}
The code above checks if the user has swiped the prize node by looking for a name match. If there is a match, the method returns and does nothing.
While the game is technically complete, it lacks a certain pop. A silent game may quickly bore your users. It’s time to add a little “juice” to make things pop.
I’ve selected a nice jungle song from incompetech.com and some sound effects from freesound.org.
Because this game will play music in the background, it makes sense to use a single AVAudioPlayer managed by the app delegate. You don't need to add the player because the starter project already contains a property for an AVAudioPlayer in TLCAppDelegate.h (backgroundMusicPlayer). You simply need to add the playBackgroundMusic: method and then call it.

Open TLCMyScene.m and locate playBackgroundMusic:. Add the following code:

NSError *error;
NSURL *backgroundMusicURL = [[NSBundle mainBundle] URLForResource:filename withExtension:nil];
TLCAppDelegate *appDelegate = (TLCAppDelegate *)[[UIApplication sharedApplication] delegate];

if (!appDelegate.backgroundMusicPlayer) { // not yet initialized, go ahead and set it up
    appDelegate.backgroundMusicPlayer =
        [[AVAudioPlayer alloc] initWithContentsOfURL:backgroundMusicURL error:&error];
    appDelegate.backgroundMusicPlayer.numberOfLoops = -1; // loop forever
    appDelegate.backgroundMusicPlayer.volume = 1.0;
    [appDelegate.backgroundMusicPlayer prepareToPlay];
}

if (!appDelegate.backgroundMusicPlayer.isPlaying) { // is it currently playing? if not, play music
    [appDelegate.backgroundMusicPlayer play];
}
The code above checks whether backgroundMusicPlayer has been initialized. If not, it initializes the player with some basic settings: the number of loops, the volume and the URL to play, which is passed into the method as a parameter.

AVAudioPlayer isn't specific to Sprite Kit, so this tutorial won't cover it in detail. To learn more about AVAudioPlayer, check out our Audio Tutorial for iOS.

Once the method has initialized the music player, it checks whether the music is already playing and starts it if it's not. You need this check so that when the scene reloads after the player scores a point or the prize hits the ground, the music won't "skip" or "restart." Is this necessary? No. Does it sound better? Absolutely.
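The structure of playBackgroundMusic: (create the player lazily once, then start playback only if it isn't already playing) is a pattern worth remembering for anything that must survive a scene reload. Here is a tiny C sketch of the same two guards; the Player struct and function are purely illustrative:

```c
#include <stdbool.h>

/* Illustrative stand-in for the shared music player. */
typedef struct {
    bool initialized;
    bool playing;
    int  initCount;   /* counts how many times setup actually ran */
} Player;

/* Initialize at most once, and never restart playback that is
   already in progress. */
void play_background_music(Player *p) {
    if (!p->initialized) {
        p->initialized = true;
        p->initCount++;
    }
    if (!p->playing) {
        p->playing = true;
    }
}
```

Calling this on every scene load is safe: setup runs once, and an already-playing track is left alone.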
Locate setupSounds and add the following line:

[self playBackgroundMusic:kSoundFileNameForBackgroundMusic];
That line calls the method you just wrote. By the way, did you catch the constant you're using? If you did, score two extra points. You defined kSoundFileNameForBackgroundMusic in TLCSharedConstants.m earlier.
You may as well add sound effects while you’re at it!
For the last time, locate the @interface section of TLCMyScene.m and add the following properties:

@property (nonatomic, strong) SKAction *soundCutAction;
@property (nonatomic, strong) SKAction *soundSplashAction;
@property (nonatomic, strong) SKAction *soundNomNomNomAction;
Next, locate setupSounds. Just above the last line, add the code below:

self.soundCutAction = [SKAction playSoundFileNamed:kSoundFileNameForCutAction waitForCompletion:NO];
self.soundSplashAction = [SKAction playSoundFileNamed:kSoundFileNameForSplashAction waitForCompletion:NO];
self.soundNomNomNomAction = [SKAction playSoundFileNamed:kSoundFileNameForBiteAction waitForCompletion:NO];
This code initializes the three sound actions using SKAction's playSoundFileNamed:waitForCompletion: method.
In TLCMyScene.m, find checkForGroundHit and add the following line just above the SKAction *shrink = [SKAction scaleTo:0 duration:0.08]; line:

[self runAction:self.soundSplashAction];
Find checkForScore and add the following line just above SKAction *shrink = [SKAction scaleTo:0 duration:0.08];:

[self runAction:self.soundNomNomNomAction];
Find checkRopeCutWithBody: and add the following line just above the [self nomnomnomActionWithDelay:1]; line:

[self runAction:self.soundCutAction];
Finally, locate initWithSize: and add the following line before the [self setupBackground]; line:

[self setupSounds];
Build and run the project.
The app should be popping now, yet the discerning player may notice a slight sound bug: in some instances, you may hear both the nom-nom sound and the splashing sound. This happens because the prize can trigger multiple contacts before it is removed from the scene. To fix this, add a new property to the @interface section:

@property (nonatomic, assign) BOOL roundOutcome;
Next, add the following line to both checkForScore and checkForGroundHit, at the top of each if block:

self.roundOutcome = YES;
Finally, replace the contents of update: with the following:

if (!self.roundOutcome) {
    [self checkForScore];
    [self checkForGroundHit];
    [self checkForPrize];
}
By containing all of the checks in a single block, you ensure that the methods won't be called once an outcome has occurred. Build and run, and swipe away. There are no more sound collisions, and you'll have one very stuffed crocodile :]
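The roundOutcome flag is effectively a one-shot latch: the first decisive event wins the round, and every later event is ignored until a new scene resets the flag. A minimal C sketch of the idea; the types and names are illustrative:

```c
#include <stdbool.h>

/* One-shot latch: only the first outcome of a round is recorded. */
typedef struct {
    bool roundOutcome;
    int  scoreEvents;
    int  groundEvents;
} Round;

void handle_score(Round *r) {
    if (r->roundOutcome) return;   /* round already decided */
    r->roundOutcome = true;
    r->scoreEvents++;
}

void handle_ground(Round *r) {
    if (r->roundOutcome) return;
    r->roundOutcome = true;
    r->groundEvents++;
}
```

Feed it a score event followed by a stray ground event and only the score registers, which is exactly why the splash sound no longer plays after the crocodile eats the prize.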
I hope you enjoyed working through this tutorial as much as I’ve enjoyed writing it. To compare notes, download the CutTheVerlet-Finished completed sample project here.
But, don’t let the fun stop here! Try adding new levels, different ropes, and maybe even a HUD with a score display and timer. Why not!? It’s only code!
If you’d like to learn more about Sprite Kit, be sure to check out our book, iOS Games by Tutorials.
If you have any questions or comments, feel free to join in the discussion below!
How to Create a Game Like Cut the Rope Using Sprite Kit is a post from: Ray Wenderlich
The post How to Create a Game Like Cut the Rope Using Sprite Kit appeared first on Ray Wenderlich.