

NSURLSession Tech Talk Video


In our latest monthly Tech Talk, Charlie Fulton gave a great talk on NSURLSession.

We had a lot of fun, so we wanted to post it here so you all could enjoy it!

Note that we had some technical difficulties toward the beginning of the hangout, so I have modified the video to start at the point at which the screenshare was finally working OK :]

Source Code

Here is the source code of all of the demos from the tech talk.

Q&A Clarifications

This is the first time we ran the hangout live, and we had a lot of great live Q&A from the audience. Charlie has sent me a few clarifications and additional notes to post regarding some of the questions that came up:

Regarding cache management and file management

“Reading the documentation on Apple’s URL Loading System Guide provides some great detail on how this works.

Also, this diagram gives a great overview. Important: The NSURLSession API involves many different classes working together in a fairly complex way that may not be obvious if you read the reference documentation by itself. Before using this API, you should read the URL Loading System Programming Guide to gain a conceptual understanding of how these classes interact with one another.”

So is it an async request or do I need to use GCD?

“The requests are asynchronous, and by default the completion handlers are running in a background thread. I like to use GCD to update UIKit on the main thread, but you could also use the performSelector… way.”
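Here’s a minimal sketch of the pattern Charlie describes, with a placeholder URL: the completion handler runs on a background queue, so you dispatch back to the main queue before touching UIKit.

NSURLSession *session = [NSURLSession sharedSession];
NSURL *url = [NSURL URLWithString:@"http://example.com/data"]; // placeholder URL
[[session dataTaskWithURL:url completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
    // By default this block runs on a background queue...
    dispatch_async(dispatch_get_main_queue(), ^{
        // ...so hop to the main queue before updating UIKit
    });
}] resume];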

Are run loops for NSURLSessions handled the same way as for NSURLConnection? For example, needing to run an NSURLConnection with a different run loop within NSOperation code blocks.

“They are handled the same way; i.e., you would need to roll your own solution for this. One tip suggested by Alexis would be to create all of your tasks (they start in a suspended state) and then add them to your own queue, calling resume on them, etc. You can also use the description property on a task to inspect them.”
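As a rough sketch of Alexis’s tip (the session and the urls array are assumed to exist):

NSMutableArray *tasks = [NSMutableArray array];
for (NSURL *url in urls) {
    // Tasks are created in a suspended state, so nothing runs yet
    NSURLSessionDataTask *task = [session dataTaskWithURL:url];
    [tasks addObject:task];
}
for (NSURLSessionDataTask *task in tasks) {
    NSLog(@"Starting task: %@", task.description); // inspect via the description property
    [task resume];
}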

So is the completion block running on the main thread, or on a background queue by default for the shared session? I still didn’t get it.

“By default it’s running on a background thread.”

Does NSURLSession bring anything to the table as far as OAuth is concerned?

“Unfortunately, no. But I have a solution for OAuth that talks to Dropbox’s API that you are welcome to use. It’s included in the sample app in the NSURLSession chapter of iOS 7 by Tutorials.”

Want to Join Us Next Month?

Thanks again Charlie for giving a great talk and having the guts to do our first live hangout :] We hope you enjoyed it!

Next month, Pietro Rea will give a talk on the new multitasking APIs in iOS 7, which allow you to download data or files from a remote web service periodically in the background to keep your app up-to-date.

We will be broadcasting this talk live on Jan 7 at 7:00 PM EST, so if you want to join us, sign up here! As you watch the talk, you can submit any questions you may have live.

Hope to see some of you there! :]

The post NSURLSession Tech Talk Video appeared first on Ray Wenderlich.


OpenGL ES Transformations with Gestures


Gestures: Intuitive, sophisticated and easy to implement!

In this tutorial, you’ll learn how to use gestures to control OpenGL ES transformations by building a sophisticated model viewer app for 3D objects.

For this app, you’ll take maximum advantage of the iPhone’s touchscreen to implement an incredibly intuitive interface. You’ll also learn a bit of 3D math and use this knowledge to master basic model manipulation.

The hard work has already been done for you, specifically in our tutorial series How To Export Blender Models to OpenGL ES, allowing you to concentrate on nothing but transformations and gestures. The aforementioned series is an excellent build-up to this tutorial, since you’ll be using virtually the same code and resources. If you missed it, you’ll also be fine if you’ve read our OpenGL ES 2.0 for iPhone or Beginning OpenGL ES 2.0 with GLKit tutorials.

Note: Since this is literally a “hands-on” tutorial that depends on gestures, you’ll definitely need an iOS device to fully appreciate the implementation. The iPhone/iPad Simulator can’t simulate all the gestures covered here.

Getting Started

First, download the starter pack for this tutorial.

As mentioned before, this is essentially the same project featured in our Blender to OpenGL ES tutorial series. However, the project has been refactored to present a neat and tidy GLKit View Controller class—MainViewController—that hides most of the OpenGL ES shader implementation and 3D model rendering.

Have a look at MainViewController.m to see how everything works, and then build and run. You should see the screen below:

[Image: s_Run1]

The current model viewer is very simple, allowing you to view two different models in a fixed position. So far it’s not terribly interesting, which is why you’ll be adding the wow factor by implementing gesture recognizers!

Gesture Recognizers

Any new iPhone/iPad user will have marveled at the smooth gestures that allow you to navigate the OS and its apps, such as pinching to zoom or swiping to scroll. The 3D graphics world is definitely taking notice, since a lot of high-end software, including games, requires a three-button mouse or double thumbsticks to navigate their worlds. Touchscreen devices have changed all this and allow for new forms of input and expression. If you’re really forward-thinking, you may have already implemented gestures in your apps.

An Overview

Although we’re sure you’re familiar with them, here’s a quick overview of the four gesture recognizers you’ll implement in this tutorial:

Pan (One Finger)
[Image: g_Pan1Finger]

Pan (Two Fingers)
[Image: g_Pan2Fingers]

Pinch
[Image: g_Pinch]

Rotation
[Image: g_Rotation]

The first thing you need to do is add them to your interface.

Adding Gesture Recognizers

Open MainStoryboard.storyboard and drag a Pan Gesture Recognizer from your Object library and drop it onto your GLKit View, as shown below:

[Image: g_PanStoryboard]

Next, show the Assistant editor in Xcode with MainStoryboard.storyboard in the left window and MainViewController.m in the right window. Click on your Pan Gesture Recognizer and control+drag a connection from it to MainViewController.m to create an Action in the file. Enter pan for the Name of your new action and UIPanGestureRecognizer for the Type. Use the image below as a guide:

[Image: g_PanAction]

Repeat the process above for a Pinch Gesture Recognizer and a Rotation Gesture Recognizer. The Action for the former should have the Name pinch with Type UIPinchGestureRecognizer, while the latter should have the Name rotation with Type UIRotationGestureRecognizer. If you need help, use the image below:

Solution Inside: Adding Pinch and Rotation Gesture Recognizers

Revert Xcode back to your Standard editor view and open MainStoryboard.storyboard. Select your Pan Gesture Recognizer and turn your attention to the right sidebar. Click on the Attributes inspector tab and set the Maximum number of Touches to 2, since you’ll only be handling one-finger and two-finger pans.

Next, open MainViewController.m and add the following lines to pan::

// Pan (1 Finger)
if(sender.numberOfTouches == 1)
{
    NSLog(@"Pan (1 Finger)");
}
 
// Pan (2 Fingers)
else if(sender.numberOfTouches == 2)
{
    NSLog(@"Pan (2 Fingers)");
}

Similarly, add the following line to pinch::

NSLog(@"Pinch");

And add the following to rotation::

NSLog(@"Rotation");

As you might have guessed, these are simple console output statements to test your four new gestures, so let’s do just that: build and run! Perform all four gestures on your device and check the console to verify your actions.

Gesture Recognizer Data

Now let’s see some actual gesture data. Replace both NSLog() statements in pan: with:

CGPoint translation = [sender translationInView:sender.view];
float x = translation.x/sender.view.frame.size.width;
float y = translation.y/sender.view.frame.size.height;
NSLog(@"Translation %.1f %.1f", x, y);

At the beginning of every new pan, you set the touch point of the gesture (translation) as the origin (0.0, 0.0) for the event. While the event is active, you divide its reported coordinates over its total view size (width for x, height for y) to get a total range of 1.0 in each direction. For example, if the gesture event begins in the middle of the view, then its range will be: -0.5 ≤ x ≤ +0.5 from left to right and -0.5 ≤ y ≤ +0.5 from top to bottom.

Pop quiz! If the gesture event begins in the top-left corner of the view, what is its range?

Solution Inside: Pan Gesture Range

The pinch and rotation gestures are much easier to handle. Replace the NSLog() statement in pinch: with this:

float scale = [sender scale];
NSLog(@"Scale %.1f", scale);

And replace the NSLog() statement in rotation: with the following:

float rotation = GLKMathRadiansToDegrees([sender rotation]);
NSLog(@"Rotation %.1f", rotation);

At the beginning of every new pinch, the distance between your two fingers has a scale of 1.0. If you bring your fingers together, the scale of the gesture decreases for a zoom-out effect. If you move your fingers apart, the scale of the gesture increases for a zoom-in effect.

A new rotation gesture always begins at 0.0 radians, which you conveniently convert to degrees for this exercise with the function GLKMathRadiansToDegrees(). A clockwise rotation increases the reported angle, while a counterclockwise rotation decreases the reported angle.

Build and run! Once again, perform all four gestures on your device and check the console to verify your actions. You should see that pinching inward logs a decrease in the scale, rotating clockwise logs a positive angle and panning to the bottom-right logs a positive displacement.

Handling Your Transformations

With your gesture recognizers all set, you’ll now create a new class to handle your transformations. Click File\New\File… and choose the iOS\Cocoa Touch\Objective-C class template. Enter Transformations for the class and NSObject for the subclass. Make sure both checkboxes are unchecked, click Next and then click Create.

Open Transformations.h and replace the existing file contents with the following:

#import <GLKit/GLKit.h>
 
@interface Transformations : NSObject
 
- (id)initWithDepth:(float)z Scale:(float)s Translation:(GLKVector2)t Rotation:(GLKVector3)r;
- (void)start;
- (void)scale:(float)s;
- (void)translate:(GLKVector2)t withMultiplier:(float)m;
- (void)rotate:(GLKVector3)r withMultiplier:(float)m;
- (GLKMatrix4)getModelViewMatrix;
 
@end

These are the main methods you’ll implement to control your model’s transformations. You’ll examine each in detail within their own sections of the tutorial, but for now they will mostly remain dummy implementations.

Open Transformations.m and replace the existing file contents with the following:

#import "Transformations.h"
 
@interface Transformations ()
{
    // 1
    // Depth
    float   _depth;
}
 
@end
 
@implementation Transformations
 
- (id)initWithDepth:(float)z Scale:(float)s Translation:(GLKVector2)t Rotation:(GLKVector3)r
{
    if(self = [super init])
    {
        // 2
        // Depth
        _depth = z;
    }
 
    return self;
}
 
- (void)start
{
}
 
- (void)scale:(float)s
{
}
 
- (void)translate:(GLKVector2)t withMultiplier:(float)m
{
}
 
- (void)rotate:(GLKVector3)r withMultiplier:(float)m
{
}
 
- (GLKMatrix4)getModelViewMatrix
{
    // 3
    GLKMatrix4 modelViewMatrix = GLKMatrix4Identity;
    modelViewMatrix = GLKMatrix4Translate(modelViewMatrix, 0.0f, 0.0f, -_depth);
 
    return modelViewMatrix;
}
 
@end

There are a few interesting things happening with _depth, so let’s take a closer look:

  1. _depth is a variable specific to Transformations which will determine the depth of your object in the scene.
  2. You assign the variable z to _depth in your initializer, and nowhere else.
  3. You position your model-view matrix at the (x,y) center of your view with the values (0.0, 0.0) and with a z-value of -_depth. You do this because, in OpenGL ES, the negative z-axis runs into the screen.

That’s all you need to render your model with an appropriate model-view matrix. :]

Open MainViewController.m and import your new class by adding the following statement to the top of your file:

#import "Transformations.h"

Now add a property to access your new class, right below the @interface line:

@property (strong, nonatomic) Transformations* transformations;

Next, initialize transformations by adding the following lines to viewDidLoad:

// Initialize transformations
self.transformations = [[Transformations alloc] initWithDepth:5.0f Scale:1.0f Translation:GLKVector2Make(0.0f, 0.0f) Rotation:GLKVector3Make(0.0f, 0.0f, 0.0f)];

The only value doing anything here is the depth of 5.0f. You’re using this value because the projection matrix of your scene has near and far clipping planes of 0.1f and 10.0f, respectively (see the function calculateMatrices), thus placing your model right in the middle of the scene.
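For reference, a perspective projection with those clipping planes might be built like this inside calculateMatrices (the 45-degree field of view here is illustrative, not necessarily what the starter project uses):

float aspectRatio = fabsf(self.view.bounds.size.width / self.view.bounds.size.height);
GLKMatrix4 projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(45.0f), aspectRatio, 0.1f, 10.0f);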

Locate the function calculateMatrices and replace the following lines:

GLKMatrix4 modelViewMatrix = GLKMatrix4Identity;
modelViewMatrix = GLKMatrix4Translate(modelViewMatrix, 0.0f, 0.0f, -2.5f);

With these:

GLKMatrix4 modelViewMatrix = [self.transformations getModelViewMatrix];

Build and run! Your starship is still there, but it appears to have shrunk!

[Image: s_Run2]

Your new model-view matrix is now handled by transformations, which sets a depth of 5.0 units. Your previous model-view matrix had a depth of 2.5 units, meaning that your starship is now twice as far away. You could easily revert the depth, or you could play around with your starship’s scale…

The Scale Transformation

The first transformation you’ll implement is also the easiest: scale. Open Transformations.m and add the following variables inside the @interface extension at the top of your file:

// Scale
float   _scaleStart;
float   _scaleEnd;

All of your transformations will have start and end values. The end value will be the one actually transforming your model-view matrix, while the start value will track the gesture’s event data.

Next, add the following line to initWithDepth:Scale:Translation:Rotation:, inside the if statement:

// Scale
_scaleEnd = s;

And add the following line to getModelViewMatrix, after you translate the model-view matrix—transformation order does matter, as you’ll learn later on:

modelViewMatrix = GLKMatrix4Scale(modelViewMatrix, _scaleEnd, _scaleEnd, _scaleEnd);

With that line, you scale your model-view matrix uniformly in (x,y,z) space.

To test your new code, open MainViewController.m and locate the function viewDidLoad. Change the Scale: initialization of self.transformations from 1.0f to 2.0f, like so:

self.transformations = [[Transformations alloc] initWithDepth:5.0f Scale:2.0f Translation:GLKVector2Make(0.0f, 0.0f) Rotation:GLKVector3Make(0.0f, 0.0f, 0.0f)];

Build and run! Your starship will be twice as big as your last run and look a lot more proportional to the size of your scene.

Back in Transformations.m, add the following line to scale::

_scaleEnd = s * _scaleStart;

As mentioned before, the starting scale value of a pinch gesture is 1.0, increasing with a zoom-in event and decreasing with a zoom-out event. You haven’t assigned a value to _scaleStart yet, so here’s a quick question: should it be 1.0? Or maybe s?

The answer is neither. If you assign either of those values to _scaleStart, then every time the user performs a new scale gesture, the model-view matrix will scale back to either 1.0 or s before scaling up or down. This will cause the model to suddenly contract or expand, creating a jittery experience. You want your model to conserve its latest scale so that the transformation is continuously smooth.

To make it so, add the following line to start:

_scaleStart = _scaleEnd;

You haven’t called start from anywhere yet, so let’s see where it belongs. Open MainViewController.m and add the following function at the bottom of your file, before the @end statement:

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    // Begin transformations
    [self.transformations start];
}

touchesBegan:withEvent: is the first method to respond whenever your iOS device detects a touch on the screen, before the gesture recognizers kick in. Therefore, it’s the perfect place to call start and conserve your scale values.

Next, locate the function pinch: and replace the NSLog() statement with:

[self.transformations scale:scale];

Build and run! Pinch the touchscreen to scale your model up and down. :D

[Image: s_Run3]

That’s pretty exciting!

The Translation Transformation

Just like a scale transformation, a translation needs two variables to track start and end values. Open Transformations.m and add the following variables inside your @interface extension:

// Translation
GLKVector2  _translationStart;
GLKVector2  _translationEnd;

Similarly, you only need to initialize _translationEnd in initWithDepth:Scale:Translation:Rotation:. Do that now:

// Translation
_translationEnd = t;

Scroll down to the function getModelViewMatrix and change the following line:

modelViewMatrix = GLKMatrix4Translate(modelViewMatrix, 0.0f, 0.0f, -_depth);

To this:

modelViewMatrix = GLKMatrix4Translate(modelViewMatrix, _translationEnd.x, _translationEnd.y, -_depth);

Next, add the following lines to translate:withMultiplier::

// 1
t = GLKVector2MultiplyScalar(t, m);
 
// 2
float dx = _translationEnd.x + (t.x-_translationStart.x);
float dy = _translationEnd.y - (t.y-_translationStart.y);
 
// 3
_translationEnd = GLKVector2Make(dx, dy);
_translationStart = GLKVector2Make(t.x, t.y);

Let’s see what’s happening here:

  1. m is a multiplier that helps convert screen coordinates into OpenGL ES coordinates. It is defined when you call the function from MainViewController.m.
  2. dx and dy represent the rate of change of the current translation in x and y, relative to the latest position of _translationEnd. In screen coordinates, the y-axis is positive in the downwards direction and negative in the upwards direction. In OpenGL ES, the opposite is true. Therefore, you subtract the rate of change in y from _translationEnd.y.
  3. Finally, you update _translationEnd and _translationStart to reflect the new end and start positions, respectively.

As mentioned before, the starting translation value of a new pan gesture is (0.0, 0.0). That means all new translations will be relative to this origin point, regardless of where the model actually is in the scene. It also means the value assigned to _translationStart for every new pan gesture will always be the origin.

Add the following line to start:

_translationStart = GLKVector2Make(0.0f, 0.0f);

Everything is in place, so open MainViewController.m and locate your pan: function. Replace the NSLog() statement inside your first if conditional for a single touch with the following:

[self.transformations translate:GLKVector2Make(x, y) withMultiplier:5.0f];

Build and run! Good job—you can now move your starship around with the touch of a finger! (But not two.)

[Image: s_Run4]

A Quick Math Lesson: Quaternions

Before you move onto the last transformation—rotation—you need to know a bit about quaternions. This lesson will thankfully be pretty quick, though, since GLKit provides an excellent math library to deal with quaternions.

Quaternions are a complex mathematical system with many applications, but for this tutorial you’ll only be concerned with their spatial rotation properties. The main advantage of quaternions in this respect is that they don’t suffer from gimbal lock, unlike Euler angles.

Euler angles are a common representation for rotations, usually in (x,y,z) form. When rotating an object in this space, there are many opportunities for two axes to align with each other. In these cases, one degree of freedom is lost since any change to either of the aligned axes applies the same rotation to the object being transformed—that is, the two axes become one. That is a gimbal lock, and it will cause unexpected results and jittery animations.

[Image: A gimbal lock, from Wikipedia.]

One reason to prefer Euler angles to quaternions is that they are intrinsically easier to represent and to read. However, GLKQuaternion simplifies the complexity of quaternions and reduces a rotation to four simple steps:

  1. Create a quaternion that represents a rotation around an axis.
  2. For each (x,y,z) axis, multiply the resulting quaternion against a master quaternion.
  3. Derive the 4×4 matrix that performs an (x,y,z) rotation based on a quaternion.
  4. Calculate the product of the resulting matrix with the main model-view matrix.

You’ll be implementing these four simple steps shortly. :]
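To make the steps concrete, here’s a preview sketch in GLKit, assuming an angle in radians and an existing modelViewMatrix; you’ll write the real implementation over the next two sections:

// 1. Create a quaternion that represents a rotation around an axis (here, z)
GLKQuaternion q = GLKQuaternionMakeWithAngleAndVector3Axis(angle, GLKVector3Make(0.0f, 0.0f, 1.0f));

// 2. Multiply the resulting quaternion against a master quaternion
GLKQuaternion master = GLKQuaternionMultiply(q, GLKQuaternionIdentity);

// 3. Derive the 4x4 rotation matrix from the quaternion
GLKMatrix4 rotationMatrix = GLKMatrix4MakeWithQuaternion(master);

// 4. Multiply the resulting matrix with the model-view matrix
modelViewMatrix = GLKMatrix4Multiply(modelViewMatrix, rotationMatrix);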

Quaternions and Euler angles are very deep subjects, so check out these summaries from CH Robotics if you wish to learn more: Understanding Euler Angles and Understanding Quaternions.

The Rotation Transformation: Overview

In this tutorial, you’ll use two different types of gesture recognizers to control your rotations: two-finger pan and rotation. The reason for this is that your iOS device doesn’t have a single gesture recognizer that reports three different types of values, one for each (x,y,z) axis. Think about the ones you’ve covered so far:

  • Pinch produces a single float, perfect for a uniform scale across all three (x,y,z) axes.
  • One-finger pan produces two values corresponding to movement along the x-axis and the y-axis, just like your translation implementation.

No gesture can accurately represent rotation in 3D space. Therefore, you must define your own rule for this transformation.

Rotation about the z-axis is very straightforward and intuitive with the rotation gesture, but rotation about the x-axis and/or y-axis is slightly more complicated. Thankfully, the two-finger pan gesture reports movement along both of these axes. With a little more effort, you can use it to represent a rotation.

Let’s start with the easier one first. :]

Z-Axis Rotation With the Rotation Gesture

Open Transformations.m and add the following variables inside your @interface extension:

// Rotation
GLKVector3      _rotationStart;
GLKQuaternion   _rotationEnd;

This is slightly different than your previous implementations for scale and translation, but it makes sense given your new knowledge of quaternions. Before moving on, add the following variable just below:

// Vectors
GLKVector3      _front;

As mentioned before, your quaternions will represent a rotation around an axis. This axis is actually a vector, since it specifies a direction—it’s not along z, it’s either front-facing or back-facing.

Complete the vector’s implementation by initializing it inside initWithDepth:Scale:Translation:Rotation: with the following line:

// Vectors
_front = GLKVector3Make(0.0f, 0.0f, 1.0f);

As you can see, the vector is front-facing because its direction is towards the screen.

Note: Previously, I mentioned that in OpenGL ES, negative z-values go into the screen. This is because OpenGL ES uses a right-handed coordinate system. GLKit, on the other hand (pun intended), uses the more conventional left-handed coordinate system.

[Image: Left-handed and right-handed coordinate systems, from Learn OpenGL ES]

Next, add the following lines to initWithDepth:Scale:Translation:Rotation:, right after the code you just added above:

r.z = GLKMathDegreesToRadians(r.z);
_rotationEnd = GLKQuaternionIdentity;
_rotationEnd = GLKQuaternionMultiply(GLKQuaternionMakeWithAngleAndVector3Axis(-r.z, _front), _rotationEnd);

These lines perform the first two steps of the quaternion rotation described earlier:

  • You create a quaternion that represents a rotation around an axis by using GLKQuaternionMakeWithAngleAndVector3Axis().
  • You multiply the resulting quaternion against a master quaternion using GLKQuaternionMultiply().

All calculations are performed with radians, hence the call to GLKMathDegreesToRadians(). With quaternions, a positive angle performs a counterclockwise rotation, so you send in the negative value of your angle: -r.z.

To complete the initial setup, add the following line to getModelViewMatrix, right after you create modelViewMatrix:

GLKMatrix4 quaternionMatrix = GLKMatrix4MakeWithQuaternion(_rotationEnd);

Then, add the following line to your matrix calculations, after the translation and before the scale:

modelViewMatrix = GLKMatrix4Multiply(modelViewMatrix, quaternionMatrix);

These two lines perform the last two steps of the quaternion rotation described earlier:

  • You derive the 4×4 matrix that performs an (x,y,z) rotation based on a quaternion, using GLKMatrix4MakeWithQuaternion().
  • You calculate the product of the resulting matrix with the main model-view matrix using GLKMatrix4Multiply().

Note: The order of your transformations is not arbitrary. Imagine the following instructions given to two different people:

  1. Starting from point P: take n steps forward; turn to your left; then pretend to be a giant twice your size.
  2. Starting from point P: pretend to be a giant twice your size; turn to your left; then take n steps forward.

See the difference below:

[Image: g_TransformationOrder]

Even though the instructions have the same steps, the two people end up at different points, P’1 and P’2. This is because Person 1 first walks (translation), then turns (rotation), then grows (scale), thus ending n paces in front of point P. With the other order, Person 2 first grows, then turns, then walks, thus taking giant-sized steps towards the left and ending 2n paces to the left of point P.
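You can verify this numerically with GLKit. This quick sketch is not part of the project; it simply shows that the two orders produce different matrices:

// Walk then grow: translate first, then scale
GLKMatrix4 a = GLKMatrix4Translate(GLKMatrix4Identity, 1.0f, 0.0f, 0.0f);
a = GLKMatrix4Scale(a, 2.0f, 2.0f, 2.0f);

// Grow then walk: scale first, then translate
GLKMatrix4 b = GLKMatrix4Scale(GLKMatrix4Identity, 2.0f, 2.0f, 2.0f);
b = GLKMatrix4Translate(b, 1.0f, 0.0f, 0.0f);

// a ends up 1 unit from the origin; b ends up 2 units away,
// because the scale doubles the subsequent translation
NSLog(@"x offset: %f vs %f", a.m30, b.m30); // 1.000000 vs 2.000000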

Open MainViewController.m and test your new code by changing the z-axis initialization angle of self.transformations to 180.0 inside viewDidLoad:

self.transformations = [[Transformations alloc] initWithDepth:5.0f Scale:2.0f Translation:GLKVector2Make(0.0f, 0.0f) Rotation:GLKVector3Make(0.0f, 0.0f, 180.0f)];

Build and run! You’ve caught your starship in the middle of a barrel roll.

[Image: s_Run5]

After you’ve verified that this worked, revert the change, since you would rather have your app launch with the starship properly oriented.

The next step is to implement the rotation with your rotation gesture. Open Transformations.m and add the following lines to rotate:withMultiplier::

float dz = r.z - _rotationStart.z;
_rotationStart = GLKVector3Make(r.x, r.y, r.z);
_rotationEnd = GLKQuaternionMultiply(GLKQuaternionMakeWithAngleAndVector3Axis(-dz, _front), _rotationEnd);

This is a combination of your initialization code and your translation implementation. dz represents the rate of change of the current rotation about the z-axis. Then you simply update _rotationStart and _rotationEnd to reflect the new start and end positions, respectively.

There is no need to convert r.z to radians this time, since the rotation gesture’s values are already in radians. r.x and r.y will be passed along as 0.0, so you don’t need to worry about them too much—for now.

As you know, a new rotation gesture always begins with a starting value of 0.0. Therefore, all new rotations will be relative to this zero angle, regardless of your model’s actual orientation. Consequently, the value assigned to _rotationStart for every new rotation gesture will always be an angle of zero for each axis.

Add the following line to start:

_rotationStart = GLKVector3Make(0.0f, 0.0f, 0.0f);

To finalize this transformation implementation, open MainViewController.m and locate your rotation: function. Replace the NSLog() statement with the following:

[self.transformations rotate:GLKVector3Make(0.0f, 0.0f, rotation) withMultiplier:1.0f];

Since a full rotation gesture perfectly spans 360 degrees, there is no need to implement a multiplier here, but you’ll find it very useful in the next section.

Lastly, since your calculations are expecting radians, change the preceding line:

float rotation = GLKMathRadiansToDegrees([sender rotation]);

To this:

float rotation = [sender rotation];

Build and run! You can now do a full barrel roll. :D

[Image: s_Run6]

X- and Y-Axis Rotation With the Two-Finger Pan Gesture

This implementation for rotation about the x-axis and/or y-axis is very similar to the one you just coded for rotation about the z-axis, so let’s start with a little challenge!

Add two new variables to Transformations.m, _right and _up, and initialize them inside your class initializer. These variables represent two 3D vectors, one pointing right and the other pointing up. Take a peek at the instructions below if you’re not sure how to implement them or if you want to verify your solution:

Solution Inside: Right and Up Vectors
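In case you can’t expand the spoiler above, here’s a likely solution, following the same pattern as _front:

// Vectors (in the @interface extension, below _front)
GLKVector3      _right;
GLKVector3      _up;

// Vectors (inside initWithDepth:Scale:Translation:Rotation:)
_right = GLKVector3Make(1.0f, 0.0f, 0.0f);
_up = GLKVector3Make(0.0f, 1.0f, 0.0f);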

For an added challenge, see if you can initialize your (x,y) rotation properly, just as you did for your z-axis rotation with the angle r.z and the vector _front. The correct code is available below if you need some help:

Solution Inside: Rotation Initialization
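Again, in case the spoiler won’t expand, here’s a sketch that mirrors the z-axis code; note that pairing r.x with _right and r.y with _up is an assumption:

// Inside initWithDepth:Scale:Translation:Rotation:, after the z-axis code
r.x = GLKMathDegreesToRadians(r.x);
r.y = GLKMathDegreesToRadians(r.y);
_rotationEnd = GLKQuaternionMultiply(GLKQuaternionMakeWithAngleAndVector3Axis(-r.x, _right), _rotationEnd);
_rotationEnd = GLKQuaternionMultiply(GLKQuaternionMakeWithAngleAndVector3Axis(-r.y, _up), _rotationEnd);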

Good job! There’s not a whole lot of new code here, so let’s keep going. Still in Transformations.m, add the following lines to rotate:withMultiplier:, just above dz:

float dx = r.x - _rotationStart.x;
float dy = r.y - _rotationStart.y;

Once again, this should be familiar—you’re just repeating your z-axis logic for the x-axis and the y-axis. The next part is a little trickier, though…

Add the following lines to rotate:withMultiplier:, just after _rotationStart:

_rotationEnd = GLKQuaternionMultiply(GLKQuaternionMakeWithAngleAndVector3Axis(dx*m, _up), _rotationEnd);
_rotationEnd = GLKQuaternionMultiply(GLKQuaternionMakeWithAngleAndVector3Axis(dy*m, _right), _rotationEnd);

For the z-axis rotation, your implementation rotated the ship about the z-axis and all was well, because that was the natural orientation of the gesture. Here, you face a different situation. If you look closely at the code above, you’ll notice that dx rotates about the _up vector (y-axis) and dy rotates about the _right vector (x-axis). The diagram below should help make this clear:

[Image: g_GestureAxis]

And you finally get to use m! A pan gesture doesn’t report its values in radians or even degrees, but rather as 2D points, so m serves as a converter from points to radians.

Finish the implementation by opening MainViewController.m and replacing the contents of your current two-touch else if conditional inside pan: with the following:

const float m = GLKMathDegreesToRadians(0.5f);
CGPoint rotation = [sender translationInView:sender.view];
[self.transformations rotate:GLKVector3Make(rotation.x, rotation.y, 0.0f) withMultiplier:m];

The value of m dictates that for every touch-point moved in the x- and/or y-direction, your model rotates 0.5 degrees.

Build and run! Your model is fully rotational. Woo-hoo!

[Image: s_Run7]

Nice one—that’s a pretty fancy model viewer you’ve built!

Locking Your Gestures/Transformations

You’ve fully implemented your transformations, but you may have noticed that sometimes the interface accidentally alternates between two transformations—for example, if you remove a finger too soon or perform an unclear gesture. To keep this from happening, you’ll now write some code to make sure your model viewer only performs one transformation for every continuous touch.

Open Transformations.h and add the following enumerator and property to your file, just below your @interface statement:

typedef enum TransformationState
{
    S_NEW,
    S_SCALE,
    S_TRANSLATION,
    S_ROTATION
}
TransformationState;
 
@property (readwrite) TransformationState state;

state defines the current transformation state of your model viewer app, whether it be a scale (S_SCALE), translation (S_TRANSLATION) or rotation (S_ROTATION). S_NEW is a value that will be active whenever the user performs a new gesture.

Open Transformations.m and add the following line to start:

self.state = S_NEW;

See if you can implement the rest of the transformation states in their corresponding methods.

Solution Inside: Transformation States
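If the spoiler won’t expand for you, a likely solution is to set the matching state at the top of each transformation method in Transformations.m:

- (void)scale:(float)s
{
    self.state = S_SCALE;
    _scaleEnd = s * _scaleStart;
}
 
- (void)translate:(GLKVector2)t withMultiplier:(float)m
{
    self.state = S_TRANSLATION;
    // (existing translation code)
}
 
- (void)rotate:(GLKVector3)r withMultiplier:(float)m
{
    self.state = S_ROTATION;
    // (existing rotation code)
}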

Piece of cake! Now open MainViewController.m and add a state conditional to each gesture. I’ll give you the pan: implementations for free and leave the other two as a challenge. :]

Modify pan: to look like this:

- (IBAction)pan:(UIPanGestureRecognizer *)sender
{    
    // Pan (1 Finger)
    if((sender.numberOfTouches == 1) &&
        ((self.transformations.state == S_NEW) || (self.transformations.state == S_TRANSLATION)))
    {
        CGPoint translation = [sender translationInView:sender.view];
        float x = translation.x/sender.view.frame.size.width;
        float y = translation.y/sender.view.frame.size.height;
        [self.transformations translate:GLKVector2Make(x, y) withMultiplier:5.0f];
    }
 
    // Pan (2 Fingers)
    else if((sender.numberOfTouches == 2) &&
        ((self.transformations.state == S_NEW) || (self.transformations.state == S_ROTATION)))
    {
        const float m = GLKMathDegreesToRadians(0.5f);
        CGPoint rotation = [sender translationInView:sender.view];
        [self.transformations rotate:GLKVector3Make(rotation.x, rotation.y, 0.0f) withMultiplier:m];
    }
}

Click below to see the solution for the other two—but give it your best shot first!

Solution Inside: Pinch and Rotation States
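If you can’t expand the spoiler, here’s a sketch of the remaining two, using the same S_NEW-or-matching-state guard as pan::

- (IBAction)pinch:(UIPinchGestureRecognizer *)sender
{
    if((self.transformations.state == S_NEW) || (self.transformations.state == S_SCALE))
    {
        float scale = [sender scale];
        [self.transformations scale:scale];
    }
}
 
- (IBAction)rotation:(UIRotationGestureRecognizer *)sender
{
    if((self.transformations.state == S_NEW) || (self.transformations.state == S_ROTATION))
    {
        float rotation = [sender rotation];
        [self.transformations rotate:GLKVector3Make(0.0f, 0.0f, rotation) withMultiplier:1.0f];
    }
}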

Build and run! See what cool poses you can set for your model and have fun playing with your new app.

[Image: s_Run8]

Congratulations on completing this OpenGL ES Transformations With Gestures tutorial!

Where to Go From Here?

Here is the completed project with all of the code and resources from this tutorial. You can also find its repository on GitHub.

If you completed this tutorial, you’ve developed a sophisticated model viewer using the latest technologies from Apple for 3D graphics (GLKit and OpenGL ES) and touch-based user interaction (gesture recognizers). Most of these technologies are unique to mobile devices, so you’ve definitely learned enough to boost your mobile development credentials!

You should now understand a bit more about basic transformations—scale, translation and rotation—and how you can easily implement them with GLKit. You’ve learned how to add gesture recognizers to a View Controller and read their main event data. Furthermore, you’ve created a very slick app that you can expand into a useful portfolio tool for 3D artists. Challenge accepted? ;]

If you have any questions, comments or suggestions, feel free to join the discussion below!

The post OpenGL ES Transformations with Gestures appeared first on Ray Wenderlich.

Introduction to Core Bluetooth: Building a Heart Rate Monitor

[Image: Learn how to use Core Bluetooth on a real-world device!]

The Core Bluetooth framework lets your iOS and Mac apps communicate with Bluetooth low energy devices (Bluetooth LE for short). Bluetooth LE devices include heart rate monitors, digital thermostats, and more.

The Core Bluetooth framework is an abstraction of the Bluetooth 4.0 specification and defines a set of easy-to-use protocols for communicating with Bluetooth LE devices.

In this tutorial, you’ll learn about the key concepts of the Core Bluetooth framework and how to leverage the framework to discover, connect, and retrieve data from compatible devices. You’ll use these skills by building a heart rate monitoring application that communicates with a Bluetooth heart monitor.

The heart rate monitor we use in this tutorial is the Polar H7 Bluetooth Smart Heart Rate Sensor. If you don’t have one of these devices, you can still follow along with the tutorial, but you’ll need to tweak the code for whatever Bluetooth device that you need to work with.

Alright, it’s Bluetooth LE time!

Understanding Central and Peripheral Devices in Bluetooth

The two major players involved in all Bluetooth LE communication are known as the central and the peripheral:

  • A central is kinda like the “boss”. It wants information from a bunch of its workers in order to accomplish a particular task.
  • A peripheral is kinda like the “worker”. It gathers and publishes data that is consumed by other devices.

The following image illustrates this relationship:

[Image: iOSDevice_HeartMonitor]

In this scenario, an iOS device (the central) communicates with a Heart Rate Monitor (the peripheral) to retrieve and display heart rate information on the device in a user-friendly way.

How Centrals Communicate with Peripherals

Advertising is the primary way that peripherals make their presence known via Bluetooth LE.

In addition to advertising their existence, advertising packets can contain some data, such as the peripheral’s name. It can also include some extra data related to what the peripheral collects. For example, in the case of a heart rate monitor, the packet also provides heartbeats per minute (BPM) data.

The job of a central is to scan for these advertising packets, identify any peripherals it finds relevant, and connect to individual devices for more information.

The Structure of Peripheral Data

Advertising packets are very small and cannot contain a great deal of information. So to get more, a central needs to connect to a peripheral to obtain all of the data available.

Once the central connects to a peripheral, it needs to choose the data it is interested in. In Bluetooth LE, data is organized into concepts called services and characteristics:

  • A service is a collection of data and associated behaviors describing a specific function or feature of a device. An example of a service is a heart rate monitor exposing heart rate data from the monitor’s heart rate sensor. A device can have more than one service.
  • A characteristic provides further details about a peripheral’s service. For example, the heart rate service just described may contain a characteristic that describes the intended body location of the device’s heart rate sensor and another characteristic that transmits heart rate measurement data. A service can have more than one characteristic.

The diagram below further describes the relationship between services and characteristics:

[Image: Peripheral_Characteristcs]

Once a central has established a connection to a peripheral, it’s free to discover the full range of services and characteristics of the peripheral and to read or write the characteristic values of the available services.

CBPeripheral, CBService and CBCharacteristic

In the CoreBluetooth framework, a peripheral is represented by the CBPeripheral object, while the services relating to a specific peripheral are represented by the CBService object.

The characteristics of a peripheral’s service are represented by CBCharacteristic objects which are defined as attribute types containing a single logical value.

Note: If you’re interested in learning more about the Bluetooth standard, feel free to check out http://developer.bluetooth.org, where you can find a list of the standardized services and the characteristics of each.

Centrals are represented by the CBCentralManager object and are used to manage discovered or connected peripheral devices.

The following diagram illustrates the basic structure of a peripheral’s services and characteristics object hierarchy:

[Image: CBPeripheral_Hierarchy]

Each service and characteristic you create must be identified by a unique identifier, or UUID. UUIDs can be 16- or 128-bit values, but if you are building your client-server (central-peripheral) application, then you’ll need to create your own 128-bit UUIDs. You’ll also need to make sure the UUIDs don’t collide with other potential services in close proximity to your device.
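Both kinds of identifiers are created the same way in code; the 128-bit value below is purely illustrative:

CBUUID *heartRateServiceUUID = [CBUUID UUIDWithString:@"180D"]; // 16-bit, SIG-assigned
CBUUID *customServiceUUID = [CBUUID UUIDWithString:@"68753A44-4D6F-1226-9C60-0050E4C00067"]; // 128-bit, your own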

In the next section, you’ll learn how to reference the CoreBluetooth and QuartzCore header files and conform to their delegates so you can communicate and retrieve information about the heart rate monitor.

Getting Started

Enough background, time to code!

Start by downloading the starter project for this tutorial. This is a bare-bones view-based application that simply includes an image you’ll need.

Next, you need to import CoreBluetooth and QuartzCore into your project. To do this, open HRMViewController.h and add the following lines:

@import CoreBluetooth;
@import QuartzCore;

This uses the new @import keyword introduced in Xcode 5. To learn more about this, check out our What’s New in Objective-C and Foundation in iOS 7 tutorial.

Next, add some #defines for the service UUIDs for the Polar H7 heart rate monitor you’ll be working with in this tutorial. These come from the services section of the Bluetooth specification:

#define POLARH7_HRM_DEVICE_INFO_SERVICE_UUID @"180A"       
#define POLARH7_HRM_HEART_RATE_SERVICE_UUID @"180D"

You are interested in two services here: one for the device info, and one for the heart rate service.

Similarly, add some defines for the characteristics you’re interested in. These come from the characteristics section of the Bluetooth specification:

#define POLARH7_HRM_MEASUREMENT_CHARACTERISTIC_UUID @"2A37"
#define POLARH7_HRM_BODY_LOCATION_CHARACTERISTIC_UUID @"2A38"
#define POLARH7_HRM_MANUFACTURER_NAME_CHARACTERISTIC_UUID @"2A29"

Here you list out the three characteristics from the heart rate service that you are interested in.

Note that if you are working with a different type of device, you can add the appropriate services/characteristics for your device here according to your device and the specification.

Conforming to the Delegate

HRMViewController needs to implement the CBCentralManagerDelegate protocol to allow the delegate to monitor the discovery, connectivity, and retrieval of peripheral devices. It also needs to implement the CBPeripheralDelegate protocol so it can monitor the discovery, exploration, and interaction of a remote peripheral’s services and properties.

Open HRMViewController.h and update the interface declaration as follows:

@interface HRMViewController : UIViewController <CBCentralManagerDelegate, CBPeripheralDelegate>

Next, add the following properties between the @interface and @end lines to represent your CentralManager and your peripheral device:

@property (nonatomic, strong) CBCentralManager *centralManager;
@property (nonatomic, strong) CBPeripheral     *polarH7HRMPeripheral;

Next let’s add some stub implementations for the delegate methods. Switch to HRMViewController.m and add this code:

#pragma mark - CBCentralManagerDelegate
 
// method called whenever you have successfully connected to the BLE peripheral
- (void)centralManager:(CBCentralManager *)central didConnectPeripheral:(CBPeripheral *)peripheral 
{
}
 
// CBCentralManagerDelegate - This is called with the CBPeripheral class as its main input parameter. This contains most of the information there is to know about a BLE peripheral.
- (void)centralManager:(CBCentralManager *)central didDiscoverPeripheral:(CBPeripheral *)peripheral advertisementData:(NSDictionary *)advertisementData RSSI:(NSNumber *)RSSI 
{
}
 
// method called whenever the device state changes.
- (void)centralManagerDidUpdateState:(CBCentralManager *)central 
{
}

That takes care of the CentralManager — now add the following empty stubs for your delegate callback methods for your CBPeripheralDelegate protocol:

#pragma mark - CBPeripheralDelegate
 
// CBPeripheralDelegate - Invoked when you discover the peripheral's available services.
- (void)peripheral:(CBPeripheral *)peripheral didDiscoverServices:(NSError *)error 
{
}
 
// Invoked when you discover the characteristics of a specified service.
- (void)peripheral:(CBPeripheral *)peripheral didDiscoverCharacteristicsForService:(CBService *)service error:(NSError *)error 
{
}
 
// Invoked when you retrieve a specified characteristic's value, or when the peripheral device notifies your app that the characteristic's value has changed.
- (void)peripheral:(CBPeripheral *)peripheral didUpdateValueForCharacteristic:(CBCharacteristic *)characteristic error:(NSError *)error 
{
}

Finally, create the following empty stubs for retrieving CBCharacteristic information for Heart Rate, Manufacturer Name, and Body Location:

#pragma mark - CBCharacteristic helpers
 
// Instance method to get the heart rate BPM information
- (void) getHeartBPMData:(CBCharacteristic *)characteristic error:(NSError *)error 
{
}
// Instance method to get the manufacturer name of the device
- (void) getManufacturerName:(CBCharacteristic *)characteristic 
{
}
// Instance method to get the body location of the device
- (void) getBodyLocation:(CBCharacteristic *)characteristic 
{
}
// Helper method to perform a heartbeat animation
- (void)doHeartBeat
{
}

Note: As you progress through this tutorial, you’ll flesh out these methods as required.

Creating the User Interface

Let’s create a rough user interface to display the data from the heart rate monitor.

Open HRMViewController.h and add the following properties between the @interface and @end lines, underneath the other property methods you just created:

// Properties for your Object controls
@property (nonatomic, strong) IBOutlet UIImageView *heartImage;
@property (nonatomic, strong) IBOutlet UITextView  *deviceInfo;
 
// Properties to hold data characteristics for the peripheral device
@property (nonatomic, strong) NSString   *connected;
@property (nonatomic, strong) NSString   *bodyData;
@property (nonatomic, strong) NSString   *manufacturer;
@property (nonatomic, strong) NSString   *polarH7DeviceData;
@property (assign) uint16_t heartRate;
 
// Properties to handle storing the BPM and heart beat
@property (nonatomic, strong) UILabel    *heartRateBPM;
@property (nonatomic, retain) NSTimer    *pulseTimer;
 
// Instance method to get the heart rate BPM information
- (void) getHeartBPMData:(CBCharacteristic *)characteristic error:(NSError *)error;
 
// Instance methods to grab device Manufacturer Name, Body Location
- (void) getManufacturerName:(CBCharacteristic *)characteristic;
- (void) getBodyLocation:(CBCharacteristic *)characteristic;
 
// Instance method to perform heart beat animations
- (void) doHeartBeat;

Next open Main.storyboard. Look for the right sidebar in your Xcode window; if you don’t see one, you might need to use the rightmost button under the View section on the toolbar at the top to make the right hand sidebar visible.

Select and drag a Label, an ImageView, and a TextView control from the Object Library main view and position them roughly as shown below:

[Image: VisualDesigner]

Change the text of your label to read “Heart Rate Monitor”. Connect the ImageView to the heartImage property, and the TextView to the deviceInfo property.

Note: If you are unsure how to perform these steps, a good place to start would be Linda Burke’s tutorial Objectively Speaking: A Crash Course in Objective-C for iOS 6.

With the project window open, ensure that you have selected the active scheme configuration HeartMonitor\iPhone Simulator.

Build and run your app; you can do this by selecting Product\Run from the Xcode menu, or alternatively pressing Command + R. The iOS simulator will appear, and your app will be displayed on-screen.

As you can see, your application doesn’t do much at the moment. However, this is a good check to make sure that all your bits and pieces compile correctly. In the next section, you’ll add some functionality to make it talk to the Bluetooth device.

Leveraging the Bluetooth Framework

Open HRMViewController.m and replace viewDidLoad with the following code:

- (void)viewDidLoad
{
    [super viewDidLoad];
 
	// Do any additional setup after loading the view, typically from a nib.
	self.polarH7DeviceData = nil;
	[self.view setBackgroundColor:[UIColor groupTableViewBackgroundColor]];
	[self.heartImage setImage:[UIImage imageNamed:@"HeartImage"]];
 
	// Clear out textView
	[self.deviceInfo setText:@""];
	[self.deviceInfo setTextColor:[UIColor blueColor]];
	[self.deviceInfo setBackgroundColor:[UIColor groupTableViewBackgroundColor]];
	[self.deviceInfo setFont:[UIFont fontWithName:@"Futura-CondensedMedium" size:25]];
	[self.deviceInfo setUserInteractionEnabled:NO];
 
	// Create your Heart Rate BPM Label
	self.heartRateBPM = [[UILabel alloc] initWithFrame:CGRectMake(55, 30, 75, 50)];
	[self.heartRateBPM setTextColor:[UIColor whiteColor]];
	[self.heartRateBPM setText:[NSString stringWithFormat:@"%i", 0]];
	[self.heartRateBPM setFont:[UIFont fontWithName:@"Futura-CondensedMedium" size:28]];
	[self.heartImage addSubview:self.heartRateBPM];
 
	// Scan for all available CoreBluetooth LE devices
	NSArray *services = @[[CBUUID UUIDWithString:POLARH7_HRM_HEART_RATE_SERVICE_UUID], [CBUUID UUIDWithString:POLARH7_HRM_DEVICE_INFO_SERVICE_UUID]];
	CBCentralManager *centralManager = [[CBCentralManager alloc] initWithDelegate:self queue:nil];
	[centralManager scanForPeripheralsWithServices:services options:nil];
	self.centralManager = centralManager;
 
}

Here you initialize and set up your user interface controls and load your heart image from the Xcode assets library. Next you create the CBCentralManager object; the first argument sets the delegate — in this case, the view controller. The second argument (the queue) is set to nil, because the Central Manager will run on the main thread.

You then call scanForPeripheralsWithServices:, passing in the services array you built above; this tells the Central Manager to scan for any peripherals in range that advertise the heart rate or device info services.

Adding the Delegate Methods

Once the Central Manager is initialized, you immediately need to check its state. This tells you if the device your app is running on is compliant with the Bluetooth LE standard.

Adding centralManagerDidUpdateState:

Open HRMViewController.m and replace centralManagerDidUpdateState: with the following code:

- (void)centralManagerDidUpdateState:(CBCentralManager *)central
{
    // Determine the state of the peripheral
    if ([central state] == CBCentralManagerStatePoweredOff) {
        NSLog(@"CoreBluetooth BLE hardware is powered off");
    }
    else if ([central state] == CBCentralManagerStatePoweredOn) {
        NSLog(@"CoreBluetooth BLE hardware is powered on and ready");
    }
    else if ([central state] == CBCentralManagerStateUnauthorized) {
        NSLog(@"CoreBluetooth BLE state is unauthorized");
    }
    else if ([central state] == CBCentralManagerStateUnknown) {
        NSLog(@"CoreBluetooth BLE state is unknown");
    }
    else if ([central state] == CBCentralManagerStateUnsupported) {
        NSLog(@"CoreBluetooth BLE hardware is unsupported on this platform");
    }
}

The above method ensures that your device is Bluetooth low energy compliant and it can be used as the central device object of your CBCentralManager. If the state of the central manager is powered on, you’ll receive a state of CBCentralManagerStatePoweredOn. If the state changes to CBCentralManagerStatePoweredOff, then all peripheral objects that have been obtained from the central manager become invalid and must be re-discovered.

Let’s try this out. Build and run your code – on an actual device, not the simulator. You should see the following output in the console:

CoreBluetooth[WARNING] <CBCentralManager: 0x14e3a8c0> is not powered on
CoreBluetooth BLE hardware is powered on and ready

Adding centralManager:didDiscoverPeripheral:advertisementData:RSSI:

Remember that in viewDidLoad, you called scanForPeripheralsWithServices: to start searching for Bluetooth LE devices that have the heart rate or device info services. When one of these devices is found, the centralManager:didDiscoverPeripheral:advertisementData:RSSI: delegate method will be called, so implement that next:

- (void)centralManager:(CBCentralManager *)central didDiscoverPeripheral:(CBPeripheral *)peripheral advertisementData:(NSDictionary *)advertisementData RSSI:(NSNumber *)RSSI
{
    NSString *localName = [advertisementData objectForKey:CBAdvertisementDataLocalNameKey];
    if ([localName length] > 0) {
        NSLog(@"Found the heart rate monitor: %@", localName);
        [self.centralManager stopScan];
        self.polarH7HRMPeripheral = peripheral;
        peripheral.delegate = self;
        [self.centralManager connectPeripheral:peripheral options:nil];
    }
}

When a peripheral with one of the designated services is discovered, the delegate method is called with the peripheral object, the advertisement data, and something called the RSSI.

Note: RSSI stands for Received Signal Strength Indicator. This is a cool parameter, because by knowing the strength of the transmitting signal and the RSSI, you can estimate the current distance between the central and the peripheral.

With this knowledge, you can invoke certain actions like reading data only when the central is close enough to the peripheral; if it’s almost out of range then your app could wait until the RSSI is higher before it performs certain actions.
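As a rough illustration (this is not part of the tutorial’s code), the log-distance path-loss model is one way to turn RSSI into an approximate distance, given the transmit power measured at one meter:

// txPower: the calibrated RSSI at 1 m, often included in the advertisement data.
// n: environmental attenuation factor, roughly 2.0 in free space.
// Both values here are assumptions for illustration.
- (double)estimatedDistanceForRSSI:(NSNumber *)RSSI txPower:(double)txPower
{
    double n = 2.0;
    return pow(10.0, (txPower - [RSSI doubleValue]) / (10.0 * n));
}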

Here you check to make sure that the device has a non-empty local name, and if so you log out the name and store the CBPeripheral for later reference. You also cease scanning for devices and call a method on the central manager to establish a connection to the peripheral object.

Build and run your code again, but this time make sure you are actually wearing your heart rate monitor (it won’t send data unless you’re wearing it!). You should see something like the following in the console:

Found the heart rate monitor: Polar H7 252D9F

Adding centralManager:didConnectPeripheral:

Your next step is to determine if you have established a connection to the peripheral. Open HRMViewController.m and replace centralManager:didConnectPeripheral: with the following code:

- (void)centralManager:(CBCentralManager *)central didConnectPeripheral:(CBPeripheral *)peripheral
{
    [peripheral setDelegate:self];
    [peripheral discoverServices:nil];
    self.connected = [NSString stringWithFormat:@"Connected: %@", peripheral.state == CBPeripheralStateConnected ? @"YES" : @"NO"];
    NSLog(@"%@", self.connected);
}

When you establish a local connection to a peripheral, the central manager object calls the centralManager:didConnectPeripheral: method of its delegate object.

In your implementation of the method above, you first set your peripheral object to be the delegate of the current view controller so that it can notify the view controller using callbacks. If no error occurs, you next ask the peripheral to discover the services associated with the device. Finally, you determine the peripheral’s current state to see if you’ve established a connection.

However, if the connection attempt fails, the central manager object calls centralManager:didFailToConnectPeripheral:error: of its delegate object instead.
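This tutorial doesn’t implement that callback, but a minimal handler might look like this:

- (void)centralManager:(CBCentralManager *)central didFailToConnectPeripheral:(CBPeripheral *)peripheral error:(NSError *)error
{
    NSLog(@"Failed to connect to peripheral: %@", [error localizedDescription]);
}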

Run this code on your device again (still wearing your heart rate monitor), and after a few seconds you should see this on the console:

Found the heart rate monitor: Polar H7 252D9F
Connected: YES

Adding peripheral:didDiscoverServices:

Once the services of the peripheral are discovered, peripheral:didDiscoverServices: will be called. So implement that with the following:

- (void)peripheral:(CBPeripheral *)peripheral didDiscoverServices:(NSError *)error
{
    for (CBService *service in peripheral.services) {
        NSLog(@"Discovered service: %@", service.UUID);
        [peripheral discoverCharacteristics:nil forService:service];
    }
}

Here you simply iterate through each service discovered, log out its UUID, and call a method to discover the characteristics for that service.

Build and run, and this time you should see something like the following in the console:

Discovered service: Unknown (<180d>)
Discovered service: Device Information
Discovered service: Battery
Discovered service: Unknown (<6217ff49 ac7b547e eecf016a 06970ba9>)

Does that Unknown (<180d>) value look familiar to you? It should; it’s the heart rate service ID from the services section of the Bluetooth specification (https://developer.bluetooth.org/gatt/services/Pages/ServicesHome.aspx) that you defined earlier:

#define POLARH7_HRM_HEART_RATE_SERVICE_UUID @"180D"

Adding peripheral:didDiscoverCharacteristicsForService:

Since you called discoverCharacteristics:forService:, the peripheral:didDiscoverCharacteristicsForService:error: method will be called once the characteristics of each service have been discovered. Implement it with the following:

- (void)peripheral:(CBPeripheral *)peripheral didDiscoverCharacteristicsForService:(CBService *)service error:(NSError *)error
{
    if ([service.UUID isEqual:[CBUUID UUIDWithString:POLARH7_HRM_HEART_RATE_SERVICE_UUID]])  {  // 1
        for (CBCharacteristic *aChar in service.characteristics)
        {
            // Request heart rate notifications
            if ([aChar.UUID isEqual:[CBUUID UUIDWithString:POLARH7_HRM_MEASUREMENT_CHARACTERISTIC_UUID]]) { // 2
                [self.polarH7HRMPeripheral setNotifyValue:YES forCharacteristic:aChar];
                NSLog(@"Found heart rate measurement characteristic");
            }
            // Request body sensor location
            else if ([aChar.UUID isEqual:[CBUUID UUIDWithString:POLARH7_HRM_BODY_LOCATION_CHARACTERISTIC_UUID]]) { // 3
                [self.polarH7HRMPeripheral readValueForCharacteristic:aChar];
                NSLog(@"Found body sensor location characteristic");
            }
        }
    }
    // Retrieve Device Information Services for the Manufacturer Name
    if ([service.UUID isEqual:[CBUUID UUIDWithString:POLARH7_HRM_DEVICE_INFO_SERVICE_UUID]])  { // 4
        for (CBCharacteristic *aChar in service.characteristics)
        {
            if ([aChar.UUID isEqual:[CBUUID UUIDWithString:POLARH7_HRM_MANUFACTURER_NAME_CHARACTERISTIC_UUID]]) {
                [self.polarH7HRMPeripheral readValueForCharacteristic:aChar];
                NSLog(@"Found a device manufacturer name characteristic");
            }
        }
    }
}

This method lets you determine what characteristics this device has. Taking each numbered comment in turn, you’ll see the following actions:

  1. First, check if the service is the heart rate service.
  2. If so, iterate through the characteristics array and determine if the characteristic is the heart rate measurement characteristic. If it is, you subscribe to it by calling setNotifyValue:forCharacteristic:, which tells Core Bluetooth to watch this characteristic and notify your code whenever its value changes.
  3. If the characteristic is the body location characteristic, there is no need to subscribe to it (as it won’t change), so just read this value.
  4. If the service is the device info service, look for the manufacturer name and read it.

Build and run, and you should see something like the following in the console:

Found heart rate measurement characteristic
Found body sensor location characteristic
Found a device manufacturer name characteristic

Adding peripheral:didUpdateValueForCharacteristic:

The peripheral:didUpdateValueForCharacteristic:error: method will be called whenever CBPeripheral reads a value, or whenever a characteristic you subscribed to posts an updated value. You need to implement this method to check which characteristic’s value has been updated, then call one of the helper methods to read in the value.

So implement the method as follows:

- (void)peripheral:(CBPeripheral *)peripheral didUpdateValueForCharacteristic:(CBCharacteristic *)characteristic error:(NSError *)error
{
    // Updated value for heart rate measurement received
    if ([characteristic.UUID isEqual:[CBUUID UUIDWithString:POLARH7_HRM_MEASUREMENT_CHARACTERISTIC_UUID]]) { // 1
        // Get the Heart Rate Monitor BPM
        [self getHeartBPMData:characteristic error:error];
    }
    // Retrieve the characteristic value for manufacturer name received
    if ([characteristic.UUID isEqual:[CBUUID UUIDWithString:POLARH7_HRM_MANUFACTURER_NAME_CHARACTERISTIC_UUID]]) {  // 2
        [self getManufacturerName:characteristic];
    }
    // Retrieve the characteristic value for the body sensor location received
    else if ([characteristic.UUID isEqual:[CBUUID UUIDWithString:POLARH7_HRM_BODY_LOCATION_CHARACTERISTIC_UUID]]) {  // 3
        [self getBodyLocation:characteristic];
    }
 
    // Add your constructed device information to your UITextView
    self.deviceInfo.text = [NSString stringWithFormat:@"%@\n%@\n%@\n", self.connected, self.bodyData, self.manufacturer];  // 4
}

Looking at each numbered section in turn:

  1. First check that a notification has been received to read heart rate BPM information. If so, call your instance method getHeartBPMData:error: and pass in the characteristic and the error.
  2. Next, check if the value received is the manufacturer name of the device. If so, call your instance method getManufacturerName: and pass in the characteristic.
  3. Check if the value received is the location of the device on the body. If so, call your instance method getBodyLocation: and pass in the characteristic.
  4. Finally, concatenate each of your values and output them to your UITextView control.

You can build and run if you want, but only a few null values will be written to the text field, because you haven’t implemented the helper methods yet. Let’s do that next.

Adding getHeartBPMData:(CBCharacteristic *)characteristic error:(NSError *)error

To understand how to interpret the data from a characteristic, you have to check the Bluetooth specification. For example, check out the entry for heart rate measurement.

You’ll see that a heart rate measurement consists of a flags byte, followed by the heart rate measurement itself, some energy information, and other data. You need to write a method to read this, so implement getHeartBPMData:(CBCharacteristic *)characteristic error:(NSError *)error in HRMViewController.m as follows:

- (void) getHeartBPMData:(CBCharacteristic *)characteristic error:(NSError *)error
{
    // Get the Heart Rate Monitor BPM
    NSData *data = [characteristic value];      // 1
    const uint8_t *reportData = [data bytes];
    uint16_t bpm = 0;
 
    if ((reportData[0] & 0x01) == 0) {          // 2
        // Retrieve the BPM value for the Heart Rate Monitor
        bpm = reportData[1];
    }
    else {
        bpm = CFSwapInt16LittleToHost(*(uint16_t *)(&reportData[1]));  // 3
    }
    // Display the heart rate value to the UI if no error occurred
    if (characteristic.value && !error) {   // 4
        self.heartRate = bpm;
        self.heartRateBPM.text = [NSString stringWithFormat:@"%i bpm", bpm];
        self.heartRateBPM.font = [UIFont fontWithName:@"Futura-CondensedMedium" size:28];
        [self doHeartBeat];
        self.pulseTimer = [NSTimer scheduledTimerWithTimeInterval:(60. / self.heartRate) target:self selector:@selector(doHeartBeat) userInfo:nil repeats:NO];
    }
    return;
}

The above method runs each time the peripheral sends new data; it’s responsible for handling heart monitor device notifications received by the peripheral delegate.

Once again, going through the numbered comments one by one reveals the following:

  1. Convert the contents of your characteristic value to a data object. Next, get the byte sequence of your data object and assign this to your reportData pointer. Then initialize your bpm variable, which will store the heart rate value.
  2. Next, obtain the flags byte at index 0 in the array, as defined by reportData[0], and mask out all but the low-order bit. Per the Bluetooth specification, this format bit is 0 if the heart rate value is an 8-bit integer, or 1 if it’s a 16-bit integer. If the bit is 0, retrieve the BPM value from the single byte at index 1 in the array.
  3. If the format bit is set, retrieve the BPM value from the two bytes starting at index 1 in the array and convert this 16-bit value to the host’s native byte order.
  4. If no error occurred, output the value of bpm to your heartRateBPM UILabel control, set its font type and size, and call doHeartBeat to animate the heart. Finally, set up a timer that calls doHeartBeat again after one beat interval of 60 / BPM seconds — for example, every 0.8 seconds at 75 BPM — which drives the basic Core Animation effect that simulates the beating of a heart.

Build and run, and at long last you’ll see your heart beat on display in the app!

My resting pulse

Adding getManufacturerName:(CBCharacteristic *)characteristic

Next let’s add the code to read the manufacturer name characteristic. Implement getManufacturerName:(CBCharacteristic *)characteristic in HRMViewController.m as follows:

// Instance method to get the manufacturer name of the device
- (void) getManufacturerName:(CBCharacteristic *)characteristic
{
    NSString *manufacturerName = [[NSString alloc] initWithData:characteristic.value encoding:NSUTF8StringEncoding];  // 1
    self.manufacturer = [NSString stringWithFormat:@"Manufacturer: %@", manufacturerName];    // 2
    return;
}

The above method executes when the peripheral returns the value of the manufacturer name characteristic that you requested earlier with readValueForCharacteristic:.

This method isn’t terribly long or complicated, but take a look at each commented section to see what’s going on:

  1. Take the value of the characteristic discovered by your peripheral to obtain the manufacturer name. Use initWithData: to return the contents of your characteristic object as a data object and tell NSString that you want to use NSUTF8StringEncoding so it can be interpreted as a valid string.
  2. Next, assign the value of the manufacturer name to self.manufacturer so that you can display this value in your UITextView control.

You could build and run here, but I’d recommend implementing the final helper method first.

Adding getBodyLocation:(CBCharacteristic *)characteristic

The last step is to add the code to read the body sensor location characteristic. Replace getBodyLocation:(CBCharacteristic *)characteristic in HRMViewController.m with the following code:

- (void) getBodyLocation:(CBCharacteristic *)characteristic
{
    NSData *sensorData = [characteristic value];         // 1
    uint8_t *bodyData = (uint8_t *)[sensorData bytes];
    if (bodyData ) {
        uint8_t bodyLocation = bodyData[0];  // 2
        self.bodyData = [NSString stringWithFormat:@"Body Location: %@", bodyLocation == 1 ? @"Chest" : @"Undefined"]; // 3
    }
    else {  // 4
        self.bodyData = [NSString stringWithFormat:@"Body Location: N/A"];
    }
    return;
}

The above method executes when the peripheral returns the value of the body sensor location characteristic that you requested earlier.

Stepping through the numbered comments reveals the following:

  1. Use the value of the characteristic discovered by your peripheral to obtain the heart rate monitor’s body location. Next, convert the characteristic value to a data object consisting of byte sequences and assign this to your bodyData object.
  2. Next, determine if you have device body location data to report and access the first byte at index 0 in your array as defined by bodyData[0].
  3. Next, determine the body location of the device using the bodyLocation variable; here you’re only interested in the chest location, so anything else is reported as Undefined (see the sketch after this list for the full set of defined locations). Finally, assign the body location string to self.bodyData so that it can be displayed in your UITextView control.
  4. If no data is available, assign N/A as the body location to self.bodyData instead, so that it can still be displayed in your UITextView control.
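By the way, the Bluetooth specification defines more body sensor locations than just the chest. If you wanted to report them all, you could swap the ternary expression above for a switch; here’s a sketch using the values from the Body Sensor Location entry of the specification:

NSString *location;
switch (bodyLocation) {
    case 0:  location = @"Other";    break;
    case 1:  location = @"Chest";    break;
    case 2:  location = @"Wrist";    break;
    case 3:  location = @"Finger";   break;
    case 4:  location = @"Hand";     break;
    case 5:  location = @"Ear Lobe"; break;
    case 6:  location = @"Foot";     break;
    default: location = @"Reserved"; break;
}
self.bodyData = [NSString stringWithFormat:@"Body Location: %@", location];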

Build and run, and now the text view shows the manufacturer name and body location properly:

Seeing sensor location and manufacturer name

Make Your Heart Beat a Little Faster!

Congratulations, you now have a working heart rate monitor, and even more importantly have a good understanding of how Core Bluetooth works. You can apply these same techniques to a variety of Bluetooth LE devices.

Before you go, we have one little bonus for you. For fun, let’s make the heart image beat in time with the BPM data from the heart monitor.

Open HRMViewController.m and replace doHeartBeat as follows:

- (void) doHeartBeat
{
    CALayer *layer = [self heartImage].layer;
    CABasicAnimation *pulseAnimation = [CABasicAnimation animationWithKeyPath:@"transform.scale"];
    pulseAnimation.toValue = [NSNumber numberWithFloat:1.1];
    pulseAnimation.fromValue = [NSNumber numberWithFloat:1.0];
 
    pulseAnimation.duration = 60. / self.heartRate / 2.;
    pulseAnimation.repeatCount = 1;
    pulseAnimation.autoreverses = YES;
    pulseAnimation.timingFunction = [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionEaseIn];
    [layer addAnimation:pulseAnimation forKey:@"scale"];
 
    self.pulseTimer = [NSTimer scheduledTimerWithTimeInterval:(60. / self.heartRate) target:self selector:@selector(doHeartBeat) userInfo:nil repeats:NO];
}

In the method above, you first grab the layer of your heart image view, which manages the image-based content for the animation. You then create a pulseAnimation object to perform a basic, single-keyframe scale animation on that layer. Finally, you set a CAMediaTimingFunction to define the pacing of the animation, and schedule the timer that triggers the next beat.

Build and run your app; you should see the heart image pulsate with each heartbeat received from the heart monitor. Try some light exercise (or try coding an Android app!) and watch your heart rate rise!

Where to Go From Here?

In this tutorial you’ve learned about Core Bluetooth LE and how you can use this to connect with Low Energy peripheral devices to retrieve certain attributes pertaining to the device.

Another example of Bluetooth LE in action is iBeacons. If you’d like to learn more about those, check out the What’s New in Core Location chapter in iOS 7 by Tutorials. The book contains more info and examples of iBeacons along with tons of other chapters on almost everything else in iOS 7.

Here is the completed sample project with all of the code from the above tutorial. If you liked this tutorial and would like to see more Core Bluetooth tutorials in the future, please let me know in the forums!


2D Skeletal Animation with Spine Tutorial


If you’ve ever made a 2D game and needed to animate your sprites, you likely asked your artist to create separate images for each frame of the animation, like this example from iOS Games by Tutorials:

SpriteFrames

You then probably wrote some code to play through the list of frames quickly, to give the illusion of movement, like you see here:

SpriteFrames2

This method is simple and it works, but it has a number of big disadvantages:

  • High memory and storage requirements. Because you have to make a separate image for each frame of animation, you are using a lot of memory and storage for your textures. The bigger the sprites are that you are animating, and the more sprites you have, the bigger a problem this becomes. This is a particularly big problem on mobile devices, which only have a limited amount of memory and texture memory.
  • The animations are expensive to make. Drawing individual animation frames like this is time consuming for your artist. Also, making changes to the animations after they have been completed is very time-consuming.
  • You (probably) cannot make the animations yourself. Since each frame animation needs to be hand-drawn, if you are a developer this is probably something you need to rely on your artist to do – even if there’s a particular effect you’re going after.

The way to solve these problems is to integrate something called a 2D skeletal animation system into your games. The idea is that instead of saving out each and every frame of animation, you save out individual body parts like this:

Body Parts

Then you create a small file that describes how to move the body parts around in order to perform the animation you want, such as walking, running, or jumping. You also add some code into your game to read this animation file, create sprites for each body part, and move them around according to the instructions in the file.

Of course, creating a 2D skeletal animation system by hand is a crazy amount of work. Luckily, the folks at Esoteric Software have created a great tool to help you out called Spine.

Spine is a graphical interface that allows you to create a skeleton out of the pieces of your sprite, and move it around in order to create animations you can use in your game.

spine43

Spine also comes with a huge list of pre-made Spine runtimes, which is a fancy way of saying “code you can add into your game to read Spine files, and create animated sprites from them.” Runtimes include Unity, Sprite Kit, cocos2d-iPhone, and much more.

In this tutorial, you’ll use Spine to animate a clumsy elf so that it walks and trips. Along the way, you’ll learn how to:

  • Import artwork into Spine.
  • Build a skeleton for the elf.
  • Create two different animations.
  • Save and export your work.

Note that this tutorial does not cover integrating the resulting animations into a game; that will be a separate tutorial. Instead, the focus of this tutorial is using Spine itself, which will be useful no matter what game framework you may be using.

If you’re ready to take your first steps with Spine, let’s get started!

Getting Started

First things first: you need to download and install Spine.

Spine is available for Windows, Mac and Linux. There are five versions of Spine from which you can choose.

  • Trial (Free): Includes all features, but you cannot save, import or export projects. This version is great for learning the software, but you won’t be able to export your animations into your app.
  • Essential ($60 – $75 USD): Contains the most important features with the ability to save, import and export projects. This version does not include some current features, such as auto-keying, dopesheets and ghosting. It also does not include support for new releases.
  • Professional ($249 – $299 USD): Contains every feature, as well as all future-release features of Spine.
  • Enterprise (Base price $2200 USD): The same as Professional, but for businesses with $500,000+ of annual revenue.
  • Education ($610 – $8217 USD + 10% enrollment fee): The same as Professional, but for schools and educational institutions. The price of the license depends upon the number of computers supported.

For the purposes of this tutorial, you can do almost everything with the trial version. However, at the end of this tutorial, you’ll find an optional section on exporting your animations, which you cannot do with the trial version. If you complete the rest of the tutorial and are eager to see your animations running in your apps, you should consider purchasing an Essential or Professional license so that you have the ability to save and export your work.

So – choose a version of Spine and download, install, and run it. If you are running on the Mac, you may get the following message when you try to run Spine:

X11 message

Click Continue and you will be directed to an Apple support page. On this page, click the http://xquartz.macosforge.org link in the first paragraph. This will take you to the XQuartz download page. Download and install X11, then run Spine again and it should launch with no problems.

Once you successfully run Spine, you’ll be greeted with a sample project.

spine1

Feel free to peek around the sample project if you’d like. When you’re ready, read on to learn how to create your own animation!

Importing Artwork Into Spine

So that you can focus on learning how to use Spine, I’ve created some artwork for you to create an animated elf.

Download the art here, uncompress the folder and drag it to your Desktop. This will make it easier to find it in Spine.

Click on the Spine logo in the upper-left corner and select New Project.

New Project

In the Tree panel on the right, select the Images folder and then click Browse under the Images listing.

spine3

Browse for the SpineElf_START folder on your Desktop, select it and click Choose.

spine4

Now your Tree window contains all of the art for your elf in the Images folder.

spine5

At this point, you would normally save your project. After all, the Number 1 rule in developing is to save often.

Unfortunately, if you’re using the trial version of Spine, you won’t be able to save. However, if you’ve upgraded to the Essential or Professional version, you can Ctrl+S or Cmd+S now and save your project in the SpineElf_START folder.

If you’re using the trial version, don’t fret. You’re a developer making your own animations, which means you are bold and adventurous! That Cmd+S hotkey is for the faint of heart, which you certainly are not!

have_a_spine

Assembling Your Character

To create your character, you’re going to need to enroll in some anatomy and fine art classes at your local university. Just kidding! Since this tutorial provides the art for you, all you need to do is drag and drop the images onto the stage.

Select the body label in the Images folder and then drag it onto the stage.

spine6

spine7

As far as I can tell, there’s no way to drag the canvas itself around. So to get the view of the canvas where you want it, you have to zoom out (using the mouse scroll wheel), and then zoom in to the portion of the canvas you want to look at. If anyone finds a better way to do this, let me know. Update: @mig_akira pointed out that you can move the canvas by right-clicking somewhere and moving the mouse. Thanks!

Now drag the head label onto the stage.

spine8

Drag the lArm, lLeg, rArm and rLeg labels onto the stage, but not head2 or head3.

spine9

If you accidentally dragged head2 and/or head3 onto the stage, don’t worry. Ctrl+Z or Cmd+Z will undo any mistakes you make. Although you may be bold enough to work without saving, even the toughest of the tough still use the undo hotkey!

Now you need to assemble your elf. You can build him better, stronger and faster. Select the Translate tool in the Transform toolbar and then select the elf’s head.

spine10

Drag the elf’s head to the top of his body. If your stage isn’t big enough, you can either use your mouse scroll wheel to zoom out or use the zoom tool on the left side of Spine.

spine11

Using the same Translate tool, drag the elf’s arms and legs to their appropriate positions.

spine12

Wait a second… why are his left arm and leg on top of his torso instead of behind it? It looks like you need to adjust the order of body parts.

Changing the Draw Order

Above the Images folder, you’ll see a listing called Draw Order. If you’re familiar with Adobe Photoshop or Sketchbook Pro, think of the draw order as layers. The artwork on the top of the list appears on top of the artwork below it.

spine13

To rearrange the draw order, simply drag and drop a label up or down the list. Rearrange the order from top to bottom to be like this: rArm, rLeg, head, body, lLeg and lArm.

spine14

Your elf should now look like this:

spine16

Now that’s a good-looking elf! The final step in setting up your elf is to align his feet with the horizon line in Spine. You can do this by moving each body part one-by-one—or you can select everything and do it in one swoop, which is much easier.

Select all of the elf’s body parts in the Draw Order folder by Shift+clicking.

spine15

While still using the Translate tool, drag the elf so that he’s standing right on the horizon line.

spine17

You might be wondering what you’re supposed to do with those other two head images. After all, head2 and head3 have just been sitting there, patiently waiting for you to use them.

Multiple Images for One Body Part

Above the Draw Order folder, there is a listing for root. Click the drop-down arrow next to root and you’ll see all of the body parts listed.

spine18

Click on the drop-down arrow next to head and you’ll see the image of the head that is attached to this body part.

spine19

You can add multiple images to each body part and switch between them to animate your character. Drag head2 from the images folder and drop it under head in the root listing.

spine20

Note that when you drag head2 on to the canvas, it might default to the origin. If that happens, just move the head back to where it belongs.

Do the same for head3.

spine21

If you want to toggle between the elf’s different faces, click the dots under the eye in the Tree panel.

spine22

Now that you’re using all of the artwork, you can build your elf’s bones!

Bone Up!: Adding a Skeleton

It’s time to give your elf some bones. How else is he going to move if he doesn’t have a skeleton?

In the Tree window, select the root listing.

spine23

Then select the Create tool from the Tools window at the bottom of Spine.

spine24

Click on the middle of the elf’s chest. This creates a new bone called bone1 (or maybe just bone).

spine25

Now click and drag from the bottom of the elf’s head to his hat. This creates a new joint where his neck would be.

spine26

The attached bone is called bone2 and appears under bone1 in the Tree window because bone2 is a child of bone1. That means if you were to move bone1, bone2 and any other children of bone1 would also move.

spine27

In the Tree window, select bone1. This will make the next bone you create also a child of bone1.

spine28

Click and drag from the point where the elf’s left arm meets his torso down to his elbow to create bone3.

spine29

Repeat the same procedure for the elf’s right arm and both legs. First, click bone1 in the Tree window.

spine30

Then click the point where the elf’s right arm meets his torso and drag down to his right elbow.

spine31

Go back and click bone1 in the Tree window again.

spine32

Next, click the point where his left leg meets his body and drag down to his knee.

spine33

Do the same for the elf’s right leg.

spine34

spine35

The elf’s skeleton is now complete. The head bone’s connected to the… body bone! The arm bone’s connected to the… body bone! The leg bone’s connected to the… body bone! And that’s the way it goes.

Note: You can create more complex skeletons depending on your character’s needs. You can have bones for shoulders, elbows, wrists, ankles, tails and even clothing. If you were to add an upper arm and forearm, you’d want to parent the forearm to the upper arm and the upper arm to the torso. That way, all pieces of the arm would be tied together.

Attaching the Bones to the Body

Now you’ve got images that are pieced together to look like an elf and a skeleton that can fit inside the elf, but they’re not actually attached to each other. You don’t believe me? Select the Rotate tool and then click on any of the skeleton’s bones.

spine36

Click anywhere on the stage and drag. The bones rotate, but the elf doesn’t move. D’oh!

spine37

Hit Ctrl+Z or Cmd+Z to undo the bone rotation and look at the Tree window. You’ll see that the images and bones are in different lists—that’s why they’re not paired.

spine38

To pair them, you’ll have to—you guessed it: drag and drop! Click on the body image in the Tree window and drag it down to bone1.

spine39

Notice how body is now listed under bone1? The body bone and the body image are now married and can function as one. Awww!

spine40

Drag the head image down to bone2 to attach the head bone to the elf’s head.

spine41

If you’re ever confused by the hierarchy structure in the Tree window, an easy way to tell if the body parts have been properly attached is to test them. Select the rotate tool like you did before and then select the skeleton bone. Click and drag on the stage to see if the image moves when you move the skeleton. You can always undo any mistakes by hitting Ctrl+Z or Cmd+Z.

spine42

Drag and drop the following:

  • lArm to bone3
  • rArm to bone4
  • lLeg to bone5
  • rLeg to bone6

spine43

Your elf now has a fully functioning skeleton! And just think—all you’ve done so far is drag and drop. Next you’re going to move onto animating your elf. This simply requires more dragging and dropping. Who would have guessed?

A Standing Animation

The first animation you’ll create is one of the elf standing. You might be asking yourself, “Isn’t he already standing? That doesn’t require animation!”

True, Santa’s little helper is already standing, but he’s not doing anything. That makes him a pretty boring subject, but you can give him some subtle movements while he’s standing in place. That will make for a more interesting game.

To switch to the Animate mode, click the word SETUP in the upper-left of Spine. This brings up a timeline at the bottom of the screen.

spine44

spine45

In the Tree window, click on Animations and then on New Animation.

spine46

Name the new animation standing.

spine47

spine48

Assuming you’re using the trial version of Spine, you have access to the advanced features of the Professional version that aren’t included in the Essential version, including the Dopesheet and Auto Key.

Using the Dopesheet and Auto Keying

Think of the Dopesheet as a more advanced timeline on which your animation will play. And Auto Key lets Spine set the keyframes for you when you animate your character. But what are keyframes, you ask?

Keyframes are an animation’s most important frames. If you wanted to animate a ball rolling from the left to the right, you’d need one keyframe for the ball on the left and one keyframe for the ball on the right. The frames between the keyframes are called in-betweens, also referred to as “tweens”. Spine creates the in-betweens for you and Auto Key will help you set the keyframes. Pretty sweet!

Click on the Dopesheet and Auto Key buttons at the bottom of Spine.

spine50

Hold down the Cmd or Ctrl key and click on the left arm, right arm and head bones of the elf’s skeleton.

spine51

In the Transform window, there are three green key icons. Click on each key once to turn it red.

spine52

spine53

This simply sets the initial keyframes for the elf’s arms and head, which you will see in the Dopesheet.

spine54

You won’t need to set keyframes on the elf’s legs in this animation, since he’ll be standing still. Also, since you’ve enabled Auto Key, that was the last time you’ll have to click on the key icons. Spine will do it automatically for the rest of the standing animation.

Select the Rotate tool if it’s not already selected, and then click on the elf’s head bone in the skeleton.

spine55

On the timeline in the Dopesheet, click on the mark for frame 5. To keep things simple, you’ll animate everything by increments of 5.

spine56

Now click and drag on the stage to move the elf’s head forward slightly. Subtlety is key here, unless you want him to look very cartoony. Since you’ve enabled Auto Key, Spine makes a new keyframe for you on the 5th frame.

spine57

spine58

You can also change his facial expression here. In the Tree window, navigate to the head image under bone2 and expand the list by clicking the corresponding arrow icon.

spine59

Click on the dot underneath the eye icon next to head to display the image of the elf smiling.

spine63

If you see a red dot next to the head listing under the key icon, you’re good to go. But if you see a yellow dot instead, this is to show you that you’ve made an uncommitted change. Click the yellow dot to turn it red, which sets a keyframe for the image swap.

spine64

spine65

Click on frame 10 in the Dopesheet timeline and move the elf’s head slightly forward again by clicking and dragging on the stage.

spine67

To speed up the animating process, you can also copy and paste keyframes. With the head bone still selected, look in the timeline and click on the white rectangle on frame 5 in the standing row. Then click the copy button.

spine68

Click on frame 15 and then click the paste button.

spine69

Now select frame 0, click copy, select frame 20 and then click paste.

spine70

In the playback controls, click the loop button and then play. Your elf is now bobbing his head back and forth.

spine71

Note: If you want to experiment further, try changing the elf’s head on different keyframes. Remember to select the frame you want, pick the different head image in the Tree window and then click the yellow dot to turn it red to set the keyframe.

Completing the Animation

Now onto the arms! Select frame 0 and then select the elf’s right arm bone. Then, simply follow the same steps that you used to animate his head.

spine72

Select frame 5 and rotate his right arm slightly outward.

spine73

Select frame 10 and move it slightly outward again.

spine74

Click the white rectangle on frame 5 in the standing row and then click copy. Paste it on frame 15.

spine75

Click the white rectangle on frame 0 in the standing row, click copy and paste it on frame 20.

spine76

Repeat the same steps for the left arm and then click play to see the results. It’s a fully animated elf!

spine77

A Walking and Tripping Animation

If you’re new to animation, what you just did may have seemed like a lot of work. In actuality, all it took to animate the elf was to select a frame, move a body part, select a frame, move a body part and then copy and paste. In the traditional days of animation, what you just did could have taken at least a day to complete.

Now you’ll create a new animation where the elf will take a couple of steps and fall to the ground. Since you’re well on your way to becoming a professional animator, these steps will seem familiar and go quickly.

In the Tree window, click on Animations, then on New Animation and name it walking.

spine78

You’ve created a brand new animation file, so Spine has reset the elf to his default position. Select frame 0 in the timeline and then in the Tree window, Shift+select all of the bones.

spine79

Click the green key icons in the Transform window to turn them red. This sets the initial keyframe.

spine80

First He Walks…

Select frame 5, then select the elf’s left leg bone and rotate it forward slightly. Then select his right arm bone and rotate that forward slightly. When humans (and elves) walk, they alternate opposing arm and leg movement, so make sure you alternate opposing arms and legs.

spine81

Select frame 10 in the timeline and then rotate both the elf’s left leg and right arm forward a bit more. Rotate the elf’s right leg and left arm backward slightly and his head forward.

spine82

Select frame 15 and begin slowly reversing the animation by rotating his left leg backward, right leg forward and so forth.

If you’re having trouble keeping the elf’s feet level with the horizon line, select his body bone and then select the Translate tool to move his entire body. This is why you made the torso the parent for all of the other bones earlier in the tutorial.

spine83

…Then He Trips!

Start the tripping motion on frame 20. When someone trips, their feet get caught up behind them, their arms go forward and their head leans backwards. Begin to simulate that movement with your elf.

Now is also the time to swap out the head image for head2. Remember to bring up head2 in the Tree window, and then click the yellow dot to turn it red next to head.

spine84

On frame 25, use the Translate tool to select the body bone to raise the elf off the ground. Switch to the Rotate tool and rotate his entire body to exaggerate the tripping motion. Continue with the rotation of the arms, legs and neck.

If at any point you notice a limb starting to pop out, use the Translate tool to shift it back behind the body.

spine85

By frame 30, you can really start to get the elf airborne and flying like Superman.

spine87

On frame 35, begin the downward motion of the elf falling back to the ground.

spine88

On frame 40, make the elf begin his initial impact with the ground.

spine89

Change the elf’s head to head3 via the Tree window on frame 45 to really sell that ground impact.

spine90

On frame 50, make your elf lie face-first on the ground. Now you have the chance to add some fine details to enhance the animation’s effect. When someone hits their face on the ground, their head bounces slightly. Animate this by going to frame 51 and rotating the head bone up slightly, and then moving it back down by frame 53.

spine91

And there you have it! You’ve created an animation of an elf standing and an animation of an elf doing a face plant. If at any time you want to switch between the animations, simply click on the circle under the eye icon in the Animations listing in the Tree window.

spine92

Optional: Exporting Your Work

If you’ve decided to upgrade your Spine license from the trial version, you have the ability to export your animations. To do this, first click on the Spine logo in the upper-left and choose Export.

spine93

Here you probably want the JSON option – this creates a compact file describing the animation that the Spine runtimes know how to read. Save the file as elf.json, in the same SpineElf_START folder on your desktop.

If you’re unsure how to implement the animations in your app, have a look at the runtimes you can use.

spine94

Where to Go From Here?

This tutorial is a very basic example of what you can do with Spine. Please experiment with adding keyframes in other increments to change the timing, adding different artwork, building more complex skeletons, animating logos and anything else you can think of.

I’ve created a slightly more complex version of the elf tripping, which you can download here. If you don’t have an Essential or Professional license, you won’t be able to open the .spine project file, but you will be able to use the JSON file I’ve included to import it into your game.

If you’re interested in animation, it is definitely worth checking out the book The Animator’s Survival Kit by Richard Williams. Every animator I know has a copy of this book in his or her studio, as it’s the go-to guide for animation. If you’re not one for paperback books, they’ve also converted it into an iPad app.

Finally, if you enjoyed this tutorial, stay tuned for an upcoming tutorial by our own Ray Wenderlich on how to integrate your animations into a Sprite Kit game!

If you have any questions, comments or suggestions, feel free to join the discussion below!


Integrating Spine with SpriteKit Tutorial

Integrate Spine with SpriteKit!

Spine is a tool that allows you to easily create animated sprites for your games, in an incredibly efficient and flexible manner.

In our previous tutorial, you learned how to use Spine to create an animated elf.

In this short tutorial, you’ll learn how to take that animated elf and put it into a simple Sprite Kit game. Let’s dive right in!

Warning: The Spine-SpriteKit runtime you are going to use in this tutorial is an unofficial runtime, and is likely to be replaced with an official runtime in the future (fingers crossed).

After taking a look at the unofficial runtime, I think there are some things it’s missing, so I wouldn’t really recommend using it unless you’re an experienced coder comfortable with hacking around, etc.

For less experienced coders, I’d recommend waiting for the official runtime for SpriteKit, or using a different (official) Spine runtime at this point. You can use this tutorial just to get your feet wet with a working example for now.

Getting Started

Spine comes with a huge list of runtimes for almost every game framework, which contain all of the code you need to parse and use Spine animations in your game.

Good news for Sprite Kit fans – there is an (unofficial) Sprite Kit runtime, too. This project has a dependency on the official spine runtime, so rather than downloading the project from github as a zip, it’s better to download it from the command line as follows:

$ git clone https://github.com/simonkim/spine-spritekit
$ cd spine-spritekit
$ git submodule init
$ git submodule update

Next, open Spine-Spritekit-Demo\Spine-SpriteKit-Demo.xcodeproj. Build and run the project, and you should see something that looks like this:

Spine Sprite Kit Demo

This demo project shows off some of the sample animations that come with Spine. Feel free to poke around if you’d like – but when you’re ready to make your own Sprite Kit game using your elf animation, read on!

Integrating Spine-SpriteKit

Create a new project with the iOS\Application\SpriteKit Game template. Name the project SpineTest, and save it to your Desktop.

Next, copy the spine-runtimes\spine-c and spine-spritekit folders into your SpineTest directory. At this point your directory should look like this:

Copying Spine Folders

Back in Xcode, drag the spine-c and spine-spritekit folders from your project directory into your project. Make sure that Create groups for any added folders is selected, that the SpineTest target is checked, and click Finish.

Open the spine-c group in your project navigator and delete everything except for include and src. Choose Remove References. At this point your project navigator should look like the following:

Spine files in project navigator

Next, select SpineTest in the Project Navigator, select the SpineTest target, and select the Build Settings tab. Double click Search Paths\Header Search Paths and enter the following paths:

  • ./spine-c/include
  • ./spine-spritekit

Adding the include directories

Finally, to test that it works open MyScene.m and add this import to the top of the file:

#import "DZSpineScene.h"

Build and run – if it compiles and runs with no errors, you have successfully integrated Spine-SpriteKit into your game!

Packing Your Art

Next, you need the artwork from the previous tutorial. If you don’t have it already, you can download it here.

Remember in Sprite Kit, it’s best to put any images you want to work with inside a texture atlas. However, at this time the Spine-SpriteKit runtime does not support Sprite Kit’s built-in texture packer format. Instead, you need to use either Spine’s (free) built-in texture packer or the paid tool TexturePacker to pack the sprites into a spritesheet (in .atlas and .json format).

If you are using the solution from the previous tutorial, we have already made the spritesheet in .atlas/.json format for you, so you can skip the rest of this section. But if you followed along with the previous tutorial and want to pack your own output from Spine to use, keep reading.

In this tutorial, you’re going to use Spine’s built-in texture packing. To do this, open your Spine project, click the Spine logo, and select the TexturePacker menu option.

Select Texture Packer menu option

Use the Browse buttons to select the directory where your Spine project and PNGs are, and enter skeleton for the name:

Texture Packer Settings

You can leave all the settings as default. Click Pack, and some text should appear that says Packing complete.

Adding Your Art

At this point you should have three files in your Spine project’s directory:

  • skeleton.png: All of the pieces of the elf efficiently packed into a small image using TexturePacker.
  • skeleton.atlas: A file describing the original names of each elf piece and their locations in skeleton.png, made by TexturePacker.
  • skeleton.json: The file generated by Spine that describes the animations and how to move each sprite over time.

The three Spine files

Drag these three files into your Xcode project (inside the SpineTest group). Make sure that Copy items into destination group’s folder (if needed) is checked and that the SpineTest target is checked, and click Finish.

All right, now you finally have all the pieces in place – time to code!

Basic Animation

Open MyScene.m and add this import to the top of the file:

#import "DZSpineSceneBuilder.h"

DZSpineSceneBuilder is the main class in the Spine-SpriteKit runtime used to read the spine output and convert it into SKNodes and actions.

Next declare a few private instance variables as follows:

@implementation MyScene {
    SpineSkeleton *_skeleton;
    DZSpineSceneBuilder *_builder;
    SKNode *_elf;
    SKNode *_spineNode;
}

The first variable keeps track of the skeleton you want to work with (the elf skeleton in this case), the next keeps track of the scene builder class, and the last two are the elf placeholder node and the Spine node it will contain.

Note that with Spine-SpriteKit, you need to have a “placeholder” node (i.e. _elf) to put the spine nodes within. This is so that you can position the spine nodes where they belong on the screen (otherwise they will be fixed to the bottom left).

Next, replace initWithSize: with the following:

-(id)initWithSize:(CGSize)size {    
    if (self = [super initWithSize:size]) {
 
        // 1
        _skeleton = [DZSpineSceneBuilder loadSkeletonName:@"skeleton" scale:0.5];
 
        // 2
        _builder = [DZSpineSceneBuilder builder];
 
        // 3
        _elf = [SKNode node];
        _elf.position = CGPointMake(self.size.width/2, 0);
        [self addChild:_elf];
 
        // 4
        _spineNode = [_builder nodeWithSkeleton:_skeleton animationName:@"trip" loop:NO];
        [_elf addChild:_spineNode];
 
    }
    return self;
}

Let’s go over this bit by bit:

  1. Loads the skeleton that you want to work with. In the previous tutorial, you probably never changed the name so it will be the default “skeleton”. However you can have more than one skeleton in a Spine file.
  2. Loads the builder object, which converts Spine files to SKNodes/SKActions.
  3. Makes the “placeholder” node that allows you to position the elf, and places it at the bottom center of the screen.
  4. Here’s the important part – this helper method creates the SKNode chain given a particular skeleton and an animation to run on the skeleton. It adds it as a child of the elf, and the animations start to play right away.

Note: The animation name shown here (“trip”) may be different for you if you are not using the sample files. Check in Spine what exactly you named the animation and replace this appropriately, or it will not run the animation.

That’s it – build and run, and you’ll see your animated sprite!

AnimatedSprite

Changing Animations

Like I said, this SpriteKit-Spine library is still unofficial and in the early stages, and it seems to be missing (as far as I can tell) a bunch of handy functionality you’d typically want to use, such as the ability to change the animation to something else after you start a node.

Luckily, this is fairly easy to hack in. Open spine-spritekit\SpriteKit\DZSpineSceneBuilder.h and add the following method:

- (void)runAnimationName:(NSString *)animationName skeleton:(SpineSkeleton *)skeleton loop:(BOOL)loop;

Then open DZSpineSceneBuilder.m and implement the method as follows:

- (void)runAnimationName:(NSString *)animationName skeleton:(SpineSkeleton *)skeleton loop:(BOOL)loop {
 
    SpineAnimation *animation = [skeleton animationWithName:animationName];
    if (!animation) {
        NSLog(@"No such animation: %@", animation.name);
        return;
    }
 
    DZSpineSpriteKitAnimation *skAnimation = [[DZSpineSpriteKitAnimation alloc] initWithSkeleton:skeleton maps:self.maps];
 
    // Bone Animations
    //[skAnimation chainAnimations:[animations copy] rootBone:bone rootNode:root loop:loop];
    [skAnimation applyBoneAnimations:@[animation] loop:loop];
 
    // Slot Animations
    [skAnimation applySlotAnimations:@[animation] loop:loop];
}

This is a helper method to make the sprite run a different animation. To use it, open MyScene.m and add the following:

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
 
    [_builder runAnimationName:@"standing" skeleton:_skeleton loop:NO];
 
}

Build and run, and now when you tap the screen, the sprite performs the standing animation.

Standing Animation

Where To Go From Here?

Here is the finished example project from the above tutorial.

At this point, you can see your Spine animation working in a game. If Sprite Kit isn’t your engine of choice, you can check out one of the many other runtimes and follow a similar process.

Speaking of which – I chose Sprite Kit for this tutorial since I’m particularly interested in it (we recently wrote a book on the subject), but if you’d like a similar tutorial on integrating Spine with another game framework, let me know.

I hope this has got you excited about the possibilities of using Spine in your game. If you have any questions or comments, please join the forum discussion below!


Easily Overlooked New Features in iOS 7

Did you know about these hidden gems in iOS 7?

iOS 7 has been out for some time now. By now, you’re undoubtedly aware of its groundbreaking visual design; you’ve dabbled in the new APIs such as SpriteKit, UIKit Dynamics and TextKit; and as a developer you’ve probably taken at least a few first steps with Xcode 5.

However, this is one of the biggest iOS releases ever, especially in terms of new and deprecated features. Unless you’re the type to stay up all night reading the iOS 7 release notes, it’s pretty likely that there are one or two new changes that you might have overlooked.

We’ve compiled a handy (but non-exhaustive!) list of some of the more urgent and interesting changes in iOS 7. Have a read through and see what new gems you weren’t aware of before!

The Bad News, the Good News, and the Really Good News

There’s some bad news, good news, and really good news about iOS 7.

  • The bad news: There are a few app-breaking changes in iOS 7 which you need to be on top of right away. If you haven’t already taken a close look at these changes in iOS 7, then you’d do well to read up on them, as they have the capability to break your existing apps when run on iOS 7!
  • The good news: A few of the features and APIs that you’re familiar with have been enhanced in iOS 7 — but there are a few other features that have been deprecated. Taking the time to review these changes would be a great investment in the future of your current iOS apps.
  • The really good news: The introduction of iOS 7 really shook up the mobile development world, but out of that tumultuous event came a bunch of really neat new features that have the potential to give your apps an edge, and may even serve as a catalyst for dreaming up really innovative apps for the future.

This article breaks down the easily overlooked features in iOS 7 into these three categories. Feel free to use this table of contents to quickly jump to the section you’re interested in – or keep reading to learn about all the changes!

The Bad News: App-Breaking Changes

  1. -[UIDevice uniqueIdentifier] is no more
  2. UIPasteboard as shared area is now sandboxed
  3. MAC addresses can’t be used to identify a device
  4. iOS now requests user consent for apps to use the microphone

The Good News: Enhancements & Deprecations

  1. Implementation of -[NSArray firstObject]
  2. Introduction of instancetype
  3. Tint images with UIImage.renderingMode
  4. Usage of tintColor vs barTintColor
  5. Texture colors are gone
  6. UIButtonTypeRoundRect is deprecated in favor of UIButtonTypeSystem

The Really Good News: New Features

  1. Check which wireless routes are available
  2. Get information about the cellular radio signal
  3. Sync passwords between user’s devices via iCloud
  4. Display HTML with NSAttributedString
  5. Use native Base64
  6. Check screenshots with UIApplicationUserDidTakeScreenshotNotification
  7. Implement multi-language speech synthesis
  8. Use the new UIScreenEdgePanGestureRecognizer
  9. Realize Message.app behavior with UIScrollViewKeyboardDismissMode
  10. Detect blinks and smiles with Core Image
  11. Add links to UITextViews

The Bad News: App-Breaking Changes

This section is dedicated to changes that you probably noticed during your transition to iOS 7, but you might not have known just how deep the changes go — and how they may affect your apps. The fact that these changes are all related to user privacy should tip you off to how important user privacy is to Apple (and hence to you)!

1. -[UIDevice uniqueIdentifier] is no more

Apple has always taken the privacy of users quite seriously. -[UIDevice uniqueIdentifier] was originally deprecated on iOS 5, but iOS 7 drops it altogether. Xcode 5 won’t even let you compile an app that contains a reference to -[UIDevice uniqueIdentifier]! Additionally, the behavior of pre-iOS 7 apps that use -[UIDevice uniqueIdentifier] has changed on iOS 7: instead of returning the device’s UUID, this call returns a string starting with FFFFFFFF, followed by the hex value of -[UIDevice identifierForVendor].

2. UIPasteboard as shared area is now sandboxed

UIPasteboard is used to share data among apps. While that isn’t an issue in itself, a problem arose when developers started to use it to store generated identifiers and share the identifiers with all other interested apps. One library using this trick is OpenUDID.

In iOS 7, pasteboards created with +[UIPasteboard pasteboardWithName:create:] and +[UIPasteboard pasteboardWithUniqueName] are now only visible to apps in the same application group, which makes OpenUDID much less useful than it once was.
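For example, a named pasteboard that used to be visible to every app on the device is now only visible within your own apps; the name string below is just an illustration:

// Before iOS 7, any app could open this pasteboard by name and read the value;
// on iOS 7, only apps in the same application group can see it
UIPasteboard *shared = [UIPasteboard pasteboardWithName:@"com.example.sharedIdentifiers" create:YES];
shared.string = @"some-generated-identifier";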

It’s still fine to use this kind of Mac though

3. MAC addresses can’t be used to identify a device

Using the iOS device’s Media Access Control (MAC) address was another common approach to generate unique identifiers on iOS devices. A MAC address is a unique number assigned to the network adapter at the physical network level. Apple has alternate names for this address, such as “Hardware Address” or “Wi-Fi Address” in some cases, but these terms all refer to the same thing.

A lot of projects and frameworks used this approach to generate unique device IDs, such as ODIN. However, Apple doesn’t want anyone to potentially identify a user by their MAC address, so all calls to retrieve the MAC address on iOS 7 return 02:00:00:00:00:00. That’s it. No crying. Put those tears away.

Apple made it pretty clear that you should use -[UIDevice identifierForVendor] or -[ASIdentifierManager advertisingIdentifier] as unique identifiers in your frameworks and applications. Frankly, it’s not all that hard to implement these changes, as shown in the code snippet below:

NSString *identifierForVendor = [[UIDevice currentDevice].identifierForVendor UUIDString];
NSString *identifierForAdvertising = [[ASIdentifierManager sharedManager].advertisingIdentifier UUIDString];

Each of these approaches is best suited to a specific use case:

  • identifierForVendor is a value that is unique to the vendor; that is, all apps released by the same company running on the same device will have the same identifier. However, this value will change if the user deletes a vendor’s apps from their device and later reinstalls those same apps. So this scheme isn’t persistent.
  • advertisingIdentifier returns the same value to all vendors running on the same device and should only be used to serve up advertisements. This value too may change in some scenarios, such as when the user erases the device.

You can read more about the various approaches in this post written by Ole Begemann.

4. iOS now requests user consent for apps to use the microphone

In previous versions, iOS prompted the user for permission to retrieve the user’s location; to access their contacts, calendars, reminders and photos; to receive push notifications; and to use their social networks. In iOS 7, access to the microphone is now on that list. If the user doesn’t grant permission for an app to use the microphone, then apps using the microphone will only receive silence.

Here’s a bit of code you can use to detect if your app has been given permission to access the microphone:

// The first time you call this method, the system prompts the user to grant your app access
// to the microphone; any other time you call this method, the system will not prompt the user
// and instead passes the previous value for 'granted'
[[AVAudioSession sharedInstance] requestRecordPermission:^(BOOL granted) {
    if (granted) {
        // the user granted permission!
    } else {
        // maybe show a reminder to let the user know that the app has no permission?
    }
}];

Also note that using any methods to access the microphone before the user has granted permission will cause iOS to display the following alert:

Apps on iOS 7 need to get your permission to access the microphone!

The Good News: Enhancements & Deprecations

So that’s it for the significant stuff that can break your existing apps. However, there are a few enhancements and deprecations of existing APIs that may affect your apps in ways you might not notice at first glance.

5. Implementation of -[NSArray firstObject]

-[NSArray firstObject] is probably one of the most requested APIs in Objective-C. A simple search on Open Radar shows several requests that have been filed with Apple. The good news is that it’s finally available. firstObject actually goes back as far as iOS 4.0, but only as a private method. Previously, developers worked around this in the following fashion:

NSArray *arr = @[];
// iOS 7 and later:
id item = [arr firstObject];
// previously you had to do the following:
id legacyItem = [arr count] > 0 ? arr[0] : nil;

Since the above pattern was fairly common, several people added this as a category to NSArray and created their own firstObject method. Do a quick search on GitHub and you’ll see just how many times this has been implemented in the past.

The problem with this approach is that method names in categories must be unique; when two categories on the same class define a method with the same name, which implementation wins is undefined. Apple recommends that you always prefix method names when creating categories on framework classes. Be sure to check whether you have any custom code that implements firstObject on NSArray, and either prefix it as necessary or remove it entirely.
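For reference, here's a minimal sketch of a safely prefixed category, using a hypothetical rwt_ prefix:

@interface NSArray (RWTAdditions)
- (id)rwt_firstObject; // prefixed to avoid colliding with Apple's firstObject
@end

@implementation NSArray (RWTAdditions)
- (id)rwt_firstObject {
    return self.count > 0 ? self[0] : nil;
}
@end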

6. Introduction of instancetype

instancetype makes the iOS 7 API diffs a lot harder to read, because Apple changed most initializers and convenience constructors to return instancetype instead of id. But what is this new keyword, anyway?

instancetype is used in method declarations to indicate the return type to the compiler; it indicates that the object returned will be an instance of the class on which the method is called. It’s better than returning id as the compiler can do a bit of error-checking against return types at compile time, as opposed to only detecting these issues at run time. It also does away with the need to cast the type of the returned value when calling methods on subclasses.

The long and short of instancetype? Basically, use it whenever possible.
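To make that concrete, here's a minimal sketch using a hypothetical Person class:

@interface Person : NSObject
+ (instancetype)personWithName:(NSString *)name; // the compiler treats this as returning Person *
@end

// Because the return type is instancetype rather than id, the compiler
// knows [Person personWithName:@"Ray"] produces a Person *, and will
// warn about mistakes like assigning the result to an NSString *.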

You can read more about instancetype in What’s New in Objective-C and Foundation in iOS 7 by Matt Galloway, as well as on NSHipster.

7. Tint images with UIImage.renderingMode

Tinting is a big part of the new look and feel of iOS 7, and you can control whether or not your image is tinted when it’s rendered. UIImage now has a read-only property named renderingMode, as well as a new method imageWithRenderingMode: that uses the new UIImageRenderingMode enum with the following possible values:

UIImageRenderingModeAutomatic      // Use the default rendering mode for the context where the image is used
UIImageRenderingModeAlwaysOriginal // Always draw the original image, without treating it as a template
UIImageRenderingModeAlwaysTemplate // Always draw the image as a template image, ignoring its color information

The default value of renderingMode is UIImageRenderingModeAutomatic. Whether the image will be tinted or not depends on where it’s being displayed as shown by the examples below:

UIImageRenderingMode Cheat Sheet

The code below shows how easy it is to create an image with a given rendering mode:

UIImage *img = [UIImage imageNamed:@"myimage"];
img = [img imageWithRenderingMode:UIImageRenderingModeAlwaysTemplate];

8. Usage of tintColor vs barTintColor

In iOS 7 you can tint your entire app with a given color or even implement color themes to help your app stand out from the rest. Setting the tint color of your app is as easy as using the new property tintColor of UIView.

Does that property sound familiar? It should; some classes such as UINavigationBar, UISearchBar, UITabBar and UIToolbar already had a property with this name. They now have a new property: barTintColor.

In order to avoid getting tripped up by the new property, you should perform the following check if your app needs to support iOS 6 or earlier:

UINavigationBar *bar = self.navigationController.navigationBar;
UIColor *color = [UIColor greenColor];
if ([bar respondsToSelector:@selector(setBarTintColor:)]) { // iOS 7+
    bar.barTintColor = color;
} else { // what year is this? 2012?
    bar.tintColor = color;
}

9. Texture colors are gone

More victims of Jony Ive

Texture colors? Yup, they’re gone. You can’t create colors that represent textures anymore. According to the comments in UIInterface.h, -[UIColor groupTableViewBackgroundColor] was supposed to be deprecated in iOS 6, but instead, it just doesn’t return the textured color it used to. However, the following colors have been deprecated in iOS 7:

+ (UIColor *)viewFlipsideBackgroundColor;
+ (UIColor *)scrollViewTexturedBackgroundColor;
+ (UIColor *)underPageBackgroundColor;

10. UIButtonTypeRoundRect has been deprecated in favor of UIButtonTypeSystem

Good bye, old friend.

One of your old friends from your beginnings in iOS development (and a popular control in many rapid prototypes) is now defunct: UIButtonTypeRoundRect has been replaced by a new UIButtonTypeSystem. Excuse the dust, but progress we must! :]

The Really Good News: New Features

What would a major release of iOS be without some new features? These new features have largely been well-received in the iOS community, and you may even find some novel ways to integrate them in your own apps!

11. Check which wireless routes are available

The ability to customize a video player (and friends) has evolved throughout the past few iOS versions. As an example, prior to iOS 6 you couldn’t change the AirPlay icon on a MPVolumeView.

In iOS 7, you’re finally able to know if a remote device is available via AirPlay, Bluetooth, or some other wireless mechanism. This allows your app to behave appropriately, such as hiding an AirPlay icon when that service isn’t available on other devices.

The following two new properties and notifications have been added to MPVolumeView:

@property (nonatomic, readonly) BOOL wirelessRoutesAvailable; // is there a route that the device can connect to?
@property (nonatomic, readonly) BOOL wirelessRouteActive; // is the device currently connected?
 
NSString *const MPVolumeViewWirelessRoutesAvailableDidChangeNotification;
NSString *const MPVolumeViewWirelessRouteActiveDidChangeNotification;
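Here's a minimal sketch of how you might put these to use, assuming volumeView is an MPVolumeView you've already added to your view hierarchy:

[[NSNotificationCenter defaultCenter] addObserverForName:MPVolumeViewWirelessRoutesAvailableDidChangeNotification
                                                  object:volumeView
                                                   queue:[NSOperationQueue mainQueue]
                                              usingBlock:^(NSNotification *note) {
    // only show the route picker when there's actually somewhere to route to
    volumeView.hidden = !volumeView.wirelessRoutesAvailable;
}];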

12. Get information about the cellular radio signal

Prior to iOS 7, you could detect whether a device was connected via WWAN or Wi-Fi using Reachability. iOS 7 takes it a step further and tells you exactly which kind of cellular radio network the device is connected to, such as EDGE, HSDPA or LTE. This can be extremely useful for tailoring the user experience to the speed of the network, by making fewer network requests or downloading lower-resolution images as appropriate.

This feature is part of the little-known CTTelephonyNetworkInfo class in the CoreTelephony framework. iOS 7 adds the currentRadioAccessTechnology property to this class, as well as the CTRadioAccessTechnologyDidChangeNotification notification. There are also new string constants that define the possible values, such as CTRadioAccessTechnologyLTE.

Here’s how you’d use this new feature in your app delegate:

 
@import CoreTelephony.CTTelephonyNetworkInfo; // new modules syntax!

@interface AppDelegate ()
// we need to keep a reference to the CTTelephonyNetworkInfo object, otherwise the notifications won't be fired!
@property (nonatomic, strong) CTTelephonyNetworkInfo *networkInfo;
@end

@implementation AppDelegate

- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
  // whatever stuff your method does...

  self.networkInfo = [[CTTelephonyNetworkInfo alloc] init];
  NSLog(@"Initial cell connection: %@", self.networkInfo.currentRadioAccessTechnology);
  [[NSNotificationCenter defaultCenter] addObserver:self
                                           selector:@selector(radioAccessChanged)
                                               name:CTRadioAccessTechnologyDidChangeNotification
                                             object:nil];

  // whatever stuff your method does...
  return YES;
}

- (void)radioAccessChanged {
  NSLog(@"Now you're connected via %@", self.networkInfo.currentRadioAccessTechnology);
}

@end
Note: Take a look at CTTelephonyNetworkInfo.h to discover the string constants for the other wireless technologies. Also be aware that currentRadioAccessTechnology returns nil when the device isn’t connected to a radio tower.

13. Sync passwords between user’s devices via iCloud

iOS 7 and Mavericks introduced iCloud Keychain to provide synchronization of passwords and other sensitive data via iCloud. This feature is available to developers via the kSecAttrSynchronizable key in the Keychain attributes dictionary.
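If you'd rather talk to the Security framework directly, here's a minimal sketch of saving a synchronizable item; the service and account names are hypothetical:

@import Security;

NSDictionary *attributes = @{
    (__bridge id)kSecClass: (__bridge id)kSecClassGenericPassword,
    (__bridge id)kSecAttrService: @"MyAwesomeService",
    (__bridge id)kSecAttrAccount: @"John Doe",
    (__bridge id)kSecValueData: [@"MySecretPassword" dataUsingEncoding:NSUTF8StringEncoding],
    (__bridge id)kSecAttrSynchronizable: @YES // ask the Keychain to sync this item via iCloud
};
OSStatus status = SecItemAdd((__bridge CFDictionaryRef)attributes, NULL);
// status == errSecSuccess means the item was stored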

Since dealing directly with the Keychain is often a real pain, wrapper libraries provide an easy way to work with the Keychain – without the sharp edges. The SSKeychain wrapper library is probably the best-known one out there, and as an added bonus, it currently supports iCloud sync out of the box.

The code snippet below shows how you’d use SSKeychain:

#import <SSKeychain.h>
 
- (BOOL)saveCredentials:(NSError **)error {
    SSKeychainQuery *query = [[SSKeychainQuery alloc] init];
    query.password = @"MySecretPassword";
    query.service = @"MyAwesomeService";
    query.account = @"John Doe";
    query.synchronizable = YES;
    return [query save:error]; // 'error' is already an NSError **, so pass it straight through
}
 
- (NSString *)savedPassword:(NSError **)error {
    SSKeychainQuery *query = [[SSKeychainQuery alloc] init];
    query.service = @"MyAwesomeService";
    query.account = @"John Doe";
    query.synchronizable = YES;
    query.password = nil;
    if ([query fetch:error]) {
        return query.password;
    }
    return nil;
}

Don’t forget that CocoaPods is a quick and easy way to install SSKeychain.

14. Display HTML with NSAttributedString

Using Webviews in your apps can be frustrating at times; even if you’re only displaying a small amount of HTML content, Webviews can consume a lot of memory. iOS 7 makes this a lot easier, as you can create an NSAttributedString from HTML with a few lines of code, as such:

NSString *html = @"<strong>Wow!</strong> Now <em>iOS</em> can create <h3>NSAttributedString</h3> from HTML!";
NSDictionary *options = @{NSDocumentTypeDocumentAttribute: NSHTMLTextDocumentType};
 
NSAttributedString *attrString = [[NSAttributedString alloc] initWithData:[html dataUsingEncoding:NSUTF8StringEncoding] options:options documentAttributes:nil error:nil];

Now you’re free to use the NSAttributedString on any UIKit object, such as a UILabel or a UITextField. Here's a minimal sketch, assuming the attrString from the snippet above:
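UILabel *label = [[UILabel alloc] initWithFrame:CGRectMake(20, 40, 280, 44)];
label.attributedText = attrString; // the attributed string built from HTML above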

Note: NSHTMLTextDocumentType is just one of the possible values for the NSDocumentTypeDocumentAttribute key. You can also use NSPlainTextDocumentType, NSRTFTextDocumentType or NSRTFDTextDocumentType.

You can also create an HTML string from an NSAttributedString as shown below:

NSAttributedString *attrString; // from previous code
NSDictionary *options = @{NSDocumentTypeDocumentAttribute: NSHTMLTextDocumentType};
 
NSData *htmlData = [attrString dataFromRange:NSMakeRange(0, [attrString length]) documentAttributes:options error:nil];
NSString *htmlString = [[NSString alloc] initWithData:htmlData encoding:NSUTF8StringEncoding];

That should encourage you to use HTML more freely in your applications! You can learn more about attributed strings in iOS 6 by Tutorials.

15. Use native Base64

Base64 is a popular way to represent binary data using ASCII characters. Until now, developers were forced to use one of the many open source implementations to encode and decode Base64 content.

iOS 7 introduces the following four new NSData methods to manipulate Base64-encoded data:

// From NSData.h
 
/* Create an NSData from a Base-64 encoded NSString using the given options. By default, returns nil when the input is not recognized as valid Base-64.
*/
- (id)initWithBase64EncodedString:(NSString *)base64String options:(NSDataBase64DecodingOptions)options;
 
/* Create a Base-64 encoded NSString from the receiver's contents using the given options.
*/
- (NSString *)base64EncodedStringWithOptions:(NSDataBase64EncodingOptions)options;
 
/* Create an NSData from a Base-64, UTF-8 encoded NSData. By default, returns nil when the input is not recognized as valid Base-64.
*/
- (id)initWithBase64EncodedData:(NSData *)base64Data options:(NSDataBase64DecodingOptions)options;
 
/* Create a Base-64, UTF-8 encoded NSData from the receiver's contents using the given options.
*/
- (NSData *)base64EncodedDataWithOptions:(NSDataBase64EncodingOptions)options;

These methods let you easily convert NSData objects to and from Base64, as shown in the following example:

NSData* sampleData = [@"Some sample data" dataUsingEncoding:NSUTF8StringEncoding];
 
NSString * base64String = [sampleData base64EncodedStringWithOptions:0];
NSLog(@"Base64-encoded string is %@", base64String); // prints "U29tZSBzYW1wbGUgZGF0YQ=="
 
NSData *dataFromString = [[NSData alloc] initWithBase64EncodedString:base64String options:0];
// note: [dataFromString bytes] isn't null-terminated, so build the string from the NSData directly
NSString *decodedString = [[NSString alloc] initWithData:dataFromString encoding:NSUTF8StringEncoding];
NSLog(@"String is %@", decodedString); // prints "String is Some sample data"

If you need to support iOS 6 or earlier, you can use the following two deprecated methods that are now public:

/* These methods first appeared in NSData.h on OS X 10.9 and iOS 7.0. They are deprecated in the same releases in favor of the methods in the NSDataBase64Encoding category. However, these methods have existed for several releases, so they may be used for applications targeting releases prior to OS X 10.9 and iOS 7.0.
*/
- (id)initWithBase64Encoding:(NSString *)base64String;
- (NSString *)base64Encoding;

16. Check screenshots with UIApplicationUserDidTakeScreenshotNotification

Prior to iOS 7, apps like Snapchat or Facebook Poke used some pretty creative methods to detect when a user took a screenshot. However, iOS 7 provides a brand-new notification for this event: UIApplicationUserDidTakeScreenshotNotification. Just subscribe to it as usual to know when a screenshot was taken.
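For example, here's a minimal sketch using the block-based observer API:

[[NSNotificationCenter defaultCenter] addObserverForName:UIApplicationUserDidTakeScreenshotNotification
                                                  object:nil
                                                   queue:[NSOperationQueue mainQueue]
                                              usingBlock:^(NSNotification *note) {
    NSLog(@"A screenshot was just taken!");
}];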

Note: UIApplicationUserDidTakeScreenshotNotification is posted after the screenshot is taken. Currently there is no way to be notified before a screenshot is taken, which could be useful for hiding an embarrassing photo. Hopefully Apple adds a UIApplicationUserWillTakeScreenshotNotification in iOS 8! :]

17. Implement multi-language speech synthesis

Wouldn’t it be nice if you could make your app speak? iOS 7 introduces two new classes: AVSpeechSynthesizer and AVSpeechUtterance. Together, they can give your app a voice. The really interesting news? There’s a huge selection of languages available, even ones that Siri doesn’t speak, like Brazilian Portuguese!

Using these two classes to provide speech synthesis in your apps is very easy. AVSpeechUtterance represents what and how you want to say something. Then, AVSpeechSynthesizer is used to say it, as shown in the code snippet below:

AVSpeechSynthesizer *synthesizer = [[AVSpeechSynthesizer alloc] init];
AVSpeechUtterance *utterance = 
  [AVSpeechUtterance speechUtteranceWithString:@"Wow, I have such a nice voice!"];
utterance.rate = AVSpeechUtteranceMaximumSpeechRate / 4.0f;
utterance.voice = [AVSpeechSynthesisVoice voiceWithLanguage:@"en-US"]; // defaults to your system language
[synthesizer speakUtterance:utterance];

That’s impressive — it only takes five lines of code to add speech to your app!

18. Use the new UIScreenEdgePanGestureRecognizer

UIScreenEdgePanGestureRecognizer inherits from UIPanGestureRecognizer and lets you detect gestures starting near the edge of the screen.

Using this new gesture recognizer is quite simple, as shown below:

UIScreenEdgePanGestureRecognizer *recognizer = [[UIScreenEdgePanGestureRecognizer alloc] initWithTarget:self action:@selector(handleScreenEdgeRecognizer:)];
recognizer.edges = UIRectEdgeLeft; // accept gestures that start from the left; we're probably building another hamburger menu!
[self.view addGestureRecognizer:recognizer];
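And here's a minimal sketch of the handler named in the selector above (the method name is our own):

- (void)handleScreenEdgeRecognizer:(UIScreenEdgePanGestureRecognizer *)recognizer {
    if (recognizer.state == UIGestureRecognizerStateBegan) {
        NSLog(@"The pan started from the left edge");
        // present your menu here
    }
}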

19. Recreate Messages.app behavior with UIScrollViewKeyboardDismissMode

Dismissing the keyboard while you scroll is such a nice experience in Messages.app. However, building this behavior into your own apps can be tough. Luckily, Apple added the handy property keyboardDismissMode on UIScrollView to make your life a little easier.

Now your app can behave like Messages.app just by changing a single property on your Storyboard, or alternatively by adding one line of code!

This property uses the new UIScrollViewKeyboardDismissMode enum. The possible values of this enum are as follows:

UIScrollViewKeyboardDismissModeNone        // the keyboard is not dismissed automatically when scrolling
UIScrollViewKeyboardDismissModeOnDrag      // dismisses the keyboard when a drag begins
UIScrollViewKeyboardDismissModeInteractive // the keyboard follows the dragging touch off screen, and may be pulled upward again to cancel the dismiss
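In code, it's a single assignment; a minimal sketch, assuming scrollView is the scroll view (or table view) in question:

scrollView.keyboardDismissMode = UIScrollViewKeyboardDismissModeInteractive;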

Here’s the Storyboard property to change to dismiss the keyboard on scroll:

You don’t even have to code to use UIScrollViewKeyboardDismissMode!

20. Detect blinks and smiles with CoreImage

iOS 7 adds two new face detection options to Core Image: CIDetectorEyeBlink and CIDetectorSmile. In plain English, that means you can now detect smiles and blinks in a photo! Unfortunately, that also means iOS 7 can now get its feelings hurt.

Here’s an example of how you could use it in your app:

UIImage *image = [UIImage imageNamed:@"myImage"];
// note: images loaded via imageNamed: are CGImage-backed, so image.CIImage is
// usually nil; create the CIImage explicitly instead
CIImage *ciImage = [CIImage imageWithCGImage:image.CGImage];
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                          context:nil
                                          options:@{CIDetectorAccuracy: CIDetectorAccuracyHigh}];

NSDictionary *options = @{ CIDetectorSmile: @YES, CIDetectorEyeBlink: @YES };

NSArray *features = [detector featuresInImage:ciImage options:options];

for (CIFaceFeature *feature in features) {
    NSLog(@"Bounds: %@", NSStringFromCGRect(feature.bounds));

    if (feature.hasSmile) {
        NSLog(@"Nice smile!");
    } else {
        NSLog(@"Why so serious?");
    }
    if (feature.leftEyeClosed || feature.rightEyeClosed) {
        NSLog(@"Open your eyes!");
    }
}

21. Add links to UITextViews

Creating your own Twitter client just got easier on iOS 7 — now you’re able to add a link to an NSAttributedString and invoke a custom action when it’s tapped.

First, create an NSAttributedString and add an NSLinkAttributeName attribute to it, as shown below:

NSMutableAttributedString *attributedString = [[NSMutableAttributedString alloc] initWithString:@"This is an example by @marcelofabri_"];
[attributedString addAttribute:NSLinkAttributeName
                         value:@"username://marcelofabri_"
                         range:[[attributedString string] rangeOfString:@"@marcelofabri_"]];
 
 
NSDictionary *linkAttributes = @{NSForegroundColorAttributeName: [UIColor greenColor],
                                 NSUnderlineColorAttributeName: [UIColor lightGrayColor],
                                 NSUnderlineStyleAttributeName: @(NSUnderlineStyleSingle | NSUnderlinePatternSolid)}; // NSUnderlinePatternSolid alone is 0, i.e. no underline
 
// assume that textView is a UITextView previously created (either by code or Interface Builder)
textView.linkTextAttributes = linkAttributes; // customizes the appearance of links
textView.attributedText = attributedString;
textView.delegate = self;

That makes a link appear in the body of your text. However, you can also control what happens when the link is tapped by implementing the new textView:shouldInteractWithURL:inRange: method of the UITextViewDelegate protocol, like so:

- (BOOL)textView:(UITextView *)textView shouldInteractWithURL:(NSURL *)URL inRange:(NSRange)characterRange {
    if ([[URL scheme] isEqualToString:@"username"]) {
        NSString *username = [URL host]; 
        // do something with this username
        // ...
        return NO;
    }
    return YES; // let the system open this URL
}

Where to Go From Here?

Wow! That’s a ton of new features; some you may already be familiar with, but some of them are probably news to you, as they were to me.

If you want to learn even more about the changes under the hood in iOS 7, I recommend taking a look at the following resources:

Have you found any other hidden gems in iOS 7? If so, come join the forum discussion and share your discoveries with everyone!
