Learn some new commands in Swift 2 to deal with scope.
Video Tutorial: What’s New in Swift 2 Part 2: Scope is a post from: Ray Wenderlich
The post Video Tutorial: What’s New in Swift 2 Part 2: Scope appeared first on Ray Wenderlich.
Hopefully you didn’t vomit when you read “numerical algorithms”. If you did, well, look on the bright side … you can have lunch again! :]
In this tutorial, you’ll learn what numerical algorithms are and how to use them to solve problems that don’t have an analytic solution. You’ll also learn how you can use playgrounds to easily visualize the solutions.
If the notion of math doesn’t excite you, nor are you an avid user of physics or computer science, you’ll still find value in this tutorial. All you need is a basic understanding of calculus and some elementary physics.
You’ll learn how to solve two different problems with numerical algorithms, but for the sake of learning, both also allow analytic solutions. Though numerical algorithms are ideal for those times when an analytic solution won’t work, it’s easier to understand how they work when you can compare the two methodologies.
Simply put, numerical algorithms are methods to solve mathematical problems that don’t rely on a closed-form analytic solution.
A closed-form analytic solution, by contrast, is any mathematical formula that you can use to find an exact value by plugging in the values you already know and performing a finite set of operations.
More simply put, if you can use algebra to find an expression to solve an unknown value, and all you need to do is substitute the known values and evaluate that expression, then you have a closed-form analytic solution.
For many problems, no analytic solution exists. For others, one exists, but it would take too long to calculate. In these cases, you need numerical algorithms.
For example, imagine that you wrote a physics engine that computes the behavior of many objects in a limited amount of time. In this scenario, you can use numerical algorithms to calculate the behavior much faster.
There is a downside: You pay for the faster calculation with less precise results. But in many cases, the result is good enough.
Weather forecasting is an example of an activity that benefits from the use of numerics. Think about how quickly the weather evolves and how many factors affect it; it’s a highly complex system, and only numerical simulations can handle the task of predicting the future.
Maybe a lack of these algorithms is why your iPhone tells you that it’s raining but a look outside says the opposite!
As a warm up, you’ll play a game, and then you’ll calculate the square root of a given number. For both tasks, you’ll use the bisection method. Surprise! You probably know this method, but maybe not by name.
Think back to the childhood game where you choose a number between one and 100, and then someone else has to guess it. The only hint you can give the other person is if the number is bigger or smaller than the guess.
Let’s say you’re guessing what I’ve chosen, and you start with one. I tell you the number is higher. Then you choose two, and again, I tell you it’s higher. Now you choose three, then four, and each time I tell you the number is bigger, until you get to five, which is my number.
After five steps you find the number — not bad — but if I chose 78, this approach would take quite a bit of time.
This game moves much faster when you use the bisection method to find the solution.
You know the number is inside the interval [1,100], so instead of making incremental or even random guesses, you divide this interval into two subintervals of the same size: a=[1,50] and b=[51,100].
Then you determine whether the number is inside interval a or b by asking whether the number is bigger than 50. If the number is smaller than or equal to 50, you forget interval b and subdivide interval a again.
Then you repeat these steps until you find the number. Here’s an example:
My number is 60, and the intervals are a=[1,50] and b=[51,100].
In the first step, you say 50 to test the upper bound of interval a. I tell you the number is bigger, and now you know that the number is in interval b. Now you subdivide b into the intervals c=[51,75] and d=[76,100]. Again, you take the upper bound of interval c, 75, and my answer is that the number is smaller. This means the number must be in interval c, so you subdivide again.
By using this method, you find the number after just seven steps versus 60 steps with the first approach.
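Here’s a small sketch of that strategy in code. The function and its names are mine, not part of the playground you’ll build later, but it shows why halving the interval finds any number between 1 and 100 in at most seven questions:

```swift
// Counts how many "bigger or smaller?" questions bisection needs.
// Illustrative helper only; names and bounds are made up.
func guessSteps(secret: Int, lower: Int, upper: Int) -> Int {
    var low = lower
    var high = upper
    var steps = 0
    while low < high {
        steps += 1
        let middle = (low + high) / 2
        if secret > middle {
            low = middle + 1    // the number is in the upper half
        } else {
            high = middle       // the number is in the lower half, or is the guess
        }
    }
    return steps
}

print(guessSteps(secret: 60, lower: 1, upper: 100))  // at most 7 questions
```

Try a few different secret numbers; no matter which one you pick, the count never exceeds seven, because ⌈log₂ 100⌉ = 7.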
For the square root of a number x, the process looks similar. The square root is between 0 and x, or expressed as an interval, it is (0, x]. If the number is greater than or equal to 1, you can use the interval [1, x].
Dividing this interval brings you to a=(0, x/2] and b=(x/2, x].
If my number is 9, the interval is [1, 9], and the divided intervals are a=[1, 5] and b=(5, 9]. The middle m is (1+9)/2 = 5.
Next, you check if m*m – x is bigger than the desired accuracy. If it is, you check whether m*m is bigger or smaller than x. If it’s bigger, you continue with the interval (0, x/2]; otherwise, with (x/2, x].
Let’s see this in action. We start with m=5 and a desired accuracy of 0.1:
At this point, you’re going to take the theory and put it into practice by creating your own bisection algorithm. Create a new playground and add the following function to it.
```swift
func bisection(x: Double) -> Double {
  //1
  var lower = 1.0
  //2
  var upper = x
  //3
  var m = (lower + upper) / 2
  let epsilon = 1e-10
  //4
  while fabs(m * m - x) > epsilon {
    //5
    m = (lower + upper) / 2
    if m * m > x {
      upper = m
    } else {
      lower = m
    }
  }
  return m
}
```
What does each section do?

1. lower holds the lower bound of the search interval.
2. upper holds the upper bound, which starts at x.
3. m is the middle of the interval, and epsilon is the desired accuracy.
4. The loop runs until m*m is within epsilon of x.
5. Each iteration recomputes the middle of the interval and keeps the subinterval that contains the square root.
To test this function, add this line to your playground.
```swift
let bis = bisection(2.5)
```
As you can see on the right of the line m = (lower + upper) / 2, this code executes 35 times, meaning this method takes 35 steps to find the result.
Now you’re going to take advantage of one of the loveable features of playgrounds — the ability to view the history of a value.
Since the bisection algorithm successively calculates more accurate approximations of the actual solution, you can use the value history graph to see how this numerical algorithm converges on the correct solution.
Press option+cmd+enter to open the assistant editor, and then click on the rounded button on the right side of the line m = (lower + upper) / 2 to add a value history to the assistant editor.
Here you can see the method jumping around the correct value.
The next algorithm you’ll learn dates back to antiquity. It originated in Babylonia as far back as 1750 B.C., but was described in Heron of Alexandria’s book Metrica around the year 100, which is how it came to be known as Heron’s method.
This method works by using the function f(x) = x² − a, where a is the number for which you want to calculate the square root. If you can find this curve’s zero point, the value of x where the function equals zero, then you have found the square root of a.
To do this, you start with an arbitrary starting value of x, calculate the tangent line at this value, and then find the tangent line’s zero point. Then you repeat this using that zero point as the next starting value, and keep repeating until you reach the desired accuracy.
Since every tangent moves closer to the true zero, the process converges on the true answer. The next graphic illustrates this process by solving f(x) = x² − 9, that is, with a=9, using a starting value of 1.
The starting point at x0=1 generates the red tangent line, producing the next point x1 that generates the purple line, producing the x2 that generates the blue line, finally leading to the answer.
You have something that the Babylonians did not: playgrounds. Check out what happens when you add the following function to your playground:
```swift
func heron(x: Double) -> Double {
  //1
  var xOld = 0.0
  var xNew = (x + 1.0) / 2.0
  let epsilon = 1e-10
  //2
  while fabs(xNew - xOld) > epsilon {
    //3
    xOld = xNew
    xNew = (xOld + x / xOld) / 2
  }
  return xNew
}
```
What’s happening here?

1. xOld is the last calculated value and xNew the current one. xNew is initialized with a first guess, and epsilon is the desired accuracy.
2. The while loop checks if the desired accuracy is reached.
3. xNew becomes the new xOld, and then the next iteration starts.

Check your code by adding the line:
```swift
let her = heron(2.5)
```
Heron’s method requires only five iterations to find the result.
Click on the rounded button on the right side of the line xNew = (xOld + x / xOld) / 2 to add a value history to the assistant editor, and you’ll see that the first iteration found a good approximation.
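If you’d like to see those iteration counts side by side, here’s a small sketch that instruments both algorithms with a counter. The functions are illustrative copies of the code above, not part of the tutorial’s playground:

```swift
import Foundation

// Counts iterations of the bisection method for sqrt(x).
func bisectionSteps(x: Double) -> Int {
    var lower = 1.0
    var upper = x
    var m = (lower + upper) / 2
    var steps = 0
    while fabs(m * m - x) > 1e-10 {
        steps += 1
        m = (lower + upper) / 2
        if m * m > x { upper = m } else { lower = m }
    }
    return steps
}

// Counts iterations of Heron's method for sqrt(x).
func heronSteps(x: Double) -> Int {
    var xOld = 0.0
    var xNew = (x + 1.0) / 2.0
    var steps = 0
    while fabs(xNew - xOld) > 1e-10 {
        steps += 1
        xOld = xNew
        xNew = (xOld + x / xOld) / 2
    }
    return steps
}

print(bisectionSteps(x: 2.5))  // dozens of halvings
print(heronSteps(x: 2.5))      // just a handful of iterations
```

Bisection gains roughly one binary digit of accuracy per step, while Heron’s method converges quadratically, which is why its iteration count stays so small.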
In this section, you’ll learn how to use numerical integration algorithms in order to model a simple harmonic oscillator, a basic dynamical system.
The system can describe many phenomena, from the swinging of a pendulum to the vibrations of a weight on a spring, to name a couple. Specifically, it can be used to describe scenarios that involve a certain offset or displacement that changes over time.
You’ll use the playground’s value history feature to model how this offset value changes over time, but won’t use it to show how your numerical approximation takes you successively closer to the perfect solution.
For this example, you’ll work with a mass attached to a spring. To make things easier, you’ll ignore damping and gravity, so the only force acting on the mass is the spring force that tries to pull the mass back to an offset position of zero.
With this assumption, you only need to work with two physical laws: Newton’s second law, F = m·x''(t), and Hooke’s law for the spring force, F = −k·x(t).

Since the spring is the only source of force on the mass, you combine these equations and write the following:

m·x''(t) = −k·x(t)

You can write this as:

x''(t) = −(k/m)·x(t)

k/m is also known as ω₀², which is the square of the resonance frequency.
The formula for the exact solution is as follows:

x(t) = A·sin(ω₀·t + φ)

A is the amplitude, and in this case it means the maximum displacement; φ is known as the phase difference. Both are constant values that can be found using initial values.

If you say that at the time t=0 you have the displacement x₀ and the velocity v₀, you find the amplitude and phase difference as:

A = √(x₀² + (v₀/ω₀)²) and φ = atan2(ω₀·x₀, v₀)
Let’s look at an example. We have a mass of 2 kg attached to a spring with a spring constant k=196 N/m. At the time t=0 the spring has a displacement of 0.1 m and no velocity. To calculate A and φ, we first have to calculate ω₀:

ω₀ = √(k/m) = √(196/2) ≈ 9.9 s⁻¹

and, with v₀ = 0:

φ = π/2 and A = 0.1 m
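You can sanity-check these numbers in a playground. This little sketch assumes the initial velocity is zero, as in the example above; the variable names are mine:

```swift
import Foundation

// Worked example: m = 2 kg, k = 196 N/m, x(0) = 0.1 m, v(0) = 0.
let kSpring = 196.0
let mass = 2.0
let x0 = 0.1

let omega0 = sqrt(kSpring / mass)   // resonance frequency, roughly 9.9 per second

// With v(0) = 0 and x(t) = A * sin(omega0 * t + phi),
// the initial conditions give phi = pi/2 and A = x(0).
let phi = Double.pi / 2
let amplitude = x0

print(omega0)
```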
Before you use this formula to calculate the exact solution for any given time step, you need to write some code.
Go back to your playground and add the following code at the end:
```swift
//1
typealias Solver = (Double, Double, Double, Double, Double) -> Void

//2
struct HarmonicOscillator {
  var kSpring = 0.0
  var mass = 0.0
  var phase = 0.0
  var amplitude = 0.0
  var deltaT = 0.0

  init(kSpring: Double, mass: Double, phase: Double, amplitude: Double, deltaT: Double) {
    self.kSpring = kSpring
    self.mass = mass
    self.phase = phase
    self.amplitude = amplitude
    self.deltaT = deltaT
  }

  //3
  func solveUsingSolver(solver: Solver) {
    solver(kSpring, mass, phase, amplitude, deltaT)
  }
}
```
What’s happening in this block?

1. Defines a typealias for a function that takes five Double arguments and returns nothing.
2. Defines a struct that describes a harmonic oscillator.
3. Solves the oscillator using closures that match the Solver type.

The code for the exact solution is as follows:
```swift
func solveExact(amplitude: Double, phase: Double, kSpring: Double, mass: Double, t: Double) {
  var x = 0.0
  //1
  let omega = sqrt(kSpring / mass)
  var i = 0.0
  while i < 100.0 {
    //2
    x = amplitude * sin(omega * i + phase)
    i += t
  }
}
```
This method includes all the parameters needed to solve the movement equation.

1. The resonance frequency omega is calculated.
2. The position x is calculated inside the while loop, and i is incremented for the next step.

Test it by adding the following code:
```swift
let osci = HarmonicOscillator(kSpring: 0.5, mass: 10, phase: 10, amplitude: 50, deltaT: 0.1)
osci.solveUsingSolver(solveExact)
```
The solution function is a bit curious. It takes arguments, but it returns nothing and it prints nothing.
So what is it good for?
The purpose of this function is that, within its while loop, it models the actual dynamics of your oscillator. You’ll observe those dynamics by using the value history feature of the playground.
Add a value history to the line x = amplitude * sin(omega * i + phase) and you’ll see the oscillator moving.
Now that you have an exact solution working, you can start with the first numerical solution.
The Euler method is the simplest method for numerical integration. It was introduced in 1768 in the book Institutiones Calculi Integralis by Leonhard Euler.
The idea behind this method is to approximate a curve by using short lines.
This is done by calculating the slope on a given point and drawing a short line with the same slope. At the end of this line, you calculate the slope again and draw another line. As you can see, the accuracy depends on the length of the lines.
Did you wonder what deltaT is used for?

The numerical algorithms you use have a step size, which is important for the accuracy of such algorithms; bigger step sizes cause lower accuracy but faster execution, and vice versa.

deltaT is the step size for your algorithms. You initialized it with a value of 0.1, meaning that you calculate the position of the mass every 0.1 seconds. In the case of the Euler method, this means that the lines have a length of 0.1 units on the x-axis.
Before you start coding, you need to have a look again at the formula:

x''(t) = −ω₀²·x(t)

This second-order differential equation can be replaced with two differential equations of first order.

x''(t) can be written as v'(t), and this gives v'(t) = −ω₀²·x(t) together with x'(t) = v(t).

You get this by using the difference quotient:

v'(t) ≈ (v(t + Δt) − v(t)) / Δt and x'(t) ≈ (x(t + Δt) − x(t)) / Δt

And you also get those:

v(t + Δt) = v(t) − ω₀²·x(t)·Δt
x(t + Δt) = x(t) + v(t)·Δt
With these equations, you can directly implement the Euler method.
Add the following code right behind solveExact:
```swift
func solveEuler(amplitude: Double, phase: Double, kSpring: Double, mass: Double, t: Double) {
  //1
  var x = amplitude * sin(phase)
  let omega = sqrt(kSpring / mass)
  var i = 0.0
  //2
  var v = amplitude * omega * cos(phase)
  var vold = v
  var xoldEuler = x
  while i < 100 {
    //3
    v -= omega * omega * x * t
    //4
    x += vold * t
    xoldEuler = x
    vold = v
    i += t
  }
}
```
What does this do?

1. The position x is initialized with its value at time t=0.
2. The velocity v is initialized with its value at time t=0.
3. The new velocity is calculated from the current position.
4. The new position is calculated from the previous velocity, and i is incremented inside the while loop.

Now test this method by adding the following to the bottom of your playground:
```swift
osci.solveUsingSolver(solveEuler)
```
Then add a value history to the line xoldEuler = x. Take a look at the history, and you’ll see that this method also draws a sine curve, but with a rising amplitude. The Euler method is not exact, and in this instance the big step size of 0.1 contributes to the inaccuracy.
Here is another image showing how it looks with a step size of 0.01, which leads to a much better solution. So, when you think about the Euler method, remember that it’s most useful with small step sizes, but it also has the easiest implementation.
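To see the effect of the step size in numbers rather than pictures, you can track how far the Euler solution overshoots the true amplitude. This is only a sketch; the function and parameter values are made up for illustration and aren’t part of the tutorial’s playground:

```swift
// Tracks the largest |x| the Euler method produces over 100 seconds.
// For an undamped oscillator the true amplitude never grows, so any
// excess over the starting amplitude is pure integration error.
func eulerMaxAmplitude(omega: Double, amplitude: Double, deltaT: Double) -> Double {
    var x = 0.0                   // x(0) = 0, i.e. phase = 0
    var v = amplitude * omega     // v(0) = A * omega
    var maxX = 0.0
    var t = 0.0
    while t < 100 {
        let vOld = v
        v -= omega * omega * x * deltaT   // same update rule as solveEuler
        x += vOld * deltaT
        maxX = max(maxX, abs(x))
        t += deltaT
    }
    return maxX
}

let coarse = eulerMaxAmplitude(omega: 1.0, amplitude: 1.0, deltaT: 0.1)
let fine = eulerMaxAmplitude(omega: 1.0, amplitude: 1.0, deltaT: 0.01)
print(coarse, fine)   // the coarse run drifts far above the true amplitude of 1.0
```

Both step sizes overshoot, but the coarse one overshoots dramatically more, which is exactly the rising amplitude you saw in the value history.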
The last method is called Velocity Verlet. It works off the same idea as Euler but calculates the new position in a slightly different way.

Euler calculates the new position, while ignoring the actual acceleration, with the formula x(t + Δt) = x(t) + v(t)·Δt.

Velocity Verlet takes the acceleration into account:

x(t + Δt) = x(t) + v(t)·Δt + ½·a(t)·Δt²

This brings better results for the same step size.
Add the following code right after solveEuler:
```swift
func solveVerlet(amplitude: Double, phase: Double, kSpring: Double, mass: Double, t: Double) {
  //1
  var x = amplitude * sin(phase)
  var xoldVerlet = x
  let omega = sqrt(kSpring / mass)
  var v = amplitude * omega * cos(phase)
  var i = 0.0
  while i < 100 {
    //2
    // The oscillator's acceleration is a = -omega^2 * x, so the position
    // update is x(t+dt) = x(t) + v(t)*dt + 0.5*a(t)*dt*dt.
    x = xoldVerlet + v * t - 0.5 * omega * omega * xoldVerlet * t * t
    v -= omega * omega * x * t
    xoldVerlet = x
    i += t
  }
}
```
What’s going on here?

1. The position and velocity are initialized, just as in solveEuler.
2. The new position is calculated from the old position, the velocity, and the acceleration; then the new velocity is calculated from it.
Again, test the function by adding this line to the end of your playground:
```swift
osci.solveUsingSolver(solveVerlet)
```
And also add a value history to the line xoldVerlet = x.
You can download the completed project over here.
I hope you enjoyed your trip into the numerical world, learned a few things about algorithms, and even a few fun facts about the ancient world to help you on your next trivia night.
Before you start exploring more, you should play with different values for deltaT, but keep in mind that playgrounds are not that fast, and choosing a small step size can freeze Xcode for a long time — my MacBook froze for several minutes after I changed deltaT to 0.01 to make the second Euler screenshot.
With the algorithms you learned, you can solve a variety of problems like the n-body problem.
If you are feeling a little shaky about algorithms in general, you should check out MIT’s Introductions to Algorithms class on YouTube.
Finally, keep checking back on the site for more tutorials on this subject.
If you have questions or ideas about other ways to use numerical algorithms, bring them to the forums by commenting below. I look forward to hearing your thoughts and talking about all the cool things math can do for your apps.
Numerical Algorithms using Playgrounds is a post from: Ray Wenderlich
For years iOS developers have used a service by Burstly called TestFlight, which was a simple way to distribute your apps to beta testers.
In February 2014, Apple acquired Burstly, and with the release of iOS 8 officially integrated TestFlight into its own iTunes Connect portal.
This tutorial will walk you through integrating TestFlight into your own apps.
This is one of those rare tutorials where you don’t have to code — just follow through the steps in this tutorial and you’ll be up and running with TestFlight in no time! :]
This tutorial assumes that you’ve already created your certificates, app ID, and provisioning profiles in both the developer portal and on iTunes Connect.
If you’re not familiar with setting up your app on the dev portal or in iTunes Connect, read through the section “Understanding Apple’s Rules Once and Forever” in the Introduction to the TestFlight SDK tutorial here on the site.
Open up your project in Xcode, make sure you have a correct Bundle Identifier and that you’ve chosen the correct Distribution Certificate:
Choose Product\Archive from the top toolbar:
Once Xcode finishes archiving your project, click the shiny blue Submit to App Store… button:
Now you need to choose your development team:
Finally, click Submit:
Wait for a few minutes as your build uploads. Grab a coffee, perhaps, or if you have a slow internet connection, go grab a bite. Or two. Or three. :]
Once the upload completes, you should receive a success message like the following:
That’s all the work required for Xcode. Your beta build is now available on iTunes Connect, and that’s where you’ll be doing the rest of the work to set up TestFlight.
Your build is ready for testing, but who’s going to test it?
Apple defines two types of testers for TestFlight:
Before your external testers can test your app, you have to submit the app for review by Apple, just as you would with a normal App Store submission. These reviews tend to go faster than normal app reviews (although don’t count on it), and once the app is approved you can let external testers try it. Internal testers, on the other hand, can test new builds instantly.
You’ll learn more about external testers later, but for now, you’ll focus on internal testers.
To add an internal tester, head to the Users and Roles section in iTunes Connect:
On the Users and Roles screen, click the + button to add a new user:
Fill in your new user info and click Next:
Now you need to assign roles for the user. In most cases, you’ll want to choose Technical. You can read more about the privileges for each role and choose the appropriate one for your user.
Once that’s done, click Next:
Choose the type of notifications you want your new testers to receive, then click Save:
Your user is now created, but as the message indicates, that user first needs to verify his or her email address before the account will show in iTunes Connect.
Once you enable the testing account, choose the newly added account and enable the Internal Tester setting to let your user beta test your app, then click Save:
Creating a new internal beta tester is only the first part of the process; the remaining step is to invite this particular tester to test your latest build.
To learn more, check out the “iTunes Connect and Apple IDs” section of the iTunes Connect Developer Guide.
It’s time to enable your app for testing — so the tester actually has something to test! :]
To enable beta testing of your app, go to the My Apps section on the iTunes Connect home page and click on your app:
Select the Prerelease tab and you’ll find your latest build. Simply turn on the switch to enable TestFlight Beta Testing:
Your app is now available for beta testing, but you need to invite testers before they can access your app.
To invite internal testers, go to the Internal Testers tab to start inviting testers. Check off the user or users you want to invite, click Invite, then click Invite on the confirmation popup:
All selected testers will now receive an email that lets them download and install this build.
That takes care of internal testers, but what about external testers?
That’s just as easy! Go to the External Testers tab, click the + button and select Add New Testers:
Add the email addresses of any external users you want to add. Once you’re finished, click Add to add these testers to your account. All external testers will count toward your 1000 external tester limit:
However, since you haven’t yet submitted your app for a normal review by Apple, external testers won’t get an invite until the app is approved. Go back to the Build tab and click Submit for Beta App Review. Once the app is approved by Apple, your external testers will receive an invitation email:
Before your app is considered for review, you’ll have to fill out most of the information on that page.
Once you’ve filled out the information, click Next:
Once you choose the appropriate Export Compliance option, click Submit.
You’ll see that your app is now waiting for review.
Both internal and external testers will receive the same email:
A tester must tap Open in TestFlight on the device; otherwise the app won’t be available for download by the tester.
Whenever you create a new build, the tester will receive a push notification stating that a new build is available. The tester must launch the TestFlight app to update the installed version of their app.
That shows the developer’s perspective of app testing, but what does it look like from the tester’s perspective?
As an internal tester, you need to link your Apple ID to iTunes Connect. By now, you should have received an email from iTunes Connect that looks like this:
Click on activate your account and follow the supplied steps. Once your account is ready for testing, get your iOS device and go to the Settings app:
Scroll down to iTunes & App Store:
Log in with the account you just verified a minute ago. If you’re already logged in with another account, log out first:
Go to the App Store, and search for the TestFlight app:
Download the TestFlight app and launch it.
You’ll see the following screen which means you’re all ready for testing:
When the app has been approved and is ready for testing, you’ll receive an email inviting you to start testing:
Open this email on your testing device, then tap Open in TestFlight. This will launch TestFlight and show you the app you need to test. Tap Accept, then Install, and wait for the app to download:
That was the hardest part of being a tester. From now on, whenever a new version of this app is available, you’ll see a notification from TestFlight. All you need to do is update your app and run the latest version.
In this tutorial you learned how to upload your test build and invite internal and external testers to your app.
If you’re interested in knowing more about iTunes Connect in general, and beta testing in particular, read through Apple’s TestFlight Beta Testing Documentation.
You can also check out iOS 8 by Tutorials; the final chapter What’s New with iTunes Connect showcases everything you need to know to manage your testing effort.
I hope you enjoyed this tutorial, and if you have any questions or comments, please join the forum discussion below!
iOS Beta Testing with TestFlight Tutorial is a post from: Ray Wenderlich
Learn about option sets, a new feature of Swift 2 that makes working with bitmasks incredibly easy.
Video Tutorial: What’s New in Swift 2 Part 3: Option Sets is a post from: Ray Wenderlich
This is just a quick note to let you know that we are one of the proud sponsors of Pragma Conference 2015, this October in Florence, Italy.
This year, Marin Todorov, author of iOS Animations by Tutorials, iOS Games by Tutorials, and many other books, will be there giving an awesome 6-hour workshop called Power Up Your Animations.
Also, the organizer of Pragma Conference, #pragma mark, has been kind enough to set up a 15% off discount for all raywenderlich.com readers. To get it, just order a ticket with the discount code PRAGMACONF-RAY15.
Also stay tuned for our official raywenderlich.com conference, RWDevCon, which comes next February! :]
raywenderlich.com at Pragma Conference 2015 is a post from: Ray Wenderlich
Learn about the dependency manager Carthage with Mic, Jake, and James!
[Subscribe in iTunes] [RSS Feed]
Interested in sponsoring a podcast episode? We sell ads via Syndicate Ads, check it out!
Contact Us
We hope you enjoyed this episode of our podcast. Be sure to subscribe in iTunes to get notified when the next episode comes out.
We’d love to hear what you think about the podcast, and any suggestions on what you’d like to hear next season. Feel free to drop a comment here, or email us anytime at podcast@raywenderlich.com.
Carthage with James Frost – Podcast S04 E06 is a post from: Ray Wenderlich
Learn about the new pattern matching and conditional statements in Swift 2, and how they can improve working with enumerations.
Video Tutorial: What’s New in Swift 2 Part 4: Pattern Matching is a post from: Ray Wenderlich
It’s likely that you’ve bumped into OAuth2 and the different families of flows while building apps to share content with your favorite social network (Facebook, Twitter, etc) or with your enterprise OAuth2 server — even if you weren’t aware of what was going on under the hood. But do you know how to hook up to your service using OAuth2 in an iOS app?
In this tutorial, you’ll work on a selfie-sharing app named Incognito as you learn how to use the AeroGear OAuth2, AFOAuth2Manager and OAuthSwift open source OAuth2 libraries to share your selfies to Google Drive.
Download the Incognito starter project. The starter project uses CocoaPods to fetch AeroGear dependencies and contains everything you need, including generated pods and xcworkspace directories.
Open Incognito.xcworkspace in Xcode. The project is based on a standard Xcode Single View Application template, with a single storyboard that contains a single view controller ViewController.swift. All UI actions are already handled in ViewController.swift.
Build and run your project to see what the app looks like:
The app lets you pick your best selfie and add some accessories to the image. Did you recognize me behind my disguise? :]
Note: To add photos in the simulator, simply go to the home screen using Cmd + Shift + H and drag and drop your images onto the simulator.
All that’s left to add to the app is the share to Google Drive feature using three different OAuth2 libraries.
Mission impossible? Nope, it’s nothing you can’t handle! :]
Instead of boring you with an introduction to the RFC6749 OAuth2 specification, let me tell you a story…
On Monday morning, Bob, our mobile nerd, bumps into Alice, another friendly geek, in front of the coffee machine. Bob seems busy, carrying a heavy stack of documents: his boss wants him to delve into the OAuth2 specification for the Incognito app.
Put any two developers in a coffee room and soon they’ll chat about geeky things, of course. Bob asks Alice:
“…what problem are we trying to solve with OAuth2?”
On one side, you have services in the form of APIs, such as the Twitter API, which you can use to get a list of followers or Tweets. Those APIs handle your confidential data, which is protected by a login and password.
On the other side, you have apps that consume those services. Those apps need to access your data, but do you want to trust all of them with your credentials? Maybe — but maybe not.
This brings up the concept of delegated access. OAuth2 lets users grant third-party apps access to their web resources, without sharing their passwords, through a security object known as an access token. It’s impossible to obtain the password from the access token, meaning your password is safe inside the main service, and each app that wants to connect to the service just gets their own access token. Access tokens can then be revoked if you ever want to revoke access to just that app.
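To make the token concrete: once an app holds an access token, it sends the token along with every API call, typically in an Authorization header. This is just an illustrative sketch; the token value is a made-up placeholder, not a real credential:

```swift
// The token below is a made-up placeholder, not a real credential.
let accessToken = "ya29.EXAMPLE_TOKEN"

// A client proves it may access the protected resource by sending the
// token as a Bearer credential; the password itself never travels.
let headers = ["Authorization": "Bearer \(accessToken)"]

print(headers["Authorization"]!)
```

An OAuth2 library builds this header for you on every request, and can swap in a fresh token after a refresh without you ever handling the password.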
OAuth2 works with the following four actors:
The OAuth2 specification describes the interactions between these actors as grant flows.
The specification details four different grant flows that can be grouped into two different families:
You’ll use your existing Google Drive account and upload your Incognito selfies there. This is a good case for implementation of the 3-legged authorization code grant.
Although using open source libraries hides most of the sticky details of the OAuth2 protocol from you, knowing its basic inner workings will help you get the configuration right.
Here are the steps involved in the authorization code grant dance:
Your application needs to be registered with the service you want to access. In your case, for Incognito, that’s Google Drive. Don’t worry, the following section will explain how to do that.
The dance begins when Incognito sends a request for an authorization code to the third-party service that includes:
The app then switches to the web browser. Once the user logs in, the Google authorization server displays a grant page: “Incognito would like to access your photos: Allow/Deny”. When the end user clicks “Allow”, the server redirects to the Incognito app using the redirect URI and sends an authorization code to the app.
The authorization code is only temporary; therefore the OAuth2 library has to exchange this temporary code for a proper access token, and optionally, a refresh token.
Using the access token, Incognito can access protected resources on the server — that is, the resources the end-user granted access to. Your upload is free to proceed.
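The steps above can be sketched in code. Here’s roughly what the authorization request from the beginning of the dance looks like when it’s assembled as a URL; the endpoint, client ID, and redirect URI are illustrative placeholders (your real values come from the registration step below):

```swift
import Foundation

// Placeholder values; the real ones come from registering the app.
let clientId = "YOUR_GOOGLE_CLIENT_ID"
let redirectURI = "com.raywenderlich.Incognito:/oauth2Callback"
let scope = "https://www.googleapis.com/auth/drive"

// The OAuth2 library opens a URL like this in the browser.
var components = URLComponents(string: "https://accounts.google.com/o/oauth2/auth")!
components.queryItems = [
    URLQueryItem(name: "client_id", value: clientId),
    URLQueryItem(name: "redirect_uri", value: redirectURI),
    URLQueryItem(name: "scope", value: scope),
    URLQueryItem(name: "response_type", value: "code")  // ask for an authorization code
]
print(components.url!)
```

After the user taps Allow, the server redirects back to the app’s URI with a code query parameter, which the library then exchanges for the access token.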
Ready to see this in action? First, you need to register with the OAuth2 provider: Google.
If you don’t have a Google account, go create one now. It’s OK; I’ll wait for you. :]
Open http://console.developer.google.com in your browser; you’ll be prompted to authenticate with Google.
Click Create Project and name your new project Incognito:
Next, you need to enable the Drive API.
Navigate to APIs & auth\APIs, then click Google Apps APIs\Drive API. On the next screen, click Enable API:
Now you need to create new credentials to access your Drive accounts from the app.
Go to APIs & auth\Credentials and click the blue Create new Client ID button inside the OAuth area.
Then click Configure consent screen and in the screen that appears, fill out the following information:
Click Save and you’ll return to the Client ID screen.
Select Installed application, then select iOS and enter com.raywenderlich.Incognito as your Bundle ID.
The authorization server will use the bundle id entered above as the redirect URI.
Finally, click Create Client ID. The final screen will show you value of the generated Client ID, Client secret, and redirect URIs, which you’ll make use of later on:
Now that you’ve registered with Google, you’re ready to start your OAuth2 implementation using the first OAuth2 library: AeroGear with an external browser.
Open ViewController.swift and add the following imports to the top of the file:
import AeroGearHttp
import AeroGearOAuth2
Now, add the following instance variable inside the ViewController class:

var http: Http!
Now instantiate it by adding the following at the end of viewDidLoad():

self.http = Http()
You'll use this instance of Http, which comes from the AeroGearHttp library, to perform HTTP requests.
Still in ViewController.swift, find the empty share() method and add the following code to it:
let googleConfig = GoogleConfig(
  clientId: "YOUR_GOOGLE_CLIENT_ID",                 // [1] Define a Google configuration
  scopes: ["https://www.googleapis.com/auth/drive"]) // [2] Specify scope

let gdModule = AccountManager.addGoogleAccount(googleConfig) // [3] Add it to AccountManager
self.http.authzModule = gdModule                             // [4] Inject the AuthzModule
                                                             //     into the HTTP layer object

let multipartData = MultiPartData(data: self.snapshot(),     // [5] Define multi-part
  name: "image",
  filename: "incognito_photo",
  mimeType: "image/jpg")
let multipartArray = ["file": multipartData]

self.http.POST("https://www.googleapis.com/upload/drive/v2/files", // [6] Upload image
  parameters: multipartArray,
  completionHandler: { (response, error) in
    if (error != nil) {
      self.presentAlert("Error", message: error!.localizedDescription)
    } else {
      self.presentAlert("Success", message: "Successfully uploaded!")
    }
})
Here's what's going on in the method above:

1. Replace YOUR_GOOGLE_CLIENT_ID above with the Client ID from your Google Console to use the correct authorization configuration.
2. Specify the scope of the grant request; here, Incognito needs access to the Drive API.
3. Create the OAuth2 module using one of the AccountManager utility methods.
4. Inject the OAuth2 module into the HTTP layer object.
5. Define the multi-part data containing the snapshot to upload.
6. POST() checks that an OAuth2 module is plugged into HTTP and makes the appropriate call for you. One of the following: it starts the OAuth2 dance if no tokens exist yet, refreshes the access token if it has expired, or simply performs the POST if a valid access token is available.
Build and run your app; select an image, add an overlay of your choosing, then tap the Share button. Enter your Google credentials if you're prompted; if you've logged in before, your credentials may be cached. You'll be redirected to the grant page. Tap Accept and…
Boom — you receive the Safari Cannot Open Page error message. :[ What’s up with that?
Once you tap Accept, the Google OAuth site redirects you to com.raywenderlich.Incognito://[some url]. Therefore, you’ll need to enable your app to open this URL scheme.
Note: Safari stores your authentication response in a cookie on the simulator, so you won't be prompted again to authenticate. To clear these cookies, choose iOS Simulator\Reset Content and Settings… in the simulator.
To allow your user to be redirected back to Incognito, you'll need to associate a custom URL scheme with your app.
Go to the Incognito\Supporting Files group in Xcode and find Info.plist. Right click on it and choose Open As\Source Code.
Add the following to the bottom of the plist, right before the closing </dict> tag:
<key>CFBundleURLTypes</key>
<array>
  <dict>
    <key>CFBundleURLSchemes</key>
    <array>
      <string>com.raywenderlich.Incognito</string>
    </array>
  </dict>
</array>
The scheme is the first part of a URL. In web pages, for example, the scheme is usually http or https. iOS apps can specify their own custom URL schemes, such as com.raywenderlich.Incognito://doStuff. The important point is to choose a custom scheme that is unique among all apps installed on your users' devices.
The OAuth2 dance uses your custom URL scheme to re-enter the application from which the request came. Custom schemes, like any URL, can have parameters. In this case, the authorization code is contained in the code parameter. The OAuth2 library will extract the authorization code from the URL and pass it in the next request, in exchange for the access token.
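To make that extraction step concrete, here's a rough standard-library sketch of what the library does for you. The callback URL and the code value are invented for illustration; only the shape matters:

```swift
// A hypothetical callback URL the provider redirects to after the user
// taps Allow. The code value is made up for this example.
let callback = "com.raywenderlich.Incognito:/oauth2Callback?code=4/EXAMPLE_AUTH_CODE"

// Split off the query string, then scan its name=value pairs for `code`.
func authorizationCode(fromCallback url: String) -> String? {
    guard let q = url.firstIndex(of: "?") else { return nil }
    let query = url[url.index(after: q)...]
    for pair in query.split(separator: "&") {
        let parts = pair.split(separator: "=", maxSplits: 1)
        if parts.count == 2 && parts[0] == "code" {
            return String(parts[1])
        }
    }
    return nil
}
```

A real implementation would also percent-decode the value; the libraries in this tutorial handle that for you.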
You'll need to implement a method in Incognito's AppDelegate class for the app to respond when it's launched via a custom URL scheme.
Open AppDelegate.swift and add the following import statement to the top of the file:
import AeroGearOAuth2
Next, implement application(_:openURL:sourceApplication:annotation:) as shown below:
func application(application: UIApplication, openURL url: NSURL,
  sourceApplication: String?, annotation: AnyObject?) -> Bool {
    let notification = NSNotification(name: AGAppLaunchedWithURLNotification,
      object: nil, userInfo: [UIApplicationLaunchOptionsURLKey: url])
    NSNotificationCenter.defaultCenter().postNotification(notification)
    return true
}
This method simply creates an NSNotification containing the URL used to open the app. The AeroGearOAuth2 library listens for the notification and calls the completionHandler of the POST method you invoked above.
Build and run your project again, take a snazzy selfie and dress it up. Click the share button, authenticate yourself, and lo and behold:
You can download the finished Incognito AeroGear project from this section if you wish.
Switching context to an external browser during the OAuth2 authentication step is a bit clunky. There must be a more streamlined approach…
Embedded web views make for a more user-friendly experience; you can achieve this by using a UIWebView rather than switching to Safari. From a security point of view, it's a less-secure approach, since your app's code sits between the login form and the provider: your app could use JavaScript to access the user's credentials as they type them. However, this can be an acceptable option if your end users trust your app to be secure.
You’ll revisit the share method using the OAuthSwift library, but this time, you’ll implement OAuth2 using an embedded web view.
You’re going to start again with a different project. So close the existing Xcode workspace, download this version of the Incognito starter project, and open the project in Xcode using the Incognito.xcworkspace file.
Build and run the project; things should look pretty familiar.
As before, you first need to import the OAuthSwift library included in the project.
Open ViewController.swift and add the following import to the top of the file:
import OAuthSwift
Still in ViewController.swift, add the following code to share():
// 1 Create OAuth2Swift object
let oauthswift = OAuth2Swift(
  consumerKey: "YOUR_GOOGLE_DRIVE_CLIENT_ID",        // 2 Enter google app settings
  consumerSecret: "YOUR_GOOGLE_DRIVE_CLIENT_SECRET",
  authorizeUrl: "https://accounts.google.com/o/oauth2/auth",
  accessTokenUrl: "https://accounts.google.com/o/oauth2/token",
  responseType: "code"
)
// 3 Trigger OAuth2 dance
oauthswift.authorizeWithCallbackURL(
  NSURL(string: "com.raywenderlich.Incognito:/oauth2Callback")!,
  scope: "https://www.googleapis.com/auth/drive",    // 4 Scope
  state: "",
  success: { credential, response in
    var parameters = [String: AnyObject]()
    // 5 Get the embedded http layer and upload
    oauthswift.client.postImage(
      "https://www.googleapis.com/upload/drive/v2/files",
      parameters: parameters,
      image: self.snapshot(),
      success: { data, response in
        let jsonDict: AnyObject! = NSJSONSerialization.JSONObjectWithData(data,
          options: nil, error: nil)
        self.presentAlert("Success", message: "Successfully uploaded!")
      },
      failure: { (error: NSError!) -> Void in
        self.presentAlert("Error", message: error!.localizedDescription)
      })
  },
  failure: { (error: NSError!) -> Void in
    self.presentAlert("Error", message: error!.localizedDescription)
  })
Here's what's going on in the code above:

1. Create an instance of OAuth2Swift that will handle the OAuth dance for you.
2. Replace YOUR_GOOGLE_DRIVE_CLIENT_ID and YOUR_GOOGLE_DRIVE_CLIENT_SECRET with the client ID and client secret from the Google console.
3. Trigger the OAuth2 dance by calling authorizeWithCallbackURL on the oauthswift instance.
4. The scope parameter indicates that you are requesting access to the Drive API.
5. On success, use the embedded HTTP layer of the oauthswift instance to upload the image.

Just as in the previous project, this version of Incognito has been set up to accept a custom URL scheme; all you need to do is implement the code to handle the custom URL.
Open AppDelegate.swift and add the following import:
import OAuthSwift
Then, implement application(_:openURL:sourceApplication:annotation:) as shown below:
func application(application: UIApplication, openURL url: NSURL,
  sourceApplication: String?, annotation: AnyObject?) -> Bool {
    OAuth2Swift.handleOpenURL(url)
    return true
}
Unlike AeroGearOAuth2, OAuthSwift uses a class method to handle parsing the returned URL. However, if you inspect the handleOpenURL(_:) method, you'll see that it simply sends an NSNotification, just like AeroGearOAuth2 required you to do!
Build and run your project; create a new selfie and upload it. Woo! It works again! That was easy. :]
As promised, you’ll now add the web view. In Xcode, click on the Incognito file group in Project Navigator and choose the File\New\File… menu option. Select iOS\Source\Swift File then click Next. Name it WebViewController and save it with your project.
Then open WebViewController.swift and add the following content to it:
import UIKit
import OAuthSwift

class WebViewController: OAuthWebViewController {
  var targetURL: NSURL?
  var webView: UIWebView = UIWebView()

  override func viewDidLoad() {
    super.viewDidLoad()
    webView.frame = view.bounds
    // Resize with the containing view in both dimensions.
    webView.autoresizingMask = UIViewAutoresizing.FlexibleWidth |
      UIViewAutoresizing.FlexibleHeight
    webView.scalesPageToFit = true
    view.addSubview(webView)
    loadAddressURL()
  }

  override func setUrl(url: NSURL) {
    targetURL = url
  }

  func loadAddressURL() {
    if let targetURL = targetURL {
      let req = NSURLRequest(URL: targetURL)
      webView.loadRequest(req)
    }
  }
}
In the above code, you create a WebViewController class that extends OAuthWebViewController. The one OAuthWebViewController method it overrides is setUrl(_:), which stores the URL to load. In viewDidLoad() you size the web view, add it to the view controller's view and load the URL passed in by the OAuth2Swift instance that made the request.
Next, open ViewController.swift and locate the share() method. Then add the following line immediately after the creation of the oauthswift instance and before the call to authorizeWithCallbackURL:
oauthswift.webViewController = WebViewController()
This tells the oauthswift instance to use the web view controller you just created.
Finally, open AppDelegate.swift and modify application(_:openURL:sourceApplication:annotation:) by adding the following line just above return true:
// [1] Dismiss webview once url is passed to extract authorization code
UIApplication.sharedApplication().keyWindow?.rootViewController?.dismissViewControllerAnimated(true, completion: nil)
Build and run your project again; note that when the authentication form appears, it's no longer displayed within Safari, and no app switching happens. Also, the authentication form is presented each time you run the app, since no web cookies are stored in your app by default.
Using a UIWebView to authenticate with Google looks more streamlined, for sure! :]
You can download the final Incognito OAuthSwift project here.
There's one thing left to look at in this tutorial. You'll revisit the share() method and make use of the well-known HTTP library AFNetworking and its OAuth2 companion, AFOAuth2Manager.
AFOAuth2Manager takes a different approach than the other OAuth2 libraries: it's a lower-level API based on the well-known AFNetworking framework. Whether you want to use a UIWebView or open an external browser is entirely up to you; the developer is free to choose either mechanism to initiate step 1 of the OAuth2 dance.
Once again there’s another starter project for this part of the tutorial. Close your existing project and then download the new one here: Incognito starter project.
Open the new project and then open ViewController.swift. You’ll begin by defining some helper methods and extensions.
Add the following String extension to the top of the file:
extension String {
  public func urlEncode() -> String {
    let encodedURL = CFURLCreateStringByAddingPercentEscapes(
      nil,
      self as NSString,
      nil,
      "!@#$%&*'();:=+,/?[]",
      CFStringBuiltInEncodings.UTF8.rawValue)
    return encodedURL as String
  }
}
The above code simply extends the String class by adding a function that URL-encodes a given string.
Now, add the following method to ViewController:
func parametersFromQueryString(queryString: String?) -> [String: String] {
  var parameters = [String: String]()
  if (queryString != nil) {
    var parameterScanner: NSScanner = NSScanner(string: queryString!)
    var name: NSString? = nil
    var value: NSString? = nil
    while (parameterScanner.atEnd != true) {
      name = nil
      parameterScanner.scanUpToString("=", intoString: &name)
      parameterScanner.scanString("=", intoString: nil)
      value = nil
      parameterScanner.scanUpToString("&", intoString: &value)
      parameterScanner.scanString("&", intoString: nil)
      if (name != nil && value != nil) {
        parameters[name!.stringByReplacingPercentEscapesUsingEncoding(NSUTF8StringEncoding)!] =
          value!.stringByReplacingPercentEscapesUsingEncoding(NSUTF8StringEncoding)
      }
    }
  }
  return parameters
}
This simply extracts query string parameters from a string representation of a URL. For example, if the query string is name=Bob&age=21, this method would return a dictionary containing name => Bob, age => 21.
Next, you'll need to define a helper function in ViewController to extract the OAuth code from the URL passed in an NSNotification.
Simply add the following method below the existing share() method:
func extractCode(notification: NSNotification) -> String? {
  let url: NSURL? = (notification.userInfo as! [String: AnyObject])[UIApplicationLaunchOptionsURLKey] as? NSURL
  // [1] extract the code from the URL
  return self.parametersFromQueryString(url?.query)["code"]
}
This grabs the code key from the query string dictionary obtained using the method you implemented just now.
Now add the following code to share():
// 1 Replace with client id / secret
let clientID = "YOUR_GOOGLE_CLIENT_ID"
let clientSecret = "YOUR_GOOGLE_CLIENT_SECRET"
let baseURL = NSURL(string: "https://accounts.google.com")
let scope = "https://www.googleapis.com/auth/drive".urlEncode()
let redirect_uri = "com.raywenderlich.Incognito:/oauth2Callback"

if !isObserved {
  // 2 Add observer
  var applicationLaunchNotificationObserver = NSNotificationCenter.defaultCenter().addObserverForName(
    "AGAppLaunchedWithURLNotification",
    object: nil,
    queue: nil,
    usingBlock: { (notification: NSNotification!) -> Void in
      // [5] extract code
      let code = self.extractCode(notification)
      // [6] carry on oauth2 code auth grant flow with AFOAuth2Manager
      var manager = AFOAuth2Manager(baseURL: baseURL,
        clientID: clientID,
        secret: clientSecret)
      manager.useHTTPBasicAuthentication = false
      // [7] exchange authorization code for access token
      manager.authenticateUsingOAuthWithURLString("o/oauth2/token",
        code: code,
        redirectURI: redirect_uri,
        success: { (cred: AFOAuthCredential!) -> Void in
          // [8] Set credential in header
          manager.requestSerializer.setValue("Bearer \(cred.accessToken)",
            forHTTPHeaderField: "Authorization")
          // [9] upload photo
          manager.POST("https://www.googleapis.com/upload/drive/v2/files",
            parameters: nil,
            constructingBodyWithBlock: { (form: AFMultipartFormData!) -> Void in
              form.appendPartWithFileData(self.snapshot(),
                name: "name",
                fileName: "fileName",
                mimeType: "image/jpeg")
            },
            success: { (op: AFHTTPRequestOperation!, obj: AnyObject!) -> Void in
              self.presentAlert("Success", message: "Successfully uploaded!")
            },
            failure: { (op: AFHTTPRequestOperation!, error: NSError!) -> Void in
              self.presentAlert("Error", message: error!.localizedDescription)
            })
        }) { (error: NSError!) -> Void in
          self.presentAlert("Error", message: error!.localizedDescription)
        }
    })
  isObserved = true
}

// 3 calculate final url
var params = "?scope=\(scope)&redirect_uri=\(redirect_uri)&client_id=\(clientID)&response_type=code"
// 4 open an external browser
UIApplication.sharedApplication().openURL(NSURL(string: "https://accounts.google.com/o/oauth2/auth\(params)")!)
Whoa, that's a big method! But it makes a lot of sense when you break it down, step-by-step:

1. Replace YOUR_GOOGLE_CLIENT_ID and YOUR_GOOGLE_CLIENT_SECRET with the client ID and client secret from the Google console.
2. Register an observer for the AGAppLaunchedWithURLNotification notification posted by the AppDelegate.
3. Calculate the final authorization URL from the scope, redirect URI, client ID and response type.
4. Open an external browser to kick off the OAuth2 dance.
5. When the notification arrives, extract the authorization code from the URL.
6. Carry on the authorization code grant flow with AFOAuth2Manager.
7. Exchange the authorization code for an access token.
8. Set the access token in the Authorization header.
9. Upload the photo.

This approach is a bit more involved than the other libraries, as you have to extract the authorization code yourself. The above code reuses some of the functionality from the AeroGear library.
There’s just one more step!
As you’ve done before, open AppDelegate.swift and add the following method:
func application(application: UIApplication, openURL url: NSURL,
  sourceApplication: String?, annotation: AnyObject?) -> Bool {
    let notification = NSNotification(
      name: "AGAppLaunchedWithURLNotification",
      object: nil,
      userInfo: [UIApplicationLaunchOptionsURLKey: url])
    NSNotificationCenter.defaultCenter().postNotification(notification)
    return true
}
This fires the notification that the observer you registered in share() listens for, in order to extract the authorization code from the URL.
Build and run your project one final time. It works again! Just like before, but using AFOAuth2Manager this time!
You can download the final Incognito AFOAuth2Manager project here.
One thing you haven’t looked at is how to store those precious access and refresh tokens you receive as part of the OAuth2 dance. Where do you store them? How do you refresh an expired access token? Can you revoke your grants?
The best way to store them is… in the Keychain, of course! :]
This is the default strategy adopted by AFOAuthCredential (from AFOAuth2Manager) and OAuth2Session (from AeroGear).
If you would like to read more about the keychain, then I recommend reading our other tutorials on the subject.
To refresh the access token, you simply make an HTTP call to an access token endpoint and pass the refresh token as parameter.
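As a sketch, the refresh request defined by the OAuth2 specification (RFC 6749, Section 6) is a POST to the token endpoint carrying these form parameters; all concrete values below are placeholders:

```swift
// The token endpoint is the same one used earlier in this tutorial to
// exchange the authorization code for tokens.
let tokenEndpoint = "https://accounts.google.com/o/oauth2/token"

// Standard refresh-grant form parameters per RFC 6749, Section 6.
// Every value here is a placeholder.
let refreshParameters = [
    "grant_type": "refresh_token",
    "refresh_token": "STORED_REFRESH_TOKEN",
    "client_id": "YOUR_GOOGLE_CLIENT_ID",
    "client_secret": "YOUR_GOOGLE_CLIENT_SECRET"
]
// The JSON response contains a fresh access_token (with its expires_in),
// which replaces the expired one; the refresh token itself is reused.
```

The libraries in this tutorial issue this request for you behind the scenes when they detect an expired access token.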
For example, AeroGear leaves it up to the library to determine whether the token is still valid, using the requestAccess:completionHandler: method.
Token revocation is covered by a separate OAuth2 specification, which makes it possible to revoke tokens individually or all at once. Most providers revoke both access and refresh tokens at the same time.
You covered not one, not two, but three open source libraries that implement OAuth2 – and hopefully learned a little more about how OAuth2 works under the hood.
Maybe now you’re ready to read the OAuth2 specification, RFC6749?! OK, maybe not. It’s a beast of a document! But at least you now understand the fundamentals and how it relates to your app.
I hope you use one of them in your app. Once you’ve picked your favorite open source OAuth2 library, contributing to it is essential. If you notice a bug, report an issue. If you know how to fix it, even better – propose a pull request.
If you have any comments or questions about this tutorial, please join the forum discussion below!
OAuth 2.0 with Swift Tutorial is a post from: Ray Wenderlich
The post OAuth 2.0 with Swift Tutorial appeared first on Ray Wenderlich.
Learn about protocol extensions, a new feature in Swift 2 that will make you rethink how you design your code.
Review what you learned in this series and find out where you should look next as you continue to explore Swift 2.
Note from Ray: At our recent RWDevCon tutorial conference, in addition to hands-on tutorials, we also had a number of “inspiration talks” – non-technical talks with the goal of giving you a new idea, some battle-won advice, and leaving you excited and energized.
We recorded these talks so that you can enjoy them, even if you didn’t get to attend the conference. Here’s our next talk – Identity by Alexis Gallagher – I hope you enjoy!
“I enter the teletransporter. I’ve been to Mars before, but only by the old method: a spaceship journey taking several weeks. This machine will send me at the speed of light. I merely need to push the green button.”
“Like others, I’m nervous. Will it work? I remind myself what I have been told to expect. When I push the button, the scanner here on Earth will destroy my brain and body while exactly recording the state of all my cells. It will transmit this information by radio.”
“Traveling at the speed of light, the message will take about an hour to reach Mars where the replicator will create out of new matter, a brain and body exactly like mine. It’s in this body that I will awake.”
“I believe this is what will happen, but still I hesitate, but then I remember my wife grinning this morning when I confessed to her my hesitation about it. She reminded me she has often been teletransported, and there is nothing wrong with her.”
“I press the green button. As predicted, I lose consciousness and seem a moment later to regain consciousness, but in a different cubicle. Examining my body, I can find no change at all. Even the cut on my upper lip where I was shaving this morning is still the same.” (Taken from Reasons and Persons by Derek Parfit.)
Hi, I'm Alexis. I haven't actually been in a teletransporter, but I noticed a really interesting parallel that I think is worth discussing. In this talk, I want to tell you about that parallel, and then I also want to offer what's really a personal reflection on it.
Some of this is a bit philosophical. Some of it is a bit personal. I hope some part of it speaks to you.
Derek Parfit wrote the passage I just read, which begins a discussion of the transporter problem. He wrote this in a book called Reasons and Persons in a chapter called “What We Believe Ourselves To Be.”
Parfit worked on that book for 15 years, and when it came out, everyone loved it. Philosophers have said that it’s the most significant work in moral philosophy since 1837.
I first heard Parfit lecturing about 20 years ago, and in person he has exactly the sort of unworldly air you would expect from a legendary moral philosopher. He has dramatic white hair. You could say he's sort of an alpha nerd in the world of moral philosophy.
The transporter problem is intended to clarify ideas about personal identity. Remember, the transporter consists of a scanner and a replicator. The scanner destroys your brain and body, turning it all into information. Basically, it’s like NSCoding, but for your body.
The question is, how do you feel about NSCoding for your body? When you walk into the transporter, do you hesitate before you push the green button, or is that something you are happy to do?
Do you feel like the transporter is killing you and creating a new person, or do you feel a transporter is just moving you by moving your body? Is it the same you coming out of the transporter?
That comes down to what makes two persons the same person. To consider this, Parfit distinguishes between two ways that any two things can be the same.
One way things can be the same is if they are qualitatively identical. This is a piece of jargon from analytical philosophy. This just means that things are exactly alike. They are alike in their observable qualities.
There’s a sense in which they are identical, qualitatively identical, but there’s another sense in which they are not. That other sense is called numerically identical. We say two things are numerically identical if they are actually one and the same thing.
With this distinction in hand, we can restate the transporter problem. Remember, by definition, the transporter preserves the exact state of all your cells. By definition, the person leaving the transporter is qualitatively identical to you. That's what the transporter does.
That leaves the big question. The big question is, is it the same person walking out? Is that person numerically identical to you?
That’s really a question about what kind of thing a person is. From inside, I feel like it’s the same me through time.
I feel like it’s the same me who is trying to pick up the pumpkin. It’s the same me who was in college with an umbrella indoors for some reason. I feel like it’s the same me as the me right now, that all of these are the same entity.
That's the feeling, but what makes that feeling true? Is it just about physical continuity through time, like the same criterion that you would apply to a rock, or is it continuity of my memories? This is what John Locke believed: that what makes us the same person through time is that you can form a chain of memories reaching all the way back to when you first started having them.
Parfit considers many theories. I’m not going to review them all here. I just want to get to showing you the parallel that I promised.
Rich Hickey is the creator of Clojure, which is a very cool Lisp dialect that runs on the JVM. As you can see, like Parfit, he has very dramatic hair. I guess you could say that Rich Hickey is an alpha nerd in the world of somewhat esoteric programming languages.
I was reminded of Rich Hickey and of Parfit’s discussion of identity when I heard Hickey talking about this:
This is Clojure’s model of time. It’s used to model change in the language. Hickey, I think he’s the only person that does this, defines identity as a noun to mean any continuously existing thing in the world, like a person, for example.
Every identity has a state that changes through time. A value, in the language, a value captures a snapshot of the state at some moment. The identity is that loop you can use to see the sequence of values that moves through time.
One of those values is a state. A value is just like the record captured by the scanner here on earth. It’s a snapshot of a moment.
People wonder what value types are for in Swift. Quick parentheses. You can think of value types as things that are really good for expressing values.
If you were going to use the Clojure model, you'd use a reference type to describe an entity, like a person, and that entity holds a value: its qualities right at that moment in time. The entity's state can change, but each value, each snapshot, is immutable.
This is the parallel I noticed.
Numerically identical is to qualitatively identical just as same identity is to equal value.
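That parallel maps directly onto Swift's own type system. Here's a small illustrative sketch (the PersonSnapshot and Person types are invented for this example): a struct plays the role of a value, a class plays the role of an identity, == tests qualitative identity, and === tests numerical identity.

```swift
// A value type captures qualities: a snapshot of state at a moment.
struct PersonSnapshot: Equatable {
    var name: String
    var age: Int
}

// A reference type models a continuously existing identity whose
// state moves through a sequence of values.
final class Person {
    var state: PersonSnapshot
    init(state: PersonSnapshot) { self.state = state }
}

let snapshot = PersonSnapshot(name: "Alexis", age: 16)
let original = Person(state: snapshot)
let replica = Person(state: snapshot)   // what the teletransporter builds

// Qualitatively identical: the values are equal...
assert(original.state == replica.state)
// ...but not numerically identical: two distinct identities.
assert(original !== replica)
```

The replica answers every question about its qualities exactly as the original would, yet the identity operator can still tell them apart, which is precisely the distinction Parfit is drawing.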
Later I learned that the Clojure model is actually echoing earlier work, I think.
This gentleman here is Eric Evans. He wrote a nice book called Domain-Driven Design around 2005.
He talks about the distinction between entities and value objects. It’s a very similar distinction. I think in retrospect, it’s not too surprising. You arrive at this kind of distinction between an entity and a value or a snapshot of it as soon as you try to specify exactly what is staying the same about something that might be completely changing.
What’s the answer? What is staying the same about something that might be completely changing? To be honest, I don’t know.
I think with the Clojure model, Parfit's model and what Evans has to say, I understand exactly what they are saying when they are talking about the snapshots, when they are talking about values or moments or qualities. But I feel like all of these models get slippery and tricky exactly when they try to nail down conceptually what identity is.
One of my takeaways is that identity is a fundamentally subtle and slightly problematic concept. When I was looking into this, I found the Stanford Encyclopedia of Philosophy has 10,000 words just defining “identity” in general, and then another 10,000 before it handles the special case of identity as it concerns people.
That’s why this parallel speaks to me because what it suggests is that the niggling puzzles that we encounter when we are trying to do data modeling in our applications are not trivialities. They are actually the protruding, tiny edge of much more profound, subtle issues under the surface that people can wrestle with and have thought about for a long time.
Let me offer my personal perspective on this also. This is a picture of San Francisco, specifically these are The Painted Ladies.
They are Victorian houses that people probably know from the opening of Full House.
This is near Alamo Park, which is not far from where I live. I grew up in San Francisco. Then I moved away, and after about 20 years, I moved back. That was about a year and a half ago.
When I moved back, for months, I had the eerie sort of spooky feeling that I was not in the real San Francisco anymore. In other words, it seemed like someone had created a perfect replica of San Francisco.
The old and new San Franciscos seemed to be equal, but not the same. It seemed like the new San Francisco was qualitatively identical. The streets hadn’t changed that much, but it wasn’t numerically identical, like the essence of San Francisco somehow was gone from it, like it was a perfect replica.
I think this is not an uncommon experience. We often have it connected to places that were really powerful early in our lives, like our home from childhood or a place where we had powerful friendships that really touched us.
I think it’s because of a feeling that was in us, at that age, and then when we get older, our feelings change and we come back and our feelings don’t match the place anymore. Then the place seems like its soul is missing.
It seems eerie, like meeting a familiar friend who doesn't recognize you anymore. When we visit the place again, our own emotions are different, and so there's no fit. It's nostalgic, and it can be unpleasant.
It’s not just about places. When I was walking around San Francisco having this feeling, I also had the feeling that I wasn’t myself anymore. I had the feeling that I wasn’t the real Alexis, almost as if I had been replicated.
Like it wasn’t me; like I was the ghost of myself. It kind of sucked. Why was I feeling that way?
Before I say that, let me show one more gentleman. Does anyone recognize this man?
Yeah, I see nods. This is Richard Feynman, the Nobel prize-winning physicist. He also wrote a very nice book of anecdotes which made him very popular. Lots of people read it.
Of course, he’s a canonical alpha nerd with dramatic hair. He won a Nobel prize. You can’t get more alpha nerd than that. Pretty good hair!
I think the reason I was having the feelings I was is because the last time I lived in San Francisco, I was graduating high school, and at that time, I was 100% obsessed with theoretical physics. Maybe I was 200% obsessed with theoretical physics.
In my mind, I thought Feynman was the coolest thing in the world. San Francisco, for me, for 20 years, had been captured in wax as a place that had these emotions of child Alexis, really interested in physics, wanting to be a physicist. Now I was coming back, and I was grown man Alexis with a family making software.
It was that mismatch that felt disturbing. It was that mismatch that made me feel like this wasn’t the real me. This wasn’t how things were imagined at some point.
I'm going to quote Derek Parfit once more, from his discussion of different kinds of identity. He describes one possible notion here: "Suppose that an artist paints a self-portrait, and then by repainting turns this into a portrait of his father. Even though these portraits are more similar than a caterpillar and a butterfly, they are not stages in the continued existence of a single painting."
“The self-portrait is a painting that the artist destroyed. In a general discussion of identity, we would need to explain why the requirements of physical continuity differs in such ways for different kinds of things.”
Parfit dismisses that notion of identity. He raises it just to blow it off as being irrelevant for people, because he’s got this very philosophical thing he’s trying to get to, a person as a kind of object, and what does that mean?
Actually, that description strikes me as a very evocative description of the normal process of aging and moving through life. Your story about yourself changes. You erase pieces of yourself.
If the changes are big enough, you are likely to feel that you have become a new kind of thing at some point. Are you being turned into a butterfly, or are you being destroyed?
Or, is it the same thing? How do you know?
These are CT scans of a caterpillar in its chrysalis.
I think if you asked a caterpillar if it wanted to do this before the process started, it would probably say, “No thank you.” It would say, “This looks like what you are describing is severe damage to my internal organs over ten days. This is a bad idea.”
To be honest, frustration and disappointment are also part of this story, part of how we interpret these things. I’m sure if I had been walking around San Francisco and I met my 16-year-old self, his reaction would be, “What? You’re not a physicist? What happened?”
At the same time, he’d also think iPhones were pretty cool. iPhones didn’t exist when I was 16. It really doesn’t seem like he had all the information necessary to make all the decisions about how my life was supposed to go and how the world is supposed to be.
This also reminds me of discussions you see about impostor syndrome.
This usually comes up in the context of groups that are under-represented in technology, but of course, it’s a universal experience of the human condition:
These feelings are available to everybody.
For me, personally, I can tell you that roughly every four to six months I have this experience. I am working on something, and I have the feeling that I have no idea what I’m doing, because I discover something very basic that I should have known, and somehow I never managed to learn it. At those moments, I have feelings of paranoia and frustration and anger with myself that I still don’t know how to do this stuff right.
Then I reflect that every four to six months I’m doing something substantially new, so I’m always operating in an area where, to some extent, I don’t know what I’m doing. It’s a big world. That’s part of it.
That doesn’t change the feeling. I just need to notice the feeling and not take it too seriously and remind myself that there are other reasons to believe I’m not an idiot, even though I feel like I’m an idiot at those points.
Part of me still wonders, would I feel more real, would I not have these feelings, if I had dramatic hair and was an alpha nerd in a black and white photo? But the alpha nerd snapshots don’t show what it feels like inside the snapshot.
Even Feynman in his books writes about periods of feeling like an impostor when he was starting his teaching responsibilities for the first time and was also trying to do research and was developing a curriculum. He writes about running away into the library and reading fairy tales of One Thousand And One Nights because he was too stressed out by it all.
What does all this add up to? Let me try to wrap it up with three claims I can offer.
So don’t waste time feeling like an impostor. If at this moment, you are doing the work, then you are actually doing the work. It’s as simple as that.
There’s no use resisting change because it’s still happening. Every moment, the cells inside you are being modified and recycled, and some are dying. The matter that you are constituted of, some of that you are shedding. You are taking in new matter as you eat.
You are being destroyed and re-created moment-to-moment even if you are just standing still. Even if you do not step into the transporter, time is a transporter. It’s destroying and re-creating you.
My advice, and I realize this sounds very cheesy, but I believe it, is to embrace that. Step into the transporter, press the green button, go somewhere new. Go to Mars.
You don’t have a choice because you are moving forward anyway.
That’s my thought on identity.
RWDevCon Inspiration Talk – Identity by Alexis Gallagher is a post from: Ray Wenderlich
A challenge is waiting for you at the end of each instructional video in this series. Be sure to give them each a try!
Video Tutorial: Introducing Stack Views: Series Introduction is a post from: Ray Wenderlich
Who needs unit tests? Not you — your code is perfect. Umm…so you’re just reading this tutorial for your “friend” who needs to learn more about writing unit tests in Swift, right? Right. :]
Unit tests are a great way to write better code; tests help you find most of the bugs early on in the process, but more importantly, writing code in a test-based development mindset helps you write modular code that’s easy to maintain. As a rule of thumb: if your code isn’t easy to test, it’s not going to be easy to maintain or debug.
Unit tests deal with isolated “micro features”. Often you need to mock classes — that is, provide fake yet functional implementations — to isolate a specific micro feature so it can be tested. In Objective-C there are several third-party frameworks that help with mocking and stubbing, but those rely on introspection, which isn’t yet available in Swift. Someday, hopefully! :]
In this tutorial you’ll learn how to write your own mocks, fakes and stubs to test a simple app that helps you remember your friends’ birthdays.
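Before diving in, it helps to see two of these test doubles side by side. The protocol and classes below are hypothetical, not part of the sample app; they only illustrate the difference between a stub that returns canned data and a mock that records calls:

```swift
// A hypothetical dependency the code under test talks to.
protocol BirthdayService {
    func nextBirthday() -> String
}

// A stub returns a canned answer so the test can proceed.
class BirthdayServiceStub: BirthdayService {
    func nextBirthday() -> String {
        return "John Appleseed"
    }
}

// A mock records that it was called so the test can verify the interaction.
class BirthdayServiceMock: BirthdayService {
    var nextBirthdayGotCalled = false
    func nextBirthday() -> String {
        nextBirthdayGotCalled = true
        return ""
    }
}
```

A fake goes one step further: it’s a working, lightweight implementation, like the in-memory Core Data store you’ll build later in this tutorial.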
Download the starter project here; this is a basic contacts app that can be hooked up to a web backend. You won’t work on the core app functionality; rather, you’ll write some tests for it to make sure it behaves as expected.
Build and run your app to see how it works. Tap the plus sign and add good ol’ John Appleseed to your list:
The sample app uses Core Data to store your contacts.
Don’t panic! :] You don’t need any experience with Core Data for this tutorial; there’s no rocket science involved.
Note: If you do want to become a Core Data master, you can get started by reading this Core Data: Getting Started tutorial.
When it comes to testing, there’s good news and bad news. The bad news is that unit tests have their disadvantages: they take time to write, they have to be maintained alongside the code they test, and they only cover the cases you think to test.
Although there is no silver bullet, there is a silver lining — testing has real advantages: tests document the expected behavior of your code, they give you confidence when refactoring, and they catch regressions early, before your users do.
A lot of the code in the sample app is based on the Master-Detail Application template with Core Data enabled. But there are some significant improvements over the template code. Open the sample project in Xcode and have a look at the project navigator:
Take note of the following details:

The Person class is an NSManagedObject that contains some basic information about each person. The PersonInfo struct contains the same information, but can be instantiated from the address book.

The file collection in the PeopleList group is an attempt to avoid massive view controllers. It’s good practice to avoid massive view controllers by moving some responsibilities into other classes that communicate with the view controllers via a simple protocol. You can learn more about massive view controllers and how to avoid them by reading this interesting, albeit older, article.
In this case, the protocol is defined in PeopleListDataProviderProtocol.swift; open it and have a look. A class conforming to this protocol must have the properties managedObjectContext and tableView, and must define the methods addPerson(_:) and fetch(). In addition, it must conform to the UITableViewDataSource protocol.
The view controller PeopleListViewController has a property dataProvider, which conforms to PeopleListDataProviderProtocol. This property is set to an instance of PeopleListDataProvider in AppDelegate.swift.
You add people to the list using ABPeoplePickerNavigationController. This class lets you, the developer, access the user’s contacts without requiring explicit permission.
PeopleListDataProvider is responsible for filling the table view and for talking to the Core Data persistent store.
Note: Several classes and methods in the starter project are declared as public so the test target can access them, since the test target is outside of the app module. If you don’t add any access modifier, classes and methods are declared as internal, which means they are only accessible within the same module. To access them from outside the module (for example, from the test target), you need to add the public access modifier.
That’s enough overview — time to start writing some tests!
Mocks let you check if a method call is performed or if a property is set when something happens in your app. For example, in viewDidLoad() of PeopleListViewController, the table view is set to the tableView property of the dataProvider.
You’ll write a test to check that this actually happens.
First, you need to prepare the project to make testing possible.
Select the project in the project navigator, then select Build Settings in the Birthdays target. Search for Defines Module, and change the setting to Yes as shown below:
Next, select the BirthdaysTests folder and go to File\New\File…. Select the iOS\Source\Test Case Class template, click Next, name it PeopleListViewControllerTests, ensure you’re creating a Swift file, click Next, then finally click Create.
If Xcode prompts you to create a bridging header, select No. This is a bug in Xcode that occurs when there is no file in the target and you add a Swift file.
Open the newly created PeopleListViewControllerTests.swift. Import the module you just enabled by adding the import Birthdays statement right after the other import statements as shown below:

import UIKit
import XCTest
import Birthdays
Remove the following two template test methods:
func testExample() {
  // This is an example of a functional test case.
  XCTAssert(true, "Pass")
}

func testPerformanceExample() {
  // This is an example of a performance test case.
  self.measureBlock() {
    // Put the code you want to measure the time of here.
  }
}
You now need an instance of PeopleListViewController so you can use it in your tests.

Add the following line to the beginning of PeopleListViewControllerTests:

var viewController: PeopleListViewController!
Replace the setUp() method with the following code:

override func setUp() {
  super.setUp()
  viewController = UIStoryboard(name: "Main", bundle: nil).instantiateViewControllerWithIdentifier("PeopleListViewController") as! PeopleListViewController
}
This uses the main storyboard to create an instance of PeopleListViewController and assigns it to viewController.
Select Product\Test; Xcode builds the project and runs any existing tests. Although you don’t have any tests yet, this is a good way to ensure everything is set up correctly. After a few seconds, Xcode should report that all tests succeeded.
You’re now ready to create your first mock.
Since you’re going to be working with Core Data, add the following import to the top of PeopleListViewControllerTests.swift, right below import Birthdays:

import CoreData
Next, add the following code within the class definition of PeopleListViewControllerTests:

class MockDataProvider: NSObject, PeopleListDataProviderProtocol {
  var managedObjectContext: NSManagedObjectContext?
  weak var tableView: UITableView!

  func addPerson(personInfo: PersonInfo) { }
  func fetch() { }

  func tableView(tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
    return 1
  }

  func tableView(tableView: UITableView, cellForRowAtIndexPath indexPath: NSIndexPath) -> UITableViewCell {
    return UITableViewCell()
  }
}
This looks like quite a complicated mock class, but it’s just the bare minimum required. You’re going to set an instance of this mock class to the dataProvider property of PeopleListViewController, so your mock class has to conform to the PeopleListDataProviderProtocol as well as the UITableViewDataSource protocol.
Select Product\Test; your project will compile again and your zero tests will run with zero failures. Sorry — that doesn’t count as a 100% pass rate. :] But now you have everything set up for the first unit test using a mock.
It’s good practice to separate unit tests into three parts, called given, when and then. ‘Given’ sets up the environment; ‘when’ executes the code you want to test; and ‘then’ checks for the expected result.
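As a skeleton, a given/when/then test reads like the following; the Calculator type here is a made-up example, not part of the sample app:

```swift
func testAddingTwoNumbersReturnsTheirSum() {
  // given: set up the environment
  let calculator = Calculator()

  // when: execute the code you want to test
  let result = calculator.add(2, 3)

  // then: check for the expected result
  XCTAssertEqual(result, 5, "2 + 3 should equal 5")
}
```

Keeping the three sections visually separate makes it obvious at a glance what each test sets up, exercises and verifies.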
Your test will check that the tableView property of the data provider is set after viewDidLoad() has been executed.

Add the following test to PeopleListViewControllerTests:

func testDataProviderHasTableViewPropertySetAfterLoading() {
  // given
  // 1
  let mockDataProvider = MockDataProvider()
  viewController.dataProvider = mockDataProvider

  // when
  // 2
  XCTAssertNil(mockDataProvider.tableView, "Before loading the table view should be nil")

  // 3
  let _ = viewController.view

  // then
  // 4
  XCTAssertTrue(mockDataProvider.tableView != nil, "The table view should be set")
  XCTAssert(mockDataProvider.tableView === viewController.tableView,
    "The table view should be set to the table view of the data source")
}
Here is what the above test is doing:

1. Creates an instance of MockDataProvider and sets it to the dataProvider property of the view controller.
2. Asserts that the tableView property is nil before the test.
3. Accesses the view of the view controller, which triggers viewDidLoad().
4. Asserts that the tableView property is not nil and that it is set to the tableView of the view controller.

Select Product\Test again; once the tests have finished, open the test navigator (Cmd+5 is a handy shortcut). You should see something like the following:
Your first test with a mock passed with flying colors! :]
The next test is to ensure selecting a contact from the list calls addPerson(_:) of the data provider.

Add the following property to the MockDataProvider class:

var addPersonGotCalled = false
Next, change addPerson(_:) to the following:

func addPerson(personInfo: PersonInfo) {
  addPersonGotCalled = true
}
Now when you call addPerson(_:), you’ll register this in an instance of MockDataProvider by setting addPersonGotCalled to true.
You’ll have to import the AddressBookUI framework before you can add a method to test this behavior.
Add the following import right below the other imports in PeopleListViewControllerTests.swift:
import AddressBookUI
Now add the following test method with the rest of the test cases:
func testCallsAddPersonOfThePeopleDataSourceAfterAddingAPerson() {
  // given
  let mockDataSource = MockDataProvider()

  // 1
  viewController.dataProvider = mockDataSource

  // when
  // 2
  let record: ABRecord = ABPersonCreate().takeRetainedValue()
  ABRecordSetValue(record, kABPersonFirstNameProperty, "TestFirstname", nil)
  ABRecordSetValue(record, kABPersonLastNameProperty, "TestLastname", nil)
  ABRecordSetValue(record, kABPersonBirthdayProperty, NSDate(), nil)

  // 3
  viewController.peoplePickerNavigationController(ABPeoplePickerNavigationController(),
    didSelectPerson: record)

  // then
  // 4
  XCTAssert(mockDataSource.addPersonGotCalled, "addPerson should have been called")
}
So what’s going on here?
1. Sets the mock data provider as the data provider of the view controller.
2. Creates a contact record with ABPersonCreate() and fills in test values for the first name, last name and birthday.
3. Calls the delegate method peoplePickerNavigationController(_:didSelectPerson:) with the test record. Normally, calling delegate methods manually is a code smell, but it’s fine for testing purposes.
4. Asserts that addPerson(_:) was called by checking that addPersonGotCalled of the data provider mock is true.
Select Product\Test to run the tests — they should all pass. Hey, this testing thing is pretty easy!
But wait! How do you know that the tests actually test what you think they’re testing?
A quick way to check that a test is actually validating something is to remove the entity that the test validates.
Open PeopleListViewController.swift and comment out the following line in peoplePickerNavigationController(_:didSelectPerson:):

dataProvider?.addPerson(person)
Run the tests again; the last test you wrote should now fail. Cool — you now know that your test is actually testing something. It’s good practice to test your tests; at the very least you should test your most complicated tests to be sure they work.
Un-comment the line to get the code back to a working state; run the tests again to make sure everything is working.
You may have used singletons such as NSNotificationCenter.defaultCenter() and NSUserDefaults.standardUserDefaults() — but how would you test that a notification is actually sent or that a default is set? Apple doesn’t allow you to inspect the state of these classes.
You could add the test class as an observer for the expected notifications. But this might cause your tests to become slow and unreliable since they depend on the implementation of those classes. Or the notification could be sent from another part of your code, and you wouldn’t be testing an isolated behavior.
To get around these limitations, you can use mocks in place of these singletons.
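The pattern that makes this possible is simple: instead of calling the singleton inline, the code under test holds it in a property that defaults to the real object, and a test overwrites that property with a mock. The sketch below is illustrative (the class name is made up), but this is the same technique the sample app’s PeopleListViewController uses for its userDefaults property:

```swift
class SortableViewController: UIViewController {
  // Production code gets the real singleton by default...
  var userDefaults = NSUserDefaults.standardUserDefaults()

  @IBAction func changeSorting(sender: UISegmentedControl) {
    // ...but because all access goes through the property,
    // a test can inject a mock user defaults instance first.
    userDefaults.setInteger(sender.selectedSegmentIndex, forKey: "sort")
  }
}
```

This technique, dependency injection, is what you’ll use in the next test.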
Build and run your app; add John Appleseed and David Taylor to the list of people and toggle the sorting between ‘Last Name’ and ‘First Name’. You’ll see that the order of the contacts in the list depends on the sort order of the table view.
The code that’s responsible for sorting lives in changeSorting(_:) in PeopleListViewController.swift:

@IBAction func changeSorting(sender: UISegmentedControl) {
  userDefaults.setInteger(sender.selectedSegmentIndex, forKey: "sort")
  dataProvider?.fetch()
}
This adds the selected segment index for the key sort to the user defaults and calls fetch() on the data provider. fetch() should read this new sort order from the user defaults and update the contact list, as demonstrated in PeopleListDataProvider:

public func fetch() {
  let sortKey = NSUserDefaults.standardUserDefaults().integerForKey("sort") == 0 ? "lastName" : "firstName"
  let sortDescriptor = NSSortDescriptor(key: sortKey, ascending: true)
  let sortDescriptors = [sortDescriptor]
  fetchedResultsController.fetchRequest.sortDescriptors = sortDescriptors

  var error: NSError? = nil
  if !fetchedResultsController.performFetch(&error) {
    println("error: \(error)")
  }

  tableView.reloadData()
}
PeopleListDataProvider uses an NSFetchedResultsController to fetch data from the Core Data persistent store. To change the sorting of the list, fetch() creates an array of sort descriptors and sets it on the fetch request of the fetched results controller. Then it performs a fetch to update the list and calls reloadData() on the table view.
You’ll now add a test to ensure the user’s preferred sort order is correctly set in NSUserDefaults.

Open PeopleListViewControllerTests.swift and add the following class definition right below the class definition of MockDataProvider:

class MockUserDefaults: NSUserDefaults {
  var sortWasChanged = false
  override func setInteger(value: Int, forKey defaultName: String) {
    if defaultName == "sort" {
      sortWasChanged = true
    }
  }
}
MockUserDefaults is a subclass of NSUserDefaults; it has a boolean property sortWasChanged with a default value of false. It also overrides the method setInteger(_:forKey:) to change the value of sortWasChanged to true.
Add the following test below the last test in your test class:
func testSortingCanBeChanged() {
  // given
  // 1
  let mockUserDefaults = MockUserDefaults(suiteName: "testing")!
  viewController.userDefaults = mockUserDefaults

  // when
  // 2
  let segmentedControl = UISegmentedControl()
  segmentedControl.selectedSegmentIndex = 0
  segmentedControl.addTarget(viewController, action: "changeSorting:",
    forControlEvents: .ValueChanged)
  segmentedControl.sendActionsForControlEvents(.ValueChanged)

  // then
  // 3
  XCTAssertTrue(mockUserDefaults.sortWasChanged, "Sort value in user defaults should be altered")
}
Here’s the play-by-play of this test:

1. Assigns an instance of MockUserDefaults to userDefaults of the view controller; this technique is known as dependency injection.
2. Creates an instance of UISegmentedControl, adds the view controller as the target for the .ValueChanged control event and sends the event.
3. Asserts that setInteger(_:forKey:) of the mock user defaults was called. Note that you don’t check if the value was actually stored in NSUserDefaults, since that’s an implementation detail.

Run your suite of tests — they should all succeed.
What about the case when you have a really complicated API or framework underneath your app, but all you really want to do is test a small feature without delving deep into the framework?
That’s when you “fake” it ’till you make it! :]
Fakes behave like a full implementation of the classes they are faking. You use them as stand-ins for classes or structures that are too complicated to deal with for the purposes of your test.
In the case of the sample app, you don’t want to add records to and fetch them from the real Core Data persistent store in your tests. So instead, you’ll fake the Core Data persistent store.
Select the BirthdaysTests folder and go to File\New\File…. Choose the iOS\Source\Test Case Class template and click Next. Name your class PeopleListDataProviderTests, click Next and then Create.
Again, remove the following demo tests in the created test class:

func testExample() {
  // ...
}

func testPerformanceExample() {
  // ...
}
Add the following two imports to your class:
import Birthdays
import CoreData
Now add the following properties:
var storeCoordinator: NSPersistentStoreCoordinator!
var managedObjectContext: NSManagedObjectContext!
var managedObjectModel: NSManagedObjectModel!
var store: NSPersistentStore!

var dataProvider: PeopleListDataProvider!
Those properties contain the major components of the Core Data stack. To get started with Core Data, check out our Core Data Tutorial: Getting Started.
Add the following code to setUp():
// 1
managedObjectModel = NSManagedObjectModel.mergedModelFromBundles(nil)
storeCoordinator = NSPersistentStoreCoordinator(managedObjectModel: managedObjectModel)
store = storeCoordinator.addPersistentStoreWithType(NSInMemoryStoreType,
  configuration: nil, URL: nil, options: nil, error: nil)

managedObjectContext = NSManagedObjectContext()
managedObjectContext.persistentStoreCoordinator = storeCoordinator

// 2
dataProvider = PeopleListDataProvider()
dataProvider.managedObjectContext = managedObjectContext
Here’s what’s going on in the code above:
setUp()
creates a managed object context with an in-memory store. Normally the persistent store of Core Data is a file in the file system of the device. For these tests, you are creating a ‘persistent’ store in the memory of the device.PeopleListDataProvider
and the managed object context with the in-memory store is set as its managedObjectContext
. This means your new data provider will work like the real one, but without adding or removing objects to the persistent store of the app.Add the following two properties to PeopleListDataProviderTests
:
var tableView: UITableView! var testRecord: PersonInfo! |
Now add the following code to the end of setUp():

let viewController = UIStoryboard(name: "Main", bundle: nil).instantiateViewControllerWithIdentifier("PeopleListViewController") as! PeopleListViewController
viewController.dataProvider = dataProvider
tableView = viewController.tableView

testRecord = PersonInfo(firstName: "TestFirstName", lastName: "TestLastName", birthday: NSDate())
This sets up the table view by instantiating the view controller from the storyboard, and creates an instance of PersonInfo that will be used in the tests.
When the test is done, you’ll need to discard the managed object context.
Replace tearDown() with the following code:

override func tearDown() {
  managedObjectContext = nil

  var error: NSError? = nil
  XCTAssert(storeCoordinator.removePersistentStore(store, error: &error),
    "couldn't remove persistent store: \(error)")

  super.tearDown()
}
This code sets managedObjectContext to nil to free up memory and removes the persistent store from the store coordinator. This is just basic housekeeping: you want to start each test with a fresh test store.
Now you can write the actual test! Add the following test to your test class:
func testThatStoreIsSetUp() {
  XCTAssertNotNil(store, "no persistent store")
}
This test checks that the store is not nil. It’s a good idea to have this check here so you fail early in case the store could not be set up.
Run your tests — everything should pass.
The next test will check whether the data source provides the expected number of rows.
Add the following test to the test class:
func testOnePersonInThePersistantStoreResultsInOneRow() {
  dataProvider.addPerson(testRecord)

  XCTAssertEqual(tableView.dataSource!.tableView(tableView, numberOfRowsInSection: 0), 1,
    "After adding one person number of rows is not 1")
}
First, you add a contact to the test store, then you assert that the number of rows is equal to 1.
Run the tests — they should all succeed.
By creating a fake “persistent” store that never writes to disk, you can keep your tests fast and your disk clean, while maintaining the confidence that when you actually run your app, everything will work as expected.
In a real test suite you could also test the number of sections and rows after adding two or more test contacts; this all depends on the level of confidence you’re attempting to reach in your project.
If you’ve ever worked with several teams at once on a project, you know that not all parts of the project are ready at the same time — but you still need to test your code. But how can you test a part of your code against something that may not exist, such as a web service or other back-end provider?
Stubs to the rescue! :]
Stubs fake a response to method calls of an object. You’ll use stubs to test your code against a web service that isn’t yet finished.
The web team for your project has been tasked with building a website with the same functionality as the app. The user creates an account on the website and can then synchronize the data between the app and the website. But the web team hasn’t even started – and you’re nearly done. Looks like you’ll have to write a stub to stand in for the web backend.
In this section you will focus on two test methods: one for fetching contacts added to the website, and one to post contacts from your app to the website. In a real-world scenario you’d also need some kind of login mechanism and error handling, but that’s beyond the scope of this tutorial.
Open APICommunicatorProtocol.swift; this protocol declares the two methods for getting contacts from the web service and for posting contacts to the web service.
You could pass around Person instances, but this would require you to use another managed object context; using a struct is simpler in this case.
Now open APICommunicator.swift. APICommunicator conforms to APICommunicatorProtocol, but right now there’s just enough implementation to keep the compiler happy.
You’ll now create stubs to support the interaction of the view controller with an instance of APICommunicator.
Open PeopleListViewControllerTests.swift and add the following class definition within the PeopleListViewControllerTests class:

// 1
class MockAPICommunicator: APICommunicatorProtocol {
  var allPersonInfo = [PersonInfo]()
  var postPersonGotCalled = false

  // 2
  func getPeople() -> (NSError?, [PersonInfo]?) {
    return (nil, allPersonInfo)
  }

  // 3
  func postPerson(personInfo: PersonInfo) -> NSError? {
    postPersonGotCalled = true
    return nil
  }
}
There are a few things to note here:

1. While APICommunicator is a struct, the mock implementation is a class. It’s more convenient to use a class in this case because your tests require you to mutate data, and that’s a little easier to do in a class than in a struct.
2. getPeople() returns what is stored in allPersonInfo. Instead of going out on the web and having to download or parse data, you just store contact information in a simple array.
3. postPerson(_:) sets postPersonGotCalled to true.

You’ve just created your “web API” in under 20 lines of code! :]
Now it’s time to test your stub API by ensuring all contacts that come back from the API are added to the persistent store on the device when you call addPerson().
Add the following test method to PeopleListViewControllerTests:

func testFetchingPeopleFromAPICallsAddPeople() {
  // given
  // 1
  let mockDataProvider = MockDataProvider()
  viewController.dataProvider = mockDataProvider

  // 2
  let mockCommunicator = MockAPICommunicator()
  mockCommunicator.allPersonInfo = [PersonInfo(firstName: "firstname", lastName: "lastname",
    birthday: NSDate())]
  viewController.communicator = mockCommunicator

  // when
  viewController.fetchPeopleFromAPI()

  // then
  // 3
  XCTAssert(mockDataProvider.addPersonGotCalled, "addPerson should have been called")
}
Here’s what’s going on in the above code:

1. Creates the mockDataProvider and mockCommunicator you’ll use in the test and sets them on the view controller.
2. Calls fetchPeopleFromAPI() to make a fake network call.
3. Asserts that addPerson(_:) was called.

Build and run your tests — all should pass.
Download the final project here; this version also includes some extra tests that didn’t make it into the tutorial.
You’ve learned how to write mocks, fakes and stubs to test micro features in your app, along with getting a sense how XCTest works in Swift.
The tests in this tutorial are only a starter; I’m sure you already have ideas for tests in your own projects.
For more on unit testing, check out Test Driven Development (TDD) and Behavior Driven Development (BDD). Both are development methodologies (and, frankly, a whole new mindset) where you write the tests before you write the code.
You can listen to tutorial team member Ellen Shapiro discuss unit testing in the official Ray Wenderlich podcast.
Unit Tests are only one part of a complete test suite; integration tests are the next logical step. An easy way to start working with integration tests is UIAutomation. It’s well worth the read if you’re serious about testing your apps — and you should be! :]
If you have any comments or questions about this tutorial, feel free to join the discussion in the forum below!
Unit Testing Tutorial: Mocking Objects is a post from: Ray Wenderlich
Learn the basics of stack views by building one using interface builder.
Video Tutorial: Introducing Stack Views Part 1: Your First Stack View is a post from: Ray Wenderlich
Every decent iOS App out there has custom elements, custom UI, custom animations, etc. Custom, custom, custom!
If you want your app to stand out from the rest, you have to invest time into adding some unique features that will give your app that WOW factor.
In this tutorial, you’ll build a custom text field that makes a sweet little elastic bounce animation when it gets tapped.
You’ll use a number of interesting APIs along the way.
Start by downloading the starter project.
The project is based on the iOS\Application\Single View Application template. It currently has two text fields and one button inside a container view.
Your aim is to give them an elastic bounce when they receive focus. How do you achieve this, you say?
The technique is simple: you’re going to use four control point views and one CAShapeLayer, and then animate the control points with UIView spring animations. While they’re animating, you’ll redraw the shape around their positions.
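To make that idea concrete, here is a sketch of how the redraw step could look. It assumes the four control point views and the elasticShape layer you’ll create in ElasticView shortly; it isn’t part of the starter code:

```swift
// Build a path whose four edges are quadratic curves bent
// through the current control point positions.
private func bezierPathForControlPoints() -> CGPathRef {
  let path = UIBezierPath()

  let width = frame.size.width
  let height = frame.size.height

  // Start at the top-left corner and curve each edge
  // through the matching control point's center.
  path.moveToPoint(CGPoint(x: 0, y: 0))
  path.addQuadCurveToPoint(CGPoint(x: width, y: 0),
    controlPoint: topControlPointView.center)
  path.addQuadCurveToPoint(CGPoint(x: width, y: height),
    controlPoint: rightControlPointView.center)
  path.addQuadCurveToPoint(CGPoint(x: 0, y: height),
    controlPoint: bottomControlPointView.center)
  path.addQuadCurveToPoint(CGPoint(x: 0, y: 0),
    controlPoint: leftControlPointView.center)

  return path.CGPath
}
```

Assigning a path like this to the shape layer on each animation frame is what produces the elastic effect.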
Note: If you’re unfamiliar with the CAShapeLayer class, here is a great tutorial by Scott Gardner that should get you started in no time.
If this all sounds a little complicated, don’t worry! It’s easier than you think.
First, you’re going to create your base elastic view; you’ll embed it in a UITextField as a subview, and you’ll animate this view to give your control the elastic bounce.
Right-click the ElasticUI group in the project navigator and select New File…, then select the iOS/Source/Cocoa Touch Class template. Click Next.
Call the class ElasticView, enter UIView into the Subclass of field and make sure the language is Swift. Click Next and then Create to choose the default location to store the file associated with this new class.
First of all, you need to create four control point views and one CAShapeLayer. Add the code below so you end up with the following class definition:

import UIKit

class ElasticView: UIView {

  private let topControlPointView = UIView()
  private let leftControlPointView = UIView()
  private let bottomControlPointView = UIView()
  private let rightControlPointView = UIView()

  private let elasticShape = CAShapeLayer()

  override init(frame: CGRect) {
    super.init(frame: frame)
    setupComponents()
  }

  required init(coder aDecoder: NSCoder) {
    super.init(coder: aDecoder)
    setupComponents()
  }

  private func setupComponents() {
  }
}
The views and layer can be created straight away. setupComponents()
is a setup method that’s called from all the initialization paths. You’re going to implement it now.
Add the following inside setupComponents()
:
```swift
elasticShape.fillColor = backgroundColor?.CGColor
elasticShape.path = UIBezierPath(rect: self.bounds).CGPath
layer.addSublayer(elasticShape)
```
Here you’re configuring the shape layer: you set its fill color to match the ElasticView’s background color and its path to match the view’s bounds. Finally, you add it to the layer hierarchy.
Next, add the following code at the end of setupComponents()
:
```swift
for controlPoint in [topControlPointView, leftControlPointView,
                     bottomControlPointView, rightControlPointView] {
    addSubview(controlPoint)
    controlPoint.frame = CGRect(x: 0.0, y: 0.0, width: 5.0, height: 5.0)
    controlPoint.backgroundColor = UIColor.blueColor()
}
```
This adds all four control points to your view. To help with debugging, this also changes the background of the control points to blue so they’re easy to see in the simulator. You’ll remove this at the end of the tutorial.
You need to position the control points at the top center, bottom center, left center and right center of the view. That way, as you animate them away from the view, you can use their positions to draw a new path in your CAShapeLayer.
You’ll need to do this quite often, so create a new function to do it. Add the following to ElasticView.swift:
```swift
private func positionControlPoints() {
    topControlPointView.center = CGPoint(x: bounds.midX, y: 0.0)
    leftControlPointView.center = CGPoint(x: 0.0, y: bounds.midY)
    bottomControlPointView.center = CGPoint(x: bounds.midX, y: bounds.maxY)
    rightControlPointView.center = CGPoint(x: bounds.maxX, y: bounds.midY)
}
```
The function moves each control point to the correct position on the view’s edge.
Now call the new function from the end of setupComponents()
:
```swift
positionControlPoints()
```
Before you dive into animations, you’re going to add a view to play around with, so you can see how the ElasticView works. To do this, you’ll add a new view to your storyboard.
Open Main.storyboard, drag a new UIView to your view controller’s view, and set its Custom Class to ElasticView. Don’t worry about setting its position; as long as it’s on the screen, you’ll be able to see what’s going on.
Build and run your project.
Look at that! Four little blue squares — these are the control point views you added in setupComponents().
Now you’re going to use them to create a path on the CAShapeLayer in order to get the elastic look.
Before you delve into the next series of steps, think about how you draw something in 2D — you rely on drawing lines, specifically, straight lines and curves. Before drawing anything, you need to specify a start and end location if you’re drawing a straight line, or multiple locations if you’re drawing something more complex.
These points are CGPoints where you specify x and y in the current coordinate systems.
When you want to draw vector-based shapes like squares, polygons and intricate curved shapes, it gets a little more complex.
To simulate the elastic effect, you’ll draw a quadratic Bézier curve that looks like a rectangle, but with a control point for each side of the rectangle, which gives each side a curve to create an elastic effect.
Bézier curves are named after Pierre Bézier, a French engineer who worked on representing curves in CAD/CAM systems. Take a look at what a quadratic Bézier curve looks like:
The blue circles are your control points, which are the four views you created earlier, and the red dots are the corners of the rectangle.
Note: Apple has an in-depth Class Reference documentation for UIBezierPath. It’s worth checking out if you’d like to drill down into how to create a path.
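If you want to see the math behind that picture, here’s a tiny standalone sketch of the quadratic Bézier formula (the function name is hypothetical, and it’s written in current Swift syntax rather than the project’s Swift 1.x):

```swift
// A quadratic Bézier point at parameter t in 0...1:
// B(t) = (1-t)^2 * start + 2(1-t)t * control + t^2 * end
func quadBezier(t: Double,
                start: (Double, Double),
                control: (Double, Double),
                end: (Double, Double)) -> (x: Double, y: Double) {
    let u = 1.0 - t
    let x = u * u * start.0 + 2 * u * t * control.0 + t * t * end.0
    let y = u * u * start.1 + 2 * u * t * control.1 + t * t * end.1
    return (x, y)
}

// One edge of the rectangle, with the control point nudged 10 points upwards:
let mid = quadBezier(t: 0.5, start: (0, 0), control: (50, -10), end: (100, 0))
// mid == (x: 50.0, y: -5.0): at the midpoint the curve has bulged
// halfway toward the control point
```

The curve passes through start and end but only approaches the control point, which is why moving the control point views outward bends the edges of the shape rather than dragging its corners.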
Now it’s time to put the theory into practice! Add the following method to ElasticView.swift:
```swift
private func bezierPathForControlPoints() -> CGPathRef {
    // 1
    let path = UIBezierPath()

    // 2
    let top = topControlPointView.layer.presentationLayer().position
    let left = leftControlPointView.layer.presentationLayer().position
    let bottom = bottomControlPointView.layer.presentationLayer().position
    let right = rightControlPointView.layer.presentationLayer().position

    let width = frame.size.width
    let height = frame.size.height

    // 3
    path.moveToPoint(CGPointMake(0, 0))
    path.addQuadCurveToPoint(CGPointMake(width, 0), controlPoint: top)
    path.addQuadCurveToPoint(CGPointMake(width, height), controlPoint: right)
    path.addQuadCurveToPoint(CGPointMake(0, height), controlPoint: bottom)
    path.addQuadCurveToPoint(CGPointMake(0, 0), controlPoint: left)

    // 4
    return path.CGPath
}
```
There’s a lot going on in this method, so here’s an incremental breakdown:
1. Creates a new, empty UIBezierPath.
2. Extracts the positions of the four control point views. Using presentationLayer is to get the “live” position of each view during its animation.
3. Builds the path by adding a quadratic Bézier curve for each edge of the rectangle, using the control point views as the curves’ control points.
4. Returns the path as a CGPathRef, since that’s what a shape layer expects.

You need to call this method when you’re animating the control points because it lets you keep re-drawing a new shape. How do you do that?
A CADisplayLink
object is a timer that allows your application to synchronize activity with the display’s refresh rate. You add a target and an action that are called whenever the screen’s contents update.
It’s the perfect opportunity to re-draw your path and update the shape layer.
First, add a method to call every time an update is required:
```swift
func updateLoop() {
    elasticShape.path = bezierPathForControlPoints()
}
```
Then, create the display link by adding the following variable to ElasticView.swift:
```swift
private lazy var displayLink: CADisplayLink = {
    let displayLink = CADisplayLink(target: self, selector: Selector("updateLoop"))
    displayLink.addToRunLoop(NSRunLoop.currentRunLoop(), forMode: NSRunLoopCommonModes)
    return displayLink
}()
```
This is a lazy variable, meaning it won’t be created until you access it for the first time. Each time the screen updates, it will call the updateLoop() function.
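If lazy properties are new to you, this self-contained sketch shows the key behavior: the initializer closure runs only once, on first access. The names here are hypothetical, and it uses current Swift syntax:

```swift
class TickerOwner {
    var setupCount = 0

    // The closure runs once, on first access; afterwards the stored value
    // is reused. displayLink defers CADisplayLink creation the same way.
    lazy var ticker: String = {
        self.setupCount += 1
        return "created"
    }()
}

let owner = TickerOwner()
// owner.setupCount is still 0 here: nothing has been created yet.
_ = owner.ticker   // first access runs the closure
_ = owner.ticker   // reuses the stored value
// owner.setupCount is now 1, not 2
```

Deferring creation like this matters here because a CADisplayLink starts interacting with the run loop as soon as it’s scheduled, and you only want that to happen once the view actually needs to animate.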
You will need methods to start and stop the link, so add the following:
```swift
private func startUpdateLoop() {
    displayLink.paused = false
}

private func stopUpdateLoop() {
    displayLink.paused = true
}
```
You’ve got everything ready to draw a new path whenever your control points move. Now, you have to move them!
Apple is really good at adding new features in every iOS release, and spring animations are one of the recent additions that make it easy to up your app’s WOW factor. They let you animate elements with custom damping and velocity, making movement feel special and bouncy!
Note: If you would like to master animations, check out iOS Animations by Tutorials.
Add the following method to ElasticView.swift to get those control points moving:
```swift
func animateControlPoints() {
    // 1
    let overshootAmount : CGFloat = 10.0
    // 2
    UIView.animateWithDuration(0.25, delay: 0.0, usingSpringWithDamping: 0.9,
        initialSpringVelocity: 1.5, options: nil, animations: {
        // 3
        self.topControlPointView.center.y -= overshootAmount
        self.leftControlPointView.center.x -= overshootAmount
        self.bottomControlPointView.center.y += overshootAmount
        self.rightControlPointView.center.x += overshootAmount
    }, completion: { _ in
        // 4
        UIView.animateWithDuration(0.45, delay: 0.0, usingSpringWithDamping: 0.15,
            initialSpringVelocity: 5.5, options: nil, animations: {
            // 5
            self.positionControlPoints()
        }, completion: { _ in
            // 6
            self.stopUpdateLoop()
        })
    })
}
```
Here’s the step-by-step breakdown:
1. overshootAmount is the amount the control points will move by.
2. Starts a UIView spring animation. Refer to the UIView class reference for a detailed explanation of the damping and velocity variables. For the rest of us non-rocket scientists, just know that these variables control how the animation bounces. It’s normal to play with the numbers to find a configuration that feels right.
3. Moves each control point outwards by overshootAmount to stretch the shape.
4. Runs a second, bouncier spring animation once the first one completes.
5. Moves the control points back to their original positions.
6. Stops the display link once the animation is done.

So far, you haven’t called animateControlPoints(). The main purpose of your custom control is to be animated once you tap on it, so the best place to call the above method is inside touchesBegan.
Add the following to it:
```swift
override func touchesBegan(touches: Set<NSObject>, withEvent event: UIEvent) {
    startUpdateLoop()
    animateControlPoints()
}
```
Build and run, and then tap your view. Voila! :]
Now you’ve gotten a glimpse of the cool animation, but you have a little more work to do to make your ElasticView more abstract.
The first obstacle to clear is overshootAmount
. At the moment, it’s hardcoded with a value of 10, but it would be great to change its value both programmatically and via Interface Builder.
One of the new features of Xcode 6.0 is @IBInspectable, which is a nice way of setting custom properties via Interface Builder.
Note: If you want to learn more about @IBInspectable, then please read Modern Core Graphics with Swift by Caroline Begbie.
You’re going to take advantage of this awesome new feature by adding overshootAmount
as an @IBInspectable
property, so that each ElasticView you create can have a different value.
Add the following variable to ElasticView
:
```swift
@IBInspectable var overshootAmount : CGFloat = 10
```
Reference the property in animateControlPoints()
by replacing this line:
```swift
let overshootAmount : CGFloat = 10.0
```
With this line:
```swift
let overshootAmount = self.overshootAmount
```
Head over to Main.storyboard, click on ElasticView and select the Attributes Inspector tab.
You’ll notice a new tab that shows the name of your view and an input field named Overshoot A…
For every variable you declare with @IBInspectable
, you’ll see a new input field in Interface Builder where you can edit its value.
To see this in action, duplicate ElasticView
so you end up with two views and place the new one above your current view, like so.
Change the value of Overshoot Amount to 20 in the original view and to 40 in your new view.
Build and run. Tap on both views to see the difference. As you can see, the animations vary slightly, and are dependent on the amount you’ve entered in Interface Builder.
Try changing the value to -40 instead of 40, and see what happens. You can see the control points animating inwards but the background doesn’t seem to be changing.
Are you ready to fix that on your own? I bet you are!
I’ll give you one clue: You’ll need to change something inside the setupComponents
method. Try it on your own, but if you get stuck, take a peek at the solution below.
Solution Inside: Solution (SelectShow)
Well done, you’ve finally completed your ElasticView.
Now that you have an ElasticView, you can embed it in different controls, such as text fields or buttons.
Now that you have built the core functionality of your elastic view, the next task is to embed it into a custom text field.
Right-click the ElasticUI group in the project navigator, and then select New File…. Select the iOS/Source/Cocoa Touch Class template and click Next.
Call the class ElasticTextField, enter UITextField into the Subclass of field and make sure the language is Swift. Click Next and then Create.
Open up ElasticTextField.swift and replace its contents with the following:
```swift
import UIKit

class ElasticTextField: UITextField {

    // 1
    var elasticView : ElasticView!

    // 2
    @IBInspectable var overshootAmount: CGFloat = 10 {
        didSet {
            elasticView.overshootAmount = overshootAmount
        }
    }

    // 3
    override init(frame: CGRect) {
        super.init(frame: frame)
        setupView()
    }

    required init(coder aDecoder: NSCoder) {
        super.init(coder: aDecoder)
        setupView()
    }

    // 4
    func setupView() {
        // A
        clipsToBounds = false
        borderStyle = .None

        // B
        elasticView = ElasticView(frame: bounds)
        elasticView.backgroundColor = backgroundColor
        addSubview(elasticView)

        // C
        backgroundColor = UIColor.clearColor()

        // D
        elasticView.userInteractionEnabled = false
    }

    // 5
    override func touchesBegan(touches: Set<NSObject>, withEvent event: UIEvent) {
        elasticView.touchesBegan(touches, withEvent: event)
    }
}
```
There’s a lot going on in there! Here’s a step-by-step breakdown:
1. Declares a property to hold an ElasticView instance.
2. Adds an IBInspectable variable called overshootAmount, so you can change your control’s elasticity via Interface Builder. It overrides didSet and just sets the overshootAmount of your elastic view.
3. Overrides both initializers so they call setupView().
4. Sets up the view:
   A. Sets clipsToBounds to false. This lets the elastic view go beyond its parent’s bounds, and changes the border style of the UITextField to .None to flatten the control.
   B. Creates an ElasticView and adds it as a subview of your control.
   C. Sets the backgroundColor of your control to be clear; you do this because you want the ElasticView to decide the color.
   D. Sets the elastic view’s userInteractionEnabled to false. Otherwise, it steals touches from your control.
5. Overrides touchesBegan and forwards it to your ElasticView so it can animate. :]

Head over to Main.storyboard, select both instances of UITextField and change their classes from UITextField to ElasticTextField in the Identity Inspector.
Also, make sure you delete both instances of ElasticView that you added for testing purposes.
Build and run. Tap on your textfield and notice how it doesn’t actually work:
The reason is that when you create an ElasticView
in code, it gets a clear background color, which is passed on to the shape layer.
To correct this, you need a way to forward the color to the shape layer whenever you set a new background color on your view.
Because you want to use elasticShape
as the primary background of your view, you have to override backgroundColor
inside ElasticView
.
Add the following code to ElasticView.swift:
```swift
override var backgroundColor: UIColor? {
    willSet {
        if let newValue = newValue {
            elasticShape.fillColor = newValue.CGColor
            super.backgroundColor = UIColor.clearColor()
        }
    }
}
```
Before the value is set, willSet
is called. You check that a value has been passed, then you set fillColor
for elasticShape
to the user’s chosen color. Then, you call super
and set its background color to clear.
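The forwarding trick can be modeled without UIKit. In this sketch (all types and names are hypothetical, in current Swift syntax), a property observer diverts the incoming value to a sublayer stand-in; since there’s no superclass property to divert to here, the reset to clear happens in didSet, which Swift guarantees won’t retrigger the observers:

```swift
class FakeShapeLayer {
    var fillColor = "clear"
}

class ColorForwardingView {
    let shape = FakeShapeLayer()

    var backgroundColor: String = "clear" {
        willSet {
            // Divert the incoming color to the sublayer, like ElasticView does.
            shape.fillColor = newValue
        }
        didSet {
            // Keep the view itself transparent so only the shape shows.
            if backgroundColor != "clear" {
                backgroundColor = "clear"  // assignments inside didSet don't recurse
            }
        }
    }
}

let view = ColorForwardingView()
view.backgroundColor = "red"
// view.shape.fillColor == "red"; view.backgroundColor == "clear"
```

The net effect mirrors the override above: the color the user asked for ends up on the shape layer, while the view itself stays transparent.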
Build and run, and you should have a lovely elastic control. Yippee! :]
Notice how close the placeholder text from UITextField
is to the left edge. It’s a bit snug, don’t you think? Would you like a go at fixing that on your own?
No hints this time. If you get stuck, feel free to open up the solution below.
Solution Inside: Solution (SelectShow)
Open up ElasticView.swift and remove the following from setupComponents
.
```swift
controlPoint.backgroundColor = UIColor.blueColor()
```
You should be proud of all the work you’ve done so far! You’ve turned a standard UITextField into some funky elastic thing and created a custom UIView that can be embedded in all sorts of controls.
Here’s a link to the completed project.
You have a fully working elastic text field. There are many more controls where you could apply these techniques.
You’ve learned how to use view positions to redraw a custom shape and add bounce to it. With this skill, it could be said that the world is your oyster!
To take it a step or few further, you could play around with some different animations, add more control points for some crazy shapes, etc.
Check out easings.net; it’s a great playground for working with different animations that utilize easings.
After you get comfortable with this technique, you could have a go at integrating BCMeshTransformView into your project. It’s a neat library created by Bartosz Ciechanowski that lets you manipulate individual pixels from your view.
Imagine how cool it would be if you could morph pixels into different shapes. :]
It’s been fun to walk you through how to create an elastic UI control with Swift, and I hope that you’ve learned a few things along the way. If you have questions, comments or great ideas about how to animate in Swift, please chime in below. I look forward to hearing from you!
The post How To Create an Elastic Animation with Swift appeared first on Ray Wenderlich.
Your challenge is to add a new nested and properly configured stack view to the app. See the Challenge PDF for full details.
View previous video: Your First Stack View
Once you’ve tried the challenge you can find the solution here in this video:
The post Video Tutorial: Introducing Stack Views Part 2: Nested Stack Views appeared first on Ray Wenderlich.
With more than 1.4 million apps in the iOS App Store today, it’s a real challenge to make your app stand out. You have a very small window of opportunity to capture the attention of your users before your app ends up in the big black hole of obscurity.
There’s no better place to start wowing your users than at the loading screen of your app, where you can add a delightful animation that serves as a precursor to your on-boarding or authentication workflow.
In this tutorial you will learn how to make such an animation. You’ll learn how to build it up piece-by-piece, utilising advanced techniques to create a fluid and captivating animation.
Download the starter project for this tutorial here, save it to a convenient location and open it in Xcode.
Open HolderView.swift. In this UIView subclass, you will add and animate the following sublayers (found in the Layers subgroup) as shown in the animation above:

- OvalLayer: expands into view from the center of the screen and then wobbles while the triangle’s corners appear.
- TriangleLayer: its corners pop out one-by-one while the OvalLayer is wobbling. When this view rotates, OvalLayer contracts back to zero size, leaving just the TriangleLayer visible.
- RectangleLayer: draws its stroke around the triangle, first in red and then in blue, to form a container.
- ArcLayer: fills the container with an animation effect that’s very similar to a glass being filled with water.

Open OvalLayer.swift; the starter project already contains the code to initialize this layer and all the Bezier paths you’ll use in your animations. You’ll see that expand(), wobble() and contract() are all empty; you’ll populate those methods as you work through the tutorial. All the other *Layer files are structured in a similar fashion.
Finally, open ViewController.swift and take a look at addHolderView()
; this method adds an instance of HolderView
as a subview to the center of the view controller’s view. This view will house all the animations. The view controller just needs to put it on the screen, and the view will take care of the actual animation code.
The animateLabel()
function is a delegate callback provided by the HolderView
class that you will fill in as you complete the animation sequence. addButton()
simply adds a button to the view so that you can tap and restart the animation.
Build and run your app; you should see an empty white screen. An empty canvas — the perfect thing on which to start creating your new animations! :]
By the end of this tutorial, your app will look like this:
So without further ado, let’s get started!
The animation starts with a red oval that expands into view from the centre of the screen and then wobbles around a bit.
Open HolderView.swift and declare the following constant near the top of the HolderView
class:
```swift
let ovalLayer = OvalLayer()
```
Now add the following function to the bottom of the class:
```swift
func addOval() {
    layer.addSublayer(ovalLayer)
    ovalLayer.expand()
}
```
This first adds the OvalLayer
instance you created above as a sublayer to the view’s layer, then calls expand()
, which is one of the stubbed-out functions you need to fill in.
Go to OvalLayer.swift and add the following code to expand()
:
```swift
func expand() {
    var expandAnimation: CABasicAnimation = CABasicAnimation(keyPath: "path")
    expandAnimation.fromValue = ovalPathSmall.CGPath
    expandAnimation.toValue = ovalPathLarge.CGPath
    expandAnimation.duration = animationDuration
    expandAnimation.fillMode = kCAFillModeForwards
    expandAnimation.removedOnCompletion = false
    addAnimation(expandAnimation, forKey: nil)
}
```
This function creates an instance of CABasicAnimation that changes the oval’s path from ovalPathSmall to ovalPathLarge. The starter project provides both of these Bezier paths for you. Setting removedOnCompletion to false and fillMode to kCAFillModeForwards on the animation lets the oval retain its new path once the animation has finished.
Finally, open ViewController.swift and add the following line to addHolderView()
just below view.addSubview(holderView)
:
```swift
holderView.addOval()
```
This calls addOval
to kickstart the animation after it has been added to the view controller’s view.
Build and run your app; your animation should now look like this:
With your oval now expanding into view, the next step is to put some bounce in its step and make it wobble.
Open HolderView.swift and add the following function to the bottom of the class:
```swift
func wobbleOval() {
    ovalLayer.wobble()
}
```
This calls the stubbed-out method wobble()
in OvalLayer
.
Now open OvalLayer.swift and add the following code to wobble()
:
```swift
func wobble() {
    // 1
    var wobbleAnimation1: CABasicAnimation = CABasicAnimation(keyPath: "path")
    wobbleAnimation1.fromValue = ovalPathLarge.CGPath
    wobbleAnimation1.toValue = ovalPathSquishVertical.CGPath
    wobbleAnimation1.beginTime = 0.0
    wobbleAnimation1.duration = animationDuration

    // 2
    var wobbleAnimation2: CABasicAnimation = CABasicAnimation(keyPath: "path")
    wobbleAnimation2.fromValue = ovalPathSquishVertical.CGPath
    wobbleAnimation2.toValue = ovalPathSquishHorizontal.CGPath
    wobbleAnimation2.beginTime = wobbleAnimation1.beginTime + wobbleAnimation1.duration
    wobbleAnimation2.duration = animationDuration

    // 3
    var wobbleAnimation3: CABasicAnimation = CABasicAnimation(keyPath: "path")
    wobbleAnimation3.fromValue = ovalPathSquishHorizontal.CGPath
    wobbleAnimation3.toValue = ovalPathSquishVertical.CGPath
    wobbleAnimation3.beginTime = wobbleAnimation2.beginTime + wobbleAnimation2.duration
    wobbleAnimation3.duration = animationDuration

    // 4
    var wobbleAnimation4: CABasicAnimation = CABasicAnimation(keyPath: "path")
    wobbleAnimation4.fromValue = ovalPathSquishVertical.CGPath
    wobbleAnimation4.toValue = ovalPathLarge.CGPath
    wobbleAnimation4.beginTime = wobbleAnimation3.beginTime + wobbleAnimation3.duration
    wobbleAnimation4.duration = animationDuration

    // 5
    var wobbleAnimationGroup: CAAnimationGroup = CAAnimationGroup()
    wobbleAnimationGroup.animations = [wobbleAnimation1, wobbleAnimation2,
                                       wobbleAnimation3, wobbleAnimation4]
    wobbleAnimationGroup.duration = wobbleAnimation4.beginTime + wobbleAnimation4.duration
    wobbleAnimationGroup.repeatCount = 2
    addAnimation(wobbleAnimationGroup, forKey: nil)
}
```
That’s a lot of code, but it breaks down nicely. Here’s what’s going on:
1-4. Creates four path-based CABasicAnimation instances that squish the oval: from large to a vertical squish, from vertical to a horizontal squish, back to vertical, and finally back to large.
5. Combines all four animations into a CAAnimationGroup and adds this group animation to your OvalLayer.

The beginTime of each subsequent animation is the sum of the beginTime of the previous animation and its duration. You repeat the animation group twice to give the wobble a slightly elongated feel.
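That chaining rule is easy to verify in isolation. Here’s a small sketch (the helper is hypothetical, in current Swift syntax) that computes beginTime values from a list of durations the same way the wobble group does:

```swift
// beginTime(n) = beginTime(n-1) + duration(n-1); the group's duration
// is the last beginTime plus the last duration.
func chainedBeginTimes(durations: [Double]) -> (starts: [Double], total: Double) {
    var starts: [Double] = []
    var clock = 0.0
    for d in durations {
        starts.append(clock)
        clock += d
    }
    return (starts, clock)
}

// Four equal segments (the real animationDuration lives in the starter project):
let timing = chainedBeginTimes(durations: [0.25, 0.25, 0.25, 0.25])
// timing.starts == [0.0, 0.25, 0.5, 0.75], timing.total == 1.0
```

Chaining beginTime like this keeps the segments back-to-back; if you ever want the segments to overlap or leave gaps, you adjust the beginTime values rather than the durations.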
Even though you now have all the code required to produce the wobble animation, you aren’t calling your new animation yet.
Go back to HolderView.swift and add the following line to the end of addOval()
:
```swift
NSTimer.scheduledTimerWithTimeInterval(0.3, target: self, selector: "wobbleOval",
    userInfo: nil, repeats: false)
```
Here you create a timer that calls wobbleOval()
right after the OvalLayer
has finished expanding.
Build and run your app; check out your new animation:
It’s very subtle, but that’s an important factor of a truly delightful animation. You don’t need things to be flying all over the screen!
It’s time to get a little fancy! :] You’re going to morph the oval into a triangle. To the user’s eye, this transition should look completely seamless. You’ll use two separate shapes of the same colour to make this work.
Open HolderView.swift and add the following code to the top of HolderView
class, just below the ovalLayer
property you added earlier:
```swift
let triangleLayer = TriangleLayer()
```
This declares a constant instance of TriangleLayer
, just like you did for OvalLayer
.
Now, make wobbleOval()
look like this:
```swift
func wobbleOval() {
    // 1
    layer.addSublayer(triangleLayer)  // Add this line
    ovalLayer.wobble()

    // 2
    // Add the code below
    NSTimer.scheduledTimerWithTimeInterval(0.9, target: self,
        selector: "drawAnimatedTriangle", userInfo: nil, repeats: false)
}
```
The code above does the following:
1. Adds the TriangleLayer instance you initialized earlier as a sublayer to the HolderView’s layer.
2. Since the full wobble animation runs for 1.8 seconds, the half-way point would be a great place to start the morphing process. You therefore add a timer that calls drawAnimatedTriangle() after a delay of 0.9 seconds.
Note: Finding the right duration or delay for animations takes some trial and error, and can mean the difference between a good animation and a fantastic one. I encourage you to tinker with your animations to get them looking perfect. It can take some time, but it’s worth it!
Next, add the following function to the bottom of the class:
```swift
func drawAnimatedTriangle() {
    triangleLayer.animate()
}
```
This method is called from the timer that you just added to wobbleOval()
. It calls the (currently stubbed out) method in triangleLayer
which causes the triangle to animate.
Now open TriangleLayer.swift and add the following code to animate()
:
```swift
func animate() {
    var triangleAnimationLeft: CABasicAnimation = CABasicAnimation(keyPath: "path")
    triangleAnimationLeft.fromValue = trianglePathSmall.CGPath
    triangleAnimationLeft.toValue = trianglePathLeftExtension.CGPath
    triangleAnimationLeft.beginTime = 0.0
    triangleAnimationLeft.duration = 0.3

    var triangleAnimationRight: CABasicAnimation = CABasicAnimation(keyPath: "path")
    triangleAnimationRight.fromValue = trianglePathLeftExtension.CGPath
    triangleAnimationRight.toValue = trianglePathRightExtension.CGPath
    triangleAnimationRight.beginTime = triangleAnimationLeft.beginTime + triangleAnimationLeft.duration
    triangleAnimationRight.duration = 0.25

    var triangleAnimationTop: CABasicAnimation = CABasicAnimation(keyPath: "path")
    triangleAnimationTop.fromValue = trianglePathRightExtension.CGPath
    triangleAnimationTop.toValue = trianglePathTopExtension.CGPath
    triangleAnimationTop.beginTime = triangleAnimationRight.beginTime + triangleAnimationRight.duration
    triangleAnimationTop.duration = 0.20

    var triangleAnimationGroup: CAAnimationGroup = CAAnimationGroup()
    triangleAnimationGroup.animations = [triangleAnimationLeft, triangleAnimationRight,
                                         triangleAnimationTop]
    triangleAnimationGroup.duration = triangleAnimationTop.beginTime + triangleAnimationTop.duration
    triangleAnimationGroup.fillMode = kCAFillModeForwards
    triangleAnimationGroup.removedOnCompletion = false
    addAnimation(triangleAnimationGroup, forKey: nil)
}
```
This code animates the corners of TriangleLayer
to pop out one-by-one as the OvalLayer
wobbles; the Bezier paths are already defined for each corner as part of the starter project. The left corner goes first, followed by the right and then the top. You do this by creating three instances of a path-based CABasicAnimation
that you add to a CAAnimationGroup
, which, in turn, you add to TriangleLayer
.
Build and run the app to see the current state of the animation; as the oval wobbles, each corner of the triangle begins to appear until all three corners are visible, like so:
To complete the morphing process, you’ll rotate HolderView
by 360 degrees while you contract OvalLayer
, leaving just TriangleLayer
alone.
Open HolderView.swift add the following code to the end of drawAnimatedTriangle()
:
```swift
NSTimer.scheduledTimerWithTimeInterval(0.9, target: self, selector: "spinAndTransform",
    userInfo: nil, repeats: false)
```
This sets up a timer to fire after the triangle animation has finished. The 0.9s time was once again determined by trial and error.
Now add the following function to the bottom of the class:
```swift
func spinAndTransform() {
    // 1
    layer.anchorPoint = CGPointMake(0.5, 0.6)

    // 2
    var rotationAnimation: CABasicAnimation = CABasicAnimation(keyPath: "transform.rotation.z")
    rotationAnimation.toValue = CGFloat(M_PI * 2.0)
    rotationAnimation.duration = 0.45
    rotationAnimation.removedOnCompletion = true
    layer.addAnimation(rotationAnimation, forKey: nil)

    // 3
    ovalLayer.contract()
}
```
The timer you created just before adding this code calls this function once the oval stops wobbling and all corners of the triangle appear. Here’s a look at this function in more detail:

1. Moves the layer’s anchorPoint so the spin is centered on the triangle rather than on the view’s midpoint.
2. Creates a CABasicAnimation to rotate the layer 360 degrees, or 2*Pi radians. The rotation is around the z-axis, which is the axis going into and out of the screen, perpendicular to the screen surface.
3. Calls contract() on OvalLayer to perform the animation that reduces the size of the oval until it’s no longer visible.

Now open OvalLayer.swift and add the following code to contract():
```swift
func contract() {
    var contractAnimation: CABasicAnimation = CABasicAnimation(keyPath: "path")
    contractAnimation.fromValue = ovalPathLarge.CGPath
    contractAnimation.toValue = ovalPathSmall.CGPath
    contractAnimation.duration = animationDuration
    contractAnimation.fillMode = kCAFillModeForwards
    contractAnimation.removedOnCompletion = false
    addAnimation(contractAnimation, forKey: nil)
}
```
This sets OvalLayer
back to its initial path of ovalPathSmall
by applying a CABasicAnimation
. This is the exact reverse of expand()
, which you called at the start of the animation.
Build and run your app; the triangle is the only thing that should be left on the screen once the animation is done:
In this next part, you’re going to animate the drawing of a rectangular container to create an enclosure. To do this, you’ll use the stroke property of RectangleLayer
. You’ll do this twice, using both red and blue as the stroke color.
Open HolderView.swift and declare two RectangleLayer
constants as follows, underneath the triangleLayer
property you added earlier:
```swift
let redRectangleLayer = RectangleLayer()
let blueRectangleLayer = RectangleLayer()
```
Next add the following code to the end of spinAndTransform()
:
```swift
NSTimer.scheduledTimerWithTimeInterval(0.45, target: self,
    selector: "drawRedAnimatedRectangle", userInfo: nil, repeats: false)
NSTimer.scheduledTimerWithTimeInterval(0.65, target: self,
    selector: "drawBlueAnimatedRectangle", userInfo: nil, repeats: false)
```
Here you create two timers that call drawRedAnimatedRectangle()
and drawBlueAnimatedRectangle()
respectively. You draw the red rectangle first, right after the rotation animation is complete. The blue rectangle’s stroke begins as the red rectangle’s stroke draws close to completion.
Add the following two functions to the bottom of the class:
```swift
func drawRedAnimatedRectangle() {
    layer.addSublayer(redRectangleLayer)
    redRectangleLayer.animateStrokeWithColor(Colors.red)
}

func drawBlueAnimatedRectangle() {
    layer.addSublayer(blueRectangleLayer)
    blueRectangleLayer.animateStrokeWithColor(Colors.blue)
}
```
Once you add the RectangleLayer
as a sublayer to HolderView
, you call animateStrokeWithColor(color:)
and pass in the appropriate color
to animate the drawing of the border.
Now open RectangleLayer.swift and populate animateStrokeWithColor(color:)
as follows:
```swift
func animateStrokeWithColor(color: UIColor) {
    strokeColor = color.CGColor
    var strokeAnimation: CABasicAnimation = CABasicAnimation(keyPath: "strokeEnd")
    strokeAnimation.fromValue = 0.0
    strokeAnimation.toValue = 1.0
    strokeAnimation.duration = 0.4
    addAnimation(strokeAnimation, forKey: nil)
}
```
This draws a stroke
around RectangleLayer
by adding a CABasicAnimation
to it. The strokeEnd
key of CAShapeLayer
indicates how far around the path to stop stroking. By animating this property from 0 to 1, you create the illusion of the path being drawn from start to finish. Animating from 1 to 0 would create the illusion of the entire path being rubbed out.
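As a quick mental model (the helper below is hypothetical, in current Swift syntax): strokeStart and strokeEnd are fractions of the path’s total length, and the layer renders only the span between them.

```swift
// Visible stroke = (strokeEnd - strokeStart) × total path length.
func visibleStrokeLength(pathLength: Double,
                         strokeStart: Double,
                         strokeEnd: Double) -> Double {
    return max(0.0, strokeEnd - strokeStart) * pathLength
}

// A rectangle whose perimeter is 400 points, stroked from the start:
let halfway = visibleStrokeLength(pathLength: 400, strokeStart: 0, strokeEnd: 0.5)
// halfway == 200.0: at the animation's midpoint, half the border is drawn
```

Because the fractions are relative to path length, the same 0-to-1 animation works unchanged no matter how big the rectangle is.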
Build and run your app to see how the two strokes look as they build the container:
With your container now in place, the next phase of the animation is to fill it up. The effect you’re looking for is that of water filling up a glass. This is a great visual effect and sets things up for a big…splash! :]
Open HolderView.swift and add the following constant just below the two RectangleLayer
properties:
```swift
let arcLayer = ArcLayer()
```
Now add the following code to the end of drawBlueAnimatedRectangle()
:
```swift
NSTimer.scheduledTimerWithTimeInterval(0.40, target: self, selector: "drawArc",
    userInfo: nil, repeats: false)
```
This creates a timer to call drawArc()
once the blue RectangleLayer
finishes drawing.
Add the following function to the end of the class:
```swift
func drawArc() {
    layer.addSublayer(arcLayer)
    arcLayer.animate()
}
```
This adds the instance of ArcLayer
created above to the HolderView
‘s layer before you animate in the fill.
Open ArcLayer.swift and add the following code to animate()
:
```swift
func animate() {
    var arcAnimationPre: CABasicAnimation = CABasicAnimation(keyPath: "path")
    arcAnimationPre.fromValue = arcPathPre.CGPath
    arcAnimationPre.toValue = arcPathStarting.CGPath
    arcAnimationPre.beginTime = 0.0
    arcAnimationPre.duration = animationDuration

    var arcAnimationLow: CABasicAnimation = CABasicAnimation(keyPath: "path")
    arcAnimationLow.fromValue = arcPathStarting.CGPath
    arcAnimationLow.toValue = arcPathLow.CGPath
    arcAnimationLow.beginTime = arcAnimationPre.beginTime + arcAnimationPre.duration
    arcAnimationLow.duration = animationDuration

    var arcAnimationMid: CABasicAnimation = CABasicAnimation(keyPath: "path")
    arcAnimationMid.fromValue = arcPathLow.CGPath
    arcAnimationMid.toValue = arcPathMid.CGPath
    arcAnimationMid.beginTime = arcAnimationLow.beginTime + arcAnimationLow.duration
    arcAnimationMid.duration = animationDuration

    var arcAnimationHigh: CABasicAnimation = CABasicAnimation(keyPath: "path")
    arcAnimationHigh.fromValue = arcPathMid.CGPath
    arcAnimationHigh.toValue = arcPathHigh.CGPath
    arcAnimationHigh.beginTime = arcAnimationMid.beginTime + arcAnimationMid.duration
    arcAnimationHigh.duration = animationDuration

    var arcAnimationComplete: CABasicAnimation = CABasicAnimation(keyPath: "path")
    arcAnimationComplete.fromValue = arcPathHigh.CGPath
    arcAnimationComplete.toValue = arcPathComplete.CGPath
    arcAnimationComplete.beginTime = arcAnimationHigh.beginTime + arcAnimationHigh.duration
    arcAnimationComplete.duration = animationDuration

    var arcAnimationGroup: CAAnimationGroup = CAAnimationGroup()
    arcAnimationGroup.animations = [arcAnimationPre, arcAnimationLow, arcAnimationMid,
                                    arcAnimationHigh, arcAnimationComplete]
    arcAnimationGroup.duration = arcAnimationComplete.beginTime + arcAnimationComplete.duration
    arcAnimationGroup.fillMode = kCAFillModeForwards
    arcAnimationGroup.removedOnCompletion = false
    addAnimation(arcAnimationGroup, forKey: nil)
}
```
This animation is very similar to the earlier wobble animation; you create a CAAnimationGroup that contains five instances of a path-based CABasicAnimation. Each path has a slightly different arc with increasing height and is part of the starter project. Finally, you apply the CAAnimationGroup to the layer and instruct it to not be removed on completion, so it retains its state when the animation has finished.
Build and run your app to watch the magic unfold!
All that’s left to do is expand the blue HolderView to fill the entire screen and add a UILabel to the view to serve as the logo.
Open HolderView.swift and add the following code to the end of drawArc():
NSTimer.scheduledTimerWithTimeInterval(0.90, target: self, selector: "expandView", userInfo: nil, repeats: false)
This creates a timer that calls expandView() after the ArcLayer fills up the container.
Now, add the following function to the bottom of the same class:
func expandView() {
    // 1
    backgroundColor = Colors.blue

    // 2
    frame = CGRectMake(frame.origin.x - blueRectangleLayer.lineWidth,
        frame.origin.y - blueRectangleLayer.lineWidth,
        frame.size.width + blueRectangleLayer.lineWidth * 2,
        frame.size.height + blueRectangleLayer.lineWidth * 2)

    // 3
    layer.sublayers = nil

    // 4
    UIView.animateWithDuration(0.3, delay: 0.0, options: UIViewAnimationOptions.CurveEaseInOut,
        animations: {
            self.frame = self.parentFrame
        }, completion: { finished in
            self.addLabel()
    })
}
Here’s what that method does:
1. Sets the view’s background color to blue.
2. Expands the view’s frame to account for the RectangleLayer‘s stroke width that you added earlier.
3. Removes all of the view’s sublayers.
4. Animates the HolderView to fill the screen. Once that animation’s done, you call addLabel().
Add the following function to the bottom of the class:
func addLabel() {
    delegate?.animateLabel()
}
This simply calls the view’s delegate function to animate the label.
Now open ViewController.swift and add the following code to animateLabel():
func animateLabel() {
    // 1
    holderView.removeFromSuperview()
    view.backgroundColor = Colors.blue

    // 2
    var label: UILabel = UILabel(frame: view.frame)
    label.textColor = Colors.white
    label.font = UIFont(name: "HelveticaNeue-Thin", size: 170.0)
    label.textAlignment = NSTextAlignment.Center
    label.text = "S"
    label.transform = CGAffineTransformScale(label.transform, 0.25, 0.25)
    view.addSubview(label)

    // 3
    UIView.animateWithDuration(0.4, delay: 0.0, usingSpringWithDamping: 0.7,
        initialSpringVelocity: 0.1, options: UIViewAnimationOptions.CurveEaseInOut,
        animations: ({
            label.transform = CGAffineTransformScale(label.transform, 4.0, 4.0)
        }), completion: { finished in
            self.addButton()
    })
}
Taking each commented section in turn:
1. Remove the HolderView from the view and set the view’s background color to blue.
2. Create a UILabel with text of ‘S’ to represent the logo, and add it to the view.
3. Animate the label so it springs up to full size. On completion, call addButton() to add a button to your view, which, when pressed, repeats the animation.
Build and run the application, give yourself a pat on the back and take a moment to enjoy what you’ve built! :]
You can download the final completed project here.
This tutorial covered quite a few different animation techniques that, when stacked together, create a rather complex loading animation that really makes your app shine on first run.
From here, feel free to play around with different timings and shapes to see what cool animations you can come up with.
If you want to take your new found animation skills to the next level, then I suggest you check out our book, iOS Animations by Tutorials.
I hope that you had a ton of fun going through this tutorial, and if you have any questions or comments, please join the forum discussion below!
The post How to Create a Complex Loading Animation in Swift appeared first on Ray Wenderlich.
With iOS 8, Apple introduced the new HealthKit API and corresponding Health App. Meanwhile, apps in the Health & Fitness category are hugely popular on the App Store.
This tutorial will show you how to make an app like RunKeeper, a GPS-based app to help you track your runs. Your new app, called MoonRunner, takes things to the next level with badges based on planets and moons in our Solar System!
Along the way, you’ll build all the features needed in a motivational run-tracking app:
You should already be familiar with the basics of Storyboards and Core Data before continuing. There’s so much to talk about that this tutorial comes in two parts: the first segment focuses on recording the run data and rendering the color-coded map, and the second segment introduces the badge system.
Begin by downloading the starter project download for the tutorial. Unzip it and open the project file, called MoonRunner.xcodeproj.
Build and run your project to check out what you’ll be starting with. The app will have a pretty simple flow:
UILabels).
Even a marathon begins with just a single CLLocationManager update. It’s time to start tracking some movement!
You need to make a few important project-level changes first. Click on the MoonRunner project at the top of the project navigator. Select the MoonRunner target and then the Capabilities tab. Open up Background Modes, turn on the switch for this section on the right and then tick Location Updates. This will allow the app to update the location even if the user presses the home button to take a call, browse the net or find out where the nearest Starbucks is! Neat!
Next, select the Info tab and open up Custom iOS Target Properties. Add these two lines to the plist:
NSLocationWhenInUseUsageDescription | String | MoonRunner wants to track your run
NSLocationAlwaysUsageDescription | String | MoonRunner wants to track your run
iOS will display the text in these two lines when it asks the user whether they want to allow the app to access the location data.
Now, back to the code. Open NewRunViewController.swift and add the following imports:
import CoreLocation
import HealthKit
You’ll need Core Location to access all the location-based APIs, and Health Kit to access units, quantities and conversion methods.
Then, at the end of the file, add a class extension to conform to the CLLocationManagerDelegate protocol:
// MARK: - CLLocationManagerDelegate
extension NewRunViewController: CLLocationManagerDelegate {
}
You’ll implement some delegate methods later, to be notified on location updates.
Next, add several new properties to the class:
var seconds = 0.0
var distance = 0.0

lazy var locationManager: CLLocationManager = {
    var _locationManager = CLLocationManager()
    _locationManager.delegate = self
    _locationManager.desiredAccuracy = kCLLocationAccuracyBest
    _locationManager.activityType = .Fitness

    // Movement threshold for new events
    _locationManager.distanceFilter = 10.0
    return _locationManager
}()

lazy var locations = [CLLocation]()
lazy var timer = NSTimer()
Note the syntax to lazily instantiate variables in Swift. Pretty straightforward, huh? Here is what these properties are for:
- seconds tracks the duration of the run, in seconds.
- distance holds the cumulative distance of the run, in meters.
- locationManager is the object you’ll tell to start or stop reading the user’s location.
- locations is an array to hold all the Location objects that will roll in.
- timer will fire each second and update the UI accordingly.
Let’s pause for a second to talk about your CLLocationManager and its configuration.
Once the manager is lazily instantiated, you set its delegate to your NewRunViewController.
You then feed it a desiredAccuracy of kCLLocationAccuracyBest. Since you’re tracking a run, you want the best and most precise location readings, which also use more battery power.
The activityType parameter is made specifically for an app like this. It intelligently helps the device save some power throughout a user’s run, say if they stop to cross a road.
Lastly, you set a distanceFilter of 10 meters. As opposed to the desiredAccuracy, this parameter doesn’t affect battery life. The desiredAccuracy is for the readings themselves, and the distanceFilter is for the reporting of readings.
As you’ll see after doing a test run later, the location readings can be a little zigged or zagged away from a straight line.
A higher distanceFilter could reduce the zigging and zagging and thus give you a more accurate line. Unfortunately, too high a filter would pixelate your readings. That’s why 10 meters is a good balance.
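To get a feel for what a distance filter does, here’s a purely hypothetical one-dimensional model (not part of the project): a reading is reported only once it has moved at least the threshold distance from the last reported reading.

```swift
// Hypothetical 1-D illustration of how a distance filter works: a new
// reading is reported only once it is at least `threshold` away from
// the last reported reading. The first reading is always reported.
func filteredReadings(positions: [Double], threshold: Double) -> [Double] {
    var reported = [Double]()
    for position in positions {
        if let last = reported.last {
            if abs(position - last) >= threshold {
                reported.append(position)
            }
        } else {
            reported.append(position)  // always report the first reading
        }
    }
    return reported
}
```

With a 10-meter threshold, small back-and-forth jitters between reports simply disappear, which is exactly the smoothing effect described above.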
Next, add this line at the end of viewWillAppear(_:):
locationManager.requestAlwaysAuthorization()
This iOS 8-only method requests location usage authorization from your users. If you want your app to run on iOS versions prior to 8, you’ll need to test the availability of this method before calling it.
Now add the following method to your implementation:
override func viewWillDisappear(animated: Bool) {
    super.viewWillDisappear(animated)
    timer.invalidate()
}
With this method, the timer is stopped when the user navigates away from the view.
Add the following method:
func eachSecond(timer: NSTimer) {
    seconds++
    let secondsQuantity = HKQuantity(unit: HKUnit.secondUnit(), doubleValue: seconds)
    timeLabel.text = "Time: " + secondsQuantity.description
    let distanceQuantity = HKQuantity(unit: HKUnit.meterUnit(), doubleValue: distance)
    distanceLabel.text = "Distance: " + distanceQuantity.description
    let paceUnit = HKUnit.secondUnit().unitDividedByUnit(HKUnit.meterUnit())
    let paceQuantity = HKQuantity(unit: paceUnit, doubleValue: seconds / distance)
    paceLabel.text = "Pace: " + paceQuantity.description
}
This is the method that will be called every second by an NSTimer (which will be set up shortly). Each time this method is called, you increment the second count and update each of the statistics labels accordingly.
Here’s the final helper method to add to the class before starting your run:
func startLocationUpdates() {
    // Here, the location manager will be lazily instantiated
    locationManager.startUpdatingLocation()
}
Here, you tell the manager to start getting location updates! Time to hit the pavement. Wait! Are you lacing up your shoes? Not that kind of run!
To actually begin the run, add these lines to the end of startPressed(_:):
seconds = 0.0
distance = 0.0
locations.removeAll(keepCapacity: false)
timer = NSTimer.scheduledTimerWithTimeInterval(1,
    target: self,
    selector: "eachSecond:",
    userInfo: nil,
    repeats: true)
startLocationUpdates()
Here, you’re resetting all the fields that will update continually throughout the run and then starting up the timer and location updates.
Build and run. If you start a new run you will see the time label increment.
However, the distance and pace labels are not updated because you are not tracking the location yet. So, let’s do this now!
You’ve created the CLLocationManager, but now you need to get updates from it. That is done through its delegate. Open NewRunViewController.swift once again and add the following method to the class extension conforming to CLLocationManagerDelegate:
func locationManager(manager: CLLocationManager!, didUpdateLocations locations: [AnyObject]!) {
    for location in locations as! [CLLocation] {
        if location.horizontalAccuracy < 20 {
            // update distance
            if self.locations.count > 0 {
                distance += location.distanceFromLocation(self.locations.last)
            }

            // save location
            self.locations.append(location)
        }
    }
}
This delegate method will be called each time there are new location updates to provide to the app. Usually, the locations array only contains one object but, if there are more, they are ordered by time with the most recent location last.
A CLLocation contains some great information: namely, the latitude and longitude, along with the timestamp of the reading.
But before blindly accepting the reading, it’s worth a horizontalAccuracy check. If the device isn’t confident it has a reading within 20 meters of the user’s actual location, it’s best to keep it out of your dataset. This check is especially important at the start of the run, when the device first starts narrowing down the general area of the user. At that stage, it’s likely to update with some inaccurate data for the first few points.
If the CLLocation passes the check, then you add the distance between it and the most recent point to the cumulative distance of the run. The distanceFromLocation(_:) method is very convenient here, taking into account all sorts of surprisingly difficult math involving the Earth’s curvature.
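If you’re curious what distanceFromLocation(_:) is doing conceptually, here’s a rough, self-contained sketch of a great-circle (haversine) distance under a spherical-Earth assumption. The real API is more sophisticated than this, and the helper name is hypothetical, so treat it as illustration only.

```swift
import Foundation

// Hypothetical helper: approximates the great-circle distance (in meters)
// between two latitude/longitude pairs using the haversine formula,
// assuming a spherical Earth of mean radius 6,371 km.
func greatCircleDistance(lat1: Double, lon1: Double,
                         lat2: Double, lon2: Double) -> Double {
    let earthRadius = 6_371_000.0                 // meters
    let toRadians = 3.141592653589793 / 180.0     // degrees -> radians
    let dLat = (lat2 - lat1) * toRadians
    let dLon = (lon2 - lon1) * toRadians
    let a = sin(dLat / 2) * sin(dLat / 2) +
            cos(lat1 * toRadians) * cos(lat2 * toRadians) *
            sin(dLon / 2) * sin(dLon / 2)
    return earthRadius * 2 * atan2(sqrt(a), sqrt(1 - a))
}
```

One degree of longitude at the equator works out to roughly 111 km with this formula, which is why even small coordinate deltas between readings add meaningful distance to the run.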
Finally, you append the location object itself to a growing array of locations.
A CLLocation object also contains information on altitude, with a corresponding verticalAccuracy value. As every runner knows, hills can be a game changer on any run, and altitude can affect the amount of oxygen available. A challenge to you, then, is to think of a way to incorporate this data into the app.
As much as I hope that this tutorial and the apps you build encourage more enthusiasm for fitness, the build and run phase does not need to be taken that literally while developing it!
You don’t need to lace up and head out the door either, for there’s a way to get the simulator to pretend it’s running!
Build and run in the simulator, and select Debug\Location\City Run to have the simulator start generating mock data:
Of course, this is much easier and less exhausting than taking a short run to test this — or any other — location-based app.
However, I recommend eventually doing a true beta test with a device. Doing so gives you the chance to fine-tune the location manager parameters and assess the quality of location data you can really get.
Thorough testing could help instill a healthy habit, too. :]
At some point, despite that voice of motivation inside you that tells you to keep going (mine sounds like Gunnery Sergeant Hartman in Full Metal Jacket), there comes a time to end the run. You already arranged for the UI to accept this input, and now it’s time to process that data.
Add this method to NewRunViewController.swift:
func saveRun() {
    // 1
    let savedRun = NSEntityDescription.insertNewObjectForEntityForName("Run",
        inManagedObjectContext: managedObjectContext!) as! Run
    savedRun.distance = distance
    savedRun.duration = seconds
    savedRun.timestamp = NSDate()

    // 2
    var savedLocations = [Location]()
    for location in locations {
        let savedLocation = NSEntityDescription.insertNewObjectForEntityForName("Location",
            inManagedObjectContext: managedObjectContext!) as! Location
        savedLocation.timestamp = location.timestamp
        savedLocation.latitude = location.coordinate.latitude
        savedLocation.longitude = location.coordinate.longitude
        savedLocations.append(savedLocation)
    }
    savedRun.locations = NSOrderedSet(array: savedLocations)
    run = savedRun

    // 3
    var error: NSError?
    let success = managedObjectContext!.save(&error)
    if !success {
        println("Could not save the run!")
    }
}
So what’s happening here? If you’ve done a simple Core Data flow before, this should look like a familiar way to save new objects:
1. You create a new Run object, and give it the cumulative distance and duration values as well as a timestamp.
2. Each of the CLLocation objects recorded during the run is trimmed down to a new Location object and saved. Then you link the locations to the Run.
3. You then save your NSManagedObjectContext.
Finally, you’ll need to call this method when the user stops the run and then chooses to save it. Find actionSheet(_:clickedButtonAtIndex:) and add the following line to the top of the if buttonIndex == 1 block, above the call to performSegueWithIdentifier(_:sender:):
saveRun()
Build and run. You are now able to record a run and save it.
However, the detail view of the run is mostly empty. That’s coming up next!
Now it’s time to show the map and post-run stats. Open DetailViewController.swift and import HealthKit:
import HealthKit
Next, find configureView() and add the following code to the method:
let distanceQuantity = HKQuantity(unit: HKUnit.meterUnit(), doubleValue: run.distance.doubleValue)
distanceLabel.text = "Distance: " + distanceQuantity.description

let dateFormatter = NSDateFormatter()
dateFormatter.dateStyle = .MediumStyle
dateLabel.text = dateFormatter.stringFromDate(run.timestamp)

let secondsQuantity = HKQuantity(unit: HKUnit.secondUnit(), doubleValue: run.duration.doubleValue)
timeLabel.text = "Time: " + secondsQuantity.description

let paceUnit = HKUnit.secondUnit().unitDividedByUnit(HKUnit.meterUnit())
let paceQuantity = HKQuantity(unit: paceUnit, doubleValue: run.duration.doubleValue / run.distance.doubleValue)
paceLabel.text = "Pace: " + paceQuantity.description
This sets up the details of the run into the three labels on the screen.
Rendering the map will require just a little more detail. There are three basic steps to it:
Start by adding the following method to the class:
func mapRegion() -> MKCoordinateRegion {
    let initialLoc = run.locations.firstObject as! Location

    var minLat = initialLoc.latitude.doubleValue
    var minLng = initialLoc.longitude.doubleValue
    var maxLat = minLat
    var maxLng = minLng

    let locations = run.locations.array as! [Location]
    for location in locations {
        minLat = min(minLat, location.latitude.doubleValue)
        minLng = min(minLng, location.longitude.doubleValue)
        maxLat = max(maxLat, location.latitude.doubleValue)
        maxLng = max(maxLng, location.longitude.doubleValue)
    }

    return MKCoordinateRegion(
        center: CLLocationCoordinate2D(latitude: (minLat + maxLat)/2,
            longitude: (minLng + maxLng)/2),
        span: MKCoordinateSpan(latitudeDelta: (maxLat - minLat)*1.1,
            longitudeDelta: (maxLng - minLng)*1.1))
}
An MKCoordinateRegion represents the display region for the map, and you define it by supplying a center point and a span that defines horizontal and vertical ranges.
For example, my jog may be quite zoomed in around my short route, while my more athletic friend’s map will appear more zoomed out to cover all the distance she traveled. It’s important to also account for a little padding, so that the route doesn’t crowd all the way to the edge of the map.
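The sizing math in mapRegion() boils down to a center point plus a padded span. Here it is in isolation for a single axis (the helper name is hypothetical, just for illustration):

```swift
// Hypothetical sketch of the padding math from mapRegion(): the span
// covers the route's bounding box on one axis, scaled up 10% so the
// line doesn't crowd the edge of the map.
func paddedSpan(minValue: Double, maxValue: Double) -> (center: Double, delta: Double) {
    return (center: (minValue + maxValue) / 2,
            delta: (maxValue - minValue) * 1.1)
}
```

Applied to both latitude and longitude, this yields a region that fits any route, short jog or long trek, with a 10% margin on each axis.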
Next, add the following method:
func mapView(mapView: MKMapView!, rendererForOverlay overlay: MKOverlay!) -> MKOverlayRenderer! {
    if !overlay.isKindOfClass(MKPolyline) {
        return nil
    }
    let polyline = overlay as! MKPolyline
    let renderer = MKPolylineRenderer(polyline: polyline)
    renderer.strokeColor = UIColor.blackColor()
    renderer.lineWidth = 3
    return renderer
}
This method says that whenever the map comes across a request to add an overlay, it should check if it’s an MKPolyline. If so, it should use a renderer that will make a black line. You’ll spice this up shortly. An overlay is something that is drawn on top of a map view. A polyline is such an overlay, and represents a line drawn from a series of location points.
Lastly, you need to define the coordinates for the polyline. Add the following method:
func polyline() -> MKPolyline {
    var coords = [CLLocationCoordinate2D]()

    let locations = run.locations.array as! [Location]
    for location in locations {
        coords.append(CLLocationCoordinate2D(latitude: location.latitude.doubleValue,
            longitude: location.longitude.doubleValue))
    }

    return MKPolyline(coordinates: &coords, count: run.locations.count)
}
Here, you shove the data from the Location objects into an array of CLLocationCoordinate2D, the required format for polylines.
Now, it’s time to put these three together! Add the following method:
func loadMap() {
    if run.locations.count > 0 {
        mapView.hidden = false

        // Set the map bounds
        mapView.region = mapRegion()

        // Make the line(s!) on the map
        mapView.addOverlay(polyline())
    } else {
        // No locations were found!
        mapView.hidden = true

        UIAlertView(title: "Error",
            message: "Sorry, this run has no locations saved",
            delegate: nil,
            cancelButtonTitle: "OK").show()
    }
}
Here, you make sure that there are points to draw, set the map region as defined earlier, and add the polyline overlay.
Finally, add the following code at the end of configureView():
loadMap()
Now build and run! You should see a map once your simulator is done with its workout.
The app is pretty cool as-is, but one way you can help your users train even smarter is to show them how fast or slow they ran at each leg of the run. That way, they can identify areas where they are most at risk of straying from an even pace.
To do this, you’ll extend the polyline class you’ve already been using to add color support.
In Xcode, navigate to File\New\File…, and select iOS\Source\Swift File. Call the file MulticolorPolylineSegment and save it to disk. When the file opens in the editor, replace its contents with the following:
import UIKit
import MapKit

class MulticolorPolylineSegment: MKPolyline {
    var color: UIColor?
}
This special, custom polyline will be used to render each segment of the run. Its color denotes speed, so the color of each segment is stored here on the polyline. Other than that, it’s the same as an MKPolyline. There will be one of these objects for each segment connecting two locations.
Next, you need to figure out how to assign the right color to the right polyline segment. Add the following class method to your MulticolorPolylineSegment class:
private class func allSpeeds(forLocations locations: [Location])
    -> (speeds: [Double], minSpeed: Double, maxSpeed: Double) {
    // Make Array of all speeds. Find slowest and fastest
    var speeds = [Double]()
    var minSpeed = DBL_MAX
    var maxSpeed = 0.0

    for i in 1..<locations.count {
        let l1 = locations[i-1]
        let l2 = locations[i]

        let cl1 = CLLocation(latitude: l1.latitude.doubleValue, longitude: l1.longitude.doubleValue)
        let cl2 = CLLocation(latitude: l2.latitude.doubleValue, longitude: l2.longitude.doubleValue)

        let distance = cl2.distanceFromLocation(cl1)
        let time = l2.timestamp.timeIntervalSinceDate(l1.timestamp)
        let speed = distance/time

        minSpeed = min(minSpeed, speed)
        maxSpeed = max(maxSpeed, speed)

        speeds.append(speed)
    }
    return (speeds, minSpeed, maxSpeed)
}
This method returns the array of speed values for each sequential pair of locations, along with the minimum and maximum speeds. To return multiple values, you wrap them in a tuple.
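The tuple-return pattern itself can be shown in miniature. This hypothetical helper (not part of the tutorial code) returns both extremes of an array in one call:

```swift
// Hypothetical miniature of the tuple-return pattern used by
// allSpeeds(forLocations:): compute two related results in one pass
// and hand them back together as a labeled tuple.
// Assumes a non-empty input array.
func minAndMax(values: [Double]) -> (minValue: Double, maxValue: Double) {
    var minValue = values[0]
    var maxValue = values[0]
    for v in values {
        minValue = min(minValue, v)
        maxValue = max(maxValue, v)
    }
    return (minValue, maxValue)
}
```

The caller can then destructure the result, e.g. `let (lo, hi) = minAndMax(values: speeds)`, exactly as the tutorial does with the speeds tuple.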
The first thing you’ll notice is a loop through all the locations from the input. You have to convert each Location to a CLLocation so you can use distanceFromLocation(_:).
Remember basic physics: distance divided by time equals speed. Each location after the first is compared to the one before it, and by the end of the loop you have a complete collection of all the changes in speed throughout the run.
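As a standalone illustration of that loop, here’s the same speed calculation with hypothetical names and plain Doubles standing in for the CLLocation data:

```swift
// Hypothetical, simplified version of the speed loop: given cumulative
// distances (meters) and timestamps (seconds) for each reading, produce
// the speed (m/s) of each segment between consecutive readings.
// Assumes both arrays have the same count, with at least two entries.
func segmentSpeeds(distances: [Double], timestamps: [Double]) -> [Double] {
    var speeds = [Double]()
    for i in 1..<distances.count {
        let distance = distances[i] - distances[i - 1]
        let time = timestamps[i] - timestamps[i - 1]
        speeds.append(distance / time)  // speed = distance / time
    }
    return speeds
}
```

Note that n readings always produce n − 1 segment speeds, which is why the tutorial’s speeds array is one element shorter than the locations array.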
This method is private, and can only be accessed from within the class. Next, add the following class method that will act as the public interface:
class func colorSegments(forLocations locations: [Location]) -> [MulticolorPolylineSegment] {
    var colorSegments = [MulticolorPolylineSegment]()

    // RGB for Red (slowest)
    let red = (r: 1.0, g: 20.0 / 255.0, b: 44.0 / 255.0)

    // RGB for Yellow (middle)
    let yellow = (r: 1.0, g: 215.0 / 255.0, b: 0.0)

    // RGB for Green (fastest)
    let green = (r: 0.0, g: 146.0 / 255.0, b: 78.0 / 255.0)

    let (speeds, minSpeed, maxSpeed) = allSpeeds(forLocations: locations)

    // now knowing the slowest+fastest, we can get mean too
    let meanSpeed = (minSpeed + maxSpeed)/2

    return colorSegments
}
Here you define the three colors you’ll use for slow, medium and fast polyline segments.
Each color, in turn, has its own RGB components. The slowest segments will be completely red, the middle ones yellow, and the fastest green. Everything else will be a blend of the two nearest colors, so the end result could be quite colorful.
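That "blend of the two nearest colors" is a linear interpolation of each RGB component. Here’s the idea in isolation (hypothetical helper, not part of the tutorial code):

```swift
// Hypothetical helper: linearly interpolate each RGB component between
// two colors. A ratio of 0 gives `from`, a ratio of 1 gives `to`, and
// anything in between is a proportional blend.
func blend(from: (r: Double, g: Double, b: Double),
           to: (r: Double, g: Double, b: Double),
           ratio: Double) -> (r: Double, g: Double, b: Double) {
    return (r: from.r + ratio * (to.r - from.r),
            g: from.g + ratio * (to.g - from.g),
            b: from.b + ratio * (to.b - from.b))
}
```

In the tutorial’s code, the same component-by-component formula appears twice: once blending red toward yellow for below-average speeds, and once blending yellow toward green for above-average ones.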
Notice how you also retrieve and decompose the return tuple from allSpeeds(forLocations:) and then use minSpeed and maxSpeed to compute a meanSpeed.
Finally, add the following to the end of the method, before the return statement:
for i in 1..<locations.count {
    let l1 = locations[i-1]
    let l2 = locations[i]

    var coords = [CLLocationCoordinate2D]()
    coords.append(CLLocationCoordinate2D(latitude: l1.latitude.doubleValue,
        longitude: l1.longitude.doubleValue))
    coords.append(CLLocationCoordinate2D(latitude: l2.latitude.doubleValue,
        longitude: l2.longitude.doubleValue))

    let speed = speeds[i-1]
    var color = UIColor.blackColor()

    if speed < meanSpeed { // Between Red & Yellow
        let ratio = (speed - minSpeed) / (meanSpeed - minSpeed)
        let r = CGFloat(red.r + ratio * (yellow.r - red.r))
        let g = CGFloat(red.g + ratio * (yellow.g - red.g))
        let b = CGFloat(red.b + ratio * (yellow.b - red.b))
        color = UIColor(red: r, green: g, blue: b, alpha: 1)
    } else { // Between Yellow & Green
        let ratio = (speed - meanSpeed) / (maxSpeed - meanSpeed)
        let r = CGFloat(yellow.r + ratio * (green.r - yellow.r))
        let g = CGFloat(yellow.g + ratio * (green.g - yellow.g))
        let b = CGFloat(yellow.b + ratio * (green.b - yellow.b))
        color = UIColor(red: r, green: g, blue: b, alpha: 1)
    }

    let segment = MulticolorPolylineSegment(coordinates: &coords, count: coords.count)
    segment.color = color
    colorSegments.append(segment)
}
In this loop, you determine the value of each pre-calculated speed relative to the full range of speeds. This ratio then determines the UIColor to apply to the segment.
Next, you construct a new MulticolorPolylineSegment with the two coordinates and the blended color.
Finally, you collect all the multicolored segments together, and you’re almost ready to render!
Repurposing the detail view controller to use your new multicolor polyline is actually quite simple! Open DetailViewController.swift, find loadMap(), and replace the following line:
mapView.addOverlay(polyline())
with the following:
let colorSegments = MulticolorPolylineSegment.colorSegments(forLocations:
    run.locations.array as! [Location])
mapView.addOverlays(colorSegments)
This creates the array of segments and adds all the overlays to the map.
Lastly, you need to prepare your polyline renderer to pay attention to the specific color of each segment. So replace your current implementation of mapView(_:rendererForOverlay:) with the following:
func mapView(mapView: MKMapView!, rendererForOverlay overlay: MKOverlay!) -> MKOverlayRenderer! {
    if !overlay.isKindOfClass(MulticolorPolylineSegment) {
        return nil
    }
    let polyline = overlay as! MulticolorPolylineSegment
    let renderer = MKPolylineRenderer(polyline: polyline)
    renderer.strokeColor = polyline.color
    renderer.lineWidth = 3
    return renderer
}
This is very similar to what you had before, but now the specific color of each segment renders individually.
Alright! Now you’re all set to build & run, let the simulator go on a little jog, and check out the fancy multi-colored map afterward!
That post-run map is stunning, but how about one during the run?
Open Main.storyboard and find the New Run Scene. Drag in a new MapKit View and size it roughly so it fits between the “Ready to Launch” label and the Start button.
Then, be sure to add the four Auto Layout constraints displayed in the screenshots:
- Top Space to the Ready to Launch? label of 20 points.
- Bottom Space to the Start! button of 20 points.
- Trailing Space and Leading Space of 0 points to the Superview.
Then open NewRunViewController.swift and add the MapKit import:
import MapKit
Next, add an outlet for the map to the class:
@IBOutlet weak var mapView: MKMapView!
Then add this line to the end of viewWillAppear(_:):
mapView.hidden = true
This makes sure that the map is hidden at first. Now add this line to the end of startPressed(_:):
mapView.hidden = false
This makes the map appear when the run starts.
The trail is going to be another polyline, which means you’ll need to implement a map delegate method. Add the following class extension to the end of the file:
// MARK: - MKMapViewDelegate
extension NewRunViewController: MKMapViewDelegate {
    func mapView(mapView: MKMapView!, rendererForOverlay overlay: MKOverlay!) -> MKOverlayRenderer! {
        if !overlay.isKindOfClass(MKPolyline) {
            return nil
        }
        let polyline = overlay as! MKPolyline
        let renderer = MKPolylineRenderer(polyline: polyline)
        renderer.strokeColor = UIColor.blueColor()
        renderer.lineWidth = 3
        return renderer
    }
}
This version is similar to the one for the run details screen, except that the stroke color is always blue here.
Next, you need to write the code to update the map region and draw the polyline every time a valid location is found. Find your current implementation of locationManager(_:didUpdateLocations:) and update it to this:
func locationManager(manager: CLLocationManager!, didUpdateLocations locations: [AnyObject]!) {
    for location in locations as! [CLLocation] {
        let howRecent = location.timestamp.timeIntervalSinceNow

        if abs(howRecent) < 10 && location.horizontalAccuracy < 20 {
            // update distance
            if self.locations.count > 0 {
                distance += location.distanceFromLocation(self.locations.last)

                var coords = [CLLocationCoordinate2D]()
                coords.append(self.locations.last!.coordinate)
                coords.append(location.coordinate)

                let region = MKCoordinateRegionMakeWithDistance(location.coordinate, 500, 500)
                mapView.setRegion(region, animated: true)

                mapView.addOverlay(MKPolyline(coordinates: &coords, count: coords.count))
            }

            // save location
            self.locations.append(location)
        }
    }
}
Now, the map always centers on the most recent location, and constantly adds little blue polylines to show the user’s trail thus far.
Open Main.storyboard and find the New Run Scene. Connect the outlet for mapView to the map view, and set its delegate to the view controller.
Build and run, and start a new run. You’ll see the map updating in real-time!
Is your heart rate up yet? :]
Here’s a download of the sample project with all the code you’ve written up until this point.
You’ve seen how to store data for a run in a simple set of Core Data models, and display the run details (even in real time!) on a map. That’s really the core of MoonRunner, so congratulations!
If you’re up for some super-extra credit challenges, why not try to use the altitude information, maybe to change the thickness of the line segments? Or try blending the segment colors more smoothly by averaging a segment’s speed with that of the segment before it.
In any case, stay tuned for part two of this tutorial, where you’ll add the final touches and introduce a badge system personalized for each user.
As always, feel free to post comments and questions in the forum discussion below!
The post How To Make an App Like RunKeeper with Swift: Part 1 appeared first on Ray Wenderlich.
AsyncDisplayKit is an iOS framework that was originally designed for Facebook’s Paper. It makes it possible to achieve smoother and more responsive UI behavior than you can get with standard views.
You may have learned a bit about AsyncDisplayKit already in our beginning AsyncDisplayKit tutorial or your own studies; this tutorial will take your knowledge to the next level.
This tutorial will explain how you can make full use of the framework by exploring AsyncDisplayKit node hierarchies. By doing this, you get the benefits of smooth scrolling that AsyncDisplayKit is known for, at the same time as being able to build flexible and reusable UIs.
One of the key concepts of AsyncDisplayKit is the node. As you’ll learn, AsyncDisplayKit nodes are a thread-safe abstraction layer over UIView, which is (as you know) not thread safe. You can learn more about AsyncDisplayKit in AsyncDisplayKit’s Quick Start introduction.
The good news is that if you already know UIKit, then you’ll find that you already know the methods and properties in AsyncDisplayKit, because the APIs are almost identical.
By following along, you’ll learn how to build your own ASDisplayNode subclass.
Here’s how you’ll do it: You’ll build a container node that will hold two subnodes — one for the image and one for the title. You’ll see how containers measure their own size and how they lay out their subnodes. By the end, you’ll take your existing UIView container subclasses and convert them over to ASDisplayNode subclasses.
This is what you’re aiming towards:
Cool stuff right? The smoother the UI, the better it is for all. With that said, it’s time to dive in!
Note: This is an intermediate tutorial tailored for engineers who have already dabbled a bit with AsyncDisplayKit and are familiar with the basics. If this is your first time using AsyncDisplayKit, first read through our beginning AsyncDisplayKit tutorial and check out AsyncDisplayKit’s Getting Started guide.
The app you’ll build presents a card that shows one of the wonders of the world, the Taj Mahal.
Download and open the starter project.
The project is a basic app with one view controller. Time to get acquainted with it!
The project uses CocoaPods to pull in AsyncDisplayKit. So, in usual CocoaPods style, go ahead and open Wonders.xcworkspace but NOT Wonders.xcodeproj.
Note: If you’re not familiar with CocoaPods, then that’s OK. But if you want to learn more about it then check out this Introduction to CocoaPods Tutorial.
Open ViewController.swift and take notice of the view controller’s constant property named card. It holds a data model value for the Taj Mahal, and you’ll use this model later to create a card node to display this wondrous structure to the user.
Build and run to make sure you have a working project. You should see an empty black screen — the digital equivalent of a blank canvas.
Now you’re going to build your very first node hierarchy. It’s very similar to building a UIView hierarchy, which I’m sure you’re familiar with :].
Open Wonders-Bridging-Header.h, and add the following import statement:
#import <AsyncDisplayKit/ASDisplayNode+Subclasses.h>
ASDisplayNode+Subclasses.h exposes methods that are internal to ASDisplayNode. You need to import this header so you can override methods in ASDisplayNode subclasses, but it’s important to note that you should only call these methods within your ASDisplayNode subclasses. This is a similar pattern to UIGestureRecognizer, which also has a header purely for subclasses.
Open CardNode.swift and add the following ASDisplayNode subclass implementation:

class CardNode: ASDisplayNode {}

This declares a new ASDisplayNode subclass that you’ll use as a container to hold the card’s user interface.
Open ViewController.swift and implement viewDidLoad():

override func viewDidLoad() {
  super.viewDidLoad()

  // Create, configure, and lay out container node
  let cardNode = CardNode()
  cardNode.backgroundColor = UIColor(white: 1.0, alpha: 0.27)
  let origin = CGPointZero
  let size = CGSize(width: 100, height: 100)
  cardNode.frame = CGRect(origin: origin, size: size)

  // Create container node’s view and add to view hierarchy
  view.addSubview(cardNode.view)
}
This code creates a new card node with a hard-coded size. It will sit on the upper-left corner and will have a width and height of 100.
Don’t worry about the odd alignment for now. You’ll center the card nicely within the view controller very soon!
Build and run.
Great! You have a custom node subclass that shows up on the screen. The next step is to give your node subclass, CardNode, the ability to calculate its own size. This is required to be able to center it in the view. Before doing that, you should understand how the node layout engine works.
The next task is to ask a node to calculate its own size by calling measure(constrainedSize:) on the node.
You’ll pass the constrainedSize argument into the method to tell the node to calculate a size that fits within constrainedSize.
In layman’s terms, this means the calculated size can be no larger than the constrained size provided.
For example, consider the following diagram:
This shows a constrained size with a certain width and height. The calculated size is equal in width, but smaller in height. It could have been equal on both width and height, or smaller on both width and height. But neither the width nor the height are allowed to be greater than the constrained size.
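In code terms, “fits within the constrained size” just means neither dimension may exceed the constraint. Here’s a minimal plain-Swift sketch of that rule, using modern Swift syntax; clampedToFit is a hypothetical helper for illustration, not an AsyncDisplayKit API:

```swift
import Foundation

// Hypothetical helper that mirrors the rule in the diagram:
// the result may equal the constraint in either dimension,
// but may never exceed it.
func clampedToFit(desired: CGSize, constrained: CGSize) -> CGSize {
    return CGSize(width: min(desired.width, constrained.width),
                  height: min(desired.height, constrained.height))
}

let constrained = CGSize(width: 320, height: 480)
let clamped = clampedToFit(desired: CGSize(width: 320, height: 600),
                           constrained: constrained)
// clamped is 320 x 480: equal in width, smaller in height
```

A desired size that already fits passes through unchanged, which matches the case where the calculated size is smaller than the constraint in both dimensions.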
This works similarly to UIView’s sizeThatFits(size:). The difference is that measure(constrainedSize:) holds on to the size it calculates, allowing you to access the cached value via the node’s calculatedSize property.
An example of when the calculated size is smaller in width and height than the constrained size is as follows:
Here the image’s size is smaller than the constrained size, and without any sizing-to-fit logic, the calculated size is smaller than the constrained size.
The reason AsyncDisplayKit incorporates sizing into its API is that calculating a size can often take a perceivable amount of time; reading an image from disk to determine its size can be very slow, for example. Because the node API is thread safe, all of this sizing work can be performed on a background thread! Neat! It’s a sweet little feature that makes the UI smooth as butter, so the user has fewer of those awkward moments where he wonders if his phone broke.
A node will run size calculations if it has not already done so and has no cached value, or if the constrained size provided is different than the constrained size used to determine the cached calculated size.
In programmers’ terms, it works like this:
1. measure(constrainedSize:) either returns a cached size or runs a size calculation by calling calculateSizeThatFits(constrainedSize:).
2. You override calculateSizeThatFits(constrainedSize:) within your ASDisplayNode subclass.
3. calculateSizeThatFits(constrainedSize:) is internal to ASDisplayNode, and you shouldn’t call it outside of your subclass.
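This caching contract can be sketched as a toy model in plain Swift. The class below is an illustration of the behavior only, not AsyncDisplayKit’s actual implementation, and it uses modern Swift syntax:

```swift
import Foundation

// Toy model of the measure/calculateSizeThatFits contract.
class ToyNode {
    private var cachedSize: CGSize?
    private var cachedConstraint: CGSize?
    private(set) var calculationCount = 0

    // Public entry point: returns the cached size when the constraint
    // matches the one used last time, otherwise recalculates and caches.
    func measure(_ constrainedSize: CGSize) -> CGSize {
        if let cached = cachedSize, cachedConstraint == constrainedSize {
            return cached
        }
        let size = calculateSizeThatFits(constrainedSize)
        cachedSize = size
        cachedConstraint = constrainedSize
        return size
    }

    // The "internal" sizing logic a subclass would override;
    // here it just returns 20 percent of the constraint.
    func calculateSizeThatFits(_ constrainedSize: CGSize) -> CGSize {
        calculationCount += 1
        return CGSize(width: constrainedSize.width * 0.2,
                      height: constrainedSize.height * 0.2)
    }
}
```

Calling measure twice with the same constraint runs the size calculation only once; a different constraint triggers a fresh calculation.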
Now that you understand the method to the madness here, it’s time to apply it and measure some node sizes for yourself.
Open CardNode.swift and replace the class there with the following:

class CardNode: ASDisplayNode {
  override func calculateSizeThatFits(constrainedSize: CGSize) -> CGSize {
    return CGSize(width: constrainedSize.width * 0.2, height: constrainedSize.height * 0.2)
  }
}
For now, this method returns a size that is 20 percent of the constrained size provided, hence, it takes up just 4 percent of the available area.
Open ViewController.swift, delete the viewDidLoad() implementation, and implement the following createCardNode(containerRect:) method:

/* Delete this method
override func viewDidLoad() {
  super.viewDidLoad()

  // 1
  let cardNode = CardNode()
  cardNode.backgroundColor = UIColor(white: 1.0, alpha: 0.27)
  let origin = CGPointZero
  let size = CGSize(width: 100, height: 100)
  cardNode.frame = CGRect(origin: origin, size: size)

  // 2
  view.addSubview(cardNode.view)
}
*/

func createCardNode(#containerRect: CGRect) -> CardNode {
  // 3
  let cardNode = CardNode()
  cardNode.backgroundColor = UIColor(white: 1.0, alpha: 0.27)
  cardNode.measure(containerRect.size)

  // 4
  let size = cardNode.calculatedSize
  let origin = containerRect.originForCenteredRectWithSize(size)
  cardNode.frame = CGRect(origin: origin, size: size)

  return cardNode
}
Here’s a section-by-section breakdown:
3. createCardNode(containerRect:) creates a new card node with the same background color as the old container node, and it uses the provided container rect to constrain the size of the card node, so the card node cannot be any larger than containerRect’s size.
4. The card node is centered within containerRect using the originForCenteredRectWithSize(size:) helper method. Note that the helper method is a custom method provided in the starter project that was added to CGRect instances via an extension.
Right below the createCardNode(containerRect:) method, re-implement viewDidLoad():
override func viewDidLoad() {
  super.viewDidLoad()
  let cardNode = createCardNode(containerRect: UIScreen.mainScreen().bounds)
  view.addSubview(cardNode.view)
}

When the view controller’s view loads, createCardNode(containerRect:) creates and sets up a new CardNode. The card node cannot be any larger than the main screen’s bounds size.
At this point in its lifecycle, the view controller’s view has not been laid out. Therefore, it’s not safe to use the view controller’s view’s bounds size, so you’re using the main screen’s bounds size to constrain the size of the card node.
This approach, albeit less than elegant, works for this view controller because it spans the entire screen. Later in this tutorial, you’ll move this logic to a more appropriate method, but for now, it works, so roll with it!
Build and run, and you’ll see your node properly centered.
Laying out complex hierarchies can take a perceivable amount of time, and if that work happens on the main thread, it blocks UI interaction. You can’t have any perceivable wait times if you expect to please the user.
For this reason, you’ll create, set up and lay out nodes in the background so that you can avoid blocking the main UI thread.
Implement addCardViewAsynchronously(containerRect:) in between createCardNode(containerRect:) and viewDidLoad():

func addCardViewAsynchronously(#containerRect: CGRect) {
  dispatch_async(dispatch_get_global_queue(QOS_CLASS_BACKGROUND, 0)) {
    let cardNode = self.createCardNode(containerRect: containerRect)
    dispatch_async(dispatch_get_main_queue()) {
      self.view.addSubview(cardNode.view)
    }
  }
}
addCardViewAsynchronously(containerRect:) creates the CardNode on a background queue, which is fine because nodes are thread safe! After creating, configuring and framing the node, execution returns to the main queue in order to add the node’s view to the view controller’s view hierarchy. After all, UIKit isn’t thread safe. :]
Note: Once you create the node’s view, all access to the node occurs exclusively on the main thread.
Re-implement viewDidLoad() by using addCardViewAsynchronously(containerRect:):

override func viewDidLoad() {
  super.viewDidLoad()
  addCardViewAsynchronously(containerRect: UIScreen.mainScreen().bounds)
}
No more blocking the main thread, ensuring the user interface remains responsive!
Build and run. Same as before, but all the sizing of your node is now being done on a background thread! Neat! :]
Remember I said that you’d use a more elegant solution to size the node than just relying on the screen size? Well, I’m delivering on that promise right now!
Open ViewController.swift. Add the following property at the top of the class:
var cardViewSetupStarted = false
Then replace viewDidLoad() with viewWillLayoutSubviews():

/* Delete this method
override func viewDidLoad() {
  super.viewDidLoad()
  addCardViewAsynchronously(containerRect: UIScreen.mainScreen().bounds)
}
*/

override func viewWillLayoutSubviews() {
  super.viewWillLayoutSubviews()
  if !cardViewSetupStarted {
    addCardViewAsynchronously(containerRect: view.bounds)
    cardViewSetupStarted = true
  }
}
Instead of using the main screen’s bounds size, the logic above uses the view controller’s view’s bounds size to constrain the size of the card node.
Now it’s safe to use the view controller’s view’s bounds size, since the logic is inside viewWillLayoutSubviews() instead of viewDidLoad(). By this time in its lifecycle, the view controller’s view already has its size set.
This approach is superior because a view controller’s view can be any size, and you don’t want to depend on the fact that this view controller happens to span the entire screen.
The view can be laid out multiple times, so viewWillLayoutSubviews() can be called multiple times. You only want to create the card node once, and that’s why you need the cardViewSetupStarted flag to prevent the view controller from creating the card node multiple times.
Build and run.
Currently you have an empty container card node on screen. Now you want to display some content. The way to do this is to add subnodes to the card node. The following diagram describes the simple node hierarchy you’ll build.
The process of adding a subnode will look very familiar, since it’s similar to how you add subviews within custom UIView subclasses.
The first step is to add the image node, but first, you should know how container nodes lay out their subnodes.
You now know how to measure the container node’s size and how to use that calculated size to lay out the container node’s view. That takes care of the container, but how does the container node lay out its subnodes?
It’s a two-step process:
1. Measure each subnode within calculateSizeThatFits(constrainedSize:). This ensures that each subnode caches a calculated size.
2. Override layout() on your custom ASDisplayNode subclass. layout() works just like UIView’s layoutSubviews(), except that layout() doesn’t have to calculate the sizes of all of its children; layout() simply queries each subnode’s calculated size.
Back to the UI. The Taj Mahal’s card size should equal the size of its image, and the title should then fit within that size. The easiest way to accomplish this is to measure the Taj Mahal image node’s size and use the result to constrain the title text node’s size, so that the text node fits within the size of the image.
And that is the logic you’ll use to lay out the card’s subnodes. Now you’re going to make it happen in code. :]
Open CardNode.swift and add the following code to CardNode above calculateSizeThatFits(constrainedSize:):

// 1
let imageNode: ASImageNode

// 2
init(card: Card) {
  imageNode = ASImageNode()
  super.init()
  setUpSubnodesWithCard(card)
  buildSubnodeHierarchy()
}

// 3
func setUpSubnodesWithCard(card: Card) {
  // Set up image node
  imageNode.image = card.image
}

// 4
func buildSubnodeHierarchy() {
  addSubnode(imageNode)
}
Here’s what that does:
1. Declares a constant property to hold the card’s image node.
2. Adds an initializer that creates the image node, then sets up the subnodes and builds the subnode hierarchy.
3. setUpSubnodesWithCard(card:) configures the image node with the card model’s image.
4. buildSubnodeHierarchy() adds the image node as a subnode of the card node.
Next, re-implement calculateSizeThatFits(constrainedSize:):

override func calculateSizeThatFits(constrainedSize: CGSize) -> CGSize {
  // 1
  imageNode.measure(constrainedSize)

  // 2
  let cardSize = imageNode.calculatedSize

  // 3
  return cardSize
}
Here’s what that code does:
1. Measures the image node, which caches its calculated size. measure(constrainedSize:) is implemented by ASImageNode, which is used here.
2. Reads imageNode’s calculated size, which is also the size of the entire card node. Specifically, it uses the image node’s measured size as the card node size to constrain subnodes. You’ll use this value when adding more subnodes.
3. Returns the card size.
Next, override layout():
override func layout() {
  imageNode.frame = CGRect(origin: CGPointZero, size: imageNode.calculatedSize).integerRect
}
This logic positions the image in the upper-left corner, aka zero origin, of the card node. It also makes sure that the image node’s frame doesn’t have any fractional values, so that you avoid pixel boundary display issues.
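The pixel-alignment idea can be illustrated in plain Swift. The sketch below is a simplified stand-in for what integerRect does conceptually (floor the origin and extend the far edges up to whole values so the rect still covers the original); it uses modern Swift syntax and is not the actual implementation:

```swift
import Foundation

// Simplified stand-in for pixel alignment: floor the origin and extend
// the far edges upward so the aligned rect still covers the original,
// leaving no fractional pixel boundaries.
func pixelAligned(_ rect: CGRect) -> CGRect {
    let minX = rect.origin.x.rounded(.down)
    let minY = rect.origin.y.rounded(.down)
    let maxX = (rect.origin.x + rect.size.width).rounded(.up)
    let maxY = (rect.origin.y + rect.size.height).rounded(.up)
    return CGRect(x: minX, y: minY, width: maxX - minX, height: maxY - minY)
}

let fractional = CGRect(x: 0.4, y: 0.6, width: 99.7, height: 100.2)
let aligned = pixelAligned(fractional)
// aligned is (0, 0, 101, 101): whole-pixel bounds that cover the original
```

A rect that already sits on whole-pixel boundaries passes through unchanged, which is why applying the alignment during every layout pass is harmless.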
Take note of how this method uses the image node’s cached calculated size during layout.
Since the size of this image node determines the size of the card node, the image will span the entire card.
Go back to ViewController.swift, and inside createCardNode(containerRect:), replace the line that initializes CardNode with:

let cardNode = CardNode(card: card)
This line uses the new initializer you added to CardNode. The card value that passes into the initializer is simply a constant property on ViewController that stores the Taj Mahal card model.
Build and run. Boom! Huzzah! :]
Awesome. You’ve successfully created a container node that presents a node hierarchy! Sure, it’s a simple one, but it’s a node hierarchy!
Hey, where are you going? You’re not done yet! Just how do you expect the user to know what he’s looking at without a title? Nevermind, don’t answer that; we’re moving on now.
You need at least one more subnode to hold the title.
Open CardNode.swift and add the following titleTextNode property to the class:

let titleTextNode: ASTextNode
Initialize the titleTextNode property inside init(card:), above super.init():

titleTextNode = ASTextNode()
Add the following line to setUpSubnodesWithCard(card:):

titleTextNode.attributedString = NSAttributedString.attributedStringForTitleText(card.name)
This line gives the text node an attributed string that holds the card’s title. attributedStringForTitleText(text:) is a helper method added to NSAttributedString via an extension in the starter project; it creates the attributed string with the provided title and the text styling appropriate for this app’s card titles.
Next, add the following at the end of buildSubnodeHierarchy():

addSubnode(titleTextNode)
Make sure it goes below the line adding the image node, otherwise the image would be on top of the title!
And inside calculateSizeThatFits(constrainedSize:), add the following right above the return statement:

titleTextNode.measure(cardSize)
This measures the rest of the subnodes by using this card’s size as the constrained size.
Add the following to layout():

titleTextNode.frame = FrameCalculator.titleFrameForSize(titleTextNode.calculatedSize, containerFrame: imageNode.frame)
This line calculates the title text node’s frame with the help of FrameCalculator, a custom class included in the starter project. FrameCalculator hides the frame calculation to keep things simple.
Build and run. Now there will be no questions about the Taj Mahal.
And that’s…how you do it! That’s a full node hierarchy.
You’ve built a node hierarchy that uses a container with two subnodes.
If you’d like to check out the final project, you can download it here.
To learn more about AsyncDisplayKit, check out the getting started guide and the sample projects in the AsyncDisplayKit GitHub repo.
The library is entirely open source, so if you’re wondering how something works you can delve into the minutia for yourself!
Now that you’ve got this foundation, you can compose node hierarchies into well organized and reusable containers that make your code easier to read and understand. To sweeten the deal further, you’re ensuring a smoother and more responsive UI by using nodes which perform a lot of work in the background that UIKit can only dream of doing.
If you have any questions, comments or sweet discoveries, please don’t hesitate to jump into the forum discussion below!
The post AsyncDisplayKit Tutorial: Node Hierarchies appeared first on Ray Wenderlich.