Channel: Kodeco | High quality programming tutorials: iOS, Android, Swift, Kotlin, Unity, and more

WatchKit by Tutorials Now Available!

WatchKit by Tutorials Now Available for Download!


We are very happy to announce that WatchKit by Tutorials is now available for download!

If you already ordered the book, you can download it now on your My Loot page.

WatchKit by Tutorials teaches you everything you need to know to make your own apps for the Apple Watch.

It starts out with the basics, covering making your first WatchKit app and the new WatchKit UI controls and layout system. It then moves into more advanced topics, like notifications, handoff, sharing Core Data stores and other types of data, and more.

Note the PDF version is available now, but we are waiting to release the print version until WatchKit is out of beta.

Currently, WatchKit by Tutorials and our other new Swift book, iOS Animations by Tutorials, are on sale as part of our Spring Swift Bundle. The price will be going up at the end of the week, so grab the discount while you still can!

We hope you enjoy WatchKit by Tutorials, and look forward to some great WatchKit apps from you!

WatchKit by Tutorials Now Available! is a post from: Ray Wenderlich



WatchKit with Mike Oliver – Podcast S03 E06

Take a deep dive into WatchKit with Mike Oliver!


Welcome back to season 3 of the raywenderlich.com podcast!

In this episode, take a deep dive into WatchKit with Mike Oliver, tech editor of WatchKit by Tutorials and lead iOS engineer at RunKeeper!

[Subscribe in iTunes] [RSS Feed]

Our Sponsor

Interested in sponsoring a podcast episode? We sell ads via Syndicate Ads, check it out!

Links and References

Contact Us

Where To Go From Here?

We hope you enjoyed this episode of our podcast. Stay tuned for a new episode next week! :]

Be sure to subscribe in iTunes to get access as soon as it comes out!

We’d love to hear what you think about the podcast, and any suggestions on what you’d like to hear in future episodes. Feel free to drop a comment here, or email us anytime at podcast@raywenderlich.com!

WatchKit with Mike Oliver – Podcast S03 E06 is a post from: Ray Wenderlich


Video Tutorial: Adaptive Layout Part 0: Introduction


Challenge

Your challenge is to keep watching this video tutorial series and learn about Adaptive Layout!

Download lecture slides

Are you unfamiliar with Auto Layout and how to work with constraints? Then you should first check out our video tutorial series on Auto Layout before continuing with this series on Adaptive Layout.

View next video: Size Classes

Video Tutorial: Adaptive Layout Part 0: Introduction is a post from: Ray Wenderlich


Video Tutorial: Adaptive Layout Part 1: Size Classes

WatchKit FAQ


Note from Ray: This is an article on WatchKit released as part of the Spring Swift Fling celebration. We hope you enjoy! :]

WatchKit has been available to developers for about three months (at the time of writing this post). As developers get to know this cool new technology, a lot of questions undoubtedly come up.

In this WatchKit FAQ, we’ll answer a bunch of frequently asked questions that we’ve seen around forums, Twitter, email and Stack Overflow. We’ll also periodically update this FAQ as new questions bubble up.

For some questions there aren’t clear solutions, so some answers are a mix of wisdom, opinion and an occasional educated guess. Just like you, we’re learning more about WatchKit as we go along, and the tech is still under heavy development and therefore subject to change.

Make sure to share your opinions and comment about what you like and don’t like in WatchKit, and ask more questions; we’ll update this FAQ based on your feedback.

Basic Questions

What is WatchKit and how does it work?

WatchKit is Apple’s framework for building hybrid apps for the Apple Watch, and it is bundled with Xcode 6.2.

WatchKit works by splitting your app into two distinct parts:


  • Your Apple Watch contains just the user interface resources like the storyboard and asset catalog, and even though it handles user input it doesn’t actually execute any of the code. In other words, the Apple Watch behaves just like a thin client.
  • Your iPhone contains all the code, and executes it as an extension, just like a Today or Action extension.

One cool thing is that communication between the Apple Watch and the iPhone is automatic and happens behind the scenes.

You work the way you’re used to and WatchKit handles all the wireless communication on your behalf. As far as the code you write is concerned, your views and outlets are connected locally even though they’re on a completely separate device. Pretty cool stuff!

To learn more, check out our WatchKit: Initial Impressions post.

What’s the difference between Xcode 6.2 beta and Xcode 6.3 beta? Which one should I use for WatchKit development?

If you’re planning to submit your WatchKit app the moment the App Store starts accepting them, you should use the latest Xcode 6.2 beta.

Xcode 6.3 ships with Swift 1.2, and according to threads in the Apple Developer Forums, it won’t be out of beta when the Apple Watch is released, so you won’t be able to submit apps built with this version in time for the Apple Watch launch.

You can learn more about Xcode 6.3 and Swift 1.2 in our What’s New in Swift 1.2 post.

Can you build Apple Watch apps in Swift?

Yes, you can build apps for the Apple Watch in either Objective-C or Swift, or a combination of both. Apple has provided two sample projects for WatchKit:

In addition, our WatchKit Tutorial, WatchKit video tutorial series, and WatchKit by Tutorials book are all written exclusively in Swift.

Apple has also provided the WatchKit Framework documentation in both Objective-C and Swift.

Can I create custom watch faces?

No. Custom watch faces are not currently supported.


Watch faces are not supported yet!

How many Apple Watches can I pair with one iPhone?

You can pair one Apple Watch with one iPhone at a time — it’s an exclusive relationship.

Can I pair my Apple Watch with an iPad?

No. The Apple Watch only pairs with an iPhone at this time.

Can an iPhone app wake up its WatchKit extension and watch app?

No. The WatchKit extension can only ask the system to launch the companion iPhone app, which it will do in the background. There is currently no support for communication in the other direction.

Can third-party apps make phone calls from a watch app?

No. There is no public API that lets you initiate a phone call directly from a WatchKit extension. Since the companion iPhone app can’t be brought to the foreground either, the system silently ignores all phone call or openURL: requests from the companion iPhone app.

Can you access the heartbeat sensor and other sensors on the watch from your watch app?

No. There is currently no API to access the hardware sensors on the Apple Watch.

What are the differences between short-look, long-look, static and dynamic notifications?

  • Short-Look: A Short-Look notification is provided by the system. Similar to a notification banner on the iPhone, you don’t have any control over it. A Short-Look notification displays your app icon, app name and the title string from the notification’s payload. When a notification arrives, the user first sees the Short-Look notification; if they raise their wrist, after a slight pause it transitions to the Long-Look notification.
  • Long-Look: A Long-Look notification can be either Static or Dynamic.
    • Static: A static notification includes a single label that is populated automatically from the notification’s payload. You can create a static notification scene in your watch app’s storyboard, but can’t really customize it beyond changing the color of the sash and the title.
    • Dynamic: A dynamic notification requires you to subclass WKUserNotificationInterfaceController. It’s instantiated from your storyboard, and you can provide your own custom interface. Note that there’s no guarantee a dynamic notification interface will be displayed; for example, if the watch’s battery is low, the system may show the static interface instead, as it’s less expensive to display.

Under The Hood

Select the appropriate scheme


How can I test a glance or a notification using the simulator?

Each glance or notification requires its own dedicated build scheme to run in the simulator. Then you simply select the appropriate scheme, and build and run.

Can I position interface elements on top of each other?

No, interface elements can’t be positioned on top of each other natively. However, there are workarounds. For example, you can use a WKInterfaceGroup and set its background image to something that represents the control you want to overlay. Inside the group you can then add the necessary labels, buttons and so on.
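A minimal sketch of that workaround in code (the outlet and asset names here are hypothetical, and the label must be nested inside the group in the storyboard):

```swift
import WatchKit

class BadgeController: WKInterfaceController {
  // Both outlets are connected in the storyboard; the label sits inside the group.
  @IBOutlet weak var badgeGroup: WKInterfaceGroup!
  @IBOutlet weak var badgeLabel: WKInterfaceLabel!

  func showUnreadCount(count: Int) {
    // The group's background image plays the role of the "control" underneath...
    badgeGroup.setBackgroundImageNamed("badge-background")
    // ...and the nested label appears on top of it.
    badgeLabel.setText("\(count)")
  }
}
```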

Can I customize the CALayer property of interface elements?

No. There is no CALayer property available on the Apple Watch interface elements, as they don’t descend from either UIView or CALayer.

Can I subclass the classes available in WatchKit?

There is nothing to stop you from subclassing a class in WatchKit, but you might find you’re unable to use it. You can subclass some classes, such as WKInterfaceController and WKUserNotificationInterfaceController and use those in your storyboard.

However, the storyboard of a watch app doesn’t allow you to change the class of any interface elements. Nor can you dynamically create interface elements and insert or remove them as subviews; you can only hide or show interface elements that are already present in the storyboard.

Can I mix page-based and navigation-based interface controllers?

Yes, but with some limitations. A page-based controller can only show a navigation-based controller by presenting it modally, and vice versa.

A navigation-based controller can’t push a page-based controller onto its navigation stack. Likewise, a page-based controller can’t show a navigation-based controller as one of its pages.

Are there equivalents for UIActivityIndicator or UIAlertController on the Apple Watch?

No, but in lieu of a UIAlertController, you can display a custom WKInterfaceController modally.
You can work around an activity indicator by adding a sequence of images to create the necessary animation, or simply display a label with the appropriate text. For an example of this, check out Apple’s Lister example. In the Watch App’s glance, you’ll see there are 360 images representing a single circular animation!
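For example, an alert-style modal could be presented like this (the storyboard identifier and context keys are hypothetical):

```swift
// Somewhere inside a WKInterfaceController subclass.
func showAlert(message: String) {
  // "AlertController" must match the identifier of a custom
  // interface controller scene in the watch app's storyboard.
  presentControllerWithName("AlertController",
    context: ["title": "Oops", "message": message])
}
```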


Can I use Core Graphics to generate images dynamically and then use them in a watch app? Can they be cached on the watch?

Yes, but the composition of any images using Core Graphics must take place on the iPhone as part of the extension. Once you have rendered the Core Graphics drawing context into an instance of UIImage you can then cache it on the watch using addCachedImage(_:name:) from WKInterfaceDevice.

Can I use custom views in the Apple Watch? Can I customize the interface elements beyond their public API?

No. You can’t use custom views. WatchKit only supports certain native interface elements. None of the interface elements can be subclassed or customized beyond their public API. The available interface elements are WKInterfaceLabel, WKInterfaceButton, WKInterfaceImage, WKInterfaceGroup, WKInterfaceSeparator, WKInterfaceTable, WKInterfaceSwitch, WKInterfaceMap, WKInterfaceSlider and WKInterfaceTimer.

How does the Apple Watch communicate with your iPhone?

The Apple Watch leverages both Bluetooth LE and Wi-Fi technologies to communicate with its paired iPhone. The exact nature of the communication and its implementation is opaque to both users and developers.

Turn on Wi-Fi and Bluetooth after turning on Airplane mode


Can I use the Apple Watch while in Airplane Mode?

Yes. In fact, you can turn Bluetooth and Wi-Fi on after enabling Airplane mode to enable communications and continue using your watch.

What happens to my app when the Apple Watch can’t communicate with its paired iPhone?

Simply put, your app won’t run, and if it’s already running it’ll be suspended.

In the WatchKit extension, didDeactivate() will be called on the current interface controller and you’ll be given the chance to do any necessary cleanup. A red iPhone icon will display in the status bar on the watch to indicate loss of connectivity, and the interface will remain on screen, but won’t be interactive. Users can either restore connectivity or exit the app.
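A minimal sketch of that cleanup, assuming a repeating NSTimer drives the interface updates:

```swift
import WatchKit

class StatusController: WKInterfaceController {
  var refreshTimer: NSTimer?

  override func willActivate() {
    super.willActivate()
    // (Re)start anything you tore down in didDeactivate().
    refreshTimer = NSTimer.scheduledTimerWithTimeInterval(1.0,
      target: self, selector: "refresh", userInfo: nil, repeats: true)
  }

  override func didDeactivate() {
    // Called when the controller goes off screen or connectivity is lost;
    // release timers and other resources here.
    refreshTimer?.invalidate()
    refreshTimer = nil
    super.didDeactivate()
  }

  func refresh() { /* update labels, images, etc. */ }
}
```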

How can a watch app communicate with its companion iPhone app?

There are various techniques you can use. A popular one is for the watch app to write or update data in a shared container and then notify the iPhone app, which can pull in the changes from the shared container.

Another technique is to pass data to the iPhone app via a dictionary, but this can only be initiated by the watch app. There is a single API for this in the WatchKit extension; call the class method openParentApplication(userInfo:reply:) of WKInterfaceController, as shown in the following code snippet:

// Notify companion iPhone app of some changes in the shared container.
let kSharedContainerDidUpdate = "com.rayWenderlich.shared-container.didUpdate"
let requestInfo: [NSObject: AnyObject] = [kSharedContainerDidUpdate: true]
WKInterfaceController.openParentApplication(requestInfo) { (replyInfo: [NSObject : AnyObject]!, error: NSError!) -> Void in
  // Handle the reply from the companion iPhone app...
}

In the userInfo dictionary, you simply pass a flag or some data for the companion iPhone app to act upon. To receive this communication, the companion iPhone app must implement application(_:handleWatchKitExtensionRequest:reply:) in its app delegate:

func application(application: UIApplication!, handleWatchKitExtensionRequest userInfo: [NSObject : AnyObject]!, reply: (([NSObject : AnyObject]!) -> Void)!) {
  let kSharedContainerDidUpdate = "com.rayWenderlich.shared-container.didUpdate"
  if let isUpdate = userInfo[kSharedContainerDidUpdate] as? Bool {
    // Process the request, then call the reply block
    reply(...)
  }
}

If the companion iPhone app is suspended or terminated, the system will launch it in the background. Depending on the purpose of the communication, the companion iPhone app may return something in the reply block which the watch app can then process accordingly.

How can an iPhone app communicate with its watch app?

There is no way for an iPhone app to initiate communication with its extension. However, aside from writing to a shared container or responding to the watch app’s requests, the iPhone app can use the Darwin Notification Center, an API of the Core Foundation framework, to notify the WatchKit extension about a particular event.

If you decide to use the Darwin Notification Center, there are some very important things to remember:

  • An application has only one Darwin Notification Center.
  • All Darwin notifications are system-wide.
  • The main thread’s run loop must be running in one of the common modes, such as kCFRunLoopDefaultMode, for notifications to be delivered.
  • Both the watch app and the iPhone app must be running in the foreground to handle sending and receiving Darwin notifications.
  • You can’t pass data via Darwin notifications; the Darwin notify center delivers only the notification name and ignores the object and userInfo parameters.
  • Darwin notifications are not persisted, rather, they are delivered immediately. Therefore, if an observer is suspended, terminated or placed in the background, the notification is lost.
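To illustrate the posting side (the notification name below is hypothetical; receiving a Darwin notification requires registering a C callback via CFNotificationCenterAddObserver, which is awkward in Swift 1.x, so many projects use an Objective-C shim or a wrapper library such as MMWormhole):

```swift
import Foundation

// Post a system-wide Darwin notification. Only the name is delivered;
// the object and userInfo parameters are ignored by the Darwin center.
let name = "com.raywenderlich.mywatchapp.didUpdate"
CFNotificationCenterPostNotification(
  CFNotificationCenterGetDarwinNotifyCenter(), name, nil, nil, 1)
```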

What happens if I don’t implement static or dynamic notification interfaces?

Even if you don’t have a watch app, the system will still display notifications for your iPhone app on the watch. However, the default notification interface has no custom styling. If you have interactive notifications, the system will hide those actions on the watch.

What’s the difference between setImage(_:) and setImageNamed(_:)?

You should use setImageNamed(_:) when the image you want to display is either cached on the watch or is in an asset catalog in the watch app’s bundle, and use setImage(_:) when the image isn’t cached — this will transfer the image data to the Apple Watch over the air!

An image is cached on the watch if it’s one of the following:

  1. Bundled with the watch app target in the project, meaning that the image resides in the asset catalog belonging to the Watch App target
  2. Explicitly cached on the watch beforehand via one of these WKInterfaceDevice APIs: addCachedImage(_:name:) or addCachedImageWithData(_:name:)
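In code, the difference looks like this (the outlet and image names are illustrative):

```swift
// Cheap: "coin" already lives on the watch, in the Watch App target's
// asset catalog or the device image cache, so only the name is sent.
coinImage.setImageNamed("coin")

// Expensive: the image data itself is transferred over the air.
coinImage.setImage(UIImage(named: "coin"))

// Cache an image once, then refer to it by name from then on.
let device = WKInterfaceDevice.currentDevice()
device.addCachedImage(generatedImage, name: "chart")
coinImage.setImageNamed("chart")
```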

Can I use iCloud in a watch App?

Yes, you can use iCloud with the Apple Watch app. Lister: A Productivity App, which is one of the sample projects provided by Apple, demonstrates how to use iCloud in this context.

Animation

How can I add animations to my watch app?

There is only one way to display animations on the Apple Watch: image sequences. To make something appear animated, you have to pre-generate a series of images, and then cycle through them like a flip-book. The era of the animated GIF is back! ;]

ClappingDude

You can display a sequence of static images in a WKInterfaceImage object to create your own custom animations.

@IBOutlet weak var image: WKInterfaceImage?
...
image?.setImageNamed("image") // Load a cached sequence named image1, image2, ... (the required <name><number> format)
image?.startAnimating() // Starts animating
...
image?.stopAnimating() // Optional. Stops animating.

You can also animate only a subset of images as shown in the following code snippet:

let range = NSRange(location: 0, length: 10) // the frames to animate, for example
image?.startAnimatingWithImagesInRange(range, duration: 2, repeatCount: 1)

Can I create animations for the Apple Watch in code?

Yes, but perhaps not in the way you think – as I mentioned above there is no Core Animation framework or equivalent. You can use Core Graphics to render each frame of the animation to an offscreen context, render it to an instance of UIImage, and at the end you’ll have a sequence of images you can animate on the Apple Watch.

The following code snippet shows how you can create an animation for a moving circle using Core Graphics by generating an image sequence for each frame (code courtesy of Jack Wu from WatchKit by Tutorials):

// Create an offscreen context
UIGraphicsBeginImageContextWithOptions(size, opaque, scale)
let context = UIGraphicsGetCurrentContext()
 
for centerX in 0..<100 {
  // Draw a circle at centerX.
  // Create a snapshot.
  let image = UIGraphicsGetImageFromCurrentImageContext()
 
  // Write the image as a file
  let data = UIImagePNGRepresentation(image)
  let file = "path/image_\(centerX).png"
  data.writeToFile(file, atomically: true)
}
 
// End the context.
UIGraphicsEndImageContext()


What is the maximum animation frame rate on the Apple Watch?

You can’t set the animation frame rate on the Apple Watch. However, you can set the animation length and let the system automatically determine the frame rate.

If the sequence of images is sent over the air, for instance when the images aren’t cached, it will run at up to 10 fps. If the images are already cached on the Apple Watch, via the image cache or the asset catalog in the WatchKit app bundle, the animation will run at up to 30 fps.

Debugging and Unit Testing

How can I run and debug both the iPhone app and the Apple Watch app at the same time using the simulators?

  1. Build and run the watch app; this launches the Apple Watch simulator, runs the watch app and attaches it to the debugger.
  2. Then in the iOS simulator, tap on your iPhone app’s icon to launch it.
  3. Now go back to Xcode, and from the top menu select Debug\Attach To Process and then select the appropriate iPhone app. This will add a new process to your Xcode Debug Navigator and attach the iPhone app to the Debugger.

How can I unit test my WatchKit extension?

You can write unit tests for your watch app in the same way you write unit tests for your iPhone or iPad apps; simply add a new Unit Test target for the WatchKit extension to your project. However, you can’t specify the watch app as a Host Application.


For the iPhone app target, the iPhone application appears as the Host Application


But for the WatchKit Extension target there is no Host Application

You have to add every single file that you want to test to the WatchKit extension Unit Test target explicitly.


Add the files that you want to test from the WatchKit extension to the Unit Test target

Sharing Data

How can you share data between a WatchKit extension and its containing iOS app?

You need to enable App Groups; this refers to a container on the local file system that both an extension and its containing iPhone app can access. You can define multiple app groups and enable them for different extensions.

Once App Groups are enabled, you can use either of the following techniques, based on your needs:

  • Read and write to a shared container directly. You get the URL of the shared container by asking NSFileManager. There is a single API for this:
    let kMyAppGroupName = "com.raywenderlich.mywatchapp.container"
    var sharedContainerURL: NSURL? = NSFileManager.defaultManager()
      .containerURLForSecurityApplicationGroupIdentifier(kMyAppGroupName)
  • Use shared NSUserDefaults. To create defaults storage in the shared container, you need to initialize a new NSUserDefaults instance using NSUserDefaults(suiteName:) and pass in the unique identifier of the app group, like so:
    let kMyAppGroupName = "com.raywenderlich.mywatchapp.container"
    let sharedUserDefaults = NSUserDefaults(suiteName: kMyAppGroupName)

Note: When you want to read from or write to a shared container, you must do so in a coordinated manner to avoid data corruption, because a shared container can be accessed simultaneously by separate processes. The recommended way to coordinate reads and writes is NSFilePresenter and NSFileCoordinator. However, Apple advises against using the file coordination APIs in an app extension, because they can result in deadlocks.

The reason behind this is the lifecycle of app extensions. App extensions have only 3 states: (a) running, (b) suspended and (c) terminated. If an app extension that uses file coordination APIs is suspended while writing, it never gets the chance to relinquish ownership, so other processes get deadlocked.

However, an iPhone or an iPad application is notified via its app delegate when it’s placed in the background, and it should then remove file presenters. When the app is brought back to the foreground, the file presenters can be re-added.

Instead, you can use atomic safe-save operations like NSData’s writeToURL(_:atomically:). SQLite and Core Data also allow you to share data in a shared container in a safe manner between multiple processes, even if one is suspended mid-transaction.
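As an example, a minimal atomic write to the shared container might look like this (the app group identifier and file name are illustrative):

```swift
let kMyAppGroupName = "com.raywenderlich.mywatchapp.container"
if let containerURL = NSFileManager.defaultManager()
  .containerURLForSecurityApplicationGroupIdentifier(kMyAppGroupName) {
  let fileURL = containerURL.URLByAppendingPathComponent("LastUpdate.plist")
  // writeToURL(_:atomically:) writes to a temporary file first and then
  // renames it, so a reader never sees a partially written file.
  let payload = NSDictionary(object: NSDate(), forKey: "lastUpdated")
  payload.writeToURL(fileURL, atomically: true)
}
```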

You can learn more about this in Technical Note TN2408: Accessing Shared Data from an App Extension and its Containing App.

How can you share a Core Data database between a watch app and an iPhone app?

To share a Core Data persistent store file, you essentially use the same mechanism as shown for the shared data example. Below is a code snippet to show how this works:

let kMyAppGroupName = "com.raywenderlich.mywatchapp.container"
var sharedContainerURL: NSURL? = NSFileManager.defaultManager().
  containerURLForSecurityApplicationGroupIdentifier(kMyAppGroupName)
if let sharedContainerURL = sharedContainerURL {
  let storeURL = sharedContainerURL.URLByAppendingPathComponent("MyCoreData.sqlite")
  var coordinator: NSPersistentStoreCoordinator? = 
    NSPersistentStoreCoordinator(managedObjectModel: self.managedObjectModel)
  coordinator?.addPersistentStoreWithType(NSSQLiteStoreType, 
    configuration: nil,
    URL: storeURL,
    options: nil,
    error: nil)
}

Let’s Get Down to Business

Can you make games for the Apple Watch? What kinds of games are suitable?

While it’s still too early to say what type of games will succeed and whether users will want to play games on the watch, it almost goes without saying that you’ll need to think about the Apple Watch from a different perspective.

Remember how games for iPhone and iPad required a different mentality when compared to desktop games? Likewise, the Apple Watch will require a unique approach.

We know already there are limits to what you can do because there is no API for hardware access on the Apple Watch, nor does it support gesture recognizers or allow custom drawing on the screen. Remember, you can only use native interface elements.

But don’t let those limits put a dampener on your creativity; think of them as ground rules. :]


How can you earn money with Apple Watch apps?

It’s still far too early to say. One thing to note: iAd is not supported, and considering the small screen size and the amount of time a user will interact with your watch app, on-screen advertisements would probably annoy users and not perform well enough to be financially worth it anyway.

Also, if the WatchKit extension is included in your app bundle, you can’t disable it or otherwise prevent users from installing it. So it can’t be supplementary content that you make available through in-app purchase.

However, there are still ways that you can monetize a WatchKit extension:

  • If you have a free version app and a paid version app in the App Store, you could just implement the WatchKit extension in the paid app version.
  • If you have an app with in-app purchases, you could just display limited information in the extension, but allow a user to unlock additional features via in-app purchases.

This is obviously not an exhaustive list of ways to monetize WatchKit apps, but one thing is clear: you’re going to have to be just as creative with your monetization as you are with the watch apps themselves.

Is there any reason to believe watch apps are a new opportunity that might let developers make a living just by developing for the App Store?

Any judgements at this point are premature. Even though the Apple Watch is indeed a brand new platform full of new opportunities, it may not create a similar gold-rush era to the one we saw when the App Store first launched.

One thing to keep in mind is that the Apple Watch is a different proposition entirely. Given its aesthetics and price tag, it’s more akin to jewelry that communicates with your iPhone than an essential device.

However, a WatchKit extension could make it easier for an app to stand out in the crowd. Much like the days when native iPad apps were more successful than iPhone apps that were merely scaled to fit, a thoughtful, well-designed Apple Watch app that complements an iPhone app could drive sales.

More Questions?

If you have any questions that weren’t covered here (and I know you do!), please post a comment. We’ll pick out the most frequently asked, compelling and challenging questions to update the post, and you’ll also get attribution just for asking!

Also, as mentioned earlier, please chime in with any comments or clarifications for the answers listed here and we’ll update as needed.

Thanks all!

WatchKit FAQ is a post from: Ray Wenderlich


WatchKit Tutorial with Swift: Tables and Network Requests

Thank you for being a part of the Swift Spring Fling!

Continue your WatchKit adventures during the Swift Spring Fling!

Note from Ray: This is a bonus WatchKit tutorial released as part of the Spring Swift Fling. If you like what you see, be sure to check out WatchKit by Tutorials for even more on developing for the Apple Watch. We hope you enjoy! :]

You and I know that WatchKit apps are just iOS app extensions — but this isn’t always obvious from the user’s perspective. Users can launch the app from the watch and use it without being aware the code is actually running on their phone.

That means your extension can be started and stopped at a moment’s notice. If you have a long-running task such as finding the user’s location or fetching data from the network, you can’t rely on the extension to stay around long enough for those tasks to complete.

In our introductory WatchKit tutorial, you built a Bitcoin-tracking WatchKit app with a simple interface that performed the network request right in the extension.

In this tutorial, you’ll take things to the next level and build a WatchKit app that tracks a collection of cryptocurrencies, including Dogecoin and Litecoin. You’ll learn about tables, table rows, and use the watch-to-phone communication feature to trigger the network request and pass the data back to the Watch.

If you didn’t make enough on the Bitcoin exchange to save up for the Apple Watch Edition, it’s not too late to start mining some DOGE! :]


Getting Started

Download the starter project and open it in Xcode. Remember to use Xcode 6.2 Beta 5! You’ll see an iOS app named CoinTracker with a table view interface listing the tracked currencies.

The Xcode project includes a WatchKit app target and an empty interface controller so you can get started right away. Check out our introductory WatchKit tutorial if you want to learn how to set this up from scratch.

Open CoinsInterfaceController.swift inside the CoinTracker WatchKit Extension group. Add the following import underneath the other imports:

import CoinKit

The starter project includes the CoinKit framework, which contains the data structure and helper class you’ll be using throughout this tutorial.

Next, add the following properties to the CoinsInterfaceController class:

var coins = [Coin]()
let coinHelper = CoinHelper()

The framework will fetch the latest values of each coin and return Coin objects, which have the name of the currency and the exchange rate, among other values. You’ll store these in coins.

CoinHelper has the methods that perform the network requests. You won’t trigger this from the extension — remember, you’ll call out to the main iOS app for that! However, there are some useful methods here to cache currency values, as there could be a delay between the Watch app starting up and new data coming in. It’s good practice to have something to show the user on launch, even if it’s older cached values.

Now that the properties are in place, it’s time to set up the interface and display some data.

Creating WatchKit Tables

WatchKit tables are just as useful as their iOS counterpart, UITableView. They’re also much easier to use, since you don’t have to worry about data sources or delegates. Instead, support for tables is baked right into WKInterfaceController.
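To preview where this is heading, populating a WKInterfaceTable takes just a few lines (the row type, outlets and Coin properties shown here are assumptions for illustration):

```swift
// Create one row per coin, all using the same row type.
coinTable.setNumberOfRows(coins.count, withRowType: "CoinRow")

// Configure each row controller in turn.
for (index, coin) in enumerate(coins) {
  if let row = coinTable.rowControllerAtIndex(index) as? CoinRow {
    row.titleLabel.setText(coin.name)
    row.detailLabel.setText("\(coin.price)")
  }
}
```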

Still in CoinsInterfaceController.swift, add the following outlet to the class:

@IBOutlet weak var coinTable: WKInterfaceTable!

You always need a reference to your tables so you can set them up and add rows from code.

Open Interface.storyboard in the CoinTracker WatchKit App group, where you’ll see a single interface controller. Find Table in the object library and drag one onto the interface.

As with table views in iOS, you can have many different kinds of rows. In this case, the table comes with one type of row to start and you’ll only need the one in this tutorial.

Notice how the table row has a group inside it – you’ll add all of the row contents to this group. Start by dragging two labels into the table row group as shown below:

CoinTracker-table2

Select the group (this is easiest to do from the Document Outline) and open the Attributes Inspector. Change Group\Layout to Vertical and Size\Height to Size To Fit Content. That will stack the two labels vertically; the containing group will grow to fit the labels automatically.

Select the top label, set its text to Coin name and change its font to the Headline style so it looks a little more prominent.

Select the bottom label and set its text to Price. At this point your interface should look like the following:

CoinTracker-table3

The interface looks good; it’s time to wire up all the outlets so you can add rows and set those labels to something meaningful.

Creating Table Connections

You already have an outlet ready for the table, but you need to connect those two labels in the table row to something.

In iOS, you might subclass UITableViewCell and add the outlets there. But the objects in WatchKit are much more lightweight than that. Remember, when you set labels or images from code, the phone will send only the modified data over the air to the watch. The complex bits are handled by WatchKit and the interface objects, so you don’t need some fancy object wrapping all of that inside your row.

Right-click (or control-click) on the CoinTracker WatchKit Extension group and click New File. Select the iOS\Source\Swift File template, name the new file CoinRow and click Create.

Open CoinRow.swift and add the following code to the file:

import WatchKit
 
class CoinRow: NSObject {
  @IBOutlet weak var titleLabel: WKInterfaceLabel!
  @IBOutlet weak var detailLabel: WKInterfaceLabel!
}

That’s right — your table row is simply a subclass of NSObject.

First, you need to import WatchKit to access the interface object definitions. Then you’ll declare two outlets for the labels.

Open Interface.storyboard. Control-drag from the interface controller icon to the table, then select coinTable from the Outlets popup to connect the outlet:

CoinTracker-outlet1

Next, select Table Row Controller in the Document Outline. Using the Identity Inspector, change the class to CoinRow. Then use the Attributes Inspector to set the Identifier to CoinRow as well.

Changing the class links the table row controller to the NSObject subclass you created. Later on, when you instruct the table to add some rows, the identifier will tell the table which type of row to create. Although you only need one type of row for this app, remember that your WatchKit App could potentially have many different types of rows.
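Although this app only needs the one row type, it's worth knowing that WKInterfaceTable also lets you assign an identifier to every row individually via setRowTypes(_:). Here's a sketch with a hypothetical "HeaderRow" type; both identifiers would need matching row controllers in the storyboard:

```swift
// Hypothetical: a table with one "HeaderRow" followed by one "CoinRow" per coin.
var rowTypes = ["HeaderRow"]
for coin in coins {
  rowTypes.append("CoinRow")
}
coinTable.setRowTypes(rowTypes)

// Row 0 is the header, so coin rows start at index 1
if let row = coinTable.rowControllerAtIndex(1) as? CoinRow {
  row.titleLabel.setText(coins[0].name)
}
```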

Now you just need to connect the outlets to the two labels inside the row. Control-drag from CoinRow in the document library to Coin name, then select the titleLabel outlet like so:

CoinTracker-outlet2

Finally, Control-drag from CoinRow to the Price label, and select the detailLabel outlet.

That takes care of the interface and outlet connections! It’s time to get back to coding.

Displaying Table Data

First you’ll take care of the code that sets up the table rows.

Open CoinsInterfaceController.swift and add the following helper method to the class:

func reloadTable() {
  // 1
  coinTable.setNumberOfRows(10, withRowType: "CoinRow")
 
  for (index, coin) in enumerate(coins) {
    // 2
    if let row = coinTable.rowControllerAtIndex(index) as? CoinRow {
      // 3
      row.titleLabel.setText(coin.name)
      row.detailLabel.setText("\(coin.price)")
    }
  }
}

Here’s what’s going on in the above method:

  1. You set the number of rows and their type; the type string here is the identifier you set on the table row back in the storyboard. Since you don’t have any coin data yet, you’re hard-coding ten rows just to test things out.
  2. Once you have coin data, you enumerate through the array of coins. At this point the table has already created the ten row objects, so rowControllerAtIndex(_:) simply fetches the object for a particular row. To be safe, there’s some optional binding here to ensure your code is dealing with CoinRow objects.
  3. Finally, if the creation of your CoinRow succeeded, you set the text for the two labels.

Again, this approach is quite different from UITableView: you set up the table and all its rows in one shot, and there’s no cellForRowAtIndexPath-like callback where you provide the rows on an as-needed basis.

Note: Since you need to populate all rows all at once, displaying a large data set can take a long time. Apple recommends limiting the number of rows to some reasonable number for your use case. For example, you might only display 20 rows in a news reader app with a “Show More” button at the bottom.
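A sketch of that cap, using 20 purely as an example figure rather than any WatchKit limit:

```swift
// Show at most 20 rows, per the guideline above; 20 is an arbitrary example cap
let maxRows = 20
let rowCount = min(coins.count, maxRows)
coinTable.setNumberOfRows(rowCount, withRowType: "CoinRow")

for index in 0..<rowCount {
  if let row = coinTable.rowControllerAtIndex(index) as? CoinRow {
    row.titleLabel.setText(coins[index].name)
    row.detailLabel.setText("\(coins[index].price)")
  }
}
```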

Next, find awakeWithContext(_:) and add the following code to the end of that method:

coins = coinHelper.cachedPrices()
reloadTable()

First, you get the set of cached coin prices. Since you haven’t made any network requests, this returns an empty array for now. Then you call reloadTable() to set up the table rows.

Ensure you’ve selected the CoinTracker WatchKit App scheme and build and run your project; you’ll see your table with ten test rows as shown below:

CoinTracker-run1

Excellent — you’ve set up the interface and some code to display a table on the watch! Now you need some real data in there so you can start amassing your fortune! :]

Return to Xcode, stop the app, then modify the first line in reloadTable() that sets the number of rows like so:

if coinTable.numberOfRows != coins.count {
  coinTable.setNumberOfRows(coins.count, withRowType: "CoinRow")
}

That sets the number of rows to the actual number of coins in the array.

Note: There’s a bug in the current version of WatchKit where setting the number of rows to the same number already in a table causes the rows to act strangely. The workaround is what you see above – checking the existing number of rows before setting it.

Going From WatchKit to iOS

You’re probably used to iOS apps that wake up in the background to perform background refreshes or respond to push notifications. iOS 8.2 now gives you the ability to wake up the app via a call from the WatchKit extension!

Add the following code to the end of awakeWithContext(_:):

WKInterfaceController.openParentApplication(["request": "refreshData"],
 reply: { (replyInfo, error) -> Void in
  // TODO: process reply data
  NSLog("Reply: \(replyInfo)")
})

openParentApplication(_:reply:) is a class method on WKInterfaceController that forces the iOS app — that is, the parent application — to launch.

The first parameter is a user info dictionary that can contain any data you like, as long as it conforms to the plist data format. The iOS app will receive this object at the other end, so you can pass in any options you need here. In this case, there’s only one reason to call the parent application — to get the coin data — so technically you don’t need to put anything here, but the request key in the dictionary leaves the door open for future functionality beyond refreshData.

The second parameter is similar to a completion handler that’s called when the parent application finishes. WKInterfaceController will call the closure expression you pass in here, along with a dictionary that the parent can supply as a reply in replyInfo.
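The closure above ignores the error parameter; in practice you'd likely check it before trusting replyInfo. A sketch:

```swift
WKInterfaceController.openParentApplication(["request": "refreshData"],
 reply: { (replyInfo, error) -> Void in
  if error != nil {
    // The parent app couldn't be reached or failed to call reply() in time
    NSLog("openParentApplication failed: \(error)")
    return
  }
  NSLog("Reply: \(replyInfo)")
})
```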

Going From iOS to WatchKit

You have the request ready to be sent to the iOS app, so the next step is to do the heavy lifting of the network request and send a reply back to the watch.

Open AppDelegate.swift in the CoinTracker group. This is the app delegate for the iOS app and contains all the entry-point code of the app.

Add the following method to the class:

func application(application: UIApplication!,
 handleWatchKitExtensionRequest userInfo: [NSObject : AnyObject]!,
 reply: (([NSObject : AnyObject]!) -> Void)!) {
 
  // 1
  if let request = userInfo["request"] as? String {
    if request == "refreshData" {
      // 2
      let coinHelper = CoinHelper()
      let coins = coinHelper.requestPriceSynchronous()
 
      // 3
      reply(["coinData": NSKeyedArchiver.archivedDataWithRootObject(coins)])
      return
    }
  }
 
  // 4
  reply([:])
}

You’ll call this method in response to openParentApplication(_:reply:) on the WatchKit side. Here’s the play-by-play:

  1. You check for the request key and its value, just to be sure it’s really the watch app calling.
  2. Next, you instantiate CoinHelper to perform the network request. Note that you’re calling the synchronous version of the fetch request; if you performed a background fetch instead, the method would return right away and the reply would always be empty, which would be of no use to you.
  3. After the request comes back, you call the reply handler. coins is an array of Coin objects, so you need to archive the array into an NSData instance so it survives the trip back to WatchKit.
  4. If something goes wrong, the default action is to send back an empty dictionary.
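If the helper only exposed an asynchronous fetch, one common bridge is to block the handler's (background) thread on a semaphore until the callback fires, so reply can still be called before returning. This sketch assumes a hypothetical requestPrice(_:) completion-based method, which is not CoinKit's real API:

```swift
import Foundation
import CoinKit

// Hypothetical async-to-sync bridge; requestPrice(_:) is an assumed API.
func fetchCoinsSynchronously(helper: CoinHelper) -> [Coin] {
  var result = [Coin]()
  let semaphore = dispatch_semaphore_create(0)

  helper.requestPrice { coins in
    result = coins
    // Wake the waiting thread once the network response arrives
    dispatch_semaphore_signal(semaphore)
  }

  // Block until the callback signals; safe here because this is not the main thread
  dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER)
  return result
}
```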

Build and run the watch app and keep an eye on the console. The watch app itself won’t display any table rows, but you should see a hex dump of the NSData coming back from the iOS app in the console as shown in the following example:

CoinTracker WatchKit Extension[29705:1176700] Reply:
[coinData: <62706c69 73743030 d4010203 04050649 4a582476 65727369 6f6e5824
6f626a65 63747359 24617263 68697665 72542474 6f701200 0186a0ac 0708111b
1c232d2e 38394344 55246e75 6c6cd209 0a0b105a 4e532e6f 626a6563 74735624

That’s a good sign: it means the iOS app is indeed starting up and sending something back to the watch. If only it were something more useful than binary data! Are you up for the challenge of decoding the data and seeing what’s hiding in there?

001_ChallengeAccepted

Yeah, we thought so. Okay — the final stretch of code is coming up!

Collecting Your Coins

Open CoinsInterfaceController.swift and find awakeWithContext(_:). Replace the NSLog() call inside the openParentApplication closure expression with the following:

if let coinData = replyInfo["coinData"] as? NSData {
  if let coins = NSKeyedUnarchiver.unarchiveObjectWithData(coinData) as? [Coin] {
    self.coinHelper.cachePriceData(coins)
    self.coins = coins
    self.reloadTable()
  }
}

There are two levels of optional binding here: the first to ensure you received an NSData object in response, and the second to ensure what was archived inside the NSData is actually an array of Coin objects.

If everything checks out, you cache the data for later use. Then it’s just a matter of setting the coins property with the actual data and reloading the table rows.

Build and run your watch app; you should see some real data come through:

CoinTracker-run2

In a relatively small amount of code, you’ve sent a request from the watch to the phone, sent a request out to the network, received a response with data, then decoded the data and sent it from the phone back to the watch — that’s quite a trip! The glorious end result is that you can see four cryptocurrencies with a tap of the wrist so you can stay on top of your trading! :]

Adding Table Segues

There’s one final feature to look at: getting the table to do something when you tap on a row. The standard iOS action is to push a details screen onto the navigation stack. That’s exactly what you’ll do, but you’ll do it WatchKit-style instead.

You saw how easy it was to set up the table and rows for your WatchKit app, and navigation is just as easy. You don’t need to wrap anything in a navigation controller, since interface controllers are already set up for you to handle navigation stacks and table row taps.

Open Interface.storyboard and drag a new Interface Controller from the object library to the canvas area. Control-drag from your table row to the new interface controller to create a segue — make sure you drag from the empty part of the table row, not one of the labels!

CoinTracker-segue1

Select push in the Selection Segue popover to create a push segue:

CoinTracker-segue2

Select the segue and open the Attributes Inspector. Set the Identifier to CoinDetails. You’ll only have one segue in this app, but it’s a good idea to future-proof your app by naming your segues and checking for them by name in your code in case you add more segues later.

Passing Segue Data

If you’ve dealt with segues in iOS, you know things can get a little painful when passing data around: your prepareForSegue method tends to get quite long and there’s a lot of casting involved, among other details.

As always, WatchKit takes a cleaner approach. Every interface controller accepts a context parameter, which is an optional AnyObject. That means controllers have a built-in way to receive an initial bundle of data. As an extra bonus, WKInterfaceController also has a built-in way to send this context data right from a table segue. How handy is that?

Open CoinsInterfaceController.swift and add the following method to the class:

override func contextForSegueWithIdentifier(segueIdentifier: String,
 inTable table: WKInterfaceTable, rowIndex: Int) -> AnyObject? {
  if segueIdentifier == "CoinDetails" {
    let coin = coins[rowIndex]
    return coin
  }
 
  return nil
}

WatchKit calls this method when the user taps on a table row, as long as the row type has a segue connected to it. Your job is to return an object that will be passed into the destination controller as the context parameter. Here, you’re sending the Coin object for the selected row.

Receiving Context Data

Sending the data is only half the job, of course. You still need to process the incoming context and do something useful with it!

First, you’ll need a new class for the second interface controller that will show the coin details. Right-click (or control-click) on the CoinTracker WatchKit Extension group and click New File. Select the iOS\Source\Swift File template, name the new file CoinDetailInterfaceController, then click Create.

Open CoinDetailInterfaceController.swift and add the following code to the file:

import WatchKit
import CoinKit
 
class CoinDetailInterfaceController: WKInterfaceController {
  var coin: Coin!
 
  override func awakeWithContext(context: AnyObject?) {
    super.awakeWithContext(context)
 
    if let coin = context as? Coin {
      self.coin = coin
      setTitle(coin.name)
      NSLog("\(self.coin)")
    }
  }
}

First up are the usual imports for WatchKit and CoinKit; you need CoinKit so the Coin class is available to you.

The major point of entry for interface controllers is awakeWithContext(_:). Here, you start with optional binding to ensure the context passed in is a Coin object. If so, you keep a reference to the coin in a property. For now, all the visible interface work just sets the title and logs the coin details to the console.

Open Interface.storyboard and select the second interface controller that you added as a segue destination. In the Identity Inspector, set Class to CoinDetailInterfaceController to link the interface with the class you just created.

Build and run the watch app; tap on a row and you’ll see your (mostly) blank details controller appear:

CoinTracker-run3

Check out the console and you’ll see some details on the selected currency:

CoinTracker WatchKit Extension[30154:1201186] DOGE 0.00014345

Looks like the value of DOGE has gone up! :]

CoinTracker-doge2

Where to Go From Here?

You can download the final CoinTracker project here, complete with all the code and interface work from this tutorial.

You now have a WatchKit app that pulls fresh network data by asking the iOS app to do the fetching for you. You’ve also built a table to display the data, and learned the basics of tables and table rows in WatchKit.

You’ve also seen how easy it is to connect segues from Interface Builder. Although the segue works and you’ve even passed some context data across, the detail interface controller is mostly blank. You’d like it to show some details about the selected currency, wouldn’t you?

Fear not — in Part 2 of this tutorial, you’ll wrap up the interface for that second controller. You’ll also learn about glances, a special kind of interface controller that works a little like a Today extension that shows a short summary of information you can read quickly — at a glance, even! :]

Do you have any thoughts on tables or network downloads or anything else WatchKit-related? Share your thoughts in the forum discussion below!

WatchKit Tutorial with Swift: Tables and Network Requests is a post from: Ray Wenderlich

The post WatchKit Tutorial with Swift: Tables and Network Requests appeared first on Ray Wenderlich.

Spring Swift Fling Bonus Giveaway: Swift Summit Conference Tickets!

2 Free Tickets to the March Swift Summit in London!

As part of our Swift Spring Fling celebration, we are running a giveaway where some lucky readers will win a free copy of our new Swift books.

But that’s not all – today we’re happy to announce that we’re going to have a second bonus giveaway!

The organizers of the upcoming Swift Summit conference in London were kind enough to offer 2 free tickets to readers of this site.

This conference is focused on Swift development by the organizers of the Swift San Francisco and Swift London meetup groups. Both authors of Swift by Tutorials will be there – Colin Eberhardt and Matt Galloway!

Swift Summit Cofounder Beren Rumble describes the conference as the following: “A lot of the speakers have been exploring Swift since the announcement at last year’s WWDC. We’re trying to take the best bits of knowledge from each of them, and share it with attendees in a 2 day briefing so they can ‘Get up to Speed on the State of Swift’.”

If you’re interested in getting a free ticket to this conference, simply comment on this post – we will choose a lucky winner at the end of the Swift Spring Fling, this Friday.

Please comment on this post only if you can travel to London and can definitely attend on 21-22 March.

A big thanks to the organizers of the Swift Summit for donating these tickets, and we hope you all enjoy the conference! :]

Spring Swift Fling Bonus Giveaway: Swift Summit Conference Tickets! is a post from: Ray Wenderlich

The post Spring Swift Fling Bonus Giveaway: Swift Summit Conference Tickets! appeared first on Ray Wenderlich.

Video Tutorial: Adaptive Layout Part 2: Constraints


WatchKit Tutorial with Swift: More Tables, Glances and Handoff

Thank you for being a part of the Swift Spring Fling!

Continue your WatchKit adventures during the Swift Spring Fling!

Note from Ray: This is a bonus WatchKit tutorial released as part of the Spring Swift Fling. If you like what you see, be sure to check out WatchKit by Tutorials for even more on developing for the Apple Watch. We hope you enjoy! :]

Welcome to the second and final part of the Swift Spring Fling WatchKit tutorial series! CoinTracker, your cryptocurrency-tracking watch app, is ready to be souped-up and taken to the next level.

In Part 1, you created the initial interface controller, added a table to display the available cryptocurrencies and their current prices, and delegated the network requests to the containing iOS app.
Your task in this tutorial is to take CoinTracker to its final form. Specifically, you’ll do the following:

  • Flesh out the second interface controller you added at the very end of Part 1 to show the details of the selected cryptocurrency.
  • Add a Glance to your project to display the current price of the user’s favorite cryptocurrency.
  • Pass contextual information to the Watch app using Handoff; this lets the Watch app update its interface and provide a richer experience to the user.

Here’s a sneak peek at what you’ll be building throughout the rest of this tutorial:

Tease

Does that whet your appetite? Of course it does — time to get right to it!

The Devil is in the Detail…Controller

Open the Xcode project you completed in Part 1, or alternatively, you can download the final completed project from Part 1 and start with that.

You finished off Part 1 of this tutorial by adding a second interface controller to display the details of the selected coin. It’s now time to flesh out that interface controller.

Open Interface.storyboard from the CoinTracker WatchKit App group and select the empty details interface controller. Open the Attributes inspector and set the Identifier to CoinDetailInterfaceController so you can access this interface controller in your code.

Now, on to the interface itself. Drag a Group from the Object Library onto the empty interface controller.

Next, drag an Image into the group. Select the image, and use the Attributes Inspector to change Position\Vertical to Center. Then set both Size\Width and Size\Height to Fixed with a value of 40 for each:

02

Now drag a Label into the group. You’ll notice it’s automatically placed to the right of the image – this is because the group’s Layout attribute is set to Horizontal by default — and exactly what you need for your interface.

Select the label, and use the Attributes Inspector to set the following values:

  • Text to Coin
  • Font to System, with a Style of Bold and Size of 34
  • Horizontal position to Right
  • Vertical position to Center

Select the group again; you can click on the space between the image and the label, or select it from the Document Outline if that’s easier. Use the Attributes Inspector to change Group\Insets to Custom and set Top, Bottom, Left and Right all to 2, like so:

01

That creates a little space around the image and label so they aren’t hard up against the edges of the screen.

Once you’ve done that, your interface controller should look like the following:

03

The bones are there; now it’s just a matter of filling in some real data.

Setting up Outlets and Data

The image and label you’ve just added will contain the icon and name of the selected currency, respectively.

Open CoinDetailInterfaceController.swift from the CoinTracker WatchKit Extension group and add the following two outlets just below the coin property:

@IBOutlet weak var coinImage: WKInterfaceImage!
@IBOutlet weak var nameLabel: WKInterfaceLabel!

You’ll use these outlets to populate the image and label.

Next, add the following code just inside the if let block in awakeWithContext(_:):

coinImage.setImageNamed(coin.name)
nameLabel.setText(coin.name)

Here you use coin.name as both the image name and the text of the label. That trick works because the images already provided in the asset catalog of the CoinTracker WatchKit App group have exactly the same names as their corresponding currency. Hey, anything to save a bit of coding time, right? :]

Head back to Interface.storyboard and Right-click (or Control-click) on the yellow interface controller icon on CoinDetailInterfaceController to invoke the connections dialog. Drag from coinImage to the image to connect the two as shown below:

04

Now drag from nameLabel to the label to connect them as well.

Ensure the CoinTracker WatchKit App scheme is selected, then build and run your app; select any coin from the list and you’ll see the icon and name appear on the details screen like so:

05

Okay — you now have the icon and name displayed, which is great. However, you’re a savvy cryptocurrency trader and you want the current financial data as well.

Financial data? Yes please.

Fortunately, that’s an easy task with the table functionality of WatchKit — something you’ve already seen in the previous tutorial.

A Table For Three, Please

The three key pieces of cryptocurrency financial data to show on the screen are as follows:

  1. The current price
  2. The price from 24 hours ago
  3. The last 24-hour trading volume

This brings the functionality of the Watch app directly in line with the containing iOS app. Feature-parity FTW! :]

Now, you could add several labels to display this information, and lay them out manually to get their positions just right. But this approach doesn’t plan for the future; your data provider could add daily high and low prices or moving averages, and managing all those labels could quickly become unwieldy.

Too! Many! Labels!

Luckily, there is a better way. You can use a table to contain the data and reuse the same row controller you created in Part 1 – how’s that for code reuse?

Open Interface.storyboard and drag a Table from the Object Library onto CoinDetailInterfaceController, making sure to position it just below the group containing the image and label:

06

Select Table Row Controller in the Document Outline, and then use the Identity Inspector to set Custom Class\Class to CoinRow. In the Attributes Inspector, set the Identifier to CoinRow.

Next, select the group inside the table row and use the Attributes Inspector to set Group\Layout to Vertical; you’ll want the contents of this group to be laid out underneath each other, not side-by-side.

Drag two Labels from the Object Library into the table row; you’ll notice the bottom label is slightly cut off. To fix this, select the group containing the two labels and change the Size\Height in the Attributes Inspector to Size To Fit Content. The group will expand to properly house both labels.

Select the top label in the group and use the Attributes Inspector to set Text to Title, Position\Horizontal to Right, and Font to Headline; this last step makes the title a little more pronounced than the rest of the text on the screen.

Now, select the bottom label and set its Text to Detail and Position\Horizontal to Right.

Your completed table row should now look like the following:

07

The final step to setting up your table row is to connect the outlets already defined in CoinRow to the labels you’ve just created.

In the Document Outline, Right-click (or Control-click) on CoinRow to invoke the connections dialog. Connect both titleLabel and detailLabel to their corresponding labels in the table row like so:

08

Now open CoinDetailInterfaceController.swift and add the following outlet just below the existing ones:

@IBOutlet weak var table: WKInterfaceTable!

Once this outlet is connected, you’ll be able to use it to populate the table with the correct number of rows.

Next, add the following code just below the point where you set the icon and name inside awakeWithContext(_:):

// 1
let titles = ["Current Price", "Yesterday's Price", "Volume"]
let values = ["\(coin.price) USD", "\(coin.price24h) USD", String(format: "%.4f", coin.volume)]
 
// 2
table.setNumberOfRows(titles.count, withRowType: "CoinRow")
 
// 3
for i in 0..<titles.count {
  if let row = table.rowControllerAtIndex(i) as? CoinRow {
    row.titleLabel.setText(titles[i])
    row.detailLabel.setText(values[i])
  }
}

Here’s the play-by-play of what’s happening:

  1. You first create two arrays: the first holds the titles of each row, and the second the values.
  2. Next, you set the number of rows on the table; in this case you use the count of the titles array. You also inform the table that it should be using the CoinRow row type, which matches the identifier you set in the storyboard.
  3. Finally, you iterate over each row in the table and set its titleLabel and detailLabel to the corresponding values in the titles and values arrays. You also make use of downcasting to ensure you’re dealing with an instance of CoinRow.

All that’s left to do now is connect the table outlet to the table in the storyboard.

Jump back to Interface.storyboard and Right-click (or Control-click) on the yellow interface controller icon on CoinDetailInterfaceController to invoke the connections dialog. Drag from the table outlet to the table to connect the two:

09

Make sure the CoinTracker WatchKit App scheme is selected and build and run your app. Select any coin from the list and you’ll see all the financial data displayed in the table:

10

Sweet! The Watch app now has feature-parity with the iOS app, and with a tap of the wrist you can check the price, previous price, and 24-hour trading volume of your favorite cryptocurrencies.

You could call it a day — or you could take advantage of some of the unique features the Apple Watch and WatchKit have to offer, such as glances and Handoff. What say you, good sir?

ChallengeAccepted

I thought you’d be interested! :]

Setting Your Favorite Cryptocurrency

A glance is a cracking way to provide your users with useful, timely read-only information without having to first navigate their way through your Watch app. As a completely random example, you could, perhaps, display the current price of their favorite cryptocurrency! :]

But before you add a glance to your app, you need to give your users a way to inform the Watch app of their favorite currency.

Open Interface.storyboard and drag a Switch from the Object Library onto CoinDetailInterfaceController, making sure to position it just below the table. Select the switch and use the Attributes Inspector to set Title to Favorite and Font to Caption 2. Also ensure its State is set to Off:

11

Open the Assistant Editor and make sure CoinDetailInterfaceController.swift is displayed. Control-drag from the switch into the class just below the existing outlets to create a new one. Name it favoriteSwitch:

12

Repeat the process, but this time drag to just below awakeWithContext(_:) and change Connection to Action. Name it favoriteSwitchValueChanged, and once the action is created close the Assistant Editor.

Open CoinDetailInterfaceController.swift and add the following constants to the top of the class, just below the coin property:

let defaults = NSUserDefaults.standardUserDefaults()
let favoriteCoinKey = "favoriteCoin"

Here you create a reference to the standard user defaults, along with a constant that will act as the key used to store and retrieve the user’s favorite cryptocurrency from the defaults.

Next, add the following code to favoriteSwitchValueChanged(_:):

// 1
if let coin = coin {
  // 2
  defaults.removeObjectForKey(favoriteCoinKey)
  // 3
  if value {
    defaults.setObject(coin.name, forKey: favoriteCoinKey)
  }
  // 4
  defaults.synchronize()
}

Here’s what’s going on in the code above:

  1. First, since the coin property is an optional, you unwrap it to make sure it’s not nil before continuing.
  2. Next, you remove any previously stored favorite from the defaults.
  3. Then, if value is true it means the user has set the switch to on, so you store the name of the current coin as the favorite in the defaults.
  4. Finally, you synchronize the defaults to guarantee the changes are written to disk and any observers are notified of the changes.

The final piece of the puzzle is to make sure the correct state is set on the switch when the interface controller is first displayed. Add the following to awakeWithContext(_:), just below the for loop:

if let favoriteCoin = defaults.stringForKey(favoriteCoinKey) {
  favoriteSwitch.setOn(favoriteCoin == coin.name)
}

Here you first check the defaults to see if a string exists for the favorite key; if so, you compare the string with the name of the current coin and set the state of the switch accordingly.

Make sure the CoinTracker WatchKit App scheme is selected and then build and run your app. Select a currency from the list and then tap the switch to add it as your favorite:

13

Return to the list and select a different currency (not your favorite); the switch should display the off state. Return to the list once again and this time select the coin you favorited; the switch again displays the on state.

That takes care of setting the user’s favorite currency; now it’s just a matter of adding the glance.

Creating a Glance

A glance is made up of three individual parts: the interface, which you create in Interface Builder; a subclass of WKInterfaceController, which you use to populate the glance with the relevant information; and finally a custom build scheme so you can run the glance in the simulator. You’ll tackle each of these three things in order.

Building the Glance Interface

Open Interface.storyboard and drag a Glance Interface Controller from the Object Library onto the storyboard. You’ll notice immediately that two groups already exist in the interface. This is because glances are template-based, and are split into an Upper and Lower group:


You can choose a template for each of the two groups independently using the Attributes Inspector — but for now, you’ll just stick with the default templates.

Just as you did with the coin detail interface, drag an Image and a Label into the upper group. These will display the coin icon and name respectively.

Select the image and use the Attributes Inspector to make the following changes:

  • Set Position\Vertical to Center
  • Set Size\Width to Fixed with a value of 40
  • Set Size\Height to Fixed with a value of 40

Next, select the label and update the following attributes in the Attributes Inspector:

  • Set Text to Coin
  • Set Font to System with a Style of Bold and a Size of 34
  • Set Min Scale to 0.7
  • Set Position\Vertical to Center

The Min Scale attribute lets the font shrink to a percentage of the specified size for text that would normally be wrapped or truncated, which is perfect for cryptocurrencies with long names. The entire interface for a glance must fit within a single screen, since glances are non-interactive and don’t support scrolling. Adjusting the font is a great alternative to wrapping text in situations where space is at a premium — and on the Apple Watch, every pixel counts! :]

Your glance interface controller should now look like the following:


Time to move on to the lower group.

Drag a new Group from the Object Library into the lower group and change its Layout to Vertical. Also set both Position\Horizontal and Position\Vertical to Center. Finally, set Size\Width to Size To Fit Content.

Drag two Labels into this new group. Select the first label and use the Attribute Inspector to update the label’s attributes as follows:

  • Set Text to 0.00
  • Set Font to System with a Style of Bold and a Size of 38
  • Set Min Scale to 0.5
  • Set Position\Horizontal to Center

Again, you make use of the Min Scale attribute to make sure any cryptocurrency with a value extending to many decimal places still fits on a single line. Remember that within a glance, screen real-estate is at a premium since the content can’t be scrolled.

Select the second label and make the following changes in the Attribute Inspector:

  • Set Text to USD
  • Set Position\Horizontal to Right
  • Set Position\Vertical to Bottom

This label is a static indicator of the native currency and is positioned underneath and to the right of the price itself.

The completed glance should now look like the following:


Note that the price is centered and the “USD” text is right-aligned to the price, rather than to the edge of the screen. That’s the power of putting two labels inside a size-to-fit group that itself lives within another group: everything just flows without you manually arranging all the various pieces of the interface.

Now that the glance interface is completed, it’s time to display the cryptocurrency data to the user.

Creating the Glance Interface Controller

Just like every other interface controller in WatchKit, with the exception of notifications, glances are backed by a subclass of WKInterfaceController. You’ll create one now.

Right-click (or Control-click) on the CoinTracker WatchKit Extension group and click New File. Select the iOS\Source\Swift File template and name the new file CoinGlanceInterfaceController. Click Create.

Open CoinGlanceInterfaceController.swift and replace its contents with the following:

import WatchKit
import CoinKit
 
class CoinGlanceInterfaceController: WKInterfaceController {
 
  // 1
  @IBOutlet weak var coinImage: WKInterfaceImage!
  @IBOutlet weak var nameLabel: WKInterfaceLabel!
  @IBOutlet weak var priceLabel: WKInterfaceLabel!
 
  // 2
  let helper = CoinHelper()
  let defaults = NSUserDefaults.standardUserDefaults()
 
  override func awakeWithContext(context: AnyObject?) {
    super.awakeWithContext(context)
 
    // 3
    let favoriteCoin = defaults.stringForKey("favoriteCoin") ?? "DOGE"
    // 4
    let coins = helper.cachedPrices()
    for coin in coins {
      // 5
      if coin.name == favoriteCoin {
        coinImage.setImageNamed(coin.name)
        nameLabel.setText(coin.name)
        priceLabel.setText("\(coin.price)")
        break
      }
    }
  }
}

Taking each commented section in turn:

  1. You define three outlets for the coin icon, name, and current price.
  2. You create a reference to the CoinHelper class so you can access the cached coin data, and a reference to the standard user defaults so you can retrieve the user’s favorite cryptocurrency.
  3. You then ask the defaults for the user’s favorite currency. It’s possible the user hasn’t selected a favorite and the defaults will return nil. To cover this case, you use the nil coalescing operator to select Dogecoin by default. Everyone loves the Doge, after all. ;]
  4. You next retrieve the cached coin data using the CoinHelper class and iterate over each of the cached coins.
  5. Finally, if the name of the current cached coin matches the favorite, you use its data to populate the glance, via the outlets. You also break out of the loop early since you’ve found a match.

With that in place you now just need to connect everything up.

Open Interface.storyboard and select the Glance Interface Controller. In the Identity Inspector set Custom Class\Class to CoinGlanceInterfaceController. Then, right-click on the yellow interface controller icon to invoke the connections dialog and connect the three outlets to their corresponding interface elements like so:


Before you can run the glance in the simulator, there’s just one more thing you need to do: create a custom build scheme.

Creating a Glance Build Scheme

When you first add a WatchKit App target to your project, Xcode adds a matching build scheme so you can run your app in the simulator. But you’ll need to create schemes manually for glances and notifications you add to the storyboard yourself.

From the Xcode menu, choose Product\Scheme\Manage Schemes…. In the dialog that appears select the CoinTracker WatchKit App scheme, click the gear icon, and choose Duplicate:


Name the new scheme CoinTracker Glance, and then select the Run step in the source list on the left hand side. In the main pane, change Watch Interface to Glance like so:


Click Close, and then Close again to save your changes.

With that done, it’s finally time to test your glance. Make sure the new CoinTracker Glance scheme is selected, then build and run. You should see your glance appear in the simulator, populated with the data of either your favorited cryptocurrency, or everyone’s favorite default currency, Dogecoin:


Tap the glance, and you’ll be thrown straight into the Watch app and greeted by the list of available currencies. But wouldn’t it be even better if you could somehow jump directly to the details screen for the currency displayed on the glance? Of course it would — and you can do exactly that with Handoff.

Adding Handoff to the Glance

Handoff is one of the stand-out features of WatchKit; it lets you pass contextual information from the glance to the Watch app, even though they run as separate processes. For example, you could pass the name of the cryptocurrency being displayed by the glance to the Watch app so it can load the details of that currency on launch. As a matter of fact, that’s exactly what you’re going to do! :]

One of the selling features of Handoff is its simplicity; there’s one method call to pass the contextual information and one overridden method to receive it. That’s all you need to get your glance and Watch app talking to each other — although the conversation is a little one-sided.

Open CoinGlanceInterfaceController.swift and add the following just below where you set the text on priceLabel in awakeWithContext(_:):

updateUserActivity("com.razeware.CoinTracker.glance", userInfo: ["coin": coin.name], webpageURL: nil)

Here you inform WatchKit that there’s a user activity going on. In this case, the user is glancing at a certain currency.

The first parameter is the activity type, which is just a string that identifies the activity using reverse domain notation. You also pass a userInfo dictionary to the Watch app when it’s launched via tapping the glance. The third parameter, webpageURL, is used for watch-to-iPhone Handoff. As of this writing it’s not available and isn’t relevant in this context, so you simply pass nil.
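Since the activity type string and the userInfo key must match exactly on both sides of the handoff, one defensive option is to define them once as shared constants. Here’s a minimal sketch of that idea; the type and property names below are my own, not part of the sample project:

```swift
// Hypothetical shared constants, visible to both the glance and the app:
struct HandoffConstants {
  // Reverse-domain activity type, matching the string passed to updateUserActivity
  static let glanceActivity = "com.razeware.CoinTracker.glance"
  // Key for the coin name in the userInfo dictionary
  static let coinKey = "coin"
}
```

With constants like these, both the sending and receiving code reference the same strings, so a typo becomes a compile error instead of a silent handoff failure.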

Now, open CoinsInterfaceController.swift and add the following method to the class:

override func handleUserActivity(userInfo: [NSObject : AnyObject]!) {
  // 1
  if let handedCoin = userInfo["coin"] as? String {
    // 2
    let coins = coinHelper.cachedPrices()
    // 3
    for coin in coins {
      if coin.name == handedCoin {
        // 4
        pushControllerWithName("CoinDetailInterfaceController", context: coin)
        break
      }
    }
  }
}

Here’s the breakdown of the above code:

  1. You first check to see if the coin key and value pair exist in the userInfo dictionary. You also try to downcast the value to a String as that’s what you originally passed from the glance.
  2. If the key and value pair exist, you retrieve all the cached coins using the CoinHelper class.
  3. Then you enumerate over the cached coin data, looking for a match against the string that came in from the userInfo dictionary.
  4. If there’s a match, you push the coin detail interface controller onto the navigation stack, passing the current coin as the context object.

handleUserActivity(_:) is called only on the initial interface controller — as denoted by the Main arrow in the storyboard shown below — which is why you’ve overridden it in CoinsInterfaceController:


With the CoinTracker Glance scheme selected, build and run. Tap the glance once it shows up; the Watch app will launch and you’ll be taken straight to the coin detail interface for the coin displayed by the glance:


And there you have it — a complete Watch app that has feature-parity with its companion iOS app, looks great, is designed with future updates in mind, and uses Handoff and glances. Give yourself a little pat on the back — you’ve learned a lot in this series! :]

Where to Go From Here?

You can download the final CoinTracker project here, complete with all the code and interface work from this tutorial.

You’ve come a long way in this two-part tutorial series. You’ve touched on a lot of what WatchKit offers: interface controllers, tables, segues, context passing, glances, and Handoff. To top it all off, you learned how to pull it all together into a fully functional, real-world app.

But WatchKit doesn’t end there — no, no, no! There is far more to learn than what we’ve shown you in these two tutorials – we’ve only just scratched the surface.

A good place to start is Apple’s Apple Watch Programming Guide, which offers a great introduction to WatchKit and some concepts that might feel a little unfamiliar if you’re approaching the watch from a background in iOS or OS X.

Next on your reading list should be the Apple Watch Human Interface Guidelines. As the watch is a brand new platform full of new paradigms and design patterns, it’s essential that you pay close attention to these; they’re packed full of advice and recommendations on all aspects of designing interfaces for the watch.

And of course, I highly recommend checking out the 17-part video tutorial series on WatchKit that we have here on the site, as well as picking up a copy of WatchKit by Tutorials.

Do you have any thoughts on tables, glances, or Handoff, or anything else WatchKit-related? Share your thoughts in the forum discussion below!

WatchKit Tutorial with Swift: More Tables, Glances and Handoff is a post from: Ray Wenderlich

The post WatchKit Tutorial with Swift: More Tables, Glances and Handoff appeared first on Ray Wenderlich.

Video Tutorial: Adaptive Layout Part 3: Views

Spring Swift Fling Giveaway Winners – And Last Day for Discount!

Thank you for being a part of the Spring Swift Fling!


Thank you for being a part of the first ever Spring Swift Fling!

During the Spring Swift Fling, we released two new books, two podcast episodes, four tutorials, a FAQ, and a tech talk. But now, it’s time to say goodbye.

But first, there are two important pieces of news!

Spring Swift Fling Giveaway Winners

To celebrate the Swift Spring Fling, we had two giveaways: one for some free Swift books and one for some free Swift conference tickets. All you had to do to enter was comment on the respective posts.

Across both posts, we had a whopping 750+ entrants – the most entries we’ve ever seen in a giveaway!

And now it’s time to announce the 6 lucky winners… drum roll please!

Book Winners

The 4 lucky book winners are: dlcervan, dfried, grompot, and z0mb13. Congratulations! :]

Each winner will receive a free PDF+print copy of a new Swift book of their choice – either iOS Animations by Tutorials or WatchKit by Tutorials. I will be in touch with the winners directly via email.

Swift Summit Winners

The 2 lucky Swift Summit ticket winners are: NES8 and TheGamingArt.

Each winner will receive a free ticket to the upcoming Swift Summit conference in London. I will be in touch with the winners directly via email.

Last Day for Discount!

Finally I’d like to remind everyone that today is the last day for the current discounted price for our new Swift books and bundles.

Starting tomorrow, the books and bundles will be raised to normal price (at least by $10, sometimes more). So be sure to grab the discount while you still can!

Thanks to everyone who entered the Spring Swift Fling giveaway, bought the books, or simply read these posts. We really appreciate each and every one of you, and thank you for your support and encouragement – it is so amazing being a part of this community.

We hope you all have a wonderful springtime, full of delightful animations and WatchKit apps! :]

Spring Swift Fling Giveaway Winners – And Last Day for Discount! is a post from: Ray Wenderlich

The post Spring Swift Fling Giveaway Winners – And Last Day for Discount! appeared first on Ray Wenderlich.

Readers’ App Reviews – February 2015


Apps from readers like you!

I’ve got another month of inspiring apps from the community to share with you!

You guys never stop producing top notch apps, so I don’t plan to stop showcasing as many as I can. ;]

This month we’ve got:

  • An app that makes buying sports tickets a breeze
  • An ASCII-art photo extension that will amaze you
  • An instrument you can play by shopping
  • And predictably, much more!

Keep reading to see your fellow readers’ creations this month!

Fifty Five

Fifty Five is a numbers game. It will make you think. And it’s a lot of fun.

Fifty Five’s board hosts number tiles 1-5. Comparing numbers gives you points. Swipe a 2 over a 4 and get 24 points. Swipe a 4 over a 2 and get 42 points. But the real magic is in the multiplier.

Building chains improves your multiplier. Swiping 1 to 2, 2 to 3, 3 to 4, and 4 to 5 completes a chain, earning you 5 extra moves and increasing your multiplier. You can protect your multiplier mid-chain by swiping matching numbers.

In a world of Threes clones, it’s nice to see Fifty Five is as fun as it is unique. :]

E-Z Park

Have you ever forgotten where you parked? How about left a meter running too long? Worry no more: E-Z Park will make parking a breeze.

E-Z Park lets you tag your parking space using the GPS in your phone to make sure you’re never lost. Plus you can set a timer for when your parking meter expires. E-Z Park will alert you when you’re getting close to your meter limit based on the time of your choosing. Don’t come back to a parking ticket ever again.

Barcodas

Barcodes are all around us. We use them to track inventory, identify things, and more. But have you ever used a barcode for music? Barcodas is here to change just that.

With Barcodas, simply scan any barcode and you’ll immediately hear it converted to a musical tune. You can tweak pitches, speed, and scales to customize your new barcode melody. It’s really cool! Each barcode becomes its own rhythmic synthesizer, allowing you to create unique music in just a snap of your camera.

Of course Barcodas allows you to save your favorite barcodes. And you can share the barcodes for others to scan and hear as well.

Volotic

Volotic is an awesome nonlinear sequencer for your iPad.

Volotic makes it easy to experiment and create awesome songs from a library of towers, each with a unique impact on your music. There are over 50 different towers to choose from: drums, scales, tone changers, beat emitters, and more.

Volotic also supports 3D sound. Moving the center of your musical creation will adjust its output. It’s way cool!

Volotic has manual tuning support for the audiophiles among us. And you can easily record your composition on your Mac if you’re using iOS 8 and Yosemite.

ASCIImator

ASCIImator is one of the coolest photo extensions I’ve seen yet.

I’ve always wanted a quick way to convert images into ASCII Art. You know, those painstaking creations from letters and symbols. Turns out, I needed to look no further than the RayWenderlich.com community!

ASCIImator works great with iOS 8’s new photo extensions, letting you ASCIIfy any of your pictures without even leaving the Photos app. It’s incredible how easy it is. And best of all, since it’s using iOS 8’s native photo extensions, your original is a tap away.

ASCIImator has a few settings for which symbols are allowed and the font size used for detail. Give it a try, it’s a ton of fun.

2UPTOP

2UPTOP
2UPTOP is an app for all the soccer (football? :p) fans out there.

2UPTOP is an easy-to-use lineup visualizer for die-hard fans. Simply select your favorite team, choose the 11 players ready to play, and drag and drop them into their positions on the field. It couldn’t be easier.

2UPTOP pulls rosters from 10 of Europe’s most popular leagues like the Premier League, League One, and the Scottish Premier. If they’ve missed one, the app also allows you to create custom players to fill in the gaps.

Of course, no fan app would be complete without the ability to share your formation. The app will generate a beautiful field image of your lineup to share on your favorite social networks.

Rocket Renegade

Rocket Renegade harkens back to the arcade space shooters of the past and brings fond memories with it.

Rocket Renegade features drag-and-drop steering to control your lone shooter, and auto fire makes it easy to play one-handed on the train.

The soundtrack is fantastic as you blast away the drones of your enemies. There are 12 types of drones to battle, 10 unique bosses, and asteroid levels in between.

Even the background looks great as the stars rush past you on your mission to free the galaxy.

Gametime

Gametime makes buying tickets to your favorite sporting events a breeze.

Gametime shows available tickets, often with discounts of up to 60% off, in a real-time streaming list for the game of your choosing. Each ticket shows an awesome panoramic shot from the seat you’re thinking about getting, so you know the exact view you’re paying for.

A simple two-tap process is all it takes to book your seat. No printing necessary; just show your phone at the door! It’s serving a limited set of markets but adding several each week. Check the list and give it a try!

Lightning Fingers

Lightning Fingers puts the power of lightning in your hands! It is a game about coordination and finesse.

The game starts when you place two fingers on the screen. Lightning shoots out of your fingers and holds them connected. The longer you can hold out, the more points you’ll earn. And the farther apart your fingers, the larger the lightning, creating a multiplier effect.

As shapes begin to fall you must dodge them. But don’t think you can simply lift your fingers from the screen: if you lift a finger, the lightning dissipates and the game’s over!

Hues Rush

Hope you have a keen eye for colors, because Hues Rush is gonna test it.

Hues Rush is a game about a simple grid of colors. There will be one main color, and one color that’s a bit off. Tap the one that’s off to advance levels. As you advance, the grid becomes larger and the differences between shades become less and less perceptible.

A few boost powerups will help you reach higher scores with longer bonus rounds or higher contrast. And of course GameCenter makes sure you can brag to your friends.

Envelope

Envelope is a curating email app that helps you organize and sort your email automatically. And it’s packed with features.

Envelope features clusters, which are like extra-powerful threads that combine related emails so you can take action on them all at once. It also helps you filter your inbox by specific senders, important topics, or simply emails you haven’t seen yet today.

Envelope can even keep track of responses. When you reply to an important email you can mark it as expecting a response, and Envelope will alert you after a couple of days if you haven’t heard back.

Envelope also offers handwritten signatures, a Notification Center widget, organized attachments, saving attachments to Dropbox, sending voice memos, and more!

Envelope supports Gmail, Google Apps, Yahoo, iCloud, Outlook, AOL, and any server that operates using IMAP.



Honorable Mentions

Every month I get more submissions than I can handle. I give every app a shot, but I can’t write about them all. These are still great apps made by readers like you. It’s not a popularity contest, or even a favorite-picking contest; I just try to get a glimpse of what the community is working on through your submissions. Take a moment and check out these other great apps I didn’t have time to showcase properly.

QQISSO?
Speedy Mouse
Greenchick 2
Green Cage Ball
TheFlapAttack
Cool HDWallpaper
Tap4
Loopy Lava
SuperGocha
Dodgers in Space
Fast Lane Racer
Raus!
JJ Says: Retro Classic Memory Game HD
A Parcel of Courage
Tappy Bird – No Gravity
YAE Runner
Brown Bear Bounce
Flat Jewels Match 3
StopTap
Colorized – A Puzzle Adventure



Where To Go From Here?

Each month, I really enjoy seeing what our community of readers comes up with. The apps you build are the reason we keep writing tutorials. Make sure you tell me about your next one: submit here!

If you saw an app you liked, hop over to the App Store and leave a review! A good review always makes a dev’s day. And make sure you tell them you’re from Ray Wenderlich; this is a community of makers!

If you’ve never made an app, this is the month! Check out our free tutorials to become an iOS star. What are you waiting for? I want to see your app next month!

Readers’ App Reviews – February 2015 is a post from: Ray Wenderlich

The post Readers’ App Reviews – February 2015 appeared first on Ray Wenderlich.

iOS Dev Weekly with Dave Verwer – Podcast S03 E07

Chat about iOS Dev Weekly with Dave Verwer!


Welcome back to season 3 of the raywenderlich.com podcast!

In this episode, we’ll chat with Dave Verwer about his popular iOS newsletter iOS Dev Weekly.

[Subscribe in iTunes] [RSS Feed]

Our Sponsor

Interested in sponsoring a podcast episode? We sell ads via Syndicate Ads, check it out!

Links and References

Contact Us

Where To Go From Here?

We hope you enjoyed this episode of our podcast. Stay tuned for a new episode next week! :]

Be sure to subscribe in iTunes to get access as soon as it comes out!

We’d love to hear what you think about the podcast, and any suggestions on what you’d like to hear in future episodes. Feel free to drop a comment here, or email us anytime at podcast@raywenderlich.com!

iOS Dev Weekly with Dave Verwer – Podcast S03 E07 is a post from: Ray Wenderlich

The post iOS Dev Weekly with Dave Verwer – Podcast S03 E07 appeared first on Ray Wenderlich.

Video Tutorial: Adaptive Layout Part 4: Fonts and Images

iOS 8 Metal Tutorial with Swift Part 3: Adding Texture

Learn how to add texture to a 3D cube with Metal!


Welcome back to our iOS 8 Metal tutorial series!

In the first part of the series, you learned how to get started with Metal and render a simple 2D triangle.

In the second part of the series, you learned how to set up a series of transformations to move from a triangle to a full 3D cube.

In this third part of the series, you’ll learn how to add a texture to the cube. As you work through this tutorial, you’ll learn:

  • How to reuse uniform buffers
  • A few useful and simple texture concepts
  • How to apply textures to a 3D model
  • How to add touch input to your app
  • How to debug Metal

Dust off your guitars — it’s time to rock Metal!

Getting Started

First, download the starter project. It’s very similar to the app at the end of part two, but with a few modifications, explained below.

Previously, ViewController was a heavy lifter. Even though you’d refactored it, it still had more than one responsibility. In the updated starter project, ViewController is split into two classes:

  • MetalViewController: The base class that contains generic Metal setup code.
  • MySceneViewController: A subclass that contains code specific to this app – i.e. creating and rendering the cube model.

The most important part is the new protocol MetalViewControllerDelegate:

protocol MetalViewControllerDelegate: class {
  func updateLogic(timeSinceLastUpdate:CFTimeInterval)
  func renderObjects(drawable:CAMetalDrawable)
}

This establishes callbacks from MetalViewController so that your app knows when to update logic and when to actually render.

In MySceneViewController, you set the scene controller as the delegate and then implement the MetalViewControllerDelegate methods; this is where all the cube rendering and updating action happens.
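As a rough illustration of how the pieces fit together, the delegate adoption might look like the sketch below. The property name metalViewControllerDelegate and the method bodies are assumptions for illustration, not the starter project’s exact code:

```swift
class MySceneViewController: MetalViewController, MetalViewControllerDelegate {

  override func viewDidLoad() {
    super.viewDidLoad()
    // Assumed delegate property on the MetalViewController base class
    self.metalViewControllerDelegate = self
  }

  // Called once per frame to advance app state
  func updateLogic(timeSinceLastUpdate: CFTimeInterval) {
    // e.g. update the cube's rotation here
  }

  // Called when it's time to encode draw calls
  func renderObjects(drawable: CAMetalDrawable) {
    // e.g. render the cube into the drawable here
  }
}
```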

Now that you’re up to speed on the changes from part two, it’s time to move forward and delve deeper into the world of Metal.

Reusing Uniform Buffers (optional)

Note: This next section is theory driven and gives you more context about how Metal works under the hood. If you’re eager to move into exercises, feel free to skip ahead to the next section (Texturing). But reading this will make you at least 70 percent smarter. ;-]

In the previous part of this series, you learned about allocating new uniform buffers for every new frame, and you also learned that it’s not always the best way to use Metal since it’s not very efficient.

So, the time has come to change your ways and make Metal sing, like an epic hair-band guitar solo — starting with identifying the problem.

The Problem

In the render method in Node.swift, find:

let uniformBuffer = device.newBufferWithLength(sizeof(Float) * Matrix4.numberOfElements() * 2, options: nil)

Take a good look at this monster! This method is called 60 times per second, and you create a new buffer each time.

Since this is a performance issue, you’ll want to compare stats before and after optimization.
Build and run the app, open the Debug Navigator tab and select the FPS row.


Take note of the FPS and frame-time numbers reported.

You’ll return to those numbers after optimization, so you may want to grab a screencap or jot down the stats before you move on.

The Solution

The solution is that instead of allocating a buffer each time, you’ll reuse a pool of buffers.

To keep your code clean, you’ll encapsulate all of the logic to create and reuse buffers into a helper class called BufferProvider that you’ll create.

You can visualize the class as follows:

bufferProvider_Diagram

The BufferProvider will be responsible for creating a pool of buffers, and it’ll have a method to get the next available reusable buffer. This is kind of like UITableViewCell reuse!

Now it’s time to dig in and make magic happen. Create a new Swift class named BufferProvider, and make it a subclass of NSObject.

First import Metal at the top of the file:

import Metal

Now, add these properties to the class:

// 1
let inflightBuffersCount: Int 
// 2
private var uniformsBuffers: [MTLBuffer]
// 3
private var avaliableBufferIndex: Int = 0

You’ll get some errors at the moment due to a missing initializer, but you’ll fix those shortly. For now, let’s review each property you just added:

  1. An Int that will store the number of buffers stored by BufferProvider. In this app, it will equal 3.
  2. An array that will store the buffers themselves.
  3. The index of the next available buffer. In your case it’ll change like this: 0 -> 1 -> 2 -> 0 -> 1 -> 2 -> 0 -> …

Now add an initializer:

init(device:MTLDevice, inflightBuffersCount: Int, sizeOfUniformsBuffer: Int){
 
  self.inflightBuffersCount = inflightBuffersCount
  uniformsBuffers = [MTLBuffer]()
 
  for _ in 0..<inflightBuffersCount {
    let uniformsBuffer = device.newBufferWithLength(sizeOfUniformsBuffer, options: nil)
    uniformsBuffers.append(uniformsBuffer)
  }
}

Here you create a number of buffers, equal to the inflightBuffersCount parameter passed in to this initializer, and append them to the array.

Now add a method to fetch the next available buffer and copy some data into it:

func nextUniformsBuffer(projectionMatrix: Matrix4, modelViewMatrix: Matrix4) -> MTLBuffer {
 
  // 1  
  var buffer = uniformsBuffers[avaliableBufferIndex]
 
  // 2
  var bufferPointer = buffer.contents()
 
  // 3
  memcpy(bufferPointer, modelViewMatrix.raw(), UInt(sizeof(Float)*Matrix4.numberOfElements()))
  memcpy(bufferPointer + sizeof(Float)*Matrix4.numberOfElements(), projectionMatrix.raw(), UInt(sizeof(Float)*Matrix4.numberOfElements()))
 
  // 4
  avaliableBufferIndex++
  if avaliableBufferIndex == inflightBuffersCount{
    avaliableBufferIndex = 0
  } 
 
  return buffer
}

Let’s review this section by section:

  1. Fetch the MTLBuffer from the uniformsBuffers array at index avaliableBufferIndex.
  2. Call contents() to get a void * pointer to the buffer’s data.
  3. Copy the passed-in matrix data into the buffer using memcpy.
  4. Increment avaliableBufferIndex, wrapping it back to 0 once it reaches inflightBuffersCount.
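As an aside, the wrap-around in step 4 can be written more compactly with the remainder operator. This is an equivalent alternative, not what the tutorial’s code uses:

```swift
// Equivalent to incrementing and then resetting with an if statement:
avaliableBufferIndex = (avaliableBufferIndex + 1) % inflightBuffersCount
```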

Almost done – you just need to set up the rest of the code to use this.

To do this, open Node.swift, and add this new property:

var bufferProvider: BufferProvider

Find init and add this at the end of the method:

self.bufferProvider = BufferProvider(device: device, inflightBuffersCount: 3, sizeOfUniformsBuffer: sizeof(Float) * Matrix4.numberOfElements() * 2)

Finally, inside render, replace this code:

let uniformBuffer = device.newBufferWithLength(sizeof(Float) * Matrix4.numberOfElements() * 2, options: nil)
 
var bufferPointer = uniformBuffer?.contents()
 
memcpy(bufferPointer!, nodeModelMatrix.raw(), UInt(sizeof(Float)*Matrix4.numberOfElements()))
memcpy(bufferPointer! + sizeof(Float)*Matrix4.numberOfElements(), projectionMatrix.raw(), UInt(sizeof(Float)*Matrix4.numberOfElements()))

With this far more elegant code:

let uniformBuffer = bufferProvider.nextUniformsBuffer(projectionMatrix, modelViewMatrix: nodeModelMatrix)

Build and run. Just as it did before adding bufferProvider, everything is working just fine!

IMG_3030

A Wild Race Condition Appears!

Things are running smoothly, but there is a problem that could cause you some major pain later.

Consider how the CPU and GPU work through frames in parallel:

Currently, the CPU gets the “next available buffer”, fills it with data, and then sends it to the GPU for processing.

But since there’s no guarantee about how long the GPU takes to render each frame, there could be a situation where you’re filling buffers on the CPU faster than the GPU can deal with them. In that case, you could find a scenario where you need a buffer on the CPU even though it’s in use on the GPU.

On the graph above, the CPU wants to encode the third frame while the GPU draws the first frame, but its uniform buffer is still in use.

So how do you fix this? The easiest way is to increase the number of buffers in the reuse pool so that it’s unlikely for the CPU to be ahead of the GPU. This would probably fix it but wouldn’t be 100% safe, and I know you’re here because you want to code in Metal like a ninja.

Patience. That’s what you need to solve this problem like a real ninja.

Like A Ninja

Like an undisciplined ninja, the CPU lacks patience, and that's the problem. It's good that the CPU can encode commands so quickly, but it wouldn't hurt it to wait a bit to avoid this race condition.

feel like a ninja

Fortunately, it’s easy to “train” the CPU to wait when the buffer it wants is still in use.

For this task you’ll use semaphores, a low-level synchronization primitive. Basically, semaphores allow you to keep track of how many of a limited amount of resources are available, and block when no more resources are available.

Here’s how you’ll use a semaphore in this example:

  • Initialize with number of buffers. You’ll be using the semaphore to keep track of how many buffers are currently in use on the GPU, so you’ll initialize the semaphore with the number of buffers that are available (3 to start in this case).
  • Wait before accessing a buffer. Every time you need to access a buffer, you’ll ask the semaphore to “wait”. If a buffer is available, you’ll continue running as usual (but decrement the count on the semaphore). If all buffers are in use, this will block the thread until one becomes available. This should be a very short wait in practice as the GPU is fast.
  • Signal when done with a buffer. When the GPU is done with a buffer, you will “signal” the semaphore to track that it’s available again.

Note: To learn more about semaphores, check out this great explanation.

This will make more sense in code than in prose, let’s try this out. Go to BufferProvider.swift and add the following property:

var avaliableResourcesSemaphore:dispatch_semaphore_t

Now add this to the top of init:

avaliableResourcesSemaphore = dispatch_semaphore_create(inflightBuffersCount)

Here you create your semaphore with an initial count equal to the number of available buffers.

Now open Node.swift and add this at the top of render method:

dispatch_semaphore_wait(bufferProvider.avaliableResourcesSemaphore, DISPATCH_TIME_FOREVER)

This makes the CPU wait whenever bufferProvider.avaliableResourcesSemaphore has no free resources.

Now you need to signal the semaphore when the resource becomes available.

While you’re still in the render method, find:

let commandBuffer = commandQueue.commandBuffer()

And add this below:

commandBuffer.addCompletedHandler { (commandBuffer) -> Void in
  dispatch_semaphore_signal(self.bufferProvider.avaliableResourcesSemaphore)
}

This makes it so that when the GPU finishes rendering, it executes a completion handler to signal the semaphore (and bumps its count back up again).

Also in BufferProvider.swift, add this method:

deinit {
  for _ in 0..<self.inflightBuffersCount {
    dispatch_semaphore_signal(self.avaliableResourcesSemaphore)
  }
}

deinit does a little cleanup before object deletion. Without it, your app would crash if BufferProvider were deleted while the semaphore was still being waited on, because releasing a semaphore whose value is lower than its initial count is an error in GCD. Signaling it back up first avoids the crash.

Build and run. Everything should work as before – but now ninja style!

IMG_3030

Performance Results

Now you must be eager to see if there's been any performance improvement. As you did before, open the Debug Navigator tab and select the FPS row.

after

These are my stats – you’ll see that for me the CPU Frame Time decreased from 1.7ms to 1.2ms. It looks like a small win now, but the more objects you draw, the more value it gains. Please note that your actual results will depend on the device you’re using.

57902579

Texturing

Note: If you skipped the previous section, start with this version of the project.

So, what is a texture? Simply put, textures are 2D images that are typically mapped onto 3D models.

For a better understanding, think about a real-life object, such as a mandarin. What would a mandarin texture look like in Metal? Something like this:

mandarines

If you rendered a mandarin, first you would create a sphere-like 3D model, then you would use a texture similar to the one above, and Metal would map it onto the model.

Texture Coordinates

Unlike OpenGL, where texture coordinates originate at the bottom-left corner, Metal's textures originate at the top-left corner.

Here’s a sneak peek of the texture you’ll use in this tutorial.

coords

With 3D graphics, it’s typical to see the texture coordinate axis marked with letter s for horizontal and t for vertical, just like the image above.

Going forward in this tutorial, to differentiate iOS device pixels and texture pixels, you’ll refer to texture pixels as texels.

So, your texture has 512×512 texels. In this tutorial, you'll use normalized coordinates, which means that coordinates within the texture always fall in the range 0 to 1. So the:

  • Top-left corner has the coordinates (0.0, 0.0)
  • Top-right corner has the coordinates (1.0, 0.0)
  • Bottom-left corner has the coordinates (0.0, 1.0)
  • Bottom-right corner has the coordinates (1.0, 1.0)

When you map this texture to your cube, normalized coordinates will be important to understand.

Using normalized coordinates isn’t mandatory, but it has some advantages. For example, you want to switch texture with one that has resolution 256×256 texels. If you use normalized coordinates, it’ll “just work” (if new texture maps correctly).

Using Textures in Metal

In Metal, a texture is represented by any object that conforms to the MTLTexture protocol. Metal supports several texture types, but for now all you need is a standard 2D texture.

Another important protocol is MTLSamplerState. An object that conforms to this protocol basically instructs the GPU how to use the texture.

So, when you pass a texture, you’ll pass the sampler as well. When using multiple textures that need to be treated similarly, you use the same sampler.

Here is a small visual to help illustrate how you’ll work with textures:

texture_diagrm

For your convenience, the project file contains a special, handcrafted class named MetalTexture that holds all the code to create an MTLTexture from an image file in the bundle.

Note: I’m not going to delve into it here, but if you want to learn how to create MTLTexture, refer to this post on MetalByExample.com.

MetalTexture

Now that you understand how this will work, it’s time to bring this texture to life. Download and copy MetalTexture.swift to your project and open it.

There are two important methods in this file. The first is:

init(resourceName: String,ext: String, mipmaped:Bool)

Here you pass the name of the file and its extension, and you also choose whether or not you want mipmaps.

But wait, what’s a mipmap?

For this tutorial, all you need to know is that when mipmaped is set to true, loading the texture generates an array of images instead of just one, where each image is half the size of the previous one. The GPU then automatically selects the best mip level to read texels from.
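For example, a mipmapped 512×512 texture stores ten levels: 512, 256, 128 and so on, down to a single texel. A small sketch of how the level sizes are derived:

```swift
// Each mip level is half the size of the previous one.
var mipSizes = [Int]()
var size = 512
while size >= 1 {
  mipSizes.append(size)
  size /= 2
}
// mipSizes is now [512, 256, 128, 64, 32, 16, 8, 4, 2, 1]
```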

The other method to note is this:

func loadTexture(#device: MTLDevice, commandQ: MTLCommandQueue, flip: Bool)

This method is called when MetalTexture actually creates the MTLTexture. To create this object, you need a device object (just as with buffers). You also pass in an MTLCommandQueue, which is used when mipmap levels are generated. Textures are usually loaded upside down, so the method also has a flip parameter to deal with that.

Okay, now it’s time to put it all together.

Open Node.swift, and add 2 new variables:

var texture: MTLTexture
lazy var samplerState: MTLSamplerState? = Node.defaultSampler(self.device)

For now, Node holds just one texture and one sampler.

Now add the following method at the end of the file:

class func defaultSampler(device: MTLDevice) -> MTLSamplerState
{
  var pSamplerDescriptor:MTLSamplerDescriptor? = MTLSamplerDescriptor();
 
  if let sampler = pSamplerDescriptor
  {
    sampler.minFilter             = MTLSamplerMinMagFilter.Nearest
    sampler.magFilter             = MTLSamplerMinMagFilter.Nearest
    sampler.mipFilter             = MTLSamplerMipFilter.Nearest
    sampler.maxAnisotropy         = 1
    sampler.sAddressMode          = MTLSamplerAddressMode.ClampToEdge
    sampler.tAddressMode          = MTLSamplerAddressMode.ClampToEdge
    sampler.rAddressMode          = MTLSamplerAddressMode.ClampToEdge
    sampler.normalizedCoordinates = true
    sampler.lodMinClamp           = 0
    sampler.lodMaxClamp           = FLT_MAX
  }
  else
  {
    println(">> ERROR: Failed creating a sampler descriptor!")
  }
  return device.newSamplerStateWithDescriptor(pSamplerDescriptor!)
}

This method generates a simple texture sampler that basically just holds a bunch of flags. Here you enable "nearest-neighbor" filtering, which is faster than "linear" filtering, and "clamp to edge", which instructs Metal how to deal with out-of-range coordinates (which won't happen in this tutorial, so it doesn't matter here).

Now in the render method, find this code:

renderEncoder.setRenderPipelineState(pipelineState)
renderEncoder.setVertexBuffer(vertexBuffer, offset: 0, atIndex: 0)

And add this below it:

renderEncoder.setFragmentTexture(texture, atIndex: 0)
if let samplerState = samplerState{
  renderEncoder.setFragmentSamplerState(samplerState, atIndex: 0)
}

This simply passes the texture and sampler to the shaders. It’s similar to what you did with vertex and uniform buffers, except that now you pass them to a fragment shader because you want to map texels to fragments.

Now you need to modify init. Change its declaration so it matches this:

init(name: String, vertices: Array<Vertex>, device: MTLDevice, texture: MTLTexture) {

Find this:

vertexCount = vertices.count

And add this just below it:

self.texture = texture

Each vertex needs to map to some coordinates on the texture. So open Vertex.swift and replace its contents with the following:

struct Vertex{
 
  var x,y,z: Float     // position data
  var r,g,b,a: Float   // color data
  var s,t: Float       // texture coordinates
 
  func floatBuffer() -> [Float] {
    return [x,y,z,r,g,b,a,s,t]
  }
 
};

This adds two floats that hold texture coordinates.

Now open Cube.swift, and change init so it looks like this:

init(device: MTLDevice, commandQ: MTLCommandQueue){  
  // 1
 
  //Front
  let A = Vertex(x: -1.0, y:   1.0, z:   1.0, r:  1.0, g:  0.0, b:  0.0, a:  1.0, s: 0.25, t: 0.25)
  let B = Vertex(x: -1.0, y:  -1.0, z:   1.0, r:  0.0, g:  1.0, b:  0.0, a:  1.0, s: 0.25, t: 0.50)
  let C = Vertex(x:  1.0, y:  -1.0, z:   1.0, r:  0.0, g:  0.0, b:  1.0, a:  1.0, s: 0.50, t: 0.50)
  let D = Vertex(x:  1.0, y:   1.0, z:   1.0, r:  0.1, g:  0.6, b:  0.4, a:  1.0, s: 0.50, t: 0.25)
 
  //Left
  let E = Vertex(x: -1.0, y:   1.0, z:  -1.0, r:  1.0, g:  0.0, b:  0.0, a:  1.0, s: 0.00, t: 0.25)
  let F = Vertex(x: -1.0, y:  -1.0, z:  -1.0, r:  0.0, g:  1.0, b:  0.0, a:  1.0, s: 0.00, t: 0.50)
  let G = Vertex(x: -1.0, y:  -1.0, z:   1.0, r:  0.0, g:  0.0, b:  1.0, a:  1.0, s: 0.25, t: 0.50)
  let H = Vertex(x: -1.0, y:   1.0, z:   1.0, r:  0.1, g:  0.6, b:  0.4, a:  1.0, s: 0.25, t: 0.25)
 
  //Right
  let I = Vertex(x:  1.0, y:   1.0, z:   1.0, r:  1.0, g:  0.0, b:  0.0, a:  1.0, s: 0.50, t: 0.25)
  let J = Vertex(x:  1.0, y:  -1.0, z:   1.0, r:  0.0, g:  1.0, b:  0.0, a:  1.0, s: 0.50, t: 0.50)
  let K = Vertex(x:  1.0, y:  -1.0, z:  -1.0, r:  0.0, g:  0.0, b:  1.0, a:  1.0, s: 0.75, t: 0.50)
  let L = Vertex(x:  1.0, y:   1.0, z:  -1.0, r:  0.1, g:  0.6, b:  0.4, a:  1.0, s: 0.75, t: 0.25)
 
  //Top
  let M = Vertex(x: -1.0, y:   1.0, z:  -1.0, r:  1.0, g:  0.0, b:  0.0, a:  1.0, s: 0.25, t: 0.00)
  let N = Vertex(x: -1.0, y:   1.0, z:   1.0, r:  0.0, g:  1.0, b:  0.0, a:  1.0, s: 0.25, t: 0.25)
  let O = Vertex(x:  1.0, y:   1.0, z:   1.0, r:  0.0, g:  0.0, b:  1.0, a:  1.0, s: 0.50, t: 0.25)
  let P = Vertex(x:  1.0, y:   1.0, z:  -1.0, r:  0.1, g:  0.6, b:  0.4, a:  1.0, s: 0.50, t: 0.00)
 
  //Bot
  let Q = Vertex(x: -1.0, y:  -1.0, z:   1.0, r:  1.0, g:  0.0, b:  0.0, a:  1.0, s: 0.25, t: 0.50)
  let R = Vertex(x: -1.0, y:  -1.0, z:  -1.0, r:  0.0, g:  1.0, b:  0.0, a:  1.0, s: 0.25, t: 0.75)
  let S = Vertex(x:  1.0, y:  -1.0, z:  -1.0, r:  0.0, g:  0.0, b:  1.0, a:  1.0, s: 0.50, t: 0.75)
  let T = Vertex(x:  1.0, y:  -1.0, z:   1.0, r:  0.1, g:  0.6, b:  0.4, a:  1.0, s: 0.50, t: 0.50)
 
  //Back
  let U = Vertex(x:  1.0, y:   1.0, z:  -1.0, r:  1.0, g:  0.0, b:  0.0, a:  1.0, s: 0.75, t: 0.25)
  let V = Vertex(x:  1.0, y:  -1.0, z:  -1.0, r:  0.0, g:  1.0, b:  0.0, a:  1.0, s: 0.75, t: 0.50)
  let W = Vertex(x: -1.0, y:  -1.0, z:  -1.0, r:  0.0, g:  0.0, b:  1.0, a:  1.0, s: 1.00, t: 0.50)
  let X = Vertex(x: -1.0, y:   1.0, z:  -1.0, r:  0.1, g:  0.6, b:  0.4, a:  1.0, s: 1.00, t: 0.25) 
 
  // 2    
  var verticesArray:Array<Vertex> = [
    A,B,C ,A,C,D,   //Front
    E,F,G ,E,G,H,   //Left
    I,J,K ,I,K,L,   //Right
    M,N,O ,M,O,P,   //Top
    Q,R,S ,Q,S,T,   //Bot
    U,V,W ,U,W,X    //Back
  ] 
 
  //3    
  var texture = MetalTexture(resourceName: "cube", ext: "png", mipmaped: true)
  texture.loadTexture(device: device, commandQ: commandQ, flip: true)  
 
  super.init(name: "Cube", vertices: verticesArray, device: device, texture: texture.texture) 
}

Let’s go over the changes here section by section:

  1. Now as you create each vertex, you also specify the texture coordinate for each vertex. For a bit more clarity look at this image, and make sure you understand the s and t values of each vertex.

    texture_coord

    Note that you also need to create vertices for each side of the cube individually now (rather than reusing vertices), because the texture coordinates might not match up correctly otherwise. It’s okay if adding extra vertices is a little confusing at this stage — your brain will grasp it soon enough.

  2. Here you form triangles, just like you did in part two of this tutorial series.
  3. You create and load the texture using the MetalTexture helper class.

Handling Texture on the GPU

At this point, you’re done working on the CPU side of things, and it’s all GPU from here.

Add this image to your project.

Open Shaders.metal and replace the entire file with this:

#include <metal_stdlib>
using namespace metal;
 
// 1
struct VertexIn{
  packed_float3 position;
  packed_float4 color;
  packed_float2 texCoord;  
};
 
struct VertexOut{
  float4 position [[position]];
  float4 color;
  float2 texCoord; 
};
 
struct Uniforms{
  float4x4 modelMatrix;
  float4x4 projectionMatrix;
};
 
vertex VertexOut basic_vertex(
                              const device VertexIn* vertex_array [[ buffer(0) ]],
                              const device Uniforms&  uniforms    [[ buffer(1) ]],
                              unsigned int vid [[ vertex_id ]]) {
 
  float4x4 mv_Matrix = uniforms.modelMatrix;
  float4x4 proj_Matrix = uniforms.projectionMatrix;
 
  VertexIn VertexIn = vertex_array[vid];
 
  VertexOut VertexOut;
  VertexOut.position = proj_Matrix * mv_Matrix * float4(VertexIn.position,1);
  VertexOut.color = VertexIn.color;
  // 2
  VertexOut.texCoord = VertexIn.texCoord; 
 
  return VertexOut;
}
 
// 3
fragment float4 basic_fragment(VertexOut interpolated [[stage_in]],
                              texture2d<float>  tex2D     [[ texture(0) ]],    
// 4
                              sampler           sampler2D [[ sampler(0) ]]) {  
// 5
  float4 color = tex2D.sample(sampler2D, interpolated.texCoord);               
  return color;
}

Now let’s go through all the stuff you changed:

  1. The vertex structs now contain texture coordinates.
  2. You now pass texture coordinates from VertexIn to VertexOut.
  3. Here you receive the texture you passed in.
  4. Here you receive the sampler.
  5. You use sample() on the texture to get color for the specific texture coordinate from the texture, by using rules specified in sampler.

Almost done! Open MySceneViewController.swift and replace this line:

objectToDraw = Cube(device: device)

With this:

objectToDraw = Cube(device: device, commandQ:commandQueue)

Build and run. Your cube should now be texturized.

IMG_3049

Colorizing a Texture (Optional)

At this point, you’re ignoring the cube’s color values and just using color values from the texture. But what if you need to texturize the object’s color instead of covering it up?

In the fragment shader, replace this line:

float4 color = tex2D.sample(sampler2D, interpolated.texCoord);

With:

float4 color =  interpolated.color * tex2D.sample(sampler2D, interpolated.texCoord);

You should get something like this:

IMG_3048

You did this just to see how you combine colors inside the fragment shader. And yes, it’s as simple as doing a little multiplication.

But don’t continue until you revert that last change because it doesn’t look that good.

Adding User Input

All this texturing is cool, but it’s rather static. Wouldn’t it be cool if you could rotate the cube with your finger and see your beautiful texturing work at every angle?

Let’s do this! You’ll use UIPanGestureRecognizer to detect user interactions.

Open MySceneViewController.swift, and add these two new properties:

let panSensivity:Float = 5.0
var lastPanLocation: CGPoint!

Now add two new methods:

//MARK: - Gesture related
// 1
func setupGestures(){
  var pan = UIPanGestureRecognizer(target: self, action: Selector("pan:"))
  self.view.addGestureRecognizer(pan)  
}
 
// 2
func pan(panGesture: UIPanGestureRecognizer){
  if panGesture.state == UIGestureRecognizerState.Changed{  
    var pointInView = panGesture.locationInView(self.view)
    // 3
    var xDelta = Float((lastPanLocation.x - pointInView.x)/self.view.bounds.width) * panSensivity
    var yDelta = Float((lastPanLocation.y - pointInView.y)/self.view.bounds.height) * panSensivity  
    // 4
    objectToDraw.rotationY -= xDelta
    objectToDraw.rotationX -= yDelta    
    lastPanLocation = pointInView
  } else if panGesture.state == UIGestureRecognizerState.Began{
    lastPanLocation = panGesture.locationInView(self.view)
  } 
}

Let’s review this section by section:

  1. Create a pan gesture recognizer and add it to your view.
  2. Check if the touch moved.
  3. When the touch moves, you calculate how far it moved in normalized coordinates. You also apply panSensivity to control the rotation speed.
  4. Apply the changes to the cube by setting the rotation properties.

Now find the end of viewDidLoad() and add:

setupGestures()

Build and run.

Hmmm, the cube spins all by itself. Why is that? Think through what you just did and see if you can identify the problem here. Open the spoiler to check if your assumption is correct.

Solution Inside SelectShow>

Debugging Metal

Like any code, you’ll need to do a little debugging to make sure your work is free of errors. And if you look closely, you’ll notice that at some angles, the sides are a little “crispy”.

lupe

To fully understand the problem, you’ll need to debug. Fortunately, Metal comes with some stellar tools to help you.

While the app is running, press the Capture the GPU Frame button.

Screen Shot 2015-01-14 at 1.23.27 PM

Pressing the button will automatically pause the app on a breakpoint, and then Xcode will collect all values and states of this single frame.

Xcode may put you into assistant mode, meaning that it splits your main area into two, but you don’t need all that, so feel free to return to regular mode. Also, in the debug area, select All MTL Objects, just like on the screenshot:

Screen Shot 2015-01-14 at 1.46.25 PM

At last, you have proof that you’re actually drawing in triangles, not squares!

Anyway, in the debug area, find and open the Textures group.

Screen Shot 2015-01-14 at 1.54.18 PM

You’re probably asking why you have two textures, because you only passed in one.

One texture is the cube image; the other is the one the fragment shader renders into, which is what's shown on the screen.

Dive deeper; you’re getting close to explaining the mystery.

The weird part is that this other texture has non-Retina resolution. Ah-ha! So the reason your cube was a bit crispy is that the non-Retina texture was stretched to fill the screen. You'll fix this in a moment.

Fixing Drawable Texture Resizing

There is one more problem to debug and solve before you can officially declare your mastery of Metal. Run your app again and rotate the device into landscape mode.

IMG_3050

Not the best view, eh?

The problem here is that when the device rotates, its bounds change. However, the displayed texture dimensions don’t have any reason to change.

But it’s pretty easy to fix. Open MetalViewController.swift and take a look at this setup code in viewDidLoad:

  device = MTLCreateSystemDefaultDevice()
  metalLayer = CAMetalLayer()
  metalLayer.device = device
  metalLayer.pixelFormat = .BGRA8Unorm
  metalLayer.framebufferOnly = true
  metalLayer.frame = view.layer.frame
  view.layer.addSublayer(metalLayer)

The important line is metalLayer.frame = view.layer.frame, which sets the layer frame once. You just need to update it when the device rotates.

So override viewDidLayoutSubviews like this:

//1
override func viewDidLayoutSubviews() {
  super.viewDidLayoutSubviews()
 
  if let window = view.window {
    let scale = window.screen.nativeScale  
    let layerSize = view.bounds.size
//2
    view.contentScaleFactor = scale
    metalLayer.frame = CGRectMake(0, 0, layerSize.width, layerSize.height)
    metalLayer.drawableSize = CGSizeMake(layerSize.width * scale, layerSize.height * scale)  
  }    
}

  1. Gets the display's nativeScale for the device (2 for the iPhone 5s, 6 and iPads, 3 for the iPhone 6 Plus).
  2. Applies the scale to increase the drawable texture size.

Now delete the following line in viewDidLoad:

metalLayer.frame = view.layer.frame

Build and run. Here is a classic before-and-after comparison.

compare

The difference is even more obvious when you’re on an iPhone 6+.

Now rotate to landscape.

IMG_3052

It’s all flat now, but at least the background is a rich green and the edges are pretty. :]

If you repeat the steps from the debug section, you’d see the texture’s dimensions are now correct. So, what’s the problem?

It’s something, that’s for sure. Think through what you just did and try to figure out what’s causing the pain. Then check the answer below to see if you figured it out.

Solution Inside SelectShow>

Wrap Up

Nicely done. Take a moment to review what you’ve done in this tutorial.

  1. You created BufferProvider to cleverly reuse uniform buffers instead of creating new buffers every time.
  2. You added MetalTexture and loaded a MTLTexture with it.
  3. You modified the structure of Vertex so it also stores corresponding texture coordinates from MTLTexture.
  4. You modified Cube so it contains 24 vertices, each with its own texture coordinates.
  5. You modified the shaders to receive texture coordinates of the fragments, and then you read corresponding texel using sample().
  6. You added a cool rotation UI effect with UIPanGestureRecognizer.
  7. You debugged the Metal frame and identified why it rendered a subpar image.
  8. You resized a drawable texture in viewDidLayoutSubviews to fix the rotation issue and improve the image’s quality.

Where To Go From Here?

Here is the final example project from this iOS 8 Metal Tutorial.

We hope to make more Metal tutorials in the future, but in the meantime, be sure to check out some of these great resources:

Also tune into the OpenGL ES video tutorials on this site, and learn as Ray explains — in depth — how many of these same concepts work in OpenGL ES.

Thank you for joining me for this tour through Metal. As you can see, it’s a powerful technology that’s relatively easy to implement once you understand how it works.

If you have questions, comments or Metal discoveries to share, please leave them in the comments below!

iOS 8 Metal Tutorial with Swift Part 3: Adding Texture is a post from: Ray Wenderlich

The post iOS 8 Metal Tutorial with Swift Part 3: Adding Texture appeared first on Ray Wenderlich.


Trigonometry for Games – Sprite Kit and Swift Tutorial: Part 1/2

Learn Trigonometry for game programming!


Update Note: This is the third incarnation of one of our very popular tutorials – the first version was written by Tutorial Team member Matthijs Hollemans for Cocos2D, and the second version was updated to Sprite Kit by Tony Dahbura. This latest version still uses Sprite Kit, but is updated for iOS 8 and Swift.

Does the thought of doing mathematics give you cold sweats? Are you ready to give up on your career as a budding game developer because the math just doesn’t make any sense to you?

Don’t fret – math can be fun, and this cool 2-part game tutorial will prove it!

Here’s a little secret: as an app developer, you don’t really need to know a lot of math. Most of the computations that we do in our professional lives don’t go much beyond basic arithmetic.

That said, for making games it is useful to have a few more math skills in your toolbox. You don’t need to become as smart as Archimedes or Isaac Newton, but a basic understanding of trigonometry, combined with some common sense, will take you a long way.

In this tutorial, you will learn about some important trigonometric functions and how you can use them in your games. Then you’ll get some practice applying the theories by developing a simple space shooter iPhone game using the Sprite Kit game framework.

Don’t worry if you’ve never used Sprite Kit before or are going to use a different framework for your game – the mathematics covered in this tutorial are applicable to any engine you might choose to use. And you don’t need any prior experience, as I’ll walk through the process step-by-step.

If you supply the common sense, this tutorial will get you up to speed on the trigonometry, so let’s get started!

Note: The game you’ll build in this tutorial uses the accelerometer so you’ll need a real iOS device and a paid developer account.

Getting Started: It’s All About Triangles

It sounds like a mouthful, but trigonometry (or trig, for short) simply means calculations with triangles (that’s where the tri comes from).

You may not have realized this, but games are full of triangles. For example, imagine you have a spaceship game, and you want to calculate the distance between these ships:

Distance between ships

You have X and Y position of each ship, but how can you find the length of that line?

Well, you can simply draw a line from the center point of each ship to form a triangle like this:

Then, since you know the X and Y coordinates of each ship, you can compute the length of each of the new lines. Now that you know the lengths of two sides of the triangle, you can use trig to compute the length of the diagonal line – the distance between the ships.

Note that one of the corners of this triangle has an angle of 90 degrees. This is known as a right triangle (or right-angle triangle, for you Brits out there!), and that’s the sort of triangle you’ll be dealing with in this tutorial.

Any time you can express something in your game as a triangle with a 90-degree right angle – such as the spatial relationship between the two sprites in the picture – you can use trigonometric functions to do calculations on them.

So in summary, trigonometry is the mathematics that you use to calculate the lengths and angles of right triangles. And that comes in handy more often than you might think.

For example, in this spaceship game you might want to:

  • Have one ship shoot a laser in the direction of the other ship
  • Have one ship start moving in the direction of another ship to chase
  • Play a warning sound effect if an enemy ship is getting too close

All of this and more you can do with the power of trigonometry!

Your Arsenal of Functions

First, let’s get the theory out of the way. Don’t worry, I’ll keep it short so you can get to the fun coding bits as quickly as possible.

These are the parts that make up a right triangle:

In the picture above, the slanted side is called the hypotenuse. It always sits across from the corner with the 90-degree angle (also called a right angle), and it is always the longest of the three sides.

The two remaining sides are called the adjacent and the opposite, as seen from one particular corner of the triangle, the bottom-left corner in this case.

If you look at the triangle from the point of view of the other corner (top-right), then the adjacent and opposite sides switch places:

Alpha (α) and beta (β) are the names of the two other angles. You can call these angles anything you want (as long as it sounds Greek!), but usually alpha is the angle in the corner of interest and beta is the angle in the opposing corner. In other words, you label your opposite and adjacent sides with respect to alpha.

The cool thing is that if you only know two of these things, trigonometry allows you to find out all the others using the trigonometric functions sine, cosine and tangent. For example, if you know any angle and the length of one of the sides, then you can easily derive the lengths and angles of the other sides and corners:

You can see the sine, cosine, and tangent functions (often shortened to sin, cos and tan) are just ratios – again, if you know alpha and the length of one of the sides, then sin, cos and tan are ratios that relate two sides and the angle together.

Think of the sin, cos and tan functions as “black boxes” – you plug in numbers and get back results. They are standard library functions, available in almost every programming language, including Swift.

Note: The behavior of the trigonometric functions can be explained in terms of projecting circles onto straight lines, but you don’t need to know how to derive those functions in order to use them. If you’re curious, there are plenty of sites and videos to explain the details; check out the Math is Fun site for one example.

Know Angle and Length, Need Sides

Let’s consider an example. Suppose you know the alpha angle between the ships is 45 degrees, and the distance between the ships (the hypotenuse) is 10 points.

Triangles-in-games-measured

You can then plug these values into the formula:

sin(45) = opposite / 10

To solve this for the opposite side, you simply shift the formula around a bit:

opposite = sin(45) * 10

The sine of 45 degrees is 0.707 (rounded to three decimal places), and filling that into the formula gives you the result:

opposite = 0.707 * 10 = 7.07

There is a handy mnemonic for remembering what these functions do that you may remember from high school: SOH-CAH-TOA (where SOH stands for Sine is Opposite over Hypotenuse, and so on), or if you need something more catchy: Some Old Hippy / Caught Another Hippy / Tripping On Acid. (That hippy was probably a mathematician who did a little too much trig! :])
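Here's the same calculation in Swift, as a sanity check (note that the standard sin function works in radians, not degrees):

```swift
import Darwin

// alpha = 45 degrees, hypotenuse = 10, as in the example above.
let alpha = 45.0 * M_PI / 180.0      // convert degrees to radians
let opposite = sin(alpha) * 10.0     // 0.707... * 10, roughly 7.07
```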

Know 2 Sides, Need Angle

The formulae above are useful when you already know an angle, but that is not always the case – sometimes you know the lengths of two sides and are looking for an angle. To derive the angle, you can use the inverse trig functions, aka the arc functions (which have nothing to do with Automatic Reference Counting, before you ask!):

Inverse trig functions

  • angle = arcsin(opposite/hypotenuse)
  • angle = arccos(adjacent/hypotenuse)
  • angle = arctan(opposite/adjacent)

If sin(a) = b, then it is also true that arcsin(b) = a. Of these inverse functions, you will probably use the arc tangent (arctan) the most in practice, because it lets you find the angle when you know the opposite and adjacent sides (remember TOA – Tangent is Opposite over Adjacent!). Sometimes these functions are written as sin⁻¹, cos⁻¹ and tan⁻¹, so don't let that confuse you.
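As a quick sanity check, feeding the sides from the earlier 45-degree example into arctan recovers the angle (again, Swift's atan works in radians):

```swift
import Darwin

// opposite and adjacent are both about 7.07 in the 45-degree example,
// so their ratio is 1 and arctan gives back the angle.
let angleInRadians = atan(7.07 / 7.07)
let angleInDegrees = angleInRadians * 180.0 / M_PI   // roughly 45
```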

Is any of this sinking in or sounding familiar? Good, because you’re not done yet with the theory lesson – there is still more that you can calculate with triangles.

Know 2 Sides, Need Remaining Side

Sometimes you may know the length of two of the sides and you need to know the length of the third (as with the example at the beginning of this tutorial, where you wanted to find the distance between the two spaceships).

This is where the Pythagorean Theorem comes to the rescue. Even if you forgot everything else about math, this is probably the one formula you do remember:

a² + b² = c²

Or, put in terms of the triangle that you saw earlier:

opposite² + adjacent² = hypotenuse²

If you know any two sides, calculating the third is simply a matter of filling in the formula and taking the square root. This is a very common thing to do in games and you’ll be seeing it several times in this tutorial.

Note: Want to drill this formula into your head while having a great laugh at the same time? Search YouTube for “Pythagoras song” – it’s an inspiration for many!
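In code the theorem is just a multiply-and-square-root; the C math library even ships a hypot() function that does both steps in one call (the 3-4-5 triangle here is just a convenient test case):

```swift
import Foundation

let a = 3.0
let b = 4.0

// opposite² + adjacent² = hypotenuse², so take the square root for the side
let c = sqrt(a * a + b * b)   // 5.0

// hypot() computes the same value, and is safer for very large inputs
let c2 = hypot(a, b)          // also 5.0
```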

Have Angle, Need Other Angle

Lastly, the angles. If you know one of the non-right angles from the triangle, then figuring out the other ones is a piece of cake. In a triangle, all angles always add up to a total of 180 degrees. Because this is a right triangle, you already know that one of the angles is 90 degrees. That leaves:

alpha + beta + 90 = 180

Or simply:

alpha + beta = 90

The remaining two angles must add up to 90 degrees. So if you know alpha, you can calculate beta, and vice-versa.

And those are all the formulae you need to know! Which one to use in practice depends on the pieces that you already have. Usually you either have the angle and the length of at least one of the sides, or you don’t have the angle but you do have two of the sides.

Enough theory. Let’s put this stuff into practice.

To Skip, or Not to Skip?

In the next few sections, you will be setting up a basic Sprite Kit project with a spaceship that can move around the screen using the accelerometer. This won’t involve any trigonometry (yet), so if you already know Sprite Kit and feel like this guy:

"F that!" guy

Then feel free to skip ahead to the Begin the Trigonometry! section below – I have a starter project waiting for you there.

But if you’re the type who likes to code everything from scratch, keep reading! :]

Creating the Project

First make sure you have Xcode 6.1.1 or later, as Swift is a brand new language and the syntax is prone to change subtly between versions.

Next, start up Xcode, select File\New\Project…, choose iOS\Application\Game template. Name the project TrigBlaster. Make sure Language is set to “Swift”, Game Technology is set to “SpriteKit”, and Devices is set to “iPhone”. Then click Next:

Select SpriteKit Template

Build and run the template project. If all works OK, you should see the following:

SpriteKitHelloWorld

Download the resources for this tutorial. This file contains the images for the sprites and the sound effects. Unzip it, and drag the images individually into your Images.xcassets to set up the sprites. You can delete/replace the Spaceship sprite that comes with the default project template, as you won’t be using that.

Now add the sounds. You can simply drag the whole Sounds folder into Xcode, but make sure you select the “Create groups” option.

AddingSounds

FileImport

Great; the usual preliminaries are over with – now let’s get coding!

Steering with Accelerometers

Because this is a simple game, you will be doing most of your work inside a single file: GameScene.swift. Right now, this file contains a bunch of stuff that you don’t need. It also does not run with the correct orientation for the game, so let’s fix that first.

Switching to Landscape Orientation

Open the target settings by clicking your TrigBlaster project in the Project Navigator and selecting the TrigBlaster target. Then, in the Deployment Info section make sure General is checked at the top and under Device Orientation uncheck all but Landscape Right (as shown below):

LandscapeRight

If you build and run, the app will now launch in landscape orientation. The app is currently loading an empty scene from the GameScene.sks file in GameViewController.swift, and then in GameScene.swift, a “Hello World” label is being added programmatically.

Replace the contents of GameScene.swift with:

import SpriteKit
 
class GameScene: SKScene {
 
  override func didMoveToView(view: SKView) {
 
    // set scene size to match view
    size = view.bounds.size
 
    backgroundColor = SKColor(red: 94.0/255, green: 63.0/255, blue: 107.0/255, alpha: 1)
  }
 
  override func update(currentTime: CFTimeInterval) {
 
  }
}

Build and run, and you should see… nothing but purple:

Purple

Let’s make things a bit more exciting by adding a spaceship to the scene. Modify the GameScene class as follows:

class GameScene: SKScene {
 
  let playerSprite = SKSpriteNode(imageNamed: "Player")
 
  override func didMoveToView(view: SKView) {
 
    // set scene size to match view
    size = view.bounds.size
 
    backgroundColor = SKColor(red: 94.0/255, green: 63.0/255, blue: 107.0/255, alpha: 1)
 
    playerSprite.position = CGPoint(x: size.width - 50, y: 60)
    addChild(playerSprite)
  }
 
  ...
}

This is all pretty basic if you have worked with Sprite Kit before. The playerSprite property holds the spaceship sprite, which is positioned in the bottom-right corner of the screen. Remember that with Sprite Kit it is the bottom of the screen that has a Y-coordinate of 0, unlike in UIKit where y = 0 points to the top of the screen. You’ve set the Y-coordinate to 60 to position it just above the FPS (Frames Per Second) counter in the bottom-left corner.

Note: The FPS counter is useful for debugging purposes, but you can disable it in GameViewController.swift if it annoys you. You’ll probably want to do that before you submit your game to the App Store!

Build and run to try it out, and you should see the following:

PurpleShip

To move the spaceship, you’ll be using the iPhone’s built-in accelerometer. Unfortunately, the iOS Simulator can’t simulate the accelerometer, so from now on you will need to run the app on a real device to test it.

Note: If you are unsure how to install the app on a device, check out this extensive tutorial that explains how to obtain and install the certificates and provisioning profiles that allow Xcode to install on a physical iPhone or iPad. It’s not as intimidating as it looks, but you will need to sign up for the paid Apple developer program.

To move the spaceship with the accelerometer, you’ll need to tilt your device from side to side. This was the reason you de-selected all Device Orientation options except for Landscape Right in the Project Settings screen earlier, because it would be really annoying for the screen to flip when you’re in the middle of a heated battle!

Using the accelerometer is pretty straightforward thanks to the Core Motion framework. There are two ways to get accelerometer data: You can either register to have it delivered to your application at a specific frequency via a callback, or you can poll the values when you need them. Apple recommends not having data pushed to your application unless timing is very critical (like a measurement or navigation service) because it can drain the batteries more quickly.

Your game already has a logical place from which to poll the accelerometer data: the update() method that gets called by Sprite Kit once per frame. You will read the accelerometer values whenever this method is fired, and use them to move the spaceship.

First, add the following import to the top of GameScene.swift:

import CoreMotion

Now you’ll have Core Motion available to you and it’ll be linked into your app.

Next, add the following properties inside the class implementation:

var accelerometerX: UIAccelerationValue = 0
var accelerometerY: UIAccelerationValue = 0
 
let motionManager = CMMotionManager()

You’ll need these properties to keep track of the Core Motion manager and the accelerometer values. You only need to store the values for the x- and y-axes; the z-axis isn’t used by this game.

Next, add the following utility methods to the class:

func startMonitoringAcceleration() {
 
  if motionManager.accelerometerAvailable {
    motionManager.startAccelerometerUpdates()
    NSLog("accelerometer updates on...")
  }
}
 
func stopMonitoringAcceleration() {
 
  if motionManager.accelerometerAvailable && motionManager.accelerometerActive {
    motionManager.stopAccelerometerUpdates()
    NSLog("accelerometer updates off...")
  }
}

The start and stop methods check to make sure the accelerometer hardware is available on the device and, if so, tell it to start gathering data. The stop method will be called when you wish to turn off acceleration monitoring.

A good place to activate the accelerometers is inside didMoveToView(). Add the following line to it underneath the addChild(playerSprite) line:

startMonitoringAcceleration()

For stopping the accelerometers, a good place is in the class de-initializer. Add the following to the class:

deinit {
  stopMonitoringAcceleration()
}

Next, add the method that will be called to read the values and let your player change positions:

func updatePlayerAccelerationFromMotionManager() {
 
  if let acceleration = motionManager.accelerometerData?.acceleration {
 
    let FilterFactor = 0.75
 
    accelerometerX = acceleration.x * FilterFactor + accelerometerX * (1 - FilterFactor)
    accelerometerY = acceleration.y * FilterFactor + accelerometerY * (1 - FilterFactor)
  }
}

This bit of logic is necessary to filter, or smooth, the data that you get back from the accelerometer so that it appears less jittery. The motionManager.accelerometerData property may be nil if no data is yet available, so you use the ?. operator to access the acceleration property, and wrap the logic in if let ... to ensure it will be skipped if there is no acceleration data to work with yet.

Note: An accelerometer records the acceleration currently being applied to it. The iPhone is always under acceleration due to the pull of gravity (which is how iOS knows which way to orient the screen), but because the user is holding the iPhone in their hands (and hands are never completely steady) there are a lot of tiny fluctuations in this gravity value. You’re not so interested in these fluctuations as in the larger, deliberate changes that the user makes to the device orientation. By applying this simple low-pass filter, you retain the orientation information but filter out the less important fluctuations.
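Here's the same filter pulled out into a standalone sketch so you can watch it settle on a new reading over a few updates (the input values are made up):

```swift
import Foundation

let filterFactor = 0.75
var filtered = 0.0

// Feed the filter a sudden jump from 0 to 1, as if the user tilted the device
for raw in [1.0, 1.0, 1.0, 1.0] {
    filtered = raw * filterFactor + filtered * (1 - filterFactor)
}
// After four updates, filtered is already above 0.99 -- the filter converges
// quickly on a deliberate change while smoothing out one-frame spikes
```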

Now that you have a stable measurement of the device’s orientation, how can you use this to make the player’s spaceship move?

Motion in physics-based games is typically implemented like this:

  1. First, you set the acceleration, based on some form of user input (in this case the accelerometer values).
  2. Second, you add the new acceleration to the spaceship’s current velocity. This makes the object speed up or slow down, depending on the direction of the acceleration.
  3. Finally, you add the new velocity to the spaceship’s position to make it move.

You have a great mathematician to thank for the equations that control this motion: Sir Isaac Newton!

You’ll need to add some more properties to track the velocity and acceleration. There is no need to keep track of the player’s position because the SKSpriteNode already does that for you.

Note: Technically, Sprite Kit can keep track of velocity and acceleration as well, thanks to the SKPhysicsBody property. Sprite Kit’s physics can track forces on the sprite and update the acceleration, velocity and position automatically. But if you use Sprite Kit’s physics to do all the math, you won’t learn much about trigonometry! So, for this tutorial, you’re going to do all the math yourself.

Add these properties to the class next:

var playerAcceleration = CGVector(dx: 0, dy: 0)
var playerVelocity = CGVector(dx: 0, dy: 0)

It’s good to set some bounds on how fast the spaceship can travel or it would be pretty hard to maneuver. Unlimited acceleration would make the ship tricky to control (not to mention turning the poor pilot into jello!), so let’s set an upper limit.

Add the following lines directly below the import statements:

let MaxPlayerAcceleration: CGFloat = 400
let MaxPlayerSpeed: CGFloat = 200

This defines two constants: The maximum acceleration (400 points per second squared), and the maximum speed (200 points per second). You’ve used the common Swift convention of capitalising the first letter of your configuration constants to distinguish them from regular “let” variables.

Now add the following code to the bottom of the if let ... statement in updatePlayerAccelerationFromMotionManager:

playerAcceleration.dx = CGFloat(accelerometerY) * -MaxPlayerAcceleration
playerAcceleration.dy = CGFloat(accelerometerX) * MaxPlayerAcceleration

Accelerometer values are provided in the range -1 to +1, so to get the final acceleration, you simply multiply the accelerometer value by MaxPlayerAcceleration.

Note: You’re using the accelerometerY value for the x-direction and accelerometerX for the Y-direction. That’s as it should be. Remember that this game is in landscape, so the X-accelerometer runs from top to bottom in this orientation, and the Y-accelerometer from right to left.

You’re almost there. The last step is applying the playerAcceleration.dx and playerAcceleration.dy values to the velocity and position of the spaceship. You will do this from within the game’s update() method. This method is called once per frame (60 times per second), so it’s the natural place to perform all of the game logic.

Add the updatePlayer() method:

func updatePlayer(dt: CFTimeInterval) {
 
  // 1
  playerVelocity.dx = playerVelocity.dx + playerAcceleration.dx * CGFloat(dt)
  playerVelocity.dy = playerVelocity.dy + playerAcceleration.dy * CGFloat(dt)
 
  // 2
  playerVelocity.dx = max(-MaxPlayerSpeed, min(MaxPlayerSpeed, playerVelocity.dx))
  playerVelocity.dy = max(-MaxPlayerSpeed, min(MaxPlayerSpeed, playerVelocity.dy))
 
  // 3
  var newX = playerSprite.position.x + playerVelocity.dx * CGFloat(dt)
  var newY = playerSprite.position.y + playerVelocity.dy * CGFloat(dt)
 
  // 4
  newX = min(size.width, max(0, newX))
  newY = min(size.height, max(0, newY))
 
  playerSprite.position = CGPoint(x: newX, y: newY)
}

If you’ve programmed games before (or studied physics), then this should look familiar. Here’s how it works:

  1. This code adds the current acceleration to the velocity.

    The acceleration is expressed in points per second (actually, per second squared, but don’t worry about that). However, the update() method is executed a lot more often than once per second. To compensate for this difference, you multiply the acceleration by the elapsed or “delta” time, dt. Without this, the spaceship would move about sixty times faster than it should!

  2. This clamps the velocity so that it doesn’t go faster than MaxPlayerSpeed if it is positive or -MaxPlayerSpeed if it is negative.

  3. This adds the current velocity to the sprite’s position. Again, velocity is measured in points per second, so you need to multiply it by the delta time to make it work correctly.
  4. Clamp the new position to the sides of the screen. You don’t want the player’s spaceship to fly off-screen, never to be found again!

One more thing: you need to measure time as differences (deltas) in time. The Sprite Kit update() method gets called repeatedly with the current time, but you’ll need to track the delta time between calls to update() yourself so that the velocity calculations stay smooth.

To track the delta time, add another property:

var lastUpdateTime: CFTimeInterval = 0

Then replace the update() method stub with the actual implementation:

override func update(currentTime: CFTimeInterval) {
 
  // to compute velocities we need delta time to multiply by points per second
  // SpriteKit gives us the currentTime; delta is currentTime minus the last update time
  let deltaTime = min(1.0/30, currentTime - lastUpdateTime)
  lastUpdateTime = currentTime
 
  updatePlayerAccelerationFromMotionManager()
  updatePlayer(deltaTime)
}

That should do it.

You calculate deltaTime by subtracting the last recorded update time from the current time. Just to be safe, clamp deltaTime to a maximum of 1/30th of a second. That way, if the app’s frame rate should fluctuate or stall for some reason, the ship won’t get catapulted across the screen when the next update occurs.
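The clamp-to-a-maximum behavior is easy to check in isolation (the function name and the timings below are invented for the example):

```swift
import Foundation

// Cap the delta at 1/30 of a second so a stall can't produce a huge step
func clampedDelta(currentTime: Double, lastUpdateTime: Double) -> Double {
    return min(1.0 / 30, currentTime - lastUpdateTime)
}

// A normal ~1/60s frame passes through unchanged
let normalFrame = clampedDelta(currentTime: 10.0166, lastUpdateTime: 10.0)

// A 2.5-second stall gets capped at 1/30s instead
let afterStall = clampedDelta(currentTime: 12.5, lastUpdateTime: 10.0)
```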

The updatePlayerAccelerationFromMotionManager() method is called to calculate the player’s acceleration from the accelerometer values.

Finally, updatePlayer() is called to move the ship, making use of the delta time to compute the velocity.

Build and run the game on an actual device (not the simulator). You can now control the spaceship by tilting the device:

MovingShip

One last thing before you proceed: In GameViewController.swift, find the line:

skView.ignoresSiblingOrder = true

And change it to:

skView.ignoresSiblingOrder = false

This disables a small optimization in the way that sprites are rendered, but it means that sprites will be drawn in the order they are added. This will be useful later.

Begin the Trigonometry!

If you skipped ahead to this section, here is the starter project at this point. Build and run it on your device – you’ll see there’s a spaceship that you can move around with the accelerometer.

You haven’t used any actual trigonometry yet, so let’s put some into action!

It would be cool – and much less confusing to the player – to rotate the spaceship in the direction it is currently moving rather than having it always pointing upward.

To rotate the spaceship, you need to know the angle to rotate it to. But you don’t know what that is; you only have the velocity vector. So how can you get an angle from a vector?

Let’s think about what you do know. The player’s velocity consists of two components: a length in the X-direction and a length in the Y-direction:

VelocityComponents

If you rearrange these a little, you can see that they form a triangle:

VelocityTriangle

Here you know the lengths of the adjacent (playerVelocity.dx) and the opposite (playerVelocity.dy) sides.

So basically, you know 2 sides of a right triangle, and you want to find an angle (the Know 2 Sides, Need Angle case), so you need to use one of the inverse functions: arcsin, arccos or arctan.

The sides you know are the opposite and adjacent sides to the angle you need, so you’ll want to use the arctan function to find the angle to rotate the ship. Remember, that looks like the following:

angle = arctan(opposite / adjacent)

The Swift standard library includes an atan() function that computes the arc tangent, but it has a couple of limitations: y / x yields exactly the same value as -y / -x, which means that you’ll get the same angle output for opposite velocities. Worse than that, the angle inside the triangle isn’t exactly the one you want anyway – you want the angle relative to one particular axis, which may be 90, 180 or 270 degrees offset from the angle returned by atan().

You could write a four-way if-statement to work out the correct angle by taking into account the signs of the velocity components to determine which quadrant the angle is in, and then apply the correct offset. But it turns out there’s a much simpler way:

For this specific problem, instead of using atan(), it is simpler to use the function atan2(), which takes the y and x components as separate parameters and correctly determines the overall rotation angle.

angle = atan2(opposite, adjacent)

Add the following two lines to the bottom of updatePlayer:

let angle = atan2(playerVelocity.dy, playerVelocity.dx)
playerSprite.zRotation = angle

Notice that the Y-coordinate goes first. A common mistake is to write atan2(x, y), but that’s the wrong way around. Remember the first parameter is the opposite side, and in this case the Y coordinate lies opposite the angle you’re trying to measure.
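A quick comparison shows why atan2() is the right tool: plain atan() collapses opposite directions into the same angle, while atan2() keeps the quadrants apart (the velocities here are arbitrary):

```swift
import Foundation

// A velocity of (1, 1) points up-right; (-1, -1) points down-left
let upRight  = atan2(1.0, 1.0)     //  π/4, or 45 degrees
let downLeft = atan2(-1.0, -1.0)   // -3π/4, or -135 degrees

// Plain atan() only sees the ratio, so the two velocities look identical
let sameRatio = atan(1.0 / 1.0) == atan(-1.0 / -1.0)   // true
```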

Build and run the app to try it out:

ShipPointingWrongWay

Hmm, this doesn’t seem to be working quite right. The spaceship certainly rotates but it’s pointing in a different direction from where it’s flying!

Here’s what’s happening: the sprite image for the spaceship points straight up, which corresponds to the default rotation value of 0 degrees. But by mathematical convention, an angle of 0 degrees doesn’t point upward, but to the right, along the X-axis:

RotationDifferences

To fix this, subtract 90 degrees from the rotation angle so that it matches up with the sprite image:

playerSprite.zRotation = angle - 90

Try it out…

Nope! If anything, it’s even worse now! What’s missing?

Radians, Degrees and Points of Reference

Normal humans tend to think of angles as values between 0 and 360 (degrees). Mathematicians, however, usually measure angles in radians, which are expressed in terms of π (the Greek letter Pi, which sounds like “pie” but doesn’t taste as good).

One radian is the angle you get when you travel the distance of the radius along the arc of the circle. You can do that 2π times (roughly 6.28 times) before you end up back at the beginning of the circle.

Notice the yellow line (the radius) is the same length as the red curved line (the arc). That magic angle where the two are equal is one radian!

So while you may think of angles as values from 0 to 360, a mathematician sees values from 0 to 2π. Most computer math functions work in radians, because it’s a more useful unit for doing calculations. Sprite Kit uses radians for all its angular measurements as well. The atan2() function returns a value in radians, but you’ve tried to offset that angle by 90 degrees.

Since you will be working with both radians and degrees, it will be useful if you have a way to easily convert between them. The conversion is pretty simple: Since there are 2π radians or 360 degrees in a circle, π equates to 180 degrees, so to convert from radians to degrees you divide by π and multiply by 180. To convert from degrees to radians you divide by 180 and multiply by π.
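Here are those two conversions as a quick numeric spot check (the 90- and 45-degree angles are arbitrary):

```swift
import Foundation

// Degrees → radians: divide by 180, multiply by π
let rightAngle = 90.0 * M_PI / 180     // π/2, about 1.5708

// Radians → degrees: divide by π, multiply by 180
let degrees = (M_PI / 4) * 180 / M_PI  // 45.0
```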

The C math library (which is automatically made available to Swift) has a constant, M_PI, that represents the value of π as a double. Swift’s strict casting rules make it inconvenient to use this constant when most of the values you’re dealing with are CGFloat, so you can just define your own constant. In GameScene.swift add the following to the top-level of the file, above the class definition:

let Pi = CGFloat(M_PI)

Now define another two constants that will make it easy to convert between degrees and radians:

let DegreesToRadians = Pi / 180
let RadiansToDegrees = 180 / Pi

Finally, edit the rotation code in updatePlayer again, to include the DegreesToRadians multiplier:

playerSprite.zRotation = angle - 90 * DegreesToRadians

Build and run again and you’ll see that the spaceship is finally rotating correctly.

Bouncing Off the Walls

You have a spaceship that you can move using the accelerometers and you’re using trig to make sure it points in the direction it’s flying. That’s a good start.

Having the spaceship get stuck on the edges of the screen isn’t very satisfying though. You’re going to fix that by making it bounce off the screen borders instead!

First, delete these lines from updatePlayer():

// 4
newX = min(size.width, max(0, newX))
newY = min(size.height, max(0, newY))

And replace them with the following:

var collidedWithVerticalBorder = false
var collidedWithHorizontalBorder = false
 
if newX < 0 {
  newX = 0
  collidedWithVerticalBorder = true
} else if newX > size.width {
  newX = size.width
  collidedWithVerticalBorder = true
}
 
if newY < 0 {
  newY = 0
  collidedWithHorizontalBorder = true
} else if newY > size.height {
  newY = size.height
  collidedWithHorizontalBorder = true
}

This checks whether the spaceship hit any of the screen borders, and if so, sets a Bool variable to true. But what to do after such a collision takes place? To make the spaceship bounce off the border, you can simply reverse its velocity and acceleration.

Add the following lines to updatePlayer(), directly below the code you just added:

if collidedWithVerticalBorder {
  playerAcceleration.dx = -playerAcceleration.dx
  playerVelocity.dx = -playerVelocity.dx
  playerAcceleration.dy = playerAcceleration.dy
  playerVelocity.dy = playerVelocity.dy
}
 
if collidedWithHorizontalBorder {
  playerAcceleration.dx = playerAcceleration.dx
  playerVelocity.dx = playerVelocity.dx
  playerAcceleration.dy = -playerAcceleration.dy
  playerVelocity.dy = -playerVelocity.dy
}

If a collision is registered, you invert the acceleration and velocity values, causing the ship to bounce away again.

Build and run to try it out.

Hmm, the bouncing works, but it seems a bit energetic. The problem is that you wouldn’t expect a spaceship to bounce like a rubber ball – it should lose most of its energy in the collision, and bounce off with less velocity than it had beforehand.

Define another constant at the top of the file, right below the let MaxPlayerSpeed: CGFloat = 200 line:

let BorderCollisionDamping: CGFloat = 0.4

Now, replace the code you just added in updatePlayer with this:

if collidedWithVerticalBorder {
  playerAcceleration.dx = -playerAcceleration.dx * BorderCollisionDamping
  playerVelocity.dx = -playerVelocity.dx * BorderCollisionDamping
  playerAcceleration.dy = playerAcceleration.dy * BorderCollisionDamping
  playerVelocity.dy = playerVelocity.dy * BorderCollisionDamping
}
 
if collidedWithHorizontalBorder {
  playerAcceleration.dx = playerAcceleration.dx * BorderCollisionDamping
  playerVelocity.dx = playerVelocity.dx * BorderCollisionDamping
  playerAcceleration.dy = -playerAcceleration.dy * BorderCollisionDamping
  playerVelocity.dy = -playerVelocity.dy * BorderCollisionDamping
}

You’re now multiplying the acceleration and velocity by a damping value, BorderCollisionDamping. This allows you to control how much energy is lost in the collision. In this case, you make the spaceship retain only 40% of its speed after bumping into the screen edges.

For fun, play with the value of BorderCollisionDamping to see the effect of different values for this constant. If you make it larger than 1.0, the spaceship actually gains energy from the collision!

You may have noticed a slight problem: Keep the spaceship aimed at the bottom of the screen so that it continues smashing into the border over and over, and you’ll see that it starts to stutter between pointing up and pointing down.

Using the arc tangent to find the angle between a pair of X and Y components works quite well, but only if those X and Y values are fairly large. In this case, the damping factor has reduced the speed to almost zero. When you apply atan2() to very small values, even a tiny change in these values can result in a big change in the resulting angle.

One way to fix this is to not change the angle when the speed is very slow. That sounds like an excellent reason to give a call to your old friend, Pythagoras.

pythagoras

Right now you don’t actually store the ship’s speed. Instead, you store the velocity, which is the vector equivalent (see here for an explanation of the difference between speed and velocity), with one component in the X-direction and one in the Y-direction. But in order to draw any conclusions about the ship’s speed (such as whether it’s too slow to be worth rotating the ship) you need to combine these X and Y speed components into a single scalar value.

Pythagoras

Here you are in the Know 2 Sides, Need Remaining Side case, discussed earlier.

As you can see, the true speed of the spaceship – how many points it moves across the screen per second – is the hypotenuse of the triangle that is formed by the speed in the X-direction and the speed in the Y-direction.

Put in terms of the Pythagorean formula:

true speed = √(playerVelocity.dx² + playerVelocity.dy²)

Remove this block of code from updatePlayer():

let angle = atan2(playerVelocity.dy, playerVelocity.dx)
playerSprite.zRotation = angle - 90 * DegreesToRadians

And replace it with this:

let RotationThreshold: CGFloat = 40
 
let speed = sqrt(playerVelocity.dx * playerVelocity.dx + playerVelocity.dy * playerVelocity.dy)
if speed > RotationThreshold {
  let angle = atan2(playerVelocity.dy, playerVelocity.dx)
  playerSprite.zRotation = angle - 90 * DegreesToRadians
}

Build and run. You’ll see the spaceship rotation seems a lot more stable at the edges of the screen. If you’re wondering where the value 40 came from, the answer is: experimentation. Putting NSLog() statements into the code to look at the speeds at which the craft typically hit the borders helped in tweaking this value until it felt right :]

Blending Angles for Smooth Rotation

Of course, fixing one thing always breaks something else. Try slowing down the spaceship until it has stopped, then flip the device so the spaceship has to turn around and fly the other way.

Previously, that would happen with a nice animation where you actually saw the ship turning. But because you just added some code that prevents the ship from changing its angle at low speeds, the turn is now very abrupt. It’s only a small detail, but it’s the details that make great apps and games.

The fix is to not switch to the new angle immediately, but to gradually blend it with the previous angle over a series of successive frames. This re-introduces the turning animation and still prevents the ship from rotating when it is not moving fast enough.

This “blending” sounds fancy, but it’s actually quite easy to implement. It will require you to keep track of the spaceship’s angle between updates, however, so add a new property for it in the implementation of the GameScene class:

var playerAngle: CGFloat = 0

Update the rotation code in updatePlayer() to this:

let RotationThreshold: CGFloat = 40
let RotationBlendFactor: CGFloat = 0.2
 
let speed = sqrt(playerVelocity.dx * playerVelocity.dx + playerVelocity.dy * playerVelocity.dy)
if speed > RotationThreshold {
  let angle = atan2(playerVelocity.dy, playerVelocity.dx)
  playerAngle = angle * RotationBlendFactor + playerAngle * (1 - RotationBlendFactor)
  playerSprite.zRotation = playerAngle - 90 * DegreesToRadians
}

The playerAngle variable combines the new angle and its own previous value by multiplying them with a blend factor. In human-speak, this means the new angle only contributes 20% towards the actual rotation that you set on the spaceship. Over time, more and more of the new angle gets added so that eventually the spaceship points in the correct direction.
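You can watch the blend converge by running the same formula on its own, starting from 0 and blending toward a target angle of 1 radian (all the numbers here are invented):

```swift
import Foundation

let blendFactor = 0.2
let targetAngle = 1.0
var blendedAngle = 0.0

// Each frame, the ship closes 20% of the remaining gap to the target angle
for _ in 0..<20 {
    blendedAngle = targetAngle * blendFactor + blendedAngle * (1 - blendFactor)
}
// After 20 frames (a third of a second at 60 fps) the gap is under 2%
```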

Build and run to verify that there is no longer an abrupt change from one rotation angle to another.

Now try flying in a circle a couple of times, both clockwise and counterclockwise. You’ll notice that at some point in the turn, the spaceship suddenly spins 360 degrees in the opposite direction. It always happens at the same point in the circle. What’s going on?

The atan2() function returns an angle between +π and –π (between +180 and -180 degrees). That means that if the current angle is very close to +π and then turns a little further, it’s going to wrap around to –π (or vice-versa).

That’s actually equivalent to the same position on the circle (just like -180 and +180 degrees are the same point), but your blending algorithm isn’t smart enough to realise that – it thinks the angle has jumped a whole 360 degrees (aka 2π radians) in one step, and it needs to spin the ship 360 degrees in the opposite direction to catch back up.

To fix it, you need to recognize when the angle crosses that threshold, and adjust playerAngle accordingly. Add a new property to the GameScene class:

var previousAngle: CGFloat = 0

And change the rotation code one more time to this:

let speed = sqrt(playerVelocity.dx * playerVelocity.dx + playerVelocity.dy * playerVelocity.dy)
if speed > RotationThreshold {
  let angle = atan2(playerVelocity.dy, playerVelocity.dx)
 
  // did angle flip from +π to -π, or -π to +π?
  if angle - previousAngle > Pi {
    playerAngle += 2 * Pi
  } else if previousAngle - angle > Pi {
    playerAngle -= 2 * Pi
  }
 
  previousAngle = angle
  playerAngle = angle * RotationBlendFactor + playerAngle * (1 - RotationBlendFactor)
  playerSprite.zRotation = playerAngle - 90 * DegreesToRadians
}

Now you’re checking the difference between the current angle and the previous angle: a jump of more than π (180 degrees) means the angle wrapped around from +π to –π (or vice versa), so you compensate by adding or subtracting a full circle (2π).

Build and run. That’ll fix things right up and you should have no more problems turning your spacecraft!
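If you strip away the Sprite Kit types, the wrap-aware blend above boils down to a small pure function. Here’s a standalone sketch using plain Doubles (the function name and parameters are illustrative, not part of the tutorial project):

```swift
import Foundation

// A standalone sketch of the wrap-aware blending above, written with
// plain Doubles so it runs outside of Sprite Kit.
func blendedAngle(newAngle: Double, previousAngle: Double,
                  playerAngle: Double, blendFactor: Double = 0.2) -> Double {
  var adjusted = playerAngle
  // Did the angle flip from +π to -π, or -π to +π?
  if newAngle - previousAngle > Double.pi {
    adjusted += 2 * Double.pi
  } else if previousAngle - newAngle > Double.pi {
    adjusted -= 2 * Double.pi
  }
  // Blend: the new angle contributes 20% towards the result.
  return newAngle * blendFactor + adjusted * (1 - blendFactor)
}
```

Feeding it an angle near –π right after one near +π shifts the blended angle by a full circle first, so the ship turns the short way around instead of spinning back through 360 degrees.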

Using Trig to Find Your Target

This is a great start – you have a spaceship moving along pretty smoothly! But so far the little spaceship’s life is too easy and carefree. Let’s spice things up by adding an enemy: a big cannon!

Add two new properties to the GameScene class:

let cannonSprite = SKSpriteNode(imageNamed: "Cannon")
let turretSprite = SKSpriteNode(imageNamed: "Turret")

You’ll set these sprites up in didMoveToView(). Place this code before the setup for playerSprite, so that the spaceship always gets drawn after (and therefore in front of) the cannon:

cannonSprite.position = CGPoint(x: size.width/2, y: size.height/2)
addChild(cannonSprite)
 
turretSprite.position = CGPoint(x: size.width/2, y: size.height/2)
addChild(turretSprite)

Note: Remember that change you made to set skView.ignoresSiblingOrder = false earlier? That ensures that sprites are drawn in the order they are added to their parent. There are other ways to control sprite drawing order – such as using the zPosition – but this is the simplest.

The cannon consists of two sprites: the fixed base, and the turret that can rotate to take aim at the player. Build and run, and you should see a brand-new cannon sitting smack in the middle of the screen.

Cannon

Now to give the cannon a target to snipe at!

You want the cannon’s turret to point at the player at all times. To get this to work, you’ll need to figure out the angle between the turret and the player.

Figuring this out will be very similar to how you calculated how to rotate the spaceship to face the direction it’s moving in. The difference is that this time, the triangle won’t be derived from the velocity of the spaceship; instead, it will be drawn between the centers of the two sprites:

Again, you can use atan2() to calculate this angle. Add the following method:

func updateTurret(dt: CFTimeInterval) {
 
  let deltaX = playerSprite.position.x - turretSprite.position.x
  let deltaY = playerSprite.position.y - turretSprite.position.y
  let angle = atan2(deltaY, deltaX)
 
  turretSprite.zRotation = angle - 90 * DegreesToRadians
}

The deltaX and deltaY variables measure the distance between the player sprite and the turret sprite. You plug these values into atan2() to get the relative angle between them.

As before, you need to convert this angle to include the offset from the X-axis (90 degrees) so the sprite is oriented correctly. Remember that atan2() always gives you the angle between the hypotenuse and the 0-degree line; it’s not the angle inside the triangle.

Finally, add a call to this new method. Find update() and add the following code to the end of that method:

updateTurret(deltaTime)

Build and run. The turret will now always point toward the spaceship. See how easy that was? That’s the power of trig for you!

TurretTrackingPlayer

Challenge: It’s unlikely that a real cannon would be able to track the ship instantaneously – that would require predicting exactly where the target was going. Instead, it would always be playing catch-up, trailing the position of the ship slightly.

You can accomplish this by “blending” the old angle with the new one, just like you did earlier with the spaceship’s rotation angle. The smaller the blend factor, the more time the turret needs to catch up with the spaceship. See if you can implement this on your own.

Adding Health Bars

In part 2, you’ll add code to let the player fire missiles at the cannon, and the cannon will be able to inflict damage on the player. To show the number of hit points each object has remaining, you’ll need to add some health bar sprites to the scene. Let’s do that now.

Add the following new constants to the top of the GameScene.swift file:

let MaxHealth = 100
let HealthBarWidth: CGFloat = 40
let HealthBarHeight: CGFloat = 4

Also, add these new properties to the GameScene class:

let playerHealthBar = SKSpriteNode()
let cannonHealthBar = SKSpriteNode()
 
var playerHP = MaxHealth
var cannonHP = MaxHealth

Now, insert the following code into didMoveToView(), just before startMonitoringAcceleration():

addChild(playerHealthBar)
 
addChild(cannonHealthBar)
 
cannonHealthBar.position = CGPoint(
  x: cannonSprite.position.x,
  y: cannonSprite.position.y - cannonSprite.size.height/2 - 10
)

The playerHealthBar and cannonHealthBar objects are SKSpriteNodes, but you haven’t specified an image to display for them. Instead, you will be drawing the health bar images dynamically using Core Graphics.

Note that you placed the cannonHealthBar sprite slightly below the cannon, but didn’t assign a position to the playerHealthBar yet. That’s because the cannon never moves, so you can simply set the position of its health bar once and forget about it.

Whenever the spaceship moves, though, you’ll have to adjust the position of the playerHealthBar as well. That happens in updatePlayer(). Add these lines to the bottom of that method:

playerHealthBar.position = CGPoint(
  x: playerSprite.position.x,
  y: playerSprite.position.y - playerSprite.size.height/2 - 15
)

Now all that’s left is to draw the bars themselves. Add this new method to the class:

func updateHealthBar(node: SKSpriteNode, withHealthPoints hp: Int) {
 
  let barSize = CGSize(width: HealthBarWidth, height: HealthBarHeight)
 
  let fillColor = UIColor(red: 113.0/255, green: 202.0/255, blue: 53.0/255, alpha:1)
  let borderColor = UIColor(red: 35.0/255, green: 28.0/255, blue: 40.0/255, alpha:1)
 
  // create drawing context
  UIGraphicsBeginImageContextWithOptions(barSize, false, 0)
  let context = UIGraphicsGetCurrentContext()
 
  // draw the outline for the health bar
  borderColor.setStroke()
  let borderRect = CGRect(origin: CGPointZero, size: barSize)
  CGContextStrokeRectWithWidth(context, borderRect, 1)
 
  // draw the health bar with a colored rectangle
  fillColor.setFill()
  let barWidth = (barSize.width - 1) * CGFloat(hp) / CGFloat(MaxHealth)
  let barRect = CGRect(x: 0.5, y: 0.5, width: barWidth, height: barSize.height - 1)
  CGContextFillRect(context, barRect)
 
  // extract image
  let spriteImage = UIGraphicsGetImageFromCurrentImageContext()
  UIGraphicsEndImageContext()
 
  // set sprite texture and size
  node.texture = SKTexture(image: spriteImage)
  node.size = barSize
}

This code draws a single health bar. First it sets up the fill and border colors, then it creates a drawing context, and draws two rectangles: the border, which always has the same size, and the bar itself, which varies in width depending on the number of hit points. The method then generates a UIImage from the drawing context, and assigns it as the texture for the sprite.

You need to call this method twice, once for the player and once for the cannon. Because redrawing the health bar is relatively expensive (Core Graphics drawing isn’t hardware accelerated), you don’t want to do it every frame. Instead, you’ll call this code only when the player’s or cannon’s health changes. For now, you’ll call it just once to set the initial appearance for the bars.

Add the following code to the end of didMoveToView():

updateHealthBar(playerHealthBar, withHealthPoints: playerHP)
updateHealthBar(cannonHealthBar, withHealthPoints: cannonHP)

Build and run. Now, both the player and the cannon have health bars:

HealthBars

Using Trig for Collision Detection

Right now, the spaceship can fly directly through the cannon without consequence. It would be more challenging (and realistic) if it suffered damage when colliding with the cannon. This is where you enter the sphere of collision detection (sorry about the pun! :])

At this point, a lot of game devs would think, “I need a physics engine!” and while it’s certainly true that you can use Sprite Kit’s physics for this, it’s not that hard to do collision detection yourself, especially if you model the sprites using simple circles.

Detecting whether two circles intersect is a piece of cake: all you have to do is calculate the distance between them (*cough* Pythagoras) and see if it is smaller than the sum of the radii (or “radiuses” if you prefer) of both circles.
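Stripped of any Sprite Kit types, that test boils down to just a few lines. Here’s a standalone sketch using plain Doubles (the names are illustrative, not from the tutorial project):

```swift
import Foundation

// A minimal sketch of the circle-vs-circle test described above.
func circlesIntersect(x1: Double, y1: Double, radius1: Double,
                      x2: Double, y2: Double, radius2: Double) -> Bool {
  let deltaX = x2 - x1
  let deltaY = y2 - y1
  // Pythagoras: the distance between the centers is the hypotenuse.
  let distance = sqrt(deltaX * deltaX + deltaY * deltaY)
  // The circles overlap when their centers are closer together
  // than the sum of the two radii.
  return distance <= radius1 + radius2
}
```

For example, two circles with radii 20 and 10 whose centers are 25 points apart do intersect, because 25 is less than 30.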

Add two new constants to the top of GameScene.swift:

let CannonCollisionRadius: CGFloat = 20
let PlayerCollisionRadius: CGFloat = 10

These are the sizes of the collision circles around the cannon and the player. Looking at the sprite, you’ll see that the actual radius of the cannon image in pixels is slightly larger than the constant you’ve specified (around 25 points), but it’s nice to have a bit of wiggle room; you don’t want your games to be too unforgiving, or players won’t have fun.

The fact that the spaceship isn’t circular at all shouldn’t deter you. A circle is often a good enough approximation for the shape of an arbitrary sprite, and it has the big advantage that it makes it much simpler to do the trig calculations. In this case, the body of the ship is roughly 20 points in diameter (remember, the diameter is twice the radius).

Add a new method to the class to do the collision detection:

func checkShipCannonCollision() {
 
  let deltaX = playerSprite.position.x - turretSprite.position.x
  let deltaY = playerSprite.position.y - turretSprite.position.y
 
  let distance = sqrt(deltaX * deltaX + deltaY * deltaY)
  if distance <= CannonCollisionRadius + PlayerCollisionRadius {
    runAction(collisionSound)
  }
}

You’ve seen how this works before: first you calculate the distance between the X-positions of the two sprites, then the Y-positions. Treating these two values as the sides of a right triangle, you can then calculate the hypotenuse, which is the true distance between these sprites.

If that distance is smaller than the sum of the collision radii, you play a sound effect. You’ll see an error on that line for now, because you haven’t added the sound effect code yet – it’s coming soon, so just be patient!

Add a call to this new method at the end of update():

checkShipCannonCollision()

Then, add this property to the top of the GameScene class:

let collisionSound = SKAction.playSoundFileNamed("Collision.wav", waitForCompletion: false)

Time to build and run again. Give the collision logic a whirl by flying the spaceship into the cannon.

Overlap

Notice that the sound effect plays endlessly as soon as a collision begins. That’s because, while the spaceship flies over the cannon, the game registers repeated collisions, one after another. There isn’t just one collision, there are 60 per second, and it plays the sound effect for every one of them!

Collision detection is only the first half of the problem. The second half is collision response. Not only do you want audio feedback from the collision, but you also want a physical response – the spaceship should bounce off the cannon.

Add this constant to the top of the GameScene.swift file:

let CollisionDamping: CGFloat = 0.8

Then add these lines inside the if statement in checkShipCannonCollision():

playerAcceleration.dx = -playerAcceleration.dx * CollisionDamping
playerAcceleration.dy = -playerAcceleration.dy * CollisionDamping
playerVelocity.dx = -playerVelocity.dx * CollisionDamping
playerVelocity.dy = -playerVelocity.dy * CollisionDamping

This is very similar to what you did to make the spaceship bounce off the screen borders. Build and run to see how it works.

It looks pretty good if the spaceship is going fast when it hits the cannon. But if it’s moving too slowly, then even after reversing the speed, the ship sometimes stays within the collision radius and never makes its way out of it. Clearly, this solution has some problems.

Instead of just bouncing the ship off the cannon by reversing its velocity, you need to physically push the ship away from the cannon by adjusting its position so that the radii no longer overlap.

To do this, you’ll need to calculate the vector between the cannon and the spaceship, which, fortunately, you already calculated earlier in order to measure the distance between them. So how do you use that distance vector to move the ship?

The vector formed by deltaX and deltaY already points in the right direction, but it’s the wrong length. The length you need is the difference between the sum of the two collision radii and the vector’s current length – that way, when you add it to the ship’s current position, the ship will no longer overlap the cannon.

The current length of the vector is distance, but the length that you need it to be is:

CannonCollisionRadius + PlayerCollisionRadius – distance

So how can you change the length of a vector?

The solution is to use a technique called “normalization”. You normalize a vector by dividing its X and Y components by its current length (calculated using Pythagoras). The resulting “normal” vector has a length of one.
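As a quick standalone sketch (illustrative name, plain Doubles, and assuming a non-zero input vector), normalization looks like this:

```swift
import Foundation

// Divide each component by the vector's length so the result
// has a length of exactly one.
func normalized(dx: Double, dy: Double) -> (dx: Double, dy: Double) {
  let length = sqrt(dx * dx + dy * dy)
  return (dx: dx / length, dy: dy / length)
}
```

Dividing (3, 4) by its length of 5 gives (0.6, 0.8), which has a length of exactly one.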

Then, you just multiply the X and Y by the desired length to get the offset for the spaceship. Add the following code immediately underneath the previous lines you added:

let offsetDistance = CannonCollisionRadius + PlayerCollisionRadius - distance
let offsetX = deltaX / distance * offsetDistance
let offsetY = deltaY / distance * offsetDistance
playerSprite.position = CGPoint(
  x: playerSprite.position.x + offsetX,
  y: playerSprite.position.y + offsetY
)

Build and run, and you’ll see the spaceship now bounces properly off the cannon.

To round off the collision logic, you’ll subtract some hit points from the spaceship and the cannon, and update the health bars. Add the following code inside the if statement:

playerHP = max(0, playerHP - 20)
cannonHP = max(0, cannonHP - 5)
 
updateHealthBar(playerHealthBar, withHealthPoints: playerHP)
updateHealthBar(cannonHealthBar, withHealthPoints: cannonHP)

Build and run again. The ship and cannon now lose a few hit points each time they collide.

Damage

Adding Some Spin

For a nice effect, you can add some spin to the spaceship after a collision. This is additional rotation that doesn’t influence the flight direction; it just makes the effect of the collision more profound (and the pilot more dizzy). Add a new constant to the top of GameScene.swift:

let PlayerCollisionSpin: CGFloat = 180

This sets the amount of spin to half a circle per second, which I think looks pretty good. Now add a new property to the GameScene class:

var playerSpin: CGFloat = 0

In checkShipCannonCollision(), add the following line inside the if statement:

playerSpin = PlayerCollisionSpin

Finally, add the following code to updatePlayer(), immediately before the line playerSprite.zRotation = playerAngle - 90 * DegreesToRadians:

if playerSpin > 0 {
 
  playerAngle += playerSpin * DegreesToRadians
  previousAngle = playerAngle
  playerSpin -= PlayerCollisionSpin * CGFloat(dt)
  if playerSpin < 0 {
    playerSpin = 0
  }
}

The playerSpin effectively just overrides the display angle of the ship for the duration of the spin, without affecting the velocity. The amount of spin quickly decreases over time, so that the ship comes out of the spin after one second. While spinning, you update previousAngle to match the spin angle, so that the ship doesn’t suddenly snap to a new angle after coming out of the spin.

Build and run and set that ship spinning!

Where to Go from Here?

Here is the full example project from the tutorial up to this point.

Triangles are everywhere! You’ve seen how you can use this fact to breathe life into your sprites with the various trigonometric functions to handle movement, rotation and even collision detection.

You have to admit, it wasn’t that hard to follow along, was it? Math doesn’t have to be boring if you can apply it to fun projects, such as making games!

But there’s more to come: in Part 2 of this Trigonometry for Game Programming series, you’ll add missiles to the game, learn more about sine and cosine, and see some other useful ways to put the power of trig to work in your games.

Credits: The graphics for this game are based on a free sprite set by Kenney Vleugels. The sound effects are based on samples from freesound.org.

Trigonometry for Games – Sprite Kit and Swift Tutorial: Part 1/2 is a post from: Ray Wenderlich

The post Trigonometry for Games – Sprite Kit and Swift Tutorial: Part 1/2 appeared first on Ray Wenderlich.

Video Tutorial: Adaptive Layout Part 5: Trait Collections

What’s New in Unity 5


Start your gaming engines! Unity 5 is here!

The 2015 Game Developer Conference opened with a bang.

Epic Games came out swinging with the announcement that the fourth iteration of the Unreal Engine will be free outside of a small 5% cut of gross revenue.

Not to be outdone, Unity Technologies quickly took command of the stage by announcing the release of Unity 5, noting that the free version of Unity would have access to all the features of the professional version.

Then, to make crazy week even crazier, Valve officially announced their latest game engine called Source 2 which will also be free for all developers although the details of the program are still under wraps.

At this point, it’s not hard to feel like an audience member at an Oprah taping. Needless to say, it’s an exciting time to be making games!

We here at raywenderlich.com have produced a variety of Unity tutorials over the years, so naturally we’re all very excited about the latest Unity updates. In this article, I’ll provide an overview of some of the changes Unity 5 brings to the table, covering the following major features:

  • Physical Shaders
  • Real-time Global Illumination
  • Reflection Probes
  • Audio Mixer
  • Physics Engine
  • Animator
  • WebGL
  • Metal & IL2CPP
  • 64 Bit Editor
  • Cloud Builds
  • Licensing Changes

There’s actually a whole lot more, but we didn’t want you to take a day off work to read this article! :]

Physical Shaders

Unity 5’s Standard Shader

With the build-up to Unity 5’s release, there’s been a lot of talk about Unity’s Physical Based Shading system. If you read the documentation, you’ll see it described as a “user friendly way of achieving a consistent, plausible look under different lighting conditions”.

Practically, this means you’ll be using one shader to do 95% of the heavy lifting.

That’s right: you’ll tweak this one shader to your heart’s content to replicate a vast array of materials such as wood, metal, hair, skin or even stone. The shader can take several textures as input, and any slots you don’t use are optimized out of your project, so there’s no performance hit. According to Unity Technologies, the idea of physical based shading isn’t to produce “realism”, but rather to model how to achieve a consistent look and feel in a variety of lighting situations – basically, how things would really look under different lighting conditions.

For those shader writers out there gripping your chair in white knuckled panic, you can still write your own shaders in Unity and if you need to use any of the Unity 4 shaders, they are available to use as well. They are categorized under Legacy Shaders which tells us they won’t be around for much longer.

Unity Technologies hopes that you won’t have to write any new shaders. In fact, in one article, the physical shader was referred to as the “one shader to rule them all”. That said, for the indie developer this is a great tool for getting awesome-looking visuals without having to learn any of the shading languages.

Global Illumination

While Physical Shaders are enough to warrant an article in their own right, one of the biggest changes is Global Illumination. The idea behind Global Illumination is to calculate lighting effects based not just on a global light source, but on reflective surfaces as well. This tends to be very expensive, but Unity gets around it with a lot of pre-calculations.

Unity has managed to make this scale from mobile to desktop, giving you the ability to create easy day/night cycles in your games, as well as dynamic realtime lighting to enhance certain moods. One big practical benefit is that this new system does away with the Beast Lightmapping from Unity 4.

Mind you, you can still bake your lightmaps, but it’s now all done for you in the background, so there’s no need to click a “bake” button anymore. This is what Unity calls “Iterative Light Baking”: once you finish adjusting a light, Unity automatically bakes the results for you. If you change a different light, it simply recalculates and re-bakes without you having to worry about it.

Reflection Probes

One nice feature that works hand-in-hand with Global Illumination and Physical Based Shaders is reflection probes. These give you the ability to create reflective surfaces such as glossy tables or mirror-like finishes. They reflect any objects within their radius, including lights, models and so forth.

According to Unity’s documentation, “a probe acts much like a camera that captures a spherical view of its surroundings in all directions.” The image is stored as a cubemap, which you can apply to reflective materials. These probes can be realtime, but you can also bake them if you’re concerned about performance.

Audio Mixer

Mixing your audio has never been easier.

When dealing with a lot of audio sources, balancing the various levels can be a somewhat frustrating experience. Unity 5 now offers an audio mixer asset. You pipe all your various audio sources into it, and from there you can adjust each level individually. You can make sounds “children” of other sounds, so that when you decrease the level of one audio source, its child sounds decrease too.

Typically, when working in play mode, any changes you make in a scene are dropped once you stop the game. The mixer, however, keeps your changes, allowing you to set your levels while the game is actually being played.

It will probably take some of us time to get used to the idea of making lasting changes in play mode, and it may even cause a little confusion with regard to other GameObjects and components, but it’s a great feature to have.

Physics Engine

Unity’s physics engine received a big upgrade in this latest release. The underlying PhysX engine was leveled up to version 3.3 which according to the documentation is an entire rewrite.

One big benefit is that Continuous Collision Detection (CCD) has been improved. CCD is useful when working with fast-moving bodies. In previous versions of Unity, if a body moved too fast, it would pass right through another, even with colliders enabled. According to Unity Technologies, this issue should be a thing of the past, but we’ll have to see how it holds up in practice.

The wheel collider component also received a makeover. The wheel component can now be used to create realistic suspension and tire friction. Behind the scenes, Unity is using PhysX3’s vehicle SDK. You can use this to create your own vehicles as shown here.

Unity has also updated its cloth simulation. Interactive Cloth and Skinned Cloth are gone, replaced with just Cloth. The idea was to give developers flexibility while also keeping it inexpensive. By default, cloth does not interact with the world; you have to add colliders, and even then cloth will not apply force to the world – it only receives force. That way, it can be dynamic while remaining cheap to use in your game.

Animator

If you’ve done any work with Unity’s animator, you’ll have quickly run into situations where you have connections going everywhere. With Unity 5, we’ve been given state machine transitions, so now we can hook state machines up to each other. Each individual state machine has its own entry and exit nodes, allowing you to customize the animations for a particular behavior.

For example, you could create an expansive set of animations for your idle behavior, but have it all self contained in the individual state machine. Once a user stops moving their avatar, you could then transition them into the Idle machine, and from the entry point, randomly select a particular Idle animation.

How would you write such code?

Easy! By using state behaviors! State behaviors, simply put, are callbacks for your state machines. We now have OnStateEnter, OnStateUpdate, OnStateExit, OnStateMove and OnStateIK, which let you run your own code on particular events in a state machine.

What’s even cooler is that Unity now has something known as the Asset Creation API. This means you can create Controllers, StateMachines, States, BlendTrees and Layers all from your own code.

WebGL

While the Unity player is pretty awesome, it requires the user to download software to their computer and then enable the plugin in their browser. For those knowledgeable about Unity, this is not a problem at all, but for those who know nothing of Unity, it may be asking too much. With the amount of malware, adware and spyware out there, the Unity player gives people pause. From a security standpoint, that’s a good thing, but from a game developer’s perspective, it’s just another hurdle for the user.

With Unity 5, you will soon have the option to publish your content to WebGL, which means your game will be playable in any browser that supports WebGL. No plugins, no players – all the user has to do is navigate to a web page and they can start playing right away.

Unfortunately, at the time of this writing, the WebGL player still isn’t ready for primetime; Unity has stated the current Unity player will continue to ship throughout the Unity 5.x lifetime. When the WebGL player is first released, it will only support Chrome and Firefox, and you lose a few features as well, such as webcam and microphone access.

That said, it will probably be a while before it reaches maturity, but when it does, the WebGL player will be a welcome relief for users uneasy about downloading third-party software – and that’s always a good thing.

Metal & IL2CPP

In 2014, Apple released a new API called Metal that allows for better graphics performance on its devices. Unity Technologies was quick to respond that they were planning to take advantage of the API. Many months later, you can now leverage the performance benefits of Metal in your own game. When you build for an iOS device, the graphics API is set to automatic, meaning it will use Metal if it’s available. You can also select Metal manually, in which case your game will only run on A7-based devices. As the years progress, Metal will ultimately be available on all iOS devices, which means faster games for everyone.

Unity 5 also introduces a technology called IL2CPP, which simply translates to “Intermediate Language to C Plus Plus”. In short, Unity takes your game’s compiled script assemblies and rewrites them in C++, which means you get native performance on certain platforms. Being still a very new technology, it’s currently only available on iOS and WebGL; Unity plans to extend it to all platforms in the future. Judging from Unity’s GDC keynote, the performance benefits are impressive. I’m sure we’ll be hearing more about this technology as it matures.

64 Bit Editor

One of the strongest criticisms leveled against the Unity development environment was that it was 32-bit. Jeff LaMarche wrote a blog post not too long ago about why he was switching from Unity to Unreal, and one of the primary reasons was the 32-bit editor. Jeff was working on a game at his company, MartianCraft, and with all the assets involved, he quickly ran into that 32-bit ceiling. Once you hit that ceiling, the app becomes unstable.

Thankfully, Unity 5 is now a 64-bit application, so unless you’re truly making an unbelievably massive game, you shouldn’t run into any memory issues.

Cloud Builds

Unity also unveiled their product known as Cloud Builds. While it is not exactly a Unity 5 specific feature, it still bears mentioning.

In short, you keep your project in a source control repository, whether that be Git, Subversion or Perforce. Once you commit a change, Unity pulls a copy of your project into the cloud and builds it for you. Once the build is complete, an email goes out to all the stakeholders, who can then download the build to their devices and start testing.

The service starts with a free version and then, like the editor, quickly scales up in price depending on your needs. The free tier allows you to create projects up to 1 gig in size.

Licensing

In previous versions of Unity, various features were gated depending on whether you were using the free version or the professional version. Thankfully, with the release of Unity 5, everything is now unlocked by default. This isn’t too big a deal, as these features were quite specific to certain needs, such as audio filters and advanced water, but it does make sure that everyone is on equal footing.

Like Unity 4, we still have two versions of Unity: Unity Personal and Unity Pro. Unity Personal is only for developers or studios making less than $100K per year, so if you just scored big time with your latest Kickstarter, you’ll have to start paying for the professional version.

As for cost, well, that’s a matter of perspective. Game engines used to cost hundreds of thousands of dollars; in that light, Unity is actually a steal. The base editor will set you back $1,500. After that, you may want to purchase add-ons for iOS and Android. Those add-ons will also cost you an extra $1,500 apiece, so if you’re looking to create a truly cross-platform game, you’ll be paying $4,500.

Keep in mind that you can publish to mobile devices without the Pro add-ons, so you don’t necessarily have to purchase them. The Pro add-ons basically provide a profiler so you can track down any performance-related issues. They also provide Occlusion Culling for those platforms, which gives your game a performance boost by not drawing unnecessary geometry.

What Wasn’t Covered?

A whole lot of things.

For example, Unity 2D users now have effectors that apply physics in various ways depending on the effector – one type works like a conveyor belt, pushing objects along its surface. Unity also provides Frame Debugging now, allowing you to see all the draw calls in a particular frame. Skyboxes have been completely reworked, and the list goes on and on.

If you’re interested in reading about all the changes, then do yourself a favor and check out the release notes. You won’t be disappointed.

The key thing to keep in mind: Unity 5 isn’t a point release masquerading behind a higher version number; this is the next evolution of the platform. If you’re currently using Unity 4, especially for 3D game development, it’s time to make the jump.


Just a quick note of caution: make sure to always back up your project before opening it in Unity 5. Unity has to process your older projects to make them compatible with Unity 5, and I’ve heard some accounts of this breaking projects. I’m also not sure you can back-port to Unity 4, so make a backup before you take the plunge.

And yes … you can run Unity 4 alongside Unity 5. If you’re using a Mac, rename your Unity 4 folder to something other than Unity – I call mine Unity46 – then download the new Unity installer. That way, you can manage both Unity 4 and Unity 5 projects from the same machine.

Where to Go From Here?

Although it was only recently released, there are already a lot of resources available for Unity 5. Unity has a great article on Global Illumination. They also produced a great video on Mastering Physically Based Shading in Unity last year, an excellent tutorial video demonstrating the new Audio Mixer, and another great video detailing the changes in the animator.

And of course, check out their GDC conference talks and the Unity 5 demo to get a real sense of the new features.

Also, keep checking back here as we will be posting new Unity 5 tutorials.

It’s a great time to be a game developer. Now enough reading, go on and make your game! :]

What’s New in Unity 5 is a post from: Ray Wenderlich

The post What’s New in Unity 5 appeared first on Ray Wenderlich.

How To Make A Simple Drawing App with UIKit and Swift

How To Make A Simple Drawing App with UIKit

Update note: This tutorial was updated to iOS 8 and Swift by Jean-Pierre Distler. Original post by tutorial team member Abdul Azeem Khan.

At some stage in our lives, we all enjoyed drawing pictures, cartoons, and other stuff.

For me it was using a pen and paper when I was growing up, but these days the old pen and paper has been replaced by the computer and mobile devices! Drawing can be especially fun on touch-based devices, as you can see by the abundance of drawing apps on the App Store.

Want to learn how to make a drawing app of your own? The good news is it’s pretty easy, thanks to some great drawing APIs available in iOS.

In this tutorial, you will create an app very much like Color Pad for iPhone. In the process you’ll learn how to:

  • draw lines and strokes using Quartz2D,
  • use multiple colors,
  • set brush stroke widths and opacity,
  • create an eraser,
  • create a custom RGB color selector, and
  • share your drawing!

Grab your pencils and get started; no need to make this introduction too drawn out! :]

Getting Started

Start by downloading the starter project.

Start Xcode, open the project and have a look at the files inside. I haven’t done too much of the work for you: the starter project contains all the needed images in the asset catalog and the main view of the app, complete with all the needed constraints. The whole project is based on the Single View Application template.

Now open Main.storyboard and have a look at the interface. The View Controller Scene has three buttons at the top. As their titles suggest, they will be used to reset a drawing, share it, and bring up a settings screen. At the bottom you can see more buttons with pen images and an eraser; these will be used to select colors.

Finally, there are two image views called mainImageView and tempImageView; you’ll see later why you need both when you let users draw with different brush opacity levels.

MainStoryboard2

The view controller has the actions and outlets set as you’d expect: each button at the top has an action, the pencil colors all link to the same action (they have different tags set to tell them apart), and there are outlets for the two image views.

In order to let your inner artist shine, you’ll need to add some code!

Quick on the Draw

Your app will start off with a simple Drawing Feature, whereby you can swipe your finger on the screen to draw simple black lines. (Hey, even Picasso started with the basics).

Open ViewController.swift and add the following properties to the class:

var lastPoint = CGPoint.zeroPoint
var red: CGFloat = 0.0
var green: CGFloat = 0.0
var blue: CGFloat = 0.0
var brushWidth: CGFloat = 10.0
var opacity: CGFloat = 1.0
var swiped = false

Here’s a quick explanation of the variables used above:

  • lastPoint stores the last drawn point on the canvas. This is used when a continuous brush stroke is being drawn on the canvas.
  • red, green, and blue store the current RGB values of the selected color.
  • brushWidth and opacity store the brush stroke width and opacity.
  • swiped identifies if the brush stroke is continuous.

The default RGB values are all 0, which means the drawing color will start out as black just for now. The default opacity is set to 1.0 and line width is set to 10.0.
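If it helps to see how these components fit together, here’s a tiny sketch of a computed property that combines them into a UIColor. Note that this `currentColor` helper is just an illustration and is not part of the starter project or the tutorial code:

```swift
// Hypothetical convenience property -- not in the starter project.
// Combines the stored RGB components and opacity into a UIColor.
var currentColor: UIColor {
  return UIColor(red: red, green: green, blue: blue, alpha: opacity)
}
```

The tutorial itself passes the raw CGFloat components straight to the Core Graphics calls, so this helper is purely optional.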

Now for the drawing part! All touch-notifying methods come from the parent class UIResponder; they are fired in response to touches began, moved, and ended events. You’ll use these three methods to implement your drawing logic.

Start by adding the following method:

override func touchesBegan(touches: NSSet, withEvent event: UIEvent) {
  swiped = false
  if let touch = touches.anyObject() as? UITouch {
    lastPoint = touch.locationInView(self.view)
  }
}

touchesBegan is called when the user puts a finger down on the screen. This is the start of a drawing event, so you first reset swiped to false since the touch hasn’t moved yet. You also save the touch location in lastPoint so when the user starts drawing with their finger, you can keep track of where they started. This is, so to speak, where the brush hits the paper! :]

Add the following two methods next:

func drawLineFrom(fromPoint: CGPoint, toPoint: CGPoint) {
 
  // 1
  UIGraphicsBeginImageContext(view.frame.size)
  let context = UIGraphicsGetCurrentContext()
  tempImageView.image?.drawInRect(CGRect(x: 0, y: 0, width: view.frame.size.width, height: view.frame.size.height))
 
  // 2
  CGContextMoveToPoint(context, fromPoint.x, fromPoint.y)
  CGContextAddLineToPoint(context, toPoint.x, toPoint.y)
 
  // 3
  CGContextSetLineCap(context, kCGLineCapRound)
  CGContextSetLineWidth(context, brushWidth)
  CGContextSetRGBStrokeColor(context, red, green, blue, 1.0)
  CGContextSetBlendMode(context, kCGBlendModeNormal)
 
  // 4
  CGContextStrokePath(context)
 
  // 5
  tempImageView.image = UIGraphicsGetImageFromCurrentImageContext()
  tempImageView.alpha = opacity
  UIGraphicsEndImageContext()
 
}
 
override func touchesMoved(touches: NSSet, withEvent event: UIEvent) {
  // 6
  swiped = true
  if let touch = touches.anyObject() as? UITouch {
    let currentPoint = touch.locationInView(view)
    drawLineFrom(lastPoint, toPoint: currentPoint)
 
    // 7
    lastPoint = currentPoint
  }
}

Here’s what’s going on in these two methods:

  1. The first method is responsible for drawing a line between two points. Remember that this app has two image views – mainImageView (which holds the “drawing so far”) and tempImageView (which holds the “line you’re currently drawing”). Here you want to draw into tempImageView, so you need to set up a drawing context with the image currently in the tempImageView (which should be empty the first time).
  2. Next, you get the current touch point and then draw a line with CGContextAddLineToPoint from lastPoint to currentPoint. You might think that drawing a series of straight line segments would produce a jagged result, but the touch events fire so quickly that each segment is short enough for the result to look like a nice smooth curve.
  3. Here are all the drawing parameters for brush size and opacity and brush stroke color.
  4. This is where the magic happens, and where you actually draw the path!
  5. Next, you need to wrap up the drawing context to render the new line into the temporary image view.
  6. In touchesMoved, you set swiped to true so you can keep track of whether there is a current swipe in progress. Since this is touchesMoved, the answer is yes, there is a swipe in progress! You then call the helper method you just wrote to draw the line.
  7. Finally, you update the lastPoint so the next touch event will continue where you just left off.

Next, add the final touch handler:

override func touchesEnded(touches: NSSet, withEvent event: UIEvent) {
 
  if !swiped {
    // draw a single point
    drawLineFrom(lastPoint, toPoint: lastPoint)
  }
 
  // Merge tempImageView into mainImageView
  UIGraphicsBeginImageContext(mainImageView.frame.size)
  mainImageView.image?.drawInRect(CGRect(x: 0, y: 0, width: view.frame.size.width, height: view.frame.size.height), blendMode: kCGBlendModeNormal, alpha: 1.0)
  tempImageView.image?.drawInRect(CGRect(x: 0, y: 0, width: view.frame.size.width, height: view.frame.size.height), blendMode: kCGBlendModeNormal, alpha: opacity)
  mainImageView.image = UIGraphicsGetImageFromCurrentImageContext()
  UIGraphicsEndImageContext()
 
  tempImageView.image = nil
}

First, you check if the user is in the middle of a swipe. If not, then it means the user just tapped the screen to draw a single point. In that case, just draw a single point using the helper method you wrote earlier.

If the user was in the middle of a swipe then that means you can skip drawing that single point – since touchesMoved was called before, you don’t need to draw any further since this is touchesEnded.

The final step is to merge the tempImageView with mainImageView. You drew the brush stroke on tempImageView rather than on mainImageView. What’s the point of an extra UIImageView — can’t you just draw directly to mainImageView?

You could, but the dual image views are used to preserve opacity. When you’re drawing on tempImageView, the opacity is set to 1.0 (fully opaque). However, when you merge tempImageView with mainImageView, you can set the tempImageView opacity to the configured value, thus giving the brush stroke the opacity you want. If you were to draw directly on mainImageView, it would be incredibly difficult to draw brush strokes with different opacity values.

Okay, time to get drawing! Build and run your app. You will see that you can now draw pretty black lines on your canvas!

SecondRun

That’s a great start! With just those touch handling methods you have a huge amount of the functionality in place. Now it’s time to fill in some more options, starting with color.

The App of Many Colors

It’s time to add a splash of color to the scene – line art alone is kind of drab.

There are 10 color buttons on the screen at the moment, but if you tap any button right now, nothing will happen. First, you’ll need to define all the colors. Add the following array property to the class:

let colors: [(CGFloat, CGFloat, CGFloat)] = [
  (0, 0, 0),
  (105.0 / 255.0, 105.0 / 255.0, 105.0 / 255.0),
  (1.0, 0, 0),
  (0, 0, 1.0),
  (51.0 / 255.0, 204.0 / 255.0, 1.0),
  (102.0 / 255.0, 204.0 / 255.0, 0),
  (102.0 / 255.0, 1.0, 0),
  (160.0 / 255.0, 82.0 / 255.0, 45.0 / 255.0),
  (1.0, 102.0 / 255.0, 0),
  (1.0, 1.0, 0),
  (1.0, 1.0, 1.0),
]

This builds up an array of RGB values, where each array element is a tuple of three CGFloats. The colors here match the order of the colors in the interface as well as each button’s tag.

Next, find pencilPressed and add the following implementation:

// 1
var index = sender.tag ?? 0
if index < 0 || index >= colors.count {
  index = 0
}
 
// 2
(red, green, blue) = colors[index]
 
// 3
if index == colors.count - 1 {
  opacity = 1.0
}

This is a short method, but let’s look at it step by step:

  1. First, you need to know which color index the user selected. There are many places this could go wrong – incorrect tag, tag not set, not enough colors in the array – so there are a few checks here. The default if the value is out of range is just black, the first color.
  2. Next, you set the red, green, and blue properties. You didn’t know you could set multiple variables with a tuple like that? There’s your Swift tip of the day! :]
  3. The last color is the eraser, so it’s a bit special. The eraser button sets the color to white and opacity to 1.0. As your background color is also white, this will give you a very handy eraser effect!
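Note that the white-pen eraser only works because the canvas background is also white. If you ever use a colored or image background, one possible alternative is to erase pixels with a clear blend mode instead. This is a hedged sketch of that variation, not part of this tutorial’s code; you would apply it inside drawLineFrom when the eraser is selected:

```swift
// Hypothetical variation: a 'true' eraser that clears pixels
// instead of painting white over them.
CGContextSetLineCap(context, kCGLineCapRound)
CGContextSetLineWidth(context, brushWidth)
CGContextSetBlendMode(context, kCGBlendModeClear)
CGContextStrokePath(context)
```

For this app, though, the white-on-white trick is the simplest approach.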

What? Time for more drawing already? Yup — build and run, and get ready to let the colors fly! Now, tapping a color button changes the brush stroke to use that button’s color. No more drab line art!

ThirdRun

Tabula Rasa

All great artists have those moments where they step back and shake their head muttering “No! No! This will never do!” You’ll want to provide a way to clear the drawing canvas and start over again. You already have a ‘Reset’ button set up in your app for this.

Find reset() and fill in the implementation as follows:

mainImageView.image = nil

That’s it, believe it or not! All the code above does is set the mainImageView‘s image to nil, and voila, your canvas is cleared! Remember, you drew lines into the image view’s image context, so setting the image to nil here resets everything.

Build and run your code again. Draw something, and then tap the Reset button to clear your drawing. There! No need to go tearing up canvases in frustration.

Finishing Touches — Settings

Okay! You now have a functional drawing app, but there’s still that second screen of settings to deal with!

First, open SettingsViewController.swift and add the following two properties to the class:

var brush: CGFloat = 10.0
var opacity: CGFloat = 1.0

This will let you keep track of the brush size and opacity the user selects.

Now add the following implementation to sliderChanged():

if sender == sliderBrush {
  brush = CGFloat(sender.value)
  labelBrush.text = NSString(format: "%.2f", brush.native)
} else {
  opacity = CGFloat(sender.value)
  labelOpacity.text = NSString(format: "%.2f", opacity.native)
}
 
drawPreview()

In the code above, as a slider moves, the corresponding value and label update to match. Then you need to update the preview images in drawPreview, which you’ll add next!

Add the implementation for drawPreview:

func drawPreview() {
  UIGraphicsBeginImageContext(imageViewBrush.frame.size)
  var context = UIGraphicsGetCurrentContext()
 
  CGContextSetLineCap(context, kCGLineCapRound)
  CGContextSetLineWidth(context, brush)
 
  CGContextSetRGBStrokeColor(context, 0.0, 0.0, 0.0, 1.0)
  CGContextMoveToPoint(context, 45.0, 45.0)
  CGContextAddLineToPoint(context, 45.0, 45.0)
  CGContextStrokePath(context)
  imageViewBrush.image = UIGraphicsGetImageFromCurrentImageContext()
  UIGraphicsEndImageContext()
 
  UIGraphicsBeginImageContext(imageViewOpacity.frame.size)
  context = UIGraphicsGetCurrentContext()
 
  CGContextSetLineCap(context, kCGLineCapRound)
  CGContextSetLineWidth(context, 20)
  CGContextMoveToPoint(context, 45.0, 45.0)
  CGContextAddLineToPoint(context, 45.0, 45.0)
 
  CGContextSetRGBStrokeColor(context, 0.0, 0.0, 0.0, opacity)
  CGContextStrokePath(context)
  imageViewOpacity.image = UIGraphicsGetImageFromCurrentImageContext()
 
  UIGraphicsEndImageContext()
}

This method uses the same drawing techniques as the touch handling methods in ViewController to render previews of the settings. In both previews, it draws a single point rather than a line, with the appropriate line width and opacity taken from the slider values.

Build and run your code, open the Settings screen, and play around with the sliders. You will see that the preview images and value labels now change as you move them!

ChangingSettings

Settings Integration

There’s still one important piece missing here. Did you notice what it was?

The updated opacity and width values are still not being applied to the drawing canvas in ViewController! That’s because you haven’t yet communicated the values chosen in the Settings screen back to ViewController. This is a perfect job for a delegate protocol.

Open SettingsViewController.swift file and add the following code just below the imports:

protocol SettingsViewControllerDelegate: class {
  func settingsViewControllerFinished(settingsViewController: SettingsViewController)
}

This defines a class protocol with one required method. It’s how the settings screen will communicate its current settings back to any interested party.

Also add a property to the SettingsViewController class:

weak var delegate: SettingsViewControllerDelegate?

This will hold the reference to the delegate. If there is a delegate, you’ll need to notify it when the user taps the Close button. Find close() and add the following line to the end of the method:

self.delegate?.settingsViewControllerFinished(self)

This will call the delegate method so it can update itself with the new values.

Now, open ViewController.swift and add a new class extension for the protocol to the bottom of the file:

extension ViewController: SettingsViewControllerDelegate {
  func settingsViewControllerFinished(settingsViewController: SettingsViewController) {
    self.brushWidth = settingsViewController.brush
    self.opacity = settingsViewController.opacity
  }
}

This declares the class as conforming to SettingsViewControllerDelegate and implements its one method. In the implementation, all it needs to do is set the current brush width and opacity to the values from the settings view’s slider controls.

When the user moves from the drawing to the settings, you’ll want the sliders to show the currently-selected values for brush size and opacity. That means you’ll need to pass them along when you open the settings.

Add the following method override to the class:

override func prepareForSegue(segue: UIStoryboardSegue, sender: AnyObject?) {
  let settingsViewController = segue.destinationViewController as SettingsViewController
  settingsViewController.delegate = self
  settingsViewController.brush = brushWidth
  settingsViewController.opacity = opacity
}

When the user triggers the segue by tapping the Settings button, this method will then configure the new SettingsViewController by setting itself as the delegate and passing the current brush and opacity settings through.

Build and run! At this stage, you will see the brush and opacity values are now updated after you change them in the settings screen. Now you can draw with many colors of different brush sizes and opacity levels!

AppliedSettings

Finishing Touches — A Custom Color Selector

Currently, you have 10 color buttons on the drawing canvas screen. With a custom RGB color selector, though, the discriminating artists using your app will be able to pick any color in the RGB range.

There are a set of RGB color sliders in the settings screen that you will implement next.

Since you’ve already provided a preview of the brush size and opacity, you might as well provide a preview of the new brush color! :] Both the brush and opacity previews will be drawn in the selected RGB color, so there’s no need for an extra image view; you’ll reuse what you already have!

Open SettingsViewController.swift and add the following properties:

var red: CGFloat = 0.0
var green: CGFloat = 0.0
var blue: CGFloat = 0.0

You’ll use these to save the current RGB values.

Now add the implementation of colorChanged:

red = CGFloat(sliderRed.value / 255.0)
labelRed.text = NSString(format: "%d", Int(sliderRed.value))
green = CGFloat(sliderGreen.value / 255.0)
labelGreen.text = NSString(format: "%d", Int(sliderGreen.value))
blue = CGFloat(sliderBlue.value / 255.0)
labelBlue.text = NSString(format: "%d", Int(sliderBlue.value))
 
drawPreview()

This method is called whenever you move any of the RGB sliders. Notice how all you’re doing is updating the property values and the labels.

If you build and run your project now, you’ll notice that your color changes are not shown in the previews yet. To show them, you need a small change in drawPreview().

In the first half of the method, replace the call to CGContextSetRGBStrokeColor with the following:

CGContextSetRGBStrokeColor(context, red, green, blue, 1.0)

And in the second half, replace the call to CGContextSetRGBStrokeColor with the following:

CGContextSetRGBStrokeColor(context, red, green, blue, opacity)

Now that you have the brush and opacity samples drawing with all the correct settings, you’ll want to show them right when the settings screen appears. Add the following implementation of viewWillAppear to the class:

override func viewWillAppear(animated: Bool) {
  super.viewWillAppear(animated)
 
  sliderBrush.value = Float(brush)
  labelBrush.text = NSString(format: "%.1f", brush.native)
  sliderOpacity.value = Float(opacity)
  labelOpacity.text = NSString(format: "%.1f", opacity.native)
  sliderRed.value = Float(red * 255.0)
  labelRed.text = NSString(format: "%d", Int(sliderRed.value))
  sliderGreen.value = Float(green * 255.0)
  labelGreen.text = NSString(format: "%d", Int(sliderGreen.value))
  sliderBlue.value = Float(blue * 255.0)
  labelBlue.text = NSString(format: "%d", Int(sliderBlue.value))
 
  drawPreview()
}

As you can see, this method just presets all the labels and sliders with the correct values. The drawPreview call ensures that the preview image views show the correct previews as well.

Finally, open ViewController.swift. As before, you’ll need to make sure the current color makes it across to the settings screen, so add the following lines to the end of prepareForSegue:

settingsViewController.red = red
settingsViewController.green = green
settingsViewController.blue = blue

This will pass on the current red, green and blue values so the RGB sliders are set correctly.

Then, find settingsViewControllerFinished in the class extension and add the following lines to that method:

self.red = settingsViewController.red
self.green = settingsViewController.green
self.blue = settingsViewController.blue

In the above code, as the SettingsViewController closes, the updated RGB values are being fetched as well.

All right — time for another build and run stage! Put the Color Picker through its paces. The selected RGB color, which is displayed in RGBPreview, is now the default brush stroke color on the drawing canvas!

FinishedSettings

But what good are all these wonderful works of art if you can’t share them? Since you can’t stick the pictures up on your refrigerator, you’ll share them with the world in the final step of this tutorial! :]

Finishing Touches – Share and Enjoy!

In this final step, you’ll use the iOS share sheet to send your works of art out there into the world!

There are two parts to this: first, you need to get the drawing as a UIImage object; then, you pass it to a UIActivityViewController, which decides which sharing options to offer based on what accounts and services are available.

In ViewController.swift, add the following implementation to share():

UIGraphicsBeginImageContext(mainImageView.bounds.size)
mainImageView.image?.drawInRect(CGRect(x: 0, y: 0, 
  width: mainImageView.frame.size.width, height: mainImageView.frame.size.height))
let image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
 
let activity = UIActivityViewController(activityItems: [image], applicationActivities: nil)
presentViewController(activity, animated: true, completion: nil)

This method is pretty simple – first it renders out the drawing from mainImageView to a new UIImage. Then, UIActivityViewController does most of the heavy lifting! All you need to do is pass it an array of things to share; in this case, that’s just the single image.

The second parameter of the initializer, applicationActivities, lets you limit the activities; passing nil means iOS will provide as many share options as possible. Your drawing deserves no less!
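If you ever want to trim that list instead, UIActivityViewController also lets you exclude specific built-in activities via its excludedActivityTypes property. A quick sketch; the particular exclusions chosen here are arbitrary:

```swift
// Hypothetical example: hide a couple of built-in activities
// before presenting the share sheet.
activity.excludedActivityTypes = [UIActivityTypeAssignToContact,
                                  UIActivityTypePrint]
```

You would set this on the activity view controller before presenting it.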

Build and run the app, and create your masterpiece! When you hit Share, you’ll now be able to let the world know of your talents.

drawing-share

Where To Go From Here?

Here is the final DrawPad project (DrawPad-final-61) with all of the code from the above tutorial.

You can play around a bit more with the brush strokes, and also investigate drawing arcs and rectangles with Quartz 2D. A good place to start is the Quartz 2D Programming Guide. There are a number of beginner and advanced-level concepts there that you can play with to create awesome shapes and patterns.
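To give you a taste, here’s a small sketch of drawing a rectangle and an arc into an image context, using the same C-style Quartz API as the rest of this tutorial. The sizes and coordinates here are arbitrary, and this code isn’t part of the DrawPad project:

```swift
// Illustrative only: stroke a rectangle outline and a quarter arc
// into a small offscreen image context.
UIGraphicsBeginImageContext(CGSize(width: 100, height: 100))
let context = UIGraphicsGetCurrentContext()

// A 40x40 rectangle outline...
CGContextAddRect(context, CGRect(x: 10, y: 10, width: 40, height: 40))

// ...and an arc: center (70, 70), radius 20, sweeping from 0 to
// pi/2 radians, counterclockwise flag 0.
CGContextAddArc(context, 70.0, 70.0, 20.0, 0, CGFloat(M_PI_2), 0)

CGContextSetLineWidth(context, 2.0)
CGContextSetRGBStrokeColor(context, 0.0, 0.0, 0.0, 1.0)
CGContextStrokePath(context)

let shapesImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
```

The resulting UIImage could then be displayed in an image view, just like the drawings in this tutorial.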

If you want to learn how to draw more smooth lines, you should also check out this smooth line drawing article by Krzysztof Zablocki. It’s based on Cocos2D but you can use the same technique in UIKit.

I hope you enjoyed this tutorial as much as I did! Feel free to post your questions and comments on the forum — and feel free to share the masterpieces you create with your new app! :]

How To Make A Simple Drawing App with UIKit and Swift is a post from: Ray Wenderlich

The post How To Make A Simple Drawing App with UIKit and Swift appeared first on Ray Wenderlich.

Video Tutorial: Adaptive Layout Part 8: Adaptive Presentation
