In this screencast, you'll learn how to incorporate dependencies into your app using git submodules.
The post Screencast: Git: Dependency Management Using Submodules appeared first on Ray Wenderlich.
In recent months, we’ve released updates to courses such as Beginning Realm on iOS and Intermediate Realm on iOS, introduced brand new courses like Advanced Swift 3 and iOS Design Patterns, and covered advanced topics in weekly screencasts.
Today we’re happy to announce that another course, Beginning CloudKit, has been updated for Swift 3 & iOS 10!
CloudKit can be used to implement your own custom iCloud solution. It can also be used to create a web service backend without having to write any server-side code! In this 12-part course, you’ll learn all aspects of the CloudKit framework, including important basics like managing databases with the CloudKit Dashboard and creating and modifying records, as well as more advanced features like managing data conflicts, sharing data, and more.
Let’s take a look at what’s inside this course:
Video 1: Introduction. Looking to tie your app into the Cloud? Learn all about Apple’s solution, CloudKit, that allows you to have a persistent online backend for your app.
Video 2: Containers & DBs. Learn about two fundamental objects used throughout the entire CloudKit API: Containers and Databases.
Video 3: CloudKit Dashboard. CloudKit allows you to model data on Apple’s servers. This video will cover the basics of working with their browser-based tool.
Video 4: Saving Records. This video covers the basics of saving data to CloudKit using the convenience API.
Video 5: Fetching Records. This video teaches how you can read data from CloudKit using the convenience API.
Video 6: References. References are used to relate records to each other. This video covers the basics on how to create references and then how to let CloudKit know about them.
Video 7: Subscriptions. Be informed about record changes by using subscriptions. This video covers their usage.
Video 8: Operations. After a while, you’ll reach the limits of the convenience API. Thankfully, you can use operations instead, which provide a lot of power and flexibility.
Video 9: Managing Conflicts. Eventually, users will try to save old data over new data. This may be their intent or it may happen by accident. This video will teach you how to manage these conflicts.
Video 10: User Accounts. Learn how to use user accounts to determine the status of a user and to fetch data for the user.
Video 11: Sharing Data. CloudKit has recently added a new database: the sharing database. This video will teach you what it is, and how to use it.
Video 12: Conclusion. This video concludes the series and provides some resources on where to continue.
Want to check out the course? You can watch two of the videos for free:
The rest of the course is for raywenderlich.com subscribers only. Here’s how you can get access:
We hope you enjoy, and stay tuned for more new courses and updates to come! :]
The post Updated Course: Beginning CloudKit appeared first on Ray Wenderlich.
Maps are ubiquitous in modern apps. They provide locations of nearby points of interest, help users navigate a town or park, find nearby friends, track progress on a journey, or provide context for an augmented reality game.
Unfortunately, this means most maps look the same from app to app. Booooooring!
This tutorial covers how to include hand-drawn maps, like the ones in Pokemon Go, instead of programmatically generated ones.
Hand-drawing a map is a significant effort. Given the size of the planet, it’s only practical for a well-defined, geographically small area. If you have a well-defined area in mind for your map, then a custom map can add a ton of sizzle to your app.
Download the starter project here.
MapQuest is the start of a fun adventure game. The hero runs around Central Park, NYC in real life, but embarks on adventures, fights monsters, and collects treasure in an alternate reality. It has a cute, childish design to make players feel comfortable and to indicate the game isn’t that serious.
The game has several Points of Interest (POI) that define locations where the player can interact with the game. These can be quests, monsters, stores, or other game elements. Entering a 10-meter zone around a POI starts the encounter. For the sake of this tutorial, the actual gameplay is secondary to the map rendering.
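The starter project’s game logic handles this proximity check for you. Purely as an illustration, a 10-meter proximity test with Core Location could look something like the sketch below (the PointOfInterest type here is hypothetical, not the project’s actual model):
import CoreLocation
// Hypothetical POI model for illustration; the starter project defines its own.
struct PointOfInterest {
  let name: String
  let coordinate: CLLocationCoordinate2D
}
// Returns the POIs within `radius` meters of the player's current location.
func encounters(near location: CLLocation,
                among pois: [PointOfInterest],
                radius: CLLocationDistance = 10) -> [PointOfInterest] {
  return pois.filter { poi in
    let poiLocation = CLLocation(latitude: poi.coordinate.latitude,
                                 longitude: poi.coordinate.longitude)
    // distance(from:) returns the distance in meters between two locations.
    return location.distance(from: poiLocation) <= radius
  }
}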
There are two heavy-lifting files in the project:
The main view of the game is an MKMapView. MapKit uses tiles at various zoom levels to fill its view and provide information about geographic features, roads, etc.
The map view can display either a traditional road map or satellite imagery. This is useful for navigating around the city, but rather useless for imagining you’re adventuring around a medieval world. However, MapKit lets you supply your own map art to customize the aesthetic and presented information.
A map view is made up of many tiles that are loaded dynamically as you pan around the view. The tiles are 256 pixels by 256 pixels and are arranged in a grid that corresponds to a Mercator map projection.
To see the map in action, build and run the app.
Wow! What a pretty town. The game’s primary interface is location, which means there’s nothing to see or do without visiting Central Park.
Unlike other tutorials, MapQuest is a functional app right out of the gate! But unless you live in New York City, you’re a little out of luck. Fortunately, Xcode comes with at least two ways of simulating location.
With the app still running in the iPhone Simulator, set the user’s location.
Go to Debug\Location\Custom Location….
Set the Latitude to 40.767769 and the Longitude to -73.971870. This will activate the blue user location dot and focus the map on the Central Park Zoo. This is where a wild goblin lives; you’ll be forced to fight it, and then collect its treasure.
After beating up on the helpless goblin, you’ll be placed in the zoo (Note the blue dot).
A static location is pretty useful for testing out many location-based apps. However, this game requires visiting multiple locations as part of the adventure. The Simulator can simulate changing locations for a run, bike ride, and a drive. These pre-included trips are for Cupertino, but MapQuest only has encounters in New York.
Occasions such as these call for simulating location with a GPX file (GPS Exchange Format). This file specifies a series of waypoints, and the Simulator will interpolate a route between them.
Creating this file is outside the scope of this tutorial, but the sample project includes a test GPX file for your use.
Open the scheme editor in Xcode by selecting Product\Scheme\Edit Scheme….
Select Run in the left pane, and then the Options tab on the right. In the Core Location section, click the check mark for Allow Location Simulation. In the Default Location drop-down choose Game Test.
This means the app will simulate moving between the waypoints specified in Game Test.gpx.
Build and run.
The simulator will now walk from the 5th Avenue subway to the Central Park Zoo where you’ll have to fight the goblin again. After that, it’s on to your favorite fruit company’s flagship store to buy an upgraded sword. Once the loop is complete, the adventure will start over.
OpenStreetMap is a community-supported open database of map data. That data can be used to generate the same kind of map tiles used by Apple Maps. The Open Street Map community also provides more than just basic road maps, including specialized maps for topography, biking, and artistic rendering.
Note: The Open Street Map tile policy has strict requirements about data usage, attribution, and API access. This is fine for use in a tutorial, but check for compliance before using the tiles in a production application.
Replacing the map tiles requires using an MKTileOverlay to display new tiles on top of the default Apple Maps.
Open MapViewController.swift, and replace setupTileRenderer() with the following:
func setupTileRenderer() {
// 1
let template = "https://tile.openstreetmap.org/{z}/{x}/{y}.png"
// 2
let overlay = MKTileOverlay(urlTemplate: template)
// 3
overlay.canReplaceMapContent = true
// 4
mapView.add(overlay, level: .aboveLabels)
//5
tileRenderer = MKTileOverlayRenderer(tileOverlay: overlay)
}
Here’s what the code above does:
1. By default, MKTileOverlay supports loading tiles from a URL template that takes a tile path. {x}, {y}, and {z} are replaced at runtime by an individual tile’s coordinates. The z-coordinate, or zoom level, is determined by how far the user has zoomed in on the map. The x and y are the indices of the tile for the section of the Earth being shown. A tile needs to be supplied for every x and y at each supported zoom level.
2. Creates the tile overlay from that URL template.
3. Lets the overlay replace Apple’s map content instead of just drawing on top of it.
4. Adds the overlay to the mapView. Custom tiles can be placed either above the roads or above the labels (like road and place names). Open Street Map tiles come prelabeled, so they should go above Apple’s labels.
5. Before the tiles will show up, the tile renderer has to be set up with the overlay so the MKMapView can draw the tiles.
At the bottom of viewDidLoad(), add the following line:
mapView.delegate = self
This sets the MapViewController to be the delegate of its mapView.
Next, in the MapView Delegate extension, add the following method:
func mapView(_ mapView: MKMapView, rendererFor overlay: MKOverlay) -> MKOverlayRenderer {
return tileRenderer
}
An overlay renderer tells the map view how to draw an overlay. The tile renderer is a special subclass for loading and drawing map tiles.
That’s it! Build and run to see the standard Apple map replaced with Open Street Map.
At this point, you can really see the difference between the open source maps and Apple Maps!
The magic of the tile overlay is the ability to translate from a tile path to a specific image asset. The tile’s path is represented by three coordinates: x, y, and z. The x and y correspond to indices on the map’s surface, with 0,0 being the upper left tile. The z-coordinate is for the zoom level and determines how many tiles make up the whole map.
At zoom-level 0, the whole world is represented by a 1-by-1 grid, requiring one tile:
At zoom-level 1, the whole world is divided into a 2-by-2 grid. This requires four tiles:
At level 2, the number of rows and columns doubles again, requiring sixteen tiles:
This pattern continues, quadrupling both the level of detail and the number of tiles at each zoom level. Each zoom level requires 2^(2*z) tiles, all the way down to zoom level 19, which requires 274,877,906,944 tiles!
Since the map view is set to follow the user’s location, the default zoom level is set to 16, which shows a good level of detail to give the user the context of where they are. Zoom level 16 would require 4,294,967,296 tiles for the whole planet! It would take more than a lifetime to hand-draw these tiles.
Having a smaller bounded area like a town or park makes it possible to create custom artwork. For a larger range of locations, the tiles can be procedurally generated from source data.
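If you’re curious how a tile path relates to real-world coordinates, the standard slippy-map formula converts a longitude, latitude, and zoom level into the x and y indices. Here’s a small standalone sketch (not part of the sample project):
import Foundation
import CoreLocation
// Converts a coordinate and zoom level into a tile path using the
// standard Web Mercator / slippy-map tiling scheme.
func tilePath(for coordinate: CLLocationCoordinate2D, zoom: Int) -> (x: Int, y: Int, z: Int) {
  let n = pow(2.0, Double(zoom))  // number of tiles per axis at this zoom level
  let x = Int((coordinate.longitude + 180.0) / 360.0 * n)
  let latRad = coordinate.latitude * Double.pi / 180.0
  let y = Int((1.0 - log(tan(latRad) + 1.0 / cos(latRad)) / Double.pi) / 2.0 * n)
  return (x, y, zoom)
}
// For example, the Central Park Zoo coordinate used earlier:
let zoo = CLLocationCoordinate2D(latitude: 40.767769, longitude: -73.971870)
let path = tilePath(for: zoo, zoom: 16)
print("tiles/\(path.z)/\(path.x)/\(path.y).png")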
Since the tiles for this game are prerendered and included in the resource bundle, you simply need to load them. Unfortunately, a generic URL template is not enough, since it’s better to fail gracefully if the renderer requests one of the billions of tiles not included with the application.
To do that, you’ll need a custom MKTileOverlay subclass. Open AdventureMapOverlay.swift and add the following code:
class AdventureMapOverlay: MKTileOverlay {
override func url(forTilePath path: MKTileOverlayPath) -> URL {
let tileUrl = "https://tile.openstreetmap.org/\(path.z)/\(path.x)/\(path.y).png"
return URL(string: tileUrl)!
}
}
This sets up the subclass, and replaces the basic class using a template URL with a specialized URL generator.
Keep the Open Street Map tiles for now in order to test out the custom overlay.
Open MapViewController.swift, and replace setupTileRenderer() with the following:
func setupTileRenderer() {
let overlay = AdventureMapOverlay()
overlay.canReplaceMapContent = true
mapView.add(overlay, level: .aboveLabels)
tileRenderer = MKTileOverlayRenderer(tileOverlay: overlay)
}
This swaps in the custom subclass.
Build and run again. If all goes well, the game should look exactly the same as before. Yay!
Now comes the fun part. Open AdventureMapOverlay.swift, and replace url(forTilePath:) with the following:
override func url(forTilePath path: MKTileOverlayPath) -> URL {
// 1
let tilePath = Bundle.main.url(
forResource: "\(path.y)",
withExtension: "png",
subdirectory: "tiles/\(path.z)/\(path.x)",
localization: nil)
guard let tile = tilePath else {
// 2
return Bundle.main.url(
forResource: "parchment",
withExtension: "png",
subdirectory: "tiles",
localization: nil)!
}
return tile
}
This code loads the custom tiles for the game: it first looks in the app bundle for a tile image matching the requested path (1), and falls back to a generic parchment image if that particular tile isn’t included (2).
Build and run again. Now the custom map is shown.
Try zooming in and out to see different levels of detail.
Don’t zoom too far in or out, or you’ll lose the map altogether.
Fortunately, this is an easy fix. Open MapViewController.swift and add the following lines to the bottom of setupTileRenderer():
overlay.minimumZ = 13
overlay.maximumZ = 16
This informs the mapView that tiles are only provided between those zoom levels. Zooming beyond that range simply scales the tile images provided in the app; no additional detail is supplied, but at least the image shown matches the scale.
This next section is optional, as it covers how to draw specific tiles. To skip to more MapKit techniques, jump ahead to the “Fancifying the Map” section.
The hardest part of this whole maneuver is creating the tiles of the right size and lining them up properly. To draw your own custom tiles, you’ll need a data source and an image editor.
Open up the project folder and take a look at MapQuest/tiles/14/4825/6156.png. This tile shows the bottom part of Central Park at zoom level 14. The app contains dozens of these little images to form the map of New York City where the game takes place, and each one was drawn by hand using pretty rudimentary skills and tools.
The first step is to figure out what tiles you’ll need to draw. You can download the source data from Open Street Map and use a tool like Mapnik to generate tile images from it. Unfortunately, the source data is a 57GB download! The tools are also a little obscure and outside the scope of this tutorial.
For a bounded region like Central Park, there’s an easier workaround.
In AdventureMapOverlay.swift, add the following line to url(forTilePath:):
print("requested tile\tz:\(path.z)\tx:\(path.x)\ty:\(path.y)")
Build and run. Now as you zoom and pan around the map, the tile paths are displayed in the console output.
Next it’s a matter of getting a source tile then customizing it. You can reuse the URL scheme from before to get an open street map tile.
The following terminal command will grab and store it locally. You can change the URL, replacing the x, y, and z with a particular map path.
curl --create-dirs -o z/x/y.png https://tile.openstreetmap.org/z/x/y.png
For the south section of Central Park, try:
curl --create-dirs -o 14/4825/6156.png https://tile.openstreetmap.org/14/4825/6156.png
This directory structure of zoom-level/x-coordinate/y-coordinate makes it easier to find and use the tiles later.
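If you have more than a handful of tiles to fetch, you can script the same download. Here’s a minimal Swift sketch that grabs a small range of tiles into the same zoom/x/y folder layout (the tile ranges below are example values; adjust them to the paths printed in the console, and keep the Open Street Map tile usage policy in mind before downloading in bulk):
import Foundation
// Downloads a rectangular range of tiles at one zoom level into zoom/x/y.png files.
let zoom = 14
let xRange = 4824...4825   // example columns near the south end of Central Park
let yRange = 6155...6156   // example rows
for x in xRange {
  for y in yRange {
    do {
      let remote = URL(string: "https://tile.openstreetmap.org/\(zoom)/\(x)/\(y).png")!
      let folder = URL(fileURLWithPath: "\(zoom)/\(x)", isDirectory: true)
      try FileManager.default.createDirectory(at: folder, withIntermediateDirectories: true)
      // A synchronous fetch keeps the script simple; fine for a handful of tiles.
      let data = try Data(contentsOf: remote)
      try data.write(to: folder.appendingPathComponent("\(y).png"))
      print("saved \(zoom)/\(x)/\(y).png")
    } catch {
      print("failed \(zoom)/\(x)/\(y).png: \(error)")
    }
  }
}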
The next step is to use the base image as a starting point for customization. Open the tile in your favorite image editor. For example, this is what it looks like in Pixelmator:
Now you can use the brush or pencil tools to draw roads, paths, or interesting features.
If your tool supports layers, drawing different features on separate layers will allow you to adjust them to give the best look. Using layers makes the drawing a little more forgiving, as you can cover up messy lines beneath other features.
Now repeat this process for all the tiles in the set, and you’re good to go. As you can see, this will be a little bit of a time investment.
You can make the process a little easier:
After creating your new tiles, put them back in the tiles/zoom-level/x-coordinate/y-coordinate folder structure in the project. This keeps things organized and lets you access the tiles easily, as you did in the code you added for url(forTilePath:):
let tilePath = Bundle.main.url(
forResource: "\(path.y)",
withExtension: "png",
subdirectory: "tiles/\(path.z)/\(path.x)",
localization: nil)
That’s it. Now you’re ready to go forth and draw some beautiful maps!
The map looks great and fits the aesthetic of the game. But there’s so much more to customize!
Your hero is not well represented by a blue dot, but you can replace the current location annotation with some custom art.
Open MapViewController.swift and add the following method to the MapView Delegate extension:
func mapView(_ mapView: MKMapView, viewFor annotation: MKAnnotation) -> MKAnnotationView? {
switch annotation {
// 1
case let user as MKUserLocation:
// 2
let view = mapView.dequeueReusableAnnotationView(withIdentifier: "user")
?? MKAnnotationView(annotation: user, reuseIdentifier: "user")
// 3
view.image = #imageLiteral(resourceName: "user")
return view
default:
return nil
}
}
This code creates a custom view for the user annotation:
1. The switch checks whether the annotation is the user’s location, which is always an instance of MKUserLocation.
2. Reuses an existing annotation view if one is available, or creates a new MKAnnotationView. MKAnnotationView is pretty flexible, but here it’s only used to represent the adventurer with just an image.
3. Sets the view’s image to the custom adventurer artwork.

Build and run. Instead of the blue dot, there will now be a little stick figure wandering around.
MKMapView also allows you to mark up your own locations of interest. MapQuest plays along with the NYC subway, treating the subway system as a great big warp network.
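The warp points come from the starter project’s Game.swift, and the WarpZone implementation isn’t shown here; an annotation like it only needs to conform to MKAnnotation. A hypothetical minimal version might look like this (property names are illustrative, not the project’s actual ones):
import MapKit
// Illustrative sketch only; the starter project ships its own WarpZone.
class WarpZone: NSObject, MKAnnotation {
  let coordinate: CLLocationCoordinate2D   // required by MKAnnotation
  let title: String?                       // e.g. the station name
  let line: String                         // subway line, useful for tinting the marker
  init(coordinate: CLLocationCoordinate2D, title: String, line: String) {
    self.coordinate = coordinate
    self.title = title
    self.line = line
    super.init()
  }
}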
Add some markers to the map for nearby subway stations. Open MapViewController.swift and add the following line at the end of viewDidLoad():
mapView.addAnnotations(Game.shared.warps)
Build and run, and a selection of subway stations is now represented as pins.
Like the user location blue dot, these standard pins don’t really match the game’s aesthetic. Custom annotations can come to the rescue.
In mapView(_:viewFor:), add the following case to the switch statement, above the default case:
case let warp as WarpZone:
let view = mapView.dequeueReusableAnnotationView(withIdentifier: WarpAnnotationView.identifier)
?? WarpAnnotationView(annotation: warp, reuseIdentifier: WarpAnnotationView.identifier)
view.annotation = warp
return view
Build and run again. The custom annotation view uses a template image and colors it for the specific subway line.
If only the subway were an instantaneous warp in real life!
MapKit has lots of ways to spruce up the map for the game. Next, use an MKPolygonRenderer to draw a gradient-based shimmer effect on the reservoir.
Replace setupLakeOverlay() with:
func setupLakeOverlay() {
// 1
let lake = MKPolygon(coordinates: &Game.shared.reservoir, count: Game.shared.reservoir.count)
mapView.add(lake)
// 2
shimmerRenderer = ShimmerRenderer(overlay: lake)
shimmerRenderer.fillColor = #colorLiteral(red: 0.2431372549, green: 0.5803921569, blue: 0.9764705882, alpha: 1)
// 3
Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { [weak self] _ in
self?.shimmerRenderer.updateLocations()
self?.shimmerRenderer.setNeedsDisplay()
}
}
This sets up a new overlay by:
1. Creating an MKPolygon with the same shape as the reservoir. These coordinates are pre-programmed in Game.swift.
2. Setting up a custom overlay renderer, ShimmerRenderer, to draw the polygon with its fill color.
3. Scheduling a timer that updates the shimmer effect every 0.1 seconds.

Next, replace mapView(_:rendererFor:) with:
with:
func mapView(_ mapView: MKMapView, rendererFor overlay: MKOverlay) -> MKOverlayRenderer {
if overlay is AdventureMapOverlay {
return tileRenderer
} else {
return shimmerRenderer
}
}
This will select the right renderer for each of the two overlays.
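This works fine with exactly two overlay types. If you end up adding more overlays later, a switch on the overlay keeps the method readable; here’s one possible variant with the same behavior, just written differently:
func mapView(_ mapView: MKMapView, rendererFor overlay: MKOverlay) -> MKOverlayRenderer {
  switch overlay {
  case is AdventureMapOverlay:
    return tileRenderer          // custom hand-drawn tiles
  case is MKPolygon:
    return shimmerRenderer       // the shimmering reservoir
  default:
    return MKOverlayRenderer(overlay: overlay)  // sensible fallback for anything else
  }
}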
Build and run again, then pan over to the reservoir to see the Shimmering Sea!
You can download the final project for the tutorial here.
Creating hand-drawn map tiles is time consuming, but using them can give an app a distinct and immersive feel. Apart from creating the assets, using them is pretty straightforward.
In addition to the basic tiles, Open Street Map has a list of specialized tile providers for things like cycling and terrain. Open Street Map also provides data you can use if you want to design your own tiles programmatically.
If you want a custom but realistic map appearance, without hand-drawing everything, take a look at third-party tools such as MapBox. It allows you to customize the appearance of a map with good tools at a modest price.
For more information on custom overlays and annotations, check out this other tutorial.
If you have any questions or comments on this tutorial, feel free to join in the discussion below!
Open Street Map data and images are © OpenStreetMap contributors.
The post Advanced MapKit Tutorial: Custom Tiles appeared first on Ray Wenderlich.
This year we ran RWDevCon, our third annual conference focused on high-quality, hands-on tutorials.
The conference was a huge hit and got rave reviews, so we are running it for the 4th time next March!
This year we’re making a number of improvements, including:
This is just a quick heads-up that ticket sales for the conference will open up in 1 week, on Wed July 12 @ 9:00 AM EST.
And even better, the first 75 people who buy tickets will get $100 off.
For the past two years, the tickets have sold out quickly, so if you’re interested in attending, be sure to snag your ticket while you can.
If you’d like a reminder when tickets go on sale, sign up for our raywenderlich.com weekly newsletter. We hope to see you at the conference!
The post RWDevCon 2018: Ticket Sales Open in 1 Week! appeared first on Ray Wenderlich.
As you may know, our friends at Five Pack Creative run live in-person classes based on the materials you know and love at raywenderlich.com: ALT-U.
In the past year, they’ve run several classes from beginner to advanced, and they’ve been a hit!
Today, we have some exciting news: for the first time ever, ALT-U is offering some of their classes in a live online format.
Simply sign up for the class you’re interested in, and you can follow along live with the instructor, from the comfort of your own home.
Keep reading to learn more about the three upcoming classes, and how to get a special discount for raywenderlich.com readers.
ALT-U is kicking off their live online training with three options:
ALT-U’s live online training is great if you’re the type of person who learns best by having access to a live instructor to ask questions of, and who enjoys interacting with fellow students.
The folks at ALT-U have been kind enough to offer a 15% discount for raywenderlich.com readers.
Simply select your course and then enter the following discount code: LEARNMORE17
Note that the discount expires on July 14, so be sure to snag it while you still can.
We hope you enjoy the new live online training offerings from ALT-U!
The post ALT-U Live Online Training: iOS Debugging, Auto Layout, and Instruments appeared first on Ray Wenderlich.
From the morning wake up call of a buzzing alarm, to the silent rumble of an air conditioner, sound surrounds us. Audio can make or break a video game and can provide a fully immersive experience for the player when done correctly. The Audio Mixer is a great aid in helping you achieve that.
In this tutorial, you’ll learn how to do the following:
Before you dive into the world of Audio Mixers, Groups, and Sound Effects, it’s best to understand the basic types of sounds that you’ll see in games:
Each of these categories can be broken down into several sub-categories. For this tutorial, you will focus on the above four categories.
Download the starter project here and extract it to a location of your choosing.
Open up the Starter Project in Unity. The assets are sorted inside several folders:
Open the MixerMayhem scene in the StarterAssets/Scenes folder.
Run the starter project. If everything went okay, you should be able to hover over each object, and each one should produce a unique sound when clicked.
Right now, the audio in the scene is being played by Audio Sources on each of the objects. Jumping back and forth between all these individual Audio Sources to make changes or add effects to them can become quite a tedious process. Luckily, the Unity Audio Mixer is here to save the day!
To create an Audio Mixer, first make sure the Audio Mixer window is open. You will find it located under Window >> Audio Mixer or Ctrl + 8:
In the Audio Mixer window, create an Audio Mixer named MasterMixer by clicking the + located at the top right of the window.
Alternatively, you can create an Audio Mixer under Project Window\Create\Audio Mixer:
You should now have a blank Audio Mixer opened in your Audio Mixer window.
Here’s some explanation:
An Audio Mixer Group is used by one or more Audio Sources to combine and modify their audio output before it reaches the Audio Listener in the scene. A child Group routes its output up to its parent Group. The Master Group normally routes its output to the Audio Listener, unless its Audio Mixer routes into yet another Audio Mixer.
To create a new Audio Mixer Group, click the + located next to Groups in the Audio Mixer window. Name this group Music.
A new group named Music should appear in the Strip View. By default, each group has a few traits to go along with it:
Add the following additional Groups:
SFX should be under Master, while Grenade, Sink, Radio and Environment should all be under SFX. If your groups ended up in a different order, you can re-order them by dragging and dropping, just as you do with GameObjects in the Hierarchy.
It’s a good habit to color-code your Groups for the ease of readability. Do that now by setting the Master to Red, Music to Orange and all of the SFX to Cyan.
To change the color of a Group, simply right-click the Group in the Strip View and choose a color.
When finished, your Groups should look like this:
With your Groups in place, you can now set up Views, which are visibility toggle sets that indicate which groups to show in the Strip view. Views become quite useful when there are many Groups.
When you created the Audio Mixer, an initial View called View was created. Change the name of that view to Everything, as this will be your main view of all of the Groups. Then create two more Views by clicking the + next to Views and name them Music and SFX respectively.
With the Music View selected, toggle everything Cyan off.
Then select the SFX View and toggle everything Orange off.
You should now be able to jump between all three views by clicking on their names in the View window, and the appropriate Groups should appear in the Strip View.
You have an Audio Mixer with a bunch of Groups set up, but no Audio Sources are outputting to any Groups. Currently, your Audio Sources are routed directly to the Audio Listener.
What you want is for the Audio Sources to output to an appropriate Group that will then output to the Audio Mixer, which will in turn output to the Audio Listener.
Locate the GameObject named Audio Objects in the Hierarchy window.
Now select the Trumpet child object of Audio Objects. Then jump over to the Inspector window, and scroll down until you see the Audio Source attached to the Trumpet. You should see an Output field underneath the AudioClip field. Click on the icon next to the Output field and select the Music group.
Awesome job — you just routed your first Audio Source! With the Audio Mixer window still open, run the game, and click on the Trumpet object. Notice when the Trumpet is playing, the Music group is showing Attenuation, as is the Master group, because the Music group is a child of the Master group.
Click the Edit in Play Mode button while the game is still running. You can adjust the Music volume slider to your liking, and the changes will persist outside of Play Mode.
So now you have one Audio Source currently routed, but you need to route all of the SFX sounds.
Go through the remaining children under the Audio Objects game object in the Hierarchy window, and set their Audio Source Output fields to an appropriate Group:
There's also one more Audio Source on the AudioManager GameObject for a room tone sound. Set the Output field for that one to Environment.
Now run the project and start clicking away. Take a look at the Audio Mixer window, and you should start to see all of your Groups being put to use as you click the various objects in the scene.
Play around with the Solo and Mute buttons to see just exactly how they work with the Audio Mixer. For instance, muting the SFX group will mute all sound that routes through that group.
You can add various effects to Groups just like you can add effects to Audio Sources. To add an effect, simply select the Group you wish to add the effect to and click Add Effect in the Inspector window.
Unity provides a bunch of audio effects right out of the box. You’ll be using the following for the remainder of this tutorial:
Groups can have as many effects as you'd like, with Attenuation being the default effect for every Group. Effects are executed from top to bottom, meaning the order of effects can impact the final output of the sound. You can re-order effects at any time by simply moving them up or down in the Strip View.
A common practice in audio mixing is to create separate Groups for complicated or frequently used effects.
Create two additional Groups. Name one Reverb and the other Distortion. Mark them both as green for readability and move them up front next to the Master.
Instead of adding a SFX Reverb to each Group, you can simply add it to the Reverb group. Do this now.
You’re probably wondering how this will do anything if your Audio Sources are not routed to the Reverb group. You’re right — it won’t. You will have to add another effect named Receive to your Reverb group, and make sure it is above the Reverb effect as order matters.
Then you can add the opposite effect named Send on the Music and SFX groups.
Select the Music group and in the Inspector window under the Send effect, connect the Receive field to the Reverb\Receive option, and set the Send Level to 0.00 dB.
Then do the same thing with the SFX group.
With all that sending and receiving set up, jump back over to your Reverb group. In the Inspector window, under SFX Reverb, change the following fields to the indicated value:
These settings will give your audio a nice empty-room effect, and will help unify the audio as if it’s all coming from the same room.
Save and run your project, and start making some noise! You should see the Reverb group in the Audio Mixer window picking up noise as you click on various objects.
Try starting and abruptly stopping the Trumpet sound. You should hear the Reverb echoing and decaying quite nicely.
You can also use the Bypass button to have the Reverb group bypass its effects, so you can hear the difference with and without the SFX Reverb.
For some finishing touches, add a Lowpass filter to the Grenade group and the Sink group. The default values should be just fine. This will help remove some of the high frequency signals being created from the Reverb group.
With the Reverb working nicely, it’s time to add the next effect — Distortion.
Run the game and click the radio, then wait until you hear the phrase Ladies and Gentlemen.
Right now, that phrase doesn’t sound like it’s coming from a radio at all. To fix that, you’ll first need to add a Receive effect to the Distortion group, and then have the Radio group send to the newly created Receive channel.
Don’t forget to set the Send Level field to 0.00 dB under the Radio group in the Inspector window. Otherwise the Distortion group will have no effect.
Now it’s time to create the actual Distorting effect. To really get it sounding like a radio transmission, you’ll need to do a little bit of layering.
Add the following effects:
Now select the Distortion group. Go to the Inspector window and set the values for all of the effects accordingly:
When finished, run the project again and click on the radio.
The Distortion effect should sound similar to this:
With all of the effects in place, you can pat yourself on the back for setting up your first Audio Mixer!
As a final touch, you can adjust some of the decibel levels. Music and Environment should be more of a background noise, so lower their decibel levels. Also, you don’t want to give your player a heart attack when the Grenade goes off so lower that a bit too. :]
In the next section you’ll learn how to change some of the Audio Mixer fields via scripting.
With the Audio Mixer complete, it’s time to give the player some options in game that will let them make their own changes to the Audio Mixer.
In the Hierarchy window, there is a game object named AudioManager that has a child canvas object named AudioManagerCanvas. Under that, you should see another object that is currently inactive named SliderPanel. Enable this game object.
In the Game window you should see some sliders and buttons. These will be the sliders to adjust the Music and SFX volumes.
In order to access the volume levels of your Music and SFX group, you need to expose those parameters. To do this, first select the Music group, then in the Inspector window, right click the Volume field and select the Expose ‘Volume (of Music)’ to script option.
Now select the SFX group and expose its Volume as well. Back in the Audio Mixer window, you should now see Exposed Parameters (2) at the top right.
Click on this and rename the newly exposed parameters respectively to musicVol and sfxVol. These are the names you’ll use to access them in scripts.
Select the AudioManager game object. In the Inspector window, navigate to the Audio Manager script. This contains an array of Audio Settings. Each element within the array has three values. The first two drive the slider's functionality, while the third is the exposed parameter that the slider influences. Type in the exposed parameter names: musicVol goes under the element containing the MusicSlider, and sfxVol under the SFXSlider.
With that done, open up the AudioManager.cs script. The first thing you need to add is a reference to your Audio Mixer. At the top of the script, underneath public static AudioManager instance; add the following line:
public AudioMixer mixer;
Save the script, then return to the Unity Editor. A new field named Mixer should be on the Audio Manager component. Drag your MasterMixer from the Project window into this empty variable. This will give you a reference to your Audio Mixer so you can change its exposed parameters.
Open AudioManager.cs again, scroll down to the AudioSetting class, and just below the Initialize() method, add this new method:
public void SetExposedParam(float value) // 1
{
redX.SetActive(value <= slider.minValue); // 2
AudioManager.instance.mixer.SetFloat(exposedParam, value); // 3
PlayerPrefs.SetFloat(exposedParam, value); // 4
}
Here's what that code does:
1. The method takes a float for the new value.
2. Shows the red X when the value is at or below the slider's minimum, and hides it otherwise.
3. Calls SetFloat on the Audio Mixer, which sets the exposed parameter to the specified value.
4. Saves the value in a PlayerPref to remember the user's volume choice.

In the main AudioManager class, create the following two public methods that your sliders will hook to:
public void SetMusicVolume(float value)
{
audioSettings[(int)AudioGroups.Music].SetExposedParam(value);
}
public void SetSFXVolume(float value)
{
audioSettings[(int)AudioGroups.SFX].SetExposedParam(value);
}
Save the script. Head back to the Unity Editor and navigate to the MusicSlider game object.
Find the Slider component attached to the MusicSlider object in the Inspector window. Then hook AudioManager.SetMusicVolume dynamically to the slider.
Then select the SFXSlider game object and do the same thing, but hook it dynamically to AudioManager.SetSFXVolume.
Selecting Dynamic float passes in the current value of the slider to the method.
Save and run the project. You should now be able to adjust the sliders in-game to change the volume levels of the Music and SFX groups.
Congratulations! You just exposed and modified your first Audio Mixer fields.
Now it's time to give functionality to the two buttons that have been waiting patiently in the wings. These will be used to transition between Snapshots in your Audio Mixer.
The first thing you need is another Snapshot, which is basically a saved state of your Audio Mixer. For this example, you'll be creating a new Snapshot that focuses the audio levels to favor the Music group.
Click the + next to the Snapshots header, and name your new Snapshot MusicFocus, as the focus of the Snapshot will be on the Music group. Then rename the first Snapshot to Starting as it will be the default Snapshot that will be used, as indicated by the star icon to the right of the name.
With the MusicFocus Snapshot selected, adjust the Music group volume to be louder than the SFX group volume. You should now be able to jump back and forth between Snapshots, and the values for each Snapshot should be automatically saved.
With the setup done, you can dive back into the AudioManager script and add the following new instance variables to the top of the script:
private AudioMixerSnapshot startingSnapshot; // 1
private AudioMixerSnapshot musicFocusSnapshot; // 2
Here's what they do:
1. Holds a reference to the Starting snapshot.
2. Holds a reference to the MusicFocus snapshot.
At the top of the Start() method, just before the for loop, add:
startingSnapshot = mixer.FindSnapshot("Starting"); // 1
musicFocusSnapshot = mixer.FindSnapshot("MusicFocus"); // 2
1. Finds the Snapshot named Starting and stores it in the startingSnapshot reference variable.
2. Finds the Snapshot named MusicFocus and stores it in the musicFocusSnapshot reference variable.

Then create two new methods in the AudioManager class that will be used to transition between each snapshot:
public void SnapshotStarting()
{
startingSnapshot.TransitionTo(.5f);
}
public void SnapshotMusic()
{
musicFocusSnapshot.TransitionTo(.5f);
}
TransitionTo will interpolate from the current snapshot to the invoking snapshot over the specified time interval.
Save your script and return to the Unity Editor. Find the StartingSnapShot game object in the Hierarchy window.
In the Inspector window, under the Button component, hook the button into the newly created SnapshotStarting() method.
Then do the same with the MusicFocusSnapshot game object, but with the SnapshotMusic() method.
Save and run your project. When you click the Music Focus button, you should see the volume of the Music group transition up, and the SFX group transition down. You can then transition back to the starting snapshot by clicking the Starting Snapshot button.
In case you missed anything, you can download the final project for this tutorial here.
That wraps up this introduction to the Unity Audio Mixer, and all of its major components.
I'd love to see the neat effects you come up with, so I encourage you to try out the other effects not covered by this tutorial! Also, have a go at using multiple Audio Mixers by routing one into another, for organizational and layering benefits.
If you have any questions or comments, please join in the forum discussion below!
The post Audio tutorial for Unity: the Audio Mixer appeared first on Ray Wenderlich.
In this screencast, you'll learn how to add and remove third party dependencies by way of Git submodules.
The post Git: Third Party Dependencies with Submodules appeared first on Ray Wenderlich.
Yoga is a cross-platform layout engine based on Flexbox that makes working with layouts easy. Instead of using Auto Layout for iOS or using Cascading Style Sheets (CSS) on the web, you can use Yoga as a common layout system.
Initially launched as css-layout, an open source library from Facebook in 2014, it was revamped and rebranded as Yoga in 2016. Yoga supports multiple platforms including Java, C#, C, and Swift.
Library developers can incorporate Yoga into their layout systems, as Facebook has done for two of their open source projects: React Native and Litho. However, Yoga also exposes a framework that iOS developers can directly use for laying out views.
In this tutorial, you’ll work through core Yoga concepts then practice and expand them in building the FlexAndChill app.
Even though you’ll be using the Yoga layout engine, it will be helpful for you to be familiar with Auto Layout before reading this tutorial. You’ll also want to have a working knowledge of CocoaPods to include Yoga in your project.
Flexbox, also referred to as CSS Flexible Box, was introduced to handle complex layouts on the web. One key feature is the efficient layout of content in a given direction and the “flexing” of its size to fit a certain space.
Flexbox consists of flex containers, each having one or more flex items:
Flexbox defines how flex items are laid out inside of a flex container. Content outside of the flex container and inside of a flex item is rendered as usual.
Flex items are laid out in a single direction inside of a container (although they can be optionally wrapped). This sets the main axis for the items. The opposite direction is known as the cross axis.
Flexbox allows you to specify how items are positioned and spaced on the main axis and the cross axis. justify-content specifies the alignment of items along the container’s main axis. The example below shows item placements when the container’s flex direction is row:
flex-start: Items are positioned at the beginning of the container.
flex-end: Items are positioned at the end of the container.
center: Items are positioned in the middle of the container.
space-between: Items are evenly spaced, with the first item placed at the beginning and the last item placed at the end of the container.
space-around: Items are evenly spaced, with equal spacing around them.

align-items specifies the alignment of items along the container’s cross axis. The example shows item placements when the container’s flex direction is row, which means the cross axis runs vertically:
The items are vertically aligned at the beginning, center, or end of the container.
These initial Flexbox properties should give you a feel for how Flexbox works. There are many more you can work with. Some control how an item stretches or shrinks relative to the available container space. Others can set the padding, margin, or even size.
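Later in this tutorial you’ll set these same properties from Swift with YogaKit. As a quick preview, flex-direction, justify-content, and align-items map directly onto YGLayout properties; here’s a sketch using the YogaKit API covered below:
import UIKit
import YogaKit
// The Flexbox properties above map directly onto YGLayout properties.
let container = UIView()
container.configureLayout { layout in
  layout.isEnabled = true
  layout.flexDirection = .row            // sets the main axis (flex-direction: row)
  layout.justifyContent = .spaceBetween  // spacing along the main axis (justify-content)
  layout.alignItems = .center            // alignment along the cross axis (align-items)
}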
A perfect place to try out Flexbox concepts is jsFiddle, an online playground for JavaScript, HTML and CSS.
Go to this starter JSFiddle and take a look. You should see four panes:
The code in the three editors drive the output you see in the lower right pane. The starter example displays a white box.
Note the yoga class selector defined in the CSS editor. Its rules represent the CSS defaults that Yoga implements. Some of the values differ from the Flexbox W3C specification defaults. For example, Yoga defaults to a column flex direction, and items are positioned at the start of the container. Any HTML elements that you style via class="yoga" will start off in “Yoga” mode.
Check out the HTML source:
<div class="yoga"
style="width: 400px; height: 100px; background-color: white; flex-direction:row;">
</div>
The div’s basic style is yoga. Additional style properties set the size and background, and override the default flex direction so that items will flow in a row.
In the HTML editor, add the following code just above the closing div tag:
<div class="yoga" style="background-color: #cc0000; width: 80px;"></div>
This adds a yoga-styled, 80-pixel-wide red box to the div container.
Tap Run in the top menu. You should see the following output:
Add the following child element to the root div, right after the red box’s div:
<div class="yoga" style="background-color: #0000cc; width: 80px;"></div>
This adds an 80-pixel wide blue box.
Tap Run. The updated output shows the blue box stacked to the right of the red box:
Replace the blue box’s div code with the following:
<div class="yoga" style="background-color: #0000cc; width: 80px; flex-grow: 1;"></div>
The additional flex-grow property allows the box to expand and fill any available space.
Tap Run to see the updated output with the blue box stretched out:
Replace the entire HTML source with the following:
<div class="yoga"
style="width: 400px; height: 100px; background-color: white; flex-direction:row; padding: 10px;">
<div class="yoga" style="background-color: #cc0000; width: 80px; margin-right: 10px;"></div>
<div class="yoga" style="background-color: #0000cc; width: 80px; flex-grow: 1; height: 25px; align-self: center;"></div>
</div>
This adds padding inside the container around the child items, a right margin to the red box, and a fixed height for the blue box, and it centers the blue box vertically within the container.
Tap Run to view the resulting output:
You can view the final jsFiddle here. Feel free to play around with other layout properties and values.
Even though Yoga is based on Flexbox, there are some differences.
Yoga doesn’t implement all of CSS Flexbox. It skips non-layout properties such as setting the color. Yoga has modified some Flexbox properties to provide better Right-to-Left support. Lastly, Yoga has added a new AspectRatio property to handle a common need when laying out certain elements such as images.
While you may want to stay in wonderful world wide web-land, this is a Swift tutorial. Fear not, the Yoga API will keep you basking in the afterglow of Flexbox familiarity. You’ll be able to apply your Flexbox learnings to your Swift app layout.
Yoga is written in C, primarily to optimize performance and for easy integration with other platforms. To develop iOS apps, you’ll be working with YogaKit, which is a wrapper around the C implementation.
Recall that in the Flexbox web examples, layout was configured via style attributes. With YogaKit, layout configuration is done through a YGLayout object. YGLayout includes properties for flex direction, justify content, align items, padding, and margin.

YogaKit exposes YGLayout as a category on UIView. The category adds a configureLayout(block:) method to UIView. The block closure takes in a YGLayout parameter and uses that info to configure the view’s layout properties.
You build up your layout by configuring each participating view with the desired Yoga properties. Once done, you call applyLayout(preservingOrigin:) on the root view’s YGLayout. This calculates and applies the layout to the root view and its subviews.
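Put together, the general pattern looks something like this (a bare-bones sketch; you’ll build up the real layout step by step below):
import UIKit
import YogaKit
let root = UIView(frame: UIScreen.main.bounds)
root.configureLayout { layout in
  layout.isEnabled = true         // opt this view into Yoga layout
  layout.flexDirection = .column
  layout.justifyContent = .center
}
let child = UIView()
child.configureLayout { layout in
  layout.isEnabled = true
  layout.width = 100
  layout.height = 100
}
root.addSubview(child)
// Calculate and apply frames for root and all of its Yoga-enabled subviews.
root.yoga.applyLayout(preservingOrigin: true)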
Create a new Swift iPhone project with the Single View Application template named YogaTryout.
You’ll be creating your UI programmatically so you won’t need to use storyboards.
Open Info.plist and delete the Main storyboard file base name property. Then, set the Launch screen interface file base name value to an empty string. Finally, delete Main.storyboard and LaunchScreen.storyboard.
Open AppDelegate.swift and add the following to application(_:didFinishLaunchingWithOptions:) before the return statement:
window = UIWindow(frame: UIScreen.main.bounds)
window?.rootViewController = ViewController()
window?.backgroundColor = .white
window?.makeKeyAndVisible()
Build and run the app. You should see a blank white screen.
Close the Xcode project.
Open Terminal and enter the following command to install CocoaPods if you don’t already have it:
sudo gem install cocoapods
In Terminal, go to the directory where YogaTryout.xcodeproj is located. Create a file named Podfile and set its content to the following:
platform :ios, '10.3'
use_frameworks!
target 'YogaTryout' do
pod 'YogaKit', '~> 1.5'
end
Run the following command in Terminal to install the YogaKit dependency:
pod install
You should see output similar to the following:
Analyzing dependencies
Downloading dependencies
Installing Yoga (1.5.0)
Installing YogaKit (1.5.0)
Generating Pods project
Integrating client project
[!] Please close any current Xcode sessions and use `YogaTryout.xcworkspace` for this project from now on.
Sending stats
Pod installation complete! There is 1 dependency from the Podfile and 2 total pods installed.
From this point onwards, you’ll be working with YogaTryout.xcworkspace.
Open YogaTryout.xcworkspace then build and run. You should still see a blank white screen.
Open ViewController.swift and add the following import:
import YogaKit
This imports the YogaKit framework.
Add the following to the end of viewDidLoad():
// 1
let contentView = UIView()
contentView.backgroundColor = .lightGray
// 2
contentView.configureLayout { (layout) in
// 3
layout.isEnabled = true
// 4
layout.flexDirection = .row
layout.width = 320
layout.height = 80
layout.marginTop = 40
layout.marginLeft = 10
}
view.addSubview(contentView)
// 5
contentView.yoga.applyLayout(preservingOrigin: true)
This code does the following:
1. Creates a UIView and sets its background color to light gray.
2. Configures the view’s layout via configureLayout(block:).
3. Enables Yoga layout for this view.
4. Sets the layout properties: a row flex direction, a 320×80 size, a 40-point top margin, and a 10-point left margin.
5. Calculates and applies the layout to contentView.

Build and run the app on an iPhone 7 Plus simulator. You should see a gray box:
Build and run the app on an iPhone 7 Plus simulator. You should see a gray box:
You may be scratching your head, wondering why you couldn’t have simply instantiated a UIView with the desired frame size and set its background color. Patience, my child. The magic starts when you add child items to this initial container.
Add the following to viewDidLoad(), just before the line that applies the layout to contentView:
let child1 = UIView()
child1.backgroundColor = .red
child1.configureLayout{ (layout) in
layout.isEnabled = true
layout.width = 80
}
contentView.addSubview(child1)
This code adds an 80-pixel-wide red box to contentView.
Now, add the following just after the previous code:
let child2 = UIView()
child2.backgroundColor = .blue
child2.configureLayout{ (layout) in
layout.isEnabled = true
layout.width = 80
layout.flexGrow = 1
}
contentView.addSubview(child2)
This adds a blue box to the container that’s 80 pixels wide but that’s allowed to grow to fill out any available space in the container. If this is starting to look familiar, it’s because you did something similar in jsFiddle.
Build and run. You should see the following:
Now, add the following statement to the layout configuration block for contentView:
layout.padding = 10
This sets a padding for all the child items.
Add the following to child1’s layout configuration block:
layout.marginRight = 10
This sets a right margin offset for the red box.
Finally, add the following to child2’s layout configuration block:
layout.height = 20
layout.alignSelf = .center
This sets the height of the blue box and aligns it to the center of its parent container.
Build and run. You should see the following:
What if you want to center the entire gray box horizontally? Well, you can enable Yoga on contentView’s parent view, which is self.view.
Add the following to viewDidLoad(), right after the call to super:
view.configureLayout { (layout) in
layout.isEnabled = true
layout.width = YGValue(self.view.bounds.size.width)
layout.height = YGValue(self.view.bounds.size.height)
layout.alignItems = .center
}
This enables Yoga for the root view and configures the layout width and height based on the view bounds. alignItems configures the child items to be center-aligned horizontally. Remember that alignItems specifies how a container’s child items are aligned in the cross axis. This container has the default column flex direction, so the cross axis is in the horizontal direction.
Remove the layout.marginLeft assignment in contentView’s layout configuration. It’s no longer needed, as you’ll be centering this item through its parent container.
Finally, replace:
contentView.yoga.applyLayout(preservingOrigin: true)
With the following:
view.yoga.applyLayout(preservingOrigin: true)
This will calculate and apply the layout to self.view and its subviews, which includes contentView.
Build and run. Note that the gray box is now centered horizontally:
Centering the gray box vertically on the screen is just as simple. Add the following to the layout configuration block for self.view:
layout.justifyContent = .center
Remove the layout.marginTop assignment in contentView’s layout configuration. It won’t be needed since the parent is controlling the vertical alignment.
Build and run. You should now see the gray box center-aligned both horizontally and vertically:
Rotate the device to landscape mode. Uh-oh, you’ve lost your center:
Fortunately, there’s a way to get notified about device orientation changes to help resolve this.
Add the following method to the end of the class:
override func viewWillTransition(
to size: CGSize,
with coordinator: UIViewControllerTransitionCoordinator) {
super.viewWillTransition(to: size, with: coordinator)
// 1
view.configureLayout{ (layout) in
layout.width = YGValue(size.width)
layout.height = YGValue(size.height)
}
// 2
view.yoga.applyLayout(preservingOrigin: true)
}
The code does the following:
1. Updates the root view’s layout width and height to the size being transitioned to.
2. Re-calculates and re-applies the layout to the root view and its subviews.

Rotate the device back to portrait mode. Build and run the app. Rotate the device to landscape mode. The gray box should now be properly centered:
You can download the final tryout project here if you wish to compare with your code.
Granted, you’re probably mumbling under your breath about how you could have built this layout in less than three minutes with Interface Builder, including properly handling rotations:
You’ll want to give Yoga a fresh look when your layout starts to become more complicated than you’d like and things like embedded stack views are giving you fits.
On the other hand, you may have long abandoned Interface Builder for programmatic layout approaches like layout anchors or the Visual Format Language. If those are working for you, no need to change. Keep in mind that the Visual Format Language doesn’t support aspect ratios whereas Yoga does.
Yoga is also easier to grasp once you understand Flexbox. There are many resources where you can quickly try out Flexbox layouts before building them out on iOS with Yoga.
Your joy of building white, red, and blue boxes has probably worn thin. Time to shake it up a bit. In the following section, you’ll take your newly minted Yoga skills to create a view similar to the following:
Download and explore the starter project. It already includes the YogaKit dependency. The other main classes are:
Build and run the app. You should see a black screen.
Here’s a wireframe breakdown of the desired layout to help plan things out:
Let’s quickly dissect the layout for each box in the diagram:
As you build each piece of the layout you’ll get a better feel for additional Yoga properties and how to fine tune a layout.
Open ViewController.swift and add the following to viewDidLoad(), just after the shows are loaded from the plist:
let show = shows[showSelectedIndex]
This sets the show to be displayed.
Yoga introduces an aspectRatio property to help lay out a view if an item’s aspect ratio is known. aspectRatio represents the width-to-height ratio.
Add the following code right after contentView is added to its parent:
// 1
let episodeImageView = UIImageView(frame: .zero)
episodeImageView.backgroundColor = .gray
// 2
let image = UIImage(named: show.image)
episodeImageView.image = image
// 3
let imageWidth = image?.size.width ?? 1.0
let imageHeight = image?.size.height ?? 1.0
// 4
episodeImageView.configureLayout { (layout) in
layout.isEnabled = true
layout.flexGrow = 1.0
layout.aspectRatio = imageWidth / imageHeight
}
contentView.addSubview(episodeImageView)
Let’s go through the code step-by-step:
1. Creates a UIImageView and gives it a gray background.
2. Sets the image view’s image to the selected show’s artwork.
3. Reads the image’s width and height, falling back to 1.0 if the image is missing.
4. Configures the layout: enables Yoga, sets flexGrow to 1.0, and sets the aspectRatio based on the image size.

Build and run the app. You should see the image stretch vertically yet respect the image’s aspect ratio:
Thus far you’ve seen flexGrow applied to one item in a container. You stretched the blue box in a previous example by setting its flexGrow property to 1.

If more than one child sets a flexGrow property, the child items are first laid out based on the space they need. Each child’s flexGrow is then used to distribute the remaining space.
In the series summary view, you’ll lay out the child items so that the middle section takes up twice as much left over space as the other two sections.
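As a standalone illustration (not code to add to the project), two children that each need 100 points but have flexGrow values of 1 and 2 split the leftover space one-third / two-thirds:
import UIKit
import YogaKit
let row = UIView(frame: CGRect(x: 0, y: 0, width: 400, height: 50))
row.configureLayout { layout in
  layout.isEnabled = true
  layout.flexDirection = .row
}
let first = UIView()
first.configureLayout { layout in
  layout.isEnabled = true
  layout.width = 100
  layout.flexGrow = 1   // gets 1 share of the 200 leftover points
}
let second = UIView()
second.configureLayout { layout in
  layout.isEnabled = true
  layout.width = 100
  layout.flexGrow = 2   // gets 2 shares of the leftover points
}
row.addSubview(first)
row.addSubview(second)
row.yoga.applyLayout(preservingOrigin: true)
// first ends up roughly 167 points wide, second roughly 233 points wide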
Add the following after episodeImageView is added to its parent:
let summaryView = UIView(frame: .zero)
summaryView.configureLayout { (layout) in
layout.isEnabled = true
layout.flexDirection = .row
layout.padding = self.padding
}
This code specifies that the child items will be laid out in a row and include padding.
Add the following just after the previous code:
let summaryPopularityLabel = UILabel(frame: .zero)
summaryPopularityLabel.text = String(repeating: "★", count: showPopularity)
summaryPopularityLabel.textColor = .red
summaryPopularityLabel.configureLayout { (layout) in
layout.isEnabled = true
layout.flexGrow = 1.0
}
summaryView.addSubview(summaryPopularityLabel)
contentView.addSubview(summaryView)
This adds a popularity label and sets its flexGrow property to 1.
Build and run the app to view the popularity info:
Add the following code just above the line that adds summaryView to its parent:
let summaryInfoView = UIView(frame: .zero)
summaryInfoView.configureLayout { (layout) in
layout.isEnabled = true
layout.flexGrow = 2.0
layout.flexDirection = .row
layout.justifyContent = .spaceBetween
}
This sets up a new container view for the summary label child items. Note that the flexGrow property is set to 2. Therefore, summaryInfoView will take up twice as much remaining space as summaryPopularityLabel.
Now add the following code right after the previous block:
for text in [showYear, showRating, showLength] {
let summaryInfoLabel = UILabel(frame: .zero)
summaryInfoLabel.text = text
summaryInfoLabel.font = UIFont.systemFont(ofSize: 14.0)
summaryInfoLabel.textColor = .lightGray
summaryInfoLabel.configureLayout { (layout) in
layout.isEnabled = true
}
summaryInfoView.addSubview(summaryInfoLabel)
}
summaryView.addSubview(summaryInfoView)
This loops through the summary labels to display for a show. Each label is a child item of the summaryInfoView container. That container’s layout specifies that the labels be placed at the beginning, middle, and end.
Build and run the app to see the show’s labels:
To tweak the layout to get the spacing just right, you’ll add one more item to summaryView. Add the following code next:
let summaryInfoSpacerView =
UIView(frame: CGRect(x: 0, y: 0, width: 100, height: 1))
summaryInfoSpacerView.configureLayout { (layout) in
layout.isEnabled = true
layout.flexGrow = 1.0
}
summaryView.addSubview(summaryInfoSpacerView)
This serves as a spacer with flexGrow set to 1. summaryView has 3 child items. The first and third child items will take 25% of any remaining container space, while the second item will take 50% of the available space.
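Here’s the arithmetic behind those percentages: the three children’s flexGrow values are 1, 2 and 1, which sum to 4, so the popularity label and the spacer each receive 1/4 = 25% of the leftover space while summaryInfoView receives 2/4 = 50%.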
Build and run the app to see the properly tweaked layout:
Continue building the layout to see more spacing and positioning examples.
Add the following just after the summaryView code:
let titleView = UIView(frame: .zero)
titleView.configureLayout { (layout) in
layout.isEnabled = true
layout.flexDirection = .row
layout.padding = self.padding
}
let titleEpisodeLabel =
showLabelFor(text: selectedShowSeriesLabel,
font: UIFont.boldSystemFont(ofSize: 16.0))
titleView.addSubview(titleEpisodeLabel)
let titleFullLabel = UILabel(frame: .zero)
titleFullLabel.text = show.title
titleFullLabel.font = UIFont.boldSystemFont(ofSize: 16.0)
titleFullLabel.textColor = .lightGray
titleFullLabel.configureLayout { (layout) in
layout.isEnabled = true
layout.marginLeft = 20.0
layout.marginBottom = 5.0
}
titleView.addSubview(titleFullLabel)
contentView.addSubview(titleView)
The code sets up titleView as a container with two items for the show’s title.
Build and run the app to see the title:
Add the following code next:
let descriptionView = UIView(frame: .zero)
descriptionView.configureLayout { (layout) in
layout.isEnabled = true
layout.paddingHorizontal = self.paddingHorizontal
}
let descriptionLabel = UILabel(frame: .zero)
descriptionLabel.font = UIFont.systemFont(ofSize: 14.0)
descriptionLabel.numberOfLines = 3
descriptionLabel.textColor = .lightGray
descriptionLabel.text = show.detail
descriptionLabel.configureLayout { (layout) in
layout.isEnabled = true
layout.marginBottom = 5.0
}
descriptionView.addSubview(descriptionLabel)
This creates a container view with horizontal padding and adds a child item for the show’s detail.
Now, add the following code:
let castText = "Cast: \(showCast)";
let castLabel = showLabelFor(text: castText,
font: UIFont.boldSystemFont(ofSize: 14.0))
descriptionView.addSubview(castLabel)
let creatorText = "Creators: \(showCreators)"
let creatorLabel = showLabelFor(text: creatorText,
font: UIFont.boldSystemFont(ofSize: 14.0))
descriptionView.addSubview(creatorLabel)
contentView.addSubview(descriptionView)
This adds two items to descriptionView for more show details.
Build and run the app to see the complete description:
Next, you’ll add the show’s action views.
Add a private helper method to the ViewController extension:
func showActionViewFor(imageName: String, text: String) -> UIView {
let actionView = UIView(frame: .zero)
actionView.configureLayout { (layout) in
layout.isEnabled = true
layout.alignItems = .center
layout.marginRight = 20.0
}
let actionButton = UIButton(type: .custom)
actionButton.setImage(UIImage(named: imageName), for: .normal)
actionButton.configureLayout{ (layout) in
layout.isEnabled = true
layout.padding = 10.0
}
actionView.addSubview(actionButton)
let actionLabel = showLabelFor(text: text)
actionView.addSubview(actionLabel)
return actionView
}
This sets up a container view with an image and label that are center-aligned horizontally.
Now, add the following after the descriptionView code in viewDidLoad():
let actionsView = UIView(frame: .zero)
actionsView.configureLayout { (layout) in
layout.isEnabled = true
layout.flexDirection = .row
layout.padding = self.padding
}
let addActionView =
showActionViewFor(imageName: "add", text: "My List")
actionsView.addSubview(addActionView)
let shareActionView =
showActionViewFor(imageName: "share", text: "Share")
actionsView.addSubview(shareActionView)
contentView.addSubview(actionsView)
This creates a container view with two items created using showActionViewFor(imageName:text:).
Build and run the app to view the actions.
Time to lay out some tabs.
Add a new method to the ViewController extension:
func showTabBarFor(text: String, selected: Bool) -> UIView {
// 1
let tabView = UIView(frame: .zero)
tabView.configureLayout { (layout) in
layout.isEnabled = true
layout.alignItems = .center
layout.marginRight = 20.0
}
// 2
let tabLabelFont = selected ?
UIFont.boldSystemFont(ofSize: 14.0) :
UIFont.systemFont(ofSize: 14.0)
let fontSize: CGSize = text.size(attributes: [NSFontAttributeName: tabLabelFont])
// 3
let tabSelectionView =
UIView(frame: CGRect(x: 0, y: 0, width: fontSize.width, height: 3))
if selected {
tabSelectionView.backgroundColor = .red
}
tabSelectionView.configureLayout { (layout) in
layout.isEnabled = true
layout.marginBottom = 5.0
}
tabView.addSubview(tabSelectionView)
// 4
let tabLabel = showLabelFor(text: text, font: tabLabelFont)
tabView.addSubview(tabLabel)
return tabView
}
Going through the code step-by-step:
1. Set up a container view for the tab with its child items center-aligned and a right margin.
2. Choose a bold font when the tab is selected and a regular font otherwise, then measure the size of the tab text in that font.
3. Create the selection indicator view sized to the text width, give it a bottom margin, and color it red only when the tab is selected.
4. Create the tab label with the chosen font and add it to the container.
Add the following code after actionsView has been added to contentView (in viewDidLoad()):
let tabsView = UIView(frame: .zero)
tabsView.configureLayout { (layout) in
layout.isEnabled = true
layout.flexDirection = .row
layout.padding = self.padding
}
let episodesTabView = showTabBarFor(text: "EPISODES", selected: true)
tabsView.addSubview(episodesTabView)
let moreTabView = showTabBarFor(text: "MORE LIKE THIS", selected: false)
tabsView.addSubview(moreTabView)
contentView.addSubview(tabsView)
This sets up the tab container view and adds the tab items to the container.
Build and run the app to see your new tabs:
The tab selection is non-functional in this sample app. Most of the hooks are in place if you’re interested in adding it later.
You’re almost done. You just have to add the table view to the end.
Add the following code after tabsView has been added to contentView:
let showsTableView = UITableView()
showsTableView.delegate = self
showsTableView.dataSource = self
showsTableView.backgroundColor = backgroundColor
showsTableView.register(ShowTableViewCell.self,
forCellReuseIdentifier: showCellIdentifier)
showsTableView.configureLayout{ (layout) in
layout.isEnabled = true
layout.flexGrow = 1.0
}
contentView.addSubview(showsTableView)
This code creates and configures a table view. The layout configuration sets the flexGrow property to 1, allowing the table view to expand to fill out any remaining space.
Build and run the app. You should see a list of episodes included in the view:
Congratulations! If you’ve made it this far you’re practically a Yoga expert. Roll out your mat, grab the extra special stretch pants, and just breathe. You can download the final tutorial project here.
Check out the Yoga documentation for more details on additional properties not covered here, such as Right-to-Left support.
The Flexbox specification is a good resource for more background on Flexbox, and Flexbox learning resources is a really handy guide for exploring the different Flexbox properties.
I do hope you enjoyed reading this Yoga tutorial. If you have any comments or questions about this tutorial, please join the forum discussion below!
The post Yoga Tutorial: Using a Cross-Platform Layout Engine appeared first on Ray Wenderlich.
Learn how to dynamically change iOS app icons in this screencast starring everyone's favorite pink bird.
The post Screencast: Alternate App Icons: Getting Started appeared first on Ray Wenderlich.
We recently updated our Beginning iOS Animations course, and today we’re ready to announce that the sequel, Intermediate iOS Animations, has also been updated for Swift 3 and iOS 10!
If you worked through the Beginning iOS Animations course and are ready to advance your animation skills, then this 12-part intermediate course is for you! In this update, we’ve added new videos covering UIViewPropertyAnimator, a new way to animate views as of iOS 10. You’ll also find a solid introduction to animating layers with Core Animation. You’ll learn more advanced techniques to control animation timing and springs, how to create interactive animations, and more!
Let’s take a look at what’s inside this course:
Video 1: Introduction. Get a brief introduction to the topics of this course: property animators and layer animations!
Video 2: Beginning Property Animators. In this video, you’ll get started with the basics of using the UIViewPropertyAnimator class and create your first property animator animation.
Video 3: Intermediate Property Animators. Take property animators further by building up a peek and pop animation using keyframes, springs, and more.
Video 4: Interactive Property Animators. Learn about the animation state machine and wrap up your peek and pop animation by making it interactive.
Video 5: Basic Layer Animation. Learn about the basics of Core Animation, how to create your first layer animation, and send it off for rendering on screen.
Video 6: Core Animation Models. Find out more about how Core Animation works, and how to avoid common pitfalls when animating with layers.
Video 7: Animation Timing. Learn how to use fill modes to safely add delay to your layer animations. Find out how to use both predefined and custom easing curves.
Video 8: Animation Groups. Avoid duplicating animation code by learning to group layer animations together when they share common properties.
Video 9: Animation Delegate. Learn how to make use of CAAnimation delegate methods to react to the start and end of layer animations.
Video 10: Advanced Springs. Take more control over spring animations with additional parameters that can be applied to both layer animations and property animators.
Video 11: Layer Keyframes. Learn how to build multi-part keyframe layer animations by using CAKeyframeAnimation.
Video 12: Conclusion. Review what you’ve learned in this course and we’ll give you some parting advice on how to keep learning about animation in iOS.
Want to check out the course? You can watch the first two videos for free:
The rest of the course is for raywenderlich.com subscribers only. Here’s how you can get access:
We hope you enjoy, and stay tuned for more new courses and updates to come! :]
The post Updated Course: Intermediate iOS Animations appeared first on Ray Wenderlich.
Note: This tutorial uses Xcode 9.0 and Swift 4.
Introduced in iOS 6 and refined with new features in iOS 10, UICollectionView is the first-class choice to customize and animate the presentation of data collections in iOS applications.
A key entity associated with UICollectionView is the UICollectionViewLayout. The UICollectionViewLayout object is responsible for defining the attributes of all the elements of a collection view, such as cells, supplementary views and decoration views.
UIKit offers a default implementation of UICollectionViewLayout called UICollectionViewFlowLayout. This class lets you set up a grid layout with some elementary customizations.
This UICollectionViewLayout tutorial will teach you how to subclass and customize the UICollectionViewLayout class. It will also show you how to add custom supplementary views, as well as stretchy, sticky and parallax effects, to a collection view.
Note: This UICollectionViewLayout tutorial requires an intermediate knowledge of Swift 4.0, an advanced knowledge of UICollectionView, affine transforms and a clear understanding of how the core layout process works in the UICollectionViewLayout class. If you’re unfamiliar with any of these topics, you could read the official Apple documentation…
…or, you can check out some of the excellent tutorials on the site!
At the end of this UICollectionViewLayout tutorial you’ll be able to implement a UICollectionView like the following:
Are you ready to win the Jungle Cup? Let’s go!
Download the starter project for this tutorial and open it in Xcode. Build and run the project.
You’ll see some cute owls laid out in a standard UICollectionView with section headers and footers, like the following:
The app presents the Owls Team’s players who are taking part in the Jungle Soccer Cup 2017. Section headers show their roles in the team while footers display their collective strength.
Let’s have a closer look at the starter project:
Inside the JungleCupCollectionViewController.swift file, you’ll find the implementation of a UICollectionViewController subclass conforming to the UICollectionViewDataSource protocol. It implements all the required methods, plus the optional method for adding supplementary views.
JungleCupCollectionViewController also adopts MenuViewDelegate, a protocol that lets the collection view switch its data source.
In the Reusable Views folder, there are subclasses of UICollectionViewCell for the cells, and of UICollectionReusableView for the section header and section footer views. They link to their respective views designed in the Main.storyboard file.
Besides that, there are the custom supplementary views the CustomLayout requires. Both the HeaderView and MenuView classes are subclasses of UICollectionReusableView, and both are linked to their own .xib files.
The MockDataManager.swift file holds the data structures for all the teams. For convenience’s sake, the Xcode project embeds all the necessary assets.
The Custom Layout folder deserves special attention because it contains two important files:
CustomLayoutSettings.swift implements a structure with all the layout settings. The first group of settings deals with collection view’s elements sizes. The second group defines the layout behaviors, and the third sets up the layout spacings.
The CustomLayoutAttributes.swift file implements a UICollectionViewLayoutAttributes subclass named CustomLayoutAttributes. This class stores all the information the collection view needs to configure an element before displaying it.
It inherits the default attributes such as frame, transform, transform3D, alpha and zIndex from the superclass.
It also adds some new custom properties:
var parallax: CGAffineTransform = .identity
var initialOrigin: CGPoint = .zero
var headerOverlayAlpha = CGFloat(0)
parallax, initialOrigin and headerOverlayAlpha are custom properties you’ll use later in the implementation of the stretchy, sticky and parallax effects.
Note: When you subclass UICollectionViewLayoutAttributes, you must conform to NSCopying by implementing an appropriate method for copying your custom attributes to new instances. If you implement custom layout attributes, you must also override the inherited isEqual method to compare the values of your properties. Starting with iOS 7, the collection view does not apply layout attributes if those attributes have not changed.
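The starter project’s CustomLayoutAttributes already handles this for you, but here’s a rough sketch of what those two overrides typically look like for the custom properties above; it’s an illustrative example, not necessarily the exact starter code.
import UIKit

class CustomLayoutAttributes: UICollectionViewLayoutAttributes {
  var parallax: CGAffineTransform = .identity
  var initialOrigin: CGPoint = .zero
  var headerOverlayAlpha = CGFloat(0)

  // NSCopying: copy the custom properties onto the new instance.
  override func copy(with zone: NSZone? = nil) -> Any {
    let copiedAttributes = super.copy(with: zone) as! CustomLayoutAttributes
    copiedAttributes.parallax = parallax
    copiedAttributes.initialOrigin = initialOrigin
    copiedAttributes.headerOverlayAlpha = headerOverlayAlpha
    return copiedAttributes
  }

  // Compare the custom properties too, so the collection view re-applies
  // attributes whenever any of them changes.
  override func isEqual(_ object: Any?) -> Bool {
    guard let other = object as? CustomLayoutAttributes else { return false }
    return super.isEqual(object)
      && other.parallax == parallax
      && other.initialOrigin == initialOrigin
      && other.headerOverlayAlpha == headerOverlayAlpha
  }
}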
Currently the collection view can’t display all the teams yet. For the moment, supporters of Tigers, Parrots and Giraffes have to wait.
No worries. They will be back soon! CustomLayout will solve the problem :]
The main goal of a UICollectionViewLayout object is to provide information about the position and visual state of every element in a UICollectionView. Please keep in mind that a UICollectionViewLayout object isn’t responsible for creating the cells or supplementary views. Its job is to provide them with the right attributes.
Creating a custom UICollectionViewLayout is a three-step process:
1. Subclass UICollectionViewLayout and declare all the properties you’ll need to perform the layout calculations.
2. Implement the CollectionViewLayout core process from scratch.
3. Make the collection view adopt the new CustomLayout class.
Inside the Custom Layout group you can find a Swift file named CustomLayout.swift, which contains a CustomLayout class stub. Within this class you’ll implement the UICollectionViewLayout subclass and all the Core Layout processes.
First, declare all the properties CustomLayout needs to calculate the attributes.
import UIKit
final class CustomLayout: UICollectionViewLayout {
// 1
enum Element: String {
case header
case menu
case sectionHeader
case sectionFooter
case cell
var id: String {
return self.rawValue
}
var kind: String {
return "Kind\(self.rawValue.capitalized)"
}
}
// 2
override public class var layoutAttributesClass: AnyClass {
return CustomLayoutAttributes.self
}
// 3
override public var collectionViewContentSize: CGSize {
return CGSize(width: collectionViewWidth, height: contentHeight)
}
// 4
var settings = CustomLayoutSettings()
private var oldBounds = CGRect.zero
private var contentHeight = CGFloat()
private var cache = [Element: [IndexPath: CustomLayoutAttributes]]()
private var visibleLayoutAttributes = [CustomLayoutAttributes]()
private var zIndex = 0
// 5
private var collectionViewHeight: CGFloat {
return collectionView!.frame.height
}
private var collectionViewWidth: CGFloat {
return collectionView!.frame.width
}
private var cellHeight: CGFloat {
guard let itemSize = settings.itemSize else {
return collectionViewHeight
}
return itemSize.height
}
private var cellWidth: CGFloat {
guard let itemSize = settings.itemSize else {
return collectionViewWidth
}
return itemSize.width
}
private var headerSize: CGSize {
guard let headerSize = settings.headerSize else {
return .zero
}
return headerSize
}
private var menuSize: CGSize {
guard let menuSize = settings.menuSize else {
return .zero
}
return menuSize
}
private var sectionsHeaderSize: CGSize {
guard let sectionsHeaderSize = settings.sectionsHeaderSize else {
return .zero
}
return sectionsHeaderSize
}
private var sectionsFooterSize: CGSize {
guard let sectionsFooterSize = settings.sectionsFooterSize else {
return .zero
}
return sectionsFooterSize
}
private var contentOffset: CGPoint {
return collectionView!.contentOffset
}
}
That’s a fair chunk of code, but it’s fairly straightforward once you break it down:
1. An enum is a good choice for defining all the elements of the CustomLayout. This prevents you from using strings. Remember the golden rule? No strings = no typos.
2. The layoutAttributesClass computed property provides the class to use for the attributes instances. You must return classes of type CustomLayoutAttributes: the custom class found in the starter project.
3. Every UICollectionViewLayout subclass must override the collectionViewContentSize computed property.
4. CustomLayout needs all these properties in order to prepare the attributes. They’re all private except settings, since settings could be set up by an external object.
Now that you’re done with declarations, you can focus on the Core Layout process implementation.
The collection view works directly with your CustomLayout object to manage the overall layout process. For example, the collection view asks for layout information when it’s first displayed or resized.
During the layout process, the collection view calls the required methods of the CustomLayout object. Other optional methods may be called under specific circumstances, like animated updates. These methods are your chance to calculate the position of items and to provide the collection view with the information it needs.
The first two required methods to override are:
prepare()
shouldInvalidateLayout(forBoundsChange:)
prepare() is your opportunity to perform whatever calculations are needed to determine the position of the elements in the layout. shouldInvalidateLayout(forBoundsChange:) is where you define how and when the CustomLayout object needs to perform the core process again.
Let’s start by implementing prepare().
Open CustomLayout.swift and add the following extension to the end of the file:
// MARK: - LAYOUT CORE PROCESS
extension CustomLayout {
override public func prepare() {
// 1
guard let collectionView = collectionView,
cache.isEmpty else {
return
}
// 2
prepareCache()
contentHeight = 0
zIndex = 0
oldBounds = collectionView.bounds
let itemSize = CGSize(width: cellWidth, height: cellHeight)
// 3
let headerAttributes = CustomLayoutAttributes(
forSupplementaryViewOfKind: Element.header.kind,
with: IndexPath(item: 0, section: 0)
)
prepareElement(size: headerSize, type: .header, attributes: headerAttributes)
// 4
let menuAttributes = CustomLayoutAttributes(
forSupplementaryViewOfKind: Element.menu.kind,
with: IndexPath(item: 0, section: 0))
prepareElement(size: menuSize, type: .menu, attributes: menuAttributes)
// 5
for section in 0 ..< collectionView.numberOfSections {
let sectionHeaderAttributes = CustomLayoutAttributes(
forSupplementaryViewOfKind: UICollectionElementKindSectionHeader,
with: IndexPath(item: 0, section: section))
prepareElement(
size: sectionsHeaderSize,
type: .sectionHeader,
attributes: sectionHeaderAttributes)
for item in 0 ..< collectionView.numberOfItems(inSection: section) {
let cellIndexPath = IndexPath(item: item, section: section)
let attributes = CustomLayoutAttributes(forCellWith: cellIndexPath)
let lineInterSpace = settings.minimumLineSpacing
attributes.frame = CGRect(
x: 0 + settings.minimumInteritemSpacing,
y: contentHeight + lineInterSpace,
width: itemSize.width,
height: itemSize.height
)
attributes.zIndex = zIndex
contentHeight = attributes.frame.maxY
cache[.cell]?[cellIndexPath] = attributes
zIndex += 1
}
let sectionFooterAttributes = CustomLayoutAttributes(
forSupplementaryViewOfKind: UICollectionElementKindSectionFooter,
with: IndexPath(item: 1, section: section))
prepareElement(
size: sectionsFooterSize,
type: .sectionFooter,
attributes: sectionFooterAttributes)
}
// 6
updateZIndexes()
}
}
Taking each commented section in turn:
1. Check whether the cache dictionary is empty or not. This is crucial so you don’t mix up old and new attributes instances.
2. If the cache dictionary is empty, properly initialize it by calling prepareCache(). This will be implemented after this explanation.
3. Prepare the header’s attributes first. You create an instance of the CustomLayoutAttributes class and then pass it to prepareElement(size:type:attributes:). Again, you’ll implement this method later. For the moment, keep in mind that each time you create a custom element you have to call this method in order to cache its attributes correctly.
4. Prepare the menu’s attributes the same way as before.
5. For every item in every section of the collection view you:
  - Prepare the attributes for the section’s header.
  - Create the attributes for the items, setting each cell’s frame and zIndex for its indexPath.
  - Update the contentHeight of the UICollectionView.
  - Store the newly created attributes in the cache dictionary using the type (in this case a cell) and indexPath of the element as keys.
  - Prepare the attributes for the section’s footer.
6. Update the zIndex values. You’re going to discover details later about updateZIndexes() and you’ll learn why it’s important to do that.
Next, add the following method just below prepare():
override public func shouldInvalidateLayout(forBoundsChange newBounds: CGRect) -> Bool {
if oldBounds.size != newBounds.size {
cache.removeAll(keepingCapacity: true)
}
return true
}
Inside shouldInvalidateLayout(forBoundsChange:), you have to define how and when you want to invalidate the calculations performed by prepare(). The collection view calls this method every time its bounds property changes. Note that the collection view’s bounds property changes every time the user scrolls.
You always return true; if the bounds size changes, which means the collection view transitioned from portrait to landscape mode or vice versa, you purge the cache dictionary too.
A cache purge is necessary because a change of the device’s orientation triggers a redrawing of the collection view’s frame. As a consequence, all the stored attributes won’t fit inside the new collection view’s frame.
Next, you’re going to implement all the methods called inside prepare() that you haven’t yet implemented.
Add the following to the bottom of the extension:
private func prepareCache() {
cache.removeAll(keepingCapacity: true)
cache[.header] = [IndexPath: CustomLayoutAttributes]()
cache[.menu] = [IndexPath: CustomLayoutAttributes]()
cache[.sectionHeader] = [IndexPath: CustomLayoutAttributes]()
cache[.sectionFooter] = [IndexPath: CustomLayoutAttributes]()
cache[.cell] = [IndexPath: CustomLayoutAttributes]()
}
The first thing this method does is empty the cache dictionary. Next, it resets all the nested dictionaries, one for each element family, using the element type as the primary key. The indexPath will be the secondary key used to identify the cached attributes.
Next, you’re going to implement prepareElement(size:type:attributes:).
Add the following definition to the end of the extension:
private func prepareElement(size: CGSize, type: Element, attributes: CustomLayoutAttributes) {
//1
guard size != .zero else {
return
}
//2
attributes.initialOrigin = CGPoint(x:0, y: contentHeight)
attributes.frame = CGRect(origin: attributes.initialOrigin, size: size)
// 3
attributes.zIndex = zIndex
zIndex += 1
// 4
contentHeight = attributes.frame.maxY
// 5
cache[type]?[attributes.indexPath] = attributes
}
Here’s a step-by-step explanation of what’s happening above:
1. Check whether the element has a size or not. If the element has no size, there’s no reason to cache its attributes.
2. Assign the current contentHeight as the element’s origin and back that origin value up in the attribute’s initialOrigin property. Having a backup of the initial position of the element will be necessary in order to calculate the parallax and sticky transforms later.
3. Assign a progressive zIndex value to prevent overlapping between different elements.
4. Update the contentHeight, since you’ve added a new element to your UICollectionView. A smart way to perform this update is by assigning the attribute’s frame maxY value to the contentHeight property.
5. Store the newly created attributes in the cache dictionary using the element type and indexPath as unique keys.
Finally, it’s time to implement updateZIndexes(), called at the end of prepare().
Add the following to the bottom of the extension:
private func updateZIndexes(){
guard let sectionHeaders = cache[.sectionHeader] else {
return
}
var sectionHeadersZIndex = zIndex
for (_, attributes) in sectionHeaders {
attributes.zIndex = sectionHeadersZIndex
sectionHeadersZIndex += 1
}
cache[.menu]?.first?.value.zIndex = sectionHeadersZIndex
}
This method assigns a progressive zIndex value to the section headers. The count starts from the last zIndex assigned to a cell. The greatest zIndex value is assigned to the menu’s attributes. This re-assignment is necessary to have a consistent sticky behaviour. If this method isn’t called, the cells of a given section will have a greater zIndex than the header of the section, causing ugly overlapping effects while scrolling.
To complete the CustomLayout class and make the layout core process work correctly, you need to implement some more required methods:
layoutAttributesForSupplementaryView(ofKind:at:)
layoutAttributesForItem(at:)
layoutAttributesForElements(in:)
The goal of these methods is to provide the right attributes to the right element at the right time. More specifically, the first two methods provide the collection view with the attributes for a specific supplementary view or a specific cell. The third one returns the layout attributes for the elements displayed at a given moment. Add the following extension to the end of CustomLayout.swift:
//MARK: - PROVIDING ATTRIBUTES TO THE COLLECTIONVIEW
extension CustomLayout {
//1
public override func layoutAttributesForSupplementaryView(
ofKind elementKind: String,
at indexPath: IndexPath) -> UICollectionViewLayoutAttributes? {
switch elementKind {
case UICollectionElementKindSectionHeader:
return cache[.sectionHeader]?[indexPath]
case UICollectionElementKindSectionFooter:
return cache[.sectionFooter]?[indexPath]
case Element.header.kind:
return cache[.header]?[indexPath]
default:
return cache[.menu]?[indexPath]
}
}
//2
override public func layoutAttributesForItem(
at indexPath: IndexPath) -> UICollectionViewLayoutAttributes? {
return cache[.cell]?[indexPath]
}
//3
override public func layoutAttributesForElements(
in rect: CGRect) -> [UICollectionViewLayoutAttributes]? {
visibleLayoutAttributes.removeAll(keepingCapacity: true)
for (_, elementInfos) in cache {
for (_, attributes) in elementInfos where attributes.frame.intersects(rect) {
visibleLayoutAttributes.append(attributes)
}
}
return visibleLayoutAttributes
}
}
Taking it comment-by-comment:
1. In layoutAttributesForSupplementaryView(ofKind:at:), you switch on the element kind property and return the cached attributes matching the correct kind and indexPath.
2. In layoutAttributesForItem(at:), you do exactly the same for the cells’ attributes.
3. In layoutAttributesForElements(in:), you empty the visibleLayoutAttributes array (where you’ll store the visible attributes). Next, iterate over all cached attributes and add only visible elements to the array. To determine whether an element is visible or not, test whether its frame intersects the collection view’s frame. Finally, return the visibleLayoutAttributes array.
Before building and running the project you need to:
1. Assign the new CustomLayout class to the collection view in the storyboard.
2. Make JungleCupCollectionViewController support the custom supplementary views.
Open Main.storyboard and select the Collection View Flow Layout in the Jungle Cup Collection View Controller Scene as shown below:
Next, open the Identity Inspector and change the Custom Class to CustomLayout, as shown below:
Next, open JungleCupCollectionViewController.swift.
Add the computed property customLayout to avoid verbose code duplication.
Your code should look like the following:
var customLayout: CustomLayout? {
return collectionView?.collectionViewLayout as? CustomLayout
}
Next, replace setupCollectionViewLayout() with the following:
private func setupCollectionViewLayout() {
guard let collectionView = collectionView,
let customLayout = customLayout else {
return
}
// 1
collectionView.register(
UINib(nibName: "HeaderView", bundle: nil),
forSupplementaryViewOfKind: CustomLayout.Element.header.kind,
withReuseIdentifier: CustomLayout.Element.header.id
)
collectionView.register(
UINib(nibName: "MenuView", bundle: nil),
forSupplementaryViewOfKind: CustomLayout.Element.menu.kind,
withReuseIdentifier: CustomLayout.Element.menu.id
)
// 2
customLayout.settings.itemSize = CGSize(width: collectionView.frame.width, height: 200)
customLayout.settings.headerSize = CGSize(width: collectionView.frame.width, height: 300)
customLayout.settings.menuSize = CGSize(width: collectionView.frame.width, height: 70)
customLayout.settings.sectionsHeaderSize = CGSize(width: collectionView.frame.width, height: 50)
customLayout.settings.sectionsFooterSize = CGSize(width: collectionView.frame.width, height: 50)
customLayout.settings.isHeaderStretchy = true
customLayout.settings.isAlphaOnHeaderActive = true
customLayout.settings.headerOverlayMaxAlphaValue = CGFloat(0)
customLayout.settings.isMenuSticky = true
customLayout.settings.isSectionHeadersSticky = true
customLayout.settings.isParallaxOnCellsEnabled = true
customLayout.settings.maxParallaxOffset = 60
customLayout.settings.minimumInteritemSpacing = 0
customLayout.settings.minimumLineSpacing = 3
}
Here’s what the code above does:
1. Registers the custom supplementary views’ UICollectionReusableView subclasses already implemented in the starter project.
2. Sets up the CustomLayout settings.
Before you build and run the app, add the following two case options to collectionView(_:viewForSupplementaryElementOfKind:at:) to handle the custom supplementary view types:
case CustomLayout.Element.header.kind:
let topHeaderView = collectionView.dequeueReusableSupplementaryView(
ofKind: kind,
withReuseIdentifier: CustomLayout.Element.header.id,
for: indexPath)
return topHeaderView
case CustomLayout.Element.menu.kind:
let menuView = collectionView.dequeueReusableSupplementaryView(
ofKind: kind,
withReuseIdentifier: CustomLayout.Element.menu.id,
for: indexPath)
if let menuView = menuView as? MenuView {
menuView.delegate = self
}
return menuView
Well done! It was a long journey, but you're almost done.
Build and run the project! You should see something similar to the following:
The UICollectionView from the starter project now has some extra features:
You’ve already done a good job, but you can do better. It’s time to go for some nice visual effects to dress up your UICollectionView.
In the final section of this UICollectionViewLayout tutorial, you’re going to add the following visual effects: a stretchy header with a darkening overlay, a sticky menu, sticky section headers, and a parallax effect on the cells.
Note: If you’re not familiar with CGAffineTransform, you can check out this tutorial before continuing. The following part of the UICollectionViewLayout tutorial implies a basic knowledge of affine transforms.
The Core Graphics CGAffineTransform API is the best way to apply visual effects to the elements of a UICollectionView.
Affine transforms are quite useful for a variety of reasons; among other things, they play nicely with standard UIKit components and Auto Layout.
The math behind affine transforms is really cool. However, explaining how matrices work behind the scenes of CGAffineTransform is out of scope for this UICollectionViewLayout tutorial.
If you’re interested in this topic, you can find more details in Apple’s Core Graphic Framework Documentation.
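As a quick refresher, an affine transform lets you scale and translate how a view is rendered without touching its underlying layout. The snippet below is purely illustrative; someView is a hypothetical UIView, not part of the tutorial project.
import UIKit

let someView = UIView(frame: CGRect(x: 0, y: 0, width: 100, height: 100))
let scale = CGAffineTransform(scaleX: 1.2, y: 1.2)           // enlarge by 20%
let translation = CGAffineTransform(translationX: 0, y: 40)  // push down 40 points
someView.transform = scale.concatenating(translation)        // apply both at once
someView.transform = .identity                               // removes the effect again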
Open CustomLayout.swift and update layoutAttributesForElements(in:) to the following:
override public func layoutAttributesForElements(
in rect: CGRect) -> [UICollectionViewLayoutAttributes]? {
guard let collectionView = collectionView else {
return nil
}
visibleLayoutAttributes.removeAll(keepingCapacity: true)
// 1
let halfHeight = collectionViewHeight * 0.5
let halfCellHeight = cellHeight * 0.5
// 2
for (type, elementInfos) in cache {
for (indexPath, attributes) in elementInfos {
// 3
attributes.parallax = .identity
attributes.transform = .identity
// 4
updateSupplementaryViews(
type,
attributes: attributes,
collectionView: collectionView,
indexPath: indexPath)
if attributes.frame.intersects(rect) {
// 5
if type == .cell,
settings.isParallaxOnCellsEnabled {
updateCells(attributes, halfHeight: halfHeight, halfCellHeight: halfCellHeight)
}
visibleLayoutAttributes.append(attributes)
}
}
}
return visibleLayoutAttributes
}
Here’s a step-by-step explanation of what’s happening above:
1. Calculate useful values for half the collection view’s height and half the cell height.
2. Loop through every cached element type and its attributes.
3. Reset the parallax transform and the element attributes’ transform to .identity.
4. Update the supplementary views’ attributes for the sticky and stretchy effects.
5. If the element is visible and it’s a cell with the parallax effect enabled, update its attributes, then append the attributes to the visible array.
Next, it’s time to implement the two methods called in the above loop:
updateSupplementaryViews(_:attributes:collectionView:indexPath:)
updateCells(_:halfHeight:halfCellHeight:)
Add the following:
private func updateSupplementaryViews(_ type: Element,
attributes: CustomLayoutAttributes,
collectionView: UICollectionView,
indexPath: IndexPath) {
// 1
if type == .sectionHeader,
settings.isSectionHeadersSticky {
let upperLimit =
CGFloat(collectionView.numberOfItems(inSection: indexPath.section))
* (cellHeight + settings.minimumLineSpacing)
let menuOffset = settings.isMenuSticky ? menuSize.height : 0
attributes.transform = CGAffineTransform(
translationX: 0,
y: min(upperLimit,
max(0, contentOffset.y - attributes.initialOrigin.y + menuOffset)))
}
// 2
else if type == .header,
settings.isHeaderStretchy {
let updatedHeight = min(
collectionView.frame.height,
max(headerSize.height, headerSize.height - contentOffset.y))
let scaleFactor = updatedHeight / headerSize.height
let delta = (updatedHeight - headerSize.height) / 2
let scale = CGAffineTransform(scaleX: scaleFactor, y: scaleFactor)
let translation = CGAffineTransform(
translationX: 0,
y: min(contentOffset.y, headerSize.height) + delta)
attributes.transform = scale.concatenating(translation)
if settings.isAlphaOnHeaderActive {
attributes.headerOverlayAlpha = min(
settings.headerOverlayMaxAlphaValue,
contentOffset.y / headerSize.height)
}
}
// 3
else if type == .menu,
settings.isMenuSticky {
attributes.transform = CGAffineTransform(
translationX: 0,
y: max(attributes.initialOrigin.y, contentOffset.y) - headerSize.height)
}
}
Taking each numbered comment in turn:
1. If the element is a section header and sticky section headers are enabled, translate the header down by how far the content has scrolled past its initial origin (plus the menu height when the menu is sticky), capping the translation at an upper limit so it stops at the end of its section.
2. If the element is the top header and the stretchy effect is enabled, compute an updated height and a scale factor, then build a scale and a translation transform and concatenate them. If the alpha effect is active, also compute the headerOverlayAlpha from the scroll offset.
3. If the element is the menu and the sticky behavior is enabled, translate the menu so it stays pinned below the header while scrolling. Finally, assign the calculated value to the attributes’ transform property.
Now it’s time to transform the collection view cells:
private func updateCells(_ attributes: CustomLayoutAttributes,
halfHeight: CGFloat,
halfCellHeight: CGFloat) {
// 1
let cellDistanceFromCenter = attributes.center.y - contentOffset.y - halfHeight
// 2
let parallaxOffset = -(settings.maxParallaxOffset * cellDistanceFromCenter)
/ (halfHeight + halfCellHeight)
// 3
let boundedParallaxOffset = min(
max(-settings.maxParallaxOffset, parallaxOffset),
settings.maxParallaxOffset)
// 4
attributes.parallax = CGAffineTransform(translationX: 0, y: boundedParallaxOffset)
}
Here’s the play-by-play:
1. Compute the cell’s distance from the center of the collection view.
2. Compute a parallaxOffset proportional to that distance and to the maxParallaxOffset value (set in the layout settings).
3. Bound the parallaxOffset to avoid visual glitches.
4. Create a CGAffineTransform translation with the computed parallax value. Finally, assign the translation to the cell’s attributes’ parallax property.
, the image's frame should have top and bottom negative insets. In the starter project these constraints are set for you. You can check them in the Constraint inspector (see below).
Before building, you have to fix one final detail. Open JungleCupCollectionViewController.swift. Inside setupCollectionViewLayout()
change the following value:
customLayout.settings.headerOverlayMaxAlphaValue = CGFloat(0)
to the following:
customLayout.settings.headerOverlayMaxAlphaValue = CGFloat(0.6)
This value represents the maximum opacity value the layout can assign to the black overlay on the headerView
.
Build and run the project to appreciate all the visual effects. Let it scroll! Let it scroll! Let it scroll! :]
You can download the final project here with all of the code from the UICollectionViewLayout tutorial.
With a bit of code and some basic transforms, you’ve created a fully custom and settable UICollectionViewLayout you can reuse in your future projects for any need or purpose!
If you’re looking to learn more about custom UICollectionViewLayout, consider reading the Creating Custom Layouts section of the Collection View Programming Guide for iOS, which covers this subject extensively.
I hope you enjoyed this UICollectionViewLayout tutorial! If you have any questions or comments, feel free to join the discussion below in the forums.
(Credit for the vectorial animals used in the Jungle Cup logo go to: www.freevector.com)
The post Custom UICollectionViewLayout Tutorial With Parallax appeared first on Ray Wenderlich.
Next April, we are running our fourth annual iOS conference focused on high quality hands-on tutorials: RWDevCon 2018.
Today, the team and I are happy to announce that RWDevCon 2018 tickets are now available!
And good news – the first 75 people who buy tickets will get a $100 discount off the standard ticket price.
Keep reading to find out what makes RWDevCon special, and what’s in store this year!
The easiest way to see what makes RWDevCon special is to watch this video:
RWDevCon is designed around 4 main ideas:
1) Hands-On Experience
RWDevCon is unique in that it is focused on high quality hands-on tutorials. It has 3 simultaneous tracks of tutorials, leading to some challenging and fun choices on which to attend! :]
In each tutorial, you will follow along with a hands-on demo with the instructor:
Instead of just watching the instructor, you’ll code along with him or her so you can see things working for yourself, step-by-step.
We really think this hands-on experience is the best way to learn, and this way you won’t just leave with notes and references – you’ll leave with actual new skills.
If you are the type of person who learns best by doing, this is the conference for you!
2) Inspiration
After a long day’s work on hands-on tutorials, you’ll be ready for a break.
That’s why at the end of the day, we switch to something completely different: inspiration talks.
These are short 18-minute non-technical talks with the goal of giving you a new idea, some battle-won advice, and leaving you excited and energized.
3) Team Coordination
Just like we do for books and tutorials on this site, RWDevCon is highly coordinated as a team. This lets us:
4) Friendship
We believe one of the best parts about going to a conference is the people, and there’s plenty of opportunities to meet new friends.
We’ll have an opening reception before the conference begins to get to meet each other, board games at lunch, an awesome party and game show on Friday night, and a spectacular closing reception. We have some surprises up our sleeves this year too – you won’t want to miss it! :]
If you’re interested in getting a ticket, now’s the best time:
You can register now at the RWDevCon web site. We hope to see you there! :]
The post RWDevCon 2018: Tickets Now Available! appeared first on Ray Wenderlich.
If you’ve ever used Snapchat’s “Lenses” feature, you’ve used a combination of augmented reality and face detection.
Augmented reality — AR for short — is a technical and impressive-sounding term that simply describes real-world images overlaid with computer-generated ones. As for face detection, it’s nothing new for humans, but finding faces in images is still a new trick for computers, especially handheld ones.
Writing apps that feature AR and face detection used to require serious programming chops, but with Google’s Mobile Vision suite of libraries and its Face API, it’s much easier.
In this augmented reality tutorial, you’ll build a Snapchat Lens-like app called FaceSpotter. FaceSpotter draws cartoony features over faces in a camera feed.
In this tutorial, you’ll learn how to:
Google’s Face API performs face detection, which locates faces in pictures, along with their position (where they are in the picture) and orientation (which way they’re facing, relative to the camera). It can detect landmarks (points of interest on a face) and perform classifications to determine whether the eyes are open or closed, and whether or not a face is smiling. The Face API also detects and follows faces in moving images, which is known as face tracking.
Note that the Face API is limited to detecting human faces in pictures. Sorry, cat bloggers…
The Face API doesn’t perform face recognition, which connects a given face to an identity. It can’t perform that Facebook trick of detecting a face in an image and then identifying that person.
Once you’re able to detect a face, its position and its landmarks in an image, you can use that data to augment the image with your own reality! Apps like Pokemon GO, or Snapchat make use of augmented reality to give users a fun way to use their cameras, and so can you!
Download the FaceSpotter starter project here and open it in Android Studio. Build and run the app, and it will ask for permission to use the camera.
Click ALLOW, then point the camera at someone’s face.
The button in the app’s lower left-hand corner toggles between front and back camera.
This project was made so that you can start using face detection and tracking quickly. Let’s review what’s included.
Open the project’s build.gradle (Module: app):
At the end of the dependencies section, you’ll see the following:
compile 'com.google.android.gms:play-services-vision:10.2.0'
compile 'com.android.support:design:25.2.0'
The first of these lines imports the Android Vision API, which supports not just face detection, but barcode detection and text recognition as well.
The second brings in the Android Design Support Library, which provides the Snackbar widget that informs the user that the app needs access to the cameras.
FaceSpotter specifies that it uses the camera and requests the user’s permission to do so with these lines in AndroidManifest.xml:
<uses-feature android:name="android.hardware.camera" />
<uses-permission android:name="android.permission.CAMERA" />
The starter project comes with a few pre-defined classes, including FaceActivity, FaceGraphic, GraphicOverlay and FaceTracker, which passes face data to FaceGraphic.
Let’s take a moment to get familiar with how they work.
FaceActivity defines the app’s only activity and, along with handling touch events, requests permission to access the device’s camera at runtime (applies to Android 6.0 and above). FaceActivity also creates two objects which FaceSpotter depends on, namely CameraSource and FaceDetector.
Open FaceActivity.java and look for the createCameraSource method:
private void createCameraSource() {
Context context = getApplicationContext();
// 1
FaceDetector detector = createFaceDetector(context);
// 2
int facing = CameraSource.CAMERA_FACING_FRONT;
if (!mIsFrontFacing) {
facing = CameraSource.CAMERA_FACING_BACK;
}
// 3
mCameraSource = new CameraSource.Builder(context, detector)
.setFacing(facing)
.setRequestedPreviewSize(320, 240)
.setRequestedFps(60.0f)
.setAutoFocusEnabled(true)
.build();
}
Here’s what the above code does:
1. Creates a FaceDetector object, which detects faces in images from the camera’s data stream.
2. Chooses the front or back camera, based on mIsFrontFacing.
3. Builds a CameraSource for that camera and detector, requesting a 320x240 preview at 60 frames per second with autofocus enabled.
Now let’s check out the createFaceDetector method:
@NonNull
private FaceDetector createFaceDetector(final Context context) {
// 1
FaceDetector detector = new FaceDetector.Builder(context)
.setLandmarkType(FaceDetector.ALL_LANDMARKS)
.setClassificationType(FaceDetector.ALL_CLASSIFICATIONS)
.setTrackingEnabled(true)
.setMode(FaceDetector.FAST_MODE)
.setProminentFaceOnly(mIsFrontFacing)
.setMinFaceSize(mIsFrontFacing ? 0.35f : 0.15f)
.build();
// 2
MultiProcessor.Factory<Face> factory = new MultiProcessor.Factory<Face>() {
@Override
public Tracker<Face> create(Face face) {
return new FaceTracker(mGraphicOverlay, context, mIsFrontFacing);
}
};
// 3
Detector.Processor<Face> processor = new MultiProcessor.Builder<>(factory).build();
detector.setProcessor(processor);
// 4
if (!detector.isOperational()) {
Log.w(TAG, "Face detector dependencies are not yet available.");
// Check the device's storage. If there's little available storage, the native
// face detection library will not be downloaded, and the app won't work,
// so notify the user.
IntentFilter lowStorageFilter = new IntentFilter(Intent.ACTION_DEVICE_STORAGE_LOW);
boolean hasLowStorage = registerReceiver(null, lowStorageFilter) != null;
if (hasLowStorage) {
Log.w(TAG, getString(R.string.low_storage_error));
DialogInterface.OnClickListener listener = new DialogInterface.OnClickListener() {
public void onClick(DialogInterface dialog, int id) {
finish();
}
};
AlertDialog.Builder builder = new AlertDialog.Builder(this);
builder.setTitle(R.string.app_name)
.setMessage(R.string.low_storage_error)
.setPositiveButton(R.string.disappointed_ok, listener)
.show();
}
}
return detector;
}
Taking the above comment-by-comment:
1. Creates a FaceDetector object using the Builder pattern, and sets the following properties:
  - setLandmarkType: NO_LANDMARKS if it should not detect facial landmarks (this makes face detection faster), or ALL_LANDMARKS if landmarks should be detected.
  - setClassificationType: NO_CLASSIFICATIONS if it should not detect whether subjects’ eyes are open or closed or if they’re smiling (which speeds up face detection), or ALL_CLASSIFICATIONS if it should detect them.
  - setMode: FAST_MODE to detect fewer faces (but more quickly), or ACCURATE_MODE to detect more faces (but more slowly) and to detect the Euler Y angles of faces (we’ll cover this topic later).
  - setProminentFaceOnly: true to detect only the most prominent face in the frame.
2. Creates a factory that produces FaceTracker instances.
3. Creates a Processor. In this app, you’ll handle multiple faces, so you’ll create a MultiProcessor instance, which creates a new FaceTracker instance for each detected face. Once created, we connect the processor to the detector.
4. Checks whether the detector is operational; if the native face detection library isn’t available because the device is low on storage, the app notifies the user.
With the intro taken care of, it’s time to detect some faces!
First you add a view into the overlay to draw detected face data.
Open FaceGraphic.java. You may have noticed the declaration for the instance variable mFace
is marked with the keyword volatile
. mFace
stores face data sent from FaceTracker
, and may be written to by many threads. Marking it as volatile
guarantees that you always get the result of the latest “write” any time you read its value. This is important since face data will change very quickly.
Delete the existing draw() and add the following to FaceGraphic:
// 1
void update(Face face) {
mFace = face;
postInvalidate(); // Trigger a redraw of the graphic (i.e. cause draw() to be called).
}
@Override
public void draw(Canvas canvas) {
// 2
// Confirm that the face and its features are still visible
// before drawing any graphics over it.
Face face = mFace;
if (face == null) {
return;
}
// 3
float centerX = translateX(face.getPosition().x + face.getWidth() / 2.0f);
float centerY = translateY(face.getPosition().y + face.getHeight() / 2.0f);
float offsetX = scaleX(face.getWidth() / 2.0f);
float offsetY = scaleY(face.getHeight() / 2.0f);
// 4
// Draw a box around the face.
float left = centerX - offsetX;
float right = centerX + offsetX;
float top = centerY - offsetY;
float bottom = centerY + offsetY;
// 5
canvas.drawRect(left, top, right, bottom, mHintOutlinePaint);
// 6
// Draw the face's id.
canvas.drawText(String.format("id: %d", face.getId()), centerX, centerY, mHintTextPaint);
}
Here’s what that code does:
1. Whenever the FaceTracker instance gets an update on a tracked face, it calls its corresponding FaceGraphic instance’s update method and passes it information about that face. The method saves that information in mFace and then calls FaceGraphic’s parent class’ postInvalidate method, which forces the graphic to redraw.
2. The draw method checks to see if the face is still being tracked. If it is, mFace will be non-null.
3. FaceTracker provides camera coordinates, but you’re drawing to FaceGraphic’s view coordinates, so you use GraphicOverlay’s translateX and translateY methods to convert mFace’s camera coordinates to the view coordinates of the canvas.
methods.id
using the face’s center point as the starting coordinates.The face detector in FaceActivity
sends information about faces it detects in the camera’s data stream to its assigned multiprocessor. For each detected face, the multiprocessor spawns a new FaceTracker
instance.
Add the following methods to FaceTracker.java after the constructor:
// 1
@Override
public void onNewItem(int id, Face face) {
mFaceGraphic = new FaceGraphic(mOverlay, mContext, mIsFrontFacing);
}
// 2
@Override
public void onUpdate(FaceDetector.Detections<Face> detectionResults, Face face) {
mOverlay.add(mFaceGraphic);
mFaceGraphic.update(face);
}
// 3
@Override
public void onMissing(FaceDetector.Detections<Face> detectionResults) {
mOverlay.remove(mFaceGraphic);
}
@Override
public void onDone() {
mOverlay.remove(mFaceGraphic);
}
Here’s what each method does:
1. onNewItem is called when a new Face is detected and its tracking begins. You’re using it to create a new instance of FaceGraphic, which makes sense: when a new face is detected, you want to create new AR images to draw over it.
2. onUpdate adds the FaceGraphic instance to the GraphicOverlay and then calls FaceGraphic’s update method, which passes along the tracked face’s data.
3. onMissing and onDone remove the FaceGraphic instance from the overlay.
Run the app. It will draw a box around each face it detects, along with the corresponding ID number:
The Face API can identify the facial landmarks shown below.
You’ll modify the app so that it identifies the following for any tracked face: its position, width and height, and the positions of its eyes, nose base, cheeks, ears and mouth.
This information will be saved in a FaceData object, instead of the provided Face object.
For facial landmarks, “left” and “right” refer to the subject’s left and right. Viewed through the front camera, the subject’s right eye will be closer to the right side of the screen, but through the rear camera, it’ll be closer to the left.
Open FaceTracker.java and modify onUpdate() as shown below. The call to update() will momentarily cause a build error while you are in the process of modifying the app to use the FaceData model, and you will fix it soon.
@Override
public void onUpdate(FaceDetector.Detections detectionResults, Face face) {
mOverlay.add(mFaceGraphic);
// Get face dimensions.
mFaceData.setPosition(face.getPosition());
mFaceData.setWidth(face.getWidth());
mFaceData.setHeight(face.getHeight());
// Get the positions of facial landmarks.
updatePreviousLandmarkPositions(face);
mFaceData.setLeftEyePosition(getLandmarkPosition(face, Landmark.LEFT_EYE));
mFaceData.setRightEyePosition(getLandmarkPosition(face, Landmark.RIGHT_EYE));
mFaceData.setMouthBottomPosition(getLandmarkPosition(face, Landmark.LEFT_CHEEK));
mFaceData.setMouthBottomPosition(getLandmarkPosition(face, Landmark.RIGHT_CHEEK));
mFaceData.setNoseBasePosition(getLandmarkPosition(face, Landmark.NOSE_BASE));
mFaceData.setMouthBottomPosition(getLandmarkPosition(face, Landmark.LEFT_EAR));
mFaceData.setMouthBottomPosition(getLandmarkPosition(face, Landmark.LEFT_EAR_TIP));
mFaceData.setMouthBottomPosition(getLandmarkPosition(face, Landmark.RIGHT_EAR));
mFaceData.setMouthBottomPosition(getLandmarkPosition(face, Landmark.RIGHT_EAR_TIP));
mFaceData.setMouthLeftPosition(getLandmarkPosition(face, Landmark.LEFT_MOUTH));
mFaceData.setMouthBottomPosition(getLandmarkPosition(face, Landmark.BOTTOM_MOUTH));
mFaceData.setMouthRightPosition(getLandmarkPosition(face, Landmark.RIGHT_MOUTH));
mFaceGraphic.update(mFaceData);
}
Note that you’re now passing a FaceData instance to FaceGraphic’s update method instead of the Face instance that the onUpdate method receives.
This allows you to specify the face information passed to FaceTracker, which in turn lets you use some math trickery based on the last known locations of facial landmarks when the faces are moving too quickly to approximate their current locations. You use mPreviousLandmarkPositions and the getLandmarkPosition and updatePreviousLandmarkPositions methods for this purpose.
Now open FaceGraphic.java.
First, since it’s now receiving a FaceData value instead of a Face value from FaceTracker, you need to change a key instance variable declaration from:
private volatile Face mFace;
to:
private volatile FaceData mFaceData;
Modify update() to account for this change:
void update(FaceData faceData) {
mFaceData = faceData;
postInvalidate(); // Trigger a redraw of the graphic (i.e. cause draw() to be called).
}
And finally, you need to update draw() to draw dots over the landmarks of any tracked face, and identifying text over those dots:
@Override
public void draw(Canvas canvas) {
final float DOT_RADIUS = 3.0f;
final float TEXT_OFFSET_Y = -30.0f;
// Confirm that the face and its features are still visible before drawing any graphics over it.
if (mFaceData == null) {
return;
}
// 1
PointF detectPosition = mFaceData.getPosition();
PointF detectLeftEyePosition = mFaceData.getLeftEyePosition();
PointF detectRightEyePosition = mFaceData.getRightEyePosition();
PointF detectNoseBasePosition = mFaceData.getNoseBasePosition();
PointF detectMouthLeftPosition = mFaceData.getMouthLeftPosition();
PointF detectMouthBottomPosition = mFaceData.getMouthBottomPosition();
PointF detectMouthRightPosition = mFaceData.getMouthRightPosition();
if ((detectPosition == null) ||
(detectLeftEyePosition == null) ||
(detectRightEyePosition == null) ||
(detectNoseBasePosition == null) ||
(detectMouthLeftPosition == null) ||
(detectMouthBottomPosition == null) ||
(detectMouthRightPosition == null)) {
return;
}
// 2
float leftEyeX = translateX(detectLeftEyePosition.x);
float leftEyeY = translateY(detectLeftEyePosition.y);
canvas.drawCircle(leftEyeX, leftEyeY, DOT_RADIUS, mHintOutlinePaint);
canvas.drawText("left eye", leftEyeX, leftEyeY + TEXT_OFFSET_Y, mHintTextPaint);
float rightEyeX = translateX(detectRightEyePosition.x);
float rightEyeY = translateY(detectRightEyePosition.y);
canvas.drawCircle(rightEyeX, rightEyeY, DOT_RADIUS, mHintOutlinePaint);
canvas.drawText("right eye", rightEyeX, rightEyeY + TEXT_OFFSET_Y, mHintTextPaint);
float noseBaseX = translateX(detectNoseBasePosition.x);
float noseBaseY = translateY(detectNoseBasePosition.y);
canvas.drawCircle(noseBaseX, noseBaseY, DOT_RADIUS, mHintOutlinePaint);
canvas.drawText("nose base", noseBaseX, noseBaseY + TEXT_OFFSET_Y, mHintTextPaint);
float mouthLeftX = translateX(detectMouthLeftPosition.x);
float mouthLeftY = translateY(detectMouthLeftPosition.y);
canvas.drawCircle(mouthLeftX, mouthLeftY, DOT_RADIUS, mHintOutlinePaint);
canvas.drawText("mouth left", mouthLeftX, mouthLeftY + TEXT_OFFSET_Y, mHintTextPaint);
float mouthRightX = translateX(detectMouthRightPosition.x);
float mouthRightY = translateY(detectMouthRightPosition.y);
canvas.drawCircle(mouthRightX, mouthRightY, DOT_RADIUS, mHintOutlinePaint);
canvas.drawText("mouth right", mouthRightX, mouthRightY + TEXT_OFFSET_Y, mHintTextPaint);
float mouthBottomX = translateX(detectMouthBottomPosition.x);
float mouthBottomY = translateY(detectMouthBottomPosition.y);
canvas.drawCircle(mouthBottomX, mouthBottomY, DOT_RADIUS, mHintOutlinePaint);
canvas.drawText("mouth bottom", mouthBottomX, mouthBottomY + TEXT_OFFSET_Y, mHintTextPaint);
}
Here’s what you should note about this revised method:
1. It confirms that the PointF values extracted from mFaceData are not null before using their data. Without these checks, the app will crash.
…or with multiple faces, results like this:
Now that you can identify landmarks on faces, you can start drawing cartoon features over them! But first, let’s talk about facial classifications.
The Face class provides classifications through these methods: getIsLeftEyeOpenProbability(), getIsRightEyeOpenProbability(), and getIsSmilingProbability(). Each returns a float in the range 0.0 (highly unlikely) to 1.0 (bet everything on it). You’ll use the results from these methods to determine whether an eye is open and whether a face is smiling, and pass that information along to FaceGraphic.
Modify FaceTracker to make use of classifications. First, add two new instance variables to the FaceTracker class to keep track of the previous eye states. As with landmarks, when subjects move around quickly, the detector may fail to determine eye states, and that’s when having the previous state comes in handy:
private boolean mPreviousIsLeftEyeOpen = true;
private boolean mPreviousIsRightEyeOpen = true;
onUpdate() also needs to be updated as follows:
@Override
public void onUpdate(FaceDetector.Detections<Face> detectionResults, Face face) {
mOverlay.add(mFaceGraphic);
updatePreviousLandmarkPositions(face);
// Get face dimensions.
mFaceData.setPosition(face.getPosition());
mFaceData.setWidth(face.getWidth());
mFaceData.setHeight(face.getHeight());
// Get the positions of facial landmarks.
mFaceData.setLeftEyePosition(getLandmarkPosition(face, Landmark.LEFT_EYE));
mFaceData.setRightEyePosition(getLandmarkPosition(face, Landmark.RIGHT_EYE));
mFaceData.setNoseBasePosition(getLandmarkPosition(face, Landmark.NOSE_BASE));
mFaceData.setMouthLeftPosition(getLandmarkPosition(face, Landmark.LEFT_MOUTH));
mFaceData.setMouthBottomPosition(getLandmarkPosition(face, Landmark.BOTTOM_MOUTH));
mFaceData.setMouthRightPosition(getLandmarkPosition(face, Landmark.RIGHT_MOUTH));
// 1
final float EYE_CLOSED_THRESHOLD = 0.4f;
float leftOpenScore = face.getIsLeftEyeOpenProbability();
if (leftOpenScore == Face.UNCOMPUTED_PROBABILITY) {
mFaceData.setLeftEyeOpen(mPreviousIsLeftEyeOpen);
} else {
mFaceData.setLeftEyeOpen(leftOpenScore > EYE_CLOSED_THRESHOLD);
mPreviousIsLeftEyeOpen = mFaceData.isLeftEyeOpen();
}
float rightOpenScore = face.getIsRightEyeOpenProbability();
if (rightOpenScore == Face.UNCOMPUTED_PROBABILITY) {
mFaceData.setRightEyeOpen(mPreviousIsRightEyeOpen);
} else {
mFaceData.setRightEyeOpen(rightOpenScore > EYE_CLOSED_THRESHOLD);
mPreviousIsRightEyeOpen = mFaceData.isRightEyeOpen();
}
// 2
// See if there's a smile!
// Determine if person is smiling.
final float SMILING_THRESHOLD = 0.8f;
mFaceData.setSmiling(face.getIsSmilingProbability() > SMILING_THRESHOLD);
mFaceGraphic.update(mFaceData);
}
Here are the changes:
1. FaceGraphic should be responsible simply for drawing graphics over faces, not for deciding whether an eye is open or closed based on the face detector’s probability assessments. That means FaceTracker should do those calculations and provide FaceGraphic with ready-to-eat data in the form of a FaceData instance. These calculations take the results from getIsLeftEyeOpenProbability and getIsRightEyeOpenProbability and turn them into a simple true/false value: if the detector thinks that there’s a greater than 40% chance that an eye is open, it’s considered open.
2. The same goes for getIsSmilingProbability, but more strictly: if the detector thinks that there’s a greater than 80% chance that the face is smiling, it’s considered to be smiling.

Now that you’re collecting landmarks and classifications, you can overlay any tracked face with these cartoon features:
This requires the following changes to FaceGraphic’s draw method:
@Override
public void draw(Canvas canvas) {
final float DOT_RADIUS = 3.0f;
final float TEXT_OFFSET_Y = -30.0f;
// Confirm that the face and its features are still visible
// before drawing any graphics over it.
if (mFaceData == null) {
return;
}
PointF detectPosition = mFaceData.getPosition();
PointF detectLeftEyePosition = mFaceData.getLeftEyePosition();
PointF detectRightEyePosition = mFaceData.getRightEyePosition();
PointF detectNoseBasePosition = mFaceData.getNoseBasePosition();
PointF detectMouthLeftPosition = mFaceData.getMouthLeftPosition();
PointF detectMouthBottomPosition = mFaceData.getMouthBottomPosition();
PointF detectMouthRightPosition = mFaceData.getMouthRightPosition();
if ((detectPosition == null) ||
(detectLeftEyePosition == null) ||
(detectRightEyePosition == null) ||
(detectNoseBasePosition == null) ||
(detectMouthLeftPosition == null) ||
(detectMouthBottomPosition == null) ||
(detectMouthRightPosition == null)) {
return;
}
// Face position and dimensions
PointF position = new PointF(translateX(detectPosition.x),
translateY(detectPosition.y));
float width = scaleX(mFaceData.getWidth());
float height = scaleY(mFaceData.getHeight());
// Eye coordinates
PointF leftEyePosition = new PointF(translateX(detectLeftEyePosition.x),
translateY(detectLeftEyePosition.y));
PointF rightEyePosition = new PointF(translateX(detectRightEyePosition.x),
translateY(detectRightEyePosition.y));
// Eye state
boolean leftEyeOpen = mFaceData.isLeftEyeOpen();
boolean rightEyeOpen = mFaceData.isRightEyeOpen();
// Nose coordinates
PointF noseBasePosition = new PointF(translateX(detectNoseBasePosition.x),
translateY(detectNoseBasePosition.y));
// Mouth coordinates
PointF mouthLeftPosition = new PointF(translateX(detectMouthLeftPosition.x),
translateY(detectMouthLeftPosition.y));
PointF mouthRightPosition = new PointF(translateX(detectMouthRightPosition.x),
translateY(detectMouthRightPosition.y));
PointF mouthBottomPosition = new PointF(translateX(detectMouthBottomPosition.x),
translateY(detectMouthBottomPosition.y));
// Smile state
boolean smiling = mFaceData.isSmiling();
// Calculate the distance between the eyes using Pythagoras' formula,
// and we'll use that distance to set the size of the eyes and irises.
final float EYE_RADIUS_PROPORTION = 0.45f;
final float IRIS_RADIUS_PROPORTION = EYE_RADIUS_PROPORTION / 2.0f;
float distance = (float) Math.sqrt(
(rightEyePosition.x - leftEyePosition.x) * (rightEyePosition.x - leftEyePosition.x) +
(rightEyePosition.y - leftEyePosition.y) * (rightEyePosition.y - leftEyePosition.y));
float eyeRadius = EYE_RADIUS_PROPORTION * distance;
float irisRadius = IRIS_RADIUS_PROPORTION * distance;
// Draw the eyes.
drawEye(canvas, leftEyePosition, eyeRadius, leftEyePosition, irisRadius, leftEyeOpen, smiling);
drawEye(canvas, rightEyePosition, eyeRadius, rightEyePosition, irisRadius, rightEyeOpen, smiling);
// Draw the nose.
drawNose(canvas, noseBasePosition, leftEyePosition, rightEyePosition, width);
// Draw the mustache.
drawMustache(canvas, noseBasePosition, mouthLeftPosition, mouthRightPosition);
}
…and add the following methods to draw the eyes, nose, and mustache:
private void drawEye(Canvas canvas,
PointF eyePosition, float eyeRadius,
PointF irisPosition, float irisRadius,
boolean eyeOpen, boolean smiling) {
if (eyeOpen) {
canvas.drawCircle(eyePosition.x, eyePosition.y, eyeRadius, mEyeWhitePaint);
if (smiling) {
mHappyStarGraphic.setBounds(
(int)(irisPosition.x - irisRadius),
(int)(irisPosition.y - irisRadius),
(int)(irisPosition.x + irisRadius),
(int)(irisPosition.y + irisRadius));
mHappyStarGraphic.draw(canvas);
} else {
canvas.drawCircle(irisPosition.x, irisPosition.y, irisRadius, mIrisPaint);
}
} else {
canvas.drawCircle(eyePosition.x, eyePosition.y, eyeRadius, mEyelidPaint);
float y = eyePosition.y;
float start = eyePosition.x - eyeRadius;
float end = eyePosition.x + eyeRadius;
canvas.drawLine(start, y, end, y, mEyeOutlinePaint);
}
canvas.drawCircle(eyePosition.x, eyePosition.y, eyeRadius, mEyeOutlinePaint);
}
private void drawNose(Canvas canvas,
PointF noseBasePosition,
PointF leftEyePosition, PointF rightEyePosition,
float faceWidth) {
final float NOSE_FACE_WIDTH_RATIO = (float)(1 / 5.0);
float noseWidth = faceWidth * NOSE_FACE_WIDTH_RATIO;
int left = (int)(noseBasePosition.x - (noseWidth / 2));
int right = (int)(noseBasePosition.x + (noseWidth / 2));
int top = (int)(leftEyePosition.y + rightEyePosition.y) / 2;
int bottom = (int)noseBasePosition.y;
mPigNoseGraphic.setBounds(left, top, right, bottom);
mPigNoseGraphic.draw(canvas);
}
private void drawMustache(Canvas canvas,
PointF noseBasePosition,
PointF mouthLeftPosition, PointF mouthRightPosition) {
int left = (int)mouthLeftPosition.x;
int top = (int)noseBasePosition.y;
int right = (int)mouthRightPosition.x;
int bottom = (int)Math.min(mouthLeftPosition.y, mouthRightPosition.y);
if (mIsFrontFacing) {
mMustacheGraphic.setBounds(left, top, right, bottom);
} else {
mMustacheGraphic.setBounds(right, top, left, bottom);
}
mMustacheGraphic.draw(canvas);
}
Run the app and start pointing the camera at faces. For non-smiling faces with both eyes open, you should see something like this:
This one’s of me winking with my right eye (hence it’s closed) and smiling (which is why my iris is a smiling star):
The app will draw cartoon features over a small number of faces simultaneously…
…and even over faces in illustrations if they’re realistic enough:
It’s a lot more like Snapchat now!
The Face API provides another measurement: Euler angles.
Pronounced “Oiler” and named after mathematician Leonhard Euler, these describe the orientation of detected faces. The API measures orientation around the x-, y- and z-axes and reports Euler angles for each detected face. (Note that some of these angles are available only when the detector is set to ACCURATE_MODE.)

Open FaceTracker.java and add support for Euler angles by adding these lines to its onUpdate() method, after the call to updatePreviousLandmarkPositions:
// Get head angles.
mFaceData.setEulerY(face.getEulerY());
mFaceData.setEulerZ(face.getEulerZ());
You’ll make use of the Euler z angle to modify FaceGraphic so that it draws a hat on any face whose Euler z angle is greater than 20 degrees to one side.

Open FaceGraphic.java and add the following to the end of draw:
// Head tilt
float eulerY = mFaceData.getEulerY();
float eulerZ = mFaceData.getEulerZ();
// Draw the hat only if the subject's head is tilted at a sufficiently jaunty angle.
final float HEAD_TILT_HAT_THRESHOLD = 20.0f;
if (Math.abs(eulerZ) > HEAD_TILT_HAT_THRESHOLD) {
drawHat(canvas, position, width, height, noseBasePosition);
}
…and add the following drawHat method to the end of the class:
private void drawHat(Canvas canvas, PointF facePosition, float faceWidth, float faceHeight, PointF noseBasePosition) {
final float HAT_FACE_WIDTH_RATIO = (float)(1.0 / 4.0);
final float HAT_FACE_HEIGHT_RATIO = (float)(1.0 / 6.0);
final float HAT_CENTER_Y_OFFSET_FACTOR = (float)(1.0 / 8.0);
float hatCenterY = facePosition.y + (faceHeight * HAT_CENTER_Y_OFFSET_FACTOR);
float hatWidth = faceWidth * HAT_FACE_WIDTH_RATIO;
float hatHeight = faceHeight * HAT_FACE_HEIGHT_RATIO;
int left = (int)(noseBasePosition.x - (hatWidth / 2));
int right = (int)(noseBasePosition.x + (hatWidth / 2));
int top = (int)(hatCenterY - (hatHeight / 2));
int bottom = (int)(hatCenterY + (hatHeight / 2));
mHatGraphic.setBounds(left, top, right, bottom);
mHatGraphic.draw(canvas);
}
Run the app. Now a cute little hat will appear near the top of any head tilted at a jaunty angle:
Finally, you’ll use a simple physics engine to make the irises bounce around. This requires two simple changes to FaceGraphic. First, you need to declare two new instance variables, which provide a physics engine for each eye. Put these just below the declaration for the Drawable instance variables:
// We want each iris to move independently, so each one gets its own physics engine.
private EyePhysics mLeftPhysics = new EyePhysics();
private EyePhysics mRightPhysics = new EyePhysics();
The second change goes in FaceGraphic’s draw method. Until now, you’ve set the iris positions to the same coordinates as the eye positions. Now, modify the code in draw’s “draw the eyes” section to use the physics engines to determine each iris’ position:
// Draw the eyes.
PointF leftIrisPosition = mLeftPhysics.nextIrisPosition(leftEyePosition, eyeRadius, irisRadius);
drawEye(canvas, leftEyePosition, eyeRadius, leftIrisPosition, irisRadius, leftEyeOpen, smiling);
PointF rightIrisPosition = mRightPhysics.nextIrisPosition(rightEyePosition, eyeRadius, irisRadius);
drawEye(canvas, rightEyePosition, eyeRadius, rightIrisPosition, irisRadius, rightEyeOpen, smiling);
Run the app. Now everyone has googly (pun somewhat intended) eyes!
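If you’re curious about what happens inside EyePhysics, the class ships with the starter project (it’s similar to the physics used in Google’s android-vision samples). The sketch below is illustrative only, not the starter project’s actual code, but it shows the general idea: nudge the iris a little each frame and clamp it so it always stays inside the eye.
import android.graphics.PointF;
// Illustrative sketch only -- not the starter project's actual EyePhysics class.
class SimpleEyePhysics {
  private PointF mIrisPosition;
  private float mVelocityX = 0.0f;
  private float mVelocityY = 0.0f;
  PointF nextIrisPosition(PointF eyePosition, float eyeRadius, float irisRadius) {
    if (mIrisPosition == null) {
      // Start the iris at the center of the eye.
      mIrisPosition = new PointF(eyePosition.x, eyePosition.y);
    }
    // Apply a constant downward "gravity" and a little friction each frame.
    mVelocityY += 2.0f;
    mVelocityX *= 0.9f;
    mVelocityY *= 0.9f;
    float x = mIrisPosition.x + mVelocityX;
    float y = mIrisPosition.y + mVelocityY;
    // Clamp the iris to a circle of radius (eyeRadius - irisRadius)
    // centered on the eye, so it never leaves the eyeball.
    float dx = x - eyePosition.x;
    float dy = y - eyePosition.y;
    float maxDistance = eyeRadius - irisRadius;
    float distance = (float) Math.sqrt(dx * dx + dy * dy);
    if (distance > maxDistance && distance > 0.0f) {
      float scale = maxDistance / distance;
      x = eyePosition.x + dx * scale;
      y = eyePosition.y + dy * scale;
    }
    mIrisPosition = new PointF(x, y);
    return mIrisPosition;
  }
}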
You can download the final project here.
You’ve made the journey from augmented reality and face detection newbie to…well, maybe not grizzled veteran, but someone who now knows how to make use of both in Android apps.
Now that you’ve gone through a few iterations of the app, from starter version to finished version, you should have no trouble understanding this diagram showing how FaceSpotter’s objects are related:
A good next step would be to take a closer look at Google’s Mobile Vision site, and particularly the section on the Face API.
Reading other people’s code is a great way to learn things, and Google’s android-vision GitHub repository is a treasure trove of ideas and code.
If you have any questions or comments, please join the discussion below!
The post Augmented Reality in Android with Google’s Face API appeared first on Ray Wenderlich.
Last week, we announced the new live online training provided by our friends at Five Pack Creative: ALT-U.
They currently have three classes available: Advanced iOS Debugging (July 24), Auto Layout (Aug 25), and Instruments (Sep 16).
This is just a quick heads up that today is the last day for the 15% off discount for raywenderlich.com readers.
To get the discount, simply select your course and enter the following discount code: LEARNMORE17
We hope to see some of you in class! :]
The post ALT-U Live Online Training: Last Day for Discount appeared first on Ray Wenderlich.
See a practical use of alternate app icons, in action, and learn why errors might occur when working with alternate app icons in iOS.
The post Screencast: Alternate App Icons: Error Handling appeared first on Ray Wenderlich.
Welcome to another installment of our Top App Dev Interview series!
Each interview in this series focuses on a successful mobile app or developer and the path they took to get where they are today. Today’s special guest is Ryan McLeod.
Ryan is the creator of the global hit app, Blackbox. Step inside Ryan’s creative mind as he describes the build of Blackbox and his thought processes for building new levels.
Ryan has a truly inspiring story and offers special advice for being an Indie iOS Developer and creating something truly unique to the Apple App Store.
Ryan, you have been an indie iOS developer for some time now. Can you tell me what you did before you were an indie iOS developer, and how you transitioned to being an indie?
I think it’s only been about two years, but sure feels like longer to me!
I graduated from Cal Poly San Luis Obispo in 2014, and then fumbled around as a web app developer for a bit. I worked with a few friends on a social music startup but when that disbanded all I knew was that I didn’t want to move up the coast to San Francisco and take a real job. So instead, I tried my hand at consulting while trying to learn iOS with a little side-project I was calling “Blackbox”.
Whenever friends asked what I was working on, I’d hand them the Blackbox prototype. It was always a fight to get my phone back, so I knew I was onto something. I was enjoying iOS development a lot and my costs were pretty low, so I decided to go all in on it until I ran out of money, launch, and see if I could survive off it.
Blackbox ended up getting featured at launch and the whole week was pretty emotionally overwhelming; I felt like my life changed that week. However, things quickly trailed off revenue-wise and within a few months I was having some interviews: this time as an iOS developer though!
I released a big update as a sort of final burn to see if I could turn things around and sure enough, things took off and the numbers stabilized in better places (largely due to Blackbox getting more prominently featured). I think that’s when the naive idea of hitting it rich died and I realized that the indie life was within reach, but would require a lot of constant work.
Can you tell me your daily routine/schedule in detail?
I’m a pretty unscheduled person, especially since I can’t figure out if it’s better for me to work more nocturnally, but this week I’m trying to have a more normal schedule and it looks something like this:
What are the hardest challenges of being an indie developer, and how do you combat those challenges?
Sometimes I wish I was working within a small team at a company somewhere so I could take a break from making decisions; my perfectionism can be really paralyzing but I’m practicing letting go of some control and not regretting decisions made.
Another challenge is living in a smallish town. I have a fantastic community of friends but I don’t have a professional community of people to talk design/development with or talk API changes over lunch with (unless you count Twitter of course). Sometimes that feels isolating and other times it feels liberating to be isolated from.
Developers talk about the “indie-apocalypse” and how very few indie developers can survive. What would you have to say about this?
It’s not impossible, but it’s very hard. By all means, I’m very successful as an indie (call me the indie 1%) but I’d be hard pressed to grow my team of one right now.
I’m not complaining; my lifestyle is worth a lot to me, but I have a hard time finding the state of things encouraging for someone who’s maybe just getting started and has loans to pay off or a family to support. Dental and vision are nice, but so is going camping in the middle of the week.
However, even if all the surface gold has been collected, there’s still a ton to pan or dig for – it just takes divining and work.
Surviving off one app is rare these days but I think a lot of Indies are finding success by having a portfolio of smaller apps. Being a successful indie is so much less about engineering than anyone aspiring (raises hand) ever anticipates. A rock solid product guarantees nothing, it simply gives you the best possible weapon to take into battle.
Developers who stubbornly aren’t willing to realize this (I was in this camp) simply do not succeed anymore.
How did you get the idea for Blackbox?
A major unexpected seed of inspiration was the Inception app (a promotional app for the movie — it’s still on the store!) which makes audio “dreamscapes” by morphing and mixing mic input. The most compelling part of the app is that you can unlock new soundscapes by opening the app during a rainstorm, late at night, or by traveling to Africa… that kind of blew open a hole in my mind of what an app could be.
Then there was Clear (the todo app) which has Easter egg themes that you can unlock (often by surprise) by using the app at certain times, or poking around the app.
Finally, there was Hatch (the iPhone pet) that would dance if you were listening to music or yawn if your device was nearly dead. It personified our most personal device and brought it closer in a way. Seeing the trailer for Hatch might have been the moment I put it all together and thought, “Woah, there’s actually a lot going on here under the surface that most apps are not tapping into… enough to make something compelling.”
I love games that take over your mind after you walk away from the computer (Fez, Braid, Machinarium come to mind). They all require genuine outside-the-box thinking and provide so much self-satisfaction.
Indie developers really struggle to market their apps on the app store, but Blackbox hit the market by storm. What’s your advice to fellow developers?
I was in that struggle camp. Like a lot of indies my visceral reaction to the word “marketing” is a bad one; it means a failure of our products to represent and sell themselves, or gross growth hacking strategies and ads.
However, I found that when you embrace it more holistically from the start, it’s really not that bad and has a far greater effect.
When it came to the idea itself I knew I needed to make something technically impressive or truly unique in order to stand out and not get lost in the sea of what’s now about two thousand new apps added each day. I didn’t have the skills to be technically impressive so I thought about what most games were doing (a lot of simple touching and swiping) and ran the other way.
The limitation was liberating and helped Blackbox stand out which probably helped it get featured. When it came to brand voice I struggled to get in character to write chipper, helpful copy; so I instead just started channelling my own sardonic, less than helpful voice and it resonated with a lot of people if for no reason other than being real and refreshingly different.
People share stories, not apps (literal stories, not the circle encapsulated kind). When the Inception app was going around, the conversation was always about the ridiculous Africa puzzle… why would the developers add a puzzle that most people would surely never solve? People talked about it endlessly.
While working on Blackbox I often asked what the conversation would be. Lo and behold, many puzzles and features—purposefully or otherwise—encourage storytelling and sharing, whether it’s going on a hike with a friend, talking to a friend, singing like a mad person, or trying to get to Africa to solve a damn puzzle.
I always try to work backwards. If I imagine myself leaving a rating, sharing something, or purchasing something: what preceded that happening for me to care enough to do it, and to do so gladly?
Did you have a clear marketing plan for the launch of Blackbox? If yes, can you describe to me the plan in detail?
Get featured haha!
I don’t think I had much of a plan beyond that. I didn’t know what I was doing but I had read a lot. I tried as best I could to give the app a decent shot at being featured by making the app show off the platform as much as I could muster, having a great preview video, etc.
When I made the scale app Gravity it got tons of press on its own. I think I hoped Blackbox could draw a similar crowd but it didn’t. In fact, the overall impact of the press I did get was so much smaller than I could have ever anticipated.
For a while I tried to get YouTubers to check out the game, tried some social ads and generally floundered. When Mashable featured Blackbox on their Snapchat story earlier this year (unbeknownst to me) it was a mind-boggling, record-setting day. I don’t exactly know what I’d do differently if I did it again, maybe a soft launch to build some proof of success first? I’m not sure.
I like the trailer for your app. How did you make it?
I made the original trailer (the one with the overly cinematic music) in Final Cut. Believe it or not I made the Push Pack announcement one in Keynote (I’d love to learn to properly do motion graphics).
The latest super epic one was made in collaboration with my friend at Foreground Films. It features my sister’s hand, a jerry-rigged living room light box, and a high Sierra camping shot. I think it’s Super Bowl ready.
One of the most challenging parts of making a game is tweaking it just right so it’s not too easy, not too hard, and just the right level of fun. How do you go about doing that?
I tend to think of the puzzles as interfaces tuned to be on the cusp of intuitive, but not quite, so as to leave the player with a mental gap they must bridge and cross in order to put the whole thing together. Like interface design, a lot of puzzle design is as much about what’s there as what isn’t.
The best experiences are delightful not because of how they handle things going right but because of how they prevent things from going wrong. Providing delight and deeply satisfying “ah ha!” moments is as much about preventing needless confusion and frustration as it is nudging the player just enough in the right direction.
Beyond that, I’ve gotten really in tune with my gut so I know when an idea, visual, etc doesn’t feel right. It wasn’t a difficult sense to develop but it was very hard to learn to listen to and still can be; sometimes I misjudge how something could be interpreted or what someone might try but that’s where beta testing and player feedback often saves me. Two people putting their phone in the freezer is two too many.
Swallowing my ego to take feedback is critical. It’s easy to say but going out of my way to get critical feedback and listen between the lines of what people are willing to say can be a fight against nature.
Players can be quick to blame themselves for misunderstanding things and it’s easy to agree to avoid cognitive dissonance but I have to remember that at the end of the day it’s almost always my fault and responsibility to improve. Something that tripped up one person and caused a poor experience is bound to affect magnitudes more down the line.
Procrastination is a real problem for most developers. Can you tell me how you focus on the day ahead?
I’m always experimenting with new tools and techniques to focus—exercise, good music, and coffee are my main weapons—but it’s a constant battle and learning process.
Generally I avoid procrastination by creating stress for myself! I know this is not the healthiest method, but it worked well for me when my savings were dwindling so we’re familiar frienemies.
My biggest form of procrastination is getting myself really lost in the weeds on arguably unnecessary features and detail. Before I know it a week has passed, and I’ve finished one feature but learned to make solar plots accurate to a degree no one will ever notice. I think this attention to detail is what makes the game in a way but it has to be balanced with shipping.
Reading reviews and emails from players is one way I derive positive motivation to get to work and remember to zoom out to focus on what really matters.
Do you use any tools to manage your daily tasks?
In general I plan things very loosely, but I do try to keep lists of daily tasks that I can strike off (if for no reason other than to allow myself to feel somewhat accomplished at the end of the day). Right now I’m in love with:
How do you prioritize what new features to add into your apps?
I try to strike a balance between things that keep players satisfied, keep the lights on, and keep me entertained as a developer. If I could just build puzzles 24/7 I would—I have a long backlog of ideas—but I always have to take two steps back to:
These all need to be resolved before I can work on new things.
And that concludes our Top App Dev Interview with Ryan McLeod. A huge thanks to Ryan for sharing his journey with the iOS community :]
We hope you enjoyed reading about Ryan’s inspiring story of being an indie iOS app developer. In the end, being completely different on the App Store seems to be the key to success in Ryan’s case.
Ryan’s special eye for detail and desire to be different really stand out and have made him & Blackbox Puzzles a real success. Taking some of his tips and advice to heart could really help you in making the next successful App Store hit!
If you are an app developer with a hit app or game in the top 100 in the App store, we’d love to hear from you. Please drop us a line anytime. If you have a request for any particular developer you’d like to hear from, please join the discussion in the forum below!
The post Full-Time Indie iOS Dev and Creator of Blackbox: A Top Dev Interview With Ryan McLeod appeared first on Ray Wenderlich.
Note: This tutorial uses Xcode 9 and Swift 4.
iOS 9 and tvOS introduced the concept of on-demand resources (ODR), an API used to deliver content to your applications after the app has been installed.
ODR allows you to tag specific assets of your application and have them hosted on Apple’s servers. These assets won’t be downloaded until the application needs them, and your app will purge resources from your user’s devices when they’re no longer needed. This results in smaller apps and faster downloads — which always makes users happy.
In this tutorial, you’ll learn the basics of on-demand resources including:
Download the starter project for this tutorial; you can find it here: Bamboo-Breakout-Starter.
The starter project is a game called Bamboo Breakout. Michael Briscoe wrote this app as a SpriteKit tutorial, and it serves as a great example of how simple it is to write a SpriteKit app in Swift. You can find the original tutorial here.
The original game had only one game level, so I’ve added a few changes to the original app: five new game levels and some code to load each level.
Once you have the starter application, open it in Xcode and open the Bamboo Breakout folder.
In this folder, you will see six SpriteKit scenes. Each one of these scenes represents a level in the Bamboo Breakout game. At the moment, you’re packaging all these scenes with the application. By the end of this tutorial, you’ll have only the first level installed.
Build and run the app, and you’ll see the first level of the game in the simulator.
Time to take a look at the starter project. You don’t need to examine the entire project, but there are a few things you do need to be familiar with.
In Xcode, open GameScene.swift and look for the following snippet near the top of the class:
lazy var gameState: GKStateMachine = GKStateMachine(states: [
WaitingForTap(scene: self),
Playing(scene: self),
LevelOver(scene: self),
GameOver(scene: self)
])
Here you see the creation and initialization of a GKStateMachine object. The GKStateMachine class is part of Apple’s GameplayKit; it’s a finite-state machine that helps you define the logical states and rules for a game. Here, the gameState variable has four states:
WaitingForTap: The initial state of the game
Playing: Someone is playing the game
LevelOver: The most recent level is complete (this is where you’ll be doing most of your work)
GameOver: The game has ended, either with a win or a loss

To see where the initial game state is set, scroll down to the bottom of didMove(to:).
gameState.enter(WaitingForTap.self)
This is where the initial state of the game is set, and it’s where you’ll begin your journey.
Note: didMove(to:) is a SpriteKit method and part of the SKScene class. The app calls this method immediately after it presents the scene.
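Each of those game states is a GKState subclass defined in the starter project. You won’t need to modify them for this tutorial, but as a rough sketch (assuming an implementation along these lines; the starter’s actual classes differ in detail), a state looks something like this:
import GameplayKit
// Illustrative sketch of a game state; not the starter project's actual code.
class ExampleWaitingState: GKState {
  unowned let scene: GameScene
  init(scene: GameScene) {
    self.scene = scene
    super.init()
  }
  // Only allow a transition from "waiting" to "playing".
  override func isValidNextState(_ stateClass: AnyClass) -> Bool {
    return stateClass is Playing.Type
  }
  override func didEnter(from previousState: GKState?) {
    // Show a "tap to play" message, run an intro animation, and so on.
  }
}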
The next thing you need to look at is touchesBegan(_:with:) in GameScene.swift.
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
switch gameState.currentState {
// 1
case is WaitingForTap:
gameState.enter(Playing.self)
isFingerOnPaddle = true
// 2
case is Playing:
let touch = touches.first
let touchLocation = touch!.location(in: self)
if let body = physicsWorld.body(at: touchLocation) {
if body.node!.name == PaddleCategoryName {
isFingerOnPaddle = true
}
}
// 3
case is LevelOver:
if let newScene = GameScene(fileNamed:"GameScene\(self.nextLevel)") {
newScene.scaleMode = .aspectFit
newScene.nextLevel = self.nextLevel + 1
let reveal = SKTransition.flipHorizontal(withDuration: 0.5)
self.view?.presentScene(newScene, transition: reveal)
}
// 4
case is GameOver:
if let newScene = GameScene(fileNamed:"GameScene1") {
newScene.scaleMode = .aspectFit
let reveal = SKTransition.flipHorizontal(withDuration: 0.5)
self.view?.presentScene(newScene, transition: reveal)
}
default:
break
}
}
There’s a lot going on here. Let’s go through it, case by case:
1. On the first tap, when touchesBegan(_:with:) is called, gameState.currentState is set to WaitingForTap. When the switch hits this case, the app changes gameState.currentState to the Playing state and sets isFingerOnPaddle to true. The app uses the isFingerOnPaddle variable to move the paddle.
2. The second case executes when the game is in the Playing state, which is used to track when the user is playing the game and touching the game paddle.
3. The third case executes when the game is in the LevelOver state. In this case the game loads the next scene based on the nextLevel variable, which is set to 2 on creation of the very first scene.
4. Finally, when the game is in the GameOver state, it loads the scene GameScene1.sks and restarts the game.

This process assumes you packaged all these scenes with the installed app.
Before you start using on-demand resources, you need to know how resource bundles work.
iOS uses bundles to organize resources into well-defined subdirectories inside an application, and you use a Bundle object to retrieve the resources you’re looking for; it provides a single interface for locating items. For example, when an app packages three game levels, the main bundle contains all three scene files alongside the rest of the app’s resources.
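Locating one of the packaged scene files through the main bundle looks something like this (a quick illustration, not code you need to add to the project):
import Foundation
// Look up a scene file that was packaged in the app's main bundle.
if let sceneURL = Bundle.main.url(forResource: "GameScene1", withExtension: "sks") {
  print("Found bundled scene at \(sceneURL)")
}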
On-demand resources are different: they’re not packaged with the distributed application. Instead, Apple stores them on their servers, and your app retrieves them only when it needs to, using NSBundleResourceRequest. You pass the NSBundleResourceRequest object a collection of tags representing the resources you want to retrieve; when the system downloads those resources to the device, it stores them in an alternative bundle.
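Put together, the basic flow looks roughly like this (a simplified sketch with error handling omitted; you’ll build a more complete version shortly):
import Foundation
// Ask the system for everything tagged "level2".
let request = NSBundleResourceRequest(tags: ["level2"])
request.beginAccessingResources { error in
  guard error == nil else { return }
  // Once the download finishes, the resources are available through
  // the request's bundle.
  let sceneURL = request.bundle.url(forResource: "GameScene2", withExtension: "sks")
  print("Downloaded scene: \(String(describing: sceneURL))")
}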
Now, what exactly are tags? A tag is simply a label you assign to one or more resources so the system knows which assets to download and purge together. In Xcode, each tag belongs to one of three categories: Initial Install Tags, Prefetch Tag Order, or Downloaded Only On Demand.
Note: You can only use Downloaded Only On Demand while in development. You’ll have to deploy the app to the App Store or TestFlight to use the other tag types.
The first thing to consider is which resources you want to package with the application. For this game app, it makes sense to at least give the user the first level of the game. You don’t want them starting without any game levels.
In the project navigator, select GameScene2.sks from the Bamboo Breakout group:
Open the File Inspector using the Utilities menu. Find the section named On Demand Resource Tags:
When tagging a resource, try to use a meaningful name. This will help you keep all your on-demand resources organized. For GameScene2.sks, which represents Level 2 of the game, you are going to use the tag level2.
Type level2 in the Tags input and press Enter.
Once you’ve finished tagging GameScene2.sks, tag the rest of the scenes using the same pattern. When finished, select the Bamboo Breakout Target, Resource Tags, and then All. You should see all the tags you added.
Okay, you’ve tagged all your on-demand resources. It’s time to add the code to download them. Before doing this, take a closer look at the NSBundleResourceRequest object:
// 1
public convenience init(tags: Set<String>)
// 2
open var loadingPriority: Double
// 3
open var tags: Set<String> { get }
// 4
open var bundle: Bundle { get }
// 5
open func beginAccessingResources(completionHandler: @escaping (Error?) -> Swift.Void)
// 6
open var progress: Progress { get }
// 7
open func endAccessingResources()
Taking it step-by-step:
1. This convenience init() takes a Set of tags representing the resources to download.
2. loadingPriority provides a hint to the resource loading system, representing the loading priority of this request. The range of this priority is from 0 to 1, with 1 being the highest priority. The default value is 0.5.
3. tags contains the set of tags requested by this object.
4. bundle represents the alternative bundle described earlier. This bundle is where the system stores the retrieved resources.
5. beginAccessingResources(completionHandler:) starts the request for the resources. You invoke this method and pass it a completion handler that takes an optional Error.
6. progress is a Progress object you can watch to see the status of the download. This application won’t use it, because the assets are so small and download very quickly, but it’s good to be aware of (see the sketch below).
7. endAccessingResources() tells the system you no longer need these resources, so it knows it can purge them from the device.
Now that you know the internals of NSBundleResourceRequest, you can create a utility class to manage the downloading of resources.

Create a new Swift file and name it ODRManager. Replace the contents of the file with the following:
import Foundation
class ODRManager {
// MARK: - Properties
static let shared = ODRManager()
var currentRequest: NSBundleResourceRequest?
}
Currently the class contains a reference to itself (implementing the singleton approach) and a variable of type NSBundleResourceRequest.

Next, you’ll need a method to start the ODR request. Add the following method below the currentRequest property:
// 1
func requestSceneWith(tag: String,
onSuccess: @escaping () -> Void,
onFailure: @escaping (NSError) -> Void) {
// 2
currentRequest = NSBundleResourceRequest(tags: [tag])
// 3
guard let request = currentRequest else { return }
request.beginAccessingResources { (error: Error?) in
// 4
if let error = error {
onFailure(error as NSError)
return
}
// 5
onSuccess()
}
}
Taking each commented section in turn:
1. requestSceneWith(tag:onSuccess:onFailure:) takes the tag of the resources to request, plus a success handler and a failure handler.
2. It creates an NSBundleResourceRequest to perform your request.
3. It then calls beginAccessingResources() to begin the request.
4. If the request fails, the failure handler is called with the error.
5. If it succeeds, the success handler is called.

Now it’s time to put this class to use. Open GameScene.swift, find touchesBegan(_:with:) and change the LevelOver case to the following:
case is LevelOver:
// 1
ODRManager.shared.requestSceneWith(tag: "level\(nextLevel)", onSuccess: {
// 2
guard let newScene = GameScene(fileNamed:"GameScene\(self.nextLevel)") else { return }
newScene.scaleMode = .aspectFit
newScene.nextLevel = self.nextLevel + 1
let reveal = SKTransition.flipHorizontal(withDuration: 0.5)
self.view?.presentScene(newScene, transition: reveal)
},
// 3
onFailure: { (error) in
let controller = UIAlertController(
title: "Error",
message: "There was a problem.",
preferredStyle: .alert)
controller.addAction(UIAlertAction(title: "Dismiss", style: .default, handler: nil))
guard let rootViewController = self.view?.window?.rootViewController else { return }
rootViewController.present(controller, animated: true)
})
At first glance, this may look like a complex body of code, but it’s pretty straightforward:
1. You get the shared instance of ODRManager and call requestSceneWith(tag:onSuccess:onFailure:). You pass this method the tag of the next level, along with a success handler and an error handler.
2. If the request succeeds, the success handler loads the newly downloaded scene and presents it.
3. If it fails, the error handler creates a UIAlertController and lets the user know a problem occurred.
Once you’ve made all these changes, build and run the app. See if you can get through the first level and then stop. You should see the following:
You may need to plug in your device and play the game there, since it can be difficult to play in the simulator. Be sure to leave your device plugged in and Xcode attached.
After beating the first level, tap the screen once more and stop. You will now see a screen like the following:
Open Xcode, open the Debug navigator then select Disk. Here, you’ll see the status of all on-demand resources in the app:
At this point, the app has only downloaded Level 2 and it’s In Use. Go ahead and play some more levels and keep an eye on the Disk Usage. You can watch as the app downloads each resource when it’s required.
There are several things you can do to improve a user’s experience: you can improve error reporting, set download priorities, and purge resources that are no longer in use.
In the previous example, whenever you encountered an error the app would simply state “There was a problem.” There’s not a whole lot the user can do with this.
You can make this a much better experience. Open GameScene.swift and, inside touchesBegan(_:with:), replace the onFailure handler within the LevelOver case with the following:
onFailure: { (error) in
let controller = UIAlertController(
title: "Error",
message: "There was a problem.",
preferredStyle: .alert)
switch error.code {
case NSBundleOnDemandResourceOutOfSpaceError:
controller.message = "You don't have enough space available to download this resource."
case NSBundleOnDemandResourceExceededMaximumSizeError:
controller.message = "The bundle resource was too big."
case NSBundleOnDemandResourceInvalidTagError:
controller.message = "The requested tag does not exist."
default:
controller.message = error.description
}
controller.addAction(UIAlertAction(title: "Dismiss", style: .default, handler: nil))
guard let rootViewController = self.view?.window?.rootViewController else { return }
rootViewController.present(controller, animated: true)
})
Take a moment to look over this change. It’s a fair amount of code, but the main change is the addition of the switch statement, which tests the error code returned by the request object. Depending on which case the switch hits, the app changes the error message. This is much nicer. Take a look at each one of these errors.
NSBundleOnDemandResourceOutOfSpaceError is encountered when the user does not have enough space on their device to download the requested resources. This is useful since it gives your user a chance to clear up some space and try again.

NSBundleOnDemandResourceExceededMaximumSizeError is returned when this resource would exceed the maximum memory for in-use on-demand resources for this app. This would be a good time to purge some resources.

NSBundleOnDemandResourceInvalidTagError is returned when the resource tag being requested cannot be found. This would most likely be a bug on your part, and you may want to make sure you have the correct tag name.
The next improvement you can make is setting the loading priority of the request. This only requires a single line.

Open ODRManager.swift and add the following to requestSceneWith(tag:onSuccess:onFailure:), immediately after guard let request = currentRequest else { return }:
request.loadingPriority = NSBundleResourceRequestLoadingPriorityUrgent
NSBundleResourceRequestLoadingPriorityUrgent tells the operating system to download the content as soon as possible. In the case of downloading the next level of a game, it’s very urgent. You don’t want your users waiting. Remember, if you want to customize the loading priorities, you can use a Double between 0 and 1.
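For example, if you wanted to prefetch a far-off level without competing with more urgent downloads, a lower value would do. This is a hypothetical example, not something you need to add to the project:
import Foundation
// A less urgent request: hint that this download can wait its turn.
let prefetchRequest = NSBundleResourceRequest(tags: ["level6"])
prefetchRequest.loadingPriority = 0.25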
You can get rid of unneeded resources by calling endAccessingResources() on the current NSBundleResourceRequest.

Still in ODRManager.swift, add the following line immediately after guard let request = currentRequest else { return }:
// purge the resources associated with the current request
request.endAccessingResources()
Calling endAccessingResources() purges any resources you no longer need. You’re now being a courteous iOS citizen and cleaning up after yourself.
You can find the completed project here.
I hope that knowing how to use on-demand resources helps you reduce the size of your initial app downloads and makes your users a little happier.
For more in-depth coverage of on-demand resources, check out this excellent 2016 WWDC Video on Optimizing On-Demand Resources.
If you have any questions or comments, please join the forum discussion below!
The post On-Demand Resources in iOS Tutorial appeared first on Ray Wenderlich.
Even if you’ve never used Git before, you’ve probably practiced version control. Discover what version control is, and how Git can help with your source code.
The post Video Tutorial: Beginning Git Part 1: Introduction appeared first on Ray Wenderlich.
One of the ways you might start out with git is by creating your own copy of somebody else's repository. Discover how to clone a remote repo to your local machine, and what constitutes "forking" a repository.
The post Video Tutorial: Beginning Git Part 2: Cloning a Repo appeared first on Ray Wenderlich.
As we all wait for the imminent release of iOS 11 and Xcode 9, we thought this would be an excellent time to focus on tools and skills that don’t run on Apple’s update schedule. Today, we’re excited to announce the release of a brand new course: Beginning Git.
Source control is one of the most important tools that software developers use in their daily workflow, and is one of the few tools that is platform-agnostic. Git is currently one of the most popular source control solutions, and offers a comprehensive set of features.
This 13-part Beginning Git video course is designed to take you from knowing very little about Git all the way through to being able to experience the benefits of source control every single day. The course is focused on real-world processes, and as such will cover everything from cloning and creating repos, through committing and ignoring files, to managing remotes and pull requests.
Even if Git isn’t new to you, there is something for you to learn. Let’s take a look at what’s inside:
Video 1: Introduction (Free!) Even if you’ve never used Git or Subversion before, you’ve probably practiced version control. Discover what version control is, and how exactly Git can help you with your source code.
Video 2: Cloning a Repo
One of the ways you might start out with Git is by creating your own copy of somebody else’s repository. Discover how to clone a remote repo to your local machine, and what constitutes “forking” a repository.
Video 3: Creating a Repo
If you are starting a new project, and want to use Git for source control, you first need to create a new repository. Learn how you can get started initialising a new Git repository, and then look at some conventions that all code repos should adopt.
Video 4: Creating a Remote
Code, like team sports, is meant to be shared with other people. Discover how you can create a remote for your new Git repo, and push it to GitHub for all your friends to enjoy.
Video 5: Committing Changes
A Git repo is made up of a sequence of commits—each representing the state of your code at a point in time. Discover how to create these commits to track the changes you make in your code.
Video 6: The Staging Area (Free!)
Before you can create a Git commit, you have to use the “add” command. What does it do? Discover how to use the staging area to great effect through the interactive git add command.
Video 7: Ignoring Files
Sometimes, there are things that you really don’t want to store in your source code repository. Whether it be the diary of your teenage self, or build artefacts, you can tell Git to ignore them via the gitignore file.
Video 8: Viewing History
There’s very little point in creating a nice history of your source code if you can’t explore it. In this video you’ll discover the versatility of the git log command—displaying branches, graphs and even filtering the history.
Video 9: Branching
The real power in Git comes from its branching and merging model. This allows you to work on multiple things simultaneously. Discover how to manage branches, and exactly what they are in this next video.
Video 10: Merging
Branches in Git without merging would be like basketball without the hoop—fun, sure, but with very little point. In this video you’ll learn how you can use merging to combine the work on multiple branches back into one.
Video 11: Syncing with a Remote
Now that you’ve been working hard on your local copy of the Git repository, you want to know how you can share this with your friends. See how you can share through using remotes, and how you can use multiple remotes at the same time.
Video 12: Pull Requests
GitHub introduced the concept of a Pull Request, which is essentially a managed merge with a large amount of additional metadata. Pull requests are key to a GitHub-based workflow, so you’ll discover how to use them in this video.
Video 13: Conclusion
The Beginning Git video course took you from knowing nothing about Git all the way to covering everything you need to know to use it in your daily development life. But wait… there’s more.
Want to check out the course? You can watch two of the videos for free:
The rest of the course is for raywenderlich.com subscribers only. Here’s how you can get access:
There’s much more in store for raywenderlich.com subscribers – if you’re curious, you can check out our full schedule of upcoming courses.
I hope you enjoy our new course, and stay tuned for many more new courses and updates to come! :]
The post New Course: Beginning Git appeared first on Ray Wenderlich.