In this video, you'll learn how to use multiple section controllers and how to update your list when your data has changed.
The post Screencast: IGListKit: Multiple Sections and Updates appeared first on Ray Wenderlich.
In the first part of this HTC Vive in Unity tutorial, you learned how to create an interaction system and use it to grab, snap and throw objects.
In this second part of the advanced HTC Vive tutorial, you'll learn how to build a fully working bow and arrow and an object-spawning backpack, all on top of the interaction system from part one.
This tutorial is intended for an advanced audience, and it will skip a lot of the details on how to add components and make new GameObjects, scripts and so on. It’s assumed you already know how to handle these things. If not, check out our series on beginning Unity here.
Download the starter project, unzip it somewhere and open the folder inside Unity. Take a look at the folders in the Project window to get a feel for what each will be used for.
Open up the Game scene inside the Scenes folder to get started.
At the moment there’s not even a bow present in the scene.
Create a new empty GameObject and name it Bow.
Set the Bow's position to (X:-0.1, Y:4.5, Z:-1) and its rotation to (X:0, Y:270, Z:80).
Now drag the Bow model from the Models folder onto Bow in the Hierarchy to parent it.
Rename it BowMesh and set its position, rotation and scale to (X:0, Y:0, Z:0), (X:-90, Y:0, Z:-180) and (X:0.7, Y:0.7, Z:0.7) respectively.
It should now look like this:
Before moving on, I’d like to show you how the string of the bow works.
Select BowMesh and take a look at its Skinned Mesh Renderer. Unfold the BlendShapes field to reveal the Bend blendshape value. This is where the magic happens.
Keep looking at the bow. Change the Bend value from 0 to 100 and back by dragging and holding down your cursor on the word Bend in the Inspector. You should see the bow bending and the string being pulled back:
Set Bend back to 0 for now.
Remove the Animator component from the BowMesh; all animations are done using blendshapes.
Now add an arrow by dragging an instance of RealArrow from the Prefabs folder onto Bow.
Name it BowArrow and reset its Transform component to move it into position relative to the Bow.
This arrow won’t be used as a regular arrow, so break the connection to its prefab by selecting GameObject\Break Prefab Instance from the top menu.
Unfold BowArrow and delete its child, Trail. This particle system is used by normal arrows only.
Remove the Rigidbody, second Box Collider and RWVR_Snap To Controller components from BowArrow.
All that should be left is a Transform and a Box Collider component.
Set the Box Collider's Center to (X:0, Y:0, Z:-0.28) and set its size to (X:0.1, Y:0.1, Z:0.2). This will be the part the player can grab and pull back.
Select Bow again and add a Rigidbody and a Box Collider to it. This will make sure it has a physical presence in the world when not in use.
Change the Box Collider's Center and Size to (X:0, Y:0, Z:-0.15) and (X:0.1, Y:1.45, Z:0.45) respectively.
Now add a RWVR_Snap To Controller component to it. Enable Hide Controller Model, set Snap Position Offset to (X:0, Y:0.08, Z:0) and Snap Rotation Offset to (X:90, Y:0, Z:0).
Play the scene and test if you can pick up the bow.
Before moving on, set up the tags on the controllers so future scripts will function correctly.
Unfold [CameraRig], select both controllers and set their tag to Controller.
In the next part you’ll make the bow work by doing some scripting.
The bow system you'll create consists of three key parts: the regular arrows that get shot and stick into objects, the bow itself, and the special arrow that sits in the bow and gets pulled back. Each of these needs its own script, and the three scripts work together to make the bow shoot.
For starters, the normal arrows need some code to allow them to get stuck in objects and be picked up again later.
Create a new C# script inside the Scripts folder and name it RealArrow. Note this script doesn’t belong in the RWVR folder as it’s not a part of the interaction system.
Open it up and remove the Start() and Update() methods.
Add the following variable declarations below the class declaration:
public BoxCollider pickupCollider; // 1
private Rigidbody rb; // 2
private bool launched; // 3
private bool stuckInWall; // 4
Quite simply:
1. pickupCollider is the collider the player uses to pick the arrow up again.
2. rb caches the arrow's Rigidbody component.
3. launched is set to true when an arrow is launched from the bow.
4. stuckInWall is set to true when this arrow hits a solid object.
Now add the Awake() method:
private void Awake()
{
rb = GetComponent<Rigidbody>();
}
This simply caches the Rigidbody component that’s attached to this arrow.
Add the following method below Awake():
private void FixedUpdate()
{
if (launched && !stuckInWall && rb.velocity != Vector3.zero) // 1
{
rb.rotation = Quaternion.LookRotation(rb.velocity); // 2
}
}
This snippet will make sure the arrow will keep facing the direction it’s headed. This allows for some cool skill shots, like shooting arrows in the sky and then watching them come down upon the ground again with their heads stuck in the soil. It also makes things more stable and prevents arrows from getting stuck in awkward positions.
FixedUpdate() does the following:
1. Only adjust the rotation while the arrow has been launched, isn't stuck in a wall and is actually moving.
2. Rotate the arrow so it points in the direction of its velocity.
Add these methods below FixedUpdate():
public void SetAllowPickup(bool allow) // 1
{
pickupCollider.enabled = allow;
}
public void Launch() // 2
{
launched = true;
SetAllowPickup(false);
}
Looking at the two commented sections:
1. SetAllowPickup() enables or disables the pickupCollider.
2. Launch() sets the launched flag to true and doesn't allow the arrow to be picked up.
Add the next method to make sure the arrow doesn't move once it hits a solid object:
private void GetStuck(Collider other) // 1
{
launched = false; // 2
rb.isKinematic = true; // 3
stuckInWall = true; // 4
SetAllowPickup(true); // 5
transform.SetParent(other.transform); // 6
}
Taking each commented section in turn:
1. The method accepts a Collider as a parameter. This is what the arrow will attach itself to.
2. Reset the launched flag, since the arrow is no longer in flight.
3. Make the Rigidbody kinematic so physics no longer moves the arrow.
4. Set the stuckInWall flag to true.
5. Allow the arrow to be picked up again.
6. Parent the arrow to the object it hit so it stays attached.
The final piece of this script to add is OnTriggerEnter(), which is called when the arrow's trigger hits something:
private void OnTriggerEnter(Collider other)
{
if (other.CompareTag("Controller") || other.GetComponent<Bow>()) // 1
{
return;
}
if (launched && !stuckInWall) // 2
{
GetStuck(other);
}
}
You’ll get an error saying Bow doesn’t exist yet. Ignore this for now: you’ll create the Bow script next.
Here's what the code above does:
1. If the collider belongs to a controller or to the bow itself, ignore it and return early.
2. If the arrow was launched and isn't stuck yet, get stuck in the object it just hit.
Save this script, then create a new C# script in the Scripts folder named Bow. Open it in your code editor.
Remove the Start() method and add this line right above the class declaration:
[ExecuteInEditMode]
This will let this script execute its methods even while you're working in the editor. You'll see why this can be quite handy in just a bit.
Add these variables above Update()
:
public Transform attachedArrow; // 1
public SkinnedMeshRenderer BowSkinnedMesh; // 2
public float blendMultiplier = 255f; // 3
public GameObject realArrowPrefab; // 4
public float maxShootSpeed = 50; // 5
public AudioClip fireSound; // 6
This is what they'll be used for:
1. attachedArrow is the arrow that sits in the bow.
2. BowSkinnedMesh is the skinned mesh renderer that holds the Bend blendshape.
3. The distance the arrow is pulled back gets multiplied by blendMultiplier to get the final Bend value for the blend shape.
4. realArrowPrefab is the prefab of the real arrow that gets fired.
5. maxShootSpeed is the maximum speed an arrow can reach when fired.
6. fireSound is the sound that plays when an arrow is shot.
Add the following encapsulated field below the variables:
bool IsArmed()
{
return attachedArrow.gameObject.activeSelf;
}
This simply returns true if the arrow is enabled. It's much easier to reference this field than to write out attachedArrow.gameObject.activeSelf each time.
Add the following to the Update() method:
float distance = Vector3.Distance(transform.position, attachedArrow.position); // 1
BowSkinnedMesh.SetBlendShapeWeight(0, Mathf.Max(0, distance * blendMultiplier)); // 2
Here's what each of these lines does:
1. Calculate the distance between the bow and the attached arrow.
2. Set the Bend blendshape weight to that distance multiplied by blendMultiplier, clamped so it never drops below zero.
Next, add these methods below Update():
private void Arm() // 1
{
attachedArrow.gameObject.SetActive(true);
}
private void Disarm()
{
BowSkinnedMesh.SetBlendShapeWeight(0, 0); // 2
attachedArrow.position = transform.position; // 3
attachedArrow.gameObject.SetActive(false); // 4
}
These methods handle the loading and unloading of arrows in the bow:
1. Arm() enables the attached arrow, loading the bow.
2. Disarm() resets the Bend blendshape so the bow straightens out again.
3. Move the arrow back to the bow's position.
4. Disable the arrow so the bow appears unloaded.
Add OnTriggerEnter() below Disarm():
private void OnTriggerEnter(Collider other) // 1
{
if (
!IsArmed()
&& other.CompareTag("InteractionObject")
&& other.GetComponent<RealArrow>()
&& !other.GetComponent<RWVR_InteractionObject>().IsFree() // 2
) {
Destroy(other.gameObject); // 3
Arm(); // 4
}
}
This handles what should happen when a trigger collides with the bow:
1. OnTriggerEnter() accepts a Collider as a parameter. This is the trigger that hit the bow.
2. The condition is only true if the bow is unarmed and is hit by a RealArrow. There are a few checks to make sure it only reacts to arrows that are held by the player.
3. Destroy the arrow the player is holding.
4. Arm the bow.
This code is essential to make it possible for a player to rearm the bow once the initially loaded arrow has been shot.
The final method shoots the arrow. Add this below OnTriggerEnter():
public void ShootArrow()
{
GameObject arrow = Instantiate(realArrowPrefab, transform.position, transform.rotation); // 1
float distance = Vector3.Distance(transform.position, attachedArrow.position); // 2
arrow.GetComponent<Rigidbody>().velocity = arrow.transform.forward * distance * maxShootSpeed; // 3
AudioSource.PlayClipAtPoint(fireSound, transform.position); // 4
GetComponent<RWVR_InteractionObject>().currentController.Vibrate(3500); // 5
arrow.GetComponent<RealArrow>().Launch(); // 6
Disarm(); // 7
}
This might seem like a lot of code, but it's quite simple:
1. Instantiate a new real arrow at the bow's position and rotation.
2. Calculate the distance between the bow and the attached arrow and store it in distance.
3. Set the new arrow's velocity to its forward direction multiplied by distance and maxShootSpeed. The further the string gets pulled back, the more velocity the arrow will have.
4. Play the fire sound at the bow's position.
5. Vibrate the controller that's holding the bow.
6. Call the arrow's Launch() method.
7. Disarm the bow.
Time to set up the bow in the inspector!
Save this script and return to the editor.
Select Bow in the Hierarchy and add a Bow component.
Expand Bow to reveal its children and drag BowArrow to the Attached Arrow field.
Now drag BowMesh to the Bow Skinned Mesh field and set Blend Multiplier to 353.
Drag a RealArrow prefab from the Prefabs folder onto the Real Arrow Prefab field and drag the FireBow sound from the Sounds folder to the Fire Sound.
This is what the Bow component should look like when you’re finished:
Remember how the skinned mesh renderer affected the bow model? Move the BowArrow in the Scene view on its local Z-axis to test what the full bow bend effect looks like:
That’s pretty sweet looking!
You’ll now need to set up the RealArrow to work as intended.
Select RealArrow in the Hierarchy and add a Real Arrow component to it.
Now drag the Box Collider with Is Trigger disabled to the Pickup Collider slot.
Click the Apply button at the top of the Inspector to apply this change to all RealArrow prefabs as well.
The final piece of the puzzle is the special arrow that sits in the bow.
The arrow in the bow needs to be pulled back by the player in order to bend the bow, and then released to fire an arrow.
Create a new C# script inside the Scripts \ RWVR folder and name it RWVR_ArrowInBow. Open it up in a code editor and remove the Start() and Update() methods.
Make this class derive from RWVR_InteractionObject by replacing the following line:
public class RWVR_ArrowInBow : MonoBehaviour
With this:
public class RWVR_ArrowInBow : RWVR_InteractionObject
Add these variables below the class declaration:
public float minimumPosition; // 1
public float maximumPosition; // 2
private Transform attachedBow; // 3
private const float arrowCorrection = 0.3f; // 4
Here's what they're for:
1. minimumPosition is the furthest the arrow can be pulled back along the bow's local Z-axis.
2. maximumPosition is the furthest forward the arrow can sit.
3. attachedBow caches a reference to the bow's transform.
4. arrowCorrection is a small Z-axis offset used to line the arrow up with the controller.
Add the following method below the variable declarations:
public override void Awake()
{
base.Awake();
attachedBow = transform.parent;
}
This calls the base class' Awake() method to cache the transform and stores a reference to the bow in the attachedBow variable.
Add the following method to react while the player holds the trigger button:
public override void OnTriggerIsBeingPressed(RWVR_InteractionController controller) // 1
{
base.OnTriggerIsBeingPressed(controller); // 2
Vector3 arrowInBowSpace = attachedBow.InverseTransformPoint(controller.transform.position); // 3
cachedTransform.localPosition = new Vector3(0, 0, arrowInBowSpace.z + arrowCorrection); // 4
}
Taking it step-by-step:
1. Override OnTriggerIsBeingPressed() and get the controller that's interacting with this arrow as a parameter.
2. Call the base method so the interaction system keeps working as expected.
3. Convert the controller's position into the bow's local space using InverseTransformPoint(). This allows for the arrow to be pulled back correctly, even though the controller isn't perfectly aligned with the bow on its local Z-axis.
4. Move the arrow along the bow's Z-axis to that converted position, adding arrowCorrection to it on its Z-axis to get the correct value.
Now add the following method:
public override void OnTriggerWasReleased(RWVR_InteractionController controller) // 1
{
attachedBow.GetComponent<Bow>().ShootArrow(); // 2
currentController.Vibrate(3500); // 3
base.OnTriggerWasReleased(controller); // 4
}
This method is called when the arrow is released:
1. Override the OnTriggerWasReleased() method and get the controller that's interacting with this arrow as a parameter.
2. Tell the bow to shoot an arrow.
3. Vibrate the currentController.
4. Call the base method to finish releasing the arrow from the controller.
Add this method below OnTriggerWasReleased():
void LateUpdate()
{
// Limit position
float zPos = cachedTransform.localPosition.z; // 1
zPos = Mathf.Clamp(zPos, minimumPosition, maximumPosition); // 2
cachedTransform.localPosition = new Vector3(0, 0, zPos); // 3
//Limit rotation
cachedTransform.localRotation = Quaternion.Euler(Vector3.zero); // 4
if (currentController)
{
currentController.Vibrate(System.Convert.ToUInt16(500 * -zPos)); // 5
}
}
LateUpdate() is called at the end of every frame. It's used to limit the position and rotation of the arrow and vibrates the controller to simulate the effort needed to pull the arrow back:
1. Store the arrow's local Z position in zPos.
2. Clamp zPos between the minimum and maximum allowed position.
3. Apply the limited position back to the arrow.
4. Limit the rotation by resetting the local rotation to Vector3.zero.
5. If a controller is holding the arrow, vibrate it harder the further the arrow is pulled back.
Save this script and return to the editor.
Unfold Bow in the Hierarchy and select its child BowArrow. Add a RWVR_Arrow In Bow component to it and set Minimum Position to -0.4.
Save the scene, and get your HMD and controllers ready to test out the game!
Pick up the bow with one controller and pull the arrow back with the other one.
Release the controller to shoot an arrow, and try rearming the bow by dragging an arrow from the table onto it.
The last thing you’ll create in this tutorial is a backpack (or a quiver, in this case) from which you can grab new arrows to load in the bow.
For that to work, you’ll need some new scripts.
In order to know if the player is holding certain objects with the controllers, you’ll need a controller manager which references both controllers.
Create a new C# script in the Scripts/RWVR folder and name it RWVR_ControllerManager. Open it in a code editor.
Remove the Start() and Update() methods and add these variables:
public static RWVR_ControllerManager Instance; // 1
public RWVR_InteractionController leftController; // 2
public RWVR_InteractionController rightController; // 3
Here's what the above variables are for:
1. A public static reference to this script so it can be called from all other scripts.
2. A reference to the left controller.
3. A reference to the right controller.
Add the following method below the variables:
private void Awake()
{
Instance = this;
}
This saves a reference to this script in the Instance variable.
Now add this method below Awake():
public bool AnyControllerIsInteractingWith<T>() // 1
{
if (leftController.InteractionObject && leftController.InteractionObject.GetComponent<T>() != null) // 2
{
return true;
}
if (rightController.InteractionObject && rightController.InteractionObject.GetComponent<T>() != null) // 3
{
return true;
}
return false; // 4
}
This helper method checks if any of the controllers have a certain component attached to them:
1. The method is generic, so it can check for any component type.
2. If the left controller is holding an interaction object that has the given component, return true.
3. Do the same check for the right controller.
4. If neither controller is interacting with such an object, return false.
Save this script and return to the editor.
The final script is for the backpack itself.
Create a new C# script in the Scripts \ RWVR folder and name it RWVR_SpecialObjectSpawner.
Open it in your favorite code editor and replace this line:
public class RWVR_SpecialObjectSpawner : MonoBehaviour
With this:
public class RWVR_SpecialObjectSpawner : RWVR_InteractionObject
This makes the backpack inherit from RWVR_InteractionObject.
Now remove both the Start() and Update() methods and add the following variables in their place:
public GameObject arrowPrefab; // 1
public List<GameObject> randomPrefabs = new List<GameObject>(); // 2
These are the GameObjects which will be spawned out of the backpack.
Add the following method:
private void SpawnObjectInHand(GameObject prefab, RWVR_InteractionController controller) // 1
{
GameObject spawnedObject = Instantiate(prefab, controller.snapColliderOrigin.position, controller.transform.rotation); // 2
controller.SwitchInteractionObjectTo(spawnedObject.GetComponent<RWVR_InteractionObject>()); // 3
OnTriggerWasReleased(controller); // 4
}
This method attaches an object to the player's controller, as if they grabbed it from behind their back:
1. prefab is the GameObject that will be spawned, while controller is the controller it will attach to.
2. Instantiate the prefab at the controller's snap position and store it in spawnedObject.
3. Switch the controller's InteractionObject to the object that was just spawned.
4. Call OnTriggerWasReleased() so the spawner lets go of the controller, which is now holding the spawned object.
The next method decides what kind of object should be spawned when the player presses the trigger button over the backpack.
Add the following method below SpawnObjectInHand():
public override void OnTriggerWasPressed(RWVR_InteractionController controller) // 1
{
base.OnTriggerWasPressed(controller); // 2
if (RWVR_ControllerManager.Instance.AnyControllerIsInteractingWith<Bow>()) // 3
{
SpawnObjectInHand(arrowPrefab, controller);
}
else // 4
{
SpawnObjectInHand(randomPrefabs[UnityEngine.Random.Range(0, randomPrefabs.Count)], controller);
}
}
Here's what each part does:
1. Override the OnTriggerWasPressed() method.
2. Call the base OnTriggerWasPressed() method.
3. If either controller is holding the bow, spawn an arrow in the player's hand.
4. Otherwise, spawn a random prefab from the randomPrefabs list.
Save this script and return to the editor.
Create a new Cube in the Hierarchy, name it BackPack and drag it onto [CameraRig]\ Camera (head) to parent it to the player’s head.
Set its position and scale to (X:0, Y:-0.25, Z:-0.45) and (X:0.6, Y:0.5, Z:0.5) respectively.
The backpack is now positioned right behind and under the player’s head.
Set the Box Collider's Is Trigger to true; this object doesn't need to collide with anything.
Set Cast Shadows to Off and disable Receive Shadows on the Mesh Renderer component.
Now add a RWVR_Special Object Spawner component and drag a RealArrow from the Prefabs folder onto the Arrow Prefab field.
Finally, drag a Book and a Die prefab from the same folder to the Random Prefabs list.
Now add a new empty GameObject, name it ControllerManager and add a RWVR_Controller Manager component to it.
Expand [CameraRig] and drag Controller (left) to the Left Controller slot and Controller (right) to the Right Controller slot.
Now save the scene and test out the backpack. Try grabbing behind your back and see what stuff you’ll pull out!
That concludes this tutorial! You now have a fully functional bow and arrow and an interaction system you can expand with ease!
You can download the finished project here.
In this tutorial you've learned how to create a fully working bow and arrow and an object-spawning backpack for your HTC Vive game.
If you’re interested in learning more about creating killer games with Unity, check out our book, Unity Games By Tutorials.
In this book, you create four complete games from scratch.
By the end of this book, you’ll be ready to make your own games for Windows, macOS, iOS, and more!
This book is for complete beginners to Unity, as well as for those who’d like to bring their Unity skills to a professional level. The book assumes you have some prior programming experience (in a language of your choice).
If you have any comments or suggestions, please join the discussion below!
The post Advanced VR Mechanics With Unity and the HTC Vive – Part 2 appeared first on Ray Wenderlich.
Note: This tutorial requires Xcode 9 Beta 1 or later, Swift 4 and iOS 11.
Machine learning is all the rage. Many have heard about it, but few know what it is.
This iOS machine learning tutorial will introduce you to Core ML and Vision, two brand-new frameworks introduced in iOS 11.
Specifically, you’ll learn how to use these new APIs with the Places205-GoogLeNet model to classify the scene of an image.
Download the starter project. It already contains a user interface to display an image and let the user pick another image from their photo library. So you can focus on implementing the machine learning and vision aspects of the app.
Build and run your project; you’ll see an image of a city at night, and a button:
Choose another image from the photo library in the Photos app. This starter project’s Info.plist already has a Privacy – Photo Library Usage Description, so you might be prompted to allow usage.
The gap between the image and the button contains a label, where you’ll display the model’s classification of the image’s scene.
Machine learning is a type of artificial intelligence where computers “learn” without being explicitly programmed. Instead of coding an algorithm, machine learning tools enable computers to develop and refine algorithms, by finding patterns in huge amounts of data.
Since the 1950s, AI researchers have developed many approaches to machine learning. Apple’s Core ML framework supports neural networks, tree ensembles, support vector machines, generalized linear models, feature engineering and pipeline models. However, neural networks have produced many of the most spectacular recent successes, starting with Google’s 2012 use of YouTube videos to train its AI to recognize cats and people. Only five years later, Google is sponsoring a contest to identify 5000 species of plants and animals. Apps like Siri and Alexa also owe their existence to neural networks.
A neural network tries to model human brain processes with layers of nodes, linked together in different ways. Each additional layer requires a large increase in computing power: Inception v3, an object-recognition model, has 48 layers and approximately 20 million parameters. But the calculations are basically matrix multiplication, which GPUs handle extremely efficiently. The falling cost of GPUs enables people to create multilayer deep neural networks, hence the term deep learning.
Neural networks need a large amount of training data, ideally representing the full range of possibilities. The explosion in user-generated data has also contributed to the renaissance of machine learning.
Training the model means supplying the neural network with training data, and letting it calculate a formula for combining the input parameters to produce the output(s). Training happens offline, usually on machines with many GPUs.
To use the model, you give it new inputs, and it calculates outputs: this is called inferencing. Inference still requires a lot of computing, to calculate outputs from new inputs. Doing these calculations on handheld devices is now possible because of frameworks like Metal.
As you’ll see at the end of this tutorial, deep learning is far from perfect. It’s really hard to construct a truly representative set of training data, and it’s all too easy to over-train the model so it gives too much weight to quirky characteristics.
Apple introduced NSLinguisticTagger in iOS 5 to analyze natural language. Metal came in iOS 8, providing low-level access to the device's GPU.
Last year, Apple added Basic Neural Network Subroutines (BNNS) to its Accelerate framework, enabling developers to construct neural networks for inferencing (not training).
And this year, Apple has given you Core ML and Vision!
You can also wrap any image-analysis Core ML model in a Vision model, which is what you’ll do in this tutorial. Because these two frameworks are built on Metal, they run efficiently on the device, so you don’t need to send your users’ data to a server.
This tutorial uses the Places205-GoogLeNet model, which you can download from Apple’s Machine Learning page. Scroll down to Working with Models, and download the first one. While you’re there, take note of the other three models, which all detect objects — trees, animals, people, etc. — in an image.
After you download GoogLeNetPlaces.mlmodel, drag it from Finder into the Resources group in your project’s Project Navigator:
Select this file, and wait for a moment. An arrow will appear when Xcode has generated the model class:
Click the arrow to see the generated class:
Xcode has generated input and output classes, and the main class GoogLeNetPlaces, which has a model property and two prediction methods.
GoogLeNetPlacesInput has a sceneImage property of type CVPixelBuffer. Whazzat!?, we all cry together, but fear not, the Vision framework will take care of converting our familiar image formats into the correct input type. :]
The Vision framework also converts GoogLeNetPlacesOutput properties into its own results type, and manages calls to prediction methods, so out of all this generated code, your code will use only the model property.
Finally, you get to write some code! Open ViewController.swift, and import the two frameworks, just below import UIKit:
import CoreML
import Vision
Next, add the following extension below the IBActions extension:
// MARK: - Methods
extension ViewController {
func detectScene(image: CIImage) {
answerLabel.text = "detecting scene..."
// Load the ML model through its generated class
guard let model = try? VNCoreMLModel(for: GoogLeNetPlaces().model) else {
fatalError("can't load Places ML model")
}
}
}
Here’s what you’re doing:
First, you display a message so the user knows something is happening.
The designated initializer of GoogLeNetPlaces throws an error, so you must use try when creating it. VNCoreMLModel is simply a container for a Core ML model used with Vision requests.
The standard Vision workflow is to create a model, create one or more requests, and then create and run a request handler. You’ve just created the model, so your next step is to create a request.
Add the following lines to the end of detectScene(image:):
// Create a Vision request with completion handler
let request = VNCoreMLRequest(model: model) { [weak self] request, error in
guard let results = request.results as? [VNClassificationObservation],
let topResult = results.first else {
fatalError("unexpected result type from VNCoreMLRequest")
}
// Update UI on main queue
let article = (self?.vowels.contains(topResult.identifier.first!))! ? "an" : "a"
DispatchQueue.main.async { [weak self] in
self?.answerLabel.text = "\(Int(topResult.confidence * 100))% it's \(article) \(topResult.identifier)"
}
}
VNCoreMLRequest is an image analysis request that uses a Core ML model to do the work. Its completion handler receives request and error objects.
You check that request.results is an array of VNClassificationObservation objects, which is what the Vision framework returns when the Core ML model is a classifier, rather than a predictor or image processor. And GoogLeNetPlaces is a classifier, because it predicts only one feature: the image's scene classification.
A VNClassificationObservation has two properties: identifier, a String, and confidence, a number between 0 and 1 that gives the probability the classification is correct. When using an object-detection model, you would probably look at only those objects with confidence greater than some threshold, such as 30%, as in the sketch below.
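Here's a minimal sketch of that idea (not part of the tutorial's project; the topClassifications helper name is made up):
import Vision

// Hypothetical helper: keep only the observations whose confidence clears a threshold.
// Assumes you already have [VNClassificationObservation] results from a VNCoreMLRequest.
func topClassifications(from results: [VNClassificationObservation],
                        minimumConfidence: VNConfidence = 0.3) -> [VNClassificationObservation] {
    return results.filter { $0.confidence >= minimumConfidence }
}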
You then take the first result, which will have the highest confidence value, and set the indefinite article to “a” or “an”, depending on the identifier’s first letter. Finally, you dispatch back to the main queue to update the label. You’ll soon see the classification work happens off the main queue, because it can be slow.
Now, on to the third step: creating and running the request handler.
Add the following lines to the end of detectScene(image:):
// Run the Core ML GoogLeNetPlaces classifier on global dispatch queue
let handler = VNImageRequestHandler(ciImage: image)
DispatchQueue.global(qos: .userInteractive).async {
do {
try handler.perform([request])
} catch {
print(error)
}
}
VNImageRequestHandler is the standard Vision framework request handler; it isn't specific to Core ML models. You give it the image that came into detectScene(image:) as an argument. And then you run the handler by calling its perform method, passing an array of requests. In this case, you have only one request.
The perform method throws an error, so you wrap it in a do-catch block.
Whew, that was a lot of code! But now you simply have to call detectScene(image:) in two places.
Add the following lines at the end of viewDidLoad() and at the end of imagePickerController(_:didFinishPickingMediaWithInfo:):
guard let ciImage = CIImage(image: image) else {
fatalError("couldn't convert UIImage to CIImage")
}
detectScene(image: ciImage)
Now build and run. It shouldn’t take long to see a classification:
Well, yes, there are skyscrapers in the image. There’s also a train.
Tap the button, and select the first image in the photo library: a close-up of some sun-dappled leaves:
Hmmm, maybe if you squint, you can imagine Nemo or Dory swimming around? But at least you know the “a” vs. “an” thing works. ;]
This tutorial's project is similar to the sample project for WWDC 2017 Session 506 Vision Framework: Building on Core ML. The Vision + ML Example app uses the MNIST classifier, which recognizes hand-written numerals — useful for automating postal sorting. It also uses the native Vision framework method VNDetectRectanglesRequest, and includes Core Image code to correct the perspective of detected rectangles.
You can also download a different sample project from the Core ML documentation page. Inputs to the MarsHabitatPricePredictor model are just numbers, so the code uses the generated MarsHabitatPricer methods and properties directly, instead of wrapping the model in a Vision model. By changing the parameters one at a time, it's easy to see the model is simply a linear regression:
137 * solarPanels + 653.50 * greenHouses + 5854 * acres
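Written out as a plain Swift function, purely to illustrate the point (this helper isn't part of the sample project), that formula is just:
// The linear formula above, expressed directly in Swift.
func marsHabitatPrice(solarPanels: Double, greenHouses: Double, acres: Double) -> Double {
    return 137 * solarPanels + 653.50 * greenHouses + 5854 * acres
}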
You can download the complete project for this tutorial here. If the model shows up as missing, replace it with the one you downloaded.
You're now well-equipped to integrate an existing model into your app. Here are some resources that cover this in more detail:
From 2016:
Thinking about building your own model? I’m afraid that’s way beyond the scope of this tutorial (and my expertise). These resources might help you get started:
Last but not least, I really learned a lot from this concise history of AI from Andreessen Horowitz’s Frank Chen: AI and Deep Learning a16z podcast.
I hope you found this tutorial useful. Feel free to join the discussion below!
The post Core ML and Vision: Machine Learning in iOS 11 Tutorial appeared first on Ray Wenderlich.
Update note: This tutorial has been updated to Swift 4 and Xcode 9 by Lyndsey Scott. The original tutorial was written by Marin Todorov.
Core Text is a low-level text engine that when used alongside the Core Graphics/Quartz framework, gives you fine-grained control over layout and formatting.
With iOS 7, Apple released a high-level library called Text Kit, which stores, lays out and displays text with various typesetting characteristics. Although Text Kit is powerful and usually sufficient when laying out text, Core Text can provide more control. For example, if you need to work directly with Quartz, use Core Text. If you need to build your own layout engines, Core Text will help you generate “glyphs and position them relative to each other with all the features of fine typesetting.”
This tutorial takes you through the process of creating a very simple magazine application using Core Text… for Zombies!
Oh, and Zombie Monthly’s readership has kindly agreed not to eat your brains as long as you’re busy using them for this tutorial… So you may want to get started soon! *gulp*
Note: To get the most out of this tutorial, you need to know the basics of iOS development first. If you’re new to iOS development, you should check out some of the other tutorials on this site first.
Open Xcode, create a new Swift universal project with the Single View Application Template and name it CoreTextMagazine.
Next, add the Core Text framework to your project:
Now that the project is set up, it's time to start coding.
For starters, you'll create a custom UIView, which will use Core Text in its draw(_:) method.
Create a new Cocoa Touch Class file named CTView subclassing UIView.
Open CTView.swift, and add the following under import UIKit:
import CoreText
Next, set this new custom view as the main view in the application. Open Main.storyboard, open the Utilities menu on the right-hand side, then select the Identity Inspector icon in its top toolbar. In the left-hand menu of the Interface Builder, select View. The Class field of the Utilities menu should now say UIView. To subclass the main view controller’s view, type CTView into the Class field and hit Enter.
Next, open CTView.swift and replace the commented out draw(_:) with the following:
//1
override func draw(_ rect: CGRect) {
// 2
guard let context = UIGraphicsGetCurrentContext() else { return }
// 3
let path = CGMutablePath()
path.addRect(bounds)
// 4
let attrString = NSAttributedString(string: "Hello World")
// 5
let framesetter = CTFramesetterCreateWithAttributedString(attrString as CFAttributedString)
// 6
let frame = CTFramesetterCreateFrame(framesetter, CFRangeMake(0, attrString.length), path, nil)
// 7
CTFrameDraw(frame, context)
}
Let’s go over this step-by-step.
1. draw(_:) will run automatically to render the view's backing layer.
2. Unwrap the current graphics context you'll use for drawing.
3. Create a path that bounds the drawing area, the view's entire bounds in this case.
4. In Core Text, you use NSAttributedString, as opposed to String or NSString, to hold the text and its attributes. Initialize "Hello World" as an attributed string.
5. CTFramesetterCreateWithAttributedString creates a CTFramesetter with the supplied attributed string. CTFramesetter will manage your font references and your drawing frames.
6. Create a CTFrame by having CTFramesetterCreateFrame render the entire string within path.
7. CTFrameDraw draws the CTFrame in the given context.
That's all you need to draw some simple text! Build, run and see the result.
Uh-oh… That doesn’t seem right, does it? Like many of the low level APIs, Core Text uses a Y-flipped coordinate system. To make matters worse, the content is also flipped vertically!
Add the following code directly below the guard let context statement to fix the content orientation:
// Flip the coordinate system
context.textMatrix = .identity
context.translateBy(x: 0, y: bounds.size.height)
context.scaleBy(x: 1.0, y: -1.0)
This code flips the content by applying a transformation to the view’s context.
Build and run the app. Don’t worry about status bar overlap, you’ll learn how to fix this with margins later.
Congrats on your first Core Text app! The zombies are pleased with your progress.
If you're a bit confused about the CTFramesetter and the CTFrame – that's OK because it's time for some clarification. :]
Here’s what the Core Text object model looks like:
When you create a CTFramesetter reference and provide it with an NSAttributedString, an instance of CTTypesetter is automatically created for you to manage your fonts. Next you use the CTFramesetter to create one or more frames in which you'll be rendering text.
When you create a frame, you provide it with the subrange of text to render inside its rectangle. Core Text automatically creates a CTLine for each line of text and a CTRun for each piece of text with the same formatting. For example, Core Text would create a CTRun if you had several words in a row colored red, then another CTRun for the following plain text, then another CTRun for a bold sentence, etc. Core Text creates CTRuns for you based on the attributes of the supplied NSAttributedString. Furthermore, each of these CTRun objects can adopt different attributes, so you have fine control over kerning, ligatures, width, height and more.
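To make that concrete, here's a small sketch (not from the tutorial's project) of an attributed string whose differently-styled ranges would typically become separate CTRuns when Core Text lays it out:
import UIKit

// "Braaains" is red and "delicious" is bold; the text in between keeps the default attributes.
// Each differently-attributed range generally ends up as its own CTRun.
let sample = NSMutableAttributedString(string: "Braaains are delicious")
sample.addAttribute(.foregroundColor, value: UIColor.red, range: NSRange(location: 0, length: 8))
sample.addAttribute(.font, value: UIFont.boldSystemFont(ofSize: 18), range: NSRange(location: 13, length: 9))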
Download and unarchive the zombie magazine materials.
Drag the folder into your Xcode project. When prompted make sure Copy items if needed and Create groups are selected.
To create the app, you’ll need to apply various attributes to the text. You’ll create a simple text markup parser which will use tags to set the magazine’s formatting.
Create a new Cocoa Touch Class file named MarkupParser subclassing NSObject.
First things first, take a quick look at zombies.txt. See how it contains bracketed formatting tags throughout the text? The “img src” tags reference magazine images and the “font color/face” tags determine text color and font.
Open MarkupParser.swift and replace its contents with the following:
import UIKit
import CoreText
class MarkupParser: NSObject {
// MARK: - Properties
var color: UIColor = .black
var fontName: String = "Arial"
var attrString: NSMutableAttributedString!
var images: [[String: Any]] = []
// MARK: - Initializers
override init() {
super.init()
}
// MARK: - Internal
func parseMarkup(_ markup: String) {
}
}
Here you've added properties to hold the font and text color; set their defaults; created a variable to hold the attributed string produced by parseMarkup(_:); and created an array which will eventually hold the dictionary information defining the size, location and filename of images found within the text.
Writing a parser is usually hard work, but this tutorial’s parser will be very simple and support only opening tags — meaning a tag will set the style of the text following it until a new tag is found. The text markup will look like this:
These are <font color="red">red<font color="black"> and <font color="blue">blue <font color="black">words.
and produce output like this:
These are red and blue words.
Let's get parsin'!
Add the following to parseMarkup(_:):
//1
attrString = NSMutableAttributedString(string: "")
//2
do {
let regex = try NSRegularExpression(pattern: "(.*?)(<[^>]+>|\\Z)",
options: [.caseInsensitive,
.dotMatchesLineSeparators])
//3
let chunks = regex.matches(in: markup,
options: NSRegularExpression.MatchingOptions(rawValue: 0),
range: NSRange(location: 0,
length: markup.characters.count))
} catch _ {
}
1. attrString starts out empty, but will eventually contain the parsed markup.
2. Create a regular expression that matches blocks of text followed by a tag (or the end of the string).
3. Search the entire range of the markup for regex matches, then produce an array of the resulting NSTextCheckingResults.
Note: To learn more about regular expressions, check out NSRegularExpression Tutorial.
Now that you've parsed all the text and formatting tags into chunks, you'll loop through chunks to build the attributed string.
But before that, did you notice how matches(in:options:range:) accepts an NSRange as an argument? There are going to be lots of NSRange to Range conversions as you apply NSRegularExpression functions to your markup String. Swift's been a pretty good friend to us all, so it deserves a helping hand.
Still in MarkupParser.swift, add the following extension to the end of the file:
// MARK: - String
extension String {
func range(from range: NSRange) -> Range<String.Index>? {
guard let from16 = utf16.index(utf16.startIndex,
offsetBy: range.location,
limitedBy: utf16.endIndex),
let to16 = utf16.index(from16, offsetBy: range.length, limitedBy: utf16.endIndex),
let from = String.Index(from16, within: self),
let to = String.Index(to16, within: self) else {
return nil
}
return from ..< to
}
}
This function converts the String's starting and ending indices as represented by an NSRange to String.UTF16View.Index format, i.e. the positions in a string's collection of UTF-16 code units; then converts each String.UTF16View.Index to String.Index format; which, when combined, produces Swift's range format: Range. As long as the indices are valid, the method will return the Range representation of the original NSRange.
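For example, a quick usage sketch (with a made-up string, not part of the project) shows the round trip:
// Convert an NSRange covering "Zombie" back into a Swift Range<String.Index>.
let headline = "Zombie Monthly"
if let range = headline.range(from: NSRange(location: 0, length: 6)) {
    print(headline[range]) // prints "Zombie"
}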
Your Swift is now chill. Time to head back to processing the text and tag chunks.
Inside parseMarkup(_:), add the following below let chunks (within the do block):
let defaultFont: UIFont = .systemFont(ofSize: UIScreen.main.bounds.size.height / 40)
//1
for chunk in chunks {
//2
guard let markupRange = markup.range(from: chunk.range) else { continue }
//3
let parts = markup.substring(with: markupRange).components(separatedBy: "<")
//4
let font = UIFont(name: fontName, size: UIScreen.main.bounds.size.height / 40) ?? defaultFont
//5
let attrs = [NSAttributedStringKey.foregroundColor: color, NSAttributedStringKey.font: font] as [NSAttributedStringKey : Any]
let text = NSMutableAttributedString(string: parts[0], attributes: attrs)
attrString.append(text)
}
Taking each numbered comment in turn:
1. Loop through chunks.
2. Get the current NSTextCheckingResult's range, unwrap the Range<String.Index> and proceed with the block as long as it exists.
3. Divide the chunk into parts separated by "<". The first part contains the magazine text and the second part contains the tag (if it exists).
4. Create a font using fontName, currently "Arial" by default, and a size relative to the device screen. If fontName doesn't produce a valid UIFont, set font to the default font.
5. Use parts[0] to create the attributed string, then append that string to the result string.
To process the "font" tag, insert the following after attrString.append(text):
if parts.count <= 1 {
continue
}
let tag = parts[1]
//2
if tag.hasPrefix("font") {
let colorRegex = try NSRegularExpression(pattern: "(?<=color=\")\\w+",
options: NSRegularExpression.Options(rawValue: 0))
colorRegex.enumerateMatches(in: tag,
options: NSRegularExpression.MatchingOptions(rawValue: 0),
range: NSMakeRange(0, tag.characters.count)) { (match, _, _) in
//3
if let match = match,
let range = tag.range(from: match.range) {
let colorSel = NSSelectorFromString(tag.substring(with:range) + "Color")
color = UIColor.perform(colorSel).takeRetainedValue() as? UIColor ?? .black
}
}
//5
let faceRegex = try NSRegularExpression(pattern: "(?<=face=\")[^\"]+",
options: NSRegularExpression.Options(rawValue: 0))
faceRegex.enumerateMatches(in: tag,
options: NSRegularExpression.MatchingOptions(rawValue: 0),
range: NSMakeRange(0, tag.characters.count)) { (match, _, _) in
if let match = match,
let range = tag.range(from: match.range) {
fontName = tag.substring(with: range)
}
}
} //end of font parsing
1. If the chunk contains a tag, store it in tag; otherwise, skip to the next chunk.
2. If tag starts with "font", create a regex to find the font's "color" value, then use that regex to enumerate through tag's matching "color" values. In this case, there should be only one matching color value.
3. If enumerateMatches(in:options:range:using:) returns a valid match with a valid range in tag, find the indicated value (ex. <font color="red"> returns "red") and append "Color" to form a UIColor selector. Perform that selector, then set your class's color to the returned color if it exists, to black if not.
4. Similarly, create a regex to find the font's "face" value. If it finds a match, set fontName to that string.
Great job! Now parseMarkup(_:) can take markup and produce an NSAttributedString for Core Text.
It's time to feed your app to some zombies! I mean, feed some zombies to your app... zombies.txt, that is. ;]
It's actually the job of a UIView to display content given to it, not load content. Open CTView.swift and add the following above draw(_:):
// MARK: - Properties
var attrString: NSAttributedString!
// MARK: - Internal
func importAttrString(_ attrString: NSAttributedString) {
self.attrString = attrString
}
Next, delete let attrString = NSAttributedString(string: "Hello World") from draw(_:).
Here you've created an instance variable to hold an attributed string and a method to set it from elsewhere in your app.
Next, open ViewController.swift and add the following to viewDidLoad():
// 1
guard let file = Bundle.main.path(forResource: "zombies", ofType: "txt") else { return }
do {
let text = try String(contentsOfFile: file, encoding: .utf8)
// 2
let parser = MarkupParser()
parser.parseMarkup(text)
(view as? CTView)?.importAttrString(parser.attrString)
} catch _ {
}
Let’s go over this step-by-step.
1. Load the zombies.txt file into a String.
2. Create the parser, feed it the text, then send the returned attributed string to the ViewController's CTView.
.Build and run the app!
That's awesome! Thanks to about 50 lines of parsing you can simply use a text file to hold the contents of your magazine app.
If you thought a monthly magazine of Zombie news could possibly fit onto one measly page, you'd be very wrong! Luckily Core Text becomes particularly useful when laying out columns, since CTFrameGetVisibleStringRange can tell you how much text will fit into a given frame. Meaning, you can create a column, then once it's full, you can create another column, and so on.
For this app, you'll have to print columns, then pages, then a whole magazine lest you offend the undead, so... time to turn your CTView subclass into a UIScrollView.
Open CTView.swift and change the class CTView line to:
class CTView: UIScrollView {
See that, zombies? The app can now support an eternity of undead adventures! Yep -- with one line, scrolling and paging is now available.
Up until now, you've created your framesetter and frame inside draw(_:), but since you'll have many columns with different formatting, it's better to create individual column instances instead.
Create a new Cocoa Touch Class file named CTColumnView subclassing UIView.
Open CTColumnView.swift and add the following starter code:
import UIKit
import CoreText
class CTColumnView: UIView {
// MARK: - Properties
var ctFrame: CTFrame!
// MARK: - Initializers
required init(coder aDecoder: NSCoder) {
super.init(coder: aDecoder)!
}
required init(frame: CGRect, ctframe: CTFrame) {
super.init(frame: frame)
self.ctFrame = ctframe
backgroundColor = .white
}
// MARK: - Life Cycle
override func draw(_ rect: CGRect) {
guard let context = UIGraphicsGetCurrentContext() else { return }
context.textMatrix = .identity
context.translateBy(x: 0, y: bounds.size.height)
context.scaleBy(x: 1.0, y: -1.0)
CTFrameDraw(ctFrame, context)
}
}
This code renders a CTFrame just as you'd originally done in CTView. The custom initializer, init(frame:ctframe:), sets:
1. The frame of the column view.
2. The CTFrame to draw into the context.
It also sets the column's background color to white so the text draws on an opaque background.
Next, create a new Swift file named CTSettings.swift which will hold your column settings.
Replace the contents of CTSettings.swift with the following:
import UIKit
import Foundation
class CTSettings {
//1
// MARK: - Properties
let margin: CGFloat = 20
var columnsPerPage: CGFloat!
var pageRect: CGRect!
var columnRect: CGRect!
// MARK: - Initializers
init() {
//2
columnsPerPage = UIDevice.current.userInterfaceIdiom == .phone ? 1 : 2
//3
pageRect = UIScreen.main.bounds.insetBy(dx: margin, dy: margin)
//4
columnRect = CGRect(x: 0,
y: 0,
width: pageRect.width / columnsPerPage,
height: pageRect.height).insetBy(dx: margin, dy: margin)
}
}
Stepping through CTSettings:
1. The properties hold the page margin, the number of columns per page, the page frame and the column frame.
2. Show one column per page on iPhone and two columns per page on iPad.
3. Inset the entire screen bounds by the margin to get pageRect.
4. Divide pageRect's width by the number of columns per page and inset that new frame with the margin for columnRect.
Open CTView.swift and replace the entire contents with the following:
import UIKit
import CoreText
class CTView: UIScrollView {
//1
func buildFrames(withAttrString attrString: NSAttributedString,
andImages images: [[String: Any]]) {
//2
isPagingEnabled = true
//3
let framesetter = CTFramesetterCreateWithAttributedString(attrString as CFAttributedString)
//4
var pageView = UIView()
var textPos = 0
var columnIndex: CGFloat = 0
var pageIndex: CGFloat = 0
let settings = CTSettings()
//5
while textPos < attrString.length {
}
}
}
Taking each numbered comment in turn:
1. buildFrames(withAttrString:andImages:) will create the CTColumnViews and then add them to the scrollview.
2. Enable paging on the scrollview so it snaps from page to page.
3. The CTFramesetter framesetter will create each column's CTFrame of attributed text.
4. UIView pageViews will serve as a container for each page's column subviews; textPos will keep track of the next character; columnIndex will keep track of the current column; pageIndex will keep track of the current page; and settings gives you access to the app's margin size, columns per page, page frame and column frame settings.
5. You'll loop through attrString and lay out the text column by column, until the current text position reaches the end.
Time to start looping attrString. Add the following within while textPos < attrString.length {:
//1
if columnIndex.truncatingRemainder(dividingBy: settings.columnsPerPage) == 0 {
columnIndex = 0
pageView = UIView(frame: settings.pageRect.offsetBy(dx: pageIndex * bounds.width, dy: 0))
addSubview(pageView)
//2
pageIndex += 1
}
//3
let columnXOrigin = pageView.frame.size.width / settings.columnsPerPage
let columnOffset = columnIndex * columnXOrigin
let columnFrame = settings.columnRect.offsetBy(dx: columnOffset, dy: 0)
1. If the column index is a multiple of the number of columns per page, the page is full, so reset the column index and create a new page view using settings.pageRect, offsetting its x origin by the current page index multiplied by the width of the screen; so within the paging scrollview, each magazine page will be to the right of the previous one.
2. Increment pageIndex.
3. Divide pageView's width by settings.columnsPerPage to get the first column's x origin; multiply that origin by the column index to get the column offset; then create the frame of the current column by taking the standard columnRect and offsetting its x origin by columnOffset.
Next, add the following below the columnFrame initialization:
//1
let path = CGMutablePath()
path.addRect(CGRect(origin: .zero, size: columnFrame.size))
let ctframe = CTFramesetterCreateFrame(framesetter, CFRangeMake(textPos, 0), path, nil)
//2
let column = CTColumnView(frame: columnFrame, ctframe: ctframe)
pageView.addSubview(column)
//3
let frameRange = CTFrameGetVisibleStringRange(ctframe)
textPos += frameRange.length
//4
columnIndex += 1
1. Create a CGMutablePath the size of the column, then, starting from textPos, render a new CTFrame with as much text as can fit.
2. Create a CTColumnView with a CGRect columnFrame and CTFrame ctframe, then add the column to pageView.
3. Use CTFrameGetVisibleStringRange(_:) to calculate the range of text contained within the column, then increment textPos by that range length to reflect the current text position.
4. Increment the column index for the next pass through the loop.
Lastly, set the scroll view's content size after the loop:
contentSize = CGSize(width: CGFloat(pageIndex) * bounds.size.width,
height: bounds.size.height)
By setting the content size to the screen width times the number of pages, the zombies can now scroll through to the end.
Open ViewController.swift, and replace
(view as? CTView)?.importAttrString(parser.attrString)
with the following:
(view as? CTView)?.buildFrames(withAttrString: parser.attrString, andImages: parser.images)
Build and run the app on an iPad. Check that double column layout! Drag right and left to go between pages. Lookin' good. :]
You've got columns and formatted text, but you're missing images. Drawing images with Core Text isn't so straightforward - it's a text framework after all - but with the help of the markup parser you've already created, adding images shouldn't be too bad.
Although Core Text can't draw images, as a layout engine, it can leave empty spaces to make room for images. By setting a CTRun's delegate, you can determine that CTRun's ascent space, descent space and width. Like so:
When Core Text reaches a CTRun with a CTRunDelegate, it asks the delegate, "How much space should I leave for this chunk of data?" By setting these properties in the CTRunDelegate, you can leave holes in the text for your images.
First add support for the "img" tag. Open MarkupParser.swift and find "} //end of font parsing". Add the following immediately after:
//1
else if tag.hasPrefix("img") {
var filename:String = ""
let imageRegex = try NSRegularExpression(pattern: "(?<=src=\")[^\"]+",
options: NSRegularExpression.Options(rawValue: 0))
imageRegex.enumerateMatches(in: tag,
options: NSRegularExpression.MatchingOptions(rawValue: 0),
range: NSMakeRange(0, tag.characters.count)) { (match, _, _) in
if let match = match,
let range = tag.range(from: match.range) {
filename = tag.substring(with: range)
}
}
//2
let settings = CTSettings()
var width: CGFloat = settings.columnRect.width
var height: CGFloat = 0
if let image = UIImage(named: filename) {
height = width * (image.size.height / image.size.width)
// 3
if height > settings.columnRect.height - font.lineHeight {
height = settings.columnRect.height - font.lineHeight
width = height * (image.size.width / image.size.height)
}
}
}
1. If tag starts with "img", use a regex to search for the image's "src" value, i.e. the filename.
2. Set the image's width to the width of the column, and set its height so the image keeps its aspect ratio.
3. If the image is too tall for its column, set the height to fit the column and scale the width down to maintain the aspect ratio. Because the line holding the image must also leave room for a line of text, the height can't exceed settings.columnRect.height - font.lineHeight.
Next, add the following immediately after the if let image block:
//1
images += [["width": NSNumber(value: Float(width)),
"height": NSNumber(value: Float(height)),
"filename": filename,
"location": NSNumber(value: attrString.length)]]
//2
struct RunStruct {
let ascent: CGFloat
let descent: CGFloat
let width: CGFloat
}
let extentBuffer = UnsafeMutablePointer<RunStruct>.allocate(capacity: 1)
extentBuffer.initialize(to: RunStruct(ascent: height, descent: 0, width: width))
//3
var callbacks = CTRunDelegateCallbacks(version: kCTRunDelegateVersion1, dealloc: { (pointer) in
}, getAscent: { (pointer) -> CGFloat in
let d = pointer.assumingMemoryBound(to: RunStruct.self)
return d.pointee.ascent
}, getDescent: { (pointer) -> CGFloat in
let d = pointer.assumingMemoryBound(to: RunStruct.self)
return d.pointee.descent
}, getWidth: { (pointer) -> CGFloat in
let d = pointer.assumingMemoryBound(to: RunStruct.self)
return d.pointee.width
})
//4
let delegate = CTRunDelegateCreate(&callbacks, extentBuffer)
//5
let attrDictionaryDelegate = [(kCTRunDelegateAttributeName as NSAttributedStringKey): (delegate as Any)]
attrString.append(NSAttributedString(string: " ", attributes: attrDictionaryDelegate))
1. Append a Dictionary containing the image's size, filename and text location to images.
2. Define a RunStruct to hold the properties that will delineate the empty spaces. Then initialize a pointer to contain a RunStruct with an ascent equal to the image height and a width property equal to the image width.
3. Create a CTRunDelegateCallbacks that returns the ascent, descent and width properties belonging to pointers of type RunStruct.
4. Use CTRunDelegateCreate to create a delegate instance binding the callbacks and the data parameter together.
5. Create an attributed dictionary containing the delegate instance, then append a single space to attrString which holds the position and sizing information for the hole in the text.
Now that MarkupParser is handling "img" tags, you'll need to adjust CTColumnView and CTView to render them.
Open CTColumnView.swift. Add the following below var ctFrame: CTFrame! to hold the column's images and frames:
var images: [(image: UIImage, frame: CGRect)] = []
Next, add the following to the bottom of draw(_:):
for imageData in images {
if let image = imageData.image.cgImage {
let imgBounds = imageData.frame
context.draw(image, in: imgBounds)
}
}
Here you loop through each image and draw it into the context within its proper frame.
Next, open CTView.swift and add the following property to the top of the class:
// MARK: - Properties
var imageIndex: Int!
imageIndex will keep track of the current image index as you draw the CTColumnViews.
Next, add the following to the top of buildFrames(withAttrString:andImages:):
imageIndex = 0
This marks the first element of the images array.
Next, add the following method, attachImagesWithFrame(_:ctframe:margin:columnView:), below buildFrames(withAttrString:andImages:):
func attachImagesWithFrame(_ images: [[String: Any]],
ctframe: CTFrame,
margin: CGFloat,
columnView: CTColumnView) {
//1
let lines = CTFrameGetLines(ctframe) as NSArray
//2
var origins = [CGPoint](repeating: .zero, count: lines.count)
CTFrameGetLineOrigins(ctframe, CFRangeMake(0, 0), &origins)
//3
var nextImage = images[imageIndex]
guard var imgLocation = nextImage["location"] as? Int else {
return
}
//4
for lineIndex in 0..<lines.count {
let line = lines[lineIndex] as! CTLine
//5
if let glyphRuns = CTLineGetGlyphRuns(line) as? [CTRun],
let imageFilename = nextImage["filename"] as? String,
let img = UIImage(named: imageFilename) {
for run in glyphRuns {
}
}
}
}
1. Get an array of ctframe's CTLine objects.
2. Use CTFrameGetLineOrigins to copy ctframe's line origins into the origins array. By setting a range with a length of 0, it will know to traverse the entire CTFrame.
3. Set nextImage to contain the attributed data of the current image. If nextImage contains the image's location, unwrap it and continue; otherwise, return early.
4. Loop through the text's lines.
5. If the line's glyph runs, the image's filename and the image itself all exist, loop through those glyph runs.
:
// 1
let runRange = CTRunGetStringRange(run)
if runRange.location > imgLocation || runRange.location + runRange.length <= imgLocation {
continue
}
//2
var imgBounds: CGRect = .zero
var ascent: CGFloat = 0
imgBounds.size.width = CGFloat(CTRunGetTypographicBounds(run, CFRangeMake(0, 0), &ascent, nil, nil))
imgBounds.size.height = ascent
//3
let xOffset = CTLineGetOffsetForStringIndex(line, CTRunGetStringRange(run).location, nil)
imgBounds.origin.x = origins[lineIndex].x + xOffset
imgBounds.origin.y = origins[lineIndex].y
//4
columnView.images += [(image: img, frame: imgBounds)]
//5
imageIndex! += 1
if imageIndex < images.count {
nextImage = images[imageIndex]
imgLocation = (nextImage["location"] as AnyObject).intValue
}
CTRunGetTypographicBounds
and set the height to the found ascent.CTLineGetOffsetForStringIndex
then add it to the imgBounds
' origin.CTColumnView
.nextImage
and imgLocation
so they refer to that next image.OK! Great! Almost there - one final step.
Add the following right above pageView.addSubview(column) inside buildFrames(withAttrString:andImages:) to attach images if they exist:
if images.count > imageIndex {
attachImagesWithFrame(images, ctframe: ctframe, margin: settings.margin, columnView: column)
}
Build and run on both iPhone and iPad!
Congrats! As thanks for all that hard work, the zombies have spared your brains! :]
Check out the finished project here.
As mentioned in the intro, Text Kit can usually replace Core Text; so try writing this same tutorial with Text Kit to see how it compares. That said, this Core Text lesson won't be in vain! Text Kit offers toll free bridging to Core Text so you can easily cast between the frameworks as needed.
Have any questions, comments or suggestions? Join in the forum discussion below!
The post Core Text Tutorial for iOS: Making a Magazine App appeared first on Ray Wenderlich.
In this screencast, you'll learn how to use the Charts framework to control the appearance of a line chart and bar chart.
The post Screencast: Charts: Format & Style appeared first on Ray Wenderlich.
Recently, I updated my Beginning Realm on iOS course, which introduces Realm, a popular cross-platform mobile database.
If you’re ready to use Realm in your real-world projects, I’m excited to announce that an update to my course, Intermediate Realm on iOS, is available today! This course is fully up-to-date with Swift 3, Xcode 8, and iOS 10.
This 7-part course covers a number of essential Realm features for production apps such as bundled, multiple, and encrypted Realm files, as well as schema migrations across several versions of published apps.
Let’s see what’s inside!
Video 1: Introduction
In this video, you will learn what topics will be covered in the Intermediate Realm on iOS video course.
Video 2: Bundled Data
Learn how to bundle a Realm file of initial data with your app, so your users can start working with it immediately upon first launch.
Video 3: Multiple Realm Files
Split your app’s data persistence needs across several files to isolate data or just use different Realm features per file.
Video 4: Encrypted Realms
Sensitive data like medical records or financial information should be protected well – Realm makes that a breeze with built-in encryption.
Video 5: Migrations Part 1
Your app changes from version to version and so does your database – learn how to migrate Realm data to newer schema versions.
Video 6: Migrations Part 2
Building on Part 1, learn how to handle more complex migrations across several app versions.
Video 7: Conclusion
In this course’s gripping conclusion you will look back at what you’ve learned and see where to go next.
Want to check out the course? You can watch the introduction for free!
The rest of the course is for raywenderlich.com subscribers only. Here’s how you can get access:
There’s much more in store for raywenderlich.com subscribers – if you’re curious, you can check out our full schedule of upcoming courses.
I hope you enjoy our new course, and stay tuned for many more new courses and updates to come!
The post Updated Course: Intermediate Realm on iOS appeared first on Ray Wenderlich.
The answer is — yes!
And on top of that, we’ll be releasing these books this year as free updates for existing PDF customers!
Here are the books we’ll be updating:
That’s 9 free updates in one year! You won’t find that kind of value anywhere else.
If you purchase any of these PDF books from our online store, you’ll get the existing iOS 10/Swift 3/Xcode 8 edition — but you’ll automatically receive a free update to the iOS 11/Swift 4/Xcode 9 edition once it’s available.
We’re targeting Fall 2017 for the release of the new editions of the books, so stay tuned for updates.
While you’re waiting, I suggest you check out some of the great Swift 4 and iOS 11 material we’ve already released:
Happy reading!
The post Will raywenderlich.com Books be Updated for Swift 4 and iOS 11? appeared first on Ray Wenderlich.
iOS 10’s new Speech Recognition API lets your app transcribe live or pre-recorded audio. It leverages the same speech recognition engine used by Siri and Keyboard Dictation, but provides much more control and improved access.
The engine is fast and accurate and can currently interpret over 50 languages and dialects. It even adapts results to the user using information about their contacts, installed apps, media and various other pieces of data.
Audio fed to a recognizer is transcribed in near real time, and results are provided incrementally. This lets you react to voice input very quickly, regardless of context, unlike Keyboard Dictation, which is tied to a specific input object.
Speech Recognizer creates some truly amazing possibilities in your apps. For example, you could create an app that takes a photo when you say “cheese”. You could also create an app that could automatically transcribe audio from Simpsons episodes so you could search for your favorite lines.
In this speech recognition tutorial for iOS, you’ll build an app called Gangstribe that will transcribe some pretty hardcore (hilarious) gangster rap recordings using speech recognition. It will also get users in the mood to record their own rap hits with a live audio transcriber that draws emojis on their faces based on what they say. :]
The section on live recordings will use AVAudioEngine. If you haven’t used AVAudioEngine before, you may want to familiarize yourself with that framework first. The 2014 WWDC session AVAudioEngine in Practice is a great intro to this, and can be found at apple.co/28tATc1. This session video explains many of the systems and terminology we’ll use in this speech recognition tutorial for iOS.
The Speech Recognition framework doesn’t work in the simulator, so be sure to use a real device with iOS 10 (or later) for this speech recognition tutorial for iOS.
Download the sample project here. Open Gangstribe.xcodeproj in the starter project folder for this speech recognition tutorial for iOS. Select the project file, the Gangstribe target and then the General tab. Choose your development team from the drop-down.
Connect an iOS 10 (or later) device and select it as your run destination in Xcode. Build and run and you’ll see the bones of the app.
From the master controller, you can select a song. The detail controller will then let you play the audio file, recited by none other than our very own DJ Sammy D!
The transcribe button is not currently operational, but you’ll use this later to kick off a transcription of the selected recording.
Tap Face Replace on the right of the navigation bar to preview the live transcription feature. You’ll be prompted for permission to access the camera; accept this, as you’ll need it for this feature.
Currently if you select an emoji with your face in frame, it will place the emoji on your face. Later, you’ll trigger this action with speech.
Take a moment to familiarize yourself with the starter project. Here are some highlights of classes and groups you’ll work with during this speech recognition tutorial for iOS:
In particular, you'll add code to handleTranscribeButtonTapped(_:) in RecordingViewController.swift to have it kick off file transcription.
You’ll start this speech recognition tutorial for iOS by making the transcribe button work for pre-recorded audio. It will then feed the audio file to Speech Recognizer and present the results in a label under the player.
The latter half of the speech recognition tutorial for iOS will focus on the Face Replace feature. You’ll set up an audio engine for recording, tap into that input, and transcribe the audio as it arrives. You’ll display the live transcription and ultimately use it to trigger placing emojis over the user’s face.
You can’t just dive right in and start voice commanding unicorns onto your face though; you’ll need to understand a few basics first.
There are four primary actors involved in a speech transcription:
SFSpeechRecognizer is the primary controller in the framework. Its most important job is to generate recognition tasks and return results. It also handles authorization and configures locales.
SFSpeechRecognitionRequest is the base class for recognition requests. Its job is to point the SFSpeechRecognizer to an audio source from which transcription should occur. There are two concrete types: SFSpeechURLRecognitionRequest, for reading from a file, and SFSpeechAudioBufferRecognitionRequest, for reading from a buffer.
SFSpeechRecognitionTask objects are created when a request is kicked off by the recognizer. They are used to track progress of a transcription or cancel it.
SFSpeechRecognitionResult objects contain the transcription of a chunk of the audio. Each result typically corresponds to a single word.
Here’s how these objects interact during a basic Speech Recognizer transcription:
The code required to complete a transcription is quite simple. Given an audio file at url, the following code transcribes the file and prints the results:
let request = SFSpeechURLRecognitionRequest(url: url)
SFSpeechRecognizer()?.recognitionTask(with: request) { (result, _) in
if let transcription = result?.bestTranscription {
print("\(transcription.formattedString)")
}
}
SFSpeechRecognizer kicks off an SFSpeechRecognitionTask for the SFSpeechURLRecognitionRequest using recognitionTask(with:resultHandler:). It returns partial results as they arrive via the resultHandler. This code prints the formatted string value of the bestTranscription, which is a cumulative transcription result adjusted at each iteration.
You’ll start by implementing a file transcription very similar to this.
Before you start reading and sending chunks of the user’s audio off to a remote server, it would be polite to ask permission. In fact, considering their commitment to user privacy, it should come as no surprise that Apple requires this! :]
You’ll kick off the authorization process when the user taps the Transcribe button in the detail controller.
Open RecordingViewController.swift and add the following to the import statements at the top:
import Speech
This imports the Speech Recognition API.
Add the following to handleTranscribeButtonTapped(_:):
SFSpeechRecognizer.requestAuthorization {
[unowned self] (authStatus) in
switch authStatus {
case .authorized:
if let recording = self.recording {
//TODO: Kick off the transcription
}
case .denied:
print("Speech recognition authorization denied")
case .restricted:
print("Not available on this device")
case .notDetermined:
print("Not determined")
}
}
You call the SFSpeechRecognizer type method requestAuthorization(_:) to prompt the user for authorization and handle their response in a completion closure.
In the closure, you look at the authStatus and print error messages for all of the exception cases. For authorized, you unwrap the selected recording for later transcription.
Next, you have to provide a usage description displayed when permission is requested. Open Info.plist and add the key Privacy - Speech Recognition Usage Description, providing the String value I want to write down everything you say:
Build and run, select a song from the master controller, and tap Transcribe. You’ll see a permission request appear with the text you provided. Select OK to provide Gangstribe the proper permission:
Of course nothing happens after you provide authorization — you haven’t yet set up speech recognition! It’s now time to test the limits of the framework with DJ Sammy D’s renditions of popular rap music.
Back in RecordingViewController.swift, find the RecordingViewController extension at the bottom of the file. Add the following method to transcribe a file found at the passed url:
fileprivate func transcribeFile(url: URL) {
// 1
guard let recognizer = SFSpeechRecognizer() else {
print("Speech recognition not available for specified locale")
return
}
if !recognizer.isAvailable {
print("Speech recognition not currently available")
return
}
// 2
updateUIForTranscriptionInProgress()
let request = SFSpeechURLRecognitionRequest(url: url)
// 3
recognizer.recognitionTask(with: request) {
[unowned self] (result, error) in
guard let result = result else {
print("There was an error transcribing that file")
return
}
// 4
if result.isFinal {
self.updateUIWithCompletedTranscription(
result.bestTranscription.formattedString)
}
}
}
Here are the details on how this transcribes the passed file:
1. The SFSpeechRecognizer initializer provides a recognizer for the device’s locale, returning nil if there is no such recognizer. isAvailable checks if the recognizer is ready, failing in such cases as missing network connectivity.
2. updateUIForTranscriptionInProgress() is provided with the starter to disable the Transcribe button and start an activity indicator animation while the transcription is in process. An SFSpeechURLRecognitionRequest is created for the file found at url, creating an interface to the transcription engine for that recording.
3. recognitionTask(with:resultHandler:) processes the transcription request, repeatedly triggering a completion closure. The passed result is unwrapped in a guard, which prints an error on failure.
4. The isFinal property will be true when the entire transcription is complete. updateUIWithCompletedTranscription(_:) stops the activity indicator, re-enables the button and displays the passed string in a text view. bestTranscription contains the transcription Speech Recognizer is most confident is accurate, and formattedString provides it in String format for display in the text view.
Note: While you’re only using the bestTranscription, there can of course be lesser ones. SFSpeechRecognitionResult has a transcriptions property that contains an array of transcriptions sorted in order of confidence. As you see with Siri and Keyboard Dictation, a transcription can change as more context arrives, and this array illustrates that type of progression.
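For example, a minimal sketch (not code Gangstribe needs) of logging every candidate inside a result handler could look like this:
if let result = result {
  // Transcriptions are ordered from most to least confident.
  for (index, transcription) in result.transcriptions.enumerated() {
    print("Candidate \(index): \(transcription.formattedString)")
  }
}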
Now you need to call this new code when the user taps the Transcribe button. In handleTranscribeButtonTapped(_:), replace //TODO: Kick off the transcription with the following:
self.transcribeFile(url: recording.audio)
After successful authorization, the button handler now calls transcribeFile(url:) with the URL of the currently selected recording.
Build and run, select Gangsta’s Paradise, and then tap the Transcribe button. You’ll see the activity indicator for a while, and then the text view will eventually populate with the transcription:
The results aren’t bad, considering Coolio doesn’t seem to own a copy of Webster’s Dictionary. Depending on the locale of your device, there could be another reason things are a bit off. The above screenshot was a transcription completed on a device configured for US English, while DJ Sammy D has a slightly different dialect.
But you don’t need to book a flight overseas to fix this. When creating a recognizer, you have the option of specifying a locale — that’s what you’ll do next.
Still in RecordingViewController.swift, find transcribeFile(url:) and replace the following two lines:
fileprivate func transcribeFile(url: URL) {
guard let recognizer = SFSpeechRecognizer() else {
with the code below:
fileprivate func transcribeFile(url: URL, locale: Locale?) {
let locale = locale ?? Locale.current
guard let recognizer = SFSpeechRecognizer(locale: locale) else {
You’ve added an optional Locale parameter which will specify the locale of the file being transcribed. If locale is nil when unwrapped, you fall back to the device’s locale. You then initialize the SFSpeechRecognizer with this locale.
Now to modify where this is called. Find handleTranscribeButtonTapped(_:) and replace the transcribeFile(url:) call with the following:
self.transcribeFile(url: recording.audio, locale: recording.locale)
You use the new method signature, passing the locale stored with the recording object.
Each recording’s locale comes from the recordingNames array up top, where each element contains the song name, artist, audio file name and locale. You can find information on how locale identifiers are derived in Apple’s Internationalization and Localization Guide at apple.co/1HVWDQa
Build and run, and complete another transcription on Gangsta’s Paradise. Assuming your first run was with a locale other than en_GB, you should see some differences.
You can probably understand different dialects of languages you speak pretty well. But you’re probably significantly weaker when it comes to understanding languages you don’t speak. The Speech Recognition engine understands over 50 different languages and dialects, so it likely has you beat here.
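If you’re curious exactly which locales your device supports, a quick sketch like this (not part of Gangstribe) prints them all:
import Speech

// Lists every locale Speech Recognition can currently handle, sorted by identifier.
for locale in SFSpeechRecognizer.supportedLocales().sorted(by: { $0.identifier < $1.identifier }) {
  print(locale.identifier)
}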
Now that you are passing the locale of files you’re transcribing, you’ll be able to successfully transcribe a recording in any supported language. Build and run, and select the song Raise Your Hands, which is in Thai. Play it, and then tap Transcribe to see the transcribed content.
Flawless transcription! Presumably.
Live transcription is very similar to file transcription. The primary difference in the process is a different request type — SFSpeechAudioBufferRecognitionRequest — which is used for live transcriptions.
As the name implies, this type of request reads from an audio buffer. Your task will be to append live audio buffers to this request as they arrive from the source. Once connected, the actual transcription process will be identical to the one for recorded audio.
Another consideration for live audio is that you’ll need a way to stop a transcription when the user is done speaking. This requires maintaining a reference to the SFSpeechRecognitionTask so that it can later be canceled.
Gangstribe has some pretty cool tricks up its sleeve. For this feature, you’ll not only transcribe live audio, but you’ll use the transcriptions to trigger some visual effects. With the use of the FaceReplace library, speaking the name of a supported emoji will plaster it right over your face!
To do this, you’ll have to configure the audio engine and hook it up to a recognition request. But before you start recording and transcribing, you need to request authorization to use speech recognition in this controller.
Open LiveTranscribeViewController.swift and add the following to the top of the file by the other imports:
import Speech
Now the live transcription controller has access to Speech Recognition.
Next, find viewDidLoad() and replace the line startRecording() with the following:
SFSpeechRecognizer.requestAuthorization {
[unowned self] (authStatus) in
switch authStatus {
case .authorized:
self.startRecording()
case .denied:
print("Speech recognition authorization denied")
case .restricted:
print("Not available on this device")
case .notDetermined:
print("Not determined")
}
}
Just as you did with pre-recorded audio, you’re calling requestAuthorization(_:) to obtain or confirm access to Speech Recognition.
For the authorized status, you call startRecording(), which currently just does some preparation — you’ll implement the rest shortly. For failures, you print relevant error messages.
Next, add the following properties at the top of LiveTranscribeViewController:
let audioEngine = AVAudioEngine()
let speechRecognizer = SFSpeechRecognizer()
let request = SFSpeechAudioBufferRecognitionRequest()
var recognitionTask: SFSpeechRecognitionTask?
audioEngine is an AVAudioEngine object you’ll use to process input audio signals from the microphone.
speechRecognizer is the SFSpeechRecognizer you’ll use for live transcriptions.
request is the SFSpeechAudioBufferRecognitionRequest the speech recognizer will use to tap into the audio engine.
recognitionTask will hold a reference to the SFSpeechRecognitionTask kicked off when transcription begins.
Now find startRecording() in a LiveTranscribeViewController extension in this same file. This is called when the Face Replace view loads, but it doesn’t yet do any recording. Add the following code to the bottom of the method:
// 1
let node = audioEngine.inputNode
let recordingFormat = node.outputFormat(forBus: 0)
// 2
node.installTap(onBus: 0, bufferSize: 1024,
format: recordingFormat) { [unowned self]
(buffer, _) in
self.request.append(buffer)
}
// 3
audioEngine.prepare()
try audioEngine.start()
This code does the following:
1. Obtains the input audio node associated with the device’s microphone, as well as its corresponding outputFormat.
2. Installs a tap on the output bus of node, using the same recording format. When the buffer is filled, the closure returns the data in buffer, which is appended to the SFSpeechAudioBufferRecognitionRequest. The request is now tapped into the live input node.
3. Prepares and starts the audioEngine to start recording, and thus gets data going to the tap.
Because starting the audio engine throws, you need to signify this on the method. Change the method definition to match the following:
fileprivate func startRecording() throws {
With this change, you likewise need to modify where the method gets called. Find viewDidLoad() and replace self.startRecording() with the following:
do {
try self.startRecording()
} catch let error {
print("There was a problem starting recording: \(error.localizedDescription)")
}
startRecording() is now wrapped in a do-catch, printing the error if it fails.
There is one last thing to do before you can kick off a recording — ask for user permission. The framework does this for you, but you need to provide another key in the plist with an explanation. Open Info.plist and add the key Privacy - Microphone Usage Description, providing the String value I want to record you live.
Build and run, choose a recording, then select Face Replace from the navigation bar. You’ll immediately be greeted with a prompt requesting permission to use the microphone. Hit OK so that Gangstribe can eventually transcribe what you say:
With the tap in place, and recording started, you can finally kick off the speech recognition task.
In LiveTranscribeViewController.swift, go back to startRecording() and add the following at the bottom of the method:
recognitionTask = speechRecognizer?.recognitionTask(with: request) {
[unowned self]
(result, _) in
if let transcription = result?.bestTranscription {
self.transcriptionOutputLabel.text = transcription.formattedString
}
}
recognitionTask(with:resultHandler:) is called with the request connected to the tap, kicking off transcription of live audio. The task is saved in recognitionTask for later use. In the closure, you get bestTranscription from the result. You then update the label that displays the transcription with the formatted string of the transcription.
Build and run, and tap the Face Replace button in the navigation bar. Start talking, and you’ll now see a real time transcription from speech recognition!
But there’s a problem. If you try opening Face Replace enough times, it will crash spectacularly. You’re currently leaking the SFSpeechAudioBufferRecognitionRequest because you never stop transcription or recording!
Add the following method to the LiveTranscribeViewController extension that also contains startRecording():
fileprivate func stopRecording() {
audioEngine.stop()
request.endAudio()
recognitionTask?.cancel()
}
Calling stop() on the audio engine releases all resources associated with it. endAudio() tells the request that it shouldn’t expect any more incoming audio, and causes it to stop listening. cancel() is called on the recognition task to let it know its work is done so that it can free up resources.
You’ll want to call this when the user taps the Done! button, before you dismiss the controller. Add the following to handleDoneTapped(_:), just before the dismiss:
stopRecording()
The audio engine and speech recognizer will now get cleaned up each time the user finishes with a live recording. Good job cleaning up your toys! :]
The live transcription below your video is pretty cool, but it’s not what you set out to do. It’s time to dig into these transcriptions and use them to trigger the emoji face replacement!
First, you need to understand a bit more about the data contained in the SFTranscription objects returned in SFSpeechRecognitionResult objects. You’ve been accessing these with the bestTranscription property of results returned to the recognitionTask(with:resultHandler:) closure.
SFTranscription has a segments property containing an array of all SFTranscriptionSegment objects returned from the request. Among other things, an SFTranscriptionSegment has a substring containing the transcribed String for that segment, as well as its duration from the start of the transcription. Generally, each segment will consist of a single word.
Each time the live transcription returns a new result, you want to look at the most recent segment to see if it matches an emoji keyword.
First, add the following property at the top of the class:
var mostRecentlyProcessedSegmentDuration: TimeInterval = 0
mostRecentlyProcessedSegmentDuration tracks the timestamp of the last processed segment. Because the segment duration is from the start of transcription, the highest duration indicates the latest segment.
Now add the following to the top of startRecording():
mostRecentlyProcessedSegmentDuration = 0
This will reset the tracked duration each time recording starts.
Now add the following new method to the bottom of the last LiveTranscribeViewController extension:
// 1
fileprivate func updateUIWithTranscription(_ transcription: SFTranscription) {
self.transcriptionOutputLabel.text = transcription.formattedString
// 2
if let lastSegment = transcription.segments.last,
lastSegment.duration > mostRecentlyProcessedSegmentDuration {
mostRecentlyProcessedSegmentDuration = lastSegment.duration
// 3
faceSource.selectFace(lastSegment.substring)
}
}
Here’s what this code does:
1. The method accepts an SFTranscription and uses it to update the UI with results. First, it updates the transcription label at the bottom of the screen; this will soon replace similar code found in startRecording().
2. It unwraps the last segment from the passed transcription. It then checks that the segment’s duration is higher than the mostRecentlyProcessedSegmentDuration to avoid an older segment being processed if it returns out of order. The new duration is then saved in mostRecentlyProcessedSegmentDuration.
3. selectFace(), part of the Face Replace code, accepts the substring of this new transcription, and completes a face replace if it matches one of the emoji names.
In startRecording(), replace the following line:
self.transcriptionOutputLabel.text = transcription.formattedString
with:
self.updateUIWithTranscription(transcription)
updateUIWithTranscription() is now called each time the resultHandler is executed. It will update the transcription label as well as triggering a face replace if appropriate. Because this new method updates the transcription label, you removed the code that previously did it here.
Build and run and select Face Replace. This time, say the name of one of the emojis. Try “cry” as your first attempt.
The speech recognizer will transcribe the word "cry" and feed it to the FaceSource object, which will attach the cry emoji to your face. What a time to be alive!
Each keyword that triggers an emoji lives in the names array, and each of these maps to one of the emojis in the faces array above it.
While they aren’t yet clearly defined, Apple has provided some usage guidelines for Speech Recognition. Apple will be enforcing the following types of limitations: a cap on the number of recognitions per device per day, a cap on recognitions per app per day, and a limit of roughly one minute of audio per recognition request.
Apple hasn’t provided any numbers for device and app daily limits. These rules are likely to mature and become more concrete as Apple sees how third party developers use the framework.
Apple also emphasizes that you must make it very clear to users when they are being recorded. While it isn’t currently in the review guidelines, it’s in your best interest to follow this closely to avoid rejections. You also wouldn’t want to invade your user’s privacy!
Finally, Apple suggests presenting transcription results before acting on them. Sending a text message via Siri is a great example of this: she’ll present editable transcription results and delay before sending the message. Transcription is certainly not perfect, and you want to protect users from the frustration and possible embarrassment of mistakes.
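As a rough sketch of that idea (not part of Gangstribe, and assuming you already have an SFTranscription named transcription inside a view controller), you might confirm before acting:
let alert = UIAlertController(title: "Did you say this?",
                              message: transcription.formattedString,
                              preferredStyle: .alert)
alert.addAction(UIAlertAction(title: "Yes", style: .default) { _ in
  // Act on the confirmed transcription here.
})
alert.addAction(UIAlertAction(title: "No", style: .cancel, handler: nil))
present(alert, animated: true, completion: nil)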
You can download the completed sample project here. In this speech recognition tutorial for iOS, you learned everything you need to know to get basic speech recognition working in your apps. It’s an extremely powerful feature where the framework does the heavy lifting. With just a few lines of code, you can bring a lot of magic to your apps.
There isn’t currently much documentation on Speech Recognition, so your best bet is to explore the headers in the source for more detail. Here are a couple of other places to go for more info:
Questions? Comments? Come join the forum discussion below!
This speech recognition tutorial for iOS was taken from Chapter 7 of iOS 10 by Tutorials, which also covers the new changes in Swift 3, source editor extensions, Core Data updates, photography updates, search integration and all the other new, shiny APIs in iOS 10.
You’ll definitely enjoy the other 13 chapters and 300+ pages in the book. Check it out in our store and let us know what you think!
The post Speech Recognition Tutorial for iOS appeared first on Ray Wenderlich.
If you’re developing apps for iOS, you already have a particular set of skills that you can use to write apps for another platform – macOS!
If you’re like most developers, you don’t want to have to write your app twice just to ship your app on a new platform, as this can take too much time and money. But with a little effort, you can learn how to port iOS apps to macOS, reusing a good portion of your existing iOS app, and only rewriting the portions that are platform-specific.
In this tutorial, you’ll learn how to create an Xcode project that is home to both iOS and macOS, how to refactor your code for reuse on both platforms, and when it is appropriate to write platform specific code.
To get the most out of this tutorial you should be familiar with NSTableView. If you need to refresh your knowledge we have an introduction for you.
For this tutorial, you’ll need to download the starter project here.
The sample project is a version of the BeerTracker app used in previous tutorials. It allows you to keep a record of beers you’ve tried, along with notes, ratings, and images of the beers. Build and run the app to get a feel for how it works.
Since the app is only available on iOS, the first step to porting the app for macOS is to create a new target. A target simply is a set of instructions telling Xcode how to build your application. Currently, you only have an iOS target, which contains all the information needed to build your app for an iPhone.
Select the BeerTracker project at the top of the Project Navigator. At the bottom of the Project and Targets list, click the + button.
This will present a window for you to add a new target to your project. At the top of the window, you’ll see tabs representing the different categories of platforms supported. Select macOS, then scroll down to Application and choose Cocoa App. Name the new target BeerTracker-mac.
In the starter app you downloaded, you’ll find a folder named BeerTracker Mac Icons. You’ll need to add the App Icons to AppIcon in Assets.xcassets found under the BeerTracker-mac group. Also add beerMug.pdf to Assets.xcassets. Select beerMug, open the Attributes Inspector and change the Scales to Single Scale. This ensures you don’t need to use different scaled images for this asset.
When you’re done, your assets should look like this:
In the top left of the Xcode window, select the BeerTracker-mac scheme in the scheme pop-up. Build and run, and you’ll see an empty window. Before you can start adding the user interface, you’ll need to make sure your code doesn’t have any conflicts between UIKit, the framework used on iOS, and AppKit, the framework used by macOS.
The Foundation framework allows your app to share quite a bit of code, as it is universal to both platforms. However, your UI cannot be universal. In fact, Apple recommends that multi-platform applications should not attempt to share UI code, as your secondary platform will begin to take on the appearance of your initial application’s UI.
iOS has some fairly strict Human Interface Guidelines that ensure your users are able to read and select elements on their touchscreen devices. However, macOS has different requirements. Laptops and desktops have a mouse pointer to click and select, allowing elements on the screen to be much smaller than would be possible on a phone.
Having identified the UI as needing to be different on both platforms, it is also important to understand what other components of your code can be reused, and which ones need to be rewritten. Keep in mind that there isn’t necessarily a definitive right or wrong answer in most of these cases, and you will need to decide what works best for your app. Always remember that the more code shared, the less code you need to test and debug.
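One option worth knowing about, although this tutorial takes the separate-files route instead, is conditional compilation. Here's a minimal sketch (the PlatformImage name and imageData helper are just for illustration):
#if os(iOS)
import UIKit
typealias PlatformImage = UIImage
#elseif os(macOS)
import AppKit
typealias PlatformImage = NSImage
#endif

// Shared code can then work with PlatformImage, branching only where the APIs differ.
func imageData(from image: PlatformImage) -> Data? {
  #if os(iOS)
  return UIImageJPEGRepresentation(image, 0.5)
  #elseif os(macOS)
  return image.tiffRepresentation
  #endif
}
Keeping platform code in separate files, as you'll do below, tends to scale better as the differences grow, but the conditional approach is handy for very small bits of divergence.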
Generally, you’ll be able to share models and model controllers. Open Beer.swift, show the Utilities area in Xcode, and select the File Inspector. Since both targets will use this model, check BeerTracker-mac under Target Membership, leaving BeerTracker still checked. Do the same thing for BeerManager.swift and for SharedAssets.xcassets under the Utilities group.
If you try to build and run, you will get a build error. This is because Beer.swift is importing UIKit. The model is using some platform specific logic to load and save images of beers.
Replace the import line at the top of the file with the following:
import Foundation
If you try to build and run, you’ll see the app no longer compiles due to UIImage being part of the now removed UIKit. While the model portion of this file is shareable between both targets, the platform-specific logic will need to be separated out. In Beer.swift, delete the entire extension marked Image Saving. After the import statement, add the following protocol:
protocol BeerImage {
associatedtype Image
func beerImage() -> Image?
func saveImage(_ image: Image)
}
Since each target will still need access to the beer’s image, and to be able to save images, this protocol provides a contract that can be used across the two targets to accomplish this.
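To see why the associated type matters, consider this small sketch (the loadImage helper is hypothetical, not part of the project): shared code can ask any conformer for its image without knowing whether that image is a UIImage or an NSImage.
// Works in either target once Beer conforms to BeerImage there.
func loadImage<T: BeerImage>(for item: T) -> T.Image? {
  return item.beerImage()
}
// let image = loadImage(for: someBeer) // UIImage? on iOS, NSImage? on macOS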
Create a new file by going to File/New/File…, select Swift File, and name it Beer_iOS.swift. Ensure that only the BeerTracker target is checked. After that, create another new file named Beer_mac.swift, this time selecting BeerTracker-mac as the target.
Open Beer_iOS.swift, delete the file’s contents, and add the following:
import UIKit
// MARK: - Image Saving
extension Beer: BeerImage {
// 1.
typealias Image = UIImage
// 2.
func beerImage() -> Image? {
guard let imagePath = imagePath,
let path = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true).first else {
return #imageLiteral(resourceName: "beerMugPlaceholder")
}
// 3.
let pathName = (path as NSString).appendingPathComponent("BeerTracker/\(imagePath)")
guard let image = Image(contentsOfFile: pathName) else { return #imageLiteral(resourceName: "beerMugPlaceholder") }
return image
}
// 4.
func saveImage(_ image: Image) {
guard let imgData = UIImageJPEGRepresentation(image, 0.5),
let path = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true).first else {
return
}
let appPath = (path as NSString).appendingPathComponent("/BeerTracker")
let fileName = "\(UUID().uuidString).jpg"
let pathName = (appPath as NSString).appendingPathComponent(fileName)
var isDirectory: ObjCBool = false
if !FileManager.default.fileExists(atPath: appPath, isDirectory: &isDirectory) {
do {
try FileManager.default.createDirectory(atPath: appPath, withIntermediateDirectories: true, attributes: nil)
} catch {
print("Failed to create directory: \(error)")
}
}
if (try? imgData.write(to: URL(fileURLWithPath: pathName), options: [.atomic])) != nil {
imagePath = fileName
}
}
}
Here’s what’s happening:
1. The Image associated type is fulfilled with UIImage on iOS.
2. beerImage() unwraps the stored imagePath and the app’s Documents directory path, falling back to a placeholder image if either is missing.
3. The full path is built by appending the image’s file name to a BeerTracker folder inside Documents, and the image is loaded from that file.
4. saveImage(_:) creates a JPEG representation of the image, makes sure the BeerTracker folder exists, writes the data to disk and stores the generated file name in imagePath.
Switch your scheme to BeerTracker, then build and run. The application should behave as before.
Now that your iOS target is working, you’re ready to add macOS-specific code. Open Beer_mac.swift, delete all the contents, and add the following code:
import AppKit
// MARK: - Image Saving
extension Beer: BeerImage {
// 1.
typealias Image = NSImage
func beerImage() -> Image? {
// 2.
guard let imagePath = imagePath,
let path = NSSearchPathForDirectoriesInDomains(.applicationSupportDirectory, .userDomainMask, true).first else {
return #imageLiteral(resourceName: "beerMugPlaceholder")
}
let pathName = (path as NSString).appendingPathComponent(imagePath)
guard let image = Image(contentsOfFile: pathName) else { return #imageLiteral(resourceName: "beerMugPlaceholder") }
return image
}
func saveImage(_ image: Image) {
// 3.
guard let imgData = image.tiffRepresentation,
let path = NSSearchPathForDirectoriesInDomains(.applicationSupportDirectory, .userDomainMask, true).first else {
return
}
let fileName = "/BeerTracker/\(UUID().uuidString).jpg"
let pathName = (path as NSString).appendingPathComponent(fileName)
if (try? imgData.write(to: URL(fileURLWithPath: pathName), options: [.atomic])) != nil {
imagePath = fileName
}
}
}
The above code is nearly identical to the previous code, with just a few changes:
1. The Image associated type is now fulfilled with NSImage.
2. Images are stored in the user’s Application Support directory rather than the Documents directory used on iOS.
3. NSImage has no UIImageJPEGRepresentation equivalent, so the image data comes from tiffRepresentation instead.
Switch your target to BeerTracker-mac, then build and run. Your app now compiles for both platforms, while maintaining a standard set of functionality from your model.
Your empty view Mac app isn’t very useful, so it’s time to build the UI. From the BeerTracker-mac group, open Main.storyboard. Start by dragging a Table View into your empty view. Now select the Table View in the Document Outline.
macOS storyboards sometimes require you to dig down a bit deeper into the view hierarchy. This is a change from iOS, where you’re used to seeing all template views at the top level.
With the Table View selected, make the following changes in the Attributes Inspector:
Select the Table Column in the Document Outline and set its Title to Beer Name.
In the Document Outline, select the Bordered Scroll View (which houses the Table View), and in the Size Inspector find the View section and set the View dimensions to the following:
Setting the coordinates is going to be slightly different here, as well. In macOS, the origin of the UI is not in the top left, but the lower left. Here, you’ve set the y coordinate to 17, which means 17 points up from the bottom.
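To illustrate the flipped default (the width and height here are placeholders, not values you need for this project), a frame created in code follows the same rule:
import AppKit

// In AppKit, a frame's origin is measured from the lower-left corner of its
// superview unless the view overrides isFlipped.
let exampleFrame = NSRect(x: 17, y: 17, width: 200, height: 400) // y: 17 is 17 points up from the bottom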
Next you’ll need to connect your delegate, data source and properties for the Table View. Again, you’ll need to select the Table View from the Document Outline to do this. With it selected, you can Control-drag to the View Controller item in the Document Outline and click delegate. Repeat this for the dataSource.
Open ViewController.swift in the Assistant Editor, Control-drag from the Table View and create a new outlet named tableView.
Before you finish with the Table View, there’s one last thing you need to set. Back in the Document Outline, find the item named Table Cell View. With that selected, open the Identity Inspector, and set the Identifier to NameCell.
With the Table View setup, next comes the “form” section of the UI.
First, you’ll add an Image Well to the right of the table. Set the frame to the following:
An Image Well is a convenient object that displays an image, but also allows a user to drag and drop a picture onto it. To accomplish this, the Image Well has the ability to connect an action to your code!
Open the BeerTracker-mac ViewController.swift in the Assistant Editor and create an outlet for the Image Well named imageView. Also create an action for the Image Well, and name it imageChanged. Ensure that you change Type to NSImageView, as shown:
While drag and drop is great, sometimes users want to be able to view an Open dialog and search for the file themselves. Set this up by dropping a Click Gesture Recognizer on the Image Well. In the Document Outline, connect an action from the Click Gesture Recognizer to ViewController.swift named selectImage.
Add a Text Field to the right of the Image Well. In the Attributes Inspector, change the Placeholder to Enter Name. Set the frame to the following:
Create an outlet in ViewController.swift for the Text Field named nameField.
Next, add a Level Indicator below the name field. This will control setting the rating of your beers. In the Attributes Inspector, set the following:
Set the frame to the following:
Create an outlet for the Level Indicator named ratingIndicator.
Add a Text View below the rating indicator. Set the frame to:
To create an outlet for the Text View, you’ll need to make sure you select Text View inside the Document Outline, like you did with the Table View. Name the outlet noteView. You’ll also need to set the Text View‘s delegate to the ViewController.
Below the note view, drop in a Push Button. Change the title to Update, and set the frame to:
Connect an action from the button to ViewController named updateBeer.
With that, you have all the necessary controls to edit and view your beer information. However, there’s no way to add or remove beers. This will make the app difficult to use, even if your users haven’t had anything to drink. :]
Add a Gradient Button to the bottom left of the screen. In the Attributes Inspector, change Image to NSAddTemplate if it is not already set.
In the Size Inspector, set the frame to:
Add an action from the new button named addBeer.
One great thing about macOS is that you get access to template images like the + sign. This can make your life a lot simpler when you have any standard action buttons, but don’t have the time or ability to create your own artwork.
Next, you’ll need to add the remove button. Add another Gradient Button directly to the right of the previous button, and change the Image to NSRemoveTemplate. Set the frame to:
And finally, add an action from this button named removeBeer.
You’re almost finished building the UI! You just need to add a few labels to help polish it off.
Add the following labels:
For each of these labels, in the Attributes Inspector, set the font to Other – Label, and the size to 10.
For the last label, connect an outlet to ViewController.swift named beerCountField.
Make sure your labels all line up like so:
Click the Resolve Auto Layout Issues button and in the All Views in View Controller section click Reset to Suggested Constraints.
Whew! Now you’re ready to code. Open ViewController.swift and delete the property named representedObject. Add the following methods below viewDidLoad():
private func setFieldsEnabled(enabled: Bool) {
imageView.isEditable = enabled
nameField.isEnabled = enabled
ratingIndicator.isEnabled = enabled
noteView.isEditable = enabled
}
private func updateBeerCountLabel() {
beerCountField.stringValue = "\(BeerManager.sharedInstance.beers.count)"
}
There are two methods that will help you control your UI:
setFieldsEnabled(enabled:) will allow you to easily turn the form controls on and off.
updateBeerCountLabel() simply sets the count of beers in the beerCountField.
Beneath all of your outlets, add the following property:
var selectedBeer: Beer? {
didSet {
guard let selectedBeer = selectedBeer else {
setFieldsEnabled(enabled: false)
imageView.image = nil
nameField.stringValue = ""
ratingIndicator.integerValue = 0
noteView.string = ""
return
}
setFieldsEnabled(enabled: true)
imageView.image = selectedBeer.beerImage()
nameField.stringValue = selectedBeer.name
ratingIndicator.integerValue = selectedBeer.rating
noteView.string = selectedBeer.note!
}
}
This property will keep track of the beer selected from the table view. If no beer is currently selected, the setter takes care of clearing the values from all the fields, and disabling the UI components that shouldn’t be used.
Replace viewDidLoad() with the following code:
override func viewDidLoad() {
super.viewDidLoad()
if BeerManager.sharedInstance.beers.count == 0 {
setFieldsEnabled(enabled: false)
} else {
tableView.selectRowIndexes(IndexSet(integer: 0), byExtendingSelection: false)
}
updateBeerCountLabel()
}
Just like in iOS, you want our app to do something the moment it starts up. In the macOS version, however, you’ll need to immediately fill out the form for the user to see their data.
Right now, the table view isn’t actually able to display any data, but selectRowIndexes(_:byExtendingSelection:) will select the first beer in the list. The delegate code will handle the rest for you.
In order to get the table view showing your list of beers, add the following code to the end of ViewController.swift, outside of the ViewController class:
extension ViewController: NSTableViewDataSource {
func numberOfRows(in tableView: NSTableView) -> Int {
return BeerManager.sharedInstance.beers.count
}
}
extension ViewController: NSTableViewDelegate {
// MARK: - CellIdentifiers
fileprivate enum CellIdentifier {
static let NameCell = "NameCell"
}
func tableView(_ tableView: NSTableView, viewFor tableColumn: NSTableColumn?, row: Int) -> NSView? {
let beer = BeerManager.sharedInstance.beers[row]
if let cell = tableView.makeView(withIdentifier: NSUserInterfaceItemIdentifier(rawValue: CellIdentifier.NameCell), owner: nil) as? NSTableCellView {
cell.textField?.stringValue = beer.name
if beer.name.characters.count == 0 {
cell.textField?.stringValue = "New Beer"
}
return cell
}
return nil
}
func tableViewSelectionDidChange(_ notification: Notification) {
if tableView.selectedRow >= 0 {
selectedBeer = BeerManager.sharedInstance.beers[tableView.selectedRow]
}
}
}
This code takes care of populating the table view’s rows from the data source.
Look at it closely, and you’ll see it’s not too different from the iOS counterpart found in BeersTableViewController.swift. One notable difference is that when the table view selection changes, it sends a Notification to the NSTableViewDelegate.
Remember that your new macOS app has multiple input sources — not just a finger. Using a mouse or keyboard can change the selection of the table view, and that makes handling the change a little different from iOS.
Now to add a beer. Change addBeer(_:) to:
@IBAction func addBeer(_ sender: Any) {
// 1.
let beer = Beer()
beer.name = ""
beer.rating = 1
beer.note = ""
selectedBeer = beer
// 2.
BeerManager.sharedInstance.beers.insert(beer, at: 0)
BeerManager.sharedInstance.saveBeers()
// 3.
let indexSet = IndexSet(integer: 0)
tableView.beginUpdates()
tableView.insertRows(at: indexSet, withAnimation: .slideDown)
tableView.endUpdates()
updateBeerCountLabel()
// 4.
tableView.selectRowIndexes(IndexSet(integer: 0), byExtendingSelection: false)
}
Nothing too crazy here. You’re simply doing the following:
1. Create a new, empty Beer and make it the selected beer.
2. Insert it at the front of the beers array and save.
3. Insert a new row at the top of the table view and update the beer count label.
4. Select the newly inserted row.
You might have even noticed that, like in iOS, you need to call beginUpdates() and endUpdates() before inserting the new row. See, you really do know a lot about macOS already!
To remove a beer, add the below code for removeBeer(_:):
@IBAction func removeBeer(_ sender: Any) {
guard let beer = selectedBeer,
let index = BeerManager.sharedInstance.beers.index(of: beer) else {
return
}
// 1.
BeerManager.sharedInstance.beers.remove(at: index)
BeerManager.sharedInstance.saveBeers()
// 2
tableView.reloadData()
updateBeerCountLabel()
tableView.selectRowIndexes(IndexSet(integer: 0), byExtendingSelection: false)
if BeerManager.sharedInstance.beers.count == 0 {
selectedBeer = nil
}
}
Once again, very straightforward code:
1. Remove the selected beer from the data source and save the change.
2. Reload the table view, update the beer count label and select the first row; if there are no beers left, clear the selection so the form is disabled.
Remember how Image Wells have the ability to accept an image dropped on them? Change imageChanged(_:) to:
@IBAction func imageChanged(_ sender: NSImageView) {
guard let image = sender.image else { return }
selectedBeer?.saveImage(image)
}
And you thought it was going to be hard! Apple has taken care of all the heavy lifting for you, and provides you with the image dropped.
On the flip side to that, you’ll need to do a bit more work to handle users picking the image from within your app. Replace selectImage(_:) with:
@IBAction func selectImage(_ sender: Any) {
guard let window = view.window else { return }
// 1.
let openPanel = NSOpenPanel()
openPanel.allowsMultipleSelection = false
openPanel.canChooseDirectories = false
openPanel.canCreateDirectories = false
openPanel.canChooseFiles = true
// 2.
openPanel.allowedFileTypes = ["jpg", "png", "tiff"]
// 3.
openPanel.beginSheetModal(for: window) { (result) in
if result == NSApplication.ModalResponse.OK {
// 4.
if let panelURL = openPanel.url,
let beerImage = NSImage(contentsOf: panelURL) {
self.selectedBeer?.saveImage(beerImage)
self.imageView.image = beerImage
}
}
}
}
The above code is how you use NSOpenPanel to select a file. Here’s what’s happening:
1. Create an NSOpenPanel, and configure its settings.
2. Restrict the selectable file types to images.
3. Present the open panel as a sheet modal on the window.
4. If the user chose a file, create an NSImage from its URL, save it with the selected beer and display it in the image view.
Finally, add the code that will save the data model in updateBeer(_:):
:
@IBAction func updateBeer(_ sender: Any) {
// 1.
guard let beer = selectedBeer,
let index = BeerManager.sharedInstance.beers.index(of: beer) else { return }
beer.name = nameField.stringValue
beer.rating = ratingIndicator.integerValue
beer.note = noteView.string
// 2.
let indexSet = IndexSet(integer: index)
tableView.beginUpdates()
tableView.reloadData(forRowIndexes: indexSet, columnIndexes: IndexSet(integer: 0))
tableView.endUpdates()
// 3.
BeerManager.sharedInstance.saveBeers()
}
Here’s what you added:
1. Unwrap the selected beer and update its name, rating and note from the UI controls.
2. Reload the affected row so the table view shows the new name.
3. Save the beers to disk.
You’re all set! Build and run the app, and start adding beers. Remember, you’ll need to select Update to save your data.
You’ve learned a lot about the similarities and differences between iOS and macOS development. There’s another concept that you should familiarize yourself with: Settings/Preferences. In iOS, you should be comfortable with the concept of going into Settings, finding your desired app, and changing any settings available to you. In macOS, this can be accomplished inside your app through Preferences.
Build and run the BeerTracker target, and in the simulator, navigate to the BeerTracker settings in the Settings app. There you’ll find a setting allowing your users to limit the length of their notes, just in case they get a little chatty after having a few.
In order to get the same feature in your mac app, you’ll create a Preferences window for the user. In BeerTracker-mac, open Main.storyboard, and drop in a new Window Controller. Select the Window, open the Size Inspector, and change the following:
Next, select the View of the empty View Controller, and change the size to match the above settings, 380 x 55.
Doing these things will ensure your Preferences window is always the same size, and opens in a logical place to the user. When you’re finished, your new window should look like this in the storyboard:
At this point, there is no way for a user to open your new window. Since it should be tied to the Preferences menu item, find the menu bar scene in the storyboard. It will be easier if you drag it close to the Preferences window for this next part. Once it is close enough, Control-drag from the Preferences menu item to the new Window Controller and choose Show from the Action Segue section of the pop-up.
Find a Check Box Button, and add it to the empty View Controller. Change the text to be Restrict Note Length to 1,024 Characters.
With the Check Box Button selected, open the Bindings Inspector and, under Value, bind to the Shared User Defaults Controller with the Controller Key values and the Model Key Path BT_Restrict_Note_Length.
Create a new Swift file in the Utilities group named StringValidator.swift. Make sure to check both targets for this file.
Open StringValidator.swift, and replace the contents with the following code:
import Foundation
extension String {
private static let noteLimit = 1024
func isValidLength() -> Bool {
let limitLength = UserDefaults.standard.bool(forKey: "BT_Restrict_Note_Length")
if limitLength {
return self.characters.count <= String.noteLimit
}
return true
}
}
This extension gives both targets the ability to check whether a string is a valid length, but only if the user default BT_Restrict_Note_Length is true.
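For comparison, here's a minimal sketch of how the same shared extension could be used from a UITextViewDelegate on the iOS side (BeerDetailViewController is a hypothetical name, not necessarily a class in the starter project):
import UIKit

extension BeerDetailViewController: UITextViewDelegate {
  func textView(_ textView: UITextView,
                shouldChangeTextIn range: NSRange,
                replacementText text: String) -> Bool {
    // Build the proposed note and let the shared extension decide.
    let proposed = (textView.text as NSString).replacingCharacters(in: range, with: text)
    return proposed.isValidLength()
  }
}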
In ViewController.swift add the following code at the bottom:
extension ViewController: NSTextViewDelegate {
func textView(_ textView: NSTextView, shouldChangeTextIn affectedCharRange: NSRange, replacementString: String?) -> Bool {
guard let replacementString = replacementString else { return true }
let currentText = textView.string
let proposed = (currentText as NSString).replacingCharacters(in: affectedCharRange, with: replacementString)
return proposed.isValidLength()
}
}
Finally, change the names of each Window in Main.storyboard to match their purpose, and give the user more clarity. Select the initial Window Controller, and in the Attributes Inspector change the title to BeerTracker. Select the Window Controller for the Preferences window, and change the title to Preferences.
Build and run your app. If you select the Preferences menu item, you should now see your new Preferences window with your preferences item. Select the checkbox, and find some large amount of text to paste in. If this would make the note more than 1,024 characters, the Text View will not accept it, just like in the iOS app.
You can download the finished project here.
In this tutorial you learned:
For more information about porting your apps to macOS, check out Apple's Migrating from Cocoa Touch Overview.
If you have any questions or comments, please join in the forum discussion below!
The post Porting Your iOS App to macOS appeared first on Ray Wenderlich.
WWDC 2017 brought us some of the coolest features we’ve ever seen in an iOS release. Augmented reality with ARKit, machine learning in CoreML and drag and drop in iOS are just the beginning of all the great stuff in iOS 11.
And like every year post-WWDC, our book teams have already jumped into high gear. We’re excited to announce that iOS 11 by Tutorials is available for pre-order today!
This is the seventh installment in our ever-popular iOS by Tutorials series, and this year we’re having a creative competition for the book, where you can help decide what goes on the cover and get a chance to win some great prizes.
Read on for details of what we’re hoping to cover in the book, how to pre-order your own copy — and how you can get involved!
We’re still plowing our way through the 100+ videos from this year’s WWDC sessions, and there’s a tremendous amount of great stuff to cover!
Here are the high-level topics we’re planning on covering right now. Expect more to be added or changed as we work on the book:
Although you could teach yourself these items by reading Apple docs and sample code, it would take forever – and let’s face it, we’re all busy developers.
That’s why we create the iOS by Tutorials books each year. We do the hard work of figuring things out and making a nice easy-to-understand tutorial – that way you can quickly get up-to-speed and get back to making great apps.
We’ve been so busy digging into all the great features of iOS 11, Swift 4 and Xcode 9 that we just haven’t had any time to decide what to put on the cover — and that’s where you come in!
Our current selection of books features a handsome array of sea creatures on the covers, and iOS 11 by Tutorials should be no different.
iOS 10 by Tutorials hosted a school of ten clownfish on the cover, and to continue the tradition, we need your suggestions:
To enter, simply add a comment to this post, telling us what school of sea creatures should appear on the cover, and why.
We’ll pick three winners from the comments:
Maybe sharks, to represent Apple’s aggressive moves into AR and VR this year? Rockfish, to pay homage to Phil Schiller’s promise that the HomePod will “Rock the house”? :] We know you can do better than that!
Get your suggestions in now! We’ll be closing the entries soon.
We’re opening pre-orders for iOS 11 by Tutorials today, for a limited-time, pre-order sale price of $44.99.
When you pre-order the book, you’ll get exclusive access to the upcoming early access releases of the book, which will be coming out in July and August, 2017, so you can get a jumpstart on learning all the new APIs. The full edition of the book will be released in Fall 2017.
Head on over to our store page to pre-order iOS 11 by Tutorials now:
We’re looking forward to sharing all the great new stuff in iOS 11, Swift 4 and Xcode 9 with you — and don’t forget to add your comment below with your idea for the cover! We can’t wait to see what you come up with.
The post iOS 11 by Tutorials Pre-Orders Available Now! appeared first on Ray Wenderlich.
In this video, you will learn how to setup Charles Proxy on both macOS and iOS.
The post Screencast: Charles Proxy: Getting Started appeared first on Ray Wenderlich.
I consider myself incredibly lucky, because every day I get to work with the best team of tutorial writers on the Internet. Over the years, we’ve made over 1.5K written tutorials, 500+ videos, and 10+ books, and have become great friends along the way.
Have you ever considered joining our team? Well, there’s good news – we currently have 4 opportunities to join! We are currently looking for the following:
All of these roles can easily be done in your spare time, and are a great way to get your foot in the door at our site. Keep reading to find out what’s involved, and how to apply!
Did you know we have over 100 people on our waiting list to join the raywenderlich.com tutorial team?
The problem is, although most of these folks appear to be incredibly talented developers, we are currently bottlenecked in our tryout process. Our tutorial teams are so busy making tutorials, we often don’t have the bandwidth to walk the best candidates through our tryout process.
Therefore, we are looking for someone to be the official raywenderlich.com recruiter. This involves:
This opportunity is great for anyone who wants to be in the inner circle at raywenderlich.com – you’ll work closely with myself and the other team leads. Your work will make a huge difference helping us identify the best developers, teachers, and authors out there, so we can continue to make amazing tutorials for our community.
Although our primary focus at raywenderlich.com is writing tutorials, we also have a small article team. Here are a few recent articles by the team:
Currently, I am the article team lead, but I’ve been having a hard time running the team along with my other obligations, so I am looking for someone to take my spot.
This opportunity is great for people who consider themselves especially talented writers, and who often write for fun. It’s a very fun and creative role, and your work will help us continue to create unique and helpful articles for our readers, leveraging the unique strengths of our team.
As you may know, Mic Pringle and Jake Gundersen have been running the raywenderlich.com podcast for many years now (ever since 2014!), and at this point they’re ready to pass the torch to someone else.
The podcast has been very successful, with over 6,000 downloads per episode on average. In our podcast we dive deep into technical subjects while avoiding rambling and being respectful of listeners’ time, with the goal that each listener always learns something new.
We are looking for two new podcasters to take over for Mic and Jake. They have big shoes to fill! This opportunity is great if you are an advanced level mobile developer who loves learning new things and talking about tech.
Part of raywenderlich.com is our forums: a place where people can ask questions about any of our tutorials, or the general subjects of iOS, Android, and Unity development.
We have a few forum subject matter experts, who periodically check the forums and help answer questions. However, recently there’s been a larger volume of questions than we can handle, so we could use some help.
This opportunity is great for people who enjoy helping other developers who are struggling with a problem. You’ll also find that by helping out with various issues, you’ll learn a ton yourself!
Here are the top reasons to join the Tutorial Team:
Money! Get paid to learn!
If you are interested in any of these roles, please send me an email with the answers to the following questions:
I look forward to working with a few of you to help continue to improve our site! :]
The post 4 Opportunities to Join the raywenderlich.com Team appeared first on Ray Wenderlich.
If you weren’t lucky enough to get a “golden ticket” to WWDC 2017, catching up by watching the videos will be quite a challenge, as there are over 130 WWDC session videos available this year!
There are videos on the newest APIs, such as ARKit, CoreML, Drag and Drop and Vision; ones covering Xcode 9 with new and improved refactoring in Swift, Objective-C and C++, and then there’s everything new in Swift 4, new bits in UIKit, great videos on accessibility and so much more.
What’s a developer to do?
Fear not, as the raywenderlich.com tutorial team and friends have assembled a list of the Top 10 WWDC 2017 videos that cover everything you need to know, in a minimum of time. We consider these “must-see” sessions for developers from all backgrounds and specialties!
Pro tip: to get through the videos faster, select the video element in Safari’s Web Inspector and enter $0.playbackRate = 1.4; in the console. You can thank me later! :]

https://developer.apple.com/videos/play/wwdc2017/102/
If you only have time for one video, this is it!
For developers, the real start of WWDC is the Platforms State of the Union session. The Keynote is a fluffy offering to surprise and delight the general public, investors and the Apple faithful. The State of the Union, in contrast, is where the really interesting details come out.
This talk surveys the new technologies and outlines which sessions will provide more details on each technology. Here are the highlights of the 2017 Platforms State of the Union:
There are many more new items covered in the Platform State of the Union than I can address in this article. If you watch no other WWDC 2017 session video, this is definitely the one you want.
https://developer.apple.com/videos/play/wwdc2017/402/
“The What’s New in Swift is so dense this year you can cut it with a knife!” – Me, just now.
The session begins with a shout-out to Ole Begemann’s open source playground, which you could use to test Swift 4 before WWDC. Because Swift is open source, you can grab a snapshot from swift.org and add it to Xcode, since the snapshot ships as a toolchain. Xcode 9 now offers refactoring, and you can use the same toolchain mechanism to roll your own refactorings.
This session is so dense that we can only cover some of the highlights:
- The private keyword has been redefined to reach across multiple extensions in your code, while still protecting other elements in the same source file.
- Strings can now be sliced with .split. A slice, however, keeps a reference to the original string’s storage; to avoid a double reference or possible memory leak, Substring is now its own type, with the same behaviors as String (see the sketch below).
- Exclusive Access to Memory makes it easier to deal with local variables and enables programmer and compiler optimizations, since properties sometimes need to be protected during operations. While it may be fine for a variable to be read by two separate processes, writing to the variable should be an exclusive operation. With this new rule in Swift 4, the compiler will tell you when this occurs on a single thread. The new Thread Sanitizer Tool will tell you when this occurs in multi-threaded cases.
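To make the String/Substring split concrete, here’s a minimal playground-style Swift 4 sketch (the sample data is invented for illustration, not from the session):

let line = "apple,banana,cherry"

// split(separator:) returns [Substring]: slices that share the original string's storage.
let fruits = line.split(separator: ",")

// Convert to String before storing long-term, so the original buffer can be released.
let stored = fruits.map(String.init)
print(stored) // ["apple", "banana", "cherry"]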
https://developer.apple.com/videos/play/wwdc2017/201/
Presented by Eliza Block and Josh Shaffer, What’s New in Cocoa Touch is a rapid-fire overview of new productivity features, UI refinements and new APIs. Like the Platforms State of the Union, this session leads the way into other, in-depth sessions. Eliza gives a brief overview of adding drag functionality, moving items around the screen and finally dropping, where your app receives the dropped data. Drag and Drop is relatively easy to adopt, as many existing frameworks already have the hooks in place to handle it.
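As a rough illustration of how little code basic drag support takes, here’s a minimal sketch; the PhotoViewController class and imageView outlet are assumptions for the example, not code from the session:

import UIKit

class PhotoViewController: UIViewController, UIDragInteractionDelegate {
  @IBOutlet weak var imageView: UIImageView!

  override func viewDidLoad() {
    super.viewDidLoad()
    // Opt the image view into drag (iOS 11+).
    imageView.isUserInteractionEnabled = true
    imageView.addInteraction(UIDragInteraction(delegate: self))
  }

  // Provide the items to drag when the gesture begins.
  func dragInteraction(_ interaction: UIDragInteraction,
                       itemsForBeginning session: UIDragSession) -> [UIDragItem] {
    guard let image = imageView.image else { return [] }
    return [UIDragItem(itemProvider: NSItemProvider(object: image))]
  }
}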
“What’s new in Cocoa Touch is a great overview of a lot of the changes on iOS” – Ellen Shapiro
Document management is also further refined by file management across Cocoa Touch. Based on UIDocumentBrowserViewController, files can now be accessed independently and stored in folders. In fact, files from one app may even be accessed from other apps. There is an understated push to make the iPad and larger iPhones more flexible through these and other refinements.
Josh Shaffer covers the new dynamic large titles as part of the UI refinements. The large prominent title, reminiscent of the News app, lives inside a larger header bar. As you scroll down the page, the header shrinks to the familiar style and size. The Safe Area creates a buffer space around the edge of devices. This clears the edges for gestures, creates a cleaner look and most importantly aids with overscan buffer, which is important for tvOS devices. And even better, UIScrollView no longer fights with your contentInsets! The Design Shorts 2 session and Updating Your App For iOS 11 have more info on these.
Eliza returns to cover a few new things in Swift 4, such as the new KeyPath type with its “\” literal, which brings ease and clarity to the new block-based KVO. She also covers the Codable protocol, which enables objects to be archived and unarchived, and which enables native JSON encoding in Cocoa Touch apps.
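Here’s a minimal sketch of both ideas; the Track type and JSON payload are invented for illustration, not from the session:

import Foundation

struct Track: Codable {
  let trackName: String
  let artistName: String
}

// Codable gives you native JSON decoding with no manual parsing.
let json = Data("""
{ "trackName": "Yesterday", "artistName": "The Beatles" }
""".utf8)

if let track = try? JSONDecoder().decode(Track.self, from: json) {
  // The new "\" key path literal, also used by block-based KVO.
  let artistPath = \Track.artistName
  print(track[keyPath: artistPath]) // "The Beatles"
}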
Working with Dynamic Type, which is essential for accessibility, is now easier with the new UIFontMetrics objects. Auto Layout works with Dynamic Type to help the system size your fonts. Password autofill is also covered in brief.
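For instance, scaling a custom font for the user’s Dynamic Type setting might look like this playground-style sketch (the font name is just an example):

import UIKit

// Scale a custom font for the current Dynamic Type setting (iOS 11+).
func scaledBodyFont() -> UIFont {
  let custom = UIFont(name: "AvenirNext-Regular", size: 17) ?? UIFont.systemFont(ofSize: 17)
  return UIFontMetrics(forTextStyle: .body).scaledFont(for: custom)
}

let label = UILabel()
label.font = scaledBodyFont()
label.adjustsFontForContentSizeCategory = true // track text-size changes automatically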
This session gives you enough information to speak clearly about new features in Asset Catalogs, PDF-backed images, and ProMotion’s support of higher refresh rates on the latest devices.
https://developer.apple.com/videos/play/wwdc2017/710/
Machine Learning is clearly a hot topic these days and Apple has made it easy to add this technology to your apps.
With Core ML, you can consider machine learning as simply calling a library from code. You only need to drop a Core ML library into your project and let Xcode sort everything else out. In this session, Krishna Sridhar and Zach Nation give overviews of the types of use cases for machine learning in your apps.
“It was good, lots of attention of augmented reality and machine learning. Especially when Apple made machine learning plug and play. All you need is to find a model you can use or worry about training your own. Everything else just works! “ – Vincent Ngo
You can put Core ML to work with handwriting recognition, credit card analysis, sentiment analysis with text input and gesture recognizers. Krishna demonstrates how Core ML has a natural way of dealing with numeric and categorical input. You can also use Core ML with Natural Language Processing (NLP) to determine the mood of the user by processing text.
The speakers cover the hardware optimization of Core ML along with Core ML Tools that let you convert and work with popular machine learning formats. You also don’t need to sort out whether your project will use the CPU or GPU. Core ML takes care of that for you.
Zach Nation demonstrates how to use Apple’s open-sourced Core ML Tools, which are a set of Python scripts to import common machine learning formats and convert them to Core ML library format.
“CoreML and related are fantastic – great technology and easy to use.” – Mark Rubin
I’m also awarding an “honorable mention” to Introducing Core ML [https://developer.apple.com/videos/play/wwdc2017/703/], which also ranked well. It’s further proof that Core ML seems to be the runaway topic of WWDC 2017!
https://developer.apple.com/videos/play/wwdc2017/230/
Joe Cerra takes you through some basics of UIView animations, with the aim of helping you make your animations interactive and interruptible. In 2016, Apple introduced UIViewPropertyAnimator, which enables you to do just that. With this framework, you can give your animations customized timing as well as update them on the fly. Joe walks through how to adjust timings to create more interesting effects.
Joe demonstrates several enhancements to a simple demo animation with a pan gesture recognizer, including pauseAnimation() to pause and continueAnimation(withTimingParameters:durationFactor:) to continue moving an object in the view. Midway through the talk, he demonstrates how to combine a number of tap and pan gestures along with animation properties to create an interactive and interruptible experience. Building on the effect, he adds in new behaviors, linear and nonlinear scrubs, pausing, and uses springs to add realism and damping.
Using UIVisualEffectView, Joe combines blur and zoom to create compelling effects that he terms “view morphing”. The final reveal involves new properties for corner radii and masked corners. There are plenty of great tips and tricks covered in the session – way more than I can fit here in a few paragraphs.
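If you haven’t used UIViewPropertyAnimator before, here’s a tiny playground-style sketch of the interruptible, scrubbable style Joe demonstrates; the view and the values are placeholders, not the session’s demo code:

import UIKit

let card = UIView(frame: CGRect(x: 0, y: 0, width: 100, height: 100))

// A spring-based animator you can pause, scrub and resume at any time.
let animator = UIViewPropertyAnimator(duration: 0.8, dampingRatio: 0.7) {
  card.frame.origin.y += 200
}
animator.startAnimation()

// Later, e.g. from a pan gesture handler:
animator.pauseAnimation()
animator.fractionComplete = 0.5 // scrub to the halfway point
animator.continueAnimation(withTimingParameters: nil, durationFactor: 0)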
https://developer.apple.com/videos/play/wwdc2017/232/
Samantha Mravca refreshes viewers on ResearchKit and CareKit, as well as how they combine to sit on top of Apple’s HealthKit framework. ResearchKit allows institutions to build tools for gathering medical information and sharing that data with other HealthKit apps. CareKit, introduced in 2016, enables users to play an active role in their health.
The session covers some new features and the CareKit prototyping tool. There are some really interesting widgets and controls in CareKit to display progress, collect stats and capture optional and read-only data. I wonder if these widgets could find a place in other types of apps.
The speakers cover some existing data collection, or “active task”, examples, such as hearing tests, Stroop focus tests and cognitive tests like trail making tests for visual attention. New modules include range of motion tests that make use of the accelerometer and gyro to test the motion of shoulders and knees.
CareKit now combines the user’s health data and symptoms into Care Contents. CareKit also includes some ready-to-use glyphs for iOS and watchOS. New this year are threshold measurements including numeric and adherence thresholds.
The session also covers the CareKit prototyping tool, which is targeted at non-technical builders. Ultimately, these tools are designed for health professionals and involve a minimal amount of coding, and in some cases none at all. Health care is a fascinating subject that we all have a vested interest in.
https://developer.apple.com/videos/play/wwdc2017/803/
Apple sound designer Hugo Verweij invites attendees to close their eyes as he takes the viewers on an aural journey through a forest, then into light rain and finally a thunderstorm. Sound, he says, has a magical ability to create emotions. This session takes the audience through various soundscapes and demonstrates that sound is an integral part of our experiences with our apps and devices.
Sound can warn us; sound can convey a person’s calm or haste. App design doesn’t end with how the app looks. Using sound in apps helps shape the experience the developer is trying to convey. Sounds attached to notifications can indicate “look at me!”, “time to wake up”, or “oops, your Apple Pay transaction failed”.
He demonstrates how sound can be used in the Toast Modern app, which is a timely demonstration as hipster toast sweeps through the Bay area. Hugo continues with a special set where he shows how some of the popular and familiar sounds in iOS were created. Sorry, I won’t give any spoilers here — you’ll have to watch it yourself! :]
Haptics combine with sound to provide a rich experience to what we see, hear and feel on our Apple Watches and iPhone 7s. The session also covers sound design to create different feelings for different tones.
This session is for more than just musicians and sound designers; it’s a must-see even if you’ve never thought about sound in your app before. If you do nothing about sound, you’ll be stuck with the default sounds, and you’ll miss the opportunity to make your app stand out and be in line with your branding.
Hugo also reminds us that silence is golden. Use sound sparingly, and offer to turn off sounds altogether. Whatever you do, ask yourself, “What do I want people to feel when they use my app?”
https://developer.apple.com/videos/play/wwdc2017/506/
In this session, Brett Keating describes what you can do with the Vision framework. Face detection with deep learning, optionally combined with Core ML, promises some interesting enhancements. There’s better detection and higher recall, which enables you to recognize smaller faces, strong profiles and even obstructed faces.
Image registration will let you stitch together separate images by using common landmarks, and features like rectangle detection and object tracking are now more refined.
Combined with Core ML and computer vision, you won’t have to do any heavy lifting to implement the Vision framework in your app. The framework will tell you where the faces are, and Apple will take care of the rest. In Apple’s words, Vision provides a “high-level device to solve computer vision problems in one simple API.”
He also discusses the benefits of on-device image processing versus cloud-based processing. By keeping processing on the device, you retain the privacy of your user’s data. The cost of cloud-based services is also a factor, as it may affect both the developer and the user. The low latency of device-based processing is also an advantage.
Frank Doepke then takes over the talk and delves into some practical demos of the Vision framework. He explains that it’s a matter of making requests, handling requests and viewing the results. You can use basic settings, and feed in a single image or a series of images. You can also use Core Image if that’s how you roll. Dropping in a Core ML model lets you further refine the tasks your app performs, such as object recognition. In the last demo, he uses an MNIST model, popular in the machine learning community, to categorize, straighten and recognize handwritten characters.
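To give a flavor of the request/handler/results flow Frank describes, here’s a minimal face-detection sketch (an assumption-laden example, not the session’s demo code):

import UIKit
import Vision

func detectFaces(in image: UIImage) {
  guard let cgImage = image.cgImage else { return }

  // 1. Make a request with a completion handler for the results.
  let request = VNDetectFaceRectanglesRequest { request, error in
    let faces = request.results as? [VNFaceObservation] ?? []
    print("Found \(faces.count) face(s)")
  }

  // 2. Hand the image and the request to a request handler.
  let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
  try? handler.perform([request])
}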
This is a great session if you’re interested in computer vision. Throw in Core ML and Core Image and you can create the next great hotdog-detecting app.
https://developer.apple.com/videos/play/wwdc2017/404/
“Debugging is what we developers do when we’re not writing bugs.” – Me, again.
I spend an awful lot of time setting breakpoints, looking at debug logs, and playing in the View Debugger and Memory Graph Debugger. Any session on debugging is my favorite session.
Wireless Development is the first topic covered in this session. The Lightning cable is no longer required — yay! Working untethered definitely aids in ARKit and tvOS development, managing other accessories plugged into the Lightning port or even when you’re just kicking back on the couch. Connecting is straightforward on basic networks, Apple TV and corporate networks. This new capability is demoed working wirelessly with the accelerometer.
Time Profiler now has an “All” button that lets you view all the active threads in your app. You can also pin one thread and compare as you scroll through the other threads.
Breakpoint debugging with conditions is now easier to work with, and code completion is now included in the breakpoint editor. Additionally, breakpoints with options now have a white triangle indicator for easy recognition. A tooltip is also available to see what options are available.
In the View Debugger, view controllers are now included in the view tree as the parents of their views. They are also indicated in-canvas with a banner, making it easy to find them. View controllers can also be selected and reviewed in the inspector.
The View Debugger lets you inspect SpriteKit views so you can debug sprites and views. Apple has included the SceneKit Inspector to edit your scene and debug it in runtime debugging mode. The entire scene graph can be explored and additionally saved as a snapshot.
“We use our debugger to debug our debuggers.” – Chris Miles.
The Memory Graph Debugger is actually built with SpriteKit. In the demo, the presenter opens Xcode in another copy of Xcode and debugs it. Finally, Sebastian Fischer demos the new debugging enhancements in Xcode 9.
https://developer.apple.com/videos/play/wwdc2017/204/
You’ll definitely want to watch this session, unless you’re one of those mythical developers who has never wrestled with your app layout.
Up first — UIKit. Navigation bars and tab bars (as children of UIBarItem) can now use the new landscape layout, which is slightly smaller, with the title and icon side-by-side. Turning on large titles in the navigation bar is as easy as setting a property and adopting largeTitleDisplayMode. A UISearchController can now live in the navigation bar’s header, and scrolls away to hide under the header.
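For example, adopting large titles and the new navigation-bar search slot might look something like this sketch; the TracksViewController class is a made-up name for illustration:

import UIKit

class TracksViewController: UITableViewController {
  override func viewDidLoad() {
    super.viewDidLoad()

    // Large titles (iOS 11+): one property plus a per-screen display mode.
    navigationController?.navigationBar.prefersLargeTitles = true
    navigationItem.largeTitleDisplayMode = .automatic

    // The search controller now lives in the navigation item and scrolls away with the header.
    let searchController = UISearchController(searchResultsController: nil)
    navigationItem.searchController = searchController
    navigationItem.hidesSearchBarWhenScrolling = true
  }
}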
Navigation bars and tab bars now support Auto Layout: these items provide their positions, and you provide the sizes. The former layout margins are now actually minimums, and topLayoutGuide and bottomLayoutGuide are deprecated in favor of the safe area insets. Layout margins now have directional variants that apply to leading and trailing constraints. You can also decide to override the properties altogether and have full-screen content.
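In code, the safe area and the new directional margins look something like this sketch (the SafeAreaViewController class and childView are assumptions for the example):

import UIKit

class SafeAreaViewController: UIViewController {
  let childView = UIView()

  override func viewDidLoad() {
    super.viewDidLoad()
    view.addSubview(childView)

    // Pin the subview to the safe area rather than the raw view edges (iOS 11+).
    childView.translatesAutoresizingMaskIntoConstraints = false
    NSLayoutConstraint.activate([
      childView.topAnchor.constraint(equalTo: view.safeAreaLayoutGuide.topAnchor),
      childView.leadingAnchor.constraint(equalTo: view.safeAreaLayoutGuide.leadingAnchor),
      childView.trailingAnchor.constraint(equalTo: view.safeAreaLayoutGuide.trailingAnchor),
      childView.bottomAnchor.constraint(equalTo: view.safeAreaLayoutGuide.bottomAnchor)
    ])

    // Leading/trailing margins via the new directional insets type.
    view.directionalLayoutMargins = NSDirectionalEdgeInsets(top: 8, leading: 16, bottom: 8, trailing: 16)
  }
}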
Table view headers and footers are now self-sizing in iOS 11. If you’re not ready for this, you can easily opt out by setting the estimated sizes to zero. In iOS 11, table view separator insets are now interpreted relative to the edges of the cells and the full width of the screen. UITableView and UITableViewHeaderFooterView now have content views that respect the safe area insets as well.
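If you’d rather keep the old sizing behavior for now, the opt-out is simply zeroing the estimates; a small sketch, assuming you have a UITableView reference:

import UIKit

func disableAutomaticEstimates(for tableView: UITableView) {
  // Setting estimates to zero opts out of iOS 11's automatic self-sizing.
  tableView.estimatedRowHeight = 0
  tableView.estimatedSectionHeaderHeight = 0
  tableView.estimatedSectionFooterHeight = 0
}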
If you are eager to adopt the look and feel of iOS 11 in your apps, this should definitely be on your watchlist.
In summary, here are our picks of the top 10 WWDC videos to watch:
Thanks to contributors: Ellen Shapiro, Kelvin Lau, Kevin Hirsch, Sam Davies, Kiva John, Caroline Begbie, Mark Rubin, Matthijs Hollemans, Vincent Ngo, and Jaime Lopez Jr!
What do you think are the “don’t miss” videos of WWDC 2017? Tell us in the comments below!
The post Top 10 WWDC 2017 Videos appeared first on Ray Wenderlich.
Whether an app retrieves application data from a server, updates your social media status or downloads remote files to disk, it’s the HTTP network requests living at the heart of mobile applications that make the magic happen. To help you with the numerous requirements for network requests, Apple provides URLSession, a complete networking API for uploading and downloading content via HTTP.
In this URLSession tutorial, you’ll learn how to build the Half Tunes app, which lets you query the iTunes Search API, then download 30-second previews of songs. The finished app will support background transfers, and let the user pause, resume or cancel in-progress downloads.
Download the starter project; it already contains a user interface to search for songs and display search results, networking service classes, and helper methods to store and play tracks. So you can focus on implementing the networking aspects of the app.
Build and run your project; you’ll see a view with a search bar at the top and an empty table view below:
Type a query in the search bar, and tap Search. The view remains empty, but don’t worry: you’ll change this with your new URLSession calls.
Before you begin, it’s important to appreciate URLSession and its constituent classes, so take a look at the quick overview below.
URLSession is technically both a class and a suite of classes for handling HTTP/HTTPS-based requests:
URLSession is the key object responsible for sending and receiving HTTP requests. You create it via URLSessionConfiguration, which comes in three flavors:
- .default: Creates a default configuration object that uses the disk-persisted global cache, credential and cookie storage objects.
- .ephemeral: Similar to the default configuration, except that all session-related data is stored in memory. Think of this as a “private” session.
- .background: Lets the session perform upload or download tasks in the background. Transfers continue even when the app itself is suspended or terminated by the system.

URLSessionConfiguration also lets you configure session properties such as timeout values, caching policies and additional HTTP headers. Refer to the documentation for a full list of configuration options.
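For instance, a configuration tweaked before creating its session might look like this; the values are arbitrary examples, not settings the tutorial requires:

let configuration = URLSessionConfiguration.default
configuration.timeoutIntervalForRequest = 30 // seconds to wait for a response
configuration.requestCachePolicy = .reloadIgnoringLocalCacheData
configuration.httpAdditionalHeaders = ["Accept": "application/json"]

let tunedSession = URLSession(configuration: configuration)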
URLSessionTask is an abstract class that denotes a task object. A session creates one or more tasks to do the actual work of fetching data and downloading or uploading files.
There are three types of concrete session tasks:
- URLSessionDataTask: Use this task for HTTP GET requests to retrieve data from servers to memory.
- URLSessionUploadTask: Use this task to upload a file from disk to a web service, typically via an HTTP POST or PUT method.
- URLSessionDownloadTask: Use this task to download a file from a remote service to a temporary file location.

You can also suspend, resume and cancel tasks. URLSessionDownloadTask has the additional ability to pause for future resumption.
Generally, URLSession returns data in two ways: via a completion handler when a task finishes, either successfully or with an error; or by calling methods on a delegate that you set when creating the session.
Now that you have an overview of what URLSession can do, you’re ready to put the theory into practice!
You’ll start by creating a data task to query the iTunes Search API for the user’s search term.
In SearchVC+SearchBarDelegate.swift, searchBarSearchButtonClicked(_:) first enables the network activity indicator on the status bar, to indicate to the user that a network process is running. Then it calls getSearchResults(searchTerm:completion:), which is a stub in QueryService.swift.
In Networking/QueryService.swift, replace the first // TODO with the following:
// 1
let defaultSession = URLSession(configuration: .default)
// 2
var dataTask: URLSessionDataTask?
Here’s what you’ve done:

1. You created a URLSession and initialized it with a default session configuration.
2. You declared a URLSessionDataTask variable, which you’ll use to make an HTTP GET request to the iTunes Search web service when the user performs a search. The data task will be re-initialized each time the user enters a new search string.
Next, replace the getSearchResults(searchTerm:completion:) stub with the following:
func getSearchResults(searchTerm: String, completion: @escaping QueryResult) {
  // 1
  dataTask?.cancel()
  // 2
  if var urlComponents = URLComponents(string: "https://itunes.apple.com/search") {
    urlComponents.query = "media=music&entity=song&term=\(searchTerm)"
    // 3
    guard let url = urlComponents.url else { return }
    // 4
    dataTask = defaultSession.dataTask(with: url) { data, response, error in
      defer { self.dataTask = nil }
      // 5
      if let error = error {
        self.errorMessage += "DataTask error: " + error.localizedDescription + "\n"
      } else if let data = data,
        let response = response as? HTTPURLResponse,
        response.statusCode == 200 {
        self.updateSearchResults(data)
        // 6
        DispatchQueue.main.async {
          completion(self.tracks, self.errorMessage)
        }
      }
    }
    // 7
    dataTask?.resume()
  }
}
Taking each numbered comment in turn:

1. For a new user query, you cancel the data task if it already exists, because you want to reuse the data task object for the new query.
2. To include the user’s search term in the query, you create a URLComponents object from the iTunes Search base URL, then set its query string: this ensures that characters in the search string are properly escaped.
3. The url property of urlComponents might be nil, so you optional-bind it to url.
4. From the session you created, you initialize a URLSessionDataTask with the query url and a completion handler to call when the data task completes.
5. If the request succeeds with an HTTP 200 status, you call updateSearchResults(_:), which parses the response data into the tracks array.
6. You switch to the main queue to pass tracks to the completion handler in SearchVC+SearchBarDelegate.swift.
7. All tasks start in a suspended state by default; calling resume() starts the data task.
Now flip back to the getSearchResults(searchTerm:completion:) completion handler in SearchVC+SearchBarDelegate.swift: after hiding the activity indicator, it stores results in searchResults, then updates the table view.
Note: If you need an HTTP method other than GET, create a URLRequest with the url, set the request’s HTTPMethod property appropriately, then create a data task with the URLRequest, instead of with the URL.
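For example, a POST variant might look like this sketch; the endpoint and payload are hypothetical, not part of Half Tunes:

var request = URLRequest(url: URL(string: "https://example.com/api/tracks")!)
request.httpMethod = "POST"
request.setValue("application/json", forHTTPHeaderField: "Content-Type")
request.httpBody = try? JSONEncoder().encode(["name": "My Track"])

let task = URLSession.shared.dataTask(with: request) { data, response, error in
  // Handle the response here.
}
task.resume()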
Build and run your app; search for any song and you’ll see the table view populate with the relevant track results like so:
With a bit of URLSession magic added, Half Tunes is now a bit functional!
Being able to view song results is nice, but wouldn’t it be better if you could tap on a song to download it? That’s precisely your next order of business. You’ll use a download task, which makes it easy to save the song snippet in a local file.
To make it easy to handle multiple downloads, you’ll first create a custom object to hold the state of an active download.
Create a new Swift file named Download.swift in the Model group.
Open Download.swift, and add the following implementation:
class Download {
  var track: Track
  init(track: Track) {
    self.track = track
  }

  // Download service sets these values:
  var task: URLSessionDownloadTask?
  var isDownloading = false
  var resumeData: Data?

  // Download delegate sets this value:
  var progress: Float = 0
}
Here’s a rundown of the properties of Download:

- track: The track to download. The track’s url property also acts as a unique identifier for a Download.
- task: The URLSessionDownloadTask that downloads the track.
- isDownloading: Whether the download is ongoing or paused.
- resumeData: Stores the Data produced when the user pauses a download task. If the host server supports it, your app can use this to resume a paused download in the future.
- progress: The fractional progress of the download, a float between 0.0 and 1.0.

Next, in Networking/DownloadService.swift, add the following property at the top of the class:
var activeDownloads: [URL: Download] = [:]
This dictionary simply maintains a mapping between a URL and its active Download, if any.
You could create your download task with a completion handler, like the data task you just created. But later in this tutorial, you’ll monitor and update the download progress: for that, you’ll need to implement a custom delegate, so you might as well do that now.
There are several session delegate protocols, listed in the URLSession documentation. URLSessionDownloadDelegate handles task-level events specific to download tasks.
You’ll soon set SearchViewController as the session delegate, so first create an extension to conform to the session delegate protocol.
Create a new Swift file named SearchVC+URLSessionDelegates.swift in the Controller group. Open it, and create the following URLSessionDownloadDelegate extension:
extension SearchViewController: URLSessionDownloadDelegate {
  func urlSession(_ session: URLSession, downloadTask: URLSessionDownloadTask,
                  didFinishDownloadingTo location: URL) {
    print("Finished downloading to \(location).")
  }
}
The only non-optional URLSessionDownloadDelegate method is urlSession(_:downloadTask:didFinishDownloadingTo:), which is called when a download finishes. For now, you’ll just print a message whenever a download completes.
With all the preparatory work out of the way, you’re now ready to implement file downloads. You’ll first create a dedicated session to handle your download tasks.
In Controller/SearchViewController.swift, add the following code right before viewDidLoad():
lazy var downloadsSession: URLSession = {
  let configuration = URLSessionConfiguration.default
  return URLSession(configuration: configuration, delegate: self, delegateQueue: nil)
}()
Here you initialize a separate session with a default configuration, and specify a delegate, which lets you receive URLSession events via delegate calls. This will be useful for monitoring the progress of the task.
Setting the delegate queue to nil causes the session to create a serial operation queue to perform all calls to delegate methods and completion handlers.
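If you preferred to receive delegate callbacks directly on the main queue, you could pass it explicitly instead of nil; this is only a variant of the return line above (same configuration and delegate), and the tutorial keeps nil and hops to the main queue only for UI work:

return URLSession(configuration: configuration, delegate: self, delegateQueue: OperationQueue.main)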
Note the lazy creation of downloadsSession: this lets you delay the creation of the session until after the view controller is initialized, which allows you to pass self as the delegate parameter to the session initializer.
Now add this line at the end of viewDidLoad():
downloadService.downloadsSession = downloadsSession
This sets the downloadsSession property of DownloadService.
With your session and delegate configured, you’re finally ready to create a download task when the user requests a track download.
In Networking/DownloadService.swift, replace the startDownload(_:) stub with the following implementation:
func startDownload(_ track: Track) {
  // 1
  let download = Download(track: track)
  // 2
  download.task = downloadsSession.downloadTask(with: track.previewURL)
  // 3
  download.task!.resume()
  // 4
  download.isDownloading = true
  // 5
  activeDownloads[download.track.previewURL] = download
}
When the user taps a table view cell’s Download button, SearchViewController, acting as TrackCellDelegate, identifies the Track for this cell, then calls startDownload(_:) with this Track. Here’s what’s going on in startDownload(_:):

1. You first create a Download with the track.
2. Using your new session, you create a URLSessionDownloadTask with the track’s preview URL, and set it to the task property of the Download.
3. You start the task by calling resume() on it.
4. You indicate that the download is in progress by setting isDownloading to true.
5. Finally, you map the download URL to its Download in the activeDownloads dictionary.
Build and run your app; search for any track and tap the Download button on a cell. After a while, you’ll see a message in the debug console signifying that the download is complete. The Download button remains, but you’ll fix that soon. First, you want to play some tunes!
When a download task completes, urlSession(_:downloadTask:didFinishDownloadingTo:) provides a URL to the temporary file location: you saw this in the print message. Your job is to move it to a permanent location in your app’s sandbox container directory before you return from the method.
In SearchVC+URLSessionDelegates, replace the print statement in urlSession(_:downloadTask:didFinishDownloadingTo:) with the following code:
// 1
guard let sourceURL = downloadTask.originalRequest?.url else { return }
let download = downloadService.activeDownloads[sourceURL]
downloadService.activeDownloads[sourceURL] = nil
// 2
let destinationURL = localFilePath(for: sourceURL)
print(destinationURL)
// 3
let fileManager = FileManager.default
try? fileManager.removeItem(at: destinationURL)
do {
  try fileManager.copyItem(at: location, to: destinationURL)
  download?.track.downloaded = true
} catch let error {
  print("Could not copy file to disk: \(error.localizedDescription)")
}
// 4
if let index = download?.track.index {
  DispatchQueue.main.async {
    self.tableView.reloadRows(at: [IndexPath(row: index, section: 0)], with: .none)
  }
}
Here’s what you’re doing at each step:

1. You extract the URL of the original request from the download task, look up the corresponding Download in your active downloads, and remove it from that dictionary.
2. You pass the URL to the localFilePath(for:) helper method in SearchViewController.swift, which generates a permanent local file path to save to, by appending the lastPathComponent of the URL (the file name and extension of the file) to the path of the app’s Documents directory.
3. Using FileManager, you move the downloaded file from its temporary file location to the desired destination file path, first clearing out any item at that location before you start the copy task. You also set the download track’s downloaded property to true.
4. Finally, you use the download track’s index property to reload the corresponding cell.

Build and run your project. Run a query, then pick any track and download it. When the download has finished, you’ll see the file path location printed to your console:
The Download button disappears now, because the delegate method set the track’s downloaded property to true. Tap the track and you’ll hear it play in the presented AVPlayerViewController, as shown below:
What if the user wants to pause a download, or cancel it altogether? In this section, you’ll implement the pause, resume and cancel features to give the user complete control over the download process.
You’ll start by allowing the user to cancel an active download.
In DownloadService.swift, replace the cancelDownload(_:) stub with the following code:
func cancelDownload(_ track: Track) {
  if let download = activeDownloads[track.previewURL] {
    download.task?.cancel()
    activeDownloads[track.previewURL] = nil
  }
}
To cancel a download, you retrieve the download task from the corresponding Download in the dictionary of active downloads, and call cancel() on it to cancel the task. You then remove the download object from the dictionary of active downloads.
Pausing a download is conceptually similar to cancelling: pausing cancels the download task, but also produces resume data, which contains enough information to resume the download at a later time, if the host server supports that functionality.
Now, replace the pauseDownload(_:) stub with the following code:
func pauseDownload(_ track: Track) {
  guard let download = activeDownloads[track.previewURL] else { return }
  if download.isDownloading {
    download.task?.cancel(byProducingResumeData: { data in
      download.resumeData = data
    })
    download.isDownloading = false
  }
}
The key difference here is that you call cancel(byProducingResumeData:) instead of cancel(). You provide a closure parameter to this method, where you save the resume data to the appropriate Download for future resumption.
You also set the isDownloading property of the Download to false to indicate that the download is paused.
With the pause function completed, the next order of business is to allow the resumption of a paused download.
Replace the resumeDownload(_:) stub with the following code:
func resumeDownload(_ track: Track) {
  guard let download = activeDownloads[track.previewURL] else { return }
  if let resumeData = download.resumeData {
    download.task = downloadsSession.downloadTask(withResumeData: resumeData)
  } else {
    download.task = downloadsSession.downloadTask(with: download.track.previewURL)
  }
  download.task!.resume()
  download.isDownloading = true
}
When the user resumes a download, you check the appropriate Download for the presence of resume data. If found, you create a new download task by invoking downloadTask(withResumeData:) with the resume data. If the resume data is absent for some reason, you create a new download task with the download URL.
In both cases, you start the task by calling resume(), and set the isDownloading flag of the Download to true, to indicate the download has resumed.
There’s only one thing left to do for these three functions to work properly: you need to show or hide the Pause/Resume and Cancel buttons, as appropriate. To do this, the TrackCell configure(track:downloaded:) method needs to know if the track has an active download, and whether it’s currently downloading.
In TrackCell.swift, change configure(track:downloaded:) to configure(track:downloaded:download:):
func configure(track: Track, downloaded: Bool, download: Download?) {
In SearchViewController.swift, fix the call in tableView(_:cellForRowAt:):
cell.configure(track: track, downloaded: track.downloaded,
               download: downloadService.activeDownloads[track.previewURL])
Here, you extract the track’s download object from the activeDownloads dictionary.
Back in TrackCell.swift, locate the two TODOs in configure(track:downloaded:download:). Replace the first // TODO with this variable:
var showDownloadControls = false
And replace the second // TODO with the following code:
if let download = download {
  showDownloadControls = true
  let title = download.isDownloading ? "Pause" : "Resume"
  pauseButton.setTitle(title, for: .normal)
}
As the comment notes, a non-nil download object means a download is in progress, so the cell should show the download controls: Pause/Resume and Cancel. Since the pause and resume functions share the same button, you toggle the button between the two states, as appropriate.
Below this if-closure, add the following code:
pauseButton.isHidden = !showDownloadControls
cancelButton.isHidden = !showDownloadControls
Here, you show the buttons for a cell only if a download is active.
Finally, replace the last line of this method:
downloadButton.isHidden = downloaded
with the following code:
downloadButton.isHidden = downloaded || showDownloadControls
Here, you tell the cell to hide the Download button if its track is downloading.
Build and run your project; download a few tracks concurrently and you’ll be able to pause, resume and cancel them at will:
Currently, the app doesn’t show the progress of the download. To improve the user experience, you’ll change your app to listen for download progress events, and display the progress in the cells. And there’s a session delegate method that’s perfect for this job!
First, in TrackCell.swift, add the following helper method:
func updateDisplay(progress: Float, totalSize: String) {
  progressView.progress = progress
  progressLabel.text = String(format: "%.1f%% of %@", progress * 100, totalSize)
}
The track cell has progressView and progressLabel outlets. The delegate method will call this helper method to set their values.
Next, in SearchVC+URLSessionDelegates.swift, add the following delegate method to the URLSessionDownloadDelegate extension:
func urlSession(_ session: URLSession, downloadTask: URLSessionDownloadTask,
                didWriteData bytesWritten: Int64, totalBytesWritten: Int64,
                totalBytesExpectedToWrite: Int64) {
  // 1
  guard let url = downloadTask.originalRequest?.url,
    let download = downloadService.activeDownloads[url] else { return }
  // 2
  download.progress = Float(totalBytesWritten) / Float(totalBytesExpectedToWrite)
  // 3
  let totalSize = ByteCountFormatter.string(fromByteCount: totalBytesExpectedToWrite, countStyle: .file)
  // 4
  DispatchQueue.main.async {
    if let trackCell = self.tableView.cellForRow(at: IndexPath(row: download.track.index,
                                                               section: 0)) as? TrackCell {
      trackCell.updateDisplay(progress: download.progress, totalSize: totalSize)
    }
  }
}
Looking through this delegate method, step-by-step:

1. You extract the URL of the provided downloadTask, and use it to find the matching Download in your dictionary of active downloads.
2. The method also provides the total bytes written and the total bytes expected to be written. You calculate the progress as the ratio of these two values and save it in the Download. The track cell will use this value to update the progress view.
3. ByteCountFormatter takes a byte value and generates a human-readable string showing the total download file size. You’ll use this string to show the size of the download alongside the percentage complete.
4. Finally, you find the cell responsible for displaying the Track, and call the cell’s helper method to update its progress view and progress label with the values derived from the previous steps. This involves the UI, so you do it on the main queue.

Now, update the cell’s configuration, to properly display the progress view and status when a download is in progress.
Open TrackCell.swift. In configure(track:downloaded:download:), add the following line inside the if-closure, after the pause button title is set:
progressLabel.text = download.isDownloading ? "Downloading..." : "Paused"
This gives the cell something to show, before the first update from the delegate method, and while the download is paused.
And add the following code below the if-closure, below the isHidden lines for the two buttons:
progressView.isHidden = !showDownloadControls
progressLabel.isHidden = !showDownloadControls
As for the buttons, this shows the progress view and label only while the download is in progress.
Build and run your project; download any track and you should see the progress bar status update as the download progresses:
Hurray, you’ve made, erm, progress! :]
Your app is quite functional at this point, but there’s one major enhancement left to add: background transfers. In this mode, downloads continue even when your app is backgrounded or crashes for any reason. This isn’t really necessary for song snippets, which are pretty small; but your users will appreciate this feature if your app transfers large files.
But if your app isn’t running, how can this work? The OS runs a separate daemon outside the app to manage background transfer tasks, and it sends the appropriate delegate messages to the app as the download tasks run. In the event the app terminates during an active transfer, the tasks will continue to run unaffected in the background.
When a task completes, the daemon will relaunch the app in the background. The re-launched app will re-create the background session, to receive the relevant completion delegate messages, and perform any required actions such as persisting downloaded files to disk.
You access this magic by creating a session with the background session configuration.
In SearchViewController.swift, in the initialization of downloadsSession, find the following line of code:
let configuration = URLSessionConfiguration.default
…and replace it with the following line:
let configuration = URLSessionConfiguration.background(withIdentifier:
"bgSessionConfiguration")
Instead of using a default session configuration, you use a special background session configuration. Note that you also set a unique identifier for the session here to allow your app to create a new background session, if needed.
If a background task completes when the app isn’t running, the app will be relaunched in the background. You’ll need to handle this event from your app delegate.
Switch to AppDelegate.swift, and add the following code near the top of the class:
var backgroundSessionCompletionHandler: (() -> Void)?
Next, add the following method to AppDelegate.swift:
func application(_ application: UIApplication,
                 handleEventsForBackgroundURLSession identifier: String,
                 completionHandler: @escaping () -> Void) {
  backgroundSessionCompletionHandler = completionHandler
}
Here, you save the provided completionHandler as a variable in your app delegate for later use.
application(_:handleEventsForBackgroundURLSession:completionHandler:) wakes up the app to deal with the completed background task. You need to handle two things in this method:

First, the app needs to re-create the appropriate background session. But since this app creates the background session when it instantiates SearchViewController, you’re already reconnected at this point!

Second, you need to capture the completion handler provided by the method, so you can invoke it once the background session finishes its work.

The place to invoke the provided completion handler is urlSessionDidFinishEvents(forBackgroundURLSession:): it’s a URLSessionDelegate method that fires when all tasks pertaining to the background session have finished.
In SearchVC+URLSessionDelegates.swift, find the import:
import Foundation
and add the following import underneath:
import UIKit
Lastly, add the following extension:
extension SearchViewController: URLSessionDelegate {
  // Standard background session handler
  func urlSessionDidFinishEvents(forBackgroundURLSession session: URLSession) {
    if let appDelegate = UIApplication.shared.delegate as? AppDelegate,
      let completionHandler = appDelegate.backgroundSessionCompletionHandler {
      appDelegate.backgroundSessionCompletionHandler = nil
      DispatchQueue.main.async {
        completionHandler()
      }
    }
  }
}
The above code simply grabs the stored completion handler from the app delegate and invokes it on the main thread. You reference the app delegate by getting the shared delegate from the UIApplication, which is accessible thanks to the UIKit import.
Build and run your app; start a few concurrent downloads and tap the Home button to background the app. Wait until you think the downloads have completed, then double-tap the Home button to reveal the app switcher.
The downloads should have finished, with their new status reflected in the app snapshot. Open the app to confirm this:
You now have a fully functional music streaming app! Your move now, Apple Music! :]
You can download the complete project for this tutorial here.
Congratulations! You’re now well-equipped to handle most common networking requirements in your app. There are more URLSession topics than would fit in this tutorial, for example, upload tasks and session configuration settings, such as timeout values and caching policies.
To learn more about these features (and others!), check out the following resources:
I hope you found this tutorial useful. Feel free to join the discussion below!
The post URLSession Tutorial: Getting Started appeared first on Ray Wenderlich.
In this screencast, you'll learn how to use iOS 11 drag and drop to export your data using many different representations, and how to add drag and drop into custom views in your app.
The post Screencast: iOS 11 Drag and Drop with Multiple Data Representations and Custom Views appeared first on Ray Wenderlich.
In this video, you'll be introduced to "iOS design patterns" including what they are and how they're useful.
The post Video Tutorial: iOS Design Patterns Part 1: Introduction appeared first on Ray Wenderlich.
Learn two ways to structure your project for design patterns, "grouping by function" and "grouping by type" and learn which is best for your project.
The post Video Tutorial: iOS Design Patterns Part 2: Project Setup appeared first on Ray Wenderlich.
Have you picked up so much wonderful code that you’re overwhelmed and don’t know how to keep it all tidily organized in your projects? Don’t worry if so! It’s a common problem raywenderlich.com visitors develop. :]
Or do you feel like you’re always doing the same things over and over? Always “reinventing the wheel”?
If you’re looking for help solving these issues, today is your lucky day!
I’m proud to announce the release of my brand new, highly-anticipated course, iOS Design Patterns, ready for Swift 3 & iOS 10!
In this 11-part course, you’ll be introduced to design patterns that are specifically tailored for use with iOS. You’ll start by learning how best to set your projects up, and then jump right in to using design patterns that include:
Let’s take a peek at what’s inside.
Video 1: Introduction You’ll be introduced to “iOS design patterns” in this video, including what design patterns are and how they’re useful.
Video 2: Project Setup Learn two ways to structure your project for design patterns: “grouping by function” and “grouping by type.” You’ll also learn which to use depending on your project.
Video 3: MVC-N In this video, learn about Model-View-Controller (MVC) and the dreaded massive view controller problem. Oh my! Fortunately, Model-View-Controller-Networking (MVC-N) is here to save the day.
Video 4: MVVM. Learn about Model-View-ViewModel (MVVM) in this video, which you’ll use to further combat massive view controllers.
Video 5: Multicast Closure Delegate Learn about the multicast closure delegate pattern, a spin-off pattern from delegate. This is in preparation for performing auto re-login authentication in the next video.
Video 6: Auto Re-Login Authentication Use the multicast closure delegate pattern from the previous video to create an auto re-login authentication client.
Video 7: Memento Learn about the memento pattern, which allows an object’s state to be saved and restored later.
Video 8: Composition Over Inheritance Learn about “composition over inheritance” in this video, a design principle used by most design patterns.
Video 9: Container Views In this video, you’ll learn how to DRY out your storyboard UIs using container views. This is in preparation for the next video, where you’ll use this to implement the visitor pattern.
Video 10: Visitor Learn the visitor design pattern, which you’ll use to eliminate view controllers’ code duplication logic. This video builds on the previous one, which eliminated storyboard UI duplication.
Video 11: Conclusion In this video, you’ll review what you learned in this “iOS Design patterns” video tutorial series and find out where to go from here.
Want to check out the course? You can watch the introduction for free!
The rest of the course is for raywenderlich.com subscribers only. Here’s how you can get access:
There’s much more in store for raywenderlich.com subscribers – if you’re curious, you can check out our full schedule of upcoming courses.
I hope you enjoy our new course, and stay tuned for many more new courses and updates to come! :]
The post New Course: iOS Design Patterns appeared first on Ray Wenderlich.
Update Note: This tutorial has been updated to Xcode 9.0 and Swift 4 by Owen Brown. The original tutorial was written by Ray Wenderlich.
UIScrollView is one of the most versatile and useful controls in iOS. It is the basis for the very popular UITableView and is a great way to present content larger than a single screen. In this UIScrollView tutorial, you’ll create an app that’s very similar to the Photos app and learn all about UIScrollView. You’ll learn how to:

- Use UIScrollView to zoom and view a very large image.
- Keep UIScrollView’s content centered while zooming.
- Use UIScrollView for vertical scrolling with Auto Layout.
- Use UIPageViewController to allow scrolling through multiple pages of content.

This tutorial assumes you understand how to use Interface Builder to add objects and connect outlets between your code and storyboard scenes. If you’re not familiar with Interface Builder or Storyboards, work through our Storyboards tutorial before this one.
Click here to download the starter project for this UIScrollView tutorial, and then open it in Xcode.
Build and run to see what you’re starting with:
You can select a photo to see it full sized, but sadly, you can’t see the whole image due to the limited size of the device. What you really want is to fit the image to the device’s screen by default, and zoom to see details just like the Photos app.
Can you fix it? Yes you can!
To kick off this UIScrollView tutorial, you’ll set up a scroll view that lets the user pan and zoom an image.
Open Main.storyboard, and drag a Scroll View from the Object Library onto the Document Outline right below View on the Zoomed Photo View Controller scene. Then, move Image View inside your newly-added Scroll View. Your Document Outline should now look like this:
See that red dot? Xcode is complaining that your Auto Layout rules are wrong.
To fix them, select Scroll View and tap the pin button at the bottom of the storyboard window. Add four new constraints: top, bottom, leading and trailing. Set each constraint’s constant to 0, and uncheck Constrain to Margins. This should look like this:
Now select Image View and add the same four constraints on it too.
If you get an Auto Layout warning afterwards, select Zoomed Photo View Controller in the Document Outline and then select Editor\Resolve Auto Layout Issues\Update Frames. If you don’t get a warning, Xcode likely updated the frames automatically for you, so you don’t need to do anything.
Build and run.
Thanks to the scroll view, you can now see the full-size image by swiping! But what if you want to see the picture scaled to fit the device screen? Or what if you want to zoom in and out?
You’ll need to write code for these!
Open ZoomedPhotoViewController.swift, and add the following outlets inside the class declaration:
@IBOutlet weak var scrollView: UIScrollView!
@IBOutlet weak var imageViewBottomConstraint: NSLayoutConstraint!
@IBOutlet weak var imageViewLeadingConstraint: NSLayoutConstraint!
@IBOutlet weak var imageViewTopConstraint: NSLayoutConstraint!
@IBOutlet weak var imageViewTrailingConstraint: NSLayoutConstraint!
Back in Main.storyboard, set the scrollView outlet to the Scroll View, and set the Scroll View’s delegate to Zoomed Photo View Controller. Also, connect the new constraint outlets to the appropriate constraints in the Document Outline, like this:
Back in ZoomedPhotoViewController.swift, add the following to the end of the file:
extension ZoomedPhotoViewController: UIScrollViewDelegate {
  func viewForZooming(in scrollView: UIScrollView) -> UIView? {
    return imageView
  }
}
This makes ZoomedPhotoViewController conform to UIScrollViewDelegate and implement viewForZooming(in:). The scroll view calls this method to determine which of its subviews to scale whenever it’s pinched, and here you tell it to scale imageView.
Next, add the following inside the class right after viewDidLoad():
fileprivate func updateMinZoomScaleForSize(_ size: CGSize) {
  let widthScale = size.width / imageView.bounds.width
  let heightScale = size.height / imageView.bounds.height
  let minScale = min(widthScale, heightScale)

  scrollView.minimumZoomScale = minScale
  scrollView.zoomScale = minScale
}
This method calculates the zoom scale for the scroll view. A zoom scale of one indicates that the content is displayed at normal size. A zoom scale less than one shows the content zoomed out, and a zoom scale greater than one shows the content zoomed in.
To get the minimum zoom scale, you first calculate the zoom required to fit the image view snugly within the scroll view based on its width. You then calculate the same for the height. You take the minimum of the width and height zoom scales, and set this for both minimumZoomScale and zoomScale on the scroll view. As a result, you’ll initially see the entire image fully zoomed out, and you’ll be able to zoom back out to this level too.
Since the maximumZoomScale defaults to 1, you don’t need to set it. If you set it to greater than 1, the image may appear blurry when fully zoomed in. If you set it to less than 1, you wouldn’t be able to zoom in to the full image’s resolution.
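If you did want to allow zooming in past the fitted size, you could set an explicit cap alongside the minimum. The following is only a variant sketch of updateMinZoomScaleForSize(_:), not part of the tutorial’s code, and the 3x multiplier is an arbitrary example:

fileprivate func updateZoomScalesForSize(_ size: CGSize) {
  let widthScale = size.width / imageView.bounds.width
  let heightScale = size.height / imageView.bounds.height
  let minScale = min(widthScale, heightScale)

  scrollView.minimumZoomScale = minScale
  scrollView.zoomScale = minScale
  // Allow zooming up to 3x the fitted scale, but never below 1 (full resolution).
  scrollView.maximumZoomScale = max(1.0, minScale * 3.0)
}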
Finally, you also need to update the minimum zoom scale each time the controller updates its subviews. Add the following right before the previous method to do this:
override func viewWillLayoutSubviews() {
  super.viewWillLayoutSubviews()
  updateMinZoomScaleForSize(view.bounds.size)
}
Build and run, and you should get the following result:
You can now pan and zoom, and the image initially fits on the screen. Awesome!
However, there’s still one problem: the image is pinned to the top of the scroll view. It’d sure be nice to have it centered instead, right?
Still in ZoomedPhotoViewController.swift, add the following inside the class extension, right after viewForZooming(in:):
func scrollViewDidZoom(_ scrollView: UIScrollView) {
  updateConstraintsForSize(view.bounds.size)
}

fileprivate func updateConstraintsForSize(_ size: CGSize) {
  let yOffset = max(0, (size.height - imageView.frame.height) / 2)
  imageViewTopConstraint.constant = yOffset
  imageViewBottomConstraint.constant = yOffset

  let xOffset = max(0, (size.width - imageView.frame.width) / 2)
  imageViewLeadingConstraint.constant = xOffset
  imageViewTrailingConstraint.constant = xOffset

  view.layoutIfNeeded()
}
The scroll view calls scrollViewDidZoom(_:) each time the user zooms. In response, you simply call updateConstraintsForSize(_:) and pass in the view’s bounds size.
updateConstraintsForSize(_:) gets around an annoyance with UIScrollView: if the scroll view’s content size is smaller than its bounds, the contents are placed at the top-left rather than the center.
You get around this by adjusting the layout constraints for the image view. You first center the image vertically by subtracting the height of imageView from the view’s height and dividing it in half. This value is used as padding for the top and bottom imageView constraints. Similarly, you calculate an offset for the leading and trailing constraints of imageView based on the width.
Give yourself a pat on the back, and build and run your project! Select an image, and if everything went smoothly, you’ll end up with a lovely image that you can zoom and pan. :]
Now suppose you want to change PhotoScroll to display the image at the top and add comments below it. Depending on how long the comment is, you may end up with more text than your device can display: Scroll View to the rescue!
Note: In general, Auto Layout considers the top, left, bottom, and right edges of a view to be the visible edges. However, UIScrollView scrolls its content by changing the origin of its bounds. To make this work with Auto Layout, the edges within a scroll view actually refer to the edges of its content view.
To size the scroll view’s frame with Auto Layout, constraints must either be explicit regarding the width and height of the scroll view, or the edges of the scroll view must be tied to views outside of its own subtree.
You can read more in this technical note from Apple.
You’ll next learn how to fix the width of a scroll view, which is really its content size width, using Auto Layout.
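If it helps to see the same rules expressed in code rather than Interface Builder, here is a rough sketch for illustration only (the project itself uses the storyboard, and the names ScrollingFormViewController and containerView are hypothetical):
import UIKit

class ScrollingFormViewController: UIViewController {
  override func viewDidLoad() {
    super.viewDidLoad()

    let scrollView = UIScrollView()
    let containerView = UIView()
    scrollView.translatesAutoresizingMaskIntoConstraints = false
    containerView.translatesAutoresizingMaskIntoConstraints = false
    view.addSubview(scrollView)
    scrollView.addSubview(containerView)

    NSLayoutConstraint.activate([
      // Pin the scroll view to the controller's view.
      scrollView.topAnchor.constraint(equalTo: view.topAnchor),
      scrollView.bottomAnchor.constraint(equalTo: view.bottomAnchor),
      scrollView.leadingAnchor.constraint(equalTo: view.leadingAnchor),
      scrollView.trailingAnchor.constraint(equalTo: view.trailingAnchor),

      // Inside a scroll view, these edge constraints define its contentSize.
      containerView.topAnchor.constraint(equalTo: scrollView.topAnchor),
      containerView.bottomAnchor.constraint(equalTo: scrollView.bottomAnchor),
      containerView.leadingAnchor.constraint(equalTo: scrollView.leadingAnchor),
      containerView.trailingAnchor.constraint(equalTo: scrollView.trailingAnchor),

      // Fix the content width to a view outside the scroll view's subtree,
      // and give the content an explicit height, mirroring the storyboard setup.
      containerView.widthAnchor.constraint(equalTo: view.widthAnchor),
      containerView.heightAnchor.constraint(equalToConstant: 500)
    ])
  }
}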
Open Main.storyboard and lay out a new scene:
First, add a new View Controller. In the Size Inspector replace Fixed with Freeform for the Simulated Size, and enter a width of 340 and a height of 800.
You'll notice the layout of the controller gets narrower and longer, simulating long vertical content. The simulated size helps you visualize the display in Interface Builder; it has no effect at runtime.
Uncheck Adjust Scroll View Insets in the Attributes Inspector for your newly created view controller.
Add a Scroll View that fills the entire space of the view controller.
Add leading and trailing constraints with constant values of 0 to the view controller, and make sure to uncheck Constrain to margin. Add top and bottom constraints from Scroll View to the Top and Bottom Layout guides, respectively. They should also have constants of 0.
Add a View as a child of the Scroll View, and resize it to fit the entire space of the Scroll View.
Rename its storyboard Label to Container View. Like before, add top, bottom, leading and trailing constraints, with constants of 0 and unchecked Constrain to Margins.
To fix the Auto Layout errors, you need to specify the scroll view’s size. Set the width of Container View to match the view controller’s width. Attach an equal-width constraint from the Container View to the View Controller’s main view. For the height of Container View, define a height constraint of 500.
Note: Auto Layout rules must comprehensively define a Scroll View's contentSize. This is the key step in getting a Scroll View to be correctly sized when using Auto Layout.
Add an Image View inside Container View.
In the Attributes Inspector, specify photo1 for the image; choose Aspect Fit for the mode; and check clips to bounds.
Add top, leading, and trailing constraints to Container View like before, and add a height constraint of 300.
Add a Label inside Container View below the image view. Specify the label's text as What name fits me best?, and add a centered horizontal constraint relative to Container View. Add a vertical spacing constraint of 0 between the label and the image view.
Add a Text Field inside of Container View below the new label. Add leading and trailing constraints to Container View with constant values of 8, and no margin. Add a vertical-space constraint of 30 relative to the label.
You next need to connect a segue to your new View Controller.
To do so, first delete the existing push segue between the Photo Scroll scene and the Zoomed Photo View Controller scene. Don’t worry, all the work you’ve done on Zoomed Photo View Controller will be added back to your app later.
In the Photo Scroll scene, control-drag from PhotoCell to the new View Controller and add a Show segue. Set its identifier to showPhotoPage.
Build and run.
You can see that the layout is correct in vertical orientation. Try rotating to landscape orientation. In landscape, there is not enough vertical room to show all the content, yet the scroll view allows you to properly scroll to see the label and the text field. Unfortunately, since the image in the new view controller is hard-coded, the image you selected in the collection view is not shown.
To fix this, you need to pass the image name to the view controller when the segue is executed.
Create a new file with the iOS\Source\Cocoa Touch Class template. Name the class PhotoCommentViewController, and set the subclass to UIViewController. Make sure that the language is set to Swift. Click Next and save it with the rest of the project.
Replace the contents of PhotoCommentViewController.swift with this code:
import UIKit
class PhotoCommentViewController: UIViewController {
@IBOutlet weak var imageView: UIImageView!
@IBOutlet weak var scrollView: UIScrollView!
@IBOutlet weak var nameTextField: UITextField!
var photoName: String?
override func viewDidLoad() {
super.viewDidLoad()
if let photoName = photoName {
self.imageView.image = UIImage(named: photoName)
}
}
}
This adds IBOutlets and sets the image of imageView based on a passed-in photoName.
Back in the storyboard, open the Identity Inspector for View Controller, and select PhotoCommentViewController for the Class. Then wire the IBOutlets for the Scroll View, Image View and Text Field.
Open CollectionViewController.swift, and replace prepare(for:sender:) with this:
override func prepare(for segue: UIStoryboardSegue, sender: Any?) {
if let cell = sender as? UICollectionViewCell,
let indexPath = collectionView?.indexPath(for: cell),
let photoCommentViewController = segue.destination as? PhotoCommentViewController {
photoCommentViewController.photoName = "photo\(indexPath.row + 1)"
}
}
This sets the name of the photo to be shown on PhotoCommentViewController whenever one of the photos is tapped.
Build and run.
Your view nicely displays the content and when needed allows you to scroll down to see more. You’ll notice two issues with the keyboard: first, when entering text, the keyboard hides the Text Field. Second, there is no way to dismiss the keyboard. Ready to fix the glitches?
Unlike UITableViewController, which automatically moves its content out of the way of the keyboard, UIScrollView requires you to manage the keyboard manually when you use it directly. You can do this by making PhotoCommentViewController observe the keyboard Notification objects that iOS sends whenever the keyboard is about to show or hide.
Open PhotoCommentViewController.swift, and add the following code at the bottom of viewDidLoad() (ignore the compiler errors for now):
NotificationCenter.default.addObserver(
self,
selector: #selector(PhotoCommentViewController.keyboardWillShow(_:)),
name: Notification.Name.UIKeyboardWillShow,
object: nil
)
NotificationCenter.default.addObserver(
self,
selector: #selector(PhotoCommentViewController.keyboardWillHide(_:)),
name: Notification.Name.UIKeyboardWillHide,
object: nil
)
Next, add the following method to stop listening for notifications when the object’s life ends:
deinit {
NotificationCenter.default.removeObserver(self)
}
Then add the methods referenced by those selectors to the view controller:
func adjustInsetForKeyboardShow(_ show: Bool, notification: Notification) {
let userInfo = notification.userInfo ?? [:]
let keyboardFrame = (userInfo[UIKeyboardFrameBeginUserInfoKey] as! NSValue).cgRectValue
let adjustmentHeight = (keyboardFrame.height + 20) * (show ? 1 : -1)
scrollView.contentInset.bottom += adjustmentHeight
scrollView.scrollIndicatorInsets.bottom += adjustmentHeight
}
@objc func keyboardWillShow(_ notification: Notification) {
adjustInsetForKeyboardShow(true, notification: notification)
}
@objc func keyboardWillHide(_ notification: Notification) {
adjustInsetForKeyboardShow(false, notification: notification)
}
adjustInsetForKeyboardShow(_:notification:) takes the keyboard's height as delivered in the notification, adds 20 points of padding, and then either adds the result to or subtracts it from the scroll view's contentInset. This way, the UIScrollView scrolls up or down so the UITextField is always visible on the screen. When the notification fires, either keyboardWillShow(_:) or keyboardWillHide(_:) is called. These methods then call adjustInsetForKeyboardShow(_:notification:), indicating which direction to move the scroll view.
To dismiss the keyboard, add this method to PhotoCommentViewController.swift:
@IBAction func hideKeyboard(_ sender: AnyObject) {
nameTextField.endEditing(true)
}
This method will resign the first responder status of the text field, which will in turn dismiss the keyboard.
Finally, open Main.storyboard and drag a Tap Gesture Recognizer from the Object Library onto the View in the Photo Comment View Controller scene. Then wire it to the hideKeyboard(_:) IBAction in Photo Comment View Controller.
To make it more user-friendly, the keyboard should also dismiss when the return key is pressed. Right-click nameTextField and wire its Primary Action Triggered event to hideKeyboard(_:) as well.
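If you'd rather wire this up in code instead of the storyboard, a hypothetical equivalent would be to add a target in PhotoCommentViewController's viewDidLoad():
// Hypothetical alternative to the storyboard wiring: have the return key
// (Primary Action Triggered) call the same hideKeyboard(_:) action.
nameTextField.addTarget(
  self,
  action: #selector(hideKeyboard(_:)),
  for: .primaryActionTriggered
)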
Build and run.
Navigate to the Photo Comment View Controller scene. Tap the text field and then tap somewhere else on the view. The keyboard should properly show and hide itself relative to the other content on the screen. Likewise, tapping the return key does the same.
In the third section of this UIScrollView tutorial, you'll create a scroll view that allows paging. This means that the scroll view locks onto a page when you stop dragging. You can see this in action in the App Store app when you view screenshots of an app.
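As an aside, this snap-to-page behavior is built into UIScrollView itself via its isPagingEnabled property. A bare-bones sketch, assuming it runs inside some view controller's viewDidLoad() (not used in this project, which relies on UIPageViewController instead), would be:
// Illustration only: a raw scroll view that locks onto pages when dragging stops.
let pagingScrollView = UIScrollView(frame: view.bounds)
pagingScrollView.isPagingEnabled = true
pagingScrollView.contentSize = CGSize(width: view.bounds.width * 3,  // e.g. three pages
                                      height: view.bounds.height)
view.addSubview(pagingScrollView)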
Go to Main.storyboard and drag a Page View Controller from the Object Library. Open the Identity Inspector and enter PageViewController for the Storyboard ID.
In the Attributes Inspector, the Transition Style is set to Page Curl by default; change it to Scroll and set the Page Spacing to 8.
In the Photo Comment View Controller scene’s Identity Inspector, specify a Storyboard ID of PhotoCommentViewController, so that you can refer to it from code.
Open PhotoCommentViewController.swift and add this property after the others:
var photoIndex: Int!
This will reference the index of the photo to show and will be used by the page view controller.
Create a new file with the iOS\Source\Cocoa Touch Class template. Name the class ManagePageViewController and set the subclass to UIPageViewController.
Open ManagePageViewController.swift and replace the contents of the file with the following:
import UIKit
class ManagePageViewController: UIPageViewController {
var photos = ["photo1", "photo2", "photo3", "photo4", "photo5"]
var currentIndex: Int!
override func viewDidLoad() {
super.viewDidLoad()
// 1
if let viewController = viewPhotoCommentController(currentIndex ?? 0) {
let viewControllers = [viewController]
// 2
setViewControllers(
viewControllers,
direction: .forward,
animated: false,
completion: nil
)
}
}
func viewPhotoCommentController(_ index: Int) -> PhotoCommentViewController? {
guard let storyboard = storyboard,
let page = storyboard.instantiateViewController(withIdentifier: "PhotoCommentViewController")
as? PhotoCommentViewController else {
return nil
}
page.photoName = photos[index]
page.photoIndex = index
return page
}
}
Here's what this code does:
1. viewPhotoCommentController(_:) creates an instance of PhotoCommentViewController through the storyboard. You pass the name of the image as a parameter so that the view displayed matches the image you selected in the previous screen.
2. setViewControllers(_:direction:animated:completion:) sets up the UIPageViewController by passing it an array that contains the single view controller you just created.
You next need to implement UIPageViewControllerDataSource. Add the following class extension to the end of this file:
extension ManagePageViewController: UIPageViewControllerDataSource {
func pageViewController(_ pageViewController: UIPageViewController,
viewControllerBefore viewController: UIViewController) -> UIViewController? {
if let viewController = viewController as? PhotoCommentViewController,
let index = viewController.photoIndex,
index > 0 {
return viewPhotoCommentController(index - 1)
}
return nil
}
func pageViewController(_ pageViewController: UIPageViewController,
viewControllerAfter viewController: UIViewController) -> UIViewController? {
if let viewController = viewController as? PhotoCommentViewController,
let index = viewController.photoIndex,
(index + 1) < photos.count {
return viewPhotoCommentController(index + 1)
}
return nil
}
}
The UIPageViewControllerDataSource allows you to provide content when the page changes. You provide view controller instances for paging in both the forward and backward directions. In both cases, photoIndex is used to determine which image is currently being displayed. The viewController parameter to both methods indicates the currently displayed view controller, and based on its photoIndex, a new controller is created and returned.
You also need to actually set the dataSource. Add the following to the end of viewDidLoad():
dataSource = self
There are only a couple of things left to do to get your page view running. First, you'll fix the flow of the application.
Switch back to Main.storyboard and select your newly created Page View Controller scene. In the Identity Inspector, specify ManagePageViewController for its class.
Delete the push segue showPhotoPage you created earlier. Then control-drag from PhotoCell in the Photo Scroll scene to the Manage Page View Controller scene and select a Show segue. In the Attributes Inspector for the segue, set its identifier to showPhotoPage as before.
Open CollectionViewController.swift and change the implementation of prepare(for:sender:) to the following:
override func prepare(for segue: UIStoryboardSegue, sender: Any?) {
if let cell = sender as? UICollectionViewCell,
let indexPath = collectionView?.indexPath(for: cell),
let managePageViewController = segue.destination as? ManagePageViewController {
managePageViewController.photos = photos
managePageViewController.currentIndex = indexPath.row
}
}
Build and run.
You can now scroll side to side to page between different detail views. :]
For the final part of this UIScrollView tutorial, you will add a UIPageControl to your application.
Fortunately, UIPageViewController has the ability to automatically provide a UIPageControl.
To do so, your UIPageViewController must have a transition style of Scroll (UIPageViewControllerTransitionStyle.scroll), and you must implement two additional methods of UIPageViewControllerDataSource. You already set the Transition Style (great job!), so all you need to do is add these two methods inside the UIPageViewControllerDataSource extension on ManagePageViewController:
func presentationCount(for pageViewController: UIPageViewController) -> Int {
return photos.count
}
func presentationIndex(for pageViewController: UIPageViewController) -> Int {
return currentIndex ?? 0
}
In presentationCount(for:), you specify the number of pages to display in the page view controller.
In presentationIndex(for:), you tell the page view controller which page should initially be selected.
After you've implemented the required data source methods, you can add further customization with the UIAppearance API. In AppDelegate.swift, replace application(_:didFinishLaunchingWithOptions:) with this:
func application(_ application: UIApplication,
didFinishLaunchingWithOptions launchOptions: [UIApplicationLaunchOptionsKey: Any]?) -> Bool {
let pageControl = UIPageControl.appearance()
pageControl.pageIndicatorTintColor = UIColor.lightGray
pageControl.currentPageIndicatorTintColor = UIColor.red
return true
}
This will customize the colors of the UIPageControl.
Build and run.
Almost there! The very last step is to add back the zooming view when tapping an image.
Open PhotoCommentViewController.swift, and add the following to the end of the class:
@IBAction func openZoomingController(_ sender: AnyObject) {
self.performSegue(withIdentifier: "zooming", sender: nil)
}
override func prepare(for segue: UIStoryboardSegue,
sender: Any?) {
if let id = segue.identifier,
let zoomedPhotoViewController = segue.destination as? ZoomedPhotoViewController,
id == "zooming" {
zoomedPhotoViewController.photoName = photoName
}
}
In Main.storyboard, add a Show Detail segue from Photo Comment View Controller to Zoomed Photo View Controller. With the new segue selected, open the Attributes Inspector and set the Identifier to zooming.
Select the Image View in Photo Comment View Controller, open the Attributes Inspector and check User Interaction Enabled. Drag a Tap Gesture Recognizer onto the Image View, and connect it to openZoomingController(_:).
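If you prefer, the same gesture could be set up in code instead of the storyboard; a hypothetical equivalent in PhotoCommentViewController's viewDidLoad() would be:
// Hypothetical alternative to the storyboard setup above.
imageView.isUserInteractionEnabled = true
let tapGesture = UITapGestureRecognizer(
  target: self,
  action: #selector(openZoomingController(_:))
)
imageView.addGestureRecognizer(tapGesture)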
Now, when you tap an image in Photo Comment View Controller Scene, you'll be taken to the Zoomed Photo View Controller Scene where you can zoom the photo.
Build and run one more time.
Yes, you did it! You've created a Photos app clone: a collection view of images you can select and navigate through by swiping, as well as the ability to zoom the photo content.
Here is the final PhotoScroll project with all of the code from this UIScrollView tutorial.
You’ve delved into many of the interesting things that a scroll view is capable of. If you want to go further, there is an entire video series dedicated to scroll views. Take a look.
Now go make some awesome apps, safe in the knowledge that you’ve got mad scroll view skillz!
If you run into any problems along the way or want to leave feedback about what you've read here, join the discussion in the comments below.
The post UIScrollView Tutorial: Getting Started appeared first on Ray Wenderlich.
Learn about Model-View-Controller (MVC) and the dreaded massive view controller problem and how Model-View-Controller-Networking (MVC-N) can save the day.
The post Video Tutorial: iOS Design Patterns Part 3: MVC-N appeared first on Ray Wenderlich.