
Video Tutorial: Intro to Auto Layout Part 7: Priorities


Modern Core Graphics with Swift: Part 2


FinalApp

Welcome back to our modern Core Graphics with Swift tutorial series!

In the first part of the tutorial series, you learned about drawing lines and arcs and using Xcode’s interactive storyboard features.

In this second part, you’ll delve further into Core Graphics, learning about drawing gradients and manipulating CGContext with transformations.

Core Graphics

You’re now going to leave the comfortable world of UIKit and enter the underworld of Core Graphics.

This image from Apple describes the relevant frameworks conceptually:

2-Architecture

UIKit is the top layer, and it’s also the most approachable. You’ve used UIBezierPath, which is a UIKit wrapper of the Core Graphics CGPath.

One thing to know about lower-layer Core Graphics objects and functions is that they always have the prefix CG, so they're easy to recognize. Another fun fact: CG functions are C functions, so you don't call them with explicit parameter names, which is different from how you call Swift methods.
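
Here's a quick feel for the difference (a standalone sketch, assuming you already have a center point and the current drawing context in hand):

//UIKit wrapper: a Swift initializer with named parameters
let arcPath = UIBezierPath(arcCenter: center,
                           radius: 50,
                           startAngle: 0,
                           endAngle: CGFloat(M_PI),
                           clockwise: true)
 
//Core Graphics: a plain C function - no parameter names
CGContextAddArc(context, center.x, center.y, 50, 0, CGFloat(M_PI), 1)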

Getting Going with Graph View

By the time you’re at the end of this session, you’ll create a graph view that looks like this by using sample historical data:

2-ResultGraphView

Before drawing on the graph view, you’ll set it up in the storyboard and create the code that animates the transition to show the graph view.

The complete view hierarchy will look like this:

2-ViewHierarchy

If you don’t have it already, download a copy of Flo from the first Core Graphics tutorial.

Go to File\New\File…, choose the iOS\Source\Cocoa Touch Class template and click Next. Enter the name GraphView as the class name, choose the subclass UIView and set the language to Swift. Click Next then Create.

Go to Main.storyboard and drag a UIView on to the view controller’s view.

This will contain the Graph and Counter Views, so make it a subview of ViewController’s main view, but place it in front of Counter View.

Your Document Outline should look like this:

2-DocumentOutline

In the Size Inspector, set X=150, Y=50, Width=300, and Height=300:

2-ContainerCoordinates

Create the AutoLayout constraints similarly to how you did in part 1:

  • With the view selected, Control-drag slightly left, and choose Width from the popup menu.
  • Then control-drag slightly up and choose Height from the popup menu.
  • Next, control-drag left from inside the view to outside the view, and choose Center Vertically in Container.
  • Finally, control-drag up from inside the view to outside the view and choose Center Horizontally in Container.

When creating views, it’s often helpful to give the view you’re working with a temporary color, so that you can easily see what you’re doing.

In the Attributes Inspector, color the background yellow.

2-ContainerViewBackground

Drag another UIView onto the yellow view, and make sure that it becomes a subview.

In the Identity Inspector, change the class of the view to GraphView.

2-GraphViewClass

In the Size Inspector, set X=0, Y=25, Width=300, and Height=250:

2-GraphViewCoordinates

In the Document Outline, drag the Counter View to make it a subview of the yellow view, and make sure it’s positioned behind its sibling Graph View.

After moving the Counter View, the auto layout constraints will turn orange. Select the Counter View, and look at the bottom right of the storyboard. Find and click Resolve Auto Layout Issues, and choose Selected Views: Clear Constraints.

2-ClearAutoLayout

It's safe to reset to the default constraints, because the Counter View now sits snugly inside the Container View.

Click the name of the yellow view in the Document Outline slowly twice to rename it, and call it Container View. Your Document Outline should look like this:

Flo2-Outline

The reason you need a Container View is to make an animated transition between the Counter View and the Graph View.

Go to ViewController.swift and add property outlets for the Container and Graph Views:

@IBOutlet weak var containerView: UIView!
@IBOutlet weak var graphView: GraphView!

This creates an outlet for the container view and graph view. Now let’s hook them up to the views you created in the storyboard.

Go back to Main.storyboard and hook up the Graph View and the Container View to the outlets:

Flo2-ConnectGraphViewOutlet

Set up the Animated Transition

Still in Main.storyboard, drag a UITapGestureRecognizer from the Object Library to the Container View in the Document Outline:

Flo2-AddTapGesture

Go to ViewController.swift and add this property to the top of the class:

var isGraphViewShowing = false

This simply marks whether the graph view is currently showing.

Now add the tap method to do the transition:

@IBAction func counterViewTap(gesture:UITapGestureRecognizer?) {
  if (isGraphViewShowing) {
 
    //hide Graph
    UIView.transitionFromView(graphView,
        toView: counterView,
        duration: 1.0,
        options: UIViewAnimationOptions.TransitionFlipFromLeft
          | UIViewAnimationOptions.ShowHideTransitionViews,
        completion:nil)
  } else {
 
    //show Graph
    UIView.transitionFromView(counterView,
      toView: graphView,
      duration: 1.0,
      options: UIViewAnimationOptions.TransitionFlipFromRight
        | UIViewAnimationOptions.ShowHideTransitionViews,
      completion: nil)
  }
  isGraphViewShowing = !isGraphViewShowing
}

UIView.transitionFromView(_:toView:duration:options:completion:) performs a horizontal flip transition. Other available transitions are cross dissolve, vertical flip and curl up or down. The ShowHideTransitionViews constant means the transition shows and hides the views for you, so you don't have to remove the 'from' view to prevent it from being shown.
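
For example, if you preferred a cross dissolve to the flip, the "show graph" call would look like this (just an illustration of the other options; Flo sticks with the flip):

UIView.transitionFromView(counterView,
  toView: graphView,
  duration: 1.0,
  options: UIViewAnimationOptions.TransitionCrossDissolve
    | UIViewAnimationOptions.ShowHideTransitionViews,
  completion: nil)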

Add this code at the end of btnPushButton(_:):

if isGraphViewShowing {
  counterViewTap(nil)
}

This makes it so that if the user presses the plus button while the graph is showing, the display will swing back to show the counter.

Lastly, to get this transition working, go back to Main.storyboard and hook up your tap gesture to the newly added counterViewTap(gesture:):

Flo2-TapGestureConnection

Build and run the application. Currently you’ll see the graph view when you start the app. Later on, you’ll set the graph view hidden, so the counter view will appear first. Tap it, and you’ll see the transition flipping.

2-ViewTransition

Analysis of the Graph View

2-AnalysisGraphView

Remember the Painter’s Model from Part 1? It explains that drawing with Core Graphics is done from the back to the front, so you need an order in mind before you code. For Flo’s graph, that would be:

  1. Gradient background view
  2. Clipped gradient under the graph
  3. The graph line
  4. The circles for the graph points
  5. Horizontal graph lines
  6. The graph labels

Drawing a Gradient

You’ll now draw a gradient in the Graph View’s context.

Go to GraphView.swift and replace the code with:

import UIKit
 
@IBDesignable class GraphView: UIView {
 
  //1 - the properties for the gradient
  @IBInspectable var startColor: UIColor = UIColor.redColor()
  @IBInspectable var endColor: UIColor = UIColor.greenColor()
 
    override func drawRect(rect: CGRect) {
 
      //2 - get the current context
      let context = UIGraphicsGetCurrentContext()
      let colors = [startColor.CGColor, endColor.CGColor]
 
      //3 - set up the color space
      let colorSpace = CGColorSpaceCreateDeviceRGB()
 
      //4 - set up the color stops
      let colorLocations:[CGFloat] = [0.0, 1.0]
 
      //5 - create the gradient
      let gradient = CGGradientCreateWithColors(colorSpace, 
                                                colors, 
                                                colorLocations)
 
      //6 - draw the gradient
      var startPoint = CGPoint.zeroPoint
      var endPoint = CGPoint(x:0, y:self.bounds.height)
      CGContextDrawLinearGradient(context, 
                                  gradient, 
                                  startPoint, 
                                  endPoint, 
                                  0)
    }
}

There are a few things to go over here:

  1. You set up the start and end colors for the gradient as @IBInspectable properties, so that you’ll be able to change them in the storyboard.
  2. CG drawing functions need to know the context in which they will draw, so you use the UIKit method UIGraphicsGetCurrentContext() to obtain the current context. That’s the one that drawRect(_:) draws into.
  3. All contexts have a color space. This could be CMYK or grayscale, but here you’re using the RGB color space.
  4. The color stops describe where the colors in the gradient change over. In this example, you only have two colors, red going to green, but you could have an array of three stops, and have red going to blue going to green. The stops are between 0 and 1, where 0.33 is a third of the way through the gradient.
  5. Create the actual gradient, defining the color space, colors and color stops.
  6. Finally, you draw the gradient. CGContextDrawLinearGradient() takes the following parameters:
    • The CGContext in which to draw
    • The CGGradient with color space, colors and stops
    • The start point
    • The end point
    • Option flags to extend the gradient

The gradient will fill the entire rect of drawRect(_:).
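
As an aside, a three-stop version of steps 4 and 5 might look like this (a standalone sketch that reuses the colorSpace from above; Flo itself keeps two stops):

//red at the top, blue halfway down, green at the bottom
let threeColors = [UIColor.redColor().CGColor,
                   UIColor.blueColor().CGColor,
                   UIColor.greenColor().CGColor]
let threeLocations: [CGFloat] = [0.0, 0.5, 1.0]
let threeStopGradient = CGGradientCreateWithColors(colorSpace,
                                                   threeColors,
                                                   threeLocations)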

Set up Xcode so that you have a side-by-side view of your code and the storyboard using the Assistant Editor, and you’ll see the gradient appear on the Graph View.

2-InitialGradient

In the storyboard, select the Graph View. Then in the Attributes Inspector, change Start Color to RGB(250, 233, 222), and End Color RGB(252, 79, 8):

2-FirstGradient

Now for some clean up duty. In Main.storyboard, select each view in turn, except for the main ViewController view, and set the Background Color to clear color. You don’t need the yellow color any more, and the push button views should have a transparent background too.

Run the application, and you’ll notice the transition is now a lot more stylish.

Clipping areas

When you used the gradient just now, you filled the whole of the view's context area. However, you can also create paths to use as clipping areas rather than drawing them directly.

Go to GraphView.swift, and add this code to the top of drawRect(_:):

let width = rect.width
let height = rect.height
 
//set up background clipping area
var path = UIBezierPath(roundedRect: rect,
    byRoundingCorners: UIRectCorner.AllCorners,
    cornerRadii: CGSize(width: 8.0, height: 8.0))
path.addClip()

This will create a clipping area that constrains the gradient. You’ll use this same trick shortly to draw a second gradient under the graph line.

Build and run the application and see that your graph view has nice, rounded corners:

2-RoundedCorners2

Speed Note: Drawing static views with Core Graphics is generally quick enough, but if your views move around or need frequent redrawing, you should use Core Animation layers. Core Animation is optimized so that the GPU, not the CPU, handles most of the processing. In contrast, the CPU handles the view drawing performed by drawRect(_:).

Instead of using a clipping path, you can create rounded corners using the cornerRadius property of a CALayer, but you should optimize for your situation. For a good lesson on this concept, check out Custom Control Tutorial for iOS and Swift: A Reusable Knob by Mikael Konutgan and Sam Davies, where you’ll use Core Animation to create a custom control.
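
For comparison, the layer-based approach is only a couple of lines (shown here on a hypothetical someView; Flo keeps the clipping path):

//round the corners with Core Animation instead of a clipping path
someView.layer.cornerRadius = 8.0
someView.layer.masksToBounds = true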

Tricky Calculations for Graph Points

Now you’ll take a short break from drawing to make the graph. You’ll plot 7 points; the x-axis will be the ‘Day of the Week’ and the y-axis will be the ‘Number of Glasses Drunk’.

First, set up sample data for the week.

Still in GraphView.swift, at the top of the class, add this property:

//Weekly sample data
var graphPoints:[Int] = [4, 2, 6, 4, 5, 8, 3]

This holds sample data that represents seven days.

Next, add this code to the end of drawRect(_:):

//calculate the x point
 
let margin:CGFloat = 20.0
var columnXPoint = { (column:Int) -> CGFloat in
  //Calculate gap between points
  let spacer = (width - margin*2 - 4) / 
        CGFloat((self.graphPoints.count - 1))
  var x:CGFloat = CGFloat(column) * spacer
  x += margin + 2
  return x
}

The x-axis points consist of 7 equally spaced points. The code above is a closure expression. It could have been written as a separate function, but for small calculations like this, it's reasonable to keep it inline.

columnXPoint takes a column as a parameter, and returns a value where the point should be on the x-axis.

Add the code to calculate the y-axis points to the end of drawRect(_:):

// calculate the y point
 
let topBorder:CGFloat = 60
let bottomBorder:CGFloat = 50
let graphHeight = height - topBorder - bottomBorder
let maxValue = maxElement(graphPoints)
var columnYPoint = { (graphPoint:Int) -> CGFloat in
  var y:CGFloat = CGFloat(graphPoint) / 
          CGFloat(maxValue) * graphHeight
  y = graphHeight + topBorder - y // Flip the graph
  return y
}

columnYPoint is also a closure expression that takes the value from the array for the day of the week as its parameter. It returns the y position, between 0 and the greatest number of glasses drunk.

Because the origin is in the top-left corner and you draw a graph from an origin point in the bottom-left corner, columnYPoint adjusts its return value so that the graph is oriented as you would expect.

Continue by adding line drawing code to the end of drawRect(_:):

// draw the line graph
 
UIColor.whiteColor().setFill()
UIColor.whiteColor().setStroke()
 
//set up the points line
var graphPath = UIBezierPath()
//go to start of line
graphPath.moveToPoint(CGPoint(x:columnXPoint(0), 
                              y:columnYPoint(graphPoints[0])))
 
//add points for each item in the graphPoints array
//at the correct (x, y) for the point
for i in 1..<graphPoints.count {
  let nextPoint = CGPoint(x:columnXPoint(i), 
                          y:columnYPoint(graphPoints[i]))
  graphPath.addLineToPoint(nextPoint)
}
 
graphPath.stroke()

In this block, you create the path for the graph. The UIBezierPath is built up from the x and y points for each element in graphPoints.

The Graph View in the storyboard should now look like this:

2-FirstGraphLine

Now that you verified the line draws correctly, remove this from the end of drawRect(_:):

graphPath.stroke()

That was just so that you could check out the line in the storyboard and verify that the calculations are correct.

A Gradient Graph

You’re now going to create a gradient underneath this path by using the path as a clipping path.

First set up the clipping path at the end of drawRect(_:):

//Create the clipping path for the graph gradient
 
//1 - save the state of the context (commented out for now)
//CGContextSaveGState(context)
 
//2 - make a copy of the path
var clippingPath = graphPath.copy() as UIBezierPath
 
//3 - add lines to the copied path to complete the clip area
clippingPath.addLineToPoint(CGPoint(
       x: columnXPoint(graphPoints.count - 1),
       y:height))
clippingPath.addLineToPoint(CGPoint(
       x:columnXPoint(0), 
       y:height))
clippingPath.closePath()
 
//4 - add the clipping path to the context
clippingPath.addClip()
 
//5 - check clipping path - temporary code
UIColor.greenColor().setFill()
let rectPath = UIBezierPath(rect: self.bounds)
rectPath.fill()
//end temporary code

A section-by-section breakdown of the above code:

  1. CGContextSaveGState is commented out for now — you’ll come back to this in a moment once you understand what it does.
  2. Copy the plotted path to a new path that defines the area to fill with a gradient.
  3. Complete the area with the corner points and close the path. This adds the bottom-right and bottom-left points of the graph.
  4. Add the clipping path to the context. When the context is filled, only the clipped path is actually filled.
  5. Fill the context. Remember that rect is the area of the context that was passed to drawRect(_:).

Your Graph View in the storyboard should now look like this:

2-GraphClipping

Next, you’ll replace that lovely green with a gradient you create from the colors used for the background gradient.

Remove the temporary code with the green color fill from the end of drawRect(_:), and add this code instead:

let highestYPoint = columnYPoint(maxValue)
startPoint = CGPoint(x:margin, y: highestYPoint)
endPoint = CGPoint(x:margin, y:self.bounds.height)
 
CGContextDrawLinearGradient(context, gradient, startPoint, endPoint, 0)
//CGContextRestoreGState(context)

In this block, you find the highest number of glasses drunk and use that as the starting point of the gradient.

You can’t fill the whole rect the same way you did with the green color. The gradient would fill from the top of the context instead of from the top of the graph, and the desired gradient wouldn’t show up.

Take note of the commented-out CGContextRestoreGState() — you'll remove the comments after you draw the circles for the plot points.

At the end of drawRect(_:), add this:

//draw the line on top of the clipped gradient
graphPath.lineWidth = 2.0
graphPath.stroke()

This code draws the original path.

Your graph is really taking shape now:

2-SecondGraphLine

At the end of drawRect(_:), add this:

//Draw the circles on top of graph stroke
for i in 0..<graphPoints.count {
  var point = CGPoint(x:columnXPoint(i), y:columnYPoint(graphPoints[i]))
  point.x -= 5.0/2
  point.y -= 5.0/2
 
  let circle = UIBezierPath(ovalInRect: 
               CGRect(origin: point, 
                        size: CGSize(width: 5.0, height: 5.0)))
  circle.fill()
}

This code draws the plot points and is nothing new. It fills a circle path for each of the elements in the array at the calculated x and y points.

2-GraphWithFlatCircles

Hmmm…but what’s showing up in the storyboard are not nice, round circle points! Whaaaaaaat? Press on, it’ll all come together.

Context States

Graphics contexts can save states. When you set many context properties, such as fill color, transformation matrix, color space or clip region, you’re actually setting them for the current graphics state.

You can save a state by using CGContextSaveGState(), which pushes a copy of the current graphics state onto the state stack. You can also make changes to context properties, but when you call CGContextRestoreGState(), the original state is taken off the stack and the context properties revert.
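
In miniature, the pattern looks like this (a generic sketch, not Flo-specific, assuming context is the current graphics context):

//push a copy of the current graphics state
CGContextSaveGState(context)
 
UIColor.blueColor().setFill()            //changes apply to the copied state
CGContextTranslateCTM(context, 20, 20)
//...draw something with the blue fill and shifted origin...
 
//pop: the fill color and CTM revert to what they were at the save
CGContextRestoreGState(context)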

Still in GraphView.swift, in drawRect(_:), uncomment the CGContextSaveGState() that takes place before creating the clipping path, and uncomment the CGContextRestoreGState() that takes place after the clipping path has been used.

By doing this, you:

  1. Push the original graphics state onto the stack with CGContextSaveGState().
  2. Add the clipping path to a new graphics state.
  3. Draw the gradient within the clipping path.
  4. Restore the original graphics state with CGContextRestoreGState() — this was the state before you added the clipping path.

Your graph line and circles should be much clearer now:

2-GraphWithCircles

At the end of drawRect(_:), add the code to draw the three horizontal lines:

//Draw horizontal graph lines on the top of everything
var linePath = UIBezierPath()
 
//top line
linePath.moveToPoint(CGPoint(x:margin, y: topBorder))
linePath.addLineToPoint(CGPoint(x: width - margin, 
                                y:topBorder))
 
//center line
linePath.moveToPoint(CGPoint(x:margin, 
                             y: graphHeight/2 + topBorder))
linePath.addLineToPoint(CGPoint(x:width - margin, 
                                y:graphHeight/2 + topBorder))
 
//bottom line
linePath.moveToPoint(CGPoint(x:margin, 
                             y:height - bottomBorder))
linePath.addLineToPoint(CGPoint(x:width - margin, 
                                y:height - bottomBorder))
let color = UIColor(white: 1.0, alpha: 0.3)
color.setStroke()
 
linePath.lineWidth = 1.0
linePath.stroke()

Nothing in this code is new. All you’re doing is moving to a point and drawing a horizontal line.

2-GraphWithAxisLines

Adding the Graph Labels

Now you’ll add the labels to make the graph user-friendly.

Go to ViewController.swift and add these outlet properties for the labels:

//Label outlets
@IBOutlet weak var averageWaterDrunk: UILabel!
@IBOutlet weak var maxLabel: UILabel!

This adds outlets for the two labels that you want to dynamically change text for (the average water drunk label, and the max water drunk label).

Now go to Main.storyboard and add the following UILabels as subviews of the Graph View:

  1. “Water Drunk”
  2. “Average:”
  3. “2” (averageWaterDrunk)
  4. “99” (maxLabel). Right aligned
  5. “0”. Right aligned
  6. Labels for each day of a week — the text for each will be changed in code. Center aligned.

Shift-select all the labels, and then change the fonts to custom Avenir Next Condensed, Medium style.

2-LabelledGraph

Connect averageWaterDrunk and maxLabel to the corresponding labels in Main.storyboard. Control-drag from View Controller to the correct label and choose the outlet from the pop up:

2-ConnectLabels

For each weekday label, go into the Attributes Inspector and change the View’s Tag corresponding to the day, so that the first one is 1 and the last one is 7.

2-LabelViewTag

Now that you’ve finished setting up the graph view, in Main.storyboard select the Graph View and check Hidden so the graph doesn’t appear when the app first runs.

2-GraphHidden

Go to ViewController.swift and add this method to set up the labels:

func setupGraphDisplay() {
 
  //Use 7 days for graph - can use any number,
  //but labels and sample data are set up for 7 days
  let noOfDays:Int = 7
 
  //1 - replace last day with today's actual data
  graphView.graphPoints[graphView.graphPoints.count-1] = counterView.counter
 
  //2 - indicate that the graph needs to be redrawn
  graphView.setNeedsDisplay()
 
  maxLabel.text = "\(maxElement(graphView.graphPoints))"
 
  //3 - calculate average from graphPoints
  let average = graphView.graphPoints.reduce(0, +) 
            / graphView.graphPoints.count
  averageWaterDrunk.text = "\(average)"
 
  //set up labels
  //day of week labels are set up in storyboard with tags
  //today is the last day of the array, so you need to go backwards
 
  //4 - get today's day number
  let dateFormatter = NSDateFormatter()
  let calendar = NSCalendar.currentCalendar()
  let componentOptions:NSCalendarUnit = .WeekdayCalendarUnit
  let components = calendar.components(componentOptions, 
                                       fromDate: NSDate())
  var weekday = components.weekday
 
  let days = ["S", "S", "M", "T", "W", "T", "F"]
 
  //5 - set up the day name labels with correct day
  for i in reverse(1...days.count) {
    let labelView = graphView.viewWithTag(i) as UILabel
    if weekday == 7 {
      weekday = 0
    }
    labelView.text = days[weekday--]
    if weekday < 0 {
      weekday = days.count - 1
    }
  }
}

This looks a little burly, but it’s required to set up the calendar and retrieve the current day of the week. Take it in sections:

  1. You set today's data as the last item in the graph's data array. In the final project, which you can download at the end of Part 3, you'll expand on this by replacing it with 60 days of sample data, and you'll include a method that splits out the last x number of days from an array, but that is beyond the scope of this session. :]
  2. Redraws the graph in case there are any changes to today's data.
  3. Here you use Swift's reduce to calculate the average glasses drunk for the week; it's a very useful method to sum up all the elements in an array.

  Note: This Swift Functional Programming Tutorial explains functional programming in some depth.

  4. This section puts the current day's number from the iOS calendar into the property weekday.

  Note: Dates can get complicated. David Ronnqvist's Working with Dates explains everything.

  5. This loop just goes from 7 to 1, gets the view with the corresponding tag number and extracts the correct day title from the days array.

Still in ViewController.swift, call this new method from counterViewTap(_:). In the else part of the conditional, where the comment says show graph, add this code:

setupGraphDisplay()

Run the application, and click the counter. Hurrah! The graph swings into view in all its glory!

2-GraphFinished

Mastering the Matrix

Your app is looking really sharp! The counter view you created in part one could be improved though, for example by adding markings to indicate each glass to be drunk:

2-Result

Now that you’ve had a bit of practice with CG functions, you’ll use them to rotate and translate the drawing context.

Notice that these markers radiate from the center:

2-LinesExpanded

As well as drawing into a context, you have the option to manipulate the context by rotating, scaling and translating the context’s transformation matrix.

At first, this can seem confusing, but after you work through these exercises, it’ll make more sense. The order of the transformations is important, so first I’ll outline what you’ll be doing with diagrams.

The following diagram is the result of rotating the context and then drawing a rectangle in the center of the context.

2-RotatedContext

The black rectangle is drawn before rotating the context, then the green one, then the red one. Two things to notice:

  1. The context is rotated at the top left (0,0)
  2. The rectangle is still being drawn in the center of the context, but after the context has been rotated.
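
Here's that idea as code (a standalone sketch with made-up sizes, not part of Flo, assuming a current context):

//the "center" rectangle of a 160 x 160 context
let marker = CGRect(x: 60, y: 75, width: 40, height: 10)
 
UIColor.blackColor().setFill()
UIBezierPath(rect: marker).fill()                //black: no rotation yet
 
CGContextRotateCTM(context, CGFloat(M_PI) / 8)   //rotate about (0,0)
UIColor.greenColor().setFill()
UIBezierPath(rect: marker).fill()                //green: same rect, rotated context
 
CGContextRotateCTM(context, CGFloat(M_PI) / 8)   //rotate further
UIColor.redColor().setFill()
UIBezierPath(rect: marker).fill()                //red: rotated twice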

When you’re drawing the counter view’s markers, you’ll translate the context first, then you’ll rotate it.

2-RotatedTranslatedContext

In this diagram, the rectangle marker is at the very top left of the context. The blue lines outline the translated context, then the context rotates (red dashed line) and is translated again.

When the red rectangle marker is finally drawn into the context, it’ll appear in the view at an angle.

After the context is rotated and translated to draw the red marker, it needs to be reset to the center so that the context can be rotated and translated again to draw the green marker.

Just as you saved the context state with the clipping path in the Graph View, you’ll save and restore the state with the transformation matrix each time you draw the marker.

Go to CounterView.swift and add this code to the end of drawRect(_:) to add the markers to the counter:

//Counter View markers
 
let context = UIGraphicsGetCurrentContext()
 
//1 - save original state
CGContextSaveGState(context)
outlineColor.setFill()
 
let markerWidth:CGFloat = 5.0
let markerSize:CGFloat = 10.0
 
//2 - the marker rectangle positioned at the top left
var markerPath = UIBezierPath(rect: 
       CGRect(x: -markerWidth/2, 
       y: 0, 
       width: markerWidth, 
       height: markerSize))
 
//3 - move top left of context to the previous center position
CGContextTranslateCTM(context, 
                      rect.width/2, 
                      rect.height/2)
 
for i in 1...NoOfGlasses {
  //4 - save the centred context
  CGContextSaveGState(context)
 
  //5 - calculate the rotation angle
  var angle = arcLengthPerGlass * CGFloat(i) + startAngle - π/2
 
  //rotate and translate
  CGContextRotateCTM(context, angle)
  CGContextTranslateCTM(context, 
                        0, 
                        rect.height/2 - markerSize)
 
  //6 - fill the marker rectangle
  markerPath.fill()
 
  //7 - restore the centred context for the next rotate
  CGContextRestoreGState(context)
}
 
//8 - restore the original state in case of more painting
CGContextRestoreGState(context)

Here’s what you’ve just done:

  1. Before manipulating the context’s matrix, you save the original state of the matrix.
  2. Define the position and shape of the path — but you’re not drawing it yet.
  3. Move the context so that rotation happens around the context’s original center. (Blue lines in the previous diagram.)
  4. For each mark, you first save the centered context state.
  5. Using the individual angle previously calculated, you determine the angle for each marker and rotate and translate the context.
  6. Draw the marker rectangle at the top left of the rotated and translated context.
  7. Restore the centered context’s state.
  8. Restore the original state of the context that had no rotations or translations.

Whew! Nice job hanging in there for that. Now build and run the application, and admire Flo’s beautiful and informative UI:

2-FinalPart2

Where to Go From Here?

Here is Flo, complete with all of the code you’ve developed so far.

At this point, you’ve learned how to draw paths, gradients and how to change the context’s transformation matrix.

In the third and final part of this Core Graphics tutorial, you’ll create a patterned background and draw a vector medal image.

If you have any questions or comments, please join me in the forum below.

The post Modern Core Graphics with Swift: Part 2 appeared first on Ray Wenderlich.

Modern Core Graphics with Swift: Part 3


FinalApp

Welcome back to the third and final part of the Core Graphics tutorial series! Flo, your water drinking tracking app, is ready for its final evolution, which you’ll make happen with Core Graphics.

In part one, you drew three custom-shaped controls with UIKit. Then in part two, you created a graph view to show the user’s water consumption over a week, and you explored transforming the context transformation matrix (CTM).

In this third and final part, you’ll take Flo to its final form. Specifically, you’ll:

  • Create a repeating pattern for the background.
  • Draw a medal from start to finish to award the users for successfully drinking eight glasses of water a day.

If you don’t have it already, download a copy of the Flo project from the second part of this series.

Background Repeating Pattern

Your mission in this section is to use UIKit’s pattern methods to create this background pattern:

3-FinalBackground

Note: If you need to optimize for speed, then work through Core Graphics Tutorial: Patterns which demonstrates a basic way to create patterns with Objective-C and Core Graphics. For most purposes, like when the background is only drawn once, UIKit’s easier wrapper methods should be acceptable.

Go to File\New\File… and select the iOS\Source\Cocoa Touch Class template to create a class called BackgroundView, subclassing UIView. Click Next and then Create.

Go to Main.storyboard, select the main view of ViewController, and change the class to BackgroundView in the Identity Inspector.

3-BackgroundViewStoryboard3

Set up BackgroundView.swift and Main.storyboard so they are side-by-side, using the Assistant Editor.

Replace the code in BackgroundView.swift with:

import UIKit
 
@IBDesignable
 
class BackgroundView: UIView {
 
  //1 
  @IBInspectable var lightColor: UIColor = UIColor.orangeColor()
  @IBInspectable var darkColor: UIColor = UIColor.yellowColor()
  @IBInspectable var patternSize:CGFloat = 200
 
  override func drawRect(rect: CGRect) {
    //2
    let context = UIGraphicsGetCurrentContext()
 
    //3
    CGContextSetFillColorWithColor(context, darkColor.CGColor)
 
    //4
    CGContextFillRect(context, rect)
  }
}

The background view of your storyboard should now be yellow. More detail on the above code:

  1. lightColor and darkColor have @IBInspectable attributes so it’s easier to configure background colors later on. You’re using orange and yellow as temporary colors, just so you can see what’s happening. patternSize controls the size of the repeating pattern. It’s initially set to large, so it’s easy to see what’s happening.
  2. UIGraphicsGetCurrentContext() gives you the view’s context and is also where drawRect(_:) draws.
  3. Use the Core Graphics method CGContextSetFillColorWithColor() to set the current fill color of the context. Notice that you need to use CGColor, a property of darkColor when using Core Graphics.
  4. Instead of setting up a rectangular path, CGContextFillRect() fills the entire context with the current fill color.

You’re now going to draw these three orange triangles using UIBezierPath(). The numbers correspond to the points in the following code:

3-GridPattern

Still in BackgroundView.swift, add this code to the end of drawRect(_:):

let drawSize = CGSize(width: patternSize, height: patternSize)
 
//insert code here
 
 
let trianglePath = UIBezierPath()
//1
trianglePath.moveToPoint(CGPoint(x:drawSize.width/2, 
                                 y:0)) 
//2
trianglePath.addLineToPoint(CGPoint(x:0, 
                                    y:drawSize.height/2)) 
//3
trianglePath.addLineToPoint(CGPoint(x:drawSize.width, 
                                    y:drawSize.height/2)) 
 
//4
trianglePath.moveToPoint(CGPoint(x: 0, 
                                 y: drawSize.height/2)) 
//5
trianglePath.addLineToPoint(CGPoint(x: drawSize.width/2, 
                                    y: drawSize.height)) 
//6
trianglePath.addLineToPoint(CGPoint(x: 0, 
                                    y: drawSize.height)) 
 
//7
trianglePath.moveToPoint(CGPoint(x: drawSize.width, 
                                 y: drawSize.height/2)) 
//8
trianglePath.addLineToPoint(CGPoint(x:drawSize.width/2, 
                                    y:drawSize.height)) 
//9
trianglePath.addLineToPoint(CGPoint(x: drawSize.width, 
                                    y: drawSize.height)) 
 
lightColor.setFill()
trianglePath.fill()

Notice how you use one path to draw three triangles. moveToPoint(_:) is just like lifting your pen from the paper when you’re drawing and moving it to a new spot.

Your storyboard should now have an orange and yellow image at the top left of your background view.

So far, you’ve drawn directly into the view’s drawing context. To be able to repeat this pattern, you need to create an image outside of the context, and then use that image as a pattern in the context.

Find the spot near the top of drawRect(_:), just after this line:

let drawSize = CGSize(width: patternSize, height: patternSize)

Add the following code where it conveniently says Insert code here:

UIGraphicsBeginImageContextWithOptions(drawSize, true, 0.0)
let drawingContext = UIGraphicsGetCurrentContext()
 
//set the fill color for the new context
darkColor.setFill()
CGContextFillRect(drawingContext,
      CGRectMake(0, 0, drawSize.width, drawSize.height))

Hey! Those orange triangles disappeared from the storyboard. Where’d they go?

UIGraphicsBeginImageContextWithOptions() creates a new context and sets it as the current drawing context, so you’re now drawing into this new context. The parameters of this method are:

  • The size of the context.
  • Whether the context is opaque — if you need transparency, then this needs to be false.
  • The scale of the context. If you’re drawing to a retina screen, this should be 2.0, and if to an iPhone 6 Plus, it should be 3.0. However, this uses 0.0, which ensures the correct scale for the device is automatically applied.

Then you used UIGraphicsGetCurrentContext() to get a reference to this new context.

You then filled the new context with yellow. You could have let the original background show through by setting the context opacity to false, but it’s faster to draw opaque contexts than it is to draw transparent, and that’s argument enough to go opaque.

Add this code to the end of drawRect(_:):

let image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()

This extracts a UIImage from the current context. When you end the current context with UIGraphicsEndImageContext(), the drawing context reverts to the view’s context, so any further drawing in drawRect(_:) happens in the view.

To draw the image as a repeated pattern, add this code to the end of drawRect(_:):

UIColor(patternImage: image).setFill()
CGContextFillRect(context, rect)

This creates a new UIColor by using an image as a color instead of a solid color.

Build and run the app. You should now have a rather bright background for your app. :]

3-BoldBackground2

Go to Main.storyboard, select the background view, and in the Attributes Inspector change the @IBInspectable values to the following:

  • Light Color: RGB(255, 255, 242)
  • Dark Color: RGB(223, 255, 247)
  • Pattern Size: 30

3-BackgroundColors2

Experiment a little more with drawing background patterns. See if you can get a polka dot pattern as a background instead of the triangles.

And of course, you can substitute your own non-vector images as repeating patterns.
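
If you get stuck on the polka dot challenge, here's one possible shape to swap in for the triangle path (a sketch that reuses the drawSize and lightColor from the drawRect(_:) you just wrote):

//a single dot centered in the tile - the repeating pattern does the rest
let dotDiameter = drawSize.width / 2
let dotPath = UIBezierPath(ovalInRect:
    CGRect(x: (drawSize.width - dotDiameter) / 2,
           y: (drawSize.height - dotDiameter) / 2,
           width: dotDiameter,
           height: dotDiameter))
lightColor.setFill()
dotPath.fill()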

Drawing Images

In the final stretch of this tutorial, you’ll make a medal to handsomely reward users for drinking enough water. This medal will appear when the counter reaches the target of eight glasses.

3-MedalFinal

I know that’s certainly not a museum-worthy piece of art, so please know that I won’t be offended if you improve it, or even take it to the next level by drawing a trophy instead of a medal. ;]

Instead of using @IBDesignable, you’ll draw it in a Swift Playground, and then copy the code to a UIImageView subclass. Though interactive storyboards are often useful, they have limitations: they can only render relatively simple drawing code, and storyboards often time out when you create complex designs.

In this particular case, you only need to draw the image once when the user drinks eight glasses of water. If the user never reaches the target, there’s no need to make a medal.

Once drawn, it also doesn’t need to be redrawn with drawRect(_:) and setNeedsDisplay().

Time to put the brush to the canvas. First return Xcode to single viewing, rather than side-by-side, by clicking the Standard Editor icon:

3-StandardEditor

Go to File\New\File… and choose the iOS Playground template. Click Next, name the playground MedalDrawing and then click Create.

Replace the playground code with:

import UIKit
 
let size = CGSize(width: 120, height: 200)
 
UIGraphicsBeginImageContextWithOptions(size, false, 0.0)
let context = UIGraphicsGetCurrentContext()
 
 
 
//This code must always be at the end of the playground
let image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()

This creates a drawing context, just as you did for the patterned image.

Take note of these last two lines; you always need them at the bottom of the playground so you can preview the image in the playground.

Next, in the gray results column click the + button to the right of this code:

let image = UIGraphicsGetImageFromCurrentImageContext()

3-PlaygroundImage2

This opens a new side-by-side pane so you can see the image change with every stroke. If you don’t get the expected view in the right pane, click the breadcrumb trail at the top of the pane, and set it to Timeline\MedalDrawing.playground

3-TimelineBreadcrumbs

It’s often best to do a sketch to wrap your head around the order you’ll need to draw the elements — look at the “masterpiece” I made while conceptualizing this tutorial:

3-Sketch

This is the order to draw the medal:

  1. The back ribbon (red)
  2. The medallion (gold gradient)
  3. The clasp (dark gold)
  4. The front ribbon (blue)
  5. The number 1 (dark gold)

Remember to keep the last two lines of the playground (where you extract the image from the context), and add all of the following drawing code before those lines.

First, set up the non-standard colors you need:

//Gold colors
let darkGoldColor = UIColor(red: 0.6, green: 0.5, blue: 0.15, alpha: 1.0)
let midGoldColor = UIColor(red: 0.86, green: 0.73, blue: 0.3, alpha: 1.0)
let lightGoldColor = UIColor(red: 1.0, green: 0.98, blue: 0.9, alpha: 1.0)

This should all look familiar by now. Notice that the colors appear in the right margin of the playground as you declare them.

Add the drawing code for the red part of the ribbon:

//Lower Ribbon
var lowerRibbonPath = UIBezierPath()
lowerRibbonPath.moveToPoint(CGPointMake(0, 0))
lowerRibbonPath.addLineToPoint(CGPointMake(40,0))
lowerRibbonPath.addLineToPoint(CGPointMake(78, 70))
lowerRibbonPath.addLineToPoint(CGPointMake(38, 70))
lowerRibbonPath.closePath()
UIColor.redColor().setFill()
lowerRibbonPath.fill()

Nothing too new here, just creating a path and filling it. You should see the red path appear in the right hand pane.

Add the code for the clasp:

//Clasp
 
var claspPath = UIBezierPath(roundedRect: 
                           CGRectMake(36, 62, 43, 20), 
                           cornerRadius: 5)
claspPath.lineWidth = 5
darkGoldColor.setStroke()
claspPath.stroke()

Here you make use of UIBezierPath(roundedRect:) with rounded corners by using the cornerRadius parameter. The clasp should draw in the right pane.

Add the code for the medallion:

//Medallion
 
var medallionPath = UIBezierPath(ovalInRect: 
                    CGRect(origin: CGPointMake(8, 72), 
                             size: CGSizeMake(100, 100)))
//CGContextSaveGState(context)
//medallionPath.addClip()
let gradient = CGGradientCreateWithColors(
                      CGColorSpaceCreateDeviceRGB(), 
                      [darkGoldColor.CGColor, 
                       midGoldColor.CGColor, 
                       lightGoldColor.CGColor],
                      [0, 0.51, 1])
CGContextDrawLinearGradient(context,
                            gradient, 
                            CGPointMake(40, 40), 
                            CGPointMake(40,162), 
                             0)
//CGContextRestoreGState(context)

Notice the commented-out lines. They’re left commented for now so you can see how the gradient is drawn before it gets clipped:

3-SquareGradient

To put the gradient on a slant, so that it goes from top-left to bottom-right, change the end x coordinate of the gradient. Alter the CGContextDrawLinearGradient() code to:

CGContextDrawLinearGradient(context,
                            gradient, 
                            CGPointMake(40, 40), 
                            CGPointMake(100,160), 
                             0)

3-SkewedGradient

Now uncomment those three lines in the medallion drawing code to create a clipping path to constrain the gradient within the medallion’s circle.

Just as you did when drawing the graph in part two, you save the context’s drawing state before adding the clipping path, and restore it after the gradient is drawn so that the context is no longer clipped.

3-ClippedGradient

To draw the solid internal line of the medal, use the medallion’s circle path, but scale it before drawing. Instead of transforming the whole context, you’ll just apply the transform to one path.

Add this code after the medallion drawing code:

//Create a transform
//Scale it, and translate it right and down
var transform = CGAffineTransformMakeScale(0.8, 0.8)
transform = CGAffineTransformTranslate(transform, 15, 30)
 
medallionPath.lineWidth = 2.0
 
//apply the transform to the path
medallionPath.applyTransform(transform)
medallionPath.stroke()

3-MedalOutline

This scales the path down to 80 percent of its original size, and then translates the path to keep it centered within the gradient view.

Add the upper ribbon drawing code after the internal line code:

//Upper Ribbon
 
var upperRibbonPath = UIBezierPath()
upperRibbonPath.moveToPoint(CGPointMake(68, 0))
upperRibbonPath.addLineToPoint(CGPointMake(108, 0))
upperRibbonPath.addLineToPoint(CGPointMake(78, 70))
upperRibbonPath.addLineToPoint(CGPointMake(38, 70))
upperRibbonPath.closePath()
 
UIColor.blueColor().setFill()
upperRibbonPath.fill()

This is very similar to the code you added for the lower ribbon – making a bezier path and filling it.

3-UpperRibbon

The last step is to draw the number one on the medal. Add this code after //Upper Ribbon:

//Number One
 
//Must be NSString to be able to use drawInRect()
let numberOne = "1"
let numberOneRect = CGRectMake(47, 100, 50, 50)
let font = UIFont(name: "Academy Engraved LET", size: 60)
let textStyle = NSMutableParagraphStyle.defaultParagraphStyle()
let numberOneAttributes = [
  NSFontAttributeName: font!,
  NSForegroundColorAttributeName: darkGoldColor]
numberOne.drawInRect(numberOneRect, 
                     withAttributes:numberOneAttributes)

Here you define a String with text attributes, and draw it into the drawing context using drawInRect(_:).

3-NumberOne

Looking good!

You’re getting close, but it’s looking a little two-dimensional — it would be nice to have some drop shadows.

Shadows

To create a shadow, you need three elements: the color, the offset (distance and direction of the shadow) and the blur.

At the top of the playground, after defining the gold colors but just before the //Lower Ribbon, insert this shadow code:

//Add Shadow
let shadow:UIColor = UIColor.blackColor().colorWithAlphaComponent(0.80)
let shadowOffset = CGSizeMake(2.0, 2.0)
let shadowBlurRadius: CGFloat = 5
 
CGContextSetShadowWithColor(context, 
                            shadowOffset, 
                            shadowBlurRadius, 
                            shadow.CGColor)

Okay, that makes a shadow, but the result is probably not what you pictured. Why is that?

3-MessyShadows

That’s because the shadow code creates a separate shadow for each object you draw into the context.

3-IndividualShadows

Ah-ha! Your medal comprises five objects. No wonder it looks a little fuzzy.

Fortunately, it’s pretty easy to fix. Simply group drawing objects with a transparency layer, and you’ll only draw one shadow for the whole group.

3-GroupedShadow

Add the code to make the group after the shadow code. Start with this:

CGContextBeginTransparencyLayer(context, nil)

When you begin a group you also need to end it, so add this next block at the end of the playground, but still before retrieving the final image:

CGContextEndTransparencyLayer(context)

Now you’ll have a completed medal image with clean, tidy shadows:

3-MedalFinal

That completes the playground code, and you have a medal to show for it. :]

Image View Using Core Graphics Image

Create a new file for the Image View.

Click File\New\File… and choose the Cocoa Touch Class template. Click Next, name the class MedalView and make it a subclass of UIImageView. Click Next, then click Create.

Go to Main.storyboard and add a UIImageView as a subview of Counter View. Select the UIImageView, and in the Identity Inspector change the class to MedalView.

3-MedalViewClass

In the Size Inspector, give the Image View the coordinates:

3-MedalViewCoordinates

In the Attributes Inspector, change Image Mode to Aspect Fit, so that the image automatically resizes to fit the view.

3-MedalAspectFit

Go to MedalView.swift and add a method to create the medal:

func createMedalImage() -> UIImage {
  println("creating Medal Image")
 
}

This makes a log so that you know when the image is being created.

Go to MedalDrawing playground, highlight and copy the entire code except for the initial import UIKit.

Go back to MedalView.swift and paste the playground code into createMedalImage().

At the end of createMedalImage(), add:

return image

That should squash the compile error.

At the top of the class, add a property to hold the medal image:

lazy var medalImage:UIImage = self.createMedalImage()

The lazy declaration modifier means that the medal image code, which is computationally intensive, only draws when necessary. Hence, if the user never records drinking eight glasses, the medal drawing code will never run.
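
If lazy is new to you, this tiny standalone example (nothing to do with Flo) shows the behavior:

class Example {
  lazy var expensive: [Int] = {
    println("building the array")   //runs only on first access
    return Array(1...1_000_000)
  }()
}
 
let example = Example()             //nothing printed yet
let first = example.expensive       //"building the array" prints now, result is cached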

Add a method to show the medal:

func showMedal(show:Bool) {
  if show {
    image = medalImage
  } else {
    image = nil
  }
}

Go to ViewController.swift and add an outlet at the top of the class:

@IBOutlet weak var medalView: MedalView!

Go to Main.storyboard and connect the new MedalView to this outlet.

Go back to ViewController.swift and add this method to the class:

func checkTotal() {
  if counterView.counter >= 8 {
    medalView.showMedal(true)
  } else {
    medalView.showMedal(false)
  }
}

This shows the medal if you drink enough water for the day.

Call this method at the end of both viewDidLoad() and btnPushButton(_:):

checkTotal()

Build and run the application. It should look like this:

3-CompletedApp

In the debug console, you’ll see that the creating Medal Image log only appears when the counter reaches eight and the medal is displayed, because medalImage is declared lazy.

Where to Go From Here?

You’ve come a long way in this epic tutorial series. You’ve mastered the basics of Core Graphics: drawing paths, creating patterns and gradients, and transforming the context. To top it all off, you learned how to put it all together in a useful app.

Download the complete version of Flo right here. This version also includes extra sample data and radial gradients to give the buttons a nice UI touch so they respond when pressed.

I hope you enjoyed making Flo, and that you’re now able to make some stunning UIs using nothing but Core Graphics and UIKit! If you have any questions, comments, or you want to hash out how to draw a trophy instead of a medal, please join the forum discussion below.

The post Modern Core Graphics with Swift: Part 3 appeared first on Ray Wenderlich.

Video Tutorial: Intro to Auto Layout Part 8: Constraints in Code

Two New Swift Books Coming Soon!


Two new Swift books right around the corner!

This is just a quick heads up that two brand new Swift books are coming to raywenderlich.com soon!

The first you know about already – WatchKit by Tutorials. This is our new book that teaches you everything you need to know to make Apple Watch apps – we’ve been working hard on this for months and are just about wrapped up.

The second you may have heard some rumors about – iOS Animations by Tutorials. We have been working on this for months but have kept it a secret, so we could give a free advance copy to everyone who attended RWDevCon.

Wondering when these will be available? Just around the corner!

  • iOS Animations by Tutorials: Monday Feb 16: iOS Animations by Tutorials (both print and PDF versions) will be for sale on Monday. This book teaches you how to create delightful animations in Swift – from beginning to advanced topics.
  • WatchKit by Tutorials: Monday Feb 23: WatchKit by Tutorials (PDF version) will be available for download in a little over a week! This will give you plenty of time to read and get your apps ready for the Apple Watch launch. The print version will come sometime after WatchKit is out of beta.

So check back on Monday for the big launch celebration – we have lots of exciting things planned! :]

The post Two New Swift Books Coming Soon! appeared first on Ray Wenderlich.

Implementing Tesseract OCR in iOS


Code your way into his/her heart this Valentine’s Day!

You’ve undoubtedly seen it before… It’s widely used to process everything from scanned documents, to the handwritten scribbles on your tablet PC, to the Word Lens technology Google recently added to their Translate app. And today you’ll learn to use it in your very own iPhone app! Pretty neat, huh?

So… what is it?

What is OCR?

Optical Character Recognition, or OCR, is the process of electronically extracting text from images and reusing it in a variety of ways such as document editing, free-text searches, or compression.

In this tutorial, you’ll learn how to use Tesseract, an open source OCR engine maintained by Google.

Introducing Tesseract

Tesseract OCR is quite powerful, but does have the following limitations:

  • Unlike some OCR engines (like those used by the U.S. Postal Service to sort mail), Tesseract is unable to recognize handwriting and is limited to about 64 fonts in total.
  • Tesseract requires a bit of preprocessing to improve the OCR results; images need to be scaled appropriately, have as much image contrast as possible, and have horizontally-aligned text (a small scaling sketch appears a little further down).
  • Finally, Tesseract OCR only works on Linux, Windows, and Mac OS X.

Wait, WHAT?

Uh oh…how are you going to use this in iOS? Luckily, there’s an Objective-C wrapper for Tesseract OCR, which can also be used in Swift and iOS. Don’t worry, this Swift-compatible version is the one included in the starter package!

Phew! :]
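
As a taste of the preprocessing mentioned above, here's a rough sketch of scaling an image down before handing it to Tesseract. The helper name and the 640-point cap are illustrative assumptions, not part of the starter project:

func scaledImageForOCR(image: UIImage, maxDimension: CGFloat = 640) -> UIImage {
  //work out a size that fits within maxDimension while keeping the aspect ratio
  var scaledSize = CGSize(width: maxDimension, height: maxDimension)
  if image.size.width > image.size.height {
    scaledSize.height = maxDimension * image.size.height / image.size.width
  } else {
    scaledSize.width = maxDimension * image.size.width / image.size.height
  }
 
  //redraw the image into the smaller context and extract the result
  UIGraphicsBeginImageContext(scaledSize)
  image.drawInRect(CGRect(origin: CGPointZero, size: scaledSize))
  let scaledImage = UIGraphicsGetImageFromCurrentImageContext()
  UIGraphicsEndImageContext()
  return scaledImage
}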

The App: Love In A Snap

You didn’t think the team here at Ray Wenderlich would let you down this upcoming Valentine’s Day, did you? Of course not! We’ve got your back. We’ve managed to figure out the sure-fire way to impress your true heart’s desire. And you’re about to build the app to make it happen.

U + OCR = LUV

You’ll work on the Love In A Snap app, which lets you take a picture of a love poem and “make it your own” by replacing the name of the original poet’s muse with the name of the object of your own affection. Brilliant! Get ready to impress.

Getting Started

Download the starter project package here and extract it to a convenient location.

The archive contains the following folders:

  • LoveInASnap: The Xcode starter project for this tutorial.
  • Tesseract Resources: The Tesseract framework and language data.
  • Image Resources: Sample images containing text that you’ll use later.

Looking at your current LoveinASnap.xcodeproj, you’ll notice that ViewController.swift has been pre-populated with a few @IBOutlets and empty @IBAction methods which link the view controller to its pre-made Main.storyboard interface.

Following those empty methods, you’ll see two pre-coded functions which handle showing and removing the view’s activity indicator:

func addActivityIndicator() {
  activityIndicator = UIActivityIndicatorView(frame: view.bounds)
  activityIndicator.activityIndicatorViewStyle = .WhiteLarge
  activityIndicator.backgroundColor = UIColor(white: 0, alpha: 0.25)
  activityIndicator.startAnimating()
  view.addSubview(activityIndicator)
}
 
func removeActivityIndicator() {
  activityIndicator.removeFromSuperview()
  activityIndicator = nil
}

Next there are several more methods which move the elements of the view in order to prevent the keyboard from blocking active text fields:

func moveViewUp() {
  if topMarginConstraint.constant != originalTopMargin {
    return
  }
 
  topMarginConstraint.constant -= 135
  UIView.animateWithDuration(0.3, animations: { () -> Void in
    self.view.layoutIfNeeded()
  })
}
 
func moveViewDown() {
  if topMarginConstraint.constant == originalTopMargin {
    return
  }
 
  topMarginConstraint.constant = originalTopMargin
  UIView.animateWithDuration(0.3, animations: { () -> Void in
    self.view.layoutIfNeeded()
  })
 
}

Finally, the remaining methods appropriately trigger keyboard resignation and calls to moveViewUp() and moveViewDown() depending on user action:

@IBAction func backgroundTapped(sender: AnyObject) {
  view.endEditing(true)
  moveViewDown()
}
 
func textFieldDidBeginEditing(textField: UITextField) {
  moveViewUp()
}
 
@IBAction func textFieldEndEditing(sender: AnyObject) {
  view.endEditing(true)
  moveViewDown()
}
 
func textViewDidBeginEditing(textView: UITextView) {
  moveViewDown()
}

Although important to the app’s UX, these methods are the least relevant to this tutorial and as such, have been pre-populated for you so you can get into the fun coding nitty-gritty right away.

But before writing your first line of code, build and run the starter code; click around a bit in the app to get a feel for the UI. The text view isn’t editable at present, and tapping on the text fields simply calls and dismisses the keyboard. Your job is to bring this app to life!

ocr-first-run

Adding the Tesseract Framework

Inside the starter ZIP file you unpacked should be a Tesseract Resources folder, which contains the Tesseract framework as well as the tessdata folder that holds English and French language recognition data.

Open that folder in the Finder and add TesseractOCR.framework to your project by dragging it to Xcode’s Project navigator. Make sure Copy items if needed is checked.

Adding the Tesseract framework

Finally, click Finish to add the framework.

Now you’ll need to add the tessdata folder as a referenced folder so the internal folder structure is maintained. Drag the tessdata folder from the Finder to the Supporting Files group in the Project navigator.

Again, make sure Copy items if needed is checked and also make sure that the Added Folders option is set to Create folder references.

Adding tessdata as a referenced folder

Finally, click Finish to add the data to your project. You’ll see a blue tessdata folder appear in the Project Navigator; the blue color tells you that the folder is referenced rather than an Xcode group.

Since Tesseract requires libstdc++.6.0.9.dylib and CoreImage.framework you’ll need to link both of these libraries in.

Select the LoveInASnap project file and the LoveInASnap target. In the General tab, scroll down to Linked Frameworks and Libraries.

ocr-frameworks

There should be only one file here: TesseractOCR.framework, which you just added. Click the + button underneath the list. Find both libstdc++.dylib and CoreImage.framework and add them to your project.

ocr-addlibs

Then, in the tab bar at the top of the editor, click Build Settings (it’s next to Build Phases). Find Other Linker Flags using the convenient search bar at the top of the table and append -lstdc++ to any and all existing Other Linker Flags keys. Then, in that same Build Settings table, find C++ Standard Library and make sure it’s set to “Compiler Default”.

Almost there! One last step…

Wipe away those happy tears, Champ! Almost there! One step to go…

Finally, since Tesseract is an Objective-C framework, you’ll need to create an Objective-C bridging header to use the framework in your Swift app.

The easiest way to create an Objective-C bridging header and all the project settings to support it is to add any Objective-C file to your project.

Go to File\New\File…, select iOS\Source\Cocoa Touch Class and then click Next. Enter FakeObjectiveCClass as the Class name and choose NSObject as the subclass. Also, make sure the Language is set to Objective-C! Click Next, then Create.

When prompted Would you like to configure an Objective-C bridging header? select Yes.

You can chuck out those Objective-C classes! (For this tutorial at least…)

You’ve successfully created an Objective-C bridging header. You can delete FakeObjectiveCClass.m and FakeObjectiveCClass.h from the project now, since you really just needed the bridging header. :]

To import the Tesseract framework into your new bridging header, find LoveInASnap-Bridging-Header.h in Project Navigator, open it, then add the following line:

#import <TesseractOCR/TesseractOCR.h>

Now you will have access to the Tesseract framework throughout your project. Build and run your project to make sure everything still compiles properly.

All good? Now you can get started with the fun stuff!

Loading up the Image

Of course, the first thing you’ll need for your OCR app is a mechanism to load up an image to process. The easiest way to do this is to use an instance of UIImagePickerController to select an image from the camera or Photo Library.

Open ViewController.swift and replace the existing stub of takePhoto() with the following implementation:

@IBAction func takePhoto(sender: AnyObject) {
  // 1
  view.endEditing(true)
  moveViewDown()
 
  // 2
  let imagePickerActionSheet = UIAlertController(title: "Snap/Upload Photo",
    message: nil, preferredStyle: .ActionSheet)
 
  // 3
  if UIImagePickerController.isSourceTypeAvailable(.Camera) {
    let cameraButton = UIAlertAction(title: "Take Photo",
      style: .Default) { (alert) -> Void in
        let imagePicker = UIImagePickerController()
        imagePicker.delegate = self
        imagePicker.sourceType = .Camera
        self.presentViewController(imagePicker,
          animated: true,
          completion: nil)
    }
    imagePickerActionSheet.addAction(cameraButton)
  }
 
  // 4
  let libraryButton = UIAlertAction(title: "Choose Existing",
    style: .Default) { (alert) -> Void in
      let imagePicker = UIImagePickerController()
      imagePicker.delegate = self
      imagePicker.sourceType = .PhotoLibrary
      self.presentViewController(imagePicker,
        animated: true,
        completion: nil)
  }
  imagePickerActionSheet.addAction(libraryButton)
 
  // 5
  let cancelButton = UIAlertAction(title: "Cancel",
    style: .Cancel) { (alert) -> Void in
  }
  imagePickerActionSheet.addAction(cancelButton)
 
  // 6
  presentViewController(imagePickerActionSheet, animated: true,
    completion: nil)
}

This code presents two or three options to the user depending on the capabilities of their device. Here’s what’s going on in more detail:

  1. If you’re currently editing either the text view or a text field, close the keyboard and move the view back to its original position.
  2. Create a UIAlertController with the action sheet style to present a set of capture options to the user.
  3. If the device has a camera, add the Take Photo button to imagePickerActionSheet. Selecting this button creates and presents an instance of UIImagePickerController with sourceType .Camera.
  4. Add a Choose Existing button to imagePickerActionSheet. Selecting this button creates and presents an instance of UIImagePickerController with sourceType .PhotoLibrary.
  5. Add a Cancel button to imagePickerActionSheet. Even though you don’t specify anything in its handler, the .Cancel style dismisses the action sheet for you when the button is tapped.
  6. Finally, present your instance of UIAlertController.

Build and run your project; tap the Snap/Upload a picture of your Poem button and you should see your new UIAlertController like so:

ocr-action-sheet

If you’re running on the simulator, there’s no physical camera available so you won’t see the “Take Photo” option.

As mentioned earlier in the list of Tesseract’s limitations, images must be within certain size constraints for optimal OCR results. If an image is too big or too small, Tesseract may return bad results or even, strangely enough, crash the entire program with an EXC_BAD_ACCESS error.

To that end, you’ll need a method that resizes the image while preserving its aspect ratio, so the text in the image is distorted as little as possible.

Scaling Images to Preserve Aspect Ratio

The aspect ratio of an image is the proportional relationship between its width and height. Mathematically speaking, to reduce the size of the original image without affecting the aspect ratio, you must keep the width to height ratio constant.

Aspect_Ratio

When you know both the height and the width of the original image, and you know either the desired height or width of the final image, you can rearrange the aspect ratio equation as follows:

Aspect_Ratio_b

This results in two formulas: height2 = (height1 / width1) * width2, and conversely, width2 = (width1 / height1) * height2. You’ll use these formulas to maintain the image’s aspect ratio in your scaling method.

Still in ViewController.swift, add the following helper method to the class:

func scaleImage(image: UIImage, maxDimension: CGFloat) -> UIImage {
 
  var scaledSize = CGSize(width: maxDimension, height: maxDimension)
  var scaleFactor: CGFloat
 
  if image.size.width > image.size.height {
    scaleFactor = image.size.height / image.size.width
    scaledSize.width = maxDimension
    scaledSize.height = scaledSize.width * scaleFactor
  } else {
    scaleFactor = image.size.width / image.size.height
    scaledSize.height = maxDimension
    scaledSize.width = scaledSize.height * scaleFactor
  }
 
  UIGraphicsBeginImageContext(scaledSize)
  image.drawInRect(CGRectMake(0, 0, scaledSize.width, scaledSize.height))
  let scaledImage = UIGraphicsGetImageFromCurrentImageContext()
  UIGraphicsEndImageContext()
 
  return scaledImage
}

Given maxDimension, this method takes the height or width of the image — whichever is greater — and sets that dimension equal to the maxDimension argument. It then scales the other side of the image appropriately based on the aspect ratio, redraws the original image to fit into the newly calculated frame, then finally returns the newly scaled image back to the calling method.
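
To make the numbers concrete, here’s a quick usage sketch with made-up dimensions (originalPhoto is a hypothetical UIImage, not something in the starter project):

// originalPhoto: a hypothetical 3264 x 2448 (8-megapixel) landscape image.
// width > height, so scaleFactor = 2448 / 3264 = 0.75 and the result
// is 640 x 480: the same 4:3 aspect ratio at a Tesseract-friendly size.
let scaledImage = scaleImage(originalPhoto, maxDimension: 640)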

Whew!

Now that you’ve gotten all of that out of the way (drumroll please…), it’s time to get started with your Tesseract implementation!

Implementing Tesseract OCR

Find the UIImagePickerControllerDelegate class extension at the bottom of ViewController.swift and add the following method inside the extension:

func imagePickerController(picker: UIImagePickerController,
  didFinishPickingMediaWithInfo info: [NSObject : AnyObject]) {
    let selectedPhoto = info[UIImagePickerControllerOriginalImage] as UIImage
    let scaledImage = scaleImage(selectedPhoto, maxDimension: 640)
 
    addActivityIndicator()
 
    dismissViewControllerAnimated(true, completion: {
      self.performImageRecognition(scaledImage)
    })
}

imagePickerController(_:didFinishPickingMediaWithInfo:) is a UIImagePickerControllerDelegate method that returns the selected image information in an info dictionary object. You get the selected photo from info using the UIImagePickerControllerOriginalImage key and then scale it using scaleImage(_:maxDimension:).

You call addActivityIndicator() to disable user interaction and display an activity indicator to the user while Tesseract does its work. You then dismiss your UIImagePicker and pass the image to performImageRecognition() (which you’ll implement next!) for processing.

Next, add the following method to the main class declaration:

func performImageRecognition(image: UIImage) {
  // 1
  let tesseract = G8Tesseract()
 
  // 2
  tesseract.language = "eng+fra"
 
  // 3
  tesseract.engineMode = .TesseractCubeCombined
 
  // 4
  tesseract.pageSegmentationMode = .Auto
 
  // 5
  tesseract.maximumRecognitionTime = 60.0
 
  // 6
  tesseract.image = image.g8_blackAndWhite()
  tesseract.recognize()
 
  // 7
  textView.text = tesseract.recognizedText
  textView.editable = true
 
  // 8
  removeActivityIndicator()
}

This is where the OCR magic happens! Since this is the meat of this tutorial, here’s a detailed look at each part of the code in turn:

  1. Initialize tesseract to contain a new G8Tesseract object.
  2. Tesseract will search for the .traineddata files of the languages you specify in this parameter; specifying eng and fra will search for “eng.traineddata” and “fra.traineddata”, containing the data to detect English and French text respectively. The French trained data has been included in this project since the sample poem you’ll be using for this tutorial contains a bit of French (Très romantique!). The poem’s French accented characters aren’t in the English character set, so you need to link to the French .traineddata in order for those accents to appear; it’s also good to include the French data since there’s a component of .traineddata which takes language vocabulary into account.

    Your poem vil impress vith French! Ze language ov looove! *Haugh* *Haugh* *Haugh*

  3. You can specify three different OCR engine modes: .TesseractOnly, which is the fastest, but least accurate method; .CubeOnly, which is slower but more accurate since it employs more artificial intelligence; and .TesseractCubeCombined, which runs both .TesseractOnly and .CubeOnly to produce the most accurate results — but as a result is the slowest mode of the three.
  4. Tesseract assumes by default that it’s processing a uniform block of text, but your sample image has multiple paragraphs. Tesseract’s pageSegmentationMode lets the Tesseract engine know how the text is divided, so in this case, set pageSegmentationMode to .Auto to allow for fully automatic page segmentation and thus the ability to recognize paragraph breaks.
  5. Here you set maximumRecognitionTime to limit the amount of time your Tesseract engine devotes to image recognition. However, only the Tesseract engine is limited by this setting; if you’re using the .CubeOnly or .TesseractCubeCombined engine mode, the Cube engine will continue processing even once your Tesseract engine has hit its maximumRecognitionTime.
  6. You’ll get the best results out of Tesseract when the text contrasts highly with the background. Tesseract has a built-in filter, g8_blackAndWhite(), that desaturates the image, increases the contrast, and reduces the exposure. Here, you’re assigning the filtered image to the image property of your Tesseract object, before kicking off the Tesseract image recognition process.
  7. Note that the image recognition is synchronous, so at this point the text is available. You then put the recognized text into your textView and make the view editable so your user can edit it as she likes. (If you’re curious how you might move this work off the main thread, there’s a quick sketch just after this list.)
  8. Finally, remove the activity indicator to signal that the OCR is complete and to let the user edit their poem.
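
As noted in step 7, recognize() runs synchronously, which means the main thread is blocked for as long as recognition takes. The tutorial keeps things simple and stays on the main thread, but if you’d like the rest of the UI to remain responsive you could push the work onto a background queue instead. Here’s a rough sketch of that variation; it isn’t one of the tutorial’s steps, just an idea to experiment with:

func performImageRecognitionInBackground(image: UIImage) {
  // Hypothetical variation: run Tesseract off the main thread,
  // then hop back to the main queue for all UIKit updates.
  dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)) {
    let tesseract = G8Tesseract()
    tesseract.language = "eng+fra"
    tesseract.engineMode = .TesseractCubeCombined
    tesseract.pageSegmentationMode = .Auto
    tesseract.maximumRecognitionTime = 60.0
    tesseract.image = image.g8_blackAndWhite()
    tesseract.recognize()

    dispatch_async(dispatch_get_main_queue()) {
      // UIKit work must happen on the main thread.
      self.textView.text = tesseract.recognizedText
      self.textView.editable = true
      self.removeActivityIndicator()
    }
  }
}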

Now it’s time to test this first batch of code you’ve written and see what happens!

Processing Your First Image

The sample image for this tutorial, found in Image Resources\Lenore.png is shown below:

Lenore

Lenore.png contains an image of a love poem addressed to a “Lenore” — but with a few edits you can turn it into a poem that is sure to get the attention of the one you desire! :]

Although you could print a copy of the image, then snap a picture with the app to perform the OCR, make it easy on yourself and add the image to your device’s Camera Roll to eliminate the potential for human error, lighting inconsistencies, skewed text, and flawed printing among other things. If you’re using the Simulator, simply drag and drop the image file onto the Simulator.

Build and run your app; select Snap/Upload a picture of your Poem then select Choose Existing and choose the sample image from the Photo Library to begin Tesseract processing. You’ll have to allow your app to access the Photo Library the first time you run it, and you’ll see the activity indicator spinning away after you select an image.

And… Voila! Eventually, the deciphered text appears in the text view — and it looks like Tesseract did a great job with the OCR.

OCR_complete

But if the apple of your eye isn’t named “Lenore”, he or she probably won’t appreciate this poem coming from you as it stands…and they’ll likely want to know who this “Lenore” character is! ;]

And considering “Lenore” appears quite often in the scanned text, customizing the poem to your tootsie’s liking is going to take a bit of work…

What’s that, you say? Yes, you COULD implement a great time-saving function to find and replace these words! Brilliant idea! The next section shows you how to do just that.

Finding and Replacing Text

Now that the OCR engine has turned the image into text in the text view, you can treat it as you would any other string!

Open ViewController.swift and you’ll see that there’s already a swapText() method ready for you, which is hooked up to the Swap button in your app. How convenient. :]

Replace the implementation of swapText() with the following:

@IBAction func swapText(sender: AnyObject) {
  // 1
  if textView.text.isEmpty {
    return
  }
 
  // 2
  textView.text =
    textView.text.stringByReplacingOccurrencesOfString(findTextField.text,
      withString: replaceTextField.text, options: nil, range: nil)
 
  // 3
  findTextField.text = nil
  replaceTextField.text = nil
 
  // 4
  view.endEditing(true)
  moveViewDown()
}

The above code is pretty straightforward, but take a moment to walk through it step-by-step.

  1. If the textView is empty, there’s no text to swap so simply bail out of the method.
  2. Otherwise, find all occurrences of the string you’ve typed into findTextField in the textView and replace them with the string you’ve entered in replaceTextField.
  3. Next, clear out the values in findTextField and replaceTextField once the replacements are complete.
  4. Finally, resign the keyboard and move the view back into the correct position. As before in takePhoto(), you’re ensuring the view stays positioned correctly when the keyboard goes away.

Note: Tapping the background also ends “editing” mode and moves the view into its original position. This is facilitated through a UIButton that lives behind the other elements of the interface, which triggers backgroundTapped() in ViewController.swift.

Build and run your app; select the sample image again and let Tesseract do its thing. Once the text appears, enter Lenore in the Find this… field (note that the searched text is case-sensitive), then enter your true love’s name in the Replace with… field, and tap Swap to complete the switch-a-roo.

swap_text

Presto chango — you’ve created a love poem that is tailored to your sweetheart and your sweetheart alone.

Play around with the find and replace to replace other words and names as necessary; once you’re done — uh, what should you do with it once you’re done? Such artistic creativity and bravery shouldn’t live on your device alone; you’ll need some way to share your masterpiece with the world.
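
One last tweak you might consider: the replacement above is case-sensitive because swapText() passes nil for the options parameter. If you’d rather ignore case when matching, the same NSString API accepts compare options; a sketch of that one-line change (not a required step) looks like this:

// Match "Lenore", "lenore" and "LENORE" alike.
textView.text =
  textView.text.stringByReplacingOccurrencesOfString(findTextField.text,
    withString: replaceTextField.text, options: .CaseInsensitiveSearch, range: nil)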

Sharing The Final Result

In this final section, you’ll create a UIActivityViewController to let your users share their new creations.

Replace the current implementation of sharePoem() in ViewController.swift with the following:

@IBAction func sharePoem(sender: AnyObject) {
  // 1
  if textView.text.isEmpty {
    return
  }
 
  // 2
  let activityViewController = UIActivityViewController(activityItems:
    [textView.text], applicationActivities: nil)
 
  // 3
  let excludeActivities = [
    UIActivityTypeAssignToContact,
    UIActivityTypeSaveToCameraRoll,
    UIActivityTypeAddToReadingList,
    UIActivityTypePostToFlickr,
    UIActivityTypePostToVimeo]
  activityViewController.excludedActivityTypes = excludeActivities
 
  // 4
  presentViewController(activityViewController, animated: true,
    completion: nil)
}

Taking each numbered comment in turn:

  1. If the textView is empty, don’t share anything.
  2. Otherwise, create a new instance of UIActivityViewController, put the text from the text view inside an array and pass it in as the activity item to be shared.
  3. UIActivityViewController has a long list of built-in activity types. You can exclude UIActivityTypeAssignToContact, UIActivityTypeSaveToCameraRoll, UIActivityTypeAddToReadingList, UIActivityTypePostToFlickr, and UIActivityTypePostToVimeo since they don’t make much sense in this context.
  4. Finally, present your UIActivityViewController and let the user share their creation where they wish.
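
Note: On iPhone this modal presentation just works. If you ever run the app on an iPad, UIActivityViewController must be shown in a popover, which takes a couple of extra lines before the call to presentViewController(_:animated:completion:). Here’s a rough sketch, anchored to the main view for simplicity:

// Only needed on iPad: configure the popover before presenting.
if let popover = activityViewController.popoverPresentationController {
  popover.sourceView = view
  popover.sourceRect = view.bounds  // or the share button's frame, if you prefer
}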

Build and run the app again, and run the image through Tesseract. You can do the find and replace steps again if you like and when you’re happy with the text, tap the share button.

share_poem

That’s it! Your Love In A Snap app is complete — and sure to win over the heart of the one you adore.

Or if you’re anything like me, you’ll replace Lenore’s name with your own, send that poem to your inbox through a burner account, stay in alone on Valentine’s night, order in some Bibimbap, have a glass of wine, get a bit bleary-eyed, then pretend that email you received is from the Queen of England for an especially classy and sophisticated St. Valentine’s evening full of romance, comfort, mystery, and intrigue. But maybe that’s just me…

Where to Go From Here?

You can download the final version of the project here.

You can find the iOS wrapper for Tesseract on GitHub at https://github.com/gali8/Tesseract-OCR-iOS. You can also download more language data from Google’s Tesseract OCR site; use data versions 3.02 or higher to guarantee compatibility with the current framework.

Try out the app with other poems, songs, and snippets of text; try snapping some images with your camera as well as using images from your Photo Library. You’ll see how the OCR results vary between sources.

Examples of potentially problematic image inputs that can be corrected for improved results. Source: Google’s Tesseract OCR site

Remember: “Garbage In, Garbage Out”. The easiest way to improve the quality of the output is to improve the quality of the input. As Google notes on its Tesseract OCR site, there are many ways input quality can suffer: dark or uneven lighting, image noise, skewed text orientation, and thick dark image borders can all contribute to less-than-perfect results.

You can look into image pre-processing, implement your own artificial intelligence logic such as neural networks, or use Tesseract’s own training tools to help your program learn from its errors and improve its success rate over time. And since even small variations in image brightness, color, contrast, or exposure can result in variations in output, you could also run the image through multiple filters and then compare the results to determine the most accurate output. Chances are you’ll get the best results by using some or all of these strategies in combination, so play around with these approaches and see what works best for your application.

Tesseract is pretty powerful as is, but the potential for OCR is unlimited. Keep in mind as you use and improve the capabilities of Tesseract OCR with your software that as a sensing, thinking being, if you’re capable of deciphering characters using your eyes or ears or even fingertips, you’re a certifiable expert at character recognition already and are fully capable of teaching your computer so much more than it already knows.

ocr_expert

As always, if you have comments or questions on this tutorial, Tesseract, or OCR strategies, feel free to join the discussion below!

The post Implementing Tesseract OCR in iOS appeared first on Ray Wenderlich.

Video Tutorial: Intro to AutoLayout Part 9: Visual Format Language

Welcome to the Swift Spring Fling!


Spring is almost upon us, and it’s time to celebrate!

This is because we are releasing two brand new Swift books:

  • iOS Animations by Tutorials (today): Learn how to create delightful animations in Swift, through a series of hands-on tutorials and challenges: from beginning to advanced!
  • WatchKit by Tutorials (next Monday): Learn how to create your own apps for the Apple Watch – by building a fully-featured, rich, and engaging WatchKit app from scratch!

To celebrate the launch of these two Swift books, we are running a special event called the Swift Spring Fling.

The Swift Spring Fling will run for two weeks, where every day we’ll release a new post related to iOS Animations and WatchKit. We’ll have giveaways, tech talks, podcasts, free tutorials, special discounts, and more!

Keep reading to find out what’s coming – including our first giveaway! :]

The New Swift Books

Two brand new Swift books!

As mentioned earlier, the Swift Spring Fling is all about celebrating our two new Swift books – iOS Animations and WatchKit by Tutorials.

We realize that most of you will want to get both books, so we set up a Spring Swift Bundle where you can get both books for a discount.

Note that the books are all currently $10 off to celebrate the book launches, and the prices will be increasing after the Swift Spring Fling – so be sure to grab the discount while you still can!

The Swift Spring Fling

During the Swift Spring Fling, we will be releasing a new post related to iOS Animation or WatchKit every day!

Here’s the current schedule:

Calendar for the Swift Spring Fling

As you can see, lots of fun stuff is planned, so be sure to check back every day :]

Giveaway!

To celebrate the Swift Spring Fling, we will be giving away a few free books.

To enter for the giveaway, simply leave a comment on this post. On Feb 27 (the last day of the Fling) we will choose four random lucky winners, each of which will get one PDF+print book of their choice – either iOS Animations or WatchKit by Tutorials.

We hope you enjoy the Spring Swift Fling, and be sure to get the discount on our new Spring Swift Books while you still can!

The post Welcome to the Swift Spring Fling! appeared first on Ray Wenderlich.


Reminder: Free Live Tech Talk (iOS Animations with Swift) Tomorrow (Tuesday)!

Free live tech talk (iOS Animations with Swift) tomorrow!

This is a reminder that we are having a free live tech talk on iOS Animations with Swift tomorrow (Tuesday, Feb 17), and you’re all invited! Here are the details:

  • When: Tuesday, Feb 17 at 2:00 PM EST – 3:00 PM EST
  • What: iOS Animations with Swift Tech Talk/Demo followed by live Q&A (come w/ questions!)
  • Who: Marin Todorov (Tutorial Team member and iOS Animations by Tutorials author)
  • Where: Google Hangouts Event Page
  • Why: For learning and fun!
  • How: Visit the event page and a video URL should be posted. Follow the instructions there to submit your Q&A (via text) as the talk runs.

We hope to see some of you at the tech talk, and we hope you enjoy!

The post Reminder: Free Live Tech Talk (iOS Animations with Swift) Tomorrow (Tuesday)! appeared first on Ray Wenderlich.

Video Tutorial: Intro to AutoLayout Part 10: Visual Format Language II

Core Animation with Marin Todorov – Podcast S03 E05

Take a deep dive into Core Animation with Marin Todorov!

Welcome back to season 3 of the raywenderlich.com podcast!

In this episode, take a deep dive into Core Animation with the author of iOS Animations by Tutorials, Marin Todorov!

[Subscribe in iTunes] [RSS Feed]

Transcript

Mic: Hey, Jake. How are you?

Jake: I’m good, Mic. How are you doing?

Mic: Not so bad, just about got over the jet lag from RWDevCon now.

Jake: Yeah. I heard it took you a few days. I saw some chat where you were saying you were still trying to adjust.

Mic: Yeah. Well, I think I just about got it going on the last day when we were doing the last podcast in the hotel room. I was just about getting over the jet lag from going and then obviously I had to fly back the same day. It’s probably taken me about the same number of days to recover, which is unusual for me because it just usually takes a day or so. I think we were five hours difference so almost halfway there. Yeah it’s been good working on WatchKit by Tutorials this week so flat out getting ready for the launch which is a week from Monday.

Jake: Yeah. How’s that going? Has that been fun?

Mic: It’s been a lot of fun to be fair getting it straight into it on a brand new platform, brand new framework, getting to learn all that, new team of authors which has been really good. There is a unique spin on this book that I’m not going to give away at this point that I think the readers will enjoy.

Jake: You tease.

Mic: Yeah, it’s been a lot of fun.

Jake: Good.

Mic: We’re going to be talking about iOS Animations and Core Animation in the podcast today, and a little bit later on we’re going to be joined by Marin Todorov, who’s an iOS developer and a raywenderlich.com team member. He’s also written this book, iOS Animations by Tutorials, which is available today. He’s going to be joining us shortly and we’re going to have a chat about the book and a general chat about Core Animation.

One of the things that strikes me about Core Animation is it’s one of those things that’s so ingrained [00:02:00] in iOS and iOS development that a lot of people will be using it without even knowing they’re using it.

I know we were taking off air a little bit about this Jake. I wanted to really get your take on Core Animation. How it relates to iOS, all the stuff that we’ve come to take for granted almost is all handled by this framework when it comes to UI and UX interactions and things like that.

Jake: It’ll be interesting to get Marin’s take on this too but my understanding is that when the first iPhone came out, you had to do a lot of this animation on the GPU because it was the only way to move that much visual data around that quickly and make it work.

We got it in 10.5 on the Mac but it was really only there first because they were working on it for the iPhone. Anything that actually shows up on screen at some point passes through some level of Core Animation even if there’s not animations going on.
I think the compositing is done … I don’t know if that’s technically called Core Animation but the compositing is done on the GPU. Anyway, it’s interesting because you and I were just saying where does Core Animation end and not Core Animation begin? It’s hard to define that because I think Core Animation is everywhere.

Mic: This seems to be in every layer of abstraction. At some point you’re going to be dipping down and touching Core Animation. If you’re using the UIView class methods animate with duration, that kind of thing, that underneath is using Core Animation.

If you want to drop down and you want to manipulate layers directly, that’s using Core Animation. Like you say right down until you get to the GPU, all the way down that tree you’re going to be using Core Animations. Is it a little bit of a difficult topic to get a grasp on and know which bits to pull out and talk specifically about?

Jake: Yeah, just in terms of the questions we wanted to ask. We didn’t want to seem like we didn’t know that Core Animation was still being used [00:04:00] even though we’re saying when you use the UIView API, we wanted to talk about UIView Animation but it’s still Core Animation.

Mic: I think things like UI Dynamics and all that kind of stuff as well, that still at some level uses Core Animation. I think just listening to the last two or three minutes that we’ve been talking, people are going to understand that it’s almost like you can’t escape it, you know, anything that’s running on iOS is using it at some level…

Jake: It’s why the focus on UI for the iPhone has always been so dynamic. Animations communicate so much to the user. I think it’s one of those things where we take it for granted now but at the time it was pretty sweet that there was so much animation and it was so fluid.

Mic: I think that was one of the things that Jobs wanted for that initial iPhone. It’s a similar idea to the whole skeuomorphism because things had to behave in a way that they behaved in real life so people knew how to interact with them because it was the first real really responsive touch screen.

We had this whole new platform and they were expecting people to be a little bit unsure. By using skeuomorphism and textures and things that resemble their real world counterparts and having things move in a fluid motion, be responsive in the same way that they are in real life, I think that was the whole part of that initial launch and why everything is so key.

Jake: Yeah and it was part of that initial magic that was so impressive relied heavily on this technology.

Mic: I think animation as well is key in the shift that we’ve had recently from skeuomorphism, iOS 6 being the latest one into iOS 7 and 8. As we’ve gone much flatter you can’t use gradients and shadows and textures as much to differentiate UI [00:06:00] and to make the bits UI. The focus has moved almost exclusively onto feedback through animation and interaction.

Jake: Yeah and I feel like the most impressive apps are those that have really dynamic interesting custom animations. When it’s done well, it’s really impressive especially, if it’s a little bit novel.

Mic: We had Alli Dryer on a couple of weeks ago talking about capptivate.co. That entire site is all about capturing those really great interactions and visual design. If you look on that site, it’s all about animations because of this shift.

The other thing I briefly wanted to talk about before we ask Marin to join the conversation was, we are iOS 8 now so Core Animation has been the king of animation on the iOS platform for the entire 8 iterations, but we are starting to see the odd framework, new framework pop up here and there that’s going to try and tackle animation in a different way, that perhaps doesn’t use Core Animation under the hood.

One of those is Facebook’s Pop framework. I was just wondering what your take on that was?

Jake: I think it’s interesting. I haven’t built an app on top of Pop so I don’t have the deep experience with it but it’s interesting. One of the exciting things for me about Pop was that you could use their Quartz Composer plug-in to design some of those animations.

Then you could basically translate the settings that you used in the Quartz Composer patches into the code. So it gave you the visual design tool on top of some of the dynamic animations available in Pop that aren’t necessarily easy in [00:08:00] Core Animation, but the design tool on top, making it easy to come up with them and iterate over different ones, was something that was really exciting to me.

I don’t know if people are using that a lot. I haven’t heard as much about it since launch. I’ve heard a lot more about the Pop framework than I have heard about the Quartz Composer plugin lately. I think that’s key; is being able to try out different things quickly and get that visceral feedback going.

Mic: This is Origami I think, the plugin code.

Jake: Thanks for reminding me. Yes.

Mic: I think this is the difference between a group of developers coming up with a set of API that functions really well and then people in the real world using that. Because something like Origami is the kind of thing that you’d expect to be built into Xcode in the same way that interface builder is.

It’s a visual interaction designer but we don’t have that and it doesn’t look like that’s going to be coming anytime soon. People are really looking to push the boundaries on iOS design and interaction and things like that.

They’re actually developing their own tools or plugins to other tools that we do have access to, to allow them to do this kind of thing. I think that’s quite significant.

Jake: Yeah. I have seen there are some tools coming out, not from Apple and obviously it would be better if it was built into Xcode but there’s a tool I think called Core Animator. I’m not sure if that’s the right name, but anyway it’s like a key frame, almost a Flash-style animation tool that then spits out Core Animation code. People are working on these tools to make that more accessible so that it doesn’t have to all be done in code. Personally I like interface builder.

I know some programmers are like, do everything in code. It’s just better and cleaner and it makes more sense but for me I feel like if you’re designing something visual having the visual feedback is, there is a visceral element to it that you don’t get when you’re working with the code that I think is essential to really being creative and exploring this space.

Mic: Definitely, and also the bigger your app gets, the longer between each [00:10:00] build and run. If you’re going to have a tool that’s giving you immediate feedback rather than writing some code, waiting a few seconds for it to build and then changing it.

I also think, just before we bring Marin in, because I know he’s waiting in the wings, Facebook have another open source library called Tweaks. That’s what that’s designed to do.

We have a tutorial on the site. If you want to check that out then find the tutorial on the site. We’ll put a link in the show notes but that’s a really good way to avoid unnecessary build and runs because you can divide some parameters and then change those parameters while the app’s actually running and see some immediate feedback.

We’re going to invite Marin into the conversation now. Welcome Marin.

Marin: Hello guys.

Mic: How are you?

Marin: Thank you for inviting me. I’m very well, thank you.

Mic: No problem. Just before we get into talking some more about iOS animations and Core Animation, can you give our listeners a little bit of insight into just who is Marin Todorov?

Marin: Well that’s a long story but I’ll try to give you the abbreviated version. Well, I’m an independent developer and a consultant. I’ve been working for the iPhone basically for iOS since the end of 2009.

I started at the very end. I thought it was just around the corner, good times. Right now I do a lot of programming and consulting and apparently I work on a lot of books with the Ray Wenderlich team.

With two books on my own and a few more working on a team I can already say that I’m also a writer. Yeah. If you want to know anything else, just ask away but I think that’s as good as an introduction.

Mic: That’s great. Also, you are not just a writer but you also produced a series of videos on Core Animation or [00:12:00] iOS animations. How did you end up getting roped into that?

Marin: That was a great adventure. If you remember, some time ago, Ray decided to put a video section on the website and so naturally he started pioneering this part of the website himself.

Then he brought in a couple more members from his company, Razeware. There was a series made by Vicki and there was, I think, an early video by Brian Moakley, and then he made a call for authors within the team.

I, being overly competitive, wanted to be the first outside of Razeware to make a video series. When I saw that he was looking for somebody to do a series on animations, I have to be the one.

This is pretty much what my email to him said like “I have to be the one.” That’s how it happened and that was a great, great experience.

Mic: How did you find that because obviously that’s very different from writing?

Marin: It was difficult because I had to do about a month and a half of research in advance, develop all the projects and plan everything out without having an idea how the actual recording would go.

Then we would get together, me and Ray, and actually start recording and then we were like whatever happens, happens. We have only this much time and we had to finish it in time. That was a little bit stressful in the beginning.

It put a lot of pressure on us whilst we were doing the videos, but by the end it was pretty fun. We had a good time.

Mic: Okay, great. My final question before we get back to animations is at what point did you decide you had enough in the video series to then transition to writing fully fledged “By Tutorials” book [00:14:00]?

Marin: Well as said, I had a month and a half to prepare for the video tutorial series and in that time you can come up with a lot of ideas. We had only this much time.

I think we agreed in the beginning to have a maximum 15 episodes in the series and so I had to order by priority how content will just flow, how episodes could feel natural flowing one after the other.

I had to draw the line at some point and say this is going to go on the video series. I think it was the second day of the recordings that I said to Ray, “Ray I had so many ideas and many of them, good ones, didn’t make it in the video series. What do you think about a book?”

He was, “Well a book is good. Some people learn better from books, some other people learn better from videos so that’s probably a good idea.” Funny enough when I had to prepare the book, we as well had a cap on how much content we could put in.

I again ended up with more ideas than I could actually produce for the book. That’s how I came up with the idea for the newsletter, so I am also now running a newsletter because I couldn’t fit everything in the book. That’s pretty much it.

Jake: What are you going to do next when you can’t fit everything into the newsletter?

Marin: If I can’t fit everything into the newsletter I’m probably going to take a vacation. [Laughter]

Mic: You heard our brief explanation of Core Animation at the top of the show. Can you define Core Animation and clarify anything you thought maybe we left ambiguous?

Marin: Actually I was listening to you guys and I was laughing [00:16:00] just a little bit because this is pretty much how you can explain Core Animation. It’s something where it’s not clearly defined what it is, and it’s everywhere.

If you’re an iOS developer it just floats in the air. It’s everywhere around you, it surrounds you, kind of like the Midichlorians in Star Wars. It’s just everywhere and you use it.

Joke aside, the idea of Core Animation is mainly I think to separate the app logic and the app code from the actual process that renders everything else.

Core Animation is a little bit deceivingly named because at the time “Core Graphics” was already taken, so they couldn’t really call it Core Graphics, but Core Animation actually is also responsible for rendering not only content that has been animated but actually all content.

Because UIView is just a thin wrapper around CALayer and CALayer is just Core Animation layer. Core Animation takes care of rendering stuff and for animating them. This happens on a background process.

There is this thing called a rendering server so whenever you start an animation or you just send some content to your components or views and so forth, the hierarchy is being sent to this background process where the server will interpolate all the intermediate steps of your animation and everything that needs to happen on screen.

It will push all this information to the GPU so that the GPU can take care of rendering everything independently from your own app process. That’s why even if you block the main thread of your app, even if you are doing something like some heavy lifting on your app process, your animation will [00:18:00] still run fluently in the background.

I think it’s a pretty neat idea and we’ve benefitted greatly from it. As one of you guys said, that was one of the best things of the first iPhone. It was so fluid, everything was so amazing. That was really great, yeah.

Mic: You just touched on it there. You mentioned CALayer and its relationship to UIView. Can you give us a little bit of insight because I know obviously we’re all familiar with animating UIViews but I believe you can also animate the layer directly? I was just wondering what the difference between the two is.

Marin: Well as said in iOS, the UIView is really just a wrapper around the CALayer. That’s why everything you can do to a view, you can also do to a layer animation-wise.

You can just move them around. You can change their opacity and transfer them and so forth. When you animate for example the opacity of your view that you’re actually animating the opacity of your layer, therefore the CALayer is just a level deeper inside the framework so you can be more flexible [00:20:00].

You can animate more things but the main idea would be that UIKit have to provide an easier and more sane way into animations because like everything was really new.

People really wanted to grasp how everything works and Apple really wants to make animations very, very accessible to anyone so UIs can be lively and animate and so forth.

UIKit just provides you with a shortcut to what actually Core Animation offers. UIKit would offer you this one line APIs, really easy but not as flexible as Core Animation. If you use Core Animation you can just create animations straight for your layers and then you can animate much more things.

You can achieve much more flexibility in terms of for example the transitions in Core Animations, they have third dimensions. That’s something. You get only two dimensions with UIKit and three dimensions with Core Animation. That’s a very good example of what kind of more flexibility Core Animation offers.

Of course in the end you have all the specialized layers in Core Animations like the gradient layers or the shape layers where you just create very specific super powerful effects by using a certain type of layer to create animations.

Jake: You’re touching on it a little bit but can we go through a survey of different animations? As you said, you scale, rotate, translate; which is just move stuff around.

You can adjust opacity but we’ve also got things like UIKit dynamics that simulate physics. We’ve got CAShapeLayer that will allow you to animate the drawing of a path.

I just wanted to give the listeners the scope of like if you’re thinking about you want to animate something, what are your options [00:22:00].

Marin: Okay. I think that’s a good question because I think that there are more ways to animate something on iOS than most people know about. Actually that’s one of the reasons why we called the video series and the book iOS Animations by Tutorials.

We didn’t want to write about Core Animation because Core Animation is just one of the many ways to animate stuff on iOS. Of course we have View Animations – actually using only the UIKit built in APIs will get you very, very far.

You can create amazing applications by just combining different View Animations. This will get you really far don’t get me wrong. Of course with View Animation there comes Auto Layout because that’s the way to create animations when your UI is a slave to Auto Layout.

Basically that’s one way but of course another way to animate stuff is layer animations where you can do exactly the same things as with views plus more. You can do Keyframe Animations with Layers.

This is a really interesting topic because this is the way to make certain objects on screen for example, an image or a pattern or anything actually animate over a path. This is really a cool effect because most animations go from point A to point B.

Keyframe Animation with Layers is the only way to make something actually follow a bezier path or any arbitrary path you define. You can do a lot with shape layers. That’s the way to draw shapes on screen and you can draw any arbitrary shape and you can just feed the UIBezierPath.

Actually CGPath but you can use a UIBezierPath to create one so you can draw basically anything on screen. You can also draw interactively any shape, meaning [00:24:00] you can simulate pen writing, a word on screen or something like this. It’s really cool animations.

In the book we have a chapter about creating an animation with a gradient layer. Even though gradient is not the first thing that you think about that you can animate but also you can create amazing effects when you combine it with masks or with text.

That’s really a great topic that I haven’t seen much about on the Internet. Besides this you can go into the third dimension as I mentioned earlier. Core Animation is not really a full-fledged 3D framework but it still offers you quite a bit of 3D possibilities, like rotating things around a position in 3D space. That’s just really cool as well. You can add simple 3D effects to your app as well.

Jake: That’d be something like a deck of cards and flipping a card over, something like that, right?

Marin: Yeah. Well actually layers are just 2D textures. They basically represent an image, and you can rotate them in 3D space and position them. You still work with 2D objects, so all of your layers are flat, but you can still rotate them around and kind of position them. You can’t really build a very complex 3D object but you can still rotate and position your 2D layers in 3D space.
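
A quick aside for readers following along: the pen-writing effect Marin mentioned a moment ago boils down to animating a shape layer’s strokeEnd from 0 to 1. Here’s a minimal sketch; the path, color and timing are placeholders, not code from the book:

// Sketch: "draw" a path on screen by animating strokeEnd from 0 to 1.
// writingPath is a placeholder; supply your own UIBezierPath.
let penLayer = CAShapeLayer()
penLayer.path = writingPath.CGPath
penLayer.strokeColor = UIColor.whiteColor().CGColor
penLayer.fillColor = nil
penLayer.lineWidth = 2.0
view.layer.addSublayer(penLayer)

let stroke = CABasicAnimation(keyPath: "strokeEnd")
stroke.fromValue = 0.0
stroke.toValue = 1.0
stroke.duration = 2.0
penLayer.addAnimation(stroke, forKey: "stroke")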

Mic: One thing I wanted to pick up on Marin you mentioned a couple of minutes back. The old way of doing things – before Auto Layout was usable on iOS – the way that you would animate would be that you would have an existing frame, you’d calculate your new frame and then in that UIView [00:26:00] animation block, you’d set that new frame and Core Animation or UIView animation would take care of the rest for you.

Obviously now as developers are moving towards using Auto Layout, how do we go around moving from this thing where we would set frame manually to how do we animate constraints and things like that?

Marin: That’s actually easier than one might think to do. Earlier we just animated the frame manually ourselves and since now the frame of the view is managed by auto layout then we need to talk to auto layout to animate the frame for us. How do we talk to Auto Layout?

We talk to it by constraints. It’s really easy actually. If your view is positioned on a certain spot on screen and it has constraints to that position, you just would remove some of those constraints and add new ones to just position it to a different spot.

Auto Layout will actually take care to see that the frame has been changed and will take care to create animations between these two positions for you and then hand it off to Core Animation. It’s pretty easy. You just only need to learn how to do it and when to do it but then it’s broken apart.
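
For anyone who wants to see the pattern Marin is describing in code, it’s only a few lines. This is just an illustrative sketch; heightConstraint stands in for whichever NSLayoutConstraint outlet you want to animate:

// Update the constraint to describe the new layout you want...
heightConstraint.constant = 200.0
// ...then have Auto Layout apply the change inside an animation block.
UIView.animateWithDuration(0.5, animations: {
  self.view.layoutIfNeeded()
})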

Mic: There is a chapter that covers this in the book?

Marin: Yeah. That was one of the topics that I regretted not including in the video series. We just didn’t have the time for it. That’s one of the very important topics I wanted to be in the book.

We have a section with two chapters. One is an introduction to Auto Layout and how it works. It’s like a crash course into Auto Layout, and then the second chapter is about how to continue the projects from previous chapters and add animations to the layout that you produced [00:28:00].

Further, in other new projects in the book there is also an Auto Layout UI, so you get to exercise all the other animations a few more times.

Jake: The book is available now. It was officially announced during the keynote at RWDevCon, right?

Marin: Yeah, that was great. That was all Ray’s idea. He gets all the credit for it. It was an amazing idea for how to surprise and awe everyone. I think it worked.

Jake: What happened? How did that go down?

Marin: Back in October I sent my pitch to Ray because it was a single author book so it was not really a team book. It was only me. I sat down and wrote this pitch about the book, what it was going to be about, all the content.

I did a detailed table of contents. I also wrote a few chapters just to show what I had in mind. I sent it to Ray and after a few days he was like “Yeah I’m fully on board with this. It looks great.”

We were talking about who the editors were going to be and so on, and then he just came up with the idea. How did he come up with this? Maybe we have to ask him and chat about this because he said “Hey how about we keep the book secret and we just announce it on the day of launch?”

I was like “Oh okay. That sounds good.” A few days later, “Oh we have the conference in three months. How about we work nonstop and then we have it ready for the conference?”

I’m like, “Let that be the plan. Let’s see how it goes, but that will be the ultimate goal”: to have it ready for the conference and launch it for the audience, give books as freebies. It was all him [00:30:00]. Don’t ask me.

Mic: The attendees, they didn’t know they were getting a book right? Because you teased this in the keynote but then in the following remarks you then said actually we’ve got one more thing and then the doors opened.

Marin: Yeah. Actually the attendees didn’t know and I think most of the team members didn’t know either because it was the idea that we would just really keep this a secret until the very end because we didn’t want the work to like go out and people would start talking about it or so.

We just really wanted it to be the very best surprise for everybody just to … I think there is a photo in the Flickr channel that is from this very moment where all this happens at the end.

It’s this amazing photo where every single person in the audience is smiling. It’s really amazing photo. It is so positive and this was great.

Mic: The book is part of the Spring Swift Fling celebration currently underway at raywenderlich.com. Can you give us a little bit more insight into that?

Marin: iOS Animations by Tutorials and WatchKit by Tutorials are available right now on the raywenderlich.com store. iOS Animations by Tutorials is ready for download right now as we are doing this podcast.

WatchKit is going to come out a week later but it’s available for preorder. Since there were two books already ready to go at almost the same time we thought to make a little celebration.

We’re going to put out a number of posts related to both books. There are going to be a few abbreviated chapters from iOS Animations by Tutorials and there are going to be a few posts by the guys involved in WatchKit by Tutorials.

They’re going to prepare special posts about WatchKit that have to do with creating [00:32:00] watch faces and whatnot. In the end we’re going to be giving away some free books, so be sure to check out the Swift Spring Fling on raywenderlich.com.

Mic: I think that’s probably as good a place as any to wrap things up chaps. Thanks again for joining us, Marin.

Marin: Thank you very much for having me.

Mic: No, it’s been our pleasure. As always, we’d love to hear your feedback so please do email in using podcast@raywenderlich.com.

As this episode represents the midway point of season three, our current season, we’re now starting to think about what we can change for season four to hopefully improve things. This is a really good opportunity for you to let us know what we’re doing right and what you’d like to see us change. If you did like this episode then please don’t forget to leave a nice review on iTunes.

We really do hope that you enjoyed the raywenderlich.com podcast. Thanks for listening and we’ll see you again next time.

Our Sponsor

Interested in sponsoring a podcast episode? We sell ads via Syndicate Ads, check it out!

Where To Go From Here?

We hope you enjoyed this episode of our podcast. Stay tuned for a new episode next week! :]

Be sure to subscribe in iTunes to get access as soon as it comes out!

We’d love to hear what you think about the podcast, and any suggestions on what you’d like to hear in future episodes. Feel free to drop a comment here, or email us anytime at podcast@raywenderlich.com!

The post Core Animation with Marin Todorov – Podcast S03 E05 appeared first on Ray Wenderlich.

iOS Animation with Swift Tech Talk Video


Each month, one of the members of the team gives a Tech Talk, and by popular request we also stream these live.

Today in our February Tech Talk, tutorial team member Marin Todorov gave an excellent talk and Q&A on iOS Animation with Swift.

Here’s the video for anyone who didn’t get a chance to attend live!

Want to Join Us Next Month?

Thanks again to Marin for giving a great talk and Q&A, and for having the guts to present to a live audience :] And thank you to everyone who attended – we hope you enjoyed it!

Next month’s tech talk will be announced soon – we’ll post it on the sidebar of the site, so be sure to check back for it later.

Hope to see some of you there! :]

The post iOS Animation with Swift Tech Talk Video appeared first on Ray Wenderlich.

Video Tutorial: Intro to Auto Layout Part 11: Debugging Auto Layout

UIView Animation with Swift Tutorial

Learn about iOS Animations in our Swift Spring Fling celebration!

Note from Ray: This is an abbreviated chapter from iOS Animations by Tutorials released as part of the Spring Swift Fling to give you a sneak peek of what’s inside the book. We hope you enjoy! :]

Animation is a critical part of your iOS user interfaces. Animation draws the user’s attention toward things that change, and adds a ton of fun and polish to your app’s UI.

Even more importantly, in an era of “flat design”, animation is one of the key ways to make your app stand apart from others.

In this tutorial, you’ll learn how to use UIView animation to do the following:

  • Set the stage for a cool animation.
  • Create move and fade animations.
  • Adjust the animation easing.
  • Reverse and repeat animations.

There’s a fair bit of material to get through, but I promise it will be a lot of fun. Are you up for the challenge?

001_ChallengeAccepted

All right – time to get animating!

Getting Started

Start by downloading the starter project for this tutorial, which represents the login screen for a fictional airline – “Bahama Air”.

Build and run your project in Xcode and you’ll see the following:

002_LoginScreen

The app doesn’t do much right now – it just shows a login form with a title, two text fields, and a big friendly button at the bottom.

There’s also a nice background picture and four clouds. The clouds are already connected to outlet variables in the code.

Open ViewController.swift and have a look inside. At the top of the file you’ll see all the connected outlets and class variables. Further down, there’s a bit of code in viewDidLoad(), which initializes some of the UI. The project is ready for you to jump in and shake things up a bit!

Enough with the introductions – you’re undoubtedly ready to try out some code!

Your First Animation

Your first task is to animate the form elements onto the screen when the user opens the application. Since the form is now visible when the app starts, you’ll have to move it off of the screen just before your view controller makes an appearance.

Add the following code to viewWillAppear():

heading.center.x  -= view.bounds.width
username.center.x -= view.bounds.width
password.center.x -= view.bounds.width

This moves each of the form elements outside the visible bounds of the screen, like so:

003_FormElements

Since the code above executes before the view controller appears, it will look like those text fields were never there in the first place.

Build and run your project to make sure your fields truly appear off-screen just as you had planned:

004_MovedElements

Perfect – now you can animate those form elements back to their original locations via a delightful animation.

Add the following code to the end of viewDidAppear():

UIView.animateWithDuration(0.5, animations: {
  self.heading.center.x += self.view.bounds.width
})

To animate the title in you call the UIView class method animateWithDuration(_:animations:). The animation starts immediately and animates over half a second; you set the duration via the first method parameter in the code.

It’s as easy as that; all the changes you make to the view in the animations closure will be animated by UIKit.

Build and run your project; you should see the title slide neatly into place like so:

005_TitleSlide

That sets the stage for you to animate in the rest of the form elements.

Since animateWithDuration(_:animations:) is a class method, you aren’t limited to animating just one specific view; in fact, you can animate as many views as you want in your animations closure.

Add the following line to the animations closure:

self.username.center.x += self.view.bounds.width

Build and run your project again; watch as the username field slides into place:

006_UsernameSlide

Delayed Animations

Seeing both views animate together is quite cool, but you probably noticed that animating the two views over the same distance and with the same duration looks a bit stiff. Only kill-bots move with such absolute synchronization! :]

Wouldn’t it be cool if each of the elements moved independently of the others, possibly with a little bit of delay in between the animations?

First remove the line of code that animates username from the first animation closure (shown commented out below for reference):

UIView.animateWithDuration(0.5, animations: {
  self.heading.center.x += self.view.bounds.width
  // self.username.center.x += self.view.bounds.width
})

Then add the following code to the bottom of viewDidAppear():

UIView.animateWithDuration(0.5, delay: 0.3, options: nil, animations: {
  self.username.center.x += self.view.bounds.width
}, completion: nil)

The class method you use this time looks familiar, but it has a few more parameters to let you customize your animation:

  1. duration: The duration of the animation.
  2. delay: The amount of seconds UIKit will wait before it starts the animation.
  3. options: A bitmask value that allows you to customize a number of aspects about your animation. You’ll learn more about this parameter later on, but for now you can pass nil to mean “no options.”
  4. animations: The closure expression to provide your animations.
  5. completion: A code closure to execute when the animation completes; this parameter often comes in handy when you want to perform some final cleanup tasks or chain animations one after the other (see the sketch just after this list).
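
For example, here’s a quick sketch of chaining with the completion closure, reusing the heading and username outlets from this project (this isn’t one of the tutorial’s numbered steps):

UIView.animateWithDuration(0.5, animations: {
  self.heading.center.x += self.view.bounds.width
}, completion: { _ in
  // This second animation only starts once the heading has finished moving.
  UIView.animateWithDuration(0.5, animations: {
    self.username.center.x += self.view.bounds.width
  })
})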

In the code you added above you set delay to 0.3 to make the animation start just a hair later than the title animation.

Build and run your project; how does the combined animation look now?

007_Combined

Ahh – that looks much better. Now all you need to do is animate in the password field.

Add the following code to the bottom of viewDidAppear():

UIView.animateWithDuration(0.5, delay: 0.4, options: nil, animations: {
  self.password.center.x += self.view.bounds.width
}, completion: nil)

Here you’ve mostly mimicked the animation of the username field, just with a slightly longer delay.

Build and run your project again to see the complete animation sequence:

008_CompleteAnimation

That’s all you need to do to animate views across the screen with a UIKit animation!

That’s just the start of it – you’ll be learning a few more awesome animation techniques in the remainder of this tutorial!

Animatable Properties

Now that you’ve seen how easy animations can be, you’re probably keen to learn how else you can animate your views.

This section will give you an overview of the animatable properties of a UIView, and then guide you through exploring these animations in your project.

Not all view properties can be animated, but all view animations, from the simplest to the most complex, can be built by animating the subset of properties on a view that do lend themselves to animation, as outlined in the section below.

Position and Size

009_PositionAndSize

You can animate a view’s position and frame in order to make it grow, shrink, or move around as you did in the previous section. Here are the properties you can use to modify a view’s position and size:

  • bounds: Animate this property to reposition the view’s content within the view’s frame.
  • frame: Animate this property to move and/or scale the view.
  • center: Animate this property when you want to move the view to a new location on screen.

Don’t forget that Swift lets you adjust single members of structures as well. This means you can move a view vertically by changing center.y or you can shrink a view by decreasing frame.size.width.
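
For example, here’s a small sketch, reusing this project’s outlets (not one of the tutorial’s steps), that animates a single member of each structure:

UIView.animateWithDuration(0.5, animations: {
  self.heading.center.y -= 30            // move the title up, leaving center.x alone
  self.username.frame.size.width -= 40   // shrink the field horizontally
})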

Appearance

010_Appearance

You can change the appearance of the view’s content by either tinting its background or making the view fully or semi-transparent.

  • backgroundColor: Change this property of a view to have UIKit gradually change the tint of the background color over time.
  • alpha: Change this property to create fade-in and fade-out effects.

Transformation

011_Transformation

Transforms modify views in much the same way as above, since you can also adjust size and position.

  • transform: Modify this property within an animation block to animate the rotation, scale, and/or position of a view.

These are affine transformations under the hood, which are much more powerful and allow you to describe the scale factor or rotation angle rather than needing to provide a specific bounds or center point.
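
As a quick illustration (again reusing the heading outlet, rather than adding a step to the tutorial), you might combine a rotation and a scale into a single transform and then animate back to the identity transform:

// Start the heading rotated and scaled down...
self.heading.transform = CGAffineTransformScale(
  CGAffineTransformMakeRotation(CGFloat(-M_PI_4)), 0.5, 0.5)

// ...then animate it back to its normal appearance.
UIView.animateWithDuration(0.5, animations: {
  self.heading.transform = CGAffineTransformIdentity
})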

These look like pretty basic building blocks, but you’ll be surprised at the complex animation effects you’re about to encounter!

Animation Options

Looking back to your animation code, you were always passing in nil to the options parameter.

options lets you customize how UIKit creates your animation. You’ve only adjusted the duration and delay of your animations, but you can have a lot more control over your animation parameters than just that.

There’s a list of options declared in the UIViewAnimationOptions enumeration that you can combine in different ways for use in your animations.

Repeating

You’ll first take a look at the following two animation options:

  • .Repeat: Enable this option to make your animation loop forever.
  • .Autoreverse: Enable this option only in conjunction with .Repeat; it repeatedly plays your animation forward and then in reverse.

Modify the code that animates the password field in viewDidAppear() to use the .Repeat option as follows:

UIView.animateWithDuration(0.5, delay: 0.4,
  options: .Repeat, animations: {
  self.password.center.x += self.view.bounds.width
}, completion: nil)

Build and run your project to see the effect of your change:

012_Repeat

The form title and username field fly in and settle down in the center of the screen, but the password field keeps animating forever from its position offscreen.

Modify the same code you changed above to use both .Repeat and .Autoreverse in the options parameter as follows:

UIView.animateWithDuration(0.5, delay: 0.4,
  options: .Repeat | .Autoreverse, animations: {
    self.password.center.x += self.view.bounds.width
}, completion: nil)

Build and run your project again; this time the password field just can’t make up its mind about staying on the screen!

Animation Easing

In real life things don’t just suddenly start or stop moving. Physical objects like cars or trains slowly accelerate until they reach their target speed, and unless they hit a brick wall, they gradually slow down until they come to a complete stop at their final destination.

The image below illustrates this concept in detail:

013_Easing

To make your animations look more realistic, you can apply the same effect of building momentum at the beginning and slowing down before the end, known in general terms as ease-in and ease-out.

You can choose from four different easing options:

  • .Linear: This option applies no acceleration or deceleration to the animation.
  • .CurveEaseIn: This option applies acceleration to the start of your animation.
  • .CurveEaseOut: This option applies deceleration to the end of your animation.
  • .CurveEaseInOut: This option applies acceleration to the start of your animation and applies deceleration to the end of your animation.

To better understand how these options add visual impact to your animation, you’ll try a few of the options in your project.

Modify the animation code for your password field once again with a new option as follows:

UIView.animateWithDuration(0.5, delay: 0.4,
  options: .Repeat | .Autoreverse | .CurveEaseOut, animations: {
    self.password.center.x += self.view.bounds.width
}, completion: nil)

Build and run your project; notice how smoothly the field decelerates until it reaches its rightmost position, before returning to the left side of the screen:

014_Deceleration

This looks much more natural since that’s how you expect things to move in the real world.

Now try the opposite: ease-in the animation when the field is still outside of the screen by modifying the same code as above to change the .CurveEaseOut option to .CurveEaseIn as follows:

UIView.animateWithDuration(0.5, delay: 0.4,
  options: .Repeat | .Autoreverse | .CurveEaseIn, animations: {
    self.password.center.x += self.view.bounds.width
}, completion: nil)

Build and run your project; observe how the field jumps back from its rightmost position with robotic vigor. This looks unnatural and isn’t as visually pleasing as the previous animation.

You’ve seen how the various animation options affect your project and how to make movements look smooth and natural.

Now that you have some experience with animation curves, change the options on the piece of code you’ve been playing with back to nil:

UIView.animateWithDuration(0.5, delay: 0.4,
 options: nil, animations: {
  self.password.center.x += self.view.bounds.width
}, completion: nil)

And that’s it – you now understand how to use the UIView animation API – so go forth and add some cool animations to your apps!

Where To Go From Here?

Now that you know how basic animations work, you’re ready to tackle some more dazzling animation techniques.

Animating views from point A to point B? Pshaw – that’s so easy! :]

If you want to learn more, check out our book iOS Animations by Tutorials. In the book, you’ll learn how to animate with springs, transitions, keyframe animations, CALayer animations, Auto Layout constraint animations, view controller transition animations, and more.

We hope you enjoyed this tutorial, and if you have any questions or comments, please join the forum discussion below!


Video Tutorial: Intro to Auto Layout Part 12: Animating Constraints


How To Implement A Circular Image Loader Animation with CAShapeLayer


Learn how to create a neat circular progress + mask animation!

Note from Ray: This is an intermediate iOS Animation tutorial released as part of the Spring Swift Fling celebration. We hope you enjoy! :]

A few weeks ago, Michael Villar created a really interesting loading animation for his post on Motion Experiments.

The GIF to the right shows the loading animation, which marries a circular progress indicator with a circular reveal animation. The combined effect is fascinating, unique, and more than a little mesmerizing! :]

This tutorial will show you how to recreate this exact effect in Swift and Core Animation. Let’s get animating!

Getting Started

First download the starter project for this tutorial, and build and run. After a moment, you should see a simple image displayed as follows:

StarterProject

The starter project already has the views and image loading logic in place. Take a minute to browse through the project once you’ve extracted it; there’s a ViewController with a UIImageView subclass named CustomImageView, along with an SDWebImage method call that loads the image.

You might notice that when you first run the app, it seems to pause for a few seconds while the image downloads, and then the image appears on the screen without fanfare. Of course, there’s no circular progress indicator at the moment – that’s what you’ll create in this tutorial!

You will create this animation in two distinct phases:

  1. Circular progress. First, you will draw a circular progress indicator and update it based on the progress of the download.
  2. Expanding circular image. Second, you will reveal the downloaded image through an expanding circular window.

Follow along closely to prevent yourself from going in “circles”! :]

Creating the Circular Indicator

Think for a moment about the basic design of the progress indicator. The indicator is initially empty to show a progress of 0%, then gradually fills in as the image is downloaded. This is fairly simple to achieve with a CAShapeLayer whose path is a circle.

Note: If you’re new to the concept of CAShapeLayer (or CALayer in general), check out Scott Gardner’s CALayer in iOS with Swift article.

You can control the start and end position of the outline, or stroke, of your shape with the CAShapeLayer properties strokeStart and strokeEnd. By varying strokeEnd between 0 and 1, you can fill in the stroke appropriately to show the progress of the download.
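
Before you build the real indicator, here’s a tiny standalone sketch (the values are purely illustrative) of how strokeStart and strokeEnd trim a circular stroke:

import UIKit

let ring = CAShapeLayer()
ring.path = UIBezierPath(ovalInRect: CGRect(x: 0, y: 0, width: 40, height: 40)).CGPath
ring.fillColor = UIColor.clearColor().CGColor
ring.strokeColor = UIColor.redColor().CGColor
ring.lineWidth = 2
ring.strokeStart = 0.25   // skip the first quarter of the path
ring.strokeEnd = 0.75     // stop drawing three quarters of the way around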

Let’s try this out. Create a new file with the iOS\Source\Cocoa Touch Class template. Name it CircularLoaderView and set it to be a subclass of UIView as shown below:

Screenshot 2015-01-25 19.25.43

Click Next, and then Create. This new subclass of UIView will house all of your new animation code.

Open CircularLoaderView.swift and add the following property and constant to the class:

let circlePathLayer = CAShapeLayer()
let circleRadius: CGFloat = 20.0

circlePathLayer represents the circular path, while circleRadius, as its name suggests, is the radius of that path.

Add the following initialization code to CircularLoaderView.swift to configure the shape layer:

override init(frame: CGRect) {
  super.init(frame: frame)
  configure()
}
 
required init(coder aDecoder: NSCoder) {
  super.init(coder: aDecoder)
  configure()
}
 
func configure() {
  circlePathLayer.frame = bounds
  circlePathLayer.lineWidth = 2
  circlePathLayer.fillColor = UIColor.clearColor().CGColor
  circlePathLayer.strokeColor = UIColor.redColor().CGColor
  layer.addSublayer(circlePathLayer)
  backgroundColor = UIColor.whiteColor()
}

Both initializers call configure(). configure() sets up the shape layer with a line width of 2 points, a clear fill color and a red stroke color. It then adds the shape layer as a sublayer of the view’s main layer, and sets the view’s backgroundColor to white so the rest of the screen is blanked out while the image loads.

Adding the Path

You’ll notice that you haven’t yet assigned a path to the layer. To do that, add the following method (still in CircularLoaderView.swift):

func circleFrame() -> CGRect {
  var circleFrame = CGRect(x: 0, y: 0, width: 2*circleRadius, height: 2*circleRadius)
  circleFrame.origin.x = CGRectGetMidX(circlePathLayer.bounds) - CGRectGetMidX(circleFrame)
  circleFrame.origin.y = CGRectGetMidY(circlePathLayer.bounds) - CGRectGetMidY(circleFrame)
  return circleFrame
}

The above method returns an instance of CGRect that bounds your indicator’s path. The bounding rectangle is 2*circleRadius wide and 2*circleRadius tall, and lies at the center of the view.

You’ll need to recalculate circleFrame each time the view’s size changes, so you may as well put it in a method of its own.

Now add the following method to create your path:

func circlePath() -> UIBezierPath {
  return UIBezierPath(ovalInRect: circleFrame())
}

This simply returns the circular UIBezierPath as bounded by circleFrame. Since circleFrame() returns a square, the “oval” in this case will end up as a circle.

Since layers don’t have an autoresizingMask property, you’ll need to update the circlePathLayer’s frame in layoutSubviews to respond appropriately to changes in the view’s size.

Next override layoutSubviews() as follows:

override func layoutSubviews() {
  super.layoutSubviews()
  circlePathLayer.frame = bounds
  circlePathLayer.path = circlePath().CGPath
}

You’re calling circlePath() here since a change in the frame should also trigger a recalculation of the path.

Now open CustomImageView.swift and add the following instance of CircularLoaderView as a property:

let progressIndicatorView = CircularLoaderView(frame: CGRectZero)

Next add these lines in init(coder:), right before the code that downloads the image:

addSubview(self.progressIndicatorView)
progressIndicatorView.frame = bounds
progressIndicatorView.autoresizingMask = .FlexibleWidth | .FlexibleHeight

This adds the progress indicator view as a subview of the custom image view. The autoresizingMask ensures that the progress indicator view always stays the same size as the image view.

Build and run your project; you’ll see a red, hollow circle appear like so:

Screenshot 2015-01-25 21.44.17

Okay — you have your progress indicator drawing on the screen. Your next task is to vary the stroke as the download progresses.

Modifying the Stroke Length

Head back to CircularLoaderView.swift and add the following lines directly below the other properties in the file:

var progress: CGFloat {
  get {
    return circlePathLayer.strokeEnd
  }
  set {
    if (newValue > 1) {
      circlePathLayer.strokeEnd = 1
    } else if (newValue < 0) {
      circlePathLayer.strokeEnd = 0
    } else {
      circlePathLayer.strokeEnd = newValue
    }
  }
}

This creates a computed property — that is, a property without any backing variable — that has a custom setter and getter. The getter simply returns circlePathLayer.strokeEnd, and the setter validates that the input is between 0 and 1 and sets the layer’s strokeEnd property accordingly.

Add the following line to configure() to initialize progress on first run:

progress = 0

Build and run your project; you should see nothing but a blank white screen. Trust me! This is good news! :] Setting progress to 0 in turn sets the strokeEnd to 0, which means no part of the shape layer was drawn.

The only thing left to do with your indicator is to update progress in the image download callback.

Go back to CustomImageView.swift and replace the comment Update progress here with the following:

self!.progressIndicatorView.progress = CGFloat(receivedSize)/CGFloat(expectedSize)

This calculates the progress by dividing receivedSize by expectedSize.

Note: You’ll notice the block uses a weak reference to self – this is to avoid a retain cycle.
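
If you’d like to see the pattern in isolation, here’s a small self-contained sketch of a weak capture; ProgressReporter is a hypothetical stand-in for the download callback, not the SDWebImage API:

import UIKit

class ProgressReporter {
  // Retains the closure it’s handed, just like a real download callback would.
  var progressBlock: ((Int, Int) -> Void)?
}

class ImageLoaderView: UIView {
  let progressIndicatorView = CircularLoaderView(frame: CGRectZero)
  let reporter = ProgressReporter()

  func startObserving() {
    // [weak self] breaks the cycle: the view owns the reporter, the reporter
    // retains the closure, and the closure would otherwise retain the view.
    reporter.progressBlock = { [weak self] received, expected in
      self?.progressIndicatorView.progress = CGFloat(received) / CGFloat(expected)
    }
  }
}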

Build and run your project; you’ll see the progress indicator begin to move like so:

indicator

Even though you didn’t add any animation code yourself, CALayer handily detects changes to any animatable property on the layer and smoothly animates them. Neat!
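
If you ever need to take control of that implicit animation, for example to slow it down, one option (a sketch rather than a step in this tutorial) is to wrap the property change in a CATransaction:

CATransaction.begin()
CATransaction.setAnimationDuration(0.3)   // implicit animations in this transaction run for 0.3 seconds
circlePathLayer.strokeEnd = 0.5
CATransaction.commit()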

That takes care of the first phase. Now on to the second and final phase — the big reveal! :]

Creating the Reveal Animation

The reveal phase gradually displays the image in a window in the shape of an expanding circular ring. If you’ve read my previous tutorial on creating a Ping-style view controller animation, you’ll know that this is a perfect use-case of the mask property of a CALayer.

Add the following method to CircularLoaderView.swift:

func reveal() {
 
  // 1
  backgroundColor = UIColor.clearColor()
  progress = 1
  // 2
  circlePathLayer.removeAnimationForKey("strokeEnd")
  // 3
  circlePathLayer.removeFromSuperlayer()
  superview?.layer.mask = circlePathLayer
}

This is an important method to understand, so let’s go over this section by section:

  1. Clears the view’s background color so the image behind the view isn’t hidden anymore, and sets progress to 1, or 100%.
  2. Removes any pending implicit animations for the strokeEnd property, which may have otherwise interfered with the reveal animation. For more about implicit animations, check out iOS Animations by Tutorials.
  3. Removes circlePathLayer from its superlayer and assigns it instead as the superview’s layer mask, so the image is visible through the circular mask “hole”. This lets you reuse the existing layer and avoid duplicating code.

Now you need to call reveal() from somewhere. Replace the Reveal image here comment in CustomImageView.swift with the following:

self!.progressIndicatorView.reveal()

Build and run your app; once the image downloads you’ll see it partially revealed through a small ring:

Screenshot 2015-01-26 02.49.54

You can see your image in the background — but just barely! :]

Expanding Rings

Your next step is to expand this ring both inwards and outwards. You could do this with two separate, concentric instances of UIBezierPath, but you can do it more efficiently with just a single Bezier path.

How? You simply increase the circle’s radius (the path property) to expand outward, while simultaneously increasing the line’s width (the lineWidth property) to make the ring thicker and expand inward. Eventually, both values grow enough to reveal the entire image underneath.

Go back to CircularLoaderView.swift and add the following code to the end of reveal():

// 1
let center = CGPoint(x: CGRectGetMidX(bounds), y: CGRectGetMidY(bounds))
let finalRadius = sqrt((center.x*center.x) + (center.y*center.y))
let radiusInset = finalRadius - circleRadius
let outerRect = CGRectInset(circleFrame(), -radiusInset, -radiusInset)
let toPath = UIBezierPath(ovalInRect: outerRect).CGPath
 
// 2
let fromPath = circlePathLayer.path
let fromLineWidth = circlePathLayer.lineWidth
 
// 3
CATransaction.begin()
CATransaction.setValue(kCFBooleanTrue, forKey: kCATransactionDisableActions)
circlePathLayer.lineWidth = 2*finalRadius
circlePathLayer.path = toPath
CATransaction.commit()
 
// 4
let lineWidthAnimation = CABasicAnimation(keyPath: "lineWidth")
lineWidthAnimation.fromValue = fromLineWidth
lineWidthAnimation.toValue = 2*finalRadius
let pathAnimation = CABasicAnimation(keyPath: "path")
pathAnimation.fromValue = fromPath
pathAnimation.toValue = toPath
 
// 5
let groupAnimation = CAAnimationGroup()
groupAnimation.duration = 1
groupAnimation.timingFunction = CAMediaTimingFunction(name: kCAMediaTimingFunctionEaseInEaseOut)
groupAnimation.animations = [pathAnimation, lineWidthAnimation]
groupAnimation.delegate = self
circlePathLayer.addAnimation(groupAnimation, forKey: "strokeWidth")

Here’s a comment-by-comment explanation of what’s going on above:

  1. Determine the radius of the circle that fully circumscribes the image view, then calculate the CGRect that bounds this circle. toPath represents the final shape of the CAShapeLayer mask.
  2. Set the initial values of lineWidth and path to match the current values of the layer.
  3. Set lineWidth and path to their final values; this prevents them from jumping back to their original values when the animation completes. Wrapping this in a CATransaction with kCATransactionDisableActions set to true disables the layer’s implicit animations.
  4. Create two instances of CABasicAnimation, one for path and the other for lineWidth. lineWidth has to increase twice as fast as the radius increases in order for the circle to expand inward as well as outward.
  5. Add both animations to a CAAnimationGroup, and add the animation group to the layer. You also assign self as the delegate, as you’ll use this in just a moment.

Build and run your project; you’ll see the reveal animation kick-off once the image finishes downloading. But a portion of the circle remains on the screen once the reveal animation is done.

StillVisible

To fix this, add the following implementation of animationDidStop(_:finished:) to CircularLoaderView.swift:

override func animationDidStop(anim: CAAnimation!, finished flag: Bool) {
  superview?.layer.mask = nil
}

This code removes the mask from the superview’s layer, which removes the circle entirely.

Build and run your project again, and now you’ll see the full effect of your animation:

indicator final final

Congratulations, you have finished creating the circular image loading animation!

Where to Go From Here?

You can download the completed project here.

From here, you can further tweak the timing, curves and colors of the animation to suit your needs and personal design aesthetic. One possible improvement is to use kCALineCapRound for the shape layer’s lineCap property to round off the ends of the circular progress indicator. See what improvements you can come up with on your own!
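
The rounded line cap, for example, is a one-line change you could drop into configure() (a sketch, not a required step):

circlePathLayer.lineCap = kCALineCapRound   // rounds off both ends of the progress stroke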

If you enjoyed this tutorial and would like to learn how to create more animations like these, check out Marin Todorov’s book iOS Animations by Tutorials, which starts with basic view animations and moves all the way to layer animations, animating constraints, view controller transitions, and more.

If you have any questions or comments about the tutorial, please join the discussion below. I’d also love to see ways in which you’ve incorporated this cool animation in your app!


Swift Video Tutorial Series Updated for Xcode 6.1.1


Our Swift Video Tutorial Series is now Fully Updated for Xcode 6.1.1!

When I originally made our Introduction to Swift video tutorial series, Xcode 6 wasn’t even out of beta! This meant I couldn’t show screenshots of Xcode, so I had to rely on simply explaining things via slides.

But good news – Brian Moakley has fully updated the series for Xcode 6.1.1! He now uses live coding with Playgrounds, which should make the series more fun and easy to follow.

If you haven’t gotten started with Swift yet, now’s a great time to learn. Get started with the fully updated introduction to the series here!

Brian and I hope you enjoy this video tutorial update, and thanks for subscribing!


Video Tutorial: Intro to Auto Layout Part 13: Auto Layout Library: Snap

Custom View Controller Presentation Transitions with Swift


Learn about iOS Animations in our Swift Spring Fling celebration!

Note from Ray: This is an abbreviated chapter from iOS Animations by Tutorials released as part of the Spring Swift Fling to give you a sneak peek of what’s inside the book. We hope you enjoy! :]

Whether you’re presenting the camera view controller, the address book, or one of your own custom-designed modal screens, you call the same UIKit method every time: presentViewController(_: animated:completion:). This method “gives up” the current screen to another view controller.

The default presentation animation simply slides the new view up to cover the current one. The illustration below shows a “New Contact” view controller sliding up over the list of contacts:

01_iAT_addressbook

In this tutorial, you’ll create your own custom presentation transition to replace the default animation and liven up the tutorial’s starter project.

Getting Started

Download the starter project: BeginnerCook-Starter, unarchive the zip file and open the Beginner Cook starter project. Select Main.storyboard to begin the tour:

image003

The first view controller (ViewController) contains the app’s title and main description as well as a scroll view at the bottom, which shows the list of available herbs.

The main view controller presents HerbDetailsViewController whenever the user taps one of the images in the list; this view controller sports a background, a title, a description and some buttons to credit the image owner.

There’s already enough code in ViewController.swift and HerbDetailsViewController.swift to support the basic application. Build and run the project to see how the app looks and feels:

image005

Tap on one of the herb images, and the details screen comes up via the standard vertical cover transition. That might be OK for your garden-variety app, but your herbs deserve better!

Your job is to add some custom presentation transitions to your app to make it blossom! You’ll replace the current stock animation with one that expands the tapped herb image to a full-screen view like so:

image007

Roll up your sleeves, put your developer apron on and get ready for the inner workings of custom presentation controllers!

Behind the Scenes of Custom Transitions

UIKit lets you customize your view controller’s presentation via the delegate pattern; you simply make your main view controller (or another class you create specifically for that purpose) adopt UIViewControllerTransitioningDelegate.

Every time you present a new view controller, UIKit asks its delegate whether or not it should use a custom transition. Here’s what the first step of the custom transitioning dance looks like:

image009

UIKit calls animationControllerForPresentedController(_:presentingController:sourceController:); if that method returns nil, UIKit uses the built-in transition. If it receives an object instead, UIKit uses that object as the animation controller for the transition.

There are a few more steps in the dance before UIKit can use the custom animation controller:

image011

UIKit first asks your animation controller (simply known as the animator) for the transition duration in seconds, then calls animateTransition() on it. This is when your custom animation gets to take center stage.

In animateTransition(), you have access to both the current view controller on the screen as well as the new view controller to be presented. You can fade, scale, rotate and manipulate the existing view and the new view however you like.
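
For reference, here’s a minimal sketch of how the context hands you both view controllers; the animator you’ll write later in this tutorial works with views via viewForKey() instead:

func animateTransition(transitionContext: UIViewControllerContextTransitioning) {
  // Both sides of the transition come from the context.
  let fromViewController = transitionContext.viewControllerForKey(UITransitionContextFromViewControllerKey)
  let toViewController = transitionContext.viewControllerForKey(UITransitionContextToViewControllerKey)

  // Fade, scale or otherwise manipulate fromViewController's and toViewController's
  // views here, then call transitionContext.completeTransition(true) when finished.
}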

Now that you’ve learned a bit about how custom presentation controllers work, you can start to create your own.

Implementing Transition Delegates

Since the delegate’s task is to manage the animator object that performs the actual animations, you’ll first have to create a stub for the animator class before you can write the delegate code.

From Xcode’s main menu select File\New\File… and choose the template iOS\Source\Cocoa Touch Class.

Set the new class name to PopAnimator and make it a subclass of NSObject.

Open PopAnimator.swift and update the class definition to make it conform to the UIViewControllerAnimatedTransitioning protocol as follows:

class PopAnimator: NSObject, UIViewControllerAnimatedTransitioning {
 
}

You’ll see some complaints from Xcode since you haven’t implemented the required delegate methods yet, so you’ll stub those out next.

Add the following method to the class:

func transitionDuration(transitionContext: UIViewControllerContextTransitioning)-> NSTimeInterval {
  return 0
}

The 0 value above is just a placeholder value for the duration; you’ll replace this later with a real value as you work through the project.

Now add the following method stub to the class:

func animateTransition(transitionContext: UIViewControllerContextTransitioning) {
 
}

The above stub will hold your animation code; adding it should have cleared the remaining errors in Xcode.

Now that you have the basic animator class, you can move on to implementing the delegate methods on the view controller side.

Open ViewController.swift and add the following extension to the end of the file:

extension ViewController: UIViewControllerTransitioningDelegate{
 
}

This code indicates the view controller conforms to the transitioning delegate protocol. You’ll add some methods here in a moment.

Find didTapImageView() in the main body of the class. Near the bottom of that method you’ll see the code that presents the details view controller. herbDetails is the instance of the new view controller; you’ll need to set its transitioning delegate to the main controller.

Add the following code right before the last line of the method that calls presentViewController:

herbDetails.transitioningDelegate = self

Now UIKit will ask ViewController for an animator object every time you present the details view controller on the screen. However, you still haven’t implemented any of UIViewControllerTransitioningDelegate’s methods, so UIKit will still use the default transition.

The next step is to actually create your animator object and return it to UIKit when requested.

Add the following new property to ViewController:

let transition = PopAnimator()

This is the instance of PopAnimator that will drive your animated view controller transitions. You only need one instance of PopAnimator since you can continue to use the same object each time you present a view controller, as the transitions are the same every time.

Now add the first delegate method to the extension in ViewController:

func animationControllerForPresentedController(
 presented: UIViewController!,
 presentingController presenting: UIViewController!,
 sourceController source: UIViewController!) ->
 UIViewControllerAnimatedTransitioning! {
 
  return transition
}

This method takes a few parameters that let you make an informed decision whether or not you want to return a custom animation. In this tutorial you’ll always return your single instance of PopAnimator since you have only one presentation transition.

You’ve already added the delegate method for presenting view controllers, but how will you deal with dismissing one?

Add the following delegate method to handle this:

func animationControllerForDismissedController(dismissed: UIViewController!) -> UIViewControllerAnimatedTransitioning! {
  return nil
}

The above method works much like the previous one: you can check which view controller was dismissed and decide whether to return nil and use the default animation, or return a custom transition animator and use that instead. At the moment you return nil, as you aren’t going to implement the dismissal animation until later.

You finally have a custom animator to take care of your custom transitions. But does it work?

Build and run your project and tap one of the herb images:

image013

Nothing happens. Why? You have a custom animator to drive the transition, but… oh, wait, you haven’t added any code to the animator class! :] You’ll take care of that in the next section.

Creating your Transition Animator

Open PopAnimator.swift; this is where you’ll add the code to transition between the two view controllers.

First, add the following properties to this class:

let duration    = 1.0
var presenting  = true
var originFrame = CGRect.zeroRect

You’ll use duration in several places, such as when you tell UIKit how long the transition will take and when you create the constituent animations.

You also define presenting to tell the animator class whether you are presenting or dismissing a view controller. You want to keep track of this because typically, you’ll run the animation forward to present and in reverse to dismiss.

Finally, originFrame stores the frame of the image view the user originally tapped. You’re going to animate out from the herb image in the scroll view, so this will be the starting frame.

Now you can move on to the UIViewControllerAnimatedTransitioning methods.

Replace the code inside transitionDuration() with the following:

return duration

Reusing the duration property lets you easily experiment with the transition animation; you can simply modify the value of the property to make the transition run faster or slower.

Setting your Transition’s Context

It’s time to add some magic to animateTransition. This method has one parameter of type UIViewControllerContextTransitioning, which gives you access to the parameters and view controllers of the transition.

Before you start working on the code itself, it’s important to understand what the animation context actually is.

When the transition between the two view controllers begins, the existing view is added to a transition container view and the new view controller’s view is created but not yet visible, as illustrated below:

image015

Therefore your task is to add the new view to the transition container within animateTransition(), “animate in” its appearance, and “animate out” the old view if required.

By default, the old view is removed from the transition container when the transition animation is done:

image017

Adding a Pop Transition

To create the animations for your custom transition, you need to work with the so-called “container” view of the transition context. This is a temporary view, much like a scratchpad, that is added to the screen only while the transition takes place. You will create all your animations in this view.

Insert the following into animateTransition():

let containerView = transitionContext.containerView()
 
let toView =
  transitionContext.viewForKey(UITransitionContextToViewKey)!
 
let herbView = presenting ? toView : transitionContext.viewForKey(UITransitionContextFromViewKey)!

containerView is where your animations will live, while toView is the new view to present. If you’re presenting, herbView is just the toView; otherwise it will be fetched from the context. For both presenting and dismissing, herbView will always be the view that you animate.

When you present the details controller view, it will grow to take up the entire screen. When dismissed, it will shrink to the image’s original frame.

Add the following to animateTransition():

let initialFrame = presenting ? originFrame : herbView.frame
let finalFrame = presenting ? herbView.frame : originFrame
 
let xScaleFactor = presenting ?
  initialFrame.width / finalFrame.width :
  finalFrame.width / initialFrame.width
 
let yScaleFactor = presenting ?
  initialFrame.height / finalFrame.height :
  finalFrame.height / initialFrame.height

In the code above, you detect the initial and final animation frames and then calculate the scale factor you need to apply on each axis as you animate between each view.

Now you need to carefully position the new view so it appears exactly above the tapped image; this will make it look like the tapped image expands to fill the screen.

Add the following to animateTransition():

let scaleTransform = CGAffineTransformMakeScale(xScaleFactor, yScaleFactor)
 
if presenting {
  herbView.transform = scaleTransform
  herbView.center = CGPoint(
    x: CGRectGetMidX(initialFrame),
    y: CGRectGetMidY(initialFrame))
  herbView.clipsToBounds = true
}

When presenting the new view, you set its scale and position so it exactly matches the size and location of the initial frame.

Now add the final bits of code to animateTransition():

containerView.addSubview(toView)
containerView.bringSubviewToFront(herbView)
 
UIView.animateWithDuration(duration, delay:0.0,
  usingSpringWithDamping: 0.4,
  initialSpringVelocity: 0.0,
  options: nil,
  animations: {
    herbView.transform = self.presenting ?
     CGAffineTransformIdentity : scaleTransform
 
    herbView.center = CGPoint(x: CGRectGetMidX(finalFrame),
                              y: CGRectGetMidY(finalFrame))
 
  }, completion:{_ in
    transitionContext.completeTransition(true)
})

This first adds toView to the container. Next, you make sure herbView is on top, since that’s the only view you’re animating. Remember that when dismissing, toView is the original view, so the first line adds it on top of everything else, and your animation would be hidden away unless you bring herbView to the front.

Then, you can kick off the animations – using a spring animation here will give it a bit of bounce.

Inside the animations expression, you change the transform and position of herbView. When presenting, you’re going from the small size at the bottom to the full screen so the target transform is just the identity transform. When dismissing, you animate it to scale down to match the original image size.

At this point, you’ve set the stage by positioning the new view controller over the tapped image; you’ve animated between the initial and final frames; and finally, you’ve called completeTransition() to hand things back to UIKit. It’s time to see your code in action!

Build and run your project; tap the first herb image to see your view controller transition in action:

image019

Currently your animation starts from the top-left corner; that’s because the default value of originFrame has the origin at (0, 0) – and you never set it to any other value.

Open ViewController.swift and add the following code to the top of animationControllerForPresentedController():

transition.originFrame = 
  selectedImage!.superview!.convertRect(selectedImage!.frame, toView: nil)
 
transition.presenting = true

This sets the originFrame of the transition to the frame of selectedImage, which is the image view you last tapped. Then you set presenting to true so the animator knows to run the presentation animation.

Build and run your project again; tap different herbs in the list and see how your transition looks for each:

image023

Adding a Dismiss Transition

All that’s left to do is dismiss the details controller. You’ve actually done most of the work in the animator already – the transition animation code does the logic juggling to set the proper initial and final frames, so you’re most of the way to playing the animation both forwards and backwards. Sweet! :]

Open ViewController.swift and replace the body of animationControllerForDismissedController() with the following:

transition.presenting = false
return transition

This tells your animator object that you’re dismissing a view controller so the animation code will run in the correct direction.

Build and run the project to see the result. Tap on an herb and then tap anywhere on screen to dismiss it and enjoy!

image027

Where To Go From Here?

You can grab the completed project from this tutorial here: BeginnerCook-Completed.

From here, you can do a lot more to polish this transition even further. For example, here are some ideas to develop:

  • hide tapped images during transitions so it really looks like they grow to take up the full screen (a rough sketch follows this list)
  • fade in and out each herb’s description text so the transition animation is smoother
  • animate the corner radius of the selected images
  • test and adjust the transition for landscape orientation
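
Here’s a rough sketch of the first idea, hiding the tapped thumbnail. This is one possible approach rather than the book’s exact solution, and you may want to fine-tune exactly when the thumbnail reappears:

func animationControllerForPresentedController(
  presented: UIViewController!,
  presentingController presenting: UIViewController!,
  sourceController source: UIViewController!) ->
  UIViewControllerAnimatedTransitioning! {

  transition.originFrame =
    selectedImage!.superview!.convertRect(selectedImage!.frame, toView: nil)
  transition.presenting = true
  selectedImage!.hidden = true       // keep the thumbnail hidden while the details screen is up
  return transition
}

func animationControllerForDismissedController(dismissed: UIViewController!) -> UIViewControllerAnimatedTransitioning! {
  transition.presenting = false
  selectedImage?.hidden = false      // show the thumbnail again as the view shrinks back
  return transition
}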

All of these and more are tackled in the presentation transition animations chapter in iOS Animations by Tutorials. In the book, you’ll learn in a bit more detail about view controller presentation transitions, orientation change animations, navigation controllers and interactive transitions, and more.

We hope you enjoyed this tutorial, and if you have any questions or comments, please join the forum discussion below!


Video Tutorial: Intro to Auto Layout Part 14: Auto Layout Library: Cartography
