
How To Write A Simple Node.js/MongoDB Web Service for an iOS App

Learn how to create your own web service for your iOS app with Node.js and MongoDB!

In today’s world of collaborative and social apps, it’s crucial to have a backend that is simple to build and easy to deploy. Many organizations rely on an application stack using the following three technologies:

  • Node.js
  • Express
  • MongoDB

This stack is quite popular for mobile applications since the native data format is JSON which can be easily parsed by apps by way of Cocoa’s NSJSONSerialization class or other comparable parsers.

In this tutorial you’ll learn how to set up a Node.js environment that leverages Express; on top of that platform you’ll build a server that exposes a MongoDB database through a REST API, as such:

The backend database rendered in an HTML table

The second part of this tutorial series focuses on the iOS app side. You’ll build a cool “places of interest” app to tag interesting locations so that other users can find out what’s interesting near them. Here’s a sneak peek at what you’ll be building:

The TourMyTown main view

This tutorial assumes that you already know the basics of JavaScript and web development but are new to Node.js, Express, and MongoDB.

A Case for Node+Mongo

Most Objective-C developers likely aren’t familiar with JavaScript, but it’s an extremely common language among web developers. For this reason, Node has gained a lot of popularity as a web framework, but there are many more reasons that make it a great choice as a back-end service:

  • Built-in server functionality
  • Good project management through its package manager
  • A fast JavaScript engine, known as V8
  • Asynchronous event-driven programming model.

An asynchronous programming model of events and callbacks is well suited for a server which has to wait for a lot of things, such as incoming requests and inter-process communications with other services (like MongoDB).

MongoDB is a low-overhead database where all entities are free-form BSON — “binary JSON” — documents. This lets you work with heterogeneous data and makes it easy to handle a wide variety of data formats. Since BSON is compatible with JSON, building a REST API is simple — the server code can pass requests to the database driver without a lot of intermediate processing.

Node and MongoDB are inherently scalable and synchronize easily across multiple machines in a distributed model; this combination is a good choice for applications that don’t have an evenly distributed load.
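For instance, here’s a small sketch of how two differently shaped documents can coexist in one collection; the documents and field names are invented purely for illustration:

var doc1 = { name: 'Ferry Building', categories: ['food', 'landmark'] };
var doc2 = { name: 'Golden Gate Bridge', location: [-122.4783, 37.8199], toll: true };
// Both objects can be stored in the same MongoDB collection even though
// their fields differ; there is no schema forcing them to match.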

Getting Started

This tutorial assumes you have OS X Mountain Lion or Mavericks, and Xcode with its command line tools already installed.

The first step is to install Homebrew. Just as CocoaPods manages packages for Cocoa and Gem manages packages for Ruby, Homebrew manages Unix tools on Mac OS X. It’s built on top of Ruby and Git and is highly flexible and customizable.

If you already have Homebrew installed, feel free to skip to the next step. Otherwise, install Homebrew by opening Terminal and executing the following command:

ruby -e "$(curl -fsSL https://raw.github.com/Homebrew/homebrew/go/install)"
Note: cURL is a handy tool to send and receive files and data using URL requests. You use it here to load the Homebrew installation script — later on in this tutorial you’ll use it to interact with the Node server.

Once Homebrew is installed, enter the following command in Terminal:

brew update

This simply updates Homebrew so you have the most up-to-date package list.

Now, install MongoDB via Homebrew with the following command:

brew install mongodb

Make a note of the directory where MongoDB is installed as shown in the “Summary” output. You’ll need this later to launch the MongoDB service.

Download and run the Node.js installer from http://nodejs.org/download/.

Once the installer has completed, you can test out your Node.js installation right away.

Enter the following command in Terminal:

node

This puts you into the Node.js interactive environment where you can execute JavaScript expressions.

Enter the following expression at the prompt:

console.log("Hello World");

You should receive the following output:

Hello World
undefined

console.log is the Node.js equivalent of NSLog. However, the console object is more versatile than NSLog: it also provides console.info, console.assert, and console.error, among other methods you might expect from more advanced loggers such as CocoaLumberjack.
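For example, you can try a few of the other console methods right at the prompt; the messages here are arbitrary:

console.info('Starting up');               // informational output
console.error('Something went wrong');     // written to stderr rather than stdout
console.assert(1 === 2, 'Math is broken'); // throws or logs an assertion failure (depending on your Node version) because the condition is false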

The “undefined” written to the output is the return value of console.log, which returns nothing. Node.js displays the value of every expression it evaluates, whether or not that value is defined.

Note: If you’ve worked with JavaScript before, you need to be aware that there are a few differences between the Node.js environment and the browser environment. The global object is called global instead of window. Typing global and pressing enter at the Node.js interactive prompt displays all the methods and objects in the global namespace; however it’s easier to just use the Node.js documentation as a reference. :]

The global object has all the pre-defined constants, functions, and datatypes available to programs running in the Node.js environment. Any user-created variables are added to the global context object as well. The output of global will list pretty much everything accessible in memory.
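For example, at the interactive prompt (this applies to the REPL; variables declared with var inside a module file stay scoped to that module):

answer = 42;                // a variable created at the prompt
console.log(global.answer); // prints 42, because the variable was attached to the global object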

Running a Node.js Script

The interactive environment of Node.js is great for playing around and debugging your JavaScript expressions, but usually you’ll use script files to get the real work done. Just as an iOS app includes main.m as its starting point, the default entry point for Node.js is index.js. However, unlike Objective-C there is no main function; instead, index.js is evaluated from top to bottom.

Press Control+C twice to exit the Node.js shell. Execute the following command to create a new folder to hold your scripts:

mkdir ~/Documents/NodeTutorial

Now execute the following command to navigate into this new folder and create a new script file in your default text editor:

cd ~/Documents/NodeTutorial/; edit index.js

Add the following code to index.js:

console.log("Hello World.");

Save your work, return to Terminal and execute the following command to see your script in action:

node index.js

Once again, there’s the familiar “Hello World” output. You can also execute node . to run your script, as . looks for index.js by default.

[Screenshot: running the script with node]

Admittedly, a “hello world” script does not a server make, but it’s a quick way to test out your installation. The next section introduces you to the world of Node.js packages which will form the foundation of your shiny new web server!

Node Packages

Node.js applications are divided up into packages, which are the “frameworks” of the Node.js world. Node.js comes with several basic and powerful packages, but there are over 50,000 public packages provided by the vibrant developer community — and if you can’t find what you need, you can easily make your own.

Note: Check out https://npmjs.org/ for a list of available packages.

Replace the contents of index.js with the following code:

//1
var http = require('http');
 
//2 
http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/html'});
  res.end('<html><body><h1>Hello World</h1></body></html>');
}).listen(3000);
 
console.log('Server running on port 3000.');

Taking each numbered comment in turn, you’ll see the following:

  1. require imports modules into the current file. In this case you’re importing the HTTP libraries.
  2. Here you create the web service that responds to simple HTTP requests by sending a 200 response and writing the page content into the response.

One of the biggest strengths of Node.js as a runtime environment is its event-driven model. It’s designed around the concept of callbacks that are called asynchronously. In the above example, you’re listening on port 3000 for incoming http requests. When you receive a request, your script calls function (req, res) {…} and returns a response to the caller.

Save your file, then return to Terminal and execute the following command:

node index.js

You should see the following console output:

[Screenshot: the server’s console output]

Open your favorite browser and navigate to http://localhost:3000; lo and behold, there’s Node.js serving up your “Hello World” page:

[Screenshot: the Hello World page in the browser]

Your script is still sitting there, patiently waiting for http requests on port 3000. To kill your Node instance, simply press Ctrl+C in Terminal.

Note: Node packages are usually written with a top-level function or object that is exported. This function is then assigned to a top-level variable by using the require function. This helps manage scope and expose the module’s API in a sane manner. You’ll see how to create a custom module later in this tutorial when you add a driver for MongoDB.
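As a quick illustration of that pattern, a minimal custom module might look like the sketch below; greeter.js and the greet function are hypothetical names, used only to show the export/require pattern:

// greeter.js
module.exports = function(name) {
  return 'Hello, ' + name + '!';
};

// index.js (or any other file in the same folder)
var greet = require('./greeter'); // the exported function is assigned to a top-level variable
console.log(greet('World'));      // prints "Hello, World!"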

NPM – Using External Node Modules

The previous section covered Node.js built-in modules, but what about third party modules such as Express which you’ll need later to provide the routing middleware for your server platform?

External modules are also imported into the file with the require function, but you have to download them separately and then make them available to your Node instance.

Node.js uses npm, its package manager, to download, install, and maintain package dependencies. If you’re familiar with CocoaPods or RubyGems, then npm will feel familiar. Your Node.js application’s package.json file defines its configuration and npm dependencies.

Using Express

Express is a popular Node.js module for routing middleware. Why do you need this separate package? Consider the following scenario.

If you used just the http module by itself, you’d have to parse each request’s location separately to figure out what content to serve up to the caller — and that would become unwieldy in a very short time.

However, with Express you can easily define routes and chains of callbacks for each request. Express also makes it easy to provide different callbacks based on the HTTP verb (e.g. POST, PUT, GET, DELETE, HEAD).

A Short Diversion into HTTP Verbs

An HTTP request includes a method — or verb — value. The default value is GET, which is for fetching data, such as web pages in a browser. POST is meant for uploading data, such as submitting web forms. For web APIs, POST is generally used to add data, but it can also be used for remote procedure call-type endpoints.

PUT differs from POST in that it is generally used to replace existing data. In practical terms POST and PUT are usually used in the same way: to provide entities in the request body to be placed into a backend datastore. DELETE is used to remove items from your backend datastore.

POST, GET, PUT, and DELETE are the HTTP implementations of the CRUD model — Create, Read, Update and Delete.

There are a few other HTTP verbs that are less well-known. HEAD acts like GET but only returns the response headers, not the body. This helps minimize data transfer if the information in the response headers is sufficient to determine whether there is new data available. Other verbs such as TRACE and CONNECT are used for network routing.
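To make the CRUD mapping concrete, here’s a minimal sketch of what verb-specific routes look like in Express, which you’ll start using in the next section; the /widgets path is just a placeholder:

var express = require('express');
var app = express();

app.post('/widgets', function (req, res) { res.send('create a widget'); });                     // Create
app.get('/widgets/:id', function (req, res) { res.send('read widget ' + req.params.id); });     // Read
app.put('/widgets/:id', function (req, res) { res.send('update widget ' + req.params.id); });   // Update
app.delete('/widgets/:id', function (req, res) { res.send('delete widget ' + req.params.id); });// Delete

app.listen(3000);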

Adding a Package to Your Node Instance

Execute the following command in Terminal:

edit package.json

This creates a new package.json to contain your package configuration and dependencies.

Add the following code to package.json:

{
  "name": "mongo-server",
  "version": "0.0.1",
  "private": true,
  "dependencies": {
    "express": "3.3.4"
  }
}

This file defines some metadata such as the project name and version and, most importantly for your purposes, the package dependencies. Here’s what each line means:

  • name is the name of the project.
  • version is the current version of the project.
  • private prevents the project from being published accidentally if you set it to true.
  • dependencies is a list containing the Node.js modules used by your application.

Dependencies take the form of key/value pairs of module names and versions. Your list of dependencies contains version 3.3.4 of Express; if you want npm to fetch the latest version of a package, you can use the wildcard “*”.
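For example, a dependencies entry that always pulls the newest Express release would look like the following; stick with the pinned version above for this tutorial so the code behaves predictably:

"dependencies": {
  "express": "*"
}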

Save your file, then execute the following command in Terminal:

npm install

You’ll see the following output:

[Screenshot: npm install output]

npm install downloads and installs the dependencies specified in package.json — and your dependencies’ dependencies! :] — into a folder named node_modules and makes them available to your application.

Once npm has finished, you can use Express in your application.

Find the following line in index.js:

var http = require('http');

…and add the require call for Express, as below:

var http = require('http'),
    express = require('express');

This imports the Express package and stores it in a variable named express.

Add the following code to index.js just after the section you added above:

var app = express();
app.set('port', process.env.PORT || 3000);

This creates an Express app and sets its port to 3000 by default. You can override this default by creating an environment variable named PORT. This type of customization is pretty handy during development, especially if you have multiple applications listening on different ports.
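For example, you could run a second copy of the app on a different port without changing any code by setting the variable when you launch it:

PORT=8080 node index.js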

Add the following code to index.js just under the section you added above:

app.get('/', function (req, res) {
  res.send('<html><body><h1>Hello World</h1></body></html>');
});

This creates a route handler, which is a fancy name for a chain of request handlers for a given URL. Express matches the specified paths in the request and executes the callback appropriately.

Your callback above tells Express to match the root “/” and return the given HTML in the response. send formats the various response headers for you — such as content-type and the status code — so you can focus on writing great code instead.

Finally, replace the current http.createServer(...) section right down to and including the console.log line with the following code:

http.createServer(app).listen(app.get('port'), function(){
  console.log('Express server listening on port ' + app.get('port'));
});

This is a little more compact than before. app implements the function(req,res) callback separately instead of including it inline here in the createServer call. You’ve also added a completion handler callback that is called once the port is open and ready to receive requests. Now your app waits for the port to be ready before logging the “listening” message to the console.

To review, index.js should now look like the following:

var http = require('http'),
    express = require('express');
 
var app = express();
app.set('port', process.env.PORT || 3000); 
 
app.get('/', function (req, res) {
  res.send('<html><body><h1>Hello World</h1></body></html>');
});
 
http.createServer(app).listen(app.get('port'), function(){
  console.log('Express server listening on port ' + app.get('port'));
});

Save your file and execute the following command in Terminal:

node index.js

Return to your browser and reload http://localhost:3000 to check that your Hello World page still loads.

Your page looks no different than before, but there’s more than one way to see what’s going on under the hood.

Create another instance of Terminal and execute the following command:

curl -v http://localhost:3000

You should see the following output:

[Screenshot: curl -v output]

curl spits back the response headers and content for your HTTP request to show you the raw details of what’s being served up. Note the “X-Powered-By : Express” header; Express adds this metadata automatically to the response.

Serving up Content With Express

It’s easy to serve up static files using Express.

Add the following statement to the require section at the top of index.js:

path = require('path');

Now add the following line just after the app.set statement:

app.use(express.static(path.join(__dirname, 'public')));

This tells Express to use the middleware express.static which serves up static files in response to incoming requests.

path.join(__dirname, 'public') maps the local subfolder public to the base route /; it uses the Node.js path module to create a platform-independent subfolder string.

index.js should now look like the following:

//1
var http = require('http'),
    express = require('express'),
    path = require('path');
 
//2 
var app = express();
app.set('port', process.env.PORT || 3000); 
app.use(express.static(path.join(__dirname, 'public')));
 
app.get('/', function (req, res) {
  res.send('<html><body><h1>Hello World</h1></body></html>');
});
 
http.createServer(app).listen(app.get('port'), function(){
  console.log('Express server listening on port ' + app.get('port'));
});

Using the static handler, anything in /public can now be accessed by name.

To demonstrate this, kill your Node instance by pressing Control+C, then execute the commands below in Terminal:

mkdir public; edit public/hello.html

Add the following code to hello.html:

<html><body>Hello World</body></html>

This creates a new public directory with a basic static HTML file inside it.

Restart your Node instance again with the command node index.js. Point your browser to http://localhost:3000/hello.html and you’ll see the newly created page as follows:

[Screenshot: hello.html in the browser]

Advanced Routing

Static pages are all well and good, but the real power of Express is found in dynamic routing. Express uses a regular expression matcher on the route string and allows you to define parameters for the routing.

For example, the route string can contain the following items:

  • static terms: /pages only matches http://localhost:3000/pages
  • parameters prefixed with “:”: /files/:filename matches /files/foo and /files/bar, but not /files
  • optional parameters suffixed with “?”: /files/:filename? matches both /files/foo and /files
  • regular expressions: /^\/people\/(\w+)/ matches /people/jill and /people/john

To try it out, add the following route after the app.get statement in index.js:

app.get('/:a?/:b?/:c?', function (req,res) {
	res.send(req.params.a + ' ' + req.params.b + ' ' + req.params.c);
});

This creates a new route which takes up to three path levels and displays those path components in the response body. Anything that starts with a : is mapped to a request parameter of the supplied name.

Restart your Node instance and point your browser to: http://localhost:3000/hi/every/body. You’ll see the following page:

[Screenshot: the /hi/every/body route in the browser]

“hi” is the value of req.params.a, “every” is assigned to req.params.b and finally “body” is assigned to req.params.c.

This route matching is useful when building REST APIs where you can specify dynamic paths to specific items in backend datastores.

In addition to app.get, Express supports app.post, app.put and app.delete, among others.

Error Handling And Templated Web Views

Server errors can be handled in one of two ways. You can pass an exception up the call stack — and likely kill the app by doing so — or you can catch the error and return a valid error code instead.

The HTTP 1.1 protocol defines several error codes in the 4xx and 5xx ranges. The 4xx codes indicate client errors, such as requesting an item that doesn’t exist; a familiar one is the common 404 Not Found error. The 5xx codes indicate server errors, such as a timeout or a programming error like a null dereference.

You’ll add a catch-all route to display a 404 page when the requested content can’t be found. Since the route handlers are added in the order they are set with app.use or app.verb, a catch-all can be added at the end of the route chain.

Add the following code between the final app.get and the call to http.createServer in index.js:

app.use(function (req,res) { //1
    res.render('404', {url:req.url}); //2
});

This code renders the 404 page if no earlier handler has already responded to the request with res.send().

There are a few points to note in this segment:

  1. app.use(callback) matches all requests. When placed at the end of the list of app.use and app.verb statements, this callback becomes a catch-all.
  2. The call to res.render(view, params) fills the response body with output rendered from a templating engine. A templating engine takes a template file called a “view” from disk and replaces variables with a set of key-value parameters to produce a new document.

Wait — a “templating engine”? Where does that come from?

Express can use a variety of templating engines to serve up views. To make this example work, you’ll add the popular Jade templating engine to your application.

Jade is a simple language that eschews brackets and uses whitespace instead to determine the ordering and containment of HTML tags. It also allows for variables, conditionals, iteration and branching to dynamically create the HTML document.

Update the list of dependencies in package.json as follows:

{
  "name": "mongo-server",
  "version": "0.0.1",
  "private": true,
  "dependencies": {
    "express": "3.3.4",
    "jade": "1.1.5"
  }
}
Head back to Terminal, kill your Node instance, and execute the following command:

npm update

This downloads and installs the jade package for you.

Add the following code directly beneath the first app.set line in index.js:

app.set('views', path.join(__dirname, 'views'));
app.set('view engine', 'jade');

The first line above specifies where the view templates live, while the second line sets Jade as the view rendering engine.

Execute the following command in Terminal:

mkdir views; edit views/404.jade

Add the following code to 404.jade:

doctype html
body
    h1= 'Could not load page: ' + url

The first two lines of the Jade template create a new HTML document with a body element. The third line creates an h1 element inside the body element due to the indent. Spacing is important in Jade! :]

The text of the h1 element is the concatenated value of “Could not load page” and the value of the url parameter passed in as part of the res.render() in index.js.

As a quick check, your index.js file should now look like the following:

var http = require('http'),
    express = require('express'),
    path = require('path');
 
var app = express();
app.set('port', process.env.PORT || 3000); 
app.set('views', path.join(__dirname, 'views')); //A
app.set('view engine', 'jade'); //B
 
app.use(express.static(path.join(__dirname, 'public')));
 
app.get('/', function (req, res) {
  res.send('<html><body><h1>Hello World</h1></body></html>');
});
 
app.use(function (req,res) {
    res.render('404', {url:req.url});
});
 
http.createServer(app).listen(app.get('port'), function(){
  console.log('Express server listening on port ' + app.get('port'));
});

Restart your instance of Node and use your browser to load the URL http://localhost:3000/show/a/404/page. You’ll see the page below:

[Screenshot: the 404 page in the browser]

You now have enough starter code in index.js to accept incoming requests and provide some basic responses. All that you’re missing is the database persistence to turn this into a useful web application that can be leveraged from a mobile app.

Introducing MongoDB

MongoDB is a database that stores JSON objects. Unlike a SQL database, a NoSQL database like Mongo does not support entity relationships. In addition, there is no pre-defined schema, so entities in the same collection (or “table”, in relational-speak) don’t need to have the same fields or conform to any predefined pattern.

MongoDB also provides a powerful querying language, map-reduce operations, and support for location data. MongoDB is popular for its ability to scale, replicate and shard. Scaling and high-availability features are not covered in this tutorial.

The biggest drawbacks of MongoDB are the lack of relationship support and that it can be a memory hog as it memory-maps the actual database file. These issues can be mitigated by carefully structuring the data; this will be covered in part 2 of this tutorial.

Because of the close relationship between MongoDB documents and JSON, MongoDB is a great choice as a database for both web and mobile apps. MongoDB doesn’t store raw JSON; instead, the documents are stored in a format called BSON — Binary JSON — that is more efficient for data storage and queries. BSON also has the advantage of supporting more datatypes than JSON, such as dates and C-types.
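As a small illustration of those extra datatypes, a JavaScript Date inserted through the Node.js driver is stored as a native BSON date rather than a string, so sorting and range queries work on it later; you’ll see this again when the save() driver method stamps each object with created_at:

var item = {
  title: 'Hello World',
  created_at: new Date() // stored as a BSON date, not a string,
                         // so selectors like { created_at: { $gt: someDate } } behave as expected
};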

Adding MongoDB to Your Project

MongoDB is a native application and is accessed through drivers. There are a number of drivers for almost any environment, including Node.js. The MongoDB driver connects to the MongoDB server and issues commands to update or read data.

This means you’ll need a running MongoDB instance listening on an open port. Fortunately, that’s your very next step! :]

Open a new instance of Terminal and execute the following commands:

cd /usr/local/opt/mongodb/; mongod

This starts the MongoDB server daemon.

Now the MongoDB service is up and running on the default port of 27017.

Although the MongoDB driver provides database connectivity, it still has to be wired up to the server to translate incoming HTTP requests into the proper database commands.

Creating a MongoDB Collection Driver

Remember the /:a/:b/:c route you implemented earlier? What if you could use that pattern instead to look up database entries?

Since MongoDB documents are organized into collections, the route could be something simple like: /:collection/:entity which lets you access objects based on a simple addressing system in an extremely RESTful fashion!

Kill your Node instance and execute the following command in Terminal:

edit collectionDriver.js

Add the following code to collectionDriver.js:

var ObjectID = require('mongodb').ObjectID;

This line imports the required package; in this case, the ObjectID function from the MongoDB package.

Note: If you’re familiar with traditional databases, you probably understand the term “primary key”. MongoDB has a similar concept: by default, new entities are assigned a unique _id field of datatype ObjectID that MongoDB uses for optimized lookup and insertion. Since ObjectID is a BSON type and not a JSON type, you’ll have to convert any incoming strings to ObjectIDs if they’re to be used when comparing against an “_id” field.

Add the following code to collectionDriver.js just after the line you added above:

CollectionDriver = function(db) {
  this.db = db;
};

This function defines the CollectionDriver constructor method; it stores a MongoDB client instance for later use. In JavaScript, this is a reference to the current context, just like self in Objective-C.

Still working in the same file, add the following code below the block you just added:

CollectionDriver.prototype.getCollection = function(collectionName, callback) {
  this.db.collection(collectionName, function(error, the_collection) {
    if( error ) callback(error);
    else callback(null, the_collection);
  });
};

This section defines a helper method getCollection to obtain a Mongo collection by name. You define class methods by adding functions to prototype.

db.collection(name,callback) fetches the collection object and returns the collection — or an error — to the callback.

Add the following code to collectionDriver.js, below the block you just added:

CollectionDriver.prototype.findAll = function(collectionName, callback) {
    this.getCollection(collectionName, function(error, the_collection) { //A
      if( error ) callback(error);
      else {
        the_collection.find().toArray(function(error, results) { //B
          if( error ) callback(error);
          else callback(null, results);
        });
      }
    });
};

CollectionDriver.prototype.findAll gets the collection in line A above, and if there is no error such as an inability to access the MongoDB server, it calls find() on it in line B above. This returns all of the found objects.

find() returns a data cursor that can be used to iterate over the matching objects. find() can also accept a selector object to filter the results. toArray() organizes all the results in an array and passes it to the callback. This final callback then returns to the caller with either an error or all of the found objects in the array.
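You won’t need this for the tutorial, but as an illustration of passing a selector to find(), a hypothetical filtering method (findByField is an invented name, not part of the tutorial’s driver) could look like this:

CollectionDriver.prototype.findByField = function(collectionName, field, value, callback) {
    this.getCollection(collectionName, function(error, the_collection) {
      if( error ) callback(error);
      else {
        var selector = {};
        selector[field] = value; // e.g. { category: 'park' }
        the_collection.find(selector).toArray(function(error, results) { // only matching documents are returned
          if( error ) callback(error);
          else callback(null, results);
        });
      }
    });
};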

Still working in the same file, add the following code below the block you just added:

CollectionDriver.prototype.get = function(collectionName, id, callback) { //A
    this.getCollection(collectionName, function(error, the_collection) {
        if (error) callback(error);
        else {
            var checkForHexRegExp = new RegExp("^[0-9a-fA-F]{24}$"); //B
            if (!checkForHexRegExp.test(id)) callback({error: "invalid id"});
            else the_collection.findOne({'_id':ObjectID(id)}, function(error,doc) { //C
                if (error) callback(error);
                else callback(null, doc);
            });
        }
    });
};

In line A above, CollectionDriver.prototype.get obtains a single item from a collection by its _id. Like the findAll method, this call first obtains the collection object, then performs a findOne against it. Since the match is on the _id field, findOne() (like find()) has to compare using the correct datatype.

MongoDB stores _id fields as BSON type ObjectID. In line C above, ObjectID() takes a string and turns it into a BSON ObjectID to match against the collection. However, ObjectID() is persnickety and requires the appropriate hex string or it will return an error: hence, the regex check up front in line B.

This doesn’t guarantee there is a matching object with that _id, but it guarantees that ObjectID will be able to parse the string. The selector {'_id':ObjectID(id)} matches the _id field exactly against the supplied id.

Note: Reading from a non-existent collection or entity is not an error – the MongoDB driver just returns an empty container.

Add the following line to collectionDriver.js below the block you just added:

exports.CollectionDriver = CollectionDriver;

This line declares the exposed, or exported, entities to other applications that list collectionDriver.js as a required module.

Save your changes — you’re done with this file! Now all you need is a way to call this file.

Using Your Collection Driver

To call your collectionDriver, first add the following line to the dependencies section of package.json:

    "mongodb":"1.3.23"

Execute the following command in Terminal:

npm update

This downloads and installs the MongoDB package.

Execute the following command in Terminal:

edit views/data.jade

Now add the following code to data.jade, being mindful of the indentation levels:

body
    h1= collection
    #objects
        table(border=1)
          if objects.length > 0
              - each val, key in objects[0]
                  th= key 
          - each obj in objects
            tr.obj
              - each val, key in obj
                td.key= val

This template renders the contents of a collection in an HTML table to make them human-readable.

Add the following code to index.js just beneath the line path = require('path');, changing that line’s trailing semicolon to a comma so the var statement continues:

MongoClient = require('mongodb').MongoClient,
Server = require('mongodb').Server,
CollectionDriver = require('./collectionDriver').CollectionDriver;

Here you include the MongoClient and Server objects from the MongoDB module along with your newly created CollectionDriver.

Add the following code to index.js, just after the last app.set line:

var mongoHost = 'localHost'; //A
var mongoPort = 27017; 
var collectionDriver;
 
var mongoClient = new MongoClient(new Server(mongoHost, mongoPort)); //B
mongoClient.open(function(err, mongoClient) { //C
  if (!mongoClient) {
      console.error("Error! Exiting... Must start MongoDB first");
      process.exit(1); //D
  }
  var db = mongoClient.db("MyDatabase");  //E
  collectionDriver = new CollectionDriver(db); //F
});

Line A above assumes the MongoDB instance is running locally on the default port of 27017. If you ever run a MongoDB server elsewhere you’ll have to modify these values, but leave them as-is for this tutorial.

Line B creates a new MongoClient and the call to open in line C attempts to establish a connection. If your connection attempt fails, it is most likely because you haven’t yet started your MongoDB server. In the absence of a connection the app exits at line D.

If the client does connect, it opens the MyDatabase database at line E. A MongoDB instance can contain multiple databases, all of which have unique namespaces and unique data. Finally, you create the CollectionDriver object in line F and pass in a handle to the MongoDB client.

Replace the first two app.get calls in index.js with the following code:

app.get('/:collection', function(req, res) { //A
   var params = req.params; //B
   collectionDriver.findAll(req.params.collection, function(error, objs) { //C
       if (error) { res.send(400, error); } //D
       else {
           if (req.accepts('html')) { //E
               res.render('data', {objects: objs, collection: req.params.collection}); //F
           } else {
               res.set('Content-Type', 'application/json'); //G
               res.send(200, objs); //H
           }
       }
   });
});
 
app.get('/:collection/:entity', function(req, res) { //I
   var params = req.params;
   var entity = params.entity;
   var collection = params.collection;
   if (entity) {
       collectionDriver.get(collection, entity, function(error, objs) { //J
          if (error) { res.send(400, error); }
          else { res.send(200, objs); } //K
       });
   } else {
      res.send(400, {error: 'bad url', url: req.url});
   }
});

This creates two new routes: /:collection and /:collection/:entity. These call the collectionDriver.findAll and collectionDriver.get methods respectively and return either the JSON object or objects, an HTML document, or an error depending on the result.

When you define the /collection route in Express, it matches “collection” exactly. However, if you define the route as /:collection, as in line A, then it matches any first-level path and stores the requested name in req.params.collection, as in line B. In this case, you define the endpoint to match any URL and fetch the corresponding MongoDB collection using CollectionDriver’s findAll in line C.

If the fetch is successful, then the code checks if the request specifies that it accepts an HTML result in the header at line E. If so, line F stores the rendered HTML from the data.jade template in response. This simply presents the contents of the collection in an HTML table.

By default, web browsers specify that they accept HTML in their requests. When other types of clients request this endpoint such as iOS apps using NSURLSession, this method instead returns a machine-parsable JSON document at line G. res.send() returns a success code along with the JSON document generated by the collection driver at line H.

In the case where a two-level URL path is specified, line I treats this as the collection name and entity _id. You then request the specific entity using collectionDriver’s get() method in line J. If that entity is found, you return it as a JSON document at line K.

Save your work, restart your Node instance, check that your mongod daemon is still running and point your browser at http://localhost:3000/items; you’ll see the following page:

[Screenshot: the empty items collection in the browser]

Hey, that’s a whole lot of nothing! What’s going on?

Oh, wait — that’s because you haven’t added any data yet. Time to fix that!

Working With Data

Reading objects from an empty database isn’t very interesting. To test out this functionality, you need a way to add entities into the database.

Add the following new prototype method to collectionDriver.js, just before the exports.CollectionDriver line:

//save new object
CollectionDriver.prototype.save = function(collectionName, obj, callback) {
    this.getCollection(collectionName, function(error, the_collection) { //A
      if( error ) callback(error)
      else {
        obj.created_at = new Date(); //B
        the_collection.insert(obj, function() { //C
          callback(null, obj);
        });
      }
    });
};

Like findAll and get, save first retrieves the collection object at line A. The callback then takes the supplied entity and adds a field to record the date it was created at line B. Finally, you insert the modified object into the collection at line C. insert automatically adds _id to the object as well.

Add the following code to index.js just after the string of get methods you added a little while back:

app.post('/:collection', function(req, res) { //A
    var object = req.body;
    var collection = req.params.collection;
    collectionDriver.save(collection, object, function(err,docs) {
          if (err) { res.send(400, err); } 
          else { res.send(201, docs); } //B
     });
});

This creates a new route for the POST verb at line A which inserts the body as an object into the specified collection by calling save() that you just added to your driver. Line B returns the success code of HTTP 201 when the resource is created.

There’s just one final piece. Add the following line to index.js just after the app.set lines, but before any app.use or app.get lines:

app.use(express.bodyParser());

This tells Express to parse the incoming body data; if it’s JSON, then create a JSON object with it. By putting this call first, the body parsing will be called before the other route handlers. This way req.body can be passed directly to the driver code as a JavaScript object.

Restart your Node instance once again, and execute the following command in Terminal to insert a test object into your database:

curl -H "Content-Type: application/json" -X POST -d '{"title":"Hello World"}' http://localhost:3000/items

You’ll see the record echoed back to you in your console, like so:

[Screenshot: the POST response in Terminal]

Now head back to your browser and reload http://localhost:3000/items; you’ll see the item you inserted show up in the table:

[Screenshot: the new item in the browser table]

Updating and Deleting Data

You’ve implemented the Create and Read operations of CRUD — all that’s left are Update and Delete. These are relatively straightforward and follow the same pattern as the other two.

Add the following code to collectionDriver.js before the exports.CollectionDriver line:

//update a specific object
CollectionDriver.prototype.update = function(collectionName, obj, entityId, callback) {
    this.getCollection(collectionName, function(error, the_collection) {
        if (error) callback(error);
        else {
            obj._id = ObjectID(entityId); //A convert to a real obj id
            obj.updated_at = new Date(); //B
            the_collection.save(obj, function(error,doc) { //C
                if (error) callback(error);
                else callback(null, obj);
            });
        }
    });
};

The update() function takes an object and updates it in the collection using the MongoDB collection’s save() method in line C. This assumes that the body’s _id is the same as the one specified in the route at line A. Line B adds an updated_at field with the time the object was modified. Adding a modification timestamp is a good idea for understanding how data changes later in your application’s lifecycle.

Note that this update operation replaces whatever was in there before with the new object – there’s no property-level updating supported.
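If you ever do want property-level updates, the MongoDB driver supports update operators such as $set; a sketch of an alternative method (not used anywhere in this tutorial) might look like this:

//update only the supplied fields of a specific object
CollectionDriver.prototype.updateFields = function(collectionName, entityId, fields, callback) {
    this.getCollection(collectionName, function(error, the_collection) {
        if (error) callback(error);
        else {
            the_collection.update({'_id': ObjectID(entityId)}, {$set: fields}, function(error, count) {
                if (error) callback(error);
                else callback(null, count); // only the named fields change; the rest of the document is untouched
            });
        }
    });
};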

Add the following code to collectionDriver.js just before the exports.CollectionDriver line:

//delete a specific object
CollectionDriver.prototype.delete = function(collectionName, entityId, callback) {
    this.getCollection(collectionName, function(error, the_collection) { //A
        if (error) callback(error);
        else {
            the_collection.remove({'_id':ObjectID(entityId)}, function(error,doc) { //B
                if (error) callback(error);
                else callback(null, doc);
            });
        }
    });
};

delete() operates the same way as the other CRUD methods. It fetches the collection object in line A then calls remove() with the supplied id in line B.

Now you need two new routes to handle these operations. Fortunately, the PUT and DELETE verbs already exist so you can create handlers that use the same semantics as GET.

Add the following code to index.js just after the app.post() call:

app.put('/:collection/:entity', function(req, res) { //A
    var params = req.params;
    var entity = params.entity;
    var collection = params.collection;
    if (entity) {
       collectionDriver.update(collection, req.body, entity, function(error, objs) { //B
          if (error) { res.send(400, error); }
          else { res.send(200, objs); } //C
       });
   } else {
       var error = { "message" : "Cannot PUT a whole collection" };
       res.send(400, error);
   }
});

The put callback follows the same pattern as the single-entity get: you match on the collection name and _id as shown in line A. Like the post route, put passes the JSON object from the body to the new collectionDriver‘s update() method in line B.

The updated object is returned in the response (line C), so the client can resolve any fields updated by the server such as updated_at.

Add the following code to index.js just below the put method you added above:

app.delete('/:collection/:entity', function(req, res) { //A
    var params = req.params;
    var entity = params.entity;
    var collection = params.collection;
    if (entity) {
       collectionDriver.delete(collection, entity, function(error, objs) { //B
          if (error) { res.send(400, error); }
          else { res.send(200, objs); } //C 200 b/c includes the original doc
       });
   } else {
       var error = { "message" : "Cannot DELETE a whole collection" };
       res.send(400, error);
   }
});

The delete endpoint is very similar to put as shown by line A except that delete doesn’t require a body. You pass the parameters to collectionDriver‘s delete() method at line B, and if the delete operation was successful then you return the original object with a response code of 200 at line C.

If anything goes wrong during the above operation, you’ll return the appropriate error code.

Save your work and restart your Node instance.

Execute the following command in Terminal, replacing {_id} with whatever value was returned from the original POST call:

curl -H "Content-Type: application/json" -X PUT -d '{"title":"Good Golly Miss Molly"}' http://localhost:3000/items/{_id}

You’ll see the following response in Terminal:

[Screenshot: the PUT response in Terminal]

Head to your browser and reload http://localhost:3000/items; you’ll see the updated item in the table:

[Screenshot: the updated item in the browser table]

Execute the following command in Terminal to delete your record:

curl -H "Content-Type: application/json" -X DELETE  http://localhost:3000/items/{_id}

You’ll receive the following response from curl:

[Screenshot: the DELETE response in Terminal]

Reload http://localhost:3000/items and sure enough, your entry is now gone:

[Screenshot: the items table with the entry removed]

And with that, you’ve completed your entire CRUD model using Node.js, Express, and MongoDB!

Where to Go From Here?

Here is the completed sample project with all of the code from the above tutorial.

Your server is now ready for clients to connect and start transferring data. In the next part of this tutorial series, you’ll build an iOS app to connect to your new server and take advantage of some of the cool features of MongoDB and Express.

For more information on MongoDB, check out the official MongoDB documentation.

If you have any questions or comments about this tutorial, please join the discussion below!


How to Write An iOS App that Uses a Node.js/MongoDB Web Service

Learn how to create an iOS app that uses a Node.js/MongoDB server as its back end!

Welcome back to the second part of this two-part tutorial series on creating an iOS app with a Node.js and MongoDB back-end.

In the first part of this series, you created a simple Node.js server to expose MongoDB through a REST API.

In this second and final part of the series, you’ll create a fun iPhone application that lets users tag interesting places near them so other users can discover them.

As part of this process you’ll take the starter app provided and add several things: a networking layer using NSURLSession, support for geo queries and the ability to store images on the backend.

Getting Started

First things first: download the starter project and extract it to a convenient location on your system.

The zip file contains two folders:

  • server contains the javascript server code from the previous tutorial.
  • TourMyTown contains the starter Xcode project with the UI pre-built, but no networking code added yet.

Open TourMyTown\TourMyTown.xcodeproj and build and run. You should see something like this:

starter project

Right now not much happens, but here’s a sneak peek of what the app will look like when you finish this tutorial:

TourMyTown screenshot

Users add new location markers to the app along with descriptions, categories and pictures. The Add button places a marker at the center of the map, and the user can drag the marker to the desired location. Alternatively, a tap and hold places a marker at the selected location.

The view delegate uses Core Location’s geocoding functionality to look up the address and place name of the location, if it’s available. Tapping the Info button on the annotation view presents the detail editing screen.

Edit point of interest data

The app saves all data to the backend so that it can be recalled in future sessions.

The map with an annotation.

You’ve got a lot of work to do to transition the app to this state, so let’s get coding!

Setting up Your Node.js Instance

If you didn’t complete the first part of this tutorial or don’t want to use your existing project, you can use the files contained in the server directory as a starting point.

The following instructions take you through setting up your Node.js instance; if you already have your working instance from Part 1 of this tutorial then feel free to skip straight to the next section.

Open Terminal and navigate to the MongoDB install directory — likely /usr/local/opt/mongodb/ but this may be slightly different on your system.

Execute the following command in Terminal to start the MongoDB daemon:

mongod

Now navigate to the server directory you extracted above. Execute the following command:

npm install

This reads the package.json file and installs the dependencies for your new server code.

Finally, launch your Node.js server with the following command:

node .
Note: The starter project is configured to connect to localhost, port 3000. This is fine when you’re running the app locally on your simulator, but if you want to deploy the app to a physical device you’ll have to change localhost to <mac-name>.local if your Mac and iOS device are on the same network. If they’re not on the same network, then you’ll need to set it to the IP address of your machine. You’ll find these values near the top of Locations.m.

The Data Model of Your App

The Location class of your project represents a single point of interest and its associated data. It does the following things:

  • Holds the location’s data, including its coordinates, description, and categories.
  • Knows how to serialize and deserialize the object to a JSON-compatible NSDictionary.
  • Conforms to the MKAnnotation protocol so it can be placed on an instance of MKMapView as a pin.
  • Has zero or more categories as defined in Categories.m.

The Locations class represents your application’s collection of Location objects and the mechanisms that load the objects from the server. This class is responsible for:

  • Serving as the app’s data model by providing a filterable list of locations via filteredObjects.
  • Communicating with the server by loading and saving items via import, persist and query.

The Categories class contains the list of categories that a Location can belong to and provides the ability to filter the list of locations by category. Categories also does the following:

  • Houses allCategories which provides the master list of categories. You can also add additional categories to its array.
  • Provides a list of all categories in the active set of locations.
  • Filters the locations by categories.

Loading Locations from the Server

Replace the stubbed-out implementation of import in Locations.m with the following code:

- (void)import
{
    NSURL* url = [NSURL URLWithString:[kBaseURL stringByAppendingPathComponent:kLocations]]; //1
 
    NSMutableURLRequest* request = [NSMutableURLRequest requestWithURL:url];
    request.HTTPMethod = @"GET"; //2
    [request addValue:@"application/json" forHTTPHeaderField:@"Accept"]; //3
 
    NSURLSessionConfiguration* config = [NSURLSessionConfiguration defaultSessionConfiguration]; //4
    NSURLSession* session = [NSURLSession sessionWithConfiguration:config];
 
    NSURLSessionDataTask* dataTask = [session dataTaskWithRequest:request completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) { //5
        if (error == nil) {
            NSArray* responseArray = [NSJSONSerialization JSONObjectWithData:data options:0 error:NULL]; //6
            [self parseAndAddLocations:responseArray toArray:self.objects]; //7
        }
    }];
 
    [dataTask resume]; //8
}

Here’s what import does:

  1. The most important bits of information are the URL and request headers. The URL is simply the result of concatenating the base URL with the “locations” collection name.
  2. You’re using GET since you’re reading data from the server. GET is the default method so it’s not necessary to specify it here, but it’s nice to include it for completeness and clarity.
  3. The server code uses the contents of the Accept header as a hint to which type of response to send. By specifying that your request will accept JSON as a response, the returned bytes will be JSON instead of the default format of HTML.
  4. Here you create an instance of NSURLSession with a default configuration.
  5. A data task is your basic NSURLSession task for transferring data from a web service. There are also upload and download tasks with special behavior for long-running transfers and background operation. A data task runs asynchronously on a background thread, so you use a callback block to be notified when the operation completes or fails.
  6. The completion handler checks for any errors; if it finds none, it tries to deserialize the data using an NSJSONSerialization class method.
  7. Assuming the return value is an array of locations, parseAndAddLocations: parses the objects and notifies the view controller with the updated data.
  8. Oddly enough, data tasks are started with the resume message. When you create an instance of NSURLSessionTask it starts in the “paused” state, so to start it you simply call resume.

Still working in the same file, replace the stubbed-out implementation of parseAndAddLocations: with the following code:

- (void)parseAndAddLocations:(NSArray*)locations toArray:(NSMutableArray*)destinationArray //1
{
    for (NSDictionary* item in locations) { 
        Location* location = [[Location alloc] initWithDictionary:item]; //2
        [destinationArray addObject:location];        
    }
 
    if (self.delegate) {
        [self.delegate modelUpdated]; //3
    }
}

Taking each numbered comment in turn:

  1. You iterate through the array of JSON dictionaries and create a new Location object for each item.
  2. Here you use a custom initializer to turn the deserialized JSON dictionary into an instance of Location.
  3. The model signals the UI that there are new objects available.

Working together, these two methods let your app load the data from the server on startup. import relies on NSURLSession to handle the heavy lifting of networking. For more information on the inner workings of NSURLSession, check out the NSURLSession tutorial on this site.

Notice the Location class already has the following initializer which simply takes the various values in the dictionary and sets the corresponding object properties appropriately:

- (instancetype) initWithDictionary:(NSDictionary*)dictionary
{
    self = [super init];
    if (self) {
        self.name = dictionary[@"name"];
        self.location = dictionary[@"location"];
        self.placeName = dictionary[@"placename"];
        self.imageId = dictionary[@"imageId"];
        self.details = dictionary[@"details"];
        _categories = [NSMutableArray arrayWithArray:dictionary[@"categories"]];
    }
    return self;
}

Saving Locations to the Server

Unfortunately, loading locations from an empty database isn’t super interesting. Your next task is to implement the ability to save Locations to the database.

Replace the stubbed-out implementation of persist: in Locations.m with the following code:

- (void) persist:(Location*)location
{
    if (!location || location.name == nil || location.name.length == 0) {
        return; //input safety check
    }
 
 
    NSString* locations = [kBaseURL stringByAppendingPathComponent:kLocations];
 
    BOOL isExistingLocation = location._id != nil;
    NSURL* url = isExistingLocation ? [NSURL URLWithString:[locations stringByAppendingPathComponent:location._id]] :
    [NSURL URLWithString:locations]; //1
 
    NSMutableURLRequest* request = [NSMutableURLRequest requestWithURL:url];
    request.HTTPMethod = isExistingLocation ? @"PUT" : @"POST"; //2
 
    NSData* data = [NSJSONSerialization dataWithJSONObject:[location toDictionary] options:0 error:NULL]; //3
    request.HTTPBody = data;
 
    [request addValue:@"application/json" forHTTPHeaderField:@"Content-Type"]; //4
 
    NSURLSessionConfiguration* config = [NSURLSessionConfiguration defaultSessionConfiguration];
    NSURLSession* session = [NSURLSession sessionWithConfiguration:config];
 
    NSURLSessionDataTask* dataTask = [session dataTaskWithRequest:request completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) { //5
        if (!error) {
            NSArray* responseArray = @[[NSJSONSerialization JSONObjectWithData:data options:0 error:NULL]];
            [self parseAndAddLocations:responseArray toArray:self.objects];
        }
    }];
    [dataTask resume];
}

persist: parallels import and also uses an NSURLSession request to the locations endpoint. However, there are just a few differences:

  1. There are two endpoints for saving an object: /locations when you’re adding a new location, and /locations/_id when updating an existing location that already has an id.
  2. The request uses either PUT for existing objects or POST for new objects. The server code calls the appropriate handler for the route rather than using the default GET handler.
  3. Because you’re updating an entity, you provide an HTTPBody in your request: an instance of NSData created by the NSJSONSerialization class.
  4. Instead of an Accept header, you’re providing a Content-Type. This tells the bodyParser on the server how to handle the bytes in the body.
  5. The completion handler once again takes the modified entity returned from the server, parses it and adds it to the local collection of Location objects.

Notice that, just like initWithDictionary:, Location.m already has a helper method to handle the conversion of a Location object into a JSON-compatible dictionary, as shown below:

#define safeSet(d,k,v) if (v) d[k] = v;
- (NSDictionary*) toDictionary
{
    NSMutableDictionary* jsonable = [NSMutableDictionary dictionary];
    safeSet(jsonable, @"name", self.name);
    safeSet(jsonable, @"placename", self.placeName);
    safeSet(jsonable, @"location", self.location);
    safeSet(jsonable, @"details", self.details);
    safeSet(jsonable, @"imageId", self.imageId);
    safeSet(jsonable, @"categories", self.categories);
    return jsonable;
}

toDictionary contains a magical macro: safeSet(). Here you check that a value isn’t nil before you assign it to an NSDictionary; this avoids raising an NSInvalidArgumentException. You need this check as your app doesn’t force your object’s properties to be populated.

“Why not use an NSCoder?” you might ask. The NSCoding protocol with NSKeyedArchiver does many of the same things as toDictionary and initWithDictionary; namely, provide a key-value conversion for an object.

However, NSKeyedArchiver is set up to work with plists which is a different format with slightly different data types. The way you’re doing it above is a little simpler than repurposing the NSCoding mechanism.
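One related safeguard: NSJSONSerialization only accepts strings, numbers, arrays, dictionaries and NSNull, so if you later extend Location with other property types you can catch the problem before dataWithJSONObject: throws an exception. Here's a small sketch, not part of the tutorial code:

NSDictionary* json = [location toDictionary];
if ([NSJSONSerialization isValidJSONObject:json]) {
    request.HTTPBody = [NSJSONSerialization dataWithJSONObject:json options:0 error:NULL];
} else {
    NSLog(@"Location %@ contains values that can't be serialized to JSON", location.name);
}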

Saving Images to the Server

The starter project already has a mechanism to add photos to a location; this is a nice visual way to explore the data in the app. The pictures are displayed as thumbnails on the map annotation and in the details screen. The Location object already has a stub imageId which provides a link to a stored file on the server.

Adding an image requires two things: the client-side call to save and load images and the server-side code to store the images.

Return to Terminal, ensure you’re in the server directory, and execute the following command to create a new file to house your file handler code:

edit fileDriver.js

Add the following code to fileDriver.js:

var ObjectID = require('mongodb').ObjectID, 
    fs = require('fs'); //1
 
FileDriver = function(db) { //2
  this.db = db;
};

This sets up your FileDriver module as follows:

  1. This module uses the filesystem module fs to read and write to disk.
  2. The constructor accepts a reference to the MongoDB database driver to use in the methods that follow.

Add the following code to fileDriver.js, just below the code you added above:

FileDriver.prototype.getCollection = function(callback) {
  this.db.collection('files', function(error, file_collection) { //1
    if( error ) callback(error);
    else callback(null, file_collection);
  });
};

getCollection() fetches the files collection; in addition to the content of the file itself, each file has an entry in the files collection which stores the file's metadata, including its location on disk.

Add the following code below the block you just added above:

//find a specific file
FileDriver.prototype.get = function(id, callback) {
    this.getCollection(function(error, file_collection) { //1
        if (error) callback(error);
        else {
            var checkForHexRegExp = new RegExp("^[0-9a-fA-F]{24}$"); //2
            if (!checkForHexRegExp.test(id)) callback({error: "invalid id"});
            else file_collection.findOne({'_id':ObjectID(id)}, function(error,doc) { //3
                if (error) callback(error);
                else callback(null, doc);
            });
        }
    });
};

Here’s what’s going on in the code above:

  1. get fetches the files collection from the database.
  2. Since the input to this function is a string representing the object’s _id, you must convert it to a BSON ObjectID object.
  3. findOne() finds a matching entity if one exists.

Add the following code directly after the code you added above:

FileDriver.prototype.handleGet = function(req, res) { //1
    var fileId = req.params.id;
    if (fileId) {
        this.get(fileId, function(error, thisFile) { //2
            if (error) { res.send(400, error); }
            else {
                    if (thisFile) {
                         var filename = fileId + thisFile.ext; //3
                         var filePath = './uploads/'+ filename; //4
    	                 res.sendfile(filePath); //5
    	            } else res.send(404, 'file not found');
            }
        });        
    } else {
	    res.send(404, 'file not found');
    }
};

handleGet is a request handler used by the Express router. It simplifies the server code by abstracting the file handling away from index.js. It performs the following actions:

  1. Fetches the file entity from the database via the supplied id.
  2. Adds the extension stored in the database entry to the id to create the filename.
  3. The file itself is stored in the local uploads directory, so this builds the path to it.
  4. Calls sendfile() on the response object; this method knows how to transfer the file and set the appropriate response headers.

Once again, add the following code directly underneath what you just added above:

//save new file
FileDriver.prototype.save = function(obj, callback) { //1
    this.getCollection(function(error, the_collection) {
      if( error ) callback(error);
      else {
        obj.created_at = new Date();
        the_collection.insert(obj, function() {
          callback(null, obj);
        });
      }
    });
};

save() above is the same as the one in collectionDriver; it inserts a new object into the files collection.

Add the following code, again below what you just added:

FileDriver.prototype.getNewFileId = function(newobj, callback) { //1
	this.save(newobj, function(err,obj) {
		if (err) { callback(err); } 
		else { callback(null,obj._id); } //2
	});
};

  1. getNewFileId() is a wrapper around save for the purpose of creating a new file entity and returning its id alone.
  2. This returns only the _id from the newly created object.

Add the following code after what you just added above:

FileDriver.prototype.handleUploadRequest = function(req, res) { //1
    var ctype = req.get("content-type"); //2
    var ext = ctype.substr(ctype.indexOf('/')+1); //3
    if (ext) {ext = '.' + ext; } else {ext = '';}
    this.getNewFileId({'content-type':ctype, 'ext':ext}, function(err,id) { //4
        if (err) { res.send(400, err); } 
        else { 	         
             var filename = id + ext; //5
             var filePath = __dirname + '/uploads/' + filename; //6
 
	     var writable = fs.createWriteStream(filePath); //7
	     req.pipe(writable); //8
             req.on('end', function (){ //9
               res.send(201,{'_id':id});
             });               
             writable.on('error', function(err) { //10
                res.send(500,err);
             });
        }
    });
};
 
exports.FileDriver = FileDriver;

There’s a lot going on in this method, so take a moment and review the above comments one by one:

  1. handleUploadRequest creates a new object in the file collection using the Content-Type to determine the file extension and returns the new object’s _id.
  2. This looks up the value of the Content-Type header which is set by the mobile app.
  3. This tries to guess the file extension based upon the content type. For instance, an image/png should have a png extension.
  4. This saves Content-Type and extension to the file collection entity.
  5. Create a filename by appending the appropriate extension to the new id.
  6. The designated path to the file is in the server’s root directory, under the uploads sub-folder. __dirname is the Node.js value of the executing script’s directory.
  7. fs provides createWriteStream(), which, as you can probably guess, opens a writable output stream to the file.
  8. The request object is also a readable stream, so you can dump it into the write stream using the pipe() function. These stream objects are good examples of the Node.js event-driven paradigm.
  9. on() associates stream events with a callback. In this case, the readStream’s end event occurs when the pipe operation is complete, and here the response is returned to the Express code with a 201 status and the new file _id.
  10. If the write stream raises an error event then there is an error writing the file. The server response returns a 500 Internal Server Error response along with the appropriate filesystem error.

Since the above code expects there to be an uploads subfolder, execute the command below in Terminal to create it:

mkdir uploads

Add the following code to the end of the require block at the top of index.js:

    FileDriver = require('./fileDriver').FileDriver;

Next, add the following code to index.js just below the line var mongoPort = 27017;:

var fileDriver;

Then, in the mongoClient setup callback, create an instance of FileDriver just after the line var db = mongoClient.db("MyDatabase"); and the existing CollectionDriver creation:

fileDriver = new FileDriver(db);

This creates an instance of your new FileDriver.

Add the following code just before the generic /:collection routing in index.js:

app.use(express.static(path.join(__dirname, 'public')));
app.get('/', function (req, res) {
  res.send('<html><body><h1>Hello World</h1></body></html>');
});
 
app.post('/files', function(req,res) {fileDriver.handleUploadRequest(req,res);});
app.get('/files/:id', function(req, res) {fileDriver.handleGet(req,res);});

Placing these routes before the generic /:collection routing means that requests to /files are handled by the FileDriver rather than being treated as just another generic collection.

Save your work, kill your running Node instance with Control+C if necessary and restart it with the following command:

node index.js

Your server is now set up to handle files, so that means you need to modify your app to post images to the server.

Saving Images in your App

The Location class has two properties: image and imageId. imageId is the backend property that links the entity in the locations collection to the entity in the files collection. If this were a relational database, you’d use a foreign key to represent this link. image stores the actual UIImage object.

Saving and loading files requires an extra request for each object to transfer the file data. The order of operations is important to make sure the file id is properly associated with the object. When you save a file, you must send the file first so that you receive the associated id and can link it to the location's data.

Add the following code to the bottom of Locations.m:

- (void) saveNewLocationImageFirst:(Location*)location
{
    NSURL* url = [NSURL URLWithString:[kBaseURL stringByAppendingPathComponent:kFiles]]; //1
    NSMutableURLRequest* request = [NSMutableURLRequest requestWithURL:url];
    request.HTTPMethod = @"POST"; //2
    [request addValue:@"image/png" forHTTPHeaderField:@"Content-Type"]; //3
 
    NSURLSessionConfiguration* config = [NSURLSessionConfiguration defaultSessionConfiguration];
    NSURLSession* session = [NSURLSession sessionWithConfiguration:config];
 
    NSData* bytes = UIImagePNGRepresentation(location.image); //4
    NSURLSessionUploadTask* task = [session uploadTaskWithRequest:request fromData:bytes completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) { //5
        if (error == nil && [(NSHTTPURLResponse*)response statusCode] < 300) {
            NSDictionary* responseDict = [NSJSONSerialization JSONObjectWithData:data options:0 error:NULL];
            location.imageId = responseDict[@"_id"]; //6
            [self persist:location]; //7
        }
    }];
    [task resume];
}

This is a fairly busy method, but it's straightforward when you break it into small chunks:

  1. The URL is the files endpoint.
  2. Using POST triggers handleUploadRequest of fileDriver to save the file.
  3. Setting the content type ensures the file will be saved appropriately on the server. The Content-Type header is important for determining the file extension on the server.
  4. UIImagePNGRepresentation turns an instance of UIImage into PNG file data; a JPEG variation is sketched just after this list.
  5. NSURLSessionUploadTask lets you send NSData to the server in the request itself. For example, upload tasks automatically set the Content-Length header based on the data length. Upload tasks also report progress and can run in the background, but neither of those features is used here.
  6. The response contains the new file data entity, so you save _id along with the location object for later retrieval.
  7. Once the image is saved and _id recorded, then the main Location entity can be saved to the server.
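As a possible variation (not part of the tutorial), you could upload JPEG data instead to shrink the payload; since the server derives the file extension from the Content-Type header, image/jpeg would simply be stored with a .jpeg extension. The sketch below shows the two lines of saveNewLocationImageFirst: that would change:

NSData* bytes = UIImageJPEGRepresentation(location.image, 0.8); // 0.8 = compression quality
[request setValue:@"image/jpeg" forHTTPHeaderField:@"Content-Type"];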

Add the following code to persist: in Locations.m, just after the if (!location || location.name == nil || location.name.length == 0) block's closing brace:

    //if there is an image, save it first
    if (location.image != nil && location.imageId == nil) { //1
        [self saveNewLocationImageFirst:location]; //2
        return;
    }

This checks for the presence of a new image, and saves the image first. Taking each numbered comment in turn, you’ll find the following:

  1. If there is an image but no image id, then the image hasn’t been saved yet.
  2. Calls the new method to save the image, then exits.

Once the save is complete, persist: will be called again, but at that point imageId will be non-nil, so the code will proceed into the existing procedure for saving the Location entity.

Next replace the stub method loadImage: in Location.m with the following code:

- (void)loadImage:(Location*)location
{
    NSURL* url = [NSURL URLWithString:[[kBaseURL stringByAppendingPathComponent:kFiles] stringByAppendingPathComponent:location.imageId]]; //1
 
    NSURLSessionConfiguration* config = [NSURLSessionConfiguration defaultSessionConfiguration];
    NSURLSession* session = [NSURLSession sessionWithConfiguration:config];
 
    NSURLSessionDownloadTask* task = [session downloadTaskWithURL:url completionHandler:^(NSURL *fileLocation, NSURLResponse *response, NSError *error) { //2
        if (!error) {
            NSData* imageData = [NSData dataWithContentsOfURL:fileLocation]; //3
            UIImage* image = [UIImage imageWithData:imageData];
            if (!image) {
                NSLog(@"unable to build image");
            }
            location.image = image;
            if (self.delegate) {
                [self.delegate modelUpdated];
            }
        }
    }];
 
    [task resume]; //4
}

Here’s what’s going on in the code above:

  1. Just like when loading a specific location, the image’s id is appended to the path along with the name of the endpoint: files.
  2. The download task is the third kind of NSURLSession task; it downloads a file to a temporary location and returns a URL to that location, rather than the raw NSData object, as the raw object can be rather large.
  3. The temporary location is only guaranteed to be available during the completion block's execution, so you must either load the file into memory, or move it somewhere else (an alternative that moves the file is sketched after this list).
  4. Like all NSURLSession tasks, you start the task with resume.
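If you'd rather not hold the image bytes in memory, an alternative (not used in this app) is to move the downloaded file out of its temporary location inside the completion handler and keep the URL for later. A sketch, reusing the fileLocation and location variables from the block above:

NSURL* cachesURL = [[[NSFileManager defaultManager] URLsForDirectory:NSCachesDirectory
                                                           inDomains:NSUserDomainMask] firstObject];
NSURL* destination = [cachesURL URLByAppendingPathComponent:location.imageId];
[[NSFileManager defaultManager] removeItemAtURL:destination error:NULL]; // overwrite any stale copy
NSError* moveError = nil;
[[NSFileManager defaultManager] moveItemAtURL:fileLocation toURL:destination error:&moveError];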

Next, replace the current parseAndAddLocations:toArray: with the following code:

- (void)parseAndAddLocations:(NSArray*)locations toArray:(NSMutableArray*)destinationArray
{
    for (NSDictionary* item in locations) {
        Location* location = [[Location alloc] initWithDictionary:item];
        [destinationArray addObject:location];
 
        if (location.imageId) { //1
            [self loadImage:location]; 
        }
    }
 
    if (self.delegate) {
        [self.delegate modelUpdated];
    }
}

This updated version of parseAndAddLocations:toArray: checks for an imageId; if it finds one, it calls loadImage:.

A Quick Recap of File Handling

To summarize: file transfers in an iOS app work conceptually the same way as regular data transfers. The big difference is that you're using NSURLSessionUploadTask and NSURLSessionDownloadTask objects, whose semantics differ slightly from those of NSURLSessionDataTask.

On the server side, file wrangling is a fairly different beast. It requires a special handler object that communicates with the filesystem instead of a Mongo database, but still needs to store some metadata in the database to make retrieval easier.

Special routes are then set up to map the incoming HTTP verb and endpoint to the file driver. You could accomplish this with generic data endpoints, but the code would get quite complicated when determining where to persist the data.

Testing it Out

Build and run your app and add a new location by tapping the button in the upper right.

As part of creating your new location, add an image. Note that you can add images to the simulator by long-pressing on pictures in Safari.

Once you’ve saved your new location, restart the app — and lo and behold, the app reloads your data without a hitch, as shown in the screenshot below:

Adding an image to a Location in Tour My Town.

Location annotation with an image.

Querying for Locations

Your ultra-popular Tour My Town app will collect a ton of data incredibly quickly after it’s released. To prevent long wait times while downloading all of the data for the app, you can limit the amount of data retrieved by using location-based filtering. This way you only retrieve the data that’s going to be shown on the screen.

MongoDB has a powerful feature for finding entities that match given criteria. These criteria can be basic comparisons, type checking, expression evaluation (including regular expressions and arbitrary JavaScript), and geospatial querying.

MongoDB's geospatial querying is a natural fit for a map-based application; you can use the extents of the map view to obtain only the subset of data that will be shown on the screen.

Your next task is to modify collectionDriver.js to supply filter criteria with a GET request.

Add the following method above the final exports line in collectionDriver.js:

//Perform a collection query
CollectionDriver.prototype.query = function(collectionName, query, callback) { //1
    this.getCollection(collectionName, function(error, the_collection) { //2
      if( error ) callback(error)
      else {
        the_collection.find(query).toArray(function(error, results) { //3
          if( error ) callback(error)
          else callback(null, results)
        });
      }
    });
};

Here’s how the above code functions:

  1. query is similar to the existing findAll, except that it has a query parameter for specifying the filter criteria.
  2. You fetch the collection access object just like all the other methods.
  3. CollectionDriver‘s findAll method used find() with no arguments, but here the query object is passed in as an argument. This will be passed along to MongoDB for evaluation so that only the matching documents will be returned in the result.

Note: This passes in the query object directly to MongoDB. In an open API case, this can be dangerous since MongoDB permits arbitrary JavaScript using the $where query operator. This runs the risk of crashes, unexpected results, or security concerns; but in this tutorial project which uses a limited set of operations, it is a minor concern.

Go back to index.js and replace the current app.get('/:collection'... block with the following:

app.get('/:collection', function(req, res, next) {  
   var params = req.params;
   var query = req.query.query; //1
   if (query) {
        query = JSON.parse(query); //2
        collectionDriver.query(req.params.collection, query, returnCollectionResults(req,res)); //3
   } else {
        collectionDriver.findAll(req.params.collection, returnCollectionResults(req,res)); //4
   }
});
 
function returnCollectionResults(req, res) {
    return function(error, objs) { //5
        if (error) { res.send(400, error); }
	        else { 
                    if (req.accepts('html')) { //6
                        res.render('data',{objects: objs, collection: req.params.collection});
                    } else {
                        res.set('Content-Type','application/json');
                        res.send(200, objs);
                }
        }
    };
};

  1. HTTP queries can be added to the end of a URL in the form http://domain/endpoint?key1=value1&key2=value2.... req.query gets the whole “query” part of the incoming URL. For this application the key is “query” (hence req.query.query); see the sketch after this list for what such a URL looks like from the client side.
  2. The query value should be a string representing a MongoDB condition object. JSON.parse() turns the JSON string into a JavaScript object that can be passed directly to MongoDB.
  3. If a query was supplied to the endpoint, this calls collectionDriver.query(); returnCollectionResults is a common helper function that formats the output of the request.
  4. If no query was specified, then collectionDriver.findAll returns all the items in the collection.
  5. returnCollectionResults() is evaluated immediately when the route handler runs; it returns the callback function that the collection driver invokes with the results.
  6. If the request specified HTML for the response, then render the data table in HTML; otherwise return it as a JSON document in the body.
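To make the URL format concrete, here's a hedged client-side sketch of a simple, non-geospatial query, assuming the server from this tutorial is running at http://localhost:3000 (adjust to match your kBaseURL). It fetches only the documents whose name is "Central Park":

NSString* condition = @"{\"name\":\"Central Park\"}";
NSString* escaped = (NSString *)CFBridgingRelease(CFURLCreateStringByAddingPercentEscapes(NULL,
                                                      (CFStringRef)condition,
                                                      NULL,
                                                      (CFStringRef)@"!*();':@&=+$,/?%#[]{}",
                                                      kCFStringEncodingUTF8));
NSString* urlString = [NSString stringWithFormat:@"http://localhost:3000/locations?query=%@", escaped];
// GET urlString with an NSURLSessionDataTask, exactly as the upcoming runQuery: method does.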

Save your work, kill your Node.js instance and restart it with the following command:

node index.js

Now that the server is set up for queries, you can add the geo-querying functions to the app.

Replace the stubbed-out implementation of queryRegion: in Locations.m with the following code:

- (void) queryRegion:(MKCoordinateRegion)region
{
    //note assumes the NE hemisphere. This logic should really check first.
    //also note that searches across hemisphere lines are not interpreted properly by Mongo
    CLLocationDegrees x0 = region.center.longitude - region.span.longitudeDelta; //1
    CLLocationDegrees x1 = region.center.longitude + region.span.longitudeDelta;
    CLLocationDegrees y0 = region.center.latitude - region.span.latitudeDelta;
    CLLocationDegrees y1 = region.center.latitude + region.span.latitudeDelta;
 
    NSString* boxQuery = [NSString stringWithFormat:@"{\"$geoWithin\":{\"$box\":[[%f,%f],[%f,%f]]}}",x0,y0,x1,y1]; //2
    NSString* locationInBox = [NSString stringWithFormat:@"{\"location\":%@}", boxQuery]; //3
    NSString* escBox = (NSString *)CFBridgingRelease(CFURLCreateStringByAddingPercentEscapes(NULL,
                                                                                  (CFStringRef) locationInBox,
                                                                                  NULL,
                                                                                  (CFStringRef) @"!*();':@&=+$,/?%#[]{}",
                                                                                  kCFStringEncodingUTF8)); //4
    NSString* query = [NSString stringWithFormat:@"?query=%@", escBox]; //5
    [self runQuery:query]; //6
}

This is a fairly straightforward block of code; queryRegion: turns a Map Kit region generated from an MKMapView into a bounding-box query. Here's how it does it:

  1. These four lines calculate the map-coordinates of the two diagonal corners of the bounding box.
  2. This defines a JSON structure for the query using MongoDB’s specific query language.
    A query with a $geoWithin key specifies the search criteria as everything located within the structure defined by the provided value. $box specifies the rectangle defined by the provided coordinates and supplied as an array of two longitude-latitude pairs at opposite corners.
  3. boxQuery just defines the criteria value; you also have to provide the search key (location, in this case) along with boxQuery as a JSON object to MongoDB.
  4. You then escape the entire query object since it will be sent as part of a URL; you need to ensure that internal quotes, brackets, commas, and other non-alphanumeric bits won't be interpreted as part of the HTTP query parameter. CFURLCreateStringByAddingPercentEscapes is a Core Foundation method for creating URL-encoded strings.
  5. The final piece of the string building sets the entire escaped MongoDB query as the query value in the URL.
  6. You then request matching values from the server with your new query.
Note: In MongoDB coordinate pairs are specified as [longitude, latitude], which is the opposite of the usual lat/long pairing you’d see in things like the Google Maps API.

Replace the stubbed-out implementation of runQuery: in Locations.m with the following code:

- (void) runQuery:(NSString *)queryString
{
    NSString* urlStr = [[kBaseURL stringByAppendingPathComponent:kLocations] stringByAppendingString:queryString]; //1
    NSURL* url = [NSURL URLWithString:urlStr];
 
    NSMutableURLRequest* request = [NSMutableURLRequest requestWithURL:url];
    request.HTTPMethod = @"GET";
    [request addValue:@"application/json" forHTTPHeaderField:@"Accept"];
 
    NSURLSessionConfiguration* config = [NSURLSessionConfiguration defaultSessionConfiguration];
    NSURLSession* session = [NSURLSession sessionWithConfiguration:config];
 
    NSURLSessionDataTask* dataTask = [session dataTaskWithRequest:request completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
        if (error == nil) {
            [self.objects removeAllObjects]; //2
            NSArray* responseArray = [NSJSONSerialization JSONObjectWithData:data options:0 error:NULL];
            NSLog(@"received %d items", responseArray.count);
            [self parseAndAddLocations:responseArray toArray:self.objects];
        }
    }];    
     [dataTask resume];
}

runQuery: is very similar to import but has two important differences:

  1. You add the query string generated in queryRegion: to the end of the locations endpoint URL.
  2. You also discard the previous set of locations and replace them with the filtered set returned from the server. This keeps the active results at a manageable level.

Build and run your app; create a few new locations of interest that are spread out on the map. Zoom in a little, then pan and zoom the map and watch NSLog display the changing count of the items both inside and outside the map range, as shown below:

Debugger output while panning and zooming map.

Using Queries to Filter by Category

The last bit is to add categories to your Locations that users can filter on. This filtering can reuse the server work done in the previous section through MongoDB's array conditional operators.

Replace the stubbed-out implementation of query in Categories.m with the following code:

+ (NSString*) query
{
    NSArray* a = [self filteredCategories:YES]; //1
    NSString* query = @"";
    if (a.count > 0) {
 
        query = [NSString stringWithFormat:@"{\"categories\":{\"$in\":[%@]}}", [a componentsJoinedByString:@","]]; //2
        query = (NSString *)CFBridgingRelease(CFURLCreateStringByAddingPercentEscapes(NULL,
                                                                                           (CFStringRef) query,
                                                                                           NULL,
                                                                                           (CFStringRef) @"!*();':@&=+$,/?%#[]{}",
                                                                                           kCFStringEncodingUTF8));
 
        query = [@"?query=" stringByAppendingString:query];
    }
    return query;
}

This creates a query string similar to the one used by the geolocation query, with the following differences:

  1. This is the list of selected categories.
  2. The $in operator matches a MongoDB document when its categories field contains any of the values in the supplied array.

Build and run your app; add a few Locations and assign them one or more categories. Tap the folder icon and select a category to filter on. The map will reload just the Location annotations matching the selected categories as shown below:

A map with many locations

Select just the “Park” category

Map after filtering

Where to Go From Here?

You can download the completed sample project here.

In this tutorial you covered the basics of MongoDB storage — but there’s a ton of functionality beyond what you covered here.

MongoDB offers a multitude of options for selecting data out of the database, along with a host of server-side features to manage scaling and security. Your Node.js installation could also be improved by adding user authentication and more privacy around the data.

As for your iOS app, you could add a pile of interesting features, including the following:

  • Routing users to points of interest
  • Adding additional media to locations
  • Improved text editing

Additionally, every decent networked app should cache data locally so it remains functional when data connections are spotty.

Hopefully you’ve enjoyed this small taste of Node.js, Express and MongoDB — if you have any questions or comments please come join the discussion below!

The post How to Write An iOS App that Uses a Node.js/MongoDB Web Service appeared first on Ray Wenderlich.

Video Tutorial: Beginning 3D Modeling with Blender: Getting Started

Developing iOS 7 Applications with iBeacons Tutorial

Have you ever wished that your phone could show your location inside a large building like a shopping mall or baseball stadium?

Sure, GPS can give you an idea of which side of the building you're on. But good luck getting an accurate GPS signal inside one of those steel and concrete sarcophagi. What you need is something inside of the building to let your device determine its physical location.

Enter… iBeacons!

In this iBeacons tutorial you’ll create an app that lets you register known iBeacon emitters and tells you when your phone has moved outside of their range. If you’re wondering how this could be useful, just think about how many times you’ve arrived at work or school early, only to have to turn around because you forgot your laptop bag.

This handy little app and an iBeacon could save you gas, time and even face — when you throw out some choice language around your colleagues first thing Monday morning as you storm out the door to retrieve your laptop bag!

The use case for this app is attaching an iBeacon emitter to your laptop bag, purse, or even your cat’s collar — anything important you don’t want to lose. Once your device moves outside the range of the emitter, your app detects this and notifies you.

By the end of this iBeacons tutorial you’ll understand how to monitor iBeacons and react appropriately when you encounter one.

iBeacon Hardware

When Apple introduced iBeacon in iOS 7, they also announced that any compatible iOS device could act as an iBeacon. However, they also stated that hardware vendors could create stand-alone, low-cost iBeacons as well. As of this posting, it’s been about six months since iOS 7 was released and multiple companies have announced and released stand-alone hardware iBeacon emitters.

iBeacons use Bluetooth LE technology, so you must possess an iOS device with built-in Bluetooth LE to work with iBeacons. The list currently includes the following devices:

  • iPhone 4s or later
  • 3rd generation iPad or later
  • iPad Mini or later
  • 5th generation iPod touch or later

KSTechnology-Particles

I was lucky enough to get my hands on some evaluation units created by the talented (and friendly!) team at KS Technologies. Their iBeacon hardware, named Particle, comes pre-programmed to broadcast a specific UUID, major and minor combination — you’ll learn what these are shortly. It also boasts a button cell battery advertised to keep your iBeacon running for up to six months.

KS Technologies has two apps available in the App Store: Particle Detector and Particle Accelerator. Particle Detector allows you to easily monitor for iBeacons without writing a single line of code, while Particle Accelerator allows you to wirelessly reconfigure the devices as well as update their firmware.

There are other iBeacon offerings out there as well, a quick Google search should reveal them to you. For the purposes of this tutorial, you’ll focus on the Particle by KS Technologies, although the basic concepts should apply to most other iBeacon hardware.

Note: If you do not have a standalone iBeacon emitter but you do have another iOS 7 device that supports Bluetooth LE, you can follow along by creating an app that acts as an iBeacon as described in Chapter 22 — What’s new in Core Location of iOS 7 by Tutorials.

UUID, Major, and Minor identifiers

If you’re unfamiliar with iBeacons, you might not be familiar with the terms UUID, major value and minor value.

An iBeacon is nothing more than a Bluetooth Low Energy device that advertises information in a specific structure. Those specifics are beyond the scope of this tutorial, but the important thing to understand is that iOS can monitor for iBeacons based on these UUID, major and minor values.

UUID is an acronym for universally unique identifier, which is effectively a random string; B558CBDA-4472-4211-A350-FF1196FFE8C8 is one example. In the context of iBeacons, a UUID is generally used to represent your top-level identity. If you generate a UUID as a developer and assign it to your iBeacon device, then when a device detects your iBeacon it knows exactly which iBeacon it’s talking to.

Major and minor values provide a little more granularity on top of the UUID. These values are simply 16-bit unsigned integers that identify each individual iBeacon, even ones with the same UUID.

For instance, if you owned multiple department stores you might have all of your iBeacons emit the same UUID, but each store would have its own major value, and each department within that store would have its own minor value. Your app could then respond to an iBeacon located in the shoe department of your Miami, Florida store.

You can learn more about UUIDs in this Wikipedia article on UUIDs
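When you're setting up your own hardware, you'll typically generate a fresh UUID and program it into the emitter with the vendor's configuration tool. You can use the uuidgen command in Terminal, or a throwaway line of code like this:

NSUUID* uuid = [NSUUID UUID];
NSLog(@"%@", uuid.UUIDString); // e.g. B558CBDA-4472-4211-A350-FF1196FFE8C8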

Getting Started

Download the starter project here — it contains a simple interface for adding and removing items from a table view. Each item in the table view represents a single iBeacon emitter, which in the real world translates to an item that you don’t want to leave behind.

Build and run the app; you’ll see an empty list, devoid of items. Press the + button at the top right to add a new item as shown in the screenshot below:

First Launch

First Launch

To add an item, you simply enter a name for the item and the values corresponding to its iBeacon. Try using 8AEFB031-6C32-486F-825B-E26FA193487D for the UUID, 1 for the major value and 2 for the minor value as placeholder values for now, as shown below:

Adding an Item

Adding an item

Note: You can find your iBeacon’s UUID using companion apps such as Particle Detector or by reviewing your iBeacon’s documentation.

Press Save to return to the list of items; you’ll see your item with a location of Unknown, as shown below:

ForgotMeNot-item-location-unknown

You can add more items if you wish, or swipe to delete existing ones. NSUserDefaults persists the items in the list so that they’re available when the user re-launches the app.

On the surface it appears there’s not much going on; most of the fun stuff is under the hood. The majority of the project will be straightforward for the seasoned iOS developer. The unique aspect in this app is the RWTItem model class which represents the items in the list.

Open RWTItem.h and have a look at it in Xcode. The model class mirrors what the interface requests from the user, and it conforms to NSCoding so that it can be serialized and deserialized to disk for persistence.

Now take a look at RWTAddItemViewController.m. This is the controller for adding a new item. It’s a simple UITableViewController, except that it does some validation on user input to ensure that the user enters valid names and UUIDs. The name field is only required to be at least one character, whereas the UUID field has more strict requirements based on the UUID specification.

Take a closer look at the following line in viewDidLoad:

NSString *uuidPatternString = @"^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$";

This is a regular expression pattern that checks the entered UUID against the standard UUID format. The pattern matches only if the string conforms to the format XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX, where each X is a hexadecimal digit, 0 through F. All iBeacons must broadcast a UUID and it must meet the above specifications.
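The starter project wires this validation up for you; purely as an illustration, one way to apply such a pattern is with NSRegularExpression (a sketch; the starter's actual implementation may differ):

NSRegularExpression *regex = [NSRegularExpression regularExpressionWithPattern:uuidPatternString
                                                                        options:NSRegularExpressionCaseInsensitive
                                                                          error:NULL];
NSString *candidate = self.uuidTextField.text;
NSUInteger matches = [regex numberOfMatchesInString:candidate
                                            options:0
                                              range:NSMakeRange(0, candidate.length)];
BOOL isValidUUID = (matches == 1);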

The Save button at the top right becomes tappable as soon as nameTextField and uuidTextField are both valid.

Now that you’re acquainted with the starter project, you can move on to implementing the iBeacon bits into your project!

Listening for Your iBeacon

Your device won’t listen for your iBeacon automatically — you have to tell it to do this first. The CLBeaconRegion class represents an iBeacon; the CL class prefix indicates that it's part of the Core Location framework.

It may seem strange for an iBeacon to be related to Core Location since it’s a Bluetooth device, but consider that iBeacons provide micro-location awareness while GPS provides macro-location awareness. You would leverage the Core Bluetooth framework for iBeacons when programming an iOS device to act as an iBeacon, but when monitoring for iBeacons you only need to work with Core Location.

Your first order of business is to adapt the RWTItem model for CLBeaconRegion.

Open RWTItem.h and replace its contents with the code shown below:

#import <Foundation/Foundation.h>
 
@import CoreLocation;
 
@interface RWTItem : NSObject <NSCoding>
 
@property (strong, nonatomic, readonly) NSString *name;
@property (strong, nonatomic, readonly) NSUUID *uuid;
@property (assign, nonatomic, readonly) CLBeaconMajorValue majorValue;
@property (assign, nonatomic, readonly) CLBeaconMinorValue minorValue;
 
- (instancetype)initWithName:(NSString *)name
                        uuid:(NSUUID *)uuid
                       major:(CLBeaconMajorValue)major
                       minor:(CLBeaconMinorValue)minor;
 
 
@end

The only differences here are that you import CoreLocation and replace the uint16_t types with CLBeaconMajorValue and CLBeaconMinorValue types for the majorValue and minorValue properties respectively.

Although the underlying data type is the same, this improves readability of the model.

Open RWTItem.m and update the types in the initWithName:uuid:major:minor: method’s signature as shown below:

- (instancetype)initWithName:(NSString *)name
                        uuid:(NSUUID *)uuid
                       major:(CLBeaconMajorValue)major
                       minor:(CLBeaconMinorValue)minor

This simply suppresses compiler warnings.

Open RWTItemsViewController.m, add an import statement for Core Location below the other imports, add CLLocationManagerDelegate as a protocol that this class conforms to, and finally create a new locationManager property of type CLLocationManager, as shown below:

#import "RWTItemCell.h"
 
@import CoreLocation;
 
static NSString * const kRWTStoredItemsKey = @"storedItems";
 
@interface RWTItemsViewController () <UITableViewDataSource, UITableViewDelegate, CLLocationManagerDelegate>
 
@property (weak, nonatomic) IBOutlet UITableView *itemsTableView;
@property (strong, nonatomic) NSMutableArray *items;
@property (strong, nonatomic) CLLocationManager *locationManager;
 
@end

Next, in viewDidLoad: add a call to initialize the self.locationManager property and assign its delegate to self, as shown below:

- (void)viewDidLoad {
    [super viewDidLoad];
 
    self.locationManager = [[CLLocationManager alloc] init];
    self.locationManager.delegate = self;
 
    [self loadItems];
}

This simply instructs the location manager that the class wants to receive its delegate method calls.

Now that you have an instance of CLLocationManager, you can instruct your app to begin monitoring for specific regions using CLBeaconRegion. When you register a region to be monitored, those regions persist between launches of your application. This will be important later when you respond to the boundary of a region being crossed while your application is not running.

Your iBeacon items in the list are represented by the RWTItem model via the items property. CLLocationManager, however, expects you to provide a CLBeaconRegion instance in order to begin monitoring a region.

Open RWTItemsViewController.m and create the following helper method:

- (CLBeaconRegion *)beaconRegionWithItem:(RWTItem *)item {
    CLBeaconRegion *beaconRegion = [[CLBeaconRegion alloc] initWithProximityUUID:item.uuid
                                                                           major:item.majorValue
                                                                           minor:item.minorValue
                                                                      identifier:item.name];
    return beaconRegion;
}

This returns a new CLBeaconRegion instance derived from the provided RWTItem.

You can see that the classes are similar in structure to each other, so creating an instance of CLBeaconRegion is very straightforward with initWithProximityUUID:major:minor:identifier:.
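For reference, CLBeaconRegion also offers coarser-grained initializers if you ever want to monitor every beacon that shares a UUID (or a UUID and major pair) regardless of its other values; this app doesn't need them, but a sketch looks like this:

CLBeaconRegion *allWithUUID = [[CLBeaconRegion alloc] initWithProximityUUID:item.uuid
                                                                 identifier:@"all beacons with this UUID"];
CLBeaconRegion *allWithMajor = [[CLBeaconRegion alloc] initWithProximityUUID:item.uuid
                                                                       major:item.majorValue
                                                                  identifier:@"all beacons with this UUID and major"];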

Now you need a method to begin monitoring a given item.

Still working in RWTItemsViewController.m, add the following code directly below beaconRegionWithItem::

- (void)startMonitoringItem:(RWTItem *)item {
    CLBeaconRegion *beaconRegion = [self beaconRegionWithItem:item];
    [self.locationManager startMonitoringForRegion:beaconRegion];
    [self.locationManager startRangingBeaconsInRegion:beaconRegion];
}

This method takes an RWTItem instance and creates a CLBeaconRegion instance using the method you defined earlier. It then tells the location manager to start monitoring the given region, and to start ranging iBeacons within that region. Ranging is the process of discovering iBeacons within the given region, and determining their distance. An iOS device receiving an iBeacon transmission can approximate the distance from the iBeacon. The distance (between transmitting iBeacon and receiving device) is categorised into 3 distinct ranges:

  • Immediate: within a few centimetres
  • Near: within a couple of metres
  • Far: greater than 10 metres away

By default, monitoring notifies you when the region is entered or exited regardless of whether your app is running. Ranging, on the other hand, monitors the proximity of the region only while your app is running.

You’ll also need a way to stop monitoring an item’s region after it’s deleted.

Add the following code to RWTItemsViewController.m directly below startMonitoringItem::

- (void)stopMonitoringItem:(RWTItem *)item {
    CLBeaconRegion *beaconRegion = [self beaconRegionWithItem:item];
    [self.locationManager stopMonitoringForRegion:beaconRegion];
    [self.locationManager stopRangingBeaconsInRegion:beaconRegion];
}

The above method is identical to startMonitoringItem: except that it calls the stop variants of the location manager’s monitor and ranging activities.

Now that you have the start and stop methods, it’s time to put them to use! The natural place to start monitoring is when a user adds a new item to the list.

Have a look at prepareForSegue: in RWTItemsViewController.m; you’ll see that RWTAddItemViewController has a callback that runs when you add items via the itemAddedCompletion block property.

Add a call to startMonitoringItem: in the itemAddedCompletion block as shown below:

[addItemViewController setItemAddedCompletion:^(RWTItem *newItem) {
    [self.items addObject:newItem];
    [self.itemsTableView beginUpdates];
    NSIndexPath *newIndexPath = [NSIndexPath indexPathForRow:self.items.count-1 inSection:0];
    [self.itemsTableView insertRowsAtIndexPaths:@[newIndexPath]
                               withRowAnimation:UITableViewRowAnimationAutomatic];
    [self.itemsTableView endUpdates];
    [self startMonitoringItem:newItem]; // Add this statement
    [self persistItems];
}];

Now when you add a new item you call startMonitoringItem: immediately. Take note of the subsequent call to persistItems; this takes all of the known items and persists them to NSUserDefaults so that the user won't have to re-enter their items each time they launch the app.
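persistItems is already implemented in the starter project, so you don't need to write it; for context, it most likely looks something like this sketch (the real implementation may differ slightly), archiving each RWTItem with NSKeyedArchiver and storing the array under kRWTStoredItemsKey:

- (void)persistItems {
    NSMutableArray *itemsData = [NSMutableArray arrayWithCapacity:self.items.count];
    for (RWTItem *item in self.items) {
        [itemsData addObject:[NSKeyedArchiver archivedDataWithRootObject:item]];
    }
    [[NSUserDefaults standardUserDefaults] setObject:itemsData forKey:kRWTStoredItemsKey];
    [[NSUserDefaults standardUserDefaults] synchronize];
}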

In RWTItemsViewController.m, viewDidLoad: calls loadItems which reads the persisted items from the user defaults and stores them in the items array.

Once the persisted items are loaded into memory, you have to start monitoring for them.

Still working in RWTItemsViewController.m, update loadItems to make sure each item is being monitored, as shown below:

- (void)loadItems {
    NSArray *storedItems = [[NSUserDefaults standardUserDefaults] arrayForKey:kRWTStoredItemsKey];
    self.items = [NSMutableArray array];
 
    if (storedItems) {
        for (NSData *itemData in storedItems) {
            RWTItem *item = [NSKeyedUnarchiver unarchiveObjectWithData:itemData];
            [self.items addObject:item];
            [self startMonitoringItem:item]; // Add this statement
        }
    }
}

Now every time you load the items from the persistence store you also instruct the location manager to begin monitoring for them.

You now have to take care of removing items from the list.

Replace the contents of tableView:commitEditingStyle:forRowAtIndexPath: with the following:

- (void)tableView:(UITableView *)tableView commitEditingStyle:(UITableViewCellEditingStyle)editingStyle forRowAtIndexPath:(NSIndexPath *)indexPath {
    if (editingStyle == UITableViewCellEditingStyleDelete) {
        RWTItem *itemToRemove = [self.items objectAtIndex:indexPath.row];
        [self stopMonitoringItem:itemToRemove];
        [tableView beginUpdates];
        [self.items removeObjectAtIndex:indexPath.row];
        [tableView deleteRowsAtIndexPaths:@[indexPath] withRowAnimation:UITableViewRowAnimationAutomatic];
        [tableView endUpdates];
        [self persistItems];
    }
}

In the code above you check if the editing style is UITableViewCellEditingStyleDelete; if so, you remove the item from the items array and use it to invoke stopMonitoringItem:.

It should strike you that a lot of this work has been really simple to implement. Apple has done an incredible job of making iBeacons very easy to use as a developer.

At this point you’ve made a lot of progress! Your application now starts and stops listening for specific iBeacons as appropriate.

You can build and run your app at this point; but even though your registered iBeacons might be within range your app has no idea how to react when it finds one!

Time to fix that!

Acting on Found iBeacons

Now that your location manager is listening for iBeacons, it’s time to react to them!

When you instantiated CLLocationManager earlier, you also set its delegate to self. It’s time to implement some of those delegate methods.

First and foremost is to add some error handling, since you’re dealing with very specific hardware features of the device and you want to know if the monitoring or ranging fails for any reason.

Add the following two methods to RWTItemsViewController.m:

- (void)locationManager:(CLLocationManager *)manager monitoringDidFailForRegion:(CLRegion *)region withError:(NSError *)error {
    NSLog(@"Failed monitoring region: %@", error);
}
 
- (void)locationManager:(CLLocationManager *)manager didFailWithError:(NSError *)error {
    NSLog(@"Location manager failed: %@", error);
}

These methods simply log the received error so you can analyze the output.

If everything goes smoothly in your app you should never see any output from these calls. However, it’s possible that these log messages could provide very valuable information if something isn’t working; it’s better than failing silently. Simple NSLog statements are okay for a tutorial, but in your shipped application you should handle the error situations in a more robust manner.
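For example, inside locationManager:monitoringDidFailForRegion:withError: you might surface the failure to the user instead of only logging it; a sketch using the iOS 7-era UIAlertView:

[[[UIAlertView alloc] initWithTitle:@"Monitoring failed"
                            message:error.localizedDescription
                           delegate:nil
                  cancelButtonTitle:@"OK"
                  otherButtonTitles:nil] show];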

The next step is to display the perceived proximity of your registered iBeacons in real-time.

Add the following stubbed-out method to RWTItemsViewController.m:

- (void)locationManager:(CLLocationManager *)manager
        didRangeBeacons:(NSArray *)beacons
               inRegion:(CLBeaconRegion *)region
{
 
}

The above delegate method is called when iBeacons come within range, move out of range, or when the range of an iBeacon changes.

The goal of your app is to use the passed-in array of ranged iBeacons to update the list of items and display their perceived proximity. You’ll start by iterating over the beacons array to match the ranged iBeacons with the ones in your list.

Update your stubbed-out implementation of locationManager:didRangeBeacons:inRegion: as shown below:

- (void)locationManager:(CLLocationManager *)manager
        didRangeBeacons:(NSArray *)beacons
               inRegion:(CLBeaconRegion *)region
{
    for (CLBeacon *beacon in beacons) {
        for (RWTItem *item in self.items) {
            // Determine if item is equal to ranged beacon
        }
    }
}

Now each time you range the iBeacons, you’ll iterate over both the ranged beacons and the items.

Note the comment in the second for loop; this is where you’ll need to implement some logic to check if this is the iBeacon representing the current item.

Open RWTItem.h and add the following method declaration:

- (BOOL)isEqualToCLBeacon:(CLBeacon *)beacon;

This new method compares a CLBeacon instance with the RWTItem instance to see if they are equal — that is, if all of their identifiers match.

Add the following property to RWTItem.h:

@property (strong, nonatomic) CLBeacon *lastSeenBeacon;

This property stores the last CLBeacon instance seen for this specific item; this is used to display the proximity information.

Add the following method to RWTItem.m:

- (BOOL)isEqualToCLBeacon:(CLBeacon *)beacon {
    if ([[beacon.proximityUUID UUIDString] isEqualToString:[self.uuid UUIDString]] &&
        [beacon.major isEqual: @(self.majorValue)] &&
        [beacon.minor isEqual: @(self.minorValue)])
    {
        return YES;
    } else {
        return NO;
    }
}

A CLBeacon instance is equal to an RWTItem if the UUID, major, and minor values are all equal.

Now you’ll need to complete the ranging delegate method with a call to the above helper method.

Update locationManager:didRangeBeacons:inRegion: in RWTItemsViewController.m to call your new method, as follows:

- (void)locationManager:(CLLocationManager *)manager
        didRangeBeacons:(NSArray *)beacons
               inRegion:(CLBeaconRegion *)region
{
    for (CLBeacon *beacon in beacons) {
        for (RWTItem *item in self.items) {
            if ([item isEqualToCLBeacon:beacon]) {
                item.lastSeenBeacon = beacon;
            }
        }
    }
}

In the code above you set lastSeenBeacon when you find a matching item and iBeacon.

Now it’s time to use that property to display the perceived proximity of the ranged iBeacon.

Open RWTItemCell.m and update the setItem: as follows:

- (void)setItem:(RWTItem *)item {
    if (_item) {
        [_item removeObserver:self forKeyPath:@"lastSeenBeacon"];
    }
 
    _item = item;
    [_item addObserver:self 
            forKeyPath:@"lastSeenBeacon"
               options:NSKeyValueObservingOptionNew
               context:NULL];
 
    self.textLabel.text = _item.name;
}

When you set the item for the cell you also add an observer for the lastSeenBeacon property. Additionally, if the cell already had an item set you remove the observer to keep things properly balanced, as required by key-value observation.

You should also remove the observer when the cell is deallocated. Still in RWTItemCell.m, add the following method:

- (void)dealloc {
    [_item removeObserver:self forKeyPath:@"lastSeenBeacon"];
}

Now that you’re observing for the value, you can put some logic in to react to any changes in the iBeacon’s proximity.

Each CLBeacon instance has a proximity property, which is a CLProximity enum value, so you must translate it into something more meaningful.

Add the following method to RWTItemCell.m:

- (NSString *)nameForProximity:(CLProximity)proximity {
    switch (proximity) {
        case CLProximityUnknown:
            return @"Unknown";
            break;
        case CLProximityImmediate:
            return @"Immediate";
            break;
        case CLProximityNear:
            return @"Near";
            break;
        case CLProximityFar:
            return @"Far";
            break;
    }
}

This returns a human-readable proximity value from proximity which you’ll use in the method below.

Now add the observeValueForKeyPath:ofObject:change:context: method, as follows:

- (void)observeValueForKeyPath:(NSString *)keyPath 
                      ofObject:(id)object 
                        change:(NSDictionary *)change 
                       context:(void *)context {
    if ([object isEqual:self.item] && [keyPath isEqualToString:@"lastSeenBeacon"]) {
        NSString *proximity = [self nameForProximity:self.item.lastSeenBeacon.proximity];
        self.detailTextLabel.text = [NSString stringWithFormat:@"Location: %@", proximity];
    }
}

The above method is called each time the lastSeenBeacon value changes; it sets the cell's detailTextLabel.text with the perceived proximity value.
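If you want more granularity than the four CLProximity buckets, CLBeacon also exposes an estimated distance in metres via its accuracy property (negative when no estimate is available). A sketch of how the label line above could be extended:

CLBeacon *beacon = self.item.lastSeenBeacon;
if (beacon.accuracy >= 0) {
    self.detailTextLabel.text = [NSString stringWithFormat:@"Location: %@ (~%.1fm)",
                                 [self nameForProximity:beacon.proximity], beacon.accuracy];
}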

Build and run your app; ensure your iBeacon is registered and move your device closer to and further away from the iBeacon. You'll see the label update as you move around, as shown below:

ForgetMeNot-item

You may find that the perceived proximity is drastically affected by the physical location of your iBeacon; if it’s placed inside of something like a box or a bag, the signal may be blocked as the iBeacon is a very low-power device and the signal can attenuate easily.

Keep this in mind when designing your application — and when deciding the best placement for your iBeacon hardware.

Notifications

Things feel pretty complete at this point; you have your list of iBeacons and can monitor their proximity in real time. But that isn’t the end goal of your app. You still need to notify the user when the app is not running in case they forgot their laptop bag or their cat ran away — or worse, if their cat ran away with the laptop bag! :]

zorro-ibeacon

They look so innocent, don’t they?

If you’ve learned anything by this point, it’s that it doesn’t take much code to add a lot of iBeacon functionality — and these last few methods are no different.

Open RWTAppDelegate.m and import the Core Location module as such:

@import CoreLocation;

Next, update the class extension to conform to the CLLocationManagerDelegate protocol and add a property for a CLLocationManager instance as shown below:

@interface RWTAppDelegate () <CLLocationManagerDelegate>
 
@property (strong, nonatomic) CLLocationManager *locationManager;
 
@end

Just as before, you need to initialize the location manager and set the delegate accordingly.

Add the following statements to the very top of application:didFinishLaunchingWithOptions::

self.locationManager = [[CLLocationManager alloc] init];
self.locationManager.delegate = self;

At first glance, this likely seems a little too simple. What’s the use of a newly allocated instance of CLLocationManager when your app launches and how will it know about the monitored regions?

Recall that any regions you add for monitoring using startMonitoringForRegion: are shared by all location managers in your application. So you get a little persistence for free, which turns out to be extremely helpful.

Without this capability, it would be up to you to figure out which regions were being monitored and to start monitoring them again each time the app is launched. But even that wouldn’t be sufficient, since your app wouldn’t know to wake up when a region was encountered.
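If you're ever curious exactly which regions iOS is tracking on your app's behalf, any CLLocationManager instance can tell you; a quick diagnostic sketch (not needed for the app):

for (CLRegion *region in self.locationManager.monitoredRegions) {
    NSLog(@"Currently monitoring: %@", region.identifier);
}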

Thankfully Apple has done a lot of the heavy lifting here for you in Core Location. The final step here is simply to react when Core Location wakes up your app when a region is encountered.

Add the following method to the bottom of RWTAppDelegate.m:

- (void)locationManager:(CLLocationManager *)manager didExitRegion:(CLRegion *)region {
    if ([region isKindOfClass:[CLBeaconRegion class]]) {
        UILocalNotification *notification = [[UILocalNotification alloc] init];
        notification.alertBody = @"Are you forgetting something?";
        notification.soundName = @"Default";
        [[UIApplication sharedApplication] presentLocalNotificationNow:notification];
    }
}

Your location manager calls the above method when you exit a region, which is the event of interest for this app. You don’t need to be notified if you move closer to your laptop bag — only if you move too far away from it.

Here you check the region to see if it’s a CLBeaconRegion, since it’s possible it could be a CLCircularRegion if you’re also performing geolocation region monitoring. Then you post a local notification with the generic message “Are you forgetting something?“.
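One of the enhancement ideas listed at the end of this tutorial is alerting the user when the item comes back into range; the complementary delegate method is a natural fit. A sketch (the wording of the message is just an example):

- (void)locationManager:(CLLocationManager *)manager didEnterRegion:(CLRegion *)region {
    if ([region isKindOfClass:[CLBeaconRegion class]]) {
        UILocalNotification *notification = [[UILocalNotification alloc] init];
        notification.alertBody = [NSString stringWithFormat:@"%@ is back in range.", region.identifier];
        notification.soundName = @"Default";
        [[UIApplication sharedApplication] presentLocalNotificationNow:notification];
    }
}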

Build and run your app; move away from one of your registered iBeacons and you’ll see the notification pop up once you move far enough away:
ForgetMeNot-notification

If it’s not practical to move far enough away from your iBeacon you can just power it down or remove the battery to test this functionality.

Note: A few final notes on iBeacon and iOS behavior:

iOS 7.1 added the ability to wake your app from the background when it encounters monitored iBeacons. Previously, users had to have the app opened to react to notifications, but now it all works for free!

Apple delays exit notifications in undocumented ways. This is probably by design so that your app doesn’t receive premature notifications if you’re loitering on the fringe of the range or if the iBeacon’s signal is briefly interrupted. In my experience, the exit notification usually occurs one to two minutes after the iBeacon is out of range.

Where to Go From Here?

You now have a very useful app for monitoring those things that you find tricky to keep track of.

You can download the final project here.

With a bit of imagination and coding prowess you could add a lot of really useful features to this app:

  • Notify the user which item has moved out of range.
  • Repeat the notification to make sure the user sees it.
  • Alert the user when an iBeacon comes back in range (see the sketch just after this list).
  • …or anything else you can dream up!
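
The “back in range” idea, for instance, maps directly onto another CLLocationManagerDelegate method. Here’s a minimal sketch; the alert text is just a placeholder:

- (void)locationManager:(CLLocationManager *)manager didEnterRegion:(CLRegion *)region {
    if ([region isKindOfClass:[CLBeaconRegion class]]) {
        // Fired when a monitored beacon region is encountered again.
        UILocalNotification *notification = [[UILocalNotification alloc] init];
        notification.alertBody = @"One of your items is back in range.";
        notification.soundName = UILocalNotificationDefaultSoundName;
        [[UIApplication sharedApplication] presentLocalNotificationNow:notification];
    }
}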

This iBeacons tutorial merely scratches the surface of what’s possible with iBeacons. At the beginning of this tutorial I’ve provided a few links to articles showcasing how Major League Baseball and shopping malls are using iBeacons in very engaging ways.

iBeacons aren’t just limited to custom apps; you can use them with Passbook passes as well. Imagine that you ran a movie theater; you could offer movie tickets as Passbook passes. Then when the patrons walked up to the ticket taker their app would present the ticket on their iPhone automatically.

If you have any questions or comments on this tutorial, or if you have any novel ideas for the use of iBeacons, feel free to join the discussion below!


The post Developing iOS 7 Applications with iBeacons Tutorial appeared first on Ray Wenderlich.

Video Tutorial: Beginning 3D Modeling with Blender: Editing an Object


Internationalization Tutorial for iOS [2014 Edition]

Me Gusta Internationalization!

Me Gusta Internationalization!

Update 23 April 2014: Original post by Sean Berry, now fully updated for iOS 7 by Ali Hafizji.

Creating a great iOS app is no small feat, yet there is much more to it than great code, gorgeous design and intuitive interaction. Climbing the App Store rankings requires well-timed product marketing, the ability to scale up along with the user base, and utilizing tools and techniques to reach as wide of an audience as possible.

International markets are an afterthought for a lot of devs, but thanks to the painless global distribution provided by the App Store, any iOS dev can release their app in over 150 countries with a single click. Asia and Europe alone represent a continually growing pool of potential customers, many of whom are not native English speakers, but in order to capitalize on the global market potential of your app, you’ll need to at least be conversational in the language of app internationalization.

This tutorial will guide you through the basic concepts of internationalization by adding internationalization support to a simple app called iLikeIt. This simple app has a label and a You Like? button. Whenever the user taps You Like?, some optimistic sales data and an accompanying image fade in below the button.

But currently, the app is English only – os vamos a traducir!

Note: Another important aspect of internationalization is using Auto Layout, due to changing text sizes. However, to keep this tutorial simple we will not be focusing on Auto Layout, as we have other tutorials for that.

Internationalization vs Localization

Before you start working your way through the tutorial, it is important to know the difference between internationalization and localization, as these concepts are often confused.

Simply put, internationalization is the process of designing your app for international compatibility. For example:

  • Handle text input and output in the user’s native language.
  • Handle different date, time and number formats.
  • Utilize the appropriate calendar and time zone for processing data.

Internationalization is an activity that you, the developer, perform by utilizing the system provided APIs and making additions and modifications to your code to make your app as good in Chinese or Arabic as it is in English.
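
As a quick illustration of the date and number points above, the Foundation formatters do the locale-specific work for you. This sketch isn’t part of the iLikeIt project; it just shows the idea:

// Hypothetical snippet: the same date, rendered for a Spanish (Spain) locale.
NSDateFormatter *dateFormatter = [[NSDateFormatter alloc] init];
dateFormatter.dateStyle = NSDateFormatterLongStyle;
dateFormatter.locale = [[NSLocale alloc] initWithLocaleIdentifier:@"es_ES"];
NSLog(@"%@", [dateFormatter stringFromDate:[NSDate date]]); // e.g. "23 de abril de 2014"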

By contrast, localization is merely translating the app’s user interface and resources into different languages, which is something you can and should offload to someone else, unless you happen to be fluent in every language your app will support :)

Getting Started

The first step is to download the iLikeIt starter project you will use throughout this tutorial.

Open the project in Xcode 5 and run the app on the simulator. You should see the following appear after you tap ‘You like?’:

Starter product screenshot

As you can see from the screenshot, you will need to localize 4 items:

  • UI Element: ‘Hello’ label
  • UI Element: ‘You Like?’ button
  • Sales Data Text: ‘Yesterday you sold 1000000 apps’
  • Image Text: ‘I LIKE IT’

Take a moment to browse the files and folders to familiarize yourself with the project structure. Main.storyboard contains a single screen which is an instance of the ViewController class.

Separating text from code

Currently, all of the text displayed by the app exists as hard-coded strings within Main.storyboard and ViewController.m. In order to localize these strings, you need to put them into a separate file. Then, rather than hard-coding them within your methods, you will simply reference the strings using the file in your bundle.

Xcode uses files with the “.strings” file extension to store and retrieve all of the strings used within the app, for each supported language. A simple method call in your code will look up and return the requested string based on the current language in use on the iOS device.

Let’s try this out. Go to File > New > File. Choose Strings File under the Resource subsection as shown below:

Choose strings file

Click Next, name the file Localizable.strings, then click Save.

Note: Localizable.strings is the default filename iOS uses for localized text. Resist the urge to name the file something else, otherwise you will have to type the name of your .strings file every time you reference a localized string.

Now that you’ve created the Localizable.strings file, you need to add all of the text that is currently hardcoded into the app. You need to follow a specific, but fairly simple, format like this:

"KEY" = "CONTENT";

These key/content pairs function just like an NSDictionary, and convention is to use the default language translation of the content as the key: e.g. for “You Like?” you would write:

"You like?" = "You like?";

Key/content pairs can also contain format strings:

"Yesterday you sold %@ apps" = "Yesterday you sold %@ apps";

Now switch to ViewController.m, and find the viewDidLoad method. Currently, the app sets the text for the likeButton and salesCountLabel as shown below:

_salesCountLabel.text = [NSString stringWithFormat:@"Yesterday you sold %@ apps", @(1000000)];
[_likeButton setTitle:@"You like?" forState:UIControlStateNormal];

Instead, you will need to read in the strings from the Localizable.strings file you created earlier. Change both lines to use a macro called NSLocalizedString as shown below:

_salesCountLabel.text = [NSString stringWithFormat:NSLocalizedString(@"Yesterday you sold %@ apps", nil), @(1000000)];
[_likeButton setTitle:NSLocalizedString(@"You like?", nil) forState:UIControlStateNormal];

Macros wrap a longer snippet of code into a more manageable form and are created using the #define directive.
If you’re curious what the NSLocalizedString macro does, control-click NSLocalizedString and choose Jump to Definition; you’ll see it’s defined as follows:

#define NSLocalizedString(key, comment) \
    [[NSBundle mainBundle] localizedStringForKey:(key) value:@"" table:nil]

The NSLocalizedString macro uses the localizedStringForKey method to look up the string for the given key, in the current language. It passes nil for the table name, so it uses the default strings filename (Localizable.strings). For full details, check out Apple’s NSBundle Class Reference.

Note: This macro takes a comment as a parameter, but seems to do nothing with it. This is because instead of manually typing in each key/value pair into Localizable.strings like you’ve been doing, you can use a tool that comes with the iPhone SDK called genstrings to do this automatically (which can be quite convenient for large projects).

If you use this method, you can put a comment for each string that will appear next to the default strings as an aid for the translator. For example, you could add a comment indicating the context where the string is used.
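
For example, passing a real comment instead of nil might look like the following; the names and wording here are just an illustration, not part of the starter project:

// Hypothetical example: the second argument becomes a translator note
// in the strings file that genstrings generates.
[_likeButton setTitle:NSLocalizedString(@"You like?", @"Title of the button that reveals the sales figures")
             forState:UIControlStateNormal];

Running genstrings over your source (for example, genstrings -o en.lproj *.m from the directory containing your .m files; adjust the paths to your own project) then writes each comment above its key/value pair in the generated Localizable.strings.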

Enough background info – let’s try it out!

Build and run your project, and it should display the same text on the main screen as before, but where’s the Spanish? Now that your app is set up for localization, adding translations is a cinch.

Adding a Spanish Localization

To add support for another language, click on the blue iLikeIt project folder on the left pane, select the Project in the next pane (NOT the Target), and under the Info tab you’ll see a section for Localizations. Click the + and choose Spanish (es).

Adding Spanish

The next screen asks you which files you want to localize. Keep them all selected and click Finish. Note: Localizable.strings will not show up in this list, so don’t panic!

Select files to localize

At this point, Xcode has set up some directories, behind the scenes, containing separate versions of InfoPlist.strings and Main.storyboard for each language you selected. To see this for yourself, open your project folder using Finder, and you should see the following:

New project structure

See en.lproj and es.lproj? They contain the language-specific versions of your files.

‘en’ is the localization code for English, and ‘es’ is the localization code for Spanish. For other languages, see the full list of language codes.

From now on, when your app wants to get the English version of a file, it will look in en.lproj, and when it wants the Spanish version of a file it will look in es.lproj.

It’s that simple! Put your resources in the appropriate folder and iOS will do the rest.
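
If you ever want to confirm which localization the app actually picked at runtime, one purely diagnostic option (not something the tutorial project needs) is to log it:

// Diagnostic sketch: which .lproj the bundle chose, and the user's language list.
NSLog(@"Bundle localizations in use: %@", [[NSBundle mainBundle] preferredLocalizations]);
NSLog(@"User language preferences: %@", [NSLocale preferredLanguages]);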

But wait, what about Localizable.strings? To let Xcode know you want it localized, select the file using the left pane, and open the File Inspector in the right pane. There you will see a button labeled Localize, click it, choose English (because it’s currently in English), and finally click Localize.

Localize button

Now the File Inspector panel will show which languages this file belongs to. Currently, as you can see, the file is only localized for English. Add the Spanish localization by checking the box to the left of Spanish.

Select spanish button

Go back to the left panel and click on the arrow next to Localizable.strings, so it shows the sub-elements. You now have two versions of this file: one for English and the other for Spanish:

Localization files

To change the text for Spanish, select Localizable.strings (Spanish) and replace its contents with the following:

"Yesterday you sold %@ apps" = "Ayer le vendi&oacute; %@ aplicaciones";
"You like?" = "~Es bueno?~";

Congratulations, your app is now bilingual!

To test it out and verify everything worked, change the display language on your simulator/device to Spanish by launching the Settings app and choosing:

General -> International -> Language -> Español.

If you are still running the Xcode debugger, click Stop in Xcode, then click Build & Run and you should see:

Spanish version

Locale vs Language

1 million is a pretty impressive sales number; let’s make it look even better by adding some formatting.

Open ViewController.m and replace the line that sets the text for _salesCountLabel with the following:

NSNumberFormatter *numberFormatter = [[NSNumberFormatter alloc] init];
[numberFormatter setNumberStyle:NSNumberFormatterDecimalStyle];
NSString *numberString = [numberFormatter stringFromNumber:@(1000000)];
_salesCountLabel.text = [NSString stringWithFormat:NSLocalizedString(@"Yesterday you sold %@ apps", nil), numberString];

Build and Run the app and the number should now be a lot easier to read.

Number formatted

This looks great to an American, but in Spain 1 million is written as “1.000.000”, not “1,000,000”. Run the app in Español and you’ll still see commas separating the zeroes, because in iOS number formatting is based on the region/locale, not the language. To see how someone in Spain will view the sales number, open the Settings app and change the locale by navigating to:

General -> International -> Region Format -> Spanish -> Spain

Spanish region format

Build and Run the app again and you should now see the properly formatted number like this:

Spain number formatting

For a little extra work up-front, NSNumberFormatter automatically formats your numbers for the appropriate region. Whenever possible, resist the urge to reinvent the wheel, because on iOS it usually pays to do things the Apple way.
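
Incidentally, if changing the simulator’s Region Format every time feels tedious, you can preview another region’s output by giving a formatter an explicit locale. This is just a sketch for experimentation; the tutorial’s code should keep using the device’s current locale:

// Hypothetical preview: force a Spanish (Spain) locale on the formatter.
NSNumberFormatter *previewFormatter = [[NSNumberFormatter alloc] init];
previewFormatter.numberStyle = NSNumberFormatterDecimalStyle;
previewFormatter.locale = [[NSLocale alloc] initWithLocaleIdentifier:@"es_ES"];
NSLog(@"%@", [previewFormatter stringFromNumber:@(1000000)]); // 1.000.000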

Internationalizing Storyboards

UI elements in your storyboard such as labels, buttons and images can be set in your code or directly in the storyboard. You have already learned how to support multiple languages when setting text programmatically, but the “Hello” label at the top of the screen has no IBOutlet and only has its text set within Main.storyboard.

You could add an IBOutlet, connect it to the label in Main.storyboard, then set its text property using NSLocalizedString as with the likeButton and the salesCountLabel, but there is a much easier way to localize storyboard elements, without the need for additional code.

Open the disclosure triangle to the left of Main.storyboard and you should see Main.storyboard (Base) and Main.storyboard (Spanish). Clicking on Main.storyboard (Spanish) opens the editor with the localizable text in your storyboard. You should already have an entry for the Hello label which will look something like this:

/* Class = "IBUILabel"; text = "Hello"; ObjectID = "pUp-yc-27W"; */
"pUp-yc-27W.text" = "Hello";

Replace the two occurrences of “Hello” with the Spanish translation, “Hola” like this:

/* Class = "IBUILabel"; text = "Hola"; ObjectID = "pUp-yc-27W"; */
"pUp-yc-27W.text" = "Hola";

Note: Never directly change the auto-generated ObjectID. Also, do not copy and paste the lines above, as the ObjectID for your label may be different from the one shown above.

Internationalizing Images

Since the app uses an image that contains English text, you will need to localize the image itself, as having bits and pieces of English in a mostly Spanish app not only makes your app look amateurish, but also detracts from the overall usability and market potential.

To localize the image, first download this Spanish version of the image (right-click -> Save Image As… on most browsers):

Me Gusta

Open Images.xcassets and add the image to the asset catalog by dragging and dropping the newly downloaded megusta.png into the list of images on the left. Asset catalogs cannot be internationalized, so you will need to use a simple workaround to localize the image.

Open Localizable.strings (English) and add the following to it:

"imageName" = "ilike";

Similarly add the following to the Localizable.strings (Spanish) file:

"imageName" = "megusta";

From now on, you will use the imageName key to retrieve the name of the localized version of the image. Open ViewController.m and add the following line of code to the viewDidLoad method:

[_imageView setImage:[UIImage imageNamed:NSLocalizedString(@"imageName", nil)]];

If needed, switch your simulator/device to Español, then Build & Run and you will see the localized version of the image displayed:

Spanish image

Congrats! You now have all the tools required to localize your apps for multiple different languages.

Note: This is just one way to do things, useful if you have different filenames per language. A perhaps better way of doing this is to localize a resources folder, as described in this article.

Gratuitous Bonus

For a final bonus, let’s localize the name of the app itself. Your Info.plist has a special file (InfoPlist.strings) in which you can put string overrides for other languages. To give the app a different name in Spanish, open Supporting Files > InfoPlist.strings (Spanish) and insert the following:

"CFBundleDisplayName" = "Me Gusta";

This changes the name of the app as it appears on the Springboard.

Exercise: Internationalizing Audio files

If you’ve made it this far, you should be comfortable with the basics of internationalization. This is a simple exercise to test your newly acquired knowledge: take two different audio files, one in English and the other in Spanish, and play the appropriate file based on the user’s selected language.

Here is a brief description of the necessary steps (a playback sketch follows the list):

  1. Download the sample audio files.
  2. Copy the first audio file (box-en.wav) to the project.
  3. Open the File Inspector for the audio file and click the Localize button; make sure you select both English and Spanish as the supported languages.
  4. Rename the second audio file (box-es.wav) so it matches the first one (box-en.wav) and copy it into the es.lproj folder.
  5. Make sure you select the “Replace File” option in the Finder prompt.
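
Once both localized copies are in place, playing the right file is the easy part, because NSBundle resolves the correct .lproj copy for you. A minimal sketch, assuming you add AVFoundation and keep the player in a strong property (here called audioPlayer, which you’d add yourself) so it isn’t deallocated before playback finishes:

@import AVFoundation;
 
// NSBundle looks in the current language's .lproj folder first, so the same
// resource name returns the English or Spanish recording automatically.
NSURL *audioURL = [[NSBundle mainBundle] URLForResource:@"box-en" withExtension:@"wav"];
self.audioPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:audioURL error:nil];
[self.audioPlayer play];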

Where To Go From Here?

Here is the Final Project with all of the code you’ve written in the above tutorial.

Now that you know the basic techniques for internationalizing an iPhone app, add a foreign language to one of your existing apps, or keep internationalization in mind when designing your next app. As you have seen, it takes almost no time to implement, it opens your app up to a wider, more diverse audience, and your non-English speaking users will thank you for it!

For the actual translation, you may be able to get away with using Google’s free translation service at http://www.google.com/translate, but the results are very hit or miss. If you can spare a few bucks, there are several third party vendors listed at the bottom of Apple’s Internationalization and Localization page. Pricing varies from vendor to vendor, but is typically less than 10 cents per word.

If you have any questions, or advice for others, regarding internationalization, please join in on the forum discussion below!


The post Internationalization Tutorial for iOS [2014 Edition] appeared first on Ray Wenderlich.

Video Tutorial: Beginning 3D Modeling with Blender: Unwrapping the UV Mesh

How To Make A Swipeable Table View Cell With Actions – Without Going Nuts With Scroll Views


Make a swipeable table view cell without going nuts with scroll views!

Apple introduced a great new user interface scheme in the iOS 7 Mail app – swiping left to reveal a menu with multiple actions. This tutorial shows you how to make such a swipeable table view cell without getting bogged down in nested scroll views. If you’re unsure what a swipeable table view cell means, then see this screenshot of Apple’s Mail.app:

Multiple Options

You’d think that after introducing something like this, Apple would have made it available to developers. After all, how much harder could it be? Unfortunately, they’ve only made the Delete button available to developers — at least for the time being. If you want to add other buttons, or change the text or color of the Delete button, you’ll have to write the whole thing yourself.

In this tutorial, you’ll learn how to implement the simple swipe-to-delete action before moving on to the swipe-to-perform-actions. This will require some digging into the structure of an iOS 7 UITableViewCell to replicate the desired behavior. You’ll use a couple of my favorite techniques for examining view hierarchies: coloring views and using the recursiveDescription method to log the view hierarchy.

Ready to see what buttons and actions are underneath those innocent-looking table view cells? Let’s get started!

Getting Started

Open Xcode, go to File\New\Project… and select a Master-Detail Application for iOS as shown below:

Master-Detail Application

Name your project SwipeableCell and fill in your own organization name and company identifier. Select iPhone as the target device and make sure the Use Core Data checkbox is unchecked, as shown below:

Set Up Project

For a proof of concept project like this, you want to keep the data model as simple as possible.

Open MasterViewController.m and find viewDidLoad. Replace the default method which sets up the navigation bar items with the following implementation:

- (void)viewDidLoad {
  [super viewDidLoad];
 
  //1
  _objects = [NSMutableArray array];
 
  //2
  NSInteger numberOfItems = 30;
  for (NSInteger i = 1; i <= numberOfItems; i++) {
    NSString *item = [NSString stringWithFormat:@"Item #%ld", (long)i];
    [_objects addObject:item];
  }
}

There are two things happening in this method:

  1. This line creates and initializes an instance of NSMutableArray so that you can add objects to it. If your array isn’t initialized, you can call addObject: as many times as you want, but your objects won’t be stored anywhere.
  2. This loop adds a bunch of strings to the _objects array; these are the strings displayed in the table view when your application runs. You can change the value of numberOfItems to store more or fewer strings as you see fit.

Next, find tableView:cellForRowAtIndexPath: and replace its implementation with the following:

- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:@"Cell" forIndexPath:indexPath];
 
    NSString *item = _objects[indexPath.row];
    cell.textLabel.text = item;
    return cell;
}

The boilerplate tableView:cellForRowAtIndexPath: uses date strings as sample data; instead, your implementation uses the NSString objects in your array to populate the UITableViewCell’s textLabel.

Scroll down to tableView:canEditRowAtIndexPath:; you’ll see that this method is already set up to return YES which means that every row of the table view supports editing.

Directly below that method, tableView:commitEditingStyle:forRowAtIndexPath: handles the deletion of objects. However, since you won’t be adding anything in this application, you’ll tweak it a bit to better suit your needs.

Replace tableView:commitEditingStyle:forRowAtIndexPath: with the following code:

- (void)tableView:(UITableView *)tableView commitEditingStyle:(UITableViewCellEditingStyle)editingStyle forRowAtIndexPath:(NSIndexPath *)indexPath {
  if (editingStyle == UITableViewCellEditingStyleDelete) {
    [_objects removeObjectAtIndex:indexPath.row];
    [tableView deleteRowsAtIndexPaths:@[indexPath] withRowAnimation:UITableViewRowAnimationFade];
  } else {
    NSLog(@"Unhandled editing style! %d", editingStyle);
  }
}

When the user deletes a row, you remove the object at the given index from the backing array and tell the table view to remove the row at the same indexPath, ensuring the model and the view stay in sync.

Your app only allows the “delete” editing style, but it’s a good idea to log anything you’re not handling in the else branch. That way, if something fishy happens, you’ll get a heads-up message in the console rather than a silent return from the method.

Finally, there’s a little bit of cleanup to do. Still in MasterViewController.m, delete insertNewObject. This method is now incorrect, since insertion is no longer supported.

Build and run your application; you’ll see a nice simple list of items as shown below:

Closed Easy

Swipe one of the rows to the left and you’ll see a “Delete” button, like so:

Easy delete button

Woo — that was easy. But now it’s time to get your hands dirty and dig into the guts of the view hierarchies to see what’s going on.

Digging into the View Hierarchy

First things first: you need to see where the delete button lives in the view hierarchy so that you can decide if you can continue to use it in your custom cell.

One of the easiest ways to do this is to color the separate pieces of the view to make it obvious where specific pieces begin and end.

Still working in MasterViewController.m, add the following two lines to tableView:cellForRowAtIndexPath: just above the final return statement:

cell.backgroundColor = [UIColor purpleColor];
cell.contentView.backgroundColor = [UIColor blueColor];

These colors make it clear where these views are in the cell.

Build and run your application again; you’ll see the colored elements as in the screenshot below:

Colored Cells

You can clearly see the contentView in blue stops before the accessory indicator begins, but the cell itself — highlighted in purple — continues all the way over to the edge of the UITableView.

Drag the cell over to the left, and you’ll see something similar to the following:

Start to drag cell

It looks like the delete button is actually hiding below the cell. The only way to be 100% sure is to dig a little deeper into the view hierarchy.

To assist your view archaeology, you can use a debugging-only method named recursiveDescription to print out the view hierarchy of any view. Note that this is a private method, and should not be included in any code that’s going to the App Store, but it is highly useful for examining your view hierarchy.

Note: There are a couple of paid apps which allow you to examine the view hierarchy visually: Reveal and Spark Inspector. Additionally, there’s an open-source project that does this as well: iOS-Hierarchy-Viewer.

These apps vary in price and quality, but they all require you to add a library to your project to support their product. Logging the recursiveDescription is definitely the best way to access this information if you don’t want to install additional libraries in your project.

Add the following log statement to tableView:cellForRowAtIndexPath:, just before the final return statement:

#ifdef DEBUG
  NSLog(@"Cell recursive description:\n\n%@\n\n", [cell performSelector:@selector(recursiveDescription)]);
#endif

Once you add this line, you’ll get a warning that the recursiveDescription method hasn’t been declared; it’s a private method, so the compiler doesn’t know it exists. The #ifdef/#endif wrapper makes doubly sure the line doesn’t make it into your release builds.

Build and run your application; you’ll see your console filled to the brim with log statements, similar to the following:

2014-02-01 09:56:15.587 SwipeableCell[46989:70b] Cell recursive description:
 
<UITableViewCell: 0x8e25350; frame = (0 396; 320 44); text = 'Item #10'; autoresize = W; layer = <CALayer: 0x8e254e0>>
   | <UITableViewCellScrollView: 0x8e636e0; frame = (0 0; 320 44); clipsToBounds = YES; autoresize = W+H; gestureRecognizers = <NSArray: 0x8e1d7d0>; layer = <CALayer: 0x8e1d960>; contentOffset: {0, 0}>
   |    | <UIButton: 0x8e22a70; frame = (302 16; 8 12.5); opaque = NO; userInteractionEnabled = NO; layer = <CALayer: 0x8e22d10>>
   |    |    | <UIImageView: 0x8e20ac0; frame = (0 0; 8 12.5); clipsToBounds = YES; opaque = NO; userInteractionEnabled = NO; layer = <CALayer: 0x8e5efc0>>
   |    | <UITableViewCellContentView: 0x8e23aa0; frame = (0 0; 287 44); opaque = NO; gestureRecognizers = <NSArray: 0x8e29c20>; layer = <CALayer: 0x8e62220>>
   |    |    | <UILabel: 0x8e23d70; frame = (15 0; 270 43); text = 'Item #10'; clipsToBounds = YES; opaque = NO; layer = <CALayer: 0x8e617d0>>

Whoa — that’s tons of information. What you’re seeing here is the recursive description log statement, printed out every time a cell is created or recycled. So you should see a few of these, one for each cell that’s initially on the screen. recursiveDescription goes through every subview of a particular view and logs the description of that view, indented to match the view hierarchy. It does this recursively, so for each subview it then looks at that subview’s subviews, and so on.

It’s a lot of information, but it’s really just the result of calling description on every view as it steps through the view hierarchy. Therefore you’ll see the same information as if you logged each individual view on its own, but this output adds a pipe character and some spacing at the front to reflect the structure of the views.

To make it a little easier to read, here’s just the class name and frame:

<UITableViewCell; frame = (0 396; 320 44);> //1
   | <UITableViewCellScrollView; frame = (0 0; 320 44); > //2
   |    | <UIButton; frame = (302 16; 8 12.5)> //3
   |    |    | <UIImageView; frame = (0 0; 8 12.5);> //4
   |    | <UITableViewCellContentView; frame = (0 0; 287 44);> //5
   |    |    | <UILabel; frame = (15 0; 270 43);> //6

There are six views within the cell as it exists right now:

  1. UITableViewCell — This is the highest-level view. The frame log shows that it is 320 points wide and 44 points tall – the height and width you’d expect since it’s as wide as the screen and 44 points tall.
  2. UITableViewCellScrollView — While you can’t use this private class directly, its name gives you a pretty good idea as to its purpose in life. It’s exactly the same size as the cell itself. We can infer that its job is to handle sliding the content out from atop the delete button.
  3. UIButton — This lives at the far right of the cell and serves as the disclosure indicator button. Note that this is not the delete button, but rather the chevron – the disclosure indicator.
  4. UIImageView — This is a subview of the above UIButton and contains the image for the disclosure indicator.
  5. UITableViewCellContentView — Another private class that contains the content of your cell. This view is exposed to the developer as the UITableViewCell’s contentView property. It’s only exposed to the outside world as a UIView, which means you can only call public UIView methods on it; you can’t use any of the private methods associated with this custom subclass.
  6. UILabel — Displays the “Item #” text.

You’ll notice that the delete button appears nowhere in this view hierarchy. Hmm. Maybe it’s only added to the hierarchy when the swipe starts. That would make sense as an optimisation. There’s no point having the delete button there when it’s not necessary. To test this hypothesis, add the following code to tableView:commitEditingStyle:forRowAtIndexPath:, inside the delete editing style if-statement:

#ifdef DEBUG
    NSLog(@"Cell recursive description:\n\n%@\n\n", [[tableView cellForRowAtIndexPath:indexPath] performSelector:@selector(recursiveDescription)]);
#endif

This is the same log statement as before, except this time you need to grab the cell from the table view using cellForRowAtIndexPath:.

Build & run the application, swipe over the first cell, and tap Delete. Then check your console and find the last recursive description for the first cell. You know it’s the first cell because the text property of the cell is set to Item #1. You should see something like this:

<UITableViewCell: 0xa816140; frame = (0 0; 320 44); text = 'Item #1'; autoresize = W; gestureRecognizers = <NSArray: 0x8b635d0>; layer = <CALayer: 0xa816310>>
   | <UITableViewCellScrollView: 0xa817070; frame = (0 0; 320 44); clipsToBounds = YES; autoresize = W+H; gestureRecognizers = <NSArray: 0xa8175e0>; layer = <CALayer: 0xa817260>; contentOffset: {82, 0}>
   |    | <UITableViewCellDeleteConfirmationView: 0x8b62d40; frame = (320 0; 82 44); layer = <CALayer: 0x8b62e20>>
   |    |    | <UITableViewCellDeleteConfirmationButton: 0x8b61b60; frame = (0 0; 82 43.5); opaque = NO; autoresize = LM; layer = <CALayer: 0x8b61c90>>
   |    |    |    | <UILabel: 0x8b61e60; frame = (15 11; 52 22); text = 'Delete'; clipsToBounds = YES; userInteractionEnabled = NO; layer = <CALayer: 0x8b61f00>>
   |    | <UITableViewCellContentView: 0xa816500; frame = (0 0; 287 43.5); opaque = NO; gestureRecognizers = <NSArray: 0xa817d40>; layer = <CALayer: 0xa8165b0>>
   |    |    | <UILabel: 0xa8167a0; frame = (15 0; 270 43.5); text = 'Item #1'; clipsToBounds = YES; layer = <CALayer: 0xa816840>>
   |    | <_UITableViewCellSeparatorView: 0x8a2b6e0; frame = (97 43.5; 305 0.5); layer = <CALayer: 0x8a2b790>>
   |    | <UIButton: 0xa8166a0; frame = (297 16; 8 12.5); opaque = NO; userInteractionEnabled = NO; layer = <CALayer: 0xa8092b0>>
   |    |    | <UIImageView: 0xa812d50; frame = (0 0; 8 12.5); clipsToBounds = YES; opaque = NO; userInteractionEnabled = NO; layer = <CALayer: 0xa8119c0>>

Woo! There’s the delete button! Now, below the content view, is a view of class UITableViewCellDeleteConfirmationView. So that’s where the delete button comes in. Notice that the x-value of its frame is 320. This means that it’s positioned at the far end of the scroll view. But the delete button doesn’t move as you swipe. So Apple must be moving the delete button every time the scroll view is scrolled. That’s not particularly important, but it’s interesting!

Back to the cell now.

You’ve also learned more about how the cell works; namely, that UITableViewCellScrollView — which contains the contentView and the disclosure indicator (and the delete button when it’s added) — is clearly doing something. You might guess from its name that it’s a subclass of UIScrollView.

You can test this assumption by adding the simple for loop below to tableView:cellForRowAtIndexPath:, just below the line that logs the recursiveDescription:

for (UIView *view in cell.subviews) {
  if ([view isKindOfClass:[UIScrollView class]]) {
    view.backgroundColor = [UIColor greenColor];
  }
}

Build and run your application again; the green highlighting confirms that this private class is indeed a subclass of UIScrollView since it covers up all of the cell’s purple coloring:

Visible Scrollview

Recall that your logs of recursiveDescription showed that the UITableViewCellScrollView’s frame was exactly the same size as that of the cell itself.

But what exactly is this view doing? Keep dragging the cell over to the side and you’ll see that the scroll view powers the “springy” action when you drag the cell and release it, like so:

swipeable-demo

One last thing to be aware of before you start building your own custom UITableViewCell subclass comes straight out of the UITableViewCell Class Reference:

“If you want to go beyond the predefined styles, you can add subviews to the contentView property of the cell. When adding subviews, you are responsible for positioning those views and setting their content yourself.”

In plain English, this means that any custom mods to UITableViewCell must be performed in the contentView. You can’t simply add your own views below the cell itself — you have to add them to the cell’s contentView.

This means you’re going to have to cook up your own solution to add custom buttons. But never fear, you can quite easily replicate the solution Apple use!

A List Of Ingredients for a Swipeable Table View Cell

So what does this mean for you? Well, at this point you have a list of obvious ingredients for cooking up a UITableViewCell subclass with your own custom buttons.

Going in reverse z-order with the items at the “bottom” of the view stack first, you have the following:

  1. The contentView as your base view, since it’s required that you add subviews to this view.
  2. Any UIButtons you want to display after the user swipes.
  3. A container view above the buttons to hold all of your content.
  4. Either a UIScrollView to hold your container view, like Apple use, or you could use a UIPanGestureRecognizer. This can also handle the swipes to reveal/hide the buttons. You’ll take the latter approach in your project.
  5. Finally, the views with your actual content.

There’s one ingredient that may not be as obvious: you have to ensure the existing UIPanGestureRecognizer — which lets you swipe to show the delete button — is disabled. Otherwise that gesture recognizer will collide with the custom one you’re adding to your project.

The good news is that disabling the default swipe is pretty simple.

Open MasterViewController.m. Modify tableView:canEditRowAtIndexPath: to always return NO as follows:

- (BOOL)tableView:(UITableView *)tableView canEditRowAtIndexPath:(NSIndexPath *)indexPath {
  return NO;
}

Build and run your application; swipe one of the items and you’ll find that you can no longer swipe to delete.

To keep it simple, you’ll walk through this example with two buttons, but these same techniques will work with one button, or more than two buttons — though be warned you may need to add a few tweaks not covered in this article if you add so many buttons that you’d have to slide the entire cell out of view to see them all.

Creating the Custom Cell

You can see from the basic list of views and gesture recognizers that there’s an awful lot going on in the table view cell. You’ll want to create your own custom UITableViewCell subclass to keep all the logic in one place.

Go to File\New\ File… and select iOS\Cocoa Touch\Objective-C class. Name the new class SwipeableCell and make it a subclass of UITableViewCell, like so:

Creating custom cell

Set up the following class extension and IBOutlets in SwipeableCell.m, just below the #import statement and above the @implementation statement:

@interface SwipeableCell()
 
@property (nonatomic, weak) IBOutlet UIButton *button1;
@property (nonatomic, weak) IBOutlet UIButton *button2;
@property (nonatomic, weak) IBOutlet UIView *myContentView;
@property (nonatomic, weak) IBOutlet UILabel *myTextLabel;
 
@end

Next, go into your storyboard and select the UITableViewCell prototype, as shown below:

Select Table View Cell

Open the Identity Inspector, then change the Custom Class to SwipeableCell, like so:

Change Custom Class

The name of the UITableViewCell prototype now appears as “Swipeable Cell” in the Document Outline on the left. Right-click on the item that says Swipeable Cell – Cell and you’ll see the list of IBOutlets you set up above:

New Name and Outlets

First, you’ll need to change a couple things in the Attributes Inspector to customize the view. Set the Style to Custom, the Selection to None, and the Accessory to None, as shown in the screenshot below:

Reset Cell Items

Next, drag two Buttons into the cell’s content view. Set each button’s background color in the View section of the Attributes Inspector to some distinctive color and set each button’s text color to something legible so you can see the buttons clearly.

Pin the first button to the right side, top, and bottom of the contentView. Pin the second button to the left edge of the first button, and to the top and bottom of the contentView. When you’re done, the cell should look something like this, although your colors may differ:

Buttons Added to Prototype Cell

Next, hook up each of your buttons to the appropriate outlets. Right-click the swipeable cell to open up its outlets, then drag from the button1 outlet to the right button, and button2 to the left button, as such:

swipeable-button1

You need to create a method to handle taps on each of these buttons.

Open SwipeableCell.m and add the following method:

- (IBAction)buttonClicked:(id)sender {
  if (sender == self.button1) {
    NSLog(@"Clicked button 1!");
  } else if (sender == self.button2) {
    NSLog(@"Clicked button 2!");
  } else {
    NSLog(@"Clicked unknown button!");
  }
}

This handles button taps from either of the buttons and logs it to the console so you can confirm which button was tapped.

Open the Storyboard again, and hook up the action for both buttons to this new method. Right-click the Swipeable Cell – Cell to bring up its list of outlets and actions. Drag from the buttonClicked: action to your button, like so:

swipeable-buttonClicked

Select Touch Up Inside from the list of events, as shown below:

swipeable-touchupinside

Repeat the above steps for the second button. Now tapping on either button calls buttonClicked:.

Since you’re customizing the cell’s content view, you can’t rely on the built-in text label. Instead, you’ll need to add your own property and method to set the cell’s text.

Open SwipeableCell.h and add the following property:

@property (nonatomic, strong) NSString *itemText;

You’ll be doing more with the itemText property later, but for now, this is all you need.

Open MasterViewController.m and add the following line to the top:

#import "SwipeableCell.h"

This ensures the class knows about your custom cell subclass.

Replace the contents of tableView:cellForRowAtIndexPath: with the following:

- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
  SwipeableCell *cell = [tableView dequeueReusableCellWithIdentifier:@"Cell" forIndexPath:indexPath];
 
  NSString *item = _objects[indexPath.row];
  cell.itemText = item;
 
  return cell;
}

Your new cell class is now being used instead of the standard UITableViewCell.

Build and run your application; you’ll see something like the following:

ALL THE BUTTONS!

Adding a delegate

Hooray — your buttons are there! If you tap on each button, you’ll see the appropriate log messages in your console. However, you don’t want to have the cell itself take any direct action.

For instance, a cell can’t present another view controller or push directly onto the navigation stack. You’ll have to set up a delegate to pass the button tap event back to the view controller to handle that event.

Open SwipeableCell.h and add the following delegate protocol declaration above the @interface statement:

@protocol SwipeableCellDelegate <NSObject>
- (void)buttonOneActionForItemText:(NSString *)itemText;
- (void)buttonTwoActionForItemText:(NSString *)itemText;
@end

Add the following delegate property to SwipeableCell.h, just below your property for itemText:

@property (nonatomic, weak) id <SwipeableCellDelegate> delegate;

Update buttonClicked: in SwipeableCell.m as shown below:

- (IBAction)buttonClicked:(id)sender {
  if (sender == self.button1) {
    [self.delegate buttonOneActionForItemText:self.itemText];
  } else if (sender == self.button2) {
    [self.delegate buttonTwoActionForItemText:self.itemText];
  } else {
    NSLog(@"Clicked unknown button!");
  }
}

This updates the method to call the appropriate delegate methods instead of simply creating an entry in the log.

Now, open MasterViewController.m and add the following delegate methods to the implementation:

#pragma mark - SwipeableCellDelegate
- (void)buttonOneActionForItemText:(NSString *)itemText {
  NSLog(@"In the delegate, Clicked button one for %@", itemText);
}
 
- (void)buttonTwoActionForItemText:(NSString *)itemText {
  NSLog(@"In the delegate, Clicked button two for %@", itemText);
}

These methods will simply log to the console to ensure everything is passing through properly.

Next, add the following protocol conformance declaration to the class extension at the top of MasterViewController.m:

@interface MasterViewController () <SwipeableCellDelegate> {
  NSMutableArray *_objects;
}
@end

This simply indicates that this class conforms to the SwipeableCellDelegate protocol.

Finally, you need to set this view controller as the cell’s delegate.

Add the following line to tableView:cellForRowAtIndexPath: just before the final return statement:

cell.delegate = self;

Build and run your application; you’ll see the appropriate “in the delegate” messages firing off when you tap on the buttons.

Adding actions to the buttons

If you’re happy with the log messages, feel free to skip to the next section. However, if you’d like something a little more tangible, you can add some handling to show the included DetailViewController when one of the delegate methods is called.

Add the following two methods to MasterViewController.m:

- (void)showDetailWithText:(NSString *)detailText
{
  //1
  UIStoryboard *storyboard = [UIStoryboard storyboardWithName:@"Main" bundle:nil];
  DetailViewController *detail = [storyboard instantiateViewControllerWithIdentifier:@"DetailViewController"];
  detail.title = @"In the delegate!";
  detail.detailItem = detailText;
 
  //2
  UINavigationController *navController = [[UINavigationController alloc] initWithRootViewController:detail];
 
  //3
  UIBarButtonItem *done = [[UIBarButtonItem alloc] initWithBarButtonSystemItem:UIBarButtonSystemItemDone target:self action:@selector(closeModal)];
  [detail.navigationItem setRightBarButtonItem:done];
 
  [self presentViewController:navController animated:YES completion:nil];
}
 
//4
- (void)closeModal
{
  [self dismissViewControllerAnimated:YES completion:nil];
}

You perform four actions in the code above:

  1. Grab the detail view controller out of the storyboard and set its title and detail item for display.
  2. Set up a UINavigationController to contain the detail view controller and to give you a place to add the close button.
  3. Add the close button with a target within the MasterViewController.
  4. Set up the actual target for the close button, which dismisses any modal view controller.

Next, replace the methods you added earlier with the following implementations:

- (void)buttonOneActionForItemText:(NSString *)itemText
{
  [self showDetailWithText:[NSString stringWithFormat:@"Clicked button one for %@", itemText]];
}
 
- (void)buttonTwoActionForItemText:(NSString *)itemText
{
  [self showDetailWithText:[NSString stringWithFormat:@"Clicked button two for %@", itemText]];
}

Finally, open Main.storyboard and click on the Detail View Controller. Select the Identity Inspector and set the Storyboard ID to DetailViewController to match the class name, like so:

Add Storyboard Identifier

If you forget this step, instantiateViewControllerWithIdentifier will crash on an invalid argument exception stating that a view controller with that identifier doesn’t exist.

Build and run the application; click one of the buttons in a cell, and watch your modal view controller appear, as shown in the following screenshot:

View Launched from Delegate

Adding the Top Views And The Swipe Action

Now that you have the bottom part of the view working, it’s time to get the top portion up and running.

Open Main.storyboard and drag a UIView into your SwipeableCell. The view should take up the entire height and width of the cell and cover your buttons, so you won’t be able to see them until you get the swipe working.

If you want to be precise, you can open the Size Inspector and set the view’s width and height to 320 and 43, respectively:

swipeable-320-43

You’ll also need a constraint to pin the view to the edges of the content view. Select the view and click the Pin button. Select all four spacing constraints and set their values to 0 as shown below:

swipeable-constraint

Hook this new view up to its outlet by following the same steps as before: right-click the swipeable cell in the navigator on the left and drag from the myContentView outlet to the new view.

Next, drag a UILabel into the view; pin it 20 points from the left side of the view and center it vertically. Hook this label up to the myTextLabel outlet.

Build and run your application; your cells are looking somewhat normal again:

Back to cells

Adding the data

But why is the actual cell text data not showing up? That’s because you’re only assigning the itemText to a property rather than doing anything that affects myTextLabel.

Open SwipeableCell.m and add the following method:

- (void)setItemText:(NSString *)itemText {
  //Update the instance variable
  _itemText = itemText;
 
  //Set the text to the custom label.
  self.myTextLabel.text = _itemText;
}

This is an override of the default setter for the itemText property.

Aside from updating the backing instance variable, the above method also updates the visible label.

Finally, to make the result of the next few steps a little easier to see, you’re going to make the title of the item a little longer so that some text will still be visible when the cell is swiped.

Head back to MasterViewController.m and update the following line in viewDidLoad where the item titles are generated:

NSString *item = [NSString stringWithFormat:@"Longer Title Item #%ld", (long)i];

Build and run your application; you can now see the appropriate item titles as shown below:

Longer Item Titles displayed in custom label

Gesture recognisers – go!

Now here comes the “fun” part — building up the math, the constraints, and the gesture recognizers that facilitate the swiping action.

First, add the following properties to your SwipeableCell class extension at the top of SwipeableCell.m:

@property (nonatomic, strong) UIPanGestureRecognizer *panRecognizer;
@property (nonatomic, assign) CGPoint panStartPoint;
@property (nonatomic, assign) CGFloat startingRightLayoutConstraintConstant;
@property (nonatomic, weak) IBOutlet NSLayoutConstraint *contentViewRightConstraint;
@property (nonatomic, weak) IBOutlet NSLayoutConstraint *contentViewLeftConstraint;

The short version of what you’re going to be doing is to track a pan gesture and then adjust the left and right constraints on your view based on a) how far the user has panned the cell and b) where the cell was when it started.

In order to do that, you’ll first need to hook up the IBOutlets for the left and right constraints of the myContentView view. These constraints pin that view to the cell’s contentView.

You can figure out which constraints these are by flipping open the list of constraints and examining which ones light up as you go through the list until you find the appropriate ones. In this case, it’s the constraint between the right side of myContentView and the main contentView as shown below:

Highlighting Constraints

Once you’ve located the appropriate constraint, hook up the appropriate outlet — in this case, it’s the contentViewRightConstraint, as such:

Hook Up Constraint to IBOutlet

Follow the same steps to hook up the contentViewLeftConstraint to the constraint between the left side of myContentView and the main contentView.

Next, open SwipeableCell.m and modify the @interface statement for the class extension category so that it conforms to the UIGestureRecognizerDelegate protocol as follows:

@interface SwipeableCell() <UIGestureRecognizerDelegate>

Then, still in SwipeableCell.m, add the following method:

- (void)awakeFromNib {
  [super awakeFromNib];
 
  self.panRecognizer = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(panThisCell:)];
  self.panRecognizer.delegate = self;
  [self.myContentView addGestureRecognizer:self.panRecognizer];
}

This sets up the pan gesture recognizer and adds it to the cell.

Also add the following method:

- (void)panThisCell:(UIPanGestureRecognizer *)recognizer {
  switch (recognizer.state) {
    case UIGestureRecognizerStateBegan:
      self.panStartPoint = [recognizer translationInView:self.myContentView];
      NSLog(@"Pan Began at %@", NSStringFromCGPoint(self.panStartPoint));
      break;
    case UIGestureRecognizerStateChanged: {
      CGPoint currentPoint = [recognizer translationInView:self.myContentView];
      CGFloat deltaX = currentPoint.x - self.panStartPoint.x;
      NSLog(@"Pan Moved %f", deltaX);
    }
      break;
    case UIGestureRecognizerStateEnded:
      NSLog(@"Pan Ended");
      break;
    case UIGestureRecognizerStateCancelled:
      NSLog(@"Pan Cancelled");
      break;
    default:
      break;
  }
}

This is the method that’s called when the pan gesture recogniser fires. For now, it simply logs the pan gesture details to the console.

Build and run your application; drag your finger across the cell and you’ll see all the logs firing with the movement, like so:

Pan Logs

You’ll see positive numbers if you swipe to the right of your initial touch point, and negative numbers if you swipe to the left of your initial touch point. These numbers will be used to adjust the constraints of myContentView.

Moving those constraints

Essentially, you need to push myContentView over to the left by adjusting the left and right constraints that pin it to the cell’s contentView. The right constraint will take a positive value, and the left constraint will take an equal but negative value.

For instance, if myContentView needs to be moved 5 points to the left, then the right constraint will take a value of 5 and the left constraint will take a value of -5. This slides the entire view over to the left by 5 points without changing its width.

Sounds easy — but there are a lot of moving parts to watch out for. You have to handle things very differently depending on whether the cell is already open or not, and on what direction the user is panning.

You also need to know how far the cell is allowed to slide open. To do this, you’ll have to calculate the width of the area covered by the buttons. The easiest way is to subtract the minimum X position of the leftmost button from the full width of the view.

To clarify, here’s a sneak peek ahead to more clearly illustrate the dimensions you’ll need to be concerned with:

Minimum x of button 2

Luckily, thanks to the CGRect CGGeometry functions, this is super-easy to translate into code.

Add the following method to SwipeableCell.m:

- (CGFloat)buttonTotalWidth {
    return CGRectGetWidth(self.frame) - CGRectGetMinX(self.button2.frame);
}

Add the following two skeleton methods to SwipeableCell.m:

- (void)resetConstraintContstantsToZero:(BOOL)animated notifyDelegateDidClose:(BOOL)endEditing
{
	//TODO: Build.
}
 
- (void)setConstraintsToShowAllButtons:(BOOL)animated notifyDelegateDidOpen:(BOOL)notifyDelegate
{
	//TODO: Build
}

These two skeleton methods — once you flesh them out — will snap the cell open and snap the cell closed. You’ll come back to these in a bit once you’ve added more handling in the pan gesture recognizer.

Replace the UIGestureRecognizerStateBegan case of panThisCell: with the following code:

case UIGestureRecognizerStateBegan:
  self.panStartPoint = [recognizer translationInView:self.myContentView];	           
  self.startingRightLayoutConstraintConstant = self.contentViewRightConstraint.constant;
  break;

You need to store the initial position of the cell (i.e. the constraint value), to determine whether the cell is opening or closing.

Next you need to start adding more handling for when the pan gesture recognizer has changed. Still in, panThisCell:, change the UIGestureRecognizerStateChanged case to look like this:

case UIGestureRecognizerStateChanged: { 
  CGPoint currentPoint = [recognizer translationInView:self.myContentView];
  CGFloat deltaX = currentPoint.x - self.panStartPoint.x;
  BOOL panningLeft = NO; 
  if (currentPoint.x < self.panStartPoint.x) {  //1
    panningLeft = YES;
  }
 
  if (self.startingRightLayoutConstraintConstant == 0) { //2
    //The cell was closed and is now opening
    if (!panningLeft) {
      CGFloat constant = MAX(-deltaX, 0); //3
      if (constant == 0) { //4
        [self resetConstraintContstantsToZero:YES notifyDelegateDidClose:NO];
      } else { //5
        self.contentViewRightConstraint.constant = constant;
      }
    } else {
      CGFloat constant = MIN(-deltaX, [self buttonTotalWidth]); //6
      if (constant == [self buttonTotalWidth]) { //7
        [self setConstraintsToShowAllButtons:YES notifyDelegateDidOpen:NO];
      } else { //8
        self.contentViewRightConstraint.constant = constant;
      }
    }
  }

Most of the code above deals with pan gestures starting from cells in their default “closed” state. Here’s what’s going on in detail:

  1. Here you determine whether you’re presently panning to the left or the right of your original pan point.
  2. If the right layout constraint’s constant is equal to zero, that means myContentView is flush up against the contentView. Therefore the cell must be closed at this point and the user is attempting to open it.
  3. This is the case where the user swipes from left to right to close the cell. Rather than just saying “you can’t do that”, you have to handle the case where the user swipes the cell open a bit then wants to swipe it closed without having lifted their finger to end the gesture.
     
    Since a left-to-right swipe results in a positive value for deltaX and the right-to-left swipe will result in a negative value, you must calculate the constant to set on the right constraint based on the negative of deltaX. The maximum of this and zero is taken, so that the view can’t go too far off to the right.
  4. If the constant is zero, the cell is being closed completely. Fire the method that handles closing — which, as you’ll recall, does nothing at the moment.
  5. If the constant is not zero, then you should set it to the right-hand side constraint.
  6. Otherwise, if you’re panning right to left, the user is attempting to open the cell. In this case, the constant will be the lesser of either the negative value of deltaX or the total width of both buttons.
  7. If the target constant is the total width of both buttons, the cell is being opened to the catch point and you should fire the method that handles opening.
  8. If the constant is not the total width of both buttons, then set the constant to the right constraint’s constant.

Phew! That’s a lot of handling…and that’s just for the case where the cell was already closed. You now need the code to handle the case when the cell is partially open when the gesture starts.

Add the following code directly below the code you just added:

  else {
    //The cell was at least partially open.
    CGFloat adjustment = self.startingRightLayoutConstraintConstant - deltaX; //1
    if (!panningLeft) {
      CGFloat constant = MAX(adjustment, 0); //2
      if (constant == 0) { //3
        [self resetConstraintContstantsToZero:YES notifyDelegateDidClose:NO];
      } else { //4
        self.contentViewRightConstraint.constant = constant;
      }
    } else {
      CGFloat constant = MIN(adjustment, [self buttonTotalWidth]); //5
      if (constant == [self buttonTotalWidth]) { //6
        [self setConstraintsToShowAllButtons:YES notifyDelegateDidOpen:NO];
      } else { //7
        self.contentViewRightConstraint.constant = constant;
      }
    }
  }
 
  self.contentViewLeftConstraint.constant = -self.contentViewRightConstraint.constant; //8
}
    break;

This is the other side of the outer if-statement. It is therefore the case where the cell is initially open.

Once again, here’s an explanation of the various cases you’re handling:

  1. In this case, you’re not just taking the deltaX – you’re subtracting deltaX from the original position of the rightLayoutConstraint to see how much of an adjustment has been made.
  2. If the user is panning left to right, you must take the greater of the adjustment or 0. If the adjustment has veered into negative numbers, that means the user has swiped beyond the edge of the cell, and the cell is closed, which leads you to the next case.
  3. If you’re seeing the constant equal to 0, the cell is closed and you must fire the method that handles closing the cell.
  4. Otherwise, you set the constant to the right constraint.
  5. In the case of panning right to left, you’ll want to take the lesser of the adjustment and the total button width. If the adjustment is higher, then the user has swiped too far past the catch point.
  6. If you’re seeing the constant equal to the total button width, the cell is open, and you must fire the method that handles opening the cell.
  7. Otherwise, set the constant to the right constraint.
  8. Now, you’re finally out of both the “cell was closed” and “cell was at least partially open” conditions, and you can do the same thing to the left constraint’s constant in any of these cases: set it to the negative value of the right constraint’s constant. This ensures the width of myContentView stays consistent no matter what you’ve had to do to the right constraint.

Build and run your application; you can now pan the cell back and forth! It’s not super-smooth, and it stops a little bit before you’d like it to. This is because you haven’t yet implemented the two methods that handle opening and closing the cell.

Note: You may also notice that the table view itself doesn’t scroll at the moment. Don’t worry. Once you’ve got the cells sliding open properly, you’ll fix that.

Snap!

Next up, you need to make the cell snap into place as appropriate. You’ll notice at the moment that the cell just stops if you let go.

Before you get into the methods that handle this, you’ll need a single method to create an animation.

Open SwipeableCell.m and add the following method:

- (void)updateConstraintsIfNeeded:(BOOL)animated completion:(void (^)(BOOL finished))completion {
  float duration = 0;
  if (animated) {
    duration = 0.1;
  }
 
  [UIView animateWithDuration:duration delay:0 options:UIViewAnimationOptionCurveEaseOut animations:^{
    [self layoutIfNeeded];
  } completion:completion];
}

Note: The 0.1-second duration and the ease-out animation curve are values I found looked about right through trial and error. If you find other speeds or animation curves more pleasing to your eye, feel free to change them!

Next, you’ll need to flesh out the two skeleton methods that open and close the cell. Remember that in the original implementation, there’s a bit of a bounce since it uses a UIScrollView subclass as one of the lowest z-index superviews.

To make things look right, you’ll need to give your cell a bit of a bounce when it hits either edge. You’ll also have to ensure your contentView and myContentView have the same backgroundColor for the optical illusion of the bounce to look as seamless as possible.

Add the following constant to the top of SwipeableCell.m, just underneath the import statement:

static CGFloat const kBounceValue = 20.0f;

This constant stores the bounce value to be used in all your bounce animations.

Update setConstraintsToShowAllButtons:notifyDelegateDidOpen: as follows:

- (void)setConstraintsToShowAllButtons:(BOOL)animated notifyDelegateDidOpen:(BOOL)notifyDelegate {
  //TODO: Notify delegate.
 
  //1
  if (self.startingRightLayoutConstraintConstant == [self buttonTotalWidth] &&
      self.contentViewRightConstraint.constant == [self buttonTotalWidth]) {
    return;
  }
  //2
  self.contentViewLeftConstraint.constant = -[self buttonTotalWidth] - kBounceValue;
  self.contentViewRightConstraint.constant = [self buttonTotalWidth] + kBounceValue;
 
  [self updateConstraintsIfNeeded:animated completion:^(BOOL finished) {
    //3
    self.contentViewLeftConstraint.constant = -[self buttonTotalWidth];
    self.contentViewRightConstraint.constant = [self buttonTotalWidth];
 
    [self updateConstraintsIfNeeded:animated completion:^(BOOL finished) {
      //4
      self.startingRightLayoutConstraintConstant = self.contentViewRightConstraint.constant;
    }];
  }];
}

This method executes when the cell should open up all the way. Here’s what’s going on:

  1. If the cell started open and the constraint is already at the full open value, just bail — otherwise the bouncing action will happen over and over and over again as you continue to swipe past the total button width.
  2. You initially set the constraints to be the combined value of the total button width and the bounce value, which pulls the cell a bit further to the left than it should go so that it can snap back. Then you fire off the animation for this setting.

  3. When the first animation completes, fire off a second animation which brings the cell to rest in an open position at exactly the button width.
  4. When the second animation completes, reset the starting constraint or you’ll see multiple bounces.

Update resetConstraintContstantsToZero:notifyDelegateDidClose: as follows:

- (void)resetConstraintContstantsToZero:(BOOL)animated notifyDelegateDidClose:(BOOL)notifyDelegate {
  //TODO: Notify delegate.
 
  if (self.startingRightLayoutConstraintConstant == 0 &&
      self.contentViewRightConstraint.constant == 0) {
    //Already all the way closed, no bounce necessary
    return;
  }
 
  self.contentViewRightConstraint.constant = -kBounceValue;
  self.contentViewLeftConstraint.constant = kBounceValue;
 
  [self updateConstraintsIfNeeded:animated completion:^(BOOL finished) {
    self.contentViewRightConstraint.constant = 0;
    self.contentViewLeftConstraint.constant = 0;
 
    [self updateConstraintsIfNeeded:animated completion:^(BOOL finished) {
      self.startingRightLayoutConstraintConstant = self.contentViewRightConstraint.constant;
    }];
  }];
}

As you can see, this is similar to setConstraintsToShowAllButtons:notifyDelegateDidOpen:, but the logic closes the cell instead of opening it.

Build and run your application; drag the cell all the way to its catch points. You’ll see the bouncing action when you release the cell.

However, if you release the cell before either it’s fully open or fully closed, it’ll remain stuck in the middle. Whoops! You’re not handling the two cases of touches ending or being cancelled.

Find panThisCell: and replace the handling for the UIGestureRecognizerStateEnded case with the following:

case UIGestureRecognizerStateEnded:
  if (self.startingRightLayoutConstraintConstant == 0) { //1
    //Cell was opening
    CGFloat halfOfButtonOne = CGRectGetWidth(self.button1.frame) / 2; //2
    if (self.contentViewRightConstraint.constant >= halfOfButtonOne) { //3
      //Open all the way
      [self setConstraintsToShowAllButtons:YES notifyDelegateDidOpen:YES];
    } else {
      //Re-close
      [self resetConstraintContstantsToZero:YES notifyDelegateDidClose:YES];
    }
  } else {
    //Cell was closing
    CGFloat buttonOnePlusHalfOfButton2 = CGRectGetWidth(self.button1.frame) + (CGRectGetWidth(self.button2.frame) / 2); //4
    if (self.contentViewRightConstraint.constant >= buttonOnePlusHalfOfButton2) { //5
      //Re-open all the way
      [self setConstraintsToShowAllButtons:YES notifyDelegateDidOpen:YES];
    } else {
      //Close
      [self resetConstraintContstantsToZero:YES notifyDelegateDidClose:YES];
    }
  }
  break;

Here, you’re performing handling based on whether the cell was already open or closed as well as where the cell was when the pan gesture ended. In detail:

  1. Check whether the cell was already open or closed when the pan started by checking the starting right layout constraint.
  2. If the cell was closed and you are opening it, you want the point at which the cell automatically slides all the way open to be half of the width of the rightmost button — self.button1. Since you’re measuring against the constraint’s constant, you only need to calculate the actual width of the button itself, not its X position in the view. 

  3. Next, test if the constraint has been opened past the point where you’d like the cell to open automatically. If it’s past that point, automatically open the cell. If it’s not, automatically close the cell.
  4. In the case where the cell starts as open, you want the point at which the cell will automatically snap closed to be a point more than halfway past the leftmost button. Add together the widths of any buttons which are not the leftmost button — in this case, just self.button1 — and half the width of the leftmost button — self.button2 — to find the point to check. 

  5. Test if the constraint has moved past the point where you’d like the cell to close automatically. If it has, close the cell. If it hasn’t, re-open the cell.

Finally, you’ll need a bit of handling in case the touch event is cancelled. Replace the UIGestureRecognizerStateCancelled case with the following:

case UIGestureRecognizerStateCancelled:
  if (self.startingRightLayoutConstraintConstant == 0) {
    //Cell was closed - reset everything to 0
    [self resetConstraintContstantsToZero:YES notifyDelegateDidClose:YES];
  } else {
    //Cell was open - reset to the open state
    [self setConstraintsToShowAllButtons:YES notifyDelegateDidOpen:YES];
  }
  break;

This handling is a bit more straightforward; since the user has cancelled the touch, they don’t want to change the existing state of the cell, so you just need to set everything back the way it was.

Build and run your application; swipe the cell and you’ll see that the cell snaps open and closed no matter where you lift your finger, as shown below:

swipeable-bounce

Playing Nicer With The Table View

There’s just a few more steps before you’re done!

First, your UIPanGestureRecognizer can sometimes interfere with the one which handles the scroll action on the UITableView. Since you’ve already set up the cell to be the pan gesture recognizer’s UIGestureRecognizerDelegate, you only have to implement one (comically verbosely named) delegate method to make this work.

Add the following method to SwipeableCell.m:

#pragma mark - UIGestureRecognizerDelegate
- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer
{
   return YES;
}

This method tells the gesture recognizers that they can both work at the same time.

Build and run your application; open the first cell and you can now scroll the tableview.

There’s still an issue with cell reuse: rows don’t remember their state, so as cells are reused their opened/closed state in the view won’t reflect the actions of the user. To see this, open a cell, then scroll the table a bit. You’ll notice that one cell always remains open, but it’s a different one each time.

To fix the first half of this issue, add the following method to SwipeableCell.m:

- (void)prepareForReuse {
  [super prepareForReuse];
  [self resetConstraintContstantsToZero:NO notifyDelegateDidClose:NO];
}

This method ensures the cell re-closes before it’s recycled.

To solve the second half of the issue, you’re going to add a public method to the cell to facilitate its opening. Then you’ll add some delegate methods to allow MasterViewController to manage which cells are open.

Open SwipeableCell.h. In the SwipeableCellDelegate protocol declaration, add the following two new methods below the existing methods:

- (void)cellDidOpen:(UITableViewCell *)cell;
- (void)cellDidClose:(UITableViewCell *)cell;

These methods will notify the delegate — in your case, the master view controller — that a cell has opened or closed.

Add the following public method declaration in the @interface declaration for SwipeableCell:

- (void)openCell;

Next, open SwipeableCell.m and add the following implementation for openCell:

- (void)openCell {
  [self setConstraintsToShowAllButtons:NO notifyDelegateDidOpen:NO];
}

This method allows the delegate to change the state of a cell.

Still working in the same file, find resetConstraintContstantsToZero:notifyDelegateDidClose: and replace the TODO at the top of the method with the following code:

if (notifyDelegate) {
  [self.delegate cellDidClose:self];
}

Next, find setConstraintsToShowAllButtons:notifyDelegateDidOpen: and replace the TODO at the top of that method with the following code:

if (notifyDelegate) {
  [self.delegate cellDidOpen:self];
}

These two changes notify the delegate when a swipe gesture has completed and the cell has either opened or closed the menu.

Add the following property declaration to the top of MasterViewController.m, inside the class extension category:

@property (nonatomic, strong) NSMutableSet *cellsCurrentlyEditing;

This stores a list of cells that are currently open.

Add the following code to the end of viewDidLoad:

self.cellsCurrentlyEditing = [NSMutableSet new];

This initializes the set so you can add things to it later.

Now add the following methods to the same file:

- (void)cellDidOpen:(UITableViewCell *)cell {
  NSIndexPath *currentEditingIndexPath = [self.tableView indexPathForCell:cell];
  [self.cellsCurrentlyEditing addObject:currentEditingIndexPath];
}
 
- (void)cellDidClose:(UITableViewCell *)cell {
  [self.cellsCurrentlyEditing removeObject:[self.tableView indexPathForCell:cell]];
}

Note that you’re adding the index paths rather than the cells themselves to the list of cells currently editing. If you added the cell objects directly, then you’d see the same issue where the cells would appear open as they are recycled. With this method, you’ll be able to open the cells at the appropriate index paths instead.

Finally, add the following lines to tableView:cellForRowAtIndexPath: just before the final return statement:

if ([self.cellsCurrentlyEditing containsObject:indexPath]) {
  [cell openCell];
}

If the current cell’s index path is in the set, it should be set to open.

Build & run the application. That’s it! You now have a table view that scrolls, maintains the open and closed state of cells, and uses delegate methods to launch arbitrary tasks from button taps in any cell.

Where To Go From Here

The final project is available here as a download. I’ll be working with what I’ve developed here to assemble an open source project to make things a bit more flexible – I’ll be posting a link in the forums when it’s ready to roll.

Any time you're trying to replicate something Apple did without knowing exactly how they did it, you'll find that there are many, many ways to do it. This is just one solution to this problem; however, it's one of the only solutions I've found that doesn't involve lots of crazy mucking around with nested scroll views and the resulting gesture recognizer collisions that can get extremely hairy to untangle! :]

A couple of resources that were very helpful in writing this article, but which ultimately took very different approaches, were Ash Furrow’s article that got the entire ball rolling, and Massimiliano Bigatti’s BMXSwipeableCell project which showed just how deep the rabbit hole can go with the UIScrollView approach.

If you have any suggestions, questions, or related pieces of code, fire away in the comments!


Video Tutorial: Beginning 3D Modeling with Blender: Exporting for OpenGL ES

Readers’ App Reviews – April 2014

mahjongcards

April Readers’ Apps: From Color Theory to Steampunk!

April is finally here, and boy have you all been busy. There were more than 60 submissions since last month’s review article!

It was very hard to narrow them down this month, and I encourage everyone to check the Honorable Mentions at the end of the article. There were tons of great games I just didn’t have time to review this month.

Here are the highlights of the month:

  • An awesome iPad app that lets you make music with chips
  • A game mixing physics and color theory
  • A word game from your steampunk dreams

So stop what you’re doing now: it’s app time!

Chiptunes Pro

Chiptunes
Chiptunes is an amazing app that lets you make music the old-fashioned way. No, not with instruments, silly – with circuits!

Drag and drop any circuits you need to make an awesome tune. You’ve got tuners, multipliers, filters, splitters, and more. Everything you need to build awesome digital music.

Chiptunes even packs audiobus support so you can stream anything you make into other audio apps for recording, tracking, and manipulation.

FreeDum

Freedum
Dum needs your help: he's been trapped by an evil teenager with cruel intentions!

Help guide Dum through obstacles of all kinds, from pencils to razor blades. Fly over sharp objects and head for that exit sign. Dum also has to help lead baby ladybugs to safety along the way.

Won’t you free poor Dum from these mazes of doom?! Freedom is only a few hops away!

Mahjong Cards

mahjongcards
Mahjong Cards is a spin on Mahjong using regular playing cards.

Mahjong Cards has beautiful retina graphics that look awesome on the iPad. There are over 70 cool layouts to choose from.

It's a very simple, well-made game that's relaxing to play. There are no scores or time limits, and the simple sound effects won't interrupt your own music. This is a great game to sit and play while watching Netflix to unwind.

No-Hitter Alerts

noHitters
No-Hitter Alerts is a must have for baseball fans.

No-Hitter Alerts will send you a notification anytime a no-hitter is in progress. You can set up custom alerts for your favorite teams or to be notified anytime 5 innings of no-hitters are pitched.

No-Hitter Alerts has live streaming updates and a season leaderboard for the most no-hitter innings by pitcher.

No-Hitter Alerts will also suggest TV and Radio stations to get in on the action along with weblinks to pitch by pitch coverage.

SnoreCatcher

snorecatcher
Measure your own snores or even better, prove once and for all that your Grandmother snores!

SnoreCatcher listens while you sleep and records your snores automatically. It grades them and offers replays so you can relive the lost sleep.

SnoreCatcher will hold on to all your sleep sessions so you can compare over time.

Kitty Pig

kittypig
Kitty and Pig have banded together to catch butterflies.

Pig just needs to bounce Kitty at the right angle to get the most butterflies. Easy! Except Pig doesn't seem to have figured out the bouncing on his own, so it's up to you to steer Pig and make sure Kitty hits all the butterflies.

It's fun to watch Kitty bounce, but you won't be getting any blue ribbons unless you keep your bounce count to a minimum.

Design Shots

designshots
Sit back and enjoy some Dribbble on the go with Design Shots: your new go-to mobile Dribbble viewer.

Check out the latest popular shots or a few of the debuts for the day. Keep an eye on your following list. Share your favorite shots right from the app or add them to your reading list for a closer look later.

Mix It Up!

MixItup
Mix It Up takes the liquid puzzle game to a new level.

Mix It Up has you combine multiple colored liquids to beat each level, but you've got to get the mixtures just right. Too much blue in your purple and you're ruined.

Mix It Up has awesome physics and simple swipe-to-cut controls that let you guide falling water to its proper place.

Aurobot

Aurobot
Do you have a Roomba? Aurobot is like that, but way more fun!

Aurobot is on a mission and you’re driving! Simple taps left and right decide which direction Aurobot points himself. Tons of obstacles stand between you and victory.

As the game progresses, Aurobot gets faster and the spaces get tighter. You'll need your reflexes ready for hairpin turns at maximum speed to get out of this one alive.

Smash The Bird

smashthebird
Tired of Flappy Bird clones? Me too! Smash the Bird saves the day.

Smash the Bird gives you the chance to obliterate as many flappy birds as you can before the time runs out. Not smashing enough birds? You can get a triple fire powerup!

With tilt to aim and tap to fire, this game makes getting Flappy Bird in your crosshairs the best release you’ve had all day.

LETTERARIUM

letterarium
Steampunk meets Boggle in this addictive word game.

Letters fall into the machine and it's up to you to make the given word with the letters you have. As time goes on, you'll get more and more letters to choose from.

Watch out for bombs and keep an eye on the clock; this machine has a schedule to keep!

Physics and steampunk graphics throughout the game add the flair to make this a unique word game experience.

Travel Memo

TravelMemo
Travel Memo is a handy app to save places you like to visit.

Have you ever been somewhere and found a sweet hangout or a delicious burger bar off the beaten path that you’d love to find again if you ever come back? Travel Memo can help!

Travel Memo lets you tag locations and put them in groups. Great for that vacation spot you visit once a year but always forget all the good restaurants. Or awesome for marking the best coffee shops around the convention center of your favorite conference.

iCanMorph

iCanMorph
iCanMorph is an app that turns shapes, colors, text, and even your own photos into works of animated art.

With over 575 animated effects available, iCanMorph will keep you busy blowing your mind with some amazing effects like breaking up your pictures into hundreds of blocks and building a pyramid.

iCanMorph uses special particle drawing to be able to do all these animations at a solid 60 fps on retina devices.



Honorable Mentions

I had some apps I couldn't review this time around. Don't feel bad: it wasn't a popularity contest or even a rating contest; forum members get priority, among other factors. I make time to try out every app that comes across my screen, and one day I'll find time to write about them all too!

CypherText
Meteor Run
YourSquare
Capture the Tank Strategy Game
Shooting Battle Fighting Game
Flying Tiny Bird In Candy-Land Saga
Tunnel Dash
Prom Salon – Girl Games
palCine
Twerky Bird
Hungry Ray
Flash Tunnel
Tappy Fly
Bricks
Tiny Flap Pig : Jump
Blokz
Patterns for smart kids
You & Me: Story Maker
Nearbie
Flapalope
Trave Memo
FlyingEagles
Super 4096
Do You Know Your USA?
Battle-Sheep
Flying Bacon
Lightz
Balloon Sho-oter
iGroup IM
PicDate
TinyTale
PiggyJump
BAD DUCK
Pins
CheckShare
Wanted Bounty Hunters
Flappy Love Story



Where To Go From Here?

As expected, I really enjoyed your apps – it’s great fun to see what fellow readers make each month.

If you’ve never made an app, we’ve got you covered! Check out our free tutorials to become an iOS star. What are you waiting for – I want to see your app next month!

If you’ve already made the next great app, let me know about it! Submit here!


Audio Tutorial for iOS: File and Data Formats [2014 Edition]

Speaker with audio

Image credit: ilco

Before working with the iPhone, I had sadly little experience with sound formats. I knew the difference between .WAVs and .MP3s, but for the life of me I couldn’t tell you exactly what a .AAC or a .CAF was, or what the best way to convert audio files was on the Mac.

I’ve learned that if you want to develop on the iPhone, it really pays to have a basic understanding of file and data formats, conversion, recording, and which APIs to use when.

This audio tutorial is the first in a three-part Audio Tutorial series covering audio topics of interest to the iPhone developer. In this article, we’ll start by covering file and data formats.

(Jump to Part 2 or Part 3 in the Audio Tutorial series.)

File Formats and Data Formats, Oh My!

The thing to understand is that there are actually two pieces to every audio file: its file format (or audio container), and its data format (or audio encoding).

File Formats (or audio containers) describe the format of the file itself. The actual audio data inside can be encoded many different ways. For example, CAF is a file format that can contain audio encoded in MP3, linear PCM, and many other data formats.

So let’s dig into each of these more thoroughly.

Data Formats (or Audio Encoding)

We’re actually going to start with the audio encoding rather than the file format, because the encoding is actually the most important part.

Here are the data formats supported by the iPhone, and a description of each:

  • AAC: AAC stands for “Advanced Audio Coding”, and it was designed to be the successor of MP3. As you would guess, it compresses the original sound, resulting in disk savings but lower quality. However, the loss of quality is not always noticeable depending on what you set the bit rate to (more on this later). In practice, AAC usually does better compression than MP3, especially at bit rates below 128kbit/s (again more on this later).
  • HE-AAC: HE-AAC is a superset of AAC, where the HE stands for “high efficiency.” HE-AAC is optimized for low bit rate audio such as streaming audio.
  • AMR: AMR stands for “Adaptive Multi-Rate” and is another encoding optimized for speech, featuring very low bit rates.
  • ALAC: Also known as “Apple Lossless”, this is an encoding that compresses the audio data without losing any quality. In practice, the compression is about 40-60% of the original data. The algorithm was designed so that data could be decompressed at high speeds, which is good for devices such as the iPod or iPhone.
  • iLBC: This is yet another encoding optimized for speech, good for voice over IP, and streaming audio.
• IMA4: This is a compression format that gives you 4:1 compression on 16-bit audio files. This is an important encoding for the iPhone, for reasons we will discuss later.
  • linear PCM: This stands for linear pulse code modulation, and describes the technique used to convert analog sound data into a digital format. In simple terms, this just means uncompressed data. Since the data is uncompressed, it is the fastest to play and is the preferred encoding for audio on the iPhone when space is not an issue.
  • μ-law and a-law: As I understand it, these are alternate encodings to convert analog data into digital format, but are more optimized for speech than linear PCM.
  • MP3: And of course the format we all know and love, MP3. MP3 is still a very popular format after all of these years, and is supported by the iPhone.

For more information about these types see Apple’s Using Audio.

So which do I use?

That looks like a big list, but there are actually just a few that are the preferred encodings to use. To know which to use, you have to first keep this in mind:

  • You can play sounds encoded as linear PCM, IMA4, or a few other uncompressed (or simply compressed) formats simultaneously with no issues, because they are quick to decode.
  • For more advanced compression methods such as AAC, MP3, and ALAC, the iPhone does have hardware support to decompress the data quickly – but the problem is it can only handle one file at a time. Therefore, if you play more than one of these encodings at a time, they will be decompressed in software, which is slow.

So to pick your data format, here are a couple of rules that generally apply:

  • If space is not an issue, just encode everything with linear PCM. Not only is this the fastest way for your audio to play, but you can play multiple sounds simultaneously without running into any CPU resource issues.
  • If space is an issue, most likely you’ll want to use AAC encoding for your background music and IMA4 encoding for your sound effects.

The Many Variants of Linear PCM

One final and important note about linear PCM encoding, which again is the preferred uncompressed data format for the iPhone. There are several variants of linear PCM depending on how the data is stored. The data can be stored in big or little endian formats, as floats or integers, and in varying bit-widths.

The most important thing to know here is that the preferred variant of linear PCM on the iPhone is little-endian integer 16-bit, or LEI16 for short. Note that this differs from the preferred variant on Mac OS X, which is native-endian 32-bit floating point. Because audio files are often created on the Mac, it's a good idea to examine the files and convert them to the preferred format for the iPhone.

File Formats (or Audio Containers)

The iPhone supports many file formats including MPEG-1 (.mp3), MPEG-2 ADTS (.aac), AIFF, CAF, and WAVE. But the most important thing to know here is that usually you’ll just want to use CAF, because it can contain any encoding supported on the iPhone, and it is the preferred file format on the iPhone.

Bit Rates

There’s an important piece of terminology related to audio encoding that we need to mention next: bit rates.

The bit rate of an audio file is the number of bits that are processed per unit of time, usually expressed as bits per second (bit/s) or kilobits per second (kbit/s). Higher bit rates produce larger files. Some encodings such as AAC or MP3 let you specify the bit rate to use when compressing the audio file. When you lower the bit rate, you lose quality as well. Unlike other computer-related units, 1 kbit/s is actually 1000 bit/s, not 1024 bit/s.

You should choose a bit rate based on your particular sound file – try it out at different bit rates and see where the best match between file size and quality is. If your file is mostly speech, you can probably get away with a lower bit rate.

Here’s a table that gives an overview of the most common bit rates:

  • 32kbit/s: AM Radio quality
  • 48kbit/s: Common rate for long speech podcasts
  • 64kbit/s: Common rate for normal-length speech podcasts
  • 96kbit/s: FM Radio quality
  • 128kbit/s: Most common bit rate for MP3 music
  • 160kbit/s: Musicians or sensitive listeners prefer this to 128kbit/s
  • 192kbit/s: Digital radio broadcasting quality
  • 320kbit/s: Virtually indistinguishable from CDs
  • 500kbit/s-1,411kbit/s: Lossless audio encoding such as linear PCM
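
To get a feel for what these numbers mean on disk, here's a rough back-of-the-envelope sketch in code (the 3-minute duration is just an assumption for illustration, and the figures ignore container overhead):

// Rough size estimate for a track encoded at a constant 128kbit/s.
double bitRate = 128000.0;            // 128kbit/s = 128,000 bit/s (not 1024-based)
double durationInSeconds = 3 * 60.0;  // assume a 3-minute track
double sizeInBytes = (bitRate * durationInSeconds) / 8.0;
NSLog(@"Approximate size: %.2f MB", sizeInBytes / 1000000.0);
// Logs roughly 2.88 MB; at 320kbit/s the same track would come to about 7.2 MB.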

Sample Rates

There’s one final piece of terminology to cover before we move on: sample rates.

When converting an analog signal to digital format, the sample rate is how often the sound wave is sampled to make a digital signal.

Almost always, 44,100Hz is used, because that is the same rate used for CD audio.
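
As a worked example, uncompressed 16-bit mono linear PCM sampled at 44,100Hz works out to 44,100 samples/s × 16 bits × 1 channel = 705,600 bit/s – which is exactly the bit rate afinfo reports for the pew-pew sound effect in the next article.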

What’s Next?

Next up in the Audio Tutorial series I talk about converting audio files and recording audio files on the Mac.


Audio Tutorial for iOS: Converting and Recording [2014 Edition]


GarageBand Tracks

This article is the second in a three-part Audio Tutorial series covering audio topics of interest to the iPhone developer.

In the first article in the Audio Tutorial series, I covered the difference between file formats and data formats, and the various formats that are supported on the iPhone. Now let’s talk about how you can convert between different formats!

(If you’re in a hurry to learn how to actually play audio on the iPhone, jump to the third article in the Audio Tutorial series.)

afplay, afconvert, and afinfo

Converting audio files on the Mac is extremely easy due to three built in command-line utilities on the Mac: afplay, afconvert, and afinfo.

The easiest to use is afplay – just give it the name of your audio file from a Terminal and it will play away. This is quite convenient when compressing files to various bit rates to hear how they sound.

The next one is afinfo – just give it the name of your audio file, and it will display the file format, data format, bit rate, and other useful info like so:

afinfo pew-pew-lei.caf 
File:           pew-pew-lei.caf
File type ID:   caff
Data format:     1 ch,  44100 Hz, 'lpcm' (0x0000000C) 
    16-bit little-endian signed integer no channel layout.
estimated duration: 0.560181 sec
audio bytes: 49408
audio packets: 24704
bit rate: 705600 bits per second
packet size upper bound: 2
maximum packet size: 2
audio data file offset: 4096
optimized
audio 24704 valid frames + 0 priming + 0 remainder = 24704
source bit depth: I16
----

The above shows you that this file has a file type of CAF, a data format of 16-bit little-endian signed integer (LEI16), a sample rate of 44,100 Hz, and a bit rate of 705,600 bits per second.

Finally, let’s discuss the best utility of all: afconvert. This is extremely easy to use – just issue a command line like the following:

afconvert -d [out data format] -f [out file format] [in file] [out file]

So to convert a file to the preferred uncompressed audio encoding for the iPhone (reminder: the little-endian integer 16-bit variant of linear PCM, a.k.a. LEI16) and the preferred file format for the iPhone (reminder: Core Audio File Format a.k.a. CAFF), you would issue a command like the following:

afconvert -d LEI16 -f 'caff' input_file.xxx output_file.caf

Note I didn't specify the extension for the input file, because afconvert is smart enough to detect the type of audio file and convert appropriately, so the input can be any supported audio data format inside any supported audio file format.

One other note: You can add the -b option right before the input/output files to set the bit rate. So for example, here I save the file at 128kbit/s, then 32kbit/s:

afconvert -d aac -f 'caff' -b 128000 background-music-lei.caf test_128.caf
afconvert -d aac -f 'caff' -b 32000 background-music-lei.caf test_32.caf

Recording Audio on the Mac

I wanted to jot down a couple of notes about good ways to make music and sounds for your apps on the Mac.

First, there is GarageBand. GarageBand makes it really easy to put together some premade loops of drums, guitars, and other sound instruments and make a little song out of it. And if you’re musically inclined, you can record yourself playing along and make some much cooler stuff.

Garage Band Screenshot

So if you haven’t already, take a couple minutes to go through the GarageBand Help from Apple. Specifically, “Use Apple Loops in your projects” is the one I found the most useful.

Note that after you are happy with your song, you’ll have to share it to iTunes or Media Browser, and then “Reveal in Finder” to grab your file for future use.

I found that GarageBand wasn’t the greatest for recording simple sound effects. For that, I turned to a great free audio program called Audacity. You can plug in your mike (I used my Rock Band mike and it worked just fine!) and record your effect, and save it out easily.

Audacity Screenshot

Keep in mind that when you make your own sounds like this, they will most likely be saved as 16-bit big-endian signed integer, or BEI16 – so don't forget to convert them to LEI16 before you include them in your app.
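
For example, using the same afconvert syntax as above (the file names here are just placeholders for whatever you exported from Audacity):

afconvert -d LEI16 -f 'caff' my-sound-effect.aiff my-sound-effect.caf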

If you aren’t musically inclined, there are some sounds licensed under the Creative Commons license at The Freesound Project. Or you can always hire a professional!

What’s Next?

In the next and final article in the Audio Tutorial series I show how to play audio programmatically on the iPhone.



Audio Tutorial for iOS: Playing Audio Programmatically [2014 Edition]

Screenshot from BasicSounds sample project

Screenshot from BasicSounds sample project

This article is the third in a three-part Audio Tutorial series covering audio topics of interest to the iPhone developer.

So far in this Audio Tutorial series we’ve talked about the difference between file and data formats and how to convert and record audio on your Mac. Now we’ll get to the fun part – actually playing audio on your phone!

There are many ways to play audio on the iPhone – System Sound Services, AVAudioPlayer, Audio Queue Services, and OpenAL. Without outside support libraries, the two easiest ways by far are System Sound Services and AVAudioPlayer – so let’s talk about when you would (and wouldn’t) want to use those, and how you can use them.

System Sound Services

System Sound Services provides an extremely easy way to play short audio files. This is particularly useful for audio alerts and simple game sounds (such as making a ‘click’ when moving a game piece). All you have to do is the following:

NSString *pewPewPath = [[NSBundle mainBundle] 
    pathForResource:@"pew-pew-lei" ofType:@"caf"];
NSURL *pewPewURL = [NSURL fileURLWithPath:pewPewPath];
// You can't take the address of a property expression, so this assumes pewPewSound
// is a SystemSoundID property and uses its backing ivar. (Requires AudioToolbox.)
AudioServicesCreateSystemSoundID((__bridge CFURLRef)pewPewURL, &_pewPewSound);
AudioServicesPlaySystemSound(self.pewPewSound);

It is important to define pewPewSound as an iVar or property, and not as a local variable so that you can dispose of it later in dealloc. It is declared as a SystemSoundID.

If you were to dispose of it immediately after AudioServicesPlaySystemSound(self.pewPewSound), then the sound would never play.
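
For completeness, here's a minimal sketch of that cleanup, assuming pewPewSound is a SystemSoundID property backed by the _pewPewSound ivar used above:

- (void)dealloc {
  // Free the sound resources allocated by AudioServicesCreateSystemSoundID.
  AudioServicesDisposeSystemSoundID(_pewPewSound);
}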

Doesn’t get much easier than that. However there are some strong drawbacks to this method:

  • It only supports audio data formats linear PCM or IMA4.
  • It only supports audio file formats CAF, AIF, or WAV.
  • The sounds must be 30 seconds or less in length
  • The sound plays immediately, with no control over playback once it starts.
  • Only one sound can play at a time
  • And more – see the Multimedia Programming Guide.

AVAudioPlayer

So what if you have an audio file encoded with AAC or MP3 that you want to play as background music? Another easy way to play music is via the AVAudioPlayer class. Since the AVAudioPlayer class is part of AVFoundation, you will need to @import AVFoundation into your project.
For the most part, it again is quite simple:

NSError *error;
// backgroundMusicURL is an NSURL pointing to your music file.
self.backgroundMusicPlayer = [[AVAudioPlayer alloc]
    initWithContentsOfURL:backgroundMusicURL error:&error];
[self.backgroundMusicPlayer prepareToPlay];
[self.backgroundMusicPlayer play];

There are many advantages of using AVAudioPlayer; it gives you a lot of bang for the buck. You can also play several sounds at once (using a different AVAudioPlayer for each sound), and you can play sounds even when your app is in the background.

However, the drawback of AVAudioPlayer is that it can be slow to start a sound. If you tap a button and try to trigger a sound with AVAudioPlayer, there will be a noticeable small delay. But if that doesn't matter to you (like for starting up background music), AVAudioPlayer is a fine choice. By the way, prepareToPlay is optional; if you don't call it, it will be called implicitly when you call play.

And there are a couple other things to keep in mind:

  1. If you’re playing background music, you should check to see if other audio (like the iPod) is playing first, so you don’t have two layers of music going on at once!
  2. If a phone call arrives and the user chooses “Decline,” by default your AVAudioPlayer will stop. You can start it back up again by implementing the AVAudioPlayerDelegate protocol and resuming the music in the audioPlayerEndInterruption:withOptions: method. A sketch of both points follows below.
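
Here's that sketch – it assumes your class owns the backgroundMusicPlayer property from the snippet above, adopts AVAudioPlayerDelegate, and has been set as the player's delegate:

// 1. Only start the background music if nothing else (like the iPod app) is already playing.
if (![[AVAudioSession sharedInstance] isOtherAudioPlaying]) {
  [self.backgroundMusicPlayer play];
}
 
// 2. Resume the music once an interruption (such as a declined call) ends.
- (void)audioPlayerEndInterruption:(AVAudioPlayer *)player withOptions:(NSUInteger)flags {
  [player play];
}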

Sample Code

I put together some sample code showing how to use System Sound Services and AVAudioPlayer that you might want to check out. It also illustrates how to use AVAudioSessions, how to handle interruptions, and how to set audio to play in the background. And in addition to demonstrating those APIs, it has some mighty funky beats and a cool spaceship to boot. Pew-pew!

OpenAL

If you’re writing a game or another app where you want fine grained control of audio with low latency, you probably don’t want to use the above methods. Instead, you might want to use OpenAL, a cross-platform audio library supported by the iPhone.


OpenAL can be a beast with a steep learning curve. Luckily, Alex Restrepo has made a nice Sound Engine library using OpenAL that you can either use in your projects or use as a reference.

Another option is the Cocos2D game library, which includes an extremely easy to use sound engine that makes playing audio a snap. You can learn how to use it in the tutorial How To Make a Simple iPhone Game With Cocos2D 3.0.

Where to go from here?

That’s about all I’m going to cover about audio topics in iPhone programming for now – but keep in mind I’ve barely scratched the surface. If you’re interested in more, I’d recommend reading Apple’s docs, especially the Core Audio Overview and the Audio Session Programming Guide, and possibly digging into OpenAL a bit more. You might also like to look at the tutorial How To Make a Music Visualizer that contains some additional examples of iPhone audio.

I hope this Audio Tutorial series has been useful to other developers who may be new to audio concepts. Feel free to share any additional info you’re aware of regarding audio programming that may be useful to others!


How to Create a Framework for iOS


In the previous tutorial, you learned how to create a reusable knob control. However, it might not be obvious how to make it easy for other developers to reuse it.

One way to share it would be to provide the source code files directly. However, this isn’t particularly elegant. It exposes implementation details you might not want to share. Additionally, developers might not want to see everything, because they may just want to integrate a portion of your brilliant code into their own codebase.

Another approach is to compile your code into a static library for developers to add to their projects. However, this requires you to distribute public header files in tandem, which is awkward at best.

You need a simpler way to compile your code, and it needs to be easy to share and reuse across multiple projects. What you need is a way to package a static library and its headers in a single component, which you can add to a project and use immediately.

Good thing that’s the focus of this tutorial. What you’ll learn will help you solve this exact problem, by making use of frameworks. OS X has the best support for them because Xcode offers a project template which includes a default build target and can accommodate resource files such as images, sounds and fonts. You can create a framework for iOS, but it’s a touch trickier; if you follow along you’ll learn how to work around the many roadblocks.

As you work through this tutorial, you’ll:

  • Build a basic static library project in Xcode
  • Build an app with a dependency on this library project
  • Discover how to convert the static library project into a fully-fledged framework
  • Finally, at the end, you’ll see how to package an image file to go along with your framework in a resource bundle

Getting Started

The main purpose of this tutorial is to explain how to create a framework for use in your iOS projects, so unlike other tutorials on the site, there will only be a small amount of Objective-C code used throughout the tutorial, and this is mainly just to demonstrate the concepts we’ll cover.

Start by downloading the source files for the RWKnobControl available here. As you go through the process of creating the first project in the section Creating a Static Library Project, you’ll see how to use them.

All of the code and project files you'll create whilst making this project are available on GitHub, with a separate commit for each build stage of the tutorial.

What is a Framework?

A framework is a collection of resources; it collects a static library and its header files into a single structure that Xcode can easily incorporate into your projects.

On OS X, it’s possible to create a Dynamically Linked framework. Through dynamic linking, frameworks can be updated transparently without requiring applications to relink to them. At runtime, a single copy of the library’s code is shared among all the processes using it, thus reducing memory usage and improving system performance. As you see, it’s powerful stuff.

On iOS, you cannot add custom frameworks to the system in this manner, so the only dynamically linked frameworks are those that Apple provides.

However, this doesn’t mean frameworks are irrelevant to iOS. Statically Linked frameworks are still a convenient way to package up a code-base for re-use in different apps.

Since a framework is essentially a one-stop shop for a static library, the first thing you’ll do in this tutorial is learn how to create and use a static library. When the tutorial moves on to building a framework, you’ll know what’s going on, and it won’t seem like smoke and mirrors.

Creating a Static Library Project

Open Xcode and create a new static library project by clicking File\New\Project and selecting iOS\Framework and Library\Cocoa Touch Static Library.

ios_framework_creating_static_lib

Name the product RWUIControls and save the project to an empty directory.

ios_framework_options_for_static_lib

A static library project is made up of header files and implementation files, which are compiled to make the library itself.

To make life easier for developers that use your library and framework, you’re going to make it so they only need to import a single header file to access all the classes you wish to make public.

When creating the static library project, Xcode added RWUIControls.h and RWUIControls.m. You don’t need the implementation file, so right click on RWUIControls.m and select delete; move it to the trash if prompted.

Open up RWUIControls.h and replace the contents with the following:

#import <UIKit/UIKit.h>

This imports the umbrella header of UIKit, which the library itself needs. As you create the different component classes, you'll add them to this file, which ensures they'll be accessible to the library's users.

The project you’re building relies on UIKit, but Xcode’s static library project doesn’t link against it by default. To fix this, add UIKit as a dependency. Select the project in the navigator, and in the central pane, choose the RWUIControls target.

Click on Build Phases and then expand the Link Binary with Libraries section. Click the + to add a new framework, navigate to UIKit.framework, and click Add.

ios_framework_add_uikit_dependency

A static library is of no use unless it's combined with header files, which tell the compiler which classes and methods exist within the binary. Some of the classes you create in your library will be publicly accessible, and some will be for internal use only.

Next, you need to add a new phase in the build, which will collect the public header files and put them somewhere accessible to the compiler. Later, you’ll copy these into the framework.

While still looking at the Build Phases screen in Xcode, choose Editor\Add Build Phase\Add Copy Headers Build Phase.

Note: If you find the option is grayed out, try clicking in the white area directly below the existing build phases to shift the focus, and then try again.

ios_framework_add_copy_headers_build_phase

Drag RWUIControls.h from the navigator to the Public part of the panel to place it in the public section under Copy Headers. This ensures the header file is available to anybody who uses your library.

ios_framework_add_header_to_public

Note: It might seem obvious, but it’s important to note that all header files included in any of your public headers must also be made public. Otherwise, developers will get compiler errors while attempting to use the library. It’s no fun for anybody when Xcode reads the public headers and then cannot read the headers you forgot to make public.

Creating a UI Control

Now that you've set up your project, it's time to add some functionality to the library. Since the point of this tutorial is to describe how to build a framework, not how to build a UI control, you'll borrow the code from the last tutorial. In the zip file you downloaded earlier you'll find the directory RWKnobControl. Drag it from the Finder into the RWUIControls group in Xcode.

ios_framework_drop_rwuiknobcontrol_from_finder

Choose to Copy items into destination group’s folder and ensure the new files go to the RWUIControls static library target by ticking the appropriate box.

ios_framework_import_settings_for_rwknobcontrol

This will add the implementation files to the compilation list and, by default, the header files to the Project group. This means that they will be private.

ios_framework_default_header_membership

Note: The three section names can be somewhat confusing until you break them down. Public is just as you’d expect. Private headers are still exposed, which is a little misleading. And Project headers are those specific to your project which are, somewhat ironically, private. Therefore, you’ll find more often than not that you’ll want your headers in either the Public or Project groups.

Now you need to share the main control header, RWKnobControl.h, and there are several ways you can do this. The first is to drag the file from the Project group to the Public group in the Copy Headers panel.

ios_framework_drag_header_to_public

Alternatively, you might find it easier to change the membership in the Target Membership panel when editing the file. This option is a bit more convenient as you can continue to add and develop the library.

ios_framework_header_membership

Note: As you continue to add new classes to your library, remember to keep their membership up-to-date. Make as few headers public as possible, and ensure the remainder are in the Project group.

The other thing to do with your control’s header file is add it to the library’s main header file, RWUIControls.h. With such a main header file, a developer using your library only needs to include one file like you see below, instead of having to sort out which pieces they need:

#import <RWUIControls/RWUIControls.h>

Therefore, add the following to RWUIControls.h:

// Knob Control
#import <RWUIControls/RWKnobControl.h>

Configuring Build Settings

You are now very close to building this project and creating a static library. However, there are a few settings to configure to make the library as user-friendly as possible.

First, you need to provide a directory name for where you’ll copy public headers. This ensures you can locate the relevant headers when you use the static library.

Click on the project in the Project Navigator, and then select the RWUIControls static library target. Select the Build Settings tab, then search for public header. Double click on the Public Headers Folder Path setting and enter the following in the popup:

include/$(PROJECT_NAME)

ios_framework_public_headers_path

You’ll see this directory later.

Now you need to change some other settings, specifically those that control what remains in the binary library. The compiler gives you the option of removing dead code, i.e. code which is never accessed, and you can also remove debug symbols, such as function names and other debugging-related details.

Since you’re creating a framework for others to use, it’s best to disable both and let the user choose what’s best for their project. To do this, using the same search field as before, update the following settings:

  • Dead Code Stripping – Set this to NO
  • Strip Debug Symbols During Copy – Set this to NO for all configurations
  • Strip Style – Set this to Non-Global Symbols

Build and run. There’s not a lot to see yet, but it’s still good to confirm the project builds successfully and without warnings or errors.

To build, select the target as iOS Device and press cmd+B to perform the build. Once completed, the libRWUIControls.a product in the Products group of the Project Navigator will turn from red to black, signaling that it now exists. Right click on libRWUIControls.a and select Show in Finder.

ios_framework_successful_first_build

In this directory you’ll see the static library itself, libRWUIControls.a, and the directory you specified for the public headers, include/RWUIControls. Notice the headers you made public can be found in this folder, just as you might expect.

Creating a Dependent Development Project

Developing a UI controls library for iOS is extremely difficult when you can’t actually see what you’re doing, and that seems to be the case now.

Nobody wants you to work blindly, so in this section you’re going to create a new Xcode project that will have a dependency on the library you just created. This will allow you to develop the framework using an example app. Naturally, the code for this app will be kept completely separate from the library itself, as this makes for a much cleaner structure.

Close the static library project by choosing File/Close Project. Then create a new project using File/New/Project. Select iOS/Application/Single View Application, and call the new project UIControlDevApp. Set the class prefixes to RW and specify that it should be iPhone only. Finally save the project in the same directory you used for RWUIControls.

To add the RWUIControls library as a dependency, drag RWUIControls.xcodeproj from the Finder into the UIControlDevApp group in Xcode.

ios_framework_import_library_into_dev_app

You can now navigate around the library project, from inside the app’s project. This is perfect because it means that you can edit code inside the library and run the example app to test the changes.

Note: You can’t have the same project open in two different Xcode windows. If you find that you’re unable to navigate around the library project, check that you don’t have it open in another Xcode window.

Rather than recreate the app from the last tutorial, you can simply copy the code. First, select Main.storyboard, RWViewController.h and RWViewController.m and delete them by right clicking and selecting Delete, choosing to move them to the trash. Then copy the DevApp folder from the zip file you downloaded earlier right into the UIControlDevApp group in Xcode.

ios_framework_adding_files_to_dev_app

Now you’re going to add the static library as a build dependency of the example app:

  • Select the UIControlDevApp project in the Project Navigator.
  • Navigate to the Build Phases tab of the UIControlDevApp target.
  • Open the Target Dependencies panel and click the + to show the picker.
  • Find the RWUIControls static library, select it, and click Add. This action means that when building the dev app, Xcode will check to see whether the static library needs rebuilding or not.

In order to link against the static library itself, expand the Link Binary With Libraries panel and again click the +. Select libRWUIControls.a from the Workspace group and click Add.

This action makes it so that Xcode will link it against the static library, just as it links against system frameworks like UIKit.

ios_framework_add_dependencies_to_dev_app

Build and run to see it in action. If you followed the previous tutorial on building a knob control, you’ll recognize the simple app before your eyes.

ios_framework_dev_app_buildrun1

The beauty of using nested projects like this is that you can continue to work on the library itself, without ever leaving the example app project, even as you maintain the code in different places. Each time you build the project, you’re also checking that you have the public/project header membership set correctly. The example app won’t build if it’s missing any required headers.

Building a Framework

By now, you’re probably impatiently tapping your toes and wondering where the framework comes into play. Understandable, because so far you’ve done a lot of work and yet there is no framework in sight.

Well, things are about to change, and quickly too. The reason you've not created a framework yet is that a framework is pretty much a static library plus a collection of headers – exactly what you've built so far.

There are a couple of things that make a framework distinct:

  1. The directory structure. Frameworks have a special directory structure that is recognized by Xcode. You’ll create a build task, which will create this structure for you.
  2. The Slices. Currently, when you build the library, it’s only for the currently required architecture, i.e. i386, arm7, etc. In order for a framework to be useful, it needs to include builds for all the architectures on which it needs to run. You’ll create a new product which will build the required architectures and place them in the framework.

There is quite a lot of scripting magic in this section, but we’ll work through it slowly; it’s not nearly as complicated as it appears.

Framework Structure

As mentioned previously, a framework has a special directory structure which looks like this:

ios_framework_directory_structure

Now you’ll add a script to create this during the static library build process. Select the RWUIControls project in the Project Navigator, and select the RWUIControls static library target. Choose the Build Phases tab and add a new script by selecting Editor/Add Build Phase/Add Run Script Build Phase.

ios_framework_framework_add_run_script_build_phase

This creates a new panel in the build phases section, which allows you to run an arbitrary Bash script at some point during the build. Drag the panel around in the list if you want to change the point at which the script runs in the build process. For the framework project, run the script last, so you can leave it where it’s placed by default.

ios_framework_new_run_script_build_phase

Rename the script by double clicking on the panel title Run Script and replace it with Build Framework.

ios_framework_rename_script

Paste the following Bash script into the script field:

set -e
 
export FRAMEWORK_LOCN="${BUILT_PRODUCTS_DIR}/${PRODUCT_NAME}.framework"
 
# Create the path to the real Headers directory
mkdir -p "${FRAMEWORK_LOCN}/Versions/A/Headers"
 
# Create the required symlinks
/bin/ln -sfh A "${FRAMEWORK_LOCN}/Versions/Current"
/bin/ln -sfh Versions/Current/Headers "${FRAMEWORK_LOCN}/Headers"
/bin/ln -sfh "Versions/Current/${PRODUCT_NAME}" \
             "${FRAMEWORK_LOCN}/${PRODUCT_NAME}"
 
# Copy the public headers into the framework
/bin/cp -a "${TARGET_BUILD_DIR}/${PUBLIC_HEADERS_FOLDER_PATH}/" \
           "${FRAMEWORK_LOCN}/Versions/A/Headers"

This script first creates the RWUIControls.framework/Versions/A/Headers directory before then creating the three symbolic links required for a framework:

  • Versions/Current => A
  • Headers => Versions/Current/Headers
  • RWUIControls => Versions/Current/RWUIControls

Finally, the public header files are copied into the Versions/A/Headers directory from the public headers path you specified before. The -a argument ensures the modified times don’t change as part of the copy, thereby preventing unnecessary rebuilds.

Now, select the RWUIControls static library scheme and the iOS Device build target, then build using cmd+B.

ios_framework_build_target_static_lib

Right click on the libRWUIControls.a static library in the Products group of the RWUIControls project, and once again select Show In Finder.

ios_framework_static_lib_view_in_finder

Within this build directory you can access the RWUIControls.framework, and confirm the correct directory structure is present and populated:

ios_framework_created_framework_directory_structure

This is a leap forward on the path of completing your framework, but you’ll notice that there isn’t a static lib in there yet. That’s what you’re going to sort out next.

Multi-Architecture Build

iOS apps need to run on many different architectures:

  • arm7: Used in the oldest iOS 7-supporting devices
  • arm7s: As used in iPhone 5 and 5C
  • arm64: For the 64-bit ARM processor in iPhone 5S
  • i386: For the 32-bit simulator
  • x86_64: Used in 64-bit simulator

Every architecture requires a different binary, and when you build an app Xcode will build the correct architecture for whatever you’re currently working with. For instance, if you’ve asked it to run in the simulator, then it’ll only build the i386 version (or x86_64 for 64-bit).

This means that builds are as fast as they can be. When you archive an app or build in release mode, then Xcode will build for all three ARM architectures, thus allowing the app to run on most devices. What about the other builds though?

Naturally, when you build your framework, you’ll want developers to be able to use it for all possible architectures, right? Of course you do since that’ll mean you can earn the respect and admiration of your peers.

Therefore you need to make Xcode build for all five architectures. This process creates a so-called fat binary, which contains a slice for each of the architectures. Ah-ha!

Note: This actually highlights another reason to create an example app which has a dependency on the static library: the library only builds for the architecture required by the example app, and will only rebuild if something changes. Why should this excite you? It means the development cycle is as quick as possible.

The framework will be created using a new target in the RWUIControls project. To create it, select the RWUIControls project in the Project Navigator and then click the Add Target button shown below the existing targets.

ios_framework_add_target_button

Navigate to iOS/Other/Aggregate, click Next and name the target Framework.

ios_framework_select_aggregate_target

Note: Why use an Aggregate target to build a Framework? Why so indirect? Because Xcode doesn’t provide a framework template for iOS projects. Frameworks enjoy much better support on OS X, where Xcode offers a straightforward Cocoa Framework build target, but there’s no equivalent for iOS apps. To work around this, you’ll use the aggregate build target as a hook for Bash scripts that build the magic framework directory structure. Are you starting to see the method to the madness here?

To ensure the static library builds whenever this new Framework target is built, you need to add a dependency on the static library target. Select the Framework target in the library project and add a dependency in the Build Phases tab. Expand the Target Dependencies panel, click the + and choose the RWUIControls static library.

ios_framework_add_dependency_to_framework_target

The main build part of this target is the multi-platform building, which you’ll perform using a script. As you did before, create a new Run Script build phase by selecting the Build Phases tab of the Framework target, and clicking Editor/Add Build Phase/Add Run Script Build Phase.

ios_framework_framework_add_run_script_build_phase

Change the name of the script by double clicking on Run Script. This time name it MultiPlatform Build.

Paste the following Bash script into the script text box:

set -e
 
# If we're already inside this script then die
if [ -n "$RW_MULTIPLATFORM_BUILD_IN_PROGRESS" ]; then
  exit 0
fi
export RW_MULTIPLATFORM_BUILD_IN_PROGRESS=1
 
RW_FRAMEWORK_NAME=${PROJECT_NAME}
RW_INPUT_STATIC_LIB="lib${PROJECT_NAME}.a"
RW_FRAMEWORK_LOCATION="${BUILT_PRODUCTS_DIR}/${RW_FRAMEWORK_NAME}.framework"
  • set -e ensures that if any part of the script should fail then the entire script will fail. This helps you avoid a partially-built framework.
  • Next, the RW_MULTIPLATFORM_BUILD_IN_PROGRESS variable determines whether the script has been called recursively. If it has, the script exits immediately.
  • Then set up some variables. The framework name will be the same as the project i.e. RWUIControls, and the static lib is libRWUIControls.a.

The next part of the script sets up some functions that the project will use later on. Add the following to the very bottom of the script:

function build_static_library {
    # Will rebuild the static library as specified
    #     build_static_library sdk
    xcrun xcodebuild -project "${PROJECT_FILE_PATH}" \
                     -target "${TARGET_NAME}" \
                     -configuration "${CONFIGURATION}" \
                     -sdk "${1}" \
                     ONLY_ACTIVE_ARCH=NO \
                     BUILD_DIR="${BUILD_DIR}" \
                     OBJROOT="${OBJROOT}" \
                     BUILD_ROOT="${BUILD_ROOT}" \
                     SYMROOT="${SYMROOT}" $ACTION
}
 
function make_fat_library {
    # Will smash 2 static libs together
    #     make_fat_library in1 in2 out
    xcrun lipo -create "${1}" "${2}" -output "${3}"
}
  • build_static_library takes an SDK as an argument, for example iphoneos7.0, and builds the static lib. Most of the arguments are passed through directly from the current build job; the difference is that ONLY_ACTIVE_ARCH is set to NO, ensuring that every architecture builds for the given SDK.
  • make_fat_library uses lipo to join two static libraries into one. Its arguments are two input libraries followed by the desired output location. You can read more about lipo in its man page (man lipo).

The next section of the script determines some more variables which you’ll need in order to use the two methods. You need to know what the other SDK is, for example iphoneos7.0 should go to iphonesimulator7.0 and vice versa, and to locate the build directory for that SDK.

Add the following to the very end of the script:

# 1 - Extract the platform (iphoneos/iphonesimulator) from the SDK name
if [[ "$SDK_NAME" =~ ([A-Za-z]+) ]]; then
  RW_SDK_PLATFORM=${BASH_REMATCH[1]}
else
  echo "Could not find platform name from SDK_NAME: $SDK_NAME"
  exit 1
fi
 
# 2 - Extract the version from the SDK
if [[ "$SDK_NAME" =~ ([0-9]+.*$) ]]; then
  RW_SDK_VERSION=${BASH_REMATCH[1]}
else
  echo "Could not find sdk version from SDK_NAME: $SDK_NAME"
  exit 1
fi
 
# 3 - Determine the other platform
if [ "$RW_SDK_PLATFORM" == "iphoneos" ]; then
  RW_OTHER_PLATFORM=iphonesimulator
else
  RW_OTHER_PLATFORM=iphoneos
fi
 
# 4 - Find the build directory
if [[ "$BUILT_PRODUCTS_DIR" =~ (.*)$RW_SDK_PLATFORM$ ]]; then
  RW_OTHER_BUILT_PRODUCTS_DIR="${BASH_REMATCH[1]}${RW_OTHER_PLATFORM}"
else
  echo "Could not find other platform build directory."
  exit 1
fi

All four of these statements are very similar: they use string comparisons and regular expressions to determine RW_OTHER_PLATFORM and RW_OTHER_BUILT_PRODUCTS_DIR.

The four if statements in more detail:

  1. SDK_NAME will be of the form iphoneos7.0 or iphonesimulator6.1. This regex extracts the non-numeric characters at the beginning of this string. Hence, it results in iphoneos or iphonesimulator.
  2. This regex pulls the numeric version number from the end of the SDK_NAME variable, 7.0 or 6.1 etc.
  3. Here a simple string comparison switches iphonesimulator for iphoneos and vice versa.
  4. Take the platform name from the end of the build products directory path and replace it with the other platform. This ensures the build directory for the other platform can be found. This will be critical when joining the two static libraries.

Now you can trigger the build for the other platform, and then join the resulting static libraries.

Add the following to the end of the script:

# Build the other platform.
build_static_library "${RW_OTHER_PLATFORM}${RW_SDK_VERSION}"
 
# If we're currently building for iphonesimulator, then need to rebuild
#   to ensure that we get both i386 and x86_64
if [ "$RW_SDK_PLATFORM" == "iphonesimulator" ]; then
    build_static_library "${SDK_NAME}"
fi
 
# Join the 2 static libs into 1 and push into the .framework
make_fat_library "${BUILT_PRODUCTS_DIR}/${RW_INPUT_STATIC_LIB}" \
                 "${RW_OTHER_BUILT_PRODUCTS_DIR}/${RW_INPUT_STATIC_LIB}" \
                 "${RW_FRAMEWORK_LOCATION}/Versions/A/${RW_FRAMEWORK_NAME}"
  • First, there’s a call to build the other platform using the build_static_library function you defined earlier.
  • If you’re currently building for the simulator, then by default Xcode will only build the architecture for that system, e.g. i386 or x86_64. In order to build both architectures, this second call to build_static_library rebuilds with the iphonesimulator SDK, and ensures that both architectures build.
  • Finally a call to make_fat_library joins the static lib in the current build directory with that in the other build directory to make the multi-architecture fat static library. This is placed inside the framework.

The final commands of the script are simple copy commands. Add the following to the end of the script:

# Ensure that the framework is present in both platform's build directories
cp -a "${RW_FRAMEWORK_LOCATION}/Versions/A/${RW_FRAMEWORK_NAME}" \
      "${RW_OTHER_BUILT_PRODUCTS_DIR}/${RW_FRAMEWORK_NAME}.framework/Versions/A/${RW_FRAMEWORK_NAME}"
 
# Copy the framework to the user's desktop
ditto "${RW_FRAMEWORK_LOCATION}" "${HOME}/Desktop/${RW_FRAMEWORK_NAME}.framework"
  • The first command ensures that the framework is present in both platform’s build directories.
  • The second copies the completed framework to the user’s desktop. This is an optional step, but I find that it’s a lot easier to place the framework somewhere that is easily accessible.

Select the Framework aggregate scheme, and press cmd+B to build the framework.

ios_framework_select_framework_aggregate_scheme

This will build and place a RWUIControls.framework on your desktop.

ios_framework_built_framework_on_desktop

In order to check that the multi-platform build worked, fire up a terminal and navigate to the framework on the desktop, as follows:

$ cd ~/Desktop/RWUIControls.framework
$ xcrun lipo -info RWUIControls

The first command navigates into the framework itself, and the second uses the lipo command to get information about the RWUIControls static library. This will list the slices that are present in the library.

ios_framework_architectures_in_fat_library

You can see here that there are five slices: i386, x86_64, arm7, arm7s and arm64, which is exactly what you set out to build. Had you run the lipo -info command beforehand, you would have seen a subset of these slices.

How to Use a Framework

Okay, you have a framework, you have libraries and they’re elegant solutions for problems you’ve not yet encountered. But what’s the point of all this?

One of the primary advantages of a framework is how simple it is to use. Now you’re going to create a simple iOS app that uses the RWUIControls.framework that you’ve just built.

Start by creating a new project in Xcode. Choose File/New/Project and select iOS/Application/Single View Application. Call your new app ImageViewer; set it for iPhone only and save it in the same directory you’ve used for the previous two projects. This app will display an image and allow the user to change its rotation using a RWKnobControl.

Look in the ImageViewer directory of the zip file you downloaded earlier for a sample image. Drag sampleImage.jpg from the finder into the ImageViewer group in Xcode.

ios_framework_drag_sample_image_into_xcode

Check the Copy items into destination group’s folder box, and click Finish to complete the import.

Importing a framework follows a nearly identical process. Drag RWUIControls.framework from the desktop into the Frameworks group in Xcode. Again, ensure that you’ve checked the box before Copy items into destination group’s folder.

ios_framework_import_framework

Open up RWViewController.m and replace the code with the following:

#import "RWViewController.h"
#import <RWUIControls/RWUIControls.h>
 
@interface RWViewController ()
@property (nonatomic, strong) UIImageView *imageView;
@property (nonatomic, strong) RWKnobControl *rotationKnob;
@end
 
@implementation RWViewController
 
- (void)viewDidLoad
{
    [super viewDidLoad];
    // Create UIImageView
    CGRect frame = self.view.bounds;
    frame.size.height *= 2/3.0;
    self.imageView = [[UIImageView alloc] initWithFrame:CGRectInset(frame, 0, 20)];
    self.imageView.image = [UIImage imageNamed:@"sampleImage.jpg"];
    self.imageView.contentMode = UIViewContentModeScaleAspectFit;
    [self.view addSubview:self.imageView];
 
    // Create RWKnobControl
    frame.origin.y += frame.size.height;
    frame.size.height /= 2;
    frame.size.width  = frame.size.height;
    self.rotationKnob = [[RWKnobControl alloc] initWithFrame:CGRectInset(frame, 10, 10)];
    CGPoint center = self.rotationKnob.center;
    center.x = CGRectGetMidX(self.view.bounds);
    self.rotationKnob.center = center;
    [self.view addSubview:self.rotationKnob];
 
    // Set up config on RWKnobControl
    self.rotationKnob.minimumValue = -M_PI_4;
    self.rotationKnob.maximumValue = M_PI_4;
    [self.rotationKnob addTarget:self
                          action:@selector(rotationAngleChanged:)
                forControlEvents:UIControlEventValueChanged];
}
 
- (void)rotationAngleChanged:(id)sender
{
    self.imageView.transform = CGAffineTransformMakeRotation(self.rotationKnob.value);
}
 
- (NSUInteger)supportedInterfaceOrientations
{
    return UIInterfaceOrientationMaskPortrait;
}
 
@end

This is a simple view controller that does the following:

  • Import the framework’s header with #import <RWUIControls/RWUIControls.h>.
  • Set up a couple of private properties to hold the UIImageView and the RWKnobControl.
  • Create a UIImageView, and use the sample image that you added to the project a few steps back.
  • Create a RWKnobControl and position it appropriately.
  • Set some properties on the knob control, including setting the change event handler to be the rotationAngleChanged: method.
  • The rotationAngleChanged: method simply updates the transform property of the UIImageView so the image rotates as the knob control moves.

For further details on how to use the RWKnobControl check out the previous tutorial, which explains how to create it.

Build and run. You’ll see a simple app, which as you change the value of the knob control the image rotates.

ios_framework_image_viewer_rotating

Using a Bundle for Resources

Did you notice that the RWUIControls framework only consists of code and headers? For example, you haven’t used any other assets, such as images. This is a basic limitation on iOS, where a framework can only contain header files and a static library.

Now buckle up, this tutorial is about to take off. In this section you’ll learn how to work around this limitation by using a bundle to collect assets, which can then be distributed alongside the framework itself.

You’re going to create a new UI control to be part of the RWUIControls library; a ribbon control. This will place an image of a ribbon on the top right hand corner of a UIView.

Creating a bundle

The resources will be added to a bundle, which takes the form of an additional target on the RWUIControls project.

Open the UIControlDevApp project, and select the RWUIControls sub-project. Click the Add Target button, then navigate to OS X/Framework and Library/Bundle. Call the bundle RWUIControlsResources and select Core Foundation from the framework selection box.

ios_framework_add_bundle_target

There are a couple of build settings to configure since you’re building a bundle for use in iOS as opposed to the default of OSX. Select the RWUIControlsResources target and then the Build Settings tab. Search for base sdk, select the Base SDK line and press delete. This will switch from OSX to iOS.

ios_framework_bundle_set_base_sdk

You also need to change the product name to RWUIControls. Search for product name and double-click to edit. Replace ${TARGET_NAME} with RWUIControls.

ios_framework_bundle_set_product_name

By default, OS X combines images that exist at multiple resolutions, for instance a standard version plus a retina @2x version, into a single multi-resolution TIFF. That’s not what you want for an iOS bundle, so search for hidpi and change the COMBINE_HIDPI_IMAGES setting to NO.

ios_framework_bundle_hidpi_images

Now you’ll make sure the bundle builds whenever you build the framework, by adding it as a dependency of the aggregate target. Select the Framework target, and then the Build Phases tab. Expand the Target Dependencies panel, click the +, and then select the RWUIControlsResources target to add it as a dependency.

ios_framework_add_bundle_as_framework_dependency

Now, within the Framework target’s Build Phases, open the MultiPlatform Build panel, and add the following to the end of the script:

# Copy the resources bundle to the user's desktop
ditto "${BUILT_PRODUCTS_DIR}/${RW_FRAMEWORK_NAME}.bundle" \
      "${HOME}/Desktop/${RW_FRAMEWORK_NAME}.bundle"

This command will copy the built bundle to the user’s desktop. Build the framework scheme now so you can see the bundle appear on the desktop.

ios_framework_bundle_on_desktop

Importing the Bundle

In order to develop against this new bundle, you’ll need to be able to use it in the example app. This means you must add it as both a dependency, and an object to copy across to the app.

In the Project Navigator, select the UIControlDevApp project, then click on the UIControlDevApp target. Expand the Products group of the RWUIControls project and drag RWUIControls.bundle to the Copy Bundle Resources panel inside the Build Phases tab.

In the Target Dependencies panel, click the + to add a new dependency, and then select RWUIControlsResources.

ios_framework_add_bundle_for_dev_project

Building a Ribbon View

That’s all the setup required. Drag the RWRibbon directory from inside the zip file you downloaded earlier into the RWUIControls group within the RWUIControls project.

ios_framework_drag_rwribbon

Choose Copy the items into the destination group’s folder, making sure they are part of the RWUIControls static lib target by ticking the appropriate box.

ios_framework_rwribbon_membership

An important part of the source code is how you reference images. If you take a look at addRibbonView inside the RWRibbonView.m file you’ll see the relevant line:

UIImage *image = [UIImage imageNamed:@"RWUIControls.bundle/RWRibbon"];

The bundle behaves just like a directory, so it’s really simple to reference an image inside a bundle.

To add the images to the bundle, select them in turn and, in the right hand panel, check that they belong to the RWUIControlsResources target.

ios_framework_rwribbon_image_membership

Remember the discussion about making sure headers are accessible to the public? Well, now you need to export the RWRibbonView.h header file: select the file, then choose Public from the drop down menu in the Target Membership panel.

ios_framework_rwribbon_header_membership

Finally, you need to add the header to the framework’s header file. Open RWUIControls.h and add the following lines:

// RWRibbon
#import <RWUIControls/RWRibbonView.h>

Add the Ribbon to the Example App

Open RWViewController.m in the UIControlDevApp project, and add the following instance variable between the curly braces in the @interface section:

RWRibbonView  *_ribbonView;

To create a ribbon view, add the following at the end of viewDidLoad:

// Creates a sample ribbon view
_ribbonView = [[RWRibbonView alloc] initWithFrame:self.ribbonViewContainer.bounds];
[self.ribbonViewContainer addSubview:_ribbonView];
// Need to check that it actually works :)
UIView *sampleView = [[UIView alloc] initWithFrame:_ribbonView.bounds];
sampleView.backgroundColor = [UIColor lightGrayColor];
[_ribbonView addSubview:sampleView];

Build and run the UIControlDevApp scheme and you’ll see the new ribbon control at the bottom of the app:

ios_framework_dev_app_with_ribbon

Using the Bundle in ImageViewer

The last thing to share with you is how to use this new bundle inside another app, the ImageViewer app you created earlier.

To start, make sure your framework and bundle are up to date. Select the Framework scheme and then press cmd+B to build it.

Open up the ImageViewer project, find the RWUIControls.framework item inside the Frameworks group and delete it, choosing Move to Trash if you’re prompted. Then drag the RWUIControls.framework from your desktop to the Frameworks group. This is necessary because the framework has changed significantly since you first imported it.

ios_framework_delete_framework

Note: If Xcode refuses to let you add the framework, then it might not have properly moved it to the trash. If this is the case then delete the framework from the ImageViewer directory in Finder and retry.

To import the bundle, simply drag it from the desktop to the ImageViewer group. Choose to Copy items into destination group’s folder and ensure that it’s added to the ImageViewer target by ticking the necessary box.

ios_framework_import_bundle

You’re going to add the ribbon to the image, which rotates, so there are a few simple changes to make to the code in RWViewController.m.

Open it up and change the type of the imageView property from UIImageView to RWRibbonView:

@property (nonatomic, strong) RWRibbonView *imageView;

Replace the first part of the viewDidLoad method, which was responsible for creating and configuring the UIImageView, with the following:

[super viewDidLoad];
// Create UIImageView
CGRect frame = self.view.bounds;
frame.size.height *= 2/3.0;
self.imageView = [[RWRibbonView alloc] initWithFrame:CGRectInset(frame, 0, 20)];
UIImageView *iv = [[UIImageView alloc] initWithFrame:self.imageView.bounds];
iv.image = [UIImage imageNamed:@"sampleImage.jpg"];
iv.contentMode = UIViewContentModeScaleAspectFit;
[self.imageView addSubview:iv];
[self.view addSubview:self.imageView];

Build and run the app. You’ll see you’re now using both the RWKnobControl and the RWRibbonView from the RWUIControls framework.

ios_framework_image_viewer_with_ribbon

Where To Go From Here?

In this tutorial, you’ve learned everything you need to know about building a framework for use in your iOS projects, including the best way to develop frameworks and how to use bundles to share assets.

Do you have a favored functionality that you use in lots of different apps? The concepts you’ve learned can make your life easier by creating a library you can access over and over again. A framework offers an elegant way to package a library of code, and gives you the flexibility to reuse whatever you need in your next series of awesome apps.

The source code for the completed project is available on Github, with a commit for each build step, or as a downloadable zip file.

The post How to Create a Framework for iOS appeared first on Ray Wenderlich.

Video Tutorial: Using LLDB in iOS Part 1: Getting Started

Unity 4.3 2D Tutorial: Physics and Screen Sizes

zombie_contained

Welcome back to our Unity 4.3 2D Tutorial series!

In the first part of the series, you started making a fun game called Zombie Conga, learning the basics of Unity 4.3’s built-in 2D support along the way.

In the second part of the series, you learned how to animate the zombie and the cat using Unity’s powerful built-in animation system.

In this third part of the series, you got more practice creating Animation Clips, and learned how to control the playback of and transition between those clips.

In this fourth part of the series, you’ll learn about some common issues you may encounter while making your own games, such as 2D physics and dealing with different screen sizes and aspect ratios.

This tutorial picks up where the third part of the series left off. If you don’t already have the project from that tutorial, download it here.

Unzip the file (if you needed to download it) and open your scene by double-clicking ZombieConga/Assets/Scenes/CongaScene.unity.

With your project ready to go, let’s get physical! Or physics-le, as it were.

Getting Started

In Zombie Conga, you don’t actually need to use Unity’s built-in physics engine to write the game. Sure, you’ll need to know when the zombie collides with an enemy or a cat, but you can accomplish that with some basic math.

However, in the spirit of doing unnecessary things in the pursuit of education, this tutorial will show you how to use physics to handle Zombie Conga’s collision detection. By the time you’re done here, you’ll be better prepared to explore other physics topics on your own.

If you’ve ever used physics with 3D objects in Unity, then you already understand a lot about its 2D physics engine, too, because they are quite similar. They both rely on rigidbodies, colliders, physics materials and forces to represent an object’s state in a physics simulation.

One difference is that 2D rigidbodies can only move within the XY plane and can only rotate around the z-axis, which makes 2D collisions a lot easier to deal with than they were before Unity 4.3, when you had to trick 3D objects into reacting in only two dimensions.

The following demonstrates this point by dropping a cube and a sprite and letting physics take over:

physics_comparison

Another difference is that 3D colliders have a size in all three dimensions, whereas 2D colliders have an infinite z-depth. This means that 2D objects will collide with each other regardless of their positions along the z-axis.

The following image shows the colliders for two cubes and two sprites. The positions of the two cubes differ only along the z-axis, and the positions of the two sprites differ only along the z-axis. As you’d expect, the two cubes are not currently colliding, but what you might not expect is that the two sprites are colliding.

depth_diagram

In order for the zombie to participate in physics-based collisions, it needs a collider component. In 3D you would add some subclass of Collider, such as BoxCollider or MeshCollider. When working with the 2D physics engine, you use instances of Collider2D instead of Collider, such as BoxCollider2D or CircleCollider2D.

Note: You can mix both 2D and 3D physics in the same game, but objects using physics of one type cannot interact with objects using physics of the other type, because the two simulations run independently. This also means you cannot add 2D colliders or rigidbodies on a GameObject that contains 3D colliders or rigidbodies. Don’t worry, if you ever accidentally try, Unity will complain about it.

In the zombie’s case, you’ll add what is basically the 2D equivalent of a MeshCollider, called a PolygonCollider2D.

To do so, select zombie in the Hierarchy and add the collider by choosing Component\Physics 2D\Polygon Collider 2D from Unity’s menu.

Unity automatically calculated a group of vertices that fit your sprite fairly well, which you can see highlighted in green in the Scene view, as shown below:

zombie_collider

However, there is actually a slight problem here. The collider you created was for the first frame of animation, because that was the Sprite set on the zombie when you added the component. Unfortunately, it won’t match up with the other Sprites displayed during the zombie’s walk animation, as you can see in the images below:

zombie_frame_collider_mismatch

In many games, this will be fine. In fact, you’d get perfectly acceptable results in Zombie Conga if you used a much simpler collider, such as a BoxCollider2D or a CircleCollider2D. But at some point you’re going to want to have collision areas that match the shape of an animating sprite, so now is a good time to learn how to do it.

Double-click ZombieController inside the Scripts folder in the Project browser to open the script in MonoDevelop.

Rather than use a single collider, you’re going to create a separate one for each frame of animation and then swap them to match the animation. Add the following instance variables to ZombieController to keep track of these colliders:

[SerializeField]
private PolygonCollider2D[] colliders;
private int currentColliderIndex = 0;

Let’s go over this line by line:

  1. The [SerializeField] attribute tells Unity to expose the instance variable below (colliders) in the Inspector. This allows you to make the variable private in code while still giving you access to it from Unity’s editor.
  2. colliders will hold a separate PolygonCollider2D for each frame of animation.
  3. currentColliderIndex will keep track of the index into colliders for the currently active collider.

Save the file (File\Save) and go back to Unity.

Select zombie in the Hierarchy. In the Inspector, expand the Colliders field in the Zombie Controller (Script) component to reveal its Size field.

This field currently contains the value zero, meaning it’s an array with no elements. You want to store a different collider for each of the zombie’s Sprites, so change this value to 4, as shown below:

zombie_colliders_empty

Inside the Inspector, click the Polygon Collider 2D component and drag it into the Zombie Controller (Script) component, releasing it over the field labeled Element 0 in the Colliders array, as demonstrated below:

drag_collider

Next, change the zombie’s sprite to zombie_1 by clicking the target icon to the right of the Sprite field and double-clicking on zombie_1 in the dialog that appears, as shown below:

zombie_sprite_selector

zombie_sprite_selection_dialog

With this new Sprite set on the zombie, add a new Polygon Collider 2D. Check the following Spoiler if you don’t remember how.

Solution Inside: Need help adding a collider?

The zombie now has two colliders attached to it, as shown below:

two_collider_components

Inside the Inspector, click the new Polygon Collider 2D component you just added and drag it into the Element 1 field in the Colliders array of the Zombie Controller (Script) component.

collider_into_element_2

Repeat the previous steps to create colliders for the zombie_2 and zombie_3 Sprites and add these to the Colliders array in the Element 2 and Element 3 fields, respectively.

When you are finished, the Inspector should look like this:

four_collider_components

Select zombie in the Hierarchy and reset its Sprite to zombie_0. Inside the Scene view, you can see that he now has colliders for each of his Sprites, but all at the same time!

zombie_with_four_colliders

This isn’t exactly what you want. Rather than have them all active at the same time, you want to activate only the collider that matches the current frame of animation. But how do you know which is the current frame?

Swapping Colliders at Runtime

As you learned in part 3 of this series (on Animation Controllers), you can configure an Animation Clip to call methods on a GameObject at specific frames of the animation. You’ll take advantage of that feature to update the zombie’s collider.

Switch back to ZombieController.cs in MonoDevelop and add the following method to the class:

public void SetColliderForSprite( int spriteNum )
{
  colliders[currentColliderIndex].enabled = false;
  currentColliderIndex = spriteNum;
  colliders[currentColliderIndex].enabled = true;
}

This first disables the current collider, then it updates currentColliderIndex and enables the new Sprite’s collider.

Save the file (File\Save) and switch back to Unity.

The code you just wrote will take care of setting the zombie’s collider, so you want to make sure the zombie starts without any colliders enabled.

Select zombie in the Hierarchy and disable each of the Polygon Collider 2D components in the Inspector. Your zombie’s colliders should now look like this in the Inspector:

four_colliders_disabled

With its colliders disabled, you’ll need to call SetColliderForSprite each time the zombie’s Sprite changes to ensure the zombie’s current collider and Sprite match each other. To do so, you’ll set up ZombieWalk to fire Animation Events at each keyframe.

Note: You learned how to add Animation Events in the part 3 of this series, so review that if you have any problems with these next few steps.

Select zombie in the Hierarchy and open the Animation view (Window\Animation). Be sure the clip drop-down menu in the Animation view’s control bar shows ZombieWalk.

Press the Animation view’s Record button to enter recording mode and move the scrubber to frame 0, as shown below:

animation_window

Click the Add Event button shown below:

add_event_btuton

Choose SetColliderForSprite(int) from the Function combo box in the Edit Animation Event dialog that appears. Make sure the Int field listed under Parameters contains the value 0, as shown in the following image, and then close the dialog.

anim_event

Now add similar events at frames 1, 2, 3, 4 and 5. When setting the parameters for each event, pass in the values 1, 2, 3, 2 and 1, respectively, to correspond with the Sprites shown at each keyframe.

When you’re done, ZombieWalk‘s events should be configured as shown below:

anim_events

Play the scene with zombie selected in the Hierarchy. Pause the scene by pressing the Pause button in the Scene controls at the top of Unity’s interface, shown here:

pause_button

Now step through the scene a frame at a time using the Frame Advance button to the right of the Pause button, shown here:

frame_advance_button

While stepping through the animation, you can see in the Scene view that the collider matches the current frame of animation, as demonstrated below:

fixed_colliders

Note: Technically, the zombie’s collision detection may use the wrong collider ten frames out of every second. That’s because during each of ZombieWalk‘s keyframes, Unity may process collisions before it fires the Animation Event that updates the zombie’s collider. In other words, Unity may detect collisions before changing the collider, so that the detected collision was actually for the previous frame’s collider.

Is there a solution? Yes, you could change the clip’s frame rate to something much higher, like 60, and then fire the SetColliderForSprite event exactly one frame before the keyframe that changes the Sprite.

Should you bother? Probably not. Zombie Conga’s players probably won’t notice any difference either way because the zombie’s various colliders are so similar. If any bad collision occurs, it will most likely be valid the next frame anyway and the player will probably never notice what happened.

The zombie looks ready, but it needs something to collide with. Hey, hasn’t that old lady been just sitting around doing nothing ever since she showed up in part 1 of this series? It’s time to put the elderly to work!

Collision Detection

For the old lady to participate in collisions, she’ll need a collider. Try attaching a polygon collider to her, checking the following Spoiler if you need help.

Solution Inside: Still need help adding a collider?

Run the scene and now the zombie…walks right through the old lady just like he always did.

failed_collision_1

The zombie and enemy each have colliders, but for colliders to actually have any effect during a collision, at least one of the GameObjects involved needs a RigidBody2D component, too. In this case, you’ll add one to the zombie.

Select zombie in the Hierarchy and choose Component\Physics 2D\Rigidbody 2D from Unity’s menu.

Run the scene now and let the zombie walk straight into the old lady. Hmm, there are a couple bad things happening here.

bad_rigidbody

First, as soon as you pressed Play, the zombie slipped down a bit in the scene. Then, when the zombie hit the enemy, it seemed to actually bump into her, probably getting stuck for a bit before eventually walking around her.

The first problem is a result of the scene’s gravity acting on the zombie’s Rigidbody 2D component. It only happens when the scene first starts playing because after processing the gravity in the first frame, the code you wrote in ZombieController.cs starts setting the zombie’s position directly.

Fortunately, you have your choice of two easy fixes: you can turn off gravity for the entire scene, or just for the zombie. Because Zombie Conga doesn’t need any gravity, you’ll simply turn it off for the entire scene.

In Unity’s menu, go to Edit\Project Settings\Physics 2D and then change Gravity‘s Y value to 0 in the Inspector, as shown below:

gravity_settings

The Gravity values simply accelerate all objects along the X and/or Y-axes by the specified amount. By setting them to zero, you have effectively disabled gravity.

Note: If you ever want to disable or adjust gravity for a particular object, adjust Gravity Scale in the object’s Rigidbody 2D component in the Inspector.

Run again and the zombie doesn’t have that annoying dip when the scene starts.

fixed_gravity

That fixes the first problem, but the zombie is still bumping into the enemy. That’s because your zombie and the old lady are using colliders defined as solid objects, but what you really want are triggers.

Triggers

Triggers are a special kind of collider. They still test for collisions like regular colliders do, but they don’t actually have any effect on the objects with which they collide. That is, they allow other colliders to pass through them, but Unity notifies the objects involved that a collision occurred, allowing you to “trigger” some game logic.

Because Zombie Conga only requires collision detection and not physics-based reactions, trigger colliders are the perfect solution.

Turn the zombie’s colliders into triggers by selecting zombie in the Hierarchy and checking the box labeled Is Trigger in each of the Polygon Collider 2D components in the Inspector, shown below:

trigger_colliders

Run the scene and the zombie walks through the enemy once again, but this time, with colliders!

failed_collision_1

But how is this different from what you had before you added colliders?

When two colliders touch and at least one of them is a trigger, Unity calls various methods on the scripts attached to the GameObject(s) containing the trigger(s). Unity calls OnTriggerEnter2D when a collider enters a trigger, OnTriggerExit2D when a collider exits a trigger, and OnTriggerStay2D every frame during which the collider remains inside a trigger.

Note: You made the zombie’s four colliders triggers instead of the enemy’s one collider, but why? Honestly, you could have done it either way, but you chose the zombie because the game logic you are going to trigger will make more sense in a script on the zombie than it would in a script on the enemy.

However, you should also know that you can attach multiple scripts to a GameObject, and each one can define one or more of these trigger handling methods. During collisions, Unity calls these methods on every script containing them, so you can have multiple scripts react to the collision if necessary. And if two triggers ever collide, Unity calls the appropriate methods on the scripts for both objects.

In Zombie Conga, you’ll only ever need to know when a collider enters a trigger, so you’ll only implement OnTriggerEnter2D.

Switch back to ZombieController.cs in MonoDevelop and add the following method to the class:

void OnTriggerEnter2D( Collider2D other )
{
  Debug.Log ("Hit " + other.gameObject);
}

This just prints out a log to Unity’s console when the zombie collides with any other Collider2D, which you’ll use to test that things are working properly before writing any real logic.

Save the file (File\Save) and go back to Unity.

Play the scene and ram that zombie right into that old lady! Check your Console view (Window\Console) and you’ll see the following message print out many times (or 1 time if you have the Collapse option set in your Console window).

hit_enemy_output

You see so many log messages because Unity calls the method multiple times. But why?

There are actually many different contact points generated as the zombie walks through the enemy, and Unity sends you a message for every one. In most games you’ll want to make sure you don’t respond to all of these collisions, so you’ll add some code later to solve this problem.
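One common pattern, shown here purely as a minimal sketch rather than the approach this series necessarily takes later, is to remember which GameObjects you’ve already reacted to before running any real collision logic. The sketch assumes a using System.Collections.Generic; directive at the top of ZombieController.cs and introduces a hypothetical hitObjects field:

// Requires: using System.Collections.Generic;
// Tracks objects the zombie has already hit, so repeated trigger
// messages for the same object don't re-run the collision logic.
private HashSet<GameObject> hitObjects = new HashSet<GameObject>();

void OnTriggerEnter2D( Collider2D other )
{
  if (hitObjects.Contains(other.gameObject)) {
    return; // Already handled this object; ignore the extra contact points
  }
  hitObjects.Add(other.gameObject);
  Debug.Log ("Hit " + other.gameObject + " for the first time");
}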

Most games also have different types of objects with which to collide and each one may need to trigger different game logic. Zombie Conga is no exception. But before you can handle different types of collisions differently, you’ll need some different types of collisions, right? It’s time to add a collider to the cat so the zombie can hit that kitty like it’s some old lady on the beach!

Hitting Cats

Public Service Announcement: It’s wrong to hit the elderly, and it’s wrong to hit cats. Also, don’t hit elderly people with cats. But if you can manage to hit a cat with an elderly person, then kudos to you, because that’s not easy!

You’ve been adding polygon colliders to your sprites so far, but the collisions in Zombie Conga really don’t need to be as exact as what you get with these colliders. When writing video games, doing the simplest thing is usually the best for performance reasons, so for the cat, you’ll use a simple circle.

Select cat in the Hierarchy and add a collider by choosing Component\Physics 2D\Circle Collider 2D.

In the Scene view you can see a green circle indicating the bounds of the cat’s collider, like this:

cat_collider_large

Unity picks a size that tries to fill your sprite’s bounding box, but the cat’s collider doesn’t need to be so big. Inside the Inspector, change the Radius value to 0.3 for the Circle Collider 2D component, as shown below:

cat_collider_fix

The cat’s collider now looks like this:

cat_collider_small

Play the scene and you’ll see it prints out a bunch of messages like this when the zombie touches the cat:

cat_collision_msg

Again, you’ll fix the multiple collision issue later. For now, you need to fix something about your colliders that you probably didn’t even know was broken.

Static Colliders, and Why They’re Secretly Bad For You

You’ve added colliders to both the enemy and the cat, but there is something else you did that you may not have realized. You inadvertently told Unity that these colliders are static, meaning they won’t move within the scene and they won’t get added, removed, enabled or disabled at runtime.

Unfortunately, those assumptions are incorrect. The enemies in Zombie Conga will move and you’ll disable a cat’s collider once the zombie touches it. You need to tell Unity that these colliders are not static.

But first, why does it matter?

Unity builds up a physics simulation containing all of the colliders in the scene, and it optimizes this process by handling colliders it believes to be static differently from the rest. If a static collider changes at runtime, Unity needs to rebuild the physics simulation, which can be an expensive operation (read: slow) and can also cause the physics to behave oddly. So when you know you’ll be moving GameObjects around, be sure they don’t have static colliders attached.

How do you make a collider dynamic (i.e. not static)? In Unity, if a GameObject has colliders but no rigidbody, then those colliders are considered static. That means you just need to add rigidbodies to the cat and the enemy.

Select cat in the Hierarchy and add a Rigidbody 2D component by choosing Component\Physics 2D\Rigidbody 2D in Unity’s menu.

Repeat the previous step for the enemy to give it a Rigidbody 2D component as well.

With enemy selected in the Hierarchy, check the box labeled Is Kinematic in the RigidBody 2D component inside the Inspector, shown below:

kinematic_rigidbody

This indicates that you will be controlling this object’s movement via scripts rather than relying on the physics engine to move it around.

Note: Ideally, you would make the zombie’s Rigidbody 2D component kinematic, too, because you are moving the zombie via a script rather than physics forces. However, there seems to be a bug in Unity’s 2D physics engine that keeps it from registering trigger collisions unless at least one rigidbody in the collision is not kinematic.

For the enemy to move, she’ll need a script. With enemy still selected, add a new C# script called EnemyController. You should remember how to add scripts from the earlier tutorials in this series, but if not, the following Spoiler gives a brief explanation.

Solution Inside: Need help adding a script?

Open EnemyController.cs in MonoDevelop and then add the following instance variable to the script:

public float speed = -1;

Like you’ve done in other parts of this tutorial, you declared speed as public so you can adjust it from within the editor, allowing you to tweak the feel of the game. You used a negative value for speed to move enemies across the screen from the right to the left.

Now add the following line to Start:

rigidbody2D.velocity = new Vector2(speed, 0);

This simply starts the old lady walking along the x-axis. You won’t ever change her velocity, so there is no need to set the velocity anywhere other than inside Start.

Note: Be aware that setting the enemy’s velocity only in Start means it will ignore any adjustments you make to speed at runtime. You can still tweak this value from within Unity’s editor, but you’ll need to restart the scene to try each new value, which makes the process a bit more tedious.

If you’d like to be able to adjust the enemy’s speed at runtime, simply add the same line you added to Start to a method called FixedUpdate, which is similar to Update but is called at fixed intervals based on the physics simulation.
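For example, a minimal sketch of that alternative in EnemyController.cs might look like the following; it’s optional, and the rest of this tutorial assumes you kept the line in Start:

// Called at fixed intervals by the physics engine. Re-applying the
// velocity here means changes to speed in the Inspector take effect
// immediately while the scene is playing.
void FixedUpdate()
{
  rigidbody2D.velocity = new Vector2(speed, 0);
}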

Save the file (File\Save) and go back to Unity.

Play the scene and cheer as Grandma has clearly recovered nicely from her hip surgery.

Enemy walking with a speed of -2

It won’t take you long to realize that the old lady has wandered off and probably isn’t ever coming back. Rather than issuing a Silver Alert, you’ll solve this problem by doing the following two things: you’ll determine when she leaves the screen and then you’ll respawn her on the other side of the screen, just out of sight. That will make it look like another lady walked onto the beach. But to build your Old Lady Army, you’ll need to know the screen’s bounds.

Dealing with Screen Boundaries

When making games, you’ll often want to position sprites so they look good when running on devices with different aspect ratios. For example, imagine trying to display the player’s score in the upper right corner of the screen. If you positioned the score a fixed number of units away from the screen’s center, it would appear to be in different spots on devices with different aspect ratios, such as a 3.5″ iPhone, a 4″ iPhone and an iPad.

The following images demonstrate this point:

Score Label positioned at (3.5, 2.75) on iPhone 4

Score Label positioned at (3.5, 2.75) on iPhone 5

Respawning the enemy just off screen so she can walk into view poses the same problem as positioning a score label. You can’t pick a specific location because it might be off screen on a small device, but visible on a larger device, as shown in the following images:

bad_spawn_iphone4

Point positioned at (5.5, 0) on iPhone 5

To solve the above situation, you might think to place the spawn point further to the right to account for the largest possible device. However, that would also be incorrect, because the game would then play differently on different devices: the enemy would spend more time off screen on smaller devices than on larger ones. The following images show why:

Point at (7.5, 0). Enemy needs to traverse 2.7 units to appear on screen on iPhone 4.

Point at (7.5, 0). Enemy needs to traverse 1.82 units to appear on screen on iPhone 5.

Neither of those two scenarios is acceptable.

In this section, you’ll create a script that you can attach to a GameObject and it will position the object relative to one of the edges of the camera’s view.

Note: This script will only work if your camera has an orthographic projection. Unity allows you to have more than one camera in a scene, so if you’re working on a game using a perspective camera, simply add a separate camera with an orthographic projection and set up your cameras so the second camera only renders the user interface.

If you don’t know how to do that, take a look at Unity’s tutorial video on cameras.

Go to the Scripts folder in the Project browser, right-click and choose Create\C# Script. Name the new script ScreenRelativePosition.

Open ScreenRelativePosition.cs in MonoDevelop and add the following instance variables to the script:

public enum ScreenEdge {LEFT, RIGHT, TOP, BOTTOM};
public ScreenEdge screenEdge;
public float yOffset;
public float xOffset;

This first line defines an enum type used to identify the four sides of the screen. You’ll assign screenEdge in the Inspector to position an object at the center of that edge of the screen, and you’ll set xOffset and yOffset to adjust the object's position away from that point.

Add the following code inside Start:

// 1
Vector3 newPosition = transform.position;
Camera camera = Camera.main;
 
// 2
switch(screenEdge)
{
  // 3
  case ScreenEdge.RIGHT:
    newPosition.x = camera.aspect * camera.orthographicSize + xOffset;
    newPosition.y = yOffset;
    break;
 
  // 4
  case ScreenEdge.TOP:
    newPosition.y = camera.orthographicSize + yOffset;
    newPosition.x = xOffset;
    break;
}
// 5
transform.position = newPosition;

To keep things easier to digest, the above code only includes the logic for positioning along the right and top sides of the camera's view. Here is what it does:

  1. It copies the object's current position, ensuring the object maintains whatever z position you set in the editor. It also gets a reference to the scene's main camera, which it needs to calculate the new position. If you ever use this script in a scene with more than one camera, modify it so you can specify which camera it should use.
  2. It uses a switch statement to calculate the correct position based on screenEdge. All calculations assume the center of the screen is at position (0,0) because that keeps the sample code the simplest. However, it means that if the camera in your game moves, as it will in Zombie Conga, then this version of the script will only work on child objects of the camera. You'll see more about this in a bit; there's also a sketch just after this list showing one way to lift that restriction.
  3. If screenEdge equals ScreenEdge.RIGHT, it positions the object based on the camera's orthographicSize. Remember, the orthographic size is half the view's height, so multiplying it by the camera's aspect value gives you half the width, to which you add xOffset. You assign the result as newPosition's x value.

    As for newPosition's y value, you simply use yOffset because you know the center of the right side of the view has a y value of zero.
  4. If screenEdge equals ScreenEdge.TOP, it simply adds yOffset to the camera's orthographicSize and assigns the result as newPosition's y value. Because you know the center of the top of the view has an x value of zero, you simply use xOffset for newPosition's x value.
  5. Finally, it updates the object's position with newPosition.
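As mentioned in point 2, the calculations assume the camera sits at (0,0). Purely as a sketch of how you could lift that restriction for objects that aren't children of the camera, you could fold the camera's own position into the calculation, shown here for the RIGHT case only:

  case ScreenEdge.RIGHT:
    // Sketch only: offset the calculation by the camera's position so the
    // result stays correct even when the camera moves away from the origin.
    newPosition.x = camera.transform.position.x
                    + camera.aspect * camera.orthographicSize + xOffset;
    newPosition.y = camera.transform.position.y + yOffset;
    break;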

Before moving on with this script, you should test your code. Save the file (File\Save) and go back to Unity.

Choose GameObject\Create Other\Cube to add a cube to your scene. Don't worry, it's just for a quick test.

Now drag ScreenRelativePosition from the Project browser onto Cube in the Hierarchy.

Inside the Inspector, set Screen Edge to RIGHT in the Screen Relative Position (Script) component, as shown below:

cube_position_settings

Make a note of the cube's position within the scene, then play the scene and notice that the cube snaps to the center of the right edge of the Game view, as shown below:

cube_right_in_game

Feel free to try out some other settings, remembering that you've only implemented RIGHT and TOP thus far. The following shows the cube positioned with Screen Edge set to RIGHT, YOffset set to 2, and XOffset set to -1.5:

cube_right_with_offset

Once you're satisfied that your code works, right-click on Cube in the Hierarchy and choose Delete.

With positioning along the top and right edges working, switch back to MonoDevelop and try to add the cases for the left and bottom edges yourself. Check out the Spoiler below for a solution.

Solution Inside: Need help finding the left or bottom of your screen?
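If you want to check your work against something, here is one possible version of the missing cases; they simply mirror the RIGHT and TOP cases with the signs flipped:

  case ScreenEdge.LEFT:
    // The left edge sits at minus half the view's width
    newPosition.x = -(camera.aspect * camera.orthographicSize) + xOffset;
    newPosition.y = yOffset;
    break;

  case ScreenEdge.BOTTOM:
    // The bottom edge sits at minus half the view's height
    newPosition.y = -camera.orthographicSize + yOffset;
    newPosition.x = xOffset;
    break;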

When you're done with the script, save it (File\Save) and switch back to Unity.

With ScreenRelativePosition.cs complete, you can now position objects in the same place regardless of screen size. You'll use this in Zombie Conga to place the enemy's spawn point just off the right edge of the screen.

Spawn Points

Create a new empty GameObject by going to GameObject\Create Empty in Unity's menu. Inside the Inspector, name this new object SpawnPoint and then drag it onto Main Camera in the Hierarchy. SpawnPoint should now be a child of the camera, as shown below:

spawnpoint_in_hierarchy

You added SpawnPoint as a child of Main Camera because later you'll be moving the camera and you want to ensure the spawn point moves with it.

Note: When you added SpawnPoint to the Main Camera, you may have seen its Transform values change in the Inspector. Although its Transform may have appeared to change, it still occupies the same location in the scene.

That's because when you make one object a child of another, the child's Transform component gets modified to take its parent's Transform into account.
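
If you'd like to see the difference in code, keep in mind that the values shown in a child's Inspector are its local transform, while transform.position still reports its world-space location. A purely illustrative log statement makes this visible:

// Illustrative only: compare the child's world position with its local position.
Debug.Log("world: " + transform.position + "  local: " + transform.localPosition);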

SpawnPoint exists only to represent a position in your game and as such, you won't actually see it. However, it's difficult to keep track of invisible objects while you're developing. To remedy this, do the following:

Select SpawnPoint in the Hierarchy. Click the icon that looks like a cube in the upper left of the Inspector, shown below:

icon_chooser

After clicking the above icon, a menu of different icons appears. Select the green oval shown below:

icon_selection_menu

Inside the Scene view, you should now see a green oval labeled SpawnPoint representing the object's location, shown here:

spawnpoint_in_scene

You're only using this icon to help yourself keep track of SpawnPoint's location, so feel free to choose a different one if you'd prefer. Some of them scale their size when you zoom in and out, while others remain a fixed size regardless of the Scene view's zoom level, so experiment and see what you like the best.

Select SpawnPoint in the Hierarchy. Press Add Component and choose Scripts\Screen Relative Position from the menu that appears.

In the Inspector, set the Screen Relative Position (Script) component's Screen Edge value to RIGHT and set its XOffset value to 1, as shown below:

spawnpoint_position

Select SpawnPoint in the Hierarchy and then play the scene. You'll see in the Scene view that SpawnPoint immediately moves to the right of the scene, just outside of the camera's viewable area, indicated by the white rectangle in the following image:

spawnpoint_moving

With the spawn point in place, it's time to change EnemyController.cs to move the enemy back to the spawn point whenever she moves off screen.

Open EnemyController.cs in MonoDevelop and add the following instance variable to the script:

private Transform spawnPoint;

You'll store the spawn point's Transform here so that the enemy can reference its position whenever she needs to respawn.

Add the following line of code to Start:

spawnPoint = GameObject.Find("SpawnPoint").transform;

This simply finds the object named "SpawnPoint" and gets its Transform component. You could have made spawnPoint a public or serialized field and then assigned the object in the Inspector, but this shows another way to locate objects in your scene.

Note: Keep in mind that using GameObject.Find is slower at runtime than referencing a value set in the Inspector, so don't use this technique if you need to find the same object often, such as from within every execution of Update.
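
For comparison, the Inspector-based approach mentioned above might look like the following sketch. It isn't what this tutorial uses, but with it you would replace the private spawnPoint declaration, delete the GameObject.Find line, and drag SpawnPoint onto the exposed slot in the Inspector:

// Hypothetical alternative: expose the field so you can assign it in the Inspector.
[SerializeField]
private Transform spawnPoint;
// or, for this purpose, simply:
// public Transform spawnPoint;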

There are several ways you could handle detecting when an object leaves the view, but the easiest for Zombie Conga is to implement OnBecameInvisible. Unity calls this method whenever an object ceases to be visible to a camera.

Add the following implementation of OnBecameInvisible in EnemyController.cs:

void OnBecameInvisible()
{
  float yMax = Camera.main.orthographicSize - 0.5f;
  transform.position = new Vector3( spawnPoint.position.x, 
                                    Random.Range(-yMax, yMax), 
                                    transform.position.z );
}

This code calculates a new position for the enemy. It always uses the x value from spawnPoint, but it chooses a y value by picking a random point within the available vertical space. The random vertical position keeps the player guessing.

Notice how the first line subtracts 0.5 when calculating the maximum y value. This ensures the enemy doesn't spawn too close to the top or bottom of the screen. If you don't understand why the y value is chosen between negative and positive yMax, remember that the center of the screen is at (0,0), meaning there is Camera.main.orthographicSize vertical space available in each direction.

Save the file (File\Save) and go back to Unity.

Important Note: When you play scenes in Unity, it is not exactly the same as running a real build on a target platform. One major difference is that the various Scene and Game views that you have visible in Unity are all considered when determining an object's visibility. In other words, if the enemy moves off screen in the Game view but is still visible in the Scene view, Unity will not call OnBecameInvisible.

This means that if you want this respawning logic to work, you need to make sure you only see Zombie Conga in the Game view while playing the scene. You can still have a Scene tab open, but it has to be behind another tab so that it isn't rendering.

Play the scene inside Unity. After the enemy moves off the left side of the screen, you should see a new one walk in on the right side, as you can see below:

enemy_respawning

Note: While testing, Unity will probably log the following error when you stop the scene:

unity_null_ref_error

This seems to occur because the camera gets removed from the scene when you stop playing it and the enemy receives one last notification that it became invisible. I'm not sure if it's an error that will occur in a real build or if it only happens when testing in the editor, but you can fix it by adding the following line at the beginning of OnBecameInvisible in EnemyController.cs:

if (Camera.main == null)
  return;

This simply aborts the method if the camera is not present.

The above video shows one small problem: the old lady ran off with your cat! You saw a similar problem earlier between the zombie and the enemy. Try to figure it out yourself, and then check the following Spoiler to see if you're right.

Solution Inside: Need help keeping a cat away from an old lady?

Now that you know that respawning works, it's time to point out why you went through all that screen-relative positioning in the first place. Stop the scene and then change the Game view's aspect ratio by choosing one of the presets in the Aspect drop down menu in the Game view's control bar, shown below:

aspect_menu

Play the scene again and notice that the enemy spends the same amount of time off screen, no matter which size you chose. Repeat the test with different aspect ratios until you're satisfied it works.

The following images show the spawn point's location when running with a few different aspect ratios. The white rectangles indicate the area viewable by the camera:

5:4 Aspect Ratio

3:2 Aspect Ratio

iPhone 5's Aspect Ratio

Note: Even though Unity allows you to change the Game view's aspect ratio while playing the scene, you must change the aspect ratio while the scene isn't running because ScreenRelativePosition.cs sets SpawnPoint's location only once, when the scene first starts.

Keeping the Zombie On Screen

Now that the enemy moves and respects the world bounds, you should probably get the zombie to do the same. You could do all sorts of fancy things with physics and colliders to keep the zombie in the proper area, but sometimes the easiest thing to do is to use a few if checks and some basic math.

Open ZombieController.cs in MonoDevelop and add the following method:

private void EnforceBounds()
{
  // 1
  Vector3 newPosition = transform.position; 
  Camera mainCamera = Camera.main;
  Vector3 cameraPosition = mainCamera.transform.position;
 
  // 2
  float xDist = mainCamera.aspect * mainCamera.orthographicSize; 
  float xMax = cameraPosition.x + xDist;
  float xMin = cameraPosition.x - xDist;
 
  // 3
  if ( newPosition.x < xMin || newPosition.x > xMax ) {
    newPosition.x = Mathf.Clamp( newPosition.x, xMin, xMax );
    moveDirection.x = -moveDirection.x;
  }
  // TODO vertical bounds
 
  // 4
  transform.position = newPosition;
}

The code above only handles the horizontal bounds. You'll add code to handle the vertical bounds once you know this works. Here is what it does:

  1. It copies the zombie's current position, ensuring the zombie maintains whatever z position you set in the editor. It also gets a reference to the scene's main camera and copies the camera's position, both of which will be necessary to calculate the zombie's new position.
  2. It calculates the x values in world coordinates for the edges of the screen. It does so by first calculating the distance from the center of the screen to one of its edges, and then adding that to the camera's x position. This means that if the camera is at (50, 0), and xDist is 4.8, the right side of the screen has an x position of 54.8 in world coordinates.
  3. This checks to see if the zombie's current position (stored in newPosition, confusingly enough) exceeds the view's horizontal limits. If so, it sets newPosition's x value to the boundary value and reverses the x component of moveDirection.

    You haven't seen it since part 1 of this series, but moveDirection is the vector that Update uses to advance the zombie's position each frame, so reversing its x component will start it moving in the opposite direction, effectively bouncing it off the edge of the screen.
  4. Finally, it updates the zombie's position with newPosition. This will be the same position the zombie had when you called this method if the zombie was already within its allowable space.

Note: If you want to make the zombie turn slightly before it reaches the edge of the screen, simply reduce the size of xDist.
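
For example, the following optional tweak (not part of the tutorial's version of the script) makes the zombie turn around half a unit before the actual screen edge:

// Optional tweak: shrink the allowed horizontal range by half a unit on each side.
float xDist = mainCamera.aspect * mainCamera.orthographicSize - 0.5f;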

Now add a call to EnforceBounds at the end of Update:

EnforceBounds();

Save the file (File\Save) and go back to Unity.

Play the scene and try to walk the zombie off the left and right sides of the beach. He should turn right around each time rather than walking off into the great unknown.

zombie_horizontal_limits

With the left and right constraints in place, try limiting the zombie's vertical movement yourself. Put your code inside EnforceBounds in ZombieController.cs, just after the comment that reads // TODO vertical bounds. It should be similar to what you wrote for the horizontal bounds, but even simpler. The following Spoiler has a solution.

Solution Inside: Zombie getting away from you?
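
If you'd like to compare notes, here is one possible implementation. It replaces the // TODO vertical bounds comment in EnforceBounds and mirrors the horizontal checks, minus the aspect ratio multiplication:

  // Vertical bounds (one possible solution)
  float yMax = cameraPosition.y + mainCamera.orthographicSize;
  float yMin = cameraPosition.y - mainCamera.orthographicSize;

  if ( newPosition.y < yMin || newPosition.y > yMax ) {
    newPosition.y = Mathf.Clamp( newPosition.y, yMin, yMax );
    moveDirection.y = -moveDirection.y;
  }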

Run again and now your zombie stays in its sandbox.

zombie_contained

At this point you might want a little break, so you'll finish up Zombie Conga in the fifth and final part of this series!

Where to Go From Here?

In this part of the tutorial, you learned how to use Unity's 2D physics engine to detect collisions, and you saw how you can handle some issues that arise when trying to support different aspect ratios. You can find a copy of the project up to this point here.

The main thing you should do next is the next part of this tutorial, of course! But if you want more information about 2D physics in Unity, take a look at Unity's Component References for its 2D Components and Unity's RigidBody2D and Physics 2D Overview videos.

I hope you're enjoying this tutorial series. If you've made it this far, you've got a lot of free time on your hands. Also, you're really close to the end, so don't stop now!

Please ask questions or leave remarks in the comments section. Thanks for reading!

The post Unity 4.3 2D Tutorial: Physics and Screen Sizes appeared first on Ray Wenderlich.

Video Tutorial: Using LLDB in iOS Part 2: Using Expressions
