Channel: Kodeco | High quality programming tutorials: iOS, Android, Swift, Kotlin, Unity, and more

Opportunity: Lead WordPress Developer at raywenderlich.com

Help us level up this site!

As you might know, this site is a community effort of over 100 part-time authors and editors, who team up to write high quality tutorials and books.

But you might not know that we also have 6 full-time employees. Our job is to keep things running smoothly and create video tutorials.

Today, we are happy to announce an opportunity for a new full-time position: Lead WordPress Developer at raywenderlich.com!

If you are an advanced WordPress developer or know someone who is, keep reading to find out what’s involved!

What’s Involved

We are seeking a highly motivated and talented individual to join our team as our Lead WordPress Developer. This is a key leadership role where you will get the chance to take an already popular site to the next level – with tons of readers thanking you for each improvement you make! :]

Better yet – our team is distributed, so you can work remotely from the comfort of your own home – from anywhere in the world.

The ideal candidate will possess expert-level knowledge of WordPress, including advanced theme and custom plugin development, as well as a religious devotion to best practices and clean code standards.

Here’s what we’re looking for:

Technical Skills

  • Passion for WordPress development. We believe that being passionate about what you’re working on is the key to success.
  • 2+ years WordPress development experience. You should have experience building custom themes and plugins, and you should have an advanced knowledge of the WordPress API.
  • 5+ years PHP + MySQL (LAMP stack), JavaScript (jQuery), HTML5, and CSS3. This is the bread and butter!
  • 2+ years working on high traffic sites. As you’ll be working on a popular site with high traffic, you should have experience developing web sites that serve 3 million+ page views per month, and should be able to diagnose issues related to high traffic and tune the site so that it performs well under heavy load.
  • Git experience. This is our source control of choice :]
  • System administration experience. You should be familiar with command-line Linux system administration (setting up servers from scratch, configuring firewalls, security best practices, etc), and (ideally) experience working with hosting options such as WPEngine or Amazon web services.

Non-technical Skills

  • Interest in web design. You should have an interest in modern web design principles – for example, you likely have a personal site that looks great.
  • Self-driven work ethic. You need to be a self-starter who loves taking initiative and seeing things through to completion, with loose (at best) direction.
  • Detail oriented and highly organized. We are a small company with a lot to do, so you should be able to work on and prioritize multiple tasks at any given time, and stay very organized.
  • Leadership skills. We are looking for you to be a leader in this small company, and plan on building a team around you.
  • Great communication skills. We’re a distributed team, so frequent and clear written communication is a must.
  • Friendly personality. We pride ourselves on being a friendly and open bunch, and this is especially important because you will be frequently interacting with customers and external team members.
  • Curiosity and the desire to learn. Our business is changing and growing fast – who knows what will be the skills of tomorrow?
  • Bonus items! Bonus if you like Apple products, video games, board games, and/or zombies – you’ll fit right in! :]
If you like zombie cats, even better!

About Razeware

Razeware is the company behind this site. We’re passionate about learning new development skills, and teaching others what we’ve learned through high quality, hands-on tutorials.

We are a small company that has been profitable for over 5 years. Currently we have just 6 full-time employees, so you’d be getting in on the ground floor.

Razeware February 2015.

We’re also a 100% distributed company: everyone works from their own home or office, whether it be Maryland, Virginia, Connecticut, Canada, or England. We make heavy use of Trello, Slack, Google Hangouts, and email for team communication.

We have a ton of great benefits, such as:

  • Remote working!
  • Health insurance and 401K match (US only)
  • Generous PTO – plus 1-week company-wide Christmas vacation
  • Competitive salary
  • Free trip to our annual conference – RWDevCon
  • Professional development (conferences & training)

Our site is helping millions of developers across the world make apps, further their careers, and fulfill lifelong dreams. If you’re passionate about helping our small but highly motivated team take this to the next level, this is the job for you! :]

How To Apply

To apply, please email a resume and cover letter to ray@razeware.com.

Not a web developer, but still interested in joining our team? We have another cool job opportunity that we’ll be posting about soon – stay tuned.

We look forward to hearing from you! :]

Opportunity: Lead WordPress Developer at raywenderlich.com is a post from: Ray Wenderlich




Introducing React Native: Building Apps with JavaScript

React Native

React Native: Build native iOS applications with JavaScript.

A few months ago Facebook announced React Native, a framework that lets you build native iOS applications with JavaScript – and the official repository just came out of beta today!

People have been using JavaScript and HTML5 to create iOS applications using the PhoneGap wrapper for a number of years, so is React Native really such a big deal?

React Native is a big deal, and people are getting very excited about it for two main reasons:

  1. With React Native your application logic is written and runs in JavaScript, whereas your application UI is fully native; therefore you have none of the compromises typically associated with HTML5 UI.
  2. React introduces a novel, radical and highly functional approach to constructing user interfaces. In brief, the application UI is simply expressed as a function of the current application state.

The key point is that React Native primarily aims to bring the power of the React programming model to mobile app development. It is not trying to be a cross-platform, write-once-run-anywhere tool; rather, it aims to be learn-once, write-anywhere, which is an important distinction. This tutorial covers only iOS, but once you’ve learned the concepts here, you could apply that knowledge to building an Android app very quickly.

If you have only ever written applications in Objective-C or Swift, you might not be particularly excited about the prospect of using JavaScript instead. Although, as a Swift developer, the second point above should pique your interest!

Through Swift, you’ve no doubt been learning new and more functional ways to encode algorithms, and techniques that encourage transformation and immutability. However, the way in which you construct your UI is very much the same as it was when developing with Objective-C: it’s still UIKit-based and imperative.

Through intriguing concepts such as a virtual DOM and reconciliation, React brings functional programming directly to the UI layer.

This tutorial takes you through the process of building an application for searching UK property listings:

PropertyFinder

If you’ve never written any JavaScript before, fear not; this tutorial leads you through each and every step of the coding. React uses CSS properties for styling which are generally easy to read and understand, but if you need to, you can always refer to the excellent Mozilla Developer Network reference.

Want to learn more? Read on!

Getting Started

The React Native framework is available via GitHub. You can grab the framework either by cloning the repository with git, or by downloading it as a zip file. Once you have the React Native framework locally, there are a few other prerequisites to take care of before you can start coding.

React Native uses Node.js, a JavaScript runtime, to build your JavaScript code. If you don’t already have Node.js installed, it’s time to get it!

First install Homebrew using the instructions on the Homebrew website, then install Node.js by executing the following in a Terminal window:

brew install node

Next, use Homebrew to install watchman, a file watcher from Facebook:

brew install watchman

This is used by React Native to figure out when your code changes and rebuild accordingly. It’s like having Xcode do a build each time you save your file.

The React Native code has a number of dependencies that you need to satisfy before you can run it. Open a Terminal window in your React Native folder and execute the following:

npm install

This uses the Node Package Manager to fetch the project dependencies; it’s similar in function to CocoaPods or Carthage. Once this command has run successfully, you’ll find a node_modules folder has been created with the various external dependencies.

The final step is to start the development server. Within the same Terminal window as the previous step, execute the following:

npm start

On executing the above, you will see the following:

$ npm start
 
> react-native@0.1.0 start /Users/colineberhardt/Projects/react-native
> ./packager/packager.sh
 
 
 ===============================================================
 |  Running packager on port 8081.       
 |  Keep this packager running while developing on any JS         
 |  projects. Feel free to close this tab and run your own      
 |  packager instance if you prefer.                              
 |                                                              
 |     https://github.com/facebook/react-native                 
 |                                                              
 ===============================================================
 
 
React packager ready.

That’s it, you’re good to get started! Leave the script running in the terminal window as you continue with the tutorial.

At this point, I’d recommend trying one of the React Native example apps to test your setup. Open the project from the react-native/Examples/Movies folder in Xcode, then build and run it and check that you can launch the Movies application without issue.

Note: One final thing before you get too deep in the code — you’re going to be writing a lot of JavaScript in this tutorial, and Xcode is certainly not the best tool for the job! I use Sublime Text, which is a cheap and versatile editor, but Atom, Brackets, or any other lightweight editor will do the job.

Hello React Native

Before getting started on the property search application, you’re going to create a very simple Hello World app. You’ll be introduced to the various components and concepts as you go along.

Download and unzip the starter project for this tutorial to the react-native/Examples folder. Once unzipped, open the PropertyFinder project within Xcode. Don’t build and run just yet; you’re going to have to write some JavaScript first!

Open PropertyFinderApp.js in your text editor of choice and add the following to the start of the file:

'use strict';

This enables Strict Mode, which adds improved error handling and disables some less-than-ideal JavaScript language features. In simple terms, it makes JavaScript better!

Note: For a more detailed overview of Strict Mode, I’d encourage you to read Jon Resig’s article “ECMAScript 5 Strict Mode, JSON, and More”.
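A quick way to see strict mode in action is a standalone script in Node.js; sloppySum below is a throwaway name for this illustration only:

```javascript
'use strict';

// In strict mode, assigning to an undeclared variable throws a
// ReferenceError instead of silently creating a global.
function sloppySum(a, b) {
  try {
    total = a + b; // 'total' was never declared with var
    return 'created a global: ' + total;
  } catch (e) {
    return e instanceof ReferenceError ? 'ReferenceError' : 'other error';
  }
}
```

Remove the 'use strict'; line and the same function quietly creates a global total variable, exactly the kind of silent bug strict mode is designed to surface.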

Next, add the following line:

var React = require('react-native');

This loads the react-native module and assigns it to React. React Native uses the same module loading technology as Node.js with the require function, which is roughly equivalent to linking and importing libraries in Swift.

Note: For more information about JavaScript modules I’d recommend reading this article by Addy Osmani on writing modular JavaScript.

Just below the require statement, add the following:

var styles = React.StyleSheet.create({
  text: {
    color: 'black',
    backgroundColor: 'white',
    fontSize: 30,
    margin: 80
  }
});

This defines a single style that you will shortly apply to your “Hello World” text. If you’ve done any web development before, you’ll probably recognize those property names; React Native styling borrows its property names from Cascading Style Sheets (CSS), although the styles themselves are plain JavaScript objects.

Now for the app itself! Still working in the same file, add the following code just beneath the style declaration above:

class PropertyFinderApp extends React.Component {
  render() {
    return React.createElement(React.Text, {style: styles.text}, "Hello World!");
  }
}

Yes, that’s a JavaScript class!

Classes were introduced in ECMAScript 6 (ES6). Although JavaScript is constantly evolving, web developers are restricted in what they can use due to the need to maintain compatibility with older browsers. React Native runs within JavaScriptCore; as a result, you can use modern language features without worrying about supporting legacy browsers.

Note: If you are a web developer, I’d thoroughly encourage you to use modern JavaScript, and then convert to older JavaScript using tools such as Babel to maintain support for older and incompatible browsers.

PropertyFinderApp extends React.Component, the basic building block of the React UI. Components contain immutable properties, mutable state variables and expose a method for rendering. Your current application is quite simple and only requires a render method.

React Native components are not UIKit classes; instead they are a lightweight equivalent. The framework takes care of transforming the tree of React components into the required native UI.

Finally, add the following to the end of the file:

React.AppRegistry.registerComponent('PropertyFinderApp', function() { return PropertyFinderApp });

AppRegistry defines the entry point to the application and provides the root component.

Save your changes to PropertyFinderApp.js and then return to Xcode. Ensure the PropertyFinder scheme is selected with one of the iPhone simulators, and then build and run your project. After a few seconds you’ll see your “Hello World” app in action:

react-helloworld

That’s a JavaScript application running in the simulator, rendering a native UI, without a browser in sight!

Still don’t trust me? :] Verify it for yourself: within Xcode, select Debug\View Debugging\Capture View Hierarchy and take a look at the native view hierarchy. You will see no UIWebView instances anywhere! Just a proper, real, view! Neat :]!

A native view hierarchy

Curious as to how it all works? In Xcode open AppDelegate.m and locate application:didFinishLaunchingWithOptions:. This method constructs a RCTRootView, which loads the JavaScript application and renders the resultant view.

When the application starts, the RCTRootView loads the application from the following URL:

http://localhost:8081/Examples/PropertyFinder/PropertyFinderApp.includeRequire.runModule.bundle

Recall at the beginning of the tutorial when you executed npm start within a terminal window; this starts a packager and server that handles the above request.

Open this URL in Safari; you’ll see the JavaScript code for your app. You should be able to find your “Hello World” code embedded among the React Native framework.

When your app starts, this code is loaded and executed by the JavaScriptCore framework. In the case of your application, it loads the PropertyFinderApp component, then constructs the native UIKit view. You’ll learn a bit more about this later in the tutorial.

Hello World JSX

Your current application uses React.createElement to construct the simple UI for your application, which React turns into the native equivalent. While your JavaScript code is perfectly readable in its present form, a more complex UI with nested elements would rapidly become quite a mess.

Make sure the app is still running, then return to your text editor to edit PropertyFinderApp.js. Modify the return statement of your component’s render method as follows:

return <React.Text style={styles.text}>Hello World (Again)</React.Text>;

This is JSX, or JavaScript syntax extension, which mixes HTML-like syntax directly in your JavaScript code; if you’re already a web developer, this should feel rather familiar. You’ll use JSX throughout this article.

Save your changes to PropertyFinderApp.js and return to the simulator. Press Cmd+R, and you’ll see your application refresh to display the updated message “Hello World (Again)”.

Re-running a React Native application is really as simple as refreshing a web browser! :]

Since you’ll be working with the same set of JavaScript, you can leave the app running and simply refresh the app in this fashion after modifying and saving PropertyFinderApp.js.

Note: If you are feeling inquisitive, take a look at your ‘bundle’ in the browser to see what the JSX is transformed into.

Okay, enough of this “Hello World” fun; it’s time to build the real application!

Adding Navigation

The Property Finder app uses the standard stack-based navigation experience provided by UIKit’s navigation controller. It’s time to add this behavior.

Within PropertyFinderApp.js, rename the PropertyFinderApp class to HelloWorld:

class HelloWorld extends React.Component {

You’ll keep the “Hello World” text display around a little longer, but it won’t be the root component of your app anymore.

Next add the following class below the HelloWorld component:

class PropertyFinderApp extends React.Component {
  render() {
    return (
      <React.NavigatorIOS
        style={styles.container}
        initialRoute={{
          title: 'Property Finder',
          component: HelloWorld,
        }}/>
    );
  }
}

This constructs a navigation controller, applies a style and sets the initial route to the HelloWorld component. In web development, routing is a technique for defining the navigation structure of an application, where pages — or routes — are mapped to URLs.

Within the same file, update the styles declaration to include the container style as shown below:

var styles = React.StyleSheet.create({
  text: {
    color: 'black',
    backgroundColor: 'white',
    fontSize: 30,
    margin: 80
  },
  container: {
    flex: 1
  }
});

You’ll find out what flex: 1 means a bit later in this tutorial.

Head back to the simulator and press Cmd+R to see your new UI in action:

react-helloworldagain

There’s the navigation controller with its root view, which is currently the “Hello World” text. Excellent — you now have the basic navigation structure for your application in place. It’s time to add the ‘real’ UI!

Building the Search Page

Add a new file to the project named SearchPage.js and place it in the same folder as PropertyFinderApp.js. Add the following code to this file:

'use strict';
 
var React = require('react-native');
var {
  StyleSheet,
  Text,
  TextInput,
  View,
  TouchableHighlight,
  ActivityIndicatorIOS,
  Image,
  Component
} = React;

You’ve already seen the strict mode and the react-native import before, but the assignment statement that follows it is something new.

This is a destructuring assignment, which lets you extract multiple object properties and assign them to variables using a single statement. As a result, the rest of your code can drop the React prefix; for example, you can refer directly to StyleSheet rather than React.StyleSheet. Destructuring is also useful for manipulating arrays and is well worth learning more about.
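Here is the same idea in miniature, runnable in plain Node.js; ReactLike is a stand-in object for this illustration, not the real react-native module:

```javascript
'use strict';

// A stand-in object shaped like a small slice of react-native
var ReactLike = {
  StyleSheet: { create: function(styles) { return styles; } },
  Text: 'Text',
  View: 'View'
};

// Destructuring extracts several properties in a single statement...
var { StyleSheet, Text, View } = ReactLike;

// ...so the rest of the code can drop the prefix
var styles = StyleSheet.create({ container: { flex: 1 } });
```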

Still working in the same file, SearchPage.js, add the following style:

var styles = StyleSheet.create({
  description: {
    marginBottom: 20,
    fontSize: 18,
    textAlign: 'center',
    color: '#656565'
  },
  container: {
    padding: 30,
    marginTop: 65,
    alignItems: 'center'
  }
});

Again, these are standard CSS properties. Setting up styles like this is less visual than using Interface Builder, but it’s better than setting view properties one by one in your viewDidLoad() methods! :]

Add the component itself just below the styles you added above:

class SearchPage extends Component {
  render() {
    return (
      <View style={styles.container}>
        <Text style={styles.description}>
          Search for houses to buy!
        </Text>
        <Text style={styles.description}>
          Search by place-name, postcode or search near your location.
        </Text>
      </View>
    );
  }
}

render is a great demonstration of JSX and the structure it provides. Along with the style, you can very easily visualize the UI constructed by this component: a container with two text labels.

Finally, add the following to the end of the file:

module.exports = SearchPage;

This exports the SearchPage class, which permits its use in other files.

The next step is to update the application routing in order to make this the initial route.

Open PropertyFinderApp.js and add the following just after the current require import near the top of the file:

var SearchPage = require('./SearchPage');

Within the render function of the PropertyFinderApp class, update initialRoute to reference the newly added page as shown below:

component: SearchPage

At this point you can remove the HelloWorld class and its associated style, if you like. You won’t be needing that code any longer.

Return to the simulator, hit Cmd+R and check out the new UI:

react-searchstarter

This is using the new component, SearchPage, which you added.

Styling with Flexbox

So far, you’ve seen basic CSS properties that deal with margins, paddings and color. However, you might not be familiar with flexbox, a more recent addition to the CSS specification that is very useful for application UI layout.

React Native uses the css-layout library, a JavaScript implementation of the flexbox standard which is transpiled into C (for iOS) and Java (for Android).

It’s great that Facebook has created this as a separate project that targets multiple languages, since this allows for novel applications, such as applying flexbox layout to SVG (yes, that was written by me … and no, I don’t sleep much!).

In your app, the container has the default flow direction of column, which means its children are arranged in a vertical stack, like so:

FlexStack

This is termed the main axis, and can run either vertically or horizontally.

The vertical position of each child is determined from a combination of its margin, height and padding. The container also sets the alignItems property to center, which determines the placement of children on the cross axis. In this case, it results in center-aligned text.

It’s time to add the input field and buttons. Open SearchPage.js and insert the following just after the closing tag of the second Text element:

<View style={styles.flowRight}>
  <TextInput
    style={styles.searchInput}
    placeholder='Search via name or postcode'/>
  <TouchableHighlight style={styles.button}
      underlayColor='#99d9f4'>
    <Text style={styles.buttonText}>Go</Text>
  </TouchableHighlight>
</View>
<TouchableHighlight style={styles.button}
    underlayColor='#99d9f4'>
  <Text style={styles.buttonText}>Location</Text>
</TouchableHighlight>

You’ve added two top-level views here: one to hold a text input and a button, and one with only a button. You’ll read about how these elements are styled in a little bit.

Next, add the accompanying styles to your styles definition:

flowRight: {
  flexDirection: 'row',
  alignItems: 'center',
  alignSelf: 'stretch'
},
buttonText: {
  fontSize: 18,
  color: 'white',
  alignSelf: 'center'
},
button: {
  height: 36,
  flex: 1,
  flexDirection: 'row',
  backgroundColor: '#48BBEC',
  borderColor: '#48BBEC',
  borderWidth: 1,
  borderRadius: 8,
  marginBottom: 10,
  alignSelf: 'stretch',
  justifyContent: 'center'
},
searchInput: {
  height: 36,
  padding: 4,
  marginRight: 5,
  flex: 4,
  fontSize: 18,
  borderWidth: 1,
  borderColor: '#48BBEC',
  borderRadius: 8,
  color: '#48BBEC'
}

Take care with the formatting; each style property should be separated by a comma. That means you’ll need to add a trailing comma after the container selector.

These styles are used by the text input and buttons which you just added.

Return to the simulator and press Cmd+R to see the updated UI:

react-searchpageinput

The text field and ‘Go’ button are on the same row, so you’ve wrapped them in a container that has a flexDirection: 'row' style. Rather than explicitly specifying the widths of the input field and button, you give each a flex value instead. The text field is styled with flex: 4, while the button has flex: 1; this results in their widths having a 4:1 ratio.
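A back-of-the-envelope sketch of how flex values divide space along the main axis (simplified: it ignores margins, padding and borders, and flexWidths is a hypothetical helper, not a React Native API):

```javascript
'use strict';

// Each child receives (its flex / total flex) of the available width
function flexWidths(totalWidth, flexValues) {
  var totalFlex = flexValues.reduce(function(sum, f) { return sum + f; }, 0);
  return flexValues.map(function(f) { return totalWidth * f / totalFlex; });
}

// A 300-point row split 4:1 between the text field and the button
var widths = flexWidths(300, [4, 1]);
```

The upshot is that the layout adapts to any screen width while preserving the 4:1 ratio, which is why no explicit widths are needed.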

You might have noticed that your buttons…aren’t actually buttons! :] With UIKit, buttons are little more than labels that can be tapped, and as a result the React Native team decided it was easier to construct buttons directly in JavaScript. The buttons in your app use TouchableHighlight, a React Native component that becomes transparent and reveals the underlay colour when tapped.

The final step to complete the search screen of the application is to add the house graphic. Add the following beneath the TouchableHighlight component for the location button:

<Image source={require('image!house')} style={styles.image}/>

Now, add the image’s corresponding style to the end of the style list, remembering to add a trailing comma to the previous style:

image: {
  width: 217,
  height: 138
}

The require('image!house') statement is used to reference an image located within your application’s asset catalogue. Within Xcode, if you open up Images.xcassets you will find the ‘house’ icon that the above code refers to.

Returning to the simulator, hit Cmd+R and admire your new UI:

react-searchpagehouse

Note: If you don’t see the house image at this point, and instead see a placeholder saying that “image!house” cannot be found, try restarting the packager (i.e. the npm start command you ran in the terminal).

Your current app looks good, but it’s somewhat lacking in functionality. Your task now is to add some state to your app and perform some actions.

Adding Component State

Each React component has its own state object, which is used as a key-value store. Before a component is rendered you must set the initial state.

Within SearchPage.js, add the following code to the SearchPage class, just before render():

constructor(props) {
  super(props);
  this.state = {
    searchString: 'london'
  };
}

Your component now has a state variable, with searchString set to an initial value of london.

Time to make use of this component state. Within render, change the TextInput element to the following:

<TextInput
  style={styles.searchInput}
  value={this.state.searchString}
  placeholder='Search via name or postcode'/>

This sets the TextInput value property — that is, the text displayed to the user — to the current value of the searchString state variable. This takes care of setting the initial state, but what happens when the user edits this text?

The first step is to create a method that acts as an event handler. Within the SearchPage class add the following method:

onSearchTextChanged(event) {
  console.log('onSearchTextChanged');
  this.setState({ searchString: event.nativeEvent.text });
  console.log(this.state.searchString);
}

This takes the value from the event’s text property and uses it to update the component’s state. It also adds some logging code that will make sense shortly.

To wire up this method so it gets called when the text changes, return to the TextInput field within the render method and add an onChange property so the tag looks like the following:

<TextInput
  style={styles.searchInput}
  value={this.state.searchString}
  onChange={this.onSearchTextChanged.bind(this)}
  placeholder='Search via name or postcode'/>

Whenever the user changes the text, you invoke the function supplied to onChange; in this case, it’s onSearchTextChanged.

Note: You might be wondering what the bind(this) statement is for. JavaScript treats the this keyword a little differently than most other languages; its counterpart in Swift is self. The use of bind in this context ensures that inside the onSearchTextChanged method, this is a reference to the component instance. For more information, see the MDN page on this.
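The following standalone sketch shows why the bind is needed; Component here is a plain constructor function for illustration, not a real React class:

```javascript
'use strict';

function Component(name) {
  this.name = name;
}

Component.prototype.handler = function() {
  return this.name;
};

var page = new Component('SearchPage');

// Detach the method, as happens when you pass it as a callback:
// 'this' is lost, so calling it throws in strict mode.
var detached = page.handler;

// bind pins 'this' back to the instance, so the callback works:
var bound = page.handler.bind(page);
```

Calling bound() returns 'SearchPage', whereas calling detached() throws a TypeError because this is undefined inside the method.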

There’s one final step before you refresh your app again: add the following logging statement to the top of render(), just before return:

console.log('SearchPage.render');

You are about to learn something quite intriguing from these log statements! :]

Return to your simulator, then press Cmd+R. You should now see that the text input has an initial value of ‘london’ and that editing the text causes some statements to be logged to the Xcode console:

react-renderconsole

Looking at the screenshot above, the order of the logging statement seems a little odd:

  1. This is the initial call to render() to set up the view.
  2. You invoke onSearchTextChanged() when the text changes.
  3. You then update the component state to reflect the new input text, which triggers another render.
  4. onSearchTextChanged() then wraps things up by logging the new search string.

Whenever the app updates the state of any React component, this triggers an entire UI re-rendering that in turn calls render of all of your components. This is a great idea, as it entirely de-couples the rendering logic from the state changes that affect the UI.

With most other UI frameworks, it is either your responsibility to manually update the UI based on state changes, or use of some kind of binding framework which creates an implicit link between the application state and its UI representation; see, for example, my article on implementing the MVVM pattern with ReactiveCocoa.

With React, you no longer have to worry about which parts of the UI might be affected by a state change; your entire UI is simply expressed as a function of your application state.

At this point you’ve probably spotted a fundamental flaw in this concept. Yes, that’s right — performance!

Surely you can’t just throw away your entire UI and re-build it every time something changes? This is where React gets really smart. Each time the UI renders itself, it takes the view tree returned by your render methods, and reconciles — or diffs — it with the current UIKit view. The output of this reconciliation process is a simple list of updates that React needs to apply to the current view. That means only the things that have actually changed will re-render.
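To make the idea concrete, here is a deliberately simplified, flat-object sketch of reconciliation; the real algorithm diffs whole component trees, and diff here is an invented helper for illustration:

```javascript
'use strict';

// Compare two flat "view descriptions" and collect only what changed
function diff(oldTree, newTree) {
  var updates = [];
  Object.keys(newTree).forEach(function(key) {
    if (oldTree[key] !== newTree[key]) {
      updates.push({ key: key, value: newTree[key] });
    }
  });
  return updates;
}

// Editing the search text changes one property, so only one update
// needs to be applied to the native view hierarchy:
var before = { title: 'Property Finder', searchText: 'london' };
var after  = { title: 'Property Finder', searchText: 'londons' };
var updates = diff(before, after);
```

Even though render describes the whole UI every time, only the single changed property would be pushed to the native views.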

It’s amazing to see the novel concepts that make ReactJS so unique — the virtual DOM (an in-memory representation of the Document Object Model, the tree structure of a web document) and reconciliation — applied to an iOS app.

You can wrap your head around all that later; you still have some work to do in the app. Remove the logging code you just added above, since it’s no longer necessary and just adds cruft to the code.

Initiating a Search

In order to implement the search functionality, you need to handle the ‘Go’ button press, create a suitable API request, and provide a visual indication to the user that a query is in progress.

Within SearchPage.js, update the initial state within the constructor:

this.state = {
  searchString: 'london',
  isLoading: false
};

The new isLoading property will keep track of whether a query is in progress.

Add the following logic to the start of render:

var spinner = this.state.isLoading ?
  ( <ActivityIndicatorIOS
      hidden='true'
      size='large'/> ) :
  ( <View/>);

This is a ternary expression that either adds an activity indicator or an empty view, depending on the component’s isLoading state. Because the entire component is rendered each time, you are free to mix JSX and JavaScript logic.

Within the JSX that defines the search UI in return, add the following line below the Image:

{spinner}

Now, add the following property within the TouchableHighlight tag that renders the ‘Go’ text:

onPress={this.onSearchPressed.bind(this)}

Next, add the following methods to the SearchPage class:

_executeQuery(query) {
  console.log(query);
  this.setState({ isLoading: true });
}
 
onSearchPressed() {
  var query = urlForQueryAndPage('place_name', this.state.searchString, 1);
  this._executeQuery(query);
}

_executeQuery() will eventually run the query, but for now it simply logs a message to the console and sets isLoading appropriately so the UI can show the new state.

Note: JavaScript classes do not have access modifiers, so they have no concept of ‘private’. As a result you often see developers prefixing methods with an underscore to indicate that they should be considered private.

You’ll invoke onSearchPressed() and initiate the query when the ‘Go’ button is pressed.

Finally, add the following utility function just above the SearchPage class declaration:

function urlForQueryAndPage(key, value, pageNumber) {
  var data = {
      country: 'uk',
      pretty: '1',
      encoding: 'json',
      listing_type: 'buy',
      action: 'search_listings',
      page: pageNumber
  };
  data[key] = value;
 
  var querystring = Object.keys(data)
    .map(key => key + '=' + encodeURIComponent(data[key]))
    .join('&');
 
  return 'http://api.nestoria.co.uk/api?' + querystring;
}

This function doesn’t depend on SearchPage, so it’s implemented as a free function rather than a method. It first builds a data object from the query parameters, then transforms that data into the required string format: name=value pairs separated by ampersands. The => syntax is an arrow function, another recent addition to the JavaScript language that provides a succinct syntax for creating anonymous functions.
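If the arrow syntax is new to you, here is the same map/join pattern on its own, runnable in Node, using a couple of made-up parameters:

```javascript
var data = { country: 'uk', page: 1 };

// `key => ...` is shorthand for `function(key) { return ...; }`.
var querystring = Object.keys(data)
  .map(key => key + '=' + encodeURIComponent(data[key]))
  .join('&');

console.log(querystring); // → country=uk&page=1
```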

Head back to the simulator, press Cmd+R to reload the application and tap the ‘Go’ button. You’ll see the activity indicator spin; take a look at the Xcode console to see what it’s telling you:

SearchAcitivityIndicator

The activity indicator renders and the URL for the required query appears in the log. Copy and paste that URL into your browser to see the result. You’ll see a massive JSON object. Don’t worry — you don’t need to understand that! You’ll add code to parse that now.

Note: This app makes use of the Nestoria API for searching property listings. The JSON response coming back from the API is pretty straightforward, but you can have a look at the documentation for all the details on the expected request URL and response formats.

The next step is to make the request from within your application.

Performing an API Request

Still within SearchPage.js, update the initial state in the class constructor to add a message variable:

this.state = {
  searchString: 'london',
  isLoading: false,
  message: ''
};

Within render, add the following to the bottom of your UI:

<Text style={styles.description}>{this.state.message}</Text>

You’ll use this to display a range of messages to the user.

Within the SearchPage class, add the following code to the end of _executeQuery():

fetch(query)
  .then(response => response.json())
  .then(json => this._handleResponse(json.response))
  .catch(error => 
     this.setState({
      isLoading: false,
      message: 'Something bad happened ' + error
   }));

This makes use of the fetch function, which is part of the Web API and provides a vastly improved interface compared to XMLHttpRequest. The asynchronous response is returned as a promise, with the success path parsing the JSON and supplying it to a method that you are going to add next.
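If promises are new to you, this sketch traces the same success path with a stubbed fetch; the stub and its fake payload are stand-ins for illustration, not the real network call or the real Nestoria response:

```javascript
// A stand-in for fetch that resolves with a fake response object.
function fakeFetch(query) {
  return Promise.resolve({
    json: () => Promise.resolve({ response: { listings: ['a listing'] } })
  });
}

fakeFetch('http://example.com/api')
  .then(response => response.json()) // parse the body as JSON
  .then(json => console.log(json.response.listings.length + ' listing(s)'))
  .catch(error => console.log('Something bad happened ' + error));
// → 1 listing(s)
```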

The final step is to add the following function to SearchPage:

_handleResponse(response) {
  this.setState({ isLoading: false , message: '' });
  if (response.application_response_code.substr(0, 1) === '1') {
    console.log('Properties found: ' + response.listings.length);
  } else {
    this.setState({ message: 'Location not recognized; please try again.'});
  }
}

This clears isLoading and logs the number of properties found if the query was successful.
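The success check is simple string slicing; application_response_code is a string, and codes beginning with ‘1’ indicate success. A minimal sketch of the same check:

```javascript
// Codes like '100' succeed; codes like '202' do not.
var isSuccess = code => code.substr(0, 1) === '1';

console.log(isSuccess('100')); // → true
console.log(isSuccess('202')); // → false
```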

Note: Nestoria has a number of non-1** response codes that are potentially useful. For example, 202 and 200 return a list of best-guess locations. When you’ve finished building your app, why not try handling these and present a list of options to the user?

Save your work, then in the simulator press Cmd+R and try searching for ‘london’; you should see a log message saying that 20 properties were found. Next try a non-existent location, such as ‘narnia’ (*sniff*), and you’ll be greeted by the following message:

react-narnia

It’s time to see what those 20 properties in real places such as London look like!

Displaying the Results

Create a new file SearchResults.js, and add the following:

'use strict';
 
var React = require('react-native');
var {
  StyleSheet,
  Image, 
  View,
  TouchableHighlight,
  ListView,
  Text,
  Component
} = React;

Yes, that’s right — it’s a require statement that includes the react-native module, and a destructuring assignment. You’ve been paying attention, haven’t you? :]
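As a refresher, a destructuring assignment pulls several properties out of an object in one statement. A plain-JavaScript sketch, with made-up values standing in for the real module:

```javascript
var React = { StyleSheet: 'style API', ListView: 'list API' };

// Equivalent to:
//   var StyleSheet = React.StyleSheet;
//   var ListView = React.ListView;
var { StyleSheet, ListView } = React;

console.log(StyleSheet); // → style API
console.log(ListView);   // → list API
```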

Next add the component itself:

class SearchResults extends Component {
 
  constructor(props) {
    super(props);
    var dataSource = new ListView.DataSource(
      {rowHasChanged: (r1, r2) => r1.guid !== r2.guid});
    this.state = {
      dataSource: dataSource.cloneWithRows(this.props.listings)
    };
  }
 
  renderRow(rowData, sectionID, rowID) {
    return (
      <TouchableHighlight
          underlayColor='#dddddd'>
        <View>
          <Text>{rowData.title}</Text>
        </View>
      </TouchableHighlight>
    );
  }
 
  render() {
    return (
      <ListView
        dataSource={this.state.dataSource}
        renderRow={this.renderRow.bind(this)}/>
    );
  }
 
}

The above code makes use of a more specialized component — ListView — which displays rows of data within a scrolling container, similar to UITableView. You supply data to the ListView via a ListView.DataSource, and a function that supplies the UI for each row.

When constructing the data source, you provide a function that compares the identity of a pair of rows. The ListView uses this during the reconciliation process, in order to determine the changes in the list data. In this instance, the properties returned by the Nestoria API have a guid property, which is a suitable check for this purpose.
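As a standalone illustration of that comparator:

```javascript
// Rows count as 'changed' when their guids differ.
var rowHasChanged = (r1, r2) => r1.guid !== r2.guid;

console.log(rowHasChanged({ guid: 'abc' }, { guid: 'abc' })); // → false (same row)
console.log(rowHasChanged({ guid: 'abc' }, { guid: 'def' })); // → true (row changed)
```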

Now add the module export to the end of the file:

module.exports = SearchResults;

Add the following to SearchPage.js near the top of the file, underneath the require call for React:

var SearchResults = require('./SearchResults');

This allows you to use the newly added SearchResults class from within the SearchPage class.

Modify the current _handleResponse method by replacing the console.log statement with the following:

this.props.navigator.push({
  title: 'Results',
  component: SearchResults,
  passProps: {listings: response.listings}
});

The above code navigates to your newly added SearchResults component and passes in the listings from the API request. Using the push method ensures the search results are pushed onto the navigation stack, which means you’ll get a ‘Back’ button to return to the root.

Head back to the simulator, press Cmd+R and try a quick search. You’ll be greeted by a list of properties:

react-searchresults1

It’s great to see the property listings, but that list is a little drab. Time to liven things up a bit.

A Touch of Style

This React Native code should be starting to look familiar by now, so this tutorial is going to pick up the pace.

Add the following style definition just after the destructuring assignment in SearchResults.js:

var styles = StyleSheet.create({
  thumb: {
    width: 80,
    height: 80,
    marginRight: 10
  },
  textContainer: {
    flex: 1
  },
  separator: {
    height: 1,
    backgroundColor: '#dddddd'
  },
  price: {
    fontSize: 25,
    fontWeight: 'bold',
    color: '#48BBEC'
  },
  title: {
    fontSize: 20,
    color: '#656565'
  },
  rowContainer: {
    flexDirection: 'row',
    padding: 10
  }
});

This defines all the styles that you are going to use to render each row.

Next replace renderRow() with the following:

renderRow(rowData, sectionID, rowID) {
  var price = rowData.price_formatted.split(' ')[0];
 
  return (
    <TouchableHighlight onPress={() => this.rowPressed(rowData.guid)}
        underlayColor='#dddddd'>
      <View>
        <View style={styles.rowContainer}>
          <Image style={styles.thumb} source={{ uri: rowData.img_url }} />
          <View  style={styles.textContainer}>
            <Text style={styles.price}>£{price}</Text>
            <Text style={styles.title} 
                  numberOfLines={1}>{rowData.title}</Text>
          </View>
        </View>
        <View style={styles.separator}/>
      </View>
    </TouchableHighlight>
  );
}

This manipulates the returned price, which is in the format ‘300,000 GBP’, to remove the GBP suffix. Then it renders the row UI using techniques that you are by now quite familiar with. This time, the data for the thumbnail image is supplied via a URL, and React Native takes care of decoding this off the main thread.
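The price manipulation on its own is a one-liner:

```javascript
// Nestoria formats prices like '300,000 GBP'; splitting on the space
// and taking the first element drops the currency suffix.
var price = '300,000 GBP'.split(' ')[0];
console.log(price); // → 300,000
```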

Also, note the use of an arrow function in the onPress property of the TouchableHighlight component; this is used to capture the guid for the row.

The final step is to add this method to the class to handle the press:

rowPressed(propertyGuid) {
  var property = this.props.listings.filter(prop => prop.guid === propertyGuid)[0];
}

This method locates the property that was tapped by the user. It doesn’t do anything with it yet; you’ll fix that shortly. Right now, it’s time to admire your handiwork.
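The lookup is a standard Array.prototype.filter call; here it is against a couple of hypothetical listings:

```javascript
var listings = [
  { guid: 'abc', title: 'Flat in London' },
  { guid: 'def', title: 'House in Leeds' }
];

// filter returns an array of matches; [0] takes the first (and only) one.
var property = listings.filter(prop => prop.guid === 'def')[0];
console.log(property.title); // → House in Leeds
```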

Head back to the simulator, press Cmd+R and check out your results:

react-searchresults2

That looks a lot better — although it’s a wonder anyone can afford to live in London!

Time to add the final view to the application.

Property Details View

Add a new file PropertyView.js to the project, then add the following to the top of the file:

'use strict';
 
var React = require('react-native');
var {
  StyleSheet,
  Image, 
  View,
  Text,
  Component
} = React;

Surely you can do this in your sleep by now! :]

Next add the following styles:

var styles = StyleSheet.create({
  container: {
    marginTop: 65
  },
  heading: {
    backgroundColor: '#F8F8F8',
  },
  separator: {
    height: 1,
    backgroundColor: '#DDDDDD'
  },
  image: {
    width: 400,
    height: 300
  },
  price: {
    fontSize: 25,
    fontWeight: 'bold',
    margin: 5,
    color: '#48BBEC'
  },
  title: {
    fontSize: 20,
    margin: 5,
    color: '#656565'
  },
  description: {
    fontSize: 18,
    margin: 5,
    color: '#656565'
  }
});

Then add the component itself:

class PropertyView extends Component {
 
  render() {
    var property = this.props.property;
    var stats = property.bedroom_number + ' bed ' + property.property_type;
    if (property.bathroom_number) {
      stats += ', ' + property.bathroom_number + ' ' + (property.bathroom_number > 1
        ? 'bathrooms' : 'bathroom');
    }
 
    var price = property.price_formatted.split(' ')[0];
 
    return (
      <View style={styles.container}>
        <Image style={styles.image} 
            source={{uri: property.img_url}} />
        <View style={styles.heading}>
          <Text style={styles.price}>£{price}</Text>
          <Text style={styles.title}>{property.title}</Text>
          <View style={styles.separator}/>
        </View>
        <Text style={styles.description}>{stats}</Text>
        <Text style={styles.description}>{property.summary}</Text>
      </View>
    );
  }
}

The first part of render() performs some manipulation on the data. As is often the case, the data returned by the API is of mixed quality and often has missing fields. This code applies some simple logic to make the data a bit more presentable.
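For example, given a typical listing, the stats string comes out like this (sample values, not real API data):

```javascript
var property = { bedroom_number: 3, property_type: 'house', bathroom_number: 2 };

// Same logic as in render(): pluralize 'bathroom' and skip it when missing.
var stats = property.bedroom_number + ' bed ' + property.property_type;
if (property.bathroom_number) {
  stats += ', ' + property.bathroom_number + ' ' +
    (property.bathroom_number > 1 ? 'bathrooms' : 'bathroom');
}

console.log(stats); // → 3 bed house, 2 bathrooms
```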

The rest of render is quite straightforward; it’s simply a function of the immutable state of this component.

Finally add the following export to the end of the file:

module.exports = PropertyView;

Head back to SearchResults.js and add the require statement to the top of the file, just underneath the React require line:

var PropertyView = require('./PropertyView');

Next update rowPressed() to navigate to your newly added PropertyView:

rowPressed(propertyGuid) {
  var property = this.props.listings.filter(prop => prop.guid === propertyGuid)[0];
 
  this.props.navigator.push({
    title: "Property",
    component: PropertyView,
    passProps: {property: property}
  });
}

You know the drill: head back to the Simulator, press Cmd+R, and go all the way to the property details by running a search and tapping on a row:

react-property

Affordable living at its finest — that’s a fancy-looking pad!

Your app is almost complete; the final step is to allow users to search for nearby properties.

Geolocation Search

Within Xcode, open Info.plist and add a new key, by right clicking inside the editor and selecting Add Row. Use NSLocationWhenInUseUsageDescription as the key name and use the following value:

PropertyFinder would like to use your location to find nearby properties

Here’s how your plist file will look once you’ve added the new key:

Info.plist after adding key

This key details the prompt that you’ll present to the user to request access to their current location.

Open SearchPage.js, locate the TouchableHighlight that renders the ‘Location’ button and add the following property value:

onPress={this.onLocationPressed.bind(this)}

When you tap the button, you’ll invoke onLocationPressed — you’re going to add that next.

Add the following within the body of the SearchPage class:

onLocationPressed() {
  navigator.geolocation.getCurrentPosition(
    location => {
      var search = location.coords.latitude + ',' + location.coords.longitude;
      this.setState({ searchString: search });
      var query = urlForQueryAndPage('centre_point', search, 1);
      this._executeQuery(query);
    },
    error => {
      this.setState({
        message: 'There was a problem with obtaining your location: ' + error
      });
    });
}

The current position is retrieved via navigator.geolocation; this interface is defined by the Web API, so it should be familiar to anyone who has used location services within the browser. The React Native framework provides its own implementation of this API using the native iOS location services.

If the current position is successfully obtained, you invoke the first arrow function; this sends a query to Nestoria. If something goes wrong, you’ll display a basic error message instead.

Since you’ve made a change to the plist, you’ll need to relaunch the app to see your changes. No Cmd+R this time — sorry. Stop the app in Xcode, and build and run your project.

Before you use the location-based search, you need to specify a location that is covered by the Nestoria database. From the simulator menu, select Debug\Location\Custom Location … and enter a latitude of 55.02 and a longitude of -1.42, the coordinates of a rather nice seaside town in the North of England that I like to call home!

WhitleyBaySearch

Note from Ray: Location searching worked for some of us, but not for others (reporting an access denied error even though we gave access) – we’re not sure why at the moment, perhaps an issue with React Native? If anyone has the same issue and figures it out, please let us know.

It’s not quite as swank as London — but it’s a lot more affordable! :]

Where To Go From Here?

Congratulations on completing your first React Native application! You can download the final project for this tutorial to try out on your own.

If you come from the web world, you can see how easy it is to define your interface and navigation with JavaScript and React to get a fully native UI from your code. If you work mainly on native apps, I hope you’ve gained a feel for the benefits of React Native: fast app iteration, modern JavaScript and clear style rules with CSS.

Perhaps you might write your next app using this framework? Or then again, perhaps you will stick with Swift or Objective-C? Whichever path you take, I hope you have learned something new and interesting in this article, and can carry some of the principles forward into your next project.

If you have any questions or comments on this tutorial, feel free to join the discussion in the forums below!

Introducing React Native: Building Apps with JavaScript is a post from: Ray Wenderlich


Video Tutorial: Custom Collection View Layouts Part 8: Ultravisual – Featured Cell

What’s New in Unity 2D in Unity 5 Tutorial

Some of the new features introduced with Unity 5


Recently at this year’s GDC, Unity Technologies dropped an exciting bomb on the audience.

Not only was the latest and greatest version of Unity 5 released, but all the professional version features were now available for free!

Unity 5 also brought forth a wealth of new features, from the Physical Shaders to the Physics Engine and so on. In fact, we’ve covered the updates in our What’s New in Unity 5 article.

Unfortunately, the only way to find out about the changes to Unity 2D was to dig through extensive release notes. To make matters worse, some of the more subtle yet impactful changes were buried deep, so if your project failed to properly migrate, you were stuck with a broken game.

In this tutorial, you’re going to learn about the new features, and you’ll see why you should be at the edge of your seat over this update. As you work through the tutorial you’ll learn about the following:

  • Changes in Component Access
  • Physics Effectors
  • Adding Constant Force
  • Audio Mixer

You’ll learn about these features by completing a game. This game isn’t about saving the world; it’s about destroying it.

Note: This tutorial assumes you are familiar with the basics of making Unity 2D games, and want to get up-to-date with the latest changes. If you are new to Unity 2D, check out Chris LaPollo’s Getting Started with Unity 2D tutorial series first.

Getting Started

Before you delve into the nitty-gritty, download the starter project. Unzip it and open StarterProject\Assets\Scenes\Main.unity in Unity. You should see something that looks like the following:

Screen Shot 2015-03-11 at 2.54.11 PM

The scene is set for tragic and utter destruction; the camera is already set up and a starry night background is in place. The foreground contains buildings in pristine shape, and they are composed of a few sprites taken from opengameart.org.

How tranquil.

This particular set was produced by an artist named hc. Each of the buildings consists of individual sprites placed on a sorting layer named City. Each building also contains three smoking particle systems, so that as the building takes damage it emits more and more smoke.

And what better to inflict damage on your peaceful city than an aggressive attack from outer space?

You can’t have an alien invasion without a fear-inducing alien spacecraft, which has already been added to your project – the bug-like ship at the top of your screen. This is another asset from opengameart.org, created by C-TOY, called Spaceships Drakir.

The object of the game is to fly your spaceship over the buildings and bomb them into oblivion. After a few hits, a building will begin to smoke. As you continue to hit the building, more smoke will emit from it until the entire building crumbles.

Renovating downtown has never been more fun.


Sounds easy, eh?

Well, no one likes to just take a bombing and move on with life — unless it’s good news at a game developers’ conference — so the population of the city defends itself with plenty of anti-spaceship missiles.

To win, your ship must dodge all the missiles and destroy every last building. And when the city is nothing but a smoking mass of rubble, you can land at the ruins of city hall and announce that you’ve come in peace.

All of the particle systems and most of the necessary scripts have been written for you. Your job is to finish the game by using some of Unity 5’s latest and greatest features.

Use the Force!

Run the game and you’ll see that nothing much is happening. An alien ship hangs low over a bored skyline. It won’t even make the 11 o’clock news. Time to spice things up.

First, select Spaceship in the Hierarchy and add a New Script component to it. Make it a C# script named ShipController.

Remember how to add a new script?

Solution Inside SelectShow>

Double-click ShipController.cs to launch it in MonoDevelop. Just inside the opening brace of the class, add the following two variables:

private Rigidbody2D shipRigidbody;
private float force = 150.0f;

The shipRigidBody holds a reference to a Rigidbody 2D component and the force variable holds the value of how much force you’ll use to move the ship.

You might wonder why you need a variable to hold the Rigidbody 2D component. After all, in previous versions of Unity, all you needed to access the component was the accessor property rigidbody2D. If there was a rigidbody attached to the object, it would be populated; if there wasn’t, it’d be null.

Here’s the first big change in Unity 5.

In a nutshell, Unity developers felt these accessors were too inconsistent. Accessors were provided for some components but not for others. There was no logic to explain whether a component had an accessor or not.

More importantly, these accessors created dependencies throughout the various subsystems, so in order to keep development flexible, Unity junked all the accessors except for transforms. This means you now have to create variables for all your components except transforms.

Unfortunately, Unity still appears to reserve the older names, so if you do decide to name your variables after the accessors, you’ll get a warning that your local variable is hiding an inherited one.

ragecomic

To learn more about the reasoning behind this change, check out Unity’s excellent blog post about API Changes to automatic script updating.

Now, add the following to Start():

shipRigidbody = GetComponent<Rigidbody2D>();

Prepare yourself to do that frequently from now on.

Finally, add the following to Update():

if (Input.GetKeyDown(KeyCode.RightArrow)) {
    shipRigidbody.AddForce(new Vector2(force, 0));
}
if (Input.GetKeyDown(KeyCode.LeftArrow)) {
    shipRigidbody.AddForce(new Vector2(-force, 0));
}
if (Input.GetKeyDown(KeyCode.UpArrow)) {
    shipRigidbody.AddForce(new Vector2(0, force));
}
if (Input.GetKeyDown(KeyCode.DownArrow)) {
    shipRigidbody.AddForce(new Vector2(0, -force));
}

This gives your ship some rudimentary controls.

Note: While controlling the ship by applying force on each keypress will get the job done, in a real game you would provide a much tighter control mechanism, such as applying a constant force while a key is held down. You will soon read about constant force, but for the sake of this tutorial, we’ve kept it simple.

Run the scene. Your ship flies great except for when it passes beyond the edges of screen, and at that point it’s gone baby gone. Time to add some invisible walls.

no-boundry

None Shall Pass the Invisible Wall

Invisible walls are necessary to keep the player in the game, but at the same time they can also affect the gameplay, which leads to a laundry list of questions about how they should behave:

  • Should the wall stop all momentum?
  • Should it be flexible like a rubber wall that bounces the player back into play?
  • Or, should flying to the left bring the player out on the right?

For the sake of this tutorial, you’ll implement the middle option as it has a side effect of increasing the challenge.

Before Unity 5, walls required a whole lot of code to make the bounce both fluid and natural. With Unity 5, all you need to do is add a new component called an effector.

Right-click in the Hierarchy and from the drop-down, select Create Empty. Do this twice more so that you have three empty GameObjects.

Screen Shot 2015-03-12 at 12.42.06 PM

These walls will bounce the player back into the play field. You’ll create three of them: one for the top and two for the sides, which will keep the player in the play area. You could add one for the bottom of the screen, but the buildings will ultimately stop the player. A problem does occur once the player has destroyed all the buildings; at that point, the ship will fly through the ground. To fix this, you could add a box collider that explodes the player. Feel free to add it once you’ve completed the tutorial.

Select the first empty GameObject you created. Single-click its name in the Hierarchy and rename it Top Barrier. In the Inspector, set its Position to the following: X: 2.3 Y: 8.21 Z:0. For its Scale, assign the following values: X: 48.27 Y: 2.33 Z: 2.33.

These are the base units used by Unity. In 3D space, a single unit in Unity corresponds to one meter in reality. With Unity 2D, you can correspond units to actual pixels. To learn more about this, check out the section Fix Your Camera’s Projection in Chris LaPollo’s tutorial, Unity 4.3 2D Tutorial: Getting Started.

In this case, you are setting the position and scale of the area that will bounce the user back out.

Note: In 2D games, Unity ignores the Z scale and all colliders have infinite z-depth, so it doesn’t really matter what Z value you put here.

Next, in the Tag dropdown, select Add Tag.

Screen Shot 2015-03-12 at 6.27.45 PM

The Tags and Layers dialog should appear with the Tags section expanded. Click the plus sign, then add the name: Barrier. This will be important later in the tutorial when you work with collisions.

new_barrier_tag

Once you’ve added the new tag, select Top Barrier in the Hierarchy and from the Tag dropdown, select Barrier. It should look like the following:

Screen Shot 2015-03-13 at 11.13.30 AM

In the Hierarchy, select the second empty GameObject you created, probably named GameObject 1. Change its name to Left Barrier.

In the Inspector, set its Position to X: -14.68 Y: 1.23 Z: 0. and set its Scale to X: 2.33 Y: 30.17 Z: 2.33. From the Tag dropdown, select Barrier.

Select the third empty GameObject you created, most likely called GameObject 2, and rename it Right Barrier. In the Inspector, set its Position to X: 15.6 Y: 1.23 Z: 0 and its Scale to X: 2.33 Y: 30.17 Z: 2.33. From the Tag dropdown, select Barrier.

Congratulations! You’ve just erected barriers. Except, well, they don’t do much of anything. To bounce the player back, you’ll use an effector.

Effecting Things With Effectors

In Unity 5, an effector is a way to apply force to Sprites. You have four kinds of effectors available to you.

  • Surface Effector: Applies a force along the surface of a collider, allowing you to create things like moving surfaces.
  • Point Effector: Defines a point that can either attract or repel by using a force.
  • Area Effector: Applies directional forces within a specific area.
  • Platform Effector: Helps you create platformer-style platforms, complete with options for one-way collisions and settings to control the friction and bounciness of the platform’s edges.

Unity has provided an image that nicely highlights everything you can do with the effectors:

Here are the various effectors available to you.


For your barriers, you’ll use an area effector, so that when the ship flies into the effector it’ll essentially push the ship back out.

In order for the effector to work, the corresponding GameObject needs to have a collider attached to it. The collider must be a trigger, and it must be marked as Used By Effector. If those conditions aren’t met, the effector won’t work, but at least Unity will let you know.

Screen Shot 2015-03-12 at 2.12.34 PM

To do this, select Top Barrier in the Hierarchy, and first add a Box Collider 2D component to it. In the Inspector, check the collider’s Is Trigger and Used By Effector options. Next, add an Area Effector 2D to it. Your Top Barrier should look like the following:

Screen Shot 2015-03-12 at 6.39.24 PM

Note: You’ll notice that the area effector has a whole bunch of options, and I bet you’d like to learn about them briefly.

Collider Mask determines what layers will be affected when objects enter the area. For now, keep Everything selected.

Force Direction determines the direction of the force, in degrees. The value ranges from -359.99 to 359.99, which really just defines a circle traversed in either a clockwise or counter-clockwise direction. As the image below shows, zero points to the right and positive values move counter-clockwise along the circle; negative values move clockwise.

circle
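Conceptually, the degree value maps onto a direction vector via cosine and sine; this plain-JavaScript sketch (an illustration, not Unity code) shows why 270 points straight down:

```javascript
// Convert a Force Direction in degrees to a 2D unit direction vector.
function directionVector(degrees) {
  var radians = degrees * Math.PI / 180;
  return { x: Math.cos(radians), y: Math.sin(radians) };
}

var down = directionVector(270);
// down.x is approximately 0 and down.y is approximately -1,
// so a 270-degree force pushes objects downwards.
```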

Force Magnitude allows you to set the strength of this area’s force, while Force Variation lets you define a range to provide some randomization to the force for each collision.

The Drag and Angular Drag options allow you to further customize the area’s effects. In fact, you could use only drag values and no force values to create an area that slows any objects that enter.

Force Target determines at what point on the object the force gets applied. When set to RigidBody, forces apply to the object’s center of mass, whereas setting it to Collider applies forces at the point of collision.

Now back to creating this game.

With Top Barrier selected, set its Force Direction to be 270. This means the force will push downwards. Next, set the Force Magnitude to 45. This will push the ship down, but not so hard that the player loses control.

Run the scene and fly up. You’ll find you are knocked back down when you hit the ceiling.

Now do the same for both the Left Barrier and the Right Barrier. If you need a hint, check the following spoiler.

Solution Inside SelectShow>

Now run the scene. Your ship is now unable to leave the frame. Time to blow some stuff up.

bouncingwalls

Implement Your Defenses

Let’s face it … if you have an alien spaceship threatening your city, you want it to leave. This is no time for diplomacy, so the best way to deal with the extraterrestrial menace is to hit it with a barrage of missile fire.

I’ve already created the missile and its associated particle systems for you. Your job is to make it fly. There are plenty of ways to do this, but Unity 5 provides an even easier one.

From the Prefab folder, drag a missile into the scene. Place it near the buildings so you can see it take off.

The missile is actually a Saturn V rocket by IsometricRobot that I downloaded from opengameart.org.

To make this missile fly, you’ll add some force, and not just a little force, but a whole lot of force. Unity makes this possible with its introduction of the Constant Force 2D component, which does exactly what you expect: It continuously applies a specific amount of force.

With missile selected in the Hierarchy, add a Constant Force 2D component.

You have three options:

  • Force: takes a value and applies it relative to world coordinates
  • Relative Force: defines a force applied relative to the object’s local coordinate system
  • Torque: force that’s used to create angular movement

Screen Shot 2015-03-12 at 3.08.32 PM

With the missile still selected, in the Constant Force 2D component, set the Force’s Y value to 20. Click Apply at the top of the Inspector to save this change back to the prefab.

Screen Shot 2015-03-12 at 4.12.06 PM

Now run the scene and just try not to laugh. The missile flies upwards, but when it hits the Top Barrier, it bounces back down, flies back up, and keeps bouncing. Evidently, the laws of physics mean nothing to the missiles.

It should pass right through it because the Top Barrier should only affect the Ship.

missile-launcha

Select Spaceship in the Hierarchy and in the Layers dropdown, select Add New Layer.

Screen Shot 2015-03-12 at 3.26.29 PM

The Inspector will now display the Tags and Layers dialog. In the Layers section, add a new layer by typing Spaceship in the User Layer 8 field.

Screen Shot 2015-03-12 at 3.30.42 PM

Select Spaceship in the Hierarchy, and in the Layer dropdown choose Spaceship.

Screen Shot 2015-03-12 at 3.32.33 PM

You’ll get a prompt to apply this layer to the spaceship’s children as well. Click Yes, change children. This means that the Layer value will apply to any of this object’s child GameObjects, too.

Screen Shot 2015-03-12 at 3.34.09 PM

Next, select Top Barrier in the Hierarchy. In its Area Effector 2D component, click the Collider Mask. Uncheck everything but Spaceship. (The fastest way to do that is to first choose Nothing, and then choose Spaceship.)

Screen Shot 2015-03-12 at 3.37.38 PM
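Under the hood, the Collider Mask is just a layer bitmask, so the same restriction could be applied from a script. A sketch for illustration only (MaskSetup is a hypothetical helper, not part of the project):

```csharp
using UnityEngine;

// Hypothetical script equivalent of unchecking everything but
// Spaceship in the Area Effector 2D's Collider Mask.
public class MaskSetup : MonoBehaviour
{
    void Start()
    {
        var effector = GetComponent<AreaEffector2D>();
        // The mask is a bitfield: shift 1 into the bit position of
        // the Spaceship layer so only that layer is affected.
        effector.colliderMask = 1 << LayerMask.NameToLayer("Spaceship");
    }
}
```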

Now, the effector will only work on the ship. Run the scene. The missile flies up and out while the ship remains trapped.

missile4

A single missile is a sufficient warning shot, but if you want to take down the alien invader, you’ll need a much bigger barrage. A GameManager object has already been created for you to help with this.

Select the GameManager in the Hierarchy, and you’ll see a few fields.

Screen Shot 2015-03-12 at 3.51.18 PM

The Missile field stores a reference to the missile prefab the Game Manager uses when it creates missiles at runtime. From your Prefab folder, drag missile into this field.

Screen Shot 2015-03-12 at 3.53.16 PM

The Max Missiles field determines how many missiles display on the screen at one time. The default is 10 but feel free to increase it if you’re feeling a little sadistic; 10 is a little conservative when you’re dealing with an unknown intelligence.

Finally, Max Time Between Missiles is the total amount of time that you want to pass between each missile launch. The default value is 5 seconds, but again, if you’re craving punishment for the aliens, go ahead and lower the value.

With the missiles in place, select missile in the Hierarchy and delete it from your scene.

Run the scene. Right away, things get interesting. Sure, you have missiles flying every which way, but the aliens are somehow unscathed.

There’s no collision detection, so the invaders are effectively immune from your puny offensive measures.
mwhahaha
Also, if you keep watching, you’ll notice that some missiles curve from the right and left. Are the operators drunk or is something wrong in the code?

missiles5

The missiles are spawned inside the area effector, which then pushes them out while their constant force pushes them up.

You can easily set the Collider Mask on the area effectors to ignore the missiles like you did for the Top Barrier, but in this case, leave it as is — it’s a happy accident. It provides a degree of random difficulty so the player must account for the missiles that fly vertically as well as randomly from left to right.

Bombs Away!

While missiles are cool, what’s even cooler is blowing them up. For this task, the alien ship is equipped with a launcher that takes a bullet. Look in your Prefab folder and drag the bullet to your scene. Since the missiles fly up you want your bullet to fly down.

Add a Constant Force 2D component to the bullet, and set its Force’s Y value to -20. Click Apply at the top of the Inspector so all instances of the bullet acquire the new component. Then, delete the bullet from your scene.

Screen Shot 2015-03-12 at 4.19.33 PM

Note: It isn’t always necessary to move the prefab into the scene before editing it. However, doing so lets you test your values prior to committing to them, and that’s usually what you’ll want to do.

Preparing the bullet is not enough because you need a mechanism from which to fire.

Open ShipController.cs in MonoDevelop. Add the following instance variables:

public GameObject launcher;
public GameObject bullet;
private bool isAlive = true;

launcher stores the origin point of the bullet. By using a GameObject, you can visually determine where you want to fire the bullet. bullet contains a reference to the bullet’s prefab. Finally, isAlive is a simple flag that indicates whether the player is alive or dead.

Next, at the bottom of Update(), add the following:

if (Input.GetKeyDown(KeyCode.Space)) {
    if (isAlive) {
        Instantiate(bullet, launcher.transform.position, Quaternion.identity);
    }
}

With this code, the spaceship drops a bomb every time the player presses the space bar — that is, if the spaceship hasn’t been destroyed.

Save the script in MonoDevelop and return to Unity. Select Spaceship in the Hierarchy, and in the Ship Controller (Script) component in the Inspector, you’ll now see the Launcher and Bullet fields.

The spaceship has a child GameObject named Launcher. Drag this child object to the spaceship’s Launcher field. Also, drag bullet to the spaceship’s Bullet field from the Prefab folder.

Screen Shot 2015-03-12 at 5.17.42 PM

Run the scene and let chaos ensue!

Screen Shot 2015-03-12 at 5.32.53 PM

Level the Playing Field

While your game is certainly fun, there’s no way for you to lose. The missiles fly right through your ship and those buildings are more like mirages than actual obstacles.

Open ShipController.cs in MonoDevelop and add the following instance variables to the class:

private SpriteRenderer spriteRenderer;
private ParticleSystem explosion;

Since you need the ship to explode dramatically, you’ll need to access both the SpriteRenderer and ParticleSystem components. Previously, you could just access them via the component accessor variables, but with Unity 5, it’s all about exercising your typing skills.

In Start(), add the following:

explosion = GetComponent<ParticleSystem>();
spriteRenderer = GetComponent<SpriteRenderer>();

This gives you the ability to access the components in your code. Next, add the following method:

void OnTriggerEnter2D(Collider2D collider) {
    if (isAlive && collider.gameObject.tag != "Barrier") {
        shipRigidbody.velocity = Vector3.zero;
        spriteRenderer.enabled = false;
        explosion.Play();
        isAlive = false;
    }
}

Since all the collisions use triggers, you use OnTriggerEnter2D(). First, you check that the player is alive and that the trigger is not a barrier. If both conditions hold, you set the ship’s velocity to zero so the resulting explosion remains fixed in space.

Next, you disable the SpriteRenderer to effectively hide the ship from view. Then you start the explosion particle system to indicate that the ship has gone boom. Finally, you mark the player dead.

Save the script and switch back to Unity. Run the scene and fire while moving the spaceship.

Screen Shot 2015-03-12 at 5.54.32 PM

Uh oh. While your ship collides with both missiles and buildings, it will also collide with its own bullets! How embarrassing for the aliens. But for you, it’s just another problem to solve.

This behavior can be problematic, especially when you fire and move at the same time. To fix this, select bullet in the Prefab folder.

As you did earlier, choose Add Layer from the Layer dropdown in the Inspector to create a new layer. Name the new layer Bullet, then assign this new layer to the bullet prefab.

Finally, select Edit \ Project Settings \ Physics 2D, and you’ll see a whole bunch of physics options. You’re interested in the chart at the bottom that indicates whether layers can interact with each other.

Deselect the checkbox at the intersection of Spaceship and Bullet to indicate that physics objects on these layers shouldn’t collide. This isn’t new to Unity 5, but it’s certainly worth mentioning.

Screen Shot 2015-03-12 at 6.07.03 PM
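The collision matrix can also be driven from code via Physics2D.IgnoreLayerCollision. A hedged sketch, assuming the two layers are named exactly Spaceship and Bullet (LayerRules is a hypothetical helper):

```csharp
using UnityEngine;

// Hypothetical code equivalent of unchecking the Spaceship/Bullet
// box in the Physics 2D collision matrix.
public class LayerRules : MonoBehaviour
{
    void Awake()
    {
        Physics2D.IgnoreLayerCollision(
            LayerMask.NameToLayer("Spaceship"),
            LayerMask.NameToLayer("Bullet"),
            true); // true = objects on these two layers never collide
    }
}
```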

Now, you can fire at will without the fear of blowing up.

Symphony of Destruction

By now the game runs and plays pretty well, except that there’s one thing holding it back from being the coolest thing you’ve ever made. Where’s the sound? Oh, that’s right, there isn’t any. It’s time to fix that.

First, add a little background music. Select GameManager in the Hierarchy and add an Audio Source component to it. From the Sounds folder, drag the backgroundMusic clip to the Audio Source’s AudioClip field. Check the Loop checkbox.

Screen Shot 2015-03-12 at 8.39.03 PM

When you run the scene, you should hear the music.

Now, for some sound effects. The Audio Sources are already added to the necessary objects, and their AudioClips are in place, so you just have to call them.

First, add a sound for when a building collapses. In the Scripts folder, find Building.cs and open it in MonoDevelop.

Add the following instance variable to the Building class:

private AudioSource collapseSound;

You’ll use collapseSound to store a reference to the building’s AudioSource component. The buildings included in the starter project already have their AudioSources added.

Next add the following to Start():

collapseSound = GetComponent<AudioSource>();

This gets a reference to the AudioSource so you can call it in code.

Next, replace the contents of Update() with the following:

if (isDestroyed && onScreen()) {
    transform.position = new Vector3( transform.position.x, 
                                      transform.position.y - Time.deltaTime * speed, 
                                      transform.position.z);
    if (!collapseSound.isPlaying) {
        collapseSound.Play();
    }
} else if (isDestroyed && !onScreen()) {
    collapseSound.Stop();
}

If the building is destroyed and still on screen, this code moves it downward each frame and starts collapseSound if it isn’t already playing. Once the destroyed building moves off-screen, the sound stops.

Save the script and go back to Unity. Run the scene and when you destroy a building, you should hear the collapse sound.

Next, open ShipController.cs in MonoDevelop, and add the following instance variable to the class:

public SoundManager soundManager;

Inside Update(), add the following line just above the line that calls Instantiate:

soundManager.DropBomb();

Now add the following line to OnTriggerEnter2D(), just before setting shipRigidbody.velocity:

soundManager.ExplodeShip();

That should add a nice sound when the ship explodes.

Finally, save the script, return to Unity, and select Spaceship in the Hierarchy. You’ll now see a Sound Manager field. Drag the SoundManager GameObject from the Hierarchy into the Sound Manager field.

Screen Shot 2015-03-12 at 9.46.26 PM

Now run the scene. Your bombs should now make sounds and when your ship explodes, you at least go out with a sound effect.

Finally, it’s time to add some sound when the bomb explodes. From the Scripts folder, open Bullet.cs in MonoDevelop. As you’ve done before, add an instance variable to store a component reference:

private SoundManager soundManager;

To initialize soundManager, add the following code to Start():

GameObject soundManagerObject = GameObject.Find("SoundManager");
soundManager = soundManagerObject.GetComponent<SoundManager>();

Since the Bullet component only exists on a prefab, you cannot assign the SoundManager GameObject directly from the scene in Unity. Instead, you have to get a hold of the reference by searching the scene for the GameObject named “SoundManager.”

Now play the explosion sound by adding the following line to OnTriggerEnter2D(), just after the call to explosion.Play():

soundManager.HitBuilding();

Save the script and return to Unity. Run the scene and take out some buildings — your bullets should now explode with vibrance.

Unity 5’s Audio Mixer

While the sounds work well enough, some are a little loud and others are too soft. In previous versions of Unity, you’d have to adjust the level of each AudioSource, whereas with Unity 5, you can adjust them from one central location, an Audio Mixer.

You can have several mixers in your project, and a full introduction to Unity’s awesome new sound features deserves its own tutorial.

However, creating a mixer to adjust base audio levels is easy enough to explain here without dragging on.

Select your Sounds folder, then right-click it and select Create\Audio Mixer. Name the new object Game Mixer.

To view the mixer, double-click it or select Window\Audio Mixer to bring up the panel below:

audio_mixer

By default, all mixers start with one group known as the Master group, where you can apply effects to all audio sources at once.

You can also think of groups as channels. For example, each group has an attenuation effect applied to it that allows you to change the group’s volume, even while the game is in progress.

One of the great things about the mixer is that changes made while playing the game in the editor are saved when you stop playing. This lets you tweak audio until it’s perfect without having to write down your changes and redo them afterwards.
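Mixer settings can also be changed from script at runtime. As a sketch, assuming you’ve exposed the Music group’s attenuation as a parameter named MusicVolume (both the gameMixer field and the parameter name are illustrative, not part of the project):

```csharp
using UnityEngine;
using UnityEngine.Audio;

// Hypothetical runtime volume control for an exposed mixer parameter.
public class MixerControl : MonoBehaviour
{
    public AudioMixer gameMixer; // assign Game Mixer in the Inspector

    public void SetMusicVolume(float decibels)
    {
        // Attenuation is measured in dB; 0 is unity gain, -80 is silent.
        gameMixer.SetFloat("MusicVolume", decibels);
    }
}
```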

You’re going to add a few groups. Click the plus sign in the Groups section to create a new group, and name it Music. Then click the plus sign again and create a group named Sound Effects. Most likely, your groups look like those shown below:

nested_audio_groups

New groups are added as children of the currently selected group; the Music group was still selected when you created the second new group, so it became a child of Music. This wasn’t really the desired effect, although it’s an important behavior to know.

To fix this, simply drag the Sound Effects group out of Music and into Master to re-parent it. Now select the Sound Effects group and add a new subgroup named Collapsing Buildings. You should now see three groups:

audio_groups

Now you need to assign a mixer group to each of the AudioSources in your scene. First, select GameManager in the Hierarchy, then look at its Audio Source component in the Inspector. Click the open circle to the right of the Output field to bring up the Select AudioMixerGroup dialog, and choose Music inside the dialog.

Screen Shot 2015-03-12 at 11.35.01 PM

You need to direct the audio output of each of the buildings to the Collapsing Buildings group.

Select BuildingA in the Hierarchy and then shift-click BuildingH to select all buildings at once (BuildingA, BuildingB, BuildingC, BuildingD, BuildingE, BuildingF, BuildingG, and BuildingH). This trick allows you to edit common fields for all of the selected objects simultaneously.

In the Inspector, set the Output field of the Audio Source component to Collapsing Buildings.

Finally, set the output of the SoundManager‘s AudioSource to the Sound Effects group.

Now run your scene. Your mixer will come to life.

audio_levels_jumping

To change the levels, click the Edit in Play Mode button at the top of the Audio Mixer panel. Keep in mind that you’ll only see this button while your scene plays. Once you press the button you can alter the levels. If you lower the volume of one group, it will lower the volume of all sub-groups as well.

adjust_audio_levels

Underneath the volume control are three buttons. The S button means Solo; it silences all the other tracks so you can adjust the current one without distraction.

The M button stands for Mute and allows you to silence only that track. The B button stands for Bypass, and it allows you to ignore any audio effects you’ve added to the track.

To add effects to a group, simply click the Add… button, which offers a variety of effects. To try it out, click the Add… button underneath the Collapsing Buildings channel and add the Echo effect.

Screen Shot 2015-03-13 at 1.28.16 PM
Now blast a building to smithereens to hear the effect.

At this point, feel free to play your game and continue to perfect the sound. In the GameManager, temporarily set the Max Missiles value to 0 so you can focus on the audio without fear of death.

You’ve only scratched the surface of Unity’s new Audio Mixer. You can also do things like hide groups and save them as Views to help you focus on different editing tasks, and take Snapshots to save different groups of settings — either for comparison or just for different situations. For more information, check out Unity’s video tutorials.

Where to Go From Here

Download the final project here.

Unity 5 offers a whole host of new features. To learn more about what Unity now has to offer, check out our What’s New in Unity 5 article.

Unity also offers a wealth of free tutorials specifically for Unity 5 and even free live training.

PushyPixels also presents weekly free videos about creating a variety of games for Unity.

Finally, keep checking back here for upcoming video and written tutorials specifically on Unity 5. Take advantage of the forums below to ask questions, bounce ideas off other developers and share what’s working for you in Unity 5.

I hope you enjoy working with Unity as much as we do. Remember, when you make your games, share them with us so we can check them out. Have fun!

What’s New in Unity 2D in Unity 5 Tutorial is a post from: Ray Wenderlich


Video Tutorial: Custom Collection View Layouts Part 9: Ultravisual – Fading Content

WatchKit by Tutorials: Update Now Available!

Bonus Chapter for WatchKit by Tutorials Now Available!

Good news – a new version of WatchKit by Tutorials is now available, with two major updates:

  • Brand New Chapter: The book includes a brand new chapter on localization, which covers using genstrings, NSLocalizedString, locales, number formatting, and more.
  • Errata and Bug Fixes: We’ve scoured through the errata thread on the book forums and fixed the reported issues – as well as a few we found on our own.
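As a taste of the territory the new chapter covers, here is a minimal sketch using the Swift 1.2-era Foundation APIs ("GAME_TITLE" is a made-up key, purely for illustration):

```swift
// Look up a translated string; the value comes from Localizable.strings.
let title = NSLocalizedString("GAME_TITLE", comment: "Title on the main screen")

// Format a number according to a specific locale's conventions.
let formatter = NSNumberFormatter()
formatter.numberStyle = .DecimalStyle
formatter.locale = NSLocale(localeIdentifier: "de_DE")
let grouped = formatter.stringFromNumber(12345) // grouped per the German locale
```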

If you are a WatchKit by Tutorials customer, you can download the new version (v1.2) that includes the new chapter immediately on your My Loot page.

And if you don’t have the book yet, be sure to grab your copy – it’s not too late for the Apple Watch launch. :]

The WatchKit by Tutorials team and I hope you enjoy this free update, and we hope to see some great WatchKit apps from you soon!

WatchKit by Tutorials: Update Now Available! is a post from: Ray Wenderlich


Opportunity: Senior iOS Instructor at raywenderlich.com

Join our team full-time!

A few days ago, I posted about a new job opportunity on our site as a Lead WordPress Developer.

I know most of you are iOS developers, so you might not be too interested in that. But I think many of you will be interested in this one :]

We are also looking to hire a Senior iOS Instructor, to create high quality video tutorials for our site.

With this position, not only do you get paid to learn – but you get to share your knowledge with others, helping to shape the next generation of mobile developers.

Better yet – our team is distributed, so you can work remotely from the comfort of your own home. Keep reading to learn more about what’s involved!

What’s Involved

You would be making high quality video tutorial series on various iOS topics, such as this Collection View Custom Layouts series by Mic Pringle:

CCVL00-featured-image

Or this Adaptive Layout series by Greg Heo:

AdaptiveLayout00-featured-image

Or this OpenGL ES and GLKit series by myself:

OpenGLES

As you can see, it takes a combination of many skills to pull off great tutorial videos like these. Here’s what we’re looking for:

Technical Skills

  • Passion for iOS development. We believe that being passionate about what you’re working on is the key to success.
  • 3+ years iOS development experience. You should be familiar with all the major iOS APIs, including UIKit, Core Animation, Core Graphics, Core Data, and more, and should consider yourself an advanced-level developer.
  • Swift experience. You should already have been diving into Swift and should have some practical hands-on experience.
  • Bonus points if you’re up-to-date with brand new technologies like WatchKit, Swift 1.2, React Native, etc.

Training Skills

  • Speaking skills. In order to make video tutorials, you’ll need good speaking skills, and be comfortable in front of a camera. Bonus if you have experience speaking at conferences or are in Toastmasters.
  • Presentation skills. You must be able to make high quality slides, including illustrations and animations demonstrating the topic you are covering.
  • Writing skills. You will also often need to write challenge documents and written tutorials, so strong writing skills are a must.
  • Learning skills. You must be able to learn things that are new to you quickly – gaining not just a surface understanding, but a deep understanding of the topic.
  • Teaching skills. You must be able to understand how to take the 80% most important areas of any topic or API, and break it down into easily digestible chunks.
  • Sense of humor. Part of your job is to inject some of your own sense of humor into your training, to keep things light-hearted and fun!

Other Skills

  • Leadership skills. Since we are a small company of full-time folks working with a large distributed team, we always look for our full-timers to take on leadership roles. We will likely give you a small leadership role soon after you join our team.
  • Self-driven work ethic. You need to be a self-starter who loves taking initiative and seeing things through to completion, with loose (at best) direction.
  • Detail oriented and highly organized. We are a small company with a lot to do, so you should be able to work on and prioritize multiple tasks at any given time, and stay very organized.
  • Great communication skills. We’re a distributed team, so frequent and clear written communication is a must.
  • Curiosity and the desire to learn. Our business is changing and growing fast – who knows what will be the skills of tomorrow?
  • Bonus points if you like video games, board games, and/or zombies – you’ll fit right in! :]
  • Extra bonus points if you’re already a member of the raywenderlich.com team!
Are you already on the map?

About Razeware

Razeware is the company behind this site. We’re passionate about learning new development skills, and teaching others what we’ve learned through high quality, hands-on tutorials.

We are a small company that has been profitable for over 5 years. Currently we have just 6 full-time employees, so you’d be getting in on the ground floor.

Razeware February 2015.

We’re also a 100% distributed company: everyone works from their own home or office, whether it be Maryland, Virginia, Connecticut, Canada, or England. We make heavy use of Trello, Slack, Google Hangouts, and email for team communication.

We have a ton of great benefits, such as:

  • Remote working!
  • Health insurance and 401K match (US only)
  • Generous PTO – plus 1-week company-wide Christmas vacation
  • Competitive salary
  • Free trip to our annual conference – RWDevCon
  • Professional development (conferences & training)
  • Equipment for your own at-home video recording studio!

Our site is helping millions of developers across the world make apps, further their careers, and fulfill lifelong dreams. If you’re passionate about helping our small but highly motivated team take this to the next level, this is the job for you! :]

How To Apply

To apply, please email a resume and cover letter to ray@razeware.com.

We look forward to hearing from you! :]

Opportunity: Senior iOS Instructor at raywenderlich.com is a post from: Ray Wenderlich



Video Tutorial: Custom Collection View Layouts Part 10: Ultravisual – Cell Snapping

Introducing the raywenderlich.com Collection

The raywenderlich.com Collection Now Available!

Our goal has always been to make high quality tutorials more accessible. More relevant. And ultimately, to fill more space on your bookshelf.

The raywenderlich.com Collection represents a new chapter in the relationship iOS developers have with learning.

It’s the most comprehensive book we’ve ever made, because it’s the first one that contains every tutorial we’ve ever written: all 762 of them.

Keep reading to learn about how the book was made, what’s inside, and how to snag one of these beauties for yourself!

The One Book To Rule Them All

We have hundreds of tutorials on our site and in our books, which are well loved by the community.

But until now, we’ve never gathered them in one convenient place, that you can lovingly store on your bookshelf!

You may need a bigger bookcase.

The raywenderlich.com Collection contains every tutorial we’ve ever written, including:

  • iOS Tutorials: Navigation Controllers, Table Views, Scroll Views, ARC, Storyboards, iCloud, Collection Views, Passbook, Auto Layout, NSURLSession, Text Kit, Adaptive UI, App Extensions, CloudKit, WatchKit
  • Game APIs: Sprite Kit, Unity, Cocos2D, and Unreal Engine
  • Other tutorials: Android Development, Arduino, and more!

Once we put all these tutorials together, we were amazed to find the book was a whopping 22,864 pages and measures over 2 feet wide.

Furthermore, every tutorial in the book has been re-tested, tech edited and is fully up-to-date. To do this, our tech editor Brian Moakley spent 4 months full-time tech editing the book – with very little sleep and no bathroom breaks.

I think we broke our tech editor.

This book is truly the most comprehensive repository of iOS, Android, and Gaming knowledge ever to exist. And you can order today.

A Collection For Everyone

At raywenderlich.com, we understand that developers need to reflect their personal style on their bookshelf. So we designed two amazing options for you to choose from.

The raywenderlich.com Collection Sport Edition features an aluminum cover embossed with black lacquer, and a special fluoroelastomer bookmark designed by raywenderlich.com engineers specifically to help you keep your place in this massive tome. The bookmark is durable and strong, yet surprisingly soft.

The raywenderlich.com Collection Sport Edition is our entry-level model, and is available for the low price of $399.

The raywenderlich.com Collection Gold Edition features a cover crafted from 18-karat gold that our metallurgists have developed to hold twice as many tutorials as standard book covers. The bookmark is adorned with polished sapphire crystal, providing a striking complement.

cover-gold

The raywenderlich.com Collection Gold edition only has a limited number of copies for sale, and will sell for $9,999.

Gold editions are signed by all 100 authors, and hand-delivered to your house by Ray, who will lovingly place it on your bookshelf. Please have a cold beer ready.

Where To Go From Here?

The team and I are happy to announce that this book is available now. Click one of the options below to order:

button_sport

button_gold

Make space on your bookshelf, start lifting some weights, and get ready for the most epic book you’ve ever read. Don’t be foolish, order today!

Introducing the raywenderlich.com Collection is a post from: Ray Wenderlich


Video Tutorial: Custom Collection View Layouts Part 11: Timbre – Cell Transform

Video Tutorial: Custom Collection View Layouts Part 12: Timbre – View Clipping

How to Make a Game Like Candy Crush Tutorial: OS X Port

Level up your iOS Sprite Kit skills to cover OS X!

If you’ve successfully completed and enjoyed the two-part tutorial How to Make a Game Like Candy Crush with Swift, you’re surely hungry for more cookie adventures! Where could you go from there?

The game makes heavy use of Sprite Kit, and Sprite Kit is available on the Mac as well. Apple has done an excellent job of keeping the Sprite Kit APIs unified across both platforms so if you are planning on developing games with Sprite Kit, why not consider targeting iOS devices and Macs in one project?

This tutorial will show you how to take an existing iOS Sprite Kit game written in Swift – Cookie Crunch – and adapt it to run on OS X. Whether you worked through the Objective-C or Swift version of the iOS tutorial, you’ll find many of the concepts in this tutorial familiar. Still, the Cookie Crunch game presents its own unique challenges.

Getting Started

This tutorial assumes that you have already successfully completed the How to Make a Game Like Candy Crush with Swift Tutorial. For convenience, you can download the completed project from that tutorial here.

Unzip the project and open it in Xcode. Build and run the project and you’ll see the game running in the iOS Simulator:

Cookie Crunch OS X Starter iPhone Screenshot

Add an OS X Build Target

The first thing you need to do to kick off your OS X conversion is to create a new build target for OS X.

At the top of the Project Navigator, select the CookieCrunch project to display project settings. You’ll see that there are currently only two targets: CookieCrunch and CookieCrunch Tests:

OSX_CC_Targets

Ensure your Xcode project is the active window and select the File \ New \ Target… menu item. Xcode will prompt you to select a template for the new target. Select OS X \ Application \ Game and click Next, as shown below:

OSX_CC_CreateGameTarget

Enter CookieCrunch Mac as the product name and click Finish:

OSX_CC_MacTarget

Xcode will create two new targets for you: CookieCrunch Mac and CookieCrunch Mac Tests:

OSX_CC_AddedMacTargets

Try running your new app on OS X. First, select CookieCrunch Mac from the scheme list, then select My Mac as shown below:

OSC_CC_MacScheme

Build and run. What do you expect to see? Will it be:

  1. Nothing at all?
  2. The CookieCrunch game running on Mac?
  3. Something else?
Solution Inside: Solution (select Show to reveal)

 

Shortly you will strip out everything from the Mac target you don’t need before adding in and adapting the code you already developed for iOS. Adding the Mac target doesn’t affect the iOS target, which will continue to work as usual.

Surveying the Work to Be Done

Before diving in to the work of adapting CookieCrunch for OS X, it is important to first survey what needs to be done. After all, you’d never dive into a project without a plan, would you? :]

Sprite Kit on iOS and OS X

Apple has done an excellent job keeping the Sprite Kit framework essentially the same on both iOS and OS X. However, there are a couple of significant differences since the iOS framework is built on top of UIKit and the OS X framework is built on top of AppKit.

On iOS, the SKNode class inherits from UIResponder, and SKView inherits from UIView. On OS X, they inherit from NSResponder and NSView respectively.

If you peruse the documentation for each of the classes, you’ll see many similarities and many more differences between UIResponder and NSResponder, and UIView and NSView. The biggest differences are in event handling. They exist because of different input methods: multi-touch on iOS versus mouse events on the Mac. Apple also took the opportunity with UIKit to learn from the lessons of the past and create much “cleaner” APIs.

Touch events received in an SKNode on iOS are UITouch objects, but on OS X events (whether mouse, keyboard, etc) come through as NSEvent objects.

Sprite Kit even helps you out here. Do you want to know the location of a mouse or touch event in an SKNode? The function locationInNode is available as an extension to both NSEvent and UITouch. This simplifies things for you somewhat, but you’ll still have to do work to make your code truly cross-platform.
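To make that concrete, here is a sketch of how a shared scene might branch on platform while funneling both input types into one handler. The signatures approximate the Swift 1.2-era APIs, and handleInput is a hypothetical helper, not code from the tutorial project:

```swift
// Shared GameScene input handling: UITouch on iOS, NSEvent on OS X.
// locationInNode(_:) exists on both, so the game logic can be shared.
#if os(iOS)
override func touchesBegan(touches: Set<NSObject>, withEvent event: UIEvent) {
    if let touch = touches.first as? UITouch {
        handleInput(touch.locationInNode(self))
    }
}
#else
override func mouseDown(theEvent: NSEvent) {
    handleInput(theEvent.locationInNode(self))
}
#endif

func handleInput(location: CGPoint) {
    // Shared, platform-agnostic game logic lives here.
}
```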

Platform-specific UI

In the case of Cookie Crunch, you have another, even larger, porting challenge before you. If you worked through the original tutorial, you may remember that you created a number of UI elements (the score labels, game over images, and shuffle button) in Interface Builder. Unfortunately, these are all UIKit elements and are not available to the OS X target.

You could design a roughly similar OS X interface using AppKit, but there are several problems here:

  1. AppKit UI elements such as NSTextField (AppKit’s equivalent of a label) and NSButton do not have the built-in ability to do things like shadowed text.
  2. Overlaying NSView objects on top of a Sprite Kit scene is possible, but involves tricky layer-backed view work that doesn’t seem to handle the transition between retina and non-retina screens very well.
  3. You end up with lots of similar but not-quite-reusable code in both iOS and OS X targets.

Because of this, to make your game code as platform-agnostic as possible, it will be necessary to first migrate the iOS target to use a pure Sprite Kit-based approach for the UI as well as the game elements. You will do this while keeping the ultimate goal in mind: getting the game running just as well on OS X as it currently does on iOS.

So, first things first, you will rearrange your project files a bit to clearly identify what can be shared between the iOS and OS X targets, and what will remain platform-specific.

Source File Organization

In the Project Navigator, select the CookieCrunch project. Create a new group by selecting File \ New \ Group. Name it CookieCrunch Shared, as shown below:

OSX_CC_SharedGroup

This group will be the location for all code and resources common to both the OS X and iOS targets.

Now, move (drag and drop) the following source files to the group you just created:

  • Array2D.swift
  • Chain.swift
  • Cookie.swift
  • Extensions.swift
  • GameScene.swift
  • Level.swift
  • Set.swift
  • Swap.swift
  • Tile.swift

Also move the assets and subfolders Grid.atlas, Sprites.atlas, Images.xcassets, Levels and Sounds from the CookieCrunch group into the shared group as well.

When you’re done, your project navigator should look like this:

OSX_CC_FilesMigrated

Now you need to add all of the files in the shared group to the Mac target. In the Project Navigator, select all the Swift files in the shared group. Then, in the File Inspector, make sure CookieCrunch Mac is checked under Target Membership, as shown below:

CookieCrunch-OSX-TargetMembership

Do the same with Grid.atlas, Images.xcassets, Sprites.atlas, and each of the files in the Levels and Sounds groups.

While you’re dealing with the asset catalog, you should also make sure that the app has a Mac icon. Open Images.xcassets, find AppIcon, and in the Inspector, make sure Mac all sizes is checked:

CookieCrunch-OSX-AppIcon

You also need to clean up a few things that Xcode created by default as part of the CookieCrunch Mac target.

From the CookieCrunch Mac group, delete Images.xcassets, GameScene.swift (since there is already a shared GameScene class to reuse), and GameScene.sks.

Replace the contents of the OS X AppDelegate.swift file with the following:

import Cocoa
import SpriteKit
 
@NSApplicationMain
class AppDelegate: NSObject, NSApplicationDelegate {
 
  @IBOutlet weak var window: NSWindow!
  @IBOutlet weak var skView: SKView!
 
  func applicationDidFinishLaunching(aNotification: NSNotification) {
  }
 
  func applicationShouldTerminateAfterLastWindowClosed(sender: NSApplication) -> Bool {
    return true
  }
}

This is a pretty simple AppDelegate right now. But, later, you’ll come back to this code to add a few important elements. It’s a good idea at this stage to make sure that the iOS target still works. Select an iPhone target and build and run the project. If the game still launches, you are ready to move on to the next step. Otherwise, go back and carefully re-check each step you’ve performed so far.

Now, have a go at building the Mac target. What do you expect to happen?

Whoops – what is that? Xcode is reporting a bunch of errors in the GameScene class. Take a look, and you’ll see several instances of:

Use of undeclared type 'UIEvent'
Use of undeclared type 'UITouch'

Remember what I said earlier about the differences in event handling on iOS vs OS X? Now is the time to deal with that.

Porting UIKit to Sprite Kit

As mentioned earlier, the Sprite Kit framework is built on top of the platform-specific UIKit framework on iOS and AppKit on OS X. On both platforms SKScene inherits ultimately from SKNode. On iOS this is built on top of UIResponder; on OS X it’s NSResponder. UIResponder provides methods such as touchesBegan and touchesMoved. NSResponder provides functions such as mouseDown and mouseDragged.

You have several options to tackle this difference:

  1. Conditional compilation in the GameScene class.
    This has the advantage of keeping all the code in a single source file, but it sacrifices readability, since you’ll end up with one big source file littered with #if and #else statements.
  2. Extract the common GameScene code into a superclass and implement OS-specific stuff in a subclass specific to its target. e.g.:
    • GameScene.swift
    • GameSceneIOS.swift
    • GameSceneMac.swift

    This has the advantage of only having platform-neutral code in the shared group, and any OS-specific stuff exists only in the relevant targets. However, the amount of platform-specific work you’ll have to do is actually quite small, so there is a third, better way.

  3. Put all event handling code inside a class extension and use conditional compilation.
    Doing this keeps code all in one place, and it is reusable by any class that derives from SKNode (something you will take advantage of shortly).

Cross-platform event handling extension

Right-click on the CookieCrunch Shared group, select New File… and create a new Swift File. Name it EventHandling.swift and make sure it is added to both the CookieCrunch and CookieCrunch Mac targets.

Replace the contents of the file with the following code:

import SpriteKit
 
// MARK: - cross-platform object type aliases
 
#if os(iOS)
typealias CCUIEvent = UITouch
#else
typealias CCUIEvent = NSEvent
#endif

The first step is to create the CCUIEvent type alias. On iOS it refers to a UITouch object; on OS X, an NSEvent. This will let you use this type in event handling code without having to worry about what platform you are developing on… within reason, of course. In your cross-platform code you will be limited to only calling methods or accessing properties that exist on both platforms.

Apple itself takes this approach for classes such as NSColor and UIColor, by creating a type alias SKColor that points to one or the other as appropriate for the platform. You’re already in good company with your cross-platform style! :]
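Stripped of the platform frameworks, the conditional typealias technique looks like this minimal sketch (both types here are hypothetical stand-ins, used so the idea can be shown in isolation):

```swift
// Both types are hypothetical stand-ins so the technique can run anywhere.
struct TouchStandIn { let kind = "touch" }
struct EventStandIn { let kind = "event" }

#if os(iOS)
typealias StandInEvent = TouchStandIn   // models CCUIEvent = UITouch
#else
typealias StandInEvent = EventStandIn   // models CCUIEvent = NSEvent
#endif

// Shared code is written against the alias, never a concrete platform type.
func kindOfEvent(_ event: StandInEvent) -> String {
    return event.kind
}
```

Only the typealias line knows which concrete type is in play; everything downstream of it compiles unchanged on both platforms.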

Next, add the following code to the file:

extension SKNode {
 
  #if os(iOS)
 
  // MARK: - iOS Touch handling
 
  override public func touchesBegan(touches: NSSet, withEvent event: UIEvent)  {
    userInteractionBegan(touches.anyObject() as UITouch)
  }
 
  override public func touchesMoved(touches: NSSet, withEvent event: UIEvent)  {
    userInteractionContinued(touches.anyObject() as UITouch)
  }
 
  override public func touchesEnded(touches: NSSet, withEvent event: UIEvent) {
    userInteractionEnded(touches.anyObject() as UITouch)
  }
 
  override public func touchesCancelled(touches: NSSet, withEvent event: UIEvent) {
    userInteractionCancelled(touches.anyObject() as UITouch)
  }
 
  #else
 
  // MARK: - OS X mouse event handling
 
  override public func mouseDown(event: NSEvent) {
    userInteractionBegan(event)
  }
 
  override public func mouseDragged(event: NSEvent) {
    userInteractionContinued(event)
  }
 
  override public func mouseUp(event: NSEvent) {
    userInteractionEnded(event)
  }
 
  #endif
 
  // MARK: - Cross-platform event handling
 
  func userInteractionBegan(event: CCUIEvent) {
  }
 
  func userInteractionContinued(event: CCUIEvent) {
  }
 
  func userInteractionEnded(event: CCUIEvent) {
  }
 
  func userInteractionCancelled(event: CCUIEvent) {
  }
 
}

This section of code defines the extension to SKNode for cross-platform behavior. You’re mapping the relevant location-based event handling methods on each platform to generic userInteractionBegan/Continued/Ended/Cancelled methods. In each call, the CCUIEvent is passed as a parameter.

Cross-platform GameScene class

Now, in any SKNode-derived class in your project, you only need to change any use of touchesBegan, etc, to userInteractionBegan, etc and it should compile and run on both platforms!

Open GameScene.swift. First, find and replace the following lines:

override func touchesBegan(touches: NSSet, withEvent event: UIEvent) {
  // Convert the touch location to a point relative to the cookiesLayer.
  let touch = touches.anyObject() as UITouch
  let location = touch.locationInNode(cookiesLayer)

with:

override func userInteractionBegan(event: CCUIEvent) {
  // Convert the touch location to a point relative to the cookiesLayer.
  let location = event.locationInNode(cookiesLayer)

Similarly, find the method definition for touchesMoved:

override func touchesMoved(touches: NSSet, withEvent event: UIEvent) {

and replace it with the following:

override func userInteractionContinued(event: CCUIEvent) {

Within that function, replace the lines

let touch = touches.anyObject() as UITouch
let location = touch.locationInNode(cookiesLayer)

with:

let location = event.locationInNode(cookiesLayer)

Now replace:

override func touchesEnded(touches: NSSet, withEvent event: UIEvent) {

with:

override func userInteractionEnded(event: CCUIEvent) {

And finally, replace:

override func touchesCancelled(touches: NSSet, withEvent event: UIEvent) {
  touchesEnded(touches, withEvent: event)
}

with:

override func userInteractionCancelled(event: CCUIEvent) {
  userInteractionEnded(event)
}

That’s all the changes needed to call the cross-platform versions of the methods. Thanks to the type alias, you can now implement the userInteraction methods once for both OS X and iOS.

Build and run the Mac target. It won’t do anything yet, as its AppDelegate doesn’t set up the game.

As a final check to make sure you’ve done this all correctly, build and run the iOS target. The game should still play as normal.

Removing Dependencies on UIKit

Most of the game user interaction logic is, understandably enough, located in the GameViewController class. However, while AppKit for OS X provides view controllers and views just like UIKit on iOS, the differences are large enough that it would take a lot of work to reimplement the logic in the game controller specifically for OS X.

If you survey the GameViewController code, though, you’ll notice that most of it should be reusable on both platforms. Background music is played through an AVAudioPlayer, which is also available on OS X. Much of the game turn logic should be reusable. It is only the code that accesses UIKit components (UILabel, UIImageView and UIButton) that needs to be adapted for OS X.

The various labels can be replaced with Sprite Kit’s SKLabelNode. The Shuffle button and Game Over images can be implemented using SKSpriteNode objects. With these, you can create a custom controller object that is usable on both platforms, maximising code reuse and leaving the total amount of platform-specific code at an absolute minimum.

Labels with Shadows

If you review the API for SKLabelNode, you’ll notice that there is nothing there about shadows. (Incidentally, AppKit’s label control, NSTextField, doesn’t have a simple built-in way to add them either.) Hence, if you want to keep your text readable, you’ll need to implement your own custom ShadowedLabelNode class.

Create a new Swift file in the shared group, naming it ShadowedLabelNode.swift (make sure you add it to both iOS and OS X targets!). Replace its auto-generated contents with the following code:

import SpriteKit
 
class ShadowedLabelNode: SKNode {
 
  // 1
  private let label: SKLabelNode
  private let shadowLabel: SKLabelNode
 
  // 2
  var text: String {
    get {
      return label.text
    }
    set {
      label.text = newValue
      shadowLabel.text = newValue
    }
  }
 
  // 3
  var verticalAlignmentMode: SKLabelVerticalAlignmentMode {
    get {
      return label.verticalAlignmentMode
    }
    set {
      label.verticalAlignmentMode = newValue
      shadowLabel.verticalAlignmentMode = newValue
    }
  }
 
  var horizontalAlignmentMode: SKLabelHorizontalAlignmentMode {
    get {
      return label.horizontalAlignmentMode
    }
    set {
      label.horizontalAlignmentMode = newValue
      shadowLabel.horizontalAlignmentMode = newValue
    }
  }
 
  required init(coder: NSCoder) {
    fatalError("NSCoding not supported")
  }
 
  // 4
  init(fontNamed fontName: String, fontSize size: CGFloat, color: SKColor, shadowColor: SKColor) {
    label = SKLabelNode(fontNamed: fontName)
    label.fontSize = size
    label.fontColor = color
 
    shadowLabel = SKLabelNode(fontNamed: fontName)
    shadowLabel.fontSize = size
    shadowLabel.fontColor = shadowColor
 
    super.init()
 
    shadowLabel.position = CGPoint(x: 1, y: -1)
    addChild(shadowLabel)
    addChild(label)
  }
}

Let’s walk through this class step-by-step:

  1. As you can see, a shadowed label is constructed by using two labels of different colors, one offset from the other by a point in the vertical and horizontal directions.
  2. The text is a computed property that passes through to the child labels, ensuring both labels’ text is set correctly.
  3. Likewise, the verticalAlignmentMode and horizontalAlignmentMode properties pass through to the two labels as well.
  4. Finally, the initializer sets up the two labels, ensuring that they are slightly offset from each other to create the shadow effect.
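The pass-through pattern used in steps 2 and 3 is worth a closer look. Here it is in isolation, with hypothetical PlainLabel and MirroredLabel types standing in for SKLabelNode and ShadowedLabelNode:

```swift
// Hypothetical stand-ins for SKLabelNode and ShadowedLabelNode.
class PlainLabel {
    var text = ""
}

class MirroredLabel {
    let label = PlainLabel()
    let shadowLabel = PlainLabel()

    // One computed property keeps both child labels in sync.
    var text: String {
        get { return label.text }
        set {
            label.text = newValue
            shadowLabel.text = newValue
        }
    }
}
```

Callers only ever touch the wrapper’s property, so the two children can never drift out of sync.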

You could create a more comprehensive wrapper matching the full SKLabelNode API, but this is all that’s needed for the Cookie Crunch game.

A simple Sprite Kit button

You’ll also need a Sprite Kit-based button to replace the UIButton currently used in the iOS target.

In the shared group, create a new file, ButtonNode.swift, adding it to both the iOS and OS X targets. Replace its contents with the following code:

import SpriteKit
 
class ButtonNode: SKSpriteNode {
 
  // 1 - action to be invoked when the button is tapped/clicked on
  var action: ((ButtonNode) -> Void)?
 
  // 2
  var isSelected: Bool = false {
    didSet {
      alpha = isSelected ? 0.8 : 1
    }
  }
 
  // MARK: - Initialisers
 
  required init(coder: NSCoder) {
    fatalError("NSCoding not supported")
  }
 
  // 3
  init(texture: SKTexture) {
    super.init(texture: texture, color: SKColor.whiteColor(), size: texture.size())
    userInteractionEnabled = true
  }
 
  // MARK: - Cross-platform user interaction handling
 
  // 4
  override func userInteractionBegan(event: CCUIEvent) {
    isSelected = true
  }
 
  // 5
  override func userInteractionContinued(event: CCUIEvent) {
    let location = event.locationInNode(parent)
 
    if CGRectContainsPoint(frame, location) {
      isSelected = true
    } else {
      isSelected = false
    }
  }
 
  // 6
  override func userInteractionEnded(event: CCUIEvent) {
    isSelected = false
 
    let location = event.locationInNode(parent)
 
    if CGRectContainsPoint(frame, location) {
      // 7
      action?(self)
    }
  }
}

The class is deliberately kept nice and simple: only implementing the things absolutely needed for this game. Note the following (numbers reference the corresponding comment in the code):

  1. An action property holds a reference to the closure that will be invoked when the user taps or clicks on the button.
  2. You will want to visually indicate when the button is being pressed. A simple way to do this is to change the alpha value when the button is selected.
  3. Most of the initialization is handled by the SKSpriteNode superclass. All you need to do is pass in a texture to use, and make sure that the node is enabled for user interaction!
  4. When user interaction begins (either a touch down or mouse down event), the button is marked as selected.
  5. If, during the course of user interaction, the mouse or touch moves outside the node’s bounds, the button is no longer shown to be selected.
  6. When the mouse click finishes or the user lifts their finger, if the event location is within the bounds of the button, the action is triggered.
  7. Note the use of optional chaining, as indicated by the ?. This means that nothing happens if no action is set (that is, when action == nil).
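The optional action closure plus optional chaining (points 1 and 7) is a general-purpose pattern. Here’s a minimal, hypothetical sketch of it outside Sprite Kit:

```swift
// Hypothetical sketch: an optional callback fired via optional chaining.
class SimpleButton {
    var action: ((SimpleButton) -> Void)?

    func finishInteraction(inBounds: Bool) {
        if inBounds {
            action?(self)   // does nothing at all when action is nil
        }
    }
}
```

The button never needs to check `if action != nil`; the `?` makes the nil case a silent no-op.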

You are also going to need one more thing before you can begin your controller conversion in earnest. The iOS controller uses a UITapGestureRecognizer to trigger the beginning of a new game. OS X has good gesture recognizer support as well, but in this case you will need an NSClickGestureRecognizer.

In EventHandling.swift, add the following type alias to the iOS section below the definition of CCUIEvent = UITouch:

typealias CCTapOrClickGestureRecognizer = UITapGestureRecognizer

Similarly, add this alias to the OS X section, below the line typealias CCUIEvent = NSEvent:

typealias CCTapOrClickGestureRecognizer = NSClickGestureRecognizer

The APIs on these classes are similar enough that you can type alias them as you did with UITouch and NSEvent.

A cross-platform controller

Now it’s time to get your hands really dirty, and gut the iOS GameViewController class to create the cross-platform controller. But before you do that, you need somewhere to put the shared controller code.

Create a new Swift file GameController.swift in the Shared group, and add it to both the iOS and OS X targets.

Replace its contents with the following:

import SpriteKit
import AVFoundation
 
class GameController: NSObject {
 
  let view: SKView
 
  // The scene draws the tiles and cookie sprites, and handles swipes.
  let scene: GameScene
 
  // 1 - level, movesLeft, score
 
  // 2 - labels, buttons and gesture recognizer
 
  // 3 - backgroundMusic player
 
 
  init(skView: SKView) {
    view = skView
    scene = GameScene(size: skView.bounds.size)
 
    super.init()
 
    // 4 - create and configure the scene
 
    // 5 - create the Sprite Kit UI components
 
    // 6 - begin the game
  }
 
  // 7 - beginGame(), shuffle(), handleSwipe(), handleMatches(), beginNextTurn(), updateLabels(), decrementMoves(), showGameOver(), hideGameOver()
}

Move the following code out of GameViewController.swift into the marked locations in GameController.swift.

  • At 1, insert the declarations for level, movesLeft and score.
  • At 3 (you’ll come back to 2 later), put the code for creating the backgroundMusic AVAudioPlayer instance.
  • At 4, take everything from viewDidLoad from the “create and configure the scene” comment to the line that assigns the swipe handler (scene.swipeHandler = handleSwipe) and move it into the initializer for the GameController.
  • Delete the lines that hide the gameOverPanel and shuffleButton – you’ll do things slightly differently when you create the labels and buttons at 5, in a moment.
  • At 6, move the lines from skView.presentScene(scene) to beginGame().
  • At 7, move the functions beginGame(), shuffle(), handleSwipe(), handleMatches(), beginNextTurn(), updateLabels(), decrementMoves(), showGameOver(), hideGameOver().

The old iOS GameViewController class should be looking a lot slimmer now! You’re not yet done gutting it, however. Delete the following lines from the GameViewController class:

// The scene draws the tiles and cookie sprites, and handles swipes.
var scene: GameScene!
 
@IBOutlet weak var targetLabel: UILabel!
@IBOutlet weak var movesLabel: UILabel!
@IBOutlet weak var scoreLabel: UILabel!
@IBOutlet weak var gameOverPanel: UIImageView!
@IBOutlet weak var shuffleButton: UIButton!
 
var tapGestureRecognizer: UITapGestureRecognizer!

You’ll deal with the shuffleButtonPressed() method in a moment.

Before that, you’ll need a reference to the new GameController object within the GameViewController. Add the following property declaration to the GameViewController class:

var gameController: GameController!

Create an instance of this class at the end of viewDidLoad() with the following code:

gameController = GameController(skView: skView)

There’s now some housekeeping to do in the iOS storyboard. Open Main.storyboard and delete all the labels, the image view and the shuffle button so the game view controller becomes a blank canvas:

CookieCrunch-OSX-EmtpyIOSstoryboard

The new Sprite Kit-based components will be created in the GameController class.

In GameController.swift, at the // 2 comment, paste this code:

let targetLabel = ShadowedLabelNode(fontNamed: "GillSans-Bold", fontSize: 22, color: SKColor.whiteColor(), shadowColor: SKColor.blackColor())
let movesLabel = ShadowedLabelNode(fontNamed: "GillSans-Bold", fontSize: 22, color: SKColor.whiteColor(), shadowColor: SKColor.blackColor())
let scoreLabel = ShadowedLabelNode(fontNamed: "GillSans-Bold", fontSize: 22,  color: SKColor.whiteColor(), shadowColor: SKColor.blackColor())
 
var shuffleButton: ButtonNode!
var gameOverPanel: SKSpriteNode!
 
var tapOrClickGestureRecognizer: CCTapOrClickGestureRecognizer!

This code creates the three labels that display the level target, remaining moves and current score. It then declares properties for the shuffleButton, gameOverPanel and the tapOrClickGestureRecognizer which will handle the rest of the user interaction.

To create the labels for target, moves and score, paste the following into GameController.swift at the // 5 comment:

let nameLabelY = scene.size.height / 2 - 30
let infoLabelY = nameLabelY - 34
 
let targetNameLabel = ShadowedLabelNode(fontNamed: "GillSans-Bold", fontSize: 16, color: SKColor.whiteColor(), shadowColor: SKColor.blackColor())
targetNameLabel.text = "Target:"
targetNameLabel.position = CGPoint(x: -scene.size.width / 3, y: nameLabelY)
scene.addChild(targetNameLabel)
 
let movesNameLabel = ShadowedLabelNode(fontNamed: "GillSans-Bold", fontSize: 16, color: SKColor.whiteColor(), shadowColor: SKColor.blackColor())
movesNameLabel.text = "Moves:"
movesNameLabel.position = CGPoint(x: 0, y: nameLabelY)
scene.addChild(movesNameLabel)
 
let scoreNameLabel = ShadowedLabelNode(fontNamed: "GillSans-Bold", fontSize: 16, color: SKColor.whiteColor(), shadowColor: SKColor.blackColor())
scoreNameLabel.text = "Score:"
scoreNameLabel.position = CGPoint(x: scene.size.width / 3, y: nameLabelY)
scene.addChild(scoreNameLabel)

This code first determines the y locations for the name labels (“Target:”, “Moves:” and “Score:”) and the value labels. Since the scene’s anchor point is its center, the y location is half the scene height minus a small offset, which puts the labels just below the top of the view. The value labels sit 34 points below the heading labels.

For each label that is created, its text and position (the x position is relative to the center of the scene) are set, and the label is added to the scene.
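For a concrete feel for these numbers, assume a hypothetical 568-point-tall scene with its anchor point at the center; the y-coordinate math then works out like this:

```swift
// Worked example; 568 is an assumed scene height, not taken from the project.
let sceneHeight = 568.0
let topEdgeY = sceneHeight / 2          // 284: the top of a center-anchored scene
let nameLabelY = topEdgeY - 30          // 254: heading labels sit just below the top
let infoLabelY = nameLabelY - 34        // 220: value labels sit 34 points lower
```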

Add the following code just below the code you just added:

targetLabel.position = CGPoint(x: -scene.size.width / 3, y: infoLabelY)
scene.addChild(targetLabel)
movesLabel.position = CGPoint(x: 0, y: infoLabelY)
scene.addChild(movesLabel)
scoreLabel.position = CGPoint(x: scene.size.width / 3, y: infoLabelY)
scene.addChild(scoreLabel)

This code sets the positions of the value labels and adds them to the scene.

To create the shuffle button, add the following code just below the code you just added:

shuffleButton = ButtonNode(texture: SKTexture(imageNamed: "Button"))
shuffleButton.position = CGPoint(x: 0, y:  -scene.size.height / 2 + shuffleButton.size.height)
 
let nameLabel = ShadowedLabelNode(fontNamed: "GillSans-Bold", fontSize: 20, color: SKColor.whiteColor(), shadowColor: SKColor.blackColor())
nameLabel.text = "Shuffle"
nameLabel.verticalAlignmentMode = .Center
 
shuffleButton.addChild(nameLabel)
scene.addChild(shuffleButton)
shuffleButton.hidden = true

This creates the button node, positions it just above the bottom of the scene, and adds the text “Shuffle” by using another ShadowedLabelNode as a child of the button. By setting center vertical alignment on the label, it will be rendered properly centered on its parent button node. (By default, labels are aligned on their text’s baseline.) The button is added to the scene, but is initially hidden.

To set up the button’s action, add the following code just below the code you just added:

shuffleButton.action = { (button) -> Void in
  // shuffle button pressed!
}

Ok, what should go here? That’s right – the contents of the shuffleButtonPressed() method from the GameViewController class. Move the contents of that method into the shuffleButton action closure (you will need to prefix each method call with self as well), so it looks like this:

shuffleButton.action = { (button) -> Void in
  self.shuffle()
  self.decrementMoves()
}

As it is no longer needed, delete the shuffleButtonPressed() method from the GameViewController class. That class is looking rather svelte now, don’t you think?

Ok, just the game over panel and starting a new game left to do before the iOS version of the game is running again.

Above, you changed the gameOverPanel to be an SKSpriteNode. Find the decrementMoves() function in the GameController class and replace:

gameOverPanel.image = UIImage(named: "LevelComplete")

with:

gameOverPanel = SKSpriteNode(imageNamed: "LevelComplete")

Likewise, replace:

gameOverPanel.image = UIImage(named: "GameOver")

with:

gameOverPanel = SKSpriteNode(imageNamed: "GameOver")

In showGameOver(), you need to add the gameOverPanel to the scene instead of unhiding it. So, replace:

gameOverPanel.hidden = false

with:

scene.addChild(gameOverPanel!)

To use the cross-platform CCTapOrClickGestureRecognizer to handle starting a new game, replace the contents of the animateGameOver() closure in showGameOver() with the following:

self.tapOrClickGestureRecognizer = CCTapOrClickGestureRecognizer(target: self, action: "hideGameOver")
self.view.addGestureRecognizer(self.tapOrClickGestureRecognizer)

In hideGameOver(), replace all references to tapGestureRecognizer with tapOrClickGestureRecognizer. And, instead of hiding the gameOverPanel you need to remove it from the scene and clean up by setting it to nil. Replace:

gameOverPanel.hidden = true

With:

gameOverPanel.removeFromParent()
gameOverPanel = nil

Build and run the iOS target. If everything went well, you should be able to play the game just as before. But now you’ll be doing it entirely using Sprite Kit-based UI components!

CookieCrunch-iOS-Simulator-SpriteKitConversion

Getting the game running on OS X

As you might imagine, after all that work, you must be really close to getting the OS X version running. And you’d be right!

Switch the current build target to CookieCrunch Mac and invoke a build. There should only be three errors, all instances of the same problem that you’ll have to deal with now:

Swift Compiler Error ‘SKView’ does not have a member named ‘userInteractionEnabled’

Under iOS, the UIView class declares this property. Unfortunately, NSView doesn’t. But all is not lost: with a custom subclass of SKView it is possible to implement this property yourself.

Checking the documentation for the NSView class, you will find the hitTest function. From the discussion in the documentation:

This method is used primarily by an NSWindow object to determine which view should receive a mouse-down event. You’d rarely need to invoke this method, but you might want to override it to have a view object hide mouse-down events from its subviews.

Ah, that sounds like what you want! If you override this function and return nil when user interaction should be disabled, the view won’t receive any mouse events. This should emulate exactly the userInteractionEnabled behaviour of UIKit.

In the CookieCrunch Mac group, create a new Swift file CCView.swift. Only add it to the OS X target and not iOS. Replace the contents of CCView.swift with the following:

import SpriteKit
 
@objc(CCView)
class CCView: SKView {
 
  var userInteractionEnabled: Bool = true
 
  override func hitTest(aPoint: NSPoint) -> NSView? {
    if userInteractionEnabled {
      return super.hitTest(aPoint)
    }
    return nil
  }
}

That looks pretty easy, doesn’t it? If user interaction is enabled, perform the normal hit testing functionality, otherwise return nil to say “nope! don’t send any mouse down events to me!”.

Note the line @objc(CCView) – this is needed so the compiled nib from Interface Builder can find and load the class. The @objc directive instructs the compiler that the CCView class should be accessible from Objective-C classes as well.

To use this class instead of the standard SKView you need to make changes in two places.

The first is in MainMenu.xib. Select it, then in the document outline, select the SKView object, which is contained inside a parent view:

OSX_CC_CCView

In the Identity Inspector change SKView to CCView as shown below:

OSX_CC_CustomClass

In the CookieCrunch Mac AppDelegate.swift, change the line:

@IBOutlet weak var skView: SKView!

to:

@IBOutlet weak var skView: CCView!

And in GameController.swift file in the shared group, change:

let view: SKView

to:

let view: CCView

And change:

init(skView: SKView) {

to:

init(skView: CCView) {

Now Build the OS X target. The errors you had previously should have gone!

Of course, some new errors have been introduced into the iOS target: it doesn’t yet know about anything called CCView. Fixing this is relatively simple.

Open GameViewController.swift and, just below the import statements, add the following line:

typealias CCView = SKView

SKView already has everything you need because it subclasses from UIView, so creating a type alias on iOS is all you need to do.

Now change the line in viewDidLoad():

let skView = view as SKView

to:

let skView = view as CCView

Build and run the iOS target to verify that it works again.

Now switch back to the OS X target and build and run. Surprised? All you see is a blank grey window. This is because no instance of the GameController is created in the CookieCrunch Mac AppDelegate!

In the CookieCrunch Mac group, open AppDelegate.swift. Add the following variable to the class:

var gameController: GameController!

This will store a strong reference to the game controller instance so it doesn’t get deallocated while the game is playing. Now, add one simple line to applicationDidFinishLaunching:

gameController = GameController(skView: skView)

Build and run, and voilà! Cookie Crunch running as a native Mac OS X application!

CookieCrunch-OSX-UnoptimisedWindow

OS X Finishing Touches

While the game runs just fine on OS X, there are a few issues you still have to deal with.

The first thing you’ll notice is that the window is much too big for the game. When you resize it, the game scales but the score labels can be cut off. To fix this, open MainMenu.xib. Select the CookieCrunch Mac window. In the Inspector area on the right, select the Size Inspector:

OSX_CC_MacMenu

Since the @2x background image works out to 320 x 568 points, set the initial window content size to match. To prevent resizing, set the minimum and maximum sizes to the same values:
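If you’re wondering where 320 x 568 comes from: an @2x asset has twice as many pixels as points in each dimension. Assuming the background asset is 640 x 1136 pixels (the iPhone 5 retina size; an assumption for this sketch), the conversion is:

```swift
// Assumed pixel size for the @2x background asset (iPhone 5 retina resolution).
let pixelWidth = 640.0
let pixelHeight = 1136.0
let scale = 2.0                         // the "@2x" in the asset name

let pointWidth = pixelWidth / scale     // 320 points
let pointHeight = pixelHeight / scale   // 568 points
```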

OSX_CC_MacWindowSize

Build and run. The game is now sized appropriately and does not allow resizing:

CookieCrunch-OSX-OptimisedWindow

Congratulations! Your OS X conversion of Cookie Crunch is complete!

Where to go from here?

You can grab the completed sample files for this project from here.

If you take a look over the project, you’ll see that the amount of platform-specific code is really quite low. By using Sprite Kit for the UI elements as well as the game itself, you have achieved an extremely high level of code reuse! Although designing the UI for a Sprite Kit game in a storyboard makes it easy to rapidly prototype on iOS, it makes porting to OS X much more difficult, so you might want to consider going Sprite Kit-only from the start.

If you have any comments or questions, feel free to join the discussion below!

How to Make a Game Like Candy Crush Tutorial: OS X Port is a post from: Ray Wenderlich

The post How to Make a Game Like Candy Crush Tutorial: OS X Port appeared first on Ray Wenderlich.

Video Tutorial: Custom Collection View Layouts Part 13: Timbre – Parallax Effect

Geofencing in iOS with Swift

Learn how to build a location-based reminder app in iOS with geofencing!

Let’s get geofencing!

Geofencing notifies your app when its device enters or leaves geographical regions you set up.

Here are two examples of what you can do with this:

  • Key finding app: You could make an app that triggers a notification whenever you leave home, reminding you to bring those perpetually elusive keys. :]
  • Deal finding app: You could also make an app that greets users with the latest and greatest deals whenever favorite shops are nearby.

In this tutorial, you’ll learn how to use geofencing in iOS with Swift – using the Region Monitoring API in Core Location.

In particular, you’ll create a location-based reminder app called Geotify that will let the user create reminders and associate them with real-world locations. Let’s get started!

Getting Started

Download the starter project. The project provides a simple user interface for adding/removing annotation items to/from a map view. Each annotation item represents a reminder with a location, or as I like to call it, a geotification. :]

Build and run the project, and you’ll see an empty map view.

Interface of Geotify

Tap on the + button on the navigation bar to add a new geotification. The app will present a separate view, allowing you to set up various properties for your geotification.

For this tutorial, you will add a pin on Apple’s headquarters in Cupertino. If you don’t know where it is, open this Google map in a separate tab and use it to hunt down the right spot. Be sure to zoom in to make the pin nice and accurate!

Note: To pinch to zoom on the simulator, hold down option, then hold shift temporarily to move the pinch center, then release shift and click-drag to pinch.

iOS Simulator Screen Shot 20 Feb 2015 10.18.19 pm

The radius represents the distance in meters from the specified location, at which iOS will trigger the notification. The note can be any message you wish to display during the notification.

Enter 1000 for the radius value and Say Hi to Tim! for the note, as shown below:

Screen Shot 2015-02-20 at 10.14.00 pm

The app also lets the user specify whether it should trigger the reminder upon either entry or exit of the defined circular geofence, via the segmented control at the top. Leave it as Upon Entry for your first geotification, as shown below:

Screen Shot 2015-02-20 at 10.14.05 pm

Click Add once you’re satisfied with all the values. You’ll see your geotification appear as a new annotation pin on the map view, with a circle around it denoting the defined geofence:

A Geotification

Tap on the pin and you’ll reveal the geotification’s details, such as the reminder note and the event type you specified earlier. Don’t tap on the little cross unless you want to delete the geotification!

Geotification

Feel free to add or remove as many geotifications as you want. As the app uses NSUserDefaults as a persistence store, the list of geotifications will persist between relaunches.

Now that you have a better appreciation of how the app works from the user’s perspective, you’ll quickly walk through some of the main classes to get up to speed with the project’s structure.

Starter Project Tour

Open Geotification.swift in Xcode. This class represents the basic model class for a geotification and contains all the properties that you saw were configurable via the user interface.

Perhaps you’ve noticed the presence of an additional identifier string property that isn’t exposed in the user interface. The app will use this string internally to uniquely identify each geotification.

In addition, the class adopts the NSCoding protocol to facilitate serializing and deserializing of the object for persistence purposes. As you also need to present the geotification visually as a map annotation, the class adopts the MKAnnotation protocol by implementing the coordinate, title and subtitle properties.
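Putting those pieces together, the heart of the class looks something like the sketch below. This is illustrative, not a verbatim copy of the starter file: the coding key names and the assumption that EventType has an Int raw value are mine, and the title/subtitle implementations are elided.

```swift
import MapKit

// Abbreviated sketch of the starter's model class.
class Geotification: NSObject, NSCoding, MKAnnotation {
  var coordinate: CLLocationCoordinate2D   // MKAnnotation requirement
  var radius: CLLocationDistance
  var identifier: String                   // internal unique ID, not shown in the UI
  var note: String
  var eventType: EventType                 // .OnEntry or .OnExit

  init(coordinate: CLLocationCoordinate2D, radius: CLLocationDistance,
       identifier: String, note: String, eventType: EventType) {
    self.coordinate = coordinate
    self.radius = radius
    self.identifier = identifier
    self.note = note
    self.eventType = eventType
  }

  // MARK: NSCoding – serializes the properties so the object can be
  // archived into NSUserDefaults (key names are illustrative).
  required init(coder decoder: NSCoder) {
    let latitude = decoder.decodeDoubleForKey("latitude")
    let longitude = decoder.decodeDoubleForKey("longitude")
    coordinate = CLLocationCoordinate2DMake(latitude, longitude)
    radius = decoder.decodeDoubleForKey("radius")
    identifier = decoder.decodeObjectForKey("identifier") as String
    note = decoder.decodeObjectForKey("note") as String
    eventType = EventType(rawValue: decoder.decodeIntegerForKey("eventType"))!
  }

  func encodeWithCoder(coder: NSCoder) {
    coder.encodeDouble(coordinate.latitude, forKey: "latitude")
    coder.encodeDouble(coordinate.longitude, forKey: "longitude")
    coder.encodeDouble(radius, forKey: "radius")
    coder.encodeObject(identifier, forKey: "identifier")
    coder.encodeObject(note, forKey: "note")
    coder.encodeInteger(eventType.rawValue, forKey: "eventType")
  }

  // title and subtitle for the map callout are omitted here for brevity.
}
```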

Next, take a look at AddGeotificationViewController.swift. As you saw earlier, this view controller lets the user add a new geotification. First the controller does some basic validation on radiusTextField and noteTextField to ensure non-empty inputs. If it deems both inputs to be valid, the controller enables the Add button.

Finally, take a closer look at onAdd(_:), which is triggered when the user clicks the Add button. Besides grabbing the location coordinates, event type (represented by an enum value of either OnEntry or OnExit), radius and note values from the relevant controls, the method also generates a UUID for the identifier by using the NSUUID class.

The method passes all the values to the class’s delegate and main view controller, GeotificationsViewController, which creates a new geotification based on these values. Then the method adds the geotification to a geotifications list as well as to the main map view.

Note: UUID stands for Universally Unique Identifier. A UUID is often represented as 32 hexadecimal digits; for example: 68753A44-4D6F-1226-9C60-0050E4C00067. The NSUUID class provides a convenient means to generate random UUID strings, which you can use to uniquely identify your objects.
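Generating one of these identifiers takes a single line:

```swift
import Foundation

let identifier = NSUUID().UUIDString
// A new random value on each call, e.g. "E621E1F8-C36C-495A-93FC-0C247A3E6E5F"
```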

Whew! Now that you feel more at home with the starter code, let’s move on to the crux of this tutorial: integrating geofencing into the project!

Setting Up a Location Manager and Permissions

At this point, any geotifications you’ve added to the map view are only for visualization. You’ll fix this by taking each geotification and registering its associated geofence with Core Location for monitoring.

Before any geofence monitoring can happen, though, you need to set up a Location Manager instance and request the appropriate permissions.

Open GeotificationsViewController.swift and add the following line at the top of the file to import the CoreLocation framework:

import CoreLocation

Next, ensure that the GeotificationsViewController conforms to the CLLocationManagerDelegate protocol and declare a constant instance of a CLLocationManager near the top of the class, as shown below:

class GeotificationsViewController: UIViewController, AddGeotificationsViewControllerDelegate, MKMapViewDelegate, CLLocationManagerDelegate { // Add this protocol
 
  @IBOutlet weak var mapView: MKMapView!
 
  var geotifications = [Geotification]()
  let locationManager = CLLocationManager() // Add this statement
 
  ...
}

Next, replace viewDidLoad() with the following code:

override func viewDidLoad() {
  super.viewDidLoad()
 
  // 1
  locationManager.delegate = self
  // 2
  locationManager.requestAlwaysAuthorization()
  // 3
  loadAllGeotifications()
}

Let’s run through this method step by step:

  1. You set the view controller as the delegate of the locationManager instance so that the view controller can receive the relevant delegate method calls.
  2. You make a call to requestAlwaysAuthorization(), which invokes a prompt to the user requesting for Always authorization to use location services. Apps with geofencing capabilities need Always authorization, due to the need to monitor geofences even when the app isn’t running.
  3. You call loadAllGeotifications(), which deserializes the list of geotifications previously saved to NSUserDefaults and loads them into a local geotifications array. The method also loads the geotifications as annotations on the map view.

Finally, open Info.plist in the Supporting Files group of your project. Add the key NSLocationAlwaysUsageDescription and set its value to Geotify requires access to your phone’s location to notify you when you enter or leave a geofence.

Add_NSLocationAlwaysUsageDescription

Tip: To quickly add a key to a property list, right-click on the empty space below the existing rows and select Add Row.

When the app prompts the user for authorization, it will show NSLocationAlwaysUsageDescription, a user-friendly explanation of why the app requires access to the user’s location. This key is mandatory when you request authorization for location services. If it’s missing, the system will ignore the request and prevent location services from starting altogether.
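If you prefer to edit the plist directly (right-click Info.plist and choose Open As\Source Code), the entry looks like this:

```xml
<key>NSLocationAlwaysUsageDescription</key>
<string>Geotify requires access to your phone’s location to notify you when you enter or leave a geofence.</string>
```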

Build and run the project, and you’ll see a user prompt with the aforementioned description you’ve just set:

Requesting Always Authorization

You’ve set up your app to request the required permission. Great!

Before you proceed to implement the geofencing, there’s a small issue you have to resolve. You might have noticed that the user’s current location isn’t showing up on the map view. This feature is disabled, and as a result, the zoom button on the top-left of the navigation bar doesn’t work.

At first glance, the fix for this seems straightforward: You could simply set the showsUserLocation property of your map view to true in viewDidLoad() or in the storyboard.

However, there’s a slight issue with this strategy, as is evident by the warning you’ll get in your console if you try to implement it. When the user first launches the app, the map view may attempt to fetch the current location before the app is authorized to use Location Services.

Fortunately, the real fix is not difficult, either—you’ll simply enable the current location only after the app is authorized.

In GeotificationsViewController.swift, add the following delegate method at the bottom of the class:

func locationManager(manager: CLLocationManager!, didChangeAuthorizationStatus status: CLAuthorizationStatus) {
  mapView.showsUserLocation = (status == .AuthorizedAlways)
}

The location manager calls locationManager(_:didChangeAuthorizationStatus:) whenever the authorization status changes. If the user has already granted the app permission to use Location Services, this method will be called by the location manager after you’ve initialized the location manager and set its delegate.

That makes this method an ideal place to check if the app is authorized. If it is, you enable the map view to show the user’s current location.

Build and run the app. If you’re running it on a device, you’ll see the blue location marker appear on the main map view. If you’re running on the simulator, click Debug\Location\Apple in the menu to see the blue location marker:

view-location-on-simulator

In addition, the zoom button on the navigation bar now works. :]

zoom-in-simulator

Registering Your Geofences

With the location manager properly configured, the next order of business is to allow your app to register user geofences for monitoring.

In your app, the user geofence information is stored within your custom Geotification model. However, Core Location requires each geofence to be represented as a CLCircularRegion instance before it can be registered for monitoring. To handle this requirement, you’ll create a helper method that returns a CLCircularRegion from a given Geotification object.

Open GeotificationsViewController.swift and add the following method:

func regionWithGeotification(geotification: Geotification) -> CLCircularRegion {
  // 1
  let region = CLCircularRegion(center: geotification.coordinate, radius: geotification.radius, identifier: geotification.identifier)
  // 2
  region.notifyOnEntry = (geotification.eventType == .OnEntry)
  region.notifyOnExit = !region.notifyOnEntry
  return region
}

Here’s what the above method does:

  1. You initialize a CLCircularRegion with the location of the geofence, the radius of the geofence and an identifier that allows iOS to distinguish between the registered geofences of a given app. The initialization is rather straightforward, as the Geotification model already contains the required properties.
  2. The CLCircularRegion instance also has two Boolean properties, notifyOnEntry and notifyOnExit. These flags specify whether geofence events will be triggered when the device enters and leaves the defined geofence, respectively. Since you’re designing your app to allow only one notification type per geofence, you set one of the flags to true while you set the other to false, based on the enum value stored in the Geotification object.

Next, you need a method to start monitoring a given geotification whenever the user adds one.

Add the following method to GeotificationsViewController.swift:

func startMonitoringGeotification(geotification: Geotification) {
  // 1
  if !CLLocationManager.isMonitoringAvailableForClass(CLCircularRegion.self) {
    showSimpleAlertWithTitle("Error", message: "Geofencing is not supported on this device!", viewController: self)
    return
  }
  // 2
  if CLLocationManager.authorizationStatus() != .AuthorizedAlways {
    showSimpleAlertWithTitle("Warning", message: "Your geotification is saved but will only be activated once you grant Geotify permission to access the device location.", viewController: self)
  }
  // 3
  let region = regionWithGeotification(geotification)
  // 4
  locationManager.startMonitoringForRegion(region)
}

Let’s walk through the method step by step:

  1. isMonitoringAvailableForClass(_:) determines if the device has the required hardware to support the monitoring of geofences. If monitoring is unavailable, you bail out entirely and alert the user accordingly. showSimpleAlertWithTitle(_:message:viewController:) is a helper function in Utilities.swift that takes in a title and message and displays an alert view.
  2. Next, you check the authorization status to ensure that the app has also been granted the required permission to use Location Services. If the app isn’t authorized, it won’t receive any geofence-related notifications. However, in this case, you’ll still allow the user to save the geotification, since Core Location lets you register geofences even when the app isn’t authorized. When the user subsequently grants authorization to the app, monitoring for those geofences will begin automatically.
  3. You create a CLCircularRegion instance from the given geotification using the helper method you defined earlier.
  4. Finally, you register the CLCircularRegion instance with Core Location for monitoring.

With your start method done, you also need a method to stop monitoring a given geotification when the user removes it from the app.

In GeotificationsViewController.swift, add the following method below startMonitoringGeotification(_:):

func stopMonitoringGeotification(geotification: Geotification) {
  for region in locationManager.monitoredRegions {
    if let circularRegion = region as? CLCircularRegion {
      if circularRegion.identifier == geotification.identifier {
        locationManager.stopMonitoringForRegion(circularRegion)
      }
    }
  }
}

The method simply instructs the locationManager to stop monitoring the CLCircularRegion associated with the given geotification.

Now that you have both the start and stop methods complete, you’ll use them whenever you add or remove a geotification. You’ll begin with the adding part.

First, take a look at addGeotificationViewController(_:didAddCoordinate:) in GeotificationsViewController.swift.

The method is the delegate call invoked by the AddGeotificationViewController upon creating a geotification; it’s responsible for creating a new Geotification object using the values passed from AddGeotificationsViewController, and updating both the map view and the geotifications list accordingly. Then you call saveAllGeotifications(), which takes the newly-updated geotifications list and persists it via NSUserDefaults.

Now, replace the contents of the method with the following code:

func addGeotificationViewController(controller: AddGeotificationViewController, didAddCoordinate coordinate: CLLocationCoordinate2D, radius: Double, identifier: String, note: String, eventType: EventType) {
  controller.dismissViewControllerAnimated(true, completion: nil)
  // 1
  let clampedRadius = min(radius, locationManager.maximumRegionMonitoringDistance)
 
  let geotification = Geotification(coordinate: coordinate, radius: clampedRadius, identifier: identifier, note: note, eventType: eventType)
  addGeotification(geotification)
  // 2
  startMonitoringGeotification(geotification)
 
  saveAllGeotifications()
}

You’ve made two key changes to the code:

  1. You ensure that the value of the radius is clamped to the maximumRegionMonitoringDistance property of locationManager, which is defined as the largest radius in meters that can be assigned to a geofence. This is important, as any value that exceeds this maximum will cause monitoring to fail.
  2. You add a call to startMonitoringGeotification(_:) to ensure that the geofence associated with the newly-added geotification is registered with Core Location for monitoring.

At this point, the app is fully capable of registering new geofences for monitoring. There is, however, a limitation: As geofences are a shared system resource, Core Location restricts the number of registered geofences to a maximum of 20 per app.

While there are workarounds to this limitation (See Where to Go From Here? for a short discussion), for the purposes of this tutorial, you’ll take the approach of limiting the number of geotifications the user can add.

Add a line to updateGeotificationsCount(), as shown in the code below:

func updateGeotificationsCount() {
  title = "Geotifications (\(geotifications.count))"
  navigationItem.rightBarButtonItem?.enabled = (geotifications.count < 20)  // Add this line
}

This line disables the Add button in the navigation bar whenever the app reaches the limit.

Finally, let’s deal with the removal of geotifications. This functionality is handled in mapView(_:annotationView:calloutAccessoryControlTapped:), which is invoked whenever the user taps the “delete” accessory control on each annotation.

Add a call to stopMonitoringGeotification(_:) to mapView(_:annotationView:calloutAccessoryControlTapped:), as shown below:

func mapView(mapView: MKMapView!, annotationView view: MKAnnotationView!, calloutAccessoryControlTapped control: UIControl!) {
  // Delete geotification
  var geotification = view.annotation as Geotification
  stopMonitoringGeotification(geotification)   // Add this statement
  removeGeotification(geotification)
  saveAllGeotifications()
}

The additional statement stops monitoring the geofence associated with the geotification, before removing it and saving the changes to NSUserDefaults.

At this point, your app is fully capable of monitoring and un-monitoring user geofences. Hurray!

Build and run the project. You won’t see any changes, but the app will now be able to register geofence regions for monitoring. However, it won’t be able to react to any geofence events just yet. Not to worry—that will be your next order of business!

Reacting to Geofence Events

You’ll start by implementing some of the delegate methods to facilitate error handling – these are important to add in case anything goes wrong.

In GeotificationsViewController.swift, add the following methods at the bottom of the class:

func locationManager(manager: CLLocationManager!, monitoringDidFailForRegion region: CLRegion!, withError error: NSError!) {
  println("Monitoring failed for region with identifier: \(region.identifier)")
}
 
func locationManager(manager: CLLocationManager!, didFailWithError error: NSError!) {
  println("Location Manager failed with the following error: \(error)")
}

These delegate methods simply log any errors that the location manager encounters to facilitate your debugging.

Note: You’ll definitely want to handle these errors more robustly in your production apps. For example, instead of failing silently, you could inform the user what went wrong.

Next, open AppDelegate.swift; this is where you’ll add code to properly listen and react to geofence entry and exit events.

Add the following line at the top of the file to import the CoreLocation framework:

import CoreLocation

Ensure that the AppDelegate conforms to the CLLocationManagerDelegate protocol and declare a CLLocationManager instance near the top of the class, as shown below:

class AppDelegate: UIResponder, UIApplicationDelegate, CLLocationManagerDelegate { // Add this protocol
  var window: UIWindow?
 
  let locationManager = CLLocationManager() // Add this statement
  ...
}

Add the following two lines to application(_:didFinishLaunchingWithOptions:), as shown below:

func application(application: UIApplication, didFinishLaunchingWithOptions launchOptions: [NSObject: AnyObject]?) -> Bool {
  locationManager.delegate = self                // Add this line
  locationManager.requestAlwaysAuthorization()   // And this one
  return true
}

You’ve set up your AppDelegate to receive geofence-related events. But you might wonder, “Why did I designate the AppDelegate to do this instead of the view controller?”

Geofences registered by an app are monitored at all times, including when the app isn’t running. If the device triggers a geofence event while the app isn’t running, iOS automatically relaunches the app directly into the background. This makes the AppDelegate an ideal entry point to handle the event, as the view controller may not be loaded or ready.

Now you might also wonder, “How will a newly-created CLLocationManager instance be able to know about the monitored geofences?”

It turns out that all geofences registered by your app for monitoring are conveniently accessible by all location managers in your app, so it doesn’t matter where the location managers are initialized. Pretty nifty, right? :]

Now all that’s left is to implement the relevant delegate methods to react to the geofence events. Before you do so, you’ll create a method to handle a geofence event.

Add the following method to AppDelegate.swift:

func handleRegionEvent(region: CLRegion!) {
  println("Geofence triggered!")
}

At this point, the method takes in a CLRegion and simply logs a statement. Not to worry—you’ll implement the event handling later.

Next, add the following delegate methods in AppDelegate.swift, as well as a call to the handleRegionEvent(_:) function you just created, as shown in the code below:

func locationManager(manager: CLLocationManager!, didEnterRegion region: CLRegion!) {
  if region is CLCircularRegion {
    handleRegionEvent(region)
  }
}
 
func locationManager(manager: CLLocationManager!, didExitRegion region: CLRegion!) {
  if region is CLCircularRegion {
    handleRegionEvent(region)
  }
}

As the method names aptly suggest, you fire locationManager(_:didEnterRegion:) when the device enters a CLRegion, while you fire locationManager(_:didExitRegion:) when the device exits a CLRegion.

Both methods return the CLRegion in question, which you need to check to ensure it’s a CLCircularRegion, since it could be a CLBeaconRegion if your app happens to be monitoring iBeacons, too. If the region is indeed a CLCircularRegion, you accordingly call handleRegionEvent(_:).

Note: A geofence event is triggered only when iOS detects a boundary crossing. If the user is already within a geofence at the point of registration, iOS won’t generate an event. If you need to query whether the device location falls within or outside a given geofence, Apple provides a method called requestStateForRegion(_:).
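If your app does need that initial state, the query and its answer might look like this (a sketch – the result arrives asynchronously via a delegate callback, not as a return value):

```swift
// Ask Core Location for the device's current state relative to `region`.
locationManager.requestStateForRegion(region)

// The answer arrives in this delegate method:
func locationManager(manager: CLLocationManager!, didDetermineState state: CLRegionState, forRegion region: CLRegion!) {
  if state == .Inside {
    println("Device is already inside region \(region.identifier)")
  }
}
```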

Now that your app is able to receive geofence events, you’re ready to give it a proper test run. If that doesn’t excite you, it really ought to, because for the first time in this tutorial, you’re going to see some results. :]

The most accurate way to test your app is to deploy it on your device, add some geotifications and take the app for a walk or a drive. However, it wouldn’t be wise to do so right now, as you wouldn’t be able to verify the print logs emitted by the geofence events with the device unplugged. Besides, it would be nice to get assurance that the app works before you commit to taking it for a spin.

Fortunately, there’s an easy way do this without leaving the comfort of your home. Xcode lets you include a hardcoded waypoint GPX file in your project that you can use to simulate test locations. Lucky for you, the starter project includes one for your convenience. :]

Open up TestLocations.gpx, which you can find in the Supporting Files group, and inspect its contents. You’ll see the following:

<?xml version="1.0"?>
<gpx version="1.1" creator="Xcode">
  <wpt lat="37.422" lon="-122.084058">
    <name>Google</name>
  </wpt>
  <wpt lat="37.3270145" lon="-122.0310273">
    <name>Apple</name>
  </wpt>
</gpx>

The GPX file is essentially an XML file that contains two waypoints: Google’s Googleplex in Mountain View and Apple’s Headquarters in Cupertino.

To begin simulating the locations in the GPX file, build and run the project. When the app launches the main view controller, go back to Xcode, select the Location icon in the Debug bar and choose TestLocations:

TestLocations

Back in the app, use the Zoom button on the top-left of the navigation bar to zoom to the current location. Once you get close to the area, you’ll see the blue location marker moving repeatedly from the Googleplex to Apple, Inc. and back:

2simulatedlocations

Test the app by adding a few geotifications along the path defined by the two waypoints. If you added any geotifications earlier in the tutorial before you enabled geofence registration, those geotifications will obviously not work, so you might want to clear them out and start afresh.

For the test locations, it’s a good idea to place a geotification roughly at each waypoint. Here’s a possible test scenario:

  • Google: Radius: 1000m, Message: “Say Bye to Google!”, Notify on Exit
  • Apple: Radius: 1000m, Message: “Say Hi to Apple!”, Notify on Entry

iOS Simulator Screen Shot 4 Mar 2015 6.01.25 pm

Once you’ve added your geotifications, you’ll see a log in the console each time the blue location marker enters or leaves a geofence. If you press the Home button or lock the screen to send the app to the background, you’ll still see the logs each time the device crosses a geofence, though you obviously won’t be able to verify that behavior visually.

GeofenceTriggered

Note: Location simulation works both in iOS Simulator and on a real device. However, iOS Simulator can be quite inaccurate in this case; the timings of the triggered events do not coincide very well with the visual movement of the simulated location in and out of each geofence. You would do better to simulate locations on your device, or better still, take the app for a walk!

Notifying the User of Geofence Events

You’ve made a lot of progress with the app. At this point, it simply remains for you to notify the user whenever the device crosses the geofence of a geotification—so prepare yourself to do just that.

To obtain the note associated with a triggering CLCircularRegion returned by the delegate calls, you need to retrieve the corresponding geotification that was persisted in NSUserDefaults. This turns out to be trivial, as you can use the unique identifier you assigned to the CLCircularRegion during registration to find the right geotification.

In AppDelegate.swift, add the following helper method at the bottom of the file:

func notefromRegionIdentifier(identifier: String) -> String? {
  if let savedItems = NSUserDefaults.standardUserDefaults().arrayForKey(kSavedItemsKey) {
    for savedItem in savedItems {
      if let geotification = NSKeyedUnarchiver.unarchiveObjectWithData(savedItem as NSData) as? Geotification {
        if geotification.identifier == identifier {
          return geotification.note
        }
      }
    }
  }
  return nil
}

This helper method retrieves the geotification note from the persistence store, given the geotification identifier. It fetches and unarchives the stored geotifications from NSUserDefaults and loops through each geotification by comparing its identifier with the input identifier. Once the method finds the geotification, it returns the accompanying note.

Now that you’re able to retrieve the note associated with a geofence, you’ll write code to trigger a notification whenever a geofence event is fired, using the note as the message.

Add the following statements to the end of application(_:didFinishLaunchingWithOptions:), just before the method returns:

application.registerUserNotificationSettings(UIUserNotificationSettings(forTypes: .Sound | .Alert | .Badge, categories: nil))
UIApplication.sharedApplication().cancelAllLocalNotifications()

The code you’ve added prompts the user for permission to enable notifications for this app. In addition, it does some housekeeping by clearing out all existing notifications.

Next, replace the contents of handleRegionEvent(_:) with the following:

func handleRegionEvent(region: CLRegion!) {
  // Show an alert if application is active
  if UIApplication.sharedApplication().applicationState == .Active {
    if let message = notefromRegionIdentifier(region.identifier) {
      if let viewController = window?.rootViewController {
        showSimpleAlertWithTitle(nil, message: message, viewController: viewController)
      }
    }
  } else {
  // Otherwise present a local notification
    var notification = UILocalNotification()
    notification.alertBody = notefromRegionIdentifier(region.identifier)
    notification.soundName = UILocalNotificationDefaultSoundName
    UIApplication.sharedApplication().presentLocalNotificationNow(notification)
  }
}

If the app is active, the code above simply shows an alert controller with the note as the message. Otherwise, it presents a local notification with the same message.

Build and run the project, and run through the test procedure covered in the previous section. Whenever your test triggers a geofence event, you’ll see an alert controller displaying the reminder note:

Say Bye to Google!

Send the app to the background by activating the Home button or locking the device while the test is running. You’ll continue to receive notifications periodically that signal geofence events:

Geotify Notifications on Lock screen

And with that, you have a fully functional, location-based reminder app in your hands. And yes, get out there and take that app for a spin!

Note: When you test the app, you may encounter situations where the notifications don’t fire exactly at the point of boundary crossing.

This is because before iOS considers a boundary as crossed, there is an additional cushion distance that must be traversed and a minimum time period that the device must linger at the new location. iOS internally defines these thresholds, seemingly to mitigate the spurious firing of notifications in the event the user is traveling very close to a geofence boundary.

As well, these thresholds seem to be affected by the available location hardware capabilities. From experience, the geofencing behavior is a lot more accurate when Wi-Fi is enabled on the device.

Where to Go From Here?

You can download the complete project for this tutorial here.

Congratulations—you’re now equipped with the basic knowledge you need to build your own geofencing-enabled apps!

Geofencing is a powerful technology with many practical and far-reaching applications in such realms as marketing, resource management, security, parental control and even gaming—what you can achieve is really up to your imagination.

If you’re looking for additional challenges or learning opportunities, consider the following:

  • Allow the user to edit an existing geotification.
  • Enable the app to handle more than 20 geofences by selectively monitoring geofences that are in the user’s vicinity. One way to achieve this is by monitoring for significant location changes and selecting the 20 geofences closest to the user’s current location.
  • Re-implement the app using iOS 8’s new location-based notifications. Check out the relevant documentation for more details.
  • Check out our iBeacons tutorial!

I hope you’ve enjoyed this tutorial, and I really look forward to seeing how you use geofencing in your apps. Feel free to leave a comment or question below!

Geofencing in iOS with Swift is a post from: Ray Wenderlich



Flappy Felipe Video Tutorial Series: Updated for Swift!

Learn how to make a game like Flappy Bird with Swift!


This is just a quick heads up that we have updated our popular Flappy Felipe video tutorial series to Swift!

In this video tutorial series, you’ll learn how to make a game like Flappy Bird with Swift and Sprite Kit.

But instead of just a normal bird, you have an epic pink bird, modeled after the ever-lovable Felipe Laso Marsetti from our tutorial team :]

Here’s what’s new in this update:

  • Fully up-to-date: The entire series is now updated for Swift, Xcode 6.2, and iOS 8
  • Shorter videos: To keep each part of the series shorter, we split the content into more parts (previously 5 parts; now 12)
  • New section: We added a brand new section on Screen Size and Aspect Ratios, discussing how to get the game to work on various screen sizes

If you are a video tutorial subscriber, all 12 parts are available now. Here’s a link to the first part:

We hope you enjoy this video tutorial series, and happy flapping! :]

Flappy Felipe Video Tutorial Series: Updated for Swift! is a post from: Ray Wenderlich


Google Glass App Tutorial

Google Glass - one step closer to the Terminator.


Google Glass is a headset you wear like a pair of eyeglasses, except it has optical display.

Not only does it make you feel like a bad-ass Terminator, but you can make apps for it too!

In this tutorial, you’ll get a head start with Google Glass development by learning how to make a simple Google Glass shopping list app. In the process, you’ll learn:

  • The difference between Simple Live Cards and Immersion Activities
  • Using Google’s GDK (Glass Development Kit)
  • Testing on the device
  • How to change the app’s theme
  • How to create custom voice commands

Come with me if you want to live – in the world of Google Glass!

Note: This Google Glass app tutorial assumes you know the basics of Android development. If you are new to Android development, check out our Android Tutorial for Beginners series first.

Developing for Google Glass

Get a head start with Google Glass!

Attribution: Matt Brown

Google Glass development is unlike any other kind of mobile development.

Because the device is on your face and not in your hand, touch isn’t the main source of input. Imagine tapping, pinching and swiping on a pair of glasses; you’d have to clean them every two seconds!

Google Glass is meant to free your hands, and as such, voice input is the main source of interaction with Glass apps.

Another distinct difference is that because the display is so much smaller than on hand-held devices, you have less real estate to work with when showing information. As a result, user interfaces tend to be plain and simple.

You may have heard that Google is no longer accepting new “Explorers” to purchase early copies of Google Glass. Although this means you can’t get access to a Google Glass if you don’t have one already, the project is far from dead.

Think of it as Google Glass’s way of saying “I’ll be back.”

There’s been a recent change of leadership and direction, so you can expect Glass 2.0 to surface within the next year or two. Wearables are the future of technology, and Glass is anything but a thing of the past.

Note: Because there is no simulator, you’ll need an actual Google Glass to test your work. If you don’t have one, you can at least read along to see what’s involved :]

Getting Started

Google Glass runs a modified version of Android, and much of the code and many of the tools you’ll use are the same.

Install Android Studio by following the instructions in this Android for Beginners tutorial.

When you launch Android Studio for the first time, the main menu will come up. By default, Android Studio doesn’t install all the tools necessary for Glass development.

To download these tools, press Configure\SDK Manager. Under Android 4.4.2 (API 19), choose SDK Platform and Glass Development Kit Preview. Deselect everything else.


Press Install 2 packages and accept the license for each tool. Once downloading finishes, close out of the SDK manager.

Two Types of Glass Apps

There are two different types of Google Glass apps:

  • Live Cards exist inside the timeline, so the user doesn’t have to enter or exit the app. They are minimally dependent on user interaction. One example is Google+, which uses a Live Card to show posts on the timeline.
  • Immersion Activities take up the entire Glass display and require the user to exit in order to return to the main timeline. These behave more like apps for your phone.

You’ll be creating an Immersion Activity in this Google Glass app tutorial.

Note: If you worked through our WatchKit book or tutorials, you may notice that these two different types are a little familiar; they very much resemble WatchKit’s apps and glances!

Interactions In the Moment

Though they are rivals, Glass apps are remarkably similar to Apple Watch apps. From a design perspective, both primarily show white text on a black background, and both platforms allow for only limited user interaction.

In the Apple Watch Human Interface Guidelines, Apple states, “If you measure interactions with your iOS app in minutes, you can expect interactions with your WatchKit app to be measured in seconds.”

The user will not be using your app for very long, so you need it to be fast, simple and straightforward. This is why I say Apple Watch and Google Glass have a lot in common.

Google’s Glass Design Principles state: “Glass works best with information that is simple, relevant and current.” Glass is meant to be in the moment, not stuck in the past hour, month or year.


The takeaway about wearable tech is that it’s not meant for complex apps that do everything. Wearables demand short, frequent, periodic use that shows the user current, relevant information.

So let’s get going on creating your Google Glass app!

Creating the Project

On the Welcome screen, select Start a new Android Studio project.


Name the application “Shopping List” and press Next.


This next screen is where you’ll choose which devices you want the app to run on. Deselect the default Phone and Tablet and select Glass.


As mentioned before, you’ll be creating an Immersion Activity. The next page of the project setup lets you select this option.

Keep the default Activity Name, MainActivity, but change the Activity Title from Hello World Immersion to Shopping List. This title is what shows in the user’s list of activities.


When you’re done, press Finish.

Running your Glass Activity

Before you can actually run the activity on your Glass, you first need to enable testing on the device. On your Glass, go to Settings\Device Info\Turn on debug. Now you’re ready to make some magic.

Plug your Google Glass into your computer’s USB port. You’ll hear a dinging sound come from the Glass when it is connected properly.

In Android Studio, press the Run button on the toolbar, which is the right arrow to the left of the bug icon.

An Edit Configuration screen will appear. Make sure that your settings on this screen are the same as below:

  • Module: app
  • Package: Deploy default APK
  • Activity: Launch default Activity
  • Target Device: USB Device


Press Run. If a pop-up informs you that Configuration is still incorrect, select Continue Anyway.


Android Studio will now build and run the app on your device. Go you! Your app is now running on your Google Glass.

When you first run it, it won’t show up on screen automatically; you have to have a chat with it. So, while wearing the Glass, say “OK Glass,” then ask it to “Show me a demo.” This phrase is the default voice trigger that launches your app.

The template app’s default card should now be showing on the display.

Basic Project Settings

Now take Glass off and get back to work. :]

The template project is not ready for prime time. You didn’t think we’d let you off that easily in this tutorial, did you? It’s missing some pretty important pieces of code.

Open AndroidManifest.xml, which you’ll find in the manifests directory. Under the <manifest> tag and above the <application> tag, add this line of code:

<uses-permission android:name="com.google.android.glass.permission.DEVELOPMENT" />

That code requests permission to test new, unlisted voice commands in your app.

Note: This line of code is for development purposes only. To launch Glassware with unique commands, you’ll need to submit a new voice command for approval before you release the app.

Open voice_triggers.xml (in the res/xml folder). This file declares the voice command that launches your Glassware.

When you first run the app, you needed to say, “OK Glass, show me a demo.” That command is currently declared in this file.

I’m going out on a limb here and guessing that you don’t want the user to launch your app by saying, “Show me a demo.”

For a more contextual trigger, simply change this line to the following:

<trigger keyword="@string/app_name" />

This changes the starting voice trigger to be the same as your app’s name. So in this case, the trigger is “Shopping List.” Note that the command attribute is replaced by keyword.

Another thing that sticks out is the strange appearance of the app. Google Glass apps are generally white text on a black background. However, if you look at the app you have, it’s just the opposite.

In order for the app to look normal, you’ll need to specify what theme to use. Open res/values/styles.xml and look for this line:

<style name="AppTheme" parent="android:Theme.Holo.Light"></style>

Currently, the app uses the Holo.Light theme, which is more appropriate for the screen of an Android phone or tablet than it is for Glass.

Light backgrounds tend to make it more difficult to see the content because the screen is semi-transparent, so the text can get lost in the background. Besides, your app will look like a bit of an oddball when compared to other apps.

Replace the above line with the following to implement the default theme.

<style name="AppTheme" parent="android:Theme.DeviceDefault"></style>

Now, run the app again on your Google Glass. Notice how the launch command is no longer Show me a demo, but Open Shopping List instead. Also, when you open the app, it should now be a little easier on the eyes.

Looking spiffy!


Foundation of Voice Commands

Although you just told Android that your app will test your custom voice commands, each individual activity that uses them must explicitly request permission to use them.

In MainActivity.java, find onCreate(Bundle bundle). Right below the call to super.onCreate(bundle);, add the following line:

getWindow().requestFeature(WindowUtils.FEATURE_VOICE_COMMANDS);

You need to import WindowUtils, so add this line of code at the top of the file:

import com.google.android.glass.view.WindowUtils;

Alternatively, you can press Option-Enter and Android Studio will automatically add the import.

Next, you need to declare which voice commands your app must listen to. Glass apps practice selective hearing unless you force their hand, very much like teenagers. :]

Open strings.xml inside res/values. It should predefine the following values:

<string name="app_name">Shopping List</string>
<string name="title_activity_main">Shopping List</string>
<string name="hello_world">Hello world!</string>

For your shopping list, you’ll need two basic functions: the ability to add and remove an item from the list.

Add the following lines under the "hello_world" declaration:

<string name="add_item">Add item</string>
<string name="remove_item">Remove item</string>

On their own, these declarations don’t mean a thing. They simply give you constant strings that can be referenced anywhere in the project so you don’t accidentally type the wrong value.

When using Google Glass, one phrase you use frequently is “OK Glass.” This is the command that tells your device to listen up. The commands available afterwards depend on the app’s design and context.

Menus are created using XML. (I promise this is the last XML you’ll have to look at!) In the Project Navigator, select res. Press ⌘N (or File\New) and select Directory. When prompted for the name of the directory, enter menu.


Again, select this new directory and press ⌘N. This time, select Menu resource file. Name the file activity_menu.

Replace everything beneath the XML version header line with the following:

<menu xmlns:android="http://schemas.android.com/apk/res/android">
    <item
        android:id="@+id/add_menu_item"
        android:title="@string/add_item">
    </item>
    <item
        android:id="@+id/remove_menu_item"
        android:title="@string/remove_item">
    </item>
</menu>

Not only will this file let you create a menu with these two voice commands, it also generates a unique identifier for each one to help you figure out which command was used. This way, you know whether the user wanted to add or remove an item from the list.

For the app to know that you want these options in the “OK Glass” menu, you need to inflate the menu in two designated methods.

To inflate a menu means that you convert an XML file into a Menu object.

In MainActivity.java, add the following method after onCreate():

@Override
public boolean onCreatePanelMenu(int featureId, Menu menu) {
    if (featureId == WindowUtils.FEATURE_VOICE_COMMANDS){
        getMenuInflater().inflate(R.menu.activity_menu, menu);
        return true;
    }
    return super.onCreatePanelMenu(featureId, menu);
}

At the top of the file, import the following file, or use Option-Enter to automatically import:

import android.view.Menu;

This method inflates your menu options into the menu parameter so that Glass knows which voice commands are available.

Run the app on your Google Glass. When you get into the app, say, “OK Glass.” A menu with the options Add Item and Remove Item will appear.


Try to use the Add Item command. Glass will recognize the voice command, but nothing will happen. Why is that?

You’ve not yet told the app what to do when it hears the voice command.

Asking User for an Item

After the user triggers a voice command, onMenuItemSelected is called. Add the following code under onCreatePanelMenu:

@Override
public boolean onMenuItemSelected(int featureId, MenuItem item) {
    // 1
    if (featureId == WindowUtils.FEATURE_VOICE_COMMANDS) {
        // 2
        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        // 3
        startActivityForResult(intent, item.getItemId());
        return true;
    }
    return super.onMenuItemSelected(featureId, item);
}

At the top of the file, import these classes (or do three Option-Enters):

import android.content.Intent;
import android.view.MenuItem;
import android.speech.RecognizerIntent;

It’s short and sweet, but there’s quite a bit of action in those three lines:

  1. First, you check to make sure that the feature ID is for voice commands – this means the menu item was selected from the “OK Glass” menu.
  2. When the voice commands are called, this finds out what you want the app to do. If the user wants to add an item, you need to find out what that item is! In order to do this, you create an Intent.
  3. This Intent, when started, shows the “Speak your message” screen with the microphone on the right side, a screen most Glass users know well. After the user dictates the item they want to add (or remove), you’re able to use the transcribed text in your app.

Depending on the action you need to call when the intent is completed, you need to pass a different constant.

When the intent completes, it gives you back the constant you pass in; from there, you can act according to which action you originally intended.

But you’ve already thought of this! The way you set up activity_menu.xml has already defined a unique constant for each action. This item ID is attached to the MenuItem parameter in the method. You launch the intent using startActivityForResult, and pass the MenuItem’s ID as the second parameter.

If the feature isn’t for voice commands, then it is best practice to pass on the method call to the superclass.
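This request-code round trip can be sketched without any Android dependencies. In the plain-Java illustration below, the ADD_ITEM and REMOVE_ITEM constants and the handleResult method are stand-ins invented for this sketch; they play the roles of R.id.add_menu_item, R.id.remove_menu_item and onActivityResult:

```java
import java.util.ArrayList;
import java.util.List;

public class RequestCodeDemo {
    // Hypothetical stand-ins for the generated R.id.* menu item IDs.
    static final int ADD_ITEM = 1;
    static final int REMOVE_ITEM = 2;

    final List<String> items = new ArrayList<>();

    // Plays the role of onActivityResult: the requestCode you "sent"
    // comes back, telling you which action the result belongs to.
    void handleResult(int requestCode, String spokenText) {
        if (requestCode == ADD_ITEM) {
            items.add(spokenText);
        } else if (requestCode == REMOVE_ITEM) {
            items.remove(spokenText);
        }
    }

    public static void main(String[] args) {
        RequestCodeDemo demo = new RequestCodeDemo();
        demo.handleResult(ADD_ITEM, "3 Musketeers");
        demo.handleResult(REMOVE_ITEM, "3 Musketeers");
        System.out.println(demo.items.size()); // prints 0
    }
}
```

The key design point is that a single callback can serve many outgoing requests, as long as each request carries a unique code.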

Run and launch the app and say “OK Glass,” followed by “Add Item.” A new screen will appear that lets you say the item you want to add.


You know you’re craving a sugar rush right about now, so why not tell Glass to add a 3 Musketeers or your favorite candy bar to the list?

Note: Glass needs a working network connection to understand voice commands.

But when you try to add an item, nothing in the app changes. This makes sense, because you haven’t set anything up to use the dictated text. Seems you won’t be getting your sugar fix until you work through this tutorial.

Even though it’s empty, your shopping list is almost done! All that you have left to do is save and load some items in the list.

Saving & Retrieving Items

Select the MainActivity file in the Project Navigator (this makes sure your new file goes into the right group) then go to File\New… and create a new Java class named DataManager. Replace everything in the file after the package statement with the following:

import android.content.Context;
import android.content.SharedPreferences;
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
//1
public class DataManager {
 
    private Context context;
    private static final String StoredStringsKey = "com.raywenderlich.shoppinglist.storedstringskey";
    private static final String PreferencesLocation = "com.raywenderlich.shoppinglist";
 
    public DataManager(Context c){
        context = c;
    }
//2
    public ArrayList<String> getStoredStrings(){
        SharedPreferences preferences = context.getSharedPreferences(PreferencesLocation, Context.MODE_PRIVATE);
        Set<String> stringSet = preferences.getStringSet(StoredStringsKey, Collections.<String>emptySet());
        return new ArrayList<>(stringSet);
    }
//3
    public void setStoredStrings(ArrayList<String> strings){
        SharedPreferences preferences = context.getSharedPreferences(PreferencesLocation, Context.MODE_PRIVATE);
        SharedPreferences.Editor editor = preferences.edit();
        Set<String> stringSet = new HashSet<>(strings);
        editor.putStringSet(StoredStringsKey, stringSet);
        editor.apply();
    }
}

This code creates a utility class that saves and retrieves the user’s shopping list items. Here’s a breakdown of what’s going on:

  1. The Context is a way for this class to access the SharedPreferences for this application, and this variable is set in the constructor (the public DataManager(...) code).
  2. SharedPreferences is a key-value store, and it acts like a dictionary — if you’ve ever used Cocoa’s NSUserDefaults it will look a little familiar. The key for the stored values is StoredStringsKey, and the values are set in the setStoredStrings function.
  3. When you want to get these values back in getStoredStrings() this just asks the SharedPreferences, “Hey, what are the values that are attached to the string here in StoredStringsKey?”

Replace the two occurrences of “com.raywenderlich” with whatever identifier you used when you created the project.
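One caveat worth knowing about this storage approach: SharedPreferences persists the items as a Set, and converting the list to a HashSet and back, as DataManager does, silently drops duplicate items and does not preserve their order. Here’s a minimal plain-Java sketch of that round trip (the class name is made up for illustration):

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class SetRoundTrip {
    public static void main(String[] args) {
        List<String> original = new ArrayList<>();
        original.add("Milk");
        original.add("Eggs");
        original.add("Milk"); // duplicate entry

        // The same conversion DataManager performs before persisting.
        Set<String> stored = new HashSet<>(original);
        List<String> restored = new ArrayList<>(stored);

        // The duplicate is gone, and insertion order is not guaranteed.
        System.out.println(restored.size()); // prints 2
    }
}
```

For a simple shopping list this trade-off is usually fine, but it’s worth keeping in mind if you extend the app.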

Open MainActivity.java. After the declaration of onMenuItemSelected, add the following method:

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    if (requestCode == R.id.add_menu_item && resultCode == RESULT_OK) {
        // 1
    } else if (requestCode == R.id.remove_menu_item && resultCode == RESULT_OK) {
        // 2
    }
    super.onActivityResult(requestCode, resultCode, data);
}

onActivityResult is called when the speech-recognizing Intent completes. The Intent usually completes with a result code of RESULT_OK or RESULT_CANCELED. You only care about what happens if the Intent completes successfully, so in both cases, you check to make sure that resultCode == RESULT_OK.

requestCode is the same code that you gave the Intent when you called startActivityForResult, which is either R.id.add_menu_item or R.id.remove_menu_item.

If the code is R.id.add_menu_item, you want to take the item that the user dictated and add it to the strings stored in the DataManager class.

Add the following code at // 1:

//Part A
List<String> results = data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
String spokenText = results.get(0);
 
//Part B
DataManager dataManager = new DataManager(getBaseContext());
ArrayList<String> storedStrings = dataManager.getStoredStrings();
storedStrings.add(spokenText);
dataManager.setStoredStrings(storedStrings);
 
//Part C
mView = buildView();
mCardScroller.getAdapter().notifyDataSetChanged();

Again, at the top of the file import these classes:

import java.util.ArrayList;
import java.util.List;

A lot of this code should look new to you, so let me explain each part:

  • Part A: Take the Intent and get the spoken text from its “extra results.”
  • Part B: Create a new DataManager instance using the base context. After you get a copy of the strings already stored in the DataManager, you add the spoken text to this copy. Then, you set the DataManager’s stored strings to the new values.
  • Part C: You update the view to show the new data.

Now, add the following code at //2:

List<String> results = data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
String spokenText = results.get(0);
DataManager dataManager = new DataManager(getBaseContext());
List<String> storedStrings = dataManager.getStoredStrings();
if (storedStrings.contains(spokenText)){
    storedStrings.remove(spokenText);
    dataManager.setStoredStrings(new ArrayList<>(storedStrings));
}
mView = buildView();
mCardScroller.getAdapter().notifyDataSetChanged();

This code is similar to the code added at // 1, except that you remove the spoken text instead of adding it.
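Note that because the removal relies on List’s contains and remove, the match is exact and case-sensitive; the dictated text must match the stored item precisely. A tiny, Android-free Java illustration of that behavior (the class name is invented for this sketch):

```java
import java.util.ArrayList;
import java.util.List;

public class ExactMatchRemoval {
    public static void main(String[] args) {
        List<String> storedStrings = new ArrayList<>();
        storedStrings.add("Apple Watch");

        // A near-miss in case does not match, so nothing is removed...
        if (storedStrings.contains("apple watch")) {
            storedStrings.remove("apple watch");
        }
        System.out.println(storedStrings.size()); // prints 1

        // ...while the exact string is found and removed.
        if (storedStrings.contains("Apple Watch")) {
            storedStrings.remove("Apple Watch");
        }
        System.out.println(storedStrings.size()); // prints 0
    }
}
```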

What About the UI?

Both of these code snippets end by updating the user interface. Because you haven’t really looked at that yet, let’s look through the automatically generated code that shapes the app’s appearance.

Inside the onCreate implementation, you have the following code:

mView = buildView();
 
mCardScroller = new CardScrollView(this);
mCardScroller.setAdapter(new CardScrollAdapter() {
    @Override
    public int getCount() {
        return 1;
    }
 
    @Override
    public Object getItem(int position) {
        return mView;
    }
 
    @Override
    public View getView(int position, View convertView, ViewGroup parent) {
        return mView;
    }
 
    @Override
    public int getPosition(Object item) {
        if (mView.equals(item)) {
            return 0;
        }
        return AdapterView.INVALID_POSITION;
    }
});
 
mCardScroller.setOnItemClickListener(new AdapterView.OnItemClickListener() {
    @Override
    public void onItemClick(AdapterView<?> parent, View view, int position, long id) {
        // Plays disallowed sound to indicate that TAP actions are not supported.
        AudioManager am = (AudioManager) getSystemService(Context.AUDIO_SERVICE);
        am.playSoundEffect(Sounds.DISALLOWED);
    }
});
setContentView(mCardScroller);

The app gives you two variables from the get-go: mView and mCardScroller, and mView is just a generic view.

mCardScroller is a CardScrollView. If you have multiple views that should be displayed side by side, you change the adapter methods, which are located in the setAdapter block, to display these views. The individual views shown in a CardScroller are called “cards.”

Next, you call setOnItemClickListener on your mCardScroller, and the code inside the block plays an alert to inform the user that tapping is not allowed in this app. When users tap, they hear the sound and that’s it.

Finally, you set the content view to be the card scroller, and it only has one view — mView — so that is the view/card that shows on-screen.

When you create the mView, you call buildView(). Jump down to the implementation of buildView() to see how this is implemented:

private View buildView() {
    CardBuilder card = new CardBuilder(this, CardBuilder.Layout.TEXT);
    card.setText(R.string.hello_world);
    return card.getView();
}

Hello world? That’s not a 3 Musketeers. You want to show the list of items, not a default “Hello World!”

Replace this method with the following:

private View buildView() {
 
    // 1
    CardBuilder card = new CardBuilder(this, CardBuilder.Layout.TEXT);
 
    // 2
    DataManager dataManager = new DataManager(getBaseContext());
    ArrayList<String> strings = dataManager.getStoredStrings();
 
    // 3
    StringBuilder builder = new StringBuilder();
    if (strings.size() == 0){
        builder.append("No Items!");
    // 4
    } else {
        for (String s : strings) {
            builder.append("- ").append(s).append("\n");
        }
    }
 
    // 5
    card.setText(builder.toString());
    return card.getView();
}

Let’s review how this works:

  1. First, you create a new CardBuilder with a Text layout. For reference, a CardBuilder generates cards, or views, for your CardScroller to present.
  2. The next two lines of code should look familiar; you’re simply getting the stored strings from the DataManager.
  3. Next, you instantiate a new StringBuilder instance to create the card’s text. If there’s nothing in the shopping list, then you want to show “No Items!”
  4. Otherwise, you create a bulleted list comprised of each string in the shopping list by adding a hyphen, the item and then a new line.
  5. Finally, you set the CardBuilder’s text to the string you just created, and return the CardBuilder’s view.
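The string-building part of this method (steps 3 and 4) is plain Java, so here’s an isolated, testable sketch of it; the cardText helper is a name made up for this illustration:

```java
import java.util.ArrayList;
import java.util.List;

public class CardTextBuilder {
    // Mirrors the formatting logic in buildView(), minus the Glass APIs.
    static String cardText(List<String> strings) {
        StringBuilder builder = new StringBuilder();
        if (strings.size() == 0) {
            builder.append("No Items!");
        } else {
            for (String s : strings) {
                builder.append("- ").append(s).append("\n");
            }
        }
        return builder.toString();
    }

    public static void main(String[] args) {
        List<String> list = new ArrayList<>();
        System.out.println(cardText(list)); // prints "No Items!"
        list.add("Milk");
        list.add("Eggs");
        System.out.print(cardText(list)); // prints "- Milk" and "- Eggs" on separate lines
    }
}
```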

Run your Shopping List, and you’ll find everything working as expected!

In the Shopping List, speak “OK Glass… Add Item… Samsung Galaxy Gear.” Someone has to want one, right?

Now, the app should show this screen, with the smart watch in the list:

It works!!!


After thinking about it a little bit, you realize that you really want an Apple Watch. Tell your Glass, “OK Glass… Add Item… Apple Watch.” Your list should now have two items in it.

Now you have two smart watches in your list at once! Maybe you should reconsider that Samsung.

Tell your Glass, “OK Glass… Remove Item… Samsung Galaxy Gear.” Make sure that you say the item’s name exactly as it appears in the app, because otherwise it will stay on the list.


Hasta la vista, baby.

Your shopping list is now completed — congratulations on creating your very own Google Glass app!

Where To Go From Here?

Here is the download for the final project.

If you’re interested in learning more about Glass development, check out Google’s Glass Developer page.

Google also has a Glass Github page with several really cool sample projects that I recommend you check out.

Challenge: You may have noticed that if you have too many items in the sample app, you cannot see the last few items on the list. To keep the tutorial general and approachable, I did not address this issue.

There is an open-source control you can implement, called HeadScrollView. This view scrolls when the user raises and lowers his or her head. If you’re up for a challenge, try it out and let me know how it turns out in the comments.

New platforms are always fun to start developing for, and Glass is no exception. I hope you had fun and learned a lot!

Do you have questions about this Google Glass app tutorial – or do you just want to share your favorite Terminator movie? Either way, post it to the comments below! :]

Google Glass App Tutorial is a post from: Ray Wenderlich


Swift Style Guide: April 2015 Update

Swift style guide: updated for Swift 1.2!


This is an exciting week for iOS developers – Swift 1.2 came out of beta with the release of Xcode 6.3, which is now available on the App Store!

Swift 1.2 brings faster compiles, more concise syntax, an improved if let statement, and more. If you’re not familiar with the changes, check out our post on What’s New in Swift 1.2.

Our goal is to stay on top of changes to Swift and iOS – we are in the process of updating our books and tutorials to Swift 1.2.

One of the things we’ve already updated is our raywenderlich.com Swift style guide. We’ve made some changes for Swift 1.2, and some other changes based on pull requests and issues filed from team members and readers.

Let’s dive in and see what changed!

Note: Our style guide is different from other style guides out there; we’re a tutorial site, which means our priorities are shorter and more readable code for printed books and the web. Many of the decisions were made with an eye toward conserving space for print, legibility, and tutorial writing.

Multiple Optional Binding

Swift 1.2 now allows you to optionally bind multiple things in one line. Our style guide recommends using this new style, rather than the old style of multiple if let statements:

// old
if let widget = widget {
  if let url = options.url {
    if let host = options.host {
      // widget, url, and host are all non-optionals
    }
  }
}
 
// new
if let widget = widget, url = options.url, host = options.host {
  // widget, url, and host are all non-optionals
}

This new syntax lets you avoid the “pyramid of doom” when you had to test and bind several optionals.

So many spaces saved!


Note you can also include boolean conditions in your if let statements, which can make your code more concise as well.

Enumerations and Case

After a good amount of discussion, we formalized the style we were using in our tutorials: UpperCamelCase for enumeration values.

enum Language {
  case Swift
  case ObjectiveC
  case ObjectOrientedFortran
}

There’s a bit of an inconsistency, since enum values are like class constants inside the enum scope and feel like they should be in lower-camel case. However, the style guide usage matches our general practice, existing enumerations in Cocoa, and Swift itself – remember, optionals are implemented as enums with the values Some and None, which begin with a capital letter.

Trailing Closures

Trailing closures are great for keeping the closure expression linked to its function call, without having to wrap them in the argument parentheses.

UIView.animateWithDuration(0.75) {
  self.dogeImage.alpha = 1.0
}

Our old guidance was to use trailing closures wherever possible. However, there’s one special case where this isn’t a good idea: when there are two closure arguments! In that case, you should use the normal syntax to keep both closure expressions “on the same level” with named arguments:

UIView.animateWithDuration(1.0, animations: {
  self.myView.alpha = 0
}, completion: { finished in
  self.myView.removeFromSuperview()
})

The alternative is to have one named argument and one trailing closure, which looks strange, so we recommend against it.

// Don't do this!
UIView.animateWithDuration(1.0, animations: {
  self.myView.alpha = 0
}) { f in
  self.myView.removeFromSuperview()
}

Function and Method Naming

As in Objective-C, a full, unambiguous function signature includes the method name and its named arguments. When referring to a function in a tutorial, we follow the same formatting as you see in the Xcode jump bar:

[Screenshot: the Xcode jump bar, showing full method signatures]

Previously, we said it was OK to write just the bare function name if the context was clear. That allowed us to write viewDidAppear rather than the full viewDidAppear(_:).

After some discussion, we decided it was best to be clear about whether something was a function or a variable. The new guideline is to always include the full signature. That means something like viewDidLoad() where there are no arguments, and tableView(_:cellForRowAtIndexPath:) where there are arguments.

MARK

In the last style guide update, we introduced the rule of one extension per protocol. That keeps the protocol conformance together with the protocol methods, and makes adding and finding related code easier.

As an addition to that rule, we want to start adding MARK comments to provide better navigation.

// MARK: - UITableViewDataSource
 
extension MyTableViewController: UITableViewDataSource {
  // table view data source methods here...
}

Extensions are great for grouping methods together, and the MARK comment should make navigating your way around source files even easier than before!

Code flows out much easier when your files are well organized!

Shadowing

Speaking of optional binding, one thing that hasn’t changed in the style guide is that we use the same name for the optional and the unwrapped version:

var subview: UIView?
 
// later...
if let subview = subview {
  // referring to "subview" here is the unwrapped version
}

The benefit is that the names remain short, and we avoid things like maybeView, unwrappedWidget, and optionalTableView. Once you understand Swift and optionals, the code makes sense. The downside is that the code looks strange at first and can be hard for beginners to read and understand.

The issue thread on variable shadowing is still there and we’re open to change things if you’re especially convincing. “Especially convincing” includes offers of an Apple Watch Edition, of course. ;]

Where to Go From Here?

“Style is a simple way of saying complicated things,” said the French writer Jean Cocteau. That’s what programming is really about, isn’t it? Complicated algorithms and app logic, expressed in an understandable way through code. I hope our site’s coding style helps you understand the sometimes complicated things we’re trying to teach as you expand your knowledge of Swift development.

Swift 1.3, 2.0, and beyond are surely in the works, and we’ll keep adapting to provide you with a consistent and readable style. As Swift evolves, why not get involved yourself? File radars on parts of the language you think should be changed, and join in the discussion at our Swift style guide repository on GitHub.

If you’ve left a comment, filed a new issue, or even sent in a pull request – thank you! Please keep the feedback coming. If you have a general question or comment, join in the forum discussion below!

The post Swift Style Guide: April 2015 Update appeared first on Ray Wenderlich.

Video Tutorial: Intermediate Swift Part 0: Overview


Challenge

There’s no challenge project with this video. Keep watching this series to find your next challenge! :]

Download lecture slides

View next video: Structs

We’ve added a new feature to our newest videos, English subtitles! You can now click the ‘CC’ button in the video player bar to turn subtitles on or off.

The post Video Tutorial: Intermediate Swift Part 0: Overview appeared first on Ray Wenderlich.

Video Tutorial: Intermediate Swift Part 1: Structs
