Firebase Tutorial for Android: Getting Started

Nearly every project we work on these days uses a REST service. It’s a concept where you use a certain set of methods (GET, POST, DELETE…) to send data, receive data or both.

And REST is great, but sometimes you just can’t find a (back)end to all the challenges! More often than not, the variable names are in a different style than you prefer or the paths to the requests are confusing…

Then comes Google, introducing Firebase and saying: “Why would you work with other people when you can write a backend of your own?” And you start thinking: “IS THAT POSSIBLE??” Suddenly you’re inspired. You can’t keep your mind straight, and all you want to do is create backends all day.

Firebase works similarly to hosting, so to speak. You pick the services you like, connect them to your application and bam, you’re ready to take over the world. Firebase’s backend is just another service. It provides a JSON-like database in which you store and read data. It doesn’t really follow the REST standard, as it has no REST methods (GET, POST…); here, you’re working directly with the database.

You won’t be using all of Firebase in this tutorial. Instead, we’ll focus on Firebase’s Realtime Database to create a mobile friendly backend. Your goal for this tutorial is to build a joke-sharing social network application, using the Authentication and Realtime Database services. So let’s heat it up. :]

Note: This Firebase tutorial assumes that you’re familiar with the basics of Android development and REST services. If you’re new to Android, please check out the Beginner Android series and other Android tutorials.

Getting started

Before doing anything, download the starter and final projects by clicking on the Download Materials button at the top or the bottom of the tutorial. Once you’ve downloaded the starter project, open it up in Android Studio 3.1.1 or later so we can do an overview of the materials to get you caught up.

Most of the code is already pre-baked for you. It’s better to focus entirely on Firebase in the tutorial, rather than the rest of the application. Don’t worry, you’ll be guided through everything in the starter project.

Tasting the pre-baked goodies

A picture showing the project structure: common, di, firebase, model, presentation and ui packages.

The project setup is simple.

  • The common package contains all the extensions and global functions you’ll use for showing errors, validating data and handling the UI events like clicks.
  • The di package contains the Dependency Injection setup. I’ve chosen Dagger because it’s the standard and most people will feel at home using it. If you’re looking for a Kotlin alternative to Dagger, check out Koin. It’s simpler than Dagger, but still not at its 1.0 version milestone.

    Note: If you want to learn more about Dagger and how it works, Ray Wenderlich has you covered. Check out this tutorial to learn more about Dagger with Kotlin.

  • The ui package contains all the Activities and Fragments you’ll use. And the model package has model classes for the app data.

The presentation and firebase packages are the ones you’ll be working with. The project uses the MVP (Model-View-Presenter) pattern. Splitting the project with this package structure allows you to focus on the Firebase implementation in this tutorial instead of getting lost in the pre-baked code.

A hot package

There are two parts of the Firebase implementation you’ll be finishing up: the FirebaseAuthenticationManager and the FirebaseDatabaseManager. Both live in subpackages of the firebase package.

The former will take care of all-things-user, like logging a user in or out and registering a new user. The latter will be your man(ager) in the middle, and will read data from the database and store new data when it comes in.

Authentication will be done using an email and a password. Reading the data on the other hand will be done multiple ways. You’ll listen for single data events, for individual updates and for general updates. Don’t worry if that doesn’t make sense, it’ll be explained in further sections. :]

Rev it up!

In order to start working, you need to add Firebase-specific dependencies. However, to be able to add them, you also need something called a google-services.json file. It’s a JSON file which contains all the configuration data Firebase uses internally. To get it, you need a Firebase project, so let’s create one!

Visit the Firebase website. You should see something similar to this:

Firebase landing page, with an overview of the site and services

Log in using a Google account if you haven’t already and, if you want, take a quick tour around the landing page. When ready, click the Go to Console button in the upper right. You’ll now create a new Firebase project. You can do it by clicking on the Add project card like below:

Add project card

Choose your project name. In this tutorial you’ll use the name “Why so serious”, and choose your current country like so:

Firebase Add project window with the Why so serious project name

Accept any terms if needed and click Create Project. After finishing the first step, you should see a success message:

Firebase project created success message

Click Continue.

You should see the project dashboard now. Next you need to add an Android application to the project, so click on the Add Firebase to your Android app card:

Firebase dashboard with three options, adding an iOS, Android or a Web app

Fill in the package name com.raywenderlich.android.whysoserious from the starter project and click Register app. The SHA key can be empty for now; you only need it when signing an APK.

Now follow the instructions on the Firebase page to add the google-services.json file to the project and click Next.

Add the required library dependencies to your app module build.gradle file:

ext {
  //Add the playServices version
  playServices = "15.0.0"
}
dependencies {
  //Add the following lines
  implementation "com.google.firebase:firebase-core:$playServices"
  implementation "com.google.firebase:firebase-auth:$playServices"
  implementation "com.google.firebase:firebase-database:$playServices"
}
//Add to the bottom of the file
apply plugin: 'com.google.gms.google-services'

And add the following to the project-level build.gradle:

buildscript {
  dependencies {
    //Add this line
    classpath 'com.google.gms:google-services:3.3.1'
  }
}

Hit Sync Now to sync your Gradle files.

Finally you can Build and Run the project! You should see something like this:

A welcome screen with a greeting message, a login and a register button.

Woohoo! Now you can start working on the juicy parts of this app.

Planning our stand-up

By the end of this tutorial you’ll have an app which serves other people the best jokes the internet has ever read! To achieve this, you first need to be able to manage users. The first thing on your plate is to finish up the authentication part of the app.

After the users make their way to the app, you’ll start patching things together with the realtime database. You’ll store the users in the database so that each user can have a list of his/her favorite jokes. Each joke will have a title, description (the joke itself), the author’s name and id.

To model these entities, you’ll use everything from the model package. You only need a few data classes to achieve this.
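
For reference, here’s a minimal sketch of what such a data class could look like. The exact fields are an assumption; the starter project’s model classes may differ slightly:

// A sketch only: default values give the class a no-argument
// constructor, which Firebase needs later for deserialization.
data class User(
    val id: String = "",
    val username: String = "",
    val email: String = "",
    val favoriteJokes: List<Joke> = listOf()
)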

So the user will be able to sign up for the application, create jokes and like other jokes. It’s not much, but it’s enough for you to learn how to use Firebase as a mini backend.

Hiring comedians

What good is a joke app if there are no comedians – absolutely pun. Good thing you aren’t hiring me! You really need people who actually have a sense of humor. Recruit them and sign them up through Firebase Authentication!

Start by having a look at the RegisterActivity screen and its content.

Register Activity

It has a few input fields you need to create a unique user. Since you’re building an authentication service using an email and a password, the user needs to provide them while signing up. Additionally, each user will have a username that you’ll display next to each joke.

In order for your application to use email registration, you need to enable the “email sign-in” method in the Firebase console.

Open up the Firebase Console, click on the Authentication option on the left under Develop and choose the Sign-in Method tab. Now, select the Email/Password provider and enable it.

Yay! You can now proceed to the implementation. :]

Preparing the stage

To connect to Firebase’s authentication service, first go to the di package and add a class called FirebaseModule.kt to the module package, providing the FirebaseAuth and FirebaseDatabase dependencies in the module:

@Module
@Singleton
class FirebaseModule {

  @Provides
  fun firebaseAuth(): FirebaseAuth = FirebaseAuth.getInstance()

  @Provides
  fun firebaseDatabase(): FirebaseDatabase = FirebaseDatabase.getInstance()
}

Next, add the module to the InteractionModule.kt list of includes, like so:

@Module(includes = [FirebaseModule::class])
@Singleton
abstract class InteractionModule { ...

Now, whenever you need either FirebaseAuth or FirebaseDatabase, Dagger provides them from the graph.

Next, open up the FirebaseAuthenticationManager.kt file in the firebase.authentication package and add a FirebaseAuth property to the constructor like this:

class FirebaseAuthenticationManager @Inject constructor(
    private val authentication: FirebaseAuth
) : FirebaseAuthenticationInterface

You’ve set up everything to use Firebase-related services, way to go! Next you’ll connect the actual authentication process, so keep up the good work. :]

Gathering the crowd

At last, head over to the register method in the FirebaseAuthenticationManager. It should look something like this:

override fun register(email: String, password: String, userName: String, onResult: (Boolean) -> Unit) {
      
}

The registration process will go like this: once the user’s data is valid and they press the Sign up button, you’ll try to create a unique user with the provided email.

If the email is already in use, Firebase will return an error. If it isn’t, an empty user will be created. Since you’re also collecting a username, you’ll need to create a UserProfileChangeRequest which edits a user to include a username.

After you finish all that, you still need to create a user in the database, since the Authentication and Realtime Database are separate services, and don’t share user objects. You’ll do that later on in the tutorial.

Add the following to the body of your register method:

//1
authentication.createUserWithEmailAndPassword(email, password).addOnCompleteListener {
  //2
  if (it.isComplete && it.isSuccessful) {
    authentication.currentUser?.updateProfile(UserProfileChangeRequest
        .Builder()
        .setDisplayName(userName)
        .build())
    //3
    onResult(true)
  } else {
    onResult(false)
  }
}

Taking each commented section in turn:

  1. This method, like most Firebase methods, returns a Task. A Task is something that you can listen to, for the final result.
  2. Inside the addOnCompleteListener lambda block, you can check whether creating the user completed and succeeded by using the returned object’s properties.
  3. If the task is successful, you will add a username to the new user and call the onResult callback, saying the user has been successfully created. Any other case will just notify that you didn’t manage to create a user.
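
Note that updateProfile() itself returns a Task, and the code above calls onResult(true) without waiting for it. If you’d rather report success only after the display name is actually saved, you could chain the listeners instead. Here’s a sketch of that variation; it’s not the approach the rest of this tutorial uses:

authentication.currentUser?.updateProfile(UserProfileChangeRequest
    .Builder()
    .setDisplayName(userName)
    .build())
    ?.addOnCompleteListener { updateTask ->
      // Only report success once the display name has been saved.
      onResult(updateTask.isSuccessful)
    }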

A session of comedy

Notice how you’re using the authentication.currentUser property. It’s important to know how the FirebaseAuth service works inside an Android application. As soon as you register or log in, the currentUser is available for use. Basically, Firebase caches your login up until the point where you log out or clear the application’s data. This is called a Session. Although not visible through code, it will be there until you close it.

After logging in once, there is no need to worry about the rest. If the currentUser doesn’t return null, then you’re logged in. It also contains information like the unique user id, email and username. Quite handy when you need it.
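
In practice, that means a login check boils down to a single null check. A tiny sketch, assuming you add a helper like this to the manager:

// The Session is active whenever currentUser is non-null.
fun isLoggedIn(): Boolean = authentication.currentUser != null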

Finish up the FirebaseAuthenticationManager class by filling in the rest of the implementation as follows:

override fun login(email: String, password: String, onResult: (Boolean) -> Unit) {
  authentication.signInWithEmailAndPassword(email, password).addOnCompleteListener {
    onResult(it.isComplete && it.isSuccessful)
  }
}

override fun getUserId(): String = authentication.currentUser?.uid ?: ""
override fun getUserName(): String = authentication.currentUser?.displayName ?: ""

override fun logOut(onResult: () -> Unit) {
  authentication.signOut()

  onResult()
}

The code above is pretty simple. The getUserId and getUserName methods return the data from the current user. Furthermore, the logOut method closes the current Session and the login method tries to log the user in.

After adding this, you’re able to log out of the app in the Profile tab. Moreover, you’re able to log back in at the login screen.

Breaking the ice

Build and Run the App. Open the register screen by clicking the Sign up button. Fill in the form and hit Sign up to create a new user. If everything goes well you should see the following screen:

Main Screen

You managed to register! Quickly open up the Firebase console again and navigate to the Authentication section; you’ll see your new user in the Users tab. Right now there is only one user in the application, and that is you.

Making punnections

Only having an email provider is very limiting for your app. Firebase, however, allows you to easily add a social provider as an authentication solution. It’s about as simple as flipping an extra switch in the console. In the same place where you enabled email sign-in, you can enable various other providers like GitHub, Google, Facebook, Twitter and more. This allows your users to connect multiple accounts to one FirebaseUser.

When a user connects multiple accounts and logs in with any of them, they enter your application as a single user in the Authentication service. It’s super convenient for your users when your app supports multiple login platforms.
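
Under the hood, linking goes through the linkWithCredential() API on the current user. Here’s a rough sketch; the credential value comes from the chosen provider’s SDK and isn’t part of this project:

// `credential` is an AuthCredential obtained from a provider SDK.
authentication.currentUser
    ?.linkWithCredential(credential)
    ?.addOnCompleteListener { task ->
      if (task.isSuccessful) {
        // Both providers now resolve to the same FirebaseUser.
      }
    }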

Check out this link to see the official documentation for this process.

That concludes the Authentication section. Once again, good job! You’re ready for the next part – the Realtime Database.

Time to shine

Now that you have a line of comedians all waiting to do their own stand-ups, all you need is their jokes. Head over to the Firebase dashboard again, this time to the Database option under Develop. When you open this option, you should get to choose between the Cloud Firestore and Realtime Database like so:

Cloud Store and Realtime Databases

The difference is in the way each stores and works with data. On one hand, Cloud Firestore is like an actual database: it stores data in documents and allows for efficient queries. On the other hand, the Realtime Database stores everything as one big JSON object which you read parts of.

For small applications, the latter is simpler to use. It also allows realtime updates (as the name says) which is handy when you’re building applications with feeds or chat rooms. This is the reason you will choose it for this tutorial. It’s not a complex application and it will benefit a lot from the realtime updates.

Blue pill or red pill?

Blue Pill or Red Pill

As mentioned above, choose the Realtime Database. You should see a popup asking you for the type of security rules:

Security Rules Screen

Choose either one, it doesn’t really matter which, since you’ll change the settings anyway. Head over to the Rules tab and paste this:

{
  "rules": {
    ".read": "auth != null",
    ".write": "auth != null"
  }
}

What these rules mean is that only people who have been authenticated can read or write data, i.e. only your logged in users. Once you’ve copied and pasted the snippet above, click the Publish button to save the rules. Once you switch back to the Data tab your database will be empty and look like this:

Rules Screen
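
By the way, the rules can get much more granular than a single global flag. As an illustration only, since this tutorial sticks with the simple rules above, you could restrict each user to writing their own node under the “user” key you’ll create later:

{
  "rules": {
    ".read": "auth != null",
    "user": {
      "$uid": {
        ".write": "auth != null && auth.uid == $uid"
      }
    }
  }
}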

Press any key to continue

Right now there is no data in the database as you’ve only created it. Once you start pushing the jokes it will be much better. However, to save data you need to understand how data is represented.

I’ve already mentioned that our database choice stores everything in a JSON object. This means that each piece of information has its own key. Once you request data at a certain key or “directory” (for example, the key “user”), you’ll receive what is called a DataSnapshot. A snapshot is the current state of that directory, which in this example holds everything under the key “user”. You parse that snapshot to get the value, pretty straightforward! You’ll see how to do this in the next section when you start saving data.

The keys can be custom made or generated by Firebase. Usually you’d just let Firebase generate the keys so you don’t have to worry about that. Once Firebase generates a key, and stores the data, the whole directory gets sorted by a timestamp, starting from the newest items first.

Also, ordering is always ascending when you query items. Sadly, the Realtime Database doesn’t support descending ordering, such as by the number of likes. Firestore was built partly in response to this, and supports more query options and much more. However, that might be another tutorial!
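
To make that limitation concrete: if you wanted the ten most-liked jokes, the best the Realtime Database could do is hand you the top ten in ascending order and leave the reversal to the client. A sketch, assuming a numberOfLikes field that this project doesn’t actually store:

FirebaseDatabase.getInstance().reference.child("joke")
    .orderByChild("numberOfLikes") // hypothetical field, for illustration
    .limitToLast(10)               // the ten highest values, ascending
    .addListenerForSingleValueEvent(object : ValueEventListener {
      override fun onCancelled(error: DatabaseError?) = Unit

      override fun onDataChange(snapshot: DataSnapshot?) {
        // Reverse on the client to get a descending list.
        val topJokes = snapshot?.children?.toList()?.reversed()
      }
    })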

Playing the game

You’ve finally reached the part where you’ll work with data. You’ll start off by finishing the user registration process. Remember how I said you’d have to store the user in the database after registration? Well, open up the FirebaseDatabaseManager under the di/firebase/database folder.

Add a private value to the constructor named database and of the type FirebaseDatabase. Your class definition should look like this:

class FirebaseDatabaseManager @Inject constructor(
    private val database: FirebaseDatabase
) : FirebaseDatabaseInterface

And add the following keys at the top of the file, above the class:

private const val KEY_USER = "user"
private const val KEY_JOKE = "joke"
private const val KEY_FAVORITE = "favorite"

You can now connect to the database and store or read data.

Update createUser() to the following to finish up the user creation:

override fun createUser(id: String, name: String, email: String) {
  val user = User(id, name, email)

  database
    .reference        // 1
    .child(KEY_USER)  // 2
    .child(id)        // 3
    .setValue(user)   // 4  
}

You’ll call this method right after signing up a user. Here’s what each line means:

  1. Get a reference to the database, which effectively points to the root of the tree/directory.
  2. From the root directory, open up the “user” directory.
  3. Inside the “user” directory, open up the directory that matches the id of this particular user.
  4. Store a new user in that directory by calling the setValue(user) method.

And that’s how you store data! If you want to delete a value somewhere, just call setValue(null).
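
For example, a hypothetical helper for removing a user could look like this; removeValue() is equivalent shorthand:

// Hypothetical helper, not part of the starter project.
fun deleteUser(id: String) {
  database.reference
      .child(KEY_USER)
      .child(id)
      .setValue(null) // or equivalently: removeValue()
}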

So the database is like a big family. There are the parents, and each parent can have a bunch of children.

Build and run the app, log out from the Profile tab if you need to, and try to register a new user. You should see a new entry in the database with the new user’s data right away, even without refreshing the Firebase page! Now that’s realtime! :]

I’ve created a couple of users myself and here is the result:

User Sample

Have a look at the structure of the database. The “user” key has two entries, each of which is a unique user. Each user on the other hand has multiple fields, representing its data.

Loading Firebase…

Now that you have a user in the database, why not read it back from the database and show it in their profile? To read data, and receive snapshots, you need to use Firebase’s Value Listeners. By adding a listener to a reference you get the values from the database. You can listen to data in three ways.

First, by calling addListenerForSingleValueEvent, you read the data only once. After you receive it, the listener is removed. This is great when you need to read something once to use it in your app.

Second, using the addValueEventListener method, you listen to a certain directory and all its changes. Even if the smallest thing changes (like a user’s name) you get the entire snapshot again. This is great for showing data that isn’t large but tends to change and can benefit from a realtime update, like a user’s profile.

Lastly, with the addChildEventListener method you subscribe to changes for each child in a directory. If you change any of them, remove them or move them, you get an event for each of the mentioned cases. More importantly, it will emit each of the children one by one, the first time you attach the listener. It’s great for things like chats and feeds where new items are added often.

You’ll use the addChildEventListener for all jokes, addValueEventListener for favorite jokes and the user profile and the addListenerForSingleValueEvent for changing the joke’s liked status.
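
One caveat: unlike the single-value variant, the other two listeners stay attached until you remove them yourself, so it’s good practice to detach them when a screen goes away. A sketch of the usual pattern, with the reference path assumed for illustration:

val reference = FirebaseDatabase.getInstance().reference.child("joke")

val listener = object : ValueEventListener {
  override fun onCancelled(error: DatabaseError?) = Unit

  override fun onDataChange(snapshot: DataSnapshot?) {
    // React to the fresh snapshot here.
  }
}

reference.addValueEventListener(listener) // start receiving updates
// ...later, e.g. when the user leaves the screen:
reference.removeEventListener(listener)   // detach to avoid leaks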

The peak of the show

There are five methods to finish up in the FirebaseDatabaseManager before your social network is ready!

Add the following code to read the jokes and the profile.

//1
override fun listenToJokes(onJokeAdded: (Joke) -> Unit) {
  database.reference.child(KEY_JOKE)
      .orderByKey()
      .addChildEventListener(object : ChildEventListener {
        override fun onCancelled(p0: DatabaseError?) = Unit
        override fun onChildMoved(p0: DataSnapshot?, p1: String?) = Unit
        override fun onChildChanged(p0: DataSnapshot?, p1: String?) = Unit
        override fun onChildRemoved(p0: DataSnapshot?) = Unit

        override fun onChildAdded(snapshot: DataSnapshot?, p1: String?) {
          snapshot?.getValue(JokeResponse::class.java)?.run {
            if (isValid()) {
              onJokeAdded(mapToJoke())
            }
          }
        }
      })
}
//2
override fun getFavoriteJokes(userId: String, onResult: (List<Joke>) -> Unit) {
  database.reference
      .child(KEY_USER)
      .child(userId)
      .child(KEY_FAVORITE)
      .addValueEventListener(object : ValueEventListener {
        override fun onCancelled(error: DatabaseError?) = onResult(listOf())

        override fun onDataChange(snapshot: DataSnapshot?) {
          snapshot?.run {
            val jokes = children.mapNotNull { it.getValue(JokeResponse::class.java) }

            onResult(jokes.map(JokeResponse::mapToJoke))
          }
        }
      })
}
//3
override fun getProfile(id: String, onResult: (User) -> Unit) {
  database.reference
      .child(KEY_USER)
      .child(id)
      .addValueEventListener(object : ValueEventListener {
        override fun onCancelled(error: DatabaseError?) = Unit

        override fun onDataChange(snapshot: DataSnapshot?) {
          val user = snapshot?.getValue(UserResponse::class.java)
          val favoriteJokes = snapshot?.child(KEY_FAVORITE)?.children
              ?.map { it?.getValue(JokeResponse::class.java) }
              ?.mapNotNull { it?.mapToJoke() }
              ?: listOf()


          user?.run { onResult(User(id, username, email, favoriteJokes)) }
        }
      })
}

Let’s go through the logic behind the implementation:

  1. When listening to jokes, you add a child listener to the “joke” directory. For each child, you parse the joke and add it to the list. Notice how you parse the data: by calling getValue(class), the snapshot is parsed to whatever data model you want (see the sketch after this list).
  2. Favorite jokes are stored on each user’s profile. Since the queries in the database are limited, you cannot request all jokes by ids. This is why jokes are stored on the user. You’ll read favorite jokes from each user’s profile directory. Every time that directory changes, you get an event, since you need to know which jokes are liked in order to show the appropriate icon in the list. The same goes for the profile, as you’re showing the number of favorite jokes there.
  3. To look up a profile, you call child(KEY_USER) to enter the “user” directory and then child(id) to find a specific user. However, there is more to a profile than just the user part. Since lists are actually HashMaps in Firebase, you have to manually parse each item. That’s the reason for the somewhat ugly block of code mapping all the children.
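
A note on getValue(Class): for the parsing above to work, the response classes need a no-argument constructor and properties matching the keys in the snapshot. Here’s a hypothetical sketch of JokeResponse; the real starter-project class may differ:

// Default values give Kotlin a no-argument constructor,
// which Firebase requires for getValue(Class).
data class JokeResponse(
    val id: String = "",
    val userId: String = "",
    val userName: String = "",
    val title: String = "",
    val description: String = ""
) {
  fun isValid() = title.isNotBlank() && description.isNotBlank()
  fun mapToJoke() = Joke(id, userId, userName, title, description)
}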

Closing up

In order to add a new joke, create a child inside the “jokes” directory, and set the value to the joke object:

override fun addNewJoke(joke: Joke, onResult: (Boolean) -> Unit) {
  val newJokeReference = database.reference.child(KEY_JOKE).push()
  val newJoke = joke.copy(id = newJokeReference.key)

  newJokeReference.setValue(newJoke).addOnCompleteListener {
    onResult(it.isSuccessful && it.isComplete)
  }
}

By calling push(), you generate a new key, as mentioned before. Set the value to the new joke and add a listener to be notified about completion.

Changing whether the joke is in your favorites or not is a bit trickier. You first have to know if it’s in your favorites, and then remove it if it already is, or add if it isn’t:

override fun changeJokeFavoriteStatus(joke: Joke, userId: String) {
  val reference = database.reference
      .child(KEY_USER)
      .child(userId)
      .child(KEY_FAVORITE)
      .child(joke.id)

  reference.addListenerForSingleValueEvent(object : ValueEventListener {
    override fun onCancelled(error: DatabaseError?) {}

    override fun onDataChange(snapshot: DataSnapshot?) {
      val oldJoke = snapshot?.getValue(JokeResponse::class.java)

      if (oldJoke != null) {
        reference.setValue(null)
      } else {
        reference.setValue(joke)
      }
    }
  })
}

By listening for the value of a child, you can tell that the child doesn’t exist if the value is null, or in your case, that a joke isn’t a favorite. The same goes the other way around: if the value isn’t null, you know the child exists.
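
Equivalently, you could lean on DataSnapshot’s exists() method instead of parsing the value. A small sketch of the same check:

override fun onDataChange(snapshot: DataSnapshot?) {
  if (snapshot?.exists() == true) {
    reference.setValue(null) // already a favorite: remove it
  } else {
    reference.setValue(joke) // not a favorite yet: add it
  }
}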

Build and Run the app. Go and add a new joke. You should see it in the all jokes list:

Finished App

It’s not yet added to your favorite jokes; click on the heart icon to add it to favorites. Go back to the Firebase console and the Database/Data tab, and you’ll see the favorite joke in the data. If you click the heart icon again and switch back to the console, it should be gone. The power of realtime updates! :]

Take a look at the Firebase dashboard, your database should have some data in it now.

Where to go from here

Congratulations on creating your joke-sharing social app! Wasn’t it just puntastic! You can download the final project by using the link at the top or bottom of this tutorial.

There is much more to Firebase than the Auth and Database services; try exploring every aspect of it by following the documentation.

Also check out Firestore, the upgraded version of the database we used.

You can also read the Joe Birch article series about each of the services.

And most importantly, feel free to leave comments and suggestions in the section below! Your feedback means a lot! :]



Metal by Tutorials: First 8 Chapters Available Now!


We’re excited to announce that the first 8 chapters of our book, Metal by Tutorials, are now available!

This update adds four new chapters to the book, which is in early release and available on our online store:

  • Chapter 5, Lighting Fundamentals: Lighting and beyond! In this chapter you’ll learn basic lighting; but more importantly, you’ll learn how to craft data in shaders, and be on the path to mastering shader artistry. Lighting, shadows, non-photorealistic rendering – these are all techniques that start with the methods that you’ll learn in this chapter.
  • Chapter 6, Textures and Samplers: Now that you have light in your scene, the next step is to add color to it. In this chapter you’ll learn about UV coordinates, texturing a model, samplers, mipmaps, and the asset catalog.
  • Chapter 7, Maps and Materials: This is the final chapter on how to render still models. In the previous chapter, you rendered a simple house with a single color texture imported using Model I/O. In this chapter you’ll find out how to use material groups to describe a surface, and how to design textures for micro detail.
Learn about lighting, maps and materials, textures and more in Metal!

  • Chapter 8, Character Animation: Rendering still models is a great achievement, but rendering models that move is even more fun. So far your models have been simple inanimate props. You’ll now render characters with body movement and give them personality. In this chapter, you’ll start off by bouncing a ball, and then move on to rendering a friendly skeleton.

WWDC 2018 Metal Changes

With OpenGL and OpenCL now deprecated, WWDC 2018 brings Metal to the forefront of graphics and compute on macOS, iOS and tvOS!

We’re tremendously excited about the new features in Metal 2, and, of course, we will be covering these in upcoming editions of our book:

  • Ray tracing using Metal Performance Shaders: Ray tracing is far more accurate than rasterization, and real-time ray tracing is the ultimate goal of rendering. Ray tracing is traditionally performed on the CPU, but using Metal Performance Shaders, you’ll be able to parallelize it on the GPU. If you have multiple external GPUs, you’ll be able to achieve phenomenal performance.
  • GPU-Driven Command Encoding: You can now encode your commands on the GPU rather than the CPU, using a compute shader.
  • New GPU debugging tools: These look simply magnificent! The dependency viewer visualizes all your render passes and combines them into a beautifully rendered flowchart. The interactive shader debugger lets you examine your pixels as you step through your shader functions and change your shader output on the fly.

    Vertices aren’t forgotten – you can inspect them with the new geometry viewer. This has a free-fly camera so that you can investigate issues outside your camera frame. If you have an iPhone X or newer, you’ll be able to use the A11 shader profiler to see how long each statement in your shaders takes to execute. Apple has really worked hard on these and other GPU profiling tools!

And as always, you’ll receive free updates for this book when you purchase the early-access version!

Where to Go From Here?

Here’s how you can get your hands on a copy of Metal by Tutorials:

The Metal by Tutorials book team and I hope you enjoy this second early-access edition of the book!


ARKit by Tutorials: Upcoming Changes for ARKit 2?


We’ve just launched our new book, ARKit by Tutorials. And given the nature of such a new technology, it’s no surprise that WWDC 2018 introduced ARKit 2, the successor to the current ARKit 1.5 framework!

What does this mean for ARKit by Tutorials? It means we’ll be updating the book to bring you all the new goodness inside ARKit 2, including:

  • Shared AR Experiences: Create AR experiences you can share in multiplayer mode.
  • Updates to Image Tracking: Use object persistence in your scenes.
  • 3D Object Scanning: Detect physical 3D objects in your environment and use them in your AR scenes.
  • And much, much more!

We’re still digesting all the changes to ARKit announced at WWDC, but rest assured, our book team is already hard at work planning the updates for the ARKit 2 edition!

What’s in ARKit by Tutorials?

ARKit is Apple’s mobile AR development framework. With it, you can create an immersive, engaging experience, mixing virtual 2D and 3D content with the live camera feed of the world around you.

What sets ARKit apart from other AR frameworks, such as Vuforia, is that ARKit performs markerless tracking. ARKit instantly transforms any Apple device with an A9 or higher processor into a markerless AR-capable device. At this very moment, millions of Apple users already have a sophisticated AR device right in their pockets!

If you’ve worked with any of Apple’s other frameworks, you’re probably expecting that it will take a long time to get things working. But with ARKit, it only takes a few lines of code — ARKit does most of the heavy lifting for you, so you can focus on what’s important: creating an immersive and engaging AR experience.

In this book, you’ll build five immersive, great-looking AR apps:

  • Tabletop Poker Dice
  • Immersive Sci-Fi Portal
  • 3D Face Masking
  • Location-Based Content
  • Monster Truck Sim

By the end of the book, you’ll have a ton of great experience working with the ARKit framework, including how to work with 3D objects and textures, add game physics, detect placeholders, work with face-based AR and blend shapes, record your experience with ReplayKit, and more!

Where to Go From Here?

If you haven’t yet gotten your hands on a copy of ARKit by Tutorials, what are you waiting for?

The book is currently on sale as part of our Game On book launch event, where you can get the book for just $44.99 — that’s a savings of $10!

And as usual, all purchasers of our digital edition will receive free updates to any future editions of the book.

The ARKit Book team and I hope you enjoy the rest of WWDC 2018, and we hope you enjoy the book!


Unity Tutorial Part 2: GameObjects


This is an excerpt taken from Chapter 1, “Hello Unity” of our book Unity Games by Tutorials, newly updated for Unity 2018.1, which walks you through creating four Unity games from scratch — and even shows you how to develop for VR in Unity. Enjoy!

In the first tutorial of this series, our brave hero was left alone with fantasies in his barely attached bobblehead of encountering aliens to blast.

In this tutorial, you’ll make his dreams come true. But, first, you must understand one crucial concept: GameObjects.

Note: This tutorial project continues where the previous tutorial left off. If you didn’t follow along, or you want to start fresh, please download the materials for this tutorial and use those as your starting point.

Introducing GameObjects

In Unity, game scenes contain objects with a name that you’ll never guess: GameObjects. :]

There are GameObjects for your player, the aliens, bullets on the screen, the actual level geometry — basically, everything in the game itself.

Just as the Project Browser contains all assets, the Hierarchy contains a list of GameObjects in your scene.

To see this, you’ll need to have your project open so, if it isn’t open already, do so now.

Note: Unity’s way of opening project files is a bit strange. You can navigate your file system and look for a scene file. Double-click the scene, and Unity will open the welcome screen. From that screen, you can select the project that you want to open.

Or, if you start Unity or click File\Open Project, you’ll see a project list. Select the desired project and Unity will take care of the rest.

If your project isn’t listed, click the Open button to bring up a system dialog box. Instead of searching for a particular file, try navigating to the top-level directory of your Unity project and click Select Folder. The engine will detect the Unity project within the folder and open it.

When you open an older Unity project, you may get a warning from Unity that the project was created with an older version of Unity. All this means is that Unity may re-import all your assets. That is, it will create new metadata for all your assets. Needless to say, for large games, this re-import could take a long time. Also, since Unity is updating your project, you should ALWAYS make a backup first.

Once your project is open, take a look at your Hierarchy and count the GameObjects.

Your first thought may be three, because you added three GameObjects in the previous tutorial: the arena, the space marine body and the space marine head.

However, there are two other GameObjects: Main Camera and Directional Light. Remember how Unity creates these by default? Yes, these are also GameObjects.

Yet, there are even more GameObjects. You’ll notice that there are disclosure triangles to the left of the GameObjects that you imported.

Holding down the Alt button on PC, or Option on Mac, click each disclosure triangle.

As you can see, you have so many GameObjects:

Three important points to remember:

  • GameObjects can contain other GameObjects. On a base level, this useful behavior allows organizing and parenting of GameObjects that are related to each other. More importantly, changes to parent GameObjects may affect their children — more on this in just a moment.
  • Models are converted into GameObjects. Unity creates GameObjects for the various pieces of your model that you can alter like any other GameObject.
  • Everything contained in the Hierarchy is a GameObject. Even things such as cameras and lights are GameObjects. If it’s in the Hierarchy, it’s a GameObject that’s subject to your command.

Our hero is so bored that he’s picking his nose with his gun. You need to get him moving but, first, you need to reposition your GameObjects.

Moving GameObjects

Before starting, collapse all the GameObject trees by clicking the disclosure triangles.

Select BobbleArena in the Hierarchy and take a moment to observe the Inspector, which provides information about the selected GameObject.

GameObjects contain a number of components, which you can think of as small units of functionality. There is one component that all GameObjects contain: The Transform component.

The Transform component contains the position, rotation and scale of the GameObject. Using the inspector, you can set these to specific numbers instead of having to rely upon your eye. When hovering the mouse over the axis name, you’ll see arrows appear next to the pointer.

Press the left mouse button and drag the mouse either left or right to adjust those numbers. This trick is an easy way to adjust the values by small increments.

With BobbleArena selected, set Position to (6.624, 13.622, 6.35). As I developed this game, the arena ended up in this position. You could just as well have placed it in the center of the game world. Set the Scale to (2.0, 2.0, 2.0). This gives the player more room to navigate the arena.

If you zoom out of the Scene view, you’ll probably notice the space marine is suspended in the void; most likely, he’s questioning his assumptions about gravity. You could move his head and then his body, but your life will be much easier if you group his body parts into one GameObject.

In the Hierarchy, click the Create button and select Create Empty.

An empty is a GameObject that has only the one required component all GameObjects share: the Transform component, as you learned earlier.

Note: You can also create an empty GameObject by clicking GameObject\Create Empty. This goes for other things such as components. There is no “preferred” way to do things — go with whatever works best for your workflow.

Parenting the Space Marine

In the Hierarchy, you’ll see your new GameObject creatively named: GameObject. Single-click the GameObject and name it SpaceMarine.

You can insert spaces in GameObjects’ names, e.g., Space Marine. However, for the sake of consistency, you’ll use camel casing for names in this tutorial.

Drag BobbleMarine-Body and BobbleMarine-Head into the SpaceMarine GameObject.

A few things happen when you parent GameObjects. In particular, the position values for the children change even though the GameObjects don’t move. This modification happens because GameObject positions are always relative to the parent GameObject.

Select the SpaceMarine in the Hierarchy. Go to the Scene view and press F to focus on it. Chances are, the arena is blocking your view.

Thankfully, you don’t need to get Dumbledore on speed dial. You can make it disappear! Select BobbleArena in the Hierarchy, and in the Inspector, uncheck the box to the left of the GameObject’s name. This will make the arena disappear.

You should only see the hero now. Select the SpaceMarine GameObject. In the Inspector, mouse over the X position label until you see the scrubber arrows. Hold the left mouse button and move your mouse left or right. Notice how all the GameObjects move relative to the parent.

As you can see, having parents does have its advantages. No offense to any parentless deities out there.

When you parent one GameObject in another, the position of the child GameObject won’t change. The difference is that the child GameObject is now positioned relative to the parent. That is, setting the child to (0, 0, 0) will move the child to the center of the parent rather than the center of the game world.

You’ll do this now to assemble your marine.

Select BobbleMarine-Body, and in the Inspector, set Position to (0, 0, 0). Next, select BobbleMarine-Head and set Position to (1.38, 6.16, 1.05) in the Inspector.

Congratulations! Your hero is assembled.

Positioning the Marine

Now it’s time to place the space marine in his proper starting position. Select BobbleArena, and in the Inspector, check the box next to the name to re-enable it.

Select SpaceMarine, and in the Inspector, set its position to (4.9, 12.54, 5.87). Also, set the rotation to (0, 0, 0). Your marine should end up directly over the hatch. If he doesn’t, feel free to tweak the values until he does.

Once the hero is in place, press F in the Scene view so you can see him standing proud.

The hero should now be positioned precisely over the elevator, ready to rock. Unfortunately for him, his grandiose rock party will soon degrade into a bug hunt.

Creating a Prefab

This game features creepy crawly bugs and, like the hero, they’re composed of many pieces. Some assembly is required.

In the Hierarchy, click the Create button and select Create Empty from the drop-down menu. Single-click the GameObject to name it Alien.

Select Alien in the Hierarchy and, in the Inspector, set the position to: (2.9, 13.6, 8.41).

From the Project Browser, drag BobbleEnemy-Body from the Models folder into the Alien GameObject.

Set BobbleEnemy-Body Position to (0, 0, 0). Now the alien and hero should be side by side in the arena.

As creepy as the alien is without a head, the hero needs more to shoot at than that spindly little frame. From the Project Browser, drag BobbleEnemy-Head into the Alien GameObject. Set Position to (0.26, 1.74, 0.31), Rotation to (89.96, 0, 0) and Scale to (100, 100, 100).

That’s one fierce little bug. They go together so well that you could mistake them for the next superstar crime-fighting duo.

At this point, you have one parent GameObject for the hero and another for the alien. For the hero, this works great because you need only one. For the alien, you’re going to need many — so, so many.

You could copy and paste the alien to make clones, but they’d all be individuals. If you needed to make a change to the alien’s behavior, you’d have to change each instance.

For this situation, it’s best to use a prefab, which is a master copy that you use to make as many individual copies as you want. Prefabs are your friend because when you change anything about them, you can apply the same change to the rest of the instances.

Making a prefab is simple. Select the Alien GameObject and drag it into the Prefabs folder in the Project Browser.

A few things have changed. There’s a new entry in your Prefabs folder with an icon beside it. You’ll also note the name of the GameObject in the Hierarchy is now blue. You’ll also notice that there are already prefabs in that folder. These are prefabs that you imported with the rest of the assets.

Note: You don’t have to drag your model into that specific folder to create a prefab — all that’s required is dragging a GameObject into any folder in the Project Browser. Having a Prefabs folder is simply good housekeeping.

The blue indicates the GameObject has been either instanced from a prefab or a model, such as the BobbleArena. Select the Alien GameObject in the Hierarchy and look at the Inspector. You’ll notice some additional buttons.

Here’s the breakdown of these new buttons:

  • Select will select the prefab inside the Project Browser. This is useful when you have lots of files and want easy access to the prefab to make changes.
  • Revert will undo changes you’ve made to your instance. For example, you might play around with size or color but end up with something horrible, like a viciously pink spider. You’d click the Revert button to restore sanity.
  • Apply will apply any changes you made to that instance to its prefab. All instances of that prefab will be updated as well.

Creating a prefab instance is quite easy. Select the Alien prefab in the Project Browser and drag it next to your other Alien in the Scene view.

You can also drag an instance to the Hierarchy. As you can see, creating more aliens is as easy as dragging the Alien prefab from the Project Browser. But you don’t need droves of aliens yet, so delete all the Aliens from the Hierarchy. You delete a GameObject by selecting it in the Hierarchy and pressing Delete on your keyboard (Command–Delete on a Mac), or you can right-click it and select Delete.

Fixing the Models

The next to-do is fixing some of your models. In this case, Unity imported your models but lost references to the textures. You’ll fix this by adding a texture to a material.

In Unity, a material is a texture with a program attached to it, known as a shader. You can write your own shaders, which is beyond the scope of this tutorial. A shader determines how a texture will look. For instance, you could write a shader to make a stone texture look gritty or a metal texture appear glossy. Thankfully, Unity comes with its own shaders to do that for you.

In the Models folder of your Project Browser, drag a BobbleArena-Column into the Scene view. You’ll notice it has a dull white material.

If a material name or texture name changes in the source package, Unity will lose connection to that material. It tries to fix this by creating a new material for you but with no textures attached to it.

To fix this, you have to assign a new texture to the material. In the Project Browser, select the Models subfolder and then, expand the BobbleArena-Column to see all the child objects.

Next, select Cube_001 in the Project Browser, and in the Inspector, click the disclosure triangle for the Main_Material shader.

You’ll see that there are a lot of options! These options configure how this material will look.

For example, the Metallic slider determines the metal-like quality of the material. A high metallic value means the texture will reflect light much like metal. You’ll notice a grey box to the left of most of the properties. These boxes are meant for textures.

In this case, all you want is an image on your model. In the Project Browser, select the Textures folder. Click and drag the Bobble Wars Marine texture to the Albedo property box located in the shader properties.

The Albedo property is for textures, but you can put colors there as well. Once you do this, your columns will now have a texture.

The arena also suffered a texture issue.

In the Hierarchy, expand the BobbleArena and then expand Floor. Select the Floor_Piece.

In the Inspector, you’ll notice two materials attached to the floor piece. The BobbleArena-Main_Texture material does not have a texture assigned to it. You can tell because the material preview is all white.

Like you did for the column, select the Textures folder in the Project Browser. Click and drag the Bobble Wars Marine texture to the Albedo property box located in the shader properties.

Your floor will now acquire borders. How stylish!

You’ll also notice that not just one, but all the floor sections acquired borders. This is because they all use the same material.

Adding Obstacles

Now that you have your models fixed, it’s time to add a bunch of columns to the arena. You’ll make a total of seven columns, and you’ll use prefabs for this task.

Note: Whenever it seems like you should make a duplicate of a GameObject, use a prefab instead — it’s another best practice. Some Unity developers insist on using prefabs for everything, even unique objects.

The thinking is that it’s much easier to create a prefab and make duplicates than it is to turn a group of existing GameObjects into prefab instances. The former method requires minimal work, whereas the latter requires you to extract the commonalities into a prefab while maintaining any unique changes for each instance. That results in a lot of work.

In the Hierarchy, drag the BobbleArena-Column into your Prefabs folder to turn it into a prefab.

With the BobbleArena-Column instance still selected in the Hierarchy view, go to the Inspector and set position to (1.66, 12.83, 54.48). Set scale to (3.5, 3.5, 3.5). You do want all the prefabs to be the same scale, so click the Apply button.

Now it’s time for you to make the rest of the columns.

Dragging one column at a time from the Project Browser into the Scene view can be a little tedious, especially when there are several instances. Instead, duplicate a column by selecting one in the Hierarchy and pressing Ctrl–D on PC or Command–D on Mac.

Create a total of six duplicates and give them following positions:

  • Column 1: (44.69, 12.83, 28.25)
  • Column 2: (42.10, 12.83, 30.14)
  • Column 3: (8.29, 12.83, 63.04)
  • Column 4: (80.40, 12.83, 13.65)
  • Column 5: (91.79, 12.83, 13.65)
  • Column 6: (48.69, 12.83, 33.74)

You should now have seven columns in the arena.

The arena looks good, but the Hierarchy is a bit messy. Tidy it up by clicking the Create button and select Create Empty. Rename the GameObject to Columns and drag each column into it.

You’ll notice that the columns have a similar name with unique numbers appended to them. Since they essentially act as one entity, it’s fine for them to share a name. Hold the Shift key and select all the columns in the Hierarchy. In the Inspector, change the name to Column.

Note: As you can see, it’s possible to change a common property for a bunch of GameObjects at once.

Creating Spawn Points

What good are bloodthirsty aliens unless they spawn in mass quantities? In this section, you’ll set up spawn points to produce enemies to keep our hero on his toes.

So far, you’ve assembled the game with GameObjects that you want the player to see. When it comes to spawn points, you don’t want anybody to see them. Yet, it’s important that you know where they lay.

You could create a 3D cube and place it in your scene to represent a spawn point, then remove it when the game starts, but that’s a clunky approach. Unity provides a simpler mechanism called labels, which are GameObjects visible in the Scene view, but invisible during gameplay. To see them in action, you’ll create a bunch of different spawn points similar to how you created the columns.

Click the Create button in the Hierarchy and select Create Empty. Give it the name Spawn and set position to (5.44, 13.69, 90.30).

In the Inspector, click the colored cube next to the checkmark. A flyout with all the different label styles will appear. Click blue capsule.

Look at your Scene view; you’ll see it’s been annotated with the spawn point.

You may not see the label in your Scene. If this is the case, you will need to increase the size of the label. To do so, click the Gizmos button in the scene view and drag the 3D Icons slider to the far right. This will increase the size of the labels so you can see them when zoomed out.

You need to create 10 more spawn points. Make placement easier by doing the following:

  1. In the Scene view, click on the center cube of the scene gizmo to switch the Scene view to Isometric mode.
  2. Click the green y-axis arrow so that you look down on the scene.

Now go ahead with duplicating and repositioning 10 more spawn points.

Don’t worry if you don’t get them exactly the same – game design is more of an art than a science!

Once you’re done, click the Create button in the Hierarchy, select Create Empty and name it SpawnPoints. Drag all the spawn points into it. Batch rename them like you did with the columns to Spawn.

Congratulations! Your game is now properly set up. Make sure to save!

Where to Go From Here?

At this point, you have your hero, his enemy the alien and the arena in which they will battle to the death. You’ve even created spawn points for the little buggers. As you set things up, you learned about:

  • GameObjects and why they are so important when working with Unity.
  • Prefabs for when you want to create many instances of a single GameObject.
  • Labels to help you annotate game development without interfering with the game.

There’s still no action in your game, but that’s fine. You’re ready to learn how to give your game the breath of life and take it away (via the space marine’s magnificent machine gun).

In the next section of this tutorial mini-series, you’ll add some action to this game, learn how to work with Components and let your space marine fire away at will!

If you’re enjoying this tutorial series and want to learn more, you should definitely check out Unity Games by Tutorials.

The book teaches you everything you need to know about building games in Unity, whether you’re a beginner or a more experienced game developer. In the book, you’ll build four great games:

  • A 3D twin-stick shooter
  • A classic 2D platformer
  • A 3D tower-defense game (with virtual reality mode!)
  • A first-person shooter

Check out the trailer for the book here:

If you have questions or comments on this tutorial, please leave them in the discussion below!


Game On Book Launch Giveaway Winners — and Last Day For Discount!


Hopefully you managed to get your game on with all the books and tutorials we’ve released over the past two weeks as part of our Game On Book Launch event!

From building beautiful, immersive apps with ARKit, to building your own game engine in Metal, to crafting classic beat ’em up, 3D first-person shooters and 2D platformers in Unity, there’s something for every gaming fan in this event, from the 8-bit heroes all the way up to today’s cutting-edge AR game developers.

And I also hope you took advantage of the book launch discounts over at our online store, as well!

As part of the celebration, we’re giving away a pile of books today to twelve (twelve!) lucky readers.

See below to find out who’s won — and to find out how to get your discount on these great books before time runs out!

Game On Book Launch Giveaway Winners

To enter the giveaway, we asked you to add a comment to the announcement post with the answer to this question:

What book are you most excited about in our Game On book launch event?

We’ve randomly selected three winners for each of the books featured in our Game On event. The winners are:

ARKit by Tutorials winners:

  • chuck_jay
  • ron_coolson
  • mattbarr

Metal by Tutorials winners:

  • grzehotnik
  • typarks
  • mr_berna

Beat ’Em Up Game Starter Kit – Unity winners:

  • saurabh19851
  • seanperez29
  • cherry

Unity Games by Tutorials Third Edition winners:

  • justintimestdio
  • charlesmuchene
  • asmodeo

Congratulations! We have added a free copy of the book you’ve won to your account. Enjoy!

Last Day for the Discount!

Today, June 8 2018, is the absolute last day to get the discount on any of the books featured in our Game On event. So if you’re looking to learn more about Metal, get started with ARKit, or build 2D and 3D games in Unity, then today is the day to do it!

You can grab the discount at any one of the book links below:

Thanks to everyone who entered the giveaway, bought the books, read the tutorials, left comments in the forums, shared our posts on Twitter and sent us some great comments over the last two weeks. We truly appreciate you for making what we do possible!


Unity Tutorial Part 3: Components


This is an excerpt taken from Chapter 1, “Hello Unity” of our book Unity Games by Tutorials, newly updated for Unity 2018.1, which walks you through creating four Unity games from scratch — and even shows you how to develop for VR in Unity. Enjoy!

Welcome to the third and final part of this Unity mini-series! In this tutorial, you’ll learn all about Components in Unity while you give your hero free rein to blast the landscape with bullets!

This tutorial continues on from the previous tutorial, Unity Games, Part 2: GameObjects.

At this point, you’ve accomplished your goal of having all the main actors ready, but it’s essentially an empty stage. Sure, you might win some avant-garde awards for a poignant play on the lack of free will, but that’s not going to pay the bills.

In this tutorial, you’ll add some interactivity to your game through the use of Components, which are fundamental to Unity game development. If you were to think of a GameObject as a noun, then a component would be a verb, or the part that performs the action; a component does something on behalf of a GameObject.

You’ve learned a bit about components already. In the previous two parts in this tutorial series, you learned how each GameObject has one required component: The Transform component, which stores the GameObject’s position, rotation and scale.

But Unity comes with far more components than that. For instance, there’s a light component that will illuminate anything near it. There’s an audio source component that will produce sound. There’s even a particle system component that will generate some impressive visual effects. And, of course, if there isn’t a component that does what you want, you can write your own.

In this tutorial, you’ll learn about several types of components, including the Rigidbody component, the script component, and more. By the end of this tutorial, your marine will be running and gunning!

Getting Started

If you completed the last tutorial, open your current Bobblehead Wars project to pick up where you left off. If you got stuck or skipped ahead, download the materials for this course using the links at the top or bottom of this page, and open the Bobblehead Wars starter project from this tutorial’s resources.

Currently, the space marine only knows how to stand like a statue. He needs to move around if he’s going to avoid being some alien’s snack.

The first thing you’ll add is a Rigidbody component, which opts the GameObject into the physics engine. By adding it, you’ll enable the GameObject to collide with other GameObjects.

Adding the Rigidbody Component

In the Hierarchy, select the SpaceMarine GameObject and click the Add Component button in the Inspector.

You’ll see many different categories. When you know which component you need, simply search by name. Otherwise, select one of the categories and pick the best option.

Click the Physics category then select Rigidbody.

You’ll see that a Rigidbody component was attached to the GameObject. Congratulations! You’ve added your first component.

Each component has its own set of properties, values and so forth. These properties can be changed in the Inspector or in code. Components also have their own icons to make it easy to determine their type at a glance.

You’ll notice some icons in the top right-hand corner of each component, like this:

The first icon is the Reference icon. Click it. It’ll open another window with the documentation page for that component. If you installed the documentation, this page is on your computer — and, yes, the search bar works.

The second icon, new in Unity 2018, is the presets button. As you customize your components, you may want to reuse a particular configuration you created previously.

In the Rigidbody component, check the IsKinematic checkbox. Also, uncheck the Use Gravity option. You’ll learn about those options in a moment. But once you have unchecked them, click the presets button.

You can save that configuration as a preset, then switch to it as needed. You can also create new GameObjects with those assigned presets, saving you time.

This dialog lists all of your presets. Click Save current to…, select the Presets folder, and call the preset Kinematic. Now when you click the presets button again, you’ll see your newly saved preset. Presets are a great way to save your current configuration settings when you want to make changes.

The last of the three buttons in the top right-hand corner of the component is a gear icon. Click it. This dialog will appear:

Here are the most important options:

  • Reset will reset the component to its default values.
  • Move to Front and Move to Back adjust the ordering of sprites in 2D games.
  • Remove Component will delete the component from the GameObject — you can undo this action.
  • Copy Component allows you to copy a component from one GameObject and paste it onto another.
  • Paste Component as New will paste a copied component to a GameObject.
  • Paste Component Values allows you to overwrite the values of the current component from a copied component. Specifically, you can copy the values of a component while your game is being played and, when you stop the game, paste those values onto another component. This is quite handy because sometimes you want to tweak things as you play to see what works in practice.

From the menu, select Remove Component. Now click the Add Component button again, and add a Rigidbody component back to the GameObject. Remember, it’s in the Physics category. Finally, click the presets button and select the Kinematic preset.

Notice that the Rigidbody component was auto-magically updated per your preset. But you may be asking, what’s a Rigidbody?

By adding the Rigidbody component to the space marine, you’ve made it so he can now respond to collision events and to gravity. However, you don’t want him bouncing off enemies or walls. You definitely want to know about those collision events, but you don’t want the hero to fly out of control just because he bumped into a column.

The isKinematic property tells the physics engine that you’re manually controlling the marine, rather than letting the physics engine move the marine for you. But the physics engine is still aware of where the marine is and when it collides with something. This is helpful so that when the marine collides with an alien, the physics engine will notify you so you can make something happen — like the space marine getting chomped!

By unchecking the Use Gravity option, the space marine won’t be affected by gravity. Again — you don’t need this because you’ll be moving the marine manually and are only using the Rigidbody for collision detection.
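
You’ll learn all about scripts shortly, but for reference, these same settings can also be made from code. Here’s a minimal sketch, assuming a hypothetical helper script named KinematicSetup attached to the marine:

using UnityEngine;

public class KinematicSetup : MonoBehaviour
{
  void Start()
  {
    // Grab the Rigidbody component attached to this GameObject.
    Rigidbody body = GetComponent<Rigidbody>();

    // Mirror the Inspector settings: you'll move the marine manually,
    // so the physics engine shouldn't move him or apply gravity.
    body.isKinematic = true;
    body.useGravity = false;
  }
}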

Now comes the fun part: It’s time to make that marine dance! Or move. That’s fine, too.

Unity provides some built-in components to make movement easy. But for learning purposes, in this tutorial you’ll do everything manually through the power of scripting.

Introducing Scripting

For the non-programmer, at best, scripting is a small barrier to entry. At worst, scripting is the devil that never sleeps.

Coming from an arts background, I can say that scripting is not as hard as you might imagine. Like anything, it just requires some time and patience to understand how it works.

In this tutorial, you’ll write scripts in C#, which is a popular language developed by Microsoft that works for both mobile and desktop apps. It’s feature-rich and fully versatile. Unfortunately, I can’t teach it all within this tutorial.

Thankfully, you can learn C# at raywenderlich.com where there’s a free course that teaches the language from the ground up, for complete beginners.

It’s designed for non-programmers and taught within the context of Unity.

If you’re a beginner to programming, definitely check that out first. If you’re an experienced developer, or a particularly brave beginner, feel free to continue along, since everything in this tutorial is step-by-step; just understand that there may be some gaps in your understanding without C# knowledge.

Why Not JavaScript?

Back in the day, Unity shipped with JavaScript and a Python variant called Boo. Most developers used C#, so in 2017 support was dropped for JavaScript; Boo had been dropped a few years earlier. Now, all scripting is done entirely in C#.

You may encounter tutorials that reference JavaScript, but you can’t even create JavaScript files in the editor anymore. This is a good thing: Unity has since focused on its C# implementation, updating it to a nearly contemporary version of the language.

C# may take some time to learn, but you can leverage those programming skills outside of Unity. For instance, if you find yourself disliking game development (or having a hard time making a living) but enjoying the language, you can transition those skills into a C# development job, creating desktop or mobile apps, or even developing backend server apps.

The way coding works in Unity is that you create scripts. Scripts are simply another type of component that you attach to GameObjects, except you write the code for them yourself.

A script derives from a class called MonoBehaviour, and you can override several methods to get notified upon certain events:

  • Update(): This event occurs every single frame. If your game runs at sixty frames per second, Update() is called sixty times per second. Needless to say, you don’t want to do any heavy processing in this method.
  • OnEnable(): This is called when a GameObject is enabled and also when an inactive GameObject suddenly reactivates. Typically, you deactivate GameObjects when you don’t need them for a while but will have a need at a later point in time.
  • Start(): This is called once in the script’s lifetime and before Update() is called. It’s a good place to do setup and initialization.
  • OnDestroy(): This is called right before the object goes to the GameObject afterlife. It’s a good place to do cleanup, such as shutting down network connections.

There are many other events that you’ll discover throughout this tutorial. To see a complete listing, head to the MonoBehaviour reference on Unity’s site.
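
To make these events concrete, here’s a minimal sketch of a script overriding a few of them (the class name and log messages are purely illustrative):

using UnityEngine;

public class LifecycleLogger : MonoBehaviour
{
  void OnEnable()
  {
    // Runs each time the GameObject becomes active.
    Debug.Log("Enabled");
  }

  void Start()
  {
    // Runs once, before the first Update(); do setup here.
    Debug.Log("Started");
  }

  void Update()
  {
    // Runs every frame; keep this method lightweight.
  }

  void OnDestroy()
  {
    // Runs just before the GameObject is destroyed; clean up here.
    Debug.Log("Destroyed");
  }
}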

Creating Your First Script

It’s showtime!

You have many options for creating a script. You could click the Add Component button and then select New Script.

But I’d like you to try it this way: Select the Scripts folder in the Project Browser, and then click the Create button. Select C# Script from the drop-down menu and name it PlayerController.

You’ll see your new script appear in the Scripts folder. Drag it from the Scripts folder onto the SpaceMarine GameObject.

You should now see the script listed as one of the components on the SpaceMarine GameObject in the Inspector.

You’ve added your first custom component! Granted, it doesn’t do anything… yet.

You’ll change that in just a moment but, before you do, you need to learn about the Input Manager.

Managing Input

Inputs are the game’s controls, and if you’re developing a game for a desktop computer, your users will expect the ability to rebind keys. Unity’s Input Manager makes this easy, and is the preferred way for your game to deal with user input.

To get access to the Input Manager, click Edit\Project Settings\Input.

The Inspector will look pretty empty. Click the disclosure triangle next to the word Axes.

Once expanded, you’ll see all the pre-configured inputs that are available to you.

The first property is Size, which is the number of inputs your game uses. Decrease the number if you need fewer inputs, and increase it if you want more. The current number of inputs is more than enough for this game.

Click the disclosure triangle next to Horizontal. Here you can configure the input for the horizontal axis, i.e., left or right.

Here’s a breakdown of the key fields:

  • Name is the name Unity gives the input. This example is called ‘Horizontal’, but you can call it anything. This is also the name you reference in code, which you’ll see in a moment.
  • Descriptive Name and Negative Name are the names presented to the user in the Unity game launcher if they want to remap the keys. You can disable the Unity game launcher and provide your own key mapping interface if you’d like, so these aren’t required properties.
  • Negative and Positive Buttons are the actual keys being used. Unity allows buttons to have negative or opposite keys. For instance, the right arrow key is positive while the left arrow key is negative. You don’t need to provide a negative for all keys — it wouldn’t make sense to provide a negative key for a use action.
  • Alt Negative and Alt Positive Buttons are alternative keys. In this case, instead of the Left and Right Arrow keys, you enable the A and D keys.

The other fields mostly relate to the functionality of analog sticks. For simplicity, this game will only use keyboard input. If you wanted to make the game a bona fide twin-stick shooter, these are the options you’d tweak to create a tight control scheme.

Accessing Input From Code

Now to actually implement the control scheme. In the Project Browser, double-click the PlayerController script to launch the editor.

When the code editor opens, you’ll see your first script. Every new script contains empty implementations for Start() and Update().

Look for a blank line below the first { (a.k.a., “curly bracket”) — that’s the class definition. Add the following code there:

public float moveSpeed = 50.0f;

Note: If you are new to programming languages, it’s critical that you copy everything exactly as it is written. Any deviation will produce errors. Programming languages are very precise and become grumpy when you use the incorrect case or syntax.

If your script throws an error, carefully review the code to make sure you didn’t miss, forget or mess up something.

moveSpeed is a variable that determines how fast the hero moves around in the arena. You set it to a default value of 50 that you can change later in Unity’s interface.

Now, to write the actual moving code. In Update(), add the following:

Vector3 pos = transform.position;

This bit of code simply gets the current position of the current GameObject — in this case, the space marine since that is what this script is attached to — and stores it in a variable named pos. In Unity, a Vector3 encapsulates the x, y and z of a GameObject, i.e., the object’s point in 3D space.

Now comes the tricky part. Add the following after the previous line:

pos.x += moveSpeed * Input.GetAxis("Horizontal") * Time.deltaTime;
pos.z += moveSpeed * Input.GetAxis("Vertical") * Time.deltaTime; 

When you move the object, it’ll only be on the z- and x-axes, because y represents up and down. Because there are two different input sources ("Horizontal" for left and right, and "Vertical" for up and down), you need to calculate values for each axis separately.

Input.GetAxis("Horizontal") retrieves a value from the Horizontal component of the Input Manager. The value ranges from -1 to 1; a positive value means a positive button in the Input Manager is being pressed. According to the settings you saw defined earlier, that’s either the right arrow or the D key. Similarly, a negative value indicates that a negative button is being pressed: either the left arrow or the A key.

Whatever the returned value may be, it’s then multiplied by the moveSpeed and added to the current x position of the GameObject, effectively moving it in the desired direction.

The same thing happens with Input.GetAxis("Vertical"), except it retrieves a value from the Vertical component of the Input Manager (covering the S, down, W and up keys), multiplies that value by moveSpeed and adds the result to the z position of the GameObject.

So what’s with Time.deltaTime? That value indicates how much time has passed since the last Update(). Remember, Update() is called with every frame, so that time difference must be taken into account or the space marine would move too fast to be seen.

TL/DR: Time.deltaTime ensures movement is in sync with the frame rate.
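
For example, at 60 frames per second, Time.deltaTime is roughly 1/60, or about 0.0167 seconds. With moveSpeed at 50 and the axis value at 1, the marine moves about 50 × 0.0167 ≈ 0.83 units in that frame, which works out to 50 units per second no matter how fast or slow the frames come.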

What do these numbers mean?

By default, Unity considers one point to be equal to one meter, but you don’t have to follow this logic. For instance, you may be making a game about planets, and that scale would be way too small. For the purposes of simplicity in this tutorial, we have sized our models to follow Unity’s default of one point per meter.

Now that you’ve altered the location, you have to apply it to the SpaceMarine GameObject. Add the following after the previous line:

transform.position = pos;

This updates the SpaceMarine GameObject’s position with the new position.

Save the script and switch back to Unity. You may be tempted to run your game, but there’s a slight problem. The camera is not positioned correctly.

In the Hierarchy, select the Main Camera, and in the Inspector, set Position to (9.7, 53.6, 56.1) and Rotation to (30, 0, 0). I determined these values by moving the camera around manually and looking at the Camera preview in the lower right until I was happy with the result.

In the Camera component, set Field of View to 31. This effectively “zooms in” the view a bit.

Now it’s time to give your game a test run. Look for the play controls at the center-top of the editor. Click the Play button.

Note: You’ll notice the controls give you two more options: Pause and pause stepper. Pause allows you to, well, pause your game in motion. The stepper allows you to step through the animation one frame at a time and is especially useful for debugging animation issues.

Now, look at the Game window and move your character by pressing the Arrow keys or WASD keys. Behold… life!

The Game window

The Game window is where you actually play the game. There are two life-and-death details to keep in mind as you play.

First, when you start playing, the interface becomes darker to give a visual cue that you’re in play mode.

Second, when you play your game, you can change anything about it (including values on components in the Inspector) but, when you stop playing the game, ALL YOUR CHANGES WILL BE LOST. This is both a blessing and a curse. The blessing is that you can tweak the game without consequence. The curse is that sometimes you forget you’re in play mode, continue working on your game and then, for some reason, the game stops. Poof! Buh-bye changes!

Thankfully, you can (and should) make play mode really obvious. Select Edit\Preferences on PC or Unity\Preferences on Mac to bring up a list of options.

Select the Colors section. Look for Playmode tint. Click the color box next to it, and then give it an unmistakable color — I prefer red. Now play your game to see if it’s conspicuous enough.

Camera Movement

There’s only one problem with the space marine’s movement: He will slip off screen. You want the camera to follow the hero around the arena, so he doesn’t get away from you.

With a little scripting, you can keep the marine in focus.

First, make sure you’re not in play mode – to stop play mode, select the play button again.

In the Hierarchy, click the Create button and select Create Empty. Name it CameraMount.

The basic idea is you want CameraMount to represent the position the camera should focus on and have the camera be relative to this position.

Initially you want the camera to focus where the space marine is, so let’s configure the CameraMount to be at the exact same position as the space marine.

To do this, select the space marine, click on the Gear button to the upper-right of the Transform component, and select Copy Component.

Then select the CameraMount, click on the Gear button to the upper right of the Transform component, and select Paste Component Values:

Next, drag the Main Camera GameObject into the CameraMount GameObject.

Great! Now as you move the player around, you can move the CameraMount to move with the player, and the camera will track the player. You just need to write a script to do this.

With the CameraMount selected, click the Add Component button in the Inspector and select New Script. Call it CameraMovement and then click Create and Add.

Note: When you make a new script by clicking the Add Component button, Unity will create the script in the top level of your assets folder. Get into the habit of moving assets into their respective folders the moment you make or see them.

Drag your new file from the top level of the assets folder into the Scripts folder.

Double-click the CameraMovement script to open it in your code editor. Underneath the class definition, add the following variables:

public GameObject followTarget;
public float moveSpeed;

followTarget is what you want the camera to follow and moveSpeed is the speed at which it should move. By creating these as public variables, Unity will allow you to set these within the Unity editor itself, so you can set the followTarget to the space marine and fiddle with the moveSpeed to your heart’s content, as you’ll see shortly.

Now add the following to Update():

if (followTarget != null) 
{
  transform.position = Vector3.Lerp(transform.position, 
    followTarget.transform.position, Time.deltaTime * moveSpeed);
}

This code checks to see if there is a target available. If not, the camera doesn’t follow.

Next, Vector3.Lerp() is called to calculate the required position of the CameraMount.

Lerp() takes three parameters: A start position in 3D space, an end position in 3D space, and a value between 0 and 1 that represents a point between the starting and ending positions. Lerp() returns a point in 3D space between the start and end positions that’s determined by the last value.

For example, if the last value is set to 0 then Lerp() will return the start position. If the last value is 1, it returns the end position. If the last value is 0.5, then it returns a point halfway between the start and end positions.
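
A quick hypothetical snippet makes this easier to see:

Vector3 start = Vector3.zero;               // (0, 0, 0)
Vector3 end = new Vector3(10f, 0f, 0f);     // (10, 0, 0)
Vector3 a = Vector3.Lerp(start, end, 0f);   // returns (0, 0, 0): the start
Vector3 b = Vector3.Lerp(start, end, 1f);   // returns (10, 0, 0): the end
Vector3 c = Vector3.Lerp(start, end, 0.5f); // returns (5, 0, 0): halfway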

In this case, you supply the camera mount position as the start and the player position as the end. Finally, you multiply the time since the last frame by a speed multiplier to get a suitable value for the last parameter. Effectively, this makes the camera mount position smoothly move to where the player is over time.

Save your code and return to Unity. Select CameraMount in the Hierarchy, and look in Inspector. You’ll see two new fields named Follow Target and Move Speed. As mentioned earlier, these were automatically derived by Unity from the public variables you just added to the script. These variables need some values.

With the CameraMount still selected in the Hierarchy, drag SpaceMarine to the Follow Target field and set the Move Speed to 20.

Play your game to see what’s changed.

The marine is a real superstar now, complete with a personal camera crew. Granted, he can’t turn, and he walks right through objects just like Kitty Pryde, but these are easily solvable issues that you don’t have to tackle now.

Note: The bigger the move speed, the faster the camera mount will move to the player. The smaller, the more the camera will “lag” behind the space marine’s position, letting the player “jump ahead” of the camera. Try changing the move speed to a smaller value, like 2, and see what happens for yourself!

Adding Gunplay

Unfortunately, the finer parts of diplomacy are lost on the flesh-eating antagonists of this game. It’s best you give the hero some firepower so he can protect himself on his terribly relaxing (terrible?) vacation.

First, you need to create a bullet. In the Hierarchy, click the Create button. From the drop-down, select 3D Object\Sphere to create a sphere in the Scene view.

Give it the name Projectile. With it still selected, check out the Inspector. You’ll notice a bunch of new components.

The three new components are:

  1. The Mesh Filter is a component that contains data about your model’s mesh and passes it to a renderer.
  2. The Mesh Renderer displays the mesh. It contains a lot of information about lighting, such as casting and receiving shadows.
  3. Finally, you’ll notice the sphere contains a Sphere Collider. This component serves as the GameObject’s boundaries.

Since you want the bullet to participate in Unity’s physics, it needs a Rigidbody.

Luckily, you’ve done this before. Click the Add Component button and select Rigidbody from the Physics category. Make sure to uncheck Use Gravity.

Since the marine will burn through lots of projectiles, drag it from the Hierarchy to the Prefabs folder in the Project Browser. Delete the projectile from the Hierarchy because you don’t need it now that it’s gone on to be a prefab.

At this point, you need to create a script to launch the projectile. In the Project Browser, select the Scripts folder then click the Create button. Choose C# Script and name it Gun. Double-click the file to launch the code editor.

This file needs a few properties underneath the class definition. Add the following:

public GameObject bulletPrefab;
public Transform launchPosition;

Again, when you create a public variable on a script, Unity exposes these variables in the editor. You will set the bulletPrefab to the bullet prefab you just created, and you will set the launchPosition to the position of the barrel of the Space Marine’s gun.

Next, add the following method:

void fireBullet() 
{
  // 1
  GameObject bullet = Instantiate(bulletPrefab) as GameObject;
  // 2
  bullet.transform.position = launchPosition.position;
  // 3
  bullet.GetComponent<Rigidbody>().velocity = 
    transform.parent.forward * 100;
}

Let’s review this section by section:

  1. Instantiate() is a built-in method that creates a GameObject instance for a particular prefab. In this case, this will create a bullet based on the bullet prefab. Since Instantiate() returns a type of Object, the result must be cast into a GameObject.
  2. The bullet’s position is set to the launcher’s position — you’ll set the launcher as the barrel of the gun in just a moment.
  3. Since the bullet has a Rigidbody attached to it, you can specify its velocity to make the bullet move at a constant rate. Direction is determined by the transform of the object to which this script is attached — you’ll soon attach it to the body of the space marine, ensuring the bullet travels in the same direction as the marine faces.

Save and switch back to Unity. In the Hierarchy, expand the SpaceMarine GameObject and select the BobbleMarine-Body GameObject.

In the Inspector, click the Add Component button and near the bottom of the list of components, select Scripts. From the list of scripts, choose Gun.

You’ll see that your Gun script component has been added to the body of the marine. You’ll also notice there are two new fields: Bullet Prefab and Launch Position. Do those sound familiar?

Click the circle next to Bullet Prefab. In the Select GameObject dialog, click the Assets tab. Select Projectile from the resulting list. Now you have loaded the bullet and just need to set the launch position.

In the Hierarchy, hold the Alt key on PC or Option on Mac and click the disclosure triangle next to the BobbleMarine-Body. You’ll see a large list of child GameObjects. Look for Gun.

Select that GameObject and click the Create button. Choose Create Empty Child and rename it to Launcher. This new GameObject lives in the center of the gun’s barrel and represents where bullets will spawn from — feel free to move it around in the scene editor if you’d like to tweak the spawn position.

Keep all the GameObjects expanded and select BobbleMarine-Body so that the Inspector shows all the components. Drag the new Launcher GameObject into the Gun component’s Launch Position field.

Notice that when you add the GameObject to a transform field, Unity finds and references the attached transform.

It’s official! The marine’s gun is locked and loaded. All that’s left is the firing code. Thankfully, it’s pretty easy.

Switch back to your code editor and open Gun.cs.

The gun should fire when the user presses the mouse button and stop when the user releases it.

You could simply check to see if the button is pressed in Update() and call fireBullet() if so, but since Update() is called every frame, that would mean your space marine would shoot up to 60 times per second! Our space marine can shoot fast, but not that fast.

What you need is a slight delay between shots. To do this, add the following to Update():

if (Input.GetMouseButtonDown(0)) 
{
  if (!IsInvoking("fireBullet")) 
  {
    InvokeRepeating("fireBullet", 0f, 0.1f);
  }
}

First, you check with the Input Manager to see if the left mouse button is held down.

Note: If you wanted to check the right mouse button, you’d pass in 1, and for the middle mouse button, you’d pass in 2.

If the mouse is being held down, you check if fireBullet() is being invoked. If not, you call InvokeRepeating(), which repeatedly calls a method until you call CancelInvoke().

InvokeRepeating() needs a method name, a time to start and the repeat rate. InvokeRepeating() is a method of MonoBehaviour.

After that bit of code, add the following:

if (Input.GetMouseButtonUp(0)) 
{
  CancelInvoke("fireBullet");
}

This code makes it so the gun stops firing once the user releases the mouse button.
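
Putting both snippets together, the finished Update() in Gun.cs looks like this:

void Update() 
{
  if (Input.GetMouseButtonDown(0)) 
  {
    if (!IsInvoking("fireBullet")) 
    {
      InvokeRepeating("fireBullet", 0f, 0.1f);
    }
  }

  if (Input.GetMouseButtonUp(0)) 
  {
    CancelInvoke("fireBullet");
  }
}

Save your work and return to Unity, then play the game.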

Hold down the mouse button. You have bullets for days!

Where to Go From Here?

At this point, you should be feeling more comfortable with Unity. You have a walking space marine with a functioning weapon. You’ve learned the following:

  • Components and how they give your GameObjects behavior.
  • Scripting and how to use scripts to create custom behavior.
  • The Input Manager and how to access it from code.
  • The Game window and how to test your games.

This was a thick, heavy series and you made it to the end. Congratulations! You’ve come a long way.

There’s still a lot more to do. Your poor marine is a sitting duck because he can’t turn around to see what’s sneaking up behind him. At the same time, he has nothing to worry about because there are no alien hordes attacking him — yet!

If you’ve enjoyed this tutorial series and want to learn how to finish off your game (and build more amazing games in Unity), check out the complete Unity Games by Tutorials book!

The book teaches you everything you need to know about building games in Unity, whether you’re a beginner or a more experienced game developer. In the book, you’ll build four great games:

  • A 3D twin-stick shooter
  • A classic 2D platformer
  • A 3D tower-defense game (with virtual reality mode!)
  • A first-person shooter

Check out the trailer for the book here:

If you have questions or comments on this tutorial, please leave them in the discussion below!

The post Unity Tutorial Part 3: Components appeared first on Ray Wenderlich.

What’s New in Swift 4.2?

Good news: Swift 4.2 is now available in Xcode 10 beta! This release updates important Swift 4.1 features and improves the language in preparation for ABI stability.

This tutorial covers the most significant changes in Swift 4.2. It requires Xcode 10, so make sure you download and install the latest beta of Xcode before getting started.

Getting Started

Swift 4.2 is source compatible with Swift 4.1, but isn’t binary compatible with any other releases. Apple designed Swift 4.2 to be an intermediate step towards achieving ABI stability in Swift 5, which should enable binary compatibility between applications and libraries compiled with different Swift versions. This approach gives ABI features plenty of time to receive community feedback before they’re integrated into the final ABI.

This tutorial’s sections contain Swift Evolution proposal numbers such as [SE-0001]. You can explore the details of each change by clicking the linked tag of each proposal.

You’ll get the most out of this tutorial if you try out the changes in a playground. Start Xcode 10 and go to File ▸ New ▸ Playground. Select iOS for the platform and Blank for the template. Name it whatever you like and save it anywhere you wish. You’re now ready to get started!

Note: Need a refresher of the Swift 4.1 highlights? Check out our Swift 4.1 tutorial: What’s New in Swift 4.1?

Language Improvements

This release contains quite a few new language features, such as random number generators, dynamic member lookup and more.

Generating Random Numbers

Swift 4.1 imports C APIs to generate random numbers, as in the snippet below:

let digit = Int(arc4random_uniform(10))

In the snippet above, arc4random_uniform(_:) returns a random digit between 0 and 9. It requires you to import Foundation and doesn’t work on Linux. The Linux-based approaches, on the other hand, introduced modulo bias, which meant that certain numbers were generated more often than others.

Swift 4.2 solves these problems by adding a random API to the standard library [SE-0202]:

// 1  
let digit = Int.random(in: 0..<10)

// 2
if let anotherDigit = (0..<10).randomElement() {
  print(anotherDigit)
} else {
  print("Empty range.")
}

// 3
let double = Double.random(in: 0..<1)
let float = Float.random(in: 0..<1)
let cgFloat = CGFloat.random(in: 0..<1)
let bool = Bool.random()

Here’s what this does:

  1. You use random(in:) to generate random digits from ranges.
  2. randomElement() returns nil if the range is empty, so you unwrap the returned Int? with if let.
  3. You use random(in:) to generate a random Double, Float or CGFloat and random() to return a random Bool.

Generating random numbers like a pro in Swift 4.2!

Swift 4.1 also used C functions for generating random values from arrays:

let playlist = ["Nothing Else Matters", "Stairway to Heaven", "I Want to Break Free", "Yesterday"]
let index = Int(arc4random_uniform(UInt32(playlist.count)))
let song = playlist[index]

Swift 4.1 used arc4random_uniform(_:) to generate a valid index from playlist and return the corresponding song. This solution required you to cast between Int and UInt32 and also had all the previously mentioned issues.

Swift 4.2 takes a more straightforward approach:

if let song = playlist.randomElement() {
  print(song)
} else {
  print("Empty playlist.")
}

randomElement() returns nil if playlist is empty, so you unwrap the returned String?.

Swift 4.1 didn’t contain any collection shuffling algorithms, so you had to use a roundabout way to achieve the intended result:

// 1
let shuffledPlaylist = playlist.sorted{ _, _ in arc4random_uniform(2) == 0 }

// 2
var names = ["Cosmin", "Oana", "Sclip", "Nori"]
names.sort { _, _ in arc4random_uniform(2) == 0 }

Here’s what you’re doing in this code:

  1. You use arc4random_uniform(_:) to determine the shuffling order of the playlist and return shuffledPlaylist with sorted(by:).
  2. You then shuffle names in place with sort(by:) using the previous technique.

Swift 4.2 provides more efficient, and arguably more elegant, shuffling algorithms:

let shuffledPlaylist = playlist.shuffled()
names.shuffle()

In 4.2, you simply use shuffled() to create a shuffled playlist and shuffle names on the spot with shuffle(). Boom!

Shuffling playlists has never been easier thanks to Swift 4.2!

Dynamic Member Lookup

Swift 4.1 used the following square brackets syntax for custom subscript calls:

class Person {
  let name: String
  let age: Int
  private let details: [String: String]
  
  init(name: String, age: Int, details: [String: String]) {
    self.name = name
    self.age = age
    self.details = details
  }
  
  subscript(key: String) -> String {
    switch key {
      case "info":
        return "\(name) is \(age) years old."
      default:
        return details[key] ?? ""
    }
  }
}

let details = ["title": "Author", "instrument": "Guitar"]
let me = Person(name: "Cosmin", age: 32, details: details)
me["info"]   // "Cosmin is 32 years old."
me["title"]  // "Author"

The subscript in this case returns contents from a private data store or a custom message based on the person’s name and age.

Swift 4.2 instead uses dynamic member lookup to provide dot syntax for subscripts [SE-0195]:

// 1
@dynamicMemberLookup
class Person {
  let name: String
  let age: Int
  private let details: [String: String]
  
  init(name: String, age: Int, details: [String: String]) {
    self.name = name
    self.age = age
    self.details = details
  }
  
  // 2
  subscript(dynamicMember key: String) -> String {
    switch key {
      case "info":
        return "\(name) is \(age) years old."
      default:
        return details[key] ?? ""
    }
  }
}


// 3
me.info   // "Cosmin is 32 years old." 
me.title  // "Author"

Taking it comment-by-comment:

  1. You mark Person as @dynamicMemberLookup to enable dot syntax for its custom subscripts.
  2. You conform to @dynamicMemberLookup by implementing subscript(dynamicMember:) for the class.
  3. You call the previously implemented subscript using dot syntax.

The compiler evaluates the subscript call dynamically at runtime, which lets you write type-safe code much like you would in scripting languages like Python or Ruby.

Dynamic member lookup doesn’t mess up your class properties:

me.name // "Cosmin"
me.age // 32

You use dot syntax to call name and age instead of the subscript in this case.

Further, derived classes inherit dynamic member lookup from their base ones:

@dynamicMemberLookup
class Vehicle {
  let brand: String
  let year: Int
  
  init(brand: String, year: Int) {
    self.brand = brand
    self.year = year
  }
  
  subscript(dynamicMember key: String) -> String {
    return "\(brand) made in \(year)."
  }
}

class Car: Vehicle {}

let car = Car(brand: "BMW", year: 2018)
car.info  // "BMW made in 2018."

You can use dot syntax to call the car’s subscript, since any Car is a Vehicle and Vehicle implements @dynamicMemberLookup.

You can add dynamic member lookup to existing types with protocol extensions:

// 1
@dynamicMemberLookup
protocol Random {}

// 2
extension Random {
  subscript(dynamicMember key: String) -> Int {
    return Int.random(in: 0..<10)
  }
}

// 3
extension Int: Random {}

// 4
let number = 10
let randomDigit = String(number.digit)
let noRandomDigit = String(number).filter { String($0) != randomDigit }

Here’s the play-by-play:

  1. You annotate Random with @dynamicMemberLookup to enable dot syntax for its subscripts.
  2. You extend the protocol and make it conform to @dynamicMemberLookup by implementing subscript(dynamicMember:). The subscript uses random(in:) to return a random digit between 0 and 9.
  3. You extend Int and make it conform to Random.
  4. You use dot syntax to generate a random digit and filter it out from number.

Enumeration Cases Collections

Swift 4.1 didn’t provide access to collections of enumeration cases by default. This left you with rather inelegant solutions like the following:

enum Seasons: String {
  case spring = "Spring", summer = "Summer", autumn = "Autumn", winter = "Winter"
}

enum SeasonType {
  case equinox
  case solstice
}

let seasons = [Seasons.spring, .summer, .autumn, .winter]
for (index, season) in seasons.enumerated() {
  let seasonType = index % 2 == 0 ? SeasonType.equinox : .solstice
  print("\(season.rawValue) \(seasonType).")
}

Here, you add Seasons cases to seasons and loop through the array to get each season name and type. But Swift 4.2 can do you one better!

Swift 4.2 adds enumeration cases arrays to enumerations [SE-0194]:

// 1
enum Seasons: String, CaseIterable {
  case spring = "Spring", summer = "Summer", autumn = "Autumn", winter = "Winter"
}

enum SeasonType {
  case equinox
  case solstice
}

// 2
for (index, season) in Seasons.allCases.enumerated() {
  let seasonType = index % 2 == 0 ? SeasonType.equinox : .solstice
  print("\(season.rawValue) \(seasonType).")
}

Here’s how you can accomplish the same thing in Swift 4.2:

  1. You conform Seasons to CaseIterable to create the array of enumeration cases.
  2. You loop through allCases and print each season name and type.

You have the option of only adding certain cases to the enumeration cases array:

enum Months: CaseIterable {
  case january, february, march, april, may, june, july, august, september, october, november, december          
  
  static var allCases: [Months] {
    return [.june, .july, .august]
  }
}

Here you add only the summer months to allCases since they are the sunniest ones of the year!

Summer is all over the place in Swift 4.2 enumerations!

You should add all available cases manually to the array if the enumeration contains unavailable ones:

enum Days: CaseIterable {
  case monday, tuesday, wednesday, thursday, friday
  
  @available(*, unavailable)
  case saturday, sunday
  
  static var allCases: [Days] {
    return [.monday, .tuesday, .wednesday, .thursday, .friday]
  }
}

You add only weekdays to allCases because you mark both .saturday and .sunday as unavailable on any version of any platform.

You can also add cases with associated values to the enumeration cases array:

enum BlogPost: CaseIterable {
  case article
  case tutorial(updated: Bool)
  
  static var allCases: [BlogPost] {
    return [.article, .tutorial(updated: true), .tutorial(updated: false)]
  }
}

In this example, you add all types of blog posts on the website to allCases: articles, new tutorials and updated ones.

New Sequence Methods

Swift 4.1 defined Sequence methods that determined either the first index of a certain element, or the first element which satisfied a certain condition:

let ages = ["ten", "twelve", "thirteen", "nineteen", "eighteen", "seventeen", "fourteen",  "eighteen", 
            "fifteen", "sixteen", "eleven"]

if let firstTeen = ages.first(where: { $0.hasSuffix("teen") }), 
   let firstIndex = ages.index(where: { $0.hasSuffix("teen") }), 
   let firstMajorIndex = ages.index(of: "eighteen") {
  print("Teenager number \(firstIndex + 1) is \(firstTeen) years old.")
  print("Teenager number \(firstMajorIndex + 1) isn't a minor anymore.")
} else {
  print("No teenagers around here.")
}

The Swift 4.1 way of doing things is to use first(where:) to find the first teenager’s age in ages, index(where:) for the first teenager’s index and index(of:) for the index of the first teenager who is 18.

Swift 4.2 renames some of these methods for consistency [SE-0204]:

if let firstTeen = ages.first(where: { $0.hasSuffix("teen") }), 
   let firstIndex = ages.firstIndex(where: { $0.hasSuffix("teen") }), 
   let firstMajorIndex = ages.firstIndex(of:  "eighteen") {
  print("Teenager number \(firstIndex + 1) is \(firstTeen) years old.")
  print("Teenager number \(firstMajorIndex + 1) isn't a minor anymore.")
} else {
  print("No teenagers around here.")
}

index(where:) becomes firstIndex(where:), and index(of:) becomes firstIndex(of:) to remain consistent with first(where:).

Swift 4.1 also didn’t define any Collection methods for finding either the last index of a certain element or the last element which matched a given predicate. Here’s how you’d handle this in 4.1:

// 1
let reversedAges = ages.reversed()

// 2
if let lastTeen = reversedAges.first(where: { $0.hasSuffix("teen") }), 
   let lastIndex = reversedAges.index(where: { $0.hasSuffix("teen") })?.base, 
   let lastMajorIndex = reversedAges.index(of: "eighteen")?.base {
  print("Teenager number \(lastIndex) is \(lastTeen) years old.")
  print("Teenager number \(lastMajorIndex) isn't a minor anymore.")
} else {
  print("No teenagers around here.")
}

Looking at this in sections:

  1. You create a reversed version of ages with reversed().
  2. You use first(where:) to determine the last teenager’s age in reversedAges, index(where:) for the last teenager’s index and index(of:) for the index of the last teenager who is 18.

Swift 4.2 adds the corresponding Sequence methods, which collapse all of the above down to:

if let lastTeen = ages.last(where: { $0.hasSuffix("teen") }), 
   let lastIndex = ages.lastIndex(where: { $0.hasSuffix("teen") }), 
   let lastMajorIndex = ages.lastIndex(of: "eighteen") {
  print("Teenager number \(lastIndex + 1) is \(lastTeen) years old.")
  print("Teenager number \(lastMajorIndex + 1) isn't a minor anymore.")
} else {
  print("No teenagers around here.")
}

You can simply use last(where:), lastIndex(where:) and lastIndex(of:) to find the previous element and specific indices in ages.

Testing Sequence Elements

A fairly simple routine absent from Swift 4.1 is a way to check whether all elements in a Sequence satisfy a certain condition. You could always craft your own approach, though, such as here, where you determine whether all the elements are even:

let values = [10, 8, 12, 20]
let allEven = !values.contains { $0 % 2 == 1 }

Kludgey, isn’t it? Swift 4.2 adds this missing method to Sequence [SE-0207]:

let allEven = values.allSatisfy { $0 % 2 == 0 }

Much better! This simplifies your code and improves its readability.

Conditional Conformance Updates

Swift 4.2 adds several conditional conformance improvements to extensions and the standard library [SE-0143].

Conditional conformance in extensions

Swift 4.1 couldn’t synthesize conditional conformance to Equatable in extensions. Take the following Swift 4.1 snippet as an example:

// 1
struct Tutorial : Equatable {
  let title: String
  let author: String
}

// 2
struct Screencast<Tutorial> {
  let author: String
  let tutorial: Tutorial
}

// 3
extension Screencast: Equatable where Tutorial: Equatable {
  static func ==(lhs: Screencast, rhs: Screencast) -> Bool {
    return lhs.author == rhs.author && lhs.tutorial == rhs.tutorial
  }
}

// 4
let swift41Tutorial = Tutorial(title: "What's New in Swift 4.1?", author: "Cosmin Pupăză")
let swift42Tutorial = Tutorial(title: "What's New In Swift 4.2?", author: "Cosmin Pupăză")
let swift41Screencast = Screencast(author: "Jessy Catterwaul", tutorial: swift41Tutorial)
let swift42Screencast = Screencast(author: "Jessy Catterwaul", tutorial: swift42Tutorial)
let sameScreencast = swift41Screencast == swift42Screencast

Going over this step-by-step:

  1. You make Tutorial conform to Equatable.
  2. You make Screencast generic, since website authors base their screencasts on published tutorials.
  3. You implement ==(lhs:rhs:) for screencasts since Screencast conforms to Equatable as long as Tutorial does.
  4. You compare screencasts directly because of the conditional conformance you declared.

Swift 4.2 adds a default implementation for Equatable conditional conformance to an extension:

extension Screencast: Equatable where Tutorial: Equatable {}

This feature applies to Hashable and Codable conformances in extensions as well:

// 1
struct Tutorial: Hashable, Codable {
  let title: String
  let author: String
}

struct Screencast<Tutorial> {
  let author: String
  let tutorial: Tutorial
}

// 2
extension Screencast: Hashable where Tutorial: Hashable {}
extension Screencast: Codable where Tutorial: Codable {}

// 3
let screencastsSet: Set = [swift41Screencast, swift42Screencast]
let screencastsDictionary = [swift41Screencast: "Swift 4.1", swift42Screencast: "Swift 4.2"]

let screencasts = [swift41Screencast, swift42Screencast]
let encoder = JSONEncoder()
do {
  try encoder.encode(screencasts)
} catch {
  print("\(error)")
}

In this block:

  1. You conform Tutorial to both Hashable and Codable.
  2. You constrain Screencast to conform to Hashable and Codable if Tutorial does.
  3. You add screencasts to sets and dictionaries and encode them.

Conditional conformance runtime queries

Swift 4.2 implements dynamic queries of conditional conformances. You can see this in action in the following code:

// 1
class Instrument {
  let brand: String
  
  init(brand: String = "") {
    self.brand = brand
  }
}

// 2
protocol Tuneable {
  func tune()
}

// 3
class Keyboard: Instrument, Tuneable {
  func tune() {
    print("\(brand) keyboard tuning.")
  }
}

// 4
extension Array: Tuneable where Element: Tuneable {
  func tune() {
    forEach { $0.tune() }
  }
}

// 5
let instrument = Instrument()
let keyboard = Keyboard(brand: "Roland")
let instruments = [instrument, keyboard]

// 6
if let keyboards = instruments as? Tuneable {
  keyboards.tune()
} else {
  print("Can't tune instrument.")
}

Here’s what’s going on above:

  1. You define Instrument with a certain brand.
  2. You declare Tuneable for all instruments that can tune.
  3. You override tune() in Keyboard to return keyboard standard tuning.
  4. You use where to constrain Array to conform to Tuneable as long as Element does.
  5. You add an Instrument and a Keyboard to instruments.
  6. You check if instruments implements Tuneable and tune it if the test succeeds. In this example, the array can't be cast to Tuneable because the Instrument type isn't tuneable. If you created an array of two keyboards, the test would pass and the keyboards would be tuned.

Hashable conditional conformance improvements in the standard library

Optionals, arrays, dictionaries and ranges are Hashable in Swift 4.2 when their elements are Hashable as well:

struct Chord: Hashable {
  let name: String
  let description: String?
  let notes: [String]
  let signature: [String: [String]?]
  let frequency: CountableClosedRange<Int>
}

let cMajor = Chord(name: "C", description: "C major", notes: ["C", "E",  "G"], 
                   signature: ["sharp": nil,  "flat": nil], frequency: 432...446)
let aMinor = Chord(name: "Am", description: "A minor", notes: ["A", "C", "E"], 
                   signature: ["sharp": nil, "flat": nil], frequency: 440...446)
let chords: Set = [cMajor, aMinor]
let versions = [cMajor: "major", aMinor: "minor"]

You add cMajor and aMinor to chords and versions. This wasn’t possible prior to 4.2 because String?, [String], [String: [String]?] and CountableClosedRange<Int> weren’t Hashable.

Hashable Improvements

Take the following example in Swift 4.1 which implements custom hash functions for a class:

class Country: Hashable {
  let name: String
  let capital: String
  
  init(name: String, capital: String) {
    self.name = name
    self.capital = capital
  }
  
  static func ==(lhs: Country, rhs: Country) -> Bool {
    return lhs.name == rhs.name && lhs.capital == rhs.capital
  }
  
  var hashValue: Int {
    return name.hashValue ^ capital.hashValue &* 16777619
  }
}

let france = Country(name: "France", capital: "Paris")
let germany = Country(name: "Germany", capital: "Berlin")
let countries: Set = [france, germany]
let countryGreetings = [france: "Bonjour", germany: "Guten Tag"]

You can add countries to sets and dictionaries here since they are Hashable. But the hashValue implementation is hard to understand and isn’t efficient enough for untrusted source values.

Swift 4.2 fixes this by defining universal hash functions [SE-0206]:

class Country: Hashable {
  let name: String
  let capital: String
  
  init(name: String, capital: String) {
    self.name = name
    self.capital = capital
  }
  
  static func ==(lhs: Country, rhs: Country) -> Bool {
    return lhs.name == rhs.name && lhs.capital == rhs.capital
  }

  func hash(into hasher: inout Hasher) {
    hasher.combine(name)
    hasher.combine(capital)
  }
}

Here, you’ve replaced hashValue with hash(into:) in Country. The function uses combine() to feed the class properties into hasher. It’s easy to implement, and it improves performance over all previous versions.

Hashing sets and dictionaries like a pro in Swift 4.2!

Removing Elements From Collections

You’ll often want to remove all occurrences of a particular element from a Collection. Here’s a way to do it in Swift 4.1 with filter(_:):

var greetings = ["Hello", "Hi", "Goodbye", "Bye"]
greetings = greetings.filter { $0.count <= 3 }

You filter greetings to return only the short greetings. This doesn’t affect the original array, so you have to make the assignment back to greetings.

Swift 4.2 adds removeAll(where:) in [SE-0197]:

greetings.removeAll { $0.count > 3 }

This performs the removal in place. Again, you have simplified code and improved efficiency.

Toggling Boolean States

Toggling Booleans! Who hasn’t done something like this in Swift 4.1:

extension Bool {
  mutating func toggle() {
    self = !self
  }
}

var isOn = true
isOn.toggle()

Swift 4.2 adds toggle() to Bool under [SE-0199].
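
With 4.2, the extension above becomes unnecessary; you can delete it, and isOn.toggle() works out of the box.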

New Compiler Directives

Swift 4.2 defines compiler directives that signal issues in your code [SE-0196]:

// 1
#warning("There are shorter implementations out there.")

let numbers = [1, 2, 3, 4, 5]
var sum = 0
for number in numbers {
  sum += number
}
print(sum)

// 2
#error("Please fill in your credentials.")

let username = ""
let password = ""
switch (username.filter { $0 != " " }, password.filter { $0 != " " }) {
  case ("", ""):
    print("Invalid username and password.")
  case ("", _):
    print("Invalid username.")
  case (_, ""):
    print("Invalid password.")
  case (_, _):
    print("Logged in succesfully.")
}     

Here’s how this works:

  1. You use #warning as a reminder that the functional approach for adding elements in numbers is shorter than the imperative one.
  2. You use #error to force other developers to enter their username and password before logging in.

New Pointer Functions

withUnsafeBytes(of:_:) and withUnsafePointer(to:_:) only worked for mutable variables in Swift 4.1:

let value = 10
var copy = value
withUnsafeBytes(of: &copy) { pointer in print(pointer.count) }
withUnsafePointer(to: &copy) { pointer in print(pointer.hashValue) }

You had to create a copy of value to make both functions work. Swift 4.2 overloads these functions for constants, so you no longer need to copy their values [SE-0205]:

withUnsafeBytes(of: value) { pointer in print(pointer.count) }
withUnsafePointer(to: value) { pointer in print(pointer.hashValue) }

Memory Layout Updates

Swift 4.2 uses key paths to query the memory layout of stored properties [SE-0210]. Here's how it works:

// 1
struct Point {
  var x, y: Double
}

// 2
struct Circle {
  var center: Point
  var radius: Double
  
  var circumference: Double {
    return 2 * .pi * radius
  }
  
  var area: Double {
    return .pi * radius * radius
  }
}

// 3
if let xOffset = MemoryLayout.offset(of: \Circle.center.x), 
   let yOffset = MemoryLayout.offset(of: \Circle.center.y), 
   let radiusOffset = MemoryLayout.offset(of: \Circle.radius) {
  print("\(xOffset) \(yOffset) \(radiusOffset)")
} else {
  print("Nil offset values.")
}

// 4
if let circumferenceOffset = MemoryLayout.offset(of: \Circle.circumference), 
   let areaOffset = MemoryLayout.offset(of: \Circle.area) {
  print("\(circumferenceOffset) \(areaOffset)")
} else {
  print("Nil offset values.")
}

Going over this step-by-step:

  1. You define the point’s horizontal and vertical coordinates.
  2. You declare the circle’s center, circumference, area and radius.
  3. You use key paths to get the offsets of the circle’s stored properties.
  4. You get nil for the offsets of the circle’s computed properties, since they aren’t stored inline.

Inline Functions in Modules

In Swift 4.1, you couldn’t declare inline functions in your own modules. Go to View ▸ Navigators ▸ Show Project Navigator, right-click Sources and select New File. Rename the file FactorialKit.swift and replace its contents with the following block of code:

public class CustomFactorial {
  private let customDecrement: Bool
  
  public init(_ customDecrement: Bool = false) {
    self.customDecrement = customDecrement
  }
  
  private var randomDecrement: Int {
    return arc4random_uniform(2) == 0 ? 2 : 3
  }
  
  public func factorial(_ n: Int) -> Int {
    guard n > 1 else {
      return 1
    }
    let decrement = customDecrement ? randomDecrement : 1
    return n * factorial(n - decrement)
  }
}

You’ve created a custom version of the factorial implementation. Switch back to the playground and add this code at the bottom:

let standard = CustomFactorial()
standard.factorial(5)
let custom = CustomFactorial(true)
custom.factorial(5)

Here, you’re generating both the default factorial and a random one. Cross-module functions are more efficient when inlined in Swift 4.2 [SE-0193], so go back to FactorialKit.swift and replace CustomFactorial with the following:

public class CustomFactorial {
  @usableFromInline let customDecrement: Bool
  
  public init(_ customDecrement: Bool = false) {
    self.customDecrement = customDecrement
  }
  
  @usableFromInline var randomDecrement: Int {
    return Bool.random() ? 2 : 3
  }
  
  @inlinable public func factorial(_ n: Int) -> Int {
    guard n > 1 else {
      return 1
    }
    let decrement = customDecrement ? randomDecrement : 1
    return n * factorial(n - decrement)
  }
}

Here’s what this does:

  1. You set both customDecrement and randomDecrement as internal and mark them as @usableFromInline since you use them in the inlined factorial implementation.
  2. You annotate factorial(_:) with @inlinable to make it inline. This is possible because you declared the function as public.

Miscellaneous Bits and Pieces

There are a few other changes in Swift 4.2 you should know about:

Swift Package Manager Updates

Swift 4.2 adds a few improvements to the Swift Package Manager:

Defining Swift language versions for packages

Swift 4.1 defined swiftLanguageVersions in Package.swift as [Int], so you could declare only major releases for your packages:

let package = Package(name: "Package", swiftLanguageVersions: [4])

Swift 4.2 lets you define minor versions as well with SwiftVersion cases [SE-0209]:

let package = Package(name: "Package", swiftLanguageVersions: [.v4_2])

You can also use .version(_:) to declare future releases:

let package = Package(name: "Package", swiftLanguageVersions: [.version("5")])

Declaring local dependencies for packages

In Swift 4.1, you declared dependencies for your packages using repository links. This added overhead if you had interconnected packages, so Swift 4.2 uses local paths in this case instead [SE-0201].
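As a rough sketch, with a hypothetical package layout, a manifest using a local path dependency might look like this:

// swift-tools-version:4.2
import PackageDescription

let package = Package(
  name: "App",
  dependencies: [
    // Resolved from the local file system instead of a repository URL.
    .package(path: "../Utilities")
  ],
  targets: [
    .target(name: "App", dependencies: ["Utilities"])
  ]
)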

Adding system library targets to packages

System-module packages required separate repositories in Swift 4.1. This made the package manager harder to use, so Swift 4.2 replaces them with system library targets [SE-0208].
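For instance, a manifest can now declare the system library inline; the package and target names here are hypothetical, and the snippet assumes zlib is installed:

// swift-tools-version:4.2
import PackageDescription

let package = Package(
  name: "ZipTool",
  targets: [
    // Wraps the system-installed zlib; no separate repository needed.
    .systemLibrary(name: "CZLib", pkgConfig: "zlib",
                   providers: [.brew(["zlib"]), .apt(["zlib1g-dev"])]),
    .target(name: "ZipTool", dependencies: ["CZLib"])
  ]
)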

Removing Implicitly Unwrapped Optionals

In Swift 4.1, you could use implicitly unwrapped optionals in nested types:

let favoriteNumbers: [Int!] = [10, nil, 7, nil]
let favoriteSongs: [String: [String]!] = ["Cosmin": ["Nothing Else Matters", "Stairway to Heaven"], 
                                          "Oana": nil] 
let credentials: (username: String!, password: String!) = ("Cosmin", nil)

Swift 4.2 removes them from arrays, dictionaries and tuples [SE-0054]:

let favoriteNumbers: [Int?] = [10, nil, 7, nil]
let favoriteSongs: [String: [String]?] = ["Cosmin": ["Nothing Else Matters", "Stairway to Heaven"], 
                                          "Oana": nil] 
let credentials: (username: String?, password: String?) = ("Cosmin", nil)
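One practical consequence is that the elements are now ordinary optionals, so you unwrap them as usual. For example:

let sum = favoriteNumbers.compactMap { $0 }.reduce(0, +) // 10 + 7 = 17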

Where to Go From Here?

You can download the final playground using the Download Materials link at either the top or bottom of this tutorial.

Swift 4.2 improves upon many Swift 4.1 features and prepares the language for ABI stability in Swift 5, coming in early 2019.

You can read more about the changes in this version either on the official Swift CHANGELOG or the Swift standard library diffs.

You can also check out the Swift Evolution proposals to see what changes are coming in Swift 5. Here you can give feedback for current proposals under review and even pitch a proposal yourself!

What do you like or dislike about Swift 4.2 so far? Let us know in the forum discussion below!

The post What’s New in Swift 4.2? appeared first on Ray Wenderlich.

New Course: Server Side Swift with Kitura


If you’re ready to level up your skills to become a full stack developer, we’re releasing a brand new course for you today: Server Side Swift with Kitura!

Kitura is a REST API framework written in Swift by IBM. If you ever wanted to extend your skills past developing for mobile devices, but didn’t have time to learn a new language, this is your chance!
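If you’ve never seen Kitura, a minimal server takes only a few lines. This is just a sketch of Kitura 2’s basic API, not code from the course:

import Kitura

let router = Router()

// Respond to GET / with a plain-text greeting.
router.get("/") { request, response, next in
  response.send("Hello from Kitura!")
  next()
}

Kitura.addHTTPServer(onPort: 8080, with: router)
Kitura.run()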

In this 31-video course, you’ll learn how to write a server in Swift, link a mobile app to it, and make a website in Swift using your server.

Take a look at what’s inside:

Part 1: Hello, Kitura

  1. Introduction: We’ll cover what you’ll learn in this course, and how you’ll walk away with a truly full-stack application.
  2. EmojiJournal – The Walkthrough: We’ll walk through the finished product and what it should look like, outlining each component of the application and what you’ll use to make it.
  3. Getting Started: We’ll set up your development environment and make sure you have everything you need to know to write all the code you’ll need for this course.
  4. HTTP Crash Course: I’ll walk you through the basics of HTTP and how it will act as a data transport layer for the basis of this course.
  5. Running Your API In Docker: Let’s set up Docker on your machine so that you can easily test your Kitura API on Linux and prepare it for deployment.
  6. Preparing To Deploy – IBM Cloud: I’ll show you how to set up your API for deployment with Cloud Foundry via IBM Cloud, and how to set up your machine to easily push new versions every time you make a change.
  7. Deploying To IBM Cloud: I’ll walk you through what to expect when you "push the button" to put your API in the Cloud, and how to test it when it’s live on the internet!
  8. Introduction To Kitura CLI: I’ll also walk you through the Kitura CLI, which allows you to generate boilerplate Kitura servers from the command line.
  9. Conclusion: Let’s recap everything we’ve covered and how it positions you to focus on one component at a time as we go through the rest of the course!

Part 2: Completing Your Backend

  1. Introduction To Kitura 2.0: We’ll cover the basics introduced in Kitura 1.0, and then go into some of the finer points of what was added to the Kitura framework in version 2.0, including what a router is and how codable routes work.
  2. Making Codable Routes: Let’s make a basic GET route using Kitura, and then let’s enhance and streamline that route using the Codable protocol with Kitura’s best built-in feature – Codable Routing.
  3. Challenge: Create A DELETE Route: Now that you’ve made two routes in Kitura, you’ll make your third one to improve on its functionality.
  4. Introduction To CouchDB: I’ll show you how CouchDB works, and how you’ll make use of it to store journal entries that you enter with EmojiJournal.
  5. Connecting To CouchDB: I’ll walk you through setting up CouchDB on your local machine, and how to connect your Kitura server to it.
  6. Challenge: Using CouchDB In The Cloud: After I show you how to set up an instance of CouchDB in IBM Cloud, I’ll challenge you to connect to it once you have all the information you need to do so.
  7. Writing Your Persistence Functions: I’ll walk you through writing a series of persistence functions in your Kitura application, so that you have an easy way to keep track of how your app uses CouchDB.
  8. Challenge: Linking Your Router To CouchDB: After you write your persistence functions, you’ll hook up your codable routes to your database, finally putting all the pieces together.
  9. Conclusion: Take a second to smell the roses, and look at what you built! I’ll run you through a test drive of your Kitura application.

Part 3: Linking Your iOS Client To Kitura

  1. Introduction To KituraKit: Time to work within iOS again! I’ll show you how KituraKit makes client-side connections to Kitura nice and straightforward, and how you can use it to drastically reduce the amount of code you write for networking on your client.
  2. Demo The iOS Starter: Let’s walk through what the iOS app does right now, and let’s highlight the pieces you need to get into and make work!
  3. Integrating KituraKit Using Cocoapods: I’ll help you set up KituraKit with Cocoapods on your iOS application, so that nothing stands in the way of you writing your networking code!
  4. Creating Your KituraKit Client: First, you’ll walk through writing a client class for your KituraKit client, so that you have easy functions to make use of when you are connecting your iOS app to your server.
  5. Challenge: Finishing Your Client: Now that you’ve made your client class, I’ll help you hook up a couple of the functions to the UI, and let you finish putting the puzzle together yourself once you have everything you need.
  6. Conclusion: This is the moment you knew you could be a full-stack developer – let’s test out your mobile application, and see how well it works with your shiny new Kitura server!

Part 4: Templating A HTML Front-End With Stencil

  1. Introduction To Web Clients: I’ll show you the importance of having web functionality for your application, and why it’s important to make a client that’s accessible to everyone. Then, I’ll show you how Kitura can help you accomplish that goal.
  2. Introduction To KituraStencil: Let’s take a look at how a templating engine called Stencil can help you make any website you want, and how you can pass information from your Swift API into your HTML page with KituraStencil.
  3. Creating A Landing Page: Let’s start by serving up a static HTML page using your existing Kitura router that you’ve already created, and take some time to explain how you can serve a context with it.
  4. Showing Your Web UI: Expand on your previous route by adding some context to your template page, and take a look at how that gets populated throughout your HTML – you’ll have a web UI working by the end of this video!
  5. Adding A New Entry Via The Web: Now that you can see the entries you’ve populated, let’s make it possible to use the web UI to add another entry to your EmojiJournal.
  6. Challenge: Deleting Entries Via The Web: The final hurrah – you’ve got everything you need to know to fit the last puzzle piece into your web UI, and now I’ll challenge you to add it in and delete those EmojiJournal entries you want to remove.
  7. Conclusion: You did it! Let’s see how you can use either of your clients at any given time to use EmojiJournal, and what you might be interested in trying out next time around!

Where To Go From Here?

Want to check out the course? The first part of the course is ready for you today! The rest of the course will be released over the next two weeks, and the entire course will be available for free.

Stay tuned for more new and updated courses to come. I hope you enjoy the course! :]

The post New Course: Server Side Swift with Kitura appeared first on Ray Wenderlich.


Create ML Tutorial: Getting Started


Create ML: Getting Started

Create ML is proof that Apple is committed to making it easier for you to use machine learning models in your apps. In this Create ML tutorial, you’ll learn how Create ML speeds up the workflow for improving your model by improving your data while also flattening the learning curve by doing it all in the comfort of Xcode and Swift.

At the same time, you’ll gain familiarity with ML toolsets and terminology. No math needed! You don’t need to know how to write a compiler to use Swift, and you don’t need to be able to write a new ML algorithm to use a classifier. With Create ML, you have no excuse not to get started!

A brief history of Apple ML:

  • Core ML: Announced at WWDC 2017, and already supported by every major ML platform to convert existing models. But the existing models tend to be too big and/or too general.
  • Turi Create: Acquired by Apple sometime after WWDC 2017, it lets you customize existing models with your own data. But … Python :[.
  • IBM Watson Services: Announced in March 2018. You can customize IBM Watson’s visual recognition model to recognize your own data. Drag-and-drop your data, no coding required, but you have to navigate the IBM Cloud maze, and the Core ML model is wrapped in the Watson API.
  • Create ML: Announced at WWDC 2018. ML in Xcode & Swift! Currently includes only two of Turi Create’s seven task-focused toolkits, plus a generic classifier and regressor, and data tables. I see it as a trail of breadcrumbs leading you to the Turi Create gingerbread house, inhabited by a “good dog” instead of a witch! (Turi Create’s logo is a dog silhouette.)

You’ll start this Create ML tutorial with the spectacular Create ML party trick: You’ll build an image classifier in a GUI, using images from the Kaggle Cats and Dogs Dataset. Then you’ll compare this with the Turi Create example that uses the same dataset. As you’ll see, Turi Create is more manual, but it’s also more flexible and not at all mysterious! For a more code-based example, you’ll compare the text classifier code for Create ML and Turi Create.

Then I’ll show you how to quickly set up an environment to work with Turi Create. Apple has even modified Xcode playgrounds to behave more like Jupyter notebooks, so the coding environment will feel familiar! To try it out, you’ll use Turi Create in a Jupyter notebook to build an image similarity model for the same cats and dogs dataset.

You could wait and hope for Apple to move the rest of Turi Create to Create ML, but you’ll see it’s not hard to use Create ML as a stepping stone to working directly with Turi Create. If you need more than Turi Create, we have tutorials on “rolling your own” with Keras, scikit-learn and Caffe (coming soon). And the ML universe has a wealth of starting points, all available to you, once you’re comfortable with the development environment.

Note: What about Swift for TensorFlow? Create ML is ML for Swift people, while Swift for TensorFlow is Swift for ML people — the project aims to provide a better programming language, with compiler support.

Getting Started

To work through this Create ML tutorial, you need:

  • a Mac running macOS 10.14 Mojave beta
  • Xcode 10.x beta

Click the Download Materials button at the top or bottom of this tutorial. The starter folder contains:

  • Pets-100, Pets-1000 and Pets-Testing: These contain images of cats and dogs; you’ll use these to train and evaluate a cat-dog classifier.
  • ClassifyingImagesWithVisionAndCoreML: Apple’s sample project for CoreML; you’ll replace the MobileNet model with the model you train in Create ML.
  • good-dog.png: An additional dog picture.
  • turienv.yaml: You’ll use this file to create an environment where you can run Turi Create code.

Create ML Image Classifier

First, prepare your data — you’re going to train an image classifier model to recognize images of cats and dogs. When you show it an image, it will return the label “Cat” or “Dog”. To train the model, you need a Cat folder with images of cats and a Dog folder with images of dogs. Ideally, there should be about the same number of images in each class folder — if you have 30 cat images and 200 dog images, the model will be biased towards classifying images as Dog. And don’t include any images that contain both kinds of animal.

How many images of each class? At least 10, but more images will train the model to be more accurate. The Kaggle Cats and Dogs Dataset has 12,500 images of each class, but you don’t need to use all of them! Training time increases when you use more images — doubling the number of images roughly doubles the training time.

To train a Create ML image classifier, you give it a training dataset — a folder containing the class folders. Actually, the starter folder contains two datasets I prepared earlier ;]. Pets-100 contains the first 50 images of the Kaggle dataset’s Cat and Dog class folders; Pets-1000 has the first 500 images of each.

After training the model, you’ll need a testing dataset to evaluate the model: a folder containing Cat and Dog folders. The images in the testing dataset should be different from the images in the training dataset, because you want to evaluate how well the model works on images it hasn’t seen before. If you’re collecting your own data, you would put 20% of your images in the testing dataset, and the rest in the training dataset. But we have 12,500 images of each class to play with, so Pets-Testing contains images 900 to 999 from each Kaggle dataset class folder.

You’ll start by training the model with Pets-100, and test it with Pets-Testing. Then you’ll train it with Pets-1000, and test it with Pets-Testing.

Apple’s Spectacular Party Trick

In Xcode 10, create a new macOS playground, and enter this code:

import CreateMLUI

let builder = MLImageClassifierBuilder()
builder.showInLiveView()

Show the assistant editor, and click the run button:

You’re creating and showing an interactive view for training and evaluating an image classifier. It’s brilliant! It magically makes it easy for you to experiment with different datasets — because what matters is not who has the best algorithms, but who has the best data ;]. The algorithms are already very good, and you can let the data science researchers carry on with making them better. But garbage in, garbage out; most of the time, effort and expense of machine learning goes into curating the datasets. And this GUI image classifier helps you hone your data curating skills! Feel free to download the Kaggle Cats and Dogs Dataset and create your own datasets. After you see what my datasets produce, you might want to be more careful selecting from this grab bag of good, bad and awful images.

Drag the Pets-100 folder onto the view. The training process starts immediately. Images load, with a progress bar below. After a short time, a table appears in the debug area, displaying Images Processed, Elapsed Time and Percent Complete:

What’s happening here? It’s called transfer learning, if you want to look it up. The underlying model — VisionFeaturePrint_Screen, which backs the Vision framework — was pre-trained on a ginormous dataset to recognize an enormous number of classes. It did this by learning what features to look for in an image, and how to combine these features to classify the image. Almost all of the training time for your dataset is the model extracting around 1000 features from your images. These could include low-level shapes and textures, as well as higher-level features like the shape of the ears, the distance between the eyes and the shape of the snout. Then it spends a relatively tiny amount of time training a logistic regression model to separate your images into two classes. It’s similar to fitting a straight line to scattered points, but in 1000 dimensions instead of 2. But it’s still very quick to do: my run took 1m 15s for feature extraction and 0.177886 seconds to train and apply the logistic regression.

Transfer learning only works successfully when features of your dataset are reasonably similar to features of the dataset that was used to train the model. A model pre-trained on ImageNet — a large collection of photos — might not transfer well to pencil drawings or microscopy images.

You might like to browse two fascinating articles about features from (mostly) Google Brain/Research.

Note: I’m running Create ML on an early-2016 MacBook with 1.1GHz CPU. Your times will probably be faster, especially if your Mac is new enough for Create ML to be using your GPU. Also, beta macOS and Xcode … ‘nuf said!

On a 2017 MacBook Pro with a 2.9GHz i7 CPU, the feature extraction time drops to 11.27s and training takes 0.154341 seconds.

Training & Validation Accuracy

When training finishes, the view displays Training and (sometimes) Validation accuracy metrics, with details in the debug area:

I got 100% training and validation accuracy! This time. Your mileage may vary, because the validation set is randomly chosen for each training session, so your validation set will be a different 10 images. There’s no way of knowing which images are chosen.

So what’s validation? And what do the accuracy figures mean?

Training accuracy is easy: Training involves guessing how much weight to give each feature to compute the answer. Because you labeled your images “Cat” or “Dog”, the training algorithm can check its answers and compute what percentage it got right. Then, it feeds the right-or-wrong information into the next iteration to refine the weights.

Validation accuracy is similar: Before training starts, a randomly chosen 10% of the dataset is split off to be validation data. Features are extracted and answers are computed with the same weights as the training dataset. But the results aren’t used directly for recomputing the weights. Their purpose is to prevent the model overfitting — getting fixated on a feature that doesn’t actually matter, like a background color or lighting. If validation accuracy is very different from training accuracy, the algorithm makes adjustments to itself. So the choice of validation images affects both the validation accuracy and the training accuracy. Turi Create lets you provide a fixed validation dataset if you’ve created one with similar characteristics to your testing data. And your testing dataset is a good representation of what your users will feed to your app.

Evaluation

The real question is: how well does the model classify images it didn’t train on?

The view prompts you to Drop Images to Begin Testing: drag the Pets-Testing folder onto the view. Very soon, the view displays Evaluation accuracy, with details in the debug area:

97% accuracy: the confusion matrix says two cat images were misclassified as dog, and four dog images were misclassified as cat. Scroll through the test images, to see which ones confused the model. There’s one in the screenshot above, and here’s the other confusing cat:

They’re pretty awful photos: one is blurry and too bright, the other is blurry with much of the head cropped off. The model resizes the images to 299×299, often cropping the edges, so the object you care about should ideally be centered in the image, but not too big or too small.

In the screenshot above, I clicked the disclosure button to see the confidence level: the model is 100% confident this cat is a dog! But scroll through the other images to see how the model gets it right for some pretty terrible images.

Improving Accuracy

The Pets-100 training dataset used only 50 of the 12,500 images for each class. Create ML makes it super easy to experiment with different data sets, to see whether more data improves accuracy.

Click the playground’s stop button, then click it again when it becomes a run button. This loads a new view, ready to accept training data.

Drag the Pets-1000 folder onto the view. Extracting features from 1000 images will take five to ten times longer than 100. While you’re waiting, here’s a summary of Apple’s helpful article Improving Your Model’s Accuracy, which gives specific advice for improving the different accuracy metrics.

Improving Training Accuracy

  • Increase Max iterations for image classifiers. (This isn’t working in the first Xcode beta, but will work in the second beta.)
  • Use different algorithms for text classifiers.
  • Use different models for generic classifiers or regressors.

Improving Validation Accuracy

  • Possible overfitting: reduce Max iterations. You probably don’t have to worry about this here, because my training run stopped when it was happy with the results, before reaching 10 iterations.

Improving Evaluation Accuracy

Make sure the diversity of characteristics of your training data match those of your testing data, and both sets are similar to the data your app users will feed to your model.

Back to the Playground

Training with 1000 images got 100% training accuracy, but only 96% validation accuracy. Again, YMMV — I’ve run this a few times, and sometimes get 99% validation accuracy.

Drag the Pets-Testing folder onto the view to evaluate this model; it gets 98.5% accuracy on the 200 test images!

The confusion matrix says the model classified three of the cat images as dog. Actually, there are only the same two cats mislabelled as dogs — with 100% confidence!


Although the confusion matrix doesn’t say so, there are two dogs labelled as cats, but with lower confidence. They’re also blurry, with low contrast:


Probably the only way to further improve this model is to use more data, either by augmenting the 1000 images, or by adding more images from the full Kaggle dataset. Or by selecting your datasets more carefully to leave out really awful images that you don’t want your app to handle. Feel free to experiment! Remember it’s easy to do — the training just takes longer with larger datasets. I’ve run this with 5000 images: it took 32 minutes, and I got 99% for both training and validation accuracies … that time.

Increase Max Iterations?

The accuracy metrics for this example are actually pretty good — the underlying model probably already knows all about cats and dogs. But if you’re training for different classes, and getting low training accuracy, you’ll want to try increasing Max iterations to 20. At the time of writing this tutorial with the first Xcode beta, that’s not implemented. But here’s how you’d do it.

Stop and start the playground, then click the disclosure symbol next to ImageClassifier to show the options, change 10 to 20, and press Enter:

Click the disclosure symbol to hide the options, then open the options again to check Max iterations is still 20.

If you’re using Xcode beta 2 or later, drag your training folder onto the view to start training. This will take a little longer than the 10-iteration training session, but the feature extraction will take the same amount of time, and that’s the bulk of it.

Note: The Create in both Create ML and Turi Create is a problem — you can’t train a model without creating a new one. To increase the number of iterations, you have to start all over and extract the exact same features as before. The Create ML GUI doesn’t give you the option of saving the features. A more manual framework, like Keras, constructs, compiles, then fits a model, so running the fit instruction again actually starts where it left off. It’s actually possible to peer into Turi Create’s source code and pull out the lower-level code that extracts features from the images — the part that uses most of the time. Then, you can save the extracted features and reload them whenever you want to do more training iterations! Hopefully this motivates you to be more interested in Turi Create and perhaps also in Keras!

Using the Image Classifier

This is a continuation of Apple’s spectacular party trick :]. The Create ML GUI exports a Core ML model, then you just drag your model into the old Core ML project, change one word in the code, and you’re good to go!

Click the disclosure symbol next to ImageClassifier to see a different set of options. Click on the text, and change it to PetsClassifier. Change the Where location to the starter folder, then click Save:

Open the ClassifyingImagesWithVisionAndCoreML project in the starter folder. This is Apple’s 2017 project: I’ve updated it to Swift 4.2, and fixed the photo picker call. It uses MobileNet.mlmodel, which is 17.1 MB:

Drag PetsClassifier.mlmodel into the project navigator. It’s 17 KB:

Search the project for MobileNet:

In the let model statement, replace MobileNet with PetsClassifier:

let model = try VNCoreMLModel(for: PetsClassifier().model)
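If you’re curious how that line fits in, the surrounding Vision flow looks roughly like this (a simplified sketch, not the sample project’s exact code):

import Vision

// Wrap the Core ML model for Vision, then build a classification request.
let model = try VNCoreMLModel(for: PetsClassifier().model)
let request = VNCoreMLRequest(model: model) { request, _ in
  guard let results = request.results as? [VNClassificationObservation],
        let top = results.first else { return }
  print("\(top.identifier): \(top.confidence)")
}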

Build and run. Click the camera icon to open the photo picker, then drag some dog and cat images into Photos:

Select one; the app classifies it as a dog or a cat by showing the probability of each label:

Turi Create Image Classifier

Here’s the code from the Turi Create image classifier example for the same dataset — the full 25,000-image dataset:

import turicreate as tc

# 1. Load images (Note: you can ignore 'Not a JPEG file' errors)
data = tc.image_analysis.load_images('PetImages', with_path=True)

# 2. From the path-name, create a label column
data['label'] = data['path'].apply(lambda path: 'dog' if '/Dog' in path else 'cat')

# Note: If you have more than two classes, extract the folder names like this:
# train_data["label"] = train_data["path"].apply(lambda path: os.path.basename(os.path.split(path)[0]))

# 3. Make a train-test split
train_data, test_data = data.random_split(0.8)

# 4. Create the model
model = tc.image_classifier.create(train_data, target='label')

# 5. Save predictions to an SArray
predictions = model.predict(test_data)

# 6. Evaluate the model and save the results into a dictionary
metrics = model.evaluate(test_data)
print(metrics['accuracy'])

# 7. Save the model for later use in Turi Create
model.save('mymodel.model')

# 8. Export for use in Core ML
model.export_coreml('MyCustomImageClassifier.mlmodel')

It’s a lot more code than you wrote in the playground, but you’ll soon see that it’s similar to the Create ML text classifier code.

Matching up the steps with what you did in Create ML:

  • Steps 1 to 4 correspond to creating the Training and Testing folders, then dragging the Training folder onto the view. Turi Create must extract the class labels from the paths of the images, but step 3 randomly allocates 20% of the dataset to test_data, which saves you the work of creating the Training and Testing folders, and you also get a different testing dataset each time you run this code.
Note: In Step 2, extracting the class labels for just two classes is a special case. I’ve added a note in the code above, to show the more general case. First, os.path.split() splits the path into two pieces: the name of the file (like 42.jpg) and everything leading up to it. Then os.path.basename() returns the last folder in the remaining path, which is the one with the class name.
  • Steps 5 and 6 correspond to dragging the Testing folder onto the view. A Jupyter notebook can display the predictions array as easily as the Create ML view. You can also filter the array to find the wrong classifications, instead of scrolling through the test images.
  • Step 7 saves the model for later use, so you could load it again and run it on a different testing dataset.
  • Step 8 exports the Core ML model.

So Turi Create image classification is more manual, but more flexible than Create ML. The turicreate.image_classifier.create() documentation lists several optional parameters. You can specify the underlying model to match Create ML. Note the difference in the sizes of the Core ML models! You can also supply a fixed validation_set, if you’ve created one that really represents your real test data and don’t want the model to use a random selection from your training data.

Image classification is a very special case in Create ML: The MLImageClassifierBuilder GUI removes the need — and the opportunity — to write code. In the next section, you’ll see that other Create ML models also require more code.

Text Classifier

Now compare how Create ML and Turi Create train and test a text classifier model. The Turi Create model needs test text converted into a bag of words — this is a straightforward transformation that’s built into the Create ML model, so it accepts test text directly.

Create ML

Here’s the code for the Create ML text classifier example:

import CreateML

// 1. Load data from a JSON file
let data = try MLDataTable(contentsOf: URL(fileURLWithPath: "<#/path/to/read/data.json#>"))

// 2. Make a train-test split
let (trainingData, testingData) = data.randomSplit(by: 0.8, seed: 5)

// 3. Create the model
let sentimentClassifier = try MLTextClassifier(trainingData: trainingData,
  textColumn: "text", labelColumn: "label")
  
// 4. Training accuracy as a percentage
let trainingAccuracy = (1.0 - sentimentClassifier.trainingMetrics.classificationError) * 100

// 5. Validation accuracy as a percentage
let validationAccuracy = (1.0 - sentimentClassifier.validationMetrics.classificationError) * 100

// 6. Evaluation accuracy as a percentage
let evaluationMetrics = sentimentClassifier.evaluation(on: testingData)
let evaluationAccuracy = (1.0 - evaluationMetrics.classificationError) * 100

// 7. Add metadata
let metadata = MLModelMetadata(author: "John Appleseed",
  shortDescription: "A model trained to classify movie review sentiment", version: "1.0")

// 8. Export for use in Core ML
try? sentimentClassifier.write(to: URL(fileURLWithPath: "<#/path/to/save/SentimentClassifier.mlmodel#>"),
    metadata: metadata)

  • Step 1 loads data into a table with text and label columns, where the value of label is positive, negative or neutral. The WWDC 2018 Session 703 video shows an alternative way to load text data using separate text files in folders named positive and negative, similar to the way you load images to train an image classifier. This is a special extra in Create ML; it isn’t available in Turi Create.

Alternative way to load labeled text data, from WWDC 2018 Session 703:

let trainDirectory = URL(fileURLWithPath: "/Users/createml/Desktop/train")
let testDirectory = URL(fileURLWithPath: "/Users/createml/Desktop/test")
// Create Model
let classifier = try MLTextClassifier(trainingData: .labeledDirectories(at: trainDirectory))

Back to the main text classifier code:

  • Step 2 does the same as Turi Create’s random_split(), randomly allocating 20% of the data to testingData. The optional seed parameter sets the seed for the random number generator.
  • Step 3 does the same as Turi Create’s sentence_classifier.create().
  • Steps 4-6 calculate training, validation and evaluation accuracy metrics.
  • Steps 7 and 8 export the Core ML model with some metadata.

Turi Create

This code is from our tutorial Natural Language Processing on iOS with Turi Create. It trains a sentence classifier with poems from 10 poets, to predict the author of the test text.

import turicreate as tc

# 1. Load data from a JSON file
data = tc.SFrame.read_json('corpus.json', orient='records')

# 2. Create the model
model = tc.sentence_classifier.create(data, 'author', features=['text'])

# 3. Export for use in Core ML
model.export_coreml('Poets.mlmodel')

  • Step 1: Like Create ML, you can load data from JSON or CSV files.
  • Step 2 trains the model.
  • Step 3 exports the Core ML model.

The Turi Create tutorial materials include an iOS app where you can test the model on text pasted into a textview. The app uses a wordCounts(text:) helper function, similar to the bag of words function at the bottom of the Turi Create text classification example.

The Turi Create text classifier expects input in the form of a dictionary of words and word counts. The Create ML text classifier accepts the text input directly, and creates its own bag of words.
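To make the difference concrete, a bag-of-words transformation can be as simple as this hypothetical Python helper (not the tutorial’s exact code):

from collections import Counter

def word_counts(text):
    # Lowercase the text, split on whitespace and count each word.
    return dict(Counter(text.lower().split()))

word_counts("the quick brown fox jumps over the lazy dog")
# {'the': 2, 'quick': 1, 'brown': 1, ...}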

Turi Create Image Similarity

Now take some deep breaths — you’re going on a Turi Create adventure!

Turi Create has five task-focused toolkits that aren’t (yet?) in Create ML:

  • Recommender systems
  • Image similarity
  • Object detection
  • Style transfer
  • Activity classification

Cat and dog pictures are fun to look at, so you’ll train a model to find similar images.

And yes, you need to write some Python. The development environment that will feel the most familiar is a Jupyter notebook — it’s like an Xcode playground, but it runs in your browser.

The easiest way to get going is to use Anaconda — created by the ML community to sort out all the versions of Python and ML libraries, and manage them in separate environments.

Anaconda & Notebooks

Download the Python 3.6 version of Anaconda for macOS, and install it in your home directory, not in your root directory:

If it says you can’t install it there, click the Install on a specific disk… button, then click back to the Home button — it should be more agreeable:

Note: Installing Anaconda and creating the Turi Create environment can take several minutes. While you wait, browse Michael Kennedy’s November 2014 Comparison of Python and Swift Syntax and Jason Brownlee’s May 2016 Crash Course in Python for Machine Learning Developers. Brownlee’s article includes examples of using the data science libraries NumPy, Matplotlib and Pandas. The biggest difference between Swift and Python syntax is that you define closures, functions and classes with indentation instead of { ... }.

Create Turi Create Environment

Use either the Anaconda Navigator GUI or a Terminal command to create an environment where you can run Turi Create code.

GUI: Open Anaconda Navigator, switch to its Environments tab, and import starter/turienv.yaml — simply click the folder icon and locate the file in Finder. Anaconda Navigator will fill in the environment name from the file:

Terminal: Open Terminal and enter this command:

conda env create -f <drag starter/turienv.yaml file from Finder>

Launch Jupyter Notebook

Use either the GUI or Terminal commands to launch Jupyter notebook in the turienv environment.

First, in Finder, create a local folder named notebooks.

If you have a really new and powerful Mac, download and unarchive the Kaggle Cats and Dogs Dataset, then move the PetImages folder into notebooks, so you can easily load it into the notebook you’re about to create.

The full Kaggle dataset contains 25,000 images, which takes a long time to process on an older Mac. Feel free to use the Pets-1000 folder instead, or create your own dataset.

GUI: If you’re using Anaconda Navigator, switch to the Home tab, check that turienv appears in the Applications on field, then click jupyter Launch:

A terminal window opens to run the Jupyter server, then a browser window displays your home directory. Navigate to your notebooks folder.

Terminal: If you’re using Terminal, enter this command to load turienv:

source activate turienv

The command line prompt now starts with (turienv). Enter this command to start the Jupyter server in the notebooks folder, and display the browser window:

jupyter notebook <drag notebooks folder from the Finder>

Training the Model

Create a new Python 3.6 notebook:

Double-click the title to rename the notebook:

Note: This example is the same as Apple’s Image similarity example, but using the Cat and Dog dataset.

The notebook contains a single empty cell. Type this line in the cell, then press Shift-Enter to run the cell:

import turicreate as tc

Note: Shift-Enter also works in Xcode playgrounds if you want to run just one code statement.

A new empty cell appears below the first one. Type the following in it, then run it:

reference_data = tc.image_analysis.load_images('./PetImages')
reference_data = reference_data.add_row_number()
reference_data.save('./kaggle-pets.sframe')

You’re loading the images into a table, adding row numbers to the table, then saving it for future use. Ignore the JPEG decode failure messages.

Note: While typing Python code, use the Tab key for autocomplete.

In the next cell, run this statement to explore the data:

reference_data.explore()

A window opens, displaying id, path and image columns. Hovering the cursor in a row shows the image:

Next, run this statement:

model = tc.image_similarity.create(reference_data)

This will take a while — In [*] shows it’s running. While you wait, read about unsupervised learning.

Note: To stop the cell before it finishes, click the Stop button (next to Run in the toolbar). Feel free to delete images from PetImages, or just load Pets-1000 instead. I went out for lunch while this ran on my early-2015 MacBook Pro, and it was finished when I returned 90 minutes later ;].

Unsupervised Learning

Providing labeled data to the image classifier enables it to measure how accurate it is by checking its predictions against the labels. This is supervised learning.

Although you supplied the same labeled dataset to this image similarity trainer, it doesn’t use the labels: this model uses unsupervised learning. The underlying model looked at a very large number of images, and taught itself which arrangements of pixel values constituted features that it could use to cluster “similar” images. So just like the image classifier, most of the training time is used for extracting these features from your dataset. Then it does “brute force” nearest neighbors model training: for each image, it computes its distance to every other image, and ranks the other images into radius bands. Again, this step is fast, compared to the feature extraction.

Querying the Model

When the model is ready, run these lines:

query_results = model.query(reference_data[0:10], k=10)
query_results.head()

You’re passing an array that contains the first 10 reference_data images, asking for 10 similar images for each, then showing the first 10 rows of query_results.

Suppose you want to find similar images for the 10th image. First, see what it is:

reference_data[9]['image'].show()

The loading order of images is non-deterministic, so your 10th image is probably something different. What matters is that it should look like the output of the next cell.

So run these lines:

similar_rows = query_results[query_results['query_label'] == 9]['reference_label']
reference_data.filter_by(similar_rows, 'id').explore()

The target image is actually the first image returned. The other images show cats that look similar and/or are positioned in a similar way.

Congratulations! You’ve just built an image similarity model in Python! And your Mac didn’t explode ;]. Hopefully, you’ll try out other Turi Create examples on your own data.
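Although this tutorial stops at querying the model, Turi Create can export the similarity model for Core ML just like the classifier; the file name below is hypothetical:

model.export_coreml('PetsSimilarity.mlmodel')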

Shutting Down

Log out of the jupyter browser windows.

In the Terminal window where the jupyter server is running, press Control-C twice to stop the server.

If your command line prompt starts with (turienv), enter this command to exit:

source deactivate

If you really don’t want to keep Anaconda, enter this command:

rm -rf ~/anaconda3

Where To Go From Here?

The finished Turi Create notebook and iOS project are in the finished folder of this tutorial’s materials. Use the Download Materials button at the top or bottom of this tutorial.

You’re now well-equipped to experiment with datasets in Create ML, and hopefully you’ll continue learning about Turi Create.

Explore Create ML and its documentation, but also spend some time browsing the Turi Create User Guide, even if you don’t want to write Python. The Turi Create How it works documentation is impressively informative and mostly math-free. To find out even more, follow their academic reference links.

And here are some other resources and further reading to deepen your own learning:

Our Tutorials

This tutorial is just the latest in a series of ML tutorials from your favorite how-to site. And yes, there will be more!

ML Community

I hope you enjoyed this Create ML tutorial. Please join the discussion below if you have any questions or comments. And especially tell us what you do with Create ML and Turi Create!

The post Create ML Tutorial: Getting Started appeared first on Ray Wenderlich.

The Full Kotlin Apprentice Book Is Here!


Hello, Android developers! We’re happy to announce that the full release of our Kotlin Apprentice book is now available!

This is the sister book to our Android Apprentice book, which focuses on creating apps for Android, while Kotlin Apprentice focuses on the Kotlin language fundamentals.

If you’re new to the Kotlin language, there’s no need to be intimidated: This book starts with the basics of basics — an introduction to programming — and then takes you through Kotlin fundamentals. But, if you’re familiar with Kotlin, you’ll advance into more intermediate and nuanced features of the language, such as functional programming, conventions and operator overloading, and coroutines.

Whatever your familiarity with the language, this book will teach you how to organize and customize your code to create clean, modern Kotlin apps.

Here’s what’s contained in the full release of the book:

Section I: Kotlin Basics

  • Chapter 1: Your Kotlin Development Environment: We start you right at the beginning so you can get up-to-speed with programming basics. Learn how to work with IntelliJ IDEA, which you will use throughout the rest of the book.
  • Chapter 2: Expressions, Variables & Constants: This is it, your whirlwind introduction to the world of programming! You’ll begin with an overview of computers and programming, and then spend your time working with code comments, arithmetic operations, constants and variables.
  • Build your projects using IntelliJ IDEA!

  • Chapter 3: Types & Operations: You’ll learn about handling different types, including strings, which allow you to represent text. You’ll learn about converting between types and you’ll also be introduced to type inference, which makes your life as a programmer a lot simpler.
  • Chapter 4: Basic Control Flow: You’ll learn how to make decisions and repeat tasks in your programs by using syntax to control the flow. You’ll also learn about Booleans, which represent true and false values, and how you can use these to compare data.
  • Chapter 5: Advanced Control Flow: Continuing the theme of code not running in a straight line, you’ll learn about another loop known as the `for` loop. You’ll also learn about `when` expressions, which are particularly powerful in Kotlin.
  • Chapter 6: Functions: Functions are the basic building blocks you use to structure your code in Kotlin. You’ll learn how to define functions to group your code into reusable units.
  • Chapter 7: Nullability: Many programming languages suffer from the “billion dollar mistake” of null values. You’ll learn how Kotlin protects you from the dreaded null pointer exception.

Section II: Collections & Lambdas

  • Chapter 8: Arrays & Lists: Why have only one of a thing when you could have many? Learn about the Kotlin collection types — arrays, lists, maps and sets — including what they’re good for, how to use them and when to use each.
  • Chapter 9: Maps & Sets: Maps are useful when you want to look up values by means of an identifier. For example, the table of contents of this book maps chapter names to their page numbers, making it easy to skip to the chapter you want to read. With an array, you can only fetch a value by its index, which has to be an integer, and all indexes have to be sequential. In a map, the keys can be of any type and are generally in no particular order. You’ll also learn about sets, which let you store unique values in a collection.
  • Use maps when you want to look up values by means of an identifier!

  • Chapter 10: Lambdas: Put code into variables and pass code around to help avoid callback insanity!
  • Chapter 11: Classes: In this chapter, you’ll get acquainted with classes, which are are named types. Classes are one of the cornerstones of object-oriented programming, a style of programming where the types have both data and behavior. In classes, data takes the form of properties and behavior is implemented using functions called methods.

Section III: Building Your Own Types

  • Chapter 12: Objects: Kotlin has a special keyword object that makes it easy to follow the singleton pattern in your projects, and that is also used to create properties specific to a class and not its instances. You also use the keyword to create anonymous classes.
  • Chapter 13: Properties: In this chapter, you’ll learn more about Kotlin properties, along with some tricks to deal with properties, how to monitor changes in a property’s value and how to delay initialization of a property.
  • Chapter 14: Methods: Methods are merely functions that reside in a class. In this chapter, you’ll take a closer look at methods and see how to add methods onto classes that were created by someone else.
  • Get help figuring out if you’re dealing with a property or a method!

  • Chapter 15: Interfaces: Classes are used when you want to create types that contain both state and behavior. When you need a type that allows primarily the specification of behavior, you’re better off using an interface. See how to create and use interfaces.
  • Chapter 16: Advanced Classes: Having seen the basics of creating classes, in this chapter you’ll see the more advanced aspects of object-oriented programing, including inheritance and limiting member visibility.
  • Chapter 17: Enum Classes: Enumerations are useful when you have a quantity that can take on a finite set of discrete values. See how to define and use enum classes and see some examples of working with enum classes and when expressions.
  • Chapter 18: Generics: At some point, you will need the ability to create abstractions that go beyond what’s available in regular classes and functions. You’ll learn how to use generics to super-power your classes and functions.

Section IV: Intermediate Topics

  • Chapter 19: Kotlin/Java Interoperability: Kotlin is designed to be 100% compatible with Java and the JVM. Seamlessly use Kotlin in your Java projects and call back and forth between the languages.
  • Chapter 20: Exceptions: No software is immune to error conditions. See how to use exceptions in Kotlin to provide some control over when and how errors are handled.
  • Mapping an unhandled exception!

  • Chapter 21: Functional Programming: Kotlin goes beyond just being an object-oriented programming language, and provides many of the constructs found in the domain of functional programming. See how to treat functions as first-class citizens by learning how to use functions as parameters and return values from other functions.
  • Chapter 22: Conventions & Operator Overloading: You’ll learn the concept of conventions and see how to use conventions to implement operator overloading and write more concise but still readable code.
  • Chapter 23: Kotlin Coroutines: Simplify your asynchronous programming using Kotlin coroutines, and discover the differences between coroutines and threads.
  • Chapter 24: Scripting with Kotlin: Kotlin is not just useful on Android or the JVM, but also can be used for scripting at the command line. See how to use Kotlin as a scripting language.
  • Appendix A: Kotlin Platforms: Now that you’ve learned about how to use Kotlin, you may be asking yourself: Where can I apply all of this knowledge? There are many different platforms that allow you to use Kotlin as a programming language. Anything that runs Java can run Kotlin, and there are very few machines that can’t run Java. In this chapter, you’ll learn about the top platforms for Kotlin and what to watch out for.

Where to Go From Here?

Everything in Kotlin Apprentice takes place in a clean, modern development environment, which means you can focus on the core features of programming in the Kotlin language, without getting bogged down in the many details of building apps. You won’t want to miss your opportunity to start developing in this cutting-edge language.

Get your own copy:

The Kotlin language has been around since 2011, but its popularity took off in 2017 when Google announced Kotlin’s inclusion as a first-class language for Android development. With modern and expressive language characteristics such as those found in Swift, and 100% interoperability with Java, it’s no wonder that Kotlin has been named the second most-loved language by Stack Overflow users.

In this book, we’ll take you through programming basics, object-oriented programming with classes, exceptions, generics, functional programming, and more!

Questions about the book? Ask them in the comments below!

The post The Full Kotlin Apprentice Book Is Here! appeared first on Ray Wenderlich.

Android Design Support Library: Getting Started

Android Design Support Library to the rescue!

Have you been surfing through thousands of Github libraries to add a Facebook-like Sliding Navigation menu in your project? Or searching for a convenient widget that will show tabs with icons at the bottom of your app’s homepage?

Behold – the Android Design Support Library is for you!

The Android Design Support Library helps you implement those shiny, interactive design components with minimal code!

In this tutorial, you’ll create a marketplace app called iSell using the Android Design Support Library. By the end, you will learn how to:

  • Use common Material Design components like BottomNavigationView and FloatingActionButton
  • Eliminate the need for multiple images of the same icon at different device resolutions by using VectorDrawables
  • Bring some wow-factors to your users with CoordinatorLayout and CollapsingToolbarLayout
This is gonna be awesome!

Getting started

To kick things off, start by downloading the materials for this tutorial (you can find a link at the top or bottom of this tutorial). Unzip iSell.zip to your desktop.

Now launch Android Studio 3.1.2 or greater and select Open an existing Android Studio project to import the starter project.


Choose iSell-Starter inside the iSell folder and click Open.


Build and run by pressing the keyboard shortcut Shift + F10 (or Control + R if you are using a Mac).

empty home screen

And you will see a screen with the project name. Kudos – you’ve successfully kickstarted the project!

Managing Design Support Dependencies

Adding Design Support Library to your project is a piece of cake, but it requires a few different components. You should always use the latest version of each component, and make sure they’re the same version for all the components.
To manage this efficiently, an external variable is defined in your project-level build.gradle file as follows:

ext.supportLibraryVersion = '27.1.1'

You will use this version with all the Design Support Library components later.

Open the build.gradle file from your app module and append the following lines inside the dependencies section:

// 1: Design Support Library
implementation "com.android.support:design:$supportLibraryVersion"
// 2: CardView
implementation "com.android.support:cardview-v7:$supportLibraryVersion"

Let’s take a moment to understand what each of these dependencies provides:

  1. Design Support Library: Adds all those “exciting” UI components we are talking about. For example: BottomNavigationView, FloatingActionButton. We’ll be adding one of each later in this tutorial.
  2. CardView: A View component that shows its content in an elevated card and highlights it to stand out from the background. Most commonly used with list items.

Also, you’re going to use VectorDrawables for the icons, so add this inside the defaultConfig section:

vectorDrawables.useSupportLibrary = true

Your app module’s build.gradle file should now look like this:

apply plugin: 'com.android.application'
apply plugin: 'kotlin-android'
apply plugin: 'kotlin-android-extensions'

android {
    compileSdkVersion 27
    defaultConfig {
        applicationId "com.raywenderlich.isell"
        minSdkVersion 16
        targetSdkVersion 27
        versionCode 1
        versionName "1.0"
        vectorDrawables.useSupportLibrary = true
    }
    buildTypes {
        release {
            minifyEnabled false
            proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
        }
    }
    androidExtensions {
        experimental = true
    }
}

dependencies {
    // Kotlin
    implementation "org.jetbrains.kotlin:kotlin-stdlib-jre7:$kotlinVersion"
    // AppCompat
    implementation "com.android.support:appcompat-v7:$supportLibraryVersion"
    // Design Support Library
    implementation "com.android.support:design:$supportLibraryVersion"
    // CardView
    implementation "com.android.support:cardview-v7:$supportLibraryVersion"
}

Notice that the build.gradle file uses a plugin:

apply plugin: 'kotlin-android-extensions'

With kotlin-android-extensions, you can directly access a View‘s id without having to initialize it using findViewById(). It’s just sweet syntactic sugar! If you’re keen to know more about kotlin-android-extensions, you can find out more here.
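For instance, here’s a minimal sketch of what that looks like; it assumes the itemsRecyclerView id that you’ll add to activity_main.xml later in this tutorial:

import android.os.Bundle
import android.support.v7.app.AppCompatActivity
// Generates a property for every id in activity_main.xml.
import kotlinx.android.synthetic.main.activity_main.*

class MainActivity : AppCompatActivity() {

  override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    setContentView(R.layout.activity_main)
    // No findViewById() needed: the id is already a property.
    itemsRecyclerView.setHasFixedSize(true)
  }
}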

Before adding any Kotlin code, configure Android Studio to automatically insert import statements so that you don’t need to worry about imports for every change you make.

Go to Preferences\Editor\General\Auto Import, check Add unambiguous imports on the fly and Optimize imports on the fly checkboxes and click OK.

Again, don’t forget to click Sync Now in the top-right corner of your IDE. You’re now done with all the setup!

Showcasing Items with RecyclerView

First things first, you’ll display a list of items to sell. To show the list, replace the TextView inside activity_main.xml with a RecyclerView:

<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
                xmlns:app="http://schemas.android.com/apk/res-auto"
                xmlns:tools="http://schemas.android.com/tools"
                android:layout_width="match_parent"
                android:layout_height="match_parent"
                tools:context=".ui.activity.MainActivity">

  <android.support.v7.widget.RecyclerView
      android:id="@+id/itemsRecyclerView"
      android:layout_width="match_parent"
      android:layout_height="match_parent"
      app:layoutManager="android.support.v7.widget.GridLayoutManager"
      app:spanCount="2"/>

</RelativeLayout>

Here, RecyclerView will be identified as itemsRecyclerView and the width and height properties should match its parent layout.

app:layoutManager="android.support.v7.widget.GridLayoutManager"
app:spanCount="2"

You’ll notice that you set the layoutManager directly in the XML to use a GridLayoutManager. Setting app:spanCount to 2 tells the GridLayoutManager to lay out two items per row.
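If you prefer, you could do the same setup in code instead of XML. A minimal sketch, assuming it runs inside MainActivity’s onCreate() after setContentView():

import android.support.v7.widget.GridLayoutManager

// Equivalent to the two XML attributes above: a two-column grid.
itemsRecyclerView.layoutManager = GridLayoutManager(this, 2)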

Next, you need to design the layout for the items in the list.

Highlight with CardView

For each item being sold, you’ll want to show the image, price and title. To make a layout for that, right-click on the layout folder, select New, then select Layout resource file and name it layout_list_item.xml.

Now add following code inside layout_list_item.xml:

<?xml version="1.0" encoding="utf-8"?>
<android.support.v7.widget.CardView
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:card_view="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:layout_margin="@dimen/default_margin"
    android:foreground="?android:attr/selectableItemBackground"
    card_view:cardBackgroundColor="@color/cardview_light_background"
    card_view:cardCornerRadius="@dimen/cardview_default_radius"
    card_view:cardElevation="@dimen/cardview_default_elevation">

  <RelativeLayout
      android:layout_width="match_parent"
      android:layout_height="wrap_content">

    <ImageView
        android:id="@+id/itemImage"
        android:layout_width="match_parent"
        android:layout_height="@dimen/item_image_height"
        android:scaleType="fitCenter"
        tools:src="@drawable/laptop_1"/>

    <TextView
        android:id="@+id/itemPrice"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_below="@id/itemImage"
        android:layout_marginLeft="@dimen/default_margin"
        android:layout_marginRight="@dimen/default_margin"
        android:maxLines="1"
        android:textAppearance="@style/TextAppearance.AppCompat.Headline"
        android:textColor="@color/colorAccent"
        tools:text="@string/hint_price"/>

    <TextView
        android:id="@+id/itemTitle"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_below="@id/itemPrice"
        android:layout_marginBottom="@dimen/default_margin"
        android:layout_marginLeft="@dimen/default_margin"
        android:layout_marginRight="@dimen/default_margin"
        android:ellipsize="end"
        android:maxLines="2"
        android:minLines="2"
        android:textAppearance="@style/TextAppearance.AppCompat.Title"
        tools:text="@string/hint_title"/>
  </RelativeLayout>
</android.support.v7.widget.CardView>

It will look like this in the Preview pane:

Here, the CardView and its properties may seem new to you, but the other components are quite familiar – you’re just adding views to show the image, price and title sequentially inside a RelativeLayout.

Using CardView makes your item appear elevated with the use of drop shadows around the element.

card_view:cardBackgroundColor="@color/cardview_light_background"

The above property adds a light-themed background color for the CardView from the Design Support Library.

card_view:cardCornerRadius="@dimen/cardview_default_radius"

This property makes the card’s corners look rounded. You’re using the default radius provided by the Design Support Library. You can play with the value for this property; the edges will look more rounded with a larger value.

The most interesting property of CardView is:

card_view:cardElevation="@dimen/cardview_default_elevation"

This property controls how elevated the CardView looks. The elevation of the view determines the size of its drop shadow: the larger the value you provide, the more elevated it’ll look.

RecyclerView in Action

It’s time to bind some data to the layout. Consider the DataProvider class inside the util package as a storehouse of all your Items. You need an adapter to show items in the RecyclerView you added earlier. To do so, add a new class inside the adapter package named ItemsAdapter, as follows:

// 1
class ItemsAdapter(private val items: List<Item>)
  : RecyclerView.Adapter<RecyclerView.ViewHolder>() {

  // 2
  class ViewHolder(itemView: View) : RecyclerView.ViewHolder(itemView) {
    fun bind(item: Item) = with(itemView) {
      itemTitle.text = item.title
      itemImage.setImageResource(item.imageId)
    }
  }

  // 3
  override fun onCreateViewHolder(parent: ViewGroup, viewType: Int): RecyclerView.ViewHolder {
    val view = LayoutInflater.from(parent.context)
        .inflate(R.layout.layout_list_item, parent, false)
    return ViewHolder(view)
  }

  // 4
  override fun onBindViewHolder(holder: RecyclerView.ViewHolder, position: Int) {
    (holder as ViewHolder).bind(items[position])
  }

  // 5
  override fun getItemCount(): Int {
    return items.size
  }
}

That’s a lot of code to digest at once! Let’s break it down…

  1. ItemsAdapter is declared as a subclass of RecyclerView.Adapter which accepts a list of Item.
  2. ViewHolder is a subclass of RecyclerView.ViewHolder. It inherits the ability to be cached in memory and re-used to display an Item inside the RecyclerView. The bind(item: Item) function inside it does all the binding between the Item and the View.
  3. onCreateViewHolder() function creates a new ViewHolder when the adapter needs a new one with the view you designed in layout_list_item.xml.
  4. onBindViewHolder() glues each Item from the list with a ViewHolder to populate it using the bind() function.
  5. getItemCount() function tells ItemsAdapter the number of items in the list.

Now, inside MainActivity.kt, add a function that sets the RecyclerView’s data according to the category you want:

private fun populateItemList(category: Category) {
  val items = when (category) {
    Category.LAPTOP -> DataProvider.laptopList
    Category.MONITOR -> DataProvider.monitorList
    Category.HEADPHONE -> DataProvider.headphoneList
  }
  if (items.isNotEmpty()) {
    itemsRecyclerView.adapter = ItemsAdapter(items)
  }
}

This function accepts a Category as input and fetches a list of items in that category through the DataProvider. Then it creates a new ItemsAdapter with those items and sets it as the adapter for itemsRecyclerView.

Call this function with a Category from onCreate() function inside MainActivity:

populateItemList(Category.LAPTOP)

Here, you’re fetching items from the LAPTOP category through the DataProvider. Feel free to play around with the other categories as well.

Run the app again. You’ll see the list as follows:

Listening to Click Events

You need to set a listener that’ll notify you when a user clicks on an item from the list. For that, you should declare an interface inside ItemsAdapter:

interface OnItemClickListener {
  fun onItemClick(item: Item, itemView: View)
}

The onItemClick(item: Item, itemView: View) function will be called to let you know which specific Item object was clicked. Here, itemView is the view that represents the Item in the RecyclerView.

Now modify ItemsAdapter‘s constructor to match the following:

class ItemsAdapter(private val items: List<Item>, private val clickListener: OnItemClickListener)

ItemsAdapter now requires an OnItemClickListener instance when created so that you can use the instance later.

Modify onBindViewHolder() as follows:

override fun onBindViewHolder(holder: RecyclerView.ViewHolder, position: Int) {
  (holder as ViewHolder).bind(items[position], clickListener)
}

This binds the clickListener instance to every ViewHolder.

At this point you might see a compiler warning like this:

but don’t worry, all you need to do is update the ViewHolder class to fix it:

class ViewHolder(itemView: View) : RecyclerView.ViewHolder(itemView) {
  fun bind(item: Item, listener: OnItemClickListener) = with(itemView) {
    itemTitle.text = item.title
    itemPrice.text = "\$" + item.price
    itemImage.setImageResource(item.imageId)
    setOnClickListener {
      listener.onItemClick(item, it)
    }
  }
}

Notice that you’re calling listener.onItemClick(item, it) inside setOnClickListener for the itemView. That means onItemClick() will fire whenever a user clicks an itemView, passing references to its corresponding item and view through the listener interface.

You should navigate from MainActivity to DetailsActivity when a user clicks on an item to see its details. Implement the OnItemClickListener interface in MainActivity as follows:

class MainActivity : AppCompatActivity(),
    ItemsAdapter.OnItemClickListener {
}

and override the onItemClick() function inside MainActivity:

override fun onItemClick(item: Item, itemView: View) {
  val detailsIntent = Intent(this, DetailsActivity::class.java)
  detailsIntent.putExtra(getString(R.string.bundle_extra_item), item)
  startActivity(detailsIntent)
}

When an item inside the RecyclerView is clicked, the onItemClick(item: Item, itemView: View) function notifies MainActivity, which navigates to DetailsActivity by starting a new Intent. Update the creation of the ItemsAdapter

itemsRecyclerView.adapter = ItemsAdapter(items)

to include the OnItemClickListener inside the populateItemList(category: Category) method:

itemsRecyclerView.adapter = ItemsAdapter(items, this)

Run the app again and click on an item from the list – you’ll see a white screen with some text like this:

But that’s ok for now, you will make it look good soon enough!

Before going any further, let’s import the icons you’re going to use later…

Using VectorDrawables for Icons

VectorDrawables are graphic elements that consist of points, lines and color information defined in XML. A VectorDrawable adjusts itself to different screen densities without loss of image quality, so a single file is enough to support devices with different resolutions, which results in a smaller APK!
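Under the hood, a VectorDrawable is just an XML file describing paths. Here’s a minimal, hypothetical example that draws a black triangle; it’s for illustration only and isn’t one of this project’s icons:

<vector xmlns:android="http://schemas.android.com/apk/res/android"
    android:width="24dp"
    android:height="24dp"
    android:viewportWidth="24.0"
    android:viewportHeight="24.0">
  <!-- pathData uses SVG path syntax: move to (12,2), draw lines to
       the bottom corners, then close the path -->
  <path
      android:fillColor="#FF000000"
      android:pathData="M12,2 L22,22 L2,22 Z"/>
</vector>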

Android Studio has a lot of VectorDrawables bundled with it for your convenience, but you can also use your own SVG or PSD icons as VectorDrawables. To import them, you would right-click on the res folder and select New > Vector Asset. This will open the Asset Studio.

The starter project for this tutorial has icons that have already been converted to vector drawables in a folder named SVG inside the project directory. Navigate to that folder and copy all the XML files into the res/drawable folder.

You can now use those vector drawables inside a BottomNavigationView.

Categorize items with BottomNavigationView

You may want to display items of different categories in different tabs. How about showing them at the bottom of MainActivity with nice icons? The Design Support Library provides a widget called BottomNavigationView that makes this task easy!

Implementing BottomNavigationView

Right-click on the res > menu folder and add a new Menu resource file.

Name it menu_bottom_navigation.xml and add the following code:

<?xml version="1.0" encoding="utf-8"?>
<menu xmlns:android="http://schemas.android.com/apk/res/android"
      xmlns:app="http://schemas.android.com/apk/res-auto">
    <item
        android:id="@+id/nav_laptops"
        android:icon="@drawable/ic_laptop"
        android:title="@string/laptops"/>
    <item
        android:id="@+id/nav_monitors"
        android:icon="@drawable/ic_monitor"
        android:title="@string/monitors"/>
    <item
        android:id="@+id/nav_headphones"
        android:icon="@drawable/ic_headphone"
        android:title="@string/headphones"/>
</menu>

The code is pretty straightforward here: you’re setting an id, icon and title for each item in the BottomNavigationView, and each item will be displayed as a tab with its icon and title.

Now add a BottomNavigationView inside activity_main.xml, below the RecyclerView:

<android.support.design.widget.BottomNavigationView
    android:id="@+id/bottomNavigationView"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:layout_alignParentBottom="true"
    app:menu="@menu/menu_bottom_navigation"/>

You should also add

android:layout_above="@+id/bottomNavigationView"

to the RecyclerView so that it sits above the BottomNavigationView instead of filling the screen.

Handling Item Selection on BottomNavigationView

The intention is to show Items of different categories when users select different tabs in the BottomNavigationView.

First, have MainActivity implement BottomNavigationView.OnNavigationItemSelectedListener. Modify the MainActivity declaration as follows:

class MainActivity : AppCompatActivity(),
        ItemsAdapter.OnItemClickListener,
        BottomNavigationView.OnNavigationItemSelectedListener {
}

and add the following function inside the MainActivity class:

override fun onNavigationItemSelected(item: MenuItem): Boolean {
  when (item.itemId) {
    R.id.nav_laptops -> populateItemList(Category.LAPTOP)
    R.id.nav_monitors -> populateItemList(Category.MONITOR)
    R.id.nav_headphones -> populateItemList(Category.HEADPHONE)
    else -> return false
  }
  return true
}

The onNavigationItemSelected(item: MenuItem) function fires each time a user clicks a MenuItem shown as a tab in the BottomNavigationView.

This function determines which MenuItem was clicked using item.itemId and calls populateItemList() with its corresponding Category to show that category’s items in the RecyclerView.

onNavigationItemSelected(item: MenuItem) returns true, notifying MainActivity that a navigation item was selected, if item.itemId matches any item defined in menu_bottom_navigation.xml; otherwise it returns false to keep things unchanged.

Add this line inside onCreate() function in MainActivity:

bottomNavigationView.setOnNavigationItemSelectedListener(this)

Run the app again and you’ll notice the items change as each tab is selected.

Adding New Items

Adding another button inside MainActivity to add new items would eat a lot of precious real estate on your landing page, but how about overlaying a button over the list of items? Did someone say FloatingActionButton?

Using FloatingActionButton

Add a FloatingActionButton at the bottom of activity_main.xml, so the whole file looks as follows:

<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
                xmlns:app="http://schemas.android.com/apk/res-auto"
                xmlns:tools="http://schemas.android.com/tools"
                android:layout_width="match_parent"
                android:layout_height="match_parent"
                tools:context=".ui.activity.MainActivity">

    <android.support.v7.widget.RecyclerView
        android:id="@+id/itemsRecyclerView"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:layout_above="@+id/bottomNavigationView"
        app:layoutManager="android.support.v7.widget.GridLayoutManager"
        app:spanCount="2"/>

    <android.support.design.widget.BottomNavigationView
        android:id="@+id/bottomNavigationView"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_alignParentBottom="true"
        app:menu="@menu/menu_bottom_navigation"/>

    <android.support.design.widget.FloatingActionButton
        android:id="@+id/addFab"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_above="@id/bottomNavigationView"
        android:layout_alignParentRight="true"
        android:clickable="true"
        android:onClick="onClickAddFab"
        app:srcCompat="@drawable/ic_add"
        app:useCompatPadding="true"/>

</RelativeLayout>

Then add the following function inside MainActivity.kt:

fun onClickAddFab(view: View) {
  startActivity(Intent(this, AddItemActivity::class.java))
}

This function will be called when the user clicks on the FloatingActionButton, and it navigates from MainActivity to AddItemActivity by starting a new Intent.

Run the app again and click on the FloatingActionButton; you should see the app transition into AddItemActivity:

Before moving to the next section, update imageFromCategory() inside AddItemActivity.kt so that it returns an icon that matches the provided Category:

private fun imageFromCategory(category: Category): Int {
  return when (category) {
    Category.LAPTOP -> R.drawable.ic_laptop
    Category.MONITOR -> R.drawable.ic_monitor
    else -> R.drawable.ic_headphone
  }
}

Interacting with TextInputLayout

Add a TextInputLayout in activity_add_item.xml to wrap titleEditText and priceEditText:

  <android.support.design.widget.TextInputLayout
    android:id="@+id/titleTextInput"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:layout_below="@+id/categoryTitle"
    app:counterEnabled="true"
    app:counterMaxLength="30">

    <EditText
      android:id="@+id/titleEditText"
      android:layout_width="match_parent"
      android:layout_height="wrap_content"

      android:hint="@string/hint_title"
      android:maxLines="2" />

  </android.support.design.widget.TextInputLayout>

  <android.support.design.widget.TextInputLayout
    android:id="@+id/priceTextInput"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:layout_below="@+id/titleTextInput"
    app:counterEnabled="true"
    app:counterMaxLength="30">

    <EditText
      android:id="@+id/priceEditText"
      android:layout_width="match_parent"
      android:layout_height="wrap_content"

      android:hint="@string/hint_price"
      android:inputType="numberDecimal" />

  </android.support.design.widget.TextInputLayout>

  <EditText
    android:id="@+id/detailsEditText"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:layout_below="@+id/priceTextInput"
    android:hint="@string/hint_details"
    android:lines="2" />

TextInputLayout is basically a wrapper for EditText that enhances it to display floating hint animations, error messages and character counts in a more material way.

For example, adding

app:counterEnabled="true"
app:counterMaxLength="30"

in titleTextInput inside activity_add_item.xml displays a 30-character limit for the titleEditText input and shows the character count interactively, with no extra code needed!
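If you ever need to change these settings at runtime, the same options are available in code. A small sketch, assuming it runs after setContentView() in AddItemActivity:

// Mirrors app:counterEnabled and app:counterMaxLength from the XML
titleTextInput.isCounterEnabled = true
titleTextInput.counterMaxLength = 30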

Showing an error message is easy when using TextInputLayout. You might want to check that the user has entered a title and price before adding an item, and show an error near the input fields if they’re empty or invalid. Write a function in AddItemActivity.kt that does exactly that:

private fun hasValidInput(): Boolean {
    // 1
  val title = titleEditText.text.toString()
  if (title.isNullOrBlank()) {
    titleTextInput.error = "Please enter a valid Title"
    return false
  }
  // 2
  val price = priceEditText.text.toString().toDoubleOrNull()
  if (price == null || price <= 0.0) {
    priceTextInput.error = "Please enter a minimum Price"
    return false
  }
  // 3
  return true
}

  1. This section checks whether the user left titleEditText blank or entered only whitespace. If so, it sets an error on the titleTextInput field and stops further processing by returning false.
  2. This section tries to convert the user input in priceEditText to a Double value. toDoubleOrNull() returns null if the conversion fails due to invalid input, or even whitespace. The user must also enter a price greater than 0, so if the price is null or not greater than 0.0, you set an error on the priceTextInput field and stop further processing by returning false.
  3. It returns true if all the validation checks above pass, allowing you to proceed with adding an item.

Call hasValidInput() from inside onClickAddItem(view: View) function like this:

fun onClickAddItem(view: View) {
  if (hasValidInput()) {
    val selectedCategory = categorySpinner.selectedItem as Category
    DataProvider.addItem(Item(
        imageId = imageFromCategory(selectedCategory),
        title = titleEditText.text.toString(),
        details = detailsEditText.text.toString(),
        price = priceEditText.text.toString().toDouble(),
        category = selectedCategory,
        postedOn = System.currentTimeMillis())
    )
  }
}

You should clear all error messages whenever the user starts typing in the input fields again. To do that, modify the beforeTextChanged() function as follows:

override fun beforeTextChanged(s: CharSequence?, start: Int, count: Int, after: Int) {
  titleTextInput.error = null
  priceTextInput.error = null
}

Easy-peasy, huh?
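One assumption worth making explicit: this works because the starter project already registers the Activity as a TextWatcher on the input fields. If you were wiring that up from scratch, the registration might look like this sketch:

// Assumes AddItemActivity implements TextWatcher
titleEditText.addTextChangedListener(this)
priceEditText.addTextChangedListener(this)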

Run the app and try to add an item without a price - you'll be stopped with an error message!

Using Snackbar for Temporary Messages

Snackbar is a smarter version of Toast in Android. Using a Snackbar, you can provide action buttons like "OK" or "CANCEL" along with a message. Unlike with a Toast, you need to provide a view in which to display the Snackbar.

It's good practice to show a confirmation message after an item is added successfully and to take the user back to the item list after they acknowledge it. Let's add a function for that inside AddItemActivity:

private fun showAddItemConfirmation() {
  Snackbar.make(addItemRootView, getString(R.string.add_item_successful), Snackbar.LENGTH_LONG)
      .setAction(getString(R.string.ok)) {
        navigateBackToItemList()
      }
      .show()
}

It shows a Snackbar in addItemRootView with a success message, displayed for the duration defined by Snackbar.LENGTH_LONG.

You added an action button with the text "OK" by appending

.setAction(getString(R.string.ok)) {
    navigateBackToItemList()
}

which performs the navigateBackToItemList() action on a button click.
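navigateBackToItemList() comes with the starter project. If you had to write it yourself, a minimal sketch could simply finish AddItemActivity so the user lands back on MainActivity's item list:

private fun navigateBackToItemList() {
  // Close this Activity and return to the previous one on the back stack
  finish()
}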

Add showAddItemConfirmation() at the bottom of onClickAddItem() function:

fun onClickAddItem(view: View) {
  if (hasValidInput()) {
    val selectedCategory = categorySpinner.selectedItem as Category
    DataProvider.addItem(Item(
         imageId = imageFromCategory(selectedCategory),
         title = titleEditText.text.toString(),
         details = detailsEditText.text.toString(),
         price = priceEditText.text.toString().toDouble(),
         category = selectedCategory,
         postedOn = System.currentTimeMillis())
    )
    showAddItemConfirmation()
  }
}

Run the app again and add a new item with a title, price and details. The output should look like this:

Animating Item Details

Presenting item details attractively is a way of giving the user more information, which may lead to more items sold. One approach to making the detail page more attractive is to use animation. In this section, you'll leverage what the Design Support Library offers to make the app more engaging...

Using CoordinatorLayout and CollapsingToolbarLayout

Combining CoordinatorLayout with CollapsingToolbarLayout is a killer combo that can make your app a lot more fascinating to users. Before seeing them in action, replace everything inside activity_details.xml with the following:

<?xml version="1.0" encoding="utf-8"?>
<!-- 1 -->
<android.support.design.widget.CoordinatorLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".ui.activity.DetailsActivity">

    <!-- 2 -->
    <android.support.design.widget.AppBarLayout
        android:id="@+id/appBar"
        android:layout_width="match_parent"
        android:layout_height="@dimen/app_bar_height"
        android:theme="@style/AppTheme.AppBarOverlay">

        <!-- 3 -->
        <android.support.design.widget.CollapsingToolbarLayout
            android:id="@+id/collapsingToolbarLayout"
            android:layout_width="match_parent"
            android:layout_height="match_parent"
            app:contentScrim="@color/colorPrimary"
            app:layout_scrollFlags="scroll|exitUntilCollapsed">

            <ImageView
                android:id="@+id/itemImageView"
                android:layout_width="match_parent"
                android:layout_height="match_parent"
                android:scaleType="centerCrop"
                app:layout_scrollFlags="scroll|enterAlways|enterAlwaysCollapsed"/>

            <!-- 4 -->
            <android.support.v7.widget.Toolbar
                android:id="@+id/toolBar"
                android:layout_width="match_parent"
                android:layout_height="?attr/actionBarSize"
                app:layout_collapseMode="pin"
                app:popupTheme="@style/AppTheme.PopupOverlay"/>

        </android.support.design.widget.CollapsingToolbarLayout>
    </android.support.design.widget.AppBarLayout>

    <!-- 5 -->
    <include layout="@layout/content_details"/>

    <!-- 6 -->
    <android.support.design.widget.FloatingActionButton
        android:id="@+id/shareFab"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_margin="@dimen/fab_margin"
        android:onClick="onClickShareFab"
        app:layout_anchor="@+id/appBar"
        app:layout_anchorGravity="bottom|end"
        app:srcCompat="@android:drawable/ic_menu_share"/>
</android.support.design.widget.CoordinatorLayout>

Switch to the layout blueprint for a better overview; then, go over each item in the layout, one by one:

  1. CoordinatorLayout is the root layout and the container for its child views. By specifying a behavior for a direct child of CoordinatorLayout, you’ll be able to intercept touch events, window insets, measurement, layout, and nested scrolling. Don't panic - you'll learn to implement them in the next section!
  2. AppBarLayout allows your Toolbar and other views (such as the ImageView) to react to scroll events in a sibling view.
    android:theme="@style/AppTheme.AppBarOverlay"
    

    You use the above property to override the relevant attributes with a light overlay style.

  3. CollapsingToolbarLayout is a wrapper for Toolbar which allows the Toolbar to expand or collapse as the user scrolls through a view.
    app:contentScrim="@color/colorPrimary"
    

    Using the above property gradually changes the CollapsingToolbarLayout's color to the provided color as it collapses.

    app:layout_scrollFlags="scroll|exitUntilCollapsed"
    

    The above property means the view scrolls off until it reaches its minimum height (?attr/actionBarSize in this case).

  4. Toolbar is actually a more flexible and customizable ActionBar that holds your navigation button, activity title and so on. Here, using
    android:layout_height="?attr/actionBarSize"
    

    ensures the Toolbar has exactly the same height as a regular ActionBar, and

    app:layout_collapseMode="pin"
    

    pins it on top when CollapsingToolbarLayout is fully collapsed. Finally

    app:popupTheme="@style/AppTheme.PopupOverlay"
    

    styles any popups shown from the Toolbar, such as the overflow menu.

  5. You are including a layout from content_details.xml that shows the price, title and details of the item.
  6. The FloatingActionButton allows you to share your item via a share-intent.
    android:onClick="onClickShareFab"
    

    Setting the above property will fire the onClickShareFab(view: View) function inside DetailsActivity when the user clicks it.

    app:layout_anchor="@+id/appBar"
    app:layout_anchorGravity="bottom|end"
    

    These last two properties keep the button attached to the bottom end of the AppBarLayout. Because you set appBar as the layout_anchor, the CoordinatorLayout automatically manages the FloatingActionButton's visibility when the AppBarLayout collapses.

Put everything inside content_details.xml within a NestedScrollView, so the layout will look like this:

<?xml version="1.0" encoding="utf-8"?>
<android.support.v4.widget.NestedScrollView
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    app:layout_behavior="@string/appbar_scrolling_view_behavior"
    tools:context=".ui.activity.DetailsActivity"
    tools:showIn="@layout/activity_details">

    <LinearLayout
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:orientation="vertical"
        android:padding="@dimen/default_padding">

        <TextView
            android:id="@+id/priceTextView"
            style="@style/TextAppearance.AppCompat.Display1"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:textColor="@color/colorAccent"
            tools:text="@string/hint_price"/>

        <TextView
            android:id="@+id/titleTextView"
            style="@style/TextAppearance.AppCompat.Title"
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:layout_marginBottom="@dimen/default_margin"
            android:layout_marginTop="@dimen/default_margin"
            android:transitionName="@string/transition_title"
            tools:targetApi="lollipop"
            tools:text="@string/hint_title"/>

        <TextView
            android:id="@+id/detailsTextView"
            style="@style/TextAppearance.AppCompat.Body1"
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            tools:text="@string/hint_details"/>
    </LinearLayout>

</android.support.v4.widget.NestedScrollView>

app:layout_behavior="@string/appbar_scrolling_view_behavior"

With the above property, you share scroll events on the NestedScrollView with AppBarLayout so that it can expand or collapse accordingly.

Finally, set the Toolbar inside onCreate() function in DetailsActivity.kt:

setSupportActionBar(toolBar)
supportActionBar?.setDisplayHomeAsUpEnabled(true)

and modify populateDetails(item: Item?) function like this:

private fun populateDetails(item: Item?) {
  supportActionBar?.title = item?.category?.name
  itemImageView.setImageResource(item?.imageId!!)
  priceTextView.text = getString(R.string.currency_symbol) + item?.price.toString()
  titleTextView.text = item?.title
  detailsTextView.text = item?.details
}

This sets the category name as the Toolbar title and the Item's image on the ImageView.

Run the app again, and navigate to the DetailsActivity of any item - you should see something amazing:

Adding Parallax Scrolling Effect

As you've already seen, the Design Support Library does all the heavy lifting when it comes to animation, providing your users a rich experience. Why not add one more effect? Add the following to the ImageView inside activity_details.xml:

app:layout_collapseMode="parallax"
app:layout_collapseParallaxMultiplier="0.7"

This adds a nice parallax-scrolling effect to the ImageView as the AppBarLayout expands or collapses. The layout_collapseParallaxMultiplier affects the scroll speed and the visible area during scrolling. The default value is 0.5; play around with this value and see which suits you best.
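After this change, the ImageView block inside activity_details.xml looks like this:

<ImageView
    android:id="@+id/itemImageView"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:scaleType="centerCrop"
    app:layout_collapseMode="parallax"
    app:layout_collapseParallaxMultiplier="0.7"
    app:layout_scrollFlags="scroll|enterAlways|enterAlwaysCollapsed"/>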

Build and run, and see the changes in all their glory!

This completes your quest into making an awesome marketplace app! Don't stop here; keep working on it - and maybe you can make the next eBay!

Where To Go From Here?

You can download the final project using the link at the top or the bottom of this tutorial.

To take your Material Design skills to the next level, here are some additional things to explore:

I hope you enjoyed materializing your app with Android Design Support Library. If you have any questions, comments, or awesome modifications to this project app please join the forum discussion and comment below!

The post Android Design Support Library: Getting Started appeared first on Ray Wenderlich.


Server Side Swift with Kitura Part 2: Completing Your Backend


Part two of our new, free course, Server Side Swift with Kitura, is available today! If you ever wanted to extend your skills past developing for mobile devices, but didn’t have time to learn a new language, this is your chance!

In part two, you’ll get an introduction to Kitura version 2.0, and learn to connect your Kitura server to CouchDB to store your EmojiJournal entries.

Take a look at what’s inside:

Part 2: Completing Your Backend

  1. Introduction To Kitura 2.0: We’ll cover the basics introduced in Kitura 1.0, and then go into some of the finer points of what was added to the Kitura framework in version 2.0, including what a router is and how codable routes work.
  2. Making Codable Routes: Let’s make a basic GET route using Kitura, and then let’s enhance and streamline that route using the Codable protocol with Kitura’s best built-in feature – Codable Routing.
  3. Challenge: Create A DELETE Route: Now that you’ve made two routes in Kitura, you’ll make your third one to improve on its functionality.
  4. Introduction To CouchDB: I’ll show you how CouchDB works, and how you’ll make use of it to store journal entries that you enter with EmojiJournal.
  5. Connecting To CouchDB: I’ll walk you through setting up CouchDB on your local machine, and how to connect your Kitura server to it.
  6. Challenge: Using CouchDB In The Cloud: After I show you how to set up an instance of CouchDB in IBM Cloud, I’ll challenge you to connect to it once you have all the information you need to do so.
  7. Writing Your Persistence Functions: I’ll walk you through writing a series of persistence functions in your Kitura application, so that you have an easy way to keep track of how your app uses CouchDB.
  8. Challenge: Linking Your Router To CouchDB: After you write your persistence functions, you’ll hook up your codable routes to your database, finally putting all the pieces together.
  9. Conclusion: Take a second to smell the roses, and look at what you built! I’ll run you through a test drive of your Kitura application.

Where To Go From Here?

Want to check out the course? The first two parts of the course are ready for you today! The rest of the course will be released over the next two weeks, and the entire course will be available for free.

Stay tuned for more new and updated courses to come. I hope you enjoy the course! :]

The post Server Side Swift with Kitura Part 2: Completing Your Backend appeared first on Ray Wenderlich.

Enum-Driven TableView Development


Is there anything more fundamental, in iOS development, than UITableView? It’s a simple, clean control. Unfortunately, a lot of complexity lies under the hood: Your code needs to show loading indicators at the right time, handle errors, wait for service call completions and show results when they come in.

In this tutorial, you’ll learn how to use Enum-Driven TableView Development to manage this complexity.

To follow this technique, you’ll refactor an existing app called Chirper. Along the way, you’ll learn the following:

  • How to use an enum to manage the state of your ViewController.
  • The importance of reflecting the state in the view for the user.
  • The dangers of poorly defined state
  • How to use property observers to keep your view up-to-date.
  • How to work with pagination to simulate an endless list of search results.

Note: This tutorial assumes some familiarity with UITableView and Swift enums. If you need help, take a look at the iOS and Swift tutorials first.

Getting Started

The Chirper app that you’ll refactor for this tutorial presents a searchable list of bird sounds from the xeno-canto public API.

If you search for a species of bird within the app, it will present you with a list of recordings that match your search query. You can play the recordings by tapping the button in each row.

To download the starter project, use the Download Materials button at the top or bottom of this tutorial. Once you’ve downloaded this, open the starter project in Xcode.

Chirper app

Different States

A well-designed table view has four different states:

  • Loading: The app is busy fetching new data.
  • Error: A service call or another operation has failed.
  • Empty: The service call has returned no data.
  • Populated: The app has retrieved data to display.

The populated state is the most obvious, but the others are important as well. You should always let the user know the app’s state, which means showing a loading indicator during the loading state, telling the user what to do for an empty data set and showing a friendly error message when things go wrong.

To start, open MainViewController.swift to take a look at the code. The view controller does some pretty important things, based on the state of some of its properties:

  • The view displays a loading indicator when isLoading is set to true.
  • The view tells the user that something went wrong when error is non-nil.
  • If the recordings array is nil or empty, the view displays a message prompting the user to search for something different.
  • If none of the previous conditions are true, the view displays the list of results.
  • tableView.tableFooterView is set to the correct view for the current state.

There’s a lot to keep in mind while modifying the code. And, to make things worse, this pattern gets more complicated when you pile on more features through the app.

Poorly Defined State

Search through MainViewController.swift and you’ll see that the word state isn’t mentioned anywhere.

The state is there, but it’s not clearly defined. This poorly defined state makes it hard to understand what the code is doing and how it responds to the changes of its properties.

Invalid State

If isLoading is true, the app should show the loading state. If error is non-nil, the app should show the error state. But what happens if both of these conditions are met? You don’t know. The app would be in an invalid state.

MainViewController doesn’t clearly define its states, which means it may have some bugs due to invalid or indeterminate states.

A Better Alternative

MainViewController needs a better way to manage its state. It needs a technique that is:

  • Easy to understand
  • Easy to maintain
  • Insusceptible to bugs

In the steps that follow, you’re going to refactor MainViewController to use an enum to manage its state.

Refactoring to a State Enum

In MainViewController.swift, add this above the declaration of the class:

enum State {
  case loading
  case populated([Recording])
  case empty
  case error(Error)
}

This is the enum that you’ll use to clearly define the view controller’s state. Next, add a property to MainViewController to set the state:

var state = State.loading

Build and run the app to see that it still works. You haven’t made any changes to the behavior yet so everything should be the same.

Refactoring the Loading State

The first change you’ll make is to remove the isLoading property in favor of the state enum. In loadRecordings(), the isLoading property is set to true. The tableView.tableFooterView is set to the loading view. Remove these two lines from the beginning of loadRecordings():

isLoading = true
tableView.tableFooterView = loadingView

Replace them with this:

state = .loading

Then, remove self.isLoading = false inside the fetchRecordings completion block. loadRecordings() should look like this:

@objc func loadRecordings() {
  state = .loading
  recordings = []
  tableView.reloadData()
    
  let query = searchController.searchBar.text
  networkingService.fetchRecordings(matching: query, page: 1) { [weak self] response in
      
    guard let `self` = self else {
      return
    }
      
    self.searchController.searchBar.endEditing(true)
    self.update(response: response)
  }
}

You can now remove MainViewController’s isLoading property. You won’t need it any more.

Build and run the app. You should have the following view:

search view without loading state

The state property has been set, but you’re not doing anything with it. tableView.tableFooterView needs to reflect the current state. Create a new method in MainViewController named setFooterView().

func setFooterView() {
  switch state {
  case .loading:
    tableView.tableFooterView = loadingView
  default:
    break
  }
}

Now, back to loadRecordings(). After setting the state to .loading, add the following:

setFooterView()

Build and run the app.

Now, when you change the state to .loading, setFooterView() is called and the progress indicator is displayed. Great job!

Refactoring the Error State

loadRecordings() fetches recordings from the NetworkingService. It takes the response from networkingService.fetchRecordings() and calls update(response:), which updates the app’s state.

Inside update(response:), if the response has an error, it sets the error’s description on the errorLabel. The tableFooterView is set to the errorView, which contains the errorLabel. Find these two lines in update(response:):

errorLabel.text = error.localizedDescription
tableView.tableFooterView = errorView

Replace them with this:

state = .error(error)
setFooterView()

In setFooterView(), add a new case for the error state:

case .error(let error):
  errorLabel.text = error.localizedDescription
  tableView.tableFooterView = errorView

The view controller no longer needs its error: Error? property. You can remove it. Inside update(response:), you need to remove the reference to the error property that you just removed:

error = response.error

Once you’ve removed that line, build and run the app.

You’ll see that the loading state still works well. But how do you test the error state? The easiest way is to disconnect your device from the internet; if you’re running the simulator on your Mac, disconnect your Mac from the internet now. This is what you should see when the app tries to load data:

No connection view

Refactoring the Empty and Populated States

There’s a pretty long if-else chain at the beginning of update(response:). To clean this up, replace update(response:) with the following:

func update(response: RecordingsResult) {
  if let error = response.error {
    state = .error(error)
    setFooterView()
    tableView.reloadData()
    return
  }
  
  recordings = response.recordings
  tableView.reloadData()
}

You’ve just broken the states populated and empty. Don’t worry, you’ll fix them soon!

Setting the Correct State

Add this below the if let error = response.error block:

guard let newRecordings = response.recordings,
  !newRecordings.isEmpty else {
    state = .empty
    setFooterView()
    tableView.reloadData()
    return
}

Don’t forget to call setFooterView() and tableView.reloadData() when updating the state. If you miss them, you won’t see the changes.

Next, find this line inside of update(response:):

recordings = response.recordings

Replace it with this:

state = .populated(newRecordings)
setFooterView()

You’ve just refactored update(response:) to act on the view controller’s state property.

Setting the Footer View

Next, you need to set the correct table footer view for the current state. Add these two cases to the switch statement inside setFooterView():

case .empty:
  tableView.tableFooterView = emptyView
case .populated:
  tableView.tableFooterView = nil

The app no longer uses the default case, so remove it.

Build and run the app to see what happens:

Getting Data from the State

The app isn’t displaying data anymore. The view controller’s recordings property populates the table view, but it isn’t being set. The table view needs to get its data from the state property now. Add this computed property inside the declaration of the State enum:

var currentRecordings: [Recording] {
  switch self {
  case .populated(let recordings):
    return recordings
  default:
    return []
  }
}

You can use this property to populate the table view. If the state is .populated, it uses the populated recordings; otherwise, it returns an empty array.

In tableView(_:numberOfRowsInSection:), remove this line:

return recordings?.count ?? 0

And replace it with the following:

return state.currentRecordings.count

Next up, in tableView(_:cellForRowAt:), remove this block:

if let recordings = recordings {
  cell.load(recording: recordings[indexPath.row])
}

Replace it with this:

cell.load(recording: state.currentRecordings[indexPath.row])

No more unnecessary optionals!

You don’t need the recordings property of MainViewController anymore. Remove it along with its final reference in loadRecordings().

Build and run the app.

All the states should be working now. You’ve removed the isLoading, error, and recordings properties in favor of one clearly defined state property. Great job!

Keeping in Sync with a Property Observer

You’ve removed the poorly defined state from the view controller, and you can now easily discern the view’s behavior from the state property. Also, it’s impossible to be in both a loading and an error state — that means no chance of invalid state.

There’s still one problem, though. When you update the value of the state property, you must remember to call setFooterView() and tableView.reloadData(). If you don’t, the view won’t update to properly reflect the state that it’s in. Wouldn’t it be better if everything was refreshed whenever the state changed?

This is a great opportunity to use a didSet property observer. You use a property observer to respond to a change in a property’s value. If you want to reload the table view and set the footer view every time the state property is set, then you need to add a didSet property observer.

Replace the declaration of var state = State.loading with this:

var state = State.loading {
  didSet {
    setFooterView()
    tableView.reloadData()
  }
}

When the value of state changes, the didSet property observer fires. It calls setFooterView() and tableView.reloadData() to update the view.

Remove all other calls to setFooterView() and tableView.reloadData(); there are four of each. You can find them in loadRecordings() and update(response:). They’re not needed anymore.

Build and run the app to check that everything still works:

Adding Pagination

When you use the app to search, the API has many results to give but it doesn’t return all results at once.

For example, search Chirper for a common species of bird, something that you’d expect to see many results for — say, a parrot:

Search parrot view

That can’t be right. Only 50 recordings of parrots?

The xeno-canto API limits the results to 500 at a time. Your project app cuts that amount to 50 results within NetworkingService.swift, just to make this example easy to work with.

If you only receive the first 500 results, then how do you get the rest of the results? The API that you’re using to retrieve the recordings does this through pagination.

How an API Supports Pagination

When you query the xeno-canto API within the NetworkingService, this is what the URL looks like:

http://www.xeno-canto.org/api/2/recordings?query=parrot

The results from this call are limited to the first 500 items. This is referred to as the first page, which contains items 1–500. The next 500 results would be referred to as the second page. You specify which page you want as a query parameter:

http://www.xeno-canto.org/api/2/recordings?query=parrot&page=2

Notice the &page=2 on the end; this code tells the API that you want the second page, which contains the items 501–1000.
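If you were assembling such a URL yourself in Swift, URLComponents would handle the query string for you. This is just a sketch, independent of how the starter project’s NetworkingService builds its requests:

import Foundation

var components = URLComponents(string: "http://www.xeno-canto.org/api/2/recordings")!
components.queryItems = [
  URLQueryItem(name: "query", value: "parrot"),
  URLQueryItem(name: "page", value: "2")
]
// components.url is .../recordings?query=parrot&page=2
let url = components.url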

Supporting Pagination in Your Table View

Take a look at MainViewController.loadRecordings(). When it calls networkingService.fetchRecordings(), the page parameter is hard coded to 1. This is what you need to do:

  1. Add a new state called paging.
  2. If the response from networkingService.fetchRecordings indicates that there are more pages, then set the state to .paging.
  3. When the table view is about to display the last cell in the table, load the next page of results if the state is .paging.
  4. Add the new recordings from the service call to the array of recordings.

When the user scrolls to the bottom, the app will fetch more results. This gives the impression of an infinite list — sort of like what you’d see in a social media app. Pretty cool, huh?

Adding the New Paging State

Start by adding the new paging case to your state enum:

case paging([Recording], next: Int)

It needs to keep track of an array of recordings to display, just like the .populated state. It also needs to keep track of the next page that the API should fetch.

Try to build and run the project, and you’ll see that it no longer compiles. The switch statement in setFooterView is exhaustive, meaning that it covers all cases without a default case. This is great because it ensures that you update it when a new state is added. Add this to the switch statement:

case .paging:
  tableView.tableFooterView = loadingView

If the app is in the paging state, it displays the loading indicator at the end of the table view.

The state’s currentRecordings computed property isn’t exhaustive, though; its default case returns an empty array for the new .paging state, so you’ll need to update it if you want to see your results. Add a new case to the switch statement inside currentRecordings:

case .paging(let recordings, _):
  return recordings

Setting the State to .paging

In update(response:), replace state = .populated(newRecordings) with this:

if response.hasMorePages {
  state = .paging(newRecordings, next: response.nextPage)
} else {
  state = .populated(newRecordings)
}

response.hasMorePages tells you whether the current page is less than the total number of pages the API has for the current query. If there are more pages to be fetched, you set the state to .paging. If the current page is the last page, or the only page, then you set the state to .populated.
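To make that concrete, here’s one way a paged response type could derive those values. This is a sketch with hypothetical names; the actual RecordingsResult in the starter project may be shaped differently:

struct PagedResult {
  let currentPage: Int
  let pageCount: Int

  // True while the API still has pages beyond the one just fetched
  var hasMorePages: Bool { return currentPage < pageCount }

  // The page number to request next
  var nextPage: Int { return currentPage + 1 }
}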

Build and run the app:

Pagination with loading state

If you search for something with multiple pages, the app displays the loading indicator at the bottom. But if you search for a term that has only one page of results, you would get the usual .populated state without the loading indicator.

You can see when there are more pages to be loaded, but the app isn’t doing anything to load them. You’ll fix that now.

Loading the Next Page

When the user is about to reach the end of the list, you want the app to start loading the next page. First, create a new empty method named loadPage:

func loadPage(_ page: Int) {
}

This is the method that you’ll call when you want to load a particular page of results from the NetworkingService.

Remember how loadRecordings() was loading the first page by default? Move all the code from loadRecordings() to loadPage(_:), except for the first line where the state is set to .loading.

Next, update fetchRecordings(matching: query, page: 1) to use the page parameter, like this:

networkingService.fetchRecordings(matching: query, page: page)

loadRecordings() is looking a little bare now. Update it to call loadPage(_:), specifying page 1 as the page to be loaded:

@objc func loadRecordings() {
  state = .loading
  loadPage(1)
}

Build and run the app:

If nothing has changed, you’re on the right track!

Add the following to tableView(_: cellForRowAt:), just before the return statement.

if case .paging(_, let nextPage) = state,
  indexPath.row == state.currentRecordings.count - 1 {
  loadPage(nextPage)
}

If the current state is .paging, and the current row to be displayed is the same index as the last result in the currentRecordings array, it’s time to load the next page.

Build and run the app:

Exciting! When the loading indicator comes into view, the app fetches the next page of data. But it doesn’t append the data to the current recordings — it just replaces the current recordings with the new ones.

Appending the Recordings

In update(response:), the newRecordings array is being used for the view’s new state. Before the if response.hasMorePages statement, add this:

var allRecordings = state.currentRecordings
allRecordings.append(contentsOf: newRecordings)

You get the current recordings and then append to new recordings to that array. Now, update the if response.hasMorePages statement to use allRecordings instead of newRecordings:

if response.hasMorePages {
  state = .paging(allRecordings, next: response.nextPage)
} else {
  state = .populated(allRecordings)
}

See how easy that was with the help of the state enum? Build and run the app to see the difference:

Search with working pagination

Where to Go From Here?

If you want to download the finished project, use the Download Materials button at the top or bottom of this tutorial.

In this tutorial, you refactored an app to handle complexity in a much clearer way. You replaced a lot of error-prone, poorly defined state with a clean and simple Swift enum. You even tested out your enum-driven table view by adding a complicated new feature: pagination.

When you refactor code, it’s important to test things to make sure that you haven’t broken anything. Unit tests are great for this. Take a look at the iOS Unit Testing and UI Testing tutorial to learn more.

Now that you’ve learned how to work with a pagination API in an app, you can learn how to build the actual API. The Server Side Swift with Vapor video course can get you started.

Did you enjoy this tutorial? I hope it helps you manage the states of all the apps you’ll build! If you have any questions or insights to share, I’d love to hear from you in the forum discussion below.

The post Enum-Driven TableView Development appeared first on Ray Wenderlich.


Unreal Engine 4 Tutorial: Painting With Render Targets


A render target is basically a texture that you can write to at runtime. On the engine side of things, they store information such as base color, normals and ambient occlusion.

On the user side of things, render targets were mainly used as a sort of secondary camera. You could point a scene capture at something and store the image to a render target. You could then display the render target on a mesh to simulate something like a security camera.

With the release of 4.13, Epic introduced the ability to draw materials directly to render targets using Blueprints. This feature allowed for advanced effects such as fluid simulations and snow deformation. Sounds pretty exciting, right? But before you get into such advanced effects, it’s always best to start simple. And what’s more simple than just painting onto a render target?

In this tutorial, you will learn how to:

  • Dynamically create a render target using Blueprints
  • Display a render target on a mesh
  • Paint a texture onto a render target
  • Change the brush size and texture during gameplay
Note: This tutorial assumes you already know the basics of using Unreal Engine. If you are new to Unreal Engine, check out our 10-part Unreal Engine for Beginners tutorial series.
Note: This tutorial is part of a 6-part tutorial series on Diving Deeper into Unreal Engine:

Getting Started

Start by downloading the materials for this tutorial (you can find a link at the top or bottom of this tutorial). Unzip it and navigate to CanvasPainterStarter and open CanvasPainter.uproject. If you press Play, you will see the following:

unreal engine render target

The square in the middle (the canvas) is what you will be painting on. The UI elements on the left will be the texture you want to paint and its size.

To start, let’s go over the method you will use to paint.

Painting Method

The first thing you will need is a render target to act as the canvas. To determine where to paint on the render target, you do a line trace going forward from the camera. If the line hits the canvas, you can get the hit location in UV space.

For example, if the canvas is perfectly UV mapped, a hit in the center will return a value of (0.5, 0.5). If it hits the bottom-right corner, you will get a value of (1, 1). You can then use some simple math to calculate the draw location.
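In other words, the draw location is simply the UV coordinate multiplied by the render target’s size. For example, on a 1024×1024 render target, a hit at UV (0.25, 0.75) maps to pixel (256, 768), and a hit at the center (0.5, 0.5) maps to (512, 512).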

unreal engine render target

But why get the location in UV space? Why not use the actual world space location? Using world space, you would first need to calculate the hit’s location relative to the plane. You would also need to take into account the plane’s rotation and scale.

Using UV space, you don’t need to do any of these calculations. On a perfectly UV mapped plane, a hit in the middle will always return (0.5, 0.5), regardless of the plane’s location and rotation.

Note: The method in this tutorial generally only works for planes or plane-like surfaces. For other types of geometry, a more advanced method is required which I will cover in a future tutorial.

First, you will create the material that will display the render target.

Creating the Canvas Material

Navigate to the Materials folder and then open M_Canvas.

For this tutorial, you will create render targets dynamically in Blueprints. This means you will need to set up a texture as a parameter so you can pass in the render target. To do this, create a TextureSampleParameter2D and name it RenderTarget. Afterwards, connect it to BaseColor.

unreal engine render target

Don’t worry about setting the texture here — you will do this next in Blueprints. Click Apply and then close M_Canvas.

The next step is to create the render target and then use it in the canvas material.

Creating the Render Target

There are two ways to create render targets. The first is to create them in the editor by clicking Add New\Materials & Textures\Render Target. This will allow you to easily reference the same render target across multiple actors. However, if you wanted to have multiple canvases, you would have to manually create a render target for each canvas.

A better way to do this is to create render targets using Blueprints. The advantage to this is that you only create render targets as needed and they do not bloat your project files.

First, you will need to create the render target and store it as a variable for later use. Go to the Blueprints folder and open BP_Canvas. Locate Event BeginPlay and add the highlighted nodes.

unreal engine render target

Set Width and Height to 1024. This will set the resolution of the render target to 1024×1024. Higher values will increase image quality but at the cost of more video memory.
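
For a rough sense of the cost: assuming a typical 4-bytes-per-pixel format, a 1024×1024 render target takes about 4 MB of video memory, and doubling the resolution to 2048×2048 quadruples that to roughly 16 MB.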

unreal engine render target

Next is the Clear Render Target 2D node. You can use this node to set the color of your render target. Set Clear Color to (0.07, 0.13, 0.06). This will fill the entire render target with a greenish color.

unreal engine render target

Now you need to display the render target on the canvas mesh.

Displaying the Render Target

At the moment, the canvas mesh is using its default material. To display the render target, you need to create a dynamic instance of M_Canvas and supply the render target. Then, you need to apply the dynamic material instance to the canvas mesh. To do this, add the highlighted nodes:

unreal engine render target

First, go to the Create Dynamic Material Instance node and set Parent to M_Canvas. This will create a dynamic instance of M_Canvas.

unreal engine render target

Next, go to the Set Texture Parameter Value node and set Parameter Name to RenderTarget. This will pass in the render target to the texture parameter you created before.

unreal engine render target

Now the canvas mesh will display the render target. Click Compile and then go back to the main editor. Press Play to see the canvas change colors.

unreal engine render target

Now that you have your canvas, you need to create a material to act as your brush.

Creating the Brush Material

Navigate to the Materials folder. Create a material named M_Brush and then open it. First, set the Blend Mode to Translucent. This will allow you to use textures with transparency.

unreal engine render target

Just like the canvas material, you will also set the texture for the brush in Blueprints. Create a TextureSampleParameter2D and name it BrushTexture. Connect it like so:

unreal engine render target

Click Apply and then close M_Brush.

The next thing to do is to create a dynamic instance of the brush material so you can change the brush texture. Open BP_Canvas and then add the highlighted nodes.

unreal engine render target

Next, go to the Create Dynamic Material Instance node and set Parent to M_Brush.

unreal engine render target

With the brush material complete, you now need a function to draw the brush onto the render target.

Drawing the Brush to the Render Target

Create a new function and name it DrawBrush. First, you will need parameters for which texture to use, brush size and draw location. Create the following inputs:

  • BrushTexture: Set type to Texture 2D
  • BrushSize: Set type to float
  • DrawLocation: Set type to Vector 2D

unreal engine render target

Before you draw the brush, you need to set its texture. To do this, create the setup below. Make sure to set Parameter Name to BrushTexture.

unreal engine render target

Now you need to draw to the render target. To do this, create the highlighted nodes:

unreal engine render target

Begin Draw Canvas to Render Target will let the engine know you want to start drawing to the specified render target. Draw Material will then allow you to draw a material at the specified location, size and rotation.

Calculating the draw location is a two-step process. First, you need to scale DrawLocation to fit into the render target’s resolution. To do this, multiply DrawLocation by Size.

unreal engine render target

By default, the engine will draw materials using the top-left as the origin. This will lead to the brush texture not being centered on where you want to draw. To fix this, you need to divide BrushSize by 2 and then subtract the result from the previous step.
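
For example, on a 1024×1024 render target, a hit at UV coordinates (0.5, 0.5) scales to (512, 512). With a brush size of 256, you subtract 256 / 2 = 128 from each component, giving a final draw location of (384, 384) and centering the brush on the hit point.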

unreal engine render target

Afterwards, connect everything like so:

unreal engine render target

Finally, you need to tell the engine you want to stop drawing to the render target. Add an End Draw Canvas to Render Target node and connect it like so:

unreal engine render target

Now whenever DrawBrush executes, it will first set the texture for BrushMaterial to the supplied texture. Afterwards, it will draw BrushMaterial to RenderTarget using the supplied position and size.

That’s it for the drawing function. Click Compile and then close BP_Canvas. The next step is to perform a line trace from the camera and then paint the canvas if there was a hit.

Line Tracing From the Camera

Before you paint on the canvas, you will need to specify the brush texture and size. Go to the Blueprints folder and open BP_Player. Afterwards, set the BrushTexture variable to T_Brush_01 and BrushSize to 500. This will set the brush to a monkey image with a size of 500×500 pixels.

unreal engine render target

Next, you need to do the line trace. Locate InputAxis Paint and create the following setup:

unreal engine render target

This will perform a line trace going forward from the camera as long as the player is holding down the key binding for Paint (in this case, left-click).

Now you need to check if the line trace hit the canvas. Add the highlighted nodes:

unreal engine render target

Now if the line trace hits the canvas, the DrawBrush function will execute using the supplied brush variables and UV location.

Before the Find Collision UV node will work, you will need to change two settings. First, go to the LineTraceByChannel node and enable Trace Complex.

unreal engine render target

Second, go to Edit\Project Settings and then Engine\Physics. Enable Support UV From Hit Results and then restart your project.

unreal engine render target

Once you have restarted, press Play and left-click to paint onto the canvas.

unreal engine render target

You can even create multiple canvases and paint on each one separately. This is possible because each canvas dynamically creates its own render target.

unreal engine render target

In the next section, you will implement functionality so the player can change the brush size.

Changing Brush Size

Open BP_Player and locate the InputAxis ChangeBrushSize node. This axis mapping is set to use the mouse wheel. To change brush size, all you need to do is change the value of BrushSize depending on the Axis Value. To do this, create the following setup:

unreal engine render target

This will add or subtract from BrushSize every time the player uses the mouse wheel. The first multiply determines how fast to add or subtract. As a safety measure, a Clamp (float) node ensures the brush size does not go below 0 or above 1,000.

Click Compile and then go back to the main editor. Use the mouse wheel to change the brush size while you paint.

unreal engine render target

In the final section, you will create functionality to let the player change the brush texture.

Changing the Brush Texture

First, you will need an array to hold textures the player can use. Open BP_Player and then create an array variable. Set the type to Texture 2D and name it Textures.

unreal engine render target

Afterwards, create three elements in Textures. Set each of them to:

  • T_Brush_01
  • T_Brush_02
  • T_Brush_03

unreal engine render target

These are the textures the player will be able to paint. To add more textures, simply add them to this array.

Next, you need a variable to hold the current index in the array. Create an integer variable and name it CurrentTextureIndex.

unreal engine render target

Next, you need a way to cycle through the textures. For this tutorial, I have set up an action mapping called NextTexture set to right-click. Whenever the player presses this button, it should change to the next texture. To do this, locate the InputAction NextTexture node and create the following setup:

unreal engine render target

This will increment CurrentTextureIndex every time the player presses right-click. If the index reaches the end of the array, it will reset back to 0. Finally, BrushTexture is set to the appropriate texture.

Click Compile and then close BP_Player. Press Play and press right-click to cycle between the textures.

unreal engine render target

Where to Go From Here?

You can download the completed project using the link at the top or bottom of this tutorial.

Render targets are extremely powerful, and what you’ve learned in this tutorial only scratches the surface. If you’d like to learn more about what render targets can do, check out Content-Driven Multipass Rendering in UE4. In the video, you will see examples of flow map painting, volume painting, fluid simulation and more.

Also check out the live training for Blueprint Drawing to Render Targets to learn how to create a height map painter using render targets.

If there are any effects you’d like me to cover, let me know in the comments below!

The post Unreal Engine 4 Tutorial: Painting With Render Targets appeared first on Ray Wenderlich.

Kotlin – Podcast S08 E01

Kicking off Season 8 with Ellen Shapiro and Joe Howard from “The Kotlin Apprentice” talking about the differences between Kotlin and Swift and where Kotlin can be found.

[Subscribe in iTunes] [RSS Feed]

This episode is sponsored by PubNub.

Interested in sponsoring a podcast episode? We sell ads via Syndicate Ads, check it out!

Episode Links

  • Kotlin
  • Kotlin v. Swift presentations
  • Kotlin/Native
  • Kotlin iOS
  • Kotlin for Data Science
  • Server Side Kotlin
  • KotlinJS
  • Kotlin v. Dart/Flutter

Contact Us

Where To Go From Here?

We hope you enjoyed this episode of our podcast. Be sure to subscribe in iTunes to get notified when the next episode comes out.

We’d love to hear what you think about the podcast, and any suggestions on what you’d like to hear in future episodes. Feel free to drop a comment here, or email us anytime at podcast@raywenderlich.com.

The post Kotlin – Podcast S08 E01 appeared first on Ray Wenderlich.

Server Side Swift with Kitura Part 3: Linking Your iOS Client To Kitura

Part three of our new, free course, Server Side Swift with Kitura, is available today! If you ever wanted to extend your skills past developing for mobile devices, but didn’t have time to learn a new language, this is your chance.

In part three, you’ll learn how to use KituraKit to connect the EmojiJournal iOS app to your Kitura server.

Take a look at what’s inside:

Part 3: Linking Your iOS Client To Kitura

  1. Introduction To KituraKit: Time to work within iOS again! I’ll show you how KituraKit makes client-side connections to Kitura nice and straightforward, and how you can use it to drastically reduce the amount of code you write for networking on your client.
  2. Demo The iOS Starter: Let’s walk through what the iOS app does right now, and let’s highlight the pieces you need to get into and make work!
  3. Integrating KituraKit Using Cocoapods: I’ll help you set up KituraKit with Cocoapods on your iOS application, so that nothing stands in the way of you writing your networking code!
  4. Creating Your KituraKit Client: First, you’ll walk through writing a client class for your KituraKit client, so that you have easy functions to make use of when you are connecting your iOS app to your server.
  5. Challenge: Finishing Your Client: Now that you’ve made your client class, I’ll help you hook up a couple of the functions to the UI, and let you finish putting the puzzle together yourself once you have everything you need.
  6. Conclusion: This is the moment you knew you could be a full-stack developer – let’s test out your mobile application, and see how well it works with your shiny new Kitura server!

Where To Go From Here?

Want to check out the course? The first three parts of the course are ready for you today! The rest of the course will be released later this week, and the entire course will be available for free.

Stay tuned for more new and updated courses to come. I hope you enjoy the course! :]

The post Server Side Swift with Kitura Part 3: Linking Your iOS Client To Kitura appeared first on Ray Wenderlich.

How To Make a Custom Control Tutorial: A Reusable Knob

Update note: Lorenzo Boaro updated this tutorial for iOS 11, Xcode 9, and Swift 4. Sam Davies wrote the original tutorial.

How To Make a Custom Control Tutorial: A Reusable Knob

Custom UI controls are extremely useful when you need some new functionality in your app — especially when they’re generic enough to be reusable in other apps.

We have an excellent tutorial providing an introduction to custom UI Controls in Swift. That tutorial walks you through the creation of a custom double-ended UISlider that lets you select a range with start and end values.

This custom control tutorial takes that concept a bit further and covers the creation of a control kind of like a circular slider inspired by a control knob, such as those found on a mixer:

sound_desk_knob

UIKit provides the UISlider control, which lets you set a floating point value within a specified range. If you’ve used any iOS device, then you’ve probably used a UISlider to set volume, brightness, or any one of a multitude of other variables. Your project will have the same functionality, but in a circular form.

Getting Started

Use the Download Materials button at the top or bottom of this tutorial to download the starter project.

Go to ReusableKnob/Starter and open the starter project. It’s a simple single view application. The storyboard has a few controls that are wired up to the main view controller. You’ll use these controls later in the tutorial to demonstrate the different features of the knob control.

Build and run your project to get a sense of how everything looks before you dive into the code. It should look like this:

To create the class for the knob control, click File ▸ New ▸ File… and select iOS ▸ Source ▸ Cocoa Touch Class. On the next screen, specify the class name as Knob, subclass UIControl and make sure the language is Swift. Click Next, choose the ReusableKnob group and click Create.

Before you can write any code for the new control, you have to add it to your view controller.

Open Main.storyboard and select the view to the left of the label. In Identity Inspector, set the class to Knob like this:

Now create an outlet for your knob. In the storyboard, open the Assistant editor; it should display ViewController.swift.

To create the outlet, click the Knob and control-drag it right underneath the animateSwitch IBOutlet. Release the drag and, in the pop-up window, name the outlet knob then click Connect. You’ll use it later in the tutorial.

Switch back to the Standard editor and, in Knob.swift, replace the boilerplate class definition with the following code:

class Knob: UIControl {
  override init(frame: CGRect) {
    super.init(frame: frame)
    commonInit()
  }

  required init?(coder aDecoder: NSCoder) {
    super.init(coder: aDecoder)
    commonInit()
  }

  private func commonInit() {
    backgroundColor = .blue
  }
}

This code defines the two initializers and sets the background color of the knob so that you can see it on the screen.

Build and run your app and you’ll see the following:

With the basic building blocks in place, it’s time to work on the API for your control!

Designing Your Control’s API

The main reason for creating a custom UI control is to create a reusable component. It’s worth taking a bit of time up-front to plan a good API for your control. Developers should understand how to use your component from looking at the API alone, without browsing the source code.

Your API consists of the public functions and properties of your custom control.

In Knob.swift, add the following code to the Knob class above the initializers:

var minimumValue: Float = 0

var maximumValue: Float = 1

private (set) var value: Float = 0

func setValue(_ newValue: Float, animated: Bool = false) {
  value = min(maximumValue, max(minimumValue, newValue))
}

var isContinuous = true

  • minimumValue, maximumValue and value set the basic operating parameters for your control.
  • setValue(_:animated:) lets you set the value of the control programmatically, while the additional boolean parameter indicates whether or not the change in value should be animated. Because value can only be set between the limits of minimum and maximum, you make its setter private with the private (set) qualifier; see the clamping example after this list.
  • If isContinuous is true, the control calls back repeatedly as the value changes. If it’s false, the control calls back once after the user has finished interacting with it.
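
For example, with the default range of 0 to 1, calling setValue(1.5) stores a value of 1, while calling setValue(-0.25) stores 0.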

You’ll ensure that these properties behave appropriately later on in this tutorial.

Now, it’s time to get cracking on the visual design.

Setting the Appearance of Your Control

In this tutorial, you’ll use Core Animation layers.

A UIView is backed by a CALayer, which helps iOS optimize the rendering on the GPU. CALayer objects manage visual content and are designed to be incredibly efficient for all types of animations.

Your knob control will be made up of two CALayer objects: one for the track, and one for the pointer itself.

The diagram below illustrates the structure of your knob control:

CALayerDiagram

The blue and red squares represent the two CALayer objects. The blue layer contains the track of the knob control, and the red layer the pointer. When overlaid, the two layers create the desired appearance of a moving knob. The difference in coloring above is just for illustration purposes.

The reason to use two separate layers becomes obvious when the pointer moves to represent a new value. All you need to do is rotate the layer containing the pointer, which is represented by the red layer in the diagram above.

It’s cheap and easy to rotate layers in Core Animation. If you chose to implement this using Core Graphics and override drawRect(_:), the entire knob control would be re-rendered in every step of the animation. Since it’s a very expensive operation, it would likely result in sluggish animation.

To keep the appearance parts separate from the control parts, add a new private class to the end of Knob.swift:

private class KnobRenderer {
}

This class will keep track of the code associated with rendering the knob. That will add a clear separation between the control and its internals.

Next, add the following code inside the KnobRenderer definition:

var color: UIColor = .blue {
  didSet {
    trackLayer.strokeColor = color.cgColor
    pointerLayer.strokeColor = color.cgColor
  } 
}

var lineWidth: CGFloat = 2 {
  didSet {
    trackLayer.lineWidth = lineWidth
    pointerLayer.lineWidth = lineWidth
    updateTrackLayerPath()
    updatePointerLayerPath()
  }
}

var startAngle: CGFloat = CGFloat(-Double.pi) * 11 / 8 {
  didSet {
    updateTrackLayerPath()
  }
}

var endAngle: CGFloat = CGFloat(Double.pi) * 3 / 8 {
  didSet {
    updateTrackLayerPath()
  }
}

var pointerLength: CGFloat = 6 {
  didSet {
    updateTrackLayerPath()
    updatePointerLayerPath()
  }
}

private (set) var pointerAngle: CGFloat = CGFloat(-Double.pi) * 11 / 8

func setPointerAngle(_ newPointerAngle: CGFloat, animated: Bool = false) {
  pointerAngle = newPointerAngle
}

let trackLayer = CAShapeLayer()
let pointerLayer = CAShapeLayer()

Most of these properties deal with the visual appearance of the knob. The two CAShapeLayer properties represent the layers shown above. The color and lineWidth properties just delegate to the strokeColor and lineWidth of the two layers. You’ll see unresolved identifier compiler errors until you implement updateTrackLayerPath and updatePointerLayerPath in a moment.

Now add an initializer to the class right underneath the pointerLayer property:

init() {
  trackLayer.fillColor = UIColor.clear.cgColor
  pointerLayer.fillColor = UIColor.clear.cgColor
}

Initially you set the appearance of the two layers as transparent.

You’ll create the two shapes that make up the overall knob as CAShapeLayer objects. CAShapeLayer is a special subclass of CALayer that draws a bezier path using anti-aliasing and some optimized rasterization. This makes CAShapeLayer an extremely efficient way to draw arbitrary shapes.

Add the following two methods to the KnobRenderer class:

private func updateTrackLayerPath() {
  let bounds = trackLayer.bounds
  let center = CGPoint(x: bounds.midX, y: bounds.midY)
  let offset = max(pointerLength, lineWidth / 2)
  let radius = min(bounds.width, bounds.height) / 2 - offset
  
  let ring = UIBezierPath(arcCenter: center, radius: radius, startAngle: startAngle,
                          endAngle: endAngle, clockwise: true)
  trackLayer.path = ring.cgPath
}

private func updatePointerLayerPath() {
  let bounds = trackLayer.bounds
  
  let pointer = UIBezierPath()
  pointer.move(to: CGPoint(x: bounds.width - CGFloat(pointerLength)
    - CGFloat(lineWidth) / 2, y: bounds.midY))
  pointer.addLine(to: CGPoint(x: bounds.width, y: bounds.midY))
  pointerLayer.path = pointer.cgPath
}

updateTrackLayerPath creates an arc between the startAngle and endAngle values with a radius that ensures the pointer will fit within the layer, and positions it on the center of the trackLayer. Once you create the UIBezierPath, you use the cgPath property to set the path on the appropriate CAShapeLayer.

Since UIBezierPath has a more modern API, you use that to initially create the path, and then convert it to a CGPathRef.

updatePointerLayerPath creates the path for the pointer at the position where angle is equal to zero. Again, you create a UIBezierPath, convert it to a CGPathRef and assign it to the path property of your CAShapeLayer. Since the pointer is a straight line, all you need to draw the pointer are move(to:) and addLine(to:).

Note: If you need a refresher on drawing angles and other related concepts, check out our Trigonometry for Game Programming tutorial.

Calling these methods redraws the two layers. This must happen when you modify any of the properties used by these methods.

You may have noticed that the two methods for updating the shape layer paths rely on one more property which has never been set — namely, the bounds of each of the shape layers. Since you never set the CAShapeLayer bounds, they currently have zero-sized bounds.

Add a new method to KnobRenderer:

func updateBounds(_ bounds: CGRect) {
  trackLayer.bounds = bounds
  trackLayer.position = CGPoint(x: bounds.midX, y: bounds.midY)
  updateTrackLayerPath()

  pointerLayer.bounds = trackLayer.bounds
  pointerLayer.position = trackLayer.position
  updatePointerLayerPath()
}

The above method takes a bounds rectangle, resizes the layers to match and positions the layers in the center of the bounding rectangle. When you change a property that affects the paths, you must call the updateBounds(_:) method manually.

Although the renderer isn’t quite complete, there’s enough here to demonstrate the progress of your control. Add a property to hold an instance of your renderer to the Knob class:

private let renderer = KnobRenderer()

Replace the code of commonInit() method of Knob with:

private func commonInit() {
  renderer.updateBounds(bounds)
  renderer.color = tintColor
  renderer.setPointerAngle(renderer.startAngle, animated: false)

  layer.addSublayer(renderer.trackLayer)
  layer.addSublayer(renderer.pointerLayer)
}

The above method sets the knob renderer’s size, then adds the two layers as sublayers of the control’s layer.

Build and run your app, and your control should look like the one below:

Exposing Appearance Properties in the API

Currently, all of the properties which manipulate the look of the knob are hidden away in the private renderer.

To allow developers to change the control’s appearance, add the following properties to the Knob class:

var lineWidth: CGFloat {
  get { return renderer.lineWidth }
  set { renderer.lineWidth = newValue }
}

var startAngle: CGFloat {
  get { return renderer.startAngle }
  set { renderer.startAngle = newValue }
}

var endAngle: CGFloat {
  get { return renderer.endAngle }
  set { renderer.endAngle = newValue }
}

var pointerLength: CGFloat {
  get { return renderer.pointerLength }
  set { renderer.pointerLength = newValue }
}

The four properties are simple proxies for the properties in the renderer.

To test that the new API bits are working as expected, add this code to the end of viewDidLoad() in ViewController.swift:

knob.lineWidth = 4
knob.pointerLength = 12

Build and run again. You’ll see that the line thickness and the length of the pointer have both increased based on the values you just set:

Setting the Control’s Value Programmatically

The knob doesn’t actually do anything. In this next phase, you’ll modify the control to respond to programmatic interactions — that is, when the value property of the control changes.

At the moment, the value of the control is saved when the value property is modified directly or when you call setValue(_:animated:). However, there isn’t any communication with the renderer, and the control won’t re-render.

The renderer has no concept of value; it deals entirely in angles. You’ll need to update setValue(_:animated:) in Knob so that it converts the value to an angle and passes it to the renderer.

In Knob.swift, replace setValue(_:animated:) with the following code:

func setValue(_ newValue: Float, animated: Bool = false) {
  value = min(maximumValue, max(minimumValue, newValue))

  let angleRange = endAngle - startAngle
  let valueRange = maximumValue - minimumValue
  let angleValue = CGFloat(value - minimumValue) / CGFloat(valueRange) * angleRange + startAngle
  renderer.setPointerAngle(angleValue, animated: animated)
}

The code above works out the appropriate angle for the given value by mapping the minimum and maximum value range to the minimum and maximum angle range and sets the pointerAngle property on the renderer.
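
As a quick sanity check: with the default value range of 0 to 1 and the default angles of -11π/8 and 3π/8, a value of 0.5 maps to 0.5 × (14π/8) + (-11π/8) = -π/2, which points the pointer straight up at the middle of the track.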

Note you’re just passing the value of animated to the renderer, but nothing is actually animating at the moment — you’ll fix this later.

Although the pointerAngle property is being updated, it doesn’t yet have any effect on your control. When the pointer angle is set, the layer containing the pointer should rotate to the specified angle to give the impression that the pointer has moved.

Update setPointerAngle(_:animated:) as follows:

func setPointerAngle(_ newPointerAngle: CGFloat, animated: Bool = false) {
  pointerLayer.transform = CATransform3DMakeRotation(newPointerAngle, 0, 0, 1)

  pointerAngle = newPointerAngle
}

This simply creates a rotation transform which rotates the layer around the z-axis by the specified angle.

The transform property of CALayer expects to be passed a CATransform3D, not a CGAffineTransform like UIView. This means that you can perform transformations in three dimensions.

CGAffineTransform uses a 3×3 matrix and CATransform3D uses a 4×4 matrix; the addition of the z-axis requires the extra values. At their core, 3D transformations are simply matrix multiplications. You can read more about matrix multiplication in this Wikipedia article.

To demonstrate that your transforms work, you’re going to link the UISlider with the knob control in the view controller. As you adjust the slider, the value of the knob will change.

The UISlider has already been linked to handleValueChanged(_:). Open ViewController.swift and add the following to that method:

knob.setValue(valueSlider.value)

Now the knob value is set to match the valueSlider as it slides.

Build and run. Now, change the value of the UISlider and you’ll see the pointer on the knob control move to match as shown below:

Despite the fact that you haven’t started coding any of the animations yet, your control is animating. Why?

Core Animation is quietly calling implicit animations on your behalf. When you change certain properties of CALayer — including transform — the layer animates smoothly from the current value to the new value.

Now try sliding quickly from the end to the start. Rather than rotating counter-clockwise, the pointer will rotate clockwise over the end of the track, and into the bottom. That’s not what you want!

To solve this, you need to disable these animations. Update setPointerAngle(_:animated:) by replacing the CATransform3DMakeRotation line with:

CATransaction.begin()
CATransaction.setDisableActions(true)

pointerLayer.transform = CATransform3DMakeRotation(newPointerAngle, 0, 0, 1)

CATransaction.commit()

You wrapped the property change in a CATransaction and disabled animations for that transaction.

Build and run once more. You’ll see that as you move the UISlider, the knob follows instantaneously, and the knob moves predictably.

Animating Changes to the Control’s Value

Currently, setting the animated parameter to true has no effect on your control. To enable this bit of functionality, add the following to setPointerAngle(_:animated:) just below the CATransform3DMakeRotation call and before the commit:

if animated {
  let midAngleValue = (max(newPointerAngle, pointerAngle) - min(newPointerAngle, pointerAngle)) / 2 
    + min(newPointerAngle, pointerAngle)
  let animation = CAKeyframeAnimation(keyPath: "transform.rotation.z")
  animation.values = [pointerAngle, midAngleValue, newPointerAngle]
  animation.keyTimes = [0.0, 0.5, 1.0]
  animation.timingFunctions = [CAMediaTimingFunction(name: kCAMediaTimingFunctionEaseInEaseOut)]
  pointerLayer.add(animation, forKey: nil)
}

Now when animated is true, you create an explicit animation that rotates the pointer in the correct direction. In order to specify the rotation direction, you use a keyframe animation. That’s simply an animation where you specify some in-between points in addition to the usual start and end points.

You create a CAKeyframeAnimation and specify that the property to animate is the rotation around the z-axis with transform.rotation.z as its keypath.

Next, in animation.values, you specify three angles through which the layer should rotate: the start point, mid-point and end point. Along with that, there’s the array animation.keyTimes specifying the normalized times (as percentages) at which to reach those values. Adding the animation to the layer ensures that once the transaction is committed the animation will start.

To see this new functionality in action, you’ll need the knob to jump to a value. To do this, you’ll implement the method wired up to the Random Value button to cause the slider and knob controls to move to a random value.

Open ViewController.swift and add the following to handleRandomButtonPressed(_:):

let randomValue = Float(arc4random_uniform(101)) / 100.0
knob.setValue(randomValue, animated: animateSwitch.isOn)
valueSlider.setValue(Float(randomValue), animated: animateSwitch.isOn)

The above generates a random value between 0.00 and 1.00 and sets the value on both controls. It then inspects the isOn property of animateSwitch to determine whether or not to animate the transition to the new value.

Build and run. Now tap the Random Value button a few times with the animate switch toggled on, then tap the Random Value button a few times with the animate switch toggled off to see the difference the animated parameter makes.

Updating the Label

Next you’ll populate the label to the right of the knob with its current value. Open ViewController.swift and add this method below the two @IBAction methods:

func updateLabel() {
  valueLabel.text = String(format: "%.2f", knob.value)
}

This will show the current value selected by the knob control. Next, call this new method at the end of both handleValueChanged(_:) and handleRandomButtonPressed(_:) like this:

updateLabel()

Finally, update the initial value of the knob and the label to be the initial value of the slider so that they are all in sync when the app starts. Add the following code to the end of viewDidLoad():

knob.setValue(valueSlider.value)
updateLabel()

Build and run, and perform a few tests to make sure the label shows the correct value.

Responding to Touch Interaction

The knob control you’ve built responds only to programmatic interaction, but that alone isn’t terribly useful for a UI control. In this final section, you’ll see how to add touch interaction using a custom gesture recognizer.

Apple provides a set of pre-defined gesture recognizers, such as tap, pan and pinch. However, there’s nothing to handle the single-finger rotation you need for your control.

Add a new private class to the end of Knob.swift:

import UIKit.UIGestureRecognizerSubclass

private class RotationGestureRecognizer: UIPanGestureRecognizer {
}

This custom gesture recognizer will behave like a pan gesture recognizer. It will track a single finger dragging across the screen and update the location as required. For this reason, it subclasses UIPanGestureRecognizer.

The import is necessary so you can override some gesture recognizer methods later.

Note: You might be wondering why you’re adding all these private classes to Knob.swift rather than the usual one-class-per-file. For this project, it makes it easy to distribute just a single file to anyone who wants to use this simple control.

Add the following property to your RotationGestureRecognizer class:

private(set) var touchAngle: CGFloat = 0

touchAngle represents the angle of the line which joins the current touch point to the center of the view to which the gesture recognizer is attached, as demonstrated in the following diagram:

GestureRecogniserDiagram

There are three methods of interest when subclassing UIGestureRecognizer: they represent the time that the touches begin, the time they move and the time they end. You’re only interested in when the gesture starts and when the user’s finger moves on the screen.

Add the following two methods to RotationGestureRecognizer:

override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent) {
  super.touchesBegan(touches, with: event)
  updateAngle(with: touches)
}

override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent) {
  super.touchesMoved(touches, with: event)
  updateAngle(with: touches)
}

Both of these methods call through to their super equivalent, and then call a utility function which you’ll add next:

private func updateAngle(with touches: Set<UITouch>) {
  guard 
    let touch = touches.first, 
    let view = view 
  else {
    return
  }
  let touchPoint = touch.location(in: view)
  touchAngle = angle(for: touchPoint, in: view)
}

private func angle(for point: CGPoint, in view: UIView) -> CGFloat {
  let centerOffset = CGPoint(x: point.x - view.bounds.midX, y: point.y - view.bounds.midY)
  return atan2(centerOffset.y, centerOffset.x)
}

updateAngle(with:) takes the set of touches and extracts the first one. It then uses location(in:) to translate the touch point into the coordinate system of the view associated with this gesture recognizer. Finally, it updates the touchAngle property using angle(for:in:), which uses some simple geometry to find the angle, as demonstrated below:

AngleCalculation

x and y represent the horizontal and vertical positions of the touch point within the control. The tangent of the touch angle is equal to h / w. To calculate touchAngle, all you need to do is establish the following lengths:

  • h = y - (view height) / 2.0 (since the angle should increase in a clockwise direction)
  • w = x - (view width) / 2.0

angle(for:in:) performs this calculation for you, and returns the angle required.
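
For instance, a touch at the far right of the control, level with its center, gives h = 0 and a positive w, so atan2 returns 0. A touch at the bottom center gives w = 0 and a positive h, so atan2 returns π/2. The angle therefore increases clockwise, which is exactly what you want in UIKit's flipped coordinate system.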

Note: If this math makes no sense, refer to our old friend, the Trigonometry for Game Programming tutorial.

Finally, your gesture recognizer should work with one touch at a time. Add the following initializer to the class:

override init(target: Any?, action: Selector?) {
  super.init(target: target, action: action)

  maximumNumberOfTouches = 1
  minimumNumberOfTouches = 1
}

Wiring Up the Custom Gesture Recognizer

Now that you’ve completed the custom gesture recognizer, you just need to wire it up to the knob control.

In Knob, add the following to the end of commonInit():

let gestureRecognizer = RotationGestureRecognizer(target: self, action: #selector(Knob.handleGesture(_:)))
addGestureRecognizer(gestureRecognizer)

This creates a recognizer, specifies it should call Knob.handleGesture(_:) when activated, then adds it to the view. Now you need to implement that action!

Add the following method to Knob:

@objc private func handleGesture(_ gesture: RotationGestureRecognizer) {
  // 1
  let midPointAngle = (2 * CGFloat(Double.pi) + startAngle - endAngle) / 2 + endAngle
  // 2
  var boundedAngle = gesture.touchAngle
  if boundedAngle > midPointAngle {
    boundedAngle -= 2 * CGFloat(Double.pi)
  } else if boundedAngle < (midPointAngle - 2 * CGFloat(Double.pi)) {
    boundedAngle -= 2 * CGFloat(Double.pi)
  }
  
  // 3
  boundedAngle = min(endAngle, max(startAngle, boundedAngle))

  // 4
  let angleRange = endAngle - startAngle
  let valueRange = maximumValue - minimumValue
  let angleValue = Float(boundedAngle - startAngle) / Float(angleRange) * valueRange + minimumValue

  // 5
  setValue(angleValue)
}

This method extracts the angle from the custom gesture recognizer, converts it to the value represented by this angle on the knob control, and then sets the value to trigger the UI updates.

Here’s what’s happening in the code above:

  1. You calculate the angle which represents the mid-point between the start and end angles. This is the angle which is not part of the knob track, and instead represents the angle at which the pointer should flip between the maximum and minimum values.
  2. The angle calculated by the gesture recognizer will be between -π and π, since it uses the inverse tangent function. However, the angle required for the track should be continuous between the startAngle and the endAngle. Therefore, create a new boundedAngle variable and adjust it to ensure that it remains within the allowed ranges.
  3. Update boundedAngle so that it sits inside the specified bounds of the angles.
  4. Convert the angle to a value, just as you converted it in setValue(_:animated:) earlier.
  5. Set the knob control's value to the calculated value.

Build and run your app. Play around with your knob control to see the gesture recognizer in action. The pointer will follow your finger as you move it around the control :]

Sending Action Notifications

As you move the pointer around, you'll notice that the UISlider doesn't update. You'll wire this up using the target-action pattern which is an inherent part of UIControl.

Open ViewController.swift and add the following code at the end of viewDidLoad():

knob.addTarget(self, action: #selector(ViewController.handleValueChanged(_:)), for: .valueChanged)

Here you're listening for value-changed events.

Now replace the contents of handleValueChanged(_:) with:

if sender is UISlider {
  knob.setValue(valueSlider.value)
} else {
  valueSlider.value = knob.value
}
updateLabel()

If the user changes the value on the knob, you update the slider. If they change the slider, you update the knob. You continue to update the label in either case.

Build and run. Now move the knob around and...nothing has changed. Whoops. You haven't actually fired the event from within the knob control itself.

To fix that, inside the Knob class, add the following code to the end of handleGesture(_:):

if isContinuous {
  sendActions(for: .valueChanged)
} else {
  if gesture.state == .ended || gesture.state == .cancelled {
    sendActions(for: .valueChanged)
  }
}

If isContinuous is true, then the event should be fired every time that the gesture sends an update, so call sendActions(for:).

If isContinuous is false, then the event should only fire when the gesture ends or is cancelled.

Since the control is only concerned with value changes, the only event you need to handle is UIControlEvents.valueChanged.

Build and run again. Move the knob once again and you'll see the UISlider move to match the value on the knob. Success!

Where to Go From Here?

Congrats, your knob control is now fully functional and you can drop it into your apps.

You can download the final version of the project using the Download Materials button at the top or bottom of this tutorial.

However, there are still a lot of ways to improve your control:

  • Add extra configuration options to the appearance of the control — you could allow an image to be used for the pointer.
  • Ensure that a user can only interact with the control if their first touch is on the pointer.
  • At the moment, if you resize the knob control, the layers won't be re-rendered. You can add this functionality with just a few lines of code.

These suggestions are quite good fun, and will help you hone your skills with the different features of iOS you've encountered in this tutorial. You can also apply what you've learned in other controls that you build.

To learn how to make another custom UIControl, check out this tutorial on making a reusable UISlider.

I'd love to hear your comments or questions in the forums below!

The post How To Make a Custom Control Tutorial: A Reusable Knob appeared first on Ray Wenderlich.

Text Recognition with ML Kit

Text Recognition with ML Kit

At Google I/O 2018, Google announced a new library, ML Kit, for developers to easily leverage machine learning on mobile. With it, you can now add some common machine learning functionality to your app without necessarily being a machine learning expert!

In this tutorial, you’ll learn how to setup and use Google’s ML Kit in your Android apps by creating an app to open a Twitter profile from a picture of that profile’s Twitter handle. By the end you will learn:

  • What ML Kit is, and what it has to offer
  • How to set up ML Kit with your Android app and Firebase
  • How to run text recognition on-device
  • How to run text recognition in the cloud
  • How to use the results from running text recognition with ML Kit

Note: This tutorial assumes you have basic knowledge of Kotlin and Android. If you’re new to Android, check out our Android tutorials. If you know Android, but are unfamiliar with Kotlin, take a look at Kotlin For Android: An Introduction.

Before we get started, let’s first take a look at what ML Kit is.

What is ML Kit?

Machine learning gives computers the ability to “learn” through a process which trains a model with a set of inputs that produce known outputs. By feeding a machine learning algorithm a bunch of data, the resulting model is able to make predictions, such as whether or not a cute cat is in a photo. When you don’t have the help of awesome libraries, the machine learning training process takes lots of math and specialized knowledge.

Google provided TensorFlow and TensorFlow Lite for mobile so that developers could create their own machine learning models and use them in their apps. This helped a tremendous amount in making machine learning more approachable; however, it still feels daunting to many developers. Using TensorFlow still requires some knowledge of machine learning, and often the ability to train your own model.

In comes ML Kit! There are many common cases to use machine learning in mobile apps, and they often include some kind of image processing. Google has already been using machine learning for some of these things, and has made their work available to us through an easy to use API. They built ML Kit on top of TensorFlow Lite, the Cloud Vision API, and the Neural Networks API so that we developers can take advantage of models for:

  • Text recognition
  • Face detection
  • Barcode scanning
  • Image labeling
  • Landmark recognition

Google has plans to include more APIs in the future, too! Having these options, you’re able to implement intelligent features into your app without needing to understand machine learning, or training your own models.

In this tutorial, you’re going to focus on text recognition. With ML Kit, you can provide an image, and then receive a response with the text found in the image, along with the text’s location in the image. Text recognition is one of the ML Kit APIs that can run both locally on your device and also in the cloud, so we will look at both. Some of the other APIs are only supported on one or the other.

Time to get started! :]

Getting Started

Have you ever taken a photo of someone’s Twitter handle so you could find them later? The sample app you will be working on, TwitterSnap, allows you to select a photo from your device, and run text recognition on it.

You will first work to run the text recognition locally on the device, and then follow that up with running in the cloud. After any recognition completes, a box will show up around the detected Twitter handles. You can then click these handles to open up that profile in a browser, or the Twitter app if you have it installed.

Start by downloading the starter project. You can find the link at the top or bottom of this tutorial. Then open the project in Android Studio 3.1.2 or greater by going to File > New > Import Project, and selecting the build.gradle file in the root of the project.

There’s one main file you will be working with in this tutorial, and that’s MainActivityPresenter.kt. It’s pretty empty right now, with just a couple of helper methods. This file is for you to fill in! You’ll also need to add some things to build.gradle and app/build.gradle, so make sure you can find these too.

Once the starter project finishes loading and building, run the application on a device or emulator.

Starter project

You can select an image by clicking the camera icon FloatingActionButton in the bottom corner of the screen. If you don’t have an image on hand that has a Twitter handle, feel free to download the one below. You can also go to Twitter and take a screenshot.

Sample image

Once you have the image selected, you can see it in view, but not much else happens. It’s time for you to implement the ML Kit functionality to make this app fun! :]

Selected image

Setting up Firebase

ML Kit uses Firebase, so we need to set up a new app on the Firebase console before we move forward. To create the app, you need a unique app ID. In the app/build.gradle file, you’ll find a variable named uniqueAppId.

def uniqueAppId = "<YOUR_ID_HERE>"

Replace that string with something unique to you. You can make it your name, or something funny like “tinypurplealligators”. Make sure it’s all lowercase, with no spaces or special characters. And don’t forget what you picked! You’ll need it again soon.

Try running the app to make sure it’s okay. You’ll end up with two installs on your device, one with the old app ID, and one with the new one. If this bothers you, feel free to uninstall the old one, or uninstall both since future steps will reinstall this one.

Moving on to the Firebase Console

Open the console, and make sure you’re signed in with a Google account. From there, you need to create a new project. Click on the “Add project” card.

Add project button

In the screen that pops up, you have to provide a project name. Input TwitterSnap for the name of this project. While you’re there, choose your current country from the dropdown, and accept the terms and conditions.

Add project window

You should then see the project ready confirmation screen, on which you should hit Continue.

Project ready confirmation

Now that you have the project set up, you need to add it to your Android app. Select Add Firebase to your Android app.

Add Firebase to your Android app

On the next screen, you need to provide the package name. This will use the app ID you changed in the app/build.gradle file. Enter com.raywenderlich.android.twittersnap.YOUR_ID_HERE
being sure to replace YOUR_ID_HERE with the unique id you provided earlier. Then click the Register App button.

Register app

After this, you’ll be able to download a google-services.json file. Download it and place this file in your app/ directory.

Download google-services.json

Finally, you need to add the required dependencies to your build.gradle files. In the top level build.gradle, add the google services classpath in the dependencies block:

classpath 'com.google.gms:google-services:3.3.1'

The Firebase console may suggest a newer version number, but go ahead and stick with the numbers given here so that you’ll be consistent with the rest of the tutorial.

Next, add the Firebase Core and Firebase ML vision dependencies to app/build.gradle in the dependencies block.

implementation 'com.google.firebase:firebase-core:15.0.2'
implementation 'com.google.firebase:firebase-ml-vision:15.0.0'

Add this line to the bottom of app/build.gradle to apply the Google Services plugin:

apply plugin: 'com.google.gms.google-services'

Sync the project Gradle files, then build and run the app to make sure it’s all working. Nothing will change in the app itself, but once you’ve completed the steps for adding Firebase, you should see the app activated in the Firebase console. You’ll then also see your app on the Firebase console Overview screen.

Firebase console overview

Enabling Firebase for in-cloud text recognition

There are a couple of extra steps to complete in order to run text recognition in the cloud. You’ll do them now, as the changes take a couple of minutes to propagate and become usable on your device. They should then be ready by the time you get to that part of the tutorial.

On the Firebase console for your project, ensure you’re on the Blaze plan instead of the default Spark plan. Click on the plan information in the bottom left corner of the screen to change it.

Modify Firebase plan

You’ll need to enter payment information for Firebase to proceed with the Blaze plan. Don’t worry: the first 1000 requests are free, so you won’t have to pay anything while following this tutorial. You can switch back to the free Spark plan when you’ve finished the tutorial.

If you’re hesitant to put in payment information, you can just stick with the Spark plan while following the on-device steps in the remainder of the tutorial, and then just skip over the instruction steps below for using ML Kit in the cloud. :]

Firebase pricing plans

Next, you need to enable the Cloud Vision API. Choose ML Kit in the console menu at the left.

ML Kit menu

Next, choose Cloud API Usage on the resulting screen.

Cloud API Usage link

This takes you to the Cloud Vision API screen. Make sure to select your project at the top, and click the Enable button.

Cloud Vision API

In a few minutes, you’ll be able to run your text recognition in the cloud!

Detecting text on-device

Now you get to dig into the code! You’ll start by adding functionality to detect text in the image on-device.

You have the option to run text recognition on both the device and in the cloud, and this flexibility allows you to use what is best for the situation. If there’s no network connectivity, or you’re dealing with sensitive data, running the model on-device might be better.

In MainActivityPresenter.kt, find the runTextRecognition() method to fill in. Add this code to the body of runTextRecognition(). Use Alt + Enter on PC or Option + Return on a Mac to import any missing dependencies.

view.showProgress()
val image = FirebaseVisionImage.fromBitmap(selectedImage)
val detector = FirebaseVision.getInstance().visionTextDetector

This starts by signaling to the view to show the progress so you have a visual cue that work is being done. Then you instantiate two objects: a FirebaseVisionImage from the bitmap passed in, and a FirebaseVisionTextDetector that you can use to detect the text.

Now you can use that detector to detect the text in the image. Add the following code to the same runTextRecognition() method, below the code you added previously. One method call, processTextRecognitionResult(), isn’t implemented yet, so it will show an error for now. Don’t worry, you’ll implement it next.

detector.detectInImage(image)
    .addOnSuccessListener { texts ->
      processTextRecognitionResult(texts)
    }
    .addOnFailureListener { e ->
      // Task failed with an exception
      e.printStackTrace()
    }

Using the detector, you detect the text by passing in the image. The method detectInImage() takes two callbacks, one for success, and one for error. In the success callback, you have that method you still need to implement.

Once you have the results, you need to use them. Create the following method:

private fun processTextRecognitionResult(texts: FirebaseVisionText) {
  view.hideProgress()
  val blocks = texts.blocks
  if (blocks.size == 0) {
    view.showNoTextMessage()
    return
  }
}

At the start of this method, you tell the view to stop showing the progress, then check whether there is any text to process by looking at the size of the text’s blocks property.

Once you know you have text, you can do something with it. Add the following to the bottom of the processTextRecognitionResult() method:

blocks.forEach { block ->
  block.lines.forEach { line ->
    line.elements.forEach { element ->
      if (looksLikeHandle(element.text)) {
        view.showHandle(element.text, element.boundingBox)
      }
    }
  }
}

The results come back in a nested structure so you can process them at whatever granularity you want. The hierarchy for on-device recognition is block > line > element > text. You iterate through each of these, check whether the text looks like a Twitter handle using a regular expression in the helper method looksLikeHandle(), and show it if it does. Each of these elements has a boundingBox for where ML Kit found the text in the image. This is what the app uses to draw a box around where each detected handle is.
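
The looksLikeHandle() helper is pre-baked in the starter project, so it isn’t shown here. As a rough sketch of what such a check might look like (the starter’s actual implementation may differ), assuming the usual Twitter handle rules of 1 to 15 letters, digits or underscores:

// Hypothetical sketch; the starter project's actual regex may differ.
private val handleRegex = Regex("@\\w{1,15}")

private fun looksLikeHandle(text: String) = handleRegex.matches(text)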

Now build and run the app, select an image containing Twitter handles, and see the results! If you tap on one of these results, it will open the Twitter profile. :]

On-device results

You can click the above screenshot to see it full size and verify that the bounding boxes are surrounding Twitter handles. :]

As a bonus, the view also has a generic showBox(boundingBox: Rect?) method. You can use this at any stage of the loop to show the outline of any of these groups. For example, in the line forEach, you can call view.showBox(line.boundingBox) to show boxes for all the lines found. Here’s what it would look like if you did that with the line element:

Showing line elements
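
If you want to try that, here’s a minimal sketch of the loop with everything except the debug call stripped out:

blocks.forEach { block ->
  block.lines.forEach { line ->
    // Outline every detected line, not just the Twitter handles
    view.showBox(line.boundingBox)
  }
}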

Detecting text in the cloud

After you run the on-device text recognition, you may have noticed that the image on the FAB changes to a cloud icon. This is what you’ll tap to run the in-cloud text recognition. Time to make that button do some work!

When running text recognition in the cloud, you receive more detailed and accurate predictions. You also avoid doing all that extra processing on-device, saving some of that power. Make sure you completed the Enabling Firebase for in-cloud text recognition section above so you can get started.

The first method you’ll implement is very similar to what you did for the on-device recognition. Add the following code to the runCloudTextRecognition() method:

view.showProgress()
// 1
val options = FirebaseVisionCloudDetectorOptions.Builder()
    .setModelType(FirebaseVisionCloudDetectorOptions.LATEST_MODEL)
    .setMaxResults(15)
    .build()
val image = FirebaseVisionImage.fromBitmap(selectedImage)
// 2
val detector = FirebaseVision.getInstance()
    .getVisionCloudDocumentTextDetector(options)
detector.detectInImage(image)
    .addOnSuccessListener { texts ->
      processCloudTextRecognitionResult(texts)
    }
    .addOnFailureListener { e ->
      e.printStackTrace()
    }

There are a couple small differences from what you did for on-device recognition.

  • The first is that you’re including some extra options for your detector. By building these options with a FirebaseVisionCloudDetectorOptions builder, you’re saying that you want the latest model and that you want to limit the results to 15.
  • When you request a detector, you’re also specifying that you want a FirebaseVisionCloudDocumentTextDetector (matching the getVisionCloudDocumentTextDetector() call), which you pass those options to. You handle the success and failure cases in the same way as on-device.

You will be processing the results similarly to before, but diving a little deeper, using information that comes back from the in-cloud processing. Add the following nested class and helper functions to the presenter:

class WordPair(val word: String, val handle: FirebaseVisionCloudText.Word)

private fun processCloudTextRecognitionResult(text: FirebaseVisionCloudText?) {
  view.hideProgress()
  if (text == null) {
    view.showNoTextMessage()
    return
  }
  text.pages.forEach { page ->
    page.blocks.forEach { block ->
      block.paragraphs.forEach { paragraph ->
        paragraph.words
            .zipWithNext { a, b ->
              // 1
              val word = wordToString(a) + wordToString(b)
              // 2
              WordPair(word, b)
            }
            .filter { looksLikeHandle(it.word) }
            .forEach {
              // 3
              view.showHandle(it.word, it.handle.boundingBox)
            }
      }
    }
  }
}

private fun wordToString(
    word: FirebaseVisionCloudText.Word): String =
    word.symbols.joinToString("") { it.text }

If you look at the results, you’ll notice something: the structure of the text recognition result is slightly different than when it runs on the device. The accuracy of the cloud model can provide more detailed information that wasn’t available before. The hierarchy you’ll see is page > block > paragraph > word > symbol. Because of this, you need to do a little extra work to process it.

  • With the granularity of these results, the “@” and the other characters of a handle come back as separate words. Because of that, you take each word, create a string from it using its symbols in wordToString(), and concatenate each neighboring word; see the sketch after this list.
  • The new class you see, WordPair, is a way to give names to the pair of objects: the concatenated string you just created, and the Firebase word object for the handle.
  • From there, you display the results the same way as in the on-device code.
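
If zipWithNext is unfamiliar, here’s a minimal, self-contained sketch of how the pairing and concatenation work, using plain strings in place of the Firebase word objects:

fun main() {
  // zipWithNext pairs each element with its right-hand neighbor:
  // ("@", "raywenderlich") and ("raywenderlich", "rocks")
  val words = listOf("@", "raywenderlich", "rocks")
  val pairs = words.zipWithNext { a, b -> a + b }
  println(pairs) // [@raywenderlich, raywenderlichrocks]
}

In the real code, only “@raywenderlich” would survive the looksLikeHandle() filter, paired with the Firebase word object whose boundingBox the view uses for the tap target.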

Build and run the project, and test it out! After you pick an image and run the recognition on the device, click the cloud icon in the bottom corner to run the recognition in the cloud. You may see results that the on-device recognition missed!

In-cloud results

Again, you can use showBox(boundingBox: Rect?) at any level of the loops to see what this level detects. This has boxes around every paragraph:

In-cloud extra results

Where to go from here?

Congratulations! You can now detect text from an image using ML Kit on both the device and in the cloud. Imagine the possibilities for where you can use this and other parts of ML Kit!

Feel free to download the completed project to check it out. Find the download link at the top or bottom of this tutorial.

Note: You must complete the “Setting up Firebase” and “Enabling Firebase for in-cloud text recognition” sections in order for the final project to work. Remember to also go back to the Firebase Spark plan if you upgraded to the Blaze plan in the Firebase console.

ML Kit doesn’t stop here! You can also use it for face detection, barcode scanning, image labeling, and landmark recognition with similar ease. Be on the lookout for possible future additions to ML Kit as well; Google has talked about adding APIs for face contour and smart replies.

Feel free to share your feedback and findings, or ask any questions, in the comments below or in the forums. I hope you enjoyed getting started with text recognition using ML Kit!

Happy coding!

The post Text Recognition with ML Kit appeared first on Ray Wenderlich.
