

Getting Started with Android Wear with Kotlin


Android Wear is a version of the Android operating system designed for smartwatches. It lets you check notifications, track your fitness, answer phone calls and much more – all without pulling out your phone!

It even lets you check the time. Mind blowing, right?

In this Android Wear tutorial we’ll cover the basics of developing for Android Wear by creating a simple recipe app. In the process, you’ll learn:

  • How to send messages from your watch to your phone and vice versa
  • How to share code between your app module and your Wear module
  • About the wearable support library
  • How to package and ship your new Wear app

A Brief History of Android Wear

Google officially announced Android Wear in the spring of 2014. Soon after, LG and Samsung released the first two Wear watches – the LG G Watch and the Samsung Gear Live. Lots of updates ensued, and in the summer of 2015 Google decided to spread the love and released the Android Wear app on iOS, allowing iPhone users to pair Android Wear watches with their phones. This gives you, our soon-to-be expert Wear app maker, twice the potential audience!

aw yeah!

The latest update in our Wear saga came in February of 2017, when Google announced Wear 2.0. This update, among other things, has paved the way for standalone Wear apps. Standalone Wear apps are watch apps without a companion app on the phone. It also introduced a dedicated Wear app store on the watch to find new apps. Nice!

Connecting to a Wear Device

Before you get started, you’ll need to connect your Wear device to your computer so you can debug the app.

Note: If you don’t own an Android Wear device, don’t worry! You can still follow this tutorial with a watch AVD. If you choose this option, you can simply follow the very detailed step-by-step tutorial on the official documentation and skip directly to the next section.

The first thing you’ll want to do is enable developer options on your watch. To do this, navigate to Settings -> System -> About on your watch or emulator and scroll down until you see the build number.

Android Wear settings

Now tap the build number 7 times.

Wait, what?

Yep. Just like on a normal Android Phone, you unlock developer options by tapping the build number 7 times.

Next, go back to the settings screen and tap the Developer options menu item. Make sure ADB debugging is turned on. If you see an option for USB debugging, turn that on as well. Otherwise, turn on Debug over Bluetooth.

ADB Debugging

Now that you’ve got ADB debugging all set up, you need to actually connect your Wear device to your computer. If you saw the USB debugging option in the developer options list, congratulations! You should be able to just plug your watch into your computer (via the charging cord) and be ready to go. Otherwise, you need to connect over Bluetooth.

Debugging over Bluetooth

To debug over Bluetooth, you’ll need an Android Wear watch paired with an Android phone. Make sure both the watch and the phone have ADB debugging enabled. Next, open the Android Wear app on your phone and navigate to settings.

Android Wear App Settings

Scroll down to the Debugging over Bluetooth section and make sure the Device to Debug is set to your paired watch. Next, enable the Debugging over Bluetooth toggle. At this point, there should be a subtitle under the toggle that says Host: Disconnected and Target: Disconnected.

Android Wear companion app settings

But don’t you worry – they won’t stay disconnected for long! Open a terminal and enter the command

adb forward tcp:4444 localabstract:/adb-hub

followed by

adb connect 127.0.0.1:4444

Accept the ADB debugging prompt on your phone and you’re good to go! At this point, the Android Wear app on your phone should show Connected for both the host and the target.
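If you want to double-check the connection at any point, ask ADB to list the devices it can see:

adb devices

Your watch should appear in the output as 127.0.0.1:4444 with the state device.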

This process can be a bit thorny – if you run into any problems, you can check out the official documentation.

Getting Started

Start by downloading the WEARsmyrecipe starter project here.

Unzip then import the project in Android Studio 3.0.1 or later.

Android Studio welcome dialog

If you see a message to update the project’s Gradle plugin since you’re using a later version of Android Studio, then go ahead and choose “Update”.

Wait for the Gradle sync to complete.

On the top left-hand side of Android Studio, make sure Android is selected in the project view dropdown – this will give you a nice view of the project structure.

Project structure

You can see two modules now:

  1. The mobile module, which is where the phone app code lives.
  2. A new wear module which is where the Wear app code lives.

The code is pretty basic right now – there are a few helper files in place to show a list of recipes on the phone app, but that’s about it. Don’t worry, you’ll change that soon.

Run the mobile app. You may need to make sure the mobile configuration is set in the configuration dropdown. You should see a screen similar to this on your phone:

Mobile app

Nothing too crazy going on here – just a simple list of Recipes. (You can tell by the list that I’m an expert chef! Just kidding, I recently burned a can of soup. Turns out you need to take the soup out of the can. Who knew!)

Next you’ll run the Wear app. To point Android Studio towards the Wear module, you need to change the run configuration from mobile to Wear. Click the run configuration dropdown next to the run button. It should say “mobile” right now. Click the Wear option to change to the Wear configuration:

Configurations

Now that you’re using the Wear run configuration, click the run button and select your Wear device – you should see a screen similar to this:

Hello world app

If you have a round watch (like the Moto360) you may see something closer to this:

Hello world app round

And that cutoff text is just a darn shame. But you’re going to fix it!

Building a Simple Recipe Layout

Android Wear devices come in lots of shapes and sizes. There are three main form factors you need to consider as a Wear developer:

  • Square watches
  • Round watches
  • Round watches with a chin

By default, all Wear devices treat the root layout of the view as a square, even if the device itself has a circular form factor. That means your views can be clipped if they sit at the edges of the screen. Here’s a few examples:

Clipping examples

Luckily, there’s a handy dandy support widget you can use to work around the clipping issue!

Using the BoxInsetLayout Widget

The BoxInsetLayout is a top-level widget that can box its children into a square that will fit inside a round screen. If your code is running on a square screen, it has no effect. You can define which sides to box in by using the app:boxedEdges attribute on a direct child of the BoxInsetLayout. The possible values are left, right, top, bottom, and all. You can combine different values too – so app:boxedEdges="left|top|bottom" is totally legal.

Now that you’ve got the idea down, open the wear/res/layout/activity_meal.xml file and replace its contents with the following:

<android.support.wear.widget.BoxInsetLayout
  xmlns:android="http://schemas.android.com/apk/res/android"
  xmlns:app="http://schemas.android.com/apk/res-auto"
  android:layout_width="match_parent"
  android:layout_height="match_parent">

  <LinearLayout
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    app:boxedEdges="all">

    <TextView
      android:layout_width="wrap_content"
      android:layout_height="wrap_content"
      android:text="Hello World!"/>
  </LinearLayout>
</android.support.wear.widget.BoxInsetLayout>

Here’s what’s happening in this new layout:

  • The top level layout is now a BoxInsetLayout
  • The BoxInsetLayout has one child – a LinearLayout
  • That LinearLayout has the app:boxedEdges="all" layout attribute, meaning that this view will be boxed in on all sides.

Note: In the preview tab, you can change the device type used to render the preview.
Feel free to change to Wear Round or Wear Square to see how the BoxInsetLayout works.

Preview tab

Run the Wear app again. You should see that the text is no longer being clipped, and the screen now looks like this:

Fixed hello world round

Just for fun, you can set the background of the LinearLayout to gray to see where the bounding box is.
Add android:background="@android:color/darker_gray" to the LinearLayout. If you run the app again you should see the following:
Fixed

Since you specified app:boxedEdges="all", the box is bounded on all four sides. Cool stuff!

Fleshing out the Recipe Layout

Replace the contents of the wear/res/layout/activity_meal.xml file you just edited with the following:

<android.support.wear.widget.BoxInsetLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

  <LinearLayout
      android:layout_width="match_parent"
      android:layout_height="match_parent"
      android:orientation="vertical"
      android:padding="8dp"
      app:boxedEdges="all">

    <TextView
        android:id="@+id/mealTitle"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Meal title"
        android:textSize="18sp"
        android:textStyle="bold"/>

    <TextView
        android:id="@+id/calories"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:paddingTop="8dp"
        android:text="Number of calories"/>

    <TextView
        android:id="@+id/ingredients"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:paddingTop="8dp"
        android:text="Ingredients"/>
  </LinearLayout>
</android.support.wear.widget.BoxInsetLayout>

Nothing too crazy going on here – you added three new TextViews that will contain the recipe’s title, calories and ingredients. You’ll update them soon, so don’t worry about the placeholder values.

Run the watch app now; you should see a screen like this:

Meal layout

Sharing Code Between the Watch and the Phone

When you create a wearable app you’ll want to share code between the phone and watch apps. The app you’re creating has a Meal model that should be shared across both apps. You can accomplish this by using a shared module.

In the toolbar, click File -> New -> New Module

New module

Choose a Java Library.
A Java library contains no Android references. If you wanted to include drawable files or other Android files, you would instead choose the Android library option.

Note: Ideally you’d create a Kotlin library instead of a Java library. But this is Android land, and that would be WAY too easy. Android Studio doesn’t have the option to create a pre-configured Kotlin module yet.

New module

Name the module shared and name the class Meal. You can leave the Create .gitignore file option checked.

Add shared module

Click Finish.

Gradle will run a sync and if you’ve done the right Gradle dance it will succeed!
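Behind the scenes, Android Studio also registers the new module in your project’s settings.gradle file. If the sync fails, open settings.gradle and check that it now lists all three modules, something like:

include ':mobile', ':wear', ':shared'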

However, we now have a Java library. Not a Kotlin library. And let’s be serious – who uses Java anymore? ;]

Navigate to Gradle Scripts/build.gradle for the shared module:

shared build.gradle

Replace the contents with the following code:

apply plugin: 'java-library'
apply plugin: 'kotlin'

dependencies {
  compile fileTree(dir: 'libs', include: ['*.jar'])
  compile "org.jetbrains.kotlin:kotlin-stdlib-jre7:$kotlin_version"
}

sourceCompatibility = "1.7"
targetCompatibility = "1.7"

The code above adds Kotlin support to the new module.

Importing the Shared Library

Now that you’ve got a helpful shared library, it’s time to actually share that library.

Open the build.gradle file for your mobile app:

mobile build.gradle

In the dependencies block, add the following line: compile project(':shared')

Your dependencies block should now look like this:

dependencies {
  compile fileTree(dir: 'libs', include: ['*.jar'])
  compile project(':shared')
  compile "org.jetbrains.kotlin:kotlin-stdlib-jre7:$kotlin_version"
  compile 'com.google.android.gms:play-services-wearable:11.6.0'
  compile "com.android.support:support-v4:$support_version"
  compile "com.android.support:appcompat-v7:$support_version"
  compile "com.android.support:recyclerview-v7:$support_version"
  compile "com.android.support:cardview-v7:$support_version"
  compile 'com.android.support.constraint:constraint-layout:1.0.2'
  compile 'com.google.code.gson:gson:2.8.2'
  androidTestCompile('com.android.support.test.espresso:espresso-core:2.2.2', {
    exclude group: 'com.android.support', module: 'support-annotations'
  })
  testCompile 'junit:junit:4.12'
}

The compile project method is the way to include a local module in your project.
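Note: compile is deprecated in favor of implementation on newer versions of the Android Gradle plugin. The starter project uses compile throughout, which still works here, but the equivalent modern line would be:

implementation project(':shared')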

Now you need to do the same thing for the watch app.
Open the build.gradle file for your Wear app:

wear build.gradle

Just like before, in the dependencies block, add the compile project(':shared') line.

The dependencies block of the Wear app should now look like this:

dependencies {
  compile fileTree(dir: 'libs', include: ['*.jar'])
  compile project(':shared')
  compile "org.jetbrains.kotlin:kotlin-stdlib-jre7:$kotlin_version"
  compile "com.android.support:support-v4:$support_version"
  compile "com.android.support:wear:$support_version"
  compile 'com.google.android.gms:play-services-wearable:11.6.0'
  compile 'com.google.android.support:wearable:2.1.0'
  provided 'com.google.android.wearable:wearable:2.1.0'
  compile 'com.google.code.gson:gson:2.8.2'
}

This recipe app is really HEATING up! Because you cook with heat. And the app has to do with meals. Which you cook. With heat. Why does no one understand my witty humor?

Adding the Meal Class

Your shared library contains one class – a Meal model written in Java. However, your mobile module already contains an even better Meal class written in Kotlin. That’s the model you want to share – so go ahead and delete the Java Meal class in your shared library:

Delete Meal class

Click OK on the delete dialog:

Delete dialog

BOOM! No more Java.

Now drag the Kotlin Meal class from your mobile module to your shared module:

Refactor meal class

Click the Refactor button in the popup. Now the phone app is using the Meal class from the shared module.
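For reference, the Meal model that both apps now share is a Kotlin data class roughly along these lines – the exact definition lives in the starter project, and this sketch is only inferred from how the class is used later in the tutorial:

data class Meal(
  val title: String,             // shown in the mealTitle TextView
  val calories: Int,             // shown in the calories TextView
  val ingredients: List<String>, // joined into the ingredients TextView
  val favorited: Boolean         // set to true when you like a meal on the watch
)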

Sending Messages Between Devices

Now that both the watch app and the phone app know about your Meal class, it’s time to pass some data around.



Phone apps communicate with Wear apps via the Message API and the Data API.

The Data API should be used for messages you need delivered. If the system can’t immediately deliver it, it will queue it up until the watch is connected to the phone.

Data API - good

The Message API, on the other hand, should be used for short messages that you don’t mind losing. If the system can’t immediately deliver the message it won’t try again and that will be that. Your message will be dropped on the floor.

Data API - bad

Using the Message API

You’re now going to use the Message API to send a recipe from your phone to your watch.

First, open the MealListActivity file. Add the following code to your imports:

import com.google.android.gms.common.api.GoogleApiClient
import com.google.android.gms.wearable.Node

The Message and Data APIs both use the GoogleApiClient system under the hood, so that’s why you’re importing the GoogleApiClient. A Node is fancy speak for a wearable device.

Under the adapter property declaration add the following two properties:

private lateinit var client: GoogleApiClient
private var connectedNode: List<Node>? = null

One is for your GoogleApiClient and the other is for your Nodes. There could be multiple connected watches (cooouuuullllldd be….) so that’s why it’s a List of Nodes.

Next, make the MealListActivity implement the GoogleApiClient.ConnectionCallbacks interface.

class MealListActivity : AppCompatActivity(),
    MealListAdapter.Callback,
    GoogleApiClient.ConnectionCallbacks {

When you connect to the GoogleApiClient, the ConnectionCallbacks will provide you with a callback to store your nodes.
Now, you need to implement two methods – onConnected and onConnectionSuspended. Add the following below your onCreate method:

override fun onConnected(bundle: Bundle?) {
  Wearable.NodeApi.getConnectedNodes(client).setResultCallback {
    connectedNode = it.nodes
  }
}

override fun onConnectionSuspended(code: Int) {
  connectedNode = null
}

The onConnected method gets called once the GoogleApiClient connects. At that point, you want to get all of the Nodes from the Wearable.NodeApi and save them in your list.

onConnectionSuspended is called when the GoogleApiClient you’re using gets disconnected. In this scenario you no longer have access to your Nodes (wearable devices) so you clear out your connectedNode list.

Next, in your onCreate method, add the following:

client = GoogleApiClient.Builder(this)
    .addApi(Wearable.API)
    .addConnectionCallbacks(this)
    .build()
client.connect()

Here you’re building up a GoogleApiClient that has access to the Wearable API. You’ll use this client shortly to actually send messages to the watch!

You’ll notice that there’s a stub for the mealClicked method in your activity. Replace that with the following:

override fun mealClicked(meal: Meal) {
  val gson = Gson()
  connectedNode?.forEach { node ->
    val bytes = gson.toJson(meal).toByteArray()
    Wearable.MessageApi.sendMessage(client, node.id, "/meal", bytes)
  }
}

This method uses Gson to serialize your meal. It then uses the MessageApi.sendMessage method to send the meal to your watch. The "/meal" string is a path that the receiving side can use to filter incoming messages; you can ignore it for this tutorial.
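If your app ever sends more than one kind of message, the receiving side can check that path to decide how to handle each message. Here’s a minimal sketch of what that filtering could look like on the watch – you’ll add the real listener (without filtering) in the next section:

Wearable.MessageApi.addListener(client) { messageEvent ->
  // Only react to messages sent on the /meal path; ignore everything else.
  if (messageEvent.path == "/meal") {
    // Deserialize messageEvent.data into a Meal here.
  }
}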

Alright – onto the watch!

Listening for Messages

Good news: your phone app is sending messages! Bad news: your watch isn’t receiving any messages.

sad dog

But that’s all about to change.

The code you’re going to add to your watch app is very similar to the code you just added to your phone app.

To start, open the MealActivity class in your Wear module.

Add the following import: import kotlinx.android.synthetic.main.activity_meal.*.
This will allow you to reference your views without using all that old-school findViewById junk!

Next, add the following two properties to your activity:

private lateinit var client: GoogleApiClient
private var currentMeal: Meal? = null

One is your now-familiar GoogleApiClient, which you’ll use to listen for messages. The other is the current meal being displayed.

Next make your activity implement the GoogleApiClient.ConnectionCallbacks interface. Then, add the following code below your onCreate method:

override fun onConnected(bundle: Bundle?) {
  Wearable.MessageApi.addListener(client) { messageEvent ->
    currentMeal = Gson().fromJson(String(messageEvent.data), Meal::class.java)
    updateView()
  }
}

override fun onConnectionSuspended(code: Int) {
  Log.w("Wear", "Google Api Client connection suspended!")
}

private fun updateView() {
  currentMeal?.let {
    mealTitle.text = it.title
    calories.text = getString(R.string.calories, it.calories)
    ingredients.text = it.ingredients.joinToString(separator = ", ")
  }
}

The updateView() method is pretty simple – it looks at the current Meal and updates your view accordingly.

The onConnectionSuspended method isn’t doing too much. You don’t have anything to clear out when the connection ends.

The onConnected method is where the magic is. Once the GoogleApiClient has connected, you add a MessageListener to listen for new Message API events from the phone. In the callback, you do the opposite of what you did on the phone’s side: the MessageEvent object has a data property, and you use Gson to deserialize that ByteArray back into a Meal.

Finally, initialize your GoogleApiClient in onCreate:

client = GoogleApiClient.Builder(this)
    .addConnectionCallbacks(this)
    .addApi(Wearable.API)
    .build()
client.connect()

Boom! Your Wear app is listening for messages from your phone.

Testing the App

First run the mobile app on your phone. After that, run the Wear app on your watch.

Now, do a rain dance. Followed by a little prayer. Followed by an offering of chocolates to the Android gods.

Then tap the Apple Pie list item on your phone app.

If everything runs smoothly, you should see this screen on your watch:

Result on Watch device

Using the Data API

This app is already pretty hot, but it’s time to make it a bit spicier. Maybe throw some red pepper on there.

You’re going to add a star button to your watch layout so you can favorite specific meals.

Open up the activity_meal.xml file in your wear module.

Add the following widget as the last item in your LinearLayout:

<ImageView
    android:id="@+id/star"
    android:layout_width="wrap_content"
    android:layout_height="0dp"
    android:layout_gravity="center"
    android:layout_weight="1"
    android:src="@drawable/ic_star_border_black_24dp"
    android:tint="@android:color/white"/>

You just added a simple ImageView with a black border star. This will be your “like” button. The height is set to 0dp and the layout_weight is set to 1 so the star fills the rest of the screen.

In your MealActivity class, add the following method:

private fun sendLike() {
  currentMeal?.let {
    val bytes = Gson().toJson(it.copy(favorited = true)).toByteArray()
    Wearable.DataApi.putDataItem(client, PutDataRequest.create("/liked").setData(bytes).setUrgent())
  }
}

Here’s the breakdown of the new method: First it creates a copy of your meal with the favorited flag set to true. Then it serializes that new copy into a ByteArray. Next it creates a PutDataRequest. You can think of a PutDataRequest as the DataApi version of a Message. Why didn’t they call it something like…DataItem? Again – that’d be too easy. Finally, the method sends that request on the /liked path with the ByteArray attached as the data.

You may also notice the setUrgent call. You can toggle that option to gently encourage the system to deliver the PutDataRequest as fast as possible.

Next, add the following code in your MealActivity onCreate method:

star.setOnClickListener {
  sendLike()
}

Now your Wear app is sending Data API items to your mobile app.

Listening for Data Items

Next up is adding code to your mobile app to listen for Data API items.

Open your MealListActivity class. In the onConnected method, add the following code after the connectedNode = it.nodes line:

Wearable.DataApi.addListener(client) { data ->
  val meal = Gson().fromJson(String(data[0].dataItem.data), Meal::class.java)
  adapter?.updateMeal(meal)
}

This code is very similar to the Message code you added previously. It adds a DataListener to the DataApi. The DataListener deserializes the ByteArray contained in the DataItem. Then it makes a call to the adapter to update the newly favorited meal.

Do a few more rain dances and run the mobile app and the Wear app.

Send one of the recipes to the watch again by tapping a recipe list item.

Once the recipe makes it to the watch, tap the star. If everything went well, you should see a black star appear next to that list item on the phone – like so:

starred recipe

After running that test, try sending a new recipe to the watch and putting your phone in airplane mode. Wait a few seconds and then tap the like button on the watch again. Then take your phone out of airplane mode. Once the phone pairs to the watch again, you should see the item starred!

Adding a Confirmation View

One nice thing about developing for Wear is that it comes with a few juicy animations built in. You’re going to take advantage of that by adding a ConfirmationActivity to your Wear app.

First, add the following import to the top of MealActivity in the wear module:

import android.support.wearable.activity.ConfirmationActivity

Then, add a new method in your MealActivity class:

private fun showConfirmationScreen() {
  val intent = Intent(this, ConfirmationActivity::class.java)
  intent.putExtra(
      ConfirmationActivity.EXTRA_ANIMATION_TYPE,
      ConfirmationActivity.SUCCESS_ANIMATION
  )
  intent.putExtra(
      ConfirmationActivity.EXTRA_MESSAGE,
      getString(R.string.starred_meal)
  )
  startActivity(intent)
}

ConfirmationActivity is a built-in activity. Specifically, it’s a fullscreen activity that shows a checkmark and then disappears.

The method above creates an Intent to launch the ConfirmationActivity with two extras.

  • EXTRA_ANIMATION_TYPE dictates the animation type.
  • EXTRA_MESSAGE is used to show a small text message below the animation.

Next up, you need to trigger the animation. So, in the sendLike method, replace the putDataItem line with the following:

Wearable.DataApi.putDataItem(
    client,
    PutDataRequest.create("/liked")
        .setData(bytes)
        .setUrgent()
).setResultCallback {
  showConfirmationScreen()
}

The only difference is that after the putDataItem call, it adds a ResultCallback that runs once the put request completes. When it does, you make a call to show the confirmation.

Try it out on your watch. Now, once you send a like for a recipe, you should see the following view:

ConfirmationActivity

Uploading your Wear App to the Play Store

When Android Wear first came out, the only way to get a Wear app to users was to embed the APK in your mobile app. Wear 2.0 changed that. Now you can upload your Wear app to the Play Store in exactly the same way you’d upload a normal phone app.

The only requirement is that you have the following line in your Wear app’s manifest:

<uses-feature android:name="android.hardware.type.watch" />
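For context, here’s a minimal sketch of where that element sits in the Wear module’s AndroidManifest.xml – the package name below is a placeholder, so use your own:

<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.wearsmyrecipe">

  <!-- Marks this APK as a watch app for Play Store distribution -->
  <uses-feature android:name="android.hardware.type.watch" />

  <application>
    <!-- activities and other components -->
  </application>
</manifest>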

As long as you have that line, your Wear app will show up on the Wear Play Store.

Your Wear app will also be installed whenever a user downloads your phone app from the Play Store.

Where To Go From Here?

Here is the download for the final project.

In this Android Wear tutorial, you learned:

  • how to design for both round and square watches
  • how to communicate between the mobile and the Wear device, using both the Message and Data API
  • and how to show some snazzy animations right out of the box!

There is a lot more to learn and do with Android Wear! If you’re interested in learning more about Android Wear development, check out the official documentation.

  • You can build standalone apps with Wear-specific UI, new interactions and gestures.
  • Also, you can create new Watch Faces.
  • Or you can add Voice Capabilities to control your app with your voice!
  • And many more cool features!

If you have any questions, or just want to share your favorite food puns, join the discussion below!



Creator of Pixaki and Full-Time Indie iOS Dev: A Top Dev Interview With Luke Rogers


Creator of Pixaki app, Luke Rogers

Welcome to another installment of our Top App Dev Interview series!

Each interview in this series focuses on a successful mobile app or developer and the path they took to get where they are today. Today’s special guest is Luke Rogers.

Luke is the creator of the famous pixel art app Pixaki, which has been very successful over the years. Aside from this, you can spot Luke at many conferences speaking about his indie story.

Indie Developer

Luke, you have been an indie iOS developer for some time now. Can you tell me how it all started?

I was at university studying Computer Science when the iPhone came out — it was a pretty exciting time to be starting out in the tech industry. I also realized while I was studying that I really wanted to do something entrepreneurial with my life.

However, when I graduated, I got a job writing CSS and HTML for template websites — it was pretty soul-destroying! I only lasted a few months before I quit and decided to go full-time indie. I had no plan and no savings but somehow managed to stumble through for 18 months before I had to get a “real job” again. That was round one of being self-employed.

The big problem for most developers wanting to go indie full time is making the leap. Can you tell me how you managed this?

I created the first version of Pixaki on evenings and weekends while working a full-time job. Initially, I thought it would only take a few months, but it actually took two years to get to version 1!

I was hoping that once the app was released it would generate enough money that I could quit my job and be a full-time indie, but while it was somewhat successful it wasn’t anywhere near enough to live on. I’ve since learned to always run the numbers and do an in-depth analysis before starting a new project, rather than just hoping for the best.

Version 2 was a complete Swift re-write!

So I kept my job, but I kept working on Pixaki — version 2 turned into a complete rewrite and I moved the code from Objective-C to Swift. The real turning point was being made redundant, and I can honestly say it’s one of the best things that’s happened to me!

Rather than look for another full-time job, I decided to go back to freelancing. This time was very different though — I went with a much higher and more sustainable hourly rate, plus I had money saved from Pixaki sales and it was still generating money each month. This softened the blow of suddenly not having a regular monthly income, but it still felt like a big risk.

Being self-employed again has enabled me to spend so much more time on Pixaki, which has really changed everything. After the first few months, I released Pixaki 2, then I began working on Pixaki 3. Version 3 added a huge number of new features, it was the first paid upgrade, and I increased the price from $8.99 to $24.99.

Sales have been going really well, and now Pixaki accounts for about two-thirds of the income I need each month, which means that I can be far less dependent on client work. It’s a great position to be in, and it’s all built on the foundations set when I started the project one evening over six years ago.

Sales increasing with version 3 of Pixaki.

What’s your daily schedule like?

I work from home, but every morning I still “walk to work”. I walk the same route every day and I don’t listen to any music or podcasts, but I use the time to think about what I need to be working on today. I also like to think about the bigger picture and consider what my plans are for Pixaki in the months to come and what my next project will be.

After my walk, the day generally looks like this:

  • 9:00-9:30: start working.
  • 12:00-13:00: take an hour for lunch.
  • 17:00-17:30: finish working for the day.

I’ve tried working really long hours in the past, but it’s so draining that I think I’m actually more productive by not trying to work too hard. I don’t have much of a set daily routine, but I like to write out a to-do list each day and work my way through that.

In general, I’ll start with the less appealing tasks first, and then when I’m starting to flag I can switch to something more exciting.

Work-Life Balance

Procrastination is a real problem for everyone, how do you fight the battle of distractions?

I work from home, so there’s definitely a constant battle to stay focused. I always have a timer running on my laptop that tells me when I’ve been idle for 5 minutes or more, so I can keep track of how many hours in the day I’m actually working. Closing apps and tabs in Safari helps too — I have Mail closed most of the time, and only open it a couple of times a day to check my emails.

Luke’s workspace at home.

The most dangerous type of distraction I find, though, is the desire to start a new project. Something like Twitter might steal a few minutes from your day, but ditching your current unfinished project to start something new that you also don’t finish can take away months. And because you’re still doing work, it’s much easier to tell yourself that you are being productive.

The first time I was a full-time indie developer, pretty much all I did was start something and then a few months later move onto the next project without finishing anything. I spent a year and a half doing this with very little to show at the end of it all. So now I’m very cautious about starting new projects; I have a process where I weigh up how viable the idea is, which I run through before I start anything.

I’ve decided that Pixaki will be my primary focus for the next couple of years at least, and while I’m considering what will come next, I’m in no hurry to make a start.

Can you list any tools you use that help with your indie development?

One of the best tools I have is a notebook and pencil. I use it for my to-do lists, but also designing and decision making; I think there’s a tremendous value in stepping away from technology when you can. I have a Leuchtturm1917 dotted notebook which is great for writing and designing, and a Pentel mechanical pencil.

In terms of software, I use:

  • Xcode – For making Pixaki.
  • Sketch – For doing any design resources.
  • Tower – This is a git GUI.
  • Harvest – The timer I mentioned earlier.

What is your ultimate advice for being an indie developer?

Think long-term and keep persevering. It took me a few years to realise that success is not going to happen suddenly, and I think that’s probably true for the vast majority of people. When Pixaki was seeing limited success, I considered giving up on it and moving on to a new project on many occasions, but I’m so glad I stuck with it. And I’m going to keep working on it to grow the product and see where I can take it.

I hope for Pixaki to still be around in ten or twenty years time, so everything I do is with that in mind. Often that means writing my own code that I know I’ll be able to maintain rather than using a third-party library. I also try to keep the app modern without getting too caught up in the latest fashions of app design; there are very few blur and vibrancy effects, for example, and I think the app will age more gracefully because of things like that.

Luke’s vision to allow the app to age gracefully.

If you could change anything about being an indie iOS developer what would it be and why?

I think we’re incredibly fortunate to have a platform for iOS to develop for. It’s easy to find fault when it’s what you work with all day every day, but looking at it objectively it’s a fantastic platform.

The thing that makes me the most nervous about building a business around an iOS app, though, is how much control Apple has. Given that they own both the platform and the sole distribution channel for apps, any changes that they make in the future could have a massive impact on many businesses.

So far it’s been good though, and there have been some nice changes to the App Store recently, which is encouraging. I’d like to see them slow down how quickly iOS changes from year to year too, as just keeping up with the platform is a lot of work, but I don’t see that happening anytime soon!

Pixaki

Can you tell me the ultimate success story of Pixaki? How did it all start and how have you managed great success?

Success is an interesting term because it can be measured in so many ways and it’s always relative. There’s obviously a financial success, but also success in terms of influence within the pixel art community, and success in terms of equipping others to create amazing things. I struggle to think of the app as successful because I know where I want to take it and it feels like I’m just getting started, but looking at where I’ve come from I can see that it has achieved success in a lot of ways.

I started Pixaki because I wanted to make pixel art on my iPad but I didn’t really like the look of any of the other apps that were out there — I’m very fussy when it comes to apps! If I started a project like this now, I’d do a lot more market analysis first and take the time to run the numbers. I’ve learnt a lot about running a business in the last few years, and in hindsight, I don’t think I made life very easy for myself. But a combination of learning these business skills and sheer determination has led me to the point that I’m at now.

Pixaki in action!

What’s the thought-process for building new features for Pixaki, is it ultimately user feedback or do you have a personal backlog of features to implement in the future?

User feedback is driving things a lot at the moment. I have a spreadsheet where I collate all of the requests that come in and order the requested features by popularity, which has become my backlog. There are also features that I’d like to add that maybe aren’t the most requested, but are important for the direction I want to take the product in.

This way of working means that I’m not that quick to implement the latest features in iOS because my customers aren’t requesting them, but I don’t think that’s a bad thing necessarily. I’ve released 3 major updates to Pixaki 3 so far, and I’ve got another 5 planned which should keep me busy for 2018!

For me, it’s all about the people. I love to attend conferences for making new connections and getting different perspectives on things. It’s nice to have people to talk to about the world of app development too.

What’s the process for releasing new features and how do you keep the quality control high on Pixaki?

I have a great group of beta testers. In the early days I was just recruiting anyone I knew with an iPad to help with testing, but over the years as the product has become more established, I’ve managed to recruit some of my most loyal users to help with testing. I’m very grateful for these people — they volunteer their time to help make the app better because they believe in the product and want to see me succeed. It’s really amazing, and they’ve played a huge part in making the app what it is today.

I really enjoy obsessing over details, which helps when trying to make a high-quality product. I don’t want to release anything that I think is only “good enough”, so I’ll happily iterate five or ten times on a particular aspect of the app until I’m happy with it.

I’ve found having long beta testing periods has been useful — Pixaki 3 was in beta for 9 months before release. There’s definitely more I’d like to do in terms of having a process for maintaining the quality, though.

Lots of folks would like to see Pixaki on the Mac, any signs of this happening in the future?

Yes! It’s currently in active development. There’s still quite a way to go, but I’m really excited about the product it’s turning into. I love the Mac, I do nearly all of my work on a Mac and I know a lot of other people do too, so I think it will be really great for people working on large projects and those who just prefer to work on a desktop. I am hoping to release at some point in 2018. (If anyone would like to help with beta testing, please email me at luke@rizer.co).

Pixaki in action on the Mac, credits to Jason Tammemagi.

Where To Go From Here?

And that concludes our Top App Dev Interview with Luke Rogers. Huge thanks to Luke for sharing his journey with the iOS community :]

I hope you enjoyed reading about Luke’s journey with Pixaki; he’s a clear example of one of the few truly indie iOS developers in our community.

Staying clear of distractions is clearly key to Luke’s determination to make Pixaki a successful product. I hope you can take away some of his tips and use them in your own workflow.

If you are an app developer with a hit app or game in the top 100 in the App store, we’d love to hear from you. Please drop us a line anytime. If you have a request for any particular developer you’d like to hear from, please join the discussion in the forum below!


Baby It’s iPhone Time! Merry Christmas Song



As you may or may not know, we have a tradition where each year, we sing a silly Christmas song about a geeky topic.

This year, we have made a song titled “Baby It’s iPhone Time”, sung to the tune of “Baby It’s Cold Outside.” We hope you enjoy – and we apologize in advance for our singing! :]

Lyrics

I really can’t pay (but baby, it’s iPhone time)
I don’t like space grey (but baby, there’s silver too)
This keynote has been (please get your preorder in)
So very nice (that iphone 8 just won’t suffice)
Face ID is a slight drawback (hey murphy’s law gave us a setback)
Cause privacy is under attack (even sansa stark couldn’t hack)
So really, I think I’ll pass (hey look at that sexy glass)
But maybe I’ll just look at the store (if you like pixels, we’ve got some more)
This camera looks new (truedepth to focus you)
Say what’s this big poo (that’s animoji for you)
I wish I knew why (you like them, please don’t deny)
Craig’s neighing now (hey just be glad he’s not a cow)
I ought to say not this time (it’s so good we skipped the 9)
But damn that iPhone looks fine (and now you’re flat broke without a dime)
I really should wait (You’re changing my tone)
But it’s iPhone time!

Credits

But Wait, There’s More!

If you enjoyed this video, we have a few more videos you might want to check out:

That’s It!

Have a Merry Christmas and very happy New Year everyone, and thanks so much for reading this site! :]


Updated Course: Testing in iOS


Testing in iOS

A few weeks ago, we released an update to our Networking with URLSession course, which brings everything in our Getting Started with iOS category up to date for iOS 11, Swift 4, and Xcode 9. If you’re ready to move on, and learn about how to automate the process of ensuring your apps are functioning as expected, Testing in iOS is for you!

In this 28-video course, you’ll get started writing basic unit tests, and then pick up some more advanced techniques. You’ll also explore UI testing to test your app’s user interface. And of course, this course uses the latest version of Swift, iOS, and Xcode as well!

Let’s have a look at what’s inside.

Part 1: Unit Testing Basics

In this first part, you’ll learn about the structure of unit tests, and strategies for writing and refactoring your tests.

Part 1

This section contains 11 videos:

  1. Introduction: In this video, you’ll get an overview of what will be covered in this first section and why testing is important.
  2. Getting Started: Learn the basics of adding unit tests to an already existing project.
  3. Importing Modules: Oftentimes, unit tests need to access code from other modules. Learn how to do this using @testable.
  4. Test Case Structure: This video covers the structure of unit tests and what is expected when you write them.
  5. Running Your First Test: At long last, it’s time to run your first test. As with most things in Xcode, there are many ways to do this.
  6. Challenge: Writing Your First Test: Now that you have experience writing a unit test, go ahead and try writing another one.
  7. Fixing Your Second Test: Writing unit tests often means fixing unit tests. This video will walk you through the process.
  8. Red Green Refactor: There are many strategies for writing unit tests and one of them is called “Red/Green/Refactor”. This video walks you through it.
  9. Challenge: Write Red Green Refactor Test: Your challenge is to write another unit test but this time, adhering to the Red Green Refactor method.
  10. Optimizing Tests: Tests can get messy. In this video, you’ll do a little cleanup to make your tests cleaner and more understandable.
  11. Conclusion: This reviews what it means to write unit tests and reviews some essential strategies.

Part 2: Advanced Unit Testing Techniques

In part 2, you’ll pick up some more advanced techniques, including testing asynchronous methods, using mock objects, and testing performance.

Part 2

This section contains 9 videos:

  1. Introduction: This video gives you an overview of some of the advanced techniques that you’ll be learning in this section.
  2. Asynchronous Testing: This video covers some strategies when testing methods that return results over an indeterminate time.
  3. XCTWaiter and Expectations: The XCTWaiter and expectations allow you to wait for a result in your unit tests.
  4. Challenge: Adding XCTWaiter: In this challenge, you’ll add an XCTWaiter to your unit test.
  5. Mocking: This video explores relationships that you can establish between objects.
  6. Mocking Tests: Writing tests for your mock objects takes a little bit of reworking. This video walks you through the process.
  7. Code Coverage: Knowing what is tested and what is not, is as important as writing the tests. Thankfully, Xcode provides code coverage reports.
  8. Performance Testing: Sometimes you’ll want to test how well a method performs, and for that, you use a special unit test: a performance test.
  9. Conclusion: This video reviews the section and reminds you about some strategies to keep in mind.

Part 3: UI Testing

In the final part of this course, learn how to test your app’s user interface with UI testing.

Part 3

This section contains 8 videos:

  1. Introduction: UI tests allow you to test the user interface of your iOS app. Xcode allows you to automate the process.
  2. Recording a UI Test: Thankfully, creating a UI test is as simple as clicking a button and recording your actions.
  3. Refactoring Your UI Test: This video covers how to take recorded UI actions and convert them into bonafide UI tests.
  4. Challenge: Write Your Own UI Test: Now that you have an idea on how to write UI tests, your challenge is to write one on your own.
  5. Queries: Queries are used to fetch items like navigation bar and buttons. This video covers the process of how it works.
  6. Multidevice UI Testing: Some UI tests on iPhones won’t work on iPads. This video gives you strategies for dealing with the issue.
  7. Challenge: About Screen Test: Your next challenge is to write a test for the about screen view controller.
  8. Conclusion: This video concludes the series and gives you some things to think about in your future testing.

Where To Go From Here?

Want to check out the course? You can watch the introduction video for free! Asynchronous Testing, in Part 2: Advanced Unit Testing Techniques, will also be free when it is released.

The rest of the course is for raywenderlich.com subscribers only. Here’s how you can get access:

  • If you are a raywenderlich.com subscriber: The first part of the course is ready for you today! The rest of the course will be released next week. You can check out the course here.
  • If you are not a subscriber yet: What are you waiting for? Subscribe now to get access to our updated Testing in iOS course and our entire catalog of over 500 videos.

Stay tuned for more new and updated iOS 11 courses to come. I hope you enjoy the course! :]


Internationalizing Your iOS App: Getting Started

Update note: This tutorial has been updated for iOS 11 and Xcode 9 by Richard Critz. The original tutorial was written by Sean Berry with updates by Ali Hafizji.

Creating a great iOS app is no small feat, yet there is much more to it than great code, gorgeous design and intuitive interaction. Climbing the App Store rankings requires well-timed product marketing, the ability to scale up along with the user base, and the use of tools and techniques to reach as wide an audience as possible.

For many developers, international markets are an afterthought. Thanks to the painless global distribution provided by the App Store, you can release your app in over 150 countries with a single click. Asia and Europe alone represent a continually growing pool of potential customers, many of whom are not native English speakers. In order to capitalize on the global market potential of your app, you’ll need to understand the basics of app internationalization and localization.

This tutorial will guide you through the basic concepts of internationalization by taking a simple app called iLikeIt and adding internationalization support. This simple app has a label and a You Like? button. Whenever the user taps the button, some fictitious sales data and a cheerful image fade in below the button.

Currently, the app is English only; time to fix that!

Note: Because changing languages can also change the size of UI elements, it is crucial that you use Auto Layout in any app that you plan to internationalize.

Internationalization vs Localization

Making your app speak another language requires both internationalization and localization. Aren’t they just two words for the same thing? Not at all! They represent separate and equally important steps in the process of bringing your app into the world of multiple languages.

Internationalization is the process of designing and building your app for international compatibility. This means, for example, building your app to:

  • Handle text input and output processing in the user’s native language.
  • Handle different date, time and number formats.
  • Utilize the appropriate calendar and time zone for processing dates.

Internationalization is an activity that you, the developer, perform by utilizing system-provided APIs and making the necessary modifications to your code to make your app as good in Chinese or Arabic as it is in English. These modifications allow your app to be localized.

Localization is the process of translating an app’s user interface and resources into different languages. Unless you happen to be fluent in the language you’re supporting, this is something you can, and should, entrust to someone else.

Getting Started

Download the starter project here. Build and run; tap You like?. You should see something similar to the following:

iOS Internationalization

As you can see, you will need to localize four items:

  • The “Hello” label
  • The “You Like?” button
  • The “Sales Data” label
  • The text in the image

Take a moment to browse the project’s files and folders to familiarize yourself with the project structure. Main.storyboard contains a single screen which is an instance of MainViewController.

Separating Text From Code

Currently, all of the text displayed by the app exists as hard-coded strings within Main.storyboard and MainViewController.swift. In order to localize these strings, you must put them into a separate file. Then, rather than hard-coding them, you will retrieve the appropriate strings from this separate file in your app’s bundle.

iOS uses files with the .strings file extension to store all of the localized strings used within the app, one or more for each supported language. A simple function call will retrieve the requested string based on the current language in use on the device.

Choose File\New\File from the menu. In the resulting dialog, select iOS\Resource\Strings File and click Next.

new strings file

Name the file Localizable and click Create.

Note: Localizable.strings is the default filename iOS uses for localized text. Resist the urge to name the file something else unless you want to type the name of your .strings file every time you reference a localized string.

A .strings file contains any number of key-value pairs, just like a Dictionary. Conventional practice uses the development, or base, language translation as the key. The file has a specific, but simple, format:

"KEY" = "CONTENT";
Note: Unlike Swift, the .strings file requires that each line terminate with a semicolon.

Add the following to the end of Localizable.strings:

"You have sold 1000 apps in %d months" = "You have sold 1000 apps in %d months";
"You like?" = "You like?";

As you can see, you may include format specifiers in either the key or the value portion of the string to allow you to insert real data at run time.

NSLocalizedString(_:tableName:bundle:value:comment:) is the primary tool you use in your code to access these localized strings. The tableName, bundle, and value parameters all have default values so you normally specify only the key and the comment. The comment parameter is there for you to provide a hint to translators as to what purpose this string serves in your app’s user experience.

Open MainViewController.swift and add the following function:

override func viewDidLoad() {
  super.viewDidLoad()

  likeButton.setTitle(NSLocalizedString("You like?", comment: "You like the result?"),
                      for: .normal)
}

Here, you update the title on the button using the localized value. Now, in the function likeButtonPressed(), find the following line:

salesCountLabel.text = "You have sold 1000 apps in \(period) months"

Replace that line with:

let formatString = NSLocalizedString("You have sold 1000 apps in %d months",
                                     comment: "Time to sell 1000 apps")
salesCountLabel.text = String.localizedStringWithFormat(formatString, period)

You use the String static function localizedStringWithFormat(_:_:) to substitute the number of months in your sales period into the localized string. As you’ll see later, this way of performing the substitution respects the user’s locale setting.

Note: The comment strings in this tutorial have been purposely kept short to make them format nicely on-screen. When writing them in your own code, take the time to make them as descriptive as possible. It will help your translators significantly and result in better translations.

Build and run. Your project should appear exactly as it did before.

Adding a Spanish Localization

To add support for another language, click on the blue iLikeIt project folder in the Project navigator and select the iLikeIt Project in the center pane (NOT the Target). On the Info tab, you will see a section for Localizations. Click the + and choose Spanish (es) to add a Spanish localization to your project.

adding a localization

Xcode will list the localizable files in your project and allow you to select which ones it should update. Keep them all selected and click Finish.

select files to add for Spanish

But wait! Where is Localizable.strings? Don’t worry; you haven’t marked it for localization yet. You’ll fix that shortly.

At this point, Xcode has set up some directories, behind the scenes, to contain localized files for each language you selected. To see this for yourself, open your project folder using Finder, and you should see something similar to the following:

new project structure

See en.lproj and es.lproj? They contain the language-specific versions of your files.
“en” is the localization code for English, and “es” is the localization code for Spanish. For other languages, see the full list of language codes.
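In plain text, the structure at this point looks something like this – the exact contents depend on which files Xcode localized for you:

iLikeIt/
├── Base.lproj/
│   └── Main.storyboard
├── en.lproj/
└── es.lproj/
    └── Main.strings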

So what is Base.lproj? Those are the files in the base, or development, language — in this case, English. When your app asks iOS for the localized version of a resource, iOS looks first in the appropriate language directory. If that directory doesn’t exist, or the resource isn’t there, iOS will fetch the resource from Base.lproj.

It’s that simple! Put your resources in the appropriate folder and iOS will do the rest.

Localizable.strings doesn’t do much good unless it exists in these .lproj directories. Tell Xcode to put it there by selecting it in the Project navigator, then clicking Localize… in the File inspector.

localize Localizable.strings

Xcode will ask you to confirm the file’s language. The default will be English since that’s your development language. Click Localize.

Select localization language

The File inspector will update to show the available and selected languages. Click the checkbox next to Spanish to add a Spanish version of the file.

Add Spanish Localizable.strings

Look at the Project navigator. Localizable.strings now has a disclosure triangle next to it. Expand the list and you’ll see that Xcode lists both the English and Spanish versions.

expand Localizable.strings group

Select Localizable.strings (Spanish) in the Project navigator and replace its contents with the following:

"You have sold 1000 apps in %d months" = "Has vendido 1000 aplicaciones en %d meses";
"You like?" = "¿Es bueno?";

Xcode makes it easy to test your localizations without the bother of constantly changing languages or locales on your simulator. Click the active scheme in the toolbar and choose Edit scheme… from the menu (you can also Option-Click on the Run button).

edit scheme

The Run scheme will be selected by default and that’s the one you want. Click the Options tab, then change Application Language to Spanish. Click Close.

set application language to Spanish

Build and run and you should see something like this when you click ¿Es bueno?:

first run in Spanish

Hooray! There’s some Spanish in your app!

Internationalizing Storyboards

UI elements in your storyboard, such as labels, buttons and images, can be set either in your code or directly in the storyboard. You have already learned how to support multiple languages when setting text programmatically, but the Hello label at the top of the screen has no IBOutlet and only has its text set within Main.storyboard.

You could add an IBOutlet, connect it to the label in Main.storyboard, then set its text property using NSLocalizedString(_:tableName:bundle:value:comment:) as you did with likeButton and salesCountLabel, but there is a much easier way to localize storyboard elements, without the need for additional code.

In the Project navigator, open the disclosure triangle next to Main.storyboard and you should see Main.storyboard (Base) and Main.strings (Spanish).

storyboard Spanish localization

Click on Main.strings (Spanish) to open it in the editor. You should already have an entry for the Hello label which will look something like this:

/* Class = "UILabel"; text = "Hello"; ObjectID = "jSR-nf-1wA"; */
"DO NOT COPY AND PASTE.text" = "Hello";

Replace the English translation with the Spanish translation:

/* Class = "UILabel"; text = "Hello"; ObjectID = "jSR-nf-1wA"; */
"DO NOT COPY AND PASTE.text" = "Hola";
Note: Never directly change the auto-generated ObjectID. Also, do not copy and paste the lines above: in your file, the key is your label's actual ObjectID followed by .text (e.g. "jSR-nf-1wA.text"), which may differ from the one shown above.

Change the localizations for the other two entries. Again, do not edit the unique ObjectID:

"DO NOT COPY AND PASTE.text" = "Has vendido 1000 aplicaciones en 20 meses";
"DO NOT COPY AND PASTE.normalTitle" = "¿Es bueno?";

Xcode lets you preview your storyboard localizations. Select Main.storyboard in the Project navigator and open the assistant editor with View\Assistant Editor\Show Assistant Editor. Make sure it is showing the preview of the storyboard:

assistant editor preview

Click the language menu in the lower right corner and select Spanish.

select Spanish

Your preview, except for the image, should be in Spanish.

Spanish preview

Internationalizing Images

Since the app uses an image that contains English text, you will need to localize the image itself. Unfortunately, while Apple recommends that you put all of your images into an asset catalog, they provide no direct mechanism for localizing those images. Not to worry, however, as there is a simple trick that makes it easy to do.

Open Assets.xcassets and you will see a Spanish-localized version of the image named MeGusta. Now open Localizable.strings (English) and add the following at the end:

"imageName" = "iLikeIt";

Next, open Localizable.strings (Spanish) and add the following at the end:

"imageName" = "MeGusta";

Finally, open MainViewController.swift and replace viewDidLoad() with:

override func viewDidLoad() {
  super.viewDidLoad()

  imageView.image = UIImage(named: NSLocalizedString("imageName",
                                                     comment: "name of the image file"))
}

You use the key imageName to retrieve the name of the localized version of the image and load that image from the asset catalog. Note that the new viewDidLoad() no longer sets the button's title in code; the storyboard localization now handles that.

Internationalizing Numbers

You have made great progress in preparing your app to run in multiple languages, but there is more to the process than just changing the words. Formatting for other common data, such as numbers and dates, varies around the world. For example, in the US you might write “1,000.00”. In Spain, you would write “1.000,00” instead.

Luckily, iOS provides various formatters such as NumberFormatter and DateFormatter to do all of this for you. Open MainViewController.swift and in likeButtonPressed() replace:

salesCountLabel.text = String.localizedStringWithFormat(formatString, period)

with:

let quantity = NumberFormatter.localizedString(from: 1000, number: .decimal)
salesCountLabel.text = String.localizedStringWithFormat(formatString, quantity, period)

This creates a localized presentation of the number 1000 and inserts it in the formatted string assigned to the label.
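If you'd like to see the locale-dependence for yourself, here's a small standalone sketch (not part of the app) that formats the same number for two explicit locales:

import Foundation

let formatter = NumberFormatter()
formatter.numberStyle = .decimal

// The same value, rendered per-locale
formatter.locale = Locale(identifier: "en_US")
print(formatter.string(from: 1000) ?? "") // 1,000

formatter.locale = Locale(identifier: "es_ES")
print(formatter.string(from: 1000) ?? "") // 1.000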

Open Localizable.strings (English) and change the second “1000” to %@.

"You have sold 1000 apps in %d months" = "You have sold %@ apps in %d months";

Do the same in Localizable.strings (Spanish).

"You have sold 1000 apps in %d months" = "Has vendido %@ aplicaciones en %d meses";

Make sure your scheme is still set to run with the Application Language set to Spanish, then build and run. You should see something like this:

Spanish number localization

Edit your run scheme and change the Application Language back to System Language. Build and run again; this time you should see something like:

English number localization

Note: If you live in a country where English is not the primary language, you still may not see 1,000 formatted with the comma. In this case, change the scheme’s Application Region to United States to get the results shown above.

Using various combinations of Application Language and Application Region, you can test almost all localizations you desire.

Pluralization

You may have observed that iLikeIt randomly chooses for you to take either 1, 2 or 5 months to sell 1000 apps. If not, run the app now and tap You like? several times to see this in action. You’ll notice, whether you’re running in English or Spanish, that the message is grammatically incorrect when you take only one month.

Never fear, iOS to the rescue again! iOS supports another localization file type called a .stringsdict. It works just like a .strings file except that it contains a dictionary with multiple replacements for a single key.

Choose File\New\File from the menu. In the resulting dialog, select iOS\Resource\Stringsdict File and click Next. Name the file Localizable and click Create. Open all of the disclosure triangles and you will see the following:

stringsdict format

Here’s what each section of the dictionary does:

  1. The Localized String Key is a dictionary that represents one localization and it’s the key used by NSLocalizedString(_:tableName:bundle:value:comment:). The localization lookup searches the .stringsdict first and then, if it finds nothing there, the equivalent .strings file. You may have as many of these keys as you like in your .stringsdict; iLikeIt will only need one.
  2. The Localized Format Key is the actual localization — the value returned by NSLocalizedString(_:tableName:bundle:value:comment:). It can contain variables for substitution. These variables take the form %#@variable-name@.
  3. You must include a Variable dictionary for each variable contained in the Localized Format Key. It defines the rules and substitutions for the variable.
  4. There are two rule types for a variable: Plural Rule and Size Rule. This tutorial will only cover the former.
  5. The Number Format Specifier is optional and tells iOS the data type of the value being used to make the substitution.
  6. The Variable dictionary contains one or more keys that specify the exact substitutions for the variable. For a Plural Rule, only the other key is required; the others are language-specific.
Note: For complete information on the stringsdict file format, see Appendix C of Apple’s Internationalization and Localization Guide.

Edit the dictionary to match this picture; specific changes are listed below.

English stringsdict file

Here are the specific changes you are making:

  1. Change the name of the Localized String Key to You have sold 1000 apps in %d months
  2. Change the value of the Localized Format Key to You have sold %@ apps in %#@months@
    This defines a variable months for use in the dictionary.
  3. Rename the Variable dictionary to months
  4. Set the Number Format Specifier (NSStringFormatValueTypeKey) to d
  5. Set the one key’s value to %d month
    Use this key when you want the singular form of the phrase.
  6. Set the other key’s value to %d months
    Use this key for all other cases.
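If you'd like to double-check your work against the raw source, right-click Localizable.stringsdict in the Project navigator and choose Open As\Source Code. The finished English file should look roughly like this sketch (with the unused plural keys omitted for brevity):

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>You have sold 1000 apps in %d months</key>
    <dict>
        <key>NSStringLocalizedFormatKey</key>
        <string>You have sold %@ apps in %#@months@</string>
        <key>months</key>
        <dict>
            <key>NSStringFormatSpecTypeKey</key>
            <string>NSStringPluralRuleType</string>
            <key>NSStringFormatValueTypeKey</key>
            <string>d</string>
            <key>one</key>
            <string>%d month</string>
            <key>other</key>
            <string>%d months</string>
        </dict>
    </dict>
</dict>
</plist>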

You may delete the empty keys but I recommend against it since you may need them later. If the key is empty, iOS just ignores it.

Note: Xcode switches from showing the “friendly” names of some keys and values to showing their raw values when you edit the stringsdict. While it’s ugly, it’s not incorrect. Just ignore it.

You’ve now completed your base Localizable.stringsdict and are ready to add the Spanish version. In the File inspector, click Localize….

As it did with Localizable.strings, Xcode will ask you to confirm the file’s language. The default will be English since that’s your development language. Click Localize.

The File inspector will update to show the available and selected languages. Click the checkbox next to Spanish to add a Spanish version of the file.

Click the disclosure triangle next to Localizable.stringsdict in the Project navigator to show the individual language files. Open Localizable.stringsdict (Spanish) and make the following changes:

  1. NSStringLocalizedFormatKey: Has vendido %@ aplicaciones en %#@months@
  2. one: %d mes
  3. other: %d meses

Spanish stringsdict file

Build and run. Tap You like? until you have seen all three values and see that the grammar is now correct. And you didn’t change a bit of code! It’s worth internationalizing your app just to get plural handling for free!

English singular correct

Edit your scheme and change the Application Language to Spanish. Build and run. Tap ¿Es bueno? several times to see that the Spanish localization is working correctly.

Notice that although you have left the localizations for your sales volume in the various Localizable.strings files, those localizations are superseded by the ones in Localizable.stringsdict.

Adding Another Language

You may be wondering why there are so many options in the Values dictionary. While many languages such as English and Spanish have one form for singular and one form for plural, other languages have more complex rules for plurals, decimals, zero and so on. iOS implements all of these rules for the languages it supports. To see details on these rules, check out the CLDR Language Plural Rules specified by the Unicode organization.

One language that has more than one plural form is Polish. You’re going to add a Polish localization to iLikeIt in order to see it in action. You have already performed all of these steps in this tutorial to add your Spanish localization so this should be easy for you.

  1. Select the blue iLikeIt icon in the Project navigator to reveal the project localizations. Click the + to add Polish. Select all three files to be localized.
  2. Under Main.storyboard, open Main.strings (Polish). Change the values as follows:
    • Hello label text: Cześć
    • Sales label text: Sprzedałeś 1000 aplikacji w 20 miesięcy
    • You like button title: Lubisz to?

    Polish storyboard strings

  3. Under Localizable.strings, open Localizable.strings (Polish). Replace the contents with:

    "You like?" = "Lubisz to?";
    "imageName" = "LubieTo";

  4. Under Localizable.stringsdict, open Localizable.stringsdict (Polish) and make the following changes:
    1. NSStringLocalizedFormatKey: Sprzedałeś %@ aplikacji w %#@months@
    2. one: %d miesiąc
    3. few: %d miesiące
    4. many: %d miesięcy
    5. other: %d miesiąca

Polish stringsdict file

And that’s all there is to it! Edit your scheme and change the Application Language to Polish. Build and run. Tap Lubisz to? several times to see the various singular and plural forms of the sales message. Notice the formatting of the number 1000 has changed as well.

Polish localization complete

Localizing Your Icon

There’s one last little bit of localization for you to do to make your app look totally professional: the name of the app as it appears under the icon on the home screen.

Using the skills you’ve already learned, add a new strings file to your project and name it InfoPlist.strings. This is another “magic” name that iOS looks for. For more information, check out the Information Property List Key Reference.

Add the following to InfoPlist.strings:

CFBundleDisplayName = "iLikeIt";

Now localize the file as English and add Spanish and Polish localizations. Change the value of the display name in InfoPlist.strings (Spanish) to MeGusta. In InfoPlist.strings (Polish), make it LubięTo.

Build and run; exit the app and check the home screen. You’ll see it’s still called iLikeIt. Unfortunately, the only way to test this localization is to change the language setting in the simulator itself.

On the simulator, open the Settings app. Navigate to General > Language & Region > iPhone Language to select a new language. Choose either Spanish or Polish and tap Done. Accept the language change and wait while the simulator reboots. Go back to the home screen and check your app now!

Spanish icon title

Helpfully, the Settings app will have English as the second choice on the screen when you’re ready to return the setting to English.

Where to Go From Here?

You can download the completed project for this tutorial here.

To learn more about internationalization, check out:

Both videos explain more about stringsdict files and the use of size rules as well as plural rules. To save you from endless and fruitless searching in the documentation, the “magic” numbers in size rules are the “M” width of a display — the number of uppercase Ms that fit on a single line.

I hope you enjoyed this tutorial. If you have any questions or comments, please join the forum discussion below!

The post Internationalizing Your iOS App: Getting Started appeared first on Ray Wenderlich.


Gradle Tutorial for Android: Getting Started


Gradle and Android

One of the vital parts of creating an app, along with the development itself, is the process of building – putting together the modules and specifying the interactions between them. The primary goal of build automation systems is to make this process more convenient and efficient for developers. Gradle is an excellent example of such a tool, and is the primary build tool for Android.

In this tutorial, you’ll gain a better understanding of what Gradle is, and how you can use Gradle to supercharge your builds. By the end of this tutorial, you should be able to:

  1. Build your Android apps from the command-line
  2. Read through a Gradle build file
  3. Create your own Gradle plugin
  4. Create build flavors for profit!
Note: This tutorial assumes you’re already familiar with the basics of Android development. If you are completely new to Android development, read through our Beginning Android Development tutorials to familiarize yourself with the basics.

What is Gradle?

Gradle is an open source build automation system. It brings the convenience of a Groovy-based DSL along with the advantages of Ant and Maven. With Gradle, you can easily manipulate the build process and its logic to create multiple versions of your app. It’s much easier to use and a lot more concise and flexible when compared to Ant or Maven alone.

So it was little wonder that, at Google I/O in May 2013, the Android Gradle plugin was introduced as the build tool built into the first preview of Android Studio :]

Getting Started

Download SocializifyStarter, the starter project for this tutorial. At minimum, you’ll need Android Studio 3.0 installed on your computer. Open the project in Android Studio, and you’ll be prompted to set up the Gradle wrapper:

Setup Gradle wrapper

Choose OK to configure the wrapper, which you’ll learn more about later in the tutorial.

Depending on which version of Android Studio you’re running, you may also be prompted to update the Gradle plugin:

Update gradle plugin

Choose Update to finish opening the project in Android Studio.

Before starting working with the project, let’s review its structure in the Project pane in Android Studio:

Project structure

Pay attention to the files with the green Gradle icon and .gradle extension. Android Studio generates these files automatically during project creation, and they are responsible for processing your project’s build. They contain the necessary info about the project structure, library dependencies, library versions, and the app versions you’ll get as a result of the build process.

Project-level build.gradle

Find the build.gradle file in the root directory of the project. It’s called a top-level (project-level) build.gradle file. It contains the settings which are applied to all modules of the project.

Note: If you’re unfamiliar with modules, check out our second tutorial on Android Studio found here.
// 1
buildscript {
    // 2
    repositories {
        google()
        jcenter()
    }
    // 3
    dependencies {
        classpath 'com.android.tools.build:gradle:3.0.0'
        classpath 'org.jetbrains.kotlin:kotlin-gradle-plugin:1.1.51'
    }
}

// 4
allprojects {
    repositories {
        google()
        jcenter()
    }
}

Here’s what’s going on, step by step:

  1. In the buildscript block you define settings needed to perform your project building.
  2. In the repositories block you add names of the repositories that Gradle should search for the libraries you use.
  3. The dependencies block contains necessary plugin dependencies, in this case the Gradle and Kotlin plugins. Do not put your module dependencies in this block.
  4. The structure of the allprojects block is similar to the buildscript block, but here you define repositories for all of your modules, not for Gradle itself. Usually you don’t define the dependencies section for allprojects. The dependencies for each module are different and should reside in the module-level build.gradle.
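As a quick illustration, if a library you depend on lived in a custom Maven repository rather than Google’s repository or JCenter, you’d register it alongside the others (the URL below is hypothetical):

allprojects {
    repositories {
        google()
        jcenter()
        // Hypothetical custom repository hosting a third-party library
        maven { url 'https://example.com/maven' }
    }
}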

Module-level build.gradle

Now go to the build.gradle file in the app module directory. It contains dependencies (libraries which a module relies on), and instructions for the build process. Each module defines its own build.gradle file.

// 1
apply plugin: 'com.android.application'
apply plugin: 'kotlin-android'
apply plugin: 'kotlin-android-extensions'

// 2
android {
    // 3
    compileSdkVersion 27
    // 4
    buildToolsVersion "26.0.2"
    // 5
    defaultConfig {
        // 6
        applicationId "com.raywenderlich.socializify"
        // 7
        minSdkVersion 21
        // 8
        targetSdkVersion 27
        // 9
        versionCode 1
        // 10
        versionName "1.0"
    }
}

// 11
dependencies {
    implementation fileTree(dir: 'libs', include: ['*.jar'])
    implementation 'org.jetbrains.kotlin:kotlin-stdlib-jre7:1.1.51'
    implementation 'com.android.support:appcompat-v7:27.0.1'
    implementation 'com.android.support.constraint:constraint-layout:1.0.2'
}

The code above does the following:

  1. Specifies a list of plugins needed to build the module. The com.android.application plugin is necessary in order to set up the Android-specific settings of the build process. Here you can also use com.android.library if you’re creating a library module. The kotlin-android and kotlin-android-extensions plugins allow you to use the Kotlin language and the Kotlin Android extensions in your module.
  2. In the android block you place all platform-specific options of the module.
  3. The compileSdkVersion option indicates the API level your app will be compiled with. In other words, you cannot use features from an API higher than this value. Here, you’ve set the value to use APIs from Android Oreo.
  4. The buildToolsVersion option indicates the version of the compiler. From Gradle plugin 3.0.0 onward, this field is optional. If it is not specified, the Android SDK uses the most recent downloaded version of the Build Tools.
  5. The defaultConfig block contains options which will be applied to all build versions (e.g., debug, release, etc.) of your app by default.
  6. The applicationId is the identifier of your app. It should be unique so as to successfully publish or update your app on the Google Play Store.
  7. In order to set the lowest API level supported, use minSdkVersion. Your app will not be available in the Play Store for devices running on lower API levels.
Note: To get more acquainted with the Android SDK versions, read our tutorial covering that topic.
  8. The targetSdkVersion parameter defines the maximum API level your app has been tested on. That is to say, you’re sure your app works properly on devices with this SDK version, and it doesn’t require any backward compatibility behaviors. The best approach is to thoroughly test an app using the latest API, keeping your targetSdkVersion value equal to compileSdkVersion.
  9. versionCode is a numeric value for the app version.
  10. versionName is a user-friendly string for the app version.
  11. The dependencies block contains all dependencies needed for this module. Later in this tutorial, you’ll find out more about managing your project’s dependencies.

Finally, settings.gradle

Whew, build.gradle was quite a big file! Hope you’re not tired yet :] The next file is quite short – move to the settings.gradle file in the root directory. Its contents should look as follows:


include ':app'

In this file, you should define all of your project’s modules by name. Here we have only one module – app. In a large, multi-module project, this file can have a much longer list.
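For example, a hypothetical multi-module project with separate library modules might declare:

include ':app', ':core', ':network', ':analytics'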

Groovy vs. Kotlin in Gradle

Talk about Kotlin

Kotlin’s popularity is growing every day. Besides Android apps, you can also write back-end web code, front-end web code, and even iOS apps using Kotlin! Recently, Gradle announced Kotlin language support for writing build scripts. The Gradle Kotlin DSL is still in pre-release, requires nontrivial setup, and won’t be covered in this tutorial. However, it’s quite promising and well worth keeping an eye on as it approaches release.

Why Kotlin

You may be wondering, why would you use Kotlin for writing Gradle scripts?

First of all, Kotlin is a statically typed language (Groovy is dynamically typed), which allows for conveniences like autocompletion, better refactoring tools and source-code navigation. You can work in script files just as you would with Kotlin classes, with all the Android Studio support you’re used to. Moreover, autocompletion will save you from typos :]

Secondly, it’s practical to work with a single language across your app and your build system.
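To give you a rough taste, here’s a sketch of how the start of a module-level build script might look in the pre-release Gradle Kotlin DSL (a build.gradle.kts file) – keep in mind the syntax may well change before the stable release:

// Pre-release Gradle Kotlin DSL sketch; syntax subject to change
plugins {
    id("com.android.application")
}

android {
    compileSdkVersion(27)
    defaultConfig {
        applicationId = "com.raywenderlich.socializify"
        minSdkVersion(21)
        targetSdkVersion(27)
    }
}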

Mastering the build: Gradle Commands

To execute Gradle commands, you can use both the command line and Android Studio. It’s better to start with the command line to get a deeper understanding of what’s going on. So, how do you start working with Gradle commands? Easy – use gradlew.

What is gradlew

gradlew is the Gradle Wrapper. You don’t need to worry about installing Gradle on your computer – the wrapper will do that for you. Better still, it allows different projects to be built with different versions of Gradle.

Open your command line and move to the root directory of the starter project:

cd path/to/your/Android/projects/SocializifyStarter/

gradlew tasks

After that, execute the following command:

./gradlew tasks

You’ll see a list containing all available tasks:

> Task :tasks

------------------------------------------------------------
All tasks runnable from root project
------------------------------------------------------------

Android tasks
-------------
androidDependencies - Displays the Android dependencies of the project.
signingReport - Displays the signing info for each variant.
sourceSets - Prints out all the source sets defined in this project.

Build tasks
-----------
assemble - Assembles all variants of all applications and secondary packages.
assembleAndroidTest - Assembles all the Test applications.
assembleDebug - Assembles all Debug builds.
assembleRelease - Assembles all Release builds.
...

Build Setup tasks
-----------------
init - Initializes a new Gradle build.
wrapper - Generates Gradle wrapper files.

Help tasks
----------
...

Install tasks
-------------
...

Verification tasks
------------------
...
lint - Runs lint on all variants.
...

To see all tasks, run gradlew tasks --all

To get more detail about task, run gradlew help --task <task>

These commands exist to help you with tasks like project initialization, building, testing and analyzing. If you forget a specific command, just execute ./gradlew tasks to refresh your memory.

gradlew assemble

Now skim the list of commands again, and find commands starting with ‘assemble’ under the Build tasks section. Run the first command:

./gradlew assemble

Below is the output of executing this command:

> Task :app:compileDebugKotlin
Using kotlin incremental compilation

> Task :app:compileReleaseKotlin
Using kotlin incremental compilation


BUILD SUCCESSFUL in 29s
52 actionable tasks: 52 executed

From the output, it’s apparent that Gradle compiled two versions of the app – debug and release.
Verify this by changing to the build output directory:

cd app/build/outputs/apk/

To review the contents of a directory run the following command:

ls -R

The ls command displays all files and directories in the current directory. The -R parameter forces this command to execute recursively. In other words, you’ll not only see the contents of your current directory but also of child directories.

You’ll get the following output:

debug	release

./debug:
app-debug.apk	output.json

./release:
app-release-unsigned.apk	output.json

As you see, Gradle generated both debug and release apks.

gradlew lint

Move back to the root directory:


cd ../../../..

Run the following command:

./gradlew lint

The lint command, and any command whose name starts with ‘lint’, analyzes the whole project looking for various mistakes, typos and vulnerabilities. This base command finds all the issues in a project, of both critical and minor severity.

You’ll get the output with the count of issues found:

> Task :app:lint
Ran lint on variant debug: 47 issues found
Ran lint on variant release: 47 issues found
Wrote HTML report to file:///Users/username/path/to/your/Android/projects/SocializifyStarter/app/build/reports/lint-results.html
Wrote XML report to file:///Users/username/path/to/your/Android/projects/SocializifyStarter/app/build/reports/lint-results.xml

Review the report by typing the following on Mac:

open app/build/reports/lint-results.html

or on Linux:

xdg-open app/build/reports/lint-results.html

The default browser on your computer will open with the specified file:

Lint issues 1

Lint issues 2

You can inspect all the issues found with code snippets and an expanded description of a possible solution. However, don’t focus too much on all of these issues – pay attention to the critical ones and fix them immediately. Minor issues don’t necessarily warrant a refactoring, depending upon your team’s guidelines and processes.

Managing Dependencies

Now it’s time to make changes to the application itself. Build and run the starter project:

The starter project

This screen shows a user’s profile – name, followers, photos, etc. However, something’s missing – an avatar! In order to load an avatar from a URL, you’ll use a third-party library in the application: Picasso.

Picasso is described as “a powerful image downloading and caching library for Android”. To get started with Picasso, you need to add it as a dependency to your project.

First, create a file named dependencies.gradle in the root directory of the project. You’ll use this file as the means of identifying all the project dependency versions in one place. Add the following to this file:

ext {
    minSdkVersion = 17
    targetSdkVersion = 27
    compileSdkVersion = 27
    buildToolsVersion = "26.0.2"
    kotlinVersion = "1.1.51"
    supportVersion = "27.0.1"
    picassoVersion = "2.5.2"
}

Open the project-level build.gradle file (the one in the root directory, not the one in the app directory!) and add the following line on the top of the file:

apply from: 'dependencies.gradle'

Now you can use the properties you specified in the dependencies.gradle file in your other project build files like this:

app module-level build.gradle

android {
    compileSdkVersion rootProject.compileSdkVersion
    buildToolsVersion rootProject.buildToolsVersion
    defaultConfig {
        applicationId "com.raywenderlich.socializify"
        minSdkVersion rootProject.minSdkVersion
        targetSdkVersion rootProject.targetSdkVersion
        versionCode 1
        versionName "1.0"
    }
}

dependencies {
    implementation fileTree(include: ['*.jar'], dir: 'libs')
    implementation "com.android.support:appcompat-v7:$rootProject.supportVersion"
    implementation "com.android.support:design:$rootProject.supportVersion"
    implementation "org.jetbrains.kotlin:kotlin-stdlib-jre7:$rootProject.kotlinVersion"
}

Add the following line, to include Picasso, in your module-level build.gradle file (in the app directory!) inside the dependencies block:

implementation "com.squareup.picasso:picasso:$rootProject.picassoVersion"

When you modify build files, you’ll be prompted to sync the project:

Gradle sync

Don’t be afraid to re-sync your project. It takes a short while until the sync is completed, and the time it takes gets longer when you have more dependencies and more code in your project.

When the project syncing is complete, open ProfileActivity and add this function to load a user’s avatar:

// 1
private fun loadAvatar() {
  // 2
  Picasso.with(this).load("https://goo.gl/tWQB1a").into(avatar)
}

If you get build errors, or if you’re prompted to resolve the imports in Android studio, be sure the following imports are included:

import com.squareup.picasso.Picasso
import kotlinx.android.synthetic.main.activity_profile.*

Here’s a step-by-step explanation of what’s going on:

  1. You define a Kotlin function loadAvatar()
  2. Picasso needs an instance of a Context to work, so you must call with(context: Context!), passing in the current Activity as the Context. This returns an instance of the Picasso class. Next, you specify the URL of an image you want to load with the load(path: String!) method. The only thing left is to tell Picasso where you want this image to be shown by calling the into(target: ImageView!) method.
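As an aside, Picasso’s fluent API supports some optional extras you may find handy later, such as a placeholder image shown while loading and a fallback shown on failure (the drawable names below are hypothetical):

Picasso.with(this)
    .load("https://goo.gl/tWQB1a")
    .placeholder(R.drawable.avatar_placeholder) // hypothetical drawable shown while loading
    .error(R.drawable.avatar_error)             // hypothetical drawable shown on failure
    .into(avatar)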

Add this function invocation in the onCreate(savedInstanceState: Bundle?) function of your ProfileActivity:

loadAvatar()

The last step, which even senior Android developers tend to forget, is to add the android.permission.INTERNET permission. If you forget this permission, Picasso simply can’t download the image, and it’s hard to spot the error. Go to the AndroidManifest.xml file and add the following permission above the application tag:

<uses-permission android:name="android.permission.INTERNET" />

That’s all you need to show the user’s avatar. Build and run the project:

App with avatar

Gradle Dependency Configurations

The implementation keyword you previously used is a dependency configuration, which tells Gradle to add Picasso in such a way that it’s not available to other modules. This option significantly speeds up the build time. You’ll use this keyword more often than others.

In some other cases, you may want your dependency to be accessible to other modules of your project. In those cases, you can use the api keyword.

Other options include runtimeOnly and compileOnly configurations, which mark a dependency’s availability during runtime or compile time only.
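To make the distinction concrete, here’s a sketch of a dependencies block using each configuration (all the coordinates other than Picasso’s are hypothetical):

dependencies {
    // Visible only inside this module – keeps builds fast
    implementation "com.squareup.picasso:picasso:$rootProject.picassoVersion"

    // Leaks to modules that depend on this one (hypothetical library)
    api "com.example:shared-models:1.0"

    // Needed to compile, but not packaged into the APK (hypothetical)
    compileOnly "com.example:annotations:1.0"

    // Packaged into the APK, but not on the compile classpath (hypothetical)
    runtimeOnly "com.example:logging-impl:1.0"
}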

Ready to Publish: Working with Product Flavors and Build Types

Your app is ready, and you’re thinking of ways to profit from it :]

Money money money

One solution might be to have multiple versions of your app: a free and a paid version. Luckily for you, Gradle supports this at the build level, allowing you to define the boundaries of different build types. However, before getting started, you need to understand how Gradle allows you to work with different app versions.

Build Types

By default, there are two build types – debug and release. The only difference between them is the value of the debuggable parameter. In other words, you can use the debug version to review logs and to debug the app, while the release one is used to publish your app to the Google Play Store. You can configure properties to the build types by adding the following code in the android block of your module-level build.gradle file:

buildTypes {
    release {

    }
    debug {

    }
}

In the debug and release blocks you can specify the type-specific settings of your application.
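For instance, one common setup is to shrink and obfuscate only the release build and give the debug build its own application ID suffix so both can be installed side by side. A sketch:

buildTypes {
    release {
        // Shrink and obfuscate the release build
        minifyEnabled true
        proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
    }
    debug {
        // Lets debug and release builds coexist on one device
        applicationIdSuffix ".debug"
    }
}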

Build Signing

One of the most important configurations of the build is its signature. Without a signature, you’ll be unable to publish your application, since it’s necessary to verify you as an owner of the specific application. While you don’t need to sign the debug build – Android Studio does it automatically – the release build should be signed by a developer.

Note: To proceed, you need to generate the keystore for your release build. Take a look at this tutorial to find a step-by-step guide.

When your keystore is ready, add the code below in the android block and above the buildTypes block (the order of declaration matters) of the module-level build.gradle file:

signingConfigs {
    release {
        storeFile file("path to your keystore file")
        storePassword "your store password"
        keyAlias "your key alias"
        keyPassword "your key password"
    }
}

In the signingConfigs block, you specify your signature info for the build types. Pay attention to the keystore file path. It should be specified with respect to the module directory. In other words, if you created a keystore file in the module directory and named it “keystore.jks”, the value you should specify will be equal to the name of the file.

Update the buildTypes block to sign your release build automatically:

release {
    signingConfig signingConfigs.release
}
Note: There are two important considerations related to your keystore file:
  1. Once you’ve published your app to the Google Play Store, subsequent submissions must use the same keystore file and password, so keep them safe.
  2. Be sure NOT to commit your keystore passwords to a version control system such as GitHub. You can do so by keeping the password in a separate file from build.gradle, say keystorePassword.gradle in a Signing directory, and then referencing the file from the app module-level build.gradle via:
    apply from: "../Signing/keystorePassword.gradle

Then be sure to keep keystorePassword.gradle ignored by your version control system. Other techniques include keeping the password in an OS-level environment variable, especially on your remote Continuous Integration system, such as CircleCI.

Build Flavors

In order to create multiple versions of your app, you need to use product flavors. Flavors are a way to differentiate the properties of an app, whether it’s free/paid, staging/production, etc.

You’ll distinguish your app flavors with different app names. First, add the following names as strings in the strings.xml file:

<string name="app_name_free">Socializify Free</string>
<string name="app_name_paid">Socializify Paid</string>

And remove the existing:

<string name="app_name">Socializify</string>

Now that the original app_name string is no longer available, edit your AndroidManifest.xml file and replace android:label="@string/app_name" with android:label="${appName}" inside the application tag.

Add the following code in the android block of your module-level build.gradle file:

// 1
flavorDimensions "appMode"
// 2
productFlavors {
    // 3
    free {
        // 4
        dimension "appMode"
        // 5
        applicationIdSuffix ".free"
        // 6
        manifestPlaceholders = [appName: "@string/app_name_free"]
    }
    paid {
        dimension "appMode"
        applicationIdSuffix ".paid"
        manifestPlaceholders = [appName: "@string/app_name_paid"]
    }
}
  1. Every product flavor must belong to a flavor dimension – a group of related flavors. In this case, you need only one dimension: the app mode.
  2. In productFlavors, you specify a list of flavors and their settings – in this case, free and paid.
  3. Specify the name of the first product flavor – free.
  4. It’s mandatory to specify the dimension parameter value. The free flavor belongs to the appMode dimension.
  5. Since you want to create separate apps for free and paid functionality, you need them to have different app identifiers. The applicationIdSuffix parameter defines a string that’ll be appended to the applicationId giving your app unique identifiers.
  6. The manifestPlaceholders allows you to modify properties in your AndroidManifest.xml file at build time. In this case, modify the application name depending on its version.

Sync your project with Gradle again. After the project sync, run the tasks command, and see if you can spot what’s changed:

./gradlew tasks

You’ll get a similar list of tasks to the one you got when you ran this command first time:

...
Build tasks
-----------
...
assembleDebug - Assembles all Debug builds.
assembleFree - Assembles all Free builds.
assemblePaid - Assembles all Paid builds.
assembleRelease - Assembles all Release builds.
...

Spot the difference? If you pay attention to the tasks under the Build tasks section, you should have some new ones there. You now have separate commands for each build type and build flavor.

Remove the output folder generated by the previous ./gradlew assemble task so you can clearly see the difference before and after adding buildTypes and productFlavors. Run the command:

rm -rf app/build/outputs/apk

Then

./gradlew assembleDebug

When the command completes, check the output directory:

cd app/build/outputs/apk
ls -R

You’ll get something like this:

free   paid

./free:
debug

./free/debug:
app-free-debug.apk   output.json

./paid:
debug

./paid/debug:
app-paid-debug.apk   output.json

You should have two builds generated – freeDebug and paidDebug.

What is a Build Variant

From the output above, what you’ve actually generated are different build variants, which are combinations of build types (debug and release) and build flavors (free and paid). That is to say, you have four possible build variants: paidDebug, paidRelease, freeDebug and freeRelease.

Great! You’ve got two different build flavors; however, differing names alone aren’t enough to profit from. Instead, you’ll configure your app’s behavior based on the flavor type!

Declare a constant for the paid flavor right below the declaration of ProfileActivity class:

companion object {
  const val PAID_FLAVOR = "paid"
}

Add the following function to ProfileActivity:

private fun isAppPaid() = BuildConfig.FLAVOR == PAID_FLAVOR

You can now check if a user is using a paid version of the app. Depending on the result of this check, you’ll enable or disable some functionality visible to your user so they can clearly see what version they’re using in-app.

Add these strings to the strings.xml file:

<string name="free_app_message">Hi! You\'re using the free version of the application</string>
<string name="paid_app_message">Hi! Congratulations on buying
        the premium version of the application</string>

Add the following functions below isAppPaid():

private fun showMessage() {
  val message = if (isAppPaid()) R.string.paid_app_message else R.string.free_app_message
  Toast.makeText(this, message, Toast.LENGTH_LONG).show()
}

private fun togglePhotosVisibility() {
  extraPhotos.visibility = if (isAppPaid()) View.VISIBLE else View.GONE
  restriction.visibility = if (isAppPaid()) View.GONE else View.VISIBLE
}

Add these functions invocations in the onCreate(savedInstanceState: Bundle?) function:

showMessage()
togglePhotosVisibility()

Now, your user will see a different greeting message and will be able to view the whole photo feed or just some of the photos depending on the app version.

Select the freeRelease build variant in the window below:

Build variant pane

Build and run the project (you may first need to choose the app build configuration in the drop-down next to the Run button):

Free app

You should see that the functionality of the app is restricted and the message with a corresponding text is shown.

Select the paidRelease option, and run the app again:

Paid app

If a user buys your app, they’ll be able to access its full functionality.

Creating Tasks

Sometimes you need your build system to do something more complicated or customize the build process in some way. For example, you may want Gradle to output an APK file containing the build date in its name. One possible solution to this is to create a custom Gradle task.

Add the following code in your module-level build.gradle file at the same level as android block:

// 1
task addCurrentDate() {
    // 2
    android.applicationVariants.all { variant ->
        // 3
        variant.outputs.all { output ->
            // 4
            def date = new Date().format("dd-MM-yyyy")
            // 5
            def fileName = variant.name + "_" + date + ".apk"
            // 6
            output.outputFileName = fileName
        }
    }
}

Here’s what’s going on:

  1. You define an addCurrentDate() task.
  2. You iterate through all the output build variants.
  3. You iterate over all the APK files.
  4. You create an instance of Date and format it.
  5. You create a new filename appending the current date to the initial name.
  6. You set the new filename to current APK file.

Now you need to execute this task at a specific point of the build process. Add the following code below the task addCurrentDate() block:

gradle.taskGraph.whenReady {
    addCurrentDate
}

The task specified in the whenReady block will be called once when the current graph is filled with tasks and ready to start executing them. Here, you specify the name of your addCurrentDate task.

Now, go back to the command line and make sure you’re in the root directory. Run the following command to assemble a build:

./gradlew assemblePaidRelease

After the task has completed, go to the output directory and check if the build has been named correctly:

cd app/build/outputs/apk/paid/release/
ls

You should get a similar output:

output.json paidRelease_12-11-2017.apk

If your task executed correctly, all your builds will be named with this convention.

Creating Custom Plugins

Usually it’s a good idea to factor out your code into smaller pieces so it can be reused. Similarly, you can factor out custom build behavior, like the task above, into a plugin. This allows you to reuse the same behavior in other modules you may add to your project.

To create a plugin, add the following class below the addCurrentDate task in the module-level build.gradle file:

class DatePlugin implements Plugin<Project> {
    void apply(Project project) {
        project.task('addCurrentDatePluginTask') {
            project.android.applicationVariants.all { variant ->
                variant.outputs.all { output ->
                    def date = new Date().format("dd-MM-yyyy")
                    def fileName = variant.name + "_" + date + ".apk"
                    output.outputFileName = fileName
                }
            }
        }
    }
}

Add the name of your plugin at the top of this file along with the other apply plugin definitions:

apply plugin: DatePlugin

Conceptually, the code in the plugin is doing the same thing as the task – you’re still modifying the names of the output files. The only difference is that you define a class which implements Plugin and its single method apply(Project project).

In this method, you’re adding your plugin to the target – Project. By calling the task(String name, Closure configureClosure) method you’re creating a new task with a specific name and behavior and adding it to the project.

Now modify the whenReady block to call a new task:

gradle.taskGraph.whenReady {
    addCurrentDatePluginTask
}

and remove the task addCurrentDate() block you added earlier.

Now you can verify that this plugin does the same thing as the task. Assemble a new build and verify the APK filename:

./gradlew assemblePaidRelease
cd app/build/outputs/apk/paid/release/
ls

output.json paidRelease_12-11-2017.apk

Where to Go From Here

You can download the final project here.

The Android Gradle plugin 3.0 contains some significant differences from previous versions. So it’s worth reviewing the changelog.

Also, if you’re interested in the Gradle Kotlin DSL, here you can find a list of usage examples to get familiar with it.

I hope you’ve enjoyed this Getting Started with Gradle tutorial! Don’t forget to leave your feedback and feel free to ask any questions in the comments below :]

The post Gradle Tutorial for Android: Getting Started appeared first on Ray Wenderlich.

Screencast: What’s New in Xcode 9: Source Code Improvements

React Native Tutorial: Building Android Apps with JavaScript

React Native tutorial

React Native Tutorial: Build native Android applications with JavaScript.

In this React Native tutorial you’ll learn how to build native apps based on the hugely popular React JavaScript library.

What makes React Native different from other frameworks such as PhoneGap (Apache Cordova) or Appcelerator Titanium, that use JavaScript to create iOS apps?

  1. (Unlike PhoneGap) with React Native your code may be written in JavaScript but the app’s UI is fully native. It doesn’t have the drawbacks typically associated with a hybrid HTML5 app.
  2. Additionally (unlike Titanium), React introduces a novel, radical and highly functional approach to constructing user interfaces. Your UI is simply a function of the current app state.

React Native brings the React paradigm to mobile app development. Its goal isn’t to write code once and run it on any platform; rather, the goal is to learn once (the React way) and write anywhere. An important distinction to make.

The community has even added tools such as Expo and Create React Native App to help you quickly build React Native apps without having to touch Xcode or Android Studio!

While you can write React Native apps for iOS and Android, this tutorial only covers Android. You can also check out our tutorial focused on React Native for iOS.

The tutorial takes you through the process of building an Android app for searching UK property listings:

Don’t worry if you’ve never written any JavaScript or used the CSS-like properties you’ll see. This tutorial will guide you through every step and provide resources where you can learn more.

Ready to get going? Read on!

Getting Started

Node and Java Development Kit

React Native uses Node.js, a JavaScript runtime, to build your JavaScript code. React Native also requires a recent version of the Java SE Development Kit (JDK) to run on Android. Follow the instructions for your system to make sure you install the required versions.

MacOS

First install Homebrew using the instructions on the Homebrew website. Then install Node.js by executing the following in Terminal:

brew install node

Next, use homebrew to install watchman, a file watcher from Facebook:

brew install watchman

This is used by React Native to figure out when your code changes and rebuild accordingly. It’s like having Android Studio do a build each time you save your file.

Finally, download and install JDK 8 or newer if needed.

Windows

First install Chocolatey using the instructions on the Chocolatey website.

Install Node.js if you don’t have it or have a version older than 4. Run the following command as Administrator (Right-click on Command Prompt and select “Run as Administrator”):

choco install -y nodejs.install

Python is needed to run the React Native build scripts. Run the following command as Administrator if you don’t have Python 2:

choco install -y python2

Run the following command as Administrator if you don’t have a JDK or have a version older than 8:

choco install -y jdk8

Linux

Install Node.js by following the installation instructions for your Linux distribution. You will want to install Node.js version 6 or newer.

Finally, download and install JDK 8 or newer if needed.

React Native CLI

Use Node Package Manager (or npm) to install the React Native Command Line Interface (CLI) tool. In your terminal (Terminal or Command Prompt or shell) type:

npm install -g react-native-cli

npm fetches the CLI tool and installs it globally; npm is similar in function to JCenter and is packaged with Node.js.

Next, install Yarn using the instructions on the Yarn website. Yarn is a fast npm client.

Android Development Environment

Set up your Android development environment, if you haven’t done so already. Make sure you can successfully run an Android app on an emulator.

React Native requires Android 6.0 (Marshmallow). In Android Studio, go to Tools\Android\SDK Manager. Select SDK Platforms and check Show Package Details. Make sure that the following items are checked:

  • Google APIs, Android 23
  • Android SDK Platform 23
  • Intel x86 Atom_64 System Image
  • Google APIs Intel x86 Atom_64 System Image

Next, select SDK Tools and check Show Package Details. Expand Android SDK Build-Tools and make sure 23.0.1 is selected.

Finally, tap Apply to install your selections.

When the Android components are finished installing, create a new emulator running SDK Platform 23.

Create the Starter App

Navigate to the folder where you would like to develop your app and run the following in your terminal:

react-native init PropertyFinder

This uses the CLI tool to create a starter project containing everything you need to build and run a React Native app.

In a terminal, run:

cd PropertyFinder

In the created folders and files you will find a few items of note:

  • node_modules is a folder which contains the React Native framework
  • index.js is the entry point created by the CLI tool
  • App.js is the skeletal app created by the CLI tool
  • android is a folder containing an Android project and the code required to bootstrap your application
  • ios is a folder containing iOS-related code, which you won’t be touching in this tutorial.

Start your Android emulator running SDK 23 if it isn’t running.

Run the following command in a terminal:

react-native run-android

The emulator will display the following:

If you receive an error related to “SDK location not found”, then perform the following steps:

  • Go to the android/ directory of your react-native project
  • Create a file called local.properties with this line:
sdk.dir = {PATH TO ANDROID SDK}

For example, on macOS, the SDK path will look something like /Users/USERNAME/Library/Android/sdk.

You might also have noticed that a terminal window has popped up, displaying something like this:

This is Metro Bundler, the React Native JavaScript bundler running under Node.js. You’ll find out what it does shortly.

Don’t close the terminal window; just keep it running in the background. If you do close it by mistake, simply run the following in terminal:

react-native start
Note: You’ll be mostly writing JavaScript code for this React Native tutorial so no need to use Android Studio as your editor. I use Sublime Text, which is a cheap and versatile editor, but Atom, Brackets or any other lightweight editor will do the job.

React Native Basics

In this section, you’ll learn React Native basics as you begin working on PropertyFinder.

Open App.js in your text editor of choice and take a look at the structure of the code in the file:

import React, { Component } from 'react'; // 1
import {
  Platform,
  StyleSheet,
  Text,
  View
} from 'react-native';

const instructions = Platform.select({ ... }); // 2

export default class App extends Component<{}> { ... } // 3

const styles = StyleSheet.create({ ... }); // 4

Let’s go through the code step-by-step:

  1. Imports the required modules.
  2. Sets up a platform-specific display message.
  3. Defines the component that represents the UI.
  4. Creates a style object that controls the component’s layout and appearance.

Take a closer look at this import statement:

import React, { Component } from 'react';

This uses the ECMAScript 6 (ES6) import syntax to load the react module and assign it to a variable called React. This is roughly equivalent to importing libraries in Android. It also uses what’s called a destructuring assignment to bring in the Component object. Destructuring lets you extract multiple object properties and assign them to variables using a single statement.

Note: For more information about ES6 modules I’d recommend reading this blog post by Dr. Axel Rauschmayer.

ES6 is a much nicer way to write JavaScript, supporting features like default parameters, classes, arrow functions, and destructuring assignments. Not all browsers support ES6. React Native uses a tool called Babel to automatically translate modern JavaScript into compatible legacy JavaScript where necessary.
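Here’s a quick standalone illustration (not part of the app) of a few of the ES6 features just mentioned – destructuring, default parameters, arrow functions and classes:

// Destructuring: pull several properties out of an object in one statement
const listing = { title: 'Cozy flat', price: 250000, city: 'London' };
const { title, price } = listing;

// Arrow function with a default parameter
const describe = (name = 'property') => `${name}: £${price}`;

// ES6 class
class Agent {
  constructor(name) { this.name = name; }
  pitch() { return this.name + ' says ' + describe(title); }
}

console.log(new Agent('Alice').pitch()); // Alice says Cozy flat: £250000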

Back to App.js, check out the class definition:

export default class App extends Component<{}>

This defines a class which extends a React Component. The export default class modifier makes the class “public”, allowing it to be used in other files.

Open index.js and take a look at the entry point file:

import { AppRegistry } from 'react-native';
import App from './App';

AppRegistry.registerComponent('PropertyFinder', () => App);

This registers the imported component that serves as the app’s entry point.

It’s time to start building your app.

In App.js, add the following at the top of the file, just before the import statements:

'use strict';

This enables Strict Mode, which adds improved error handling and disables some less-than-ideal JavaScript language features. In simple terms, it makes JavaScript better!

Inside the App class replace render() with the following:

render() {
  return React.createElement(Text, {style: styles.description}, "Search for houses to buy!");
}

App extends React.Component, the basic building block of the React UI. Components contain immutable properties, mutable state variables and expose a method for rendering. Your current application is quite simple and only requires a render method.

React Native components are not Android view classes; instead they are a lightweight equivalent. The framework takes care of transforming the tree of React components into the required native UI.
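To make that concrete, here’s a minimal sketch (hypothetical – not part of PropertyFinder) of a component with an immutable property and a mutable state variable:

import React, { Component } from 'react';
import { Text } from 'react-native';

class Counter extends Component {
  constructor(props) {
    super(props);
    this.state = { count: 0 }; // mutable state, owned by this component
  }

  render() {
    // this.props.label is an immutable property supplied by the parent
    return React.createElement(
      Text,
      { onPress: () => this.setState({ count: this.state.count + 1 }) },
      this.props.label + ': ' + this.state.count
    );
  }
}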

Next, replace the const styles statement with the following:

const styles = StyleSheet.create({
  description: {
    fontSize: 18,
    textAlign: 'center',
    color: '#656565',
    marginTop: 65,
  },
});

This defines a single style that you’ve applied to the description text. If you’ve done any web development before, you’ll probably recognize those property names. The React Native StyleSheet class used to style the application UI is similar to the Cascading Style Sheets (CSS) used on the Web.
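As your UI grows, a single StyleSheet.create call can hold several named styles, and a component can combine them by passing an array – a hypothetical sketch:

const styles = StyleSheet.create({
  container: {
    flex: 1,                  // fill the available space
    justifyContent: 'center', // center children vertically
  },
  description: {
    fontSize: 18,
    textAlign: 'center',
    color: '#656565',
  },
});

// An array of styles; later entries win on conflicts:
// React.createElement(Text, {style: [styles.description, {color: 'red'}]}, 'Hi')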

Then, get rid of the instructions assignment code block as you no longer need it.

Save your changes to App.js and return to the emulator. Double tap R on your keyboard, and you’ll see your fledgling property search app starting to take shape:

That’s a JavaScript application running in the emulator, rendering a native UI, without a browser in sight!

Still don’t trust me? :] Verify it for yourself: within Android Studio, select Tools\Android\Layout Inspector. Then check Show All Processes, select com.propertyfinder and tap OK to inspect the view hierarchy:

You will see no WebView instances anywhere! Your text is being displayed in a view called ReactTextView:

But what is that? Go to the project file finder and enter ReactTextView.java in the prompt. Select the result matching this file to view the source code. Notice ReactTextView inherits directly from TextView. Neat!

Curious as to how it all works? Take a quick look at MainActivity.java and MainApplication.java which you can find in android/app/src/main/java/com/propertyfinder.

MainApplication sets up a ReactNativeHost which in turn creates a ReactInstanceManager. The instance manager handles the communication between JavaScript and native Android.

MainActivity extends ReactActivity which creates a ReactRootView when launched. ReactRootView uses the instance manager to start the JavaScript application. It also renders the App component to set the Activity’s content view.

The terminal window that was opened when you ran this application started a packager and server that allows your JavaScript code to be fetched, by default on port 8081. For example:

http://localhost:8081/index.bundle?platform=android

Open this URL in your browser; you’ll see the JavaScript code for your app. You can find your “Search for houses to buy!” description code embedded among the React Native framework.

When your app starts, this code is loaded and executed by the JavaScriptCore library. In the case of your application, it loads the App component, then constructs the native Android view.

Using JSX

Your current application uses React.createElement to construct the simple UI for your application, which React turns into the native equivalent. While your JavaScript code is perfectly readable in its present form, a more complex UI with nested elements would rapidly become quite a mess.

Make sure the app is still running, then return to your text editor to edit App.js. Modify the body of render to be the following:

return <Text style={styles.description}>Search for houses to buy! (Again)</Text>;

This is JSX, or JavaScript syntax extension, which mixes HTML-like syntax directly in your JavaScript code; if you’re already a web developer, this should feel rather familiar. You’ll use JSX throughout this article.
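To see the difference, here’s a small hypothetical two-label layout written both ways; the nested createElement calls bury the structure that JSX makes obvious:

// Without JSX: nesting quickly becomes hard to read
React.createElement(View, {style: styles.container},
  React.createElement(Text, {style: styles.description}, 'Search for houses to buy!'),
  React.createElement(Text, {style: styles.description}, 'Search by place-name or postcode.')
);

// The same UI in JSX
<View style={styles.container}>
  <Text style={styles.description}>Search for houses to buy!</Text>
  <Text style={styles.description}>Search by place-name or postcode.</Text>
</View>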

Save your changes to App.js and return to the emulator. Tap R twice, and you’ll see your application refresh to display the updated message:

Re-running a React Native application is really as simple as refreshing a web browser! :] Note that this will only reflect changes made to your JavaScript files – native code or resource changes will require you to restart the packager.

You can even skip having to refresh the app by enabling live reload. Press Cmd+m for Mac or Ctrl+m for Windows/Linux in the emulator then select Enable Live Reload:

In App.js, modify the render method’s body to the following:

return <Text style={styles.description}>Search for houses to buy!</Text>;

Save your changes. Note that the emulator automatically refreshes to reflect your changes:

Adding Navigation

React Navigation is a community effort led by Facebook and Expo to provide an easy-to-use navigation solution for React Native apps. It’s a JavaScript implementation which means that it works across iOS and Android. You’ll be working with this library in this tutorial.

There are other native navigation solutions out there including AirBnB’s Native Navigation and React Native Navigation from Wix. Be sure to check out the alternatives if you’re looking for a more native look and feel for your future app.

Install React Navigation by running the following in terminal:

yarn add react-navigation

You’re now ready to use its navigation components.

In App.js, add the following after the import statements near the top:

import {
  StackNavigator,
} from 'react-navigation';

StackNavigator enables your app to transition from one screen to another with the new screen being placed on top of a stack.

Next, replace the App class definition with the following:

class SearchPage extends Component<{}> {

Next, add the following to SearchPage just before render():

static navigationOptions = {
  title: 'Property Finder',
};

This sets the title in the navigation bar for this screen.

Add the following below the SearchPage component:

const App = StackNavigator({
  Home: { screen: SearchPage },
});
export default App;

This configures the SearchPage component as the initial component in the navigation stack.

Save your changes and check the emulator to see the updated UI:

Excellent — you now have the basic navigation structure in place.

Building out the Search Page

Add a new file named SearchPage.js and place it in the same folder as App.js. Add the following code to this file:

'use strict';

import React, { Component } from 'react';
import {
  StyleSheet,
  Text,
  TextInput,
  View,
  Button,
  ActivityIndicator,
  Image,
} from 'react-native';

This imports the modules you’ll need to build the UI.

Add the following Component subclass after the import statements:

export default class SearchPage extends Component<{}> {
  static navigationOptions = {
    title: 'Property Finder',
  };

  render() {
    return (
      <View style={styles.container}>
        <Text style={styles.description}>
          Search for houses to buy!
        </Text>
        <Text style={styles.description}>
          Search by place-name or postcode.
        </Text>
      </View>
    );
  }
}

render is a great demonstration of JSX and the structure it provides. Along with the style, you can very easily visualize the UI constructed by this component: a container with two text labels.

Now, add the following style code at the bottom of the file:

const styles = StyleSheet.create({
  description: {
    marginBottom: 20,
    fontSize: 18,
    textAlign: 'center',
    color: '#656565'
  },
  container: {
    padding: 30,
    marginTop: 65,
    alignItems: 'center'
  },
});

Again, these are standard CSS properties. Setting up styles like this is less visual than using Android Studio’s layout design editor, but it’s better than setting view properties one by one in your onCreate() methods! :]

Save your changes.

Open App.js and add the following just after the current import statements near the top of the file:

import SearchPage from './SearchPage';

This imports SearchPage from the file you just created.

Remove the SearchPage class and its associated description style from App.js. You won’t be needing that code any longer. This may also be a good time to get rid of all the unused imports: those from react and react-native.

Save your changes and return to the emulator to check out the new UI:

Styling with Flexbox

So far, you’ve seen basic CSS properties that deal with margins, paddings and color. However, you might not be familiar with Flexbox, a more recent addition to the CSS specification that’s useful for handling complex layout across different screen sizes.

React Native uses the Yoga library under the hood to drive layout. Yoga is a C implementation of Flexbox and it includes bindings for Java (for Android), Swift, Objective-C, and C# (for .NET).

Generally you use a combination of Yoga’s flexDirection, alignItems, and justifyContent properties to manage your layout.
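As a quick sketch of how these properties fit together (the style name below is hypothetical, not part of the app):

const layoutStyles = StyleSheet.create({
  row: {
    flexDirection: 'row',            // main axis: horizontal; cross axis: vertical
    justifyContent: 'space-between', // distribute children along the main axis
    alignItems: 'center',            // center children along the cross axis
  },
});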

So far, your layout has a container with two children arranged vertically:

This is due to the default flexDirection value of column being active. flexDirection defines the main axis, and by extension the cross axis. Your container’s main axis is vertical, so its cross axis is horizontal.

alignItems determines the placement of children in the cross axis. Your app has set this value to center. This means the children are center-aligned.

You’re going to see some other layout options at play.

Open SearchPage.js and insert the following just after the closing tag of the second Text element:

<View style={styles.flowRight}>
  <TextInput
    underlineColorAndroid={'transparent'}
    style={styles.searchInput}
    placeholder='Search via name or postcode'/>
  <Button
    onPress={() => {}}
    color='#48BBEC'
    title='Go'
  />
</View>

You’ve added a view that holds a text input and a button.

In your styles definition, add the following new styles below the container style:

flowRight: {
  flexDirection: 'row',
  alignItems: 'center',
  alignSelf: 'stretch',
},
searchInput: {
  height: 36,
  padding: 4,
  marginRight: 5,
  flexGrow: 1,
  fontSize: 18,
  borderWidth: 1,
  borderColor: '#48BBEC',
  borderRadius: 8,
  color: '#48BBEC',
},

These set the placement of the text input and button.

Save your changes and check the emulator to see your updates:

The text field and Go button are on the same row, so you’ve wrapped them in a container view using the flowRight style which uses flexDirection: 'row' to horizontally place the items in a row.

You’ve also added a flexGrow: 1 style to the text input. Yoga first lays out the text input and button according to their sizes. It then distributes the remaining space according to the flexGrow values. The text input therefore takes over the remaining space.

Handling Assets

The final step to complete the search screen of the application is to add the house graphic. Download and unzip the images zip file.

Next, create a directory in your root project folder named Resources. Place the three images of the house in this directory.

Drawables: In Android, static app images are typically added to the project’s res/drawable folder. In React Native, however, it’s recommended that you don’t. Placing your image assets alongside your components keeps your components self-contained and doesn’t require the app to be relaunched when you add new images. It also provides a single place for adding images if you are building for both iOS and Android.

Back in SearchPage.js, add the following beneath the closing tag of the View component that wraps the text input and button:

<Image source={require('./Resources/house.png')} style={styles.image}/>

Now, add the image’s corresponding style to the end of the style list:

image: {
  width: 217,
  height: 138,
},

Save your changes and check out your new UI:

You may need to restart the packager on Windows if the image doesn’t show up.

Your current app looks good, but it’s somewhat lacking in functionality. Your task now is to add some state to your app and perform some actions.

Adding Component State

A React component can manage its internal state through an object called, you guessed it, state. Whenever a component’s state changes, render() is called.

Within SearchPage.js, add the following code just before render():

constructor(props) {
  super(props);
  this.state = {
    searchString: 'london'
  };
}

Your component now has a state variable, with searchString set to an initial value of london.

Within render(), change TextInput to the following:

<TextInput
  underlineColorAndroid={'transparent'}
  style={styles.searchInput}
  value={this.state.searchString}
  placeholder='Search via name or postcode'/>

This sets the TextInput value property — that is, the text displayed to the user — to the current value of the searchString state variable. This takes care of setting the initial state, but what happens when the user edits this text?

The first step is to create a method that acts as an event handler. Within the SearchPage class add the following method below the constructor:

_onSearchTextChanged = (event) => {
  console.log('_onSearchTextChanged');
  this.setState({ searchString: event.nativeEvent.text });
  console.log('Current: '+this.state.searchString+', Next: '+event.nativeEvent.text);
};

This defines a function using the => syntax. This is an arrow function, another recent addition to the JavaScript language that provides a succinct syntax for creating anonymous functions. Unlike a regular function, an arrow function doesn’t create its own this, so this inside the handler still refers to your component.

The function takes the value from the native event’s text property and uses it to update the component’s state. It also adds some logging code that will make sense shortly.
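As a standalone illustration of the syntax (the names here are hypothetical):

// Traditional anonymous function expression
const double = function(x) { return x * 2; };

// Equivalent arrow function: shorter, and it inherits this from the enclosing scope
const doubleArrow = x => x * 2;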

Note: JavaScript classes do not have access modifiers, so they have no concept of private. As a result you often see developers prefixing methods with an underscore to indicate that they should be considered private.

To wire up this method so it gets called when the text changes, return to the TextInput field within the render method and add an onChange property so the tag looks like the following:

<TextInput
  underlineColorAndroid={'transparent'}
  style={styles.searchInput}
  value={this.state.searchString}
  onChange={this._onSearchTextChanged}
  placeholder='Search via name or postcode'/>

Whenever the user changes the text, you invoke the function supplied to onChange; in this case, it’s _onSearchTextChanged.

There’s one final step before you refresh your app again: add the following logging statement to the top of render(), just before return:

console.log('SearchPage.render');

Save your changes and return to your emulator. You should see the text input’s initial value set to london:

Run the following in terminal to view the debug logs:

react-native log-android

In the emulator, edit the input text. You should see something like this:

11-26 12:11:48.827  3698  5067 I ReactNativeJS: SearchPage.render
11-26 12:18:01.006  3698  5067 I ReactNativeJS: _onSearchTextChanged
11-26 12:18:01.006  3698  5067 I ReactNativeJS: Current: london, Next: londona
11-26 12:18:01.006  3698  5067 I ReactNativeJS: SearchPage.render

Looking at the console logs, the order of the logging statements seems a little odd:

  1. This is the initial call to render() to set up the view.
  2. You invoke _onSearchTextChanged() when the text changes.
  3. You call this.setState() to schedule an update to the component state to reflect the new input text. This triggers another render.
  4. You log the current and the next search text values. Note that Current still shows the old value: setState() is asynchronous, so this.state hasn’t been updated yet when the log runs.

A React component state change triggers a UI update. This decouples the rendering logic from state changes affecting the UI. Most other UI frameworks put the onus on you to update the UI based on state changes. Alternatively, the updates are done through an implicit link between the state and the UI, for example by using Android’s Data Binding Library.

At this point you’ve probably spotted a fundamental flaw in this concept. Yes, that’s right — performance!

Surely you can’t just throw away your entire UI and re-build it every time something changes? This is where React gets really smart.

Each time the UI renders itself, it takes the view tree returned by your render methods, and reconciles — or diffs — it with the current Android UI view. The output of this reconciliation process is a simple list of updates that React needs to apply to the current view. That means only the things that have actually changed will re-render!

You can wrap your head around all that later; you still have some work to do in the app.

Initiating a Search

First, remove the three console.log statements you added above; they’re no longer necessary. Keep the value and onChange attributes on the TextInput, since they track the search string you’re about to use.

In order to implement the search functionality you need to handle the Go button press, create a suitable API request, and provide a visual indication that a query is in progress.

Within SearchPage.js, update the initial state within the constructor:

this.state = {
  searchString: 'london',
  isLoading: false,
};

The new isLoading property will keep track of whether a query is in progress.

Add the following logic to the start of render:

const spinner = this.state.isLoading ?
  <ActivityIndicator size='large'/> : null;

This is a ternary expression that optionally creates an activity indicator, depending on the component’s isLoading state. Because the entire component is rendered each time, you are free to mix JSX and JavaScript logic.

Within the JSX that defines the search UI in return, add the following line below the Image to place the spinner:

{spinner}

Next, add the following methods to the SearchPage class:

_executeQuery = (query) => {
  console.log(query);
  this.setState({ isLoading: true });
};

_onSearchPressed = () => {
  const query = urlForQueryAndPage('place_name', this.state.searchString, 1);
  this._executeQuery(query);
};

_executeQuery() will eventually run the query, but for now it simply logs a message to the console and sets isLoading appropriately so the UI can show the new state.

_onSearchPressed() configures and initiates the search query. This should kick off when the Go button is pressed.

To accomplish that, go back to the render method and replace the onPress prop for the Go Button as follows:

onPress={this._onSearchPressed}

Finally, add the following utility function just above the SearchPage class declaration:

function urlForQueryAndPage(key, value, pageNumber) {
  const data = {
      country: 'uk',
      pretty: '1',
      encoding: 'json',
      listing_type: 'buy',
      action: 'search_listings',
      page: pageNumber,
  };
  data[key] = value;

  const querystring = Object.keys(data)
    .map(key => key + '=' + encodeURIComponent(data[key]))
    .join('&');

  return 'https://api.nestoria.co.uk/api?' + querystring;
}

urlForQueryAndPage doesn’t depend on SearchPage, so it’s implemented as a free function rather than a method. It first collects the query parameters in data, then transforms them into name=value pairs separated by ampersands. Finally, it returns the complete Nestoria API URL for the property listings request.
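For example, with the default search state, the following call produces the URL you’ll see logged shortly:

const query = urlForQueryAndPage('place_name', 'london', 1);
// https://api.nestoria.co.uk/api?country=uk&pretty=1&encoding=json&listing_type=buy&action=search_listings&page=1&place_name=london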

Save your changes, head back to the emulator and press Go. You’ll see the activity indicator spin:

In terminal, the debug logs should show something like this:

11-26 12:37:40.876  3698  5261 I ReactNativeJS: https://api.nestoria.co.uk/api?country=uk&pretty=1&encoding=json&listing_type=buy&action=search_listings&page=1&place_name=london

Copy and paste that URL into your browser to see the result. You’ll see a massive JSON object. Don’t worry — you don’t need to understand that! You’ll add code to parse that now.

Performing an API Request

Still within SearchPage.js, update the initial state in the class constructor to add a message variable to the end of the list:

message: '',

Within render, add the following to the bottom of your UI, right after the spinner:

<Text style={styles.description}>{this.state.message}</Text>

You’ll use this to display a range of messages to the user.

Add the following code to the end of _executeQuery:

fetch(query)
  .then(response => response.json())
  .then(json => this._handleResponse(json.response))
  .catch(error =>
     this.setState({
      isLoading: false,
      message: 'Something bad happened ' + error
   }));

This makes use of the fetch function, which is part of the Fetch API. The asynchronous response is returned as a Promise. The success path calls _handleResponse which you’ll define next, to parse the JSON response.
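As an aside, if you prefer async/await over promise chaining, an equivalent sketch of the same request might look like the following; this is just an illustration, not what the tutorial uses:

async _executeQueryAsync(query) {
  this.setState({ isLoading: true });
  try {
    const response = await fetch(query);
    const json = await response.json();
    this._handleResponse(json.response); // defined next
  } catch (error) {
    this.setState({
      isLoading: false,
      message: 'Something bad happened ' + error,
    });
  }
}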

Add the following function to SearchPage:

_handleResponse = (response) => {
  this.setState({ isLoading: false , message: '' });
  if (response.application_response_code.substr(0, 1) === '1') {
    console.log('Properties found: ' + response.listings.length);
  } else {
    this.setState({ message: 'Location not recognized; please try again.'});
  }
};

This clears isLoading and logs the number of properties found if the query was successful.

Note: Nestoria has a number of non-1** response codes that are potentially useful. For example, 202 and 200 return a list of best-guess locations.

Save your changes, head back to the emulator and press Go. You should see a debug log message saying that 20 properties (the default result size) were found:

11-26 12:48:01.837  3698  5366 I ReactNativeJS: Properties found: 20

Also note that when this message is logged, the spinner goes away.

It’s time to see what those 20 properties actually look like!

Displaying the Results

Create a new file SearchResults.js, and add the following:

'use strict';

import React, { Component } from 'react'
import {
  StyleSheet,
  Image,
  View,
  TouchableHighlight,
  FlatList,
  Text,
} from 'react-native';

This imports the relevant modules you’ll use.

Next, add the component:

export default class SearchResults extends Component {
  static navigationOptions = {
    title: 'Results',
  };

  _keyExtractor = (item, index) => index.toString();

  _renderItem = ({item}) => {
    return (
      <TouchableHighlight
        underlayColor='#dddddd'>
        <View>
          <Text>{item.title}</Text>
        </View>
      </TouchableHighlight>
    );

  };

  render() {
    const { params } = this.props.navigation.state;
    return (
      <FlatList
        data={params.listings}
        keyExtractor={this._keyExtractor}
        renderItem={this._renderItem}
      />
    );
  }
}

The above code makes use of a more specialized component — FlatList — which displays rows of data within a scrolling container, similar to RecyclerView. Here’s a look at the FlatList properties:

  • data provides the data to display
  • keyExtractor provides a unique key that React uses for efficient list item management
  • renderItem specifies how the UI is rendered for each row

Save your new file.

In App.js, add the following just beneath the import statements:

import SearchResults from './SearchResults';

This brings in the newly added SearchResults class.

Now, modify your StackNavigator as follows:

const App = StackNavigator({
  Home: { screen: SearchPage },
  Results: { screen: SearchResults },
});

This adds a new route named Results to the navigator and registers SearchResults as the component that will handle this route. When a component is registered with a navigator, it gets a navigation prop added to it that can be used to manage screen transitions and pass in data.

Save your file changes.

In SearchPage.js, go to _handleResponse and replace the console.log statement with the following:

this.props.navigation.navigate(
  'Results', {listings: response.listings});

This navigates to your newly added route and passes in the listings data from the API request via the params argument.

Save your changes, head back to the emulator and press Go. You’ll be greeted by a list of properties:

It’s great to see the property listings, but that list is a little drab. Time to liven things up a bit.

A Touch of Style

Add the following style definition at the end of SearchResults.js:

const styles = StyleSheet.create({
  thumb: {
    width: 80,
    height: 80,
    marginRight: 10
  },
  textContainer: {
    flex: 1
  },
  separator: {
    height: 1,
    backgroundColor: '#dddddd'
  },
  price: {
    fontSize: 25,
    fontWeight: 'bold',
    color: '#48BBEC'
  },
  title: {
    fontSize: 20,
    color: '#656565'
  },
  rowContainer: {
    flexDirection: 'row',
    padding: 10
  },
});

This defines all the styles that you are going to use to render each row.

Add a new component representing a row by adding the following just under the import statements:

class ListItem extends React.PureComponent {
  _onPress = () => {
    this.props.onPressItem(this.props.index);
  }

  render() {
    const item = this.props.item;
    const price = item.price_formatted.split(' ')[0];
    return (
      <TouchableHighlight
        onPress={this._onPress}
        underlayColor='#dddddd'>
        <View>
          <View style={styles.rowContainer}>
            <Image style={styles.thumb} source={{ uri: item.img_url }} />
            <View style={styles.textContainer}>
              <Text style={styles.price}>{price}</Text>
              <Text style={styles.title}
                numberOfLines={1}>{item.title}</Text>
            </View>
          </View>
          <View style={styles.separator}/>
        </View>
      </TouchableHighlight>
    );
  }
}

This manipulates the returned price, which is in the format 300,000 GBP, to remove the GBP suffix. Then it renders the row UI using techniques that you are by now quite familiar with. Of note, an Image is added to the row and is loaded from a returned URL (item.img_url) which React Native decodes off the main thread.

You may have noticed that this component extends React.PureComponent. React re-renders a Component if its props or state changes. React only re-renders a PureComponent if a shallow compare of the state and props shows changes. Used under the right conditions, this can give your app a performance boost.
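Conceptually, PureComponent behaves as if it implemented shouldComponentUpdate with a shallow comparison. Here’s a simplified sketch, where shallowEqual stands for a hypothetical key-by-key comparison helper (this is not React’s actual implementation):

class MyPureComponent extends React.Component {
  shouldComponentUpdate(nextProps, nextState) {
    // Re-render only if some top-level prop or state value changed.
    // shallowEqual: hypothetical helper comparing keys one level deep.
    return !shallowEqual(this.props, nextProps) ||
           !shallowEqual(this.state, nextState);
  }
}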

Now replace _renderItem with the following:

_renderItem = ({item, index}) => (
  <ListItem
    item={item}
    index={index}
    onPressItem={this._onPressItem}
  />
);

_onPressItem = (index) => {
  console.log("Pressed row: "+index);
};

_onPressItem is passed into ListItem to handle a row selection. This design pattern is equivalent to a callback. In this callback, the index for the selected row is logged.

Save your work, head back to the emulator, press Go, and check out your results:

Tap the first row and verify that your debug console reflects the selection:

11-26 13:43:56.923  3698  5756 I ReactNativeJS: Pressed row: 0

Try tapping other listings or searching other locations in the UK.

Where To Go From Here?

Congratulations on completing this React Native tutorial! You can find the complete project here if you want to compare notes. :]

As a challenge, try showing a property’s details when the user selects one from the search list. You can check out the challenge solution if you get stuck.

Before opening the finished project or the challenge solution, first run yarn in terminal in the root folder of the project.

Check out React Native’s source code if you’re curious. I suggest taking a look at this ES6 resource to continue brushing up on modern JavaScript.

You may also want to check out the equivalent tutorial for building React Native apps on iOS.

If you’re a web developer, you’ve seen how to use JavaScript to easily create a native app. If you’re a native app developer, you’ve gained some appreciation for React Native’s fast iteration cycle. Whether you decide to use React Native in a future app or simply stick with native Android, I hope you’ve learned some interesting principles to apply to your next project.

If you have any questions or comments on this React Native tutorial, feel free to join the discussion in the forums below!

The post React Native Tutorial: Building Android Apps with JavaScript appeared first on Ray Wenderlich.

Screencast: What’s New in Xcode 9: Source Control Improvements

Open Source Swift, Raspberry Pi, and Firebase – Podcast S07 E07


In this episode Louie de la Rosa from Capital One joins Dru and Janie to talk about Open Swift applications on the server and the Raspberry Pi, and then Janie braves the world of Firebase.

[Subscribe in iTunes] [RSS Feed]

Interested in sponsoring a podcast episode? We sell ads via Syndicate Ads, check it out!

Episode Links

Open Swift

Firebase

Contact Us

Where To Go From Here?

We hope you enjoyed this episode of our podcast. Be sure to subscribe in iTunes to get notified when the next episode comes out.

We’d love to hear what you think about the podcast, and any suggestions on what you’d like to hear in future episodes. Feel free to drop a comment here, or email us anytime at podcast@raywenderlich.com.

The post Open Source Swift, Raspberry Pi, and Firebase – Podcast S07 E07 appeared first on Ray Wenderlich.

How To Make A Game Like Bomberman With Unity

Update note: This tutorial has been updated to Unity 2017.1 by Brian Broom. The original tutorial was written by Eric Van de Kerckhove.

Bomberman tutorialBlowing stuff up is fun. Blowing stuff up with friends is even more fun. Blowing your friends up? We have a winner!

Unfortunately, it’s a little difficult to secure C4 explosives along with some buddies willing to explore the afterlife. Thankfully, there are some alternatives.

Enter this Bomberman tutorial. Bomberman is a game where four players battle it out by strategically placing bombs across the battlefield with the goal being to blow each other up.

Each bomb has a few seconds of delay before it explodes and spews out an inferno in four directions. For additional excitement, explosions can trigger impressive chain reactions.

The original Bomberman came out in the early 80s and spinoffs have been published ever since. It’s a timeless game formula that’s still a lot of fun to play and build.

The original title was 2D, but you’re going to create a basic 3D version inside of Unity.

In this tutorial, you’ll learn the following:

  • Dropping bombs and snapping them to a tile position.
  • Spawning explosions by using raycasts to check for free tiles.
  • Handling explosions colliding with the player.
  • Handling explosions colliding with bombs.
  • Handling player death(s) to determine a win/draw.

Loosen up your wrists and get ready to shout “fire in the hole”. Things are about to get really explody inside of Unity. :]

Note: This Bomberman tutorial assumes you know your way around the Unity editor and know how to edit code in a text editor. Check out some of our other Unity tutorials first if you’re not confident yet.

Getting Started with this Bomberman tutorial

Download the Starter Project for this Bomberman tutorial and extract it to a location of your choosing.

Open up the Starter Project in Unity and start this Bomberman tutorial. The assets are sorted inside several folders:

AssetFolders

  • Animation Controllers: Holds the player animation controller, including the logic to animate the players’ limbs when they walk around. If you need to brush up on animation, check out our Introduction to Unity Animation tutorial
  • Materials: Contains the block material for the level
  • Models: Holds the player, level and bomb models, as well as their materials
  • Music: Contains the soundtrack
  • Physics Materials: Holds the physics material of the players — these are special kinds of materials that add physical properties to surfaces. In this tutorial, it’s used to let the players move effortlessly around the level without friction.
  • Prefabs: Contains the bomb and explosion prefabs
  • Scenes: Holds the game scene
  • Scripts: Contains the starter scripts; be sure to open them and read through them because they’re heavily commented to make them easier to understand
  • Sound Effects: Holds the sound effects for the bomb and explosion
  • Textures: Contains both player textures

Dropping A Bomb

If it’s not opened yet, open up the Game scene and give it a run.

FirstRun

Both players can walk around the map using either the WASD keys and the arrow keys.

Normally, when player 1 (the red one) presses Space he should place a bomb at his feet; player 2 should be able to do the same thing by pressing Enter/Return.

However, that doesn’t work yet. You need to implement the code for placing bombs first, so open the Player.cs script in your favorite code editor.

This script handles all player movement and animation logic. It also includes a method named DropBomb that simply checks if the bombPrefab GameObject is attached:

private void DropBomb()
{
  if (bombPrefab)
  { //Check if bomb prefab is assigned first

  }
}

To make a bomb drop beneath the player, add the following line inside the if statement:

Instantiate(bombPrefab, myTransform.position, bombPrefab.transform.rotation);

This will make a bomb spawn at the player’s feet. Save your changes to the script and then give your scene a run to try it out:

DropBombs

It’s working great!

There’s a small problem with the way the bombs get dropped, though: you can drop them wherever you want, and this will create some problems when you need to calculate where the explosions should spawn.

You’ll learn the specifics of why this is important when this tutorial covers how to make the explosions.

Snapping

The next task is to make sure the bombs snap into position when dropped so they align nicely with the grid on the floor. Each tile on this grid is 1×1, so it’s fairly easy to make this change.

In Player.cs, edit the Instantiate() you have just added to DropBomb() like so:

Instantiate(bombPrefab, new Vector3(Mathf.RoundToInt(myTransform.position.x),
  bombPrefab.transform.position.y, Mathf.RoundToInt(myTransform.position.z)),
  bombPrefab.transform.rotation);

Mathf.RoundToInt rounds the float x and z values of the player’s position to the nearest integer, which snaps the bombs to the tile positions:

Bombs snap to a grid
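For example, with hypothetical values, a player standing at (2.7, 0, 3.2) drops a bomb onto the tile at (3, y, 3):

Mathf.RoundToInt(2.7f); // returns 3
Mathf.RoundToInt(3.2f); // returns 3
// A player at (2.7f, 0f, 3.2f) therefore spawns the bomb at
// (3, bombPrefab.transform.position.y, 3)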

Save your changes, play the scene and run around while dropping some bombs. The bombs will now snap into place:

BombsSnap

Although dropping bombs on the map is pretty fun, you know it’s really all about the explosions! Time to add some firepower to this thing. :]

Creating Explosions

To start off, you’re going to need a new script:

  • Select the Scripts folder in the Project view.
  • Press the Create button.
  • Select C# Script.
  • Name the newly created script Bomb.

MakeBombScript

Now attach the Bomb script to the Bomb prefab:

  • In the Prefabs folder, select the Bomb GameObject.
  • In the Inspector window click the Add Component button.
  • Type bomb in the search box.
  • Select the Bomb script you just made.

Finally, open the Bomb script in your code editor. Inside of Start(), add the following line of code:

Invoke("Explode", 3f);

Invoke() takes 2 parameters: first, the name of the method you want called, and second, the delay before it gets called. In this case, you want the bomb to explode in three seconds, so you schedule Explode() — you’ll add it next.

Add the following under Update():

void Explode()
{

}

Before you can spawn any Explosion GameObjects, you’ll need a public variable of type GameObject so you can assign the Explosion prefab in the Editor. Add the following right above Start():

public GameObject explosionPrefab;

Save your file and return to the Editor. Select the Bomb prefab in the Prefabs folder and drag the Explosion prefab to the Explosion Prefab slot:

DragExplosionPrefab

Once you’ve done this, return to the code editor. You finally get to write the code that makes things go boom!

Inside Explode(), add the following lines:

Instantiate(explosionPrefab, transform.position, Quaternion.identity); //1

GetComponent<MeshRenderer>().enabled = false; //2
transform.Find("Collider").gameObject.SetActive(false); //3
Destroy(gameObject, .3f); //4

This piece of code does the following:

  1. Spawns an explosion at the bomb’s position.
  2. Disables the mesh renderer, making the bomb invisible.
  3. Disables the collider, allowing players to move through and walk into an explosion.
  4. Destroys the bomb after 0.3 seconds; this ensures all explosions will spawn before the GameObject is destroyed.

Save your Bomb script, return to the editor and give your game a play. Put down some bombs and bask in the fiery goodness as they explode!

Cool guys don’t look at explosions!

Add a LayerMask

The walls in the game are, luckily, bombproof. The bombs are not bombproof, and the players are definitely not bombproof. You need a way to tell if an object is a wall or not. One way to do that is with a LayerMask.

A LayerMask selectively filters out certain layers and is commonly used with raycasts. In this case, you need to filter out only the blocks so the ray doesn’t hit anything else.

In the Unity Editor click the Layers button at the top right and select Edit Layers…

EditLayersButton

If necessary, click the expansion triangle in front of the word Layers to expand the list of layers.
Click the text field next to User Layer 8 and type in “Blocks”. This defines a new layer you can use.

BlocksLayer

Inside the hierarchy view, select the Blocks GameObject, inside the Map container object.

BlocksGameObject

Change the layer to your newly created Blocks layer:

SelectBlocksLayer

When the Change Layer dialog comes up, click the “Yes, change children” button, to apply to all of the yellow blocks scattered across the map.

ChangeLayerDialog

Finally add a public reference to a LayerMask so the Bomb script will be able to access the layer by adding the following line just below the reference to the explosionPrefab.

public LayerMask levelMask;

Don’t forget to save your code.

Bigger! The Explosions Must be Bigger!

The next step is to add the iconic touch of expanding rows of explosions. To do that, you’ll need to create a coroutine.

Note: A coroutine is essentially a function that allows you to pause execution and return control to Unity. At a later point, execution of that function will resume from where it last left off.

People often confuse coroutines with multi-threading. They are not the same: Coroutines run in the same thread and they resume at intermediate points in time.

To learn more about coroutines and how to define them, check out the Unity documentation.
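Here's a minimal standalone sketch of the pattern (a hypothetical countdown, declared inside any MonoBehaviour, unrelated to the bomb logic):

private IEnumerator CountDown()
{
    for (int i = 3; i > 0; i--)
    {
        Debug.Log(i);
        yield return new WaitForSeconds(1f); // pause here; resume one second later
    }
    Debug.Log("Boom!");
}

// Kicked off from any MonoBehaviour method with:
// StartCoroutine(CountDown());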

Return to your code editor and edit the Bomb script. Under Explode(), add a new IEnumerator named CreateExplosions:

private IEnumerator CreateExplosions(Vector3 direction)
{
  return null; // placeholder for now
}

Create the Coroutines

Add the following four lines of code between the Instantiate call and the disabling of the MeshRenderer in Explode():

StartCoroutine(CreateExplosions(Vector3.forward));
StartCoroutine(CreateExplosions(Vector3.right));
StartCoroutine(CreateExplosions(Vector3.back));
StartCoroutine(CreateExplosions(Vector3.left));

The StartCoroutine calls will start up the CreateExplosions IEnumerator once for every direction.

Now comes the interesting part. Inside of CreateExplosions(), replace return null; // placeholder for now with this piece of code:

//1
for (int i = 1; i < 3; i++)
{
  //2
  RaycastHit hit;
  //3
  Physics.Raycast(transform.position + new Vector3(0, .5f, 0), direction, out hit,
    i, levelMask);

  //4
  if (!hit.collider)
  {
    //5
    Instantiate(explosionPrefab, transform.position + (i * direction),
      explosionPrefab.transform.rotation);
  }
  //6
  else
  {
    //7
    break;
  }

  //8
  yield return new WaitForSeconds(.05f);
}

This looks like quite a complicated code snippet, but it's actually fairly straightforward. Here's a section-by-section explanation:

  1. Iterates a for loop for every unit of distance you want the explosions to cover. In this case, the explosion will reach two meters.
  2. A RaycastHit object holds all the information about what and at which position the Raycast hits -- or doesn't hit.
  3. This important line of code sends out a raycast from the center of the bomb towards the direction you passed through the StartCoroutine call. It then outputs the result to the RaycastHit object. The i parameter dictates the distance the ray should travel. Finally, it uses a LayerMask named levelMask to make sure the ray only checks for blocks in the level and ignores the player and other colliders.
  4. If the raycast doesn't hit anything then it's a free tile.
  5. Spawns an explosion at the position the raycast checked.
  6. The raycast hits a block.
  7. Once the raycast hits a block, it breaks out of the for loop. This ensures the explosion can't jump over walls.
  8. Waits for 0.05 seconds before doing the next iteration of the for loop. This makes the explosion more convincing by making it look like it's expanding outwards.

Here's how it looks in action:

BombExplosionDiagram

The red line is the raycast. It checks the tiles around the bomb for a free space, and if it finds one then it spawns an explosion. When it hits a block, it doesn't spawn anything and it stops checking in that direction.

Now you can see the reason why bombs need to be snapped to the center of the tiles. If the bombs could go anywhere, then in some edge cases the raycast will hit a block and not spawn any explosions because it is not aligned properly with the level:

AllignedVsNot

Finally, select the Bomb prefab in the Prefabs folder in the project view, and change the Level Mask to Blocks.

BlocksLevelMask

Run the scene again and drop some bombs. Watch your explosions spread out nicely and go around the blocks:

ExplosionSpread

Congratulations, you've just made it through the hardest part of this tutorial!

Go ahead and reward yourself with a refreshing drink or a delicious snack, think about what you just did, and then come back to play around with reactions to explosions!

Chain Reactions

When an explosion from one bomb touches another, the next bomb should explode -- this feature makes for a more strategic, exciting and fiery game.

Luckily this is quite easy to implement.

Open up the Bomb.cs script in your code editor. Add a new method named OnTriggerEnter below CreateExplosions():

public void OnTriggerEnter(Collider other)
{

}

OnTriggerEnter is a pre-defined method in a MonoBehaviour that gets called upon collision of a trigger collider and a rigidbody. The Collider parameter, named other, is the collider of the GameObject that entered the trigger.

In this case, you need to check the colliding object and make the bomb explode when it is an explosion.

First, you need to know if the bomb has exploded. The exploded variable needs to be declared first, so add the following right under the levelMask variable declaration:

private bool exploded = false;

Inside OnTriggerEnter(), add this snippet:

if (!exploded && other.CompareTag("Explosion")) // 1 & 2
{
  CancelInvoke("Explode"); // 3
  Explode(); // 4
}

This snippet does four things:

  1. Checks that the bomb hasn't exploded yet.
  2. Checks if the trigger collider has the Explosion tag assigned.
  3. Cancels the pending Explode invocation scheduled when the bomb was dropped -- if you don't do this, the bomb might explode twice.
  4. Explodes!

Now you have a variable, but it hasn't been changed anywhere yet. The most logical place to set this is inside Explode(), right after you disable the MeshRenderer component:

...
GetComponent<MeshRenderer>().enabled = false;
exploded = true;
...

Now everything is set up, so save your file and run the scene again. Drop some bombs near each other and watch what happens:

ChainReaction

Now you've got some seriously destructive firepower going on. One little explosion can set your little game world on fire by triggering other bombs, allowing for these cool chain reactions!

The last thing to do is to handle players' reactions to explosions (Hint: they're not good!) and how the game translates the reaction into a win or draw state.

Player Death And How To Handle It

Open the Player.cs script in your code editor.

Right now, there's no variable to indicate if the player is dead or alive, so add a boolean variable at the top of the script, right under the canMove variable:

public bool dead = false;

This variable is used to keep track if the player died to an explosion.

Next, add this above all other variable declarations:

public GlobalStateManager globalManager;

This is a reference to the GlobalStateManager, a script that is notified of all player deaths and determines which player won.

Inside OnTriggerEnter(), there's already a check to see if the player was hit by an explosion, but all it does right now is log it in the console window.

Add this snippet under the Debug.Log call:

dead = true; // 1
globalManager.PlayerDied(playerNumber); // 2
Destroy(gameObject); // 3

This piece of code does the following things:

  1. Sets the dead variable so you can keep track of the player's death.
  2. Notifies the global state manager that the player died.
  3. Destroys the player GameObject.

Save this file and return to the Unity editor. You'll need to link the GlobalStateManager to both players:

  • In the hierarchy window, select both Player GameObjects.
  • Drag the Global State Manager GameObject into their Global Manager slots.

LinkGlobalStateManager

Run the scene again and make sure one of the players is obliterated by an explosion.

HitByExplosion

Every player that gets in the way of an explosion dies instantly.

The game doesn't know who won though because the GlobalStateManager doesn't use the information it received yet. Time to change that.

Declare the Winner

Open up GlobalStateManager.cs in your code editor.

For the GlobalStateManager to keep track of what player(s) died, you need two variables. Add these at the top of the script above PlayerDied():

private int deadPlayers = 0;
private int deadPlayerNumber = -1;

First off, deadPlayers will hold the number of players that have died. The deadPlayerNumber is set once the first player dies and indicates which one it was.

Now that you have this set up, you can add the actual logic. In PlayerDied(), add this piece of code:

deadPlayers++; // 1

if (deadPlayers == 1)
{ // 2
    deadPlayerNumber = playerNumber; // 3
    Invoke("CheckPlayersDeath", .3f); // 4
}

This snippet does the following:

  1. Adds one dead player.
  2. If this is the first player that died...
  3. It sets the dead player number to the player that died first.
  4. Checks, after 0.3 seconds, whether the other player also died or just one bit the dust.

That last delay is crucial for allowing a draw check. If you checked right away, you might not see that everybody died. 0.3 seconds is sufficient to determine if everybody died.

Win, Lose or Draw

You've made it to the very last section! Here you create the logic behind choosing between a win or a draw!

Make a new method named CheckPlayersDeath in the GlobalStateManager script:

void CheckPlayersDeath()
{
  // 1
  if (deadPlayers == 1)
  {
    // 2
    if (deadPlayerNumber == 1)
    {
      Debug.Log("Player 2 is the winner!");
    // 3
    }
    else
    {
      Debug.Log("Player 1 is the winner!");
    }
     // 4
  }
  else
  {
    Debug.Log("The game ended in a draw!");
  }
}

This is the logic behind the different if-statements in this method:

  1. A single player died and he's the loser.
  2. Player 1 died so Player 2 is the winner.
  3. Player 2 died so Player 1 is the winner.
  4. Both players died, so it's a draw.

Save your code, then give the game a final run and test if the console window reads out what player won or if it ended up as a draw:

FInalRun

And that concludes this tutorial! Now go ask a friend if they want to share in the fun and blow them up when you get the chance. :]

Where To Go From Here?

Download the finished Bomberman tutorial Final Project if you got stuck.

Now you know how to make a basic Bomberman-like game by using Unity.

This Bomberman tutorial used some particle systems for the bomb and the explosion, if you want to learn more about particle systems, check out my Introduction To Unity: Particle Systems Tutorial.

I highly encourage you to keep working on this game -- make it your own by adding new features! Here are some suggestions:

  • Make the bombs "pushable", so you can escape bombs next to you and push them towards your opponent
  • Limit the amount of bombs that can be dropped
  • Make it easy to quickly restart the game
  • Add breakable blocks that get destroyed by the explosions
  • Create interesting powerups
  • Add lives, or a way to earn them
  • UI elements to indicate what player won
  • Find a way to allow more players

Be sure to share your creations here, I'd love to see what you guys can come up with! As always, I hope you enjoyed this tutorial!

If you have any remarks or questions, you can do so in the Comments section.

The post How To Make A Game Like Bomberman With Unity appeared first on Ray Wenderlich.

OAuth 2.0 with Swift Tutorial

Update note: This tutorial has been updated to Swift 4 by Owen Brown. The original tutorial was written by Corinne Krych.
Take a look at a few different OAuth 2.0 libraries, and find out how to integrate them into an app.

It’s likely that you’ve bumped into OAuth 2.0 and the different families of flows while building apps to share content with your favorite social network (Facebook, Twitter, etc) or with your enterprise OAuth 2.0 server — even if you weren’t aware of what was going on under the hood. But do you know how to hook up to your service using OAuth 2.0 in an iOS app?

In this tutorial, you’ll work on a selfie-sharing app named Incognito as you learn how to use the AeroGear OAuth2 and OAuthSwift open source OAuth 2.0 libraries to share your selfies on Google Drive.

Getting Started

Download the Incognito starter project. The starter project uses CocoaPods to fetch AeroGear dependencies and contains everything you need, including generated pods and xcworkspace directories.

Open Incognito.xcworkspace in Xcode. The project is based on a standard Xcode Single View Application template, with a single storyboard which contains a single view controller ViewController.swift. All UI actions are already handled in ViewController.swift.

Build and run your project to see what the app looks like:

OAuth_App

The app lets you pick your best selfie and add some accessories to the image. Did you recognize me behind my disguise? :]

Note: To add photos in the simulator, simply go to the home screen using Cmd + Shift + H and drag and drop your images onto the simulator.

The missing part in the app is adding the ability to share on Google Drive using two different OAuth 2.0 libraries.

Mission impossible? Nope, it’s nothing you can’t handle! :]

Instead of boring you with an introduction to the RFC6749 OAuth2 specification, let me tell you a story…

Explaining the Need for OAuth 2.0

On Monday morning, Bob, our mobile nerd, bumps into Alice, another friendly geek, in front of the coffee machine. Bob seems busy, carrying a heavy stack of documents: his boss wants him to delve into the OAuth 2.0 specification for the Incognito app.

Put any two developers in a coffee room and soon they’ll chat about geeky things, of course. Bob asks Alice:

“…what problem are we trying to solve with OAuth 2.0?”

oauth2-explained-1

On one side, you have services in the form of APIs, such as the Twitter API, which you can use to get a list of followers or Tweets. Those APIs handle your confidential data, which is protected by a login and password.

On the other side, you have apps that consume those services. Those apps need to access your data, but do you want to trust all of them with your credentials? Maybe — but maybe not.

This brings up the concept of delegated access. OAuth2 lets users grant third-party apps access to their web resources, without sharing their passwords, through a security object known as an access token. It’s impossible to obtain the password from the access token, since your password is kept safe inside the main service. If an app wants to connect to the service, it must get its own access token. Access tokens can then be revoked if you ever want to revoke access to just that app.

OAuth 2.0 works with the following four actors:

  • authorization server: responsible for authentication and authorization — it provides the access token.
  • resource server: in charge of serving up resources if a valid token is provided.
  • resource owner: the owner of the data — that is, the end user of Incognito.
  • client: the Incognito mobile app.

The OAuth 2.0 specification describes the interactions between these actors as grant flows.

The specification details four different grant flows that can be grouped into two different families:

  • 3-legged flows: the end user needs to grant permission in these cases. The implicit grant is for browser-based apps that aren’t capable of keeping tokens secure. The authorization code grant, which generates an access token and optionally a refresh token, is for clients capable of keeping tokens secure. Such clients include mobile apps which have somewhere secure they can store the token, such as in the keychain on iOS.
  • 2-legged flows: the credentials are given to the app. The key difference here is that the resource owner inputs the credentials directly into the client. An example of where you see this in practice is when accessing many APIs, e.g. Parse, as a developer and put your key in your app.

You’ll use your existing Google Drive account and upload your Incognito selfies there. This is a good case for implementation of the 3-legged authorization code grant.

The Authorization Dance

Although using open source libraries hides most of the sticky details of the OAuth 2.0 protocol from you, knowing its basic inner workings will help you get the configuration right.

Here are the steps involved in the authorization code grant dance:

Step 0: Registration

Your application needs to be registered with the service you want to access. In your case, for Incognito, that’s Google Drive. Don’t worry, the following section will explain how to do that.

Step 1: Authorization Code

The dance begins when Incognito sends a request for an authorization code to the third-party service that includes:

  • client ID: Provided during service registration. Defines which app is talking to the service.
  • redirect URI: Where the user should be redirected after entering their credentials into the service, and granting permission.
  • scope: Used to tell the service what level of permission the app should have.

The app then switches to the web browser. Once the user logs in, the Google authorization server displays a grant page: “Incognito would like to access your photos: Allow/Deny”. When the end user clicks “Allow”, the server redirects to the Incognito app using the redirect URI and sends an authorization code to the app.
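To make this concrete, an authorization request is ultimately just a URL. A hypothetical request for Incognito might look like the following; the parameter names come from the OAuth 2.0 spec, while the endpoint and values are illustrative placeholders:

https://accounts.google.com/o/oauth2/v2/auth
  ?response_type=code
  &client_id=YOUR_GOOGLE_CLIENT_ID
  &redirect_uri=com.raywenderlich.Incognito%3A%2Foauth2callback
  &scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive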

Step 2: Exchange Code for Token

The authorization code is only temporary; therefore the OAuth 2.0 library has to exchange this temporary code for a proper access token, and optionally, a refresh token.

Step 3: Get Resources

Using the access token, Incognito can access protected resources on the server — that is, the resources the end-user granted access to. Your upload is free to proceed.

Ready to see this in action? First, you need to register with the OAuth 2.0 provider: Google.

Registering With your OAuth 2.0 Provider

If you don’t have a Google account, go create one now. It’s OK; I’ll wait for you. :]

Open https://console.developers.google.com in your browser; you’ll be prompted to authenticate with Google.

Click Create Project and name your new project Incognito:

Next, you need to enable the Drive API.

Click Library in the left menu, search for Google Drive API and select it. On the next screen, click Enable:

Now you need to create new credentials to access your Drive accounts from the app.

Select Credentials in the left menu and, from the blue Create Credentials dropdown, select OAuth client ID.

Then click Configure consent screen and in the screen that appears, fill out the following information:

  • Email address: Select your email address
  • Product name: Incognito
  • Homepage URL: http://www.raywenderlich.com

Click Save and you’ll return to the Client ID screen. Select iOS and enter com.raywenderlich.Incognito as your Bundle ID.

The authorization server will use the bundle id entered above as the redirect URI.

Finally, click Create. A popup with the Client ID appears; just click OK.
The important parameter you’ll need later is the Client ID. You can grab it anytime by clicking Credentials in the left menu and picking your client ID from the OAuth ID list.

Now that you’ve registered with Google, you’re ready to start your OAuth 2.0 implementation using the first OAuth 2.0 library: AeroGear with an external browser.

Authenticating with AeroGear and External Browsers

Open ViewController.swift and add the following imports to the top of the file:

import AeroGearHttp
import AeroGearOAuth2

Now, add the following instance variable inside the ViewController class:

private let http = Http(baseURL: "https://www.googleapis.com")

You’ll use this instance of Http, which comes from the AeroGearHttp library, to perform HTTP requests.

Still in ViewController.swift, find the empty share(_:) method and add the following code to it:

//1
let googleConfig = GoogleConfig(
  clientId: "YOUR_GOOGLE_CLIENT_ID",
  scopes:["https://www.googleapis.com/auth/drive"])

//2
let gdModule = AccountManager.addGoogleAccount(config: googleConfig)
//3
http.authzModule = gdModule
//4
let multipartData = MultiPartData(data: snapshot(),
  name: "image",
  filename: "incognito_photo",
  mimeType: "image/jpg")
let multipartArray =  ["file": multipartData]
//5
http.request(method: .post, path: "/upload/drive/v2/files",  parameters: multipartArray) {
  (response, error) in
  if (error != nil) {
    self.presentAlert("Error", message: error!.localizedDescription)
  } else {
    self.presentAlert("Success", message: "Successfully uploaded!")
  }
}

Here’s what’s going on in the method above:

  1. Create a configuration. You’ll need to replace YOUR_GOOGLE_CLIENT_ID above with the Client ID from your Google Console to use the correct authorization configuration. At initialization, you also define the scope of the grant request. In the case of Incognito, you need access to the Drive API.
  2. You then instantiate an OAuth2 module via AccountManager utility methods.
  3. Next you inject the OAuth2 module into the HTTP object, which links the HTTP object to the authorization module.
  4. Then you create a multi-part data object to encapsulate the information you wish to send to the server.
  5. Finally, you use a simple HTTP call to upload the photo. The library checks that an OAuth2 module is plugged into HTTP and makes the appropriate call for you. This will result in one of the following outcomes:
    • start the authorization code grant if no access token exists.
    • refresh the access token if needed.
    • if all tokens are available, simply run the POST call.
Note: For more information on how to use AeroGear OAuth2, either check out AeroGear’s online documentation and API reference, or browse through the source code in the Pods section.

Build and run your app; select an image, add an overlay of your choosing, then tap the Share button. Enter your Google credentials if you’re prompted; if you’ve logged in before, your credentials may be cached. You’ll be redirected to the grant page. Tap Accept and…

Boom — you receive the Safari Cannot Open Page error message. :[ What’s up with that?

Invalid address in OAuth2 flow

Once you tap Accept, the Google OAuth site redirects you to com.raywenderlich.Incognito://[some url]. Therefore, you’ll need to enable your app to open this URL scheme.

Note: Safari stores your authentication response in a cookie on the simulator, so you won’t be prompted again to authenticate. To clear these cookies in the simulator, go to Hardware\Erase All Content and Settings.

Configuring the URL Scheme

To allow your user to be redirected back to Incognito, you’ll need to associate a custom URL scheme with your app.

Go to the Incognito\Supporting Files group in Xcode and find Info.plist. Right click on it and choose Open As\Source Code.

Add the following to the bottom of the plist, right before the closing </dict> tag:

<key>CFBundleURLTypes</key>
<array>
    <dict>
        <key>CFBundleURLSchemes</key>
        <array>
            <string>com.raywenderlich.Incognito</string>
        </array>
    </dict>
</array>

The scheme is the first part of a URL. In web pages, for example, the scheme is usually http or https. iOS apps can specify their own custom URL schemes, such as com.raywenderlich.Incognito://doStuff. The important point is to choose a custom scheme that is unique among all apps installed on your users’ devices.

The OAuth 2.0 dance uses your custom URL scheme to re-enter the application from which the request came. Custom schemes, like any URL, can have parameters. In this case, the authorization code is contained in the code parameter. The OAuth 2.0 library will extract the authorization code from the URL and pass it in the next request in exchange for the access token.
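To make that concrete, here’s a minimal sketch, assuming a hypothetical callback URL such as com.raywenderlich.Incognito://oauth2Callback?code=SOME_CODE, of how a library can pull the authorization code out with URLComponents:

import Foundation

// Extracts the value of the "code" query parameter from an OAuth 2.0
// callback URL. Returns nil if the parameter is missing.
func authorizationCode(from callbackURL: URL) -> String? {
  let components = URLComponents(url: callbackURL, resolvingAgainstBaseURL: false)
  return components?.queryItems?.first(where: { $0.name == "code" })?.value
}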

You’ll need to implement a method in Incognito’s AppDelegate class for the app to respond when it’s launched via a custom URL scheme.

Open AppDelegate.swift and add the following import statement to the top of the file:

import AeroGearOAuth2

Next, implement application(_:open:options) as shown below:

func application(_ app: UIApplication,
                 open url: URL,
                 options: [UIApplicationOpenURLOptionsKey : Any] = [:]) -> Bool {

  let notification = Notification(name: Notification.Name(AGAppLaunchedWithURLNotification),
                                  object:nil,
                                  userInfo:[UIApplicationLaunchOptionsKey.url:url])
  NotificationCenter.default.post(notification)
  return true
}

This method simply creates a Notification containing the URL used to open the app. The AeroGearOAuth2 library listens for the notification and calls the completionHandler of the POST method you invoked above.

Build and run your project again, take a snazzy selfie and dress it up. Click the share button, authenticate yourself, and lo and behold:

You can download the finished Incognito AeroGear project from this section if you wish.

Switching context to an external browser during the OAuth 2.0 authentication step is a bit clunky. There must be a more streamlined approach…

Using Embedded Safari View

Embedded Safari web views make for a more user-friendly experience. This can be achieved by using an SFSafariViewController rather than switching to the Safari app. From a security point of view, it’s a less-secure approach, since your app’s code sits between the login form and the provider: your app could use JavaScript to access the user’s credentials as they type them. However, this could be an acceptable option if your end users trust your app to be secure.
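
At its simplest, presenting the authorization page in-app looks something like the sketch below, where authorizationURL is assumed to be the fully built OAuth 2.0 authorize URL. OAuthSwift wraps this up for you, as you’ll see shortly.

import SafariServices

// Inside a UIViewController: present the provider's login and consent
// page without leaving the app.
let safariViewController = SFSafariViewController(url: authorizationURL)
present(safariViewController, animated: true)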


You’ll revisit the share method using the OAuthSwift library, but this time, you’ll implement OAuth 2.0 using an embedded Safari view.

OAuthSwift with Embedded Safari View

You’re going to start again with a different project. So close the existing Xcode workspace, download this version of the Incognito starter project, and open the project in Xcode using the Incognito.xcworkspace file.

Build and run the project; things should look pretty familiar.

As before, you first need to import the OAuthSwift library included in the project.

Open ViewController.swift and add the following import to the top of the file:

import OAuthSwift

Still in ViewController.swift, add the following code to share():

//1
let oauthswift = OAuth2Swift(
  consumerKey:    "YOUR_GOOGLE_DRIVE_CLIENT_ID",
  consumerSecret: "",		// No secret required
  authorizeUrl:   "https://accounts.google.com/o/oauth2/auth",
  accessTokenUrl: "https://accounts.google.com/o/oauth2/token",
  responseType:   "code"
)

oauthswift.allowMissingStateCheck = true
//2
oauthswift.authorizeURLHandler = SafariURLHandler(viewController: self, oauthSwift: oauthswift)

guard let rwURL = URL(string: "com.raywenderlich.Incognito:/oauth2Callback") else { return }

//3
oauthswift.authorize(withCallbackURL: rwURL, scope: "https://www.googleapis.com/auth/drive", state: "", success: {
  (credential, response, parameters) in
  oauthswift.client.postImage("https://www.googleapis.com/upload/drive/v2/files",
    parameters: parameters,
    image: self.snapshot(),
    success: {
      //4
      (response) in
      if let _ = try? JSONSerialization.jsonObject(with: response.data, options: []) {
        self.presentAlert("Success", message: "Successfully uploaded!")
      }
    },
    failure: {
      (error) in
      self.presentAlert("Error", message: error.localizedDescription)
    })
}, failure: { (error) in
  self.presentAlert("Error", message: error.localizedDescription)
})

Here’s what’s going on in the code above:

  1. You first create the OAuth2Swift instance that will handle the OAuth dance for you. Don’t forget to replace YOUR_GOOGLE_DRIVE_CLIENT_ID with the Client ID from the Google console.
  2. Then you initialize the authorizeURLHandler to a SafariURLHandler, which will automatically handle displaying and dismissing a SFSafariViewController.
  3. Next, request authorization via the oauthswift instance. The scope parameter indicates that you are requesting access to the Drive API.
  4. If authorization is granted, you can go ahead and upload the image.

Configuring URL Handling

Just as in the previous project, this version of Incognito has been set up to accept a custom URL scheme; all you need to do is implement the code to handle the custom URL.

Open AppDelegate.swift and add the following import:

import OAuthSwift

Then, implement application(_:open:options) as shown below:

func application(_ app: UIApplication,
                 open url: URL,
                 options: [UIApplicationOpenURLOptionsKey : Any] = [:]) -> Bool {

  OAuthSwift.handle(url: url)
  return true
}

Unlike AeroGearOAuth2, OAuthSwift uses a class method to handle parsing the returned URL. However, if you inspect the handle(url:) method, you’ll see that it simply sends a Notification, just like AeroGearOAuth2 required you to do!

Build and run your project; note that when the authentication form appears, it’s not displayed within the Safari app, and no app switching happens. Also, the authentication form is presented each time you run the app, since no web cookies are stored in your app by default.

Using a SFSafariViewController to authenticate with Google looks more streamlined, for sure! :]

You can download the final Incognito OAuthSwift project here.

More About Tokens

One thing you haven’t looked at is how to store those precious access and refresh tokens which you receive as part of the OAuth 2.0 dance. Where do you store them? How do you refresh an expired access token? Can you revoke your grants?

Storing tokens

The best way to store them is…in the Keychain, of course! :]


This is the default strategy adopted by OAuth2Session (from AeroGear).

If you would like to read more about the keychain, then I recommend reading our other tutorials on the subject.
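
If you haven’t worked with the Keychain APIs directly, here’s a minimal sketch of storing a token with the Security framework. The service name is hypothetical, and production code should distinguish add-versus-update errors more carefully:

import Foundation
import Security

// Stores a token under the given account, replacing any existing entry.
func saveToken(_ token: String, account: String) -> Bool {
  guard let data = token.data(using: .utf8) else { return false }

  // Attributes identifying the item; the service name is hypothetical.
  var query: [String: Any] = [
    kSecClass as String: kSecClassGenericPassword,
    kSecAttrService as String: "com.raywenderlich.Incognito.oauth",
    kSecAttrAccount as String: account
  ]
  _ = SecItemDelete(query as CFDictionary) // remove any stale item first

  query[kSecValueData as String] = data
  return SecItemAdd(query as CFDictionary, nil) == errSecSuccess
}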

Refreshing and Revoking

To refresh the access token, you simply make an HTTP call to the access token endpoint and pass the refresh token as a parameter.
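
As a rough sketch, such a refresh request against Google’s token endpoint (the same endpoint used earlier in this tutorial) might look like this; where the refresh token and Client ID come from is up to your own storage, and values should be percent-encoded in production:

import Foundation

// Exchanges a refresh token for a fresh access token.
func refreshAccessToken(refreshToken: String, clientId: String,
                        completion: @escaping (String?) -> Void) {
  var request = URLRequest(url: URL(string: "https://accounts.google.com/o/oauth2/token")!)
  request.httpMethod = "POST"
  request.setValue("application/x-www-form-urlencoded", forHTTPHeaderField: "Content-Type")
  request.httpBody = "grant_type=refresh_token&refresh_token=\(refreshToken)&client_id=\(clientId)"
    .data(using: .utf8)

  URLSession.shared.dataTask(with: request) { data, _, _ in
    guard let data = data,
      let object = try? JSONSerialization.jsonObject(with: data),
      let json = object as? [String: Any],
      let accessToken = json["access_token"] as? String else {
        completion(nil)
        return
    }
    completion(accessToken) // the response also includes a new expires_in
  }.resume()
}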

AeroGear, for example, handles this for you: the library determines whether the access token is still valid and refreshes it behind the scenes when needed.

OAuth 2.0 defines a separate specification for revoking tokens, which makes it possible to either revoke tokens separately or all at once. Most providers revoke both access and refresh tokens at the same time.
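
As a minimal sketch, revoking a Google grant boils down to a single request. The endpoint below is Google’s; other providers document their own revocation URLs:

import Foundation

// Revokes the grant associated with a token. Google accepts either an
// access token or a refresh token here and revokes the whole grant.
func revokeToken(_ token: String) {
  guard let url = URL(string: "https://accounts.google.com/o/oauth2/revoke?token=\(token)")
    else { return }
  URLSession.shared.dataTask(with: url) { _, response, _ in
    // A 200 status code means the revocation succeeded.
    print((response as? HTTPURLResponse)?.statusCode ?? -1)
  }.resume()
}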

Where To Go From Here?

You covered two open source libraries which implement OAuth 2.0 – and hopefully learned a little more about how OAuth 2.0 works under the hood.

Maybe now you’re ready to read the OAuth 2.0 specification, RFC6749?! OK, maybe not. It’s a beast of a document! But at least you now understand the fundamentals and how it relates to your app.

Once you’ve picked your favorite open source OAuth 2.0 library to use in your app, contributing to it is essential. If you notice a bug, report an issue. If you know how to fix it, even better: propose a pull request.

If you have any comments or questions about this tutorial, please join the forum discussion below!

The post OAuth 2.0 with Swift Tutorial appeared first on Ray Wenderlich.


Screencast: What’s New in Xcode 9: Debugging Improvements

Using Spots Framework for Cross-Platform Development


Spots is an open-source framework that enables you to design your UI for one platform, and use it on iOS, tvOS, and macOS. This lets you spend more time working on your app, and less time porting it to other platforms. Spots is also architected around the view model pattern, which makes it incredibly easy to redesign your layout. You can read more about what inspired the creators of Spots here.

Getting Started

In this tutorial, you’ll start off by making a simple app for iOS and use Spots to help you port the app to tvOS and macOS. Start by downloading the starter project here.

The starter project includes Spots, which has been pre-installed via CocoaPods. If you’re curious to learn more, you can look inside the Podfile to see how it’s set up, or check out our CocoaPods with Swift tutorial. You’ll use the imported Spots framework later to drive your UI from JSON.

Open up Dinopedia.xcworkspace, and then open up the Dinopedia-iOS group. Then open up Main.storyboard within that group. You’ll notice that it contains an empty UINavigationController. Embedding UIViewControllers in a UINavigationController facilitates navigation between the UIViewControllers and makes it easy for you to set the UIViewControllers’ titles. You will work with both these features within this tutorial.

Note: At the time this tutorial was written, Spots did not compile cleanly with Swift 4 so you will see a warning that conversion to Swift 4 is available. When you build, you will see a number of other warnings in the Spots libraries. You’ll just have to ignore them for now.

Creating Your First View

To build a user interface in Spots, you first have to instantiate a custom view. In Spots, you make a custom view by creating a new subclass of UIView that conforms to ItemConfigurable. Then, you set up your constraints and the size of your view.

Create a new file inside the Dinopedia-iOS group named CellView.swift, containing a CellView class that inherits from UIView. At the top of the file, add the following code:

import Spots

Add the following code inside the CellView class:

lazy var titleLabel = UILabel()

You have now created a label that you will soon populate. Because the property is declared lazy, the label is instantiated when it is first accessed: in this case, when the label is actually populated and displayed. Properties that are not declared lazy are instantiated when the class or struct in which they are declared is instantiated.

Below where you declared the titleLabel, add the following code:

override init(frame: CGRect) {
  super.init(frame: frame)

  addSubview(titleLabel)
}

This overrides the view’s initializer, and will initialize the view and add the label.

Next, add the following required method below init(frame:):

required init?(coder aDecoder: NSCoder) {
  fatalError("init(coder:) has not been implemented")
}

In Swift, a subclass does not inherit its superclass’s designated initializers by default. Since CellView overrides UIView’s designated initializer init(frame:), it no longer inherits init?(coder:), so you must provide this required initializer yourself.

Finally, you’ll implement three methods for configuring your view. First you will add constraints to the titleLabel you created earlier so that it displays nicely on the screen. Constraining the titleLabel is not enough; next you will need to populate the titleLabel with text.

Add the following new method at the bottom of the class:

func setupConstraints() {
   titleLabel.translatesAutoresizingMaskIntoConstraints = false
   titleLabel.centerYAnchor.constraint(equalTo: centerYAnchor).isActive = true
   titleLabel.leadingAnchor.constraint(equalTo: leadingAnchor, constant: 16).isActive = true
   titleLabel.trailingAnchor.constraint(equalTo: trailingAnchor, constant: -16).isActive = true
}

These constraints position the label in the center of your view vertically and give it a width equal to that of your view, with a bit of padding on either side.

At the bottom of init(frame:), add the following code:

setupConstraints()

This ensures the constraints are added as soon as CellView is initialized.

Now add the following to the bottom of the file, outside of the class definition:

extension CellView: ItemConfigurable {

  func configure(with item: Item) {
    titleLabel.text = item.title
  }

  func computeSize(for item: Item) -> CGSize {
    return CGSize(width: bounds.width, height: 80)
  }

}

configure(with:) sets the label’s text with the data passed as a parameter. computeSize(for:) sets the size of the view.

Now it’s time to use your view. In order for the application to use your view, you’ll have to register it. Open AppDelegate.swift and add the following code:

import Spots

Then add the following to application(didFinishLaunchingWithOptions), before the return:

Configuration.register(view: CellView.self, identifier: "Cell")

This registers the view you just created with the identifier "Cell". This identifier lets you reference your view within the Spots framework.

Creating Your First ComponentModel

It’s time to work with the Spots framework. First, you will create a ComponentModel.

Open ViewController.swift (make sure you choose the one in Dinopedia-iOS!). Items make up your ComponentModel and contain the data for your application. This data will be what the user sees when running the app.

There are many properties associated with Items. For example:

  • title is the name of the dinosaur’s species.
  • kind is the identifier that you gave CellView.swift in the AppDelegate.swift above.
  • meta has additional attributes, like the dinosaur’s scientific name and diet. You’ll use some of these properties now.

Add the following code at the top of the file:

import Spots

Add the following inside the viewDidLoad(), below super.viewDidLoad().

let model = ComponentModel(kind: .list, items: [
  Item(title: "Tyrannosaurus Rex", kind: "Cell", meta: [
    "ScientificName": "Tyrannosaurus Rex",
    "Speed": "12mph",
    "Lived": "Late Cretaceous Period",
    "Weight": "5 tons",
    "Diet": "Carnivore",
]),
  Item(title: "Triceratops", kind: "Cell", meta: [
    "ScientificName": "Triceratops",
    "Speed": "34mph",
    "Lived": "Late Cretaceous Period",
    "Weight": "5.5 tons",
    "Diet": "Herbivore",
]),
  Item(title: "Velociraptor", kind: "Cell", meta: [
    "ScientificName": "Velociraptor",
    "Speed": "40mph",
    "Lived": "Late Cretaceous Period",
    "Weight": "15 to 33lbs",
    "Diet": "Carnivore",
]),
  Item(title: "Stegosaurus", kind: "Cell", meta: [
    "ScientificName": "Stegosaurus Armatus",
    "Speed": "7mph",
    "Lived": "Late Jurassic Period",
    "Weight": "3.4 tons",
    "Diet": "Herbivore",
]),
  Item(title: "Spinosaurus", kind: "Cell", meta: [
    "ScientificName": "Spinosaurus",
    "Speed": "11mph",
    "Lived": "Cretaceous Period",
    "Weight": "7.5 to 23 tons",
    "Diet": "Fish",
]),
  Item(title: "Archaeopteryx", kind: "Cell", meta: [
    "ScientificName": "Archaeopteryx",
    "Speed": "4.5mph Running, 13.4mph Flying",
    "Lived": "Late Jurassic Period",
    "Weight": "1.8 to 2.2lbs",
    "Diet": "Carnivore",
]),
  Item(title: "Brachiosaurus", kind: "Cell", meta: [
    "ScientificName": "Brachiosaurus",
    "Speed": "10mph",
    "Lived": "Late Jurassic Period",
    "Weight": "60 tons",
    "Diet": "Herbivore",
]),
  Item(title: "Allosaurus", kind: "Cell", meta: [
    "ScientificName": "Allosaurus",
    "Speed": "19 to 34mph",
    "Lived": "Late Jurassic Period",
    "Weight": "2.5 tons",
    "Diet": "Carnivore",
]),
  Item(title: "Apatosaurus", kind: "Cell", meta: [
    "ScientificName": "Apatosaurus",
    "Speed": "12mph",
    "Lived": "Late Jurassic Period",
    "Weight": "24.5 tons",
    "Diet": "Herbivore",
]),
  Item(title: "Dilophosaurus", kind: "Cell", meta: [
    "ScientificName": "Dilophosaurus",
    "Speed": "20mph",
    "Lived": "Early Jurassic Period",
    "Weight": "880lbs",
    "Diet": "Carnivore",
  ]),
])

The code here is fairly straightforward. At the top, you create a new ComponentModel of type list. This causes your view to render as a UITableView instance. Then, you create your array of Items with a specific title and kind. This contains your data and sets its view type to the identifier, "Cell", which you specified earlier in AppDelegate.swift.

Adding Your View to the Scene

To use your data, you’ll need to create a controller. Still inside viewDidLoad(), add the following below your model:

let component = Component(model: model)

The final steps to get your view on the screen are to create a SpotsController and add it to the screen, so let’s do that now. Still inside viewDidLoad(), add the following under your component:

let controller = SpotsController(components: [component])
controller.title = "Dinopedia"

This will create a new SpotsController and set its title, which the UINavigationController will use.

Finally, add the controller to the UINavigationController with:

setViewControllers([controller], animated: true)

The code above sets the stack of the UINavigationController, which at this point consists solely of the SpotsController. If you had more than one UIViewController that you wanted within the UINavigationController‘s stack, you would simply add it to the Array that currently holds [controller].

Build and run to see your dinosaurs!

dinosaur list at first run

Responding to Taps on Dinosaurs

You’ll notice, however, that you can’t tap on the dinosaurs to see more information about them. To respond when the user taps a cell, you need to implement the component(itemSelected:) method of the ComponentDelegate protocol.

Still in ViewController.swift, at the bottom of the file, make a new extension and implement the method by adding the following code:

extension ViewController: ComponentDelegate {
  func component(_ component: Component, itemSelected item: Item) {

  }
}

In the code above, your ViewController adopts ComponentDelegate so that it has the ability to respond when a user taps on a cell. Your ViewController conforms to ComponentDelegate by implementing the required method inside the extension.

First, you’ll want to retrieve the information about each dinosaur. When you made the ComponentModel, you stored the information in the meta property. Inside the component(itemSelected:) method you just added, make a new ComponentModel by adding the following code:

let itemMeta = item.meta

let newModel = ComponentModel(kind: .list, items: [
  Item(title: "Scientific Name: \(itemMeta["ScientificName"] as! String)", kind: "Cell"),
  Item(title: "Speed: \(itemMeta["Speed"] as! String)", kind: "Cell"),
  Item(title: "Lived: \(itemMeta["Lived"] as! String)", kind: "Cell"),
  Item(title: "Weight: \(itemMeta["Weight"] as! String)", kind: "Cell"),
  Item(title: "Diet: \(itemMeta["Diet"] as! String)", kind: "Cell")
])

Here, you create a property itemMeta and set it to the meta property of the item which the user tapped. itemMeta is a Dictionary of String to Any. When creating newModel, you retrieve the value associated with each key in itemMeta. Like before, the kind parameter is the identifier of CellView.swift that you declared in the AppDelegate.

Finally, add the following code underneath that which you just added:

let newComponent = Component(model: newModel) //1
newComponent.tableView?.allowsSelection = false //2

let detailController = SpotsController() //3
detailController.components = [newComponent]
detailController.title = item.title
detailController.view.backgroundColor = UIColor.white

pushViewController(detailController, animated: true) //4

This creates the Component and SpotsController and adds it to the scene. Breaking it down:

  1. First you instantiate newComponent, which has a property called tableView.
  2. You disable selection on the tableView.
  3. Next you instantiate detailController and add newComponent to the components property on detailController.
  4. Finally, you push the new controller.

If you were to build and run now, nothing would happen when you tap on the cells. This is because you haven’t set the ViewController as the SpotsController‘s delegate.

Back inside viewDidLoad(), add the following where you defined the SpotsController:

controller.delegate = self

Build and run to see some more information about the dinosaurs in your app!

dinosaur detail

Converting to JSON

If you looked around in the project, you may have noticed the dinopedia.json file. Open it up and you’ll see that the JSON data looks very similar to the model you made. You’ll use this JSON file to port your app to tvOS and macOS. This is one of the selling points of Spots: you can create your controllers with simple JSON data. The idea is that this JSON could eventually come from your web server, making it very easy to create your views from the data your server sends.
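
The exact file ships with the starter project, but based on the ComponentModel you built earlier, each entry plausibly looks something like the following; the top-level key names are an assumption about Spots’ JSON format:

{
  "components": [
    {
      "kind": "list",
      "items": [
        {
          "title": "Tyrannosaurus Rex",
          "kind": "Cell",
          "meta": {
            "ScientificName": "Tyrannosaurus Rex",
            "Speed": "12mph",
            "Lived": "Late Cretaceous Period",
            "Weight": "5 tons",
            "Diet": "Carnivore"
          }
        }
      ]
    }
  ]
}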

First, you’ll change your iOS app to use JSON instead of manually creating the model.

Open ViewController.swift and replace the contents of viewDidLoad() with the following:

super.viewDidLoad()

guard let jsonPath = Bundle.main.path(forResource: "dinopedia", ofType: "json") else { //1
  print("JSON Path Not Found")
  return
}

let jsonURL = URL(fileURLWithPath: jsonPath)
do {
  let jsonData = try Data(contentsOf: jsonURL, options: .mappedIfSafe)
  let jsonResult = try JSONSerialization.jsonObject(with: jsonData,
                                                    options: .mutableContainers) as! [String: Any] //2

  let controller = SpotsController(jsonResult) //3
  controller.delegate = self
  controller.title = "Dinopedia"

  setViewControllers([controller], animated: true) //4
} catch {
  print("Error Creating View from JSON")
}

Here’s what you’re doing above:

  1. First, you find the path of the JSON file and create a URL with it.
  2. Then you retrieve the data and parse it into a Dictionary.
  3. Next, you create a new SpotsController, passing in the JSON.
  4. Finally, you add it to the scene.

Build and run to see your app. It looks just as it did before, but now you’re using JSON!

iOS views created from JSON

Porting to tvOS

Now that you’ve spent time creating your app on iOS, it’s time to port to tvOS. Luckily, it’s very easy to port your app to tvOS using Spots. You’ll reuse all the code you wrote for iOS!

Add each Swift file from your iOS target to the tvOS target, including AppDelegate.swift, by checking the boxes in the File Inspector on the right-hand side of Xcode.

Inside the tvOS version of Main.storyboard, a UINavigationController has already been added for you. Since iOS and tvOS both use UIKit, you can conveniently share all of your files! Build and run the tvOS target to see your app beautifully ported to tvOS.

dinopedia on tvOS

Porting to macOS

Unfortunately, macOS doesn’t use UIKit and takes a little more work to port. You can’t just reuse files like you did for tvOS. But you’ll reuse most of the code, with only a few minor changes here and there.

Inside the macOS target, open up Main.storyboard. A stack view is already set up for you. It contains a view on the left and right with a divider in the middle. Both views have outlets already made and wired up to ViewController.swift.

Now right click on the Dinopedia-macOS group and select New File…. Then select macOS\Cocoa Class and click Next. Name the class CellView, make it a subclass of NSView, and click Next. Then save it in the default location, making sure that the Dinopedia-macOS target is selected.

Now remove the template’s draw(_:) method and add the following code to the top of the file:

import Spots

Inside CellView, define a new NSTextField called titleLabel:

lazy var titleLabel = NSTextField()

Next, implement the initializers:

override init(frame frameRect: NSRect) {
  super.init(frame: frameRect)

  addSubview(titleLabel)
}

required init?(coder decoder: NSCoder) {
  super.init(coder: decoder)
}

As with the implementation of iOS Dinopedia’s CellView, here the macOS CellView must override NSView‘s designated initializer.

Now, create the setupConstraints() method to set up the titleLabel:

func setupConstraints() {
  titleLabel.translatesAutoresizingMaskIntoConstraints = false
  NSLayoutConstraint.activate([
    titleLabel.centerYAnchor.constraint(equalTo: centerYAnchor),
    titleLabel.leadingAnchor.constraint(equalTo: leadingAnchor, constant: 16),
    titleLabel.trailingAnchor.constraint(equalTo: trailingAnchor, constant: -16),
    ])
}

Here you are constraining titleLabel so that it is centered vertically within its superview and has a slight margin of 16 points on either side relative to its superview.

Now add the following code at the end of init(frame:):

setupConstraints()

This ensures that setupConstraints() is called when CellView is initialized.

Finally, create a new extension at the bottom of the file to set up the size of the view:

extension CellView: ItemConfigurable {

  func configure(with item: Item) {
    titleLabel.stringValue = item.title
    titleLabel.isEditable = false
    titleLabel.isSelectable = false
    titleLabel.isBezeled = false
    titleLabel.drawsBackground = false
  }

  func computeSize(for item: Item) -> CGSize {
    return CGSize(width: item.size.width, height: 80)
  }

}

Here you give the titleLabel some text and set certain properties on the NSTextField. You also create a method that returns the size of the item.

The last step in setting up your view is to register it in the AppDelegate. Switch to AppDelegate.swift (the one inside Dinopedia-macOS) and add the following code to the top of the file:

import Spots

Add the following inside the AppDelegate:

override func awakeFromNib() {
  super.awakeFromNib()
  Configuration.register(view: CellView.self, identifier: "Cell")
}

Just like you registered CellView.swift‘s identifier in AppDelegate.swift for the iOS and tvOS targets, you are performing a similar action above. However, since the view is used in a storyboard, you need to register the view in awakeFromNib().

Now it’s time to set up your ViewController. Open up ViewController.swift (again, the one in Dinopedia-macOS) and add the following code to the top of the file:

import Spots

Add the following code to the end of viewDidLoad():

guard let jsonPath = Bundle.main.path(forResource: "dinopedia", ofType: "json") else { //1
  print("JSON Path Not Found")
  return
}

let jsonURL = URL(fileURLWithPath: jsonPath)
do {
  let jsonData = try Data(contentsOf: jsonURL, options: .mappedIfSafe)
  let jsonResult = try JSONSerialization.jsonObject(with: jsonData,
                                                   options: .mutableContainers) as! [String: Any] //2

  let controller = SpotsController(jsonResult) //3
  controller.title = "Dinopedia" //4

  addChildViewController(controller) //5
  leftView.addSubview(controller.view)
  controller.view.translatesAutoresizingMaskIntoConstraints = false
  NSLayoutConstraint.activate([
    controller.view.leadingAnchor.constraint(equalTo: leftView.leadingAnchor, constant: 0),
    controller.view.trailingAnchor.constraint(equalTo: leftView.trailingAnchor, constant: 0),
    controller.view.topAnchor.constraint(equalTo: leftView.topAnchor, constant: 0),
    controller.view.bottomAnchor.constraint(equalTo: leftView.bottomAnchor, constant: 0)
    ])
} catch {
  print("Error Creating View from JSON")
}

There’s a lot going on there, but it’s relatively straightforward:

  1. First you find the path to the dinopedia.json file.
  2. You then retrieve that data and deserialize it into a Dictionary.
  3. Next you instantiate a new SpotsController.
  4. You subsequently set the SpotsController‘s title.
  5. Finally, you add the SpotsController as a childViewController of ViewController and constrain it within ViewController.

You’ll notice that this is the same code used for iOS, except here you add the SpotsController‘s view to leftView and pin it with constraints so that it fills the entire view.

Create a new extension at the bottom of the file and implement ComponentDelegate:

extension ViewController: ComponentDelegate {
  func component(_ component: Component, itemSelected item: Item) {

  }
}

Here you are adopting and conforming to ComponentDelegate so that ViewController responds when the user clicks a cell.

You can repeat the same code used to retrieve the data, so add the following to component(itemSelected:):

let itemMeta = item.meta

let newModel = ComponentModel(kind: .list, items: [
  Item(title: "Scientific Name: \(itemMeta["ScientificName"] as! String)", kind: "Cell"),
  Item(title: "Speed: \(itemMeta["Speed"] as! String)", kind: "Cell"),
  Item(title: "Lived: \(itemMeta["Lived"] as! String)", kind: "Cell"),
  Item(title: "Weight: \(itemMeta["Weight"] as! String)", kind: "Cell"),
  Item(title: "Diet: \(itemMeta["Diet"] as! String)", kind: "Cell"),
  ])
let newComponent = Component(model: newModel)

You’ll need to remove the SpotsController in the right-hand pane and replace it with a new SpotsController whenever the user selects a new dinosaur. To do this, you check if a SpotsController has been added to the right, and remove it if it has. Then you can add a new SpotsController to the right.

Add the following to the end of component(itemSelected:):

if childViewControllers.count > 1 {
  childViewControllers.removeLast()
  rightView.subviews.removeAll()
}

In this code, you determine whether there is more than one view controller in childViewControllers. The first child is always the list controller you added in viewDidLoad(), so a count greater than one means a detail controller is already showing. This check is important: calling childViewControllers.removeLast() without it could try to remove a view controller that does not exist and crash the app. You then remove all the subviews of rightView, since they will be replaced with the user’s new dinosaur selection.

Now that you have a clear space to add your new SpotsController, add the following to the end of component(itemSelected:):

let detailController = SpotsController()
detailController.components = [newComponent]
detailController.title = item.title

addChildViewController(detailController)
rightView.addSubview(detailController.view)
detailController.view.translatesAutoresizingMaskIntoConstraints = false
NSLayoutConstraint.activate([
  detailController.view.leadingAnchor.constraint(equalTo: rightView.leadingAnchor, constant: 0),
  detailController.view.trailingAnchor.constraint(equalTo: rightView.trailingAnchor, constant: 0),
  detailController.view.topAnchor.constraint(equalTo: rightView.topAnchor, constant: 0),
  detailController.view.bottomAnchor.constraint(equalTo: rightView.bottomAnchor, constant: 0)
  ])

Again, this repeats from iOS, but adds constraints to the new view to fill the space.

Now that ViewController conforms to ComponentDelegate, it’s time to set it as the SpotsController‘s delegate. Back inside viewDidLoad(), add the following where you defined the SpotsController:

controller.delegate = self

Before you build and run your macOS application, go to the macOS Project Editor and make sure you have a development team selected:

If a development team is not available, you may have to set up your macOS credentials. This Create Certificate Signing Request Tutorial is a helpful resource if you are unsure how to set up your credentials.

Now it is time to build and run to see your finished application running on macOS!

Where To Go From Here?

Well, that was a whirlwind tour of Spots! You’ve seen how you can build a simple UI using the framework, and port it from iOS to tvOS and macOS. Hopefully you can see how this could be useful. When the UI gets even more complex, this ease of porting becomes very useful. You’ve also seen how Spots uses the view model concept through its “controllers”, and how these can easily be created from JSON data.

Here’s the Final Project for this tutorial.

To learn more about Spots, you can check out the documentation as well as the getting started guide on Spots’ GitHub page.

If you have any questions feel free to join the discussion below!

The post Using Spots Framework for Cross-Platform Development appeared first on Ray Wenderlich.

Android Accessibility Tutorial: Getting Started


Most people will have at least a short-term disability at some time that makes it difficult to use their mobile device. This includes someone who was born blind, or lost fine motor skills in an accident. It also includes someone who can’t use their hands because they are carrying a wiggly child. You may have experienced difficulties using your phone while wearing gloves when it’s cold outside. Maybe you’ve had a hard time distinguishing items on the screen when it’s bright outside.

With so much of the population experiencing decreased vision, hearing, mobility, and cognitive function, you should do your best to give everyone the best experience in your apps that you can. It’s a small way you can make people’s lives better.

In this tutorial, you are going to learn ways you can make your Android app more accessible by updating a coffee tracking app. There are many things that help, and by the end of this tutorial you will know the most basic ways you can improve your app for accessibility. You will learn:

  • What accessibility tools people are using to navigate your app
  • How to discover existing accessibility issues, and prevent accessibility regression
  • Android attributes you can use to make your app more accessible
  • Design guidelines to allow your user to use your app with ease

This tutorial assumes you have basic knowledge of Kotlin and Android. If you’re new to Android, check out our Android tutorials. If you know Android, but are unfamiliar with Kotlin, take a look at Kotlin For Android: An Introduction.

Getting Started

The app you will be improving allows you to set the number of cups of coffee you want to limit yourself to, and keeps track of where you are within that limit. There is an EditText field to enter the number of cups you want to limit consumption to. There is also a button to add cups of coffee that have been consumed. To show how much of the limit has been consumed, there is a coffee cup that fills up as more cups are consumed, reaching full when the limit is reached.

Start by downloading the starter project. Then open the project in Android Studio 3.0.1 or greater by going to File/New/Import Project, and selecting the build.gradle file in the root of the project.

Once it finishes loading and building, you will be able to run the application on a device or emulator. Try setting a new limit, and adding cups of coffee to see how the app works.

The two main files you will be working with for this tutorial are MainActivity.kt and activity_main.xml. Here’s a quick summary of all the files you’ll see in this tutorial.

  • MainActivity.kt contains the view code for the main screen. It listens for events from the user updating the limit and how many cups of coffee have been consumed, and updates the view accordingly.
  • activity_main.xml is the layout for the main screen. In it you’ll see all the components that make up the view.
  • strings.xml holds all the strings you define that are user visible or audible.
  • styles.xml contains the app wide styles of the app.
  • CoffeeRepository.kt keeps track of how much coffee has been consumed. You won’t need to change anything in it, just know this is what it is used for.

Now that you have the app up and running, and have a basic understanding of how it works, you can look for accessibility shortcomings, and make changes to fix them.

Enabling accessibility tools

There are many tools that people use to interact with their Android devices. This includes TalkBack, Magnification, and Switch Access, to name a few.

TalkBack allows you to explore the view using gestures, while also audibly describing what’s on the screen. Magnification allows you to zoom in on parts of the screen. Both TalkBack and Magnification are helpful for people with limited visibility. People with limited mobility can use Switch Access to allow them to navigate without using the touch screen. You can find all the accessibility features in Settings/Accessibility on your device.

This tutorial is going to look mainly at TalkBack, as it incorporates both screen traversal for navigation and screen reading to understand what is in focus. You can enable any of the other accessibility tools the same way you will turn on TalkBack in this tutorial.

By using TalkBack, a user can use gestures, such as swiping left to right, on the screen to traverse the items shown on the screen. As each item is in focus, there is an audible description given. This is useful for people with vision impairments that cannot see the screen well enough to understand what is there, or select what they need to.

Note: TalkBack is only available on physical devices running Lollipop or higher, and cannot be accessed on an emulator.

To turn on TalkBack, go to Settings on your Android device. Then find Accessibility/TalkBack, and toggle the tool on.

With the default settings, in a left to right language, swipe right to advance to the next item on the screen, left to go to the previous, and double tap to select. In a right to left language, the swipe directions are reversed.

With these gestures you can start exploring the application using TalkBack. Try closing your eyes while using it to see if you can understand what you’re “looking” at. Don’t be shy to try out the other accessibility tools too. By trying these out you are able to spot things that are hard to access or understand by those that are impaired.

Testing Tools

While using TalkBack and other accessibility tools is helpful for finding accessibility shortcomings, there are a couple other testing tools provided for developers to help identify accessibility issues.

Lint

The simplest of these is the linter that Google provides. This is enabled by default in Android Studio, and will warn you of accessibility issues such as a missing contentDescription (later in this tutorial you’ll learn why using contentDescription is important).

Espresso tests

For more in depth checks, you can turn on checks in your Espresso tests. Do this by adding the following line to your test or test runner.

AccessibilityChecks.enable()

Check the Google docs for how you can further configure these tests.
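
As a rough sketch, assuming a hypothetical test class for this project, enabling the checks once for the whole class might look like this:

// android.support.test.espresso.accessibility at the time of writing;
// androidx.test.espresso.accessibility on newer toolchains.
import android.support.test.espresso.accessibility.AccessibilityChecks
import org.junit.BeforeClass

class MainActivityAccessibilityTest { // hypothetical test class

  companion object {
    @BeforeClass
    @JvmStatic
    fun enableAccessibilityChecks() {
      // Validate the whole view hierarchy on each view action,
      // not just the view being interacted with.
      AccessibilityChecks.enable().setRunChecksFromRootView(true)
    }
  }
}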

Accessibility Scanner

Google also gives us an Accessibility Scanner that you can download from the Play Store. Download it now so you can use it with this tutorial. After downloading, the scanner can be turned on in the same Accessibility settings menu you were in before to turn on TalkBack. Navigate to Settings/Accessibility/Accessibility Scanner, and toggle it on.

Once it’s turned on, navigate to the screen you want to scan and tap the check mark in the blue circle. Wait a moment for the results of the scan to show.

The results will show the scanned screen with orange boxes around any issues it found. By clicking on any of these boxes, you’ll get the description of what was found to be wrong, and a suggestion for how to fix it.

Now that you know how to find accessibility issues, you can get to work on fixing them, and making the world a better place!

Accessibility Attributes

One of the things highlighted by the Accessibility Scanner is the “add coffee” FloatingActionButton. It gives you the message “This item may not have a label readable by screen readers.”

You can better understand the implications of this while using TalkBack. Try navigating to it using TalkBack. When the button comes into focus you hear the audio “unlabeled button”. This is not very descriptive for a person who cannot see the screen well enough to infer the meaning.

Content description

You can easily improve this user experience by adding an android:contentDescription attribute to the “add coffee” FloatingActionButton. contentDescription is a small bit of text that describes the view. You don’t need to include that it is a button in your description; this is already inferred by the screen reader, and it will announce it on its own. Add the content description “Add Coffee” to the button. This is done directly on that element in activity_main.xml. You will also need to add the string resource in strings.xml.

activity_main.xml

<android.support.design.widget.FloatingActionButton
   android:id="@+id/addCoffee"
   android:layout_width="wrap_content"
   android:layout_height="wrap_content"
   android:clickable="true"
   android:contentDescription="@string/add_coffee"
   android:focusable="true"
   android:src="@drawable/ic_add_24px"
   app:fabSize="normal"
   app:layout_anchor="@id/mainContent"
   app:layout_anchorGravity="bottom|right|end"
   tools:targetApi="lollipop_mr1"/>

strings.xml

<resources>
  <string name="app_name">CoffeeOverflow</string>
  <string name="coffee_limit_label">Coffee Limit</string>
  <string name="default_coffee_limit">5</string>
  <string name="coffee_limit_input_hint">Limit</string>
  <string name="amount_consumed">Amount Consumed</string>
  <string name="consumed_format">%1$d of %2$s</string>
  <string name="add_coffee">Add Coffee</string>
</resources>

Once this change is made, build and run the app again. By running the Accessibility Scanner, you will no longer see this warning.

Then, when you test it out with TalkBack, you will hear the audio “add coffee button” when the FloatingActionButton is selected. Yay! Already making improvements.

Notice how the reader includes “button” in the audio without you adding it to the description, along with instructions on how to interact: “double tap to activate.” When a user can further interact with an element they have in focus, the screen reader will give clues on how to do that automatically.

Adding a content description is something you should do for every image or button that does not otherwise have text for the screen reader to read. If the element is not important to understand what is on the screen, the contentDescription can be set to @null. If you do this, TalkBack, and other screen readers will skip the element entirely, and move onto the next thing in the view.

Another attribute you can use to tell the screen reader to skip a view element is android:importantForAccessibility. When set to ”no”, the screen reader will skip this element while traversing the view.
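
For example, a purely decorative image could be marked up so that screen readers skip it entirely; the drawable name here is hypothetical:

<ImageView
   android:layout_width="wrap_content"
   android:layout_height="wrap_content"
   android:src="@drawable/decorative_divider"
   android:contentDescription="@null"
   android:importantForAccessibility="no"/>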

Grouping

While using the TalkBack tool, you may have noticed how hard it was to get to the “add coffee” FloatingActionButton.

TalkBack traverses through everything on the screen, in the order the elements appear in the layout file. This can make some core actions, like incrementing the cups of coffee, harder to reach. There are other things, like label/value pairs, that require extra steps to get all the information when they should be grouped together. You should make sure everything on the screen is reached in a logical order, and grouped in a way that makes sense.

There are attributes you can use to improve these issues. Start with the “Amount Consumed” label, and the associated value. These are both parts of the same piece of information, and should be grouped together instead of requiring you to traverse each one separately.

To specify that both elements should be in focus at the same time, add the android:focusable attribute with the value ”true” to the parent element of the two: the LinearLayout with the id consumedContainer. Also add the attribute android:focusableInTouchMode with the value ”false”, as you only want this to be focusable for the screen reader.

activity_main.xml

<LinearLayout
   android:id="@+id/consumedContainer"
   android:layout_width="match_parent"
   android:layout_height="wrap_content"
   android:focusable="true"
   android:focusableInTouchMode="false"
   android:orientation="horizontal">

Run the app with these changes, and try out TalkBack. Observe that “Amount Consumed” and the value are read at the same time. Now that the information is grouped together, it is clear that they are part of the same bit of information, and it is quicker to understand and consume.

Labels

That solved the grouping for the consumption, but you have a similar issue for “Coffee Limit”. The label is read separately from the editable value, with nothing linking the two. This will use a different solution than you used for the amount consumed. The EditText still needs to be individually focusable to change the value. Add the android:labelFor attribute to the “Coffee Limit” TextView, with a value of the id of the EditText value, coffeeLimitValue.

activity_main.xml

<TextView
   android:id="@+id/coffeeLimitLabel"
   android:layout_width="wrap_content"
   android:layout_height="wrap_content"
   android:layout_gravity="start|center_vertical"
   android:layout_weight="1"
   android:labelFor="@id/coffeeLimitValue"
   android:text="@string/coffee_limit_label"
   android:textAppearance="@style/TextAppearance.AppCompat.Title"/>

Now run the app to observe the changes. When the EditText with the value for the limit is selected, it includes the text of the label in the audio: ‘limit for “Coffee Limit”’. This provides the user context about what the editable value is for, relating it to the previous element. Use labelFor whenever you have a situation where you have a label and a value that should be individually focusable, but are referring to the same thing.

Traversal order

Now to handle the FloatingActionButton. You can use android:accessibilityTraversalBefore on a view to specify which item the view should come before. Add this to the FloatingActionButton, using the id of the container holding the consumed amount as the value.

activity_main.xml

<android.support.design.widget.FloatingActionButton
   android:id="@+id/addCoffee"
   android:layout_width="wrap_content"
   android:layout_height="wrap_content"
   android:accessibilityTraversalBefore="@+id/consumedContainer"
   android:clickable="true"
   android:contentDescription="@string/add_coffee"
   android:focusable="true"
   android:src="@drawable/ic_add_24px"
   app:fabSize="normal"
   app:layout_anchor="@id/mainContent"
   app:layout_anchorGravity="bottom|right|end"
   tools:targetApi="lollipop_mr1"/>

Now when you rebuild and run the app with TalkBack, you will reach the “add coffee” button before the container holding the consumption, and therefore, the rest of the view. This will also be helpful for navigating the app using Switch Access.

Note: accessibilityTraversalBefore is only available on Lollipop and higher, and will only work if the element whose id you provided is focusable. Because the LinearLayout with the id consumedContainer is focusable, this works here if you’re running on a device with version 5.0 or higher.

Announce for accessibility

Have you tried adding a cup of coffee with TalkBack on? Did you notice anything missing for those with visual impairments? When a cup is added, the value of the amount consumed changes along with the ratio of coffee in the coffee cup. These are great visual cues, but they are lost on those who can’t see them. How can you make this audible?

For this you can use the method announceForAccessibility() on a view. When announceForAccessibility() is called, Android will give an audible announcement for those using a screen reader, and do nothing if an accessibility tool is not in use. You can use this to inform the user that the value has been incremented.

In the onCreate() method in MainActivity, there is a click listener on the “add coffee” button that increments the number of cups of coffee, and shows the result. Add a call to announceForAccessibility() on the updated view to announce the change was made. Put the string you’re using for the message in the strings file. There is already a helper method, consumedString(), you can use to get the resulting value.

MainActivity.kt

override fun onCreate(savedInstanceState: Bundle?) {
 // …

 addCoffee.setOnClickListener {
   coffeeRepo.increment()
   showCount()

   amountConsumed.announceForAccessibility(getString(R.string.count_updated, consumedString()))
 }
}

strings.xml

<resources>
  <string name="app_name">CoffeeOverflow</string>
  <string name="coffee_limit_label">Coffee Limit</string>
  <string name="default_coffee_limit">5</string>
  <string name="coffee_limit_input_hint">Limit</string>
  <string name="amount_consumed">Amount Consumed</string>
  <string name="consumed_format">%1$d of %2$s</string>
  <string name="add_coffee">Add Coffee</string>
  <string name="count_updated">Count updated %s</string>
</resources>

Try using TalkBack to increment the cups of coffee consumed with this update. Now there is an audible cue in addition to the visual when the amount consumed is incremented.

If you don’t want the experience to change for sighted users, announceForAccessibility() is a great thing to use. Use it whenever you have a place where there is a meaningful change that was previously only indicated visually.

Another option for updating the user about a change is with Toasts. When Toasts are shown on screen, they are announced when accessibility tools such as TalkBack are enabled.
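
As a minimal sketch, the same update delivered via a Toast (reusing the string resource and helper already in the project) could look like this:

// Requires import android.widget.Toast at the top of MainActivity.kt.
addCoffee.setOnClickListener {
  coffeeRepo.increment()
  showCount()

  // Toasts are read aloud by TalkBack automatically, so no explicit
  // announceForAccessibility() call is needed here.
  Toast.makeText(this, getString(R.string.count_updated, consumedString()),
      Toast.LENGTH_SHORT).show()
}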

Designing for accessibility

There are things you can build into the design of your apps to make it easier for all your users to use. Size, colors, and layouts are all things you can be considerate of.

Touch targets

Take a look at your Accessibility Scanner results again. You can open the app to see the history of any scans you have taken, or perform a new scan. There is a warning on the limit EditText. It says, “Consider making this clickable item larger. This item’s height is 44dp. Consider making the height of this touch target 48dp or larger.”

It’s recommended by Google to make any clickable items at least 48dp in size. That is because anything smaller is difficult for people with vision and motor impairments to tap accurately. In addition, you’ll help out all your users that might be in a bumpy vehicle, wearing gloves, or in bright light that makes it hard to see the screen. Everyone benefits from making this improvement.

To solve this, add the android:minHeight attribute to that EditText. Make sure the value is at least 48dp. Alternatively, you could set android:height to 48dp or higher. This example uses minHeight.

activity_main.xml

<EditText
   android:id="@+id/coffeeLimitValue"
   android:layout_width="wrap_content"
   android:layout_height="wrap_content"
   android:layout_gravity="end"
   android:layout_weight="1"
   android:hint="@string/coffee_limit_input_hint"
   android:inputType="number"
   android:minHeight="48dp"
   android:text="@string/default_coffee_limit"
   android:textAlignment="textEnd"/>

Run the Accessibility Scanner again to make sure this works. There should no longer be a warning for the height of this item.

Color contrast

There are two warnings remaining. There is still one on the EditText that says “Multiple items have the same description.” Because you put the labelFor on the label for this field, the description for the label is part of the description for the limit field. You can leave this one be.

The other is on the text in the toolbar. The message says “Consider increasing this item’s text foreground to background contrast ratio.” Having low vision, color blindness, or a dimmed screen can make it difficult to read text with a low color contrast.

The recommended contrast ratio for text this size is at least 3.0 to 1.

Depending on where this is in your app, there are multiple possible actions you can take. You can change the font color. You can also change the background color. These are usually done in the layout XML file, activity_main.xml in this case. Because this is in the action bar, you are going to change it in the styles in styles.xml.

Open the file and observe the parent style. The app is currently using a parent dark action bar theme, Theme.AppCompat.Light.DarkActionBar. The action bar is yellow, a light color, so this should be a light theme. Replace the parent style with Theme.AppCompat.Light.

styles.xml

<style name="AppTheme" parent="Theme.AppCompat.Light">
  <item name="colorPrimary">@color/colorPrimary</item>
  <item name="colorPrimaryDark">@color/colorPrimaryDark</item>
  <item name="colorAccent">@color/colorAccent</item>
</style>

This will change the text of the action bar from white to black. Run the Scanner again to see that this warning is gone.

Where To Go From Here?

By completing this tutorial, you’ve learned many ways to make your apps more accessible. You can download the finished project here.

Now you know how to use TalkBack and the Accessibility Scanner to identify accessibility issues in your app. You also know that you can use Espresso and Lint to catch and make sure issues don’t creep in.

Through using the Accessibility Scanner and TalkBack, you identified areas of the app that could use accessibility improvements, then learned and completed the steps to fix them. You can use contentDescription, importantForAccessibility, focusable, accessibilityTraversalBefore, and announceForAccessibility to give the user the right information at the right time when using a screen reader.

You also know some tips for creating accessible designs. Making sure touch targets are big enough, and you have a high enough color contrast will benefit all your users.

These are some of the main things you can do to make your app accessible, but there are also many more. Make sure to go through Google’s accessibility checklist when creating your app. You’ll find things you learned here, as well as ways to get started making even more improvements.

Write your app with everyone in mind!

Please join the comments below with any questions or thoughts!

The post Android Accessibility Tutorial: Getting Started appeared first on Ray Wenderlich.

Document Based Apps: Defining Custom File Extensions

Unreal Engine 4 Tutorial: Artificial Intelligence


Unreal Engine 4 Tutorial: Getting Started with AI

In video games, Artificial Intelligence (AI) usually refers to how a non-player character makes decisions. This could be as simple as an enemy seeing the player and then attacking. It could also be something more complex, such as an AI-controlled player in a real-time strategy game.

In Unreal Engine, you can create AI by using behavior trees. A behavior tree is a system used to determine which behavior an AI should perform. For example, you could have a fight and a run behavior. You could create the behavior tree so that the AI will fight if it is above 50% health. If it is below 50%, it will run away.

In this tutorial, you will learn how to:

  • Create an AI entity that can control a Pawn
  • Create and use behavior trees and blackboards
  • Use AI Perception to give the Pawn sight
  • Create behaviors to make the Pawn roam and attack enemies
Note: This tutorial is part of a 9-part tutorial series on Unreal Engine:

Getting Started

Download the starter project and unzip it. Navigate to the project folder and open MuffinWar.uproject.

Press Play to start the game. Left-click within the fenced area to spawn a muffin.

Unreal Engine 4 AI Tutorial

In this tutorial, you will create an AI that will wander around. When an enemy muffin comes into the AI’s range of vision, the AI will move to the enemy and attack it.

To create an AI character, you need three things:

  1. Body: This is the physical representation of the character. In this case, the muffin is the body.
  2. Soul: The soul is the entity controlling the character. This could be the player or an AI.
  3. Brain: The brain is how the AI makes decisions. You can create this in different ways such as C++ code, Blueprints or behavior trees.

Since you already have the body, all you need is a soul and brain. First, you will create a controller which will be the soul.

What is a Controller?

A controller is a non-physical actor that can possess a Pawn. Possession allows the controller to—you guessed it—control the Pawn. But what does ‘control’ mean in this context?

For a player, this means pressing a button and having the Pawn do something. The controller receives inputs from the player and then it can send the inputs to the Pawn. The controller could also handle the inputs instead and then tell the Pawn to perform an action.

Unreal Engine 4 AI Tutorial

In the case of AI, the Pawn can receive information from the controller or brain (depending on how you program it).

To control the muffins using AI, you need to create a special type of controller known as an AI controller.

Creating an AI Controller

Navigate to Characters\Muffin\AI and create a new Blueprint Class. Select AIController as the parent class and name it AIC_Muffin.

Unreal Engine 4 AI Tutorial
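
The tutorial sticks with Blueprints, but for reference, an equivalent controller in C++ would just be an AAIController subclass. This is a minimal sketch, assuming your project's Build.cs already depends on the AIModule; the class and file names simply mirror the Blueprint asset names:

// AIC_Muffin.h - a minimal C++ counterpart of the AIC_Muffin Blueprint.
#pragma once

#include "CoreMinimal.h"
#include "AIController.h"
#include "AIC_Muffin.generated.h"

UCLASS()
class AAIC_Muffin : public AAIController
{
    GENERATED_BODY()

    // The behavior tree and perception wiring shown later in this
    // tutorial would live in this class.
};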

Next, you need to tell the muffin to use your new AI controller. Navigate to Characters\Muffin\Blueprints and open BP_Muffin.

By default, the Details panel should show the default settings for the Blueprint. If it doesn’t, click Class Defaults in the Toolbar.

Unreal Engine 4 AI Tutorial

Go to the Details panel and locate the Pawn section. Set AI Controller Class to AIC_Muffin. This will spawn an instance of the controller when the muffin spawns.

Unreal Engine 4 AI Tutorial

Since you are spawning the muffins, you also need to set Auto Possess AI to Spawned. This will make sure AIC_Muffin automatically possesses BP_Muffin when spawned.

Unreal Engine 4 AI Tutorial
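
Both of these Details-panel settings have direct C++ equivalents on APawn. Here's a small sketch for comparison, assuming a hypothetical AMuffin C++ pawn class (the project actually uses the BP_Muffin Blueprint):

// In the pawn's constructor - the C++ counterparts of the two
// Details-panel settings above.
AMuffin::AMuffin()
{
    // Same as setting "AI Controller Class" to AIC_Muffin
    AIControllerClass = AAIC_Muffin::StaticClass();

    // Same as setting "Auto Possess AI" to Spawned
    AutoPossessAI = EAutoPossessAI::Spawned;
}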

Click Compile and then close BP_Muffin.

Now, you will create the logic that will drive the muffin’s behavior. To do this, you can use behavior trees.

Creating a Behavior Tree

Navigate to Characters\Muffin\AI and select Add New\Artificial Intelligence\Behavior Tree. Name it BT_Muffin and then open it.

The Behavior Tree Editor

The behavior tree editor contains three panels:

  1. Behavior Tree: This graph is where you will create nodes to make the behavior tree
  2. Details: The properties of any node you select will display here
  3. Blackboard: This panel shows Blackboard keys (more on this later) and their values. It only displays values while the game is running.

Like Blueprints, behavior trees consist of nodes. There are four types of nodes in behavior trees. The first two are tasks and composites.

What are Tasks and Composites?

As its name implies, a task is a node that "does" something. This can be something complex such as performing a combo. It could also be something simple such as waiting.

Unreal Engine 4 AI Tutorial

To execute tasks, you need to use composites. A behavior tree consists of many branches (the behaviors). At the root of each branch is a composite. Different types of composites have different ways of executing their child nodes.

For example, you have the following sequence of actions:

Unreal Engine 4 AI Tutorial

To perform each action in a sequence, you would use a Sequence composite. This is because a Sequence executes its children from left to right. Here’s what it would look like:

Unreal Engine 4 AI Tutorial

Note: Everything beginning from a composite can be called a subtree. Generally, these are your behaviors. In this example, Sequence, Move To Enemy, Rotate Towards Enemy and Attack can be considered the "attack enemy" behavior.

If any of a Sequence’s children fail, the Sequence will stop executing.

For example, if the Pawn is unable to move to the enemy, Move To Enemy will fail. This means Rotate Towards Enemy and Attack will not execute. However, they will execute if the Pawn succeeds in moving to the enemy.

Later on, you will also learn about the Selector composite. For now, you will use a Sequence to make the Pawn move to a random location and then wait.

Moving to a Random Location

Create a Sequence and connect it to the Root.

Unreal Engine 4 AI Tutorial

Next, you need to move the Pawn. Create a MoveTo and connect it to Sequence. This node will move the Pawn to a specified location or actor.

Unreal Engine 4 AI Tutorial

Afterwards, create a Wait and connect it to Sequence. Make sure you place it to the right of MoveTo. Order is important here because the children will execute from left to right.

Unreal Engine 4 AI Tutorial

Note: You can check the order of execution by looking at the numbers at the top-right of each node. Lower numbered nodes have priority over higher numbered nodes.

Congratulations, you have just created your first behavior! It will move the Pawn to a specified location and then wait for five seconds.

To move the Pawn, you need to specify a location. However, MoveTo only accepts values provided through blackboards, so you will need to create one.

Creating a Blackboard

A blackboard is an asset whose sole function is to hold variables (known as keys). You can think of it as the AI’s memory.

While you are not required to use them, blackboards offer a convenient way to read and store data. They're especially handy because many behavior tree nodes only accept blackboard keys.

To create one, go back to the Content Browser and select Add New\Artificial Intelligence\Blackboard. Name it BB_Muffin and then open it.

The Blackboard Editor

The blackboard editor consists of two panels:

Unreal Engine 4 AI Tutorial

  1. Blackboard: This panel will display a list of your keys
  2. Blackboard Details: This panel will display the properties of the selected key

Now, you need to create a key that will hold the target location.

Creating the Target Location Key

Since you are storing a location in 3D space, you need to store it as a vector. Click New Key and select Vector. Name it TargetLocation.

Unreal Engine 4 AI Tutorial
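
Behavior tree nodes will read this key through the blackboard, but you can also touch it from C++ via UBlackboardComponent's typed getters and setters. A small sketch, assuming the AAIC_Muffin class from earlier is already running a behavior tree that uses BB_Muffin (StoreTargetLocation is a hypothetical helper):

#include "BehaviorTree/BlackboardComponent.h"

void AAIC_Muffin::StoreTargetLocation(const FVector& Location)
{
    // GetBlackboardComponent() is valid once a behavior tree is running
    if (UBlackboardComponent* Blackboard = GetBlackboardComponent())
    {
        // The key name must match the key created in BB_Muffin
        Blackboard->SetValueAsVector(TEXT("TargetLocation"), Location);
    }
}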

Next, you need a way to generate a random location and store it in the blackboard. To do this, you can use the third type of behavior tree node: service.

What is a Service?

Services are like tasks in that you use them to do something. However, instead of making the Pawn perform an action, you use services to perform checks or update the blackboard.

Services are not individual nodes. Instead, they attach to tasks or composites. This results in a more organized behavior tree because you have fewer nodes to deal with. Here’s how it would look using a task:

Unreal Engine 4 AI Tutorial

Here’s how it would look using a service:

Unreal Engine 4 AI Tutorial

Now, it’s time to create a service that will generate a random location.

Creating a Service

Go back to BT_Muffin and click New Service.

Unreal Engine 4 AI Tutorial

This will create a new service and open it automatically. Name it BTService_SetRandomLocation. You’ll need to go back to the Content Browser to rename it.

The service only needs to execute when the Pawn wants to move. To do this, you need to attach it to MoveTo.

Open BT_Muffin and then right-click on MoveTo. Select Add Service\BTService Set Random Location.

Unreal Engine 4 AI Tutorial

Now, BTService_SetRandomLocation will activate when MoveTo activates.

Next, you need to generate a random target location.

Generating a Random Location

Open BTService_SetRandomLocation.

To know when the service activates, create an Event Receive Activation AI node. This will execute when the service’s parent (the node it’s attached to) activates.

Unreal Engine 4 AI Tutorial

Note: There is also another event called Event Receive Activation that does the same thing. The difference between the two events is that Event Receive Activation AI also provides the Controlled Pawn.

To generate a random location, add the highlighted nodes. Make sure to set Radius to 500.

Unreal Engine 4 AI Tutorial

This will give you a random navigable location within 500 units of the Pawn.

Note: GetRandomPointInNavigableRadius uses navigation data (called NavMesh) to determine if a point is navigable. In this tutorial, I have already created the NavMesh for you. You can visualize it by going to the Viewport and selecting Show\Navigation.

Unreal Engine 4 AI Tutorial

If you would like to create your own NavMesh, create a Nav Mesh Bounds Volume. Scale it so that it encapsulates the area you want to be navigable.

Next, you need to store the location into the blackboard. There are two ways of specifying which key to use:

  1. You can specify the key by using its name in a Make Literal Name node
  2. You can expose a variable to the behavior tree. This will allow you to select a key from a drop-down list.

You will use the second method. Create a variable of type Blackboard Key Selector. Name it BlackboardKey and enable Instance Editable. This will allow the variable to appear when you select the service in the behavior tree.

Unreal Engine 4 AI Tutorial

Afterwards, create the highlighted nodes:

Unreal Engine 4 AI Tutorial

Summary:

  1. Event Receive Activation AI executes when its parent (in this case, MoveTo) activates
  2. GetRandomPointInNavigableRadius returns a random navigable location within 500 units of the controlled muffin
  3. Set Blackboard Value as Vector sets the value of a blackboard key (provided by BlackboardKey) to the random location
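
For comparison, here's a rough C++ sketch of the same service, assuming UE 4.20+ (UNavigationSystemV1, with the NavigationSystem module in Build.cs) and a UBTService subclass. OnBecomeRelevant is the closest C++ counterpart to Event Receive Activation AI, and the key name is hardcoded here instead of going through a Blackboard Key Selector:

#include "BehaviorTree/BTService.h" // UBTService_SetRandomLocation derives from UBTService
#include "BehaviorTree/BehaviorTreeComponent.h"
#include "BehaviorTree/BlackboardComponent.h"
#include "AIController.h"
#include "NavigationSystem.h"

void UBTService_SetRandomLocation::OnBecomeRelevant(
    UBehaviorTreeComponent& OwnerComp, uint8* NodeMemory)
{
    Super::OnBecomeRelevant(OwnerComp, NodeMemory);

    const AAIController* Controller = OwnerComp.GetAIOwner();
    const APawn* Pawn = Controller ? Controller->GetPawn() : nullptr;
    if (Pawn == nullptr)
    {
        return;
    }

    // GetRandomPointInNavigableRadius with Radius = 500
    FNavLocation RandomPoint;
    UNavigationSystemV1* NavSys = UNavigationSystemV1::GetCurrent(Pawn->GetWorld());
    if (NavSys && NavSys->GetRandomPointInNavigableRadius(
            Pawn->GetActorLocation(), 500.0f, RandomPoint))
    {
        // Set Blackboard Value as Vector
        OwnerComp.GetBlackboardComponent()->SetValueAsVector(
            TEXT("TargetLocation"), RandomPoint.Location);
    }
}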

Click Compile and then close BTService_SetRandomLocation.

Next, you need to tell the behavior tree to use your blackboard.

Selecting a Blackboard

Open BT_Muffin and make sure you don’t have anything selected. Go to the Details panel. Under Behavior Tree, set Blackboard Asset to BB_Muffin.

Unreal Engine 4 AI Tutorial

Afterwards, MoveTo and BTService_SetRandomLocation will automatically use the first blackboard key. In this case, it is TargetLocation.

Unreal Engine 4 AI Tutorial

Finally, you need to tell the AI controller to run the behavior tree.

Running the Behavior Tree

Open AIC_Muffin and connect a Run Behavior Tree to Event BeginPlay. Set BTAsset to BT_Muffin.

Unreal Engine 4 AI Tutorial

This will run BT_Muffin when AIC_Muffin spawns.
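
The C++ equivalent is a one-liner on AAIController. A sketch, assuming a UBehaviorTree* BTAsset property on AAIC_Muffin that you would point at BT_Muffin in the editor:

#include "BehaviorTree/BehaviorTree.h"

void AAIC_Muffin::BeginPlay()
{
    Super::BeginPlay();

    // BTAsset is assumed to be a UPROPERTY(EditDefaultsOnly) UBehaviorTree*,
    // assigned to BT_Muffin in the editor
    if (BTAsset != nullptr)
    {
        RunBehaviorTree(BTAsset);
    }
}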

Click Compile and then go back to the main editor. Press Play, spawn some muffins and watch them roam around.

Unreal Engine 4 AI Tutorial

That was a lot of setup, but you got there! Next, you will set up the AI controller so that it can detect enemies within its range of vision. To do this, you can use AI Perception.

Setting Up AI Perception

AI Perception is a component you can add to actors. Using it, you can give senses (such as sight and hearing) to your AI.

Open AIC_Muffin and then add an AIPerception component.

Unreal Engine 4 AI Tutorial

Next, you need to add a sense. Since you want to detect when another muffin moves into view, you need to add a sight sense.

Select AIPerception and then go to the Details panel. Under AI Perception, add a new element to Senses Config.

Unreal Engine 4 AI Tutorial

Set element 0 to AI Sight config and then expand it.

Unreal Engine 4 AI Tutorial

There are three main settings for sight:

  1. Sight Radius: The maximum distance the muffin can see. Leave this at 3000.
  2. Lose Sight Radius: If the muffin has seen an enemy, this is how far the enemy must move away before the muffin loses sight of it. Leave this at 3500.
  3. Peripheral Vision Half Angle Degrees: How wide the muffin’s vision is. Set this to 45. This will give the muffin a 90 degree range of vision.

Unreal Engine 4 AI Tutorial

By default, AI Perception only detects enemies (actors assigned to a different team). However, actors do not have a team by default. When an actor doesn’t have a team, AI Perception considers it neutral.

As of writing, there isn’t a method to assign teams using Blueprints. Instead, you can just tell AI Perception to detect neutral actors. To do this, expand Detection by Affiliation and enable Detect Neutrals.

Unreal Engine 4 AI Tutorial
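
The same configuration can also be done in C++ when constructing the controller. A hedged sketch using UAIPerceptionComponent and UAISenseConfig_Sight, with the values mirroring the settings above:

#include "Perception/AIPerceptionComponent.h"
#include "Perception/AISenseConfig_Sight.h"

AAIC_Muffin::AAIC_Muffin()
{
    UAIPerceptionComponent* Perception =
        CreateDefaultSubobject<UAIPerceptionComponent>(TEXT("AIPerception"));

    UAISenseConfig_Sight* SightConfig =
        CreateDefaultSubobject<UAISenseConfig_Sight>(TEXT("SightConfig"));

    // The three main sight settings from the Details panel
    SightConfig->SightRadius = 3000.0f;
    SightConfig->LoseSightRadius = 3500.0f;
    SightConfig->PeripheralVisionAngleDegrees = 45.0f;

    // Equivalent of enabling Detect Neutrals
    SightConfig->DetectionByAffiliation.bDetectNeutrals = true;

    Perception->ConfigureSense(*SightConfig);
    Perception->SetDominantSense(SightConfig->GetSenseImplementation());

    // Register the component as this controller's perception component
    SetPerceptionComponent(*Perception);
}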

Click Compile and then go back to the main editor. Press Play and spawn some muffins. Press the ' (apostrophe) key to display the AI debug screen, then press 4 on the numpad to visualize AI Perception. When a muffin moves into view, a green sphere will appear.

Unreal Engine 4 AI Tutorial

Next, you will move the muffin towards an enemy. To do this, the behavior tree needs to know about the enemy. You can do this by storing a reference to the enemy in the blackboard.

Creating an Enemy Key

Open BB_Muffin and then add a key of type Object. Rename it to Enemy.

Unreal Engine 4 AI Tutorial

Right now, you will not be able to use Enemy in a MoveTo. This is because the key is an Object but MoveTo only accepts keys of type Vector or Actor.

To fix this, select Enemy and then expand Key Type. Set Base Class to Actor. This will allow the behavior tree to recognize Enemy as an Actor.

Unreal Engine 4 AI Tutorial

Close BB_Muffin. Now, you need to create a behavior to move towards an enemy.

Moving Towards An Enemy

Open BT_Muffin and then disconnect Sequence and Root. You can do this by alt-clicking the wire connecting them. Move the roam subtree aside for now.

Next, create the highlighted nodes and set their Blackboard Key to Enemy:

Unreal Engine 4 AI Tutorial

This will move the Pawn towards Enemy. In some cases, the Pawn will not completely face its target, so you also use a Rotate to face BB entry node.

Now, you need to set Enemy when AI Perception detects another muffin.

Setting the Enemy Key

Open AIC_Muffin and then select the AIPerception component. Add an On Perception Updated event.

Unreal Engine 4 AI Tutorial

This event will execute whenever a sense updates. In this case, whenever the AI sees or loses sight of something. This event also provides a list of actors it currently senses.

Add the highlighted nodes. Make sure you set Make Literal Name to Enemy.

Unreal Engine 4 AI Tutorial

This will check if the AI already has an enemy. If it doesn’t, you need to give it one. To do this, add the highlighted nodes:

Summary:

  1. IsValid will check if the Enemy key is set
  2. If it is not set, loop over all the currently detected actors
  3. Cast To BP_Muffin will check if the actor is a muffin
  4. If it is a muffin, check if it is dead
  5. If IsDead returns false, set the muffin as the new Enemy and then break the loop
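
Here's how that handler might look in C++, bound to the component's OnPerceptionUpdated delegate. AMuffin and IsDead() stand in for BP_Muffin and its Blueprint function, so treat both as assumptions:

#include "BehaviorTree/BlackboardComponent.h"

// Bind in BeginPlay (or the constructor); the handler must be a UFUNCTION:
// PerceptionComponent->OnPerceptionUpdated.AddDynamic(
//     this, &AAIC_Muffin::OnMuffinPerceptionUpdated);

void AAIC_Muffin::OnMuffinPerceptionUpdated(const TArray<AActor*>& UpdatedActors)
{
    UBlackboardComponent* Blackboard = GetBlackboardComponent();

    // IsValid check: bail out if Enemy is already set
    if (Blackboard == nullptr ||
        Blackboard->GetValueAsObject(TEXT("Enemy")) != nullptr)
    {
        return;
    }

    // Loop over the currently detected actors
    for (AActor* Actor : UpdatedActors)
    {
        // Cast To BP_Muffin equivalent
        AMuffin* Muffin = Cast<AMuffin>(Actor);
        if (Muffin != nullptr && !Muffin->IsDead())
        {
            // Set the new Enemy and stop looking
            Blackboard->SetValueAsObject(TEXT("Enemy"), Muffin);
            break;
        }
    }
}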

Click Compile and then close AIC_Muffin. Press Play and then spawn two muffins so that one is in front of the other. The muffin behind will automatically walk towards the other muffin.

Unreal Engine 4 AI Tutorial

Next, you will create a custom task to make the muffin perform an attack.

Creating an Attack Task

You can create a task within the Content Browser instead of the behavior tree editor. Create a new Blueprint Class and select BTTask_BlueprintBase as the parent.

Unreal Engine 4 AI Tutorial

Name it BTTask_Attack and then open it. Add an Event Receive Execute AI node. This node will execute when the behavior tree executes BTTask_Attack.

Unreal Engine 4 AI Tutorial

First, you need to make the muffin attack. BP_Muffin contains an IsAttacking variable. When set, the muffin will perform an attack. To do this, add the highlighted nodes:

Unreal Engine 4 AI Tutorial

If you use the task in its current state, execution will become stuck on it. This is because the behavior tree doesn’t know if the task has finished. To fix this, add a Finish Execute to the end of the chain.

Unreal Engine 4 AI Tutorial

Next, enable Success. Since you are using a Sequence, this will allow nodes after BTTask_Attack to execute.

Unreal Engine 4 AI Tutorial

This is what your graph should look like:

Unreal Engine 4 AI Tutorial

Summary:

  1. Event Receive Execute AI will execute when the behavior tree runs BTTask_Attack
  2. Cast To BP_Muffin will check if Controlled Pawn is of type BP_Muffin
  3. If it is, its IsAttacking variable is set
  4. Finish Execute will let the behavior tree know the task has finished successfully
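
For comparison, the same task written as a C++ UBTTaskNode subclass. SetIsAttacking() is a hypothetical setter standing in for BP_Muffin's IsAttacking variable:

#include "BehaviorTree/BTTaskNode.h"
#include "AIController.h"

EBTNodeResult::Type UBTTask_Attack::ExecuteTask(
    UBehaviorTreeComponent& OwnerComp, uint8* NodeMemory)
{
    // Cast To BP_Muffin equivalent, via the controlled pawn
    AAIController* Controller = OwnerComp.GetAIOwner();
    AMuffin* Muffin = Controller ? Cast<AMuffin>(Controller->GetPawn()) : nullptr;

    if (Muffin == nullptr)
    {
        // The cast failing in the Blueprint graph
        return EBTNodeResult::Failed;
    }

    // Same effect as setting IsAttacking in the Blueprint graph
    Muffin->SetIsAttacking(true);

    // Finish Execute with Success enabled
    return EBTNodeResult::Succeeded;
}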

Click Compile and then close BTTask_Attack.

Now, you need to add BTTask_Attack to the behavior tree.

Adding Attack to the Behavior Tree

Open BT_Muffin. Afterwards, add a BTTask_Attack to the end of the Sequence.

Unreal Engine 4 AI Tutorial

Next, add a Wait to the end of the Sequence. Set its Wait Time to 2. This will make sure the muffin doesn’t constantly attack.

Unreal Engine 4 AI Tutorial

Go back to the main editor and press Play. Spawn two muffins like last time. The muffin will move and rotate towards the enemy. Afterwards, it will attack and wait two seconds. It will then perform the entire sequence again if it sees another enemy.

Unreal Engine 4 AI Tutorial

In the final section, you will combine the attack and roam subtrees together.

Combining the Subtrees

To combine the subtrees, you can use a Selector composite. Like Sequences, they execute their children from left to right. However, a Selector will stop when a child succeeds rather than fails. By using this behavior, you can make sure the behavior tree only executes one subtree at a time.

Open BT_Muffin and then create a Selector after the Root node. Afterwards, connect the subtrees like so:

Unreal Engine 4 AI Tutorial

This setup will allow only one subtree to run at a time. Here is how each subtree will run:

  • Attack: Selector will run the attack subtree first. If all tasks succeed, the Sequence will also succeed. The Selector will detect this and then stop executing. This will prevent the roam subtree from running.
  • Roam: The selector will attempt to run the attack subtree first. If Enemy is not set, MoveTo will fail. This will cause Sequence to fail as well. Since the attack subtree failed, Selector will execute its next child which is the roam subtree.

Go back to the main editor and press Play. Spawn some muffins to test it out.

Unreal Engine 4 AI Tutorial

"Hang on, why doesn’t the muffin attack the other one immediately?"

In traditional behavior trees, execution starts from the root on every update. Each update, the tree would try the attack subtree first and then the roam subtree, so it can instantly change subtrees when the value of Enemy changes.

However, Unreal’s behavior trees do not work the same way. In Unreal, execution picks up from the last executed node. Since AI Perception does not sense other actors immediately, the roam subtree begins running. The behavior tree now has to wait for the roam subtree to finish before it can re-evaluate the attack subtree.

To fix this, you can use the final type of node: decorators.

Creating a Decorator

Like services, decorators attach to tasks or composites. Generally, you use decorators to perform checks. If the check passes, the decorator returns true, and vice versa. Using this, you can control whether a decorator’s parent can execute.

Decorators also have the ability to abort a subtree. This means you can stop the roam subtree once Enemy is set. This will allow the muffin to attack an enemy as soon as one is detected.

To use aborts, you can use a Blackboard decorator. These simply check if a blackboard key is or isn’t set. Open BT_Muffin and then right-click on the Sequence of the attack subtree. Select Add Decorator\Blackboard. This will attach a Blackboard decorator to the Sequence.

Unreal Engine 4 AI Tutorial

Next, select the Blackboard decorator and go to the Details panel. Set Blackboard Key to Enemy.

Unreal Engine 4 AI Tutorial

This will check if Enemy is set. If it is not set, the decorator will fail and cause the Sequence to fail. This will then allow the roam subtree to run.

In order to abort the roam subtree, you need to use the Observer Aborts setting.

Using Observer Aborts

Observer aborts will abort a subtree if the selected blackboard key has changed. There are two types of aborts:

  1. Self: This setting will allow the attack subtree to abort itself when Enemy becomes invalid. This can occur if the Enemy dies before the attack subtree completes.
  2. Lower Priority: This setting will cause lower priority trees to abort when Enemy is set. Since the roam subtree is after attack, it is of lower priority.

Set Observer Aborts to Both. This will enable both abort types.

Unreal Engine 4 AI Tutorial

Now, the AI can immediately go back to roaming if it no longer has an enemy. It can also immediately switch to attacking once it detects an enemy.

Here is the complete behavior tree:

Unreal Engine 4 AI Tutorial

Summary of attack subtree:

  1. Selector will run the attack subtree if Enemy is set
  2. If it is set, the Pawn will move and rotate towards the enemy
  3. Afterwards, it will perform an attack
  4. Finally, the Pawn will wait two seconds

Summary of roam subtree:

  1. Selector will run the roam subtree if the attack subtree fails. In this case, it will fail if Enemy is not set.
  2. BTService_SetRandomLocation will generate a random location
  3. The Pawn will move to the generated location
  4. Afterwards, it will wait for five seconds

Close BT_Muffin and then press Play. Spawn some muffins and prepare for the deadliest battle royale ever!

Unreal Engine 4 AI Tutorial

Where to Go From Here?

You can download the completed project here.

As you can see, it’s easy to create a simple AI character. If you want to create more advanced AI, check out the Environment Query System. This system will allow your AI to collect data about the environment and react to it.

If there’s a topic you’d like me to cover, let me know in the comments below!

The post Unreal Engine 4 Tutorial: Artificial Intelligence appeared first on Ray Wenderlich.
