
New Course: Beginning Android Layouts


It’s day 4 of the Android Avalanche: an event where we’ll be releasing new Android and Kotlin books, courses, and screencasts every day!

Today, we are releasing a brand new course: Beginning Android Layouts.

In this 29-video course by Joe Howard, you’ll learn how to use Android’s layout system to lay out the views in your app, regardless of device size. Through a series of hands-on exercises and challenges, you’ll learn about the view hierarchy, basic layout types, the powerful ConstraintLayout, and using the design and XML editors in Android Studio.

Take a look at what’s inside:

Part 1: Introduction to Android Layouts

In part one, learn how view layout and the view hierarchy work in Android.

  1. Introduction: Find out what’s covered in our Beginning Android Layouts video tutorial series.

  2. Building the Starter App: Download the starter app and build it in Android Studio, and take a peek at the included starter layout files.

  3. ViewGroups and Layout Editors: Learn about the foundation of all Android layouts, the ViewGroup class, and explore the layout editors in Android Studio.

  4. The View Hierarchy: Be introduced to the concept of the view hierarchy, and begin your study of layouts by seeing how to define view widths and heights.

  5. Sizes, Margins, and Padding: Learn how to specify size units in Android, and use those size units to space views via margins and padding; a Kotlin sketch follows this list.

  6. Challenge: Padding: Practice adding padding to a view, through a hands-on challenge.

  7. Conclusion: Let’s review what you’ve covered in this first section, and discuss what’s next.
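
If you’d like a taste of how size units work in code, here’s a minimal Kotlin sketch (the dpToPx extension is my own name, not from the course) that converts a dp value into raw pixels before applying it as padding programmatically:

```kotlin
import android.content.Context
import android.util.TypedValue

// Convert a dp value into raw pixels using the device's display metrics.
fun Context.dpToPx(dp: Float): Int =
    TypedValue.applyDimension(
        TypedValue.COMPLEX_UNIT_DIP, dp, resources.displayMetrics
    ).toInt()

// Usage: view.setPadding(view.context.dpToPx(16f), 0, view.context.dpToPx(16f), 0)
```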

Part 2: Basic Layout Types

In the second part, learn how to use the basic layout types: RelativeLayout, LinearLayout, and FrameLayout.

  1. Introduction: Let’s review the basic layout types that you’ll learn about in this section.

  2. RelativeLayout: Learn about how to use the RelativeLayout ViewGroup to position sibling views with specific relationships to one another.

  3. Gravity and Cleanup: Learn how to align elements using the layout_gravity and gravity attributes, and do some code and design cleanup.

  4. Challenge: Relative Layout: Practice creating a RelativeLayout, through a hands-on challenge.

  5. LinearLayout: Learn about how to use the LinearLayout ViewGroup to position views along a horizontal or vertical dimension.

  6. Layout Weights: Discover how to use layout_weight with LinearLayout to divide the shared space between sibling views in a specific ratio; a Kotlin sketch follows this list.

  7. Challenge: LinearLayout: Practice creating a LinearLayout, through a hands-on challenge.

  8. FrameLayout: Learn about how to use the FrameLayout ViewGroup to create a layering of views, and how to switch between the layers.

  9. Challenge: FrameLayout: Practice creating an empty state using FrameLayout, through a hands-on challenge.

  10. Conclusion: Let’s review what you learned about the basic layout types, and discuss what’s next.
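
As a taste of what the layout weights episode covers, here’s a minimal Kotlin sketch (the function and view names are mine, not from the course) that divides a horizontal LinearLayout between two sibling views in a 2:1 ratio:

```kotlin
import android.view.View
import android.widget.LinearLayout

// Divide a horizontal LinearLayout between two siblings in a 2:1 ratio.
// A width of 0 lets layout_weight decide each view's final size.
fun applyTwoToOneWeights(left: View, right: View) {
  left.layoutParams = LinearLayout.LayoutParams(
      0, LinearLayout.LayoutParams.MATCH_PARENT, 2f) // two shares
  right.layoutParams = LinearLayout.LayoutParams(
      0, LinearLayout.LayoutParams.MATCH_PARENT, 1f) // one share
}
```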

Part 3: ConstraintLayout

In part 3, learn how to use the complex but powerful ConstraintLayout.

  1. Introduction: Let’s review what you’ll be learning in this section by introducing ConstraintLayout and constraints.

  2. Converting to ConstraintLayout: Use the converter built into Android Studio to convert an existing layout to ConstraintLayout.

  3. Editing Controls: Create a new ConstraintLayout from scratch, utilizing the layout editor’s editing controls.

  4. Challenge: Converting a Layout: Practice converting an existing layout to ConstraintLayout, through a hands-on challenge.

  5. Challenge: Hiding Empty Views: Practice programmatically hiding empty views in a ConstraintLayout, through a hands-on challenge; a Kotlin sketch follows this list.

  6. Conclusion: Let’s review what you’ve learned about ConstraintLayout, and discuss what’s next.
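
As a taste of the programmatic side of the hiding-empty-views challenge, here’s a minimal Kotlin sketch (the helper name is mine). Constraints that reference a GONE view resolve against a zero-size point, so sibling views collapse into the freed space automatically:

```kotlin
import android.view.View
import android.widget.TextView

// Hide a view with no content; ConstraintLayout siblings constrained
// to it will close up the gap on their own.
fun hideIfEmpty(label: TextView) {
  label.visibility = if (label.text.isNullOrEmpty()) View.GONE else View.VISIBLE
}
```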

Part 4: Configuration Changes

In part 4, learn how to handle device configuration changes, such as rotating between portrait and landscape.

  1. Introduction: Let’s discuss why it’s important to handle both portrait mode and landscape mode device configurations.

  2. Resource Qualifiers: Learn about how resource qualifiers are used to handle different device densities and also for localization.

  3. Landscape Mode: See how to create a layout file tailored specifically for when the device is in the landscape orientation.

  4. Challenge: Landscape Mode: Practice creating a landscape mode layout, through a hands-on challenge.

  5. The Great Flattening: Remove all of the nested layouts in an existing layout to create a completely flat ConstraintLayout.

  6. Conclusion: In this final episode, you’ll get a summary of the course as well as see an overview of Android layout topics that were not covered.

The Android Avalanche Bundle

If you like this course, from now until March 30th you can get it along with the rest of our new Android and Kotlin books, courses, and screencasts — at a big discount!

Our new Android Avalanche Bundle includes:

  • Android Apprentice ($54.99 value): Gives you access to our new Android Apprentice book, which teaches you how to build four complete Android apps from scratch. PDF/ePub format.
  • Kotlin Apprentice ($54.99 value): Gives you access to our new Kotlin Apprentice book, which gives you a deep dive into the Kotlin programming language itself. PDF/ePub format.
  • A raywenderlich.com subscription ($19.99 value): Gives you access to all 8 of our new Android video courses, our 2 new Android screencasts, and access to any new courses and screencasts we release in the future.

The bundle price of $99.99 includes the first month of your subscription, which will continue at $19.99/month thereafter. You can cancel at any time and keep the books. This bundle gives you more than 20% off everything in the Android Avalanche!

The Android Avalanche bundle is only available for the next two weeks, so be sure to order your copy while you can.

Already a subscriber? As a subscriber, you already have access to this new course as part of your subscription. You can also enjoy a $20 discount on the bundle that will get you both books added to your collection. It’s our way of thanking you for supporting what we do here at raywenderlich.com.

Where To Go From Here?

If you want to learn Android and Kotlin development, or level up your existing skills, there’s no better way to learn than these new books, courses, and screencasts.

And this is only the beginning! We’re committed to creating more new books, courses, and screencasts on Android development, with the goal of covering Android and Kotlin in the same way that we’ve covered iOS and Swift over the years.

We truly appreciate your support in making this possible. We’re excited about this new chapter at raywenderlich.com. So order your copy of the Android Avalanche Bundle today before the deal is over!

The post New Course: Beginning Android Layouts appeared first on Ray Wenderlich.


New Course: Beginning RecyclerView


It’s day 5 of the Android Avalanche: an event where we’ll be releasing new Android and Kotlin books, courses, and screencasts every day!

Today, we are releasing a brand new course: Beginning RecyclerView.

In this 37-video course by Joe Howard, you’ll learn how to use Android’s RecyclerView to efficiently display a list of items. Through a series of hands-on exercises and challenges, you’ll set up a basic RecyclerView, learn to use different layout managers, add animation, and more. Take a look at what’s inside:

Part 1: RecyclerView Basics

In part one, learn to bind model data to RecyclerViews.

  1. Introduction: Find out what’s covered in our RecyclerView video tutorial series, from basic setup to animations and common interactions.

  2. The Starter App: Download the starter app and build it in Android Studio, and take a peek at the included starter layout files.

  3. A Basic RecyclerView: See how to set up a basic RecyclerView, along with the corresponding LayoutManager, Adapter, and ViewHolder; a Kotlin sketch follows this list.

  4. Binding the Views: Learn how to connect the model data displayed in the RecyclerView to the corresponding objects in the view layer.

  5. Challenge: RecyclerView: Practice what you’ve learned so far to add more data into the rows of the RecyclerView, and then see a solution.

  6. Responding to Clicks: See how to respond to clicks on the rows of the RecyclerView, and take the user to a detail screen for the corresponding row item.

  7. Challenge: Favorites: Take all the basics of RecyclerView that you’ve learned so far to build a Favorites screen for the sample app.

  8. Conclusion: Let’s review what you’ve covered in this first part on RecyclerView basics, and discuss what’s next.
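
To preview the Adapter/ViewHolder pairing from episode 3, here’s a minimal Kotlin sketch; the adapter class and the R.layout.item_row and R.id.title_text resources are placeholder names of mine, not from the course:

```kotlin
import android.support.v7.widget.RecyclerView
import android.view.LayoutInflater
import android.view.View
import android.view.ViewGroup
import android.widget.TextView

// A hypothetical adapter binding a list of strings into recycled rows.
class TitleAdapter(private val titles: List<String>) :
    RecyclerView.Adapter<TitleAdapter.Holder>() {

  class Holder(view: View) : RecyclerView.ViewHolder(view) {
    val titleText: TextView = view.findViewById(R.id.title_text)
  }

  override fun onCreateViewHolder(parent: ViewGroup, viewType: Int): Holder {
    // Inflate one row layout; the RecyclerView recycles these holders.
    val view = LayoutInflater.from(parent.context)
        .inflate(R.layout.item_row, parent, false)
    return Holder(view)
  }

  override fun onBindViewHolder(holder: Holder, position: Int) {
    // Bind the model data for this position into the recycled row.
    holder.titleText.text = titles[position]
  }

  override fun getItemCount() = titles.size
}
```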

Part 2: Layout Managers

In part two, learn to use the various RecyclerView layout managers: LinearLayoutManager, GridLayoutManager, and StaggeredGridLayoutManager.

  1. Introduction: Let’s take a quick look at the layout managers that you’ll learn about in this part, and describe the capabilities of each.

  2. LinearLayoutManager: Learn more detail about LinearLayoutManager by creating a horizontal RecyclerView on the detail screen.

  3. Nested RecyclerViews: Create a nested RecyclerView, learn about LinearSnapHelper, and improve performance with a RecycledViewPool.

  4. GridLayoutManager: See how to create a grid of items with RecyclerView using GridLayoutManager, replacing the need for GridView.

  5. Custom Span Size: See how to use varying span sizes within a RecyclerView managed by GridLayoutManager using a span size lookup; a Kotlin sketch follows this list.

  6. Challenge: Span Size: Practice setting custom span sizes on a RecyclerView managed by GridLayoutManager, then see a solution.

  7. StaggeredGridLayoutManager: Learn how to handle the case of grid items having different natural sizes using StaggeredGridLayoutManager.

  8. Switching Between Span Sizes: Add a menu to allow switching between span sizes for a RecyclerView managed by a StaggeredGridLayoutManager.

  9. Challenge: Layout Managers: Practice what you’ve learned about layout managers to create a grid of items on the detail screen, then see a solution.

  10. Conclusion: Let’s review what you learned about the various RecyclerView layout managers, and discuss what’s next.
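
Here’s a minimal Kotlin sketch of the span size lookup idea from episode 5; the three-column count and the every-fifth-item rule are my own choices, not the course’s:

```kotlin
import android.support.v7.widget.GridLayoutManager
import android.support.v7.widget.RecyclerView

fun setUpGrid(recyclerView: RecyclerView) {
  val manager = GridLayoutManager(recyclerView.context, 3) // three columns
  manager.spanSizeLookup = object : GridLayoutManager.SpanSizeLookup() {
    // Every fifth item stretches across the full row; the rest take one cell.
    override fun getSpanSize(position: Int) = if (position % 5 == 0) 3 else 1
  }
  recyclerView.layoutManager = manager
}
```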

Part 3: Decorating and Animating

In part three, learn how to use item decorations for spacing and separators and add animation.

  1. Introduction: Learn about the objectives of this part, which are to become familiar with the capabilities of ItemDecoration and to see how to animate items in a RecyclerView.

  2. ItemDecoration: Offsets: See how to use ItemDecoration with a RecyclerView to control the spacing around the items using offsets; a Kotlin sketch follows this list.

  3. ItemDecoration: Drawing: See how to use ItemDecoration with a RecyclerView to create separators between the elements in a list.

  4. Challenge: ItemDecoration: Practice using ItemDecoration with a RecyclerView to create separators between the items in a grid, then see a solution.

  5. Item Animations: Discover how to use animations to add dynamic effects to the presentation of the items in a RecyclerView.

  6. Challenge: Item Animations: Practice using animations to add dynamic effects to the presentation of the items in a RecyclerView.

  7. Conclusion: Let’s review what you learned about using ItemDecoration and animations with a RecyclerView, and discuss what’s next.
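
As a preview of the offsets episode, here’s a minimal offset-only ItemDecoration in Kotlin (the class name is mine): it adds a uniform pixel gap around every item instead of padding each row layout:

```kotlin
import android.graphics.Rect
import android.support.v7.widget.RecyclerView
import android.view.View

// Adds the same spacing on all four sides of every item.
class SpacingDecoration(private val spacePx: Int) : RecyclerView.ItemDecoration() {
  override fun getItemOffsets(outRect: Rect, view: View,
                              parent: RecyclerView, state: RecyclerView.State) {
    outRect.set(spacePx, spacePx, spacePx, spacePx)
  }
}

// Usage: recyclerView.addItemDecoration(SpacingDecoration(16))
```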

Part 4: Section Headers and View Types

In part four, learn about the different view types in a RecyclerView.

  1. Introduction: Learn about the objectives of this part, which are to become familiar with using different view types in a RecyclerView.

  2. Custom Section Headers: See how to sort the items displayed in a RecyclerView into groups and then add section headers to the groups.

  3. Multiple View Types: Use view types to customize the display of items displayed in a RecyclerView, based on the type of the item; a Kotlin sketch follows this list.

  4. Challenge: View Types: Practice using view types to customize the display of certain items in a RecyclerView, then see a solution.

  5. Conclusion: Let’s review what you learned about using view types for items displayed in a RecyclerView, and discuss what’s next.
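
Here’s a minimal Kotlin sketch of the view-type dispatch this part teaches; Row and its subclasses are hypothetical models of mine. Inside an adapter, you’d return viewTypeFor(items[position]) from getItemViewType and inflate a matching layout for each type in onCreateViewHolder:

```kotlin
// A hypothetical model: header rows vs. data rows in one list.
sealed class Row {
  data class Header(val title: String) : Row()
  data class Item(val name: String) : Row()
}

const val TYPE_HEADER = 0
const val TYPE_ITEM = 1

// Map each model subclass to a distinct view type constant.
fun viewTypeFor(row: Row): Int = when (row) {
  is Row.Header -> TYPE_HEADER
  is Row.Item -> TYPE_ITEM
}
```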

Part 5: Common Interactions

In the final part, learn how to handle different interactions with a RecyclerView.

  1. Introduction: Learn about the objectives of this part, which are to become familiar with common interactions with a RecyclerView, such as drag and drop and swipe-to-delete.

  2. ItemTouchHelper: Discover the capabilities and use cases for combining the ItemTouchHelper class with a RecyclerView.

  3. Rearranging Rows: See how to use ItemTouchHelper to create a basic capability to rearrange the items in a RecyclerView.

  4. Handles and Selection: See how to set up drag handles and item selection highlighting using ItemTouchHelper with a RecyclerView.

  5. Challenge: ItemTouchHelper: Practice using ItemTouchHelper to allow for drag and drop rearrangement of the items displayed in a grid via a RecyclerView.

  6. Swipe to Delete: See how to use ItemTouchHelper to add a basic swipe-to-delete capability for the items in a RecyclerView; a Kotlin sketch follows this list.

  7. Conclusion: In this final episode, we’ll summarize the course, and then see an overview of RecyclerView topics that were not covered.
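
As a preview of episode 6’s swipe-to-delete, here’s a minimal Kotlin sketch; the helper and the onDelete callback (which should update the adapter’s data and call notifyItemRemoved) are names of mine, not from the course:

```kotlin
import android.support.v7.widget.RecyclerView
import android.support.v7.widget.helper.ItemTouchHelper

fun attachSwipeToDelete(recyclerView: RecyclerView, onDelete: (Int) -> Unit) {
  val callback = object : ItemTouchHelper.SimpleCallback(0, ItemTouchHelper.LEFT) {
    override fun onMove(rv: RecyclerView, vh: RecyclerView.ViewHolder,
                        target: RecyclerView.ViewHolder) = false // no dragging here

    override fun onSwiped(vh: RecyclerView.ViewHolder, direction: Int) {
      onDelete(vh.adapterPosition) // remove the swiped row from the data set
    }
  }
  ItemTouchHelper(callback).attachToRecyclerView(recyclerView)
}
```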

The Android Avalanche Bundle

If you like this course, from now until March 30th you can get it along with the rest of our new Android and Kotlin books, courses, and screencasts — at a big discount!

Our new Android Avalanche Bundle includes:

  • Android Apprentice ($54.99 value): Gives you access to our new Android Apprentice book, which teaches you how to build four complete Android apps from scratch. PDF/ePub format.
  • Kotlin Apprentice ($54.99 value): Gives you access to our new Kotlin Apprentice book, which gives you a deep dive into the Kotlin programming language itself. PDF/ePub format.
  • A raywenderlich.com subscription ($19.99 value): Gives you access to all 8 of our new Android video courses, our 2 new Android screencasts, and access to any new courses and screencasts we release in the future.

The bundle price of $99.99 includes the first month of your subscription, which will continue at $19.99/month thereafter. You can cancel at any time and keep the books. This bundle gives you more than 20% off everything in the Android Avalanche!

The Android Avalanche bundle is only available for the next two weeks, so be sure to order your copy while you can.

Already a subscriber? As a subscriber, you already have access to this new course as part of your subscription. You can also enjoy a $20 discount on the bundle that will get you both books added to your collection. It’s our way of thanking you for supporting what we do here at raywenderlich.com.

Where To Go From Here?

If you want to learn Android and Kotlin development, or level up your existing skills, there’s no better way to learn than these new books, courses, and screencasts.

And this is only the beginning! We’re committed to creating more new books, courses, and screencasts on Android development, with the goal of covering Android and Kotlin in the same way that we’ve covered iOS and Swift over the years.

We truly appreciate your support in making this possible. We’re excited about this new chapter at raywenderlich.com. So order your copy of the Android Avalanche Bundle today before the deal is over!

The post New Course: Beginning RecyclerView appeared first on Ray Wenderlich.

Looking for an Android Podcast?


With the recent launch of our new Android and Kotlin books, courses, and screencasts, I thought some of you might be looking for a good Android podcast to go along with it all.

So I’d like to introduce you to Android Snacks, a podcast by our friend Michael Scamell. I’ve asked him to explain what the podcast is all about, and his description is below! :]

Note: Do you have any Android podcasts you’d like to recommend to fellow readers? Please join the forum discussion below!

About Android Snacks

Have you ever gotten to the end of the week and realized you don’t know what’s going on in the Android world? Have you ever looked at your inbox and seen that 2-week-old Android newsletter email that you meant to read?

Do you wish there was just some way to keep up with it all?

Well now you can!

The Android Snacks podcast is the TL;DR of last week’s Android developer news. In 5 minutes, find out about the best blog posts, the latest talks, the newest libraries, and all the other stuff like podcasts or YouTube videos.

Let Mike Scamell and his occasional Australian robot co-host Sheila take you on a journey to Android news enlightenment with the occasional attempt at humor.

Episode Highlights

Here are some episode highlights. And make sure you always listen until the end…

Episode 2 (or what happens when you tell your Mum and Dad Jake Wharton’s left Square) Link

Episode 13 (or what happens when your co-host doesn’t like you) Link

Episode 23 (or when Jake Wharton whistled the podcast intro) Link

Where to Go From Here?

We hope you enjoy Android Snacks, and definitely let us know if you have any other podcasts you recommend!

And don’t forget: our special Android Avalanche Bundle sale ends next Friday, March 30, so be sure to order your copy before the deal is over!

The post Looking for an Android Podcast? appeared first on Ray Wenderlich.

New Course: Android Animations


It’s day 6 of the Android Avalanche: an event where we’ll be releasing new Android and Kotlin books, courses, and screencasts every day!

Today, we are releasing a brand new course: Android Animations.

In this 31-video course by Joe Howard, you’ll learn how to add animations to your Android apps to make your users’ experience more dynamic, fun, and effective. Through a series of hands-on exercises and challenges, you’ll learn how to use basic property and view animators, add activity transitions, animate vector drawables, and more!

Take a look at what’s inside:

Part 1: Property Animations

In part one, learn how to use the basic property animators on Android.

  1. Introduction: Find out what’s covered in our Android Animations video tutorial series, from property animations to vector animations and physics-based animations.

  2. The Starter App: Download the starter app and build it in Android Studio, review the existing app code, and check out some built-in animations.

  3. ValueAnimator: See how to perform a basic property animation using ValueAnimator. You’ll animate the payment methods container on the cart screen.

  4. ObjectAnimator: Switch to using an ObjectAnimator for the payment method container, in order to see the difference between ValueAnimator and ObjectAnimator.

  5. Challenge: Animators: Practice what you’ve learned so far to add a hide animation to the payment method container, and then see a solution.

  6. Interpolators: Review some of the various interpolators available for property animations, and settle on AccelerateDecelerateInterpolator.

  7. Animator Sets: See how to combine multiple animators into an AnimatorSet by animating a food item image when adding the item to the cart; a Kotlin sketch follows this list.

  8. Animator Listeners: Use an Animator.AnimatorListener or an AnimatorListenerAdapter to respond to various events for the animation, such as the animation end.

  9. Challenge: Animator Sets: Take what you’ve learned about property animations and AnimatorSet to animate the cart icon count, and then see a solution.

  10. Conclusion: Let’s review what you’ve covered in this first part on Android Animations, and then discuss what’s next.
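
Here’s a minimal Kotlin sketch combining the ideas from the ObjectAnimator and AnimatorSet episodes (the popIn helper is my own name): two ObjectAnimators played together in one set:

```kotlin
import android.animation.AnimatorSet
import android.animation.ObjectAnimator
import android.view.View

// Scale a view up from nothing, animating both axes together.
fun popIn(view: View) {
  val scaleX = ObjectAnimator.ofFloat(view, "scaleX", 0f, 1f)
  val scaleY = ObjectAnimator.ofFloat(view, "scaleY", 0f, 1f)
  AnimatorSet().apply {
    playTogether(scaleX, scaleY)
    duration = 300L // milliseconds
    start()
  }
}
```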

Part 2: View, Transition, and Other Animations

In the second part, learn how to use view animators and work with scenes and transitions.

  1. Introduction: We’ll summarize the animations discussed in this part: view animations, activity transitions, circular reveal, and view pager transforms.

  2. View Animations: Learn about view animations and their differences from property animations. Animate food on the detail screen using a view animation.

  3. Challenge: View Animations: Practice using view animations by scaling the food image and feeding the food to a hungry monster, and then see a solution.

  4. Activity Transitions: Learn about scenes and transitions, and create an activity transition that animates the food image between activities.

  5. Challenge: Activity Transitions: Practice working with activity transitions by animating the food name between activities, and then see a solution.

  6. Circular Reveal: See how to create a circular reveal animation by replacing the payment method container’s show and hide property animations with a circular reveal; a Kotlin sketch follows this list.

  7. View Pager Transformers: Learn about ViewPager transformers and add zoom and depth transformers to the food categories ViewPager.

  8. Challenge: View Pager Transformers: Practice working with ViewPager transformers by updating the depth transformer to switch the direction of the depth animation, and then see a solution.

  9. Conclusion: Let’s review what you learned about the view, transition, and reveal animations discussed in this part of the course, and then discuss what’s next.
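
As a taste of the circular reveal episode, here’s a minimal Kotlin sketch (the function name is mine); circular reveal requires API 21 or later:

```kotlin
import android.view.View
import android.view.ViewAnimationUtils
import kotlin.math.hypot

// Expand a hidden panel from its center out to its far corner.
fun reveal(panel: View) {
  val cx = panel.width / 2
  val cy = panel.height / 2
  val endRadius = hypot(cx.toFloat(), cy.toFloat())
  panel.visibility = View.VISIBLE
  ViewAnimationUtils.createCircularReveal(panel, cx, cy, 0f, endRadius).start()
}
```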

Part 3: Animated Vector Drawables

In part 3, you’ll animate vector drawables and try out the Lottie animation library from Airbnb.

  1. Introduction: Learn about the objectives of this part, which are to understand how to animate vector drawables and to work with the Lottie animation library from Airbnb.

  2. Vector Drawables: Learn about the inner working of vector drawables on Android, as preparation for understanding how to animate them.

  3. Animated Vector Drawables: Use some predefined morphing animations to morph a plus sign to a checkmark and back when adding food to the cart on the detail screen.

  4. Challenge: Animated Vector Drawables: Practice working with animated vector drawables by adding the plus to checkmark animation to the items list screen, and then see a solution.

  5. Lottie: Discover how to work with the Lottie animation library from Airbnb, and animate an image that marks your favorite foods; a Kotlin sketch follows this list.

  6. Conclusion: Let’s review what you learned about using animated vector drawables and Lottie, and then discuss what’s next.
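
Wiring up Lottie takes very little code. Here’s a minimal Kotlin sketch, assuming a LottieAnimationView in your layout and a heart.json animation file in src/main/assets (both names are placeholders of mine):

```kotlin
import com.airbnb.lottie.LottieAnimationView

// Load a bundled After Effects animation and play it once.
fun playFavoriteAnimation(view: LottieAnimationView) {
  view.setAnimation("heart.json")
  view.playAnimation()
}
```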

Part 4: Physics-based Animations

In the final part, learn to use spring and fling animations.

  1. Introduction: Learn about the objective of this part, which is to work with the physics-based animations provided in the Android dynamic animation support library.

  2. Spring Animations: Add the dynamic animation support library to the project and see how to add a spring animation to a donut image on the checkout screen; a Kotlin sketch follows this list.

  3. Challenge: Spring Animations: Practice working with spring animations by adding vertical spring animations to the donut, then see a solution.

  4. Fling Animations: Use an ObjectAnimator to animate a block at the top of the screen, then use a fling animation to fling a donut at the block to try to win free donuts!

  5. Challenge: Fling Animations: Practice working with fling animations by adding a cookie fling animation to try to win free cookies, then see a solution.

  6. Conclusion: In this final episode, we’ll summarize this final part and the whole course, and then see an overview of Android Animation topics that were not covered.
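
Here’s a minimal Kotlin sketch of the spring idea from episode 2, using the pre-AndroidX dynamic animation support library package names current when this course shipped (the helper name is mine):

```kotlin
import android.support.animation.DynamicAnimation
import android.support.animation.SpringAnimation
import android.support.animation.SpringForce
import android.view.View

// A physics-based spring that settles a dragged view back to
// translationY = 0 with a soft, bouncy feel.
fun springBack(donut: View) {
  SpringAnimation(donut, DynamicAnimation.TRANSLATION_Y, 0f).apply {
    spring.stiffness = SpringForce.STIFFNESS_LOW
    spring.dampingRatio = SpringForce.DAMPING_RATIO_MEDIUM_BOUNCY
    start()
  }
}
```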

The Android Avalanche Bundle

If you like this course, from now until March 30th you can get it along with the rest of our new Android and Kotlin books, courses, and screencasts — at a big discount!

Our new Android Avalanche Bundle includes:

  • Android Apprentice ($54.99 value): Gives you access to our new Android Apprentice book, which teaches you how to build four complete Android apps from scratch. PDF/ePub format.
  • Kotlin Apprentice ($54.99 value): Gives you access to our new Kotlin Apprentice book, which gives you a deep dive into the Kotlin programming language itself. PDF/ePub format.
  • A raywenderlich.com subscription ($19.99 value): Gives you access to all 8 of our new Android video courses, our 2 new Android screencasts, and access to any new courses and screencasts we release in the future.

The bundle price of $99.99 includes the first month of your subscription, which will continue at $19.99/month thereafter. You can cancel at any time and keep the books. This bundle gives you more than 20% off everything in the Android Avalanche!

The Android Avalanche bundle is only available for the next two weeks, so be sure to order your copy while you can.

Already a subscriber? As a subscriber, you already have access to this new course as part of your subscription. You can also enjoy a $20 discount on the bundle that will get you both books added to your collection. It’s our way of thanking you for supporting what we do here at raywenderlich.com.

Where To Go From Here?

If you want to learn Android and Kotlin development, or level up your existing skills, there’s no better way to learn than these new books, courses, and screencasts.

And this is only the beginning! We’re committed to creating more new books, courses, and screencasts on Android development, with the goal of covering Android and Kotlin in the same way that we’ve covered iOS and Swift over the years.

We truly appreciate your support in making this possible. We’re excited about this new chapter at raywenderlich.com. So order your copy of the Android Avalanche Bundle today before the deal is over!

The post New Course: Android Animations appeared first on Ray Wenderlich.

Large Mobile Dev Teams and The Android Avalanche – Podcast S07 E13


In our season 7 finale, Dru and Janie bring back Capital One’s Louie de la Rosa to talk about working with large mobile development teams and then Razeware’s Joe Howard opens up the gates on the Android Avalanche.

[Subscribe in iTunes] [RSS Feed]

This episode is sponsored by The Android Avalanche.

Interested in sponsoring a podcast episode? We sell ads via Syndicate Ads, check it out!

Episode Links

Large Mobile Development Teams

The Android Avalanche

Contact Us

Where To Go From Here?

We hope you enjoyed this episode of our podcast. Be sure to subscribe in iTunes to get notified when the next episode comes out.

We’d love to hear what you think about the podcast, and any suggestions on what you’d like to hear in future episodes. Feel free to drop a comment here, or email us anytime at podcast@raywenderlich.com.

The post Large Mobile Dev Teams and The Android Avalanche – Podcast S07 E13 appeared first on Ray Wenderlich.

New Course: Saving Data on Android


It’s day 7 of the Android Avalanche: an event where we’ll be releasing new Android and Kotlin books, courses, and screencasts every day!

Today, we are releasing a brand new course: Saving Data on Android.

In this 32-video course by Joe Howard, you’ll learn how to persist data in your Android apps across app restarts. Through a series of hands-on exercises and challenges, you’ll use SharedPreferences, read and write files to storage, save data using SQLite, and try out the new Room library to save data!

Take a look at what’s inside:

Part 1: SharedPreferences

In part one, learn how to save data using SharedPreferences.

  1. Introduction: Find out what’s covered in our Saving Data on Android video tutorial series: SharedPreferences, saving to files, SQLite, and Room.

  2. The Starter App: Download the starter app, build it in Android Studio, and review the existing app code. Review the Model-View-Presenter (MVP) and Repository patterns.

  3. Getting SharedPrefs: Learn about use cases for SharedPreferences, see how to access the default SharedPreferences, and also see how to set up a custom SharedPreferences.

  4. Reading and Writing SharedPrefs: Learn how to save data into SharedPreferences, and how to read the data back in; a Kotlin sketch follows this list.

  5. Challenge: Reading and Writing SharedPrefs: Practice what you’ve learned so far to save data into SharedPreferences and then read the data back in.

  6. SharedPrefs Repository: Switch the app data from an in-memory repository to a repository backed by SharedPreferences. Learn about the limitations of SharedPreferences.

  7. Challenge: SharedPrefs Repository: Practice some more with the Repository pattern by deleting data from the SharedPreferences repository.

  8. Conclusion: Let’s review what you’ve covered on SharedPreferences and the Repository pattern in this first part on Saving Data on Android, and then discuss what’s next.
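
As a preview of the reading and writing episodes, here’s a minimal Kotlin sketch of writing a value into a custom SharedPreferences file and reading it back; the "app_prefs" file and "user_name" key are placeholder names of mine:

```kotlin
import android.content.Context

fun saveUserName(context: Context, name: String) {
  context.getSharedPreferences("app_prefs", Context.MODE_PRIVATE)
      .edit()
      .putString("user_name", name)
      .apply() // asynchronous write; use commit() for a synchronous one
}

fun loadUserName(context: Context): String? =
    context.getSharedPreferences("app_prefs", Context.MODE_PRIVATE)
        .getString("user_name", null)
```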

Part 2: Saving to Files

In the second part, learn to read and write files to storage.

  1. Introduction: We’ll survey various issues with writing files, including internal vs. external storage, permissions, thread concerns, and checking for disk space.

  2. Writing to Internal Storage: Learn how to write files to internal storage. You’ll use Gson to convert the app data and peer into the Android file system using Device File Explorer. A Kotlin sketch follows this list.

  3. Reading from Internal Storage: See how to read files from internal storage using the Java SDK FileInputStream and BufferedReader classes.

  4. Challenge: Internal Storage: Practice what you’ve learned about saving to internal storage by safeguarding your app against read and write errors.

  5. External Storage: Switch from using internal storage to external storage for saving the app data, and see how to use adb to pull files from a device.

  6. Deleting Files: Update the FileRepository to delete files, using some of the helper functions you’ve already developed.

  7. Challenge: Deleting Files: Finish the use of the FileRepository for saving app data by practicing what you’ve learned to delete files.

  8. Conclusion: Let’s review what you learned in this part of the course about saving data into files, reading files, and deleting files, and then discuss what’s next.
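
Here’s a minimal Kotlin sketch of the internal storage round trip from the first episodes, skipping the Gson step; the "drops.json" filename is a placeholder of mine:

```kotlin
import android.content.Context

// Write a string to a private file in internal storage.
fun writeToInternalStorage(context: Context, json: String) {
  context.openFileOutput("drops.json", Context.MODE_PRIVATE).use { stream ->
    stream.write(json.toByteArray())
  }
}

// Read the whole file back as a string.
fun readFromInternalStorage(context: Context): String =
    context.openFileInput("drops.json").bufferedReader().use { it.readText() }
```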

Part 3: SQLite

In part 3, you’ll learn how to save data using SQLite.

  1. Introduction: You are introduced to SQLite and the concepts of relational databases, including tables, columns, keys, and relationships.

  2. Creating a Database: Create a database schema and use SQLiteOpenHelper to create a basic database. See how to use sqlite3 at the command line to review the database schema.

  3. Writing Data: Use ContentValues as a means of writing data into SQLite, and see how to use sqlite3 at the command line to query the database; a Kotlin sketch follows this list.

  4. Challenge: Writing Data: Practice what you’ve learned so far about SQLite and ContentValues to write data into the SQLite database.

  5. Reading Data: Use Cursor and its companion CursorWrapper to read data from the SQLite database and into model objects.

  6. Deleting Data: See how to delete data from a SQLite database, and how to guard against SQL injection attacks in your SQL code.

  7. Challenge: Deleting Data: Finish the use of the SQLiteRepository for saving app data by practicing what you’ve learned to delete data.

  8. Database Migrations: Learn about using database versioning and database migrations to handle modifying your database schema after your app has shipped.

  9. Conclusion: Let’s review what you learned about saving data in your app using SQLite, and then discuss what’s next.
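
As a preview of the writing and deleting episodes, here’s a minimal Kotlin sketch of an insert with ContentValues and a parameterized delete; the table and column names are placeholders of mine:

```kotlin
import android.content.ContentValues
import android.database.sqlite.SQLiteDatabase

fun insertDrop(db: SQLiteDatabase, message: String) {
  val values = ContentValues().apply {
    put("message", message)
  }
  db.insert("drops", null, values)
}

fun deleteDrop(db: SQLiteDatabase, id: Long) {
  // The ? placeholder keeps user-supplied values out of the SQL string,
  // guarding against SQL injection.
  db.delete("drops", "_id = ?", arrayOf(id.toString()))
}
```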

Part 4: Room

In the final part, use the new Room library, part of the Android Architecture Components, to save data.

  1. Introduction: Learn about the Architecture Components from Google, including LiveData, ViewModel, and the data persistence library Room that provides a layer above SQLite.

  2. Architecture: Review a new architecture of the DataDrop app that uses Model-View-ViewModel (MVVM) instead of MVP, as preparation for working with LiveData and Room.

  3. Entities and DAOs: Learn about the use of entities and data access objects (DAOs), and see how to set them both up using annotations; a Kotlin sketch follows this list.

  4. Room Database: See how to create a Room database and how to read data from and write data into Room. See also how to use TypeConverters and observe changes to LiveData.

  5. Challenge: Room Database: Practice what you’ve learned about Room by finishing the RoomRepository to handle the delete use cases.

  6. Relationships: See how to set up relationships in a Room database. Also see how to pre-populate Room data using a RoomDatabase.Callback.

  7. Conclusion: In this final episode, we’ll summarize this final part and the whole course, and then see an overview of data persistence topics that were not covered.
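
Here’s a minimal Kotlin sketch of the entities-and-DAOs idea, using the pre-AndroidX Architecture Components packages current when this course shipped; the Drop entity is a placeholder of mine loosely based on the course’s DataDrop app:

```kotlin
import android.arch.lifecycle.LiveData
import android.arch.persistence.room.Dao
import android.arch.persistence.room.Entity
import android.arch.persistence.room.Insert
import android.arch.persistence.room.PrimaryKey
import android.arch.persistence.room.Query

@Entity(tableName = "drops")
data class Drop(
    @PrimaryKey(autoGenerate = true) val id: Long = 0,
    val message: String
)

@Dao
interface DropDao {
  @Insert
  fun insert(drop: Drop)

  // Returning LiveData lets the UI observe changes automatically.
  @Query("SELECT * FROM drops")
  fun allDrops(): LiveData<List<Drop>>
}
```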

The Android Avalanche Bundle

If you like this course, from now until March 30th you can get it along with the rest of our new Android and Kotlin books, courses, and screencasts — at a big discount!

Our new Android Avalanche Bundle includes:

  • Android Apprentice ($54.99 value): Gives you access to our new Android Apprentice book, which teaches you how to build four complete Android apps from scratch. PDF/ePub format.
  • Kotlin Apprentice ($54.99 value): Gives you access to our new Kotlin Apprentice book, which gives you a deep dive into the Kotlin programming language itself. PDF/ePub format.
  • A raywenderlich.com subscription ($19.99 value): Gives you access to all 8 of our new Android video courses, our 2 new Android screencasts, and access to any new courses and screencasts we release in the future.

The bundle price of $99.99 includes the first month of your subscription, which will continue at $19.99/month thereafter. You can cancel at any time and keep the books. This bundle gives you more than 20% off everything in the Android Avalanche!

The Android Avalanche bundle is only available for the next two weeks, so be sure to order your copy while you can.

Already a subscriber? As a subscriber, you already have access to this new course as part of your subscription. You can also enjoy a $20 discount on the bundle that will get you both books added to your collection. It’s our way of thanking you for supporting what we do here at raywenderlich.com.

Where To Go From Here?

If you want to learn Android and Kotlin development, or level up your existing skills, there’s no better way to learn than these new books, courses, and screencasts.

And this is only the beginning! We’re committed to creating more new books, courses, and screencasts on Android development, with the goal of covering Android and Kotlin in the same way that we’ve covered iOS and Swift over the years.

We truly appreciate your support in making this possible. We’re excited about this new chapter at raywenderlich.com. So order your copy of the Android Avalanche Bundle today before the deal is over!

The post New Course: Saving Data on Android appeared first on Ray Wenderlich.

New Course: Android Networking


It’s day 8 of the Android Avalanche: an event where we’ll be releasing new Android and Kotlin books, courses, and screencasts every day!

Today, we are releasing a brand new course: Android Networking.

In this 27-video course, Joe Howard will show you how to make calls to REST APIs and how to send and receive structured data. Through a series of hands-on exercises and challenges, you’ll learn some fundamental concepts of networking, how to use the Retrofit networking library, how to handle tasks like authentication, and more!

Take a look at what’s inside:

Part 1: Networking Basics

In part one, learn how fundamental concepts of networking like HTTP, requests, and responses work.

  1. Introduction: Find out what’s covered in our Android Networking video tutorial series: HTTP basics, HttpUrlConnection, and using the Retrofit library from Square.
  2. The Starter App: Download the starter app, build it in Android Studio, and review the existing app code. Review the use of ViewModel, LiveData, and the Repository pattern.
  3. HTTP Basics: Learn about basic concepts of HTTP requests, JSON, using REST APIs and consuming responses, and use the Postman REST client to investigate the GitHub API.
  4. Check Connectivity: See how to check for network connectivity from the starter app, and set up network state access permissions.
  5. HttpUrlConnection: Use the HttpUrlConnection class, along with an AsyncTask, to query the GitHub repos API for any GitHub user.
  6. Challenge: HttpUrlConnection: Practice what you’ve learned so far to query the GitHub API and retrieve a list of code gists for any GitHub user.
  7. Parsing JSON: Learn how to parse the structured HTTP data responses from a REST API using JSONObject and JSONArray; a Kotlin sketch follows this list.
  8. Challenge: Parsing JSON: Practice what you’ve learned about parsing JSON responses to parse the code gists JSON data for any GitHub user.
  9. Challenge: Profile: Practice all that you’ve learned about making HTTP connections and parsing JSON data to retrieve and display a GitHub user’s profile information.
  10. Conclusion: Let’s review what you’ve covered on networking basics in this first part of Android Networking, and then discuss what’s next.
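
As a taste of the JSON parsing episode, here’s a minimal Kotlin sketch that pulls the "name" field out of each element of a JSON array response, such as the GitHub repos payload (the function name is mine):

```kotlin
import org.json.JSONArray

fun parseRepoNames(json: String): List<String> {
  val repos = JSONArray(json)
  return (0 until repos.length()).map { i ->
    repos.getJSONObject(i).getString("name")
  }
}
```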

Part 2: Getting Started with Retrofit

In the second part, learn to set up the Retrofit networking library.

  1. Introduction: We’ll review the various issues with making HTTP connections from Android, then see a preview of how the Retrofit library from Square can help.
  2. Setting up Retrofit: Learn about how to set up the Retrofit library and Gson parsing library as dependencies for your Android project.
  3. GET Requests: See how to make a basic GET request using Retrofit, and also see how to use the Android Profiler to monitor network requests; a Kotlin sketch follows this list.
  4. Challenge: GET Requests: Practice what you’ve learned about making GET requests with Retrofit to retrieve a GitHub user’s gists and profile.
  5. Parsing with Converters: Use the Gson parsing library from Google to automatically parse the response data from Retrofit requests into Kotlin instances.
  6. Challenge: Parsing with Converters: Practice what you’ve learned about Gson and Retrofit to show the user profile data obtained from GitHub.
  7. Logging Interceptor: Set up an HttpLoggingInterceptor for an OkHttp client and use the client with your Retrofit instance.
  8. Error Handling: See how to handle errors that occur when making API requests with Retrofit by creating an Either class that can represent both success and error.
  9. Challenge: Error Handling: Practice what you’ve learned about error handling to handle errors that occur when making requests for GitHub repos and gists.
  10. Conclusion: Let’s review what you’ve covered about getting started with Retrofit in this second part of Android Networking, and then discuss what’s next.
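
Here’s a minimal Kotlin sketch of the Retrofit setup this part builds toward: an interface for the GitHub repos endpoint with Gson doing the parsing. The Repo model is a deliberately minimal placeholder of mine:

```kotlin
import retrofit2.Call
import retrofit2.Retrofit
import retrofit2.converter.gson.GsonConverterFactory
import retrofit2.http.GET
import retrofit2.http.Path

data class Repo(val name: String)

interface GitHubService {
  @GET("users/{user}/repos")
  fun listRepos(@Path("user") user: String): Call<List<Repo>>
}

val service: GitHubService = Retrofit.Builder()
    .baseUrl("https://api.github.com/")
    .addConverterFactory(GsonConverterFactory.create())
    .build()
    .create(GitHubService::class.java)
```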

Part 3: More with Retrofit

In the last part, learn how to handle networking tasks like authentication and posts.

  1. Introduction: Survey other capabilities of the Retrofit library, and preview the network requests that will be seen in this part of the course.
  2. Authentication: Learn how to authenticate into a GitHub user’s account by retrieving an OAuth2 token from the GitHub API.
  3. POST Requests: Use an interceptor to add the OAuth2 token into requests made by Retrofit, and then make POST requests to add a new gist into the user’s GitHub account; an interceptor sketch follows this list.
  4. Challenge: POST Requests: Practice what you’ve learned about making POST requests to send more data back when creating a new gist for the authenticated GitHub user.
  5. DELETE Requests: Learn how to make a DELETE request with Retrofit and then add the ability to delete gists for the authenticated GitHub user.
  6. Challenge: Retrofit: Practice all that you’ve learned about Retrofit to send an update request for profile data for the authenticated GitHub user.
  7. Conclusion: In this final episode, we’ll summarize both this last part and the whole course, and then see an overview of Android networking topics that were not covered.
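
As a preview of the token interceptor from episode 3, here’s a minimal Kotlin sketch; it assumes `token` already holds an OAuth2 token retrieved from the GitHub API:

```kotlin
import okhttp3.OkHttpClient

// Build an OkHttp client whose interceptor stamps every outgoing
// request with the GitHub Authorization header.
fun authenticatedClient(token: String): OkHttpClient =
    OkHttpClient.Builder()
        .addInterceptor { chain ->
          val request = chain.request().newBuilder()
              .header("Authorization", "token $token")
              .build()
          chain.proceed(request)
        }
        .build()
```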

The Android Avalanche Bundle

If you like this course, from now until March 30th you can get it along with the rest of our new Android and Kotlin books, courses, and screencasts — at a big discount!

Our new Android Avalanche Bundle includes:

  • Android Apprentice ($54.99 value): Gives you access to our new Android Apprentice book, which teaches you how to build four complete Android apps from scratch. PDF/ePub format.
  • Kotlin Apprentice ($54.99 value): Gives you access to our new Kotlin Apprentice book, which gives you a deep dive into the Kotlin programming language itself. PDF/ePub format.
  • A raywenderlich.com subscription ($19.99 value): Gives you access to all 8 of our new Android video courses, our 2 new Android screencasts, and access to any new courses and screencasts we release in the future.

The bundle price of $99.99 includes the first month of your subscription, which will continue at $19.99/month thereafter. You can cancel at any time and keep the books. This bundle gives you more than 20% off everything in the Android Avalanche!

The Android Avalanche bundle is only available for the next two weeks, so be sure to order your copy while you can.

Already a subscriber? As a subscriber, you already have access to this new course as part of your subscription. You can also enjoy a $20 discount on the bundle that will get you both books added to your collection. It’s our way of thanking you for supporting what we do here at raywenderlich.com.

Where To Go From Here?

If you want to learn Android and Kotlin development, or level up your existing skills, there’s no better way to learn than these new books, courses, and screencasts.

And this is only the beginning! We’re committed to creating more new books, courses, and screencasts on Android development, with the goal of covering Android and Kotlin in the same way that we’ve covered iOS and Swift over the years.

We truly appreciate your support in making this possible. We’re excited about this new chapter at raywenderlich.com. So order your copy of the Android Avalanche Bundle today before the deal is over!

The post New Course: Android Networking appeared first on Ray Wenderlich.

Droidcon Boston 2018 Conference Report


There are many awesome Android conferences that occur throughout the year. The Droidcon series of conferences are some of my favorites. The high quality of the presentations is consistently impressive, and for those that can’t be there in person, the conference videos are posted online soon after the events.

Droidcons occur throughout the year in Berlin, San Francisco, New York, and many other cities. In April 2017, Droidcon Boston held its first annual conference. I’ve had the opportunity to attend both Droidcon Boston 2017 and this year’s event, which was held March 26-27, 2018.

This year’s Droidcon Boston was packed with attendees and excitement. Most of the sessions I attended used Kotlin in their code examples. There were talks that were architecture focused, some doing deep dives into tools like ProGuard and Gradle, and others discussing newer technologies such as ARCore, TensorFlow, and coroutines.

In this article, I’ll give you highlights and summaries of the sessions I was able to attend. There were many other conflicting sessions and workshops I wanted to attend that aren’t included in this report, but luckily we should be able to check them all out online soon.

We’ll update this post with a link to the conference videos once they’re available.

Conference Welcome Screen

Day 1

Community Addiction: The 2nd Edition

The conference was kicked off with an introduction by organizers Giorgio Natili, Software Development Manager at Amazon, Eliza Camberogiannis, Android Developer at Pixplicity, and Garima Jain, Senior Android Engineer at Fueled. They call themselves the EGG team: Eliza, Garima, Giorgio! :]

The introduction detailed the conference info: 2 tracks with 25 talks and 6 workshops, lightning talks at lunchtime, and a Facebook happy hour followed by the official party.

The organizers discussed the following principles of the conference: good people; diversity; cool stuff including Kotlin, Gradle, and ML; respect; trust; love; integrity; and, being together.

Giorgio ended the introduction with a quote:

Keep calm, learn all the things, and have fun at Droidcon Boston 2018.

Keynote: Tips for Library Development from a Startup Developer

Lisa Wray, Android Developer, GDE, Present

Lisa’s talk was all about the why, when, and how of writing libraries. It was her “Boston edition” and presented in the style of XKCD. :]

Lisa started off with a great question: why write open-source libraries for free on the Internet? She made the case that writing libraries not only benefits the community, but is really in your own self-interest, and proved the point with a cool self-interest Venn diagram. She explained that writing a library is like joining a startup, just without all the money, but that you can benefit in unexpected ways. You often get community input: other people writing code for you for free. Sometimes you write a library out of bare necessity, or sometimes just to learn something new in a sandbox.

The next question then becomes: so you want to write a library, how do you go about doing so? Lisa explained that you want to start small, in accordance with Parkinson’s Law of work expanding to fill all available time. Try to focus first on a single use case. Try not to use the word “and” when explaining what the library does. Using the library should take just a couple of lines of code: a Gradle dependency plus a dead-simple example.

Lisa then shared a great list of principles to follow as you develop and release the library. Be your own user and dogfood your library, because if it’s not good enough for you, it’s not good enough for others. Be honest about the library scope and size. Use semantic versioning, and use alpha and beta releases to get feedback. Don’t advertise your library all over the place, e.g. in Stack Overflow answers; let users find it on their own. Be sure to include sufficient testing and continuous integration, which will greatly help you know whether to accept pull requests when you receive them.

Lisa wrapped up her session with some things to watch out for. Determine ownership for the work you do building a library while employed at big, medium, and small companies. Be ready to get some competition. And also realize that your library may eventually get deprecated by the Android SDK taking over its features.

I really loved Lisa’s last few points. She pointed out that no library is too small. Writing them is a great way to practice your craft. Look inside your app for boilerplate code, utilities or extensions. The community will thank you for a well-focused library.

Pragmatic Gradle for Your Multi-Module Projects

Clive Lee, Android Developer, Ovia Health

Gradle is the powerful build tool we use to make Android apps, but not everyone has mastered all its intricacies, especially for multi-module projects. Clive’s talk aimed to give you pragmatic tips to help get you to Gradle mastery.

Clive started by defining some terminology, like the root project and its build.gradle file and dependency coordinates. He then explained that you need to keep Gradle code clean because, well, it’s code. He also discussed why you may want to use multiple modules, because you can then use the tools to enforce dependencies and clean architecture layers. You also end up with faster builds thanks to compilation avoidance and caching. And loading multi-module projects into Android Studio is getting faster.

Next, Clive walked through his set of tips. The first were about using parallel execution and caching, via adding org.gradle.parallel=true and org.gradle.caching=true to your gradle.properties file. He recommends the android-cache-fix-gradle-plugin tool. He pointed out that the project must be modularized for parallel Gradle execution to occur.

When using multiple modules, you don’t want dependency version numbers in multiple places. You need a better way. Clive discussed techniques progressing from putting the version numbers in the project-level build.gradle, to putting the entire dependency coordinate in the project-level build.gradle, to using a separate versions.gradle file, and finally, using a Java file to hold the coordinates, which gives you both code navigation and completion for the dependencies. Clive also mentioned using the Kotlin-Gradle DSL and that you can also store other constants like your minSdkVersion using these various techniques.

Clive then walked through how to get organized with your Gradle files. Put build.gradle files in a directory. Use settings.gradle to add the name of the directory, and you can then access it easily in the Android Studio Project panel. Another great help is to rename each module’s build.gradle file to correspond to the module name, since it makes navigating and searching easier. You need to set buildFileName on each of the rootProject’s subprojects, as in the sketch below. It’s well worth 5 minutes of time. You can use a bash script for renaming, and be sure to do the renaming for any new modules you add to the project.
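
Here’s a minimal sketch of that renaming tip as a settings.gradle.kts file, written in the Kotlin-Gradle DSL the talk mentioned (the exact script is my own reconstruction, not Clive’s):

```kotlin
// settings.gradle.kts: point each module at a build file named after the
// module, e.g. app/app.gradle.kts instead of app/build.gradle.
rootProject.children.forEach { module ->
  module.buildFileName = "${module.name}.gradle.kts"
}
```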

The end of the talk discussed pitfalls you may run into. Avoid using a “core” module for common dependencies, since it enlarges the surface between modules; the result is slower builds. Use a shared Gradle file instead. Also, proceed with caution when using nested modules: while they may make navigating easier, they lead to a slower reload when switching branches, and they do not show up in the Project panel in Android Studio.

Lunch & a Lightning Talk: Background Processing

Karl Becker, Co-Founder / CTO, Third Iron

One really fun part of Droidcon Boston is hearing some talks during lunch. On the first day, Karl Becker gave some lessons learned adding notifications and background processing to an app.

Some key considerations are: platform differences, battery optimizations, notification-specific issues, and targeting the latest APIs. You also need to avoid skewing your metrics, since running the app in the background may increase your analytics session count. Determine what your analytics library does specifically, and consider sending foreground and background events to different buckets.

Once they rolled out the updated app with notifications, Karl and his team saw that Android notifications get tapped three times as often as in their iOS app. He pointed out that even higher multiples are seen in articles online. Some possible reasons are that notifications are just a lot easier to get to on Android, and they go away quickly on iOS.

Key takeaways from Karl were: background processing is easy to implement, but hard to test, so schedule twice the time for testing. Support SDK 21 or newer if possible. Don’t expect background events to happen at specific times. And notifications will be more impactful on Android than on iOS.

Practical Kotlin with a Disco Ball

Nicholas DiPatri, Principal Engineer, Comcast Corporation

In one of the first sessions after lunch on Day 1, Nicholas gave a great walkthrough of Kotlin from a Java developer’s perspective, and used a disco ball with some Bee Gees music to do so! He’s built an app named RoboButton in Kotlin, which uses Bluetooth beacons to control nearby things with your phone, such as the aforementioned disco ball. :]

Disco Ball

The first question answered was: what is Kotlin? It’s a new language from JetBrains that compiles to bytecode for the JVM, so it’s identical to Java at the bytecode level.

Next question: why use it? It’s way better than Java, fully supported by Google on Android, and Android leaders in the community use it. The only real risks with moving to Kotlin are the learning curve of a new language, and that the auto-converter in Android Studio from Java to Kotlin is imperfect.

Nicholas then gave a summary of syntactic Kotlin. You have mutable properties using var, which you can re-assign. The properties have synthesized accessors when used from Java. You create a read-only property with val. It’s wrong to call a val immutable; it just can’t be re-assigned. Kotlin has type inference and compile-time null checking that help you avoid runtime null pointer exceptions. There are safety and “Elvis” operators and smart casts for use with nullable values. For object construction and initialization, you can combine the class declaration and constructor in one line. You can have named arguments, and an init block is used to contain initialization code. There is a subtle distinction between property and field in Kotlin: fields hold state, whereas properties expose state. Kotlin provides backing fields for properties.
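
Here’s a minimal, runnable Kotlin sketch (names are mine, not from the talk) that pulls several of these features together:

```kotlin
// A primary constructor with a read-only id and a mutable nickname
// that has a default value.
class Beacon(val id: String, var nickname: String = "unnamed") {
  init {
    println("Registered beacon $id") // initialization code lives here
  }
}

fun describe(beacon: Beacon?): String {
  // Safe call plus the Elvis operator: fall back when beacon is null.
  return beacon?.nickname ?: "no beacon in range"
}

fun main() {
  val beacon = Beacon(id = "disco-ball") // named argument, default nickname
  beacon.nickname = "Disco Ball"         // var can be re-assigned
  println(describe(beacon))
}
```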

Great places to begin learning Kotlin? The great documentation at kotlinlang.org, including a searchable language reference. The Kotlin style guide from Jake Wharton. And the online sandbox.

Nicholas pointed out that when moving to Kotlin, large legacy projects will be hybrid Java-Kotlin projects. Legacy Java can remain. New features can be written in Kotlin. Small projects can be completely converted, such as Nicholas converting the RoboButton project. You can use Code > Convert Java File to Kotlin File in Android Studio, but there is one bad thing: revision history is lost. Also, you may get some compile errors after the conversion, so you may need to sweep through and fix them.

Nicholas wrapped up his talk by discussing idiomatic Kotlin. When using Dagger for dependency injection, inject in the init block and fix any compile errors using lateinit. Perform view injection using the Kotlin Android extensions. Use decompilation to get a better understanding of what Kotlin code is doing: create bytecode and then decompile it to Java. Nicholas showed an example of discovering that the Kotlin view extensions use a cached findViewById(). Use extension functions, which allow you to make local changes to a 3rd party API, as in the sketch below.
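
For example, here’s a tiny extension function of my own devising that adds a local convenience method to android.view.View, a class you don’t own:

```kotlin
import android.view.View

// Toggle a view between visible and gone with one readable call.
fun View.show(visible: Boolean) {
  visibility = if (visible) View.VISIBLE else View.GONE
}
```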

TensorFlow for Android Developers

Joe Birch, Senior Android Engineer, GDE, Buffer

Like many developers, Joe Birch likes to explore new things that come out. Such was the case with Joe for TensorFlow, when a friend gave him a book on the topic. He was a little scared at first, but it turned out to be not so bad. His talk was not a deep dive into machine learning, but instead showed how to use existing machine learning models in apps and how to re-train them. Joe used the TensorFlow library itself with Python on his dev machine for the retraining, and the TensorFlow Mobile library to use the re-trained model on a device.

Joe started with some machine learning 101: get data; clean, prep, and manipulate it; train a model based on patterns in the data; then improve. Joe discussed the differences between unsupervised learning and supervised learning, and then described the supervised learning topics of classification and regression. Some machine learning applications are photo search, smart replies in Gmail, Google Translate, and others like YouTube video recommendations.

Then Joe began a dive into TensorFlow. It’s open-source from Google, and they use it in their own applications. You use it to create a computational graph: a collection of nodes that all perform operations and compute values until they produce the final predicted result. These graphs are also known as neural networks, models that can learn but that need to be taught first. The result of the training is a model that can be exported, then used and re-trained.

For the rest of the talk, Joe detailed his re-training of an open source model that classifies images to specifically recognize images of fruit, and then the use of the re-trained model in a sample app that lets you add the recognized fruit to a shopping cart.

Joe walked through the use of the web-based TensorBoard suite of visualization tools to see training of your model in action. TensorBoard shows things like accuracy and loss rate. Joe listed all the steps you need to perform to retrain a model and examine the results of the re-training in TensorBoard. He started the re-training with a web site that gave him 500 images of each type of fruit at different angles. To test the re-trained model, you take images outside of the training set and run them through the model. As an example, a banana was recognized at 94% probability in a few hundred milliseconds.

Before using the re-trained model, you want to optimize it using a Python script provided by TensorFlow, and also perform quantization via another Python script. Quantization normalizes values to improve compression so that the model file is smaller in your app. Joe achieved around 70% compression, from 50MB to 16MB. You want to check the model after optimization and quantization to make sure accuracy and performance were not impacted too much for your use case.

After re-training, you have a model for use in an app. Add a TensorFlow dependency to the app, and put the model files into the app’s assets folder. Create a TensorFlow reference, feed in data from a photo or the camera as bitmap pixels, and run inference. Handle the prediction’s confidence by setting a threshold.
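
Here’s a rough Kotlin sketch of that feed/run/fetch flow using the TensorFlow Mobile inference API of the time; the model filename, tensor names, input dimensions, and label count are all placeholders of mine, not values from Joe’s talk:

```kotlin
import android.content.res.AssetManager
import org.tensorflow.contrib.android.TensorFlowInferenceInterface

class FruitClassifier(assets: AssetManager) {
  private val inference =
      TensorFlowInferenceInterface(assets, "file:///android_asset/graph.pb")

  fun classify(pixels: FloatArray): FloatArray {
    val probabilities = FloatArray(NUM_LABELS)
    inference.feed("input", pixels, 1L, 224L, 224L, 3L) // bitmap as floats
    inference.run(arrayOf("final_result"))              // run inference
    inference.fetch("final_result", probabilities)      // read predictions
    return probabilities
  }

  companion object { const val NUM_LABELS = 5 }
}
```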

Workshop: Hands on Android Things

Kaan Mamikoglu, Mobile Software Engineer, MyDrive Solutions

James Coggan, Senior Mobile Engineer, MyDrive Solutions

Pascal How, Mobile Software Engineer, MyDrive Solutions

Of the 6 workshops at Droidcon Boston 2018, I was able to attend the one on Android Things, the IoT version of Android.

The workshop set attendees up with an NXP i.MX7D kit, and we flashed Android Things onto the board. The board was attached to a breadboard with a temperature sensor and LEDs. The workshop walked us through creating an app for the device that communicates with Firebase and that, using Dialogflow and Google Assistant, allows you to ask “Hey Google, what’s the temperature at home?”

The workshop was hands-on, and consisted of opening the workshop apps in Android Studio and running through the following steps:

  • Connecting the board to WiFi using an adb command
  • Setup Firebase in Android Studio and your project, with Realtime Database support
  • Open the Firebase console and enable anonymous login
  • Setup Firebase on the common module
  • Create a HomeInformation data class with light, temperature and button
  • Observe changes to HomeInformation in the database using LiveData
  • Setup Firebase on the Things device
  • Setup Firebase on the mobile app

You can find more info on the workshop at James’s GitHub.

Day 2

Keynote: Design + Develop + Test

Kelly Shuster, Android Developer, GDE, Ibotta

The second day keynote by Kelly Shuster was about communication between designers, developers, and testers. She pointed out that as software engineers we often get lost in the weeds solving problems; to build a successful product that people will actually use, we need to remember other things too, and it’s our responsibility to get good at effective cross-team communication. Kelly gave a great example of what happens when cross-team communication is missing, from her time as a child actor in The Secret Garden, when everyone forgot their lines in one scene: a painful experience, but a lifelong memory.

Kelly explained the typical flow of app development: design mocks -> development begins -> development finalized. There’s often another step for “developer questions”. A big problem is that you start to have a lot of these conversations, and much of each conversation is boilerplate. What if we could remove these boilerplate conversations? She came across a blog post about removing barriers from your life, and thought: what are the barriers to effective communication?

For barriers with designers, Kelly listed a number of tools that can help. One is Zeplin, which can give you details like sizes, fonts, and font sizes: a designer uploads a Sketch file to Zeplin, which parses out all the details. Another is a style guide built with the design team, which serves as “boilerplate code completion” for the design world. A good design style guide has a color palette with common names, zero-state and loading-state designs, and button states. These cover 80% of the boilerplate design discussions right in the style guide.

Kelly then discussed how SVGs can help minimize back-and-forth with designers, and how a designer she worked with once offered to use Android Studio herself to learn how best to work with Vector Asset Studio. Conversely, Kelly pointed out that we developers often say a lot of negative things to designers. We should instead think about what we can do for them: be a relentless advocate for your designer’s vision (within reason).

Kelly discussed some surveys she’d done, one question being “if I could do one thing to make my product better it would be…” Most answers, even from designers and QA, boiled down to “spend more time testing”. Kelly talked about testing from two perspectives: pre-development and post-development. For pre-development, she suggested tools such as Usability Hub for user testing of designs, and Marvel, which takes pictures of paper designs and turns them into a clickable prototype.

“Post”-development testing is really post-design unit testing and integration testing. Kelly gave an anecdote about wanting to learn a song on piano for five years and not doing so until she committed to practicing five minutes a day, and compared it to how to get good at testing: write at least one test a week. Within a few months, you’ll be able to write tests quickly.

Kelly finished up with results from another survey question: “the hardest part of working with developers is…” The consistent answer: their lack of smoke testing. Kelly gave great advice such as using “Day of the Week” emulators to test across API levels. She’s found many UI bugs by doing this.

fun() Talk

Nate Ebel, Android Developer, Udacity

Nate’s talk was all about exploring Kotlin functions. He pointed out that Kotlin loves functions: you can see it in the docs and examples, and the standard library is full of them, including higher-order and extension functions. Functions in Kotlin are flexible and convenient, and as Android developers they give us the freedom to re-imagine how we build apps.

Nate dove into parameter and type freedom. Kotlin allows default parameter values, which are used when an argument is omitted. They give you overloads without the verbosity of writing them, and they document sensible defaults. For Java interop, you can use @JvmOverloads, which tells the compiler to generate the overloads for you.

You can also use named arguments in Kotlin. Without names, a parameter’s meaning can be unclear without looking at the source; named arguments make call sites easier to understand, and you can even change the argument order when using them. There are some limitations: for example, once one argument is named, all subsequent arguments must be named.

You can accept a variable number of arguments using vararg. The arguments are treated as an Array of type T, and you can pass any number of them when calling. A vararg parameter typically, but not always, comes last: it doesn’t have to be last if you use named arguments, or if the last parameter is a function. You can use the spread operator * to pass an existing array as a vararg.

Nate described Kotlin return types. If there is no return value, the default is Unit. For a non-Unit return value, you append the return type to the function signature after a colon. Single-expression functions can infer their return type.

Nate then discussed Generic functions, and how much of the standard library is built on generics. You add a generic type after the fun keyword.

Nate then explored how and when to create functions. He discussed top-level functions, and how to change the name of the class generated for a file using @file:JvmName. He gave examples of member functions on a class or object, local functions (functions defined inside other functions), and companion object functions. He described function variations, such as infix functions, extension functions, and higher-order functions.

Nate finished up by re-imagining Android via the power of Kotlin functions. We can use fewer helper classes and less boilerplate. We can pass logic around for a try-catch block. We have libraries like Anko and Android KTX. We have DSLs like the Kotlin Gradle DSL.

Common Poor Coding Patterns and How to Avoid Them

Alice Yuan, Software Engineer, Pinterest

Alice’s talk focused on a number of problems her team has solved in the Pinterest app. She walked through each problem in detail, and then talked about how they found a solution.

Problem #1: Views with common components. They initially chose inheritance to share the common components, but that led to many problems in the implementation. The solution: be deliberate with inheritance, and consider composition first.

Problem #2: So many bugs with the “follow” button in the app. They have many fragments, and were using an event bus library as a global event queue. That becomes confusing with more and more events and more fragments. The code looks simple, but it breaks due to different lifecycles and different conditions. Views require tight coupling, while an event bus is for de-coupling, so an event bus doesn’t make sense for this scenario; it’s a better fit for use cases that really are de-coupled. The solution was the Observer pattern: they reintroduced observers into the code base. The key takeaway was that an event bus is often misused because of its simplicity. Only use it where it makes sense.

Problem #3: Do we need to send events to maintain data consistency? Why do we even need to send events? They were caching models on a per-Fragment basis, which leads to data inconsistencies across the app, and they had model-dependent logic in the view layer. There are a lot of ways to introduce data inconsistency: global static variables, utils classes, and singletons. The solution: have a central place to handle storing and retrieving models using the Repository pattern. A repository checks the memory and disk caches, and makes network calls if needed for the freshest model. You can also use the Repository pattern with RxJava, which is kind of like observables on steroids: more than just a simple listener pattern. The key takeaway: build a central way to fetch and store models.

Problem #4: Why is writing unit tests so difficult? Laziness: it’s a lot of work to write unit tests. Also, a typical fragment has too much logic in it, including business logic, and you only want to unit test the business logic. Things like mocking and Robolectric can be painful to use. The solution: separate concerns using a pattern like MVVM or MVP, so you can communicate between classes without knowing their internals. Alice gave an example using MVP. Loose coupling is preferred here to make business logic more testable; it makes the code cleaner and more understandable, and increases the re-usability of the codebase. The key takeaway: unit testing is easier when you separate concerns. Consider MVP/MVVM/MVI, and use interfaces to abstract; you can then easily mock the interfaces with Mockito.

The overall key takeaway from Alice’s talk: have the awareness to see whether you’re making any of these mistakes.

Lunch & a Lightning Talk: Reactive Architecture

Dan Leonardis, Director of Engineering, Viacom & LEO LLC

The second day’s lunch talk was about Reactive Architecture. Dan chose to give all examples in Java to get maximum impact.

Dan explained that Reactive Architecture is simple but not easy. The goals are to make a system responsive (with background threads), resilient (with error scenarios built-in), elastic (easy to change), and message driven (with well-defined data types). It provides an asynchronous paradigm built on streams of information.

Dan walked through a history lesson on MVC, MVP, and MVVM. MVC was never meant to be a system architecture; it was meant just for the UI. MVP has more or less been killed off by three nails in its coffin: Android data binding, Rx, and ViewModel from Android Architecture Components, which helps with lifecycle issues and is agnostic to Activity recreation on rotation.

Dan then emphasized how the whole point of reactive architecture is to funnel asynchronous events into UI updates. Button clicks, scrolls, and UI filters are examples; each is an event. You flatMap them into specific event types that carry the data you need from the UI, then use merge to make them one stream. You use actions to translate events for testability, using Transformers. See Dan Lew’s blog post.

The basic flow: UI event -> transformer -> split the stream into event types with the publish operator, transforming each into an action -> merge back to one stream -> end up with an observable stream of actions. What comes next is a Result, and you then use transformers to get back to the UI as well.

Dan then gave a deep dive into a simple app and walked through code examples to see a Reactive Architecture in action.

ARCore for Android Developers

Kunal Khona, Android | AR Developer, Wayfair

Kunal started with a fun virtual version of himself that delivered his introduction on a projected device. His talk introduced us to augmented reality and ARCore on Android, and showed some code examples written in C# with Unity.

Pokemon Go is the best example so far of AR affecting the real world. AR is an illusion that adds annotations to the real, physical world; it lets you escape the boundaries of a 2D screen. After all, we’ve been wired for thousands of years to interact with the world in 3D.

Kunal then contrasted VR and AR. VR is a virtual environment with a virtual experience; AR is the real environment with a virtual experience. He said that in VR the screen becomes your world, but in AR the world becomes your screen: AR uses your device as a window into the augmented world. Mobile phones are the most common device available, so you have to do AR on mobile if you want to reach scale.

Kunal gave some history of AR on Android. Tango used a wide-angle camera and a depth sensor. It could use motion tracking, area learning, and depth sensing to map indoor spaces with high accuracy. But it was only supported on a few phones, and Tango is dead as of March 2018.

ARCore replaces Tango. ARCore went to 1.0 stable in February 2018. It works without special hardware, with motion tracking technology using the phone camera to identify interesting points. It tracks the position of the device as it moves and builds an understanding of the real world. It’s scalable across the Android ecosystem, and currently works on a wide range of devices running Android 7.0 and above, about 100 million devices total. More phones will be supported over time. You can even run ARCore apps in the Android emulator in Android Studio 3.1 beta, with a simulated environment.

The fundamental concepts of ARCore are motion tracking, environmental understanding, and light estimation. Motion tracking creates feature points and uses them to determine changes in location. Environmental understanding begins with clusters of feature points that appear to lie on common horizontal surfaces, which are made available as planes. Light estimation gives an increased sense of realism by lighting a virtual object under the same conditions as the environment. Kunal showed a great example of a virtual lion getting scared when the lights turn off.

ARCore does have limitations. ARCore does not use a depth sensor. It has difficulty with flat surfaces without texture. And it cannot remember session information once a session is closed.

Kunal discussed using ARCore at Wayfair. How does a consumer know furniture will look good in a room? He showed an awesome demo of placing a couch in a room. A consumer can hit a cart button and purchase the couch. Kunal described many of the possible ARCore applications: shopping, games, education, entertainment, visualizers, and information/news.

The remainder of the talk was a code example using Unity to abstract away complex ARCore and OpenGL concepts. He showed motion tracking, using an ARCore.Session to track attachment points and plane detection, and placing an object with hit testing, transforming a 2D touch to a 3D position. He described using anchors to deal with the changing understanding of the environment even as the device moves around.

Why MVI? Model View Intent — The curious case of yet another pattern

Garima Jain, Senior Android Engineer, Fueled

Garima gave a great talk on the Model-View-Intent (MVI) pattern. There were two parts to the talk: (1) Why MVI and (2) moving from MVP/MVVM to MVI.

First, why MVI? She defined MVI as a mapping from (Model, View, Intent) to (State, View, Intention). Add a user into the picture, and MVI becomes different from other patterns: the user interacts with the View, the intent changes the state, the new state goes to the view, and the result is seen by the user.

The key concept is State as a single source of truth. A common example of the state problem is what happens to the state of a list during pull-to-refresh, and this problem is a main reason people are motivated to adopt MVI. First there’s a loading state, then an empty state, then a pull-to-refresh with the empty state still showing: the View is not updating state correctly.

Garima used an MVP diagram to represent the problem. If you misuse MVP, some of the state lives in the view and some in the presenter, and multiple inputs and outputs create the possibility of misuse. The MVI pattern helps here, because MVI has a single source of state truth. This is sometimes called the Hannes model, after Hannes Dorfmann.

Part 2 of the talk discussed moving from MVP/MVVM to MVI: how to adopt MVI if you have other patterns already in place. You don’t have to go all the way to MVI right away; call it “MVPI” and build on top of MVP or MVVM.

Garima gave a great description of the data flow in the MVI pattern. A UI event in the View gets transformed into an Action sent to the Presenter, which sends a Result to a Reducer. The reducer creates a new state, which gets rendered to the View. She discussed applying the pattern to the loading problem. State is an immutable object; you create a new one to show in the view after a user interaction. Now there is only one state, one source of truth for the model, and changes occur only as a result of actions.

MVI data flow

She gave some good rules of thumb. An Action should be a plain object that has a defined type and conveys the intention of the user. State is an immutable list or object representing the state of the view. The Reducer is a function of the previous state and the result; for an unknown result, it returns the current state. Use a Kotlin sealed class for the different kinds of results.

In summary, Garima mentioned that many consider MVI to be MVP/MVVM done right: it prevents us from misusing patterns like MVP and MVVM. She also shared some great references:

MVI References

Coroutines and RxJava: An asynchronicity comparison

Manuel Vicente Vivo, Principal Associate Software Engineer, Capital One

Manuel compared coroutines and RxJava because they’re both trying to solve the same problem: asynchronous programming.

He started with a coroutines recap, compared the libraries, and demoed a Fibonacci calculator with fun facts from an API.

Coroutines simplify asynchronous programming: computations can be suspended without blocking a thread. Coroutines are conceptually similar to threads, in that they start and finish, but they are not bound to any particular thread.

The launch function is a coroutine builder: its first parameter is a coroutine context, and its second is a suspending lambda. Consider making a network request and updating a database with the result. The user taps a button and starts a heavy computation; you suspend while waiting for a response from the network, then carry on with execution when the response comes back.

Manuel then gave a detailed coroutine and RxJava comparison.

He showed how to cancel execution in both. In RxJava, you call disposable.dispose(). With coroutines, you take the Job returned from launch and call job.cancel(). You can concatenate coroutine contexts and cancel them all at the same time, or use the parent named argument and cancel the parent.

Manuel defined Channels as the coroutine equivalent of an RxJava Observable and a Reactive Streams Publisher. Channels can be shared between different coroutines; by default, a channel has no buffer, making it a rendezvous channel. In RxJava, you get values when onNext is called; with channels, you use send and receive to pass elements through the channel, and both are suspending functions. You can close channels, and you can also offer elements. produce is another coroutine builder, creating a coroutine with a Channel built in; it’s useful for writing custom operators.

Channels have to deal with race conditions: which coroutine gets the values first? In RxJava, you use Subjects to handle this. With coroutines, a BroadcastChannel can be used to send the same value to all coroutines, and ConflatedBroadcastChannel behaves similarly to an RxJava subject.

What about RxJava back-pressure? It’s handled by default in coroutines, since send and receive are suspending functions.

You can use publish for “cold observable” behavior, but it lives in the interop library, a bridge between coroutines and RxJava. Another use case for the interop library is calling coroutines from Java.

An Actor is a coroutine with a channel built in, but the opposite of produce: think of it as a mailbox that receives and processes different events.

As for equivalents of the RxJava operators: some are built into the language via the Kotlin collections, some others are easy to implement, and some, such as zip, are more work.

For threading in RxJava, you use observeOn and subscribeOn. Threading in coroutines is done through the coroutine context; CommonPool, UI (on Android), and Unconfined are examples of contexts, and you can create your own using ThreadPoolDispatcher.

Manuel finished up with his example application, a math app built with the MVI Architecture. He has three projects: one using coroutines, one using RxJava, and one with interop between the two.

Coroutine architecture

The project uses the Android Architecture Component ViewModel to survive configuration changes, and can be found on GitHub.

Where to go from here?

You can find more info about the Droidcon series of conferences at the official site.

The full agenda for Droidcon Boston 2018, including sessions not covered in this report, can be found here.

The videos from the most recent Droidcon NYC can be found here.

The videos and slides from Droidcon Boston 2017 are here.

We’ll add a link to the videos from Droidcon Boston 2018 when they’re available.

Feel free to share your feedback, findings or ask any questions in the comments below or in the forums. I hope you enjoyed this summary of Droidcon Boston 2018! :]

The post Droidcon Boston 2018 Conference Report appeared first on Ray Wenderlich.


Screencast: Getting Started with TensorFlow on Android

What’s New in Swift 4.1?

Xcode 9.3 and Swift 4.1 are finally out of beta! This release contains some long-awaited improvements to the standard library and language itself. If you haven’t been following the Swift Evolution Process closely, keep reading.

In this tutorial, you’ll learn about the most significant changes introduced in Swift 4.1.

This article requires Xcode 9.3, so make sure you have it installed and ready to go before getting started.

Getting Started

Swift 4.1 is source-compatible with Swift 4, so the new features won’t break your code if you’ve already migrated your project to Swift 4 using the Swift Migrator in Xcode.

In the sections below, you’ll see linked tags such as [SE-0001]. These are Swift Evolution proposal numbers. I’ve included the link to each proposal so you can dig into the full details of each particular change. I recommend you try out the features in a playground so you have a better understanding of everything that changes as you work.

To start, fire up Xcode 9.3 and select File ▸ New ▸ Playground. Choose iOS as the platform and Blank as its template. Name and save it as you like. To get the most out of this tutorial, try out each feature in your new playground as you work.

Note: Need to catch up on the highlights of Swift 4? No problem! Check out the predecessor to this tutorial: What’s New in Swift 4.

Language Improvements

There are a number of language improvements in this release, including conditional conformance, recursive constraints on associated types in protocols and more.

Conditional Conformance

Conditional conformance enables protocol conformance for generic types where the type arguments satisfy certain conditions [SE-0143]. This is a powerful feature that makes your code more flexible. You can see how it works with a few examples.

Conditional conformance in the standard library

In Swift 4, you could compare arrays, dictionaries and optionals as long as their elements were Equatable. This worked absolutely fine for basic scenarios such as:

// Arrays of Int
let firstArray = [1, 2, 3]
let secondArray = [1, 2, 3]
let sameArray = firstArray == secondArray

// Dictionaries with Int values
let firstDictionary = ["Cosmin": 10, "George": 9]
let secondDictionary = ["Cosmin": 10, "George": 9]
let sameDictionary = firstDictionary == secondDictionary

// Comparing Int?
let firstOptional = firstDictionary["Cosmin"]
let secondOptional = secondDictionary["Cosmin"]
let sameOptional = firstOptional == secondOptional

Using the == operator to test equality in these examples worked since Int is Equatable in Swift 4. However, comparing collections of optionals was a common situation you might have run into, because optionals did not conform to Equatable in Swift 4. Swift 4.1 fixes this issue using conditional conformance, letting optional types with underlying Equatable types be compared:

// Array of Int?
let firstArray = [1, nil, 2, nil, 3, nil]
let secondArray = [1, nil, 2, nil, 3, nil]
let sameArray = firstArray == secondArray

// Dictionary with Int? values
let firstDictionary = ["Cosmin": 10, "George": nil]
let secondDictionary = ["Cosmin": 10, "George": nil]
let sameDictionary = firstDictionary == secondDictionary

// Comparing Int?? (Optional of Optional)
let firstOptional = firstDictionary["Cosmin"]
let secondOptional = secondDictionary["Cosmin"]
let sameOptional = firstOptional == secondOptional

Int? is Equatable in Swift 4.1, so the == operator works for [Int?], [String: Int?] and Int??.

A similar problem existed when comparing arrays of arrays (e.g. [[Int]]). In Swift 4, you could only compare arrays of sets (e.g. [Set<Int>]), since sets conform to Equatable. Swift 4.1 solves this, since arrays (and dictionaries) are Equatable as long as their underlying elements are, too.

let firstArrayOfSets = [Set([1, 2, 3]), Set([1, 2, 3])]
let secondArrayOfSets = [Set([1, 2, 3]), Set([1, 2, 3])]

// Will work in Swift 4 and Swift 4.1
// since Set<Int> is Equatable
firstArrayOfSets == secondArrayOfSets

let firstArrayOfArrays = [[1, 2, 3], [3, 4, 5]]
let secondArrayOfArrays = [[1, 2, 3], [3, 4, 5]]

// Caused an error in Swift 4, but works in Swift 4.1
// since Arrays are Equatable in Swift 4.1
firstArrayOfArrays == secondArrayOfArrays

Generally, Swift 4.1’s Optional, Array and Dictionary now conform to Equatable and Hashable whenever their underlying values or elements conform to these protocols.
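
Because these conditional conformances compose, equality now works for arbitrarily nested combinations of these types. Here’s a quick example you can try in your playground:

// [[Int?]] is Equatable because Int? is Equatable,
// which in turn makes [Int?] Equatable, and so on up the chain
let nestedArray: [[Int?]] = [[1, nil], [2, 3]]
let otherNestedArray: [[Int?]] = [[1, nil], [2, 3]]
let sameNestedArray = nestedArray == otherNestedArray // true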

This is how conditional conformance works in the standard library. Next, you will implement it in your own code.

Conditional conformance in code

You’re going to use conditional conformance to create your own band of musical instruments. Add the following block of code at the bottom of the playground to get started:

// 1 
class LeadInstrument: Equatable {
  let brand: String
  
  init(brand: String) {
    self.brand = brand
  }
  
  func tune() -> String {
    return "Standard tuning."
  }
  
  static func ==(lhs: LeadInstrument, rhs: LeadInstrument) -> Bool {
    return lhs.brand == rhs.brand
  }
}

// 2
class Keyboard: LeadInstrument {
  override func tune() -> String {
    return "Keyboard standard tuning."
  }
}

// 3
class Guitar: LeadInstrument {
  override func tune() -> String {
    return "Guitar standard tuning."
  }
}

Here’s what this does, step-by-step:

  1. LeadInstrument conforms to Equatable. It has a certain brand and a method named tune() that you’ll eventually use to tune the instrument.
  2. You override tune() in Keyboard to return keyboard standard tuning.
  3. You do the same thing for Guitar.

Next, declare the band of instruments:

// 1  
class Band<LeadInstrument> {
  let name: String
  let lead: LeadInstrument
  
  init(name: String, lead: LeadInstrument) {
    self.name = name
    self.lead = lead
  }
}

// 2
extension Band: Equatable where LeadInstrument: Equatable {
  static func ==(lhs: Band<LeadInstrument>, rhs: Band<LeadInstrument>) -> Bool {
    return lhs.name == rhs.name && lhs.lead == rhs.lead
  }
}

Here’s what you’re doing step-by-step:

  1. You create a class called Band with a generic type – LeadInstrument. Each band has a unique name and lead instrument.
  2. You use where to constrain Band to conform to Equatable as long as LeadInstrument does. Your ability to conform the Band’s generic LeadInstrument to Equatable is exactly where conditional conformance comes into play.

Next, define your favorite bands and compare them:

// 1
let rolandKeyboard = Keyboard(brand: "Roland")
let rolandBand = Band(name: "Keys", lead: rolandKeyboard)
let yamahaKeyboard = Keyboard(brand: "Yamaha")
let yamahaBand = Band(name: "Keys", lead: yamahaKeyboard)
let sameBand = rolandBand == yamahaBand

// 2
let fenderGuitar = Guitar(brand: "Fender")
let fenderBand = Band(name: "Strings", lead: fenderGuitar)
let ibanezGuitar = Guitar(brand: "Ibanez")
let ibanezBand = Band(name: "Strings", lead: ibanezGuitar)
let sameBands = fenderBand == ibanezBand

In this piece of code, you create two Keyboards and Guitars along with their appropriate Bands. You then compare the bands directly, thanks to the conditional conformance you defined earlier.

Conditional conformance in JSON parsing

Arrays, dictionaries, sets and optionals conform to Codable if their elements conform to Codable in Swift 4.1. Add the following code to your playground to try this:

struct Student: Codable, Hashable {
  let firstName: String
  let averageGrade: Int
}

let cosmin = Student(firstName: "Cosmin", averageGrade: 10)
let george = Student(firstName: "George", averageGrade: 9)
let encoder = JSONEncoder()

// Encode an Array of students
let students = [cosmin, george]
do {
  try encoder.encode(students)
} catch {
  print("Failed encoding students array: \(error)")
}

// Encode a Dictionary with student values
let studentsDictionary = ["Cosmin": cosmin, "George": george]
do {
  try encoder.encode(studentsDictionary)
} catch {
  print("Failed encoding students dictionary: \(error)")
}

// Encode a Set of students
let studentsSet: Set = [cosmin, george]
do {
  try encoder.encode(studentsSet)
} catch {
  print("Failed encoding students set: \(error)")
}

// Encode an Optional Student
let optionalStudent: Student? = cosmin
do {
  try encoder.encode(optionalStudent)
} catch {
  print("Failed encoding optional student: \(error)")
}

You use this code to encode [Student], [String: Student], Set<Student> and Student?. This works smoothly in Swift 4.1 since Student is Codable, which makes these collection types conform to it as well.
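
Decoding gets the same treatment, since the conditional conformances cover Decodable as well. Here’s a minimal round trip, reusing the students array and encoder from above:

let decoder = JSONDecoder()
do {
  let encodedStudents = try encoder.encode(students)
  // [Student] is Decodable because Student is
  let decodedStudents = try decoder.decode([Student].self, from: encodedStudents)
  print(decodedStudents == students) // true: Student's Hashable conformance implies Equatable
} catch {
  print("Round trip failed: \(error)")
}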

Convert Between Camel Case and Snake Case During JSON Encoding

Swift 4.1 lets you convert camelCase properties to snake_case keys during JSON encoding:

var jsonData = Data()
encoder.keyEncodingStrategy = .convertToSnakeCase
encoder.outputFormatting = .prettyPrinted

do {
  jsonData = try encoder.encode(students)
} catch {
  print(error)
}

if let jsonString = String(data: jsonData, encoding: .utf8) {
  print(jsonString)
}

When creating your encoder, you set keyEncodingStrategy to .convertToSnakeCase. Looking at your console, you should see:

[
  {
    "first_name" : "Cosmin",
    "average_grade" : 10
  },
  {
    "first_name" : "George",
    "average_grade" : 9
  }
]

You can also go back from snake case keys to camel case properties during JSON decoding:

var studentsInfo: [Student] = []
let decoder = JSONDecoder()
decoder.keyDecodingStrategy = .convertFromSnakeCase

do {
  studentsInfo = try decoder.decode([Student].self, from: jsonData)
} catch {
  print(error)
}

for studentInfo in studentsInfo {
  print("\(studentInfo.firstName) \(studentInfo.averageGrade)")
} 

This time, you set keyDecodingStrategy to .convertFromSnakeCase.

Equatable and Hashable Protocol Conformance

Swift 4 required you to write boilerplate code to make structs conform to Equatable and Hashable:

struct Country: Hashable {
  let name: String
  let capital: String
  
  static func ==(lhs: Country, rhs: Country) -> Bool {
    return lhs.name == rhs.name && lhs.capital == rhs.capital
  }
  
  var hashValue: Int {
    return name.hashValue ^ capital.hashValue &* 16777619
  }
}

Using this code, you implemented ==(lhs:rhs:) and hashValue to support both Equatable and Hashable. You could compare countries, add them to sets and even use them as dictionary keys:

let france = Country(name: "France", capital: "Paris")
let germany = Country(name: "Germany", capital: "Berlin")
let sameCountry = france == germany

let countries: Set = [france, germany]
let greetings = [france: "Bonjour", germany: "Guten Tag"]

Swift 4.1 adds default implementations in structs for Equatable and Hashable as long as all of their properties are Equatable and Hashable as well [SE-0185].

This greatly simplifies your code, which can now be rewritten as:

struct Country: Hashable {
  let name: String
  let capital: String
}

Enumerations with associated values also needed extra code to work with Equatable and Hashable in Swift 4:

enum BlogPost: Hashable {
  case tutorial(String, String)
  case article(String, String)
  
  static func ==(lhs: BlogPost, rhs: BlogPost) -> Bool {
    switch (lhs, rhs) {
    case let (.tutorial(lhsTutorialTitle, lhsTutorialAuthor), .tutorial(rhsTutorialTitle, 
               rhsTutorialAuthor)):
      return lhsTutorialTitle == rhsTutorialTitle && lhsTutorialAuthor == rhsTutorialAuthor
    case let (.article(lhsArticleTitle, lhsArticleAuthor), .article(rhsArticleTitle, rhsArticleAuthor)):
      return lhsArticleTitle == rhsArticleTitle && lhsArticleAuthor == rhsArticleAuthor
    default:
      return false
    }
  }
  
  var hashValue: Int {
    switch self {
    case let .tutorial(tutorialTitle, tutorialAuthor):
      return tutorialTitle.hashValue ^ tutorialAuthor.hashValue &* 16777619
    case let .article(articleTitle, articleAuthor):
      return articleTitle.hashValue ^ articleAuthor.hashValue &* 16777619
    }
  }
}

You used the enumeration’s cases to write implementations for ==(lhs:rhs:) and hashValue. This enabled you to compare blog posts and use them in sets and dictionaries:

let swift3Article = BlogPost.article("What's New in Swift 3.1?", "Cosmin Pupăză")
let swift4Article = BlogPost.article("What's New in Swift 4.1?", "Cosmin Pupăză")
let sameArticle = swift3Article == swift4Article

let swiftArticlesSet: Set = [swift3Article, swift4Article]
let swiftArticlesDictionary = [swift3Article: "Swift 3.1 article", swift4Article: "Swift 4.1 article"]

As was the case with structs, this code is vastly reduced in Swift 4.1, thanks to default Equatable and Hashable implementations:

enum BlogPost: Hashable {
  case tutorial(String, String)
  case article(String, String)
}

You just saved yourself from maintaining 20 lines of boilerplate code!

Saving time with Swift 4.1!

Hashable Index Types

In Swift 4, key paths could use subscripts if the subscript parameter’s type was Hashable. This enabled them to work with arrays of Double; for example:

let swiftVersions = [3, 3.1, 4, 4.1]
let path = \[Double].[swiftVersions.count - 1]
let latestVersion = swiftVersions[keyPath: path]

You use the key path to get the latest Swift version number from swiftVersions.

Swift 4.1 adds Hashable conformance to all index types in the standard library [SE-0188]:

let me = "Cosmin"
let newPath = \String.[me.startIndex]
let myInitial = me[keyPath: newPath]

The subscript returns the first letter of the string. It works since String index types are Hashable in Swift 4.1.

Recursive Constraints on Associated Types in Protocols

Swift 4 didn’t support defining recursive constraints on associated types in protocols:

protocol Phone {
  associatedtype Version
  associatedtype SmartPhone
}

class IPhone: Phone {
  typealias Version = String
  typealias SmartPhone = IPhone
}

In this example, you defined a SmartPhone associated type, but it might have proved useful to constrain it to Phone, since all smartphones are phones. This is now possible in Swift 4.1 [SE-0157]:

protocol Phone {
  associatedtype Version
  associatedtype SmartPhone: Phone where SmartPhone.Version == Version, SmartPhone.SmartPhone == SmartPhone
}

You use where to constrain both Version and SmartPhone to be the same as the phone’s.

Weak and Unowned References in Protocols

Swift 4 supported weak and unowned for protocol properties:

class Key {}
class Pitch {}

protocol Tune {
  unowned var key: Key { get set }
  weak var pitch: Pitch? { get set }
}

class Instrument: Tune {
  var key: Key
  var pitch: Pitch?
  
  init(key: Key, pitch: Pitch?) {
    self.key = key
    self.pitch = pitch
  }
}

You tuned an instrument in a certain key and pitch. The pitch may have been nil, so you’d model it as weak in the Tune protocol.

But both weak and unowned are practically meaningless if defined within the protocol itself, so Swift 4.1 removes them, and you’ll get a warning for using these keywords in a protocol [SE-0186]:

protocol Tune {
  var key: Key { get set }
  var pitch: Pitch? { get set }
}

Index Distances in Collections

Swift 4 used IndexDistance to declare the number of elements in a collection:

func typeOfCollection<C: Collection>(_ collection: C) -> (String, C.IndexDistance) {
  let collectionType: String
  
  switch collection.count {
  case 0...100:
    collectionType = "small"
  case 101...1000:
    collectionType = "medium"
  case 1001...:
    collectionType = "big"
  default:
    collectionType = "unknown"
  }
  
  return (collectionType, collection.count)
}

typeOfCollection(_:) returned a tuple containing the collection’s type and count. You could use it for any kind of collection, like arrays, dictionaries or sets; for example:

typeOfCollection(1...800) // ("medium", 800)
typeOfCollection(greetings) // ("small", 2)

You could improve the function’s return type by constraining IndexDistance to Int with a where clause:

func typeOfCollection<C: Collection>(_ collection: C) -> (String, Int) where C.IndexDistance == Int {
  // same code as the above example
}

Swift 4.1 replaces IndexDistance with Int in the standard library, so you don’t need a where clause in this case [SE-0191]:

func typeOfCollection<C: Collection>(_ collection: C) -> (String, Int) {
  // same code as the above example
}

Structure Initializers in Modules

Adding properties to public structs could lead to source-breaking changes in Swift 4. For this tutorial, make sure the Project Navigator is visible in Xcode by going to View\Navigators\Show Project Navigator. Next, right-click on Sources and select New File from the menu. Rename the file DiceKit.swift. Replace its contents with the following block of code:

public struct Dice {
  public let firstDie: Int
  public let secondDie: Int

  public init(_ value: Int) {
    let finalValue: Int

    switch value {
    case ..<1:
      finalValue = 1
    case 6...:
      finalValue = 6
    default:
      finalValue = value
    }

    firstDie = finalValue
    secondDie = 7 - finalValue
  }
}

The struct's initializer makes sure both dice have valid values between 1 and 6. Switch back to the playground and add this code at the end of it:

// 1
let dice = Dice(0)
dice.firstDie
dice.secondDie

// 2
extension Dice {
  init(_ firstValue: Int, _ secondValue: Int) {
    firstDie = firstValue
    secondDie = secondValue
  }
}

// 3
let newDice = Dice(0, 7)
newDice.firstDie
newDice.secondDie

Here's what you did with this code:

  1. You created a valid pair of dice.
  2. You extended Dice with another initializer that has direct access to its properties.
  3. You defined an invalid pair of dice with the struct's new initializer.

In Swift 4.1, cross-module initializers must delegate to one of the struct’s own initializers. Change your extension on Dice to:

extension Dice {
  init(_ firstValue: Int, _ secondValue: Int) {
    self.init(abs(firstValue - secondValue))
  }
}

This change makes structs behave like classes: cross-module initializers must be convenience initializers in Swift 4.1 [SE-0189].

In Swift 4.1 you can no longer cheat in dice games!

Platform Settings and Build Configuration Updates

Swift 4.1 adds some much-needed platform and build features for code testing:

Build Imports

In Swift 4, you tested if a module was available on a certain platform by checking the operating system itself; for example:

#if os(iOS) || os(tvOS)
  import UIKit
  print("UIKit is available on this platform.")
#else
  print("UIKit is not available on this platform.")
#endif

UIKit is available on iOS and tvOS, so you imported it if the test succeeded. Swift 4.1 further simplifies this by letting you check for the module itself instead:

#if canImport(UIKit)
print("UIKit is available if this is printed!")
#endif

In Swift 4.1, you use #if canImport(UIKit) to confirm a certain framework is available for importing [SE-0075].

Target Environments

When writing Swift 4 code, the most well-known way to check whether you were running on a simulator or a physical device was to check both the architecture and the operating system:

#if (arch(i386) || arch(x86_64)) && (os(iOS) || os(tvOS) || os(watchOS))
  print("Testing in the simulator.")
#else
  print("Testing on the device.")
#endif

If your architecture was Intel-based and your operating system was iOS, tvOS or watchOS, you were testing in the simulator. Otherwise, you were testing on the device.

This test was cumbersome, and it wasn’t at all descriptive of its actual intent. Swift 4.1 makes this test much more straightforward; just use targetEnvironment(simulator) [SE-0190] like so:

#if targetEnvironment(simulator)
  print("Testing in the simulator.")
#endif

Miscellaneous Bits and Pieces

There are a few other updates in Swift 4.1 that are worth knowing:

Compacting Sequences

In Swift 4, it was fairly common to use flatMap(_:) to filter out nil values from a sequence:

let pets = ["Sclip", nil, "Nori", nil]
let petNames = pets.flatMap { $0 } // ["Sclip", "Nori"]

Unfortunately, flatMap(_:) was overloaded in various ways and, in that specific scenario, the flatMap(_:) naming wasn't very descriptive of the action taken.

For these reasons, Swift 4.1 introduces a rename of flatMap(_:) to compactMap(_:) to make its meaning clearer and unique [SE-0187]:

let petNames = pets.compactMap { $0 }
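
compactMap(_:) also shines when the transform itself returns an optional: any transformation that fails is simply dropped from the result. For example:

let strings = ["1", "2", "three", "4"]
// Int("three") returns nil, so compactMap discards it
let numbers = strings.compactMap { Int($0) } // [1, 2, 4]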

Unsafe Pointers

Swift 4 used temporary unsafe mutable pointers to create and mutate unsafe mutable buffer pointers:

let buffer = UnsafeMutableBufferPointer<Int>(start: UnsafeMutablePointer<Int>.allocate(capacity: 10), 
                                             count: 10)
let mutableBuffer = UnsafeMutableBufferPointer(start: UnsafeMutablePointer(mutating: buffer.baseAddress), 
                                               count: buffer.count)

Swift 4.1 lets you work with unsafe mutable buffer pointers directly, using the same approach as with unsafe mutable pointers [SE-0184]:

let buffer = UnsafeMutableBufferPointer<Int>.allocate(capacity: 10)
let mutableBuffer = UnsafeMutableBufferPointer(mutating: UnsafeBufferPointer(buffer))

New Playground Features

Swift 4 allowed you to customize type descriptions in Xcode playgrounds:

class Tutorial {}
extension Tutorial: CustomPlaygroundQuickLookable {
  var customPlaygroundQuickLook: PlaygroundQuickLook {
    return .text("raywenderlich.com tutorial")
  }
}
let tutorial = Tutorial()

You implemented CustomPlaygroundQuickLookable for Tutorial to return a custom quick-look playground description. The description’s type in customPlaygroundQuickLook was limited to PlaygroundQuickLook cases. This is no longer the case (pun intended) in Swift 4.1:

extension Tutorial: CustomPlaygroundDisplayConvertible {
  var playgroundDescription: Any {
    return "raywenderlich.com tutorial"
  }
}

You implement CustomPlaygroundDisplayConvertible this time. The description’s type is Any now, so you can return anything from playgroundDescription. This simplifies your code and makes it more flexible [SE-0198].

Where to Go From Here?

You can download the final playground using the Download Materials link at either the top or bottom of this tutorial.

Swift 4.1 polishes up some Swift 4 features in preparation for more serious changes that will be coming in Swift 5 later this year. These include ABI stability, improved generics and strings, new memory ownership and concurrency models and more.

If you're feeling adventurous, head over to the Swift standard library diffs or the official Swift CHANGELOG, where you can read more about all of the changes in this version. You can also use these to keep an eye out for what's coming in Swift 5!

If you're curious about what changes are coming in Swift 5 and beyond, we also recommend that you check out Swift Evolution proposals, where you can see which new features, changes and additions are being proposed. If you're really keen, why not give feedback on one of the current proposals under review or even pitch a proposal yourself!

What do you like or dislike about Swift 4.1 so far? Let us know in the forum discussion below!

The post What’s New in Swift 4.1? appeared first on Ray Wenderlich.

Screencast: Getting Started with Flutter in Android Studio

Android Avalanche Giveaway Winners – and Last Day for Discount!

Over the past two weeks, we’ve released tons of new books, screencasts, and video courses on Android and Kotlin development in the Android Avalanche.

To celebrate, during this two-week period we’re offering a bundle of all of our new Android + Kotlin content at 20% off, and we’re giving away copies to a few lucky readers.

Keep reading to find out who the winners are – and how to get the discounted bundle before it’s too late!

Android Avalanche Giveaway Winners

To enter the giveaway, all you had to do was reply to the announcement post with the answer to one simple question:

Why are you interested in our new Android books, courses, and screencasts?

We’ve randomly selected three winners, each of whom wins a free copy of the Android Avalanche Bundle. Below are the winners and their (abbreviated) quotes:

1) vakas

“As an avid iPhone user and developer from Pakistan, I’ve learnt SO much from raywenderlich.com. Hailing from Pakistan, a place where coding and development is still faaaaar from flourishing, you guys have saved me time and time again! Recently switched to Android development at work. And now I get an android version of you guys? I AM FLOORED. Christmas presents just got here early!” —vakas

2) epinaud

“I would like to port my apps written for iOS into Android, but have no clue how to do it.” —epinaud

3) chlkdst

“Never stop learning! Hope to get lucky super package!” —chlkdst

Congratulations! We will be in touch soon to deliver your prizes.

Last Day for Discount!

Finally, I’d like to remind everyone that today is the last day to get the Android Avalanche Bundle at 20% off.

Starting tomorrow, the bundle will no longer be available, so you’d need to purchase everything separately at their full price. So be sure to grab the discount while you still can!

Thanks to everyone who entered the Android Avalanche giveaway, bought the new books and courses, or simply read these posts. We truly appreciate your support in making this new area of our site possible.

The post Android Avalanche Giveaway Winners – and Last Day for Discount! appeared first on Ray Wenderlich.

Kitura Tutorial: Getting Started with Server Side Swift

Are you a busy Swift developer, with no time to learn Node.js, but still feeling drawn to server-side development? This Kitura tutorial will teach you how to create RESTful APIs written entirely in Swift.

You’ll build a “Today I Learned” app to help you learn and remember common acronyms. Along the way, you’ll learn how to:

  • Create a backend API from scratch.
  • Link your API to a CouchDB instance running on your local machine.
  • Assign GET, POST, and DELETE routes for a model object.

Getting Started

To complete this Kitura tutorial, you’ll need:

  • macOS 10.12 (Sierra) or higher
  • Xcode 9.2 or newer
  • Basic familiarity with Terminal, as you’ll use the command line quite a bit in this tutorial.
Note: It’s possible to use Kitura with simply a text editor and a standalone Swift installation, which makes it possible to run Kitura even on Linux! However, this tutorial uses Xcode to take advantage of autocomplete and the nuances of a familiar development environment.

Installing CouchDB

You’ll use a database called CouchDB in this Kitura tutorial. This is a NoSQL database that strictly enforces JSON and uses revision keys for updates. So it’s safe — and fast!
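
Every document CouchDB stores carries an _id and a _rev field, and an update must supply the document’s current _rev or CouchDB rejects it. A stored acronym might look something like the following (the ID and revision values here are made up for illustration):

{
  "_id": "9590f4ce97c5d1b4743361d625001b2e",
  "_rev": "1-b4743361d6259590f4ce97c5001b2e7a",
  "short": "TIL",
  "long": "Today I Learned"
}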

Homebrew, a popular package manager for macOS, is the easiest way to install CouchDB. If you don’t have Homebrew installed already, open Terminal and enter this command:

/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

Enter your password if prompted. You should see Installation Successful once it completes.

Next, enter this command to install CouchDB:

brew install couchdb

Once it’s installed, enter this command to start CouchDB:

brew services start couchdb

To confirm CouchDB installed and started successfully, open a web browser and navigate to http://localhost:5984. You should see something like this:

{
  "couchdb": "Welcome",
  "uuid": "29b2fe0fb4054c61e6b4b8e01761707b",
  "version": "1.7.1",
  "vendor": {
      "name": "Homebrew",
      "version": "1.7.1"
  }
}
Note: If you’d prefer not to install CouchDB directly and have Docker installed, you may run it in Docker using the command:
docker run --name couchdb -p 5984:5984 -d couchdb

Before you can dive into this Kitura tutorial, you’ll need to first understand a little about Kitura and REST.

Kitura & RESTful API Routing

IBM created Kitura as an open-source framework in 2015, shortly after Apple open-sourced Swift. They modeled Kitura after Express.js, the de-facto framework for creating RESTful APIs using Node.js.

REST is an acronym for Representational State Transfer. In RESTful apps, each unique URL represents an object. Non-unique URLs represent actions, which are combined with RESTful verbs like GET to fetch objects, POST to insert, DELETE to remove and PUT to update objects.

Backend development often involves many components working together. You’ll only be concerned with two backend components in this Kitura tutorial: the API and database.

For example, if you want to populate a table view with a list of acronyms and their meanings, your client app sends a GET request to the backend. In practice, your app requests the URL http://yourAPI.com/acronyms.

Kitura tutorial client request made to API

The API receives your request and uses a router to decide how to handle it. The router checks all available routes, which are simply publicly accessible endpoints, to determine if there is a GET route ending in /acronyms. If it finds one, it executes the associated route’s code.

The /acronyms route then does the following:

  1. Retrieves the acronyms from the database
  2. Serializes them into JSON
  3. Packages them into a response
  4. Returns the response to the API to send to the client

This results in the following interaction between the API and database:

Kitura tutorial API and database interaction

If an API is RESTful, then it must also be stateless. In our example, you can think of the API as the orchestrator, commanding data to and fro in your ecosystem. Once the request is fulfilled, the state of the API and its routes should be unchanged and able to handle the next request.

Kitura tutorial API response to client

Just because the API is stateless doesn’t mean it isn’t allowed to store or modify objects. The API itself doesn’t store states, but it does query and update the database to fetch, store and modify objects’ states.

Setting up the Kitura Tutorial Project

Open Terminal and enter the following commands:

mkdir KituraTIL
cd KituraTIL
swift package init --type executable

This uses the Swift Package Manager to create a new executable.

You should see output like this:

Creating executable package: KituraTIL
Creating Package.swift
Creating README.md
Creating .gitignore
Creating Sources/
Creating Sources/KituraTIL/main.swift
Creating Tests/

Next, enter the following command to open Package.swift with Xcode:

open -a Xcode Package.swift

Replace the entire contents of Package.swift with the following:

// swift-tools-version:4.1

import PackageDescription

let package = Package(
  // 1
  name: "KituraTIL",
  dependencies: [
    // 2
    .package(url: "https://github.com/IBM-Swift/Kitura.git",
             .upToNextMinor(from: "2.1.0")),
    // 3
    .package(url: "https://github.com/IBM-Swift/HeliumLogger.git",
             .upToNextMinor(from: "1.7.1")),
    // 4
    .package(url: "https://github.com/IBM-Swift/Kitura-CouchDB.git",
             .upToNextMinor(from: "2.0.1")),
  ],
  //5
  targets: [
    .target(name: "KituraTIL",
            dependencies: ["Kitura" , "HeliumLogger", "CouchDB"],
            path: "Sources")
  ]
)

Here’s what each part of this manifest does:

  1. You first set the name of your target executable. By convention, you should name this after the enclosing directory.
  2. Here you declare the dependency for Kitura itself.
  3. This is a backend logging framework, which you’ll use to log messages while your backend app is running.
  4. You’ll use this dependency to allow Kitura to communicate with CouchDB.
  5. Finally, you declare your targets and their dependencies.

Save this file and go back to Terminal, where you should still be in the same directory containing Package.swift. Enter the following command:

swift build

This will generate a lot of logging, ending with logs about compiling your Kitura tutorial project. You’ll see this output at the end:

Compile Swift Module 'KituraTIL' (1 sources)
Linking ./.build/x86_64-apple-macosx10.10/debug/KituraTIL

Note: In case you get errors from swift build, enter the following in Terminal to verify your Swift version:

swift --version

If your version is lower than Swift 4.1, this is likely your problem. To fix this, make sure that you have the latest version of Xcode 9 installed and then run the following command:

sudo xcode-select -s /Applications/Xcode.app

…where Xcode.app should be replaced with whatever you called Xcode 9.

If you’re still having trouble, it’s possible you’re using swiftenv or another Swift version management tool, and you may need to manually set your Swift version to 4.1.

Here’s the command to do this if you’re using swiftenv:

swiftenv global 4.1

Using Kitura with Xcode

Still in Terminal, at the root directory for your Kitura tutorial project, enter the following command:

swift package generate-xcodeproj

You should see this output:

generated: ./KituraTIL.xcodeproj

Enter this command to open your new Xcode project:

open KituraTIL.xcodeproj/

You’ll then be greeted with this view:

Kitura tutorial initial project

From here, you need to update the KituraTIL-Package scheme to run your executable. Go to the top of your Xcode window, and click the scheme pane where it says KituraTIL-Package. You should see this dialog:

Kitura tutorial edit scheme

Click Edit Scheme, and another dialog will open. In the middle of the dialog, click the dropdown for Executable and select KituraTIL.

Kitura tutorial set executable

Click Close in the bottom-right corner of the dialog to dismiss it.

Next, make sure you’ve set to run this scheme on My Mac by clicking to the right of the scheme dropdown that you clicked earlier and selecting My Mac from the list:

Kitura tutorial select My Mac

Build and run, and you’ll see this printed to the console:

Hello, world!
Program ended with exit code: 0

You can safely ignore any compiler warnings about deprecations. Kitura and CouchDB use a few methods that were deprecated in Swift 4, but this shouldn’t cause any problems.

Awesome, you’re now ready to get your backend app up and running!

Using Kitura

First, create a new Swift File named Application.swift in the same directory as main.swift. Make sure to add this file to the KituraTIL executable target:

Kitura tutorial set target

Next, replace the contents of this file with the following:

import Kitura
import LoggerAPI

public class App {
  
  // 1
  let router = Router()
  
  public func run() {
    // 2
    Kitura.addHTTPServer(onPort: 8080, with: router)
    // 3
    Kitura.run()
  }
}

Here’s what this does:

  1. The Router will handle incoming requests by routing them to the appropriate endpoint.
  2. Here, you register router to run on port 8080.
  3. Kitura will run infinitely on the main run loop after you call run().

With your App class created, open main.swift and replace its contents with the following:

import Kitura
import HeliumLogger
import LoggerAPI

HeliumLogger.use()

let app = App()
app.run()

Here you create an App instance and run it.

The HeliumLogger.use() command sets up HeliumLogger as the default logger for Kitura. It’s good practice to “log early and log often”.
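
Once the logger is registered, you can emit your own messages anywhere you import LoggerAPI, using its static Log functions. For example, you could optionally add lines like these (not required for the tutorial):

import LoggerAPI

Log.info("Starting the KituraTIL server...")
Log.warning("Heads up: something looks unusual.")
Log.error("Something went wrong!")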

Build and run, and you should see log messages from Kitura appear in the console.

Next, navigate to http://localhost:8080 in your browser, and you should see this page:

First Kitura server

Congratulations, you’re now running a Swift RESTful API on your local machine!

Creating Your Model

In this section, you’ll create a model type that represents an acronym.

Create a new file named Acronym.swift and remember to add it to the KituraTIL target.

Replace the contents of this file with the following:

// 1
struct Acronym: Codable {

  var id: String?
  var short: String
  var long: String
  
  init?(id: String?, short: String, long: String) {
    // 2
    if short.isEmpty || long.isEmpty {
      return nil
    }
    self.id = id
    self.short = short
    self.long = long
  }
}

// 3
extension Acronym: Equatable {

  public static func ==(lhs: Acronym, rhs: Acronym) -> Bool {
    return lhs.short == rhs.short && lhs.long == rhs.long
  }
}

Here’s what this does:

  1. By making Acronym conform to Codable, you’ll be able to take advantage of a new Kitura feature named Codable Routing. You’ll learn more about this shortly.
  2. Within the initializer, you validate that neither short nor long are empty strings.
  3. You make Acronym conform to Equatable to enable you to determine if two acronyms are the same. You’ll use this later, as another form of validation.
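
To see the validation and equality rules in action, consider this quick sketch; the literal values are illustrative only:

let valid = Acronym(id: nil, short: "TIL", long: "Today I Learned")  // non-nil
let invalid = Acronym(id: nil, short: "", long: "Today I Learned")   // nil, since short is empty
let stored = Acronym(id: "abc123", short: "TIL", long: "Today I Learned")

// Equality ignores id, so a new acronym matches a stored duplicate:
valid == stored  // true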

Build and run your project to make sure everything builds properly.

Codable is simply a typealias that combines the Encodable and Decodable protocols. This ensures conforming objects can be converted both to and from external representations. In particular, Kitura uses this to easily convert instances to and from JSON.
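
For example, here's a rough sketch of the kind of conversion Kitura performs for you behind the scenes, using Foundation's JSON coders:

import Foundation

let acronym = Acronym(id: nil, short: "BRB", long: "Be right back")!

// Acronym -> JSON Data, because Acronym is Encodable
let json = try! JSONEncoder().encode(acronym)

// JSON Data -> Acronym, because Acronym is Decodable
let decoded = try! JSONDecoder().decode(Acronym.self, from: json)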

Before Kitura 2.0, you had to pass a request object into every endpoint closure, parse properties manually, cast appropriately, do necessary transformations and finally create JSON to send as a response. It was a lot of work!

Fortunately, you can now leverage the power of Kitura’s Codable Routing to significantly reduce the boilerplate code in your routes. Win! You simply need to make your models conform to Codable to take advantage of this, as you did above.

With this theory out of the way, it’s now time to connect your API to CouchDB.

Connecting to CouchDB

Open Application.swift, and replace its contents with the following:

// 1
import CouchDB
import Foundation
import Kitura
import LoggerAPI

public class App {

  // 2
  var client: CouchDBClient?
  var database: Database?
    
  let router = Router()
    
  private func postInit() {
    // 3
  }
    
  private func createNewDatabase() {
    // 4
  }
    
  private func finalizeRoutes(with database: Database) {
    // 5
  }
    
  public func run() {
    // 6
    postInit()
    Kitura.addHTTPServer(onPort: 8080, with: router)
    Kitura.run()
  }
}

Let’s go over these changes:

  1. You first import CouchDB in order to set up your persistence layer.
  2. You add properties for the CouchDB client and for the database where your data will be persisted.
  3. You’ll add code here after you’ve created your instance of App to connect to your database.
  4. This method, called from postInit(), will create the database if one doesn't already exist.
  5. Once you’ve set up your database, you’ll list all available routes for your API to match against here.
  6. You call postInit() from within run() to make this part of your API setup.

Next, complete postInit() by replacing // 3 with the following:

// 1
let connectionProperties = ConnectionProperties(host: "localhost", port: 5984, secured: false)
client = CouchDBClient(connectionProperties: connectionProperties)
// 2
client!.dbExists("acronyms") { exists, _ in
  guard exists else {
    // 3
    self.createNewDatabase()
    return
  }
  // 4
  Log.info("Acronyms database located - loading...")
  self.finalizeRoutes(with: Database(connProperties: connectionProperties, dbName: "acronyms"))
}

Here’s what you just did:

  1. You create a ConnectionProperties object to specify configuration values, then use it to create a new CouchDBClient.
  2. You check whether a matching database already exists, so you don’t overwrite existing data.
  3. If the database does not exist, you call createNewDatabase() to create it.
  4. If the database does exist, you call finalizeRoutes(with:) to configure your routes.

Next, complete createNewDatabase() by replacing // 4 with the following:

Log.info("Database does not exist - creating new database")
// 1
client?.createDB("acronyms") { database, error in
  // 2
  guard let database = database else {
    let errorReason = String(describing: error?.localizedDescription)
    Log.error("Could not create new database: (\(errorReason)) - acronym routes not created")
    return
  }
  self.finalizeRoutes(with: database)
}

Here’s what this does piece by piece:

  1. You create your database with a given name. You can choose anything, but it’s best to keep it simple.
  2. You ensure the database was created; otherwise, you log an error and abort.
  3. Just like before, you call finalizeRoutes(with:) to configure your routes.

You won’t be able to implement finalizeRoutes just yet. You first need to complete your persistence layer. That’s what you’ll do in the next section.

Persisting Acronyms

Create a file named AcronymPersistence.swift and add it to the KituraTIL target.

Replace the contents of AcronymPersistence.swift with the following:

import Foundation
import CouchDB
// 1
import SwiftyJSON

extension Acronym {
  // 2
  class Persistence {

    static func getAll(from database: Database,
                       callback: @escaping (_ acronyms: [Acronym]?, _ error: NSError?) -> Void) {
      database.retrieveAll(includeDocuments: true) { documents, error in
        guard let documents = documents else {
          callback(nil, error)
          return
        }
        var acronyms: [Acronym] = []
        for document in documents["rows"].arrayValue {
          let id = document["id"].stringValue
          let short = document["doc"]["short"].stringValue
          let long = document["doc"]["long"].stringValue
          if let acronym = Acronym(id: id, short: short, long: long) {
            acronyms.append(acronym)
          }
        }
        callback(acronyms, nil)
      }
    }
    
    static func save(_ acronym: Acronym, to database: Database,
                     callback: @escaping (_ id: String?, _ error: NSError?) -> Void) {
      getAll(from: database) { acronyms, error in
        guard let acronyms = acronyms else {
          return callback(nil, error)
        }
        // 3
        guard !acronyms.contains(acronym) else {
          return callback(nil, NSError(domain: "Kitura-TIL",
                                       code: 400,
                                       userInfo: ["localizedDescription": "Duplicate entry"]))
        }
        database.create(JSON(["short": acronym.short, "long": acronym.long])) { id, _, _, error in
          callback(id, error)
        }
      }
    }
    
    // 4
    static func get(from database: Database, with id: String,
                    callback: @escaping (_ acronym: Acronym?, _ error: NSError?) -> Void) {
      database.retrieve(id) { document, error in
        guard let document = document else {
          return callback(nil, error)
        }
        guard let acronym = Acronym(id: document["_id"].stringValue,
                                    short: document["short"].stringValue,
                                    long: document["long"].stringValue) else {
            return callback(nil, error)
        }
        callback(acronym, nil)
      }
    }
    
    static func delete(with id: String, from database: Database,
                       callback: @escaping (_ error: NSError?) -> Void) {
      database.retrieve(id) { document, error in
        guard let document = document else {
          return callback(error)
        }
        let id = document["_id"].stringValue
        // 5
        let revision = document["_rev"].stringValue
        database.delete(id, rev: revision) { error in
          callback(error)
        }
      }
    }
  }
}

Here’s what this does in detail:

  1. Kitura’s CouchDB wrapper has yet to be updated to use Codable, unfortunately. Instead, it utilizes SwiftyJSON to serialize objects into JSON.
  2. You create Persistence as a nested class within Acronym. This results in Persistence-namespaced methods for retrieving, saving and deleting Acronyms from CouchDB. This prevents name collisions in the event you have more than one model class, as most real-world apps do.
  3. Remember how you made Acronym conform to Equatable? This is where it comes in handy. You use it here to ensure you aren’t saving duplicate entries in the database.
  4. In addition to fetching all available acronyms, you also provide a method to find a single Acronym by matching its id.
  5. Here is where CouchDB differs from other NoSQL databases: each record has a revision stored as _rev, which you can use to check that you are making a proper update.

Setting up Your Codable Routes

You’re finally ready to create your routes. Create a new file named AcronymRoutes.swift and add it to the KituraTIL target.

Replace the contents of AcronymRoutes.swift with the following:

import CouchDB
import Kitura
import KituraContracts
import LoggerAPI

private var database: Database?

func initializeAcronymRoutes(app: App) {
  database = app.database
  // 1
  app.router.get("/acronyms", handler: getAcronyms)
  app.router.post("/acronyms", handler: addAcronym)
  app.router.delete("/acronyms", handler: deleteAcronym)
}

// 2
private func getAcronyms(completion: @escaping ([Acronym]?, RequestError?) -> Void) {
  guard let database = database else {
    return completion(nil, .internalServerError)
  }
  Acronym.Persistence.getAll(from: database) { acronyms, error in
    return completion(acronyms, error as? RequestError)
  }
}

// 3
private func addAcronym(acronym: Acronym, completion: @escaping (Acronym?, RequestError?) -> Void) {
  guard let database = database else {
    return completion(nil, .internalServerError)
  }
  Acronym.Persistence.save(acronym, to: database) { id, error in
    guard let id = id else {
      return completion(nil, .notAcceptable)
    }
    Acronym.Persistence.get(from: database, with: id) { newAcronym, error in
      return completion(newAcronym, error as? RequestError)
    }
  }
}

// 4
private func deleteAcronym(id: String, completion: @escaping (RequestError?) -> Void) {
  guard let database = database else {
    return completion(.internalServerError)
  }
  Acronym.Persistence.delete(with: id, from: database) { error in
    return completion(error as? RequestError)
  }
}

Let’s look at the routes you just set up:

  1. Here you declare handlers for each route, which associate API endpoints with methods to be called.
  2. getAcronyms will be called to fetch Acronyms whenever a GET /acronyms request is made.
  3. addAcronym will insert a new Acronym into the database whenever a POST /acronyms request is made.
  4. deleteAcronym will remove an Acronym from the database whenever a DELETE /acronyms request is made.

Notice the conciseness of each of your routes; the beauty of Kitura’s Codable Routing is that you don’t need to worry about requests or responses directly. Instead, Kitura will recognize the request that’s made, route it to the appropriate endpoint, execute just the relevant code and even create a response for you. Nice!

To complete your app, open Application.swift and complete finalizeRoutes() by replacing // 5 with the following:

self.database = database
initializeAcronymRoutes(app: self)
Log.info("Acronym routes created")

Testing Your API

Build and run your Kitura tutorial project, and navigate to http://localhost:8080 to ensure your API is still running.

Then, open Terminal and enter the following command:

curl http://localhost:8080/acronyms

If everything is set up correctly, you should get back an empty JSON array ([]). This means you’ve correctly set up your GET route!

Now try adding a new acronym to your backend. Type the following command in the same terminal window:

curl -X POST http://localhost:8080/acronyms -H 'content-type: application/json' -d '{"short": "BRB", "long": "Be right back"}'

Hit Enter, and you should see a response like this:

{"id":"b2edde7b8032c30c7aeeff8d18000ad9","short":"BRB","long":"Be right back"}

In Xcode, Kitura’s log messages should include a Received POST type-safe request message.

To verify this actually saved to the database, enter the following GET command one more time:

curl http://localhost:8080/acronyms

If you get back the same JSON you entered inside an array, then congratulations! You’ve successfully created a working API!
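
You can also exercise your DELETE route. With Codable Routing, a handler that takes an identifier matches the id as the final path component, so the request looks like this (substitute an id returned by your own POST response; the one below is from the earlier example):

curl -X DELETE http://localhost:8080/acronyms/b2edde7b8032c30c7aeeff8d18000ad9

Run the GET command again afterwards, and the deleted acronym should be gone from the array.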

Where to Go From Here?

This tutorial has introduced you to Kitura by building a backend API. However, that is not all Kitura can do! In part two of this tutorial series, you’ll learn how to use Kitura and Stencil to build a website that includes a front end.

You can download the completed version of the project using the Download Materials button at the top or bottom of this tutorial.

As part of Kitura 2.0, the Swift@IBM team has created a command line interface for Kitura that streamlines generating a similar starter project, without requiring you to write any code yourself! You can try this out (after completing this tutorial, of course) by entering the following commands in Terminal:

brew tap ibm-swift/kitura
brew install kitura
kitura init

If you’d like to learn more about Swift 4’s Codable protocol, check out our tutorial about it here.

You can also read through IBM’s introduction to Kitura Codable Routing here.

There’s a lot of material on the internet about Kitura, and some especially great stuff available directly from IBM! If you’d like to continue learning about Codable Routing and other Kitura 2.0 features, check out this tutorial.

I encourage you to comment in the forum below if you have any questions or comments!

The post Kitura Tutorial: Getting Started with Server Side Swift appeared first on Ray Wenderlich.

Kitura Stencil Tutorial: How to make Websites with Swift


IBM’s Kitura is a very popular and exciting server framework, but did you know it can be used to create web apps?

In this tutorial on Kitura and Stencil, you’ll pick up from the first part of this tutorial series and do the following:

  • Add the Stencil dependency to your project.
  • Create a basic website interface.
  • Add a route to your API for your website.
  • Make your Swift code interact with HTML.

If you’re new to Kitura, or haven’t set up CouchDB on your system yet, check out the Server Side Swift with Kitura tutorial first.

Getting Started

Just like the previous tutorial, you’ll need the following available to you:

  • macOS 10.12 or higher
  • Xcode 9.2 or newer
  • CouchDB
  • Basic familiarity with Terminal, as you’ll use the command line quite a bit

You also need to make sure CouchDB is running. If you installed CouchDB via Homebrew, open Terminal and enter this command:

brew services start couchdb

If you get a message that says successfully started or already started, you’re good to go!

Running the Starter Project

Use the Download Materials button at the top or bottom of this tutorial to download the starter project. This has additional files that aren’t part of the previous tutorial, so you’ll need to download this starter even if you’ve completed the prior tutorial.

Open Terminal and navigate to the root directory for your project, where Package.swift lives. Then, enter the following command:

swift build

You should see a succession of logs followed by this at the end:

Compile Swift Module 'KituraWebInterface' (5 sources)
Linking ./.build/x86_64-apple-macosx10.10/debug/KituraWebInterface

Next, run the compiled executable. Input the following command and press Enter:

.build/debug/KituraWebInterface

You should see logs about starting the application, and a window may pop up asking you to allow incoming connections. If so, click Allow.

Next, open a new Terminal window and enter the following command:

curl -X POST http://localhost:8080/acronyms -H 'content-type: application/json' -d '{"short": "AFK", "long": "Away from keyboard"}'

Unless you’ve already added this acronym, you should get a JSON response representing the new acronym. Then enter the following command:

curl http://localhost:8080/acronyms

All of the acronyms stored in CouchDB will be displayed. If you completed the previous tutorial, you may see more than just the AFK acronym.

Before you generate the Xcode project, you’ll add a few directories to organize your frontend app. Press Control-C to stop the app and carry on!

How the Backend Organizes Data

Whenever you make a GET request via the browser, your backend searches these available routes:

app.router.get("/acronyms", handler: getAcronyms)
app.router.post("/acronyms", handler: addAcronym)
app.router.delete("/acronyms", handler: deleteAcronym)

The API matches against each route by considering the HTTP verb, which in this case is GET, and the route path, which is /acronyms.

However, a user doesn’t think in HTTP terms — they just want to see what they’re interested in. This is where the user interface comes in.

How the Web Frontend Organizes Data

The same routers that handle your data endpoints render your user interfaces. The only difference is that the route’s closure returns HTML instead of JSON.

At the root of nearly any Kitura web app, you’ll see two directories: public and Views. The public directory usually contains four subdirectories:

  • html: Contains files for determining how the UI is put together.
  • css: Contains definitions for “themes” and how the content is styled.
  • js: Contains JavaScript files for onscreen user interaction and other under-the-hood functionality.
  • img: Contains images, cat pictures and memes mostly!

The public directory holds only static files, but Kitura can do more than just serve static files.

Kitura uses a tool called Stencil to render webpages and populate dynamic page content based on information passed from your Swift API. Stencil lets you create templates to display content from your APIs in a predetermined format.

For example, suppose you want to add a title to a page indicating the current month. A naïve approach could look like this:

<!DOCTYPE html>
<html>
  <head>
    <title>The current month is December!</title>
  </head>
</html>

This works only 1/12th of the time: when it’s December!

What if your API could instead calculate the current month and provide a string from that calculation to use as the title? In your Swift code, you’d pass that string to the renderer, and your Stencil template might look like this:

<!DOCTYPE html>
<html>
  <head>
    <title>The current month is {{ currentMonth }}!</title>
  </head>
</html>

Notice the curly braces? This is how you tell Stencil to insert information into the HTML from a context. The workflow for Stencil is essentially this:

  1. Prepare data from your API.
  2. Set up a context: a dictionary of key/value pairs.
  3. Populate your context with the values you want to display.
  4. Render a template in your response, using a context.

With this workflow, you never have to worry about manually updating your webpage content. Instead, it is generated dynamically from the API.

Stencil templates typically have the extension .stencil or .html.stencil and go in the Views directory. This is where Stencil looks for them when you tell it to render one.
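
To make the workflow concrete, here's a minimal sketch of the month-title example in Kitura. The route path and template name are illustrative only, not part of this project:

import Foundation
import Kitura
import KituraStencil

let router = Router()
router.setDefault(templateEngine: StencilTemplateEngine())

router.get("/month") { _, response, next in
  // 1. Prepare data: compute the current month's name.
  let formatter = DateFormatter()
  formatter.dateFormat = "MMMM"
  // 2-3. Set up and populate the context dictionary.
  let context: [String: Any] = ["currentMonth": formatter.string(from: Date())]
  // 4. Render Views/month.stencil using the context.
  try response.render("month", context: context)
  next()
}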

Adding the Stencil Dependency

Before you can use Stencil, you need to add it as a dependency for your project. Open Package.swift in Xcode, and replace its contents with the following. The only changes are below the two commented lines:

// swift-tools-version:4.0
import PackageDescription

let package = Package(
  name: "KituraWebInterface",
  dependencies: [
    .package(url: "https://github.com/IBM-Swift/Kitura.git", .upToNextMinor(from: "2.0.0")),
    .package(url: "https://github.com/IBM-Swift/HeliumLogger.git", .upToNextMinor(from: "1.7.1")),
    .package(url: "https://github.com/IBM-Swift/Kitura-CouchDB.git", .upToNextMinor(from: "1.7.2")),
    // 1
    .package(url: "https://github.com/IBM-Swift/Kitura-StencilTemplateEngine.git",
             .upToNextMinor(from: "1.8.4"))
  ],
  // 2
  targets: [ .target(name: "KituraWebInterface",
                     dependencies: ["Kitura" , "HeliumLogger", "CouchDB", "KituraStencil"]) ]
)

Here’s what the changes do:

  1. You declare a dependency on Stencil, using its fully qualified Git URL.
  2. You add Stencil to your target, so it will link correctly.

Save and close this file.

Open Terminal and navigate to the root project directory, which contains Package.swift, and enter this command:

swift build

This will set up the new dependency on Stencil. Nice!

Note: Stencil is not fully updated with the latest Swift changes, so you’ll probably see a host of warnings. They can safely be ignored.

You’re making excellent progress, but you have a little more to set up before you can get to coding.

Configuring the Kitura Stencil Tutorial Project

You next need to set up some directories to keep your project organized.

Still in your project’s root directory in Terminal, enter the following commands:

mkdir Views
cd Views
touch home.stencil
touch header.stencil
touch add.stencil
cd ..

You’ve just created the Views directory, populated it with three template files and returned to the root directory.

Next, enter the following commands:

mkdir public
cd public
mkdir css
mkdir img
touch css/index.css
cd ..

You’re simply creating the directory structure and empty files for now. You’ll populate these files shortly.

Create a Swift file for handling the client route:

touch Sources/KituraWebInterface/Routes/ClientRoutes.swift

Finally, open your project directory in Finder. You can do this most easily by entering in Terminal:

open .

Find the file kitura.jpg in your downloaded materials and drag it into the img directory.

Your project directory should now look like this:

KituraWebInteface Project hierarchy

Go back to Terminal, and execute the following commands from your project’s root directory:

swift package generate-xcodeproj
open KituraWebInterface.xcodeproj

You should now have the following window open in Xcode:

KituraWebInterface Xcode first look

Take a look at your File Hierarchy, and you’ll notice everything is nicely organized.

Note: Once again, ignore any warnings about Swift 4 conversions and deprecations. These are expected with the current version of Stencil.

There are only two more things you need to do before writing code. Just like in the previous tutorial, you need to set the current scheme’s executable so you can build and run the project in Xcode.

First, go to the Select Scheme dropdown and select Edit Scheme.

Select Edit Scheme on Kitura Stencil tutorial project

In the window that pops up, click on the menu next to Executable and select KituraWebInterface. Then, click Close to dismiss the pop-up.

Set executable

Next, click on the drop-down to the right of the Select Scheme dropdown, and select My Mac (if it’s not selected already).

Build and run. Then, open your web browser and go to http://localhost:8080 to make sure the Kitura home page loads.

Now, you’re finally ready to do some web coding!

Preparing Your Web UI

Note: Xcode isn’t the best tool for editing HTML, CSS and Stencil files. You can get basic syntax coloring via Editor menu ▸ Syntax ▸ Coloring and choosing the file type, but this won’t recognize Stencil tags. For a better experience, you can use this extension for Visual Studio Code written by a Ray Wenderlich team member.

Open index.css and replace its contents with the following:

* {
  margin: 0;
  padding: 0;
  box-sizing: border-box;
}
.menu-container {
  color: #000;
  padding: 20px 0;
  display: flex;
  justify-content: center;
  font-family: 'Open Sans', 'helvetica Neue', 'Helvetica', 'Arial', "Lucida Grande", sans-serif;
}
.menu {
  border: 1px solid #FFF;
  width: 900px;
}
.menu-add {
  color: #000;
  padding: 2px;
  display: flex;
  justify-content: space-around;
  font-family: 'Open Sans', 'Arial', sans-serif;
}
.menu-add-component {
  color: #000;
  display: flex;
  justify-content: space-around;
  width: 300px;
  font-family: 'Open Sans', 'Arial', sans-serif;
}
input.button-primary {
  background-color: #00B3E4;
  color: #FFF;
  border-color: black;
  padding: 4px;
  font-family: 'Open Sans', 'Arial', sans-serif;
}
input.acronym-field {
  font-family: 'Open Sans', 'Arial', sans-serif;
  border-color: #333;
  border-width: 1px;
}
::-webkit-input-placeholder {
  font-family: 'Open Sans', 'Arial', sans-serif;
}
:-moz-placeholder {
  font-family: 'Open Sans', 'Arial', sans-serif;
  opacity: 1;
}
::-moz-placeholder {
  font-family: 'Open Sans', 'Arial', sans-serif;
  opacity: 1;
}
:-ms-input-placeholder {
  font-family: 'Open Sans', 'Arial', sans-serif;
  color: #909;
}
::-ms-input-placeholder {
  font-family: 'Open Sans', 'Arial', sans-serif;
  color: #909;
}

A complete discussion of CSS is outside the scope of this tutorial. However, this essentially sets various styles for the components you’ll be adding in HTML. You also use a CSS layout module called Flexbox to make it easier to organize content within a web page.

Next, open header.stencil and replace its contents with the following:

<div class='menu-container'>
  <div class='title'><h1>TIL: Today I Learned<img src="/img/kitura.jpg"></h1></div>
</div>

This showcases a key element of Stencil template files: composition. You can define small templates to reuse in larger templates inside of loops. Here, header.stencil will simply be included at the top of each page, but the next example will showcase just how useful Stencil templating can be.

Open home.stencil, and replace its contents with the following:

<!DOCTYPE html>
<html>
  <head>
    <title>TIL - Kitura</title>
    <link rel="stylesheet" href="/css/index.css">
    <link rel="stylesheet"
      href="//fonts.googleapis.com/css?family=Open+Sans:300,400,600,700&amp;subset=latin,latin-ext">
  </head>
  <body>
    <!--1-->
    {% include "header.stencil" %}
    <div class="menu-container">
      <h2>Existing Acronyms</h2>
    </div>
    <div class="menu-add">
      <div class="menu-add-component">
        <h3>Short Form</h3>
      </div>
      <div class="menu-add-component">
        <h3>Long Form</h3>
      </div>
    </div>
  </body>
</html>

The line below the comment labeled <!--1--> shows how Stencil allows dynamic content to be passed into the template.

To see what this actually looks like, you’ll next write a route to render the page.

Adding a Kitura Route to Render your Template

Open ClientRoutes.swift and enter the following:

import Foundation
import KituraStencil
import Kitura

func initializeClientRoutes(app: App) {
  // 1
  app.router.setDefault(templateEngine: StencilTemplateEngine())
  // 2
  app.router.all(middleware: StaticFileServer())
  
  // 3
  app.router.get("/") { _, response, _ in
    // 4
    let context: [String: Any] = [:]
    // 5
    try response.render("home", context: context)
  }
}

Here’s what that does, step by step:

  1. With Kitura, you have a choice between templating engines. This tutorial uses Stencil, but you can choose from others such as Mustache or Markdown.
  2. Here you set up a handler to serve the static files in the public directory.
  3. This is the actual definition for your route. Notice how this route handles the main home page path? This means it will respond when you visit http://localhost:8080.
  4. This is where you define the context to pass into your rendered response. The render function does not accept nil for a context; instead, you should always provide a context of type [String: Any].
  5. Notice that, because you set Stencil as the default template engine, you can leave off the default .stencil extension when referring to templates.

Now, you need to register these client routes when your app finalizes its routes.

Open Application.swift and replace finalizeRoutes(with:) with this:

private func finalizeRoutes(with database: Database) {
  self.database = database
  initializeAcronymRoutes(app: self)
  initializeClientRoutes(app: self)
  Log.info("Acronym routes created")
}

Now, whenever your application finalizes its routes, it will also register the client route that renders your home page.

Build and run, open a web browser and visit http://localhost:8080.

First served page

You’ve just rendered your first HTML template using Swift!

Next, you need to make your page display some acronyms.

Passing a Context to Stencil

It would be great if the acronyms from your database were displayed on the webpage. To do that, you first need to place them in a context.

Open ClientRoutes.swift file, and replace the get route with this:

app.router.get("/") { _, response, _ in
  if let database = app.database {
    // 1
    Acronym.Persistence.getAll(from: database) { acronyms, error in
      guard let acronyms = acronyms else {
        response.send(error?.localizedDescription)
        return
      }
      var contextAcronyms: [[String: Any]] = []
      for acronym in acronyms {
        // 2
        if let id = acronym.id {
          // 3
          let map = ["short": acronym.short, "long": acronym.long, "id": id]
          contextAcronyms.append(map)
        }
      }
      // 4
      do {
        try response.render("home", context: ["acronyms": contextAcronyms])
      } catch let error {
        response.send(error.localizedDescription)
      }
    }
  }
}

Here’s what this does:

  1. First, you call Persistence.getAll to retrieve all of the acronyms.
  2. You’ll soon add support for creating an Acronym, but for now, you only use those that already have an id within the database.
  3. In order for Stencil to read properties from your context, you must serialize them. Stencil doesn’t yet support automatic serialization through Codable, so you do it the old-fashioned way.
  4. Finally, you set the contextAcronyms using the key "acronyms". Later on, you’ll use this same key to access this array in HTML.
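
If you find the inline loop noisy, you could factor the serialization into a small helper. This extension is just a sketch and isn't part of the tutorial project:

extension Acronym {
  // Convert an acronym into the [String: Any] shape Stencil can read.
  // Returns nil for acronyms that haven't been assigned an id yet.
  func stencilContext() -> [String: Any]? {
    guard let id = id else { return nil }
    return ["short": short, "long": long, "id": id]
  }
}

With this in place, the loop collapses to acronyms.compactMap { $0.stencilContext() }.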

Next, you must update home.stencil to use the new array you passed via the context. Replace the contents of body within home.stencil with the following:

<body>
  {% include "header.stencil" %}
  <div class="menu-container">
    <h2>Existing Acronyms</h2>
  </div>
  <div class="menu-add">
    <div class="menu-add-component">
      <h3>Short Form</h3>
    </div>
    <div class="menu-add-component">
      <h3>Long Form</h3>
    </div>
  </div>
  <!--1-->
  {% for acronym in acronyms %}
    <div class="menu-add">
      <div class="menu-add-component">
        <!--2-->
        <h5>{{ acronym.short }}</h5>
      </div>
      <div class="menu-add-component">
        <!--3-->
        <p>{{ acronym.long }}</p>
      </div>
    </div>
  {% endfor %}
</body>

Here’s the play by play:

  1. Here, you use the context values passed into Stencil: you loop through all available acronyms and create template HTML for each.
  2. In the same manner, you display the short version for each acronym.
  3. Finally, you display the long version.

Build and run, and navigate your web browser to http://localhost:8080.

Depending on how many acronyms you have stored in your database, your webpage should look something like the following:

Display acronyms

As a final touch, you’ll add a UI so you can add and remove acronyms through the web app.

Adding a Template for Creating Acronyms

Open add.stencil and enter the following:

<div class="menu-container">
  <h2>Add New Acronym</h2>
</div>
<div class="menu-add">
  <label>Acronym</label>
  <input id="shortField" class="acronym-field" name="short" placeholder="e.g. BRB" />
  <label>Long Form</label>
  <input id="longField" class="acronym-field" name="long" placeholder="e.g. Be Right Back" />
  <input class="button-primary" type="submit" value="Save Acronym" onClick="submitForm()">
</div>

You’ll use this HTML template at the top of the page to allow the user to create a new Acronym. In particular, look at the line for button-primary, which has onClick set to submitForm. You need to create a JavaScript function for this. At the end of the file, add the following:

<script type="text/javascript">
function submitForm() {
  // 1
  var short = document.getElementById("shortField").value;
  var long = document.getElementById("longField").value;
  if (long == "" || short == "") {
    // 2
    alert("Both fields must contain text!");
    return;
  }
  var xhr = new XMLHttpRequest();
  xhr.open("POST", "/acronyms");
  // 3
  xhr.setRequestHeader("Content-Type", "application/json");
  xhr.onreadystatechange = function() {
    if (xhr.readyState == XMLHttpRequest.DONE) {
      // 5
      location.reload();
    }
  }
  // 4
  xhr.send(JSON.stringify({ "short": short, "long": long }));
}
</script>

Here’s what this does:

  1. Here you access form fields and prepare to populate your JSON request to your API.
  2. Just as you placed object validation on the backend, it’s good practice to duplicate your validation on the front-end.
  3. Since your API is using Codable Routing, you need to set your content header appropriately.
  4. You send your JSON with this handy stringify function to serialize your data.
  5. When the request receives a response, you reload the page to update what you see in your list.

Now you’ll add this template to the home page.

Open home.stencil and find the line that has {% include "header.stencil" %}. Add the following right after that line:

{% include "add.stencil" %}

This will include everything you wrote in add.stencil. This is a great way to separate out reusable components.

Reload your webpage. It should look like this:

Add acronym

Try using your fancy new UI to add a couple of new acronyms. After you click the Save Acronym button, you’ll either see a refreshed page, or an alert if you didn’t enter text for both fields!

You’re almost at the finish line! You just need a way to delete acronyms.

Adding Delete Functionality

This is another place to take advantage of your templating abilities. Rather than create a UI where you enter an ID and then click delete for that acronym, wouldn’t it be nice to have a delete button next to each acronym?

Still in home.stencil, replace everything inside the body tag with this:

<body>
  {% include "header.stencil" %}
  {% include "add.stencil" %}
  <div class="menu-container">
    <h2>Existing Acronyms</h2>
  </div>
  <div class="menu-add">
    <div class="menu-add-component">
      <h3>Short Form</h3>
    </div>
    <div class="menu-add-component">
      <h3>Long Form</h3>
    </div>
    <!--1-->
    <div class="menu-add-component">
      <h3>Delete Acronym</h3>
    </div>
  </div>
  {% for acronym in acronyms %}
    <div class="menu-add">
      <div class="menu-add-component">
        <h5>{{ acronym.short }}</h5>
      </div>
      <div class="menu-add-component">
        <p>{{ acronym.long }}</p>
      </div>
      <!--2-->
      <div class="menu-add-component">
        <input id="{{ acronym.id }}" class="button-primary one-line" type="submit" value="Delete" onClick="deleteAcronym(this.id)">
      </div>
    </div>
  {% endfor %}
</body>

Here’s what you added:

  1. Add a header to reflect your options below.
  2. Add a button to delete the acronym on a given line. Notice that the id parameter makes use of the acronym’s id. This will be important shortly.

Below all of your HTML, you need to add one more JavaScript function. Just before the </body> tag, add the following:

<script type="text/javascript">
// 1
function deleteAcronym(acronymID) {
  var xhr = new XMLHttpRequest();
  // 2
  xhr.open("delete", "/acronyms/" + acronymID);
  xhr.setRequestHeader("Content-Type", "application/json");
  xhr.onreadystatechange = function() {
    if (xhr.readyState == XMLHttpRequest.DONE) {
      // 3
      location.reload();
    }
  }
  xhr.send();
}
</script>

Here’s how this works:

  1. The button’s ID is used as a parameter to tell your function which acronym you want to delete.
  2. You use this parameter to create the correct URL.
  3. When the request completes, you refresh the page to update the UI.

Reload the page again. It should look like this:

Final project screen

Try deleting the acronyms you’ve already typed. After you click the delete button, the page should update and show that you did in fact delete the acronym!

You’ve just built a fully functional web application with a backend written in Swift! Pretty neat.

Where to Go From Here?

You can download the completed version of the project using the Download Materials button at the top or bottom of this tutorial. Because projects built using the Swift Package Manager (SPM) treat the .xcodeproj as disposable, it is not included. Run swift package generate-xcodeproj again to download the dependencies and generate an Xcode project.

Web templating is fun when you need to interface with your Swift API right away. I hope that Stencil helps you get on track for whatever it is you want to create, whether it’s a blog, a photo library or anything else you have in mind.

If you have any questions or comments, please join the forum discussion below!

The post Kitura Stencil Tutorial: How to make Websites with Swift appeared first on Ray Wenderlich.

Unreal Engine 4 Toon Outlines Tutorial


When people say "toon outlines", they are referring to any technique that renders lines around objects. Like cel shading, outlines can help your game look more stylized. They can give the impression that objects are drawn or inked. You can see examples of this in games such as Okami, Borderlands and Dragon Ball FighterZ.

In this tutorial, you will learn how to:

  • Create outlines using an inverted mesh
  • Create outlines using post processing and convolution
  • Create and use material functions
  • Sample neighboring pixels
Note: This tutorial assumes you already know the basics of using Unreal Engine. If you are new to Unreal Engine, check out our 10-part Unreal Engine for Beginners tutorial series.

If you are new to post process materials, you should go through our cel shading tutorial first. This tutorial will use some of the concepts presented in the cel shading tutorial.

Getting Started

Start by downloading the materials for this tutorial (you can find a link at the top or bottom of this tutorial). Unzip it and navigate to ToonOutlineStarter and open ToonOutline.uproject. You will see the following scene:

unreal engine toon outline

To start, you will create outlines by using an inverted mesh.

Inverted Mesh Outlines

The idea behind this method is to duplicate your target mesh. Then, make the duplicate a solid color (usually black) and expand it so that it is slightly larger than the original mesh. This will give you a silhouette.

unreal engine toon outline

If you use the duplicate as is, it will completely block the original.

unreal engine toon outline

To fix this, you can invert the normals of the duplicate. With backface culling enabled, you will see the inward faces instead of the outward faces.

unreal engine toon outline

This will allow the original to show through the duplicate. And because the duplicate is larger than the original, you will get an outline.

unreal engine toon outline

Advantages:

  • You will always have clean lines since the outline is made up of polygons
  • Appearance and thickness are easily adjustable by moving vertices
  • Outlines shrink over distance. This can also be a disadvantage.

Disadvantages:

  • Generally, does not outline details inside the mesh
  • Since the outline consists of polygons, it is prone to clipping. You can see this in the example above, where the duplicate overlaps the ground.
  • Possibly bad for performance. This depends on how many polygons your mesh has. Since you are using duplicates, you are basically doubling your polygon count.
  • Works better on smooth and convex meshes. Hard edges and concave areas will create holes in the outline. You can see this in the image below.

unreal engine toon outline

Generally, you should create the inverted mesh in a modelling program. This will give you more control over the silhouette. If working with skeletal meshes, it will also allow you to skin the duplicate to the original skeleton. This will allow the duplicate to move with the original mesh.

For this tutorial, you will create the mesh in Unreal rather than a modelling program. The method is slightly different but the concept remains the same.

First, you need to create the material for the duplicate.

Creating the Inverted Mesh Material

For this method, you will mask the outward-facing polygons. This will leave you with the inward-facing polygons.

Note: Because of the masking, this method is slightly more expensive than using a manually created mesh.

Navigate to the Materials folder and open M_Inverted. Afterwards, go to the Details panel and adjust the following settings:

  • Blend Mode: Set this to Masked. This will allow you to mark areas as visible or invisible. You can adjust the threshold by editing Opacity Mask Clip Value.
  • Shading Model: Set this to Unlit. This will make it so lights do not affect the mesh.
  • Two Sided: Set this to enabled. By default, Unreal culls backfaces. Enabling this option disables backface culling. If you leave backface culling enabled, you will not be able to see the inward-facing polygons.

unreal engine toon outline

Next, create a Vector Parameter and name it OutlineColor. This will control the color of the outline. Connect it to Emissive Color.

unreal engine toon outline

To mask the outward-facing polygons, create a TwoSidedSign and multiply it by -1. Connect the result to Opacity Mask.

unreal engine toon outline

TwoSidedSign will output 1 for frontfaces and -1 for backfaces. This means frontfaces will be visible and backfaces will be invisible. However, you want the opposite effect. To do this, you reverse the signs by multiplying by -1. Now frontfaces will output -1 and backfaces will output 1.
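
Expressed as plain Swift rather than material nodes, the masking logic is just a sign flip; this is a sketch of the math, not actual Unreal code:

// twoSidedSign is +1 for front faces and -1 for back faces.
func opacityMask(twoSidedSign: Float) -> Float {
  // Front faces become -1 (masked out, below the Opacity Mask Clip
  // Value) and back faces become +1 (visible).
  return twoSidedSign * -1
}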

Finally, you need a way to control the outline thickness. To do this, add the highlighted nodes:

unreal engine toon outline

In Unreal, you can move the position of every vertex using World Position Offset. By multiplying the vertex normal by OutlineThickness, you are making the mesh thicker. Here is a demonstration using the original mesh:

unreal engine toon outline
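
In code form, the offset the nodes compute amounts to this (again, a sketch of the math rather than actual Unreal code):

import simd

// Push each vertex outward along its normal to inflate the silhouette.
func offsetPosition(position: SIMD3<Float>,
                    normal: SIMD3<Float>,
                    outlineThickness: Float) -> SIMD3<Float> {
  return position + normal * outlineThickness
}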

At this point, the material is complete. Click Apply and then close M_Inverted.

Now, you need to duplicate the mesh and apply the material you just created.

Duplicating the Mesh

Navigate to the Blueprints folder and open BP_Viking. Add a Static Mesh component as a child of Mesh and name it Outline.

unreal engine toon outline

Make sure you have Outline selected and set its Static Mesh to SM_Viking. Afterwards, set its material to MI_Inverted.

unreal engine toon outline

MI_Inverted is an instance of M_Inverted. This will allow you to adjust the OutlineColor and OutlineThickness parameters without recompiling.

Click Compile and then close BP_Viking. The viking will now have an outline. You can control the color and thickness by opening MI_Inverted and adjusting the parameters.

unreal engine toon outline

That’s it for this method! See if you can create an inverted mesh in your modelling program and then bring it into Unreal.

If you want to create outlines in a different way, you can use post processing instead.

Post Process Outlines

You can create post process outlines by using edge detection. This is a technique which detects discontinuities across regions in an image. Here are a few types of discontinuities you can look for:

unreal engine toon outline

Advantages:

  • Can apply to the entire scene easily
  • Fixed performance cost since the shader always runs for every pixel
  • Line width stays the same at various distances. This can also be a disadvantage.
  • Lines don’t clip into geometry since it is a post process effect

Disadvantages:

  • Usually requires multiple edge detectors to catch all edges. This has an impact on performance.
  • Prone to noise. This means edges will show up in areas with a lot of variance.

A common way to do edge detection is to perform convolution on each pixel.

What is Convolution?

In image processing, convolution is an operation on two groups of numbers to produce a single number. First, you take a grid of numbers (known as a kernel) and place the center over each pixel. Below is an example of a 3×3 kernel moving over the top two rows of an image:

unreal engine toon outline

For every pixel, multiply each kernel entry by its corresponding pixel. Let’s take the pixel from the top-left corner of the mouth for demonstration. We’ll also convert the image to grayscale to simplify the calculations.

unreal engine toon outline

First, place the kernel (we’ll use the same one from before) so that the target pixel is in the center. Afterwards, multiply each kernel element with the pixel it overlaps.

unreal engine toon outline

Finally, add all the results together. This will be the new value for the center pixel. In this case, the new value is 0.5 + 0.5 or 1. Here is the image after performing convolution on every pixel:

unreal engine toon outline

The kernel you use determines what effect you get. The kernel from the examples is used for edge detection. Here are a few examples of other kernels:

unreal engine toon outline

Note: You’ll notice that you can find these as filters in image editing programs. Convolution is actually how image editing programs perform many of their filter operations. In fact, you can perform convolution with your own kernels in Photoshop!
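
Here's a compact sketch of the whole operation in Swift, assuming a grayscale image stored as a [[Float]] grid; border pixels are skipped for brevity:

func convolve(_ image: [[Float]], kernel: [[Float]]) -> [[Float]] {
  var output = image
  for y in 1..<image.count - 1 {
    for x in 1..<image[y].count - 1 {
      var sum: Float = 0
      for ky in 0..<3 {
        for kx in 0..<3 {
          // Multiply each kernel entry by the pixel it overlaps...
          sum += kernel[ky][kx] * image[y + ky - 1][x + kx - 1]
        }
      }
      // ...and the sum becomes the new value for the center pixel.
      output[y][x] = sum
    }
  }
  return output
}

// The edge detection kernel from the examples:
let edgeKernel: [[Float]] = [[0,  1, 0],
                             [1, -4, 1],
                             [0,  1, 0]]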

To detect edges in an image, you can use Laplacian edge detection.

Laplacian Edge Detection

First, what is the kernel for Laplacian edge detection? It’s actually the one you saw in the examples from the last section!

unreal engine toon outline

This kernel works for edge detection because the Laplacian measures the change in slope. Areas with a greater change diverge from zero, indicating an edge.

To help you understand it, let’s look at the Laplacian in one dimension. The kernel for this would be:

unreal engine toon outline

First, place the kernel over an edge pixel and then perform convolution.

unreal engine toon outline

This will give you a value of 1 which indicates there was a large change. This means the target pixel is likely to be an edge.

Next, let’s convolve an area with less variance.

unreal engine toon outline

Even though the pixels have different values, the gradient is linear. This means there is no change in slope and indicates the target pixel is not an edge.

Below is the image after convolution and a graph with each value plotted. You can see that pixels on an edge are further away from zero.

unreal engine toon outline

Phew! That was a lot of theory but don’t worry — now comes the fun part. In the next section, you will build a post process material that performs Laplacian edge detection on the depth buffer.

Building the Laplacian Edge Detector

Navigate to the Maps folder and open PostProcess. You will see a black screen. This is because the map contains a Post Process Volume using an empty post process material.

unreal engine toon outline

This is the material you will edit to build the edge detector. The first step is to figure out how to sample neighboring pixels.

To get the position of the current pixel, you can use a TextureCoordinate. For example, if the current pixel is in the middle, it will return (0.5, 0.5). This two-component vector is called a UV.

unreal engine toon outline

To sample a different pixel, you just need to add an offset to the TextureCoordinate. In a 100×100 image, each pixel has a size of 0.01 in UV space. To sample a pixel to the right, you add 0.01 on the X-axis.

unreal engine toon outline

However, there is a problem with this. As the image resolution changes, the pixel size also changes. If you use the same offset (0.01, 0) in a 200×200 image, it will sample two pixels to the right.

To fix this, you can use the SceneTexelSize node which returns the pixel size. To use it, you do something like this:

unreal engine toon outline
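
In code form, the resolution-independent offset works like this; a sketch, where texelSize is (1/width, 1/height), the value SceneTexelSize provides:

import simd

// Convert a pixel offset into a UV offset that works at any resolution.
func offsetUV(uv: SIMD2<Float>,
              pixelOffset: SIMD2<Float>,
              texelSize: SIMD2<Float>) -> SIMD2<Float> {
  return uv + pixelOffset * texelSize
}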

Since you are going to be sampling multiple pixels, you would have to create this multiple times.

unreal engine toon outline

Obviously, this will quickly become messy. Fortunately, you can use material functions to keep your graph clean.

Note: A material function is like a function you would find in Blueprints or C++.

In the next section, you will put the duplicate nodes into the function and create an input for the offset.

Creating the Sample Pixel Function

First, navigate to the Materials\PostProcess folder. To create a material function, click Add New and select Materials & Textures\Material Function.

unreal engine toon outline

Rename it to MF_GetPixelDepth and then open it. The graph will have a single FunctionOutput. This is where you will connect the value of the sampled pixel.

unreal engine toon outline

First, you need to create an input that will accept an offset. To do this, create a FunctionInput.

unreal engine toon outline

This will show up as an input pin when you use the function later.

Now you need to specify a few settings for the input. Make sure you have the FunctionInput selected and then go to the Details panel. Adjust the following settings:

  • InputName: Offset
  • InputType: Function Input Vector 2. Since the depth buffer is a 2D image, the offset needs to be a Vector 2.
  • Use Preview Value as Default: Enabled. If you don’t provide an input value, the function will use the value from Preview Value.

unreal engine toon outline

Next, you need to multiply the offset by the pixel size. Then, you need to add the result to the TextureCoordinate. To do this, add the highlighted nodes:

unreal engine toon outline

Finally, you need to sample the depth buffer using the provided UVs. Add a SceneDepth and connect everything like so:

unreal engine toon outline

Note: You can also use a SceneTexture set to SceneDepth instead.

Summary:

  1. Offset will take in a Vector 2 and multiply it by SceneTexelSize. This will give you an offset in UV space.
  2. Add the offset to TextureCoordinate to get a pixel that is (x, y) pixels away from the current pixel
  3. SceneDepth will use the provided UVs to sample the appropriate pixel and then output it

That’s it for the material function. Click Apply and then close MF_GetPixelDepth.

Note: You may see an error in the Stats panel saying only translucent or post process materials can read from scene depth. You can safely ignore this. Since you will be using the function in a post process material, it will work.

Next, you need to use the function to perform convolution on the depth buffer.

Performing Convolution

First, you need to create the offsets for each pixel. Since the corners of the kernel are always zero, you can skip them. This leaves you with the left, right, top and bottom pixels.

Open PP_Outline and create four Constant2Vector nodes. Set them to the following:

  • (-1, 0)
  • (1, 0)
  • (0, -1)
  • (0, 1)

unreal engine toon outline

Next, you need to sample the five pixels in the kernel. Create five MaterialFunctionCall nodes and set each to MF_GetPixelDepth. Afterwards, connect each offset to their own function.

unreal engine toon outline

This will give you the depth values for each pixel.

Next is the multiplication stage. Since the multiplier for neighboring pixels is 1, you can skip the multiplication. However, you still need to multiply the center pixel (bottom function) by -4.

unreal engine toon outline

Next, you need to sum up all the values. Create four Add nodes and connect them like so:

unreal engine toon outline

If you remember the graph of pixel values, you’ll see that some of them are negative. If you use the material as is, the negative pixels will appear black because they are below zero. To fix this, you can get the absolute value which converts any inputs to a positive value. Add an Abs and connect everything like so:

unreal engine toon outline

Summary:

  1. The MF_GetPixelDepth nodes will get the depth value for the center, left, right, top and bottom pixels
  2. Multiply each pixel by its corresponding kernel value. In this case, you only need to multiply the center pixel.
  3. Calculate the sum of all the pixels
  4. Get the absolute value of the sum. This will prevent pixels with negative values from appearing as black.

Click Apply and then go back to the main editor. The entire image will now have lines!

unreal engine toon outline

There are a few problems with this though. First, there are edges where there is only a slight depth difference. Second, the background has circular lines due to it being a sphere. This is not a problem if you are going to isolate the edge detection to meshes. However, if you want lines for your entire scene, the circles are undesirable.

To fix these, you can implement thresholding.

Implementing Thresholding

First, you will fix the lines that appear because of small depth differences. Go back to the material editor and create the setup below. Make sure you set Threshold to 4.

unreal engine toon outline

Later, you will connect the result from the edge detection to A. This will output 1 (indicating an edge) if the pixel’s value is higher than 4. Otherwise, it will output 0 (no edge).

Next, you will get rid of the lines in the background. Create the setup below. Make sure you set DepthCutoff to 9000.

unreal engine toon outline

This will output 0 (no edge) if the current pixel’s depth is greater than 9000. Otherwise, it will output the value from A < B.

Finally, connect everything like so:

Now, lines will only appear if the pixel value is above 4 (Threshold) and its depth is lower than 9000 (DepthCutoff).
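
Per pixel, the combined logic boils down to the following sketch, where edgeValue is the absolute Laplacian result and depth is the pixel's scene depth:

func isEdge(edgeValue: Float, depth: Float) -> Bool {
  let threshold: Float = 4
  let depthCutoff: Float = 9000
  guard depth <= depthCutoff else { return false }  // too far away: no edge
  return edgeValue > threshold                      // strong changes only
}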

Click Apply and then go back to the main editor. The small lines and background lines are now gone!

unreal engine toon outline

Note: You can create a material instance of PP_Outline to control the Threshold and DepthCutoff.

The edge detection is working pretty well. But what if you want thicker lines? To do this, you need a larger kernel size.

Creating Thicker Lines

Generally, larger kernel sizes have a greater impact on performance. This is because you have to sample more pixels. But what if there was a way to have larger kernels with the same performance as a 3×3 kernel? This is where dilated convolution comes in handy.

In dilated convolution, you simply space the offsets further apart. To do this, you multiply each offset by a scalar called the dilation rate. This defines the spacing between each kernel element.

unreal engine toon outline

As you can see, this allows you to increase the kernel size while sampling the same number of pixels.
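
As a quick sketch of the arithmetic:

import simd

// The four neighbor offsets from the kernel, scaled by the dilation rate.
let offsets: [SIMD2<Float>] = [[-1, 0], [1, 0], [0, -1], [0, 1]]
let dilationRate: Float = 3
let dilated = offsets.map { $0 * dilationRate }
// dilated == [[-3, 0], [3, 0], [0, -3], [0, 3]]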

Now let’s implement dilated convolution. Go back to the material editor and create a ScalarParameter called DilationRate. Set its value to 3. Afterwards, multiply each offset by DilationRate.

unreal engine toon outline

This will place each offset 3 pixels away from the center pixel.

Click Apply and then go back to the main editor. You will see that your lines are a lot thicker. Here is a comparison between multiple dilation rates:

unreal engine toon outline

Unless you’re going for a line art look for your game, you probably want to have the original scene show through. In the final section, you will add the lines to the original scene image.

Adding Lines to the Original Image

Go back to the material editor and create the setup below. Order is important here!

unreal engine toon outline

Next, connect everything like so:

Now, the Lerp will output the scene image if the alpha reaches zero (black). Otherwise, it will output LineColor.

Click Apply and then close PP_Outline. The original scene will now have outlines!

unreal engine toon outline

Where to Go From Here?

You can download the completed project using the link at the top or bottom of this tutorial.

If you’d like to do more with edge detection, try creating one that works on the normal buffer. This will give you some edges that don’t appear in a depth edge detector. You can then combine both types of edge detection together.

Convolution is a wide topic that has many uses including artificial intelligence and audio processing. I encourage you to explore convolution by creating other effects such as sharpening and blurring. Some of these are as simple as changing the values in the kernel! Check out Image Kernels Explained Visually for an interactive explanation of convolution. It also contains the kernels for some other effects.

I also highly recommend you check out the GDC presentation on Guilty Gear Xrd’s art style. They also use the inverted mesh method for the outer lines. However, for the inner lines, they present a simple yet ingenious technique using textures and UV manipulation.

If there are any effects you’d like me to cover, let me know in the comments below!

The post Unreal Engine 4 Toon Outlines Tutorial appeared first on Ray Wenderlich.


Video Tutorial: Beginning iOS Debugging Part 1: Introduction

Video Tutorial: Beginning iOS Debugging Part 2: Breakpoints

Video Tutorial: Beginning iOS Debugging Part 3: Controlling Breakpoints

New Course: Beginning iOS Debugging


Debugging is a skill that you’ll use on pretty much every project you work on. It’s useful for finding and fixing bugs in your applications, of course, but you can also use the same skills to understand what’s happening “under the hood” of your code.

Today, we are releasing a brand new course: Beginning iOS Debugging. In this short course, you’ll learn basic debugging skills like using breakpoints to step through your code, using debugger commands, and solving layout issues with Xcode’s debug view hierarchy tool.

Take a look at what’s inside:

  1. Introduction: Why learn debugging? This introductory video will answer this question and preview the course.
  2. Breakpoints: Pause your app in the middle of execution using breakpoints. Use the breakpoint navigator to view and manage all the breakpoints in your project.
  3. Controlling Breakpoints: Activate and deactivate individual breakpoints, disable all breakpoints in a project, and delete breakpoints in Xcode.
  4. Inspecting Variables: View the values stored in your properties and constants while your app is paused. Use debugger commands to change the state of your app while debugging.
  5. Challenge: Breakpoints: In this challenge, use the skills you’ve learned to find and fix a bug.
  6. Control Flow: Use Xcode to step through execution of your app one line at a time. Learn the different ways to step through your code.
  7. Call Stack: See how the call stack works and how to navigate it during a debug session to view the app state at different points in the stack.
  8. View Hierarchy: Learn some tools for debugging visual issues with your app. Debug a layout issue using Xcode’s debug view hierarchy tool.
  9. Challenge: Putting it Together: In this challenge, use breakpoints and the call stack to find the source of another bug.
  10. Conclusion: Review what you’ve learned in this course and see where to go next.

Where To Go From Here?

Want to check out the course? You can watch the first two videos for free!

The rest of the course is for raywenderlich.com subscribers only. Here’s how you can get access:

  • If you are a raywenderlich.com subscriber: The first three videos are ready for you today! The rest of the course will be released over this week. You can check out the course here.
  • If you are not a subscriber yet: What are you waiting for? Subscribe now to get access to our new Beginning iOS Debugging course and our entire catalog of over 500 videos.

Stay tuned for more new and updated courses to come. I hope you enjoy the course! :]

The post New Course: Beginning iOS Debugging appeared first on Ray Wenderlich.

IBM Watson Services for Core ML Tutorial


Have you been exploring the exciting possibilities of adding machine learning (ML) to your apps with Apple’s Core ML and Vision frameworks? Maybe you’ve used your own data to extend one of Apple’s Turi Create models. Give a big welcome to the newest player on the field: IBM Watson Services, now with Core ML!

Note: Core ML models are initially available only for visual recognition, but hopefully the other services will become Core ML-enabled, too.


In this tutorial, you’ll set up an IBM Watson account, train a custom visual recognition Watson service model, and set up an iOS app to use the exported Core ML model.

Watson Services

You’ll be using Watson Studio in this tutorial. It provides an easy, no-code environment for training ML models with your data.

The list of Watson services covers a range of data, knowledge, vision, speech, language and empathy ML models. You’ll get a closer look at these, once you’re logged into Watson Studio.

The really exciting possibility is building continuous learning into your app, indicated by this diagram from Apple’s IBM Watson Services for Core ML page:

This is getting closer to what Siri and FaceID do: continuous learning from user data, in your apps!

Is this really groundbreaking? Right now, if a Core ML model changes after the user installs your app, your app can download and compile a new model. The app needs some kind of notification to know there’s an update to the model. A bigger question is: why would the model change? Maybe because of a better training algorithm, but real improvements usually come from more data, and the best data comes from actual users.
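
For reference, the download-and-compile step uses Core ML directly. Here’s a minimal sketch, where modelURL is a placeholder for wherever your app saved the downloaded .mlmodel file:

import CoreML

// Compile the downloaded model on-device. The compiled model lands in a
// temporary directory, so a real app would move it to permanent storage.
let compiledURL = try MLModel.compileModel(at: modelURL)
let model = try MLModel(contentsOf: compiledURL)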

Even if you managed to collect user data, the workflow to retrain your model would be far from seamless. This is what could tip the balance in favor of Watson Services: the promise of easy — or at least, easier — integration of data collection, retraining and deployment. I’ll tell you more about this later.

Turi vs. Watson

Which should you use?

  • Turi and Watson both let you extend ML models for vision and language, but Watson exports Core ML only for visual recognition models.
  • Turi has an activity classifier, Watson doesn’t. Watson has Discovery, which sounds much more sophisticated than anything Turi has.
  • You need to write and run Python to use Turi to train models. Watson just needs your data to train the model.

Um Er … User Privacy?

The big deal about Core ML is that models run on the iOS device, enabling offline use and protecting the user’s data. The user’s data never leaves the device.

But when the user provides feedback on the accuracy of a Watson model’s predictions, your app is sending the user’s photos to IBM’s servers! Well, IBM has a state-of-the-art privacy-on-the-cloud policy. And no doubt Apple will add a new privacy key requirement, to let users opt into supplying their data to your model.

Getting Started

Carthage

Eventually, you’ll need the Carthage dependency manager to build the Watson Swift SDK, which contains all the Watson Services frameworks.

Install Carthage by downloading the latest Carthage.pkg from Carthage releases, and running it.

Or, if you prefer to use Homebrew to install Carthage, follow the instructions in the Carthage readme.
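
With Homebrew, the install is a one-liner:

brew install carthage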

IBM’s Sample Apps

From here, the roadmap can become a little confusing. I’ll provide direct links, but also tell you where the links are, on the multitude of pages, to help you find your way around when you go back later.

Start on Apple’s page: IBM Watson Services for Core ML. Scroll down to Getting Started, and Command-click the middle link Start on GitHub, under Begin with Watson Starters. Command-click opens GitHub in a new tab: you want to keep the Apple page open, to make it easier to get back to the GitHub page, to make it easier to get back to the Watson Studio login page — trust me ;]!

Download the zip file, and open the workspace QuickstartWorkspace.xcworkspace. This workspace contains two apps: Core ML Vision Simple and Core ML Vision Custom. The Simple app uses Core ML models to classify common DIY tools or plants. The Custom app uses a Core ML model downloaded from Watson Services. That’s the model you’re going to build in this tutorial!

Scroll down to the README section Running Core ML Vision Custom: the first step in Setting up is to log in to Watson Studio. Go ahead and click the link.

Signing Up & Logging In

After you’ve gotten into Watson once, you can skip down to that bottom right link, and just sign in. Assuming this is your first time, you’ll need to create an account.

Note: If you already have an IBM Cloud account, go ahead with the sign up for IBM Watson step.

OK, type in an email address, check the checkbox, and click Next. You’ll see a form:

Fill in the fields. To avoid frustration, know that the password requirements are:

Password must contain 8-31 characters with at least one upper-case, one lower-case, one number, and one special character ( - _ . @ )

The eyeball-with-key icon on the right reveals the password, so you can edit it to include the necessary oddities.

Check or uncheck the checkbox, then click Create Account. You’ll get a page telling you to check your mailbox, so do that, open the email, and confirm your address.

Don’t follow any links from the confirmation page! They tend to lead you away from where you want to be. Get back to that login page for Watson Studio, and click the link to sign up for IBM Watson.

Note: If Watson thinks you’re already logged in, it will skip the next step and continue with the one following.

IBMid??? Relax, your IBM Cloud login will get you in! Enter the email address you used, and click Continue. On the next page, enter your password, and Sign in:

Interesting. I was in Vancouver when I created this account, not in the US South. But each of the Watson Services is available in only certain regions, and US South gives you access to all of them. So keep that _us-south appendage, or add it, if it isn’t there. Click Continue, and wait a short time, while a few different messages appear, and then you’re Done!

Clicking Get Started runs through more messages, and spins a while on this one:

And then you’re in!

Remember: in future, you can go straight to the bottom right link, and just sign in to Watson.

Look at the breadcrumbs: you’ve logged into Services / Watson Services / watson_vision_combined-dsx. This is because you clicked the login link on the GitHub page, and that specifies target=watson_vision_combined. You’ll explore Watson Services later, but for now, you’ll be building a custom object classification model on top of Watson’s object classifier. IBM’s sample uses four types of cables, but you can use your own training images. I’ll give detailed instructions when we reach that step.

Note: This is an important and useful page, but it’s easy to lose track of it, as you explore the Watson site. To get back to it, click the IBM Watson home button in the upper left corner, then scroll down to Watson Services to find its link.

Creating a Custom Object Classifier

OK, back to the GitHub page: Training the model. You’ll be following these instructions, more or less, and I’ll show you what you should be seeing, help you find everything, and provide a high-level summary of what you’re doing.

Here’s the first high-level summary. The steps to integrate a custom Watson object classifier into IBM’s sample app are:

  1. Create a new project.
  2. Upload and add training data to the project.
  3. Train the model.
  4. Copy the model’s classifierId and apiKey to String properties in the sample app.
  5. In the Core ML Vision Custom directory, use Carthage to download and build the Watson Swift SDK.
  6. Select the Core ML Vision Custom scheme, build, run and test.

1. Creating a New Watson Project

Like most IDEs, you start by creating a project. This one uses the Visual Recognition tool and the watson_vision_combined-dsx service. The “dsx” stands for Data Science Experience, the old name for Watson Studio. Like NS for NeXTSTEP. ;]

Click Create Model, and wait a very short time to load the New project page. Enter Custom Core ML for the Name, and click Create.

Note: Various Watson people pop up from time to time. I don’t think they’re really there, but let me know if you have an actual chat with any of them.

After a while, you’ll see this:

2. Adding Training Data

The Watson visual recognition model is pre-trained to recognize tools and plants. You can upload your own training data to create a model that classifies objects that matter to your app. IBM’s sample project trains a model with photos of cables, to classify new images as HDMI, Thunderbolt, VGA or USB.

Note: To upload your own training images, instead of cable images, organize your images into folders: the names of the images don’t matter, but the name of each folder should be the label of the class represented by the images in it. There’s a link to IBM’s guidelines for image data in the Resources section at the end of this tutorial. If you want your model to recognize your objects in different lighting, or from different angles, then supply several samples of each variation. Then zip up the folders, and upload them instead of the sample zipfiles.
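
For example, if your folders are named hdmi and usb (placeholder names; yours will match your own class labels), you could zip each one from Terminal like so:

zip -r hdmi.zip hdmi
zip -r usb.zip usb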

Click the binary data icon to open a sidebar where you can add zipfiles of training images:

Click Browse, and navigate to the Training Images folder in the visual-recognition-coreml-master folder you downloaded from GitHub:

Select all four zipfiles, and click Choose, then wait for the files to upload.

Check all four checkboxes, then click the refined hamburger menu icon to see the Add selected to model option.

Select that, then watch and wait while the images get added to the model.

Note: When you upload your own training images, you can also upload images to the negative class: these would be images that don’t match any of the classes you want the model to recognize. Ideally, they would have features in common with your positive classes, for example, an iTunes card when you’re training a model to recognize iPods.

3. Training the Model

The model is ready for training whenever you add data. Training uses the same machine learning model that created the basic tools and plants classifier, but adds your new classes to what it’s able to recognize. Training a model of this size on your Mac would take a very long time.

Close the sidebar, and click Train Model.

Go get a drink and maybe a snack — this will probably take at least 5 minutes, maybe 10:

Success looks like this:

4. Adding the Model to Your App

The sample app already has all the VisualRecognition code to download, compile and use your new model. All you have to do is edit the apiKey and classifierId properties, so the app creates a VisualRecognition object from your model. Finding these values requires several clicks.

Note: For this step and the next, I think it’s easier if you just follow my instructions, and don’t look at the GitHub page.

Click the here link to see what the GitHub page calls the custom model overview page:

Click the Associated Service link (the GitHub page calls this your Visual Recognition instance name): you’re back at Services / Watson Services / watson_vision_combined-dsx! But scroll down to the bottom:

That’s the model you just trained!

Note: The GitHub page calls this the Visual Recognition instance overview page in Watson Studio.

Back to Xcode — remember Xcode? — open Core ML Vision Custom/ImageClassificationViewController.swift, and locate the classifierId property, below the outlets.

On the Watson page, click the Copy model ID link, then paste this value between the classifierId quotation marks, something like this, but your value will be different:

let classifierId = "DefaultCustomModel_1752360114"

Scroll up to the top of the Watson page, and select the Credentials tab:

Copy and paste the api_key value into the apiKey property, above classifierId:

let apiKey = "85e5c50b26b16d1e4ba6e5e3930c328ce0ad90cb"

Your value will be different.

These two values connect your app to the Watson model you just trained. The sample app contains code to update the model when the user taps the reload button.

One last edit: change version to today’s date in YYYY-MM-DD format:

let version = "2018-03-28"

The GitHub page doesn’t mention this, but the Watson Swift SDK GitHub repository README recommends it.

5. Building the Watson Swift SDK

The final magic happens by building the Watson Swift SDK in the app’s directory. This creates frameworks for all the Watson Services.

Open Terminal and navigate to the Core ML Vision Custom directory, the one that contains Cartfile. List the files, just to make sure:

cd <drag folder from finder>
ls

You should see something like this:

Audreys-MacBook-Pro-4:Core ML Vision Custom amt1$ ls
Cartfile			Core ML Vision Custom.xcodeproj
Core ML Vision Custom

Open the Core ML Vision Custom project in the Project navigator:

VisualRecognitionV3.framework is red, meaning it’s not there. You’re about to fix that!

Remember how you installed Carthage, at the start of this tutorial? Now you get to run this command:

carthage bootstrap --platform iOS

This takes around five minutes. Cloning swift-sdk takes a while, then downloading swift-sdk.framework takes another while. It should look something like this:

$ carthage bootstrap --platform iOS
*** No Cartfile.resolved found, updating dependencies
*** Fetching swift-sdk
*** Fetching Starscream
*** Fetching common-crypto-spm
*** Fetching zlib-spm
*** Checking out zlib-spm at "1.1.0"
*** Checking out Starscream at "3.0.4"
*** Checking out swift-sdk at "v0.23.1"
*** Checking out common-crypto-spm at "1.1.0"
*** xcodebuild output can be found in /var/folders/5k/0l8zvgnj6095_s00jpv6gxj80000gq/T/carthage-xcodebuild.lkW2sE.log
*** Downloading swift-sdk.framework binary at "v0.23.1"
*** Skipped building common-crypto-spm due to the error:
Dependency "common-crypto-spm" has no shared framework schemes for any of the platforms: iOS

If you believe this to be an error, please file an issue with the maintainers at https://github.com/daltoniam/common-crypto-spm/issues/new
*** Skipped building zlib-spm due to the error:
Dependency "zlib-spm" has no shared framework schemes for any of the platforms: iOS

If you believe this to be an error, please file an issue with the maintainers at https://github.com/daltoniam/zlib-spm/issues/new
*** Building scheme "Starscream" in Starscream.xcodeproj

Look in Finder to see what’s new:

A folder full of frameworks! One for each Watson Service, including the formerly missing VisualRecognitionV3.framework. And sure enough, there it is in the Project navigator:

Note: IBM recommends that you regularly download updates of the SDK so you stay in sync with any updates to this project.

6. Build, Run, Test

The moment of truth!

Select the Core ML Vision Custom scheme, then build and run, on an iOS device if possible. You’ll need to take photos of your cables to test the model, and it’s easier to feed these to the app if it’s running on the same device.

Note: To run the app on your device, open the target and, in the Bundle Identifier, replace com.ibm.watson.developer-cloud with something unique to you. Then in the Signing section, select a Team.

The app first compiles the model, which takes a little while:

Then it tells you the ID of the current model:

Note: If you get an error message about the model, tap the reload button to try again.

Tap the camera icon to add a test photo. The app then displays the model’s classification of the image:

The model isn’t always right: it kept insisting that my Thunderbolt cable was a USB, no matter what angle I took the photo from.

Note: I couldn’t see any obvious reason why you must add the apiKey and classifierId before you build the Watson Swift SDK, so I tried doing it the other way around. I downloaded a fresh copy of the sample code, and ran the carthage command in its Core ML Vision Custom directory: the output of the command looks the same as above, and the Carthage folder contents look the same. Then I added the apiKey and classifierId to the app, and built and ran it: the app didn’t download the model. Breakpoints in viewDidLoad() or viewWillAppear(_:) don’t fire! The app loads, you add a photo of a Thunderbolt cable, and it classifies it as a hoe handle or cap opener — it’s using the basic visual recognition model.

TL;DR: Follow the instructions in the order given!

Show Me the Code!

So the sample app works. Now what code do you need to include in your apps, to use your models?

Actually, IBM presents all the code very clearly in the Visual Recognition section of their Watson Swift SDK GitHub repository README. There’s no Core ML code! The VisualRecognition framework wraps the Core ML model, which itself wraps the Watson visual recognition model!

The only thing I’ll add is this note:

Note: To automatically download the latest model, check for updates to the model by calling the VisualRecognition method updateLocalModel(classifierID:failure:success:) in viewDidLoad() or viewDidAppear(_:). It won’t download a model from Watson unless that model is a newer version of the local model.
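
Here’s a minimal sketch of that call, assuming the v0.23-era SDK used in this tutorial; the exact signature may differ in later releases:

// Inside your view controller; apiKey, version and classifierId are the
// properties you edited earlier.
override func viewDidAppear(_ animated: Bool) {
    super.viewDidAppear(animated)
    let visualRecognition = VisualRecognition(apiKey: apiKey, version: version)
    // Downloads a new Core ML model only if Watson has a newer version
    // than the one already cached on the device.
    visualRecognition.updateLocalModel(
        classifierID: classifierId,
        failure: { error in print("Model update failed: \(error)") },
        success: { print("Local model is up to date.") })
}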

Updating the Model

What I wanted to do in this section is show you how to implement continuous learning in your app. It’s not covered in the GitHub example, and I haven’t gotten definitive answers to my questions. I’ll tell you as much as I know, or have guessed.

Directly From the App

You can send new data sets of positive and negative examples to your Watson project directly from your app using this VisualRecognition method:

public func updateClassifier(
    classifierID: String,
    positiveExamples: [VisualRecognitionV3.PositiveExample]? = default,
    negativeExamples: URL? = default,
    failure: ((Error) -> Swift.Void)? = default,
    success: @escaping (VisualRecognitionV3.Classifier) -> Swift.Void)
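
A hypothetical call might look like this; hdmiZipURL and notCablesZipURL are placeholder file URLs, not part of IBM’s sample:

// hdmiZipURL and notCablesZipURL are hypothetical file URLs pointing at
// zips of new training images collected from users.
let newHDMIExamples = PositiveExample(name: "hdmi", examples: hdmiZipURL)

visualRecognition.updateClassifier(
    classifierID: classifierId,
    positiveExamples: [newHDMIExamples],
    negativeExamples: notCablesZipURL,
    failure: { error in print("Retraining failed: \(error)") },
    success: { classifier in print("Retraining started for \(classifier.classifierID)") })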

The documentation for non-Swift versions of this describes the parameters, but also announces:

Important: You can’t update a custom classifier with an API key for a Lite plan. To update a custom classifier on a Lite plan, create another service instance on a Standard plan and re-create your custom classifier.

To be on a Standard plan, you must hand over your credit card, and pay between US$0.002 and US$0.10 for each tagging, detection or training event.

Using Moderated User Feedback

But you shouldn’t send data directly from the app to the model unless you’re sure of the data’s correctness. Best practice for machine learning is to preprocess training data, to “fix or remove data that is incorrect, incomplete, improperly formatted, or duplicated” — you know: garbage in, garbage out! So the idea of feeding uninspected data to your model is anathema.

Instead, you should enable the Data Preparation tool in the Tools section of your project’s Settings page:

Then your app should send positive and negative examples to a storage location, which you connect to your project as a data source. Back in Watson, you (or your ML experts) use the Data Refinery tool to cleanse the new data, before using it to train your model.

This information is from the IBM Watson Refine data documentation.

Watson Services

Curious about what else is available in Watson Services? Let’s take a look!

Find your Visual Recognition: watson_vision_combined-dsx page:

Command-click Watson Services to open this in a new tab:

Click Add Service to see the list:

Click a service’s Add link to see what’s available and for how much. At this early stage, though, Watson generates Core ML models only for Visual Recognition. For the other services, your app must send requests to a model running on the Watson server. The Watson Swift SDK GitHub repository README contains sample code to do this.

Note: Remember, you can get back to your project page by clicking the IBM Watson home button in the upper left corner, then scrolling down to Watson Services to find its link.

Where to Go From Here?

There’s no finished project for you to download from here since it’s all IBM’s code and you can reproduce it yourself.

Resources

Further Reading

I hope you enjoyed this introduction to IBM Watson Services for Core ML. Please join the discussion below if you have any questions or comments.

The post IBM Watson Services for Core ML Tutorial appeared first on Ray Wenderlich.
