Google I/O 2018 Keynote Reaction

Google I/O 2018 began this week at the Shoreline Amphitheater in Mountain View, California. The I/O conference is Google’s annual opportunity to set a direction for the developer community as well as share with us the technologies and development tools they’ve been working on in the past year. The conference features presentations on Google products and services such as Google Assistant, Google apps like Google Maps and Google News, Chrome and ChromeOS, Augmented Reality, and of course, lots of Android. :]

The conference starts each year with two keynote presentations, the first a feature-focused presentation led by company CEO Sundar Pichai, and the second a developer-focused keynote. One of the first sessions after the two keynotes is What’s New in Android, often called “the Android Keynote”.

  • The opening keynote focused primarily on Google’s Artificial Intelligence (AI) and Machine Learning (ML) advancements, and had recurring themes of responsibility and saving time. The Google Assistant was one of the main technologies discussed, and another recurring theme was using the Assistant to improve your Digital Wellbeing.
  • The Developer Keynote started with a review of new Android features such as App Bundles and Android Jetpack. It then moved on to developer-oriented discussions of Google Assistant, Web apps and running Linux on ChromeOS, an expansion of Material Design called Material Theming, and new Firebase and AR advancements.
  • The What’s New in Android session gave a brief introduction to each of the topics that were being announced or covered at the conference for Android, and along the way pointed you to the sessions you need to see to learn more.

The most exciting announcements from the keynotes were:

  • Google Duplex: Google demoed the Google Assistant literally making a phone call for you. Google said that they’re “still working” to perfect this capability, but the sample calls they played were jaw-dropping in their naturalness and possibilities. Google is planning on simple use cases in the near future. One use case I could imagine: the Assistant calls a number, navigates the automated system, and stays on hold for you, then notifies you when the person on the other end is ready while telling them you’ll be right back.
  • Computer Vision and Google Lens: A pretty sweet AR demo in Google Maps was shown. The demo overlayed digital content on the real world over your camera feed from within the Maps app, while still showing you directions at the bottom of the screen, making it much easier to find your way in unknown places.
  • Android Jetpack: The Jetpack incorporates a number of Google libraries for Android into one package, including the Support Library and Android Architecture Components. Having them all under one name should simplify discoverability of the features and encourage more developers to use them in their apps.
  • MLKit: MLKit is a Firebase-hosted library that makes it easier to incorporate Google’s advanced ML into your apps, including text recognition and image labeling. There was a pretty sweet demo of grabbing the name of an item off a menu and then searching for a description of it. And it’s available for both iOS and Android. MLKit, CoreML, ARCore, ARKit: hey, what’s in a name? :]
  • App Actions and Slices: These will increase engagement with your app by helping you embed pieces of the app into other parts of Android like Search and Google Assistant results. The options go far beyond a simple icon for your app on the system share sheet.
  • ARCore and Sceneform: The original ARCore API required either using a framework like Unity or working with lower level OpenGL code. Sceneform promises to make it easier to code AR interactions into your apps.
  • New Voices for Google Assistant: ML training has advanced to the point that less work is required to incorporate new voices, and Google is working with John Legend to create an Assistant voice based on his. In the future, you may be able to use your own voice or select from popular celebrity voices. I’d love to have a Google Assistant voice from James Earl Jones! :]

The rest of this post summarizes the three keynotes, in case you haven’t had a chance to watch them. At the bottom of the post are links to the actual keynote videos on the Google Developers YouTube channel, and I encourage you to watch them for yourself. Then dive into the session videos on YouTube once they’re available.

Opening Keynote

The keynote began with a video of little multi-colored cube creatures with some type of glow inside them. Kind of like intelligent building blocks. The video ended with the banner “Make good things together”.

Google CEO Sundar Pichai then took the stage and announced that there were over 7,000 attendees in person plus a live stream, and a lot to cover. He joked about a “major bug” in a key product: getting the cheese wrong in the cheeseburger emoji and the foam wrong in the beer emoji. :]

He then discussed the recurring Google theme of AI being an important inflection point in computing. He said that the conference would discuss the impact of AI advances, and that these advances would have to be navigated “carefully and deliberately”.

AI

The AI portion of the keynote started by reviewing some key fields in which Google has made advancements:

  • In healthcare, not only can retina images be used to diagnose diabetic retinopathy in developing countries, but the same eye images can also non-invasively predict cardiovascular risk. AI can now also predict medical events such as a patient’s chance of readmission. These possibilities are just scratching the surface of how big data can improve the medical industry.
  • Sundar showed two impressive demos of using AI to improve accessibility. The first showed how AI can now disambiguate voices, so closed captioning can help those with hearing impairments even when people talk over each other. The second used AI to add new input methods like Morse code to Gboard, the Google keyboard, helping those who need alternative ways to communicate.
  • Gmail has been redesigned with an AI-based feature called Smart Compose, which uses ML to suggest phrases as you write; you hit Tab to accept a suggestion and keep going. The short demo in the presentation was pretty impressive, with Gmail figuring out what you want to write next as you type.
  • Google Photos was built from the ground up with AI, and over 5 billion photos are viewed by users every day. It has a new feature, Suggested Actions, which offers smart, context-aware actions for a photo, things like “Share with Lauren”, “Fix brightness”, “Fix document” (converting it to a PDF), “Color pop”, and “Colorize” for black-and-white photos. All in all, a very practical example of combining computer vision and AI.

Google has also been investing in scale and hardware for AI and ML, introducing TPU 3.0: liquid cooling in the data centers and giant pods that achieve 100 petaflops, 8x last year’s performance, allowing for larger and more accurate models.

These AI advancements, especially in healthcare and accessibility, clearly demonstrate Google taking its AI responsibility seriously. And the features added to Gmail and Google Photos are two simple examples of using AI to save time.

Google Assistant

Google wants the Assistant to be natural and comfortable to talk to. Using the DeepMind WaveNet technology, they’re adding 6 new voices to Google Assistant. WaveNet shortens studio time needed for voice recording and the new models still capture the richness of a voice.

Scott Huffman came on stage to discuss Assistant being on 500M devices, with 40 auto brands and 5000 device manufacturers. Soon it will be in 30 languages and 80 countries. Scott discussed needing the Assistant to be naturally conversational and visually assistive and that it needs to understand social dynamics. He introduced Continued Conversation and Multiple Actions (called coordination reduction in linguistics) as features for the voice Assistant. He also discussed family improvements, introducing Pretty Please, which helps keep kids from being rude in their requests to the Assistant. Assistant responds to positive conversation with polite reinforcement.

Lillian Rincon then came on to discuss Smart Displays. She showed watching YouTube and following recipes by voice on the smart display devices. They’ll also have video calling, connect to smart home devices, and give access to Google Maps. Lillian then reviewed a reimagined Assistant experience on phones, which can now have a rich and immersive response to requests. These include smart home device requests with controls like adjusting temperature, and things like “order my usual from Starbucks”. There are many partners for food pick-up and delivery via Google Assistant. The Assistant can also be swiped up to get a visual representation of your day, including reminders, notes, and lists. And in Google Maps, you can use voice to send your ETA to a recipient.

Google Duplex

Sundar came back on stage to discuss using Google Assistant to connect users to businesses “in a good way”. He noted that 60% of small businesses in the US do not have an online booking system. He then gave a pretty amazing demo of Google Assistant making a call for you in the background for an appointment such as a haircut. On a successful call, you get a notification that the appointment was successfully scheduled. Other examples are restaurant reservations and making a doctor appointment while caring for a sick child. Incredible!

The calls don’t always go as expected, and Google is still developing the technology. They want to “handle the interaction gracefully.” One thing they will do in the coming weeks is have Google make such calls on its own to do things like update holiday hours for a business, which will immediately help all customers with improved information.

Digital Wellbeing

At this point the keynote introduced the idea of Digital Wellbeing, which is Google turning their attention to keeping your digital life from making too negative an impact on your physical life. The principles are:

  • Understand your habits
  • Focus on what matters
  • Switch off and wind down
  • Find balance for your family

A good example is getting a reminder on your devices to do things like take a break from YouTube. Another is an Android P feature called Android Dashboard, which gives you full visibility into how you are spending your time on your device.

Google News

Trystan Upstill came on stage to announce a number of new features for the Google News platform, and the focus was on:

  • Keeping up with the news you care about
  • Understanding the full story
  • Enjoying and supporting the news sources you love

Reinforcement learning is used throughout the News app. Newscasts in the app are kind of like a preview of a story. There’s a Full Coverage button, an invitation to learn more from multiple sources and formats. Publishers are front and center throughout the app, and there’s a Subscribe with Google feature, a collaboration with over 60 publishers that lets you subscribe to their news across platforms all through Google. Pretty cool!

What’s going on with Android?

Dave Burke then came on stage to discuss Android P and how it’s an important first step for putting AI and ML at the core of the Android OS.

The ML features being brought to Android P are:

  • Adaptive Battery: using ML to optimize battery life by figuring out which apps you’re likely to use.
  • Adaptive Brightness: improving auto-brightness using ML.
  • App Actions: predicting actions you may wish to take depending on things like whether your headphones are plugged in.
  • Slices: interactive snippets of app UI, laying the groundwork for surfacing app content in Search and the Google Assistant (see the sketch after this list).
  • MLKit: a new set of APIs available through Firebase that include: image labeling, text recognition, face detection, barcode scanning, landmark recognition, and smart reply. MLKit is cross-platform on both Android and iOS.
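
Since Slices are one of the more developer-facing ideas here, here is a rough sketch of what a Slice provider can look like, built with the androidx.slice builder libraries. Treat it as a sketch rather than an official sample: HelloSliceProvider, MainActivity, and the launcher icon are placeholder names, and the exact builder signatures shifted a bit across the early preview releases.

```kotlin
import android.app.PendingIntent
import android.content.Intent
import android.net.Uri
import androidx.core.graphics.drawable.IconCompat
import androidx.slice.Slice
import androidx.slice.SliceProvider
import androidx.slice.builders.ListBuilder
import androidx.slice.builders.SliceAction

// A minimal Slice provider. It must also be registered in the manifest with an
// authority so Search and the Assistant can request the Slice by content URI.
class HelloSliceProvider : SliceProvider() {

    override fun onCreateSliceProvider(): Boolean = true

    override fun onBindSlice(sliceUri: Uri): Slice? {
        val ctx = context ?: return null

        // Tapping the row opens the app; this PendingIntent is the primary action.
        val openApp = SliceAction.create(
            PendingIntent.getActivity(ctx, 0, Intent(ctx, MainActivity::class.java), 0),
            IconCompat.createWithResource(ctx, R.mipmap.ic_launcher),
            ListBuilder.ICON_IMAGE,
            "Open app"
        )

        // One simple row that Search or the Assistant can render inline.
        return ListBuilder(ctx, sliceUri, ListBuilder.INFINITY)
            .addRow(
                ListBuilder.RowBuilder()
                    .setTitle("Hello from a Slice")
                    .setSubtitle("Served from the app via a content URI")
                    .setPrimaryAction(openApp)
            )
            .build()
    }
}
```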

Dave then introduced new gesture-based navigation and the new recent app UI in Android P, and new controls like the volume control.

Sameer Samat came on to discuss in more detail how Android fits into the idea of Digital Wellbeing. The new Android Dashboard helps you understand your habits: you can drill down within the dashboard to see what you’re doing, when, and how often. There’s an App Timer for setting limits, plus Do Not Disturb improvements like the new Shush mode: turn your phone face down on a table and you hear no sounds or vibrations except from Starred Contacts. There’s also a Wind Down mode with Google Assistant that puts your phone in grayscale to help ease you into a restful sleep.

Lastly, an Android P beta was announced, available today for Pixel phones and devices from seven other manufacturers. Many of the new Android P features aim to keep your phone usage meaningful and useful without letting it take over your entire life.

Google Maps

Jen Fitzpatrick gave demos of the new For You feature in Google Maps, which uses ML to surface trending events around you, along with a match score that tells you how well a suggestion fits your interests.

Aparna Chennapragada then gave a pretty cool demo of combining the device camera and computer vision to reimagine navigation by showing digital content as AR overlays on the real world. You can instantly know where you are while still seeing the map and staying oriented. GPS alone isn’t enough for this, so it’s augmented with a Visual Positioning System (VPS). She also showed new Google Lens features that are integrated right inside the camera app on many devices:

  • Smart Text Selection: Recognize and understand words, then copy and paste text from the real world into your phone.
  • Style Match: Give me things like this.
  • Real-time Results: Both on-device and cloud compute.

Self-Driving Cars

The opening keynote wrapped up with a presentation by Waymo CEO John Krafcik. He discussed an Early Rider program taking place in Phoenix, AZ.

Dmitri Dolgov from Waymo then discussed how self-driving car ML touches Perception, Prediction, Decision-making, and Mapping. He discussed having trained for 6M miles driven on public roads and 5B miles in simulation. He noted that Waymo uses TensorFlow and Google TPUs, with learning 15x more efficient with TPUs. They’ve now moved to using simulations to train self-driving cars in difficult weather like snow.

Developer Keynote

The Developer Keynote shifts the conference from a consumer and product focus towards a discussion of how developers will create new applications using all the new technologies from Google. It’s a great event to get a sense for which of the new tools will be discussed at the conference.

Jason Titus took the stage to start the Developer Keynote. He first gave a shoutout to all the GDGs and GDEs around the world. He mentioned that one key goal for the Google developer support team is to make Google AI technology available to everyone, for example by letting you drop TensorFlow models right into your apps.

Android

Stephanie Cuthbertson then came up to detail all the latest and greatest on developing for Android. The Android developer community is growing, with the number of developers using the Android IDE almost tripling in two years. She emphasized that developer feedback drives the new features, like Kotlin support last year; 35% of pro developers are now using Kotlin, and Google is committed to Kotlin for the long term. Stephanie walked through the current focuses:

  • Innovative distribution with Android App Bundles, which optimize your application size for 99% of devices and require almost no work from developers.
  • Faster development with Android Jetpack, which includes Architecture, UI, Foundation, and Behavior components (see more below in “What’s New in Android”), with new features including WorkManager for deferrable asynchronous tasks and the Navigation Editor for visualizing app navigation flow. A WorkManager sketch follows this list.
  • Increased engagement with App Actions and Slices, interactive mini-snippets of your app.
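
To give a flavor of the WorkManager piece of Jetpack mentioned above, here is a minimal sketch of scheduling a deferrable background task. UploadWorker and scheduleUpload are made-up names for the example, and the calls follow the stable androidx.work API rather than the alpha demoed at I/O, so the details may differ slightly from what was shown on stage.

```kotlin
import android.content.Context
import androidx.work.Constraints
import androidx.work.NetworkType
import androidx.work.OneTimeWorkRequestBuilder
import androidx.work.WorkManager
import androidx.work.Worker
import androidx.work.WorkerParameters

// A worker that might upload cached analytics events; the actual work is a placeholder.
class UploadWorker(context: Context, params: WorkerParameters) : Worker(context, params) {
    override fun doWork(): Result {
        // ... perform the upload here ...
        return Result.success()
    }
}

// Enqueue the work so it runs once an unmetered network is available,
// even if the app process has been killed in the meantime.
fun scheduleUpload(context: Context) {
    val constraints = Constraints.Builder()
        .setRequiredNetworkType(NetworkType.UNMETERED)
        .build()
    val request = OneTimeWorkRequestBuilder<UploadWorker>()
        .setConstraints(constraints)
        .build()
    WorkManager.getInstance(context).enqueue(request)
}
```

The appeal is that the work survives process death and only runs once its constraints are met, without you juggling JobScheduler, AlarmManager, or services yourself.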

Stephanie then mentioned that Android Things is now 1.0 for commercial devices, and that attendees would be receiving an Android Things developer kit!

Google Assistant

Brad Abrams discussed Google Assistant Actions. There are over 1M Actions available across many categories of devices. He described a new era of conversational computing and mentioned Dialogflow, a tool for building natural, rich conversational experiences. He said you can think of an Assistant Action as a companion experience to the main features of your app.

Web and Chrome

Tal Oppenheimer came on stage to discuss the Web platform and new features in ChromeOS. She emphasized that Google’s focus is to make the platform more powerful, but at the same time make web development easier. She discussed Google’s push on Progressive Web Apps (PWAs) that have reliable performance, push notifications, and can be added to the home screen. She discussed other Web technologies like Service Worker, WebAssembly, Lighthouse 3.0, and AMP. Tal then wrapped up by announcing that ChromeOS is gaining the ability to run full Linux desktop apps, which will eventually also include Android Studio. So ChromeOS will be a one-stop platform for consuming and developing both Web and Android apps. Sweet!

Material Theming

There was a lot of discussion prior to I/O about a potential Material Design 2.0. The final name is Material Theming, as presented by Rich Fulcher. Material Theming adds flexibility to Material Design, allowing you to distinguish your brand and provide customized experiences. You can create a unified and adaptable design system for your app, applying color, typography, and shape consistently across your products.

There’s a new redline viewer for dimensions, padding, and hex color values as part of two new tools:

  • Material Theme editor, a plugin for Sketch.
  • Material Gallery, with which you can review and comment on design iterations.

There are also now the open source Material Components for Android, iOS, Web, and Flutter, all with Material Theming.

Progress in AI

Jia Li came on to give more developer announcements related to AI. She discussed TPU 3.0 and Google’s ongoing commitment to AI hardware. She walked through Cloud Text-to-Speech, DeepMind WaveNet, and Dialogflow Enterprise Edition. She discussed TensorFlow.js for the web and TensorFlow Lite for mobile and Raspberry Pi. She finished up by giving more information on two new offerings:

  • Cloud AutoML, which automates the creation of ML models, for example to recognize images unique to your application, without writing any code.
  • MLKit, the SDK that provides Google ML to mobile developers through Firebase, including text recognition and smart reply.

Firebase

Francis Ma discussed the Firebase goals of helping mobile developers solve key problems across the lifecycle of an app: build better apps, improve app quality, and grow your business. He mentioned that there are 1.2M active Firebase apps every month. He discussed the following Firebase technologies:

  • Fabric + Firebase. Google has brought Crashlytics into Firebase and integrated it with Google Analytics. Firebase is not just a platform for app infrastructure, but also lets you understand and improve your app.
  • MLKit for text recognition, image labeling, face detection, barcode scanning, and landmark recognition.

He mentioned that the ML technology works both on device and in the cloud, and that you can bring in custom TensorFlow models too. You upload a model to Google’s cloud infrastructure and can then update it without redeploying your entire app.
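
To give a sense of how little code MLKit asks for, here is a sketch of on-device text recognition with the Firebase ML Vision library, roughly the kind of call behind the menu demo. The class names follow the firebase-ml-vision API as it later stabilized, so they may not exactly match the I/O-era preview.

```kotlin
import android.graphics.Bitmap
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage

// Run on-device text recognition on a bitmap (for example, a photo of a menu).
fun recognizeText(bitmap: Bitmap) {
    val image = FirebaseVisionImage.fromBitmap(bitmap)
    val recognizer = FirebaseVision.getInstance().onDeviceTextRecognizer

    recognizer.processImage(image)
        .addOnSuccessListener { result ->
            // Each text block contains lines and elements with bounding boxes.
            result.textBlocks.forEach { block ->
                println(block.text)
            }
        }
        .addOnFailureListener { error ->
            println("Text recognition failed: $error")
        }
}
```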

ARCore

Nathan Martz came on to discuss ARCore, which launched as 1.0 three months ago. There are amazing apps already, like one that builds a floor plan as you walk around a home. He announced a major update today, with three incredible new features:

  • Sceneform, which makes it easy to create AR applications or add AR to apps you’ve already built. There’s a Sceneform SDK, an expressive API with a powerful renderer and seamless support for 3D assets. See the sketch after this list.
  • Augmented Images, which allow you to attach AR content and experiences to physical content in the real world. You can compute 3D position in real time.
  • Cloud Anchors for ARCore, where multiple devices create a shared understanding of the world. Available on both Android and iOS.
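
Here is a rough sketch of the kind of code Sceneform enables: placing a 3D model on a tapped plane without writing any OpenGL. It assumes an ArFragment is already in the layout and that a “model.sfb” asset exists; both are placeholders, so treat this as illustrative rather than a complete sample.

```kotlin
import android.net.Uri
import com.google.ar.sceneform.AnchorNode
import com.google.ar.sceneform.rendering.ModelRenderable
import com.google.ar.sceneform.ux.ArFragment
import com.google.ar.sceneform.ux.TransformableNode

// Place a 3D model wherever the user taps a detected plane.
// arFragment comes from the layout; "model.sfb" is a placeholder asset name.
fun setUpPlacement(arFragment: ArFragment) {
    arFragment.setOnTapArPlaneListener { hitResult, _, _ ->
        ModelRenderable.builder()
            .setSource(arFragment.requireContext(), Uri.parse("model.sfb"))
            .build()
            .thenAccept { renderable ->
                // Anchor the model where the user tapped.
                val anchorNode = AnchorNode(hitResult.createAnchor())
                anchorNode.setParent(arFragment.arSceneView.scene)

                // A TransformableNode lets the user move, scale, and rotate the model.
                val node = TransformableNode(arFragment.transformationSystem)
                node.setParent(anchorNode)
                node.renderable = renderable
                node.select()
            }
    }
}
```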

What’s New in Android

As is tradition, the What’s New in Android session was run by Chet Haase, Dan Sandler, and Romain Guy. They describe the session as the “Android Keynote”. In the session, they summarized the long list of new features in Android and directed you to the sessions in which you can learn more.

The long list of new features in tooling and Android P is summarized here:

  • Android App Bundles to reduce app size.
  • Android Jetpack includes Architecture, UI, Foundation, and Behavior components. It’s mainly a repackaging, so you’re already familiar with most of what’s in it, but Google is adding to it and also refactoring the support library into AndroidX. New features are Paging, Navigation, WorkManager, Slices, and Android KTX.
  • Android Test now has first class Kotlin support, with new APIs to reduce boilerplate and increase readability.
  • Battery Improvements include app standby buckets and background restrictions that a user can set.
  • Background Input & Privacy: apps in the background have no access to the microphone or camera.
  • Many Kotlin performance improvements from ART, D8 & R8, increased nullability annotation coverage in the support library and libcore, and easier-to-use platform APIs. Android KTX lets you take advantage of Kotlin language features when calling Android APIs.
  • Mockable Framework, and Mockito can now mock final and static methods.
  • Background Text Measurement, which offloads and pre-computes text measurement on a background thread so there is less work done on the UI thread.
  • Magnifier for text, with an API that supports other use cases as well.
  • Baseline Distance between text views for easier matching with design specs.
  • Smart Linkify to detect custom link entities using ML in the background.
  • Indoor Location using android.net.wifi.rtt.* for WiFi Round-Trip-Time APIs.
  • Accessibility app navigation improvements.
  • Security improvements via a unified biometric dialog, stronger protection for private keys, and a StrongBox backend.
  • Enterprise changes that include switching apps between profiles, locking any app to the device screen, ephemeral users, and a true kiosk mode to hide the navigation bar.
  • Display Cutout, aka the notch, handled using WindowInsets. There are modes for “never”, “default”, and “shortEdges”, with variations. See the sketch after this list.
  • Slices are a new approach to app remote content, either within an app or between apps. They use structured data and flexible templates and are interactive and updatable. They’re addressable by a content URI and backwards-compatible in Android Jetpack all the way back to API 19.
  • App Actions are related to slices and act as deep links into your app. They are “shortcuts with parameters” and act as a “visible Intent”.
  • Notifications have a new messaging style and allow images, stickers, and a smart reply UI.
  • Deprecation Policy has been updated and apps will soon be required to target newer versions of Android, for security and performance. As of August 2018 new apps must target API 26 or above. November 2018 for app updates. And in August 2019, 64-bit ABI will be required.
  • App Compatibility means no more calls to private APIs.
  • NDK r17 includes the Neural Network API, JNI Shared Memory API, Rootless ASAN, and support for UBSAN. It removes support for ARMv5, MIPS, and MIPS64. NDK r18 will remove gcc support, instead you must use clang.
  • Graphics and Media changes include camera API improvements like OIS timestamps and display-based flash, support for external USB cameras, and multi-camera support. There is an ImageDecoder, support for HDR VP9, HDR rendering on compatible hardware, and HEIF support (based on the HEVC/H.265 codec), a container format that can hold multiple images.
  • Vulkan 1.1 has lots of improvements to the graphics API, including multi-GPU support and protected content.
  • Neural Network API 1.1 is a C API for ML and on-device inference. TensorFlow Lite runs on top of it, and it’s hardware-accelerated on the Pixel 2.
  • ARCore additions such as Sceneform.
  • ChromeOS now allows Linux apps and soon Android Studio on ChromeOS, for a full-blown Android development environment.
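
To make the Display Cutout item above more concrete, here is a small sketch of an Activity opting into the “shortEdges” mode and padding its content by the safe insets so nothing important hides behind the notch. The layout and view IDs are placeholders.

```kotlin
import android.app.Activity
import android.os.Build
import android.os.Bundle
import android.view.View
import android.view.WindowManager

class CutoutAwareActivity : Activity() {

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main) // hypothetical layout

        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.P) {
            // Opt in to drawing into the cutout area on the short edges of the screen.
            window.attributes.layoutInDisplayCutoutMode =
                WindowManager.LayoutParams.LAYOUT_IN_DISPLAY_CUTOUT_MODE_SHORT_EDGES

            // Pad the root view by the safe insets so content stays clear of the notch.
            val root = findViewById<View>(R.id.root) // hypothetical view id
            root.setOnApplyWindowInsetsListener { view, insets ->
                insets.displayCutout?.let { cutout ->
                    view.setPadding(
                        cutout.safeInsetLeft, cutout.safeInsetTop,
                        cutout.safeInsetRight, cutout.safeInsetBottom
                    )
                }
                insets
            }
        }
    }
}
```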

Summary

Overall, the keynotes saw Google proudly representing their AI prowess. They are making incredible and futuristic advances while also attempting to ensure that the advances in AI are used in responsible ways for the benefit of all (except maybe certain competitors :] ).

Google is spreading their AI capabilities and expertise across the entire business, and at the same time making it easier for developers to use them in their own apps.

Google AI is clearly ahead of competitors in terms of performance and accuracy. By helping developers integrate the technology into more and more apps, Google and its platforms like Android will maintain their lead and keep bringing these futuristic features to more and more people around the world.

Where to go from here?

There was so much to digest in just these three sessions! You can see them for yourself at these links:

Keynote: https://www.youtube.com/watch?v=ogfYd705cRs

Developer Keynote: https://www.youtube.com/watch?v=flU42CTF3MQ

What’s New in Android: https://www.youtube.com/watch?v=eMHsnvhcf78

There’s a nice short introduction to Android Jetpack here:

Introducing Jetpack: https://www.youtube.com/watch?v=r8U5Rtcr5UU

Finally, you can see the full Google I/O 2018 schedule here, with descriptions of the various sessions that you may want to later check out on YouTube:

https://events.google.com/io/schedule/

What did you think of all the announcements made on the first day of Google I/O 2018? Share your thoughts in the forum below.


