Every year, Apple uses the WWDC (Worldwide Developer Conference) event as an opportunity to provide an update on impending software enhancements to macOS, watchOS, tvOS and iOS. This year's event garnered considerable attention, taking place between the 5th and the 9th of June in San Jose, California. If you're a business with a commercial interest in the mobile app development ecosystem, WWDC is considered a seminal event, and it's worth digesting some of the key takeaways from this year's event in order to understand the impact these announcements will have on your company. The event usually tends to be light when it comes to new hardware, but WWDC 2017 proved to be slightly different, as Apple outlined plans for new Macs and iPads as well as the new HomePod product, which promises to be a significant competitor to the Amazon Echo and Google Home.
One thing is clear based on the output of WWDC 2017: Apple is set to go big on Machine Learning (ML), voice activated search (Siri) and Augmented Reality (AR). Both ML and AR appear to be huge areas of emphasis for Apple moving forward, and this is liable to have profound ramifications for all types of businesses, especially if your business involves a mobile software offering. Here are some of the key aspects of WWDC 2017 that are worth digesting from a commercial perspective:
A number of keywords were consistent throughout WWDC 2017, and Machine Learning was a phrase that cropped up repeatedly. In some ways this proved surprising, as Machine Learning was not mentioned at all in 2016, yet it was a significant highlight of the 2017 event and certainly gave a clear indication of Apple's plans for the future. In particular, Apple discussed the use of 'trained models'.
For the uninitiated, trained models relate to the development of Machine Learning based applications. The process of developing and training an ML model involves creating a learning algorithm and giving it training data to learn from. The expression 'ML model' refers to the model artefact that is created as a result of the training process. So, in the context of WWDC and what this means for businesses, Apple suggested their focus would be on running trained models directly on users' existing iOS devices (rather than running trained models on a server).
This is important because it reinforces Apple's focus on strong user privacy by opting to run the models on user devices. If you're a business interested in using Machine Learning to extend your current software offering, this means that you'll need to consider how to train specific models on a server, before deploying them to Apple devices such as iPhones and iPads. This potentially opens up huge opportunities for businesses with existing mobile apps and for businesses that are planning their first foray into the world of mobile app development.
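To make the train-on-a-server, run-on-the-device idea concrete, here is a deliberately tiny sketch in plain Python. The 'model' is a nearest-centroid classifier over made-up customer data (all names and numbers are hypothetical), and the key point is that the trained artefact is just data that can be serialised and shipped to a device, where prediction then runs locally. In a real iOS project you would train with a proper ML toolkit and convert the artefact for on-device use; this sketch only illustrates the workflow.

```python
import json
from statistics import mean

# Toy training data: feature vectors labelled by customer segment.
# In practice this step would run on a server against your own data.
training_data = {
    "frequent_buyer": [[9.0, 1.0], [8.0, 2.0], [10.0, 1.5]],
    "occasional_buyer": [[2.0, 7.0], [1.0, 8.0], [3.0, 6.5]],
}

def train(data):
    """'Training' here is simply computing one centroid per class."""
    return {
        label: [mean(col) for col in zip(*vectors)]
        for label, vectors in data.items()
    }

def predict(model, x):
    """Classify x by its nearest centroid (squared Euclidean distance)."""
    return min(
        model,
        key=lambda label: sum((a - b) ** 2 for a, b in zip(model[label], x)),
    )

model = train(training_data)

# The 'model artefact' is just data, so it can be serialised and shipped
# to the device, where prediction runs locally with no server round trip.
artefact = json.dumps(model)
deployed_model = json.loads(artefact)

print(predict(deployed_model, [8.5, 1.2]))  # a frequent-buyer-like vector
```

The separation between `train` (server-side) and `predict` (device-side) is the point Apple was making: the expensive learning step happens once, centrally, while the cheap prediction step runs on every user's device.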
This is a huge step for Apple and extremely important for software companies, in that this form of remote training (combined with local execution) can enable businesses to bring real-time machine learning functionality and capability into existing mobile apps. Again, if you're planning on developing your first mobile initiative, it's also worth considering how machine learning can be factored into your plans for the future. Here are some of the features that will be available to developers and businesses interested in incorporating machine learning into their apps:
- Real-time image recognition on the device
- Real-time natural language processing
- Real-time decisioning
- Real-time sentiment analysis
From a consumer perspective this is likely to have a significant impact on the way in which mobile apps are conceived and developed. One example would be an application that utilises sentiment analysis to assess the emotional tone of an email. Users could be notified that an email contains an overly emotional or angry tone before the email has been opened. Similarly, if a user is in the process of composing an email and some of the language is perceived to be unfriendly and liable to incite the wrong reaction, this type of technology can be used to flag a warning that may prevent a damaging email from being sent. Other uses of this technology would include developing more complex and better AI players within a game, or from a business perspective, using trained models to segment and profile existing customers into distinct groups.
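The email-tone idea above can be sketched with a toy lexicon-based scorer. To be clear, a real on-device sentiment model learns its weights from training data rather than using hard-coded word lists; the word lists and threshold below are purely hypothetical, chosen to show the shape of the feature.

```python
# Toy lexicon-based sentiment scorer (hypothetical word lists); a real
# on-device model would learn these weights rather than hard-code them.
ANGRY_WORDS = {"unacceptable", "furious", "ridiculous", "incompetent"}
CALM_WORDS = {"thanks", "appreciate", "regards", "please"}

def tone_score(text):
    """Return a score in [-1, 1]: negative = angry, positive = calm."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    hits = [w for w in words if w in ANGRY_WORDS or w in CALM_WORDS]
    if not hits:
        return 0.0
    angry = sum(w in ANGRY_WORDS for w in hits)
    return (len(hits) - 2 * angry) / len(hits)

def should_warn(text, threshold=-0.5):
    """Flag a draft email whose tone is predominantly angry."""
    return tone_score(text) < threshold

print(should_warn("This is unacceptable and frankly ridiculous!"))  # True
print(should_warn("Thanks, I really appreciate your help."))        # False
```

Because the scoring runs entirely on the device, the draft email never has to leave the user's phone, which is exactly the privacy argument Apple made for local execution.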
If you're interested in learning more about trained models, or how to incorporate them into an existing or new app project, Waracle's recommendation would be to think about the data your organisation currently owns, which trained models could be built from it for machine learning at mobile device level, and how you can collect more data to train each model.
During the WWDC 2017 event Apple was largely focused on the intelligent assistant functionality of Siri, and much less focused on talking about how the voice-interface element of Siri has evolved. This is something we touched upon in last year's Mary Meeker report coverage, in which she suggested that the current accuracy of voice recognition technology is approximately 95%. Mary Meeker highlighted the fact that there is a massive difference between 95% and 99% when it comes to voice activated search: as the technology improves and accuracy increases from 95% to 99%, consumers will go from hardly using the technology at all to using it all the time. Given that Apple was keen to highlight the intelligent assistant aspect of the technology, it perhaps suggests that 99% accuracy is still a few years away.
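The gap between 95% and 99% is easier to appreciate with a little arithmetic. If we make the simplifying assumption that each word in a spoken query is recognised independently, the chance that a whole query comes out error-free falls off quickly with query length:

```python
# Probability that an entire spoken query is transcribed with no errors,
# assuming independent per-word accuracy (a deliberate simplification).
def query_success_rate(per_word_accuracy, words=20):
    return per_word_accuracy ** words

for acc in (0.95, 0.99):
    rate = query_success_rate(acc)
    print(f"{acc:.0%} per-word accuracy -> {rate:.1%} error-free 20-word queries")
```

Under this toy model, 95% per-word accuracy transcribes only about 36% of 20-word queries perfectly, while 99% gets roughly 82% of them right, which goes some way to explaining why a four-point accuracy gain could flip consumers from rarely using voice search to using it constantly.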
With the Amazon Echo and Google Assistant products, mobile app developers can create their own interaction intents and domains. This effectively enables these types of devices and software applications to grow at the speed of the Internet. One of the major problems Apple faces is that it's not as easy for app developers to access core elements of the Siri technology, which in turn is decelerating the development and evolution of the platform. It seems that Apple is either unable or unwilling at this point in time to provide developers with more flexibility when it comes to incorporating voice activated search into future development plans and projects. The Apple HomePod has enormous potential, but without a rich and immersive ecosystem of apps, it's unlikely to generate the traction in the marketplace that Amazon Alexa and Google Assistant have successfully garnered. Apple did mention that it has already enhanced the functionality of the on-device natural language processing that's currently available to app developers. This means that apps developed using this technique can now gather audio from the device microphone, transcribe the audio and derive some actionable insight based on what each user says to the app itself.
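The gather-transcribe-act pipeline described above can be sketched in miniature. The sketch below assumes the transcription step has already been done by the platform's speech framework, and shows only the final "derive actionable insight" step as simple keyword-based intent matching; the intent names and keywords are hypothetical, and a production assistant would use a trained classifier rather than substring checks.

```python
# Hypothetical intent table: phrases an app listens for once the
# platform's on-device speech framework has produced a transcript.
INTENTS = {
    "check_balance": ("balance", "how much"),
    "transfer_money": ("transfer", "send money"),
    "find_branch": ("branch", "nearest"),
}

def classify_intent(transcript):
    """Map a transcript to the first intent whose keyword it contains."""
    text = transcript.lower()
    for intent, keywords in INTENTS.items():
        if any(keyword in text for keyword in keywords):
            return intent
    return "unknown"

print(classify_intent("What's my account balance?"))  # check_balance
print(classify_intent("Send money to Alice please"))  # transfer_money
```

The point of Amazon's and Google's openness is that developers can define tables like `INTENTS` themselves and register them with the assistant; Apple's more restricted approach means fewer Siri domains are available for third parties to extend.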
If you're a business or brand wondering how voice activated search will impact your business, the time to start planning is now. If your business involves search engine optimisation, it's likely that voice activated search will change everything as users shift from performing search queries through typing, to asking the search engine questions in a conversational style. Humans can talk much faster than they can type, which makes voice search the perfect method for discovering what you want, when you want it. Digital marketing is just one of the many business functions that will be completely transformed as voice activated search technologies increase in accuracy.
If you’re considering developing an app right now that utilises voice activated functionality, it’s worth considering Amazon Echo, Google Assistant and Apple HomePod as potential platforms for your app. Whilst Apple need to focus on opening up the technology for developers, there are still many potential opportunities associated with all three technologies and the best fit will depend on the requirements of your project and your perceived target audience.
Apple focused heavily on the launch of its new ARKit Augmented Reality development framework. Apple is so far the biggest company in the world to announce the launch of a native AR platform, and the company was quick to emphasise that, with such a large existing install base of iOS users, ARKit is already the biggest AR platform in the world. Tim Cook has already suggested that AR will play a significant role in the company's future. This fits very neatly with Apple's vision that the iPhone and iPad will always be central to the daily lives of millions of people when it comes to performing a wide multitude of tasks. Introducing AR functionality into their existing user base makes incredible business sense for Apple and serves to enhance their already dominant position in the premium smartphone/tablet market.
Augmented Reality has significant commercial potential for Apple and also has the ability to prove extremely lucrative for brands and businesses. It's estimated that in the next 3 years Apple will generate an astonishing $3 billion from Pokemon Go IAPs (in-app purchases) (Source: Reuters). As device shipments continue to reach peak saturation in developed economies, Apple will seek to drive more revenue via its existing software services division, and AR represents a great opportunity by which to achieve this goal. The launch of ARKit could also symbolise Apple's intention to release a stand-alone HUD (heads-up display) in the future. Apple already has the confidence to demo the ARKit technology in front of a live audience (there were thousands of people at the WWDC event), which suggests they feel the platform, framework and overall technology suite is already relatively mature and sophisticated. The demonstration of ARKit at WWDC featured flying saucers that hovered over the heads of audience members and was deemed to have been an overwhelming success.
Apple provides three different ways to incorporate AR functionality into a mobile app development project. Whilst these techniques look like a more probable fit for games and entertainment purposes, there are some really interesting commercial applications of the technology. As an example, IKEA are already using the technology to demonstrate how furniture will look in a customer’s living room before they commit to buy. This type of insight is important from a commercial perspective as being able to visualise a product prior to purchase in its intended space will help to dramatically accelerate the buying decision process.
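To give a feel for what sits underneath use cases like the IKEA one, here is a minimal sketch of the core placement step any AR framework performs: applying an anchor's 4x4 world transform to a point in the virtual object's local space. This is plain Python rather than the ARKit API itself, and the anchor matrix below is an invented example.

```python
# Applying an anchor's 4x4 world transform to a local point - the core
# step any AR framework performs when placing virtual content in a scene.
# This is a plain-Python stand-in, not the ARKit API itself.
def apply_transform(matrix, point):
    """matrix: 4 rows of 4 floats; point: (x, y, z) in local space."""
    x, y, z = point
    vec = (x, y, z, 1.0)  # homogeneous coordinates
    return tuple(
        sum(m * v for m, v in zip(row, vec)) for row in matrix[:3]
    )

# An invented detected-plane anchor: 0.5 m to the right of the camera
# origin and 1.2 m in front of it (negative z = forward).
anchor = [
    [1.0, 0.0, 0.0, 0.5],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, -1.2],
    [0.0, 0.0, 0.0, 1.0],
]

# A corner of a furniture model, 0.3 m up in its own local space.
print(apply_transform(anchor, (0.0, 0.3, 0.0)))  # (0.5, 0.3, -1.2)
```

Frameworks like ARKit handle the hard part, which is detecting the plane and keeping that anchor transform accurate as the user moves, so the developer's job reduces to attaching content to anchors much as above.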
The recent launch of ARKit represents an enormous commercial tipping point when it comes to mainstream adoption of AR technology. Apple’s entrance into the AR market will generate much wider adoption of the technology by accessing a huge existing base of users. This can also help Apple to sell more devices. If you’re a business interested in AR, the launch of ARKit is significant because it’s never been as accessible or easy to develop an AR app that has the potential to reach hundreds of millions of iOS devices. Not only does this impact how new apps will be developed, but AR elements can now be incorporated into existing mobile apps. From a marketing and software development perspective, ARKit is a game changer as the barriers to entry have never been as low when considering the creation of an AR app.
It's fair to suggest that WWDC 2017 was Apple's most exciting event in a while in terms of announcing game changing new technologies. The announcements relating to machine learning, voice activated search and augmented reality will have a profound impact on your business. It's worth spending some time digesting each of these transformative technologies, understanding how they're likely to impact what you do, and working out how you can potentially harness them to your advantage. If you're a business interested in exploring any of these new technologies in more detail, we'd love to hear from you; contact Waracle today to start the conversation.