Want to add a Siri-like “conversational interface” to your mobile app or device? Then api.ai, the Palo Alto-headquartered startup and Russian team behind Speaktoit Assistant, a Siri alternative for Android, iOS and Windows Phone, has had you covered for a while.
But now the company has refined its offering to make it much easier for developers in the Internet of Things (IoT) space, such as the smart home and wearables, to use its technology to enhance their offerings.
Originally launched last September, api.ai effectively opened up the AI and natural language tech that powered Speaktoit so that developers could add conversational interfaces to their apps. And although some 5,000 developers signed up to the platform, the feedback the api.ai team received told it two things.
Firstly, there was a lot of interest not just from mobile app developers but also from the IoT space, namely the smart home and wearables such as smart watches, areas the company had always planned on targeting.
Secondly, for many developers the platform required too much upfront work; despite the huge amount of heavy lifting the company’s machine learning-based tech already does, developers were craving more out-of-the-box examples they could easily plug into.
As a result, the api.ai team has gone to work making its conversational UI far more context-aware by adding what the startup calls “pre-defined domains”, including ones for various IoT categories. This means the platform knows ahead of time which domain any defined entities and intents apply to.
So, for example, if a developer wanted to add voice recognition to control a smart lighting system, api.ai would already know they are working within the smart home domain and could tap into its existing AI library for that domain.
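The payoff of a pre-defined domain is that recognition is scoped to intents the platform already knows. As a rough, hypothetical sketch (the names and structures here are illustrative, not api.ai's actual API), domain scoping might look like this:

```python
# Hypothetical sketch: scoping intent matching to a pre-defined domain.
# Names and structures are illustrative only, not api.ai's actual API.

# Each domain ships with intents the platform already understands.
DOMAIN_INTENTS = {
    "smart_home": {
        "lights.on":  ["turn on the light", "lights on", "switch on the lamp"],
        "lights.off": ["turn off the light", "lights off"],
    },
    "weather": {
        "forecast.get": ["what's the weather", "will it rain"],
    },
}

def match_intent(utterance, domain):
    """Return the best-matching intent within a single domain."""
    utterance = utterance.lower()
    best, best_score = None, 0
    for intent, examples in DOMAIN_INTENTS[domain].items():
        for example in examples:
            # Crude overlap score: words shared between utterance and example.
            score = len(set(utterance.split()) & set(example.split()))
            if score > best_score:
                best, best_score = intent, score
    return best

print(match_intent("please turn on the light", "smart_home"))  # lights.on
```

Because the matcher only ever considers the smart-home intents, an ambiguous utterance never collides with, say, the weather domain.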
“Developers can now start from something right out of the box,” co-founder and CEO Ilya Gelfenbeyn tells me. “They can use domains like news, weather or smart home, and so on.”
Developers can also describe their own interactions and scenarios by simply providing a few examples based on the device’s capability, and the api.ai platform will use these to seed a more fully developed conversational UI.
“What our system will do is train itself based on these examples, by finding some common semantic units, to enable it to understand further examples that were not covered by the developer,” he explains.
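One way to picture that self-training, purely as a sketch and not api.ai's actual algorithm, is to generalise the developer's literal examples into templates with entity slots, so phrasings the developer never wrote still resolve:

```python
# Illustrative sketch only: generalising from a handful of developer-supplied
# examples by replacing known entity values with a slot, so unseen utterances
# still match. This is not api.ai's actual training algorithm.
import re

DEVICES = ["light", "thermostat", "kettle"]  # entity values the device exposes

def make_templates(examples):
    """Turn literal examples into regex templates with a {device} slot."""
    templates = []
    for ex in examples:
        pattern = re.escape(ex.lower())
        for device in DEVICES:
            pattern = pattern.replace(re.escape(device), r"(?P<device>\w+)")
        templates.append(re.compile(pattern))
    return templates

# The developer seeds just two examples...
templates = make_templates(["turn on the light", "switch off the thermostat"])

def parse(utterance):
    for t in templates:
        m = t.search(utterance.lower())
        if m:
            return m.group("device")
    return None

# ...and the system now also handles a device never named in the seeds.
print(parse("turn on the kettle"))  # kettle
```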
Here’s an example provided by api.ai of how that might pan out in practice:
Person: It’s very dark here.
Smart Home: Let’s turn on the light then.
Person: Turn it to romantic mode.
Smart Home: Ooh, I see. Here it is.
Person: Still too bright.
Smart Home: Taking it to the minimum.
Person: Same in the kitchen.
Smart Home: Lights in the kitchen are on.
Person: Turn on the heating there as well.
Smart Home: Thermostat is on for the kitchen only.
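Notice how turns like “Same in the kitchen” reuse the previous intent while overriding only the location. A minimal sketch of that kind of context carry-over (hypothetical, not api.ai's implementation) might keep the last resolved intent in the session:

```python
# Hypothetical sketch of conversational context: a follow-up such as
# "same in the kitchen" inherits the previous intent and overrides only
# the slots it mentions. Not api.ai's actual implementation.

ROOMS = ["kitchen", "bedroom", "living room"]

class Session:
    def __init__(self):
        self.last = None  # most recently resolved intent + slots

    def handle(self, utterance):
        utterance = utterance.lower()
        room = next((r for r in ROOMS if r in utterance), None)
        if "light" in utterance:
            intent = {"intent": "lights.on", "room": room or "here"}
        elif "same" in utterance and self.last:
            # Carry over the previous intent, overriding only the room slot.
            intent = dict(self.last, room=room or self.last["room"])
        else:
            return None
        self.last = intent
        return intent

s = Session()
s.handle("turn on the light")           # {'intent': 'lights.on', 'room': 'here'}
print(s.handle("same in the kitchen"))  # {'intent': 'lights.on', 'room': 'kitchen'}
```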