To use the Samsung Product API, the following script must be loaded in index.html:
<script type="text/javascript" src="$WEBAPIS/webapis/webapis.js"></script>
Samsung TVs allow developers to use voice commands such as Navigation, Search, Selection and Media Control to control their application. The Voice Assistant on the TV can be Bixby or another assistant. Regardless of which assistant is used, the application interacts with it via the Voice Interaction APIs. For best results, we recommend implementing all the features described in this document.
Since : 6.0
Product : TV
Privilege Level : Public
Privilege : http://developer.samsung.com/privilege/voicecontrol
The application states handled via Voice Interaction. The following values are supported:
enum VoiceApplicationState { "None", "Home", "List", "Player", "Setting", "Search", "Unknown" };
The navigation commands delivered via Voice Interaction.
enum VoiceNavigation { "NAV_PREVIOUS", "NAV_NEXT", "NAV_LEFT", "NAV_RIGHT", "NAV_UP", "NAV_DOWN", "NAV_SHOW_MORE", "NAV_UNKNOWN" };
Defines the fields which can be delivered via voice search.
enum VoiceSearchTermField { "SEARCH_TERM_UTTERANCE", "SEARCH_TERM_TITLE", "SEARCH_TERM_GENRE", "SEARCH_TERM_CAST", "SEARCH_TERM_CONTENT_TYPE", "SEARCH_TERM_RELEASE_DATE_FROM", "SEARCH_TERM_RELEASE_DATE_TO" };
Enum for the media control function mode parameter.
enum MediaFunctionMode { "MEDIA_FUNCTION_ON", "MEDIA_FUNCTION_OFF" };
Enum for the rotate direction parameter.
enum MediaRotateMode { "MEDIA_ROTATE_LEFT", "MEDIA_ROTATE_RIGHT" };
Enum for the zoom mode parameter.
enum MediaZoomMode { "MEDIA_ZOOM_IN", "MEDIA_ZOOM_OUT" };
Enum for the media repeat mode parameter.
enum MediaRepeatMode { "MEDIA_REPEAT_OFF", "MEDIA_REPEAT_ONE", "MEDIA_REPEAT_ALL" };
Enum for the list item parameter.
enum ListItem { "LIST_BOOKMARKS", "LIST_WATCH_LATER", "LIST_UNKNOWN" };
The VoiceInteractionManagerObject interface defines what is instantiated by the WebAPIs object from the Tizen Platform. A webapis.voiceinteraction object allows access to the functionality of the Voice Interaction API. To use the Voice Interaction API, the following privileges must be declared in the manifest file of the application. If they are not declared, all usage of these APIs is blocked, and security exceptions are thrown. To use these privileges, the minimum 'required_version' is 6.0.
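As a minimal sketch, the privilege and version declarations can be placed in config.xml as below; the tizen: namespace prefix and element placement follow the standard Tizen config.xml layout, and a full sample manifest appears later in this document.

```xml
<!-- Declared inside the <widget> element of config.xml -->
<tizen:application id="AbCDEfgHrJ.VoiceInteractionWebSample" package="AbCDEfgHrJ" required_version="6.0"/>
<tizen:privilege name="http://developer.samsung.com/privilege/voicecontrol"/>
```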
[NoInterfaceObject] interface VoiceInteractionManagerObject { readonly attribute VoiceInteractionManager voiceinteraction; };
WebApi implements VoiceInteractionManagerObject;
Every voice interaction application should implement a callback function, so that the Assistant can operate the controls. Your application can interact with the voice assistant by passing the callbacks to webapis.voiceinteraction.setCallback and calling webapis.voiceinteraction.listen. Except for the onupdatestate and onsearchcollection callbacks, each callback returns a boolean flag indicating whether the command is supported. If a callback returns false/undefined, or there is no callback implementation for the utterance, the Samsung device may perform its basic function.
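As a minimal sketch of this contract, the snippet below registers callbacks and starts listening. The stubbed webapis object is an assumption for off-device illustration only; on a TV, the global webapis object is provided by the platform and the stub should be removed.

```javascript
// Stub standing in for the platform-provided webapis object (assumption,
// for running this sketch off-device only).
var webapis = {
  voiceinteraction: {
    callbacks: null,
    setCallback: function (cb) { this.callbacks = cb; },
    listen: function () { /* on-device: starts receiving voice commands */ }
  }
};

webapis.voiceinteraction.setCallback({
  // Called before every utterance so the assistant knows the app state.
  onupdatestate: function () { return "Player"; },
  // Returning true means the application handled the command itself.
  onplay: function () { return true; },
  // Returning false (or omitting the callback) lets the TV fall back to
  // its basic function, e.g. generating a remote control key event.
  onstop: function () { return false; }
});
webapis.voiceinteraction.listen();
```

Note that onupdatestate returns a state string rather than a support flag, matching its special role described above.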
[Callback, NoInterfaceObject] interface VoiceInteractionCallback { optional VoiceApplicationState onupdatestate(); optional boolean onnavigation(VoiceNavigation voiceNavigation); optional boolean onsearch(VoiceSearchTerm voiceSearchTerm); optional boolean onplay(); optional boolean onstop(); optional boolean onpause(); optional boolean onexit(); optional boolean onselection(long voiceSelection); optional boolean ontitleselection(DOMString title); optional boolean onfastforward(); optional boolean onrewind(); optional DOMString onsearchcollection(VoiceSearchTerm voiceSearchTerm); optional boolean onchangeappstate(VoiceApplicationState state); optional boolean onchangeprevioustrack(); optional boolean onchangenexttrack(); optional boolean onrestart(); optional boolean onskipbackward(long offsetSeconds); optional boolean onskipforward(long offsetSeconds); optional boolean onsetplayposition(long position); optional boolean onchangesubtitlemode(MediaFunctionMode mode); optional boolean onchangeshufflemode(MediaFunctionMode mode); optional boolean onchangescreenfitmode(MediaFunctionMode mode); optional boolean onzoom(MediaZoomMode zoom); optional boolean onrotate(MediaRotateMode direction); optional boolean onchange360mode(MediaFunctionMode mode); optional boolean onchangerepeatmode(MediaRepeatMode mode); optional boolean oncontinuewatching(); optional boolean oncustom(DOMString jsonObjectString); optional DOMString onrequestcontentcontext(); optional boolean onadditiontolist(ListItem list); optional boolean onremovalfromlist(ListItem list); };
Developers must report the current application state through the onupdatestate function so that the Voice Assistant can apply the proper voice controls. This function is called right before processing every utterance, so that the Voice Assistant is aware of the application state dynamically. Depending on the state, a different callback function may be called. Therefore, it is important to return the real state of the application.
optional VoiceApplicationState onupdatestate();
Return Value :
VoiceApplicationState value representing the current application state.
Code Example :
webapis.voiceinteraction.setCallback({ onupdatestate : function () { console.log("Assistant tries to get app state"); return "List"; } });
To support navigation controls via voice, the application should implement the following callback. Otherwise, the corresponding remote control key event is generated.
optional boolean onnavigation(VoiceNavigation voiceNavigation);
Parameters :
voiceNavigation : The navigation direction delivered via voice.
webapis.voiceinteraction.setCallback({
    onupdatestate : function () {
        console.log("Assistant tries to get app state");
        return "List";
    },
    onnavigation : function (voiceNavigation) {
        var bSupport = true;
        console.log("onnavigation : " + voiceNavigation);
        switch (voiceNavigation) {
            case "NAV_PREVIOUS": // "Previous page"
                break;
            case "NAV_NEXT": // "Next page"
                break;
            case "NAV_LEFT": // "Go to left"
                break;
            case "NAV_RIGHT": // "Move to right"
                break;
            case "NAV_UP": // "Move up"
                break;
            case "NAV_DOWN": // "Go down"
                break;
            // If there is no callback implementation for these enums, the TV generates
            // Remote Control Key Events: "ArrowRight", "ArrowLeft", "ArrowUp", "ArrowDown".
            case "NAV_SHOW_MORE": // "Show me more"
                break;
            default:
                bSupport = false;
                break;
        }
        return bSupport;
    }
});
For a content search provided by the application, it can receive a search utterance and show the search results. To support search via voice, the application must be registered with the Samsung TV's companion search application and implement the following callback. The onsearch parameter has a specific format; to learn more about it, refer to getDataFromSearchTerm. Some callbacks, such as onsearch, are not called by default. To receive all the callback signals during development, request a key from Samsung and apply the key to the manifest of your Tizen application. The following example shows how to add the "http://developer.samsung.com/tizen/metadata/support-vif-dev-feature" key to the manifest file.
optional boolean onsearch(VoiceSearchTerm voiceSearchTerm);
<?xml version="1.0" encoding="UTF-8"?>
<widget xmlns="http://www.w3.org/ns/widgets" xmlns:tizen="http://tizen.org/ns/widgets" id="http://yourdomain/VoiceInteractionWebSample" version="1.0.0" viewmodes="maximized">
    <access origin="*" subdomains="true"></access>
    <tizen:application id="AbCDEfgHrJ.VoiceInteractionWebSample" package="AbCDEfgHrJ" required_version="6.0"/>
    <content src="index.html"/>
    <feature name="http://tizen.org/feature/screen.size.normal.1080.1920"/>
    <icon src="icon.png"/>
    <tizen:metadata key="http://developer.samsung.com/tizen/metadata/support-vif-dev-feature" value="true"/>
    <name>VoiceInteractionWebSample</name>
    <tizen:privilege name="http://developer.samsung.com/privilege/voicecontrol"/>
    <tizen:privilege name="http://developer.samsung.com/privilege/microphone"/>
    <tizen:profile name="tv-samsung"/>
</widget>
Parameters :
voiceSearchTerm : The object containing the search information delivered via voice.
See : getDataFromSearchTerm
Return Value :
boolean value of whether the app supports this feature. If a callback returns false/undefined, or there is no callback implementation for the utterance, the Samsung device may perform its basic function.
Code Example :
webapis.voiceinteraction.setCallback({
    onupdatestate : function () {
        console.log("Assistant tries to get app state");
        return "Home";
    },
    onsearch : function (voiceSearchTerm) {
        console.log("OnSearch : " + JSON.stringify(voiceSearchTerm));
        var title = webapis.voiceinteraction.getDataFromSearchTerm(voiceSearchTerm, "SEARCH_TERM_TITLE");
        var genre = webapis.voiceinteraction.getDataFromSearchTerm(voiceSearchTerm, "SEARCH_TERM_GENRE");
        console.log("Request to search " + title + ", " + genre);
        return true;
    }
});
Supports the media control to handle playback(Play).
optional boolean onplay();
webapis.voiceinteraction.setCallback({
    onupdatestate : function () {
        console.log("Assistant tries to get app state");
        return "Player";
    },
    onplay : function () {
        // TV Default Action: Generates the Remote Control Key, "MediaPlay".
        console.log("OnPlay called");
        return true;
    }
});
Supports the media control to handle playback(Stop).
optional boolean onstop();
webapis.voiceinteraction.setCallback({
    onupdatestate : function () {
        console.log("Assistant tries to get app state");
        return "Player";
    },
    onstop : function () {
        // TV Default Action: Generates the Remote Control Key, "MediaStop".
        console.log("OnStop called");
        return true;
    }
});
Supports the media control to handle playback(Pause).
optional boolean onpause();
webapis.voiceinteraction.setCallback({
    onupdatestate : function () {
        console.log("Assistant tries to get app state");
        return "Player";
    },
    onpause : function () {
        // TV Default Action: Generates the Remote Control Key, "MediaPlayPause".
        console.log("OnPause called");
        return true;
    }
});
Supports the media control to handle playback(Exit).
optional boolean onexit();
webapis.voiceinteraction.setCallback({
    onupdatestate : function () {
        console.log("Assistant tries to get app state");
        return "Player";
    },
    onexit : function () {
        // TV Default Action: Generates the Remote Control Key, "Exit".
        console.log("OnExit called");
        return true;
    }
});
To support selection controls via voice, the application should implement the following callback, which supports ordinal and relative selection.
optional boolean onselection(long voiceSelection);
webapis.voiceinteraction.setCallback({
    onupdatestate : function () {
        console.log("Assistant tries to get app state");
        return "List";
    },
    onselection : function (voiceSelection) {
        var bSupport = true;
        console.log("onselection : " + voiceSelection);
        switch (voiceSelection) {
            case -1: // "Select the last one"
                break;
            case 0: // "Select this"
                break;
            default:
                if (voiceSelection >= 1) { // "Select the first one"
                    // Select the (voiceSelection)th item; the index of the first item is 1.
                    console.log("For Ordinal : " + voiceSelection);
                } else {
                    bSupport = false;
                }
                break;
        }
        return bSupport;
    }
});
Title selection refers to an utterance in the format "Select {Title Name}". To support title selection via voice, the following two callbacks are required: onrequestcontentcontext and ontitleselection. The onrequestcontentcontext callback provides information about the content on the screen (such as title and position) to the Samsung TV. This method is invoked at the beginning of every utterance. The response consists of positionX (int), positionY (int), title (string), alias (string array) and bFocused (bool). The title property is used for ASR conversion, and the other properties are used to set the priority of each title. The ontitleselection callback receives the title name from the utterance.
optional boolean ontitleselection(DOMString title);
webapis.voiceinteraction.setCallback({ onupdatestate : function () { console.log("Assistant tries to get app state"); return "List"; }, onrequestcontentcontext : function () { console.log("onrequestcontentcontext "); var result = []; try { var item = { "positionX" : 0, "positionY" : 0, "title" : "Title Text1", "alias" : ["Title1", "My Text1"], "bFocused" : true }; result.push(item); var item2 = { "positionX" : 1, "positionY" : 0, "title" : "Title Text2", "alias" : ["Title2", "My Text2"], "bFocused" : false }; result.push(item2); result.push(webapis.voiceinteraction.buildVoiceInteractionContentContextItem(2,0,"Title Text3", ["Title3", "My Text3"], false)); } catch (e) { console.log("exception [" + e.code + "] name: " + e.name + " message: " + e.message); } return webapis.voiceinteraction.buildVoiceInteractionContentContextResponse(result); }, ontitleselection : function (title) { console.log("ontitleselection" + title); return true; } });
Supports the media control to handle playback (Fast Forward).
optional boolean onfastforward();
webapis.voiceinteraction.setCallback({
    onupdatestate : function () {
        console.log("Assistant tries to get app state");
        return "Player";
    },
    onfastforward : function () {
        // TV Default Action: Generates the Remote Control Key, "MediaFastForward".
        console.log("onfastforward called");
        return true;
    }
});
Supports the media control to handle playback(Rewind).
optional boolean onrewind();
webapis.voiceinteraction.setCallback({
    onupdatestate : function () {
        console.log("Assistant tries to get app state");
        return "Player";
    },
    onrewind : function () {
        // TV Default Action: Generates the Remote Control Key, "MediaRewind".
        console.log("onrewind called");
        return true;
    }
});
Supports appending the application's results to the default search application for a search utterance. The return value is the collection data for the search term.
optional DOMString onsearchcollection(VoiceSearchTerm voiceSearchTerm);
webapis.voiceinteraction.setCallback({
    onupdatestate : function () {
        console.log("Assistant tries to get app state");
        return "None";
    },
    onsearchcollection : function (voiceSearchTerm) {
        console.log("onsearchcollection : " + JSON.stringify(voiceSearchTerm));
        var result = [];
        // appName and appId are of the UI Application to get the payload on launch.
        result.push(webapis.voiceinteraction.buildCollectionDeeplinkData("a1b2c3d4ef.myTizenAppId", "myTizenApp", "page_id=123456&method=voice"));
        return JSON.stringify(result);
    }
});
To support the utterance of shortcut commands, the application must have the onchangeappstate callback.
optional boolean onchangeappstate(VoiceApplicationState state);
webapis.voiceinteraction.setCallback({
    onupdatestate : function () {
        console.log("Assistant tries to get app state");
        return "None";
    },
    onchangeappstate : function (state) {
        // TV Default Action: Launches the Samsung TV FirstScreen, Menu or Search application,
        // depending on the input parameter: "Go to Home", "Go to Settings", "Go to Search".
        console.log("onchangeappstate : " + state);
        var bSupport = true;
        switch (state) {
            case "Home":
                // Go to App's Home
                break;
            default:
                bSupport = false;
                break;
        }
        return bSupport;
    }
});
In order to handle requests to move to the previous track via voice, the application should override the onchangeprevioustrack callback.
optional boolean onchangeprevioustrack();
webapis.voiceinteraction.setCallback({ onupdatestate : function () { return "List"; }, onchangeprevioustrack : function () { console.log("onchangeprevioustrack"); return true; } });
In order to handle requests to move to the next track via voice, the application should override the onchangenexttrack callback.
optional boolean onchangenexttrack();
webapis.voiceinteraction.setCallback({ onupdatestate : function () { return "List"; }, onchangenexttrack : function () { console.log("onchangenexttrack"); return true; } });
In order to handle requests to restart the current track via voice, the application should override the onrestart callback.
optional boolean onrestart();
webapis.voiceinteraction.setCallback({ onupdatestate : function () { return "List"; }, onrestart : function () { console.log("onrestart"); return true; } });
In order to handle requests to skip backward via voice, the application should override the onskipbackward callback.
optional boolean onskipbackward(long offsetSeconds);
webapis.voiceinteraction.setCallback({ onupdatestate : function () { return "List"; }, onskipbackward : function (offsetSeconds) { console.log("onskipbackward : " + offsetSeconds); return true; } });
In order to handle requests to skip forward via voice, the application should override the onskipforward callback.
optional boolean onskipforward(long offsetSeconds);
webapis.voiceinteraction.setCallback({ onupdatestate : function () { return "List"; }, onskipforward : function (offsetSeconds) { console.log("onskipforward : " + offsetSeconds); return true; } });
In order to handle requests to set play position via voice, the application should override the onsetplayposition callback.
optional boolean onsetplayposition(long position);
webapis.voiceinteraction.setCallback({ onupdatestate : function () { return "List"; }, onsetplayposition : function (position) { console.log("onsetplayposition : " + position); return true; } });
In order to handle requests to turn the subtitle feature on/off via voice, the application should override the onchangesubtitlemode callback.
optional boolean onchangesubtitlemode(MediaFunctionMode mode);
webapis.voiceinteraction.setCallback({ onupdatestate : function () { return "List"; }, onchangesubtitlemode : function (mode) { console.log("onchangesubtitlemode"); switch(mode) { case "MEDIA_FUNCTION_ON" : console.log("Function ON"); break; default : console.log("Function OFF"); break; } return true; } });
In order to handle requests to turn the shuffle feature on/off via voice, the application should override the onchangeshufflemode callback.
optional boolean onchangeshufflemode(MediaFunctionMode mode);
webapis.voiceinteraction.setCallback({ onupdatestate : function () { return "List"; }, onchangeshufflemode : function (mode) { console.log("onchangeshufflemode"); switch(mode) { case "MEDIA_FUNCTION_ON" : console.log("Function ON"); break; default : console.log("Function OFF"); break; } return true; } });
In order to handle requests to turn the screen fit feature on/off via voice, the application should override the onchangescreenfitmode callback.
optional boolean onchangescreenfitmode(MediaFunctionMode mode);
webapis.voiceinteraction.setCallback({ onupdatestate : function () { return "List"; }, onchangescreenfitmode : function (mode) { console.log("onchangescreenfitmode"); switch(mode) { case "MEDIA_FUNCTION_ON" : console.log("Function ON"); break; default : console.log("Function OFF"); break; } return true; } });
In order to handle requests to zoom in/out via voice, the application should override the onzoom callback.
optional boolean onzoom(MediaZoomMode zoom);
webapis.voiceinteraction.setCallback({ onupdatestate : function () { return "List"; }, onzoom : function (zoom) { console.log("onzoom"); switch(zoom) { case "MEDIA_ZOOM_IN" : console.log("Zoom IN"); break; default : console.log("Zoom OUT"); break; } return true; } });
In order to handle requests to rotate left/right via voice, the application should override the onrotate callback.
optional boolean onrotate(MediaRotateMode direction);
webapis.voiceinteraction.setCallback({ onupdatestate : function () { return "List"; }, onrotate : function (direction) { console.log("onrotate"); switch(direction) { case "MEDIA_ROTATE_LEFT" : console.log("Rotate Left"); break; default : console.log("Rotate Right"); break; } return true; } });
In order to handle requests to turn the 360 feature on/off via voice, the application should override the onchange360mode callback.
optional boolean onchange360mode(MediaFunctionMode mode);
webapis.voiceinteraction.setCallback({ onupdatestate : function () { return "List"; }, onchange360mode : function (mode) { console.log("onchange360mode"); switch(mode) { case "MEDIA_FUNCTION_ON" : console.log("Function ON"); break; default : console.log("Function OFF"); break; } return true; } });
In order to handle requests to change the repeat mode via voice, the application should override the onchangerepeatmode callback.
optional boolean onchangerepeatmode(MediaRepeatMode mode);
webapis.voiceinteraction.setCallback({ onupdatestate : function () { return "List"; }, onchangerepeatmode : function (mode) { console.log("onchangerepeatmode"); switch(mode) { case "MEDIA_REPEAT_ONE" : console.log("ONE"); break; case "MEDIA_REPEAT_ALL" : console.log("ALL"); break; default : console.log("OFF"); break; } return true; } });
In order to handle requests to launch the application and resume playback of previously watched content via voice, the application should override the oncontinuewatching callback. This callback is called after the application is launched.
optional boolean oncontinuewatching();
webapis.voiceinteraction.setCallback({ onupdatestate : function () { return "List"; }, oncontinuewatching : function () { console.log("oncontinuewatching"); return true; } });
Supports the custom voice assistant action.
optional boolean oncustom(DOMString jsonObjectString);
webapis.voiceinteraction.setCallback({ onupdatestate : function () { console.log("Assistant tries to get app state"); return "List"; }, oncustom : function (jsonObjectString) { console.log("oncustom : " + jsonObjectString); try { var customObject = JSON.parse(jsonObjectString); for (var key in customObject) { if(customObject.hasOwnProperty(key)) { console.log("Key : " + key + ", Value : " + customObject[key]); } } } catch (e) { console.log("exception [" + e.code + "] name: " + e.name + " message: " + e.message); return false; } return true; } });
In order to support title selection via voice, the application should override the onrequestcontentcontext callback and return a JSON object string describing the content context list currently shown.
optional DOMString onrequestcontentcontext();
webapis.voiceinteraction.setCallback({ onupdatestate : function () { console.log("Assistant tries to get app state"); return "List"; }, onrequestcontentcontext : function () { console.log("onrequestcontentcontext "); var result = []; try { var item = webapis.voiceinteraction.buildVoiceInteractionContentContextItem(1,1,"test", ["test set", "test title"], true); result.push(item); var item2 = webapis.voiceinteraction.buildVoiceInteractionContentContextItem(2,1,"test2", ["test set 2", "test title 2"], false); result.push(item2); } catch (e) { console.log("exception [" + e.code + "] name: " + e.name + " message: " + e.message); } return webapis.voiceinteraction.buildVoiceInteractionContentContextResponse(result); } });
In order to handle requests to add the current content to a list via voice, the application should override the onadditiontolist callback.
optional boolean onadditiontolist(ListItem list);
Since : 6.5
webapis.voiceinteraction.setCallback({ onupdatestate : function () { return "List"; }, onadditiontolist : function (list) { console.log("onadditiontolist"); switch(list) { case "LIST_BOOKMARKS" : console.log("Add this context to Bookmarks"); break; case "LIST_WATCH_LATER" : console.log("Add this context to Watch later"); break; default : break; } return true; } });
In order to handle requests to remove the current content from a list via voice, the application should override the onremovalfromlist callback.
optional boolean onremovalfromlist(ListItem list);
webapis.voiceinteraction.setCallback({ onupdatestate : function () { return "List"; }, onremovalfromlist : function (list) { console.log("onremovalfromlist"); switch(list) { case "LIST_BOOKMARKS" : console.log("Remove this context from Bookmarks"); break; case "LIST_WATCH_LATER" : console.log("Remove this context from Watch later"); break; default : break; } return true; } });
This interface represents information about the voice search term.
[NoInterfaceObject] interface VoiceSearchTerm { readonly attribute DOMString? utterance; readonly attribute DOMString? title; readonly attribute DOMString? genre; readonly attribute DOMString? cast; readonly attribute DOMString? contentType; readonly attribute DOMString? from; readonly attribute DOMString? to; };
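The field names in VoiceSearchTermField correspond to these attributes by name (SEARCH_TERM_UTTERANCE to utterance, SEARCH_TERM_RELEASE_DATE_FROM to from, and so on). The sketch below uses a hand-built sample object to illustrate the shape; on a real TV the object is delivered by the assistant, attributes may be null when not uttered, and the sample values here are invented for illustration.

```javascript
// Hand-built sample of the shape delivered to onsearch (assumption:
// real terms come from the assistant and unspoken fields are null).
var sampleSearchTerm = {
  utterance: "find action movies with Tom Hanks from 1990",
  title: null,
  genre: "action",
  cast: "Tom Hanks",
  contentType: "movie",
  from: "1990",
  to: null
};

// Collect only the fields the user actually uttered.
function describeSearch(term) {
  var parts = [];
  if (term.genre) { parts.push("genre=" + term.genre); }
  if (term.cast) { parts.push("cast=" + term.cast); }
  if (term.from) { parts.push("from=" + term.from); }
  return parts.join(", ");
}
```

On-device, prefer getDataFromSearchTerm (shown later in this document) over reading the attributes directly, since it handles the field lookup for you.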
This interface represents information about the content context for an item.
[NoInterfaceObject] interface VoiceInteractionContentContext { attribute long positionX; attribute long positionY; attribute DOMString title; attribute boolean bFocused; };
The VoiceInteractionManager interface is the top-level interface for the VoiceInteractionManager API that provides access to the module functionalities.
[NoInterfaceObject] interface VoiceInteractionManager { DOMString getVersion(); void setCallback(VoiceInteractionCallback callback); void listen(); DOMString getDataFromSearchTerm(VoiceSearchTerm voiceSearchTerm, VoiceSearchTermField field); DOMString buildCollectionDeeplinkData(DOMString appId, DOMString title, DOMString payload); DOMString buildCollectionShowData(DOMString appId, DOMString title, DOMString payload, DOMString thumbnail); VoiceInteractionContentContext buildVoiceInteractionContentContextItem(long positionX, long positionY, DOMString title, DOMString[] aliasArr, boolean bFocused); DOMString buildVoiceInteractionContentContextResponse(VoiceInteractionContentContext[] contentContextArr); };
This method gets the plugin's version number.
DOMString getVersion();
Exceptions :
try { var version = webapis.voiceinteraction.getVersion(); console.log("works with voiceinteraction [" + version + "]"); } catch (e) { console.log("exception [" + e.code + "] name: " + e.name + " message: " + e.message); }
API to define callback functions for the voice interaction commands.
void setCallback(VoiceInteractionCallback callback);
try { webapis.voiceinteraction.setCallback({ onupdatestate : function () { return "List"; }, onnavigation : function (vn) { console.log("onnavigation" + vn); return true; }, onselection : function (voiceSelection) { console.log("OnSelection" + voiceSelection); return true; } }); } catch (e) { console.log("exception [" + e.code + "] name: " + e.name + " message: " + e.message); }
API to start listening for voice interaction commands after setting the callbacks with setCallback.
void listen();
try { webapis.voiceinteraction.listen(); } catch (e) { console.log("exception [" + e.code + "] name: " + e.name + " message: " + e.message); }
API to extract a specific field from the search term delivered by the Samsung platform.
DOMString getDataFromSearchTerm(VoiceSearchTerm voiceSearchTerm, VoiceSearchTermField field);
var title = webapis.voiceinteraction.getDataFromSearchTerm(voiceSearchTerm, "SEARCH_TERM_TITLE"); var genre = webapis.voiceinteraction.getDataFromSearchTerm(voiceSearchTerm, "SEARCH_TERM_GENRE");
API to build data for search collection easily.
DOMString buildCollectionDeeplinkData(DOMString appId, DOMString title, DOMString payload);
webapis.voiceinteraction.setCallback({
    onupdatestate : function () {
        console.log("Assistant tries to get app state");
        return "None";
    },
    onsearchcollection : function (voiceSearchTerm) {
        console.log("OnSearchCollection : " + JSON.stringify(voiceSearchTerm));
        var result = [];
        // appName and appId are of the UI Application to get the payload on launch.
        try {
            result.push(webapis.voiceinteraction.buildCollectionDeeplinkData("a1b2c3d4ef.myTizenAppId", "myTizenApp", "page_id=123456&method=voice"));
        } catch (e) {
            console.log("exception [" + e.code + "] name: " + e.name + " message: " + e.message);
        }
        return JSON.stringify(result);
    }
});
API to build data, including a thumbnail, for the search collection.
DOMString buildCollectionShowData(DOMString appId, DOMString title, DOMString payload, DOMString thumbnail);
webapis.voiceinteraction.setCallback({
    onupdatestate : function () {
        console.log("Assistant tries to get app state");
        return "None";
    },
    onsearchcollection : function (voiceSearchTerm) {
        console.log("OnSearchCollection : " + JSON.stringify(voiceSearchTerm));
        var result = [];
        // appName and appId are of the UI Application to get the payload on launch.
        try {
            result.push(webapis.voiceinteraction.buildCollectionShowData("a1b2c3d4ef.myTizenAppId", "myTizenApp", "page_id=123456&method=voice", "http://myservice.com/content/123456.png"));
        } catch (e) {
            console.log("exception [" + e.code + "] name: " + e.name + " message: " + e.message);
        }
        return JSON.stringify(result);
    }
});
API to build the VoiceInteractionContentContext of an item.
VoiceInteractionContentContext buildVoiceInteractionContentContextItem(long positionX, long positionY, DOMString title, DOMString[] aliasArr, boolean bFocused);
onrequestcontentcontext : function () { console.log("onrequestcontentcontext "); var result = []; try { var item = webapis.voiceinteraction.buildVoiceInteractionContentContextItem(1,1,"test", ["test set", "test title"], true); result.push(item); } catch (e) { console.log("exception [" + e.code + "] name: " + e.name + " message: " + e.message); } return webapis.voiceinteraction.buildVoiceInteractionContentContextResponse(result); }
API to build the response of onrequestcontentcontext callback function from VoiceInteractionContentContext.
DOMString buildVoiceInteractionContentContextResponse(VoiceInteractionContentContext[] contentContextArr);
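As a sketch of using the two content context builders together, the snippet below builds two items and serializes them into a response. The stubs mimic plausible builder behavior purely so the sketch is self-contained off-device; the real implementations, including the exact response serialization, are provided by the platform via webapis.voiceinteraction.

```javascript
// Stubs mimicking the platform builders (assumption, for off-device use);
// on a Samsung TV, call webapis.voiceinteraction.* directly instead.
var voiceinteraction = {
  buildVoiceInteractionContentContextItem: function (x, y, title, aliasArr, bFocused) {
    return { positionX: x, positionY: y, title: title, alias: aliasArr, bFocused: bFocused };
  },
  buildVoiceInteractionContentContextResponse: function (items) {
    // Assumed serialization; the real return format is platform-defined.
    return JSON.stringify(items);
  }
};

// Describe two on-screen titles, with the first one focused.
var items = [
  voiceinteraction.buildVoiceInteractionContentContextItem(0, 0, "Title Text1", ["Title1"], true),
  voiceinteraction.buildVoiceInteractionContentContextItem(1, 0, "Title Text2", ["Title2"], false)
];
var response = voiceinteraction.buildVoiceInteractionContentContextResponse(items);
```

In a real application this response string is what the onrequestcontentcontext callback returns, as shown in the earlier examples.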
module VoiceInteraction {
enum VoiceApplicationState { "None", "Home", "List", "Player", "Setting", "Search", "Unknown" };
enum VoiceNavigation { "NAV_PREVIOUS", "NAV_NEXT", "NAV_LEFT", "NAV_RIGHT", "NAV_UP", "NAV_DOWN", "NAV_SHOW_MORE", "NAV_UNKNOWN" };
enum VoiceSearchTermField { "SEARCH_TERM_UTTERANCE", "SEARCH_TERM_TITLE", "SEARCH_TERM_GENRE", "SEARCH_TERM_CAST", "SEARCH_TERM_CONTENT_TYPE", "SEARCH_TERM_RELEASE_DATE_FROM", "SEARCH_TERM_RELEASE_DATE_TO" };
enum MediaFunctionMode { "MEDIA_FUNCTION_ON", "MEDIA_FUNCTION_OFF" };
enum MediaRotateMode { "MEDIA_ROTATE_LEFT", "MEDIA_ROTATE_RIGHT" };
enum MediaZoomMode { "MEDIA_ZOOM_IN", "MEDIA_ZOOM_OUT" };
enum MediaRepeatMode { "MEDIA_REPEAT_OFF", "MEDIA_REPEAT_ONE", "MEDIA_REPEAT_ALL" };
enum ListItem { "LIST_BOOKMARKS", "LIST_WATCH_LATER", "LIST_UNKNOWN" };
[NoInterfaceObject] interface VoiceInteractionManagerObject { readonly attribute VoiceInteractionManager voiceinteraction; };
WebApi implements VoiceInteractionManagerObject;
[Callback, NoInterfaceObject] interface VoiceInteractionCallback {
    optional VoiceApplicationState onupdatestate();
    optional boolean onnavigation(VoiceNavigation voiceNavigation);
    optional boolean onsearch(VoiceSearchTerm voiceSearchTerm);
    optional boolean onplay();
    optional boolean onstop();
    optional boolean onpause();
    optional boolean onexit();
    optional boolean onselection(long voiceSelection);
    optional boolean ontitleselection(DOMString title);
    optional boolean onfastforward();
    optional boolean onrewind();
    optional DOMString onsearchcollection(VoiceSearchTerm voiceSearchTerm);
    optional boolean onchangeappstate(VoiceApplicationState state);
    optional boolean onchangeprevioustrack();
    optional boolean onchangenexttrack();
    optional boolean onrestart();
    optional boolean onskipbackward(long offsetSeconds);
    optional boolean onskipforward(long offsetSeconds);
    optional boolean onsetplayposition(long position);
    optional boolean onchangesubtitlemode(MediaFunctionMode mode);
    optional boolean onchangeshufflemode(MediaFunctionMode mode);
    optional boolean onchangescreenfitmode(MediaFunctionMode mode);
    optional boolean onzoom(MediaZoomMode zoom);
    optional boolean onrotate(MediaRotateMode direction);
    optional boolean onchange360mode(MediaFunctionMode mode);
    optional boolean onchangerepeatmode(MediaRepeatMode mode);
    optional boolean oncontinuewatching();
    optional boolean oncustom(DOMString jsonObjectString);
    optional DOMString onrequestcontentcontext();
    optional boolean onadditiontolist(ListItem list);
    optional boolean onremovalfromlist(ListItem list);
};
[NoInterfaceObject] interface VoiceSearchTerm { readonly attribute DOMString? utterance; readonly attribute DOMString? title; readonly attribute DOMString? genre; readonly attribute DOMString? cast; readonly attribute DOMString? contentType; readonly attribute DOMString? from; readonly attribute DOMString? to; };
[NoInterfaceObject] interface VoiceInteractionContentContext { attribute long positionX; attribute long positionY; attribute DOMString title; attribute boolean bFocused; };
[NoInterfaceObject] interface VoiceInteractionManager {
    DOMString getVersion();
    void setCallback(VoiceInteractionCallback callback);
    void listen();
    DOMString getDataFromSearchTerm(VoiceSearchTerm voiceSearchTerm, VoiceSearchTermField field);
    DOMString buildCollectionDeeplinkData(DOMString appId, DOMString title, DOMString payload);
    DOMString buildCollectionShowData(DOMString appId, DOMString title, DOMString payload, DOMString thumbnail);
    VoiceInteractionContentContext buildVoiceInteractionContentContextItem(long positionX, long positionY, DOMString title, DOMString[] aliasArr, boolean bFocused);
    DOMString buildVoiceInteractionContentContextResponse(VoiceInteractionContentContext[] contentContextArr);
};
};