Learn Code Lab
Code Lab: Track deadlift exercise on Galaxy Watch

Objective: Create a native app for Galaxy Watch, operating on Wear OS Powered by Samsung, that uses Health Services to track a deadlift exercise. The app measures the repetition count, calories burned, and time spent during the exercise.

Overview: Health Services provides a simple and unified way to access a wide range of health and wellness related data. With the Health Services API, you no longer need to develop your own algorithms to process sensor data in order to compute metrics such as heart rate, step count, distance, or calories burned; these are now accessible through Health Services embedded on wearables operating on Wear OS Powered by Samsung. See the Health Platform descriptions for detailed information.

Set up your environment. You will need the following:
- Galaxy Watch4 or newer
- Android Studio (latest version recommended)
- Java SE Development Kit (JDK) 11 or later

Sample code: Here is sample code for you to start coding in this Code Lab. Download it and start your learning experience!
Health Track Deadlift Sample Code (132.83 KB)

Turn on developer mode and adjust its settings. On your watch, go to Settings > About watch > Software and tap the software version five times. Upon successful activation of developer mode, a toast message is displayed. Afterwards, Developer options becomes visible under Settings. Tap Developer options and enable the following options:
- ADB debugging
- Debug over Wi-Fi
- Turn off automatic Wi-Fi

Connect your Galaxy Watch to Wi-Fi. Go to Settings > Connection > Wi-Fi and make sure that Wi-Fi is enabled. From the list of available Wi-Fi networks, choose and connect to the same one as your PC. When successfully connected, tap the Wi-Fi network name, swipe down, and note the IP address. You will need this to connect your watch over ADB from your PC.

Connect your Galaxy Watch to Android Studio. In Android Studio, go to the terminal and type:

adb connect <IP address noted in the previous step>

When prompted, tap Always allow from this computer to allow debugging. Upon successful connection, you will see the following message in Android Studio's terminal:

connected to <IP address of your watch>

Now you can run the app directly on your watch.

Start your project. After downloading the sample code containing the project files, open Android Studio and click Open to open an existing project. Locate the downloaded Android project, Deadlift, in the directory and click OK.

Check the dependency and app manifest. In the dependencies section of the Gradle Scripts > build.gradle (Module: app) file, note the dependency for Health Services:

dependencies {
    implementation 'androidx.health:health-services-client:1.0.0-beta03'
    // ...
}

Note: Since the library may be updated from time to time, it is recommended to use the version suggested by Android Studio.

In the AndroidManifest.xml file, note the following <queries> element:

<queries>
    <package android:name="com.google.android.wearable.healthservices" />
</queries>

and the section with requests for the necessary
permissions:

<uses-permission android:name="android.permission.BODY_SENSORS" />
<uses-permission android:name="android.permission.ACTIVITY_RECOGNITION" />

Check capabilities. To check what can be measured during an exercise, you need to check its capabilities. Go to app > java > com.samsung.sdc21.deadlift, open the DeadliftUtil.java file, and navigate to the checkCapabilities() method. An inner class definition implements the methods of the FutureCallback interface. Within this definition, define the onSuccess() method to retrieve the exercise type capabilities:

public void onSuccess(ExerciseCapabilities result) {
    Objects.requireNonNull(result);
    Log.i(TAG, "Got exercise capabilities");
    /***********************************************************************************
     * [Practice 1] Define the onSuccess() method
     *
     *  - Hint: Uncomment the lines below and replace TODO 1.
     *    Call the getExerciseTypeCapabilities() method of the result object,
     *    passing the already initialized t as an argument.
     **********************************************************************************/
    final ExerciseType t = ExerciseType.DEADLIFT;
    // final ExerciseTypeCapabilities capabilities = "TODO 1";
    // final ExerciseConfig.Builder builder = ExerciseConfig.builder(t);
    // builder.setDataTypes(capabilities.getSupportedDataTypes());
    // exerciseConfigBuilder = builder;
}

Next, implement the findCapabilitiesFuture() method to get a callback with ExerciseCapabilities. getCapabilitiesAsync() returns the ExerciseCapabilities of the ExerciseClient for the device:

static ListenableFuture<ExerciseCapabilities> findCapabilitiesFuture(ExerciseClient client) {
    /*******************************************************************************************
     * [Practice 1] Create a ListenableFuture object that will get a callback with
     *  exercise capabilities. Choose the correct method from ExerciseClient.
     *
     *  - Hint: Uncomment the line and replace null with TODO 2.
     *    For checking capabilities, use the getCapabilitiesAsync() method.
     ******************************************************************************************/
    return null; // "TODO 2";
}

Start the exercise. Inside the startExercise() method, there is a call to the Futures.addCallback() method. This method adds a callback function that executes when the asynchronous operation of starting the exercise completes. Set an update callback for the exercise client within the onSuccess() method of the callback function:

public void onSuccess(Void result) {
    Log.i(TAG, "Successfully started");
    /***************************************************************************
     * [Practice 2] Set an update callback
     *
     *  - Hint: Uncomment the lines below and fill in the TODOs:
     *    1. Make the appropriate call to the setUpdateCallback() method
     *       and pass an exerciseUpdateListener object as an argument.
     *    2. Change the isMeasurementRunning flag value to true.
     **************************************************************************/
    // exerciseClient.setUpdateCallback("TODO 3.1");
    Log.i(TAG, "Successfully set update listener");
    // "TODO 3.2"
}

In the Deadlift.java file, call the startExercise() method in onButtonClickHelper():

public void onButtonClickHelper() {
    /*******************************************************************************************
     * [Practice 2] Start the exercise using a method from DeadliftUtil.java
     *
     *  - Hint: Uncomment the line below and fill in TODO 4.
     *    Call the startExercise() method on the util object.
     ******************************************************************************************/
    // "TODO 4"
}

Get the results. Go to the DeadliftUtil.java file and, in the getNewRepsValue() method, call the getLatestMetrics() method from the ExerciseUpdate class to get the data collected during the exercise. Store the data in resultList, where the last element holds the most up-to-date value of all your repetitions:

public long getNewRepsValue(ExerciseUpdate update, DeltaDataType<Long, IntervalDataPoint<Long>> dataType) {
    /*******************************************************************************************
     * [Practice 3] Get the data collected during the exercise
     *
     *  - Hint: Uncomment the lines below and fill in TODO 5.
     *    Call the getLatestMetrics() method of the ExerciseUpdate object,
     *    then get the data of the appropriate type.
     *    For this, you can use the dedicated method getData(), passing dataType as an argument.
     ******************************************************************************************/
    // final List<IntervalDataPoint<Long>> resultList = "TODO 5";
    // if (!resultList.isEmpty()) {
    //     final int lastIndex = resultList.size() - 1;
    //     return resultList.get(lastIndex).getValue();
    // }
    return NO_NEW_VALUE;
}

Run unit tests. For your convenience, an additional unit tests package is included. It lets you verify your code changes even without using a physical watch. To run the unit tests, right-click on com.samsung.sdc21.deadlift (test) > DeadliftUnitTest and execute the Run 'DeadliftUnitTest' command. If you have completed all the tasks correctly, you will see all the unit tests pass successfully.

Run the app. After building the APK, you can run the application on a connected device to measure actual deadlift parameters. Right after the app is started, it requests the user's permission; allow the app to receive data about the activity. Afterwards, the application's main screen is shown. Before doing deadlifts, press the Start button to track your exercise. When done, tap the Stop button.

You're done! Congratulations! You have successfully achieved the goal of this Code Lab. Now you can create a deadlift exercise tracker app by yourself! If you're having trouble, you may download this file:

Health Track Deadlift Complete Code (132.42 KB)

Learn more by going to Health Platform.
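As an aside, the last-element lookup at the heart of Practice 3 can be checked independently of the watch APIs. The sketch below mirrors that logic with a plain List standing in for the IntervalDataPoint results; the class name, helper method, and sentinel value are assumptions for illustration, not part of the lab code.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class RepsLogic {
    // Sentinel mirroring the lab's NO_NEW_VALUE constant (assumed value).
    static final long NO_NEW_VALUE = -1;

    // Returns the most recent repetition count, or NO_NEW_VALUE if no data arrived.
    static long newRepsValue(List<Long> resultList) {
        if (!resultList.isEmpty()) {
            return resultList.get(resultList.size() - 1);
        }
        return NO_NEW_VALUE;
    }

    public static void main(String[] args) {
        System.out.println(newRepsValue(Arrays.asList(3L, 5L, 8L))); // latest value: 8
        System.out.println(newRepsValue(Collections.emptyList()));   // no data: -1
    }
}
```

The same pattern applies once the real resultList is filled in from update.getLatestMetrics(): only the final element matters, because it holds the cumulative repetition count.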
Code Lab: Implement KeyEvent callback by mapping Air actions

Objective: Learn how to implement a KeyEvent callback for your S Pen remote by mapping Air actions to a car racing game.

Overview: The S Pen connects to a device via Bluetooth Low Energy (BLE), and the S Pen framework manages the connection. BLE events are converted by the S Pen framework into KeyEvents before being sent to the app. Apps can handle S Pen events by reusing the existing KeyEvent callback, without adding other interfaces.

Collecting and sending S Pen remote events to apps: The S Pen framework collects and manages the KeyEvents to be received by the app, which are defined in XML format before being made public. The process of handling an S Pen remote event and sending its defined KeyEvent to an app is as follows:
1. A BLE event is sent to the S Pen framework.
2. The S Pen framework checks for the foreground app and looks for the KeyEvent that the app made public.
3. The found KeyEvent is sent to the app's KeyEvent callback.
4. The app performs the actions defined for the KeyEvent.

Set up your environment. You will need the following:
- Java SE Development Kit (JDK) 10 or later
- Android Studio (latest version recommended)
- Samsung Galaxy device with S Pen remote capability (Galaxy Tab S6 series or newer, Galaxy S22 Ultra or newer, Galaxy Fold3 or newer)

Sample code: Here is sample code for you to start coding in this Code Lab. Download it and start your learning experience!
Air Actions Sample Code (6.4 MB)

Start your project. In Android Studio, click Open an existing Android Studio project. Locate the Android project in the directory and click OK.

Implement remote actions. Check which activity should handle the KeyEvent in the manifest file, AndroidManifest.xml, and add <intent-filter> and <meta-data> elements to that activity. The resource file defining the remoteActions must be specified in <meta-data>:

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.spenremote.racingcar">
    <application ...>
        <activity android:name=".MainActivity" ...>
            <intent-filter>
                <action android:name="com.samsung.android.support.REMOTE_ACTION" />
            </intent-filter>
            <meta-data
                android:name="com.samsung.android.support.REMOTE_ACTION"
                android:resource="@xml/remote_action_sample"/>
        </activity>
    </application>
</manifest>

Note: Only one remoteAction per app is allowed at the moment. If you define remoteActions in several activities, all but one may be ignored.

Create an XML file under res/xml/ and give it the same name as the resource specified previously. Create the desired <action> elements in the created XML file, as seen below. The XML has a root element of <remote-actions> and may include several <action> elements. Each <action> contains information such as id, label, priority, and trigger_key. If you want to map a default action to a specific gesture, you need to add a <preference> element:

<?xml version="1.0" encoding="utf-8"?>
<remote-actions version="1.2">
    <action id="pause_or_resume"
        label="@string/pause_or_resume"
        priority="1"
        trigger_key="space">
        <preference name="gesture" value="click"/>
        <preference name="button_only" value="true"/>
    </action>
    <action id="move_left"
        label="@string/move_car_left"
        priority="2"
        trigger_key="dpad_left">
        <preference name="gesture" value="swipe_left"/>
        <preference name="motion_only" value="true"/>
    </action>
    <action id="move_right"
        label="@string/move_car_right"
        priority="3"
        trigger_key="dpad_right">
        <preference name="gesture" value="swipe_right"/>
        <preference name="motion_only" value="true"/>
    </action>
    <action id="restart"
        label="@string/restart"
        priority="4"
        trigger_key="ctrl+r">
        <preference name="gesture" value="circle_cw|circle_ccw"/>
        <preference name="motion_only" value="true"/>
    </action>
</remote-actions>

Implement the KeyEvent callback. Apply the KeyEvent callback to the activity where the remoteActions have been declared. It is recommended to handle the sent KeyEvent in onKeyDown(). Tip: Learn more about the KeyEvent callback in Android Developers.

Add the following KeyEvent callback, as seen below:
- Space key: pause or resume the racing game
- Left key: move the car to the left
- Right key: move the car to the right
- Ctrl + R keys: restart the racing game

@Override
public boolean onKeyDown(int keyCode, KeyEvent event) {
    if (keyCode == KeyEvent.KEYCODE_SPACE) {
        RacingCar.PlayState state = racingCar.getPlayState();
        if (state == RacingCar.PlayState.PLAYING) {
            pause();
        } else {
            resume();
        }
    } else if (keyCode == KeyEvent.KEYCODE_DPAD_LEFT) {
        moveCarTo(RacingCar.POS_LEFT);
    } else if (keyCode == KeyEvent.KEYCODE_DPAD_RIGHT) {
        moveCarTo(RacingCar.POS_RIGHT);
    } else if ((event.getMetaState() & KeyEvent.META_CTRL_ON) != 0
            && keyCode == KeyEvent.KEYCODE_R) {
        restart();
    }
    return super.onKeyDown(keyCode, event);
}

Note: Pay attention to cases where a child view consumes the KeyEvent. For example, if an EditText is focused on a screen where the EditText co-exists, most KeyEvents are consumed for text input or cursor movement, and they may not be sent to the activities or container layouts on this screen. You must specify a key code or key combination that the child view cannot consume.

Run the app. Now everything is ready. When you install your app, make sure that your app icon appears properly in the settings: Settings > Advanced features > S Pen > Air actions. You can now control the car in the game using Air actions. You're done!
Congratulations! You have successfully achieved the goal of this Code Lab. Now you can map Air actions events in a game or app by yourself! If you're having trouble, you may download this file:

Air Actions Complete Code (6.4 MB)
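One closing note on the Ctrl+R branch of the onKeyDown() callback above: it relies on a bitmask test against the event's meta state. That check can be sketched and verified in isolation; the constants below mirror android.view.KeyEvent values and are restated here only for illustration.

```java
public class MetaKeyCheck {
    // Values mirroring android.view.KeyEvent constants (restated for this sketch).
    static final int META_CTRL_ON = 0x1000;
    static final int KEYCODE_R = 46;

    // True only when Ctrl is held AND the key is R, matching the onKeyDown() branch.
    static boolean isCtrlR(int keyCode, int metaState) {
        return (metaState & META_CTRL_ON) != 0 && keyCode == KEYCODE_R;
    }

    public static void main(String[] args) {
        System.out.println(isCtrlR(KEYCODE_R, META_CTRL_ON)); // true
        System.out.println(isCtrlR(KEYCODE_R, 0));            // false: Ctrl not held
    }
}
```

The bitwise AND is what makes the combination work: the meta state may carry several flags at once, so testing `(metaState & META_CTRL_ON) != 0` detects Ctrl regardless of any other modifiers.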
Tutorials: IoT
Matter is an open-source connectivity standard for smart home and Internet of Things (IoT) devices. It is a secure, reliable, and seamless cross-platform protocol for connecting compatible devices and systems with one another. SmartThings provides the Matter virtual device application and the SmartThings Home APIs to help you quickly develop Matter devices and use the SmartThings ecosystem without needing to build your own IoT ecosystem.

When Matter was first introduced, few devices supported it, so platform companies struggled to test and optimize Matter support on their own devices. To alleviate this issue, Samsung SmartThings developed the Matter virtual device application, which can be used to test and optimize various device types, including those that have not yet been released. The Matter virtual device is part of the Matter open source project, so you can create and test Matter devices virtually too.

In this tutorial, you will learn how to create an occupancy sensor as a Matter virtual device that you can control using the SmartThings application. You will also gain an understanding of the concept of clusters on Matter devices. For more information about SmartThings Matter, see Matter in SmartThings.

Prerequisites. To follow along with this tutorial, you need the following hardware and software:
- Host PC running Windows 10 or higher, or Ubuntu 20.04 (x64)
- Android Studio (latest version recommended)
- Java SE Development Kit (JDK) 11 or higher
- Devices connected to the same network:
  - Mobile device running Android 8.0 Oreo or higher
  - Mobile device with the SmartThings application installed
  - Matter-enabled SmartThings Station onboarded with the same Samsung account as the SmartThings application

※ For Ubuntu, to set up the development environment:

Step 1. Turn on developer mode and enable USB debugging on your mobile device.

Step 2.
Install the required OS-specific dependencies from your terminal:

$ sudo apt-get install git gcc g++ pkg-config libssl-dev libdbus-1-dev \
libglib2.0-dev libavahi-client-dev ninja-build python3-venv python3-dev \
python3-pip unzip libgirepository1.0-dev libcairo2-dev libreadline-dev

Step 3. From the SDK Manager in Android Studio, install SDK Platform 26 and NDK version 22.1.7171670.

Step 4. Register the NDK path to the PATH environment variable:

export ANDROID_NDK_HOME=[NDK path]
export PATH=$PATH:${ANDROID_NDK_HOME}

Step 5. Install Kotlin compiler (kotlinc) version 1.5.10.

Step 6. Register the kotlinc path to the PATH environment variable:

export KOTLINC_HOME=[kotlinc path]/bin
export PATH=$PATH:${KOTLINC_HOME}

Implement the occupancy sensor device type. To implement the occupancy sensor device type in the Matter virtual device application:

Step 1. Download the sample application project and open it in Android Studio.

Sample Code - Virtual Device App (11.11 MB), Nov 21, 2023

Step 2. Build and run the sample application. The application implements an on/off switch device type as an example.

Step 3. Go to the file path feature > main > java > com.matter.virtual.device.app.feature.main.

Step 4. To add the occupancy sensor device type to the application home screen, in the MainFragment.kt file, select the device type.

Note: For all code examples in this tutorial, look for "TODO #" in the sample application to find the location where you need to add the code.

// TODO 1
MainUiState.Start -> {
    val itemList = listOf(
        Device.OnOffSwitch, // sample device type
        // TODO 1
        Device.OccupancySensor, // add the occupancy sensor
    )

Retrieve cluster values. Clusters are the functional building blocks of the data model. A cluster can be an interface, a service, or an object class, and it is the lowest independent functional element in the data model. Each Matter device supports a defined set of relevant clusters that can interact with your preferred controller (such as SmartThings).
This allows for easy information retrieval, behavior setting, event notifications, and more.

The following steps are implemented in the file OccupancySensorViewModel.kt at the path feature > sensor > java > com.matter.virtual.device.app.feature.sensor. To retrieve the relevant cluster values from the device through the ViewModel:

Step 1. Retrieve the current occupancy status. The Boolean value is used by OccupancyFragment to update the UI.

// TODO 2
private val _occupancy: StateFlow<Boolean> = getOccupancyFlowUseCase()
val occupancy: LiveData<Boolean>
    get() = _occupancy.asLiveData()

Step 2. Retrieve the current battery status. The integer value is used by OccupancyFragment to update the UI.

// TODO 3
private val _batteryStatus: MutableStateFlow<Int> =
    getBatPercentRemainingUseCase() as MutableStateFlow<Int>
val batteryStatus: LiveData<Int>
    get() = _batteryStatus.asLiveData()

Step 3. When the "Occupancy" button in OccupancyFragment is pressed, use the setOccupancyUseCase() function to update the occupancy status Boolean value.

// TODO 4
viewModelScope.launch {
    Timber.d("current value = ${_occupancy.value}")
    if (_occupancy.value) {
        Timber.d("set value = false")
        setOccupancyUseCase(false)
    } else {
        Timber.d("set value = true")
        setOccupancyUseCase(true)
    }
}

Step 4. When the "Battery" slider in OccupancyFragment is moved, store the slider progress as the battery status.

// TODO 5
batteryStatus.value = progress

Step 5. When the "Battery" slider in OccupancyFragment is moved, use the updateBatterySeekbarProgress() function to update the battery status value on the slider. Use setBatPercentRemainingUseCase() to update the battery status integer value.

// TODO 6
viewModelScope.launch {
    updateBatterySeekbarProgress(progress)
    setBatPercentRemainingUseCase(progress)
}

Monitor cluster values. The observe() function monitors for and reacts to changes in a cluster value.
The following steps are implemented in the file OccupancySensorFragment.kt at the path feature > sensor > java > com.matter.virtual.device.app.feature.sensor. To monitor changes in the virtual occupancy sensor's cluster values:

Step 1. Trigger updating the occupancy status of the virtual device when the "Occupancy" button is pressed.

// TODO 7
binding.occupancyButton.setOnClickListener { viewModel.onClickButton() }

Step 2. Use the onProgressChanged() function to update the fragment UI through live data from the ViewModel. The onStopTrackingTouch() function triggers updating the battery status when touch tracking on the "Battery" slider stops.

// TODO 8
binding.occupancySensorBatteryLayout.titleText.text = getString(R.string.battery)
binding.occupancySensorBatteryLayout.seekbarData =
    SeekbarData(progress = viewModel.batteryStatus)
binding.occupancySensorBatteryLayout.seekbar.setOnSeekBarChangeListener(
    object : SeekBar.OnSeekBarChangeListener {
        override fun onProgressChanged(seekBar: SeekBar, progress: Int, fromUser: Boolean) {
            viewModel.updateBatterySeekbarProgress(progress)
        }

        override fun onStartTrackingTouch(seekBar: SeekBar) {}

        override fun onStopTrackingTouch(seekBar: SeekBar) {
            viewModel.updateBatteryStatusToCluster(seekBar.progress)
        }
    }
)

Step 3. Monitor the occupancy status and update the fragment UI when it changes.

// TODO 9
viewModel.occupancy.observe(viewLifecycleOwner) {
    if (it) {
        binding.occupancyValueText.text = getString(R.string.occupancy_state_occupied)
        binding.occupancyButton.setImageResource(R.drawable.ic_occupied)
    } else {
        binding.occupancyValueText.text = getString(R.string.occupancy_state_unoccupied)
        binding.occupancyButton.setImageResource(R.drawable.ic_unoccupied)
    }
}

Step 4. Monitor the battery status and update the fragment UI when it changes.
// TODO 10
viewModel.batteryStatus.observe(viewLifecycleOwner) {
    val text: String = getString(R.string.battery_format, it)
    binding.occupancySensorBatteryLayout.valueText.text =
        Html.fromHtml(text, Html.FROM_HTML_MODE_LEGACY)
}

Test the virtual device. To test the virtual occupancy sensor device:

Step 1. Build and run the project on your Android device. The application's home screen shows the sample on/off switch device type and the occupancy sensor device type that you just implemented.

Step 2. To create a virtual occupancy sensor device, select Occupancy Sensor and follow the instructions to receive a QR code.

Step 3. Within the SmartThings application on the other mobile device, onboard the occupancy sensor by scanning the QR code.

Step 4. Change the occupancy or battery status of the virtual occupancy sensor in the virtual device application. The values are synchronized to the SmartThings application.

Conclusion. This tutorial demonstrated how to implement the occupancy sensor device type in the Matter virtual device application and create a virtual occupancy sensor for testing purposes. To learn about implementing other device types, go to Code Lab (Matter: Create a Virtual Device and Make an Open Source Distribution). The Code Lab also describes how to contribute to the Matter open source project.
HyoJung Lee
Tutorials: Game, Mobile
Foldable technology for mobile is a ground-breaking experience not only for users, but also for developers. Its many features, such as the immersive display, app continuity, flex mode, and UX optimization, challenge developers to think outside the box to adapt to this technology. There are already a large number of blogs covering general application considerations, but what about gaming on a foldable device? In this blog, we look at the challenges and new opportunities foldable devices present for mobile game development. This blog focuses on Unity and Unreal Engine, as they are the most common game engines for mobile development; however, much of the information applies to other engines as well.

App continuity and multi-window mode. Firstly, let's establish that the large internal display is called the main display, whilst the smaller external display is called the cover display. App continuity and multi-window mode are two key features of foldable smartphones. With app continuity, you can seamlessly transition from the cover display to the main display without needing to reopen the app. With multi-window, you can run multiple apps at once and multitask like never before.

App continuity. Moving an app between the two displays affects the size, density, and aspect ratio of the display it can use. App continuity is enabled and disabled from the Android manifest of the app: the <activity> element has a new attribute called resizeableActivity. If not set, this attribute defaults to true, thus enabling app continuity and assuming the app fully supports both app continuity and multi-window mode. If set to true, the application automatically attempts to resize when moving between displays, as shown in the video below. If set to false, the system still allows you to transition from the cover display to the main display but does not attempt to resize the application.
Instead, a new button appears, allowing the user to restart the application at a time of their choosing, as shown below. When the app transitions between the two displays, the activity is destroyed and recreated. As such, the app data needs to be stored and then used to restore the previous state. From testing, both Unity and Unreal Engine appear to handle this process already; however, if you are developing on a custom engine, it is worth confirming that your engine and app can handle this.

Continue on cover screen. By default, foldable devices simply lock the device when the main display is closed. However, users can disable this behavior for certain apps. In the device's display settings, there is an option called "Continue on cover screen" that has to be enabled by the user. Entering this menu displays a list of all applications on the device and enables users to set specific applications to not lock the device when closing the main display. If an application has resizeableActivity set to false, the option to enable this functionality is greyed out and unavailable. If your application supports resizing and app continuity, you should test that this works, as users may wish to enable this functionality on their device.

Multi-window mode. Multi-window mode allows up to three apps to run at the same time in a split-screen arrangement (see left image), or two apps to run in pop-up mode, which keeps one app full-screen but creates another app in a smaller pop-up window (see right image). Just like app continuity, this is enabled using the resizeableActivity attribute in the Android manifest. If resizeableActivity is not set, it defaults to true, so again, it is recommended that you set this yourself to ensure that your app works as intended.

App continuity without multi-window mode. If you would like your app to support app continuity but don't want it to be used in multi-window mode, this is possible.
First, set the resizeableActivity attribute to false to disable both app continuity and multi-window mode. Next, use the following <meta-data> element in your <activity> element:

<meta-data android:name="android.supports_size_changes" android:value="true" />

This re-enables the app continuity feature without enabling multi-window mode.

Game engine features to use. If you are using an existing game engine, there are several features that are useful when developing for foldable devices. This section provides a high-level look at these features; for more in-depth information, I recommend visiting the engine's documentation.

Unity. Unity is a very good engine for developing on foldable devices, requiring little to no setup to enable foldable-specific features. The engine automatically handles resizing of the application without any modifications required. The resizeableActivity attribute is controlled by an option in the Player settings: Resizable Window. When enabled, resizeableActivity appears to be set to true; when disabled, it appears to be set to false. Note that this option is not available in older versions of Unity; if it is not present, you have to set resizeableActivity manually.

The engine also provides a very robust UI scaling system that helps to reduce the amount of work required to create a UI that works across both the cover display and the main display. The Canvas Scaler and anchor system work very well with foldable devices, resizing and positioning elements consistently across the two displays and thus saving developers from having to create two UI designs (one for the cover display and one for the main display).

Unreal Engine 4. Unreal Engine 4 is another good engine for developing on foldable devices, although it requires a little more work to set up.
First of all, Unreal Engine 4 by default disables the resize event on Android devices; however, it is possible to re-enable it in Device Profiles using the console variable:

r.EnableNativeResizeEvent = 1

Unreal Engine 4 also has a default max aspect ratio of 2.1. This is too small for the cover display of some foldable devices, which can result in black bars on either side of the image. Fortunately, Unreal Engine makes this value easily changeable in the project settings: Platform -> Android -> Max Aspect Ratio. From my testing, 2.8 is a good value; however, I recommend experimenting to find the best value for your game.

Unreal Engine's UMG (Unreal Motion Graphics UI Designer) has a comprehensive set of tools for creating responsive UI designs. This system automatically scales UI elements to fit the screen size, and the anchor system is also very useful for ensuring that UI elements are positioned correctly on both displays.

Samsung Remote Test Lab. No matter what engine you use, the best way to ensure your app works well on a foldable device is to test it on one. Samsung Remote Test Lab has a range of foldable devices available for developers to test their applications on, if getting hold of a physical device is too difficult or expensive. All you need to do is create a Samsung account, and you can start using these devices for testing.

Android Jetpack WindowManager. Despite being very good engines for foldable game development, neither Unity nor Unreal currently provides information about the current state of the foldable device (is it open, closed, or halfway? Where is the fold positioned? What is the fold's orientation? And so forth). However, Android has recently created a new library as part of the Android Jetpack libraries that enables developers to access this information and make use of it in their apps.
Android Jetpack, in their own words, is "a suite of libraries to help developers follow best practices, reduce boilerplate code, and write code that works consistently across Android versions and devices so that developers can focus on the code they care about." WindowManager is a new library from the Android Jetpack suite, intended to help application developers support new device form factors and multi-window environments. The library had its 1.0.0 release in January 2022 and targeted foldable devices, but, according to the documentation, future versions will extend to more display types and window features.

More technical resources. This blog post has an accompanying technical blog post and Code Lab tutorial demonstrating how to use the Android Jetpack WindowManager with both Unity and Unreal to take advantage of the flex mode feature of Samsung's foldable devices. I recommend starting with the Jetpack WindowManager blog post to learn how to set up the WindowManager in Java:
https://developer.samsung.com/galaxy-gamedev/blog/en-us/2022/07/20/how-to-use-jetpack-window-manager-in-android-game-dev

Then, follow it up with the Code Labs to learn how to use the WindowManager to implement a simple flex mode setup in either Unreal or Unity:
https://developer.samsung.com/codelab/gamedev/flex-mode-unreal.html
https://developer.samsung.com/codelab/gamedev/flex-mode-unity.html

Also, visit our game developer community to join the discussion around mobile gaming and foldable devices. Click here to find out more about other design considerations when designing apps for foldable devices and large screens.

Final thoughts. Foldable devices provide a richer experience than regular phones. Hopefully, this blog post and the accompanying tutorials have provided you with the information you need to begin taking advantage of these new form factors and start providing users a richer foldable experience.
Additional resources on the Samsung Developers site

The Samsung Developers site has many resources for developers looking to build for and integrate with Samsung devices and services. Stay in touch with the latest news by creating a free account and subscribing to our monthly newsletter. Visit the Galaxy Store Games page for information on bringing your game to Galaxy Store, and visit the Marketing Resources page for information on promoting and distributing your Android apps. Finally, our Developer Forum is an excellent way to stay up to date on all things related to the Galaxy ecosystem.
Lochlann Henry Ramsay-Edwards
events game, uiux, health, mobile, galaxy watch, smarttv, marketplace, foldable
Blog

It's almost time again for the Samsung Developer Conference (aka SDC). After a few years of limited travel, virtual-only events, and nasal swabs, we're excited to get back to a live event, if only for one day. As you may have seen on this site and in our social media accounts, SDC is scheduled for October 12, at the Moscone Conference Center in San Francisco. The keynote address and technical sessions from the live event will be streamed via YouTube, and there will be additional virtual content available to view beginning October 13, on the conference web site. SDC is a special time for us at the Samsung Developers team worldwide. Internally, we have members in many countries: South Korea, the Philippines, the United Kingdom, Vietnam, Brazil, Poland, Bangladesh, and the United States. We're all excited to come together each year to bring useful and informative content to you. We enjoy planning and staging the event each year, and we all have great memories. In this post, we'll share some of our favorite experiences with you.

Behind the scenes

We understand the boring details behind the scenes aren't exciting to most, but for our events team, the challenge of planning and pulling off a major event is a thrill. Most years, planning for SDC begins almost the moment the previous event is completed. The planners begin executing contracts with the venue, caterers, and the experiential vendor that handles all the small details. They meet weekly on a conference call, usually in the evenings for those in the US to talk with our counterparts in South Korea. As the months move along, the teams secure contractors with audio-visual teams to run the massive wall of displays on the keynote stage, as well as the cameras in the rooms where sessions are held. We look through dozens of clothing catalogs to find apparel for attendees as well as employees.
During these times, we plan out the social media notices, come up with topics for blog posts, and design the email campaigns that go out to hundreds of thousands of developers. We discuss the colors of signs in great detail to make sure each shade is exactly the right color. We talk to the technical teams to understand which new tech is ready for developers to use. We track down guest speakers from other companies to show how Samsung works with so many partners. As the months go by, we're working more and more on SDC tasks. A few days prior to showtime, you will see many of our team on-site at the venue, setting up demo stations, wiping down counters, and making sure everything is "just right," so that you're amazed the moment you walk through those doors.

Best of Galaxy Store Awards

Beginning in 2018, the Samsung Developers team started the Best of Galaxy Store Awards program as a way to show our appreciation for the developers and designers who bring their products to Galaxy Store. Some of the top names in the world of gaming, productivity, and social media have received the awards. To name a few:
- Riot Games' League of Legends: Wild Rift (2021 Best Strategy Game)
- Epic Games' Fortnite (2020 Game of the Year)
- Spotify (2020 Bixby Capsule of the Year)
- Gameloft's Asphalt 9 (2019 Best Racing Game)
- TikTok (2019 Best Social App)
- Disney Heroes (2018 Best Game)

Best of Galaxy Store winners at SDC

While the 2018 and 2019 awards were presented live at SDC, the COVID pandemic required us to make the awards show a virtual event. You can watch the 2020 and 2021 awards on YouTube. The 2022 awards are scheduled for December 6, 2022, on YouTube. Stay tuned to the Samsung Developers channels on Twitter, Facebook, LinkedIn, and YouTube to watch the awards show when it premieres. For a full list of Samsung Best of Galaxy Store Awards winners, click here.

Before and after hours

SDC isn't just all business, though; the organizers and attendees also love to have fun.
Before the keynote, attendees could join groups stretching and going for a run through the city streets. Others opted for calm and quiet with yoga at the expo hall stage.

Staying fit at SDC

A tech conference wouldn't be complete without an after-hours party, and SDC is no different. In 2014, the SDC crowd went to San Francisco's famed Exploratorium, where they experimented with science projects among tables of excellent food and drinks. Attendees in 2016 experienced the beautiful Asian Art Museum in San Francisco. In 2017, attendees walked virtually through a murder mystery with Samsung Gear VR headsets. The 2019 SDC brought more excitement to the after-hours party. As the event was held at the end of October, the theme was "Day of the Dev," playing on the name of the holiday celebrated in November. A VR-powered interactive journey led players through a spooky experience, while Halloween-themed goodies were available throughout the expo hall. A foggy mist permeated the place, while characters from the VR journey appeared in real life to the delight of the crowd.

Day of the Dev at SDC

Technical sessions

No matter your interests, if you're into tech, SDC probably has a topic that will pique your curiosity. Whether you are into mobile design, voice control, smart devices, enterprise security, streaming media, or a good old-fashioned shoot-em-up game, you will find plenty of technical sessions to satisfy your need to know more.

Foldables and security are popular topics at SDC

After you've taken in the experts' ideas in the technical sessions, you can start building your own ideas on Samsung's platforms in the Code Lab area of the expo hall. At the Code Lab, you can talk with the actual developers of many of Samsung's best products and learn from them. This interactive experience is also available to anyone at any time on the Samsung Developers web site.
Keynote and Spotlight session

The keynote presentation is always the subject of great speculation on the internet for days prior to SDC. Samsung Electronics President and CEO DJ Koh is a favorite each year. Do you remember the 2018 SDC at Moscone West, when Senior Vice President Justin Denison walked on stage to present the Galaxy Fold, the first foldable mobile device? That is one of many exciting moments from the first day of the event.

Revealing the Galaxy Fold

The SDC keynote includes other memorable moments, such as the announcement of One UI, which brought a sensible and consistent design ethos to Samsung mobile and wearable devices. Bixby CTO Adam Cheyer used his time on stage to show the power of AI, while also amazing and amusing the audience with his sleight of hand. The second day starts off with the Spotlight session, where Samsung focuses on companies building successful partnerships with Samsung platforms and products. Compared to the flash and excitement of the keynote, the Spotlight session is where executives and developers share their experiences with their peers.

Spotlight session highlights: behind the scenes with Vitalik Buterin; on stage with Tim Sweeney and John Hanke

Some of the more memorable moments of the Spotlight sessions include hosting a fireside chat on games and how to monetize them on mobile platforms. This session included Epic Games CEO Tim Sweeney and Niantic Labs CEO John Hanke. Another memorable Spotlight session brought Ethereum co-founder Vitalik Buterin to the stage with Samsung's John Jun to talk about the possibilities of blockchain for mobile devices. One final memory of the SDC Spotlight session that's special to those of us on the Samsung Developers team, aside from putting all this together for you, was from 2018. The week before the event, one of the team's members proposed to use the Note9 with S Pen to take a photo on stage and send the photo to Twitter.
In three days, they had the prototype working and tested. The team convinced VP Yoon Lee to perform this live demo on stage to close the event. As you can see below, the demo worked flawlessly.

Wrapping up

After SDC is finished, the Samsung Developers team is there to put everything away. Demo machines are disassembled and our booth is sent to the warehouse. We pack up leftover t-shirts and make sure we leave the venue cleaner than we found it. After that, we have a well-earned party for ourselves and a good night's sleep. Thank you for reading through our memories of past SDC events. Please join us online, October 12, 2022, at 10 AM PT, for the next Samsung Developer Conference keynote. Let us know your favorite moments from SDC. Join us on Twitter, Facebook, LinkedIn, and YouTube to continue the discussion.
Learn Code Lab
Measure blood oxygen level and heart rate on Galaxy Watch

Objective

Create a health app for Galaxy Watch, operating on Wear OS powered by Samsung, utilizing Samsung Health Sensor SDK to trigger and obtain results of simultaneous blood oxygen level (SpO2) and heart rate measurements.

Overview

Samsung Health Sensor SDK provides a means of accessing and tracking health information contained in the health data storage. Its tracking service gives raw and processed sensor data, such as accelerometer and body composition data, sent by the Samsung BioActive Sensor. The latest BioActive sensor of Galaxy Watch runs powerful health sensors such as photoplethysmogram (PPG), electrocardiogram (ECG), bioelectrical impedance analysis (BIA), sweat loss, and SpO2. See the Samsung Health Sensor SDK descriptions for detailed information.

Set up your environment

You will need the following:
- Galaxy Watch4 or newer
- Android Studio (latest version recommended)
- Java SE Development Kit (JDK) 11 or later

Sample code

Here is a sample code for you to start coding in this Code Lab. Download it and start your learning experience!
Measuring Blood Oxygen Level and Heart Rate sample code (159.2 KB)

Connect your Galaxy Watch to Wi-Fi

Go to Settings > Connection > Wi-Fi and make sure that Wi-Fi is enabled. From the list of available Wi-Fi networks, choose and connect to the same one as your PC.

Turn on developer mode and adjust its settings

On your watch, go to Settings > About watch > Software and tap on Software version 5 times. Upon successful activation of developer mode, a toast message will display as in the image below. Afterwards, Developer options will be visible under Settings. Tap Developer options and enable the following options:
- ADB debugging
- In Developer options, find Wireless debugging
- Turn on Wireless debugging
- Check Always allow on this network and tap Allow
- Go back to Developer options and click Turn off automatic Wi-Fi

Note: There may be differences in settings depending on your One UI version.

Connect your Galaxy Watch to Android Studio

Go to Settings > Developer options > Wireless debugging and choose Pair new device. Take note of the Wi-Fi pairing code, IP address, and port. In Android Studio, go to the Terminal and type:

    adb pair <IP address>:<port> <Wi-Fi pairing code>

When prompted, tap Always allow from this computer to allow debugging. After successfully pairing, type:

    adb connect <IP address of your watch>:<port>

Upon successful connection, you will see the following message in Android Studio's Terminal: connected to <IP address of your watch>. Now, you can run the app directly on your watch.

Turn on developer mode for Health Platform

On your watch, go to Settings > Apps > Health Platform. Quickly tap the Health Platform title 10 times. This enables developer mode and displays [Dev Mode] below the title. To stop using developer mode, quickly tap the Health Platform title 10 times to disable it.

Start your project

In Android Studio, click Open to open an existing project. Locate the downloaded Android project from the directory and click OK.

Establish service connection and check capabilities

For the
device to track data with the Samsung Health Sensor SDK, it must connect to the service via the HealthTrackingService API. After establishing a connection, verify that the required tracker type is available. To check this, get the list of available tracker types and verify that the tracker is on the list. In this Code Lab, you will utilize the blood oxygen level and heart rate trackers. The HealthTrackingService API usage is shown below.

HealthTrackingService initiates a connection to Samsung's health tracking service and provides a HealthTracker instance to track a HealthTrackerType:
- public void connectService(): establish a connection with Samsung's health tracking service.
- public void disconnectService(): release a connection to Samsung's health tracking service.
- public HealthTrackerCapability getTrackingCapability(): provide a HealthTrackerCapability instance to get the list of supported health tracker types.

Initialize multiple trackers

Before the measurement starts, initialize the SpO2 tracker by obtaining the proper HealthTracker object. In the ConnectionManager.java file, navigate to initSpo2(), create an oxygen saturation HealthTracker instance, and pass it to the Spo2Listener instance:
- Get the HealthTracker object using the HealthTrackingService API, with the HealthTrackerType.SPO2_ON_DEMAND type as the parameter. HealthTrackingService provides: public HealthTracker getHealthTracker(HealthTrackerType healthTrackerType), which returns a HealthTracker instance for the given HealthTrackerType.
- Pass the HealthTracker object to the Spo2Listener instance. Spo2Listener provides: public void setHealthTracker(HealthTracker tracker), which sets the HealthTracker instance for the given tracker.

    /*******************************************************************************
     * [Practice 1] Create blood oxygen level health tracker object
     *  - Get health tracker object
     *  - Pass it to Spo2Listener
     * ------------------------------------------------------------------------------
     *  - Hint: Replace TODO 1 with parts of code
     *    (1) Get HealthTracker object using healthTrackingService.getHealthTracker()
     *        with HealthTrackerType.SPO2_ON_DEMAND as parameter
     *    (2) Pass it to spo2Listener using its setHealthTracker() function
     ******************************************************************************/
    public void initSpo2(Spo2Listener spo2Listener) {
        //"TODO 1 (1)"
        //"TODO 1 (2)"
        setHandlerForBaseListener(spo2Listener);
    }

Next, in the ConnectionManager.java file, in the initHeartRate() function, create a heart rate HealthTracker instance and pass it to the HeartRateListener instance:
- Get the HealthTracker object using the HealthTrackingService API, with the HealthTrackerType.HEART_RATE_CONTINUOUS type as the parameter.
- Pass the HealthTracker object to the HeartRateListener instance. HeartRateListener provides: public void setHealthTracker(HealthTracker tracker), which sets the HealthTracker instance for the given tracker.

    /*******************************************************************************
     * [Practice 2] Create heart rate health tracker object
     *  - Get health tracker object
     *  - Pass it to HeartRateListener
     * ------------------------------------------------------------------------------
     *  - Hint: Replace TODO 2 with parts of code
     *    (1) Get HealthTracker object using healthTrackingService.getHealthTracker()
     *        with HealthTrackerType.HEART_RATE_CONTINUOUS as parameter
     *    (2) Pass it to heartRateListener using its setHealthTracker() function
     ******************************************************************************/
    public void initHeartRate(HeartRateListener heartRateListener) {
        //"TODO 2 (1)"
        //"TODO 2 (2)"
        setHandlerForBaseListener(heartRateListener);
    }

Start and stop trackers

For the client app to start obtaining data through the SDK, set a listener method on the HealthTracker. This method is called every time there is new data. After the measurement completes, the listener has to be disconnected.

To start measurement, in the BaseListener.java file, navigate to the startTracker() function and set trackerEventListener as the listener of the HealthTracker object. HealthTracker enables an application to set an event listener and get tracking data for a specific HealthTrackerType; use public void setEventListener(HealthTracker.TrackerEventListener listener) with the HealthTracker.TrackerEventListener object instance as the parameter.

    /*******************************************************************************
     * [Practice 3] Start health tracker by setting an event listener
     *  - Set event listener on health tracker
     * ------------------------------------------------------------------------------
     *  - Hint: Replace TODO 3 with parts of code
     *    Set the event listener on the HealthTracker object using
     *    healthTracker.setEventListener() with trackerEventListener as parameter
     ******************************************************************************/
    public void startTracker() {
        Log.i(APP_TAG, "startTracker called");
        Log.d(APP_TAG, "healthTracker: " + healthTracker.toString());
        Log.d(APP_TAG, "trackerEventListener: " + trackerEventListener.toString());
        if (!isHandlerRunning) {
            handler.post(() -> {
                //"TODO 3"
                setHandlerRunning(true);
            });
        }
    }

To stop measurement, unset the trackerEventListener from the HealthTracker object in the stopTracker() function, using public void unsetEventListener(), which stops the registered event listener on this HealthTracker instance.

    /*******************************************************************************
     * [Practice 4] Stop health tracker by removing the event listener
     *  - Unset event listener on health tracker
     * ------------------------------------------------------------------------------
     *  - Hint: Replace TODO 4 with parts of code
     *    Unset the event listener on the HealthTracker object using
     *    healthTracker.unsetEventListener()
     ******************************************************************************/
    public void stopTracker() {
        Log.i(APP_TAG, "stopTracker called");
        Log.d(APP_TAG, "healthTracker: " + healthTracker.toString());
        Log.d(APP_TAG, "trackerEventListener: " + trackerEventListener.toString());
        if (isHandlerRunning) {
            //"TODO 4"
            setHandlerRunning(false);
            handler.removeCallbacksAndMessages(null);
        }
    }

Process obtained and batched data

The response from the platform is asynchronous with the results you want to obtain. Follow the steps below to get blood oxygen level and heart rate data.

In the Spo2Listener.java file, navigate to the updateSpo2() function and read the SpO2 data from the DataPoint:
- Get the oxygen saturation status using the DataPoint API key ValueKey.SpO2Set.STATUS.
- Get the oxygen saturation value using the DataPoint API key ValueKey.SpO2Set.SPO2.

DataPoint provides a map of ValueKey and value with a timestamp: public <T> T getValue(ValueKey<T> type) gets the data value for the given key.

    /*******************************************************************************
     * [Practice 5] Read values from DataPoint object
     *  - Get blood oxygen level status
     *  - Get blood oxygen level value
     * ------------------------------------------------------------------------------
     *  - Hint: Replace TODO 5 with parts of code
     *    (1) Remove the Spo2Status.CALCULATING assignment and set status from the
     *        DataPoint object using dataPoint.getValue(ValueKey.SpO2Set.STATUS)
     *    (2) Set spo2Value from the DataPoint object using
     *        dataPoint.getValue(ValueKey.SpO2Set.SPO2)
     *        if status is Spo2Status.MEASUREMENT_COMPLETED
     ******************************************************************************/
    public void updateSpo2(DataPoint dataPoint) {
        int status = Spo2Status.CALCULATING; //"TODO 5 (1)"
        int spo2Value = 0; //"TODO 5 (2)"
        TrackerDataNotifier.getInstance().notifySpo2TrackerObservers(status, spo2Value);
        Log.d(APP_TAG, dataPoint.toString());
    }

In the HeartRateListener.java file, navigate to the readValuesFromDataPoint() function and read the heart rate data from the DataPoint:
- Get the heart rate status using the DataPoint API key ValueKey.HeartRateSet.HEART_RATE_STATUS.
- Get the heart rate value using the DataPoint API key ValueKey.HeartRateSet.HEART_RATE.
- Get the heart rate IBI value using the DataPoint API key ValueKey.HeartRateSet.IBI_LIST.
- Get the IBI quality using the DataPoint API key ValueKey.HeartRateSet.IBI_STATUS_LIST.

    /*******************************************************************************
     * [Practice 6] Read values from DataPoint object
     *  - Get heart rate status
     *  - Get heart rate value
     *  - Get heart rate IBI value
     *  - Check the retrieved heart rate's IBI and IBI quality values
     * ------------------------------------------------------------------------------
     *  - Hint: Replace TODO 6 with parts of code
     *    (1) Set hrData.status using dataPoint.getValue(ValueKey.HeartRateSet.HEART_RATE_STATUS)
     *    (2) Set hrData.hr using dataPoint.getValue(ValueKey.HeartRateSet.HEART_RATE)
     *    (3) Set local variable List<Integer> hrIbiList using
     *        dataPoint.getValue(ValueKey.HeartRateSet.IBI_LIST)
     *    (4) Set local variable final List<Integer> hrIbiStatus using
     *        dataPoint.getValue(ValueKey.HeartRateSet.IBI_STATUS_LIST)
     *    (5) Set hrData.ibi with the last of the hrIbiList values
     *    (6) Set hrData.qIbi with the last of the hrIbiStatus values
     ******************************************************************************/
    public void readValuesFromDataPoint(DataPoint dataPoint) {
        HeartRateData hrData = new HeartRateData();
        //"TODO 6 (1)"
        //"TODO 6 (2)"
        //"TODO 6 (3)"
        //"TODO 6 (4)"
        //"TODO 6 (5)"
        //"TODO 6 (6)"
        TrackerDataNotifier.getInstance().notifyHeartRateTrackerObservers(hrData);
        Log.d(APP_TAG, dataPoint.toString());
    }

Run unit tests

For your convenience, an additional unit tests package is provided. This lets you verify your code changes even without using a physical watch. See the instructions below on how to run the unit tests:
- Right-click on com.samsung.health.multisensortracking (test) and execute the Run 'Tests in 'com.samsung.health.multisensortracking'' command.
- If you completed all the tasks correctly, you can see that all the unit tests passed successfully.

Run the app

After building the APK, you can run the application on a connected device to see the blood oxygen level and heart rate values. Right after the app starts, it requests user permission; allow the app to receive data from the body sensors. Afterwards, it shows the application's main screen and automatically displays the heart rate. To get the blood oxygen level (SpO2) value, tap on the Measure button. To stop the measurement, tap on the Stop button. Tap on the Details label to see more heart rate data.

You're done!

Congratulations! You have successfully achieved the goal of this Code Lab. Now, you can create a health app that measures blood oxygen level and heart rate by yourself! If you're having trouble, you may download this file: Measuring Blood Oxygen Level and Heart Rate complete code (158.8 KB). To learn more about Samsung Health, visit developer.samsung.com/health
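As a plain-Java illustration of the last two steps of Practice 6 (keeping the most recent IBI value and its quality flag from the batched lists), here is a minimal standalone sketch. The HeartRateData fields mirror the code lab, but the classes below are stand-ins for this illustration, not the SDK types:

```java
import java.util.List;

// Standalone stand-in for the code lab's HeartRateData; not an SDK class.
class IbiExtractor {
    static class HeartRateData {
        int ibi;   // most recent inter-beat interval, in ms
        int qIbi;  // quality flag for that interval
    }

    // Practice 6, steps (5)-(6): the tracker delivers batched IBI values and a
    // parallel quality list; keep the last (most recent) entry of each.
    static HeartRateData fromIbiLists(List<Integer> ibiList, List<Integer> ibiStatusList) {
        HeartRateData data = new HeartRateData();
        if (!ibiList.isEmpty()) {
            data.ibi = ibiList.get(ibiList.size() - 1);
        }
        if (!ibiStatusList.isEmpty()) {
            data.qIbi = ibiStatusList.get(ibiStatusList.size() - 1);
        }
        return data;
    }
}
```

In the actual lab, the two lists come from dataPoint.getValue(ValueKey.HeartRateSet.IBI_LIST) and dataPoint.getValue(ValueKey.HeartRateSet.IBI_STATUS_LIST), as the Practice 6 hints describe.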
events
Blog

We're sure you've heard about the exciting sessions and speakers at Samsung Developer Conference. From Samsung leaders announcing the latest dev tools, to fireside chats about the future of tech, to sessions on emerging topics, SDC19 is full of valuable insights and intriguing information. But the fun doesn't stop there! It's also a haven for innovation and inspiration, thanks to these cool activations and exhibits.

A developer's playground: Tech Square and Dev Park

Tech Square

Visit Samsung product zones at Tech Square, from Bixby to Smart TV to Galaxy. Discover the latest SDKs, work 1:1 with Samsung experts, and chat with Samsung partners about their latest work.
- AI/IoT Zone: Check out areas dedicated to SmartThings, Tizen, and Bixby. Relax in comfy massage chairs powered by Tizen, explore the SmartThings partner showcase, and stop by the Bixby hackathon and magic show.
- Code Lab: Get hands-on experience with the latest SDKs and developer tools with the help of Samsung engineers. Plus, complete fun coding challenges for the chance to win exclusive tech prizes.

Dev Park

Play, network, and get inspired with fun activations at Dev Park, like the Designer Zone featuring the winners of Samsung's mobile design competition, and trending topics at the theater. Last but not least, swing by to say hi to the Samsung Developer Program and Think Tank teams!
- Hacker's Playground: Learn skills like attack, defense, and reversing from expert hackers. We'll have toy examples with step-by-step guides, and if you complete the tutorial, you might be in for a surprise. Up for a challenge? Show off your hacking skills for a chance to win hot Samsung prizes. It's open to everyone from newbies to experts, so bring your own laptop and compete for some serious bragging rights.
- XR: Delusion experience: Get in the Halloween spirit with our haunted XR experience, Delusion: Lies Within. Experience the latest in Samsung AR and XR as you "interact" with hidden Delusion characters.
Workshop with the tech after testing out the TetaVi volumetric rig: just scan yourself, drop your avatar into Delusion content, and have some spooky fun.
- Think Tank: ONA interactive wall: Check out this multi-touch, multi-user interface with 3D-gesture sensing. ONA tracks your location and movement when you're using it to play games, making it a literal game changer.

Feeling inspired yet? These are just a few of the highlights. Don't miss the rest: register for SDC19 today! Use the code PRIORITY until October 22 to secure exclusive seating near the stage during the keynote. Only valid for the first 100 people to redeem the code! Check out the full lineup of tech sessions, follow us on social, and keep an eye on #SDC19 for the latest news and updates. We can't wait to see you in San Jose!
Samsung Developer Program
Learn Code Lab
Apply gyro effects to a watch face using Watch Face Studio

Objective

Learn how to change the appearance of your watch face as the watch tilts in different directions by applying gyro effects using Watch Face Studio.

Overview

Watch Face Studio is a graphical authoring tool that enables you to design watch faces for the Wear OS smartwatch ecosystem, including Galaxy Watch4 and newer versions. It provides easy and simple features, such as gyro effects, for creating attractive and dynamic watch faces without coding. Gyro effects are a feature in Watch Face Studio that uses the watch's gyro sensor to animate or alter the look of watch faces. The effects are triggered when the watch is tilted within the -90° to 90° tilt angle range along the x and y axes (see the illustration below). A tilt angle of 0° on both axes indicates that the watch rests on a flat surface. The following shows the expected tilt angle values when the watch tilts 90° in different directions:

Direction of tilt | Tilt angle
Clockwise along the y-axis, or tilt forward | y = -90°
Counterclockwise along the y-axis, or tilt backward | y = 90°
Clockwise along the x-axis, or tilt to the right | x = 90°
Counterclockwise along the x-axis, or tilt to the left | x = -90°

Set up your environment

You will need the following:
- Watch Face Studio (latest version)
- Galaxy Watch4 or newer smartwatch running on Wear OS API 30 or higher

Sample project

Here is a sample project for you to start coding in this Code Lab. Download it and start your learning experience!
gyro effects sample project 322 00 kb start your project to load the sample project in watch face studio click open project locate the downloaded file and click open the sample project is a premade watch face design it contains a background image, a digital clock, and a date notein watch face studio, you can add a digital clock component in different formats such as hh, mm, ss, or hh mm ss setting the digital time properties, such as color and dimension, per digit is also possible when you choose a component from variable options the project also contains three ellipses two ellipses with tag expressions in rotate > angle properties to represent the analog time hour and second and one ellipse as the background of the digital hour in the style tab, you can see the watch face's different theme color palettes resize a component when tilted forward or backward as the watch tilts, a component can enlarge or shrink relative to its pivot when applying gyro effects the pivot is the point at the center of the component the component remains at the same position as it gets bigger or smaller select the time background component in properties, you can see that the component's dimension width x height is 100px by 100px select apply gyro to show the properties of gyro effects initially, a component does not resize, move, rotate, or change opacity when you tilt the watch in any direction the gyro effect occurs when you set the changing to properties higher or lower than the default values to resize the component when the watch tilts forward or backward, go to y-axis properties and set the scale width and height to the following 70% for -90° to 0° range 130% for 0° to 90° range in the run pane, you can simulate the rotation of the watch the component enlarges from 100% to 130% as the watch tilts backward and shrinks to 70% as the watch tilts forward notewhen you drag the gyro joystick upward, it simulates the backward rotation of the watch similarly, when you move the joystick 
downward, it simulates the forward rotation.

Rotate a component when tilted to the left or right

In gyro mode, a component can rotate around its pivot. When a component rotates, its position remains the same. Select the digital date component. The digital date is formatted as curved text, and its pivot is at the center of the canvas (inner pivot X = 225 px, inner pivot Y = 225 px). Enable gyro effects, then change the rotation angle to -90° for the following tilt angle ranges along the X-axis:

-90° to 0°
0° to 90°

As the watch tilts to the left or right, the date component gradually rotates counter-clockwise by up to 90°.

Change the appearance of the background image based on the adjusted tilt angle range

Range values define how far the watch must tilt for a component to resize, rotate, appear, or move to the maximum value you set. For example, if you set the range from 0° to 45° along the X-axis and the scale to 200%, the component grows to twice its original size once the watch reaches a 45° tilt to the right.

Select the background image and change the opacity of the component to 0% for the following ranges:

-45° to 0° along the X-axis
0° to 45° along the X-axis
-45° to 0° along the Y-axis

As the watch tilts in any direction other than backward, the background image gradually fades out as the tilt approaches 45°. Then, for the 0° to 45° range along the Y-axis, change the rotation angle to 360° so that the background image rotates clockwise as the watch tilts backward.

Test the watch face

In the Run pane, click the preview watch face icon. Move the gyro simulator joystick to see how the appearance of the time background, digital date, and background image changes. To test your watch face on a smartwatch, you need to connect a watch device to the same network as your computer. In Watch Face Studio, select Run on Device, then select the connected watch device you want to test with. If your device is not detected automatically, click Scan Devices, or enter its IP address manually by clicking the + button.

Note: The always-on display is already set in the project. To run the watch face successfully, you may need to configure its always-on display to avoid errors when you click Run on Device.

You're done!

Congratulations! You have successfully achieved the goal of this code lab. Now, you can apply gyro effects using Watch Face Studio to change the appearance of your watch face all by yourself! If you're having trouble, you may download this file: Gyro Effects complete project (322.02 KB). To learn more about Watch Face Studio, visit developer.samsung.com/watch-face-studio.
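As a side note, the range-to-value mapping that the gyro settings above rely on can be sketched in plain Kotlin. This is illustrative only: Watch Face Studio computes the mapping internally without user code, and the function name and signature here are hypothetical.

```kotlin
// Illustrative model of how a gyro range maps a tilt angle to a property
// value (rotation, scale, opacity) via linear interpolation over the range.
fun mapTilt(angle: Float, rangeStart: Float, rangeEnd: Float, maxValue: Float): Float {
    val lo = minOf(rangeStart, rangeEnd)
    val hi = maxOf(rangeStart, rangeEnd)
    val clamped = angle.coerceIn(lo, hi)
    // 0 effect at rangeStart, full effect (maxValue) at rangeEnd.
    return (clamped - rangeStart) / (rangeEnd - rangeStart) * maxValue
}

fun main() {
    // Scale range 0°..45° with a 200% target: halfway tilt gives half the effect.
    println(mapTilt(22.5f, 0f, 45f, 200f)) // 100.0
    println(mapTilt(45f, 0f, 45f, 200f))   // 200.0
    // Rotation range -90°..0° with a -90° target rotation.
    println(mapTilt(-45f, -90f, 0f, -90f)) // -45.0
}
```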
Code Lab: Create an Android Automotive OS (AAOS) App with Payments via Samsung Checkout

Objective

Create a shopping app for Android Automotive OS (AAOS) that uses templates from AAOS and the Ignite Store, and processes payments via the Ignite Payment SDK powered by Samsung Checkout.

Partnership request

To use the Ignite Payment SDK and have access to development tools and resources, such as Ignite AAOS emulators, you must become an official partner. Once done, you can fully utilize this code lab. You can learn more about the partnership process by visiting the Ignite Store Developer Portal.

Overview

Android Automotive OS (AAOS) is a base Android platform that runs directly on the car and is deeply integrated with the underlying vehicle hardware. Unlike the Android Auto platform, users can download AAOS-compatible apps directly into their cars, without needing a phone, and use an interface designed specifically for the car screen. AAOS can run both system and third-party Android applications. Because AAOS is not a fork and shares the same codebase as Android for mobile devices, developers can easily adapt existing smartphone apps to run on AAOS.

The diagram below illustrates the architecture of AAOS. At the hardware abstraction layer (HAL) level, AAOS incorporates additional components such as the vehicle HAL (VHAL), exterior view system (EVS), and broadcast radio (BR) to handle vehicle properties and connectivity. At the framework level, car service and WebView modules are included. At the application level, the main system applications include car system UI, car launcher, and car input method editor (IME); car media and automotive host are also incorporated as system apps. Third-party apps are classified into three categories: audio media, car templated, and parked.

Car templates

Car templated apps use templates specified by the Car App Library, which are rendered by the automotive host and customized by original equipment manufacturers (OEMs).
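For reference, a CarAppActivity declaration in AndroidManifest.xml typically looks like the sketch below. This follows the public Car App Library documentation rather than this sample's actual manifest, so treat the theme and attribute choices as illustrative.

```xml
<!-- Illustrative manifest entry: CarAppActivity is injected by the
     app-automotive library and marked distractionOptimized so the screen
     can stay usable while the car is in motion. -->
<activity
    android:name="androidx.car.app.activity.CarAppActivity"
    android:exported="true"
    android:launchMode="singleTask"
    android:theme="@android:style/Theme.DeviceDefault.NoActionBar">
    <intent-filter>
        <action android:name="android.intent.action.MAIN" />
        <category android:name="android.intent.category.LAUNCHER" />
    </intent-filter>
    <meta-data
        android:name="distractionOptimized"
        android:value="true" />
</activity>
```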
The library consists of approximately ten templates (list, grid, message, pane, navigation) and is used in both Android Auto (AA) and Android Automotive OS (AAOS) apps. To target AAOS, you must incorporate an additional app-automotive library, which injects the CarAppActivity into the app. The CarAppActivity must be included in the manifest and can be set as distractionOptimized. Upon launching the application, the CarAppActivity provides a surface that the automotive host uses to render the template models. Additionally, on the Harman Ignite Store, you can optionally integrate the ignite-car-app-lib, which adds supplementary templates such as explore, listdetails, routeoverview, and poistreaming.

Harman Ignite Store

The Harman Ignite Store is a white-labeled and modular AAOS-based automotive app store. By connecting app developers with car manufacturers, Harman creates unique in-vehicle experiences. The Ignite Store has a rich app ecosystem with unique content, growth opportunities, and long-term partnerships. It facilitates future-proof monetization with a Payments API powered by Samsung Checkout. After registering at the Ignite Store Developer Portal, developers can submit their apps for certification and testing by Harman. Upon approval from the OEM, the developer can proceed with publishing their app. Comprehensive developer documentation and tools are available to support app developers throughout the development process.

Payments API

The Ignite Store comes enabled with payment features, empowering developers to monetize their apps. Developers are now able to offer their apps as paid apps. The Payment SDK exposes APIs for goods and services, in-app purchases, and subscriptions. Developers can also integrate their own payment service providers (PSPs) to sell goods or services and receive the money directly in their bank account. For a frictionless in-car payment experience, Ignite provides a dedicated digital wallet app for end users to securely store their credit card information. The payment processor is powered by the industry-proven Samsung Checkout. The Developer Portal provides additional documentation on accessing Ignite AAOS emulators; VIM3, tablet, or Cuttlefish Ignite images; and additional guidelines.

Set up your environment

You will need the following:

Ignite AAOS system image running on an Android emulator or on reference devices
Android Studio (latest version recommended)
Java SE Development Kit (JDK) 11 or later

Sample code

Here is a sample code for you to start coding in this code lab. Download it and start your learning experience!

AAOS Ignite Shopping App sample code (11.7 MB)

Prepare your Ignite AAOS emulator

Add the Ignite AAOS emulator to your Android Studio by following the guide provided in the Ignite Store Developer Console. Open the Device Manager and start the emulator. Configure the emulator to use fake data for payments, as instructed in the Ignite Store Developer Console under the Offline Mode tab in the Payment SDK section.

The sample app requires navigation from point A (starting location) to point B (destination location). The destination addresses are predefined and near the San Jose McEnery Convention Center. To shorten the distance between the two locations, follow the steps below to set the starting location:

a. Open the extended controls in the emulator panel.
b. Go to Location, search for a location near the destination location, and click Set Location.

Next, in the emulator, go to the Ignite Navigation app's settings and enable the following:

Enable navigation simulation
Enable mock location provider

Then go to Settings > System > Developer options > Location and set Ignite Navigation as the mock location app.

Start your project

After downloading the sample code containing the project files, open Android Studio and click Open to open an existing project. Locate the downloaded Android project (IgniteAutomotivePaymentsSDC202488) from the directory and click OK.

Check the Ignite Payment SDK dependency

Verify that the Ignite Payment SDK library is included in the dependencies section of the module's build.gradle file:

dependencies {
    implementation files('libs/ignite-payment-sdk-3.13.0.24030417-0-SNAPSHOT.aar')
}

Add payment permission

Next, go to the manifests folder and, in the AndroidManifest.xml file, include the PAYMENT_REQUEST permission to perform in-app purchases:

<uses-permission android:name="com.harman.ignite.permission.PAYMENT_REQUEST" />

This ensures that the app has the necessary permission to perform transactions and handle sensitive financial data.

Show the payment screen

When an item is added to the cart, the shopping cart screen displays the Select Store button, the selected pickup store address, the total amount to pay, and the details of each item added. The screen also includes the Pay button. Go to kotlin+java > com.harman.ignite.pickupstore > screens > ShoppingCartScreen.kt. In the doCheckout() function, use the Car App Library's ScreenManager to navigate to the payment screen from the shopping cart screen after the Pay button is clicked:

getScreenManager().push(PaymentScreen(carContext, session))

Note: The ScreenManager class provides a screen stack you can use to push screens that can be popped automatically when the user selects a back button on the car screen or uses the hardware back button available in some cars.

Instantiate the Ignite payment client

The Ignite Payment API provides a singleton class called IgnitePaymentClientSingleton, which enables performing and tracking transactions. Navigate to the PaymentScreen.kt file and instantiate the Ignite payment client:

private val mIpc = IgnitePaymentClientSingleton.getInstance(carContext)

Define the Ignite payment transaction callback

The Ignite payment transaction provides three callback methods: onSuccess, onCanceled, and onFailure. In each callback, make sure to set the isPaymentFailed variable to track whether the payment succeeded, and update the session which owns the shopping cart screen to reflect the status of the payment transaction. Call the updateTemplate() function to invalidate the current template and create a new one with the updated information:

private val mIpctcb = object : IIgnitePaymentClientTransactionCallback {
    override fun onSuccess(
        requestUUID: String?, sessionId: String?, successMessage: String?,
        paymentAdditionalProperties: HashMap<String, String>?
    ) {
        Log.d(TAG, LOG_PREFIX + "onSuccess rid $requestUUID, sid $sessionId, sm $successMessage")
        CarToast.makeText(carContext, "Payment successful", CarToast.LENGTH_SHORT).show()
        isPaymentFailed = false
        session.paymentDone(requestUUID, sessionId, successMessage)
        updateTemplate()
    }

    override fun onCanceled(requestUUID: String?, sessionId: String?) {
        Log.d(TAG, LOG_PREFIX + "onCanceled rid $requestUUID, sid $sessionId")
        CarToast.makeText(carContext, "Payment canceled", CarToast.LENGTH_LONG).show()
        isPaymentFailed = true
        session.paymentError(requestUUID, sessionId, null)
        updateTemplate()
    }

    override fun onFailure(
        requestUUID: String?, sessionId: String?,
        walletErrorCode: Int, errorMessage: String
    ) {
        Log.d(TAG, LOG_PREFIX + "onFailure rid $requestUUID, sid $sessionId, wec $walletErrorCode, em $errorMessage")
        CarToast.makeText(carContext, "Payment failed", CarToast.LENGTH_LONG).show()
        isPaymentFailed = true
        session.paymentError(requestUUID, sessionId, errorMessage)
        updateTemplate()
    }
}

Define the Ignite payment client connection callback

The Ignite payment client needs to be connected in order to perform a payment request. Once the client connects successfully, retrieve the names of the shopping cart items and use them to create an order summary. Afterwards, construct an Ignite payment request containing the total amount, currency code, merchant ID, and details of the order summary. Then, initiate the payment process by invoking the readyToPay() function of the Ignite payment client API:

private val mIpccb = IIgnitePaymentClientConnectionCallback { connected ->
    Log.d(TAG, LOG_PREFIX + "onPaymentClientConnected $connected")
    if (connected) {
        val textSummary = session.shoppingCart.cartItems.joinToString(", ") { item -> item.name }
        val ipr = IgnitePaymentRequest.Builder()
            .setAmount(session.shoppingCart.getTotal() * 100)
            .setCurrencyCode(CurrencyCode.USD)
            .setPaymentOperation(PaymentOperation.PURCHASE)
            .setMerchantId(Constants.MERCHANT_ID)
            .setOrderSummary(
                OrderSummary.Builder()
                    .setOrderSummaryBitmapImage(
                        BitmapFactory.decodeResource(carContext.resources, session.shoppingCart.store.largeIcon)
                    )
                    .setOrderSummaryLabel1("${carContext.getString(R.string.pickupstore_app_title)} ${session.shoppingCart.store.title}")
                    .setOrderSummarySubLabel1(session.shoppingCart.store.address)
                    .setOrderSummaryLabel2(textSummary)
                    .setOrderSummarySubLabel2(carContext.getString(R.string.pickupstore_payment_sublabel2))
                    .build()
            )
            .build()
        try {
            mIpc.readyToPay(ipr, mIpctcb)
        } catch (e: Exception) {
            Log.d(TAG, LOG_PREFIX + "Payment exception $e")
            e.printStackTrace()
        } catch (e: Error) {
            Log.d(TAG, LOG_PREFIX + "Payment error $e")
            e.printStackTrace()
        }
    }
}

Start the payment process and go back to the previous screen after the transaction

Next, in the startPayment() function, connect the Ignite payment client with the connection callback to start the payment process:

mIpc.connect(mIpccb)

After the transaction is completed, the updateTemplate() function refreshes the template used in the payment screen before calling the scheduleGoBack() function. Modify the scheduleGoBack() function to navigate back to the previous template screen (the shopping cart). You can use the pop() method of the ScreenManager:

screenManager.pop()

Start the navigation to the store to collect the paid pickup

After a successful payment, the shopping cart screen shows the pickup store location, the details of the order, and the Go to Store button. Go to kotlin+java > com.harman.ignite.pickupstore > PickupStoreSession.kt. Modify the navigateToStore(geofence: Geofence) function to trigger the navigation to the pickup store when the Go to Store button is clicked. You can use an intent with CarContext.ACTION_NAVIGATE and geo-schema data (RFC 5870) containing latitude and longitude, for example geo:12.345,14.8767. To send the intent, use the carContext.startCarApp() API call:

val geoUriString = "geo:${geofence.latitude},${geofence.longitude}"
val uri = Uri.parse(geoUriString)
val navIntent = Intent(CarContext.ACTION_NAVIGATE, uri)
try {
    carContext.startCarApp(navIntent)
} catch (e: Exception) {
    Log.e(TAG, LOG_PREFIX + "navigateToStore exception starting navigation")
    e.printStackTrace()
    CarToast.makeText(carContext, "Failure starting navigation", CarToast.LENGTH_SHORT).show()
}

Run the app on the Ignite AAOS emulator

Run the pickup-store-app on the Ignite AAOS emulator. When the app starts for the first time, it requests user permissions. Click Grant Permissions, choose Allow all the time for the location permission, and click the return button. Browse the pickup store catalog and add items to the shopping cart. Open the shopping cart and click Pay. You can also change the pickup store by clicking Select Store. Check the order summary and billing information, then click Confirm and Pay to process the payment. After a successful payment, the app returns to the shopping cart screen with the updated transaction information. Click Go to Store to start the navigation to the store. The app displays a notification when the car is near the store. Click the notification to show a reference QR code to present to the store upon pickup.

You're done!

Congratulations! You have successfully achieved the goal of this code lab topic. Now, you can create an AAOS templated app that supports payments by yourself! If you're having trouble, you may download this file: AAOS Ignite Shopping App complete code (11.7 MB). Learn more by going to the Developer Console section of the Ignite Store Developer Portal.
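Three small details of the payment and navigation steps in this code lab can be exercised in plain Kotlin, with the car-app and Ignite classes stubbed out by simple types. CartItem, Geofence, and the function names below are hypothetical stand-ins, not SDK classes.

```kotlin
// Plain-Kotlin sketch of details from the payment and navigation steps.
data class CartItem(val name: String)
data class Geofence(val latitude: Double, val longitude: Double)

// Order summary text built the same way as textSummary in the connection callback.
fun orderSummaryText(items: List<CartItem>): String =
    items.joinToString(", ") { it.name }

// setAmount(total * 100): the request carries the amount in minor units (cents for USD).
fun toMinorUnits(totalMajor: Double): Long = Math.round(totalMajor * 100)

// The geo-schema URI string (RFC 5870) passed with CarContext.ACTION_NAVIGATE.
fun geoUriString(geofence: Geofence): String =
    "geo:${geofence.latitude},${geofence.longitude}"

fun main() {
    val cart = listOf(CartItem("Espresso"), CartItem("Bagel"))
    println(orderSummaryText(cart))                  // Espresso, Bagel
    println(toMinorUnits(12.99))                     // 1299
    println(geoUriString(Geofence(12.345, 14.8767))) // geo:12.345,14.8767
}
```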
Code Lab: Get Creative with Weather Data in Watch Face Studio

Objective

Learn how to create a watch face that changes dynamically based on weather conditions, using weather tags, bitmap fonts, and conditional lines.

Overview

Watch Face Studio is a graphic authoring tool that provides a straightforward and easy-to-understand way of configuring the watch movement and adding watch components without the need for any coding. You can use Watch Face Studio to create custom watch faces for Galaxy Watch and other watches running on Wear OS.

Bitmap fonts are images that can be used to replace preset dynamic elements such as timestamps, dates, text, and digital clock components. You can replace numbers, punctuation, and clock and calendar abbreviations.

Tag expressions are conditions that let you change the rotation, placement, and opacity of a component based on tag values that represent watch data, such as the date and time, battery status, steps, or weather. Your watch face changes dynamically as the tag values change, and you can configure its movement and data dynamically.

Weather tags provide a variety of information about the weather, such as the weather condition, current temperature, or UV index.

Conditional lines let you show and hide components on a watch face and control the components' behavior. Use conditional lines to change the appearance of your watch face in response to certain conditions such as step count, battery, or weather.

Set up your environment

You will need the following:

Watch Face Studio version 1.7 or higher
Galaxy Watch or a smartwatch running on Wear OS 5 or higher

Sample project

Here is a sample project for this code lab. Download it and start your learning experience!

Weather Data sample project (574.3 KB)

Start your project

To load the sample project in Watch Face Studio, click Open Project, locate the downloaded file, and select the Weather Data sample project.

Check the availability of weather data

When creating watch faces, always check the value of the [WTHR_IS_AVAIL] tag before using other weather tags. In the sample project, the watch face is already configured to display a "No info" text with a black background whenever weather data is not available. Go to the Run pane > Weather to preview how your watch face appears with and without weather data.

Note: The [WTHR_IS_AVAIL] tag checks the availability of weather data and returns either 0 (no) or 1 (yes).

Change the background using a bitmap font

Watch Face Studio allows you to use bitmap fonts for text components. You can use a bitmap font to change the background color of your watch face based on specific weather conditions, such as sunny, rainy, or snowy. Using the weather condition tag [WTHR_COND] as the value of the text field, you can change the background during snowy weather.

Select bg from the component list. From the Properties tab, go to the Font Setting section and click the gear icon of the bitmap font setting.

Note: The weather condition for bg is already set in the sample project. For this bitmap, the default font size is set to 450 pixels, which is the watch face size. In addition, the [WTHR_COND] tag has a value of 0 to 15, where the value 7 indicates snowy weather. Since the Digit tab in the bitmap font settings only provides values from 0 to 9, the remaining values (10 to 15) are added in the Custom tab.

To set the bitmap font for snowy weather, click the + sign for 7 to add a background image. Locate the image from the downloaded sample project: in the resources folder, select the bg_snow.png image.

Using a tag expression

To set the background of your watch face during rainy weather, use a tag expression. Under the bg_transparent group, select the bg_rain_transparent image component. Change the image visibility by toggling the eye icon. From the Properties tab, go to Color, and in its Opacity field click the Tags button. Input [WTHR_COND] == 6 ? 0 : -100 as the tag expression. Since the initial opacity value is 100, the expression sets the background image to 100% opacity when the weather is rainy and 0% opacity otherwise.

Note: The [WTHR_COND] tag has a value of 0 to 15, where the value 6 indicates rainy weather.

Set the weather icon using conditional lines

To add a weather icon, you can use either a bitmap font or conditional lines. In this part, use conditional lines to set the icon based on the weather condition. The conditional line is already added for all weather conditions except sunny. Click the + sign to add the conditional lines for weather. Click the show/hide weather icon to show the conditional lines for weather. Under the icon group, select the sunny image component and set its condition under sunny.

Add a temperature unit using a bitmap font

The weather data provides the current temperature in either Celsius (C) or Fahrenheit (F) as its temperature unit. Use the temperature unit tag [WTHR_TEM_UNIT] to change the text component to the unit based on preference.

Note: The temperature unit tag [WTHR_TEM_UNIT] has the following values: for Celsius, the value is 1; for Fahrenheit, the value is 2.

Use a bitmap font to set the image of the temperature unit based on the value of [WTHR_TEM_UNIT]. Under the temperature group, select unit c/f from the component list. From the Properties tab, go to the Font Setting section and click the gear icon of the bitmap font setting. To set the bitmap font for the value of 2 (Fahrenheit), click the + sign to add an image. Locate the image from the downloaded sample project: in the resources folder, select the fahrenheit.png image.

Note: In the bitmap font setting, the default font size is set to 50.

Test the watch face

Go to the Run pane > Weather to test the watch face for different weather conditions by changing the value of the condition. Adjust the weather condition value to see how the watch face changes for sunny, rainy, and snowy weather conditions. You can also check whether the temperature unit changes to either Celsius or Fahrenheit. For other weather conditions, the watch face displays the following.

You're done!

Congratulations! You have successfully achieved the goal of this code lab. Now, you can create a watch face that changes dynamically based on weather conditions in Watch Face Studio all by yourself! If you're having trouble, you may download this file: Weather Data complete project (665.6 KB). To learn more about Watch Face Studio, visit developer.samsung.com/watch-face-studio/.
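The behavior of the opacity tag expression used in this code lab, [WTHR_COND] == 6 ? 0 : -100, can be modeled in plain Kotlin. This is an illustration under one stated assumption: the expression result is added to the component's base opacity (100 here), which is how the rain layer ends up fully visible only when the condition code is 6 (rain). The constant and function names are hypothetical, not Watch Face Studio identifiers.

```kotlin
// Hypothetical model of the rain layer's effective opacity.
const val COND_RAINY = 6 // [WTHR_COND] value 6 means rainy weather
const val COND_SNOWY = 7 // [WTHR_COND] value 7 means snowy weather

fun rainLayerOpacity(weatherCond: Int, baseOpacity: Int = 100): Int {
    // "[WTHR_COND] == 6 ? 0 : -100" evaluated, then added to the base opacity.
    val adjustment = if (weatherCond == COND_RAINY) 0 else -100
    return (baseOpacity + adjustment).coerceIn(0, 100)
}

fun main() {
    println(rainLayerOpacity(COND_RAINY)) // 100: rain background fully visible
    println(rainLayerOpacity(COND_SNOWY)) // 0: hidden for any other condition
}
```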