Samsung Neural SDK
Samsung Neural SDK enables developers to efficiently execute pretrained neural networks on Samsung devices. The SDK is designed to accelerate neural network models, improving performance and making the best use of the underlying hardware components.
A deep neural network consists of several computationally intensive operations that increase latency and impact the performance of any handheld device. Samsung Neural SDK bridges the gap between neural network design and device performance, allowing network developers to focus on improving the overall user experience.
Developers can integrate their code with simple C++ APIs to deploy their trained models on device. The SDK supports popular Caffe- and TensorFlow-based model formats.
To deliver accelerated performance, the Neural SDK uses the Samsung Neural Acceleration Platform, which is tried and tested in a wide range of applications built on convolutional neural networks, such as Bixby Vision, AI Gallery, Selfie Focus Live, Shot Suggestion, Avatar, Scene Optimizer, and many more.
Support for the most popular frameworks in the machine-learning industry: Caffe and TensorFlow.
High-performance, highly accurate compute capabilities using various compute engines: CPU, GPU, and AI processor (NPU/DSP).
Support for a large number of existing pretrained models, customized models, and a rich set of operations.
Optimal usage of system resources, such as memory and power.
IP protection: highest priority given to NN model protection using industry-standard encryption methods.
Flexibility for users to choose the runtime (CPU/GPU/DSP/NPU) as the application demands.
Enables NN model developers to focus on improving the accuracy of their models to enhance user experience.
Sample benchmarking code is provided.
Release Version: 2.0
Release Date: March 30, 2020
Release Contents
SDK Libraries: Samsung Neural SDK libraries.
Sample: Sample benchmarking application.
Tools: Optimizations for deploying Caffe models with the SDK.
Documents/Tutorial: Programming guide, API reference, supported device list, and other materials.
In this release, TensorFlow models are not supported on the GPU or the Exynos NPU.
Execution of a model on the GPU may take more time on its first run because of the GPU kernel caching feature. This delay should not occur on subsequent runs.
Tools to convert models to run on the NPU are not provided with the SDK; they must be downloaded from the respective vendor sites.
Can Samsung Neural SDK be used on non-Samsung devices? No, Samsung Neural SDK is designed to run only on Samsung devices.
After converting models to the appropriate vendor formats, various network models can be run on the NPU using the SDK. Detailed documentation describing the usage is available for download.
Samsung Neural SDK employs kernel caching for faster execution on the GPU. The kernel cache files are generated on the first run and stored on the device, which takes some time. On subsequent runs, these generated files are reused for better execution speeds.
Register for a Samsung Account.
Sign up for a Samsung account, if you do not already have one.
Apply for partnership.
Complete the required information and submit the request page.
Application scenario review.
Get access to Samsung Neural SDK.
Make your own application using Samsung Neural SDK.