Samsung Neural SDK
Samsung Neural SDK enables developers to efficiently execute pretrained neural networks on Samsung devices. The SDK is designed to accelerate neural network models to improve performance and make the best use of the underlying hardware.
A deep neural network consists of several computationally intensive operations that increase latency and impact the performance of any handheld device. Samsung Neural SDK bridges the gap between neural network design and device performance, allowing network developers to focus on improving the overall user experience.
Developers can integrate their code using simple C++ APIs to deploy their trained models on device. The SDK supports model formats from the popular Caffe and TensorFlow frameworks.
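The deployment flow described above (load a converted model, select a compute unit, execute) can be sketched as follows. Note that the class and method names here (`ModelRunner`, `ComputeUnit`, `runOnce`) are hypothetical stand-ins, not the actual Samsung Neural SDK API; the stub bodies only illustrate the shape of the flow.

```cpp
// Illustrative sketch only: names are hypothetical, not the real SDK API.
#include <string>
#include <vector>

enum class ComputeUnit { CPU, GPU, NPU, DSP };

class ModelRunner {
public:
    // Load a converted model file (e.g. a Caffe or TensorFlow export).
    bool load(const std::string& path) {
        loaded_ = !path.empty();   // stub: the real SDK parses the model here
        return loaded_;
    }
    void setComputeUnit(ComputeUnit unit) { unit_ = unit; }
    // Run one inference; this stub simply echoes the input.
    std::vector<float> execute(const std::vector<float>& input) const {
        if (!loaded_) return {};
        return input;              // stub: the real SDK runs the network
    }
private:
    bool loaded_ = false;
    ComputeUnit unit_ = ComputeUnit::CPU;
};

// Typical deployment flow: load, pick a compute unit, execute once.
std::vector<float> runOnce(const std::string& modelPath, ComputeUnit unit,
                           const std::vector<float>& input) {
    ModelRunner runner;
    if (!runner.load(modelPath)) return {};
    runner.setComputeUnit(unit);
    return runner.execute(input);
}
```

In a real integration, the application would also allocate input/output buffers per the SDK's programming guide; the sketch omits that for brevity.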
Support for the most popular machine-learning frameworks: Caffe and TensorFlow.
High-performance, highly accurate compute capabilities using various compute engines: CPU, GPU, and AI processor (NPU/DSP).
Supports a large number of existing pretrained models, customized models, and a rich set of operations.
Enables optimal usage of system resources, such as memory and power.
IP protection: highest priority given to NN model protection using industry-standard encryption methods.
Flexibility for users to choose the runtime (CPU/GPU/DSP/NPU) as the application demands.
Enables NN model developers to focus on improving model accuracy to enhance the user experience.
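The runtime flexibility listed above typically means an application names a preference order and falls back when a unit is unavailable. A minimal sketch of that selection logic, assuming a hypothetical `DeviceCaps` capability query (the SDK's actual selection API may differ):

```cpp
// Hypothetical sketch of compute-unit selection with fallback.
#include <string>
#include <vector>

struct DeviceCaps {
    bool hasNpu = false;
    bool hasDsp = false;
    bool hasGpu = false;   // CPU is always assumed present
};

// Return the first available unit from a caller-supplied preference list.
std::string chooseComputeUnit(const DeviceCaps& caps,
                              const std::vector<std::string>& preference) {
    for (const std::string& unit : preference) {
        if (unit == "NPU" && caps.hasNpu) return "NPU";
        if (unit == "DSP" && caps.hasDsp) return "DSP";
        if (unit == "GPU" && caps.hasGpu) return "GPU";
        if (unit == "CPU") return "CPU";
    }
    return "CPU";  // safe default: every device can run on CPU
}
```

For example, an application that prefers the AI processor might pass `{"NPU", "GPU", "CPU"}` and still run correctly on devices without an NPU.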
Partnership Request Process
To use the Samsung Neural SDK, you must become a Samsung partner. To request partnership:
1. If prompted, log in to your Samsung Account. If you do not already have a Samsung Account, create one.
2. Enter your company and developer information.
Your name, email address, and country are filled in for you.
3. Enter information about the application for which you are applying to use the Samsung Neural SDK.
Provide the name and a description of the application, and attach documents that detail the application's features and use cases.
If you have been in contact with a Samsung representative about your application proposal, enter their contact information.
4. When you are ready to submit the request, click "Submit".
Your partnership request is reviewed. When it is approved, you receive access to the Samsung Neural SDK libraries and documentation.
5. Use the Samsung Neural SDK to develop your application.
To deliver accelerated performance, the Samsung Neural SDK uses the Samsung Neural Acceleration Platform, which has been tried and tested in a wide range of applications using convolutional neural networks, such as AI Gallery, Selfie Focus Live, Shot Suggestion, Avatar, Scene Optimizer, and many more.
Samsung Neural SDK is designed to run only on Samsung devices.
After converting models to the appropriate vendor formats, various network models can be run on the NPU using the SDK. Detailed documentation describing this usage is available for download.
Samsung Neural SDK employs kernel caching for faster execution on the GPU. The kernel cache files are generated and stored on the device during the first run, which takes some time. On subsequent runs, these cached files are reused for faster execution.
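The caching pattern described above can be sketched as a simple memoized store: the first request for a kernel pays a one-time "compile" cost, and later requests reuse the stored result. The in-memory map and the names here are illustrative only, not the SDK's actual on-device cache format.

```cpp
// Illustrative memoized kernel cache; not the SDK's real implementation.
#include <map>
#include <string>

class KernelCache {
public:
    // Return a compiled kernel for `source`, building it only on first use.
    std::string getKernel(const std::string& source) {
        auto it = cache_.find(source);
        if (it != cache_.end()) return it->second;   // fast path: cache hit
        ++compileCount_;                             // slow path: first run
        std::string binary = "compiled:" + source;   // stub for compilation
        cache_[source] = binary;
        return binary;
    }
    int compileCount() const { return compileCount_; }
private:
    std::map<std::string, std::string> cache_;
    int compileCount_ = 0;
};
```

This is why the first execution of a model on the GPU is slower: the compile step runs once per kernel, and every later run takes the fast path.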
Release Version: 2.0
Release Date: March 30, 2020
Release Contents:
- SDK Libraries: Samsung Neural SDK libraries.
- Sample: Sample benchmarking application.
- Tools: Optimizations for deploying Caffe models with the SDK.
- Documents: Includes the tutorial, programming guide, API reference, supported device list, and other materials.
In this release, TensorFlow models cannot be run on the GPU or on the Exynos NPU.
Execution of a model on the GPU may take longer on its first run because of the GPU kernel caching feature. This delay should not occur on subsequent runs.
Tools to convert models to run on the NPU are not provided with the SDK; they must be downloaded from the respective vendor sites.