Samsung Neural SDK
The Samsung Neural SDK download policy has been changed, and the SDK is no longer provided to third-party developers. We regret any inconvenience this might cause.
Samsung Neural SDK enables developers to efficiently execute pretrained neural networks on Samsung devices. The SDK accelerates neural network models to improve performance and make the best use of the underlying hardware components.
A deep neural network consists of several computationally intensive operations that increase latency and impact the performance of any handheld device. Samsung Neural SDK bridges the gap between the neural network designer and device performance, allowing network developers to focus on improving the overall user experience.
Developers can integrate their code with simple C++ APIs to deploy their trained models on device. The SDK supports model formats based on the Caffe, TensorFlow, TensorFlow Lite, ONNX, and SNF (Samsung Neural Format) frameworks.
Support for the most popular machine-learning frameworks, such as Caffe and TensorFlow.
High-performance, highly accurate computation using various compute engines: CPU, GPU, and AI processors (NPU/DSP).
Support for a large number of existing pretrained models and customized models, with a rich set of operations.
Optimal usage of system resources, such as memory and power.
IP protection: highest priority given to NN model protection, using industry-standard encryption methods.
Flexibility for users to choose the runtime (CPU/GPU/DSP/NPU) as the application demands.
Enables NN model developers to focus on improving model accuracy to enhance the user experience.
To deliver accelerated performance, the Neural SDK uses the Samsung Neural Acceleration Platform, which has been tried and tested in a wide range of applications that use convolutional neural networks, such as AI Gallery, Selfie Focus Live, Shot Suggestion, Avatar, Scene Optimizer, and many more.
Samsung Neural SDK is designed to run only on Samsung devices; it does not support devices from other manufacturers.
After converting models to the appropriate vendor formats, various network models can be run on the NPU using the SDK. Detailed documentation describing this usage is available for download.
Samsung Neural SDK employs kernel caching for faster execution on the GPU. The kernel cache files are generated and stored on the device during the first run, which takes some time. Subsequent runs reuse these cached files for better execution speed.
Release Version: 3.0
Release Date: May 24, 2021
Release Contents
SDK Libraries: Samsung Neural SDK libraries.
Sample: A sample benchmarking application.
Tools: Optimizations for deploying Caffe and Samsung Neural Format (SNF) models with the SDK.
Documents: Tutorial, programming guide, API reference, supported device list, and other materials.
TensorFlow models are not supported on the Exynos NPU, and TensorFlow Lite models are not supported on the Qualcomm NPU/DSP.
Execution of a model on the GPU may take more time on its first run because of the GPU kernel caching feature. This delay should not occur on subsequent runs.
Tools to convert models to run on the NPU are not provided with the SDK; they must be downloaded from the respective vendor sites.