The Connected Cloud: Samsung Mobile’s Multi-cloud Architecture
VP of Product Marketing
“Connected Thinking” is the theme of this year’s Samsung Developer Conference, and it is at the core of Samsung’s strategy to create intelligent platforms that connect Samsung devices to deliver unique and seamless user experiences. Internally at Samsung, this focus on connected software and services triggered the need to rethink how Samsung uses cloud infrastructure to power these connected experiences. How can the data that is the strategic foundation of many of these services be securely controlled and shared? How can connectivity and portability between key cloud providers be achieved? And, as cloud usage increases, how can cloud costs be controlled?
Just over one year ago, Samsung acquired Joyent to be the cornerstone of a new connected cloud strategy designed to allow Samsung to drive down its cloud costs, control strategic data, and enable portability across cloud providers. This post highlights two key elements of Samsung’s connected cloud strategy:
- How Samsung is leveraging Triton Private Regions from Joyent to take control of its data and lower its object storage costs.
- Patterns for using containers and container scheduling tools like Kubernetes to enable portability of software and services across cloud providers.
Taking back control of data, while lowering storage costs
Historically, Samsung has leveraged public cloud object storage services to hold the data that many of the enhanced user experiences Samsung is building depend upon. This creates two problems. First, as the amount of data being stored grows, the cost of storing that data in a public cloud simply becomes too expensive; a more cost-effective solution is needed. Second, because connected data is at the foundation of Samsung’s integrated platforms, that data is a strategic asset. Securely controlling it is a requirement.
To solve these two challenges, Samsung turned to Joyent to build and operate Triton Private Regions around the world, supporting a hybrid cloud architecture in which object storage is placed in Private Regions while unpredictable compute workloads continue to run on a public cloud.
An example: Samsung Cloud
Samsung Cloud allows users to back up and sync photos, as well as content from apps such as Samsung Notes, Calendar, and Contacts, across their devices in accordance with their preferences. Before the move to Triton Private Regions, all of the compute, storage, and mobile backend services that make up the Samsung Cloud application were deployed on a single public cloud provider’s infrastructure.
By leveraging Triton Private Regions to provide object storage for Samsung Cloud, a new connected cloud architecture is now being deployed.
Migrating to a connected cloud architecture, where Samsung Cloud’s object storage needs are served by Triton Private Regions, while its computing and mobile backend needs are served by a public cloud vendor, significantly reduces overall costs and gives Samsung back control over its data. If it had continued to push data into a public cloud vendor’s object storage service, the size and gravity of this application’s data would have effectively locked Samsung Cloud into a single provider and an unsustainable cost structure.
Creating cloud portability
Once a long-term solution to Samsung’s data storage requirements was defined, the team’s attention turned to finding a way to let applications and application components, like the mobile backend services referenced in the Samsung Cloud example above, be easily moved between cloud providers. This kind of portability is essential to support a connected cloud where independent development teams can operate in harmony with each other, and to let each team choose the cloud environment that best fits its application’s needs.
An example: Containers and Kubernetes
What are containers?
With VMs, one installs applications on a host using the operating system’s package manager. This entangles application executables, configuration, libraries, and lifecycles with each other and with the host OS. One can build immutable VM images for predictable rollouts, but VM images are heavyweight and not portable between cloud vendors.
Containers, on the other hand, use OS-level virtualization rather than hardware virtualization. Containers are isolated from each other and from the host: each has its own filesystem, cannot see the others’ processes, and can have its computational resource usage bounded. They are also far lighter and faster to build than VMs. The most popular and robust container platform is Docker.
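As a minimal sketch of what this looks like in practice, a container image for a small Node.js service can be described in a Dockerfile like the one below. The service name, port, and file layout are illustrative assumptions, not taken from a real Samsung service:

```dockerfile
# Base image: an official Node.js runtime (version is illustrative)
FROM node:8-alpine

WORKDIR /app

# Copy dependency manifests first so this layer is cached between builds
COPY package.json package-lock.json ./
RUN npm install --production

# Copy the application source into the image
COPY . .

# The port this hypothetical service listens on
EXPOSE 8080

# The single process this image runs
CMD ["node", "server.js"]
```

Running `docker build -t hello-service:1.0 .` then produces an immutable image that bundles the application with its dependencies, isolated from the host OS.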
What is Kubernetes?
Kubernetes (K8s) is an open-source system for automating deployment, scaling, and management of containerized applications. It groups the containers that make up an application into logical units for easy management and discovery. K8s was designed with microservices in mind, and microservices architectures are common in mobile and IoT applications. In simple terms, a microservices architecture decomposes a traditional multi-function, multi-tier service into many single-function services. These microservice units are best deployed in containers due to their lightweight runtime. The problem, however, is that you can end up with thousands of microservices in place of the 10-100 legacy services you had earlier, which makes orchestration and scaling very complex. Enter Kubernetes, which provides a framework for running thousands of microservices at scale. Kubernetes provides a full set of container-specific services, including networking, auto-scaling, self-healing, load balancing, authentication, and application-level grouping of resources, among its many cloud-like features.
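To make this concrete, here is a minimal, hypothetical Kubernetes manifest for one such microservice (the names, image, and ports are illustrative assumptions). The Deployment keeps three replicas of the container running, restarting any that fail (self-healing), bounds each container’s resources, and the Service load-balances traffic across the replicas:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-service            # illustrative service name
  labels:
    app: hello-service           # application-level grouping of resources
spec:
  replicas: 3                    # Kubernetes keeps 3 copies running
  selector:
    matchLabels:
      app: hello-service
  template:
    metadata:
      labels:
        app: hello-service
    spec:
      containers:
      - name: hello-service
        image: registry.example.com/hello-service:1.0   # hypothetical image
        ports:
        - containerPort: 8080
        resources:
          limits:                # bound the container's resource usage
            cpu: 500m
            memory: 256Mi
---
apiVersion: v1
kind: Service
metadata:
  name: hello-service
spec:
  selector:
    app: hello-service           # discover pods by label
  ports:
  - port: 80
    targetPort: 8080             # load-balance across the 3 replicas
```

Scaling this service is then a matter of changing `replicas` (or attaching a HorizontalPodAutoscaler), rather than provisioning and configuring new VMs.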
How do you get started with Kubernetes?
Joyent’s Kubernetes on Triton solution provides a highly resilient container management system with built-in clustering and high availability. Under the hood, the solution uses open source projects like Docker, Kubernetes, Rancher, Terraform, Ansible, and Packer to deploy and manage large-scale Kubernetes clusters. The following diagram explains the architecture of the Kubernetes on Triton solution.
How containers and Kubernetes solve the portability issue
Because containers are small and fast, one application can be packaged in each container image. This one-to-one application-to-image relationship unlocks the full benefits of containers. Immutable container images can be created at build/release time rather than at deployment time, since each application doesn’t need to be composed with the rest of the application stack, nor married to the production environment. This concept of build local, run anywhere carries a consistent environment from development into production.
Kubernetes is also emerging as the standard way to orchestrate containers and the applications deployed on them. Because containers are decoupled from the underlying infrastructure and from the host filesystem, they are portable across clouds and OS distributions. Kubernetes runs on premises on your own hardware and on every major cloud provider. This makes porting your application workloads across clouds straightforward, as long as you containerize and orchestrate them following these standards.
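As an illustrative command sequence (the context names are hypothetical, and the manifest is assumed to exist), moving a workload between clouds can be as simple as pointing kubectl at a different cluster and applying the same manifests:

```shell
# Deploy to a Kubernetes cluster running on Triton
kubectl config use-context triton-cluster
kubectl apply -f hello-service.yaml

# Deploy the identical manifests to a cluster on another cloud provider
kubectl config use-context other-cloud-cluster
kubectl apply -f hello-service.yaml
```

Because the manifests describe the application rather than the infrastructure, nothing in them needs to change when the target cluster moves from one provider to another.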
The power of a connected cloud
Putting all of these pieces together allows us to do things we couldn’t before. As an example, we can create a blueprint for a common IoT application pattern that can connect multiple devices into a seamless experience for a user, which can easily be deployed on any cloud. If you want to give it a try yourself, dig into this open source project that will give you a working IoT dashboard connected to a SmartThings IoT hub and devices. You can use it to monitor IoT devices, but it's really designed as an example of how to build containerized microservices applications in Node.js.
Joyent is the Official Cloud Provider of the Samsung Developer Conference. Come visit us at booth 201 in the Pavilion on the second floor to see how Triton Private Regions can benefit your organization.