Why Kubernetes?

Kubernetes delivers tangible benefits for development and operations teams. Releases happen faster. Applications run better. Systems heal automatically. Uptime stays high. Productivity goes up. Costs go down. Users stay happy. Kubernetes is a tool, and like any tool, whether or not you use it comes down to the benefits it provides.

Benefits

Decision Maker

You're a decision maker. The technical details are interesting, but the bottom line matters most. You want maximum performance at the lowest cost, and you're always searching for the intersection of value and impact.

What will Kubernetes do for you?

Kubernetes enables your business to adapt to a changing environment, pushing out application updates as quickly as your teams can develop and test them. It makes it easy to design a system where those updates go out without any downtime.

Kubernetes is finding a home in businesses of all kinds, including telecom and service providers, retail and manufacturing, content providers, banks, financial services, media and entertainment producers, government, and healthcare. All of these organizations have found that Kubernetes makes them more successful.

Kubernetes is open source software, and in March 2018 it became the second-largest open source project after Linux itself. As of October 2018 it had 24,500 contributors across more than 100 companies. As long as your business is using the OSS version of Kubernetes from the Cloud Native Computing Foundation and not a feature-restricted fork, your projects will never be left stranded by a framework abandoned by its creators.

Docker allows your team to package an application and run it anywhere that Docker runs. It is guaranteed to be consistent. Kubernetes carries this concept further, wiring containers together into applications and delivering inter-container communication, load balancing, monitoring, metrics, logging, and other services. It guarantees that these apps will run within any Kubernetes environment anywhere. Where cloud applications are today hardwired to services provided by cloud vendors, Kubernetes abstracts those services into pluggable modules. Applications then use these modules to provision and consume resources like storage, without concern for where the resource is located or how it was provisioned. This gives you the freedom to move between vendors and providers as your business requires.
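
As an illustration of that abstraction, the sketch below shows a PersistentVolumeClaim: the application asks for storage by size and class, and the cluster provisions it from whatever backend its administrator has wired in. The claim name and storage class here are hypothetical.

    # Illustrative PersistentVolumeClaim; the claim name and storage class are assumptions.
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: app-data
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: standard    # could be backed by EBS, GCE PD, NFS, Ceph, and so on
      resources:
        requests:
          storage: 10Gi

The application mounts the claim by name; whether that 10Gi lives in a cloud block store or on a SAN in your data center is not its concern.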

Kubernetes maximizes your investment in infrastructure, whether on-premises or in the cloud. If you choose to run applications in a hosted Kubernetes service on AWS, GCP, or Azure, you're already starting with a reduced footprint and benefiting from the cloud model of paying for what you use. The resource scheduling and monitoring built into Kubernetes let you pack applications at near-maximum density, and if any application needs more resources, Kubernetes will automatically scale the workload or the underlying infrastructure up and down to meet the need. You're only paying for what you use, and Kubernetes makes sure you're using the right amount at any given moment.
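
That density depends on telling the scheduler what each workload needs. The fragment below is a minimal, hypothetical example of per-container resource requests and limits; the scheduler uses the requests to bin-pack pods onto nodes, and the limits cap what any one container can consume.

    # Minimal illustrative Pod; the name, image, and numbers are hypothetical.
    apiVersion: v1
    kind: Pod
    metadata:
      name: billing-api
    spec:
      containers:
        - name: billing-api
          image: example.com/billing-api:1.4.2
          resources:
            requests:            # what the scheduler reserves when placing the pod
              cpu: 250m
              memory: 256Mi
            limits:              # hard ceiling enforced at runtime
              cpu: 500m
              memory: 512Mi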

Hands-on User

You're a developer or an operations engineer. You want stability, but you also want agility. You're searching for a solution that automates human tasks while still following the instructions set by the humans who control it.

What will Kubernetes do for you?

Kubernetes is driven by its API. Your CI/CD system can communicate with Kubernetes through its REST API and command-line tools to carry out actions within the cluster. You can build staging environments. You can automatically deploy new releases. You can perform staged rollouts with atomic versioning and easy rollbacks in the event of issues. You can tear down environments when they're no longer needed. Whether you use Jenkins, GitLab, CircleCI, Travis CI, or the CI/CD system of tomorrow, Kubernetes integrates with it and guarantees that tasks are executed the same way every time, without human error.
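
For instance, a pipeline might apply a manifest like the hedged sketch below on every release: the rolling-update strategy gives you the staged rollout, and a rollback is a single step ('kubectl rollout undo deployment/web-frontend') rather than a hand-built recovery procedure. The name, labels, and image tag are placeholders.

    # Illustrative Deployment a pipeline could apply on each release;
    # the name, labels, and image tag are hypothetical.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web-frontend
    spec:
      replicas: 4
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 1    # never take more than one replica down at a time
          maxSurge: 1          # allow one extra replica while the new version rolls in
      selector:
        matchLabels:
          app: web-frontend
      template:
        metadata:
          labels:
            app: web-frontend
        spec:
          containers:
            - name: web-frontend
              image: example.com/web-frontend:2.3.0    # the tag your pipeline updates per release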

Kubernetes monitors the containers running within it, performing readiness and liveness checks via TCP, HTTP, or by running a command inside the container. Only healthy pods receive traffic, and those that fail their checks for too long will be restarted by Kubernetes or moved to other nodes in the cluster.
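
Those checks are declared right next to the container. The fragment below is a hedged sketch of a container spec with an HTTP readiness probe and a TCP liveness probe; the endpoint, port, and timings are assumptions.

    # Illustrative container spec with probes; the endpoint, port, and timings are hypothetical.
    containers:
      - name: api
        image: example.com/api:1.0.0
        ports:
          - containerPort: 8080
        readinessProbe:          # hold traffic until the app reports ready
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 10
        livenessProbe:           # restart the container if it stops responding
          tcpSocket:
            port: 8080
          periodSeconds: 15
          failureThreshold: 3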

Gone are the days of ramping up server capacity in anticipation of a surge of traffic. Kubernetes is ready for Oprah, ready for a viral news story, ready for anything. It scales by deploying more copies of the pods that run your application, or it can allocate more host resources to applications where spawning more doesn't make sense. It can even scale the cluster size up if you deploy more resources than the current cluster can safely handle, or down when those resources are no longer needed.
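
Scaling out is itself declarative. The sketch below is a hypothetical HorizontalPodAutoscaler that grows and shrinks a Deployment based on CPU usage; the target name and thresholds are assumptions.

    # Illustrative HorizontalPodAutoscaler; the target Deployment and numbers are hypothetical.
    apiVersion: autoscaling/v1
    kind: HorizontalPodAutoscaler
    metadata:
      name: web-frontend
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: web-frontend
      minReplicas: 2
      maxReplicas: 20
      targetCPUUtilizationPercentage: 70    # add replicas when average CPU crosses 70 percent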

Kubernetes turns a promise of cloud computing into a reality. You don't need to know, care, or pay attention to where resources are running. You can control scheduling if you wish, but by default Kubernetes handles placement for you. It wires up the internal connections between pod replicas, the common IP address that load balances across them, and the external addresses that make those resources accessible to the world. It adds and removes resources to keep applications available as the number of replicas grows or shrinks, or as application failures occur. When you deploy a microservice-style application into a Kubernetes cluster, it builds and maintains all of the routing and communication paths for you, no matter which node your application lands on.
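
That wiring is expressed as a Service. The sketch below is hypothetical: the selector picks out healthy pods, a stable internal address load balances across them, and the LoadBalancer type asks the surrounding platform for an external address as well.

    # Illustrative Service; the name, selector, and ports are hypothetical.
    apiVersion: v1
    kind: Service
    metadata:
      name: web-frontend
    spec:
      type: LoadBalancer      # request an external address where the platform supports it
      selector:
        app: web-frontend     # traffic goes to any healthy pod carrying this label
      ports:
        - port: 80            # the address the rest of the cluster (and the world) uses
          targetPort: 8080    # the port the container actually listens on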

The Kubernetes model is declarative. You write instructions in YAML that describe the desired state of the cluster, and Kubernetes does whatever it needs to do to maintain that state. When the current state drifts from the desired state, or when you change the definition, such as asking for more or fewer pods for your application, it recognizes the difference and carries out actions to close the gap. There are no surprises. You tell it what the system should look like, and it works to reach that state and hold it until told otherwise.
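
In practice, the desired state is just a field in a manifest. In the hedged fragment below (part of a Deployment like the one sketched earlier, with hypothetical names), 'replicas: 5' is the promise: if a node dies and only four pods remain, the controller notices the difference and starts a fifth without being asked.

    # Fragment of a Deployment spec; the labels and image are hypothetical.
    spec:
      replicas: 5             # desired state: five copies, always
      selector:
        matchLabels:
          app: orders
      template:
        metadata:
          labels:
            app: orders
        spec:
          containers:
            - name: orders
              image: example.com/orders:1.1.0

Changing the number and re-applying the file is the entire scaling operation; the reconciliation loop does the rest.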

Availability

Kubernetes is everywhere. You can run it on bare metal servers or in a private cloud. You can run it on the public cloud. You can run applications within a hosted Kubernetes service. Your options are almost limitless, and this flexibility makes it a buyer's market. You decide how much control you want over your cluster and where you want to put it. With so much competition, innovation is constant: vendors build the features that make their offerings competitive, and the entire community benefits when those features make their way back into the core project.

Hosted Solutions

Spin up a Kubernetes cluster, and within a few minutes you're deploying applications into it. Providers like Google and Amazon manage the Kubernetes master and backend components for you, and they integrate external services like storage, DNS, and load balancing into the offering.

Platform Providers

Some companies, like Red Hat and Pivotal, have Kubernetes offerings that integrate tightly with their other products. These solutions offer the convenience of shopping with a single vendor, but they often carry heavy license fees and include features that are exclusive to that vendor and its products.

For some businesses, this is the right answer, but you aren't required to get your Kubernetes from the same place you get your Linux. If you want the protection of a support agreement, you can get it for Kubernetes without giving up any of the benefits of using the open source version.

Self-Managed

It's easy to deploy Kubernetes yourself on bare metal, in a private cloud, or on a public cloud. You can run it on a single virtual machine or at Google-like scale, so don't be intimidated by it.

Take the Kubernetes Challenge by deploying Kubernetes the Hard Way, or go the easy route with an installer like RKE, Kops, or Kubespray to stand up the cluster with minimal effort.
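
If you take the installer route, the configuration stays small. The sketch below is a hypothetical, minimal RKE cluster.yml: list the nodes and their roles, then run 'rke up' from a machine that can reach them over SSH. The addresses, user, and key path are placeholders.

    # Minimal illustrative RKE cluster.yml; addresses, user, and key path are hypothetical.
    nodes:
      - address: 10.0.0.10
        user: ubuntu
        role: [controlplane, etcd]
        ssh_key_path: ~/.ssh/id_rsa
      - address: 10.0.0.11
        user: ubuntu
        role: [worker]
        ssh_key_path: ~/.ssh/id_rsa
      - address: 10.0.0.12
        user: ubuntu
        role: [worker]
        ssh_key_path: ~/.ssh/id_rsa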

Multi-Cloud Management

Why not go for a little of everything? Multi-cloud deployments are the next battleground for business, and any tool that manages clusters across different providers needs a way to keep security, cost, and performance under tight control.

Rancher does this by integrating with external authentication providers like Active Directory, Azure AD, Ping, GitHub, LDAP, and others. You can then apply security policies uniformly across all of the Kubernetes environments you manage.

Rancher's Contribution to Kubernetes

Since its inception in 2014, Rancher Labs has been a leader in open source software and container solutions. When Rancher v1 came out in 2016, we quickly saw that Kubernetes was on the rise and added support for it.

We rebuilt Rancher v2 within Kubernetes, making our already-tight integration even tighter. We also developed the Rancher Kubernetes Engine (RKE) to make it easy to deploy Kubernetes clusters in any location and to keep those clusters updated.

We've made more than 500 code contributions to Kubernetes, and one of our founders sits on the governing board of the CNCF.

Businesses that use Rancher have accelerated the adoption of Kubernetes within their organizations and realize the benefits of doing so every single day.

All of our products are free and open source. Rancher believes in open source software, and we are committed to the success of Kubernetes and of every company that uses it.
