☸️Kubernetes: Revolutionizing Industries☸️

Kubernetes is a popular open source platform for container orchestration — that is, for the management of applications built out of multiple, largely self-contained runtimes called containers. Containers have become increasingly popular since the Docker containerization project launched in 2013, but large, distributed containerized applications can become increasingly difficult to coordinate. By making containerized applications dramatically easier to manage at scale, Kubernetes has become a key part of the container revolution.
What is Kubernetes?

Kubernetes is an open source project that has become one of the most popular container orchestration tools around; it allows you to deploy and manage multi-container applications at scale. While in practice Kubernetes is most often used with Docker, the most popular containerization platform, it can also work with any container system that conforms to the Open Container Initiative (OCI) standards for container image formats and runtimes. And because Kubernetes is open source, with relatively few restrictions on how it can be used, it can be used freely by anyone who wants to run containers, most anywhere they want to run them — on-premises, in the public cloud, or both.
Google and Kubernetes
Google was probably the first company to realize it needed a better way to deploy and manage its software components at global scale, and for years it ran Borg (and, later, its successor Omega) internally.
Kubernetes began life as a project within Google. It’s a successor to — though not a direct descendant of — Google Borg, an earlier container management tool that Google used internally. Google open sourced Kubernetes in 2014, in part because the distributed microservices architectures that Kubernetes facilitates make it easy to run applications in the cloud. Google sees the adoption of containers, microservices, and Kubernetes as potentially driving customers to its cloud services (although Kubernetes certainly works with Azure and AWS as well). Kubernetes is currently maintained by the Cloud Native Computing Foundation, which is itself under the umbrella of the Linux Foundation.
Kubernetes architecture: How Kubernetes works
In Kubernetes, a cluster has a master node and multiple worker nodes, and each worker node can run multiple pods. A pod is simply a group of one or more containers that are scheduled together and work as a single unit. You design your applications in terms of pods. Once your pods are ready, you submit the pod definitions to the master node, along with how many replicas you want to deploy. From that point, Kubernetes is in control: it takes the pods and schedules them onto the worker nodes.
If a worker node goes down, Kubernetes starts replacement pods on a functioning worker node. This makes managing the containers simple, and it makes it easier to build, add features, and improve the application to attain higher customer satisfaction. Finally, no matter what technology you’re invested in, Kubernetes can help you.
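To make the flow above concrete, here is a minimal sketch of the kind of definition you hand to the master node — a Deployment expressed as a plain Python dict that mirrors the YAML manifest Kubernetes actually accepts. The application name, image, and replica count are illustrative placeholders, not from any real cluster.

```python
# A minimal Deployment manifest as a Python dict, mirroring the YAML you
# would submit to the Kubernetes master (API server). Names and the image
# are illustrative placeholders.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "web-app"},
    "spec": {
        "replicas": 3,  # "how many you want to deploy"
        "selector": {"matchLabels": {"app": "web-app"}},
        "template": {  # the pod definition Kubernetes will replicate
            "metadata": {"labels": {"app": "web-app"}},
            "spec": {
                "containers": [
                    {
                        "name": "web",
                        "image": "nginx:1.25",
                        "ports": [{"containerPort": 80}],
                    }
                ]
            },
        },
    },
}

# From here, Kubernetes is in control: the scheduler places 3 copies of the
# pod template on healthy worker nodes, and restarts them elsewhere if a
# node goes down.
print(deployment["spec"]["replicas"])
```

Everything below `template` is the pod definition; everything above it tells Kubernetes how many copies to keep running and how to find them.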

Transforming a company’s IT infrastructure with Kubernetes

Since its inception, Kubernetes has enjoyed wide recognition and considerable influence, and in recent months that influence has only consolidated, driven by several factors.
The community has grown considerably. Google and Red Hat are the biggest contributors, but Meteor, CoreOS, Huawei, Mesosphere, and many more contribute as well.
In addition, Kubernetes is no longer perceived as something new to experiment with; it has earned enough credibility to be used more and more in production. In fact, by 2019 the platform was running in production at 78% of surveyed companies, up from 58% a year earlier in 2018. Companies such as Tinder, Reddit, The New York Times, Airbnb, and Pinterest have integrated the technology into their services.
As companies look to develop new applications, containers and open source are becoming central to that effort, and many see Kubernetes as the first step toward creating modern, scalable applications.
Kubernetes is a system that can be used to efficiently implement applications. As a result, it can help companies save money by using less labor to manage their IT infrastructure.
Kubernetes effectively automates container management. Containers let you package code into smaller, easier-to-ship parts, and a larger application is a package of many such containers; Kubernetes organizes those containers into units. Containerized applications can therefore be scaled automatically, so fewer people and resources are needed to manage large numbers of containers.
Kubernetes offers these capabilities to a business:
🔹Multi-cloud flexibility: As more enterprises run on multi-cloud platforms, they benefit from Kubernetes, as it easily runs any application on any public cloud service or a combination of public and private clouds.
🔹Faster time to market: Because Kubernetes encourages development teams to break down into smaller units, each focused on a single, targeted microservice, these smaller teams tend to be more agile.
🔹IT cost optimization: Kubernetes can help a company reduce infrastructure costs quite dramatically if it is operating on a large scale.
🔹Improved scalability and availability: Kubernetes serves as a critical management system that can scale an application and its infrastructure whenever the workload increases, and reduce it as the load decreases.
🔹Effective migration to the cloud: Kubernetes can handle re-hosting, re-platforming, and refactoring, offering a seamless route to move an application from on-premises infrastructure to the cloud.
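The scaling behavior described above follows a simple, documented rule: Kubernetes' Horizontal Pod Autoscaler computes the desired replica count from the ratio of the observed metric to its target. A minimal sketch of that calculation (function name and the example numbers are illustrative):

```python
import math

def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float) -> int:
    """Core Horizontal Pod Autoscaler formula:
    desired = ceil(currentReplicas * currentMetric / targetMetric)."""
    return math.ceil(current_replicas * (current_metric / target_metric))

# Load rises: 4 pods averaging 90% CPU against a 50% target -> scale up.
print(desired_replicas(4, 90.0, 50.0))  # → 8
# Load falls: 8 pods averaging 20% CPU against a 50% target -> scale down.
print(desired_replicas(8, 20.0, 50.0))  # → 4
```

The same ratio drives both directions: when the observed metric exceeds the target the cluster grows, and when load drops the replica count shrinks, which is what keeps costs tracking the workload.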
CASE STUDIES: Let’s look at some success stories now : )
1. NOKIA

Challenge:
Nokia’s core business is building telecom networks end-to-end; its main products are related to the infrastructure, such as antennas, switching equipment, and routing equipment. “As telecom vendors, we have to deliver our software to several telecom operators and put the software into their infrastructure, and each of the operators have a bit different infrastructure,” says Gergely Csatari, Senior Open Source Engineer. “There are operators who are running on bare metal. There are operators who are running on virtual machines. There are operators who are running on VMWare and OpenStack Cloud. We want to run the same product on all of these different infrastructures without changing the product itself.”
Solution:
The company decided that moving to cloud native technologies would allow teams to have infrastructure-agnostic behavior in their products. Teams at Nokia began experimenting with Kubernetes in pre-1.0 versions. “The simplicity of the label-based scheduling of Kubernetes was a sign that showed us this architecture will scale, will be stable, and will be good for our purposes,” says Csatari. The first Kubernetes-based product, the Nokia Telephony Application Server, went live in early 2018. “Now, all the products are doing some kind of re-architecture work, and they’re moving to Kubernetes.”
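The "label-based scheduling" Csatari praises boils down to matching key/value labels against selectors. A toy illustration of that matching logic — the node names, labels, and selector here are invented for the example, not from Nokia's setup:

```python
def matches(selector: dict, labels: dict) -> bool:
    """A node (or pod) matches a selector when every selector key/value
    pair is present in its labels (Kubernetes equality-based selection)."""
    return all(labels.get(key) == value for key, value in selector.items())

# Hypothetical worker nodes and their labels.
nodes = {
    "node-a": {"disktype": "ssd", "zone": "eu-west"},
    "node-b": {"disktype": "hdd", "zone": "eu-west"},
}

# A pod spec's nodeSelector: "schedule me only on SSD-backed nodes."
node_selector = {"disktype": "ssd"}

eligible = [name for name, labels in nodes.items()
            if matches(node_selector, labels)]
print(eligible)  # → ['node-a']
```

The appeal Csatari describes is exactly this simplicity: placement is a plain set-membership check over labels, which makes the scheduler's behavior easy to predict as the cluster grows.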
Impact:
Kubernetes has enabled Nokia’s foray into 5G. “When you develop something that is part of the operator’s infrastructure, you have to develop it for the future, and Kubernetes and containers are the forward-looking technologies,” says Csatari. The teams using Kubernetes are already seeing clear benefits. “By separating the infrastructure and the application layer, we have less dependencies in the system, which means that it’s easier to implement features in the application layer,” says Csatari. And because teams can test the exact same binary artifact independently of the target execution environment, “we find more errors in early phases of the testing, and we do not need to run the same tests on different target environments, like VMware, OpenStack, or bare metal,” he adds. As a result, “we save several hundred hours in every release.”
2. OpenAI

Challenge:
An artificial intelligence research lab, OpenAI needed infrastructure for deep learning that would allow experiments to be run either in the cloud or in its own data center, and to easily scale. Portability, speed, and cost were the main drivers.
Solution:
OpenAI began running Kubernetes on top of AWS in 2016, and in early 2017 migrated to Azure. OpenAI runs key experiments in fields including robotics and gaming both in Azure and in its own data centers, depending on which cluster has free capacity. “We use Kubernetes mainly as a batch scheduling system and rely on our autoscaler to dynamically scale up and down our cluster,” says Christopher Berner, Head of Infrastructure. “This lets us significantly reduce costs for idle nodes, while still providing low latency and rapid iteration.”
Impact:
The company has benefited from greater portability: “Because Kubernetes provides a consistent API, we can move our research experiments very easily between clusters,” says Berner. Being able to use its own data centers when appropriate is “lowering costs and providing us access to hardware that we wouldn’t necessarily have access to in the cloud,” he adds. “As long as the utilization is high, the costs are much lower there.” Launching experiments also takes far less time: “One of our researchers who is working on a new distributed training system has been able to get his experiment running in two or three days. In a week or two he scaled it out to hundreds of GPUs. Previously, that would have easily been a couple of months of work.”
3. Why did eBay choose Kubernetes?
Every day, eBay handles 300 billion data queries and more than 500 petabytes of data. eBay has to move massive amounts of data and manage that traffic while keeping the user experience smooth and still ensuring a secure, stable environment flexible enough to encourage innovation. In the fall of 2018, the company announced it was in the midst of a three-year plan it called “re-platforming.” Roughly 90% of eBay’s cloud technology depended on OpenStack, and the company is moving to ditch OpenStack altogether. eBay is “re-platforming” itself with Kubernetes, Docker, and Apache Kafka, a stream processing platform that increases data-handling throughput and decreases latency.
The goal is to improve the user experience, boost the productivity of its engineers and programmers, and completely revamp its data center infrastructure. The re-platforming also includes designing custom servers and rolling out a new, decentralized strategy for eBay’s data centers. Like Facebook and Microsoft, eBay is relying on open source to design its custom servers. Such an inspiring case study.

4. Bloomberg, one of the first companies to adopt Kubernetes
Bloomberg put Kubernetes into production in 2017. The aim was to bring new applications and services to users as fast as possible and to free developers from operational tasks. After evaluating offerings from many firms, they selected Kubernetes because it aligned exactly with what they were trying to solve. One key aim at Bloomberg was to make better use of existing hardware investments through Kubernetes’ features. As a result, they were able to use their hardware so efficiently that they reached utilization rates close to 90 to 95 percent (according to Andrey Rybka, head of the compute infrastructure team at Bloomberg). Nothing great comes easy, though: Kubernetes makes many things simpler only once you know how to use it. Because developers initially found it challenging, teams at Bloomberg ran many training programs around Kubernetes.
5. Adidas

Challenge:
In recent years, the adidas team was happy with its software choices from a technology perspective — but accessing all of the tools was a problem. For instance, “just to get a developer VM, you had to send a request form, give the purpose, give the title of the project, who’s responsible, give the internal cost center a call so that they can do recharges,” says Daniel Eichten, Senior Director of Platform Engineering. “The best case is you got your machine in half an hour. Worst case is half a week or sometimes even a week.”
Solution:
To improve the process, “we started from the developer point of view,” and looked for ways to shorten the time it took to get a project up and running and into the adidas infrastructure, says Senior Director of Platform Engineering Fernando Cornago. They found the solution with containerization, agile development, continuous delivery, and a cloud native platform that includes Kubernetes and Prometheus.
Impact:
Just six months after the project began, 100% of the adidas e-commerce site was running on Kubernetes. Load time for the e-commerce site was reduced by half. Releases went from every 4–6 weeks to 3–4 times a day. With 4,000 pods, 200 nodes, and 80,000 builds per month, adidas is now running 40% of its most critical, impactful systems on its cloud native platform.
6. A banking app’s must-read story of running Kubernetes in production
It is a journey that affirms you don’t have to be big to use Kubernetes. The team started its cloud native journey by splitting a massive monolith application into smaller microservices, using Ansible, Terraform, and Jenkins to spin up and deploy those microservices as a unit.
Then they started to run into scaling issues with the microservices, which meant they weren’t getting any of the benefits of microservices. So they looked for a way out of this complexity by shifting their focus from a machine-oriented to an application-oriented architecture. They chose Kubernetes, running on AWS, as the abstraction layer, so they no longer had to worry about where the containers were running; that is how they got their microservices under control and unlocked the velocity microservices promise. They also chose Kubernetes for security reasons and as a way to specify how their applications should run. Today they run more than 80 microservices in production with the help of Kubernetes.
Conclusions
Adoption of containers will continue to grow. Around the world, many CIOs and technologists have chosen Kubernetes, and it is expected to evolve much further in the coming years.
Containers are becoming increasingly popular in the software world, and Kubernetes has become the industry standard for deploying containers into production. In addition, a high growth rate is expected for Kubernetes throughout this year as well.