AZURE KUBERNETES SERVICE

Aakash Bhardwaj
4 min read · Jun 10, 2021

Kubernetes

Kubernetes is a portable, extensible, open source platform for container orchestration. It allows developers and engineers to manage containerized workloads and services through both declarative configuration and automation.
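
To make the declarative model concrete, here is a minimal sketch (the names and image are illustrative, not taken from anything later in this article): you describe the desired state of a workload in a manifest, apply it, and Kubernetes continuously works to keep the cluster converged to that state.

```shell
# Describe the desired state: three replicas of a simple web container.
# Kubernetes controllers then create and maintain the pods to match it.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web          # illustrative name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
        - name: web
          image: nginx:1.21
          ports:
            - containerPort: 80
EOF
```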

Basic benefits of Kubernetes include:

  • Run distributed systems resiliently
  • Automatically mount a storage system
  • Automated rollouts and rollbacks
  • Self-healing
  • Secret and configuration management

Azure Kubernetes Service

Azure Kubernetes Service (AKS) is a fully managed service that allows you to run Kubernetes in Azure without having to operate the Kubernetes control plane yourself. Azure manages the complex parts of running Kubernetes so you can focus on your containers. Basic features include:

  • Pay only for the nodes (VMs)
  • Easier cluster upgrades
  • Integrated with various Azure and OSS tools and services
  • Kubernetes RBAC and Azure Active Directory Integration
  • Enforce rules defined in Azure Policy across multiple clusters
  • Scale your nodes automatically with the cluster autoscaler (a minimal cluster-creation sketch follows this list)
  • Burst to even greater scale by scheduling containers on Azure Container Instances
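
As a rough illustration of the node-scaling features above, the Azure CLI can create a cluster with the autoscaler enabled. This is a hedged sketch; the resource group, cluster name, and node counts are placeholders rather than values from this article.

```shell
# Create an AKS cluster whose default node pool scales between 2 and 5 VMs.
az aks create \
  --resource-group demo-rg \
  --name demo-aks \
  --node-count 2 \
  --enable-cluster-autoscaler \
  --min-count 2 \
  --max-count 5 \
  --generate-ssh-keys
```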

Azure Kubernetes Service Benefits

  • Efficient resource utilization: The fully managed AKS makes it easy to deploy and manage containerized applications while using resources efficiently, elastically provisioning additional resources on demand without the headache of managing the Kubernetes infrastructure yourself.
  • Faster application development: Developers spend a large share of their time on bug fixing and routine maintenance. AKS handles patching, auto-upgrades, and self-healing and simplifies container orchestration, so developers spend less time on operations and more time building their applications.
  • Quicker development and integration: AKS supports auto-upgrades, monitoring, and scaling, which minimizes infrastructure maintenance and leads to faster development and integration. It can also provision additional compute resources in serverless Kubernetes within seconds, without the need to manage the underlying Kubernetes infrastructure.

Real Life Example

Logicworks is a Microsoft Azure Gold Partner that helps companies migrate their applications to Azure. In the example below, one of Logicworks' customers was looking to deploy and scale a public-facing web application on AKS in order to meet the following business requirements:

  • Achieve portability across on-premises and public clouds
  • Accelerate containerized application development
  • Unify development and operations teams on a single platform
  • Take advantage of native integration with the Azure ecosystem to achieve:
      • Enterprise-grade security
          • Azure Active Directory integration
          • Track, validate, and enforce compliance across the Azure estate and AKS clusters
          • Hardened OS images for nodes
      • Operational excellence
          • High availability and fault tolerance through the use of availability zones (a node-pool sketch follows this list)
          • Elastic provisioning of compute capacity without needing to automate and manage the underlying infrastructure
          • Insight and visibility into the AKS environment through automatically configured control-plane telemetry, log aggregation, and container health
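
For the availability-zone requirement referenced in the list above, one way to meet it is to spread a node pool across zones. A hedged sketch, with placeholder names and zone numbers:

```shell
# Add a user node pool whose VMs are distributed across three availability zones.
az aks nodepool add \
  --resource-group demo-rg \
  --cluster-name demo-aks \
  --name userpool \
  --node-count 3 \
  --zones 1 2 3
```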

Cluster Multi-Tenancy

SDLC environments are split across two clusters, isolating production from lower-level SDLC environments such as dev and stage. Within each cluster, namespaces provide the same operational benefits while reducing cost and complexity, since a separate AKS cluster does not have to be deployed for every SDLC environment.
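
A minimal sketch of what this looks like in practice, assuming the lower-level environments are simply namespaces named dev and stage (the manifest directory is hypothetical):

```shell
# One namespace per lower-level SDLC environment on the non-production cluster.
kubectl create namespace dev
kubectl create namespace stage

# Deploy the same manifests into each environment's namespace.
kubectl apply --namespace dev -f ./k8s/
kubectl apply --namespace stage -f ./k8s/
```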

Scheduling and Resource Quotas

Since multiple SDLC environments and other applications share the same cluster, it is imperative to establish scheduling constraints and resource quotas so that applications, and the services they depend on, get the resources they need to operate. Combined with the cluster autoscaler, this ensures that applications receive the resources they request and that the underlying compute infrastructure scales when they need it.
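
As an illustration, a ResourceQuota caps what a single environment's namespace can consume so it cannot starve its neighbors. The limits below are placeholders, not recommendations:

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: dev
spec:
  hard:
    requests.cpu: "4"        # total CPU the namespace may request
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "50"
EOF
```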

Azure AD Integration

AKS leverages Azure AD to authenticate and authorize users to access and initiate create, read, update, and delete (CRUD) operations against AKS clusters. Azure AD integration makes it easy to unify the Azure and Kubernetes layers of authentication and to give the right personnel the level of access they need to meet their responsibilities while adhering to the principle of least privilege.
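
A hedged sketch of both halves: enabling AKS-managed Azure AD on an existing cluster, then using Kubernetes RBAC to give an Azure AD group edit rights in a single namespace. The group object IDs and resource names are placeholders.

```shell
# Turn on AKS-managed Azure AD integration.
az aks update \
  --resource-group demo-rg \
  --name demo-aks \
  --enable-aad \
  --aad-admin-group-object-ids 00000000-0000-0000-0000-000000000000

# Bind an Azure AD group (referenced by object ID) to the built-in "edit" role in the dev namespace.
kubectl apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-team-edit
  namespace: dev
subjects:
  - kind: Group
    apiGroup: rbac.authorization.k8s.io
    name: "11111111-1111-1111-1111-111111111111"   # Azure AD group object ID
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit
EOF
```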

Pod Identities

Instead of hardcoding static credentials into our containers, Pod Identity is deployed into the default namespace and dynamically assigns managed identities to the appropriate pods, selected by label. This gives our example application the ability to write to Cosmos DB and our CI/CD pipelines the ability to deploy containers to the production and stage clusters.
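
With the AAD Pod Identity approach described here, the wiring looks roughly like this hedged sketch: an AzureIdentity resource points at a user-assigned managed identity, an AzureIdentityBinding selects pods by label, and any pod carrying the matching aadpodidbinding label can use that identity. Every ID and name below is a placeholder.

```shell
kubectl apply -f - <<'EOF'
apiVersion: aadpodidentity.k8s.io/v1
kind: AzureIdentity
metadata:
  name: cosmos-writer
spec:
  type: 0                   # 0 = user-assigned managed identity
  resourceID: /subscriptions/<subscription-id>/resourcegroups/demo-rg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/cosmos-writer
  clientID: 22222222-2222-2222-2222-222222222222
---
apiVersion: aadpodidentity.k8s.io/v1
kind: AzureIdentityBinding
metadata:
  name: cosmos-writer-binding
spec:
  azureIdentity: cosmos-writer
  selector: cosmos-writer   # pods labeled aadpodidbinding: cosmos-writer get this identity
EOF
```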

Ingress Controller

An ingress controller brings traffic into the AKS cluster by acting on ingress rules and routes, providing application services with reverse proxying, traffic routing and load balancing, and TLS termination. This lets us distribute traffic evenly across our application services to meet scalability and reliability requirements.
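
A minimal Ingress sketch using the NGINX ingress controller (the hostname, TLS secret, and backend service are hypothetical): the controller terminates TLS at the edge and forwards requests to the backing Service.

```shell
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - app.example.com
      secretName: app-example-tls   # certificate stored as a TLS secret
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello-web
                port:
                  number: 80
EOF
```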

Monitoring

Naturally, monitoring the day-to-day performance and operation of our AKS clusters is key to maintaining uptime and proactively resolving potential issues. Using AKS's toggle-based Azure Monitor integration, application services hosted on the cluster can easily be monitored and debugged with Azure Monitor.
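
The toggle in question can also be flipped from the CLI; a hedged one-liner (resource names are placeholders) that enables the Azure Monitor container-insights add-on on an existing cluster:

```shell
az aks enable-addons \
  --resource-group demo-rg \
  --name demo-aks \
  --addons monitoring
```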

Summary

Azure Kubernetes Service is a powerful service for running containers in the cloud. Best of all, you only pay for the VMs and other resources consumed, not for AKS itself, so it’s easy to try out. With the best practices described in this post and the AKS Quickstart, you should be able to launch a test cluster in under an hour and see the benefits of AKS for yourself.
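
For reference, a hedged end-to-end sketch of what "trying it out" can look like (all names are placeholders; the AKS Quickstart covers the same steps in more detail):

```shell
# Create a tiny single-node cluster, fetch kubectl credentials, and verify it.
az group create --name try-aks-rg --location eastus
az aks create --resource-group try-aks-rg --name try-aks --node-count 1 --generate-ssh-keys
az aks get-credentials --resource-group try-aks-rg --name try-aks
kubectl get nodes

# Delete the resource group when finished so you stop paying for the node VM.
az group delete --name try-aks-rg --yes --no-wait
```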
