MetaOps took on an engagement with a Public Services client to implement support for Docker containers in their existing Azure-hosted platform and to migrate 40+ existing microservices into Docker.
Broad constraints and requirements set by the Client:
- Use existing Azure Network
- Use Terraform as the primary Infrastructure as Code tool
- Use Git version control and Jenkins for CI/CD Pipelines
- Use Helm Charts for application management in Kubernetes
- Create an Internal Container Registry
- Build a private, highly available Kubernetes cluster with Ingress and RBAC
- Configure end-to-end monitoring of the solution
- Set up centralised logging with the option of forwarding logs to the Protective Monitoring System
- Create a full set of documentation to support the solution: a low-level design and operational documentation.
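The registry, private-cluster and RBAC requirements above, expressed through the mandated Terraform tooling, might be sketched roughly as follows. This is a minimal illustration using the `azurerm` provider; all resource names, SKUs, node sizes and the existing-subnet reference are hypothetical placeholders, not the client's actual configuration:

```hcl
# Internal container registry (Premium SKU supports private networking)
resource "azurerm_container_registry" "acr" {
  name                = "exampleacr" # hypothetical name
  resource_group_name = azurerm_resource_group.platform.name
  location            = azurerm_resource_group.platform.location
  sku                 = "Premium"
  admin_enabled       = false
}

# Private AKS cluster with RBAC, deployed into the client's
# existing Azure network via an existing subnet
resource "azurerm_kubernetes_cluster" "aks" {
  name                    = "example-aks" # hypothetical name
  location                = azurerm_resource_group.platform.location
  resource_group_name     = azurerm_resource_group.platform.name
  dns_prefix              = "example-aks"
  private_cluster_enabled = true

  default_node_pool {
    name           = "system"
    node_count     = 3                   # multiple nodes for high availability
    vm_size        = "Standard_D4s_v3"   # assumed size
    vnet_subnet_id = azurerm_subnet.existing.id # reuse the existing network
  }

  identity {
    type = "SystemAssigned"
  }

  role_based_access_control_enabled = true
}

# Allow the cluster's kubelet identity to pull images from the registry
resource "azurerm_role_assignment" "acr_pull" {
  scope                = azurerm_container_registry.acr.id
  role_definition_name = "AcrPull"
  principal_id         = azurerm_kubernetes_cluster.aks.kubelet_identity[0].object_id
}
```

Keeping the registry, cluster and the role assignment between them in one Terraform configuration means the whole platform can be stood up or torn down as a unit, which is what makes the ephemeral environments described later practical.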
As part of the initial engagement, the requirements were refined and an operational solution architecture was developed and agreed with the client.
The following additional technology choices were recommended by MetaOps and approved by the Client:
- Azure Container Registry (ACR) for storing Docker images
- Azure Kubernetes Service (AKS) for running containers
- Istio service mesh to provide Ingress and a TLS overlay for the Kubernetes cluster, adding security, service discovery and request routing
- Azure Log Analytics for log aggregation
- A defined monitoring stack: Azure Container Insights, Prometheus, Grafana and Kiali
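The log aggregation choice above can be sketched in the same Terraform style. Again, this is a hedged illustration: the workspace name, SKU and retention period are assumptions, not the client's real values:

```hcl
# Log Analytics workspace used as the central log aggregation point,
# from which logs could later be forwarded to a protective monitoring system
resource "azurerm_log_analytics_workspace" "logs" {
  name                = "example-logs" # hypothetical name
  location            = azurerm_resource_group.platform.location
  resource_group_name = azurerm_resource_group.platform.name
  sku                 = "PerGB2018"
  retention_in_days   = 30 # assumed retention period
}

# Azure Container Insights is enabled by pointing the AKS cluster's
# oms_agent block at this workspace, e.g. inside
# azurerm_kubernetes_cluster:
#
#   oms_agent {
#     log_analytics_workspace_id = azurerm_log_analytics_workspace.logs.id
#   }
```

With the workspace in place, Container Insights covers cluster and node health, while Prometheus, Grafana and Kiali sit alongside it for application metrics and service-mesh observability.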
The decision was made to deliver the project using the Scrum methodology, allowing the client to start using the new infrastructure and tools as soon as possible. A number of Epics were created to manage the work and to plan the delivery:
- Infrastructure pre-requisites
- Solution Build
- Containerisation and Deployment
Delivery ran in two-week sprints, with an MVP available in Sprint 3 and the full project completed by Sprint 8.
The delivery team consisted of:
- 1 Team Lead / Cloud Architect / Scrum Master
- 2 DevOps Engineers
- 1 Infrastructure Tester
The engagement delivered the following outcomes:
- The capacity of the platform has grown significantly, allowing it to handle much higher volumes of traffic than the previous “static infrastructure”
- Cloud and operational costs of running static DEV/Test environments have shrunk due to the use of ephemeral environments running in Kubernetes
- Deployments are faster, and the time to stand up an environment for testing has improved
- Decoupling of individual services sped up delivery and enabled a move towards zero-downtime Blue-Green and Canary deployment strategies