Published on 01/30/2022
Last updated on 02/05/2024
Using Deployment Markers to Enhance Observability
A Deployment is a Kubernetes object that tells the platform how to create and modify the pods running an application. Deployment objects let developers declare updates for pods and ReplicaSets while ensuring that the desired number of application instances stays available. To do so, the Deployment controller checks cluster nodes and pods for their health status and replaces failed ones.
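For readers less familiar with the object itself, here is a minimal Deployment manifest; the names (demo-app) and image are hypothetical placeholders, shown only to illustrate the declarative spec the Deployment controller reconciles:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app                      # hypothetical application name
spec:
  replicas: 3                         # desired number of pod instances to keep available
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: registry.example.com/demo-app:1.0.0   # hypothetical image and tag
          ports:
            - containerPort: 8080
```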
Teams track application deployments to assess how the application performs over time and identify optimization opportunities. To help with this, Application Performance Monitoring (APM) solutions leverage markers to record new deployments and view a list of past deployments and their respective performance.
This post explores how organizations can use deployment markers to enhance the observability of Kubernetes workloads.
What Are Deployment Markers for Kubernetes?
Deployment tracking is a significant observability mechanism that helps identify various aspects of a deployment, including when it was performed, the platform onto which it was deployed, and its effect on application performance. By offering visual indicators of a workload's events on a timeline, deployment markers allow for efficient troubleshooting of performance issues. They also make it possible to measure tangible metrics before and after a deployment, so teams can optimize changes and keep deployments from impacting other processes in production.
Importance of Deployment Markers for Kubernetes Workloads
While use cases differ across organizations, here are some common advantages of using deployment markers for Kubernetes workloads:
Health-Check APIs
When it comes to monitoring applications at scale, microservice-based architectures pose a consistent challenge. To overcome this, organizations can instrument their source code to expose API endpoints that report on the health of each service. These health checks are combined with deployment markers and other monitoring practices to keep Kubernetes applications resilient and available.
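As a sketch of how such health-check endpoints are typically wired into a Kubernetes pod spec, the fragment below assumes the service exposes hypothetical /healthz and /ready HTTP endpoints on port 8080:

```yaml
# Fragment of a Deployment's pod template (paths and port are assumed, not prescribed)
containers:
  - name: demo-app
    image: registry.example.com/demo-app:1.0.0
    livenessProbe:                 # restart the container if the app stops responding
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 15
    readinessProbe:                # keep the pod out of service endpoints until it is ready
      httpGet:
        path: /ready
        port: 8080
      periodSeconds: 5
```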
Log Formats and Catalogs
Well-formed log messages are crucial for implementing observability effectively. The loosely coupled nature of modern applications and siloed team structures, however, complicate the adoption of consistent logging practices. Deployment markers help by aggregating the logs generated by multiple services and establishing a standard format for representing the state of each service. This allows teams to maintain a centralized repository of logs for easier collaboration and a clearer view of the root cause of events.
Deployment Correlation
Because Kubernetes workloads change continuously through CI/CD, it is critical to correlate performance and availability issues with the changes shipped in each deployment. Deployment markers surface deployment activity on the APM performance-metrics timeline, making it easy to visually correlate shifts in performance with the rollout of newer application versions.
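A minimal sketch of how a rollout can be stamped with version metadata that APM tooling and kubectl can correlate against; the version and commit values are hypothetical and would normally be filled in by the CI/CD pipeline:

```yaml
# Deployment metadata fragment recording what was deployed and why
metadata:
  name: demo-app
  labels:
    app.kubernetes.io/version: "1.4.2"                            # version label dashboards can filter on
  annotations:
    kubernetes.io/change-cause: "Release 1.4.2 (commit abc1234)"  # surfaced by `kubectl rollout history`
```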
Distributed Tracing
Traditionally, logs and metrics were captured in machine- and component-centric ways, but these methods are largely inadequate for highly distributed, dynamic, microservice-based environments like Kubernetes. Instead, deployment markers can serve as unique IDs for each transaction, passed along to each microservice and written into its log data. When interfaced with an APM solution like Epsagon, this time-stamped log information can be used for practical distributed tracing.
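As an illustration only, one common way to propagate service identity and version into trace and log data is through the standard OpenTelemetry environment variables; the values below are hypothetical:

```yaml
# Container fragment: spans and log lines emitted by this pod carry the service name and version
containers:
  - name: demo-app
    image: registry.example.com/demo-app:1.4.2
    env:
      - name: OTEL_SERVICE_NAME
        value: "demo-app"
      - name: OTEL_RESOURCE_ATTRIBUTES
        value: "service.version=1.4.2,deployment.environment=production"
```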
Topology Discovery
It is important to understand the relationships and dependencies among an application's services, because the failure of one service can degrade the performance of other microservices or of the entire application. By including descriptors that spell out specific configuration requirements and security options, deployment markers help discover mappings and topology relationships between application services.
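Kubernetes' recommended labels are one way to encode such descriptors directly on each workload so that tooling can reconstruct the topology; a sketch with hypothetical values:

```yaml
# Labels applied to a workload to describe where it fits in the larger application
metadata:
  labels:
    app.kubernetes.io/name: checkout          # the service itself
    app.kubernetes.io/component: backend      # its role within the application
    app.kubernetes.io/part-of: web-store      # the higher-level application it belongs to
    app.kubernetes.io/version: "1.4.2"
    app.kubernetes.io/managed-by: helm
```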
Configuration Management
Deployment markers enable teams to build a precise knowledge base and define platform boundaries to eliminate configuration drift, where changes meant for one environment are not replicated to the others. With deployment markers, developers can assess the roles and processes for updating configurations, create a versioned repository for configuration data, and automate precise changes.
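One common pattern in this direction is to version the configuration object itself and pin the Deployment to that exact version, so any drift is explicit in the manifests; a sketch using hypothetical names:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-app-config-v42          # hypothetical versioned name; each change gets a new name
data:
  LOG_LEVEL: "info"
---
# Fragment of the Deployment's pod template pinning this exact configuration version
containers:
  - name: demo-app
    envFrom:
      - configMapRef:
          name: demo-app-config-v42  # pinned reference makes drift visible and rollbacks precise
```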
First Failure Data Capture (FFDC)
To effectively solve performance problems, teams must quickly access diagnostic information. Deployment markers let teams automatically collect information as soon as the APM detects an error. Integrating these markers into an APM solution enables the creation of instrumented libraries that produce valuable information for quicker error resolution.
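Kubernetes itself offers a small building block in this spirit, independent of any APM vendor: a container can surface its last log lines as the termination message when it crashes, so first-failure diagnostics are attached directly to the pod status. A sketch:

```yaml
# Container fragment: capture diagnostics from the first failure in the pod's status
containers:
  - name: demo-app
    image: registry.example.com/demo-app:1.4.2
    terminationMessagePath: /dev/termination-log      # file the app can write a failure summary to
    terminationMessagePolicy: FallbackToLogsOnError   # if nothing was written, use the last log lines
```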
Kubernetes Deployment Tracking Markers
For efficient performance monitoring of a Kubernetes workload, deployments can be tracked using various markers, including the following.
Version Tag
Tagging a deployment creates a locked version of the service or component. This approach allows teams to tag specific points of a deployment, including CI/CD pipelines. While listing version tags is fairly straightforward, it is important to reach a company-wide consensus, since teams have to deploy numerous packages as the system grows. Tagging can be used to monitor deployments and application performance, as well as infrastructure metrics, traces, logs, and profiles (see the sketch after this list). For Kubernetes applications in production, version tags are used to access information such as:
- Total requests by version
- Requests per second by version
- Total errors by version
- Errors per second by version
- Percentage error rate by version
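A minimal sketch of version tagging, assuming the hypothetical version 1.4.2 is stamped by the CI/CD pipeline into both the image tag and the label that these per-version metrics are grouped by:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
  labels:
    app.kubernetes.io/version: "1.4.2"           # locked version tag for this rollout
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
        app.kubernetes.io/version: "1.4.2"       # propagated to pods so metrics can be grouped per version
    spec:
      containers:
        - name: demo-app
          image: registry.example.com/demo-app:1.4.2   # image tag matches the version label
```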
Versions Deployed
Marking deployed versions enables the monitoring of all active versions deployed in the selected time interval, the first and last times that traces of the version were seen, and indicators of error types. This further helps to measure the effectiveness and health of new deployments when compared with previous versions. Through deployed-version markers, developers can set up an observability dashboard that displays information such as:
- Requests per second
- Error rate vs. total request ratio
- New active endpoints
- Total active time
- Total number of requests
- Number of errors
- Latency
Deployment Comparison
Deployment comparison markers can enable seamless observability of a deployment by offering various insightful indicators (see the sketch after this list), including:
- Comparison graphs: When monitoring multiple types of deployments, comparison graphs make it easy to visualize errors and requests across versions.
- Error comparison: Displays the errors that a version introduces or resolves.
- Endpoint comparison: Helps monitor how each version affects endpoint error rates and latency.
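One way to make such comparisons possible at the workload level is to run two versions side by side under the same app label but distinct version labels, so errors, latency, and request rates can be split per version; a sketch with hypothetical names and versions (a stable Deployment at 1.4.1 is assumed to exist alongside it):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app-canary                          # runs next to the stable demo-app Deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo-app
      app.kubernetes.io/version: "1.4.2"         # version in the selector keeps the two Deployments disjoint
  template:
    metadata:
      labels:
        app: demo-app                            # shared app label: both versions serve the same traffic
        app.kubernetes.io/version: "1.4.2"       # distinct version label: metrics split cleanly per version
    spec:
      containers:
        - name: demo-app
          image: registry.example.com/demo-app:1.4.2
```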
Using Epsagon’s Kubernetes Deployment Tracker Options
Epsagon’s end-to-end observability platform includes a Kubernetes Explorer dashboard that enables efficient deployment tracking. By offering deeper insights into Kubernetes resources, the explorer helps you evaluate the overall performance of your clusters, nodes, controllers, pods, and containers. Epsagon’s Kubernetes Explorer helps observe metrics such as:
- Application metrics: errors, latency
- Infrastructure metrics: CPU, memory, network I/O, disk
Epsagon’s Kubernetes Explorer also includes an overview of each component’s YAML file, annotated with the component’s name and version. The platform leverages deployment markers to simplify deployment tracing through efficient comparison and correlation, allowing easy access to performance information based on various metrics and events and comparing each version’s YAML file to examine the root cause of a problem.
Summary
Modern software delivery pipelines involve frequent and rapid changes, which makes correlating performance issues with the changes that introduced them a complex task. In a Kubernetes-based, cloud-native ecosystem, deployment markers enable teams to trace metrics associated with different versions of an application, offering holistic visibility of distributed services, integrations, and dependencies. Epsagon’s Kubernetes Explorer simplifies this correlation by providing information on metrics, events, and component configuration through a single pane of glass. Epsagon is cloud-native and built to solve performance bottlenecks in complex ecosystems by helping analyze request flows with payloads, metrics, and events. To learn more about how Epsagon can help your organization with intuitive discovery and visualization of performance metrics, sign up for FREE here!