People often say, "You can't manage what you can't measure." This principle holds especially true for the software development lifecycle. Pipeline observability is the practice of capturing telemetry data across the entire lifecycle to gain actionable insights and calculate key delivery metrics — as illustrated in the figure below.
By extending observability across the full software development pipeline, organizations unlock several critical capabilities.
With pipeline observability in place, conditions such as failed builds, prolonged lead times, or irregular deployment frequencies can be detected and addressed automatically. A cornerstone of this approach is the set of DORA metrics: deployment frequency, lead time for changes, change failure rate, and mean time to restore service.
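To make these metrics concrete, the following sketch derives the four DORA metrics from a handful of deployment records. The records, field names (finished, commit_time, failed), and restore durations are hypothetical sample data, not a Dynatrace schema; in practice, these values would come from SDLC events captured across your pipeline.

```python
from datetime import datetime, timedelta

# Hypothetical deployment records, e.g. derived from SDLC deployment events.
deployments = [
    {"finished": datetime(2024, 5, 2, 10), "commit_time": datetime(2024, 5, 1, 16), "failed": False},
    {"finished": datetime(2024, 5, 3, 9),  "commit_time": datetime(2024, 5, 2, 11), "failed": True},
    {"finished": datetime(2024, 5, 6, 14), "commit_time": datetime(2024, 5, 5, 9),  "failed": False},
]
# Hypothetical time it took to restore service after each failed deployment.
restore_durations = [timedelta(hours=2)]

observation_window_days = 7

# Deployment frequency: deployments per day over the observation window.
deployment_frequency = len(deployments) / observation_window_days

# Lead time for changes: average time from commit to successful deployment.
lead_times = [d["finished"] - d["commit_time"] for d in deployments if not d["failed"]]
mean_lead_time = sum(lead_times, timedelta()) / len(lead_times)

# Change failure rate: share of deployments that caused a failure.
change_failure_rate = sum(d["failed"] for d in deployments) / len(deployments)

# Mean time to restore service after a failed change.
mean_time_to_restore = sum(restore_durations, timedelta()) / len(restore_durations)

print(f"Deployment frequency: {deployment_frequency:.2f} per day")
print(f"Mean lead time: {mean_lead_time}")
print(f"Change failure rate: {change_failure_rate:.0%}")
print(f"Mean time to restore: {mean_time_to_restore}")
```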
Beyond metrics, logs and traces from pipeline executions provide valuable context for debugging failures and identifying performance bottlenecks. This enables teams to optimize their pipelines for greater efficiency, reliability, and speed.
Events within the Software Development Lifecycle (SDLC) play a pivotal role in achieving effective pipeline observability. SDLC events represent key actions that occur throughout the lifecycle — such as releasing a new software version, deploying that version, or successfully passing a performance test. These events are typically emitted by pipelines during the continuous integration (CI) or continuous delivery (CD) phases. By capturing and analyzing SDLC events, teams gain access to complete, precise, and real-time data. In essence, SDLC events serve as the foundational signals that power observability across the pipeline, enabling teams to monitor, measure, and optimize every step of the development and delivery process.
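As an illustration, an SDLC event for a finished deployment might look like the sketch below. SDLC events are commonly modeled after the CloudEvents structure (specversion, id, source, type, plus a data payload), but the field names and values here are examples rather than the authoritative schema; consult the SDLC events documentation for the fields your environment expects.

```python
# Illustrative SDLC event for a finished deployment; field names and values
# are examples only, loosely following the CloudEvents structure.
sdlc_event = {
    "specversion": "1.0",
    "id": "4c2b7f0e-0001-0002-0003-000000000001",
    "source": "ci.example.com/pipelines/checkout-service",  # emitting pipeline (example)
    "type": "com.example.deployment.finished",              # hypothetical event type
    "time": "2024-05-02T10:15:00Z",
    "data": {
        "service": "checkout-service",
        "version": "1.4.2",
        "environment": "production",
        "outcome": "success",
    },
}
```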
SDLC events unlock powerful capabilities across your software delivery process, making them a foundational element for building intelligent, resilient, and efficient engineering platforms.
The event dataflow in Dynatrace has three stages:
Capture – SDLC event data can be ingested from event streams, logs, and the API (REST endpoint); API ingestion is sketched after this list.
Process – The data processing pipeline prepares incoming software lifecycle events for analysis and reporting. Rules can be created to filter, parse, enrich, or transform events, or to assign a retention period; rules are processed in sequence.
Analyze – You can explore data stored in Grail using the Dynatrace Query Language (DQL) to discover patterns, identify anomalies and outliers, report on trends, and more. With DQL queries, you can build dashboards, charts, metrics, and reports, and you can export selected data to external tools. An example query is included in the sketch after this list.
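The sketch below ties the capture and analyze stages together. The environment URL, ingest path (/platform/ingest/v1/events.sdlc), token, and the field names used in the DQL query (event.kind, event.category) are assumptions for illustration only; verify the exact endpoint, authentication scope, and event fields against your environment's API documentation.

```python
import json
import urllib.request

# --- Capture: send one SDLC event to the REST ingest endpoint. ---
ENVIRONMENT_URL = "https://abc12345.apps.dynatrace.com"  # placeholder environment
INGEST_PATH = "/platform/ingest/v1/events.sdlc"          # assumed ingest path
API_TOKEN = "dt0s16.SAMPLE"                              # placeholder credential

event = {
    "event.provider": "ci.example.com",  # illustrative field names and values
    "event.category": "deployment",
    "service": "checkout-service",
    "version": "1.4.2",
    "outcome": "success",
}

request = urllib.request.Request(
    ENVIRONMENT_URL + INGEST_PATH,
    data=json.dumps(event).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_TOKEN}",
    },
    method="POST",
)
with urllib.request.urlopen(request) as response:
    print("Ingest status:", response.status)

# --- Analyze: a DQL query that could be run in a Notebook or dashboard tile ---
# to chart deployment frequency per day; field names are assumed, not verified.
DEPLOYMENT_FREQUENCY_QUERY = """
fetch events
| filter event.kind == "SDLC_EVENT" and event.category == "deployment"
| summarize deployments = count(), by: {day = bin(timestamp, 1d)}
"""
print(DEPLOYMENT_FREQUENCY_QUERY)
```

A pipeline step would typically emit such an event after each build, test run, or deployment, and a query like the one above could then feed a dashboard tile or an alerting rule on top of the stored events.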