unify inclusion/exclusion logic for container images #32013
base: main
Conversation
Regression Detector
Regression Detector Results
Metrics dashboard
Baseline: 8d89378
Optimization Goals: ✅ No significant changes detected
| perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
|---|---|---|---|---|---|---|
| ➖ | otel_to_otel_logs | ingress throughput | +0.91 | [+0.24, +1.58] | 1 | Logs |
| ➖ | uds_dogstatsd_to_api_cpu | % cpu utilization | +0.34 | [-0.39, +1.06] | 1 | Logs |
| ➖ | file_to_blackhole_0ms_latency_http2 | egress throughput | +0.15 | [-0.67, +0.97] | 1 | Logs |
| ➖ | quality_gate_idle | memory utilization | +0.11 | [+0.07, +0.15] | 1 | Logs bounds checks dashboard |
| ➖ | tcp_syslog_to_blackhole | ingress throughput | +0.11 | [+0.05, +0.17] | 1 | Logs |
| ➖ | file_to_blackhole_300ms_latency | egress throughput | +0.07 | [-0.57, +0.71] | 1 | Logs |
| ➖ | file_to_blackhole_100ms_latency | egress throughput | +0.04 | [-0.77, +0.85] | 1 | Logs |
| ➖ | uds_dogstatsd_to_api | ingress throughput | +0.01 | [-0.10, +0.13] | 1 | Logs |
| ➖ | file_to_blackhole_0ms_latency_http1 | egress throughput | +0.00 | [-0.74, +0.75] | 1 | Logs |
| ➖ | tcp_dd_logs_filter_exclude | ingress throughput | -0.00 | [-0.01, +0.01] | 1 | Logs |
| ➖ | file_to_blackhole_0ms_latency | egress throughput | -0.03 | [-0.78, +0.73] | 1 | Logs |
| ➖ | file_to_blackhole_1000ms_latency_linear_load | egress throughput | -0.15 | [-0.61, +0.31] | 1 | Logs |
| ➖ | file_to_blackhole_500ms_latency | egress throughput | -0.30 | [-1.06, +0.47] | 1 | Logs |
| ➖ | file_tree | memory utilization | -0.53 | [-0.65, -0.41] | 1 | Logs |
| ➖ | file_to_blackhole_1000ms_latency | egress throughput | -0.55 | [-1.34, +0.24] | 1 | Logs |
| ➖ | quality_gate_logs | % cpu utilization | -0.74 | [-3.69, +2.22] | 1 | Logs |
| ➖ | quality_gate_idle_all_features | memory utilization | -1.47 | [-1.58, -1.35] | 1 | Logs bounds checks dashboard |
Bounds Checks: ✅ Passed
| perf | experiment | bounds_check_name | replicates_passed | links |
|---|---|---|---|---|
| ✅ | file_to_blackhole_0ms_latency | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_0ms_latency | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_0ms_latency_http1 | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_0ms_latency_http1 | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_0ms_latency_http2 | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_0ms_latency_http2 | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_1000ms_latency | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_1000ms_latency_linear_load | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_100ms_latency | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_100ms_latency | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_300ms_latency | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_300ms_latency | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_500ms_latency | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_500ms_latency | memory_usage | 10/10 | |
| ✅ | quality_gate_idle | memory_usage | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_idle_all_features | memory_usage | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_logs | lost_bytes | 10/10 | |
| ✅ | quality_gate_logs | memory_usage | 10/10 | |
Explanation
Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%
Performance changes are noted in the perf column of each table:
- ✅ = significantly better comparison variant performance
- ❌ = significantly worse comparison variant performance
- ➖ = no significant change in performance
A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".
For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:
- Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.
- Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.
- Its configuration does not mark it "erratic".
CI Pass/Fail Decision
✅ Passed. All Quality Gates passed.
- quality_gate_idle, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check lost_bytes: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_idle_all_features, bounds check memory_usage: 10/10 replicas passed. Gate passed.
Package size comparison
Comparison with ancestor
Diff per package
Test changes on VM
Use this command from test-infra-definitions to manually test this PR's changes on a VM:
`inv aws.create-vm --pipeline-id=50846193 --os-family=ubuntu`
Note: This applies to commit c131e82
TODO: UPDATE THE DOCUMENTATION
What does this PR do?
This PR unifies the inclusion/exclusion filtering logic for container images.
Motivation
There are three parts that should be explained here: the current documentation, what is supported in practice, and how users are configuring filters (each is covered in its own subsection below).
These three parts are interconnected, and any change needs to be handled with care in order not to break any user setup.
Current Documentation:
Currently, the documentation indicates that container metrics and logs can be filtered for inclusion/exclusion by specifying a regex for the container name, namespace, or image name.
The documentation doesn't clarify whether the image name can include the image tag (or image digest), or whether it should strictly be the image name without the tag or the digest.
But then, the documentation gives some examples:
`image:^dockercloud/network-daemon$`
`image:ubuntu`
None of the examples given in the documentation includes the image tag or image digest.
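For reference, these filters are typically provided through the `container_exclude` / `container_include` settings in `datadog.yaml` (or the `DD_CONTAINER_EXCLUDE` / `DD_CONTAINER_INCLUDE` environment variables); for example, with the documented samples:

```yaml
# datadog.yaml -- image exclusion filters using the documented examples
container_exclude:
  - "image:^dockercloud/network-daemon$"
  - "image:ubuntu"
```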
What is supported in practice:
In practice, we make no check on whether the regex for the image name contains a tag or digest, so such filters are accepted during configuration of the agent.
We use filtering by image for 2 main purposes:
- When checking if a log should be filtered out, we use the `Image.RawName` field of the container entity in workloadmeta. `Image.RawName` contains the full image name, including the tag or digest. (example)
- When checking if a metric should be filtered out, we use the `Image.Name` field of the container entity in workloadmeta. `Image.Name` contains the image name excluding its tag or digest. (example)

We also have other use cases, but in general we don't impose any requirement on whether the image name that we match against the image filter regex includes the tag/digest or not.
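To make the two fields concrete, here is a minimal Go sketch (simplified, hypothetical types, not the actual workloadmeta definitions) of the data the two filtering paths match against:

```go
package main

import "fmt"

// ContainerImage is a simplified stand-in for the workloadmeta image entity;
// only the two fields discussed above are shown.
type ContainerImage struct {
	RawName string // full reference, tag or digest included, e.g. "bar:latest"
	Name    string // repository only, tag/digest stripped, e.g. "bar"
}

func main() {
	img := ContainerImage{RawName: "bar:latest", Name: "bar"}
	fmt.Println("log filtering matches against:   ", img.RawName)
	fmt.Println("metric filtering matches against:", img.Name)
}
```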
Due to this discrepancy, users can configure filters that result in inconsistent filtering behaviour. Here is an example:
Suppose that a user configures `DD_CONTAINER_EXCLUDE: "image:^bar$"`.
Based on the documentation, the user would expect exclusion of logs and metrics emitted by containers having the image `bar`, regardless of the image tag or digest.
However, since metric exclusion matches `Image.Name` against the regex, and log exclusion matches `Image.RawName` against the regex, only the metric exclusion will result in a match (`bar` matches `^bar$`, but `bar:latest` doesn't match `^bar$`).
A similar issue occurs if a user configures `DD_CONTAINER_EXCLUDE: "image:^bar:latest"`. In this case, logs will be excluded, but metrics won't.
Given this, we already have an inconsistency problem.
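The mismatch is easy to reproduce with plain Go regexps (a self-contained illustration, not the agent's code):

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// "image:^bar$" case: matches Image.Name but not Image.RawName.
	nameAnchored := regexp.MustCompile("^bar$")
	fmt.Println(nameAnchored.MatchString("bar"))        // true:  metrics are excluded
	fmt.Println(nameAnchored.MatchString("bar:latest")) // false: logs are not excluded

	// "image:^bar:latest" case: the reverse happens.
	tagAnchored := regexp.MustCompile("^bar:latest")
	fmt.Println(tagAnchored.MatchString("bar:latest")) // true:  logs are excluded
	fmt.Println(tagAnchored.MatchString("bar"))        // false: metrics are not excluded
}
```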
We had (at least one) support ticket regarding this problem (CONS-6500).
How users are configuring filters
Checking how users are configuring image filters, we notice that they use almost all possible ways of writing the image name regex.
Here are some examples:
`image:********@sha256:e2feace0e0f852ffa3a3b9031[REDACTED]$`
`image:906394416424.dkr.ecr.us-west-2.amazonaws.com/aws-for-fluent-bit:latest`
`image:^gcr.io/datadoghq/agent:latest$`
`image:mysql:8`
This means that some users might already have inconsistent filtering behaviour without knowing it.
Many users include the image tag in the image filter regex, and the documentation says nothing about it: it doesn't say whether this is supported or not. In practice, it is supported for logs exclusion, but not for metrics exclusion.
Summary
Based on what was explained above, it is difficult to drop support for including the image tag and/or image digest in the filter, for 2 main reasons:
- Many users already include a tag or digest in their image filters, so rejecting such filters would break existing setups.
- Matching against the tag/digest already works in practice for logs filtering, so removing it would silently change behaviour.
In order to achieve consistent filtering support without breaking any user setup, we are doing the following (a code sketch follows this list):
- If an image filter regex ends right after the image name, as in `name$` (e.g. `^nginx$` or `nginx$`), convert the regex to `name(@sha256)?:.*`, so that it also matches the image when a tag or digest is present.
- In the `IsExcluded` method of the container filter: if the `containerImage` passed in contains no `:` (i.e. no tag or digest), append `:` to it before matching.
- Callers of `IsExcluded` are expected to pass the maximum information that they have. For instance, if a component has the image name with the tag, it should include the tag when calling `IsExcluded`. (Specifically, the metric collectors should use `Image.RawName` instead of `Image.Name`.)

With this, no existing user setup will be broken, and the filtering logic will be consistent and uniform everywhere.
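Here is a minimal, runnable sketch of those steps; the function names (`convertLegacyRegex`, `isExcluded`) and the exact normalization details are illustrative, not the PR's actual code:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// convertLegacyRegex rewrites a filter that ends right after the image name
// (e.g. "^nginx$" or "nginx$") into one that also matches a trailing tag or
// digest, as described above. Patterns already containing ":" or "@" are
// assumed to be tag/digest-aware and are left untouched.
func convertLegacyRegex(pattern string) string {
	if strings.HasSuffix(pattern, "$") && !strings.ContainsAny(pattern, ":@") {
		return strings.TrimSuffix(pattern, "$") + "(@sha256)?:.*"
	}
	return pattern
}

// isExcluded appends ":" to an image reference that carries no tag or digest,
// so that converted patterns still match bare image names.
func isExcluded(filter *regexp.Regexp, containerImage string) bool {
	if !strings.ContainsAny(containerImage, ":@") {
		containerImage += ":"
	}
	return filter.MatchString(containerImage)
}

func main() {
	// "^bar$" becomes "^bar(@sha256)?:.*" and now behaves consistently.
	filter := regexp.MustCompile(convertLegacyRegex("^bar$"))
	fmt.Println(isExcluded(filter, "bar"))            // true
	fmt.Println(isExcluded(filter, "bar:latest"))     // true
	fmt.Println(isExcluded(filter, "bar@sha256:abc")) // true
	fmt.Println(isExcluded(filter, "barbaz:latest"))  // false
}
```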
Describe how you validated your changes
We need to validate that filtering works as expected.
For this we need to test several cases on a sample deployment.
We will use the following deployment as an example:
It contains 2 containers, the first one having an image with a digest, the second one having an image with a tag.
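The manifest itself is not reproduced here; a minimal sketch matching that description (the image references are placeholders, not the ones actually tested) could look like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: filtering-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: filtering-test
  template:
    metadata:
      labels:
        app: filtering-test
    spec:
      containers:
        - name: with-digest # image pinned by digest (placeholder value)
          image: nginx@sha256:6c7be49d2a11cfab9a87362ad27d447b45931e43dfdf3d7f4c92a0eb9d4c588d
        - name: with-tag # image pinned by tag
          image: redis:7.2
```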
Case 1: Default Behaviour: No filtering at all
Deploy the agent with the following:
Navigate to the metrics and logs explorers and verify that metrics and logs are reported for both containers:
This is to ensure the default behaviour is preserved.
Case 2: Filtering Out
Deploy the agent as follows:
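For example (hypothetical values, assuming the placeholder images from the deployment sketch above), set on the agent container:

```yaml
env:
  - name: DD_CONTAINER_EXCLUDE
    # With the unified logic, "image:^nginx$" also matches the
    # digest-pinned image; "image:^redis:7.2" matches the tagged one.
    value: "image:^nginx$ image:^redis:7.2"
```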
With this, metrics and logs should be excluded for both containers:
However, logs and metrics should still be visible for other containers:
Possible Drawbacks / Trade-offs
Additional Notes