DEBUG-3180 enable Ruby dynamic instrumentation tests #3531
base: main
Conversation
Force-pushed from c317b4b to 307db17
CI is failing
Force-pushed from 607738c to e56853c
Force-pushed from e56853c to 73f34a9
Ruby CI has been repaired. PHP CI is failing on all PRs.
If the feature is not implemented, the correct decorator is missing_feature, not irrelevant. Apart from that, it seems good.
https://github.com/DataDog/system-tests/blob/main/docs/edit/skip-tests.md
Follow-up after a chat with @p-datadog: tests on Ruby are not passing because there is a test that checks that no error is reported. Some tests are marked as xfailed because the feature is not implemented, but since they are still executed in the background, they cause errors that make the no-error assertion fail. If the test must be adapted, you can move forward by using:
@missing_feature(context.library == "ruby", force_skip=True)  # test must be adapted to not fail when an error is reported
Or, if errors should really not be reported, then it means the implementation must be adapted. @shurivich could you help us determine whether @p-datadog's implementation is fine and the test should be adapted, or whether no error should really be reported, even if some unsupported parts are sent through RC?
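For illustration, a minimal sketch of how that decorator could be applied. The class and method names below are placeholders, and the imports assume the usual utils helpers of this repo rather than the exact tests in this PR's diff:

```python
# Illustrative sketch only: Test_Debugger_Probe, setup_probe and test_probe are
# placeholder names, not the actual tests touched by this PR.
from utils import context, missing_feature, weblog


class Test_Debugger_Probe:
    def setup_probe(self):
        # Trigger the weblog endpoint so remote config starts.
        self.response = weblog.get("/debugger/init")

    # force_skip=True skips the setup as well, so the unsupported probe is never
    # sent to the tracer via remote config and no error ends up in the logs.
    @missing_feature(
        context.library == "ruby",
        reason="test must be adapted to not fail when an error is reported",
        force_skip=True,
    )
    def test_probe(self):
        assert self.response.status_code == 200
```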
# Therefore, we need to issue a request to the application for
# remote config to start.
response = weblog.get("/debugger/init")
assert response.status_code == 200
Sorry, I also missed that point: setup functions must never fail. If you want to add any assertion, it must be done inside a test function.
def setup_stuff(self):
    self.init_response = weblog.get("/debugger/init")

def test_stuff(self):
    assert self.init_response.status_code == 200
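If it helps, I believe the framework pairs setup and test functions by name, so setup_stuff runs for test_stuff, and a failing request then shows up as a normal test failure with the full logs.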
Why must setup never fail?
"Test functions" are instrumented by the framework to offer a friendly output on failure (logs, some variables values on assert
.
setup
functions are executed during the lifespan of the tested infra, but they are not "test functions". If they fails, you'll have a basic stack trace painful to understand
We had a bug in our setup. We fixed it, and switched to the
Motivation
Initial implementation of dynamic instrumentation in Ruby.
Changes
#3516 added the needed controller implementation for dynamic instrumentation system tests and adjusted the tests to work with the feature as implemented in Ruby. However, that PR did not turn on the tests for any tracer version.
Now that DataDog/dd-trace-rb#4098 is merged, DI is usable in Ruby once the environment variable to enable it is set, and the feature will be shipped with the next tracer version (2.8.0).
This PR enables the dynamic instrumentation tests for Ruby tracer 2.8.0 or higher.
The companion PR to this one is DataDog/dd-trace-rb#4146, which adds the scenarios to the list of scenarios to be executed in the Ruby tracer CI.
This PR also changes decorators for Ruby from missing_feature to irrelevant. The missing_feature decorator runs the setup (and the tests, expecting them to fail), which causes unsupported probes to be sent to the tracer via remote config. The tracer logs as errors that the metric/span/span decoration probe specifications are not valid/understood (while running the log probe tests, which are supported), and the system tests then fail when they assert that the application log does not contain any errors. To avoid logging these errors, we need to skip the setup, hence marking the tests as irrelevant (see the sketch below).
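As a rough sketch of the decorator change described above (the class name, probe type, and condition are placeholders rather than the exact lines of the diff):

```python
# Illustrative only: names and conditions are placeholders, not the exact diff.
from utils import context, irrelevant


class Test_Debugger_Metric_Probe:
    # Before: missing_feature still ran the setup, so the unsupported metric
    # probe was pushed to the tracer via remote config and logged as an error.
    # @missing_feature(context.library == "ruby", reason="metric probes not implemented")

    # After: irrelevant skips the setup entirely, so the probe is never sent and
    # the "no errors in the application log" assertion keeps passing.
    @irrelevant(context.library == "ruby", reason="metric probes not implemented in Ruby")
    def test_metric_probe_status(self):
        ...  # body omitted; the test is skipped entirely for Ruby
```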
Workflow
Reviewers are assigned from the codeowners file quickly. 🚀 Once your PR is reviewed, you can merge it!
🛟 #apm-shared-testing 🛟
Reviewer checklist
- If the PR title starts with [<language>], double-check that only <language> is impacted by the change
- The build-XXX-image label is present