
Merge remote-tracking branch 'origin/skip-ruby' into zachg/ruby_grpc_…
…to_http
ZStriker19 committed Oct 23, 2024
2 parents f9db456 + 9323b1c commit e8dc3ec
Showing 50 changed files with 544 additions and 383 deletions.
17 changes: 9 additions & 8 deletions .github/workflows/run-end-to-end.yml
Original file line number Diff line number Diff line change
@@ -115,10 +115,8 @@ jobs:
run: ./run.sh CROSSED_TRACING_LIBRARIES
env:
DD_API_KEY: ${{ secrets.DD_API_KEY }}
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
AWS_DEFAULT_REGION: ${{ secrets.AWS_DEFAULT_REGION }}
AWS_REGION: ${{ secrets.AWS_DEFAULT_REGION }}
SYSTEM_TESTS_AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
SYSTEM_TESTS_AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
- name: Run PROFILING scenario
if: always() && steps.build.outcome == 'success' && contains(inputs.scenarios, '"PROFILING"')
run: ./run.sh PROFILING
@@ -134,10 +132,13 @@ jobs:
run: ./run.sh INTEGRATIONS
env:
DD_API_KEY: ${{ secrets.DD_API_KEY }}
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
AWS_DEFAULT_REGION: ${{ secrets.AWS_DEFAULT_REGION }}
AWS_REGION: ${{ secrets.AWS_DEFAULT_REGION }}
- name: Run INTEGRATIONS_AWS scenario
if: always() && steps.build.outcome == 'success' && contains(inputs.scenarios, '"INTEGRATIONS_AWS"')
run: ./run.sh INTEGRATIONS_AWS
env:
DD_API_KEY: ${{ secrets.DD_API_KEY }}
SYSTEM_TESTS_AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
SYSTEM_TESTS_AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
- name: Run APM_TRACING_E2E_OTEL scenario
if: always() && steps.build.outcome == 'success' && contains(inputs.scenarios, '"APM_TRACING_E2E_OTEL"')
run: ./run.sh APM_TRACING_E2E_OTEL
6 changes: 3 additions & 3 deletions .gitlab-ci.yml
@@ -215,16 +215,16 @@ onboarding_dotnet:
matrix:
- ONBOARDING_FILTER_ENV: [dev, prod]
ONBOARDING_FILTER_WEBLOG: [test-app-dotnet]
SCENARIO: [HOST_AUTO_INJECTION_INSTALL_SCRIPT]
SCENARIO: [HOST_AUTO_INJECTION_INSTALL_SCRIPT, HOST_AUTO_INJECTION_INSTALL_SCRIPT_PROFILING]
- ONBOARDING_FILTER_ENV: [dev, prod]
ONBOARDING_FILTER_WEBLOG: [test-shell-script]
SCENARIO: [INSTALLER_AUTO_INJECTION_BLOCK_LIST]
- ONBOARDING_FILTER_ENV: [dev, prod]
ONBOARDING_FILTER_WEBLOG: [test-app-dotnet-container]
SCENARIO: [ CONTAINER_AUTO_INJECTION_INSTALL_SCRIPT]
SCENARIO: [ CONTAINER_AUTO_INJECTION_INSTALL_SCRIPT, CONTAINER_AUTO_INJECTION_INSTALL_SCRIPT_PROFILING]
- ONBOARDING_FILTER_ENV: [dev, prod]
ONBOARDING_FILTER_WEBLOG: [test-app-dotnet,test-app-dotnet-container]
SCENARIO: [INSTALLER_AUTO_INJECTION]
SCENARIO: [INSTALLER_AUTO_INJECTION, SIMPLE_AUTO_INJECTION_PROFILING]
- ONBOARDING_FILTER_ENV: [dev, prod]
ONBOARDING_FILTER_WEBLOG: [test-app-dotnet]
SCENARIO: [INSTALLER_AUTO_INJECTION_LD_PRELOAD]
10 changes: 10 additions & 0 deletions .vscode/launch.json
@@ -34,6 +34,16 @@
"justMyCode": true,
"python": "${workspaceFolder}/venv/bin/python"
},
{
"name": "Run INTEGRATIONS_AWS scenario",
"type": "python",
"request": "launch",
"module": "pytest",
"args": ["-S", "INTEGRATIONS_AWS", "-p", "no:warnings"],
"console": "integratedTerminal",
"justMyCode": true,
"python": "${workspaceFolder}/venv/bin/python"
},
{
"name": "Replay CROSSED_TRACING_LIBRARIES scenario",
"type": "python",
77 changes: 71 additions & 6 deletions docs/execute/binaries.md
@@ -9,7 +9,13 @@ But, obviously, testing validated versions of components is not really interesti

## C++ library

* Tracer: TODO
* Tracer:
There are two ways to run the C++ library tests with a custom tracer:
1. Create a file `cpp-load-from-git` in `binaries/`. Content examples:
* `https://github.com/DataDog/dd-trace-cpp@main`
* `https://github.com/DataDog/dd-trace-cpp@<COMMIT HASH>`
2. Clone the dd-trace-cpp repo inside `binaries`
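For example, option 1 can be scripted as follows; the `main` ref after the `@` is only a placeholder, substitute your branch or commit hash:

```shell
# Point system-tests at a custom dd-trace-cpp ref (option 1).
# The ref after "@" is illustrative: use your own branch or commit hash.
mkdir -p binaries
echo "https://github.com/DataDog/dd-trace-cpp@main" > binaries/cpp-load-from-git
cat binaries/cpp-load-from-git
```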

* Profiling: add a ddprof release tar to the binaries folder, then call `install_ddprof`.

## .Net library
@@ -22,15 +28,46 @@ But, obviously, testing validated versions of components is not really interesti

## Golang library

1. Under `binaries`, create a file `golang-load-from-go-get`; the content will be installed by `go get`. You can specify a specific branch of dd-trace-go you want to install for testing. Content example:
1. To test unmerged PRs locally, run the following in the `utils/build/docker/golang/parametric` directory:

```sh
go get -u gopkg.in/DataDog/dd-trace-go.v1@<commit_hash>
go mod tidy
```

* Content example:
* `gopkg.in/DataDog/dd-trace-go.v1@main` Test the main branch
* `gopkg.in/DataDog/dd-trace-go.v1@v1.67.0` Test the 1.67.0 release

2. Clone the dd-trace-go repo inside `binaries`
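The file-based approach from the content examples above can be sketched as follows (the `main` ref is illustrative):

```shell
# Write the dd-trace-go ref that `go get` should install.
mkdir -p binaries
echo "gopkg.in/DataDog/dd-trace-go.v1@main" > binaries/golang-load-from-go-get
cat binaries/golang-load-from-go-get
```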

## Java library

1. Add a valid `dd-java-agent-<VERSION>.jar` file in `binaries`. `<VERSION>` must be a valid version number.
* To use the `jar` from your *PR* find the `build_lib` step and check the artifacts tab up toward the top then add it to the folder above.
Follow these steps to run Parametric tests with a custom Java Tracer version:

1. Clone the repo and checkout to the branch you'd like to test:
```bash
git clone git@github.com:DataDog/dd-trace-java.git
cd dd-trace-java
```
By default you will be on the `master` branch; if you'd like to run system-tests against the changes on a local branch, `git checkout` that branch before proceeding.

2. Build Java Tracer artifacts
```bash
./gradlew :dd-java-agent:shadowJar :dd-trace-api:jar
```

3. Copy both artifacts into the `system-tests/binaries/` folder:
* The Java tracer agent artifact `dd-java-agent-*.jar` from `dd-java-agent/build/libs/`
* Its public API `dd-trace-api-*.jar` from `dd-trace-api/build/libs/`

Note: you should have only TWO jar files in `system-tests/binaries`. Do NOT copy the sources or javadoc jars.

4. Run Parametric tests from the `system-tests/parametric` folder:

```bash
TEST_LIBRARY=java ./run.sh test_span_sampling.py::test_single_rule_match_span_sampling_sss001
```
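Step 3 above can be scripted like this; the `../dd-trace-java` path is an assumption about where your clone lives, adjust it to your setup:

```shell
# Copy the two required jars into system-tests/binaries (paths illustrative).
DD_TRACE_JAVA=../dd-trace-java
mkdir -p binaries
cp "$DD_TRACE_JAVA"/dd-java-agent/build/libs/dd-java-agent-*.jar binaries/
cp "$DD_TRACE_JAVA"/dd-trace-api/build/libs/dd-trace-api-*.jar binaries/
# Exactly two jars should now be present:
ls binaries/*.jar
```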

## NodeJS library

@@ -44,8 +81,24 @@ But, obviously, testing validated versions of components is not really interesti

## PHP library

1. In the `build packages` stage from the `package extension` job for your PR on CircleCI find the relevant `datadog-setup.php` and `dd-library-php-*-aarch64-linux-gnu.tar.gz` file.
2. Add both files inside the `binaries` folder.
- Place `datadog-setup.php` and `dd-library-php-[X.Y.Z+commitsha]-aarch64-linux-gnu.tar.gz` (or the `x86_64` variant if you're not on ARM) in the `binaries/` folder
- You can download both files from the `build_packages/package extension` job artifacts of a CI run of your branch
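For example, assuming the two artifacts were downloaded to `~/Downloads` (an illustrative path):

```shell
# Copy the PHP installer and tracer tarball into binaries/ (paths illustrative).
mkdir -p binaries
cp ~/Downloads/datadog-setup.php binaries/
cp ~/Downloads/dd-library-php-*-aarch64-linux-gnu.tar.gz binaries/
```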

## Then run the tests

From the repo root folder:

- `./build.sh -i runner`
- `TEST_LIBRARY=php ./run.sh PARAMETRIC` or `TEST_LIBRARY=php ./run.sh PARAMETRIC -k <my_test>`

> :warning: **If you are seeing DNS resolution issues when running the tests locally**, add the following config to the Docker daemon:
```json
"dns-opts": [
"single-request"
],
```

## Python library

@@ -54,6 +107,11 @@ But, obviously, testing validated versions of components is not really interesti
2. Add a `.tar.gz` or a `.whl` file in `binaries`, pip will install it
3. Clone the dd-trace-py repo inside `binaries`

You can also run:
```bash
echo "ddtrace @ git+https://github.com/DataDog/dd-trace-py.git@<name-of-your-branch>" > binaries/python-load-from-pip
```
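Option 2 can be sketched as follows: build a wheel from a local checkout and drop it into `binaries` (the `../dd-trace-py` path is an assumption about where your clone lives):

```shell
# Build a wheel from a local dd-trace-py clone and place it in binaries/,
# where pip will pick it up.
pip wheel --no-deps -w binaries/ ../dd-trace-py
ls binaries/*.whl
```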

## Ruby library

* Create a file `ruby-load-from-bundle-add` in `binaries/`; its content will be installed by `bundle add`. Content example:
@@ -63,6 +121,13 @@ But, obviously, testing validated versions of components is not really interesti
## WAF rule set

* Copy a file `waf_rule_set` into `binaries/`

#### After testing with a custom tracer
Most ways of running system-tests with a custom tracer involve modifying the `binaries` directory. These changes alter the tracer version used by every local run, so once you're done testing with the custom tracer, make sure you **remove** it. For example, for Python:
```bash
rm -rf binaries/python-load-from-pip
```

----

Hint: for components that allow keeping the repo in `binaries`, use the command `mount --bind src dst` to mount your local repo there; any build of system-tests will then use it.
32 changes: 17 additions & 15 deletions docs/execute/troubleshooting.md
@@ -1,26 +1,25 @@
## `docker.errors.DockerException: Error while fetching server API version: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))`

Your docker engine is not started or not ready. start it, and wait.
It also happens when you do not allow the default socket to be used (see Advanced options in docker desktop).
Your docker engine is either not started or not ready. Start it, and wait a bit before trying again. This error also happens when you do not allow the default socket to be used (see Advanced options in docker desktop).

## On Mac/Parametric tests, fix "allow incoming internet connection" popup

The popup should disappear, don't worry
The popup should disappear, don't worry.

## Errors on build.sh

When running `build.sh`, you have this error :
When running `build.sh`, you have this error:

### `failed to solve: system_tests/weblog`

```
ERROR: failed to solve: system_tests/weblog: pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
```

It says it try to get `system_tests/weblog` image from docker hub because it does not exists loccaly. But a `docker images ls -a | grep weblog` says this image exists. You may not using the `default` docker buildx, try :
This error message says that the build script tried to pull the `system_tests/weblog` image from docker hub because it does not exist locally. However, `docker image ls -a | grep weblog` says that this image does exist locally. You may need to switch to the `default` docker buildx. Try:

```bash
docker buildx use default
docker context use default
```

### `open /Users/<username>/.docker/buildx/current: permission denied`
@@ -31,23 +30,23 @@ ERROR: open /Users/<username>/.docker/buildx/current: permission denied
Build step failed after 1 attempts
```

File permission on your `.docker` are not the good ones :
Adjust file permissions on your `.docker`:

```bash
sudo chown -R $(whoami) ~/.docker
```

## NodeJS weblog experiencing segfaults on Mac/Intel

In docker dashbaord, setting, general, untick `Use Virtualization Framework`. See this [Stack overflow thread](https://stackoverflow.com/questions/76735062/segmentation-fault-in-node-js-application-running-in-docker).
In the docker dashboard -> settings -> general, untick `Use Virtualization Framework`. See this [Stack overflow thread](https://stackoverflow.com/questions/76735062/segmentation-fault-in-node-js-application-running-in-docker) for more information.

## Parametric scenario : `GRPC recvmsg:Connection reset by peer`
## Parametric scenario: `GRPC recvmsg:Connection reset by peer`

The GRPC interface seems to be less stable. No other solution than retry so far.
The GRPC interface seems to be less stable. So far, the only solution is to retry.

## Parametric scenario : `Fail to bind port`
## Parametric scenario: `Fail to bind port`

Docker seems to sometimes keep a host port open, even after the container being removed. There is wait and rety mechanism, but it may be not enough. No other solution than retry so far.
Docker seems to occasionally keep a host port open, even after the container is removed. There is the wait-and-retry mechanism, but it may not be enough. So far, the only solution is to retry.

## Install python3.12 on ubuntu

@@ -56,12 +55,15 @@ Docker seems to sometimes keep a host port open, even after the container being
## Unable to start postgres instance

When executing `run.sh`, postgres can fail to start and log:

```
/usr/local/bin/docker-entrypoint.sh: line 177: /docker-entrypoint-initdb.d/init_db.sh: Permission denied
```
This may happen if your `umask` prohibits "other" access to files
(for example, it is `027` on Datadog Linux laptops). To fix it, try running:

This may happen if your `umask` prohibits "other" access to files (for example, it is `027` on Datadog Linux laptops). To fix this, try:

```bash
chmod 755 ./utils/build/docker/postgres-init-db.sh
```
then rebuild and rerun.

Then, rebuild and rerun.
