Merge pull request #5293 from sgibson91/github-outputs
Use GITHUB_OUTPUT instead of GITHUB_ENV in automation
sgibson91 authored Dec 17, 2024
2 parents 72edf0c + dde7168 commit f7cf459
Showing 3 changed files with 55 additions and 61 deletions.
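Some context on the change: GitHub deprecated the `::set-output` workflow command in October 2022, and the supported replacement for passing values between steps and jobs is to append `name=value` records to the file named by the `GITHUB_OUTPUT` environment variable, giving the writing step an `id` so the value can be read back as `steps.<id>.outputs.<name>`. A minimal sketch of that pattern in Python (a hypothetical standalone helper, not code from this repository):

```python
import os

def set_output(name: str, value: str) -> None:
    """Append one single-line output record for the current Actions step."""
    # GITHUB_OUTPUT is set by the Actions runner (or exported locally, as the
    # docs change below demonstrates).
    output_file = os.environ["GITHUB_OUTPUT"]
    with open(output_file, "a") as f:
        f.write(f"{name}={value}\n")

set_output("prod-hub-matrix-jobs", "[]")
```

In the workflow below, `generate-jobs`, `declare-failure`, and `filter-jobs` are exactly such step ids.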
97 changes: 47 additions & 50 deletions .github/workflows/deploy-hubs.yaml
@@ -1,10 +1,3 @@
-# The use of ::set-output in this workflow file was deprecated as per:
-# https://github.blog/changelog/2022-10-11-github-actions-deprecating-save-state-and-set-output-commands/
-# We have instead pivoted to writing to the GITHUB_ENV file, either by the
-# method suggested in the above blog post, or by following the below Stack
-# Overflow answer for python code: https://stackoverflow.com/a/70123641
-# More info on GITHUB_ENV: https://docs.github.com/en/actions/learn-github-actions/environment-variables

name: Deploy and test hubs

on:
@@ -83,14 +76,14 @@ jobs:
# two lists of dictionaries, which can be read by GitHub Actions as matrix jobs. The
# first set of jobs describes which clusters need their support chart and/or staging
# hub upgraded; and the second set of jobs describe which production hubs require
-# upgrading. These two lists are set as job outputs via GITHUB_ENV to be consumed
+# upgrading. These two lists are set as job outputs via GITHUB_OUTPUT to be consumed
# by the later jobs. They are also pretty-printed in a human-readable format to the
# logs, and converted into Markdown tables for posting into GitHub comments.
generate-jobs:
runs-on: ubuntu-latest
outputs:
-support-and-staging-matrix-jobs: ${{ env.support-and-staging-matrix-jobs }}
-prod-hub-matrix-jobs: ${{ env.prod-hub-matrix-jobs }}
+support-and-staging-matrix-jobs: ${{ steps.generate-jobs.outputs.support-and-staging-matrix-jobs }}
+prod-hub-matrix-jobs: ${{ steps.generate-jobs.outputs.prod-hub-matrix-jobs }}

steps:
- uses: actions/checkout@v4
@@ -173,6 +166,7 @@ jobs:
# Markdown table format to be posted on a Pull Request, if this job is triggered
# by one
- name: Generate matrix jobs
+id: generate-jobs
run: |
deployer generate helm-upgrade-jobs "${{ steps.changed-files.outputs.changed_files }}" '${{ steps.pr-labels.outputs.result }}'
@@ -188,8 +182,8 @@
- name: Upload artifacts
if: >
github.event_name == 'pull_request' &&
-(env.support-and-staging-matrix-jobs != '[]' ||
-env.prod-hub-matrix-jobs != '[]')
+(steps.generate-jobs.outputs.support-and-staging-matrix-jobs != '[]' ||
+steps.generate-jobs.outputs.prod-hub-matrix-jobs != '[]')
uses: actions/upload-artifact@v4
with:
name: pr
@@ -214,6 +208,7 @@
footer: "<{run_url}|Failing Run>"
env:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_GHA_FAILURES_WEBHOOK_URL }}

# This job upgrades the support chart, staging hub, and dask-staging hub (if present)
# for clusters in parallel, if those upgrades are required. This job needs the
# `generate-jobs` job to have completed and set an output to the
@@ -234,36 +229,36 @@
#
# If you are adding a new cluster, please remember to list it here!
outputs:
-failure_2i2c-aws-us: "${{ env.failure_2i2c-aws-us }}"
-failure_2i2c-uk: "${{ env.failure_2i2c-uk }}"
-failure_2i2c: "${{ env.failure_2i2c }}"
-failure_awi-ciroh: "${{ env.failure_awi-ciroh }}"
-failure_catalystproject-africa: "${{ env.failure_catalystproject-africa }}"
-failure_catalystproject-latam: "${{ env.failure_catalystproject-latam }}"
-failure_cloudbank: "${{ env.failure_cloudbank }}"
-failure_dubois: "${{ env.failure_dubois }}"
-failure_earthscope: "${{ env.failure_earthscope }}"
-failure_gridsst: "${{ env.failure_gridsst }}"
-failure_hhmi: "${{ env.failure_hhmi }}"
-failure_jupyter-health: "${{ env.failure_jupyter-health }}"
-failure_jupyter-meets-the-earth: "${{ env.failure_jupyter-meets-the-earth }}"
-failure_kitware: "${{ env.failure_kitware }}"
-failure_leap: "${{ env.failure_leap }}"
-failure_maap: "${{ env.failure_maap }}"
-failure_nasa-cryo: "${{ env.failure_nasa-cryo }}"
-failure_nasa-ghg: "${{ env.failure_nasa-ghg }}"
-failure_nasa-veda: "${{ env.failure_nasa-veda }}"
-failure_nmfs-openscapes: "${{ env.failure_nmfs-openscapes }}"
-failure_openscapes: "${{ env.failure_openscapes }}"
-failure_opensci: "${{ env.failure_opensci }}"
-failure_pangeo-hubs: "${{ env.failure_pangeo-hubs }}"
-failure_projectpythia: "${{ env.failure_projectpythia }}"
-failure_queensu: "${{ env.failure_queensu }}"
-failure_smithsonian: "${{ env.failure_smithsonian }}"
-failure_strudel: "${{ env.failure_strudel }}"
-failure_ubc-eoas: "${{ env.failure_ubc-eoas }}"
-failure_utoronto: "${{ env.failure_utoronto }}"
-failure_victor: "${{ env.failure_victor }}"
+failure_2i2c-aws-us: "${{ steps.declare-failure.outputs.failure_2i2c-aws-us }}"
+failure_2i2c-uk: "${{ steps.declare-failure.outputs.failure_2i2c-uk }}"
+failure_2i2c: "${{ steps.declare-failure.outputs.failure_2i2c }}"
+failure_awi-ciroh: "${{ steps.declare-failure.outputs.failure_awi-ciroh }}"
+failure_catalystproject-africa: "${{ steps.declare-failure.outputs.failure_catalystproject-africa }}"
+failure_catalystproject-latam: "${{ steps.declare-failure.outputs.failure_catalystproject-latam }}"
+failure_cloudbank: "${{ steps.declare-failure.outputs.failure_cloudbank }}"
+failure_dubois: "${{ steps.declare-failure.outputs.failure_dubois }}"
+failure_earthscope: "${{ steps.declare-failure.outputs.failure_earthscope }}"
+failure_gridsst: "${{ steps.declare-failure.outputs.failure_gridsst }}"
+failure_hhmi: "${{ steps.declare-failure.outputs.failure_hhmi }}"
+failure_jupyter-health: "${{ steps.declare-failure.outputs.failure_jupyter-health }}"
+failure_jupyter-meets-the-earth: "${{ steps.declare-failure.outputs.failure_jupyter-meets-the-earth }}"
+failure_kitware: "${{ steps.declare-failure.outputs.failure_kitware }}"
+failure_leap: "${{ steps.declare-failure.outputs.failure_leap }}"
+failure_maap: "${{ steps.declare-failure.outputs.failure_maap }}"
+failure_nasa-cryo: "${{ steps.declare-failure.outputs.failure_nasa-cryo }}"
+failure_nasa-ghg: "${{ steps.declare-failure.outputs.failure_nasa-ghg }}"
+failure_nasa-veda: "${{ steps.declare-failure.outputs.failure_nasa-veda }}"
+failure_nmfs-openscapes: "${{ steps.declare-failure.outputs.failure_nmfs-openscapes }}"
+failure_openscapes: "${{ steps.declare-failure.outputs.failure_openscapes }}"
+failure_opensci: "${{ steps.declare-failure.outputs.failure_opensci }}"
+failure_pangeo-hubs: "${{ steps.declare-failure.outputs.failure_pangeo-hubs }}"
+failure_projectpythia: "${{ steps.declare-failure.outputs.failure_projectpythia }}"
+failure_queensu: "${{ steps.declare-failure.outputs.failure_queensu }}"
+failure_smithsonian: "${{ steps.declare-failure.outputs.failure_smithsonian }}"
+failure_strudel: "${{ steps.declare-failure.outputs.failure_strudel }}"
+failure_ubc-eoas: "${{ steps.declare-failure.outputs.failure_ubc-eoas }}"
+failure_utoronto: "${{ steps.declare-failure.outputs.failure_utoronto }}"
+failure_victor: "${{ steps.declare-failure.outputs.failure_victor }}"

if: |
(github.event_name == 'push' && contains(github.ref, 'main')) &&
@@ -320,6 +315,7 @@ jobs:
deployer run-hub-health-check ${{ matrix.jobs.cluster_name }} dask-staging
- name: Declare failure status
+id: declare-failure
if: always()
shell: python
run: |
@@ -328,8 +324,8 @@
name = "${{ matrix.jobs.cluster_name }}".replace(".", "-")
failure = "${{ job.status == 'failure' }}"
-env_file = os.getenv("GITHUB_ENV")
-with open(env_file, "a") as f:
+output_file = os.getenv("GITHUB_OUTPUT")
+with open(output_file, "a") as f:
f.write(f"failure_{name}={failure}")
# https://github.com/ravsamhq/notify-slack-action
@@ -363,14 +359,15 @@
needs.generate-jobs.outputs.prod-hub-matrix-jobs != '[]'
outputs:
-filtered-prod-hub-matrix-jobs: ${{ env.filtered-prod-hub-matrix-jobs }}
+prod-hub-matrix-jobs: ${{ steps.filter-jobs.outputs.prod-hub-matrix-jobs }}

steps:
# This Python script filters out any prod hub deployment job from running
# later based on whether it's part of a cluster where a support/staging upgrade
# just failed. Data is injected into the script before it's executed via
# string literals rendered as GitHub workflow expressions.
- name: Filter prod deploy jobs to run based on failures in support/staging
+id: filter-jobs
shell: python
run: |
import os
@@ -388,9 +385,9 @@
except KeyError:
print(f"The {cluster_name} cluster wasn't found in the `upgrade-support-and-staging.outputs` list. Please add it before continuing!")
-env_file = os.getenv("GITHUB_ENV")
-with open(env_file, "a") as f:
-f.write(f"filtered-prod-hub-matrix-jobs={json.dumps(filtered_jobs)}")
+output_file = os.getenv("GITHUB_OUTPUT")
+with open(output_file, "a") as f:
+f.write(f"prod-hub-matrix-jobs={json.dumps(filtered_jobs)}")
# https://github.com/ravsamhq/notify-slack-action
# Needs to be added per job
@@ -423,12 +420,12 @@
!cancelled() &&
(github.event_name == 'push' && contains(github.ref, 'main')) &&
needs.filter-generate-jobs.result == 'success' &&
-needs.filter-generate-jobs.outputs.filtered-prod-hub-matrix-jobs != '[]'
+needs.filter-generate-jobs.outputs.prod-hub-matrix-jobs != '[]'
strategy:
# Don't stop other deployments if one fails
fail-fast: false
matrix:
-jobs: ${{ fromJson(needs.filter-generate-jobs.outputs.filtered-prod-hub-matrix-jobs) }}
+jobs: ${{ fromJson(needs.filter-generate-jobs.outputs.prod-hub-matrix-jobs) }}

steps:
- uses: actions/checkout@v4
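One subtlety worth noting in the `Declare failure status` step above: every record appended to `$GITHUB_OUTPUT` should end with a newline, otherwise a later append in the same step would fuse two records onto a single line. The step gets away without one because it writes exactly one record per run; a defensive version (a hypothetical sketch, not part of this PR) would be:

```python
import os

# Assumed inputs; in CI these come from the matrix context and job.status.
name = "2i2c-aws-us".replace(".", "-")
failure = "false"

with open(os.environ["GITHUB_OUTPUT"], "a") as f:
    # The trailing newline keeps this record separate from any later appends.
    f.write(f"failure_{name}={failure}\n")
```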
11 changes: 4 additions & 7 deletions deployer/commands/generate/helm_upgrade/jobs.py
@@ -140,14 +140,11 @@ def helm_upgrade_jobs(
# This will avoid errors trying to set CI output variables in an environment that
# doesn't exist.
ci_env = os.environ.get("CI", False)
-# The use of ::set-output was deprecated as per the below blog post and
-# instead we share variables between steps/jobs by writing them to GITHUB_ENV:
-# https://github.blog/changelog/2022-10-11-github-actions-deprecating-save-state-and-set-output-commands/
-# More info on GITHUB_ENV: https://docs.github.com/en/actions/learn-github-actions/environment-variables
-env_file = os.getenv("GITHUB_ENV")
+# More info on GITHUB_OUTPUT: https://docs.github.com/en/actions/writing-workflows/choosing-what-your-workflow-does/passing-information-between-jobs
+output_file = os.getenv("GITHUB_OUTPUT")
if ci_env:
-# Add these matrix jobs as environment variables for use in another job
-with open(env_file, "a") as f:
+# Add these matrix jobs as output variables for use in another job
+with open(output_file, "a") as f:
f.write(f"prod-hub-matrix-jobs={json.dumps(prod_hub_matrix_jobs)}\n")
f.write(
f"support-and-staging-matrix-jobs={json.dumps(support_and_staging_matrix_jobs)}\n"
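The plain `name=value` form used in `jobs.py` works here because `json.dumps()` emits a single line. If a value could ever contain newlines, `GITHUB_OUTPUT` expects a heredoc-style delimiter instead. A sketch of that variant (a hypothetical helper, not part of the deployer):

```python
import os
import uuid

def set_multiline_output(name: str, value: str) -> None:
    """Write a possibly multi-line output record to GITHUB_OUTPUT."""
    # A random delimiter avoids collisions with the value's own content.
    delimiter = uuid.uuid4().hex
    with open(os.environ["GITHUB_OUTPUT"], "a") as f:
        f.write(f"{name}<<{delimiter}\n{value}\n{delimiter}\n")

set_multiline_output("report", "line one\nline two")
```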
8 changes: 4 additions & 4 deletions docs/sre-guide/common-problems-solutions.md
@@ -193,11 +193,11 @@ as a comment and therefore requires the PR number). We set the name of the second
file with our second environment variable.

```bash
-export GITHUB_ENV=test.txt # You can call this file anything you like, it's the setting of GITHUB_ENV that's important
+export GITHUB_OUTPUT=test.txt # You can call this file anything you like, it's the setting of GITHUB_OUTPUT that's important
```

-This mimics the GitHub Actions environment where a `GITHUB_ENV` file is available
-to store and share environment variables across steps/jobs, and this will be where
+This mimics the GitHub Actions environment where a `GITHUB_OUTPUT` file is available
+to store and share output variables across steps/jobs, and this will be where
our JSON formatted job matrices will be written to.

Now we're set up, we can run:
@@ -216,7 +216,7 @@ Where to find a list of changed files from GitHub Actions logs
```

Once you have executed the command, the JSON formatted job matrices will be available
-in the file set by `GITHUB_ENV` in the following form:
+in the file set by `GITHUB_OUTPUT` in the following form:

```text
prod-hub-matrix-jobs=<JSON formatted array>
```
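Putting the docs section above together, a local round trip might look like the following sketch, assuming the `deployer` CLI is installed and using an illustrative changed-files argument (take real paths from the Actions logs):

```python
import json
import os
import subprocess

# Mimic the Actions runner: point GITHUB_OUTPUT at a scratch file.
os.environ["GITHUB_OUTPUT"] = "test.txt"
open("test.txt", "w").close()  # start from an empty file

# The child process inherits the exported GITHUB_OUTPUT variable.
subprocess.run(
    ["deployer", "generate", "helm-upgrade-jobs", "config/clusters/2i2c/cluster.yaml"],
    check=True,
)

# Parse the name=value records back out to inspect the generated matrices.
outputs = {}
with open("test.txt") as f:
    for line in f:
        key, _, value = line.rstrip("\n").partition("=")
        outputs[key] = value

print(json.loads(outputs["prod-hub-matrix-jobs"]))
```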
