FluSight 2023-2024

This repository is designed to collect forecast data for the 2023-2024 FluSight collaborative exercise run by the US CDC. This project collects forecasts for weekly new hospitalizations due to confirmed influenza. Anyone interested in using these data for additional research or publications is requested to contact [email protected] for information regarding attribution of the source forecasts.

Nowcasts and Forecasts of Confirmed Influenza Hospital Admissions During the 2023-2024 Influenza Season

Influenza-related hospitalizations are a major contributor to the overall burden of influenza in the United States. Accurate predictions of influenza hospital admissions will help support appropriate public health planning and interventions during the 2023-2024 season as COVID-19, RSV, and other respiratory pathogens continue to circulate. CDC will coordinate a collaborative nowcasting and forecasting challenge for weekly laboratory confirmed influenza hospital admissions during the 2023-2024 influenza season, currently planned to begin October 11, 2023.

Each week during the challenge (October through May 1), participating teams will be asked to provide national- and jurisdiction-specific (all 50 states, Washington DC, and Puerto Rico) probabilistic nowcasts and forecasts of the weekly number of confirmed influenza hospital admissions during the preceding week, the current week, and the following three weeks. This predicted timespan will include the four weeks after the most recent hospital admissions data are officially released by CDC on healthdata.gov (details here). Prediction activities may begin earlier or end later depending on reported influenza activity. Teams can but are not required to submit predictions for all week horizons or for all locations.

Predictions will be compared with the number of confirmed influenza admissions (Field #34) from the COVID-19 Reported Patient Impact and Hospital Capacity by State Timeseries, aggregated to the weekly scale. This field remains mandatory through April 2024. Previously collected influenza data from the 2020-2021 and 2022-2023 influenza seasons (Fields 33-35) and the number of hospitals reporting these data each day are included in the COVID-19 Reported Patient Impact and Hospital Capacity by State Timeseries dataset and the facility-level dataset, respectively. Note that in the facility-level dataset, data values less than 4 are suppressed.

Dates: The Challenge Period will begin October 11, 2023, and will run until Wednesday, May 1, 2024. Participants are currently asked to submit weekly nowcasts and forecasts by 11 PM Eastern Time each Wednesday (herein referred to as the Forecast Due Date). The Forecast Due Date has been designated based on the release of hospitalization data on Wednesdays. In the event that the timeline of data availability changes, FluSight may change the day of the week that forecasts are due; in this case, participants would be notified at least one week in advance. Weekly submissions (including file names) will be specified in terms of the reference date, which is the Saturday following the Forecast Due Date. The reference date is the last day of the epidemiological week (EW) (Sunday to Saturday) containing the Forecast Due Date.
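To make the reference-date convention concrete, here is a minimal R sketch (using the MMWRweek package mentioned below) that derives the reference date from a Forecast Due Date; the example date is illustrative only.

```r
# Minimal sketch: derive the reference date (the Saturday ending the EW that
# contains the Forecast Due Date). The example due date is illustrative only.
library(MMWRweek)

forecast_due_date <- as.Date("2023-11-15")    # a Wednesday deadline (example)

ew <- MMWRweek(forecast_due_date)             # MMWR year / week / day of that date
reference_date <- MMWRweek2Date(ew$MMWRyear, ew$MMWRweek, MMWRday = 7)  # day 7 = Saturday

reference_date
#> [1] "2023-11-18"
```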

Prediction Targets: Participating teams are asked to provide national- and jurisdiction-specific (all 50 states, Washington DC, and Puerto Rico) predictions for two groups of targets: 1) quantile predictions for weekly laboratory confirmed influenza hospital admissions, and 2) category probability predictions for the direction and magnitude of changes in hospitalization rates per 100k population. Both target groups are optional.

For the first target group, teams will submit quantile nowcasts and forecasts of the weekly number of confirmed influenza hospital admissions for the epidemiological week (EW) ending on the reference date as well as the three following weeks. Hindcasts may also be submitted for the preceding week (see Note below). Teams can but are not required to submit forecasts for all weekly horizons or for all locations. The evaluation data for forecasts will be the weekly aggregate of confirmed influenza admission data (field name: previous_day_admission_influenza_confirmed, further described in the entry for Field #34 in this pdf) from the COVID-19 Reported Patient Impact and Hospital Capacity by State Timeseries; see the "Data processing" section in the target-data subdirectory for details on weekly aggregation. In contrast to COVID-19 hospitalization forecasting activities, influenza hospitalization forecasts will be made for the weekly total of confirmed influenza admissions, rather than for individual days. We will use the specification of EWs defined by the CDC, which run Sunday through Saturday. The target end date for a prediction is the Saturday that ends an EW of interest, and can be calculated using the expression: target end date = reference date + horizon * (7 days).
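As a hedged illustration of the weekly aggregation and the target-end-date expression, the R sketch below rolls the daily previous_day_admission_influenza_confirmed field up to EW totals ending on Saturdays. The data frame daily_hhs_data and its date and state column names are assumptions for illustration, not the documented schema of the timeseries file.

```r
# Sketch only: aggregate daily confirmed-influenza admissions to MMWR weekly totals.
# `daily_hhs_data`, `date`, and `state` are assumed names, not a documented schema.
library(dplyr)
library(MMWRweek)

weekly <- daily_hhs_data %>%
  mutate(
    admission_date  = as.Date(date) - 1,   # previous_day_* fields report the prior day's admissions
    target_end_date = admission_date + (7 - MMWRweek(admission_date)$MMWRday)  # Saturday ending the EW
  ) %>%
  group_by(state, target_end_date) %>%
  summarise(weekly_flu_admissions = sum(previous_day_admission_influenza_confirmed),
            .groups = "drop")

# Target end date for a given horizon, per the expression above:
target_end_date_for <- function(reference_date, horizon) reference_date + horizon * 7
```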

There are standard software packages to convert from dates to epidemic weeks and vice versa (e.g. MMWRweek and lubridate for R and pymmwr and epiweeks for Python).
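For example, a quick round trip in R (assuming the MMWRweek and lubridate packages named above):

```r
# Converting between calendar dates and MMWR epidemiological weeks.
library(MMWRweek)
library(lubridate)

d <- as.Date("2024-01-06")     # a Saturday (example date)

MMWRweek(d)                    # date -> EW (returns MMWRyear = 2024, MMWRweek = 1, MMWRday = 7)
MMWRweek2Date(2024, 1, 7)      # EW -> date ("2024-01-06", the Saturday ending EW 1 of 2024)
epiweek(d)                     # 1; lubridate equivalent for the week number
```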

An additional forecast target will be included during this year's influenza forecasting challenge, following the pilot of the experimental rate trend category target in the 2022-2023 season. The objective of this trend target is to characterize the trajectory of confirmed influenza hospital admissions as "large increase", "increase", "stable", "decrease", or "large decrease" over the 1- to 4-week forecast period following the most recent official hospital admissions data (see Table 2, to be added). Predictions for these targets will be in the form of probabilities for each rate-trend category and will be submitted in the same file as a team's weekly hospital admissions forecasts, using a target name of "wk flu hosp rate change".

Rate-trend categories are defined by binning state-level changes in weekly hospital admission incidence on a rate scale (counts per 100k people). A change is defined as the difference between the finalized reported weekly hospitalization rates in the EW ending on the target end date and the baseline EW ending two weeks prior to the reference date. At the time that nowcasts and forecasts are generated, this baseline week will be the most recent week for which the official data released on healthdata.gov include reported hospital admission values for at least some days (see Figures 1 and 2). Let $t$ denote the reference date and $y_s$ denote the finalized hospitalization rate in units of counts/100k population on the week ending on date $s$. The change in hospitalization rates at a weekly horizon $h$ is rate_change $= y_{t + h \cdot 7} - y_{t - 2 \cdot 7}$. The date ranges used in these calculations are illustrated in an example in Table 2. Corresponding count changes are based on state-level population sizes (i.e., count_change = rate_change * state_population / 100,000). See the locations.csv file in auxiliary-data/ for the population sizes that will be used to calculate rates.
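A hedged sketch of this arithmetic in R follows; the function and argument names are hypothetical, and population sizes would come from the locations.csv file referenced above (its column names are not assumed here).

```r
# Sketch of the rate-change / count-change arithmetic described above.
# Function and argument names are hypothetical; populations would come from
# auxiliary-data/locations.csv.
rate_change <- function(count_target_week, count_baseline_week, population) {
  # y_{t + h*7} - y_{t - 2*7}: weekly rate per 100k for the EW ending on the
  # target end date minus the baseline EW ending two weeks before the reference date
  (count_target_week - count_baseline_week) / population * 1e5
}

count_change <- function(rate_change, population) {
  # translate a rate difference back into an admission-count difference
  rate_change * population / 1e5
}

# Example with made-up numbers: 850 vs 600 weekly admissions, population 10.6 million
rc <- rate_change(850, 600, 10.6e6)   # ~2.36 admissions per 100k
count_change(rc, 10.6e6)              # 250 admissions
```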

Rate thresholds separating categories of change (e.g., separating "stable" trends from "increase" trends) will be the same across states, but are translatable into counts using each state's population size (see locations.csv in the auxiliary-data subfolder of this repository). Any week pair with a difference of fewer than 10 hospital admissions will be classified as having a "stable" trend. Specific rate-difference thresholds have been developed for each prediction horizon, based on past distributions observed in FluSurv-NET and HHS-Protect. These thresholds are provided in the model-outputs directory README file.
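The sketch below shows the shape of the categorization logic only; the rate thresholds are placeholders rather than the official horizon-specific values in the model-outputs README, and the exact category label strings used in submission files are defined there as well. Only the "fewer than 10 admissions means stable" rule is taken from the text above.

```r
# Categorization sketch. THRESHOLD VALUES ARE PLACEHOLDERS, not the official
# horizon-specific FluSight thresholds; only the "< 10 admissions => stable"
# rule comes from the description above.
categorize_rate_trend <- function(rate_change, count_change,
                                  large_threshold = 2.0,    # placeholder, rate per 100k
                                  small_threshold = 0.3) {  # placeholder, rate per 100k
  if (abs(count_change) < 10)          return("stable")
  if (rate_change >=  large_threshold) return("large increase")
  if (rate_change >=  small_threshold) return("increase")
  if (rate_change <= -large_threshold) return("large decrease")
  if (rate_change <= -small_threshold) return("decrease")
  "stable"
}
```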

Note: This season we will solicit these targets for the same horizons as the hospital admission targets (i.e., the rate difference between week $t+N$ and the current week $t$, where $N$ is -1, 0, 1, 2, or 3). Since hospital admission data for the preceding week will be provided on the Wednesday deadline, weekly hospital admission targets with a horizon of -1 will not be scored in summary evaluations nor included in visualizations. However, teams are welcome and encouraged to submit targets with a horizon of -1 to aid in detecting potential calibration issues.

If you have questions about the development of this target, please reach out to Rebecca Borchering ([email protected]). There are no seasonal targets (e.g., season peak or intensity) for this year's influenza forecasting challenge.

Additionally, the infrastructure to submit individual forecast trajectories (e.g., random sample simulations from the predictive distribution(s) of the forecast model) over the 2023-2024 influenza season may become available. Trajectory submissions would be optional. Details on formatting and the submission procedure would be provided before the first week for which trajectory submission is possible.

Acknowledgments

This repository follows the guidelines and standards outlined by the hubverse, which provides a set of data formats and open source tools for modeling hubs.
