Join MEXA here. Registration has closed.
Teams will have from 0:00 UTC on December 3 to 24:00 UTC on December 5 to work on the challenge. You can see what time that is in your time zone here. Depending on your time zone, the start might fall on December 2 for you!
MEXA's first hackathon is funded by Wellcome and supported by Google Health and Google DeepMind!
Traditional methods of assessment in mental health, such as clinical interviews and self-report questionnaires, have well-known limitations. Patients are asked to recall how often and how intensely they experience their symptoms, but that recollection is subjective and, depending on how the questions are constructed, not fully comprehensive. Many of these measures also capture only a snapshot of an individual’s mental health, drawn from a limited set of data collected at a single moment in time. Your challenge is to overcome these limitations and design tools or solutions that use generative AI to revolutionize mental health measurement.
Mental health care and research have long relied on clinical interviews and self-report questionnaires (e.g., PHQ-9 for depression, GAD-7 for anxiety) to understand the nature of people’s problems, make appropriate diagnoses, and monitor change over time. However, these methods often provide a limited or inaccurate point-in-time snapshot of an individual's mental health status, which can lead to misdiagnosis and ineffective treatment. Because mental health is a dynamic, fluctuating experience, there is a growing need for continuous, real-time monitoring that provides deeper insights into an individual's mental health journey.
Advances in AI, particularly large language models (LLMs), offer new opportunities to analyze passively collected natural language data, such as texts, emails, or social media interactions, to assess mental health problems. These language-based data streams are rich in emotional, behavioral, and cognitive signals that may be valuable for understanding the mental states of individuals over time. Moreover, combining natural language data with other data streams, such as physiological data from wearable sensors or behavioral data from smartphone usage, has the potential to create even more robust and accurate mental health measurement tools. And those are just the beginning; there may be many other data sources that contain useful information about mental state.
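To make the language-analysis idea concrete, here is a minimal, hedged sketch in Python of how a team might prompt an LLM to annotate a short, consented, de-identified text snippet for a few candidate signals. The `call_llm` stub, the signal names, and the canned response are placeholders invented for illustration; a real project would substitute its own model endpoint and validated measures, and this is in no way a clinical tool.

```python
# Minimal sketch: prompt an LLM to score a short, consented text snippet for a few
# illustrative signals. call_llm() is a stand-in for whatever model a team chooses.
import json

PROMPT_TEMPLATE = """You are a research assistant annotating de-identified text.
Rate the following snippet on a 0-1 scale for each signal and reply as JSON with
keys "negative_affect", "social_withdrawal", "sleep_disruption", and "evidence".

Snippet: "{snippet}"
"""

def call_llm(prompt: str) -> str:
    # Placeholder: a real implementation would call the team's chosen LLM API.
    # Returning a canned response keeps the sketch runnable without any API key.
    return ('{"negative_affect": 0.6, "social_withdrawal": 0.3, '
            '"sleep_disruption": 0.7, "evidence": "mentions lying awake"}')

def score_snippet(snippet: str) -> dict:
    """Build the prompt, query the (stubbed) model, and parse its JSON reply."""
    raw = call_llm(PROMPT_TEMPLATE.format(snippet=snippet))
    return json.loads(raw)

if __name__ == "__main__":
    # Dummy text only -- the hackathon rules forbid personal or live patient data.
    example = "Couldn't sleep again, kept replaying the day. Skipped dinner with friends."
    print(score_snippet(example))
```

Keeping the model’s output as structured JSON with an explicit evidence field makes each annotation easier to audit, which matters for the ethical and evaluation questions discussed below.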
The keyword for this challenge is measurement. This challenge asks participants to explore the role that LLMs, language, and other data sources could play in better measuring the status and progression of mental health problems over time and/or providing a more nuanced understanding of how people experience their symptoms. Projects should provide solutions or tools that help healthcare professionals or people with mental health conditions to better see mental health as a dynamic and fluctuating state, or that provide more detailed and nuanced information about a given diagnosis, patient, or symptom.
Answers to this challenge prompt might range from research tools to professional solutions or somewhere in between.
Projects should be informed by the perspectives of those with lived experience of relevant mental health problems. Teams should think about where they would need to get input from end users with lived experience to shape their project and test their assumptions about what safe, effective, and ethical tools would look like.
You may choose to consider the following areas (or any other areas or points that you consider important):
- Data Types and Sources: Identify the types of natural language data (e.g., social media posts, text messages, emails, diary entries) that could be analyzed to reliably assess mental health conditions. What specific linguistic markers or patterns (e.g., sentiment shifts, use of particular words or phrases) can LLMs detect in these data that correlate with mental health symptoms such as anxiety, depression, or stress? Consider how data sources themselves could create bias or risk; are there opportunities to reduce that? Are there specific contexts or data types or sources that should never be used for inferences about users’ mental health status?
- Combining Multi-Modal Data Streams: Explore how language data could be combined with other types of data, such as passively collected sensor data (e.g., heart rate variability from wearables, activity levels from fitness trackers) or behavioral data (e.g., sleep patterns, smartphone usage). How can multi-modal data integration improve the accuracy and robustness of mental health assessments compared to using language data alone? (See the sketch after this list for one possible fusion approach.)
- Ethical and Privacy Considerations: Address the ethical challenges of using passively collected data and other issues related to the inference of sensitive information such as a user’s mental health status from such data. How can your solution ensure data privacy, consent, and security while providing meaningful insights? How can you ensure that user expectations regarding the handling of their data are met and communicated? How will you prevent the model from making inaccurate or harmful conclusions, especially when operating autonomously?
- Evaluating Progress Over Time: Propose how your solution could be used to track changes in mental health conditions over time. How can LLMs detect meaningful trends or patterns in an individual’s language and behaviors that signal improvement, deterioration, or a need for intervention? (The sketch after this list includes a simple rolling-trend illustration.)
- Health Care and Real-World Applicability: Consider how this solution could be implemented in real-world settings. How can it assist healthcare professionals in identifying and monitoring patients at risk of developing mental health problems? Alternatively, how could it be used by individuals as a self-management tool for mental health?
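As a rough starting point for the multi-modal and progress-tracking points above, the sketch below fuses a daily language-derived score with synthetic wearable features and smooths the result with a trailing seven-day average. The weights, the 0.35 review threshold, and all of the data are arbitrary placeholders chosen only to make the example run; they are not validated parameters.

```python
# Minimal sketch: fuse daily language-derived scores with synthetic wearable features
# and track a rolling trend. All numbers are dummy data; weights are illustrative only.
from dataclasses import dataclass
from statistics import mean

@dataclass
class DailyObservation:
    negative_affect: float   # 0-1, e.g., from an LLM annotation of consented text
    sleep_hours: float       # from a wearable or phone sensor
    hrv_ms: float            # heart rate variability, in milliseconds

def composite_risk(obs: DailyObservation) -> float:
    """Fuse modalities into a single 0-1 score (weights are placeholders)."""
    sleep_deficit = max(0.0, (7.5 - obs.sleep_hours) / 7.5)   # 0 when well rested
    low_hrv = max(0.0, (60.0 - obs.hrv_ms) / 60.0)            # 0 at or above 60 ms
    return 0.5 * obs.negative_affect + 0.3 * sleep_deficit + 0.2 * low_hrv

def rolling_trend(scores: list[float], window: int = 7) -> list[float]:
    """Smooth daily composites with a trailing mean to expose week-scale change."""
    return [mean(scores[max(0, i - window + 1): i + 1]) for i in range(len(scores))]

if __name__ == "__main__":
    # Two synthetic weeks in which sleep shortens and language turns more negative.
    days = [DailyObservation(0.2 + 0.04 * d, 7.5 - 0.15 * d, 65 - 1.5 * d) for d in range(14)]
    daily = [composite_risk(obs) for obs in days]
    trend = rolling_trend(daily)
    flagged = [i for i, t in enumerate(trend) if t > 0.35]   # arbitrary review threshold
    print("days whose 7-day average exceeds the review threshold:", flagged)
```

A real project would need to justify any fusion weights and thresholds empirically, and decide, with input from people with lived experience, what (if anything) should happen when a trend crosses a threshold.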
When designing your project and submitting your materials, it is important to bear in mind the following:
- Personal data. You must not use any personal data throughout the process (whether when using any tools provided to you or in your submission materials). Personal data is any information that either identifies an individual or could be used to identify an individual, and includes, in particular, live patient data. If you need to use data as part of your project, please use dummy or artificial information instead.
- Confidential and proprietary information. The goal of the hackathon is to share ideas and build an engaged multi-disciplinary community to tackle the challenge of using generative AI to revolutionize mental health measurement. It will be conducted in an open environment with materials available to external mentors and judges as well as other hackathon participants. You should think carefully when producing materials so you do not include confidential or sensitive information that you would not want others to view or access. If you use any information or materials owned by a third party you must ensure you have the rights to use those for the purpose of the hackathon.
- Focus on research, not clinical deployment. The purpose of this hackathon is to propose possible research avenues and solutions to revolutionize mental health measurement. You must not use any tools (including Google’s technology or products) or submit materials in a way that could be determined to be giving clinical advice or creating a diagnostic tool or other medical device.
- Respect others. We want to maintain a respectful environment for everyone, which means you must follow these basic rules of conduct:
- comply with applicable laws, including export control and sanctions laws
- respect the rights of others, including privacy and intellectual property rights
- comply with any usage standards or rules relating to any content you submit or use in the hackathon, such as the terms of service for any generative AI technology or tools
- don’t abuse or harm others, yourself or any services (or threaten or encourage such abuse or harm) — for example, by misleading, illegally impersonating, defaming, bullying or harassing others or generating or sharing malware