The LLM Privacy Challenge is a competition aimed at identifying and mitigating privacy risks associated with Large Language Models (LLMs). It focuses on exploring and developing strategies to preserve privacy across the stages of an LLM application, including fine-tuning on private data and prompting. The challenge is divided into Red Team and Blue Team tracks, targeting the identification of and protection against privacy vulnerabilities, respectively.
The competition is open to everyone interested in advancing the privacy and security of LLMs. Individuals and teams from academia, industry, and independent backgrounds are welcome to contribute their expertise.
Red Team Track: Participants aim to uncover and exploit privacy vulnerabilities in LLMs, simulating potential attackers.
Blue Team Track: Focuses on defending against privacy breaches, developing methods to safeguard sensitive data in LLMs.
For more information on the tracks, please refer to the Prizes & Tracks page.
Registration details and deadlines will be provided on the official NeurIPS 2024 LLM Privacy Challenge website. Participants can register for either or both tracks at any time during the competition period.
Participants are encouraged to have a background in machine learning, cybersecurity, or related fields. However, the challenge is designed to accommodate a range of skills and knowledge levels. Familiarity with the provided LLM-PBE toolkit will be beneficial.
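The LLM-PBE toolkit's exact API is documented in its repository. As a flavor of the kind of red-team technique it covers, the sketch below shows a minimal loss-threshold membership-inference probe against a generic Hugging Face causal LM. The model name, candidate string, and threshold are illustrative placeholders, not LLM-PBE calls or challenge settings.

```python
# Illustrative membership-inference sketch: texts the model has memorized
# tend to have unusually low loss. Placeholders throughout; this is not
# the LLM-PBE API or an official challenge baseline.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in; the challenge specifies its own target models
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def sequence_loss(text: str) -> float:
    """Average per-token cross-entropy of `text` under the model."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    return out.loss.item()

# The threshold is arbitrary here; in practice it would be calibrated on
# held-out member and non-member examples.
candidate = "john.doe@enron.com wrote: please find attached the Q3 report"
print("likely member?", sequence_loss(candidate) < 3.0)
```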
The competition will focus on either the privacy of fine-tuned data or the privacy of prompts, with specific datasets provided for each. These datasets include synthetic private data or real-world examples such as the Enron dataset, depending on the chosen focus.
For more information on the datasets, please refer to the Getting Started page.
Participants must submit their code, models (if developed), and a short paper describing their approach. Details on submission formats and platforms are available on the Getting Started page.
Yes, teams of any size are allowed, including solo participants. Collaboration is encouraged to leverage diverse skills and perspectives.
During the validation phase, each team is limited to 5 submissions per day for each track. In the test phase, teams are restricted to a total of 5 submissions. Only one account per team is permitted for submissions to ensure fairness.
To be eligible for prizes, winning teams must share their methods, code, and models with the organizers. Sharing with the broader community is encouraged to foster knowledge exchange and innovation.
Submissions will be assessed based on attack accuracy, attack efficiency, and defense effectiveness. These criteria are designed to measure the practical and theoretical impacts of the proposed privacy-preserving strategies.
For more information on evaluation metrics, please refer to the Prizes & Tracks page.
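For a concrete sense of the simplest of these metrics, the sketch below tallies exact-match attack accuracy over a set of extraction targets. It is a generic illustration with made-up inputs, not the challenge's official scoring script.

```python
# Generic exact-match tally; the evaluation on the Prizes & Tracks page
# is authoritative and may define accuracy differently.
def attack_accuracy(predicted: list[str], ground_truth: list[str]) -> float:
    """Fraction of private targets the attack recovered exactly."""
    hits = sum(p == g for p, g in zip(predicted, ground_truth))
    return hits / len(ground_truth)

print(attack_accuracy(["alice@corp.com", "bob@corp.com"],
                      ["alice@corp.com", "eve@corp.com"]))  # 0.5
```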
Prizes will include cash awards and credits for accessing LLMs. Awards will be given for first, second, and third place in each track, as well as special awards for cost-effective, high-performing methods evaluated against the top-3 submissions from the opposite track.
For more information on prizes, please refer to the Prizes & Tracks page.
For any inquiries, participants can reach out to the organizers via email: llmpc2024.info@gmail.com
Key dates, including registration deadlines, submission deadlines, and prize announcement dates, will be displayed on the website and through official communications to registered participants.
We suggest using the CC-BY 4.0 license for data and code, which allows others to distribute, remix, adapt, and build upon the work, even commercially, as long as they credit the original creators.