Frequently Asked Questions

What is the NeurIPS 2024 LLM Privacy Challenge?

The LLM Privacy Challenge is a competition aimed at identifying and mitigating privacy risks associated with Large Language Models (LLMs). It focuses on exploring and developing strategies to preserve privacy across various stages of LLM application, including data fine-tuning and prompt generation. The challenge is divided into Red Team and Blue Team tracks, targeting the identification of and the protection against privacy vulnerabilities, respectively.

Who can participate in the LLM Privacy Challenge?

The competition is open to everyone interested in advancing the privacy and security of LLMs. Individuals and teams from academic, industrial, and independent backgrounds are welcome to contribute their expertise.

What are the competition tracks?

Red Team Track: Participants aim to uncover and exploit privacy vulnerabilities in LLMs, simulating potential attackers.

Blue Team Track: Focuses on defending against privacy breaches, developing methods to safeguard sensitive data in LLMs.

For more information on the tracks, please refer to the Prizes & Tracks page.

How do I register for the competition?

Registration details and deadlines will be provided on the official NeurIPS 2024 LLM Privacy Challenge website. Participants can register for either or both tracks at any time during the competition period.

Are there any prerequisites for participation?

Participants are encouraged to have a background in machine learning, cybersecurity, or related fields. However, the challenge is designed to accommodate a range of skills and knowledge levels. Familiarity with the provided LLM-PBE toolkit will be beneficial.

How is the competition data structured?

The competition will focus on either the privacy of fine-tuned data or the privacy of prompts, with specific datasets provided for each. These datasets include synthetic private data or real-world examples such as the Enron dataset, depending on the chosen focus.

For more information on the datasets, please refer to the Getting Started page.

What are the submission requirements?

Participants must submit their code, models (if developed), and a short paper describing their approach. Details on submission formats and platforms are available on the Getting Started page.

Can we collaborate as a team?

Yes, teams of any size are allowed, including solo participants. Collaboration is encouraged to leverage diverse skills and perspectives.

How many submissions are allowed?

During the validation phase, each team is limited to 5 submissions per day for each track. In the test phase, teams are restricted to a total of 5 submissions. Only one account per team is permitted for submissions to ensure fairness.

Are participants required to share their methods?

To be eligible for prizes, winning teams must share their methods, code, and models with the organizers. Sharing with the broader community is encouraged to foster knowledge exchange and innovation.

How will submissions be evaluated?

Submissions will be assessed based on attack accuracy, attack efficiency, and defense effectiveness. These criteria are designed to measure the practical and theoretical impacts of the proposed privacy-preserving strategies.

For more information on evaluation metrics, please refer to the Prizes & Tracks page.

What prizes are available?

Prizes will include cash awards and credits for accessing LLMs. Awards will be given for the first, second, and third place in each track, as well as special awards for cost-effective and high-performing methods against the top-3 submissions from the opposite track.

For more information on prizes, please refer to the Prizes & Tracks page.

How do I contact the organizers?

For any inquiries, participants can reach out to the organizers via email:

What are the important dates and deadlines?

Key dates, including registration deadlines, submission deadlines, and prize announcement dates, will be displayed on the website and through official communications to registered participants.