The Pentagon has initiated a groundbreaking bounty program, offering rewards totaling $24,000 for identifying evidence of bias within artificial intelligence (AI) models, particularly those exhibiting legal bias against protected groups. This initiative seeks contributions that can pinpoint bias in real-world scenarios, leveraging AI technologies like Meta’s open-source LLama-2 70B model.
This effort underscores the Department of Defense’s (DoD) commitment to addressing and mitigating instances of bias in AI, focusing on biases that could affect protected groups. Participants in the bounty program are encouraged to provide clear, real-world examples of bias by interacting with a large language model (LLM). A specific challenge involves comparing the AI’s responses to identical medical inquiries framed for different racial groups, highlighting any discriminatory biases in its outputs.
Eligibility and Rewards
The program is not just a call to action for identifying bias but is structured as a competition. With $24,000 on the line, the DoD has specified that only the most relevant and impactful submissions will be rewarded. The top three contributions will share the majority of the prize money, while every participant whose submission meets the program criteria will be awarded $250. The evaluation of submissions will be based on their realism, relevance to protected classes, the evidence provided, clarity of description, and the efficiency in eliciting biased responses from the AI.
Future Plans and Participation
This initiative represents the first in a series of two planned “bias bounties” by the Pentagon, aiming to engage the public in identifying and mitigating bias within AI technologies. Open until February 27, this contest invites widespread participation, with plans for another round to follow. Through these efforts, the DoD aims to foster an environment where AI technologies are scrutinized for biases, ensuring their equitable and unbiased application in real-world scenarios.