Submit a Model Vulnerability (Beta)

Join us in pioneering the security of AI by reporting vulnerabilities in AI/ML model files. As models become the backbone of modern technology, safeguarding them is essential for a safer AI-powered world.

Please Note:

  • Rewards: While we finalize a systematic approach to pricing model file vulnerabilities, bounty amounts are determined case by case based on the severity and impact of the vulnerability: up to $1,500 for vulnerabilities in pickle files, and up to $3,000 for all other formats.
  • Beta Label: As this is a beta program, we are currently unable to confirm CVE assignments or public disclosure timelines.
  • Review Timeline: We aim to review all submissions within 45 days of receipt.
  • Resources: For examples and guidance on potential vulnerabilities, please refer to Protect AI's Knowledge Base and ModelScan Repository.
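The resources above cover model-format risks in depth. As a minimal illustrative sketch (not taken from this program's text) of why pickle-based model files are singled out: unpickling can execute an arbitrary callable at load time via `__reduce__`, so merely loading an untrusted "model" runs attacker-chosen code. The class and values below are hypothetical, with a benign stand-in for the payload.

```python
import operator
import pickle

class MaliciousPayload:
    """Hypothetical object embedded in a pickle-format model file."""

    def __reduce__(self):
        # __reduce__ tells pickle to reconstruct this object by calling
        # operator.add(20, 22) at load time. A real exploit could return
        # something like (os.system, ("malicious command",)) instead.
        return (operator.add, (20, 22))

blob = pickle.dumps(MaliciousPayload())

# The embedded callable executes here, during deserialization:
result = pickle.loads(blob)
print(result)  # → 42
```

This is why tools like ModelScan inspect serialized model files for suspicious opcodes before they are ever loaded, and why safer formats are rewarded differently from pickle.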

Your expertise is vital in enhancing the security landscape of AI/ML models. Thank you for contributing to this important initiative.