New top bounty: Up to $3,000 for Model Format vulnerabilities

What's a Model Format vulnerability?

A model format vulnerability refers to a security flaw that arises from the way an AI/ML model is stored or serialized in a specific file format. Exploiting these flaws can lead to real-world impacts, such as unauthorized model manipulation or malicious code execution.

Currently, we have identified two broad categories of model format vulnerabilities: Deserialization and Backdoors.

  • Deserialization vulnerabilities occur when improperly handled serialized data allows attackers to inject malicious payloads that execute during model loading (see the sketch after this list).
  • Backdoors involve the intentional embedding of hidden malicious functionality within the model itself.
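
For the deserialization category, here is a minimal, purely illustrative sketch (Python standard library only) of how a pickle-serialized "model" can execute attacker-chosen code the moment it is loaded, and how a simple opcode scan can flag the file without ever deserializing it. The class name and the harmless print payload are our own examples, not anything specific to huntr or any particular tool.

    import pickle
    import pickletools

    # A "model" whose __reduce__ hook tells pickle to call an arbitrary
    # callable at load time. A real exploit would reference os.system or
    # similar; here the payload is a harmless print so the sketch is safe.
    class MaliciousModel:
        def __reduce__(self):
            return (print, ("Arbitrary code ran during model loading!",))

    # Serialize the object exactly as a naive "save the model" step might.
    payload = pickle.dumps(MaliciousModel())

    # Loading the bytes triggers the payload -- this is the deserialization flaw.
    pickle.loads(payload)

    # Static inspection: GLOBAL/STACK_GLOBAL followed by REDUCE means loading
    # this file would import and invoke a callable, a classic red flag.
    suspicious = {"GLOBAL", "STACK_GLOBAL", "REDUCE"}
    found = [op.name for op, _, _ in pickletools.genops(payload) if op.name in suspicious]
    print("Suspicious pickle opcodes:", found)

Static scanners such as the ModelScan tool referenced below take a broadly similar opcode- and structure-level approach across multiple formats. Backdoors are harder to catch this way, since the malicious behavior lives in the model's weights or architecture rather than in the loading code.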

These categories are not exhaustive, and we are constantly on the lookout for new threat vectors. If you have discovered a vulnerability that doesn't fall within these categories, please submit it anyway.

Please note:

  • Rewards: While we finalize a systematic approach to pricing model format vulnerabilities, bounty amounts will be determined at our discretion based on the severity and impact of the vulnerability. Rewards are up to $1,500 for vulnerabilities in pickle files and up to $3,000 for all other formats.
  • Beta Label: As this is a beta program, we are currently unable to confirm CVE assignments or public disclosure timelines.
  • Review Timeline: As usual, we aim to review all submissions within 45 days of receipt.
  • Scope: All model formats are in scope. If we're missing any, please reach out.
  • Who can participate: All huntr users (new and existing) are welcome.
  • Resources: For inspiration and guidance, please refer to Protect AI's Knowledge Base and the ModelScan Repository. New content will be added regularly.

Your expertise is vital in enhancing the security landscape of AI/ML models. Thank you for contributing to this important initiative. Good luck ☘️
