CVE-2026-21869

Published: Jan 07, 2026
Source: redhat
CVSS3: 8.1

Description

llama.cpp is an inference engine for several LLM models in C/C++. In commits 55d4206c8 and prior, the n_discard parameter is parsed directly from JSON input in the llama.cpp server's completion endpoints without validation to ensure it is non-negative. When a negative value is supplied and the context fills up, llama_memory_seq_rm/add receives a reversed range and a negative offset, causing out-of-bounds memory writes in the token evaluation loop. This deterministic memory corruption can crash the process or enable remote code execution (RCE). There is no fix at the time of publication.

A flaw was found in llama.cpp. A remote attacker can exploit an input validation vulnerability in the server's completion endpoints. By supplying a negative value for the n_discard parameter in JSON input, an attacker can cause out-of-bounds memory writes. This can lead to a process crash or enable remote code execution (RCE), allowing the attacker to run arbitrary code on the affected system.

Report

This vulnerability is rated Important as it allows remote code execution in the llama.cpp server. An unauthenticated remote attacker can trigger out-of-bounds memory writes by providing a negative n_discard parameter in JSON input to the server's completion endpoints, leading to a crash or arbitrary code execution. This could impact deployments where the llama.cpp server is exposed to untrusted networks.

Additional information

Status: Important
Weakness: CWE-787
https://bugzilla.redhat.com/show_bug.cgi?id=2427743
llama.cpp: Remote code execution via invalid n_discard parameter in server endpoints

CVSS3: 8.1 (High)

Related vulnerabilities

CVSS3: 8.8
ubuntu
3 months ago

llama.cpp is an inference of several LLM models in C/C++. In commits 55d4206c8 and prior, the n_discard parameter is parsed directly from JSON input in the llama.cpp server's completion endpoints without validation to ensure it's non-negative. When a negative value is supplied and the context fills up, llama_memory_seq_rm/add receives a reversed range and negative offset, causing out-of-bounds memory writes in the token evaluation loop. This deterministic memory corruption can crash the process or enable remote code execution (RCE). There is no fix at the time of publication.

CVSS3: 8.8
nvd
3 months ago


CVSS3: 8.8
debian
3 months ago

