Description
llama.cpp provides inference for several LLM models in C/C++. An integer overflow in the gguf_init_from_file_impl function in ggml/src/gguf.cpp can lead to a heap out-of-bounds read/write. The vulnerability is fixed in commit 26a48ad699d50b6268900062661bd22f3e792579.
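The advisory names the bug class but not the exact code path. The sketch below is a hypothetical illustration of that class, not the actual gguf.cpp code: a size computed from untrusted file fields wraps in 32-bit arithmetic, yielding an undersized allocation that a later copy overruns. The function names (`unsafe_alloc_size`, `safe_alloc_size`) are invented for the example; the fixed pattern checks the product before using it.

```cpp
#include <cstdint>
#include <cstddef>

// Hypothetical vulnerable pattern: a count and element size read from an
// untrusted GGUF-like file are multiplied in 32-bit arithmetic. The product
// wraps modulo 2^32, so the caller allocates far less than it later writes.
size_t unsafe_alloc_size(uint32_t n_elems, uint32_t elem_size) {
    uint32_t total = n_elems * elem_size;  // BUG: wraps on overflow
    return total;
}

// Hardened pattern: compute in 64 bits and reject products that cannot fit
// in size_t, so the allocation always matches the subsequent write.
bool safe_alloc_size(uint64_t n_elems, uint64_t elem_size, size_t &out) {
    if (elem_size != 0 && n_elems > SIZE_MAX / elem_size) {
        return false;  // product would overflow size_t
    }
    out = static_cast<size_t>(n_elems * elem_size);
    return true;
}
```

With n_elems = 0x40000000 and elem_size = 8, the true product is 2^33, but the 32-bit multiplication wraps to 0: a zero-byte buffer is allocated while the parser still believes it holds 2^30 elements.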
| Release | Status | Notes |
|---|---|---|
| devel | needs-triage | |
| jammy | DNE | |
| noble | DNE | |
| plucky | DNE | |
| questing | needs-triage | |
| upstream | needs-triage | |
EPSS score: 0.00065 (20th percentile, low)
Related vulnerabilities
nvd (7 months ago)
llama.cpp is an inference of several LLM models in C/C++. Integer Overflow in the gguf_init_from_file_impl function in ggml/src/gguf.cpp can lead to Heap Out-of-Bounds Read/Write. This vulnerability is fixed in commit 26a48ad699d50b6268900062661bd22f3e792579.
debian (7 months ago)
llama.cpp is an inference of several LLM models in C/C++. Integer Over ...