Description
llama.cpp provides inference of several LLM models in C/C++. An integer overflow in the gguf_init_from_file_impl function in ggml/src/gguf.cpp can lead to a heap out-of-bounds read/write. The vulnerability is fixed in commit 26a48ad699d50b6268900062661bd22f3e792579.
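The general bug class can be illustrated with a minimal sketch (this is hypothetical code, not the actual gguf_init_from_file_impl implementation): a size computed by multiplying attacker-controlled values from a file header can wrap around, so a buffer far smaller than intended is allocated and subsequent accesses land out of bounds. The fix pattern is to validate the multiplication before performing it:

```cpp
#include <cstdint>
#include <limits>

// Hypothetical helper (not llama.cpp API): compute n_elems * elem_size for an
// allocation, rejecting inputs that would wrap around uint64_t. Without this
// check, a crafted GGUF-style header could make the product wrap to a small
// value, and later reads/writes past the undersized buffer would be heap OOB.
static bool checked_alloc_size(uint64_t n_elems, uint64_t elem_size, uint64_t * out) {
    // Overflow test without triggering the overflow itself:
    // n_elems * elem_size > UINT64_MAX  <=>  n_elems > UINT64_MAX / elem_size.
    if (elem_size != 0 && n_elems > std::numeric_limits<uint64_t>::max() / elem_size) {
        return false; // would wrap; treat the file as malformed
    }
    *out = n_elems * elem_size;
    return true;
}
```

A loader would call this for each tensor and abort parsing on `false` instead of allocating with the wrapped size.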
Packages
| Package | Status | Fixed version | Release | Type |
|---|---|---|---|---|
| ggml | fixed | 0.0~git20250711.b6d2ebd-1 | package | |
| llama.cpp | fixed | 5882+dfsg-1 | package | |
Notes
https://github.com/ggml-org/llama.cpp/security/advisories/GHSA-vgg9-87g3-85w8
Fixed by: https://github.com/ggml-org/llama.cpp/commit/26a48ad699d50b6268900062661bd22f3e792579 (b5854)
llama.cpp builds an embedded copy of ggml but does not use it; Debian uses the standalone src:ggml instead.
Related vulnerabilities
llama.cpp provides inference of several LLM models in C/C++. An integer overflow in the gguf_init_from_file_impl function in ggml/src/gguf.cpp can lead to a heap out-of-bounds read/write. The vulnerability is fixed in commit 26a48ad699d50b6268900062661bd22f3e792579.