Description
NVIDIA Triton Inference Server contains a vulnerability where a user may cause an out-of-bounds read issue by releasing a shared memory region while it is in use. A successful exploit of this vulnerability may lead to denial of service.
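The flaw follows a classic pattern: a shared memory region is freed while an in-flight inference request still holds a pointer into it, so a subsequent read runs past valid memory. A minimal sketch of the defensive pattern (reference-counted regions with deferred release) is shown below; the `ShmRegistry` class and all names are hypothetical illustrations, not Triton's actual implementation.

```python
import threading

class ShmRegistry:
    """Hypothetical sketch: guard shared-memory regions with reference
    counts so a region cannot be freed while a request is still reading
    from it -- the unsafe interleaving behind this CVE."""

    def __init__(self):
        self._lock = threading.Lock()
        # name -> {"buf": backing buffer, "refs": active users, "pending": release requested}
        self._regions = {}

    def register(self, name, size):
        with self._lock:
            self._regions[name] = {"buf": bytearray(size), "refs": 0, "pending": False}

    def acquire(self, name):
        # Called when a request starts using the region.
        with self._lock:
            region = self._regions[name]
            region["refs"] += 1
            return region["buf"]

    def release_use(self, name):
        # Called when the request finishes; frees the region if a
        # deferred unregister is pending and no other users remain.
        with self._lock:
            region = self._regions[name]
            region["refs"] -= 1
            if region["pending"] and region["refs"] == 0:
                del self._regions[name]

    def unregister(self, name):
        # Refuse to free a region that is still in use; defer instead.
        with self._lock:
            region = self._regions[name]
            if region["refs"] > 0:
                region["pending"] = True
                return False  # release deferred until last user is done
            del self._regions[name]
            return True
```

Under this scheme an `unregister` that races with an active request simply defers the release instead of yanking the memory out from under the reader, which removes the out-of-bounds read.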
Mitigation
A mitigation for this issue is either not available, or the currently available options do not meet the Red Hat Product Security criteria of ease of use and deployment, applicability to a widespread installation base, or stability.
Affected Packages
| Platform | Package | State | Advisory | Release |
|---|---|---|---|---|
| OpenShift Lightspeed | openshift-lightspeed-beta/lightspeed-service-api-rhel9 | Not affected | | |
| Red Hat Enterprise Linux AI (RHEL AI) | rhelai1/bootc-nvidia-rhel9 | Not affected | | |
| Red Hat Enterprise Linux AI (RHEL AI) | rhelai1/instructlab-nvidia-rhel9 | Not affected | | |
Additional Information
CVSS3 score: 4.9 Medium
Related Vulnerabilities
A vulnerability in NVIDIA Triton Inference Server (formerly TensorRT Inference Server), software for deploying and running artificial-intelligence models, that allows an attacker to cause a denial of service.
CVSS3 score: 4.9 Medium