Description
NVIDIA Triton Inference Server for Windows and Linux and the TensorRT backend contain a vulnerability where an attacker could cause an underflow via a specific model configuration and a specific input. A successful exploit of this vulnerability might lead to denial of service.
Related vulnerabilities
A vulnerability in NVIDIA Triton Inference Server (formerly TensorRT Inference Server), software for deploying and running artificial-intelligence models, related to a possible integer overflow, allowing an attacker to cause a denial of service.