exploitDog

CVE-2025-25183

Published: Sep 3, 2025
Source: msrc
CVSS3: 2.6
EPSS: Low

Description

vLLM's use of Python 3.12's built-in hash() leads to predictable hash collisions in the vLLM prefix cache

FAQ

Is Azure Linux the only Microsoft product that includes this open-source library and is therefore potentially affected by this vulnerability?

One of the main benefits for customers who choose the Azure Linux distro is our commitment to keeping it up to date with the most recent and most secure versions of the open-source libraries of which the distro is composed. Microsoft is committed to transparency in this work, which is why we began publishing CSAF/VEX in October 2025. See this blog post for more information. If impact to additional products is identified, we will update the CVE to reflect this.

EPSS

Percentile: 20%
Score: 0.00064 (Low)

CVSS3: 2.6 (Low)

Related vulnerabilities

CVSS3: 2.6
redhat
about 1 year ago

vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. Maliciously constructed statements can lead to hash collisions, resulting in cache reuse, which can interfere with subsequent responses and cause unintended behavior. Prefix caching makes use of Python's built-in hash() function. As of Python 3.12, the behavior of hash(None) has changed to be a predictable constant value. This makes it more feasible that someone could try to exploit hash collisions. The impact of a collision would be using cache that was generated using different content. Given knowledge of prompts in use and predictable hashing behavior, someone could intentionally populate the cache using a prompt known to collide with another prompt in use. This issue has been addressed in version 0.7.2 and all users are advised to upgrade. There are no known workarounds for this vulnerability.
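
To illustrate the mechanism described above: before Python 3.12, hash(None) was derived from the object's memory address and so varied between interpreter runs; from 3.12 on it is a fixed constant, identical in every process. A cache key that mixes in hash(None), e.g. for the first block of a prompt, which has no parent hash, therefore becomes predictable, which is what lets an attacker precompute colliding prompts. Below is a minimal Python sketch; block_key_vulnerable, block_key_salted, and NONE_HASH are illustrative names, not vLLM's actual code, and the salting shown is an assumption about the general direction of the 0.7.2 fix rather than the literal upstream patch.

import os
import sys

# Python >= 3.12: hash(None) is a fixed constant, identical across
# processes and machines. On 3.11 and earlier it depended on the
# object's address and differed between runs.
print(sys.version_info[:2], hash(None))

# Simplified cache keying in the spirit of the vulnerable scheme:
# the first block of a prompt has no parent hash, so the predictable
# hash(None) leaks into the cache key.
def block_key_vulnerable(parent_hash, token_ids):
    parent = parent_hash if parent_hash is not None else hash(None)
    return hash((parent, tuple(token_ids)))

# Mitigation sketch: a per-process random salt stands in for the
# predictable hash(None), so keys cannot be precomputed offline.
NONE_HASH = int.from_bytes(os.urandom(8), "big")

def block_key_salted(parent_hash, token_ids):
    parent = parent_hash if parent_hash is not None else NONE_HASH
    return hash((parent, tuple(token_ids)))

Note that CPython hashes ints and tuples deterministically regardless of PYTHONHASHSEED (which only randomizes str and bytes), so the random salt is what makes the first-block key unpredictable here.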

CVSS3: 2.6
nvd
about 1 year ago

vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. Maliciously constructed statements can lead to hash collisions, resulting in cache reuse, which can interfere with subsequent responses and cause unintended behavior. Prefix caching makes use of Python's built-in hash() function. As of Python 3.12, the behavior of hash(None) has changed to be a predictable constant value. This makes it more feasible that someone could try to exploit hash collisions. The impact of a collision would be using cache that was generated using different content. Given knowledge of prompts in use and predictable hashing behavior, someone could intentionally populate the cache using a prompt known to collide with another prompt in use. This issue has been addressed in version 0.7.2 and all users are advised to upgrade. There are no known workarounds for this vulnerability.

CVSS3: 2.6
debian
about 1 year ago

vLLM is a high-throughput and memory-efficient inference and serving e ...

CVSS3: 2.6
github
about 1 year ago

vLLM uses Python 3.12's built-in hash(), which leads to predictable hash collisions in the prefix cache
