Description
smb: client: fix crypto buffers in non-linear memory
FAQ
Is Azure Linux the only Microsoft product that includes this open-source library and is therefore potentially affected by this vulnerability?
One of the main benefits to our customers who choose the Azure Linux distro is our commitment to keeping it up to date with the most recent and most secure versions of the open-source libraries from which the distro is composed. Microsoft is committed to transparency in this work, which is why we began publishing CSAF/VEX in October 2025. See this blog post for more information. If impact to additional products is identified, we will update the CVE to reflect this.
EPSS
CVSS3: 7.1 High
Related vulnerabilities
In the Linux kernel, the following vulnerability has been resolved: smb: client: fix crypto buffers in non-linear memory The crypto API, through the scatterlist API, expects input buffers to be in linear memory. We handle this with the cifs_sg_set_buf() helper that converts vmalloc'd memory to their corresponding pages. However, when we allocate our aead_request buffer (@creq in smb2ops.c::crypt_message()), we do so with kvzalloc(), which possibly puts aead_request->__ctx in vmalloc area. AEAD algorithm then uses ->__ctx for its private/internal data and operations, and uses sg_set_buf() for such data on a few places. This works fine as long as @creq falls into kmalloc zone (small requests) or vmalloc'd memory is still within linear range. Tasks' stacks are vmalloc'd by default (CONFIG_VMAP_STACK=y), so too many tasks will increment the base stacks' addresses to a point where virt_addr_valid(buf) will fail (BUG() in sg_set_buf()) when that happens. In practice: too many parallel re...
A vulnerability in the cifs_sg_set_buf() function of the Linux kernel that allows an attacker to cause a denial of service.
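The commit message quoted above turns on one point: the scatterlist API only accepts addresses from the kernel's linear mapping, while kvzalloc() may hand back vmalloc'd memory, so passing such a buffer straight to sg_set_buf() can trip its virt_addr_valid() check and BUG(). The sketch below is a minimal illustration of the conversion pattern the commit attributes to cifs_sg_set_buf(); the helper name is hypothetical, it assumes the buffer does not cross a page boundary, and it is not the actual fs/smb/client code.

```c
/*
 * Illustrative sketch only (hypothetical helper, not the code from
 * fs/smb/client): setting a scatterlist entry for memory that came
 * from kvzalloc(), i.e. either kmalloc (linear) or vmalloc.
 * Assumes the buffer fits within a single page.
 */
#include <linux/mm.h>
#include <linux/scatterlist.h>
#include <linux/vmalloc.h>

static void sketch_sg_set_buf(struct scatterlist *sg, const void *buf,
			      unsigned int buflen)
{
	if (is_vmalloc_addr(buf)) {
		/*
		 * vmalloc'd memory is not in the kernel's linear mapping,
		 * so resolve the backing page explicitly; handing the
		 * virtual address to sg_set_buf() would fail its
		 * virt_addr_valid() check (BUG with CONFIG_DEBUG_SG).
		 */
		sg_set_page(sg, vmalloc_to_page(buf), buflen,
			    offset_in_page((unsigned long)buf));
	} else {
		/* kmalloc'd (linear) memory can be used directly. */
		sg_set_buf(sg, buf, buflen);
	}
}
```

This distinction is why the bug only shows up intermittently: small kvzalloc() requests are satisfied by kmalloc and stay in the linear range, while larger allocations (or a vmalloc area pushed higher by many CONFIG_VMAP_STACK task stacks) land in vmalloc space, where the direct sg_set_buf() path is no longer valid.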