Description
In the Linux kernel, the following vulnerability has been resolved:
mm: avoid overflows in dirty throttling logic
The dirty throttling logic is interspersed with assumptions that dirty
limits in PAGE_SIZE units fit into 32 bits (so that various multiplications
fit into 64 bits). If limits end up being larger, we will hit overflows,
possible divisions by 0, etc. Fix these problems by never allowing such
large dirty limits, as they have dubious practical value anyway. For the
dirty_bytes / dirty_background_bytes interfaces we can simply refuse to set
such large limits. For dirty_ratio / dirty_background_ratio it isn't so
simple, as the dirty limit is computed from the amount of available memory,
which can change due to memory hotplug etc. So when converting dirty
limits from ratios to numbers of pages, we just don't allow the result to
exceed UINT_MAX.
This is a root-only triggerable problem which occurs when the operator
sets dirty limits to >16 TB.
A vulnerability was found in the Linux kernel's memory management subsystem: a lack of proper size checks on dirty limits allows them to grow beyond what fits in 32 bits, resulting in potential overflows and divisions by zero. This can cause memory corruption, system instability, or crashes.
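For illustration only (this is not the upstream kernel patch, and all names below are hypothetical), the following sketch reproduces the arithmetic described above: with 4 KiB pages, a page count that fits in 32 bits corresponds to at most roughly 16 TB, so byte-based limits above that can simply be refused, while ratio-based limits can be clamped to UINT_MAX so that later 64-bit multiplications cannot overflow.

```c
/*
 * Illustrative sketch only -- not the actual kernel patch.
 * Shows why dirty limits expressed in page units must fit in 32 bits,
 * and the refuse/clamp behaviour described in the commit message.
 */
#include <stdio.h>
#include <stdint.h>
#include <limits.h>

#define PAGE_SIZE_BYTES 4096ULL

/*
 * Hypothetical helper: convert a byte limit to pages, refusing values that
 * do not fit in 32 bits (the dirty_bytes / dirty_background_bytes case).
 */
static int bytes_to_page_limit(unsigned long long bytes, uint32_t *pages)
{
    unsigned long long p = bytes / PAGE_SIZE_BYTES;

    if (p > UINT_MAX)
        return -1;                      /* refuse limits above ~16 TB */
    *pages = (uint32_t)p;
    return 0;
}

/*
 * Hypothetical helper: convert a ratio of available memory to pages,
 * clamping to UINT_MAX (the dirty_ratio / dirty_background_ratio case,
 * where available memory can change due to hotplug).
 */
static uint32_t ratio_to_page_limit(unsigned long long available_pages,
                                    unsigned int ratio)
{
    unsigned long long p = available_pages * ratio / 100;

    return p > UINT_MAX ? UINT_MAX : (uint32_t)p;
}

int main(void)
{
    unsigned long long huge_bytes = 32ULL << 40;    /* 32 TB dirty_bytes */
    uint32_t pages;

    if (bytes_to_page_limit(huge_bytes, &pages))
        printf("refused: %llu bytes is more than UINT_MAX pages\n", huge_bytes);

    /*
     * Without such checks, squaring an over-large page count (as cubic
     * position-ratio style math does) wraps even in 64-bit arithmetic:
     * (2^33)^2 = 2^66, which is 0 modulo 2^64.
     */
    unsigned long long unchecked = huge_bytes / PAGE_SIZE_BYTES;  /* 2^33 pages */
    printf("unchecked^2 wraps to %llu\n", unchecked * unchecked);

    /* A dirty_ratio of 40% on a hypothetical 64 TB machine gets clamped. */
    unsigned long long avail = (64ULL << 40) / PAGE_SIZE_BYTES;   /* 2^34 pages */
    printf("clamped ratio limit: %u pages\n", ratio_to_page_limit(avail, 40));
    return 0;
}
```

Compiled and run, the sketch refuses the 32 TB byte limit, shows the unchecked page count wrapping to 0 when squared in 64 bits, and clamps the ratio-derived limit to UINT_MAX.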
Statement
Red Hat believes this flaw is of moderate severity because the privileges necessary to exploit it are high: a user needs elevated permissions to set the dirty page limits.
Mitigation
Mitigation for this issue is either not available, or the currently available options do not meet the Red Hat Product Security criteria comprising ease of use and deployment, applicability to a widespread installation base, or stability.
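Although no formal mitigation is provided, an administrator who wants to confirm that a system is not configured in the problematic range can read the byte-based limits from procfs. The sketch below is a hypothetical check written for this advisory, not Red Hat guidance; ratio-based limits, which depend on available memory, are not covered by it.

```c
/*
 * Hypothetical check, not an official mitigation: print the byte-based
 * dirty limits and flag values in the >16 TB range that the description
 * above calls problematic. A value of 0 means ratio-based limits apply.
 */
#include <stdio.h>

int main(void)
{
    const char *files[] = {
        "/proc/sys/vm/dirty_bytes",
        "/proc/sys/vm/dirty_background_bytes",
    };
    const unsigned long long limit = 16ULL << 40;   /* roughly 16 TB */

    for (int i = 0; i < 2; i++) {
        FILE *f = fopen(files[i], "r");
        unsigned long long val = 0;

        if (!f || fscanf(f, "%llu", &val) != 1) {
            fprintf(stderr, "could not read %s\n", files[i]);
            if (f)
                fclose(f);
            continue;
        }
        fclose(f);
        printf("%s = %llu%s\n", files[i], val,
               val > limit ? "  (in the problematic >16 TB range)" : "");
    }
    return 0;
}
```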
Affected Packages
Platform | Package | State | Errata | Release date
---|---|---|---|---
Red Hat Enterprise Linux 6 | kernel | Out of support scope | |
Red Hat Enterprise Linux 7 | kernel | Out of support scope | |
Red Hat Enterprise Linux 7 | kernel-rt | Out of support scope | |
Red Hat Enterprise Linux 9 | kernel-rt | Affected | |
Red Hat Enterprise Linux 8 | kernel-rt | Fixed | RHSA-2024:7001 | 24.09.2024
Red Hat Enterprise Linux 8 | kernel | Fixed | RHSA-2024:7000 | 24.09.2024
Red Hat Enterprise Linux 9 | kernel | Fixed | RHSA-2024:6567 | 11.09.2024
Red Hat Enterprise Linux 9.2 Extended Update Support | kernel | Fixed | RHSA-2024:6267 | 04.09.2024
Red Hat Enterprise Linux 9.2 Extended Update Support | kernel-rt | Fixed | RHSA-2024:6268 | 04.09.2024
Additional Information
CVSS3: 4.4 Medium
Related Vulnerabilities
A vulnerability in the mm component of the Linux kernel that allows an attacker to execute arbitrary code
CVSS3: 4.4 Medium