exploitDog
bind:CVE-2024-56677

Count: 12
ubuntu

CVE-2024-56677

7 months ago

In the Linux kernel, the following vulnerability has been resolved: powerpc/fadump: Move fadump_cma_init to setup_arch() after initmem_init() During early init CMA_MIN_ALIGNMENT_BYTES can be PAGE_SIZE, since pageblock_order is still zero and it gets initialized later during initmem_init() e.g. setup_arch() -> initmem_init() -> sparse_init() -> set_pageblock_order() One such use case where this causes issue is - early_setup() -> early_init_devtree() -> fadump_reserve_mem() -> fadump_cma_init() This causes CMA memory alignment check to be bypassed in cma_init_reserved_mem(). Then later cma_activate_area() can hit a VM_BUG_ON_PAGE(pfn & ((1 << order) - 1)) if the reserved memory area was not pageblock_order aligned. Fix it by moving the fadump_cma_init() after initmem_init(), where other such cma reservations also gets called. <stack trace> ============== page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x10010 flags: 0x13ffff800000000(node=1|zone=0|lastcpupid=0...

EPSS: Low (0%)
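The upstream description quoted above (and repeated verbatim by several of the sources below) describes an initialization-ordering bug: while pageblock_order is still zero during early init, CMA_MIN_ALIGNMENT_BYTES collapses to PAGE_SIZE, so a reservation made from fadump_cma_init() (reached via early_init_devtree() -> fadump_reserve_mem()) passes the alignment check in cma_init_reserved_mem() even when it is not pageblock-aligned; once set_pageblock_order() has run, cma_activate_area() trips VM_BUG_ON_PAGE(pfn & ((1 << order) - 1)). The standalone C sketch below only reproduces the arithmetic of that check under assumed values (PAGE_SHIFT = 16, a final pageblock_order of 8, simplified helper names); it is not the kernel code itself.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative constants -- assumptions for this sketch, not kernel values. */
#define PAGE_SHIFT 16u                      /* 64K pages */
#define PAGE_SIZE  (1ULL << PAGE_SHIFT)

static unsigned int pageblock_order;        /* 0 during early init, set later */

/* Stand-in for CMA_MIN_ALIGNMENT_BYTES: collapses to PAGE_SIZE while
 * pageblock_order is still zero. */
static uint64_t cma_min_alignment_bytes(void)
{
    return PAGE_SIZE << pageblock_order;
}

/* Stand-in for the alignment check applied when the region is reserved. */
static int reservation_accepted(uint64_t base)
{
    return (base & (cma_min_alignment_bytes() - 1)) == 0;
}

/* Stand-in for VM_BUG_ON_PAGE(pfn & ((1 << order) - 1)) at activation time. */
static int activation_hits_bug(uint64_t base)
{
    uint64_t pfn = base >> PAGE_SHIFT;
    return (pfn & ((1ULL << pageblock_order) - 1)) != 0;
}

int main(void)
{
    /* PAGE_SIZE-aligned but not pageblock-aligned; pfn 0x10010 as in the trace. */
    uint64_t base = 0x10010ULL << PAGE_SHIFT;

    pageblock_order = 0;                    /* too-early reservation */
    printf("early reservation accepted: %d\n", reservation_accepted(base));

    pageblock_order = 8;                    /* after set_pageblock_order() */
    printf("later activation hits VM_BUG_ON_PAGE: %d\n", activation_hits_bug(base));
    return 0;
}
```

Deferring the reservation until after initmem_init(), as the fix does by moving fadump_cma_init() into setup_arch(), means the alignment is validated against the final pageblock_order instead.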
redhat

CVE-2024-56677

7 months ago

In the Linux kernel, the following vulnerability has been resolved: powerpc/fadump: Move fadump_cma_init to setup_arch() after initmem_init() During early init CMA_MIN_ALIGNMENT_BYTES can be PAGE_SIZE, since pageblock_order is still zero and it gets initialized later during initmem_init() e.g. setup_arch() -> initmem_init() -> sparse_init() -> set_pageblock_order() One such use case where this causes issue is - early_setup() -> early_init_devtree() -> fadump_reserve_mem() -> fadump_cma_init() This causes CMA memory alignment check to be bypassed in cma_init_reserved_mem(). Then later cma_activate_area() can hit a VM_BUG_ON_PAGE(pfn & ((1 << order) - 1)) if the reserved memory area was not pageblock_order aligned. Fix it by moving the fadump_cma_init() after initmem_init(), where other such cma reservations also gets called. <stack trace> ============== page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x10010 flags: 0x13ffff800000000(node=1|zone=0|lastcpupid=0x7ffff...

CVSS3: 4.7
EPSS: Low (0%)
nvd

CVE-2024-56677

7 months ago

In the Linux kernel, the following vulnerability has been resolved: powerpc/fadump: Move fadump_cma_init to setup_arch() after initmem_init() During early init CMA_MIN_ALIGNMENT_BYTES can be PAGE_SIZE, since pageblock_order is still zero and it gets initialized later during initmem_init() e.g. setup_arch() -> initmem_init() -> sparse_init() -> set_pageblock_order() One such use case where this causes issue is - early_setup() -> early_init_devtree() -> fadump_reserve_mem() -> fadump_cma_init() This causes CMA memory alignment check to be bypassed in cma_init_reserved_mem(). Then later cma_activate_area() can hit a VM_BUG_ON_PAGE(pfn & ((1 << order) - 1)) if the reserved memory area was not pageblock_order aligned. Fix it by moving the fadump_cma_init() after initmem_init(), where other such cma reservations also gets called. <stack trace> ============== page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x10010 flags: 0x13ffff800000000(node=1|zone=0|lastcpupid=0x7f

EPSS: Low (0%)
debian

CVE-2024-56677

7 months ago

In the Linux kernel, the following vulnerability has been resolved: p ...

EPSS: Low (0%)
github

GHSA-gwjv-8q72-w8xf

7 months ago

In the Linux kernel, the following vulnerability has been resolved: powerpc/fadump: Move fadump_cma_init to setup_arch() after initmem_init() During early init CMA_MIN_ALIGNMENT_BYTES can be PAGE_SIZE, since pageblock_order is still zero and it gets initialized later during initmem_init() e.g. setup_arch() -> initmem_init() -> sparse_init() -> set_pageblock_order() One such use case where this causes issue is - early_setup() -> early_init_devtree() -> fadump_reserve_mem() -> fadump_cma_init() This causes CMA memory alignment check to be bypassed in cma_init_reserved_mem(). Then later cma_activate_area() can hit a VM_BUG_ON_PAGE(pfn & ((1 << order) - 1)) if the reserved memory area was not pageblock_order aligned. Fix it by moving the fadump_cma_init() after initmem_init(), where other such cma reservations also gets called. <stack trace> ============== page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x10010 flags: 0x13ffff800000000(node=1|zone=0|lastcpupid=0...

EPSS: Low (0%)
fstec

BDU:2025-05119

7 months ago

A vulnerability in the powerpc/fadump components of the Linux kernel that allows an attacker to escalate privileges on the system

CVSS3: 7.8
EPSS: Low (0%)
redos

ROS-20250410-01

3 months ago

Multiple vulnerabilities in kernel-lt

CVSS3: 7.8
EPSS: Low
suse-cvrf

SUSE-SU-2025:0428-1

5 months ago

Security update for the Linux Kernel

EPSS: Low
suse-cvrf

SUSE-SU-2025:0557-1

5 months ago

Security update for the Linux Kernel

EPSS: Low
suse-cvrf

SUSE-SU-2025:0499-1

5 months ago

Security update for the Linux Kernel

EPSS: Low
suse-cvrf

SUSE-SU-2025:0289-1

6 months ago

Security update for the Linux Kernel

EPSS: Low
oracle-oval

ELSA-2025-20480

4 days ago

ELSA-2025-20480: Unbreakable Enterprise kernel security update (IMPORTANT)

EPSS: Low
