Количество 2
Количество 2
CVE-2024-5184
The EmailGPT service contains a prompt injection vulnerability. The service uses an API service that allows a malicious user to inject a direct prompt and take over the service logic. Attackers can exploit the issue by forcing the AI service to leak the standard hard-coded system prompts and/or execute unwanted prompts. When engaging with EmailGPT by submitting a malicious prompt that requests harmful information, the system will respond by providing the requested data. This vulnerability can be exploited by any individual with access to the service.
GHSA-27m7-5vm3-3prg
The EmailGPT service contains a prompt injection vulnerability. The service uses an API service that allows a malicious user to inject a direct prompt and take over the service logic. Attackers can exploit the issue by forcing the AI service to leak the standard hard-coded system prompts and/or execute unwanted prompts. When engaging with EmailGPT by submitting a malicious prompt that requests harmful information, the system will respond by providing the requested data. This vulnerability can be exploited by any individual with access to the service.
Уязвимостей на страницу
Уязвимость | CVSS | EPSS | Опубликовано | |
|---|---|---|---|---|
CVE-2024-5184 The EmailGPT service contains a prompt injection vulnerability. The service uses an API service that allows a malicious user to inject a direct prompt and take over the service logic. Attackers can exploit the issue by forcing the AI service to leak the standard hard-coded system prompts and/or execute unwanted prompts. When engaging with EmailGPT by submitting a malicious prompt that requests harmful information, the system will respond by providing the requested data. This vulnerability can be exploited by any individual with access to the service. | CVSS3: 6.5 | 0% Низкий | больше 1 года назад | |
GHSA-27m7-5vm3-3prg The EmailGPT service contains a prompt injection vulnerability. The service uses an API service that allows a malicious user to inject a direct prompt and take over the service logic. Attackers can exploit the issue by forcing the AI service to leak the standard hard-coded system prompts and/or execute unwanted prompts. When engaging with EmailGPT by submitting a malicious prompt that requests harmful information, the system will respond by providing the requested data. This vulnerability can be exploited by any individual with access to the service. | CVSS3: 6.5 | 0% Низкий | больше 1 года назад |
Уязвимостей на страницу