-
VU#252619: Multiple deserialization vulnerabilities in PyTorch Lightning 2.4.0 and earlier versions
Overview
PyTorch Lightning versions 2.4.0 and earlier do not use any verification mechanisms to ensure that model files are safe to load before loading them. Users of PyTorch Lightning should use caution when loading models from unknown or unmanaged sources.
Description
PyTorch Lightning, a high-level framework built on top of PyTorch, is designed to streamline deep learning model training, scaling, and deployment. PyTorch Lightning is widely used in AI research and production environments, often integrating with various cloud and distributed computing platforms to manage large-scale machine learning workloads.
PyTorch Lightning contains multiple vulnerabilities related to the deserialization of untrusted data (CWE-502). These vulnerabilities arise from the unsafe use of torch.load(), which is used to deserialize model checkpoints, configurations, and sometimes metadata. While torch.load() provides an optional weights_only=True parameter to mitigate the risks of loading arbitrary code, PyTorch Lightning does not require or enforce this safeguard as a principal security requirement for the product.
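As a minimal illustration of the safeguard described above (the checkpoint path is a placeholder, not part of the PyTorch Lightning API), a checkpoint can be loaded with the restricted unpickler enabled:

    import torch

    checkpoint_path = "model.ckpt"  # placeholder path for illustration

    # weights_only=True restricts deserialization to tensors and other
    # allow-listed types, so arbitrary objects embedded in the file are
    # rejected instead of being unpickled and executed.
    try:
        state = torch.load(checkpoint_path, weights_only=True)
    except Exception as exc:
        # A failure here can indicate the file contains objects beyond
        # plain weights and should not be trusted.
        raise RuntimeError(f"Refusing to load untrusted checkpoint: {exc}") from exc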
Kasimir Schulz of HiddenLayer identified and reported the following five vulnerabilities:
- The DeepSpeed integration in PyTorch Lightning loads optimizer states and model checkpoints without enforcing safe deserialization practices. It does not validate the integrity or origin of serialized data before passing it to torch.load(), allowing deserialization of arbitrary objects.
- The PickleSerializer class directly uses Python’s pickle module to handle data serialization and deserialization. Since pickle inherently allows execution of embedded code during deserialization (see the sketch after this list), any untrusted or manipulated input processed by this class can introduce security risks.
- The _load_distributed_checkpoint component is responsible for handling distributed training checkpoints. It processes model state data across multiple nodes, but it does not include safeguards to verify or restrict the content being deserialized.
- The _lazy_load function is designed to defer loading of model components for efficiency. However, it does not enforce security controls on the serialized input, allowing for the potential deserialization of unverified objects.
- The Cloud_IO module facilitates storage and retrieval of model files from local and remote sources. It provides multiple deserialization pathways, such as handling files from disk, from remote servers, and from in-memory byte streams, without applying constraints on how the serialized data is interpreted.
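The pickle-based risk described for the PickleSerializer above can be illustrated with a short, benign sketch; the class below is purely illustrative and does not come from PyTorch Lightning:

    import pickle

    class Payload:
        # pickle invokes __reduce__ during deserialization, so the callable
        # it returns is executed as soon as the bytes are loaded.
        def __reduce__(self):
            return (print, ("code ran during deserialization",))

    crafted_bytes = pickle.dumps(Payload())

    # Any component that passes untrusted bytes to pickle.loads() runs the
    # embedded callable; torch.load() without weights_only=True is exposed
    # to the same behavior because it falls back to pickle.
    pickle.loads(crafted_bytes)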
Impact
A user could unknowingly load a malicious file from local or remote locations containing embedded code that executes within the system’s context, potentially leading to full system compromise.
Solution
To reduce the risk of deserialization-based vulnerabilities in PyTorch Lightning, users and organizations can implement the following mitigations at the system and operational levels:
- Verify that files to be loaded come from trusted sources and carry valid signatures (see the sketch after this list);
- Use sandboxed environments to prevent the execution of arbitrary commands when untrusted models or files are being used or tested;
- Perform static and dynamic analysis of files to be loaded to verify that the ensuing operations will remain restricted to the data processing needs of the environment;
- Disable unnecessary deserialization features by ensuring that torch.load() is always used with weights_only=True when the files to be loaded are model weights.
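A minimal sketch combining the first and last mitigations, assuming the expected SHA-256 digest of the checkpoint is published out of band by the model provider (the path and digest below are placeholders):

    import hashlib
    import torch

    checkpoint_path = "model.ckpt"                    # placeholder path
    expected_sha256 = "digest published by provider"  # placeholder digest

    # Verify the file's integrity and origin before deserializing anything.
    with open(checkpoint_path, "rb") as fh:
        actual_sha256 = hashlib.sha256(fh.read()).hexdigest()

    if actual_sha256 != expected_sha256:
        raise RuntimeError("Checkpoint digest mismatch; refusing to load.")

    # Load only weights, and only after the check above has passed.
    state = torch.load(checkpoint_path, weights_only=True)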
We have not received a statement from Lightning AI at this time. Please check the Vendor Information section for updates as they become available.
Acknowledgements
Thanks to the reporter, Kasimir Schulz [kschulz@hiddenlayer.com] from HiddenLayer. Thanks to Matt Churilla for verifying the vulnerabilities. This document was written by Renae Metcalf, Vijay Sarvepalli, and Eric Hatleback.
-
VU#726882: Paragon Partition Manager contains five memory vulnerabilities within its BioNTdrv.sys driver that allow for privilege escalation and denial-of-service (DoS) attacks
Overview
Paragon Partition Manager's BioNTdrv.sys driver, versions 1.3.0 and 1.5.1, contains five vulnerabilities. These include arbitrary kernel memory mapping and write vulnerabilities, a null pointer dereference, insecure kernel resource access, and an arbitrary memory move vulnerability. An attacker with local access to a device can exploit these vulnerabilities to escalate privileges or cause a denial-of-service (DoS) scenario on the victim's machine. Additionally, because the attack involves a Microsoft-signed driver, an attacker can leverage a Bring Your Own Vulnerable Driver (BYOVD) technique to exploit systems even if Paragon Partition Manager is not installed. Microsoft has observed threat actors (TAs) exploiting this weakness in BYOVD ransomware attacks, specifically using CVE-2025-0289 to achieve privilege escalation to SYSTEM level and then execute further malicious code. Paragon Software has patched these vulnerabilities, and Microsoft has added the vulnerable BioNTdrv.sys versions to its Vulnerable Driver Blocklist.
Description
Paragon Partition Manager is a software tool from Paragon Software, available in both Community and Commercial versions, that allows users to manage partitions (individual sections) on a hard drive. Paragon Partition Manager uses a kernel-level driver distributed as BioNTdrv.sys. The driver provides low-level access to the hard drive with elevated privileges, allowing it to access and manage data as a kernel device.
Microsoft researchers have identified five vulnerabilities in Paragon Partition Manager version 17.9.1. These vulnerabilities, particularly in BioNTdrv.sys versions 1.3.0 and 1.5.1, allow attackers to achieve SYSTEM-level privilege escalation, which surpasses typical administrator permissions. The vulnerabilities also enable attackers to manipulate the driver via device-specific Input/Output Control (IOCTL) calls, potentially resulting in privilege escalation or system crashes (e.g., a Blue Screen of Death, or BSOD). Even if Paragon Partition Manager is not installed, attackers can install and misuse the vulnerable driver through the BYOVD method to compromise the target machine.
Identified Vulnerabilities:
CVE-2025-0288
An arbitrary kernel memory vulnerability in version 17.9.1 caused by the memmove function, which fails to sanitize user-controlled input. This allows an attacker to write arbitrary kernel memory and achieve privilege escalation.
CVE-2025-0287
A null pointer dereference vulnerability in version 17.9.1 caused by the absence of a valid MasterLrp structure in the input buffer. This allows an attacker to execute arbitrary kernel code, enabling privilege escalation.
CVE-2025-0286
An arbitrary kernel memory write vulnerability in version 17.9.1 due to improper validation of user-supplied data lengths. This flaw can allow attackers to execute arbitrary code on the victim’s machine.
CVE-2025-0285
An arbitrary kernel memory mapping vulnerability in version 17.9.1 caused by a failure to validate user-supplied data lengths. Attackers can exploit this flaw to escalate privileges.
CVE-2025-0289
An insecure kernel resource access vulnerability in version 17.9.1 caused by failure to validate the MappedSystemVa pointer before passing it to HalReturnToFirmware. This allows attackers to compromise the affected service.
Impact
An attacker with local access to a target device can exploit BioNTdrv.sys version 1.3.0 to escalate privileges to SYSTEM level or cause a DoS scenario. Microsoft has observed this driver being used in ransomware attacks, leveraging the BYOVD technique for privilege escalation prior to further malicious code execution.
Solution
Paragon Software has updated Partition Manager and released a new driver, BioNTdrv.sys version 2.0.0, which addresses these vulnerabilities. This new driver is present in version 17.45.0 of Paragon Partition Manager. Ensure your installation of Paragon Partition Manager is updated to the latest version. Users can verify if their Vulnerable Driver Blocklist is enabled under Windows Security settings. On Windows 11 devices, this blocklist is enabled by default. Users can learn more about the Vulnerable Driver Blocklist here: Microsoft Vulnerable Driver Blocklist Information. Enterprise organizations should ensure the blocklist is applied for their user base to prevent potential loading of the vulnerable driver BioNTdrv.sys versions 1.3.0 and 1.5.1 by TAs. This will not prevent exploitation by TAs who already have administrator access.
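As one way to check whether the vulnerable driver is currently loaded, defenders could query the loaded-driver list with Windows' built-in driverquery utility. The sketch below is illustrative only: it checks for the module name rather than the driver version, and the column name assumes an English-language Windows installation.

    import csv
    import io
    import subprocess

    # driverquery ships with Windows; /fo csv produces parseable output.
    output = subprocess.run(
        ["driverquery", "/fo", "csv"],
        capture_output=True, text=True, check=True,
    ).stdout

    rows = csv.DictReader(io.StringIO(output))
    loaded = [r for r in rows if "biontdrv" in r.get("Module Name", "").lower()]

    if loaded:
        print("BioNTdrv.sys appears to be loaded; update Paragon Partition "
              "Manager and confirm the driver version is 2.0.0 or later.")
    else:
        print("BioNTdrv.sys not found among loaded drivers.")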
Acknowledgements
Thanks to Microsoft for reporting the vulnerabilities. This document was written by Christopher Cullen.
-
VU#148244: PandasAI interactive prompt function can be exploited to run arbitrary Python code through prompt injection, which can lead to remote code execution (RCE)
Overview
PandasAI, an open source project by SinaptikAI, has been found vulnerable to Prompt Injection attacks. An attacker with access to the chat prompt can craft malicious input that is interpreted as code, potentially achieving arbitrary code execution. In response, SinaptikAI has implemented specific security configurations to address this vulnerability.
Description
PandasAI is a Python library that allows users to interact with their data using natural language queries. The library parses these queries into Python or SQL code, leveraging a large language model (LLM) (such as OpenAI's GPT or similar) to generate explanations, insights, or code. As part of its setup, users import the Agent class, instantiate it with their data, and establish a connection to the database. Once connected, the AI agent maintains context throughout the discussion, allowing for ongoing exchanges with the user's queries as prompts.
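A typical setup looks roughly like the sketch below; the exact constructor arguments depend on the PandasAI version, and the example assumes an LLM backend (for example, an API key) has already been configured:

    import pandas as pd
    from pandasai import Agent

    # Example data; in practice this is the user's own dataframe or database.
    df = pd.DataFrame({"country": ["US", "DE", "JP"], "revenue": [5000, 3200, 2900]})

    agent = Agent(df)

    # Natural-language queries are translated by the LLM into Python/SQL and
    # executed against the data; this is the surface that the prompt
    # injection described below abuses.
    agent.chat("Which country has the highest revenue?")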
A vulnerability was discovered that enables arbitrary Python code execution through prompt injection. Researchers at NVIDIA demonstrated the ability to bypass PandasAI's restrictions, such as preventing certain module imports, jailbreak protections, and the use of allowed lists. By embedding malicious Python code in various ways via a prompt, attackers can exploit the vulnerability to execute arbitrary code within the context of the process running PandasAI.
This vulnerability arises from the fundamental challenge of maintaining a clear separation between code and data in AI chatbots and agents. In the case of PandasAI, any code generated and executed by the agent is implicitly trusted, allowing attackers with access to the prompt interface to inject malicious Python or SQL code. The security controls of PandasAI (2.4.3 and earlier) fail to distinguish between legitimate and malicious inputs, allowing attackers to manipulate the system into executing untrusted code, leading to remote code execution (RCE), system compromise, or pivoting attacks on connected services. The vulnerability is tracked as CVE-2024-12366. SinaptikAI has introduced new configuration parameters to address this issue and allow users to choose an appropriate security configuration for their installation and setup.
Impact
An attacker with access to the PandasAI interface can perform prompt injection attacks, instructing the connected LLM to translate malicious natural language inputs into executable Python or SQL code. This could result in arbitrary code execution, enabling attackers to compromise the system running PandasAI or maintain persistence within the environment.
Solution
SinaptikAI has introduced a Security parameter to the configuration file of the PandasAI project. Users can now select one of three security configurations:
- Standard: Default security settings suitable for most use cases.
- Advanced: Higher security settings for environments with stricter requirements.
- None: Disables security features (not recommended).
By choosing the appropriate configuration, users can tailor PandasAI's security to their specific needs. SinaptikAI has also released a sandbox. More information regarding the sandbox can be found at the appropriate documentation page.
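A hedged sketch of selecting one of these modes follows; the configuration key and accepted values shown here are assumptions based on the options listed above and should be confirmed against the PandasAI documentation:

    import pandas as pd
    from pandasai import Agent

    df = pd.DataFrame({"item": ["a", "b"], "qty": [1, 2]})

    # "advanced" is used here for illustration; "standard" is the default
    # and "none" disables the security checks (not recommended).
    agent = Agent(df, config={"security": "advanced"})
    agent.chat("Summarize the data.")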
Acknowledgements
Thank you to the reporter, the NVIDIA AI Red Team (Joe Lucas, Becca Lynch, Rich Harang, John Irwin, and Kai Greshake). This document was written by Christopher Cullen.
-
VU#733789: ChatGPT-4o contains security bypass vulnerability through time and search functions called "Time Bandit"
Overview
ChatGPT-4o contains a jailbreak vulnerability called "Time Bandit" that allows an attacker to circumvent the safety guardrails of ChatGPT and instruct it to provide illicit or dangerous content. The jailbreak can be initiated in a variety of ways, but centrally requires the attacker to prompt the AI with questions about a specific time period in history. The jailbreak can be established in two ways: either through the Search function or by prompting the AI directly. Once this historical timeframe has been established in the ChatGPT conversation, the attacker can exploit timeline confusion and procedural ambiguity in subsequent prompts to circumvent the safety guidelines, resulting in ChatGPT generating illicit content. This information could be leveraged at scale by a motivated threat actor for malicious purposes.
Description
"Time Bandit" is a jailbreak vulnerability present in ChatGPT-4o that can be used to bypass the chatbot's safety restrictions and instruct it to generate content that breaks its safety guardrails. An attacker can exploit the vulnerability by beginning a session with ChatGPT and prompting it directly about a specific historical event or time period, or by instructing it to pretend it is assisting the user with a specific historical event. Once this has been established, the user can pivot the received responses to various illicit topics through subsequent prompts. These prompts must be procedural, first instructing the AI to provide further details on the established time period before gradually pivoting to illicit topics. The prompts must all maintain the established timeframe of the conversation; otherwise a prompt will be detected as malicious and removed.
This jailbreak could also be achieved through the "Search" functionality. ChatGPT supports a Search feature, which allows a logged in user to prompt ChatGPT with a question, and it will then search the web based on that prompt. By instructing ChatGPT to search the web for information surrounding a specific historical context, an attacker can then continue the searches within that time frame and eventually pivot to prompting ChatGPT directly regarding illicit subjects through usage of procedural ambiguity.
During testing, the CERT/CC was able to replicate the jailbreak, but ChatGPT removed the prompt and stated that it violated usage policies. Nonetheless, ChatGPT would then proceed to answer the removed prompt. This behavior was replicated several times in a row. The first jailbreak, exploited through repeated direct prompts and procedural ambiguity, did not require authentication. The second, exploited through the Search function, requires the user to be authenticated. During testing, the jailbreak was more successful when using a timeframe within the 1800s or 1900s.
Impact
This vulnerability bypasses OpenAI's security and safety guidelines, allowing an attacker to abuse ChatGPT for instructions on, for example, how to make weapons or drugs, or for other malicious purposes. A jailbreak of this type exploited at scale by a motivated threat actor could result in a variety of malicious actions, such as the mass creation of phishing emails and malware. Additionally, the use of a legitimate service such as ChatGPT can act as a proxy for an attacker, hiding their malicious activities.
Solution
OpenAI has mitigated this vulnerability. An OpenAI spokesperson provided the following statement:
"It is very important to us that we develop our models safely. We don’t want our models to be used for malicious purposes. We appreciate you for disclosing your findings. We’re constantly working to make our models safer and more robust against exploits, including jailbreaks, while also maintaining the models' usefulness and task performance."
Acknowledgements
Thanks to the reporter, Dave Kuszmar, for reporting the vulnerability. This document was written by Christopher Cullen.
-
VU#199397: Insecure Implementation of Tunneling Protocols (GRE/IPIP/4in6/6in4)
Overview
Tunneling protocols are an essential part of the Internet and form much of the backbone that modern network infrastructure relies on today. One limitation of these protocols is that they do not authenticate or encrypt traffic on their own, although IPsec can be implemented alongside them to help prevent attacks. However, these protocols have been implemented insecurely in some deployments.
For the latest security findings from the researchers at the DistriNet-KU Leuven research group, please refer to: https://papers.mathyvanhoef.com/usenix2025-tunnels.pdf
Description
Researchers at the DistriNet-KU Leuven research group have discovered millions of vulnerable Internet systems that accept unauthenticated IPIP, GRE, 4in6, or 6in4 traffic. This can be considered a generalization of the vulnerability in VU#636397: IP-in-IP protocol routes arbitrary traffic by default (CVE-2020-10136). The exposed systems can be abused as one-way proxies, enable an adversary to spoof the source address of packets (CWE-290: Authentication Bypass by Spoofing), or permit access to an organization's private network. Vulnerable systems can also facilitate Denial-of-Service (DoS) attacks.
Two types of DoS attacks exploiting this vulnerability can amplify traffic: one concentrates traffic in time ("Tunneled-Temporal Lensing"), and the other loops packets between vulnerable systems, resulting in amplification factors of at least 13-fold and 75-fold, respectively. Additionally, the researchers identified an Economic Denial of Sustainability (EDoS) attack, in which the outgoing bandwidth of a vulnerable system is drained, raising operating costs if the system is hosted by a third-party cloud service provider.
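Administrators who want to check whether a host they operate decapsulates unauthenticated IP-in-IP traffic (the same idea applies to GRE, 4in6, and 6in4) could send a crafted encapsulated probe. The sketch below uses Scapy with documentation-range placeholder addresses, requires raw-socket privileges, and should only be run against systems you own:

    from scapy.all import IP, ICMP, send

    tunnel_host = "192.0.2.10"         # placeholder: system under test
    inner_destination = "203.0.113.5"  # placeholder: host you control and monitor
    spoofed_source = "198.51.100.1"    # placeholder: arbitrary inner source address

    # The outer IPv4 header carries protocol 4 (IP-in-IP); a vulnerable host
    # strips it and forwards the inner packet without checking who sent it.
    probe = IP(dst=tunnel_host) / IP(src=spoofed_source, dst=inner_destination) / ICMP()
    send(probe)

    # If inner_destination receives the ICMP echo request with the spoofed
    # source address, the tested system accepts unauthenticated tunneling traffic.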
Impact
An adversary can abuse these security vulnerabilities to create one-way proxies and spoof source IPv4/6 addresses. Vulnerable systems may also allow access to an organization's private network or be abused to perform DDoS attacks.
Solution
See the "Defences" section in the researcher's publication https://papers.mathyvanhoef.com/usenix2025-tunnels.pdf
Acknowledgements
Thanks to the researchers Mathy Vanhoef and Angelos Beitis of the DistriNet-KU Leuven research group for the initial discovery and research. This document was written by Ben Koo.
CVE-2024-7595
GRE and GRE6 Protocols (RFC2784) do not validate or verify the source of a network packet, allowing an attacker to route arbitrary traffic via an exposed network interface that can lead to spoofing, access control bypass, and other unexpected network behaviors. This can be considered similar to CVE-2020-10136.
CVE-2024-7596
Proposed Generic UDP Encapsulation (GUE) (IETF draft-ietf-intarea-gue*) does not validate or verify the source of a network packet, allowing an attacker to route arbitrary traffic via an exposed network interface that can lead to spoofing, access control bypass, and other unexpected network behaviors. This can be considered similar to CVE-2020-10136.
*Note: GUE Draft is expired and no longer canonical.
CVE-2025-23018
The IPv4-in-IPv6 and IPv6-in-IPv6 protocols (RFC2473) do not require the validation or verification of the source of a network packet, allowing an attacker to route arbitrary traffic via an exposed network interface that can lead to spoofing, access control bypass, and other unexpected network behaviors. This can be considered similar to CVE-2020-10136.
CVE-2025-23019
The IPv6-in-IPv4 protocol (RFC4213) does not require authentication of incoming packets, allowing an attacker to route traffic via an exposed network interface that can lead to spoofing, access control bypass, and other unexpected network behaviors.
Note: CVE-2024-7595, CVE-2024-7596, and CVE-2025-23018 are considered similar to CVE-2020-10136 in that they highlight the inherent weakness that these protocols do not validate or verify the source of a network packet. These distinct CVEs are meant to specify the different protocols in question that are vulnerable.
For reference: (CVE-2020-10136) Multiple products that implement the IP Encapsulation within IP (IPIP) standard (RFC 2003, STD 1) decapsulate and route IP-in-IP traffic without any validation, which could allow an unauthenticated remote attacker to route arbitrary traffic via an exposed network interface and lead to spoofing, access control bypass, and other unexpected network behaviors.