CERT Recently Published Vulnerability Notes
CERT publishes vulnerability advisories called "Vulnerability Notes." Vulnerability Notes include summaries, technical details, remediation information, and lists of affected vendors. Many vulnerability notes are the result of private coordination and disclosure efforts.

  • VU#163057: BMC software fails to validate IPMI session.

    Overview

    The Intelligent Platform Management Interface (IPMI) implementations in multiple manufacturers' Baseboard Management Controller (BMC) software are vulnerable to IPMI session hijacking. An attacker with access to the BMC network (with IPMI enabled) can abuse the lack of session integrity to hijack sessions and execute arbitrary IPMI commands on the BMC.

    Description

    IPMI is a computer interface specification that provides low-level management capabilities independent of the host system's CPU, firmware, and operating system. IPMI is supported by many BMC manufacturers to allow for transparent access to hardware. IPMI also supports pre-boot capabilities of a computer, such as selection of boot media and boot environment. It is recommended that BMCs be reachable only via dedicated internal networks to limit exposure.

    IPMI sessions between a client and a BMC follow the RAKP key-exchange protocol, as specified in the IPMI 2.0 specification. This protocol uses a session ID and a BMC random number to uniquely identify an IPMI session. The security researcher, who wishes to remain anonymous, disclosed two vulnerabilities related to BMC software and session management. The first vulnerability concerns the use of weak randomization in IPMI sessions with a BMC: the researcher discovered that if both the IPMI session ID and the BMC's random number are predictable or constant, an attacker can hijack or replay a session without knowing the password that protects the BMC. The second vulnerability concerns cases where the BMC software fails to enforce previously negotiated IPMI 2.0 session parameters, allowing an attacker to downgrade or disable session verification. Because software and libraries are reused, these vulnerabilities may be present in multiple models of BMC. Operators of datacenters and cloud installations with multiple servers should protect IPMI session interaction both by applying software updates and by following the recommendations to secure and isolate the networks where IPMI is accessible.
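
    Because the attack hinges on the predictability of the session ID and the BMC random number, a defender can sanity-check a BMC by sampling those values across repeated session openings. The following is a minimal, hypothetical Python sketch of such a check; it assumes the 32-bit values have already been collected from captured RAKP open-session responses (e.g., from a packet capture of UDP port 623), and the entropy threshold is illustrative:

      import math
      from collections import Counter

      def looks_predictable(observed_ids, min_bits=20.0):
          """Heuristic check over 32-bit session IDs / BMC random numbers
          sampled from repeated RAKP open-session exchanges. Returns True
          if the values repeat or show suspiciously low entropy."""
          # Repeated values indicate a constant or reused nonce outright.
          if len(observed_ids) != len(set(observed_ids)):
              return True
          # Crude entropy estimate over the byte distribution of the samples.
          counts = Counter(b for v in observed_ids for b in v.to_bytes(4, "big"))
          total = sum(counts.values())
          bits_per_byte = -sum(c / total * math.log2(c / total)
                               for c in counts.values())
          return bits_per_byte * 4 < min_bits  # far below 32 bits suggests a weak RNG

      # A BMC that simply increments its "random" number is trivially predictable.
      print(looks_predictable([0x00001000, 0x00001001, 0x00001002, 0x00001003]))  # True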

    Impact

    An unauthenticated attacker with access to the BMC network can predict IPMI session IDs and/or BMC random numbers to replay a previous session or hijack an IPMI session. This allows the attacker to inject arbitrary commands into the BMC and perform the privileged functions available to it, such as rebooting, powering off, or re-imaging the machine.

    Solution

    Apply an update

    Please consult the Vendor Information section for information provided by BMC vendors to address these vulnerabilities.

    Restrict access

    As a general good security practice, only allow connections from trusted hosts and networks to the BMC network that exposes the IPMI-enabled interface.

    Acknowledgements

    Thanks to the security researcher, who wishes to remain anonymous, for researching and reporting these vulnerabilities.

    This document was written by Ben Koo.



  • VU#238194: R Programming Language implementations are vulnerable to arbitrary code execution during deserialization of .rds and .rdx files

    Overview

    A vulnerability has been discovered in the R language that allows arbitrary code to be executed directly after the deserialization of untrusted data. This vulnerability can be exploited through RDS (R Data Serialization) format files and .rdx files. An attacker can craft malicious RDS or .rdx formatted files to execute arbitrary commands on the victim's device.

    Description

    R supports data serialization, which is the process of turning R objects and data into a format that can then be deserialized in another R session. This will provide a copy of the R objects from the original session.

    The RDS format, which mainly comprises .rds files, is used to save and load serialized R objects. These objects are used to share state and transfer data sets across programs. They are not expected to run code when they are loaded by an R implementation unless prompted by the user. R packages use .rdx files, which contain a list of offsets, lengths, and names, and are accompanied by a .rdb file, which is used to extract more information about those offsets. Both .rdx and .rdb files contain RDS-formatted data. A .rds file functions similarly to a .rdx file but stores only a single R object. Loading a .rds or .rdx file uses the readRDS function: given that information, an R implementation reads the offsets and loads the data.

    R supports lazy evaluation, implemented through a type called Promise, which is represented in the RDS format as PROMSXP. This type is used to manage expressions that are called and completed asynchronously, when their associated values are needed by the program. When constructing an unserialized Promise object from the RDS format, three pieces of data are required: the value of the Promise, the expression, and the environment. This information is loaded by the eval function, which takes an expression, in this case the Promise, and evaluates it within the specified environment.

    The vulnerability occurs when the eval function processes a Promise whose value is unevaluated. Rather than being evaluated at load time, the Promise's expression is executed when the object is first referenced in the program that contains it. A threat actor can embed malicious code in a .rds or .rdx file as the expression behind such an unevaluated value. When an R implementation loads a package containing that file and the promise value is reached, it executes the referenced code. This code is arbitrary and runs before the victim has any opportunity to inspect what functions or objects the loaded file contains.

    Impact

    An attacker can create malicious .rds and .rdx files and use social engineering to distribute those files to execute arbitrary code on the victim's device. Projects that use readRDS on untrusted files are also vulnerable to the attack. Attackers can also leverage system commands to access resources available to the application and exfiltrate data from any environment available to the application on the target device. The code in the malicious files can also be used to access adjacent resources, such as other computers and devices, devices in a cluster, and shared documents or folders available to the application.

    Solution

    Apply Updates

    The R Project has released R Core version 4.4.0, which addresses the vulnerability: promises in the serialization stream are now restricted so that they are not used to implement lazy evaluation. Apply the update at your earliest convenience.

    Secure or Sandbox RDS file usage

    Handle untrusted or third-party .rds, .rdb, and .rdx files in containers or in a sandboxed environment to prevent unexpected access to resources.
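
    As one illustration of the sandboxing advice, untrusted .rds files can be inspected inside a disposable container with no network access. The sketch below is a hypothetical example, not an official tool; it assumes Docker and the public r-base image are available, and the paths are placeholders:

      import subprocess

      def inspect_rds_in_container(host_dir, rds_name):
          """Load an untrusted .rds file inside a throwaway, network-less
          container so that any code it triggers cannot reach the host
          network or writable filesystem."""
          cmd = [
              "docker", "run", "--rm",
              "--network", "none",           # no network: blocks exfiltration
              "-v", f"{host_dir}:/data:ro",  # mount the file read-only
              "r-base",                      # assumed: the public R base image
              "Rscript", "-e", f'str(readRDS("/data/{rds_name}"))',
          ]
          return subprocess.run(cmd, capture_output=True, text=True)

      result = inspect_rds_in_container("/tmp/untrusted", "model.rds")
      print(result.stdout or result.stderr)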

    Acknowledgements

    Thanks to the reporters, Kasimir Schulz and Kieran Evans of HiddenLayer, for reporting this vulnerability. This document was written by Christopher Cullen.



  • VU#253266: Keras 2 Lambda Layers Allow Arbitrary Code Injection in TensorFlow Models

    Overview

    Lambda Layers in third-party TensorFlow-based Keras models allow attackers to inject arbitrary code into models built prior to Keras 2.13; that code may then run unsafely with the same permissions as the loading application. For example, an attacker could use this feature to trojanize a popular model, save it, and redistribute it, tainting the supply chain of dependent AI/ML applications.

    Description

    TensorFlow is a widely-used open-source software library for building machine learning and artificial intelligence applications. The Keras framework, implemented in Python, is a high-level interface to TensorFlow that provides a wide variety of features for the design, training, validation and packaging of ML models. Keras provides an API for building neural networks from building blocks called Layers. One such Layer type is a Lambda layer that allows a developer to add arbitrary Python code to a model in the form of a lambda function (an anonymous, unnamed function). Using the Model.save() or save_model() method, a developer can then save a model that includes this code.
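
    To make the mechanism concrete, the following minimal sketch (assuming Keras 2.x as bundled with TensorFlow) builds a model containing a Lambda layer and saves it in the native Keras v3 format; the lambda body here is benign, but it could be any Python expression:

      import tensorflow as tf

      # A toy model whose Lambda layer carries arbitrary Python code.
      model = tf.keras.Sequential([
          tf.keras.layers.Lambda(lambda x: x * 2.0, input_shape=(4,)),
      ])
      model.save("lambda_model.keras")  # the lambda travels with the model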

    The Keras 2 documentation for the Model.load_model() method describes a mechanism, controlled by the safe_mode argument, for disallowing the loading of a native version 3 Keras model (.keras file) that includes a Lambda layer:

    safe_mode: Boolean, whether to disallow unsafe lambda deserialization. When safe_mode=False, loading an object has the potential to trigger arbitrary code execution. This argument is only applicable to the TF-Keras v3 model format. Defaults to True.

    This is the behavior of version 2.13 and later of the Keras API: an exception will be raised in a program that attempts to load a model with Lambda layers stored in version 3 of the format. This check, however, does not exist in the prior versions of the API. Nor is the check performed on models that have been stored using earlier versions of the Keras serialization format (i.e., v2 SavedModel, legacy H5).
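
    Under Keras 2.13 and later, loading that model back from the v3 format with default settings fails, and loading it anyway requires an explicit opt-out. A sketch of the loading side, under the same assumptions as above:

      import tensorflow as tf

      try:
          # Default safe_mode=True: lambda deserialization is disallowed.
          model = tf.keras.models.load_model("lambda_model.keras")
      except ValueError as err:
          print("refused to load Lambda layer:", err)

      # Explicit opt-out: only for models you fully trust, since the
      # embedded lambda executes with the application's permissions.
      model = tf.keras.models.load_model("lambda_model.keras", safe_mode=False)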

    This means systems incorporating versions of the Keras code base prior to 2.13 may be susceptible to running arbitrary code when loading older versions of TensorFlow-based models.

    Similarity to other frameworks with code injection vulnerabilities

    The code injection vulnerability in the Keras 2 API is an example of a common security weakness in systems that provide a mechanism for packaging data together with code. For example, the security issues associated with the Pickle mechanism in the standard Python library are well documented, and arise because the Pickle format includes a mechanism for serializing code inline with its data.
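
    The Pickle weakness referenced above is easy to demonstrate: any object can define a __reduce__ method naming a callable to invoke at load time, so unpickling untrusted bytes amounts to code execution. A minimal illustration with a harmless payload:

      import pickle

      class Payload:
          def __reduce__(self):
              # At unpickle time, pickle calls print(...). A malicious file
              # would name os.system or a similar callable here instead.
              return (print, ("code ran at load time",))

      blob = pickle.dumps(Payload())
      pickle.loads(blob)  # executes the callable before returning an object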

    Explicit versus implicit security policy

    The TensorFlow security documentation at https://github.com/tensorflow/tensorflow/blob/master/SECURITY.md includes a specific warning about the fact that models are not just data, and makes a statement about the expectations of developers in the TensorFlow development community:

    Since models are practically programs that TensorFlow executes, using untrusted models or graphs is equivalent to running untrusted code. (emphasis in earlier version)

    The implications of that statement are not necessarily widely understood by all developers of TensorFlow-based systems. The last few years have seen rapid growth in the community of developers building AI/ML-based systems and publishing pretrained models through community hubs like Hugging Face (https://huggingface.co/) and Kaggle (https://www.kaggle.com). It is not clear that all members of this new community understand the potential risk posed by a third-party model; they may (incorrectly) trust that a model loaded using a trusted library will only execute code that is included in that library. Moreover, a user may also assume that a pretrained model, once loaded, will only execute code whose purpose is to compute a prediction, without side effects beyond those required for the calculation (e.g., that a model will not include code to communicate with a network).

    To the degree possible, AI/ML framework developers and model distributors should strive to align the explicit security policy and the corresponding implementation to be consistent with the implicit security policy implied by these assumptions.

    Impact

    Loading third-party models built using Keras could result in arbitrary untrusted code running at the privilege level of the ML application environment.

    Solution

    Upgrade to Keras 2.13 or later. When loading models, ensure the safe_mode parameter is not set to False (per https://keras.io/api/models/model_saving_apis/model_saving_and_loading, it is True by default). Note: an upgrade of Keras may require upgrading its dependencies; learn more at https://keras.io/getting_started/

    If running pre-2.13 applications in a sandbox, ensure no assets of value are in scope of the running application to minimize potential for data exfiltration.

    Advice for Model Users

    Model users should only use models developed and distributed by trusted sources, and should always verify the behavior of models before deployment. They should apply the same development and deployment best practices to applications that integrate ML models as they would to any application incorporating third-party components. Developers should upgrade to the latest version of the Keras package practical (v2.13+ or v3.0+), and use version 3 of the Keras serialization format both to load third-party models and to save any subsequent modifications.

    Advice for Model Aggregators

    Model aggregators should distribute models based on the latest, safe model formats when possible, and should incorporate scanning and introspection features to identify models that include unsafe-to-deserialize features, either preventing them from being uploaded or flagging them so that model users can perform additional due diligence.

    Advice for Model Creators

    Model creators should upgrade to the latest versions of the Keras package (v2.13+ or v3.0+). They should avoid the use of unsafe-to-deserialize features in order to avoid the inadvertent introduction of security vulnerabilities, and to encourage the adoption of standards that are less susceptible to exploitation by malicious actors. Model creators should save models using the latest version of formats (Keras v3 in the case of the Keras package), and, when possible, give preference to formats that disallow the serialization of models that include arbitrary code (i.e., code that the user has not explicitly imported into the environment). Model developers should re-use third-party base models with care, only building on models from trusted sources.

    General Advice for Framework Developers

    AI/ML-framework developers should avoid the use of naïve language-native serialization facilities (e.g., the Python pickle package has well-established security weaknesses, and should not be used in sensitive applications).

    In cases where it is desirable to include a mechanism for embedding code, restrict the code that can be executed, for example by:

    • disallowing certain language features (e.g., exec)
    • explicitly allowing only a "safe" language subset (a sketch of this approach follows the list)
    • providing a sandboxing mechanism (e.g., to prevent network access) to minimize potential threats
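
    As a rough illustration of the allowlist approach (a sketch of the general idea, not any framework's actual implementation), embedded expressions can be parsed and checked against a small set of permitted AST node types before evaluation; the permitted set below is an assumption chosen for the example:

      import ast

      # Hypothetical allowlist: arithmetic over names and constants only.
      ALLOWED_NODES = (
          ast.Expression, ast.BinOp, ast.UnaryOp, ast.Constant,
          ast.Name, ast.Load, ast.Add, ast.Sub, ast.Mult, ast.Div, ast.USub,
      )

      def eval_safe_subset(expr, variables):
          """Evaluate expr only if every AST node is allowlisted; calls,
          attribute access, imports, etc. are rejected outright."""
          tree = ast.parse(expr, mode="eval")
          for node in ast.walk(tree):
              if not isinstance(node, ALLOWED_NODES):
                  raise ValueError(f"disallowed construct: {type(node).__name__}")
          return eval(compile(tree, "<embedded>", "eval"),
                      {"__builtins__": {}}, variables)

      print(eval_safe_subset("x * 2 + 1", {"x": 20}))   # 41
      # eval_safe_subset("__import__('os')", {})        # ValueError: Call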

    Acknowledgements

    This document was written by Jeffrey Havrilla, Allen Householder, Andrew Kompanek, and Ben Koo.



  • VU#123335: Multiple programming languages fail to escape arguments properly in Microsoft Windows

    Overview

    Various programming languages lack proper validation mechanisms for commands, and in some cases also fail to escape arguments correctly, when invoking commands within a Microsoft Windows environment. The command injection vulnerability in these programming languages, when running on Windows, allows attackers to execute arbitrary code disguised as arguments to the command. This vulnerability may also affect applications that execute commands without specifying the file extension.

    Description

    Programming languages typically provide a way to execute commands on the operating system (e.g., os/exec in Go) to facilitate interaction with the OS. They also typically allow passing arguments, which are considered data (or variables) for the command to be executed. The arguments themselves are expected not to be executable, and the command is expected to be executed along with properly escaped arguments as its inputs. Microsoft Windows typically processes these commands using the CreateProcess function, which spawns cmd.exe to execute certain commands such as batch files. Microsoft documented some of the concerns related to how arguments should be properly escaped before execution as early as 2011; see https://learn.microsoft.com/en-us/archive/blogs/twistylittlepassagesallalike/everyone-quotes-command-line-arguments-the-wrong-way.

    A vulnerability was discovered in the way multiple programming languages fail to properly escape arguments in a Microsoft Windows command-execution environment. This can lead to confusion at execution time, where an expected argument for a command is executed as another command itself. An attacker with knowledge of the programming language can carefully craft inputs that the compiled program will process as commands. This unexpected behavior is due to a lack of neutralization of arguments by the programming language (or its command-execution module) that initiates a Windows execution environment. The researcher found that multiple programming languages and their command-execution modules fail to perform such sanitization and/or validation before processing inputs in their runtime environment.

    Impact

    Successful exploitation of this vulnerability permits an attacker to execute arbitrary commands. The complete impact of this vulnerability depends on the implementation that uses a vulnerable programming language or such a vulnerable module.

    Solution

    Update the runtime environment

    Please consult the Vendor Information section to see if your programming language vendor has released a patch for this vulnerability, and update the runtime environment to prevent abuse of this vulnerability.

    Update the programs and escape manually

    If the runtime of your application does not provide a patch for this vulnerability and you want to execute batch files with user-controlled arguments, you will need to perform the escaping and neutralization of the data yourself to prevent any unintended command execution.
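
    Where no runtime patch is available, one conservative approach (a sketch of the idea, not the researcher's published mitigation) is to reject arguments containing cmd.exe metacharacters before invoking a batch file, rather than attempting to reproduce cmd.exe's escaping rules:

      import subprocess

      # Characters cmd.exe treats specially; rejecting them outright is
      # safer than trying to escape them correctly in every case.
      CMD_METACHARACTERS = set('&|<>^%!"\r\n')

      def run_batch_safely(script, args):
          """Run a .bat/.cmd script, refusing arguments that cmd.exe
          could reinterpret as additional commands."""
          for arg in args:
              if CMD_METACHARACTERS & set(arg):
                  raise ValueError(f"unsafe characters in argument: {arg!r}")
          return subprocess.run([script, *args], shell=False, check=True)

      # Usage (assumes a hypothetical greet.bat exists on the system):
      # run_batch_safely("greet.bat", ["Alice"])       # plain data passes
      # run_batch_safely("greet.bat", ['"&calc.exe'])  # raises ValueError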

    The security researcher's blog post provides more detailed information on the specific languages that were identified and their status.

    Acknowledgements

    Thanks to the reporter, RyotaK. This document was written by Timur Snoke.



  • VU#155143: Linux kernel on Intel systems is susceptible to Spectre v2 attacks

    Overview

    A new cross-privilege Spectre v2 vulnerability has been discovered that impacts modern CPU architectures supporting speculative execution. CPU hardware utilizing speculative execution that is vulnerable to Spectre v2 branch history injection (BHI) is likely affected. An unauthenticated attacker can exploit this vulnerability to leak privileged memory from the CPU by speculatively jumping to a chosen gadget. Current research shows that the existing mitigation techniques of disabling privileged eBPF and enabling (Fine)IBT are insufficient to stop BHI exploitation against the kernel/hypervisor.

    Description

    Speculative execution is an optimization technique in which a computer system performs some task preemptively to improve performance and provide additional concurrency when extra resources are available. However, speculative execution leaves traces of memory accesses and computations in the CPU's caches, buffers, and branch predictors. Attackers can take advantage of these traces and, in some cases, also influence speculative execution paths via malicious software to infer privileged data belonging to a distinct execution context. See the article Spectre Side Channels for more information. Attackers exploiting Spectre v2 take advantage of the speculative execution of indirect branch predictors: by poisoning the branch target buffer that a CPU uses to predict indirect branch addresses, they steer speculative execution to gadget code, leaking arbitrary kernel memory and bypassing all currently deployed mitigations.

    Current mitigations rely on the unavailability of exploitable gadgets to eliminate the attack surface. However, the researchers demonstrated that, using their gadget analysis tool InSpectre Gadget, they can uncover new exploitable gadgets in the Linux kernel, and that those gadgets are sufficient to bypass deployed Intel mitigations.

    Impact

    An attacker with access to CPU resources may be able to read arbitrary privileged data or system register values by speculatively jumping to a chosen gadget.

    Solution

    Please update your software according to the recommendations from the respective vendors, applying the latest mitigations available to address this vulnerability and its variants.
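
    On Linux, the kernel reports its current Spectre mitigation status through sysfs, which offers a quick way to verify that an update took effect. A small sketch (the exact status strings vary by kernel version and CPU):

      from pathlib import Path

      # The kernel exposes one file per known speculative-execution issue.
      vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")
      for entry in sorted(vuln_dir.glob("spectre*")):
          # Values typically read "Mitigation: ..." or "Vulnerable: ...".
          print(f"{entry.name}: {entry.read_text().strip()}")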

    Acknowledgements

    Thanks to Sander Wiebing, Alvise de Faveri Tron, Herbert Bos, and Cristiano Giuffrida from the VUSec group at VU Amsterdam for discovering and reporting this vulnerability, as well as supporting coordinated disclosure. This document was written by Dr. Elke Drennan, CISSP.