How Risky Is Your Open-Source LLM Project? A New Research Explains The Risk Factors Associated With Open-Source LLMs

Large Language Models (LLMs) and Generative AI, such as GPT engines, have been creating big waves in the AI domain recently, and there is a big hype in the market, both among retail individuals and corporates, to ride this new tech wave. However, as this technology is rapidly taking over multiple use cases in the market, we need to pay more attention to the security aspects of it and pay attention in greater detail to the risk associated with its usage, especially the open-source LLMs. 

In recent research conducted by Rezilion, a renowned automated software supply chain security platform, experts have investigated this exact issue, and the findings will surprise us. They considered all the projects that fit these criteria:

  1. Projects must have been created eight months ago or less (approx November 2022, to June 2023, at the time of this paper’s publication) 
  2. Projects are related to the topics: LLM, ChatGPT, Open-AI, GPT-3.5, or GPT-4 
  3. Projects must have at least 3,000 stars on GitHub. 

These criteria have ensured that all the major projects come under the research. 

To articulate their research, they have used a framework called OpenSSF Scorecard. Scorecard is a SAST tool created by the Open Source Security Foundation (OSSF). Its goal is to assess the security of open-source projects and help improve them. The assessment is based on different facts about the repository, such as its number of vulnerabilities, how often it’s being maintained, if it contains binary files and many more.

The purpose of all the checks together is to ensure adherence to security best practices and industry standards. Each check has a risk level associated with it. The risk level represents the estimated risk associated with not adhering to a specific best practice and adds weight to the score accordingly.

Currently, 18 checks can be divided into three themes: holistic security practices, source code risk assessment, and build process risk assessment. The OpenSSF Scorecard assigns an ordinal score between 0 to 10 and a risk level score for each check.

It turns out that almost all of these LLMs (open-sourced) and projects deal with major security concerns, which the experts have categorized as follows:

1.Trust boundary risk 

Risks such as inadequate sandboxing, unauthorized code execution, SSRF vulnerabilities, insufficient access controls, and even prompt injections fall under the general concept of trust boundaries.

Anyone can inject any malicious nlp masked command, which can cross multiple channels and severely affect the entire software chain.

One of the popular examples is CVE-2023-29374 Vulnerability in LangChain (3rd most popular open source gpt)

2. Data management Risk 

Data leakage and training data poisoning fall under the data management risks category. These risks pertain to any machine-learning system and are not just restricted to Large Language Models.

Training data poisoning refers to deliberately manipulating an LLM’s training data or fine-tuning procedures by an attacker to introduce vulnerabilities, backdoors, or biases that can undermine the model’s security, effectiveness, or ethical behavior. This malicious act aims to compromise the integrity and reliability of the LLM by injecting misleading or harmful information during the training process.

 3. Inherent Model Risk

These security concerns occur due to the limitation of the underlying ML model: inadequate AI alignment and overreliance on LLM-generated content.

 4. Basic Security Best Practices

It consists of issues such as improper error handling or insufficient access controls that fall under general security best practices. They are common not to machine learning models in general and not specifically to LLMs.

The astonishing and concerning fact is the security score all these models have received. The average score among the checked projects was just 4.6 out of 10, the average age was 3.77 months, and the average number of stars was 15,909. The projects that gain popularity comparatively quickly are much more at risk than those built over a long period. 

The company has not only highlighted the security issues these projects are dealing with right now but has also extensively suggested the steps in their research that can be taken to mitigate these risks and make them safer in the longer run. 

In conclusion, the company has highlighted the need for security protocols to be properly administered and ensured, has highlighted the specific security weak points, and has suggested changes that can be done to eradicate such risks. By taking comprehensive risk assessments and robust security measures, organizations can harness the power of open-source LLMs while protecting sensitive information and maintaining a secure environment.


Don’t forget to join our 25k+ ML SubRedditDiscord Channel, and Email Newsletter, where we share the latest AI research news, cool AI projects, and more. If you have any questions regarding the above article or if we missed anything, feel free to email us at Asif@marktechpost.com

🚀 Check Out 100’s AI Tools in AI Tools Club


References:

  • https://www.darkreading.com/tech-trends/open-source-llm-project-insecure-risky-use
  • https://info.rezilion.com/explaining-the-risk-exploring-the-large-language-models-open-source-security-landscape
🐝 Join the Fastest Growing AI Research Newsletter Read by Researchers from Google + NVIDIA + Meta + Stanford + MIT + Microsoft and many others...