Senators Question Meta CEO Zuckerberg Over LLaMA AI Model “Leak”


Two senators questioned Meta CEO Mark Zuckerberg about the leak of the LLaMA AI model, accusing Meta of failing to follow adequate security measures. Meta was asked to explain its security policies and the preventive steps it had taken.

Senators Hold Meta Responsible for the “Leak”

Meta’s groundbreaking large language model, LLaMA, was recently leaked, prompting concern. Sens. Richard Blumenthal (D-CT), chair of the Senate Subcommittee on Privacy, Technology, and the Law, and Josh Hawley (R-MO), its ranking member, wrote a letter raising questions about the leak of the AI model.

The senators fear the leak could enable cybercrimes such as spam, fraud, malware, privacy violations, harassment, and other wrongdoing and harm. The two lawmakers raised many questions about Meta’s security practices, asking what procedure was followed to assess risk before LLaMA’s launch, and saying they are eager to understand what policies and practices are in place to prevent abuse of the model now that it is widely available.

Based on Meta’s answers to their questions, the senators accused the company of improper content filtering and insufficient security measures for the model. OpenAI’s ChatGPT refuses certain requests on ethical and policy grounds: for example, when asked to write a letter impersonating someone’s son and requesting money to get out of a difficult situation, it will decline. LLaMA, by contrast, will fulfill the request and generate the letter. It will also complete requests involving self-harm, crime, and antisemitism.

It is important to understand LLaMA’s varied and distinctive features. It is not only distinct but also one of the most extensive large language models to date, and nearly every popular uncensored LLM today is based on LLaMA. For an open-source model, it is highly sophisticated and accurate. Examples of LLaMA-based LLMs include Stanford’s Alpaca and Vicuna. LLaMA has played an important role in making LLMs what they are today, driving the evolution from low-utility chatbots to fine-tuned models.

LLaMA was released in February. According to the senators, Meta allowed researchers to download the model but did not take security measures such as centralizing or restricting access. The controversy arose when the complete model surfaced on BitTorrent, making it accessible to anyone and everyone and raising concerns about its misuse.

At first, the senators were not even sure there had been a “leak.” But questions arose as the internet filled with AI projects launched by startups, collectives, and academics built on the model. The letter argues that Meta must be held responsible for the potential abuse of LLaMA and should have put at least minimal protections in place before release.

Meta had made LLaMA’s weights available to researchers. Unfortunately, those weights were leaked, making the model globally accessible for the first time.
