Meta Launches Llama 2: Smarter Text-Generating Models

Meta Launches Llama 2, the text-generating AI

Meta has unveiled Llama 2, a new family of AI models designed to power applications along the lines of OpenAI’s ChatGPT, Bing Chat and other modern chatbots. The models are trained on a mix of publicly available data and reportedly perform better than the previous generation of Llama models. The generative AI landscape is expanding rapidly every day.

Meta Launches Llama 2 to generate text just like ChatGPT

Meta has released Llama 2 to the public, allowing anyone to access the models and generate text and code in response to prompts. Llama 2 builds on the success of the original Llama and offers a more robust set of text- and code-generating models than its predecessor. Despite Meta’s initial caution in limiting access to Llama, the model was later leaked online and spread across various AI communities.

Llama 2, which is free for research and commercial use, will be available in pre-trained form for fine-tuning on AWS, Azure and Hugging Face’s AI model hosting platform.
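
As a concrete illustration, here is a minimal sketch of loading one of the pre-trained checkpoints from Hugging Face with the transformers library; the meta-llama/Llama-2-7b-hf model ID and the need for gated-access approval are assumptions about how the hub hosts these weights, not details from this article.

# Minimal sketch: load a pre-trained Llama 2 checkpoint from Hugging Face.
# Assumes `pip install transformers torch` and that access to the gated
# meta-llama/Llama-2-7b-hf repository (an assumed checkpoint name) is granted.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"  # assumed checkpoint identifier

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Generate a short completion from a prompt.
inputs = tokenizer("The generative AI landscape is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))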

Meta says Llama 2 will be easier to run, optimized for Windows thanks to an expanded partnership with Microsoft, as well as for smartphones and PCs packing Qualcomm’s Snapdragon system-on-chip. Qualcomm says it is working to bring Llama 2 to Snapdragon devices in 2024.

Meta outlines the differences between Llama 2 and Llama in a lengthy white paper. Llama 2 comes in two flavors, Llama 2 and Llama 2-Chat, the latter of which is fine-tuned for two-way conversations. Both are further subdivided into versions of varying sophistication: 7 billion, 13 billion and 70 billion parameters.
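
For readers curious what being “fine-tuned for two-way conversations” looks like in practice, below is a rough sketch of the instruction-style prompt format commonly associated with the Llama 2-Chat checkpoints ([INST] and <<SYS>> tags); the exact template is documented in Meta’s release materials, so treat this rendering as an approximation rather than an official specification.

# Rough sketch of the [INST]/<<SYS>> prompt style associated with Llama 2-Chat.
# Treat the exact tags and spacing as an approximation of the template
# described in Meta's release materials.

def build_chat_prompt(system_message: str, user_message: str) -> str:
    """Wrap a system instruction and a user turn in Llama 2-Chat style tags."""
    return (
        "<s>[INST] <<SYS>>\n"
        f"{system_message}\n"
        "<</SYS>>\n\n"
        f"{user_message} [/INST]"
    )

# The three advertised model sizes, for reference.
LLAMA2_SIZES = ["7B", "13B", "70B"]

prompt = build_chat_prompt(
    "You are a helpful, honest assistant.",
    "Summarize what Llama 2 is in one sentence.",
)
print(prompt)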

The two trillion tokens Llama 2 was trained on also allow its parameters, the values that define the model’s skill at generating text, to be tuned more precisely.
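
Since “tokens” can be an unfamiliar unit, here is a small sketch of how a tokenizer chops raw text into the sub-word pieces that training-set sizes like “two trillion tokens” are counted in; the meta-llama/Llama-2-7b-hf tokenizer name is an assumption carried over from the earlier example, and any other tokenizer would illustrate the same idea.

# Minimal sketch: count the sub-word tokens in a sentence, the unit in which
# figures like "two trillion training tokens" are measured.
# Assumes transformers is installed and the (assumed) meta-llama/Llama-2-7b-hf
# tokenizer is accessible.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

text = "Meta has unveiled Llama 2, a new family of AI models."
token_ids = tokenizer.encode(text)
print(tokenizer.convert_ids_to_tokens(token_ids))
print(f"{len(token_ids)} tokens")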

By comparison, Google reportedly trained PaLM 2 on 3.6 trillion tokens, and GPT-4 is speculated to have been trained on trillions of tokens as well. According to Meta’s white paper, the training data was sourced from the web, mostly in English, and focused on text of a “factual” nature rather than data from Meta’s own products or services.

AI companies are likely hesitant to disclose training details for both competitive and legal reasons. Just today, thousands of authors signed a letter demanding that tech companies stop using their writing to train AI models without consent or payment. However, that is a topic for another discussion.

Human evaluators find Llama 2 roughly as “helpful” as ChatGPT across a set of around 4,000 prompts designed to probe “helpfulness” and “safety.” In a range of benchmarks, however, Llama 2 performs slightly worse than GPT-4 and PaLM 2, with a significant gap between it and GPT-4 in computer programming.

Meta’s tests cannot possibly account for every real-world scenario, and its benchmarks may lack diversity in areas such as coding and human reasoning, so it is important to take the results with a grain of salt.

Meta acknowledges that Llama 2, like all generative AI models, has biases along certain axes. For example, its training data is inequitably weighted, causing it to generate “he” pronouns more frequently than “she” pronouns.

The model also tends to generate toxic text at times, a consequence of toxic content in its training data. Additionally, it has a Western skew, caused in part by data imbalances, including an abundance of the words “Christian,” “Catholic” and “Jewish.”

The models also tend to be overly cautious, declining some requests or responding with excessive safety details. Separately, Meta is partnering with Microsoft to make Azure AI Content Safety, a service that detects “inappropriate” content in AI-generated images and text, available for the Llama 2 models hosted on Azure, with the aim of reducing toxic outputs.
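
As a rough illustration of where such a safety layer sits in the pipeline, here is a purely hypothetical post-generation filter: the moderation_severity helper and the severity threshold below are invented for illustration and do not reflect the actual Azure AI Content Safety SDK.

# Purely hypothetical sketch of screening model output before returning it.
# moderation_severity stands in for whatever score a real content-safety
# service would return; it is not a real API call.

def moderation_severity(text: str) -> int:
    """Placeholder for a hosted content-safety check (0-7 scale assumed)."""
    blocklist = ["slur", "threat"]  # toy heuristic, illustration only
    return 7 if any(word in text.lower() for word in blocklist) else 0

def respond(generate, prompt: str, max_severity: int = 2) -> str:
    """Generate a reply, then suppress it if the safety check flags it."""
    reply = generate(prompt)
    if moderation_severity(reply) > max_severity:
        return "[response withheld by content filter]"
    return reply

# Example with a stubbed-out model:
print(respond(lambda p: "A harmless answer.", "Tell me about Llama 2."))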

Meta seeks to limit potentially harmful outcomes from Llama 2 by emphasizing in the white paper that users must comply with Meta’s license, acceptable use policy, and guidelines for “safe development and deployment.”

In a blog post, Meta also expresses its belief that openly sharing today’s large language models will support the development of helpful and safe generative AI.

We are eager to see what the world builds with Llama 2. Still, once models are openly released, it is difficult to predict precisely how or where they will be used. The internet moves quickly, so it won’t take long for us to find out.