
New Open Source ChatGPT Clone – Called Dolly

Open source GPT chat has taken another step forward with the release of the Dolly Large Language Model (DLL), created by the enterprise software company Databricks.

The new ChatGPT clone is called Dolly, named after the famous sheep of that name, the first mammal to be cloned.

Open Source Large Language Models

Dolly LLM is the latest manifestation of the growing open source AI movement that seeks to provide greater access to the technology so that it is not monopolized and controlled by large corporations.

One of the concerns driving the open source AI movement is that companies may be reluctant to hand over sensitive data to a third party that controls the AI technology.

Built on Open Source

Dolly was created from an open source model built by the non-profit EleutherAI research institute and the Stanford University Alpaca model, which itself was built from the 65-billion-parameter open source LLaMA model created by Meta.

LLaMA, which stands for Large Language Model Meta AI, is a language model that is trained on publicly available data.

According to an article by Weights & Biases, LLaMA can outperform a number of the top language models (OpenAI GPT-3, Gopher by DeepMind, Chinchilla by DeepMind) despite being smaller.

Creating a Better Dataset

Another inspiration came from an academic paper (SELF-INSTRUCT: Aligning Language Models with Self-Generated Instructions, PDF) that outlined a way to create high-quality, auto-generated question-and-answer training data that is better than the limited public data.

The Self-Instruct research paper states:

“…we curate a set of expert-written instructions for novel tasks, and show through human evaluation that tuning GPT3 with SELF-INSTRUCT outperforms using existing public instruction datasets by a large margin, leaving only a 5% absolute gap behind InstructGPT…

… applying our method to vanilla GPT3, we demonstrate an absolute 33% improvement over the original model on SUPERNATURALINSTRUCTIONS, on par with the performance of InstructGPT… which is trained with private user data and human annotations.”
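For readers curious how this kind of self-generated training data is bootstrapped in practice, the sketch below is a minimal, hypothetical illustration of a Self-Instruct-style loop: a small pool of human-written seed tasks is used to prompt a language model for new instructions, and the model then answers them to grow an instruction-tuning dataset. The complete() helper, the seed tasks, and the number of rounds are assumptions for illustration, not code from the paper.

```python
import json
import random

def complete(prompt: str) -> str:
    """Placeholder for a call to a text-generation model (e.g. GPT-3).
    Swap in your preferred LLM client; this dummy just returns a stub string."""
    return "EXAMPLE OUTPUT (replace with a real model call)"

# A few human-written seed instructions to bootstrap from (illustrative only).
seed_tasks = [
    "Explain the difference between a list and a tuple in Python.",
    "Write a haiku about open source software.",
    "Summarize the plot of Romeo and Juliet in two sentences.",
]

generated = []
for _ in range(10):  # number of bootstrapping rounds (arbitrary for the sketch)
    pool = seed_tasks + [g["instruction"] for g in generated]
    examples = random.sample(pool, k=min(3, len(pool)))
    # Ask the model to propose a new instruction, conditioned on a few examples.
    prompt = ("Come up with one new task instruction, different from these:\n"
              + "\n".join(f"- {e}" for e in examples)
              + "\nNew instruction:")
    new_instruction = complete(prompt).strip()
    # Ask the model to answer its own instruction, producing a training pair.
    answer = complete(f"Instruction: {new_instruction}\nResponse:").strip()
    generated.append({"instruction": new_instruction, "output": answer})

# The resulting pairs can be written out and used as instruction-tuning data.
with open("self_instruct_data.json", "w") as f:
    json.dump(generated, f, indent=2)
```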

The significance of Dolly is that it shows that a useful large language model can be constructed using a smaller but high-quality dataset.

Databricks notes:

“Dolly works by taking an existing open source 6-billion-parameter model from EleutherAI and modifying it ever so slightly to elicit instruction-following capabilities such as brainstorming and text generation not present in the original model, using data from Alpaca.

… We show that anyone can take a dated off-the-shelf open source large language model (LLM) and give it magical ChatGPT-like instruction-following ability by training it in 30 minutes on a single machine, using high-quality training data.

Surprisingly, instruction-following doesn’t seem to require the latest or largest models: our model is only 6 billion parameters, compared to 175 billion for GPT-3.”
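As a rough illustration of the recipe Databricks describes (not their actual training code), the sketch below fine-tunes EleutherAI’s 6-billion-parameter GPT-J on Alpaca-style instruction/response pairs with the Hugging Face transformers library. The data file name, prompt format, and hyperparameters are assumptions chosen for clarity.

```python
# Hypothetical sketch: instruction-tune an existing 6B EleutherAI model on
# Alpaca-style data. Hyperparameters and file paths are illustrative, not
# Databricks' configuration.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)
from datasets import load_dataset

model_name = "EleutherAI/gpt-j-6B"          # the 6B EleutherAI model referenced above
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token   # GPT-J has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Alpaca-style records: {"instruction": ..., "input": ..., "output": ...}
dataset = load_dataset("json", data_files="alpaca_style_data.json")["train"]

def format_example(example):
    # Concatenate instruction and response into one training string.
    prompt = (f"### Instruction:\n{example['instruction']}\n\n"
              f"### Response:\n{example['output']}")
    return tokenizer(prompt, truncation=True, max_length=512)

tokenized = dataset.map(format_example, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="dolly-style-finetune",
        per_device_train_batch_size=4,
        num_train_epochs=1,
        learning_rate=1e-5,
        fp16=True,                          # assumes a GPU with enough memory
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```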

Databricks and Open Source AI

Dolly is said to help democratize artificial intelligence. It’s part of a growing movement that the non-profit organization Mozilla recently joined with the founding of Mozilla.ai. Mozilla is the publisher of the Firefox browser and other open source software.

Read the full Databricks announcement:

Hello Dolly: Democratizing the magic of ChatGPT with open models
