Open Source Language Model Named Dolly 2.0 Trained Similarly To ChatGPT
Databricks announced the release of Dolly 2.0, which it describes as the first open source instruction-tuned language model. It was trained using a methodology similar to InstructGPT's, but with a dataset Databricks claims is higher quality and 100% open source.
This model is free to use, including for commercial purposes, because every part of the model is 100% open source.
Open Source Instruction Training
What makes ChatGPT able to follow directions is the training it receives using techniques outlined in the InstructGPT research paper.
The breakthrough demonstrated with InstructGPT is that better instruction following doesn't require ever-larger models.
By training with human-evaluated questions and answers, OpenAI was able to produce a better language model using one hundred times fewer parameters than the previous model, GPT-3.
Databricks used a similar approach to create a prompt-and-response dataset it calls databricks-dolly-15k.
Their prompt/response dataset was created without scraping web forums or Reddit.
databricks-dolly-15k is a dataset of 15,000 original, human-generated prompt and response pairs created by Databricks employees, designed to train the Dolly 2.0 language model in the same way the InstructGPT methodology was used to create ChatGPT.
The GitHub page for the dataset explains how they did it:
“databricks-dolly-15k is an open source dataset of instruction-following records used in training databricks/dolly-v2-12b that was generated by thousands of Databricks employees in several of the behavioral categories outlined in the InstructGPT paper, including brainstorming, classification, closed QA, generation, information extraction, open QA, and summarization.
…Databricks employees were invited to create prompt / response pairs in each of eight different instruction categories, including the seven outlined in the InstructGPT paper, as well as an open-ended free-form category.
The contributors were instructed to avoid using information from any source on the web with the exception of Wikipedia (for particular subsets of instruction categories), and explicitly instructed to avoid using generative AI in formulating instructions or responses. Examples of each behavior were provided to motivate the types of questions and instructions appropriate to each category.
Halfway through the data generation process, contributors were given the option of answering questions posed by other contributors. They were asked to rephrase the original question and only select questions they could be reasonably expected to answer correctly.”
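To make the shape of those prompt/response pairs concrete, here is a minimal sketch of what a single record in a dataset like databricks-dolly-15k looks like. The field names (`instruction`, `context`, `response`, `category`) follow the published dataset card as best understood, and the example values are illustrative, not taken from the actual dataset:

```python
# Illustrative sketch of one record in an instruction-tuning dataset
# shaped like databricks-dolly-15k. Field names are assumptions drawn
# from the dataset card; verify against the actual files before use.

example_record = {
    "instruction": "When was Databricks founded?",
    "context": "",  # optional reference text, e.g. a Wikipedia excerpt for closed QA
    "response": "Databricks was founded in 2013.",
    "category": "open_qa",  # one of the behavioral categories (brainstorming, summarization, etc.)
}

def records_by_category(records, category):
    """Filter dataset records down to a single behavioral category."""
    return [r for r in records if r["category"] == category]

print(records_by_category([example_record], "open_qa"))
```

Filtering by category like this is how a practitioner might inspect, say, only the closed QA pairs (the ones that lean on Wikipedia context) separately from free-form generation pairs.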
Databricks claims that this may be the very first human-generated instruction dataset created to train a language model to follow instructions the way ChatGPT does.
The challenge was to create a 100% original dataset that had zero ties to ChatGPT or any other source with a restrictive license.
Employees were incentivized by a contest to contribute the 15,000 prompt/response pairs across seven task categories, such as brainstorming, classification, and creative writing.
Databricks asserts that the databricks-dolly-15k training set may be superior to the dataset used to train ChatGPT.
They note that although their dataset is smaller than the one used to train the Stanford Alpaca model, their model performed better because their data is higher quality.
They write:
“Dolly 2.0 model, based on EleutherAI’s pythia-12b, exhibited high-quality instruction following behavior. In hindsight, this isn’t surprising.
Many of the instruction tuning datasets released in recent months contain synthesized data, which often contains hallucinations and factual errors.
databricks-dolly-15k, on the other hand, is generated by professionals, is high quality, and contains long answers to most tasks.
…we don’t expect Dolly to be state-of-the-art in terms of effectiveness.
However, we do expect Dolly and the open source dataset will act as the seed for a multitude of follow-on works, which may serve to bootstrap even more powerful language models.”
Limitations to the Dataset
The GitHub page for the dataset acknowledges that there may be some shortcomings to the dataset.
Wikipedia data was used in creating some of the prompts and responses, so any biases contained in Wikipedia may be reflected in the resulting dataset.
Some of the employees who worked to create the dataset were not native speakers of English, which could introduce some anomalies in the dataset.
The demographic makeup of the employees who created the dataset may itself influence the dataset to contain biases that are peculiar to those employees.
Despite those possible shortcomings, Databricks maintains that its dataset is of higher quality.
Additionally, Dolly 2.0 is meant to serve as a starting point for others to create and innovate even better versions.
Databricks Insists that Open Source AI Is Better
One of the motivations behind creating Dolly 2.0 is that organizations can own the models they create and better safeguard their data by not having to share it with a third party.
They also believe that AI safety should not be concentrated in the hands of three large corporations but spread out among all the stakeholders.
Open source is picking up momentum, and it will be interesting to see where the industry stands within the next two years.
More information on where to download the Dolly 2.0 model and how to use it can be found in their announcement.
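The announcement points to the model weights on Hugging Face. As a hedged sketch of how one might prepare input for the model: instruction-tuned models like Dolly are typically prompted with the same framing used during training. The `### Instruction:` / `### Response:` markers below follow the databricks/dolly-v2-12b model card as best understood; verify them against the repository before relying on them:

```python
# Sketch of wrapping a user instruction in Dolly-style prompt framing.
# The intro sentence and section markers are assumptions based on the
# databricks/dolly-v2-12b model card, not guaranteed to match every version.

INTRO = ("Below is an instruction that describes a task. "
         "Write a response that appropriately completes the request.")

def build_prompt(instruction: str) -> str:
    """Format a raw instruction into the prompt layout Dolly was tuned on."""
    return f"{INTRO}\n\n### Instruction:\n{instruction}\n\n### Response:\n"

print(build_prompt("Explain what instruction tuning is."))

# Actual generation would load the model, e.g. via the transformers
# pipeline with trust_remote_code enabled, per the model card. That step
# downloads a 12-billion-parameter checkpoint, so it is not run here.
```

Matching the training-time prompt layout matters because an instruction-tuned model's ability to follow directions is conditioned on seeing instructions in the format it was trained on.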
Free Dolly: Introducing the World’s First Truly Open Instruction-Tuned LLM
Featured image by Shutterstock/Kamil Macniak