Stability AI launched StableVicuna, the first large-scale open-source chatbot trained with human feedback

Stability AI has introduced StableVicuna, the first large-scale open-source chatbot trained through reinforcement learning from human feedback (RLHF), marking a significant development in the field of AI. StableVicuna is an upgraded version of Vicuna v0 13B, an instruction fine-tuned LLaMA 13B model.

The chatbot has been benchmarked against other similarly sized open-source chatbots, demonstrating high performance. You can ask it to perform basic math, write code, or help you with grammar.

“A Stable Vicuña” – Stable Diffusion XL

The chat model StableVicuna is a further instruction fine-tuned and RLHF-trained version of Vicuna v0 13B. It achieves its performance through a three-stage RLHF pipeline:

  1. Fine-tune the Vicuna model with supervised fine-tuning (SFT) on a mixture of three datasets: OpenAssistant Conversations (OASST1), GPT4All Prompt Generations, and Alpaca. These datasets contain different kinds of conversations, prompts, and instructions. The model was provided with sample data and the corresponding correct responses to learn from.
  2. Train a reward model (RM) using the trlX framework on three RLHF preference datasets: OASST1, Anthropic HH-RLHF, and SHP. The model was initialized from the SFT model and further trained with trlX (a reinforcement learning library). It learned to predict how “good” or “bad” a response was, based on human feedback, and its output is a single scalar value: the reward.
  3. Use trlX to perform Proximal Policy Optimization (PPO) reinforcement learning. The SFT model (from stage 1) was trained again using the RM (from stage 2), guided by human feedback (RLHF). A minimal illustration of this flow follows below.
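
To make the flow concrete, here is a minimal sketch of stages 2–3 using trlX. It is illustrative only, not Stability AI’s released pipeline: `gpt2` stands in for the Vicuna-13B SFT checkpoint, and a toy reward function stands in for the trained reward model.

```python
# Illustrative sketch of the RLHF flow with trlX (not the released pipeline).
# "gpt2" is a small stand-in for the Vicuna-13B SFT model, and reward_fn is a
# toy placeholder for the reward model trained on OASST1, HH-RLHF, and SHP.
import trlx
from trlx.data.default_configs import default_ppo_config

def reward_fn(samples, **kwargs):
    # The real stage-2 reward model returns one scalar per response based on
    # learned human preferences; this heuristic just rewards longer answers.
    return [min(len(s.split()) / 100.0, 1.0) for s in samples]

config = default_ppo_config()
config.model.model_path = "gpt2"            # the stage-1 SFT checkpoint goes here
config.tokenizer.tokenizer_path = "gpt2"

# Stage 3: PPO optimizes the SFT policy against the reward signal.
trainer = trlx.train(
    reward_fn=reward_fn,
    prompts=["How do I bake bread?", "Explain PPO in one sentence."],
    config=config,
)
trainer.save_pretrained("rlhf-sketch-model")
```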

By following this pipeline, Vicuna became StableVicuna, an improved chatbot that can generate more helpful and harmless responses and follow a wider range of instructions.

The company is dedicated to enhancing the chatbot’s performance by making incremental changes, and they intend to launch a Discord bot on the Stable Foundation server in the coming weeks.

Stability AI has announced an upcoming chat interface for StableVicuna (see picture below).

StableVicuna: the upcoming chat interface

The pros and cons of applying RLHF to chatbots

RLHF can significantly speed up the training process compared with traditional reinforcement learning, where the agent learns purely through trial and error. Because RLHF uses human feedback to guide the agent’s learning, the model converges toward the desired behavior more quickly.

Moreover, chatbots trained with RLHF can generate more natural and engaging conversations, as they learn from user feedback in a more personalized and relevant way.

Human feedback rewards good behaviors and fixes bad ones, which helps chatbots learn complex human preferences and avoid biases and toxicity.
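
In practice, this feedback usually enters training as pairwise comparisons: annotators pick the better of two responses, and the reward model learns to score the chosen one higher. Below is a minimal sketch of that standard pairwise loss (a common formulation, not taken from Stability AI’s code):

```python
# Standard pairwise preference loss for reward models: push the scalar score
# of the human-chosen response above that of the rejected one.
import torch
import torch.nn.functional as F

def preference_loss(reward_chosen, reward_rejected):
    # -log sigmoid(r_chosen - r_rejected) is minimized when chosen > rejected.
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# Toy scores for two comparison pairs (values are made up).
loss = preference_loss(torch.tensor([1.2, 0.3]), torch.tensor([0.4, 0.9]))
print(float(loss))  # scalar loss to backpropagate through the reward model
```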

Despite their benefits, RLHF methods have some limitations. One challenge is the difficulty of obtaining high-quality human feedback at scale, which can be time-consuming and expensive. Additionally, human feedback is subjective, varies between trainers, and can be ambiguous or inconsistent.

How you can use StableVicuna

You can test the model on this HuggingFace space and provide feedback to help improve the overall user experience.

You can also download the StableVicuna model and run it locally on your machine. Hugging Face is hosting the model, and Stability AI has provided installation instructions (see Obtaining StableVicuna-13B).
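
Once you have assembled the weights per those instructions (the release is a delta that must be combined with the original LLaMA weights), local inference follows the standard transformers pattern. A minimal sketch, where the local path is a placeholder:

```python
# Minimal inference sketch with transformers; the path below is a placeholder
# for wherever you stored the assembled StableVicuna-13B weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./stable-vicuna-13b"  # hypothetical local path

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,  # 13B in fp16 needs roughly 26 GB of memory
    device_map="auto",          # requires the accelerate package
)

# Vicuna-style models use a simple Human/Assistant prompt format.
prompt = "### Human: What is RLHF?\n### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```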

Moreover, you can create your own model by using the RLHF pipeline code and data that Stability AI released on GitHub. The released model can serve as a base and be further trained on your own dataset.
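
If you already have scored conversation data rather than a live reward model, trlX can also train offline from (sample, reward) pairs. Another hedged sketch; the samples, rewards, and model name below are invented for illustration:

```python
# Offline sketch: fine-tune from pre-scored samples instead of a live reward
# function. Samples, rewards, and "gpt2" are placeholders for your own data.
import trlx

samples = [
    "### Human: Summarize RLHF.\n### Assistant: RLHF fine-tunes a model with "
    "a reward signal learned from human preference comparisons.",
    "### Human: Summarize RLHF.\n### Assistant: It's a thing.",
]
rewards = [1.0, -1.0]  # higher = preferred by your annotators

trainer = trlx.train("gpt2", samples=samples, rewards=rewards)
trainer.save_pretrained("my-rlhf-model")
```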

Conclusion

Stability AI’s StableVicuna is the first large-scale open-source chatbot that incorporates both instruction finetuning and RLHF paradigms.

This achievement was the result of collaboration between OpenAssistant, Anthropic, and Stanford, who made RLHF chat datasets accessible to the public. A key role in this development was played by trlX, a framework designed for RLHF training.

By opening access to everyone regardless of technical expertise or financial resources, the release of StableVicuna is another contribution to the democratization of AI.
