Guanaco Models


Guanaco - Generative Universal Assistant for Natural-language Adaptive Context-aware Omnilingual outputs

Guanaco is a multilingual instruction-following language model based on LLaMA. It is an advanced instruction fine-tuned large language model (LLM). The dataset for the Guanaco model is designed to enhance multilingual capabilities and address a variety of linguistic tasks, and it is publicly available. The model is trained with a modified Alpaca-LoRA setup, in which the LoRA adapters as well as the embed_tokens and lm_head layers are trained.
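As a rough illustration of that training setup, here is a minimal sketch using the Hugging Face PEFT library. The base checkpoint name, LoRA rank, and target modules are assumptions for illustration only; the key point is that modules_to_save keeps embed_tokens and lm_head trainable alongside the LoRA adapters.

```python
# Minimal sketch of a modified Alpaca-LoRA setup (assumed, for illustration):
# LoRA adapters plus the embed_tokens and lm_head layers are trained.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Hypothetical base checkpoint; Guanaco fine-tunes a LLaMA base model.
base = AutoModelForCausalLM.from_pretrained("decapoda-research/llama-7b-hf")

lora_config = LoraConfig(
    r=8,                                  # assumed LoRA rank
    lora_alpha=16,                        # assumed scaling factor
    target_modules=["q_proj", "v_proj"],  # assumed attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
    # Train these full layers in addition to the LoRA adapters,
    # matching the "LoRA + embed_tokens + lm_head" description above.
    modules_to_save=["embed_tokens", "lm_head"],
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only adapters + the two saved modules are trainable
```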

You can think of the Guanaco model as a multi-turn, multilingual chatbot (in EN, JA, DE, and ZH). Remarkably, it has been trained on a single GPU in a single day.

Here, we present the weightwatcher results for all 4 LLaMA sizes. Notice how closely the fine-tuned Guanaco models track the weightwatcher results for the base LLaMA models, which is typical for these kinds of fine-tuned models.
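To produce results like these, one can run weightwatcher directly on a loaded model. Below is a minimal sketch; the checkpoint name is an assumption for illustration.

```python
# Minimal sketch: analyze a model's layer quality metrics with weightwatcher.
# The checkpoint name is an assumption for illustration.
import weightwatcher as ww
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("decapoda-research/llama-7b-hf")

watcher = ww.WeightWatcher(model=model)
details = watcher.analyze(plot=True)    # per-layer metrics; plot=True saves ESD plots
summary = watcher.get_summary(details)  # aggregate metrics, e.g. mean alpha
print(summary)
```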


Primary Reference: https://the-decoder.com/guanaco-is-a-chatgpt-competitor-trained-on-a-single-gpu-in-one-day
Secondary Reference: https://www.youtube.com/watch?v=66wc00ZnUgA
Paper: N/A

Guanaco Models Included

Guanaco Model Set Plots