Unify Efficient Fine-Tuning of 100+ LLMs
🐋 MindChat (漫谈) — a psychology-focused LLM: chat your way along life's road and face its hardships with a smile
Firefly: an LLM training tool supporting training of Qwen2, Yi1.5, Phi-3, Llama3, Gemma, MiniCPM, Yi, Deepseek, Orion, Xverse, Mixtral-8x7B, Zephyr, Mistral, Baichuan2, Llama2, Llama, Qwen, Baichuan, ChatGLM2, InternLM, Ziya2, Vicuna, Bloom, and other large models
Finetuning coding LLM OpenCodeInterpreter-DS-6.7B for Text-to-SQL Code Generation on a Single A100 GPU in PyTorch.
Toolkit for fine-tuning, ablating and unit-testing open-source LLMs.
33B Chinese LLM; DPO QLoRA; 100K context; AirLLM 70B inference on a single 4GB GPU
A bash-scripting assistant that helps you automate tasks. Powered by a Streamlit chat interface; a fine-tuned nl2bash model generates bash code from natural-language descriptions provided by the user
Fine-tuning Llama 3 8B to generate JSON for arithmetic questions, then processing the output to perform the calculations.
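The post-processing half of that idea can be sketched with the standard library alone. A minimal sketch, assuming (hypothetically) the fine-tuned model emits JSON of the form `{"operation": ..., "operands": [...]}`; the schema and the helper name `evaluate_model_output` are illustrative assumptions, not the repo's actual format:

```python
import json

# Assumed (hypothetical) output schema from the fine-tuned model:
#   {"operation": "add", "operands": [2, 3]}
OPS = {
    "add": lambda a, b: a + b,
    "subtract": lambda a, b: a - b,
    "multiply": lambda a, b: a * b,
    "divide": lambda a, b: a / b,
}

def evaluate_model_output(raw: str) -> float:
    """Parse the model's JSON answer and perform the calculation."""
    payload = json.loads(raw)
    op = OPS[payload["operation"]]
    a, b = payload["operands"]
    return op(a, b)

print(evaluate_model_output('{"operation": "multiply", "operands": [6, 7]}'))  # 42
```

Keeping the arithmetic outside the model makes the numeric result exact even when the LLM itself is unreliable at computation.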
Parameter-efficient fine-tuning (PEFT) of the Gemma 2B model using QLoRA
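A typical QLoRA setup with the Hugging Face stack looks like the configuration sketch below. This is a hedged illustration, not the repo's exact recipe: the model id, LoRA rank, and target-module list are assumed example values (loading the weights also requires a GPU and a model download, so this is a config fragment rather than a runnable demo).

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# QLoRA: quantize the frozen base weights to 4-bit NormalFloat...
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2b",                      # illustrative model id
    quantization_config=bnb_config,
    device_map="auto",
)

# ...then train only small LoRA adapters on top of the quantized model.
lora_config = LoraConfig(
    r=16,                                   # example rank, tune per task
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed targets
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()          # only the adapters are trainable
```

The resulting `model` can be passed to a standard `Trainer`; only the adapter weights receive gradients, which is what lets a 2B+ model fine-tune on a single consumer GPU.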
Meta Llama 3 GenAI real-world use cases: an end-to-end implementation guide
End-to-end generative-AI industry projects on LLMs, with deployment
Fine-tune any model on Hugging Face in less than 30 seconds
This repo contains everything about transformers and NLP.
This project fine-tunes large language models (LLMs) for text-based recommendations, using a novel prompt mechanism to improve accuracy and user satisfaction. It demonstrates efficient model adaptation with diverse datasets, leveraging advanced libraries and techniques for optimal performance.
Tuning the Finetuning: An exploration of achieving success with QLoRA
🐳 Aurora is a Chinese-language MoE model. Aurora is a follow-up work built on Mixtral-8x7B that activates the model's Chinese open-domain chat capability.