Adapters + LLama -- re-design. #2526
Comments
I totally agree. What we have done for Llama 2 is QLoRA (quantization + LoRA), and it is only applicable to Llama 2. With new public LLMs being released so often, and all of them needing LoRA for fine-tuning, we need a unified class that can handle LoRA (and QLoRA) for different models. In HF, sft_trainer.py handles this kind of efficient fine-tuning. We also need an easy interface in SpeechBrain for efficient fine-tuning, and it should work well with DDP.
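For reference, the QLoRA recipe (a 4-bit quantized base model with LoRA on top) looks roughly like this with the HF tooling; the model name and target modules below are illustrative, not necessarily what the current Llama 2 recipe does:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the base model with 4-bit NF4 quantization (the "Q" in QLoRA).
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", quantization_config=bnb
)
base = prepare_model_for_kbit_training(base)  # fp32 norms, grad plumbing

# Attach trainable low-rank adapters on top of the frozen 4-bit weights.
lora = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"])
model = get_peft_model(base, lora)
model.print_trainable_parameters()
```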
I think it is very important too. We can support at least LoRA and adapters. I'm not sure what the best way is to support them elegantly. One idea could be to implement a wrapper (e.g., Adapter) that is applied to our models and plugs in the necessary new modules. However, it seems quite hard to create something that is both easy to use and flexible. Any ideas from the community?
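For instance, a minimal sketch of such a wrapper (the HoulsbyAdapter name and the dim/bottleneck arguments are invented for illustration): it freezes the wrapped module and trains only a small residual bottleneck.

```python
import torch
import torch.nn as nn

class HoulsbyAdapter(nn.Module):
    """Wraps an existing module, freezes it, and adds a trainable
    bottleneck (down-projection, non-linearity, up-projection) with
    a residual connection, as in Houlsby et al. (2019)."""

    def __init__(self, wrapped: nn.Module, dim: int, bottleneck: int = 64):
        super().__init__()
        self.wrapped = wrapped
        for p in self.wrapped.parameters():
            p.requires_grad = False  # only the adapter is trained
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        nn.init.zeros_(self.up.weight)  # adapter starts as the identity
        nn.init.zeros_(self.up.bias)

    def forward(self, x):
        out = self.wrapped(x)
        return out + self.up(torch.relu(self.down(out)))
```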
I have used PEFT to apply LoRA to an existing model, and it was pretty straightforward: you just pass the model and it automatically replaces all the relevant layers with LoRA layers. We could do something similar.
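For illustration, this is roughly how that looks following PEFT's custom-model path (the toy model and layer names below are made up; PEFT works on plain nn.Modules, not only HF transformers):

```python
import torch.nn as nn
from peft import LoraConfig, get_peft_model

# A toy stand-in for a SpeechBrain lobe.
class ToyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.seq = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 16))

    def forward(self, x):
        return self.seq(x)

# Target the two Linear layers by their module names.
config = LoraConfig(r=4, target_modules=["seq.0", "seq.2"])
peft_model = get_peft_model(ToyModel(), config)
peft_model.print_trainable_parameters()  # only the LoRA weights train
```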
We could even import and use PEFT directly if its dependencies are light.
Hi @TParcollet, I agree with your strategy.
I don't think the PEFT library is necessary, since the theory/implementation is quite "easy" to reproduce. For instance, ESPnet has its own implementation of LoRA etc. (https://github.com/espnet/espnet/blob/47de29c21f5a7db22089717e92add5e9604fcd48/espnet2/layers/create_adapter_fn.py#L224). We should follow the same strategy and provide our own adapters, because many researchers may want to develop their own designs or modify the code, which may be harder if we mix SpeechBrain and PEFT. Note: we could have our own implementation and also provide a PEFT-compatible wrapper, but I don't know whether that makes sense or is necessary.
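For the record, a self-contained LoRA linear along those lines is short (a sketch only, not ESPnet's code; the class and argument names are invented here):

```python
import math
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """y = W x + (alpha / r) * B A x, with the pre-trained W frozen
    and only the low-rank factors A and B trained."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pre-trained weights
        self.lora_A = nn.Parameter(torch.empty(r, base.in_features))
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        nn.init.kaiming_uniform_(self.lora_A, a=math.sqrt(5))
        # lora_B starts at zero, so the LoRA update is initially a no-op.
        self.scaling = alpha / r

    def forward(self, x):
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling
```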
Alright, I think that there are two problems here. I am happy to do a PR for 1., but it will remain shallow IMHO.
Describe the bug
It's not a bug, just a discussion. I think people including @Adel-Moumen, @poonehmousavi, and @mravanelli, and maybe @pplantinga and @asumagic, may want to engage.
Adapters, or more generally altering an existing pre-trained model (you can see it as an object originating from the Pretrainer or the checkpointer), are becoming more and more common. Because of this, IMHO, we must define a proper design in SpeechBrain for doing so. Recently, I implemented LoRA and Houlsby adapters for our Transformer on my side, but I also realised that @poonehmousavi did some work for Llama here. I don't think we are doing this correctly. The Llama 2 code, for instance, can be hard to understand, and some functions (like the one replacing a module inside an existing module) should be generalised and treated as general SpeechBrain utils. My strategy would be to create an Adapters.py in lobes where we could put everything relating to adapters, instead of having them appear scattered across lobes files.
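For concreteness, here is a rough sketch of what such a generalised util could look like (replace_module and add_adapters are hypothetical names, not existing SpeechBrain functions):

```python
import torch.nn as nn

def replace_module(model: nn.Module, name: str, new_module: nn.Module):
    """Replace the submodule at a dotted path (e.g. "encoder.layers.0.fc")
    with new_module, in place."""
    *path, last = name.split(".")
    parent = model
    for part in path:
        parent = getattr(parent, part)
    setattr(parent, last, new_module)

def add_adapters(model: nn.Module, adapter_fn, targets=(nn.Linear,)):
    """Wrap every matching submodule with an adapter built by adapter_fn."""
    for name, module in list(model.named_modules()):
        if name and isinstance(module, targets):
            replace_module(model, name, adapter_fn(module))
```

With something like this in place, add_adapters(model, LoRALinear) could attach LoRA to every linear layer of any lobe, and Adapters.py would only have to host the adapter classes themselves.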
What do you folks think?
Expected behaviour
Respect the Zen of SpeechBrain.
To Reproduce
No response
Environment Details
No response
Relevant Log Output
No response
Additional Context
No response