
Add LoRA configuration support for fine-tuning module #169

@gkumbhat

Description

Following the refactor in #163, which moved the prompt-tuning-related configuration out of the peft module into a separate toolkit module, we want to add support for parameter-efficient tuning (specifically LoRA) in the fine-tuning module, i.e. text_generation. Changes:

  1. Add LoRA configuration support to the toolkit functionality created in #163 (Refactor peft module to take out common peft config functionality).
  2. Add LoRA configuration and training to the text_generation module (.train).
  3. Expose the parameters required for LoRA configuration via the .train function.
  4. Add support for saving LoRA models with "merged weights". This should be done in the .train function itself, so that the model handed to the __init__ function looks like any other transformers model (see the sketch after this list).
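A minimal sketch of how items 1–4 could fit together, built directly on the Hugging Face peft library. The `train_lora` helper and its parameter names are hypothetical placeholders, not the actual caikit-nlp API; the `LoraConfig`, `get_peft_model`, and `merge_and_unload` calls are real peft APIs:

```python
# Hypothetical sketch: the function name and parameters below are placeholders,
# not the actual text_generation .train API.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM


def train_lora(
    base_model_name: str,
    output_dir: str,
    # Item 3: LoRA parameters exposed through the training entry point
    lora_r: int = 8,
    lora_alpha: int = 16,
    lora_dropout: float = 0.05,
    target_modules=None,
):
    base_model = AutoModelForCausalLM.from_pretrained(base_model_name)

    # Items 1-2: build the LoRA configuration and wrap the base model
    lora_config = LoraConfig(
        r=lora_r,
        lora_alpha=lora_alpha,
        lora_dropout=lora_dropout,
        target_modules=target_modules,
        task_type="CAUSAL_LM",
    )
    peft_model = get_peft_model(base_model, lora_config)

    # ... training loop elided ...

    # Item 4: merge the LoRA adapters back into the base weights so the
    # saved artifact loads like any other transformers model
    merged_model = peft_model.merge_and_unload()
    merged_model.save_pretrained(output_dir)
    return merged_model
```

Merging inside .train means the downstream __init__ path never needs to know the model was LoRA-tuned: it just loads a plain transformers checkpoint.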

Acceptance Criteria

  • Unit tests cover new/changed code
  • Examples build against new/changed code
  • README updated
  • Example for training LoRA added to the example script (a possible invocation is sketched below)
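One possible shape for the example-script invocation, reusing the hypothetical `train_lora` sketch above; the model name and parameter values are illustrative only:

```python
# Hypothetical example-script usage; parameter names would need to match
# whatever the final .train signature actually exposes.
model = train_lora(
    base_model_name="bigscience/bloom-560m",
    output_dir="lora_merged_model",
    lora_r=8,
    lora_alpha=16,
)
```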
