*Issue #, if available:*
*Description of changes:* Adds support for LoRA fine-tuning.
- [x] Move peft/pandas dependency to an extra
- [x] Add tests for LoRA
- [x] Update notebook with LoRA info
- [x] Enable automatic recognition and loading of LoRA adapters
By submitting this pull request, I confirm that you can use, modify,
copy, and redistribute this contribution, under the terms of your
choice.
validation_inputs
    The time series used for validation and model selection. The format of `validation_inputs` is exactly the same as `inputs`, by default None, which means that no validation is performed. Note that enabling validation may slow down fine-tuning for large datasets.
finetune_mode
    One of "full" (performs full fine-tuning) or "lora" (performs Low Rank Adaptation (LoRA) fine-tuning), by default "full".
lora_config
    The configuration to use for LoRA fine-tuning when finetune_mode="lora". Can be a `LoraConfig` object or a dict which is used to initialize `LoraConfig`. When unspecified and finetune_mode="lora", a default configuration is used.
context_length
    The maximum context length used during fine-tuning, by default set to the model's default context length.
learning_rate
    The learning rate for the optimizer, by default 1e-6. When finetune_mode="lora", we recommend using a higher learning rate, such as 1e-5.
num_steps
    The number of steps to fine-tune for, by default 1000.
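To illustrate the `lora_config` parameter described above, here is a minimal sketch of passing the configuration as a plain dict. The key names mirror fields of `peft.LoraConfig` (the exact set of supported keys depends on the installed `peft` version), and the fine-tuning call in the final comment is hypothetical, shown only to indicate where the dict would be used:

```python
# Hypothetical example: LoRA options as a plain dict instead of a
# peft.LoraConfig object. Key names mirror peft.LoraConfig fields.
lora_config = {
    "r": 8,             # rank of the low-rank update matrices
    "lora_alpha": 16,   # scaling factor applied to the LoRA update
    "lora_dropout": 0.05,
}

# Sanity-check the configuration before handing it to the trainer.
assert lora_config["r"] > 0 and lora_config["lora_alpha"] > 0

# Hypothetical call site (method name is illustrative, not the actual API):
# predictor.fine_tune(inputs, finetune_mode="lora", lora_config=lora_config,
#                     learning_rate=1e-5)  # higher LR recommended for LoRA
```

Passing a dict keeps the `peft` import out of user code; the library initializes `LoraConfig` from it internally.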