Add support for the Training Method for finetuning, and for Direct-Preference Optimization (DPO) #298

Triggered via pull request March 5, 2025 17:53
Status Cancelled
Total duration 14s
Artifacts: none

_integration_tests.yml

on: pull_request
Matrix: build
Annotations

6 errors
integration_test python v3.9
Canceling since a higher priority waiting request for 'integration-tests-Vprov/dpo_python' exists
integration_test python v3.10
Canceling since a higher priority waiting request for 'integration-tests-Vprov/dpo_python' exists
integration_test python v3.10
The operation was canceled.
integration_test python v3.12
Canceling since a higher priority waiting request for 'integration-tests-Vprov/dpo_python' exists
integration_test python v3.11
Canceling since a higher priority waiting request for 'integration-tests-Vprov/dpo_python' exists
integration_test python v3.11
A task was canceled.
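
The "Canceling since a higher priority waiting request … exists" annotations come from GitHub Actions concurrency groups: when a newer run starts in the same group (here `integration-tests-Vprov/dpo_python`), runs already queued or in progress for that group are cancelled. A minimal sketch of a workflow that would produce this behavior — the group expression, step contents, and test command are assumptions inferred from the log, not the actual `_integration_tests.yml`:

```yaml
name: _integration_tests

on: pull_request

# All runs for the same branch share one concurrency group; a newer run
# cancels any run still in flight, producing the "Canceling since a
# higher priority waiting request ... exists" annotations above.
concurrency:
  group: integration-tests-${{ github.head_ref }}
  cancel-in-progress: true

jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        # Matches the "integration_test python vX.Y" jobs in the log
        python-version: ["3.9", "3.10", "3.11", "3.12"]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
      # Hypothetical test invocation; the real workflow's steps are not shown
      - run: pytest tests/integration
```

With `cancel-in-progress: true`, pushing a new commit to the PR branch cancels the previous run's matrix jobs mid-flight, which is why this run finished in 14s with every job cancelled rather than failed.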