[Optimization] Support context_parallel_spliter for cp #9904

Open · wants to merge 1 commit into base: develop
11 changes: 10 additions & 1 deletion paddlenlp/trainer/trainer.py
@@ -460,13 +460,17 @@
             else ["labels"]
         )
         self.label_names = default_label_names if self.args.label_names is None else self.args.label_names
+        self.context_parallel_spliter = None

         self.control = self.callback_handler.on_init_end(self.args, self.state, self.control)
         self.print_config()

         # very last
         self._memory_tracker.stop_and_update_metrics()

+    def set_context_parallel_spliter(self, context_parallel_spliter):
+        self.context_parallel_spliter = context_parallel_spliter
Codecov / codecov/patch (paddlenlp/trainer/trainer.py#L472): added line #L472 was not covered by tests.
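The new `set_context_parallel_spliter` hook lets callers register their own splitting strategy before training. A minimal sketch of what such a splitter might look like; the `even_chunk_spliter` function and its plain-list inputs are hypothetical illustrations, not code from this PR:

```python
# Hypothetical illustration (not from the PR): any callable that takes the
# input dict and returns this cp rank's shard can serve as a splitter.
def even_chunk_spliter(inputs, rank=0, degree=2):
    """Keep the `rank`-th of `degree` contiguous chunks along the sequence axis."""
    out = {}
    for key, batch in inputs.items():
        sharded_rows = []
        for row in batch:
            chunk = len(row) // degree  # assumes seq_len divisible by degree
            sharded_rows.append(row[rank * chunk:(rank + 1) * chunk])
        out[key] = sharded_rows
    return out

# Registration via the setter added by this PR (trainer object assumed to exist):
# trainer.set_context_parallel_spliter(even_chunk_spliter)
```

With `degree=2` and `rank=1`, a batch `{"input_ids": [[1, 2, 3, 4, 5, 6, 7, 8]]}` would reduce to the second half of each sequence.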

     def _wrap_amp_model(self, args, model):
         logger.info("Using half precision")
         self.enable_autocast_context_manager = True
@@ -1020,7 +1024,12 @@
         if self.args.use_hybrid_parallel and self.args.sep_parallel_degree > 1:
             inputs = split_inputs_sequence_dim(inputs)
         if self.args.use_hybrid_parallel and self.args.context_parallel_degree > 1:
-            inputs = split_inputs_sequence_dim_load_balance(inputs)
+            context_parallel_spliter = (
+                split_inputs_sequence_dim_load_balance
+                if self.context_parallel_spliter is None
+                else self.context_parallel_spliter
+            )
+            inputs = context_parallel_spliter(inputs)

Codecov / codecov/patch (paddlenlp/trainer/trainer.py#L1027): added line #L1027 was not covered by tests.

Codecov / codecov/patch (paddlenlp/trainer/trainer.py#L1032): added line #L1032 was not covered by tests.
         if self.args.ignore_data_skip:
             self.timers and self.timers("read-data").stop()
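The change in this hunk replaces the hard-wired call to `split_inputs_sequence_dim_load_balance` with a dispatch on `self.context_parallel_spliter`. In isolation, the selection logic reduces to the sketch below; `TrainerSketch` and the stand-in default splitter are illustrative, only the attribute name, setter, and ternary mirror the diff:

```python
# Minimal sketch of the fallback dispatch added in this PR. The class and the
# default splitter here are toy stand-ins, not PaddleNLP's real implementations.
def split_inputs_sequence_dim_load_balance(inputs):
    # Stand-in for PaddleNLP's default load-balanced splitter.
    return inputs

class TrainerSketch:
    def __init__(self):
        self.context_parallel_spliter = None  # initialized in __init__, as in the diff

    def set_context_parallel_spliter(self, context_parallel_spliter):
        self.context_parallel_spliter = context_parallel_spliter

    def split(self, inputs):
        # Mirrors the diff: fall back to the built-in splitter when no
        # custom one has been registered.
        context_parallel_spliter = (
            split_inputs_sequence_dim_load_balance
            if self.context_parallel_spliter is None
            else self.context_parallel_spliter
        )
        return context_parallel_spliter(inputs)
```

Keeping the default as the `None` fallback means existing callers see no behavior change, while a registered splitter takes over transparently.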
