
Why do we make the loss computable on the first part of the input sequence (the question)? #27

@thuann2cats

Can anyone tell me why, when training a GPT-2 model to solve math problems from the GSM8K dataset, this code makes the loss computable on the first part of the input sequence (which contains the question)? Why should we compute the GPT-2 loss on the question tokens? Shouldn't the loss be computed only on the generated answer? Thanks!

import torch as th

class GSMDataset(th.utils.data.Dataset):
    def __init__(self, tokenizer, examples, loss_on_prefix=True):
        self.examples = examples
        self.qns = [ex["question"] for ex in self.examples]
        self.ans = [ex["answer"] for ex in self.examples]
        self.qns = tokenizer(self.qns, padding=False)
        self.ans = tokenizer(self.ans, padding=False)
        self.loss_on_prefix = loss_on_prefix
        # Pad every example to the length of the longest question+answer pair.
        self.max_len = max(
            [
                len(self.qns["input_ids"][i]) + len(self.ans["input_ids"][i])
                for i in range(len(self.examples))
            ]
        )
        print(f"Max tokens: {self.max_len}")
