Tokenizers simplification #3889

Draft
asolergi-nv wants to merge 9 commits into NVIDIA:main from asolergi-nv:tok

Conversation

Contributor

@asolergi-nv asolergi-nv commented Mar 16, 2026

What does this PR do?

Summary

This PR performs a structural refactoring of the Megatron tokenizer subsystem:

  1. Remove the empty model-wrapper layer (text/models/, vision/models/) that added indirection with zero functionality.
  2. Unify SFT and chat template logic into a shared, library-agnostic conversation/ module, deleting the standalone SFTTokenizer class and the MegatronTokenizerChatTemplate mixin.
  3. Introduce a new CLI API (--tokenizer-library, --tokenizer-mode, --tokenizer-prompt-format) that replaces the monolithic --tokenizer-type flag, with full backward compatibility.
  4. Eliminate duplicated conversation-tokenization code between the SFT and multimodal paths.

Motivation

The previous tokenizer codebase had several architectural problems:

  • Empty model wrappers: GPTTokenizer, BertTokenizer, MambaTokenizer, T5Tokenizer, DefaultTokenizerText, and DefaultTokenizerVision were all trivial subclasses that set class_name/class_path in metadata and did nothing else. They added a routing layer (TOKENIZER_MAPPING_NAMES + importlib dynamic lookup) that obscured the actual code path.

  • SFT was a fake "library": SFTTokenizer was registered as a tokenizer library ("sft") but internally created its own transformers.AutoTokenizer.from_pretrained(), completely bypassing the library abstraction. It was not a subclass of MegatronTokenizerTextAbstract and could only work with HuggingFace — making --tokenizer-library sentencepiece --tokenizer-mode sft impossible.

  • Duplicated conversation tokenization: The multimodal tokenizer (multimodal_tokenizer.py) had its own inline implementation of conversation tokenization (~80 lines) with per-turn masking logic. The SFT tokenizer had a separate but nearly identical implementation. Both duplicated the same pattern: apply chat template to the full conversation, then iterate per-turn to build the target mask. Bugs fixed in one wouldn't be fixed in the other.

  • Chat template as a mixin: MegatronTokenizerChatTemplate was a mixin class in its own file, inherited by SentencePieceTokenizer and TikTokenTokenizer via multiple inheritance. It added Jinja2-based apply_chat_template, but with a restrictive signature (no **kwargs, chat_template was positional). The HuggingFace tokenizer had its own completely separate apply_chat_template that delegated to HF's native method — with a different signature. This meant conversation tokenization code had to be HF-specific.

  • Hardcoded HF assumptions in conversation code: conversation_tokenizer.py had a parameter named hf_tokenizer, used HF-specific kwargs (return_tensors="np", return_assistant_token_mask=False), and prompt config factories called tokenizer.convert_tokens_to_ids() / tokenizer.pad_token_id — all HuggingFace-specific APIs.

  • Monolithic build_tokenizer: The build_tokenizer() function was a single 90-line if/elif chain mapping each --tokenizer-type value to library-specific kwargs. Adding a new tokenizer type meant extending this chain.


New Tokenizer Architecture

Before

MegatronTokenizer.from_pretrained()
  └─> _get_tokenizer_model_class()
        └─> TOKENIZER_MAPPING_NAMES lookup (model_type -> class name)
              └─> importlib.import_module("megatron.core.tokenizers.{type}.models")
                    └─> GPTTokenizer / BertTokenizer / ... (empty wrappers)
                          └─> MegatronTokenizerText
                                └─> library-level tokenizer (SP / HF / TikToken / SFT)

SFT path:    SFTTokenizer (standalone class, creates own HF tokenizer)
Multimodal:  MegatronMultimodalTokenizer (inline conversation tokenization)
Chat:        MegatronTokenizerChatTemplate mixin (separate from HF path)

After

MegatronTokenizer.from_pretrained()
  └─> _get_tokenizer_model_class()
        └─> TEXT_LIBRARIES check → MegatronTokenizerText (direct)
        └─> VISION_LIBRARIES check → MegatronTokenizerVision (direct)

MegatronTokenizerText
  ├─> library-level tokenizer (SP / HF / TikToken / ByteLevel / Null)
  │     └─> all inherit MegatronTokenizerTextAbstract
  │           ├─> apply_chat_template()  (concrete, Jinja2-based, with **kwargs)
  │           └─> token_to_id()          (concrete default, overridden by each library)
  └─> _prompt_config (optional, when prompt_format is provided → SFT capability)

conversation/ module (shared by text SFT + multimodal):
  ├─> tokenize_conversation()  (single implementation, library-agnostic)
  ├─> prompt_config.py         (PromptConfig + PROMPT_FORMAT_REGISTRY + agnostic helpers)
  └─> __init__.py              (public exports)

Key change: SFT is no longer a library — it's a capability. Any text tokenizer library gains conversation tokenization when prompt_format is provided. The model-wrapper layer is gone entirely.


What Changed

1. Removed Empty Model Wrappers

Deleted files:

  • text/models/bert_tokenizer.py (12 lines)
  • text/models/gpt_tokenizer.py (12 lines)
  • text/models/mamba_tokenizer.py (12 lines)
  • text/models/t5_tokenizer.py (12 lines)
  • text/models/default_tokenizer.py (12 lines)
  • vision/models/default_tokenizer.py (12 lines)

Each was an empty subclass like:

class GPTTokenizer(MegatronTokenizerText):
    def __init__(self, path, config, **kwargs):
        config['class_name'] = self.__class__.__name__
        config['class_path'] = self.__class__.__module__
        super().__init__(path, config, **kwargs)

Resolution: The class_name/class_path metadata injection was moved into MegatronTokenizerText.__init__() itself via config.setdefault(...). The TOKENIZER_MAPPING_NAMES registry and importlib-based dynamic dispatch in megatron_tokenizer.py were replaced with a direct TEXT_LIBRARIES / VISION_LIBRARIES check that returns MegatronTokenizerText or MegatronTokenizerVision directly.

Backward-compat aliases are kept in text/models/__init__.py and vision/models/__init__.py for any external code that imports by class name.

2. Deleted SFTTokenizer — SFT is Now a Capability

Deleted: text/libraries/sft_tokenizer.py (254 lines)

SFTTokenizer was a standalone class that:

  • Created its own transformers.AutoTokenizer.from_pretrained() internally
  • Was registered as library "sft" in TOKENIZER_MAPPING_LIBRARIES
  • Was not a subclass of MegatronTokenizerTextAbstract
  • Could only work with HuggingFace tokenizers

Resolution: SFT conversation tokenization is now a runtime capability of MegatronTokenizerText. When prompt_format is provided as a kwarg:

# In MegatronTokenizerText.__init__():
prompt_format = kwargs.get('prompt_format', None)
if prompt_format is not None:
    self._prompt_config = PROMPT_FORMAT_REGISTRY[prompt_format](self._tokenizer)

This means any text library tokenizer (SentencePiece, TikToken, HuggingFace, etc.) can now do SFT conversation tokenization, not just HuggingFace.
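A minimal, self-contained sketch of this "SFT as a capability" pattern: the PROMPT_FORMAT_REGISTRY and _prompt_config names mirror the PR, but the PromptConfig stand-in and the "default" factory body here are assumptions for illustration only:

```python
# Sketch: prompt_format turns any text tokenizer into an SFT-capable one.
from dataclasses import dataclass


@dataclass
class PromptConfig:  # stand-in; the real dataclass lives in conversation/prompt_config.py
    pad_token_id: int


# Each registry entry is a factory: tokenizer -> PromptConfig
PROMPT_FORMAT_REGISTRY = {
    "default": lambda tokenizer: PromptConfig(pad_token_id=0),  # assumed factory
}


class MegatronTokenizerText:  # simplified stand-in
    def __init__(self, tokenizer, **kwargs):
        self._tokenizer = tokenizer
        self._prompt_config = None
        prompt_format = kwargs.get('prompt_format', None)
        if prompt_format is not None:
            # Any library tokenizer gains conversation tokenization via this hook.
            self._prompt_config = PROMPT_FORMAT_REGISTRY[prompt_format](self._tokenizer)
```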

References removed:

  • ("sft", "SFTTokenizer") from TOKENIZER_MAPPING_LIBRARIES
  • "sft" from TEXT_LIBRARIES
  • SFTTokenizer import from text/libraries/__init__.py

3. Deleted chat_template.py Mixin — Folded into Abstract Base

Deleted: text/libraries/chat_template.py (71 lines)

MegatronTokenizerChatTemplate was a mixin class used via multiple inheritance:

class SentencePieceTokenizer(MegatronTokenizerTextAbstract, MegatronTokenizerChatTemplate): ...
class TikTokenTokenizer(MegatronTokenizerTextAbstract, MegatronTokenizerChatTemplate): ...

Resolution: The apply_chat_template() implementation was moved directly into MegatronTokenizerTextAbstract as a concrete (non-abstract) method. The signature was improved:

  • chat_template changed from required positional to optional keyword (falls back to self.chat_template)
  • Added **kwargs to absorb HF-specific kwargs transparently
  • Full Jinja2 compile + render + optional text_to_ids() tokenization

HuggingFaceTokenizer keeps its own apply_chat_template override that delegates to HF's native method.

Additionally, a concrete token_to_id() default was added to the abstract base:

def token_to_id(self, token: str) -> int:
    return self.tokens_to_ids([token])[0]

This ensures all library tokenizers expose a canonical single-token-to-ID method. SP, TikToken, and HF already override it with optimized versions.

4. Created Shared conversation/ Module

New module: megatron/core/tokenizers/conversation/

  • __init__.py — exports tokenize_conversation, PROMPT_FORMAT_REGISTRY, PromptConfig
  • conversation_tokenizer.py — single library-agnostic conversation tokenization implementation
  • prompt_config.py — PromptConfig dataclass, chat template strings, factory registry

This module consolidates:

  • The SFT conversation tokenization logic (from SFTTokenizer)
  • The multimodal conversation tokenization logic (from MegatronMultimodalTokenizer.tokenize_conversation)
  • All prompt format configurations (from both SFT and multimodal paths)
  • All chat template strings (from multimodal_tokenizer.py inline definitions)
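The shared pattern that both paths duplicated (apply the chat template to the full conversation, then iterate per-turn to build the target mask) can be sketched library-agnostically. This is an illustration of the algorithm under assumptions (the IGNORE_INDEX value, the prefix-retokenization strategy), not the actual tokenize_conversation implementation:

```python
# Sketch of the consolidated per-turn masking: tokenize the whole conversation
# once, then measure growing prefixes to locate assistant spans in the target.
import numpy as np

IGNORE_INDEX = -100  # assumed mask value for non-target (non-assistant) tokens


def tokenize_conversation_sketch(tokenizer, conversation):
    tokens = np.asarray(tokenizer.apply_chat_template(conversation))
    target = np.full_like(tokens, IGNORE_INDEX)
    prev_len = 0
    for turn_idx in range(len(conversation)):
        # Token length of the conversation up to and including this turn.
        cur_len = len(tokenizer.apply_chat_template(conversation[: turn_idx + 1]))
        if conversation[turn_idx]["role"] == "assistant":
            target[prev_len:cur_len] = tokens[prev_len:cur_len]  # unmask assistant turn
        prev_len = cur_len
    return tokens, target
```

Because the only tokenizer method used is apply_chat_template, the same loop serves SentencePiece, TikToken, and HuggingFace backends alike.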

5. De-duplicated Multimodal Conversation Tokenization

Before: multimodal_tokenizer.py had ~80 lines of inline conversation tokenization + ~80 lines of per-format PromptConfig construction, duplicating the SFT logic.

After: MegatronMultimodalTokenizer.tokenize_conversation() is now a single call:

return tokenize_conversation(
    tokenizer=self.tokenizer,
    conversation=conversation,
    prompt_config=self._prompt_config,
    return_target=return_target,
    add_generation_prompt=add_generation_prompt,
    apply_image_tag_fn=self._apply_image_tag,
)

The per-format if/elif chain was replaced with PROMPT_FORMAT_REGISTRY[prompt_format](tokenizer).

6. Made Conversation Tokenization Library-Agnostic

conversation_tokenizer.py:

  • Renamed parameter hf_tokenizer → tokenizer
  • Removed HF-only kwargs (return_tensors="np", return_assistant_token_mask=False)
  • Normalized return type to np.ndarray regardless of backend

prompt_config.py — added adapter helpers:

def _token_to_id(tokenizer, token) -> int:    # token_to_id first (text libs), convert_tokens_to_ids fallback (multimodal/raw HF)
def _get_pad_token_id(tokenizer) -> int:      # pad_token_id (HF) or pad_id (SP/TikToken)
def _get_bos_token_id(tokenizer) -> int:      # bos_token_id (HF) or bos_id (SP/TikToken)
def _get_eos_token_id(tokenizer) -> int:      # eos_token_id (HF) or eos_id (SP/TikToken)

All 14 factory functions use these helpers instead of HF-specific attribute access.

7. New CLI API with Backward Compatibility

New flags (in arguments.py):

  • --tokenizer-library — choices: huggingface, sentencepiece, tiktoken, megatron, byte-level, null
  • --tokenizer-mode — choices: text, sft, multimodal (default: text)
  • --tokenizer-prompt-format — prompt format name for SFT or multimodal

Backward compatibility: --tokenizer-type is deprecated but fully supported. A mapping table in validate_args() converts each legacy type to its (library, mode) pair:

'GPT2BPETokenizer'       → ('megatron', 'text')
'SentencePieceTokenizer' → ('sentencepiece', 'text')
'HuggingFaceTokenizer'   → ('huggingface', 'text')
'SFTTokenizer'           → ('huggingface', 'sft')
'MultimodalTokenizer'    → ('huggingface', 'multimodal')
...

Refactored build_tokenizer(): The 90-line if/elif chain was decomposed into:

  • _resolve_library(args) — maps library+mode to internal library string
  • _resolve_tokenizer_path(args) — resolves the model path
  • _build_library_kwargs(args) — builds library-specific kwargs
  • _build_mode_kwargs(args) — builds mode-specific kwargs (SFT prompt format, multimodal settings)

8. Bug Fixes

  • eod fallback: MegatronTokenizerText.eod now falls back to eos_id when the underlying library tokenizer doesn't define eod (e.g., SentencePiece). Previously this would raise AttributeError.

  • pad/pad_id with SFT: When a prompt_config is active, MegatronTokenizerText.pad and pad_id now return prompt_config.pad_token_id instead of the library's default pad. This is critical for SFT masking where the prompt config defines its own pad token.

  • Infinite recursion in abstract base: The previous abstract_tokenizer.py had property aliases of the form @property def cls_id(self): if hasattr(self, 'cls_id'): return self.cls_id, which recurse infinitely on any tokenizer that doesn't define cls_id elsewhere — hasattr re-invokes the property itself. These broken aliases were removed.

  • Multimodal validate_no_image_in_assistant: The multimodal conversation tokenization previously had an inline IMAGE_TOKEN check only for the assistant role. The shared implementation uses the validate_no_image_in_assistant flag on PromptConfig, making the behavior explicit and configurable per format.

  • Multimodal capitalize_roles: The nemotron5-aligned format had inline role capitalization in multimodal_tokenizer.py. This is now a PromptConfig.capitalize_roles flag, handled by the shared conversation tokenizer.


New User Experience

# SFT with SentencePiece (NEW - previously impossible)
--tokenizer-library sentencepiece --tokenizer-mode sft \
    --tokenizer-model /path/to/model.sp --tokenizer-prompt-format nemotron-h-aligned

# SFT with TikToken (NEW - previously impossible)
--tokenizer-library tiktoken --tokenizer-mode sft \
    --tokenizer-model /path/to/vocab.json --tokenizer-prompt-format default

# SFT with HuggingFace (same as before, now the default for sft mode)
--tokenizer-library huggingface --tokenizer-mode sft \
    --tokenizer-model /path --tokenizer-prompt-format nemotron-h-aligned

# Legacy still works (deprecated warning emitted)
--tokenizer-type SFTTokenizer --tokenizer-model /path --sft-tokenizer-prompt-format nemotron-h-aligned

Files Changed

File Change
megatron_tokenizer.py Remove TOKENIZER_MAPPING_NAMES, remove "sft" from TEXT_LIBRARIES, replace importlib dispatch with direct class resolution
text/text_tokenizer.py Add class_name/class_path defaults, add _prompt_config SFT capability, rewrite tokenize_conversation, add pad/eod overrides, remove ("sft", "SFTTokenizer")
text/libraries/abstract_tokenizer.py Add concrete apply_chat_template (from mixin), add concrete token_to_id default, remove broken property aliases
text/libraries/sentencepiece_tokenizer.py Remove MegatronTokenizerChatTemplate mixin inheritance
text/libraries/tiktoken_tokenizer.py Remove MegatronTokenizerChatTemplate mixin inheritance
text/libraries/__init__.py Remove SFTTokenizer import
text/models/__init__.py Replace model wrapper imports with backward-compat aliases
vision/models/__init__.py Replace DefaultTokenizerVision import with backward-compat alias
vision/vision_tokenizer.py Add class_name/class_path defaults
vision/libraries/multimodal_tokenizer.py Replace inline conversation tokenization + prompt configs with shared conversation/ module
utils/build_tokenizer.py Decompose monolithic function into _resolve_library, _resolve_tokenizer_path, _build_library_kwargs, _build_mode_kwargs
training/arguments.py Add --tokenizer-library, --tokenizer-mode, --tokenizer-prompt-format flags; add --tokenizer-type deprecation mapping
tests/.../test_tokenizer.py Update test_sft_tokenizer to use library: huggingface + prompt_format

Files Created

File Purpose
conversation/__init__.py Public exports for the shared conversation module
conversation/conversation_tokenizer.py Single library-agnostic conversation tokenization implementation
conversation/prompt_config.py PromptConfig dataclass, chat templates, PROMPT_FORMAT_REGISTRY, agnostic helpers

Files Deleted

File Reason
text/libraries/sft_tokenizer.py (254 lines) SFT is now a capability of MegatronTokenizerText, not a separate library
text/libraries/chat_template.py (71 lines) Logic folded into MegatronTokenizerTextAbstract.apply_chat_template
text/models/bert_tokenizer.py (12 lines) Empty wrapper, replaced by alias
text/models/gpt_tokenizer.py (12 lines) Empty wrapper, replaced by alias
text/models/mamba_tokenizer.py (12 lines) Empty wrapper, replaced by alias
text/models/t5_tokenizer.py (12 lines) Empty wrapper, replaced by alias
text/models/default_tokenizer.py (12 lines) Empty wrapper, replaced by alias
vision/models/default_tokenizer.py (12 lines) Empty wrapper, replaced by alias

Contribution process

Pre-checks

  • I have added relevant unit tests
  • I have added relevant functional tests
  • I have added proper typing to my code (see Typing guidelines)
  • I have added relevant documentation
  • I have run the autoformatter.sh on my PR

Code review

Feel free to message or tag @mcore-oncall in a comment to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!

All PRs start as draft. If you open a non-draft PR, it will be automatically converted to draft.

Step 1: Mark PR as "Ready for Review"

  1. When your PR is ready, click Ready for Review.
  2. An oncall reviewer is auto-assigned and expert reviewers are notified based on your changes.
    • Some PRs may jump straight to step 2. This is determined by .github/CODEOWNERS.

⚠️ Only mark as ready once merge-conflicts are resolved and the CI is passing.
Final Review might get declined if these requirements are not fulfilled.

Step 2: Final Review

For PRs that change megatron/core, once all expert reviewers have approved, the Final Review label is applied automatically and final reviewers are assigned.

For PRs outside megatron/core, this step is skipped.

Step 3: Approved

Once all required reviewers have approved, the Approved label is applied automatically.

Merge

Any member of mcore-engineers will be able to merge your PR.

For MRs into the `dev` branch: the proposed review process is under active discussion.

MRs are mergeable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.

@asolergi-nv asolergi-nv requested review from a team as code owners March 16, 2026 18:20
@asolergi-nv asolergi-nv added Run tests Run functional tests Run MBridge tests Attach this for testing this PR against MBridge main labels Mar 16, 2026
@svcnvidia-nemo-ci svcnvidia-nemo-ci marked this pull request as draft March 16, 2026 18:20
@github-actions
Contributor

This PR has been automatically converted to draft because all PRs must start as drafts.

When you are ready for review, click Ready for Review to begin the review process. This will:

  1. Add the oncall reviewer (optional reviewer)
  2. Add required review teams based on your changes

See the contribution guide for more details.

@copy-pr-bot

copy-pr-bot bot commented Mar 16, 2026

Auto-sync is disabled for draft pull requests in this repository. Workflows must be run manually.

Contributors can view more details about this message here.

@svcnvidia-nemo-ci svcnvidia-nemo-ci added this to the Core 0.16 milestone Mar 16, 2026
@asolergi-nv
Contributor Author

/ok to test 54bbe5f

@asolergi-nv
Contributor Author

/ok to test 0634e70

@asolergi-nv
Contributor Author

/ok to test 69724b5
