Add nemo-skills-core subpackage for lightweight installs #1229

gwarmstrong wants to merge 25 commits into main from
Conversation
8fa5c7d to 4e2fad9

d22246e to 76c2a18
Signed-off-by: George Armstrong <georgea@nvidia.com>
a2751f3 to
f0eb8d0
Compare
```python
_EVALUATOR_MAP_PATHS[eval_type] = None
_resolved_evaluator_map[eval_type] = eval_fn
```
Setting `_EVALUATOR_MAP_PATHS[eval_type] = None` creates a fragile state. If `_resolved_evaluator_map` is ever cleared or doesn't contain the `eval_type`, `_get_evaluator_fn` will call `_resolve(None)` and crash.
```diff
-_EVALUATOR_MAP_PATHS[eval_type] = None
-_resolved_evaluator_map[eval_type] = eval_fn
+# Store function directly, bypassing the lazy resolution path
+_resolved_evaluator_map[eval_type] = eval_fn
```
Good catch, switched to a "<dynamically-registered>" sentinel to be safe.
Actually, reverting this back to `None`. The `_resolved_evaluator_map` cache is internal and never cleared, so this scenario cannot happen in practice. Per our project guidelines: "Don't add error handling, fallbacks, or validation for scenarios that can't happen." If the cache were somehow corrupted, crashing is the correct signal.
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the relevant setting. Use the following commands to manage reviews:
Use the checkboxes below for quick actions:
📝 Walkthrough

Adds a lightweight core package and requirements, documents installation and the Core/Pipeline dependency boundary, reorganizes optional extras in packaging, adds a CI step for uv, implements lazy evaluator resolution, refactors dataset loading to prefer local modules and delegates cluster handling to a new pipeline dataset module, and adds a runtime guard for pipeline imports.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant Core as nemo_skills.dataset.utils
    participant Pipeline as nemo_skills.pipeline.dataset
    participant Cluster

    rect rgba(100, 150, 200, 0.5)
        Note over User,Core: Local-only flow (default)
        User->>Core: get_dataset_module(dataset, data_dir=None)
        Core->>Core: import from nemo_skills.dataset or local path
        Core-->>User: return dataset module
    end

    rect rgba(200, 100, 150, 0.5)
        Note over User,Cluster: Cluster flow (deprecated in Core)
        User->>Core: get_dataset_module(dataset, cluster_config=...)
        Core->>Core: emit DeprecationWarning
        Core->>Pipeline: delegate get_dataset_module(...)
        Pipeline->>Cluster: fetch / download cluster module (remote)
        Cluster-->>Pipeline: module content / init.py
        Pipeline->>Core: imported module
        Core-->>User: return dataset module
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs
Suggested reviewers
🚥 Pre-merge checks: ✅ 3 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (3 passed)
Actionable comments posted: 5
🤖 Fix all issues with AI agents
In `@nemo_skills/evaluation/evaluator/__init__.py`:
- Around line 113-117: The error message incorrectly labels
`_EVALUATOR_MAP_PATHS.keys()` as "All supported types" when it only contains
function-based evaluators; update the ValueError text in the raise block (the
code that references eval_type) to clearly distinguish class-based vs
function-based types by either listing both maps together (combine
`_EVALUATOR_CLASS_MAP_PATHS.keys()` and `_EVALUATOR_MAP_PATHS.keys()`) or
renaming the second label to "Function-based evaluator types" so users see
accurate descriptions of `_EVALUATOR_CLASS_MAP_PATHS` and
`_EVALUATOR_MAP_PATHS`.
- Around line 106-107: register_evaluator currently stores None into
_EVALUATOR_MAP_PATHS[eval_type], which will cause AttributeError when code later
iterates or calls _resolve expecting a path string; change register_evaluator so
it stores a sentinel string (e.g. "<dynamic>") into
_EVALUATOR_MAP_PATHS[eval_type] instead of None, and ensure any
resolution/display logic in _resolve/_get_evaluator_fn treats that sentinel as a
dynamic entry (or filters it out) so rsplit is only called on real path strings;
update references to _EVALUATOR_MAP_PATHS, register_evaluator,
_resolved_evaluator_map, _get_evaluator_fn, and _resolve accordingly.
- Line 137: Remove the leftover debug print statement print(f"evaluator:
{evaluator}") from the module (it should not be in production code); either
delete that line or replace it with an appropriate logger.debug call using the
module logger (e.g., logger.debug("evaluator: %s", evaluator)) so diagnostics
use the configured logging system and not stdout—locate the print by searching
for the exact string and update in the __init__ module where the evaluator
variable is in scope.
- Around line 93-94: Remove the debug print statement print(f"evaluator:
{evaluator}") from the module so it no longer emits debug output; locate the
temporary print in the evaluator initialization block near where EVALUATOR_MAP
and EVALUATOR_CLASS_MAP are set and delete that single line, leaving the maps
and the helper functions (_get_evaluator_fn, _get_evaluator_cls, evaluate,
get_evaluator_class) intact so iteration via EVALUATOR_MAP/EVALUATOR_CLASS_MAP
still works per the documented design.
In `@nemo_skills/pipeline/dataset.py`:
- Around line 60-62: The check uses cluster_config.get("executor") which masks a
missing-key error; change it to access the key directly
(cluster_config["executor"]) so missing executor raises immediately, and keep
the logic that if cluster_config is None or cluster_config["executor"] in (None,
"none") then return _get_local_dataset_module(dataset, data_dir); update any
related code paths that assume executor exists (e.g., the code around
get_unmounted_path in nemo_skills/pipeline/utils/mounts.py) to rely on the same
direct-access semantics to fail fast on misconfiguration.
🧹 Nitpick comments (6)
CONTRIBUTING.md (1)
56-59: Fenced code block missing language specifier. Minor nit from markdownlint -- adding a language (e.g., `text`) would silence MD040.

Proposed fix:

````diff
-```
+```text
 Pipeline can import from Core. Core CANNOT import from Pipeline.
-```
+```
````

core/requirements.txt (1)
17-27: Section label "math evaluation" is misleading -- several packages below it aren't math-specific. `mcp`, `numpy`, `openai`, `requests`, `rich`, `tqdm`, and `transformers` are general-purpose dependencies, not math-evaluation specific. Consider either reorganizing sections or using a broader label like `# --- general / shared ---`.

nemo_skills/pipeline/dataset.py (3)
39-51: Imported module outlives its backing file. `import_from_path` is called inside a `TemporaryDirectory` context manager. Once the `with` block exits, the downloaded `__init__.py` is deleted, but the module object (and its `__file__` attribute) still references the now-removed path. This works at runtime because CPython caches the compiled bytecode in memory, but it can cause confusing errors if any downstream code inspects `module.__file__` or attempts a reload. Consider moving the temp directory lifecycle to the caller or keeping it alive longer if module introspection is needed.
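A small repro of the hazard described here; `import_from_path` is sketched from its description, and the module name is hypothetical:

```python
import importlib.util
import os
import tempfile

def import_from_path(name, path):
    # Sketch of a file-path import helper, as described in the review comment.
    spec = importlib.util.spec_from_file_location(name, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module

with tempfile.TemporaryDirectory() as tmp:
    init_py = os.path.join(tmp, "__init__.py")
    with open(init_py, "w", encoding="utf-8") as f:
        f.write("DATASET_GROUP = 'math'\n")
    module = import_from_path("downloaded_dataset", init_py)

# The module object keeps working from in-memory state...
print(module.DATASET_GROUP)             # -> math
# ...but its __file__ now points at a deleted path, so reload/inspect fails.
print(os.path.exists(module.__file__))  # -> False
```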
44-50: Chain the re-raised exception for clearer tracebacks. Per the static analysis hint (B904), `raise ... from err` preserves the original traceback context.

Proposed fix:

```diff
     try:
         cluster_download_file(cluster_config, cluster_dataset_path, tmp_path)
-    except FileNotFoundError:
-        raise RuntimeError(
+    except FileNotFoundError as err:
+        raise RuntimeError(
             f"Init file {mounted_path} not found on the cluster. "
             f"Please check the dataset name you're using. Did you forget to run prepare data commands?"
-        )
+        ) from err
```
109-113: Chain the re-raised `RuntimeError` for clearer tracebacks. Same B904 pattern -- add `from err` to preserve the original `ModuleNotFoundError` context.

Proposed fix:

```diff
-    except ModuleNotFoundError:
-        raise RuntimeError(
+    except ModuleNotFoundError as err:
+        raise RuntimeError(
             f"Dataset {dataset} not found in any of the searched locations: "
             f"{data_dir if data_dir else 'nemo_skills.dataset'}, {extra_datasets}"
-        )
+        ) from err
```

nemo_skills/dataset/utils.py (1)
116-135: Chain re-raised exceptions for clearer tracebacks. Same pattern as flagged in `pipeline/dataset.py` -- the `raise RuntimeError(...)` statements at lines 120 and 126 inside `except` clauses should use `from` to preserve the original exception context.

Proposed fix:

```diff
     except ModuleNotFoundError:
         dataset = dataset.replace(".", "/")
         extra_datasets = extra_datasets or os.environ.get("NEMO_SKILLS_EXTRA_DATASETS")
         if extra_datasets is None:
-            raise RuntimeError(f"Dataset {dataset} not found in {data_dir if data_dir else 'nemo_skills.dataset'}")
+            raise RuntimeError(
+                f"Dataset {dataset} not found in {data_dir if data_dir else 'nemo_skills.dataset'}"
+            ) from None
         if extra_datasets_type == ExtraDatasetType.local or extra_datasets_type is None:
             with add_to_path(extra_datasets):
                 try:
                     dataset_module = importlib.import_module(dataset)
-                except ModuleNotFoundError:
-                    raise RuntimeError(
+                except ModuleNotFoundError as err:
+                    raise RuntimeError(
                         f"Dataset {dataset} not found in any of the searched locations: "
                         f"{data_dir if data_dir else 'nemo_skills.dataset'}, {extra_datasets}"
-                    )
+                    ) from err
```
```python
_EVALUATOR_MAP_PATHS[eval_type] = None
_resolved_evaluator_map[eval_type] = eval_fn
```
Setting `_EVALUATOR_MAP_PATHS[eval_type] = None` is fragile. If `_resolved_evaluator_map` gets cleared or doesn't contain `eval_type`, `_get_evaluator_fn` will call `_resolve(None)` and crash with `ValueError: not enough values to unpack`.

The current implementation works only because the function is immediately added to `_resolved_evaluator_map`, but this implicit dependency is error-prone. Consider either:

- Not setting `_EVALUATOR_MAP_PATHS[eval_type]` at all (just use `_resolved_evaluator_map`)
- Setting it to a sentinel string that provides a better error message if accidentally resolved
Good catch, switched to a "<dynamically-registered>" sentinel to be safe.
Actually, reverting this back to `None`. The `_resolved_evaluator_map` cache is internal and never cleared, so this scenario cannot happen in practice. Per our project guidelines: "Don't add error handling, fallbacks, or validation for scenarios that can't happen." If the cache were somehow corrupted, crashing is the correct signal.
Additional Comments (1)
Signed-off-by: George Armstrong <georgea@nvidia.com>
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@nemo_skills/pipeline/dataset.py`:
- Around line 54-62: The function _get_default_dataset_module currently drops
extra_datasets and extra_datasets_type when cluster_config is None by delegating
to _get_local_dataset_module(dataset, data_dir); update
_get_default_dataset_module to forward extra_datasets and extra_datasets_type
into the local call (e.g., call _get_local_dataset_module(dataset, data_dir,
extra_datasets=..., extra_datasets_type=...)) so get_dataset_module's outer
ModuleNotFoundError path remains reachable and callers' extra_datasets are
honored; ensure the function signature for _get_default_dataset_module accepts
the extra_* params and that _get_local_dataset_module is invoked with those
parameters.
🧹 Nitpick comments (3)
nemo_skills/pipeline/dataset.py (2)
39-51: Chain exception context with `from` when re-raising. Static analysis (B904) correctly flags that re-raising inside `except` without `from` loses the original traceback context. This applies here and at lines 109-113.

Proposed fix:

```diff
-    except FileNotFoundError:
-        raise RuntimeError(
+    except FileNotFoundError as exc:
+        raise RuntimeError(
             f"Init file {mounted_path} not found on the cluster. "
             f"Please check the dataset name you're using. Did you forget to run prepare data commands?"
-        )
+        ) from exc
```
91-113: Chain the inner `RuntimeError` re-raise with `from`. Same B904 issue as above -- preserve context for debugging.

Proposed fix:

```diff
-    except ModuleNotFoundError:
-        raise RuntimeError(
+    except ModuleNotFoundError as exc:
+        raise RuntimeError(
             f"Dataset {dataset} not found in any of the searched locations: "
             f"{data_dir if data_dir else 'nemo_skills.dataset'}, {extra_datasets}"
-        )
+        ) from exc
```

nemo_skills/evaluation/evaluator/__init__.py (1)
93-94: Semantic change in `EVALUATOR_MAP` / `EVALUATOR_CLASS_MAP` values. These aliases now expose dotted-path strings instead of resolved callables/classes. Any downstream code (external plugins, scripts) that iterates `.values()` expecting callables will break silently. The comment on lines 90-92 documents the intent, and the repo itself only uses these for key enumeration, so this is safe internally. Just worth noting for external consumers if this is a public API.
Added
Signed-off-by: George Armstrong <georgea@nvidia.com>
Signed-off-by: George Armstrong <georgea@nvidia.com>
core/requirements.txt (Outdated)
```text
# No cluster orchestration deps (nemo_run, typer, etc.)

# --- code evaluation ---
```
are you sure this covers all benchmarks? Generally, we should move to keeping this reqs really simple and move most benchmark-specific requirements to install at runtime, but for now probably we might need some more packages here? E.g. datasets is almost certainly needed and then other benchmark specific things, like sacrebleu, etc.
I revisited the separation. This should contain all the reqs not needed for cluster orchestration now.
requirements/pipeline.txt (Outdated)
```text
nemo-evaluator-launcher<0.1.47
nemo_run @ git+https://github.com/NVIDIA-NeMo/Run
typer >= 0.13
wandb
```
this is actually a core dependency, it's being used in summarize-results, which is required for core functionality. Currently summarize-results is kind of in a weird half-pipeline state, but we should fix it to cleanly separate it into pipeline and non-pipeline components via #779 (comment)
CONTRIBUTING.md (Outdated)
| CLI commands, cluster orchestration, experiment tracking | `requirements/pipeline.txt` |
| Everything else (dataset-specific deps, benchmark-specific packages) | `requirements/main.txt` only |

Dependencies in `core/requirements.txt` should be things that a typical `GenerationTask` run with PythonTool would need. Dataset-specific or benchmark-specific packages (e.g., `faiss-cpu`, `sacrebleu`, `func-timeout`) go only in `requirements/main.txt`.
this part I don't fully understand - I think benchmark-specific packages should go to core for now as otherwise the code will fail when those benchmarks are used e.g. in evaluator. Eventually we should migrate to jit install, but it's not done yet, so I'd put those into core
Fair. My original scope was pretty PythonTool specific, but I think we can come up with something that makes a little more sense in terms of aligning the core code with core dependencies.
yeah it's now in core and there is a clearer description of what's in pipeline vs core
CONTRIBUTING.md (Outdated)
Dependencies in `core/requirements.txt` should be things that a typical `GenerationTask` run with PythonTool would need. Dataset-specific or benchmark-specific packages (e.g., `faiss-cpu`, `sacrebleu`, `func-timeout`) go only in `requirements/main.txt`.

All core and pipeline deps must also appear in `requirements/main.txt` (the monolithic file used for default installs).
can we not link multiple requirements listed in pyproject.toml? We duplicate?
we should be able to do that. It should be implemented that way now--I updated this at one point so it links against the file rather than duplicating. Will fix.
CONTRIBUTING.md (Outdated)
**When writing new core code:**

- If you need something from `nemo_skills.pipeline`, your code probably belongs in pipeline, not core. Move it.
- If you have a function that works locally but *also* needs a cluster variant, put the local version in core and a cluster-aware wrapper in `nemo_skills/pipeline/` (see `pipeline/dataset.py` for the pattern).
I actually think that if we have a case like this, it means we need to redesign something. Ideally separation should be clean, and we shouldn't need to duplicate functionality. E.g. the dataset module part is a bit messy and there is probably a way to do it better, such that there is a pipeline level that only manages pulling from cluster and then there is a local level that always assumes things are present locally and is being called inside pipeline directly
Makes sense, updated the docs here to reflect that and made the implementation more consistent with the guidance here/there.
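A toy illustration of the layering being agreed on here, with all names hypothetical: the core layer assumes files are already local, and the pipeline layer only stages remote content before delegating.

```python
import os
import tempfile

def read_dataset_init(path):
    # Core layer: pure local logic, no cluster awareness at all.
    with open(path, encoding="utf-8") as f:
        return f.read().strip()

def cluster_download_file(remote_path, local_path):
    # Pipeline layer: stand-in for the real SSH download.
    with open(local_path, "w", encoding="utf-8") as f:
        f.write(f"# staged from {remote_path}\nGROUP = 'math'")

def read_dataset_init_from_cluster(remote_path):
    # Pipeline stages the file, then hands everything else to core --
    # no duplicated import/parse logic on this side of the boundary.
    local_path = os.path.join(tempfile.mkdtemp(), "__init__.py")
    cluster_download_file(remote_path, local_path)
    return read_dataset_init(local_path)

content = read_dataset_init_from_cluster("/datasets/math/__init__.py")
print(content.splitlines()[-1])  # -> GROUP = 'math'
```

Because the call direction is strictly pipeline-to-core, the core function stays importable and testable without any cluster dependencies.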
Per review feedback: all benchmark-specific packages should go to core for now since JIT install is not yet implemented. Previously only PythonTool-specific deps were in core while benchmark deps like datasets, sacrebleu, faiss-cpu, etc. were only in main.txt. This led to an inconsistent boundary where math grader deps were in core but BFCL deps were not, despite both being benchmark-specific. Addresses review comments #1, #4, #6 on PR #1229. Signed-off-by: George Armstrong <georgea@nvidia.com>
pyproject.toml now composes default dependencies from core/requirements.txt + requirements/pipeline.txt instead of maintaining a separate monolithic main.txt that duplicated both. This ensures a single source of truth for each dependency: it lives in exactly one requirements file, and pyproject.toml references both. Addresses review comment #5 on PR #1229. Signed-off-by: George Armstrong <georgea@nvidia.com>
Creates the test file referenced in docs/basics/installation.md that verifies the core/pipeline dependency boundary. Tests import each core module in a subprocess where nemo_run and nemo_skills.pipeline are blocked, ensuring core has no top-level pipeline dependencies. Addresses review comment #2 on PR #1229. Signed-off-by: George Armstrong <georgea@nvidia.com>
Rewrite the dependency boundary section to: - Define core as "everything needed for inference + evaluation" (not just PythonTool-specific deps) - Remove references to deleted requirements/main.txt - Clarify that all benchmark evaluator deps go to core until JIT install is implemented - Improve dataset module separation guidance (pipeline = cluster I/O only, core = all local logic) - Add note about summarize-results refactor (issue #779) Addresses review comments #3, #4, #6, #7 on PR #1229. Signed-off-by: George Armstrong <georgea@nvidia.com>
Refactor pipeline/dataset.py so it ONLY handles cluster I/O (SSH downloads, mount path resolution) and delegates all local import/resolution logic to core's dataset/utils.py. Key changes: - Extract cluster-specific loading into _get_cluster_dataset_module() - For local extra_datasets fallback, delegate to core instead of reimplementing add_to_path + import_module - For non-cluster cases, delegate entirely to core from the start - Remove duplicated local import logic that was parallel to core Addresses review comment #7 on PR #1229. Signed-off-by: George Armstrong <georgea@nvidia.com>
The section labels (agent runtime, math evaluation, code evaluation, benchmark evaluator deps) were misleading since many deps span multiple categories. Keep it as a flat alphabetical list. Signed-off-by: George Armstrong <georgea@nvidia.com>
Signed-off-by: George Armstrong <georgea@nvidia.com>
Pipeline no longer calls importlib.import_module or add_to_path directly — all import/module-resolution logic lives in core. Pipeline's only responsibilities are now: - Local executor: unmount paths via get_unmounted_path, then delegate to core, then map returned paths back to mounted form - Remote executor: SSH download via cluster_download_file for custom data_dir or cluster-type extra_datasets Addresses review comment #7 on PR #1229. Signed-off-by: George Armstrong <georgea@nvidia.com>
Collapse 3 helper functions into one download helper + one main function. Pipeline only does two things: unmount paths (local executor) and SSH download (remote executor). All import logic delegates to core. 140 -> 78 lines. Signed-off-by: George Armstrong <georgea@nvidia.com>
wandb is used by summarize-results which is core functionality, not just pipeline/orchestration. Move it to core/requirements.txt. Signed-off-by: George Armstrong <georgea@nvidia.com>
The Dockerfile referenced the deleted requirements/main.txt. Update to install from core/requirements.txt + pipeline.txt, matching how pyproject.toml now composes dependencies. Signed-off-by: George Armstrong <georgea@nvidia.com>
The Docker container installs deps from requirements files without running `pip install .`, so package metadata is not available. Checking for nemo_run import instead correctly detects whether pipeline deps are installed. Signed-off-by: George Armstrong <georgea@nvidia.com>
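The detection strategy described in this commit can be sketched as follows; the probe module name and message text are illustrative:

```python
import importlib.util

def require_pipeline_deps(probe_module="nemo_run"):
    # find_spec returns None when the module cannot be imported, which works
    # even for requirements-file installs that leave no package metadata.
    if importlib.util.find_spec(probe_module) is None:
        raise ImportError(
            f"nemo_skills.pipeline requires '{probe_module}'. "
            "Install the full package: pip install nemo-skills"
        )

# Simulate a core-only install with a module that is certainly absent.
try:
    require_pipeline_deps("definitely_not_installed_module")
except ImportError as err:
    print(err)
```

Unlike an `importlib.metadata` check, this probe only cares whether the module is importable, so it behaves the same in a Docker image built from raw requirements files and in a normal `pip install`.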
```diff
@@ -0,0 +1,39 @@
+# Core dependencies for inference, evaluation, tool calling, and all benchmark evaluators.
```
can we keep this inside requirements/core.txt? Would be simpler for people to only look in a single folder for all reqs
It has to be in the same directory as the pyproject.toml for the install of nemo-skills-core to work. But I can put a symlink in the requirements/ directory so it is at least clear that this file exists if someone goes to look in requirements.txt for it?
requirements/pipeline.txt (Outdated)
```diff
@@ -0,0 +1,7 @@
+# Pipeline/orchestration dependencies (CLI, cluster management, experiment tracking).
+# These are additional to core.txt.
```
comment should be updated or we should move core/requirement.txt into core.txt in this folder
tests/test_dependency_isolation.py (Outdated)

```python
import pytest

# Core modules that must be importable without nemo_run / pipeline
CORE_MODULES = [
```
can we dynamically find everything inside nemo_skills except pipeline subfolder?
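The dynamic discovery suggested here could look roughly like this; it is demonstrated on a stdlib package since the real test would walk `nemo_skills` with the `nemo_skills.pipeline` subtree excluded:

```python
import pkgutil

def iter_modules_except(package, excluded_prefix):
    # Walk every submodule of `package`, skipping the excluded subtree.
    for info in pkgutil.walk_packages(package.__path__, prefix=package.__name__ + "."):
        if not info.name.startswith(excluded_prefix):
            yield info.name

# Demo on the stdlib `logging` package, excluding `logging.handlers`.
import logging
names = sorted(iter_modules_except(logging, "logging.handlers"))
print(names)  # includes logging.config, excludes logging.handlers
```

One design caveat: `walk_packages` imports subpackages while walking, so the test would want to run it in a subprocess where the blocked modules raise, or filter by name before importing.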
the dataset into slurm tests. This is the most comprehensive test we can do by running full
evaluation on cluster with arbitrary model and check that results are as expected.

### Respect the Core / Pipeline dependency boundary
can we maybe keep the part in here brief, just summarize the logic in a few sentences / bullet points. And then the full description we move to another .md file? I think the full description is quite helpful, but it's a bit too detailed for the guidelines section, which I hope we can keep relatively short
sure. made it brief and moved the bulk of the guidelines to core/README.md, which is referenced here now
.github/workflows/tests.yml (Outdated)

```yaml
        with:
          python-version: "3.10"
          cache: pip
      - name: Install uv
```
good catch, it was from an old testing strategy, removed
docs/basics/installation.md (Outdated)
from core, but core modules must not import from pipeline.

This boundary is enforced by `tests/test_dependency_isolation.py` which creates
fresh virtualenvs and verifies that core modules import successfully without
is this true? I don't think we create fresh envs in tests?
yeah this is outdated from old tests, good catch
docs/basics/installation.md (Outdated)

```bash
pip install -e ".[dev]"
```

## Core / Pipeline architecture boundary
I'd maybe move this part somewhere else (e.g. only keep in contributing.md or better in a new .md where extra details from contributing can go as well). It's helpful, but probably a bit too dense for the "basics" part of the docs. More oriented towards people who'd need to modify our code
okay, put it in the core/README.md
The BFCL eval venv uses --system-site-packages and pins huggingface_hub<1, which downgrades the system's huggingface_hub 1.x to 0.x. This breaks transformers (from system packages) which needs is_offline_mode only available in huggingface_hub>=1.0. Gorilla's own BFCL does not pin huggingface_hub, so removing the constraint is safe. Signed-off-by: George Armstrong <georgea@nvidia.com>
Signed-off-by: George Armstrong <georgea@nvidia.com>
Move detailed core/pipeline boundary docs from CONTRIBUTING.md and installation.md into docs/core-pipeline-boundary.md. Add symlink at requirements/core.txt pointing to core/requirements.txt for discoverability. Signed-off-by: George Armstrong <georgea@nvidia.com>
…rable-pipeline

Signed-off-by: George Armstrong <georgea@nvidia.com>

# Conflicts:
#	nemo_skills/dataset/utils.py
#	nemo_skills/evaluation/evaluator/__init__.py
Signed-off-by: George Armstrong <georgea@nvidia.com>
Signed-off-by: George Armstrong <georgea@nvidia.com>
Signed-off-by: George Armstrong <georgea@nvidia.com>
Directories with hyphens (e.g., answer-judge, math-500, llama3-instruct) cannot be imported via `import` statement. Use importlib.import_module() which handles arbitrary module names correctly. Signed-off-by: George Armstrong <georgea@nvidia.com>
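A quick demonstration of the limitation this commit fixes, using a hypothetical dataset name: the `import` statement cannot express a hyphenated name, but `importlib.import_module` treats the name as an opaque string and resolves it through the normal path finders.

```python
import importlib
import sys
import tempfile
from pathlib import Path

# Build a package whose directory name contains a hyphen, like math-500.
tmp = tempfile.mkdtemp()
pkg = Path(tmp) / "math-500"
pkg.mkdir()
(pkg / "__init__.py").write_text("ANSWER_FIELD = 'expected_answer'\n")

sys.path.insert(0, tmp)
# `import math-500` would be a SyntaxError; import_module handles it fine.
module = importlib.import_module("math-500")
print(module.ANSWER_FIELD)  # -> expected_answer
```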
Adds a lightweight `nemo-skills-core` subpackage (`core/` subdirectory) with only inference, evaluation, and tool calling deps. Default `pip install nemo-skills` is unchanged (installs everything).

Changes

- `core/pyproject.toml` + `core/requirements.txt`: New subpackage installable via `pip install ./core` or git URL with `#subdirectory=core`. Single source of truth for core deps, referenced by both core and root `pyproject.toml`.
- `nemo_skills/pipeline/__init__.py`: Import guard using `importlib.metadata` -- importing pipeline modules with only core installed raises a clear `ImportError` instead of a cryptic `ModuleNotFoundError`.
- `nemo_skills/_cli_stub.py`: Stub `ns` CLI entry point for core-only installs that prints a helpful message.
- `nemo_skills/evaluation/evaluator/__init__.py`: Lazy evaluator registry using string paths instead of eager imports, so core-only installs don't fail on benchmark-specific deps (`faiss`, `func_timeout`, etc.).
- `nemo_skills/dataset/utils.py` + `nemo_skills/pipeline/dataset.py`: Moved cluster-dependent dataset logic into the pipeline module to keep core free of `nemo_run` imports.
- `requirements/pipeline.txt`: New requirements file for pipeline-only deps (`nemo_run`, `typer`, etc.).
- `.github/workflows/tests.yml`: Install `uv` in CI for use with testing installation.
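The `_cli_stub.py` entry point mentioned above could be as simple as this sketch; the message text and function shape are assumptions, not the actual implementation:

```python
import sys

def main() -> int:
    # Core-only installs get an actionable message instead of a stack trace
    # when the user runs `ns` without the pipeline dependencies installed.
    print(
        "The `ns` CLI requires the pipeline dependencies. "
        "Install the full package: pip install nemo-skills",
        file=sys.stderr,
    )
    return 1

exit_code = main()  # a real console_scripts entry would pass this to sys.exit()
```

Registering this function as the `ns` console script in the core package keeps the command present on PATH while making the failure mode self-explanatory.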
New Features
Documentation
Chores