
fix: auto-adapt LeRobot state dimension instead of raising ValueError #82

Open
cagataycali wants to merge 3 commits into strands-labs:main from cagataycali:fix/lerobot-state-dim-adapt

Conversation

@cagataycali
Member

TL;DR

When a robot exposes more joints than the policy was trained on (e.g. aloha has 16 joints but ACT expects 14), the policy raised a hard ValueError during inference. This fix auto-adapts the state dimension — truncating excess values or zero-padding missing ones — with debug logging.

What changed

File                                              Change
strands_robots/policies/lerobot_local/policy.py   Replace ValueError with truncate/zero-pad + debug logging
tests/test_lerobot_local.py                       Update test to assert pad behavior instead of raising

Why

This is the standard approach in robotics — LeRobot's own teleoperation code does the same. Hard crashing on dimension mismatch makes sim↔real transfer fragile and prevents running policies trained on one embodiment on another with different joint counts.

# Before: Hard crash
if len(state_values) != expected_dim:
    raise ValueError(f"State dimension mismatch: got {len(state_values)}...")

# After: Auto-adapt with logging
if len(state_values) > expected_dim:
    logger.debug("State dim %d > model expects %d — truncating", ...)
    state_values = state_values[:expected_dim]
elif len(state_values) < expected_dim:
    logger.debug("State dim %d < model expects %d — zero-padding", ...)
    state_values.extend([0.0] * (expected_dim - len(state_values)))
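The adaptation logic above can be sketched as a self-contained helper. The function name `adapt_state_dim` is hypothetical (not necessarily what policy.py uses); this is a minimal sketch of the truncate/zero-pad behavior, not the actual implementation:

```python
import logging

logger = logging.getLogger(__name__)


def adapt_state_dim(state_values: list[float], expected_dim: int) -> list[float]:
    """Truncate or zero-pad a joint-state vector to the model's expected size.

    Hypothetical helper illustrating the PR's approach; names are assumptions.
    """
    if len(state_values) > expected_dim:
        # Robot exposes more joints than the policy was trained on: keep the first N.
        logger.debug(
            "State dim %d > model expects %d — truncating",
            len(state_values), expected_dim,
        )
        state_values = state_values[:expected_dim]
    elif len(state_values) < expected_dim:
        # Robot exposes fewer joints: fill the remainder with zeros.
        logger.debug(
            "State dim %d < model expects %d — zero-padding",
            len(state_values), expected_dim,
        )
        state_values = state_values + [0.0] * (expected_dim - len(state_values))
    return state_values
```

With the aloha-vs-ACT example from the description, `adapt_state_dim([0.1] * 16, 14)` keeps the first 14 values, and a 2-joint state adapted to 4 dims gains two trailing zeros.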

Testing

  • ✅ All 266 existing tests pass
  • ✅ Updated test_state_padded_to_expected_dim to verify auto-pad behavior
  • ✅ Lint clean (ruff check + ruff format --check)
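The updated padding test might look roughly like this (a sketch only — `pad_or_truncate` is a hypothetical stand-in for the policy's internal adaptation step, since the real test exercises the policy class directly):

```python
def pad_or_truncate(values: list[float], expected_dim: int) -> list[float]:
    # Hypothetical stand-in for the policy's internal state adaptation.
    if len(values) < expected_dim:
        return values + [0.0] * (expected_dim - len(values))
    return values[:expected_dim]


def test_state_padded_to_expected_dim():
    # Fewer joints than the model expects: zero-pad instead of raising.
    assert pad_or_truncate([0.5, 0.5], 4) == [0.5, 0.5, 0.0, 0.0]


def test_state_truncated_to_expected_dim():
    # More joints than the model expects (e.g. aloha 16 vs ACT 14): truncate.
    assert pad_or_truncate([0.1] * 16, 14) == [0.1] * 14
```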

Part 1 of 6 in the MuJoCo simulation PR decomposition (see PR_TASKS.md)

When a robot exposes more joints than the policy was trained on
(e.g. aloha has 16 joints but ACT expects 14), the policy raised a
hard ValueError during inference, making sim-to-real transfer fragile.

Fix: truncate excess joints or zero-pad if fewer, with debug logging.
This is the standard approach in robotics — LeRobot's own teleoperation
code does the same.
Address review feedback: state dimension truncation/padding should
be visible to operators since it can affect device behavior.
The warning logs dimension counts but not key names. When a user sees "State dim 16 > model expects 14 — truncating", they can't tell which joints are dropped. Consider adding key names to the message so users can verify the mapping is intentional rather than a genuine misconfiguration:

logger.warning(
    "State dim %d > model expects %d — truncating to first %d values. "
    "Check that robot_state_keys matches your robot's actual joint count.",
    len(state_values), expected_dim, expected_dim,
)

Member Author


Fixed in 669735d — warning now includes actionable hint about robot_state_keys:

logger.warning(
    "State dim %d > model expects %d — truncating to first %d values. "
    "Check that robot_state_keys matches your robot's actual joint count.",
    len(state_values), expected_dim, expected_dim,
)

Same pattern applied to the zero-padding path.
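The zero-padding counterpart might look roughly like this (a sketch under assumptions — `pad_state` is a hypothetical name, and the exact message wording in the PR may differ):

```python
import logging

logger = logging.getLogger(__name__)


def pad_state(state_values: list[float], expected_dim: int) -> list[float]:
    # Hypothetical helper mirroring the truncation warning on the padding path.
    missing = expected_dim - len(state_values)
    if missing > 0:
        logger.warning(
            "State dim %d < model expects %d — zero-padding with %d zeros. "
            "Check that robot_state_keys matches your robot's actual joint count.",
            len(state_values), expected_dim, missing,
        )
        state_values = state_values + [0.0] * missing
    return state_values
```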

Address review: include robot_state_keys hint in truncation/padding
warnings so users can diagnose joint count mismatches.

@yinsong1986 left a comment


All review comments addressed. LGTM.


Labels

None yet

Projects

Status: In review

2 participants