`m decompose` helps you break a complex task prompt into structured, dependency-aware subtasks.
The decomposition pipeline extracts constraints, generates prompt templates for each subtask, and writes runnable outputs so you can inspect and execute the workflow.
- Prepare an output directory (must already exist).
- Put task prompt(s) in a text file.
- Run `m decompose run`.
```shell
MODEL_ID=mistral-small3.2:latest  # e.g. granite4:latest

mkdir -p ./output

m decompose run \
  --model-id $MODEL_ID \
  --input-file task.txt \
  --out-dir ./output \
  --out-name my_decomp
```

Important runtime behavior:
- `--input-file` supports multiple non-empty lines. Each line is treated as one task job.
- Multiple jobs produce numbered outputs: `my_decomp_1/`, `my_decomp_2/`, ...
- Outputs are written under `<out-dir>/<out-name>/` (or numbered job directories).
- Backend default: `ollama`
- Model default: `mistral-small3.2:latest`
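For example, a two-line input file yields two jobs (and thus `my_decomp_1/` and `my_decomp_2/`). A minimal sketch that writes such a file; the task text is illustrative:

```python
from pathlib import Path

# Each non-empty line becomes one task job, so this file produces
# two numbered output directories under the chosen out-dir.
Path("task.txt").write_text(
    "Write a short blog post about morning exercise.\n"
    "Write a checklist for a beginner 5k training plan.\n"
)
```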
If `--input-file` is omitted, the CLI prompts for one task string interactively.

Note:

- Interactive mode is intended for single-prompt input.
- `--input-var` is ignored in interactive mode by the current implementation.
For one query, `m decompose run` creates:

```
<out-dir>/<out-name>/
├── <out-name>.json
├── <out-name>.py
└── validations/
    ├── __init__.py
    └── val_fn_*.py   # only when a constraint uses code validation
```
For multiple queries:

```
<out-dir>/
├── <out-name>_1/
├── <out-name>_2/
└── ...
```
- `*.json`: full decomposition result (`subtask_list`, `identified_constraints`, `subtasks`, ...)
- `*.py`: rendered runnable program from the selected template version (`latest` currently resolves to `v2`)
- `validations/`: generated validation helper functions for constraints using the `code` strategy
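The JSON artifact can be inspected programmatically. A minimal sketch using a hypothetical stand-in for a generated `<out-name>.json` (field values here are illustrative, not real pipeline output):

```python
import json
from pathlib import Path
from tempfile import TemporaryDirectory

# Stand-in artifact shaped like the fields described above
# (subtask_list, identified_constraints, subtasks). The real file
# is produced by `m decompose run`.
sample = {
    "subtask_list": ["draft", "polish"],
    "identified_constraints": ["keep the post under 200 words"],
    "subtasks": [
        {"tag": "draft", "subtask": "Draft the post", "depends_on": []},
        {"tag": "polish", "subtask": "Polish the draft", "depends_on": ["draft"]},
    ],
}

with TemporaryDirectory() as tmp:
    path = Path(tmp) / "my_decomp.json"
    path.write_text(json.dumps(sample, indent=2))

    # Inspect the artifact: list each subtask tag and its dependencies.
    result = json.loads(path.read_text())
    tags = [(st["tag"], st["depends_on"]) for st in result["subtasks"]]
    for tag, deps in tags:
        print(tag, "<-", deps)
```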
- `--backend`: `ollama` | `openai`
- `--model-id`: inference model id/name
- `--backend-endpoint`: required for `openai`
- `--backend-api-key`: required for `openai`
- `--backend-req-timeout`: request timeout (seconds), default `300`
- `--input-var`: optional input variable names (repeatable; must be valid Python identifiers)
- `--version`: template version (`latest`, `v1`, `v2`)
- `--log-mode`: `demo` | `debug`
You can call the decomposition pipeline directly:
```python
import json

from cli.decompose.pipeline import DecompBackend, decompose

result = decompose(
    task_prompt="Write a short blog post about morning exercise.",
    user_input_variable=["USER_CONTEXT"],
    model_id="mistral-small3.2:latest",
    backend=DecompBackend.ollama,
)
print(json.dumps(result, indent=2, ensure_ascii=False))
```

`result["subtasks"]` items include:
- `subtask`
- `tag`
- `prompt_template`
- `general_instructions`
- `input_vars_required`
- `depends_on`
- `constraints` (with `val_strategy`, `val_fn_name`, `val_fn`)
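Because `depends_on` describes a DAG over subtask tags, an execution order can be derived with a standard topological sort. A sketch over hypothetical subtask entries shaped like the items above:

```python
from graphlib import TopologicalSorter

# Hypothetical entries mirroring result["subtasks"] (only the fields
# needed for scheduling are shown).
subtasks = [
    {"tag": "outline", "depends_on": []},
    {"tag": "draft", "depends_on": ["outline"]},
    {"tag": "review", "depends_on": ["draft", "outline"]},
]

# Map each tag to its predecessors, then sort so that every subtask
# runs after everything it depends on.
graph = {st["tag"]: set(st["depends_on"]) for st in subtasks}
order = list(TopologicalSorter(graph).static_order())
print(order)  # outline before draft before review
```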
Example using an OpenAI-compatible endpoint:

```shell
m decompose run \
  --input-file task.txt \
  --out-dir ./output \
  --backend openai \
  --model-id gpt-4o-mini \
  --backend-endpoint http://localhost:8000/v1 \
  --backend-api-key EMPTY
```

Use `m decompose` for:

- Decomposing one large task into manageable subtasks
- Preserving explicit constraints across subtasks
- Producing inspectable intermediate artifacts for debugging and editing
- Generating a runnable decomposition program instead of a single opaque prompt call