I cloned the repo and ran the training/llmxcpg_query_finetune.py script.
Training ended before it even started. I then realized that the behavior of the `max_steps` parameter is inconsistent with the README: leaving `max_steps` at its default value of 0 causes the script to skip training entirely.
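For context, here is a minimal sketch of the failure mode I suspect (the function name and loop structure are hypothetical, not taken from llmxcpg_query_finetune.py): when a training loop is bounded by `max_steps` and that value defaults to 0, the loop body never executes, so the run exits immediately with no training.

```python
def train(max_steps: int = 0) -> int:
    """Hypothetical stand-in for the script's training loop."""
    steps_run = 0
    # range(0) is empty, so with the default max_steps=0 this loop never runs
    for step in range(max_steps):
        steps_run += 1  # stand-in for one optimizer step
    return steps_run

train()             # default max_steps=0: zero steps, training "ends" instantly
train(max_steps=5)  # an explicit positive value: training proceeds
```

If this matches the script's logic, either the default should be changed (e.g. to a sentinel meaning "train by epochs") or the README should document that a positive `max_steps` must be passed.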