When you run a streaming request against the llama.cpp server, there appears to be no way to see grammar-related error messages in the HTTP response.
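
For reference, a minimal sketch of the kind of request involved, assuming a local `llama-server` listening on port 8080 and the Python `requests` package (the `/completion` endpoint and the `prompt`, `stream`, and `grammar` fields are taken from the server's documented API; the broken grammar string is just an illustrative example):

```python
import requests  # assumes the `requests` package is installed

# Send a streaming /completion request with a deliberately malformed GBNF
# grammar, then print whatever the HTTP response actually contains.
payload = {
    "prompt": "Answer with yes or no:",
    "n_predict": 16,
    "stream": True,
    # Invalid grammar: the alternation is never closed, so grammar parsing fails.
    "grammar": 'root ::= ("yes" | "no"',
}

resp = requests.post("http://localhost:8080/completion", json=payload, stream=True)
print("status:", resp.status_code)

# Iterate over the server-sent-event lines. In the scenario described above,
# the stream carries no detail about the grammar failure; any parse error is
# only visible in the server's own log output, not in this response body.
for line in resp.iter_lines():
    if line:
        print(line.decode("utf-8"))
```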