Leaderboard [ERROR] Evaluation failed. No available key left! #145
Comments
Some of your answers contain too many tokens; please check the size of your JSON file before submitting. Error message: "This model's maximum context length is 16385 tokens. However, your messages resulted in 24345 tokens. Please reduce the length of the messages."
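For anyone hitting this context-length error, a local pre-check along the lines of the sketch below can flag oversized answers before submitting. This is only a rough illustration: `tiktoken`, the `cl100k_base` encoding, the prompt budget, and the `"answer"` field name are all assumptions, not the test server's actual setup.

```python
# Minimal sketch: estimate per-answer token counts in submission.json
# before uploading. Field names and budgets are assumptions.
import json
import tiktoken

MAX_CONTEXT = 16385    # limit reported in the error message
PROMPT_BUDGET = 4000   # rough allowance for the judge prompt (assumption)

enc = tiktoken.get_encoding("cl100k_base")

with open("submission.json", "r", encoding="utf-8") as f:
    submission = json.load(f)

for idx, item in enumerate(submission):
    answer = item.get("answer", "")              # hypothetical field name
    n_tokens = len(enc.encode(answer))
    if n_tokens + PROMPT_BUDGET > MAX_CONTEXT:
        print(f"Entry {idx}: {n_tokens} tokens, likely to exceed the context limit")
```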
Thank you for your patient response to my question. Could you help me resolve the following error? It seems to be caused by the GPT response score coming back in an unexpected format.
@ymxyll I'm hitting a similar error: "could not convert string to float". My last four submissions all failed for this reason. I think the eval code used on the test server is not very robust; a simple retry logic could easily solve this problem, but we cannot access the code.
Usually people encounter this error when there is a malformed answer in the submission JSON file. We do have several try-catch blocks on our server, so your submission is consistently triggering the error. My suggestion would be to test with the train data on your side and see whether you get a similar result.
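For reference, here is a rough sketch of the retry-and-parse guard discussed above, which could be run locally against the train data to reproduce the "could not convert string to float" failure. `query_gpt_judge` is a hypothetical placeholder for however the judge model is called; the server's actual evaluation code is not public.

```python
# Sketch only: robustly parse a numeric score out of a judge reply and
# retry a few times instead of failing on the first malformed response.
import re

def parse_score(raw: str) -> float:
    """Pull the first number out of the judge's reply instead of calling
    float() on the whole string, which is what raises
    'could not convert string to float' on chatty replies."""
    match = re.search(r"-?\d+(?:\.\d+)?", raw)
    if match is None:
        raise ValueError(f"no numeric score found in: {raw!r}")
    return float(match.group())

def score_with_retries(question, answer, query_gpt_judge, max_retries=3):
    last_err = None
    for _ in range(max_retries):
        try:
            reply = query_gpt_judge(question, answer)   # hypothetical call
            return parse_score(reply)
        except ValueError as err:
            last_err = err                              # malformed reply, try again
    raise RuntimeError(f"judge kept returning unparseable scores: {last_err}")
```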
Thank you for providing such a convenient testing service! After I submitted submission.json to the test server, the following error occurred:

[ERROR] Evaluation failed. No available key left! The team name in your submission is [xxx].

Does this mean that the GPT key on the test server has run out of balance?