Bug: Generative AI guidance seems too permissive #1513
Comments
I think this is quite clear?
And:
I agree with Hugo; I don't see how this suggests that hallucination is going to be solved in the near future.
I see what you mean; it just feels like the "will continue in the future" / "gaining understanding" / "supplementing contributor knowledge" phrasing undermines that, since generating inaccurate advice is just as much a problem as generating inaccurate code.
Contributors that don't use AI will often approach issues and PRs based on inaccurate understandings of how things work (even experienced contributors sometimes misremember technical details, or don't realise that relevant aspects of the implementation have changed since they last worked on a particular area). So in the suggested areas, yes, AI tools can still be wrong, but from the maintainer side, that's no different than dealing with any other source of misunderstanding.

What's a genuine problem (and what this policy aims to address) is the sheer volume of issue and PR noise that AI slop can generate. Being wrong about something won't get anyone banned (unless they get abusive about it when the mistake is pointed out). By contrast, repeatedly submitting fully AI generated bug reports and PRs most likely will result in a ban, and this policy is what moderators will point to when that happens.

For examples of LLMs at least sometimes working well when asked to explain code snippets, even a few years ago, GPT-3 was already up to the task of explaining at least some not-particularly-obvious code.
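As a purely illustrative sketch (the snippet below is mine, not taken from the devguide or the linked example), this is the kind of terse-but-correct Python a newcomer might paste into an LLM and ask to have explained:

```python
# Illustrative only: terse functional code whose behaviour is easy to
# misread, and which a contributor might ask an LLM to walk through.
from functools import reduce


def compose(*funcs):
    """Return a function that applies *funcs* right to left."""
    return reduce(lambda f, g: lambda x: f(g(x)), funcs, lambda x: x)


# abs runs first, then str, so this prints the string "3".
print(compose(str, abs)(-3))
```

Asking a model to unpack the nested lambdas here is the kind of low-risk, knowledge-supplementing use the guidance describes; the contributor still has to verify the explanation against the code's actual behaviour before acting on it.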
Describe the bug
https://devguide.python.org/getting-started/generative-ai/
The page seems to imply that the LLM hallucination problem (I'm not sure what other form of generative AI it could be referring to) is going to be solved in the near future, and that it is wise to trust LLM output. Neither of those seems like a good judgement.
I can see someone reading this, misunderstanding an aspect of Python because they trusted LLM output, and then submitting incorrect code or documentation, or even submitting the LLM output itself.
Screenshots
No response
Additional context
No response