Bug: Generative AI guidance seems too permissive #1513

Closed · t4moxjc7 opened this issue Feb 11, 2025 · 4 comments
Labels: bug

@t4moxjc7

Describe the bug

https://devguide.python.org/getting-started/generative-ai/

The page seems to imply that the LLM hallucination problem (I'm not sure what other form of generative AI it could be referring to) is going to be solved in the near future, and that it is wise to trust LLM output. Neither of those seems like a good judgement.

I can see someone reading this, misunderstanding an aspect of Python due to trusting LLM output, and submitting incorrect code or documentation, or even submitting LLM output itself.


t4moxjc7 added the bug label Feb 11, 2025
@hugovk (Member) commented Feb 11, 2025

I think this is quite clear?

> Their overuse can also be problematic, such as generation of incorrect code, inaccurate documentation, and unneeded code churn. Discretion, good judgement, and critical thinking must be used when opening issues and pull requests.

And:

> Unacceptable uses
>
> Maintainers may close issues and PRs that are not useful or productive, including those that are fully generated by AI. If a contributor repeatedly opens unproductive issues or PRs, they may be blocked.

@zanieb commented Feb 11, 2025

I agree with Hugo — I don't see how this suggests that hallucination is going to be solved in the near future.

@t4moxjc7 (Author) commented

I see what you mean; it just feels like the "will continue in the future" / "gaining understanding" / "supplementing contributor knowledge" wording kind of undermines that, since generation of inaccurate advice is just as much a problem as generation of inaccurate code.

@ncoghlan (Contributor) commented

Contributors who don't use AI will often approach issues and PRs based on inaccurate understandings of how things work (even experienced contributors sometimes misremember technical details, or don't realise that relevant aspects of the implementation have changed since they last worked on a particular area).

So in the suggested areas, yes, AI tools can still be wrong, but from the maintainer side, that's no different than dealing with any other source of misunderstanding. What's a genuine problem (and what this policy aims to address) is the sheer volume of issue and PR noise that AI slop can generate. Being wrong about something won't get anyone banned (unless they get abusive about it when the mistake is pointed out). By contrast, repeatedly submitting fully AI generated bug reports and PRs most likely will result in a ban, and this policy is what moderators will point to when that happens.

As for LLMs at least sometimes working well when asked to explain code snippets: even a few years ago, GPT-3 was already up to the task of explaining at least some not-particularly-obvious code.
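To make that concrete, here is a hypothetical illustration (my own example, not a snippet from the devguide or this thread) of the kind of terse idiom where asking an LLM for an explanation can be genuinely useful:

```python
# A compact idiom whose behavior is not obvious at a glance:
# it deduplicates a sequence while preserving first-seen order.
# (Illustrative example only; not taken from the linked guidance.)
def dedupe(items):
    seen = set()
    # set.add() returns None (falsy), so the `or` clause never
    # changes the test result; its only effect is recording x as seen.
    return [x for x in items if not (x in seen or seen.add(x))]

print(dedupe([3, 1, 3, 2, 1]))  # -> [3, 1, 2]
```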

ncoghlan closed this as not planned Feb 12, 2025