feat: make ProcessOutput iterable #1088
Conversation
Add a synchronous iterator to the `ProcessOutput` class to allow iterating over stdout lines.

* **src/core.ts**
  - Add a synchronous iterator method `[Symbol.iterator]()` to the `ProcessOutput` class.
  - Split the `stdout` string into lines using `\r?\n` to handle both Unix and Windows line endings.
  - Return the iterator of the split array.
* **test/core.test.js**
  - Add a test case to verify the synchronous iterator method in the `ProcessOutput` class.
  - Ensure the test case covers both Unix and Windows line endings.
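For reference, a minimal sketch of what the description outlines, assuming the combined output sits in a private `_stdout` field (the real constructor and field names in `src/core.ts` differ):

```ts
// Sketch only: the actual ProcessOutput class has a different
// constructor and more members; this shows just the iterator idea.
class ProcessOutput {
  constructor(private readonly _stdout: string) {}

  get stdout(): string {
    return this._stdout
  }

  // Iterate over stdout line by line, splitting on \r?\n so both
  // Unix and Windows line endings are handled.
  *[Symbol.iterator](): IterableIterator<string> {
    yield* this._stdout.split(/\r?\n/)
  }
}

const out = new ProcessOutput('a\r\nb\nc')
console.log([...out]) // ['a', 'b', 'c']
```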
@Pulkit1822, not so fast)
{
  [Symbol.iterator]: function* () {
    yield* this._output.lines()
  }
}
@antongolub, As per your suggestions, I've added the iterable protocol. Is this the right way, or should I use:

  *[Symbol.iterator](): Iterator<string> {
    yield* this.lines();
  }

This would make it a generator function, which seems clearer. Let me know if you have any advice or suggestions.
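Either variant gives the same consumer-facing behavior. A hedged usage sketch of what an iterable `ProcessOutput` would enable, assuming this PR's iterator is in place:

```ts
import { $ } from 'zx'

const out = await $`ls`

// for...of walks stdout line by line via [Symbol.iterator]
for (const line of out) {
  console.log(line)
}

// ...and spread materializes the lines into an array
const entries = [...out]
console.log(entries.length)
```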
Resolved the iterator test: simplified and verified the [Symbol.iterator] implementation. Please review.
My suggestion is to introduce a custom iterator to avoid redundant join and split steps. Relates #1053

{
  *[Symbol.iterator](): Iterator<string> {
    yield* // ...
  },
  lines() { return [...this] }
}
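A hedged sketch of that shape, shown over a standalone object with a hypothetical `_stdout` field: the iterator scans the string lazily instead of building a split array up front, and `lines()` is derived from it, which is what avoids the redundant join-then-split round trip.

```ts
const out = {
  _stdout: 'one\r\ntwo\nthree', // hypothetical backing field

  // The iterator is the single source of line splitting: it walks the
  // string with a regex and yields each segment as it is found.
  *[Symbol.iterator](): IterableIterator<string> {
    let start = 0
    const re = /\r?\n/g
    let m: RegExpExecArray | null
    while ((m = re.exec(this._stdout)) !== null) {
      yield this._stdout.slice(start, m.index)
      start = m.index + m[0].length
    }
    yield this._stdout.slice(start) // final segment (may be empty)
  },

  // lines() is just the materialized iterator, as in the suggestion.
  lines(): string[] {
    return [...this]
  },
}

console.log(out.lines()) // ['one', 'two', 'three']
```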
… case handling in tests
I wanted to update you on the changes I made based on your suggestions: implemented the iterator to process buffer chunks directly. I think I'm heading in the right direction.
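Purely as an illustration of what "processing buffer chunks directly" could look like (the `linesFromChunks` helper and the assumption that chunks are already-decoded strings are hypothetical, not the PR's actual code):

```ts
// Illustrative only: yields lines straight from output chunks,
// carrying over any partial line that spans a chunk boundary.
function* linesFromChunks(chunks: string[]): IterableIterator<string> {
  let carry = ''
  for (const chunk of chunks) {
    const parts = (carry + chunk).split(/\r?\n/)
    carry = parts.pop() ?? '' // last piece may be an unfinished line
    yield* parts
  }
  if (carry) yield carry // trailing text without a final newline
}

// Example: a line split across two chunks is still yielded whole.
console.log([...linesFromChunks(['foo\nba', 'r\r\nbaz'])]) // ['foo', 'bar', 'baz']
```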
… case handling in tests (f7dc376 to a11080b)
@antongolub let's just close this PR and block this bot user?
I went back to the point in the branch where everything was working as expected and pushed back to that state. I don't know whether we are supposed to use AI services in open-source contributions. PR #1095 confused me a little, which I think resulted in the conflicts. I'm working on it to make it better and to understand what went wrong in my approach.
@antonmedv Your view of my work is very discouraging. I am currently a university student and wanted to learn and contribute. I know the approach I used in the solution might not be perfect, and you know better than I do. But you should have a positive attitude towards contributors. I reached out to @antongolub via mail so that he could guide me accordingly while working on the PR to make it more fruitful, not revert it.
Hello there! First and foremost, we want to extend our sincerest gratitude for taking the time and effort to submit this pull request. Contributions like yours are what keep our project thriving, and we deeply appreciate your enthusiasm and willingness to engage with our community. Every submission we receive is a testament to the collaborative spirit of open-source development, and we genuinely value the dedication you’ve shown by proposing these changes.

That said, we’d like to take a moment to walk you through our review process to ensure transparency. When a pull request is submitted, our team conducts a thorough, line-by-line analysis of the proposed changes. This meticulous approach allows us to maintain the high standards of code quality, security, and consistency that our users and contributors expect. We examine not only functionality but also adherence to best practices, scalability considerations, and alignment with the project’s long-term vision. While this process can sometimes take time, it ensures that every addition or modification meets the rigorous benchmarks we’ve established for the project.

However, after careful evaluation, we’ve identified that the code included in this submission appears to have been generated (or heavily influenced) by a Large Language Model (LLM) such as ChatGPT, GitHub Copilot, or similar tools. Regrettably, our project’s contribution guidelines explicitly state that we cannot accept code produced by LLMs. This policy is rooted in several critical concerns: first, LLM-generated code often introduces licensing ambiguities, as it may inadvertently incorporate snippets from copyrighted or non-permissively licensed sources. Second, such code can pose security risks, as LLMs may generate plausible-looking but functionally insecure patterns. Third, and perhaps most importantly, we prioritize fostering human creativity, problem-solving, and skill development within our community. We believe that original, human-authored contributions lead to more maintainable, well-understood codebases and create opportunities for meaningful collaboration and learning.

This is not a reflection of your capabilities or intentions—we recognize that LLMs can be powerful tools for brainstorming or overcoming blockers. However, we encourage contributors to use these tools as aids for personal learning rather than direct sources of code. If you encountered challenges while working on this contribution, we’d be delighted to help you troubleshoot, provide resources, or pair you with a mentor! Our goal is to empower developers to grow their skills while contributing to the project sustainably.

We understand this decision may be disappointing, and we want to emphasize that we’re here to support your journey as a contributor. If you’d like to resubmit this feature or fix with original, human-authored code, we’d be thrilled to reevaluate it. Alternatively, you might consider tackling an issue from our "Good First Tasks" list, participating in design discussions on our forum, or improving documentation—all of which are invaluable ways to contribute. Our maintainers are also happy to provide detailed feedback on implementation strategies or architectural approaches if you’d like guidance.

Once again, thank you for investing your time in our project. Your initiative means a lot to us, even if this particular submission couldn’t be merged. We’re committed to maintaining an inclusive, supportive environment where developers of all skill levels can learn and succeed.

Please don’t hesitate to reach out with questions, clarifications, or ideas—our community Slack channel and weekly office hours are great places to connect. We’re rooting for you and look forward to collaborating on future contributions that align with both your growth and our project’s needs!

Warm regards,