
feat: make ProcessOutput iterable #1088

Closed
wants to merge 12 commits

Conversation

@Pulkit1822 commented Jan 29, 2025

Added a synchronous iterator to the ProcessOutput class to allow iterating over stdout lines.

  • src/core.ts

    • Added a synchronous iterator method [Symbol.iterator]() to the ProcessOutput class.
    • Split the stdout string into lines using \r?\n to handle both Unix and Windows line endings.
    • Returned the iterator of the split array.
  • test/core.test.js

    • Added a test case to verify the synchronous iterator method in the ProcessOutput class.
    • Also ensured that the test case covers both Unix and Windows line endings.

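The approach described above might look like the following minimal sketch. This is an illustrative standalone class, not zx's actual `ProcessOutput`, which has many more members:

```typescript
// Minimal sketch of the described approach: a synchronous iterator that
// splits stdout on \r?\n so both Unix and Windows line endings are handled.
// Illustrative only -- not zx's real ProcessOutput class.
class ProcessOutput {
  constructor(private _stdout: string) {}

  get stdout(): string {
    return this._stdout
  }

  // Split into lines and hand back the array's own iterator.
  [Symbol.iterator](): Iterator<string> {
    return this._stdout.split(/\r?\n/)[Symbol.iterator]()
  }
}

// Usage: for...of and spread now work on the output.
const out = new ProcessOutput('foo\r\nbar\nbaz')
for (const line of out) console.log(line)
```

Note that a plain `split` keeps an empty trailing element when the output ends with a newline, which is one reason the discussion below moves toward reusing `lines()`.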
@antongolub (Collaborator) commented Jan 29, 2025

@Pulkit1822, not so fast)

  1. ProcessOutput already has a lines() extractor; it should be used here.
  2. ProcessPromise, if resolved, should wrap a generator around its internal data.
{
  [Symbol.iterator]: function* () {
    yield* this._output.lines()
  }
}

https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Iteration_protocols#the_iterable_protocol
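A sketch of what this suggestion might look like. The class shapes and the `_output` field are simplified stand-ins, not zx's real internals:

```typescript
// Illustrative sketch of the suggestion: ProcessOutput keeps its lines()
// extractor, and a resolved ProcessPromise delegates iteration to it.
// Simplified stand-ins, not zx's real classes.
class ProcessOutput {
  constructor(private _combined: string) {}

  lines(): string[] {
    return this._combined.trim().split(/\r?\n/)
  }
}

class ProcessPromise {
  _output?: ProcessOutput

  // Generator delegation: yield* re-yields every line from the resolved output.
  *[Symbol.iterator](): Iterator<string> {
    if (!this._output) throw new Error('process not resolved yet')
    yield* this._output.lines()
  }
}
```

With this shape, `[...promise]` or `for (const line of promise)` works once the process has resolved, and throws otherwise.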

@Pulkit1822 (Author)

@antongolub, as suggested, I've added the iterable protocol in core.ts with this.lines()[Symbol.iterator](), so ProcessOutput can be used in for...of loops. The test in core.test.js checks that ProcessOutput yields the correct lines when spread with [ ... ], making sure iterable operations work.

Is this the right way, or should I use:

*[Symbol.iterator](): Iterator<string> {
  yield* this.lines();
}

This would make it a generator function, which seems clearer. Let me know if you have any advice or suggestions.

test/core.test.js (review thread: outdated, resolved)
@Pulkit1822 (Author) left a comment

Resolved the iterator test—simplified and verified the [Symbol.iterator] implementation. Please review.

@Pulkit1822 requested a review from antongolub February 5, 2025 21:29
@antongolub changed the title from "feat: make resolved ProcessPromise iterable" to "feat: make resolved ProcessOutput iterable" Feb 6, 2025
@antongolub (Collaborator)

My suggestion is to introduce a custom iterator to avoid redundant join and split steps. Relates #1053

{
  *[Symbol.iterator](): Iterator<string> {
    yield* // ...
  },
  lines() { return [...this] }
}
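A hypothetical way to fill in that sketch (illustrative only, not what zx ultimately shipped): the iterator scans the stored string lazily, and `lines()` becomes a spread of it, so the join/split round trip disappears.

```typescript
// Assumed fill-in for the sketch above, not zx's shipped implementation:
// the iterator becomes the primary API and lines() is derived from it.
class ProcessOutput {
  constructor(private _combined: string) {}

  *[Symbol.iterator](): Iterator<string> {
    // Scan for line breaks instead of split(), so no intermediate
    // array is materialized up front.
    const s = this._combined
    let start = 0
    for (let i = 0; i < s.length; i++) {
      if (s[i] === '\n') {
        // Strip the \r of a \r\n pair so Windows line endings work too.
        const end = i > start && s[i - 1] === '\r' ? i - 1 : i
        yield s.slice(start, end)
        start = i + 1
      }
    }
    if (start < s.length) yield s.slice(start) // trailing line without newline
  }

  lines(): string[] {
    return [...this]
  }
}
```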

@Pulkit1822 (Author)

> My suggestion is to introduce a custom iterator to avoid redundant join and split steps. Relates #1053
>
> {
>   *[Symbol.iterator](): Iterator<string> {
>     yield* // ...
>   },
>   lines() { return [...this] }
> }

I wanted to update you on the changes I made based on your suggestions:

  • Introduced a custom iterator in the ProcessOutput class. It yields trimmed and split lines directly, avoiding the previous redundant join and split operations and making the code cleaner and more efficient.

  • Refactored the lines() method to leverage the new iterator: instead of manually joining and splitting the output, it now spreads the iterator. This simplifies the logic and should also improve performance.

  • Added test cases specifically targeting the new iterator, including several edge cases, to ensure it behaves as expected under various conditions.

@antongolub (Collaborator) commented Feb 8, 2025

#1095
#1096

I made preparatory steps to pass the store reference to the internal ProcessOutput DTO. Now the Iterator<string> API can use buffer chunks instead of _combined.

@antongolub changed the title from "feat: make resolved ProcessOutput iterable" to "feat: make ProcessOutput iterable" Feb 8, 2025
@Pulkit1822 (Author) commented Feb 8, 2025

> #1095
>
> I made a preparatory step to pass the store reference to the internal ProcessOutput dto. Now Iterator<string> API can use buffer chunks instead of _combined.

Implemented the iterator to process buffer chunks directly:

  • Streams lines incrementally without concatenation
  • Handles multi-buffer line continuations via carryover
  • Added tests for split-buffer edge cases

I think I'm heading in the right direction.
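The carryover idea described in the bullets above might be sketched as follows. This is illustrative, not the actual PR code; chunks are assumed to be already-decoded strings, whereas zx's store actually holds `string | Buffer` and would need decoding first:

```typescript
// Illustrative sketch of the carryover approach: lines are streamed
// incrementally from chunks, and a partial line at the end of one chunk
// is carried over and completed by the next chunk.
class ProcessOutput {
  constructor(private chunks: string[]) {}

  *[Symbol.iterator](): Iterator<string> {
    let carry = '' // partial line continued from the previous chunk
    for (const chunk of this.chunks) {
      const parts = (carry + chunk).split(/\r?\n/)
      carry = parts.pop()! // last piece may continue in the next chunk
      yield* parts
    }
    if (carry) yield carry // final line without a trailing newline
  }
}
```

A line split across two chunks, e.g. `['foo\nba', 'r\nbaz']`, is reassembled into `'bar'` by the carryover, and a `\r\n` pair straddling a chunk boundary is handled because the pair is rejoined before splitting.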

@antongolub (Collaborator) commented Feb 8, 2025

  1. The last rebase seems broken.
  2. Try these prompts:
1. Read the code in the next message. Do not comment, I will attach the instructions later.
2. Send the `ProcessOutput`
3. Refactor the `lines()` getter to avoid redundant join & split usage. Extract stdout chunks from the store. Extend the class with the Iterator API:

{
  *[Symbol.iterator](): Iterator<string> {
    yield* // ... place your proposal here
  },
  lines() { return [...this] }
}

The store type is pretty simple: 
export type TSpawnStoreChunks = TArrayLike<string | Buffer>;
export type TSpawnStore = {
    stdout: TSpawnStoreChunks;
    stderr: TSpawnStoreChunks;
    stdall: TSpawnStoreChunks;
};

GPT-4o gives the correct solution

@Pulkit1822 force-pushed the feat-iterable-processpromise branch from f7dc376 to a11080b on February 9, 2025 03:15
@antonmedv (Collaborator)

@antongolub let's just close this PR and block this bot user?

@Pulkit1822 (Author) commented Feb 9, 2025

> 1. The last rebase seems broken.
> 2. Try these prompts:
>
> 1. Read the code in the next message. Do not comment, I will attach the instructions later.
> 2. Send the `ProcessOutput`
> 3. Refactor the `lines()` getter to avoid redundant join & split usage. Extract stdout chunks from the store. Extend the class with the Iterator API:
>
> {
>   *[Symbol.iterator](): Iterator<string> {
>     yield* // ... place your proposal here
>   },
>   lines() { return [...this] }
> }
>
> The store type is pretty simple:
> export type TSpawnStoreChunks = TArrayLike<string | Buffer>;
> export type TSpawnStore = {
>     stdout: TSpawnStoreChunks;
>     stderr: TSpawnStoreChunks;
>     stdall: TSpawnStoreChunks;
> };

I went back to the point in the branch where everything was working as intended, and pushed back to that point. I'm not sure whether we are supposed to use AI services in open-source contributions. PR #1095 confused me a little, which I think resulted in the conflicts. I'm working on improving it and on understanding what went wrong in my approach.

@Pulkit1822 (Author) commented Feb 9, 2025

> @antongolub let's just close this PR and block this bot user?

@antonmedv, your view of my work is very discouraging. I am currently a university student and wanted to learn and implement this. I know the approach I used in my solution might not be perfect, and you know better than I do. But you should have a positive attitude towards contributors. I reached out to @antongolub via email so that he could guide me accordingly while I worked on the PR to make it more fruitful, rather than having it reverted.

@antonmedv (Collaborator)

Hello there! First and foremost, we want to extend our sincerest gratitude for taking the time and effort to submit this pull request. Contributions like yours are what keep our project thriving, and we deeply appreciate your enthusiasm and willingness to engage with our community. Every submission we receive is a testament to the collaborative spirit of open-source development, and we genuinely value the dedication you’ve shown by proposing these changes.

That said, we’d like to take a moment to walk you through our review process to ensure transparency. When a pull request is submitted, our team conducts a thorough, line-by-line analysis of the proposed changes. This meticulous approach allows us to maintain the high standards of code quality, security, and consistency that our users and contributors expect. We examine not only functionality but also adherence to best practices, scalability considerations, and alignment with the project’s long-term vision. While this process can sometimes take time, it ensures that every addition or modification meets the rigorous benchmarks we’ve established for the project.

However, after careful evaluation, we’ve identified that the code included in this submission appears to have been generated (or heavily influenced) by a Large Language Model (LLM) such as ChatGPT, GitHub Copilot, or similar tools. Regrettably, our project’s contribution guidelines explicitly state that we cannot accept code produced by LLMs. This policy is rooted in several critical concerns: first, LLM-generated code often introduces licensing ambiguities, as it may inadvertently incorporate snippets from copyrighted or non-permissively licensed sources. Second, such code can pose security risks, as LLMs may generate plausible-looking but functionally insecure patterns. Third, and perhaps most importantly, we prioritize fostering human creativity, problem-solving, and skill development within our community. We believe that original, human-authored contributions lead to more maintainable, well-understood codebases and create opportunities for meaningful collaboration and learning.

This is not a reflection of your capabilities or intentions—we recognize that LLMs can be powerful tools for brainstorming or overcoming blockers. However, we encourage contributors to use these tools as aids for personal learning rather than direct sources of code. If you encountered challenges while working on this contribution, we’d be delighted to help you troubleshoot, provide resources, or pair you with a mentor! Our goal is to empower developers to grow their skills while contributing to the project sustainably.

We understand this decision may be disappointing, and we want to emphasize that we’re here to support your journey as a contributor. If you’d like to resubmit this feature or fix with original, human-authored code, we’d be thrilled to reevaluate it. Alternatively, you might consider tackling an issue from our "Good First Tasks" list, participating in design discussions on our forum, or improving documentation—all of which are invaluable ways to contribute. Our maintainers are also happy to provide detailed feedback on implementation strategies or architectural approaches if you’d like guidance.

Once again, thank you for investing your time in our project. Your initiative means a lot to us, even if this particular submission couldn’t be merged. We’re committed to maintaining an inclusive, supportive environment where developers of all skill levels can learn and succeed. Please don’t hesitate to reach out with questions, clarifications, or ideas—our community Slack channel and weekly office hours are great places to connect. We’re rooting for you and look forward to collaborating on future contributions that align with both your growth and our project’s needs!

Warm regards,
Zx Team

@antonmedv closed this Feb 11, 2025