Conversation

@Shourya742
Contributor

@Shourya742 commented Jan 2, 2026

This PR replaces the single proc-macro server per workspace with a small pool of server processes.
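As a rough illustration of the shape of this (the discussion below notes that requests go to the least-overloaded worker), here is a standard-library-only sketch: a fixed-size pool of worker handles plus a dispatcher that picks the handle with the fewest in-flight requests. The `Worker`/`ServerPool` names, the in-flight counters, and the simulated `expand` body are assumptions for illustration, not the actual rust-analyzer types or wire protocol.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;

/// Illustrative stand-in for a handle to one proc-macro server process.
/// A real handle would own the spawned child process and its stdio
/// transport; that is omitted here.
struct Worker {
    id: usize,
    in_flight: AtomicUsize,
}

/// A small, fixed-size pool of server handles.
struct ServerPool {
    workers: Vec<Arc<Worker>>,
}

impl ServerPool {
    fn new(size: usize) -> Self {
        let workers: Vec<Arc<Worker>> = (0..size.max(1))
            .map(|id| Arc::new(Worker { id, in_flight: AtomicUsize::new(0) }))
            .collect();
        ServerPool { workers }
    }

    /// Pick the worker with the fewest requests currently in flight.
    fn least_loaded(&self) -> Arc<Worker> {
        self.workers
            .iter()
            .min_by_key(|w| w.in_flight.load(Ordering::Relaxed))
            .cloned()
            .expect("pool is never empty")
    }

    /// Dispatch one expansion request (simulated here with a string).
    fn expand(&self, input: &str) -> String {
        let worker = self.least_loaded();
        worker.in_flight.fetch_add(1, Ordering::Relaxed);
        // A real implementation would serialize the request onto this
        // worker's connection and block until the reply arrives.
        let result = format!("expanded `{input}` on worker {}", worker.id);
        worker.in_flight.fetch_sub(1, Ordering::Relaxed);
        result
    }
}

fn main() {
    let pool = ServerPool::new(4);
    for tt in ["derive(Debug)", "derive(Clone)", "derive(Serialize)"] {
        println!("{}", pool.expand(tt));
    }
}
```

Keeping scheduling on the client side like this means the wire protocol does not change; only the number of spawned server processes does.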

@Shourya742 force-pushed the 2026-01-01-parallelize-proc-macro-expansion branch from bd5891e to 2a1c3d0 on January 3, 2026 06:19
@Shourya742 marked this pull request as ready for review on January 3, 2026 08:25
@rustbot added the S-waiting-on-review (Status: Awaiting review from the assignee but also interested parties) label on Jan 3, 2026
@Shourya742 force-pushed the 2026-01-01-parallelize-proc-macro-expansion branch from 3a1d2e0 to 84ef250 on January 3, 2026 23:35
@Shourya742 requested a review from Veykril on January 4, 2026 00:38

pub(crate) fn default_pool_size() -> usize {

Member

Let's make this configurable, similar to how cachePriming_numThreads is configurable (just for processes instead of threads), so

```diff
 #[derive(Serialize, Deserialize, Debug, Clone, PartialEq)]
 #[serde(rename_all = "snake_case")]
-pub enum NumThreads {
+pub enum NumProcesses {
     Physical,
-    Logical,
     #[serde(untagged)]
     Concrete(usize),
 }
```

Member

and let's make the default Concrete(1) for now so we can have people test this as an opt-in for a bit
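
A minimal sketch of how such a setting could resolve to a concrete pool size, assuming the `NumProcesses` enum from the suggestion above with `Concrete(1)` as the opt-in default; the `resolve` helper and the use of `std::thread::available_parallelism` are illustrative assumptions, not the actual config code (which may count physical cores differently):

```rust
// Cargo.toml (assumed): serde = { version = "1", features = ["derive"] }
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq)]
#[serde(rename_all = "snake_case")]
pub enum NumProcesses {
    Physical,
    #[serde(untagged)]
    Concrete(usize),
}

impl Default for NumProcesses {
    /// Opt-in for now: a single server process by default.
    fn default() -> Self {
        NumProcesses::Concrete(1)
    }
}

impl NumProcesses {
    /// Resolve the setting to a concrete pool size, never returning 0.
    /// `available_parallelism` is a std-only approximation used here for
    /// illustration; a real config might count physical cores instead.
    pub fn resolve(&self) -> usize {
        match self {
            NumProcesses::Physical => std::thread::available_parallelism()
                .map(|n| n.get())
                .unwrap_or(1),
            NumProcesses::Concrete(n) => (*n).max(1),
        }
    }
}
```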

@Shourya742 (Contributor, Author)

I implemented this, with one caveat: a few CLI commands don't instantiate Config. In those cases I didn't construct a Config just to fetch the process count and instead defaulted to 1.

@ChayimFriedman2
Contributor

This PR is based on requesting expansion from the least overloaded worker. I wonder, now that the protocol is bidirectional, can we instead make the proc macro server request "more work" from r-a? This will more evenly parallelize in case some expansion takes a long time.

@Shourya742
Contributor Author

> This PR is based on requesting expansion from the least overloaded worker. I wonder, now that the protocol is bidirectional, can we instead make the proc macro server request "more work" from r-a? This will more evenly parallelize in case some expansion takes a long time.

While working on the current pool design, I also considered a client-side approach closer to work stealing or availability-based scheduling. In this model, the client would wait until any worker becomes free and then dispatch the next request. This keeps scheduling centralized and fits naturally with the existing pool setup. I can add this in a follow-up.

If we instead move scheduling to the server side, things get more involved. The client would need a dedicated listener thread to receive "more work" signals from the servers and dispatch requests accordingly, along with a way to track request completion. One nice property of this approach is that it pushes the architecture toward a more async-style model: the client could fire-and-forget requests and later query the listener thread when it actually needs the result.
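
For illustration, a rough standard-library sketch of the availability-based variant: each worker advertises its inbox whenever it is idle, and the dispatcher hands the next request to whichever worker reports in first, which is effectively the same thing as a worker asking for more work. All names and types here are hypothetical, and the real protocol, error handling, and result routing are glossed over.

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical request/response types standing in for the real protocol.
struct ExpandRequest(String);
struct ExpandResult(String);

fn main() {
    let pool_size = 3;
    // Workers announce that they are free by sending a handle to their
    // own inbox here; the dispatcher only ever talks to idle workers.
    let (ready_tx, ready_rx) = mpsc::channel::<mpsc::Sender<ExpandRequest>>();
    let (result_tx, result_rx) = mpsc::channel::<ExpandResult>();

    for id in 0..pool_size {
        let ready_tx = ready_tx.clone();
        let result_tx = result_tx.clone();
        thread::spawn(move || {
            let (work_tx, work_rx) = mpsc::channel::<ExpandRequest>();
            loop {
                // "Give me more work": advertise the inbox whenever idle.
                if ready_tx.send(work_tx.clone()).is_err() {
                    break; // dispatcher is gone
                }
                let Ok(ExpandRequest(input)) = work_rx.recv() else { break };
                // A real worker would forward this to its server process.
                let _ = result_tx
                    .send(ExpandResult(format!("worker {id} expanded `{input}`")));
            }
        });
    }
    drop(ready_tx);
    drop(result_tx);

    // Dispatcher: hand each request to whichever worker reports in first.
    let requests = ["derive(Debug)", "derive(Serialize)", "derive(Clone)"];
    for req in requests {
        let free_worker = ready_rx.recv().expect("at least one worker is alive");
        free_worker
            .send(ExpandRequest(req.to_string()))
            .expect("worker accepted the request");
    }
    for _ in 0..requests.len() {
        println!("{}", result_rx.recv().unwrap().0);
    }
}
```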

@Veykril
Member

Veykril commented Jan 5, 2026

Do you mean the client should batch up multiple expansion requests? That makes things fairly complicated again, just like multiplexing the connection to allow multiple inflight expansions (due to proc-macros observing the process environment). I think parallelization should for now be done via multiple processes as that can live separately from the protocol and is easy enough to implement.

@Shourya742 requested a review from Veykril on January 5, 2026 09:33