
Note multiprocessing start method compatibility #529

Open
wants to merge 2 commits into master

Conversation


@Mark2000 Mark2000 commented Apr 4, 2023

Document findings from #362. @mkitti


codecov bot commented Apr 4, 2023

Codecov Report

Patch and project coverage are unchanged.

Comparison: base (69b734f) 85.22% vs. head (af85977) 85.22%.

Additional details and impacted files
@@           Coverage Diff           @@
##           master     #529   +/-   ##
=======================================
  Coverage   85.22%   85.22%           
=======================================
  Files          39       39           
  Lines        2342     2342           
=======================================
  Hits         1996     1996           
  Misses        346      346           


Member

mkitti commented Apr 4, 2023

@davidavdav could you also take a look?

@davidavdav

I think you could give this a (sub)heading Multiprocessing, and perhaps also mention the option to create a context for spawn.

Member

mkitti commented Apr 5, 2023

@MilesCranmer have you run into this as well?

Member

@mkitti mkitti left a comment

I made a few suggestions to incorporate some of @davidavdav's thoughts. I also made some copyediting suggestions to simplify the sentence structure.

docs/source/limitations.rst

multiprocessing.set_start_method('spawn')

once in your code prior to using the multiprocessing library.

Suggested change
once in your code prior to using the multiprocessing library.
once in your code prior to using the multiprocessing library.
Also, consider using the *spawn* or *forkserver* methods with ``multiprocessing.get_context``:
.. code:: python

    context = multiprocessing.get_context('forkserver')
    queue = context.Queue()
    process = context.Process(target=foo, args=(queue,))

@davidavdav does that sound right?
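For reference, the context-based approach suggested above can be exercised end to end. This is a minimal sketch, not the PR's actual docs text: the worker function `child` and its message are placeholders, and the *forkserver* method is only available on Unix.

```python
import multiprocessing

def child(queue):
    # Worker: report back to the parent through the queue.
    queue.put("hello from child")

if __name__ == "__main__":
    # A context scopes the start method to the objects created from it,
    # instead of setting it process-wide with set_start_method().
    context = multiprocessing.get_context("forkserver")
    queue = context.Queue()
    process = context.Process(target=child, args=(queue,))
    process.start()
    print(queue.get())  # prints "hello from child"
    process.join()
```

Because the start method lives on the context object, a library can pick *forkserver* (or *spawn*) for its own workers without clobbering the application's global setting.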

Comment on lines +73 to +76
is selected: *fork* (default on Unix) segfaults with `unknown
function` errors, while *spawn* (default on Windows and macOS) generally,
but not always, runs without memory allocation issues. To select *spawn*,
include the line
Suggested change
is selected: *fork* (default on Unix) segfaults with `unknown
function` errors, while *spawn* (default on Windows and macOS) generally,
but not always, runs without memory allocation issues. To select *spawn*,
include the line
is selected. *fork* (default on Unix) segfaults with `unknown
function` errors. *spawn* (default on Windows and macOS) generally,
but not always, runs without memory allocation issues. To select *spawn*,
include the line

Just using a period here simplifies the sentence structure. When does *spawn* run into memory allocation issues? Could you elaborate on that point?

Author

@Mark2000 Mark2000 commented Apr 5, 2023

I wish I could, but I haven't done extensive enough testing to say why it happens. I've just noted that it does occasionally happen in my use case. @davidavdav is this something you've also experienced?
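The pattern under discussion, selecting *spawn* once before any worker is created, can be sketched as a self-contained example. This is illustrative only; `square` is a placeholder worker, not code from the PR.

```python
import multiprocessing

def square(x):
    return x * x

if __name__ == "__main__":
    # set_start_method() may be called at most once per program and must
    # run before any Process, Pool, or Queue is created; calling it a
    # second time raises RuntimeError.
    multiprocessing.set_start_method("spawn")
    with multiprocessing.Pool(2) as pool:
        print(pool.map(square, [1, 2, 3]))  # prints [1, 4, 9]
```

Note that with *spawn* the child re-imports the main module, so the `if __name__ == "__main__":` guard is required to avoid recursively starting workers.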

Co-authored-by: Mark Kittisopikul <[email protected]>
@MilesCranmer
Collaborator

I have not tried to use Python multiprocessing with Julia. In PySR, all the multiprocessing is done on the Julia side, with the Python acting as a serial wrapper library. (I dynamically initialize processes with addprocs inside SymbolicRegression.jl - these then distribute work among themselves within Julia).

@MilesCranmer
Collaborator

The point about spawn/fork is revealing. Here's the difference, as explained on Stack Overflow:

There's a tradeoff between 3 [multiprocessing start methods][1]:

  1. fork is faster because it does a copy-on-write of the parent process's entire virtual memory including the initialized Python interpreter, loaded modules, and constructed objects in memory.

    But fork does not copy the parent process's threads. Thus locks (in memory) that in the parent process were held by other threads are stuck in the child without owning threads to unlock them, ready to cause a deadlock when code tries to acquire any of them. Also any native library with forked threads will be in a broken state.

    The copied Python modules and objects might be useful or they might needlessly bloat every forked child process.

    The child process also "inherits" OS resources like open file descriptors and open network ports. Those can also lead to problems but Python works around some of them.

    So fork is fast, unsafe, and maybe bloated.

    However these safety problems might not cause trouble depending on what the child process does.

  2. spawn starts a Python child process from scratch without the parent process's memory, file descriptors, threads, etc. Technically, spawn forks a duplicate of the current process, then the child immediately calls exec to replace itself with a fresh Python, then asks Python to load the target module and run the target callable.

    So spawn is safe, compact, and slower since Python has to load, initialize itself, read files, load and initialize modules, etc.

    However it might not be noticeably slower compared to the work that the child process does.

  3. forkserver forks a duplicate of the current Python process that trims down to approximately a fresh Python process. This becomes the "fork server" process. Then each time you start a child process, it asks the fork server to fork a child and run its target callable.

    Those child processes all start out compact and without stuck locks.

    forkserver is more complicated and not well documented. [Bojan Nikolic's blog post][2] explains more about forkserver and its secret set_forkserver_preload() method to preload some modules. Be wary of using an undocumented method, esp. before the [bug fix in Python 3.7.0][3].

    So forkserver is fast, compact, and safe, but it's more complicated and not well documented.

[The docs aren't great on all this so I've combined info from multiple sources and made some inferences. Do comment on any mistakes.]
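To make point 3 concrete, the fork-server machinery can be driven directly. A minimal sketch (Unix-only; the preloaded module list and the `job` worker are placeholders chosen for illustration):

```python
import multiprocessing

def job(queue):
    queue.put("done")

if __name__ == "__main__":
    # Ask the fork server to import these modules once, so every child it
    # forks starts with them already loaded (shared copy-on-write).
    multiprocessing.set_forkserver_preload(["json", "os"])
    ctx = multiprocessing.get_context("forkserver")
    queue = ctx.Queue()
    process = ctx.Process(target=job, args=(queue,))
    process.start()
    print(queue.get())  # prints "done"
    process.join()
```

Each `ctx.Process` here is forked from the trimmed fork-server process rather than from the full parent, which is what keeps the children compact and free of inherited locks.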
