concurrency bug fixes/improvements #4663

Open · wants to merge 2 commits into master
9 changes: 6 additions & 3 deletions ghcide/src/Development/IDE/LSP/LanguageServer.hs
@@ -34,6 +34,7 @@ import UnliftIO.Directory
import UnliftIO.Exception

import qualified Colog.Core as Colog
import Control.Exception (BlockedIndefinitelyOnMVar (..))
import Control.Monad.IO.Unlift (MonadUnliftIO)
import Control.Monad.Trans.Cont (evalContT)
import Development.IDE.Core.IdeConfiguration
@@ -265,11 +266,13 @@ runWithWorkerThreads recorder dbLoc f = evalContT $ do
(WithHieDbShield hiedb, threadQueue) <- runWithDb recorder dbLoc
liftIO $ f hiedb (ThreadQueue threadQueue sessionRestartTQueue sessionLoaderTQueue)

-- | Runs the action until it ends or until the given MVar is put.
-- | Runs the action until it ends, until the given MVar is put, or until the thread responsible for filling the MVar is dropped, in which case the MVar will never be filled.
-- The latter happens when the thread that handles the shutdown notification dies. Ideally, this should not rely on the RTS detecting the blocked MVar;
-- instead, the shutdown should *also* run in a finally block enclosing the handlers, in which case the BlockedIndefinitelyOnMVar exception would never
-- be thrown.
-- Rethrows any exceptions.
untilMVar :: MonadUnliftIO m => MVar () -> m () -> m ()
untilMVar mvar io = void $
waitAnyCancel =<< traverse async [ io , readMVar mvar ]
untilMVar mvar io = race_ (readMVar mvar `catch` \BlockedIndefinitelyOnMVar -> pure ()) io
Collaborator:

This seems a little round-about; is this really preferable over

Suggested change
untilMVar mvar io = race_ (readMVar mvar `catch` \BlockedIndefinitelyOnMVar -> pure ()) io
untilMVar mvar io = race_ (readMVar mvar) (io `finally` putMVar mvar ())

Also, I'm not sure I understand: is the io thread dying without putting the MVar a bug of its own that needs fixing? What does dying mean here? Does the thread crash for some reason?

Contributor Author:

The io is not the thing putting the MVar; the MVar is put by the shutdown notification. If none is sent but the connection dies anyway, the MVar gets GC'd and the thread that tries to read it gets a BlockedIndefinitelyOnMVar exception. That's why your proposed change wouldn't work.

Contributor Author:

The "real" fix is to also put the MVar in a bracket around the server's lifetime, so it gets filled even when the server dies.
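A minimal sketch of that idea, assuming a hypothetical `runServerLoop` standing in for the server's main action (these names are illustrative, not the actual HLS API):

```haskell
import Control.Concurrent.MVar (MVar, tryPutMVar)
import Control.Exception (finally)
import Control.Monad (void)

-- Hypothetical sketch: fill the shutdown MVar in a finally block around the
-- server loop, so untilMVar never has to rely on the RTS's deadlock detection.
withShutdownSignal :: MVar () -> IO () -> IO ()
withShutdownSignal shutdownVar runServerLoop =
  runServerLoop `finally` void (tryPutMVar shutdownVar ())
```

`tryPutMVar` rather than `putMVar` keeps the signal idempotent in case a shutdown notification already filled the MVar.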

Collaborator:

Maybe the server shouldn't crash but shut down gracefully if the connection is dropped?

Contributor Author:

I'm not sure it crashes. It may just be that the thread dies before it receives a shutdown notification. That's entirely possible if the client doesn't or can't implement a graceful shutdown.

Collaborator:

But the server can handle connection termination gracefully, right?

Contributor Author:

Yes. The thread just drops, which drops the putMVar too, which triggers a BlockedIndefinitelyOnMVar exception.

Collaborator:

Right, but can we avoid relying on the RTS for this case?

Contributor Author:

Well, I'm assuming that the io will also finish, so not relying on the RTS isn't a problem; it probably just finishes later most of the time. That's what I outlined above: the ideal fix is to hand the MVar to somewhere it can be put as part of some bracketing operation. But since that ideal fix offers only conceptual advantages, I don't know whether it's necessary for now.


cancelHandler :: (SomeLspId -> IO ()) -> LSP.Handlers (ServerM c)
cancelHandler cancelRequest = LSP.notificationHandler SMethod_CancelRequest $ \TNotificationMessage{_params=CancelParams{_id}} ->
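For readers unfamiliar with the RTS behaviour discussed above, here is a minimal, self-contained repro (plain GHC, not HLS code): the only thread that could ever put the MVar exits without doing so, the blocked reader becomes unreachable, and the RTS unblocks it with BlockedIndefinitelyOnMVar.

```haskell
import Control.Concurrent (forkIO, newEmptyMVar, readMVar)
import Control.Exception (BlockedIndefinitelyOnMVar (..), try)

main :: IO ()
main = do
  mvar <- newEmptyMVar
  -- Stand-in for the shutdown handler: it dies without ever putting the MVar.
  _ <- forkIO (pure ())
  result <- try (readMVar mvar)
  case result of
    Left BlockedIndefinitelyOnMVar -> putStrLn "reader unblocked by the RTS"
    Right ()                       -> putStrLn "mvar was put"
```

This is exactly the case the new `untilMVar` handles: catching the exception lets the `race_` finish normally and cancel the other action, instead of propagating the exception.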
15 changes: 8 additions & 7 deletions ghcide/src/Development/IDE/Main.hs
@@ -12,7 +12,7 @@ module Development.IDE.Main
) where

import Control.Concurrent.Extra (withNumCapabilities)
import Control.Concurrent.MVar (newEmptyMVar,
import Control.Concurrent.MVar (MVar, newEmptyMVar,
putMVar, tryReadMVar)
import Control.Concurrent.STM.Stats (dumpSTMStats)
import Control.Monad.Extra (concatMapM, unless,
@@ -318,9 +318,8 @@ defaultMain recorder Arguments{..} = withHeapStats (cmapWithPrio LogHeapStats re
ioT <- offsetTime
logWith recorder Info $ LogLspStart (pluginId <$> ipMap argsHlsPlugins)

ideStateVar <- newEmptyMVar
let getIdeState :: LSP.LanguageContextEnv Config -> FilePath -> WithHieDb -> Shake.ThreadQueue -> IO IdeState
getIdeState env rootPath withHieDb threadQueue = do
let getIdeState :: MVar IdeState -> LSP.LanguageContextEnv Config -> FilePath -> WithHieDb -> Shake.ThreadQueue -> IO IdeState
getIdeState ideStateVar env rootPath withHieDb threadQueue = do
t <- ioT
logWith recorder Info $ LogLspStartDuration t
sessionLoader <- loadSessionWithOptions (cmapWithPrio LogSession recorder) argsSessionLoadingOptions rootPath (tLoaderQueue threadQueue)
@@ -353,9 +352,9 @@ defaultMain recorder Arguments{..} = withHeapStats (cmapWithPrio LogHeapStats re
putMVar ideStateVar ide
pure ide

let setup = setupLSP (cmapWithPrio LogLanguageServer recorder) argsProjectRoot argsGetHieDbLoc (pluginHandlers plugins) getIdeState
let setup ideStateVar = setupLSP (cmapWithPrio LogLanguageServer recorder) argsProjectRoot argsGetHieDbLoc (pluginHandlers plugins) (getIdeState ideStateVar)
-- See Note [Client configuration in Rules]
onConfigChange cfg = do
onConfigChange ideStateVar cfg = do
-- TODO: this is nuts, we're converting back to JSON just to get a fingerprint
let cfgObj = J.toJSON cfg
mide <- liftIO $ tryReadMVar ideStateVar
@@ -368,7 +367,9 @@ defaultMain recorder Arguments{..} = withHeapStats (cmapWithPrio LogHeapStats re
modifyClientSettings ide (const $ Just cfgObj)
return [toNoFileKey Rules.GetClientSettings]

runLanguageServer (cmapWithPrio LogLanguageServer recorder) options inH outH argsDefaultHlsConfig argsParseConfig onConfigChange setup
do
ideStateVar <- newEmptyMVar
runLanguageServer (cmapWithPrio LogLanguageServer recorder) options inH outH argsDefaultHlsConfig argsParseConfig (onConfigChange ideStateVar) (setup ideStateVar)
dumpSTMStats
Check argFiles -> do
let dir = argsProjectRoot
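The Main.hs change threads `ideStateVar` explicitly: the MVar is created immediately before `runLanguageServer`, and both `setup` and `onConfigChange` receive it as an argument instead of capturing it from an enclosing scope. A minimal sketch of the same pattern, with illustrative stand-in names rather than the HLS API:

```haskell
import Control.Concurrent.MVar (MVar, newEmptyMVar, putMVar, tryReadMVar)

data IdeState' = IdeState'  -- stand-in for the real IdeState

-- Both callbacks take the MVar explicitly, so its lifetime is tied to the
-- single call site in main below.
setup :: MVar IdeState' -> IO ()
setup var = putMVar var IdeState'

onConfigChange :: MVar IdeState' -> IO ()
onConfigChange var = do
  mide <- tryReadMVar var
  case mide of
    Nothing -> putStrLn "config change arrived before the IdeState exists"
    Just _  -> putStrLn "config change applied"

main :: IO ()
main = do
  ideStateVar <- newEmptyMVar  -- created at the last possible moment
  setup ideStateVar
  onConfigChange ideStateVar
```

Passing the MVar explicitly makes the data flow visible in the types and keeps the variable's scope as small as the do block that runs the server.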