diff --git a/core/js/src/main/scala/cats/effect/IOApp.scala b/core/js/src/main/scala/cats/effect/IOApp.scala
index 928e14d883..c95c2189b8 100644
--- a/core/js/src/main/scala/cats/effect/IOApp.scala
+++ b/core/js/src/main/scala/cats/effect/IOApp.scala
@@ -77,7 +77,7 @@ import scala.util.Try
  *   }
  * }}}
  *
- * It is valid to define `val run` rather than `def run` because `IO` 's evaluation is lazy: it
+ * It is valid to define `val run` rather than `def run` because `IO`'s evaluation is lazy: it
  * will only run when the `main` method is invoked by the runtime.
  *
  * In the event that the process receives an interrupt signal (`SIGINT`) due to Ctrl-C (or any
@@ -121,7 +121,7 @@ import scala.util.Try
  *
  * However, with that said, there really is no substitute to benchmarking your own application.
  * Every application and scenario is unique, and you will always get the absolute best results
- * by performing your own tuning rather than trusting someone else's defaults. `IOApp` 's
+ * by performing your own tuning rather than trusting someone else's defaults. `IOApp`'s
  * defaults are very ''good'', but they are not perfect in all cases. One common example of this
  * is applications which maintain network or file I/O worker threads which are under heavy load
  * in steady-state operations. In such a performance profile, it is usually better to reduce the
diff --git a/core/jvm/src/main/scala/cats/effect/IOApp.scala b/core/jvm/src/main/scala/cats/effect/IOApp.scala
index 1844cf220b..6939dd9f0b 100644
--- a/core/jvm/src/main/scala/cats/effect/IOApp.scala
+++ b/core/jvm/src/main/scala/cats/effect/IOApp.scala
@@ -80,7 +80,7 @@ import java.util.concurrent.atomic.AtomicInteger
  *   }
  * }}}
  *
- * It is valid to define `val run` rather than `def run` because `IO` 's evaluation is lazy: it
+ * It is valid to define `val run` rather than `def run` because `IO`'s evaluation is lazy: it
  * will only run when the `main` method is invoked by the runtime.
  *
  * In the event that the process receives an interrupt signal (`SIGINT`) due to Ctrl-C (or any
@@ -124,7 +124,7 @@ import java.util.concurrent.atomic.AtomicInteger
  *
  * However, with that said, there really is no substitute to benchmarking your own application.
  * Every application and scenario is unique, and you will always get the absolute best results
- * by performing your own tuning rather than trusting someone else's defaults. `IOApp` 's
+ * by performing your own tuning rather than trusting someone else's defaults. `IOApp`'s
  * defaults are very ''good'', but they are not perfect in all cases. One common example of this
  * is applications which maintain network or file I/O worker threads which are under heavy load
  * in steady-state operations. In such a performance profile, it is usually better to reduce the
diff --git a/core/jvm/src/main/scala/cats/effect/IOPlatform.scala b/core/jvm/src/main/scala/cats/effect/IOPlatform.scala
index bb2f4876bb..7dd0115d38 100644
--- a/core/jvm/src/main/scala/cats/effect/IOPlatform.scala
+++ b/core/jvm/src/main/scala/cats/effect/IOPlatform.scala
@@ -57,7 +57,7 @@ abstract private[effect] class IOPlatform[+A] extends Serializable { self: IO[A]
    * The first line will run `program` for at most five seconds, interrupt the calculation, and
    * run the finalizers for as long as they need to complete. The second line will run `program`
    * for at most five seconds and then immediately release the latch, without interrupting
-   * `program` 's ongoing execution.
+   * `program`'s ongoing execution.
    *
    * In other words, this function probably doesn't do what you think it does, and you probably
    * don't want to use it outside of tests.
diff --git a/core/jvm/src/main/scala/cats/effect/unsafe/FiberMonitor.scala b/core/jvm/src/main/scala/cats/effect/unsafe/FiberMonitor.scala
index dfcf5c1bc8..d2990b44e0 100644
--- a/core/jvm/src/main/scala/cats/effect/unsafe/FiberMonitor.scala
+++ b/core/jvm/src/main/scala/cats/effect/unsafe/FiberMonitor.scala
@@ -36,10 +36,9 @@ import scala.concurrent.ExecutionContext
  *      `IO#evalOn`).
  *   1. Because `java.util.Collections.synchronizedMap` is a simple wrapper around any map which
  *      just synchronizes the access to the map through the built in JVM `synchronized`
- *      mechanism, we need several instances of these synchronized `WeakHashMap` s just to
- *      reduce contention between threads. A particular instance is selected using a thread
- *      local source of randomness using an instance of
- *      `java.util.concurrent.ThreadLocalRandom`.
+ *      mechanism, we need several instances of these synchronized `WeakHashMap`s just to reduce
+ *      contention between threads. A particular instance is selected using a thread local
+ *      source of randomness using an instance of `java.util.concurrent.ThreadLocalRandom`.
  */
 private[effect] sealed class FiberMonitor(
     // A reference to the compute pool of the `IORuntime` in which this suspended fiber bag
diff --git a/core/jvm/src/main/scala/cats/effect/unsafe/TimerHeap.scala b/core/jvm/src/main/scala/cats/effect/unsafe/TimerHeap.scala
index cb269e6f41..6d8c29355c 100644
--- a/core/jvm/src/main/scala/cats/effect/unsafe/TimerHeap.scala
+++ b/core/jvm/src/main/scala/cats/effect/unsafe/TimerHeap.scala
@@ -40,7 +40,7 @@ import java.util.concurrent.atomic.AtomicInteger
  *
  * In general, this heap is not threadsafe and modifications (insertion/removal) may only be
  * performed on its owner WorkerThread. The exception is that the callback value of nodes may be
- * `null` ed by other threads and published via data race.
+ * `null`ed by other threads and published via data race.
  *
  * Other threads may traverse the heap with the `steal` method during which they may `null` some
  * callbacks. This is entirely subject to data races.
diff --git a/core/jvm/src/main/scala/cats/effect/unsafe/WorkerThread.scala b/core/jvm/src/main/scala/cats/effect/unsafe/WorkerThread.scala
index 99397f8bd4..23491be46c 100644
--- a/core/jvm/src/main/scala/cats/effect/unsafe/WorkerThread.scala
+++ b/core/jvm/src/main/scala/cats/effect/unsafe/WorkerThread.scala
@@ -157,7 +157,7 @@ private[effect] final class WorkerThread[P <: AnyRef](
    * continues with the worker thread run loop.
    *
    * @param fiber
-   *   the fiber that `cede` s/`autoCede`s
+   *   the fiber that `cede`s/`autoCede`s
    */
   def reschedule(fiber: Runnable): Unit = {
     if ((cedeBypass eq null) && queue.isEmpty()) {
diff --git a/core/native/src/main/scala/cats/effect/IOApp.scala b/core/native/src/main/scala/cats/effect/IOApp.scala
index 27c0ce5525..c7bf849d8d 100644
--- a/core/native/src/main/scala/cats/effect/IOApp.scala
+++ b/core/native/src/main/scala/cats/effect/IOApp.scala
@@ -75,7 +75,7 @@ import scala.scalanative.meta.LinktimeInfo._
  *   }
  * }}}
  *
- * It is valid to define `val run` rather than `def run` because `IO` 's evaluation is lazy: it
+ * It is valid to define `val run` rather than `def run` because `IO`'s evaluation is lazy: it
  * will only run when the `main` method is invoked by the runtime.
  *
  * In the event that the process receives an interrupt signal (`SIGINT`) due to Ctrl-C (or any
@@ -119,7 +119,7 @@ import scala.scalanative.meta.LinktimeInfo._
  *
  * However, with that said, there really is no substitute to benchmarking your own application.
  * Every application and scenario is unique, and you will always get the absolute best results
- * by performing your own tuning rather than trusting someone else's defaults. `IOApp` 's
+ * by performing your own tuning rather than trusting someone else's defaults. `IOApp`'s
  * defaults are very ''good'', but they are not perfect in all cases. One common example of this
  * is applications which maintain network or file I/O worker threads which are under heavy load
  * in steady-state operations. In such a performance profile, it is usually better to reduce the
diff --git a/core/shared/src/main/scala/cats/effect/IO.scala b/core/shared/src/main/scala/cats/effect/IO.scala
index f77a357f7f..20d4e49e5e 100644
--- a/core/shared/src/main/scala/cats/effect/IO.scala
+++ b/core/shared/src/main/scala/cats/effect/IO.scala
@@ -186,7 +186,7 @@ sealed abstract class IO[+A] private () extends IOPlatform[A] {
   /**
    * Materializes any sequenced exceptions into value space, where they may be handled.
    *
-   * This is analogous to the `catch` clause in `try` /`catch`, being the inverse of
+   * This is analogous to the `catch` clause in `try`/`catch`, being the inverse of
    * `IO.raiseError`. Thus:
    *
    * {{{
@@ -1310,7 +1310,7 @@ object IO extends IOCompanionPlatform with IOLowPriorityImplicits with TuplePara
    * The effect returns `Either[Option[IO[Unit]], A]` where:
    *   - right side `A` is an immediate result of computation (callback invocation will be
    *     dropped);
-   *   - left side `Option[IO[Unit]] ` is an optional finalizer to be run in the event that the
+   *   - left side `Option[IO[Unit]]` is an optional finalizer to be run in the event that the
    *     fiber running `asyncCheckAttempt(k)` is canceled.
    *
    * For example, here is a simplified version of `IO.fromCompletableFuture`:
diff --git a/core/shared/src/main/scala/cats/effect/ResourceApp.scala b/core/shared/src/main/scala/cats/effect/ResourceApp.scala
index d72e4f7a70..cb2003f7bc 100644
--- a/core/shared/src/main/scala/cats/effect/ResourceApp.scala
+++ b/core/shared/src/main/scala/cats/effect/ResourceApp.scala
@@ -45,7 +45,7 @@ import cats.syntax.all._
  * [[https://tpolecat.github.io/skunk/ Skunk]] and [[https://http4s.org Http4s]], but otherwise
  * it represents a relatively typical example of what the main class for a realistic Cats Effect
  * application might look like. Notably, the whole thing is enclosed in `Resource`, which is
- * `use` d at the very end. This kind of pattern is so common that `ResourceApp` defines a
+ * `use`d at the very end. This kind of pattern is so common that `ResourceApp` defines a
  * special trait which represents it. We can rewrite the above example:
  *
  * {{{
diff --git a/core/shared/src/main/scala/cats/effect/SyncIO.scala b/core/shared/src/main/scala/cats/effect/SyncIO.scala
index 7edc5b97dd..7ea67fd474 100644
--- a/core/shared/src/main/scala/cats/effect/SyncIO.scala
+++ b/core/shared/src/main/scala/cats/effect/SyncIO.scala
@@ -87,7 +87,7 @@ sealed abstract class SyncIO[+A] private () extends Serializable {
   /**
    * Materializes any sequenced exceptions into value space, where they may be handled.
    *
-   * This is analogous to the `catch` clause in `try` /`catch`, being the inverse of
+   * This is analogous to the `catch` clause in `try`/`catch`, being the inverse of
    * `SyncIO.raiseError`. Thus:
    *
    * {{{
diff --git a/core/shared/src/main/scala/cats/effect/unsafe/IORuntimeBuilder.scala b/core/shared/src/main/scala/cats/effect/unsafe/IORuntimeBuilder.scala
index 60b7a94bf5..f0e44169bf 100644
--- a/core/shared/src/main/scala/cats/effect/unsafe/IORuntimeBuilder.scala
+++ b/core/shared/src/main/scala/cats/effect/unsafe/IORuntimeBuilder.scala
@@ -20,7 +20,7 @@ package unsafe
 import scala.concurrent.ExecutionContext
 
 /**
- * Builder object for creating custom `IORuntime` s. Useful for creating [[IORuntime]] based on
+ * Builder object for creating custom `IORuntime`s. Useful for creating [[IORuntime]] based on
  * the default one but with some wrappers around execution contexts or custom shutdown hooks.
  */
 final class IORuntimeBuilder protected (
diff --git a/kernel/shared/src/main/scala/cats/effect/kernel/GenConcurrent.scala b/kernel/shared/src/main/scala/cats/effect/kernel/GenConcurrent.scala
index d4fd60b7c5..d39d90d47f 100644
--- a/kernel/shared/src/main/scala/cats/effect/kernel/GenConcurrent.scala
+++ b/kernel/shared/src/main/scala/cats/effect/kernel/GenConcurrent.scala
@@ -37,7 +37,7 @@ trait GenConcurrent[F[_], E] extends GenSpawn[F, E] {
    * and cache the result. If `get` is sequenced multiple times `fa` will only be evaluated
    * once.
    *
-   * If all `get` s are canceled prior to `fa` completing, it will be canceled and evaluated
+   * If all `get`s are canceled prior to `fa` completing, it will be canceled and evaluated
    * again the next time `get` is sequenced.
    */
   def memoize[A](fa: F[A]): F[F[A]] = {
diff --git a/kernel/shared/src/main/scala/cats/effect/kernel/MonadCancel.scala b/kernel/shared/src/main/scala/cats/effect/kernel/MonadCancel.scala
index 614452c14f..52ad190d90 100644
--- a/kernel/shared/src/main/scala/cats/effect/kernel/MonadCancel.scala
+++ b/kernel/shared/src/main/scala/cats/effect/kernel/MonadCancel.scala
@@ -227,7 +227,7 @@ trait MonadCancel[F[_], E] extends MonadError[F, E] {
    * Indicates the default "root scope" semantics of the `F` in question. For types which do
    * ''not'' implement auto-cancelation, this value may be set to `CancelScope.Uncancelable`,
    * which behaves as if all values `F[A]` are wrapped in an implicit "outer" `uncancelable`
-   * which cannot be polled. Most `IO` -like types will define this to be `Cancelable`.
+   * which cannot be polled. Most `IO`-like types will define this to be `Cancelable`.
    */
   def rootCancelScope: CancelScope
 
diff --git a/kernel/shared/src/main/scala/cats/effect/kernel/Ref.scala b/kernel/shared/src/main/scala/cats/effect/kernel/Ref.scala
index 31cbe1ecb8..b57c75fc81 100644
--- a/kernel/shared/src/main/scala/cats/effect/kernel/Ref.scala
+++ b/kernel/shared/src/main/scala/cats/effect/kernel/Ref.scala
@@ -321,7 +321,7 @@ object Ref {
     F.delay(unsafe(a))
 
   /**
-   * Creates an instance focused on a component of another `Ref` 's value. Delegates every get
+   * Creates an instance focused on a component of another `Ref`'s value. Delegates every get
    * and modification to underlying `Ref`, so both instances are always in sync.
    *
    * Example:
diff --git a/kernel/shared/src/main/scala/cats/effect/kernel/Resource.scala b/kernel/shared/src/main/scala/cats/effect/kernel/Resource.scala
index 887efb7999..ac8ca621f3 100644
--- a/kernel/shared/src/main/scala/cats/effect/kernel/Resource.scala
+++ b/kernel/shared/src/main/scala/cats/effect/kernel/Resource.scala
@@ -443,7 +443,7 @@ sealed abstract class Resource[F[_], +A] extends Serializable {
     Resource.makeCase(F.unit)((_, ec) => f(ec)).flatMap(_ => this)
 
   /**
-   * Given a `Resource`, possibly built by composing multiple `Resource` s monadically, returns
+   * Given a `Resource`, possibly built by composing multiple `Resource`s monadically, returns
    * the acquired resource, as well as a cleanup function that takes an
    * [[Resource.ExitCase exit case]] and runs all the finalizers for releasing it.
    *
@@ -530,7 +530,7 @@
   }
 
   /**
-   * Given a `Resource`, possibly built by composing multiple `Resource` s monadically, returns
+   * Given a `Resource`, possibly built by composing multiple `Resource`s monadically, returns
    * the acquired resource, as well as an action that runs all the finalizers for releasing it.
    *
    * If the outer `F` fails or is interrupted, `allocated` guarantees that the finalizers will
diff --git a/testkit/shared/src/main/scala/cats/effect/testkit/TestControl.scala b/testkit/shared/src/main/scala/cats/effect/testkit/TestControl.scala
index 48d3795bdb..61b779a841 100644
--- a/testkit/shared/src/main/scala/cats/effect/testkit/TestControl.scala
+++ b/testkit/shared/src/main/scala/cats/effect/testkit/TestControl.scala
@@ -37,9 +37,9 @@ import java.util.concurrent.atomic.AtomicReference
  * [[TestControl.executeEmbed]]).
  *
  * In other words, `TestControl` is sort of like a "handle" to the runtime internals within the
- * context of a specific `IO` 's execution. It makes it possible for users to manipulate and
+ * context of a specific `IO`'s execution. It makes it possible for users to manipulate and
  * observe the execution of the `IO` under test from an external vantage point. It is important
- * to understand that the ''outer'' `IO` s (e.g. those returned by the [[tick]] or [[results]]
+ * to understand that the ''outer'' `IO`s (e.g. those returned by the [[tick]] or [[results]]
  * methods) are ''not'' running under the test control environment, and instead they are meant
  * to be run by some outer runtime. Interactions between the outer runtime and the inner runtime
  * (potentially via mechanisms like [[cats.effect.std.Queue]] or
@@ -204,7 +204,7 @@ final class TestControl[A] private (
    * Analogous to, though critically not the same as, running an [[IO]] on a single-threaded
    * production runtime.
    *
-   * This function will terminate for `IO` s which deadlock ''asynchronously'', but any program
+   * This function will terminate for `IO`s which deadlock ''asynchronously'', but any program
    * which runs in a loop without fully suspending will cause this function to run indefinitely.
    * Also note that any `IO` which interacts with some external asynchronous scheduler (such as
    * NIO) will be considered deadlocked for the purposes of this runtime.
@@ -288,8 +288,8 @@
   /**
    * Executes a given [[IO]] under fully mocked runtime control. Produces a `TestControl` which
    * can be used to manipulate the mocked runtime and retrieve the results. Note that the outer
-   * `IO` (and the `IO` s produced by the `TestControl`) do ''not'' evaluate under mocked
-   * runtime control and must be evaluated by some external harness, usually some test framework
+   * `IO` (and the `IO`s produced by the `TestControl`) do ''not'' evaluate under mocked runtime
+   * control and must be evaluated by some external harness, usually some test framework
    * integration.
    *
    * A simple example (returns an `IO` which must, itself, be run) using MUnit assertion syntax: