This is the seventh of several posts describing the evolution of `scala.concurrent.Future` in Scala 2.12.x. For the previous post, click here.
## Prepare for Deprecation
`ExecutionContext.prepare` has been deprecated without replacement (at this time). It was ill-specified, and it was too easy to forget to call it, to not know when to call it, or to call it more times than needed.

(If you have ideas for how to propagate context across asynchronous boundaries, or want to participate in coming up with a replacement, I’d like to hear from you! :smile:)
## Missing `BlockContext.defaultBlockContext`
I’d like to think that `scala.concurrent.BlockContext` is well-known, but I know for a fact that it isn’t. `BlockContext` is the mechanism that hooks `blocking {}` blocks into the `ExecutionContext` which executes the code, allowing it to take action to prevent deadlocks or starvation.
With `ForkJoinPool`-based `ExecutionContext`s it will, for instance, hook into the `ManagedBlocker` functionality, which spawns additional threads to take care of existing work and prevent stalls and unbounded starvation.

If no `BlockContext` is installed, a default one is used. Previously it was impossible to get at that instance from outside the Scala Standard Library, and so `BlockContext.defaultBlockContext` has been added.
This is almost exclusively needed if you write your own `ExecutionContext` implementation, or if you want to override the behavior of the currently installed `BlockContext` with the default behavior, as in:
```scala
BlockContext.withBlockContext(BlockContext.defaultBlockContext) {
  someMethodWhichUsesBlocking() // Will use the default BlockContext
}

// vs.

someMethodWhichUsesBlocking() // Will use the currently installed BlockContext

// For reference:
def someMethodWhichUsesBlocking(): Unit = blocking {
  println("foo")
}
```
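To make the routing concrete, here is a minimal sketch (the object and the `countingContext` name are mine, not part of the standard library) that installs a custom `BlockContext`, then temporarily restores the default one via `BlockContext.defaultBlockContext`:

```scala
import scala.concurrent.{BlockContext, CanAwait, blocking}

object BlockContextDemo {
  // Hypothetical BlockContext that counts the blocking{} sections routed to it.
  def run(): Int = {
    var count = 0
    val countingContext = new BlockContext {
      override def blockOn[T](thunk: => T)(implicit permission: CanAwait): T = {
        count += 1 // record that a blocking{} section passed through us
        thunk
      }
    }
    BlockContext.withBlockContext(countingContext) {
      blocking { () } // dispatched to countingContext.blockOn
      BlockContext.withBlockContext(BlockContext.defaultBlockContext) {
        blocking { () } // dispatched to the default BlockContext instead
      }
    }
    count
  }
}
```

Calling `BlockContextDemo.run()` returns 1: only the outer `blocking {}` went through the counting context, because the inner one was handled by the default `BlockContext`.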
Perhaps not the «coolest» of features, but when you need it, it is now available!
## Hardening `ExecutionContext.global`
A common issue with `ExecutionContext(.Implicits).global` when used with `blocking {}` was that the number of extra threads was virtually unbounded. In combination with nested `blocking {}` calls, each triggering the spawning of additional `ForkJoinWorkerThread`s, this meant that things could go horribly wrong: think `OutOfMemoryError`-wrong.
For that reason, three things have been put in place:

- `ExecutionContext(.Implicits).global` now has a property that controls the maximum number of threads that may exist concurrently as a result of managed blocking. The default is 256 threads, but it can be changed by configuring the following system property: `scala.concurrent.context.maxExtraThreads`. This means that `global` will have at most `scala.concurrent.context.maxThreads` + `scala.concurrent.context.maxExtraThreads` concurrent threads.
- Nested `blocking {}` blocks no longer trigger the creation of additional extra threads.
- Thanks to Jessica Kerr we also improved the thread names for `global`. The new format is: `scala-execution-context-global-${Thread.getId}`.
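As a quick sketch, the new naming can be observed by asking a `global` worker thread for its name (this assumes Scala 2.12 or later; the `${Thread.getId}` suffix varies per run, so no exact value is shown):

```scala
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._

object GlobalThreadNameDemo {
  // Runs a task on `global` and returns the worker thread's name,
  // which on Scala 2.12 follows the scala-execution-context-global-${Thread.getId} format.
  def workerName(): String =
    Await.result(Future(Thread.currentThread.getName), 10.seconds)
}
```

(If you want to tune the blocking cap mentioned above, that is done at JVM startup, e.g. with `-Dscala.concurrent.context.maxExtraThreads=...`.)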
## Bonus: Refactors
I apologize in advance if this is «boring», but I feel it is an important thing! Having `transform` and `transformWith` in `Future` (at last!) meant that I was able to encode most other combinators directly on top of them. That means the `scala.concurrent.Future` trait does not create any `Promise` directly, which means the implementor of `Future` is in full control over which implementation of `Promise` will be used.
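To illustrate the idea (this is a sketch of mine, not the actual library source), here is how combinators like `map` and `flatMap` can be expressed purely in terms of `transform` and `transformWith`:

```scala
import scala.concurrent.{ExecutionContext, Future}
import scala.util.{Failure, Success}

object TransformSketch {
  // `map` on top of `transform`, which operates on the Try result:
  def mapVia[T, S](f: Future[T])(g: T => S)(implicit ec: ExecutionContext): Future[S] =
    f.transform(_.map(g)) // Try#map applies g on Success, passes Failure through

  // `flatMap` on top of `transformWith`:
  def flatMapVia[T, S](f: Future[T])(g: T => Future[S])(implicit ec: ExecutionContext): Future[S] =
    f.transformWith {
      case Success(v) => g(v)
      case Failure(t) => Future.failed(t) // propagate the original failure
    }
}
```

Note that neither encoding needs to construct a `Promise`, which is exactly what lets the `Future` implementation keep that choice to itself.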
## Bonus: Self Control
Nobody would try to complete a `Promise` with its own `Future`, right? Right?! :disappointed:

Soooo, self-checks were added to `completeWith` and `tryCompleteWith` to guard against cycles like:
```scala
val p = Promise[Foo]
p.completeWith(p.future) // OHNOES
// or
p.tryCompleteWith(p.future) // OOOOPS
```
So, now doing that is a no-op rather than waiting for a miracle to happen.
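A small sketch of what the no-op behavior means in practice (object and method names are mine, for illustration):

```scala
import scala.concurrent.Promise
import scala.util.Success

object SelfCompleteDemo {
  // Completing a Promise with its own Future is silently ignored,
  // so the promise stays pending and can still be completed normally.
  def run(): Option[Int] = {
    val p = Promise[Int]()
    p.completeWith(p.future)       // previously a subtle bug; now a no-op
    val stillPending = !p.isCompleted
    p.success(42)                  // a real completion still goes through
    if (stillPending) p.future.value.collect { case Success(v) => v } else None
  }
}
```

Here `SelfCompleteDemo.run()` yields `Some(42)`: the self-referential `completeWith` left the promise untouched, and the subsequent `success(42)` completed it as usual.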
Click here for the next part in this blog series.
Cheers,
√