future: A Future API for R

Introduction

The purpose of the future package is to provide a very simple and uniform way of evaluating R expressions asynchronously using various resources available to the user.

In programming, a future is an abstraction for a value that may be available at some point in the future. The state of a future can either be unresolved or resolved. As soon as it is resolved, the value is available instantaneously. If the value is queried while the future is still unresolved, the current process is blocked until the future is resolved. It is possible to check whether a future is resolved or not without blocking. Exactly how and when futures are resolved depends on what strategy is used to evaluate them. For instance, a future can be resolved using a "lazy" strategy, which means it is resolved only when the value is requested. Another approach is an "eager" strategy, which means that it starts to resolve the future as soon as it is created. Yet other strategies may be to resolve futures asynchronously, for instance, by evaluating expressions concurrently on a compute cluster.

Here is an example illustrating how the basics of futures work. First, consider the following code snippet that uses plain R code:

> v <- {
+   cat("Resolving...\n")
+   3.14
+ }
Resolving...
> v
[1] 3.14

This snippet assigns the value of an expression to variable v and then prints v. Moreover, when the expression for v is evaluated, a message is also printed.

Here is the same code snippet modified to use futures instead:

> library("future")
> v %<-% {
+   cat("Resolving...\n")
+   3.14
+ }
Resolving...
> v
[1] 3.14

The difference is in how v is constructed; with plain R we use <- whereas with futures we use %<-%.

So why are futures useful? Because we can choose to evaluate the future expression in a separate R process asynchronously by simply switching settings as:

> library("future")
> plan(multiprocess)
> v %<-% {
+   cat("Resolving...\n")
+   3.14
+ }
> v
[1] 3.14

With asynchronous futures, the current/main R process does not block, which means it is available for further processing while the futures are being resolved in separate processes running in the background. In other words, futures provide a simple yet powerful construct for parallel and/or distributed processing in R.

Now, if you cannot be bothered to read all the nitty-gritty details about futures, but just want to try them out, then skip to the end to play with the Mandelbrot demo using both parallel and non-parallel evaluation.

Implicit or Explicit Futures

Futures can be created either implicitly or explicitly. In the introductory example above we used implicit futures created via the v %<-% { expr } construct. An alternative is explicit futures using the f <- future({ expr }) and v <- value(f) constructs. With these, our example could alternatively be written as:

> library("future")
> f <- future({
+   cat("Resolving...\n")
+   3.14
+ })
Resolving...
> v <- value(f)
> v
[1] 3.14

Either style of future construct works equally(*) well. The implicit style is most similar to how regular R code is written. In principle, all you have to do is replace <- with %<-% to turn the assignment into a future assignment. On the other hand, this simplicity can also be deceiving, particularly when asynchronous futures are being used. In contrast, the explicit style makes it much clearer that futures are being used, which lowers the risk of mistakes and better communicates the design to others reading your code.

(*) There are cases where %<-% cannot be used without some (small) modifications. We will return to this in Section 'Constraints when using Implicit Futures' near the end of this document.

To summarize, for explicit futures, we use:

  • f <- future({ expr }) - creates a future
  • v <- value(f) - gets the value of the future (blocks if not yet resolved)

For implicit futures, we use:

  • v %<-% { expr } - creates a future and a promise to its value

To keep it simple, we will use the implicit style in the rest of this document, but everything discussed will also apply to explicit futures.

Controlling How Futures are Resolved

The future package implements the following types of futures:

Name            OSes          Description
synchronous:                  non-parallel:
  eager         all           immediate, synchronous evaluation in the current R session (default)
  lazy          all           lazy evaluation - happens only if the value is requested
  transparent   all           for debugging (eager w/ early signaling and w/out local)
asynchronous:                 parallel:
  multiprocess  all           multicore iff supported, otherwise multisession
  multisession  all           background R sessions (on current machine)
  multicore     not Windows   forked R processes (on current machine)
  cluster       all           external R sessions on current, local, and/or remote machines
  remote        all           simple access to remote R sessions

The future package is designed such that support for additional strategies can be implemented as well. For instance, the future.BatchJobs package provides futures for all types of cluster functions ("backends") that the BatchJobs package supports. Specifically, futures for evaluating R expressions via job schedulers such as Slurm, TORQUE/PBS, Oracle/Sun Grid Engine (SGE) and Load Sharing Facility (LSF) are also available.
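
For instance, with the future.BatchJobs package attached, futures can be resolved via such a scheduler. The following is only a sketch: the strategy name batchjobs_slurm and the need for a BatchJobs template file are assumptions to be verified against the future.BatchJobs documentation.

## A sketch (not evaluated here): resolve futures as Slurm jobs.
## Assumes the future.BatchJobs package is installed and that a suitable
## BatchJobs *.tmpl template for Slurm has been configured.
library("future.BatchJobs")
plan(batchjobs_slurm)
x %<-% {
  Sys.info()[["nodename"]]   # evaluated on whichever compute node Slurm allots
}
x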

By default, future expressions are evaluated instantaneously and synchronously (in the current R session). This evaluation strategy is referred to as "eager" and we refer to futures using this strategy as "eager futures". In this section, we will go through each of these strategies and discuss what they have in common and how they differ.

Consistent Behavior Across Futures

Before going through each of the different future strategies, it is probably helpful to clarify the objectives of the Future API (as defined by the future package). When programming with futures, it should not really matter which future strategy is used to execute the code. This is because we cannot really know what computational resources the user has access to, so the choice of evaluation strategy should be in the hands of the user and not the developer. In other words, the code should not make any assumptions about the type of futures used, e.g. synchronous or asynchronous.

One of the design goals of the Future API was to encapsulate any differences such that all types of futures appear to work the same, regardless of whether the expressions are evaluated locally in the current R session or in remote R sessions on the other side of the world. Another obvious advantage of a consistent API and behavior across different types of futures is that it helps while prototyping. Typically, one uses eager evaluation while building up a script and, later, when the script is fully developed, turns on asynchronous processing.
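
For example, a typical workflow is sketched below; slow_task() is a hypothetical placeholder for some expensive computation of your own.

## Prototype using synchronous (eager) evaluation ...
plan(eager)
x %<-% { slow_task() }   # 'slow_task()' is a hypothetical placeholder

## ... and once the script works, turn on asynchronous processing by
## changing only the plan -- the future code itself stays the same.
plan(multiprocess)
x %<-% { slow_task() }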

Because of this, the defaults of the different strategies are such that the results and side effects of evaluating a future expression are as similar as possible. More specifically, the following is true for all futures:

  • All evaluation is done in a local environment (i.e. local({ expr })) so that assignments do not affect the calling environment. This is natural when evaluating in an external R process, but is also enforced when evaluating in the current R session.

  • When a future is constructed, global variables are identified. For asynchronous evaluation, globals are exported to the R process/session that will evaluate the future expression. For lazy futures, globals are "frozen" (cloned to a local environment of the future). Also, to protect against accidentally exporting overly large objects, there is a built-in assertion that the total size of all globals is below a given threshold (controllable via an option, cf. help("future.options"); see the sketch after this list). If the threshold is exceeded, an informative error is thrown.

  • Future expressions are only evaluated once. As soon as the value (or an error) has been collected it will be available for all succeeding requests.
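
As a concrete illustration of the size assertion mentioned above, the threshold can be adjusted via an R option before the future is created. The option name used below, future.globals.maxSize, is believed to be the one documented in help("future.options"), but verify the exact name and unit for your version of the package.

## A sketch: raise the limit on the total size of exported globals to
## 500 MiB; the value is assumed to be in bytes -- see help("future.options").
options(future.globals.maxSize = 500 * 1024^2)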

Here is an example illustrating that all assignments are done to a local environment:

> plan(eager)
> a <- 1
> x %<-% {
+     a <- 2
+     2 * a
+ }
> x
[1] 4
> a
[1] 1

Now we are ready to explore the different future strategies.

Synchronous Futures

Synchronous futures are resolved one after another and most commonly by the R process that creates them. When a synchronous future is being resolved it blocks the main process until resolved. There are two main types of synchronous futures in the future package, eager and lazy futures, which are described next.

Eager Futures

Eager futures are the default unless otherwise specified. They were designed to behave as similarly as possible to regular R evaluation while still fulfilling the Future API and its behaviors. Here is an example illustrating their properties:

> plan(eager)
> pid <- Sys.getpid()
> pid
[1] 14641
> a %<-% {
+     pid <- Sys.getpid()
+     cat("Resolving 'a' ...\n")
+     3.14
+ }
Resolving 'a' ...
> b %<-% {
+     rm(pid)
+     cat("Resolving 'b' ...\n")
+     Sys.getpid()
+ }
Resolving 'b' ...
> c %<-% {
+     cat("Resolving 'c' ...\n")
+     2 * a
+ }
Resolving 'c' ...
> b
[1] 14641
> c
[1] 6.28
> a
[1] 3.14
> pid
[1] 14641

Since eager evaluation is taking place, each of the three futures is resolved instantaneously at the moment it is created. Note also how pid in the calling environment, which was assigned the process ID of the current process, is neither overwritten nor removed. This is because futures are evaluated in a local environment. Since synchronous (uni-)processing is used, future b is resolved by the main R process (still in a local environment), which is why the values of b and pid are the same.

Lazy Futures

A lazy future evaluates its expression only if its value is queried. Evaluation can also be triggered when the future is checked for being resolved or not. Here is the above example when using lazy evaluation:

> plan(lazy)
> pid <- Sys.getpid()
> pid
[1] 14641
> a %<-% {
+     pid <- Sys.getpid()
+     cat("Resolving 'a' ...\n")
+     3.14
+ }
> b %<-% {
+     rm(pid)
+     cat("Resolving 'b' ...\n")
+     Sys.getpid()
+ }
> c %<-% {
+     cat("Resolving 'c' ...\n")
+     2 * a
+ }
Resolving 'a' ...
> b
Resolving 'b' ...
[1] 14641
> c
Resolving 'c' ...
[1] 6.28
> a
[1] 3.14
> pid
[1] 14641

As previously, variable pid is unaffected because all evaluation is done in a local environment. More interestingly, future a is no longer evaluated at the moment it is created, but instead when it is needed the first time, which happens when future c is created. This is because a is identified as a global variable that needs to be captured ("frozen" to a == 3.14) in order to set up future c. Later, when c (the value of future c) is queried, a has already been resolved, so only the expression for future c is evaluated and 6.28 is obtained. Moreover, future b is, just like future a, evaluated only when it is needed the first time, which happens here when b is printed. As with eager evaluation, lazy evaluation is also synchronous, which is why b and pid have the same value. Finally, notice how a is not re-evaluated when we query its value again. In fact, with implicit futures, variables a, b and c hold regular values as soon as their futures have been resolved.

Comment: Lazy evaluation is already used by R itself. Arguments are passed to functions using lazy evaluation. It is also possible to assign variables using lazy evaluation using delayedAssign(), but contrary to lazy futures this function does not freeze globals. For more information, see help("delayedAssign", package="base").
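
To illustrate the difference, here is a minimal base-R sketch; contrary to a lazy future, the delayed assignment below picks up whatever value a has when the promise is eventually forced, not the value it had when the promise was created (a lazy future would have "frozen" a at 1, giving 2).

a <- 1
delayedAssign("x", 2 * a)   # lazy, but 'a' is *not* frozen
a <- 100
x                           # [1] 200 -- uses the current value of 'a'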

Asynchronous Futures

Next, we will turn to asynchronous futures, which are futures that are resolved in the background. By design, these futures are non-blocking; after being created, the calling process is available for other tasks, including creating additional futures. It only blocks when it tries to access the value of a future that is not yet resolved, or when it tries to create another asynchronous future while all available R processes are busy serving other futures.

Multisession Futures

We start with multisession futures because they are supported by all operating systems. A multisession future is evaluated in a background R session running on the same machine as the calling R process. Here is our example with multisession evaluation:

> plan(multisession)
> pid <- Sys.getpid()
> pid
[1] 14641
> a %<-% {
+     pid <- Sys.getpid()
+     cat("Resolving 'a' ...\n")
+     3.14
+ }
> b %<-% {
+     rm(pid)
+     cat("Resolving 'b' ...\n")
+     Sys.getpid()
+ }
> c %<-% {
+     cat("Resolving 'c' ...\n")
+     2 * a
+ }
> b
[1] 14662
> c
[1] 6.28
> a
[1] 3.14
> pid
[1] 14641

The first thing we observe is that the values of a, c and pid are the same as previously. However, we notice that b is different from before. This is because future b is evaluated in a different R process and therefore returns a different process ID. Another difference is that the messages generated by cat() are no longer displayed. This is because the output goes to the background sessions and not to the calling session.

When multisession evaluation is used, the package launches a set of R sessions in the background that will serve multisession futures by evaluating their expressions as they are created. If all background sessions are busy serving other futures, the creation of the next multisession future is blocked until a background session becomes available again. The total number of background processes launched is decided by the value of availableCores(), e.g.

> availableCores()
mc.cores+1 
         3 

This particular result tells us that the mc.cores option was set such that we are allowed to use a total of 3 processes, including the main process. In other words, with these settings, there will be 2 background processes serving the multisession futures. availableCores() is also agile to various options and system environment variables. For instance, if a compute cluster scheduler is used (e.g. TORQUE/PBS or Slurm), it sets specific environment variables specifying the number of cores allotted to the job; availableCores() acknowledges these as well. If nothing else is specified, all available cores on the machine will be utilized, cf. parallel::detectCores(). For more details, please see help("availableCores", package="future").

Multicore Futures

On operating systems where R supports forking of processes, which is basically all operating systems except Windows, an alternative to spawning R sessions in the background is to fork the existing R process. Forking is considered faster than working with a separate R session running in the background. One reason is that the overhead of exporting large globals to a background session can be greater than the overhead of forking. To use multicore futures, we specify:

plan(multicore)

The only real difference between multicore and multisession futures is that any output written (to standard output or standard error) by a multicore process is immediately displayed in the calling process. Other than this, the behavior of multicore evaluation is very similar to that of multisession evaluation.

Just like for multisession futures, the maximum number of parallel processes running will be decided by availableCores(), since in both cases the evaluation is done on the local machine.

Multiprocess Futures

Sometimes we do not know whether multicore futures are supported or not, but it might still be that we would like to write platform-independent scripts or instructions that work everywhere. In such cases we can specify that we want to use "multiprocess" futures as in:

plan(multiprocess)

A multiprocess future is not a formal class of futures by itself, but rather a convenient alias for one of the other two: when it is specified, multicore evaluation is used if supported, otherwise multisession evaluation.
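
To check which of the two will actually be used on the current platform, the helper supportsMulticore() in the future package can be consulted. The commentary below is a sketch.

plan(multiprocess)
## TRUE on systems where process forking is supported (e.g. Linux, macOS),
## in which case multiprocess resolves to multicore; FALSE on Windows,
## in which case it resolves to multisession.
supportsMulticore()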

Cluster Futures

Cluster futures evaluate expressions on an ad-hoc cluster (as implemented by the parallel package). For instance, assume you have access to three nodes, n1, n2 and n3; you can then use them for asynchronous evaluation as follows:

> plan(cluster, workers = c("n1", "n2", "n3"))
> pid <- Sys.getpid()
> pid
[1] 14641
> a %<-% {
+     pid <- Sys.getpid()
+     cat("Resolving 'a' ...\n")
+     3.14
+ }
> b %<-% {
+     rm(pid)
+     cat("Resolving 'b' ...\n")
+     Sys.getpid()
+ }
> c %<-% {
+     cat("Resolving 'c' ...\n")
+     2 * a
+ }
> b
[1] 14684
> c
[1] 6.28
> a
[1] 3.14
> pid
[1] 14641

Just as for the other asynchronous evaluation strategies, the output from cat() is not displayed on the current/calling machine.

Any type of cluster that parallel::makeCluster() can create can be used for cluster futures. For instance, the above cluster can be explicitly set up as:

cl <- parallel::makeCluster(c("n1", "n2", "n3"))
plan(cluster, workers=cl)

Also, it is considered good style to shut down the cluster when it is no longer needed, that is, to call parallel::stopCluster(cl). However, the cluster will shut itself down if the main process is terminated, which is what happens in the first example, where the cluster is created internally. For more information on how to set up and manage such clusters, see help("makeCluster", package="parallel").
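
Putting these pieces together, a complete set-up and tear-down could look like the following sketch (the node names are placeholders):

## Set up the cluster explicitly ...
cl <- parallel::makeCluster(c("n1", "n2", "n3"))
plan(cluster, workers = cl)

## ... use cluster futures as usual ...
x %<-% { Sys.getpid() }
x

## ... and shut the cluster down when it is no longer needed.
parallel::stopCluster(cl)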

Note that with proper firewall and router configurations (e.g. port forwarding) and with automatic authentication setup (e.g. SSH key pairs), there is nothing preventing us from using the same approach for using a cluster of remote machines.
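
As a sketch, given password-less SSH access to a remote machine, something like the following could be used; the hostname is of course a placeholder. The 'remote' strategy listed in the table above provides further conveniences for this scenario.

## A sketch only: 'remote.server.org' is a placeholder hostname and
## password-less SSH access to it is assumed.
plan(cluster, workers = "remote.server.org")
x %<-% { Sys.info()[["nodename"]] }
x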

Different Strategies for Different Futures

Sometimes one may want to use an alternative evaluation strategy for a specific future. Although one can use old <- plan(new) and afterward plan(old) to temporarily switch strategies, a simpler approach is to use the %plan% operator, e.g.

> plan(eager)
> pid <- Sys.getpid()
> pid
[1] 14641
> a %<-% {
+     Sys.getpid()
+ }
> b %<-% {
+     Sys.getpid()
+ } %plan% multiprocess
> c %<-% {
+     Sys.getpid()
+ } %plan% multiprocess
> a
[1] 14641
> b
[1] 14701
> c
[1] 14702

As seen by the different process IDs, future a is evaluated eagerly using the same process as the calling environment whereas the other two are evaluated using multiprocess futures.

However, using different plans for individual futures this way has the drawback of hard-coding the evaluation strategy. Doing so may prevent some users from using your script or package because they do not have sufficient resources, and it may also prevent users with plenty of resources from utilizing them because you assumed a less powerful set of hardware. Because of this, we recommend against using %plan% for anything other than interactive prototyping.
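
If you do need to change the strategy temporarily while prototyping interactively, the old <- plan(new) idiom mentioned above is a cleaner alternative; here is a minimal sketch.

## Temporarily use multisession futures ...
old <- plan(multisession)
x %<-% { Sys.getpid() }
x

## ... and then restore whichever strategy was active before.
plan(old)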

Nested Futures and Evaluation Topologies

So far we have discussed what can be referred to as a "flat topology" of futures, that is, all futures are created in and assigned to the same environment. However, there is nothing stopping us from using a "nested topology" of futures, where one set of futures may, in turn, create another set of futures internally, and so on.

For instance, here is an example of two "top" futures (a and b) that use multiprocess evaluation, where the second future (b) in turn uses two internal futures:

> plan(multiprocess)
> pid <- Sys.getpid()
> a %<-% {
+     cat("Resolving 'a' ...\n")
+     Sys.getpid()
+ }
> b %<-% {
+     cat("Resolving 'b' ...\n")
+     b1 %<-% {
+         cat("Resolving 'b1' ...\n")
+         Sys.getpid()
+     }
+     b2 %<-% {
+         cat("Resolving 'b2' ...\n")
+         Sys.getpid()
+     }
+     c(b.pid = Sys.getpid(), b1.pid = b1, b2.pid = b2)
+ }
> pid
[1] 14641
> a
[1] 14703
> b
 b.pid b1.pid b2.pid 
 14704  14704  14704 

By inspecting the process IDs, we see that in total three different processes are involved in resolving the futures: the main R process (pid 14641) and the two processes used by a (pid 14703) and b (pid 14704). However, the two futures (b1 and b2) that are nested within b are evaluated by the same R process as b. This is because nested futures use eager evaluation unless otherwise specified. There are a few reasons for this, but the main one is that it protects us from spawning a large number of background processes by mistake, e.g. via recursive calls.

To specify a different type of evaluation topology, other than the first level of futures being resolved by multiprocess evaluation and the second level by eager evaluation, we can provide a list of evaluation strategies to plan(). First, the same evaluation strategies as above can be explicitly specified as:

plan(list(multiprocess, eager))

We would actually get the same behavior if we tried to use multiple levels of multiprocess evaluation:

> plan(list(multiprocess, multiprocess))
[...]
> pid
[1] 14641
> a
[1] 14705
> b
 b.pid b1.pid b2.pid 
 14706  14706  14706 

The reason for this is, again, to protect us from launching more processes than the machine can support. Internally, this is done by setting mc.cores to zero (sic!) so that no additional parallel processes can be launched. This is the case for both multisession and multicore evaluation.

Continuing, if we start off with eager evaluation and then use multiprocess evaluation for any nested futures, we get:

> plan(list(eager, multiprocess))
[...]
Resolving 'a' ...
Resolving 'b' ...
> pid
[1] 14641
> a
[1] 14641
> b
 b.pid b1.pid b2.pid 
 14641  14707  14708 

which clearly shows that a and b are resolved in the calling process (pid 14641) whereas the two nested futures (b1 and b2) are resolved in two separate R processes (pids 14707 and 14708).

Having said this, it is indeed possible to use nested multiprocess evaluation strategies, if we explicitly specify (read force) the number of cores available at each level. In order to do this we need to "tweak" the default settings, which can be done as follows:

> plan(list(tweak(multiprocess, workers = 3), tweak(multiprocess, 
+     workers = 3)))
[...]
> pid
[1] 14641
> a
[1] 14709
> b
 b.pid b1.pid b2.pid 
 14710  14711  14712 

First, we see that both a and b are resolved in different processes (pids 14709 and 14710) than the calling process (pid 14641). Second, the two nested futures (b1 and b2) are resolved in yet two other R processes (pids 14711 and 14712).

To clarify, when we set up the two levels of multiprocess evaluation, we specified that a total of 3 processes may be used at each level. We chose three processes, not just two, because one is always consumed by the calling process, leaving two for the asynchronous futures. This is why a and b are resolved by two different background processes; if we had allowed only two cores at the top level, they would have been resolved by the same background process. The same applies to the second level of futures. This brings us to another point: when we use asynchronous futures, there is nothing per se that prevents us from using the main process to keep doing computations while checking in on the asynchronous futures now and then to see if they are resolved.

For more details on working with nested futures and different evaluation strategies at each level, see Vignette 'Futures in R: Future Topologies'.

Checking A Future without Blocking

It is possible to check whether a future has been resolved or not without blocking. This can be done using the resolved(f) function, which takes an explicit future f as input. If we work with implicit futures (as in all the examples above), we can use f <- futureOf(a) to retrieve the explicit future from an implicit one. For example,

> plan(multiprocess)
> a %<-% {
+     cat("Resolving 'a' ...")
+     Sys.sleep(2)
+     cat("done\n")
+     Sys.getpid()
+ }
> cat("Waiting for 'a' to be resolved ...\n")
Waiting for 'a' to be resolved ...
> f <- futureOf(a)
> count <- 1
> while (!resolved(f)) {
+     cat(count, "\n")
+     Sys.sleep(0.2)
+     count <- count + 1
+ }
1 
2 
3 
4 
5 
> cat("Waiting for 'a' to be resolved ... DONE\n")
Waiting for 'a' to be resolved ... DONE
> a
[1] 14713

Failed Futures

Sometimes the future is not what you expected. If an error occurs while evaluating a future, the error is propagated and thrown as an error in the calling environment when the future value is requested. For example,

> plan(lazy)
> a %<-% {
+     cat("Resolving 'a' ...\n")
+     stop("Whoops!")
+     42
+ }
> cat("Everything is still ok although we have created a future that will fail.\n")
Everything is still ok although we have created a future that will fail.
> a
Resolving 'a' ...
Error in eval(expr, envir, enclos) : Whoops!

The error is thrown each time the value is requested; that is, trying to get the value again will generate the same error:

> a
Error in eval(expr, envir, enclos) : Whoops!
In addition: Warning message:
restarting interrupted promise evaluation

To see the list of calls (evaluated expressions) that lead up to the error, we can use the backtrace() function(*) on the future, i.e.

> backtrace(a)
[[1]]
eval(quote({
    cat("Resolving 'a' ...\n")
    stop("Whoops!")
    42
}), new.env())
[[2]]
eval(expr, envir, enclos)
[[3]]
stop("Whoops!")

(*) The commonly used traceback() does not provide relevant information in the context of futures.

Globals

Whenever an R expression is to be evaluated asynchronously (in parallel) or via lazy evaluation, global objects have to be identified and passed to the evaluator. They need to be passed exactly as they were at the time the future was created, because, for a lazy future, globals may otherwise change between when it is created and when it is resolved. For asynchronous processing, the reason globals need to be identified is so that they can be exported to the process that evaluates the future.

The future package tries to automate these tasks as far as possible. It does this with the help of the globals package. If a global variable is identified, it is captured and made available to the evaluating process. Moreover, if a global is defined in a package, that global is not exported; instead, the corresponding package is attached when the future is evaluated. This not only better reflects the setup of the main R session, but it also minimizes the need for exporting globals, which saves not only memory but also time and bandwidth, especially when remote compute nodes are used.

Finally, it should be clarified that identifying globals from static code inspection alone is a challenging problem. There will always be corner cases where automatic identification of globals fails, so that either false globals are identified (less of a concern) or some of the true globals are missed (which will result in a runtime error or possibly incorrect results). Vignette 'Futures in R: Common Issues with Solutions' provides examples of common cases and explains how to avoid them, as well as how to help the package identify globals or ignore falsely identified ones.
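
As a hedged illustration (not taken from the vignette), one way to help the package when a true global cannot be found by static code inspection is to specify it explicitly via the globals argument of future(). The exact forms this argument accepts may differ between package versions, so treat the following as a sketch and verify against help("future").

## 'a' is referenced only indirectly via get(), so static code inspection
## cannot identify it as a global; here it is listed explicitly.
## The 'globals' argument is assumed to accept a character vector of
## variable names -- verify against your version of the package.
a <- 3.14
f <- future({ 2 * get("a") }, globals = "a")
value(f)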

Constraints when using Implicit Futures

There is one limitation with implicit futures that does not exist for explicit ones. Because an explicit future is just like any other object in R it can be assigned anywhere/to anything. For instance, we can create several of them in a loop and assign them to a list, e.g.

> plan(multiprocess)
> f <- list()
> for (ii in 1:3) {
+     f[[ii]] <- future({
+         Sys.getpid()
+     })
+ }
> v <- lapply(f, FUN = value)
> str(v)
List of 3
 $ : int 14714
 $ : int 14715
 $ : int 14716

This is not possible when using implicit futures, because the %<-% assignment operator cannot be used in all cases where the regular <- assignment operator can. It can only be used to assign future values to environments (including the calling environment), much like how assign(name, value, envir) works. However, we can assign implicit futures to environments using named indices, e.g.

> plan(multiprocess)
> v <- new.env()
> for (name in c("a", "b", "c")) {
+     v[[name]] %<-% {
+         Sys.getpid()
+     }
+ }
> v <- as.list(v)
> str(v)
List of 3
 $ a: int 14717
 $ b: int 14718
 $ c: int 14719

Here as.list(v) blocks until all futures in the environment v have been resolved. Then their values are collected and returned as a regular list.

If numeric indices are required, then list environments can be used. List environments, which are implemented by the listenv package, are regular environments with customized subsetting operators making it possible to index them much like how lists can be indexed. By using list environments where we otherwise would use lists, we can also assign implicit futures to list-like objects using numeric indices. For example,

> library("listenv")
> plan(multiprocess)
> v <- listenv()
> for (ii in 1:3) {
+     v[[ii]] %<-% {
+         Sys.getpid()
+     }
+ }
> v <- as.list(v)
> str(v)
List of 3
 $ : int 14720
 $ : int 14721
 $ : int 14722

As previously, as.list(v) blocks until all futures are resolved.

Demos

To see a live illustration of how different types of futures are evaluated, run the Mandelbrot demo of this package. First, try it with eager evaluation,

library("future")
plan(eager)
demo("mandelbrot", package="future", ask=FALSE)

which closely imitates how the script would run if futures were not used. Then try the same using lazy evaluation,

plan(lazy)
demo("mandelbrot", package="future", ask=FALSE)

and see if you can notice the difference in how and when statements are evaluated. You may also try multiprocess evaluation, which calculates the different Mandelbrot planes using parallel R processes running in the background. Try,

plan(multiprocess)
demo("mandelbrot", package="future", ask=FALSE)

This will use multicore processing if you are on a system where R supports process forking, otherwise (such as on Windows) it will use multisession processing.

Finally, if you have access to multiple machines you can try to set up a cluster of workers and use them, e.g.

plan(cluster, workers=c("n2", "n5", "n6", "n6", "n9"))
demo("mandelbrot", package="future", ask=FALSE)

Contributing

The goal of this package is to provide a standardized and unified API for using futures in R. What you are seeing right now is an early but sincere attempt to achieve this goal. If you have comments or ideas on how to improve the 'future' package, I would love to hear about them. The preferred way to get in touch is via the GitHub repository, where you also find the latest source code. I am also open to contributions and collaborations of any kind.

Installation

R package future is available on CRAN and can be installed in R as:

install.packages('future')

Pre-release version

To install the pre-release version that is available in branch develop, use:

source('http://callr.org/install#HenrikBengtsson/future@develop')

This will install the package from source.
