The JiTCSDE module

Note that some relevant information can be found in the common JiTC*DE documentation. This includes installation instructions, compiler issues and optimisation, general performance considerations, how to implement network dynamics and conditional statements, and a small FAQ.

Introduction

JiTCSDE (just-in-time compilation for stochastic differential equations) is a standalone Python implementation of the adaptive integration method proposed by Rackauckas and Nie [RN17], which in turn employs Rößler-type stochastic Runge–Kutta methods [R10]. It can handle both Itō and Stratonovich SDEs, converting the latter internally. JiTCSDE is designed in analogy to JiTCODE (which is handled very similarly to SciPy’s ODE (scipy.integrate.ode)): It takes iterables (or generator functions or dictionaries) of symbolic expressions, translates them to C code, compiles them and an integrator wrapped around them on the fly, and allows you to operate this integrator from Python. Symbolic expressions are mostly handled by SymEngine, SymPy’s compiled-backend-to-be (see SymPy vs. SymEngine for details).

This approach has the following advantages:

  • Speed boost through compilation Evaluating the derivative and the core operations of the Runge–Kutta integration happens in compiled C code and is thus very efficient.
  • Speed boost through symbolic optimisation If your derivative is automatically generated by some routine, you can simplify it symbolically to boost the speed. In fact, blatant optimisations such as \(y·(x-x)=0\) are done on the fly by SymEngine (see the sketch after this list). This is for example interesting if you want to simulate dynamics on a sparse network, as non-existent links do not contribute to the evaluation of the derivative during the integration.
  • Symbolic interface You can enter your differential equations almost like you would on paper. Also, if you are working with SymPy or SymEngine anyway – e.g., to calculate points of zero drift –, you do not need to bother much with translating your equations.

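For illustration, a minimal sketch of the on-the-fly simplification mentioned above (the expression is deliberately contrived; only the printed result matters):

from jitcsde import y
import symengine

x = symengine.Symbol("x")
expr = y(0)*(x-x)
print(expr)   # prints 0, as SymEngine simplifies the product on creation
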
If compilation fails to work for whatever reason, pure Python functions can be employed as a fallback (which is much slower, however).

A brief mathematical background

This documentation assumes that the stochastic differential equation (SDE) you want to solve is:

\[\mathrm{d} y = f(t,y) \mathrm{d} t + g(t,y) ⊙ \mathrm{d} W(t)\]

where \(y\) and \(\mathrm{d} W(t)\) are vectors and \(f\) and \(g\) are vector-valued functions, all of the same dimension, and \(⊙\) denotes the component-wise product of vectors.

Rackauckas and Nie’s method [RN17] basically works like an adaptive Runge–Kutta integrator for ordinary differential equations (ODEs): It employs two stochastic Runge–Kutta integrators and uses the difference between them to estimate the error of the integration. These methods are intertwined to minimise the additional computational effort required to estimate the error. The step size is adapted to keep the estimated error below a user-determined threshold. To avoid a bias through rejected steps, the respective noise is stored and reused employing a Brownian bridge. If the noise is additive, i.e., if g does not depend on y, a simpler algorithm is used.
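
As a mathematical aside (this is the standard Brownian-bridge formula, not a statement about implementation details): if the Wiener process has already been fixed to the values \(W(t_0)\) and \(W(t_1)\), then for a time \(s\) with \(t_0 < s < t_1\), each component of \(W(s)\) is drawn from the normal distribution

\[W(s) \sim \mathcal{N}\left( W(t_0) + \frac{s-t_0}{t_1-t_0}\bigl(W(t_1)-W(t_0)\bigr),\; \frac{(t_1-s)(s-t_0)}{t_1-t_0} \right).\]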

A simple example

Suppose we want to integrate the Lorenz oscillator, each of whose components is subject to a diffusion amounting to the fraction \(p\) of the respective component, i.e.:

\[f(y) = \begin{pmatrix} σ (y_1-y_0)\\ y_0 (ρ-y_2)-y_1\\ y_0 y_1 - β y_2 \end{pmatrix}, \qquad g(y) = \begin{pmatrix} p y_0 \\ p y_1 \\ p y_2 \end{pmatrix}.\]

First we do some importing and define the constants, which we want to be \(ρ = 28\), \(σ = 10\), \(β =\frac{8}{3}\), and \(p=0.1\):

from jitcsde import y, jitcsde
import numpy
import symengine

ρ = 28
σ = 10
β = symengine.Rational(8,3)
p = 0.1

Amongst our imports was the symbol for the state (y), which has to be used to write down the differential equation such that JiTCSDE can process it. (For an explicitly time-dependent differential equation, it is also possible to import t.) Using this, we can write down the drift factor \(f\) as a list of expressions:

f = [
	σ * (y(1)-y(0)),
	y(0)*(ρ-y(2)) - y(1),
	y(0)*y(1) - β*y(2)
	]

For the diffusion factor \(g\), we need the same, but due to \(g\)’s regularity, we can employ a list comprehension:

g = [ p*y(i) for i in range(3) ]

We can then initiate JiTCSDE:

SDE = jitcsde(f,g)

We want the initial condition to be random and the integration to start at \(t = 0\).

initial_state = numpy.random.random(3)
SDE.set_initial_value(initial_state,0.0)
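
If we want the noise realisation, and thus the results, to be reproducible, we can additionally fix the seed of the random-number generator at this point (see set_seed in the command reference; the seed value here is arbitrary):

SDE.set_seed(42)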

Finally, we can perform the actual integration. In our case, we integrate for 100 time units with a sampling rate of 0.1 time units. integrate returns the state after integration, which we collect in the list data. Afterwards, we use numpy.savetxt to store this to the file timeseries.dat.

data = []
for time in numpy.arange(0.0, 100.0, 0.1):
	data.append( SDE.integrate(time) )
numpy.savetxt("timeseries.dat", data)

Taking everything together, our code is:

from jitcsde import y, jitcsde
import numpy
import symengine

ρ = 28
σ = 10
β = symengine.Rational(8,3)
p = 0.1

f = [
	σ * (y(1)-y(0)),
	y(0)*(ρ-y(2)) - y(1),
	y(0)*y(1) - β*y(2)
	]

g = [ p*y(i) for i in range(3) ]

SDE = jitcsde(f,g)

initial_state = numpy.random.random(3)
SDE.set_initial_value(initial_state,0.0)

data = []
for time in numpy.arange(0.0, 100.0, 0.1):
	data.append( SDE.integrate(time) )
numpy.savetxt("timeseries.dat", data)
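
As an aside, f_sym and g_sym may also be given as dictionaries mapping the dynamical variables to the respective expressions (see the command reference). A sketch of the drift term in this form:

f = {
	y(0): σ * (y(1)-y(0)),
	y(1): y(0)*(ρ-y(2)) - y(1),
	y(2): y(0)*y(1) - β*y(2),
	}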

Jumps

If you wish to have jumps on top of the Brownian noise (and thus obtain a jump–diffusion process), you can do this using the extension jitcsde_jump. It draws the waiting time between jumps as well as the value of the jump from a distribution that you specify. The jumps are positioned precisely, i.e., the dynamics is integrated up to the time of a jump, the jump is applied, and the integration is continued afterwards.

Note that this functionality is implemented purely in Python, which makes it very flexible, but may also slow things down when the jump rate is high in comparison to the integration step (not to be confused with the sampling step); this, however, should not occur in typical applications. Also note that this only works for Itō SDEs.

As an example, suppose that we want to add jumps to the noisy Lorenz oscillator from A simple example. These shall have exponentially distributed waiting times (i.e., they form a Poisson process) with a scale parameter of 1.0. A function sampling such waiting times (inter-jump intervals) is:

def IJI(time,state):
	return numpy.random.exponential(1.0)

Note that since our waiting times are neither state- nor time-dependent, we do not use the time and state arguments. Next, we want the jumps to only apply to the last component, being normally distributed with zero mean and the current amplitude of this component as a standard deviation. A function producing such a jump is:

def jump(time,state):
	return numpy.array([
			0.0,
			0.0,
			numpy.random.normal(0.0,abs(state[2]))
		])

Finally, we initialise the integrator using jitcsde_jump instead of jitcsde with the previously defined functions as additional arguments:

from jitcsde import jitcsde_jump

SDE = jitcsde_jump(IJI,jump,f,g)

Everything else remains unchanged. See the sources for the full example.

Networks and large equations

JiTCSDE is specifically designed to be able to handle large stochastic differential equations, as they arise, e.g., in networks. The caveats, tools, and tricks when doing this are the same as for JiTCODE, so please refer to its documentation, in particular the sections on implementing network dynamics and handling very large differential equations.
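
As a minimal sketch (with hypothetical size, coupling strength, and local dynamics), a mean-field-coupled ensemble of noisy units employing the helpers argument (see the command reference below) might look like this:

import symengine
from jitcsde import y, jitcsde

n = 100                                  # hypothetical network size
c = 0.2                                  # hypothetical coupling strength
mean = symengine.Symbol("mean")          # helper symbol for the mean field
helpers = [ ( mean, sum( y(i) for i in range(n) )/n ) ]

# hypothetical local dynamics: linear relaxation plus mean-field coupling
f = [ -y(i) + c*(mean-y(i)) for i in range(n) ]
g = [ symengine.Rational(1,10) for i in range(n) ]   # additive noise

SDE = jitcsde( f, g, helpers=helpers, n=n )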

Command reference

y(i)

the symbol for the state that must be used to define the differential equation. It is a function and the integer argument denotes the component. You may just as well define an analogous function directly with SymEngine or SymPy, but using this function is the best way to get the most of future versions of JiTCSDE, in particular avoiding incompatibilities. You can import a SymPy variant from the submodule sympy_symbols instead (see SymPy vs. SymEngine for details).

t

the symbol for time for defining the differential equation. If your differential equation has no explicit time dependency (“autonomous system”), you do not need this. You may just as well define an analogous symbol directly with SymEngine or SymPy, but using this function is the best way to get the most of future versions of JiTCSDE, in particular avoiding incompatibilities. You can import a SymPy variant from the submodule sympy_symbols instead (see SymPy vs. SymEngine for details).
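
For example, a hypothetical drift with an explicit sinusoidal driving would use t like this (a sketch; the dynamics and noise strength are made up for illustration):

from jitcsde import y, t, jitcsde
import symengine

# hypothetical one-dimensional system with a sinusoidal driving
f = [ -y(0) + symengine.sin(t) ]
g = [ symengine.Rational(1,10) ]

SDE = jitcsde(f,g)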

exception UnsuccessfulIntegration

This exception is raised when the integrator cannot meet the accuracy and step-size requirements. If you want to know the exact state of your system before the integration fails or similar, catch this exception.
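
For example, to inspect the last valid state before a failure, one might write (a sketch using the t and y_dict properties documented below):

from jitcsde import UnsuccessfulIntegration

try:
	state = SDE.integrate(100.0)
except UnsuccessfulIntegration:
	# report the integrator’s time and state at the point of failure
	print("integration failed at t =", SDE.t)
	print(SDE.y_dict)
	raise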

test(omp=True, sympy=True)

Runs a quick simulation to test whether:

  • a compiler is available and can be interfaced by Setuptools,
  • OMP libraries are available and can be accessed,
  • SymPy is available.

The latter two tests can be deactivated with the respective argument. This is not a full software test but rather a quick sanity check of your installation. If successful, this function just finishes without any message.

The main class

class jitcsde(f_sym=(), g_sym=(), helpers=None, g_helpers='auto', n=None, additive=None, ito=True, control_pars=(), verbose=True, module_location=None)
Parameters:
f_sym : iterable of symbolic expressions or generator function yielding symbolic expressions or dictionary

If an iterable or generator function, the i-th element is the i-th component of the value of the SDE’s drift term \(f(t,y)\). If a dictionary, it has to map the dynamical variables to their derivatives, and the dynamical variables must be y(0), y(1), …

g_sym : iterable of symbolic expressions or generator function yielding symbolic expressions or dictionary

If an iterable or generator function, the i-th element is the i-th component of the value of the SDE’s diffusion term \(g(t,y)\). If a dictionary, it has to map the dynamical variables to their derivatives, and the dynamical variables must be y(0), y(1), …

helpers : list of length-two iterables, each containing a symbol and an expression

Each helper is a variable that will be calculated before evaluating the drift and diffusion terms and can be used in their computation. The first component of the tuple is the helper’s symbol as referenced in the drift and diffusion terms or other helpers, the second component describes how to compute it from t, y and other helpers. This is for example useful to realise a mean-field coupling, where the helper could look like (mean, sum(y(i) for i in range(100))/100). (See the JiTCODE documentation for an example.)

g_helpers : "auto" (default), "same", or like helpers
  • If "auto", JiTCSDE will automatically determine which helpers are needed for f and g. The only drawback of this is that it may take some time for larger differential equations.
  • If "same", helpers will be used for both f and g as it is.
  • If this is a list of helpers (or empty list), helpers will be used only for calculating f and g_helpers for g.
n : integer

Length of f_sym and g_sym, i.e., the dimension of the system. While JiTCSDE can easily determine this itself (and will, if necessary), this may take some time if f_sym is a generator function and n is large. Take care that this value is correct – if it isn’t, you will not get a helpful error message.

additive : None or boolean

Whether the noise term is additive, i.e., g_sym is independent of the state (y). In this case a simpler, faster integrator can be used. While JiTCSDE can easily determine this itself (and will, if necessary), this may take some time if n is large. If you incorrectly set this to True, you will not get a helpful error message.

ito : boolean

Whether the SDE is formulated in Itō or Stratonovich calculus. In the latter case, the SDE will be converted to Itō calculus, as this is what the integrator requires. Note that this conversion may be inefficient for large differential equations with helpers.

control_pars : list of symbols

Each symbol corresponds to a control parameter that can be used when defining the equations and set after compilation with set_parameters. Using this makes sense if you need to do a parameter scan with short integrations for each parameter and you are spending a considerable amount of time compiling (see the sketch after this parameter list).

verbose : boolean

Whether JiTCSDE shall give progress reports on the processing steps.

module_location : string

location of a module file from which functions are to be loaded (see save_compiled). If you use this, you need not give f_sym and g_sym as arguments, but in this case you must give n. Also note that the integrator may lack some functionalities, depending on the arguments you provide.
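
A minimal sketch of using control parameters, assuming a hypothetical one-dimensional system with a single parameter a:

import symengine
from jitcsde import y, jitcsde

a = symengine.Symbol("a")          # hypothetical control parameter
f = [ -a*y(0) ]
g = [ symengine.Rational(1,10) ]

SDE = jitcsde( f, g, control_pars=[a] )
SDE.set_initial_value( [1.0], 0.0 )
SDE.set_parameters(2.0)            # assign a = 2.0 after compilation
state = SDE.integrate(1.0)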

y_dict

The current state of the system as a dictionary mapping dynamical variables to their current value. Note that if you use this often, you may want to use self.y instead for efficiency.

t
Returns:
time : float
The current time of the integrator.
reset_integrator(self)

Resets the integrator, forgetting all stored noise (and waiting times for jitcsde_jump) and forcing re-initiation when it is needed next.

set_initial_value(self, initial_value, time=0.0)

Sets the initial value and starting time of the integration. The initial value can either be an iterable of numbers or a dictionary that maps dynamical variables to their initial value.
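
For example, the following two calls are equivalent ways of initialising the three-dimensional system from the tutorial (a sketch; the values are arbitrary):

SDE.set_initial_value( [1.0, 2.0, 3.0], 0.0 )
SDE.set_initial_value( {y(0): 1.0, y(1): 2.0, y(2): 3.0}, 0.0 )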

set_seed(self, seed=None)

Sets the seed used for random-number generation. Use this if you want to have reproducible conditions. Whenever the integrator is (re)initialised, this seed is used. If you do not call this method or call it with None as an argument, the seed is chosen by other means (depending on which backend and random-number generator is used).

generate_lambdas(self)

Explicitly initiates a purely Python-based integrator.

compile_C(self, simplify=None, do_cse=False, numpy_rng=False, chunk_size=100, extra_compile_args=None, extra_link_args=None, verbose=False, modulename=None, omp=False)

translates the derivative to C code using SymEngine’s C-code printer. For detailed information on many of the arguments and other ways to tweak the compilation, read these notes.

Parameters:
simplify : boolean

Whether the derivative should be simplified (with ratio=1.0) before translating to C code. The main reason why you could want to disable this is if your derivative is already optimised and so large that simplifying takes a considerable amount of time. If None, this will be automatically disabled for n>10.

do_cse : boolean

Whether SymPy’s common-subexpression detection should be applied before translating to C code. It is almost always better to let the compiler do this (unless you want to set the compiler optimisation to -O2 or lower). As this requires all entries of f and g at once, it may void advantages gained from using generator functions as an input. Also, this feature uses SymPy and not SymEngine.

numpy_rng : boolean

Whether numpy.random.normal shall be explicitly employed for generating random numbers. This is less efficient and mainly exists for testing purposes to ensure that the random numbers are the same as when using the Python backend. Note that the alternative is still based on the same code as NumPy’s random-number generator (until somebody changes it) and should produce the same results. Also note that details in the arithmetic realisation may still cause tiny differences in the results from the two backends, which can then be magnified by the butterfly effect.

chunk_size : integer

If the number of instructions in the final C code exceeds this number, it will be split into chunks of this size. See Handling very large differential equations on why this is useful and how to best choose this value. If smaller than 1, no chunking will happen.

extra_compile_args : iterable of strings
extra_link_args : iterable of strings

Arguments to be handed to the C compiler or linker, respectively.

verbose : boolean

Whether the compiler commands shall be shown. This is the same as Setuptools’ verbose setting.

modulename : string or None

The name used for the compiled module.

omp : pair of iterables of strings or boolean

What compiler arguments shall be used for multiprocessing (using OpenMP). If True, they will be selected automatically. If empty or False, no compilation for multiprocessing will happen (unless you supply the relevant compiler arguments otherwise).

set_parameters(self, *parameters)

Set the control parameters defined by the control_pars argument of the jitcsde. Note that you probably want to reset the integrator (see reset_integrator) every time after you do this.

Parameters:
parameters : floats

Values of the control parameters. You can also use a single iterable containing these. Either way, the order must be the same as in the control_pars argument of the jitcsde.

set_integration_parameters(self, atol=1e-10, rtol=1e-05, first_step=1.0, min_step=1e-10, max_step=10.0, decrease_threshold=1.1, increase_threshold=0.5, safety_factor=0.9, max_factor=5.0, min_factor=0.2)

Sets the parameters for the step-size adaption.

Parameters:
atol : float
rtol : float

The tolerance of the estimated integration error is determined as \(\texttt{atol} + \texttt{rtol}·|y|\). The step-size adaption algorithm is the same as for the GSL. For details see its documentation.

first_step : float

The step-size adaption starts with this value.

min_step : float

Should the step-size have to be adapted below this value, the integration is aborted and UnsuccessfulIntegration is raised.

max_step : float

The step size will be capped at this value.

decrease_threshold : float

If the estimated error divided by the tolerance exceeds this, the step size is decreased.

increase_threshold : float

If the estimated error divided by the tolerance is smaller than this, the step size is increased.

safety_factor : float

To avoid frequent adaption, all freshly adapted step sizes are multiplied by this factor.

max_factor : float
min_factor : float

The maximum and minimum factor by which the step size can be adapted in one adaption step.

integrate(self, target_time)

Tries to evolve the dynamics.

Parameters:
target_time : float

time until which the dynamics is evolved

Returns:
state : NumPy array

the computed state of the system at target_time.

pin_noise(self, number, step_size)

Fills the noise memory with a realisation of the underlying Wiener process sampled at number points with a distance of step_size (i.e., for a total length of number·step_size).

This is mainly intended for testing purposes, e.g., if you want to investigate the influence of the step-size-adjustment parameters in a controlled setting. Note that this only pins the Wiener process at the specified points. All other points will have to be interpolated with a Brownian bridge, but the lower the step_size, the more constrained they will be. Note that this inevitably slows things down.

Parameters:
number : integer

number of pre-defined noise points

step_size : float

distance of pre-defined noise points
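
For example, to fix the Wiener process on a fine grid covering an integration up to time 10 (arbitrary values, for illustration only):

SDE.pin_noise( 10000, 0.001 )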

check(self, fail_fast=True)

Performs a series of checks that may not be feasible at runtime (usually due to their length). Whenever you run into an error that you cannot make sense of, try running this. It checks for the following mistakes:

  • negative arguments of y
  • arguments of y that are higher than the system’s dimension n
  • unused variables
Parameters:
fail_fast : boolean

whether to abort on the first failure. If false, an error is raised only after all problems are printed.

render_and_write_code(self, expressions, name, chunk_size=100, arguments=(), omp=True)

Writes expressions to code.

Parameters:
expressions: iterator

expressions to be written

name: string

unique name of what is computed

chunk_size: integer

size of chunks. If smaller than 1, no chunking happens.

arguments: list of tuples

Each tuple contains the name, type, and size (optional, for arrays) of an argument needed by the code. This is so the arguments can be passed to chunked functions.

omp: boolean

whether OMP pragmas should be included

save_compiled(self, destination='', overwrite=False)

saves the module file with the compiled functions for later use (see the module_location argument). If no compiled derivative exists, it tries to compile it first using compile_C. In most circumstances, you should not rename this file, as the filename is needed to determine the module name.

Parameters:
destination : string specifying a path

If this specifies only a directory (don’t forget the trailing / or similar), the module will be saved to that directory. If empty (default), the module will be saved to the current working directory. Otherwise, the functions will be (re)compiled to match that filename. A file ending will be appended if needed.

overwrite : boolean

Whether to overwrite the specified target if it already exists.

Returns:
filename : string

The destination that was actually used.

Jumps

class jitcsde_jump(IJI, amp, *args, **kwargs)

An extension of jitcsde that can additionally handle random jumps. Note that as you are controlling the randomness in these functions, you also have to handle the random seed yourself. The handling is like that of jitcsde except for:

Parameters:
IJI : callable IJI(time,state) returning a non-negative number

A function (or similar) that returns a waiting time for the next jump, i.e., that draws one value from the inter-jump-interval distribution. A new waiting time using this function is determined directly after each jump (and at the first call of integrate). Hence, only the state and time at those times affect the waiting time, if you choose it to be time- or state-dependent.

amp : callable amp(time,state) returning an array of size n.

A function (or similar) that returns the actual jump. This must be a NumPy array, even if your system is one-dimensional.

reset_integrator(self)

Resets the integrator, forgetting all stored noise (and waiting times for jitcsde_jump) and forcing re-initiation when it is needed next.

integrate(self, target_time)

Tries to evolve the dynamics.

Parameters:
target_time : float

time until which the dynamics is evolved

Returns:
state : NumPy array

the computed state of the system at target_time.

check(self, fail_fast=True)

Same as jitcsde’s check, but additionally checks the output of the amp function (by calling it).

References

[RN17]
  C. Rackauckas, Q. Nie: Adaptive methods for stochastic differential equations via natural embeddings and rejection sampling with memory, Discrete Cont. Dyn.-B 22, pp. 2731–2761 (2017), 10.3934/dcdsb.2017133.
[R10]
  A. Rößler: Runge–Kutta methods for the strong approximation of solutions of stochastic differential equations, SIAM J. Numer. Anal. 48, pp. 922–952 (2010), 10.1137/09076636X.