entropy : Python library documentation¶
Contents:
entropy.entropy¶
- entropy.entropy.choose_algorithm(algo=1, version=1)¶
selects the algorithms to use for computing all mutual information quantities (including partial mutual information and transfer entropies), and selects the algorithm to use to count neighbors.
- Parameters:
algo – the Kraskov-Stogbauer-Grassberger algorithm. Possible values are {1, 2, 1|2} for (algo 1, algo 2, both algorithms) (default=1)
version – counting algorithm version, possible values are {1, 2} for (legacy, mixed ANN) (default=1)
- Returns:
no output
- about the “version” parameter:
1 : legacy (NBG) : faster for small embedding dimensions (<=2)
2 : mixed ANN : faster for large embedding dimensions (>=4)
- entropy.entropy.compute_DI(x, y, N, stride=1, Theiler=0, N_eff=0, N_real=0, k=5, mask=<MemoryView of 'ndarray'>)¶
computes directed information DI(x->y) between two n-d vectors x and y, using nearest neighbors search with ANN library. (time-)embedding is performed on the fly.
- Parameters:
N – embedding dimension for x and y (should be >=1)
stride – stride (Theiler correction will be used accordingly, even if n_embed_x,y=1). Note that for DI, lag (equivalent to stride for future point in time) is equal to stride.
Theiler – Theiler scale (should be >= stride, but lower values are tolerated). If Theiler<0, automatic Theiler is applied as described in the function set_Theiler.
N_eff – number of points to consider in the statistics (default=4096), or -1 for the largest possible value (legacy behavior)
N_real – number of realizations to consider (default=10), or -1 for N_real=stride (legacy behavior)
k – number of neighbors to consider or -1 to force a non-ANN computation using covariance only, assuming Gaussian statistics.
mask – mask to use (NumPy array of dtype=char). If a mask is provided, only values given by the mask will be used. (default=no mask)
- Returns:
two values (one per algorithm)
see Input parameters and the function set_sampling to set sampling parameters globally if needed.
- entropy.entropy.compute_MI(x, y, n_embed_x=1, n_embed_y=1, stride=1, Theiler=0, N_eff=0, N_real=0, k=5, mask=<MemoryView of 'ndarray'>)¶
computes mutual information MI(x,y) of two multi-dimensional vectors x and y using nearest neighbors search with ANN library.
\[MI(x,y) = H(x) + H(y) - H(x,y)\]
- Parameters:
x, y – signals (NumPy arrays with ndim=2, time along second dimension)
n_embed_x – embedding dimension for x (default=1)
n_embed_y – embedding dimension for y (default=1)
stride – stride for embedding (default=1)
Theiler – Theiler scale (should be >= stride, but lower values are tolerated). If Theiler<0, automatic Theiler is applied as described in the function set_Theiler.
N_eff – number of points to consider in the statistics (default=4096), or -1 for the largest possible value (legacy behavior)
N_real – number of realizations to consider (default=10), or -1 for N_real=stride (legacy behavior)
k – number of neighbors to consider or -1 to force a non-ANN computation using covariance only, assuming Gaussian statistics.
mask – mask to use (NumPy array of dtype=char). If a mask is provided, only values given by the mask will be used. (default=no mask)
- Returns:
two values (one per algorithm); see the function choose_algorithm.
see Input parameters and the function set_sampling to set sampling parameters globally if needed.
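As a point of reference for the covariance-only path (k=-1), jointly Gaussian signals have the closed form MI = -0.5·ln(1-ρ²) in nats. The NumPy sketch below checks that formula on synthetic data; it is an illustration of what the Gaussian assumption yields, not the library's ANN estimator.

```python
import numpy as np

# Illustrative sketch (not the library's ANN estimator): for jointly
# Gaussian x and y with correlation rho, MI(x,y) = -0.5*ln(1 - rho^2)
# in nats, which is what a covariance-only (k=-1) estimate targets.
rng = np.random.default_rng(0)
rho = 0.8
xy = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=100_000)
rho_hat = np.corrcoef(xy[:, 0], xy[:, 1])[0, 1]
mi_gauss = -0.5 * np.log(1.0 - rho_hat**2)   # in nats
```

Such closed forms are convenient for validating the nearest-neighbor estimates on synthetic Gaussian data.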
- entropy.entropy.compute_PMI(x, y, z, n_embed_x=1, n_embed_y=1, n_embed_z=1, stride=1, Theiler=0, N_eff=0, N_real=0, k=5, mask=<MemoryView of 'ndarray'>)¶
computes partial mutual information (PMI) of three 2-d vectors x, y and z using nearest neighbors search with ANN library.
\[PMI = MI(x,y|z)\]
(z is the conditioning variable)
- Parameters:
x, y, z – signals (NumPy arrays with ndim=2, time along second dimension)
n_embed_x – embedding dimension for x (default=1)
n_embed_y – embedding dimension for y (default=1)
n_embed_z – embedding dimension for z (default=1)
stride – stride for embedding (default=1)
Theiler – Theiler scale (should be >= stride, but lower values are tolerated). If Theiler<0, automatic Theiler is applied as described in the function set_Theiler.
N_eff – number of points to consider in the statistics (default=4096), or -1 for the largest possible value (legacy behavior)
N_real – number of realizations to consider (default=10), or -1 for N_real=stride (legacy behavior)
k – number of neighbors to consider or -1 to force a non-ANN computation using covariance only, assuming Gaussian statistics.
mask – mask to use (NumPy array of dtype=char). If a mask is provided, only values given by the mask will be used. (default=no mask)
- Returns:
two values (one per algorithm)
see Input parameters and the function set_sampling to set sampling parameters globally if needed.
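For Gaussian signals, MI(x,y|z) has a closed form in terms of covariance log-determinants, which is handy for sanity-checking PMI estimates. The sketch below (a textbook identity, not the library's estimator) builds a case where x and y are independent given z, so the conditional MI is ≈ 0 even though the plain MI(x,y) is not:

```python
import numpy as np

# Gaussian sanity check (not the library's ANN estimator): when x and y
# are conditionally independent given z, MI(x,y|z) = 0 although MI(x,y) > 0.
# For Gaussians: MI(x,y|z) =
#   0.5 * [ln det(C_xz) + ln det(C_yz) - ln det(C_z) - ln det(C_xyz)].
rng = np.random.default_rng(3)
n = 200_000
z = rng.standard_normal(n)
x = z + rng.standard_normal(n)
y = z + rng.standard_normal(n)          # x <- z -> y: no direct x-y link
C = np.cov(np.vstack([x, y, z]))
ld = lambda idx: np.log(np.linalg.det(C[np.ix_(idx, idx)]))
pmi = 0.5 * (ld([0, 2]) + ld([1, 2]) - ld([2]) - ld([0, 1, 2]))   # ~ 0
mi_xy = -0.5 * np.log(1.0 - np.corrcoef(x, y)[0, 1] ** 2)          # > 0
```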
- entropy.entropy.compute_PTE(x, y, z, n_embed_x=1, n_embed_y=1, n_embed_z=1, stride=1, lag=1, Theiler=0, N_eff=0, N_real=0, k=5, mask=<MemoryView of 'ndarray'>)¶
computes partial transfer entropy (PTE) of three 2-d vectors x, y and z using nearest neighbors search with ANN library.
\[PTE(x,y,z) = TE(x\rightarrow y|z)\]
(from x to y, with z a conditioning variable)
- Parameters:
x, y, z – signals (NumPy arrays with ndim=2, time along second dimension)
n_embed_x – embedding dimension for x (default=1)
n_embed_y – embedding dimension for y (default=1)
n_embed_z – embedding dimension for z (default=1)
stride – stride for embedding (default=1)
lag – distance of the future point in time (default=1)
Theiler – Theiler scale (should be >= stride, but lower values are tolerated). If Theiler<0, automatic Theiler is applied as described in the function set_Theiler.
N_eff – number of points to consider in the statistics (default=4096), or -1 for the largest possible value (legacy behavior)
N_real – number of realizations to consider (default=10), or -1 for N_real=stride (legacy behavior)
k – number of neighbors to consider or -1 to force a non-ANN computation using covariance only, assuming Gaussian statistics.
mask – mask to use (NumPy array of dtype=char). If a mask is provided, only values given by the mask will be used. (default=no mask)
- Returns:
two values (one per algorithm)
see Input parameters and the function set_sampling to set sampling parameters globally if needed.
- entropy.entropy.compute_TE(x, y, n_embed_x=1, n_embed_y=1, stride=1, Theiler=0, N_eff=0, N_real=0, lag=1, k=5, mask=<MemoryView of 'ndarray'>)¶
computes the transfer entropy TE(x->y) (influence of x over y) of two n-d vectors x and y using nearest neighbors search with ANN library.
\[TE(x,y) = TE(x\rightarrow y)\]
- Parameters:
x, y – signals (NumPy arrays with ndim=2, time along second dimension)
n_embed_x – embedding dimension for x (default=1)
n_embed_y – embedding dimension for y (default=1)
stride – stride for embedding (default=1)
lag – distance of the future point in time (default=1)
Theiler – Theiler scale (should be >= stride, but lower values are tolerated). If Theiler<0, automatic Theiler is applied as described in the function set_Theiler.
N_eff – number of points to consider in the statistics (default=4096), or -1 for the largest possible value (legacy behavior)
N_real – number of realizations to consider (default=10), or -1 for N_real=stride (legacy behavior)
k – number of neighbors to consider or -1 to force a non-ANN computation using covariance only, assuming Gaussian statistics.
mask – mask to use (NumPy array of dtype=char). If a mask is provided, only values given by the mask will be used. (default=no mask)
- Returns:
two values (one per algorithm)
see Input parameters and the function set_sampling to set sampling parameters globally if needed.
- entropy.entropy.compute_complexities(x, n_embed=1, stride=1, r=0.2, Theiler=-4, N_eff=4096, N_real=10, mask=<MemoryView of 'ndarray'>)¶
computes ApEn and SampEn complexities (kernel estimates).
- Parameters:
x – signal (NumPy array with ndim=2, time as second dimension)
n_embed – embedding dimension (default=1)
stride – stride for embedding (default=1)
r – radius (default=0.2)
Theiler – Theiler scale (should be >= stride, but lower values are tolerated). If Theiler<0, automatic Theiler is applied as described in the function set_Theiler.
N_eff – number of points to consider in the statistics (default=4096), or -1 for the largest possible value (legacy behavior)
N_real – number of realizations to consider (default=10), or -1 for N_real=stride (legacy behavior)
mask – mask to use (NumPy array of dtype=char). If a mask is provided, only values given by the mask will be used. (default=no mask)
- Returns:
two nd-arrays of size (n_embed+1). The first array contains ApEn and the second SampEn, each estimate being a function of the embedding dimension, up to the provided value n_embed.
see Input parameters and the function set_sampling to set sampling parameters globally if needed.
- entropy.entropy.compute_complexities_old(x, n_embed=1, stride=1, r=0.2)¶
computes ApEn and SampEn complexities (kernel estimates). OLD VERSION
- Parameters:
x – signal (NumPy array with ndim=2, time as second dimension)
n_embed – embedding dimension (default=1)
stride – stride for embedding (default=1)
r – radius (default=0.2)
- Returns:
two nd-arrays of size (n_embed+1). The first array contains ApEn and the second SampEn, each estimate being a function of the embedding dimension, up to the provided value n_embed.
Note that enhanced samplings and/or masking are not available for this function. Contact nicolas.b.garnier (@) ens-lyon .fr if interested.
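For orientation, SampEn can be computed naively straight from its definition: count pairs of templates of length m and m+1 that match within a tolerance r, and take the negative log of their ratio. The helper below is a deliberately slow illustrative sketch (O(N²) time and memory), not the library's kernel estimator:

```python
import numpy as np

def sampen_naive(x, m=2, r=0.2):
    """Naive SampEn from the definition (illustrative sketch only).
    r is a fraction of the signal's standard deviation."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    def matches(mm):
        # all templates of length mm; Chebyshev distances; pairs i<j within tol
        t = np.lib.stride_tricks.sliding_window_view(x, mm)
        d = np.max(np.abs(t[:, None, :] - t[None, :, :]), axis=-1)
        iu = np.triu_indices(len(t), k=1)
        return np.count_nonzero(d[iu] <= tol)
    return -np.log(matches(m + 1) / matches(m))

rng = np.random.default_rng(7)
noise = rng.standard_normal(400)
regular = np.sin(np.linspace(0, 8 * np.pi, 400))
# a regular signal is more predictable, hence lower SampEn than white noise
```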
- entropy.entropy.compute_entropy(x, n_embed=1, stride=1, Theiler=0, N_eff=0, N_real=0, k=5, mask=<MemoryView of 'ndarray'>)¶
computes Shannon entropy \(H\) of a signal \(x\) (possibly multi-dimensional) using nearest neighbors search with ANN library.
\[H = - \int p(x^{(m,\tau)}) \ln p(x^{(m,\tau)}) {\rm d}^m x^{(m,\tau)}\]
(time-)embedding of \(x\) into \(x^{(m,\tau)}\) is performed on the fly:
(1)¶\[x \rightarrow x^{(m,\tau)}=(x_t, x_{t-\tau}, ..., x_{t-(m-1)\tau})\]
where \(m\) is given by the parameter n_embed and \(\tau\) is given by the parameter stride.
- Parameters:
x – signal (NumPy array with ndim=2, time along second dimension)
n_embed – embedding dimension \(m\) (default=1)
stride – stride (or time scale \(\tau\)) for embedding (default=1)
Theiler – Theiler scale (should be >= stride, but lower values are tolerated). If Theiler<0, automatic Theiler is applied as described in the function set_Theiler.
N_eff – number of points to consider in the statistics (default=4096), or -1 for the largest possible value (legacy behavior)
N_real – number of realizations to consider (default=10), or -1 for N_real=stride (legacy behavior)
k – number of neighbors to consider or -1 to force a non-ANN computation using covariance only, assuming Gaussian statistics.
mask – mask to use (NumPy array of dtype=char). If a mask is provided, only values given by the mask will be used. (default=no mask)
- Returns:
the entropy estimate
see Input parameters and the function set_sampling to set sampling parameters globally if needed.
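To illustrate what a k-nearest-neighbor entropy estimate computes, here is a brute-force 1-d sketch of the Kozachenko-Leonenko/Kraskov formula (a hypothetical helper, not the library's ANN implementation), checked against the exact entropy of a standard Gaussian, 0.5·ln(2πe) ≈ 1.419 nats:

```python
import numpy as np

def psi_int(n):
    # digamma at a positive integer: psi(n) = -gamma + H_{n-1}
    return -0.5772156649015329 + np.sum(1.0 / np.arange(1, n))

def knn_entropy_1d(x, k=5):
    """Kozachenko-Leonenko/Kraskov kNN entropy estimate (nats), 1-d,
    brute-force neighbor search. Illustrative sketch only; the library
    uses the ANN library for the neighbor search."""
    x = np.asarray(x, dtype=float)
    n = x.size
    d = np.abs(x[:, None] - x[None, :])
    r_k = np.sort(d, axis=1)[:, k]          # distance to the k-th neighbor
    return -psi_int(k) + psi_int(n) + np.mean(np.log(2.0 * r_k))

rng = np.random.default_rng(42)
h_hat = knn_entropy_1d(rng.standard_normal(2000), k=5)
h_true = 0.5 * np.log(2.0 * np.pi * np.e)   # exact value for N(0,1)
```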
- entropy.entropy.compute_entropy_2d(x, n_embed=1, stride_x=1, stride_y=1, Theiler_x=0, Theiler_y=0, N_eff=0, N_real=0, k=5, method=0, Theiler_2d=-1)¶
computes Shannon entropy of a scalar image, or of its spatial increments, using nearest neighbors search with ANN library. Embedding and (uniform or random) sub-sampling are performed on the fly.
- Parameters:
x – signal (NumPy array with ndim=2 (unidimensional) or ndim=3 (multi-dimensional))
stride_x – stride along x direction (Theiler correction will be used accordingly) (default=1)
stride_y – stride along y direction (Theiler correction will be used accordingly) (default=1)
Theiler_x – Theiler scale along x (should be >= stride_x, but lower values are tolerated).
Theiler_y – Theiler scale along y (should be >= stride_y, but lower values are tolerated).
N_eff – nb of points to consider in the statistics (default=4096) or -1 for largest possible value (legacy behavior)
N_real – nb of realizations to consider (default=10) or -1 for N_real=stride (legacy behavior)
k – number of neighbors to consider or -1 to force a non-ANN computation using covariance only, assuming Gaussian statistics.
method – an integer in {0,1,2}: 0 for regular entropy of the image (default), 1 for entropy of the increments of the image, 2 for entropy of the averaged increments of the image.
Theiler_2d – an integer in {1, 2, 4} specifying which 2-d Theiler prescription to use.
- Returns:
two values (one per algorithm)
- If either Theiler_x<0 or Theiler_y<0, then automatic Theiler is applied as follows:
-1 for Theiler=tau, and uniform sampling (thus localized in the dataset) (legacy)
-2 for Theiler=max, and uniform sampling (thus covering the whole dataset)
-3 for Theiler=tau, and random sampling (default)
-4 for Theiler=max, and random sampling
- The parameter Theiler_2d indicates, depending on its value:
1 for “minimal” : tau_Theiler is selected in each direction as in 1-d (troublesome if stride is small)
2 for “maximal” : tau_Theiler is selected as the max of (stride_x, stride_y) (possible sqrt(2) trouble)
4 for “optimal” : tau_Theiler is selected as sqrt(stride_x^2 + stride_y^2) (rounded-up)
if not provided, the value set by the function set_Theiler_2d will be used (default=”maximal”).
see Input parameters and the function set_sampling to set sampling parameters globally if needed.
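For a concrete instance of the three 2-d Theiler prescriptions, take stride_x=3 and stride_y=4 (values chosen here purely for illustration):

```python
import math

# 2-d Theiler prescriptions for stride_x=3, stride_y=4 (illustrative values)
sx, sy = 3, 4
tau_minimal = (sx, sy)                       # "minimal": per-direction, as in 1-d
tau_maximal = max(sx, sy)                    # "maximal": max(3, 4) = 4
tau_optimal = math.ceil(math.hypot(sx, sy))  # "optimal": ceil(sqrt(9+16)) = 5
```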
- entropy.entropy.compute_entropy_Renyi(x, q, inc_type=0, n_embed=1, stride=1, Theiler=0, N_eff=0, N_real=0, k=5, mask=<MemoryView of 'ndarray'>)¶
computes Renyi entropy \(H_q\) of order \(q\) of a signal \(x\) (possibly multi-dimensional) or of its increments, using nearest neighbors search with ANN library. (time-)embedding is performed on the fly.
\[H_q = \frac{1}{1-q} \ln \int p(x^{(m,\tau)})^q {\rm d}^m x^{(m,\tau)}\]
(time-)embedding of \(x\) into \(x^{(m, \tau)}\) (see equation (1)) is performed on the fly, see compute_entropy.
- Parameters:
x – signal (NumPy array with ndim=2, time as second dimension)
q – order of the Renyi entropy (should not be 1)
inc_type – which pre-processing to operate, possible values are: 0 for entropy of the signal x itself, 1 for entropy of the increments of the signal x, 2 for entropy of the averaged increments of x.
n_embed – embedding dimension \(m\) (default=1)
stride – stride (or time scale \(\tau\)) for embedding (default=1)
Theiler – Theiler scale (should be >= stride, but lower values are tolerated). If Theiler<0, automatic Theiler is applied as described in the function set_Theiler.
N_eff – number of points to consider in the statistics (default=4096), or -1 for the largest possible value (legacy behavior)
N_real – number of realizations to consider (default=10), or -1 for N_real=stride (legacy behavior)
k – number of neighbors to consider or -1 to force a non-ANN computation using covariance only, assuming Gaussian statistics.
mask – mask to use (NumPy array of dtype=char). If a mask is provided, only values given by the mask will be used. (default=no mask)
- Returns:
the entropy estimate (of the signal or of its increments, depending on inc_type)
see Input parameters and the function set_sampling to set sampling parameters globally if needed.
- entropy.entropy.compute_entropy_increments(x, inc_type=1, order=1, stride=1, Theiler=0, N_eff=0, N_real=0, k=5, mask=<MemoryView of 'ndarray'>)¶
computes the entropy of the increments of a signal (possibly multi-dimensional) using nearest neighbors search with ANN library.
\[H(\delta_\tau x) = - \int p(\delta_\tau x) \ln p(\delta_\tau x) {\rm d} \delta_\tau x\](time-)increments \(\delta_\tau x\) are computed from signal \(x\) on the fly:
\[\begin{split}x \rightarrow \delta_\tau x &= x_t - x_{t-\tau} \quad {\rm (regular \, increments \, of \, order \, 1)} \\ &= x_t - \sum_{k=1}^{\tau} x_{t-k} \quad {\rm (averaged \, increments \, of \, order \, 1)}\end{split}\]
- Parameters:
x – signal (NumPy array with ndim=2, time along second dimension)
inc_type –
increments type (regular or averaged):
1 for regular increments (of given order)
2 for averaged increments (of order 1 only)
order – order of increments (between 0 and 5) (default=1)
stride – stride for embedding (default=1)
Theiler – Theiler scale (should be >= stride, but lower values are tolerated). If Theiler<0, automatic Theiler is applied as described in the function set_Theiler.
N_eff – number of points to consider in the statistics (default=4096), or -1 for the largest possible value (legacy behavior)
N_real – number of realizations to consider (default=10), or -1 for N_real=stride (legacy behavior)
k – number of neighbors to consider or -1 to force a non-ANN computation using covariance only, assuming Gaussian statistics.
mask – mask to use (NumPy array of dtype=char). If a mask is provided, only values given by the mask will be used. (default=no mask)
- Returns:
the entropy estimate for the increments
see Input parameters and the function set_sampling to set sampling parameters globally if needed.
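Regular increments of order 1 at scale τ amount to a lagged difference, which can be sketched with plain NumPy slicing (time along the second dimension, as in the library's convention):

```python
import numpy as np

# regular increments of order 1 at scale tau: delta_tau x = x_t - x_{t-tau}
x = np.arange(10.0)[None, :]     # shape (1, npts), time along second axis
tau = 3
inc = x[:, tau:] - x[:, :-tau]   # shape (1, npts - tau)
```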
- entropy.entropy.compute_entropy_rate(x, method=2, m=1, stride=1, Theiler=0, N_eff=0, N_real=0, k=5, mask=<MemoryView of 'ndarray'>)¶
computes entropy rate \(h^{(m,\tau)}\) of order \(m\) of a signal \(x\) (possibly multi-dimensional) using nearest neighbors search with ANN library.
\[h^{(m,\tau)}(x) = H(x_t^{(m+1,\tau)}) - H(x_{t-\tau}^{(m,\tau)})\](time-)embedding (see equation (1)) is performed on the fly.
- Parameters:
x – signal (NumPy array with ndim=2, time along second dimension)
method – an integer in {0,1,2} indicating which method to use (default=2) (see below)
m – embedding dimension (default=1)
stride – stride for embedding (default=1)
Theiler – Theiler scale (should be >= stride, but lower values are tolerated). If Theiler<0, automatic Theiler is applied as described in the function set_Theiler.
N_eff – number of points to consider in the statistics (default=4096), or -1 for the largest possible value (legacy behavior)
N_real – number of realizations to consider (default=10), or -1 for N_real=stride (legacy behavior)
k – number of neighbors to consider or -1 to force a non-ANN computation using covariance only, assuming Gaussian statistics.
mask – mask to use (NumPy array of dtype=char). If a mask is provided, only values given by the mask will be used. (default=no mask)
- Returns:
the entropy estimate
- The parameter method influences the computation as follows:
0 for \(H^{(m)}/m\)
1 for \(H^{(m+1)}-H^{(m)}\)
2 for \(H^{(1)}-MI(x,x^{(m)})\) (default=2)
see Input parameters and the function set_sampling to set sampling parameters globally if needed.
- entropy.entropy.compute_regularity_index(x, stride=1, Theiler=0, N_eff=0, N_real=0, k=5, mask=<MemoryView of 'ndarray'>)¶
computes regularity index of a vector (possibly multi-dimensional) using nearest neighbors search with ANN library. (time-)embedding is performed on the fly.
\[\Delta(x,\tau) = H(\delta_\tau x) - h^{(\tau)}(x)\]
- Parameters:
x – signal (NumPy array with ndim=2, time as second dimension)
stride – stride (Theiler correction will be used accordingly).
Theiler – Theiler scale (should be >= stride, but lower values are tolerated). If Theiler<0, automatic Theiler is applied as described in the function set_Theiler.
N_eff – number of points to consider in the statistics (default=4096), or -1 for the largest possible value (legacy behavior)
N_real – number of realizations to consider (default=10), or -1 for N_real=stride (legacy behavior)
k – number of neighbors to consider or -1 to force a non-ANN computation using covariance only, assuming Gaussian statistics.
mask – mask to use (NumPy array of dtype=char). If a mask is provided, only values given by the mask will be used. (default=no mask)
- Returns:
two values (one per algorithm)
see Input parameters and the function set_sampling to set sampling parameters globally if needed.
- entropy.entropy.compute_relative_entropy(x, y, n_embed_x=1, n_embed_y=1, stride=1, Theiler=0, N_eff=0, N_real=0, k=5, method=1)¶
computes relative entropy of two processes (possibly multi-dimensional) using nearest neighbors search with ANN library. (time-)embedding is performed on the fly.
- Parameters:
x – signal (NumPy array with ndim=2, time along second dimension)
y – signal (NumPy array with ndim=2, time along second dimension)
n_embed_x – embedding dimension for x (default=1)
n_embed_y – embedding dimension for y (default=1)
stride – stride for embedding (default=1)
Theiler – Theiler scale (should be >= stride, but lower values are tolerated). If Theiler<0, automatic Theiler is applied as described in the function set_Theiler.
N_eff – number of points to consider in the statistics (default=4096), or -1 for the largest possible value (legacy behavior)
N_real – number of realizations to consider (default=10), or -1 for N_real=stride (legacy behavior)
k – number of neighbors to consider or -1 to force a non-ANN computation using covariance only, assuming Gaussian statistics.
mask – masks are not supported yet.
method – 0 for relative entropy (1 value [Hr] is returned) or 1 for Kullback-Leibler divergence (2 values [Hr, KLdiv] are returned) (default=1)
- Returns:
1 or 2 values are returned, depending on the value of the parameter method: the relative entropy (Hr) and the KL divergence (Hr-H) estimates.
see Input parameters and the function set_sampling to set sampling parameters globally if needed.
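As a reference point for the returned KL divergence, two 1-d Gaussians have the closed form KL(p‖q) = ln(σ1/σ0) + (σ0² + (μ0-μ1)²)/(2σ1²) - 1/2. The helper below is a textbook identity useful for sanity-checking estimates on Gaussian data, not the library's estimator:

```python
import numpy as np

def kl_gauss_1d(mu0, s0, mu1, s1):
    """KL divergence KL(N(mu0, s0^2) || N(mu1, s1^2)) in nats
    (textbook closed form, for sanity-checking on Gaussian data)."""
    return np.log(s1 / s0) + (s0**2 + (mu0 - mu1)**2) / (2.0 * s1**2) - 0.5

kl_same = kl_gauss_1d(0.0, 1.0, 0.0, 1.0)   # identical distributions -> 0
kl_diff = kl_gauss_1d(0.0, 1.0, 1.0, 1.0)   # unit mean shift -> 0.5
```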
- entropy.entropy.get_Theiler_2d()¶
returns the Theiler prescription currently selected (see the function set_Theiler_2d for details).
- Parameters:
none
- Returns:
the default Theiler prescription currently used.
- entropy.entropy.get_extra_info(verbosity=0)¶
returns extra information from the last computation
- Parameters:
verbosity – an integer. If ==0 (default), then nothing is printed on the screen, but values are returned (useful in scripts).
- Returns:
statistics on the processed input data (e.g.: increments of the input):
the standard deviation of the last data used (i.e., the std of the input, not of the output!)
the standard deviation of this standard deviation
These may be useful for, e.g., computations on increments with a given stride.
- entropy.entropy.get_last_info(verbosity=0)¶
returns information from the last computation (and prints it on screen if verbosity>0)
- Parameters:
verbosity – an integer. If ==0 (default), then nothing is printed on the screen, but values are returned (useful for use in scripts).
- Returns:
the following 8 values, in the following order
the standard deviations estimates of the last computed quantities (2 values)
the number of errors encountered (1 value)
the effective number of points used, per realization, and total (2 values)
the number of independent realizations used (1 value)
the effective value of tau_Theiler (in x, and possibly in y) (2 values)
- entropy.entropy.get_last_sampling(verbosity=0)¶
returns the full set of information on the sampling used in the last computation, and prints it in the console if verbosity>0 (note that these values may differ from the default values, as returned by get_sampling).
- Parameters:
verbosity – an integer in {0,1} (default=0). If verbosity>0, a human-readable message explaining the 7 returned values is printed in the console.
- Returns:
the following 7 values, in the following order:
the type of Theiler prescription (1 integer)
the Theiler scale used, and its maximal value given the other parameters (2 integers)
the effective number of points used in a single realization, and its maximal value (2 integers)
the number of realizations used, and its maximal value (2 integers)
- entropy.entropy.get_sampling(verbosity=1)¶
prints the default values of sampling parameters used in all functions.
- Parameters:
verbosity – an integer in {0,1} (default=1). If verbosity>0, a human-readable message explaining the 4 returned values is printed in the console.
- Returns:
the following 4 values, in the following order:
the default type of Theiler prescription
the default Theiler scale
the default effective number of points used in a single realization
the default number of realizations used.
see set_sampling to change these values, and Input parameters for their meaning. See get_last_sampling to get the last values used instead (they may differ from the default values).
- entropy.entropy.get_threads_number()¶
returns the current number of threads.
- Parameters:
none
- Returns:
an integer, the current number of threads used.
- entropy.entropy.get_verbosity()¶
gets the current verbosity level of the library
- Parameters:
none
- Returns:
no output values, but a message indicating the verbosity level is printed in the console.
- verbosity level explanation:
<0 : no messages, even if an error is encountered (not recommended!)
0 : messages only if an error is encountered
1 : messages for errors and important warnings only (default)
2 : more warnings and/or more details on the warnings
…
- entropy.entropy.multithreading(do_what=['info', 'auto', 'single'], n_cores=0)¶
selects the multi-threading scheme and optionally sets the number of threads to use
- Parameters:
do_what – either “info” or “auto” or “single”, see below.
n_cores – an integer; if specified and positive, then the provided number n_cores of threads will be used (default=0 for “auto”)
- Returns:
no output.
- The parameter do_what can be chosen as follows:
“info”: nothing is done, but information on the current multithreading state is displayed.
“auto”: the optimal (self-adapted) number of threads will be used (default).
“single”: the algorithms will run single-threaded (no multithreading).
if n_cores (a positive number) is specified, then the provided number n_cores of threads will be used.
- entropy.entropy.set_Theiler(Theiler=4)¶
sets the default way the library handles the Theiler prescription. This prescription is overridden if explicitly specified in a function call.
set_Theiler(Theiler='legacy'|'smart'|'random'|'adapted')
- Parameters:
Theiler – Theiler prescription. Possible values are {1, 2, 3, 4} or (“legacy”, “smart”, “random”, “adapted”) (default=4)
- Returns:
no output values, but a message is printed in the console.
- The parameter “Theiler” indicates the Theiler prescription to follow:
1 or “legacy” : tau_Theiler=tau(=stride) + uniform sampling (thus localized in the dataset) (legacy)
2 or “smart” : tau_Theiler=max>=tau(=stride) + uniform sampling (covering the full dataset)
3 or “random” : tau_Theiler=tau(=stride) + random sampling
4 or “adapted” : tau_Theiler>(or <)tau(=stride)
Depending on the Theiler prescription, the effective value of tau_Theiler can be smaller than tau(=stride) in order to satisfy the imposed N_eff. Use this with caution, for example by tracking the effectively selected tau_Theiler value with the function get_last_info.
- entropy.entropy.set_Theiler_2d(Theiler=2)¶
selects the 2-d Theiler prescription to use for sampling images.
- Parameters:
Theiler – Theiler prescription. Possible values are {1, 2, 4}, as explained below.
- Returns:
no output
- About possible values:
1 or “minimal” : tau_Theiler is selected in each direction as in 1-d (troublesome if one of the strides is much smaller than the other)
2 or “maximal” : tau_Theiler is selected as the max of (stride_x, stride_y) (possibly too small by a factor sqrt(2))
4 or “optimal” : tau_Theiler is selected as sqrt(stride_x^2 + stride_y^2) (rounded-up for max safety)
(default=2)
- entropy.entropy.set_sampling(Theiler='adapted', N_eff=4096, N_real=10)¶
sets the default way the library handles the Theiler prescription, the number N_eff of effective points, and the number N_real of realizations. These values are overridden if explicitly specified in a function call.
- Parameters:
Theiler – prescription (default=’adapted’), see the function set_Theiler for details on possible options.
N_eff – number of points to consider in the statistics (default=4096).
N_real – number of realizations to consider (default=10).
- Returns:
no output.
the parameters N_eff and N_real are overridden if Theiler==’legacy’
You can check the current default values with the function get_sampling. You can also examine the values used in the last computation with the function get_last_sampling. See Input parameters.
- entropy.entropy.set_verbosity(level=1)¶
sets the verbosity level of the library
- Parameters:
level – an integer that indicates the verbosity level of the library (default=1)
- Returns:
no output
- verbosity can be:
<0 : no messages, even if an error is encountered (not recommended!)
0 : messages only if an error is encountered
1 : messages for errors and important warnings only (default)
2 : more warnings and/or more details on the warnings
…
- entropy.entropy.surrogate(x, method=0, N_steps=7)¶
creates a surrogate version of (possibly multi-dimensional) data x
- Parameters:
method – an integer to indicate which type of surrogate to create (see below)
N_steps – an integer to indicate how many steps to use for improved surrogates (method 4 only)
- Returns:
an nd-array with the same dimensionality as the input data x
- The method parameter can be:
0 : shuffle points in time, while retaining the joint coordinates
1 : randomize phases with an unwindowed Fourier transform (uFt)
2 : randomize phases with a windowed Fourier transform (wFT) (buggy!)
3 : randomize phases with a Gaussian null hypothesis and Ft (aaFT)
4 : improved surrogate (same PDF and PSD), using N_steps
5 : creates a Gaussian version of x, with the same PSD and dependences
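The principle behind method 1 (phase randomization with an unwindowed Fourier transform) can be sketched in a few lines of NumPy: keep the Fourier amplitudes, draw new phases, and pin the DC and Nyquist bins so the surrogate stays real-valued with its power spectrum preserved. This illustrates the principle only, not the library's implementation:

```python
import numpy as np

# phase-randomized surrogate (uFt principle): keep Fourier amplitudes,
# draw new phases; DC and Nyquist bins stay real so irfft is consistent.
rng = np.random.default_rng(1)
x = rng.standard_normal((2, 512))                 # 2-d signal, time = 2nd axis
X = np.fft.rfft(x, axis=1)
phases = np.exp(2j * np.pi * rng.random(X.shape))
phases[:, 0] = 1.0                                # DC bin
phases[:, -1] = 1.0                               # Nyquist bin (even length)
x_surr = np.fft.irfft(np.abs(X) * phases, n=x.shape[1], axis=1)
# x_surr has the same power spectrum as x, but scrambled phases
```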
entropy.tools¶
- entropy.tools.compute_over_scales(func, tau_set, *args, verbosity_timing=1, get_samplings=0, **kwargs)¶
iteratively runs an estimation over a range of time-scales/stride values
- Parameters:
func – (full) name of the function to run
tau_set – 1-d numpy array containing the set of values for stride (time-scales)
verbosity_timing – 0 for no output, or 1,2 or more for more and more detailed output
get_samplings – 1 for extra returned array, with samplings parameters used for each stride
args – any parameters to pass to the function are accepted, with the usual syntax (e.g.: x, y, k=5, …)
- Returns:
2 or 3 nd-arrays, each having their last dimension equal to tau_set.size (see below).
example:
# assuming you have data x and y available and ready for analysis
tau_set = numpy.arange(20)  # a set of (time-)scales to examine
MI, MI_std = compute_over_scales(entropy.compute_MI, tau_set, x, y, N_eff=1973)
plt.errorbar(tau_set, MI, yerr=MI_std)
- About returned values:
the first returned array contains result(s) as a function of stride. If the function func returns more than one value, then this first nd-array will have more than one dimension.
the second returned array contains an estimator of the std, as a function of stride.
the third returned array (returned only if the parameter get_samplings is set to 1) contains all sampling parameters used for each estimation. This is useful to track the value of N_eff or tau_Theiler used in the estimation as a function of the (time-)scale. See get_sampling for a list of returned values and how they are ordered.
- entropy.tools.crop(x, npts_new, i_window=0)¶
y = crop(x, npts_new, [i_window=0])
crops an nd-array x (possibly multi-dimensional) in time (faster than a pure Python implementation)
- Parameters:
x – signal (NumPy array with ndim=2, time along second dimension)
npts_new – new size in time
i_window – starting point in time (default=0)
- Returns:
an nd-array with the requested version of input data x
- entropy.tools.embed(x, n_embed=1, stride=1, i_window=0, n_embed_max=-1)¶
y = embed(x, [n_embed=1, stride=1, i_window=0])
causally (time-)embeds an nd-array x (possibly multi-dimensional)
note: this function is optimized, and much faster than the Python version (“embed_python”).
- Parameters:
x – signal (NumPy array with ndim=2, time along second dimension)
n_embed – embedding dimension (default=1)
stride – distance between successive points (default=1)
i_window – returns the (i_window)th set (0<=i_window<stride)
n_embed_max – max embedding dimension (for causal time reference); if -1, then uses n_embed (default=-1)
- Returns:
an nd-array with the requested (time-)embedded version of the input data x
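A pure-NumPy reference for what causal embedding produces, following equation (1): a slow sketch in the spirit of embed_python, not the optimized embed.

```python
import numpy as np

def embed_naive(x, n_embed=1, stride=1):
    """Causal time-embedding per equation (1): stacks
    (x_t, x_{t-stride}, ..., x_{t-(n_embed-1)*stride}) along the first axis.
    x has shape (d, npts); output has shape
    (d*n_embed, npts - (n_embed-1)*stride). Illustrative sketch only."""
    d, npts = x.shape
    n0 = (n_embed - 1) * stride
    rows = [x[:, n0 - j * stride: npts - j * stride] for j in range(n_embed)]
    return np.concatenate(rows, axis=0)

x = np.arange(10.0)[None, :]          # one scalar signal, 10 time points
y = embed_naive(x, n_embed=3, stride=2)
# y[:, 0] is (x_4, x_2, x_0): the current point plus two lagged copies
```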
- entropy.tools.embed_python(x, m=1, stride=1, i_window=0)¶
(time-)embeds an nd-array x (possibly multi-dimensional)
note: this function is only here for test purposes and is not optimized; you should use the function “embed” instead.
- Parameters:
x – signal (NumPy array with ndim=2, time along second dimension)
m – embedding dimension (default=1)
stride – distance between successive points (default=1)
i_window – returns the (i_window)th set (0<=i_window<stride)
- Returns:
an nd-array with the requested (time-)embedded version of the input data x
- entropy.tools.reorder(x)¶
makes any nd-array compatible with any function of the code for temporal-like signals
- Parameters:
x – any nd-array
- Returns:
a well aligned and ordered nd-array containing the same data as input x
- entropy.tools.reorder_2d(x, nx=-1, ny=-1, d=-1)¶
makes any nd-array compatible with any function of the 2d code (images)
entropy.masks¶
- entropy.masks.mask_NaN(x)¶
returns the mask corresponding to NaN values in the data x
- entropy.masks.mask_clean(x)¶
makes any nd-array a compatible mask for the code
at any given time t, if the mask has a True value in one dimension, then the resulting mask will also have a True value at that time t (AND logic)
- entropy.masks.mask_finite(x)¶
returns the mask corresponding to finite, correct values in the data x
- entropy.masks.retain_from_mask(x, mask)¶
returns a new nd-array from input nd-array x using input mask
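The intended mask semantics can be sketched in NumPy (an illustration of a mask_finite-like construction followed by a retain_from_mask-like selection, not the library's actual code):

```python
import numpy as np

# build a mask that is 1 where all components at a time point are finite,
# 0 elsewhere, then retain only the masked time points
# (time along the second axis, as in the library's convention)
x = np.array([[0.5, np.nan, 1.2, np.inf, -0.3],
              [1.0, 2.0,    3.0, 4.0,    5.0]])
mask = np.isfinite(x).all(axis=0).astype(np.int8)   # char-like 0/1 mask
x_kept = x[:, mask.astype(bool)]                    # columns 0, 2, 4 survive
```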