Information Analysis

The neet package provides the Information class to compute various information measures over the dynamics of discrete-state network models.

The core information-theoretic computations are supported by the PyInform package.

class neet.Information(net, k, timesteps)[source]

A class to represent the \(k\)-history informational architecture of a network.

An Information is initialized with a network, a history length, and a time series length. A time series of the desired length is computed from each initial state of the network and used to populate probability distributions over the state transitions of each node. From there, any number of information or entropy measures may be applied.
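
The same kind of ensemble can be assembled by hand. The following is a minimal sketch, not necessarily how the class builds its internal series; it assumes Neet's trajectory method and that iterating over a network yields each of its states:

>>> import numpy as np
>>> from neet.boolean.examples import s_pombe
>>> # One trajectory per initial state; each trajectory is assumed to hold
>>> # timesteps + 1 node-state vectors (the initial state plus one per update).
>>> series = np.asarray([s_pombe.trajectory(x, timesteps=20) for x in s_pombe])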

The Information class provides three public attributes:

  • net – The network over which to compute the various information measures
  • k – The history length used to compute the various information measures
  • timesteps – The time series length used to compute the various information measures

The following measures can be computed and cached:

  • active_information – Get the local or average active information.
  • entropy_rate – Get the local or average entropy rate.
  • mutual_information – Get the local or average mutual information.
  • transfer_entropy – Get the local or average transfer entropy.

Examples

>>> arch = Information(s_pombe, k=5, timesteps=20)
>>> arch.active_information()
array([0.        , 0.4083436 , 0.62956679, 0.62956679, 0.37915718,
       0.40046165, 0.67019615, 0.67019615, 0.39189127])
Parameters:
  • net (neet.Network) – the network to analyze
  • k (int) – the history length
  • timesteps (int) – the number of timesteps to evaluate the network
net

The network over which to compute the various information measures

Note

The cached internal state of an Information instance, namely any pre-computed time series and information measures, is cleared when the network is changed.

Type: neet.Network
k

The history length used to compute the various information measures

Note

The cached internal state of an Information instance, namely any pre-computed time series and information measures, is cleared when the history length is changed.

Type: int
timesteps

The time series length used to compute the various information measures

Note

The cached internal state of an Information instance, namely any pre-computed time series and information measures, is cleared when the number of time steps is changed.

Type: int
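
The cache-clearing behavior described in the notes above can be exercised directly. A minimal sketch, assuming the three attributes are writable, as those notes imply:

>>> arch = Information(s_pombe, k=5, timesteps=20)
>>> ai_k5 = arch.active_information()   # computed and cached
>>> arch.k = 2                          # clears the cached series and measures
>>> ai_k2 = arch.active_information()   # recomputed with the new history length
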
active_information(local=False)[source]

Get the local or average active information.

Active information (AI) was introduced in [Lizier2012] to quantify information storage in distributed computation. AI is defined in terms of a temporally local variant

\[a_{X,i}(k) = \log_2 \frac{p(x^{(k)}_i, x_{i+1})}{p(x^{(k)}_i)p(x_{i+1})}\]

where the probabilities are constructed empirically from an entire time series. From this local variant, the temporally global active information is defined as

\[A_X(k) = \langle a_{X,i}(k) \rangle_{i} = \sum_{x^{(k)}_i,\, x_{i+1}} p(x^{(k)}_i, x_{i+1}) \log_2 \frac{p(x^{(k)}_i, x_{i+1})}{p(x^{(k)}_i)p(x_{i+1})}.\]
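
Since the core computations are delegated to PyInform, the node-level figures can be cross-checked against the raw time series. The following is a sketch, not the class's internal code; it reuses the series array assembled in the earlier sketch and relies on PyInform treating each row of a 2-D array as a separate time series (one row per initial state):

>>> from pyinform.activeinfo import active_info
>>> ai1 = active_info(series[:, :, 1], k=5)               # average AI of node 1
>>> lai1 = active_info(series[:, :, 1], k=5, local=True)  # time-local values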

Examples

>>> arch = Information(s_pombe, k=5, timesteps=20)
>>> arch.active_information()
array([0.        , 0.4083436 , 0.62956679, 0.62956679, 0.37915718,
       0.40046165, 0.67019615, 0.67019615, 0.39189127])
>>> lais = arch.active_information(local=True)
>>> lais[1]
array([[0.13079175, 0.13079175, 0.13079175, ..., 0.13079175, 0.13079175,
        0.13079175],
       [0.13079175, 0.13079175, 0.13079175, ..., 0.13079175, 0.13079175,
        0.13079175],
       ...,
       [0.13079175, 0.13079175, 0.13079175, ..., 0.13079175, 0.13079175,
        0.13079175],
       [0.13079175, 0.13079175, 0.13079175, ..., 0.13079175, 0.13079175,
        0.13079175]])
>>> np.mean(lais[1])
0.4083435...
Parameters: local (bool) – whether to return local (True) or global active information
Returns: a numpy.ndarray containing the (local) active information for every node in the network
entropy_rate(local=False)[source]

Get the local or average entropy rate.

Entropy rate quantifies the amount of information needed to describe a random variable — the state of a node in this case — given observations of its \(k\)-history. In other words, it is the entropy of the time series of a node’s state conditioned on its \(k\)-history. The time-local entropy rate

\[h_{X,i}(k) = -\log_2 \frac{p(x^{(k)}_i, x_{i+1})}{p(x^{(k)}_i)}\]

can be averaged to obtain the global entropy rate

\[H_X(k) = \langle h_{X,i}(k) \rangle_{i} = -\sum_{x^{(k)}_i,\, x_{i+1}} p(x^{(k)}_i, x_{i+1}) \log_2 \frac{p(x^{(k)}_i, x_{i+1})}{p(x^{(k)}_i)}.\]
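
The same PyInform cross-check works for entropy rate; again a sketch reusing the series array from the earlier example:

>>> from pyinform.entropyrate import entropy_rate
>>> er4 = entropy_rate(series[:, :, 4], k=5)               # average rate for node 4
>>> ler4 = entropy_rate(series[:, :, 4], k=5, local=True)  # time-local values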

Examples

>>> arch = Information(s_pombe, k=5, timesteps=20)
>>> arch.entropy_rate()
array([0.        , 0.01691208, 0.07280268, 0.07280268, 0.05841994,
       0.02479402, 0.03217332, 0.03217332, 0.08966941])
>>> ler = arch.entropy_rate(local=True)
>>> ler[4]
array([[0.        , 0.        , 0.        , ..., 0.00507099, 0.00507099,
        0.00507099],
       [0.        , 0.        , 0.        , ..., 0.00507099, 0.00507099,
        0.00507099],
       ...,
       [0.        , 0.29604946, 0.00507099, ..., 0.00507099, 0.00507099,
        0.00507099],
       [0.        , 0.29604946, 0.00507099, ..., 0.00507099, 0.00507099,
        0.00507099]])
Parameters: local (bool) – whether to return local (True) or global entropy rate
Returns: a numpy.ndarray containing the (local) entropy rate for every node in the network
transfer_entropy(local=False)[source]

Get the local or average transfer entropy.

Transfer entropy (TE) was introduced by [Schreiber2000] to quantify information transfer between an information source and destination, in this case a pair of nodes, conditioning out their shared history effects. TE is defined in terms of a time-local variant

\[t_{X \rightarrow Y, i}(k) = \log_2 \frac{p(y_{i+1}, x_i~|~y^{(k)}_i)} {p(y_{i+1}~|~y^{(k)}_i)p(x_i~|~y^{(k)}_i)}\]

Time averaging defines the global transfer entropy

\[T_{X \rightarrow Y}(k) = \langle t_{X \rightarrow Y, i}(k) \rangle_i.\]
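
A PyInform cross-check for a single source and target pair; a sketch reusing the series array from the earlier example:

>>> from pyinform.transferentropy import transfer_entropy
>>> # TE from node 4 (source) to node 3 (target), matching lte[4, 3] below.
>>> te43 = transfer_entropy(series[:, :, 4], series[:, :, 3], k=5)
>>> lte43 = transfer_entropy(series[:, :, 4], series[:, :, 3], k=5, local=True)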

Examples

>>> arch = Information(s_pombe, k=5, timesteps=20)
>>> arch.transfer_entropy()
array([[0.        , 0.        , 0.        , 0.        , 0.        ,
        0.        , 0.        , 0.        , 0.        ],
       [0.        , 0.        , 0.05137046, 0.05137046, 0.05841994,
        0.        , 0.01668983, 0.01668983, 0.0603037 ],
       ...,
       [0.        , 0.        , 0.00603879, 0.00603879, 0.04760206,
        0.02479402, 0.00298277, 0.        , 0.04892709],
       [0.        , 0.        , 0.07280268, 0.07280268, 0.        ,
        0.        , 0.03217332, 0.03217332, 0.        ]])

>>> lte = arch.transfer_entropy(local=True)
>>> lte[4,3]
array([[-1.03562391,  1.77173101,  0.        , ...,  0.        ,
         0.        ,  0.        ],
       [-1.03562391,  1.77173101,  0.        , ...,  0.        ,
         0.        ,  0.        ],
       [ 1.77173101,  0.        ,  0.        , ...,  0.        ,
         0.        ,  0.        ],
       ...,
       [ 0.        ,  0.        ,  0.        , ...,  0.        ,
         0.        ,  0.        ],
       [ 0.        ,  0.        ,  0.        , ...,  0.        ,
         0.        ,  0.        ],
       [ 0.        ,  0.        ,  0.        , ...,  0.        ,
         0.        ,  0.        ]])

The first and second indices of the resulting arrays are the source and target nodes, respectively.

Parameters: local (bool) – whether to return local (True) or global transfer entropy
Returns: a numpy.ndarray containing the (local) transfer entropy for every pair of nodes in the network
mutual_information(local=False)[source]

Get the local or average mutual information.

Mutual information is a measure of the amount of mutual dependence (correlation) between two random variables — nodes in this case. The time-local mutual information

\[i_{i}(X,Y) = \log_2 \frac{p(x_i, y_i)}{p(x_i)p(y_i)}\]

can be time-averaged to define the standard mutual information

\[I(X,Y) = \sum_{x_i, y_i} p(x_i, y_i) \log_2 \frac{p(x_i, y_i)}{p(x_i)p(y_i)}.\]
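
And the corresponding PyInform cross-check; a sketch reusing the series array from the earlier example. Mutual information has no temporal component here, so the ensemble of trajectories can be flattened into a single pair of samples:

>>> from pyinform.mutualinfo import mutual_info
>>> mi43 = mutual_info(series[:, :, 4].ravel(), series[:, :, 3].ravel())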

Examples

>>> arch = Information(s_pombe, k=5, timesteps=20)
>>> arch.mutual_information()
array([[0.16232618, 0.01374672, 0.00428548, 0.00428548, 0.01340937,
        0.01586238, 0.00516987, 0.00516987, 0.01102766],
       [0.01374672, 0.56660996, 0.00745714, 0.00745714, 0.00639113,
        0.32790848, 0.0067609 , 0.0067609 , 0.00468342],
       ...,
       [0.00516987, 0.0067609 , 0.4590254 , 0.4590254 , 0.17560769,
        0.00621124, 0.49349527, 0.80831657, 0.10390475],
       [0.01102766, 0.00468342, 0.12755745, 0.12755745, 0.01233356,
        0.00260667, 0.10390475, 0.10390475, 0.63423835]])
>>> lmi = arch.mutual_information(local=True)
>>> lmi[4,3]
array([[-0.67489772, -0.67489772, -0.67489772, ...,  0.18484073,
         0.18484073,  0.18484073],
       [-0.67489772, -0.67489772, -0.67489772, ...,  0.18484073,
         0.18484073,  0.18484073],
       ...,
       [-2.89794147,  1.7513014 ,  0.18484073, ...,  0.18484073,
         0.18484073,  0.18484073],
       [-2.89794147,  1.7513014 ,  0.18484073, ...,  0.18484073,
         0.18484073,  0.18484073]])
Parameters: local (bool) – whether to return local (True) or global mutual information
Returns: a numpy.ndarray containing the (local) mutual information for every pair of nodes in the network