spikingjelly.activation_based.neuron package

Module contents

class spikingjelly.activation_based.neuron.BaseNode(v_threshold: float = 1.0, v_reset: float = 0.0, surrogate_function: Callable = Sigmoid(alpha=4.0, spiking=True), detach_reset: bool = False, step_mode='s', backend='torch', store_v_seq: bool = False)[source]

Bases: MemoryModule

Parameters
  • v_threshold (float) – threshold voltage of this neurons layer

  • v_reset (float) – reset voltage of this neurons layer. If not None, the neuron's voltage will be set to v_reset after firing a spike. If None, the neuron's voltage will subtract v_threshold after firing a spike

  • surrogate_function (Callable) – the function for calculating surrogate gradients of the Heaviside step function in backward

  • detach_reset (bool) – whether to detach the computation graph of the reset process in backward

  • step_mode (str) – the step mode, which can be 's' (single-step) or 'm' (multi-step)

  • backend (str) – the backend of this neurons layer. Different values of step_mode may support different backends. The user can print self.supported_backends to check which backends the current step_mode supports. If supported, the 'cupy' backend gives the fastest training speed

  • store_v_seq (bool) – when using step_mode = 'm' and given input with shape = [T, N, *], this option controls whether the voltage at each time-step is stored in self.v_seq with shape = [T, N, *]. If set to False, only the voltage at the last time-step is stored in self.v with shape = [N, *]. Setting it to False is usually preferred, as it saves memory

This class is the base class of differentiable spiking neurons.
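A minimal usage sketch of the step modes and store_v_seq (BaseNode itself is abstract, so the snippet uses IFNode, a concrete subclass documented below):

    import torch
    from spikingjelly.activation_based import neuron, functional

    node = neuron.IFNode(step_mode='m', store_v_seq=True)
    x_seq = torch.rand(8, 4, 16)      # shape [T, N, *] with T=8, N=4
    spike_seq = node(x_seq)           # shape [T, N, *]
    print(node.v_seq.shape)           # torch.Size([8, 4, 16]): voltage at every time-step
    print(node.v.shape)               # torch.Size([4, 16]): voltage at the last time-step
    functional.reset_net(node)        # clear the hidden state before the next sample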

property store_v_seq
static jit_hard_reset(v: Tensor, spike: Tensor, v_reset: float)[source]
static jit_soft_reset(v: Tensor, spike: Tensor, v_threshold: float)[source]
abstract neuronal_charge(x: Tensor)[source]

Define the charging difference equation of the neuron. Subclasses must implement this function.
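For illustration, a minimal sketch of a custom neuron: the subclass only has to update self.v inside neuronal_charge, while BaseNode supplies the fire and reset steps. The class below is hypothetical:

    import torch
    from spikingjelly.activation_based import neuron

    class NoisyIFNode(neuron.BaseNode):
        # hypothetical example: an IF-like neuron whose charge step adds Gaussian noise
        def neuronal_charge(self, x: torch.Tensor):
            # H[t] = V[t-1] + X[t] + noise
            self.v = self.v + x + 0.01 * torch.randn_like(x)

    node = NoisyIFNode(v_threshold=1.0)
    spike = node(torch.rand(4, 8))    # charge -> fire -> reset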

neuronal_fire()[source]

Calculate the output spikes of neurons from their current membrane potential and threshold voltage.

neuronal_reset(spike)[source]

Reset the membrane potential according to the neurons' output spikes.

extra_repr()[source]
single_step_forward(x: Tensor)[source]

Parameters

x (torch.Tensor) – increment of voltage inputted to neurons

Returns

out spikes of neurons

Return type

torch.Tensor

Forward by the order of neuronal_charge, neuronal_fire, and neuronal_reset.
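A usage sketch of the single-step mode: the layer is called once per time-step, and the hidden state should be cleared between independent samples:

    import torch
    from spikingjelly.activation_based import neuron, functional

    node = neuron.IFNode(step_mode='s')
    x = torch.rand(4, 16)                 # input of one time-step, shape [N, *]
    out_spikes = []
    for t in range(8):                    # iterate over T=8 time-steps manually
        out_spikes.append(node(x))        # each call runs charge -> fire -> reset
    spike_seq = torch.stack(out_spikes)   # shape [T, N, *]
    functional.reset_net(node)            # reset self.v before the next sample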

multi_step_forward(x_seq: Tensor)[source]
v_float_to_tensor(x: Tensor)[source]
training: bool
class spikingjelly.activation_based.neuron.AdaptBaseNode(v_threshold: float = 1.0, v_reset: float = 0.0, v_rest: float = 0.0, w_rest: float = 0.0, tau_w: float = 2.0, a: float = 0.0, b: float = 0.0, surrogate_function: Callable = Sigmoid(alpha=4.0, spiking=True), detach_reset: bool = False, step_mode='s', backend='torch', store_v_seq: bool = False)[source]

Bases: BaseNode

static jit_neuronal_adaptation(w: Tensor, tau_w: float, a: float, v_rest: float, v: Tensor)[source]
neuronal_adaptation()[source]

Spike-triggered update of the adaptation current.

static jit_hard_reset(v: Tensor, w: Tensor, spike_d: Tensor, v_reset: float, b: float, spike: Tensor)[source]
static jit_soft_reset(v: Tensor, w: Tensor, spike_d: Tensor, v_threshold: float, b: float, spike: Tensor)[source]
neuronal_reset(spike)[source]

Reset the membrane potential according to the neurons' output spikes.

extra_repr()[source]
single_step_forward(x: Tensor)[source]
w_float_to_tensor(x: Tensor)[source]
training: bool
class spikingjelly.activation_based.neuron.IFNode(v_threshold: float = 1.0, v_reset: float = 0.0, surrogate_function: Callable = Sigmoid(alpha=4.0, spiking=True), detach_reset: bool = False, step_mode='s', backend='torch', store_v_seq: bool = False)[source]

Bases: BaseNode

Parameters
  • v_threshold (float) – threshold voltage of this neurons layer

  • v_reset (float) – reset voltage of this neurons layer. If not None, the neuron's voltage will be set to v_reset after firing a spike. If None, the neuron's voltage will subtract v_threshold after firing a spike

  • surrogate_function (Callable) – the function for calculating surrogate gradients of the Heaviside step function in backward

  • detach_reset (bool) – whether to detach the computation graph of the reset process in backward

  • step_mode (str) – the step mode, which can be 's' (single-step) or 'm' (multi-step)

  • backend (str) – the backend of this neurons layer. Different values of step_mode may support different backends. The user can print self.supported_backends to check which backends the current step_mode supports. If supported, the 'cupy' backend gives the fastest training speed

  • store_v_seq (bool) – when using step_mode = 'm' and given input with shape = [T, N, *], this option controls whether the voltage at each time-step is stored in self.v_seq with shape = [T, N, *]. If set to False, only the voltage at the last time-step is stored in self.v with shape = [N, *]. Setting it to False is usually preferred, as it saves memory

The Integrate-and-Fire neuron, which can be seen as an ideal integrator: its voltage stays constant when there is no input and does not decay as that of the LIF neuron. Its subthreshold neural dynamics are:

\[H[t] = V[t-1] + X[t]\]
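A small sketch of the ideal-integrator behavior: with subthreshold input the voltage is exactly the running sum of the inputs, with no decay:

    import torch
    from spikingjelly.activation_based import neuron

    node = neuron.IFNode(v_threshold=1.0)
    x = torch.full((1, 4), 0.2)
    for t in range(3):
        node(x)
    print(node.v)    # 0.6 everywhere: V = 0.2 + 0.2 + 0.2, below threshold so no reset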
property supported_backends
neuronal_charge(x: Tensor)[source]
static jit_eval_single_step_forward_hard_reset(x: Tensor, v: Tensor, v_threshold: float, v_reset: float)[source]
static jit_eval_single_step_forward_soft_reset(x: Tensor, v: Tensor, v_threshold: float)[source]
static jit_eval_multi_step_forward_hard_reset(x_seq: Tensor, v: Tensor, v_threshold: float, v_reset: float)[source]
static jit_eval_multi_step_forward_hard_reset_with_v_seq(x_seq: Tensor, v: Tensor, v_threshold: float, v_reset: float)[source]
static jit_eval_multi_step_forward_soft_reset(x_seq: Tensor, v: Tensor, v_threshold: float)[source]
static jit_eval_multi_step_forward_soft_reset_with_v_seq(x_seq: Tensor, v: Tensor, v_threshold: float)[source]
multi_step_forward(x_seq: Tensor)[source]
single_step_forward(x: Tensor)[source]
training: bool
class spikingjelly.activation_based.neuron.LIFNode(tau: float = 2.0, decay_input: bool = True, v_threshold: float = 1.0, v_reset: float = 0.0, surrogate_function: Callable = Sigmoid(alpha=4.0, spiking=True), detach_reset: bool = False, step_mode='s', backend='torch', store_v_seq: bool = False)[source]

Bases: BaseNode

Parameters
  • tau (float) – membrane time constant

  • decay_input (bool) – whether the input will also decay

  • v_threshold (float) – threshold voltage of this neurons layer

  • v_reset (float) – reset voltage of this neurons layer. If not None, the neuron's voltage will be set to v_reset after firing a spike. If None, the neuron's voltage will subtract v_threshold after firing a spike

  • surrogate_function (Callable) – the function for calculating surrogate gradients of the Heaviside step function in backward

  • detach_reset (bool) – whether to detach the computation graph of the reset process in backward

  • step_mode (str) – the step mode, which can be 's' (single-step) or 'm' (multi-step)

  • backend (str) – the backend of this neurons layer. Different values of step_mode may support different backends. The user can print self.supported_backends to check which backends the current step_mode supports. If supported, the 'cupy' backend gives the fastest training speed

  • store_v_seq (bool) – when using step_mode = 'm' and given input with shape = [T, N, *], this option controls whether the voltage at each time-step is stored in self.v_seq with shape = [T, N, *]. If set to False, only the voltage at the last time-step is stored in self.v with shape = [N, *]. Setting it to False is usually preferred, as it saves memory

The Leaky Integrate-and-Fire neuron, which can be seen as a leaky integrator. Its subthreshold neural dynamics are:

If decay_input == True:

\[H[t] = V[t-1] + \frac{1}{\tau}(X[t] - (V[t-1] - V_{reset}))\]

If decay_input == False:

\[H[t] = V[t-1] - \frac{1}{\tau}(V[t-1] - V_{reset}) + X[t]\]
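A sketch contrasting the two charge equations above, using tau = 2, a single constant input, and the default v_reset = 0 (no spike is fired, so only the charge step acts):

    import torch
    from spikingjelly.activation_based import neuron

    x = torch.tensor([[0.5]])

    lif_decay = neuron.LIFNode(tau=2.0, decay_input=True)
    lif_decay(x)
    print(lif_decay.v)     # 0.25: H = V + (X - (V - V_reset)) / tau with V = 0

    lif_no_decay = neuron.LIFNode(tau=2.0, decay_input=False)
    lif_no_decay(x)
    print(lif_no_decay.v)  # 0.50: H = V - (V - V_reset) / tau + X with V = 0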
property supported_backends
extra_repr()[source]
neuronal_charge(x: Tensor)[source]
static neuronal_charge_decay_input_reset0(x: Tensor, v: Tensor, tau: float)[source]
static neuronal_charge_decay_input(x: Tensor, v: Tensor, v_reset: float, tau: float)[source]
static neuronal_charge_no_decay_input_reset0(x: Tensor, v: Tensor, tau: float)[source]
static neuronal_charge_no_decay_input(x: Tensor, v: Tensor, v_reset: float, tau: float)[source]
static jit_eval_single_step_forward_hard_reset_decay_input(x: Tensor, v: Tensor, v_threshold: float, v_reset: float, tau: float)[source]
static jit_eval_single_step_forward_hard_reset_no_decay_input(x: Tensor, v: Tensor, v_threshold: float, v_reset: float, tau: float)[source]
static jit_eval_single_step_forward_soft_reset_decay_input(x: Tensor, v: Tensor, v_threshold: float, tau: float)[source]
static jit_eval_single_step_forward_soft_reset_no_decay_input(x: Tensor, v: Tensor, v_threshold: float, tau: float)[source]
static jit_eval_multi_step_forward_hard_reset_decay_input(x_seq: Tensor, v: Tensor, v_threshold: float, v_reset: float, tau: float)[source]
static jit_eval_multi_step_forward_hard_reset_decay_input_with_v_seq(x_seq: Tensor, v: Tensor, v_threshold: float, v_reset: float, tau: float)[source]
static jit_eval_multi_step_forward_hard_reset_no_decay_input(x_seq: Tensor, v: Tensor, v_threshold: float, v_reset: float, tau: float)[source]
static jit_eval_multi_step_forward_hard_reset_no_decay_input_with_v_seq(x_seq: Tensor, v: Tensor, v_threshold: float, v_reset: float, tau: float)[source]
static jit_eval_multi_step_forward_soft_reset_decay_input(x_seq: Tensor, v: Tensor, v_threshold: float, tau: float)[source]
static jit_eval_multi_step_forward_soft_reset_decay_input_with_v_seq(x_seq: Tensor, v: Tensor, v_threshold: float, tau: float)[source]
static jit_eval_multi_step_forward_soft_reset_no_decay_input(x_seq: Tensor, v: Tensor, v_threshold: float, tau: float)[source]
static jit_eval_multi_step_forward_soft_reset_no_decay_input_with_v_seq(x_seq: Tensor, v: Tensor, v_threshold: float, tau: float)[source]
single_step_forward(x: Tensor)[source]
multi_step_forward(x_seq: Tensor)[source]
training: bool
class spikingjelly.activation_based.neuron.ParametricLIFNode(init_tau: float = 2.0, decay_input: bool = True, v_threshold: float = 1.0, v_reset: float = 0.0, surrogate_function: Callable = Sigmoid(alpha=4.0, spiking=True), detach_reset: bool = False, step_mode='s', backend='torch', store_v_seq: bool = False)[source]

Bases: BaseNode

Parameters
  • init_tau (float) – the initial value of the membrane time constant

  • decay_input (bool) – whether the input will also decay

  • v_threshold (float) – threshold voltage of this neurons layer

  • v_reset (float) – reset voltage of this neurons layer. If not None, the neuron's voltage will be set to v_reset after firing a spike. If None, the neuron's voltage will subtract v_threshold after firing a spike

  • surrogate_function (Callable) – the function for calculating surrogate gradients of the Heaviside step function in backward

  • detach_reset (bool) – whether to detach the computation graph of the reset process in backward

  • step_mode (str) – the step mode, which can be 's' (single-step) or 'm' (multi-step)

  • backend (str) – the backend of this neurons layer. Different values of step_mode may support different backends. The user can print self.supported_backends to check which backends the current step_mode supports. If supported, the 'cupy' backend gives the fastest training speed

  • store_v_seq (bool) – when using step_mode = 'm' and given input with shape = [T, N, *], this option controls whether the voltage at each time-step is stored in self.v_seq with shape = [T, N, *]. If set to False, only the voltage at the last time-step is stored in self.v with shape = [N, *]. Setting it to False is usually preferred, as it saves memory

  • cupy_fp32_inference (bool) – if True, and this module is in eval mode, uses float32, runs on a GPU, and cupy is installed, then this module will use cupy for acceleration. This option has priority over backend

The Parametric Leaky Integrate-and-Fire (PLIF) neuron, proposed in Incorporating Learnable Membrane Time Constant to Enhance Learning of Spiking Neural Networks, which can be seen as a leaky integrator. Its subthreshold neural dynamics are:

If decay_input == True:

\[H[t] = V[t-1] + \frac{1}{\tau}(X[t] - (V[t-1] - V_{reset}))\]

If decay_input == False:

\[H[t] = V[t-1] - \frac{1}{\tau}(V[t-1] - V_{reset}) + X[t]\]

where \(\frac{1}{\tau} = {\rm Sigmoid}(w)\) and \(w\) is a learnable parameter.
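A sketch of the learnable time constant: since \(\frac{1}{\tau} = {\rm Sigmoid}(w)\) with trainable \(w\), the module exposes a parameter that receives gradients through the surrogate function (the exact parameter name reported by named_parameters is implementation-defined):

    import torch
    from spikingjelly.activation_based import neuron

    plif = neuron.ParametricLIFNode(init_tau=2.0)
    print(list(plif.named_parameters()))  # the learnable parameter implementing w

    spike = plif(torch.rand(4, 8))
    spike.sum().backward()                # gradients reach w, so tau itself is trained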

property supported_backends
extra_repr()[source]
neuronal_charge(x: Tensor)[source]
multi_step_forward(x_seq: Tensor)[source]
training: bool
class spikingjelly.activation_based.neuron.QIFNode(tau: float = 2.0, v_c: float = 0.8, a0: float = 1.0, v_threshold: float = 1.0, v_rest: float = 0.0, v_reset: float = -0.1, surrogate_function: Callable = Sigmoid(alpha=4.0, spiking=True), detach_reset: bool = False, step_mode='s', backend='torch', store_v_seq: bool = False)[source]

Bases: BaseNode

Parameters
  • tau (float) – membrane time constant

  • v_c (float) – critical voltage

  • a0 (float) – coefficient of the quadratic term in the charge equation

  • v_threshold (float) – threshold voltage of this neurons layer

  • v_rest (float) – resting potential

  • v_reset (float) – reset voltage of this neurons layer. If not None, the neuron's voltage will be set to v_reset after firing a spike. If None, the neuron's voltage will subtract v_threshold after firing a spike

  • surrogate_function (Callable) – the function for calculating surrogate gradients of the Heaviside step function in backward

  • detach_reset (bool) – whether to detach the computation graph of the reset process in backward

  • step_mode (str) – the step mode, which can be 's' (single-step) or 'm' (multi-step)

  • backend (str) – the backend of this neurons layer. Different values of step_mode may support different backends. The user can print self.supported_backends to check which backends the current step_mode supports. If supported, the 'cupy' backend gives the fastest training speed

  • store_v_seq (bool) – when using step_mode = 'm' and given input with shape = [T, N, *], this option controls whether the voltage at each time-step is stored in self.v_seq with shape = [T, N, *]. If set to False, only the voltage at the last time-step is stored in self.v with shape = [N, *]. Setting it to False is usually preferred, as it saves memory

The Quadratic Integrate-and-Fire neuron, a nonlinear integrate-and-fire model that is also an approximation of the Exponential Integrate-and-Fire model. Its subthreshold neural dynamics are:

\[H[t] = V[t-1] + \frac{1}{\tau}(X[t] + a_0 (V[t-1] - V_{rest})(V[t-1] - V_c))\]
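A numeric sketch of the quadratic term with the default constants: at V[t-1] = V_rest the term vanishes, while between V_rest and V_c it is negative and slows the charging down:

    import torch
    from spikingjelly.activation_based import neuron

    qif = neuron.QIFNode(tau=2.0, v_c=0.8, a0=1.0)
    x = torch.full((1, 4), 0.1)
    qif(x)
    print(qif.v)   # 0.05: the quadratic term is zero since V[t-1] = V_rest = 0
    qif(x)
    print(qif.v)   # about 0.081, below the linear 0.10: (V - V_rest)(V - V_c) < 0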
extra_repr()[source]
neuronal_charge(x: Tensor)[source]
property supported_backends
multi_step_forward(x_seq: Tensor)[source]
training: bool
class spikingjelly.activation_based.neuron.EIFNode(tau: float = 2.0, delta_T: float = 1.0, theta_rh: float = 0.8, v_threshold: float = 1.0, v_rest: float = 0.0, v_reset: float = -0.1, surrogate_function: Callable = Sigmoid(alpha=4.0, spiking=True), detach_reset: bool = False, step_mode='s', backend='torch', store_v_seq: bool = False)[source]

Bases: BaseNode

Parameters
  • tau (float) – membrane time constant

  • delta_T (float) – sharpness parameter

  • theta_rh (float) – rheobase threshold

  • v_threshold (float) – threshold voltage of this neurons layer

  • v_rest (float) – resting potential

  • v_reset (float) – reset voltage of this neurons layer. If not None, the neuron's voltage will be set to v_reset after firing a spike. If None, the neuron's voltage will subtract v_threshold after firing a spike

  • surrogate_function (Callable) – the function for calculating surrogate gradients of the Heaviside step function in backward

  • detach_reset (bool) – whether to detach the computation graph of the reset process in backward

  • step_mode (str) – the step mode, which can be 's' (single-step) or 'm' (multi-step)

  • backend (str) – the backend of this neurons layer. Different values of step_mode may support different backends. The user can print self.supported_backends to check which backends the current step_mode supports. If supported, the 'cupy' backend gives the fastest training speed

  • store_v_seq (bool) – when using step_mode = 'm' and given input with shape = [T, N, *], this option controls whether the voltage at each time-step is stored in self.v_seq with shape = [T, N, *]. If set to False, only the voltage at the last time-step is stored in self.v with shape = [N, *]. Setting it to False is usually preferred, as it saves memory

The Exponential Integrate-and-Fire neuron, a nonlinear integrate-and-fire model and a one-dimensional model derived by simplifying the Hodgkin-Huxley model. It degenerates to the LIF model when \(\Delta_T\to 0\). Its subthreshold neural dynamics are:

\[H[t] = V[t-1] + \frac{1}{\tau}\left(X[t] - (V[t-1] - V_{rest}) + \Delta_T\exp\left(\frac{V[t-1] - \theta_{rh}}{\Delta_T}\right)\right)\]
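A sketch of the \(\Delta_T\to 0\) limit: with a very small delta_T the exponential term vanishes below theta_rh, so one charge step matches a LIF neuron with decay_input=True (here V_rest = V_reset = 0 and no spike is fired):

    import torch
    from spikingjelly.activation_based import neuron

    x = torch.full((1, 4), 0.3)

    eif = neuron.EIFNode(tau=2.0, delta_T=0.001, theta_rh=0.8)
    lif = neuron.LIFNode(tau=2.0, decay_input=True)
    eif(x)
    lif(x)
    print(eif.v, lif.v)   # both 0.15: H = V + (X - V) / tau with V = 0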
extra_repr()[source]
neuronal_charge(x: Tensor)[source]
property supported_backends
multi_step_forward(x_seq: Tensor)[source]
training: bool
class spikingjelly.activation_based.neuron.IzhikevichNode(tau: float = 2.0, v_c: float = 0.8, a0: float = 1.0, v_threshold: float = 1.0, v_reset: float = 0.0, v_rest: float = -0.1, w_rest: float = 0.0, tau_w: float = 2.0, a: float = 0.0, b: float = 0.0, surrogate_function: Callable = Sigmoid(alpha=4.0, spiking=True), detach_reset: bool = False, step_mode='s', backend='torch', store_v_seq: bool = False)[source]

Bases: AdaptBaseNode

extra_repr()[source]
neuronal_charge(x: Tensor)[source]
property supported_backends
multi_step_forward(x_seq: Tensor)[source]
training: bool
class spikingjelly.activation_based.neuron.LIAFNode(act: Callable, threshold_related: bool, *args, **kwargs)[source]

Bases: LIFNode

Parameters
  • act (Callable) – the activation function

  • threshold_related (bool) – whether the neuron uses the threshold-related (TR) mode. If True, y = act(h - v_th); otherwise, y = act(h)

Other parameters in *args, **kwargs are the same as those of LIFNode.

The LIAF neuron proposed in LIAF-Net: Leaky Integrate and Analog Fire Network for Lightweight and Efficient Spatiotemporal Information Processing. LIAFNode behaves the same as LIFNode, but outputs self.act(...) rather than spikes.

Warning

The outputs of this neurons layer are not binary spikes.
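A minimal usage sketch showing the analog outputs (act and threshold_related are the required arguments; the rest follows LIFNode):

    import torch
    import torch.nn as nn
    from spikingjelly.activation_based import neuron

    liaf = neuron.LIAFNode(act=nn.ReLU(), threshold_related=False)
    y = liaf(torch.rand(4, 8))
    print(y)    # analog values act(h), not 0/1 spikes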

property supported_backends
single_step_forward(x: Tensor)[source]
training: bool
class spikingjelly.activation_based.neuron.KLIFNode(scale_reset: bool = False, tau: float = 2.0, decay_input: bool = True, v_threshold: float = 1.0, v_reset: float = 0.0, surrogate_function: Callable = Sigmoid(alpha=4.0, spiking=True), detach_reset: bool = False, step_mode='s', backend='torch', store_v_seq: bool = False)[source]

Bases: BaseNode

Parameters
  • scale_reset (bool) – whether to scale v in neuronal_reset

  • tau (float) – membrane time constant

  • decay_input (bool) – whether the input will also decay

  • v_threshold (float) – threshold voltage of this neurons layer

  • v_reset (float) – reset voltage of this neurons layer. If not None, the neuron's voltage will be set to v_reset after firing a spike. If None, the neuron's voltage will subtract v_threshold after firing a spike

  • surrogate_function (Callable) – the function for calculating surrogate gradients of the Heaviside step function in backward

  • detach_reset (bool) – whether to detach the computation graph of the reset process in backward

  • step_mode (str) – the step mode, which can be 's' (single-step) or 'm' (multi-step)

  • backend (str) – the backend of this neurons layer. Different values of step_mode may support different backends. The user can print self.supported_backends to check which backends the current step_mode supports. If supported, the 'cupy' backend gives the fastest training speed

  • store_v_seq (bool) – when using step_mode = 'm' and given input with shape = [T, N, *], this option controls whether the voltage at each time-step is stored in self.v_seq with shape = [T, N, *]. If set to False, only the voltage at the last time-step is stored in self.v with shape = [N, *]. Setting it to False is usually preferred, as it saves memory

The K-based Leaky Integrate-and-Fire neuron proposed in KLIF: An optimized spiking neuron unit for tuning surrogate gradient slope and membrane potential, which can be seen as a leaky integrator. Its subthreshold neural dynamics are:

If decay_input == True:

\[H[t] = V[t-1] + \frac{1}{\tau}(X[t] - (V[t-1] - V_{reset}))\]

If decay_input == False:

\[H[t] = V[t-1] - \frac{1}{\tau}(V[t-1] - V_{reset}) + X[t]\]

Note that the fire and reset of the KLIF neuron differ from those of ordinary neurons:

\[ \begin{align}\begin{aligned}F[t] &= \mathrm{ReLU}(kH[t])\\S[t] &= \Theta(F[t] - V_{th})\end{aligned}\end{align} \]

If scale_reset == False, then

\[\begin{split}V[t] = \begin{cases} F[t](1-S[t]) + V_{reset}S[t], & \text{hard reset} \\ F[t] - S[t]V_{th}, & \text{soft reset} \end{cases}\end{split}\]

If scale_reset == True, then

\[\begin{split}V[t] = \begin{cases} \frac{F[t]}{k}(1-S[t]) + V_{reset}S[t], & \text{hard reset} \\ \frac{1}{k}(F[t] - S[t]V_{th}), & \text{soft reset} \end{cases}\end{split}\]
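A usage sketch; judging from the neuronal_charge signatures below, the scaling factor k is held as a tensor parameter of the module (an assumption here), so it can be trained together with the network weights:

    import torch
    from spikingjelly.activation_based import neuron

    klif = neuron.KLIFNode(scale_reset=True, tau=2.0)
    print(list(klif.named_parameters()))   # expected to contain k (assumption)

    spike = klif(torch.rand(4, 8))         # F[t] = ReLU(k * H[t]) keeps the voltage non-negative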
static neuronal_charge_decay_input(x: Tensor, v: Tensor, v_reset: float, tau: float, k: Tensor)[source]
static neuronal_charge_no_decay_input(x: Tensor, v: Tensor, v_reset: float, tau: float, k: Tensor)[source]
neuronal_charge(x: Tensor)[source]
training: bool
neuronal_reset(spike)[source]