spikingjelly.activation_based.neuron package

Module contents

class spikingjelly.activation_based.neuron.SimpleBaseNode(v_threshold: float = 1.0, v_reset: float = 0.0, surrogate_function: Callable = Sigmoid(alpha=4.0, spiking=True), detach_reset: bool = False, step_mode='s')[source]

Bases: MemoryModule

A simple version of BaseNode. The user can easily modify this neuron.

single_step_forward(x: Tensor)[source]
neuronal_charge(x: Tensor)[source]
neuronal_fire()[source]
neuronal_reset(spike)[source]
training: bool
class spikingjelly.activation_based.neuron.BaseNode(v_threshold: float = 1.0, v_reset: float = 0.0, surrogate_function: Callable = Sigmoid(alpha=4.0, spiking=True), detach_reset: bool = False, step_mode='s', backend='torch', store_v_seq: bool = False)[source]

Bases: MemoryModule


Parameters:
  • v_threshold (float) – threshold of this neurons layer

  • v_reset (float) – reset voltage of this neurons layer. If not None, the neuron’s voltage will be set to v_reset after firing a spike. If None, v_threshold will be subtracted from the neuron’s voltage after firing a spike

  • surrogate_function (Callable) – the function for calculating surrogate gradients of the Heaviside step function in backward

  • detach_reset (bool) – whether to detach the computation graph of reset in backward

  • step_mode (str) – the step mode, which can be s (single-step) or m (multi-step)

  • backend (str) – backend for this neurons layer. Different step_mode may support different backends. The user can print self.supported_backends to check which backends are supported by the current step_mode. If supported, using the 'cupy' backend will have the fastest training speed

  • store_v_seq (bool) – when using step_mode = 'm' and given input with shape = [T, N, *], this option controls whether storing the voltage at each time-step to self.v_seq with shape = [T, N, *]. If set to False, only the voltage at the last time-step will be stored to self.v with shape = [N, *], which can reduce the memory consumption

This class is the base class of differentiable spiking neurons.
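The two reset rules selected by v_reset (and implemented by jit_hard_reset and jit_soft_reset) can be sketched in plain Python on scalar values; the helper names below are illustrative, not part of the library API:

```python
def hard_reset(h, spike, v_reset):
    # v_reset is not None: V[t] = H[t] * (1 - S[t]) + v_reset * S[t]
    return h * (1.0 - spike) + v_reset * spike

def soft_reset(h, spike, v_threshold):
    # v_reset is None: V[t] = H[t] - v_threshold * S[t]
    return h - v_threshold * spike

# a neuron that fired (spike = 1.0) at membrane potential h = 1.5
v_hard = hard_reset(1.5, 1.0, 0.0)   # pulled back to v_reset = 0.0
v_soft = soft_reset(1.5, 1.0, 1.0)   # threshold subtracted, residual kept
```

Soft reset keeps the above-threshold residual of the membrane potential instead of discarding it, while hard reset forgets how far above threshold the neuron was.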

property store_v_seq
static jit_hard_reset(v: Tensor, spike: Tensor, v_reset: float)[source]
static jit_soft_reset(v: Tensor, spike: Tensor, v_threshold: float)[source]
abstract neuronal_charge(x: Tensor)[source]

Define the charge difference equation. Subclasses must implement this function.

neuronal_fire()[source]

Calculate output spikes of neurons from their current membrane potential and threshold voltage.

neuronal_reset(spike)[source]

Reset the membrane potential according to neurons’ output spikes.

extra_repr()[source]
single_step_forward(x: Tensor)[source]

Parameters:

x (torch.Tensor) – increment of voltage inputted to neurons

Returns:

out spikes of neurons

Return type:

torch.Tensor

Forward by the order of neuronal_charge, neuronal_fire, and neuronal_reset.

multi_step_forward(x_seq: Tensor)[source]
v_float_to_tensor(x: Tensor)[source]
training: bool
class spikingjelly.activation_based.neuron.AdaptBaseNode(v_threshold: float = 1.0, v_reset: float = 0.0, v_rest: float = 0.0, w_rest: float = 0.0, tau_w: float = 2.0, a: float = 0.0, b: float = 0.0, surrogate_function: Callable = Sigmoid(alpha=4.0, spiking=True), detach_reset: bool = False, step_mode='s', backend='torch', store_v_seq: bool = False)[source]

Bases: BaseNode

static jit_neuronal_adaptation(w: Tensor, tau_w: float, a: float, v_rest: float, v: Tensor)[source]
neuronal_adaptation()[source]

Spike-triggered update of adaptation current.

static jit_hard_reset(v: Tensor, w: Tensor, spike_d: Tensor, v_reset: float, b: float, spike: Tensor)[source]
static jit_soft_reset(v: Tensor, w: Tensor, spike_d: Tensor, v_threshold: float, b: float, spike: Tensor)[source]
neuronal_reset(spike)[source]

Reset the membrane potential according to neurons’ output spikes.

extra_repr()[source]
single_step_forward(x: Tensor)[source]
w_float_to_tensor(x: Tensor)[source]
training: bool
class spikingjelly.activation_based.neuron.IFNode(v_threshold: float = 1.0, v_reset: float = 0.0, surrogate_function: Callable = Sigmoid(alpha=4.0, spiking=True), detach_reset: bool = False, step_mode='s', backend='torch', store_v_seq: bool = False)[source]

Bases: BaseNode

Parameters:
  • v_threshold (float) – threshold of this neurons layer

  • v_reset (float) – reset voltage of this neurons layer. If not None, the neuron’s voltage will be set to v_reset after firing a spike. If None, v_threshold will be subtracted from the neuron’s voltage after firing a spike

  • surrogate_function (Callable) – the function for calculating surrogate gradients of the Heaviside step function in backward

  • detach_reset (bool) – whether to detach the computation graph of reset in backward

  • step_mode (str) – the step mode, which can be s (single-step) or m (multi-step)

  • backend (str) – backend for this neurons layer. Different step_mode may support different backends. The user can print self.supported_backends to check which backends are supported by the current step_mode. If supported, using the 'cupy' backend will have the fastest training speed

  • store_v_seq (bool) – when using step_mode = 'm' and given input with shape = [T, N, *], this option controls whether storing the voltage at each time-step to self.v_seq with shape = [T, N, *]. If set to False, only the voltage at the last time-step will be stored to self.v with shape = [N, *], which can reduce the memory consumption

The Integrate-and-Fire neuron, which can be seen as an ideal integrator: with no input its voltage remains constant, and it does not decay as that of the LIF neuron does. Its subthreshold neural dynamics are as follows:

\[H[t] = V[t-1] + X[t]\]
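As a concrete illustration, the charge-fire-reset cycle can be mirrored in a few lines of plain Python (scalar potential, hard reset; if_single_step is an illustrative name, not the library API):

```python
def if_single_step(v, x, v_threshold=1.0, v_reset=0.0):
    h = v + x                                 # charge: H[t] = V[t-1] + X[t]
    spike = 1.0 if h >= v_threshold else 0.0  # fire: S[t] = Theta(H[t] - V_th)
    v = v_reset if spike == 1.0 else h        # hard reset
    return spike, v

v, spikes = 0.0, []
for x in (0.4, 0.4, 0.4):
    s, v = if_single_step(v, x)
    spikes.append(s)
```

With a constant input of 0.4 the potential integrates without decay (0.4, 0.8, 1.2) and the neuron fires on the third step, after which v is reset to 0.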
property supported_backends
neuronal_charge(x: Tensor)[source]
static jit_eval_single_step_forward_hard_reset(x: Tensor, v: Tensor, v_threshold: float, v_reset: float)[source]
static jit_eval_single_step_forward_soft_reset(x: Tensor, v: Tensor, v_threshold: float)[source]
static jit_eval_multi_step_forward_hard_reset(x_seq: Tensor, v: Tensor, v_threshold: float, v_reset: float)[source]
static jit_eval_multi_step_forward_hard_reset_with_v_seq(x_seq: Tensor, v: Tensor, v_threshold: float, v_reset: float)[source]
static jit_eval_multi_step_forward_soft_reset(x_seq: Tensor, v: Tensor, v_threshold: float)[source]
static jit_eval_multi_step_forward_soft_reset_with_v_seq(x_seq: Tensor, v: Tensor, v_threshold: float)[source]
multi_step_forward(x_seq: Tensor)[source]
single_step_forward(x: Tensor)[source]
training: bool
class spikingjelly.activation_based.neuron.LIFNode(tau: float = 2.0, decay_input: bool = True, v_threshold: float = 1.0, v_reset: float = 0.0, surrogate_function: Callable = Sigmoid(alpha=4.0, spiking=True), detach_reset: bool = False, step_mode='s', backend='torch', store_v_seq: bool = False)[source]

Bases: BaseNode

Parameters:
  • tau (float) – membrane time constant

  • decay_input (bool) – whether the input will decay

  • v_threshold (float) – threshold of this neurons layer

  • v_reset (float) – reset voltage of this neurons layer. If not None, the neuron’s voltage will be set to v_reset after firing a spike. If None, v_threshold will be subtracted from the neuron’s voltage after firing a spike

  • surrogate_function (Callable) – the function for calculating surrogate gradients of the Heaviside step function in backward

  • detach_reset (bool) – whether to detach the computation graph of reset in backward

  • step_mode (str) – the step mode, which can be s (single-step) or m (multi-step)

  • backend (str) – backend for this neurons layer. Different step_mode may support different backends. The user can print self.supported_backends to check which backends are supported by the current step_mode. If supported, using the 'cupy' backend will have the fastest training speed

  • store_v_seq (bool) – when using step_mode = 'm' and given input with shape = [T, N, *], this option controls whether storing the voltage at each time-step to self.v_seq with shape = [T, N, *]. If set to False, only the voltage at the last time-step will be stored to self.v with shape = [N, *], which can reduce the memory consumption

The Leaky Integrate-and-Fire neuron, which can be seen as a leaky integrator. Its subthreshold neural dynamics are as follows:

IF decay_input == True:

\[H[t] = V[t-1] + \frac{1}{\tau}(X[t] - (V[t-1] - V_{reset}))\]

IF decay_input == False:

\[H[t] = V[t-1] - \frac{1}{\tau}(V[t-1] - V_{reset}) + X[t]\]
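The difference between the two decay_input modes is easy to see in a scalar sketch of the two charge equations (lif_charge is an illustrative helper, not the library API):

```python
def lif_charge(v, x, tau=2.0, v_reset=0.0, decay_input=True):
    if decay_input:
        # H[t] = V[t-1] + (X[t] - (V[t-1] - V_reset)) / tau
        return v + (x - (v - v_reset)) / tau
    # H[t] = V[t-1] - (V[t-1] - V_reset) / tau + X[t]
    return v - (v - v_reset) / tau + x

h_decay = lif_charge(0.0, 1.0)                        # the input is scaled by 1/tau
h_no_decay = lif_charge(0.0, 1.0, decay_input=False)  # the input is added in full
```

Ignoring the threshold, a constant input x drives the potential towards v_reset + x when decay_input=True, but towards v_reset + tau * x when decay_input=False.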
property supported_backends
extra_repr()[source]
neuronal_charge(x: Tensor)[source]
static neuronal_charge_decay_input_reset0(x: Tensor, v: Tensor, tau: float)[source]
static neuronal_charge_decay_input(x: Tensor, v: Tensor, v_reset: float, tau: float)[source]
static neuronal_charge_no_decay_input_reset0(x: Tensor, v: Tensor, tau: float)[source]
static neuronal_charge_no_decay_input(x: Tensor, v: Tensor, v_reset: float, tau: float)[source]
static jit_eval_single_step_forward_hard_reset_decay_input(x: Tensor, v: Tensor, v_threshold: float, v_reset: float, tau: float)[source]
static jit_eval_single_step_forward_hard_reset_no_decay_input(x: Tensor, v: Tensor, v_threshold: float, v_reset: float, tau: float)[source]
static jit_eval_single_step_forward_soft_reset_decay_input(x: Tensor, v: Tensor, v_threshold: float, tau: float)[source]
static jit_eval_single_step_forward_soft_reset_no_decay_input(x: Tensor, v: Tensor, v_threshold: float, tau: float)[source]
static jit_eval_multi_step_forward_hard_reset_decay_input(x_seq: Tensor, v: Tensor, v_threshold: float, v_reset: float, tau: float)[source]
static jit_eval_multi_step_forward_hard_reset_decay_input_with_v_seq(x_seq: Tensor, v: Tensor, v_threshold: float, v_reset: float, tau: float)[source]
static jit_eval_multi_step_forward_hard_reset_no_decay_input(x_seq: Tensor, v: Tensor, v_threshold: float, v_reset: float, tau: float)[source]
static jit_eval_multi_step_forward_hard_reset_no_decay_input_with_v_seq(x_seq: Tensor, v: Tensor, v_threshold: float, v_reset: float, tau: float)[source]
static jit_eval_multi_step_forward_soft_reset_decay_input(x_seq: Tensor, v: Tensor, v_threshold: float, tau: float)[source]
static jit_eval_multi_step_forward_soft_reset_decay_input_with_v_seq(x_seq: Tensor, v: Tensor, v_threshold: float, tau: float)[source]
static jit_eval_multi_step_forward_soft_reset_no_decay_input(x_seq: Tensor, v: Tensor, v_threshold: float, tau: float)[source]
static jit_eval_multi_step_forward_soft_reset_no_decay_input_with_v_seq(x_seq: Tensor, v: Tensor, v_threshold: float, tau: float)[source]
single_step_forward(x: Tensor)[source]
multi_step_forward(x_seq: Tensor)[source]
training: bool
class spikingjelly.activation_based.neuron.ParametricLIFNode(init_tau: float = 2.0, decay_input: bool = True, v_threshold: float = 1.0, v_reset: float = 0.0, surrogate_function: Callable = Sigmoid(alpha=4.0, spiking=True), detach_reset: bool = False, step_mode='s', backend='torch', store_v_seq: bool = False)[source]

Bases: BaseNode


Parameters:
  • init_tau (float) – the initial value of membrane time constant

  • decay_input (bool) – whether the input will decay

  • v_threshold (float) – threshold of this neurons layer

  • v_reset (float) – reset voltage of this neurons layer. If not None, the neuron’s voltage will be set to v_reset after firing a spike. If None, v_threshold will be subtracted from the neuron’s voltage after firing a spike

  • surrogate_function (Callable) – the function for calculating surrogate gradients of the Heaviside step function in backward

  • detach_reset (bool) – whether to detach the computation graph of reset in backward

  • step_mode (str) – the step mode, which can be s (single-step) or m (multi-step)

  • backend (str) – backend for this neurons layer. Different step_mode may support different backends. The user can print self.supported_backends to check which backends are supported by the current step_mode. If supported, using the 'cupy' backend will have the fastest training speed

  • store_v_seq (bool) – when using step_mode = 'm' and given input with shape = [T, N, *], this option controls whether storing the voltage at each time-step to self.v_seq with shape = [T, N, *]. If set to False, only the voltage at last time-step will be stored to self.v with shape = [N, *], which can reduce the memory consumption

  • cupy_fp32_inference (bool) – If True, if this module is in eval mode, using float32, running on GPU, and cupy is installed, then this module will use cupy to accelerate. This option has priority over backend

The Parametric Leaky Integrate-and-Fire (PLIF) neuron, which is proposed by Incorporating Learnable Membrane Time Constant to Enhance Learning of Spiking Neural Networks and can be seen as a leaky integrator. Its subthreshold neural dynamics are as follows:

IF decay_input == True:

\[H[t] = V[t-1] + \frac{1}{\tau}(X[t] - (V[t-1] - V_{reset}))\]

IF decay_input == False:

\[H[t] = V[t-1] - \frac{1}{\tau}(V[t-1] - V_{reset}) + X[t]\]

where \(\frac{1}{\tau} = {\rm Sigmoid}(w)\) and \(w\) is a learnable parameter.
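Since \(\frac{1}{\tau} = \mathrm{Sigmoid}(w)\), the initial value of the learnable parameter can be obtained by inverting the sigmoid at init_tau. A small sketch (function names are illustrative, not the module's internals):

```python
import math

def w_from_init_tau(init_tau):
    # solve sigmoid(w) = 1 / init_tau  =>  w = -log(init_tau - 1)
    return -math.log(init_tau - 1.0)

def tau_from_w(w):
    # tau = 1 / sigmoid(w) = 1 + exp(-w)
    return 1.0 + math.exp(-w)

w = w_from_init_tau(2.0)   # sigmoid(0) = 0.5, i.e. tau = 2
```

Because the sigmoid maps into (0, 1), the effective tau is always greater than 1 no matter what value w learns, which keeps the decay stable.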

property supported_backends
extra_repr()[source]
neuronal_charge(x: Tensor)[source]
multi_step_forward(x_seq: Tensor)[source]
training: bool
class spikingjelly.activation_based.neuron.QIFNode(tau: float = 2.0, v_c: float = 0.8, a0: float = 1.0, v_threshold: float = 1.0, v_rest: float = 0.0, v_reset: float = -0.1, surrogate_function: Callable = Sigmoid(alpha=4.0, spiking=True), detach_reset: bool = False, step_mode='s', backend='torch', store_v_seq: bool = False)[source]

Bases: BaseNode

Parameters:
  • tau (float) – membrane time constant

  • v_c (float) – critical voltage

  • a0 (float) – the coefficient of the quadratic term in the charge equation

  • v_threshold (float) – threshold voltage of neurons

  • v_rest (float) – the rest voltage

  • v_reset (float) – reset voltage of this neurons layer. If not None, the neuron’s voltage will be set to v_reset after firing a spike. If None, v_threshold will be subtracted from the neuron’s voltage after firing a spike

  • surrogate_function (Callable) – the function for calculating surrogate gradients of the Heaviside step function in backward

  • detach_reset (bool) – whether to detach the computation graph of reset in backward

  • step_mode (str) – the step mode, which can be s (single-step) or m (multi-step)

  • backend (str) – backend for this neurons layer. Different step_mode may support different backends. The user can print self.supported_backends to check which backends are supported by the current step_mode. If supported, using the 'cupy' backend will have the fastest training speed

  • store_v_seq (bool) – when using step_mode = 'm' and given input with shape = [T, N, *], this option controls whether storing the voltage at each time-step to self.v_seq with shape = [T, N, *]. If set to False, only the voltage at the last time-step will be stored to self.v with shape = [N, *], which can reduce the memory consumption

The Quadratic Integrate-and-Fire neuron is a kind of nonlinear integrate-and-fire model and an approximation of the Exponential Integrate-and-Fire model. Its subthreshold neural dynamics are as follows:

\[H[t] = V[t-1] + \frac{1}{\tau}(X[t] + a_0 (V[t-1] - V_{rest})(V[t-1] - V_c))\]
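A scalar sketch of the QIF charge equation shows the roles of a0, v_rest, and the critical voltage v_c (qif_charge is an illustrative name, not the library API):

```python
def qif_charge(v, x, tau=2.0, a0=1.0, v_rest=0.0, v_c=0.8):
    # H[t] = V[t-1] + (X[t] + a0 * (V[t-1] - V_rest) * (V[t-1] - V_c)) / tau
    return v + (x + a0 * (v - v_rest) * (v - v_c)) / tau

# between v_rest and v_c the quadratic term is negative and pulls v back down
h_mid = qif_charge(0.5, 0.0)
# above v_c it turns positive and accelerates v towards the threshold
h_high = qif_charge(0.9, 0.0)
```

The sign flip of the quadratic term at v_c is what makes v_c act as a soft firing onset: subthreshold fluctuations decay, while excursions past v_c grow on their own.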
extra_repr()[source]
neuronal_charge(x: Tensor)[source]
property supported_backends
multi_step_forward(x_seq: Tensor)[source]
training: bool
class spikingjelly.activation_based.neuron.EIFNode(tau: float = 2.0, delta_T: float = 1.0, theta_rh: float = 0.8, v_threshold: float = 1.0, v_rest: float = 0.0, v_reset: float = -0.1, surrogate_function: Callable = Sigmoid(alpha=4.0, spiking=True), detach_reset: bool = False, step_mode='s', backend='torch', store_v_seq: bool = False)[source]

Bases: BaseNode

Parameters:
  • tau (float) – membrane time constant

  • delta_T (float) – sharpness parameter

  • theta_rh (float) – rheobase threshold

  • v_threshold (float) – threshold of this neurons layer

  • v_rest (float) – the rest voltage

  • v_reset (float) – reset voltage of this neurons layer. If not None, the neuron’s voltage will be set to v_reset after firing a spike. If None, v_threshold will be subtracted from the neuron’s voltage after firing a spike

  • surrogate_function (Callable) – the function for calculating surrogate gradients of the Heaviside step function in backward

  • detach_reset (bool) – whether to detach the computation graph of reset in backward

  • step_mode (str) – the step mode, which can be s (single-step) or m (multi-step)

  • backend (str) – backend for this neurons layer. Different step_mode may support different backends. The user can print self.supported_backends to check which backends are supported by the current step_mode. If supported, using the 'cupy' backend will have the fastest training speed

  • store_v_seq (bool) – when using step_mode = 'm' and given input with shape = [T, N, *], this option controls whether storing the voltage at each time-step to self.v_seq with shape = [T, N, *]. If set to False, only the voltage at the last time-step will be stored to self.v with shape = [N, *], which can reduce the memory consumption

The Exponential Integrate-and-Fire neuron is a kind of nonlinear integrate-and-fire model and a one-dimensional model derived from the Hodgkin-Huxley model. It degenerates to the LIF model when \(\Delta_T\to 0\). Its subthreshold neural dynamics are as follows:

\[H[t] = V[t-1] + \frac{1}{\tau}\left(X[t] - (V[t-1] - V_{rest}) + \Delta_T\exp\left(\frac{V[t-1] - \theta_{rh}}{\Delta_T}\right)\right)\]
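In a scalar sketch of the charge equation, the exponential term is nearly zero well below \(\theta_{rh}\) and outweighs the leak above it, producing the characteristic spike upswing (eif_charge is an illustrative name, not the library API):

```python
import math

def eif_charge(v, x, tau=2.0, delta_T=1.0, theta_rh=0.8, v_rest=0.0):
    # H[t] = V[t-1] + (X[t] - (V[t-1] - V_rest)
    #                  + delta_T * exp((V[t-1] - theta_rh) / delta_T)) / tau
    return v + (x - (v - v_rest)
                + delta_T * math.exp((v - theta_rh) / delta_T)) / tau

h_low = eif_charge(0.0, 0.0)    # only a small exponential contribution
h_high = eif_charge(1.0, 0.0)   # the exponential outweighs the leak: v keeps rising
```

A smaller delta_T makes the exponential switch on more abruptly around theta_rh, which is why the model reduces to LIF in the limit delta_T → 0.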
extra_repr()[source]
neuronal_charge(x: Tensor)[source]
property supported_backends
multi_step_forward(x_seq: Tensor)[source]
training: bool
class spikingjelly.activation_based.neuron.IzhikevichNode(tau: float = 2.0, v_c: float = 0.8, a0: float = 1.0, v_threshold: float = 1.0, v_reset: float = 0.0, v_rest: float = -0.1, w_rest: float = 0.0, tau_w: float = 2.0, a: float = 0.0, b: float = 0.0, surrogate_function: Callable = Sigmoid(alpha=4.0, spiking=True), detach_reset: bool = False, step_mode='s', backend='torch', store_v_seq: bool = False)[source]

Bases: AdaptBaseNode

extra_repr()[source]
neuronal_charge(x: Tensor)[source]
property supported_backends
multi_step_forward(x_seq: Tensor)[source]
training: bool
class spikingjelly.activation_based.neuron.LIAFNode(act: Callable, threshold_related: bool, *args, **kwargs)[source]

Bases: LIFNode


Parameters:
  • act (Callable) – the activation function

  • threshold_related (bool) – whether the neuron uses the threshold-related (TR) mode. If True, y = act(h - v_th), otherwise y = act(h)

Other parameters in *args, **kwargs are the same as those of LIFNode.

The LIAF neuron proposed in LIAF-Net: Leaky Integrate and Analog Fire Network for Lightweight and Efficient Spatiotemporal Information Processing. LIAFNode has the same behavior as LIFNode, but outputs self.act(...) rather than spikes.

Warning

The outputs of this neurons layer are not binary spikes.
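A scalar sketch of the TR-mode behavior: the membrane dynamics and reset follow LIFNode (here with decay_input=True and a hard reset), but the returned value is the activation, not the spike (names are illustrative, not the library API):

```python
def liaf_single_step(v, x, act, threshold_related=True,
                     tau=2.0, v_threshold=1.0, v_reset=0.0):
    h = v + (x - (v - v_reset)) / tau           # LIF charge
    spike = 1.0 if h >= v_threshold else 0.0    # spikes still drive the reset
    y = act(h - v_threshold) if threshold_related else act(h)  # analog output
    v = v_reset if spike == 1.0 else h          # hard reset
    return y, v

relu = lambda z: max(z, 0.0)
y, v = liaf_single_step(0.0, 3.0, relu)   # analog output; membrane was reset
```

The spike is still computed internally so the membrane potential resets exactly as in LIFNode, but downstream layers receive the continuous value y.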

property supported_backends
single_step_forward(x: Tensor)[source]
training: bool
class spikingjelly.activation_based.neuron.KLIFNode(scale_reset: bool = False, tau: float = 2.0, decay_input: bool = True, v_threshold: float = 1.0, v_reset: float = 0.0, surrogate_function: Callable = Sigmoid(alpha=4.0, spiking=True), detach_reset: bool = False, step_mode='s', backend='torch', store_v_seq: bool = False)[source]

Bases: BaseNode

Parameters:
  • scale_reset (bool) – whether scale v in neuronal_reset

  • tau (float) – membrane time constant

  • decay_input (bool) – whether the input will decay

  • v_threshold (float) – threshold of this neurons layer

  • v_reset (float) – reset voltage of this neurons layer. If not None, the neuron’s voltage will be set to v_reset after firing a spike. If None, v_threshold will be subtracted from the neuron’s voltage after firing a spike

  • surrogate_function (Callable) – the function for calculating surrogate gradients of the Heaviside step function in backward

  • detach_reset (bool) – whether to detach the computation graph of reset in backward

  • step_mode (str) – the step mode, which can be s (single-step) or m (multi-step)

  • backend (str) – backend for this neurons layer. Different step_mode may support different backends. The user can print self.supported_backends to check which backends are supported by the current step_mode. If supported, using the 'cupy' backend will have the fastest training speed

  • store_v_seq (bool) – when using step_mode = 'm' and given input with shape = [T, N, *], this option controls whether storing the voltage at each time-step to self.v_seq with shape = [T, N, *]. If set to False, only the voltage at the last time-step will be stored to self.v with shape = [N, *], which can reduce the memory consumption

The K-based Leaky Integrate-and-Fire neuron proposed by KLIF: An optimized spiking neuron unit for tuning surrogate gradient slope and membrane potential, which can be seen as a leaky integrator. Its subthreshold neural dynamics are as follows:

IF decay_input == True:

\[H[t] = V[t-1] + \frac{1}{\tau}(X[t] - (V[t-1] - V_{reset}))\]

IF decay_input == False:

\[H[t] = V[t-1] - \frac{1}{\tau}(V[t-1] - V_{reset}) + X[t]\]

Note that the neuronal fire and reset of the KLIF neuron are different from those of ordinary neurons:

\[ \begin{align}\begin{aligned}F[t] &= \mathrm{ReLU}(kH[t])\\S[t] &= \Theta(F[t] - V_{th})\end{aligned}\end{align} \]

If scale_reset == False, then

\[\begin{split}V[t] = \begin{cases} F[t](1-S[t]) + V_{reset}S[t], hard~~reset \\ F[t] - S[t]V_{th}, soft~~reset \end{cases}\end{split}\]

If scale_reset == True, then

\[\begin{split}V[t] = \begin{cases} \frac{F[t]}{k}(1-S[t]) + V_{reset}S[t], hard~~reset \\ \frac{1}{k}(F[t] - S[t]V_{th}), soft~~reset \end{cases}\end{split}\]
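The modified fire/reset above can be sketched on scalars (hard reset; k is the learnable parameter, treated here as a plain float, and the function name is illustrative, not the library API):

```python
def klif_fire_reset(h, k=1.0, v_threshold=1.0, v_reset=0.0, scale_reset=False):
    f = max(k * h, 0.0)                        # F[t] = ReLU(k * H[t])
    spike = 1.0 if f >= v_threshold else 0.0   # S[t] = Theta(F[t] - V_th)
    if scale_reset:
        v = (f / k) * (1.0 - spike) + v_reset * spike   # hard reset, scaled back by k
    else:
        v = f * (1.0 - spike) + v_reset * spike         # hard reset
    return spike, v

s_neg, v_neg = klif_fire_reset(-0.3)   # ReLU clips negative potentials to zero
s_hi, v_hi = klif_fire_reset(1.2)      # above threshold: fire, then reset
```

With scale_reset=True the surviving (non-spiking) potential is divided back by k, so k only reshapes the surrogate gradient slope around the threshold rather than rescaling the stored membrane state.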
static neuronal_charge_decay_input(x: Tensor, v: Tensor, v_reset: float, tau: float, k: Tensor)[source]
static neuronal_charge_no_decay_input(x: Tensor, v: Tensor, v_reset: float, tau: float, k: Tensor)[source]
neuronal_charge(x: Tensor)[source]
neuronal_reset(spike)[source]
training: bool
class spikingjelly.activation_based.neuron.PSN(T: int, surrogate_function: SurrogateFunctionBase = ATan(alpha=2.0, spiking=True))[source]

Bases: Module, MultiStepModule

Parameters:
  • T (int) – the number of time-steps

  • surrogate_function (Callable) – the function for calculating surrogate gradients of the Heaviside step function in backward

The Parallel Spiking Neuron proposed in Parallel Spiking Neurons with High Efficiency and Long-term Dependencies Learning Ability. The neuronal dynamics are defined as

\[\begin{split}H &= WX, ~~~~~~~~~~~~~~~W \in \mathbb{R}^{T \times T}, X \in \mathbb{R}^{T \times N} \label{eq psn neuronal charge}\\ S &= \Theta(H - B), ~~~~~B \in \mathbb{R}^{T}, S\in \{0, 1\}^{T \times N}\end{split}\]

where \(W\) is the learnable weight matrix, and \(B\) is the learnable threshold.

Note

The PSN only supports the multi-step mode.
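A pure-Python sketch of the dynamics for a single neuron (N = 1) makes the parallelism explicit: every S[t] is a weighted sum of the whole input sequence, computed at once, and since W is unconstrained H[t] may even depend on later inputs, which is why only the multi-step mode makes sense (psn_forward is an illustrative name, not the library API):

```python
def psn_forward(W, B, X):
    # H = W X, S = Theta(H - B); W: T x T, B: length T, X: length T (N = 1)
    T = len(X)
    return [1.0 if sum(W[t][i] * X[i] for i in range(T)) >= B[t] else 0.0
            for t in range(T)]

# identity weight and a constant threshold: each step thresholds independently
W = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]
S = psn_forward(W, [0.5, 0.5, 0.5], [0.2, 0.6, 1.0])
```

With the identity weight this reduces to a memoryless threshold; a learned lower-triangular-style W recovers LIF-like temporal integration without any sequential loop.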

forward(x_seq: Tensor)[source]
extra_repr()[source]
training: bool
class spikingjelly.activation_based.neuron.MaskedPSN(k: int, T: int, lambda_init: float = 0.0, surrogate_function: SurrogateFunctionBase = ATan(alpha=2.0, spiking=True), step_mode: str = 's')[source]

Bases: MemoryModule

Parameters:
  • k (int) – the order of the Masked PSN

  • T (int) – the number of time-steps

  • lambda_init (float) – the initial value of \(\lambda\) to adjust the progressive masking process

  • surrogate_function (Callable) – the function for calculating surrogate gradients of the Heaviside step function in backward

  • step_mode (str) – the step mode, which can be s (single-step) or m (multi-step)

The Masked Parallel Spiking Neuron proposed in Parallel Spiking Neurons with High Efficiency and Long-term Dependencies Learning Ability. The neuronal dynamics are defined as

\[\begin{split}H &= (W \cdot {M}_{k})X, ~~~~~~~~~~~~~~~W \in \mathbb{R}^{T \times T}, {M}_{k} \in \mathbb{R}^{T \times T}, X \in \mathbb{R}^{T \times N} \\ S &= \Theta(H - B), ~~~~~B \in \mathbb{R}^{T}, S\in \{0, 1\}^{T \times N}\end{split}\]

where \(W\) is the learnable weight matrix, \(B\) is the learnable threshold, and \({M}_{k}\) is defined as

\[\begin{split}{M}_{k}[i][j] = \begin{cases} 1, ~~ j \leq i \leq j + k - 1 \\ 0, \mathrm{otherwise} \end{cases}.\end{split}\]

\(\lambda\) is used to adjust the progressive masking process, which is

\[M_{k}(\lambda) = \lambda \cdot M_{k} + (1 - \lambda) \cdot J,\]

where \(J\) is an all-one matrix.

The user can set \(\lambda\) during training by setting self.lambda_ = ....

Note

The masked PSN supports both single-step and multi-step mode. But using the multi-step mode is much faster than the single-step mode.
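The mask and its progressive blending can be sketched directly from the two formulas above (helper names are illustrative, not the library API):

```python
def gen_mask(T, k):
    # M_k[i][j] = 1 if j <= i <= j + k - 1, else 0 (a banded lower-triangular mask)
    return [[1.0 if j <= i <= j + k - 1 else 0.0 for j in range(T)]
            for i in range(T)]

def blended_mask(M, lam):
    # M_k(lambda) = lambda * M_k + (1 - lambda) * J, with J the all-one matrix
    return [[lam * m + (1.0 - lam) for m in row] for row in M]

M = gen_mask(3, 2)            # each step sees at most k = 2 steps of history
dense = blended_mask(M, 0.0)  # lambda = 0: no masking at all
```

Annealing \(\lambda\) from 0 to 1 during training moves the weight smoothly from the fully-connected PSN form to the banded form, where each output step depends only on the k most recent time-steps.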

static gen_masked_weight(lambda_: Tensor, mask0: Tensor, mask1: Tensor, weight: Tensor)[source]
masked_weight()[source]
single_step_forward(x: Tensor)[source]
multi_step_forward(x_seq: Tensor)[source]
property lambda_
extra_repr()[source]
training: bool
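For intuition, the banded mask \({M}_{k}\) and its progressive blend with the all-one matrix \(J\) can be sketched in a few lines of PyTorch; `gen_mask` and `progressive_mask` are hypothetical helper names, not the module's own gen_masked_weight:

```python
import torch

# The banded mask M_k, with M_k[i][j] = 1 iff j <= i <= j + k - 1,
# and the progressive mask M_k(lambda) = lambda * M_k + (1 - lambda) * J.
def gen_mask(T: int, k: int) -> torch.Tensor:
    i = torch.arange(T).unsqueeze(1)   # row indices
    j = torch.arange(T).unsqueeze(0)   # column indices
    return ((j <= i) & (i <= j + k - 1)).float()

def progressive_mask(T: int, k: int, lambda_: float) -> torch.Tensor:
    m_k = gen_mask(T, k)
    return lambda_ * m_k + (1. - lambda_) * torch.ones(T, T)  # J is the all-one matrix

print(gen_mask(4, 2))
# tensor([[1., 0., 0., 0.],
#         [1., 1., 0., 0.],
#         [0., 1., 1., 0.],
#         [0., 0., 1., 1.]])
```

At \(\lambda = 0\) the weight is fully unmasked (pure PSN); at \(\lambda = 1\) only the \(k\)-banded entries survive.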
class spikingjelly.activation_based.neuron.SlidingPSN(k: int, exp_init: bool = True, surrogate_function: SurrogateFunctionBase = ATan(alpha=2.0, spiking=True), step_mode: str = 's', backend: str = 'gemm')[source]

Bases: MemoryModule

Parameters:
  • k (int) – the order of the Sliding PSN

  • exp_init (bool) – if True, the weight will be initialized as (..., 1/4, 1/2, 1). If False, the weight will be initialized by the Kaiming uniform method

  • surrogate_function (Callable) – the function for calculating surrogate gradients of the heaviside step function in backward

  • step_mode (str) – the step mode, which can be s (single-step) or m (multi-step)

  • backend (str) – backend for this neuron layer, which can be "gemm" or "conv". This option only works for the multi-step mode

The Sliding Parallel Spiking Neuron proposed in Parallel Spiking Neurons with High Efficiency and Long-term Dependencies Learning Ability. The neuronal dynamics are defined as

\[\begin{split}H[t] &= \sum_{i=0}^{k-1}W_{i}\cdot X[t - k + 1 + i], \\ S[t] &= \Theta(H[t] - B),\end{split}\]

where \(W = [W_{0}, W_{1}, ..., W_{k-1}] \in \mathbb{R}^{k}\) is the learnable weight, and \(B\) is the learnable threshold.

Note

The Sliding PSN supports both single-step and multi-step modes, but the multi-step mode is much faster than the single-step mode.

property supported_backends
gen_gemm_weight(T: int)[source]
single_step_forward(x: Tensor)[source]
multi_step_forward(x_seq: Tensor)[source]
extra_repr()[source]
training: bool
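The sliding update \(H[t] = \sum_{i=0}^{k-1}W_{i} X[t-k+1+i]\) can be sketched with explicit zero padding for time-steps before \(t = 0\); `sliding_psn_forward` is an illustrative name, and this loop corresponds to the single-step view rather than the faster "gemm" or "conv" backends:

```python
import torch

# A sketch of H[t] = sum_{i=0}^{k-1} W_i * X[t - k + 1 + i], treating X[t] as
# zero for t < 0 (zero padding before the first time-step).
def sliding_psn_forward(x_seq: torch.Tensor, weight: torch.Tensor, threshold: float) -> torch.Tensor:
    # x_seq: [T, N], weight: [k]
    T, k = x_seq.shape[0], weight.shape[0]
    x_pad = torch.cat([torch.zeros(k - 1, x_seq.shape[1]), x_seq], dim=0)
    h_seq = torch.stack([(weight.unsqueeze(1) * x_pad[t:t + k]).sum(0) for t in range(T)])
    return (h_seq - threshold >= 0.).float()   # S[t] = Theta(H[t] - B)

x = torch.tensor([[1.], [1.], [0.]])
print(sliding_psn_forward(x, torch.tensor([0.5, 1.0]), 1.0))
```

With k = 2 and weight (0.5, 1), the hidden states are 1.0, 1.5, and 0.5, so only the first two steps fire at threshold 1.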
class spikingjelly.activation_based.neuron.GatedLIFNode(T: int, inplane=None, init_linear_decay=None, init_v_subreset=None, init_tau: float = 0.25, init_v_threshold: float = 0.5, init_conduct: float = 0.5, surrogate_function: Callable = Sigmoid(alpha=4.0, spiking=True), step_mode='m', backend='torch')[source]

Bases: MemoryModule

Parameters:
  • T (int) – the number of time-steps

  • inplane (int) – the number of channels of the input tensor. If None (default), layer-wise GLIF is used; otherwise, channel-wise GLIF is used

  • init_linear_decay (float) – initial linear-decay constant, default: init_v_threshold/(T * 2)

  • init_v_subreset (float) – initial soft-reset constant

  • init_tau (float) – initial exponential-decay constant

  • init_v_threshold (float) – initial membrane potential threshold

  • init_conduct (float) – initial conductance

  • surrogate_function (Callable) – the function for calculating surrogate gradients of the heaviside step function in backward

  • step_mode (str) – the step mode, which only supports 'm' (multi-step)

  • backend (str) – backend for this neuron layer. Gated-LIF only supports the 'torch' backend

The Gated LIF neuron proposed in GLIF: A Unified Gated Leaky Integrate-and-Fire Neuron for Spiking Neural Networks <https://openreview.net/forum?id=UmFSx2c4ubT>. All membrane-related parameters are learnable, including the gating coefficients.

property supported_backends
extra_repr()[source]
neuronal_charge(x: Tensor, alpha: Tensor, beta: Tensor, t)[source]
neuronal_reset(spike, alpha: Tensor, gamma: Tensor)[source]
neuronal_fire()[source]
multi_step_forward(x_seq: Tensor)[source]
training: bool
class spikingjelly.activation_based.neuron.DSRIFNode(T: int = 20, v_threshold: float = 6.0, alpha: float = 0.5, v_threshold_training: bool = True, v_threshold_grad_scaling: float = 1.0, v_threshold_lower_bound: float = 0.01, step_mode='m', backend='torch', **kwargs)[source]

Bases: MemoryModule

Parameters:
  • T (int) – the number of time-steps

  • v_threshold (float) – initial membrane potential threshold

  • alpha (float) – the scaling factor for the membrane potential threshold

  • v_threshold_training (bool) – whether the membrane potential threshold is trainable, default: True

  • v_threshold_grad_scaling (float) – the scaling factor for the gradient of the membrane potential threshold

  • v_threshold_lower_bound (float) – the minimum value of the membrane potential threshold during training

  • step_mode (str) – the step mode, which only supports 'm' (multi-step)

  • backend (str) – backend for this neuron layer. DSR-IF only supports the 'torch' backend

The DSR IF neuron proposed in Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation <https://arxiv.org/pdf/2205.00459.pdf>.

property supported_backends
extra_repr()[source]
multi_step_forward(x_seq: Tensor)[source]
class DSRIFFunction(*args, **kwargs)[source]

Bases: Function

static forward(ctx, inp, T=10, v_threshold=1.0, alpha=0.5, v_threshold_grad_scaling=1.0)[source]
static backward(ctx, grad_output)[source]
training: bool
class spikingjelly.activation_based.neuron.DSRLIFNode(T: int = 20, v_threshold: float = 1.0, tau: float = 2.0, delta_t: float = 0.05, alpha: float = 0.3, v_threshold_training: bool = True, v_threshold_grad_scaling: float = 1.0, v_threshold_lower_bound: float = 0.1, step_mode='m', backend='torch', **kwargs)[source]

Bases: MemoryModule

Parameters:
  • T (int) – the number of time-steps

  • v_threshold (float) – initial membrane potential threshold

  • tau (float) – membrane time constant

  • delta_t (float) – the discretization step used to discretize the ODE form of the LIF model

  • alpha (float) – the scaling factor for the membrane potential threshold

  • v_threshold_training (bool) – whether the membrane potential threshold is trainable, default: True

  • v_threshold_grad_scaling (float) – the scaling factor for the gradient of the membrane potential threshold

  • v_threshold_lower_bound (float) – the minimum value of the membrane potential threshold during training

  • step_mode (str) – the step mode, which only supports 'm' (multi-step)

  • backend (str) – backend for this neuron layer. DSR-LIF only supports the 'torch' backend

The DSR LIF neuron proposed in Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation <https://arxiv.org/pdf/2205.00459.pdf>.

property supported_backends
extra_repr()[source]
multi_step_forward(x_seq: Tensor)[source]
classmethod weight_rate_spikes(data, tau, delta_t)[source]
class DSRLIFFunction(*args, **kwargs)[source]

Bases: Function

static forward(ctx, inp, T, v_threshold, tau, delta_t=0.05, alpha=0.3, v_threshold_grad_scaling=1.0)[source]
static backward(ctx, grad_output)[source]
training: bool
class spikingjelly.activation_based.neuron.OTTTLIFNode(tau: float = 2.0, decay_input: bool = False, v_threshold: float = 1.0, v_reset: Optional[float] = None, surrogate_function: Callable = Sigmoid(alpha=4.0, spiking=True), detach_reset: bool = True, step_mode='s', backend='torch', store_v_seq: bool = False)[source]

Bases: LIFNode

Parameters:
  • tau (float) – membrane time constant

  • decay_input (bool) – whether the input will also decay

  • v_threshold (float) – threshold of this neuron layer

  • v_reset (float) – reset voltage of this neuron layer. If not None, the neuron's voltage will be set to v_reset after firing a spike. If None, the neuron's voltage will subtract v_threshold after firing a spike

  • surrogate_function (Callable) – the function for calculating surrogate gradients of the heaviside step function in backward

  • detach_reset (bool) – whether to detach the computation graph of the reset in backward. This parameter has no effect in this module and is retained solely for code consistency

  • step_mode (str) – the step mode, which can only be s (single-step) to guarantee memory-efficient computation

  • backend (str) – backend for this neuron layer. Different step_mode may support different backends. The user can print self.supported_backends to check which backends are supported by the current step_mode. If supported, the 'cupy' backend has the fastest training speed

  • store_v_seq (bool) – when using step_mode = 'm' and given input with shape = [T, N, *], this option controls whether the voltage at each time-step is stored to self.v_seq with shape = [T, N, *]. If set to False, only the voltage at the last time-step is stored to self.v with shape = [N, *], which reduces memory consumption

The OTTT LIF neuron proposed in Online Training Through Time for Spiking Neural Networks <https://arxiv.org/pdf/2210.04195.pdf>. The forward propagation is the same as the Leaky Integrate-and-Fire neuron's; this neuron is used for online training through time.

training: bool
reset()[source]
property supported_backends
neuronal_charge(x: Tensor)[source]
static track_trace(spike: Tensor, trace: Tensor, tau: float)[source]
single_step_forward(x: Tensor)[source]

Outputs both the spike and the trace during training, and only the spike during inference. During training, successive parametric modules should be wrapped by GradwithTrace defined in layer.py to calculate gradients with the traces.
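One common form of such a spike trace is a leaky accumulation of past spikes. The sketch below assumes the update trace = (1 - 1/tau) * trace + spike; the exact rule implemented by track_trace may differ in detail:

```python
import torch

# Hypothetical sketch of a spike trace for online credit assignment:
# a leaky accumulation of past spikes, decayed by (1 - 1/tau) each step.
def update_trace(spike: torch.Tensor, trace: torch.Tensor, tau: float) -> torch.Tensor:
    return trace * (1. - 1. / tau) + spike

trace = torch.zeros(2)
for s in (torch.tensor([1., 0.]), torch.tensor([0., 1.])):
    trace = update_trace(s, trace, tau=2.0)
print(trace)  # tensor([0.5000, 1.0000])
```

Because the trace is updated in place at each step, gradients can be computed online without storing the full spike history, which is the point of the single-step restriction.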

class spikingjelly.activation_based.neuron.SLTTLIFNode(tau: float = 2.0, decay_input: bool = True, v_threshold: float = 1.0, v_reset: float = 0.0, surrogate_function: Callable = Sigmoid(alpha=4.0, spiking=True), detach_reset: bool = True, step_mode='s', backend='torch', store_v_seq: bool = False)[source]

Bases: LIFNode

Parameters:
  • tau (float) – membrane time constant

  • decay_input (bool) – whether the input will also decay

  • v_threshold (float) – threshold of this neuron layer

  • v_reset (float) – reset voltage of this neuron layer. If not None, the neuron's voltage will be set to v_reset after firing a spike. If None, the neuron's voltage will subtract v_threshold after firing a spike

  • surrogate_function (Callable) – the function for calculating surrogate gradients of the heaviside step function in backward

  • detach_reset (bool) – whether to detach the computation graph of the reset in backward. This parameter has no effect in this module and is retained solely for code consistency

  • step_mode (str) – the step mode, which can only be s (single-step) to guarantee memory-efficient computation

  • backend (str) – backend for this neuron layer. Different step_mode may support different backends. The user can print self.supported_backends to check which backends are supported by the current step_mode. If supported, the 'cupy' backend has the fastest training speed

  • store_v_seq (bool) – when using step_mode = 'm' and given input with shape = [T, N, *], this option controls whether the voltage at each time-step is stored to self.v_seq with shape = [T, N, *]. If set to False, only the voltage at the last time-step is stored to self.v with shape = [N, *], which reduces memory consumption

The SLTT LIF neuron proposed in Towards Memory- and Time-Efficient Backpropagation for Training Spiking Neural Networks <https://arxiv.org/pdf/2302.14311.pdf>. The forward propagation is the same as the Leaky Integrate-and-Fire neuron's.

training: bool
reset()[source]
property supported_backends
neuronal_charge(x: Tensor)[source]
single_step_forward(x: Tensor)[source]
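Since the forward propagation matches a plain LIF neuron, one time-step can be sketched as charge, fire, and reset. The sketch assumes the standard SpikingJelly charge equation with decay_input=True and a hard reset to v_reset = 0.0; `lif_single_step` is an illustrative name, not the module's API:

```python
import torch

# One LIF step: charge v = v + (x - (v - v_reset)) / tau, fire with the
# heaviside step at v_threshold, then hard-reset the neurons that fired.
def lif_single_step(x: torch.Tensor, v: torch.Tensor, tau: float = 2.0,
                    v_threshold: float = 1.0, v_reset: float = 0.0):
    v = v + (x - (v - v_reset)) / tau          # neuronal charge
    spike = (v >= v_threshold).float()         # neuronal fire (heaviside)
    v = spike * v_reset + (1. - spike) * v     # hard reset where a spike fired
    return spike, v

v = torch.zeros(1)
for x in (torch.tensor([1.0]), torch.tensor([1.5])):
    spike, v = lif_single_step(x, v)
print(spike, v)  # tensor([1.]) tensor([0.])
```

With tau = 2, the first input of 1.0 only charges the membrane to 0.5; the second input of 1.5 pushes it to the threshold 1.0, so the neuron fires and is reset to 0.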