spikingjelly.clock_driven.neuron package
Module contents
- class spikingjelly.clock_driven.neuron.BaseNode(v_threshold: float = 1.0, v_reset: float = 0.0, surrogate_function: Callable = Sigmoid(alpha=1.0, spiking=True), detach_reset: bool = False)[source]
Bases:
spikingjelly.clock_driven.base.MemoryModule
This class is the base class of differentiable spiking neurons.
- Parameters
v_threshold (float) – threshold voltage of neurons
v_reset (float) – reset voltage of neurons. If not None, the voltage of neurons that just fired spikes will be set to v_reset. If None, v_threshold will be subtracted from the voltage of neurons that just fired spikes
surrogate_function (Callable) – surrogate function for replacing the gradient of the spiking function during back-propagation
detach_reset (bool) – whether to detach the computation graph of the reset process
- abstract neuronal_charge(x: torch.Tensor)[source]
Define the neuronal charge difference equation. Subclasses must implement this function.
- neuronal_fire()[source]
Calculate output spikes of neurons from their current membrane potential and threshold voltage.
- neuronal_reset()[source]
Reset the membrane potential according to the neurons' output spikes.
- forward(x: torch.Tensor)[source]
Forward in the order of neuronal_charge, neuronal_fire, and neuronal_reset.
- Parameters
x (torch.Tensor) – increment of voltage input to neurons
- Returns
output spikes of neurons
- Return type
torch.Tensor
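The charge-fire-reset order of forward can be sketched in a few lines of plain Python (scalar floats instead of torch.Tensor, a plain Heaviside step instead of the surrogate function; the IF-style charge rule `v + x` stands in for the subclass-defined neuronal_charge):

```python
def step(v, x, v_threshold=1.0, v_reset=0.0):
    v = v + x                                  # neuronal_charge (subclass-defined)
    spike = 1.0 if v >= v_threshold else 0.0   # neuronal_fire (Heaviside step)
    if spike == 1.0:
        # neuronal_reset: hard reset when v_reset is not None,
        # soft reset (subtract v_threshold) when v_reset is None
        v = v_reset if v_reset is not None else v - v_threshold
    return spike, v

v, spikes = 0.0, []
for x in [0.4, 0.4, 0.4, 0.4]:
    s, v = step(v, x)
    spikes.append(s)
# the neuron crosses v_threshold on the third input and fires once
```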
- class spikingjelly.clock_driven.neuron.IFNode(v_threshold: float = 1.0, v_reset: float = 0.0, surrogate_function: Callable = Sigmoid(alpha=1.0, spiking=True), detach_reset: bool = False)[source]
Bases:
spikingjelly.clock_driven.neuron.BaseNode
The Integrate-and-Fire neuron, which can be seen as an ideal integrator: with no input, its voltage stays constant and does not decay as that of the LIF neuron does. Its subthreshold neural dynamics are:
\[V[t] = V[t-1] + X[t]\]
- Parameters
v_threshold (float) – threshold voltage of neurons
v_reset (float) – reset voltage of neurons. If not None, the voltage of neurons that just fired spikes will be set to v_reset. If None, v_threshold will be subtracted from the voltage of neurons that just fired spikes
surrogate_function (Callable) – surrogate function for replacing the gradient of the spiking function during back-propagation
detach_reset (bool) – whether to detach the computation graph of the reset process
- neuronal_charge(x: torch.Tensor)[source]
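A scalar sketch of the IF charge equation (plain Python, not torch), illustrating the ideal-integrator property: with zero input the voltage neither grows nor decays:

```python
def if_charge(v, x):
    # V[t] = V[t-1] + X[t]: an ideal integrator with no leak term
    return v + x

v = 0.7
for _ in range(5):
    v = if_charge(v, 0.0)
# with zero input the voltage is unchanged, unlike the LIF neuron's decay
```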
- class spikingjelly.clock_driven.neuron.MultiStepIFNode(v_threshold: float = 1.0, v_reset: float = 0.0, surrogate_function: Callable = Sigmoid(alpha=1.0, spiking=True), detach_reset: bool = False, backend='torch')[source]
Bases:
spikingjelly.clock_driven.neuron.IFNode
The multi-step version of spikingjelly.clock_driven.neuron.IFNode.
Tip
The input of a multi-step neuron is x_seq with x_seq.shape = [T, *]. Besides reading the membrane potential and spikes at time-step t = T - 1 via .v and .spike, you can read the membrane potential and spikes at all T time-steps via .v_seq and .spike_seq.
Tip
Read Propagation Pattern for more details about single-step and multi-step propagation.
- Parameters
v_threshold (float) – threshold voltage of neurons
v_reset (float) – reset voltage of neurons. If not None, the voltage of neurons that just fired spikes will be set to v_reset. If None, v_threshold will be subtracted from the voltage of neurons that just fired spikes
surrogate_function (Callable) – surrogate function for replacing the gradient of the spiking function during back-propagation
detach_reset (bool) – whether to detach the computation graph of the reset process
backend (str) – which backend to use, 'torch' or 'cupy'. 'cupy' is faster but only supports GPU
- forward(x_seq: torch.Tensor)[source]
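The multi-step semantics can be sketched in plain Python (scalar IF updates instead of torch tensors; the names if_step and multi_step are illustrative, not part of the API): the single-step update is applied T times while the per-step voltages and spikes are collected, mirroring .v_seq and .spike_seq.

```python
def if_step(v, x, v_threshold=1.0, v_reset=0.0):
    v = v + x                                  # IF charge
    spike = 1.0 if v >= v_threshold else 0.0   # fire
    if spike == 1.0:
        v = v_reset                            # hard reset
    return spike, v

def multi_step(x_seq, v_threshold=1.0, v_reset=0.0):
    v, v_seq, spike_seq = 0.0, [], []
    for x in x_seq:            # x_seq plays the role of the [T, *] input
        s, v = if_step(v, x, v_threshold, v_reset)
        v_seq.append(v)        # voltage at every time-step  -> .v_seq
        spike_seq.append(s)    # spike at every time-step    -> .spike_seq
    return v_seq, spike_seq    # the last entries match .v and .spike

v_seq, spike_seq = multi_step([0.6, 0.6, 0.6, 0.6])
# the neuron needs two inputs of 0.6 to cross threshold, so it fires
# at t = 1 and t = 3 and is reset to 0.0 after each spike
```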
- class spikingjelly.clock_driven.neuron.LIFNode(tau: float = 2.0, v_threshold: float = 1.0, v_reset: float = 0.0, surrogate_function: Callable = Sigmoid(alpha=1.0, spiking=True), detach_reset: bool = False)[source]
Bases:
spikingjelly.clock_driven.neuron.BaseNode
The Leaky Integrate-and-Fire neuron, which can be seen as a leaky integrator. Its subthreshold neural dynamics are:
\[V[t] = V[t-1] + \frac{1}{\tau}(X[t] - (V[t-1] - V_{reset}))\]
- Parameters
tau (float) – membrane time constant
v_threshold (float) – threshold voltage of neurons
v_reset (float) – reset voltage of neurons. If not None, the voltage of neurons that just fired spikes will be set to v_reset. If None, v_threshold will be subtracted from the voltage of neurons that just fired spikes
surrogate_function (Callable) – surrogate function for replacing the gradient of the spiking function during back-propagation
detach_reset (bool) – whether to detach the computation graph of the reset process
- neuronal_charge(x: torch.Tensor)[source]
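A scalar sketch of the LIF charge equation (plain Python, not torch), showing the leak: with zero input the voltage decays geometrically toward v_reset at rate 1/tau:

```python
def lif_charge(v, x, tau=2.0, v_reset=0.0):
    # V[t] = V[t-1] + (1/tau) * (X[t] - (V[t-1] - V_reset))
    return v + (x - (v - v_reset)) / tau

v, trace = 1.0, []
for _ in range(3):
    v = lif_charge(v, 0.0)
    trace.append(v)
# with tau = 2 the voltage halves each step: 0.5, 0.25, 0.125
```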
- class spikingjelly.clock_driven.neuron.MultiStepLIFNode(tau: float = 2.0, v_threshold: float = 1.0, v_reset: float = 0.0, surrogate_function: Callable = Sigmoid(alpha=1.0, spiking=True), detach_reset: bool = False, backend='torch')[source]
Bases:
spikingjelly.clock_driven.neuron.LIFNode
The multi-step version of spikingjelly.clock_driven.neuron.LIFNode.
Tip
The input of a multi-step neuron is x_seq with x_seq.shape = [T, *]. Besides reading the membrane potential and spikes at time-step t = T - 1 via .v and .spike, you can read the membrane potential and spikes at all T time-steps via .v_seq and .spike_seq.
Tip
Read Propagation Pattern for more details about single-step and multi-step propagation.
- Parameters
tau (float) – membrane time constant
v_threshold (float) – threshold voltage of neurons
v_reset (float) – reset voltage of neurons. If not None, the voltage of neurons that just fired spikes will be set to v_reset. If None, v_threshold will be subtracted from the voltage of neurons that just fired spikes
surrogate_function (Callable) – surrogate function for replacing the gradient of the spiking function during back-propagation
detach_reset (bool) – whether to detach the computation graph of the reset process
backend (str) – which backend to use, 'torch' or 'cupy'. 'cupy' is faster but only supports GPU
- forward(x_seq: torch.Tensor)[source]
- class spikingjelly.clock_driven.neuron.ParametricLIFNode(init_tau: float = 2.0, v_threshold: float = 1.0, v_reset: float = 0.0, surrogate_function: Callable = Sigmoid(alpha=1.0, spiking=True), detach_reset: bool = False)[source]
Bases:
spikingjelly.clock_driven.neuron.BaseNode
The Parametric Leaky Integrate-and-Fire (PLIF) neuron, proposed by Incorporating Learnable Membrane Time Constant to Enhance Learning of Spiking Neural Networks, which can be seen as a leaky integrator. Its subthreshold neural dynamics are:
\[V[t] = V[t-1] + \frac{1}{\tau}(X[t] - (V[t-1] - V_{reset}))\]
where \(\frac{1}{\tau} = {\rm Sigmoid}(w)\) and \(w\) is a learnable parameter.
- Parameters
init_tau (float) – the initial value of the membrane time constant
v_threshold (float) – threshold voltage of neurons
v_reset (float) – reset voltage of neurons. If not None, the voltage of neurons that just fired spikes will be set to v_reset. If None, v_threshold will be subtracted from the voltage of neurons that just fired spikes
surrogate_function (Callable) – surrogate function for replacing the gradient of the spiking function during back-propagation
detach_reset (bool) – whether to detach the computation graph of the reset process
- neuronal_charge(x: torch.Tensor)[source]
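The reparameterization \(\frac{1}{\tau} = {\rm Sigmoid}(w)\) can be sketched as follows (plain Python scalars, not torch tensors). Training w instead of tau keeps 1/tau inside (0, 1), so tau > 1 always holds while w itself stays unconstrained:

```python
import math

def plif_charge(v, x, w, v_reset=0.0):
    inv_tau = 1.0 / (1.0 + math.exp(-w))   # 1/tau = Sigmoid(w), always in (0, 1)
    # V[t] = V[t-1] + (1/tau) * (X[t] - (V[t-1] - V_reset))
    return v + (x - (v - v_reset)) * inv_tau

# w = 0 gives Sigmoid(0) = 0.5, i.e. tau = 2, the default LIF time constant
v = plif_charge(1.0, 0.0, w=0.0)
```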
- class spikingjelly.clock_driven.neuron.MultiStepParametricLIFNode(init_tau: float = 2.0, v_threshold: float = 1.0, v_reset: float = 0.0, surrogate_function: Callable = Sigmoid(alpha=1.0, spiking=True), detach_reset: bool = False, backend='torch')[source]
Bases:
spikingjelly.clock_driven.neuron.ParametricLIFNode
The multi-step version of the Parametric Leaky Integrate-and-Fire (PLIF) neuron proposed by Incorporating Learnable Membrane Time Constant to Enhance Learning of Spiking Neural Networks, which can be seen as a leaky integrator. Its subthreshold neural dynamics are:
\[V[t] = V[t-1] + \frac{1}{\tau}(X[t] - (V[t-1] - V_{reset}))\]
where \(\frac{1}{\tau} = {\rm Sigmoid}(w)\) and \(w\) is a learnable parameter.
Tip
The input of a multi-step neuron is x_seq with x_seq.shape = [T, *]. Besides reading the membrane potential and spikes at time-step t = T - 1 via .v and .spike, you can read the membrane potential and spikes at all T time-steps via .v_seq and .spike_seq.
Tip
Read Propagation Pattern for more details about single-step and multi-step propagation.
- Parameters
init_tau (float) – the initial value of the membrane time constant
v_threshold (float) – threshold voltage of neurons
v_reset (float) – reset voltage of neurons. If not None, the voltage of neurons that just fired spikes will be set to v_reset. If None, v_threshold will be subtracted from the voltage of neurons that just fired spikes
surrogate_function (Callable) – surrogate function for replacing the gradient of the spiking function during back-propagation
detach_reset (bool) – whether to detach the computation graph of the reset process
backend (str) – which backend to use, 'torch' or 'cupy'. 'cupy' is faster but only supports GPU
- forward(x_seq: torch.Tensor)[source]
- class spikingjelly.clock_driven.neuron.QIFNode(tau: float = 2.0, v_c: float = 0.8, a0: float = 1.0, v_threshold: float = 1.0, v_rest: float = 0.0, v_reset: float = - 0.1, surrogate_function: Callable = Sigmoid(alpha=1.0, spiking=True), detach_reset: bool = False)[source]
Bases:
spikingjelly.clock_driven.neuron.BaseNode
The Quadratic Integrate-and-Fire neuron, a kind of nonlinear integrate-and-fire model and an approximation of the Exponential Integrate-and-Fire model. Its subthreshold neural dynamics are:
\[V[t] = V[t-1] + \frac{1}{\tau}(X[t] + a_0 (V[t-1] - V_{rest})(V[t-1] - V_c))\]
- Parameters
tau (float) – membrane time constant
v_c (float) – critical voltage
a0 (float) – coefficient of the quadratic term
v_threshold (float) – threshold voltage of neurons
v_rest (float) – resting potential
v_reset (float) – reset voltage of neurons. If not None, the voltage of neurons that just fired spikes will be set to v_reset. If None, v_threshold will be subtracted from the voltage of neurons that just fired spikes
surrogate_function (Callable) – surrogate function for replacing the gradient of the spiking function during back-propagation
detach_reset (bool) – whether to detach the computation graph of the reset process
- neuronal_charge(x: torch.Tensor)[source]
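A scalar sketch of the QIF charge equation (plain Python, not torch). The quadratic term makes the sign of the drift depend on where V sits: between V_rest and V_c it is negative (the voltage decays), while above V_c it turns positive (the voltage runs away toward threshold):

```python
def qif_charge(v, x, tau=2.0, a0=1.0, v_rest=0.0, v_c=0.8):
    # V[t] = V[t-1] + (1/tau) * (X[t] + a0 * (V[t-1] - V_rest) * (V[t-1] - V_c))
    return v + (x + a0 * (v - v_rest) * (v - v_c)) / tau

low = qif_charge(0.4, 0.0)    # between v_rest and v_c: drift is downward
high = qif_charge(1.0, 0.0)   # above v_c: drift is upward
```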
- class spikingjelly.clock_driven.neuron.EIFNode(tau: float = 2.0, delta_T: float = 1.0, theta_rh: float = 0.8, v_threshold: float = 1.0, v_rest: float = 0.0, v_reset: float = - 0.1, surrogate_function: Callable = Sigmoid(alpha=1.0, spiking=True), detach_reset: bool = False)[source]
Bases:
spikingjelly.clock_driven.neuron.BaseNode
The Exponential Integrate-and-Fire neuron, a kind of nonlinear integrate-and-fire model and a one-dimensional model derived by simplifying the Hodgkin-Huxley model. It degenerates to the LIF model when \(\Delta_T\to 0\). Its subthreshold neural dynamics are:
\[V[t] = V[t-1] + \frac{1}{\tau}\left(X[t] - (V[t-1] - V_{rest}) + \Delta_T\exp\left(\frac{V[t-1] - \theta_{rh}}{\Delta_T}\right)\right)\]
- Parameters
tau (float) – membrane time constant
delta_T (float) – sharpness parameter
theta_rh (float) – rheobase threshold
v_threshold (float) – threshold voltage of neurons
v_rest (float) – resting potential
v_reset (float) – reset voltage of neurons. If not None, the voltage of neurons that just fired spikes will be set to v_reset. If None, v_threshold will be subtracted from the voltage of neurons that just fired spikes
surrogate_function (Callable) – surrogate function for replacing the gradient of the spiking function during back-propagation
detach_reset (bool) – whether to detach the computation graph of the reset process
- neuronal_charge(x: torch.Tensor)[source]
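A scalar sketch of the EIF charge equation (plain Python, not torch). It is the LIF update plus an exponential term that stays negligible well below theta_rh and grows sharply above it; shrinking delta_T sharpens this transition, which is why the model degenerates to LIF as \(\Delta_T\to 0\):

```python
import math

def eif_charge(v, x, tau=2.0, delta_T=1.0, theta_rh=0.8, v_rest=0.0):
    # V[t] = V[t-1] + (1/tau) * (X[t] - (V[t-1] - V_rest)
    #                            + delta_T * exp((V[t-1] - theta_rh) / delta_T))
    return v + (x - (v - v_rest)
                + delta_T * math.exp((v - theta_rh) / delta_T)) / tau

def lif_charge(v, x, tau=2.0, v_reset=0.0):
    return v + (x - (v - v_reset)) / tau

# well below theta_rh, a small delta_T makes the exponential term vanish,
# so the EIF update approaches the plain LIF update
v_eif = eif_charge(0.2, 0.0, delta_T=0.05)
v_lif = lif_charge(0.2, 0.0)
```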
- class spikingjelly.clock_driven.neuron.MultiStepEIFNode(tau: float = 2.0, delta_T: float = 1.0, theta_rh: float = 0.8, v_threshold: float = 1.0, v_rest: float = 0.0, v_reset: float = - 0.1, surrogate_function: Callable = Sigmoid(alpha=1.0, spiking=True), detach_reset: bool = False, backend='torch')[source]
Bases:
spikingjelly.clock_driven.neuron.EIFNode
The multi-step version of spikingjelly.clock_driven.neuron.EIFNode.
Tip
The input of a multi-step neuron is x_seq with x_seq.shape = [T, *]. Besides reading the membrane potential and spikes at time-step t = T - 1 via .v and .spike, you can read the membrane potential and spikes at all T time-steps via .v_seq and .spike_seq.
Tip
Read Propagation Pattern for more details about single-step and multi-step propagation.
- Parameters
tau (float) – membrane time constant
delta_T (float) – sharpness parameter
theta_rh (float) – rheobase threshold
v_threshold (float) – threshold voltage of neurons
v_rest (float) – resting potential
v_reset (float) – reset voltage of neurons. If not None, the voltage of neurons that just fired spikes will be set to v_reset. If None, v_threshold will be subtracted from the voltage of neurons that just fired spikes
surrogate_function (Callable) – surrogate function for replacing the gradient of the spiking function during back-propagation
detach_reset (bool) – whether to detach the computation graph of the reset process
backend (str) – which backend to use, 'torch' or 'cupy'. 'cupy' is faster but only supports GPU
- forward(x_seq: torch.Tensor)[source]