spikingjelly.cext.neuron package

Module contents

spikingjelly.cext.neuron.hard_reset_forward_template(x: torch.Tensor, v: torch.Tensor, v_threshold: float, v_reset: float, *args, **kwargs)[source]
Parameters
  • x (torch.Tensor) – \(X_{t}\)

  • v (torch.Tensor) – \(V_{t-1}\)

  • v_threshold (float) – \(V_{threshold}\)

  • v_reset (float) – \(V_{reset}\)

Returns

(spike, v_next), where spike is \(S_{t}\), and v_next is \(V_{t}\)

Return type

tuple

Update the membrane potential of the neuron by one time step with hard reset. The update is calculated by

\[ \begin{align}\begin{aligned}H_{t} & = f(X_{t}, V_{t-1}; \theta)\\S_{t} & = \Theta(H_{t} - V_{threshold})\\V_{t} & = S_{t}V_{reset} + (1 - S_{t})H_{t}\end{aligned}\end{align} \]

where \(f(\cdot)\) is the charging equation and \(\theta\) is the neuron’s parameters.
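The template leaves the charging equation \(f\) to the concrete neuron. For orientation only, here is a minimal pure-PyTorch sketch of the update above, plugging in IF-style charging \(H_{t} = V_{t-1} + X_{t}\) as an illustrative choice; hard_reset_forward_reference is a hypothetical reference, not the fused kernel this page documents.

    import torch

    # Hypothetical pure-PyTorch reference of the hard-reset update.
    # The real function is a fused kernel and f is neuron-specific.
    def hard_reset_forward_reference(x: torch.Tensor, v: torch.Tensor,
                                     v_threshold: float, v_reset: float):
        h = v + x                                    # H_t = f(X_t, V_{t-1}); IF-style charging assumed
        spike = (h >= v_threshold).to(x)             # S_t = Theta(H_t - V_threshold)
        v_next = spike * v_reset + (1. - spike) * h  # V_t = S_t V_reset + (1 - S_t) H_t
        return spike, v_next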

spikingjelly.cext.neuron.hard_reset_fptt_template(x_seq: torch.Tensor, v: torch.Tensor, v_threshold: float, v_reset: float, *args, **kwargs)[source]
Parameters
  • x_seq (torch.Tensor) – \(X_{t}, t=0,1,...,T-1\)

  • v (torch.Tensor) – \(V_{-1}\)

  • v_threshold (float) – \(V_{threshold}\)

  • v_reset (float) – \(V_{reset}\)

Returns

(spike_seq, v_next), where spike_seq is \(S_{t}, t=0,1,...,T-1\), and v_next is \(V_{T-1}\)

Return type

tuple

The multi-step version of hard_reset_forward_template.
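Semantically, the multi-step version just iterates the single-step update over the leading time dimension, which the fused kernel presumably performs in one call. A sketch, reusing the hypothetical hard_reset_forward_reference from above:

    import torch

    def hard_reset_fptt_reference(x_seq: torch.Tensor, v: torch.Tensor,
                                  v_threshold: float, v_reset: float):
        spikes = []
        for t in range(x_seq.shape[0]):        # x_seq has shape [T, ...]
            spike, v = hard_reset_forward_reference(x_seq[t], v,
                                                    v_threshold, v_reset)
            spikes.append(spike)
        return torch.stack(spikes), v          # (spike_seq, V_{T-1})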

spikingjelly.cext.neuron.soft_reset_forward_template(x: torch.Tensor, v: torch.Tensor, v_threshold: float, *args, **kwargs)[source]
Parameters
  • x (torch.Tensor) – \(X_{t}\)

  • v (torch.Tensor) – \(V_{t-1}\)

  • v_threshold (float) – \(V_{threshold}\)

Returns

(spike, v_next), where spike is \(S_{t}\), and v_next is \(V_{t}\)

Return type

tuple

Update the membrane potential of the neuron by one time step with soft reset. The update is calculated by

\[ \begin{align}\begin{aligned}H_{t} & = f(X_{t}, V_{t-1}; \theta)\\S_{t} & = \Theta(H_{t} - V_{threshold})\\V_{t} & = H_{t} - S_{t}V_{threshold}\end{aligned}\end{align} \]

where \(f(\cdot)\) is the charging equation and \(\theta\) is the neuron’s parameters.
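The soft-reset counterpart of the earlier sketch, again with IF-style charging assumed purely for illustration (soft_reset_forward_reference is hypothetical):

    import torch

    def soft_reset_forward_reference(x: torch.Tensor, v: torch.Tensor,
                                     v_threshold: float):
        h = v + x                           # H_t = f(X_t, V_{t-1}); IF-style charging assumed
        spike = (h >= v_threshold).to(x)    # S_t = Theta(H_t - V_threshold)
        v_next = h - spike * v_threshold    # V_t = H_t - S_t V_threshold
        return spike, v_next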

spikingjelly.cext.neuron.soft_reset_fptt_template(x_seq: torch.Tensor, v: torch.Tensor, v_threshold: float, *args, **kwargs)[source]
Parameters
  • x_seq (torch.Tensor) – \(X_{t}, t=0,1,...,T-1\)

  • v (torch.Tensor) – \(V_{-1}\)

  • v_threshold (float) – \(V_{threshold}\)

Returns

(spike_seq, v_next), where spike_seq is \(S_{t}, t=0,1,...,T-1\), and v_next is \(V_{T-1}\)

Return type

tuple

The multi-step version of soft_reset_forward_template.

spikingjelly.cext.neuron.hard_reset_forward_with_grad_template(x: torch.Tensor, v: torch.Tensor, v_threshold: float, v_reset: float, alpha: float, detach_reset: bool, grad_surrogate_function_index: int, *args, **kwargs)[source]
Parameters
  • x (torch.Tensor) – \(X_{t}\)

  • v (torch.Tensor) – \(V_{t-1}\)

  • v_threshold (float) – \(V_{threshold}\)

  • v_reset (float) – \(V_{reset}\)

  • alpha (float) – \(\alpha\)

  • detach_reset (bool) – whether to detach the neuronal reset during backward

  • grad_surrogate_function_index (int) – index of the gradient surrogate function

Returns

(spike, v_next, grad_s_to_h, grad_v_to_h), where spike is \(S_{t}\), v_next is \(V_{t}\), grad_s_to_h is \(\frac{\partial S_{t}}{\partial H_{t}}\), and grad_v_to_h is \(\frac{\partial V_{t}}{\partial H_{t}}\)

Return type

tuple

Update the membrane potential of the neuron by one time step with hard reset. The update is calculated by

\[ \begin{align}\begin{aligned}H_{t} & = f(X_{t}, V_{t-1}; \theta)\\S_{t} & = \Theta(H_{t} - V_{threshold})\\V_{t} & = S_{t}V_{reset} + (1 - S_{t})H_{t}\end{aligned}\end{align} \]

where \(f(\cdot)\) is the charging equation and \(\theta\) is the neuron’s parameters. This function also calculates the gradients that the backward function needs

\[ \begin{align}\begin{aligned}\frac{\partial S_{t}}{\partial H_{t}} & = \Theta'(H_{t} - V_{threshold}) = \sigma(\alpha(H_{t} - V_{threshold}))\\\frac{\partial V_{t}}{\partial H_{t}} & = 1 - S_{t} + (V_{reset} - H_{t})\frac{\partial S_{t}}{\partial H_{t}}\end{aligned}\end{align} \]
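A sketch of how the two auxiliary gradients can be formed, taking the sigmoid expression written above as given (the kernel actually selects a surrogate via grad_surrogate_function_index). hard_reset_grads_reference is hypothetical, and the detach_reset branch is an assumption about what detaching the reset means: the surrogate term drops from \(\partial V_{t}/\partial H_{t}\).

    import torch

    def hard_reset_grads_reference(h, spike, v_threshold, v_reset,
                                   alpha, detach_reset):
        # dS/dH, using the sigmoid surrogate written in the docstring above
        grad_s_to_h = torch.sigmoid(alpha * (h - v_threshold))
        if detach_reset:
            grad_v_to_h = 1. - spike  # assumed: reset removed from the graph
        else:
            grad_v_to_h = 1. - spike + (v_reset - h) * grad_s_to_h
        return grad_s_to_h, grad_v_to_h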
spikingjelly.cext.neuron.hard_reset_fptt_with_grad_template(x_seq: torch.Tensor, v: torch.Tensor, v_threshold: float, v_reset: float, alpha: float, detach_reset: bool, grad_surrogate_function_index: int, *args, **kwargs)[source]
Parameters
  • x_seq (torch.Tensor) – \(X_{t}, t=0,1,...,T-1\)

  • v (torch.Tensor) – \(V_{-1}\)

  • v_threshold (float) – \(V_{threshold}\)

  • v_reset (float) – \(V_{reset}\)

  • alpha (float) – \(\alpha\)

  • detach_reset (bool) – whether to detach the neuronal reset during backward

  • grad_surrogate_function_index (int) – index of the gradient surrogate function

Returns

(spike_seq, v_next, grad_s_to_h, grad_v_to_h), where spike_seq is \(S_{t}, t=0,1,...,T-1\), v_next is \(V_{T-1}\), grad_s_to_h is \(\frac{\partial S_{t}}{\partial H_{t}}, t=0,1,...,T-1\), and grad_v_to_h is \(\frac{\partial V_{t}}{\partial H_{t}}, t=0,1,...,T-1\)

Return type

tuple

The multi-step version of hard_reset_forward_with_grad_template.

spikingjelly.cext.neuron.soft_reset_forward_with_grad_template(x: torch.Tensor, v: torch.Tensor, v_threshold: float, alpha: float, detach_reset: bool, grad_surrogate_function_index: int, *args, **kwargs)[source]
Parameters
  • x (torch.Tensor) – \(X_{t}\)

  • v (torch.Tensor) – \(V_{t-1}\)

  • v_threshold (float) – \(V_{threshold}\)

  • alpha (float) – \(\alpha\)

  • detach_reset (bool) – whether to detach the neuronal reset during backward

  • grad_surrogate_function_index (int) – index of the gradient surrogate function

Returns

(spike, v_next, grad_s_to_h, grad_v_to_h), where spike is \(S_{t}\), v_next is \(V_{t}\), grad_s_to_h is \(\frac{\partial S_{t}}{\partial H_{t}}\), and grad_v_to_h is \(\frac{\partial V_{t}}{\partial H_{t}}\)

Return type

tuple

Update the membrane potential of the neuron by one time step with soft reset. The update is calculated by

\[ \begin{align}\begin{aligned}H_{t} & = f(X_{t}, V_{t-1}; \theta)\\S_{t} & = \Theta(H_{t} - V_{threshold})\\V_{t} & = H_{t} - S_{t}V_{threshold}\end{aligned}\end{align} \]

where \(f(\cdot)\) is the charging equation and \(\theta\) is the neuron’s parameters. This function also calculates the gradients that the backward function needs

\[ \begin{align}\begin{aligned}\frac{\partial S_{t}}{\partial H_{t}} & = \Theta'(H_{t} - V_{threshold}) = \sigma(\alpha(H_{t} - V_{threshold}))\\\frac{\partial V_{t}}{\partial H_{t}} & = 1 - V_{threshold} \frac{\partial S_{t}}{\partial H_{t}}\end{aligned}\end{align} \]
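The soft-reset counterpart of the earlier gradient sketch (soft_reset_grads_reference is hypothetical; as before, the detach_reset branch assumes the surrogate term simply drops, giving \(\partial V_{t}/\partial H_{t} = 1\)):

    import torch

    def soft_reset_grads_reference(h, v_threshold, alpha, detach_reset):
        grad_s_to_h = torch.sigmoid(alpha * (h - v_threshold))  # dS/dH
        if detach_reset:
            grad_v_to_h = torch.ones_like(h)        # assumed: reset detached
        else:
            grad_v_to_h = 1. - v_threshold * grad_s_to_h
        return grad_s_to_h, grad_v_to_h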
spikingjelly.cext.neuron.soft_reset_fptt_with_grad_template(x_seq: torch.Tensor, v: torch.Tensor, v_threshold: float, alpha: float, detach_reset: bool, grad_surrogate_function_index: int, *args, **kwargs)[source]
Parameters
  • x_seq (torch.Tensor) – \(X_{t}, t=0,1,...,T-1\)

  • v (torch.Tensor) – \(V_{-1}\)

  • v_threshold (float) – \(V_{threshold}\)

  • alpha (float) – \(\alpha\)

  • detach_reset (bool) – whether to detach the neuronal reset during backward

  • grad_surrogate_function_index (int) – index of the gradient surrogate function

Returns

(spike_seq, v_next, grad_s_to_h, grad_v_to_h), where spike_seq is \(S_{t}, t=0,1,...,T-1\), v_next is \(V_{T-1}\), grad_s_to_h is \(\frac{\partial S_{t}}{\partial H_{t}}, t=0,1,...,T-1\), and grad_v_to_h is \(\frac{\partial V_{t}}{\partial H_{t}}, t=0,1,...,T-1\)

Return type

tuple

The multi-step version of soft_reset_forward_with_grad_template.

spikingjelly.cext.neuron.backward_template(grad_spike: torch.Tensor, grad_v_next: torch.Tensor, grad_s_to_h: torch.Tensor, grad_v_to_h: torch.Tensor, *args, **kwargs)[source]
Parameters
  • grad_spike (torch.Tensor) – \(\frac{\partial L}{\partial S_{t}}\)

  • grad_v_next (torch.Tensor) – \(\frac{\partial L}{\partial V_{t}}\)

  • grad_s_to_h (torch.Tensor) – \(\frac{\partial S_{t}}{\partial H_{t}}\)

  • grad_v_to_h (torch.Tensor) – \(\frac{\partial V_{t}}{\partial H_{t}}\)

Returns

(grad_x, grad_v), where grad_x is \(\frac{\partial L}{\partial X_{t}}\), and grad_v is \(\frac{\partial L}{\partial V_{t-1}}\)

Return type

tuple

The backward of hard_reset_forward_with_grad_template and soft_reset_forward_with_grad_template. The gradients are calculated by

\[ \begin{align}\begin{aligned}\frac{\partial L}{\partial H_{t}} & = \frac{\partial L}{\partial S_{t}} \frac{\partial S_{t}}{\partial H_{t}} + \frac{\partial L}{\partial V_{t}} \frac{\partial V_{t}}{\partial H_{t}}\\\frac{\partial L}{\partial X_{t}} &= \frac{\partial L}{\partial H_{t}} \frac{\partial H_{t}}{\partial X_{t}}\\\frac{\partial L}{\partial V_{t-1}} &= \frac{\partial L}{\partial H_{t}} \frac{\partial H_{t}}{\partial V_{t-1}}\end{aligned}\end{align} \]
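A direct transcription of the chain rule above into a hypothetical reference (backward_reference is illustrative; grad_h_to_x and grad_h_to_v are the charging-equation terms \(\partial H_{t}/\partial X_{t}\) and \(\partial H_{t}/\partial V_{t-1}\), which depend on the concrete neuron):

    import torch

    def backward_reference(grad_spike, grad_v_next, grad_s_to_h, grad_v_to_h,
                           grad_h_to_x, grad_h_to_v):
        grad_h = grad_spike * grad_s_to_h + grad_v_next * grad_v_to_h  # dL/dH_t
        grad_x = grad_h * grad_h_to_x                                  # dL/dX_t
        grad_v = grad_h * grad_h_to_v                                  # dL/dV_{t-1}
        return grad_x, grad_v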
spikingjelly.cext.neuron.bptt_template(grad_spike_seq: torch.Tensor, grad_v_next: torch.Tensor, grad_s_to_h: torch.Tensor, grad_v_to_h: torch.Tensor, *args, **kwargs)[source]
Parameters
  • grad_spike_seq (torch.Tensor) – \(\frac{\partial L}{\partial S_{t}}, t=0,1,...,T-1\)

  • grad_v_next (torch.Tensor) – \(\frac{\partial L}{\partial V_{T-1}}\)

  • grad_s_to_h (torch.Tensor) – \(\frac{\partial S_{t}}{\partial H_{t}}, t=0,1,...,T-1\)

  • grad_v_to_h (torch.Tensor) – \(\frac{\partial V_{t}}{\partial H_{t}}, t=0,1,...,T-1\)

Returns

(grad_x_seq, grad_v), where grad_x_seq is \(\frac{\partial L}{\partial X_{t}}, t=0,1,...,T-1\), and grad_v is \(\frac{\partial L}{\partial V_{-1}}\)

Return type

tuple

The multi-step version of backward_template.
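The multi-step backward walks from \(t=T-1\) down to \(0\), feeding the \(\frac{\partial L}{\partial V_{t-1}}\) produced at step \(t\) into step \(t-1\). A sketch reusing the hypothetical backward_reference above:

    import torch

    def bptt_reference(grad_spike_seq, grad_v_next, grad_s_to_h, grad_v_to_h,
                       grad_h_to_x, grad_h_to_v):
        T = grad_spike_seq.shape[0]
        grad_x_seq = torch.zeros_like(grad_spike_seq)
        grad_v = grad_v_next                       # start from dL/dV_{T-1}
        for t in range(T - 1, -1, -1):             # walk backward through time
            grad_x_seq[t], grad_v = backward_reference(
                grad_spike_seq[t], grad_v, grad_s_to_h[t], grad_v_to_h[t],
                grad_h_to_x, grad_h_to_v)
        return grad_x_seq, grad_v                  # grad_v is now dL/dV_{-1}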

spikingjelly.cext.neuron.LIF_hard_reset_forward(x: torch.Tensor, v: torch.Tensor, v_threshold: float, v_reset: float, reciprocal_tau: float, detach_input: bool)[source]
Parameters

reciprocal_tau (float) – \(\frac{1}{\tau}\)

See hard_reset_forward_template for details about the other parameters.

Update the membrane potential of the LIF neuron by one time step with hard reset. The charging equation is

\[H_{t} = V_{t-1} + \frac{1}{\tau}(X_{t} -(V_{t-1} - V_{reset}))\]
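Combining this charging equation with the hard-reset step gives the following hypothetical reference (LIF_hard_reset_forward_reference is illustrative; passing \(\frac{1}{\tau}\) instead of \(\tau\) lets the kernel avoid a division):

    import torch

    def LIF_hard_reset_forward_reference(x, v, v_threshold, v_reset,
                                         reciprocal_tau):
        h = v + reciprocal_tau * (x - (v - v_reset))  # LIF charging from the equation above
        spike = (h >= v_threshold).to(x)              # S_t = Theta(H_t - V_threshold)
        v_next = spike * v_reset + (1. - spike) * h   # hard reset
        return spike, v_next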
spikingjelly.cext.neuron.LIF_hard_reset_fptt(x_seq: torch.Tensor, v: torch.Tensor, v_threshold: float, v_reset: float, reciprocal_tau: float, detach_input: bool)[source]
Parameters

reciprocal_tau (float) – \(\frac{1}{\tau}\)

See hard_reset_fptt_template for details about the other parameters.

The multi-step version of LIF_hard_reset_forward.

spikingjelly.cext.neuron.LIF_hard_reset_forward_with_grad(x: torch.Tensor, v: torch.Tensor, v_threshold: float, v_reset: float, alpha: float, detach_reset: bool, grad_surrogate_function_index: int, reciprocal_tau: float, detach_input: bool)[source]
Parameters

reciprocal_tau (float) – \(\frac{1}{\tau}\)

See hard_reset_forward_with_grad_template for details about the other parameters.

Update the membrane potential of the LIF neuron by one time step with hard reset and calculate the gradients that the backward function needs. The charging equation is

\[H_{t} = V_{t-1} + \frac{1}{\tau}(X_{t} -(V_{t-1} - V_{reset}))\]
spikingjelly.cext.neuron.LIF_hard_reset_fptt_with_grad(x_seq: torch.Tensor, v: torch.Tensor, v_threshold: float, v_reset: float, alpha: float, detach_reset: bool, grad_surrogate_function_index: int, reciprocal_tau: float, detach_input: bool)[source]
Parameters

reciprocal_tau (float) – \(\frac{1}{\tau}\)

See hard_reset_fptt_with_grad_template for details about the other parameters.

The multi-step version of LIF_hard_reset_forward_with_grad.

spikingjelly.cext.neuron.LIF_backward(grad_spike: torch.Tensor, grad_v_next: torch.Tensor, grad_s_to_h: torch.Tensor, grad_v_to_h: torch.Tensor, reciprocal_tau: float, detach_input: bool)[source]
Parameters

reciprocal_tau (float) – \(\frac{1}{\tau}\)

See backward_template for details about the other parameters.

The gradients are calculated by

\[ \begin{align}\begin{aligned}\frac{\partial H_{t}}{\partial X_{t}} & = \frac{1}{\tau}\\\frac{\partial H_{t}}{\partial V_{t-1}} & = 1 - \frac{1}{\tau}\end{aligned}\end{align} \]
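Because these charging gradients are constants, the generic backward specializes to a few fused multiplies. A hypothetical reference (LIF_backward_reference is illustrative; the effect of detach_input on these terms is not documented here, so it is omitted):

    def LIF_backward_reference(grad_spike, grad_v_next, grad_s_to_h,
                               grad_v_to_h, reciprocal_tau):
        grad_h = grad_spike * grad_s_to_h + grad_v_next * grad_v_to_h  # dL/dH_t
        grad_x = grad_h * reciprocal_tau          # dH/dX = 1/tau
        grad_v = grad_h * (1. - reciprocal_tau)   # dH/dV_{t-1} = 1 - 1/tau
        return grad_x, grad_v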
spikingjelly.cext.neuron.LIF_bptt(grad_spike: torch.Tensor, grad_v_next: torch.Tensor, grad_s_to_h: torch.Tensor, grad_v_to_h: torch.Tensor, reciprocal_tau: float, detach_input: bool)[source]
Parameters

reciprocal_tau (float) – \(\frac{1}{\tau}\)

See bptt_template for details about the other parameters.

The multi-step version of LIF_backward.

The gradients are calculated by

\[ \begin{align}\begin{aligned}\frac{\partial H_{t}}{\partial X_{t}} & = \frac{1}{\tau}\\\frac{\partial H_{t}}{\partial V_{t-1}} & = 1 - \frac{1}{\tau}\end{aligned}\end{align} \]
class spikingjelly.cext.neuron.LIFStep[source]

Bases: torch.autograd.function.Function

static forward(ctx, x, v, v_threshold, v_reset, alpha, detach_reset, grad_surrogate_function_index, reciprocal_tau, detach_input)[source]
static backward(ctx, grad_spike, grad_v_next)[source]
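These Function subclasses connect the fused kernels to PyTorch autograd. As a hedged sketch of the pattern (LIFStepSketch is illustrative; the real class's bookkeeping may differ), forward calls LIF_hard_reset_forward_with_grad and stashes the auxiliary gradients, and backward replays them through LIF_backward:

    import torch
    from spikingjelly.cext.neuron import LIF_hard_reset_forward_with_grad, LIF_backward

    class LIFStepSketch(torch.autograd.Function):
        @staticmethod
        def forward(ctx, x, v, v_threshold, v_reset, alpha, detach_reset,
                    grad_surrogate_function_index, reciprocal_tau, detach_input):
            spike, v_next, grad_s_to_h, grad_v_to_h = LIF_hard_reset_forward_with_grad(
                x, v, v_threshold, v_reset, alpha, detach_reset,
                grad_surrogate_function_index, reciprocal_tau, detach_input)
            ctx.save_for_backward(grad_s_to_h, grad_v_to_h)  # stash auxiliary grads
            ctx.reciprocal_tau = reciprocal_tau
            ctx.detach_input = detach_input
            return spike, v_next

        @staticmethod
        def backward(ctx, grad_spike, grad_v_next):
            grad_s_to_h, grad_v_to_h = ctx.saved_tensors
            grad_x, grad_v = LIF_backward(grad_spike, grad_v_next, grad_s_to_h,
                                          grad_v_to_h, ctx.reciprocal_tau,
                                          ctx.detach_input)
            # one gradient slot per forward argument; non-tensor args get None
            return grad_x, grad_v, None, None, None, None, None, None, None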
class spikingjelly.cext.neuron.LIFMultiStep[source]

Bases: torch.autograd.function.Function

static forward(ctx, x_seq, v, v_threshold, v_reset, alpha, detach_reset, grad_surrogate_function_index, reciprocal_tau, detach_input)[source]
static backward(ctx, grad_spike_seq, grad_v_next)[source]
class spikingjelly.cext.neuron.IFStep[source]

Bases: torch.autograd.function.Function

static forward(ctx, x, v, v_threshold, v_reset, alpha, detach_reset, grad_surrogate_function_index)[source]
static backward(ctx, grad_spike, grad_v_next)[source]
class spikingjelly.cext.neuron.IFMultiStep[source]

Bases: torch.autograd.function.Function

static forward(ctx, x_seq, v, v_threshold, v_reset, alpha, detach_reset, grad_surrogate_function_index)[source]
static backward(ctx, grad_spike_seq, grad_v_next)[source]
class spikingjelly.cext.neuron.BaseNode(v_threshold=1.0, v_reset=0.0, surrogate_function='ATan', alpha=2.0, detach_reset=False)[source]

Bases: torch.nn.modules.module.Module

reset()[source]
extra_repr()[source]
training: bool
class spikingjelly.cext.neuron.LIFNode(tau=100.0, detach_input=False, v_threshold=1.0, v_reset=0.0, surrogate_function='ATan', alpha=2.0, detach_reset=False)[source]

Bases: spikingjelly.cext.neuron.BaseNode

forward(dv: torch.Tensor)[source]
extra_repr()[source]
training: bool
class spikingjelly.cext.neuron.MultiStepLIFNode(tau=100.0, detach_input=False, v_threshold=1.0, v_reset=0.0, surrogate_function='ATan', alpha=2.0, detach_reset=False)[source]

Bases: spikingjelly.cext.neuron.LIFNode

forward(dv_seq: torch.Tensor)[source]
training: bool
class spikingjelly.cext.neuron.IFNode(v_threshold=1.0, v_reset=0.0, surrogate_function='ATan', alpha=2.0, detach_reset=False)[source]

Bases: spikingjelly.cext.neuron.BaseNode

forward(dv: torch.Tensor)[source]
training: bool
class spikingjelly.cext.neuron.MultiStepIFNode(v_threshold=1.0, v_reset=0.0, surrogate_function='ATan', alpha=2.0, detach_reset=False)[source]

Bases: spikingjelly.cext.neuron.IFNode

forward(dv_seq: torch.Tensor)[source]
training: bool
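A minimal usage sketch of the module-level neuron layers, assuming the C++/CUDA extension is built and a CUDA device is available; the shapes, the tau value, and the single-tensor return of forward are illustrative assumptions:

    import torch
    from spikingjelly.cext import neuron

    lif = neuron.MultiStepLIFNode(tau=100.0)      # consumes a whole sequence per call
    x_seq = torch.rand(8, 4, 16, device='cuda')   # [T, batch, features], illustrative
    spike_seq = lif(x_seq)                        # spikes for all T time steps
    lif.reset()                                   # clear membrane state between sequences

LIFNode and IFNode process one time step per forward call, while the MultiStep variants take the whole sequence at once and dispatch to the fused fptt kernels.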