spikingjelly.clock_driven.surrogate package

Module contents

spikingjelly.clock_driven.surrogate.heaviside(x: Tensor)[source]
Parameters

x – the input tensor

Returns

the output tensor

The Heaviside step function, which is defined by

\[\begin{split}g(x) = \begin{cases} 1, & x \geq 0 \\ 0, & x < 0 \\ \end{cases}\end{split}\]

For more information, see HeavisideStepFunction.
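
A minimal usage sketch (assuming PyTorch is available; note that g(0) = 1):

import torch
from spikingjelly.clock_driven import surrogate

x = torch.tensor([-1.5, 0.0, 2.0])
print(surrogate.heaviside(x))  # tensor([0., 1., 1.])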

spikingjelly.clock_driven.surrogate.check_manual_grad(primitive_function, spiking_function, *args, **kwargs)[source]
Parameters
  • primitive_function (callable) – the primitive function of the surrogate gradient function

  • spiking_function (callable) – the surrogate gradient function

The backward pass of a surrogate gradient function is usually written by hand, and this function can be used to check whether the hand-written gradient is correct.

It checks whether the backward pass of the surrogate gradient function spiking_function is consistent with the backward pass of the primitive function primitive_function, where "consistent" means that the error between the two does not exceed eps.

Example code:

def s2nn_apply(x, alpha, beta):
    return surrogate.s2nn.apply(x, alpha, beta)

surrogate.check_manual_grad(surrogate.S2NN.primitive_function, s2nn_apply, alpha=4., beta=1.)
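
A sketch of the comparison this check performs (the helper name, input shape, and eps default are illustrative assumptions, not the library's internals):

import torch

def check_grad_sketch(primitive_function, spiking_function, *args, eps=1e-5, **kwargs):
    x = torch.randn(8, requires_grad=True)
    # gradient of the primitive function, obtained by autograd
    primitive_function(x, *args, **kwargs).sum().backward()
    grad_primitive = x.grad.clone()
    x.grad.zero_()
    # gradient of the spiking function, obtained from its hand-written backward
    spiking_function(x, *args, **kwargs).sum().backward()
    grad_manual = x.grad.clone()
    assert (grad_primitive - grad_manual).abs().max().item() <= eps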
spikingjelly.clock_driven.surrogate.check_cuda_grad(neu: Module, surrogate_function, device, *args, **kwargs)[source]
class spikingjelly.clock_driven.surrogate.SurrogateFunctionBase(alpha, spiking=True)[source]

Bases: Module

set_spiking_mode(spiking: bool)[source]
extra_repr()[source]
static spiking_function(x, alpha)[source]
static primitive_function(x, alpha)[source]
cuda_code(x: str, y: str, dtype='fp32')[source]
cuda_code_start_comments()[source]
cuda_code_end_comments()[source]
forward(x: Tensor)[source]
training: bool
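
The spiking flag selects which static method forward calls. A minimal sketch of this dispatch under the subclass contract described below (illustrative, not the verbatim implementation):

import torch
from torch import nn

class SurrogateFunctionSketch(nn.Module):
    def __init__(self, alpha, spiking=True):
        super().__init__()
        self.alpha = alpha
        self.spiking = spiking

    @staticmethod
    def spiking_function(x, alpha):
        raise NotImplementedError  # provided by each subclass

    @staticmethod
    def primitive_function(x, alpha):
        raise NotImplementedError  # provided by each subclass

    def forward(self, x: torch.Tensor):
        if self.spiking:
            # heaviside forward, surrogate gradient backward
            return self.spiking_function(x, self.alpha)
        # differentiable primitive function instead of spikes
        return self.primitive_function(x, self.alpha)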
class spikingjelly.clock_driven.surrogate.MultiArgsSurrogateFunctionBase(spiking: bool, *args, **kwargs)[source]

Bases: Module

set_spiking_mode(spiking: bool)[source]
cuda_code(x: str, y: str, dtype='fp32')[source]
cuda_code_start_comments()[source]
cuda_code_end_comments()[source]
training: bool
class spikingjelly.clock_driven.surrogate.piecewise_quadratic[source]

Bases: Function

static forward(ctx, x, alpha)[source]
static backward(ctx, grad_output)[source]
class spikingjelly.clock_driven.surrogate.PiecewiseQuadratic(alpha=1.0, spiking=True)[source]

Bases: SurrogateFunctionBase

Parameters
  • alpha – parameter that controls the smoothness of the gradient

  • spiking – whether to output spikes. The default is True, which means using heaviside in forward propagation and the surrogate gradient in backward propagation. If False, the primitive function of the surrogate gradient is used in forward propagation instead

The piecewise quadratic surrogate spiking function. The gradient is defined by

\[\begin{split}g'(x) = \begin{cases} 0, & |x| > \frac{1}{\alpha} \\ -\alpha^2|x|+\alpha, & |x| \leq \frac{1}{\alpha} \end{cases}\end{split}\]

The primitive function is defined by

\[\begin{split}g(x) = \begin{cases} 0, & x < -\frac{1}{\alpha} \\ -\frac{1}{2}\alpha^2|x|x + \alpha x + \frac{1}{2}, & |x| \leq \frac{1}{\alpha} \\ 1, & x > \frac{1}{\alpha} \\ \end{cases}\end{split}\]
[figure: PiecewiseQuadratic.svg]

The function is used in 2, 4, 7, 11, 13.

static spiking_function(x, alpha)[source]
static primitive_function(x: Tensor, alpha)[source]
training: bool
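
A sketch of how such a surrogate is wired into autograd: heaviside in forward, the hand-written triangular gradient above in backward (illustrative, not the library's internals):

import torch

class piecewise_quadratic_sketch(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, alpha):
        if x.requires_grad:
            ctx.save_for_backward(x)
            ctx.alpha = alpha
        return (x >= 0).to(x)  # heaviside in the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        x, = ctx.saved_tensors
        # -alpha^2 |x| + alpha inside |x| <= 1/alpha; clamping at 0 handles the outside
        sg = (-ctx.alpha ** 2 * x.abs() + ctx.alpha).clamp(min=0)
        return grad_output * sg, None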
class spikingjelly.clock_driven.surrogate.piecewise_exp[source]

Bases: Function

static forward(ctx, x, alpha)[source]
static backward(ctx, grad_output)[source]
class spikingjelly.clock_driven.surrogate.PiecewiseExp(alpha=1.0, spiking=True)[source]

Bases: SurrogateFunctionBase

Parameters
  • alpha – parameter that controls the smoothness of the gradient

  • spiking – whether to output spikes. The default is True, which means using heaviside in forward propagation and the surrogate gradient in backward propagation. If False, the primitive function of the surrogate gradient is used in forward propagation instead

The piecewise exponential surrogate spiking function. The gradient is defined by

\[g'(x) = \frac{\alpha}{2}e^{-\alpha |x|}\]

The primitive function is defined by

\[\begin{split}g(x) = \begin{cases} \frac{1}{2}e^{\alpha x}, & x < 0 \\ 1 - \frac{1}{2}e^{-\alpha x}, & x \geq 0 \end{cases}\end{split}\]
[figure: PiecewiseExp.svg]

The function is used in 6, 11.

static spiking_function(x, alpha)[source]
static primitive_function(x: Tensor, alpha)[source]
training: bool
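
The gradient above can be checked against autograd on the primitive function; a short sketch (test values are arbitrary):

import torch

alpha = 2.0
x = torch.linspace(-2.0, 2.0, 9, requires_grad=True)
# the primitive g(x) from above, written with torch.where
g = torch.where(x < 0, 0.5 * torch.exp(alpha * x), 1 - 0.5 * torch.exp(-alpha * x))
g.sum().backward()
print(torch.allclose(x.grad, alpha / 2 * torch.exp(-alpha * x.detach().abs())))  # True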
class spikingjelly.clock_driven.surrogate.sigmoid[source]

Bases: Function

static forward(ctx, x, alpha)[source]
static backward(ctx, grad_output)[source]
class spikingjelly.clock_driven.surrogate.Sigmoid(alpha=4.0, spiking=True)[source]

Bases: SurrogateFunctionBase

Parameters
  • alpha – parameter that controls the smoothness of the gradient

  • spiking – whether to output spikes. The default is True, which means using heaviside in forward propagation and the surrogate gradient in backward propagation. If False, the primitive function of the surrogate gradient is used in forward propagation instead

The sigmoid surrogate spiking function. The gradient is defined by

\[g'(x) = \alpha (1 - \mathrm{sigmoid} (\alpha x)) \mathrm{sigmoid} (\alpha x)\]

The primitive function is defined by

\[g(x) = \mathrm{sigmoid}(\alpha x) = \frac{1}{1+e^{-\alpha x}}\]
[figure: Sigmoid.svg]

The function is used in 4, 12, 14, 15.

static spiking_function(x, alpha)[source]
static primitive_function(x: Tensor, alpha)[source]
cuda_code(x: str, y: str, dtype='fp32')[source]
training: bool
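
Surrogate modules are typically passed to a spiking neuron rather than used directly; a usage sketch with spikingjelly.clock_driven.neuron.LIFNode:

import torch
from spikingjelly.clock_driven import neuron, surrogate

# heaviside spikes in forward, sigmoid surrogate gradient in backward
lif = neuron.LIFNode(surrogate_function=surrogate.Sigmoid(alpha=4.0))
spikes = lif(torch.rand(8))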
class spikingjelly.clock_driven.surrogate.soft_sign[source]

Bases: Function

static forward(ctx, x, alpha)[source]
static backward(ctx, grad_output)[source]
class spikingjelly.clock_driven.surrogate.SoftSign(alpha=2.0, spiking=True)[source]

Bases: SurrogateFunctionBase

Parameters
  • alpha – parameter that controls the smoothness of the gradient

  • spiking – whether to output spikes. The default is True, which means using heaviside in forward propagation and the surrogate gradient in backward propagation. If False, the primitive function of the surrogate gradient is used in forward propagation instead

The soft sign surrogate spiking function. The gradient is defined by

\[g'(x) = \frac{\alpha}{2(1 + |\alpha x|)^{2}} = \frac{1}{2\alpha(\frac{1}{\alpha} + |x|)^{2}}\]

The primitive function is defined by

\[g(x) = \frac{1}{2} (\frac{\alpha x}{1 + |\alpha x|} + 1) = \frac{1}{2} (\frac{x}{\frac{1}{\alpha} + |x|} + 1)\]
[figure: SoftSign.svg]

The function is used in 8, 11.

static spiking_function(x, alpha)[source]
static primitive_function(x: Tensor, alpha)[source]
training: bool
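
The primitive function maps directly onto torch.nn.functional.softsign, since softsign(y) = y / (1 + |y|); a sketch:

import torch
import torch.nn.functional as F

def soft_sign_primitive(x: torch.Tensor, alpha: float) -> torch.Tensor:
    # g(x) = (softsign(alpha * x) + 1) / 2
    return 0.5 * (F.softsign(alpha * x) + 1)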
class spikingjelly.clock_driven.surrogate.atan[source]

Bases: Function

static forward(ctx, x, alpha)[source]
static backward(ctx, grad_output)[source]
class spikingjelly.clock_driven.surrogate.ATan(alpha=2.0, spiking=True)[source]

Bases: SurrogateFunctionBase

The arc tangent surrogate spiking function. The gradient is defined by

\[g'(x) = \frac{\alpha}{2(1 + (\frac{\pi}{2}\alpha x)^2)}\]

The primitive function is defined by

\[g(x) = \frac{1}{\pi} \arctan(\frac{\pi}{2}\alpha x) + \frac{1}{2}\]
[figure: ATan.svg]
static spiking_function(x, alpha)[source]
static primitive_function(x: Tensor, alpha)[source]
cuda_code(x: str, y: str, dtype='fp32')[source]
training: bool
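
With spiking=False the module evaluates the differentiable primitive, so autograd exposes the surrogate gradient; a sketch (the printed value follows from g'(0) = alpha / 2):

import torch
from spikingjelly.clock_driven import surrogate

sg = surrogate.ATan(alpha=2.0, spiking=False)
x = torch.zeros(1, requires_grad=True)
sg(x).sum().backward()
print(x.grad)  # tensor([1.])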
class spikingjelly.clock_driven.surrogate.nonzero_sign_log_abs[source]

Bases: Function

static forward(ctx, x, alpha)[source]
static backward(ctx, grad_output)[source]
class spikingjelly.clock_driven.surrogate.NonzeroSignLogAbs(alpha=1.0, spiking=True)[source]

Bases: SurrogateFunctionBase

Parameters
  • alpha – parameter that controls the smoothness of the gradient

  • spiking – whether to output spikes. The default is True, which means using heaviside in forward propagation and the surrogate gradient in backward propagation. If False, the primitive function of the surrogate gradient is used in forward propagation instead

Warning

The output range of the primitive function is not (0, 1). The advantage of this function is that its backward pass is computationally very cheap.

The NonzeroSignLogAbs surrogate spiking function. The gradient is defined by

\[g'(x) = \frac{\alpha}{1 + |\alpha x|} = \frac{1}{\frac{1}{\alpha} + |x|}\]

The primitive function is defined by

\[g(x) = \mathrm{NonzeroSign}(x) \log (|\alpha x| + 1)\]

where

\[\begin{split}\mathrm{NonzeroSign}(x) = \begin{cases} 1, & x \geq 0 \\ -1, & x < 0 \\ \end{cases}\end{split}\]
[figure: NonzeroSignLogAbs.svg]

static spiking_function(x, alpha)[source]
static primitive_function(x: Tensor, alpha)[source]
training: bool
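
A quick illustration of the warning above, evaluating the primitive with spiking=False:

import torch
from spikingjelly.clock_driven import surrogate

sg = surrogate.NonzeroSignLogAbs(alpha=1.0, spiking=False)
print(sg(torch.tensor([-3.0, 3.0])))  # tensor([-1.3863, 1.3863]): well outside (0, 1)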
class spikingjelly.clock_driven.surrogate.erf[source]

Bases: Function

static forward(ctx, x, alpha)[source]
static backward(ctx, grad_output)[source]
class spikingjelly.clock_driven.surrogate.Erf(alpha=2.0, spiking=True)[source]

Bases: SurrogateFunctionBase

Parameters
  • alpha – parameter that controls the smoothness of the gradient

  • spiking – whether to output spikes. The default is True, which means using heaviside in forward propagation and the surrogate gradient in backward propagation. If False, the primitive function of the surrogate gradient is used in forward propagation instead

The Gaussian error (erf) surrogate spiking function. The gradient is defined by

\[g'(x) = \frac{\alpha}{\sqrt{\pi}}e^{-\alpha^2x^2}\]

The primitive function is defined by

\[\begin{split}g(x) &= \frac{1}{2}(1-\text{erf}(-\alpha x)) \\ &= \frac{1}{2} \text{erfc}(-\alpha x) \\ &= \frac{1}{\sqrt{\pi}}\int_{-\infty}^{\alpha x}e^{-t^2}dt\end{split}\]
[figure: Erf.svg]

The function is used in 1, 4, 18.

static spiking_function(x, alpha)[source]
static primitive_function(x: Tensor, alpha)[source]
training: bool
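
The primitive function maps directly onto torch.erfc; a sketch:

import torch

def erf_primitive(x: torch.Tensor, alpha: float) -> torch.Tensor:
    # g(x) = erfc(-alpha * x) / 2
    return 0.5 * torch.erfc(-alpha * x)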
class spikingjelly.clock_driven.surrogate.piecewise_leaky_relu[source]

Bases: Function

static forward(ctx, x: Tensor, w=1, c=0.01)[source]
static backward(ctx, grad_output)[source]
class spikingjelly.clock_driven.surrogate.PiecewiseLeakyReLU(w=1.0, c=0.01, spiking=True)[source]

Bases: MultiArgsSurrogateFunctionBase

Parameters
  • w – when -w <= x <= w, the gradient is 1 / (2w)

  • c – when x > w or x < -w, the gradient is c

  • spiking – whether to output spikes. The default is True, which means using heaviside in forward propagation and the surrogate gradient in backward propagation. If False, the primitive function of the surrogate gradient is used in forward propagation instead

The piecewise leaky ReLU surrogate spiking function. The gradient is defined by

\[\begin{split}g'(x) = \begin{cases} \frac{1}{2w}, & -w \leq x \leq w \\ c, & x < -w ~or~ x > w \end{cases}\end{split}\]

The primitive function is defined by

\[\begin{split}g(x) = \begin{cases} cx + cw, & x < -w \\ \frac{1}{2w}x + \frac{1}{2}, & -w \leq x \leq w \\ cx - cw + 1, & x > w \end{cases}\end{split}\]
[figure: PiecewiseLeakyReLU.svg]

The function is used in 3, 4, 5, 9, 10, 12, 16, 17.

forward(x)[source]
static spiking_function(x: Tensor, w, c)[source]
static primitive_function(x: Tensor, w, c)[source]
cuda_code(x: str, y: str, dtype='fp32')[source]
training: bool
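
A sketch of the gradient above with torch.where, using 1 / (2w) inside the window as the parameter description states:

import torch

def piecewise_leaky_relu_grad(x: torch.Tensor, w: float = 1.0, c: float = 0.01) -> torch.Tensor:
    # 1 / (2w) for |x| <= w, c elsewhere
    return torch.where(x.abs() <= w,
                       torch.full_like(x, 1 / (2 * w)),
                       torch.full_like(x, c))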
class spikingjelly.clock_driven.surrogate.squarewave_fourier_series[source]

Bases: Function

static forward(ctx, x: Tensor, n: int, T_period: float)[source]
static backward(ctx, grad_output)[source]
class spikingjelly.clock_driven.surrogate.SquarewaveFourierSeries(n: int = 2, T_period: float = 8, spiking=True)[source]

Bases: MultiArgsSurrogateFunctionBase

forward(x)[source]
static spiking_function(x: Tensor, w, c)[source]
static primitive_function(x: Tensor, n: int, T_period: float)[source]
cuda_code(x: str, y: str, dtype='fp32')[source]
training: bool
class spikingjelly.clock_driven.surrogate.s2nn[source]

Bases: Function

static forward(ctx, x: Tensor, alpha: float, beta: float)[source]
static backward(ctx, grad_output)[source]
class spikingjelly.clock_driven.surrogate.S2NN(alpha=4.0, beta=1.0, spiking=True)[source]

Bases: MultiArgsSurrogateFunctionBase

Parameters
  • alpha – the parameter that controls the gradient when x < 0

  • beta – the parameter that controls the gradient when x >= 0

  • spiking – whether to output spikes. The default is True, which means using heaviside in forward propagation and the surrogate gradient in backward propagation. If False, the primitive function of the surrogate gradient is used in forward propagation instead

The S2NN surrogate spiking function, which was proposed in S2NN: Time Step Reduction of Spiking Surrogate Gradients for Training Energy Efficient Single-Step Neural Networks. The gradient is defined by

\[\begin{split}g'(x) = \begin{cases} \alpha (1 - \mathrm{sigmoid} (\alpha x)) \mathrm{sigmoid} (\alpha x), & x < 0 \\ \frac{\beta}{x + 1}, & x \ge 0 \end{cases}\end{split}\]

The primitive function is defined by

\[\begin{split}g(x) = \begin{cases} \mathrm{sigmoid} (\alpha x), & x < 0 \\ \beta \ln(x + 1) + 1, & x \ge 0 \end{cases}\end{split}\]
[figure: S2NN.svg]
forward(x)[source]
static spiking_function(x: Tensor, alpha, beta)[source]
static primitive_function(x: Tensor, alpha, beta)[source]
cuda_code(x: str, y: str, dtype='fp32')[source]
training: bool
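
A sketch of the piecewise gradient above (the beta / (x + 1) branch follows from differentiating beta * ln(x + 1) + 1):

import torch

def s2nn_grad(x: torch.Tensor, alpha: float = 4.0, beta: float = 1.0) -> torch.Tensor:
    sgax = torch.sigmoid(alpha * x)
    return torch.where(x < 0, alpha * sgax * (1 - sgax), beta / (x + 1))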
class spikingjelly.clock_driven.surrogate.q_pseudo_spike[source]

Bases: Function

static forward(ctx, x, alpha)[source]
static backward(ctx, grad_output)[source]
class spikingjelly.clock_driven.surrogate.QPseudoSpike(alpha=2.0, spiking=True)[source]

Bases: SurrogateFunctionBase

Parameters
  • alpha – parameter that controls the tail fatness of the gradient

  • spiking – whether to output spikes. The default is True, which means using heaviside in forward propagation and the surrogate gradient in backward propagation. If False, the primitive function of the surrogate gradient is used in forward propagation instead

The \(q\)-PseudoSpike surrogate spiking function, which was first proposed in Surrogate Gradients Design. The gradient is defined by

\[g'(x) = (1+\frac{2|x|}{\alpha-1})^{-\alpha}\]

where \(\alpha>1\) corresponds to \(q\) in the paper.

The primitive function is defined by

\[\begin{split}g(x) = \begin{cases} \frac{1}{2}(1-\frac{2x}{\alpha-1})^{1-\alpha}, & x < 0 \\ 1 - \frac{1}{2}(1+\frac{2x}{\alpha-1})^{1-\alpha}, & x \geq 0. \end{cases}\end{split}\]
[figure: QPseudoSpike.svg]
static spiking_function(x, alpha)[source]
static primitive_function(x: Tensor, alpha)[source]
training: bool
cuda_code(x: str, y: str, dtype='fp32')[source]
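
A sketch of the heavy-tailed gradient above (alpha must be greater than 1):

import torch

def q_pseudo_spike_grad(x: torch.Tensor, alpha: float = 2.0) -> torch.Tensor:
    # (1 + 2|x| / (alpha - 1)) ** (-alpha)
    return (1 + 2 * x.abs() / (alpha - 1)) ** (-alpha)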

References

1. Esser S K, Appuswamy R, Merolla P, et al. Backpropagation for energy-efficient neuromorphic computing[J]. Advances in Neural Information Processing Systems, 2015, 28: 1117-1125.

2. Esser S K, Merolla P A, Arthur J V, et al. Convolutional networks for fast, energy-efficient neuromorphic computing[J]. Proceedings of the National Academy of Sciences, 2016, 113(41): 11441-11446.

3. Yin S, Venkataramanaiah S K, Chen G K, et al. Algorithm and hardware design of discrete-time spiking neural networks based on back propagation with binary activations[C]//2017 IEEE Biomedical Circuits and Systems Conference (BioCAS). IEEE, 2017: 1-5.

4. Wu Y, Deng L, Li G, et al. Spatio-temporal backpropagation for training high-performance spiking neural networks[J]. Frontiers in Neuroscience, 2018, 12: 331.

5. Huh D, Sejnowski T J. Gradient descent for spiking neural networks[C]//Proceedings of the 32nd International Conference on Neural Information Processing Systems. 2018: 1440-1450.

6. Shrestha S B, Orchard G. SLAYER: spike layer error reassignment in time[C]//Proceedings of the 32nd International Conference on Neural Information Processing Systems. 2018: 1419-1428.

7. Bellec G, Salaj D, Subramoney A, et al. Long short-term memory and learning-to-learn in networks of spiking neurons[C]//Proceedings of the 32nd International Conference on Neural Information Processing Systems. 2018: 795-805.

8. Zenke F, Ganguli S. SuperSpike: Supervised learning in multilayer spiking neural networks[J]. Neural Computation, 2018, 30(6): 1514-1541.

9. Wu Y, Deng L, Li G, et al. Direct training for spiking neural networks: Faster, larger, better[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2019, 33(01): 1311-1318.

10. Gu P, Xiao R, Pan G, et al. STCA: Spatio-temporal credit assignment with delayed feedback in deep spiking neural networks[C]//IJCAI. 2019: 1366-1372.

11. Neftci E O, Mostafa H, Zenke F. Surrogate gradient learning in spiking neural networks: Bringing the power of gradient-based optimization to spiking neural networks[J]. IEEE Signal Processing Magazine, 2019, 36(6): 51-63.

12. Roy D, Chakraborty I, Roy K. Scaling deep spiking neural networks with binary stochastic activations[C]//2019 IEEE International Conference on Cognitive Computing (ICCC). IEEE, 2019: 50-58.

13. Panda P, Aketi S A, Roy K. Toward scalable, efficient, and accurate deep spiking neural networks with backward residual connections, stochastic softmax, and hybridization[J]. Frontiers in Neuroscience, 2020, 14.

14. Lotfi Rezaabad A, Vishwanath S. Long short-term memory spiking networks and their applications[C]//International Conference on Neuromorphic Systems 2020. 2020: 1-9.

15. Woźniak S, Pantazi A, Bohnstingl T, et al. Deep learning incorporating biologically inspired neural dynamics and in-memory computing[J]. Nature Machine Intelligence, 2020, 2(6): 325-336.

16. Cheng X, Hao Y, Xu J, et al. LISNN: Improving spiking neural networks with lateral interactions for robust object recognition[C]//IJCAI. 2020: 1519-1525.

17. Kaiser J, Mostafa H, Neftci E. Synaptic plasticity dynamics for deep continuous local learning (DECOLLE)[J]. Frontiers in Neuroscience, 2020, 14: 424.

18. Yin B, Corradi F, Bohté S M. Effective and efficient computation with multiple-timescale spiking recurrent neural networks[C]//International Conference on Neuromorphic Systems 2020. 2020: 1-8.