spikingjelly.activation_based.encoding package

Module contents

class spikingjelly.activation_based.encoding.StatelessEncoder(step_mode='s')[source]

Bases: Module, StepModule

The base class of stateless encoders. A stateless encoder encoder = StatelessEncoder() encodes x to spike directly by calling encoder(x).

abstract forward(x: Tensor)[source]
Parameters:

x (torch.Tensor) – the input data

Returns:

spike, whose shape is the same as x.shape

Return type:

torch.Tensor

training: bool
class spikingjelly.activation_based.encoding.StatefulEncoder(T: int, step_mode='s')[source]

Bases: MemoryModule

Parameters:

T (int) – the encoding period, usually equal to the total number of simulation time-steps of the SNN

The base class of stateful encoders. A stateful encoder encoder = StatefulEncoder(T) encodes x to spike at the first call of encoder(x), and outputs spike[t % T] at the t-th call

encoder = StatefulEncoder(T)
s_list = []
for t in range(T):
    s_list.append(encoder(x))  # s_list[t] == spike[t]
single_step_forward(x: Optional[Tensor] = None)[source]
Parameters:

x (torch.Tensor) – the input data

Returns:

spike, whose shape is the same as x.shape

Return type:

torch.Tensor

abstract single_step_encode(x: Tensor)[source]
Parameters:

x (torch.Tensor) – the input data

Returns:

spike, whose shape is the same as x.shape

Return type:

torch.Tensor

extra_repr() → str[source]
training: bool
class spikingjelly.activation_based.encoding.PeriodicEncoder(spike: Tensor, step_mode='s')[source]

Bases: StatefulEncoder

Parameters:

spike (torch.Tensor) – the input spike

The periodic encoder outputs spike[t % T] at the t-th call, where T = spike.shape[0]

Warning

Do not forget to reset the encoder because the encoder is stateful!
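The cyclic behavior can be illustrated with a minimal plain-Python sketch (make_periodic_encoder is a hypothetical helper, not the spikingjelly API; the real PeriodicEncoder is a stateful torch Module):

```python
def make_periodic_encoder(spike):
    """Return a closure mimicking periodic encoding: the t-th call
    returns spike[t % T], where T = len(spike). The counter `t` is
    the state that reset() clears in the real encoder."""
    state = {'t': 0}
    T = len(spike)

    def encode():
        out = spike[state['t'] % T]
        state['t'] += 1
        return out

    return encode
```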

single_step_encode(spike: Tensor)[source]
training: bool
class spikingjelly.activation_based.encoding.LatencyEncoder(T: int, enc_function='linear', step_mode='s')[source]

Bases: StatefulEncoder

Parameters:
  • T (int) – the maximum (latest) firing time

  • enc_function (str) – how to convert intensity to firing time, either linear or log

The latency encoder encodes an input 0 <= x <= 1 to a spike whose firing time is 0 <= t_f <= T-1. A larger input causes an earlier firing time.

If enc_function == 'linear'

\[t_f(x) = (T - 1)(1 - x)\]

If enc_function == 'log'

\[t_f(x) = (T - 1) - \ln(\alpha x + 1)\]

where \(\alpha\) satisfies \(t_f(1) = 0\)

Example:

x = torch.rand(size=[8, 2])
print('x', x)
T = 20
encoder = LatencyEncoder(T)
for t in range(T):
    print(encoder(x))

Warning

The user must ensure 0 <= x <= 1.

Warning

Do not forget to reset the encoder because the encoder is stateful!
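The two firing-time formulas can be sketched in plain Python (hypothetical helper names, not the spikingjelly API; alpha = exp(T - 1) - 1 is the value that makes the strongest input x = 1 fire at t = 0):

```python
import math

def latency_linear(x, T):
    # t_f(x) = (T - 1)(1 - x), assuming 0 <= x <= 1
    return round((T - 1) * (1 - x))

def latency_log(x, T):
    # t_f(x) = (T - 1) - ln(alpha * x + 1), with alpha = exp(T - 1) - 1
    # so that t_f(1) == 0 and t_f(0) == T - 1
    alpha = math.exp(T - 1) - 1
    return round((T - 1) - math.log(alpha * x + 1))
```

With T = 20, both functions map x = 1 to the earliest step 0 and x = 0 to the latest step 19; the log variant compresses mid-range intensities toward earlier firing times.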

single_step_encode(x: Tensor)[source]
training: bool
class spikingjelly.activation_based.encoding.PoissonEncoder(step_mode='s')[source]

Bases: StatelessEncoder

The stateless Poisson encoder. It outputs spikes whose firing probability equals the input x.

Warning

The user must ensure 0 <= x <= 1.
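The sampling rule amounts to independent Bernoulli draws per element. A minimal plain-Python sketch (poisson_encode is a hypothetical helper; the real encoder operates on torch tensors):

```python
import random

def poisson_encode(x, seed=None):
    """Bernoulli sampling sketch of rate coding: element i spikes
    with probability x[i], assuming 0 <= x[i] <= 1."""
    rng = random.Random(seed)
    return [1 if rng.random() < xi else 0 for xi in x]
```

Calling it repeatedly on the same x yields a spike train whose time-averaged firing rate approaches x.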

forward(x: Tensor)[source]
training: bool
class spikingjelly.activation_based.encoding.WeightedPhaseEncoder(K: int, step_mode='s')[source]

Bases: StatefulEncoder

Parameters:

K (int) – the encoding period, usually equal to the total number of simulation time-steps of the SNN

Kim J, Kim H, Huh S, et al. Deep neural networks with weighted spikes[J]. Neurocomputing, 2018, 311: 373-386.

The weighted phase encoder, an encoding method based on the binary representation of the input.

The input is expanded into its binary digits, and spikes are emitted from the most significant bit to the least significant bit. Compared with rate coding, each spike carries more information. With K encoding phases, inputs in the interval \([0, 1-2^{-K}]\) can be encoded. Here is the example from the original paper:

Phase (K=8)                   1      2      3      4      5      6      7      8
Spike weight \(\omega(t)\)    2^-1   2^-2   2^-3   2^-4   2^-5   2^-6   2^-7   2^-8
192/256                       1      1      0      0      0      0      0      0
1/256                         0      0      0      0      0      0      0      1
128/256                       1      0      0      0      0      0      0      0
255/256                       1      1      1      1      1      1      1      1

Warning

Do not forget to reset the encoder because the encoder is stateful!
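The table above is a greedy binary expansion: phase k (1-indexed) carries weight 2^-k. A plain-Python sketch (weighted_phase_encode is a hypothetical helper, not the spikingjelly implementation):

```python
def weighted_phase_encode(x, K):
    """Expand x in [0, 1 - 2**-K] into K binary phases; phase k
    (1-indexed) fires iff the remaining value covers its weight
    2**-k (Kim et al., 2018)."""
    spikes = []
    weight = 0.5
    for _ in range(K):
        if x >= weight:
            spikes.append(1)
            x -= weight
        else:
            spikes.append(0)
        weight *= 0.5
    return spikes
```

For example, 192/256 = 2^-1 + 2^-2, so only the first two phases fire, matching the first row of the table.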

single_step_encode(x: Tensor)[source]
training: bool
class spikingjelly.activation_based.encoding.PopSpikeEncoderDeterministic(obs_dim, pop_dim, spike_ts, mean_range, std)[source]

Bases: Module

Learnable Population Coding Spike Encoder with Deterministic Spike Trains
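Population coding projects each scalar observation onto pop_dim neurons with Gaussian receptive fields whose preferred values tile mean_range. A generic sketch of that projection (population_code is a hypothetical helper; the spikingjelly encoders additionally treat the means and stds as learnable parameters and unroll the activations into spike trains over spike_ts steps):

```python
import math

def population_code(obs, pop_dim, mean_range=(-1.0, 1.0), std=0.15):
    """Project one scalar observation onto pop_dim Gaussian
    receptive fields whose means evenly tile mean_range; the
    neuron whose mean is closest to obs responds most strongly."""
    lo, hi = mean_range
    means = [lo + i * (hi - lo) / (pop_dim - 1) for i in range(pop_dim)]
    return [math.exp(-0.5 * ((obs - m) / std) ** 2) for m in means]
```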

forward(obs)[source]
training: bool
class spikingjelly.activation_based.encoding.PopSpikeEncoderRandom(obs_dim, pop_dim, spike_ts, mean_range, std)[source]

Bases: Module

Learnable Population Coding Spike Encoder with Random Spike Trains

forward(obs)[source]
training: bool
class spikingjelly.activation_based.encoding.PopEncoder(obs_dim, pop_dim, spike_ts, mean_range, std)[source]

Bases: Module

Learnable Population Coding Encoder

forward(obs)[source]
training: bool