spikingjelly.clock_driven.lava_exchange package

Module contents

spikingjelly.clock_driven.lava_exchange.TNX_to_NXT(x_seq: Tensor)[source]
spikingjelly.clock_driven.lava_exchange.NXT_to_TNX(x_seq: Tensor)[source]
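
These two functions convert between SpikingJelly's time-first layout and Lava's time-last layout. A minimal sketch, assuming TNX_to_NXT permutes a [T, N, *] tensor to [N, *, T] and NXT_to_TNX is its inverse (the synapse examples below rely on exactly this round trip):

import torch
from spikingjelly.clock_driven import lava_exchange

x_seq = torch.rand([4, 2, 8])             # [T, N, *], SpikingJelly's layout
x_nxt = lava_exchange.TNX_to_NXT(x_seq)   # expected shape [N, *, T] = [2, 8, 4], Lava's layout
x_tnx = lava_exchange.NXT_to_TNX(x_nxt)   # round trip back to [T, N, *]
print(torch.equal(x_seq, x_tnx))          # expected: True
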
spikingjelly.clock_driven.lava_exchange.lava_neuron_forward(lava_neuron: Module, x_seq: Tensor, v: Tensor)[source]
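
A minimal usage sketch for lava_neuron_forward, assuming it runs a lava-dl neuron over a [T, N, *] input sequence with initial membrane state v and returns a (spike_seq, v) tuple; lava-dl must be installed, and the neuron is obtained via to_lava_neuron (documented below):

import torch
from spikingjelly.clock_driven import neuron, lava_exchange

T, N = 4, 2
sj_neuron = neuron.MultiStepIFNode(v_threshold=1.)     # a SpikingJelly multi-step neuron
lava_neuron = lava_exchange.to_lava_neuron(sj_neuron)  # its lava-dl counterpart
x_seq = torch.rand([T, N, 8])
# assumption: the return value is (spike_seq, v)
spike, v = lava_exchange.lava_neuron_forward(lava_neuron, x_seq, torch.zeros([N, 8]))
print(spike.shape, v.shape)                            # expected: [T, N, 8] and [N, 8]
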
spikingjelly.clock_driven.lava_exchange.step_quantize(x: Tensor, step: float = 1.0)[source]
Parameters

x (torch.Tensor) – the input tensor

step (float) – the quantization step

Returns

the quantized tensor

Return type

torch.Tensor

The step quantize function. Here is an example:

import torch
from matplotlib import pyplot as plt
from spikingjelly.clock_driven import lava_exchange

# plt.style.use(['science', 'muted', 'grid'])
fig = plt.figure(dpi=200, figsize=(6, 4))
x = torch.arange(-4, 4, 0.001)
plt.plot(x, lava_exchange.step_quantize(x, 2.), label='quantize(x, step=2)')
plt.plot(x, x, label='y=x', ls='-.')
plt.legend()
plt.grid(ls='--')
plt.title('step quantize')
plt.xlabel('Input')
plt.ylabel('Output')
plt.savefig('./docs/source/_static/API/clock_driven/lava_exchange/step_quantize.svg')
plt.savefig('./docs/source/_static/API/clock_driven/lava_exchange/step_quantize.pdf')
[Figure: step_quantize.svg – step_quantize(x, step=2) plotted against y=x]
spikingjelly.clock_driven.lava_exchange.quantize_8bit(x: Tensor, scale, descale=False)[source]
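
A hedged usage sketch for quantize_8bit, assuming it step-quantizes x with a step derived from scale and clamps the result to an 8-bit range, with descale=True additionally rescaling back to the original magnitude; the scale value here is an arbitrary illustration:

import torch
from spikingjelly.clock_driven import lava_exchange

w = torch.rand([4, 8]) - 0.5                                   # e.g. raw weights
w_q = lava_exchange.quantize_8bit(w, scale=64)                 # quantized
w_dq = lava_exchange.quantize_8bit(w, scale=64, descale=True)  # quantized, then rescaled
print(w_q.unique().numel(), (w - w_dq).abs().max())
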
spikingjelly.clock_driven.lava_exchange.check_conv2d(conv2d_nn: Conv2d)[source]
spikingjelly.clock_driven.lava_exchange.check_fc(fc: Linear)[source]
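
A hedged sketch, assuming these checks raise a ValueError when a layer cannot be mapped to Lava (the synapse converters below require bias-free layers):

import torch.nn as nn
from spikingjelly.clock_driven import lava_exchange

lava_exchange.check_fc(nn.Linear(8, 4, bias=False))      # expected to pass silently
try:
    lava_exchange.check_fc(nn.Linear(8, 4, bias=True))   # Lava does not support bias
except ValueError as e:
    print(e)
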
spikingjelly.clock_driven.lava_exchange.to_lava_neuron_param_dict(sj_ms_neuron: Module)[source]
spikingjelly.clock_driven.lava_exchange.to_lava_neuron(sj_ms_neuron: Module)[source]
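
A minimal sketch for the two neuron converters, assuming they accept SpikingJelly's multi-step neurons (e.g. MultiStepIFNode, MultiStepLIFNode) and that the parameter dict follows lava-dl's CUBA neuron parameters:

from spikingjelly.clock_driven import neuron, lava_exchange

sn = neuron.MultiStepLIFNode(tau=2., detach_reset=True)
print(lava_exchange.to_lava_neuron_param_dict(sn))  # e.g. threshold and decay parameters
lava_sn = lava_exchange.to_lava_neuron(sn)          # the corresponding lava-dl neuron
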
spikingjelly.clock_driven.lava_exchange.linear_to_lava_synapse_dense(fc: Linear)[source]
Parameters

fc (nn.Linear) – a PyTorch linear layer without bias

Returns

a lava slayer dense synapse

Return type

slayer.synapse.Dense

Code example:

import torch
import torch.nn as nn
from spikingjelly.clock_driven import functional, lava_exchange

T = 4
N = 2
layer_nn = nn.Linear(8, 4, bias=False)
layer_sl = lava_exchange.linear_to_lava_synapse_dense(layer_nn)
x_seq = torch.rand([T, N, 8])
with torch.no_grad():
    y_nn = functional.seq_to_ann_forward(x_seq, layer_nn)
    y_sl = lava_exchange.NXT_to_TNX(layer_sl(lava_exchange.TNX_to_NXT(x_seq)))
    print('max error:', (y_nn - y_sl).abs().max())
spikingjelly.clock_driven.lava_exchange.conv2d_to_lava_synapse_conv(conv2d_nn: Conv2d)[source]
Parameters

conv2d_nn (nn.Conv2d) – a PyTorch conv2d layer without bias

Returns

a lava slayer conv synapse

Return type

slayer.synapse.Conv

Code example:

import torch
import torch.nn as nn
from spikingjelly.clock_driven import functional, lava_exchange

T = 4
N = 2
layer_nn = nn.Conv2d(3, 8, kernel_size=3, stride=1, padding=1, bias=False)
layer_sl = lava_exchange.conv2d_to_lava_synapse_conv(layer_nn)
x_seq = torch.rand([T, N, 3, 28, 28])
with torch.no_grad():
    y_nn = functional.seq_to_ann_forward(x_seq, layer_nn)
    y_sl = lava_exchange.NXT_to_TNX(layer_sl(lava_exchange.TNX_to_NXT(x_seq)))
    print('max error:', (y_nn - y_sl).abs().max())
spikingjelly.clock_driven.lava_exchange.avgpool2d_to_lava_synapse_pool(pool2d_nn: AvgPool2d)[source]
Parameters

pool2d_nn (nn.AvgPool2d) – a PyTorch AvgPool2d layer

Returns

a lava slayer pool layer

Return type

slayer.synapse.Pool

Warning

The lava slayer pool layer applies sum pooling rather than average pooling, so its output is kernel_size ** 2 times larger than that of nn.AvgPool2d; the example below divides by 4 (= 2 * 2) to compensate.

Code example:

import torch
import torch.nn as nn
from spikingjelly.clock_driven import functional, lava_exchange

T = 4
N = 2
layer_nn = nn.AvgPool2d(kernel_size=2, stride=2)
layer_sl = lava_exchange.avgpool2d_to_lava_synapse_pool(layer_nn)
x_seq = torch.rand([T, N, 3, 28, 28])
with torch.no_grad():
    y_nn = functional.seq_to_ann_forward(x_seq, layer_nn)
    # divide by kernel_size ** 2 = 4 to convert sum pooling back to average pooling
    y_sl = lava_exchange.NXT_to_TNX(layer_sl(lava_exchange.TNX_to_NXT(x_seq))) / 4.
    print('max error:', (y_nn - y_sl).abs().max())
spikingjelly.clock_driven.lava_exchange.to_lava_block_dense(fc: Linear, sj_ms_neuron: Module, quantize_to_8bit: bool = True)[source]
spikingjelly.clock_driven.lava_exchange.to_lava_block_conv(conv2d_nn: Conv2d, sj_ms_neuron: Module, quantize_to_8bit: bool = True)[source]
spikingjelly.clock_driven.lava_exchange.to_lava_block_flatten(flatten_nn: Flatten)[source]
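
A hedged sketch of how these block converters might be chained when exporting a small network to lava-dl blocks; the layer shapes are arbitrary, and quantize_to_8bit=True (the default) is assumed to quantize the synaptic weights for Loihi compatibility:

import torch.nn as nn
from spikingjelly.clock_driven import neuron, lava_exchange

conv = nn.Conv2d(3, 8, kernel_size=3, padding=1, bias=False)
fc = nn.Linear(8 * 28 * 28, 10, bias=False)

block_conv = lava_exchange.to_lava_block_conv(conv, neuron.MultiStepIFNode(v_threshold=1.))
block_flatten = lava_exchange.to_lava_block_flatten(nn.Flatten())
block_fc = lava_exchange.to_lava_block_dense(fc, neuron.MultiStepIFNode(v_threshold=1.))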