spikingjelly.cext.layer package

Module contents

class spikingjelly.cext.layer.SparseLinear(in_features: int, out_features: int, bias: bool = True)[source]

Bases: torch.nn.modules.linear.Linear

Parameters
  • in_features (int) – size of each input sample

  • out_features (int) – size of each output sample

  • bias (bool) – If set to False, the layer will not learn an additive bias. Default: True

A fully connected layer for sparse inputs. This module behaves almost the same as torch.nn.Linear.

Warning

This layer is implemented by first converting sparse into a sparse matrix format and then calling the sparse matrix multiplication. If sparse is not sparse enough, this layer will be much slower than the dense matrix multiplication torch.mm.

Warning

The sparse matrix multiplication introduces some numerical errors, but they are not significant and can usually be ignored.

Warning

This layer does not support running on the CPU.

forward(sparse: torch.Tensor) → torch.Tensor [source]
in_features: int
out_features: int
weight: torch.Tensor
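A minimal usage sketch (the feature sizes, batch size, and input sparsity below are illustrative, and a CUDA device is assumed since the layer does not run on the CPU):

    import torch
    from spikingjelly.cext.layer import SparseLinear

    device = 'cuda:0'  # SparseLinear only runs on GPU
    fc = SparseLinear(in_features=1024, out_features=512).to(device)

    # A batch of 64 inputs that are mostly zeros (roughly 99% zeros), so the
    # internal sparse matrix multiplication is likely to beat torch.mm.
    x = (torch.rand(64, 1024, device=device) > 0.99).float()

    y = fc(x)
    print(y.shape)  # torch.Size([64, 512])
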
class spikingjelly.cext.layer.AutoSparseLinear(in_features: int, out_features: int, bias: bool = True, in_spikes: bool = False)[source]

Bases: torch.nn.modules.linear.Linear

Parameters
  • in_features (int) – size of each input sample

  • out_features (int) – size of each output sample

  • bias (bool) – If set to False, the layer will not learn an additive bias. Default: True

  • in_spikes (bool) – Whether the inputs are spikes, i.e., tensors whose elements are all 0 or 1. Default: False

The auto sparse fully connected layer. For an input, if the critical sparsity for the input's batch size is unknown, this layer will first run the benchmark AutoSparseLinear.benchmark to obtain it. The critical sparsity is the sparsity at which the sparse matrix multiplication and the dense matrix multiplication have the same speed. Once the critical sparsity for the input's batch size is known, this layer automatically selects sparse or dense matrix multiplication according to the sparsity of the current input.

Warning

The sparse matrix multiplication introduces some numerical errors, but they are not significant and can usually be ignored.

Warning

The sparse matrix multiplication does not support running on the CPU. When this layer is on the CPU, dense matrix multiplication is always used.
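
A minimal usage sketch (shapes, batch size, and sparsity are illustrative, and a CUDA device is assumed): the first forward pass with a new batch size triggers the benchmark, and later calls with the same batch size reuse its result.

    import torch
    from spikingjelly.cext.layer import AutoSparseLinear

    device = 'cuda:0'
    fc = AutoSparseLinear(in_features=1024, out_features=512, in_spikes=True).to(device)

    # Binary spike input: every element is 0 or 1, here with roughly 90% zeros.
    x = (torch.rand(64, 1024, device=device) > 0.9).float()

    # The first call with batch size 64 runs AutoSparseLinear.benchmark to find
    # the critical sparsity for this batch size; subsequent calls with the same
    # batch size choose sparse or dense multiplication automatically.
    y = fc(x)
    print(fc.extra_repr())
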

forward(x: torch.Tensor) → torch.Tensor [source]
extra_repr() → str [source]
benchmark(batch_size: int, device=None, run_times=1024, precision=0.0001, verbose=True) [source]
Parameters
  • batch_size (int) – batch size of the input

  • device (str or None) – the device on which the benchmark runs. If None, it will be set to the device of this layer

  • run_times (int) – the number of repeated runs of the sparse/dense matrix multiplication. A larger run_times gives a more reliable benchmark result

  • precision (float) – the precision of the binary search for the critical sparsity

  • verbose (bool) – If True, print logs during the benchmark

Uses binary search to find the critical sparsity when the batch size of the input is batch_size. At each candidate sparsity, this function runs the sparse/dense matrix multiplication run_times times and compares their speeds, until the critical sparsity is found. If the dense matrix multiplication is still faster than the sparse matrix multiplication when the search interval becomes smaller than precision, the critical sparsity will be set to None.
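
The benchmark can also be run ahead of time for a batch size you expect to use, so the first forward pass does not pay the benchmark cost; a sketch, where the run_times and precision values are illustrative rather than recommendations:

    from spikingjelly.cext.layer import AutoSparseLinear

    fc = AutoSparseLinear(in_features=1024, out_features=512).to('cuda:0')

    # Measure the critical sparsity for batch size 64 before training starts.
    fc.benchmark(batch_size=64, device='cuda:0', run_times=256, precision=1e-3, verbose=True)
    print(fc.extra_repr())
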

in_features: int
out_features: int
weight: torch.Tensor