spikingjelly.cext.layer package¶
Module contents¶
- class spikingjelly.cext.layer.SparseLinear(in_features: int, out_features: int, bias: bool = True)[source]¶
Bases: torch.nn.modules.linear.Linear
The fully connected layer for sparse inputs. This module behaves almost the same as torch.nn.Linear.

Warning

This function is implemented by first converting sparse to a sparse matrix format and then calling the sparse matrix multiplication routines. If sparse is not sparse enough, this function will be much slower than the dense matrix multiplication torch.mm.

Warning

Sparse matrix multiplication introduces some numerical error, but the error is not significant and can usually be ignored.

Warning

This layer does not support running on the CPU.
- forward(sparse: torch.Tensor) → torch.Tensor[source]¶
- weight: torch.Tensor¶
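As a concrete illustration of the first warning, the sketch below uses plain PyTorch on the CPU (not the actual cext kernels, which require a GPU) to show the convert-then-multiply pattern and the small numerical error it introduces. All variable names are illustrative assumptions:

```python
import torch

torch.manual_seed(0)
batch_size, in_features, out_features = 8, 64, 32

weight = torch.randn(out_features, in_features)
bias = torch.randn(out_features)

# Build a highly sparse input (~90% zeros), e.g. a batch of spikes.
x = torch.randn(batch_size, in_features)
x[torch.rand_like(x) > 0.1] = 0.0

# Dense path: ordinary matrix multiplication, as in torch.nn.Linear.
y_dense = torch.mm(x, weight.t()) + bias

# Sparse path: convert the input to COO format first, then use sparse mm.
y_sparse = torch.sparse.mm(x.to_sparse(), weight.t()) + bias

# The two results agree up to a small numerical error.
print(torch.allclose(y_dense, y_sparse, atol=1e-4))
```

If the input is dense, the conversion overhead plus the sparse kernel makes this path much slower than torch.mm, which is exactly what the warning cautions about.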
- class spikingjelly.cext.layer.AutoSparseLinear(in_features: int, out_features: int, bias: bool = True, in_spikes: bool = False)[source]¶
Bases: torch.nn.modules.linear.Linear
The automatic sparse fully connected layer. For an input whose batch size has an unknown critical sparsity, this layer will first run the benchmark AutoSparseLinear.benchmark to get the critical sparsity. The critical sparsity is the sparsity at which the sparse matrix multiplication and the dense matrix multiplication have the same speed. For an input whose batch size has a known critical sparsity, this layer automatically selects sparse or dense matrix multiplication according to the current input's sparsity.

Warning

Sparse matrix multiplication introduces some numerical error, but the error is not significant and can usually be ignored.

Warning

The sparse matrix multiplication does not support running on the CPU. When this layer is on the CPU, dense matrix multiplication is always used.
- forward(x: torch.Tensor) → torch.Tensor[source]¶
- benchmark(batch_size: int, device=None, run_times=1024, precision=0.0001, verbose=True)[source]¶
- Parameters
batch_size (int) – batch size of the input
device (str) – the device on which to run the benchmark. If None, it will be set to this layer's device
run_times (int) – the number of repeated runs of the sparse/dense matrix multiplication. The benchmark result is more reliable with a larger run_times
precision (float) – the precision of the binary search for the critical sparsity
verbose (bool) – if True, this function prints logs while running

Uses binary search to find the critical sparsity when the batch size of the input is batch_size: at each candidate sparsity, it runs the sparse/dense matrix multiplication run_times times and compares their speeds, until the critical sparsity is found. If the dense matrix multiplication is still faster than the sparse matrix multiplication when the search reaches the precision precision, the critical sparsity is set to None.
- weight: torch.Tensor¶
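The binary search described above can be sketched in plain PyTorch. This is an illustrative re-implementation under assumptions, not the actual AutoSparseLinear.benchmark code; the function names and the exact stopping rule are hypothetical:

```python
import time
import torch

def time_once(fn, run_times):
    """Average wall-clock time of fn over run_times runs."""
    start = time.perf_counter()
    for _ in range(run_times):
        fn()
    return (time.perf_counter() - start) / run_times

def critical_sparsity(batch_size, in_features, out_features,
                      run_times=64, precision=0.01):
    """Binary-search the sparsity where sparse and dense mm run equally fast.

    Returns None if the dense multiplication is still faster at the highest
    sparsity within the search precision (mirroring the documented behavior).
    """
    weight_t = torch.randn(in_features, out_features)
    lo, hi = 0.0, 1.0
    while hi - lo > precision:
        sparsity = (lo + hi) / 2
        x = torch.randn(batch_size, in_features)
        x[torch.rand_like(x) < sparsity] = 0.0  # zero out this fraction
        t_dense = time_once(lambda: torch.mm(x, weight_t), run_times)
        xs = x.to_sparse()
        t_sparse = time_once(lambda: torch.sparse.mm(xs, weight_t), run_times)
        if t_sparse < t_dense:
            hi = sparsity  # sparse already wins; the critical point is lower
        else:
            lo = sparsity  # dense wins; the input must be sparser
    if lo >= 1.0 - precision:
        return None  # dense mm stayed faster throughout the searched range
    return (lo + hi) / 2

cs = critical_sparsity(4, 128, 128, run_times=4, precision=0.1)
```

At inference time the layer would then compare each input's measured sparsity against the cached critical sparsity for that batch size and dispatch to the sparse or dense kernel accordingly.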