spikingjelly.activation_based.spike_op package

Module contents

class spikingjelly.activation_based.spike_op.spikeConvolution(*args, **kwargs)[source]

Bases: Function

static forward(ctx, spike, weight, bias, stride, padding, dilation, groups)[source]
static backward(ctx, grad_output)[source]

class spikingjelly.activation_based.spike_op.spikeLinear(*args, **kwargs)[source]

Bases: Function

static forward(ctx, spike, weight, bias=None)[source]
static backward(ctx, grad_output)[source]
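
These two autograd Functions implement the memory-saving forward and backward passes used by the functional wrappers below; user code normally reaches them through spike_linear and spike_conv1d/2d/3d rather than calling them directly. As a minimal sketch of direct invocation, assuming the standard torch.autograd.Function convention (.apply takes the documented forward arguments without ctx) and that the op falls back to the generic implementation on CPU:

    import torch
    from spikingjelly.activation_based import spike_op

    spike = (torch.rand(4, 8) > 0.5).float()         # binary 0/1 tensor, as required
    weight = torch.randn(16, 8, requires_grad=True)

    # .apply mirrors forward(ctx, spike, weight, bias=None), minus ctx.
    y = spike_op.spikeLinear.apply(spike, weight, None)
    y.sum().backward()
    print(weight.grad.shape)                         # torch.Size([16, 8])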
spikingjelly.activation_based.spike_op.spike_linear(spike: Tensor, weight: Tensor, bias: Optional[Tensor] = None) Tensor[source]

A specific case of torch.nn.functional.linear whose inputs are spikes.

Note

This function consumes less memory than torch.nn.functional.linear when training on CUDA devices.

Warning

Every element in spike must be 0 or 1.
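
A minimal usage sketch with illustrative shapes; since this is a specific case of the generic op, the output should match torch.nn.functional.linear on the same inputs (on CPU the call presumably falls back to the generic implementation):

    import torch
    import torch.nn.functional as F
    from spikingjelly.activation_based import spike_op

    spike = (torch.rand(4, 8) > 0.5).float()          # elements are 0 or 1
    weight = torch.randn(16, 8, requires_grad=True)
    bias = torch.randn(16, requires_grad=True)

    y = spike_op.spike_linear(spike, weight, bias)
    y_ref = F.linear(spike, weight, bias)             # generic counterpart
    print(torch.allclose(y, y_ref))                   # True (up to floating point)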

spikingjelly.activation_based.spike_op.spike_conv1d(spike: Tensor, weight: Tensor, bias: Optional[Tensor] = None, stride: Union[int, Size, List[int], Tuple[int, ...]] = 1, padding: str = 'valid', dilation: Union[int, Size, List[int], Tuple[int, ...]] = 1, groups: int = 1) Tensor[source]

A specific case of torch.nn.functional.conv1d whose inputs are spikes.

Note

This function consumes less memory than torch.nn.functional.conv1d when training on CUDA devices.

Warning

Every element in spike must be 0 or 1.
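
A minimal sketch with illustrative shapes; note that the default padding is 'valid', i.e. no padding:

    import torch
    from spikingjelly.activation_based import spike_op

    spike = (torch.rand(2, 3, 32) > 0.5).float()      # (N, C_in, L), elements are 0 or 1
    weight = torch.randn(6, 3, 5)                     # (C_out, C_in // groups, kernel_size)

    y = spike_op.spike_conv1d(spike, weight)          # stride=1, padding='valid'
    print(y.shape)                                    # torch.Size([2, 6, 28])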

spikingjelly.activation_based.spike_op.spike_conv2d(spike: Tensor, weight: Tensor, bias: Optional[Tensor] = None, stride: Union[int, Size, List[int], Tuple[int, ...]] = 1, padding: str = 'valid', dilation: Union[int, Size, List[int], Tuple[int, ...]] = 1, groups: int = 1) Tensor[source]

A specific case of torch.nn.functional.conv2d whose inputs are spikes.

Note

This function consumes less memory than torch.nn.functional.conv2d when training on CUDA devices.

Warning

Every element in spike must be 0 or 1.
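
A minimal sketch with illustrative shapes, showing that gradients flow to the weight as with the generic op:

    import torch
    from spikingjelly.activation_based import spike_op

    spike = (torch.rand(2, 3, 16, 16) > 0.5).float()  # (N, C_in, H, W), elements are 0 or 1
    weight = torch.randn(8, 3, 3, 3, requires_grad=True)

    y = spike_op.spike_conv2d(spike, weight)          # stride=1, padding='valid'
    y.sum().backward()                                # gradients reach weight as usual
    print(y.shape, weight.grad.shape)                 # torch.Size([2, 8, 14, 14]) torch.Size([8, 3, 3, 3])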

spikingjelly.activation_based.spike_op.spike_conv3d(spike: Tensor, weight: Tensor, bias: Optional[Tensor] = None, stride: Union[int, Size, List[int], Tuple[int, ...]] = 1, padding: str = 'valid', dilation: Union[int, Size, List[int], Tuple[int, ...]] = 1, groups: int = 1) Tensor[source]

A specific case of torch.nn.functional.conv3d whose inputs are spikes.

Note

This function consumes less memory than torch.nn.functional.conv3d when training on CUDA devices.

Warning

Every element in spike must be 0 or 1.
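
A minimal sketch with illustrative shapes:

    import torch
    from spikingjelly.activation_based import spike_op

    spike = (torch.rand(1, 2, 8, 8, 8) > 0.5).float() # (N, C_in, D, H, W), elements are 0 or 1
    weight = torch.randn(4, 2, 3, 3, 3)               # (C_out, C_in // groups, kD, kH, kW)

    y = spike_op.spike_conv3d(spike, weight)          # stride=1, padding='valid'
    print(y.shape)                                    # torch.Size([1, 4, 6, 6, 6])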

class spikingjelly.activation_based.spike_op.SpikeLinear(in_features: int, out_features: int, bias: bool = True, device=None, dtype=None)[source]

Bases: Linear

A specific case of torch.nn.Linear whose inputs are spikes.

Note

This module consumes less memory than torch.nn.Linear when training on CUDA devices.

Warning

Every element in spike must be 0 or 1.

forward(spike: Tensor) Tensor[source]
in_features: int
out_features: int
weight: Tensor
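
A minimal sketch with illustrative sizes; the module is constructed and trained like an ordinary torch.nn.Linear, it only constrains its input to be binary:

    import torch
    from spikingjelly.activation_based import spike_op

    fc = spike_op.SpikeLinear(in_features=8, out_features=4)
    spike = (torch.rand(16, 8) > 0.5).float()         # elements are 0 or 1
    out = fc(spike)
    out.sum().backward()                              # parameters receive gradients as usual
    print(out.shape, fc.weight.grad.shape)            # torch.Size([16, 4]) torch.Size([4, 8])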
class spikingjelly.activation_based.spike_op.SpikeConv1d(in_channels: int, out_channels: int, kernel_size: Union[int, Tuple[int]], stride: Union[int, Tuple[int]] = 1, padding: Union[str, int, Tuple[int]] = 0, dilation: Union[int, Tuple[int]] = 1, groups: int = 1, bias: bool = True, padding_mode: str = 'zeros', device=None, dtype=None)[source]

Bases: Conv1d

A specific case of torch.nn.Conv1d whose inputs are spikes.

Note

This module consumes less memory than torch.nn.Conv1d when training on CUDA devices.

Warning

Every element in spike must be 0 or 1.

bias: Optional[Tensor]
in_channels: int
out_channels: int
kernel_size: Tuple[int, ...]
stride: Tuple[int, ...]
padding: Union[str, Tuple[int, ...]]
dilation: Tuple[int, ...]
transposed: bool
output_padding: Tuple[int, ...]
groups: int
padding_mode: str
weight: Tensor
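
A minimal sketch with illustrative sizes, constructed like an ordinary torch.nn.Conv1d:

    import torch
    from spikingjelly.activation_based import spike_op

    conv = spike_op.SpikeConv1d(in_channels=3, out_channels=6, kernel_size=5, padding=2)
    spike = (torch.rand(2, 3, 32) > 0.5).float()      # elements are 0 or 1
    print(conv(spike).shape)                          # torch.Size([2, 6, 32])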
class spikingjelly.activation_based.spike_op.SpikeConv2d(in_channels: int, out_channels: int, kernel_size: Union[int, Tuple[int, int]], stride: Union[int, Tuple[int, int]] = 1, padding: Union[str, int, Tuple[int, int]] = 0, dilation: Union[int, Tuple[int, int]] = 1, groups: int = 1, bias: bool = True, padding_mode: str = 'zeros', device=None, dtype=None)[source]

Bases: Conv2d

A specific case of torch.nn.Conv2d whose inputs are spikes.

Note

This module consumes less memory than torch.nn.Conv2d when training on CUDA devices.

Warning

Every element in spike must be 0 or 1.

bias: Optional[Tensor]
in_channels: int
out_channels: int
kernel_size: Tuple[int, ...]
stride: Tuple[int, ...]
padding: Union[str, Tuple[int, ...]]
dilation: Tuple[int, ...]
transposed: bool
output_padding: Tuple[int, ...]
groups: int
padding_mode: str
weight: Tensor
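
A sketch of the intended usage pattern, where a spiking neuron layer produces the binary input; IFNode is taken from the companion spikingjelly.activation_based.neuron module and the layer sizes are illustrative:

    import torch
    import torch.nn as nn
    from spikingjelly.activation_based import spike_op, neuron

    # The IFNode emits 0/1 spikes, which satisfy SpikeConv2d's input constraint.
    net = nn.Sequential(
        nn.Conv2d(1, 8, kernel_size=3),
        neuron.IFNode(),
        spike_op.SpikeConv2d(8, 16, kernel_size=3),
    )
    x = torch.rand(4, 1, 28, 28)
    print(net(x).shape)                               # torch.Size([4, 16, 24, 24])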
class spikingjelly.activation_based.spike_op.SpikeConv3d(in_channels: int, out_channels: int, kernel_size: Union[int, Tuple[int, int, int]], stride: Union[int, Tuple[int, int, int]] = 1, padding: Union[str, int, Tuple[int, int, int]] = 0, dilation: Union[int, Tuple[int, int, int]] = 1, groups: int = 1, bias: bool = True, padding_mode: str = 'zeros', device=None, dtype=None)[source]

Bases: Conv3d

A specific case of torch.nn.Conv3d whose inputs are spikes.

Note

This module consumes less memory than torch.nn.Conv3d when training on CUDA devices.

Warning

Every element in spike must be 0 or 1.

bias: Optional[Tensor]
in_channels: int
out_channels: int
kernel_size: Tuple[int, ...]
stride: Tuple[int, ...]
padding: Union[str, Tuple[int, ...]]
dilation: Tuple[int, ...]
transposed: bool
output_padding: Tuple[int, ...]
groups: int
padding_mode: str
weight: Tensor
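
A minimal sketch with illustrative sizes, constructed like an ordinary torch.nn.Conv3d:

    import torch
    from spikingjelly.activation_based import spike_op

    conv = spike_op.SpikeConv3d(in_channels=2, out_channels=4, kernel_size=3)
    spike = (torch.rand(1, 2, 8, 8, 8) > 0.5).float() # elements are 0 or 1
    print(conv(spike).shape)                          # torch.Size([1, 4, 6, 6, 6])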