spikingjelly.activation_based.quantize package
Module contents
- spikingjelly.activation_based.quantize.round(x: Tensor)[source]
- Parameters:
x (torch.Tensor) – the input tensor
- Returns:
the output tensor
- Return type:
torch.Tensor
Apply y = torch.round(x) with the gradient re-defined as \(\frac{\partial y}{\partial x} = 1\).
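A minimal usage sketch (the tensor values are illustrative; the import path follows the module name above): the forward pass matches torch.round, while the re-defined gradient passes through unchanged.
import torch
from spikingjelly.activation_based import quantize

x = torch.tensor([0.2, 0.5, 1.7], requires_grad=True)
y = quantize.round(x)
y.sum().backward()
print(y)       # tensor([0., 0., 2.], ...), same values as torch.round(x)
print(x.grad)  # tensor([1., 1., 1.]), the surrogate gradient is 1 everywhere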
- spikingjelly.activation_based.quantize.ceil(x: Tensor)[source]
- Parameters:
x (torch.Tensor) – the input tensor
- Returns:
the output tensor
- Return type:
torch.Tensor
Apply y = torch.ceil(x) with the gradient re-defined as \(\frac{\partial y}{\partial x} = 1\).
- spikingjelly.activation_based.quantize.floor(x: Tensor)[source]
- Parameters:
x (torch.Tensor) – the input tensor
- Returns:
the output tensor
- Return type:
torch.Tensor
Apply y = torch.floor(x) with the gradient re-defined as \(\frac{\partial y}{\partial x} = 1\).
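The same pattern applies to ceil and floor; a small sketch contrasting their forward values (the gradient again passes through as 1):
import torch
from spikingjelly.activation_based import quantize

x = torch.tensor([0.2, 1.5, 2.7])
print(quantize.ceil(x))   # tensor([1., 2., 3.])
print(quantize.floor(x))  # tensor([0., 1., 2.])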
- spikingjelly.activation_based.quantize.clamp_backward(grad_output: Tensor, x: Tensor, min_value: float, max_value: float)[source]
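No docstring is attached to this helper; judging from its name, its arguments, and the gradient rule stated for clamp below, it presumably computes the masked gradient of clamp. A sketch of that behavior (an inference, not the verbatim source):
import torch

def clamp_backward_sketch(grad_output, x, min_value, max_value):
    # Pass the incoming gradient through only where x lies inside
    # [min_value, max_value]; zero it elsewhere.
    mask = (x >= min_value) & (x <= max_value)
    return grad_output * mask.to(grad_output.dtype)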
- spikingjelly.activation_based.quantize.clamp(x: Tensor, min_value: float, max_value: float)[source]
- Parameters:
x (torch.Tensor) – the input tensor
min_value (float) – lower-bound of the range to be clamped to
max_value (float) – upper-bound of the range to be clamped to
- Returns:
the output tensor
- Return type:
torch.Tensor
Apply y = torch.clamp(x, min_value, max_value) with the gradient re-defined as:
\[\frac{\partial y}{\partial x} = \begin{cases} 1, & \text{min\_value} \leq x \leq \text{max\_value} \\ 0, & \text{otherwise} \end{cases}\]
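A short usage sketch (the commented outputs show the expected values): the forward pass clamps, and gradients vanish exactly where the input was clipped.
import torch
from spikingjelly.activation_based import quantize

x = torch.tensor([-2.0, -0.5, 0.5, 2.0], requires_grad=True)
y = quantize.clamp(x, min_value=-1.0, max_value=1.0)
y.sum().backward()
print(y)       # tensor([-1.0000, -0.5000, 0.5000, 1.0000], ...)
print(x.grad)  # tensor([0., 1., 1., 0.]), zero outside [min_value, max_value]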
- spikingjelly.activation_based.quantize.step_quantize(x: Tensor, step: float)[source]
- Parameters:
x (torch.Tensor) – the input tensor
step (float) – the quantize step
- Returns:
the quantized tensor
- Return type:
torch.Tensor
Quantize x to the nearest i * step, where i is an integer. Note that the gradient is defined by \(\frac{\partial y}{\partial x} = 1\).
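A small sketch with step = 0.5 (the commented output assumes round-to-nearest behavior):
import torch
from spikingjelly.activation_based import quantize

x = torch.tensor([0.04, 0.37, 0.63, 0.99])
y = quantize.step_quantize(x, step=0.5)
print(y)  # tensor([0.0000, 0.5000, 0.5000, 1.0000]), each element is i * 0.5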
- spikingjelly.activation_based.quantize.k_bit_quantize(x: Tensor, k: int)[source]
- Parameters:
x (torch.Tensor) – a float tensor whose range is [0, 1]
k (int) – the bit number of the output
- Returns:
y = round((2 ** k - 1) * x) / (2 ** k - 1)
- Return type:
torch.Tensor
The k-bit quantizer defined in DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients. The input whose range is [0, 1] will be quantized to the nearest i / (2 ** k - 1), where i = 0, 1, ..., (2 ** k - 1). Note that the gradient is defined by \(\frac{\partial y}{\partial x} = 1\).
To clamp an input whose range is (-inf, inf) to the range (0, 1), use torch.sigmoid, torch.nn.Hardtanh or the clamp_* functions (e.g., spikingjelly.activation_based.quantize.clamp_by_linear) in spikingjelly.activation_based.quantize.
Code example:
x = torch.rand(8)
y = k_bit_quantize(x, 2)
print(f'x={x}')
print(f'y={y}')
# x=tensor([0.6965, 0.5697, 0.9883, 0.0438, 0.1332, 0.7613, 0.9704, 0.2384])
# y=tensor([0.6667, 0.6667, 1.0000, 0.0000, 0.0000, 0.6667, 1.0000, 0.3333])
- spikingjelly.activation_based.quantize.affine_k_bit_quantize(x: Tensor, k: int, w: Tensor, b: Tensor)[source]
- Parameters:
x (torch.Tensor) – a float tensor whose range is [0, 1]
k (int) – the bit number of the output
w (torch.Tensor) – the weight of the affine transform
b (torch.Tensor) – the bias of the affine transform
- Returns:
y = w * round((2 ** k - 1) * x) / (2 ** k - 1) + b
- Return type:
torch.Tensor
Apply an affine quantization with y = w * round((2 ** k - 1) * x) / (2 ** k - 1) + b.
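For example, with k = 2, w = 2 and b = -1 the quantized values land on four evenly spaced levels in [-1, 1]. A sketch (the exact printed values depend on the random input):
import torch
from spikingjelly.activation_based import quantize

x = torch.rand(4)
w = torch.tensor(2.0)   # weight of the affine transform
b = torch.tensor(-1.0)  # bias of the affine transform
y = quantize.affine_k_bit_quantize(x, k=2, w=w, b=b)
print(y)  # every element is one of {-1., -1/3, 1/3, 1.}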
- spikingjelly.activation_based.quantize.clamp_by_linear(x: Tensor, eps: float = 1e-05)[source]
- Parameters:
x (torch.Tensor) – the input tensor to be normed, whose range is (-inf, inf)
eps (float) – a value added to the denominator for numerical stability. The default value is 1e-5
- Returns:
the normed tensor, whose range is [0., 1.]
- Return type:
torch.Tensor
Use the linear transform to clamp the input range from (-inf, inf) to [0., 1.]:
\[y = \frac{x - \min(x)}{\max(x) - \min(x) + \mathrm{eps}}\]