spikingjelly.activation_based.quantize package

Module contents

class spikingjelly.activation_based.quantize.round_atgf(*args, **kwargs)[source]

Bases: Function

static forward(ctx, x: Tensor)[source]
static backward(ctx, grad_output: Tensor)[source]
spikingjelly.activation_based.quantize.round(x: Tensor)[source]

Parameters:
  x (torch.Tensor) – the input tensor

Returns:
  the output tensor

Return type:
  torch.Tensor

Apply y = torch.round(x) with the gradient redefined as \(\frac{\partial y}{\partial x} = 1\).

class spikingjelly.activation_based.quantize.ceil_atgf(*args, **kwargs)[source]

Bases: Function

static forward(ctx, x: Tensor)[source]
static backward(ctx, grad_output: Tensor)[source]
spikingjelly.activation_based.quantize.ceil(x: Tensor)[source]

Parameters:
  x (torch.Tensor) – the input tensor

Returns:
  the output tensor

Return type:
  torch.Tensor

Apply y = torch.ceil(x) with the gradient redefined as \(\frac{\partial y}{\partial x} = 1\).

class spikingjelly.activation_based.quantize.floor_atgf(*args, **kwargs)[source]

Bases: Function

static forward(ctx, x: Tensor)[source]
static backward(ctx, grad_output: Tensor)[source]
spikingjelly.activation_based.quantize.floor(x: Tensor)[source]

Parameters:
  x (torch.Tensor) – the input tensor

Returns:
  the output tensor

Return type:
  torch.Tensor

Apply y = torch.floor(x) with the gradient redefined as \(\frac{\partial y}{\partial x} = 1\).
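
round, ceil, and floor above all wrap the same pattern: the forward pass applies the step-like op, and the backward pass hands the upstream gradient through unchanged. A minimal sketch of that pattern (the class name _RoundSTE is hypothetical, for illustration only; the library's own implementation is round_atgf):

import torch

class _RoundSTE(torch.autograd.Function):
    # Hypothetical illustration of the straight-through pattern used here
    @staticmethod
    def forward(ctx, x: torch.Tensor):
        return torch.round(x)

    @staticmethod
    def backward(ctx, grad_output: torch.Tensor):
        # dy/dx is redefined as 1, so the upstream gradient passes through
        return grad_output

x = torch.tensor([0.4, 1.6], requires_grad=True)
y = _RoundSTE.apply(x)
y.sum().backward()
print(y)       # tensor([0., 2.])
print(x.grad)  # tensor([1., 1.])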

spikingjelly.activation_based.quantize.clamp_backward(grad_output: Tensor, x: Tensor, min_value: float, max_value: float)[source]
class spikingjelly.activation_based.quantize.clamp_atgf(*args, **kwargs)[source]

Bases: Function

static forward(ctx, x: Tensor, min_value: float, max_value: float)[source]
static backward(ctx, grad_output: Tensor)[source]
spikingjelly.activation_based.quantize.clamp(x: Tensor, min_value: float, max_value: float)[source]

Parameters:
  • x (torch.Tensor) – the input tensor

  • min_value (float) – lower bound of the range to be clamped to

  • max_value (float) – upper bound of the range to be clamped to

Returns:
  the output tensor

Return type:
  torch.Tensor

Apply y = torch.clamp(x, min_value, max_value) with the gradient redefined as:

\[\begin{split}\frac{\partial y}{\partial x} = \begin{cases} 1, & \rm{min\_value} \leq x \leq \rm{max\_value} \\ 0, & \rm{otherwise} \end{cases}\end{split}\]
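
A minimal usage sketch showing that the gradient only flows where the input lies inside [min_value, max_value]:

import torch
from spikingjelly.activation_based import quantize

x = torch.tensor([-2.0, 0.5, 3.0], requires_grad=True)
y = quantize.clamp(x, min_value=0.0, max_value=1.0)
y.sum().backward()
print(y)       # tensor([0.0000, 0.5000, 1.0000])
print(x.grad)  # tensor([0., 1., 0.])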
spikingjelly.activation_based.quantize.step_quantize_forward(x: Tensor, step: float)[source]
class spikingjelly.activation_based.quantize.step_quantize_atgf(*args, **kwargs)[source]

Bases: Function

static forward(ctx, x: Tensor, step: float)[source]
static backward(ctx, grad_output: Tensor)[source]
spikingjelly.activation_based.quantize.step_quantize(x: Tensor, step: float)[source]

Parameters:
  • x (torch.Tensor) – the input tensor

  • step (float) – the quantization step

Returns:
  the quantized tensor

Return type:
  torch.Tensor

Quantize x to the nearest i * step, where i is an integer.

Note that the gradient is defined by \(\frac{\partial y}{\partial x} = 1\).

(Figure: the step_quantize curve, ../_images/step_quantize1.svg)
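
Code example (a minimal usage sketch; each value is rounded to the nearest multiple of step):

import torch
from spikingjelly.activation_based import quantize

x = torch.tensor([0.12, 0.74, 1.31])
y = quantize.step_quantize(x, step=0.5)
print(y)  # tensor([0.0000, 0.5000, 1.5000])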
spikingjelly.activation_based.quantize.k_bit_quantize_forward(x: Tensor, k: int)[source]
class spikingjelly.activation_based.quantize.k_bit_quantize_atgf(*args, **kwargs)[source]

Bases: Function

static forward(ctx, x: Tensor, k: int)[source]
static backward(ctx, grad_output)[source]
spikingjelly.activation_based.quantize.k_bit_quantize(x: Tensor, k: int)[source]

Parameters:
  • x (torch.Tensor) – a float tensor whose range is [0, 1]

  • k (int) – the bit number of the output

Returns:
  y = round((2 ** k - 1) * x) / (2 ** k - 1)

Return type:
  torch.Tensor

The k-bit quantizer defined in DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients.

The input, whose range is [0, 1], is quantized to the nearest i / (2 ** k - 1), where i = 0, 1, ..., (2 ** k - 1).

Note that the gradient is defined by \(\frac{\partial y}{\partial x} = 1\).

To map an input whose range is (-inf, inf) into [0, 1], use torch.sigmoid, torch.nn.Hardtanh, or the clamp_* functions in spikingjelly.activation_based.quantize (e.g., spikingjelly.activation_based.quantize.clamp_by_linear).

(Figure: the k_bit_quantize curve, ../_images/k_bit_quantize.svg)

Code example:

import torch
from spikingjelly.activation_based.quantize import k_bit_quantize

x = torch.rand(8)
y = k_bit_quantize(x, 2)
print(f'x={x}')
print(f'y={y}')
# x=tensor([0.6965, 0.5697, 0.9883, 0.0438, 0.1332, 0.7613, 0.9704, 0.2384])
# y=tensor([0.6667, 0.6667, 1.0000, 0.0000, 0.0000, 0.6667, 1.0000, 0.3333])
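
Since the gradient is redefined as 1, a backward pass returns the upstream gradient unchanged, which can be checked directly (a minimal sketch):

import torch
from spikingjelly.activation_based.quantize import k_bit_quantize

x = torch.rand(4, requires_grad=True)
y = k_bit_quantize(x, 2)
y.sum().backward()
print(x.grad)  # tensor([1., 1., 1., 1.])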
spikingjelly.activation_based.quantize.affine_k_bit_quantize(x: Tensor, k: int, w: Tensor, b: Tensor)[source]

Parameters:
  • x (torch.Tensor) – a float tensor whose range is [0, 1]

  • k (int) – the bit number of the output

  • w (torch.Tensor) – the weight of the affine transform

  • b (torch.Tensor) – the bias of the affine transform

Returns:
  y = w * round((2 ** k - 1) * x) / (2 ** k - 1) + b

Return type:
  torch.Tensor

Apply an affine quantization with y = w * round((2 ** k - 1) * x) / (2 ** k - 1) + b.
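
A minimal usage sketch; with k=2 the quantized levels are {0, 1/3, 2/3, 1}, which the affine transform then rescales by w and shifts by b:

import torch
from spikingjelly.activation_based import quantize

x = torch.tensor([0.0, 0.4, 1.0])
w = torch.tensor(2.0)
b = torch.tensor(-1.0)
y = quantize.affine_k_bit_quantize(x, 2, w, b)
print(y)  # tensor([-1.0000, -0.3333,  1.0000])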

spikingjelly.activation_based.quantize.clamp_by_linear(x: Tensor, eps: float = 1e-05)[source]

Parameters:
  • x (torch.Tensor) – the input tensor to be normed, whose range is (-inf, inf)

  • eps (float) – a value added to the denominator for numerical stability. The default value is 1e-5

Returns:
  the normed tensor, whose range is [0., 1.]

Return type:
  torch.Tensor

Use a linear transform to clamp the input range from (-inf, inf) to [0., 1.]:

\[y = \frac{x - \rm{min}(x)}{\rm{max}(x) - \rm{min}(x) + eps}\]
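
A minimal usage sketch; the minimum of x maps to 0, and the maximum maps to just below 1 because of the eps in the denominator:

import torch
from spikingjelly.activation_based import quantize

x = torch.tensor([-3.0, 0.0, 5.0])
y = quantize.clamp_by_linear(x)
print(y)  # tensor([0.0000, 0.3750, 1.0000])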