nupic.research.frameworks.pytorch.models.mobilenetv1

class MobileNetV1(num_classes=1001, width_mult=1.0)[source]

Bases: torch.nn.Module

See https://arxiv.org/abs/1704.04861.

forward(x)[source]
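
Example (a minimal usage sketch, not taken from the library's own docs; it assumes the class is importable from this module, that the network accepts the standard 224x224 ImageNet input size, and that the default 1001 classes follow the convention of 1000 ImageNet classes plus a background class):

    import torch

    from nupic.research.frameworks.pytorch.models.mobilenetv1 import MobileNetV1

    # Build a full-width MobileNetV1 with the default 1001 output classes.
    model = MobileNetV1(num_classes=1001, width_mult=1.0)
    model.eval()

    # MobileNetV1 was designed around 224x224 RGB inputs (assumed here).
    x = torch.randn(1, 3, 224, 224)
    with torch.no_grad():
        logits = model(x)
    print(logits.shape)  # expected: torch.Size([1, 1001])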
mobile_net_v1_sparse_depth(num_classes=1001, width_mult=1.0, percent_on=0.1, k_inference_factor=1.0, boost_strength=1.0, boost_strength_factor=1.0, duty_cycle_period=1000)[source]

Create a MobileNetV1 network with sparse depthwise layers by replacing the ReLU activation after each depthwise (3x3) convolution with k-winners.

Parameters
  • num_classes (int) – Number of output classes (10 for CIFAR10)

  • width_mult (float) – Width multiplier, used to thin the network

  • percent_on (float) – Only the top k = percent_on * (number of input units) activations are kept; the rest are set to zero.

  • k_inference_factor (float) – During inference (training=False), percent_on is multiplied by this factor. percent_on * k_inference_factor must be strictly less than 1.0, and ideally much lower.

  • boost_strength (float) – Boost strength (0.0 implies no boosting).

  • boost_strength_factor (float) – Boost strength factor to use [0..1]

  • duty_cycle_period (int) – The period used to calculate duty cycles

Returns

Depthwise-sparse MobileNetV1 model
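
Example (a hedged sketch of calling this factory; the import path is taken from the module name above and the keyword names from the signature, while the concrete values and the 224x224 input size are illustrative assumptions):

    import torch

    from nupic.research.frameworks.pytorch.models.mobilenetv1 import (
        mobile_net_v1_sparse_depth,
    )

    # Half-width network; keep the top 10% of depthwise activations during
    # training, relaxed to 15% at inference via k_inference_factor.
    model = mobile_net_v1_sparse_depth(
        num_classes=1001,
        width_mult=0.5,
        percent_on=0.1,
        k_inference_factor=1.5,
        boost_strength=1.5,
        boost_strength_factor=0.9,
        duty_cycle_period=1000,
    )

    x = torch.randn(2, 3, 224, 224)  # assumed ImageNet-sized inputs
    out = model(x)
    print(out.shape)  # expected: torch.Size([2, 1001])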

mobile_net_v1_sparse_point(num_classes=1001, width_mult=1.0, percent_on=0.1, k_inference_factor=1.0, boost_strength=1.0, boost_strength_factor=1.0, duty_cycle_period=1000)[source]

Create a MobileNetV1 network with sparse pointwise layers by replacing the ReLU activation after each pointwise (1x1) convolution with k-winners.

Parameters
  • num_classes (int) – Number of output classes (10 for CIFAR10)

  • width_mult (float) – Width multiplier, used to thin the network

  • percent_on (float) – Only the top k = percent_on * (number of input units) activations are kept; the rest are set to zero.

  • k_inference_factor (float) – During inference (training=False), percent_on is multiplied by this factor. percent_on * k_inference_factor must be strictly less than 1.0, and ideally much lower.

  • boost_strength (float) – Boost strength (0.0 implies no boosting).

  • boost_strength_factor (float) – Boost strength factor to use [0..1]

  • duty_cycle_period (int) – The period used to calculate duty cycles

Returns

Pointwise-sparse MobileNetV1 model
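
Example (a sketch of inspecting which activations became k-winners layers; matching on the class name is an assumption, since the exact sparse activation module used by this framework is not specified here):

    from nupic.research.frameworks.pytorch.models.mobilenetv1 import (
        mobile_net_v1_sparse_point,
    )

    model = mobile_net_v1_sparse_point(percent_on=0.2, boost_strength=1.5)

    # List the modules whose class name suggests a k-winners activation.
    # Adjust the match string if the sparse activation class is named differently.
    kwinner_layers = [
        name for name, module in model.named_modules()
        if "kwinner" in type(module).__name__.lower()
    ]
    print(f"{len(kwinner_layers)} pointwise activations use k-winners")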

separable_convolution2d(in_channels, out_channels, kernel_size=(3, 3), stride=1, width_mult=1.0)[source]

Depthwise separable 2D convolution. This network block is used by MobileNet to factorize a standard convolution into a depthwise convolution and a 1x1 pointwise convolution. The depthwise convolution applies a single filter to each input channel, and the pointwise convolution then applies a 1x1 convolution to combine the depthwise outputs across channels.

See https://arxiv.org/abs/1704.04861

Parameters
  • in_channels – Input channels

  • out_channels – Output channels

  • kernel_size – Kernel size to use; always 3x3 for MobileNet

  • stride – Stride of the convolution

  • width_mult – Width multiplier, used to thin the network
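
For reference, a minimal sketch of the depthwise-separable factorization this helper builds (BatchNorm/ReLU placement and the handling of width_mult are assumptions here, not the library's exact block):

    import torch
    import torch.nn as nn

    def depthwise_separable(in_channels, out_channels, stride=1):
        return nn.Sequential(
            # Depthwise: one 3x3 filter per input channel (groups=in_channels).
            nn.Conv2d(in_channels, in_channels, kernel_size=3, stride=stride,
                      padding=1, groups=in_channels, bias=False),
            nn.BatchNorm2d(in_channels),
            nn.ReLU(inplace=True),
            # Pointwise: a 1x1 convolution mixes the depthwise outputs across channels.
            nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )

    x = torch.randn(1, 32, 56, 56)
    y = depthwise_separable(32, 64)(x)
    print(y.shape)  # torch.Size([1, 64, 56, 56])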