nupic.research.frameworks.pytorch.models.not_so_densenet

class DenseNetCIFAR(block_config=None, depth=100, growth_rate=12, reduction=0.5, num_classes=10, bottleneck_size=4, avg_pool_size=8)

Bases: torch.nn.Sequential

DenseNet CIFAR model, built from torchvision.models.densenet blocks. See the original densenet.lua implementation for details. A minimal usage sketch follows the parameter list below.

Parameters
  • block_config – Number of layers in each pooling block. If None, computed from depth.

  • depth – Total network depth. If None, block_config must be given.

  • growth_rate – Number of filters added by each layer (k in the paper).

  • reduction – Channel compression ratio at the transition layers.

  • num_classes – Number of output classes.

  • bottleneck_size – Multiplicative factor for the number of bottleneck layers.

  • avg_pool_size – Average pooling kernel size for the last transition layer.
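
A minimal usage sketch, assuming nupic.research and PyTorch are installed; the keyword arguments are those documented above:

    import torch

    from nupic.research.frameworks.pytorch.models.not_so_densenet import DenseNetCIFAR

    # Depth-100 DenseNet for CIFAR-10, using the documented defaults.
    model = DenseNetCIFAR(depth=100, growth_rate=12, num_classes=10)
    model.eval()

    # CIFAR images are 3x32x32; forward a dummy batch as a smoke test.
    x = torch.randn(2, 3, 32, 32)
    logits = model(x)
    print(logits.shape)  # expected: torch.Size([2, 10])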

class NoSoDenseNetCIFAR(block_config=None, depth=100, growth_rate=12, reduction=0.5, num_classes=10, bottleneck_size=4, avg_pool_size=4, dense_percent_on=([1.0, 1.0], [1.0, 1.0], [1.0, 1.0], [1.0, 1.0]), dense_sparse_weights=([1.0, 1.0], [1.0, 1.0], [1.0, 1.0], [1.0, 1.0]), transition_percent_on=(1.0, 1.0, 1.0), transition_sparse_weights=(1.0, 1.0, 1.0), classifier_percent_on=1.0, classifier_sparse_weights=1.0, k_inference_factor=1.0, boost_strength=1.5, boost_strength_factor=0.95, duty_cycle_period=1000)

Bases: nupic.research.frameworks.pytorch.models.not_so_densenet.DenseNetCIFAR

Modified DenseNet architecture that uses sparse dense blocks and sparse transition layers. Inspired by the original densenet.lua implementation. A minimal usage sketch follows the parameter list below.

Parameters
  • block_config – Number of layers in each pooling block. If None, computed from depth.

  • depth – Total network depth. If None, block_config must be given.

  • growth_rate – Number of filters added by each layer (k in the paper).

  • reduction – Channel compression ratio at the transition layers.

  • num_classes – Number of output classes.

  • bottleneck_size – Multiplicative factor for the number of bottleneck layers.

  • avg_pool_size – Average pooling kernel size for the last transition layer.

  • dense_percent_on – Percent of units allowed to remain on before each convolution layer of the dense layers.

  • dense_sparse_weights – Percent of weights allowed to be non-zero in each convolution of the dense layers.

  • transition_percent_on – Percent of units allowed to remain on in the convolution layer of each transition layer.

  • transition_sparse_weights – Percent of weights allowed to be non-zero in the convolution of each transition layer.

  • classifier_percent_on – Percent of units allowed to remain on after the last batch norm, before the classifier.

  • classifier_sparse_weights – Percent of weights allowed to be non-zero in the classifier.

  • k_inference_factor – During inference (training=False), percent_on is increased by this factor in all sparse layers.

  • boost_strength – Boost strength (0.0 implies no boosting).

  • boost_strength_factor – Boost strength decay factor, in [0, 1].

  • duty_cycle_period – The period used to calculate duty cycles.
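
A minimal sketch of the sparse variant. The specific percent_on values below are illustrative, not recommended settings; the tuple lengths mirror the documented defaults (four dense blocks, three transition layers):

    import torch

    from nupic.research.frameworks.pytorch.models.not_so_densenet import NoSoDenseNetCIFAR

    # Illustrative sparsity settings: 50% of units active in each transition
    # layer and before the classifier; dense-block settings keep the defaults.
    model = NoSoDenseNetCIFAR(
        depth=100,
        num_classes=10,
        transition_percent_on=(0.5, 0.5, 0.5),
        classifier_percent_on=0.5,
        boost_strength=1.5,
        boost_strength_factor=0.95,
    )

    model.eval()  # at inference, percent_on is scaled by k_inference_factor
    x = torch.randn(2, 3, 32, 32)
    logits = model(x)  # expected shape: (2, 10)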