nupic.research.frameworks.pytorch.model_utils

add_sparse_cnn_layer(network, suffix, in_channels, out_channels, use_batch_norm, weight_sparsity, percent_on, k_inference_factor, boost_strength, boost_strength_factor)[source]

Add a sparse CNN layer to the network.

Parameters
  • network – The network to add the sparse layer to

  • suffix – Layer suffix, used to name the layer's components

  • in_channels – Number of input channels

  • out_channels – Number of output channels

  • use_batch_norm – Whether or not to use batch norm

  • weight_sparsity – Percent of weights that are allowed to be non-zero

  • percent_on – Percent of ON (non-zero) units

  • k_inference_factor – During inference, percent_on is increased by this factor

  • boost_strength – Boost strength (0.0 implies no boosting)

  • boost_strength_factor – Boost strength is multiplied by this factor after each epoch
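
A minimal usage sketch. The parameter values are illustrative, not recommended defaults, and the network is assumed to be a torch.nn.Sequential that the helper can append components to:

    from torch import nn
    from nupic.research.frameworks.pytorch.model_utils import add_sparse_cnn_layer

    network = nn.Sequential()
    add_sparse_cnn_layer(
        network,
        suffix="1",              # components are named using this suffix
        in_channels=1,           # e.g. single-channel MNIST images
        out_channels=30,
        use_batch_norm=True,
        weight_sparsity=0.5,     # 50% of the weights may be non-zero
        percent_on=0.1,          # 10% of the units may be ON
        k_inference_factor=1.5,  # percent_on is increased 1.5x at inference
        boost_strength=1.4,
        boost_strength_factor=0.7,  # boost strength decays by 0.7x per epoch
    )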

add_sparse_linear_layer(network, suffix, input_size, linear_n, dropout, use_batch_norm, weight_sparsity, percent_on, k_inference_factor, boost_strength, boost_strength_factor)[source]

Add a sparse linear layer to the network.

Parameters
  • network – The network to add the sparse layer to

  • suffix – Layer suffix, used to name the layer's components

  • input_size – Input size

  • linear_n – Number of units

  • dropout – Dropout probability

  • use_batch_norm – Whether or not to use batch norm

  • weight_sparsity – Percent of weights that are allowed to be non-zero

  • percent_on – Percent of ON (non-zero) units

  • k_inference_factor – During inference, percent_on is increased by this factor

  • boost_strength – Boost strength (0.0 implies no boosting)

  • boost_strength_factor – Boost strength is multiplied by this factor after each epoch
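
A similar sketch for the linear variant, assuming flattened input (again with illustrative values):

    from torch import nn
    from nupic.research.frameworks.pytorch.model_utils import add_sparse_linear_layer

    network = nn.Sequential(nn.Flatten())  # flatten e.g. 28x28 images to 784
    add_sparse_linear_layer(
        network,
        suffix="1",
        input_size=784,
        linear_n=300,            # 300 hidden units
        dropout=0.0,             # no dropout
        use_batch_norm=False,
        weight_sparsity=0.3,     # 30% of the weights may be non-zero
        percent_on=0.1,
        k_inference_factor=1.5,
        boost_strength=1.4,
        boost_strength_factor=0.7,
    )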

evaluate_model(model, loader, device, batches_in_epoch=9223372036854775807, criterion=torch.nn.functional.nll_loss, progress=None)[source]

Evaluate a pre-trained model using the given test dataset loader.

Parameters
  • model (torch.nn.Module) – Pretrained PyTorch model

  • loader (torch.utils.data.DataLoader) – Test dataset loader

  • device (torch.device) – Device to use ("cpu" or "cuda")

  • batches_in_epoch (int) – Max number of mini batches to test on

  • criterion (function) – Loss function to use

  • progress (dict or None) – Optional tqdm progress bar args; None for no progress bar

Returns

Dictionary with the computed "mean_accuracy", "mean_loss", and "total_correct".

Return type

dict
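
A hedged usage sketch, assuming model is an already-trained torch.nn.Module; the torchvision MNIST loader is purely illustrative, any DataLoader works:

    import torch
    from torch.utils.data import DataLoader
    from torchvision import datasets, transforms

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    test_loader = DataLoader(
        datasets.MNIST("data", train=False, download=True,
                       transform=transforms.ToTensor()),
        batch_size=64,
    )
    results = evaluate_model(model.to(device), test_loader, device)
    print(results["mean_accuracy"], results["mean_loss"], results["total_correct"])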

set_random_seed(seed)[source]

Set the PyTorch random seed.

See https://pytorch.org/docs/stable/notes/randomness.html
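
A sketch of the common pattern such a helper follows (the actual implementation may differ, e.g. by also setting the cuDNN determinism flags discussed in the linked notes):

    import random
    import numpy as np
    import torch

    def set_random_seed(seed):
        # Sketch only: seed every RNG the training code touches
        # so runs are repeatable.
        random.seed(seed)          # Python's built-in RNG
        np.random.seed(seed)       # NumPy RNG
        torch.manual_seed(seed)    # PyTorch CPU RNG
        if torch.cuda.is_available():
            torch.cuda.manual_seed_all(seed)  # all CUDA device RNGs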

train_model(model, loader, optimizer, device, criterion=torch.nn.functional.nll_loss, batches_in_epoch=9223372036854775807, batch_callback=None, progress_bar=None)[source]

Train the given model by iterating through mini batches. An epoch ends after one pass through the training set, or when the number of mini batches exceeds the "batches_in_epoch" parameter.

Parameters
  • model (torch.nn.Module) – PyTorch model to be trained

  • loader (torch.utils.data.DataLoader) – Train dataset loader

  • optimizer – Optimizer object used to train the model. This function will train the model on every batch using this optimizer and the given criterion

  • device (torch.device) – Device to use ("cpu" or "cuda")

  • criterion (function) – Loss function to use

  • batches_in_epoch – Max number of mini batches to train on

  • batch_callback (function) – Callback function to be called on every batch with the following parameters: model, batch_idx

  • progress_bar (dict or None) – Optional tqdm progress bar args; None for no progress bar
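
Putting it together with evaluate_model above; the optimizer choice, learning rate, epoch count, and log_batch callback are illustrative assumptions, and train_loader/test_loader are DataLoaders as in the earlier sketch:

    import torch
    from torch import optim

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model = network.to(device)  # e.g. the sparse network assembled above
    optimizer = optim.SGD(model.parameters(), lr=0.01)

    def log_batch(model, batch_idx):
        # Hypothetical callback: report progress every 100 mini batches
        if batch_idx % 100 == 0:
            print("batch", batch_idx)

    for epoch in range(10):
        train_model(model, train_loader, optimizer, device,
                    batch_callback=log_batch)
        print(evaluate_model(model, test_loader, device))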