PyTorch Lightning

Visualize PyTorch Lightning models with W&B

PyTorch Lightning provides a lightweight wrapper for organizing your PyTorch code that makes it easy to add advanced features such as distributed training and 16-bit precision. We provide an integration, WandbLogger, to visualize your results.

from pytorch_lightning.loggers import WandbLogger
from pytorch_lightning import Trainer

# create the logger and pass it to the Trainer
wandb_logger = WandbLogger()
trainer = Trainer(logger=wandb_logger)

Parameters

  • name (str) – display name for the run.

  • save_dir (str) – path where data is saved.

  • offline (bool) – run offline (data can be streamed later to wandb servers).

  • version (str) – sets the run version (ID), mainly used to resume a previous run.

  • anonymous (bool) – enables or explicitly disables anonymous logging.

  • project (str) – the name of the project to which this run will belong.

  • tags (list of str) – tags associated with this run.
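
Putting these parameters together, a configured logger might look like the following sketch; the run name, save directory, project, and tags are placeholder values:

from pytorch_lightning import Trainer
from pytorch_lightning.loggers import WandbLogger

# all names below are placeholders; substitute your own
wandb_logger = WandbLogger(
    name="baseline-run",
    save_dir="./logs",
    offline=False,
    project="my-project",
    tags=["baseline", "demo"],
)
trainer = Trainer(logger=wandb_logger)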

Log model topology and gradients

Log model topology as well as optionally gradients and weights.

wandb_logger.watch(model, log='gradients', log_freq=100)

Parameters:

  • model (nn.Module) – the model to be logged.

  • log (str) – can be "gradients" (default), "parameters", "all", or None.

  • log_freq (int) – how often, in training batches, gradients and parameters are logged.
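
For illustration, watch accepts any torch nn.Module; the toy network below is just a stand-in for your own LightningModule:

import torch.nn as nn
from pytorch_lightning.loggers import WandbLogger

# a toy network standing in for your model (illustration only)
model = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 2))

wandb_logger = WandbLogger()
# log gradient histograms every 100 training batches
wandb_logger.watch(model, log="gradients", log_freq=100)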

Hyperparameters

Record hyperparameters.

Note: Lightning calls this function automatically at the start of training, so you rarely need to call it yourself.

wandb_logger.log_hyperparams(params)

Parameters: params (dict or argparse.Namespace) – the hyperparameters to record.
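
Should you want to call it manually, the sketch below shows both accepted forms; the hyperparameter names and values are placeholders:

from argparse import Namespace

# placeholder hyperparameters passed as a Namespace
hparams = Namespace(learning_rate=1e-3, batch_size=32)
wandb_logger.log_hyperparams(hparams)

# the same values can also be passed as a plain dict
wandb_logger.log_hyperparams({"learning_rate": 1e-3, "batch_size": 32})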

Metrics

Record metrics.

Note: Lightning calls this function automatically during training, so you rarely need to call it yourself.

wandb_logger.log_metrics(metrics, step=None)

Parameters:

  • metrics (dict) – dictionary with metric names as keys and measured quantities as values.

  • step (int|None) – step number at which the metrics should be recorded.
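
If you do call it manually, a minimal sketch looks like this; the metric names and values are placeholders:

# placeholder metric names and values
wandb_logger.log_metrics({"train/loss": 0.35, "train/acc": 0.88}, step=100)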

Example Code

We've created a few examples for you to see how the integration works: