When wandb.init() is called from your training script, an API call is made to create a run object on our servers. A new process is started to stream and collect metrics, keeping all threads and logic out of your primary process. Your script runs normally and writes to local files, while the separate process streams them to our servers along with system metrics. You can always turn off streaming by running wandb off from your training directory, or by setting the WANDB_MODE environment variable to "dryrun".
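The same toggle can be flipped from inside Python, as long as it happens before wandb.init() runs. A minimal sketch, touching only the process environment:

```python
import os

# Must be set before wandb.init() is called; "dryrun" keeps all
# metrics on local disk instead of streaming them to the servers.
os.environ["WANDB_MODE"] = "dryrun"
```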
It is extremely important to us that we never interfere with your training runs. We run wandb in a separate process to make sure that if wandb somehow crashes, your training will continue to run. If the internet goes out, wandb will continue to retry sending data to wandb.com.
Wandb should have a negligible effect on your training performance if you use it normally. Normal use means logging less than once a second and less than a few megabytes of data at each step. Wandb runs in a separate process and its function calls don't block, so a brief network outage or intermittent disk read/write issues should not affect your performance. It is possible to log a huge amount of data quickly, and if you do, you might create disk I/O issues. If you have any questions, please don't hesitate to contact us.
If you're training on an offline machine and want to upload your results to our servers afterwards, we have a feature for you!
Set the environment variable WANDB_MODE=dryrun to save the metrics locally; no internet required. When you're ready, run wandb init in your directory to set the project name, then run wandb sync YOUR_RUN_DIRECTORY to push the metrics to our cloud service and see your results in our hosted web app.
You can pass a SHA or other unique identifier to wandb.config.update(...) to associate a dataset with a training run. W&B does not store any data unless wandb.save is called with the local file name.
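One common way to build such an identifier is to hash the dataset file. The helper below is an illustrative sketch (the function name and the commented wandb call are ours, not part of the library):

```python
import hashlib

def dataset_fingerprint(path):
    """Return the SHA-256 hex digest of a file, usable as the unique
    identifier passed to wandb.config.update()."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in 1 MiB chunks so large datasets don't need to fit in memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Illustrative usage inside a training script with an active run:
# wandb.config.update({"dataset_sha": dataset_fingerprint("train.csv")})
```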
By default, metrics are collected every 2 seconds and averaged over a 30 second period. If you need higher resolution metrics, email us at email@example.com.
Currently the library only works with Python 2.7+ & 3.6+ projects. The architecture mentioned above should enable us to integrate with other languages easily. If you have a need for monitoring other languages, send us a note at firstname.lastname@example.org.
By default, we don't log any of your dataset examples. You can explicitly turn this feature on to see example predictions in our web interface.
There are two ways to turn off code logging:
Set WANDB_DISABLE_CODE to true to turn off all code tracking. We won't pick up the git SHA or the diff patch.
Set WANDB_IGNORE_GLOBS to *.patch to turn off syncing the diff patch to our servers. You'll still have it locally and be able to apply it with the wandb restore command.
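Both switches can also be set from Python, as long as it happens before wandb.init() runs. A sketch (set whichever option fits your needs):

```python
import os

# Option 1: turn off all code tracking (no git SHA, no diff patch).
os.environ["WANDB_DISABLE_CODE"] = "true"

# Option 2: keep code tracking but skip syncing the patch file.
os.environ["WANDB_IGNORE_GLOBS"] = "*.patch"
```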
"Is the logging function lazy? I don't want to be dependent on the network to send the results to your servers and then carry on with my local operations."
Calling wandb.log writes a line to a local file; it does not block on any network calls. When you call wandb.init we launch a new process on the same machine that listens for filesystem changes and talks to our web service asynchronously from your training process.
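As a mental model, the pattern looks roughly like this. This is our own illustrative sketch (log_locally and run.jsonl are made-up names), not wandb's actual internals:

```python
import json
import time

def log_locally(metrics, path="run.jsonl"):
    """Append one metrics record to a local file and return immediately.

    This mirrors the non-blocking pattern described above: the training
    process only writes to disk, while a separate process watches the
    file and uploads new lines over the network on its own schedule.
    """
    record = {"_timestamp": time.time(), **metrics}
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
```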
We use the same exponential moving average formula as TensorBoard. You can find an explanation here: https://stackoverflow.com/questions/42281844/what-is-the-mathematics-behind-the-smoothing-parameter-in-tensorboards-scalar.
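Concretely, the formula described in that answer can be written as follows (a sketch; smooth is an illustrative name, and weight corresponds to the smoothing slider in the UI):

```python
def smooth(values, weight=0.6):
    """TensorBoard-style exponential moving average.

    weight is in [0, 1); higher values smooth more. The first data
    point is used to initialize the running average.
    """
    smoothed = []
    last = values[0]
    for v in values:
        last = last * weight + (1 - weight) * v
        smoothed.append(last)
    return smoothed
```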
We love the TensorBoard folks, and we have a TensorBoard integration! When the cofounders started working on W&B, they were inspired to build a tool for the frustrated TensorBoard users at OpenAI, and to improve experiment tracking for everyone. Here are a few things we focused on improving:
Reproduce models: Weights & Biases is good for experimentation, exploration, and reproducing models later. We capture not just the metrics, but also the hyperparameters and version of the code, and we can save your model checkpoints for you so your project is reproducible.
Automatic organization: If you hand off a project to a collaborator or take a vacation, W&B makes it easy to see all the models you've tried so you're not wasting hours re-running old experiments.
Fast, flexible integration: Add W&B to your project in 5 minutes. Install our free open-source Python package and add a couple of lines to your code, and every time you run your model you'll have nice logged metrics and records.
Persistent, centralized dashboard: Anywhere you train your models, whether on your local machine, your lab cluster, or spot instances in the cloud, we give you the same centralized dashboard. You don't need to spend your time copying and organizing TensorBoard files from different machines.
Powerful table: Search, filter, sort, and group results from different models. It's easy to look over thousands of model versions and find the best performing models for different tasks. TensorBoard isn't built to work well on large projects.
Tools for collaboration: Use W&B to organize complex machine learning projects. It's easy to share a link to W&B, and you can use private teams to have everyone send results to a shared project. We also support collaboration via reports: add interactive visualizations and describe your work in markdown. This is a great way to keep a work log, share findings with your supervisor, or present findings to your lab.
Get started with a free personal account →
At the top of your training script when you call wandb.init, pass in an experiment name, like this:
wandb.init(name="my awesome run")
If you'd rather keep the randomly generated run name, call wandb.run.save() and then get the name with wandb.run.name.
We don't have an Anaconda package, but you should be able to install wandb with pip inside your conda environment:
conda activate myenv
pip install wandb
If you run into issues with this install, please let us know. This Anaconda doc on managing packages has some helpful guidance.
Set the environment variable WANDB_SILENT.
In a notebook:
%env WANDB_SILENT true
In a python script:
os.environ["WANDB_SILENT"] = "true"
Press ctrl+C on your keyboard to stop a script that is instrumented with wandb.
If you're seeing SSL or network errors like wandb: Network error (ConnectionError), entering retry loop., you can try a couple of different approaches to solving the issue:
Upgrade your SSL certificates. If you're running the script on an Ubuntu server, run update-ca-certificates. We can't sync training logs without a valid SSL certificate because that would be a security vulnerability.
If your network is flaky, run training in offline mode and sync the files to us from a machine that has internet access.
Try running W&B Local, which operates on your machine and doesn't sync files to our cloud servers.
SSL CERTIFICATE_VERIFY_FAILED: this error could be due to your company's firewall; setting up local certificate authorities may resolve it.
If our library is unable to connect to the internet it will enter a retry loop and keep attempting to stream metrics until the network is restored. During this time your program is able to continue running.
If you need to run on a machine without internet access, set WANDB_MODE=dryrun so metrics are only stored locally on your hard drive. Later, run wandb sync DIRECTORY to stream the data to our server.