To train machine learning models efficiently, you will often need to scale your training across multiple GPUs, or even multiple machines. TensorFlow now offers rich functionality to achieve this with just a few lines of code. Join this session to learn how to set it up.
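As a taste of the "few lines of code" the session demonstrates, here is a minimal sketch of multi-GPU training with MirroredStrategy. The session predates TensorFlow 2.x, so this uses today's `tf.distribute` namespace rather than the `tf.contrib.distribute` module shown at I/O '18; the model and data are placeholders.

```python
import numpy as np
import tensorflow as tf

# MirroredStrategy replicates the model on every visible GPU
# (it falls back to a single device if no GPU is available).
strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)

with strategy.scope():
    # Variables created inside the scope are mirrored across replicas.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="sgd", loss="mse")

# Gradients are aggregated across replicas automatically during fit().
x = np.random.rand(64, 8).astype("float32")
y = np.random.rand(64, 1).astype("float32")
model.fit(x, y, batch_size=32, epochs=1, verbose=0)
```

The only distribution-specific lines are creating the strategy and opening its scope; the rest is ordinary Keras code.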
Rate this session by signing in on the I/O website here → https://goo.gl/sBZMEm
Distribution Strategy API:
ResNet50 Model Garden example with MirroredStrategy API:
Commands to set up a GCE instance and run distributed training:
Multi-machine distributed training with train_and_evaluate:
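For the multi-machine case, TensorFlow discovers the cluster through the `TF_CONFIG` environment variable, a JSON description of every machine's address and this machine's role. A hedged sketch (the host names and ports below are placeholders, not from the session):

```python
import json
import os

# Hypothetical cluster: one chief, two workers, one parameter server.
tf_config = {
    "cluster": {
        "chief":  ["host0:2222"],
        "worker": ["host1:2222", "host2:2222"],
        "ps":     ["host3:2222"],
    },
    # Each machine sets its own role; this one is worker 0.
    "task": {"type": "worker", "index": 0},
}
os.environ["TF_CONFIG"] = json.dumps(tf_config)

# tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
# reads TF_CONFIG at startup to find its peers and pick its role.
```

The same training script runs on every machine; only the `"task"` entry differs per host.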
Watch more TensorFlow sessions from I/O '18 here → https://goo.gl/GaAnBR
See all the sessions from Google I/O '18 here → https://goo.gl/q1Tr8x
Subscribe to the TensorFlow channel → https://goo.gl/ht3WGe