10/31/2022

Tuning deep learning pipelines is like finding the right gear combination (Image by Tim Mossholder on Unsplash)

Why should you read this post?

Training and inference of deep learning models involve many steps, and the faster each experiment iteration is, the more we can optimize the whole model's prediction performance given limited time and resources. I collected and organized several PyTorch tricks and tips to maximize the efficiency of memory usage and minimize the run time. To better leverage these tips, we also need to understand how and why they work.

I start by providing a full list and a combined code snippet, in case you'd like to jump straight into optimizing your scripts, and then dive into each tip in detail afterward. For each tip, I also provide code snippets and annotate whether it is specific to certain device types (CPU/GPU) or model types.

The tips:

- Dataloader(dataset, num_workers=4*num_GPU)
- Directly create vectors/matrices/tensors as torch.Tensor, on the device where they will run operations
- Avoid unnecessary data transfer between CPU and GPU
- Use tensor.to(non_blocking=True) when applicable, to overlap data transfers with computation
- Fuse pointwise (elementwise) operations into a single kernel with PyTorch JIT
- Set the sizes of all architecture dimensions as multiples of 8 (for the FP16 part of mixed precision)
- Set the batch size as a multiple of 8 and maximize GPU memory usage
- Use mixed precision for the forward pass (but not the backward pass)
- Set gradients to None (e.g., model.zero_grad(set_to_none=True)) before the optimizer updates the weights
- Gradient accumulation: update weights only every x batches to mimic the larger batch size
- CNN (Convolutional Neural Network) specific: use the channels_last memory format for 4D NCHW tensors
- CNN specific: turn off bias for convolutional layers that sit right before batch normalization
- Use DistributedDataParallel instead of DataParallel
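The post promises a code snippet combining the tips. Below is a minimal sketch of what such a combined training loop could look like, using a toy CNN; the model, dataset, and hyperparameters are illustrative assumptions, not taken from the original post. It folds in multiples-of-8 sizing, bias-free convolution before batch normalization, channels_last, non_blocking transfers, autocast on the forward pass only, set_to_none=True, and gradient accumulation. DistributedDataParallel and JIT fusion are omitted for brevity.

```python
# Hedged sketch combining several of the tips above on a toy CNN.
# All sizes, the model, and the synthetic dataset are illustrative.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
use_amp = device.type == "cuda"  # mixed precision mainly pays off on GPU

# Tip: channel/feature sizes as multiples of 8 (helps FP16 tensor cores)
model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1, bias=False),  # Tip: no bias right before BatchNorm
    nn.BatchNorm2d(64),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(64, 8),
).to(device, memory_format=torch.channels_last)  # Tip: channels_last for 4D NCHW

# Tip: batch size as a multiple of 8; the post suggests num_workers = 4 * num_GPU
# (0 here so the sketch runs anywhere); pinned host memory makes
# non_blocking copies truly asynchronous on GPU.
data = TensorDataset(torch.randn(64, 3, 8, 8), torch.randint(0, 8, (64,)))
loader = DataLoader(data, batch_size=8, num_workers=0,
                    pin_memory=(device.type == "cuda"))

optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scaler = torch.cuda.amp.GradScaler(enabled=use_amp)
accum_steps = 2  # Tip: gradient accumulation mimics batch_size * accum_steps

model.train()
for step, (x, y) in enumerate(loader):
    # Tip: non_blocking=True overlaps host-to-device copies with compute
    x = x.to(device, memory_format=torch.channels_last, non_blocking=True)
    y = y.to(device, non_blocking=True)

    # Tip: autocast the forward pass only; backward runs outside the context
    with torch.autocast(device_type=device.type, enabled=use_amp):
        loss = nn.functional.cross_entropy(model(x), y) / accum_steps
    scaler.scale(loss).backward()

    if (step + 1) % accum_steps == 0:
        scaler.step(optimizer)
        scaler.update()
        # Tip: set gradients to None instead of zeroing them in place
        optimizer.zero_grad(set_to_none=True)

print("final loss:", loss.item())
```

On a CPU-only machine the snippet still runs, with autocast and the gradient scaler disabled, so the same script can be developed locally and benefit from the GPU-specific tips when deployed.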