diff --git a/docs/mindspore/source_en/migration_guide/sample_code.md b/docs/mindspore/source_en/migration_guide/sample_code.md
index 9e87cd6170ede9e5fbb05ee5d94dbe6e4b33b4ed..16cd23f9d2df39c336dea55d71588bc322c03325 100644
--- a/docs/mindspore/source_en/migration_guide/sample_code.md
+++ b/docs/mindspore/source_en/migration_guide/sample_code.md
@@ -25,7 +25,7 @@ ResNet50 is a classic deep neural network in CV, which attracts more developers'
 
 The official PyTorch implementation script can be found at [torchvision model](https://github.com/pytorch/vision/blob/master/torchvision/models/resnet.py) or [Nvidia PyTorch implementation script](https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/Classification/ConvNets/resnet50v1.5), which includes implementations of the mainstream ResNet family of networks (ResNet18, ResNet34, ResNet50, ResNet101, and ResNet152). The dataset used for ResNet50 is ImageNet2012, and the convergence accuracy can be found in [PyTorch Hub](https://pytorch.org/hub/pytorch_vision_resnet/#model-description).
 
-Developers can run PyTorch-based ResNet50 scripts directly on the benchmark hardware environment and then computes the performance data, or they can refer to the official data on the same hardware environment. For example, when we benchmark the Nvidia DGX-1 32GB (8x V100 32GB) hardware, we can refer to [Nvidia's official ResNet50 performance data](https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/Classification/ConvNets/resnet50v15#training-performance-nvidia-dgx-1-32gb-8x-v100-32gb).
+Developers can run PyTorch-based ResNet50 scripts directly on the benchmark hardware environment and then compute the performance data, or they can refer to the official data on the same hardware environment. For example, when we benchmark the Nvidia DGX-1 32GB (8x V100 32GB) hardware, we can refer to [Nvidia's official ResNet50 performance data](https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/Classification/ConvNets/resnet50v1.5#training-performance-nvidia-dgx-1-32gb-8x-v100-32gb).
 
 ### Reproduce the Migration Target
 
diff --git a/docs/mindspore/source_zh_cn/design/comm_fusion.md b/docs/mindspore/source_zh_cn/design/comm_fusion.md
index 20e00fea843d2229036892715a80029203403d43..1e47797dd6b42c5348cdcf099e36ca8f88cdc153 100644
--- a/docs/mindspore/source_zh_cn/design/comm_fusion.md
+++ b/docs/mindspore/source_zh_cn/design/comm_fusion.md
@@ -46,7 +46,7 @@ MindSpore provides two interfaces to enable communication fusion; each is described below.
 
 #### Configuration in the Auto-Parallel Scenario
 
-In the auto-parallel or semi-auto-parallel scenario, when configuring the parallel strategy via `context.set_auto_parallel_context`, users can use the `comm_fusion` parameter provided by this interface to set the strategy, specifying either the index method or the fusion buffer method. For detailed parameter descriptions, see [Distributed Parallel Interface Description](auto_parallel.md).
+In the auto-parallel or semi-auto-parallel scenario, when configuring the parallel strategy via `context.set_auto_parallel_context`, users can use the `comm_fusion` parameter provided by this interface to set the strategy, specifying either the index method or the fusion buffer method.
 
 #### Using the Interface Provided by `Cell`
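For context on the second hunk, the `comm_fusion` parameter it describes is passed to `set_auto_parallel_context` as a dict keyed by communication-operator type. A minimal configuration sketch, assuming a recent MindSpore release where `comm_fusion` is supported and a distributed environment has been initialized (the threshold value `32` here is an illustrative choice, not a recommendation):

```python
from mindspore import context

# Enable semi-auto-parallel and configure gradient AllReduce fusion.
# "mode" selects the fusion strategy: "auto" uses a default fusion
# buffer size, "size" fuses by a buffer-size threshold (in MB), and
# "index" fuses by manually specified layer indices.
context.set_auto_parallel_context(
    parallel_mode=context.ParallelMode.SEMI_AUTO_PARALLEL,
    comm_fusion={"allreduce": {"mode": "size", "config": 32}},
)
```

With `"mode": "index"`, `config` would instead be a list of layer indices marking fusion-group boundaries, which corresponds to the "index method" named in the hunk above.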