diff --git a/doc/fluid/design/motivation/api.md b/doc/fluid/design/motivation/api.md
index e6a4638d91..bc222564e3 100644
--- a/doc/fluid/design/motivation/api.md
+++ b/doc/fluid/design/motivation/api.md
@@ -77,8 +77,7 @@ print "The sematic-vector of testA: ", paddle.infer(fA, parameters, testA)
 
 ### Example 2. Sharing Parameters between "Models"
 
-We use [GAN](https://github.com/PaddlePaddle/book/tree/develop/gan) in
-this example. In the following example program, `d0` and `d1`
+We use GAN in this example. In the following example program, `d0` and `d1`
 correspond to the two networks in the following figure:
 
 
diff --git a/doc/fluid/design/multi_devices/operator_kernel_type.md b/doc/fluid/design/multi_devices/operator_kernel_type.md
index 8c1bc8f76a..5e391bd62b 100644
--- a/doc/fluid/design/multi_devices/operator_kernel_type.md
+++ b/doc/fluid/design/multi_devices/operator_kernel_type.md
@@ -75,7 +75,7 @@ Different layout leads to different implementation of the operator kernel. There
 
 - The inference of Layout is at run-time, not at compile-time.
 
-- Every operator has to implement different kernels for different layouts. Let's take MKLDNN as an example. If we want to implement an MKLDNN convolution operator, we have to implement all the kernels for different layouts, which are listed [here](http://01org.github.io/mkl-dnn/structmkldnn_1_1memory.html). And we will have a special macro to register kernels for MKLDNN operators.
+- Every operator has to implement different kernels for different layouts. Let's take MKLDNN as an example. If we want to implement an MKLDNN convolution operator, we have to implement all the kernels for different layouts, which are listed [here](http://intel.github.io/mkl-dnn/structmkldnn_1_1memory.html). And we will have a special macro to register kernels for MKLDNN operators.
 
 `Layout` is also defined as a enum variable:
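
The closing context line of the second hunk introduces the `Layout` enum, but the hunk ends before its definition. A minimal sketch of what such a declaration might look like in Fluid's C++ framework code, assuming enumerator names along the lines of the layouts the doc discusses (the exact definition lives in the PaddlePaddle source tree, outside the lines this patch touches):

```cpp
// Hypothetical sketch only; the real enum is defined in the
// PaddlePaddle framework headers, not in this patch.
enum class DataLayout {
  kNHWC,       // batch, height, width, channels
  kNCHW,       // batch, channels, height, width
  kMKLDNN,     // opaque memory format managed by MKLDNN
  kAnyLayout,  // kernel accepts any layout
};
```

An enum like this lets the kernel-registration machinery key kernels on layout at run-time, which is why, per the bullet above, an MKLDNN convolution operator needs a separately registered kernel for each supported layout.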