!2271 add dependency for HostAllGather and HostReduceScatter

Merge pull request !2271 from yihuaijie/master
pull/2271/MERGE
mindspore-ci-bot 5 years ago committed by Gitee
commit a486f1131c

@@ -182,6 +182,9 @@ class HostAllGather(PrimitiveWithInfer):
Note:
    Tensor must have the same shape and format in all processes participating in the collective.
    HostAllGather is a host-side operator; it depends on OpenMPI and requires the build option -M on
    to enable it. Run it with the mpirun command:
    mpirun -output-filename log -merge-stderr-to-stdout -np 3 python test_host_all_gather.py

Args:
    group (Union[tuple[int], list[int]]): The rank_ids of the communication group to work on.
@@ -200,9 +203,13 @@ class HostAllGather(PrimitiveWithInfer):
Examples:
    >>> import mindspore.nn as nn
    >>> import mindspore.context as context
    >>> import mindspore.ops.operations as P
    >>> from mindspore import Tensor
    >>>
    >>> context.set_context(mode=context.GRAPH_MODE, device_target='CPU')
    >>> context.set_mpi_config(enable_mpi=True)
    >>>
    >>> class Net(nn.Cell):
    >>>     def __init__(self):
    >>>         super(Net, self).__init__()
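The docstring example above is truncated by the hunk boundary. As a hedged illustration of what HostAllGather computes (plain Python only, no MPI or MindSpore needed; the function name `all_gather` is hypothetical and stands in for the collective, not for any MindSpore API), each of the n participating processes contributes a tensor and every process receives the concatenation of all contributions in rank order:

```python
def all_gather(rank_tensors):
    """Simulate the AllGather collective: each rank contributes one
    list, and every rank receives the concatenation of all
    contributions in rank order. This sketches the semantics only,
    not the OpenMPI transport the real operator uses."""
    gathered = [x for tensor in rank_tensors for x in tensor]
    # Every participating rank gets its own copy of the full result.
    return [list(gathered) for _ in rank_tensors]

# Three ranks, each contributing a length-2 "tensor" (matching -np 3 above).
outputs = all_gather([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
# Every rank receives [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
```

This is why the output shape grows along the first dimension by a factor of the group size.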
@@ -308,6 +315,9 @@ class HostReduceScatter(PrimitiveWithInfer):
Note:
    Tensor must have the same shape and format in all processes participating in the collective.
    HostReduceScatter is a host-side operator; it depends on OpenMPI and requires the build option
    -M on to enable it. Run it with the mpirun command:
    mpirun -output-filename log -merge-stderr-to-stdout -np 3 python test_host_reduce_scatter.py

Args:
    op (str): Specifies an operation used for element-wise reductions,
@@ -322,10 +332,14 @@ class HostReduceScatter(PrimitiveWithInfer):
Examples:
    >>> import mindspore.nn as nn
    >>> import mindspore.context as context
    >>> import mindspore.ops.operations as P
    >>> from mindspore import Tensor
    >>> from mindspore.ops.operations.comm_ops import ReduceOp
    >>>
    >>> context.set_context(mode=context.GRAPH_MODE, device_target='CPU')
    >>> context.set_mpi_config(enable_mpi=True)
    >>>
    >>> class Net(nn.Cell):
    >>>     def __init__(self):
    >>>         super(Net, self).__init__()
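As with the previous example, the hunk cuts off before the operator is used. A hedged sketch of the ReduceScatter semantics with a sum reduction (plain Python only; `reduce_scatter_sum` is a hypothetical name for illustration, not a MindSpore API): tensors are reduced element-wise across all ranks, and the reduced result is then split evenly, with rank i keeping chunk i:

```python
def reduce_scatter_sum(rank_tensors):
    """Simulate ReduceScatter with op=sum: reduce the tensors
    element-wise across ranks, then split the reduced result
    evenly so that rank i receives chunk i. Semantics only; the
    real operator runs over OpenMPI."""
    n = len(rank_tensors)
    # Element-wise sum across all ranks' contributions.
    reduced = [sum(vals) for vals in zip(*rank_tensors)]
    chunk = len(reduced) // n
    return [reduced[i * chunk:(i + 1) * chunk] for i in range(n)]

# Three ranks, each holding a length-3 "tensor" (matching -np 3 above).
chunks = reduce_scatter_sum([[1, 1, 1], [2, 2, 2], [3, 3, 3]])
# reduced = [6, 6, 6]; each rank keeps one element: [[6], [6], [6]]
```

This is the mirror image of AllGather: the first dimension of each rank's output shrinks by a factor of the group size, which is why the input's first dimension must be divisible by the number of ranks.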
