Update execution_strategy option default value (#18183)

* update execution_strategy option default value
test=develop

* fix doc error
test=develop
Author: chengduo (committed via GitHub)
parent c2fb9b906a
commit 25f3cd6486

@@ -647,7 +647,7 @@ void ParallelExecutor::FeedAndSplitTensorIntoLocalScopes(
         "The number(%d) of samples of "
         "current batch is less than the count(%d) of "
         "devices(%s), currently, it is not allowed. ",
-        member_->places_.size(), lod_tensors.size(),
+        lod_tensors.size(), member_->places_.size(),
         (is_cpu_place ? "CPU" : "GPU"));
     if (is_cpu_place) {
       error_info +=
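
The hunk above only swaps two arguments so that they line up with the placeholders in the message: the first %d is meant to be the sample count of the current batch and the second %d the device count. A minimal, self-contained Python sketch of that mapping; num_samples and num_devices are illustrative stand-ins for lod_tensors.size() and member_->places_.size() in the C++ code:

num_samples = 2     # samples in the current batch (lod_tensors.size())
num_devices = 4     # devices the executor runs on (member_->places_.size())
is_cpu_place = False

if num_samples < num_devices:
    # After the fix, the sample count fills the first %d and the device
    # count fills the second %d, matching the wording of the message.
    print("The number(%d) of samples of current batch is less than the "
          "count(%d) of devices(%s), currently, it is not allowed." %
          (num_samples, num_devices, "CPU" if is_cpu_place else "GPU"))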

@@ -1179,7 +1179,8 @@ All parameter, weight, gradient are variables in Paddle.
           },
           R"DOC(The type is BOOL, allow_op_delay represents whether to delay the
                 communication operators to run, it may make the execution faster.
-                Note that in some models, allow_op_delay may cause program hang. Default False.)DOC")
+                Note that this option is invalid now, and it will be removed in
+                next version. Default False.)DOC")
       .def_property(
           "num_iteration_per_drop_scope",
           [](const ExecutionStrategy &self) {
@@ -1191,7 +1192,8 @@ All parameter, weight, gradient are variables in Paddle.
           R"DOC(The type is INT, num_iteration_per_drop_scope indicates how
                 many iterations to clean up the temp variables which
                 is generated during execution. It may make the execution faster,
-                because the temp variable's shape maybe the same between two iterations. Default 100.
+                because the temp variable's shape maybe the same between two iterations.
+                Default 1.
                 NOTES:
                     1. If you fetch data when calling the 'run', the ParallelExecutor
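
The two docstring hunks describe the Python-facing ExecutionStrategy options: allow_op_delay is now documented as a no-op slated for removal, and the default of num_iteration_per_drop_scope drops from 100 to 1. A minimal usage sketch, assuming the Fluid 1.x Python API that this pybind code exposes; the printed values are what is expected after this commit:

import paddle.fluid as fluid

exec_strategy = fluid.ExecutionStrategy()

# allow_op_delay is documented above as invalid and slated for removal,
# so it is left at its default (False).
print(exec_strategy.allow_op_delay)                # expected: False

# num_iteration_per_drop_scope should now default to 1 (previously 100).
print(exec_strategy.num_iteration_per_drop_scope)  # expected: 1

# Raising it keeps the temp variables in the local scopes alive for more
# iterations, which can be faster when shapes are stable across iterations.
exec_strategy.num_iteration_per_drop_scope = 10

The strategy object would then be passed on as before, for example via CompiledProgram(...).with_data_parallel(..., exec_strategy=exec_strategy); this commit only changes the documented defaults, not how the options are wired up.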
