* Add a class TensorInplaceVersion to track the inplace version, and put it in framework::Tensor instead of Allocation or Variable.
* Add a new attribute `_inplace_version` for VarBase.
* Raise an exception if an inplace operation can result in incorrect gradient computation (see the sketch after these notes).
* Add a new interface `_bump_inplace_version()` for VarBase to bump the version whenever the Tensor is modified through an inplace operation.
* For the API `assign`, call `_bump_inplace_version()` when it is an inplace operation in dynamic mode.
* Use original var_wrapper if the inplace_version is not changed.
* Replace SnapshotVarWrapperList with SnapshotVarWrapper to optimize performance.
* Fix a wrong variable name.
* Add comments
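A minimal sketch of how the guard can surface in dygraph mode (illustrative only; whether a given op triggers the check depends on which tensors its backward pass saves):

```python
import paddle

a = paddle.to_tensor([1.0, 2.0, 3.0], stop_gradient=False)
b = paddle.tanh(a)      # tanh saves its output `b` for the backward pass
loss = b.sum()

# In dygraph mode, assign is in-place and bumps b's version
# via _bump_inplace_version().
paddle.assign(paddle.zeros_like(b), b)

# Backward detects that a tensor saved for gradient computation was
# modified in place and raises an exception instead of silently
# producing a wrong gradient.
loss.backward()
```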
* Move member functions of TranslatedLayer out of the enclosing function
* edit code according to review
* Edit input argument of '_run_static_graph'
* reset due to Segmentation fault
* rename variables when stitching graph
* modify code according to CI
* Add comments to '__impl__'
* remove blanks before 'Get...'
* edit code according to review
* Add a comment to '_execution_method_creator'
* Edit a comment to '_execution_method_creator'
* Generate code coverage reports only for incremental files, test=develop
* test for diff python file, test=develop
* fix no python diff report, test=develop
* add cc test file, test=develop
* fix bug in generic.cmake, test=develop
* for debugging the missing cc report, test=develop
* modify compare branch from test_pr to test, test=develop
* fix bug, test=develop
* test for h file changed, test=develop
* debug for redefinition of argument optimize error, test=develop
* disable -O3 for test, test=develop
* remove -O3 for test, test=develop
* remove coverage option for nvcc, test=develop
* use CMAKE_CXX_FLAGS to enable the coverage option when a header file changes, test=develop
* re-enable -O3, test=develop
* remove debug code, test=develop
* remove unused code, test=develop
test_mnist failed on CUDA 11. After debugging, we found that it is caused by PaddleInference IR optimization. We disable it in this PR and will re-enable it after PaddleInference fixes it (a config sketch follows).
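For reference, a hedged sketch of toggling IR optimization on the inference side (the model path is a placeholder):

```python
from paddle.inference import Config, create_predictor

config = Config("./mnist_model")   # placeholder model directory
config.switch_ir_optim(False)      # disable IR optimization as a workaround
predictor = create_predictor(config)
```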
GridGenerator model failed because the output shape of `linspace` is (-1). The reason is that C++ InferShape fixes the shape to (-1):
5da3d514eb/paddle/fluid/operators/linspace_op.cc (L49)
We cannot set the shape in C++ InferShape because this Tensor may not be initialized at compile time, but when the input `num` of `linspace` is an integer, we do know the shape at compile time. This PR simply sets the shape in Python and adds GridGenerator as a unittest, as illustrated below.
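A minimal sketch of the two cases, assuming the 2.0 static APIs:

```python
import paddle

paddle.enable_static()
main = paddle.static.Program()
with paddle.static.program_guard(main):
    # `num` is a Python int: the length is known at compile time,
    # so the Python side can set the static shape directly.
    x = paddle.linspace(0, 1, 5)
    print(x.shape)   # (5,) with this fix, instead of (-1,)

    # `num` is a Tensor: its value is unknown at compile time,
    # so the compile-time shape must remain (-1,).
    n = paddle.full([1], 5, dtype='int32')
    y = paddle.linspace(0, 1, n)
    print(y.shape)   # (-1,)
```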
* add reducer (usage sketch after these notes)
* refine event for memory copy
* add concat&split for allreduce
* apply concat & split for fused tensors
* fix nccl dep
* fix the unittest, compile problem, and DDP initialization problem
* fix unittest for Mac, add some comments, and handle repeated params in sublayers
* fix unittest for Windows & fix documentation
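A hedged usage sketch of the dygraph DataParallel path these commits touch (run under `paddle.distributed.launch`); the reducer concatenates gradients, allreduces the fused buffer, then splits it back:

```python
import paddle
import paddle.distributed as dist

dist.init_parallel_env()                # set up NCCL communicators
layer = paddle.nn.Linear(10, 1)
dp_layer = paddle.DataParallel(layer)   # installs the gradient reducer

x = paddle.randn([4, 10])
loss = dp_layer(x).mean()
loss.backward()   # grads are fused (concat), allreduced, then split back
```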
* add lars to fleet meta optimizer (see the config sketch after this series)
* add lamb to proto
* add lamb to fleet meta optimizer
* fixed syntax bug
* fixed syntax error in lamb, add config setter of lamb in distributed_strategy
* trigger unittest to rerun
* add new unittest func for lamb
* revise unittest for lars and lamb
* revise dgc meta unittest
* revise lars document in distributed_strategy
* revise lars lamb document in distributed_strategy.py
* add weight decay exclude logic to lars
* restore optimizer.py
* restore optimizer.py as develop except lars
* add epsilon and exclude fn to distributed_strategy
* add lars epsilon
* revise unittest for fleet lars and lamb
* revise lars lamb unittest for CI coverage
* revise lars argument api
* revise api doc of lars
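A hedged sketch of the resulting user-facing configuration (both optimizers are shown together only for illustration; the key names are an assumption based on the distributed_strategy changes above):

```python
import paddle.distributed.fleet as fleet

strategy = fleet.DistributedStrategy()

strategy.lars = True
strategy.lars_configs = {
    "lars_coeff": 0.001,
    "lars_weight_decay": 0.0005,
    "epsilon": 0,                                         # added in this series
    "exclude_from_weight_decay": ["batch_norm", ".b_0"],  # weight-decay exclusion
}

strategy.lamb = True
strategy.lamb_configs = {
    "lamb_weight_decay": 0.01,
}
```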
* fix op role
* add sharding save and add_sync_comm_for_test function
* add comm_analyse to utils
* revise sharding_utils
* add sharding saving unittest
* revise sharding utils for unittest
* revise sharding en doc
* update sharding utils api
* add doc for sharding
* fixed bug in sharding var size count
* update varsize count in sharding
* fix sharding num_nccl_comm
* Revert "fix sharding num_nccl_comm"
This reverts commit d51587c15e9323acf226ddd36154275f0d1daf76.
* add static_only for static api
* add static_only for class init
* remove static_only for default_main_program
* remove create_parameter & startup_program
* remove failed apis
* revert py_func import
* remove global scope
* remove some api
* remove cuda pinned place
* add hapi api flops (usage sketch after these notes)
* fix bug
* fix some bug
* add unit test
* fix unit test
* solve ci coverage
* fix doc
* fix static flops
* delete the comment
* fix some grammar problems in doc
* fix some bug
* fix some doc
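A hedged usage sketch of the new hapi flops API:

```python
import paddle
from paddle.vision.models import LeNet

net = LeNet()
# Report FLOPs and parameter counts for the given input shape.
flops = paddle.flops(net, [1, 1, 28, 28], print_detail=True)
```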
* Rename variables when using 'jit.load'
* Check whether the original graph contains the variable with the same name
* add comment
* rename output/input of op and edit unittest
* modify the code according to CI
* edit code according to CI
* rewrite the sigmoid_focal_loss code example (minimal example below). test=develop
* fix spelling mistakes in comments of code example. test=develop
* change print([.*].numpy()) to print([.*]) in example codes of sigmoid_focal_loss. test=document_fix
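A minimal sketch in the spirit of the revised example (the random inputs are placeholders):

```python
import paddle
import paddle.nn.functional as F

logit = paddle.randn([2, 3])
label = paddle.randint(0, 2, [2, 3]).astype('float32')
loss = F.sigmoid_focal_loss(logit, label)
print(loss)   # print the Tensor directly, not loss.numpy()
```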
* save one name in cross_entropy and softmax_cross_entropy, test=develop
* change used function in CrossEntropy from softmax_cross_entropy to cross_entropy, test=develop
* Implement 2.0 API version Conv2d and Linear layer quantization in imperative mode (see the sketch after these notes).
* use cudnn softmax in static Lenet
* Modified ChannelwiseQAT Unittest for 2.0 API.
* For CI python coverage.
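A hedged sketch, assuming the contrib slim entry point for imperative quantization-aware training:

```python
import paddle
from paddle.fluid.contrib.slim.quantization import ImperativeQuantAware

quanter = ImperativeQuantAware(
    weight_quantize_type='abs_max',
    activation_quantize_type='moving_average_abs_max')

model = paddle.nn.Sequential(
    paddle.nn.Conv2D(1, 6, 3),
    paddle.nn.ReLU(),
)
quanter.quantize(model)  # rewrite Conv2D/Linear into quant-aware layers in place
```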
* fix some docs test=develop;test=document_fix
* add code example test=develop;test=document_fix
* fix code example test=develop;test=document_fix
1) The operands are executed sequentially according to the running logic of Python.
2) If the left-hand operand is True (for convert_logical_or) / False (for convert_logical_and), the right-hand operand should not be executed, as illustrated below.
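The plain-Python behavior being preserved:

```python
def right():
    print("right operand evaluated")
    return False

# Left operand of `or` is True: Python short-circuits, right() never runs.
_ = True or right()

# Left operand is False: the right operand must be evaluated.
_ = False or right()   # prints "right operand evaluated"
```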
* fix eng doc, test=develop
* add import deprecated for layers, test=develop
* add block line for doc generate, test=develop
* remove todo for create_variable, test=develop
* add blank line for doc generate, test=develop
* add lstm, simple rnn op kernels (usage sketch after these notes)
* fix the test_lstm for the rnn op
* change func name
* fix forward postprocess bug
* add gru forward, backward code
* remove unittest.skipIf; use a big rnn op instead of combination op
* fix bug where the input doesn't receive a gradient
* add eigen lstm forward, backward
Co-authored-by: wawltor <fangzeyang0904@hotmail.com>
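A hedged usage sketch of the corresponding 2.0 layers backed by these kernels:

```python
import paddle

rnn = paddle.nn.LSTM(input_size=16, hidden_size=32, num_layers=2)
x = paddle.randn([4, 23, 16])   # [batch, seq_len, input_size]
y, (h, c) = rnn(x)              # y: [4, 23, 32]

gru = paddle.nn.GRU(input_size=16, hidden_size=32)
y2, h2 = gru(x)
```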
* Support dy2stat error message when calling jit.save;
* Polish dy2stat error message:
(1) the original dygraph code is marked with (* user code *);
(2) "In user code:" -> "In transformed code:"