* add some unittest cases to verify jit.save, no_test
* add more unittests
* add test with example inputs
* polish implement details
* remove useless blank
* fix fetch random error
* support loading inference model format state dict
* add unittests
* remove keep name table
* resolve circular import
* fix compatibility problem
* recover unittest
* polish doc and comment
* remove backend argument of init_parallel_env
* remove keep name table in transformer
* add cpu version check
* add skip unittest for init_parallel_env
* polish doc: remove func use & update example
* [Dy2Stat] Add debugging and logging mechanism for dygraph to static.
* Remove TransformerError temporarily.
* import mock in PY2, from unittest import mock in PY3. test=develop
* Expose interfaces set_code_level and set_verbosity in paddle.jit, fix doc of the two interfaces.
* polish doc of set_verbosity and set_code_level.
* expose and unify the Tensor concepts to the user
* expose tensor to user
* add copy place for Tensor
* add copy place for Tensor
* add note
* add macro PADDLE_WITH_CUDA
* remove RUN_TYPE=DIST
* fix some errors
* don't remove op_callstack
* [Dy2Stat-ErrorMessage] Optimize error value to improve readability if error raised in run-time.
1. update op_callstack with original information;
2. simplify error value to improve readability if error raised in run-time.
* Fix error in Python3.
* add limited support for load_dygraph loading jit.save results
* simplify unittest
* add unittests for coverage
* remove encoding limit of loading extra var info
* [Dy2Stat-ErrorMessage]Enhance original error and create new exception. test=develop
* Delete redundant code and change func name to create_and_update_origin_info_map.
* optimize loop_transformer.
* fix bug in print_transformer.
* Modify code according to the comments.
Enhance TracedLayer Error Message
Note: this PR uses assert to check types in some places and check_type in others; the reason is that check_type skips checking when it is under dygraph mode.
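The note above can be sketched in plain Python (an illustrative toy, not Paddle's real `check_type`; the mode flag is an assumption for demonstration):

```python
# A minimal sketch of why assert is used in some spots: a type checker that
# silently skips under dygraph mode cannot guard code paths that must be
# checked in both modes.

_in_dygraph_mode = True  # assumed mode flag, for illustration only

def check_type(value, name, expected_type, caller):
    """Mimics a checker that is a no-op in dygraph mode."""
    if _in_dygraph_mode:
        return  # skipped -- the reason assert is preferred in some places
    if not isinstance(value, expected_type):
        raise TypeError("%s: %s must be %s" % (caller, name, expected_type))

def must_check_everywhere(x):
    # assert still fires in dygraph mode, unlike check_type above
    assert isinstance(x, int), "x must be an int"
    return x + 1

check_type("not-an-int", "x", int, "demo")  # no error: check skipped
print(must_check_everywhere(1))  # prints 2
```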
* fix the double grad bug for the star gan. test=develop
* update the retain_graph parameter doc. test=develop
* add the unit test for the retain_graph parameter. test=develop
* Refine Model
1. Take the network (instance of Layer) as the input of Model.
2. Refine set_dict/load_dict of Layer.
3. Refine the Input interface, and update the code sample for Input accordingly
* fix optimizer.state_dict and LRScheduler.state_dict to save/load dygraph,test=develop
* fix optimizer.state_dict and LRScheduler.state_dict to save/load dygraph,test=develop
* Add a check for incorrect use of state_dict/set_dict,test=develop
* fix some doc errors,test=develop
* fix current_step_lr for _LearningRateEpochDecay,test=develop
* remove some unused code to improve coverage,test=develop
* remove some unused code to improve coverage,test=develop
* fix batch_norm & instance_norm in dygraph, test=develop
* update instance_norm,test=develop
* fix bugs,test=develop
* add more case in unittest,test=develop
* fix,test=develop
* fix,test=develop
* show the attributes and functions of the Layer,test=develop
* add buffer for dir,test=develop
* fix __dir__,test=develop
* fix doc of Layer.__dir__, test=develop
* support tuple/list init for VarBase,test=develop
* fix doc of fluid.dygraph.to_variable,test=develop
* fix doc of fluid.dygraph.to_variable,test=develop
* add new API: MultiStepDecay, a new learning rate strategy, test=develop
* add new API: MultiStepDecay, a new learning rate strategy,test=develop
* add new API: MultiStepDecay, a new learning rate strategy,test=develop
* add base class of LearningRateEpochDecay, and MultiStepDecay, and StepDecay, test=develop
* fix doc to add coverage,test=develop
Support Various-Length Return Grammar in Dy2stat. This PR is a follow-up of https://github.com/PaddlePaddle/Paddle/pull/25176 .
The basic idea is putting no-value placeholder variables at `return` statements to make all `return` statements have the same length, so that the static graph can have a fixed fetch output (code at return_transformer.py). Then we remove those no-value placeholders when we finally return the dygraph result (code at partial_program.py).
However, various-length return in the Bert model is still not supported. The dy2stat can change the code as I wish, but some ops which check shape at compile time (e.g. Reshape, MatMul) will throw errors because the no-value placeholders may not have the required shape. Is this a problem? To me, those no-value placeholders will be replaced by real values meeting the shape requirements at run time, so I think the solution should be some way to relax the compile-time checking. By the way, every time we have dynamic shapes, it often causes problems in dy2stat. We should find a way to handle it in the future.
Fixing various-length return in Bert is on my TODO list, and I will also find some other existing models for verification.
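The padding idea above can be sketched in plain Python (a hypothetical illustration, not the actual return_transformer/partial_program code; `_NO_VALUE` stands in for the no-value placeholder variable):

```python
# Hypothetical sketch: pad every return to the same length with a no-value
# placeholder so the static graph has a fixed number of fetch outputs, then
# strip the placeholders before handing the result back to the caller.

_NO_VALUE = object()  # stands in for the no-value placeholder variable

def pad_returns(values, max_len):
    """Pad a return tuple so all returns have the same length."""
    values = list(values)
    return values + [_NO_VALUE] * (max_len - len(values))

def strip_placeholders(values):
    """Drop placeholders when finally returning the dygraph result."""
    return [v for v in values if v is not _NO_VALUE]

padded = pad_returns((1, 2), max_len=3)
print(len(padded))                  # 3 fetch outputs, fixed length
print(strip_placeholders(padded))   # [1, 2] -- the caller never sees padding
```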
This PR added basic support for 'return' grammar in dy2stat. It supports the control flow of 'return'.
The basic idea is to use a return-value variable to store the result of early return statements, plus boolean state variables with if-else to skip the statements after the return statements.
**This PR is very basic support. There are some corner cases I didn't develop/test**. For example, 'return None', 'return different length of variables', 'return non-tensor and tensor together', 'no return statement'. **These corner cases will be done in my next PRs**. Target date is this week.
**Note**:
1. for the unit test, I changed test_program_translator.py because the StaticCode of `dyfunc_with_if_else` will change. To guarantee the correctness of `dyfunc_with_if_else`, I also run it in `TestRecursiveReturn` in test_return.py.
2. I commented out the early return code in bert_dygraph_model.py because 'return different length of variables' is unsupported now. I also know that some other models use early return and we didn't enable it in the unit test. I will add support for it in future PRs and then re-enable those tests.
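The boolean-state rewriting described above can be sketched in plain Python (a hand-written before/after pair illustrating the idea, not the transformer's actual output):

```python
# Hypothetical sketch: an early `return` becomes a return-value variable plus
# a boolean state variable, and the statements after the early return are
# skipped via if-else on that flag.

def original(x):
    if x > 0:
        return "positive"
    return "non-positive"

def transformed(x):
    return_value = None
    returned = False            # boolean state variable
    if x > 0:
        return_value = "positive"
        returned = True
    if not returned:            # skip statements after the early return
        return_value = "non-positive"
    return return_value

for v in (3, -1):
    assert original(v) == transformed(v)
```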
* The argument of append() can temporarily be a non-Tensor.
* Add Seq2Seq as ProgramTranslator Unit Test.
* set dtype of vocab_size_tensor to int64 to pass Windows-CI.
* Support int and long: int or long -> six.integer_types.
* Modify test_tensor_shape: fix bug and modify comment.
* Support convert_var_shape to convert var.shape stmt
* Modify code in ifelse_simple_func.py because returning a non-Tensor in a Tensor-dependent 'if' statement is not supported currently.
* Convert the return variables of Tensor-dependent 'if' statements to Tensor if they are not. test=develop
* Move function 'convert_len' to file convert_operators.py
* Support transforming for statements into while statements.
* Fix bug: raise None -> return None.
* Support variable loaded and created in loop.
* Use int64 in Py2 and Py3 in function to_static_variable.
* cast var in convert_logical_XX.
* Add convert_ifelse function in convert_operators.py
* Add logical_transformer. Remove LogicalTransformer from loop_transformer.py
* Revert modified tests in PR24799(convert_while_stmt).
* Comment out and modify code that doesn't support `return` statements.
* Remove unnecessary class: MergeAssignTransformer, NodeTestTransformer and IfConditionVisitor in ifelse_transformer.
* Support returning a variable in only one of the if body or else branch.
* remove after_visit in IfElseTransformer.
* Modify the result of get_name_ids in test_ifelse_basic.py
* Add unittest to test the new case.
* Modify code according to reviews.
* Support convert_while_loop.
* Comment out code that doesn't support 'if' in test_break_continue.
* Convert int into tensor to support 'if' stmt in for/while loop.
* Add unittest to test all cases of convert_logical_XX.
* Add unittest to test all cases of convert_while_loop.
* Fix bug in LogicalOpTransformer. test=develop
* Support to create LoDTensorArray in control flow (cond and while_loop)
* Fix bug: return LoDTensorArray in while_loop
* Change code in list_transformer.py to accommodate the new features.
* fix error when multiplying a numpy ndarray with a VarBase; test=develop
* add comment for __array_ufunc__ ; test=develop
* move unittest from imperative math op path to test_math_op_patch_var_base; test=develop
* support training in static mode
* support independent decorator
* remove in_dygraph_mode condition in ProgramTranslator
* fix import param_guard and add train/eval test=develop
* Modify into ShareVarsFromScope and rm __all__ in partial_program test=develop
* Replace dygraph_to_static_func with @declarative or program_translator.get_func in test_list.py
* Add comments in ConditionalBlock.
* Support list pop last item.
* Support pop the i-th item.
* Support an empty tensor array as Input in assign op and set the kernel type to float.
* Simplify code for gast.If in is_control_flow_to_transform.
* Move IsControlFlowVisitor to file utils.
* Don't use convert_call for built-in functions in CallTransformer.
* Optimize api is_control_flow_to_transform.
* Polish the document of IsControlFlowVisitor.
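Several of the commits above rewrite Python control flow into forms a static graph can consume. The for-to-while and if/else conversions can be sketched in plain Python (names like `convert_ifelse` here are illustrative stand-ins, not Paddle's exact API):

```python
# Hypothetical sketches of two control-flow conversions.

# 1) A `for i in range(n)` loop expressed as the equivalent while statement,
#    which is the shape a static-graph while_loop op can consume.
def sum_with_while(n):
    total, i = 0, 0
    while i < n:
        total += i
        i += 1
    return total

# 2) An if/else rewritten as a call that dispatches on the predicate; in
#    static mode this would map to a conditional op instead of plain dispatch.
def convert_ifelse(pred, true_fn, false_fn):
    return true_fn() if pred else false_fn()

assert sum_with_while(5) == sum(range(5)) == 10
assert convert_ifelse(True, lambda: "then", lambda: "else") == "then"
```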
To prepare for publishing APIs, I added tests verifying that we can access dy2stat through:
@fluid.dygraph.declarative
@fluid.dygraph.jit.declarative
fluid.dygraph.ProgramTranslator()
fluid.dygraph.dygraph_to_static.ProgramTranslator()
fluid.dygraph.dygraph_to_static.program_translator.ProgramTranslator()
It surprised me that we had bugs in those different usages. I have fixed them.
I also added example code for these new APIs.
This PR also pulls my current PR https://github.com/PaddlePaddle/Paddle/pull/23880, so the PR history is long. For reviewer information, you could review this PR after https://github.com/PaddlePaddle/Paddle/pull/23880 is merged
1. Rename the dygraph-to-static decorators to declarative
2. dygraph_to_static_func is still used in some training tests, so I cannot delete it now.
3. Add some API docs