* polish minimize en doc
* polish adam optimizer en doc
* polish adamax optimizer en doc
* polish adagrad and decayed adagrad optimizer en doc
* polish model average en doc, test=develop, test=document_fix, test=document_preview
* self review and further polishing doc
* update API.spec, test=develop, test=document_fix
* update fluid.data api in examples, test=develop, test=document_fix
* update fluid.data interface, test=develop, test=document_fix
* replace -1 by None, test=document_fix
* add recompute-based checkpoint methods for large-batch training
test=develop
* add append_backward_with_forward_recomputation
test=develop
* refine optimizer
test=develop
* update backward and optimizer
test=develop
* make Variable usable
test=develop
* add recompute code
* refine optimizer
test=develop
* refine addup in _append_backward_ops_with_checkpoints_
1) for the recompute part, just cache the grad_op_desc without appending it to the block
2) before appending grad_op_desc to the backward part, run addup_repetitive_vars and remove the unused branch
test=develop
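The recompute idea in the notes above — keep only checkpointed activations during forward, then rebuild the dropped intermediates segment by segment during backward — can be sketched framework-agnostically. This is an illustrative pure-Python sketch under assumed names (`checkpointed_forward`, `forward_segment`, a `grads` table of local derivatives), not Paddle's implementation:

```python
# Minimal sketch of recompute (gradient checkpointing) on a chain of
# scalar ops. The forward pass keeps one activation per segment (the
# segment input); each segment is re-run during backward to rebuild the
# intermediates it needs for the chain rule.

def forward_segment(funcs, x):
    """Run one segment of the chain, returning every intermediate."""
    acts = [x]
    for f in funcs:
        acts.append(f(acts[-1]))
    return acts

def checkpointed_forward(segments, x):
    """Keep only the input activation of each segment."""
    checkpoints = []
    for funcs in segments:
        checkpoints.append(x)
        for f in funcs:
            x = f(x)
    return x, checkpoints

def checkpointed_backward(segments, checkpoints, grad_out, grads):
    """grads[f] maps each op to its local derivative f'(x)."""
    g = grad_out
    for funcs, ckpt in zip(reversed(segments), reversed(checkpoints)):
        acts = forward_segment(funcs, ckpt)      # recompute intermediates
        for f, a in zip(reversed(funcs), reversed(acts[:-1])):
            g = g * grads[f](a)                  # chain rule, right to left
    return g

# Chain y = (2x + 3)^2 split into two segments.
double = lambda v: 2 * v
add3 = lambda v: v + 3
square = lambda v: v * v
segments = [[double, add3], [square]]
grads = {double: lambda v: 2.0, add3: lambda v: 1.0, square: lambda v: 2 * v}

y, ckpts = checkpointed_forward(segments, 4.0)            # y = (2*4 + 3)^2 = 121
dx = checkpointed_backward(segments, ckpts, 1.0, grads)   # dy/dx = 2*(2x+3)*2 = 44
```

The trade this illustrates: memory drops from "all intermediates" to "one checkpoint per segment", at the cost of one extra forward pass per segment during backward.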
* make method private
* add recompute strategy into DistributedStrategy
test=develop
* checkpoint version3
test=develop
* remove some print information
test=develop
* remove unused sumop
test=develop
* try to fix recompute with graph building modules
* add input names to the vars that should be held
* add memory debug tool
* backup backward
* Fix bugs
* add backward desc for ops not in any segment
* add exception info for sub_block
test=develop
* modify code style
test=develop
* modify code style
test=develop
* remove print functions
test=develop
* add API spec
test=develop
test=document_preview
* make Recompute a child class of Optimizer
test=develop
test=document_preview
* add API spec
test=develop
test=document_preview
* modify API spec
test=develop
test=document_preview
* add document for Recompute
test=develop
test=document_preview
* change API doc of Recompute
test=develop
test=document_preview
* code cleaning
test=develop
test=document_preview
* modify API spec
* fix bugs when segments hold no elements
* add testcase for Recompute Optimizer
test=develop
test=document_preview
* add test for apply_gradient, and code cleaning
test=develop
test=document_preview
* add test case for load function
* enable CI
test=develop
test=document
* add test case
test=develop
test=document_preview
* add sample code for the 4 functions of the recompute optimizer
test=develop
test=document_preview
* refactor dygraph,test=develop
* fix failed unittest,test=develop
* polish code,test=develop
* check windows ci error,test=develop
try to fix windows ci error by np.allclose,test=develop
* polish vlog and profiler, test=develop
* try to fix preceding ops order,test=develop
* test transformer in windows ci, test=develop
* use python c-api to speed up tracer.trace,test=develop
* test=develop, fix docker problem with paddle nccl
* test=develop, add ut for debug string and gradient_accumulator
* test=develop, add tests for layer/gradient_accumulator/prepared_op
* test=develop, fix compile error for test_prepared_op
* test=develop, add more ut for dygraph
* test=develop, create API.spec for dygraph api change
* test=develop, refactor names to make them easier to understand
* test=develop, refactor names to make them easier to understand
* test=develop, fix multi-gpu failure, add Tracer tests, change PADDLE_ENFORCE to PADDLE_ENFORCE_EQ
* test=develop, fix ut failure on parallel se-resnext
* test=develop, change one more PADDLE_ENFORCE
Add Pipeline Concurrency Train Mode:
- Cpp: pipeline_trainer & section_worker
- Python: PipelineOptimizer
- Add a new data_feed type: PrivateInstantDataFeed
- Add a test demo of pipeline trainer and the test model is gnn
- Win32 is not supported yet
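A rough picture of the pipeline concurrency added above: micro-batches enter the stage chain with a one-step offset, so after a short fill phase every stage is busy on a different micro-batch instead of idling. This toy schedule simulation uses hypothetical names (`pipeline_schedule`) and is not the PipelineOptimizer implementation:

```python
# Toy fill-and-drain pipeline schedule: stage s processes micro-batch m
# at step m + s, so consecutive stages overlap on different micro-batches.

def pipeline_schedule(num_stages, num_microbatches):
    """Return {step: [(stage, microbatch), ...]} for the forward pass."""
    schedule = {}
    for m in range(num_microbatches):
        for s in range(num_stages):
            step = m + s                       # one-step offset per stage
            schedule.setdefault(step, []).append((s, m))
    return schedule

sched = pipeline_schedule(num_stages=3, num_microbatches=4)
# Total steps: num_stages + num_microbatches - 1 = 6.
# At step 2 the pipeline is full: all three stages are busy.
```

The fill and drain phases (steps where some stages idle) shrink relative to total time as the number of micro-batches grows, which is why pipeline training favors many small micro-batches.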
* save optimizer related vars in dygraph
* test=develop, add optimizer save and load
* test=develop, add optimizer save and load
* test=develop, merge code and add multi-optimizer save and load
* test=develop, fix test_imperative_checkpoint
* test=develop, fix include error
* test=develop, fix include error
* test=develop, renew api spec
* test=develop, refine code
* test=develop, set default value for checkpoint
* test=develop, fix ci error
* test=develop, change API.spec and make api more readable
* test=develop, refine version and time stamp
* test=develop, add example code and refine code
* test=develop, refine doc
* test=develop, change version
* add gradient clip in minimize; test=develop
* fix bug; test=develop
* fix format; test=develop
* move new grad clip to dygraph/grad_clip.py; test=develop
* fix lr decay and grad clip test; test=develop
* separate dygraph grad clip; test=develop
* fix grad clip test; test=develop
* fix api spec bug; test=develop
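Gradient clipping inside minimize, as added above, typically means global-norm clipping: if the combined L2 norm of all parameter gradients exceeds a threshold, every gradient is scaled down by the same factor so the direction is preserved. A minimal pure-Python sketch; the function name `clip_by_global_norm` is illustrative, not the Paddle API:

```python
import math

def clip_by_global_norm(grads, clip_norm):
    """Scale all gradients uniformly so their joint L2 norm <= clip_norm."""
    global_norm = math.sqrt(sum(g * g for g in grads))
    if global_norm <= clip_norm:
        return grads                       # already within the budget
    scale = clip_norm / global_norm
    return [g * scale for g in grads]

clipped = clip_by_global_norm([3.0, 4.0], clip_norm=1.0)  # norm 5 -> scale 0.2
```

Scaling all gradients by one factor (rather than clipping each element independently) keeps the update direction unchanged, which is the usual reason this variant is wired into the optimizer's minimize step.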
* add blank line, test=develop,test=document_preview
to fix format problem
* fix the api example for create_global_var, create_parameter, SGDOptimizer, RMSPropOptimizer, MomentumOptimizer, LarsMomentumOptimizer, FtrlOptimizer
test=develop
* add example for adamoptimizer
fix API.spec
test=develop
* test=develop
* test=develop