* modify cond and while_loop to paddle.static.nn.cond and paddle.static.nn.while_loop
* modify crop_tensor to paddle.crop
* modify Variable to paddle.static.Variable
* remove nn.beam_search, nn.beam_search_decode, nn.gather_tree
* remove bpr_loss, center_loss, rank_loss, smooth_l1, teacher_student_sigmoid_loss, edit_distance, sampled_softmax_with_cross_entropy in nn.functional
* remove apis in nn.functional.learn_rate.py
* remove pool2d, pool3d, adaptive_pool2d, adaptive_pool3d in nn.functional
* remove apis in nn.functional.vision
* remove erf, soft_relu in nn.functional.activation
* remove apis in nn.functional.extension
* remove nn.functional.rnn
* remove hash from nn.functional.lod
* remove row_conv from nn.functional.extension
* remove one_hot, pad2d, pad_constant_like from nn.functional.common
* remove nn.gather_tree, nn.BilinearTensorProduct, nn.Pool2D, nn.Pad2D
* remove apis from optimizer.__init__
* remove tensor.creation.fill_constant
* remove elementwise_mul in nn.functional.common and modify to paddle.multiply
* remove tensor.stat.reduce_mean
* remove reduce_all, reduce_any in tensor.logic
* remove apis in tensor.math
* remove apis in tensor.__init__
* remove has_inf, has_nan in tensor.search
* remove apis in framework.__init__
* remove apis in paddle.__init__
* remove apis in nn.functional.__init__
* modify removed alias apis to raw api in doc and unittests
* fix remove grid_sample bug
* modify removed alias apis to raw api in doc and unittests
* modify removed alias apis to raw api in doc and unittests
* modify removed alias apis to raw api in doc and unittests
* modify removed alias apis to raw api in doc and unittests
* modify removed alias apis to raw api in doc and unittests
* modify removed alias apis to raw api in doc and unittests
* delete alias API relations in doc
* reserve paddle.compat, paddle.sysconfig
* remove unittest for paddle.reduce_all, paddle.reduce_any
* modify removed alias apis to raw api in doc and unittests
* recover paddle.save and paddle.load
* resolve conflicts
* fix sample code missing paddle.enable_static() bug
* fix sample code missing paddle.enable_static() bug
* fix to_string sample code error
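
For the alias removals above (e.g. cond, crop_tensor, elementwise_mul), a minimal migration sketch, assuming Paddle 2.0-style static-graph APIs; the surrounding program details are illustrative only:

```python
import numpy as np
import paddle

paddle.enable_static()  # many of the sample-code fixes above add this call for static-graph snippets

x = paddle.static.data(name="x", shape=[2, 2], dtype="float32")
y = paddle.static.data(name="y", shape=[2, 2], dtype="float32")

prod = paddle.multiply(x, y)                        # was fluid.layers.elementwise_mul
cropped = paddle.crop(prod, shape=[2, 1])           # was fluid.layers.crop_tensor
out = paddle.static.nn.cond(paddle.mean(prod) > 0,  # was fluid.layers.cond
                            lambda: cropped + 1,
                            lambda: cropped - 1)

exe = paddle.static.Executor(paddle.CPUPlace())
exe.run(paddle.static.default_startup_program())
res, = exe.run(feed={"x": np.ones((2, 2), "float32"),
                     "y": np.full((2, 2), 2.0, "float32")},
               fetch_list=[out])
print(res)
```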
* Support assignment to a Variable in dynamic mode. Note: backward is not handled.
* Rewrite VarBase __setitem__ for high performance.
* Tried three approaches to __setitem__ and tested the performance of each.
* Kept the approach with the highest performance: implement it in C++ and do not trace the op.
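
A short dygraph sketch of the new __setitem__ described above; this is a hedged example, the exact set of supported index forms may differ, and as noted the assignment does not participate in backward:

```python
import numpy as np
import paddle

paddle.disable_static()                    # dygraph mode
x = paddle.to_tensor(np.zeros((3, 4), dtype="float32"))

x[0] = 1.0                                 # broadcast a scalar into a row
x[1, 2] = 5.0                              # set a single element
x[2] = paddle.to_tensor(np.arange(4, dtype="float32"))   # assign a tensor value
print(x.numpy())
```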
* add float64 input to ctc_loss
* modified error message of warpctc
* update repo and tag of warpctc
* add test for warpctc with float64 input
* modified warpctc.cmake to make sure it always builds
* resolved sample code bug of warpctc
* add core.ops in warpctc dygraph
* fix a bug of test
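
The warpctc items above add float64 support; a hedged usage sketch through the paddle.nn.functional.ctc_loss wrapper (shapes and dtypes follow the documented convention, but treat the example as illustrative):

```python
import numpy as np
import paddle

paddle.disable_static()
# [max_logit_length, batch_size, num_classes + 1], now also accepted as float64
log_probs = paddle.to_tensor(np.random.rand(5, 2, 4).astype("float64"))
labels = paddle.to_tensor(np.array([[1, 2, 2], [1, 3, 0]], dtype="int32"))
input_lengths = paddle.to_tensor(np.array([5, 5], dtype="int64"))
label_lengths = paddle.to_tensor(np.array([3, 2], dtype="int64"))

loss = paddle.nn.functional.ctc_loss(log_probs, labels,
                                     input_lengths, label_lengths,
                                     blank=0, reduction="mean")
print(loss.numpy())
```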
* support using add instead of sum to do gradient accumulation
* add inplace addto pass
* add grad_add op and inplace addto pass
* remove debug code
* code refine
* fix bug when several sum ops are inserted at the same op_idx
* fix Flags type
* add addto attribute for conv3d
* fix ut
* code clean
* fix type
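
A hedged sketch of switching on the add-instead-of-sum gradient accumulation and the inplace addto pass from the items above; BuildStrategy.enable_addto and FLAGS_max_inplace_grad_add are assumptions inferred from the commit messages, so check the names against your Paddle build:

```python
import paddle

paddle.enable_static()
# assumption: a flag that lets gradient accumulation use add instead of sum
paddle.set_flags({"FLAGS_max_inplace_grad_add": 8})

build_strategy = paddle.static.BuildStrategy()
build_strategy.enable_addto = True   # assumption: switch for the inplace addto pass

# compiled = paddle.static.CompiledProgram(main_program).with_data_parallel(
#     loss_name=loss.name, build_strategy=build_strategy)
```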
* Finished ChannelWiseQuantDequantAbsMaxOp and passed unittests.
* Finished channel-wise quantize strategy in imperative quantization.
* Added CUDA code of ChannelWiseQuantDequantMaxAbsOp
* Add quant_axis for channel_wise quant.
* fixed a bug in unittests that did not trigger the axis = 1 case and could not meet the coverage rate requirement.
* Added some assert information and fixed some coding style mistakes.
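
A numpy illustration of what channel-wise abs-max quantize/dequantize with a quant_axis computes, mirroring the idea behind ChannelWiseQuantDequantAbsMaxOp; this is a conceptual sketch, not the op's actual kernel:

```python
import numpy as np

def channel_wise_quant_dequant(x, quant_axis=0, bit_length=8):
    bnt = (1 << (bit_length - 1)) - 1                          # e.g. 127 for 8-bit
    reduce_axes = tuple(i for i in range(x.ndim) if i != quant_axis)
    scales = np.abs(x).max(axis=reduce_axes, keepdims=True)    # abs-max per channel
    scales = np.where(scales == 0, 1e-8, scales)               # guard against zero scale
    q = np.round(x / scales * bnt)                             # quantize
    return q * scales / bnt                                    # dequantize

w = np.random.randn(4, 3, 3, 3).astype("float32")
print(channel_wise_quant_dequant(w, quant_axis=0).shape)  # per-output-channel (conv weight)
print(channel_wise_quant_dequant(w, quant_axis=1).shape)  # the axis = 1 case from the unittests
```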
* update amp_check_finite_and_scale_op for static_amp.
* use amp_check_finite_and_scale in static graph amp.
* set grads to zero when grads contain infinite values (for the amp_check_finite_and_scale op).
* add update_loss_scaling op in cpp.
* add update_loss_scaling_op unit test.
* update the doc of the check_finite_and_unscale op
* Update the gradient-update process to skip the update when the gradients have infinite values.
* update the way to zero grads.
* update test_update_loss_scaling_op.py
* add log info when infinite grads are found.
* add the unit test for UpdateLossScaling Layer.
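
A plain-Python sketch of the dynamic loss-scaling rule that check_finite_and_unscale and update_loss_scaling implement together: zero the grads and skip the step when an inf/nan shows up, shrink the scale after a few bad steps, and grow it after many consecutive finite steps. Parameter names here are illustrative rather than the ops' exact attributes:

```python
import numpy as np

def update_loss_scaling(grads, loss_scaling, good_steps, bad_steps,
                        incr_every_n_steps=1000, decr_every_n_nan_or_inf=2,
                        incr_ratio=2.0, decr_ratio=0.5):
    found_inf = any(not np.all(np.isfinite(g)) for g in grads)
    if found_inf:
        grads = [np.zeros_like(g) for g in grads]     # zero grads so the update is skipped
        good_steps = 0
        bad_steps += 1
        if bad_steps >= decr_every_n_nan_or_inf:
            loss_scaling = max(loss_scaling * decr_ratio, 1.0)
            bad_steps = 0
    else:
        bad_steps = 0
        good_steps += 1
        if good_steps >= incr_every_n_steps:          # grow after N consecutive finite steps
            loss_scaling *= incr_ratio
            good_steps = 0
    return grads, loss_scaling, good_steps, bad_steps
```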
* enhance collect_op for dygraph, test=develop
* enhance detection ops with lod, test=develop
* support the case where no bbox is left in generate_proposals, test=develop
* unify MultiLevelRoisNum, test=develop
* update core.ops, test=develop
* add op register for new input & output, test=develop
* make use of the global 'use_mkldnn' flag in layer_helper
* update for CI
* update for CI, relu test
* update for CI, relu test added, make FLAGS_use_mkldnn a public flag
* added more strict tests, fixes after review
* fixes after review
* fixes after review, CI stuff
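
With FLAGS_use_mkldnn made a public flag (per the items above), it can presumably be toggled from Python like other exposed flags; this assumes a build with MKL-DNN (oneDNN) support:

```python
import paddle

paddle.set_flags({"FLAGS_use_mkldnn": True})   # route supported ops (e.g. relu) to MKL-DNN kernels
print(paddle.get_flags(["FLAGS_use_mkldnn"]))
```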
* optimized transformation from tensor to numpy, test=develop
* optimized transformation from tensor to numpy, pass pre-commit, test=develop
* modify FetchOpHandle zero-copy to deep copy in PE & CPU, test=develop
* modify py::array construct, test=develop
* fix _fetch_var to use deep copy, test=develop
* support Baidu AI Accelerator
* test=kunlun
* minor
* test=kunlun
* support xpu op in separate file
* test=kunlun
* update XPU error message and remove duplicated code
* test=kunlun
* minor
* test=kunlun
* minor
* test=kunlun
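
A hedged sketch of targeting the Baidu Kunlun (XPU) accelerator added above; it assumes a Paddle build compiled with XPU support and the paddle.is_compiled_with_xpu / paddle.set_device / XPUPlace entry points:

```python
import paddle

if paddle.is_compiled_with_xpu():
    paddle.set_device("xpu")          # place objects are also available via paddle.XPUPlace(0)

x = paddle.to_tensor([1.0, -2.0, 3.0])
print(paddle.nn.functional.relu(x))   # runs on XPU when the device is set above
```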
* expose and unify the Tensor concepts for the user
* expose tensor to user
* add copy place for Tensor
* add copy place for Tensor
* add note
* add macro PADDLE_WITH_CUDA
* remove RUN_TYPE=DIST
* fix some error
* fix the double grad bug for StarGAN. test=develop
* update the retain_graph parameter doc. test=develop
* add the unit test for the retain_graph parameter. test=develop
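
For the retain_graph items above, a small dygraph sketch of the double-backward pattern (as in StarGAN-style training) that needs the parameter; the exact call site is illustrative:

```python
import paddle

paddle.disable_static()
x = paddle.to_tensor([1.0, 2.0, 3.0], stop_gradient=False)
y = x * x                              # shared forward graph

loss1 = y.sum()
loss1.backward(retain_graph=True)      # keep the graph alive for a second backward
loss2 = (2 * y).sum()
loss2.backward()                       # would fail if the graph had been freed above
print(x.grad)
```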
* fix batch_norm & instance_norm in dygraph, test=develop
* update instance_norm,test=develop
* fix bugs,test=develop
* add more case in unittest,test=develop
* fix,test=develop
* fix,test=develop
* Add a StatValue class in the backend to represent a stat.
* Add a singleton StatRegistry to maintain the collection of stats.
* For the sake of code neatness, we only support int and float types, which cover most scenarios.
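
The StatValue / StatRegistry pair lives in the C++ backend; purely as a conceptual illustration of the singleton-registry design it describes (int and float stats only), a Python sketch:

```python
class StatValue:
    """A single named stat; only int and float are supported, matching the backend."""
    def __init__(self, name, value=0):
        assert isinstance(value, (int, float))
        self.name, self.value = name, value

    def update(self, v):
        self.value = v


class StatRegistry:
    """Singleton holding the collection of stats for the process."""
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance._stats = {}
        return cls._instance

    def register(self, name, value=0):
        self._stats[name] = StatValue(name, value)
        return self._stats[name]

    def get(self, name):
        return self._stats[name]


StatRegistry().register("max_memory_in_mb", 0)
StatRegistry().get("max_memory_in_mb").update(1024)
print(StatRegistry().get("max_memory_in_mb").value)   # 1024
```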
* add new macro BOOST_GET_SAFELY & unittests, test=develop
* add different macro type, test=develop
* fix get macro type in executor, test=develop
* four macro part change backup
* using one macro for all case, test=develop
* revert attribute change, test=develop
* change to three functions to solve the gcc 4.8 bug, test=develop
* polish some details, test=develop