* strided_slice op basic function test=develop
* test=develop rewrite and fix
* fix bug test=develop
* fix for the PADDLE_ENFORCE usage
* add some unit tests
* fix for the api test and copyright and fix test=develop
* fix API.spec test=develop
* fix API.spec test=develop
* add axis parameter test=develop
* fix for the build error test=develop
* fix python api test=develop
* fix the build test=develop
* fix build test=develop
* fix API spec test=develop
* test=develop add some comment and single op test
* fix API spec test=develop
* fix test=develop
* fix test=develop
* fix api test=develop
* fix api test=develop
* fix API.spec test=develop
* fix typo test=develop
* fix API.spec test=develop
* fix API typo test=develop
* fix doc and API.spec test=develop
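The notes above add a strided_slice op with an axis parameter and a Python API. A minimal sketch of how it might be called, assuming the fluid.layers.strided_slice(input, axes, starts, ends, strides) signature; the tensor name and shapes are illustrative only:

```python
import paddle.fluid as fluid

# Illustrative input; shape is made up for the example.
x = fluid.layers.data(name='x', shape=[3, 4, 10], dtype='float32',
                      append_batch_size=False)

# Take elements 0..8 (exclusive) along axis 2 with stride 2,
# i.e. indices 0, 2, 4, 6.
out = fluid.layers.strided_slice(x, axes=[2], starts=[0], ends=[8], strides=[2])
```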
* update elementwise double grad to save gpu memory, test=develop
* update elementwise_mul/div_grad_grad to save memory, test=develop
* remove eval function in eigen statement to save memory, test=develop
* add unittest for elementwise_div_grad_grad without dout, test=develop
* add unittest for elementwise_add_grad_grad without ddx, test=develop
* add float16 cuda kernel for elementwise double grad op, test=develop
improve pow op according to reviews:
1. Delete unnecessary judgement statements in PowGradOpDescMaker;
2. Improve test of test_api;
overload GetKernelTypeForVar
add stop_gradient=True when attr(factor) is a tensor Variable, and change the examples in the pow API.
test=develop,test=document_preview
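The pow changes above let attr(factor) be passed as a tensor Variable, with stop_gradient=True applied to it. A small sketch, assuming fluid.layers.pow(x, factor) accepts either a float or a 1-D Variable:

```python
import paddle.fluid as fluid

x = fluid.layers.data(name='x', shape=[32, 32], dtype='float32',
                      append_batch_size=False)

# factor as a plain python float
y1 = fluid.layers.pow(x, factor=2.0)

# factor as a 1-D tensor Variable; per the note above, the op marks it
# stop_gradient=True, so no gradient flows back into the factor
factor = fluid.layers.fill_constant(shape=[1], dtype='float32', value=2.0)
y2 = fluid.layers.pow(x, factor=factor)
```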
add support for parameter inference when argument shape is a list containing integers and tensor Variables;
test=develop
fix reshape op according to reviews:
1. improve error message;
2. improve test of test_api.
test=develop,test=document_preview
fix reshape op: Add error message in nn.py, test=develop
add stop_gradient=True when attr(shape) is tensor Variable.
change examples in API reshape.
test=develop,test=document_preview
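The reshape notes above add shape inference when attr(shape) is a list mixing integers and tensor Variables, and mark such Variables stop_gradient=True. A hedged sketch of what that usage might look like; the exact accepted forms are an assumption here:

```python
import paddle.fluid as fluid

x = fluid.layers.data(name='x', shape=[2, 4, 6], dtype='float32',
                      append_batch_size=False)

# shape as a plain integer list
out1 = fluid.layers.reshape(x, shape=[2, 24])

# shape as a list mixing an integer and a 1-D tensor Variable; the Variable
# entry is resolved at runtime and treated as stop_gradient=True
dim = fluid.layers.fill_constant(shape=[1], dtype='int32', value=12)
out2 = fluid.layers.reshape(x, shape=[4, dim])
```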
add support for parameter inference when argument starts or ends is a list containing integers and tensor Variables;
test=develop,test=document_preview
improve slice op according to review(from hongyu). test=develop
fix slice op according to review: infer_flags, test=develop
fix slice op: improve the overloaded operator __getitem__ to support attrs (starts and ends) given as Variables.
test=develop,test=document_preview
fix test_slice_op: add TestSliceOp_decs_dim_6 to resolve conflict with test_slice_ngraph_op. test=develop
add stop_gradient=True when attr(starts) or attr(ends) is tensor Variable.
test=develop,test=document_preview
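The slice notes above allow attr(starts)/attr(ends) to be lists mixing integers and tensor Variables, and extend __getitem__ to accept Variable bounds. A minimal sketch under those assumptions:

```python
import paddle.fluid as fluid

x = fluid.layers.data(name='x', shape=[4, 5, 6], dtype='float32',
                      append_batch_size=False)

# starts/ends as plain integer lists
out1 = fluid.layers.slice(x, axes=[0, 1], starts=[1, 0], ends=[3, 3])

# starts as a list mixing a 1-D tensor Variable and an integer; the Variable
# is marked stop_gradient=True per the note above
start = fluid.layers.fill_constant(shape=[1], dtype='int32', value=1)
out2 = fluid.layers.slice(x, axes=[0, 1], starts=[start, 0], ends=[3, 3])

# __getitem__ accepting a Variable bound (illustrative)
out3 = x[start:3]
```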
1. add tensor support for argument expand_times in expand op;
2. add support for parameter inference when argument expand_times is a list containing integers and tensor Variables;
improve expand op according to reviews:
1. add doc of ExpandTimes in expand_op.cc;
2. improve the test of test_api.
add stop_gradient=True when attr(expand_times) is tensor Variable, change code examples.
test=develop,test=document_preview
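The expand notes above add tensor support for expand_times, including lists that mix integers and tensor Variables. A small illustrative sketch, assuming fluid.layers.expand(x, expand_times):

```python
import paddle.fluid as fluid

x = fluid.layers.data(name='x', shape=[2, 3], dtype='float32',
                      append_batch_size=False)

# expand_times as a plain integer list: tile 2x along dim 0, 2x along dim 1
out1 = fluid.layers.expand(x, expand_times=[2, 2])

# expand_times mixing a 1-D tensor Variable and an integer; the Variable is
# treated as stop_gradient=True per the note above
times = fluid.layers.fill_constant(shape=[1], dtype='int32', value=2)
out2 = fluid.layers.expand(x, expand_times=[times, 2])
```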
* refactor dygraph,test=develop
* fix failed unittest,test=develop
* polish code,test=develop
* check windows ci error,test=develop
try to fix windows ci error by np.allclose,test=develop
* polish vlog and profiler, test=develop
* try to fix preceding ops order,test=develop
* test transformer in windows ci, test=develop
* use python c-api to speed up tracer.trace,test=develop
* test=develop, fix docker with paddle nccl problem
* test=develop, add ut for debug string and gradient_accumulator
* test=develop, add tests for layer/gradient_accumulator/prepared_op
* test=develop, fix compile error for test_prepared_op
* test=develop, add more ut for dygraph
* test=develop, create API.spec for dygraph api change
* add transform_data to dygraph
* test=develop, refactor name to make it easier to understand
* test=develop, refactor name to make it easier to understand
* add test and change input to const ref for safety
* test=develop, fix multi-gpu failure, add Tracer tests, change PADDLE_ENFORCE to PADDLE_ENFORCE_EQ
* add ut for data transform
* refine ut for data_transform
* test=develop, fix ut failed on parallel se-resnext
* test=develop, change one more PADDLE_ENFORCE
* add test_tracer on multiple devices
* test=develop, change place to mutable for data transform
* test=develop, add transform data on same place test and remove useless log
* test=develop, Add TODO for data layout and ut for conv2d with no bias
* Implement the operator with sparse matrix multiply
* Update the URL of mklml library.
test=develop
* Disable MKLML implementation on non-Linux platforms.
test=develop
* optimize bp with mkl sparse matrix
test=develop
* tmp add fused_emb_seq layer
* Add the support of padding_idx attribute.
test=develop
* add padding_idx support
test=develop
* implement grad refer lego
test=develop
* Refine the codes related to fc op.
* Add GPU implementation for fc functor.
* Apply fc_fuse_pass in GPU inference.
test=develop
* Change the cmake for fc op.
* Change PADDLE_ENFORCE to PADDLE_ENFORCE_EQ.
* Add an attribute to set the activation type in fc_op.
* Enhance the unittest of fc_op.
test=develop
* Move the declaration of FCOpGrad back to the header file.
test=develop
* Set default value for newly added arguments in test_fc_op.
test=develop
* Enhance fc_fuse_pass to enable fusing relu.
* Allow printing the shapes of var_desc in graph.
test=develop
* Enhance fc_fuse_pass_tester.
* Remove the use of PADDLE_ENFORCE.
test=develop
* Correct the number of ops after fusing.
test=develop
* Fix a typo.
test=develop
* Set activation_type to null when there is no relu in fc.
test=develop
* Refine fc_fuse_pass's codes.
* Enable setting the shape of a tensor.
* Refine repeated_fc_relu_pass and add unittest.
test=develop
* add kernel for unstack_op, test=develop
* add kernel for unstack_op, test=develop
* add kernel for unstack_op, test=develop
* adjust the code format, test=develop
* modify some comment, test=develop
* Enable fuse all reduce op
test=develop
* Add Fuse optimization op log
* Add log in fuse_optimizer op pass and fuse all_reduce op pass
* replace with boost::optional<bool>
test=develop
* Polish code
test=develop
* fix code coverage
test=develop
TemporaryAllocator is a singleton used to allocate memory for cuDNN. Since it is a singleton, deleting it yields better memory performance.
We replace TemporaryAllocator with CUDADeviceContextAllocator and CUDADeviceContextAllocation, which use a stream callback to free the memory allocated for the stream, avoiding the singleton.
Also added data_feed_proto to operator to fix CI for CPU compilation.
* Refine the codes related to fc op.
* Add GPU implementation for fc functor.
* Apply fc_fuse_pass in GPU inference.
test=develop
* Change the cmake for fc op.
* Change PADDLE_ENFORCE to PADDLE_ENFORCE_EQ.
* Add an attribute to set the activation type in fc_op.
* Enhance the unittest of fc_op.
test=develop
* Move the declaration of FCOpGrad back to the header file.
test=develop
* Set default value for newly added arguments in test_fc_op.
test=develop
* Remove constraint that last dimension is forced to be 1 in huber_loss
test=develop
* add y[rank-1] == 1 when x_rank=y_rank test=develop
* modify into contain_unknown_dim test=develop
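The huber_loss change above drops the requirement that the last dimension be 1, so input and label only need matching shapes. A hedged example, assuming the fluid.layers.huber_loss(input, label, delta) signature; shapes are illustrative only:

```python
import paddle.fluid as fluid

# Previously the last dimension of input/label had to be 1.
x = fluid.layers.data(name='x', shape=[8, 4], dtype='float32',
                      append_batch_size=False)
y = fluid.layers.data(name='y', shape=[8, 4], dtype='float32',
                      append_batch_size=False)
loss = fluid.layers.huber_loss(input=x, label=y, delta=1.0)
```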
* refactor dygraph,test=develop
* fix failed unittest,test=develop
* polish code,test=develop
* check windows ci error,test=develop
try to fix windows ci error by np.allclose,test=develop
* polish vlog and profiler, test=develop
* try to fix preceding ops order,test=develop
* test transformer in windows ci, test=develop
* use python c-api to speed up tracer.trace,test=develop
* test=develop, fix docker with paddle nccl problem
* test=develop, add ut for debug string and gradient_accumulator
* test=develop, add tests for layer/gradient_accumulator/prepared_op
* test=develop, fix compile error for test_prepared_op
* test=develop, add more ut for dygraph
* test=develop, create API.spec for dygraph api change
* test=develop, refactor name to make it easier to understand
* test=develop, refactor name to make it easier to understand
* test=develop, fix multi-gpu failure, add Tracer tests, change PADDLE_ENFORCE to PADDLE_ENFORCE_EQ
* test=develop, fix ut failed on parallel se-resnext
* test=develop, change one more PADDLE_ENFORCE
* Add the dynamic load of nvrtc, and support runtime compiling of CUDA kernel using nvrtc.
test=develop
* Call CUDA driver api to launch the kernel compiled by nvrtc.
test=develop
* Disable for mac and windows.
test=develop
* Refine the codes to support manually specified num_threads and workload_per_thread.
test=develop
* Refine the CUDA kernel to support large dims.
test=develop
* test=develop add an argument for softshrink python api
* test=develop fix doc format
test=develop fix doc format
* test=develop fix API.spec
test=develop fix API.spec
* add extra error message hint in optimizer ops
* polish format & delete useless change, test=develop
* extract init judgement from shape compare, test=develop
* test=develop
Fix the scatter op bug when using the add mode, and support the int64 data type for scatter_op Index (#18804).
* test=develop
Remove the PADDLE_ENFORCE and use PADDLE_ENFORCE_EQ
* test=develop
Remove the bug fix of scatter_add, and just add int64 support in scatter_add
* test=develop
Add a test case for the scatter op, covering only the int64 index
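The scatter commits above fix the accumulate ("add") mode and add int64 support for the Index input. A sketch of calling the Python API, under the assumption that fluid.layers.scatter exposes an overwrite flag where overwrite=False selects the add mode:

```python
import paddle.fluid as fluid

x = fluid.layers.data(name='x', shape=[3, 2], dtype='float32',
                      append_batch_size=False)
# int64 index is now accepted in addition to int32 (per the notes above)
index = fluid.layers.data(name='index', shape=[4], dtype='int64',
                          append_batch_size=False)
updates = fluid.layers.data(name='updates', shape=[4, 2], dtype='float32',
                            append_batch_size=False)

# overwrite=False is assumed here to select the accumulate ("add") mode
out = fluid.layers.scatter(x, index, updates, overwrite=False)
```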
* Add an interface to enable cudnn for inference.
* Add cudnn_placement_pass.
test=develop
* Set the default value of cudnn_enabled_op_types to null.
test=develop
* Write the common basic class, placement_pass_base, to refine the codes.
test=develop
* Call EnableCUDNN in unittest.
test=develop
* Refine cudnn_placement_pass tester.
* Enable the testing of cudnn_placement_pass in inference's unittest.
test=develop
* Add the check of op kernels.
test=develop