* add NotImplementedError for multiple optimizers used on multiple places. test=develop
* raise the assertion error only if num_devices > 1. test=develop
* set test_optimizer_in_control_flow in CMakeLists to use multi-GPU. test=develop
* add bn and relu fuse pass
* add op attr assert and dtype assert
* fix some input and output bugs for the fused op and pattern.
* add the unittest for fuse_bn_act_pass. test=develop
* use normative enforce statements. test=develop
* add the cpu test. test=develop
* add support for batch_size=1 to the bn with relu op. test=develop
* add the error type for paddle throws. test=develop
* add fused_batch_norm_act and fused_batch_norm_act_grad to op_has_unused_vars_white_list. test=develop
* modify fc to linear in sample code, test=develop
* remove FC, test=develop
* remove warnings, test=develop
* drop fluid/imperative/README.md, test=develop
* change fc to linear, test=develop
* polish code style, test=develop
1. Add a new input named batch_roi_nums to prroi_pool_op. batch_roi_nums holds the number of RoIs for each image in the batch when rois is a Tensor; when rois is a LoDTensor, this information is stored in rois's LoD.
2. Add gradient checks to prroi_pool_op and fix the abnormal X gradient diff on CPU.
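A minimal sketch of the new input, assuming the Python wrapper exposes it as a `batch_roi_nums` keyword of `fluid.layers.prroi_pool` (the argument name is taken from the description above):

```python
import paddle.fluid as fluid

x = fluid.data(name='x', shape=[None, 490, 28, 28], dtype='float32')
rois = fluid.data(name='rois', shape=[None, 4], dtype='float32')
# when rois is a plain Tensor, batch_roi_nums carries the per-image RoI
# counts that would otherwise be stored in the LoD of a LoDTensor
batch_roi_nums = fluid.data(name='batch_roi_nums', shape=[None], dtype='int64')
pool_out = fluid.layers.prroi_pool(x, rois, spatial_scale=1.0,
                                   pooled_height=7, pooled_width=7,
                                   batch_roi_nums=batch_roi_nums)
```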
* support elu activation double grad, test=develop
* delete the commented code in .cc, test=develop
* fix failing relu test, test=develop
* add elu double grad kernel and unit test
* add calculation of dX in elu double grad functor, test=develop
* update the committed code, test=develop
* append optimize op in the grad block of current block if current block is in control flow. test=develop
* add conditional grad op when optimizer used in control flow. test=develop
* add comments and fix a typo. test=develop
* fix append_backward to support control flow. test=develop
* add test. test=develop
* fix copy_var_to_parent_block and conditional_block_grad. test=develop
* fix bug: revert to append conditional_block_grad vars to sub grad block. test=develop
* fix bug: revert to assigning the var to the parent block even if the var is already in the parent block
* fix bug: handle the case where outputs is empty. test=develop
* move _rename_grad_ out. test=develop
* modify code according to reviews from Huihuang. test=develop
* modify code according to reviews from Jinle. test=develop
* optimize adam speed by removing _finish_update test=develop
* fix SparseAdamFunctor param list test=develop
* Remove scale_op in expect_list of adam_op test=develop
* fix test optimizer loss assert error test=develop
* modify PADDLE_ENFORCE usage test=develop
* fix op_type in lamb_op.cc test=develop
* fix ostream format bug in error messages test=develop
* add betaPowOut in ngraph op test=develop
* fix ngraph::op api for gcc8 test=develop
* clean code test=develop
* modify struct into class test=develop
* remove code of beta1Tensor in lamb_op test=develop
* Remove self-set accuracy parameters of op tests: max_relative_error
test=develop
* Remove self-set parameters of op tests: max_relative_error
test=develop
* fix elementwise_pow bug on integer inputs, test=develop
* use llrint to support elementwise_pow_grad, test=develop
* add some tests, test=develop
* revert grad functor, test=develop
* add fake init for the trainer; fix the large memory footprint in the trainer
* do not merge recv vars from a remote endpoint, test=develop
* add recv and save ops; merge sliced vars in one op to save memory
* remove hsigmoid with pull sparse, test=develop
* fix python API tests that do not need to inherit OpTest, test=develop
* fix fp16 cases that will only be enabled in GPU mode, test=develop
* remove TestSoftmaxFP16Op from test cases of softmax_mkldnn_op, test=develop
* fix tests so that the cases are only created in GPU mode, test=develop
Add tests that use dy/dx to make sure the gradient values calculated by the control flow backward pass are correct. Also fixed bugs detected by those tests.
Fix bugs:
1. Unlike sum_op, optimizer ops don't allow uninitialized input tensors. But in conditional_block_grad_op, since the conditional_block may not run, the output gradient tensor may be uninitialized, which causes the optimizer op to fail. To fix it, we should either let optimizer ops support uninitialized input like sum_op, or assign the uninitialized gradient to 0 when the conditional_block_grad_op doesn't run. I found there are about 10+ optimizer ops. **To be simpler, I just assign the output gradient of the conditional_block_grad_op to 0 in this PR** (see the sketch after this list). But it can be further explored whether we can make optimizer ops support uninitialized input tensors like sum_op does, because theoretically we could then skip the assignment in conditional_block_grad_op and speed things up.
2. Infer parameter shapes during append_backward. I didn't know that all our parameters are in the global block. When op_desc infers shapes at the sub-block, it may not know the shapes of gradients of parameters whose shape information lives in the global block. I fixed it by inferring the shapes of those gradients from the forward vars.
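A minimal sketch of the bug-1 scenario (a hypothetical toy program, not the actual unittest), assuming `fluid.layers.cond`, which lowers to conditional_block ops:

```python
import numpy as np
import paddle.fluid as fluid

x = fluid.data(name='x', shape=[1], dtype='float32')
x.stop_gradient = False
w = fluid.layers.create_parameter(shape=[1], dtype='float32')
zero = fluid.layers.fill_constant(shape=[1], dtype='float32', value=0.0)
# cond lowers to conditional_block ops; only one branch runs, so the other
# branch's output gradient would stay uninitialized without the zero-assign fix
out = fluid.layers.cond(fluid.layers.less_than(x, zero),
                        lambda: x * w, lambda: x + w)
loss = fluid.layers.reduce_mean(out)
dx = fluid.gradients([loss], [x])  # the dy/dx values checked by the new tests
fluid.optimizer.SGD(learning_rate=0.1).minimize(loss)

exe = fluid.Executor(fluid.CPUPlace())
exe.run(fluid.default_startup_program())
loss_v, dx_v = exe.run(feed={'x': np.array([1.0], dtype='float32')},
                       fetch_list=[loss, dx[0]])
```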
This PR also did some code cleanup:
1. Print the var name when sgd_op catches a shape error so that it is easier to debug.
2. Fix a typo: dicta -> dict
* add file check_op_desc.py and add interface to get default value. test=develop
* add test for c++ coverage rate. test=develop
* Correct typo. test=develop
* dygraph mode support linear lr warm up; test=develop
* add unittest for linear warmup; test=develop
* add input type check; test=develop
* fix type check assert error; test=develop
* change type error; test=develop
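A minimal sketch of the dygraph usage, assuming `fluid.layers.linear_lr_warmup` accepts a float learning rate under a `dygraph.guard` and that its result can be passed to an optimizer:

```python
import paddle.fluid as fluid

with fluid.dygraph.guard():
    # learning rate ramps linearly from start_lr to end_lr over warmup_steps
    warmup_lr = fluid.layers.linear_lr_warmup(learning_rate=0.1,
                                              warmup_steps=100,
                                              start_lr=0.0, end_lr=0.1)
    sgd = fluid.optimizer.SGD(learning_rate=warmup_lr)
```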
* test=develop, fix docker with paddle nccl problem
* don't expose numerous Tensor.set(), test=develop
* fix condition, test=develop
* fix float16 bug, test=develop
* feed should be Tensor or np.array, not Variable or number, test=develop
* use forcecast to copy numpy slice to new array, test=develop
* remove float16-uint16 hacking, test=develop
* add variable method to varbase and refactor to_variable to support return varbase
* support kwargs in varbase constructor
* add VarBase constructor to support default python args
* refine varbase initial method
* reset branch
* fix unit tests after changing VarBase error info to PaddleEnforce
* cherry-pick the parameter change from before
* overload isinstance to avoid too many changes of is_variable
* rm useless files
* rm useless code merged by git
* test=develop, fix some ut failed error
* test=develop, fix test_graph_wrapper
* add some tests, test=develop
* refine __getitem__, test=develop
* add tests, test=develop
* fix err_msg, test=develop
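A minimal sketch of the refactored entry point; it assumes `to_variable` now returns a VarBase-backed Variable and that basic slicing goes through the refined `__getitem__`:

```python
import numpy as np
import paddle.fluid as fluid

with fluid.dygraph.guard():
    # to_variable now returns a VarBase-backed Variable
    x = fluid.dygraph.to_variable(np.arange(6, dtype='float32').reshape(2, 3))
    row = x[0]  # exercises the refined __getitem__
    print(type(x), row.numpy())
```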
* fix the supported devices of the unique and unique_with_counts ops.
test=develop
test=document_fix
* Fix the test precision of the unique and unique_with_counts ops.
test=develop
test=document_fix
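For reference, a minimal sketch of the two ops touched above:

```python
import paddle.fluid as fluid

x = fluid.data(name='x', shape=[None], dtype='int64')
# unique returns the deduplicated values and an index tensor
out, index = fluid.layers.unique(x, dtype='int32')
# unique_with_counts additionally returns the count of each unique value
out2, index2, count = fluid.layers.unique_with_counts(x, dtype='int32')
```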
* add param & grad shape check for sgd op
* add _reshape_inplace interface for dygraph parallel
* refine unittests based on paddle/models scripts, test=develop
* add unittest for parallel grad fuse, test=develop
* Commit before merging develop
test=develop
* Backup after working with Huihuang logs
* Commit before deleting Huihuang debug logging
* Commit before debug
test=develop
* Fix bug commit
test=develop
* Backup of fixing bugs
test=develop
* Clean up code
test=develop
* Fix a bug in sum_op
test=develop
* Add ascending for argsort
* Refine api doc description.
* Refine descending description
* Add int32 logic to speed up when the data size is small.
* Remove the int32 optimization as it is not supported in Python
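A minimal sketch of the new ordering flag, assuming it shipped as a `descending` argument of `fluid.layers.argsort`:

```python
import paddle.fluid as fluid

x = fluid.data(name='x', shape=[3, 4], dtype='float32')
# returns the sorted values and their original indices along the last axis
out, indices = fluid.layers.argsort(x, axis=-1, descending=True)
```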
* Implement Int8 FC
* Integrate FC into INT8v2
test=develop
* int8 FC: transpose weights before computing scales
test=develop
* Add support for activation_type string in FC
test=develop
* Disable MKL-DNN's FC in VGG16 and 19
test=develop
* Disable FC quantization when mkldnn FC is disabled
test=develop
* Solve PADDLE_ENFORCES in FC int8
* Fix Paddle enforces and remove const cast
test=develop
* Fix style changes
test=develop
* Fix quantizer_tester test and add fc quantization
test=develop
* Fix FC test fail on CUDA
* Remove unnecessary log from quantize placement pass
test=develop
* Add Thread ID to FC hash key
test=develop
* Add comments to MKL-DNN FC Kernel
test=develop
* Refactor quantizer
test=develop
* Fix linter issues
test=develop
* Fix crash in slim googlenet
test=develop
* Fix PADDLE_ENFORCE messages
test=develop
* Add fc padding to improve mkl performance
test=develop
* fix gpu pass and error information
test=develop
* fix fc_fuse_pass_test
test=develop
* fix error information
test=develop
* fix error information
test=develop
* fix name and add fc op padding test
test=develop
* fix attributes
test=develop
* optimize fc padding
test=develop
* fix test
test=develop
* Refactor MKL-DNN ElementwiseMul
remove manual fallback, remove format attrs
test=develop
* Refine PADDLE_ENFORCEs in eltwise_mul_op.h
test=develop
* Make ElementwiseMulOp inherit from ElementwiseOp
* Change type of simd_width to int
test=develop
* Remove Constructor extensions in ElementwiseOp and ElementwiseMulOp
test=develop
* Restore attributes
test=develop
* Fix test coverage for mkldnn eltwise mul
test=develop
* Conform to new is_run_common_broadcast API
test=develop
* Add UT for AreDimsAndFormatCorrect
test=develop
* Improve argsort performance.
- With 200,000 elements computing argsort on a V100,
it can speed up ~190x:
before opt cost: 0.53s
after opt cost: 0.0027s
- Add fp16 support
* Refine error message
* Refine code
test=develop
Signed-off-by: zhaoyuchen <zhaoyuchen01@baidu.com>
* fix fetch handler problem and refactor
When a user defines a FetchHandler class, he or she should initialize the handler
with a variable dict. The key of the variable dict is a user-defined name;
the value is a Variable generated from the Python API.
For each fetch, the user should implement the handler function, in which
fetched_result_dict is available and the user can access the fetched values
with the user-defined keys.
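A minimal sketch following this description; the constructor argument `var_dict` and the `fetch_handler` hook of `train_from_dataset` are assumptions based on the text, and `loss`, `exe` and `dataset` are assumed to be defined as usual:

```python
import paddle.fluid as fluid

class MyFetchHandler(fluid.executor.FetchHandler):
    def handler(self, fetched_result_dict):
        # keys are the user-defined names from the variable dict
        print(fetched_result_dict['my_loss'])

# 'loss' is a Variable built with the Python API; 'exe' and 'dataset'
# are the usual Executor and Dataset objects
fetch_handler = MyFetchHandler(var_dict={'my_loss': loss})
exe.train_from_dataset(program=fluid.default_main_program(),
                       dataset=dataset,
                       fetch_handler=fetch_handler)
```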
* add int8 kernel to lookup_table op and add dequantize op test=develop
* change PADDLE_ENFORCE to PADDLE_ENFORCE_EQ test=develop
* change copyright and fix some unsuitable code test=develop
* remove debug log test=develop
* replace GetInputType with IndicateVarDataType test=develop
* fix EmptyGradMaker test=develop
* fix diff between cpu and gpu test=develop
* use memcpy for int8_t test=develop
* open dygraph op test, test=develop
* modify to_variable, test=develop
* modify input and output for dygraph, test=develop
* modify input and output for dygraph(fix bug), test=develop
* fix input processing of dygraph op test, test=develop
* fix bug, test=develop
* fix op test, test=develop
* fix forward bug for dygraph, test=develop
* fix mkldnn op test for forward, test=develop
* update nn.py for dygraph, test=develop
* fix crop_tensor_op, test=develop
* fix elementwise_mul_op, test=develop
* fix fill_op, test=develop
* fix some mkldnn op, test=develop
* open backward op test for dygraph, test=develop
* delete log, test=develop
* close backward op test for dygraph, test=develop
* fix bug for edit_distance_op and test_lstm_cudnn_op, test=develop
* fix optest backward bug for dygraph, test=develop
* close backward op test for dygraph, test=develop
* open dygraph op test, test=develop
* fix op test for dygraph, fix GradOpDescMaker, test=develop
* fix bug for linear_chain_crf_op.h, test=develop
* remove log, test=develop
* remove log for op_test.py, test=develop
* fix bug for var_conv_2d_op, change PADDLE_ENFORCE, test=develop
* fix PADDLE_ENFORCE_EQ for hierarchical_sigmoid_op.cc, test=develop
* fix bug for test_increment_ngraph_op.py, test=develop
* fix lod for op test in dygraph, test=develop
* refactor op_test.py to reduce redundant code, test=develop
* fix lod optest, modify InputVar/OutputVar to HasInput/HasOutput, test=develop
* remove debug log, test=develop
* remove redundant code in base.py, test=develop
* fix some error in optest, test=develop
* fix ClearNoNeedBufferInputs function's bug for LoDTensor, test=develop
* refactor op_test.py, test=develop
* remove redundant writing, test=develop
* fix error(get tensor of the grad variable), test=develop
* fix test_concat_mkldnn test_conv2d_mkldnn, test=develop
* fix optest.py for get tensor of LoDTensor, test=develop
* fix some redundant code, test=develop
* resolve conflicts and rewrite paddle error messages, test=develop
* add control flow API: case. test=develop
* delete 'raise TypeError' in _error_message() and return a string. test=develop
* polish API document. test=develop
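A minimal sketch of the new API, assuming the `fluid.layers.case(pred_fn_pairs, default)` form:

```python
import paddle.fluid as fluid

def fn_small():
    return fluid.layers.fill_constant(shape=[1], dtype='float32', value=1.0)

def fn_default():
    return fluid.layers.fill_constant(shape=[1], dtype='float32', value=2.0)

x = fluid.data(name='x', shape=[1], dtype='float32')
half = fluid.layers.fill_constant(shape=[1], dtype='float32', value=0.5)
pred = fluid.layers.less_than(x, half)
# runs fn_small when pred holds, otherwise falls through to the default
out = fluid.layers.case(pred_fn_pairs=[(pred, fn_small)], default=fn_default)
```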
* fix auc drop, first commit test=develop
* update datanorm op
* update datanorm with enforce test=develop
* update test=develop
* update format test=develop
* update format
* update format test=develop
* add unit test test=develop
* update unit test test=develop
* update format test=develop
* update API description test=develop
* update format test=develop
* fix codes as comments test=develop
* fix description as comments test=develop
* update code. test=develop
* add API switch_case. test=develop
add Nest
* modify code according to reviews:
1. Attr(branch_index) supports 'uint8' and 'int64' besides 'int32'.
2. remove useless code.
test=develop
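A minimal sketch of the API, assuming the `fluid.layers.switch_case(branch_index, branch_fns, default)` form:

```python
import paddle.fluid as fluid

def fn_zero():
    return fluid.layers.fill_constant(shape=[1], dtype='float32', value=0.0)

def fn_one():
    return fluid.layers.fill_constant(shape=[1], dtype='float32', value=1.0)

# branch_index may be int32, and per the change above also uint8 or int64
index = fluid.layers.fill_constant(shape=[1], dtype='int32', value=1)
out = fluid.layers.switch_case(branch_index=index,
                               branch_fns={0: fn_zero, 1: fn_one},
                               default=fn_zero)
```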
* replace fluid.layers.data with fluid.data and polish API document. test=develop
* add input type and dtype check template, and update checks for some APIs
* refine the check template, and update checks for some APIs in nn.py
* update checks for some APIs in loss.py
test=develop
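A minimal sketch of the kind of check the template enables; the helper name `check_variable_and_dtype` is an assumption about how it is exposed:

```python
# helper name is an assumption; the actual template may be named differently
from paddle.fluid.data_feeder import check_variable_and_dtype

def my_api(x, name=None):
    # raises a uniform TypeError if x is not a Variable
    # or its dtype is not in the allowed list
    check_variable_and_dtype(x, 'x', ['float16', 'float32', 'float64'], 'my_api')
    ...
```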
* Add asymmetric padding for conv fusion.
test=develop
reference: pr/20042
* Fix eigen build link error
* Change back file mode
* Use math function & add more checks.
* set the default value of alpha for prelu to 0.25, test=develop
* add the call to __syncthreads(), test=develop
* fix the implementation of cpu prelu, test=develop
* fix the implementation of element-mode prelu, test=develop
* modify test_prelu_op.py, test=develop
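A minimal sketch of the new default; with no `param_attr` given, alpha is initialized to 0.25:

```python
import paddle.fluid as fluid

x = fluid.data(name='x', shape=[2, 3, 8, 8], dtype='float32')
# mode 'all': one shared alpha, now default-initialized to 0.25
y_all = fluid.layers.prelu(x, mode='all')
# mode 'element' (one alpha per element) is the path repaired above
y_elem = fluid.layers.prelu(x, mode='element')
```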
* Add the check of lod_level between compile-time and runtime.
test=develop
* Fix bug in check_compile_vs_runtime.
test=develop
* Fix the check of output when it is dispensable or intermediate.
test=develop
* Share lod of x to out in match_matrix_tensor op in compile-time.
* Implement GetLoDLevel in InferShapeContext.
* Set the default value of check_compile_vs_runtime to False and enable it in test_sequence_pad_op.
test=develop
* Enable check_compile_vs_runtime in test_match_matrix_tensor.
* Add the implementation of SetLoDLevel in InferShapeContext.
* Remove the implementation of IncreaseLoDLevel and call Get/SetLoDLevel instead.
* Remove the implementation of DecreaseLoDLevel and call Set/GetLoDLevel instead.
* Refine some ops and unittests.
test=develop
* Fix a typo.
test=develop
* Remove the check of var type, and change int to int32_t.
test=develop
* Add unittest for Get/SetLoDLevel.
test=develop
* fix bugs in pool/conv/conv_transpose:
1. It should be stride[i], not stride[0], in UpdatePaddingAndDilation (see the sketch below);
2. fix a bug in the function _get_padding_with_SAME in test_conv/conv_transpose_op.py;
3. fix a bug in the computation process of the function conv2dtranspose_forward_naive.
test=develop
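A minimal sketch of the corrected per-dimension SAME-padding computation from item 1; the helper name is illustrative, not the exact test function:

```python
def padding_with_same(input_shape, kernel, stride):
    """Compute [pad_before, pad_after] per spatial dim for SAME padding."""
    paddings = []
    for i, in_size in enumerate(input_shape):
        # use stride[i] for dimension i, not stride[0] for every dimension
        out_size = (in_size + stride[i] - 1) // stride[i]
        total = max((out_size - 1) * stride[i] + kernel[i] - in_size, 0)
        paddings.append([total // 2, total - total // 2])
    return paddings

# e.g. a 5x7 feature map, 3x3 kernel, stride (2, 3) -> [[1, 1], [1, 1]]
print(padding_with_same([5, 7], [3, 3], [2, 3]))
```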
* change the test so that the data of different dimensions differs. test=develop
* Add asymmetric padding support for mkldnn pooling
test=develop
* Add asymmetric padding support for mkldnn conv
test=develop
* Add asymmetric padding support for mkldnn conv_transpose
test=develop
* remove duplicate code and duplicate config of master+patch
* drop all ins which have a conflicting slot or whose size < merge_size
* the user only needs to set the merge size; if the number of ins with the same id does not equal the merge size, those ins are simply dropped (see the sketch below)
* the user must make sure the master data and patch data have no common slot whose feasigns are both non-zero; otherwise those ins will be dropped (the slot list should still be the same for both master and patch)
* test=develop
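A minimal sketch of the user-facing knob, assuming the `InMemoryDataset.set_merge_by_lineid` setter:

```python
import paddle.fluid as fluid

dataset = fluid.DatasetFactory().create_dataset("InMemoryDataset")
# ins sharing the same line id are merged; groups whose size != merge_size,
# or with conflicting non-zero slots, are dropped as described above
dataset.set_merge_by_lineid(merge_size=2)
```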
* add launch_ps module so that we can launch a parameter server training job
1) a user can specify worker_num and server_num
2) parameter server can be killed after all workers exit
3) unit test is added
test=develop
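A minimal usage sketch; the module path follows the description above, and the exact flag spellings are assumptions:

```
python -m paddle.distributed.launch_ps --worker_num 2 --server_num 2 train.py
```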
* Refine the cache of program, context and scope in executor.
test=develop
* Refine the unittest test_executor_and_use_program_cache.
* Add the test the PaddingRNN with use_program_cache=True.
test=develop
* Remove a check.
test=develop
* Refine the unittest to check whether it is correct when setting use_program_cache=True.
test=develop
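A minimal sketch of the cache switch exercised by these tests:

```python
import numpy as np
import paddle.fluid as fluid

x = fluid.data(name='x', shape=[None, 4], dtype='float32')
y = fluid.layers.fc(input=x, size=2)

exe = fluid.Executor(fluid.CPUPlace())
exe.run(fluid.default_startup_program())
feed = {'x': np.random.rand(3, 4).astype('float32')}
# with use_program_cache=True the executor reuses the cached program,
# context and scope across identical run() calls
out, = exe.run(fluid.default_main_program(), feed=feed,
               fetch_list=[y], use_program_cache=True)
```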
* improve split and concat op:
1. support Tensor for argument 'dim' in split op.
2. support Tensor for argument 'axis' in concat op.
test=develop
* redefine the function GetDataFromTensor and set unknown output shape to -1.
test=develop
* add check: Attr(sections) matches Input(X). test=develop
* support Tensor for attr(sections) and attr(sections) can contain -1.
add check for attr(sections).
test=develop
* modify error message for concat and call Resize only when necessary. test=develop
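A minimal sketch of the improved APIs, assuming a 1-element int32 Tensor is accepted for `dim`/`axis`:

```python
import paddle.fluid as fluid

x = fluid.data(name='x', shape=[3, 9, 5], dtype='float32')
# 'dim' may now be a 1-element int32 Tensor instead of a Python int
dim = fluid.layers.fill_constant(shape=[1], dtype='int32', value=1)
# sections may contain a single -1, inferred from the remaining size
x0, x1, x2 = fluid.layers.split(x, num_or_sections=[2, 3, -1], dim=dim)
# 'axis' of concat may likewise be a 1-element Tensor
out = fluid.layers.concat([x0, x1, x2], axis=dim)
```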
* Refine the InferShape of ReadFrom and WriteTo ops, and add a comment to explain why ShareLoD is not called at runtime.
test=develop
* Add a comment for ReorderLoDTensorByRank op.
* Add a comment for lod_tensor_to_tensor_array op to explain why DecreaseLoDLevel is only called at compile time.
test=develop
* ShrinkRNNMemory op should call ShareLoD at compile time.
test=develop
* Add the implementation of IncreaseLoDLevel and add the compile-time check of lod_level in InferShape of sequence_pool.
test=develop
* Refine the unittest of DynamicRNN.
test=develop
* Change PADDLE_ENFORCE to PADDLE_ENFORCE_NE.
test=develop
* no longer need to define all embedding layers (without missing any) of all slots in each program; make trainer_param repeated in ps.proto.
* add find_distributed_lookup_table_grads instead of hard-coding GRAD
* support embedding stop gradient; push sparse had errors before this fix.
* fix fill sparse: skip slots which have no embedding. Before this fix, each slot's embedding in a sparse table had to be used in all training programs.
* fix pull sparse: skip slots which have no embedding.
* fix collecting feasign label info: skip slots which have no embedding.
* support multiple sparse tables in one or multiple training programs; each program can pull/push its own related sparse tables instead of all sparse tables.
* test=develop
* All elements in attr(shape) of crop_tensor can be -1, test=develop, test=document_preview
* fix the bug that attr(offsets) should be initialized, test=develop
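A minimal sketch, assuming -1 entries in `shape` are resolved from the input at runtime:

```python
import paddle.fluid as fluid

x = fluid.data(name='x', shape=[3, 5, 7], dtype='float32')
# every element of shape may now be -1, resolved from the input at runtime
out = fluid.layers.crop_tensor(x, shape=[-1, -1, -1], offsets=[0, 1, 2])
```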
* add cuda place and remove fluid.layers.data for test of python API, test=develop
* modified batch size for Python API test, test=develop