* fix fetch handler problem and refactor
when a user defines a FetchHandler class, it should be initialized with a
variable dict. The key of the variable dict is a user-defined name, and the
value is a Variable generated from the Python API.
For each fetch, the user should implement a handler function in which
fetched_result_dict is available, so the fetched values can be accessed
with the user-defined keys.
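A minimal usage sketch, assuming FetchHandler is exposed from fluid.executor; the base-class path and handler signature are assumptions, not the confirmed interface:

```python
import paddle.fluid as fluid

# Hypothetical user-defined handler: var_dict maps user-chosen names to
# Variables, and handler() receives the fetched results under those names.
class LossFetchHandler(fluid.executor.FetchHandler):
    def handler(self, fetched_result_dict):
        print("current loss:", fetched_result_dict["loss"])

x = fluid.data(name='x', shape=[None, 1], dtype='float32')
loss = fluid.layers.reduce_mean(x)
fetch_handler = LossFetchHandler(var_dict={'loss': loss})
# fetch_handler would then be handed to the training loop / executor.
```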
* add int8 kernel to lookup_table op and add dequantize op test=develop
* change PADDLE_ENFORCE to PADDLE_ENFORCE_EQ test=develop
* change copyright and clean up some unsuitable code test=develop
* remove debug log test=develop
* replace GetInputType with IndicateVarDataType test=develop
* fix EmptyGradMaker test=develop
* fix diff between cpu and gpu test=develop
* use memory copy when dtype is int8_t test=develop
* open dygraph op test, test=develop
* modify to_variable, test=develop
* modify input and output for dygraph, test=develop
* modify input and output for dygraph(fix bug), test=develop
* fix input processing of dygraph op test, test=develop
* fix bug, test=develop
* fix op test, test=develop
* fix forward bug for dygraph, test=develop
* fix mkldnn op test for forward, test=develop
* update nn.py for dygraph, test=develop
* fix crop_tensor_op, test=develop
* fix elementwise_mul_op, test=develop
* fix fill_op, test=develop
* fix some mkldnn op, test=develop
* open backward op test for dygraph, test=develop
* delete log, test=develop
* close backward op test for dygraph, test=develop
* fix bug for edit_distance_op and test_lstm_cudnn_op, test=develop
* fix optest backward bug for dygraph, test=develop
* fix optest backward bug for dygraph, test=develop
* close backward op test for dygraph, test=develop
* close backward op test for dygraph, test=develop
* open dygraph op test, test=develop
* fix op test for dygraph, fix GradOpDescMaker, test=develop
* fix bug for linear_chain_crf_op.h, test=develop
* remove log, test=develop
* remove log, test=develop
* remove log for op_test.py, test=develop
* remove log for op_test.py, test=develop
* fix bug for var_conv_2d_op, change PADDLE_ENFORCE, test=develop
* fix PADDLE_ENFORCE_EQ for hierarchical_sigmoid_op.cc, test=develop
* fix bug for test_increment_ngraph_op.py, test=develop
* fix lod for op test in dygraph, test=develop
* refactor op_test.py to reduce redundant code, test=develop
* fix lod optest, modify InputVar/OutputVar to HasInput/HasOutput, test=develop
* remove debug log, test=develop
* remove redundant code in base.py, test=develop
* fix some error in optest, test=develop
* fix ClearNoNeedBufferInputs function's bug for LoDTensor, test=develop
* refactor op_test.py, test=develop
* remove redundant writing, test=develop
* fix error(get tensor of the grad variable), test=develop
* fix test_concat_mkldnn test_conv2d_mkldnn, test=develop
* fix optest.py for get tensor of LoDTensor, test=develop
* fix optest.py for get tensor of LoDTensor, test=develop
* fix optest.py for get tensor of LoDTensor, test=develop
* fix some redundant code, test=develop
* resolve conflict and rewrite paddle error message, test=develop
* add control flow API: case. test=develop
* delete 'raise TypeError' in _error_message() and return a string. test=develop
* polish API document. test=develop
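A hedged sketch of how the new case API is used; argument names follow the fluid control-flow layers of this release and should be treated as assumptions:

```python
import paddle.fluid as fluid

def fn_1():
    return fluid.layers.fill_constant(shape=[1], dtype='float32', value=1.0)

def fn_2():
    return fluid.layers.fill_constant(shape=[1], dtype='float32', value=2.0)

def fn_default():
    return fluid.layers.fill_constant(shape=[1], dtype='float32', value=-1.0)

x = fluid.layers.fill_constant(shape=[1], dtype='float32', value=0.3)
y = fluid.layers.fill_constant(shape=[1], dtype='float32', value=0.1)

# run fn_1 if x < y, else fn_2 if x == y, else the default branch
out = fluid.layers.case(
    pred_fn_pairs=[(fluid.layers.less_than(x, y), fn_1),
                   (fluid.layers.equal(x, y), fn_2)],
    default=fn_default)
```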
* fix auc drop first commit test=develop
* update datanorm op
* update datanorm with enforce test=develop
* update test=develop
* update format test=develop
* update format
* update format test=develop
* add unit test test=develop
* update unit test test=develop
* update format test=develop
* update format test=develop
* update API description test=develop
* update API description test=develop
* update format test=develop
* fix codes as comments test=develop
* fix description as comments test=develop
* fix description as comments test=develop
* update codes.. test=develop
* add API switch_case. test=develop
add Nest
* modify code according to reviews:
1. Attr(branch_index) supports 'uint8' and 'int64' besides 'int32'.
2. remove useless code.
test=develop
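A hedged sketch of switch_case after this change, with branch_index allowed to be int32, int64 or uint8; the exact argument forms are assumptions:

```python
import paddle.fluid as fluid

def fn_1():
    return fluid.layers.fill_constant(shape=[1], dtype='int32', value=1)

def fn_2():
    return fluid.layers.fill_constant(shape=[1], dtype='int32', value=2)

def fn_default():
    return fluid.layers.fill_constant(shape=[1], dtype='int32', value=-1)

# branch_index may now be int64 (or uint8) as well as int32
index = fluid.layers.fill_constant(shape=[1], dtype='int64', value=1)
out = fluid.layers.switch_case(branch_index=index,
                               branch_fns={1: fn_1, 2: fn_2},
                               default=fn_default)
```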
* replace fluid.layers.data with fluid.data and polish API document. test=develop
* add input type and dtype check template, and update the checks of some APIs
* refine the check template, and update the checks of some APIs in nn.py
* update the checks of some APIs in loss.py
test=develop
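A sketch of how such a check template is typically applied inside an API wrapper; the helper name check_variable_and_dtype is taken from fluid.data_feeder and is an assumption about what this template refers to:

```python
import paddle.fluid as fluid
from paddle.fluid.data_feeder import check_variable_and_dtype

def checked_relu(x, name=None):
    # Reject non-Variable inputs and unsupported dtypes early, with an
    # error message that names the offending argument and API.
    check_variable_and_dtype(x, 'x', ['float16', 'float32', 'float64'], 'relu')
    return fluid.layers.relu(x, name=name)
```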
* Add asymmetric padding for conv fusion.
test=develop
reference: pr/20042
* Fix eigen build link error
* Change back file mode
* Use math function & add more checks.
* set the default value of alpha for prelu to 0.25, test=develop
* add the call to __syncthreads(), test=develop
* fix the implementation of cpu prelu, test=develop
* repair the implementation of element mode prelu, test=develop
* modify test_prelu_op.py, test=develop
* Add the check of lod_level between compile-time and runtime.
test=develop
* Fix bug in check_compile_vs_runtime.
test=develop
* Fix the check of output when it is dispensable or intermediate.
test=develop
* Share lod of x to out in match_matrix_tensor op in compile-time.
* Implement GetLoDLevel in InferShapeContext.
* Set the default value of check_compile_vs_runtime to False and enable it in test_sequence_pad_op.
test=develop
* Enable check_compile_vs_runtime in test_match_matrix_tensor.
* Add the implementation of SetLoDLevel in InferShapeContext.
* Remove the implementation of IncreaseLoDLevel and call Get/SetLoDLevel instead.
* Remove the implementation of DecreaseLoDLevel and call Set/GetLoDLevel instead.
* Refine some ops and unittests.
test=develop
* Fix a typo.
test=develop
* Remove the check of var type, and change int to int32_t.
test=develop
* Add unittest for Get/SetLoDLevel.
test=develop
* fix bug in pool/conv/conv_transpose:
1. It should be stride[i] not stride[0] in UpdatePaddingAndDilation;
2. fix bug of func _get_padding_with_SAME in test_conv/conv_transpose_op.py;
3. fix bug of the computation process in function conv2dtranspose_forward_naive.
test=develop
* change test to make the data of different dimensions different. test=develop
* Add asymmetric padding support for mkldnn pooling
test=develop
* Add asymmetric padding support for mkldnn conv
test=develop
* Add asymmetric padding support for mkldnn conv_transpose
test=develop
* remove duplicate code and duplicate config of master+patch
* drop all ins that have a conflicting slot or whose size < merge_size
* the user only needs to set merge_size; if the number of ins with the same id is not equal to merge_size, those ins are dropped
* the user must make sure the master data and the patch data have no common slot whose feasigns are both non-zero, otherwise those ins will be dropped (the slot list should still be the same for both master and patch)
* test=develop
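A plain-Python illustration of the drop rule described above; the instance layout and helper are hypothetical, only the filtering logic mirrors the description:

```python
from collections import defaultdict

def filter_ins(instances, merge_size):
    """instances: list of dicts like {"id": ..., "slots": {slot: feasigns}}."""
    grouped = defaultdict(list)
    for ins in instances:
        grouped[ins["id"]].append(ins)

    kept = []
    for ins_id, group in grouped.items():
        # drop ids whose instance count differs from merge_size
        if len(group) != merge_size:
            continue
        # drop groups where the same slot is non-zero in more than one ins
        slot_hits = defaultdict(int)
        for ins in group:
            for slot, feasigns in ins["slots"].items():
                if feasigns:
                    slot_hits[slot] += 1
        if any(count > 1 for count in slot_hits.values()):
            continue
        kept.append(group)
    return kept
```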
* add launch_ps module so that we can launch a parameter server training job
1) a user can specify worker_num and server_num
2) parameter server can be killed after all workers exit
3) unit test is added
test=develop
* don't expose numerous Tensor.set(), test=develop
* fix condition, test=develop
* fix float16 bug, test=develop
* feed should be Tensor or np.array, not Variable or number, test=develop
* use forcecast to copy numpy slice to new array, test=develop
* remove float16-uint16 hacking, test=develop
* Refine the cache of program, context and scope in executor.
test=develop
* Refine the unittest test_executor_and_use_program_cache.
* Add the test the PaddingRNN with use_program_cache=True.
test=develop
* Remove a check.
test=develop
* Refine the unittest to check whether it is correct when setting use_program_cache=True.
test=develop
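A hedged sketch of exercising the refined cache through the existing use_program_cache flag of Executor.run:

```python
import numpy as np
import paddle.fluid as fluid

x = fluid.data(name='x', shape=[None, 13], dtype='float32')
loss = fluid.layers.reduce_mean(fluid.layers.fc(input=x, size=1))

exe = fluid.Executor(fluid.CPUPlace())
exe.run(fluid.default_startup_program())

feed = {'x': np.random.random((8, 13)).astype('float32')}
# With use_program_cache=True, repeated runs of the same program reuse the
# cached context and scope instead of re-preparing them every iteration.
for _ in range(3):
    out, = exe.run(fluid.default_main_program(),
                   feed=feed,
                   fetch_list=[loss],
                   use_program_cache=True)
```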
* improve split and concat op:
1. support Tensor for argument 'dim' in split op.
2. support Tensor for argument 'axis' in concat op.
test=develop
* redefine function GetDataFromTensor and set unknown output shape to -1.
test=develop
* add check: Attr(sections) match Input(X). test=develop
* support Tensor for attr(sections) and attr(sections) can contain -1.
add check for attr(sections).
test=develop
* modify error message for concat and call Resize only when necessary. test=develop
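A hedged example of the new behaviour: the axis of concat and the dim of split may be given as a Tensor, and sections may contain -1:

```python
import paddle.fluid as fluid

x = fluid.data(name='x', shape=[None, 4], dtype='float32')
y = fluid.data(name='y', shape=[None, 4], dtype='float32')

# axis held in a 1-element int32 Tensor instead of a Python int
axis = fluid.layers.fill_constant(shape=[1], dtype='int32', value=1)
out = fluid.layers.concat([x, y], axis=axis)          # shape [None, 8]

# dim of split may likewise be a Tensor; one section may be -1 (inferred)
dim = fluid.layers.fill_constant(shape=[1], dtype='int32', value=1)
parts = fluid.layers.split(out, num_or_sections=[3, -1, 4], dim=dim)
```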
* Refine the InferShape of ReadFrom and WriteTo op, and add comment to explain why not call ShareLoD for runtime.
test=develop
* Add comment for ReorderLoDTensorByRank op.
* Add comment for lod_tensor_to_tensor_array op to explain why only call DecreaseLoDLevel for compile time.
test=develop
* ShrinkRNNMemory op should call ShareLoD for compile time.
test=develop
* Add the implementation of IncreaseLoDLevel and add the compile-time check of lod_level in InferShape of sequence_pool.
test=develop
* Refine the unittest of DynamicRNN.
test=develop
* Change PADDLE_ENFORCE to PADDLE_ENFORCE_NE.
test=develop
* no longer need to define all embedding layers (every single one) of all slots in each program; make trainer_param repeated in ps.proto.
* add find_distributed_lookup_table_grads instead of hard-coding GRAD
* support embedding stop_gradient; push sparse had an error before this fix.
* fix fill sparse, skipping slots which do not have an embedding. Before this fix, each slot's embedding in a sparse table had to be used in all training programs.
* fix pull sparse, skipping slots which do not have an embedding.
* fix collect feasign label info, skipping slots which do not have an embedding.
* support multiple sparse tables in one or more training programs: each program can pull/push only its own related sparse tables instead of all sparse tables.
* test=develop
* All elements in attr(shape) of crop_tensor can be -1, test=develop, test=document_preview
* fix the bug that attr(offsets) should be initialized, test=develop
* add cuda place and remove fluid.layers.data for test of python API, test=develop
* add cuda place and remove fluid.layers.data for test of python API, test=develop
* modified batch size for Python API test, test=develop
* add data type check, test=develop
* polish error messages, test=develop
* polish error messages, test=develop
* Remove support for the CPU architecture matmul, test=develop
* fix syntax bug, test=develop
* add input type and dtype check for cast_op
test=develop
* fix annotation
test=develop
* support more data type
test=develop
* fix bug for fill_constant's error type
test=develop
* improve coverage
test=develop
* improve coverage
test=develop
* Fix docs of gru_unit and dynamic_gru.
Fix basic_gru in rnn_impl.py.
Add error messages for param_attr setting in layer_norm api.
Add int64 dtype for expand.
test=develop
* Reopen unit-tests of basic_gru/basic_lstm in rnn_impl.py.
test=develop
* Add unit test for layer_norm api.
test=develop
* Remove the deprecated gru doc fix. test=develop
* Fix basic_gru test coverage. test=develop
* Update API.spec. test=develop
* Update API.spec. test=develop
* Fix test_basic_gru coverage test. test=develop
* Update test_basic_gru in test_layers to use fluid.data
test=develop
* Update test_basic_gru for coverage. test=develop
* add input type and dtype check for accuracy_op
* add input type and dtype check for accuracy_op
* modify python error message on accuracy_op, test=develop
* modify details on accuracy_op, test=develop
* test float16, test=develop
* add warning, test=develop
* Add fp16 in input.dtype check test=develop
* Add warning of fp16 in CPU test=develop
* add unittest code for fp16 test=develop
* fix float16 list error test=develop
* add api check in fc test=develop
* enforce shape error info of sum op test=develop
* fix spelling test=develop
* print x_dims info test=develop
* enhance shape error info test=develop
* rm unused ckpt and sort ckpt
* use max op idx to sort, test=develop
* remove unused code, test=develop
* add testcase, test=develop
* modify test case, test=develop
* test=develop
Add input type and dtype check for sign_op.
* test=develop
Fix the api text format in sign op.
* test=develop
Fix the api examples in sign op add update the api.spec.
* Update crf_decoding api & example
test=develop
* Update api spec
test=develop
* Fix linear chain crf api
test=develop
* Avoid sharing data pointer with input
test=develop
* Simplify the logic in linear_chain_crf_decoding
* Add unittest for crf_decoding when label & path both are set
test=develop
* Update API spec
test=develop
* Add unittest for layers && correct infer_shape in chunk_eval
test=develop
* test=develop, fix docker with paddle nccl problem
* test=develop, Add Variable api and refine dygraph related API
* test=develop, Add Variable api and refine dygraph related API
* test=develop, refine test for new api and error info
* test=develop, refine error info and test_layers
* test=develop, add API.spec
* test=develop, fix to_string python2 and python3 compat error and refine doc
* test=develop, add API spec
* test=develop, update API spec
* test=develop, update API spec
* test=develop, invoke ci
* test=develop, fix example code
* test=develop, update API spec
* test=develop, fix auto_prune_error_on_leaf
* test=develop, fix auto prune error on loss stop_gradient
* test=develop, remove useless error check
* test=develop, add more ut for sorted gradient
* fix the error message for reduce_mean and reduce_sum op test=develop
* fix typo test=develop
* fix according to review advice test=develop
* fix the test test=develop
* fix test=develop
* fix the constant error message test=develop
* fix typo test=develop
* fix typo test=develop
* fix code style test=develop
* fix comment and bugs test=develop
* fix the bug test=develop
* fix and add unittest test=develop
* fix the typo test=develop
* add support for the fill_constant op test=develop
* add test for ci coverage test=develop
1. support asymmetric padding;
2. support padding algorithm: "SAME" and "VALID";
3. support channel_last: data_format NHWC and NDHWC;
4. change doc of python API and c++;
test=develop, test=document_preview
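The untitled list above appears to describe the conv-family ops; assuming it refers to fluid.layers.conv2d, the new options might be used like this (an illustrative sketch, not the confirmed scope of the change):

```python
import paddle.fluid as fluid

# channel_last input: NHWC puts the channel dimension last
img = fluid.data(name='img', shape=[None, 32, 32, 3], dtype='float32')

# padding given as an algorithm string
y_same = fluid.layers.conv2d(img, num_filters=8, filter_size=3,
                             padding="SAME", data_format="NHWC")

# asymmetric padding: pairs per dimension, zeros on batch and channel
y_asym = fluid.layers.conv2d(img, num_filters=8, filter_size=3,
                             padding=[[0, 0], [1, 2], [1, 2], [0, 0]],
                             data_format="NHWC")
```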
* test=develop, argument shape support tensor and tensor in list
* test=develop, increase the coverage of CI tests
* test=develop, modify the document and update API.spec
* test=develop, modify the doc and update API.spec
* test=develop, modify the doc and update API.spec
* test=develop, modify the interface of UniformInitializer
* test=develop, modify the interface of XavierInitializer and MSRAInitializer
* test=develop, modify based on review's comments
* test=develop, modify based on review's comments
* test=develop, modify based on review's comments
* fix pool2d pool3d:
1. support asymmetric padding;
2. support padding algorithm: "SAME" and "VALID";
3. support channel_last: data_format NHWC and NDHWC;
4. support inferring shape when input with negative dims in compile time;
5. change doc of python API and c++;
6. fix bug in cuda kernel when Attr(adaptive) is true.
test=develop,test=document_preview
* fix 'tensors' to 'Tensors'. test=develop,test=document_preview
* add test for coverage of ValueError. test=develop,test=document_preview
* resolve conflict in test_pool2d. test=develop
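A hedged sketch of the corresponding pool2d usage with the new padding and data_format options (parameter values are illustrative):

```python
import paddle.fluid as fluid

feat = fluid.data(name='feat', shape=[None, 32, 32, 16], dtype='float32')

# padding algorithm plus channel_last layout
out_valid = fluid.layers.pool2d(feat, pool_size=2, pool_type='max',
                                pool_stride=2, pool_padding="VALID",
                                data_format="NHWC")

# asymmetric padding given per side: [top, bottom, left, right]
out_asym = fluid.layers.pool2d(feat, pool_size=3, pool_type='avg',
                               pool_padding=[1, 2, 1, 2],
                               data_format="NHWC")
```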
* test=develop, fix docker with paddle nccl problem
* test=develop, Add Variable api and refine dygraph related API
* test=develop, Add Variable api and refine dygraph related API
* test=develop, refine test for new api and error info
* test=develop, refine error info and test_layers
* test=develop, add API.spec
* test=develop, fix to_string python2 and python3 compat error and refine doc
* test=develop, add API spec
* test=develop, update API spec
* test=develop, update API spec
* test=develop, invoke ci
* test=develop, fix example code
* test=develop, update API spec
* test=develop, add compat test and fix inplace compat dict error
* improve error message when passing an ndarray with object dtype
* improve message format
* change assert to raise TypeError
* remind the user how to locate the irregular data instead of printing it
* add unittest for input array type check
* Fix conv2d+dequantize squash for residual fusion
test=develop
* Correct int8 input
test=develop
* Add if exclude or include padding in pool2d mkldnn
test=develop
The new "fluid.data" changes old "fluid.layers.data":
1. Add shape and dtype check.
2. Remove "append_batch_size" parameter. We won't offer this in the new data layer because other deep learning platforms don't have this kind of data layer pre-processing. It may confuse users.
3. Remove "stop gradient" parameter because the data layer doesn't do back-propagation
TODO:
Now data layer feeded by executor is checked, will we want to check the feed data of readers in the future?
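A hedged before/after sketch of the change (names and shapes are illustrative):

```python
import paddle.fluid as fluid

# old style: fluid.layers.data prepends a batch dimension automatically
old_x = fluid.layers.data(name='x_old', shape=[3, 224, 224], dtype='float32')

# new style: fluid.data takes the full shape, with None for the batch
# dimension, and the fed data is checked against both shape and dtype
new_x = fluid.data(name='x_new', shape=[None, 3, 224, 224], dtype='float32')
```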
* add kernel for fill_op, test=develop
* modify PADDLE_ENFORCE to PADDLE_ENFORCE_EQ, test=develop
* add op test for fill_op, test=develop
* REGISTER COP CUDA KERNEL, test=develop
* update test_fill_op.py, test=develop
* change FillConstantOpVarTypeInference to FillOpVarTypeInference, test=develop
* fix op test, test=develop
* add head file, test=develop
* add support of matmul with multiple head even with different width and height
Original matmul with multiple head supports only the case mat_a.width == mat_b.height;
in that case, mat_b will be horizontally split. In this patch, we extend the
support to mat_a.width != mat_b.height as long as mat_a.width / head_number == mat_b.height;
in this case, mat_b will be vertically split.
One example: A is [3, 8], B is [2, 16], head_number is 4. In this
case, A will be split as [3, 2], B will be (vertically) split as
[2, 4]. The final result will be 4 matrices of [3, 4], i.e. [3, 16].
test=develop
* refactor the code of matmul with multiple head even with different width and height
test=develop
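A small NumPy sketch of the vertical-split case described above (A is [3, 8], B is [2, 16], head_number is 4); it only illustrates the arithmetic, not the actual kernel:

```python
import numpy as np

head_number = 4
A = np.random.rand(3, 8)    # mat_a: width 8
B = np.random.rand(2, 16)   # mat_b: height 2 == 8 / head_number

# split A along its width and B along its width into head_number pieces
A_parts = np.split(A, head_number, axis=1)   # 4 matrices of [3, 2]
B_parts = np.split(B, head_number, axis=1)   # 4 matrices of [2, 4]

# multiply head by head and concatenate: 4 matrices of [3, 4] -> [3, 16]
out = np.concatenate([a @ b for a, b in zip(A_parts, B_parts)], axis=1)
assert out.shape == (3, 16)
```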
* Remove the constraint that the last dimension is forced to be 1 by adding
lookup_table_v2 test=develop
* modify into PADDLE_ENFORCE_CUDA_SUCCESS test=develop
* Revert "modify into PADDLE_ENFORCE_CUDA_SUCCESS test=develop"
This reverts commit 8a960bfc61e51aa27c3c529df8fb90b93ebd19f9.
* move api into fluid.embedding test=develop
* fix example code test=develop
* move one_hot into fluid.one_hot
* modify api.spec test=develop
* fix loss shape test=develop
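A hedged sketch of the new entry points: lookup_table_v2 is exposed as fluid.embedding and no longer requires a trailing dimension of 1, and one_hot moves to fluid.one_hot:

```python
import paddle.fluid as fluid

# ids keep their natural shape; no trailing [..., 1] dimension is required
ids = fluid.data(name='word_ids', shape=[None, 10], dtype='int64')
emb = fluid.embedding(input=ids, size=[10000, 128])   # -> [None, 10, 128]

labels = fluid.data(name='labels', shape=[None], dtype='int64')
one_hot = fluid.one_hot(input=labels, depth=10)       # -> [None, 10]
```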
* support change shuffle thread num
* support change train thread num
* fix receive shuffle data of each channel
* data norm stop gradient
* add check thread_tensor type and root_tensor type when merge metric
* remove sleep in shuffle, add config
* add config of pslib client to client communication
* fix xbox str
* add data norm op testcase
* add flush in trainer finalize
* make OpTest check grad inplace even if forward has no inplace, test=develop
* do not run PE when enable_inplace is False, test=develop
* add conv3d cuda kernel for float16 type, test=develop
* refactor OpTest for inplace, test=develop
* add comments, test=develop
* add recompute based checkpoints methods for large batch training
test=develop
* add append_backward_with_forward_recomputation
test=develop
* refine optimizer
test=develop
* update backward and optimizer
test=develop
* make Variable usable
test=develop
* add recompute code
* refine optimizer
test=develop
* refine addup _append_backward_ops_with_checkpoints_
1) for the recompute part, just cache the grad_op_desc without appending it to the block
2) before appending grad_op_desc to the backward part, call addup_repetitive_vars and remove the unused branch
test=develop
* make method private
* add recompute strategy into DistributedStrategy
test=develop
* checkpoint version3
test=develop
* remove some print information
test=develop
* remove unused sumop
test=develop
* try to fix recompute with graph building modules
* add input names to the vars that should be held
* add memory debug tool
* backup backward
* Fix bugs
* add backward desc for op not in any segments
* add exception info for sub_block
test=develop
* modify code style
test=develop
* modify code style
test=develop
* remove print functions
test=develop
* add API spec
test=develop
test=document_preview
* make Recompute a child class of Optimizer
test=develop
test=document_preview
* add API spec
test=develop
test=document_preview
* modify API spec
test=develop
test=document_preview
* add document for Recompute
test=develop
test=document_preview
* change API doc of Recompute
test=develop
test=document_preview
* code cleaning
test=develop
test=document_preview
* modify API spec
* fix bugs when segments hold no element
* add testcase for Recompute Optimizer
test=develop
test=document_preview
* add test for apply_gradient, and code cleaning
test=develop
test=document_preview
* add test case for load function
* enable CI
test=develop
test=document
* add test case
test=develop
test=document_preview
* add sample code for 4 function of recompute optimizer
test=develop
test=document_preview
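A hedged usage sketch of the recompute optimizer wrapping a base optimizer, with the checkpoint-setting call (_set_checkpoints) assumed from how the commits describe choosing checkpoints:

```python
import paddle.fluid as fluid

x = fluid.data(name='x', shape=[None, 32], dtype='float32')
label = fluid.data(name='label', shape=[None, 1], dtype='int64')

fc_1 = fluid.layers.fc(input=x, size=64, act='relu')
fc_2 = fluid.layers.fc(input=fc_1, size=64, act='relu')
pred = fluid.layers.fc(input=fc_2, size=2, act='softmax')
loss = fluid.layers.mean(fluid.layers.cross_entropy(input=pred, label=label))

sgd = fluid.optimizer.SGD(learning_rate=0.01)
# Wrap the base optimizer; activations between checkpoints are recomputed
# during backward instead of being stored for the whole forward pass.
recompute = fluid.optimizer.RecomputeOptimizer(sgd)
recompute._set_checkpoints([fc_1, pred])   # assumed checkpoint-selection API
recompute.minimize(loss)
```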
* move tree_conv to fluid.contrib.layers
test=develop
* update API.spec for tree_conv
test=develop
* update tree_conv api to increase unit coverage
test=develop
* refactor dygraph,test=develop
* fix failed unittest,test=develop
* polish code,test=develop
* check windows ci error,test=develop
try to fix windows ci error by np.allclose,test=develop
* polish vlog and profiler, test=develop
* try to fix preceding ops order,test=develop
* test transformer in windows ci, test=develop
* use python c-api to speed up tracer.trace,test=develop
* test=develop, fix docker with paddle nccl problem
* test=develop, add ut for debug string and gradient_accumulator
* test=develop, add tests for layer/gradient_accumulator/prepared_op
* test=develop, fix compile error for test_prepared_op
* test=develop, add more ut for dygraph
* test=develop, create API.spec for dygraph api change
* test=develop, refactor names to make them easier to understand
* test=develop, refactor names to make them easier to understand
* test=develop, fix multi-gpu failed problem, add Tracer tests, change PADDLE_ENFORCE to PADDLE_ENFORCE_EQ
* test=develop, fix ut failed on parallel se-resnext
* test=develop, change one more PADDLE_ENFORCE
* support auto prune in dygraph mode
* test=develop, support auto prune
* test=develop, merge develop conflict
* test=develop, fix test_layer and test_tracer ut
* test=develop, fix bug which may cause stop_gradient to be disabled with a list of backward inputs