* added support for inference using quantization-aware trained dygraph
correct boost get usage
* Delete incorrect warning message (#30196)
* fix warning and no grad
* clean redundant API alias in 2.0 - part 2 (#30013)
* delete paddle.nn.functional.assign
* fix dynamic to static error
* just add the op error message for the matmul xpu (#30246)
add the op error message for the matmul xpu
* Add Static Variable Clone (#30208)
Add a clone method for static Variable so that this interface is the same as in dygraph. This fixes some bugs in dy2stat.
* use wget to replace curl to download the lcov file (#30229)
* use wget to replace curl to download the lcov file
* add cache for lcov
* fix test_pool3d_op timeout issue (#30248)
* Fix unittests bugs. (#30250)
* modify error message based on comments (#30189)
* modify error message based on comments
* edit code according to review.
* Correct spelling according to review.
* Fix bug for 'save multiple method' (#30218)
* Fix bug for 'save multiple method'
* To pass coverage.
* edit code to pass coverage.
* add unittest for coverage.
* change for coverage.
* edit for coverage.
* added support for inference using quantization-aware trained dygraph
* Alias from paddle.fluid.layers.auc to paddle.static.auc (#30206)
* add alias from fluid.layers.auc to static.auc
* Update __init__.py
* added support for inference using quantization-aware trained dygraph
correct boost get usage
* corrected boost get usage
* corrected naming issues and enforcing zero check
* correct paddle enforce message
* added more error checks
* corrected error report message and optimized code
* corrected FindVar usage
* corrected PADDLE_ENFORCE usage in scope
* correct error messages
* correct error reporting format
Co-authored-by: LielinJiang <50691816+LielinJiang@users.noreply.github.com>
Co-authored-by: XiaoguangHu <46782768+XiaoguangHu01@users.noreply.github.com>
Co-authored-by: wawltor <fangzeyang0904@hotmail.com>
Co-authored-by: Huihuang Zheng <zhhsplendid@gmail.com>
Co-authored-by: YUNSHEN XIE <1084314248@qq.com>
Co-authored-by: Bai Yifan <me@ethanbai.com>
Co-authored-by: gongweibao <weibao.gong@gmail.com>
Co-authored-by: WeiXin <weixin10@baidu.com>
Co-authored-by: Jiaqi Liu <liujiaqi06@baidu.com>
The usleep function in <unistd.h> only accepts arguments less than 1,000,000. The current call can exceed this limit, so we have to fix it. This PR fixes a random CI error.
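A minimal sketch of the workaround, assuming a hypothetical helper named SleepMicroseconds (the actual fix in the PR may differ):

```cpp
#include <unistd.h>

// Hypothetical helper: POSIX only guarantees usleep() for arguments
// below 1,000,000, so a long wait is split into whole seconds via
// sleep() plus a sub-second remainder via usleep().
void SleepMicroseconds(unsigned long total_us) {
  if (total_us >= 1000000UL) {
    sleep(total_us / 1000000UL);  // whole seconds
  }
  usleep(total_us % 1000000UL);   // always < 1,000,000
}
```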
* set expected place in child thread for dataloader
* set device id when set tensor from numpy
* revert tensor_py change
* add compile guard
* fix ci
* fix bug
* add view strategy on squeeze,unsqueeze,reshape,flatten
* add squeeze unittest
* add unittests
* use View strategy as the name rather than Reuse Allocation
* fix view api doc
* fix format
* use core.ops when input of reshape2 is Tensor
* fix test_cross_entropy_loss error because of reshape2
* delete selected_rows
* change op_function
* little change
* solve HandleViewBetweenInputAndOutput
* add cast ops before and after unsupported fp16 ops.
* Keep partial net in FP32 pattern.
* Support check_finite_and_unscale and update_loss_scaling for FP16 calculation mode.
* Add fp16 support for adam op.
* add multi precision attr for adam.
* Fix the bug of test_multi_precision_fp16_train UT.
* Code format for CI.
* Fix the redefine error about MPTypeTrait on windows.
* fix bugs of the _create_accumulators func in Momentum.
* fix bug when inserting post cast op.
* Add the update_loss_scaling op in allow_set of UnusedVarCheck.
* Update for ci coverage.
* Add some doc for OptimizerWithMixedPrecision.
* Fix the code style.
* Improve the doc of `amp_init`.
* Change for fp16 testing if users have the infer program defined in a separate way.
* change to tensor copy sync
* make copy_to safe when using TensorCopy
* refine code
* add ut
* add CUDA pinned garbage collector
* add testcase: cpu place -> cuda pinned place
1. When slice_item is a slice:
   1) the start of __getitem__ should be std::max(start, 0);
   2) the end of __getitem__ should be std::min(end, dim_len).
2. When slice_item is an integer, it should be in [-dim_len, dim_len).
3. Fix the error message to report accurate data.
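An illustrative sketch of the clamping rules above (ClampSliceBounds and its parameter names are ours, not necessarily the PR's):

```cpp
#include <algorithm>
#include <cstdint>

// A negative bound first wraps around once; then start is clamped
// to >= 0 and end is clamped to <= dim_len, matching rules 1) and 2).
void ClampSliceBounds(int64_t dim_len, int64_t* start, int64_t* end) {
  if (*start < 0) *start += dim_len;
  if (*end < 0) *end += dim_len;
  *start = std::max(*start, static_cast<int64_t>(0));
  *end = std::min(*end, dim_len);
}
```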
* Register op version for linspace, test=op_version
* dot op support complex types
* matmul support complex types
* add test case
* matmul broadcast gradient support complex
* move conjFunctor to complex_functor.h
PADDLE_RETRY_CUDA_SUCCESS used the wrong sleep time, which could cause timeouts in unittests. This PR fixes it.
According to the doc at https://pubs.opengroup.org/onlinepubs/7908799/xsh/unistd.h.html, sleep in unistd.h takes seconds, usleep takes microseconds, and Sleep in windows.h takes milliseconds.
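One unit-safe alternative, shown only as a sketch (the PR itself may simply pass the correct value to the existing calls):

```cpp
#include <chrono>
#include <thread>

// std::this_thread::sleep_for spells out the time unit at the call
// site and behaves the same on Linux and Windows, avoiding the
// seconds/microseconds/milliseconds confusion described above.
void SleepMs(int ms) {
  std::this_thread::sleep_for(std::chrono::milliseconds(ms));
}
```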
* add xpu_coverage function
* xpu coverage ipipe only deal with xpu files
* fix import error
* fix format error
* fix error
* Revert "[inplace] Add ShareHolderWith for class Variable and SharePlaceholderWith in VarBase.detach() to share the same Tensor/SelectedRows (#29267)"
This reverts commit b10ecd9d3a.
* Support ShareInplaceVersionCounterWith to share the same inplace version counter for VarBase
* Add the ipipe log param prefix
1. add the prefix;
2. use a colon before the metric values.
* Add the efficiency cloud log metric collection prefix
Not yet verified whether this string replacement works correctly in the Windows bat script.
* Preserve The Old Format Metrics During The Transition Period
Please DELETE the old-format metrics log once the transition ends.
The period may last for a week.
* ipipe_log_param + ccache and clcache ..
1. Type of index: int, slice (step must be 1).
2. Type of value:
   (1) int32, int64, float32, bool;
   (2) numpy.array of int32, int64, float32, bool (note: float64 is not supported);
   (3) paddle.Tensor of int32, int64, float32, float64, bool.
* add heter box
* add trainer, worker, wrapper...
* format
* for ci
* format
* remove boost get
* boost & copyright
* rename
* format
Co-authored-by: yaoxuefeng6 <yaoxuefeng@baidu.com>
* add conj op for complex types
* add conj for complex types
* add more test case
* add conj_op test
* modify conj api and impl
* add complex type for fill_constant_op xpu
* add setConstant for complex type
* remove complex conj test file
* user-defined grad for test_conj_op
* add test case for static mode of conj api
* modify conj doc
* change input args name to x
* remove useless codes
* conj support real types
* add conj test case for real number
* delete inputs that need no calculation in dygraph op_test
* modify grad of mul for complex types
* fix the bug where the order of input args' grads did not match
* Test compilation time with less parallel count, notest, test=windows_ci
* optimize rules of Unity Build, notest, test=windows_ci, test=windows_op
* limit parallel counts used only on GPU, test=develop
* remove limit of argument /m:8 on Windows, test=develop
Modify CublasHandleHolder from using PADDLE_ENFORCE_CUDA_SUCCESS to PADDLE_RETRY_CUDA_SUCCESS to fix a random unittest failure. We checked that the unittest log showed a CUDA allocation error at this file, which may be due to insufficient GPU memory. We fixed similar failures in the past, so we applied PADDLE_RETRY_CUDA_SUCCESS here.
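A hypothetical sketch of the retry idea behind PADDLE_RETRY_CUDA_SUCCESS (RetryCudaMalloc and its retry count are our invention, not Paddle's actual macro):

```cpp
#include <chrono>
#include <thread>
#include <cuda_runtime.h>

// Re-issue a CUDA call a few times with a short sleep so that a
// transient allocation failure (e.g. temporary GPU memory pressure)
// does not abort the unittest immediately.
cudaError_t RetryCudaMalloc(void** ptr, size_t size, int retries = 5) {
  cudaError_t err = cudaMalloc(ptr, size);
  while (err != cudaSuccess && retries-- > 0) {
    cudaGetLastError();  // clear the sticky error state before retrying
    std::this_thread::sleep_for(std::chrono::milliseconds(10));
    err = cudaMalloc(ptr, size);
  }
  return err;
}
```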
* add complex real op & api & unittest
* add imag op & api & unittest
* refactor op impl
* revert simplified writing due to compile failure
* polish details
* polish grad op code
* fix expand && concat/transpose to new api
* update xpu_header
* update activation op on kunlun
* add nearest_interp on kunlun
* update error message
* added rule that UT should not exceed 15s
* fix error
* the 15s UT limit check is executed first
* fix error
* fix error with CI_SKIP_CPP_TEST
* modified timeout setting
* fix error