Commit Graph

390 Commits (develop)

Author SHA1 Message Date
Leo Chen 11e32baf1e
Add matmul_v2 to amp list (#28693)
6 years ago
cc d1e84f3e9e
Add some ops for calculating output scale, test=develop (#28644)
6 years ago
YUNSHEN XIE ba0756325a
exec ut in no more than 15s, part 1 (#28439)
6 years ago
Leo Chen 71d6220772
Skip reader op in mixed_precision decorator (#28353)
6 years ago
Chen Weihang 5d73bfdb98
fix test_weight_decay_extend error (#28178)
6 years ago
cnn 7c1aa0d69d
2.0rc api rename (#28088)
6 years ago
guofei 6bbb6e7f45
Implement the function of OutScaleForTraining/OutScaleForInference in dygraph (#26601)
6 years ago
WangXi 0a1862d1d2
fleet combine amp dgc recompute meta optimizer (#27643)
6 years ago
cc 8fabb1c32f
Add test attribute in channelwise_quant op, test=develop (#27742)
6 years ago
Chen Weihang 9b49f02441
Polish jit.save/load design & remove paddle.SaveLoadConfig (#27623)
6 years ago
LielinJiang 9089841b6e
Fix bilateral inference shape bug (#26822)
6 years ago
Wojciech Uss 966447e338
Added support for quantization of fusion_gru (#27518)
6 years ago
Chen Weihang f2c97b6da5
replace dataset with fake data (#27519)
6 years ago
YUNSHEN XIE 66951ab2ea
modified timeout value for 4 ut (#27462)
6 years ago
Zhang Ting 906e7f921e
add fuse_bn_act op (#27230)
6 years ago
pangyoki 827ac36faa
Use dygraph mode by default (#27443)
6 years ago
huangxu96 02606d45ef
Quant op dev (#25932)
6 years ago
Zhen Wang d28162b97f
Remove save_quantized_model in ImperativeQuantAware. (#27240)
6 years ago
cc 2d8281d5ad
Remove the cache in post_traning_quantization, test=develop (#26450)
6 years ago
Zhen Wang d708b21074
Update amp_check_finite_and_scale_op and add an updating_loss_scaling op for static graph amp training. (#26240)
6 years ago
Zhen Wang ece74c4cd4
Update the _get_fake_quant_type definition in imperative QAT. (#27222)
6 years ago
LielinJiang 8df5b4d608
Add correlation api to contrib (#27015)
6 years ago
Sylwester Fraczek eb65877ce0
fix dimensions error for mobilenetv1_KL_quant (#26776)
6 years ago
Zhen Wang bcdbac1753
fix some cast error. (#26884)
6 years ago
YUNSHEN XIE d8984a6b90
limit timeout value setting on linux (#26923)
6 years ago
qingqing01 f7fb4c2212
Move hapi to python/paddle root dir. (#26442)
6 years ago
Leo Chen 844583c8fd
Refine paddle.manual_seed (#26496)
6 years ago
Pei Yang e3f8e5cf5c
trt int8 support conv2d_transpose (#26636)
6 years ago
chalsliu dc56c89822
Add the option to execute unit tests only at night (#26669)
6 years ago
Aurelius84 f05613683f
[Dy2stat] Support InputSpec and Return callable class instance in @declarative (#25960)
6 years ago
YUNSHEN XIE a8b5741fb4
add a few unittests for setting timeout property (#26630)
6 years ago
cc 0d71cffd65
Add mnist test for post training quantization, test=develop (#26436)
6 years ago
YUNSHEN XIE 39fe0d35aa
find timeout unittests (#26371)
6 years ago
Pei Yang 379222c3f1
add output scale and trt op teller support for hard_swish and hard_sigmoid (#26499)
6 years ago
cc 3f816bc8b4
[Quantization] Conv2d_transpose and mul support channelwise quantization (#25639)
6 years ago
Zhou Wei 5017aa76e6
set default python3, fix incompatible, cache dir for third party, unify error code, for windows (#26178)
6 years ago
Pei Yang 9e9a569dae
add trt int8 support for elementwise_mul and scale (#25676)
6 years ago
yukavio f6ac5990aa
fix quant unit test (#25792)
6 years ago
Bai Yifan 2131559d08
Remove slim from paddle framework (#25666)
6 years ago
tangwei12 caa90a6510
Integrated Trainer of Parameter Server (API add `fluid.contrib.layers.sparse_embedding` only) (#22957)
6 years ago
cc 42189be67b
[Quant] Remove the output for moving_average_abs_max_scale op (#25697)
6 years ago
Chen Weihang 23d1228c4d
remove ProgramTranslator.save_inference_model (#25740)
6 years ago
yukavio c9285a18a0
saving inference model when user defines activation or weight preprocess function (#25749)
6 years ago
cc 650d7223bc
Fix test_quantization_scale_pass by change the model, test=develop (#25710)
6 years ago
Wojciech Uss 43f3d0cce3
Add an option to choose inference targets in Quant tests (#25582)
6 years ago
LielinJiang 7129f544f0
Add bilateral_slice op (#25401)
6 years ago
YUNSHEN XIE 3e45d44d0c
disable unittest test_user_defined_quantization,test=develop (#25451)
6 years ago
Zhen Wang ee44bcddd8
add more unit tests for imperative qat. test=develop (#25486)
6 years ago
Zhen Wang 548cdbc544
Quantization-aware training for dygraph (#24634)
6 years ago
cc 5c8e79956e
Use the specified scope in post quant, test=develop (#25384)
6 years ago
cc 22720a1535
Fix post quant save bug, test=develop (#25370)
6 years ago
Wojciech Uss d0a921ba98
Quant2 updates and fixes (#25313)
6 years ago
cc d8f4714bc1
[Quantization] Save output threshold by argname_index (#25272)
6 years ago
Wojciech Uss 23a4f54b73
rename qat into quant (#24948)
6 years ago
Wojciech Uss 56fa3880e3
rename qat into quant in filenames only (#25194)
6 years ago
iducn f282599229
disable unittest for gcc8 (#25134)
6 years ago
cc 8fc31d501b
Support conv2d_transpose quantize, test=develop (#25084)
6 years ago
Liufang Sang b174b99764
support user defined quantization func and preprocess (#24720)
6 years ago
cc 75eec3d1f6
Post training quantization supports optimizing models by fusing (#24822)
6 years ago
Wojciech Uss 78d4f0cc91
add option to exclude ops by id from quantization (#24689)
6 years ago
cc dbcd7c69e9
Update sigmoid output from Y to out, test=develop (#24765)
6 years ago
cc 88e9d74a75
Collecting concat output threshold, test=develop (#24742)
6 years ago
ShenLiang 950892044f
fix conflict, test=develop (#24238)
6 years ago
cc 6c89ca2157
Add output threshold for ops that have several output activations, test=develop (#24726)
6 years ago
lidanqing 8ef3c02e90
Update DNNL QAT document 2.0-alpha (#24494)
6 years ago
cc 4d35112255
[Fix bug] Init scale node in OutScaleForTrainingPass and enable test_quantization_scale_pass UT (#24393)
6 years ago
joanna.wozna.intel 53125c2f6f
Model converter to dot file (#23169)
6 years ago
Wojciech Uss db052009c7
Enabled quantize all and skip missing in QAT (#24281)
6 years ago
Leo Chen 381492fca3
add try finally, test=develop (#24243)
6 years ago
lidanqing 61ec30f030
Update QAT INT8 2.0 doc (#24127)
6 years ago
Sylwester Fraczek e1a7a88057
added reshape transpose matmul fuse pass (#23754)
6 years ago
ShenLiang 0fb9b208ab
Add batch_fc op in contrib (#24017)
6 years ago
arlesniak d31a174f51
added fusing matmul-transpose-reshape pass (#23866)
6 years ago
Wojciech Uss 3d744162dd
QAT: support for new models (#23928)
6 years ago
zhangchunle 6bd200db66
remove high level api (#23854)
6 years ago
ShenLiang 30bd7e1c83
Add rank_attention_op attributes for GPU memory in contrib (#23915)
6 years ago
cc 40aa14ec77
Weight quantization support channel_wise_abs_max method to achieve higher accuracy (#23629)
6 years ago
mapingshuo f0e743f136
fix AMP and recompute (#23551)
6 years ago
joanna.wozna.intel 12ba05ce0c
Add scale-matmul fuse pass (#23734)
6 years ago
Wojciech Uss 2383a9f7ee
[Doc update] Update for QAT INT8 MKL-DNN document (#23361)
6 years ago
Chengmo 8c0bdde934
Add Tdm sampler op in Contrib (#23290)
6 years ago
Wojciech Uss 1753860dd0
Enable matmul and cleanup in QAT2 (#23657)
6 years ago
silingtong123 cec234b1aa
test=develop, error message of tree_conv OP enhancement (#23574)
6 years ago
cc 25628587f1
Collect output scale for quantized op and fused op (#23369)
6 years ago
Bai Yifan 9bc223c8a2
fix test_graph_wrapper failure on cudnnv7, test=develop (#23451)
6 years ago
ShenLiang c706ff20a3
fix conflict, test=develop (#23298)
6 years ago
Chengmo a2e9af5663
Add Tdm child OP in contrib (#23241)
6 years ago
cc 3ea7c59f76
Set fuse_all_reduce_ops=false for quantization test, test=develop (#23413)
6 years ago
cc 7c55a94de5
Disable test_quantization_scale_pass unittest for random error, test=develop (#23441)
6 years ago
Yiqun Liu bc2981e998
Disable test_code_generator and test_post_training_quantization_mobilenetv1 (#23440)
6 years ago
Wojciech Uss 9fd9067455
handle conv2d activations in older QAT models (#23202)
6 years ago
Wojciech Uss be2ac9cc3a
separated QAT1 and QAT2 (#23284)
6 years ago
lidanqing c524b930e7
Update QAT INT8 related code (#23104)
6 years ago
Wojciech Uss f836c8aa8f
add check for scales and a message (#23119)
6 years ago
cc bd80903333
Add activation_type in AddQuantDequantPass to be compatible with paddleslim, test=develop (#23221)
6 years ago
cc 589cd8782f
Post_training_quantization supports min_max method (#23078)
6 years ago
lidanqing 432a4b2789
Changes QAT MKL-DNN documents (#22840)
6 years ago
cc b6717faf80
Added an option to use external FP32 model in QAT comparison test (#22858)
6 years ago
Sylwester Fraczek 5ff2439f51
fixed saving qat2 model for resnet50 and ernie (#22822)
6 years ago
hong f05c213f98
fix basic gru lstm parameter attr bug; test=develop (#22508)
6 years ago