Commit Graph

122 Commits (87197f8c2e4d002fc39027c3d4ee99f4ead0ba2c)

Author  SHA1  Message  Date
yingshengBD  0eea5d714f  post quantization: support inserting fake_quantize_dequantize nodes before the OPs used in VIS's faceid models (#30659)  4 years ago
guofei  430f8449f1  Fix the error of save_quantized_model (#30583)  4 years ago
cc  ce6777fcdf  Fix bug of supporting channelwise dygraph quantized model, test=develop (#30531)  4 years ago
cc  5d8d463cf7  Collect weight threshold for lstm op in post_training_quantization (#28701)  4 years ago
cc  8e3a294045  skip quantizing ops in cpu inference (#30342)  4 years ago
Bai Yifan  ad6fee2fa8  fix quantize error in special naming model (#30354)  4 years ago
huangxu96  ee623bff64  Implemented AddQuantDequantPass in imperative quantization. (#26692)  4 years ago
guofei  1bdf924217  Quantization supports 2.0 APIs (#30036)  4 years ago
wangchaochaohu  7dd551e08b  refine the paddle place support to accept str (#28769)  4 years ago
cc  1fa863da40  Support dygraph quant model (#29927)  4 years ago
cc  62f455e023  Support quantizing program_desc (#29526)  5 years ago
Wojciech Uss  917a11495f  fix infinite scale values (#29386)  5 years ago
Wojciech Uss  4fd4095d1b  Add quantization of multi_gru op and tests (#28615)  5 years ago
guofei  638402274a  Integrate ImperativeOutScale into ImperativeQuantAware. (#27956)  5 years ago
huangxu96  40f5453725  Quant nn2.0 (#28764)  5 years ago
Leo Chen  3815d7aa40  Upgrade string literals to raw string (#28989)  5 years ago
Bai Yifan  5050e761b8  Support user-defined activation/weight quantize and preprocess. (#28570)  5 years ago
cc  d1e84f3e9e  Add some ops for calculating output scale, test=develop (#28644)  5 years ago
guofei  6bbb6e7f45  Implement the function of OutScaleForTraining/OutScaleForInference in dygraph (#26601)  5 years ago
cc  8fabb1c32f  Add test attribute in channelwise_quant op, test=develop (#27742)  5 years ago
Wojciech Uss  966447e338  Added support for quantization of fusion_gru (#27518)  5 years ago
huangxu96  02606d45ef  Quant op dev (#25932)  5 years ago
Zhen Wang  d28162b97f  Remove save_quantized_model in ImperativeQuantAware. (#27240)  5 years ago
cc  2d8281d5ad  Remove the cache in post_training_quantization, test=develop (#26450)  5 years ago
Zhen Wang  ece74c4cd4  Update the _get_fake_quant_type definition in imperative QAT. (#27222)  5 years ago
Sylwester Fraczek  eb65877ce0  fix dimensions error for mobilenetv1_KL_quant (#26776)  5 years ago
qingqing01  f7fb4c2212  Move hapi to python/paddle root dir. (#26442)  5 years ago
Pei Yang  e3f8e5cf5c  trt int8 support conv2d_transpose (#26636)  5 years ago
Aurelius84  f05613683f  [Dy2stat] Support InputSpec and Return callable class instance in @declarative (#25960)  5 years ago
Pei Yang  379222c3f1  add output scale and trt op teller support for hard_swish and hard_sigmoid (#26499)  5 years ago
cc  3f816bc8b4  [Quantization] Conv2d_transpose and mul support channelwise quantization (#25639)  5 years ago
Pei Yang  9e9a569dae  add trt int8 support for elementwise_mul and scale (#25676)  5 years ago
Bai Yifan  2131559d08  Remove slim from paddle framework (#25666)  5 years ago
cc  42189be67b  [Quant] Remove the output for moving_average_abs_max_scale op (#25697)  5 years ago
Chen Weihang  23d1228c4d  remove ProgramTranslator.save_inference_model (#25740)  5 years ago
yukavio  c9285a18a0  save the inference model when the user defines an activation or weight preprocess function (#25749)  5 years ago
Zhen Wang  548cdbc544  Quantization-aware training for dygraph (#24634)  5 years ago
cc  5c8e79956e  Use the specified scope in post quant, test=develop (#25384)  5 years ago
cc  22720a1535  Fix post quant save bug, test=develop (#25370)  5 years ago
Wojciech Uss  d0a921ba98  Quant2 updates and fixes (#25313)  5 years ago
cc  d8f4714bc1  [Quantization] Save output threshold by argname_index (#25272)  5 years ago
Wojciech Uss  23a4f54b73  rename qat into quant (#24948)  5 years ago
Wojciech Uss  56fa3880e3  rename qat into quant in filenames only (#25194)  5 years ago
cc  8fc31d501b  Support conv2d_transpose quantize, test=develop (#25084)  5 years ago
Liufang Sang  b174b99764  support user defined quantization func and preprocess (#24720)  5 years ago
cc  75eec3d1f6  Post training quantization supports optimizing the model by fusing (#24822)  5 years ago
Wojciech Uss  78d4f0cc91  add option to exclude ops by id from quantization (#24689)  5 years ago
cc  dbcd7c69e9  Update sigmoid output from Y to out, test=develop (#24765)  5 years ago
cc  88e9d74a75  Collecting concat output threshold, test=develop (#24742)  5 years ago
cc  6c89ca2157  Add output threshold for ops that have several output activations, test=develop (#24726)  5 years ago