Commit Graph

138 Commits (develop)

Author | SHA1 | Message | Date
Pei Yang | 14b7e3cf06 | [Paddle-TRT] TRT inference support for BERT/Transformer in paddle 2.0 api (#31744) | 5 years ago
alncat | bfb8a64234 | updated conv bn fuse pass to make it compatible with latest batch_norm op (#31272) | 5 years ago
joanna.wozna.intel | 781df300d0 | Unification of BF16 enablement process (#31034) | 5 years ago
joanna.wozna.intel | caf9d39839 | Add Conv Transpose BF16 (#30877) | 5 years ago
wanghuancoder | 35c5b23f68 | use iwyu clean include second time, test=develop (#30829) | 5 years ago
Adam Osewski | 4f066e316e | Layer normalization fuse pass. (#30721) | 5 years ago
alncat | 5ace20fc3f | modified conv+bn fuse pass to fix wrong mask in mask rcnn (#30704) | 5 years ago
alncat | 7bbf3ac5ab | Added support for inference using quantization aware trained dygraph (#30288) | 5 years ago
cc | 6a0102b038 | map matmul/squeeze2+matmul/reshape2+matmul to mul (#29911) | 5 years ago
jakpiase | edc06c6a1b | Added fc + activation fuse pass (currently only gelu, sigmoid and tanh are supported) (#29772) | 5 years ago
Wojciech Uss | 4fd4095d1b | Add quantization of multi_gru op and tests (#28615) | 5 years ago
joanna.wozna.intel | b0d1ac161e | Add bf16 pool2d and unify bf16 unit tests (#29039) | 5 years ago
joanna.wozna.intel | fddea67445 | Fix cpu_bfloat16_pass (#28730) | 5 years ago
Wojciech Uss | 7b5a8e46de | Add multi_gru_fuse_pass and tests (#28601) | 5 years ago
Wojciech Uss | 991345b368 | Add multi_gru_seq_fuse_pass and tests (#28604) | 5 years ago
joanna.wozna.intel | 8c0ea4bffe | Add bf16 matmul, fc, elementwise add and mul (#28729) | 5 years ago
Jacek Czaja | 6d8d3d4c22 | [oneDNN] Layer norm bf16 kernel (#28619) | 5 years ago
joanna.wozna.intel | 7821759d48 | Add bfloat16 softmax and gelu (#28394) | 5 years ago
Jacek Czaja | ca41541472 | [oneDNN] Sum bf16 kernel (#28382) | 5 years ago
joanna.wozna.intel | 571a63e7ec | Add bf16 transpose2, reshape2, concat ops (#28195) | 5 years ago
Zhang Ting | fdc06f2158 | add Fuse bn add act pass (#28196) | 6 years ago
Adam Osewski | 7db747d9e8 | oneDNN BatchNorm + Act fusion pass. (#27912) | 6 years ago
Jacek Czaja | 606611d351 | [oneDNN] GRU BF16 kernel (#27731) | 6 years ago
Wojciech Uss | 966447e338 | Added support for quantization of fusion_gru (#27518) | 6 years ago
joanna.wozna.intel | b0ee1405f7 | Add conv2d bfloat16 support (#27325) | 6 years ago
joanna.wozna.intel | 1483ea2304 | Add bfloat16 passes (#26999) | 6 years ago
joanna.wozna.intel | eb097d64f6 | Fix int8 performace drop cpu_quantize_placement_pass (#26715) | 6 years ago
Zhaolong Xing | 7b7e605189 | [Fix BUGs]: fix multhead matmul pass's instable bug (#25123) | 6 years ago
Pei Yang | b2f5a149e7 | [Paddle-TRT] Better Paddle-TensorRT support for PaddleSlim quant models (#25097) | 6 years ago
Jacek Czaja | a7944904d3 | [oneDNN] elementwise_add and elementwise_mul int8 support (#24984) | 6 years ago
Chen Weihang | aa0f254fbe | Add macro BOOST_GET to enrich the error information of boost::get (#24175) | 6 years ago
joanna.wozna.intel | 356f5ee220 | [Refactoring] Unify op-dequant squashes (#24277) | 6 years ago
joanna.wozna.intel | b43b46e619 | [INT8] Add requant-op squash (#24143) | 6 years ago
Sylwester Fraczek | e1a7a88057 | added reshape transpose matmul fuse pass (#23754) | 6 years ago
Jacek Czaja | 461e6a01ec | [DNNL] activations Inplace support (#24123) | 6 years ago
arlesniak | d31a174f51 | added fusing matmul-transpose-reshape pass (#23866) | 6 years ago
Jacek Czaja | c6c65c65c7 | [DNNL] Added elementwise_add mkl-dnn inplace (#23477) | 6 years ago
joanna.wozna.intel | 12ba05ce0c | Add scale-matmul fuse pass (#23734) | 6 years ago
chenhaoze | 9b06dd8628 | Add three passes and api reference of paddle_pass_builder. test=develop (#23741) | 6 years ago
joanna.wozna.intel | 5ee099ca57 | Op-requant squash (#23665) | 6 years ago
joanna.wozna.intel | 3cb5623dad | Add matmul dequant squash (#23505) | 6 years ago
joanna.wozna.intel | ce08fdcf2b | Add support for INT8 matmul in C-API quantization (#23463) | 6 years ago
Jacek Czaja | 2bb1b0e89e | [DNNL] Added MKL-DNN inplace pass for C-API inference (#23315) | 6 years ago
tianshuo78520a | d2ba91aad1 | fix typo words (#22653) | 6 years ago
joanna.wozna.intel | 17f2c0899f | Add dequant-scale squash (#22409) | 6 years ago
石晓伟 | e1b0d7cbb1 | remove anakin from code, test=develop (#22420) | 6 years ago
Zhen Wang | e40cfb1010 | fix the bug of assert_is_op_output. test=develop (#22262) | 6 years ago
Zhen Wang | 46189b166d | Add bn and relu fuse pass (#22048) | 6 years ago
joanna.wozna.intel | 5b2e98aa17 | Add multiple quantize operators fuse (#22062) | 6 years ago
liu zhengxi | 724b13e459 | fix xception precision problem, test=develop (#22124) | 6 years ago