Commit Graph

8243 Commits (e784884e70ecba4a419db785579a579ef31c2208)

Author SHA1 Message Date
dongdaxiang 317eb0aad3 add incubate for unified API
6 years ago
dongdaxiang 3641a78b01 add incubate for unified API
6 years ago
dongdaxiang e657c127a8 hide opt_info in distirbuted optimizer
6 years ago
xujiaqi01 ecfc7df913 add dataset factory && fix style
6 years ago
xujiaqi01 3cea00bd52 store memory data in Dataset && fix bug
6 years ago
dongdaxiang ff87698a44 refactor downpour optimization
6 years ago
dongdaxiang b66f0074b6 fix data reading bugs in api, add VLOG(3) log for setup
6 years ago
dongdaxiang 71aa307ebe make Dataset* as an argument
6 years ago
dongdaxiang b415ec27e8 make Dataset* as an argument
6 years ago
xjqbest dd67ad08a2 modify c++ and python dataset related code & fix bug
6 years ago
heqiaozhi 9bca1926c1 refactor & fix bug
6 years ago
heqiaozhi 8de4d31a5b refactor async exe
6 years ago
dongdaxiang c28bbdf8ba add dataset_generator.py
6 years ago
dongdaxiang 687cb79dbb add pipe command io interface
6 years ago
dongdaxiang 1fe54416c9 move fs.cc and shell.cc into paddle/fluid/framework/io
6 years ago
dongdaxiang 6de9ebc65c refine VLOG in fleet_wrapper.h
6 years ago
dongdaxiang c165012031 refine device_worker and trainer code
6 years ago
dongdaxiang 8a335b50be add downpour device_worker pb configuration
6 years ago
dongdaxiang caf0c10e71 add dist_multi_trainer for distributed training, add trainer_factory and device_worker_factory so that we can easily extend new training mode, add pull dense worker which is a singleton for parameter fetching
6 years ago
lujun d4f63d82ac Merge pull request #16475 from junjun315/fix-doc-multiplex
6 years ago
Qiao Longfei d8974e6da0 Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into add-async-ssa-graph-executor-communicator
6 years ago
lujun de605cc0fc Merge pull request #16523 from junjun315/tensor_api
6 years ago
wanghaoshuang d41b623a72 Merge branch 'develop' of https://github.com/PaddlePaddle/paddle into quan_ck
6 years ago
wanghaoshuang 6db7c2a500 Fix checkpoint of quantization.
6 years ago
chengduo 1096746cbf Fuse Adam And SGD ops (#15933)
6 years ago
minqiyang 9e14f260c0 Fix polynomal decay bug in python2.x
6 years ago
Zhen Wang f7f5044b3d Merge pull request #16489 from wzzju/fix_slim_quant_bugs
6 years ago
lujun 1c9aaeebe0 move imperative to dygraph, test=develop
6 years ago
minqiyang 42507d33c6 Change atol to default value
6 years ago
dengkaipeng 193185b840 Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into shift
6 years ago
dengkaipeng 8a0023892a fix unittest. test=develop
6 years ago
minqiyang 48f3cbdf55 Polish code
6 years ago
whs 59f75ec76e Make unitest of fsp op faster and more stable. (#16502)
6 years ago
Zhen Wang 46e1bb06c7 remove no necessary doc changes. test=develop
6 years ago
whs ecc3088df8 Fix saving in quantization strategy. (#16474)
6 years ago
Zhen Wang f86429dbd9 Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into fix_slim_quant_bugs
6 years ago
Zhen Wang 5ab5687138 remove no necessary doc changes. test=develop
6 years ago
minqiyang 35c89f38c3 Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into imperative_lr_scheduler
6 years ago
gongweibao eb83abeac3 Add DGC(Deep Gradient Compression) interface. (#15841)
6 years ago
Jiabin Yang e41d581304 test=develop, fix space_to_depth_doc (#16293)
6 years ago
whs 679a4c28fc Fix lost of learning rate variable in distillatoin when using lr decay. (#16471)
6 years ago
Qiao Longfei b68f84090b fix test_split_selected_rows_op test=develop
6 years ago
Zeng Jinle c7c6eeb44e Merge pull request #16409 from sneaxiy/feature/advance_gc
6 years ago
Zhen Wang 6b854f3e1f fix the save_in_nodes bug.
6 years ago
Zhen Wang 183bacebe3 clean codes and fix some bugs. test=develop
6 years ago
Jiabin Yang 54a73578a8 Feature/install check (#16044)
6 years ago
chengduo 999365149d Add from six.moves import reduce (#16435)
6 years ago
minqiyang a71a0f865b Polish code
6 years ago
minqiyang 99128a5c72 Implement Cosine and Noam Decay
6 years ago
wopeizl c300b1ba69 Tensor index (#16223)
6 years ago
minqiyang ec9c0874bc Implement Expotential NatureExp Inversetime and Polynomal Decay
6 years ago
Jiabin Yang 0d9d25d40f Feature/refactor layers to Layers (#16337)
6 years ago
gongweibao 850b737112 Fix nparray.all() bug. (#16472)
6 years ago
dengkaipeng eb2123e12d fix doc and jit. test=develop
6 years ago
Tao Luo 1b4e4e7ef7 Merge pull request #16453 from chuanqi129/calibration_readme_refine
6 years ago
liuwei1031 8d22bc17a4 Memory optimize (#16410)
6 years ago
Xin Pan f8c279b11c Merge pull request #16454 from panyx0718/imperative2
6 years ago
lujun 3f8b2f5ff5 fix multiplex doc, test=develop
6 years ago
Qiao Longfei d640c6cfa9 fix pylint
6 years ago
Qiao Longfei 30618409db Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into add-async-ssa-graph-executor-communicator
6 years ago
sneaxiy 78fb3a62e0 fix env variable settting bug
6 years ago
minqiyang 4278be8c49 Merge branch 'imperative_lr_scheduler' of https://github.com/velconia/Paddle into imperative_lr_scheduler
6 years ago
minqiyang b5bbb13ac1 Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into imperative_lr_scheduler
6 years ago
dengkaipeng 7920e3be02 revert test_softmax_cudnn. test=develop
6 years ago
chuanqiw c512516ff4 Update INT8 calibration README
6 years ago
Zhen Wang 27d05203e7 Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into fix_quan_hang
6 years ago
Zhen Wang 1c11f817e9 Use the resolve hazard method.
6 years ago
Jiabin Yang 7c5319ba12 Fix/test imperative ptb rnn (#16433)
6 years ago
Jiabin Yang f735102eab add layer norm to Layers, add transformer test in imperative mode (#16092)
6 years ago
Xin Pan fd24ab47ab polish
6 years ago
Xin Pan 1f89249a95 update DeepCF model
6 years ago
sneaxiy a7d0ac50b8 Merge develop
6 years ago
sneaxiy 7000ec85d9 fix some op grad maker
6 years ago
zhaoyuchen2018 cdb315e9d8 Merge branch 'develop' into docrefine
6 years ago
Xin Pan 0fff666f14 Merge pull request #16449 from panyx0718/imperative3
6 years ago
Xin Pan becf799431 fix
6 years ago
Wang, Chuanqi 85e1cc1e02 Update Readme with new accuracy and performance data measured on 6271 (#16437)
6 years ago
Zeng Jinle 4cc9809cae Merge pull request #15799 from sneaxiy/feature/decoupled_reader
6 years ago
whs e9bec9369b [slim] Add quantization strategy and distillation strategy. (#16408)
6 years ago
Zhen Wang 2ccbfd5e10 Fix some bugs for quantization passes.
6 years ago
liuwei1031 de3b70a101 fix cdn issue, test=develop (#16423)
6 years ago
sneaxiy f8ed2c229e try to fix ci error
6 years ago
Zeng Jinle c64d959343 Merge pull request #16295 from zhhsplendid/zhenghuihuang-dev-2
6 years ago
Xin Pan b55dd32e9c Merge pull request #16394 from panyx0718/imperative2
6 years ago
sneaxiy 2f54d9f995 Merge develop
6 years ago
sneaxiy 072d95d8f6 Merge develop
6 years ago
sneaxiy a93a9eef8f add op registry type
6 years ago
Tao Luo f9061796d6 Merge pull request #16407 from chuanqi129/test_calibration_enhance
6 years ago
chengduo c917c13af1 increase the time limite (#16405)
6 years ago
chuanqiw 431068c9ca Enhance test calibration script on accuracy assert
6 years ago
whs 2e5831f0dc [slim] Refine framework of slim and add filter pruning strategy (#16226)
6 years ago
whs 18779b5b8f [Operator] Add range op. (#15431)
6 years ago
Hongyu Liu 466e150f28 Merge pull request #16380 from phlrain/add_var_name_in_opt_2
6 years ago
qingqing01 5d6737b5cb Fix bug in affine_channel API (#16373)
6 years ago
phlrain 6b971e1f19 remove test_dist_transplier; test=develop
6 years ago
phlrain 7dc4a7f4f8 Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into add_var_name_in_opt_2
6 years ago
phlrain d11d0e18c2 remove test_dist_transplier; test=develop
6 years ago
Xin Pan 55a7b98126 Add DeepCF model
6 years ago
Zhen Wang ec11135d54 Merge pull request #16341 from wzzju/add_channel_wise_in_quant_pass
6 years ago
xiaolil1 e235882c18 Enable MKL-DNN INT8 Concat Kernel. (#16156)
6 years ago
Qiyang Min 171df5b56b Merge pull request #16303 from junjun315/checkpoint
6 years ago
Hongyu Liu e5478ab5c8 Merge pull request #16346 from phlrain/add_floordiv_and_mod
6 years ago
phlrain 77a08750e9 add var name in optimizer; test=develop
6 years ago
chengduo 33965527fd Add unit test for fuse all reduce (#16354)
6 years ago
phlrain 5dc9b51994 fix time; test=develop
6 years ago
phlrain 686b8935fe Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into add_floordiv_and_mod
6 years ago
Hongyu Liu 8c81d9949e Merge pull request #16347 from phlrain/fix_matmul_check
6 years ago
lujun ac32bf6f77 update input params type, test=develop
6 years ago
qingqing01 d2b938ef5a Refine gradient proto maker and python API for affine_channel_op (#16340)
6 years ago
phlrain 0e40298949 fix matmul shape check; test=develop
6 years ago
phlrain 56c2d384c7 add elementwise floordiv, mod; test=develop
6 years ago
ruri 09e05a110b Merge pull request #16217 from ceci3/doc
6 years ago
zhhsplendid 124f1df481 Add flags for init and re-alloc gpu
6 years ago
Zhen Wang 8965819fbb rewrite the cuda kernels of channel_wise_quant_op and channe_wise_dequant_op. test=develop
6 years ago
Wu Yi 8bebfe5640 add resnet nccl2 dist training, mp training unit test (#16167)
6 years ago
flame 08838f3909 Fix save inference model bug (#16242)
6 years ago
baojun 2de263a5d9 Add softmax_with_cross_entropy_op to ngraph engine (#16304)
6 years ago
sneaxiy bb166a1e10 fix API.spec
6 years ago
chengduo f26ba5bddd Fuse AllReduce (#15921)
6 years ago
dengkaipeng 93701dba50 add jit kernel for softmax axis. test=develop
6 years ago
Wu Yi 6382b62f6b Collective ops (#15572)
6 years ago
lujun bed0ecf3d2 checkpoint pr be moved here, test=develop
6 years ago
Zhen Wang ec88b6cc5a add channel wise quantization in ir pass.
6 years ago
whs 18911b6eea [enhence] Make step_input of dynamic_rnn support custom lod level. (#15972)
6 years ago
zhhsplendid 22715487dc add allocator flags
6 years ago
ceci3 27f7a72641 Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into doc
6 years ago
ceci3 3f5f5ed361 fix dropout doc
6 years ago
Zeng Jinle f8df9eb32e fix api doc (#16201)
6 years ago
sneaxiy 3a09693f5c change API name
6 years ago
Yibing Liu 7e20e7691e Fix the bug in fp16 backward kernel (#16269)
6 years ago
dengkaipeng 8b88960dce fix doc. test=develop
6 years ago
dengkaipeng 2ddd23dac8 fix format. test=develop
6 years ago
dengkaipeng 365e6cfd15 add mkldnn support. test=develop
6 years ago
dengkaipeng 217db27337 add mkldnn support. test=develop
6 years ago
dengkaipeng 6cb66721d2 add cudnn support. test=develop
6 years ago
sneaxiy 161b8ddcaa Merge develop
6 years ago
xiaolil1 e818fa1004 Enable INT8 transpose kernel for MobileNet-SSD improvement. (#16159)
6 years ago
Xin Pan 374abcf361 Merge pull request #16247 from panyx0718/imperative
6 years ago
tangwei12 8ea4218ce1 update load persistables for increment, test=develop (#15576)
6 years ago
Qiyang Min 8e4ad008fb Merge pull request #16198 from velconia/imperative_train_speed
6 years ago
Xin Pan 3e9319f3ab add more imperative layer tests.
6 years ago
Qiao Longfei 039d783db5 change communicator_recv_wait_ms to communicator_max_send_grad_num_before_recv
6 years ago
Xin Pan 7458114b5b Merge pull request #16228 from panyx0718/imperative
6 years ago
dengkaipeng a6daf6fe5f add doc param name. test=develop
6 years ago
sneaxiy 4b073c95dc fix compiler
6 years ago
Tao Luo 38898c2808 Merge pull request #16212 from Aurelius84/develop
6 years ago
Kaipeng Deng b77ebb2af2 Merge pull request #15919 from heavengate/yolo_box
6 years ago
Xin Pan 3be7e971ab polish
6 years ago
Xin Pan 50ff898378 graph neural network for imperative mode
6 years ago
achao2013 81b4fad8b9 add moving average absmax op and fix bug (#15155)
6 years ago