Commit Graph

2192 Commits (cc4def6ba5a640845ee5b2cf84d1366837aab118)

Author SHA1 Message Date

dongdaxiang cc4def6ba5 fix some conflict for compilation (6 years ago)
heqiaozhi 9bca1926c1 refactor & fix bug (6 years ago)
xjqbest 2e9a836c6f add DataSet and InMemoryDataFeed, support load data into memory and shuffle data (6 years ago)
dongdaxiang 2486389793 add RunFromDataset in executor (6 years ago)
dongdaxiang e36bbcc871 fix some typo and CMakefile.txt (6 years ago)
xjqbest 824b84d185 add DataSet and InMemoryDataFeed, support load data into memory and shuffle data (6 years ago)
dongdaxiang 08c25995a2 add run from dataset in executor. (6 years ago)
dongdaxiang c28bbdf8ba add dataset_generator.py (6 years ago)
dongdaxiang be757096da add pybind for fleet (6 years ago)
dongdaxiang 687cb79dbb add pipe command io interface (6 years ago)
dongdaxiang 1fe54416c9 move fs.cc and shell.cc into paddle/fluid/framework/io (6 years ago)
dongdaxiang 53fbab5d33 add fs_local_open example (6 years ago)
dongdaxiang afaf937010 add fs_local_open example (6 years ago)
dongdaxiang cf1360643f add printer for fetch variable (6 years ago)
dongdaxiang d65cb13ad5 add pslib flag on fleet_wrapper CMakefile (6 years ago)
dongdaxiang 6de9ebc65c refine VLOG in fleet_wrapper.h (6 years ago)
dongdaxiang 97d5cd30f0 make pull dense worker work (6 years ago)
dongdaxiang 39014b9f9f fix class register problem (6 years ago)
dongdaxiang f0dd1201cc fix destructor problem (6 years ago)
dongdaxiang f2bde9c241 fix destructor problem (6 years ago)
dongdaxiang 54f047a126 fix ngraph compile option (6 years ago)
dongdaxiang dd1dc9bcf0 add common.h.in back (6 years ago)
dongdaxiang 378037c535 make s_instance_ private to ensure singleton (6 years ago)
dongdaxiang a446d26e8a add todo for asynce executor (6 years ago)
dongdaxiang c165012031 refine device_worker and trainer code (6 years ago)
dongdaxiang 8a335b50be add downpour device_worker pb configuration (6 years ago)
dongdaxiang 24a8001142 make -DWITH_PSLIB=ON compilable (6 years ago)
dongdaxiang 67b1d6d721 add dist_multi_trainer for distributed training, add trainer_factory and device_worker_factory so that we can easily extend new training mode, add pull dense worker which is a singleton for parameter fetching (6 years ago)
dongdaxiang 855bf579d2 add dist_multi_trainer for distributed training, add trainer_factory and device_worker_factory so that we can easily extend new training mode, add pull dense worker which is a singleton for parameter fetching (6 years ago)
chengduo 1096746cbf Fuse Adam And SGD ops (#15933) (6 years ago)
Jacek Czaja 2632327429 [MKL-DNN] Tensor modifications revert (#16462) (6 years ago)
chengduo 2265d091e6 Fix threaded executor bug (#16508) (6 years ago)
Zeng Jinle 69cb9792ea Merge pull request #16506 from sneaxiy/revert-16424-fix_allocator_bug (6 years ago)
chengduo ed61d67c73 Fix the interface of Pass::Apply (#16484) (6 years ago)
Zeng Jinle 2aa18e2bda Merge pull request #16496 from sneaxiy/fix_gc_bug (6 years ago)
Zeng Jinle 174d0d0b90 Revert "Fix allocator bug" (6 years ago)
gongweibao eb83abeac3 Add DGC(Deep Gradient Compression) interface. (#15841) (6 years ago)
Zeng Jinle 644e8af4cf Merge pull request #16424 from sneaxiy/fix_allocator_bug (6 years ago)
sneaxiy c4c6205268 fix gc bug (6 years ago)
Zeng Jinle c7c6eeb44e Merge pull request #16409 from sneaxiy/feature/advance_gc (6 years ago)
liuwei1031 8d22bc17a4 Memory optimize (#16410) (6 years ago)
Zhaolong Xing fa1796a30a Merge pull request #16330 from NHZlX/merge_anakin_branch_to_dev (6 years ago)
sneaxiy a0f4fefb60 delete source file no_need_buffer_vars_inference.cc (6 years ago)
Wu Yi 9ffd5eecef test fix fetch bar place for ce (#16406) (6 years ago)
nhzlx 953bdde058 Merge branch 'develop' of https://github.com/paddlepaddle/paddle into HEAD (6 years ago)
Tao Luo e0a3a49096 Merge pull request #16438 from wojtuss/wojtuss/move-cpu-quantize-passes (6 years ago)
gongweibao ec6519e806 Fix allreducedep bug (#16443) (6 years ago)
sneaxiy 78fb3a62e0 fix env variable settting bug (6 years ago)
sneaxiy 2d92b6be98 merge develop (6 years ago)
sneaxiy 7000ec85d9 fix some op grad maker (6 years ago)