Commit Graph

16 Commits (3225e195912b1c467558bce192c6468d7f0e8540)

Author        SHA1          Date           Message
liuwei1031    4c7b6e2e67    6 years ago    fix comment, test=develop
liuwei1031    b20a21e299    6 years ago    fix comments of PR 15529, test=develop
dzhwinter     381f2015a5    6 years ago    Merge pull request #15665 from dzhwinter/experiment/refactor_memory
dzhwinter     04e9776aef    6 years ago    add details. test=develop
Dun Liang     ceec13562c    6 years ago    Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into my_checkpoint
Dun Liang     bc92192747    6 years ago    Fix Pr #15296
dzhwinter     ce0394bcd0    6 years ago    merge develop branch. test=develop
liuwei1031    6e84eb131f    6 years ago    expose peak gpu memory API to python test=develop (#15529)
Qiyang Min    6000a6e76e    6 years ago    Merge pull request #15312 from velconia/add_pyramid_dnn_support
liuwei1031    5d026a881a    6 years ago    Gpu memory monitoring (#15436)
minqiyang     ac80273686    6 years ago    Change definitions to PADDLE_WITH_JEMALLOC
minqiyang     29ceb93126    6 years ago    Use malloc and free in JeMalloc
Wu Yi         29d9fb53fc    7 years ago    [Feature] multi process multi gpu dist training, boost v100 performance by 20% (#14661)
gongweibao    50a698525d    7 years ago    Fix log level (#14692)
minqiyang     53433d7f2e    7 years ago    Revert the changes of VLOG
Yu Yang       19e669a992    7 years ago    Add legacy_allocator