Commit Graph

913 Commits (e8a678e1eecd11fee219a93c6c586ee24663a506)

Author SHA1 Message Date
qijun e8a678e1ee fix executor gpu unittest runtime error (8 years ago)
qijun 1f5192a27b fix executor gpu unittest (8 years ago)
qijun 39f75a13a4 Merge remote-tracking branch 'baidu/develop' into executor_impl (8 years ago)
qijun bbceb72398 refine some codes (8 years ago)
qijun 48b080db9f ensure global BuddyAllocator is initialized before global Scope (8 years ago)
qijun 45c4dcaabb add fetch operator (8 years ago)
kavyasrinet 3e2be065b9 Merge pull request #4604 from kavyasrinet/activations (8 years ago)
qijun 20725f2d52 add executor feed operator test (8 years ago)
Abhinav Arora 828c5b3e1d Adding Adadelta optimization operator (#4576) (8 years ago)
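The Adadelta entry (#4576) adds the optimizer from Zeiler's Adadelta method, which rescales each gradient by the ratio of the RMS of past updates to the RMS of past gradients. A minimal scalar sketch of one update step, assuming standard defaults (rho = 0.95, epsilon = 1e-6); names are illustrative, not Paddle's actual kernel API:

```cpp
#include <cmath>

// State carried between Adadelta steps: decayed averages of the
// squared gradients and of the squared parameter updates.
struct AdadeltaState {
    double avg_squared_grad = 0.0;
    double avg_squared_update = 0.0;
};

// One Adadelta step on a single scalar parameter (a sketch, not
// Paddle's operator). Returns the updated parameter value.
double AdadeltaStep(double param, double grad, AdadeltaState& s,
                    double rho = 0.95, double epsilon = 1e-6) {
    // Decayed accumulation of the squared gradient.
    s.avg_squared_grad =
        rho * s.avg_squared_grad + (1.0 - rho) * grad * grad;
    // Scale the gradient by RMS(previous updates) / RMS(gradients).
    double update = std::sqrt(s.avg_squared_update + epsilon) /
                    std::sqrt(s.avg_squared_grad + epsilon) * grad;
    // Decayed accumulation of the squared update.
    s.avg_squared_update =
        rho * s.avg_squared_update + (1.0 - rho) * update * update;
    return param - update;
}
```

Unlike plain SGD, no global learning rate appears in the update: the RMS ratio adapts the effective step size per parameter.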
Kavya Srinet 60af56c1b8 Added Leaky Relu activation (8 years ago)
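Leaky ReLU, added in the entry above, is the identity for positive inputs and a small linear slope for negative ones, which avoids the "dead neuron" problem of plain ReLU. A one-line sketch; the slope value here is illustrative, since the operator exposes it as a configurable attribute:

```cpp
// Leaky ReLU: x for x > 0, alpha * x otherwise.
// The default alpha below is an illustrative choice, not Paddle's.
float LeakyRelu(float x, float alpha = 0.02f) {
    return x > 0.0f ? x : alpha * x;
}
```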
qijun 623848afa1 add feed operator (8 years ago)
Yi Wang 1172f24929 Merge pull request #4590 from wangkuiyi/paddle_only_cpu (8 years ago)
qiaolongfei 8ebc31d935 optimize the dsize (8 years ago)
qiaolongfei 775c60246b remove using in sgd header file (8 years ago)
qiaolongfei ee7b3ed09e use EigenScalar to get learning_rate from GPU device (8 years ago)
Yi Wang 4558807c48 Use PADDLE_WITH_CUDA instead of PADDLE_WITH_GPU (8 years ago)
Yi Wang e79d2f1b65 Merge pull request #4584 from reyoung/feature/change_macro_paddle_no_gpu (8 years ago)
Yu Yang 84500f9487 Change `PADDLE_ONLY_CPU` to `PADDLE_WITH_GPU` (8 years ago)
Abhinav Arora eed2c1e1d6 Changing SGD inputs and outputs to conform to Operator naming convention (#4586) (8 years ago)
Abhinav Arora 324876bbbf Changing learning rate from type Input(float) to Input(tensor) (#4578) (8 years ago)
zchen0211 94b94e5b68 Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into develop (8 years ago)
zchen0211 2d876b8643 gather scatter fix according to google style (8 years ago)
Abhinav Arora 42e7fe05a2 Changing learning rate from attribute to input(float) (#4568) (8 years ago)
zchen0211 2ccaec4f57 gather scatter cond (8 years ago)
Yu Yang ff1bfdedc9 Fix CRLF in sum_op.cu (8 years ago)
Yu Yang adec0d30fe Simplify SumOp Kernel (8 years ago)
zchen0211 58174b12f7 Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into develop (8 years ago)
zchen0211 84b8baf196 gather scatter with cuda streams (8 years ago)
qiaolongfei cde542e652 optimize auto (8 years ago)
qiaolongfei 6b051b651a optimize code (8 years ago)
qiaolongfei 32f5c9dd93 recurrent_op pass the unit test (8 years ago)
zchen0211 15941dbd8c solve conflict for cond_op and scatter (8 years ago)
qiaolongfei 7163dd0413 revert code (8 years ago)
Yu Yang 0900aedfa0 Merge pull request #4514 from reyoung/feature/remove_add_op (8 years ago)
chengduo 4f5491b2b4 Merge pull request #4146 from chengduoZH/Add_pool_op (8 years ago)
Yu Yang aa52fa1c64 Merge pull request #4491 from reyoung/feature/stable_lstm (8 years ago)
chengduoZH 2d8a5b97cc fix unit test (8 years ago)
Qiao Longfei 7fe0297e64 remove Runtime InferShape for cond op (#4518) (8 years ago)
Yu Yang 762a99cc06 Remove add_op since it can be replaced by sum_op (8 years ago)
Yu Yang ae4b7fd575 Merge pull request #4485 from reyoung/feature/BetterActivationKern (8 years ago)
chengduoZH df59889984 remove conflict (8 years ago)
Luo Tao bb7f555803 remove rowwise_add_op (8 years ago)
Liu Yiqun 8bafdda0ad Merge branch 'develop' into core_add_sequence_softmax_op (8 years ago)
Cao Ying 7cc5ae9999 Merge pull request #4492 from QiJune/refine_some_functors (8 years ago)
qijun b611a479fc fix gpu build error (8 years ago)
chengduoZH e1e3859e88 remove custom attr checker and fix code format (8 years ago)
Yu Yang a8c6ce9b4d Merge branch 'develop' of github.com:baidu/Paddle into feature/BetterActivationKern (8 years ago)
qijun 84ff7e9784 refine SoftmaxFunctor (8 years ago)
Yu Yang f60f0eae11 Using double precision to stablize lstm gradient check (8 years ago)
Abhinav Arora 0c3eee09ff Implementing the SoftSign activation operator (8 years ago)
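The last entry adds the SoftSign activation, f(x) = x / (1 + |x|), which squashes inputs into (-1, 1) like tanh but with polynomial rather than exponential tails. A scalar sketch of the forward computation (the function name is illustrative):

```cpp
#include <cmath>

// SoftSign: smooth, odd, bounded in (-1, 1); approaches its
// asymptotes more slowly than tanh, so gradients saturate later.
double SoftSign(double x) {
    return x / (1.0 + std::fabs(x));
}
```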