Commit Graph

1019 Commits (2603cb7e86dc4fdfe163d17f286df7ab2f05c4d6)

Author | SHA1 | Message | Date
Qiao Longfei | e12ec95ac1 | Merge pull request #4630 from jacquesqiao/merge-infershapecontext | 7 years ago
Abhinav Arora | 4cb5bd9021 | Implementing the Adamax optimizer operator (#4538) | 7 years ago
kavyasrinet | f30a1f42f0 | Adding relu6 activation function (#4607) | 7 years ago
Luo Tao | 597299074e | fix bug in REGISTER_OP(reduce_min) | 7 years ago
Luo Tao | a06f099d9f | refine comment of interp_op | 7 years ago
chengduoZH | fcfce48421 | follow coments | 7 years ago
zhouxiao-coder | 53574e54a1 | reslove merge conflict;reimplement ELU activation with functor | 7 years ago
Luo Tao | 707d144c93 | Unify Reduce functions and simplify register code | 7 years ago
Luo Tao | 5b862fedf1 | remove debug log in interp_op.cc | 7 years ago
Luo Tao | 4724bdbe68 | Merge branch 'develop' into interp | 7 years ago
chengduoZH | 3db3a1066b | remove conflict | 7 years ago
chengduoZH | ba791f7b3f | Add vol2col functor and unit test | 7 years ago
Yang Yang | c93d74aa06 | merge develop | 7 years ago
qiaolongfei | c0a34e1c64 | rename InferShapeContextBase to InferShapeContext | 7 years ago
Yi Wang | 99895730f7 | Merge pull request #4609 from kavyasrinet/tanhshrink | 7 years ago
qijun | f087533cc3 | Merge remote-tracking branch 'baidu/develop' into executor_impl | 7 years ago
qijun | 91f5d2b9cb | follow comments and create local_scope inside executor run method | 7 years ago
Yi Wang | 097f533bca | Resolve conflict | 7 years ago
qijun | e8a678e1ee | fix executor gpu unittest runtime error | 7 years ago
qijun | 1f5192a27b | fix executor gpu unittest | 7 years ago
kexinzhao | 087addaa76 | Merge pull request #4558 from kexinzhao/adagrad_op | 7 years ago
Kexin Zhao | 78f4c803f3 | change learning rate and fix format | 7 years ago
qijun | 39f75a13a4 | Merge remote-tracking branch 'baidu/develop' into executor_impl | 7 years ago
qijun | bbceb72398 | refine some codes | 7 years ago
qijun | 48b080db9f | ensure global BuddyAllocator is initialized before global Scope | 7 years ago
Kavya Srinet | f52cdaa0ce | Updated RMSProp to have learning rate as an input and work with GPU | 7 years ago
Kavya Srinet | 0336304176 | Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into rmsprop | 7 years ago
Kavya Srinet | 154a6ed29c | Implementing tanhshrink operator | 7 years ago
qijun | 45c4dcaabb | add fetch operator | 7 years ago
kavyasrinet | 3e2be065b9 | Merge pull request #4604 from kavyasrinet/activations | 7 years ago
qijun | 20725f2d52 | add executor feed operator test | 7 years ago
Abhinav Arora | 828c5b3e1d | Adding Adadelta optimization operator (#4576) | 7 years ago
Kavya Srinet | 60af56c1b8 | Added Leaky Relu activation | 7 years ago
qijun | 623848afa1 | add feed operator | 7 years ago
Yi Wang | 1172f24929 | Merge pull request #4590 from wangkuiyi/paddle_only_cpu | 7 years ago
qiaolongfei | 8ebc31d935 | optimize the dsize | 7 years ago
qiaolongfei | 775c60246b | remove using in sgd header file | 7 years ago
Yu Yang | 2594a50245 | Polish code | 7 years ago
Yu Yang | c4effc7d2d | Fix CI Test | 7 years ago
qiaolongfei | ee7b3ed09e | use EigenScalar to get learning_rate from GPU device | 7 years ago
Yi Wang | 4558807c48 | Use PADDLE_WITH_CUDA instead of PADDLE_WITH_GPU | 7 years ago
Yi Wang | e79d2f1b65 | Merge pull request #4584 from reyoung/feature/change_macro_paddle_no_gpu | 7 years ago
Kavya Srinet | fa12e51675 | Adding the default attribute test case | 7 years ago
Kavya Srinet | 94855f4af0 | Fixed changes proposed in the review | 7 years ago
Yu Yang | e119177a8c | Use unique_ptr | 7 years ago
Yu Yang | 84500f9487 | Change `PADDLE_ONLY_CPU` to `PADDLE_WITH_GPU` | 7 years ago
Abhinav Arora | eed2c1e1d6 | Changing SGD inputs and outputs to conform to Operator naming convention (#4586) | 7 years ago
Abhinav Arora | 324876bbbf | Changing learning rate from type Input(float) to Input(tensor) (#4578) | 7 years ago
Yu Yang | 14a59d2e6b | Merge branch 'develop' of github.com:baidu/Paddle into feature/grad_reg_mechanism_cont2 | 7 years ago
zchen0211 | 94b94e5b68 | Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into develop | 7 years ago
zchen0211 | 2d876b8643 | gather scatter fix according to google style | 7 years ago
Abhinav Arora | 42e7fe05a2 | Changing learning rate from attribute to input(float) (#4568) | 7 years ago
Kavya Srinet | 163d287143 | Made learning rate the input | 7 years ago
Kexin Zhao | d1de7ec630 | Change learning rate from attribute to input tensor | 7 years ago
zchen0211 | 2ccaec4f57 | gather scatter cond | 7 years ago
Yu Yang | 46c551b299 | Complete Register Gradient in compile time | 7 years ago
Kavya Srinet | 61c03f9d59 | Adding the implementation for rmsprop operator | 7 years ago
Yu Yang | ff1bfdedc9 | Fix CRLF in sum_op.cu | 7 years ago
Yu Yang | adec0d30fe | Simplify SumOp Kernel | 7 years ago
zchen0211 | 58174b12f7 | Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into develop | 7 years ago
zchen0211 | 84b8baf196 | gather scatter with cuda streams | 7 years ago
Kexin Zhao | 05cbd4daac | fix format | 7 years ago
qiaolongfei | cde542e652 | optimize auto | 7 years ago
qiaolongfei | 6b051b651a | optimize code | 7 years ago
Kexin Zhao | 1ac654a69f | Implementing the Adagrad optimizer step operator | 7 years ago
qiaolongfei | 32f5c9dd93 | recurrent_op pass the unit test | 7 years ago
zchen0211 | 15941dbd8c | solve conflict for cond_op and scatter | 7 years ago
qiaolongfei | 7163dd0413 | revert code | 7 years ago
chengduoZH | 14b2c98f90 | Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into Add_maxpool_withIdx_only | 7 years ago
Yu Yang | 0900aedfa0 | Merge pull request #4514 from reyoung/feature/remove_add_op | 7 years ago
Yancey1989 | 0028459bb0 | update | 7 years ago
Yancey1989 | 927767b6aa | add some checking | 7 years ago
chengduoZH | bb33c2b3a5 | fix kernel func | 7 years ago
chengduoZH | 2ed56df1e6 | remove conflict | 7 years ago
chengduoZH | bee95fc891 | fix code format and some bug | 7 years ago
Yancey1989 | a35e82a649 | Merge branch 'develop' of github.com:PaddlePaddle/Paddle into seqconcat_op | 7 years ago
chengduo | 4f5491b2b4 | Merge pull request #4146 from chengduoZH/Add_pool_op | 7 years ago
Yu Yang | aa52fa1c64 | Merge pull request #4491 from reyoung/feature/stable_lstm | 7 years ago
chengduoZH | 2d8a5b97cc | fix unit test | 7 years ago
Qiao Longfei | 7fe0297e64 | remove Runtime InferShape for cond op (#4518) | 7 years ago
Yu Yang | 762a99cc06 | Remove add_op since it can be replaced by sum_op | 7 years ago
Yu Yang | ae4b7fd575 | Merge pull request #4485 from reyoung/feature/BetterActivationKern | 7 years ago
Yancey1989 | be3fa7926e | add sequence concat op | 7 years ago
zhouxiao-coder | 601e2317fd | update to latest | 7 years ago
zhouxiao-coder | 4436ba0c56 | elu: Optimize gradient calculation;Add more comments | 7 years ago
zhouxiao-coder | a815d6abcf | elu: Optimize gradient calculation;Add more comments | 7 years ago
chengduoZH | df59889984 | remove conflict | 7 years ago
Luo Tao | bb7f555803 | remove rowwise_add_op | 7 years ago
Luo Tao | 884e31a59b | add interpolation op | 7 years ago
Liu Yiqun | 8bafdda0ad | Merge branch 'develop' into core_add_sequence_softmax_op | 7 years ago
Cao Ying | 7cc5ae9999 | Merge pull request #4492 from QiJune/refine_some_functors | 7 years ago
qijun | b611a479fc | fix gpu build error | 7 years ago
chengduoZH | e1e3859e88 | remove custom attr checker and fix code format | 7 years ago
Yu Yang | a8c6ce9b4d | Merge branch 'develop' of github.com:baidu/Paddle into feature/BetterActivationKern | 7 years ago
qijun | 84ff7e9784 | refine SoftmaxFunctor | 7 years ago
Yu Yang | f60f0eae11 | Using double precision to stablize lstm gradient check | 7 years ago
Abhinav Arora | 0c3eee09ff | Implementing the SoftSign activation operator | 7 years ago
qijun | 79def5e634 | refine CrossEntropyFunctor | 7 years ago
qijun | c634a8480a | add SetConstant method in math_function.h | 7 years ago
zchen0211 | 78808b2091 | 1 api | 7 years ago