Qiao Longfei
59357f4fb9
fix floor_op ( #7926 )
7 years ago
Yang Yu
5a4367bb16
Update
7 years ago
fengjiayi
bff0cbfcd3
Merge pull request #7025 from JiayiFeng/rename_output_of_softmax_and_activitions
Rename output of softmax and activations
7 years ago
Luo Tao
761b329793
unify the indentation of the license
7 years ago
fengjiayi
e0be63bf09
change activations
7 years ago
QI JUN
61ec0b9516
Refine device context ( #6433 )
There are mainly following fixes:
- take `DeviceContext` as the template parameter of math functors and OpKernel instead of `Place`
- remove `eigen_device` interface in base class `DeviceContext`
- remove `GetEigenDevice` interface in `ExecutionContext` and base class `DeviceContext`
- remove unused `platform::EigenDeviceConverter`
- rename `REGISTER_OP_GPU_KERNEL` to `REGISTER_OP_CUDA_KERNEL`
- rename `USE_GPU_ONLY_OP` to `USE_CUDA_ONLY_OP`
7 years ago
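To make the refactor described in the commit above concrete, here is a minimal, hypothetical C++ sketch (none of these types are the real Paddle classes) of the pattern the bullets describe: a math functor templated on a `DeviceContext` type instead of a `Place` tag, so the caller passes the context directly and no `eigen_device()` / `GetEigenDevice` accessor is needed.

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

// Stand-in for a device context; the real class would hold stream/Eigen
// device handles. Purely illustrative.
struct CPUDeviceContext {
  const char* name() const { return "CPU"; }
};

// Math functor templated on the DeviceContext type rather than on a Place,
// mirroring the refactor described in the commit message.
template <typename DeviceContext, typename T>
struct ScaleFunctor {
  void operator()(const DeviceContext& ctx, const std::vector<T>& in, T scale,
                  std::vector<T>* out) const {
    out->resize(in.size());
    for (std::size_t i = 0; i < in.size(); ++i) (*out)[i] = scale * in[i];
    std::cout << "ScaleFunctor ran on " << ctx.name() << "\n";
  }
};

int main() {
  CPUDeviceContext ctx;
  std::vector<float> x{1.f, 2.f, 3.f}, y;
  ScaleFunctor<CPUDeviceContext, float>()(ctx, x, 2.f, &y);
  std::cout << y[0] << " " << y[1] << " " << y[2] << "\n";  // 2 4 6
}
```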
Abhinav Arora
113c026d12
Swish activation operator ( #6358 )
7 years ago
dzhwinter
513b1e010f
"add floor, ceil, round op" ( #5898 )
* "add floor, ceil, round op"
* "reuse zero gradient"
* "fix divide zero"
* "fix numpy floor error"
7 years ago
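A hedged sketch of the "reuse zero gradient" point in the commit above: floor, ceil, and round are piecewise constant, so their derivative is zero wherever it exists, and one shared zero-gradient functor can back all three ops. The `ZeroGradFunctor` below is hypothetical, not the actual Paddle code.

```cpp
#include <cstdio>
#include <vector>

// floor/ceil/round are piecewise constant, so dL/dx = dL/dy * 0; a single
// functor can serve as the gradient kernel for all three ops.
struct ZeroGradFunctor {
  void operator()(const std::vector<float>& dout, std::vector<float>* dx) const {
    dx->assign(dout.size(), 0.0f);
  }
};

int main() {
  std::vector<float> dout{0.3f, -1.2f, 4.0f}, dx;
  ZeroGradFunctor()(dout, &dx);  // the same functor would serve floor, ceil, round
  std::printf("%g %g %g\n", dx[0], dx[1], dx[2]);  // 0 0 0
}
```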
Kexin Zhao
81ba077e7b
small fix
8 years ago
QI JUN
669786bfe1
refine square_error_cost layer ( #5216 )
* reimplement pow operator
* add pow_grad operator
* fix code style
* fix build error
* fix op_test bug
* revert pow operator
* add FIXME comment
8 years ago
Yu Yang
be00b0c4d6
Gradient check use graph ( #5027 )
* Simplify Gradient Check
* Stash
* Extract apply_backward_pass to backward.py
Rename apply_backward_pass to append_backward_ops
* Use graph API to check gradient
* Fix ci
* Fix CI
* Fix backward for double precision
* Stash
* Fix CI
* Fix ci
* Ignore GRU test
* Ignore xe op
* Fix CI
* Fix softmax with xe gradient
The correct equation should be IG = OG * (d_softmax_with_xe())
* Fix typo
* Fix merge error
* Disable LRN
8 years ago
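For the "Fix softmax with xe gradient" bullet in the commit above, the standard derivation behind IG = OG * (d_softmax_with_xe()) is, for a one-hot label t (a sketch of the usual math, not necessarily the exact code change):

```latex
L = -\sum_i t_i \log\!\big(\mathrm{softmax}(x)_i\big), \qquad
\frac{\partial L}{\partial x_i} = \mathrm{softmax}(x)_i - t_i, \qquad
IG_i = OG \cdot \big(\mathrm{softmax}(x)_i - t_i\big)
```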
Abhinav Arora
3b954e1ddc
Adding Hard Sigmoid Activation ( #4771 )
* Adding Hard Sigmoid Activation
* Adding a comment for slope to be only positive
* Fixing grammatical mistake in comment
8 years ago
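A minimal sketch of the hard sigmoid added in the commit above, assuming the commonly used slope/offset defaults of 0.2 and 0.5 (not necessarily Paddle's attribute defaults); the comment mentioned in the second bullet restricts the slope to positive values.

```cpp
#include <algorithm>
#include <cstdio>

// Hard sigmoid: a piecewise-linear approximation of the sigmoid.
// The commit adds a comment requiring slope to be positive.
float hard_sigmoid(float x, float slope = 0.2f, float offset = 0.5f) {
  return std::max(0.0f, std::min(1.0f, slope * x + offset));
}

int main() {
  std::printf("%g %g %g\n", hard_sigmoid(-10.f), hard_sigmoid(0.f),
              hard_sigmoid(10.f));  // 0 0.5 1
}
```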
Abhinav Arora
b504a2346c
Adding the Thresholded Relu Op ( #4685 )
* Adding thresholded_relu op
* Adding test for thresholded relu op
8 years ago
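A minimal sketch of the thresholded ReLU operator added in the commit above; the default threshold of 1.0 is an assumption for illustration, not taken from the op definition.

```cpp
#include <cstdio>

// Thresholded ReLU: pass x through only when it exceeds the threshold.
float thresholded_relu(float x, float threshold = 1.0f) {
  return x > threshold ? x : 0.0f;
}

int main() {
  std::printf("%g %g\n", thresholded_relu(0.5f), thresholded_relu(2.5f));  // 0 2.5
}
```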
kexinzhao
9995aed114
Implementing Softplus operator ( #4690 )
* implementing softplus
* small fix
* small fix
* small fix
* small fix
8 years ago
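A minimal sketch of softplus, f(x) = log(1 + exp(x)); the overflow-safe rewriting below is a common implementation choice, not necessarily the one used in this commit.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

// Softplus f(x) = log(1 + exp(x)), written as max(x, 0) + log1p(exp(-|x|))
// so that exp() does not overflow for large positive x.
float softplus(float x) {
  return std::max(x, 0.0f) + std::log1p(std::exp(-std::fabs(x)));
}

int main() {
  std::printf("%g %g %g\n", softplus(-50.f), softplus(0.f), softplus(50.f));
  // ~0  0.693147  ~50
}
```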
kavyasrinet
1397e17f6b
Implemented the hardShrink activation ( #4653 )
* Implemented the hardShrink activation
* Fixing the unit test
8 years ago
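A minimal sketch of the hard shrink activation from the commit above, which zeroes values whose magnitude is at most a threshold lambda; the default lambda = 0.5 below is an assumption for illustration.

```cpp
#include <cmath>
#include <cstdio>

// Hard shrink: keep x only when |x| exceeds the threshold lambda.
float hard_shrink(float x, float lambda = 0.5f) {
  return std::fabs(x) > lambda ? x : 0.0f;
}

int main() {
  std::printf("%g %g %g\n", hard_shrink(-0.3f), hard_shrink(0.3f),
              hard_shrink(1.2f));  // 0 0 1.2
}
```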
Siddharth Goyal
6604d7cda2
Add logsigmoid (numerically stable) and softshrink ( #4663 )
* Add numerically-stable logsigmoid activation
* Add softshrink operator
* Adjust relative tolerance for grad-check
* Address review comments
8 years ago
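A minimal sketch of the two activations in the commit above. The logsigmoid rewriting below is the standard numerically stable form the commit title refers to; the softshrink threshold default of 0.5 is an assumption.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

// log(sigmoid(x)) rewritten as min(x, 0) - log1p(exp(-|x|)), which avoids
// overflow in exp() and loss of precision in log(1 / (1 + exp(-x))).
float logsigmoid(float x) {
  return std::min(x, 0.0f) - std::log1p(std::exp(-std::fabs(x)));
}

// Soft shrink: shift x toward zero by lambda, clamping the middle band to 0.
float softshrink(float x, float lambda = 0.5f) {
  if (x > lambda) return x - lambda;
  if (x < -lambda) return x + lambda;
  return 0.0f;
}

int main() {
  std::printf("%g %g\n", logsigmoid(-100.f), logsigmoid(100.f));  // -100  ~0
  std::printf("%g %g %g\n", softshrink(-1.f), softshrink(0.2f), softshrink(1.f));
  // -0.5 0 0.5
}
```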
zhouxiao-coder
e6421249d5
update to latest
8 years ago
kavyasrinet
f30a1f42f0
Adding relu6 activation function ( #4607 )
8 years ago
zhouxiao-coder
53574e54a1
resolve merge conflict; reimplement ELU activation with functor
8 years ago
Kavya Srinet
154a6ed29c
Implementing tanhshrink operator
8 years ago
Kavya Srinet
60af56c1b8
Added Leaky Relu activation
8 years ago
zhouxiao-coder
a815d6abcf
elu: optimize gradient calculation; add more comments
8 years ago
Yu Yang
a8c6ce9b4d
Merge branch 'develop' of github.com:baidu/Paddle into feature/BetterActivationKern
8 years ago
Abhinav Arora
0c3eee09ff
Implementing the SoftSign activation operator
8 years ago
Yu Yang
337b7ebe77
Unify Activation functions and simplify register code
8 years ago
Yu Yang
3a5693e0a8
Add Skeleton of Double support
8 years ago
qijun
5824d85001
add activation operators and python unittests
8 years ago
qijun
dadace3178
add more activation functors
8 years ago
qijun
e515f18dd8
add tanh and sqrt activation operators
8 years ago
qijun
0957fa7b3c
fix relu functor and revert some codes
8 years ago
qijun
c18ebc3022
remove macros
8 years ago
qijun
d736fc0e00
add activation macro
8 years ago