Commit Graph

22 Commits (08bc08d64e7a62f097ffc068bd9114bf5c02e712)

| Author | SHA1 | Message | Date |
|---|---|---|---|
| QI JUN | d7bf372d26 | support adagrad sparse update (#5272) | 7 years ago |
| Yang Yang(Tony) | 40367d18d4 | feature/while_op (#5502) | 7 years ago |
| Yu Yang | bbdac7f7d8 | Polish OpWithKernel | 7 years ago |
| Yu Yang | 6cde889b5e | Add unittest, backward of array read/write op (#5409) | 7 years ago |
| Abhinav Arora | b0b26dabe7 | Polish operator documentation (#5356) | 7 years ago |
| QI JUN | 7f8574c0f5 | add sparse support for sum op (#5093) | 7 years ago |
| Yu Yang | be00b0c4d6 | Gradient check use graph (#5027) | 7 years ago |
| Yu Yang | 73a8b78a72 | Correct OpWithKernel's infershape (#4847) | 7 years ago |
| Yan Chunwei | 1c1f73b46d | Feature/dynamic recurrent op forward test (#4729) | 7 years ago |
| qiaolongfei | c0a34e1c64 | rename InferShapeContextBase to InferShapeContext | 7 years ago |
| Yu Yang | e119177a8c | Use unique_ptr | 7 years ago |
| Yu Yang | 46c551b299 | Complete Register Gradient in compile time | 7 years ago |
| Yu Yang | adec0d30fe | Simplify SumOp Kernel | 7 years ago |
| qiaolongfei | 32f5c9dd93 | recurrent_op pass the unit test | 7 years ago |
| Yu Yang | 762a99cc06 | Remove add_op since it can be replaced by sum_op | 7 years ago |
| Qiao Longfei | 9a9d50a6ee | Refactoring InferShape (#3946) | 7 years ago |
| dangqingqing | 36aeb30d12 | Remove LoDTensor in some operators' InferShape and refine ShareLoD function. | 8 years ago |
| dangqingqing | b65709e403 | Share LoD between input and output of each opeators. | 8 years ago |
| Liu Yiqun | eef1ccbf08 | Add the check of inputs and outputs in all operators. | 8 years ago |
| dangqingqing | f299206396 | Using LoDTensor instead of Tensor in every operator. | 8 years ago |
| qijun | f50e36e285 | follow comments | 8 years ago |
| qijun | f314330c23 | refactor operator python test and add sum operator | 8 years ago |