He Wei
7d9a783993
[auto-monad] Support side-effects by auto-monad
...
The basic idea is to exploit data dependencies to control the execution order
of side-effect operations while keeping the semantics of ANF unchanged.
The ControlDepend primitive is removed, and two primitives are added:
1. UpdateState:
```
a = Assign(para, value)
```
becomes:
```
a = Assign(para, value, u)
u = UpdateState(u, a)
```
2. Load:
```
x = Add(para, value)
```
becomes:
```
p = Load(para, u)
x = Add(p, value)
u = UpdateState(u, p)
```
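The snippet below is a minimal, self-contained sketch (plain Python, not MindSpore's actual IR classes) of why this works: once the monad token `u` is threaded through Assign, UpdateState, and Load, the write-before-read order becomes an ordinary data dependency that any topological scheduler must respect. `Node`, `toposort`, and the graph names are illustrative assumptions, not the real implementation.
```python
# A minimal sketch (NOT MindSpore's implementation) of how threading a
# monad token `u` through Assign/Load/UpdateState fixes execution order.
from collections import namedtuple

Node = namedtuple("Node", ["name", "op", "inputs"])

def toposort(node, order=None, seen=None):
    """Depth-first post-order walk: inputs execute before their users."""
    if order is None:
        order, seen = [], set()
    for inp in node.inputs:
        if isinstance(inp, Node) and inp.name not in seen:
            seen.add(inp.name)
            toposort(inp, order, seen)
    order.append(node.name)
    return order

# Without the token, the Assign and the read of `para` share no edge, so
# a scheduler may run them in either order.  With it, the chain
#   u0 -> a (Assign) -> u1 (UpdateState) -> p (Load)
# turns write-before-read into a plain data dependency.
u0 = Node("u0", "UMonad",      [])
a  = Node("a",  "Assign",      ["para", "value", u0])
u1 = Node("u1", "UpdateState", [u0, a])
p  = Node("p",  "Load",        ["para", u1])
x  = Node("x",  "Add",         [p, "value2"])

print(toposort(x))  # ['u0', 'a', 'u1', 'p', 'x']
```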
4 years ago
wangnan39@huawei.com
0fe9e2e4cb
support import dynamic_lr from nn
4 years ago
zhuyuxiao
37bebc751b
add adagrad optim
4 years ago
wangnan39@huawei.com
ab811fca8f
add AdamOffload optimizer
4 years ago
Jiaqi
a30ccea62c
sparse optimizer
4 years ago
panyifeng
1a54785fe2
remove name arg from gradoperation
5 years ago
panyifeng
637e812347
remove global grad ops
5 years ago
wangnan39@huawei.com
082433183d
unify learning_rate behavior of optimizers
5 years ago
wangnan39@huawei.com
86889c59cb
optimizer adapts to IndexedSlices
5 years ago
panyifeng
44e74ad5aa
Apply indexed_slices
5 years ago
panyifeng
d6635bbbe2
Add IndexedSlices
5 years ago
wangnan39@huawei.com
172728a6a6
support weight decay for sparse optimizer
5 years ago
Ziyan
41ddc153a6
modify lars interface
5 years ago
panyifeng
3c2057297e
support multi param for tuple grad
5 years ago
lilei
497067d7b2
add sparse proximal ada grad optimizer
5 years ago
mindspore-ci-bot
2d84011504
!2071 optimizer support loss scale for sparse situation
...
Merge pull request !2071 from wangnan39/support_loss_scale_for_sparse_optimizer
5 years ago
wangnan39@huawei.com
d4e3d69f37
support loss scale for sparse situation
5 years ago
kingfo
9708e58259
fix TupleToArray & Cast operator issue
5 years ago
mindspore-ci-bot
3536185f5b
!2007 add lazy adam optimizer and support sparse adam&ftrl for cpu backend
...
Merge pull request !2007 from wangnan39/add_lazy_adam_optim_and_support_sparse_admm_for_cpu_backend
5 years ago
wangnan39@huawei.com
4042f16ce4
add lazy adam optim and support sparse adam & ftrl for cpu backend
5 years ago
mindspore-ci-bot
f859dfecc8
!1920 SupportPynativeIndexing
...
Merge pull request !1920 from amongo/SupportPynativeIndexing
5 years ago
huangdongrun
9522f59b87
support for tensor indexing in pynative
...
support tensor slice using constexpr
remove tensorslice metagraph
add pynative testcases
5 years ago
mindspore-ci-bot
9dfb1011fe
!1854 add SparseApplyAdam and SparseApplyLazyAdam ops
...
Merge pull request !1854 from wangnan39/add_ops_sparse_adam_and_sparse_lazy_adam
5 years ago
wangnan39@huawei.com
de21dbdaef
add ops SparseApplyAdam and SparseApplyLazyAdam
5 years ago
mindspore-ci-bot
3b8edd5a5b
!1918 sparse grad for gatherv2
...
Merge pull request !1918 from riemann_penn/sparse_grad_for_gatherv2
5 years ago
panyifeng
acaa66a738
sparse grad for gatherv2
5 years ago
lilei
36d9e353a5
add proximal_ada_grad optimizer
5 years ago
wangnan39@huawei.com
c9b7d95c2c
fix lr check bug in AdamWeightDecayDynamicLR
5 years ago
wangnan39@huawei.com
810ccf80d8
fix bug in check of lamb warmup step
5 years ago
liuxiao
ebbccca78b
pylint clean
5 years ago
guohongzilong
2d2f9ba8fd
fix check code for group parameters
5 years ago
jinyaohui
5a914994ba
clean pylint
5 years ago
jinyaohui
26fd75895d
pylint warning clean
5 years ago
guohongzilong
824bc30a94
learning rate and weight decay support group mode
5 years ago
wangnan39@huawei.com
7f602016f4
add parameter verification for rmsprop, and modify default value in annotation
5 years ago
root
7d700295f8
add dynamic lr and enhance optim
5 years ago
Ziyan
4cbcd8e907
enable use of float-type learning rate in lars optimizer
5 years ago
zhunaipan
930a1fb0a8
initial version
...
Signed-off-by: leonwanghui <leon.wanghui@huawei.com>
5 years ago