dingpeifei
87e41aaeee
unify GPU and CPU IR operators as BatchNorm
4 years ago
LianLiguang
d34074d1ac
fix bug in biasadd grad's infer shape
4 years ago
mindspore-ci-bot
c69142fdc1
!12968 update reshape type for 3d nodes
...
From: @liubuyu
Reviewed-by:
Signed-off-by:
4 years ago
liubuyu
518818fbef
reshape type for 3d nodes
4 years ago
LianLiguang
4acab81599
use cpp infer first
4 years ago
l00591931
cf7c5840e3
Change return
4 years ago
yuchaojie
6d195f340c
add SyncBatchNorm
4 years ago
He Wei
7d9a783993
[auto-monad] Support side-effects by auto-monad
...
The basic idea: exploit data dependencies to control the execution order
of side-effecting operations while keeping the ANF semantics unchanged.
The ControlDepend primitive is removed and two primitives are added:
1. UpdateState:
```
a = Assign(para, value)
```
became:
```
a = Assign(para, value, u)
u = UpdateState(u, a)
```
2. Load:
```
x = Add(para, value)
```
became:
```
p = Load(para, u)
x = Add(p, value)
u = UpdateState(u, p)
```
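The transformation above can be sketched outside of MindSpore. The following is a hypothetical Python illustration (the `Assign`, `Load`, and `UpdateState` functions here are stand-ins, not the real primitives): threading a state token `u` through side-effecting ops turns ordering constraints into ordinary data dependencies, so a reader of `para` is sequenced before a later writer.

```python
# Illustrative sketch, not MindSpore internals: a state token "u" is
# threaded through side-effecting ops so execution order follows from
# data dependencies alone.

log = []  # records the order in which effects actually run

def Assign(para, value, u):
    """Write `value` into `para`; takes state `u`, returns an effect token."""
    para["data"] = value
    log.append(("Assign", value))
    return ("assign_done",)

def Load(para, u):
    """Read `para` under state `u`; the read is ordered before later writes."""
    log.append(("Load", para["data"]))
    return para["data"]

def UpdateState(u, effect):
    """Fold an effect into the state, producing the next state token."""
    return (u, effect)

u = ("init",)
para = {"data": 1}

p = Load(para, u)        # p = Load(para, u)
u = UpdateState(u, p)    # u = UpdateState(u, p)
x = p + 10               # x = Add(p, value): uses the loaded copy

a = Assign(para, 5, u)   # a = Assign(para, value, u)
u = UpdateState(u, a)    # u = UpdateState(u, a)

# The Load observed the old value because it was chained before the Assign.
assert log == [("Load", 1), ("Assign", 5)]
assert x == 11 and para["data"] == 5
```

Because each op consumes the previous state token and `UpdateState` produces the next one, reordering the Load past the Assign would break a data dependency, which is exactly how the compiler is prevented from doing it.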
4 years ago
mindspore-ci-bot
aebe263dce
!11895 unify mindir for different backend: the output num of optimizer ops, the backward of concat
...
From: @wangnan39
Reviewed-by:
Signed-off-by:
4 years ago
jinyaohui
8022f9a6ed
modify pack to stack
4 years ago
wangnan39@huawei.com
cd9173fdfd
unify the output num of optimizer ops
4 years ago
shenghong96
49144fde37
fix UT of test_topk_no_split
4 years ago
xsmq
a8259bae9b
disable cpp UT case (test_topk_no_split)
4 years ago
l00591931
9ec100d069
Change TensorAdd to Add, from r1.1 to master
4 years ago
lilei
9a45c4419c
modify batch_norm
4 years ago
jjfeing
8fb7d11ecb
fix topk help 4096
4 years ago
huanghui
1c6c280da7
fix unsorted_segment_sum_fission pass
4 years ago
Yi Huaijie
d7faa77b5e
support int64 shape
4 years ago
huanghui
b7519b7418
unify save_graphs_path
4 years ago
mindspore-ci-bot
c951d42c2c
!6728 [Ascend][DynamicShape] Dynamic shape feature
...
Merge pull request !6728 from caifubi/dynamic_shape_share_2
4 years ago
caifubi
d3b978147f
Ascend Dynamic Shape
4 years ago
jjfeing
755863ebae
insert memcpy for hccl nodes
4 years ago
mindspore-ci-bot
3048240f16
!5508 Add AdamApplyOneWithDecayAssign fusion pass
...
Merge pull request !5508 from YuJianfeng/adam_assign
5 years ago
fary86
fcbb3e0edc
Refactor ms_context implementation
5 years ago
yujianfeng
4b77f6b53c
Add AdamApplyOneWithDecayAssign fusion pass
5 years ago
yujianfeng
e688e1df32
Fix remove internal output for unique device target
5 years ago
Wei Luning
c1c30a44f1
rename param_value -> param_info
5 years ago
huanghui
b8d7f6d77f
add UnsortedSegmentSum fission pass
5 years ago
yujianfeng
8a77751988
Add AdamApplyOneAssign and AdamApplyOneWithDecayAssign fusion pass
5 years ago
mindspore-ci-bot
b045f47428
!3983 Add ReduceMin fission pass
...
Merge pull request !3983 from huanghui/reduce-min-fission-pass
5 years ago
huanghui
30000fdb52
add ReduceMin fission pass
5 years ago
liubuyu
d81862a916
decoupling core and context
5 years ago
Wei Luning
a05c38bb63
make python Parameter inherit from Tensor
5 years ago
WilliamLian
0179724dcd
split unsupported transdata into two transdatas: special format -> default and default -> special
5 years ago
liubuyu
f4bc0bc9fe
move the dependency of utils to core
5 years ago
chenfei
1f1a07e645
don't insert assign from condition to true branch of while
5 years ago
root
1b6f85dec8
split tuple parameter into parameters
...
add function to transform tuple to maketuple
5 years ago
yujianfeng
4d18e9ec35
Fix internal multiple outputs check
5 years ago
huanghui
f1563d2d37
insert memcpy async if hccl ops cascade
5 years ago
mindspore-ci-bot
6f8863b65d
!3198 synchronize latest Ascend software suite 18 Jul 2020, and merging branches
...
Merge pull request !3198 from yanghaoran/code_sync_0718
5 years ago
yanghaoran
859acc6d2a
synchronize latest Ascend software suite 18 Jul 2020, and merging branches
5 years ago
yujianfeng
fa0684d12d
Add pack and concat fission pass
5 years ago
yujianfeng
188d74f15e
Remove transdata and cast for internal outputs
5 years ago
changzherui
f4cb445ea8
sync code for 0715
5 years ago
laiyongqiang
68c78ab6bb
reuse communication op output's memory
5 years ago
liubuyu
43c79eb853
mindspore path adjust
5 years ago
huanghui
3eaf663545
add tensor scatter update fission pass
5 years ago
yujianfeng
24f6b9d77e
Add input2output pass
5 years ago
He Wei
43e0967024
Decouple ir::Tensor class from python
5 years ago
gong chen
a6dfa281ea
Init GraphKernel.
...
- It provides a unified style for users to express graphs and kernels.
- It provides a unified IR for developers to represent graphs and kernels.
- It breaks the boundary between graph and kernel.
- It provides more opportunities for compile-time optimization.
5 years ago