| Author | Commit | Message | Date |
| --- | --- | --- | --- |
| Xiaoda Zhang | fba2bfeb54 | overwrite strategies for star graph structure | 4 years ago |
| yangzhenzhang | fc4ed975c4 | handle repeated calculation | 4 years ago |
| yao_yf | 022005b94a | fix a bug case in reshape redistribution | 4 years ago |
| yao_yf | f60d81a15f | support reshape redistribution in all scenes | 4 years ago |
| mindspore-ci-bot | fd0c03c493 | !7090 implement parallel BroadcastTo (Merge pull request !7090 from yihuaijie/master) | 4 years ago |
| Yi Huaijie | 45d373d40e | implement parallel BroadcastTo | 4 years ago |
| Ziyan | ddc0113058 | enable parallel optimizer in auto parallel | 4 years ago |
| mindspore-ci-bot | 58610443cb | !7023 modify endofsequence for multi-machine (Merge pull request !7023 from HW_KK/master) | 4 years ago |
| wuweikang | a32811e160 | modify endofsequence for multi-machine | 4 years ago |
| mindspore-ci-bot | 2a799fe90e | !6018 Set the number of epochs in model.train() non-sink mode (Merge pull request !6018 from h.farahat/ms_nonsink_epochs) | 4 years ago |
| hesham | 9cee0d2143 | Add num_epochs to non-sink training | 4 years ago |
| huangxinjing | 4ef439e27b | Add stage information for ops and strategy | 4 years ago |
| Yi Huaijie | 6066b16838 | implement parallel Pack | 4 years ago |
| mindspore-ci-bot | 9475f9a19a | !6548 Implement parallel Split (Merge pull request !6548 from yihuaijie/dev) | 4 years ago |
| mindspore-ci-bot | dfe77372f5 | !6505 Set top graph parameters' name the same as original graph parameters (Merge pull request !6505 from 张清华/master2) | 4 years ago |
| Zhang Qinghua | 6c72d88ba1 | Set top graph parameters' name as original graph parameters | 4 years ago |
| guohongzilong | a754dea90c | delete SoftmaxCrossEntropyExpand | 4 years ago |
| mindspore-ci-bot | 5a20b11012 | !6502 [AutoParallel] Fix auto parallel find loss bug (Merge pull request !6502 from lichen/fix_auto_parallel_find_loss_bug) | 4 years ago |
| Yi Huaijie | 18ed2bec53 | implement parallel Split | 4 years ago |
| lichenever | d4bba3f1d2 | fix_auto_parallel_find_loss_bug | 4 years ago |
| mindspore-ci-bot | 4d54de100b | !6411 [Auto parallel] Add a flag to control whether to overwrite the right-node in triangle-elimination of DP algorithm (Merge pull request !6411 from Xiaoda/23-fix-the-triangle-elimination-problem) | 4 years ago |
| Xiaoda Zhang | 970490a6f0 | add a flag to control whether to overwrite the right-node in triangle elimination of the DP algorithm | 4 years ago |
| mindspore-ci-bot | d8d2a70cb3 | !6344 [AutoParallel] fix auto parallel multigraph bug (Merge pull request !6344 from lichen/fix_auto_parallel_mutigraph_bug) | 4 years ago |
| lichenever | 6b2a9de09f | fix auto parallel multigraph bug | 4 years ago |
| Yi Huaijie | e4cd67596f | raise RuntimeError when using full_batch under neither semi_auto_parallel nor auto_parallel | 4 years ago |
| Wan Hanyang | 0b7570eb53 | add model with loss, without loss, and O2 test cases | 5 years ago |
| Wan Hanyang | 2ceea1e59d | add a self attention test case | 5 years ago |
| Su Teng | 7b46f46a65 | remove unused test | 5 years ago |
| Yi Huaijie | eb83ea9607 | change internal API `_get_strategy()` to `_get_shard_strategy()` | 5 years ago |
| Yi Huaijie | a836d25c64 | change API set_strategy() to shard() | 5 years ago |
| mindspore-ci-bot | b40677002f | !5714 [refine] change top graph and add cell class (Merge pull request !5714 from vlne-v1/change-top-graph) | 5 years ago |
| Wei Luning | e6f82af849 | add cell class to C++ | 5 years ago |
| lichenever | f2d3fd34ce | rectification_allreduce_fusion_api | 5 years ago |
| yao_yf | d4cfe55c04 | rename mirror_mean to gradients_mean | 5 years ago |
| mindspore-ci-bot | 9018737e99 | !5696 [Auto parallel] Move 'multi-subgraphs' interface to internal (Merge pull request !5696 from Xiaoda/20-moving-multi-graph-interface-internal) | 5 years ago |
| mindspore-ci-bot | c064c01b6b | !5729 [AutoParallel] Add FuseBatchNormEx op (Merge pull request !5729 from lichen/add_batchnormex_op) | 5 years ago |
| mindspore-ci-bot | 7786adc3aa | !5722 fix semi auto parallel parameter of reshape has another user (Merge pull request !5722 from yao_yf/semi_auto_parallel_reshape_parameter_has_another_user) | 5 years ago |
| lichenever | d22f506431 | add BatchNormEx op | 5 years ago |
| yao_yf | 05c003ae6b | origin/semi_auto_parallel_reshape_parameter_has_another_user | 5 years ago |
| mindspore-ci-bot | fc79997de5 | !5502 Mod SoftmaxCrossEntropyWithLogits (Merge pull request !5502 from wanyiming/mod_SoftmaxCrossEntropyWithlogits) | 5 years ago |
| Xiaoda Zhang | 42f1241270 | move 'multi-subgraphs' interface to internal | 5 years ago |
| wanyiming | 0ec70068ae | mod_SoftmaxCrossEntropyWithLogits | 5 years ago |
| mindspore-ci-bot | 35e6cca1a3 | !5634 wrap numpy random seed into an api (Merge pull request !5634 from yihuaijie/master) | 5 years ago |
| Yi Huaijie | 4a5d115a66 | add get_seed() and set_seed() | 5 years ago |
| mindspore-ci-bot | ccc0ea60ee | !5661 fix auto parallel reshape strategy set when it is the first operator (Merge pull request !5661 from yao_yf/auto_parallel_reshape_fix) | 5 years ago |
| yao_yf | 755f381406 | fix auto parallel reshape strategy set when it is the first operator | 5 years ago |
| yao_yf | 8f7aa5bd5a | auto parallel context modify | 5 years ago |
| mindspore-ci-bot | be606ba8f5 | !5432 Mindspore parallel supports all element-wise operators (Merge pull request !5432 from yihuaijie/master) | 5 years ago |
| Yi Huaijie | 84948ca730 | parallel supports more element-wise operators | 5 years ago |
| mindspore-ci-bot | 414184c184 | !5367 Check the parameter's split strategies if it has multiple users (Merge pull request !5367 from yangzhenzhang/check-parameter-split) | 5 years ago |
| yao_yf | 07117e4dd4 | mv ParallelMode to context | 5 years ago |
| yangzhenzhang | fbda03bbcc | check parameter split | 5 years ago |
| mindspore-ci-bot | 66d6320b21 | !5224 Add test case about loss scale in parallel mode (Merge pull request !5224 from yangzhenzhang/add-split-sens-and-loss-scale-test-case) | 5 years ago |
| yangzhenzhang | 6ae5893681 | add test cases | 5 years ago |
| panyifeng | 1a54785fe2 | remove name arg from GradOperation | 5 years ago |
| mindspore-ci-bot | 7d4f481884 | !5017 remove internal interface in wide&deep (Merge pull request !5017 from yao_yf/wide_and_deep_no_internal_interface) | 5 years ago |
| mindspore-ci-bot | abe6b82138 | !5011 remove global grad ops (Merge pull request !5011 from riemann_penn/remove_global_grad_ops) | 5 years ago |
| yao_yf | a9a8e323b2 | remove internal interface in wide&deep | 5 years ago |
| mindspore-ci-bot | fc6eee3bda | !5019 raise RuntimeError when set different mode after Initializer created (Merge pull request !5019 from yihuaijie/dev) | 5 years ago |
| panyifeng | 637e812347 | remove global grad ops | 5 years ago |
| Yi Huaijie | 394be43492 | raise RuntimeError when set different mode after Initializer created | 5 years ago |
| Su Teng | e3ae23c939 | add parallel attention test | 5 years ago |
| mindspore-ci-bot | 3d06cbf987 | !4801 Must set or change parallel mode before any Initializer created (Merge pull request !4801 from yihuaijie/dev) | 5 years ago |
| Yi Huaijie | 89a4ebf8a1 | parallel mode must be set before creating an initializer | 5 years ago |
| mindspore-ci-bot | 9ee144ea40 | !4744 [AutoParallel] Support bert (Merge pull request !4744 from lichen/support_bert) | 5 years ago |
| lichenever | 221a801395 | auto parallel support bert | 5 years ago |
| yangzhenzhang | cda08f6a52 | concat 3 tensors in auto parallel mode | 5 years ago |
| mindspore-ci-bot | 2ae6365d77 | !4650 EmbeddingLookup support auto parallel (Merge pull request !4650 from yangzhenzhang/embedding-lookup-auto-parallel) | 5 years ago |
| yangzhenzhang | 6f6a8ae9f0 | embedding lookup auto parallel | 5 years ago |
| Yi Huaijie | 0f7ead5f14 | parameter slice init test all initializers | 5 years ago |
| yao_yf | cbb4363fa7 | remove to_full_tensor and load_inputs in execute stage | 5 years ago |
| yangzhenzhang | 14c77c9f03 | update field split | 5 years ago |
| mindspore-ci-bot | 2db0290c49 | !4356 Add validation for field split (Merge pull request !4356 from yangzhenzhang/update-field-split) | 5 years ago |
| yangzhenzhang | 4a0e6ff7fc | update field split | 5 years ago |
| yao_yf | e4de26d5bc | embeddinglookup wrap | 5 years ago |
| yangzhenzhang | f4bb43bbaf | add concat op | 5 years ago |
| lichenever | bfc96de1b9 | add dropout distributed op | 5 years ago |
| simson | 3617121ccf | revert modification of opt | 5 years ago |
| Xiaoda Zhang | d24a902afe | add a new graph operation in autoparallel | 5 years ago |
| mindspore-ci-bot | ab4c43007f | !3657 Add parallel operator for StridedSlice (Merge pull request !3657 from yangzhenzhang/add-stridedslice-op) | 5 years ago |
| Ziyan | 98e2ee90de | fix optimizer parallel problems | 5 years ago |
| yangzhenzhang | 9aa84b3d14 | add strided slice op | 5 years ago |
| mindspore-ci-bot | 16079e6356 | !3472 [Auto parallel] Cost model for GPU (Merge pull request !3472 from Xiaoda/13-add-gpu-costmodel) | 5 years ago |
| mindspore-ci-bot | d4165671d9 | !3435 Add parallel operator for Tile (Merge pull request !3435 from yangzhenzhang/add-tile-op) | 5 years ago |
| Xiaoda Zhang | 9097b36950 | add resnet50 test cases for GPU | 5 years ago |
| lirongzhen1 | 51796aa624 | fix sparse feature bug for auto parallel | 5 years ago |
| yangzhenzhang | 6a6e2bd271 | add tile op | 5 years ago |
| panyifeng | 963bd67a60 | add sparse api docs | 5 years ago |
| panyifeng | 8a89f003eb | fix sparse related issues | 5 years ago |
| mindspore-ci-bot | 684ff4f46b | !3160 Rewrite tensor's `__bool__` for pynative mode (Merge pull request !3160 from Simson/push-to-opensource) | 5 years ago |
| simson | 5f77fbdd75 | Rewrite tensor's `__bool__` for pynative mode | 5 years ago |
| mindspore-ci-bot | 7f1ccc5f3b | !3311 add sparse feature test cases for auto parallel (Merge pull request !3311 from lirongzhen1/master) | 5 years ago |
| mindspore-ci-bot | bc20de741a | !3315 restore reshape ut (Merge pull request !3315 from yao_yf/restore_auto_parallel_reshape_ut) | 5 years ago |
| yao_yf | 1d3a06a3b0 | recover reshape ut | 5 years ago |
| lirongzhen1 | 5d63c60135 | add sparse feature test cases for auto parallel | 5 years ago |
| wangnan39@huawei.com | 082433183d | unify learning_rate behavior of optimizers | 5 years ago |
| anzhengqi | 008b91b2a1 | inject epoch ctrl op into the execution tree and send EOS at the end of epoch | 5 years ago |
| liuxiao93 | 75881e5f2f | check input of BatchNorm is 4D | 5 years ago |
| wangnan39@huawei.com | 86889c59cb | adapt optimizers to IndexedSlices | 5 years ago |
| yao_yf | abebb2004b | remove 4 reshape ut | 5 years ago |
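
Several entries above track user-facing renames in MindSpore's parallel interfaces: `set_strategy()` became `shard()`, `mirror_mean` became `gradients_mean`, `ParallelMode` moved into `context`, and `get_seed()`/`set_seed()` were added. The snippet below is only a minimal sketch of how those renamed interfaces fit together, assuming a MindSpore build from roughly this period; it is not taken from any listed commit, and exact module paths may differ between releases.

```python
# Hypothetical usage sketch (not from the log above); assumes a MindSpore
# version in which the renamed interfaces already exist.
from mindspore import context
from mindspore.common import set_seed
from mindspore.ops import operations as P

# "add get_seed() and set_seed()": a single API wrapping the global random seed.
set_seed(1)

# "mv ParallelMode to context" and "rename mirror_mean to gradients_mean":
# parallel mode and gradient averaging are configured via the context module.
context.set_auto_parallel_context(
    parallel_mode=context.ParallelMode.SEMI_AUTO_PARALLEL,
    gradients_mean=True,
)

# "change API set_strategy() to shard()": attach a sharding strategy to an
# operator; each inner tuple describes how one input tensor is partitioned.
matmul = P.MatMul().shard(((2, 1), (1, 2)))
```

The strategy passed to `shard()` contains one inner tuple per operator input, with one integer per tensor dimension giving the number of slices along that dimension.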