Commit Graph

270 Commits (bdbdc291f592146e6fbc25806a50a6eb082bf0c1)

Author  SHA1  Message  Date
Xiaoda Zhang  fba2bfeb54  overwrite strategies for star graph structure  4 years ago
yangzhenzhang  fc4ed975c4  handle repeated calculation  4 years ago
yao_yf  022005b94a  fix bug cases in reshape redistribution  4 years ago
yao_yf  f60d81a15f  support reshape redistribution in all scenes  4 years ago
mindspore-ci-bot  fd0c03c493  !7090 implement parallel BroadcastTo  4 years ago
Yi Huaijie  45d373d40e  implement parallel BroadcastTo  4 years ago
Ziyan  ddc0113058  enable parallel optimizer in auto parallel  4 years ago
mindspore-ci-bot  58610443cb  !7023 modify EndOfSequence for multi-machine  4 years ago
wuweikang  a32811e160  modify EndOfSequence for multi-machine  4 years ago
mindspore-ci-bot  2a799fe90e  !6018 Set the number of epochs in model.train() non-sink mode  4 years ago
hesham  9cee0d2143  Add num_epochs to non-sink training  4 years ago
huangxinjing  4ef439e27b  Add stage information for ops and strategy  4 years ago
Yi Huaijie  6066b16838  implement parallel Pack  4 years ago
mindspore-ci-bot  9475f9a19a  !6548 Implement parallel Split  4 years ago
mindspore-ci-bot  dfe77372f5  !6505 Set top graph parameters' name the same as original graph parameters.  4 years ago
Zhang Qinghua  6c72d88ba1  Set top graph parameters' name the same as original graph parameters.  4 years ago
guohongzilong  a754dea90c  delete SoftmaxCrossEntropyExpand  4 years ago
mindspore-ci-bot  5a20b11012  !6502 [AutoParallel] Fix auto parallel find loss bug  4 years ago
Yi Huaijie  18ed2bec53  implement parallel Split  4 years ago
lichenever  d4bba3f1d2  fix_auto_parallel_find_loss_bug  4 years ago
mindspore-ci-bot  4d54de100b  !6411 [Auto parallel] Add a flag to control whether to overwrite the right-node in triangle-elimination of DP algorithm  4 years ago
Xiaoda Zhang  970490a6f0  add a flag to control whether to overwrite the right-node in triangle elimination of DP algorithm  4 years ago
mindspore-ci-bot  d8d2a70cb3  !6344 [AutoParallel] fix auto parallel multigraph bug  4 years ago
lichenever  6b2a9de09f  fix auto parallel multigraph bug  4 years ago
Yi Huaijie  e4cd67596f  raise RuntimeError when using full_batch under neither semi_auto_parallel nor auto_parallel  4 years ago
Wan Hanyang  0b7570eb53  add model with loss, without loss, and O2 test cases  5 years ago
Wan Hanyang  2ceea1e59d  add a self attention test case  5 years ago
Su Teng  7b46f46a65  remove unused test  5 years ago
Yi Huaijie  eb83ea9607  change internal API _get_strategy() to _get_shard_strategy()  5 years ago
Yi Huaijie  a836d25c64  change API set_strategy() to shard()  5 years ago
mindspore-ci-bot  b40677002f  !5714 [refine] change top graph and add cell class  5 years ago
Wei Luning  e6f82af849  add cell class to C++  5 years ago
lichenever  f2d3fd34ce  rectification_allreduce_fusion_api  5 years ago
yao_yf  d4cfe55c04  rename mirror_mean to gradients_mean  5 years ago
mindspore-ci-bot  9018737e99  !5696 [Auto parallel] Move 'multi-subgraphs' interface to internal  5 years ago
mindspore-ci-bot  c064c01b6b  !5729 [AutoParallel] Add FuseBatchNormEx op  5 years ago
mindspore-ci-bot  7786adc3aa  !5722 fix semi auto parallel parameter of reshape has another user  5 years ago
lichenever  d22f506431  add BatchNormEx op  5 years ago
yao_yf  05c003ae6b  origin/semi_auto_parallel_reshape_parameter_has_another_user  5 years ago
mindspore-ci-bot  fc79997de5  !5502 Mod SoftmaxCrossEntropyWithLogits  5 years ago
Xiaoda Zhang  42f1241270  move 'multi-subgraphs' to internal  5 years ago
wanyiming  0ec70068ae  mod_SoftmaxCrossEntropyWithLogits  5 years ago
mindspore-ci-bot  35e6cca1a3  !5634 wrap numpy random seed into an API  5 years ago
Yi Huaijie  4a5d115a66  add get_seed() and set_seed()  5 years ago
mindspore-ci-bot  ccc0ea60ee  !5661 fix auto parallel reshape strategy set when it is the first operator  5 years ago
yao_yf  755f381406  fix auto parallel reshape strategy set when it is the first operator  5 years ago
yao_yf  8f7aa5bd5a  modify auto parallel context  5 years ago
mindspore-ci-bot  be606ba8f5  !5432 MindSpore parallel supports all element-wise operators  5 years ago
Yi Huaijie  84948ca730  parallel supports more element-wise operators  5 years ago
mindspore-ci-bot  414184c184  !5367 Check the parameter's split strategies if it has multiple users  5 years ago
yao_yf  07117e4dd4  mv ParallelMode to context  5 years ago
yangzhenzhang  fbda03bbcc  check parameter split  5 years ago
mindspore-ci-bot  66d6320b21  !5224 Add test case about loss scale in parallel mode  5 years ago
yangzhenzhang  6ae5893681  add test cases  5 years ago
panyifeng  1a54785fe2  remove name arg from GradOperation  5 years ago
mindspore-ci-bot  7d4f481884  !5017 remove internal interface in wide&deep  5 years ago
mindspore-ci-bot  abe6b82138  !5011 remove global grad ops  5 years ago
yao_yf  a9a8e323b2  remove internal interface in wide&deep  5 years ago
mindspore-ci-bot  fc6eee3bda  !5019 raise RuntimeError when setting a different mode after Initializer created  5 years ago
panyifeng  637e812347  remove global grad ops  5 years ago
Yi Huaijie  394be43492  raise RuntimeError when setting a different mode after Initializer created  5 years ago
Su Teng  e3ae23c939  add parallel attention test  5 years ago
mindspore-ci-bot  3d06cbf987  !4801 Must set or change parallel mode before any Initializer is created  5 years ago
Yi Huaijie  89a4ebf8a1  parallel mode must be set before creating an initializer  5 years ago
mindspore-ci-bot  9ee144ea40  !4744 [AutoParallel] Support bert  5 years ago
lichenever  221a801395  auto parallel support bert  5 years ago
yangzhenzhang  cda08f6a52  concat 3 tensors in auto parallel mode  5 years ago
mindspore-ci-bot  2ae6365d77  !4650 EmbeddingLookup supports auto parallel  5 years ago
yangzhenzhang  6f6a8ae9f0  embedding lookup auto parallel  5 years ago
Yi Huaijie  0f7ead5f14  parameter slice init: test all initializers  5 years ago
yao_yf  cbb4363fa7  remove to_full_tensor and load_inputs in execute stage  5 years ago
yangzhenzhang  14c77c9f03  update field split  5 years ago
mindspore-ci-bot  2db0290c49  !4356 Add validation for field split  5 years ago
yangzhenzhang  4a0e6ff7fc  update field split  5 years ago
yao_yf  e4de26d5bc  EmbeddingLookup wrap  5 years ago
yangzhenzhang  f4bb43bbaf  add concat op  5 years ago
lichenever  bfc96de1b9  add dropout distributed op  5 years ago
simson  3617121ccf  revert modification of opt  5 years ago
Xiaoda Zhang  d24a902afe  add a new graph operation in auto parallel  5 years ago
mindspore-ci-bot  ab4c43007f  !3657 Add parallel operator for StridedSlice  5 years ago
Ziyan  98e2ee90de  fix optimizer parallel problems  5 years ago
yangzhenzhang  9aa84b3d14  add strided slice op  5 years ago
mindspore-ci-bot  16079e6356  !3472 [Auto parallel] Cost model for GPU  5 years ago
mindspore-ci-bot  d4165671d9  !3435 Add parallel operator for Tile  5 years ago
Xiaoda Zhang  9097b36950  add resnet50 test cases for GPU  5 years ago
lirongzhen1  51796aa624  fix sparse feature bug for auto parallel  5 years ago
yangzhenzhang  6a6e2bd271  add tile op  5 years ago
panyifeng  963bd67a60  add sparse API docs  5 years ago
panyifeng  8a89f003eb  fix sparse-related issues  5 years ago
mindspore-ci-bot  684ff4f46b  !3160 Rewrite tensor's __bool__ for pynative mode  5 years ago
simson  5f77fbdd75  Rewrite tensor's __bool__ for pynative mode  5 years ago
mindspore-ci-bot  7f1ccc5f3b  !3311 add sparse feature test cases for auto parallel  5 years ago
mindspore-ci-bot  bc20de741a  !3315 restore reshape UT  5 years ago
yao_yf  1d3a06a3b0  recover reshape UT  5 years ago
lirongzhen1  5d63c60135  add sparse feature test cases for auto parallel  5 years ago
wangnan39@huawei.com  082433183d  unify learning_rate behavior of optimizers  5 years ago
anzhengqi  008b91b2a1  inject epoch ctrl op into the execution tree and send EOS at the end of epoch  5 years ago
liuxiao93  75881e5f2f  check that the input of BatchNorm is 4D  5 years ago
wangnan39@huawei.com  86889c59cb  adapt optimizer to IndexedSlices  5 years ago
yao_yf  abebb2004b  remove 4 reshape UTs  5 years ago
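
Several commits in this log rename or relocate user-facing parallel APIs: set_strategy() becomes shard(), mirror_mean becomes gradients_mean, ParallelMode moves into context, and get_seed()/set_seed() wrap the numpy random seed. Below is a minimal sketch of how those renamed pieces fit together after these changes; the 8-device layout and the strategy numbers are illustrative assumptions, not taken from the log.

```python
# Minimal sketch of the renamed parallel APIs referenced in the commits above.
# The 8-device layout and the (2, 1)/(1, 4) strategy are assumed for
# illustration; a real run also needs distributed communication init.
import mindspore as ms
from mindspore import context, ops

ms.set_seed(1)  # seed API from "add get_seed() and set_seed()"

# "mv ParallelMode to context" and "rename mirror_mean to gradients_mean"
context.set_auto_parallel_context(
    parallel_mode=context.ParallelMode.SEMI_AUTO_PARALLEL,
    gradients_mean=True,  # formerly mirror_mean
    device_num=8,
)

# "change API set_strategy() to shard()": one strategy tuple per operator
# input. The first MatMul input is split into 2 row blocks, the second into
# 4 column blocks (2 * 4 = 8 devices in total).
matmul = ops.MatMul()
matmul.shard(((2, 1), (1, 4)))
```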