Commit Graph

236 Commits (b8abcf858a44a61c978930e9ee6328e1a3ce8dee)

Author  SHA1  Message  (Date)

mindspore-ci-bot  9018737e99  !5696 [Auto parallel] Move 'multi-subgraphs' interface to internal  (5 years ago)
mindspore-ci-bot  c064c01b6b  !5729 [AutoParallel]Add FuseBatchNormEx op  (5 years ago)
mindspore-ci-bot  7786adc3aa  !5722 fix semi auto parallel parameter of reshape has another user  (5 years ago)
lichenever  d22f506431  add BatchNormEx op  (5 years ago)
yao_yf  05c003ae6b  origin/semi_auto_parallel_reshape_parameter_has_another_user  (5 years ago)
mindspore-ci-bot  fc79997de5  !5502 Mod SoftmaxCrossEntropyWithlogits  (5 years ago)
Xiaoda Zhang  42f1241270  remove 'multi-subgraphs' to internal  (5 years ago)
wanyiming  0ec70068ae  mod_SoftmaxCrossEntropyWithLogits  (5 years ago)
mindspore-ci-bot  35e6cca1a3  !5634 wrap numpy random seed into an api  (5 years ago)
Yi Huaijie  4a5d115a66  add get_seed() and set_seed()  (5 years ago)
mindspore-ci-bot  ccc0ea60ee  !5661 fix auto parallel reshape strategy set when it is first operator  (5 years ago)
yao_yf  755f381406  fix auto parallel reshape strategy set when it is first operator  (5 years ago)
yao_yf  8f7aa5bd5a  auto parallel context modify  (5 years ago)
mindspore-ci-bot  be606ba8f5  !5432 Mindspore parallel supports all elementary-wise operators  (5 years ago)
Yi Huaijie  84948ca730  parallel supports more elementary-wise operators  (5 years ago)
mindspore-ci-bot  414184c184  !5367 Check the parameter's split strategies if it has multiple users  (5 years ago)
yao_yf  07117e4dd4  mv ParallelMode to context  (5 years ago)
yangzhenzhang  fbda03bbcc  check parameter split  (5 years ago)
mindspore-ci-bot  66d6320b21  !5224 Add test case about loss scale in parallel mode  (5 years ago)
yangzhenzhang  6ae5893681  add test cases  (5 years ago)
panyifeng  1a54785fe2  remove name arg from gradoperation  (5 years ago)
mindspore-ci-bot  7d4f481884  !5017 remove internal interface in wide&deep  (5 years ago)
mindspore-ci-bot  abe6b82138  !5011 remove global grad ops  (5 years ago)
yao_yf  a9a8e323b2  remove internal interface in wide&deep  (5 years ago)
mindspore-ci-bot  fc6eee3bda  !5019 raise RuntimeError when set different mode after Initializer created  (5 years ago)
panyifeng  637e812347  remove global grad ops  (5 years ago)
Yi Huaijie  394be43492  raise RuntimeError when set different mode after Initializer created  (5 years ago)
Su Teng  e3ae23c939  add parallel attention test  (5 years ago)
mindspore-ci-bot  3d06cbf987  !4801 Must set or change parallel mode before any Initializer created  (5 years ago)
Yi Huaijie  89a4ebf8a1  parallel mode must be set before create an initializer  (5 years ago)
mindspore-ci-bot  9ee144ea40  !4744 [AutoParallel]Support bert  (5 years ago)
lichenever  221a801395  auto parallel support bert  (5 years ago)
yangzhenzhang  cda08f6a52  concat 3 tensors in auto parallel mode  (5 years ago)
mindspore-ci-bot  2ae6365d77  !4650 EmbeddingLookup support auto parallel  (5 years ago)
yangzhenzhang  6f6a8ae9f0  embedding lookup auto parallel  (5 years ago)
Yi Huaijie  0f7ead5f14  parameter slice init test all initializers  (5 years ago)
yao_yf  cbb4363fa7  remove to_full_tensor and load_inputs in exexute stage  (5 years ago)
yangzhenzhang  14c77c9f03  update field split  (5 years ago)
mindspore-ci-bot  2db0290c49  !4356 Add validation for field split  (5 years ago)
yangzhenzhang  4a0e6ff7fc  update field split  (5 years ago)
yao_yf  e4de26d5bc  embeddinglookup wrap  (5 years ago)
yangzhenzhang  f4bb43bbaf  add concat op  (5 years ago)
lichenever  bfc96de1b9  add dropout distributed op  (5 years ago)
simson  3617121ccf  revert modification of opt  (5 years ago)
Xiaoda Zhang  d24a902afe  add a new graph operation in autoparallel  (5 years ago)
mindspore-ci-bot  ab4c43007f  !3657 Add parallel operator for StridedSlice  (5 years ago)
Ziyan  98e2ee90de  fix optimizer parallel problems  (5 years ago)
yangzhenzhang  9aa84b3d14  add strided slice op  (5 years ago)
mindspore-ci-bot  16079e6356  !3472 [Auto parallel] Cost model for GPU  (5 years ago)
mindspore-ci-bot  d4165671d9  !3435 Add parallel operator for Tile  (5 years ago)
Xiaoda Zhang  9097b36950  add resnet50 testcases for gpu  (5 years ago)
lirongzhen1  51796aa624  fix sparse feature bug for auto parallel  (5 years ago)
yangzhenzhang  6a6e2bd271  add tile op  (5 years ago)
panyifeng  963bd67a60  add sparse api docs  (5 years ago)
panyifeng  8a89f003eb  fix sparse related issues  (5 years ago)
mindspore-ci-bot  684ff4f46b  !3160 Rewrite tensor's __bool__ for pynative mode  (5 years ago)
simson  5f77fbdd75  Rewrite tensor's __bool__ for pynative mode  (5 years ago)
mindspore-ci-bot  7f1ccc5f3b  !3311 add sparse feature test cases for auto parallel  (5 years ago)
mindspore-ci-bot  bc20de741a  !3315 restore reshape ut  (5 years ago)
yao_yf  1d3a06a3b0  recover reshape ut  (5 years ago)
lirongzhen1  5d63c60135  add sparse feature test cases for auto parallel  (5 years ago)
wangnan39@huawei.com  082433183d  uniform learning_rate behavior of optimizers  (5 years ago)
anzhengqi  008b91b2a1  inject epoch ctrl op in the execution tree and send eos at the end of epoch  (5 years ago)
liuxiao93  75881e5f2f  check input of BatchNorm is 4D.  (5 years ago)
wangnan39@huawei.com  86889c59cb  optimizer adapt IndexedSlices  (5 years ago)
yao_yf  abebb2004b  remove 4 reshape ut  (5 years ago)
mindspore-ci-bot  edec821c50  !2876 set reshape operator no redistribution for auto parallel  (5 years ago)
mindspore-ci-bot  74bbfa3cf6  !3095 modify the limit of loss scale  (5 years ago)
lirongzhen1  c1eba79b83  set reshape redistribution strategy attribute to no redistribution  (5 years ago)
simson  177e18f3f4  modify the limit of loss scale  (5 years ago)
Ziyan  39f08eb7dd  enable optimizer parallel  (5 years ago)
lichenever  cde5cc2bd2  add_embedding_look_up  (5 years ago)
Xiaoda Zhang  fc906f7f58  move embeddinglookup to external  (5 years ago)
Yi Huaijie  cae254f4df  asymmetric row split support for GatherV2  (5 years ago)
jinyaohui  dd5fba1db9  add notice  (5 years ago)
Ziyan  0925e35252  enable optimizer parallel with broadcast  (5 years ago)
Ziyan  41ddc153a6  modify lars interface  (5 years ago)
Xiaoda Zhang  3ff6e336c6  check cast from optimizer in auto-parallel  (5 years ago)
mindspore-ci-bot  c0fe8c0322  !2273 [AutoParallel]update EmbeddingLookUp op  (5 years ago)
lichenever  563622874a  update  (5 years ago)
Xiaoda Zhang  69574f3823  fix the bprob error of embeddinglookup  (5 years ago)
Xiaoda Zhang  55e7d9d2b8  move embeddinglookup to the internal  (5 years ago)
Xiaoda Zhang  20d2012a0e  implementing the backward of embeddinglookup  (5 years ago)
lichenever  e0e055a0b8  add sparse gatherv2  (5 years ago)
mindspore-ci-bot  5b0472683c  !1737 sparse feature backpropagation  (5 years ago)
Yi Huaijie  e5c351690b  support load full dataset on each device  (5 years ago)
lirongzhen1  516b56cb64  sparse feature bp  (5 years ago)
Xiaoda Zhang  1cfb52bc0e  add the reshape part of the embeddinglookup backward operator  (5 years ago)
lichenever  1437966c98  gatherv2_support_host_and_device  (5 years ago)
Xiaoda Zhang  55ef468a7f  fix the embeddinglookup bug  (5 years ago)
Yi Huaijie  d7e1ab1445  clean pylint warnings  (5 years ago)
jinyaohui  86d197dfeb  clean pylint  (5 years ago)
mindspore-ci-bot  8316f736ea  !1650 Support forward reduce scatter for Matmul  (5 years ago)
mindspore-ci-bot  57874cd61f  !1372 [Auto parallel] Add a new primitive EmbeddingLookup  (5 years ago)
yangzhenzhang  19bd830539  support forward reduce scatter for matmul  (5 years ago)
mindspore-ci-bot  f523a0f83c  !1600 [AutoParallel]Fix GatherV2 bug  (5 years ago)
Xiaoda Zhang  4154adf196  add embedinglookup primitive  (5 years ago)
lichenever  c223fde566  fix auto parallel gatherv2 bug  (5 years ago)
yangzhenzhang  1413f520d7  support reshape optimized  (5 years ago)
mindspore-ci-bot  0fe3ff2761  !1506 Setting auto parallel flag for dataset wrap cell  (5 years ago)