Commit Graph

99 Commits (fac6c56db567565b36d3bf066ca9691fe3147bec)

| Author | SHA1 | Message | Date |
|---|---|---|---|
| yao_yf | 5a6540450e | use param name as the key of strategy checkpoint | 5 years ago |
| mindspore-ci-bot | 21d936e656 | !728 auto parallel strategy checkpoint full | 5 years ago |
| yao_yf | 6cde5f6d91 | auto parallel strategy checkpoint | 5 years ago |
| zjun | c538b83712 | remove enbale hccl | 5 years ago |
| mindspore-ci-bot | afbd24cb78 | !718 Fix dtype judge sentence in infer_dtype function of hcom operations | 5 years ago |
| zhouyuanshen | c046874b03 | fix bug in infer_dtype function of hcom operations | 5 years ago |
| Xiaoda Zhang | e227415673 | support-the-multiple-subgraphs-in-the-ANF | 5 years ago |
| yangzhenzhang | 4750861054 | fix layernorm bug | 5 years ago |
| yangzhenzhang | 36a62576e8 | support forward graph | 5 years ago |
| Xiaoda Zhang | 3ff9e54734 | add the resnet50 32k-8p testcase | 5 years ago |
| lirongzhen1 | 56f785f7e6 | add context configration | 5 years ago |
| mindspore-ci-bot | ce71c17933 | !645 auto parallel prelu operator support broadcast | 5 years ago |
| mindspore-ci-bot | 84d5e4f923 | !643 [AutoParallel]Support reshape parameter | 5 years ago |
| mindspore-ci-bot | 00859ae119 | !586 enable/disable allreduce_fusion | 5 years ago |
| lichenever | 2ab211ae04 | support reshape parameter | 5 years ago |
| yao_yf | 425276d43d | auto parallel prelu support prelu | 5 years ago |
| Xiaoda Zhang | dfde76af88 | delete the 'simplify_cal' attribute in 'set_algo_parameters' and 'get_algo_parameters' interface | 5 years ago |
| lirongzhen | 4ff418084c | enable/disable allreduce_fusion | 5 years ago |
| lichenever | c78630d737 | support multiple subgraphs | 5 years ago |
| Ziyan | 0d208e00bd | Model ALLTOALL as a single operator in cost model; scale the ALLTOALL, | 5 years ago |
| yangzhenzhang | 36ffb66782 | add parallel op for square | 5 years ago |
| yangzhenzhang | 57cd9f8188 | add parallel op for sigmoidloss | 5 years ago |
| yangzhenzhang | 6d522f0a4f | add parallel op for layernorm | 5 years ago |
| Xiaoda Zhang | ffb2cb03a4 | Change 'NOT_FULLY_USE_DEVICES' to 'FULLY_USE_DEVICES' and make ALL-1 user-specified-strategy valid in auto-parallel | 5 years ago |
| lichenever | b81cc6ea4f | add minimum distributed op | 5 years ago |
| mindspore-ci-bot | 7bc2cee318 | !167 add_squeeze_distributed_op | 5 years ago |
| c00425699 | c8cdb6b331 | support distributed GatherV2 operator | 5 years ago |
| buxue | 5841fe010e | Support pow's second input could be tensor and fix bug in bprop of pow | 5 years ago |
| lichenever | 32cd280c1a | add squeeze distributed op | 5 years ago |
| yangzhenzhang | b34c0e7a17 | add parallel op for dropoutdomask | 5 years ago |
| yao_yf | b5e3fa9593 | fix auto parallel prelu | 5 years ago |
| yangzhenzhang | dd0d4e6b84 | add parallel ops for expand dims | 5 years ago |
| mindspore-ci-bot | a5a904fbdf | !91 fix bug for allreduce fusion and add resnet unit test | 5 years ago |
| mindspore-ci-bot | 55916351ee | !52 remove ge depend | 5 years ago |
| Wei Luning | 73ba399364 | remove ge depend in cpu | 5 years ago |
| c00425699 | ab917a734d | fix bug for allreduce fusion and add resnet unit test | 5 years ago |
| lichenever | 5240b1f603 | fix refkey bug for auto parallel | 5 years ago |
| mindspore-ci-bot | a47046652a | !76 [Auto parallel] Refining the strategy_checking for resnset50 | 5 years ago |
| mindspore-ci-bot | 22a9c00bcd | !57 Add parallel operators for Neg and BatchMatMul | 5 years ago |
| Xiaoda Zhang | fb6eed23ae | refining strategy-checking for resnet50 | 5 years ago |
| mindspore-ci-bot | 87040483ee | !58 fix two cast bug in auto parallel | 5 years ago |
| yangzhenzhang | 110640e2ad | add parallel ops for neg and batchmatmul | 5 years ago |
| mindspore-ci-bot | e2df848597 | !55 modify long time python ut | 5 years ago |
| lichenever | 2da38ad401 | fix two cast bug in auto parallel | 5 years ago |
| chang zherui | eaf7146d46 | modify longtime python ut | 5 years ago |
| lichenever | f946aea10d | fix grpah mode loop sink bug in auto parallel | 5 years ago |
| leonwanghui | 976af212e9 | Revert 'Pull Request !17 : [AutoParallel]Fix bug in the case of two cast' | 5 years ago |
| lichenever | b4d34973bc | fix_cast_bug | 5 years ago |
| zhunaipan | 930a1fb0a8 | initial version | 5 years ago |