Commit Graph

256 commits reachable from 22d683a805f3e0abdda83724238510a1cefc8cd9; the 150 most recent are listed below.

(Every commit below is dated "5 years ago"; the repeated per-commit date column is therefore omitted.)

Author                SHA1        Message
mindspore-ci-bot      9ee144ea40  !4744 [AutoParallel]Support bert
lichenever            221a801395  auto parallel support bert
yangzhenzhang         cda08f6a52  concat 3 tensors in auto parallel mode
mindspore-ci-bot      2ae6365d77  !4650 EmbeddingLookup support auto parallel
yangzhenzhang         6f6a8ae9f0  embedding lookup auto parallel
Yi Huaijie            0f7ead5f14  parameter slice init test all initializers
yao_yf                cbb4363fa7  remove to_full_tensor and load_inputs in exexute stage
yangzhenzhang         14c77c9f03  update field split
mindspore-ci-bot      2db0290c49  !4356 Add validation for field split
yangzhenzhang         4a0e6ff7fc  update field split
yao_yf                e4de26d5bc  embeddinglookup wrap
yangzhenzhang         f4bb43bbaf  add concat op
lichenever            bfc96de1b9  add dropout distributed op
simson                3617121ccf  revert modification of opt
Xiaoda Zhang          d24a902afe  add a new graph operation in autoparallel
mindspore-ci-bot      ab4c43007f  !3657 Add parallel operator for StridedSlice
Ziyan                 98e2ee90de  fix optimizer parallel problems
yangzhenzhang         9aa84b3d14  add strided slice op
mindspore-ci-bot      16079e6356  !3472 [Auto parallel] Cost model for GPU
mindspore-ci-bot      d4165671d9  !3435 Add parallel operator for Tile
Xiaoda Zhang          9097b36950  add resnet50 testcases for gpu
lirongzhen1           51796aa624  fix sparse feature bug for auto parallel
yangzhenzhang         6a6e2bd271  add tile op
panyifeng             963bd67a60  add sparse api docs
panyifeng             8a89f003eb  fix sparse related issues
mindspore-ci-bot      684ff4f46b  !3160 Rewrite tensor's __bool__ for pynative mode
simson                5f77fbdd75  Rewrite tensor's __bool__ for pynative mode
mindspore-ci-bot      7f1ccc5f3b  !3311 add sparse feature test cases for auto parallel
mindspore-ci-bot      bc20de741a  !3315 restore reshape ut
yao_yf                1d3a06a3b0  recover reshape ut
lirongzhen1           5d63c60135  add sparse feature test cases for auto parallel
wangnan39@huawei.com  082433183d  uniform learning_rate behavior of optimizers
anzhengqi             008b91b2a1  inject epoch ctrl op in the execution tree and send eos at the end of epoch
liuxiao93             75881e5f2f  check input of BatchNorm is 4D.
wangnan39@huawei.com  86889c59cb  optimizer adapt IndexedSlices
yao_yf                abebb2004b  remove 4 reshape ut
mindspore-ci-bot      edec821c50  !2876 set reshape operator no redistribution for auto parallel
mindspore-ci-bot      74bbfa3cf6  !3095 modify the limit of loss scale
lirongzhen1           c1eba79b83  set reshape redistribution strategy attribute to no redistribution
simson                177e18f3f4  modify the limit of loss scale
Ziyan                 39f08eb7dd  enable optimizer parallel
lichenever            cde5cc2bd2  add_embedding_look_up
Xiaoda Zhang          fc906f7f58  move embeddinglookup to external
Yi Huaijie            cae254f4df  asymmetric row split support for GatherV2
jinyaohui             dd5fba1db9  add notice
Ziyan                 0925e35252  enable optimizer parallel with broadcast
Ziyan                 41ddc153a6  modify lars interface
Xiaoda Zhang          3ff6e336c6  check cast from optimizer in auto-parallel
mindspore-ci-bot      c0fe8c0322  !2273 [AutoParallel]update EmbeddingLookUp op
lichenever            563622874a  update
Xiaoda Zhang          69574f3823  fix the bprob error of embeddinglookup
Xiaoda Zhang          55e7d9d2b8  move embeddinglookup to the internal
Xiaoda Zhang          20d2012a0e  implementing the backward of embeddinglookup
lichenever            e0e055a0b8  add sparse gatherv2
mindspore-ci-bot      5b0472683c  !1737 sparse feature backpropagation
Yi Huaijie            e5c351690b  support load full dataset on each device
lirongzhen1           516b56cb64  sparse feature bp
Xiaoda Zhang          1cfb52bc0e  add the reshape part of the embeddinglookup backward operator
lichenever            1437966c98  gatherv2_support_host_and_device
Xiaoda Zhang          55ef468a7f  fix the embeddinglookup bug
Yi Huaijie            d7e1ab1445  clean pylint warnings
jinyaohui             86d197dfeb  clean pylint
mindspore-ci-bot      8316f736ea  !1650 Support forward reduce scatter for Matmul
mindspore-ci-bot      57874cd61f  !1372 [Auto parallel] Add a new primitive EmbeddingLookup
yangzhenzhang         19bd830539  support forward reduce scatter for matmul
mindspore-ci-bot      f523a0f83c  !1600 [AutoParallel]Fix GatherV2 bug
Xiaoda Zhang          4154adf196  add embedinglookup primitive
lichenever            c223fde566  fix auto parallel gatherv2 bug
yangzhenzhang         1413f520d7  support reshape optimized
mindspore-ci-bot      0fe3ff2761  !1506 Setting auto parallel flag for dataset wrap cell
yangzhenzhang         2f8516e5d7  set auto parallel for dataset warp cell
yao_yf                96c9569dca  fix reshape reshape case
yangzhenzhang         7c237620ba  add sigmoid op
Yi Huaijie            ac484ebbc0  remove unused variable
Yi Huaijie            0b137e5312  delete unused arguments in test cases
Yi Huaijie            8cfc05e4cf  clean pylint warnings of parallel test cases
mindspore-ci-bot      e87e6b38b0  !1355 [AutoParallel]Fix GatherV2 distributed op
lichenever            390a86effb  fix gatherv2
Yi Huaijie            1e6ee83874  delete parallel end-to-end test cases
Yi Huaijie            14fe72f383  fix pylint warnings
jinyaohui             fbdba6e4da  clean pylint
Yi Huaijie            7a5004cc49  init the slices of a Initialzer on different devices
mindspore-ci-bot      1b7cdc4c61  !1218 clean pylint warning in test about import
jinyaohui             5a914994ba  clean pylint
yangzhenzhang         6b54a6417d  ckpt and restore parameter shape
jinyaohui             bcfaff97f9  clean pylint
jinyaohui             2907cf4488  remove some context param
candanzg              2cc85bdc93  Support weight compile according to shape
mindspore-ci-bot      0345995000  !1110 [AutoParallel]fix gatherv2 and dataset bug
lichenever            debfd38b75  fix gatherv2 and dataset bug
jinyaohui             391a060f21  remove two context param
mindspore-ci-bot      48e54dcfda  !1056 [Auto parallel] Memory calculation in the inference phase
mindspore-ci-bot      fdad91355f  !1093 reset parallel context before running each parallel test case
Xiaoda Zhang          a05aa21cc2  calculating PEAK memory cost in the inference phase
mindspore-ci-bot      06a9eeb3bf  !1050 [bug][auto_mixed_precision]fix sens shape error of `TrainOneStepWithLossScaleCell`
Yi Huaijie            462f30e077  reset auto parallel context before each test
Wei Luning            3e89f7baaa  fix sens shape error in loss_scale one_step_wrap
mindspore-ci-bot      2af6ee2482  !1054 Add slice shape for param info
mindspore-ci-bot      57085bb18d  !996 [Auto parallel] Supporting VirtualDataset having three inputs
yangzhenzhang         05fde3d23d  add slice shape for param info
yao_yf                716def7c0a  move InferStraByTensorInfo to tensor_info.h
mindspore-ci-bot      dd2062bf8d  !1023 add_gatherv2_distributed_op
lichenever            19a24b86ac  add gatherv2 distributed op
yao_yf                f0bf438a55  reshape strategy search
Xiaoda Zhang          5e041966f1  add a new vritualdataset cell for three inputs
yangzhenzhang         8c9730b3c5  add parallel mode for cell
Xiaoda Zhang          def8573275  implementing-searching-strategy-for-inference
yao_yf                5a6540450e  use param name as the key of strategy checkpoint
mindspore-ci-bot      21d936e656  !728 auto parallel strategy checkpoint full
yao_yf                6cde5f6d91  auto parallel strategy checkpoint
zjun                  c538b83712  remove enbale hccl
mindspore-ci-bot      afbd24cb78  !718 Fix dtype judge sentence in infer_dtype function of hcom operations
zhouyuanshen          c046874b03  fix bug in infer_dtype function of hcom operations
Xiaoda Zhang          e227415673  support-the-multiple-subgraphs-in-the-ANF
yangzhenzhang         4750861054  fix layernorm bug
yangzhenzhang         36a62576e8  support forward graph
Xiaoda Zhang          3ff9e54734  add the resnet50 32k-8p testcase
lirongzhen1           56f785f7e6  add context configration
mindspore-ci-bot      ce71c17933  !645 auto parallel prelu operator support broadcast
mindspore-ci-bot      84d5e4f923  !643 [AutoParallel]Support reshape parameter
mindspore-ci-bot      00859ae119  !586 enable/disable allreduce_fusion
lichenever            2ab211ae04  support reshape parameter
yao_yf                425276d43d  auto parallel prelu support prelu
Xiaoda Zhang          dfde76af88  delete the 'simplify_cal' attribute in 'set_algo_parameters' and 'get_algo_parameters' interface
lirongzhen            4ff418084c  enable/disable allreduce_fusion
lichenever            c78630d737  support multiple subgraphs
Ziyan                 0d208e00bd  Model ALLTOALL as a single operator in cost model; scale the ALLTOALL,
yangzhenzhang         36ffb66782  add parallel op for square
yangzhenzhang         57cd9f8188  add parallel op for sigmoidloss
yangzhenzhang         6d522f0a4f  add parallel op for layernorm
Xiaoda Zhang          ffb2cb03a4  Change 'NOT_FULLY_USE_DEVICES' to 'FULLY_USE_DEVICES' and make ALL-1 user-specified-strategy valid in auto-parallel
lichenever            b81cc6ea4f  add minimum distributed op
mindspore-ci-bot      7bc2cee318  !167 add_squeeze_distributed_op
c00425699             c8cdb6b331  support distributed GatherV2 operator
buxue                 5841fe010e  Support pow's second input could be tensor and fix bug in bprop of pow
lichenever            32cd280c1a  add squeeze distributed op
yangzhenzhang         b34c0e7a17  add parallel op for dropoutdomask
yao_yf                b5e3fa9593  fix auto parallel prelu
yangzhenzhang         dd0d4e6b84  add parallel ops for expand dims
mindspore-ci-bot      a5a904fbdf  !91 fix bug for allreduce fusion and add resnet unit test
mindspore-ci-bot      55916351ee  !52 remove ge depend
Wei Luning            73ba399364  remove ge depend in cpu
c00425699             ab917a734d  fix bug for allreduce fusion and add resnet unit test
lichenever            5240b1f603  fix refkey bug for auto parallel
mindspore-ci-bot      a47046652a  !76 [Auto parallel] Refining the strategy_checking for resnset50
mindspore-ci-bot      22a9c00bcd  !57 Add parallel operators for Neg and BatchMatMul
Xiaoda Zhang          fb6eed23ae  refining strategy-checking for resnet50
mindspore-ci-bot      87040483ee  !58 fix two cast bug in auto parallel
yangzhenzhang         110640e2ad  add parallel ops for neg and batchmatmul
mindspore-ci-bot      e2df848597  !55 modify long time python ut
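For reference, a count-and-listing like the one above can be reproduced locally with standard Git commands. The sketch below builds a throwaway repository (the `demo` author name and the commit messages are placeholders, not taken from this history) and then prints the commit count and a one-line-per-commit log:

```shell
#!/bin/sh
# Build a throwaway repo so the log commands below have something to print.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "first commit"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "second commit"

# Total commits reachable from HEAD -- the "256 commits (<sha>)" figure:
git rev-list --count HEAD

# One line per commit, newest first: abbreviated SHA, author,
# relative date (e.g. "5 years ago"), and subject:
git log --pretty=format:'%h %an %ar %s'
```

On the real repository, pointing `git rev-list --count` and `git log` at the head commit SHA from the header instead of `HEAD` would yield the same figures shown on the page.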