Commit Graph

99 Commits (59c4f31fc04c7019ff96a80442f06fd2f98f67a7)

Author  SHA1  Message  Date
mindspore-ci-bot  b045f47428  !3983 Add ReduceMin fission pass  5 years ago
huanghui  30000fdb52  add ReduceMin fission pass  5 years ago
liubuyu  d81862a916  decoupling core and context  5 years ago
Wei Luning  a05c38bb63  make python Parameter inherit from Tensor  5 years ago
WilliamLian  0179724dcd  split unsupported transdata into two transdata ops: special format -> default, then default -> special  5 years ago
liubuyu  f4bc0bc9fe  move the dependency of utils to core  5 years ago
chenfei  1f1a07e645  don't insert assign from condition to true branch of while  5 years ago
root  1b6f85dec8  split tuple parameter to parameters  5 years ago
yujianfeng  4d18e9ec35  Fix internal multiple outputs check  5 years ago
huanghui  f1563d2d37  insert memcpy async if hccl op cascade  5 years ago
mindspore-ci-bot  6f8863b65d  !3198 synchronize latest Ascend software suite 18 Jul 2020, and merging branches  5 years ago
yanghaoran  859acc6d2a  synchronize latest Ascend software suite 18 Jul 2020, and merging branches  5 years ago
yujianfeng  fa0684d12d  Add pack and concat fission pass  5 years ago
yujianfeng  188d74f15e  Remove transdata and cast for internal outputs  5 years ago
changzherui  f4cb445ea8  sync code for 0715  5 years ago
laiyongqiang  68c78ab6bb  reuse communication op output's memory  5 years ago
liubuyu  43c79eb853  mindspore path adjust  5 years ago
huanghui  3eaf663545  add tensor scatter update fission pass  5 years ago
yujianfeng  24f6b9d77e  Add input2output pass  5 years ago
He Wei  43e0967024  Decouple ir::Tensor class from python  5 years ago
gong chen  a6dfa281ea  Init GraphKernel.  5 years ago
yujianfeng  7ad877a948  Add Split fission pass  5 years ago
yujianfeng  f15cb6b7c9  Add sort by index for each group of AllReduce  5 years ago
mindspore-ci-bot  971f10d222  !1790 remove transdata only connected with control depend  5 years ago
WilliamLian  b86016a26f  remove the useless transdata and cast of control depend node  5 years ago
huanghui  4acb61d59d  code review fix for buffer fusion  5 years ago
huanghui  118496b3ec  enhance insert memcpy  5 years ago
WilliamLian  9808e47663  change checkAicpu to CheckAICPU & add a Scalar check function to check whether the input or output is a scalar  5 years ago
huanghui  b4c0ed4b36  add single batchnorm fission pass  5 years ago
chujinjin  7465abc798  optimize transdata for pynative  5 years ago
mindspore-ci-bot  c51d90d84e  !1767 Move LayerNormGrad split pass ahead of kernel select  5 years ago
yujianfeng  e87ac6525e  Add batch norm fusion pattern for mix precision  5 years ago
huanghui  cf87218fb7  place layernormgrad split pass before kernel select  5 years ago
huanghui  d1cec14a0c  add 2 patterns for softmaxgradext fusion pass  5 years ago
mindspore-ci-bot  cc9c004bc1  !1696 Enable ConfusionMulGrad fusion pass  5 years ago
mindspore-ci-bot  59683a1d90  !1692 Fix topk bug for fasterrcnn  5 years ago
huanghui  71acaa5300  enable ConfusionMulGrad fusion pass in bert only  5 years ago
mindspore-ci-bot  387bcece9e  !1666 Add 5 patterns for AdamApplyOneWithDecay fusion pass  5 years ago
huanghui  ff05aa1faa  add 5 new patterns for AdamApplyOneWithDecayRule fusion pass  5 years ago
yujianfeng  e6f1cfa581  Check the input size of BatchNorm before fission in bert  5 years ago
meixiaowei  1778ec0135  fix topk bug  5 years ago
huanghui  99ca6b3e80  add SoftmaxGradExt fusion pass  5 years ago
mindspore-ci-bot  04398cf88e  !1433 add tensor_minnie and separate py from ir  5 years ago
leopz  4508134ceb  add tensor_minnie and separate py from ir  5 years ago
mindspore-ci-bot  c086d91aaf  !1505 Add some checks in ConstToAttr[StridedSliceGrad] pass  5 years ago
yujianfeng  ee087bdf60  Check the size of topk input names before converting input to attr  5 years ago
huanghui  1d65ae598a  extract const_to_attr_strided_slice_grad pass  5 years ago
yangjie159  cbf5390b34  refactor memreuse allocator  5 years ago
mindspore-ci-bot  2215e3267f  !1419 remove old buffer fusion pass  5 years ago
etone-chan  42d724d8b4  remove old buffer fusion pass  5 years ago