Commit Graph

27 Commits (e06dfaa80d8691d40d209af2509969933ff67dcb)

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| Yi Huaijie | 80bdcab982 | temporarily cast between int64 and int32 to wait ME support int64 | 5 years ago |
| Yi Huaijie | 518cb80133 | change type of Shape from int32 to int64 | 5 years ago |
| suteng | 19e45ccdb1 | Revert 'Pull Request !3103 : change type of Shape from int32 to int64' | 5 years ago |
| Yi Huaijie | 15d5cc396d | change type of Shape from int32 to int64 | 5 years ago |
| He Wei | 4eb81d7934 | Rename AnfNode::user_data related functions to follow naming rule | 5 years ago |
| yangzhenzhang | e6cef98e95 | delete useless code for allreduce | 5 years ago |
| He Wei | 32379f3e7a | Decouple ir from frontend | 5 years ago |
| WilliamLian | 50e2fda52d | refactor primitive ComputeFunction function | 5 years ago |
| liubuyu | 43c79eb853 | mindspore path adjust | 5 years ago |
| Ziyan | 0925e35252 | enable optimizer parallel with broadcast | 5 years ago |
| He Wei | 43e0967024 | Decouple ir::Tensor class from python | 5 years ago |
| Xiaoda Zhang | 9f4b8a3cd1 | changing the successive edges order in GetAliveSuccEdges() so that Triangle and Star Elimination can be merged into a particular node; adding some check information | 5 years ago |
| leopz | 40e15996b0 | move default_param out of parameter and remove pybind11 in anf define | 5 years ago |
| yao_yf | f0bf438a55 | reshape strategy search | 5 years ago |
| Xiaoda Zhang | def8573275 | implementing-searching-strategy-for-inference | 5 years ago |
| ch-l | f806b72447 | use DeviceMemory for memory control | 5 years ago |
| yangzhenzhang | 6d522f0a4f | add parallel op for layernorm | 5 years ago |
| Xiaoda Zhang | 0ac50a19f5 | Model the memory cost in auto-parallel. It is calculated by the output of operators, plus the parameters. Additionally, modify the graph-operations in auto_parallel to include memory_cost. | 5 years ago |
| c00425699 | d62f560b50 | add_bool_type_check_in_comm_op | 5 years ago |
| buxue | 5841fe010e | Support pow's second input could be tensor and fix bug in bprop of pow | 5 years ago |
| yangzhenzhang | b34c0e7a17 | add parallel op for dropoutdomask | 5 years ago |
| c00425699 | b413638f23 | refactor OperatorCostPtr in OperatorInfo | 5 years ago |
| mindspore-ci-bot | 2e6e94b2b6 | !177 prelu operator support parallel on the channel | 5 years ago |
| yao_yf | b5e3fa9593 | fix auto parallel prelu | 5 years ago |
| Xiaoda Zhang | a153fad874 | This commit is to separate the computation cost and memory cost in auto_parallel. Some related memory correction is removed. | 5 years ago |
| c00425699 | 3bb48ffee1 | use std::vector instead of std::list to promote performance for parallel module | 5 years ago |
| zhunaipan | 930a1fb0a8 | initial version | 5 years ago |