Commit Graph

20 Commits (5d42d00161796bf0d64fe91bf9a7e0a03869ef59)

Author | SHA1 | Message | Date
WilliamLian | 50e2fda52d | refactor primitive ComputeFunction function | 5 years ago
liubuyu | 43c79eb853 | mindspore path adjust | 5 years ago
Ziyan | 0925e35252 | enable optimizer parallel with broadcast | 5 years ago
He Wei | 43e0967024 | Decouple ir::Tensor class from python | 5 years ago
Xiaoda Zhang | 9f4b8a3cd1 | changing the successive edges order in GetAliveSuccEdges() so that Triangle and Star Elimination can be merged into a particular node; adding some check information | 5 years ago
leopz | 40e15996b0 | move default_param out of parameter and remove pybind11 in anf define | 5 years ago
yao_yf | f0bf438a55 | reshape strategy search | 5 years ago
Xiaoda Zhang | def8573275 | implementing-searching-strategy-for-inference | 5 years ago
ch-l | f806b72447 | use DeviceMemory for memory control | 5 years ago
yangzhenzhang | 6d522f0a4f | add parallel op for layernorm | 5 years ago
Xiaoda Zhang | 0ac50a19f5 | Model the memory cost in auto-parallel: it is calculated from the outputs of operators, plus the parameters. Additionally, modify the graph operations in auto_parallel to include memory_cost. | 5 years ago
c00425699 | d62f560b50 | add_bool_type_check_in_comm_op | 5 years ago
buxue | 5841fe010e | Support pow's second input being a tensor and fix a bug in the bprop of pow | 5 years ago
yangzhenzhang | b34c0e7a17 | add parallel op for dropoutdomask | 5 years ago
c00425699 | b413638f23 | refactor OperatorCostPtr in OperatorInfo | 5 years ago
mindspore-ci-bot | 2e6e94b2b6 | !177 prelu operator support parallel on the channel | 5 years ago
yao_yf | b5e3fa9593 | fix auto parallel prelu | 5 years ago
Xiaoda Zhang | a153fad874 | This commit is to separate the computation cost and memory cost in auto_parallel. Some related memory correction is removed. | 5 years ago
c00425699 | 3bb48ffee1 | use std::vector instead of std::list to promote performance for parallel module | 5 years ago
zhunaipan | 930a1fb0a8 | initial version | 5 years ago