Commit Graph

168 Commits (4bb5c7b39a99d02ac989f1c94635a7ba2eceabed)

Author SHA1 Message Date
mindspore-ci-bot 4a3e5cb944 !492 Add AllGather fusion pass (6 years ago)
mindspore-ci-bot 54e0fa5c09 !556 [Auto Parallel] use DeviceMemory instead of fixed-size memory check (6 years ago)
YuJianfeng 39945d0f79 Add AllGather fusion pass (6 years ago)
ch-l f806b72447 use DeviceMemory for memory control (6 years ago)
zhousiyi f6a4f3d155 [static_analysis]: remove the TrivialPrimEvaluator cache. (6 years ago)
huanghui 14df771175 fix confusion_softmax_grad_rule pass (6 years ago)
chang zherui 3c1785a121 syn code (6 years ago)
liubuyu 672244e0ac add keep_bn_fp32 parameter (6 years ago)
mindspore-ci-bot 2dabcb9e59 !41 Synchronization code to ms-incubator (6 years ago)
Wei Luning ac25cbae0e Revert "add pattern AdjustAllReduceMulAdd" (6 years ago)
mindspore-ci-bot d99dfbd83d !390 getnext parallel optimization part III & IV: Loop Control & Stream Dispatch adaptation (6 years ago)
gukecai f8208c7c52 Support GetNext Parallel (6 years ago)
liyong f1542a90a3 add pk sampler (6 years ago)
mindspore-ci-bot 6369cf27bd !406 added first row crc check for when reading tfrecord files (6 years ago)
xulei2020 c705ea5e5b add filterOp code (6 years ago)
Peilin Wang 9bc2134cb7 added checking of first row crc to find invalid tfrecord files (6 years ago)
mindspore-ci-bot d8176a77f4 !425 New api "TextFileDataset" (6 years ago)
mindspore-ci-bot 11a4b35caa !472 Fix inputs size and attr for AddN fission pass (6 years ago)
ms_yan c5cfb09e66 Repair some MS_LOG problem (6 years ago)
mindspore-ci-bot 679dbd27b3 !478 fix save_graphs_path and log file mode related issues (6 years ago)
fary86 7e23a1a475 Fix issues of save_graphs_path, Type/Value error message and log file mode (6 years ago)
YuJianfeng bc2df2c913 Fix inputs size and attr for AddN fission pass (6 years ago)
yanghaitao 2795e492ff TextFileDataset (6 years ago)
mindspore-ci-bot 3822b4837f !340 Add a HistogramSummary ops to record tensor value (6 years ago)
Wei Luning d447364417 add pattern AdjustAllReduceMulAdd (6 years ago)
ms_yan f0c07c3fa6 Realize take op and add ut (6 years ago)
mindspore-ci-bot 71b63c3fcf !246 [opt pass] AdjustAllReduceMulAdd (6 years ago)
Wei Luning ea6958c50a add pattern AdjustAllReduceMulAdd (6 years ago)
YuJianfeng 5eb5379889 Add AdamApplyOne fusion pass (6 years ago)
ougongchang 0ed6d9178e add Histogram summary operator (6 years ago)
jzw 3f7054dccb add skip dataset op (6 years ago)
mindspore-ci-bot f69a668d98 !350 change tuple output to make tuple (6 years ago)
lianliguang 00e4306518 convert all tuple output to maketuple (6 years ago)
mindspore-ci-bot f98efafa16 !317 [IRFusion] add derelu_fusion pass (6 years ago)
mindspore-ci-bot cf026096a6 !183 Mindspore.dataset CPP sampler for GeneratorDataset (6 years ago)
wenchunjiang ee5f3fa240 add pass to insert memcpy_async for get_next outputs (6 years ago)
mindspore-ci-bot 58a70b5f82 !346 getnext parallel optimization part II: Eliminate Memcpy in specify scenario (6 years ago)
laiyongqiang 3e05f50f5f getnext_memcpy_elimination (6 years ago)
yangzhenzhang 6d522f0a4f add parallel op for layernorm (6 years ago)
huanghui b02e871c1a [IRFusion] add derelu_fusion pass (6 years ago)
Junhan Hu 9739d3b048 Add CPP sampler support for GeneratorDataset (6 years ago)
mindspore-ci-bot 30de261c3c !243 Support nested repeat (6 years ago)
hesham 0fc23eee0f Support nested repeat (6 years ago)
mindspore-ci-bot b571fabd77 !289 Add cnode mapping after graph match (6 years ago)
YuJianfeng e5c67b9088 Add cnode to equal map when opt matching (6 years ago)
mindspore-ci-bot 9bda080bb5 !260 refactor padding strategy (6 years ago)
mindspore-ci-bot 1ab430072e !232 [Auto parallel] Model the memory_cost in cost model (6 years ago)
mindspore-ci-bot 94589ce611 !226 expend conv stride and dilation to 2d (6 years ago)
wangnan39@huawei.com 2604acedcb extend conv stride and dilation to 2d (6 years ago)
lianliguang 5d225f934f change the padding strategy & refactor insert transdata (6 years ago)
Xiaoda Zhang 0ac50a19f5 Model the memory cost in auto-parallel. It is calculated by the output of operators, plus the parameters. Additionally, modify the graph-operations in auto_parallel to include memory_cost. (6 years ago)
mindspore-ci-bot f1fa2a9941 !273 [MD] update subset random sampler in minddataset (6 years ago)
mindspore-ci-bot 298a32f1e2 !132 Add confusion_mul_grad_fusion pass (6 years ago)
liyong 0ce83e39e1 fix TestShardSampleWrongNumber (6 years ago)
mindspore-ci-bot 57f953ca38 !216 Implement addn fusion pass (6 years ago)
huanghui 19ee376cd3 add confusion_mul_grad fusion pass (6 years ago)
c00425699 d62f560b50 add_bool_type_check_in_comm_op (6 years ago)
YuJianfeng 7307c81f31 implement AddN fission pass (6 years ago)
panfengfeng 6a79fc1735 skip mindrecord ut test case (6 years ago)
buxue 5841fe010e Support pow's second input could be tensor and fix bug in bprop of pow (6 years ago)
yangzhenzhang b34c0e7a17 add parallel op for dropoutdomask (6 years ago)
jonyguo a9443635b7 fix: mindpage enhance parameter check and search by filename failed (6 years ago)
panfengfeng 53a98210af skip ut test cases temporarily (6 years ago)
c00425699 406475160f refactor OperatorCostPtr in OperatorInfo (6 years ago)
biffex cc1416bfc2 constant duplicate mul for momentum (6 years ago)
kswang a6747c522f add ascend mem pool (6 years ago)
yao_yf 513f384c43 fix auto parallel prelu (6 years ago)
kswang d84cfb0108 add mem manager (6 years ago)
Alexey Shevlyakov 6d1ea7af8e remove make_unique.h (6 years ago)
jonyguo 6690a7fd7a fix: error info is not exactly when column list invalid (6 years ago)
jojobugfree 89f0b3b1bb profiling feature enhancement (6 years ago)
Xiaoda Zhang 7798c85e70 This commit is to separate the computation cost and memory cost in auto_parallel. Some related memory correction is removed. (6 years ago)
Jonathan Yan f01098bc12 remove ENABLE_MINDRECORD flag (6 years ago)
Alexey Shevlyakov b9701db887 fix RandomCropDecodeResize test (6 years ago)
mindspore-ci-bot 268d358a1d !187 refactor OperatorCostPtr in OperatorInfo (6 years ago)
mindspore-ci-bot 1b3b3b1a1c !198 [opt] momentum duplicate mul constant (6 years ago)
biffex 62bbf560c6 constant duplicate mul for momentum (6 years ago)
c00425699 b413638f23 refactor OperatorCostPtr in OperatorInfo (6 years ago)
kswang bef62db128 add ascend mem pool (6 years ago)
mindspore-ci-bot 2e6e94b2b6 !177 prelu operator support parallel on the channel (6 years ago)
mindspore-ci-bot 31efc8b088 !172 add mem manager (6 years ago)
kswang fb343bd607 add mem manager (6 years ago)
buxue 1d3bb0b731 Develop op MaxPoolWithArgMax (6 years ago)
mindspore-ci-bot cc75cb357c !168 remove mindspore::make_unique and make_unique.h (6 years ago)
mindspore-ci-bot 3e36982314 !99 Develop op MaxPoolWithArgMax (6 years ago)
mindspore-ci-bot 68f804255f !174 enhance: the error info is not detail when the column list is invalid by MindDataset (6 years ago)
buxue 7541d3b067 Develop op MaxPoolWithArgMax (6 years ago)
yao_yf b5e3fa9593 fix auto parallel prelu (6 years ago)
kingfo cdbef85e5e refactor callback for ge backend (6 years ago)
huangdongrun d12a720fc5 add comparison ops (6 years ago)
YuJianfeng 024706f951 Optimize depend edge with make tuple input (6 years ago)
Wei Luning b18f634912 remove ge depend in cpu (6 years ago)
fary86 73e0bdcd43 Dump graph with type info when static analysis failed (6 years ago)
c00425699 c4b03e854c use std::vector instead of std::list to promote performance for parallel module (6 years ago)
GinFung 2fd7f67108 Add matmul biasadd fusion pass (6 years ago)
Jonathan Yan 295c00ac39 Replace std::cout with MS_LOG in dataset unit test (6 years ago)
mindspore-ci-bot 2eb71103f9 !82 profiling feature enhancement (6 years ago)
jonyguo 20d1b64443 fix: error info is not exactly when column list invalid (6 years ago)
jojobugfree effdb483d6 profiling feature enhancement (6 years ago)
Xiaoda Zhang a153fad874 This commit is to separate the computation cost and memory cost in auto_parallel. Some related memory correction is removed. (6 years ago)