Commit Graph

16216 Commits (c0f966ddeff8b49e32319c682178aa855ec68e81)

Author  SHA1  Message  Date
mindspore-ci-bot  569e5b056e  !53 [Auto parallel] Remove the redundant work in star elimination  5 years ago
万万没想到  fd96ebe3ea  fix typo in formula  5 years ago
万万没想到  849b84abb1  fix typo in formula  5 years ago
mindspore-ci-bot  c46e267cfd  !21 update lenet, add alexnet in example  5 years ago
mindspore-ci-bot  8f76c28017  !56 remove cce dependency  5 years ago
mindspore-ci-bot  87040483ee  !58 fix two cast bugs in auto parallel  5 years ago
WeibiaoYu  60f7a95b1c  the size of a tensor may be bigger than 2GB; use memcpy instead of memcpy_s  5 years ago
yangzhenzhang  110640e2ad  add parallel ops for neg and batchmatmul  5 years ago
Xiaoda Zhang  c080ec7874  change star elimination: remove some redundant work and checks  5 years ago
Peilin Wang  0ae77bb0db  TFReaderOp fix: threads will exit after reading the necessary number of rows  5 years ago
mindspore-ci-bot  52166a85cf  !43 dump graph with type info when analysis failed  5 years ago
mindspore-ci-bot  045a1c356c  !51 add op SpaceToBatch and BatchToSpace for ge  5 years ago
mindspore-ci-bot  58cd173051  !63 dataset: repair parameter check problem in random_resize_crop and random_crop  5 years ago
mindspore-ci-bot  1fc5a69d6f  !59 dataset: Repair parameter check problem in TFRecordDataset  5 years ago
chenhaozhe  0727c2e76a  modify log level in DfGraphManager  5 years ago
fary86  816b60491d  Dump graph with type info when static analysis failed  5 years ago
mindspore-ci-bot  e2df848597  !55 modify long-running python ut  5 years ago
mxm  2f031e1540  fixed: PrimitiveToInferImplMap is a global map, and its PrimitivePtr keys are also global variables; if a key is initialized later than the map, initialization of the primitive map fails, since variable initialization order is not guaranteed  5 years ago
mindspore-ci-bot  a8450dd761  !47 fix issue where the process does not exit when custom ops fail to run  5 years ago
zhaozhenlong  cf40305bf0  add operators SpaceToBatch and BatchToSpace for ge  5 years ago
ms_yan  624ab97de6  repair parameter check problem in random_resize_crop and random_crop  5 years ago
mindspore-ci-bot  da4c711dfb  !50 fix parallel-related valuenode merging error  5 years ago
panyifeng  feb1c36811  fix parallel-related valuenode merging error  5 years ago
ms_yan  7b5640da4e  Repair parameter check problem in TFRecordDataset  5 years ago
mindspore-ci-bot  352c6faf85  !18 enable use of float-type learning rate in lars optimizer  5 years ago
lichenever  2da38ad401  fix two cast bugs in auto parallel  5 years ago
mindspore-ci-bot  da447b8d4d  !45 use std::vector instead of std::list to improve performance of the parallel module  5 years ago
mindspore-ci-bot  04c10bb93d  !42 add vgg scripts master  5 years ago
wukesong  0173c40124  add lenet & alexnet in master branch  5 years ago
jojobugfree  e6c15b82c8  remove cce dependency  5 years ago
chang zherui  eaf7146d46  modify long-running python ut  5 years ago
lichenever  f946aea10d  fix graph mode loop sink bug in auto parallel  5 years ago
wangjun260  31b165a57e  add vgg scripts  5 years ago
wenchunjiang  5a00d8cb58  This fixes an issue where the mindspore process cannot exit when the python api op_select_format call fails in the kernel selection step  5 years ago
c00425699  3bb48ffee1  use std::vector instead of std::list to improve performance of the parallel module  5 years ago
mindspore-ci-bot  976226f9ac  !10 Add matmul biasadd fusion pass  5 years ago
mindspore-ci-bot  6f03881b04  !39 add op Diag and DiagPart  5 years ago
mindspore-ci-bot  2753aa8768  !15 Correct the comments for `RandomChoiceWithMask` op.  5 years ago
mindspore-ci-bot  7242c0fad5  !41 Revert 'Pull Request !17 : [AutoParallel]Fix bug in the case of two cast'  5 years ago
mindspore-ci-bot  e4b404e8ae  !32 auto-enable-dynamic-mem-pool  5 years ago
mindspore-ci-bot  e535a5f50c  !40 fix bug of getting a wrong dst format of transdata  5 years ago
mindspore-ci-bot  84a61bd015  !27 add log for kernel runtime in order to trace performance  5 years ago
lvliang  b3a306489d  auto enable dynamic mem pool  5 years ago
leonwanghui  976af212e9  Revert 'Pull Request !17 : [AutoParallel]Fix bug in the case of two cast'  5 years ago
mindspore-ci-bot  140a15924c  !17 [AutoParallel]Fix bug in the case of two cast  5 years ago
mindspore-ci-bot  02a25407c4  !30 use string::find instead of equal to distinguish training graph  5 years ago
lianliguang  9d5890d9b9  fix bug of getting a wrong transdata dest format  5 years ago
zhaozhenlong  b12e6ff780  add operators diag and diag_part  5 years ago
mindspore-ci-bot  c1c8fef9ca  !24 Change strategy for structure output  5 years ago
mindspore-ci-bot  4f5755003a  !29 Add some prompt information for ease of use  5 years ago
seatea  840280e784  Correct the comments for `RandomChoiceWithMask` op.  5 years ago
mindspore-ci-bot  062b744b19  !12 Fix dtype bug for loss_scale and weight_decay  5 years ago
mindspore-ci-bot  3ab402e110  !7 adapt relu6grad  5 years ago
lichenever  b4d34973bc  fix_cast_bug  5 years ago
chenhaozhe  cab5503280  use find instead of equal to distinguish training graph  5 years ago
mindspore-ci-bot  44cd0c1f90  !13 Check input shape for `NMSWithMask` op  5 years ago
buxue  0da0bdcf40  Fix structure-output bug when outputs contain a depend whose first input is constant  5 years ago
Ziyan  4cbcd8e907  enable use of float-type learning rate in lars optimizer  5 years ago
zhaozhenlong  9862dea3cf  adapt relu6grad to the graphengine modifications  5 years ago
shibeiji  468e257a14  add log for kernel runtime in order to trace performance  5 years ago
jonyguo  34e42bd6f9  1. add more log info for dataset & mindrecord; 2. add two new testcases for MindDataset  5 years ago
seatea  6c03542eec  Fix dtype bug for loss_scale and weight_decay.  5 years ago
seatea  7b7a6a45a0  Check if the shape of the input of NMSWithMask is (N, 5).  5 years ago
jjfeing  86f5c69995  change parallel compile num: 32 -> 16  5 years ago
GinFung  468dbc3557  Add matmul biasadd fusion pass  5 years ago
zhunaipan  930a1fb0a8  initial version  5 years ago