Commit Graph

15569 Commits (01903eb770eb9bd1e88cb91ba504ecab9983f04b)

Author | SHA1 | Message | Date
mindspore-ci-bot | 27b3ab1731 | !9628 optimize example | 4 years ago
mindspore-ci-bot | 81bff0c66c | !9676 Add option to bert for graph kernel, and auto enable mixed precision for gpu when suitable. | 4 years ago
mindspore-ci-bot | e400ccf5ed | !9669 [GraphKernel] Clean a batch buffers in once. | 4 years ago
jiangzhenguang | 5060565920 | add conv3d_transpose. | 4 years ago
mindspore-ci-bot | cd249089f3 | !9655 【MS】【LITE】opencl support runtime fusion | 4 years ago
mindspore-ci-bot | b2a164b1c2 | !9571 Add Inceptionv4 net to model_zoo/official/cv/ | 4 years ago
mindspore-ci-bot | 84fc72f67d | !9164 Add FaceAttribute network to model_zoo/research/cv/ | 4 years ago
mindspore-ci-bot | 2fa9b51ae8 | !9262 Add FaceDetection net to /model_zoo/research/cv/ | 4 years ago
shenwei41 | 76651a578c | modify the throw information exception | 4 years ago
YuhanShi53 | afe360f4e8 | add skipping when return performance and fix bug for small-size ablation | 4 years ago
dayschan | 85b69bf91f | Add a float16 restriction in the solution of reduction op's precision problem in graph splitter. | 4 years ago
mindspore-ci-bot | c2ad16352f | !9532 build for mac | 4 years ago
mindspore-ci-bot | 90ae2a462e | !9690 add interface to api document | 4 years ago
Peilin Wang | d9fb28b9fc | zeroslike dynamic shape no opt | 4 years ago
caozhou | ce35b6c140 | optimize example | 4 years ago
buxue | 0647b8b7db | optimize scalar to tensor function | 4 years ago
yoonlee666 | fe9443bfc6 | add multi-machine | 4 years ago
tronzhang | 17d6f1c2f9 | add option for graph kernel and mixed precision | 4 years ago
tanghuikang | 82450afa9e | Optimize memory using in pynative mode | 4 years ago
liuzhongkai | 0a5ef19063 | win add sse/avx logic | 4 years ago
mindspore-ci-bot | 4a7c87f442 | !9560 [lite]reconstruct onnx | 4 years ago
peixu_ren | 16bbdc4eca | Add some examples for random ops | 4 years ago
mindspore-ci-bot | c1f65f7460 | !9678 【GraphKernel】Fix precision problem | 4 years ago
mwang | 1e90c7997e | fix readme | 4 years ago
tronzhang | 056d7ffc56 | clean batch buffer in once | 4 years ago
zhouyuanshen | e9aca01620 | add support to reduceAny and reduceAll on gpu | 4 years ago
mindspore-ci-bot | 29db53c2ba | !9675 add ps cache | 4 years ago
lichenever | 07bc550c17 | fix_PipelineSplit_bug | 4 years ago
mindspore-ci-bot | 8a7793ecb5 | !9523 Add MaximumGrad | 4 years ago
mindspore-ci-bot | 7a92deee62 | !9563 tod examples | 4 years ago
mindspore-ci-bot | f9c9d0a1c4 | !9637 [MSLITE] layer norm fp32 optimize | 4 years ago
mindspore-ci-bot | 56ce0f4a27 | !9648 Fix the bug of toolbox | 4 years ago
mindspore-ci-bot | 37390519cb | !9680 Make constant numbers to tensors to avoid a bug | 4 years ago
mindspore-ci-bot | eed0b3ea86 | !9613 Refactor explainer for better usability | 4 years ago
mindspore-ci-bot | b724bac9fc | !9686 Fix a bug of naming the variable in LGamma | 4 years ago
looop5 | fa519433ef | expand ClipByNormNoDivSum | 4 years ago
mindspore-ci-bot | 0ea4d9bbb7 | !9421 mindspore lite support npu | 4 years ago
mindspore-ci-bot | 00c6f4822f | !9670 Removing redundant code | 4 years ago
wandongdong | 68c7ba09d9 | support weight quant for opencl | 4 years ago
mindspore-ci-bot | 317a97e6b9 | !9336 auto num_parallel_workers setup | 4 years ago
mindspore-ci-bot | a2c80435ce | !9685 Fix a core dump in TreeConsumer::Terminate() plus minor cache fixes | 4 years ago
caozhou | cf36a05e81 | add to api document | 4 years ago
Zirui Wu | d6df1b0832 | Implemented AutoNumWorker Pass which sets num_workers of selected parallel ops automatically if enabled | 4 years ago
alex-yuyue | 5250b327ae | Fix some non-minddata typos | 4 years ago
Lixia Chen | 32b82c2737 | Fix a core dump in TreeConsumer::Terminate() | 4 years ago
Harshvardhan Gupta | dd0084c52b | improve perf, keep consistent tensor state, fix recheck, check weights at step end | 4 years ago
xuanyue | e7151c194c | reconstruct onnx | 4 years ago
peixu_ren | c1f645931c | Fix a bug of naming the variable in LGamma | 4 years ago
zhoufeng | cd1ce73a25 | remove pybind calling in cxx library | 4 years ago
TFbunny | 1ab6f73d49 | Update tensor shape from int to size_t in scatterop | 4 years ago