Commit Graph

39 Commits (98307c10dbc27058979bc8d7d789d88d1376c9d4)

Author            SHA1        Message                                                                            Date
mindspore-ci-bot  8634675e2d  !14499 [GraphKernel]split UMonad in inputs of op                                   4 years ago
wenfangpei        0085a273e7  split UMonad in inputs of op                                                       4 years ago
lingyunli63       8b3823b22c  optimizeMatmul                                                                     4 years ago
mindspore-ci-bot  69526df01e  !14314 [GraphKernel] unify graph kernel pass add_atomic_clean on Ascend and GPU back-end  4 years ago
mindspore-ci-bot  ddf75da542  !14085 [GraphKernel] add some expander ops                                         4 years ago
looop5            76d322464d  unify graph kernel pass add_atomic_clean on Ascend and GPU back-end                4 years ago
chenlei_autodiff  f4289d40f3  add graph kernel expander ops.                                                     4 years ago
mindspore-ci-bot  7149e8c2c9  !14045 [Graph Kernel] add compare test case                                        4 years ago
zengzitao         72c6dad4ba  add compare_test case in gpu ci and update akg submodule                           4 years ago
lingyunli63       4b966ed40d  support matmul on D                                                                4 years ago
mindspore-ci-bot  5b95409022  !13512 add some expander ops                                                       4 years ago
wenfangpei        043a558ae2  expander lamb_apply_optimizer_assign                                               4 years ago
zengzitao         d0a656f3cd  add some expander ops                                                              4 years ago
zengzitao         ef3507e973  fix exec order bug about monad                                                     4 years ago
He Wei            7d9a783993  [auto-monad] Support side-effects by auto-monad                                    4 years ago
jinyaohui         30a27b2adb  modify Gelu、FastGelu to GeLU and FastGeLU                                          4 years ago
mindspore-ci-bot  e897eb4c41  !11915 Change TensorAdd to Add, merge from r1.1 to master                          4 years ago
l00591931         9ec100d069  Change TensorAdd to Add, from r1.1 to master                                       4 years ago
looop5            0161209e40  update submoudle akg, close graph kernel ascend ci testcases                       4 years ago
looop5            8bbe723603  add Tile infer shape function                                                      4 years ago
mindspore-ci-bot  5f2c84f3cb  !9867 Add graph kernel testcases                                                   4 years ago
looop5            4d8205cd93  Delete unused interface in graph_kernels.py                                        4 years ago
looop5            56fa56b173  add graph kernel testcases                                                         4 years ago
tronzhang         2190da9946  support atomic clean and change package for akg.                                   4 years ago
zengzitao         3ef0e9f053  substitute dropout by cudnnuniformreal and dropout                                 4 years ago
zengzitao         266bfa50bf  expand logsoftmax and logsoftmax_grad, delete softmax's cast and fix layernorm op  4 years ago
looop5            f5f66abd06  Add testcases in Ascend back-end for graph kernel                                  4 years ago
zengzitao         326540cbbd  expand layernorm_grad op                                                           4 years ago
zengzitao         28f1db74dd  expand maximum_grad minimum_grad dropout_grad op                                   4 years ago
zengzitao         db27783d54  expand tanh_grad and reduce_mean, fix bug and add test_case in ci                  4 years ago
zengzitao         53043ae18f  support expand fused_adam and fused_adam_weight_decay op                           4 years ago
dayschan          0f8f1cdda7  Eliminate redundant parameters while expanding basic ops.                          4 years ago
mindspore-ci-bot  8d39a8a4b2  !7529 complex arithmetic_simplify                                                  4 years ago
zhu_xiaochen      c739f14038  simplify transpose matmul reduce                                                   4 years ago
lingyunli63       a500a57c72  add GraphkernelCSE                                                                 4 years ago
Geng_Fei          1455372cf1  add new pass in graph kernel: arithmetic_simplify                                  4 years ago
dayschan          37a48f6aac  GraphKernel supports GPU                                                           4 years ago
duxiutao          793737ab62  add primitive operator to test_lamb                                                5 years ago
duxiutao          1e43c609e0  Add test case and fix two bugs                                                     5 years ago