Commit Graph

13684 Commits (fb0e866ad18b2873df1ee81ee59c3e829a887824)

Author  SHA1  Message  Date
mindspore-ci-bot  fb0e866ad1  !8269 forward unique dynamic shape  4 years ago
mindspore-ci-bot  f657fcb155  !8698 added libevent pthread  4 years ago
mindspore-ci-bot  5a203d08d0  !8708 fix SmoothL1lossGrad beta attr problem.  4 years ago
mindspore-ci-bot  e16661d2d9  !8001 Adapt nn.LSTM for Ascend.  4 years ago
mindspore-ci-bot  84957cc4a7  !8650 remove useless config parameters  4 years ago
mindspore-ci-bot  8da2a59764  !8725 add definition of scale for GPT  4 years ago
mindspore-ci-bot  168c79e13d  !8664 [MD] Fix a minddata issue where pyfunc multiprocessing can't exit normally  4 years ago
yao_yf  31819bb4a7  support forward unique  4 years ago
mindspore-ci-bot  de60d1d98f  !8684 [MS][LITE] remove internal  4 years ago
mindspore-ci-bot  337805aa55  !8326 fix some unused Java code  4 years ago
anancds  18d34ed47f  added libevent pthread  4 years ago
mindspore-ci-bot  90eb272751  !8641 Update notes on pynative examples of each class in the nn_layer folder  4 years ago
mindspore-ci-bot  232dff3598  !8685 [GraphKernel] For fp16 values, declare fp32 first and then cast to fp16 in the expander  4 years ago
mindspore-ci-bot  3b946d4eb2  !8678 expand logsoftmax and grad, delete cast in softmax, and fix layernorm compute DSL  4 years ago
mindspore-ci-bot  286f5b05f7  !8493 [GraphKernel] Fuse composite ops separated by GetItem nodes  4 years ago
mindspore-ci-bot  a6679511ed  !8581 add graph dependency  4 years ago
alouhahaha  641888f603  add definition of scale for GPT  4 years ago
mindspore-ci-bot  37a722fe77  !8559 [MSLITE][Develop] optimize_softmax  4 years ago
mindspore-ci-bot  5a602e5288  !8634 fix quant mindir inference  4 years ago
mindspore-ci-bot  103c42b683  !8662 [MD] Fix core dump in lite-cv pad  4 years ago
mindspore-ci-bot  9969c83f75  !8689 [GraphKernel] Split shape ops for more fusion opportunities.  4 years ago
xiefangqi  4447d08ca9  fix minddata multiprocessing hang problem  4 years ago
mindspore-ci-bot  d549676d36  !8693 Add comments for UnsortedSegmentOps  4 years ago
sunsuodong  40a6fd8887  optimize_softmax  4 years ago
tronzhang  80f071e9fa  declare fp32 and then cast to fp16 in expander  4 years ago
mindspore-ci-bot  f40a4781e4  !8656 Adapt DynamicGRUV2Grad for Ascend new backend.  4 years ago
jianghui58  5109cf7f05  remove internal  4 years ago
mindspore-ci-bot  3939874b67  !8645 [MS][LITE][Develop] optimization for quantized mobilenet_v2  4 years ago
mindspore-ci-bot  9d260cbf56  !8700 Optimize performance of PyNative  4 years ago
mindspore-ci-bot  f2f79029d6  !8690 Support MetaTensor in Equal's infer_value  4 years ago
yankai  20a5f88bae  fix mindir  4 years ago
mindspore-ci-bot  7fd2db437b  !8484 Add Digamma op  4 years ago
mindspore-ci-bot  00b41244ac  !8654 fix resnet_thor training failure  4 years ago
mindspore-ci-bot  e3b1814401  !8713 Fix GetDatasetSize issue in TextFile  4 years ago
mindspore-ci-bot  bf447ff51a  !8672 add removal pass for dataset getters  4 years ago
Mahdi  449e1526dc  Fixed GetDatasetSize for TextFile  4 years ago
peixu_ren  4aa836dae1  Add Digamma op  4 years ago
Zirui Wu  ff5999fc2f  add removal pass for getters  4 years ago
mindspore-ci-bot  fedb225a96  !8667 fix cast issue  4 years ago
mindspore-ci-bot  78fc3e722f  !8704 BugFix for GPT  4 years ago
mindspore-ci-bot  d7e3b18b6f  !8665 check kernel type and do SyncStream for hccl dynamic kernels  4 years ago
huangxinjing  4e0a5359f5  Add doc for max value  4 years ago
lixian  7f9d65cce0  apply int8 4x16 kernel  4 years ago
mindspore-ci-bot  c253c3a5c6  !8686 [lite] adjust mindir model attr changes  4 years ago
liuxiao93  2aaf5e2e1b  Adapt DynamicGRUV2Grad for Ascend new backend.  4 years ago
dayschan  8e6d92eac9  Fuse composite ops separated by GetItem nodes  4 years ago
mindspore-ci-bot  e2e532dec3  !8652 fix memory leak in pynative  4 years ago
liangchenghui  d45bbb8f87  fix SmoothL1lossGrad beta attr problem.  4 years ago
liuxiao93  97e56270ad  Adapt nn.LSTM for Ascend.  4 years ago
mindspore-ci-bot  e1cfeeb1dd  !8644 update getting device list in parallel ops  4 years ago