jiangzhenguang | 5060565920 | add conv3d_transpose. | 5 years ago
Bairong | 2eeaec7512 | support arange in functional.py | 5 years ago
mindspore-ci-bot | 186f5394ba | !9155 add_conv3d_object (From: @jiangzg001) | 5 years ago
mindspore-ci-bot | 5d1a8b09db | !9097 Primitive Shape support return range of dynamic shape (From: @M20211031; Reviewed-by: @ginfung) | 5 years ago
liuxiao93 | 712ad98a92 | Move API inner.DynamicGRUV2 to P.DynamicGRUV2. | 5 years ago
jiangzhenguang | c45eee9b94 | add conv3d object | 5 years ago
mxm | 82268dc703 | support DynamicShape and StridedSlice primitive return range of shape value. | 5 years ago
mindspore-ci-bot | b9d2392937 | !9350 Add Matrix_Determinant and Matrix_Inverse to Mindspore front-end (From: @peixu_ren; Reviewed-by: @liangchenghui, @zh_qh; Signed-off-by: @liangchenghui) | 5 years ago
mindspore-ci-bot | 79fef0df0e | !9142 fix bug of addn not supported dynamic shape (From: @M20211031; Reviewed-by: @zh_qh; Signed-off-by: @zh_qh) | 5 years ago
peixu_ren | 0adab15de7 | Add Matrix_Determinant and Matrix_Inverse to Mindspore front-end | 5 years ago
Peilin Wang | f7dc2432a0 | initial commit; fix ci; fix ci; remove old sequence mask api; fix ci; fix ci; remove old seuqence mask tests | 5 years ago
mxm | 59f9ac9975 | Primitive 'AddN' support dynamic shape | 5 years ago
peixu_ren | 6355f46d32 | Add LBeta op at nn level | 6 years ago
mindspore-ci-bot | 22d683a805 | !8920 Adapt ops LinSpace for Ascend. (From: @liu_xiao_93; Reviewed-by: @liangchenghui, @linqingke, @liangchenghui; Signed-off-by: @liangchenghui, @liangchenghui) | 6 years ago
liuxiao93 | 584e241e29 | Adapt ops LinSpace for Ascend. | 6 years ago
l00591931 | 17c0810f9e | add SequenceMask operator | 6 years ago
l00591931 | f1e92d0ea7 | Change Ones/zeros | 6 years ago
mindspore-ci-bot | f40a4781e4 | !8656 Adapt DynamicGRUV2Grad for Ascend new backend. (From: @liu_xiao_93; Reviewed-by: @jjfeing, @liangchenghui; Signed-off-by: @liangchenghui) | 6 years ago
peixu_ren | 4aa836dae1 | Add Digamma op | 6 years ago
liuxiao93 | 2aaf5e2e1b | Adapt DynamicGRUV2Grad for Ascend new backend. | 6 years ago
peixu_ren | 5e9178c5b6 | Add IGamma operator | 6 years ago
liuxiao93 | d471ac491e | Adapt DynamicGRUV2 forward for Ascend new backend. | 6 years ago
mindspore-ci-bot | a442024604 | !8510 Add Ones and Zeros operators (From: @liangzhibo; Reviewed-by: @zh_qh, @chenfei52; Signed-off-by: @zh_qh) | 6 years ago
l00591931 | 886ef520d7 | Add Ones and Zeros operators | 6 years ago
mindspore-ci-bot | f16509388c | !8409 Multi-dimensional list value assignment (From: @liangzhibo; Reviewed-by: @chenfei52) | 6 years ago
l00591931 | c5b5a6719c | Enable multi-dimensional list value assignment | 6 years ago
liuxiao93 | 92ba9d94df | Adapt UnsortedSegmentMax for Ascend. | 6 years ago
HuangBingjian | 265a6d61b6 | reverse fix shape=0 | 6 years ago
liuxiao93 | 0a1155f938 | Fix some bugs about API. | 6 years ago
HuangBingjian | 4477fcfe19 | fix shape bug. | 6 years ago
liuxiao93 | 45d343257b | Add DynamicGRU. | 6 years ago
jzg | 4851a67bb5 | add layer of clipbyglobalnorm. | 6 years ago
jzg | 7cbd55e17d | add embedding layer. | 6 years ago
jzg | 374e9e199d | add moment and nonzero. | 6 years ago
mindspore-ci-bot | 3fd54fd58f | !7361 Check whether the network args are tensors in the compile phase (Merge pull request !7361 from YuJianfeng/master) | 6 years ago
yujianfeng | 18a76ff3c5 | Check whether the network args are tensors in the compile phase | 6 years ago
jzg | 2c6a9c8486 | add fake-quant operators. | 6 years ago
mindspore-ci-bot | 55be3c42a5 | !5875 Add IFMR op for new backend. (Merge pull request !5875 from liuxiao93/IFMR-OpenSource-new-backend) | 6 years ago
liangzelang | 195d97ad46 | Remove redundant check about IsInstance op | 6 years ago
mindspore-ci-bot | f72f2c22fb | !6653 fix stream sync error for mixed precesion on pynative mode (Merge pull request !6653 from chujinjin/fix_stream_sync_error_for_mixed_precision) | 6 years ago
liuxiao93 | 0e02df812a | Add IFMR op for new backend. (cherry picked from commit 17a5995e97) | 6 years ago
chujinjin | 1cf8f3b777 | fix stream sync error for mixed precision | 6 years ago
jinyaohui | 334a32d501 | fix pylint | 6 years ago
Wei Luning | cdbd16de0c | fix bug in parameter set & fix code style in pynative_executa.cc | 6 years ago
jin-xiulang | 5873614b86 | Refactoring laplace random operator. | 6 years ago
lihongkang | 74e3adebb8 | change api interface | 6 years ago
buxue | 08059f5c61 | add check for stridedslice when choose aicpu or aicore | 6 years ago
buxue | 1ae9da85e0 | remove timeout exception test case | 6 years ago
mindspore-ci-bot | 021ba724cf | !5645 [bug][api]updata signature (Merge pull request !5645 from vlne-v1/ref_demo) | 6 years ago
Wei Luning | 879a519136 | updata signature | 6 years ago
mindspore-ci-bot | bb84f50407 | !5473 optim pylint (Merge pull request !5473 from jinyaohui/master) | 6 years ago
mindspore-ci-bot | be3d79cb6b | !4204 add dynamic shape support for GatherV2 and others (Merge pull request !4204 from fary86/adapt_primitive_dynamic_shape) | 6 years ago
buxue | 359e1f236e | check user define bprop when there is parameter in nested network | 6 years ago
fary86 | 144a35b17e | Adapt GatherV2 for dynamic shape | 6 years ago
jinyaohui | a9972a7def | optim pylint | 6 years ago
mindspore-ci-bot | e94416be0c | !5283 Support setting operator io format in the frontend (Merge pull request !5283 from liangchenghui/io_format) | 6 years ago
梁成辉 | 34d433fd9a | Set io format for old backend | 6 years ago
panyifeng | 1a54785fe2 | remove name arg from gradoperation | 6 years ago
fary86 | 8d245497a4 | Fix large for loop Runtime Error due to backend missing operators | 6 years ago
mindspore-ci-bot | afd16fbf0a | !4963 fix bug of switch layer join (Merge pull request !4963 from fary86/fix_switch_layer_join_bug) | 6 years ago
fary86 | 947e19b839 | Fix bug of switch layer join | 6 years ago
panyifeng | 637e812347 | remove global grad ops | 6 years ago
fary86 | 04524b6bd3 | Fix coredump caused by function call depth too large | 6 years ago
mindspore-ci-bot | c92f7c9170 | !4911 add func type check for switch layer (Merge pull request !4911 from riemann_penn/add_func_type_check_for_switch_layer) | 6 years ago
panyifeng | abab21ed57 | add func type check for switch layer | 6 years ago
mindspore-ci-bot | 6e8d3a3b82 | !4859 Add CTCGrerdyDecoder ops for old backend. (Merge pull request !4859 from liuxiao93/Add-ReversSqueuce-EditDistance-CTCGrerdyDecoder) | 6 years ago
panyifeng | f9f3cd7ce0 | fix switch_layer_issues | 6 years ago
liuxiao93 | 9bc18ea2e5 | Add CTCGrerdyDecoder ops for GE. | 6 years ago
mindspore-ci-bot | 2ac410e90d | !4789 Add EditDistance op for GE. (Merge pull request !4789 from liuxiao93/Add-EditDistance-op-for-GE) | 6 years ago
liuxiao93 | 4c99f4f649 | Add EditDistance op for GE. | 6 years ago
huangdongrun | f30418991c | refactor bool op parsing to be consistent with pynative mode; add testcase of st | 6 years ago
panyifeng | 22a9d02e9f | switch_layer incorporate env_get and tuple_get | 6 years ago
fary86 | 38083e055a | Fix coredump missing return statement after while loop | 6 years ago
mindspore-ci-bot | f875bf21bc | !2948 fix control flow (Merge pull request !2948 from amongo/FixControlFlow) | 6 years ago
huangdongrun | 2a6d346d2f | support if by if grad parameter; add join for ref; adjust env eliminate to eliminate all env ops; add partial app cache; resolve while endless; fix env eliminate; support for "for while" cases; fix join shape error | 6 years ago
panyifeng | 1c296b96c9 | fix switch layer sigle prim cell | 6 years ago
jonyguo | 4964f7703a | Merge branch 'master' into code_sync_incubator_f3c32baf_to_master_fcfc75a3_0811 | 6 years ago
peixu_ren | 3820533ad1 | Refactor Gamma and Poisson ops | 6 years ago
changzherui | 22336c0843 | sync 0807 code to ms-incubator | 6 years ago
peixu_ren | 517fed55a5 | Added LGamma op | 6 years ago
fary86 | 7602054acd | Fix do concat in while loop specialize error | 6 years ago
mindspore-ci-bot | 0b407dfe78 | !4005 unsortedsegsum grad (Merge pull request !4005 from fangzehua/unsortedsegsum) | 6 years ago
mindspore-ci-bot | c493859978 | !4004 add squreasumall grad (Merge pull request !4004 from fangzehua/squaresumall) | 6 years ago
fangzehua | 99f2be7064 | unsortedsegsum grad | 6 years ago
fangzehua | 7011508379 | add squreasumall grad | 6 years ago
panyifeng | 9927e6eb5c | eager mode sparse | 6 years ago
liuxiao93 | 374b7b8583 | Add TBE op SquaredDifference for VM. | 6 years ago
peixu_ren | 374772035a | Refactor random uniform ops and complete gamma and poisson | 6 years ago
simson | b2cdd5d848 | Fix the difference of Error type between graph&pynative mode | 6 years ago
mindspore-ci-bot | 568da0d510 | !3425 fix avgpoolgrad (Merge pull request !3425 from fangzehua/avgpool) | 6 years ago
fangzehua | 86dd0c583a | fix avgpool grad | 6 years ago
mindspore-ci-bot | 295038d346 | !3324 add reduce_any op for vm (Merge pull request !3324 from fangzehua/reduce_any) | 6 years ago
kingfo | 73ea9b7855 | fix mix precesion operator issue | 6 years ago
mindspore-ci-bot | 684ff4f46b | !3160 Rewrite tensor's __bool__ for pynative mode (Merge pull request !3160 from Simson/push-to-opensource) | 6 years ago
simson | 5f77fbdd75 | Rewrite tensor's __bool__ for pynative mode | 6 years ago
fangzehua | 228a959cc7 | add reduce any op for vm | 6 years ago
Wei Luning | 484d7f10c8 | refine code: refine code in bert model; add ToAbstruct for `FuncGraph`, `MetaFuncGraph`, `Primitive`; remove partial hard code in spec for poly; remove any in data convert cache | 6 years ago
mindspore-ci-bot | cf4353f728 | !3220 Add random normal op at MindSpore front-end (Merge pull request !3220 from peixu_ren/custom_gpu) | 6 years ago
fangzehua | dde74b227c | add 5 vm op | 6 years ago
peixu_ren | 9b45018dfd | Add random normal op at MindSpore front-end | 6 years ago