Compare commits

...

106 Commits

Author SHA1 Message Date
mindspore-ci-bot 59361d83a9 !1470 LinkToPotentialPrecedenceNode c76
4 years ago
lianghao 1993675125 LinkToPotentialPrecedenceNode
4 years ago
mindspore-ci-bot a205af43fc !1324 Fix bug of single_op ageing.
4 years ago
unknown 63e8d1e291 Fix bug of single_op ageing.
4 years ago
mindspore-ci-bot 5d094c7f72 !1289 modify static depends
4 years ago
wxl 11b77d9daf modify static depends
4 years ago
mindspore-ci-bot a8c137e5eb !1058 kTaskNumPerHcclNode
4 years ago
lianghao 3dd23c428e kTaskNumPerHcclNode
4 years ago
mindspore-ci-bot d98d77371e !1045 move end_task after assignadd on iterator_loop case
4 years ago
wangxiaotian22 2b0464ad81 move end_task before active stream when bpfp set by env
4 years ago
mindspore-ci-bot 145d6a97d6 !1011 move end_task after assignadd on iterator_loop case
4 years ago
wangxiaotian22 c72ba8ad66 parser update
4 years ago
wangxiaotian22 d8db67d06f add metadef update
4 years ago
wangxiaotian22 9712b297b7 move end_task after assignadd on iterator_loop case
4 years ago
mindspore-ci-bot 885b3a6b7f !951 gensessionid add pid prefix
4 years ago
wangxiaotian22 ee5c71962d gensessionid add pid prefix
4 years ago
mindspore-ci-bot c14c6a87cb !947 change mult batch to switchn
4 years ago
wjm 81cd9527aa fix error
4 years ago
wjm d3515b1624 change mult batch to switchn
4 years ago
mindspore-ci-bot cb019693b1 !945 Add cc_task task_info log.
4 years ago
mindspore-ci-bot 6529491d90 !943 Add keep_dtype attribute
4 years ago
unknown 4318596b9a Add cc_task task_info log.
4 years ago
lwx897429 970745eedf Add keep_dtype attribute
4 years ago
mindspore-ci-bot 89cccaeb3c !907 modify dump task proto in c76
4 years ago
zhou_chao1993 7f341ab53a modify dump_task proto in c76
4 years ago
mindspore-ci-bot ae91b07e6a !917 Migration subgraph Const Node
4 years ago
mindspore-ci-bot 267ddd9801 !916 Bugfix: check pricision loss when cast from int64 to bool
4 years ago
wjm 5bc603e52d fix error
4 years ago
zhaoxinxin a608eee4e2 modified: ge/graph/common/transop_util.cc
4 years ago
mindspore-ci-bot 1218a757c5 !898 Add submodelId in dynamic shape
4 years ago
taoxiangdong fadd5d1874 Add submodelId in dyanamic shape
4 years ago
mindspore-ci-bot 1cd83211d6 !891 fix l2 buffer error
4 years ago
mindspore-ci-bot 3eeb8a9c97 !893 remove interface aclgrphInfershapeAndType
4 years ago
mindspore-ci-bot 0126007d89 !888 add SwitchDeadBranchElimination & MergePass in graph prepare
4 years ago
wxl a69806eee1 remove interface aclgrphInfershapeAndType
4 years ago
wjm aac7897a44 fix l2 buffer error
4 years ago
chenyemeng 2b90729519 add SwitchDeadBranchElimination & MergePass in graph prepare
4 years ago
mindspore-ci-bot d5c6e8146b !872 Log macro error in windows
4 years ago
taoxiangdong 850f6efb29 Log print macro error
4 years ago
mindspore-ci-bot 65509ee0f0 !853 add whole graph optimize
4 years ago
gengchao4@huawei.com 8ad6d4b463 add whole graph optimize
4 years ago
mindspore-ci-bot 103aa22616 !839 fixed issue of repeated profile subscription
4 years ago
mindspore-ci-bot 11b6f47be6 !811 modify p2p addr assigner bug in c76
4 years ago
mindspore-ci-bot 501e184095 !828 Free mem before return
4 years ago
taoxiangdong c8cc205f33 Free memory before return
4 years ago
lwx897429 d30dd18e09 fixed issue of repeated profile subscription
4 years ago
mindspore-ci-bot 63cc95c5e5 !837 Feature: delete compress_weight_conf para of aclgrphParse interface
4 years ago
l00444296 b6aa9c0e4d Feature: delete compress_weight_conf para of aclgrphParse interface
4 years ago
mindspore-ci-bot 032f9d1f07 !795 Parse traing trace switch in profstart func
4 years ago
mindspore-ci-bot dfd2314793 !782 Feature: delete compress_weight_conf para of aclgrphParse interface
4 years ago
mindspore-ci-bot 69e7d5bf64 !789 fix question that release all loaded model memory when memory is not enough
4 years ago
mindspore-ci-bot c451c30026 !797 fix dynamic aipp error
4 years ago
mindspore-ci-bot 61b2de9c38 !786 dump
4 years ago
mindspore-ci-bot a552edfd11 !779 Feature: delete is_load_profiling_ reset to false
4 years ago
wjm ba745a12d3 fix
4 years ago
zhou_chao1993 0554dd5942 modify p2p addr assigner bug
4 years ago
weiyang 628162c7b0 dump
4 years ago
wjm a9b4cf400a fix dynamic aipp error
4 years ago
taoxiangdong 5e85506711 Parse training trace on profstart
4 years ago
mindspore-ci-bot 5a8206cf7d !793 license update, mentioning usage of tensorflow and caffe code
4 years ago
yanghaoran fba4643a47 license update, mentioning usage of tensorflow and caffe code
4 years ago
wxl 2504b6b7b9 bugfix
4 years ago
l00444296 50fdb59274 Feature: delete compress_weight_conf para of aclgrphParse interface
4 years ago
l00444296 d1eb560616 Feature: delete is_load_profiling_ reset to false
4 years ago
mindspore-ci-bot 72eef81746 !767 for perfmance
4 years ago
mindspore-ci-bot 856fb4419a !724 Feature: repair dynamic_stitch_kernel folding bug
4 years ago
mindspore-ci-bot b3cfade65e !771 op_compiler_cache_dir
4 years ago
mindspore-ci-bot 35983c7c38 !761 Check aicpu op type
4 years ago
mindspore-ci-bot 4fe214984d !755 errorcode
4 years ago
lianghao bb6de73c97 op_compiler_cache_dir
4 years ago
weiyang 43c1e02265 perf
4 years ago
taoxiangdong 546e9f7cf9 Check aicpu op type
4 years ago
mindspore-ci-bot c2fb4adbce !760 device os log missing
4 years ago
taoxiangdong c485a99932 device os log missing
4 years ago
mindspore-ci-bot 5de4cd5479 !733 decrease om size
4 years ago
mindspore-ci-bot 30000dd4e7 !747 fix case plugin error
4 years ago
mindspore-ci-bot 60ceda422f !752 Fix storage bug.
4 years ago
weiyang 48585c78f0 errorcode
4 years ago
mindspore-ci-bot 51314c970b !751 Fix bug of modify output shape to -2.
4 years ago
unknown 17428ef7a8 Fix storage bug.
4 years ago
unknown 4a315e1d4f Fix bug of modify output shape to -2.
4 years ago
wjm c694a907e2 fix case flugin
4 years ago
mindspore-ci-bot 8bb847b429 !746 delete invalid comment
4 years ago
wqtshg acae7cfaea delete invalid comment
4 years ago
mindspore-ci-bot 3efd05e1c4 !744 update c76 code
4 years ago
wqtshg 7f542c2b68 update c76 code ut
4 years ago
wqtshg 0b6354215a update c76 ut
4 years ago
wqtshg 8a8a42cf03 update c76 code ut
4 years ago
wqtshg 8e87e4b7a5 update ge ut
4 years ago
wqtshg ea477de6eb update test
4 years ago
wqtshg 0f36063e8c update ut
4 years ago
wqtshg 1ac3bff4af update c76 submodule
4 years ago
lianghao 2898b2d83c decrease om size
4 years ago
wqtshg 455f21252f update c76 log_cpp
4 years ago
l00444296 48973d4ea1 Feature: repair dynamic_stitch_kernel folding bug
4 years ago
wqtshg 4ac0f69204 add c76 LOG_CPP
4 years ago
wqtshg d662b5e84e update c76 submodule
4 years ago
wqtshg e38e5e06a2 update c76 cmake
4 years ago
wqtshg eea696c45b update c76 code
4 years ago
wqtshg eaeaec68ff update slog to alog
4 years ago
wqtshg 86bb779cee update c76 submodule
4 years ago
计晨 fe5db33358 !712 update c76 code
4 years ago
wqtshg 5cc51efc74 update c76 code and submodule
4 years ago
计晨 ca855a5bf7 !707 update c76 code
4 years ago
wqtshg 16758ee2b1 update c76 code
4 years ago
wqtshg 5d043adbca update c76 code
4 years ago

.gitmodules

@@ -1,8 +1,8 @@
[submodule "parser"]
path = parser
url = https://gitee.com/ascend/parser.git
branch = development
branch = r1.2.0
[submodule "metadef"]
path = metadef
url = https://gitee.com/ascend/metadef.git
branch = development
branch = r1.2.0

@@ -52,10 +52,10 @@ if (ENABLE_OPEN_SRC)
include(cmake/FindModule.cmake)
include(cmake/intf_pub_linux.cmake)
# for CPU/GPU mode, find c_sec and slog from local prebuild
# for CPU/GPU mode, find c_sec and alog from local prebuild
#if(NOT ENABLE_D AND NOT GE_ONLY)
# set(GE_PREBUILD_PATH ${GE_CODE_DIR}/third_party/prebuild/${CMAKE_HOST_SYSTEM_PROCESSOR})
# find_module(slog libslog.so ${GE_PREBUILD_PATH})
# find_module(slog libalog.so ${GE_PREBUILD_PATH})
# if D_LINK_PATH is set in environment variables, search libraries in given path
if(DEFINED ENV{D_LINK_PATH})
# D_LINK_PATH is set
@@ -72,7 +72,7 @@ if (ENABLE_OPEN_SRC)
endif()
set(GE_LIB_PATH ${GE_LIB_PATH}/${GE_SYS_ARCH})
set(STATIC_ACL_LIB ${GE_LIB_PATH})
find_module(slog libslog.so ${GE_LIB_PATH})
find_module(slog libalog.so ${GE_LIB_PATH})
find_module(static_mmpa libmmpa.a ${GE_LIB_PATH})
find_module(msprofiler libmsprofiler.a ${GE_LIB_PATH})
find_module(hccl libhccl.so ${GE_LIB_PATH})
@@ -88,7 +88,7 @@ if (ENABLE_OPEN_SRC)
elseif(ENABLE_GE_COV OR ENABLE_GE_UT)
add_subdirectory(tests)
else()
find_module(slog libslog.so ${ASCEND_ATC_DIR})
find_module(slog libalog.so ${ASCEND_ATC_DIR})
find_module(static_mmpa libmmpa.a ${ASCEND_ATC_DIR})
find_module(error_manager liberror_manager.so ${ASCEND_ATC_DIR})
if(PLATFORM STREQUAL "train")
@@ -154,7 +154,7 @@ elseif (ENABLE_D OR ENABLE_ACL)
include(cmake/intf_pub_linux.cmake)
# common libraries
find_module(slog libslog.so ${ASCEND_MS_DRIVER_PATH})
find_module(slog libalog.so ${ASCEND_MS_DRIVER_PATH})
find_module(error_manager liberror_manager.so ${ASCEND_MS_RUNTIME_PATH} ${ATLAS_MS_RUNTIME_PATH})
find_module(static_mmpa libmmpa.a ${ASCEND_MS_RUNTIME_PATH} ${ATLAS_MS_RUNTIME_PATH})
@@ -174,7 +174,7 @@ elseif(ENABLE_MS_TESTCASES)
include(cmake/intf_pub_linux.cmake)
# common libraries
find_module(slog libslog.so ${ASCEND_MS_DRIVER_PATH})
find_module(slog libalog.so ${ASCEND_MS_DRIVER_PATH})
find_module(error_manager liberror_manager.so ${ASCEND_MS_RUNTIME_PATH} ${ATLAS_MS_RUNTIME_PATH})
find_module(static_mmpa libmmpa.a ${ASCEND_MS_RUNTIME_PATH} ${ATLAS_MS_RUNTIME_PATH})

@@ -458,3 +458,76 @@ Copyright (c) Facebook Inc. and Microsoft Corporation.
License: MIT License
Please see above.
Software: caffe 1.0
License: BSD 2-Clause License
Open Source Software Licensed Under the BSD 2-Clause License
GraphEngine uses source code files from caffe so as to support model format conversion from caffe model to GraphEngine model.
Please see below for the full list of source code files from caffe that are used by GraphEngine.
The below software in this distribution may have been modified by Huawei Technologies Co., Ltd ("Huawei Modifications"). All Huawei Modifications are Copyright 2019-2020 Huawei Technologies Co., Ltd.
----------------------------------------------------------------------------------------
1. caffe.proto master
All contributions by the University of California:
Copyright (c) 2014-2017 The Regents of the University of California (Regents)
All rights reserved.
Terms of the BSD 2-Clause License:
--------------------------------------------------------------------
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Software: tensorflow 1.15.0
License: Apache-2.0 License
Open Source Software Licensed Under the Apache-2.0 License
GraphEngine uses source code files from tensorflow so as to support model format conversion from tensorflow model to GraphEngine model.
Please see below for the full list of source code files from tensorflow that are used by GraphEngine.
The below software in this distribution may have been modified by Huawei Technologies Co., Ltd ("Huawei Modifications"). All Huawei Modifications are Copyright 2019-2020 Huawei Technologies Co., Ltd.
----------------------------------------------------------------------------------------
1. attr_value.proto master
Copyright 2015 The TensorFlow Authors. All Rights Reserved.
2. function.proto master
Copyright 2015 The TensorFlow Authors. All Rights Reserved.
3. graph.proto master
Copyright 2015 The TensorFlow Authors. All Rights Reserved.
4. node_def.proto master
Copyright 2015 The TensorFlow Authors. All Rights Reserved.
5. op_def.proto master
Copyright 2015 The TensorFlow Authors. All Rights Reserved.
6. resource_handle.proto master
Copyright 2015 The TensorFlow Authors. All Rights Reserved.
7. tensor.proto master
Copyright 2015 The TensorFlow Authors. All Rights Reserved.
8. tensor_shape.proto master
Copyright 2015 The TensorFlow Authors. All Rights Reserved.
9. types.proto master
Copyright 2015 The TensorFlow Authors. All Rights Reserved.
10. versions.proto master
Copyright 2015 The TensorFlow Authors. All Rights Reserved.
Terms of the Apache-2.0 License:
Please see above.

@@ -224,14 +224,12 @@ if [[ "X$ENABLE_GE_UT" = "Xon" || "X$ENABLE_GE_COV" = "Xon" ]]; then
# fi
# if [[ "X$ENABLE_GE_COV" = "Xon" ]]; then
echo "Generating coverage statistics, please wait..."
cd ${BASEPATH}
rm -rf ${BASEPATH}/cov
mkdir ${BASEPATH}/cov
lcov -c -d build/tests/ut/ge -d build/tests/ut/common/graph/ -o cov/tmp.info
lcov --remove cov/tmp.info '*/output/*' '*/build/opensrc/*' '*/build/proto/*' '*/third_party/*' '*/tests/*' '/usr/local/*' -o cov/coverage.info
cd ${BASEPATH}/cov
genhtml coverage.info
# echo "Generating coverage statistics, please wait..."
# cd ${BASEPATH}
# rm -rf ${BASEPATH}/cov
# mkdir ${BASEPATH}/cov
# gcovr -r ./ --exclude 'third_party' --exclude 'build' --exclude 'tests' --exclude 'prebuild' --exclude 'inc' --print-summary --html --html-details -d -o cov/index.html
# fi
fi
# generate output package in tar form, including ut/st libraries/executables

@@ -23,7 +23,6 @@ ExternalProject_Add(gflags_build
URL ${REQ_URL}
#URL /home/txd/workspace/linux_cmake/pkg/protobuf-3.8.0.tar.gz
#SOURCE_DIR ${GE_CODE_DIR}/../../third_party/gflags/src/gflags-2.2.2
TLS_VERIFY OFF
CONFIGURE_COMMAND ${CMAKE_COMMAND} -DCMAKE_CXX_FLAGS=${gflags_CXXFLAGS} -DCMAKE_INSTALL_PREFIX=${CMAKE_INSTALL_PREFIX}/gflags <SOURCE_DIR>
BUILD_COMMAND $(MAKE)
INSTALL_COMMAND $(MAKE) install

@@ -10,10 +10,7 @@ if ((${CMAKE_INSTALL_PREFIX} STREQUAL /usr/local) OR
message(STATUS "No install prefix selected, default to ${CMAKE_INSTALL_PREFIX}.")
endif()
if (GE_PB_PKG)
set(REQ_URL "${GE_PB_PKG}/libs/ge_gtest/release-1.8.0.tar.gz")
set(MD5 "")
elseif (ENABLE_GITEE)
if (ENABLE_GITEE)
set(REQ_URL "https://gitee.com/mirrors/googletest/repository/archive/release-1.8.0.tar.gz")
set(MD5 "")
else()
@@ -25,7 +22,6 @@ set (gtest_CXXFLAGS "-D_GLIBCXX_USE_CXX11_ABI=0 -D_FORTIFY_SOURCE=2 -O2 -fstack-
set (gtest_CFLAGS "-D_GLIBCXX_USE_CXX11_ABI=0 -D_FORTIFY_SOURCE=2 -O2 -fstack-protector-all -Wl,-z,relro,-z,now,-z,noexecstack")
ExternalProject_Add(gtest_build
URL ${REQ_URL}
TLS_VERIFY OFF
CONFIGURE_COMMAND ${CMAKE_COMMAND} -DCMAKE_CXX_FLAGS=${gtest_CXXFLAGS} -DCMAKE_INSTALL_PREFIX=${CMAKE_INSTALL_PREFIX}/gtest <SOURCE_DIR>
-DBUILD_TESTING=OFF -DCMAKE_POSITION_INDEPENDENT_CODE=ON -DBUILD_SHARED_LIBS=ON -DCMAKE_MACOSX_RPATH=TRUE -Dgtest_disable_pthreads=ON
BUILD_COMMAND $(MAKE)

@@ -5,24 +5,19 @@ endif()
include(ExternalProject)
set(JSON_SRC_DIR ${CMAKE_BINARY_DIR}/opensrc/json/include)
if (GE_PB_PKG)
set(REQ_URL "${GE_PB_PKG}/libs/ge_nlohmann_json/include.zip")
set(MD5 "0dc903888211db3a0f170304cd9f3a89")
set(JSON_INCLUDE_DIR ${JSON_SRC_DIR})
#elseif (ENABLE_GITEE)
#if (ENABLE_GITEE)
# set(REQ_URL "https://gitee.com/mirrors/JSON-for-Modern-CPP/repository/archive/v3.6.1.zip")
# set(MD5 "5bda78ce308e6cfcf614dcf1d5ff27a7")
# set(JSON_INCLUDE_DIR "${JSON_SRC_DIR}/include")
else()
#else()
set(REQ_URL "https://github.com/nlohmann/json/releases/download/v3.6.1/include.zip")
set(MD5 "0dc903888211db3a0f170304cd9f3a89")
set(JSON_INCLUDE_DIR ${JSON_SRC_DIR})
endif ()
#endif ()
ExternalProject_Add(json_build
URL ${REQ_URL}
#URL /home/txd/workspace/cloud_code/pkg/include.zip
SOURCE_DIR ${JSON_SRC_DIR}
TLS_VERIFY OFF
CONFIGURE_COMMAND ""
BUILD_COMMAND ""
INSTALL_COMMAND ""

@@ -6,10 +6,7 @@ set(ONNX_PROTO_DIR ${CMAKE_BINARY_DIR}/onnx)
set(ONNX_PROTO_FILE ${ONNX_PROTO_DIR}/onnx.proto)
file(MAKE_DIRECTORY ${ONNX_PROTO_DIR})
if (GE_PB_PKG)
set(REQ_URL "${GE_PB_PKG}/libs/onnx/onnx-1.6.0.tar.gz")
set(MD5 "512f2779d6215d4a36f366b6b9acdf1e")
elseif (ENABLE_GITEE)
if (ENABLE_GITEE)
set(REQ_URL "https://gitee.com/mirrors/ONNX/repository/archive/v1.6.0.tar.gz")
set(MD5 "1bdbcecdd68ea8392630467646776e02")
else()
@@ -22,7 +19,6 @@ ExternalProject_Add(onnx
#URL /home/txd/workspace/cloud_code/pkg/onnx-1.6.0.tar.gz
#URL_HASH SHA256=3b88c3fe521151651a0403c4d131cb2e0311bd28b753ef692020a432a81ce345
#SOURCE_DIR ${ONNX_SRC_DIR}
TLS_VERIFY OFF
CONFIGURE_COMMAND ""
BUILD_COMMAND ""
#INSTALL_COMMAND ""

@@ -26,7 +26,6 @@ set(protobuf_CXXFLAGS "-Wno-maybe-uninitialized -Wno-unused-parameter -fPIC -fst
set(protobuf_LDFLAGS "-Wl,-z,relro,-z,now,-z,noexecstack")
ExternalProject_Add(protobuf_build
URL ${REQ_URL}
TLS_VERIFY OFF
CONFIGURE_COMMAND ${CMAKE_COMMAND}
-Dprotobuf_WITH_ZLIB=OFF
-DCMAKE_INSTALL_LIBDIR=${CMAKE_INSTALL_LIBDIR}

@@ -1,3 +1,7 @@
if (HAVE_PROTOBUF_STATIC)
return()
endif()
include(ExternalProject)
include(GNUInstallDirs)
#set(CMAKE_INSTALL_PREFIX ${GE_CODE_DIR}/output)
@@ -27,7 +31,6 @@ ExternalProject_Add(protobuf_static_build
URL ${REQ_URL}
#URL /home/txd/workspace/linux_cmake/pkg/protobuf-3.8.0.tar.gz
#SOURCE_DIR ${METADEF_DIR}/../../third_party/protobuf/src/protobuf-3.8.0
TLS_VERIFY OFF
CONFIGURE_COMMAND ${CMAKE_COMMAND}
-DCMAKE_C_COMPILER=${CMAKE_C_COMPILER}
-DCMAKE_CXX_COMPILER=${CMAKE_CXX_COMPILER}
@@ -58,3 +61,5 @@ include_directories(${PROTOBUF_STATIC_PKG_DIR}/include)
endif ()
add_dependencies(ascend_protobuf_static protobuf_static_build)
set(HAVE_PROTOBUF_STATIC TRUE)

@@ -30,7 +30,6 @@ ExternalProject_Add(protoc_build
URL ${REQ_URL}
#URL /home/txd/workspace/linux_cmake/pkg/protobuf-3.8.0.tar.gz
#SOURCE_DIR ${GE_CODE_DIR}/../third_party/protobuf/src/protobuf-3.8.0
TLS_VERIFY OFF
CONFIGURE_COMMAND ${CMAKE_COMMAND} -Dprotobuf_WITH_ZLIB=OFF -Dprotobuf_BUILD_TESTS=OFF -DBUILD_SHARED_LIBS=OFF -DCMAKE_CXX_FLAGS=${protobuf_CXXFLAGS} -DCMAKE_CXX_LDFLAGS=${protobuf_LDFLAGS} -DCMAKE_INSTALL_PREFIX=${CMAKE_INSTALL_PREFIX}/protoc <SOURCE_DIR>/cmake
BUILD_COMMAND $(MAKE)
INSTALL_COMMAND $(MAKE) install

@@ -10,20 +10,11 @@ if ((${CMAKE_INSTALL_PREFIX} STREQUAL /usr/local) OR
message(STATUS "No install prefix selected, default to ${CMAKE_INSTALL_PREFIX}.")
endif()
if (GE_PB_PKG)
set(REQ_URL "${GE_PB_PKG}/libs/securec/v1.1.10.tar.gz")
set(MD5 "")
else()
set(REQ_URL "https://gitee.com/openeuler/libboundscheck/repository/archive/v1.1.10.tar.gz")
set(MD5 "")
endif ()
ExternalProject_Add(c_sec_build
URL ${REQ_URL}
#URL https://gitee.com/openeuler/libboundscheck/repository/archive/v1.1.10.tar.gz
URL https://gitee.com/openeuler/libboundscheck/repository/archive/v1.1.10.tar.gz
#URL /home/txd/workspace/linux_cmake/pkg/protobuf-3.8.0.tar.gz
#SOURCE_DIR ${GE_CODE_DIR}/../libc_sec
PATCH_COMMAND patch -p1 < ${GE_CODE_DIR}/metadef/third_party/patch/securec/0001-add-securec-cmake-script.patch
TLS_VERIFY OFF
CONFIGURE_COMMAND ${CMAKE_COMMAND}
-DCMAKE_C_COMPILER=${CMAKE_C_COMPILER}
-DCMAKE_CXX_COMPILER=${CMAKE_CXX_COMPILER}

@@ -16,6 +16,7 @@ target_compile_definitions(intf_pub INTERFACE
$<$<CONFIG:Debug>:CFG_BUILD_DEBUG>
WIN64=1
LINUX=0
LOG_CPP
)
target_link_options(intf_pub INTERFACE
-Wl,-z,relro

@@ -620,6 +620,7 @@ target_compile_definitions(ge_runner PRIVATE
FMK_SUPPORT_DUMP
DAVINCI_CLOUD
google=ascend_private
$<$<STREQUAL:${ENABLE_OPEN_SRC},True>:ONLY_COMPILE_OPEN_SRC>
)
target_compile_options(ge_runner PRIVATE
@@ -687,6 +688,7 @@ target_compile_definitions(ge_compiler PRIVATE
FMK_HOST_INFER
COMPILE_OMG_PACKAGE
google=ascend_private
$<$<STREQUAL:${ENABLE_OPEN_SRC},True>:ONLY_COMPILE_OPEN_SRC>
)
target_compile_options(ge_compiler PRIVATE

@@ -32,6 +32,9 @@
#include "graph/common/ge_call_wrapper.h"
#include "register/op_registry.h"
#include "common/ge/tbe_plugin_manager.h"
#ifndef ONLY_COMPILE_OPEN_SRC
#include "toolchain/plog.h"
#endif
using domi::OpRegistry;
using std::map;
@@ -129,6 +132,11 @@ Status GEInitializeImpl(const std::map<string, string> &options) {
// Initialize GE, prepare for execution, call GELib::Initialize
Status GEInitialize(const std::map<string, string> &options) {
#ifndef ONLY_COMPILE_OPEN_SRC
if (DlogReportInitialize() != SUCCESS) {
GELOGW("Dlog report device log initialize failed.");
}
#endif
return GEInitializeImpl(options);
}
@@ -143,6 +151,11 @@ Status GEInitialize(const std::map<AscendString, AscendString> &options) {
std::string val = option.second.GetString();
str_options[key] = val;
}
#ifndef ONLY_COMPILE_OPEN_SRC
if (DlogReportInitialize() != SUCCESS) {
GELOGW("Dlog report device log initialize failed.");
}
#endif
return GEInitializeImpl(str_options);
}
@@ -187,6 +200,12 @@ Status GEFinalize() {
// to avoid memory fragment, use malloc_trim to back free stack to system
malloc_trim(0);
#ifndef ONLY_COMPILE_OPEN_SRC
if (DlogReportFinalize() != SUCCESS) {
GELOGW("Dlog report device log finalize failed.");
}
#endif
GELOGT(TRACE_STOP, "GEFinalize finished");
return ret;
}

@@ -163,7 +163,7 @@ target_include_directories(ge_common_static PRIVATE
target_link_libraries(ge_common_static PRIVATE
$<BUILD_INTERFACE:intf_pub>
ascend_protobuf
ascend_protobuf_static
json
c_sec
$<$<NOT:$<STREQUAL:${TARGET_SYSTEM_NAME},Android>>:-lrt>

@@ -94,9 +94,6 @@ Status DumpOp::DumpOutput(aicpu::dump::Task &task) {
for (auto dim : output_descs.at(i).GetShape().GetDims()) {
output.mutable_shape()->add_dim(dim);
}
for (auto dim : output_descs.at(i).GetOriginShape().GetDims()) {
output.mutable_origin_shape()->add_dim(dim);
}
int64_t output_size = 0;
if (TensorUtils::GetTensorSizeInBytes(output_descs.at(i), output_size) != SUCCESS) {
GELOGE(PARAM_INVALID, "Get output size filed");
@@ -121,9 +118,6 @@ Status DumpOp::DumpInput(aicpu::dump::Task &task) {
for (auto dim : input_descs.at(i).GetShape().GetDims()) {
input.mutable_shape()->add_dim(dim);
}
for (auto dim : input_descs.at(i).GetOriginShape().GetDims()) {
input.mutable_origin_shape()->add_dim(dim);
}
int64_t input_size = 0;
if (TensorUtils::GetTensorSizeInBytes(input_descs.at(i), input_size) != SUCCESS) {
GELOGE(PARAM_INVALID, "Get output size filed");
@@ -220,15 +214,8 @@ Status DumpOp::LaunchDumpOp() {
SetOpMappingLoopAddr(global_step_, loop_per_iter_, loop_cond_, op_mapping_info);
GELOGI("Dump step is %s ,dump path is %s ,in Launch dump op", dump_properties_.GetDumpStep().c_str(),
dump_path.c_str());
uint32_t task_id = 0;
uint32_t stream_id = 0;
rt_ret = rtGetTaskIdAndStreamID(&task_id, &stream_id);
if (rt_ret != RT_ERROR_NONE) {
GELOGW("call rtGetTaskIdAndStreamID failed, ret = 0x%X", rt_ret);
}
aicpu::dump::Task task;
task.set_task_id(task_id);
task.set_stream_id(stream_id);
task.mutable_op()->set_op_name(op_desc_->GetName());
task.mutable_op()->set_op_type(op_desc_->GetType());
if (dump_properties_.GetDumpMode() == kDumpOutput) {

@@ -192,7 +192,7 @@ void TBEPluginManager::LoadCustomOpLib() {
if (std::to_string(reg_data.GetFrameworkType()) == fmk_type) {
GELOGD("Begin to register optype: %s, imply_type: %s", reg_data.GetOmOptype().c_str(),
TypeUtils::ImplyTypeToSerialString(reg_data.GetImplyType()).c_str());
(void)domi::OpRegistry::Instance()->Register(reg_data);
domi::OpRegistry::Instance()->Register(reg_data);
}
}
}

@@ -89,13 +89,12 @@ ge::Status ProfilingManager::InitFromOptions(const Options &options, MsprofGeOpt
#ifdef DAVINCI_SUPPORT_PROFILING
// enable profiling by env
char env_profiling_mode[MMPA_MAX_PATH] = { 0x00 };
is_load_profiling_ = false; // Change in ProfInit
is_execute_profiling_ = false;
if (options.profiling_mode == "1" && !options.profiling_options.empty()) {
// enable profiling by ge option
if (memcpy_s(prof_conf.options, MSPROF_OPTIONS_DEF_LEN_MAX, options.profiling_options.c_str(),
options.profiling_options.size()) != EOK) {
if (strncpy_s(prof_conf.options, MSPROF_OPTIONS_DEF_LEN_MAX, options.profiling_options.c_str(),
MSPROF_OPTIONS_DEF_LEN_MAX - 1) != EOK) {
GELOGE(INTERNAL_ERROR, "copy profiling_options failed.");
return INTERNAL_ERROR;
}
@@ -125,11 +124,12 @@ ge::Status ProfilingManager::InitFromOptions(const Options &options, MsprofGeOpt
return ge::PARAM_INVALID;
}
if (memcpy_s(prof_conf.jobId, sizeof(prof_conf.jobId), options.job_id.c_str(),
sizeof(options.job_id.c_str())) != EOK) {
if (strncpy_s(prof_conf.jobId, MSPROF_OPTIONS_DEF_LEN_MAX, options.job_id.c_str(),
MSPROF_OPTIONS_DEF_LEN_MAX - 1) != EOK) {
GELOGE(INTERNAL_ERROR, "copy job_id failed.");
return INTERNAL_ERROR;
}
GELOGI("Job id: %s, original job id: %s.", prof_conf.jobId, options.job_id.c_str());
#endif
return ge::SUCCESS;
}
@@ -159,6 +159,7 @@ ge::Status ProfilingManager::ParseOptions(const std::string &options) {
if (!fp_point_.empty() && !bp_point_.empty()) {
GELOGI("Training trace bp fp is set, bp_point:%s, fp_point:%s.", bp_point_.c_str(), fp_point_.c_str());
}
is_training_trace_ = true;
} catch (...) {
GELOGE(FAILED, "Json prof_conf options is invalid.");
return ge::PARAM_INVALID;
@@ -212,16 +213,12 @@ FMK_FUNC_HOST_VISIBILITY FMK_FUNC_DEV_VISIBILITY void ProfilingManager::Profilin
uint32_t block_dim = task.block_dim;
uint32_t task_id = task.task_id;
uint32_t stream_id = task.stream_id;
std::string shape_type = task.shape_type;
int64_t cur_iter_num = task.cur_iter_num;
data = model_name.append(" ")
.append(op_name).append(" ")
.append(std::to_string(block_dim)).append(" ")
.append(std::to_string(block_dim)).append(" ")
.append(std::to_string(task_id)).append(" ")
.append(std::to_string(stream_id)).append(" ")
.append(std::to_string(model_id)).append(" ")
.append(shape_type).append(" ")
.append(std::to_string(cur_iter_num)).append("\n");
.append(std::to_string(model_id)).append("\n");
ReporterData reporter_data{};
reporter_data.deviceId = device_id;
@@ -632,6 +629,10 @@ FMK_FUNC_HOST_VISIBILITY FMK_FUNC_DEV_VISIBILITY Status ProfilingManager::ProfSt
uint64_t module, const std::map<std::string, std::string> &config_para) {
#ifdef DAVINCI_SUPPORT_PROFILING
std::lock_guard<std::mutex> lock(mutex_);
uint64_t training_trace_mask = module & PROF_TRAINING_TRACE_MASK;
if (training_trace_mask == PROF_TRAINING_TRACE_MASK) {
is_training_trace_ = true;
}
int32_t device_num = 0;
vector<int32_t> device_list;
if (ProfParseParam(config_para, device_num, device_list) != SUCCESS) {
@@ -846,7 +847,6 @@ FMK_FUNC_HOST_VISIBILITY FMK_FUNC_DEV_VISIBILITY void ProfilingManager::GetFpBpP
return;
}
}
return;
}

@@ -15,7 +15,6 @@ message Output {
int32 original_output_data_type = 7;
int32 original_output_format = 8;
uint64 size = 9;
Shape origin_shape = 10;
}
message Input {
@@ -24,7 +23,6 @@ message Input {
Shape shape = 3;
uint64 address = 4;
uint64 size = 5;
Shape origin_shape = 6;
}
enum BufferType {

@@ -1,3 +1,11 @@
/**
* This file is part of Open Source Software TensorFlow, version 1.15.0 https://github.com/tensorflow/tensorflow
*
* This file is included by GraphEngine so as to support model format conversion from tensorflow model to GraphEngine model.
* This file in this distribution may have been modified by Huawei Technologies Co., Ltd ("Huawei Modifications").
* All Huawei Modifications are Copyright 2019-2020 Huawei Technologies Co., Ltd.
*/
syntax = "proto3";
package domi.tensorflow;

@@ -1,3 +1,11 @@
/**
* This file is part of Open Source Software TensorFlow, version 1.15.0 https://github.com/tensorflow/tensorflow
*
* This file is included by GraphEngine so as to support model format conversion from tensorflow model to GraphEngine model.
* This file in this distribution may have been modified by Huawei Technologies Co., Ltd ("Huawei Modifications").
* All Huawei Modifications are Copyright 2019-2020 Huawei Technologies Co., Ltd.
*/
syntax = "proto3";
package domi.tensorflow;

@@ -1,3 +1,11 @@
/**
* This file is part of Open Source Software TensorFlow, version 1.15.0 https://github.com/tensorflow/tensorflow
*
* This file is included by GraphEngine so as to support model format conversion from tensorflow model to GraphEngine model.
* This file in this distribution may have been modified by Huawei Technologies Co., Ltd ("Huawei Modifications").
* All Huawei Modifications are Copyright 2019-2020 Huawei Technologies Co., Ltd.
*/
syntax = "proto3";
package domi.tensorflow;

@@ -1,3 +1,11 @@
/**
* This file is part of Open Source Software TensorFlow, version 1.15.0 https://github.com/tensorflow/tensorflow
*
* This file is included by GraphEngine so as to support model format conversion from tensorflow model to GraphEngine model.
* This file in this distribution may have been modified by Huawei Technologies Co., Ltd ("Huawei Modifications").
* All Huawei Modifications are Copyright 2019-2020 Huawei Technologies Co., Ltd.
*/
syntax = "proto3";
package domi.tensorflow;

@@ -1,3 +1,11 @@
/**
* This file is part of Open Source Software TensorFlow, version 1.15.0 https://github.com/tensorflow/tensorflow
*
* This file is included by GraphEngine so as to support model format conversion from tensorflow model to GraphEngine model.
* This file in this distribution may have been modified by Huawei Technologies Co., Ltd ("Huawei Modifications").
* All Huawei Modifications are Copyright 2019-2020 Huawei Technologies Co., Ltd.
*/
syntax = "proto3";
package domi.tensorflow;

Some files were not shown because too many files have changed in this diff.
