Compare commits

...

8 Commits

| Author | SHA1 | Message | Date |
|---|---|---|---|
| inspur-inna | dd2319cd56 | update README.md. | 6 years ago |
| Qichun Cao (曹其春) | 75d1812d23 | remove submodule | 6 years ago |
| Qichun Cao (曹其春) | e18acdd4a6 | add submodule caffe-mpi | 6 years ago |
| inspur-inna | 86059640d2 | update README.md. | 6 years ago |
| inspur-inna | c4bd3d1015 | update README.md. | 6 years ago |
| inspur-cqc | 05719114b9 | update install script | 6 years ago |
| inspur-cqc | 73e5039c10 | Project Content Description | 6 years ago |
| caoqichun | 33e6476f10 | use tvm.Relay replace nnvm | 6 years ago |

@ -1,30 +1,21 @@
![Image text](https://github.com/inspur-inna/inspur-inna/blob/master/Image/inspur.png)
![Image text](https://gitee.com/inspur-inna/inspur-inna/raw/master/Image/inspur.png)
# FPGA-Based Adaptive CNN Mapping Technology --- inspur-inna
A macro-instruction-based Look-Aside Acceleration framework
- One-click rapid deployment
- Hardware/software co-optimization
- Support for multiple convolution types
- Execution without host intervention
# FPGA-Based Adaptive CNN Mapping Technology --- inna1.0
This project designs and optimizes a deep-learning accelerator on an FPGA board, aiming for industry-leading overall performance and power consumption. The mapping technology uses a macro-instruction-based Look-Aside Acceleration framework, providing one-click rapid deployment, hardware/software co-optimization, support for multiple convolution types, and execution without host intervention. This repository is the software side of the mapping technology and aims to implement a CNN mapping compiler and a CNN quantizer. First, the model file produced by TensorFlow is parsed into the CNN's computation graph. Based on the parsed graph and the existing CNN acceleration library, the CNN mapping compiler selects the appropriate library units and generates the corresponding hardware structure and scheduler configuration parameters, balancing compute, on-chip storage, on-chip bandwidth, and off-chip bandwidth to achieve optimal compute performance. Based on the model's weight file, the CNN quantizer applies 8-bit fixed-point quantization to each layer's data to suit FPGA DSP computation, reducing storage overhead, increasing processing speed, and lowering power consumption while preserving accuracy.
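The per-layer 8-bit fixed-point step described above can be sketched as symmetric linear quantization. This is an illustrative sketch only; the function name and the symmetric max-abs scale choice are assumptions, not the project's actual quantizer.

```python
import numpy as np

def quantize_int8(weights):
    # Symmetric linear quantization: map the largest-magnitude
    # weight to 127 and round everything else to the nearest step.
    max_abs = np.max(np.abs(weights))
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(weights / scale), -128, 127).astype(np.int8)
    return q, scale

# Quantize one (toy) layer and check the round-trip error.
w = np.array([0.5, -1.27, 0.01, 1.27], dtype=np.float32)
q, scale = quantize_int8(w)
dequantized = q.astype(np.float32) * scale
```

The int8 values feed the FPGA's DSP blocks, while the per-layer `scale` is kept to recover approximate real values (`w ≈ q * scale`).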
## Install
### TVM source code install
TVM requires LLVM. On Ubuntu, LLVM can be installed from the package manager; other systems require building LLVM from source.
```bash
apt search llvm
apt install llvm-6.0
apt install clang-6.0
```
TVM installation from source: <https://tvm.apache.org/docs/install/from_source.html>
### inna install
Install Miniconda with python=3.6. The install_inna.sh script includes the TVM install steps; refer to the TVM guide at <https://tvm.apache.org/docs/install/from_source.html>.
```bash
conda create -n inna python=3.6 ipykernel -y
conda activate inna
```

@ -0,0 +1,19 @@
# Minimal makefile for Sphinx documentation
#
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = sphinx-build
SOURCEDIR = source
BUILDDIR = build
# Put it first so that "make" without argument is like "make help".
help:
	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

.PHONY: help Makefile

# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
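With this catch-all rule, any target name is forwarded to sphinx-build's "make mode"; for example (assuming Sphinx is installed and run from the docs directory):

```shell
# "make html" is routed to: sphinx-build -M html source build
make html
```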

@ -0,0 +1,35 @@
@ECHO OFF
pushd %~dp0
REM Command file for Sphinx documentation
if "%SPHINXBUILD%" == "" (
set SPHINXBUILD=sphinx-build
)
set SOURCEDIR=source
set BUILDDIR=build
if "%1" == "" goto help
%SPHINXBUILD% >NUL 2>NUL
if errorlevel 9009 (
echo.
echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
echo.installed, then set the SPHINXBUILD environment variable to point
echo.to the full path of the 'sphinx-build' executable. Alternatively you
echo.may add the Sphinx directory to PATH.
echo.
echo.If you don't have Sphinx installed, grab it from
echo.http://sphinx-doc.org/
exit /b 1
)
%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS%
goto end
:help
%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS%
:end
popd

@ -0,0 +1,6 @@
Python API
==========
.. toctree::

   inna

File diff suppressed because it is too large.

@ -0,0 +1,21 @@
.. inna documentation master file, created by
   sphinx-quickstart on Mon Apr 8 10:09:32 2019.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.
Welcome to INNA's documentation!
================================
.. toctree::
   :maxdepth: 2
   :caption: Contents:

   install
   api
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`

@ -0,0 +1,9 @@
inna.compiler
==============
.. autofunction:: inna.compiler.create
.. autoclass:: inna.compiler.INNACompiler
   :members:

   .. automethod:: __init__

@ -0,0 +1,5 @@
inna.quantizer
===============
.. automodule:: inna.quantizer
   :members:

@ -0,0 +1,8 @@
inna
=====
.. toctree::

   inna.compiler
   inna.quantizer
   inna.runtime

@ -0,0 +1,28 @@
inna.runtime
=============
.. autofunction:: inna.runtime.create
.. autoclass:: inna.runtime.INNARuntime

   .. automethod:: __init__
   .. automethod:: run
   .. automethod:: ReloadNNModel
   .. automethod:: SetExtendHWBatch
   .. automethod:: SetInputFeatures
   .. automethod:: Run
   .. automethod:: Wait
   .. automethod:: GetOutputFeatures
   .. automethod:: WriteRegister
   .. automethod:: ReadRegisterU8
   .. automethod:: ReadRegisterU16
   .. automethod:: ReadRegisterU32
   .. automethod:: ReadRegisterU64
   .. automethod:: WriteDMA
   .. automethod:: ReadDMA
   .. automethod:: WaitPcieInterupt
.. autofunction:: inna.runtime.check_equal

@ -0,0 +1,19 @@
Installation
============
INNA uses NNVM as its frontend compiler. To get started, install TVM by
following `TVM Install from
Source <https://docs.tvm.ai/install/from_source.html#install-from-source>`__.
After installing TVM, clone the INNA repo from the git server.

.. code:: bash

   git clone git@100.2.95.100:fanbaoyu/inna.git

Install INNA through ``pip install``.

.. code:: bash

   pip install ./inna

@ -0,0 +1,10 @@
Metadata-Version: 1.0
Name: inna
Version: 0.0.1
Summary: Inspur Neural Network Accelerator
Home-page: UNKNOWN
Author: Fan Baoyu
Author-email: fanbaoyu@inspur.com
License: UNKNOWN
Description: UNKNOWN
Platform: UNKNOWN

@ -0,0 +1,30 @@
setup.py
inna/__init__.py
inna/config.ini
inna.egg-info/PKG-INFO
inna.egg-info/SOURCES.txt
inna.egg-info/dependency_links.txt
inna.egg-info/not-zip-safe
inna.egg-info/requires.txt
inna.egg-info/top_level.txt
inna/compiler/__init__.py
inna/compiler/assembler.py
inna/compiler/compiler.py
inna/compiler/converter.py
inna/compiler/frontend.py
inna/compiler/scheduler.py
inna/compiler/test.py
inna/compiler/applications/__init__.py
inna/compiler/applications/resnet/__init__.py
inna/compiler/applications/resnet/keras.py
inna/compiler/applications/resnet/mxnet.py
inna/compiler/applications/resnet/onnx.py
inna/compiler/applications/resnet/tensorflow.py
inna/quantizer/__init__.py
inna/quantizer/quantize.py
inna/quantizer/quantize_base.py
inna/quantizer/quantize_caffe.py
inna/quantizer/quantize_tf.py
inna/runtime/__init__.py
inna/runtime/runtime.cc
inna/runtime/runtime.py

@ -0,0 +1,2 @@
pybind11>=2.2
numpy>=1.16

@ -0,0 +1,5 @@
from __future__ import absolute_import
from . import compiler
from . import quantizer
from . import runtime

@ -0,0 +1,3 @@
from __future__ import absolute_import
from .compiler import create, INNACompiler

@ -0,0 +1,3 @@
from __future__ import absolute_import
from . import resnet

@ -0,0 +1,6 @@
from __future__ import absolute_import
from . import tensorflow
from . import keras
from . import mxnet
from . import onnx

@ -0,0 +1,17 @@
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import keras
def resnet_v1_50():
    graph = keras.applications.resnet50.ResNet50(include_top=True, weights=None,
                                                 input_shape=(224, 224, 3), classes=1000)
    graph.load_weights('../models/keras/resnet/resnet50_weights.h5')
    shape_dict = {
        'input_1': (1, 3, 224, 224),
    }
    layout = 'NCHW'
    return graph, shape_dict, layout

@ -0,0 +1,14 @@
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from mxnet.gluon.model_zoo.vision import get_model
def resnet_v1_50():
    graph = get_model('resnet50_v1', pretrained=True)
    shape_dict = {
        'data': (1, 3, 224, 224),
    }
    layout = 'NCHW'
    return graph, shape_dict, layout

@ -0,0 +1,15 @@
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import onnx
def resnet_v1_50():
    graph = onnx.load('../models/onnx/resnet/restnet50_v1.1.onnx')
    shape_dict = {
        'gpu_0/data_0': (1, 3, 224, 224),
    }
    layout = 'NCHW'
    return graph, shape_dict, layout

Some files were not shown because too many files have changed in this diff.
