# Embed Paddle Inference in Your Application

Paddle inference offers APIs in C and C++.

One can easily deploy a model trained by Paddle by following the steps below:

1. Optimize the native model;
2. Write some code for deployment.

Let's explain the steps in detail.

## Optimize the native Fluid Model

The native model obtained from the training phase needs to be optimized for deployment:

- Clean up the noise, such as cost operators that are not needed for inference;
- Prune unnecessary computation branches that have nothing to do with the output;
- Remove extraneous variables;
- Reuse memory in the native Fluid executor;
- Translate the model storage format to that of a third-party engine, so that the inference API can utilize the engine for acceleration.

We have an official tool for this optimization; run `paddle_inference_optimize --help` for more information.

## Write some code

Read `paddle_inference_api.h` for more information.
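
For a concrete feel of the deployment code, here is a minimal sketch using the native predictor declared in `paddle_inference_api.h`. The model directory `./model`, the input shape `{1, 224}`, and the single float input are all illustrative assumptions, and the include path may differ depending on how the library is installed; consult the header for the authoritative interface.

```c++
#include <utility>
#include <vector>

#include "paddle_inference_api.h"  // include path depends on your install layout

int main() {
  // Configure a native (plain Fluid executor) predictor.
  // "./model" is an illustrative path to an optimized Fluid model directory.
  paddle::NativeConfig config;
  config.model_dir = "./model";
  config.use_gpu = false;

  auto predictor = paddle::CreatePaddlePredictor<paddle::NativeConfig>(config);

  // Prepare one float input. The shape {1, 224} is a placeholder and must
  // match the model's actual input.
  std::vector<float> buffer(1 * 224, 0.f);
  paddle::PaddleTensor input;
  input.shape = {1, 224};
  input.data = paddle::PaddleBuf(buffer.data(), buffer.size() * sizeof(float));
  input.dtype = paddle::PaddleDType::FLOAT32;

  // Run inference; the predictor fills the output tensors.
  std::vector<paddle::PaddleTensor> inputs;
  inputs.emplace_back(std::move(input));
  std::vector<paddle::PaddleTensor> outputs;
  predictor->Run(inputs, &outputs);
  return 0;
}
```

Each input is described by a `PaddleTensor` carrying a name, a shape, a raw buffer, and a dtype, so the same `Run` call serves any model signature.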