Embed Paddle Inference in Your Application

Paddle inference offers APIs in C and C++.

One can easily deploy a model trained by Paddle by following the steps below:

  1. Optimize the native model;
  2. Write some code for deployment.

Let's explain the steps in detail.

Optimize the native Fluid Model

The native model obtained from the training phase needs to be optimized for deployment. The optimizations include:

  • Cleaning out noise such as cost operators, which are not needed for inference;
  • Pruning computation branches that have nothing to do with the output;
  • Removing extraneous variables;
  • Reusing memory in the native Fluid executor;
  • Translating the model storage format to a third-party engine's, so that the inference API can utilize the engine for acceleration.

We provide an official tool to do the optimization; run `paddle_inference_optimize --help` for more information.

Write some code

Read `paddle_inference_api.h` for more information.
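
As a starting point, here is a minimal sketch of a deployment program built on this API. The model directory, the input name `image`, and the 1x3x224x224 input shape are hypothetical placeholders; substitute your own model's values:

```c++
#include <vector>

#include "paddle_inference_api.h"

int main() {
  // Configure the native predictor; the model directory below is a
  // hypothetical placeholder for your optimized model.
  paddle::NativeConfig config;
  config.model_dir = "./my_model";
  config.use_gpu = false;

  // Create a predictor bound to that config.
  auto predictor =
      paddle::CreatePaddlePredictor<paddle::NativeConfig>(config);

  // Prepare one input tensor. The name, shape, and zero-filled data
  // are assumptions for illustration only.
  std::vector<float> data(1 * 3 * 224 * 224, 0.f);
  paddle::PaddleTensor input;
  input.name = "image";
  input.shape = {1, 3, 224, 224};
  input.data = paddle::PaddleBuf(data.data(), data.size() * sizeof(float));
  input.dtype = paddle::PaddleDType::FLOAT32;

  // Run inference; the predictor fills the output tensors.
  std::vector<paddle::PaddleTensor> outputs;
  if (!predictor->Run({input}, &outputs)) {
    return 1;
  }
  return 0;
}
```

For complete, buildable examples, including the CMake setup for linking against the inference library, see the demo_ci directory next to this README.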