Paddle/paddle/fluid/inference/api
| File | Last commit | Last updated |
| --- | --- | --- |
| demo_ci | Update windows compiler and CI from VS2015 to VS2017 (#31652) | 4 years ago |
| details | update zero_copy_tensor_test.cc for build of gcc485, test=develop (#31470) | 4 years ago |
| CMakeLists.txt | Fix xpu compile and cipher symbol problem. (#31271) | 4 years ago |
| README.md | Combine Inference Analysis with IR (#13914) | 6 years ago |
| analysis_config.cc | [ROCM] update fluid inference for rocm (part1), test=develop (#31018) | 4 years ago |
| analysis_predictor.cc | [Paddle-TRT] yolobox (#31755) | 4 years ago |
| analysis_predictor.h | support xpu with analysis predictor, test=develop (#30832) | 4 years ago |
| analysis_predictor_tester.cc | [ROCM] update fluid inference for rocm (part1), test=develop (#31018) | 4 years ago |
| api.cc | Fix xpu compile and cipher symbol problem. (#31271) | 4 years ago |
| api_impl.cc | [ROCM] update fluid inference for rocm (part1), test=develop (#31018) | 4 years ago |
| api_impl.h | use iwyu clean include (#27267) | 4 years ago |
| api_impl_tester.cc | [ROCM] update fluid inference for rocm (part1), test=develop (#31018) | 4 years ago |
| api_tester.cc | Fix xpu compile and cipher symbol problem. (#31271) | 4 years ago |
| helper.cc | rollback paddle_inference_helper.h to helper.h | 6 years ago |
| helper.h | upgrade inference tensor apis, test=develop (#31402) | 4 years ago |
| high_level_api.md | update paddle_fluid.so to paddle_inference.so (#30850) | 4 years ago |
| high_level_api_cn.md | update paddle_fluid.so to paddle_inference.so (#30850) | 4 years ago |
| mkldnn_quantizer.cc | Enhance infer error info message (#26731) | 4 years ago |
| mkldnn_quantizer.h | INT8 Fully-connected (#17641) | 5 years ago |
| mkldnn_quantizer_config.cc | use iwyu clean include (#27267) | 4 years ago |
| paddle_analysis_config.h | bug fix of xpu lite engine, test=develop (#30918) | 4 years ago |
| paddle_api.h | upgrade inference tensor apis, test=develop (#31402) | 4 years ago |
| paddle_infer_declare.h | fix bug MD of compile, And add MD/STATIC/OPENBLAS inference lib check on windows (#27051) | 4 years ago |
| paddle_inference_api.h | upgrade inference tensor apis, test=develop (#31402) | 4 years ago |
| paddle_mkldnn_quantizer_config.h | support C++ inference shared library on windows (#24672) | 5 years ago |
| paddle_pass_builder.cc | OneDNN hardswish integration (#30211) | 4 years ago |
| paddle_pass_builder.h | support xpu with analysis predictor, test=develop (#30832) | 4 years ago |
| paddle_tensor.h | upgrade inference tensor apis, test=develop (#31402) | 4 years ago |

README.md

# Embed Paddle Inference in Your Application

Paddle Inference offers APIs in both C and C++.

You can easily deploy a model trained with Paddle by following the two steps below (a configuration sketch follows the list):

  1. Optimize the native model;
  2. Write some code for deployment.
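
Step 1 is largely handled by the analysis/IR passes that run when a predictor is created. The snippet below is only a minimal configuration sketch, not part of this README: the model paths and the chosen switches (IR optimization, oneDNN) are assumptions for illustration.

```cpp
#include "paddle_inference_api.h"

// Placeholder paths for a model exported by Paddle.
paddle::AnalysisConfig config;
config.SetModel("./mobilenet/__model__", "./mobilenet/params");
config.SwitchIrOptim(true);   // let the analysis passes optimize the graph
config.EnableMKLDNN();        // optional: oneDNN acceleration on CPU
// config.EnableUseGpu(100 /* initial memory pool in MB */, 0 /* device id */);
```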

## The APIs

All the released APIs are located in the `paddle_inference_api.h` header file. The stable APIs live in the `paddle` namespace, while the unstable ones are kept in the `paddle::contrib` namespace.
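
As a purely illustrative sketch (the exact classes in each namespace vary between releases), usage of the header and namespaces looks roughly like this:

```cpp
#include "paddle_inference_api.h"  // the single released header

// Stable APIs sit in the `paddle` namespace, for example:
paddle::NativeConfig native_config;
paddle::AnalysisConfig analysis_config;

// Experimental APIs are kept under `paddle::contrib` and may change
// between releases.
```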

## Write some code

Read paddle_inference_api.h for more information.
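
To make the above concrete, here is a hedged end-to-end sketch based on the zero-copy tensor interface exposed through `paddle_inference_api.h`. The model path, the 1x3x224x224 input shape, and the dummy data are assumptions for illustration, and exact method names can differ between Paddle releases.

```cpp
#include <functional>
#include <numeric>
#include <vector>

#include "paddle_inference_api.h"

int main() {
  // Configure and create the predictor (see the optimization sketch above).
  paddle::AnalysisConfig config;
  config.SetModel("./mobilenet");          // placeholder model directory
  config.SwitchUseFeedFetchOps(false);     // required for zero-copy tensors
  auto predictor = paddle::CreatePaddlePredictor(config);

  // Feed a dummy 1x3x224x224 float input (shape is an assumption).
  auto input_names = predictor->GetInputNames();
  auto input = predictor->GetInputTensor(input_names[0]);
  input->Reshape({1, 3, 224, 224});
  std::vector<float> input_data(1 * 3 * 224 * 224, 0.f);
  input->copy_from_cpu(input_data.data());

  // Run inference.
  predictor->ZeroCopyRun();

  // Copy the first output back to host memory.
  auto output_names = predictor->GetOutputNames();
  auto output = predictor->GetOutputTensor(output_names[0]);
  auto out_shape = output->shape();
  int out_num = std::accumulate(out_shape.begin(), out_shape.end(), 1,
                                std::multiplies<int>());
  std::vector<float> out_data(out_num);
  output->copy_to_cpu(out_data.data());
  return 0;
}
```

The demo_ci subdirectory listed above contains small, buildable examples that show the full compile-and-link setup.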