
# Embed Paddle Inference in Your Application

Paddle Inference offers APIs in both C and C++.

You can easily deploy a model trained with Paddle by following two steps:

  1. Optimize the native model;
  2. Write code for deployment.

## The APIs

All released APIs are declared in the `paddle_inference_api.h` header file. The stable APIs live in the namespace `paddle`, while the unstable APIs are isolated in the namespace `paddle::contrib`.

## Write some code

Read `paddle_inference_api.h` for more information.
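As a rough sketch of the deployment step, the example below configures a predictor, feeds it one input tensor, and runs inference through the stable `paddle` namespace APIs. The model path and input shape are placeholder assumptions; consult `paddle_inference_api.h` for the authoritative signatures and configuration options.

```cpp
#include <vector>
#include "paddle_inference_api.h"

int main() {
  // Point the config at an exported model directory.
  // "./mobilenet_model" is a hypothetical path for illustration.
  paddle::AnalysisConfig config;
  config.SetModel("./mobilenet_model");

  // Create a predictor bound to this configuration.
  auto predictor = paddle::CreatePaddlePredictor(config);

  // Prepare one float input tensor; the shape [1, 3, 224, 224]
  // is an assumption and must match your model's input.
  std::vector<float> input(1 * 3 * 224 * 224, 0.f);
  paddle::PaddleTensor tensor;
  tensor.shape = {1, 3, 224, 224};
  tensor.dtype = paddle::PaddleDType::FLOAT32;
  tensor.data =
      paddle::PaddleBuf(input.data(), input.size() * sizeof(float));

  // Run inference; output tensors are written into `outputs`.
  std::vector<paddle::PaddleTensor> outputs;
  predictor->Run({tensor}, &outputs);
  return 0;
}
```

Compile and link this against the pre-built inference library; the `demo_ci` directory in this folder contains buildable examples of the same pattern.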