Paddle/paddle/fluid/inference/api
demo_ci/
details/
CMakeLists.txt
README.md
analysis_config.cc
analysis_predictor.cc
analysis_predictor.h
analysis_predictor_tester.cc
api.cc
api_anakin_engine.cc
api_anakin_engine.h
api_impl.cc
api_impl.h
api_impl_tester.cc
api_tester.cc
helper.cc
helper.h
high_level_api.md
high_level_api_cn.md
paddle_anakin_config.h
paddle_analysis_config.h
paddle_api.h
paddle_inference_api.h
paddle_pass_builder.cc
paddle_pass_builder.h

README.md

Embed Paddle Inference in Your Application

Paddle Inference offers APIs in both C and C++.

You can easily deploy a model trained with Paddle by following the steps below:

  1. Optimize the native model;
  2. Write some code for deployment (see the example sketch under "Write some code" below).

The APIs

All the released APIs are located in the paddle_inference_api.h header file. The stable APIs live in the paddle namespace, while the unstable APIs are kept in the paddle::contrib namespace.
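As a rough illustration of this namespace split, here is a minimal sketch. The exact set of classes in each namespace can change between releases, so the class names and the model_dir value below are assumptions for illustration, not a definitive list; paddle_inference_api.h is the source of truth.

```cpp
#include "paddle_inference_api.h"

void NamespaceSketch() {
  // Stable API: referenced directly through namespace paddle.
  paddle::NativeConfig native_config;
  native_config.model_dir = "./my_model";  // hypothetical model directory

  // Unstable/experimental API: referenced through namespace paddle::contrib.
  // AnalysisConfig is used here as an assumed example of a contrib-namespace class.
  paddle::contrib::AnalysisConfig analysis_config;
  (void)analysis_config;  // illustration only
}
```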

Write some code

Read paddle_inference_api.h for more information.
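To give a feel for what deployment code looks like, here is a minimal sketch built around NativeConfig and CreatePaddlePredictor from paddle_inference_api.h. The model directory, input tensor name, and input shape are placeholder assumptions and must be replaced to match the model actually being deployed; consult the header for the authoritative signatures.

```cpp
#include <vector>

#include "paddle_inference_api.h"

int main() {
  // Configure a predictor for a CPU run; the model directory is a placeholder.
  paddle::NativeConfig config;
  config.model_dir = "./my_model";  // hypothetical path to the saved model
  config.use_gpu = false;

  // Create the predictor from the configuration.
  auto predictor = paddle::CreatePaddlePredictor<paddle::NativeConfig>(config);

  // Prepare one FLOAT32 input tensor; the name and shape are assumptions.
  std::vector<float> input_data(1 * 3 * 224 * 224, 0.f);
  paddle::PaddleTensor input;
  input.name = "image";
  input.shape = {1, 3, 224, 224};
  input.dtype = paddle::PaddleDType::FLOAT32;
  input.data = paddle::PaddleBuf(input_data.data(),
                                 input_data.size() * sizeof(float));

  // Run inference and collect the output tensors.
  std::vector<paddle::PaddleTensor> outputs;
  if (!predictor->Run({input}, &outputs)) {
    return 1;  // inference failed
  }
  return 0;
}
```

The demo_ci directory alongside this README contains more complete, buildable examples that exercise these APIs end to end.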