PaddlePaddle


Welcome to the PaddlePaddle GitHub repository.

PaddlePaddle (PArallel Distributed Deep LEarning) is an easy-to-use, efficient, flexible and scalable deep learning platform. It was originally developed by Baidu scientists and engineers to apply deep learning to many products at Baidu.

Our vision is to enable deep learning for everyone via PaddlePaddle. Please refer to our release announcement to track the latest features of PaddlePaddle.

Features

  • Flexibility

    PaddlePaddle supports a wide range of neural network architectures and optimization algorithms. It is easy to configure complex models, such as a neural machine translation model with an attention mechanism or complex memory connections; see the configuration sketch after this list.

  • Efficiency

    In order to unleash the power of heterogeneous computing resources, PaddlePaddle applies optimizations at several levels, including computation, memory, architecture and communication. The following are some examples:

    • Optimized math operations through SSE/AVX intrinsics, BLAS libraries (e.g. MKL, ATLAS, cuBLAS) or customized CPU/GPU kernels.
    • Highly optimized recurrent networks, which can handle variable-length sequences without padding.
    • Optimized local and distributed training for models with high-dimensional sparse data.
  • Scalability

    With PaddlePaddle, it is easy to use many CPUs/GPUs and machines to speed up your training. PaddlePaddle can achieve high throughput and performance via optimized communication.

  • Connected to Products

    In addition, PaddlePaddle is designed to be easily deployable. At Baidu, PaddlePaddle has been deployed into products and services with a vast number of users, including ad click-through rate (CTR) prediction, large-scale image classification, optical character recognition (OCR), search ranking, computer virus detection, and recommendation. It is widely used in products at Baidu and has had a significant impact. We hope you can also exploit the capability of PaddlePaddle to make a big impact on your own products.
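
To illustrate the flexibility point above, the snippet below configures a tiny fully connected network. It is a minimal sketch rather than an official tutorial: it assumes the v2 Python API (paddle.v2) shipped with this release, and the input dimension and layer sizes are arbitrary placeholders.

```python
# Hypothetical model-configuration sketch (assumes the paddle.v2 Python API of this release).
import paddle.v2 as paddle

# Initialize PaddlePaddle on CPU with a single trainer thread.
paddle.init(use_gpu=False, trainer_count=1)

# A 13-dimensional dense input; the size is an arbitrary placeholder.
x = paddle.layer.data(name='x', type=paddle.data_type.dense_vector(13))

# A small fully connected hidden layer followed by a one-unit linear output.
hidden = paddle.layer.fc(input=x, size=32, act=paddle.activation.Tanh())
y_predict = paddle.layer.fc(input=hidden, size=1, act=paddle.activation.Linear())
```

More complex topologies, such as attention-based translation models, are built by composing the same kinds of layer calls; see the demo directory and the documentation for complete examples.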

Installation

It is recommended to check out the Docker installation guide before looking into the build-from-source guide. A quick post-installation check is sketched below.
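
The following smoke test is a hedged sketch of how to verify an installation (for example, inside the Docker image): it assumes the paddle.v2 module is importable and that the init arguments match this release; both may differ in other versions.

```python
# Post-installation smoke test (assumes the paddle.v2 module is on the Python path).
import paddle.v2 as paddle

# Initialize on CPU with one trainer thread; use_gpu=True requires a CUDA-enabled build.
paddle.init(use_gpu=False, trainer_count=1)
print('PaddlePaddle imported and initialized successfully')
```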

Documentation

We provide English and Chinese documentation.

Ask Questions

You are welcome to submit questions and bug reports as GitHub Issues.

PaddlePaddle is provided under the Apache-2.0 license.