* bugfix for warpctc
* fix warpctc commit id
* fix WARPCTC_WITH_HIP invalid
* Add logs to find out why libwarpctc.so cannot be dlopened
* fix warpctc commit id
* fix unit test test_warpctc_op
* Optimize the failure log for dlopen
* Delete extra changes
* fix warpctc commit id
* Add is_compiled_with_rocm for test_warpctc_op
* fix warpctc commit id
* Cancel the dlopen failure-reason optimization and move it to the next PR, since it makes the Windows CI fail
* fix code style problems
* support multihead_matmul_fuse_pass_v3
* fix compile problems
* embedding_eltwise_ln pass support lookup_table_v2
* support matmul and matmul_v2 in qkv matmul
* add deprecated for softmax_with_cross_entropy, test=develop
* test for deprecated in english doc, test=develop
* test deprecated for softmax_with_cross_entropy in english doc, test=develop
* fix readme and English doc for cross_entropy, test=develop
* rm test for softmax_with_cross_entropy deprecated, test=develop
* update readme for CrossEntropyLoss, test=develop
* fix readme format, test=develop
* fix readme format for cross_entropy, test=develop
* add softmax_switch and fix softlabel for cross_entropy, test=develop
* 1) recover softmax_with_cross_entropy in fluid; 2) change softmax_switch to use_softmax; 3) add an example of soft_label for cross_entropy, test=develop (a usage sketch of use_softmax follows this list)
* fix Example number for cross_entropy, test=develop
* fix code format, test=develop
* fix for CI-Coverage, test=develop
* fix ci-coverage for Non-ASCII character '\xe2' in file, test=develop
* fix ci-coverage for Non-ASCII character '\xe2' in nn.layer.loss.py, test=develop
* update description for doc when use_softmax=False, test=develop
* fix some docs and code example for cross_entropy, test=develop
* delete redundant description for soft_label parameter of cross_entropy, test=develop
* fix some comment for test_cross_entropy_loss.py, test=develop
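The commits above introduce a `use_softmax` switch for `cross_entropy`. A minimal usage sketch, assuming the flag works as described (skip the internal softmax when the input is already a probability distribution):
```
import paddle
import paddle.nn.functional as F

logits = paddle.rand([4, 10])
labels = paddle.randint(0, 10, shape=[4])

# default path: softmax is applied to the raw logits inside cross_entropy
loss = F.cross_entropy(logits, labels, use_softmax=True)

# if the input is already a probability distribution, skip the softmax
probs = F.softmax(logits)
loss_no_softmax = F.cross_entropy(probs, labels, use_softmax=False)
```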
* give shape-related constructor and reshape warning
* change line num to fit ut
* change ut to fit
* remove useless code
* call resize directly in constructor
* add roi_align_plugin
* add roi align unit_test
* add roi align serialization
* remove roi align static plugin because of batch dim issue
* refine roi align unittest and add fp16/serialization
* add trt roi align condition to op_teller
* refine error message
* remove unnecessary reshape layer
* trt affine channel converter
* add trt affine channel base test
* add trt affine channel NHWC
* remove asterisk for python2 compatibility
* fix rebase
* move LoDTensor to Tensor
* add dbg info
* affine channel converter only support NCHW
* scale,bias are parameters, use create_parameters api
* reduce test input size to not exceed the timelimit of ci
* refine affine channel unittest and add serialization/dynamic test
* change super to InferencePassTest for python2 compatibility
* fix affine channel fp16 serialize setting
* add multiclass_nms
* add multiclass_nms unittest
* add default enable_tensorrt_oss option
* refine multiclass nms unittest and add serialization/dynamic test
* change super to InferencePassTest for python2 compatibility
* refine multiclass nms unittest
* move out dynamic shape test due to ci timelimit
Our old `loop_body` function may return a single element when `loop_vars` contains only one element, which can cause a bug. The key point of this PR is forcing `loop_body` functions to always return a tuple.
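A minimal sketch of the failure mode and the fix; `to_tuple` below is illustrative, not the PR's actual helper:
```
# With a single loop variable, a body that returns a bare element instead of
# a 1-tuple breaks the unpacking that the while_loop machinery expects.
def loop_body(x):
    return x + 1  # old behavior: a bare element, not a tuple

# illustrative fix: normalize the body's return value to a tuple
def to_tuple(body):
    def wrapper(*args):
        out = body(*args)
        return out if isinstance(out, tuple) else (out,)
    return wrapper

[x] = to_tuple(loop_body)(0)  # the 1-element case now unpacks cleanly
```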
* fix tensorrt output variable reshape
* move padding shape x 1 x 1 in ernie to qkv and fc
* update layer name
* fix softmax when input is dynamic, fc not padding any more
* fix varlen
* move fc x_dim assert to op_teller
* nearest_interp op converter w/ dynamic/static
* fix data_layout include
* add trt nearest unit_test
* add nearest_interp NHWC test
* update trt nearest interp nhwc testcase
* remove asterisk for python2 compatibility
* add empty line to prevent conflict
* change the priority of out_h, out_w
* Support loading parameters from checkpoint to save quantized model
* Fix the unittest test_moving_average_abs_max_scale_op
* Add unittest of save_quantized_model from checkpoint
* Add comments to explain the function
* add softmax_switch for softmax_with_cross_entropy_op, test=develop
* delete using EigenMatrix in softmax_with_cross_entropy_op.h, test=develop
* add REGISTER_OP_VERSION for softmax_switch attr of softmax_with_cross_entropy_op, test=develop
* add precision on mac
* add judgment logic
* match file_ut.json on mac
* fix code format error
* fix error caused by the length of ut_lists exceeding the limit
* fix format error,notest,test=cpu
* fix code format error
* add Windows judgment in get_pr_ut
Fix Read-Only Attribute as while_loop Output:
Usually, our convert_while_loop call looks like:
```
[a, b, c] = paddle.jit.dy2static.convert_while_loop(
condition_name, body_name, [a, b, c])
```
where a, b, c are in loop_var_names.
However, if loop_var_names contains a property such as foo.x, we cannot
assign to the attribute as an output of convert_while_loop, because a Python
property can be a read-only attribute. To handle this case, we replace the
attributes that are outputs of convert_while_loop with generated variables;
then, if we find at runtime that the attribute is not read-only, we assign
to it. The generated statements look like:
```
[a, b, __attribute_variable_1] = paddle.jit.dy2static.convert_while_loop(
condition_name, body_name, [a, b, foo.x])
if not isinstance(getattr(type(foo), 'x', None), property): foo.x = __attribute_variable_1
```
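A self-contained illustration of why the guard is needed; this is plain Python, not the generated code itself:
```
class Foo:
    @property
    def x(self):  # read-only: no setter defined
        return 1

foo = Foo()
# `foo.x = 2` would raise AttributeError, so the generated guard checks first:
if not isinstance(getattr(type(foo), 'x', None), property):
    foo.x = 2  # only runs when x is an ordinary, writable attribute
print(foo.x)  # still 1: the property was left untouched
```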
* Decrease threshold for failed ut retry
* upgrade the retry method
* second upgrade of the retry method
* fix error
* Remove the comment lines
* test for modified_retry_times
* fix error
* fix some error
* fix error
* remove test content
* fix error
* Reduce duplicate code
* fix the bug when more than 10 UTs fail
* fix the bug when more than 10 UTs fail on mac
* support trt serialize when load model from memory
* delete conv_bn_fuse_pass before tensorrt, since it makes the trt serialized engine id unstable
* Revert "delete conv_bn_fuse_pass before tensorrt, with which trt serialize engine id is not stable"
Performance degradation; to be fixed in the future.
This reverts commit fa6cd17e60b15df351efda379ddd00e9e9c1fea9.
* add delete conv_bn
* delete path when delete_cache_files
* [Custom OP] add PD_THROW and PD_CHECK for user error messages
* PD_THROW and PD_CHECK, fix comment
* fix Windows error message
* fix CI
* remove remove_unsupport_dtype
* remove test dtype
* add more include
* change dtype.h's enum to an enum class to avoid conflicts with the inference lib
* make enum as enum class
* remove additional test
* merge develop
* polish code
* add cache for VariableWrapper
* modify args names and vlog level
* format code style
* add log when set cache to variable_wrapper
* add comment to variableWrapper cache
* format code style
* add simple attr support and test
* add int, float attr support
* support other attribute
* add custom attrs test in cmake
* polish details
* fix test failed
* add backward test
* update test flags
* add group norm plugin
* fix compile problems
* move concat axis check to trt op teller
* add nbDims for scale and bias nv dims
* add group norm unit test
* fix unittest
* add trt version restriction for group norm op teller
* fix unittest
* fix entry
* fix distributed lookup table fuse case
* fix entry bug at first time
* move entry from paddle.fluid -> paddle.distributed
* fix ut with paddle.enable_static()
Co-authored-by: malin10 <malin10@baidu.com>
* add error msg when the dtypes of an operator are not the same
* change error msg to warning msg when the dtypes of an operator are not the same
* modify test case to fit for python2
* added support for fake_quantize_dequantize_abs_max op in quantization inference pass
* remove const_cast to pass ci
* remove compare operator to pass ci-coverage
* added a detailed error message for the unregistered tensorrt_subgraph_pass
* add default argument for paddle.save/static.save
* edit documentation of
* Add comments for special processing for protocol=2 and protocol=3.
* Update python/paddle/fluid/io.py
Co-authored-by: lanxianghit <47554610+lanxianghit@users.noreply.github.com>
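A usage sketch of the `protocol` argument these commits add to `paddle.save` (file names here are invented for illustration):
```
import paddle

state = {"w": paddle.randn([2, 3])}
paddle.save(state, "demo.pdparams")                 # uses the new default protocol
paddle.save(state, "demo_p2.pdparams", protocol=2)  # explicit pickle protocol
loaded = paddle.load("demo_p2.pdparams")
```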
**Problem**
In our old shape transformer logic, if a user writes:
```
s = tensor.shape
...
y = paddle.some_api(s)
```
Dy2stat will change it to
```
...
y = paddle.some_api(convert_var_shape(tensor))
```
However, this causes a fatal bug if the user changes the shape of `tensor` after the assignment. For example:
```
s = tensor.shape
...
tensor = paddle.some_change_shape_api(tensor)
...
y = paddle.some_api(s)
```
Then Dy2stat gets a wrong result because the code is translated into:
```
tensor = paddle.some_change_shape_api(tensor)
...
y = paddle.some_api(convert_var_shape(tensor))  # tensor's shape has been changed; not the original `s` value
```
**Solution Logic**
This cannot be solved in the old logic, so I refactored the tensor_shape_transformer logic. Now we use `s` to store the shape attribute and generate a variable `s__STATIC_CONVERT_VAR_SHAPE_SUFFIX` to store the result of the static shape API `shape(tensor)`:
```
s = tensor.shape
...
y = paddle.some_api(s)
```
Dy2stat will change it to
```
s = tensor.shape
s__STATIC_CONVERT_VAR_SHAPE_SUFFIX = shape(tensor)
...
y = paddle.some_api(choose_shape_attr_or_api(s, s__STATIC_CONVERT_VAR_SHAPE_SUFFIX))
```
In this case, the code is consistent with the original dygraph meaning, and it fixes the change-after-assignment bug.
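A hedged sketch of what a chooser like `choose_shape_attr_or_api` could do; the real implementation lives in the dy2stat runtime, this only illustrates the idea:
```
def choose_shape_attr_or_api(attr_shape, api_shape):
    # if every dimension of the eagerly stored attribute is known (no -1),
    # it is safe to use; otherwise fall back to the static shape API result
    if all(dim >= 0 for dim in attr_shape):
        return attr_shape
    return api_shape

print(choose_shape_attr_or_api([2, 3], [2, 3]))   # [2, 3]: attribute wins
print(choose_shape_attr_or_api([-1, 3], [8, 3]))  # [8, 3]: API result wins
```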
**Code Key Note**
To help reviewers, the key change of this PR is changing `self.name_to_var_shape` from "mapping name to shape node" to "mapping name to its STATIC_CONVERT_VAR_SHAPE_SUFFIX name"; then, if a variable name has the SUFFIX, we can choose between the shape attribute and the shape API. The other changes follow from this key change.
**Consideration**
The concern with this PR is that we store an extra static `shape` API result: will it harm the speed of Dy2stat? In some cases it will, but we argue that the benefit is greater than the cost.
1. The extra calls to the static `shape` API happen when the coder assigns among shape variables. Take the following dygraph code as an instance:
```
s1 = tensor.shape
s2 = s1
s3 = s2
...
```
Here we would call the extra static `shape` API again and again; however, users seldom write code like this.
2. If the shape variable is used a lot, for example:
```
s = tensor.shape
y1 = paddle.some_api1(s)
y2 = paddle.some_api2(s)
y3 = paddle.some_api3(s)
```
Our old logic creates 3 shape API calls, but now just 1. This is a more common user code pattern. In fact, if reviewers take a look at the current unit tests in this PR, you can see that the op count decreases after this PR. So we argue that this PR can also improve speed for this code pattern.
* add more dispatch macros
* add more tests
* revert unneeded change
* add timeout for test dispatch
* add float and complex test
* remove and macro
* [static setitem] support index step > 1, e.g. tensor_a[::3] = value
* [static setitem] support index step < 0, e.g. tensor_a[::-3] = value
* [static setitem] support a Tensor as the index, e.g. tensor_a[tensor_3:0:-1] = value (usage sketch below)
* Add op version.
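A hedged usage sketch of the indexing forms listed above, in static mode (names and shapes are invented for illustration):
```
import paddle

paddle.enable_static()
main = paddle.static.Program()
with paddle.static.program_guard(main):
    x = paddle.zeros([9], dtype='float32')
    x[::3] = 1.0                             # index step > 1
    x[::-3] = 2.0                            # negative index step
    idx = paddle.full([1], 3, dtype='int64')
    x[idx:0:-1] = 3.0                        # Tensor as a slice bound
```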
As the title says, when a slice node like 1:3 is passed as the idx argument of convert_var_shape, it causes a syntax error, because a function call cannot take a bare slice as an argument. This PR fixes it.
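A plain-Python illustration of the problem; the `convert_var_shape` stand-in below is hypothetical:
```
def convert_var_shape(shape, idx=None):  # hypothetical stand-in
    return shape[idx] if idx is not None else shape

shape = [2, 3, 4, 5]
print(shape[1:3])                             # slice syntax is legal in subscripts
# convert_var_shape(shape, 1:3)               # SyntaxError: `1:3` is not an expression
print(convert_var_shape(shape, slice(1, 3)))  # passing a slice object works
```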
* add more unittests for ABI compatibility
* add more unittest
* refine warning style
* support compiling multiple custom ops at the same time
* fix not import paddle in unittest
* fix typo
* add more unittest
* add comment for details
* Add conv transpose BF16
* Share function GetWeightsTz
* Adjust to review and fix op compatibility
* Add bias to unique handler name
* Remove errors related to paddle enforce
* Add conv2d_transpose to bf16 list and kernel refactor
Dy2stat didn't support tuples as iteration variables in the past. This PR adds three main cases:
1). Non-enumerate case: for var1, var2 in var|var.numpy() will be re-written as:
for FOR_ITER_TUPLE_PREFIX_x in var | var.numpy():
var1 = FOR_ITER_TUPLE_PREFIX_x[0]
var2 = FOR_ITER_TUPLE_PREFIX_x[1]
2). Enumerate outer tuple case: for t in enumerate(var|var.numpy()) will be re-written as:
for FOR_ITER_TUPLE_INDEX_PREFIX_x, FOR_ITER_TUPLE_PREFIX_x in enumerate(var|var.numpy):
t = (FOR_ITER_TUPLE_INDEX_PREFIX_x, FOR_ITER_TUPLE_PREFIX_x)
3). Enumerate inner tuple case: for i, (var1, (var2, var3)) in enumerate(var|var.numpy()) will
be re-written as:
for i, FOR_ITER_TUPLE_PREFIX_x in enumerate(var | var.numpy()):
var1 = FOR_ITER_TUPLE_PREFIX_x[0]
var2 = FOR_ITER_TUPLE_PREFIX_x[1][0]
var3 = FOR_ITER_TUPLE_PREFIX_x[1][1]
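A concrete, plain-Python version of case 1), with a list standing in for `var | var.numpy()`; the generated name is copied from the description above:
```
pairs = [(1, 'a'), (2, 'b')]

# original dygraph-style loop:
#     for var1, var2 in pairs: ...
# re-written form:
for FOR_ITER_TUPLE_PREFIX_x in pairs:
    var1 = FOR_ITER_TUPLE_PREFIX_x[0]
    var2 = FOR_ITER_TUPLE_PREFIX_x[1]
    print(var1, var2)
```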
* support setup.py to compile custom op
* move file into paddle.utils.cpp_extension
* support python setup.py install
* refine code style
* Enrich code and add unittest
* initial commit: simple demo
* polish copyright format
* add grad op simple demo
* adapt to an uncertain number of arguments
* change trait macro name
* add place & dtype support for add kernel
* add dispatch and infershape functions
* polish code & add notes
* add dynamic_loader dep for paddle_framework
* add new custom op test dir
* polish impl details
* add unittest for new custom op
* fix failed unittest
* Custom op (#1)
* fix compile error
* wrap framework tensor with LoDTensor
* fix compile error
* add CustomTensor default constructor
* add size() for CustomTensor
* make size const for CustomTensor
* refactor place related api to circle the concept
* fix compile error
* make place const
* make Tensor copy
* debug CustomTensor core
* remove additional head of framework
* use back to shared ptr for custom tensor
* add gpu test
* merge latest cwh code in
* adjust ut code of custom op
* Remove ShareData from user && Change CustomTensor to Tensor && Support more data type (#2)
* fix compile error
* wrap framework tensor with LoDTensor
* fix compile error
* add CustomTensor default constructor
* add size() for CustomTensor
* make size const for CustomTensor
* refactor place related api to circle the concept
* fix compile error
* make place const
* make Tensor copy
* debug CustomTensor core
* remove additional head of framework
* use back to shared ptr for custom tensor
* add gpu test
* merge latest cwh code in
* adjust ut code of custom op
* hid share data from and to
* rename CustomTensor to Tensor
* refactor register design & add test
* change op_function to op_meta_info
* split op meta info into .h and .cc
* move get methods into friend class
* move OpMetaInfoHelper into framework space
* move CustomTensorUtils into framework space
* change pybind api name
* move PD C API into op meta info
* add register custom op api
* remove inference cmake change
* refactor copy to api && change Reshape to lowercase && support more dtype && add more test (#3)
* fix compile error
* wrap framework tensor with LoDTensor
* fix compile error
* add CustomTensor default constructor
* add size() for CustomTensor
* make size const for CustomTensor
* refactor place related api to circle the concept
* fix compile error
* make place const
* make Tensor copy
* debug CustomTensor core
* remove additional head of framework
* use back to shared ptr for custom tensor
* add gpu test
* merge latest cwh code in
* adjust ut code of custom op
* hid share data from and to
* rename CustomTensor to Tensor
* support multi dtype
* remove lod, make reshape lowercase, add copy test and refactor copy api
* fix copy to error
* add more test
* polish detail & error message
* polish test details
* Add cast api && Change copy related api to copy_to && add more test (#4)
* fix compile error
* wrap framework tensor with LoDTensor
* fix compile error
* add CustomTensor default constructor
* add size() for CustomTensor
* make size const for CustomTensor
* refactor place related api to circle the concept
* fix compile error
* make place const
* make Tensor copy
* debug CustomTensor core
* remove additional head of framework
* use back to shared ptr for custom tensor
* add gpu test
* merge latest cwh code in
* adjust ut code of custom op
* hid share data from and to
* rename CustomTensor to Tensor
* support multi dtype
* remove lod, make reshape lowercase, add copy test and refactor copy api
* fix copy to error
* add more test
* add type cast
* add cast and make copy to api
* merge cwh code
* add more error log
* polish code
* used for test
* remove test comment
* fix uint8 type error
* fix lost uint8 type error
* add test for coverage
* polish details by reviewer comments
* add prefix for DISABLE_COPY_AND_ASSIGN
Co-authored-by: Jiabin Yang <360788950@qq.com>
* op benchmark ci retry with specified id, notest, test=op_benchmark
* fix parse case name with case id, notest, test=op_benchmark
* remove test code, test=develop
* support xpu inference with analysis predictor, test=develop
* merge the cmake of the xpu toolchain, test=develop
* add c-apis, test=develop
* fix a bug in extern_xpu, test=develop
* support setup.py to compile custom op
* move file into paddle.utils.cpp_extension
* support python setup.py install
* refine code style
* Enrich code and add unittest
* Polish code and api doc
* fix cpp_extension not include in package
* fix relative import
* fix os.makedirs exist_ok param compatibility PY2
* add compile flags in test_jit_load
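A hedged sketch of the setup.py style enabled by the commits above (module and source file names are invented):
```
# setup.py
from paddle.utils.cpp_extension import CUDAExtension, setup

setup(
    name='custom_relu',
    ext_modules=CUDAExtension(sources=['relu_op.cc', 'relu_op.cu']),
)
```
Per the commits above, after `python setup.py install` the compiled op should be importable as a regular Python module.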
* set WITH_INFERENCE_API_TEST=ON on Windows with GPU, notest, test=windows_ci
* disable lite_mul_model_test, test=develop
* disable test_analyzer_int8_resnet50, test=develop
* rewrite abs op
* rewrite abs op and remove abs in activation
* remove abs register in old codes
* fix abs_grad type error
* fix abs double_grad output name error
* modify abs_grad, abs_grad_grad functor for windows building
* format code style
* fix the bug where the result is NaN when the divisor is zero
* add missing abs attr and add abs for float16
* Avoid bug on 'MAC python3.5/6'.
* Choose the saving method according to the OS.
* use a smaller chunk length in '_unpack_saved_dict' for macOS
* add version information of Python.
* Edit comment.
* add view strategy on squeeze,unsqueeze,reshape,flatten
* add squeeze unittest
* add unittests
* use View strategy as the name rather than Reuse Allocation
* fix view api doc
* fix format
* use core.ops when input of reshape2 is Tensor
* fix test_cross_entropy_loss error because of reshape2
* add inplace strategy
* add elementwise_add sub
* let backward op not use inplace
* grad op do not use inplace
* fix memory increase error and add leaf error message
* delete selected_rows
* change op_function
* little change
* solve HandleViewBetweenInputAndOutput
* add unittest and leaf error message
* merge view error
* optimize op_function_generator format and support sum inplace op
* fix format of basic_engine
* fix format for framework
* little change of variable wrapper
* add reshape, squeeze, unsqueeze, scatter api
* add relu elu tanh softmax inplace api
* fix test_squeeze_op unittest
* fix test_relu_op unittest
* fix comment problems
* delete sample code of inplace api
* add reference of grad_pending_nodes in basic_engine
* fix unittest name
* add inplace apis into wlist
* fix error message
* add PADDLE_ENFORCE for set grad op twice
* fix header file error
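A brief usage sketch of the inplace APIs described by the commits above, assuming the trailing-underscore naming used by Paddle's dygraph inplace variants:
```
import paddle

x = paddle.randn([2, 3])
# inplace variants mutate x directly instead of allocating a new tensor
x.relu_()           # inplace relu
x.reshape_([3, 2])  # inplace reshape
print(x.shape)      # [3, 2]
```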
message(FATAL_ERROR "cmake ${CMAKE_VERSION} is not supported when WITH_GPU=ON because of bug https://cmake.org/pipermail/cmake/2018-September/068195.html. "
Our vision is to enable deep learning for everyone via PaddlePaddle.
Please refer to our [release announcement](https://github.com/PaddlePaddle/Paddle/releases) to track the latest features of PaddlePaddle.
### Install Latest Stable Release:
```
# CPU
pip install paddlepaddle
# GPU
pip install paddlepaddle-gpu
# Linux GPU with CUDA 9 and cuDNN 7
pip install paddlepaddle-gpu==1.8.5.post97
```
It is recommended to read [this doc](https://www.paddlepaddle.org.cn/documentation/docs/en/beginners_guide/install/index_en.html) on our website.
For more information about installation, please see [Quick Install](https://www.paddlepaddle.org.cn/install/quick).
Now our developers can acquire free Tesla V100 online computing resources. If you create a program on AI Studio, you will obtain 10 hours per day to train models online. [Click here to start](https://aistudio.baidu.com/aistudio/index).
## FOUR LEADING TECHNOLOGIES
## Documentation
We provide [English](http://www.paddlepaddle.org.cn/documentation/docs/en/1.8/beginners_guide/index_en.html) and