* Convert `op` --> `operators`
* Remove AddType in OpProtoMaker, because type is part of registry.
* Rename CPU_OR_GPU --> DEVICE_TYPE in registry macro.
* Users can register OpKernels for their Ops. The OpKernelMap is saved in
  OperatorWithKernel. Each Op that inherits from OperatorWithKernel uses
  `OpKernel::Compute` instead of Run.
Add OperatorBase.
issue: https://github.com/PaddlePaddle/Paddle/issues/2790
Paddle designs Operator together with Kernel. An OperatorBase has no type or device information when it is created. One operator can have multiple kernels, and the operator chooses which kernel to run according to the context. The kernel should be bound to the operator before or while the operator runs (see the sketch below).
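A minimal sketch of this design with simplified stand-ins; the `DeviceContext`, `RegisterKernel`, and map layout below are assumptions for illustration, not the actual paddle::framework interfaces:

```cpp
#include <memory>
#include <stdexcept>
#include <string>
#include <unordered_map>
#include <utility>

// Simplified stand-ins for illustration; names and signatures here are
// assumptions, not the real paddle::framework API.
struct DeviceContext {
  std::string device;  // e.g. "CPU" or "GPU"
};

// OperatorBase carries no device information when it is created.
class OperatorBase {
 public:
  virtual ~OperatorBase() = default;
  virtual void Run(const DeviceContext& ctx) const = 0;
};

// Kernels only implement Compute; the operator decides which one runs.
class OpKernel {
 public:
  virtual ~OpKernel() = default;
  virtual void Compute(const DeviceContext& ctx) const = 0;
};

// OperatorWithKernel keeps an OpKernelMap keyed by device type and binds a
// kernel at run time, according to the DeviceContext it is given.
class OperatorWithKernel : public OperatorBase {
 public:
  using OpKernelMap =
      std::unordered_map<std::string, std::unique_ptr<OpKernel>>;

  void RegisterKernel(const std::string& device,
                      std::unique_ptr<OpKernel> kernel) {
    kernels_[device] = std::move(kernel);
  }

  void Run(const DeviceContext& ctx) const override {
    auto it = kernels_.find(ctx.device);
    if (it == kernels_.end()) {
      throw std::runtime_error("no kernel registered for " + ctx.device);
    }
    it->second->Compute(ctx);  // derived Ops implement Compute, not Run
  }

 private:
  OpKernelMap kernels_;
};

// A concrete kernel for one device.
class AddCPUKernel : public OpKernel {
 public:
  void Compute(const DeviceContext&) const override { /* CPU add here */ }
};

int main() {
  OperatorWithKernel add_op;
  add_op.RegisterKernel("CPU", std::make_unique<AddCPUKernel>());
  add_op.Run(DeviceContext{"CPU"});  // kernel chosen from the map at run time
  return 0;
}
```

The point of the sketch is that the kernel is only looked up from the OpKernelMap when Run is called, so the operator itself stays device-agnostic until then.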
1. Add a member variable 'DDim dims_' and a getter function 'dims()'.
'dims' is supposed to hold the tensor's shape during Op::InferShape.
2. Remove the 'mutable_data' overload that uses the default Place. Users must
specify an explicit Place when calling 'mutable_data'.
3. A PlaceHolder may be shared by more than one tensor, and some of them may be slices of the others. So we add a new member variable 'offset_' to Tensor, which records the byte offset between PlaceHolder::ptr_ and where the tensor's data really begins.
4. Add functions 'ShareDataFrom' and 'Slice' for Tensor (see the sketch below).
TODO: Tensor needs a 'CopyFrom' function.
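A rough sketch of the shared-PlaceHolder and offset idea; here a plain vector<int> stands in for DDim, Place handling is omitted, and the method names are illustrative rather than the real Tensor API:

```cpp
#include <cstddef>
#include <memory>
#include <utility>
#include <vector>

// Sketch only: PlaceHolder is just a raw byte buffer here.
struct PlaceHolder {
  explicit PlaceHolder(size_t bytes) : ptr_(new char[bytes]), size_(bytes) {}
  std::unique_ptr<char[]> ptr_;
  size_t size_;
};

class Tensor {
 public:
  const std::vector<int>& dims() const { return dims_; }

  // Allocates a fresh PlaceHolder for the given shape.
  void Resize(std::vector<int> dims, size_t element_size) {
    dims_ = std::move(dims);
    size_t bytes = element_size;
    for (int d : dims_) bytes *= static_cast<size_t>(d);
    holder_ = std::make_shared<PlaceHolder>(bytes);
    offset_ = 0;
  }

  // Shares the underlying PlaceHolder; no data is copied.
  void ShareDataFrom(const Tensor& other) {
    holder_ = other.holder_;
    dims_ = other.dims_;
    offset_ = other.offset_;
  }

  // A slice along dimension 0: same PlaceHolder, new dims_ and offset_.
  Tensor Slice(int begin, int end, size_t element_size) const {
    Tensor s;
    s.holder_ = holder_;
    s.dims_ = dims_;
    s.dims_[0] = end - begin;
    size_t row_bytes = element_size;
    for (size_t i = 1; i < dims_.size(); ++i) {
      row_bytes *= static_cast<size_t>(dims_[i]);
    }
    s.offset_ = offset_ + static_cast<size_t>(begin) * row_bytes;
    return s;
  }

  void* data() const { return holder_->ptr_.get() + offset_; }

 private:
  std::shared_ptr<PlaceHolder> holder_;
  std::vector<int> dims_;
  size_t offset_ = 0;  // byte offset from PlaceHolder::ptr_ to this tensor's data
};

int main() {
  Tensor t;
  t.Resize({4, 3}, sizeof(float));
  Tensor s = t.Slice(1, 3, sizeof(float));  // shares t's PlaceHolder
  return s.dims()[0] == 2 ? 0 : 1;
}
```

Slice only adjusts dims_ and offset_ while sharing the same PlaceHolder, which is why offset_ must be a byte offset from PlaceHolder::ptr_.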
* Move static variables into .cc files
We cannot define static variables in a .h file, because that causes
multiple-definition errors.
Also fix some C++ style issues (see the sketch below), for example:
* Prefer <algorithm> functions to hand-written for-loops, to make the code
more readable.
* Remove unused `()`.
* Enforce takes a bool; there is no need for `xxx == true`.
* Use a range-based for-loop to iterate over op_desc.attrs
* Fix a potential static variable initialization-order error
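For example, the preferred style might look like this (illustrative only; a plain std::map stands in for op_desc.attrs):

```cpp
#include <algorithm>
#include <cassert>
#include <map>
#include <string>
#include <utility>

int main() {
  // A plain map stands in for op_desc.attrs in this sketch.
  std::map<std::string, int> attrs = {{"scale", 2}, {"axis", 1}};

  // Range-based for-loop instead of hand-written iterator bookkeeping.
  int sum = 0;
  for (const auto& kv : attrs) {
    sum += kv.second;
  }

  // Prefer an <algorithm> call to a manual search loop.
  bool has_positive =
      std::any_of(attrs.begin(), attrs.end(),
                  [](const std::pair<const std::string, int>& kv) {
                    return kv.second > 0;
                  });

  // The check already takes a bool; no need for `has_positive == true`.
  assert(has_positive);
  return sum > 0 ? 0 : 1;
}
```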
* Expose paddle.framework via pybind11
* Export paddle.framework.{Scope, Variable} to paddle.v2.framework.core.
* See python/paddle/v2/framework/tests/test_scope.py for Python usage
* See paddle/pybind/pybind.cc for the C++ binding code. A simplified sketch of this kind of binding follows below.
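A minimal pybind11 sketch of such a binding, with toy Scope/Variable classes rather than the real paddle::framework ones (the real code lives in paddle/pybind/pybind.cc):

```cpp
#include <pybind11/pybind11.h>

#include <memory>
#include <string>
#include <unordered_map>

namespace py = pybind11;

// Toy stand-ins for paddle::framework::{Variable, Scope}; the real classes
// and method names differ.
class Variable {
 public:
  void SetInt(int v) { value_ = v; }
  int GetInt() const { return value_; }

 private:
  int value_ = 0;
};

class Scope {
 public:
  Variable* CreateVariable(const std::string& name) {
    auto& var = vars_[name];
    if (!var) var.reset(new Variable());
    return var.get();
  }
  Variable* GetVariable(const std::string& name) {
    auto it = vars_.find(name);
    return it == vars_.end() ? nullptr : it->second.get();
  }

 private:
  std::unordered_map<std::string, std::unique_ptr<Variable>> vars_;
};

// Compiled as a module named `core`, mirroring paddle.v2.framework.core.
PYBIND11_MODULE(core, m) {
  py::class_<Variable>(m, "Variable")
      .def("set_int", &Variable::SetInt)
      .def("get_int", &Variable::GetInt);

  py::class_<Scope>(m, "Scope")
      .def(py::init<>())
      .def("create_var", &Scope::CreateVariable,
           py::return_value_policy::reference)
      .def("get_var", &Scope::GetVariable,
           py::return_value_policy::reference);
}
```

Returning raw pointers with py::return_value_policy::reference keeps ownership of the Variables on the C++ Scope, so Python only borrows the objects.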
* add copyright
* add device_context
* add unittest for device_context
* switch to the function paddle::platform::throw_on_error (a sketch of the pattern follows below)
* fix cuda build error
* use dynload functions
* address review comments
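A hypothetical sketch of the throw-on-error pattern; the actual paddle::platform::throw_on_error signature and overloads may differ:

```cpp
#include <cstdlib>
#include <stdexcept>
#include <string>

// Hypothetical helper showing the pattern; not the actual
// paddle::platform::throw_on_error declaration.
inline void throw_on_error(bool ok, const std::string& msg) {
  if (!ok) {
    throw std::runtime_error(msg);
  }
}

int main() {
  void* buf = std::malloc(256);  // stands in for a device allocation
  throw_on_error(buf != nullptr, "allocation failed");
  std::free(buf);
  return 0;
}
```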
* init op_registry.h
* develop op_registry.h
* add 'attr_checker.h', which is a draft of the op attribute checker.
* rename some macro parameters
* 1. Use `Attribute` and `AttributeMap` instead of `OpDesc`. `AttributeMap` is an unordered_map from string to `Attribute`, and `Attribute` is a boost::variant object that can hold multiple types of attribute value.
2. Use `PADDLE_ENFORCE` to print checkers' failure messages.
3. Abstract default-value handling into a new function: `DefaultChecker`.
* rename DefaultChecker to DefaultValueSetter (see the sketch below)
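A simplified sketch of these pieces, assuming std::variant in place of boost::variant and a plain throw in place of PADDLE_ENFORCE; the names are illustrative, not the attr_checker.h interface:

```cpp
#include <stdexcept>
#include <string>
#include <unordered_map>
#include <utility>
#include <variant>
#include <vector>

// std::variant stands in for boost::variant in this sketch, and Enforce
// stands in for PADDLE_ENFORCE.
using Attribute = std::variant<int, float, std::string, std::vector<int>>;
using AttributeMap = std::unordered_map<std::string, Attribute>;

inline void Enforce(bool ok, const std::string& msg) {
  if (!ok) throw std::runtime_error(msg);
}

// Fills in a default value when the user did not provide the attribute.
template <typename T>
class DefaultValueSetter {
 public:
  explicit DefaultValueSetter(T default_value)
      : default_value_(std::move(default_value)) {}

  void operator()(AttributeMap* attrs, const std::string& name) const {
    if (attrs->count(name) == 0) {
      (*attrs)[name] = Attribute(default_value_);
    }
  }

 private:
  T default_value_;
};

int main() {
  AttributeMap attrs;                        // user supplied no attributes
  DefaultValueSetter<float> set_scale(1.0f);
  set_scale(&attrs, "scale");                // default is filled in
  Enforce(attrs.count("scale") == 1, "scale should now have a value");
  return 0;
}
```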
* Finish op_registry
1. Complete the development of interfaces between OpRegistry and
Protobuf.
2. Add unit test for op_registry.h
* Add a demo and test of a custom checker (sketched below)
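For instance, a custom checker could be a small functor that validates an attribute value and fails loudly otherwise (an illustrative sketch, not the attr_checker.h interface):

```cpp
#include <stdexcept>
#include <string>

// Illustrative custom checker: rejects attribute values outside [low, high].
// A plain throw stands in for the framework's failure reporting.
class RangeChecker {
 public:
  RangeChecker(float low, float high) : low_(low), high_(high) {}

  void operator()(const float& value) const {
    if (value < low_ || value > high_) {
      throw std::runtime_error("attribute value " + std::to_string(value) +
                               " is out of the allowed range");
    }
  }

 private:
  float low_;
  float high_;
};

int main() {
  RangeChecker check(0.0f, 1.0f);
  check(0.5f);    // passes
  // check(2.0f); // would throw
  return 0;
}
```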
* fix merge conflict
Python should be able to manipulate Protobuf messages because:
1. Python's `create_op_creation_methods` takes the `OpProto` array to
generate all `op_creation_methods` at runtime.
2. Each `op_creation_method` will create an `OpDesc`, pass it to the Paddle
C++ method `CreateOp`, and return the Op handle (a rough C++-side sketch
follows below).
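A rough sketch of the C++ side of this flow; the OpDesc-like struct, the registry layout, and the CreateOp signature below are assumptions for illustration, not the exact Paddle interfaces:

```cpp
#include <functional>
#include <memory>
#include <stdexcept>
#include <string>
#include <unordered_map>
#include <utility>

// Stand-in for the protobuf OpDesc message that the Python side builds.
struct OpDescSketch {
  std::string type;
  std::unordered_map<std::string, std::string> attrs;
};

// Stand-in for the created operator handle.
struct OpHandle {
  std::string type;
};

// A registry of op-creation methods keyed by op type; this models the
// dispatch on the C++ side, not the real OpRegistry interface.
class OpRegistrySketch {
 public:
  using Creator =
      std::function<std::unique_ptr<OpHandle>(const OpDescSketch&)>;

  void Register(const std::string& type, Creator creator) {
    creators_[type] = std::move(creator);
  }

  std::unique_ptr<OpHandle> CreateOp(const OpDescSketch& desc) const {
    auto it = creators_.find(desc.type);
    if (it == creators_.end()) {
      throw std::runtime_error("unknown op type: " + desc.type);
    }
    return it->second(desc);
  }

 private:
  std::unordered_map<std::string, Creator> creators_;
};

int main() {
  OpRegistrySketch registry;
  registry.Register("add", [](const OpDescSketch& desc) {
    return std::unique_ptr<OpHandle>(new OpHandle{desc.type});
  });

  // In the real flow the OpDesc arrives serialized from Python.
  OpDescSketch desc{"add", {}};
  auto op = registry.CreateOp(desc);
  return op->type == "add" ? 0 : 1;
}
```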
Here is the list of what is added in this commit:
* Add `protobuf_generate_python` if it is not defined.
* Before CMake 3.4, `protobuf_generate_python` is not defined, so we just
copy that function's implementation into `protobuf.cmake`.
* Add `py_proto_compile` function in `cmake/generic.cmake`.
* It follows bazel's API interface.
* https://github.com/pubref/rules_protobuf#rules
* Add an empty package named `paddle.v2.framework`; all Python code of
`paddle::framework` will live in that package.
* Generate the protobuf Python package's `__init__.py` with `touch` while
compiling.
* Change setup.py.in so that `paddle.v2.framework.proto` uses the
generated protobuf Python modules.