* add test_fit_a_line
* Update
* fix persistable bug
* fix elementwise add bug
* set correct attr for bias op in fc layer
* Update
1. Add init_program to hold initializers
2. bug fix
* fix type
* add gitignore
* Complete fit_a_line test
* revert code
* Clean up
* Revert "revert code"
This reverts commit eb1aa015cda4fc12b6dc778ada6c3507b98134f5.
* Refine
* Fix unit test
* Implement FC layer with helper
* Update LayerHelper
* Add debug string for Python ProtoBuf
and rename `Sync` to `Flush`
* Add check of ProtoBuf initialization
* Layer wrapper for FC
* Fix unittest
* Fix CI
* Add code generator
* AttributeChecker: better error log and specialize bool
Since lots of types can be cast to bool, the bool checker needs its own
specialization (see the sketch below)
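For context on the bool case: in C++ most scalar types convert to bool
implicitly, so a generic, cast-based attribute check cannot reject them. A
minimal hypothetical illustration (not Paddle's actual AttributeChecker):

```cpp
#include <iostream>

// A naive check that accepts anything convertible to bool.
bool AcceptsBoolAttr(bool v) { return v; }

int main() {
  // Both calls compile and "pass": ints and doubles convert to bool
  // implicitly. This is the false positive that a specialized bool
  // checker with a better error message guards against.
  std::cout << AcceptsBoolAttr(42) << "\n";    // int -> bool
  std::cout << AcceptsBoolAttr(3.14) << "\n";  // double -> bool
  return 0;
}
```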
* Complete mlp, fit_a_line
* Expose get global scope
* Make global scope not thread-safe
1. There is no need to make the global scope thread-safe, since it will only
be invoked from the Python main thread.
2. Do not free the global scope when C++ exits. Let the OS reclaim the memory;
otherwise we would have to handle the destruction-order dependencies (see the
sketch below).
See
https://google.github.io/styleguide/cppguide.html#Static_and_Global_Variables
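A minimal sketch of the never-destroyed global that this style guide entry
recommends, assuming some framework `Scope` type; the names are illustrative,
not Paddle's exact API:

```cpp
class Scope { /* framework scope; details omitted */ };

// Returns the process-wide scope. The object is intentionally leaked:
// it is never deleted, so there are no destruction-order problems at
// process exit and the OS reclaims the memory. No locking is used,
// since it is only touched from the Python main thread.
Scope& GetGlobalScope() {
  static Scope* g_scope = new Scope();  // constructed once, never freed
  return *g_scope;
}
```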
* Fix
* Implementation of simple conv_2d layer
* Stash
* Remove private data members in OpRegister
* Fix bugs
* Stash
* Expose FeedFetchList as VarType
* Change ProgramDesc so it is not a global variable
* Polish code style
* Stash
* Correctly implement BlockDesc destructor
* Unify `program` as the parameter name
* Fix bugs
* Add unittest
* Fix unit test error
* Remove unused functions
* Add clone for Python Program
* Working on executor
* Stash
* Add glog as a dependency of ops
* Use VLOG to log information that is helpful when we debug Paddle (see the
example below)
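For reference, glog's verbose logs are off by default and only show up when
the verbosity level is raised at run time, so debug detail can stay in the
code without spamming normal runs. A small example (the message text is
illustrative):

```cpp
#include <glog/logging.h>

int main(int argc, char* argv[]) {
  google::InitGoogleLogging(argv[0]);
  // Printed only when running with GLOG_v=3 (or --v=3 with gflags),
  // i.e. when the requested level is <= the configured verbosity.
  VLOG(3) << "running fc op: inputs resolved, kernel selected";
  return 0;
}
```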
* Expose VarDesc::persistable to Python
* Test executor
* Complete unittest
* Polish code
* Fix merge error
* Follow comment
* Polish Python Code
* Compare OpDescBind directly
* Follow comments
* Remove debug code
* initial matmul operator
Similar to np.matmul, but with transpose_X and transpose_Y flags, and it
only supports tensors of rank 1 to 3 inclusive.
For GPU, it uses cublas?gemmStridedBatched. For CPU, it uses
cblas_?gemm_batch if available via MKL; otherwise, a simple serial
implementation that loops over the batch dimension is used for now (see the
sketch below).
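A rough sketch of that serial CPU fallback, assuming contiguous row-major
float buffers and ignoring the transpose flags for brevity; this is
illustrative only, not the actual kernel:

```cpp
// Batched C = A * B for row-major float tensors of shapes
// [batch, M, K] x [batch, K, N] -> [batch, M, N]. The serial fallback
// just loops over the batch dimension, one naive GEMM per slice.
void BatchedMatMulSerial(const float* A, const float* B, float* C,
                         int batch, int M, int N, int K) {
  for (int b = 0; b < batch; ++b) {
    const float* a = A + b * M * K;
    const float* bm = B + b * K * N;
    float* c = C + b * M * N;
    for (int i = 0; i < M; ++i) {
      for (int j = 0; j < N; ++j) {
        float sum = 0.f;
        for (int k = 0; k < K; ++k) {
          sum += a[i * K + k] * bm[k * N + j];
        }
        c[i * N + j] = sum;
      }
    }
  }
}
```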
* init parameter base class
* improve the comments of optimizer
* basic implementation of optimizer
* add test_optimizer
* add no_grad_set to interface
* update optimizer.py
* Python code can run
* fix some problems
* add sync_with_cpp to Python Program and Block
* sync vars and ops in block from cpp
* optimize code and add some comments
* add more checks for sync
* update optimizer with return value of Backward
* rm unused code
* infer shape when creating a gradient variable
* update test_optimizer
* update test_program.py
* update backward test
* follow comment
* add target to Backward; generate vars in the block when calling backward
* modify backward_test
* fix executor_test
* set var desc default type to LOD_TENSOR
* update backward_test
* insert loss at the top level of backward
* create grad vars for all blocks in the current program
* optimize code
* update test_program.py
* only create vars for newly created blocks during backward