Commit Graph

224 Commits (cbbec595472189ac252c742cdc6d5f2b435235bd)

Author  SHA1  Message  Date
Luo Tao  cbbec59547  adjust poolSequenceWithStride interface for average and max  (9 years ago)
Luo Tao  0291c01884  Merge branch 'develop' into stride  (9 years ago)
Tao Luo  5961b52b13  Merge pull request #1653 from Noplz/normalize-layer  (9 years ago)
Luo Tao  9298a9ec0d  stride pooling for seqlastin and seqfirstin  (9 years ago)
wangyang59  dc45cb4b9d  changed projections.protostr  (9 years ago)
gaoyuan  784e242bd5  Remove redundancy codes  (9 years ago)
wangyang59  fc7f72c03f  modified after reyoung's comments  (9 years ago)
wangyang59  090c974e4b  completed implementation of cudnn_convt convTransProjection and convTransOperator  (9 years ago)
wangyang59  07c1ea258f  python interface for convTransProjection and convTransOperator  (9 years ago)
gaoyuan  c06d8d2129  Assert cross_channel_norm's input filters  (9 years ago)
Yuan Gao  8c2d1bad7a  Merge branch 'develop' into normalize-layer  (9 years ago)
gaoyuan  eb43d93a58  Change Normalize layer to CrossChannelNorm layer  (9 years ago)
Yu Yang  bd4ec1b493  Merge branch 'develop' into memory.set_input  (9 years ago)
Luo Tao  36ed2ff1ea  keep forward compatibility  (9 years ago)
Luo Tao  7dbc77ba4d  rename regression_cost to mse_cost  (9 years ago)
Peng Li  bf838034d6  Merge branch 'develop' into fix-crf-weight-and-coeff-bug  (9 years ago)
gaoyuan  eea0097dcb  NormalizeLayer for SSD  (9 years ago)
Luo Tao  8243797059  add relu in layer_math.py  (9 years ago)
Yu Yang  8d5a18a209  Complete Layers documentation  (9 years ago)
Yu Yang  3758993393  Simplify layer.v2  (9 years ago)
Luo Tao  06056fe26e  Merge branch 'develop' into layer  (9 years ago)
dangqingqing  623d24ad5c  convert mixed layer, projection and operator  (9 years ago)
Luo Tao  5258bcf3ee  implement more layers in v2  (9 years ago)
Liang Zhao  e768721cd9  fix calculating totalScore2_ bug  (9 years ago)
Liang Zhao  043859b5db  clean up code  (9 years ago)
Liang Zhao  e00f06afa4  Add top-k error  (9 years ago)
Luo Tao  0b673756f1  add SequenceReshapeLayer in trainer_config_helpers  (9 years ago)
Luo Tao  f9eddadb3e  follow comments, add seq_concat_layer in docs  (9 years ago)
Luo Tao  27a42c2e3b  add SequenceConcatLayer in trainer_config_helpers  (9 years ago)
xuwei06  7d551dd484  Make it possible to postpone setting the layer name for a memory.  (9 years ago)
Yu Yang  71c3c93c72  Fix unittest  (9 years ago)
Yu Yang  970440622f  Temporary disable async load data in PyDP2.  (9 years ago)
Tao Luo  c785975b4b  Merge pull request #1256 from luotao1/maxout  (9 years ago)
Luo Tao  ca25b9a508  remove unused notes in maxout layer  (9 years ago)
wangkuiyi  ccb553fec4  Merge pull request #1253 from wangkuiyi/python_learning_and_refactor  (9 years ago)
Yi Wang  058eeac0fc  Revert "Remove completely create_data_config_proto"  (9 years ago)
Yi Wang  ab279beed1  Remove completely create_data_config_proto  (9 years ago)
Luo Tao  03148804bb  remove usused arguments for maxout_layer, refine notes for pad_layer  (9 years ago)
Haonan  b9dfe8e7c8  Merge pull request #1231 from yu239/rotate_and_flip  (9 years ago)
Yi Wang  996b1de1d8  Rename DataBase into create_data_config_proto  (9 years ago)
Haonan  73dcf2cd58  improving code comments  (9 years ago)
Haonan  6245fed240  rotate_layer python interface fixes  (9 years ago)
wangyang59  9c42d90468  add comments in gru_step_layer of layers.py to explain the parameter location  (9 years ago)
Haonan  b4c1d17580  remove flip_layer  (9 years ago)
Haonan  55eb2fcffa  format correction  (9 years ago)
Haonan  781b85b5fc  rotate_layer and flip_layer * added getMin and getMax for GpuMatrix * gru_step_layer parameter name  (9 years ago)
wangyang59  04b5daf92d  change the parameter position of gru_step_layer from 1 back to 0  (9 years ago)
emailweixu  c1f9cd9dbe  Merge pull request #1241 from wangyang59/rnnParaShare  (9 years ago)
zhanghaichao  e1d074abdb  updated comments for gru_group and lstm_group in networks.py  (9 years ago)
wangyang59  6da7283475  make gru_group parameters sharable  (9 years ago)