We still need to check the consistency of the CPU and GPU implementations of the convolution layer later.
First, download the CIFAR-10 dataset. It can be downloaded from its official website:
<https://www.cs.toronto.edu/~kriz/cifar.html>
We have prepared a script to download and process the CIFAR-10 dataset. The script downloads CIFAR-10 from the official site, converts the images to JPEG, and organizes them into a directory with the structure required by this tutorial. Make sure that you have installed the Python dependency (PIL); if you have `pip` installed, you can install PIL with `pip install PIL`.
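The provided script already handles all of this; the sketch below only illustrates the conversion step. It assumes the official "CIFAR-10 python version" batches (pickled dicts with `data` and `labels`), and the output layout of one sub-directory per label is an assumption that may differ from what the script actually produces.

```
# Illustrative sketch only -- the provided script already does this.
# Assumes the official "CIFAR-10 python version" batches: pickled dicts
# with 'data' (N x 3072 uint8 rows) and 'labels'. The output layout
# (<out_dir>/<label>/<index>.jpeg) is an assumption for illustration.
import os
import cPickle
from PIL import Image

def batch_to_jpegs(batch_file, out_dir):
    with open(batch_file, 'rb') as f:
        batch = cPickle.load(f)
    images = batch['data'].reshape(-1, 3, 32, 32)   # NCHW, uint8
    for i, (img, label) in enumerate(zip(images, batch['labels'])):
        label_dir = os.path.join(out_dir, str(label))
        if not os.path.isdir(label_dir):
            os.makedirs(label_dir)
        # CHW -> HWC for PIL, then save as JPEG
        Image.fromarray(img.transpose(1, 2, 0)).save(
            os.path.join(label_dir, '%05d.jpeg' % i))

batch_to_jpegs('cifar-10-batches-py/data_batch_1', 'train')
```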
* --job=extract: specify the job mode to extract features.
* --conf=resnet.py: the network configuration file.
* --use_gpu=1: specify GPU mode.
* --model=model/resnet_5: the model path.
* --data=./example/test.list: the data list.
* --output_layer="xxx,xxx": specify the layers to extract features from.
* --output_dir=features: the output directory.
Note: the convolution layers in these ResNet models are configured to use the cuDNN implementation, which only supports GPU mode. CPU mode is not supported yet because of a compatibility issue; we will fix this later.
If the command runs successfully, the extracted features are saved in `features/batch_0`. This file is produced with cPickle; you can use the `load_feature_py` interface in `load_feature.py` to open it, which returns a dictionary of the extracted features.
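If `load_feature.py` is not at hand, the file can also be inspected directly with cPickle, since that is how it was written. A minimal sketch, assuming Python 2; the exact keys and value shapes depend on the layers passed to `--output_layer`:

```
# Minimal sketch: read the extracted features directly with cPickle.
import cPickle

with open('features/batch_0', 'rb') as f:
    feature_dict = cPickle.load(f)

# Print what was stored; keys/values depend on --output_layer.
for key, value in feature_dict.items():
    print(key, type(value))
```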
```
python classify.py \
  --conf=resnet.py \
  --multi_crop \
  --model=model/resnet_50 \
  --use_gpu=1 \
  --data=./example/test.list
```
* --job=predict: specify the job mode to predict.
* --conf=resnet.py: the network configuration file.
* --multi_crop: use 10 crops and average the predicted probabilities (see the sketch after this list).
* --use_gpu=1: specify GPU mode.
* --model=model/resnet_50: the model path.
* --data=./example/test.list: the data list.
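For readers unfamiliar with the 10-crop trick behind `--multi_crop`, the sketch below illustrates the general idea; it is not the actual `classify.py` implementation. It assumes a hypothetical `predict_probs(crop)` function that returns class probabilities for a single crop, and uses the conventional four corners, the center, and their mirrored versions.

```
# Illustrative sketch of 10-crop averaging (not the actual classify.py code).
# predict_probs(crop) is a hypothetical single-crop prediction function.
import numpy as np
from PIL import Image

def ten_crop_average(img, crop_size, predict_probs):
    w, h = img.size
    cs = crop_size
    # four corners and the center ...
    boxes = [(0, 0), (w - cs, 0), (0, h - cs), (w - cs, h - cs),
             ((w - cs) // 2, (h - cs) // 2)]
    crops = [img.crop((x, y, x + cs, y + cs)) for x, y in boxes]
    # ... plus their horizontal mirrors -> 10 crops in total
    crops += [c.transpose(Image.FLIP_LEFT_RIGHT) for c in crops]
    probs = np.mean([predict_probs(c) for c in crops], axis=0)
    return np.argmax(probs), probs
```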
If the command runs successfully, you will see the following results, where 156 and 285 are the labels of the images.
* --config=$config: set the network config.
* --save\_dir=$output: set the output path to save models.
* --job=train: set the job mode to train.
* --use\_gpu=false: use CPU to train; set it to true if you have installed the GPU version of PaddlePaddle and want to train with GPU.
* --trainer\_count=4: set the thread number (or GPU count).
* --num\_passes=15: set the number of passes; one pass in PaddlePaddle means training on all samples in the dataset once.
* --log\_period=20: print a log every 20 batches.
* --show\_parameter\_stats\_period=100: show parameter statistics every 100 batches.
* --test\_all\_data\_in\_one\_period=1: test on all data in every test period.
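For completeness, here is a minimal sketch that assembles the flags above into a single command. It assumes the flags are passed to `paddle train` (as in `demo/sentiment/train.sh`) and that the `paddle` binary is on your PATH; the config and output paths are placeholders.

```
# Minimal sketch: build and run the training command from the flags above.
# Assumes `paddle train` accepts these flags, as in demo/sentiment/train.sh;
# the config and output paths below are placeholders.
import subprocess

config = 'trainer_config.py'   # placeholder for $config
output = './model_output'      # placeholder for $output

subprocess.check_call([
    'paddle', 'train',
    '--config=%s' % config,
    '--save_dir=%s' % output,
    '--job=train',
    '--use_gpu=false',
    '--trainer_count=4',
    '--num_passes=15',
    '--log_period=20',
    '--show_parameter_stats_period=100',
    '--test_all_data_in_one_period=1',
])
```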
If the run succeeds, the output log is saved in `demo/sentiment/train.log` and the models are saved in `demo/sentiment/model_output/`.
Go to the `demo/sentiment` directory (`cd demo/sentiment`). The prediction script `predict.sh` is:
```
#Note the default model is pass-00002, you should make sure the model path
#exists or change the model path.
model=model_output/pass-00002/
config=trainer_config.py
label=data/pre-imdb/labels.list
python predict.py \
-n $config\
    ...
```
* -d data/pre-imdb/dict.txt: set dictionary.
* -i data/aclImdb/test/pos/10014_7.txt: set one example file to predict.
Note: you should make sure that the default model path `model_output/pass-00002` exists, or change the model path in `predict.sh`.
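A small sketch of that check, assuming trained models are saved as `pass-XXXXX` directories under `model_output/` as the training step above suggests; purely illustrative:

```
# Illustrative sketch: verify the model path used by predict.sh exists,
# and list the passes that training actually produced.
import glob
import os

model_path = 'model_output/pass-00002/'
if not os.path.isdir(model_path):
    print('Model path %s not found. Available passes:' % model_path)
    for p in sorted(glob.glob('model_output/pass-*')):
        print('  ' + p)
```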