- [Performance](#performance)
- [Description of random situation](#description-of-random-situation)
- [ModelZoo Homepage](#modelzoo-homepage)
<!-- TOC -->
# [Bayesian Graph Collaborative Filtering](#contents)
Bayesian Graph Collaborative Filtering (BGCF) was proposed in 2020 by Sun J, Guo W, Zhang D, et al. By naturally incorporating the uncertainty of the user-item interaction graph into the model, BGCF shows superior recommendation performance.
Specifically, BGCF contains two main modules. The first is sampling, which produces sample graphs based on node copying. The second aggregates the neighbors sampled from the nodes, using a mean aggregator and an attention aggregator.
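The gist of these two modules can be sketched in a few lines of NumPy. This is a simplified illustration under our own assumptions (the function names, `epsilon`, and the uniform copying distribution are invented for the example), not the implementation in src/bgcf.py:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_graph_by_node_copying(neighbors, epsilon=0.1):
    """Node copying: each node keeps its own neighbor list with probability
    1 - epsilon, otherwise adopts the list of another randomly chosen node."""
    nodes = list(neighbors)
    return {v: neighbors[v] if rng.random() > epsilon
            else neighbors[nodes[rng.integers(len(nodes))]]
            for v in nodes}

def mean_aggregate(h_self, h_neigh, w):
    """Mean aggregator: average the sampled neighbor embeddings, concatenate
    with the node's own embedding, then apply a learned linear map."""
    return np.tanh(w @ np.concatenate([h_self, h_neigh.mean(axis=0)]))

def attention_aggregate(h_self, h_neigh, w):
    """Attention aggregator: weight each sampled neighbor by its
    softmax-normalized similarity to the target node."""
    scores = h_neigh @ h_self              # (num_neighbors,)
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()
    return np.tanh(w @ np.concatenate([h_self, alpha @ h_neigh]))
```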
# [Dataset](#contents)
Note that you can run the scripts with the dataset mentioned in the original paper or one widely used in the relevant domain/network architecture. The following sections describe how to run the scripts using the dataset below.
- Dataset size:
Statistics of the dataset used are summarized below:
|                    | Amazon-Beauty  |
| ------------------ | -------------- |
| Task               | Recommendation |
| # User             | 7068 (1 graph) |
| # Item             | 3570           |
- Data Preparation
- Place the dataset in any path you want. The folder should include the following files (we use the Amazon-Beauty dataset as an example):
```text
.
└─data
  ├─ratings_Beauty.csv
```
- Generate the dataset in MindRecord format for Amazon-Beauty.
```bash
cd ./scripts
# SRC_PATH is the path of the dataset file you downloaded
sh run_process_data_ascend.sh [SRC_PATH]
```
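For orientation, MindRecord files are produced with MindSpore's `FileWriter` API. Below is a minimal sketch of that API with a made-up two-field schema; the actual conversion script builds its own graph-structured schema, which will differ:

```python
from mindspore.mindrecord import FileWriter

# Toy (user, item) schema for illustration only; run_process_data_ascend.sh
# generates a graph-structured MindRecord, not this layout.
writer = FileWriter(file_name="amazon_beauty.mindrecord", shard_num=1)
writer.add_schema({"user_id": {"type": "int32"},
                   "item_id": {"type": "int32"}}, "rating pairs")
writer.write_raw_data([{"user_id": 0, "item_id": 42}])
writer.commit()
```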
# [Features](#contents)
## Mixed Precision
To utilize the strong computation power of the Ascend chip and accelerate the training process, mixed precision training is used.
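In MindSpore, mixed precision is commonly switched on through the `amp_level` argument of `Model`. Here is a minimal sketch with a stand-in network, loss, and optimizer (BGCF's actual training code may wire this differently):

```python
from mindspore import nn
from mindspore.train import Model

net = nn.Dense(64, 1)  # stand-in network; the real model lives in src/bgcf.py
opt = nn.Adam(net.trainable_params(), learning_rate=0.001)
model = Model(net, loss_fn=nn.MSELoss(), optimizer=opt,
              amp_level="O3")  # "O3": run the network in float16
```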
# [Environment Requirements](#contents)
- Hardware (Ascend/GPU)
- Framework
  - [MindSpore](https://www.mindspore.cn/install/en)
- For more information, please check the resources below:
After installing MindSpore via the official website and generating the dataset correctly, you can start training and evaluation as follows.
- running on Ascend
```bash
# run training example with Amazon-Beauty dataset
sh run_train_ascend.sh
# run evaluation example with Amazon-Beauty dataset
sh run_eval_ascend.sh
```
- running on GPU
```bash
# run training example with Amazon-Beauty dataset
sh run_train_gpu.sh 0 dataset_path
# run evaluation example with Amazon-Beauty dataset
sh run_eval_gpu.sh 0 dataset_path
```
# [Script Description](#contents)
## [Script and Sample Code](#contents)
```text
└─bgcf
  ├─README.md
  ├─scripts
  | ├─run_eval_ascend.sh          # Launch evaluation on Ascend
  | ├─run_eval_gpu.sh             # Launch evaluation on GPU
  | ├─run_process_data_ascend.sh  # Generate dataset in MindRecord format
  | ├─run_train_ascend.sh         # Launch training on Ascend
  | └─run_train_gpu.sh            # Launch training on GPU
  |
  ├─src
  | ├─bgcf.py                     # BGCF model
```
Parameters for both training and evaluation can be set in config.py.
"gnew_neighs": 20, # Num of sampling neighbors in sample graph
"gnew_neighs": 20, # Num of sampling neighbors in sample graph
"input_dim": 64, # User and item embedding dimension
"input_dim": 64, # User and item embedding dimension
"l2": 0.03 # l2 coefficient
"l2": 0.03 # l2 coefficient
"neighbor_dropout": [0.0, 0.2, 0.3]# Dropout ratio for different aggregation layer
"neighbor_dropout": [0.0, 0.2, 0.3] # Dropout ratio for different aggregation layer
```
```
See config.py for more configuration options.
## [Training Process](#contents)
### Training
- running on Ascend
```bash
sh run_train_ascend.sh
```
- running on GPU
```bash
sh run_train_gpu.sh 0 dataset_path
```
Training results will be stored in the scripts path, in a folder whose name begins with "train". You can find results like the following in the log.
```text
Epoch 001 iter 12 loss 34696.242
Epoch 002 iter 12 loss 34275.508
Epoch 003 iter 12 loss 30620.635
Epoch 004 iter 12 loss 21628.908
```
## [Evaluation Process](#contents)
### Evaluation
- Evaluation on Ascend
```bash
sh run_eval_ascend.sh
```
Evaluation results will be stored in the scripts path, in a folder whose name begins with "eval". You can find results like the following in the log.

```text
sedp_@10:0.01890, sedp_@20:0.01517, nov_@10:7.58277, nov_@20:7.80038
...
```
- Evaluation on GPU
```bash
sh run_eval_gpu.sh 0 dataset_path
```
Evaluation results will be stored in the scripts path, in a folder whose name begins with "eval". You can find results like the following in the log.
```text
epoch:680, recall_@10:0.10383, recall_@20:0.15524, ndcg_@10:0.07503, ndcg_@20:0.09249,
sedp_@10:0.01926, sedp_@20:0.01547, nov_@10:7.60851, nov_@20:7.81969
```
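The reported recall_@k and ndcg_@k follow the standard top-k retrieval definitions. For reference, a small self-contained sketch (the helper names here are ours, not the repo's):

```python
import numpy as np

def recall_at_k(ranked_items, relevant, k):
    """Fraction of the user's relevant items that appear in the top-k ranking."""
    return len(set(ranked_items[:k]) & relevant) / len(relevant)

def ndcg_at_k(ranked_items, relevant, k):
    """DCG of the top-k ranking, normalized by the ideal DCG."""
    dcg = sum(1.0 / np.log2(i + 2)
              for i, item in enumerate(ranked_items[:k]) if item in relevant)
    idcg = sum(1.0 / np.log2(i + 2) for i in range(min(k, len(relevant))))
    return dcg / idcg

print(recall_at_k([3, 7, 1, 9], {1, 2}, k=3))  # 0.5: one of two relevant items in top-3
```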
# [Model Description](#contents)
## [Performance](#contents)
### Training Performance

| Parameter           | BGCF Ascend                                    | BGCF GPU                                       |
| ------------------- | ---------------------------------------------- | ---------------------------------------------- |
| Model Version       | BGCF                                           | BGCF                                           |
| Resource            | Ascend 910                                     | Tesla V100-PCIE                                |
| Uploaded Date       | 09/23/2020 (month/day/year)                    | 01/27/2021 (month/day/year)                    |
| MindSpore Version   | 1.0.0                                          | 1.1.0                                          |
| Dataset             | Amazon-Beauty                                  | Amazon-Beauty                                  |
| Training Parameters | epoch=600, steps=12, batch_size=5000, lr=0.001 | epoch=680, steps=12, batch_size=5000, lr=0.001 |
| Optimizer           | Adam                                           | Adam                                           |
| Loss Function       | BPR loss                                       | BPR loss                                       |
| Training Cost       | 25min                                          | 60min                                          |
| Scripts             | [bgcf script](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/gnn/bgcf) | [bgcf script](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/gnn/bgcf) |
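Both tables list BPR loss as the training objective. As a reference for the formula, here is a minimal NumPy sketch of the Bayesian Personalized Ranking objective (an illustration, not the repo's loss implementation; `l2` mirrors the config entry above):

```python
import numpy as np

def bpr_loss(user_emb, pos_item_emb, neg_item_emb, l2=0.03):
    """BPR: each user's positive item should score above a sampled negative."""
    pos_scores = np.sum(user_emb * pos_item_emb, axis=1)   # (batch,)
    neg_scores = np.sum(user_emb * neg_item_emb, axis=1)   # (batch,)
    log_sigmoid = -np.log1p(np.exp(-(pos_scores - neg_scores)))
    reg = l2 * sum(np.sum(e ** 2) for e in (user_emb, pos_item_emb, neg_item_emb))
    return -np.mean(log_sigmoid) + reg
```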
### Inference Performance
| Parameter         | BGCF Ascend                 | BGCF GPU                    |
| ----------------- | --------------------------- | --------------------------- |
| Model Version     | BGCF                        | BGCF                        |
| Resource          | Ascend 910                  | Tesla V100-PCIE             |
| Uploaded Date     | 09/23/2020 (month/day/year) | 01/28/2021 (month/day/year) |
| MindSpore Version | 1.0.0                       | Master (4b3e53b4)           |
| Dataset           | Amazon-Beauty               | Amazon-Beauty               |
| Batch_size        | 5000                        | 5000                        |
| Output            | probability                 | probability                 |
| Recall@20         | 0.1534                      | 0.15524                     |
| NDCG@20           | 0.0912                      | 0.09249                     |
# [Description of random situation](#contents)
The BGCF model contains many dropout operations. If you want to disable dropout, set neighbor_dropout to [0.0, 0.0, 0.0] in src/config.py.
# [ModelZoo Homepage](#contents)
Please check the official [homepage](http://gitee.com/mindspore/mindspore/tree/master/model_zoo).