From 8a50f79686839de58cf1ccde7c29da76b6c02224 Mon Sep 17 00:00:00 2001 From: Yu Yang Date: Thu, 17 Nov 2016 15:28:32 +0800 Subject: [PATCH 001/265] Convert markdown to rst for h-rnn --- doc_cn/algorithm/rnn/hierarchical-rnn.md | 403 -------------------- doc_cn/algorithm/rnn/hierarchical-rnn.rst | 440 ++++++++++++++++++++++ 2 files changed, 440 insertions(+), 403 deletions(-) delete mode 100644 doc_cn/algorithm/rnn/hierarchical-rnn.md create mode 100644 doc_cn/algorithm/rnn/hierarchical-rnn.rst diff --git a/doc_cn/algorithm/rnn/hierarchical-rnn.md b/doc_cn/algorithm/rnn/hierarchical-rnn.md deleted file mode 100644 index c184a34e85..0000000000 --- a/doc_cn/algorithm/rnn/hierarchical-rnn.md +++ /dev/null @@ -1,403 +0,0 @@ -# 双层RNN配置与示例 - -我们在`paddle/gserver/tests/test_RecurrentGradientMachine`单测中,通过多组语义相同的单双层RNN配置,讲解如何使用双层RNN。 - -## 示例1:双进双出,subseq间无memory - -配置:单层RNN(`sequence_layer_group`)和双层RNN(`sequence_nest_layer_group`),语义完全相同。 - -### 读取双层序列的方法 - -首先,我们看一下单双层序列的不同数据组织形式(您也可以采用别的组织形式): - -- 单层序列的数据(`Sequence/tour_train_wdseg`)如下,一共有10个样本。每个样本由两部分组成,一个label(此处都为2)和一个已经分词后的句子。 - -```text -2 酒店 有 很 舒适 的 床垫 子 , 床上用品 也 应该 是 一人 一 换 , 感觉 很 利落 对 卫生 很 放心 呀 。 -2 很 温馨 , 也 挺 干净 的 * 地段 不错 , 出来 就 有 全家 , 离 地铁站 也 近 , 交通 很方便 * 就是 都 不 给 刷牙 的 杯子 啊 , 就 第一天 给 了 一次性杯子 * -2 位置 方便 , 强烈推荐 , 十一 出去玩 的 时候 选 的 , 对面 就是 华润万家 , 周围 吃饭 的 也 不少 。 -2 交通便利 , 吃 很 便利 , 乾 浄 、 安静 , 商务 房 有 电脑 、 上网 快 , 价格 可以 , 就 早餐 不 好吃 。 整体 是 不错 的 。 適 合 出差 來 住 。 -2 本来 准备 住 两 晚 , 第 2 天 一早 居然 停电 , 且 无 通知 , 只有 口头 道歉 。 总体来说 性价比 尚可 , 房间 较 新 , 还是 推荐 . -2 这个 酒店 去过 很多 次 了 , 选择 的 主要原因 是 离 客户 最 便宜 相对 又 近 的 酒店 -2 挺好 的 汉庭 , 前台 服务 很 热情 , 卫生 很 整洁 , 房间 安静 , 水温 适中 , 挺好 ! -2 HowardJohnson 的 品质 , 服务 相当 好 的 一 家 五星级 。 房间 不错 、 泳池 不错 、 楼层 安排 很 合理 。 还有 就是 地理位置 , 简直 一 流 。 就 在 天一阁 、 月湖 旁边 , 离 天一广场 也 不远 。 下次 来 宁波 还会 住 。 -2 酒店 很干净 , 很安静 , 很 温馨 , 服务员 服务 好 , 各方面 都 不错 * -2 挺好 的 , 就是 没 窗户 , 不过 对 得 起 这 价格 -``` - -- 双层序列的数据(`Sequence/tour_train_wdseg.nest`)如下,一共有4个样本。样本间用空行分开,代表不同的双层序列,序列数据和上面的完全一样。每个样本的子句数分别为2,3,2,3。 - -```text -2 酒店 有 很 舒适 的 床垫 子 , 床上用品 也 应该 是 一人 一 换 , 感觉 很 利落 对 卫生 很 放心 呀 。 -2 很 温馨 , 也 挺 干净 的 * 地段 不错 , 出来 就 有 全家 , 离 地铁站 也 近 , 交通 很方便 * 就是 都 不 给 刷牙 的 杯子 啊 , 就 第一天 给 了 一次性杯子 * - -2 位置 方便 , 强烈推荐 , 十一 出去玩 的 时候 选 的 , 对面 就是 华润万家 , 周围 吃饭 的 也 不少 。 -2 交通便利 , 吃 很 便利 , 乾 浄 、 安静 , 商务 房 有 电脑 、 上网 快 , 价格 可以 , 就 早餐 不 好吃 。 整体 是 不错 的 。 適 合 出差 來 住 。 -2 本来 准备 住 两 晚 , 第 2 天 一早 居然 停电 , 且 无 通知 , 只有 口头 道歉 。 总体来说 性价比 尚可 , 房间 较 新 , 还是 推荐 . - -2 这个 酒店 去过 很多 次 了 , 选择 的 主要原因 是 离 客户 最 便宜 相对 又 近 的 酒店 -2 挺好 的 汉庭 , 前台 服务 很 热情 , 卫生 很 整洁 , 房间 安静 , 水温 适中 , 挺好 ! 
- -2 HowardJohnson 的 品质 , 服务 相当 好 的 一 家 五星级 。 房间 不错 、 泳池 不错 、 楼层 安排 很 合理 。 还有 就是 地理位置 , 简直 一 流 。 就 在 天一阁 、 月湖 旁边 , 离 天一广场 也 不远 。 下次 来 宁波 还会 住 。 -2 酒店 很干净 , 很安静 , 很 温馨 , 服务员 服务 好 , 各方面 都 不错 * -2 挺好 的 , 就是 没 窗户 , 不过 对 得 起 这 价格 -``` - -其次,我们看一下单双层序列的不同dataprovider(见`sequenceGen.py`): - -- 单层序列的dataprovider如下: - - word_slot是integer_value_sequence类型,代表单层序列。 - - label是integer_value类型,代表一个向量。 - -```python -def hook(settings, dict_file, **kwargs): - settings.word_dict = dict_file - settings.input_types = [integer_value_sequence(len(settings.word_dict)), - integer_value(3)] - -@provider(init_hook=hook) -def process(settings, file_name): - with open(file_name, 'r') as fdata: - for line in fdata: - label, comment = line.strip().split('\t') - label = int(''.join(label.split())) - words = comment.split() - word_slot = [settings.word_dict[w] for w in words if w in settings.word_dict] - yield word_slot, label -``` - -- 双层序列的dataprovider如下: - - word_slot是integer_value_sub_sequence类型,代表双层序列。 - - label是integer_value_sequence类型,代表单层序列,即一个子句一个label。注意:也可以为integer_value类型,代表一个向量,即一个句子一个label。通常根据任务需求进行不同设置。 - - 关于dataprovider中input_types的详细用法,参见PyDataProvider2。 - -```python -def hook2(settings, dict_file, **kwargs): - settings.word_dict = dict_file - settings.input_types = [integer_value_sub_sequence(len(settings.word_dict)), - integer_value_sequence(3)] - -@provider(init_hook=hook2) -def process2(settings, file_name): - with open(file_name) as fdata: - label_list = [] - word_slot_list = [] - for line in fdata: - if (len(line)) > 1: - label,comment = line.strip().split('\t') - label = int(''.join(label.split())) - words = comment.split() - word_slot = [settings.word_dict[w] for w in words if w in settings.word_dict] - label_list.append(label) - word_slot_list.append(word_slot) - else: - yield word_slot_list, label_list - label_list = [] - word_slot_list = [] -``` - -### 模型中的配置 - -首先,我们看一下单层序列的配置(见`sequence_layer_group.conf`)。注意:batchsize=5表示一次过5句单层序列,因此2个batch就可以完成1个pass。 - -```python -settings(batch_size=5) - -data = data_layer(name="word", size=dict_dim) - -emb = embedding_layer(input=data, size=word_dim) - -# (lstm_input + lstm) is equal to lstmemory -with mixed_layer(size=hidden_dim*4) as lstm_input: - lstm_input += full_matrix_projection(input=emb) - -lstm = lstmemory_group(input=lstm_input, - size=hidden_dim, - act=TanhActivation(), - gate_act=SigmoidActivation(), - state_act=TanhActivation(), - lstm_layer_attr=ExtraLayerAttribute(error_clipping_threshold=50)) - -lstm_last = last_seq(input=lstm) - -with mixed_layer(size=label_dim, - act=SoftmaxActivation(), - bias_attr=True) as output: - output += full_matrix_projection(input=lstm_last) - -outputs(classification_cost(input=output, label=data_layer(name="label", size=1))) - -``` -其次,我们看一下语义相同的双层序列配置(见`sequence_nest_layer_group.conf`),并对其详细分析: - -- batchsize=2表示一次过2句双层序列。但从上面的数据格式可知,2句双层序列和5句单层序列的数据完全一样。 -- data_layer和embedding_layer不关心数据是否是序列格式,因此两个配置在这两层上的输出是一样的。 -- lstmemory: - - 单层序列过了一个mixed_layer和lstmemory_group。 - - 双层序列在同样的mixed_layer和lstmemory_group外,直接加了一层group。由于这个外层group里面没有memory,表示subseq间不存在联系,即起到的作用仅仅是把双层seq拆成单层,因此双层序列过完lstmemory的输出和单层的一样。 -- last_seq: - - 单层序列直接取了最后一个元素 - - 双层序列首先(last_seq层)取了每个subseq的最后一个元素,将其拼接成一个新的单层序列;接着(expand_layer层)将其扩展成一个新的双层序列,其中第i个subseq中的所有向量均为输入的单层序列中的第i个向量;最后(average_layer层)取了每个subseq的平均值。 - - 分析得出:第一个last_seq后,每个subseq的最后一个元素就等于单层序列的最后一个元素,而expand_layer和average_layer后,依然保持每个subseq最后一个元素的值不变(这两层仅是为了展示它们的用法,实际中并不需要)。因此单双层序列的输出是一样旳。 - -```python -settings(batch_size=2) - -data = data_layer(name="word", 
size=dict_dim) - -emb_group = embedding_layer(input=data, size=word_dim) - -# (lstm_input + lstm) is equal to lstmemory -def lstm_group(lstm_group_input): - with mixed_layer(size=hidden_dim*4) as group_input: - group_input += full_matrix_projection(input=lstm_group_input) - - lstm_output = lstmemory_group(input=group_input, - name="lstm_group", - size=hidden_dim, - act=TanhActivation(), - gate_act=SigmoidActivation(), - state_act=TanhActivation(), - lstm_layer_attr=ExtraLayerAttribute(error_clipping_threshold=50)) - return lstm_output - -lstm_nest_group = recurrent_group(input=SubsequenceInput(emb_group), - step=lstm_group, - name="lstm_nest_group") -# hasSubseq ->(seqlastins) seq -lstm_last = last_seq(input=lstm_nest_group, agg_level=AggregateLevel.EACH_SEQUENCE) - -# seq ->(expand) hasSubseq -lstm_expand = expand_layer(input=lstm_last, expand_as=emb_group, expand_level=ExpandLevel.FROM_SEQUENCE) - -# hasSubseq ->(average) seq -lstm_average = pooling_layer(input=lstm_expand, - pooling_type=AvgPooling(), - agg_level=AggregateLevel.EACH_SEQUENCE) - -with mixed_layer(size=label_dim, - act=SoftmaxActivation(), - bias_attr=True) as output: - output += full_matrix_projection(input=lstm_average) - -outputs(classification_cost(input=output, label=data_layer(name="label", size=1))) -``` -## 示例2:双进双出,subseq间有memory - -配置:单层RNN(`sequence_rnn.conf`),双层RNN(`sequence_nest_rnn.conf`和`sequence_nest_rnn_readonly_memory.conf`),语义完全相同。 - -### 读取双层序列的方法 - -我们看一下单双层序列的不同数据组织形式和dataprovider(见`rnn_data_provider.py`) -```python -data = [ - [[[1, 3, 2], [4, 5, 2]], 0], - [[[0, 2], [2, 5], [0, 1, 2]], 1], -] - -@provider(input_types=[integer_value_sub_sequence(10), - integer_value(3)]) -def process_subseq(settings, file_name): - for d in data: - yield d - -@provider(input_types=[integer_value_sequence(10), - integer_value(3)]) -def process_seq(settings, file_name): - for d in data: - seq = [] -``` -- 单层序列:有两句,分别为[1,3,2,4,5,2]和[0,2,2,5,0,1,2]。 -- 双层序列:有两句,分别为[[1,3,2],[4,5,2]](2个子句)和[[0,2],[2,5],[0,1,2]](3个子句)。 -- 单双层序列的label都分别是0和1 - -### 模型中的配置 - -我们选取单双层序列配置中的不同部分,来对比分析两者语义相同的原因。 - -- 单层序列:过了一个很简单的recurrent_group。每一个时间步,当前的输入y和上一个时间步的输出rnn_state做了一个全链接。 - -```python -def step(y): - mem = memory(name="rnn_state", size=hidden_dim) - return fc_layer(input=[y, mem], - size=hidden_dim, - act=TanhActivation(), - bias_attr=True, - name="rnn_state") - -out = recurrent_group(step=step, input=emb) -``` -- 双层序列,外层memory是一个元素: - - 内层inner_step的recurrent_group和单层序列的几乎一样。除了boot_layer=outer_mem,表示将外层的outer_mem作为内层memory的初始状态。外层outer_step中,outer_mem是一个子句的最后一个向量,即整个双层group是将前一个子句的最后一个向量,作为下一个子句memory的初始状态。 - - 从输入数据上看,单双层序列的句子是一样的,只是双层序列将其又做了子序列划分。因此双层序列的配置中,必须将前一个子句的最后一个元素,作为boot_layer传给下一个子句的memory,才能保证和单层序列的配置中“每一个时间步都用了上一个时间步的输出结果”一致。 - -```python -def outer_step(x): - outer_mem = memory(name="outer_rnn_state", size=hidden_dim) - def inner_step(y): - inner_mem = memory(name="inner_rnn_state", - size=hidden_dim, - boot_layer=outer_mem) - return fc_layer(input=[y, inner_mem], - size=hidden_dim, - act=TanhActivation(), - bias_attr=True, - name="inner_rnn_state") - - inner_rnn_output = recurrent_group( - step=inner_step, - input=x) - last = last_seq(input=inner_rnn_output, name="outer_rnn_state") - - return inner_rnn_output - -out = recurrent_group(step=outer_step, input=SubsequenceInput(emb)) -``` -- 双层序列,外层memory是单层序列: - - 由于外层每个时间步返回的是一个子句,这些子句的长度往往不等长。因此当外层有is_seq=True的memory时,内层是**无法直接使用**它的,即内层memory的boot_layer不能链接外层的这个memory。 - - 
如果内层memory想**间接使用**这个外层memory,只能通过`pooling_layer`、`last_seq`或`first_seq`这三个layer将它先变成一个元素。但这种情况下,外层memory必须有boot_layer,否则在第0个时间步时,由于外层memory没有任何seq信息,因此上述三个layer的前向会报出“**Check failed: input.sequenceStartPositions**”的错误。 - -## 示例3:双进双出,输入不等长 - -**输入不等长**是指recurrent_group的多个输入在各时刻的长度可以不相等, 但需要指定一个和输出长度一致的input,用targetInlink表示。参考配置:单层RNN(`sequence_rnn_multi_unequalength_inputs.conf`),双层RNN(`sequence_nest_rnn_multi_unequalength_inputs.conf`) - -### 读取双层序列的方法 - -我们看一下单双层序列的数据组织形式和dataprovider(见`rnn_data_provider.py`) -```python -data2 = [ - [[[1, 2], [4, 5, 2]], [[5, 4, 1], [3, 1]] ,0], - [[[0, 2], [2, 5], [0, 1, 2]],[[1, 5], [4], [2, 3, 6, 1]], 1], -] - -@provider(input_types=[integer_value_sub_sequence(10), - integer_value_sub_sequence(10), - integer_value(2)], - should_shuffle=False) -def process_unequalength_subseq(settings, file_name): #双层RNN的dataprovider - for d in data2: - yield d - - -@provider(input_types=[integer_value_sequence(10), - integer_value_sequence(10), - integer_value(2)], - should_shuffle=False) -def process_unequalength_seq(settings, file_name): #单层RNN的dataprovider - for d in data2: - words1=reduce(lambda x,y: x+y, d[0]) - words2=reduce(lambda x,y: x+y, d[1]) - yield words1, words2, d[2] -``` - -data2 中有两个样本,每个样本有两个特征, 记fea1, fea2。 - -- 单层序列:两个样本分别为[[1, 2, 4, 5, 2], [5, 4, 1, 3, 1]] 和 [[0, 2, 2, 5, 0, 1, 2], [1, 5, 4, 2, 3, 6, 1]] -- 双层序列:两个样本分别为 - - **样本1**:[[[1, 2], [4, 5, 2]], [[5, 4, 1], [3, 1]]]。fea1和fea2都分别有2个子句,fea1=[[1, 2], [4, 5, 2]], fea2=[[5, 4, 1], [3, 1]] - - **样本2**:[[[0, 2], [2, 5], [0, 1, 2]],[[1, 5], [4], [2, 3, 6, 1]]]。fea1和fea2都分别有3个子句, fea1=[[0, 2], [2, 5], [0, 1, 2]], fea2=[[1, 5], [4], [2, 3, 6, 1]]。
- - **注意**:每个样本中,各特征的子句数目需要相等。这里说的“双进双出,输入不等长”是指fea1在i时刻的输入的长度可以不等于fea2在i时刻的输入的长度。如对于第1个样本,时刻i=2, fea1[2]=[4, 5, 2],fea2[2]=[3, 1],3≠2。 -- 单双层序列中,两个样本的label都分别是0和1 - -### 模型中的配置 - -单层RNN(`sequence_rnn_multi_unequalength_inputs.conf`)和双层RNN(`sequence_nest_rnn_multi_unequalength_inputs.conf`)两个模型配置达到的效果完全一样,区别只在于输入为单层还是双层序列,现在我们来看它们内部分别是如何实现的。 - -- 单层序列: - - 过了一个简单的recurrent_group。每一个时间步,当前的输入y和上一个时间步的输出rnn_state做了一个全连接,功能与示例2中`sequence_rnn.conf`的`step`函数完全相同。这里,两个输入x1,x2分别通过calrnn返回最后时刻的状态。结果得到的encoder1_rep和encoder2_rep分别是单层序列,最后取encoder1_rep的最后一个时刻和encoder2_rep的所有时刻分别相加得到context。 - - 注意到这里recurrent_group输入的每个样本中,fea1和fea2的长度都分别相等,这并非偶然,而是因为recurrent_group要求输入为单层序列时,所有输入的长度都必须相等。 - -```python -def step(x1, x2): - def calrnn(y): - mem = memory(name = 'rnn_state_' + y.name, size = hidden_dim) - out = fc_layer(input = [y, mem], - size = hidden_dim, - act = TanhActivation(), - bias_attr = True, - name = 'rnn_state_' + y.name) - return out - - encoder1 = calrnn(x1) - encoder2 = calrnn(x2) - return [encoder1, encoder2] - -encoder1_rep, encoder2_rep = recurrent_group( - name="stepout", - step=step, - input=[emb1, emb2]) - -encoder1_last = last_seq(input = encoder1_rep) -encoder1_expandlast = expand_layer(input = encoder1_last, - expand_as = encoder2_rep) -context = mixed_layer(input = [identity_projection(encoder1_expandlast), - identity_projection(encoder2_rep)], - size = hidden_dim) -``` -- 双层序列: - - 双层RNN中,对输入的两个特征分别求时序上的连续全连接(`inner_step1`和`inner_step2`分别处理fea1和fea2),其功能与示例2中`sequence_nest_rnn.conf`的`outer_step`函数完全相同。不同之处是,此时输入`[SubsequenceInput(emb1), SubsequenceInput(emb2)]`在各时刻并不等长。 - - 函数`outer_step`中可以分别处理这两个特征,但我们需要用targetInlink指定recurrent_group的输出的格式(各子句长度)只能和其中一个保持一致,如这里选择了和emb2的长度一致。 - - 最后,依然是取encoder1_rep的最后一个时刻和encoder2_rep的所有时刻分别相加得到context。 - -```python -def outer_step(x1, x2): - outer_mem1 = memory(name = "outer_rnn_state1", size = hidden_dim) - outer_mem2 = memory(name = "outer_rnn_state2", size = hidden_dim) - def inner_step1(y): - inner_mem = memory(name = 'inner_rnn_state_' + y.name, - size = hidden_dim, - boot_layer = outer_mem1) - out = fc_layer(input = [y, inner_mem], - size = hidden_dim, - act = TanhActivation(), - bias_attr = True, - name = 'inner_rnn_state_' + y.name) - return out - - def inner_step2(y): - inner_mem = memory(name = 'inner_rnn_state_' + y.name, - size = hidden_dim, - boot_layer = outer_mem2) - out = fc_layer(input = [y, inner_mem], - size = hidden_dim, - act = TanhActivation(), - bias_attr = True, - name = 'inner_rnn_state_' + y.name) - return out - - encoder1 = recurrent_group( - step = inner_step1, - name = 'inner1', - input = x1) - - encoder2 = recurrent_group( - step = inner_step2, - name = 'inner2', - input = x2) - - sentence_last_state1 = last_seq(input = encoder1, name = 'outer_rnn_state1') - sentence_last_state2_ = last_seq(input = encoder2, name = 'outer_rnn_state2') - - encoder1_expand = expand_layer(input = sentence_last_state1, - expand_as = encoder2) - - return [encoder1_expand, encoder2] - -encoder1_rep, encoder2_rep = recurrent_group( - name="outer", - step=outer_step, - input=[SubsequenceInput(emb1), SubsequenceInput(emb2)], - targetInlink=emb2) - -encoder1_last = last_seq(input = encoder1_rep) -encoder1_expandlast = expand_layer(input = encoder1_last, - expand_as = encoder2_rep) -context = mixed_layer(input = [identity_projection(encoder1_expandlast), - identity_projection(encoder2_rep)], - size = hidden_dim) -``` - -## 示例4:beam_search的生成 - -TBD diff --git a/doc_cn/algorithm/rnn/hierarchical-rnn.rst 
b/doc_cn/algorithm/rnn/hierarchical-rnn.rst new file mode 100644 index 0000000000..7de54cc0b1 --- /dev/null +++ b/doc_cn/algorithm/rnn/hierarchical-rnn.rst @@ -0,0 +1,440 @@ +################# +双层RNN配置与示例 +################# + +我们在 :code:`paddle/gserver/tests/test_RecurrentGradientMachine` 单测中,通过多组语义相同的单双层RNN配置,讲解如何使用双层RNN。 + +示例1:双进双出,subseq间无memory +================================= + +配置:单层RNN(:code:`sequence_layer_group`)和双层RNN(:code:`sequence_nest_layer_group`),语义完全相同。 + +读取双层序列的方法 +------------------ + +首先,我们看一下单双层序列的不同数据组织形式(您也可以采用别的组织形式)\: + +- 单层序列的数据( :code:`Sequence/tour_train_wdseg`)如下,一共有10个样本。每个样本由两部分组成,一个label(此处都为2)和一个已经分词后的句子。 + +.. code-block:: text + + 2 酒店 有 很 舒适 的 床垫 子 , 床上用品 也 应该 是 一人 一 换 , 感觉 很 利落 对 卫生 很 放心 呀 。 + 2 很 温馨 , 也 挺 干净 的 * 地段 不错 , 出来 就 有 全家 , 离 地铁站 也 近 , 交通 很方便 * 就是 都 不 给 刷牙 的 杯子 啊 , 就 第一天 给 了 一次性杯子 * + 2 位置 方便 , 强烈推荐 , 十一 出去玩 的 时候 选 的 , 对面 就是 华润万家 , 周围 吃饭 的 也 不少 。 + 2 交通便利 , 吃 很 便利 , 乾 浄 、 安静 , 商务 房 有 电脑 、 上网 快 , 价格 可以 , 就 早餐 不 好吃 。 整体 是 不错 的 。 適 合 出差 來 住 。 + 2 本来 准备 住 两 晚 , 第 2 天 一早 居然 停电 , 且 无 通知 , 只有 口头 道歉 。 总体来说 性价比 尚可 , 房间 较 新 , 还是 推荐 . + 2 这个 酒店 去过 很多 次 了 , 选择 的 主要原因 是 离 客户 最 便宜 相对 又 近 的 酒店 + 2 挺好 的 汉庭 , 前台 服务 很 热情 , 卫生 很 整洁 , 房间 安静 , 水温 适中 , 挺好 ! + 2 HowardJohnson 的 品质 , 服务 相当 好 的 一 家 五星级 。 房间 不错 、 泳池 不错 、 楼层 安排 很 合理 。 还有 就是 地理位置 , 简直 一 流 。 就 在 天一阁 、 月湖 旁边 , 离 天一广场 也 不远 。 下次 来 宁波 还会 住 。 + 2 酒店 很干净 , 很安静 , 很 温馨 , 服务员 服务 好 , 各方面 都 不错 * + 2 挺好 的 , 就是 没 窗户 , 不过 对 得 起 这 价格 + + +- 双层序列的数据( :code:`Sequence/tour_train_wdseg.nest`)如下,一共有4个样本。样本间用空行分开,代表不同的双层序列,序列数据和上面的完全一样。每个样本的子句数分别为2,3,2,3。 + +.. code-block:: text + + 2 酒店 有 很 舒适 的 床垫 子 , 床上用品 也 应该 是 一人 一 换 , 感觉 很 利落 对 卫生 很 放心 呀 。 + 2 很 温馨 , 也 挺 干净 的 * 地段 不错 , 出来 就 有 全家 , 离 地铁站 也 近 , 交通 很方便 * 就是 都 不 给 刷牙 的 杯子 啊 , 就 第一天 给 了 一次性杯子 * + + 2 位置 方便 , 强烈推荐 , 十一 出去玩 的 时候 选 的 , 对面 就是 华润万家 , 周围 吃饭 的 也 不少 。 + 2 交通便利 , 吃 很 便利 , 乾 浄 、 安静 , 商务 房 有 电脑 、 上网 快 , 价格 可以 , 就 早餐 不 好吃 。 整体 是 不错 的 。 適 合 出差 來 住 。 + 2 本来 准备 住 两 晚 , 第 2 天 一早 居然 停电 , 且 无 通知 , 只有 口头 道歉 。 总体来说 性价比 尚可 , 房间 较 新 , 还是 推荐 . + + 2 这个 酒店 去过 很多 次 了 , 选择 的 主要原因 是 离 客户 最 便宜 相对 又 近 的 酒店 + 2 挺好 的 汉庭 , 前台 服务 很 热情 , 卫生 很 整洁 , 房间 安静 , 水温 适中 , 挺好 ! + + 2 HowardJohnson 的 品质 , 服务 相当 好 的 一 家 五星级 。 房间 不错 、 泳池 不错 、 楼层 安排 很 合理 。 还有 就是 地理位置 , 简直 一 流 。 就 在 天一阁 、 月湖 旁边 , 离 天一广场 也 不远 。 下次 来 宁波 还会 住 。 + 2 酒店 很干净 , 很安静 , 很 温馨 , 服务员 服务 好 , 各方面 都 不错 * + 2 挺好 的 , 就是 没 窗户 , 不过 对 得 起 这 价格 + +其次,我们看一下单双层序列的不同dataprovider(见 :code:`sequenceGen.py` ): + +- 单层序列的dataprovider如下: + + - word_slot是integer_value_sequence类型,代表单层序列。 + - label是integer_value类型,代表一个向量。 + +.. code-block:: python + + def hook(settings, dict_file, **kwargs): + settings.word_dict = dict_file + settings.input_types = [integer_value_sequence(len(settings.word_dict)), + integer_value(3)] + + @provider(init_hook=hook) + def process(settings, file_name): + with open(file_name, 'r') as fdata: + for line in fdata: + label, comment = line.strip().split('\t') + label = int(''.join(label.split())) + words = comment.split() + word_slot = [settings.word_dict[w] for w in words if w in settings.word_dict] + yield word_slot, label + +- 双层序列的dataprovider如下: + + - word_slot是integer_value_sub_sequence类型,代表双层序列。 + - label是integer_value_sequence类型,代表单层序列,即一个子句一个label。注意:也可以为integer_value类型,代表一个向量,即一个句子一个label。通常根据任务需求进行不同设置。 + - 关于dataprovider中input_types的详细用法,参见PyDataProvider2。 + +.. 
code-block:: python + + def hook2(settings, dict_file, **kwargs): + settings.word_dict = dict_file + settings.input_types = [integer_value_sub_sequence(len(settings.word_dict)), + integer_value_sequence(3)] + + @provider(init_hook=hook2) + def process2(settings, file_name): + with open(file_name) as fdata: + label_list = [] + word_slot_list = [] + for line in fdata: + if (len(line)) > 1: + label,comment = line.strip().split('\t') + label = int(''.join(label.split())) + words = comment.split() + word_slot = [settings.word_dict[w] for w in words if w in settings.word_dict] + label_list.append(label) + word_slot_list.append(word_slot) + else: + yield word_slot_list, label_list + label_list = [] + word_slot_list = [] + + +模型中的配置 +------------ + +首先,我们看一下单层序列的配置(见 :code:`sequence_layer_group.conf`)。注意:batchsize=5表示一次过5句单层序列,因此2个batch就可以完成1个pass。 + +.. code-block:: python + + settings(batch_size=5) + + data = data_layer(name="word", size=dict_dim) + + emb = embedding_layer(input=data, size=word_dim) + + # (lstm_input + lstm) is equal to lstmemory + with mixed_layer(size=hidden_dim*4) as lstm_input: + lstm_input += full_matrix_projection(input=emb) + + lstm = lstmemory_group(input=lstm_input, + size=hidden_dim, + act=TanhActivation(), + gate_act=SigmoidActivation(), + state_act=TanhActivation(), + lstm_layer_attr=ExtraLayerAttribute(error_clipping_threshold=50)) + + lstm_last = last_seq(input=lstm) + + with mixed_layer(size=label_dim, + act=SoftmaxActivation(), + bias_attr=True) as output: + output += full_matrix_projection(input=lstm_last) + + outputs(classification_cost(input=output, label=data_layer(name="label", size=1))) + + +其次,我们看一下语义相同的双层序列配置(见 :code:`sequence_nest_layer_group.conf` ),并对其详细分析: + +- batchsize=2表示一次过2句双层序列。但从上面的数据格式可知,2句双层序列和5句单层序列的数据完全一样。 +- data_layer和embedding_layer不关心数据是否是序列格式,因此两个配置在这两层上的输出是一样的。 +- lstmemory\: + + - 单层序列过了一个mixed_layer和lstmemory_group。 + - 双层序列在同样的mixed_layer和lstmemory_group外,直接加了一层group。由于这个外层group里面没有memory,表示subseq间不存在联系,即起到的作用仅仅是把双层seq拆成单层,因此双层序列过完lstmemory的输出和单层的一样。 + +- last_seq\: + + - 单层序列直接取了最后一个元素 + - 双层序列首先(last_seq层)取了每个subseq的最后一个元素,将其拼接成一个新的单层序列;接着(expand_layer层)将其扩展成一个新的双层序列,其中第i个subseq中的所有向量均为输入的单层序列中的第i个向量;最后(average_layer层)取了每个subseq的平均值。 + - 分析得出:第一个last_seq后,每个subseq的最后一个元素就等于单层序列的最后一个元素,而expand_layer和average_layer后,依然保持每个subseq最后一个元素的值不变(这两层仅是为了展示它们的用法,实际中并不需要)。因此单双层序列的输出是一样旳。 + +.. 
code-block:: python + + settings(batch_size=2) + + data = data_layer(name="word", size=dict_dim) + + emb_group = embedding_layer(input=data, size=word_dim) + + # (lstm_input + lstm) is equal to lstmemory + def lstm_group(lstm_group_input): + with mixed_layer(size=hidden_dim*4) as group_input: + group_input += full_matrix_projection(input=lstm_group_input) + + lstm_output = lstmemory_group(input=group_input, + name="lstm_group", + size=hidden_dim, + act=TanhActivation(), + gate_act=SigmoidActivation(), + state_act=TanhActivation(), + lstm_layer_attr=ExtraLayerAttribute(error_clipping_threshold=50)) + return lstm_output + + lstm_nest_group = recurrent_group(input=SubsequenceInput(emb_group), + step=lstm_group, + name="lstm_nest_group") + # hasSubseq ->(seqlastins) seq + lstm_last = last_seq(input=lstm_nest_group, agg_level=AggregateLevel.EACH_SEQUENCE) + + # seq ->(expand) hasSubseq + lstm_expand = expand_layer(input=lstm_last, expand_as=emb_group, expand_level=ExpandLevel.FROM_SEQUENCE) + + # hasSubseq ->(average) seq + lstm_average = pooling_layer(input=lstm_expand, + pooling_type=AvgPooling(), + agg_level=AggregateLevel.EACH_SEQUENCE) + + with mixed_layer(size=label_dim, + act=SoftmaxActivation(), + bias_attr=True) as output: + output += full_matrix_projection(input=lstm_average) + + outputs(classification_cost(input=output, label=data_layer(name="label", size=1))) + +示例2:双进双出,subseq间有memory +================================= + +配置:单层RNN( :code:`sequence_rnn.conf` ),双层RNN( :code:`sequence_nest_rnn.conf` 和 :code:`sequence_nest_rnn_readonly_memory.conf` ),语义完全相同。 + +读取双层序列的方法 +------------------ + +我们看一下单双层序列的不同数据组织形式和dataprovider(见 :code:`rnn_data_provider.py`) + +.. code-block:: python + + data = [ + [[[1, 3, 2], [4, 5, 2]], 0], + [[[0, 2], [2, 5], [0, 1, 2]], 1], + ] + + @provider(input_types=[integer_value_sub_sequence(10), + integer_value(3)]) + def process_subseq(settings, file_name): + for d in data: + yield d + + @provider(input_types=[integer_value_sequence(10), + integer_value(3)]) + def process_seq(settings, file_name): + for d in data: + seq = [] + +- 单层序列:有两句,分别为[1,3,2,4,5,2]和[0,2,2,5,0,1,2]。 +- 双层序列:有两句,分别为[[1,3,2],[4,5,2]](2个子句)和[[0,2],[2,5],[0,1,2]](3个子句)。 +- 单双层序列的label都分别是0和1 + +模型中的配置 +------------ + +我们选取单双层序列配置中的不同部分,来对比分析两者语义相同的原因。 + +- 单层序列:过了一个很简单的recurrent_group。每一个时间步,当前的输入y和上一个时间步的输出rnn_state做了一个全链接。 + +.. code-block:: python + + def step(y): + mem = memory(name="rnn_state", size=hidden_dim) + return fc_layer(input=[y, mem], + size=hidden_dim, + act=TanhActivation(), + bias_attr=True, + name="rnn_state") + + out = recurrent_group(step=step, input=emb) + +- 双层序列,外层memory是一个元素: + - 内层inner_step的recurrent_group和单层序列的几乎一样。除了boot_layer=outer_mem,表示将外层的outer_mem作为内层memory的初始状态。外层outer_step中,outer_mem是一个子句的最后一个向量,即整个双层group是将前一个子句的最后一个向量,作为下一个子句memory的初始状态。 + - 从输入数据上看,单双层序列的句子是一样的,只是双层序列将其又做了子序列划分。因此双层序列的配置中,必须将前一个子句的最后一个元素,作为boot_layer传给下一个子句的memory,才能保证和单层序列的配置中“每一个时间步都用了上一个时间步的输出结果”一致。 + +.. 
code-block:: + + def outer_step(x): + outer_mem = memory(name="outer_rnn_state", size=hidden_dim) + def inner_step(y): + inner_mem = memory(name="inner_rnn_state", + size=hidden_dim, + boot_layer=outer_mem) + return fc_layer(input=[y, inner_mem], + size=hidden_dim, + act=TanhActivation(), + bias_attr=True, + name="inner_rnn_state") + + inner_rnn_output = recurrent_group( + step=inner_step, + input=x) + last = last_seq(input=inner_rnn_output, name="outer_rnn_state") + + return inner_rnn_output + + out = recurrent_group(step=outer_step, input=SubsequenceInput(emb)) + +- 双层序列,外层memory是单层序列: + - 由于外层每个时间步返回的是一个子句,这些子句的长度往往不等长。因此当外层有is_seq=True的memory时,内层是**无法直接使用**它的,即内层memory的boot_layer不能链接外层的这个memory。 + - 如果内层memory想**间接使用**这个外层memory,只能通过`pooling_layer`、`last_seq`或`first_seq`这三个layer将它先变成一个元素。但这种情况下,外层memory必须有boot_layer,否则在第0个时间步时,由于外层memory没有任何seq信息,因此上述三个layer的前向会报出“**Check failed: input.sequenceStartPositions**”的错误。 + +示例3:双进双出,输入不等长 +=========================== + +.. role:: red + +.. raw:: html + + + +**输入不等长** 是指recurrent_group的多个输入在各时刻的长度可以不相等, 但需要指定一个和输出长度一致的input,用 :red:`targetInlink` 表示。参考配置:单层RNN(:code:`sequence_rnn_multi_unequalength_inputs.conf`),双层RNN(:code:`sequence_nest_rnn_multi_unequalength_inputs.conf`) + +读取双层序列的方法 +------------------ + +我们看一下单双层序列的数据组织形式和dataprovider(见`rnn_data_provider.py`) + +.. code-block:: python + + data2 = [ + [[[1, 2], [4, 5, 2]], [[5, 4, 1], [3, 1]] ,0], + [[[0, 2], [2, 5], [0, 1, 2]],[[1, 5], [4], [2, 3, 6, 1]], 1], + ] + + @provider(input_types=[integer_value_sub_sequence(10), + integer_value_sub_sequence(10), + integer_value(2)], + should_shuffle=False) + def process_unequalength_subseq(settings, file_name): #双层RNN的dataprovider + for d in data2: + yield d + + + @provider(input_types=[integer_value_sequence(10), + integer_value_sequence(10), + integer_value(2)], + should_shuffle=False) + def process_unequalength_seq(settings, file_name): #单层RNN的dataprovider + for d in data2: + words1=reduce(lambda x,y: x+y, d[0]) + words2=reduce(lambda x,y: x+y, d[1]) + yield words1, words2, d[2] + +data2 中有两个样本,每个样本有两个特征, 记fea1, fea2。 + +- 单层序列:两个样本分别为[[1, 2, 4, 5, 2], [5, 4, 1, 3, 1]] 和 [[0, 2, 2, 5, 0, 1, 2], [1, 5, 4, 2, 3, 6, 1]] +- 双层序列:两个样本分别为 + + - **样本1**\:[[[1, 2], [4, 5, 2]], [[5, 4, 1], [3, 1]]]。fea1和fea2都分别有2个子句,fea1=[[1, 2], [4, 5, 2]], fea2=[[5, 4, 1], [3, 1]] + - **样本2**\:[[[0, 2], [2, 5], [0, 1, 2]],[[1, 5], [4], [2, 3, 6, 1]]]。fea1和fea2都分别有3个子句, fea1=[[0, 2], [2, 5], [0, 1, 2]], fea2=[[1, 5], [4], [2, 3, 6, 1]]。
+ - **注意**\:每个样本中,各特征的子句数目需要相等。这里说的“双进双出,输入不等长”是指fea1在i时刻的输入的长度可以不等于fea2在i时刻的输入的长度。如对于第1个样本,时刻i=2, fea1[2]=[4, 5, 2],fea2[2]=[3, 1],3≠2。 + +- 单双层序列中,两个样本的label都分别是0和1 + +模型中的配置 +------------ + +单层RNN( :code:`sequence_rnn_multi_unequalength_inputs.conf`)和双层RNN( :code:`sequence_nest_rnn_multi_unequalength_inputs.conf`)两个模型配置达到的效果完全一样,区别只在于输入为单层还是双层序列,现在我们来看它们内部分别是如何实现的。 + +- 单层序列\: + + - 过了一个简单的recurrent_group。每一个时间步,当前的输入y和上一个时间步的输出rnn_state做了一个全连接,功能与示例2中`sequence_rnn.conf`的`step`函数完全相同。这里,两个输入x1,x2分别通过calrnn返回最后时刻的状态。结果得到的encoder1_rep和encoder2_rep分别是单层序列,最后取encoder1_rep的最后一个时刻和encoder2_rep的所有时刻分别相加得到context。 + - 注意到这里recurrent_group输入的每个样本中,fea1和fea2的长度都分别相等,这并非偶然,而是因为recurrent_group要求输入为单层序列时,所有输入的长度都必须相等。 + +.. code-block:: python + + def step(x1, x2): + def calrnn(y): + mem = memory(name = 'rnn_state_' + y.name, size = hidden_dim) + out = fc_layer(input = [y, mem], + size = hidden_dim, + act = TanhActivation(), + bias_attr = True, + name = 'rnn_state_' + y.name) + return out + + encoder1 = calrnn(x1) + encoder2 = calrnn(x2) + return [encoder1, encoder2] + + encoder1_rep, encoder2_rep = recurrent_group( + name="stepout", + step=step, + input=[emb1, emb2]) + + encoder1_last = last_seq(input = encoder1_rep) + encoder1_expandlast = expand_layer(input = encoder1_last, + expand_as = encoder2_rep) + context = mixed_layer(input = [identity_projection(encoder1_expandlast), + identity_projection(encoder2_rep)], + size = hidden_dim) + +- 双层序列\: + + - 双层RNN中,对输入的两个特征分别求时序上的连续全连接(`inner_step1`和`inner_step2`分别处理fea1和fea2),其功能与示例2中`sequence_nest_rnn.conf`的`outer_step`函数完全相同。不同之处是,此时输入`[SubsequenceInput(emb1), SubsequenceInput(emb2)]`在各时刻并不等长。 + - 函数`outer_step`中可以分别处理这两个特征,但我们需要用targetInlink指定recurrent_group的输出的格式(各子句长度)只能和其中一个保持一致,如这里选择了和emb2的长度一致。 + - 最后,依然是取encoder1_rep的最后一个时刻和encoder2_rep的所有时刻分别相加得到context。 + +.. 
code-block:: python + + def outer_step(x1, x2): + outer_mem1 = memory(name = "outer_rnn_state1", size = hidden_dim) + outer_mem2 = memory(name = "outer_rnn_state2", size = hidden_dim) + def inner_step1(y): + inner_mem = memory(name = 'inner_rnn_state_' + y.name, + size = hidden_dim, + boot_layer = outer_mem1) + out = fc_layer(input = [y, inner_mem], + size = hidden_dim, + act = TanhActivation(), + bias_attr = True, + name = 'inner_rnn_state_' + y.name) + return out + + def inner_step2(y): + inner_mem = memory(name = 'inner_rnn_state_' + y.name, + size = hidden_dim, + boot_layer = outer_mem2) + out = fc_layer(input = [y, inner_mem], + size = hidden_dim, + act = TanhActivation(), + bias_attr = True, + name = 'inner_rnn_state_' + y.name) + return out + + encoder1 = recurrent_group( + step = inner_step1, + name = 'inner1', + input = x1) + + encoder2 = recurrent_group( + step = inner_step2, + name = 'inner2', + input = x2) + + sentence_last_state1 = last_seq(input = encoder1, name = 'outer_rnn_state1') + sentence_last_state2_ = last_seq(input = encoder2, name = 'outer_rnn_state2') + + encoder1_expand = expand_layer(input = sentence_last_state1, + expand_as = encoder2) + + return [encoder1_expand, encoder2] + + encoder1_rep, encoder2_rep = recurrent_group( + name="outer", + step=outer_step, + input=[SubsequenceInput(emb1), SubsequenceInput(emb2)], + targetInlink=emb2) + + encoder1_last = last_seq(input = encoder1_rep) + encoder1_expandlast = expand_layer(input = encoder1_last, + expand_as = encoder2_rep) + context = mixed_layer(input = [identity_projection(encoder1_expandlast), + identity_projection(encoder2_rep)], + size = hidden_dim) + +示例4:beam_search的生成 +======================== + +TBD From f1955e2b2092d63f9acab07d8971ef2c5572170c Mon Sep 17 00:00:00 2001 From: Yu Yang Date: Thu, 17 Nov 2016 16:36:26 +0800 Subject: [PATCH 002/265] Remove copy & paste code. --- doc_cn/algorithm/rnn/hierarchical-rnn.rst | 337 +++------------------- 1 file changed, 38 insertions(+), 299 deletions(-) diff --git a/doc_cn/algorithm/rnn/hierarchical-rnn.rst b/doc_cn/algorithm/rnn/hierarchical-rnn.rst index 7de54cc0b1..7c81ce8c67 100644 --- a/doc_cn/algorithm/rnn/hierarchical-rnn.rst +++ b/doc_cn/algorithm/rnn/hierarchical-rnn.rst @@ -16,37 +16,14 @@ - 单层序列的数据( :code:`Sequence/tour_train_wdseg`)如下,一共有10个样本。每个样本由两部分组成,一个label(此处都为2)和一个已经分词后的句子。 -.. code-block:: text - - 2 酒店 有 很 舒适 的 床垫 子 , 床上用品 也 应该 是 一人 一 换 , 感觉 很 利落 对 卫生 很 放心 呀 。 - 2 很 温馨 , 也 挺 干净 的 * 地段 不错 , 出来 就 有 全家 , 离 地铁站 也 近 , 交通 很方便 * 就是 都 不 给 刷牙 的 杯子 啊 , 就 第一天 给 了 一次性杯子 * - 2 位置 方便 , 强烈推荐 , 十一 出去玩 的 时候 选 的 , 对面 就是 华润万家 , 周围 吃饭 的 也 不少 。 - 2 交通便利 , 吃 很 便利 , 乾 浄 、 安静 , 商务 房 有 电脑 、 上网 快 , 价格 可以 , 就 早餐 不 好吃 。 整体 是 不错 的 。 適 合 出差 來 住 。 - 2 本来 准备 住 两 晚 , 第 2 天 一早 居然 停电 , 且 无 通知 , 只有 口头 道歉 。 总体来说 性价比 尚可 , 房间 较 新 , 还是 推荐 . - 2 这个 酒店 去过 很多 次 了 , 选择 的 主要原因 是 离 客户 最 便宜 相对 又 近 的 酒店 - 2 挺好 的 汉庭 , 前台 服务 很 热情 , 卫生 很 整洁 , 房间 安静 , 水温 适中 , 挺好 ! - 2 HowardJohnson 的 品质 , 服务 相当 好 的 一 家 五星级 。 房间 不错 、 泳池 不错 、 楼层 安排 很 合理 。 还有 就是 地理位置 , 简直 一 流 。 就 在 天一阁 、 月湖 旁边 , 离 天一广场 也 不远 。 下次 来 宁波 还会 住 。 - 2 酒店 很干净 , 很安静 , 很 温馨 , 服务员 服务 好 , 各方面 都 不错 * - 2 挺好 的 , 就是 没 窗户 , 不过 对 得 起 这 价格 +.. literalinclude:: ../../../paddle/gserver/tests/Sequence/tour_train_wdseg + :language: text - 双层序列的数据( :code:`Sequence/tour_train_wdseg.nest`)如下,一共有4个样本。样本间用空行分开,代表不同的双层序列,序列数据和上面的完全一样。每个样本的子句数分别为2,3,2,3。 -.. 
code-block:: text - - 2 酒店 有 很 舒适 的 床垫 子 , 床上用品 也 应该 是 一人 一 换 , 感觉 很 利落 对 卫生 很 放心 呀 。 - 2 很 温馨 , 也 挺 干净 的 * 地段 不错 , 出来 就 有 全家 , 离 地铁站 也 近 , 交通 很方便 * 就是 都 不 给 刷牙 的 杯子 啊 , 就 第一天 给 了 一次性杯子 * - - 2 位置 方便 , 强烈推荐 , 十一 出去玩 的 时候 选 的 , 对面 就是 华润万家 , 周围 吃饭 的 也 不少 。 - 2 交通便利 , 吃 很 便利 , 乾 浄 、 安静 , 商务 房 有 电脑 、 上网 快 , 价格 可以 , 就 早餐 不 好吃 。 整体 是 不错 的 。 適 合 出差 來 住 。 - 2 本来 准备 住 两 晚 , 第 2 天 一早 居然 停电 , 且 无 通知 , 只有 口头 道歉 。 总体来说 性价比 尚可 , 房间 较 新 , 还是 推荐 . - - 2 这个 酒店 去过 很多 次 了 , 选择 的 主要原因 是 离 客户 最 便宜 相对 又 近 的 酒店 - 2 挺好 的 汉庭 , 前台 服务 很 热情 , 卫生 很 整洁 , 房间 安静 , 水温 适中 , 挺好 ! - - 2 HowardJohnson 的 品质 , 服务 相当 好 的 一 家 五星级 。 房间 不错 、 泳池 不错 、 楼层 安排 很 合理 。 还有 就是 地理位置 , 简直 一 流 。 就 在 天一阁 、 月湖 旁边 , 离 天一广场 也 不远 。 下次 来 宁波 还会 住 。 - 2 酒店 很干净 , 很安静 , 很 温馨 , 服务员 服务 好 , 各方面 都 不错 * - 2 挺好 的 , 就是 没 窗户 , 不过 对 得 起 这 价格 +.. literalinclude:: ../../../paddle/gserver/tests/Sequence/tour_train_wdseg.nest + :language: text 其次,我们看一下单双层序列的不同dataprovider(见 :code:`sequenceGen.py` ): @@ -55,22 +32,9 @@ - word_slot是integer_value_sequence类型,代表单层序列。 - label是integer_value类型,代表一个向量。 -.. code-block:: python - - def hook(settings, dict_file, **kwargs): - settings.word_dict = dict_file - settings.input_types = [integer_value_sequence(len(settings.word_dict)), - integer_value(3)] - - @provider(init_hook=hook) - def process(settings, file_name): - with open(file_name, 'r') as fdata: - for line in fdata: - label, comment = line.strip().split('\t') - label = int(''.join(label.split())) - words = comment.split() - word_slot = [settings.word_dict[w] for w in words if w in settings.word_dict] - yield word_slot, label +.. literalinclude:: ../../../paddle/gserver/tests/sequenceGen.py + :language: python + :lines: 21-39 - 双层序列的dataprovider如下: @@ -78,64 +42,18 @@ - label是integer_value_sequence类型,代表单层序列,即一个子句一个label。注意:也可以为integer_value类型,代表一个向量,即一个句子一个label。通常根据任务需求进行不同设置。 - 关于dataprovider中input_types的详细用法,参见PyDataProvider2。 -.. code-block:: python - - def hook2(settings, dict_file, **kwargs): - settings.word_dict = dict_file - settings.input_types = [integer_value_sub_sequence(len(settings.word_dict)), - integer_value_sequence(3)] - - @provider(init_hook=hook2) - def process2(settings, file_name): - with open(file_name) as fdata: - label_list = [] - word_slot_list = [] - for line in fdata: - if (len(line)) > 1: - label,comment = line.strip().split('\t') - label = int(''.join(label.split())) - words = comment.split() - word_slot = [settings.word_dict[w] for w in words if w in settings.word_dict] - label_list.append(label) - word_slot_list.append(word_slot) - else: - yield word_slot_list, label_list - label_list = [] - word_slot_list = [] - +.. literalinclude:: ../../../paddle/gserver/tests/sequenceGen.py + :language: python + :lines: 42-71 模型中的配置 ------------ 首先,我们看一下单层序列的配置(见 :code:`sequence_layer_group.conf`)。注意:batchsize=5表示一次过5句单层序列,因此2个batch就可以完成1个pass。 -.. 
code-block:: python - - settings(batch_size=5) - - data = data_layer(name="word", size=dict_dim) - - emb = embedding_layer(input=data, size=word_dim) - - # (lstm_input + lstm) is equal to lstmemory - with mixed_layer(size=hidden_dim*4) as lstm_input: - lstm_input += full_matrix_projection(input=emb) - - lstm = lstmemory_group(input=lstm_input, - size=hidden_dim, - act=TanhActivation(), - gate_act=SigmoidActivation(), - state_act=TanhActivation(), - lstm_layer_attr=ExtraLayerAttribute(error_clipping_threshold=50)) - - lstm_last = last_seq(input=lstm) - - with mixed_layer(size=label_dim, - act=SoftmaxActivation(), - bias_attr=True) as output: - output += full_matrix_projection(input=lstm_last) - - outputs(classification_cost(input=output, label=data_layer(name="label", size=1))) +.. literalinclude:: ../../../paddle/gserver/tests/sequence_layer_group.conf + :language: python + :lines: 38-63 其次,我们看一下语义相同的双层序列配置(见 :code:`sequence_nest_layer_group.conf` ),并对其详细分析: @@ -153,48 +71,9 @@ - 双层序列首先(last_seq层)取了每个subseq的最后一个元素,将其拼接成一个新的单层序列;接着(expand_layer层)将其扩展成一个新的双层序列,其中第i个subseq中的所有向量均为输入的单层序列中的第i个向量;最后(average_layer层)取了每个subseq的平均值。 - 分析得出:第一个last_seq后,每个subseq的最后一个元素就等于单层序列的最后一个元素,而expand_layer和average_layer后,依然保持每个subseq最后一个元素的值不变(这两层仅是为了展示它们的用法,实际中并不需要)。因此单双层序列的输出是一样旳。 -.. code-block:: python - - settings(batch_size=2) - - data = data_layer(name="word", size=dict_dim) - - emb_group = embedding_layer(input=data, size=word_dim) - - # (lstm_input + lstm) is equal to lstmemory - def lstm_group(lstm_group_input): - with mixed_layer(size=hidden_dim*4) as group_input: - group_input += full_matrix_projection(input=lstm_group_input) - - lstm_output = lstmemory_group(input=group_input, - name="lstm_group", - size=hidden_dim, - act=TanhActivation(), - gate_act=SigmoidActivation(), - state_act=TanhActivation(), - lstm_layer_attr=ExtraLayerAttribute(error_clipping_threshold=50)) - return lstm_output - - lstm_nest_group = recurrent_group(input=SubsequenceInput(emb_group), - step=lstm_group, - name="lstm_nest_group") - # hasSubseq ->(seqlastins) seq - lstm_last = last_seq(input=lstm_nest_group, agg_level=AggregateLevel.EACH_SEQUENCE) - - # seq ->(expand) hasSubseq - lstm_expand = expand_layer(input=lstm_last, expand_as=emb_group, expand_level=ExpandLevel.FROM_SEQUENCE) - - # hasSubseq ->(average) seq - lstm_average = pooling_layer(input=lstm_expand, - pooling_type=AvgPooling(), - agg_level=AggregateLevel.EACH_SEQUENCE) - - with mixed_layer(size=label_dim, - act=SoftmaxActivation(), - bias_attr=True) as output: - output += full_matrix_projection(input=lstm_average) - - outputs(classification_cost(input=output, label=data_layer(name="label", size=1))) +.. literalinclude:: ../../../paddle/gserver/tests/sequence_nest_layer_group.conf + :language: python + :lines: 38-84 示例2:双进双出,subseq间有memory ================================= @@ -206,24 +85,9 @@ 我们看一下单双层序列的不同数据组织形式和dataprovider(见 :code:`rnn_data_provider.py`) -.. code-block:: python - - data = [ - [[[1, 3, 2], [4, 5, 2]], 0], - [[[0, 2], [2, 5], [0, 1, 2]], 1], - ] - - @provider(input_types=[integer_value_sub_sequence(10), - integer_value(3)]) - def process_subseq(settings, file_name): - for d in data: - yield d - - @provider(input_types=[integer_value_sequence(10), - integer_value(3)]) - def process_seq(settings, file_name): - for d in data: - seq = [] +.. 
literalinclude:: ../../../paddle/gserver/tests/rnn_data_provider.py + :language: python + :lines: 20-32 - 单层序列:有两句,分别为[1,3,2,4,5,2]和[0,2,2,5,0,1,2]。 - 双层序列:有两句,分别为[[1,3,2],[4,5,2]](2个子句)和[[0,2],[2,5],[0,1,2]](3个子句)。 @@ -236,46 +100,21 @@ - 单层序列:过了一个很简单的recurrent_group。每一个时间步,当前的输入y和上一个时间步的输出rnn_state做了一个全链接。 -.. code-block:: python - - def step(y): - mem = memory(name="rnn_state", size=hidden_dim) - return fc_layer(input=[y, mem], - size=hidden_dim, - act=TanhActivation(), - bias_attr=True, - name="rnn_state") - - out = recurrent_group(step=step, input=emb) +.. literalinclude:: ../../../paddle/gserver/tests/sequence_rnn.conf + :language: python + :lines: 36-48 - 双层序列,外层memory是一个元素: + - 内层inner_step的recurrent_group和单层序列的几乎一样。除了boot_layer=outer_mem,表示将外层的outer_mem作为内层memory的初始状态。外层outer_step中,outer_mem是一个子句的最后一个向量,即整个双层group是将前一个子句的最后一个向量,作为下一个子句memory的初始状态。 - 从输入数据上看,单双层序列的句子是一样的,只是双层序列将其又做了子序列划分。因此双层序列的配置中,必须将前一个子句的最后一个元素,作为boot_layer传给下一个子句的memory,才能保证和单层序列的配置中“每一个时间步都用了上一个时间步的输出结果”一致。 -.. code-block:: - - def outer_step(x): - outer_mem = memory(name="outer_rnn_state", size=hidden_dim) - def inner_step(y): - inner_mem = memory(name="inner_rnn_state", - size=hidden_dim, - boot_layer=outer_mem) - return fc_layer(input=[y, inner_mem], - size=hidden_dim, - act=TanhActivation(), - bias_attr=True, - name="inner_rnn_state") - - inner_rnn_output = recurrent_group( - step=inner_step, - input=x) - last = last_seq(input=inner_rnn_output, name="outer_rnn_state") - - return inner_rnn_output - - out = recurrent_group(step=outer_step, input=SubsequenceInput(emb)) +.. literalinclude:: ../../../paddle/gserver/tests/sequence_nest_rnn.conf + :language: python + :lines: 39-66 - 双层序列,外层memory是单层序列: + - 由于外层每个时间步返回的是一个子句,这些子句的长度往往不等长。因此当外层有is_seq=True的memory时,内层是**无法直接使用**它的,即内层memory的boot_layer不能链接外层的这个memory。 - 如果内层memory想**间接使用**这个外层memory,只能通过`pooling_layer`、`last_seq`或`first_seq`这三个layer将它先变成一个元素。但这种情况下,外层memory必须有boot_layer,否则在第0个时间步时,由于外层memory没有任何seq信息,因此上述三个layer的前向会报出“**Check failed: input.sequenceStartPositions**”的错误。 @@ -293,33 +132,11 @@ 读取双层序列的方法 ------------------ -我们看一下单双层序列的数据组织形式和dataprovider(见`rnn_data_provider.py`) - -.. code-block:: python - - data2 = [ - [[[1, 2], [4, 5, 2]], [[5, 4, 1], [3, 1]] ,0], - [[[0, 2], [2, 5], [0, 1, 2]],[[1, 5], [4], [2, 3, 6, 1]], 1], - ] - - @provider(input_types=[integer_value_sub_sequence(10), - integer_value_sub_sequence(10), - integer_value(2)], - should_shuffle=False) - def process_unequalength_subseq(settings, file_name): #双层RNN的dataprovider - for d in data2: - yield d - +我们看一下单双层序列的数据组织形式和dataprovider(见 :code:`rnn_data_provider.py` ) - @provider(input_types=[integer_value_sequence(10), - integer_value_sequence(10), - integer_value(2)], - should_shuffle=False) - def process_unequalength_seq(settings, file_name): #单层RNN的dataprovider - for d in data2: - words1=reduce(lambda x,y: x+y, d[0]) - words2=reduce(lambda x,y: x+y, d[1]) - yield words1, words2, d[2] +.. 
literalinclude:: ../../../paddle/gserver/tests/rnn_data_provider.py + :language: python + :lines: 69-97 data2 中有两个样本,每个样本有两个特征, 记fea1, fea2。 @@ -335,40 +152,16 @@ data2 中有两个样本,每个样本有两个特征, 记fea1, fea2。 模型中的配置 ------------ -单层RNN( :code:`sequence_rnn_multi_unequalength_inputs.conf`)和双层RNN( :code:`sequence_nest_rnn_multi_unequalength_inputs.conf`)两个模型配置达到的效果完全一样,区别只在于输入为单层还是双层序列,现在我们来看它们内部分别是如何实现的。 +单层RNN( :code:`sequence_rnn_multi_unequalength_inputs.conf`)和双层RNN( :code:`v.conf`)两个模型配置达到的效果完全一样,区别只在于输入为单层还是双层序列,现在我们来看它们内部分别是如何实现的。 - 单层序列\: - 过了一个简单的recurrent_group。每一个时间步,当前的输入y和上一个时间步的输出rnn_state做了一个全连接,功能与示例2中`sequence_rnn.conf`的`step`函数完全相同。这里,两个输入x1,x2分别通过calrnn返回最后时刻的状态。结果得到的encoder1_rep和encoder2_rep分别是单层序列,最后取encoder1_rep的最后一个时刻和encoder2_rep的所有时刻分别相加得到context。 - 注意到这里recurrent_group输入的每个样本中,fea1和fea2的长度都分别相等,这并非偶然,而是因为recurrent_group要求输入为单层序列时,所有输入的长度都必须相等。 -.. code-block:: python - - def step(x1, x2): - def calrnn(y): - mem = memory(name = 'rnn_state_' + y.name, size = hidden_dim) - out = fc_layer(input = [y, mem], - size = hidden_dim, - act = TanhActivation(), - bias_attr = True, - name = 'rnn_state_' + y.name) - return out - - encoder1 = calrnn(x1) - encoder2 = calrnn(x2) - return [encoder1, encoder2] - - encoder1_rep, encoder2_rep = recurrent_group( - name="stepout", - step=step, - input=[emb1, emb2]) - - encoder1_last = last_seq(input = encoder1_rep) - encoder1_expandlast = expand_layer(input = encoder1_last, - expand_as = encoder2_rep) - context = mixed_layer(input = [identity_projection(encoder1_expandlast), - identity_projection(encoder2_rep)], - size = hidden_dim) +.. literalinclude:: ../../../paddle/gserver/tests/sequence_rnn_multi_unequalength_inputs.conf + :language: python + :lines: 41-58 - 双层序列\: @@ -376,63 +169,9 @@ data2 中有两个样本,每个样本有两个特征, 记fea1, fea2。 - 函数`outer_step`中可以分别处理这两个特征,但我们需要用targetInlink指定recurrent_group的输出的格式(各子句长度)只能和其中一个保持一致,如这里选择了和emb2的长度一致。 - 最后,依然是取encoder1_rep的最后一个时刻和encoder2_rep的所有时刻分别相加得到context。 -.. 
code-block:: python - - def outer_step(x1, x2): - outer_mem1 = memory(name = "outer_rnn_state1", size = hidden_dim) - outer_mem2 = memory(name = "outer_rnn_state2", size = hidden_dim) - def inner_step1(y): - inner_mem = memory(name = 'inner_rnn_state_' + y.name, - size = hidden_dim, - boot_layer = outer_mem1) - out = fc_layer(input = [y, inner_mem], - size = hidden_dim, - act = TanhActivation(), - bias_attr = True, - name = 'inner_rnn_state_' + y.name) - return out - - def inner_step2(y): - inner_mem = memory(name = 'inner_rnn_state_' + y.name, - size = hidden_dim, - boot_layer = outer_mem2) - out = fc_layer(input = [y, inner_mem], - size = hidden_dim, - act = TanhActivation(), - bias_attr = True, - name = 'inner_rnn_state_' + y.name) - return out - - encoder1 = recurrent_group( - step = inner_step1, - name = 'inner1', - input = x1) - - encoder2 = recurrent_group( - step = inner_step2, - name = 'inner2', - input = x2) - - sentence_last_state1 = last_seq(input = encoder1, name = 'outer_rnn_state1') - sentence_last_state2_ = last_seq(input = encoder2, name = 'outer_rnn_state2') - - encoder1_expand = expand_layer(input = sentence_last_state1, - expand_as = encoder2) - - return [encoder1_expand, encoder2] - - encoder1_rep, encoder2_rep = recurrent_group( - name="outer", - step=outer_step, - input=[SubsequenceInput(emb1), SubsequenceInput(emb2)], - targetInlink=emb2) - - encoder1_last = last_seq(input = encoder1_rep) - encoder1_expandlast = expand_layer(input = encoder1_last, - expand_as = encoder2_rep) - context = mixed_layer(input = [identity_projection(encoder1_expandlast), - identity_projection(encoder2_rep)], - size = hidden_dim) +.. literalinclude:: ../../../paddle/gserver/tests/sequence_nest_rnn_multi_unequalength_inputs.conf + :language: python + :lines: 41-89 示例4:beam_search的生成 ======================== From b3dd2d10b74b495aa525ec1052e5ed1b761aa935 Mon Sep 17 00:00:00 2001 From: Yu Yang Date: Fri, 18 Nov 2016 17:07:06 +0800 Subject: [PATCH 003/265] Add glossary for Paddle --- doc_cn/algorithm/rnn/hrnn_demo.rst | 7 + doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst | 183 ++++++++++++++++++ doc_cn/concepts/glossary.rst | 59 ++++++ doc_cn/concepts/glossary_rnn.dot | 42 ++++ doc_cn/concepts/glossary_rnn_with_memory.dot | 48 +++++ doc_cn/index.rst | 3 +- 6 files changed, 341 insertions(+), 1 deletion(-) create mode 100644 doc_cn/algorithm/rnn/hrnn_demo.rst create mode 100644 doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst create mode 100644 doc_cn/concepts/glossary.rst create mode 100644 doc_cn/concepts/glossary_rnn.dot create mode 100644 doc_cn/concepts/glossary_rnn_with_memory.dot diff --git a/doc_cn/algorithm/rnn/hrnn_demo.rst b/doc_cn/algorithm/rnn/hrnn_demo.rst new file mode 100644 index 0000000000..cf38e416c0 --- /dev/null +++ b/doc_cn/algorithm/rnn/hrnn_demo.rst @@ -0,0 +1,7 @@ +.. algo_hrnn_demo: + +################# +双层RNN的使用示例 +################# + +TBD \ No newline at end of file diff --git a/doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst b/doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst new file mode 100644 index 0000000000..cf18108019 --- /dev/null +++ b/doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst @@ -0,0 +1,183 @@ +.. 
_algo_hrnn_rnn_api_compare: + +##################### +单双层RNN API对比介绍 +##################### + +这篇教程主要介绍了 :ref:`glossary_双层RNN` 的API接口。本文中的以 :ref:`glossary_paddle` 的 :ref:`glossary_双层RNN` 单元测试为示例,用多对效果完全相同的、分别使用单、双层RNN作为网络配置的模型,来讲解如何使用 :ref:`glossary_双层RNN` 。本文中所有的例子,都只是介绍 :ref:`glossary_双层RNN` 的API接口,并不是使用 :ref:`glossary_双层RNN` 解决实际的问题。如果想要了解 :ref:`glossary_双层RNN` 在具体问题中的使用,请参考 :ref:`algo_hrnn_demo` 。文章中示例所使用的单元测试文件是 `test_RecurrentGradientMachine.cpp `_ 。 + +示例1:双层RNN,子序列间无Memory +================================ + + + +配置:单层RNN(:code:`sequence_layer_group`)和双层RNN(:code:`sequence_nest_layer_group`),语义完全相同。 + +读取双层序列的方法 +------------------ + +首先,我们看一下单双层序列的不同数据组织形式(您也可以采用别的组织形式)\: + +- 单层序列的数据( :code:`Sequence/tour_train_wdseg`)如下,一共有10个样本。每个样本由两部分组成,一个label(此处都为2)和一个已经分词后的句子。 + +.. literalinclude:: ../../../paddle/gserver/tests/Sequence/tour_train_wdseg + :language: text + + +- 双层序列的数据( :code:`Sequence/tour_train_wdseg.nest`)如下,一共有4个样本。样本间用空行分开,代表不同的双层序列,序列数据和上面的完全一样。每个样本的子句数分别为2,3,2,3。 + +.. literalinclude:: ../../../paddle/gserver/tests/Sequence/tour_train_wdseg.nest + :language: text + +其次,我们看一下单双层序列的不同dataprovider(见 :code:`sequenceGen.py` ): + +- 单层序列的dataprovider如下: + + - word_slot是integer_value_sequence类型,代表单层序列。 + - label是integer_value类型,代表一个向量。 + +.. literalinclude:: ../../../paddle/gserver/tests/sequenceGen.py + :language: python + :lines: 21-39 + +- 双层序列的dataprovider如下: + + - word_slot是integer_value_sub_sequence类型,代表双层序列。 + - label是integer_value_sequence类型,代表单层序列,即一个子句一个label。注意:也可以为integer_value类型,代表一个向量,即一个句子一个label。通常根据任务需求进行不同设置。 + - 关于dataprovider中input_types的详细用法,参见PyDataProvider2。 + +.. literalinclude:: ../../../paddle/gserver/tests/sequenceGen.py + :language: python + :lines: 42-71 + +模型中的配置 +------------ + +首先,我们看一下单层序列的配置(见 :code:`sequence_layer_group.conf`)。注意:batchsize=5表示一次过5句单层序列,因此2个batch就可以完成1个pass。 + +.. literalinclude:: ../../../paddle/gserver/tests/sequence_layer_group.conf + :language: python + :lines: 38-63 + + +其次,我们看一下语义相同的双层序列配置(见 :code:`sequence_nest_layer_group.conf` ),并对其详细分析: + +- batchsize=2表示一次过2句双层序列。但从上面的数据格式可知,2句双层序列和5句单层序列的数据完全一样。 +- data_layer和embedding_layer不关心数据是否是序列格式,因此两个配置在这两层上的输出是一样的。 +- lstmemory\: + + - 单层序列过了一个mixed_layer和lstmemory_group。 + - 双层序列在同样的mixed_layer和lstmemory_group外,直接加了一层group。由于这个外层group里面没有memory,表示subseq间不存在联系,即起到的作用仅仅是把双层seq拆成单层,因此双层序列过完lstmemory的输出和单层的一样。 + +- last_seq\: + + - 单层序列直接取了最后一个元素 + - 双层序列首先(last_seq层)取了每个subseq的最后一个元素,将其拼接成一个新的单层序列;接着(expand_layer层)将其扩展成一个新的双层序列,其中第i个subseq中的所有向量均为输入的单层序列中的第i个向量;最后(average_layer层)取了每个subseq的平均值。 + - 分析得出:第一个last_seq后,每个subseq的最后一个元素就等于单层序列的最后一个元素,而expand_layer和average_layer后,依然保持每个subseq最后一个元素的值不变(这两层仅是为了展示它们的用法,实际中并不需要)。因此单双层序列的输出是一样旳。 + +.. literalinclude:: ../../../paddle/gserver/tests/sequence_nest_layer_group.conf + :language: python + :lines: 38-84 + +示例2:双进双出,subseq间有memory +================================= + +配置:单层RNN( :code:`sequence_rnn.conf` ),双层RNN( :code:`sequence_nest_rnn.conf` 和 :code:`sequence_nest_rnn_readonly_memory.conf` ),语义完全相同。 + +读取双层序列的方法 +------------------ + +我们看一下单双层序列的不同数据组织形式和dataprovider(见 :code:`rnn_data_provider.py`) + +.. literalinclude:: ../../../paddle/gserver/tests/rnn_data_provider.py + :language: python + :lines: 20-32 + +- 单层序列:有两句,分别为[1,3,2,4,5,2]和[0,2,2,5,0,1,2]。 +- 双层序列:有两句,分别为[[1,3,2],[4,5,2]](2个子句)和[[0,2],[2,5],[0,1,2]](3个子句)。 +- 单双层序列的label都分别是0和1 + +模型中的配置 +------------ + +我们选取单双层序列配置中的不同部分,来对比分析两者语义相同的原因。 + +- 单层序列:过了一个很简单的recurrent_group。每一个时间步,当前的输入y和上一个时间步的输出rnn_state做了一个全链接。 + +.. 
literalinclude:: ../../../paddle/gserver/tests/sequence_rnn.conf + :language: python + :lines: 36-48 + +- 双层序列,外层memory是一个元素: + + - 内层inner_step的recurrent_group和单层序列的几乎一样。除了boot_layer=outer_mem,表示将外层的outer_mem作为内层memory的初始状态。外层outer_step中,outer_mem是一个子句的最后一个向量,即整个双层group是将前一个子句的最后一个向量,作为下一个子句memory的初始状态。 + - 从输入数据上看,单双层序列的句子是一样的,只是双层序列将其又做了子序列划分。因此双层序列的配置中,必须将前一个子句的最后一个元素,作为boot_layer传给下一个子句的memory,才能保证和单层序列的配置中“每一个时间步都用了上一个时间步的输出结果”一致。 + +.. literalinclude:: ../../../paddle/gserver/tests/sequence_nest_rnn.conf + :language: python + :lines: 39-66 + +- 双层序列,外层memory是单层序列: + + - 由于外层每个时间步返回的是一个子句,这些子句的长度往往不等长。因此当外层有is_seq=True的memory时,内层是**无法直接使用**它的,即内层memory的boot_layer不能链接外层的这个memory。 + - 如果内层memory想**间接使用**这个外层memory,只能通过`pooling_layer`、`last_seq`或`first_seq`这三个layer将它先变成一个元素。但这种情况下,外层memory必须有boot_layer,否则在第0个时间步时,由于外层memory没有任何seq信息,因此上述三个layer的前向会报出“**Check failed: input.sequenceStartPositions**”的错误。 + +示例3:双进双出,输入不等长 +=========================== + +.. role:: red + +.. raw:: html + + + +**输入不等长** 是指recurrent_group的多个输入在各时刻的长度可以不相等, 但需要指定一个和输出长度一致的input,用 :red:`targetInlink` 表示。参考配置:单层RNN(:code:`sequence_rnn_multi_unequalength_inputs.conf`),双层RNN(:code:`sequence_nest_rnn_multi_unequalength_inputs.conf`) + +读取双层序列的方法 +------------------ + +我们看一下单双层序列的数据组织形式和dataprovider(见 :code:`rnn_data_provider.py` ) + +.. literalinclude:: ../../../paddle/gserver/tests/rnn_data_provider.py + :language: python + :lines: 69-97 + +data2 中有两个样本,每个样本有两个特征, 记fea1, fea2。 + +- 单层序列:两个样本分别为[[1, 2, 4, 5, 2], [5, 4, 1, 3, 1]] 和 [[0, 2, 2, 5, 0, 1, 2], [1, 5, 4, 2, 3, 6, 1]] +- 双层序列:两个样本分别为 + + - **样本1**\:[[[1, 2], [4, 5, 2]], [[5, 4, 1], [3, 1]]]。fea1和fea2都分别有2个子句,fea1=[[1, 2], [4, 5, 2]], fea2=[[5, 4, 1], [3, 1]] + - **样本2**\:[[[0, 2], [2, 5], [0, 1, 2]],[[1, 5], [4], [2, 3, 6, 1]]]。fea1和fea2都分别有3个子句, fea1=[[0, 2], [2, 5], [0, 1, 2]], fea2=[[1, 5], [4], [2, 3, 6, 1]]。
+ - **注意**\:每个样本中,各特征的子句数目需要相等。这里说的“双进双出,输入不等长”是指fea1在i时刻的输入的长度可以不等于fea2在i时刻的输入的长度。如对于第1个样本,时刻i=2, fea1[2]=[4, 5, 2],fea2[2]=[3, 1],3≠2。 + +- 单双层序列中,两个样本的label都分别是0和1 + +模型中的配置 +------------ + +单层RNN( :code:`sequence_rnn_multi_unequalength_inputs.conf`)和双层RNN( :code:`v.conf`)两个模型配置达到的效果完全一样,区别只在于输入为单层还是双层序列,现在我们来看它们内部分别是如何实现的。 + +- 单层序列\: + + - 过了一个简单的recurrent_group。每一个时间步,当前的输入y和上一个时间步的输出rnn_state做了一个全连接,功能与示例2中`sequence_rnn.conf`的`step`函数完全相同。这里,两个输入x1,x2分别通过calrnn返回最后时刻的状态。结果得到的encoder1_rep和encoder2_rep分别是单层序列,最后取encoder1_rep的最后一个时刻和encoder2_rep的所有时刻分别相加得到context。 + - 注意到这里recurrent_group输入的每个样本中,fea1和fea2的长度都分别相等,这并非偶然,而是因为recurrent_group要求输入为单层序列时,所有输入的长度都必须相等。 + +.. literalinclude:: ../../../paddle/gserver/tests/sequence_rnn_multi_unequalength_inputs.conf + :language: python + :lines: 41-58 + +- 双层序列\: + + - 双层RNN中,对输入的两个特征分别求时序上的连续全连接(`inner_step1`和`inner_step2`分别处理fea1和fea2),其功能与示例2中`sequence_nest_rnn.conf`的`outer_step`函数完全相同。不同之处是,此时输入`[SubsequenceInput(emb1), SubsequenceInput(emb2)]`在各时刻并不等长。 + - 函数`outer_step`中可以分别处理这两个特征,但我们需要用targetInlink指定recurrent_group的输出的格式(各子句长度)只能和其中一个保持一致,如这里选择了和emb2的长度一致。 + - 最后,依然是取encoder1_rep的最后一个时刻和encoder2_rep的所有时刻分别相加得到context。 + +.. literalinclude:: ../../../paddle/gserver/tests/sequence_nest_rnn_multi_unequalength_inputs.conf + :language: python + :lines: 41-89 + +示例4:beam_search的生成 +======================== + +TBD diff --git a/doc_cn/concepts/glossary.rst b/doc_cn/concepts/glossary.rst new file mode 100644 index 0000000000..a94aa73675 --- /dev/null +++ b/doc_cn/concepts/glossary.rst @@ -0,0 +1,59 @@ +.. _glossary: + +######################## +Paddle文档中使用的词汇表 +######################## + +.. _glossary_paddle: + +PaddlePaddle +------------ + +TBD + + +.. _glossary_memory: + +Memory +------ + +Memory是 :ref:`glossary_paddle` 实现 :ref:`glossary_RNN` 时候使用的一个概念。 :ref:`glossary_RNN` 即时间递归神经网络,通常要求时间步之间具有一些依赖性,即当前时间步下的神经网络依赖前一个时间步神经网络中某一个神经元输出。如下图所示。 + +.. graphviz:: glossary_rnn.dot + +上图中虚线的连接,即是跨越时间步的网络连接。:ref:`glossary_paddle` 在实现 :ref:`glossary_RNN` 的时候,将这种跨越时间步的连接用一个特殊的神经网络单元实现。这个神经网络单元就叫Memory。Memory可以缓存上一个时刻某一个神经元的输出,然后在下一个时间步输入给另一个神经元。使用Memory的 :ref:`glossary_RNN` 实现便如下图所示。 + +.. graphviz:: glossary_rnn_with_memory.dot + +使用这种方式,:ref:`glossary_paddle` 可以比较简单的判断哪些输出是应该跨越时间步的,哪些不是。 + +.. _glossary_Sequence: + +时间序列 +-------- + +时间序列(time series)是指一系列的特征数据。这些特征数据之间的顺序是有意义的。即特征的数组,而不是特征的集合。而这每一个数组元素,或者每一个系列里的特征数据,即为一个时间步(time step)。值得注意的是,时间序列、时间步的概念,并不真正的和『时间』有关。只要一系列特征数据中的『顺序』是有意义的,即为时间序列的输入。 + +举例说明,例如文本分类中,我们通常将一句话理解成一个时间序列。比如一句话中的每一个单词,会变成词表中的位置。而这一句话就可以表示成这些位置的数组。例如 :code:`[9, 2, 3, 5, 3]` 。 + +关于时间序列(time series)的更详细准确的定义,可以参考 `维基百科页面 Time series `_ 或者 `维基百科中文页面 时间序列 `_ 。 + +另外,Paddle中经常会将时间序列成为 :code:`Sequence` 。他们在Paddle的文档和API中是一个概念。 + +.. _glossary_RNN: + +RNN +--- + +RNN 在 :ref:`glossary_paddle` 的文档中,一般表示 :code:`Recurrent neural network`,即时间递归神经网络。详细介绍可以参考 `维基百科页面 Recurrent neural network `_ 或者 `中文维基百科页面 `_ 中关于时间递归神经网络的介绍。 + +RNN 一般在 :ref:`glossary_paddle` 中,指对于一个 :ref:`glossary_Sequence` 输入数据,每一个时间步之间的神经网络具有一定的相关性。例如,某一个神经元的一个输入为上一个时间步网络中某一个神经元的输出。或者,从每一个时间步来看,神经网络的网络结构中具有有向环结构。 + +.. 
_glossary_双层RNN: + +双层RNN +------- + +双层RNN顾名思义,即 :ref:`glossary_RNN` 之间有一次嵌套关系。输入数据整体上是一个时间序列,而对于每一个内层特征数据而言,也是一个时间序列。即二维数组,或者数组的数组这个概念。 而双层RNN是可以处理这种输入数据的网络结构。 + +例如,对于段落的文本分类,即将一段话进行分类。我们将一段话看成句子的数组,每个句子又是单词的数组。这便是一种双层RNN的输入数据。而将这个段落的每一句话用lstm编码成一个向量,再对每一句话的编码向量用lstm编码成一个段落的向量。再对这个段落向量进行分类,即为这个双层RNN的网络结构。 diff --git a/doc_cn/concepts/glossary_rnn.dot b/doc_cn/concepts/glossary_rnn.dot new file mode 100644 index 0000000000..2cd0fb1820 --- /dev/null +++ b/doc_cn/concepts/glossary_rnn.dot @@ -0,0 +1,42 @@ +digraph G{ + subgraph cluster_timestep0 { + label="recurrent timestep i-1" + bgcolor=lightgray + node [style=filled,color=white] + fc0_0 [label="fc 0"] + fc0_1 [label="fc 1"] + fc0_2 [label="fc 2"] + + fc0_0 -> fc0_1 + fc0_1 -> fc0_2 + } + + subgraph cluster_timestep1 { + label="recurrent timestep i" + node [style=filled]; + fc1_0 [label="fc 0"] + fc1_1 [label="fc 1"] + fc1_2 [label="fc 2"] + color=blue + + fc1_0 -> fc1_1 + fc1_1 -> fc1_2 + } + + subgraph cluster_timestep2 { + label="recurrent timestep i+1" + bgcolor=lightgray + node [style=filled,color=white] + fc2_0 [label="fc 0"] + fc2_1 [label="fc 1"] + fc2_2 [label="fc 2"] + + fc2_0 -> fc2_1 + fc2_1 -> fc2_2 + } + + + fc0_1 -> fc1_1 [style="dotted" constraint=false] + fc1_1 -> fc2_1 [style="dotted" constraint=false] + +} \ No newline at end of file diff --git a/doc_cn/concepts/glossary_rnn_with_memory.dot b/doc_cn/concepts/glossary_rnn_with_memory.dot new file mode 100644 index 0000000000..0f101ec2d8 --- /dev/null +++ b/doc_cn/concepts/glossary_rnn_with_memory.dot @@ -0,0 +1,48 @@ +digraph G{ + subgraph cluster_timestep0 { + label="recurrent timestep i-1" + bgcolor=lightgray + node [style=filled,color=white] + fc0_0 [label="fc 0"] + fc0_1 [label="fc 1"] + fc0_2 [label="fc 2"] + m0 [label="memory"] + fc0_0 -> fc0_1 + fc0_1 -> fc0_2 + fc0_1 -> m0 + m0 -> fc0_1 + } + + subgraph cluster_timestep1 { + label="recurrent timestep i" + node [style=filled]; + fc1_0 [label="fc 0"] + fc1_1 [label="fc 1"] + fc1_2 [label="fc 2"] + m1 [label="memory"] + color=blue + fc1_0 -> fc1_1 + fc1_1 -> fc1_2 + fc1_1 -> m1 + m1 -> fc1_1 + } + + subgraph cluster_timestep2 { + label="recurrent timestep i+1" + bgcolor=lightgray + node [style=filled,color=white] + fc2_0 [label="fc 0"] + fc2_1 [label="fc 1"] + fc2_2 [label="fc 2"] + m2 [label="memory"] + fc2_0 -> fc2_1 + fc2_1 -> fc2_2 + fc2_1 -> m2 + m2 -> fc2_1 + } + + + m0 -> m1 [style="dotted" constraint=false] + m1 -> m2 [style="dotted" constraint=false] + +} \ No newline at end of file diff --git a/doc_cn/index.rst b/doc_cn/index.rst index f1398206fd..fef39aa527 100644 --- a/doc_cn/index.rst +++ b/doc_cn/index.rst @@ -11,6 +11,7 @@ PaddlePaddle文档 * `使用示例 `_ * `模型配置 <../doc/ui/api/trainer_config_helpers/index.html>`_ * `集群训练 `_ +* :ref:`glossary` 开发指南 -------- @@ -22,7 +23,7 @@ PaddlePaddle文档 * `Recurrent Group教程 `_ * `单层RNN示例 <../doc/algorithm/rnn/rnn.html>`_ -* `双层RNN示例 `_ +* :ref:`algo_hrnn_rnn_api_compare` * `支持双层序列作为输入的Layer `_ 常见问题 From d8cca855e1f1b69fcd9ca53ae8bd9bc6260dcfe1 Mon Sep 17 00:00:00 2001 From: dangqingqing Date: Fri, 18 Nov 2016 18:52:42 +0800 Subject: [PATCH 004/265] Refine use_concepts.rst --- doc_cn/concepts/use_concepts.rst | 97 ++++++++++++++------------------ 1 file changed, 42 insertions(+), 55 deletions(-) diff --git a/doc_cn/concepts/use_concepts.rst b/doc_cn/concepts/use_concepts.rst index 67e98edabc..d3da9cc16b 100644 --- a/doc_cn/concepts/use_concepts.rst +++ b/doc_cn/concepts/use_concepts.rst @@ -2,16 +2,16 @@ PaddlePaddle 基本使用概念 ######################### 
-PaddlePaddle是一个神经网络学习框架。其单机进程为 :code:`paddle train`。 单机的所有设备使用,均在单机进程内调度完成。 而多机辅助进程 :code:`paddle pserver` 负责联合多个单机进程进行通信,进而充分利用集群的计算资源。 PaddlePaddle同时以 :code:`swig api` 的形式,提供训练结果模型预测的方法和自定义训练流程。 +PaddlePaddle是一个深度学习框架,同时支持单机和多机模式的系统。命令 ``paddle train`` 可启动单机模式的进程,我们称之为 ``trainer`` 进程。单机所有设备使用均在单机进程内调度完成。多机模式除了需要启动trainer进程外,还需要通过命令 ``paddle pserver`` 启动多机参数服务器进程, 我们称之为   ``pserver`` 进程。该进程负责多个单机进程间的通信,进而充分利用集群的计算资源。 PaddlePaddle同时以 ``swig api`` 的形式,提供训练结果模型预测的方法和自定义训练流程。 -下面我们会分别介绍主要进程 :code:`paddle train` 中的一些概念。这些概念会对如何使用PaddlePaddle有一定的帮助。 了解这些概念的前提是,读者已经了解 `基本的神经网络/机器学习原理和概念 `_ 。同时,如果想要了解PaddlePaddle实现中的一些概念,请参考 `PaddlePaddle 编程中的基本概念 `_ 。 +下面我们会介绍trainer进程中的一些概念,这些概念会对如何使用PaddlePaddle有一定的帮助。 了解这些概念的前提是,读者已经了解 `基本的神经网络/机器学习原理和概念 `_ 。同时,如果想要了解PaddlePaddle实现中的一些概念,请参考 `PaddlePaddle 编程中的基本概念 `_ 。 .. contents:: -PaddlePaddle 的进程模型 -======================= +系统模块 +======== -PaddlePaddle进程内嵌了一个 :code:`python` 解释器。 这个 :code:`python` 解释器负责解析用户定义的神经网络配置,和解析用户数据,并将用户数据传入给 PaddlePaddle。 +``trainer`` 进程内嵌了一个 ``python`` 解释器, 这个 ``python`` 解释器负责解析用户定义的神经网络配置;解析输入数据流,并将数据传入给 ``trainer`` 系统。 .. graphviz:: @@ -30,95 +30,84 @@ PaddlePaddle进程内嵌了一个 :code:`python` 解释器。 这个 :code:`pyth py -> data_provider [dir="back"]; } -所以,PaddlePaddle单机训练进程,:code:`paddle train` , 对于用户的主要接口语言为 python。 主要需要用户配置的两个文件为 :code:`DataProvider` 和训练文件 :code:`TrainerConfig` 。 +所以,单机训练 ``trainer`` 进程对用户的主要接口语言为Python。用户需要配置文件主要有两个:数据流提供器 ``DataProvider`` 和模型配置 ``TrainerConfig`` 。 DataProvider ============ -DataProvider是 :code:`paddle train` 的数据提供器。 它负责将用户的原始数据转换成 PaddlePaddle 可以识别的数据类型。每当 PaddlePaddle 需要新的数据训练时,都会调用 DataProvider 返回数据。 当所有数据读取完一轮后,DataProvider 便返回空数据通知 PaddlePaddle。PaddlePaddle负责在下一轮训练开始前,将DataProvider重置。 +DataProvider是 ``trainer`` 进程的数据提供器。主要负责将用户的原始数据转换成 ``trainer`` 系统可以识别的数据类型。当系统需要新的数据训练时,会调用DataProvider获取数据接口。当所有数据读取完一轮后,DataProvider返回空数据通知系统一轮数据读取结束。 ``trainer`` 在每一轮训练开始时会重置DataProvider。 -需要注意的是,DataProvider在PaddlePaddle中是被训练逻辑调用的关系, 而不是新的数据驱动训练。并且所有的 :code:`shuffle` , 和一些随机化的噪声添加,都应该在 DataProvider 阶段完成。 +需要注意的是,DataProvider是被 ``trainer`` 系统调用,而不是新数据驱动系统;数据 ``shuffle`` 和一些随机化噪声添加都应该在DataProvider中完成。 -为了方便用户使用自己的数据格式, PaddlePaddle 提供了 `PyDataProvider`_ 来处理数据。 并且在这个Provider中,PaddlePaddle的 C++ 部分接管了如何shuffle,处理 batch,GPU/CPU通信,双缓冲,异步读取等问题。 用户可以参考 `PyDataProvider`_ 的相关文档,继续深入了解 DataProvider 的使用。 +为了用户能够灵活的处理数据,PaddlePaddle提供了处理数据的Python接口(称为 `PyDataProvider`_ )。 在 ``PyDataProvider`` 中,系统C++模块接管了shuffle、处理batch、GPU和CPU通信、双缓冲、异步读取等问题,需要说明的是,一些情况下需要Python接口里处理shuffle,可以参考 `PyDataProvider`_ 的相关文档继续深入了解。 -训练文件 -======== +TrainerConfig +============= -训练文件是PaddlePaddle中配置神经网络结构、学习优化算法、数据传入方式的地方。 训练文件是一个python文件,使用命令行参数 :code:`--config` 传给 paddle 的主程序。 例如\: +模型配置是一个Python文件,主要包括神经网络结构、优化算法、数据传入方式,使用命令行参数 ``--config`` 传给``trainer``主程序。 例如\: .. code-block:: bash paddle train --config=trainer_config.py -一个典型简单的训练文件可能为 +一个简单的模型配置文件为: .. 
literalinclude:: trainer_config.py :linenos: -下面我们详细的介绍一下训练文件中各个模块的概念。 +下面我们详细的介绍一下模型配置中各个模块的概念。 trainer_config_helpers ---------------------- -PaddlePaddle的配置文件与PaddlePaddle C++端通信的最基础协议是 :code:`protobuf` 。而为了避免用户直接写比较难写的 protobuf string,我们书写了一个helpers来生成这个protobuf包。所以在文件的开始,import这些helpers函数。 +PaddlePaddle配置文件与C++模块通信的最基础协议是 ``protobuf`` 。为了避免用户直接写比较难写的protobuf string,我们通过Python代码来生成protobuf包,这就是helpers的作用。所以在文件的开始,需要import这些helpers函数。 -需要注意的是,这个 :code:`paddle.trainer_config_helpers` 包是标准的python包,这意味着用户可以选择自己喜欢的 :code:`ide` 或者编辑器来编写Paddle的配置文件,这个python包注释文档比较完善,并且考虑了IDE的代码提示与类型注释。 +需要注意的是,这个 ``paddle.trainer_config_helpers`` 包是标准的python包,这意味着用户可以选择自己喜欢的 ``IDE`` 或者编辑器来编写Paddle的配置文件,这个Python包注释文档比较完善,并提供了IDE的代码提示与类型注释。 data_sources ------------ -data_sources是配置神经网络的数据源。这里使用的函数是 :code:`define_py_data_sources2` ,这个函数是定义了使用 `PyDataProvider`_ 作为数据源。 而后缀 :code:`2` 是Paddle历史遗留问题,因为Paddle之前使用的 PyDataProvider 性能较差,所以完全重构了一个新的 `PyDataProvider`_ 。 - -data_sources里面的 train_list 和 test_list 指定的是训练文件列表和测试文件列表。 如果传入一个字符串的话,是指一个训练列表文件。这个训练列表文件中包含的是每一个训练或者测试文件的路径。如果传入一个list的话,则会默认生成一个 list 文件,再传入给 train.list 或者 test.list 。 +data_sources配置神经网络的数据源,使用的函数是 ``define_py_data_sources2`` ,这个函数是定义了使用 `PyDataProvider`_ 提供数据源。后缀 ``2`` 是Paddle历史遗留问题,因为Paddle之前使用的PyDataProvider性能问题,重构了一个新的 `PyDataProvider`_ 。 -而 :code:`module` 和 :code:`obj` 指定了 DataProvider 的模块名和函数名。 +data_sources里通过train_list和test_list指定是训练文件列表和测试文件列表。 如果传入字符串的话,是指一个数据列表文件。这个数据列表文件中包含的是每一个训练或者测试文件的路径。如果传入一个list的话,则会默认生成一个list文件,再传入给train.list或者test.list。 -更具体的使用,请参考 `PyDataProvider`_ 。 +其中``module`` 和``obj``指定了DataProvider的文件名和返回数据的函数名。更详细的使用,请参考 `PyDataProvider`_ 。 settings -------- -`settings`_ 是神经网络训练算法相关的设置项。包括学习率,batch_size,优化算法,正则方法等等。具体的使用方法请参考 `settings`_ 文档。 +`settings`_ 设置训练神经网络所使用的算法。包括学习率、batch_size、优化算法、正则方法等,具体的使用方法请参考 `settings`_ 文档。 网络配置 -------- -上述网络配置中余下的部分均是神经网络配置。第一行是定义一个名字叫 "pixel" 的 :code:`data_layer` 。每一个layer返回的都是一个 :code:`LayerOutput` 对象。 这里第一层的输出对象是 :code:`img` 。然后这个对象传输给了另一个 layer 函数, -:code:`simple_img_conv_pool` 。:code:`simple_img_conv_pool` 是一个组合层, -包括了图像的卷积 (convolution) 和池化(pooling), -并继续接了一个全连接层( :code:`fc_layer` ),然后再接了一个Softmax的全连接层。 +上述配置中余下的部分是神经网络配置,主要包括网络连接、 ``cost`` 层、评估器。 -最终,网络配置输出了 :code:`classification_cost` 。标记网络输出的函数为 -:code:`outputs` 。网络的输出是神经网络的优化目标,神经网络训练的时候,实际上就是 -要最小化这个输出。 +- 首先,定义了一个名字叫"pixel"的 ``data_layer`` ,每个layer返回的都是一个 ``LayerOutput`` 对象,比如第一层的输出对象称作 ``img`` 。 +- 然后,这个对象作为另一个layer( ``simple_img_conv_pool`` )的输入, ``simple_img_conv_pool`` 是一个组合层,包括了图像的卷积 (convolution) 和池化(pooling), +- 其次,连接到全连接层(``fc_layer``),再连接到一个含Softmax激活的全连接层。 +- 最终,连接到cost层( ``classification_cost`` ), ``classification_cost`` 默认使用多类交叉熵损失函数和分类错误率统计评估器。标记网络输出的函数为 ``outputs`` ,网络的输出是神经网络的优化目标,神经网络训练的时候,实际上就是要最小化这个输出。 -在神经网络进行预测的时候,实际上网络的输出也是通过 :code:`outputs` 标记。 +用该模型配置进行预测时,网络的输出也是通过 ``outputs`` 标记。 Layer、Projection、Operator =========================== -PaddlePaddle的网络基本上是基于Layer来配置的。所谓的Layer即是神经网络的某一层, -而神经网络的某一层,一般是封装了许多复杂操作的操作集合。比如最简单的 -:code:`fc_layer` ,也包括矩阵乘法,多输入的求和,和activation。 +PaddlePaddle的网络是基于Layer来配置的。所谓的Layer即是神经网络的某一层,一般是封装了许多复杂操作的操作集合。比如最简单的 ``fc_layer`` ,包括矩阵乘法、多输入的求和、加Bias操作、激活( ``activation`` )函数操作。 .. 
code-block:: python data = data_layer(name='data', size=200) out = fc_layer(input=data, size=200, act=TanhActivation()) -而对于更灵活配置需求,可能这样基于Layer的配置是不灵活的。于是 PaddlePaddle 提供 -了基于 Projection 或者 Operator 的配置。使用Projection和Operator需要与 -:code:`mixed_layer` 配合使用。 :code:`mixed_layer` 是将layer中的元素累加求和, -并且做一个 :code:`activation` , 而这个layer具体如何计算,是交由内部的Projection -和 Operator 定义。Projection是指含有可学习参数的操作,而Operator不含有可学习的 -参数,输入全是其他Layer的输出。 +对于更灵活配置需求,PaddlePaddle提供了基于 ``Projection`` 或者 ``Operator`` 的配置,这些需要与 ``mixed_layer`` 配合使用。 ``mixed_layer`` 是将多个输入累加求和,然后加Bias和 ``activation`` 操作。 ``mixed_layer`` 具体计算是通过内部的Projection和Operator完成。Projection含有可学习参数;而Operator不含可学习的参数,输入全是其他Layer的输出。 -例如,和 :code:`fc_layer` 同样功能的 :code:`mixed_layer` 。 +例如,和 ``fc_layer`` 同样功能的 ``mixed_layer`` 是: .. code-block:: python @@ -126,14 +115,12 @@ PaddlePaddle的网络基本上是基于Layer来配置的。所谓的Layer即是 with mixed_layer(size=200) as out: out += full_matrix_projection(input=data) -PaddlePaddle可以使用的mixed layer 配置出非常复杂的网络,甚至可以直接配置一个完整的LSTM。 -用户可以参考 `mixed_layer`_ 的相关文档进行配置。 +PaddlePaddle可以使用 ``mixed layer`` 配置出非常复杂的网络,甚至可以直接配置一个完整的LSTM。用户可以参考 `mixed_layer`_ 的相关文档进行配置。 如何利用单机的所有GPU或所有CPU核心 -================================== +=============================== -PaddlePaddle的单机进程 :code:`paddle train` 可以充分利用一台计算机上所有的GPU资 -源或者CPU。 +PaddlePaddle的单机 ``trainer`` 进程可以充分利用一台计算机上所有的GPU资源或者CPU。 如果要使用机器上多块GPU,使用如下命令即可\: @@ -145,41 +132,41 @@ PaddlePaddle的单机进程 :code:`paddle train` 可以充分利用一台计算 .. code-block:: bash - paddle train --trainer_config=4 # use 4 cpu cores. + paddle train --trainer_count=4 # use 4 cpu cores. -对于其他设置GPU的选择情况,例如选择第0、2号GPU显卡,则可以使用 :code:`CUDA_VISIBLE_DEVICES` 环境变量来选择部分的显卡。 具体可以参考连接`masking-gpus`_ 。 可以使用的命令为 +如果要指定GPU编号,例如选择第0、2号GPU,则可以设置 ``CUDA_VISIBLE_DEVICES`` 环境变量来指定特定的GPU。具体可以参考连接`masking-gpu`_ ,命令为: .. code-block:: bash - env CUDA_VISIBLE_DEVICES=0,2 paddle train --use_gpu=true --trainer_config=2 + env CUDA_VISIBLE_DEVICES=0,2 paddle train --use_gpu=true --trainer_count=2 如何利用多台机器的计算资源训练神经网络 -====================================== +=================================== -PaddlePaddle多机使用的经典方法是通过 :code:`Parameter Server` 来对多机的 :code:`paddle train` 进行同步。 而多机训练神经网络,首先要讲数据切分到不同的机器上。 切分数据文件的方式在PaddlePaddle的开源实现中并没有提供工具包。 但是切分数据并不是一件非常复杂的事情,也不是神经网络实现的重点。 +PaddlePaddle多机采用经典的 ``Parameter Server`` 架构对多个节点的 ``trainer`` 进行同步。多机训练神经网络,要讲数据切分到不同的机器上,切分数据相对简单,所以在PaddlePaddle的开源实现中并没有提供相关工具包。 -多机训练过程中,经典的拓扑结构如下\: +多机训练的经典拓扑结构如下\: .. graphviz:: pserver_topology.dot -图中每个灰色方块是一台机器,在每个机器中,先去启动一个 :code:`paddle pserver` 进程,并确定整体的端口号。可能的参数是\: +图中每个灰色方块是一台机器,在每个机器中,先启动一个 ``paddle pserver`` 进程,并指定端口号,可能的参数是\: .. code-block:: bash paddle pserver --port=5000 --num_gradient_servers=4 --nics='eth0' -这里说明系统的 :code:`paddle pserver` 的起始端口是 :code:`5000` ,并且有四个训练进程(:code:`gradient_servers`,Paddle同时将 :code:`paddle train` 进程称作 :code:`GradientServer` 。因为其为负责提供Gradient的进程)。 而对于训练进程的话,则需要在 :code:`paddle pserver` 启动之后,再在各个节点上运行如下命令\: +这里说明系统的 ``pserver`` 进程端口是 ``5000`` ,有四个训练进程(即 ``--gradient_servers=4`` ,PaddlePaddle同时将 ``trainer`` 称作 ``GradientServer`` 。因为其为负责提供Gradient)。 启动之后 ``pserver`` 进程之后,需要 ``trainer`` 训练进程,再在各个机器上运行如下命令\: .. code-block:: bash paddle train --port=5000 --pservers=192.168.100.101,192.168.100.102,192.168.100.103,192.168.100.104 --config=... 
-对于简单的多机协同使用上述方式即可。同时,pserver/train 通常在高级情况下,还有两个参数需要设置,他们是 +对于简单的多机协同训练使用上述方式即可。另外,pserver/train 通常在高级情况下,还需要设置下面两个参数\: * --ports_num\: 一个 pserver进程共绑定多少个端口用来做稠密更新。默认是1 * --ports_num_for_sparse\: 一个pserver进程共绑定多少端口用来做稀疏更新,默认是0 -使用手工指定端口数量,是因为Paddle的网络通信中,使用了 :code:`int32` 作为消息长度,比较容易在大模型下溢出。所以,在 :code:`paddle pserver` 进程中可以启动多个子线程去接受 trainer 的数据,这样单个子线程的长度就不会溢出了。但是这个值不可以调的过大,因为增加这个值,还是对性能,尤其是内存占用有一定的开销的,另外稀疏更新的端口如果太大的话,很容易某一个参数服务器没有分配到任何参数。 +使用手工指定端口数量,是因为Paddle的网络通信中,使用了 ``int32`` 作为消息长度,比较容易在大模型下溢出。所以,在 ``pserver`` 进程中可以启动多个子线程去接受trainer的数据,这样单个子线程的长度就不会溢出了。但是这个值不可以调的过大,因为增加这个值,对性能尤其是内存占用有一定的开销,另外稀疏更新的端口如果太大的话,很容易导致某一个参数服务器没有分配到任何参数。 详细的说明可以参考,使用 `集群训练Paddle`_ 。 From 0e7d5cdea2de0e5f2d2b051083f6a92956f52ca4 Mon Sep 17 00:00:00 2001 From: liaogang Date: Sun, 20 Nov 2016 14:37:50 +0800 Subject: [PATCH 005/265] Refine quick start index.rst chinese docs --- doc_cn/demo/quick_start/index.md | 545 ------------------------------ doc_cn/demo/quick_start/index.rst | 474 ++++++++++++++++++++++++++ 2 files changed, 474 insertions(+), 545 deletions(-) delete mode 100644 doc_cn/demo/quick_start/index.md create mode 100644 doc_cn/demo/quick_start/index.rst diff --git a/doc_cn/demo/quick_start/index.md b/doc_cn/demo/quick_start/index.md deleted file mode 100644 index 4d9b24ba85..0000000000 --- a/doc_cn/demo/quick_start/index.md +++ /dev/null @@ -1,545 +0,0 @@ -# PaddlePaddle快速入门教程 - -我们以文本分类问题作为背景,介绍PaddlePaddle使用流程和常用的网络基础单元的配置方法。 - -## 安装(Install) - -首先请参考安装教程安装PaddlePaddle。 - -## 使用概述(Overview) - -**文本分类问题**:对于给定的一条文本, 我们从提前给定的类别集合中选择其所属类 -别。比如通过用户对电子商务网站评论,评估产品的质量: - -- 这个显示器很棒! (好评) -- 用了两个月之后这个显示器屏幕碎了。(差评) - -每一个任务流程都可以分为如下5个基础部分。 -
![](./Pipeline.jpg)
- -1. 数据格式准备 - - 每行保存一条样本,类别Id 和文本信息用Tab间隔, 文本中的单词用空格分隔(如果不切词,则字与字之间用空格分隔),例如:```类别Id ‘\t’ 这 个 显 示 器 很 棒 !``` -2. 数据向模型传送 - - PaddlePaddle可以读取Python写的传输数据脚本,所有字符都将转换为连续整数表示的Id传给模型 -3. 网络结构(由易到难展示4种不同的网络配置) - - 逻辑回归模型 - - 词向量模型 - - 卷积模型 - - 时序模型 - - 优化算法 -4. 训练模型 -5. 预测 - -## 数据格式准备(Data Preparation) -在本问题中,我们使用[Amazon电子产品评论数据](http://jmcauley.ucsd.edu/data/amazon/), -将评论分为好评(正样本)和差评(负样本)两类。[源码](https://github.com/baidu/Paddle)的`demo/quick_start`里提供了数据下载脚本 -和预处理脚本。 - -```bash -cd demo/quick_start -./data/get_data.sh -./preprocess.sh -``` - -## 数据向模型传送(Transfer Data to Model) - -### Python数据加载脚本(Data Provider Script) - -下面dataprovider_bow.py文件给出了完整例子,主要包括两部分: - -* initalizer: 定义文本信息、类别Id的数据类型。 -* process: yield文本信息和类别Id,和initalizer里定义顺序一致。 - -```python -from paddle.trainer.PyDataProvider2 import * - -# id of the word not in dictionary -UNK_IDX = 0 - -# initializer is called by the framework during initialization. -# It allows the user to describe the data types and setup the -# necessary data structure for later use. -# `settings` is an object. initializer need to properly fill settings.input_types. -# initializer can also store other data structures needed to be used at process(). -# In this example, dictionary is stored in settings. -# `dictionay` and `kwargs` are arguments passed from trainer_config.lr.py -def initializer(settings, dictionary, **kwargs): - # Put the word dictionary into settings - settings.word_dict = dictionary - - # setting.input_types specifies what the data types the data provider - # generates. - settings.input_types = [ - # The first input is a sparse_binary_vector, - # which means each dimension of the vector is either 0 or 1. It is the - # bag-of-words (BOW) representation of the texts. - sparse_binary_vector(len(dictionary)), - # The second input is an integer. It represents the category id of the - # sample. 2 means there are two labels in the dataset. - # (1 for positive and 0 for negative) - integer_value(2)] - -# Delaring a data provider. It has an initializer 'data_initialzer'. -# It will cache the generated data of the first pass in memory, so that -# during later pass, no on-the-fly data generation will be needed. -# `setting` is the same object used by initializer() -# `file_name` is the name of a file listed train_list or test_list file given -# to define_py_data_sources2(). See trainer_config.lr.py. -@provider(init_hook=initializer, cache=CacheType.CACHE_PASS_IN_MEM) -def process(settings, file_name): - # Open the input data file. - with open(file_name, 'r') as f: - # Read each line. - for line in f: - # Each line contains the label and text of the comment, separated by \t. - label, comment = line.strip().split('\t') - - # Split the words into a list. - words = comment.split() - - # convert the words into a list of ids by looking them up in word_dict. - word_vector = [settings.word_dict.get(w, UNK_IDX) for w in words] - - # Return the features for the current comment. The first is a list - # of ids representing a 0-1 binary sparse vector of the text, - # the second is the integer id of the label. - yield word_vector, int(label) -``` - -### 配置中的数据加载定义(Data Provider in Configure) - -在模型配置中利用`define_py_data_sources2`加载数据: - -```python -from paddle.trainer_config_helpers import * - -file = "data/dict.txt" -word_dict = dict() -with open(dict_file, 'r') as f: - for i, line in enumerate(f): - w = line.strip().split()[0] - word_dict[w] = i -# define the data sources for the model. -# We need to use different process for training and prediction. 
-# For training, the input data includes both word IDs and labels. -# For prediction, the input data only includs word Ids. -define_py_data_sources2(train_list='data/train.list', - test_list='data/test.list', - module="dataprovider_bow", - obj="process", - args={"dictionary": word_dict}) -``` -* data/train.list,data/test.list: 指定训练、测试数据 -* module="dataprovider": 数据处理Python文件名 -* obj="process": 指定生成数据的函数 -* args={"dictionary": word_dict}: 额外的参数,这里指定词典 - -更详细数据格式和用例请参考 -PyDataProvider2。 - -## 网络结构(Network Architecture) -本节我们将专注于网络结构的介绍。 -
![](./PipelineNetwork.jpg)
- -我们将以基本的逻辑回归网络作为起点,并逐渐展示更加深入的功能。更详细的网络配置 -连接请参考Layer文档。 -所有配置在[源码](https://github.com/baidu/Paddle)`demo/quick_start`目录,首先列举逻辑回归网络。 - -### 逻辑回归模型(Logistic Regression) - -流程如下: -
![](./NetLR.jpg)
- -- 获取利用one-hot vector表示的每个单词,维度是词典大小 - -```python -word = data_layer(name="word", size=word_dim) -``` - -- 获取该条样本类别Id,维度是类别个数。 - -```python -label = data_layer(name="label", size=label_dim) -``` - -- 利用逻辑回归模型对该向量进行分类,同时会计算分类准确率 - -```python -# Define a fully connected layer with logistic activation (also called softmax activation). -output = fc_layer(input=word, - size=label_dim, - act_type=SoftmaxActivation()) -# Define cross-entropy classification loss and error. -classification_cost(input=output, label=label) -``` - - - input: 除过data层,每个层都有一个或多个input,多个input以list方式输入 - - size: 该层神经元个数 - - act_type: 激活函数类型 - -效果总结:我们将在后面介绍训练和预测的流程的脚本。在此为方便对比不同网络结构, -我们随时总结了各个网络的复杂度和效果。 - - -

| 网络名称 | 参数数量 | 错误率 |
| -------- | -------- | ------ |
| 逻辑回归 | 252 KB   | 8.652% |

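把上面几段配置拼在一起,一个最简化的逻辑回归配置文件大致如下。注意这只是示意性的写法:词典加载、数据源定义沿用了前文“配置中的数据加载定义”一节的代码,优化设置做了简化,实际内容请以 `demo/quick_start` 目录中的 `trainer_config.lr.py` 为准。

```python
from paddle.trainer_config_helpers import *

# 读取词典(与前文"配置中的数据加载定义"一节相同)
dict_file = "data/dict.txt"
word_dict = dict()
with open(dict_file, 'r') as f:
    for i, line in enumerate(f):
        w = line.strip().split()[0]
        word_dict[w] = i

# 指定训练/测试数据列表,以及数据处理脚本 dataprovider_bow.py 中的 process 函数
define_py_data_sources2(train_list='data/train.list',
                        test_list='data/test.list',
                        module="dataprovider_bow",
                        obj="process",
                        args={"dictionary": word_dict})

# 简化的优化设置,完整设置见后文"优化算法"一节
settings(batch_size=128,
         learning_rate=2e-3,
         learning_method=AdamOptimizer())

# 网络:词的0-1稀疏向量 -> softmax分类层 -> 交叉熵损失
word = data_layer(name="word", size=len(word_dict))
label = data_layer(name="label", size=2)
output = fc_layer(input=word, size=2, act=SoftmaxActivation())
outputs(classification_cost(input=output, label=label))
```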
- -### 词向量模型(Word Vector) - -embedding模型需要稍微改变数据提供的脚本,即`dataprovider_emb.py`,词向量模型、 -卷积模型、时序模型均使用该脚本。其中文本输入类型定义为整数时序类型integer_value_sequence。 - -``` -def initializer(settings, dictionary, **kwargs): - settings.word_dict = dictionary - settings.input_types = [ - # Define the type of the first input as sequence of integer. - # The value of the integers range from 0 to len(dictrionary)-1 - integer_value_sequence(len(dictionary)), - # Define the second input for label id - integer_value(2)] - -@provider(init_hook=initializer) -def process(settings, file_name): - ... - # omitted, it is same as the data provider for LR model -``` - -该模型依然是使用逻辑回归分类网络的框架, 只是将句子利用连续向量表示替换稀疏 -向量表示, 即对第3步进行替换。句子表示的计算更新为2步: -
![](./NetContinuous.jpg)
- -- 利用单词Id查找对应的该单词的连续表示向量(维度为word_dim), 输入N个单词,输出为N个word_dim维度向量 - -```python -emb = embedding_layer(input=word, size=word_dim) -``` - -- 将该句话包含的所有单词向量求平均得到句子的表示 - -```python -avg = pooling_layer(input=emb, pooling_type=AvgPooling()) -``` - -其它部分和逻辑回归网络结构一致。 -效果总结: - - -

| 网络名称 | 参数数量 | 错误率 |
| -------- | -------- | ------ |
| 词向量模型 | 15 MB | 8.484% |

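上文 `dataprovider_emb.py` 中被省略的 `process` 函数,大致可以写成下面这样。这是示意写法,假设 `UNK_IDX` 和 import 与前面的 `dataprovider_bow.py` 相同,实际代码请以 demo 中的文件为准。与逻辑回归版本的区别只是:这里声明的输入类型是 integer_value_sequence,因此直接把整数Id序列 yield 出去即可。

```python
@provider(init_hook=initializer)
def process(settings, file_name):
    with open(file_name, 'r') as f:
        for line in f:
            # 每行是"类别Id \t 分好词的文本"
            label, comment = line.strip().split('\t')
            words = comment.split()
            # 把每个词映射为词典中的整数Id,词典外的词用 UNK_IDX 表示
            word_slot = [settings.word_dict.get(w, UNK_IDX) for w in words]
            # 第一个输出是整数Id序列(integer_value_sequence),第二个是类别Id
            yield word_slot, int(label)
```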
- -### 卷积模型(Convolution) -卷积网络是一种特殊的从词向量表示到句子表示的方法, 也就是将词向量模型额步 -骤3-2进行进一步演化, 变为3个新的子步骤。 -
![](./NetConv.jpg)
- -文本卷积分为三个步骤: -1. 获取每个单词左右各k个近邻, 拼接成一个新的向量表示; -2. 对该表示进行非线性变换 (例如Sigmoid变换), 成为维度为hidden_dim的新的向量; -3. 在每个维度上取出在该句话新的向量集合上该维度的最大值作为最后的句子表示向量。 这3个子步骤可配置为: - -```python -text_conv = sequence_conv_pool(input=emb, - context_start=k, - context_len=2 * k + 1) -``` - -效果总结: - - -

| 网络名称 | 参数数量 | 错误率 |
| -------- | -------- | ------ |
| 卷积模型 | 16 MB | 5.628% |

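补充说明一下第1步里的窗口大小:如果取 k=2,每个单词会和它左右各2个近邻拼接,即 context_len = 2*k+1 = 5,卷积窗口一共覆盖5个词。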
### 时序模型(Time Sequence)
![](./NetRNN.jpg)
- -时序模型即为RNN模型, 包括简单的RNN模型、GRU模型、LSTM模型等。 - -- GRU模型配置: - -```python -gru = simple_gru(input=emb, size=gru_size) -``` - -- LSTM模型配置: - -```python -lstm = simple_lstm(input=emb, size=lstm_size) -``` - -针对本问题,我们采用单层LSTM模型,并使用了Dropout,效果总结: - - -

| 网络名称 | 参数数量 | 错误率 |
| -------- | -------- | ------ |
| 时序模型 | 16 MB | 4.812% |

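上面只列出了 `simple_gru` / `simple_lstm` 这一步。LSTM 的输出仍然是一个时间序列,要接到分类层,还需要先把它压缩成固定长度的向量。下面是一种示意写法(取最后一个时刻的输出;`emb`、`lstm_size` 等变量沿用上文的定义,Dropout 等细节这里略去,实际配置请以 `trainer_config.lstm.py` 为准):

```python
lstm = simple_lstm(input=emb, size=lstm_size)

# 取序列最后一个时刻的输出作为整句话的表示,
# 也可以改用 pooling_layer(input=lstm, pooling_type=MaxPooling()) 对所有时刻做池化
lstm_last = last_seq(input=lstm)

output = fc_layer(input=lstm_last, size=2, act=SoftmaxActivation())
outputs(classification_cost(input=output, label=data_layer(name="label", size=2)))
```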
- -## 优化算法(Optimization Algorithm) -优化算法包括 -Momentum, RMSProp,AdaDelta,AdaGrad,ADAM,Adamax等,这里采用Adam优化方法,加了L2正则和梯度截断。 - -```python -settings(batch_size=128, - learning_rate=2e-3, - learning_method=AdamOptimizer(), - regularization=L2Regularization(8e-4), - gradient_clipping_threshold=25) -``` - -## 训练模型(Training Model) -在完成了数据和网络结构搭建之后, 我们进入到训练部分。 -
![](./PipelineTrain.jpg)
- -训练脚本:我们将训练的命令行保存在了 `train.sh`文件中。训练时所需设置的主要参数如下: - -```bash -paddle train \ ---config=trainer_config.py \ ---log_period=20 \ ---save_dir=./output \ ---num_passes=15 \ ---use_gpu=false -``` -这里没有介绍多机分布式训练,可以参考分布式训练的demo学习如何进行多机训练。 - -## 预测(Prediction) -可以使用训练好的模型评估带有label的验证集,也可以预测没有label的测试集。 -
![](./PipelineTest.jpg)
- -测试脚本如下,将会测试配置文件中test.list指定的数据。 - -```bash -paddle train \ ---use_gpu=false \ ---job=test \ ---init_model_path=./output/pass-0000x -``` - -可以参考Python API预测 -教程,或其他demo的Python预测过程。也可以通过如下方式预测。 - -预测脚本(`predict.sh`): - -```bash -model="output/pass-00003" -paddle train \ - --config=trainer_config.lstm.py \ - --use_gpu=false \ - --job=test \ - --init_model_path=$model \ - --config_args=is_predict=1 \ - --predict_output_dir=. \ - -mv rank-00000 result.txt -``` -这里以`output/pass-00003`为例进行预测,用户可以根据训练log选择test结果最好的模型来预测。与训练网络配置不同的是:无需label相关的层,指定outputs输出概率层(softmax输出), -指定batch_size=1,数据传输无需label数据,预测数据指定test_list的位置。 - -预测结果以文本的形式保存在`result.txt`中,一行为一个样本,格式如下: - -``` -预测ID;ID为0的概率 ID为1的概率 -预测ID;ID为0的概率 ID为1的概率 -``` - -``` -is_predict = get_config_arg('is_predict', bool, False) -trn = 'data/train.list' if not is_predict else None -tst = 'data/test.list' if not is_predict else 'data/pred.list' -obj = 'process' if not is_predict else 'process_pre' -batch_size = 128 if not is_predict else 1 -if is_predict: - maxid = maxid_layer(output) - outputs([maxid,output]) -else: - label = data_layer(name="label", size=2) - cls = classification_cost(input=output, label=label) - outputs(cls) -``` - -## 总体效果总结(Summary) -这些流程中的数据下载、网络配置、训练脚本在`/demo/quick_start`目录,我们在此总 -结上述网络结构在Amazon-Elec测试集(25k)上的效果: - -

| 网络名称 | 参数数量 | 错误率 | 配置文件 |
| -------- | -------- | ------ | -------- |
| 逻辑回归模型 | 252 KB | 8.652% | trainer_config.lr.py |
| 词向量模型 | 15 MB | 8.484% | trainer_config.emb.py |
| 卷积模型 | 16 MB | 5.628% | trainer_config.cnn.py |
| 时序模型 | 16 MB | 4.812% | trainer_config.lstm.py |

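上表中的四个配置文件都在 `demo/quick_start` 目录下,切换模型时只需要把训练命令(见前文 `train.sh`)里的 `--config` 参数换成对应的文件即可,例如 `--config=trainer_config.cnn.py`,其余参数不变。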
- -## 附录(Appendix) -### 命令行参数(Command Line Argument) - -* \--config:网络配置 -* \--save_dir:模型存储路径 -* \--log_period:每隔多少batch打印一次日志 -* \--num_passes:训练轮次,一个pass表示过一遍所有训练样本 -* \--config_args:命令指定的参数会传入网络配置中。 -* \--init_model_path:指定初始化模型路径,可用在测试或训练时指定初始化模型。 - -默认一个pass保存一次模型,也可以通过saving_period_by_batches设置每隔多少batch保存一次模型。 -可以通过show_parameter_stats_period设置打印参数信息等。 -其他参数请参考令行参数文档。 - -### 输出日志(Log) - -``` -TrainerInternal.cpp:160] Batch=20 samples=2560 AvgCost=0.628761 CurrentCost=0.628761 Eval: classification_error_evaluator=0.304297 CurrentEval: classification_error_evaluator=0.304297 -``` -模型训练会看到这样的日志,详细的参数解释如下面表格: -

| 名称 | 解释 |
| ---- | ---- |
| Batch=20 | 表示过了20个batch |
| samples=2560 | 表示过了2560个样本 |
| AvgCost | 每个pass的第0个batch到当前batch所有样本的平均cost |
| CurrentCost | 当前log_period个batch所有样本的平均cost |
| Eval: classification_error_evaluator | 每个pass的第0个batch到当前batch所有样本的平均分类错误率 |
| CurrentEval: classification_error_evaluator | 当前log_period个batch所有样本的平均分类错误率 |

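举例来说(假设每个batch的样本数相同),如果前20个batch的AvgCost是0.63,到第40个batch时AvgCost降到0.60,那么第21到40个batch这 log_period 个batch的CurrentCost约为 2*0.60 - 0.63 = 0.57。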
diff --git a/doc_cn/demo/quick_start/index.rst b/doc_cn/demo/quick_start/index.rst new file mode 100644 index 0000000000..9dabf1f661 --- /dev/null +++ b/doc_cn/demo/quick_start/index.rst @@ -0,0 +1,474 @@ +PaddlePaddle快速入门教程 +======================== + +我们将以 `文本分类问题 `_ 为例, +介绍PaddlePaddle的基本使用方法。 + +安装 +==== + +请参考 `安装教程 <../../build_and_install/index.html>`_ 安装PaddlePaddle。 + +使用概述 +======== + +**文本分类问题**:对于给定的一条文本,我们从提前给定的类别集合中选择其所属类别。 + +比如, 在购物网站上,通过查看买家对某个产品的评价反馈, 评估该产品的质量。 + +- 这个显示器很棒! (好评) +- 用了两个月之后这个显示器屏幕碎了。(差评) + +使用PaddlePaddle, 每一个任务流程都可以被划分为如下五个步骤。 + + .. image:: Pipeline.jpg + :align: center + :scale: 80% + +1. 数据格式准备 + - 本例每行保存一条样本,类别Id和文本信息用 ``Tab`` 间隔,文本中的单词用空格分隔(如果不切词,则字与字之间用空格分隔),例如:``类别Id '\t' 这 个 显 示 器 很 棒 !`` +2. 向系统传送数据 + - PaddlePaddle可以执行用户的python脚本程序来读取各种格式的数据文件。 + - 本例的所有字符都将转换为连续整数表示的Id传给模型。 +3. 描述网络结构和优化算法 + - 本例由易到难展示4种不同的文本分类网络配置:逻辑回归模型,词向量模型,卷积模型,时序模型。 + - 常用优化算法包括Momentum, RMSProp,AdaDelta,AdaGrad,Adam,Adamax等,本例采用Adam优化方法,加了L2正则和梯度截断。 +4. 训练模型 +5. 应用模型 + +数据格式准备 +------------ + +接下来我们将展示如何用PaddlePaddle训练一个文本分类模型,将 `Amazon电子产品评论数据 `_ 分为好评(正样本)和差评(负样本)两种类别。 +`源代码 `_ 的 ``demo/quick_start`` 目录里提供了该数据的下载脚本和预处理脚本。 + +.. code-block:: bash + + cd demo/quick_start + ./data/get_data.sh + ./preprocess.sh + +向系统传送数据 +============== + +Python数据读取脚本 +------------------ + +`DataProvider <../../ui/data_provider/index.html>`_ 是PaddlePaddle负责提供数据的模块。``DataProvider`` 主要职责在于将训练数据传入内存或者显存,让模型能够得到训练更新,其包括两个函数: + +* initializer:PaddlePaddle会在调用读取数据的Python脚本之前,先调用initializer函数。在下面例子里,我们在initialzier函数里初始化词表,并且在随后的读取数据过程中填充词表。 +* process:PaddlePaddle调用process函数来读取数据。每次读取一条数据后,process函数会用yield语句输出这条数据,从而能够被PaddlePaddle 捕获 (harvest)。 + +``dataprovider_bow.py`` 文件给出了完整例子: + +.. code-block:: python + + from paddle.trainer.PyDataProvider2 import * + + # id of the word not in dictionary + UNK_IDX = 0 + + # initializer is called by the framework during initialization. + # It allows the user to describe the data types and setup the + # necessary data structure for later use. + # `settings` is an object. initializer need to properly fill settings.input_types. + # initializer can also store other data structures needed to be used at process(). + # In this example, dictionary is stored in settings. + # `dictionay` and `kwargs` are arguments passed from trainer_config.lr.py + def initializer(settings, dictionary, **kwargs): + # Put the word dictionary into settings + settings.word_dict = dictionary + + # setting.input_types specifies what the data types the data provider + # generates. + settings.input_types = [ + # The first input is a sparse_binary_vector, + # which means each dimension of the vector is either 0 or 1. It is the + # bag-of-words (BOW) representation of the texts. + sparse_binary_vector(len(dictionary)), + # The second input is an integer. It represents the category id of the + # sample. 2 means there are two labels in the dataset. + # (1 for positive and 0 for negative) + integer_value(2)] + + # Delaring a data provider. It has an initializer 'data_initialzer'. + # It will cache the generated data of the first pass in memory, so that + # during later pass, no on-the-fly data generation will be needed. + # `setting` is the same object used by initializer() + # `file_name` is the name of a file listed train_list or test_list file given + # to define_py_data_sources2(). See trainer_config.lr.py. + @provider(init_hook=initializer, cache=CacheType.CACHE_PASS_IN_MEM) + def process(settings, file_name): + # Open the input data file. 
+ with open(file_name, 'r') as f: + # Read each line. + for line in f: + # Each line contains the label and text of the comment, separated by \t. + label, comment = line.strip().split('\t') + + # Split the words into a list. + words = comment.split() + + # convert the words into a list of ids by looking them up in word_dict. + word_vector = [settings.word_dict.get(w, UNK_IDX) for w in words] + + # Return the features for the current comment. The first is a list + # of ids representing a 0-1 binary sparse vector of the text, + # the second is the integer id of the label. + yield word_vector, int(label) + +配置中的数据加载定义 +-------------------- + +在模型配置中通过 ``define_py_data_sources2`` 接口来加载数据: + +.. code-block:: python + + from paddle.trainer_config_helpers import * + + file = "data/dict.txt" + word_dict = dict() + with open(dict_file, 'r') as f: + for i, line in enumerate(f): + w = line.strip().split()[0] + word_dict[w] = i + # define the data sources for the model. + # We need to use different process for training and prediction. + # For training, the input data includes both word IDs and labels. + # For prediction, the input data only includs word Ids. + define_py_data_sources2(train_list='data/train.list', + test_list='data/test.list', + module="dataprovider_bow", + obj="process", + args={"dictionary": word_dict}) + + +以下是对上述数据加载的解释: + +- data/train.list,data/test.list: 指定训练数据和测试数据 +- module="dataprovider_bow": 数据处理的Python脚本文件名 +- obj="process": 指定生成数据的函数 +- args={"dictionary": word_dict}: 额外的参数,这里指定词典 + +更详细数据格式和用例请参考 `PyDataProvider2 <../../ui/data_provider/pydataprovider2.html>`_ 。 + +模型网络结构 +============ + +本小节我们将介绍模型网络结构。 + + .. image:: PipelineNetwork.jpg + :align: center + :scale: 80% + + +我们将以基本的逻辑回归网络作为起点,并逐渐展示更加深入的功能。更详细的网络配置连接请参考 `Layer文档 <../../../doc/layer.html>`_ 。 +所有配置都在 `源代码 `_ 的 ``demo/quick_start`` 目录下。 + +逻辑回归模型 +------------ + +具体流程如下: + + .. image:: NetLR.jpg + :align: center + :scale: 80% + +- 获取利用one-hot vector表示的每个单词,维度是词典大小 + + .. code-block:: python + + word = data_layer(name="word", size=word_dim) + +- 获取该条样本类别Id,维度是类别个数。 + + .. code-block:: python + + label = data_layer(name="label", size=label_dim) + +- 利用逻辑回归模型对该向量进行分类,同时会计算分类准确率 + + .. code-block:: python + + # Define a fully connected layer with logistic activation (also called softmax activation). + output = fc_layer(input=word, + size=label_dim, + act_type=SoftmaxActivation()) + # Define cross-entropy classification loss and error. + classification_cost(input=output, label=label) + + + - input: 除过data层,每个层都有一个或多个input,多个input以list方式输入 + - size: 该层神经元个数 + - act_type: 激活函数类型 + +**效果总结**:我们将在后面介绍训练和预测流程的脚本。在此为方便对比不同网络结构,我们总结了各个网络的复杂度和效果。 + + ===================== =============================== ================= + 网络名称 参数数量 错误率 + ===================== =============================== ================= + 逻辑回归 252 KB 8.652 % + ===================== =============================== ================= + +词向量模型 +---------- + +embedding模型需要稍微改变数据提供的脚本,即 ``dataprovider_emb.py``,词向量模型、 +卷积模型、时序模型均使用该脚本。其中文本输入类型定义为整数时序类型integer_value_sequence。 + +.. code-block:: python + + def initializer(settings, dictionary, **kwargs): + settings.word_dict = dictionary + settings.input_types = [ + # Define the type of the first input as sequence of integer. + # The value of the integers range from 0 to len(dictrionary)-1 + integer_value_sequence(len(dictionary)), + # Define the second input for label id + integer_value(2)] + + @provider(init_hook=initializer) + def process(settings, file_name): + ... 
+ # omitted, it is same as the data provider for LR model + +该模型依然是使用逻辑回归分类网络的框架, 只是将句子利用连续向量表示替换稀疏 +向量表示, 即对第3步进行替换。句子表示的计算更新为2步: + +.. image:: NetContinuous.jpg + :align: center + :scale: 80% + +- 利用单词Id查找对应的该单词的连续表示向量(维度为word_dim), 输入N个单词,输出为N个word_dim维度向量 + + .. code-block:: python + + emb = embedding_layer(input=word, size=word_dim) + +- 将该句话包含的所有单词向量求平均得到句子的表示 + + .. code-block:: python + + avg = pooling_layer(input=emb, pooling_type=AvgPooling()) + +其它部分和逻辑回归网络结构一致。 + +**效果总结:** + + ===================== =============================== ================== + 网络名称 参数数量 错误率 + ===================== =============================== ================== + 词向量模型 15 MB 8.484 % + ===================== =============================== ================== + +卷积模型 +----------- + +卷积网络是一种特殊的从词向量表示到句子表示的方法, 也就是将词向量模型额步 +骤3-2进行进一步演化, 变为3个新的子步骤。 + +.. image:: NetConv.jpg + :align: center + :scale: 80% + +文本卷积分为三个步骤: + +1. 获取每个单词左右各k个近邻, 拼接成一个新的向量表示; + +2. 对该表示进行非线性变换 (例如Sigmoid变换), 成为维度为hidden_dim的新的向量; + +3. 在每个维度上取出在该句话新的向量集合上该维度的最大值作为最后的句子表示向量。 这3个子步骤可配置为: + +.. code-block:: python + + text_conv = sequence_conv_pool(input=emb, + context_start=k, + context_len=2 * k + 1) + +**效果总结:** + + ===================== =============================== ======================== + 网络名称 参数数量 错误率 + ===================== =============================== ======================== + 卷积模型 16 MB 5.628 % + ===================== =============================== ======================== + +时序模型 +---------- + +.. image:: NetRNN.jpg + :align: center + :scale: 80% + +时序模型即为RNN模型, 包括简单的RNN模型、GRU模型、LSTM模型等。 + +- GRU模型配置: + + .. code-block:: python + + gru = simple_gru(input=emb, size=gru_size) + + +- LSTM模型配置: + + .. code-block:: python + + lstm = simple_lstm(input=emb, size=lstm_size) + +针对本问题,我们采用单层LSTM模型,并使用了Dropout,**效果总结:** + + ===================== =============================== ========================= + 网络名称 参数数量 错误率 + ===================== =============================== ========================= + 时序模型 16 MB 4.812 % + ===================== =============================== ========================= + +优化算法 +========= + +`优化算法 <../../../doc/ui/trainer_config_helpers_api.html#module-paddle.trainer_config_helpers.optimizers>`_ 包括 +Momentum, RMSProp,AdaDelta,AdaGrad,ADAM,Adamax等,这里采用Adam优化方法,加了L2正则和梯度截断。 + +.. code-block:: python + + settings(batch_size=128, + learning_rate=2e-3, + learning_method=AdamOptimizer(), + regularization=L2Regularization(8e-4), + gradient_clipping_threshold=25) + +训练模型 +========= + +在完成了数据和网络结构搭建之后, 我们进入到训练部分。 + +.. image:: PipelineTrain.jpg + :align: center + :scale: 80% + +训练脚本:我们将训练的命令行保存在了 ``train.sh`` 文件中。训练时所需设置的主要参数如下: + + .. code-block:: bash + + paddle train \ + --config=trainer_config.py \ + --log_period=20 \ + --save_dir=./output \ + --num_passes=15 \ + --use_gpu=false + +这里没有介绍多机分布式训练,可以参考 `分布式训练 <../../cluster/index.html>`_ 的demo学习如何进行多机训练。 + +预测 +===== + +可以使用训练好的模型评估带有label的验证集,也可以预测没有label的测试集。 + +.. image:: PipelineTest.jpg + :align: center + :scale: 80% + +测试脚本如下,将会测试配置文件中test.list指定的数据。 + + .. code-block:: bash + + paddle train \ + --use_gpu=false \ + --job=test \ + --init_model_path=./output/pass-0000x + +可以参考 `Python API预测 <../../ui/predict/swig_py_paddle.html>`_ +教程,或其他 `demo <../../demo/index.html>`_ 的Python预测过程。也可以通过如下方式预测。 + +预测脚本(``predict.sh``): + + .. code-block:: bash + + model="output/pass-00003" + paddle train \ + --config=trainer_config.lstm.py \ + --use_gpu=false \ + --job=test \ + --init_model_path=$model \ + --config_args=is_predict=1 \ + --predict_output_dir=. 
\ + + mv rank-00000 result.txt + +这里以 ``output/pass-00003`` 为例进行预测,用户可以根据训练log选择test结果最好的模型来预测。与训练网络配置不同的是:无需label相关的层,指定outputs输出概率层(softmax输出), +指定batch_size=1,数据传输无需label数据,预测数据指定test_list的位置。 + +预测结果以文本的形式保存在 ``result.txt`` 中,一行为一个样本,格式如下: + + .. code-block:: bash + + 预测ID;ID为0的概率 ID为1的概率 + 预测ID;ID为0的概率 ID为1的概率 + + .. code-block:: python + + is_predict = get_config_arg('is_predict', bool, False) + trn = 'data/train.list' if not is_predict else None + tst = 'data/test.list' if not is_predict else 'data/pred.list' + obj = 'process' if not is_predict else 'process_pre' + batch_size = 128 if not is_predict else 1 + if is_predict: + maxid = maxid_layer(output) + outputs([maxid,output]) + else: + label = data_layer(name="label", size=2) + cls = classification_cost(input=output, label=label) + outputs(cls) + +总体效果总结 +============== + +这些流程中的数据下载、网络配置、训练脚本在 ``/demo/quick_start`` 目录,我们在此总 +结上述网络结构在Amazon-Elec测试集(25k)上的效果: + + ===================== =============================== ============= ================================== + 网络名称 参数数量 错误率 配置文件 + ===================== =============================== ============= ================================== + 逻辑回归模型 252 KB 8.652% trainer_config.lr.py + 词向量模型 15 MB 8.484% trainer_config.emb.py + 卷积模型 16 MB 5.628% trainer_config.cnn.py + 时序模型 16 MB 4.812% trainer_config.lstm.py + ===================== =============================== ============= ================================== + + +附录 +===== + +命令行参数 +---------- + +* \--config:网络配置 +* \--save_dir:模型存储路径 +* \--log_period:每隔多少batch打印一次日志 +* \--num_passes:训练轮次,一个pass表示过一遍所有训练样本 +* \--config_args:命令指定的参数会传入网络配置中。 +* \--init_model_path:指定初始化模型路径,可用在测试或训练时指定初始化模型。 + +默认一个pass保存一次模型,也可以通过saving_period_by_batches设置每隔多少batch保存一次模型。 +可以通过show_parameter_stats_period设置打印参数信息等。 +其他参数请参考 `命令行参数文档 <../../ui/index.html#command-line-argument>`_ 。 + +输出日志 +--------- + +.. code-block:: bash + + TrainerInternal.cpp:160] Batch=20 samples=2560 AvgCost=0.628761 CurrentCost=0.628761 Eval: classification_error_evaluator=0.304297 CurrentEval: classification_error_evaluator=0.304297 + +模型训练会看到这样的日志,详细的参数解释如下面表格: + + =========================================== ========================================================== + 名称 解释 + =========================================== ========================================================== + Batch=20 表示过了20个batch + samples=2560 表示过了2560个样本 + AvgCost 每个pass的第0个batch到当前batch所有样本的平均cost + CurrentCost 当前log_period个batch所有样本的平均cost + Eval: classification_error_evaluator 每个pass的第0个batch到当前batch所有样本的平均分类错误率 + CurrentEval: classification_error_evaluator 当前log_period个batch所有样本的平均分类错误率 + =========================================== ========================================================== From a49d1d9529e14638f84c22289b3eff8caf45da80 Mon Sep 17 00:00:00 2001 From: Yu Yang Date: Sat, 19 Nov 2016 21:37:58 +0800 Subject: [PATCH 006/265] Refine the original hrnn documentation. 
--- doc_cn/algorithm/rnn/hierarchical-rnn.rst | 179 ------------------ doc_cn/algorithm/rnn/hrnn_demo.rst | 2 +- doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst | 98 +++++----- doc_cn/concepts/glossary.rst | 34 ++++ doc_cn/concepts/use_concepts.rst | 2 + doc_cn/conf.py.in | 2 +- paddle/gserver/tests/sequenceGen.py | 20 +- paddle/gserver/tests/sequence_nest_rnn.conf | 5 +- 8 files changed, 98 insertions(+), 244 deletions(-) delete mode 100644 doc_cn/algorithm/rnn/hierarchical-rnn.rst diff --git a/doc_cn/algorithm/rnn/hierarchical-rnn.rst b/doc_cn/algorithm/rnn/hierarchical-rnn.rst deleted file mode 100644 index 7c81ce8c67..0000000000 --- a/doc_cn/algorithm/rnn/hierarchical-rnn.rst +++ /dev/null @@ -1,179 +0,0 @@ -################# -双层RNN配置与示例 -################# - -我们在 :code:`paddle/gserver/tests/test_RecurrentGradientMachine` 单测中,通过多组语义相同的单双层RNN配置,讲解如何使用双层RNN。 - -示例1:双进双出,subseq间无memory -================================= - -配置:单层RNN(:code:`sequence_layer_group`)和双层RNN(:code:`sequence_nest_layer_group`),语义完全相同。 - -读取双层序列的方法 ------------------- - -首先,我们看一下单双层序列的不同数据组织形式(您也可以采用别的组织形式)\: - -- 单层序列的数据( :code:`Sequence/tour_train_wdseg`)如下,一共有10个样本。每个样本由两部分组成,一个label(此处都为2)和一个已经分词后的句子。 - -.. literalinclude:: ../../../paddle/gserver/tests/Sequence/tour_train_wdseg - :language: text - - -- 双层序列的数据( :code:`Sequence/tour_train_wdseg.nest`)如下,一共有4个样本。样本间用空行分开,代表不同的双层序列,序列数据和上面的完全一样。每个样本的子句数分别为2,3,2,3。 - -.. literalinclude:: ../../../paddle/gserver/tests/Sequence/tour_train_wdseg.nest - :language: text - -其次,我们看一下单双层序列的不同dataprovider(见 :code:`sequenceGen.py` ): - -- 单层序列的dataprovider如下: - - - word_slot是integer_value_sequence类型,代表单层序列。 - - label是integer_value类型,代表一个向量。 - -.. literalinclude:: ../../../paddle/gserver/tests/sequenceGen.py - :language: python - :lines: 21-39 - -- 双层序列的dataprovider如下: - - - word_slot是integer_value_sub_sequence类型,代表双层序列。 - - label是integer_value_sequence类型,代表单层序列,即一个子句一个label。注意:也可以为integer_value类型,代表一个向量,即一个句子一个label。通常根据任务需求进行不同设置。 - - 关于dataprovider中input_types的详细用法,参见PyDataProvider2。 - -.. literalinclude:: ../../../paddle/gserver/tests/sequenceGen.py - :language: python - :lines: 42-71 - -模型中的配置 ------------- - -首先,我们看一下单层序列的配置(见 :code:`sequence_layer_group.conf`)。注意:batchsize=5表示一次过5句单层序列,因此2个batch就可以完成1个pass。 - -.. literalinclude:: ../../../paddle/gserver/tests/sequence_layer_group.conf - :language: python - :lines: 38-63 - - -其次,我们看一下语义相同的双层序列配置(见 :code:`sequence_nest_layer_group.conf` ),并对其详细分析: - -- batchsize=2表示一次过2句双层序列。但从上面的数据格式可知,2句双层序列和5句单层序列的数据完全一样。 -- data_layer和embedding_layer不关心数据是否是序列格式,因此两个配置在这两层上的输出是一样的。 -- lstmemory\: - - - 单层序列过了一个mixed_layer和lstmemory_group。 - - 双层序列在同样的mixed_layer和lstmemory_group外,直接加了一层group。由于这个外层group里面没有memory,表示subseq间不存在联系,即起到的作用仅仅是把双层seq拆成单层,因此双层序列过完lstmemory的输出和单层的一样。 - -- last_seq\: - - - 单层序列直接取了最后一个元素 - - 双层序列首先(last_seq层)取了每个subseq的最后一个元素,将其拼接成一个新的单层序列;接着(expand_layer层)将其扩展成一个新的双层序列,其中第i个subseq中的所有向量均为输入的单层序列中的第i个向量;最后(average_layer层)取了每个subseq的平均值。 - - 分析得出:第一个last_seq后,每个subseq的最后一个元素就等于单层序列的最后一个元素,而expand_layer和average_layer后,依然保持每个subseq最后一个元素的值不变(这两层仅是为了展示它们的用法,实际中并不需要)。因此单双层序列的输出是一样旳。 - -.. literalinclude:: ../../../paddle/gserver/tests/sequence_nest_layer_group.conf - :language: python - :lines: 38-84 - -示例2:双进双出,subseq间有memory -================================= - -配置:单层RNN( :code:`sequence_rnn.conf` ),双层RNN( :code:`sequence_nest_rnn.conf` 和 :code:`sequence_nest_rnn_readonly_memory.conf` ),语义完全相同。 - -读取双层序列的方法 ------------------- - -我们看一下单双层序列的不同数据组织形式和dataprovider(见 :code:`rnn_data_provider.py`) - -.. 
literalinclude:: ../../../paddle/gserver/tests/rnn_data_provider.py - :language: python - :lines: 20-32 - -- 单层序列:有两句,分别为[1,3,2,4,5,2]和[0,2,2,5,0,1,2]。 -- 双层序列:有两句,分别为[[1,3,2],[4,5,2]](2个子句)和[[0,2],[2,5],[0,1,2]](3个子句)。 -- 单双层序列的label都分别是0和1 - -模型中的配置 ------------- - -我们选取单双层序列配置中的不同部分,来对比分析两者语义相同的原因。 - -- 单层序列:过了一个很简单的recurrent_group。每一个时间步,当前的输入y和上一个时间步的输出rnn_state做了一个全链接。 - -.. literalinclude:: ../../../paddle/gserver/tests/sequence_rnn.conf - :language: python - :lines: 36-48 - -- 双层序列,外层memory是一个元素: - - - 内层inner_step的recurrent_group和单层序列的几乎一样。除了boot_layer=outer_mem,表示将外层的outer_mem作为内层memory的初始状态。外层outer_step中,outer_mem是一个子句的最后一个向量,即整个双层group是将前一个子句的最后一个向量,作为下一个子句memory的初始状态。 - - 从输入数据上看,单双层序列的句子是一样的,只是双层序列将其又做了子序列划分。因此双层序列的配置中,必须将前一个子句的最后一个元素,作为boot_layer传给下一个子句的memory,才能保证和单层序列的配置中“每一个时间步都用了上一个时间步的输出结果”一致。 - -.. literalinclude:: ../../../paddle/gserver/tests/sequence_nest_rnn.conf - :language: python - :lines: 39-66 - -- 双层序列,外层memory是单层序列: - - - 由于外层每个时间步返回的是一个子句,这些子句的长度往往不等长。因此当外层有is_seq=True的memory时,内层是**无法直接使用**它的,即内层memory的boot_layer不能链接外层的这个memory。 - - 如果内层memory想**间接使用**这个外层memory,只能通过`pooling_layer`、`last_seq`或`first_seq`这三个layer将它先变成一个元素。但这种情况下,外层memory必须有boot_layer,否则在第0个时间步时,由于外层memory没有任何seq信息,因此上述三个layer的前向会报出“**Check failed: input.sequenceStartPositions**”的错误。 - -示例3:双进双出,输入不等长 -=========================== - -.. role:: red - -.. raw:: html - - - -**输入不等长** 是指recurrent_group的多个输入在各时刻的长度可以不相等, 但需要指定一个和输出长度一致的input,用 :red:`targetInlink` 表示。参考配置:单层RNN(:code:`sequence_rnn_multi_unequalength_inputs.conf`),双层RNN(:code:`sequence_nest_rnn_multi_unequalength_inputs.conf`) - -读取双层序列的方法 ------------------- - -我们看一下单双层序列的数据组织形式和dataprovider(见 :code:`rnn_data_provider.py` ) - -.. literalinclude:: ../../../paddle/gserver/tests/rnn_data_provider.py - :language: python - :lines: 69-97 - -data2 中有两个样本,每个样本有两个特征, 记fea1, fea2。 - -- 单层序列:两个样本分别为[[1, 2, 4, 5, 2], [5, 4, 1, 3, 1]] 和 [[0, 2, 2, 5, 0, 1, 2], [1, 5, 4, 2, 3, 6, 1]] -- 双层序列:两个样本分别为 - - - **样本1**\:[[[1, 2], [4, 5, 2]], [[5, 4, 1], [3, 1]]]。fea1和fea2都分别有2个子句,fea1=[[1, 2], [4, 5, 2]], fea2=[[5, 4, 1], [3, 1]] - - **样本2**\:[[[0, 2], [2, 5], [0, 1, 2]],[[1, 5], [4], [2, 3, 6, 1]]]。fea1和fea2都分别有3个子句, fea1=[[0, 2], [2, 5], [0, 1, 2]], fea2=[[1, 5], [4], [2, 3, 6, 1]]。
- - **注意**\:每个样本中,各特征的子句数目需要相等。这里说的“双进双出,输入不等长”是指fea1在i时刻的输入的长度可以不等于fea2在i时刻的输入的长度。如对于第1个样本,时刻i=2, fea1[2]=[4, 5, 2],fea2[2]=[3, 1],3≠2。 - -- 单双层序列中,两个样本的label都分别是0和1 - -模型中的配置 ------------- - -单层RNN( :code:`sequence_rnn_multi_unequalength_inputs.conf`)和双层RNN( :code:`v.conf`)两个模型配置达到的效果完全一样,区别只在于输入为单层还是双层序列,现在我们来看它们内部分别是如何实现的。 - -- 单层序列\: - - - 过了一个简单的recurrent_group。每一个时间步,当前的输入y和上一个时间步的输出rnn_state做了一个全连接,功能与示例2中`sequence_rnn.conf`的`step`函数完全相同。这里,两个输入x1,x2分别通过calrnn返回最后时刻的状态。结果得到的encoder1_rep和encoder2_rep分别是单层序列,最后取encoder1_rep的最后一个时刻和encoder2_rep的所有时刻分别相加得到context。 - - 注意到这里recurrent_group输入的每个样本中,fea1和fea2的长度都分别相等,这并非偶然,而是因为recurrent_group要求输入为单层序列时,所有输入的长度都必须相等。 - -.. literalinclude:: ../../../paddle/gserver/tests/sequence_rnn_multi_unequalength_inputs.conf - :language: python - :lines: 41-58 - -- 双层序列\: - - - 双层RNN中,对输入的两个特征分别求时序上的连续全连接(`inner_step1`和`inner_step2`分别处理fea1和fea2),其功能与示例2中`sequence_nest_rnn.conf`的`outer_step`函数完全相同。不同之处是,此时输入`[SubsequenceInput(emb1), SubsequenceInput(emb2)]`在各时刻并不等长。 - - 函数`outer_step`中可以分别处理这两个特征,但我们需要用targetInlink指定recurrent_group的输出的格式(各子句长度)只能和其中一个保持一致,如这里选择了和emb2的长度一致。 - - 最后,依然是取encoder1_rep的最后一个时刻和encoder2_rep的所有时刻分别相加得到context。 - -.. literalinclude:: ../../../paddle/gserver/tests/sequence_nest_rnn_multi_unequalength_inputs.conf - :language: python - :lines: 41-89 - -示例4:beam_search的生成 -======================== - -TBD diff --git a/doc_cn/algorithm/rnn/hrnn_demo.rst b/doc_cn/algorithm/rnn/hrnn_demo.rst index cf38e416c0..96396ff105 100644 --- a/doc_cn/algorithm/rnn/hrnn_demo.rst +++ b/doc_cn/algorithm/rnn/hrnn_demo.rst @@ -1,4 +1,4 @@ -.. algo_hrnn_demo: +.. _algo_hrnn_demo: ################# 双层RNN的使用示例 diff --git a/doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst b/doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst index cf18108019..8ae0f85b29 100644 --- a/doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst +++ b/doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst @@ -4,101 +4,99 @@ 单双层RNN API对比介绍 ##################### -这篇教程主要介绍了 :ref:`glossary_双层RNN` 的API接口。本文中的以 :ref:`glossary_paddle` 的 :ref:`glossary_双层RNN` 单元测试为示例,用多对效果完全相同的、分别使用单、双层RNN作为网络配置的模型,来讲解如何使用 :ref:`glossary_双层RNN` 。本文中所有的例子,都只是介绍 :ref:`glossary_双层RNN` 的API接口,并不是使用 :ref:`glossary_双层RNN` 解决实际的问题。如果想要了解 :ref:`glossary_双层RNN` 在具体问题中的使用,请参考 :ref:`algo_hrnn_demo` 。文章中示例所使用的单元测试文件是 `test_RecurrentGradientMachine.cpp `_ 。 +这篇教程主要介绍了\ :ref:`glossary_双层RNN`\ 的API接口。本文中的以\ :ref:`glossary_paddle`\ 的\ :ref:`glossary_双层RNN`\ 单元测试为示例,用多对效果完全相同的、分别使用单、双层RNN作为网络配置的模型,来讲解如何使用\ :ref:`glossary_双层RNN`\ 。本文中所有的例子,都只是介绍\ :ref:`glossary_双层RNN`\ 的API接口,并不是使用\ :ref:`glossary_双层RNN`\ 解决实际的问题。如果想要了解\ :ref:`glossary_双层RNN`\ 在具体问题中的使用,请参考\ :ref:`algo_hrnn_demo`\ 。文章中示例所使用的单元测试文件是\ `test_RecurrentGradientMachine.cpp `_\ 。 示例1:双层RNN,子序列间无Memory ================================ +在\ :ref:`glossary_双层RNN`\ 中的经典情况是将内层的每一个\ :ref:`glossary_Sequence`\ 数据,分别进行序列操作。并且内层的序列操作之间是独立没有依赖的,即不需要使用\ :ref:`glossary_Memory`\ 的。 +在本问题中,单层\ :ref:`glossary_RNN`\ 和\ :ref:`glossary_双层RNN`\ 的网络配置,都是将每一句分好词后的句子,使用\ :ref:`glossary_lstm`\ 作为\ :ref:`glossary_encoder`\ ,压缩成一个向量。区别是\ :ref:`glossary_RNN`\ 使用两层序列模型,将多句话看成一个整体,同时使用\ :ref:`glossary_encoder`\ 压缩,二者语意上完全一致。这组语意相同的示例配置如下 -配置:单层RNN(:code:`sequence_layer_group`)和双层RNN(:code:`sequence_nest_layer_group`),语义完全相同。 +* 单层 \:ref:`glossary_RNN`\: `sequence_layer_group.conf `_ +* :ref:`glossary_双层RNN`\: `sequence_nest_layer_group.conf `_ -读取双层序列的方法 ------------------- -首先,我们看一下单双层序列的不同数据组织形式(您也可以采用别的组织形式)\: +读取双层序列数据 +---------------- + +首先,本示例中使用的原始数据如下\: -- 单层序列的数据( 
:code:`Sequence/tour_train_wdseg`)如下,一共有10个样本。每个样本由两部分组成,一个label(此处都为2)和一个已经分词后的句子。 +- 本里中的原始数据一共有10个\ :ref:`glossary_sample`\ 。每个\ :ref:`glossary_sample`\ 由两部分组成,一个label(此处都为2)和一个已经分词后的句子。这个数据也被单层\ :ref:`glossary_RNN`\ 网络直接使用。 .. literalinclude:: ../../../paddle/gserver/tests/Sequence/tour_train_wdseg :language: text -- 双层序列的数据( :code:`Sequence/tour_train_wdseg.nest`)如下,一共有4个样本。样本间用空行分开,代表不同的双层序列,序列数据和上面的完全一样。每个样本的子句数分别为2,3,2,3。 +- 双层序列数据一共有4个\ :ref:`glossary_sample`\ 。 每个样本间用空行分开,整体数据和原始数据完全一样。而对于双层序列的\ :ref:`glossary_lstm`\ 来说,第一条数据同时\ :ref:`glossary_encode` 两条数据成两个向量。这四条数据同时处理的句子为\ :code:`[2, 3, 2, 3]`\ 。 .. literalinclude:: ../../../paddle/gserver/tests/Sequence/tour_train_wdseg.nest :language: text -其次,我们看一下单双层序列的不同dataprovider(见 :code:`sequenceGen.py` ): - -- 单层序列的dataprovider如下: - - - word_slot是integer_value_sequence类型,代表单层序列。 - - label是integer_value类型,代表一个向量。 +其次,对于两种不同的输入数据类型,不同\ :ref:`glossary_DataProvider`\ 对比如下(`sequenceGen.py `_)\: .. literalinclude:: ../../../paddle/gserver/tests/sequenceGen.py :language: python :lines: 21-39 + :linenos: -- 双层序列的dataprovider如下: - - - word_slot是integer_value_sub_sequence类型,代表双层序列。 - - label是integer_value_sequence类型,代表单层序列,即一个子句一个label。注意:也可以为integer_value类型,代表一个向量,即一个句子一个label。通常根据任务需求进行不同设置。 - - 关于dataprovider中input_types的详细用法,参见PyDataProvider2。 +- 这是普通的单层\ :ref:`glossary_Sequence`\ 的\ :ref:`glossary_DataProvider`\ 代码,其说明如下: + + * :ref:`glossary_DataProvider`\ 共返回两个数据,分别是words和label。即上述代码中的第19行。 + - words是原始数据中的每一句话,所对应的词表index数组。它是integer_value_sequence类型的,即整数数组。words即为这个数据中的单层\ :ref:`glossary_Sequence`\ 。 + - label是原始数据中对于每一句话的分类标签,它是integer_value类型的。 .. literalinclude:: ../../../paddle/gserver/tests/sequenceGen.py :language: python :lines: 42-71 + :linenos: -模型中的配置 ------------- +- 这是对于同样的数据,本示例中双层\ :ref:`glossary_Sequence`\ 的\ :ref:`glossary_DataProvider`\ 代码,其说明如下: + + - :ref:`glossary_DataProvider`\ 共返回两组数据,分别是sentences和labels。即在双层序列的原始数据中,每一组内的所有句子和labels + - sentences是双层\ :ref:`glossary_Sequence`\ 的数据。他内部包括了每组数据中的所有句子,又使用句子中每一个单词的词表index表示每一个句子,故为双层\ :ref:`glossary_Sequence`\ 。类型为 integer_value_sub_sequence 。 + - labels是每组内每一个句子的标签,故而是一个单层\ :ref:`glossary_Sequence`\ 。 + + +:ref:`glossary_trainer_config`\ 的模型配置 +------------------------------------------ -首先,我们看一下单层序列的配置(见 :code:`sequence_layer_group.conf`)。注意:batchsize=5表示一次过5句单层序列,因此2个batch就可以完成1个pass。 +首先,我们看一下单层\ :ref:`glossary_RNN`\ 的配置。代码中9-15行即为单层RNN序列的使用代码。这里使用了\ :ref:`glossary_paddle`\ 预定义好的\ :ref:`glossary_RNN`\ 处理函数。在这个函数中,\ :ref:`glossary_RNN`\ 对于每一个\ :ref:`glossary_timestep`\ 通过了一个\ :ref:`glossary_lstm`\ 网络。 .. 
literalinclude:: ../../../paddle/gserver/tests/sequence_layer_group.conf :language: python :lines: 38-63 + :linenos: + :emphasize-lines: 9-15 -其次,我们看一下语义相同的双层序列配置(见 :code:`sequence_nest_layer_group.conf` ),并对其详细分析: +其次,我们看一下语义相同的\ :ref:`glossary_双层RNN`\ 的网络配置。 -- batchsize=2表示一次过2句双层序列。但从上面的数据格式可知,2句双层序列和5句单层序列的数据完全一样。 -- data_layer和embedding_layer不关心数据是否是序列格式,因此两个配置在这两层上的输出是一样的。 -- lstmemory\: +* :ref:`glossary_paddle`\ 中的许多layer并不在意输入是否是\ :ref:`glossary_Sequence`\ ,例如\ :code:`embedding_layer`\ 。在这些layer中,所有的操作都是针对每一个\ :ref:`glossary_timestep`\ 来进行的。 - - 单层序列过了一个mixed_layer和lstmemory_group。 - - 双层序列在同样的mixed_layer和lstmemory_group外,直接加了一层group。由于这个外层group里面没有memory,表示subseq间不存在联系,即起到的作用仅仅是把双层seq拆成单层,因此双层序列过完lstmemory的输出和单层的一样。 +* 在该配置中,7-26行将双层\ :ref:`glossary_Sequence`\ 数据,先变换成单层\ :ref:`glossary_Sequence`\ 数据,在对每一个单层\ :ref:`glossary_Sequence`\ 进行处理。 -- last_seq\: + * 使用\ :code:`recurrent_group`\ 这个函数进行变换,在变换时需要将输入序列传入。由于我们想要的变换是双层\ :ref:`glossary_Sequence`\ => 单层\ :ref:`glossary_Sequence`\ ,所以我们需要将输入数据标记成\ :code:`SubsequenceInput`\ 。 + + * 在本例中,我们将原始数据的每一组,通过\ :code:`recurrent_group`\ 进行拆解,拆解成的每一句话再通过一个\ :ref:`glossary_lstm`\ 网络。这和单层\ :ref:`glossary_RNN`\ 的配置是等价的。 + +* 与单层\ :ref:`glossary_RNN`\ 的配置类似,我们只需要知道使用\ :ref:`glossary_lstm` :ref:`glossary_encode`\ 成的最后一个向量。所以对\ :code:`recurrent_group`\ 进行了\ :code:`last_seq`\ 操作。但是,和单层\ :ref:`glossary_RNN`\ 有区别的地方是,我们是对每一个子序列取最后一个元素。于是我们设置\ :code:`agg_level=AggregateLevel.EACH_SEQUENCE`\ 。 - - 单层序列直接取了最后一个元素 - - 双层序列首先(last_seq层)取了每个subseq的最后一个元素,将其拼接成一个新的单层序列;接着(expand_layer层)将其扩展成一个新的双层序列,其中第i个subseq中的所有向量均为输入的单层序列中的第i个向量;最后(average_layer层)取了每个subseq的平均值。 - - 分析得出:第一个last_seq后,每个subseq的最后一个元素就等于单层序列的最后一个元素,而expand_layer和average_layer后,依然保持每个subseq最后一个元素的值不变(这两层仅是为了展示它们的用法,实际中并不需要)。因此单双层序列的输出是一样旳。 +* 至此,\ :code:`lstm_last`\ 便和单层\ :ref:`glossary_RNN`\ 的配置中的\ :code:`lstm_last`\ 具有相同的结果了。 .. literalinclude:: ../../../paddle/gserver/tests/sequence_nest_layer_group.conf :language: python - :lines: 38-84 - -示例2:双进双出,subseq间有memory -================================= + :lines: 38-64 + :linenos: + :emphasize-lines: 7-26 -配置:单层RNN( :code:`sequence_rnn.conf` ),双层RNN( :code:`sequence_nest_rnn.conf` 和 :code:`sequence_nest_rnn_readonly_memory.conf` ),语义完全相同。 - -读取双层序列的方法 ------------------- +示例2::ref:`glossary_双层RNN`,子序列间有\ :ref:`glossary_Memory` +================================================================== -我们看一下单双层序列的不同数据组织形式和dataprovider(见 :code:`rnn_data_provider.py`) +本示例中,意图使用单层\ :ref:`glossary_RNN`\ 和\ :ref:`glossary_双层RNN`\ 同时实现一个完全等价的全连接\ :ref:`glossary_RNN`\ 。对于单层\ :ref:`glossary_RNN`\ ,输入数据为一个完整的\ :ref:`glossary_Sequence`\ ,例如\ :code:`[4, 5, 2, 0, 9, 8, 1, 4]`\ 。而对于\ :ref:`glossary_双层RNN`\ ,输入数据为在单层\ :ref:`glossary_RNN`\ 数据里面,任意将一些数据组合成双层\ :ref:`glossary_Sequence`\ ,例如\ :code:`[ [4, 5, 2], [0, 9], [8, 1, 4]]`。 -.. literalinclude:: ../../../paddle/gserver/tests/rnn_data_provider.py - :language: python - :lines: 20-32 - -- 单层序列:有两句,分别为[1,3,2,4,5,2]和[0,2,2,5,0,1,2]。 -- 双层序列:有两句,分别为[[1,3,2],[4,5,2]](2个子句)和[[0,2],[2,5],[0,1,2]](3个子句)。 -- 单双层序列的label都分别是0和1 - -模型中的配置 ------------- +:ref:`glossary_trainer_config`\ 的模型配置 +------------------------------------------ 我们选取单双层序列配置中的不同部分,来对比分析两者语义相同的原因。 diff --git a/doc_cn/concepts/glossary.rst b/doc_cn/concepts/glossary.rst index a94aa73675..518712d1fe 100644 --- a/doc_cn/concepts/glossary.rst +++ b/doc_cn/concepts/glossary.rst @@ -11,6 +11,33 @@ PaddlePaddle TBD +.. _glossary_encode: + +encode +------ + +参考\ :ref:`glossary_encoder`\ 。 + +.. _glossary_encoder: + +encoder +------- + +TBD + +.. 
_glossary_sample: + +样本 +---- + +TBD Sample的概念 + +.. _glossary_lstm: + +LSTM +---- + +TBD .. _glossary_memory: @@ -27,6 +54,13 @@ Memory是 :ref:`glossary_paddle` 实现 :ref:`glossary_RNN` 时候使用的一 使用这种方式,:ref:`glossary_paddle` 可以比较简单的判断哪些输出是应该跨越时间步的,哪些不是。 +.. _glossary_timestep: + +时间步 +------ + +参考 :ref:`_glossary_Sequence` 。 + .. _glossary_Sequence: 时间序列 diff --git a/doc_cn/concepts/use_concepts.rst b/doc_cn/concepts/use_concepts.rst index 67e98edabc..73fa78455f 100644 --- a/doc_cn/concepts/use_concepts.rst +++ b/doc_cn/concepts/use_concepts.rst @@ -32,6 +32,7 @@ PaddlePaddle进程内嵌了一个 :code:`python` 解释器。 这个 :code:`pyth 所以,PaddlePaddle单机训练进程,:code:`paddle train` , 对于用户的主要接口语言为 python。 主要需要用户配置的两个文件为 :code:`DataProvider` 和训练文件 :code:`TrainerConfig` 。 +.. _glossary_DataProvider: DataProvider ============ @@ -42,6 +43,7 @@ DataProvider是 :code:`paddle train` 的数据提供器。 它负责将用户的 为了方便用户使用自己的数据格式, PaddlePaddle 提供了 `PyDataProvider`_ 来处理数据。 并且在这个Provider中,PaddlePaddle的 C++ 部分接管了如何shuffle,处理 batch,GPU/CPU通信,双缓冲,异步读取等问题。 用户可以参考 `PyDataProvider`_ 的相关文档,继续深入了解 DataProvider 的使用。 +.. _glossary_trainer_config: 训练文件 ======== diff --git a/doc_cn/conf.py.in b/doc_cn/conf.py.in index 93242ace40..80e5291815 100644 --- a/doc_cn/conf.py.in +++ b/doc_cn/conf.py.in @@ -69,7 +69,7 @@ master_doc = 'index' # # This is also used if you do content translation via gettext catalogs. # Usually you set "language" from the command line for these cases. -language = None +language = 'zh_CN' # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: diff --git a/paddle/gserver/tests/sequenceGen.py b/paddle/gserver/tests/sequenceGen.py index fab876fd30..99440ada53 100644 --- a/paddle/gserver/tests/sequenceGen.py +++ b/paddle/gserver/tests/sequenceGen.py @@ -33,10 +33,10 @@ def process(settings, file_name): label, comment = line.strip().split('\t') label = int(''.join(label.split())) words = comment.split() - word_slot = [ + words = [ settings.word_dict[w] for w in words if w in settings.word_dict ] - yield word_slot, label + yield words, label ## for hierarchical sequence network @@ -52,20 +52,20 @@ def hook2(settings, dict_file, **kwargs): @provider(init_hook=hook2, should_shuffle=False) def process2(settings, file_name): with open(file_name) as fdata: - label_list = [] - word_slot_list = [] + labels = [] + sentences = [] for line in fdata: if (len(line)) > 1: label, comment = line.strip().split('\t') label = int(''.join(label.split())) words = comment.split() - word_slot = [ + words = [ settings.word_dict[w] for w in words if w in settings.word_dict ] - label_list.append(label) - word_slot_list.append(word_slot) + labels.append(label) + sentences.append(words) else: - yield word_slot_list, label_list - label_list = [] - word_slot_list = [] + yield sentences, labels + labels = [] + sentences = [] diff --git a/paddle/gserver/tests/sequence_nest_rnn.conf b/paddle/gserver/tests/sequence_nest_rnn.conf index 93b08eb2f8..524760be76 100644 --- a/paddle/gserver/tests/sequence_nest_rnn.conf +++ b/paddle/gserver/tests/sequence_nest_rnn.conf @@ -55,9 +55,8 @@ def outer_step(x): input=x) last = last_seq(input=inner_rnn_output, name="outer_rnn_state") - # "return last" should also work. But currently RecurrentGradientMachine - # does not handle it, and will report error: In hierachical RNN, all out - # links should be from sequences now. + # "return last" won't work, because recurrent_group only support the input + # sequence type is same as return sequence type. 
return inner_rnn_output out = recurrent_group( From a146fcf8ad37c111cd0fc44378dd6ca6804b3dfe Mon Sep 17 00:00:00 2001 From: Yu Yang Date: Sun, 20 Nov 2016 21:25:25 +0800 Subject: [PATCH 007/265] stash --- doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst | 17 +++++++++++------ 1 file changed, 11 insertions(+), 6 deletions(-) diff --git a/doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst b/doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst index 8ae0f85b29..eea220c043 100644 --- a/doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst +++ b/doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst @@ -98,23 +98,28 @@ :ref:`glossary_trainer_config`\ 的模型配置 ------------------------------------------ -我们选取单双层序列配置中的不同部分,来对比分析两者语义相同的原因。 - -- 单层序列:过了一个很简单的recurrent_group。每一个时间步,当前的输入y和上一个时间步的输出rnn_state做了一个全链接。 +本例配置了两个完全等价的全连接\ :ref:`glossary_RNN`\ 。对于单层序列模型的配置如下: .. literalinclude:: ../../../paddle/gserver/tests/sequence_rnn.conf :language: python :lines: 36-48 + :linenos: -- 双层序列,外层memory是一个元素: +在该配置中,名称为\ :code:`rnn_state`\ 的全连接层暂存到了\ :ref:`glossary_Memory`\ 中。这个\ :ref:`glossary_Memory`\ 变量\ :code:`mem`\ 中可以保存到上一个\ :ref:`glossary_timestep`\ 中的全连接层的输出。从而实现一个全连接的\ :ref:`glossary_RNN`\ 。 - - 内层inner_step的recurrent_group和单层序列的几乎一样。除了boot_layer=outer_mem,表示将外层的outer_mem作为内层memory的初始状态。外层outer_step中,outer_mem是一个子句的最后一个向量,即整个双层group是将前一个子句的最后一个向量,作为下一个子句memory的初始状态。 - - 从输入数据上看,单双层序列的句子是一样的,只是双层序列将其又做了子序列划分。因此双层序列的配置中,必须将前一个子句的最后一个元素,作为boot_layer传给下一个子句的memory,才能保证和单层序列的配置中“每一个时间步都用了上一个时间步的输出结果”一致。 +而对于\ :ref:`glossary_双层RNN`\ 来说,等价的网络配置如下\: .. literalinclude:: ../../../paddle/gserver/tests/sequence_nest_rnn.conf :language: python :lines: 39-66 +- 双层序列,外层memory是一个元素: + + - 内层inner_step的recurrent_group和单层序列的几乎一样。除了boot_layer=outer_mem,表示将外层的outer_mem作为内层memory的初始状态。外层outer_step中,outer_mem是一个子句的最后一个向量,即整个双层group是将前一个子句的最后一个向量,作为下一个子句memory的初始状态。 + - 从输入数据上看,单双层序列的句子是一样的,只是双层序列将其又做了子序列划分。因此双层序列的配置中,必须将前一个子句的最后一个元素,作为boot_layer传给下一个子句的memory,才能保证和单层序列的配置中“每一个时间步都用了上一个时间步的输出结果”一致。 + + + - 双层序列,外层memory是单层序列: - 由于外层每个时间步返回的是一个子句,这些子句的长度往往不等长。因此当外层有is_seq=True的memory时,内层是**无法直接使用**它的,即内层memory的boot_layer不能链接外层的这个memory。 From 4fcf01a849743965b8fb49db9173217e4aed2dfc Mon Sep 17 00:00:00 2001 From: Yu Yang Date: Mon, 21 Nov 2016 16:36:07 +0800 Subject: [PATCH 008/265] Refine code, found a bad design --- doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst | 20 ++++++++----- .../simple_full_hierarchical_recurrent.dot | 30 +++++++++++++++++++ .../algorithm/rnn/simple_full_recurrent.dot | 19 ++++++++++++ 3 files changed, 62 insertions(+), 7 deletions(-) create mode 100644 doc_cn/algorithm/rnn/simple_full_hierarchical_recurrent.dot create mode 100644 doc_cn/algorithm/rnn/simple_full_recurrent.dot diff --git a/doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst b/doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst index eea220c043..09172c53f7 100644 --- a/doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst +++ b/doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst @@ -107,23 +107,29 @@ 在该配置中,名称为\ :code:`rnn_state`\ 的全连接层暂存到了\ :ref:`glossary_Memory`\ 中。这个\ :ref:`glossary_Memory`\ 变量\ :code:`mem`\ 中可以保存到上一个\ :ref:`glossary_timestep`\ 中的全连接层的输出。从而实现一个全连接的\ :ref:`glossary_RNN`\ 。 +以数据\ :code:`[4, 5, 2, 0, 9, 8, 1, 4]`\ 举例,单层\ :ref:`glossary_RNN`\ 的网络图如下\: + +.. graphviz:: simple_full_recurrent.dot + 而对于\ :ref:`glossary_双层RNN`\ 来说,等价的网络配置如下\: .. 
literalinclude:: ../../../paddle/gserver/tests/sequence_nest_rnn.conf :language: python :lines: 39-66 + :linenos: + :emphasize-lines: 4-6 -- 双层序列,外层memory是一个元素: - - - 内层inner_step的recurrent_group和单层序列的几乎一样。除了boot_layer=outer_mem,表示将外层的outer_mem作为内层memory的初始状态。外层outer_step中,outer_mem是一个子句的最后一个向量,即整个双层group是将前一个子句的最后一个向量,作为下一个子句memory的初始状态。 - - 从输入数据上看,单双层序列的句子是一样的,只是双层序列将其又做了子序列划分。因此双层序列的配置中,必须将前一个子句的最后一个元素,作为boot_layer传给下一个子句的memory,才能保证和单层序列的配置中“每一个时间步都用了上一个时间步的输出结果”一致。 +- 在该配置中,外层的\ :code:`outer_mem`\ 和内层的\ :code:`inner_mem`\ 两个变量配合,实现了和单层\ :ref:`glossary_RNN`\ 等价的全连接\ :ref:`glossary_RNN`\ 。 + - 外层\ :code:`outer_step`\ 中的\ :code:`outer_mem`\ 会将神经网络中每个子序列的最后一个结果记录下来。即将第18行的\ :code:`last`\ 变量记录下来。 + - 内层\ :code:`inner_step`\ 中的\ :code:`inner_mem`\ 会将神经网络中子序列中的每一个元素的结果记录下来。即将第7行的\ :code:`out`\ 变量记录下来。 + - 内层的\ :code:`inner_mem`\ 初始值是\ :code:`outer_mem`(:code:`boot_layer`)。于是前一个子序列的最后结果,是新的子序列的初试结果。即完成了简单的全连接\ :code:`glossary_RNN`\ 。 +本例中的\ :ref:`glossary_双层RNN`\ ,以数据\ :code:`[ [4, 5, 2], [0, 9], [8, 1, 4]]`\ 举例,配置图如下\: -- 双层序列,外层memory是单层序列: +.. graphviz:: simple_full_hierarchical_recurrent.dot - - 由于外层每个时间步返回的是一个子句,这些子句的长度往往不等长。因此当外层有is_seq=True的memory时,内层是**无法直接使用**它的,即内层memory的boot_layer不能链接外层的这个memory。 - - 如果内层memory想**间接使用**这个外层memory,只能通过`pooling_layer`、`last_seq`或`first_seq`这三个layer将它先变成一个元素。但这种情况下,外层memory必须有boot_layer,否则在第0个时间步时,由于外层memory没有任何seq信息,因此上述三个layer的前向会报出“**Check failed: input.sequenceStartPositions**”的错误。 +这里有一点注意事项,Paddle目前实现的\ :ref:`glossary_双层RNN`\ 不完全支持内层\ :ref:`glossary_RNN`\ 的\ :ref:`glossary_Memory`\ 引用外层\ :ref:`glossary_RNN`\ 的某一层序列输入。即\ :code:`inner_mem`的\ :code:`boot_layer`\ 需要是非序列类型的,或者可以是序列类型,但是每个时间步下,序列长度是一致的。从序列类型转换为非序列类型,可以使用\ :code:`pooling_layer`, :code:`last_seq`, :code:`first_seq`\ 等操作进行转换。 示例3:双进双出,输入不等长 =========================== diff --git a/doc_cn/algorithm/rnn/simple_full_hierarchical_recurrent.dot b/doc_cn/algorithm/rnn/simple_full_hierarchical_recurrent.dot new file mode 100644 index 0000000000..ff278a0323 --- /dev/null +++ b/doc_cn/algorithm/rnn/simple_full_hierarchical_recurrent.dot @@ -0,0 +1,30 @@ +digraph G { + rankdir=LR; + + subgraph cluster_t0 { + a [label="4"] + b [label="5"] + c [label="2"] + } + + subgraph cluster_t1 { + d [label="0"] + e [label="9"] + } + + subgraph cluster_t2 { + f [label="8"] + g [label="1"] + h [label="4"] + } + + a -> b; + b -> c; + c -> d [constraint=false]; + + d -> e; + e -> f [constraint=false]; + + f -> g; + g -> h; +} \ No newline at end of file diff --git a/doc_cn/algorithm/rnn/simple_full_recurrent.dot b/doc_cn/algorithm/rnn/simple_full_recurrent.dot new file mode 100644 index 0000000000..cee281fbac --- /dev/null +++ b/doc_cn/algorithm/rnn/simple_full_recurrent.dot @@ -0,0 +1,19 @@ +digraph G { + rankdir=LR; + a [label="4"] + b [label="5"] + c [label="2"] + d [label="0"] + e [label="9"] + f [label="8"] + g [label="1"] + h [label="4"] + + a -> b; + b -> c; + c -> d; + d -> e; + e -> f; + f -> g; + g -> h; +} \ No newline at end of file From 8c2e5b2cadf450de6f5b44ffffa42da93651c3c4 Mon Sep 17 00:00:00 2001 From: dangqingqing Date: Mon, 21 Nov 2016 20:56:00 +0800 Subject: [PATCH 009/265] Update use_concepts --- doc_cn/concepts/use_concepts.rst | 156 ++++++++++++++----------------- doc_cn/faq/index.rst | 19 +++- 2 files changed, 88 insertions(+), 87 deletions(-) diff --git a/doc_cn/concepts/use_concepts.rst b/doc_cn/concepts/use_concepts.rst index d3da9cc16b..49c45ff779 100644 --- a/doc_cn/concepts/use_concepts.rst +++ b/doc_cn/concepts/use_concepts.rst @@ -2,16 +2,20 @@ PaddlePaddle 基本使用概念 
######################### -PaddlePaddle是一个深度学习框架,同时支持单机和多机模式的系统。命令 ``paddle train`` 可启动单机模式的进程,我们称之为 ``trainer`` 进程。单机所有设备使用均在单机进程内调度完成。多机模式除了需要启动trainer进程外,还需要通过命令 ``paddle pserver`` 启动多机参数服务器进程, 我们称之为   ``pserver`` 进程。该进程负责多个单机进程间的通信,进而充分利用集群的计算资源。 PaddlePaddle同时以 ``swig api`` 的形式,提供训练结果模型预测的方法和自定义训练流程。 +PaddlePaddle是一个深度学习框架,支持单机模式和多机模式。 -下面我们会介绍trainer进程中的一些概念,这些概念会对如何使用PaddlePaddle有一定的帮助。 了解这些概念的前提是,读者已经了解 `基本的神经网络/机器学习原理和概念 `_ 。同时,如果想要了解PaddlePaddle实现中的一些概念,请参考 `PaddlePaddle 编程中的基本概念 `_ 。 +单节模式用命令 ``paddle train`` 可以启动一个trainer进程,一个单机训练作业只包括一个trainer进程,单机的所有设备使用,均在单机进程内调度完成。 + +如果数据规模比较大,希望加速训练,可以启动分布式作业。一个分布式作业里包括若干trainer进程和若干Parameter Server(或称pserver)进程。用命令 ``paddle pserver`` 可以启动 pserver 进程,pserver进程用于协调多个trainer进程之间的通信。 + +本文首先介绍trainer进程中的一些使用概念,然后介绍pserver进程中概念。 .. contents:: -系统模块 +系统框图 ======== -``trainer`` 进程内嵌了一个 ``python`` 解释器, 这个 ``python`` 解释器负责解析用户定义的神经网络配置;解析输入数据流,并将数据传入给 ``trainer`` 系统。 +下图描述了用户使用框图,PaddlePaddle里链接了Python解释器,trainer进程可以利用这个解释器执行Python脚本,Python脚本里定义了模型配置、训练算法、以及数据读取函数。其中,数据读取程序往往定义在一个单独Python脚本文件里,被称为DataProvider,通常是一个Python函数。模型配置、训练算法通常定义在另一单独Python文件中。下面将分别介绍这两部分。 .. graphviz:: @@ -30,132 +34,105 @@ PaddlePaddle是一个深度学习框架,同时支持单机和多机模式的 py -> data_provider [dir="back"]; } -所以,单机训练 ``trainer`` 进程对用户的主要接口语言为Python。用户需要配置文件主要有两个:数据流提供器 ``DataProvider`` 和模型配置 ``TrainerConfig`` 。 - - DataProvider ============ -DataProvider是 ``trainer`` 进程的数据提供器。主要负责将用户的原始数据转换成 ``trainer`` 系统可以识别的数据类型。当系统需要新的数据训练时,会调用DataProvider获取数据接口。当所有数据读取完一轮后,DataProvider返回空数据通知系统一轮数据读取结束。 ``trainer`` 在每一轮训练开始时会重置DataProvider。 +在不同的应用里,训练数据的格式往往各不相同。因此,为了用户能够灵活的处理数据,我们提供了Python处理数据的接口,称为 `PyDataProvider`_ 。 -需要注意的是,DataProvider是被 ``trainer`` 系统调用,而不是新数据驱动系统;数据 ``shuffle`` 和一些随机化噪声添加都应该在DataProvider中完成。 +trainer进程会调用DataProvider函数,将用户的原始数据转换成系统可以识别的数据类型。当所有数据读取完一轮后,DataProvider返回空数据,通知系统一轮数据读取结束,系统每一轮训练开始时会重置DataProvider。需要注意的是,DataProvider是被系统调用,而不是新数据驱动系统,一些随机化噪声添加都应该在DataProvider中完成。 -为了用户能够灵活的处理数据,PaddlePaddle提供了处理数据的Python接口(称为 `PyDataProvider`_ )。 在 ``PyDataProvider`` 中,系统C++模块接管了shuffle、处理batch、GPU和CPU通信、双缓冲、异步读取等问题,需要说明的是,一些情况下需要Python接口里处理shuffle,可以参考 `PyDataProvider`_ 的相关文档继续深入了解。 +在 ``PyDataProvider`` 中,系统C++模块接管了shuffle、处理batch、GPU和CPU通信、双缓冲、异步读取等问题,一些情况下(如:``min_pool_size=0``)需要Python接口里处理shuffle,可以参考 `PyDataProvider`_ 的相关文档继续深入了解。 -TrainerConfig -============= - -模型配置是一个Python文件,主要包括神经网络结构、优化算法、数据传入方式,使用命令行参数 ``--config`` 传给``trainer``主程序。 例如\: - -.. code-block:: bash +模型配置文件 +============ - paddle train --config=trainer_config.py +模型配置主要包括数据传入接口定义(DataConfig)、优化算法(OptimizationConfig)、网络结构(ModelConfig)。 其中数据传入接口定义与DataProvider的关系是:DataProvider里定义数据读取函数,配置文件的DataConfig里指定DataProvider文件名字、生成数据函数接口,请不要混淆。 一个简单的模型配置文件为: .. 
literalinclude:: trainer_config.py :linenos: -下面我们详细的介绍一下模型配置中各个模块的概念。 - +文件开头 ``from paddle.trainer_config_helpers import *`` ,是因为PaddlePaddle配置文件与C++模块通信的最基础协议是protobuf。为了避免用户直接写复杂的protobuf string,我们为用户定以Python接口来配置网络,该Python代码可以生成protobuf包,这就是的作用`trainer_config_helpers`_的作用。因此,在文件的开始,需要import这些函数。 这个包里面包含了模型配置需要的各个模块。 -trainer_config_helpers ----------------------- +下面分别介绍DataConfig、OptimizationConfig、ModelConfig这三部分该概念。 -PaddlePaddle配置文件与C++模块通信的最基础协议是 ``protobuf`` 。为了避免用户直接写比较难写的protobuf string,我们通过Python代码来生成protobuf包,这就是helpers的作用。所以在文件的开始,需要import这些helpers函数。 +DataConfig +---------- -需要注意的是,这个 ``paddle.trainer_config_helpers`` 包是标准的python包,这意味着用户可以选择自己喜欢的 ``IDE`` 或者编辑器来编写Paddle的配置文件,这个Python包注释文档比较完善,并提供了IDE的代码提示与类型注释。 +使用函数 ``define_py_data_sources2`` 配置数据源,后缀 2 是Paddle历史遗留问题,因为Paddle之前使用的PyDataProvider性能问题,重构了一个新的 `PyDataProvider`_ 。 -data_sources ------------- +``define_py_data_sources2`` 里通过train_list和test_list指定是训练文件列表和测试文件列表。 如果传入字符串的话,是指一个数据列表文件。这个数据列表文件中包含的是每一个训练或者测试文件的路径。如果传入一个list的话,则会默认生成一个list文件,再传入给train.list或者test.list。 -data_sources配置神经网络的数据源,使用的函数是 ``define_py_data_sources2`` ,这个函数是定义了使用 `PyDataProvider`_ 提供数据源。后缀 ``2`` 是Paddle历史遗留问题,因为Paddle之前使用的PyDataProvider性能问题,重构了一个新的 `PyDataProvider`_ 。 +``module`` 和 ``obj`` 指定了DataProvider的文件名和返回数据的函数名。更详细的使用,请参考 `PyDataProvider`_ 。 -data_sources里通过train_list和test_list指定是训练文件列表和测试文件列表。 如果传入字符串的话,是指一个数据列表文件。这个数据列表文件中包含的是每一个训练或者测试文件的路径。如果传入一个list的话,则会默认生成一个list文件,再传入给train.list或者test.list。 +OptimizationConfig +------------------ -其中``module`` 和``obj``指定了DataProvider的文件名和返回数据的函数名。更详细的使用,请参考 `PyDataProvider`_ 。 +通过`settings`_ 接口设置神经网络所使用的训练参数和优化算法,包括学习率、batch_size、优化算法、正则方法等,具体的使用方法请参考 `settings`_ 文档。 -settings --------- +ModelConfig +----------- -`settings`_ 设置训练神经网络所使用的算法。包括学习率、batch_size、优化算法、正则方法等,具体的使用方法请参考 `settings`_ 文档。 +神经网络配置主要包括网络连接、激活函数、损失函数、评估器。 -网络配置 --------- +- 网络连接: 主要由Layer组成,每个Layer返回的都是一个 ``LayerOutput`` 对象,Layer里面可以定义参数属性、激活类型等。 -上述配置中余下的部分是神经网络配置,主要包括网络连接、 ``cost`` 层、评估器。 + 为了更灵活的配置,PaddlePaddle提供了基于 Projection 或者 Operator 的配置,这两个需要与 ``mixed_layer`` 配合使用。这里简单介绍Layer、Projection、Operator的概念: -- 首先,定义了一个名字叫"pixel"的 ``data_layer`` ,每个layer返回的都是一个 ``LayerOutput`` 对象,比如第一层的输出对象称作 ``img`` 。 -- 然后,这个对象作为另一个layer( ``simple_img_conv_pool`` )的输入, ``simple_img_conv_pool`` 是一个组合层,包括了图像的卷积 (convolution) 和池化(pooling), -- 其次,连接到全连接层(``fc_layer``),再连接到一个含Softmax激活的全连接层。 -- 最终,连接到cost层( ``classification_cost`` ), ``classification_cost`` 默认使用多类交叉熵损失函数和分类错误率统计评估器。标记网络输出的函数为 ``outputs`` ,网络的输出是神经网络的优化目标,神经网络训练的时候,实际上就是要最小化这个输出。 + - Layer: 神经网络的某一层,可以有可学习的参数,一般是封装了许多复杂操作的集合。 + - Projection:需要与 ``mixed_layer`` 配合使用,含可学习参数。 + - Operator: 需要与 ``mixed_layer`` 配合使用,不含可学习参数,输入全是其他Layer的输出。 -用该模型配置进行预测时,网络的输出也是通过 ``outputs`` 标记。 + + 这个配置文件网络由 ``data_layer`` 、 ``simple_img_conv_pool`` 、 ``fc_layer`` 组成。 + - `data_layer`_ : 通常每个配置文件都会包括 ``data_layer`` ,定义输入数据大小。 + - `simple_img_conv_pool`_ :是一个组合层,包括了图像的卷积 (convolution)和池化(pooling)。 + - `fc_layer`_ :全连接层,激活函数为Softmax,这里也可叫分类层。 -Layer、Projection、Operator -=========================== + +- 损失函数和评估器:损失函数即为网络的优化目标,评估器可以评价模型结果。 -PaddlePaddle的网络是基于Layer来配置的。所谓的Layer即是神经网络的某一层,一般是封装了许多复杂操作的操作集合。比如最简单的 ``fc_layer`` ,包括矩阵乘法、多输入的求和、加Bias操作、激活( ``activation`` )函数操作。 - -.. 
code-block:: python + PaddlePaddle包括很多损失函数和评估起,详细可以参考 `损失函数层`_ 和 `评估器`_ 。这里 ``classification_cost`` 默认使用多类交叉熵损失函数和分类错误率统计评估器。 + +- ``outputs``: 标记网络输出的函数为 ``outputs`` 。 - data = data_layer(name='data', size=200) - out = fc_layer(input=data, size=200, act=TanhActivation()) + 训练阶段,网络的输出为神经网络的优化目标;预测阶段,网络的输出也可通过 ``outputs`` 标记。 -对于更灵活配置需求,PaddlePaddle提供了基于 ``Projection`` 或者 ``Operator`` 的配置,这些需要与 ``mixed_layer`` 配合使用。 ``mixed_layer`` 是将多个输入累加求和,然后加Bias和 ``activation`` 操作。 ``mixed_layer`` 具体计算是通过内部的Projection和Operator完成。Projection含有可学习参数;而Operator不含可学习的参数,输入全是其他Layer的输出。 +这里对 ``mixed_layer`` 稍做详细说明, 该Layer将多个输入(Projection 或 Operator)累加求和,具体计算是通过内部的 Projection 和 Operator 完成,然后加 Bias 和 activation 操作, 例如,和 ``fc_layer`` 同样功能的 ``mixed_layer`` 是: .. code-block:: python + + data = data_layer(name='data', size=200) + with mixed_layer(size=200) as out: + out += full_matrix_projection(input=data) - data = data_layer(name='data', size=200) - with mixed_layer(size=200) as out: - out += full_matrix_projection(input=data) - -PaddlePaddle可以使用 ``mixed layer`` 配置出非常复杂的网络,甚至可以直接配置一个完整的LSTM。用户可以参考 `mixed_layer`_ 的相关文档进行配置。 - -如何利用单机的所有GPU或所有CPU核心 -=============================== - -PaddlePaddle的单机 ``trainer`` 进程可以充分利用一台计算机上所有的GPU资源或者CPU。 +PaddlePaddle 可以使用 ``mixed layer`` 配置出非常复杂的网络,甚至可以直接配置一个完整的LSTM。用户可以参考 `mixed_layer`_ 的相关文档进行配置。 -如果要使用机器上多块GPU,使用如下命令即可\: -.. code-block:: bash - - paddle train --use_gpu=true --trainer_count=4 # use 4 gpu card, 0, 1, 2, 3 - -如果要使用机器上多块CPU, 使用如下命令即可\: - -.. code-block:: bash - - paddle train --trainer_count=4 # use 4 cpu cores. - -如果要指定GPU编号,例如选择第0、2号GPU,则可以设置 ``CUDA_VISIBLE_DEVICES`` 环境变量来指定特定的GPU。具体可以参考连接`masking-gpu`_ ,命令为: +分布式训练 +========== -.. code-block:: bash - - env CUDA_VISIBLE_DEVICES=0,2 paddle train --use_gpu=true --trainer_count=2 - -如何利用多台机器的计算资源训练神经网络 -=================================== - -PaddlePaddle多机采用经典的 ``Parameter Server`` 架构对多个节点的 ``trainer`` 进行同步。多机训练神经网络,要讲数据切分到不同的机器上,切分数据相对简单,所以在PaddlePaddle的开源实现中并没有提供相关工具包。 - -多机训练的经典拓扑结构如下\: +PaddlePaddle多机采用经典的 Parameter Server 架构对多个节点的 trainer 进行同步。多机训练的经典拓扑结构如下\: .. graphviz:: pserver_topology.dot -图中每个灰色方块是一台机器,在每个机器中,先启动一个 ``paddle pserver`` 进程,并指定端口号,可能的参数是\: +图中每个灰色方块是一台机器,在每个机器中,先使用命令 ``paddle pserver`` 启动一个pserver进程,并指定端口号,可能的参数是\: .. code-block:: bash paddle pserver --port=5000 --num_gradient_servers=4 --nics='eth0' -这里说明系统的 ``pserver`` 进程端口是 ``5000`` ,有四个训练进程(即 ``--gradient_servers=4`` ,PaddlePaddle同时将 ``trainer`` 称作 ``GradientServer`` 。因为其为负责提供Gradient)。 启动之后 ``pserver`` 进程之后,需要 ``trainer`` 训练进程,再在各个机器上运行如下命令\: +* 指定 pserver 进程端口是 5000 。 +* 有四个训练进程(即 ``--gradient_servers=4`` ,PaddlePaddle同时将 trainer 称作 GradientServer 。因为其为负责提供Gradient) 。 +* 指定以太网类型为TCP网络。 + +启动之后 pserver 进程之后,需要启动 trainer 训练进程,在各个机器上运行如下命令\: .. code-block:: bash @@ -163,16 +140,23 @@ PaddlePaddle多机采用经典的 ``Parameter Server`` 架构对多个节点的 对于简单的多机协同训练使用上述方式即可。另外,pserver/train 通常在高级情况下,还需要设置下面两个参数\: -* --ports_num\: 一个 pserver进程共绑定多少个端口用来做稠密更新。默认是1 +* --ports_num\: 一个 pserver 进程共绑定多少个端口用来做稠密更新。默认是1 * --ports_num_for_sparse\: 一个pserver进程共绑定多少端口用来做稀疏更新,默认是0 -使用手工指定端口数量,是因为Paddle的网络通信中,使用了 ``int32`` 作为消息长度,比较容易在大模型下溢出。所以,在 ``pserver`` 进程中可以启动多个子线程去接受trainer的数据,这样单个子线程的长度就不会溢出了。但是这个值不可以调的过大,因为增加这个值,对性能尤其是内存占用有一定的开销,另外稀疏更新的端口如果太大的话,很容易导致某一个参数服务器没有分配到任何参数。 +使用手工指定端口数量,是因为Paddle的网络通信中,使用了 int32 作为消息长度,比较容易在大模型下溢出。所以,在 pserver 进程中可以启动多个子线程去接受 trainer 的数据,这样单个子线程的长度就不会溢出了。但是这个值不可以调的过大,因为增加这个值,对性能尤其是内存占用有一定的开销,另外稀疏更新的端口如果太大的话,很容易导致某一个参数服务器没有分配到任何参数。 详细的说明可以参考,使用 `集群训练Paddle`_ 。 .. _PyDataProvider: ../ui/data_provider/pydataprovider2.html -.. 
_settings: ../../doc/ui/api/trainer_config_helpers/optimizers.html#settings -.. _mixed_layer: ../../doc/ui/api/trainer_config_helpers/layers.html#mixed-layer -.. _masking-gpu: http://www.acceleware.com/blog/cudavisibledevices-masking-gpus +.. _settings: ../../doc/ui/api/trainer_config_helpers/optimizers.html#settings +.. _trainer_config_helper: ../../doc/ui/api/trainer_config_helpers/index.html +.. _data_layer: ../../doc/ui/api/trainer_config_helpers/layers.html#data-layer +.. _simple_img_conv_pool: ../../doc/ui/api/trainer_config_helpers/networks.html#simple-img-conv-pool +.. _fc_layer: ../../doc/ui/api/trainer_config_helpers/layers.html#fc-layer +.. _损失函数层: ../../doc/ui/api/trainer_config_helpers/layers.html#cost-layers +.. _评估器: ../../doc/ui/api/trainer_config_helpers/evaluators.html +.. _mixed_layer: ../../doc/ui/api/trainer_config_helpers/layers.html#mixed-layer +.. _masking-gpu: http://www.acceleware.com/blog/cudavisibledevices-masking-gpus + .. _集群训练Paddle: ../cluster/index.html diff --git a/doc_cn/faq/index.rst b/doc_cn/faq/index.rst index 3eb0e10ae2..8da21e5b8b 100644 --- a/doc_cn/faq/index.rst +++ b/doc_cn/faq/index.rst @@ -213,4 +213,21 @@ PaddlePaddle的参数使用名字 :code:`name` 作为参数的ID,相同名字 原因是:单元测试使用了一个旧版本的python包,而没有测试到代码中实际修改的python包。即单元测试需要一个干净的环境: * 如果paddle包已经在python的site-packages里面了,那么单元测试时使用的paddle包,就是site-packages里面的python包,而不是源码目录里 :code:`/python` 目录下的python包。 -* 即便设置了 :code:`PYTHONPATH` 到 :code:`/python` 也没用,因为python的搜索路径是优先已经安装的python包。 \ No newline at end of file +* 即便设置了 :code:`PYTHONPATH` 到 :code:`/python` 也没用,因为python的搜索路径是优先已经安装的python包。 + +9. 如何指定GPU设备 +----------------- + +例如机器上有4块GPU,编号从0开始,指定使用2、3号GPU: + +* 方式1:通过 ``CUDA_VISIBLE_DEVICES`` 环境变量来指定特定的GPU。 + +.. code-block:: bash + + env CUDA_VISIBLE_DEVICES=2,3 paddle train --use_gpu=true --trainer_count=2 + +* 方式2:通过命令行参数 ``--gpu_id`` 指定。 + +.. code-block:: bash + + paddle train --use_gpu=true --trainer_count=2 --gpu_id=2 From a61bf5a65f62b2ef2aec153dc7fd01d22543df87 Mon Sep 17 00:00:00 2001 From: liaogang Date: Tue, 22 Nov 2016 20:51:59 +0800 Subject: [PATCH 010/265] Refine quick start index.rst --- doc_cn/demo/quick_start/index.rst | 105 ++++++++++++------------------ 1 file changed, 42 insertions(+), 63 deletions(-) diff --git a/doc_cn/demo/quick_start/index.rst b/doc_cn/demo/quick_start/index.rst index 9dabf1f661..08c1c8413b 100644 --- a/doc_cn/demo/quick_start/index.rst +++ b/doc_cn/demo/quick_start/index.rst @@ -40,7 +40,7 @@ PaddlePaddle快速入门教程 ------------ 接下来我们将展示如何用PaddlePaddle训练一个文本分类模型,将 `Amazon电子产品评论数据 `_ 分为好评(正样本)和差评(负样本)两种类别。 -`源代码 `_ 的 ``demo/quick_start`` 目录里提供了该数据的下载脚本和预处理脚本。 +`源代码 `_ 的 ``demo/quick_start`` 目录里提供了该数据的下载脚本和预处理脚本,你只需要在命令行输入以下命令,就能够很方便的完成数据下载和相应的预处理工作。 .. 
code-block:: bash @@ -51,7 +51,7 @@ PaddlePaddle快速入门教程 向系统传送数据 ============== -Python数据读取脚本 +Python脚本读取数据 ------------------ `DataProvider <../../ui/data_provider/index.html>`_ 是PaddlePaddle负责提供数据的模块。``DataProvider`` 主要职责在于将训练数据传入内存或者显存,让模型能够得到训练更新,其包括两个函数: @@ -146,7 +146,7 @@ Python数据读取脚本 以下是对上述数据加载的解释: - data/train.list,data/test.list: 指定训练数据和测试数据 -- module="dataprovider_bow": 数据处理的Python脚本文件名 +- module="dataprovider_bow": 处理数据的Python脚本文件 - obj="process": 指定生成数据的函数 - args={"dictionary": word_dict}: 额外的参数,这里指定词典 @@ -162,8 +162,8 @@ Python数据读取脚本 :scale: 80% -我们将以基本的逻辑回归网络作为起点,并逐渐展示更加深入的功能。更详细的网络配置连接请参考 `Layer文档 <../../../doc/layer.html>`_ 。 -所有配置都在 `源代码 `_ 的 ``demo/quick_start`` 目录下。 +我们将以最基本的逻辑回归网络作为起点,并逐渐展示更加深入的功能。更详细的网络配置连接请参考 `Layer文档 <../../../doc/layer.html>`_ 。 +所有配置都能在 `源代码 `_ 的 ``demo/quick_start`` 目录下找到。 逻辑回归模型 ------------ @@ -174,7 +174,7 @@ Python数据读取脚本 :align: center :scale: 80% -- 获取利用one-hot vector表示的每个单词,维度是词典大小 +- 获取利用 `one-hot vector `_ 表示的每个单词,维度是词典大小 .. code-block:: python @@ -198,7 +198,7 @@ Python数据读取脚本 classification_cost(input=output, label=label) - - input: 除过data层,每个层都有一个或多个input,多个input以list方式输入 + - input: 除去data层,每个层都有一个或多个input,多个input以list方式输入 - size: 该层神经元个数 - act_type: 激活函数类型 @@ -213,7 +213,7 @@ Python数据读取脚本 词向量模型 ---------- -embedding模型需要稍微改变数据提供的脚本,即 ``dataprovider_emb.py``,词向量模型、 +embedding模型需要稍微改变提供数据的Python脚本,即 ``dataprovider_emb.py``,词向量模型、 卷积模型、时序模型均使用该脚本。其中文本输入类型定义为整数时序类型integer_value_sequence。 .. code-block:: python @@ -232,20 +232,19 @@ embedding模型需要稍微改变数据提供的脚本,即 ``dataprovider_emb. ... # omitted, it is same as the data provider for LR model -该模型依然是使用逻辑回归分类网络的框架, 只是将句子利用连续向量表示替换稀疏 -向量表示, 即对第3步进行替换。句子表示的计算更新为2步: +该模型依然使用逻辑回归分类网络的框架, 只是将句子用连续向量表示替换为用稀疏向量表示, 即对第三步进行替换。句子表示的计算更新为两步: .. image:: NetContinuous.jpg :align: center :scale: 80% -- 利用单词Id查找对应的该单词的连续表示向量(维度为word_dim), 输入N个单词,输出为N个word_dim维度向量 +- 利用单词Id查找该单词对应的连续向量(维度为word_dim), 输入N个单词,输出为N个word_dim维度向量 .. code-block:: python emb = embedding_layer(input=word, size=word_dim) -- 将该句话包含的所有单词向量求平均得到句子的表示 +- 将该句话包含的所有单词向量求平均, 得到句子的表示 .. code-block:: python @@ -264,20 +263,21 @@ embedding模型需要稍微改变数据提供的脚本,即 ``dataprovider_emb. 卷积模型 ----------- -卷积网络是一种特殊的从词向量表示到句子表示的方法, 也就是将词向量模型额步 -骤3-2进行进一步演化, 变为3个新的子步骤。 +卷积网络是一种特殊的从词向量表示到句子表示的方法, 也就是将词向量模型进一步演化为三个新步骤。 .. image:: NetConv.jpg :align: center :scale: 80% -文本卷积分为三个步骤: +文本卷积分可为三个步骤: -1. 获取每个单词左右各k个近邻, 拼接成一个新的向量表示; +1. 首先,从每个单词左右两端分别获取k个相邻的单词, 拼接成一个新的向量; -2. 对该表示进行非线性变换 (例如Sigmoid变换), 成为维度为hidden_dim的新的向量; +2. 其次,对该向量进行非线性变换(例如Sigmoid变换), 使其转变为维度为hidden_dim的新向量; -3. 在每个维度上取出在该句话新的向量集合上该维度的最大值作为最后的句子表示向量。 这3个子步骤可配置为: +3. 最后,对整个新向量集合的每一个维度取最大值来表示最后的句子。 + +这三个步骤可配置为: .. code-block:: python @@ -300,7 +300,7 @@ embedding模型需要稍微改变数据提供的脚本,即 ``dataprovider_emb. :align: center :scale: 80% -时序模型即为RNN模型, 包括简单的RNN模型、GRU模型、LSTM模型等。 +时序模型,也称为RNN模型, 包括简单的RNN模型, GRU模型和LSTM模型等等。 - GRU模型配置: @@ -315,7 +315,7 @@ embedding模型需要稍微改变数据提供的脚本,即 ``dataprovider_emb. lstm = simple_lstm(input=emb, size=lstm_size) -针对本问题,我们采用单层LSTM模型,并使用了Dropout,**效果总结:** +本次试验,我们采用单层LSTM模型,并使用了Dropout,**效果总结:** ===================== =============================== ========================= 网络名称 参数数量 错误率 @@ -326,8 +326,8 @@ embedding模型需要稍微改变数据提供的脚本,即 ``dataprovider_emb. 优化算法 ========= -`优化算法 <../../../doc/ui/trainer_config_helpers_api.html#module-paddle.trainer_config_helpers.optimizers>`_ 包括 -Momentum, RMSProp,AdaDelta,AdaGrad,ADAM,Adamax等,这里采用Adam优化方法,加了L2正则和梯度截断。 +`优化算法 `_ 包括 +Momentum, RMSProp,AdaDelta,AdaGrad,ADAM,Adamax等,这里采用Adam优化方法,同时使用了L2正则和梯度截断。 .. 
code-block:: python @@ -340,13 +340,19 @@ Momentum, RMSProp,AdaDelta,AdaGrad,ADAM,Adamax等,这里采用Adam优 训练模型 ========= -在完成了数据和网络结构搭建之后, 我们进入到训练部分。 +在数据加载和网络配置完成之后, 我们就可以训练模型了。 .. image:: PipelineTrain.jpg :align: center :scale: 80% -训练脚本:我们将训练的命令行保存在了 ``train.sh`` 文件中。训练时所需设置的主要参数如下: +训练模型,我们只需要运行 ``train.sh`` 训练脚本: + + .. code-block:: bash + + ./train.sh + +``train.sh``中包含了训练模型的基本命令。训练时所需设置的主要参数如下: .. code-block:: bash @@ -357,30 +363,19 @@ Momentum, RMSProp,AdaDelta,AdaGrad,ADAM,Adamax等,这里采用Adam优 --num_passes=15 \ --use_gpu=false -这里没有介绍多机分布式训练,可以参考 `分布式训练 <../../cluster/index.html>`_ 的demo学习如何进行多机训练。 +这里只简单介绍了单机训练,如何进行分布式训练,可以参考教程 `分布式训练 <../../cluster/index.html>`_ 。 预测 ===== -可以使用训练好的模型评估带有label的验证集,也可以预测没有label的测试集。 +当模型训练好了之后,我们就可以进行预测了。 .. image:: PipelineTest.jpg :align: center :scale: 80% -测试脚本如下,将会测试配置文件中test.list指定的数据。 - - .. code-block:: bash - - paddle train \ - --use_gpu=false \ - --job=test \ - --init_model_path=./output/pass-0000x - -可以参考 `Python API预测 <../../ui/predict/swig_py_paddle.html>`_ -教程,或其他 `demo <../../demo/index.html>`_ 的Python预测过程。也可以通过如下方式预测。 - -预测脚本(``predict.sh``): +之前配置文件中 ``test.list`` 指定的数据将会被测试,这里直接通过预测脚本 ``predict.sh`` 进行预测, +更详细的说明,可以参考 `Python API预测 <../../ui/predict/swig_py_paddle.html>`_ 教程。 .. code-block:: bash @@ -395,8 +390,7 @@ Momentum, RMSProp,AdaDelta,AdaGrad,ADAM,Adamax等,这里采用Adam优 mv rank-00000 result.txt -这里以 ``output/pass-00003`` 为例进行预测,用户可以根据训练log选择test结果最好的模型来预测。与训练网络配置不同的是:无需label相关的层,指定outputs输出概率层(softmax输出), -指定batch_size=1,数据传输无需label数据,预测数据指定test_list的位置。 +这里以 ``output/pass-00003`` 为例进行预测,用户可以根据训练日志,选择测试结果最好的模型来预测。 预测结果以文本的形式保存在 ``result.txt`` 中,一行为一个样本,格式如下: @@ -405,29 +399,14 @@ Momentum, RMSProp,AdaDelta,AdaGrad,ADAM,Adamax等,这里采用Adam优 预测ID;ID为0的概率 ID为1的概率 预测ID;ID为0的概率 ID为1的概率 - .. code-block:: python - - is_predict = get_config_arg('is_predict', bool, False) - trn = 'data/train.list' if not is_predict else None - tst = 'data/test.list' if not is_predict else 'data/pred.list' - obj = 'process' if not is_predict else 'process_pre' - batch_size = 128 if not is_predict else 1 - if is_predict: - maxid = maxid_layer(output) - outputs([maxid,output]) - else: - label = data_layer(name="label", size=2) - cls = classification_cost(input=output, label=label) - outputs(cls) - 总体效果总结 ============== -这些流程中的数据下载、网络配置、训练脚本在 ``/demo/quick_start`` 目录,我们在此总 -结上述网络结构在Amazon-Elec测试集(25k)上的效果: +在 ``/demo/quick_start`` 目录下,能够找到这里使用的所有数据, 网络配置, 训练脚本等等。 +对于Amazon-Elec测试集(25k), 如下表格,展示了上述网络模型的训练效果: ===================== =============================== ============= ================================== - 网络名称 参数数量 错误率 配置文件 + 网络名称 参数数量 错误率 配置文件 ===================== =============================== ============= ================================== 逻辑回归模型 252 KB 8.652% trainer_config.lr.py 词向量模型 15 MB 8.484% trainer_config.emb.py @@ -460,15 +439,15 @@ Momentum, RMSProp,AdaDelta,AdaGrad,ADAM,Adamax等,这里采用Adam优 TrainerInternal.cpp:160] Batch=20 samples=2560 AvgCost=0.628761 CurrentCost=0.628761 Eval: classification_error_evaluator=0.304297 CurrentEval: classification_error_evaluator=0.304297 -模型训练会看到这样的日志,详细的参数解释如下面表格: +模型训练会看到类似上面这样的日志信息,详细的参数解释,请参考如下表格: - =========================================== ========================================================== + =========================================== ============================================================== 名称 解释 - =========================================== ========================================================== + =========================================== 
============================================================== Batch=20 表示过了20个batch samples=2560 表示过了2560个样本 AvgCost 每个pass的第0个batch到当前batch所有样本的平均cost CurrentCost 当前log_period个batch所有样本的平均cost Eval: classification_error_evaluator 每个pass的第0个batch到当前batch所有样本的平均分类错误率 CurrentEval: classification_error_evaluator 当前log_period个batch所有样本的平均分类错误率 - =========================================== ========================================================== + =========================================== ============================================================== From f56643dd902168cf07c64339c3b7d8c74868a5da Mon Sep 17 00:00:00 2001 From: Yu Yang Date: Wed, 23 Nov 2016 15:49:06 +0800 Subject: [PATCH 011/265] Remove glossary --- .../rnn}/glossary_rnn.dot | 0 .../rnn}/glossary_rnn_with_memory.dot | 0 doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst | 124 ++++++++++++------ doc_cn/concepts/glossary.rst | 93 ------------- doc_cn/index.rst | 1 - 5 files changed, 86 insertions(+), 132 deletions(-) rename doc_cn/{concepts => algorithm/rnn}/glossary_rnn.dot (100%) rename doc_cn/{concepts => algorithm/rnn}/glossary_rnn_with_memory.dot (100%) delete mode 100644 doc_cn/concepts/glossary.rst diff --git a/doc_cn/concepts/glossary_rnn.dot b/doc_cn/algorithm/rnn/glossary_rnn.dot similarity index 100% rename from doc_cn/concepts/glossary_rnn.dot rename to doc_cn/algorithm/rnn/glossary_rnn.dot diff --git a/doc_cn/concepts/glossary_rnn_with_memory.dot b/doc_cn/algorithm/rnn/glossary_rnn_with_memory.dot similarity index 100% rename from doc_cn/concepts/glossary_rnn_with_memory.dot rename to doc_cn/algorithm/rnn/glossary_rnn_with_memory.dot diff --git a/doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst b/doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst index 09172c53f7..896dd7ada9 100644 --- a/doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst +++ b/doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst @@ -4,16 +4,16 @@ 单双层RNN API对比介绍 ##################### -这篇教程主要介绍了\ :ref:`glossary_双层RNN`\ 的API接口。本文中的以\ :ref:`glossary_paddle`\ 的\ :ref:`glossary_双层RNN`\ 单元测试为示例,用多对效果完全相同的、分别使用单、双层RNN作为网络配置的模型,来讲解如何使用\ :ref:`glossary_双层RNN`\ 。本文中所有的例子,都只是介绍\ :ref:`glossary_双层RNN`\ 的API接口,并不是使用\ :ref:`glossary_双层RNN`\ 解决实际的问题。如果想要了解\ :ref:`glossary_双层RNN`\ 在具体问题中的使用,请参考\ :ref:`algo_hrnn_demo`\ 。文章中示例所使用的单元测试文件是\ `test_RecurrentGradientMachine.cpp `_\ 。 +这篇教程主要介绍了\ :ref:`glossary_双层RNN`\ 的API接口。本文中的以PaddlePaddle的\ :ref:`glossary_双层RNN`\ 单元测试为示例,用多对效果完全相同的、分别使用单、双层RNN作为网络配置的模型,来讲解如何使用\ :ref:`glossary_双层RNN`\ 。本文中所有的例子,都只是介绍\ :ref:`glossary_双层RNN`\ 的API接口,并不是使用\ :ref:`glossary_双层RNN`\ 解决实际的问题。如果想要了解\ :ref:`glossary_双层RNN`\ 在具体问题中的使用,请参考\ :ref:`algo_hrnn_demo`\ 。文章中示例所使用的单元测试文件是\ `test_RecurrentGradientMachine.cpp `_\ 。 示例1:双层RNN,子序列间无Memory ================================ -在\ :ref:`glossary_双层RNN`\ 中的经典情况是将内层的每一个\ :ref:`glossary_Sequence`\ 数据,分别进行序列操作。并且内层的序列操作之间是独立没有依赖的,即不需要使用\ :ref:`glossary_Memory`\ 的。 +在\ :ref:`glossary_双层RNN`\ 中的经典情况是将内层的每一个\ :ref:`glossary_sequence`\ 数据,分别进行序列操作。并且内层的序列操作之间是独立没有依赖的,即不需要使用\ :ref:`glossary_Memory`\ 的。 -在本问题中,单层\ :ref:`glossary_RNN`\ 和\ :ref:`glossary_双层RNN`\ 的网络配置,都是将每一句分好词后的句子,使用\ :ref:`glossary_lstm`\ 作为\ :ref:`glossary_encoder`\ ,压缩成一个向量。区别是\ :ref:`glossary_RNN`\ 使用两层序列模型,将多句话看成一个整体,同时使用\ :ref:`glossary_encoder`\ 压缩,二者语意上完全一致。这组语意相同的示例配置如下 +在本问题中,单层\ :ref:`glossary_RNN`\ 和\ :ref:`glossary_双层RNN`\ 的网络配置,都是将每一句分好词后的句子,使用LSTM作为encoder,压缩成一个向量。区别是\ :ref:`glossary_RNN`\ 使用两层序列模型,将多句话看成一个整体,同时使用encoder压缩,二者语意上完全一致。这组语意相同的示例配置如下 -* 单层 \:ref:`glossary_RNN`\: `sequence_layer_group.conf `_ +* 单层\ :ref:`glossary_RNN`\: 
`sequence_layer_group.conf `_ * :ref:`glossary_双层RNN`\: `sequence_nest_layer_group.conf `_ @@ -22,13 +22,13 @@ 首先,本示例中使用的原始数据如下\: -- 本里中的原始数据一共有10个\ :ref:`glossary_sample`\ 。每个\ :ref:`glossary_sample`\ 由两部分组成,一个label(此处都为2)和一个已经分词后的句子。这个数据也被单层\ :ref:`glossary_RNN`\ 网络直接使用。 +- 本里中的原始数据一共有10个样本。每个样本由两部分组成,一个label(此处都为2)和一个已经分词后的句子。这个数据也被单层\ :ref:`glossary_RNN`\ 网络直接使用。 .. literalinclude:: ../../../paddle/gserver/tests/Sequence/tour_train_wdseg :language: text -- 双层序列数据一共有4个\ :ref:`glossary_sample`\ 。 每个样本间用空行分开,整体数据和原始数据完全一样。而对于双层序列的\ :ref:`glossary_lstm`\ 来说,第一条数据同时\ :ref:`glossary_encode` 两条数据成两个向量。这四条数据同时处理的句子为\ :code:`[2, 3, 2, 3]`\ 。 +- 双层序列数据一共有4个样本。 每个样本间用空行分开,整体数据和原始数据完全一样。而对于双层序列的LSTM来说,第一条数据同时encode两条数据成两个向量。这四条数据同时处理的句子为\ :code:`[2, 3, 2, 3]`\ 。 .. literalinclude:: ../../../paddle/gserver/tests/Sequence/tour_train_wdseg.nest :language: text @@ -40,10 +40,10 @@ :lines: 21-39 :linenos: -- 这是普通的单层\ :ref:`glossary_Sequence`\ 的\ :ref:`glossary_DataProvider`\ 代码,其说明如下: +- 这是普通的单层\ :ref:`glossary_sequence`\ 的\ :ref:`glossary_DataProvider`\ 代码,其说明如下: * :ref:`glossary_DataProvider`\ 共返回两个数据,分别是words和label。即上述代码中的第19行。 - - words是原始数据中的每一句话,所对应的词表index数组。它是integer_value_sequence类型的,即整数数组。words即为这个数据中的单层\ :ref:`glossary_Sequence`\ 。 + - words是原始数据中的每一句话,所对应的词表index数组。它是integer_value_sequence类型的,即整数数组。words即为这个数据中的单层\ :ref:`glossary_sequence`\ 。 - label是原始数据中对于每一句话的分类标签,它是integer_value类型的。 .. literalinclude:: ../../../paddle/gserver/tests/sequenceGen.py @@ -51,17 +51,17 @@ :lines: 42-71 :linenos: -- 这是对于同样的数据,本示例中双层\ :ref:`glossary_Sequence`\ 的\ :ref:`glossary_DataProvider`\ 代码,其说明如下: +- 这是对于同样的数据,本示例中双层\ :ref:`glossary_sequence`\ 的\ :ref:`glossary_DataProvider`\ 代码,其说明如下: - :ref:`glossary_DataProvider`\ 共返回两组数据,分别是sentences和labels。即在双层序列的原始数据中,每一组内的所有句子和labels - - sentences是双层\ :ref:`glossary_Sequence`\ 的数据。他内部包括了每组数据中的所有句子,又使用句子中每一个单词的词表index表示每一个句子,故为双层\ :ref:`glossary_Sequence`\ 。类型为 integer_value_sub_sequence 。 - - labels是每组内每一个句子的标签,故而是一个单层\ :ref:`glossary_Sequence`\ 。 + - sentences是双层\ :ref:`glossary_sequence`\ 的数据。他内部包括了每组数据中的所有句子,又使用句子中每一个单词的词表index表示每一个句子,故为双层\ :ref:`glossary_sequence`\ 。类型为 integer_value_sub_sequence 。 + - labels是每组内每一个句子的标签,故而是一个单层\ :ref:`glossary_sequence`\ 。 :ref:`glossary_trainer_config`\ 的模型配置 ------------------------------------------ -首先,我们看一下单层\ :ref:`glossary_RNN`\ 的配置。代码中9-15行即为单层RNN序列的使用代码。这里使用了\ :ref:`glossary_paddle`\ 预定义好的\ :ref:`glossary_RNN`\ 处理函数。在这个函数中,\ :ref:`glossary_RNN`\ 对于每一个\ :ref:`glossary_timestep`\ 通过了一个\ :ref:`glossary_lstm`\ 网络。 +首先,我们看一下单层\ :ref:`glossary_RNN`\ 的配置。代码中9-15行即为单层RNN序列的使用代码。这里使用了PaddlePaddle预定义好的\ :ref:`glossary_RNN`\ 处理函数。在这个函数中,\ :ref:`glossary_RNN`\ 对于每一个\ :ref:`glossary_timestep`\ 通过了一个LSTM网络。 .. 
literalinclude:: ../../../paddle/gserver/tests/sequence_layer_group.conf :language: python @@ -72,15 +72,15 @@ 其次,我们看一下语义相同的\ :ref:`glossary_双层RNN`\ 的网络配置。 -* :ref:`glossary_paddle`\ 中的许多layer并不在意输入是否是\ :ref:`glossary_Sequence`\ ,例如\ :code:`embedding_layer`\ 。在这些layer中,所有的操作都是针对每一个\ :ref:`glossary_timestep`\ 来进行的。 +* PaddlePaddle中的许多layer并不在意输入是否是\ :ref:`glossary_sequence`\ ,例如\ :code:`embedding_layer`\ 。在这些layer中,所有的操作都是针对每一个\ :ref:`glossary_timestep`\ 来进行的。 -* 在该配置中,7-26行将双层\ :ref:`glossary_Sequence`\ 数据,先变换成单层\ :ref:`glossary_Sequence`\ 数据,在对每一个单层\ :ref:`glossary_Sequence`\ 进行处理。 +* 在该配置中,7-26行将双层\ :ref:`glossary_sequence`\ 数据,先变换成单层\ :ref:`glossary_sequence`\ 数据,在对每一个单层\ :ref:`glossary_sequence`\ 进行处理。 - * 使用\ :code:`recurrent_group`\ 这个函数进行变换,在变换时需要将输入序列传入。由于我们想要的变换是双层\ :ref:`glossary_Sequence`\ => 单层\ :ref:`glossary_Sequence`\ ,所以我们需要将输入数据标记成\ :code:`SubsequenceInput`\ 。 + * 使用\ :code:`recurrent_group`\ 这个函数进行变换,在变换时需要将输入序列传入。由于我们想要的变换是双层\ :ref:`glossary_sequence`\ => 单层\ :ref:`glossary_sequence`\ ,所以我们需要将输入数据标记成\ :code:`SubsequenceInput`\ 。 - * 在本例中,我们将原始数据的每一组,通过\ :code:`recurrent_group`\ 进行拆解,拆解成的每一句话再通过一个\ :ref:`glossary_lstm`\ 网络。这和单层\ :ref:`glossary_RNN`\ 的配置是等价的。 + * 在本例中,我们将原始数据的每一组,通过\ :code:`recurrent_group`\ 进行拆解,拆解成的每一句话再通过一个LSTM网络。这和单层\ :ref:`glossary_RNN`\ 的配置是等价的。 -* 与单层\ :ref:`glossary_RNN`\ 的配置类似,我们只需要知道使用\ :ref:`glossary_lstm` :ref:`glossary_encode`\ 成的最后一个向量。所以对\ :code:`recurrent_group`\ 进行了\ :code:`last_seq`\ 操作。但是,和单层\ :ref:`glossary_RNN`\ 有区别的地方是,我们是对每一个子序列取最后一个元素。于是我们设置\ :code:`agg_level=AggregateLevel.EACH_SEQUENCE`\ 。 +* 与单层\ :ref:`glossary_RNN`\ 的配置类似,我们只需要知道使用LSTM encode成的最后一个向量。所以对\ :code:`recurrent_group`\ 进行了\ :code:`last_seq`\ 操作。但是,和单层\ :ref:`glossary_RNN`\ 有区别的地方是,我们是对每一个子序列取最后一个元素。于是我们设置\ :code:`agg_level=AggregateLevel.EACH_SEQUENCE`\ 。 * 至此,\ :code:`lstm_last`\ 便和单层\ :ref:`glossary_RNN`\ 的配置中的\ :code:`lstm_last`\ 具有相同的结果了。 @@ -93,43 +93,32 @@ 示例2::ref:`glossary_双层RNN`,子序列间有\ :ref:`glossary_Memory` ================================================================== -本示例中,意图使用单层\ :ref:`glossary_RNN`\ 和\ :ref:`glossary_双层RNN`\ 同时实现一个完全等价的全连接\ :ref:`glossary_RNN`\ 。对于单层\ :ref:`glossary_RNN`\ ,输入数据为一个完整的\ :ref:`glossary_Sequence`\ ,例如\ :code:`[4, 5, 2, 0, 9, 8, 1, 4]`\ 。而对于\ :ref:`glossary_双层RNN`\ ,输入数据为在单层\ :ref:`glossary_RNN`\ 数据里面,任意将一些数据组合成双层\ :ref:`glossary_Sequence`\ ,例如\ :code:`[ [4, 5, 2], [0, 9], [8, 1, 4]]`。 +本示例中,意图使用单层\ :ref:`glossary_RNN`\ 和\ :ref:`glossary_双层RNN`\ 同时实现一个完全等价的全连接\ :ref:`glossary_RNN`\ 。对于单层\ :ref:`glossary_RNN`\ ,输入数据为一个完整的\ :ref:`glossary_sequence`\ ,例如\ :code:`[4, 5, 2, 0, 9, 8, 1, 4]`\ 。而对于\ :ref:`glossary_双层RNN`\ ,输入数据为在单层\ :ref:`glossary_RNN`\ 数据里面,任意将一些数据组合成双层\ :ref:`glossary_sequence`\ ,例如\ :code:`[ [4, 5, 2], [0, 9], [8, 1, 4]]`。 :ref:`glossary_trainer_config`\ 的模型配置 ------------------------------------------ -本例配置了两个完全等价的全连接\ :ref:`glossary_RNN`\ 。对于单层序列模型的配置如下: +我们选取单双层序列配置中的不同部分,来对比分析两者语义相同的原因。 + +- 单层序列:过了一个很简单的recurrent_group。每一个时间步,当前的输入y和上一个时间步的输出rnn_state做了一个全链接。 .. literalinclude:: ../../../paddle/gserver/tests/sequence_rnn.conf :language: python :lines: 36-48 - :linenos: - -在该配置中,名称为\ :code:`rnn_state`\ 的全连接层暂存到了\ :ref:`glossary_Memory`\ 中。这个\ :ref:`glossary_Memory`\ 变量\ :code:`mem`\ 中可以保存到上一个\ :ref:`glossary_timestep`\ 中的全连接层的输出。从而实现一个全连接的\ :ref:`glossary_RNN`\ 。 -以数据\ :code:`[4, 5, 2, 0, 9, 8, 1, 4]`\ 举例,单层\ :ref:`glossary_RNN`\ 的网络图如下\: +- 双层序列,外层memory是一个元素: -.. 
graphviz:: simple_full_recurrent.dot - -而对于\ :ref:`glossary_双层RNN`\ 来说,等价的网络配置如下\: + - 内层inner_step的recurrent_group和单层序列的几乎一样。除了boot_layer=outer_mem,表示将外层的outer_mem作为内层memory的初始状态。外层outer_step中,outer_mem是一个子句的最后一个向量,即整个双层group是将前一个子句的最后一个向量,作为下一个子句memory的初始状态。 + - 从输入数据上看,单双层序列的句子是一样的,只是双层序列将其又做了子序列划分。因此双层序列的配置中,必须将前一个子句的最后一个元素,作为boot_layer传给下一个子句的memory,才能保证和单层序列的配置中“每一个时间步都用了上一个时间步的输出结果”一致。 .. literalinclude:: ../../../paddle/gserver/tests/sequence_nest_rnn.conf :language: python :lines: 39-66 - :linenos: - :emphasize-lines: 4-6 -- 在该配置中,外层的\ :code:`outer_mem`\ 和内层的\ :code:`inner_mem`\ 两个变量配合,实现了和单层\ :ref:`glossary_RNN`\ 等价的全连接\ :ref:`glossary_RNN`\ 。 +- 双层序列,外层memory是单层序列: - - 外层\ :code:`outer_step`\ 中的\ :code:`outer_mem`\ 会将神经网络中每个子序列的最后一个结果记录下来。即将第18行的\ :code:`last`\ 变量记录下来。 - - 内层\ :code:`inner_step`\ 中的\ :code:`inner_mem`\ 会将神经网络中子序列中的每一个元素的结果记录下来。即将第7行的\ :code:`out`\ 变量记录下来。 - - 内层的\ :code:`inner_mem`\ 初始值是\ :code:`outer_mem`(:code:`boot_layer`)。于是前一个子序列的最后结果,是新的子序列的初试结果。即完成了简单的全连接\ :code:`glossary_RNN`\ 。 - -本例中的\ :ref:`glossary_双层RNN`\ ,以数据\ :code:`[ [4, 5, 2], [0, 9], [8, 1, 4]]`\ 举例,配置图如下\: - -.. graphviz:: simple_full_hierarchical_recurrent.dot - -这里有一点注意事项,Paddle目前实现的\ :ref:`glossary_双层RNN`\ 不完全支持内层\ :ref:`glossary_RNN`\ 的\ :ref:`glossary_Memory`\ 引用外层\ :ref:`glossary_RNN`\ 的某一层序列输入。即\ :code:`inner_mem`的\ :code:`boot_layer`\ 需要是非序列类型的,或者可以是序列类型,但是每个时间步下,序列长度是一致的。从序列类型转换为非序列类型,可以使用\ :code:`pooling_layer`, :code:`last_seq`, :code:`first_seq`\ 等操作进行转换。 + - 由于外层每个时间步返回的是一个子句,这些子句的长度往往不等长。因此当外层有is_seq=True的memory时,内层是**无法直接使用**它的,即内层memory的boot_layer不能链接外层的这个memory。 + - 如果内层memory想**间接使用**这个外层memory,只能通过`pooling_layer`、`last_seq`或`first_seq`这三个layer将它先变成一个元素。但这种情况下,外层memory必须有boot_layer,否则在第0个时间步时,由于外层memory没有任何seq信息,因此上述三个layer的前向会报出“**Check failed: input.sequenceStartPositions**”的错误。 示例3:双进双出,输入不等长 =========================== @@ -190,3 +179,62 @@ data2 中有两个样本,每个样本有两个特征, 记fea1, fea2。 ======================== TBD + + +词汇表 +====== + +.. _glossary_memory: + +Memory +------ + +Memory是PaddlePaddle实现 :ref:`glossary_RNN` 时候使用的一个概念。 :ref:`glossary_RNN` 即时间递归神经网络,通常要求时间步之间具有一些依赖性,即当前时间步下的神经网络依赖前一个时间步神经网络中某一个神经元输出。如下图所示。 + +.. graphviz:: glossary_rnn.dot + +上图中虚线的连接,即是跨越时间步的网络连接。PaddlePaddle在实现 :ref:`glossary_RNN` 的时候,将这种跨越时间步的连接用一个特殊的神经网络单元实现。这个神经网络单元就叫Memory。Memory可以缓存上一个时刻某一个神经元的输出,然后在下一个时间步输入给另一个神经元。使用Memory的 :ref:`glossary_RNN` 实现便如下图所示。 + +.. graphviz:: glossary_rnn_with_memory.dot + +使用这种方式,PaddlePaddle可以比较简单的判断哪些输出是应该跨越时间步的,哪些不是。 + +.. _glossary_timestep: + +时间步 +------ + +参考 :ref:`glossary_sequence` 。 + + +.. _glossary_sequence: + +时间序列 +-------- + +时间序列(time series)是指一系列的特征数据。这些特征数据之间的顺序是有意义的。即特征的数组,而不是特征的集合。而这每一个数组元素,或者每一个系列里的特征数据,即为一个时间步(time step)。值得注意的是,时间序列、时间步的概念,并不真正的和『时间』有关。只要一系列特征数据中的『顺序』是有意义的,即为时间序列的输入。 + +举例说明,例如文本分类中,我们通常将一句话理解成一个时间序列。比如一句话中的每一个单词,会变成词表中的位置。而这一句话就可以表示成这些位置的数组。例如 :code:`[9, 2, 3, 5, 3]` 。 + +关于时间序列(time series)的更详细准确的定义,可以参考 `维基百科页面 Time series `_ 或者 `维基百科中文页面 时间序列 `_ 。 + +另外,Paddle中经常会将时间序列成为 :code:`Sequence` 。他们在Paddle的文档和API中是一个概念。 + +.. _glossary_RNN: + +RNN +--- + +RNN 在PaddlePaddle的文档中,一般表示 :code:`Recurrent neural network`,即时间递归神经网络。详细介绍可以参考 `维基百科页面 Recurrent neural network `_ 或者 `中文维基百科页面 `_ 中关于时间递归神经网络的介绍。 + +RNN 一般在PaddlePaddle中,指对于一个 :ref:`glossary_sequence` 输入数据,每一个时间步之间的神经网络具有一定的相关性。例如,某一个神经元的一个输入为上一个时间步网络中某一个神经元的输出。或者,从每一个时间步来看,神经网络的网络结构中具有有向环结构。 + +.. 
_glossary_双层RNN: + +双层RNN +------- + +双层RNN顾名思义,即 :ref:`glossary_RNN` 之间有一次嵌套关系。输入数据整体上是一个时间序列,而对于每一个内层特征数据而言,也是一个时间序列。即二维数组,或者数组的数组这个概念。 而双层RNN是可以处理这种输入数据的网络结构。 + +例如,对于段落的文本分类,即将一段话进行分类。我们将一段话看成句子的数组,每个句子又是单词的数组。这便是一种双层RNN的输入数据。而将这个段落的每一句话用lstm编码成一个向量,再对每一句话的编码向量用lstm编码成一个段落的向量。再对这个段落向量进行分类,即为这个双层RNN的网络结构。 + diff --git a/doc_cn/concepts/glossary.rst b/doc_cn/concepts/glossary.rst deleted file mode 100644 index 518712d1fe..0000000000 --- a/doc_cn/concepts/glossary.rst +++ /dev/null @@ -1,93 +0,0 @@ -.. _glossary: - -######################## -Paddle文档中使用的词汇表 -######################## - -.. _glossary_paddle: - -PaddlePaddle ------------- - -TBD - -.. _glossary_encode: - -encode ------- - -参考\ :ref:`glossary_encoder`\ 。 - -.. _glossary_encoder: - -encoder -------- - -TBD - -.. _glossary_sample: - -样本 ----- - -TBD Sample的概念 - -.. _glossary_lstm: - -LSTM ----- - -TBD - -.. _glossary_memory: - -Memory ------- - -Memory是 :ref:`glossary_paddle` 实现 :ref:`glossary_RNN` 时候使用的一个概念。 :ref:`glossary_RNN` 即时间递归神经网络,通常要求时间步之间具有一些依赖性,即当前时间步下的神经网络依赖前一个时间步神经网络中某一个神经元输出。如下图所示。 - -.. graphviz:: glossary_rnn.dot - -上图中虚线的连接,即是跨越时间步的网络连接。:ref:`glossary_paddle` 在实现 :ref:`glossary_RNN` 的时候,将这种跨越时间步的连接用一个特殊的神经网络单元实现。这个神经网络单元就叫Memory。Memory可以缓存上一个时刻某一个神经元的输出,然后在下一个时间步输入给另一个神经元。使用Memory的 :ref:`glossary_RNN` 实现便如下图所示。 - -.. graphviz:: glossary_rnn_with_memory.dot - -使用这种方式,:ref:`glossary_paddle` 可以比较简单的判断哪些输出是应该跨越时间步的,哪些不是。 - -.. _glossary_timestep: - -时间步 ------- - -参考 :ref:`_glossary_Sequence` 。 - -.. _glossary_Sequence: - -时间序列 --------- - -时间序列(time series)是指一系列的特征数据。这些特征数据之间的顺序是有意义的。即特征的数组,而不是特征的集合。而这每一个数组元素,或者每一个系列里的特征数据,即为一个时间步(time step)。值得注意的是,时间序列、时间步的概念,并不真正的和『时间』有关。只要一系列特征数据中的『顺序』是有意义的,即为时间序列的输入。 - -举例说明,例如文本分类中,我们通常将一句话理解成一个时间序列。比如一句话中的每一个单词,会变成词表中的位置。而这一句话就可以表示成这些位置的数组。例如 :code:`[9, 2, 3, 5, 3]` 。 - -关于时间序列(time series)的更详细准确的定义,可以参考 `维基百科页面 Time series `_ 或者 `维基百科中文页面 时间序列 `_ 。 - -另外,Paddle中经常会将时间序列成为 :code:`Sequence` 。他们在Paddle的文档和API中是一个概念。 - -.. _glossary_RNN: - -RNN ---- - -RNN 在 :ref:`glossary_paddle` 的文档中,一般表示 :code:`Recurrent neural network`,即时间递归神经网络。详细介绍可以参考 `维基百科页面 Recurrent neural network `_ 或者 `中文维基百科页面 `_ 中关于时间递归神经网络的介绍。 - -RNN 一般在 :ref:`glossary_paddle` 中,指对于一个 :ref:`glossary_Sequence` 输入数据,每一个时间步之间的神经网络具有一定的相关性。例如,某一个神经元的一个输入为上一个时间步网络中某一个神经元的输出。或者,从每一个时间步来看,神经网络的网络结构中具有有向环结构。 - -.. _glossary_双层RNN: - -双层RNN -------- - -双层RNN顾名思义,即 :ref:`glossary_RNN` 之间有一次嵌套关系。输入数据整体上是一个时间序列,而对于每一个内层特征数据而言,也是一个时间序列。即二维数组,或者数组的数组这个概念。 而双层RNN是可以处理这种输入数据的网络结构。 - -例如,对于段落的文本分类,即将一段话进行分类。我们将一段话看成句子的数组,每个句子又是单词的数组。这便是一种双层RNN的输入数据。而将这个段落的每一句话用lstm编码成一个向量,再对每一句话的编码向量用lstm编码成一个段落的向量。再对这个段落向量进行分类,即为这个双层RNN的网络结构。 diff --git a/doc_cn/index.rst b/doc_cn/index.rst index fef39aa527..68128a74f8 100644 --- a/doc_cn/index.rst +++ b/doc_cn/index.rst @@ -11,7 +11,6 @@ PaddlePaddle文档 * `使用示例 `_ * `模型配置 <../doc/ui/api/trainer_config_helpers/index.html>`_ * `集群训练 `_ -* :ref:`glossary` 开发指南 -------- From 0c981164902ea322f5573d645d3bd090b0ce5421 Mon Sep 17 00:00:00 2001 From: Yu Yang Date: Wed, 23 Nov 2016 17:38:20 +0800 Subject: [PATCH 012/265] Refining documentation --- doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst | 43 +++++-------------- 1 file changed, 11 insertions(+), 32 deletions(-) diff --git a/doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst b/doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst index 896dd7ada9..e1a847fc9c 100644 --- a/doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst +++ b/doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst @@ -115,13 +115,11 @@ :language: python :lines: 39-66 -- 双层序列,外层memory是单层序列: +.. 
warning:: + PaddlePaddle目前只支持在每一个时间步中,Memory的sequence长度一致的情况。 - - 由于外层每个时间步返回的是一个子句,这些子句的长度往往不等长。因此当外层有is_seq=True的memory时,内层是**无法直接使用**它的,即内层memory的boot_layer不能链接外层的这个memory。 - - 如果内层memory想**间接使用**这个外层memory,只能通过`pooling_layer`、`last_seq`或`first_seq`这三个layer将它先变成一个元素。但这种情况下,外层memory必须有boot_layer,否则在第0个时间步时,由于外层memory没有任何seq信息,因此上述三个layer的前向会报出“**Check failed: input.sequenceStartPositions**”的错误。 - -示例3:双进双出,输入不等长 -=========================== +示例3:双层RNN,输入不等长 +========================== .. role:: red @@ -129,41 +127,22 @@ -**输入不等长** 是指recurrent_group的多个输入在各时刻的长度可以不相等, 但需要指定一个和输出长度一致的input,用 :red:`targetInlink` 表示。参考配置:单层RNN(:code:`sequence_rnn_multi_unequalength_inputs.conf`),双层RNN(:code:`sequence_nest_rnn_multi_unequalength_inputs.conf`) - -读取双层序列的方法 ------------------- - -我们看一下单双层序列的数据组织形式和dataprovider(见 :code:`rnn_data_provider.py` ) - -.. literalinclude:: ../../../paddle/gserver/tests/rnn_data_provider.py - :language: python - :lines: 69-97 - -data2 中有两个样本,每个样本有两个特征, 记fea1, fea2。 +**输入不等长** 是指recurrent_group的多个输入序列,在每个\ :ref:`glossary_timestep`\ 的子序列长度可以不相等。但\ :ref:`glossary_双层RNN`\ 目前需要整体的输出,与某一个输入的序列信息是一致的。使用\ :red:`targetInlink`\ 可以指定和输出序列信息一致。 -- 单层序列:两个样本分别为[[1, 2, 4, 5, 2], [5, 4, 1, 3, 1]] 和 [[0, 2, 2, 5, 0, 1, 2], [1, 5, 4, 2, 3, 6, 1]] -- 双层序列:两个样本分别为 +本例参考配置分别为\ `单层不等长RNN `_\ 和\ `双层不等长RNN `_\ 。 - - **样本1**\:[[[1, 2], [4, 5, 2]], [[5, 4, 1], [3, 1]]]。fea1和fea2都分别有2个子句,fea1=[[1, 2], [4, 5, 2]], fea2=[[5, 4, 1], [3, 1]] - - **样本2**\:[[[0, 2], [2, 5], [0, 1, 2]],[[1, 5], [4], [2, 3, 6, 1]]]。fea1和fea2都分别有3个子句, fea1=[[0, 2], [2, 5], [0, 1, 2]], fea2=[[1, 5], [4], [2, 3, 6, 1]]。
- - **注意**\:每个样本中,各特征的子句数目需要相等。这里说的“双进双出,输入不等长”是指fea1在i时刻的输入的长度可以不等于fea2在i时刻的输入的长度。如对于第1个样本,时刻i=2, fea1[2]=[4, 5, 2],fea2[2]=[3, 1],3≠2。 +本例中对于单层\ :ref:`glossary_RNN`\ 和\ :ref:`glossary_双层RNN`\ 数据完全相同,对于单层\ :ref:`glossary_RNN`\ 的数据一共有两个样本,他们分别是\ :code:`[1, 2, 4, 5, 2], [5, 4, 1, 3, 1]`\ 和\ :code:`[0, 2, 2, 5, 0, 1, 2], [1, 5, 4, 2, 3, 6, 1]`\ 。对于每一个单层\ :ref:`glossary_RNN`\ 的数据,均有两组特征。在单层数据的基础上,\ :ref:`glossary_双层RNN`\ 数据随意加了一些隔断,例如将第一条数据转化为\ :code:`[[0, 2], [2, 5], [0, 1, 2]],[[1, 5], [4], [2, 3, 6, 1]]`\ 。但是需要注意的是Paddle目前只支持序列数目一样的多输入\ :ref:`glossary_双层RNN`\ 。即两个特征,均有三个子序列。每个子序列长度可以不一致,但是子序列的数目必须一样。 -- 单双层序列中,两个样本的label都分别是0和1 -模型中的配置 ------------- - -单层RNN( :code:`sequence_rnn_multi_unequalength_inputs.conf`)和双层RNN( :code:`v.conf`)两个模型配置达到的效果完全一样,区别只在于输入为单层还是双层序列,现在我们来看它们内部分别是如何实现的。 - -- 单层序列\: +:ref:`glossary_trainer_config`\ 的模型配置 +------------------------------------------ - - 过了一个简单的recurrent_group。每一个时间步,当前的输入y和上一个时间步的输出rnn_state做了一个全连接,功能与示例2中`sequence_rnn.conf`的`step`函数完全相同。这里,两个输入x1,x2分别通过calrnn返回最后时刻的状态。结果得到的encoder1_rep和encoder2_rep分别是单层序列,最后取encoder1_rep的最后一个时刻和encoder2_rep的所有时刻分别相加得到context。 - - 注意到这里recurrent_group输入的每个样本中,fea1和fea2的长度都分别相等,这并非偶然,而是因为recurrent_group要求输入为单层序列时,所有输入的长度都必须相等。 +本例中的配置,使用了单层\ :ref:`glossary_RNN`\ 和\ :ref:`glossary_双层RNN`\ 使用一个\ :code:`recurrent_group`\ 将两个序列同时过完全连接的\ :ref:`glossary_RNN`\ 。对于单层\ :ref:`glossary_RNN`\ 的code如下。 .. literalinclude:: ../../../paddle/gserver/tests/sequence_rnn_multi_unequalength_inputs.conf :language: python :lines: 41-58 + :linenos: - 双层序列\: From da7c0f1326677ec7c22b6cdbd6595e2f4a1d59a2 Mon Sep 17 00:00:00 2001 From: Yu Yang Date: Wed, 23 Nov 2016 17:41:20 +0800 Subject: [PATCH 013/265] Format sequence_nest_rnn_multi_unequalength*.conf --- ...ce_nest_rnn_multi_unequalength_inputs.conf | 106 ----------------- ...ence_nest_rnn_multi_unequalength_inputs.py | 107 ++++++++++++++++++ ...sequence_rnn_multi_unequalength_inputs.py} | 68 +++++------ .../tests/test_RecurrentGradientMachine.cpp | 19 ++-- 4 files changed, 151 insertions(+), 149 deletions(-) delete mode 100644 paddle/gserver/tests/sequence_nest_rnn_multi_unequalength_inputs.conf create mode 100644 paddle/gserver/tests/sequence_nest_rnn_multi_unequalength_inputs.py rename paddle/gserver/tests/{sequence_rnn_multi_unequalength_inputs.conf => sequence_rnn_multi_unequalength_inputs.py} (52%) diff --git a/paddle/gserver/tests/sequence_nest_rnn_multi_unequalength_inputs.conf b/paddle/gserver/tests/sequence_nest_rnn_multi_unequalength_inputs.conf deleted file mode 100644 index d0b9450f4b..0000000000 --- a/paddle/gserver/tests/sequence_nest_rnn_multi_unequalength_inputs.conf +++ /dev/null @@ -1,106 +0,0 @@ -#edit-mode: -*- python -*- -# Copyright (c) 2016 Baidu, Inc. All Rights Reserved -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -from paddle.trainer_config_helpers import * - -######################## data source ################################ -define_py_data_sources2(train_list='gserver/tests/Sequence/dummy.list', - test_list=None, - module='rnn_data_provider', - obj='process_unequalength_subseq') - - -settings(batch_size=2, learning_rate=0.01) -######################## network configure ################################ -dict_dim = 10 -word_dim = 8 -hidden_dim = 8 -label_dim = 2 - -speaker1 = data_layer(name="word1", size=dict_dim) -speaker2 = data_layer(name="word2", size=dict_dim) - -emb1 = embedding_layer(input=speaker1, size=word_dim) -emb2 = embedding_layer(input=speaker2, size=word_dim) - -# This hierachical RNN is designed to be equivalent to the simple RNN in -# sequence_rnn_multi_unequalength_inputs.conf - -def outer_step(x1, x2): - outer_mem1 = memory(name = "outer_rnn_state1", size = hidden_dim) - outer_mem2 = memory(name = "outer_rnn_state2", size = hidden_dim) - def inner_step1(y): - inner_mem = memory(name = 'inner_rnn_state_' + y.name, - size = hidden_dim, - boot_layer = outer_mem1) - out = fc_layer(input = [y, inner_mem], - size = hidden_dim, - act = TanhActivation(), - bias_attr = True, - name = 'inner_rnn_state_' + y.name) - return out - - def inner_step2(y): - inner_mem = memory(name = 'inner_rnn_state_' + y.name, - size = hidden_dim, - boot_layer = outer_mem2) - out = fc_layer(input = [y, inner_mem], - size = hidden_dim, - act = TanhActivation(), - bias_attr = True, - name = 'inner_rnn_state_' + y.name) - return out - - encoder1 = recurrent_group( - step = inner_step1, - name = 'inner1', - input = x1) - - encoder2 = recurrent_group( - step = inner_step2, - name = 'inner2', - input = x2) - - sentence_last_state1 = last_seq(input = encoder1, name = 'outer_rnn_state1') - sentence_last_state2_ = last_seq(input = encoder2, name = 'outer_rnn_state2') - - encoder1_expand = expand_layer(input = sentence_last_state1, - expand_as = encoder2) - - return [encoder1_expand, encoder2] - - -encoder1_rep, encoder2_rep = recurrent_group( - name="outer", - step=outer_step, - input=[SubsequenceInput(emb1), SubsequenceInput(emb2)], - targetInlink=emb2) - -encoder1_last = last_seq(input = encoder1_rep) -encoder1_expandlast = expand_layer(input = encoder1_last, - expand_as = encoder2_rep) -context = mixed_layer(input = [identity_projection(encoder1_expandlast), - identity_projection(encoder2_rep)], - size = hidden_dim) - -rep = last_seq(input=context) -prob = fc_layer(size=label_dim, - input=rep, - act=SoftmaxActivation(), - bias_attr=True) - -outputs(classification_cost(input=prob, - label=data_layer(name="label", size=label_dim))) - diff --git a/paddle/gserver/tests/sequence_nest_rnn_multi_unequalength_inputs.py b/paddle/gserver/tests/sequence_nest_rnn_multi_unequalength_inputs.py new file mode 100644 index 0000000000..1b709a39c4 --- /dev/null +++ b/paddle/gserver/tests/sequence_nest_rnn_multi_unequalength_inputs.py @@ -0,0 +1,107 @@ +#edit-mode: -*- python -*- +# Copyright (c) 2016 Baidu, Inc. All Rights Reserved +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. + +from paddle.trainer_config_helpers import * + +######################## data source ################################ +define_py_data_sources2( + train_list='gserver/tests/Sequence/dummy.list', + test_list=None, + module='rnn_data_provider', + obj='process_unequalength_subseq') + +settings(batch_size=2, learning_rate=0.01) +######################## network configure ################################ +dict_dim = 10 +word_dim = 8 +hidden_dim = 8 +label_dim = 2 + +speaker1 = data_layer(name="word1", size=dict_dim) +speaker2 = data_layer(name="word2", size=dict_dim) + +emb1 = embedding_layer(input=speaker1, size=word_dim) +emb2 = embedding_layer(input=speaker2, size=word_dim) + +# This hierachical RNN is designed to be equivalent to the simple RNN in +# sequence_rnn_multi_unequalength_inputs.conf + + +def outer_step(x1, x2): + outer_mem1 = memory(name="outer_rnn_state1", size=hidden_dim) + outer_mem2 = memory(name="outer_rnn_state2", size=hidden_dim) + + def inner_step1(y): + inner_mem = memory( + name='inner_rnn_state_' + y.name, + size=hidden_dim, + boot_layer=outer_mem1) + out = fc_layer( + input=[y, inner_mem], + size=hidden_dim, + act=TanhActivation(), + bias_attr=True, + name='inner_rnn_state_' + y.name) + return out + + def inner_step2(y): + inner_mem = memory( + name='inner_rnn_state_' + y.name, + size=hidden_dim, + boot_layer=outer_mem2) + out = fc_layer( + input=[y, inner_mem], + size=hidden_dim, + act=TanhActivation(), + bias_attr=True, + name='inner_rnn_state_' + y.name) + return out + + encoder1 = recurrent_group(step=inner_step1, name='inner1', input=x1) + + encoder2 = recurrent_group(step=inner_step2, name='inner2', input=x2) + + sentence_last_state1 = last_seq(input=encoder1, name='outer_rnn_state1') + sentence_last_state2_ = last_seq(input=encoder2, name='outer_rnn_state2') + + encoder1_expand = expand_layer( + input=sentence_last_state1, expand_as=encoder2) + + return [encoder1_expand, encoder2] + + +encoder1_rep, encoder2_rep = recurrent_group( + name="outer", + step=outer_step, + input=[SubsequenceInput(emb1), SubsequenceInput(emb2)], + targetInlink=emb2) + +encoder1_last = last_seq(input=encoder1_rep) +encoder1_expandlast = expand_layer(input=encoder1_last, expand_as=encoder2_rep) +context = mixed_layer( + input=[ + identity_projection(encoder1_expandlast), + identity_projection(encoder2_rep) + ], + size=hidden_dim) + +rep = last_seq(input=context) +prob = fc_layer( + size=label_dim, input=rep, act=SoftmaxActivation(), bias_attr=True) + +outputs( + classification_cost( + input=prob, label=data_layer( + name="label", size=label_dim))) diff --git a/paddle/gserver/tests/sequence_rnn_multi_unequalength_inputs.conf b/paddle/gserver/tests/sequence_rnn_multi_unequalength_inputs.py similarity index 52% rename from paddle/gserver/tests/sequence_rnn_multi_unequalength_inputs.conf rename to paddle/gserver/tests/sequence_rnn_multi_unequalength_inputs.py index 28b1cb98cf..4cf7035477 100644 --- a/paddle/gserver/tests/sequence_rnn_multi_unequalength_inputs.conf +++ b/paddle/gserver/tests/sequence_rnn_multi_unequalength_inputs.py @@ -16,11 +16,11 @@ from paddle.trainer_config_helpers import * ######################## data source ################################ -define_py_data_sources2(train_list='gserver/tests/Sequence/dummy.list', - test_list=None, - module='rnn_data_provider', - obj='process_unequalength_seq') - +define_py_data_sources2( + 
train_list='gserver/tests/Sequence/dummy.list', + test_list=None, + module='rnn_data_provider', + obj='process_unequalength_seq') settings(batch_size=2, learning_rate=0.01) ######################## network configure ################################ @@ -38,38 +38,40 @@ emb2 = embedding_layer(input=speaker2, size=word_dim) # This hierachical RNN is designed to be equivalent to the RNN in # sequence_nest_rnn_multi_unequalength_inputs.conf + def step(x1, x2): - def calrnn(y): - mem = memory(name = 'rnn_state_' + y.name, size = hidden_dim) - out = fc_layer(input = [y, mem], - size = hidden_dim, - act = TanhActivation(), - bias_attr = True, - name = 'rnn_state_' + y.name) - return out - - encoder1 = calrnn(x1) - encoder2 = calrnn(x2) - return [encoder1, encoder2] + def calrnn(y): + mem = memory(name='rnn_state_' + y.name, size=hidden_dim) + out = fc_layer( + input=[y, mem], + size=hidden_dim, + act=TanhActivation(), + bias_attr=True, + name='rnn_state_' + y.name) + return out + + encoder1 = calrnn(x1) + encoder2 = calrnn(x2) + return [encoder1, encoder2] + encoder1_rep, encoder2_rep = recurrent_group( - name="stepout", - step=step, - input=[emb1, emb2]) + name="stepout", step=step, input=[emb1, emb2]) -encoder1_last = last_seq(input = encoder1_rep) -encoder1_expandlast = expand_layer(input = encoder1_last, - expand_as = encoder2_rep) -context = mixed_layer(input = [identity_projection(encoder1_expandlast), - identity_projection(encoder2_rep)], - size = hidden_dim) +encoder1_last = last_seq(input=encoder1_rep) +encoder1_expandlast = expand_layer(input=encoder1_last, expand_as=encoder2_rep) +context = mixed_layer( + input=[ + identity_projection(encoder1_expandlast), + identity_projection(encoder2_rep) + ], + size=hidden_dim) rep = last_seq(input=context) -prob = fc_layer(size=label_dim, - input=rep, - act=SoftmaxActivation(), - bias_attr=True) - -outputs(classification_cost(input=prob, - label=data_layer(name="label", size=label_dim))) +prob = fc_layer( + size=label_dim, input=rep, act=SoftmaxActivation(), bias_attr=True) +outputs( + classification_cost( + input=prob, label=data_layer( + name="label", size=label_dim))) diff --git a/paddle/gserver/tests/test_RecurrentGradientMachine.cpp b/paddle/gserver/tests/test_RecurrentGradientMachine.cpp index 80d713dac0..9d86067fb5 100644 --- a/paddle/gserver/tests/test_RecurrentGradientMachine.cpp +++ b/paddle/gserver/tests/test_RecurrentGradientMachine.cpp @@ -13,12 +13,12 @@ See the License for the specific language governing permissions and limitations under the License. 
*/ #include -#include -#include -#include +#include #include #include -#include +#include +#include +#include P_DECLARE_int32(seed); @@ -45,10 +45,9 @@ public: auto p = const_cast(this); auto& params = p->getGradientMachine()->getParameters(); return std::accumulate( - params.begin(), - params.end(), - 0UL, - [](size_t a, const ParameterPtr& p) { return a + p->getSize(); }); + params.begin(), params.end(), 0UL, [](size_t a, const ParameterPtr& p) { + return a + p->getSize(); + }); } }; @@ -148,8 +147,8 @@ TEST(RecurrentGradientMachine, rnn_multi_input) { TEST(RecurrentGradientMachine, rnn_multi_unequalength_input) { for (bool useGpu : {false, true}) { - test("gserver/tests/sequence_rnn_multi_unequalength_inputs.conf", - "gserver/tests/sequence_nest_rnn_multi_unequalength_inputs.conf", + test("gserver/tests/sequence_rnn_multi_unequalength_inputs.py", + "gserver/tests/sequence_nest_rnn_multi_unequalength_inputs.py", 1e-6, useGpu); } From 6aece5060b600f9c86580c5958b6039973a424fa Mon Sep 17 00:00:00 2001 From: Yu Yang Date: Wed, 23 Nov 2016 18:06:54 +0800 Subject: [PATCH 014/265] Stash --- doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst | 11 +-- ...ence_nest_rnn_multi_unequalength_inputs.py | 69 ++++++++----------- 2 files changed, 36 insertions(+), 44 deletions(-) diff --git a/doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst b/doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst index e1a847fc9c..4d29507ca3 100644 --- a/doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst +++ b/doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst @@ -139,20 +139,21 @@ 本例中的配置,使用了单层\ :ref:`glossary_RNN`\ 和\ :ref:`glossary_双层RNN`\ 使用一个\ :code:`recurrent_group`\ 将两个序列同时过完全连接的\ :ref:`glossary_RNN`\ 。对于单层\ :ref:`glossary_RNN`\ 的code如下。 -.. literalinclude:: ../../../paddle/gserver/tests/sequence_rnn_multi_unequalength_inputs.conf +.. literalinclude:: ../../../paddle/gserver/tests/sequence_rnn_multi_unequalength_inputs.py :language: python - :lines: 41-58 + :lines: 42-59 :linenos: - 双层序列\: - 双层RNN中,对输入的两个特征分别求时序上的连续全连接(`inner_step1`和`inner_step2`分别处理fea1和fea2),其功能与示例2中`sequence_nest_rnn.conf`的`outer_step`函数完全相同。不同之处是,此时输入`[SubsequenceInput(emb1), SubsequenceInput(emb2)]`在各时刻并不等长。 - - 函数`outer_step`中可以分别处理这两个特征,但我们需要用targetInlink指定recurrent_group的输出的格式(各子句长度)只能和其中一个保持一致,如这里选择了和emb2的长度一致。 + - 函数`outer_step`中可以分别处理这两个特征,但我们需要用\ :red:`targetInlink`\ 指定recurrent_group的输出的格式(各子句长度)只能和其中一个保持一致,如这里选择了和emb2的长度一致。 - 最后,依然是取encoder1_rep的最后一个时刻和encoder2_rep的所有时刻分别相加得到context。 -.. literalinclude:: ../../../paddle/gserver/tests/sequence_nest_rnn_multi_unequalength_inputs.conf +.. literalinclude:: ../../../paddle/gserver/tests/sequence_nest_rnn_multi_unequalength_inputs.py :language: python - :lines: 41-89 + :lines: 42-75, 82-89 + :linenos: 示例4:beam_search的生成 ======================== diff --git a/paddle/gserver/tests/sequence_nest_rnn_multi_unequalength_inputs.py b/paddle/gserver/tests/sequence_nest_rnn_multi_unequalength_inputs.py index 1b709a39c4..bf88d00f2d 100644 --- a/paddle/gserver/tests/sequence_nest_rnn_multi_unequalength_inputs.py +++ b/paddle/gserver/tests/sequence_nest_rnn_multi_unequalength_inputs.py @@ -1,4 +1,4 @@ -#edit-mode: -*- python -*- +# edit-mode: -*- python -*- # Copyright (c) 2016 Baidu, Inc. 
All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); @@ -35,46 +35,37 @@ speaker2 = data_layer(name="word2", size=dict_dim) emb1 = embedding_layer(input=speaker1, size=word_dim) emb2 = embedding_layer(input=speaker2, size=word_dim) -# This hierachical RNN is designed to be equivalent to the simple RNN in -# sequence_rnn_multi_unequalength_inputs.conf - +# This hierarchical RNN is designed to be equivalent to the simple RNN in +# sequence_rnn_multi_unequalength_inputs.conf def outer_step(x1, x2): - outer_mem1 = memory(name="outer_rnn_state1", size=hidden_dim) - outer_mem2 = memory(name="outer_rnn_state2", size=hidden_dim) - - def inner_step1(y): - inner_mem = memory( - name='inner_rnn_state_' + y.name, - size=hidden_dim, - boot_layer=outer_mem1) - out = fc_layer( - input=[y, inner_mem], - size=hidden_dim, - act=TanhActivation(), - bias_attr=True, - name='inner_rnn_state_' + y.name) - return out - - def inner_step2(y): - inner_mem = memory( - name='inner_rnn_state_' + y.name, - size=hidden_dim, - boot_layer=outer_mem2) - out = fc_layer( - input=[y, inner_mem], - size=hidden_dim, - act=TanhActivation(), - bias_attr=True, - name='inner_rnn_state_' + y.name) - return out - - encoder1 = recurrent_group(step=inner_step1, name='inner1', input=x1) - - encoder2 = recurrent_group(step=inner_step2, name='inner2', input=x2) - - sentence_last_state1 = last_seq(input=encoder1, name='outer_rnn_state1') - sentence_last_state2_ = last_seq(input=encoder2, name='outer_rnn_state2') + index = [0] + + def inner_step(ipt): + index[0] += 1 + i = index[0] + outer_mem = memory(name="outer_rnn_state_%d" % i, size=hidden_dim) + + def inner_step_impl(y): + inner_mem = memory( + name="inner_rnn_state_" + y.name, + size=hidden_dim, + boot_layer=outer_mem) + out = fc_layer( + input=[y, inner_mem], + size=hidden_dim, + act=TanhActivation(), + bias_attr=True, + name='inner_rnn_state_' + y.name) + return out + + encoder = recurrent_group( + step=inner_step_impl, name='inner_%d' % i, input=ipt) + last = last_seq(name="outer_rnn_state_%d" % i, input=encoder) + return encoder, last + + _, sentence_last_state1 = inner_step(ipt=x1) + encoder2, _ = inner_step(ipt=x2) encoder1_expand = expand_layer( input=sentence_last_state1, expand_as=encoder2) From 24cfc5ab3cb5ee50ff0efe7c5baf5f7d6436ba7a Mon Sep 17 00:00:00 2001 From: dangqingqing Date: Wed, 23 Nov 2016 21:35:27 +0800 Subject: [PATCH 015/265] Minor changes for use_concepts.rst. --- doc_cn/concepts/use_concepts.rst | 15 +++++++-------- 1 file changed, 7 insertions(+), 8 deletions(-) diff --git a/doc_cn/concepts/use_concepts.rst b/doc_cn/concepts/use_concepts.rst index c86429f323..13f6410b98 100644 --- a/doc_cn/concepts/use_concepts.rst +++ b/doc_cn/concepts/use_concepts.rst @@ -15,7 +15,7 @@ PaddlePaddle是一个深度学习框架,支持单机模式和多机模式。 系统框图 ======== -下图描述了用户使用框图,PaddlePaddle里链接了Python解释器,trainer进程可以利用这个解释器执行Python脚本,Python脚本里定义了模型配置、训练算法、以及数据读取函数。其中,数据读取程序往往定义在一个单独Python脚本文件里,被称为DataProvider,通常是一个Python函数。模型配置、训练算法通常定义在另一单独Python文件中。下面将分别介绍这两部分。 +下图描述了用户使用框图,PaddlePaddle的trainer进程里内嵌了Python解释器,trainer进程可以利用这个解释器执行Python脚本,Python脚本里定义了模型配置、训练算法、以及数据读取函数。其中,数据读取程序往往定义在一个单独Python脚本文件里,被称为DataProvider,通常是一个Python函数。模型配置、训练算法通常定义在另一单独Python文件中。下面将分别介绍这两部分。 .. 
graphviz:: @@ -37,17 +37,15 @@ PaddlePaddle是一个深度学习框架,支持单机模式和多机模式。 DataProvider ============ -在不同的应用里,训练数据的格式往往各不相同。因此,为了用户能够灵活的处理数据,我们提供了Python处理数据的接口,称为 `PyDataProvider`_ 。 +DataProvider是PaddlePaddle系统的数据提供器,trainer进程会调用DataProvider函数,将用户的原始数据转换成系统可以识别的数据类型。当所有数据读取完一轮后,DataProvider返回空数据,通知系统一轮数据读取结束,系统每一轮训练开始时会重置DataProvider。需要注意的是,DataProvider是被系统调用,而不是新数据驱动系统,一些随机化噪声添加都应该在DataProvider中完成。 -trainer进程会调用DataProvider函数,将用户的原始数据转换成系统可以识别的数据类型。当所有数据读取完一轮后,DataProvider返回空数据,通知系统一轮数据读取结束,系统每一轮训练开始时会重置DataProvider。需要注意的是,DataProvider是被系统调用,而不是新数据驱动系统,一些随机化噪声添加都应该在DataProvider中完成。 - -在 ``PyDataProvider`` 中,系统C++模块接管了shuffle、处理batch、GPU和CPU通信、双缓冲、异步读取等问题,一些情况下(如:``min_pool_size=0``)需要Python接口里处理shuffle,可以参考 `PyDataProvider`_ 的相关文档继续深入了解。 +在不同的应用里,训练数据的格式往往各不相同。因此,为了用户能够灵活的处理数据,我们提供了Python处理数据的接口,称为 `PyDataProvider`_ 。在 ``PyDataProvider`` 中,系统C++模块接管了shuffle、处理batch、GPU和CPU通信、双缓冲、异步读取等问题,一些情况下(如:``min_pool_size=0``)需要Python接口里处理shuffle,可以参考 `PyDataProvider`_ 的相关文档继续深入了解。 模型配置文件 ============ -模型配置主要包括数据传入接口定义(DataConfig)、优化算法(OptimizationConfig)、网络结构(ModelConfig)。 其中数据传入接口定义与DataProvider的关系是:DataProvider里定义数据读取函数,配置文件的DataConfig里指定DataProvider文件名字、生成数据函数接口,请不要混淆。 +模型配置文件主要包括数据传入接口定义(DataConfig)、优化算法(OptimizationConfig)、网络结构(ModelConfig)。 其中数据传入接口定义与DataProvider的关系是:DataProvider里定义数据读取函数,配置文件的DataConfig里指定DataProvider文件名字、生成数据函数接口,请不要混淆。 一个简单的模型配置文件为: @@ -61,7 +59,7 @@ trainer进程会调用DataProvider函数,将用户的原始数据转换成系 DataConfig ---------- -使用函数 ``define_py_data_sources2`` 配置数据源,后缀 2 是Paddle历史遗留问题,因为Paddle之前使用的PyDataProvider性能问题,重构了一个新的 `PyDataProvider`_ 。 +使用 `PyDataProvider`_ 的函数 ``define_py_data_sources2`` 配置数据源,后缀 2 是Paddle历史遗留问题,因为Paddle之前使用的PyDataProvider性能问题,重构了一个新的 `PyDataProvider`_ 。 ``define_py_data_sources2`` 里通过train_list和test_list指定是训练文件列表和测试文件列表。 如果传入字符串的话,是指一个数据列表文件。这个数据列表文件中包含的是每一个训练或者测试文件的路径。如果传入一个list的话,则会默认生成一个list文件,再传入给train.list或者test.list。 @@ -70,7 +68,7 @@ DataConfig OptimizationConfig ------------------ -通过`settings`_ 接口设置神经网络所使用的训练参数和优化算法,包括学习率、batch_size、优化算法、正则方法等,具体的使用方法请参考 `settings`_ 文档。 +通过`settings`_ 接口设置神经网络所使用的训练参数和 `优化算法`_ ,包括学习率、batch_size、优化算法、正则方法等,具体的使用方法请参考 `settings`_ 文档。 ModelConfig ----------- @@ -150,6 +148,7 @@ PaddlePaddle多机采用经典的 Parameter Server 架构对多个节点的 trai .. _PyDataProvider: ../ui/data_provider/pydataprovider2.html .. _settings: ../../doc/ui/api/trainer_config_helpers/optimizers.html#settings +.. _优化算法: ../../doc/ui/api/trainer_config_helpers/optimizers.html#optimizers .. _trainer_config_helper: ../../doc/ui/api/trainer_config_helpers/index.html .. _data_layer: ../../doc/ui/api/trainer_config_helpers/layers.html#data-layer .. _simple_img_conv_pool: ../../doc/ui/api/trainer_config_helpers/networks.html#simple-img-conv-pool From 634576128ce3c8075c38dbd7fdf6451ca7885a7f Mon Sep 17 00:00:00 2001 From: Yu Yang Date: Thu, 24 Nov 2016 12:42:22 +0800 Subject: [PATCH 016/265] Done for reviewing docs. 
--- doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst | 10 ++++------ 1 file changed, 4 insertions(+), 6 deletions(-) diff --git a/doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst b/doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst index 4d29507ca3..a13d4728a9 100644 --- a/doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst +++ b/doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst @@ -144,17 +144,15 @@ :lines: 42-59 :linenos: -- 双层序列\: - - - 双层RNN中,对输入的两个特征分别求时序上的连续全连接(`inner_step1`和`inner_step2`分别处理fea1和fea2),其功能与示例2中`sequence_nest_rnn.conf`的`outer_step`函数完全相同。不同之处是,此时输入`[SubsequenceInput(emb1), SubsequenceInput(emb2)]`在各时刻并不等长。 - - 函数`outer_step`中可以分别处理这两个特征,但我们需要用\ :red:`targetInlink`\ 指定recurrent_group的输出的格式(各子句长度)只能和其中一个保持一致,如这里选择了和emb2的长度一致。 - - 最后,依然是取encoder1_rep的最后一个时刻和encoder2_rep的所有时刻分别相加得到context。 +而双层序列的代码如下。 .. literalinclude:: ../../../paddle/gserver/tests/sequence_nest_rnn_multi_unequalength_inputs.py :language: python - :lines: 42-75, 82-89 + :lines: 41-80 :linenos: +在上面代码中,单层和双层序列的使用和示例2中的示例类似,区别是同时处理了两个输入。而对于双层序列,两个输入的子序列长度也并不相同。但是,我们使用了\ :code:`targetInlink`\ 参数设置了外层\ :code:`recurrent_group`\ 的输出格式。所以外层输出的序列形状,和\ :code:`emb2`的序列形状一致。 + 示例4:beam_search的生成 ======================== From 2664286d6adfdd5cc80560bd22371083a3d64bdd Mon Sep 17 00:00:00 2001 From: Luo Tao Date: Fri, 25 Nov 2016 18:32:24 +0800 Subject: [PATCH 017/265] refine dataprovider related rst --- doc_cn/ui/data_provider/dataprovider.rst | 15 + doc_cn/ui/data_provider/index.rst | 17 - doc_cn/ui/data_provider/mnist_config.py | 1 + doc_cn/ui/data_provider/mnist_provider.py | 22 - doc_cn/ui/data_provider/pydataprovider2.rst | 476 ++++++++---------- .../ui/data_provider/sentimental_provider.py | 15 +- .../data_provider/write_new_dataprovider.rst | 4 - doc_cn/ui/index.rst | 7 +- 8 files changed, 244 insertions(+), 313 deletions(-) create mode 100644 doc_cn/ui/data_provider/dataprovider.rst delete mode 100644 doc_cn/ui/data_provider/index.rst delete mode 100644 doc_cn/ui/data_provider/mnist_provider.py delete mode 100644 doc_cn/ui/data_provider/write_new_dataprovider.rst diff --git a/doc_cn/ui/data_provider/dataprovider.rst b/doc_cn/ui/data_provider/dataprovider.rst new file mode 100644 index 0000000000..e1ad330c29 --- /dev/null +++ b/doc_cn/ui/data_provider/dataprovider.rst @@ -0,0 +1,15 @@ +DataProvider的介绍 +================== + +DataProvider是PaddlePaddle负责提供数据的模块。其作用是将数据传入内存或显存,让神经网络可以进行训练或预测。有两种使用方式: + +- 简单使用:使用Python接口 `PyDataProvider2 `_ 来自定义传数据的过程。 +- 高级使用:如果用户有更复杂的使用,或者需要更高的效率,可以在C++端自定义一个 ``DataProvider`` 。 + +PaddlePaddle需要用户在网络配置(trainer_config.py)中定义使用哪种DataProvider,并且在DataProvider中实现如何访问训练文件列表(train.list)或测试文件列表(test.list)。 + +- train.list和test.list存放在本地(推荐直接存放到训练目录,以相对路径引用)。一般情况下,两者均为纯文本文件,其中每一行对应一个数据文件地址: + + - 如果数据文件存于本地磁盘,则将这些文件的绝对路径或相对路径(相对于PaddlePaddle程序运行时的路径)写在train.list和test.list中。 + - 地址也可以为hdfs文件路径,或者数据库连接地址等。 +- 如果没有设置test.list,或设置为None,那么在训练过程中不会执行测试操作;否则,会根据命令行参数指定的测试方式,在训练过程中进行测试,从而防止过拟合。 diff --git a/doc_cn/ui/data_provider/index.rst b/doc_cn/ui/data_provider/index.rst deleted file mode 100644 index ec8f8e5dc5..0000000000 --- a/doc_cn/ui/data_provider/index.rst +++ /dev/null @@ -1,17 +0,0 @@ -PaddlePaddle的数据提供(DataProvider)介绍 -======================================== - -数据提供(DataProvider)是PaddlePaddle负责提供数据的模块。其作用是将训练数据传入内存或者显存,让神经网络可以进行训练。简单的使用,用户可以使用Python的 :code:`PyDataProvider` 来自定义传数据的过程。如果有更复杂的使用,或者需要更高的效率,用户也可以在C++端自定义一个 :code:`DataProvider` 。 - -PaddlePaddle需要用户在网络配置(trainer_config.py)中定义使用哪种DataProvider及其参数,训练文件列表(train.list)和测试文件列表(test.list)。 - 
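
The file-list convention described in ``dataprovider.rst`` above can be illustrated with a short, hedged sketch; the paths and the provider below are invented and are not part of this repository. Each line of ``train.list`` reaches the data-reading function verbatim as its ``file_name`` argument, so the line may equally be a local path, an HDFS path, or a database address, as long as the provider knows how to resolve it.

.. code-block:: python

    from paddle.trainer.PyDataProvider2 import provider, dense_vector, integer_value

    # A hypothetical train.list would contain one data address per line, e.g.
    #
    #     data/part-00000.txt
    #     data/part-00001.txt
    #
    # The trainer passes each of these lines, one at a time, to process().
    @provider(input_types=[dense_vector(4), integer_value(2)])
    def process(settings, file_name):
        # file_name is one raw line of train.list; here it is treated as a
        # local text file whose lines are "label<TAB>space-separated features".
        with open(file_name) as f:
            for line in f:
                label, features = line.strip().split('\t')
                yield [float(x) for x in features.split()], int(label)
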
-其中,train.list和test.list均为本地的两个文件(推荐直接放置到训练目录,以相对路径引用)。如果test.list不设置,或者设置为None,那么在训练过程中,不会执行测试操作。否则,会根据命令行参数指定的测试方式,在训练过程中进行测试,从而防止过拟合。 - -一般情况下,train.list和test.list为纯文本文件,一行对应一个数据文件,数据文件存放在本地磁盘中。将文件的绝对路径或相对路径(相对于PaddlePaddle程序运行时的路径)写在train.list和test.list中。当然,train.list和test.list也可以放置hdfs文件路径,或者数据库连接地址等等。 - -用户在DataProvider中需要实现如何访问其中每一个文件。DataProvider的具体用法和如何实现一个新的DataProvider,请参考下述文章: - -.. toctree:: - - pydataprovider2.rst - write_new_dataprovider.rst diff --git a/doc_cn/ui/data_provider/mnist_config.py b/doc_cn/ui/data_provider/mnist_config.py index 39becff03b..429338c57f 100644 --- a/doc_cn/ui/data_provider/mnist_config.py +++ b/doc_cn/ui/data_provider/mnist_config.py @@ -5,5 +5,6 @@ define_py_data_sources2( test_list=None, module='mnist_provider', obj='process') + img = data_layer(name='pixel', size=784) label = data_layer(name='label', size=10) diff --git a/doc_cn/ui/data_provider/mnist_provider.py b/doc_cn/ui/data_provider/mnist_provider.py deleted file mode 100644 index 8b828641d5..0000000000 --- a/doc_cn/ui/data_provider/mnist_provider.py +++ /dev/null @@ -1,22 +0,0 @@ -from paddle.trainer.PyDataProvider2 import * - - -# Define a py data provider -@provider(input_types=[dense_vector(28 * 28), integer_value(10)]) -def process(settings, filename): # settings is not used currently. - f = open(filename, 'r') # open one of training file - - for line in f: # read each line - label, pixel = line.split(';') - - # get features and label - pixels_str = pixel.split(' ') - - pixels_float = [] - for each_pixel_str in pixels_str: - pixels_float.append(float(each_pixel_str)) - - # give data to paddle. - yield pixels_float, int(label) - - f.close() # close file diff --git a/doc_cn/ui/data_provider/pydataprovider2.rst b/doc_cn/ui/data_provider/pydataprovider2.rst index 80b40084d8..3455e331da 100644 --- a/doc_cn/ui/data_provider/pydataprovider2.rst +++ b/doc_cn/ui/data_provider/pydataprovider2.rst @@ -1,257 +1,219 @@ -PyDataProvider2的使用 -===================== - -PyDataProvider是PaddlePaddle使用Python提供数据的推荐接口。使用该接口用户可以只关注如何 -从文件中读取每一条数据,而不用关心数据如何传输给PaddlePaddle,数据如何存储等等。该数据 -接口使用多线程读取数据,并提供了简单的Cache功能。 - - -简单的使用场景 --------------- - -这里以MNIST手写识别为例,来说明简单的PyDataProvider如何使用。MNIST是一个包含有 -70,000张灰度图片的数字分类数据集。对于MNIST而言,标签是0-9的数字,而特征即为 -28*28的像素灰度值。这里我们使用简单的文本文件表示MNIST图片,样例数据如下。 - -.. literalinclude:: mnist_train.txt - -其数据使用;间隔,第一段数据为这张图片的label,第二段数据为这个图片的像素值。 -首先我们将这个数据文件(例如文件名是'mnist_train.txt')写入train.list。那么 -train.list即为 - -.. literalinclude:: train.list - -那么对应的dataprovider既为 - -.. literalinclude:: mnist_provider.py - :linenos: - -其中第一行是引入PaddlePaddle的PyDataProvider2包。主要函数是process函数。process函数 -具有两个参数,第一个参数是 settings 。这个参数在这个样例里没有使用,具 -体可以参考 settings 。第二个参数是filename,这个参数被PaddlePaddle进程传入,为 -train.list中的一行(即train.list若干数据文件路径的某一个路径)。 - -:code:`@provider` 是一个Python的 `Decorator `_ -。这行的作用是设置DataProvider的一些属性,并且标记process函数是一个DataProvider。 -如果不了解 `Decorator `_ 是什么也没关系, -只需要知道这只是一个标记属性的方法就可以了。 - -属性 `input_types`_ 是设置这个DataProvider返回什么样的数据。这里设置的是返回一个 -28*28的稠密向量和一个[0-9],10维的整数值。 `input_types`_ 具体可以设置成什么其他格 -式,请参考 `input_types`_ 的文档。 - -process函数是实现数据输入的主函数,在这个函数中,实现了打开文本文件,从文本文件中读取 -每一行,并将每行转换成和 `input_types`_ 一致的特征,并在23行返回给PaddlePaddle进程。需要注意 -的是, 返回的顺序需要和 `input_types`_ 中定义的顺序一致。 - -同时,返回数据在PaddlePaddle中是仅仅返回一条完整的训练样本,并且使用关键词 :code:`yield` 。 -在PyDataProvider中,可以为一个数据文件返回多条训练样本(就像这个样例一样),只需要在 -process函数调用多次 :code:`yield` 即可。 :code:`yield` 是Python的一个关键词,相关的概 -念是 :code:`generator` 。使用这个关键词,可以在一个函数里,多次返回变量。 - -在训练配置里,只需要使用一行代码即可以设置训练引用这个DataProvider。这个设置为 - -.. 
literalinclude:: mnist_config.py - -这里说明了训练数据是 'train.list',而没有测试数据。引用的DataProvider是 'mnist_provider' -这个模块中的 'process' 函数。 - -同时,根据模型配置文件中 :code:`data_layer` 的名字,用户也可以显式指定返回的数据对应关系。例如: - -.. literalinclude:: mnist_provider.dict.py - :linenos: - -如果用户不指定返回数据的对应关系,那么PaddlePaddle会粗略的根据layer的声明顺序, -来确定对应关系。这个对应关系可能不正确。所以推荐使用显式指定返回值和数据对应关系。 - -至此,简单的PyDataProvider样例就说明完毕了。对于用户来说,讲数据发送给PaddlePaddle,仅仅需要 -知道如何从 **一个文件** 里面读取 **一条** 样本。而PaddlePaddle进程帮助用户做了 - -* 将数据组合成Batch训练 -* Shuffle训练数据 -* 多线程数据读取 -* 缓存训练数据到内存(可选) -* CPU->GPU双缓存 - -是不是很简单呢? - -序列模型数据提供 ----------------- - -序列模型是指数据的某一维度是一个序列形式,即包含时间步信息。所谓时间步信息, -不一定和时间有关系,只是说明数据的顺序是重要的。例如,文本信息就是一个序列 -数据。 - -这里举例的数据是英文情感分类的数据。数据是给一段英文文本,分类成正面情绪和 -负面情绪两类(用0和1表示)。样例数据为 - -.. literalinclude:: sentimental_train.txt - -这里,DataProvider可以是 - -.. literalinclude:: sentimental_provider.py - -这个序列模型比较复杂。主要是增加了初始化机制。其中 :code:`on_init` 函数是使用 -`@provider`_ 中的 `init_hook`_ 配置参数配置给DataProvider的。这个函数会在 -DataProvider创建的时候执行。这个初始化函数具有如下参数: - -* 第一个参数是 settings 对象。 -* 其他参数均使用key word argument形式传入。有部分参数是Paddle自动生成的, - 参考 `init_hook`_ 。这里的 :code:`dictionary` 是从训练配置传入的dict对象。 - 即从单词字符串到单词id的字典。 - -传入这个变量的方式为 - -.. literalinclude:: sentimental_config.py - -这个声明基本上和mnist的样例一致。除了 - -* 在配置中读取了字典 -* 在声明DataProvider的时候传入了dictionary作为参数。 - -在 :code:`on_init` 函数中,配置了 `input_types` 。这个和在 `@provider`_ 中配置 -`input_types` 效果一致,但是在 `on_init` 中配置 `input_types` 是在运行时执行的,所以 -可以根据不同的数据配置不同的输入类型。这里的输入特征是词id的序列,所以将 :code:`seq_type` -设置成了序列(同时,也可以使用 :code:`integer_sequence` 类型来设置)。 - -同时,将字典存入了settings 对象。这个字典可以在 :code:`process` 函数中使用。 :code:`process` -函数中的 settings 和 :code:`on_init` 中的settings 是同一个对象。 - -而在 :code:`process` 函数中,基本的处理逻辑也和mnist逻辑一致。依次返回了文件中的每条数据。 - -至此,基本的PyDataProvider使用介绍完毕了。具体DataProvider还具有什么功能,请参考下节reference。 - -参考(Reference) ---------------- - -@provider -+++++++++ - -:code:`@provider` 是一个Python的 `Decorator`_ ,他可以将某一个函数标记成一个PyDataProvider。它包含的参数有: - -* `input_types`_ 是数据输入格式。具体有哪些格式,参考 `input_types`_ 。 -* should_shuffle 是个DataProvider是不是要做shuffle,如果不设置的话,训练的时候默认shuffle, - 测试的时候默认不shuffle。 -* min_pool_size 是设置DataProvider在内存中最小暂存的数据条数。这个也是PaddlePaddle所能够保证的shuffle粒度。 - 设置成-1的话,会预先读取全部数据到内存中。 -* pool_size 是设置DataProvider在内存中暂存的数据条数。设置成-1的话,即不在乎内存暂存多少条数据。 -* can_over_batch_size 表示是否允许Paddle暂存略微多余pool_size的数据。这样做可以避免很多死锁问题。 - 一般推荐设置成True -* calc_batch_size 传入的是一个函数,这个函数以一条数据为参数,返回batch_size的大小。默认情况下一条数据 - 是一个batch size,但是有时为了计算均衡性,可以将一条数据设置成多个batch size -* cache 是数据缓存的策略,参考 `cache`_ -* init_hook 是初始化时调用的函数,参考 `init_hook`_ -* check 设置成true的话,会根据input_types检查数据的合法性。 -* check_fail_continue 如果设置成true的话,即使在check中数据不合法,也会扔到这条数据,继续训练。 如果 - check是false的话,没有作用。 - -input_types -+++++++++++ - -PaddlePaddle的数据包括四种主要类型,和三种序列模式。其中,四种数据类型是 - -* dense_vector 表示稠密的浮点数向量。 -* sparse_binary_vector 表示稀疏的零一向量,即大部分值为0,有值的位置只能取1 -* sparse_float_vector 表示稀疏的向量,即大部分值为0,有值的部分可以是任何浮点数 -* integer 表示整数标签。 - -而三种序列模式为 - -* SequenceType.NO_SEQUENCE 即不是一条序列 -* SequenceType.SEQUENCE 即是一条时间序列 -* SequenceType.SUB_SEQUENCE 即是一条时间序列,且序列的每一个元素还是一个时间序列。 - -不同的数据类型和序列模式返回的格式不同,列表如下 - -+----------------------+---------------------+-----------------------------------+------------------------------------------------+ -| | NO_SEQUENCE | SEQUENCE | SUB_SEQUENCE | -+======================+=====================+===================================+================================================+ -| dense_vector | [f, f, ...] | [[f, ...], [f, ...], ...] | [[[f, ...], ...], [[f, ...], ...],...] 
| -+----------------------+---------------------+-----------------------------------+------------------------------------------------+ -| sparse_binary_vector | [i, i, ...] | [[i, ...], [i, ...], ...] | [[[i, ...], ...], [[i, ...], ...],...] | -+----------------------+---------------------+-----------------------------------+------------------------------------------------+ -| sparse_float_vector | [(i,f), (i,f), ...] | [[(i,f), ...], [(i,f), ...], ...] | [[[(i,f), ...], ...], [[(i,f), ...], ...],...] | -+----------------------+---------------------+-----------------------------------+------------------------------------------------+ -| integer_value | i | [i, i, ...] | [[i, ...], [i, ...], ...] | -+----------------------+---------------------+-----------------------------------+------------------------------------------------+ - -其中,f代表一个浮点数,i代表一个整数。 - -init_hook -+++++++++ - -init_hook可以传入一个函数。这个函数在初始化的时候会被调用。这个函数的参数是: - -* 第一个参数是 settings 对象。这个对象和process的第一个参数一致。具有的属性有 - * settings.input_types 设置输入类型。参考 `input_types`_ - * settings.logger 一个logging对象 -* 其他参数都使用key word argument传入。这些参数包括paddle定义的参数,和用户传入的参数。 - * Paddle定义的参数包括: - * is_train bool参数,表示这个DataProvider是训练用的DataProvider或者测试用的 - DataProvider - * file_list 所有文件列表。 - * 用户定义的参数使用args在训练配置中设置。 - -注意,PaddlePaddle保留添加参数的权力,所以init_hook尽量使用 :code:`**kwargs` , 来接受不使用的 -函数来保证兼容性。 - -cache -+++++ - -DataProvider提供了两种简单的Cache策略。他们是 - -* CacheType.NO_CACHE 不缓存任何数据,每次都会从python端读取数据 -* CacheType.CACHE_PASS_IN_MEM 第一个pass会从python端读取数据,剩下的pass会直接从内存里 - 读取数据。 - - -注意事项 --------- - -可能的内存泄露问题 -++++++++++++++++++ - -PaddlePaddle将train.list中的每一行,都传递给process函数,从而生成多个generator。 -即如果train.list中,有100个训练文件,即会生成100个generator。这个本身不是一个很 -严重的问题。 - -但是,如果在训练时,每一条训练数据都是一个文件,并且,训练数据非常多的情况下,就 -会生成多个generator。每个generator在没有调用的时候,是几乎不占内存的。但是,当调 -用过一次的时候,generator便会存下当前的上下文(Context)。而这个Context可能会非常 -大。并且,generator至少调用两次才会知道是否停止。所以,即使在process里面只会有一 -个yield,也需要两次随机选择到同样的generator的时候,才会释放该段内存。 - -.. code-block:: python - - def func(): - yield 0 - - f = func() # 创建generator - tmp = next(f) # 调用一次,返回0 - tmp = next(f) # 调用第二次的时候,才会Stop Iteration - -而如果按顺序调用这些generator就不会出现这个问题。 - -所以最佳实践推荐不要将每一个样本都放入train.list。而是将样本的地址放入另一个文本 -文件,train.list写入那个文本文件的地址。 或者在python generator的上下文中尽量留 -下非常少的变量引用。例如 - -.. code-block:: python - - def real_process(fn): - # ... read from fn - return result # 当函数返回的时候,python可以解除掉内部变量的引用。 - - def process(fn): - yield real_process(fn) - -这个问题是PyDataProvider读数据时候的逻辑问题,基本上不能整体修正。 - - -内存不够用的情况 -++++++++++++++++ - -PyDataProvider2会尽量使用内存。所以如果对于内存比较小的机器,推荐设置 -:code:`pool_size` 变量,而这个变量推荐大于训练的batch size,并且在内存足够 -的情况下越大越好。 - +PyDataProvider2的使用 +===================== + +PyDataProvider2是PaddlePaddle使用Python提供数据的接口。该接口使用多线程读取数据,并提供了简单的Cache功能;同时可以使用户只关注如何从文件中读取每一条数据,而不用关心数据如何传输,如何存储等等。 + +.. contents:: + +MNIST的使用场景 +--------------- + +我们以MNIST手写识别为例,来说明如何使用最简单的PyDataProvider2。 + +样例数据 +++++++++ + +MNIST是一个包含有70,000张灰度图片的数字分类数据集。样例数据 ``mnist_train.txt`` 如下: + +.. literalinclude:: mnist_train.txt + +其中每行数据代表一张图片,行内使用 ``;`` 分成两部分。第一部分是图片的标签,为0-9中的一个数字;第二部分是28*28的图片像素灰度值。 对应的 ``train.list`` 为: + +.. literalinclude:: train.list + +dataprovider的使用 +++++++++++++++++++ + +.. literalinclude:: mnist_provider.dict.py + +- 首先,引入PaddlePaddle的PyDataProvider2包。 +- 其次,定义一个Python的 `Decorator `_ `@provider`_ 。用于将下一行的数据输入函数标记成一个PyDataProvider2,同时设置它的input_types属性。 + + - `input_types`_:设置这个PyDataProvider2返回什么样的数据。本例根据网络配置中 ``data_layer`` 的名字,显式指定返回的是一个28*28维的稠密浮点数向量和一个[0-9]的10维整数标签。 + + .. 
literalinclude:: mnist_config.py + :lines: 9-10 + + - 注意:如果用户不显示指定返回数据的对应关系,那么PaddlePaddle会根据layer的声明顺序,来确定对应关系。但这个关系可能不正确,所以推荐使用显式指定的方式来设置input_types。 +- 最后,实现数据输入函数(如本例的 ``process`` 函数)。 + + - 该函数的功能是:打开文本文件,读取每一行,将行中的数据转换成与input_types一致的格式,然后返回给PaddlePaddle进程。注意, + + - 返回的顺序需要和input_types中定义的顺序一致。 + - 返回时,必须使用关键词 ``yield`` 。一次yield调用,即返回一条完整的样本。如果想为一个数据文件返回多条样本,只需要在函数中调用多次yield即可(本例中使用for循环进行多次调用)。 + + - 该函数具有两个参数: + + - settings:在本例中没有使用,具体可以参考 `init_hook`_ 中的说明。 + - filename:为 ``train.list`` 或 ``test.list`` 中的一行,即若干数据文件路径的某一个。 + +网络配置中的调用 +++++++++++++++++ + +在网络配置里,只需要一行代码就可以调用这个PyDataProvider2,如, + +.. literalinclude:: mnist_config.py + :lines: 1-7 + +训练数据是 ``train.list`` ,测试数据没有,调用的PyDataProvider2是 ``mnist_provider`` 模块中的 ``process`` 函数。 + +时序模型的使用场景 +------------------ +样例数据 +++++++++ + +时序模型是指数据的某一维度是一个序列形式,即包含时间步信息。所谓时间步信息,不一定和时间有关系,只是说明数据的顺序是重要的。例如,文本信息就是一个序列数据。 + +本例采用英文情感分类的数据,即将一段英文文本数据,分类成正面情绪和负面情绪两类(用0和1表示)。样例数据 ``sentimental_train.txt`` 如下: + +.. literalinclude:: sentimental_train.txt + +dataprovider的使用 +++++++++++++++++++ + +相对MNIST而言,这个dataprovider较复杂,主要原因是增加了初始化机制 `init_hook`_。本例的 ``on_init`` 函数就是根据该机制配置的,它会在dataprovider创建的时候执行。 + +- 其中 ``input_types`` 和在 `@provider`_ 中配置的效果一致。本例中的输入特征是词ID的序列,因此使用 ``integer_value_sequence`` 类型来设置。 +- 将 ``dictionary`` 存入settings对象,在 ``process`` 函数中使用。 dictionary是从网络配置中传入的dict对象,即一个将单词字符串映射到单词ID的字典。 + +.. literalinclude:: sentimental_provider.py + +网络配置中的调用 +++++++++++++++++ + +调用这个PyDataProvider2的方法,基本上和MNIST样例一致,除了 + +* 在配置中需要读取外部字典。 +* 在声明DataProvider的时候传入dictionary作为参数。 + +.. literalinclude:: sentimental_config.py + :emphasize-lines: 12-14 + +小结 +----- + +至此,两个PyDataProvider2的样例就说明完毕了。对用户来说,仅需要知道如何从 **一个文件** 中读取 **一条样本** ,就可以将数据传送给PaddlePaddle了。而PaddlePaddle则会帮用户做以下工作: + +* 将数据组合成Batch进行训练 +* 对训练数据进行Shuffle +* 多线程的数据读取 +* 缓存训练数据到内存(可选) +* CPU->GPU双缓存 + +是不是很简单呢? + +参考(Reference) +--------------- + +@provider ++++++++++ + +``@provider`` 是一个Python的 `Decorator`_ ,可以将某一个函数标记成一个PyDataProvider2。如果不了解 `Decorator`_ 是什么也没关系,只需知道这是一个标记属性的方法就可以了。它包含的属性参数如下: + +* input_types:数据输入格式。具体的格式说明,请参考 `input_types`_ 。 +* should_shuffle:是不是要对数据做Shuffle。训练时默认shuffle,测试时默认不shuffle。 +* min_pool_size:设置内存中最小暂存的数据条数,也是PaddlePaddle所能够保证的shuffle粒度。如果为-1,则会预先读取全部数据到内存中。 +* pool_size: 设置内存中暂存的数据条数。如果为-1(默认),则不在乎内存暂存多少条数据。如果设置,则推荐大于训练时batch size的值,并且在内存足够的情况下越大越好。 +* can_over_batch_size:是否允许暂存略微多余pool_size的数据。由于这样做可以避免很多死锁问题,一般推荐设置成True。 +* calc_batch_size:可以传入一个函数,用于自定义每条数据的batch size(默认为1)。 +* cache: 数据缓存的策略,具体请参考 `cache`_ 。 +* init_hook:初始化时调用的函数,具体请参考 `init_hook`_ 。 +* check:如果为true,会根据input_types检查数据的合法性。 +* check_fail_continue:如果为true,那么当check出数据不合法时,会扔到这条数据,继续训练或预测。(对check=false的情况,没有作用) + +input_types ++++++++++++ + +PaddlePaddle的数据包括四种主要类型,和三种序列模式。 + +四种数据类型: + +* dense_vector:稠密的浮点数向量。 +* sparse_binary_vector:稀疏的01向量,即大部分值为0,但有值的地方必须为1。 +* sparse_float_vector:稀疏的向量,即大部分值为0,但有值的部分可以是任何浮点数。 +* integer:整数标签。 + +三种序列模式: + +* SequenceType.NO_SEQUENCE:不是一条序列 +* SequenceType.SEQUENCE:是一条时间序列 +* SequenceType.SUB_SEQUENCE: 是一条时间序列,且序列的每一个元素还是一个时间序列。 + +不同的数据类型和序列模式返回的格式不同,列表如下: + ++----------------------+---------------------+-----------------------------------+------------------------------------------------+ +| | NO_SEQUENCE | SEQUENCE | SUB_SEQUENCE | ++======================+=====================+===================================+================================================+ +| dense_vector | [f, f, ...] | [[f, ...], [f, ...], ...] | [[[f, ...], ...], [[f, ...], ...],...] 
| ++----------------------+---------------------+-----------------------------------+------------------------------------------------+ +| sparse_binary_vector | [i, i, ...] | [[i, ...], [i, ...], ...] | [[[i, ...], ...], [[i, ...], ...],...] | ++----------------------+---------------------+-----------------------------------+------------------------------------------------+ +| sparse_float_vector | [(i,f), (i,f), ...] | [[(i,f), ...], [(i,f), ...], ...] | [[[(i,f), ...], ...], [[(i,f), ...], ...],...] | ++----------------------+---------------------+-----------------------------------+------------------------------------------------+ +| integer_value | i | [i, i, ...] | [[i, ...], [i, ...], ...] | ++----------------------+---------------------+-----------------------------------+------------------------------------------------+ + +其中,f代表一个浮点数,i代表一个整数。 + +init_hook ++++++++++ + +init_hook可以传入一个函数。该函数在初始化的时候会被调用,其参数如下: + +* 第一个参数是settings对象,它和数据传入函数的第一个参数(如本例中 ``process`` 函数的 ``settings`` 参数)必须一致。该对象具有以下两个属性: + * settings.input_types:数据输入格式,具体请参考 `input_types`_ 。 + * settings.logger:一个logging对象。 +* 其他参数使用 ``kwargs`` (key word arguments)传入,包括以下两种: + * PaddlePaddle定义的参数: 1)is_train:bool型参数,表示用于训练或预测;2)file_list:所有文件列表。 + * 用户定义的参数:使用args在网络配置中设置。 + +cache ++++++ + +PyDataProvider2提供了两种简单的Cache策略: + +* CacheType.NO_CACHE:不缓存任何数据,每次都会从python端读取数据 +* CacheType.CACHE_PASS_IN_MEM:第一个pass会从python端读取数据,剩下的pass会直接从内存里 + 读取数据。 + + +注意事项 +-------- + +可能的内存泄露问题 +++++++++++++++++++ + +PaddlePaddle将train.list中的每一行都传递给process函数,从而生成多个generator。当训练数据非常多时,就会生成非常多的generator。 + +虽然每个generator在没有调用的时候,是几乎不占内存的;但当调用过一次后,generator便会存下当前的上下文(Context),而这个Context可能会非常大。并且,generator至少需要调用两次才会知道是否停止。所以,即使process函数里面只有一个yield,也需要两次随机选择到相同generator的时候,才会释放该段内存。 + +.. code-block:: python + + def func(): + yield 0 + + f = func() # 创建generator + tmp = next(f) # 调用一次,返回0 + tmp = next(f) # 调用第二次的时候,才会Stop Iteration + +由于顺序调用这些generator不会出现上述问题,因此有两种解决方案: + +1. **最佳推荐**:将样本的地址放入另一个文本文件,train.list写入那个文本文件的地址。即不要将每一个样本都放入train.list。 +2. 在generator的上下文中尽量留下非常少的变量引用,例如 + +.. code-block:: python + + def real_process(fn): + # ... read from fn + return result # 当函数返回的时候,python可以解除掉内部变量的引用。 + + def process(fn): + yield real_process(fn) + +注意:这个问题是PyDataProvider读数据时候的逻辑问题,很难整体修正。 + +内存不够用的情况 +++++++++++++++++ + +PyDataProvider2会尽可能多的使用内存。因此,对于内存较小的机器,推荐使用 ``pool_size`` 变量来设置内存中暂存的数据条。具体请参考 `@provider`_ 中的说明。 + diff --git a/doc_cn/ui/data_provider/sentimental_provider.py b/doc_cn/ui/data_provider/sentimental_provider.py index 0fb0bb88e9..14bd0e05a9 100644 --- a/doc_cn/ui/data_provider/sentimental_provider.py +++ b/doc_cn/ui/data_provider/sentimental_provider.py @@ -8,19 +8,16 @@ def on_init(settings, dictionary, **kwargs): # set input types in runtime. It will do the same thing as # @provider(input_types) will do, but it is set dynamically during runtime. - settings.input_types = [ + settings.input_types = { # The text is a sequence of integer values, and each value is a word id. # The whole sequence is the sentences that we want to predict its # sentimental. - integer_value( - len(dictionary), seq_type=SequenceType), # text input + 'data': integer_value_sequence(len(dictionary)), # text input + 'label': integer_value(2) # label positive/negative + } - # label positive/negative - integer_value(2) - ] - - # save dictionary as settings.dictionary. It will be used in process - # method. + # save dictionary as settings.dictionary. + # It will be used in process method. 
settings.dictionary = dictionary diff --git a/doc_cn/ui/data_provider/write_new_dataprovider.rst b/doc_cn/ui/data_provider/write_new_dataprovider.rst deleted file mode 100644 index a2495fe663..0000000000 --- a/doc_cn/ui/data_provider/write_new_dataprovider.rst +++ /dev/null @@ -1,4 +0,0 @@ -自定义一个DataProvider -==================== - -TBD \ No newline at end of file diff --git a/doc_cn/ui/index.rst b/doc_cn/ui/index.rst index 8079bd9180..c53ebeefe1 100644 --- a/doc_cn/ui/index.rst +++ b/doc_cn/ui/index.rst @@ -8,8 +8,8 @@ .. toctree:: :maxdepth: 1 - data_provider/index.rst - + data_provider/dataprovider.rst + data_provider/pydataprovider2.rst 命令行参数 ========== @@ -22,9 +22,8 @@ * `参数描述 <../../doc/ui/cmd_argument/detail_introduction.html>`_ * `参数用例 <../../doc/ui/cmd_argument/use_case.html>`_ - 预测 -==== +======= .. toctree:: From d6b0b5cb103c83c592bed8ce642250b33b09a8c3 Mon Sep 17 00:00:00 2001 From: Luo Tao Date: Mon, 28 Nov 2016 15:33:01 +0800 Subject: [PATCH 018/265] note on difference between sparse_vector and dense_vector --- doc_cn/ui/data_provider/pydataprovider2.rst | 2 ++ 1 file changed, 2 insertions(+) diff --git a/doc_cn/ui/data_provider/pydataprovider2.rst b/doc_cn/ui/data_provider/pydataprovider2.rst index 3455e331da..001e0884be 100644 --- a/doc_cn/ui/data_provider/pydataprovider2.rst +++ b/doc_cn/ui/data_provider/pydataprovider2.rst @@ -155,6 +155,8 @@ PaddlePaddle的数据包括四种主要类型,和三种序列模式。 其中,f代表一个浮点数,i代表一个整数。 +注意:sparse_binary_vector和sparse_float_vector中的元素,可以有None;但dense_vector和integer中的元素,不能出现None。 + init_hook +++++++++ From d853a43f67cc91c3b50dbbe04cc0a7853c3fabd6 Mon Sep 17 00:00:00 2001 From: liaogang Date: Mon, 28 Nov 2016 16:41:33 +0800 Subject: [PATCH 019/265] Refine quick start index.rst --- doc_cn/demo/quick_start/index.rst | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/doc_cn/demo/quick_start/index.rst b/doc_cn/demo/quick_start/index.rst index 08c1c8413b..b38b8ca999 100644 --- a/doc_cn/demo/quick_start/index.rst +++ b/doc_cn/demo/quick_start/index.rst @@ -48,6 +48,11 @@ PaddlePaddle快速入门教程 ./data/get_data.sh ./preprocess.sh +数据预处理完成之后,通过配置类似于 ``dataprovider_*.py`` 的数据读取脚本和类似于 ``trainer_config.*.py`` 的训练模型脚本,PaddlePaddle将以设置参数的方式来设置 +相应的数据读取脚本和训练模型脚本。接下来,我们将对这两个步骤给出了详细的解释,你也可以先跳过本文的解释环节,直接进入训练环节, 使用 ``sh train.sh`` 开始训练模型, +查看`train.sh`内容,通过 **自底向上法** (bottom-up approach)来帮助你理解PaddlePaddle的内部运行机制。 + + 向系统传送数据 ============== From b5e36970e29bf2f52cad934466a88b7cf7672499 Mon Sep 17 00:00:00 2001 From: Luo Tao Date: Mon, 28 Nov 2016 17:04:28 +0800 Subject: [PATCH 020/265] all input_types should has None element --- doc_cn/ui/data_provider/pydataprovider2.rst | 2 -- 1 file changed, 2 deletions(-) diff --git a/doc_cn/ui/data_provider/pydataprovider2.rst b/doc_cn/ui/data_provider/pydataprovider2.rst index 001e0884be..3455e331da 100644 --- a/doc_cn/ui/data_provider/pydataprovider2.rst +++ b/doc_cn/ui/data_provider/pydataprovider2.rst @@ -155,8 +155,6 @@ PaddlePaddle的数据包括四种主要类型,和三种序列模式。 其中,f代表一个浮点数,i代表一个整数。 -注意:sparse_binary_vector和sparse_float_vector中的元素,可以有None;但dense_vector和integer中的元素,不能出现None。 - init_hook +++++++++ From 11f97c93b2a0ffa4a6eb23e7af3439e5d5b8c224 Mon Sep 17 00:00:00 2001 From: liaogang Date: Mon, 28 Nov 2016 19:45:54 +0800 Subject: [PATCH 021/265] Change explicit code into literalinclude syntax --- doc_cn/demo/quick_start/index.rst | 85 ++++--------------------------- 1 file changed, 11 insertions(+), 74 deletions(-) diff --git a/doc_cn/demo/quick_start/index.rst b/doc_cn/demo/quick_start/index.rst index b38b8ca999..db73cb3f34 100644 --- 
a/doc_cn/demo/quick_start/index.rst +++ b/doc_cn/demo/quick_start/index.rst @@ -49,7 +49,7 @@ PaddlePaddle快速入门教程 ./preprocess.sh 数据预处理完成之后,通过配置类似于 ``dataprovider_*.py`` 的数据读取脚本和类似于 ``trainer_config.*.py`` 的训练模型脚本,PaddlePaddle将以设置参数的方式来设置 -相应的数据读取脚本和训练模型脚本。接下来,我们将对这两个步骤给出了详细的解释,你也可以先跳过本文的解释环节,直接进入训练环节, 使用 ``sh train.sh`` 开始训练模型, +相应的数据读取脚本和训练模型脚本。接下来,我们将对这两个步骤给出了详细的解释,你也可以先跳过本文的解释环节,直接进入训练模型章节, 使用 ``sh train.sh`` 开始训练模型, 查看`train.sh`内容,通过 **自底向上法** (bottom-up approach)来帮助你理解PaddlePaddle的内部运行机制。 @@ -66,86 +66,23 @@ Python脚本读取数据 ``dataprovider_bow.py`` 文件给出了完整例子: -.. code-block:: python - - from paddle.trainer.PyDataProvider2 import * - - # id of the word not in dictionary - UNK_IDX = 0 - - # initializer is called by the framework during initialization. - # It allows the user to describe the data types and setup the - # necessary data structure for later use. - # `settings` is an object. initializer need to properly fill settings.input_types. - # initializer can also store other data structures needed to be used at process(). - # In this example, dictionary is stored in settings. - # `dictionay` and `kwargs` are arguments passed from trainer_config.lr.py - def initializer(settings, dictionary, **kwargs): - # Put the word dictionary into settings - settings.word_dict = dictionary +.. literalinclude:: ../../../demo/quick_start/dataprovider_bow.py + :language: python + :lines: 21-70 + :linenos: + :emphasize-lines: 8,33 - # setting.input_types specifies what the data types the data provider - # generates. - settings.input_types = [ - # The first input is a sparse_binary_vector, - # which means each dimension of the vector is either 0 or 1. It is the - # bag-of-words (BOW) representation of the texts. - sparse_binary_vector(len(dictionary)), - # The second input is an integer. It represents the category id of the - # sample. 2 means there are two labels in the dataset. - # (1 for positive and 0 for negative) - integer_value(2)] - - # Delaring a data provider. It has an initializer 'data_initialzer'. - # It will cache the generated data of the first pass in memory, so that - # during later pass, no on-the-fly data generation will be needed. - # `setting` is the same object used by initializer() - # `file_name` is the name of a file listed train_list or test_list file given - # to define_py_data_sources2(). See trainer_config.lr.py. - @provider(init_hook=initializer, cache=CacheType.CACHE_PASS_IN_MEM) - def process(settings, file_name): - # Open the input data file. - with open(file_name, 'r') as f: - # Read each line. - for line in f: - # Each line contains the label and text of the comment, separated by \t. - label, comment = line.strip().split('\t') - - # Split the words into a list. - words = comment.split() - - # convert the words into a list of ids by looking them up in word_dict. - word_vector = [settings.word_dict.get(w, UNK_IDX) for w in words] - - # Return the features for the current comment. The first is a list - # of ids representing a 0-1 binary sparse vector of the text, - # the second is the integer id of the label. - yield word_vector, int(label) 配置中的数据加载定义 -------------------- 在模型配置中通过 ``define_py_data_sources2`` 接口来加载数据: -.. code-block:: python - - from paddle.trainer_config_helpers import * - - file = "data/dict.txt" - word_dict = dict() - with open(dict_file, 'r') as f: - for i, line in enumerate(f): - w = line.strip().split()[0] - word_dict[w] = i - # define the data sources for the model. - # We need to use different process for training and prediction. 
- # For training, the input data includes both word IDs and labels. - # For prediction, the input data only includs word Ids. - define_py_data_sources2(train_list='data/train.list', - test_list='data/test.list', - module="dataprovider_bow", - obj="process", - args={"dictionary": word_dict}) +.. literalinclude:: ../../../demo/quick_start/trainer_config.emb.py + :language: python + :lines: 19-35 + :linenos: + :emphasize-lines: 12 以下是对上述数据加载的解释: From a36df60a1ce7532aa15fec336512a98bbb79a5d5 Mon Sep 17 00:00:00 2001 From: Luo Tao Date: Tue, 29 Nov 2016 13:59:21 +0800 Subject: [PATCH 022/265] refine dataprovider rst based on comments --- doc_cn/ui/data_provider/dataprovider.rst | 14 ++++---- doc_cn/ui/data_provider/pydataprovider2.rst | 39 +++++++++++---------- 2 files changed, 27 insertions(+), 26 deletions(-) diff --git a/doc_cn/ui/data_provider/dataprovider.rst b/doc_cn/ui/data_provider/dataprovider.rst index e1ad330c29..e6796429a7 100644 --- a/doc_cn/ui/data_provider/dataprovider.rst +++ b/doc_cn/ui/data_provider/dataprovider.rst @@ -1,15 +1,13 @@ DataProvider的介绍 ================== -DataProvider是PaddlePaddle负责提供数据的模块。其作用是将数据传入内存或显存,让神经网络可以进行训练或预测。有两种使用方式: - -- 简单使用:使用Python接口 `PyDataProvider2 `_ 来自定义传数据的过程。 -- 高级使用:如果用户有更复杂的使用,或者需要更高的效率,可以在C++端自定义一个 ``DataProvider`` 。 +DataProvider是PaddlePaddle负责提供数据的模块。其作用是将数据传入内存或显存,让神经网络可以进行训练或预测。用户可以通过简单使用Python接口 `PyDataProvider2 `_ ,来自定义传数据的过程。如果有更复杂的使用,或者需要更高的效率,用户也可以在C++端自定义一个 ``DataProvider`` 。 PaddlePaddle需要用户在网络配置(trainer_config.py)中定义使用哪种DataProvider,并且在DataProvider中实现如何访问训练文件列表(train.list)或测试文件列表(test.list)。 -- train.list和test.list存放在本地(推荐直接存放到训练目录,以相对路径引用)。一般情况下,两者均为纯文本文件,其中每一行对应一个数据文件地址: - - - 如果数据文件存于本地磁盘,则将这些文件的绝对路径或相对路径(相对于PaddlePaddle程序运行时的路径)写在train.list和test.list中。 - - 地址也可以为hdfs文件路径,或者数据库连接地址等。 +- train.list和test.list存放在本地(推荐直接存放到训练目录,以相对路径引用)。一般情况下,两者均为纯文本文件,其中每一行对应一个数据文件地址: + + - 如果数据文件存于本地磁盘,这个地址则为它的绝对路径或相对路径(相对于PaddlePaddle程序运行时的路径)。 + - 地址也可以为hdfs文件路径,或者数据库连接路径等。 + - 由于这个地址会被DataProvider使用,因此,如何解析该地址也是用户自定义DataProvider时需要考虑的地方。 - 如果没有设置test.list,或设置为None,那么在训练过程中不会执行测试操作;否则,会根据命令行参数指定的测试方式,在训练过程中进行测试,从而防止过拟合。 diff --git a/doc_cn/ui/data_provider/pydataprovider2.rst b/doc_cn/ui/data_provider/pydataprovider2.rst index 3455e331da..c0b3286ad5 100644 --- a/doc_cn/ui/data_provider/pydataprovider2.rst +++ b/doc_cn/ui/data_provider/pydataprovider2.rst @@ -1,14 +1,14 @@ PyDataProvider2的使用 ===================== -PyDataProvider2是PaddlePaddle使用Python提供数据的接口。该接口使用多线程读取数据,并提供了简单的Cache功能;同时可以使用户只关注如何从文件中读取每一条数据,而不用关心数据如何传输,如何存储等等。 +PyDataProvider2是PaddlePaddle使用Python提供数据的推荐接口。该接口使用多线程读取数据,并提供了简单的Cache功能;同时可以使用户只关注如何从文件中读取每一条数据,而不用关心数据如何传输,如何存储等等。 .. contents:: MNIST的使用场景 --------------- -我们以MNIST手写识别为例,来说明如何使用最简单的PyDataProvider2。 +我们以MNIST手写识别为例,来说明PyDataProvider2的简单使用场景。 样例数据 ++++++++ @@ -17,7 +17,7 @@ MNIST是一个包含有70,000张灰度图片的数字分类数据集。样例数 .. literalinclude:: mnist_train.txt -其中每行数据代表一张图片,行内使用 ``;`` 分成两部分。第一部分是图片的标签,为0-9中的一个数字;第二部分是28*28的图片像素灰度值。 对应的 ``train.list`` 为: +其中每行数据代表一张图片,行内使用 ``;`` 分成两部分。第一部分是图片的标签,为0-9中的一个数字;第二部分是28*28的图片像素灰度值。 对应的 ``train.list`` 即为这个数据文件的名字: .. literalinclude:: train.list @@ -40,7 +40,8 @@ dataprovider的使用 - 该函数的功能是:打开文本文件,读取每一行,将行中的数据转换成与input_types一致的格式,然后返回给PaddlePaddle进程。注意, - 返回的顺序需要和input_types中定义的顺序一致。 - - 返回时,必须使用关键词 ``yield`` 。一次yield调用,即返回一条完整的样本。如果想为一个数据文件返回多条样本,只需要在函数中调用多次yield即可(本例中使用for循环进行多次调用)。 + - 返回时,必须使用Python关键词 ``yield`` ,相关概念是 ``generator`` 。 + - 一次yield调用,返回一条完整的样本。如果想为一个数据文件返回多条样本,只需要在函数中调用多次yield即可(本例中使用for循环进行多次调用)。 - 该函数具有两个参数: @@ -55,7 +56,20 @@ dataprovider的使用 .. 
literalinclude:: mnist_config.py :lines: 1-7 -训练数据是 ``train.list`` ,测试数据没有,调用的PyDataProvider2是 ``mnist_provider`` 模块中的 ``process`` 函数。 +训练数据是 ``train.list`` ,没有测试数据,调用的PyDataProvider2是 ``mnist_provider`` 模块中的 ``process`` 函数。 + +小结 ++++++ + +至此,简单的PyDataProvider2样例就说明完毕了。对用户来说,仅需要知道如何从 **一个文件** 中读取 **一条样本** ,就可以将数据传送给PaddlePaddle了。而PaddlePaddle则会帮用户做以下工作: + +* 将数据组合成Batch进行训练 +* 对训练数据进行Shuffle +* 多线程的数据读取 +* 缓存训练数据到内存(可选) +* CPU->GPU双缓存 + +是不是很简单呢? 时序模型的使用场景 ------------------ @@ -89,19 +103,6 @@ dataprovider的使用 .. literalinclude:: sentimental_config.py :emphasize-lines: 12-14 -小结 ------ - -至此,两个PyDataProvider2的样例就说明完毕了。对用户来说,仅需要知道如何从 **一个文件** 中读取 **一条样本** ,就可以将数据传送给PaddlePaddle了。而PaddlePaddle则会帮用户做以下工作: - -* 将数据组合成Batch进行训练 -* 对训练数据进行Shuffle -* 多线程的数据读取 -* 缓存训练数据到内存(可选) -* CPU->GPU双缓存 - -是不是很简单呢? - 参考(Reference) --------------- @@ -167,6 +168,8 @@ init_hook可以传入一个函数。该函数在初始化的时候会被调用 * PaddlePaddle定义的参数: 1)is_train:bool型参数,表示用于训练或预测;2)file_list:所有文件列表。 * 用户定义的参数:使用args在网络配置中设置。 +注意:PaddlePaddle保留添加参数的权力,因此init_hook尽量使用 ``**kwargs`` 来接受不使用的函数以保证兼容性。 + cache +++++ From 4d487c6f350b168f5e24094adeacb3c193d5d888 Mon Sep 17 00:00:00 2001 From: Liu Yiqun Date: Tue, 29 Nov 2016 07:40:17 +0000 Subject: [PATCH 023/265] Integrate warp-ctc as WarpCTCLayer, including unitest and layer interface. --- .gitmodules | 0 CMakeLists.txt | 5 + cmake/util.cmake | 5 + paddle/cuda/CMakeLists.txt | 34 ++- paddle/cuda/include/hl_dso_loader.h | 12 +- paddle/cuda/include/hl_gpu.h | 1 + paddle/cuda/include/hl_sequence.h | 33 +++ paddle/cuda/include/hl_warpctc_wrap.h | 94 +++++++ paddle/cuda/include/stub/hl_sequence_stub.h | 9 + paddle/cuda/src/hl_cuda_sequence.cu | 118 +++++++++ paddle/cuda/src/hl_cudart_wrap.cc | 1 + paddle/cuda/src/hl_dso_loader.cc | 25 +- paddle/cuda/src/hl_warpctc_wrap.cc | 157 +++++++++++ paddle/gserver/layers/WarpCTCLayer.cpp | 229 ++++++++++++++++ paddle/gserver/layers/WarpCTCLayer.h | 65 +++++ paddle/gserver/tests/CMakeLists.txt | 7 + paddle/gserver/tests/test_WarpCTCLayer.cpp | 247 ++++++++++++++++++ proto/ModelConfig.proto.m4 | 2 + python/paddle/trainer/config_parser.py | 21 ++ .../paddle/trainer_config_helpers/layers.py | 79 ++++++ .../protostr/test_cost_layers.protostr | 17 ++ .../tests/configs/test_cost_layers.py | 2 + 22 files changed, 1140 insertions(+), 23 deletions(-) create mode 100644 .gitmodules create mode 100644 paddle/cuda/include/hl_warpctc_wrap.h create mode 100644 paddle/cuda/src/hl_warpctc_wrap.cc create mode 100644 paddle/gserver/layers/WarpCTCLayer.cpp create mode 100644 paddle/gserver/layers/WarpCTCLayer.h create mode 100644 paddle/gserver/tests/test_WarpCTCLayer.cpp diff --git a/.gitmodules b/.gitmodules new file mode 100644 index 0000000000..e69de29bb2 diff --git a/CMakeLists.txt b/CMakeLists.txt index af193c27ae..e5e54cc8cf 100644 --- a/CMakeLists.txt +++ b/CMakeLists.txt @@ -94,6 +94,11 @@ endif() if(NOT WITH_GPU) add_definitions(-DPADDLE_ONLY_CPU) add_definitions(-DHPPL_STUB_FUNC) + + if(WITH_DSO) + add_definitions(-DPADDLE_USE_DSO) + endif(WITH_DSO) + list(APPEND CMAKE_CXX_SOURCE_FILE_EXTENSIONS cu) else() if(${CUDA_VERSION_MAJOR} GREATER 6) diff --git a/cmake/util.cmake b/cmake/util.cmake index a8282f0718..11641f6064 100644 --- a/cmake/util.cmake +++ b/cmake/util.cmake @@ -148,6 +148,11 @@ function(link_paddle_exe TARGET_NAME) target_link_libraries(${TARGET_NAME} rt) endif() endif() + + if(NOT WITH_DSO) + target_link_libraries(${TARGET_NAME} + ${WARPCTC_LIBRARY}) + endif() endfunction() # link_paddle_test diff --git a/paddle/cuda/CMakeLists.txt 
b/paddle/cuda/CMakeLists.txt index 11dbfb54b2..7e45d3d578 100755 --- a/paddle/cuda/CMakeLists.txt +++ b/paddle/cuda/CMakeLists.txt @@ -15,20 +15,29 @@ else() endif() set(CUDA_CXX_WITH_GPU_SOURCES + src/hl_cudart_wrap.cc src/hl_cuda_cublas.cc src/hl_cuda_cudnn.cc - src/hl_cuda_device.cc) + src/hl_cuda_device.cc + ) -set_source_files_properties(${CUDA_CXX_WITH_GPU_SOURCES} - PROPERTIES COMPILE_FLAGS "-D__NVCC__") +if(WITH_GPU) + set(CUDA_CXX_SOURCES + src/hl_dso_loader.cc + src/hl_warpctc_wrap.cc + ${CUDA_CXX_WITH_GPU_SOURCES}) + + set_source_files_properties(${CUDA_CXX_SOURCES} + PROPERTIES COMPILE_FLAGS "-D__NVCC__") +else() + set(CUDA_CXX_SOURCES + src/hl_dso_loader.cc + src/hl_warpctc_wrap.cc) +endif() set_source_files_properties(${AVX_SOURCES} PROPERTIES COMPILE_FLAGS "-mavx") -set(CUDA_DSO_SOURCES - src/hl_dso_loader.cc - src/hl_cudart_wrap.cc) - set(CUDA_CU_SOURCES src/hl_perturbation_util.cu src/hl_cuda_aggregate.cu @@ -44,6 +53,7 @@ set(CUDA_CU_SOURCES set(CUDA_HEADERS include/hl_time.h include/hl_dso_loader.h + include/hl_warpctc_wrap.h include/hl_sequence.h include/hl_cuda_cublas.h include/hl_batch_transpose.h @@ -75,14 +85,14 @@ if(WITH_GPU) cuda_add_library(paddle_cuda ${CUDA_SOURCES} ${CUDA_CU_SOURCES} - ${CUDA_DSO_SOURCES} - ${CUDA_CXX_WITH_GPU_SOURCES}) + ${CUDA_CXX_SOURCES}) else() - add_library(paddle_cuda ${CUDA_SOURCES}) + add_library(paddle_cuda + ${CUDA_SOURCES} + ${CUDA_CXX_SOURCES}) endif() add_style_check_target(paddle_cuda ${CUDA_SOURCES} ${CUDA_HEADERS} - ${CUDA_DSO_SOURCES} - ${CUDA_CXX_WITH_GPU_SOURCES}) + ${CUDA_CXX_SOURCES}) diff --git a/paddle/cuda/include/hl_dso_loader.h b/paddle/cuda/include/hl_dso_loader.h index 1eb9f9ca88..c52066e3d7 100644 --- a/paddle/cuda/include/hl_dso_loader.h +++ b/paddle/cuda/include/hl_dso_loader.h @@ -18,10 +18,6 @@ limitations under the License. */ #include #include #include -#include -#include -#include -#include #include "hl_base.h" /** @@ -56,4 +52,12 @@ void GetCudartDsoHandle(void** dso_handle); */ void GetCurandDsoHandle(void** dso_handle); +/** + * @brief load the DSO of warp-ctc + * + * @param **dso_handle dso handler + * + */ +void GetWarpctcDsoHandle(void** dso_handle); + #endif // HL_DSO_LOADER_H_ diff --git a/paddle/cuda/include/hl_gpu.h b/paddle/cuda/include/hl_gpu.h index 3be0df3b93..6dd6d13212 100644 --- a/paddle/cuda/include/hl_gpu.h +++ b/paddle/cuda/include/hl_gpu.h @@ -25,6 +25,7 @@ limitations under the License. */ #include "hl_sparse.h" #include "hl_lstm.h" #include "hl_sequence.h" +#include "hl_warpctc_wrap.h" #ifdef HPPL_STUB_FUNC #include "stub/hl_cuda_stub.h" diff --git a/paddle/cuda/include/hl_sequence.h b/paddle/cuda/include/hl_sequence.h index bb5124df44..b98d7bdeaf 100644 --- a/paddle/cuda/include/hl_sequence.h +++ b/paddle/cuda/include/hl_sequence.h @@ -172,6 +172,39 @@ extern void hl_sequence2batch_add(real* batch, int batchCount, bool seq2batch); +/** + * @brief Memory copy from sequence to batch, + * while padding all sequences to the same length. + * + * if seq2batch == true + * + * copy from sequence to batch: + * batch[i] = sequence[sequenceStartPositions[i]] + * + * if seq2batch == false + * + * copy from batch to sequence: + * sequence[sequenceStartPositions[i]] = batch[i] + * + * @param[in,out] batch batch matrix. + * @param[in,out] sequence sequence matrix. + * @param[in] sequenceStartPositions index vector. + * @param[in] sequenceWidth width of sequence. + * @param[in] maxSequenceLength maximum length of sequences. + * @param[in] numSequences number of sequences. 
+ * @param[in] normByTimes whether dividing sequence's length. + * @param[in] seq2batch copy direction. + * + */ +extern void hl_sequence2batch_copy_padding(real* batch, + real* sequence, + const int* sequenceStartPositions, + const size_t sequenceWidth, + const size_t maxSequenceLength, + const size_t numSequences, + bool normByTimes, + bool seq2batch); + /** * @brief dst = Op(src), src is sequence. * diff --git a/paddle/cuda/include/hl_warpctc_wrap.h b/paddle/cuda/include/hl_warpctc_wrap.h new file mode 100644 index 0000000000..9d2379a024 --- /dev/null +++ b/paddle/cuda/include/hl_warpctc_wrap.h @@ -0,0 +1,94 @@ +/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. */ + +#ifndef HL_WARPCTC_WRAP_H_ +#define HL_WARPCTC_WRAP_H_ + +#include "hl_base.h" +/// #include "hl_cuda.h" +#include "warp-ctc/include/ctc.h" + +typedef ctcStatus_t hl_warpctc_status_t; +typedef ctcOptions hl_warpctc_options_t; + +/** + * @brief Init ctc options. + * + * @param[in] blank blank label used in ctc loss function. + * @param[in] useGpu whether use gpu. + * @param[out] options handle to store cpu or gpu informations. + * + */ +extern void hl_warpctc_init(const size_t blank, + bool useGpu, + hl_warpctc_options_t* options); + +/** + * @brief Compute the connectionist temporal classification loss, + * and optionally compute the gradient with respect to the inputs. + * + * if batchGrad == nullptr + * + * only compute the ctc loss. + * + * if batchGrad != nullptr + * + * compute both ctc loss and gradient. + * + * @param[in] batchInput batch matrix of input probabilities, + * in maxSequenceLength x numSequence x numClasses + * (row-major) format. + * @param[out] batchGrad batch matrix of gradient. + * @param[in] cpuLabels labels always in CPU memory. + * @param[in] cpuLabelLengths length of all labels in CPU memory. + * @param[in] cpuInputLengths length of all sequences in CPU memory. + * @param[in] numClasses number of possible output symbols. + * @param[in] numSequences number of sequence. + * @param[out] cpuCosts cost of each sequence in CPU memory. + * @param[out] workspace workspace to store some temporary results. + * @param[in] options handle to store cpu or gpu informations. + * + */ +extern void hl_warpctc_compute_loss(const real* batchInput, + real* batchGrad, + const int* cpuLabels, + const int* cpuLabelLengths, + const int* cpuInputLengths, + const size_t numClasses, + const size_t numSequences, + real* cpuCosts, + void* workspace, + hl_warpctc_options_t* options); + +/** + * @brief Compute the required workspace size. + * There is no memory allocated operations within warp-ctc. + * + * @param[in] cpuLabelLengths length of all labels in CPU memory. + * @param[in] cpuInputLengths length of all sequences in CPU memory. + * @param[in] numClasses number of possible output symbols. + * @param[in] numSequences number of sequence. + * @param[in] options handle to store cpu or gpu informations. 
+ * @param[out] bytes pointer to a scalar where the memory + * requirement in bytes will be placed. + * + */ +extern void hl_warpctc_get_workspace_size(const int* cpuLabelLengths, + const int* cpuInputLengths, + const size_t numClasses, + const size_t numSequences, + hl_warpctc_options_t* options, + size_t* bytes); + +#endif // HL_WARPCTC_WRAP_H_ diff --git a/paddle/cuda/include/stub/hl_sequence_stub.h b/paddle/cuda/include/stub/hl_sequence_stub.h index 381f0a6f26..3343463a8d 100644 --- a/paddle/cuda/include/stub/hl_sequence_stub.h +++ b/paddle/cuda/include/stub/hl_sequence_stub.h @@ -70,6 +70,15 @@ inline void hl_sequence2batch_add(real* batch, int batchCount, bool seq2batch) {} +inline void hl_sequence2batch_copy_padding(real* batch, + real* sequence, + const int* sequenceStartPositions, + const size_t sequenceWidth, + const size_t maxSequenceLength, + const size_t numSequences, + bool normByTimes, + bool seq2batch) {} + inline void hl_sequence_avg_forward(real* dst, real* src, const int* starts, diff --git a/paddle/cuda/src/hl_cuda_sequence.cu b/paddle/cuda/src/hl_cuda_sequence.cu index 63824eaa4c..0f1d720439 100644 --- a/paddle/cuda/src/hl_cuda_sequence.cu +++ b/paddle/cuda/src/hl_cuda_sequence.cu @@ -447,6 +447,124 @@ void hl_sequence2batch_add(real *batch, CHECK_SYNC("hl_sequence2batch_add failed"); } +template +__global__ +void KeSequence2BatchPadding(real* batch, + real* sequence, + const int* sequenceStartPositions, + const size_t sequenceWidth, + const size_t maxSequenceLength, + const size_t numSequences) { + int batchIdx = blockIdx.y; + int sequenceStart = sequenceStartPositions[batchIdx]; + int sequenceLength = sequenceStartPositions[batchIdx + 1] - sequenceStart; + + int sequenceIdx = blockIdx.x * blockDim.y + threadIdx.y; + int batchBaseIdx = (sequenceIdx * numSequences + batchIdx) * sequenceWidth; + int sequenceBaseIdx = (sequenceStart + sequenceIdx) * sequenceWidth; + + if (sequenceIdx < sequenceLength) { + if (seq2batch) { + /* sequence -> batch */ + if (normByTimes) { + real scale = 1.0f / (real)sequenceLength; + for (int i = threadIdx.x; i < sequenceWidth; i += blockDim.x) { + batch[batchBaseIdx + i] = scale * sequence[sequenceBaseIdx + i]; + } + } else { + for (int i = threadIdx.x; i < sequenceWidth; i += blockDim.x) { + batch[batchBaseIdx + i] = sequence[sequenceBaseIdx + i]; + } + } + } else { + /* batch -> sequence */ + if (normByTimes) { + real scale = 1.0f / (real)sequenceLength; + for (int i = threadIdx.x; i < sequenceWidth; i += blockDim.x) { + sequence[sequenceBaseIdx + i] = scale * batch[batchBaseIdx + i]; + } + } else { + for (int i = threadIdx.x; i < sequenceWidth; i += blockDim.x) { + sequence[sequenceBaseIdx + i] = batch[batchBaseIdx + i]; + } + } + } + } else if (sequenceIdx < maxSequenceLength) { + if (seq2batch) { + /* sequence -> batch */ + for (int i = threadIdx.x; i < sequenceWidth; i += blockDim.x) { + batch[batchBaseIdx + i] = 0; + } + } + } +} + +void hl_sequence2batch_copy_padding(real* batch, + real* sequence, + const int* sequenceStartPositions, + const size_t sequenceWidth, + const size_t maxSequenceLength, + const size_t numSequences, + bool normByTimes, + bool seq2batch) { + CHECK_NOTNULL(batch); + CHECK_NOTNULL(sequence); + CHECK_NOTNULL(sequenceStartPositions); + + if (!normByTimes && numSequences == 1) { + size_t elementCount = maxSequenceLength * sequenceWidth; + if (seq2batch) { + /* sequence -> batch */ + hl_memcpy_device2device(batch, sequence, sizeof(real) * elementCount); + } else { + /* batch -> sequence */ + 
hl_memcpy_device2device(sequence, batch, sizeof(real) * elementCount); + } + return; + } + + const int CUDA_BLOCK_SIZE = 512; + + /* At least use 32 threads to copy sequenceWidth elements, + and at least 8 elements for each thread. */ + int blockDimX = ((((sequenceWidth + 7) >> 3) + 31) >> 5) << 5; + blockDimX = (blockDimX < CUDA_BLOCK_SIZE) ? blockDimX : CUDA_BLOCK_SIZE; + + int blockDimY = CUDA_BLOCK_SIZE / blockDimX; + dim3 threads(blockDimX, blockDimY); + + int gridDimX = (maxSequenceLength * blockDimX + CUDA_BLOCK_SIZE - 1) / + CUDA_BLOCK_SIZE; + int gridDimY = numSequences; + dim3 grid(gridDimX, gridDimY); + + if (seq2batch) { + /* sequence -> batch */ + if (normByTimes) { + KeSequence2BatchPadding<1, 1><<< grid, threads, 0, STREAM_DEFAULT >>>( + batch, sequence, sequenceStartPositions, + sequenceWidth, maxSequenceLength, numSequences); + } else { + KeSequence2BatchPadding<0, 1><<< grid, threads, 0, STREAM_DEFAULT >>>( + batch, sequence, sequenceStartPositions, + sequenceWidth, maxSequenceLength, numSequences); + } + } else { + /* batch -> sequence */ + if (normByTimes) { + KeSequence2BatchPadding<1, 0><<< grid, threads, 0, STREAM_DEFAULT >>>( + batch, sequence, sequenceStartPositions, + sequenceWidth, maxSequenceLength, numSequences); + } else { + KeSequence2BatchPadding<0, 0><<< grid, threads, 0, STREAM_DEFAULT >>>( + batch, sequence, sequenceStartPositions, + sequenceWidth, maxSequenceLength, numSequences); + } + } + + CHECK_SYNC("hl_sequence2batch_copy_padding failed"); +} + __device__ inline float my_rsqrt(float x) { return rsqrtf(x); } diff --git a/paddle/cuda/src/hl_cudart_wrap.cc b/paddle/cuda/src/hl_cudart_wrap.cc index ff6b830b7a..a95f5557af 100644 --- a/paddle/cuda/src/hl_cudart_wrap.cc +++ b/paddle/cuda/src/hl_cudart_wrap.cc @@ -15,6 +15,7 @@ limitations under the License. */ #ifdef PADDLE_USE_DSO #include +#include #include "hl_dso_loader.h" /** diff --git a/paddle/cuda/src/hl_dso_loader.cc b/paddle/cuda/src/hl_dso_loader.cc index 1a3ce08619..a6ea2a3b9f 100644 --- a/paddle/cuda/src/hl_dso_loader.cc +++ b/paddle/cuda/src/hl_dso_loader.cc @@ -30,6 +30,8 @@ P_DEFINE_string(cuda_dir, "build-in function in cudart already ran before main entry). " "If default, dlopen will search cuda from LD_LIBRARY_PATH"); +P_DEFINE_string(warpctc_dir, "", "Specify path for loading libwarpctc.so."); + static inline std::string join(const std::string& part1, const std::string& part2) { // directory separator @@ -92,27 +94,28 @@ static inline void GetDsoHandleFromSearchPath(const std::string& search_root, *dso_handle = dlopen(dlPath.c_str(), dynload_flags); // if not found, search from default path if (nullptr == *dso_handle) { - LOG(WARNING) << "Failed to find cuda library: " << dlPath; + LOG(WARNING) << "Failed to find dynamic library: " << dlPath << " (" + << dlerror() << ")"; dlPath = dso_name; GetDsoHandleFromDefaultPath(dlPath, dso_handle, dynload_flags); } } - CHECK(nullptr != *dso_handle) << "Failed to find cuda library: " << dlPath - << std::endl + CHECK(nullptr != *dso_handle) << "Failed to find dynamic library: " << dlPath + << " (" << dlerror() << ") \n" << "Please specify its path correctly using " - "one of the following ways: \n" // NOLINT + "one of the following ways: \n" << "Method 1. set cuda and cudnn lib path at " "runtime. 
" << "http://www.paddlepaddle.org/doc/ui/" "cmd_argument/" - "argument_outline.html \n" // NOLINT + "argument_outline.html \n" << "For instance, issue command: paddle train " "--use_gpu=1 " << "--cuda_dir=/usr/local/cuda/lib64 " "--cudnn_dir=/usr/local/cudnn/lib " - "...\n" // NOLINT + "...\n" << "Method 2. set environment variable " "LD_LIBRARY_PATH on Linux or " @@ -124,7 +127,7 @@ static inline void GetDsoHandleFromSearchPath(const std::string& search_root, "DYLD_LIBRARY_PATH is impossible " << "unless System Integrity Protection (SIP) " "is disabled. However, " - "method 1 " // NOLINT + "method 1 " << "always work well."; } @@ -159,3 +162,11 @@ void GetCurandDsoHandle(void** dso_handle) { GetDsoHandleFromSearchPath(FLAGS_cuda_dir, "libcurand.so", dso_handle); #endif } + +void GetWarpctcDsoHandle(void** dso_handle) { +#if defined(__APPLE__) || defined(__OSX__) + GetDsoHandleFromSearchPath(FLAGS_warpctc_dir, "libwarpctc.dylib", dso_handle); +#else + GetDsoHandleFromSearchPath(FLAGS_warpctc_dir, "libwarpctc.so", dso_handle); +#endif +} diff --git a/paddle/cuda/src/hl_warpctc_wrap.cc b/paddle/cuda/src/hl_warpctc_wrap.cc new file mode 100644 index 0000000000..99db0f242d --- /dev/null +++ b/paddle/cuda/src/hl_warpctc_wrap.cc @@ -0,0 +1,157 @@ +/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. */ + +#include +#include "hl_warpctc_wrap.h" +#include "hl_dso_loader.h" +#include "paddle/utils/Logging.h" + +namespace dynload { + +std::once_flag warpctc_dso_flag; +void* warpctc_dso_handle = nullptr; + +/** + * The following macro definition can generate structs + * (for each function) to dynamic load warpctc routine + * via operator overloading. When PADDLE_USE_DSO is + * false, you need to add the path of libwarp-ctc.so to + * the linked-libs of paddle or to LD_PRELOAD. + */ +#ifdef PADDLE_USE_DSO +#define DYNAMIC_LOAD_WARPCTC_WRAP(__name, __type) \ + struct DynLoad__##__name { \ + template \ + __type operator()(Args... args) { \ + typedef __type (*warpctcFunc)(Args...); \ + std::call_once( \ + warpctc_dso_flag, GetWarpctcDsoHandle, &warpctc_dso_handle); \ + void* p_##_name = dlsym(warpctc_dso_handle, #__name); \ + return reinterpret_cast(p_##_name)(args...); \ + } \ + } __name; // struct DynLoad__##__name +#else +#define DYNAMIC_LOAD_WARPCTC_WRAP(__name, __type) \ + struct DynLoad__##__name { \ + template \ + __type operator()(Args... 
args) { \ + return __name(args...); \ + } \ + } __name; // struct DynLoad__##__name +#endif + +// include all needed warp-ctc functions +DYNAMIC_LOAD_WARPCTC_WRAP(get_warpctc_version, int) +DYNAMIC_LOAD_WARPCTC_WRAP(ctcGetStatusString, const char*) +DYNAMIC_LOAD_WARPCTC_WRAP(compute_ctc_loss, hl_warpctc_status_t) +DYNAMIC_LOAD_WARPCTC_WRAP(get_workspace_size, hl_warpctc_status_t) + +#undef DYNAMIC_LOAD_WARPCTC_WRAP + +} /* namespace dynload */ + +#define WARPCTC_GET_VERSION dynload::get_warpctc_version +#define WARPCTC_GET_STATUS_STRING dynload::ctcGetStatusString + +#ifndef PADDLE_TYPE_DOUBLE +#define WARPCTC_COMPUTE_LOSS dynload::compute_ctc_loss +#define WARPCTC_GET_WORKSPACE_SIZE dynload::get_workspace_size +#else +#define WARPCTC_LOG_FATAL \ + LOG(FATAL) << "warp-ctc [version " << g_warpctcVersion \ + << "] Error: not support double precision." +#define WARPCTC_COMPUTE_LOSS(...) WARPCTC_LOG_FATAL(__VA_ARGS__) +#define WARPCTC_GET_WORKSPACE_SIZE(...) WARPCTC_LOG_FATAL(__VA_ARGS__) +#endif + +/** + * Check build-in warp-ctc function using glog and it also + * support << operator for more details error info. + */ +static int g_warpctcVersion = -1; +#define CHECK_WARPCTC(warpctcStat) \ + CHECK_EQ(CTC_STATUS_SUCCESS, warpctcStat) \ + << "warp-ctc [version " << g_warpctcVersion \ + << "] Error: " << WARPCTC_GET_STATUS_STRING(warpctcStat) << " " + +void hl_warpctc_init(const size_t blank, + bool useGpu, + hl_warpctc_options_t* options) { + CHECK_NOTNULL(options); + + g_warpctcVersion = WARPCTC_GET_VERSION(); + + if (useGpu) { +#ifdef __NVCC__ + options->loc = CTC_GPU; + options->stream = STREAM_DEFAULT; +#else + LOG(FATAL) << "[warpctc init] GPU is not enabled."; +#endif + } else { + options->loc = CTC_CPU; + options->num_threads = 1; + } + + options->blank_label = blank; +} + +void hl_warpctc_compute_loss(const real* batchInput, + real* batchGrad, + const int* cpuLabels, + const int* cpuLabelLengths, + const int* cpuInputLengths, + const size_t numClasses, + const size_t numSequences, + real* cpuCosts, + void* workspace, + hl_warpctc_options_t* options) { + CHECK_NOTNULL(batchInput); + CHECK_NOTNULL(cpuLabels); + CHECK_NOTNULL(cpuLabelLengths); + CHECK_NOTNULL(cpuInputLengths); + CHECK_NOTNULL(cpuCosts); + CHECK_NOTNULL(workspace); + CHECK_NOTNULL(options); + + CHECK_WARPCTC(WARPCTC_COMPUTE_LOSS(batchInput, + batchGrad, + cpuLabels, + cpuLabelLengths, + cpuInputLengths, + numClasses, + numSequences, + cpuCosts, + workspace, + *options)); +} + +void hl_warpctc_get_workspace_size(const int* cpuLabelLengths, + const int* cpuInputLengths, + const size_t numClasses, + const size_t numSequences, + hl_warpctc_options_t* options, + size_t* bytes) { + CHECK_NOTNULL(cpuLabelLengths); + CHECK_NOTNULL(cpuInputLengths); + CHECK_NOTNULL(options); + CHECK_NOTNULL(bytes); + + CHECK_WARPCTC(WARPCTC_GET_WORKSPACE_SIZE(cpuLabelLengths, + cpuInputLengths, + numClasses, + numSequences, + *options, + bytes)); +} diff --git a/paddle/gserver/layers/WarpCTCLayer.cpp b/paddle/gserver/layers/WarpCTCLayer.cpp new file mode 100644 index 0000000000..b99e9b9c7a --- /dev/null +++ b/paddle/gserver/layers/WarpCTCLayer.cpp @@ -0,0 +1,229 @@ +/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. 
+You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. */ + +#include "WarpCTCLayer.h" + +namespace paddle { + +REGISTER_LAYER(warp_ctc, WarpCTCLayer); + +bool WarpCTCLayer::init(const LayerMap& layerMap, + const ParameterMap& parameterMap) { + /* Initialize the basic parament class */ + Layer::init(layerMap, parameterMap); + + CHECK_EQ(inputLayers_.size(), 2UL); + + /* The inputLayers_[0] must be sequence output without softmax */ + numClasses_ = config_.size(); + CHECK_GE(numClasses_, 2UL); + CHECK_EQ(numClasses_, inputLayers_[0]->getSize()); + + blank_ = config_.blank(); + CHECK_GE(blank_, 0UL); + CHECK_LT(blank_, numClasses_); + + normByTimes_ = config_.norm_by_times(); + + // We don't need sequenceStartPositions because each sample of output_ is + // for the cost of one sequence. + setNeedSequenceInfo(false); + + return true; +} + +void WarpCTCLayer::forward(PassType passType) { + Layer::forward(passType); + + const Argument& output = getInput(0); + const Argument& labels = getInput(1); + + CHECK(output.sequenceStartPositions); + CHECK(labels.sequenceStartPositions); + CHECK(labels.ids); + + size_t numSequences = labels.sequenceStartPositions->getSize() - 1; + CHECK_EQ(numSequences, output.sequenceStartPositions->getSize() - 1); + + resizeOutput(numSequences, 1); + + const int* cpuLabelStartPositions = + labels.sequenceStartPositions->getData(false); + const int* cpuOutputStartPositions = + output.sequenceStartPositions->getData(false); + + std::vector cpuLabelLengths(numSequences); + std::vector cpuOutputLengths(numSequences); + for (size_t i = 0; i < numSequences; i++) { + cpuLabelLengths[i] = + cpuLabelStartPositions[i + 1] - cpuLabelStartPositions[i]; + cpuOutputLengths[i] = + cpuOutputStartPositions[i + 1] - cpuOutputStartPositions[i]; + } + + /* Get the maximum sequence length */ + maxSequenceLength_ = 0; + maxSequenceLength_ = *std::max_element( + cpuOutputLengths.data(), cpuOutputLengths.data() + numSequences); + + Matrix::resizeOrCreate(batchValue_, + /* height */ numSequences * maxSequenceLength_, + /* width */ numClasses_, + /* trans */ false, + /* useGpu */ useGpu_); + + Matrix::resizeOrCreate(batchGrad_, + /* height */ numSequences * maxSequenceLength_, + /* width */ numClasses_, + /* trans */ false, + /* useGpu */ useGpu_); + batchGrad_->zeroMem(); + + seq2batchPadding(output.value, batchValue_, output.sequenceStartPositions); + + /* labels always in CPU memory */ + IVector::resizeOrCreate(cpuLabels_, + /* size */ (labels.ids)->getSize(), + /* useGpu */ false); + cpuLabels_->copyFrom(*(labels.ids)); + + /* labels always in CPU memory */ + Matrix::resizeOrCreate(cpuCosts_, + /* width */ numSequences, + /* height */ 1, + /* trans */ false, + /* useGpu */ false); + + /* Init warp-ctc options */ + hl_warpctc_options_t options; + hl_warpctc_init(blank_, useGpu_, &options); + + /* Get the needed workspace size */ + size_t workspaceBytes = 0; + hl_warpctc_get_workspace_size(cpuLabelLengths.data(), + cpuOutputLengths.data(), + numClasses_, + numSequences, + &options, + &workspaceBytes); + CHECK_GT(workspaceBytes, 0UL); + + size_t workspaceLength = workspaceBytes / sizeof(real) + 1; + Vector::resizeOrCreate(workspace_, + /* size */ 
workspaceLength, + /* useGpu */ useGpu_); + + hl_warpctc_compute_loss(batchValue_->getData(), + batchGrad_->getData(), + cpuLabels_->getData(), + cpuLabelLengths.data(), + cpuOutputLengths.data(), + numClasses_, + numSequences, + cpuCosts_->getData(), + workspace_->getData(), + &options); + + /* Copy the costs */ + output_.value->copyFrom(*cpuCosts_); +} + +void WarpCTCLayer::backward(const UpdateCallback& callback) { + (void)callback; + + const Argument& output = getInput(0); + CHECK(batchGrad_); + + batch2seqPadding( + output.grad, batchGrad_, output.sequenceStartPositions, normByTimes_); +} + +void WarpCTCLayer::seq2batchPadding(const MatrixPtr& seqValue, + MatrixPtr& batchValue, + const ICpuGpuVectorPtr& seqStartPositions) { + size_t numSequences = seqStartPositions->getSize() - 1; + const int* seqStartPositionsData = seqStartPositions->getData(useGpu_); + + real* seqData = seqValue->getData(); + real* batchData = batchValue->getData(); + if (useGpu_) { + hl_sequence2batch_copy_padding(batchData, + seqData, + seqStartPositionsData, + numClasses_, + maxSequenceLength_, + numSequences, + false, + true); + } else { + for (size_t i = 0; i < maxSequenceLength_; i++) { + for (size_t j = 0; j < numSequences; j++) { + size_t sequenceStart = seqStartPositionsData[j]; + size_t sequenceLength = + seqStartPositionsData[j + 1] - seqStartPositionsData[j]; + if (i < sequenceLength) { + memcpy(batchData + (i * numSequences + j) * numClasses_, + seqData + (sequenceStart + i) * numClasses_, + numClasses_ * sizeof(real)); + } else { + memset(batchData + (i * numSequences + j) * numClasses_, + 0, + numClasses_ * sizeof(real)); + } + } + } + } +} + +void WarpCTCLayer::batch2seqPadding(const MatrixPtr& seqValue, + MatrixPtr& batchValue, + const ICpuGpuVectorPtr& seqStartPositions, + bool normByTimes) { + size_t numSequences = seqStartPositions->getSize() - 1; + const int* seqStartPositionsData = seqStartPositions->getData(useGpu_); + + real* seqData = seqValue->getData(); + real* batchData = batchValue->getData(); + if (useGpu_) { + hl_sequence2batch_copy_padding(batchData, + seqData, + seqStartPositionsData, + numClasses_, + maxSequenceLength_, + numSequences, + normByTimes, + false); + } else { + for (size_t i = 0; i < numSequences; i++) { + int sequenceStart = seqStartPositionsData[i]; + int sequenceLength = + seqStartPositionsData[i + 1] - seqStartPositionsData[i]; + for (int j = 0; j < sequenceLength; j++) { + if (normByTimes) { + for (size_t k = 0; k < numClasses_; k++) { + seqData[(sequenceStart + j) * numClasses_ + k] = + batchData[(j * numSequences + i) * numClasses_ + k] / + sequenceLength; + } + } else { + memcpy(seqData + (sequenceStart + j) * numClasses_, + batchData + (j * numSequences + i) * numClasses_, + numClasses_ * sizeof(real)); + } + } + } + } +} + +} // namespace paddle diff --git a/paddle/gserver/layers/WarpCTCLayer.h b/paddle/gserver/layers/WarpCTCLayer.h new file mode 100644 index 0000000000..1b0f5ba267 --- /dev/null +++ b/paddle/gserver/layers/WarpCTCLayer.h @@ -0,0 +1,65 @@ +/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+See the License for the specific language governing permissions and +limitations under the License. */ + +#pragma once + +#include "Layer.h" + +namespace paddle { + +/** + * @brief A layer integrating the open-source warp-ctc library + * to compute connectionist + * temporal classification cost. + * + * The config file api is warp_ctc_layer. + */ +class WarpCTCLayer : public Layer { +public: + explicit WarpCTCLayer(const LayerConfig& config) : Layer(config) {} + ~WarpCTCLayer() {} + + virtual bool init(const LayerMap& layerMap, const ParameterMap& parameterMap); + virtual void forward(PassType passType); + virtual void backward(const UpdateCallback& callback); + +protected: + /** + * sequence matrix and batch matrix copy: + * sequence (s0, s0, s0, s0; s1, s1; s2, s2, s2; s3) + * batch (s0, s1, s2, s3; s0, s1, s2, 0; s0, 0, s2, 0; s0, 0, 0, 0) + */ + void seq2batchPadding(const MatrixPtr& seqValue, + MatrixPtr& batchValue, + const ICpuGpuVectorPtr& seqStartPositions); + void batch2seqPadding(const MatrixPtr& seqValue, + MatrixPtr& batchValue, + const ICpuGpuVectorPtr& seqStartPositions, + bool normByTimes); + +protected: + size_t numClasses_; + size_t blank_; + size_t maxSequenceLength_; + bool normByTimes_; + + MatrixPtr batchValue_; + MatrixPtr batchGrad_; + VectorPtr workspace_; + + IVectorPtr cpuLabels_; + MatrixPtr cpuCosts_; +}; + +} // namespace paddle diff --git a/paddle/gserver/tests/CMakeLists.txt b/paddle/gserver/tests/CMakeLists.txt index 0651d0b473..5427dc062d 100644 --- a/paddle/gserver/tests/CMakeLists.txt +++ b/paddle/gserver/tests/CMakeLists.txt @@ -62,6 +62,13 @@ add_unittest(test_RecurrentLayer test_RecurrentLayer.cpp TestUtil.cpp) +############### test_WarpCTCLayer ####################### +if(NOT WITH_DOUBLE) + add_unittest(test_WarpCTCLayer + test_WarpCTCLayer.cpp + TestUtil.cpp) +endif() + ############### test_RecurrentGradientMachine ############### # TODO(yuyang18): There is some bug in test_RecurrentGradientMachine # I will fix it. diff --git a/paddle/gserver/tests/test_WarpCTCLayer.cpp b/paddle/gserver/tests/test_WarpCTCLayer.cpp new file mode 100644 index 0000000000..5289c9892c --- /dev/null +++ b/paddle/gserver/tests/test_WarpCTCLayer.cpp @@ -0,0 +1,247 @@ +/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
*/ + +#include +#include +#include "paddle/gserver/layers/Layer.h" +#include "paddle/gserver/layers/DataLayer.h" +#include "paddle/gserver/layers/CTCLayer.h" +#include "paddle/gserver/layers/WarpCTCLayer.h" +#include "ModelConfig.pb.h" + +#include "TestUtil.h" + +using namespace paddle; // NOLINT +using namespace std; // NOLINT + +P_DECLARE_bool(use_gpu); + +const real* getData(const Matrix& matrix) { + if (matrix.useGpu()) { + MatrixPtr cpuMatrix = Matrix::create( + matrix.getWidth(), matrix.getHeight(), matrix.isTransposed(), false); + cpuMatrix->copyFrom(matrix); + return cpuMatrix->getData(); + } else { + return matrix.getData(); + } +} + +void checkError(const Matrix& matrix1, const Matrix& matrix2) { + CHECK_EQ(matrix1.getHeight(), matrix2.getHeight()); + CHECK_EQ(matrix1.getWidth(), matrix2.getWidth()); + CHECK_EQ(matrix1.isTransposed(), matrix2.isTransposed()); +#ifndef PADDLE_TYPE_DOUBLE + real err = 1e-3; +#else + real err = 1e-10; +#endif + + int height = matrix1.getHeight(); + int width = matrix1.getWidth(); + + const real* data1 = getData(matrix1); + const real* data2 = getData(matrix2); + int count = 0; + for (int i = 0; i < height; i++) { + for (int j = 0; j < width; j++) { + if (fabs(data1[i * width + j] - data2[i * width + j]) > err) { + count++; + } + } + } + EXPECT_EQ(count, 0) << "There are " << count << " different element."; +} + +void initArgument(size_t batchSize, + int layerSize, + bool useGpu, + Argument& data) { + data.value = Matrix::create(batchSize, layerSize, false, useGpu); + data.grad = Matrix::create(batchSize, layerSize, false, useGpu); + data.value->randomizeUniform(); + data.value->add(-0.5); + /// data.value->sigmoid(*data.value); + data.grad->zeroMem(); + + generateSequenceStartPositions(batchSize, data.sequenceStartPositions); +} + +LayerPtr createDataLayer( + string name, size_t batchSize, int layerSize, bool useGpu, Argument& data) { + LayerConfig layerConfig; + layerConfig.set_name(name); + layerConfig.set_type("data"); + layerConfig.set_size(layerSize); + LayerPtr layer = LayerPtr(new DataLayer(layerConfig)); + + DataLayerPtr dataLayer = std::dynamic_pointer_cast(layer); + dataLayer->setData(data); + dataLayer->forward(PASS_GC); + + /// std::cout << "dataLayer: " << std::endl; + /// (dataLayer->getOutput().value)->print(std::cout); + + return layer; +} + +LayerPtr createLabelLayer(string name, + size_t batchSize, + size_t numClasses, + bool useGpu) { + LayerConfig layerConfig; + layerConfig.set_name(name); + layerConfig.set_type("data"); + layerConfig.set_size(1); + LayerPtr layer = LayerPtr(new DataLayer(layerConfig)); + + Argument data; + data.ids = IVector::create(batchSize, useGpu); + data.ids->rand(numClasses - 1); + + generateSequenceStartPositions(batchSize, data.sequenceStartPositions); + + DataLayerPtr labelLayer = std::dynamic_pointer_cast(layer); + labelLayer->setData(data); + labelLayer->forward(PASS_GC); + + return layer; +} + +LayerPtr createCTCLayer(string name, + size_t numClasses, + bool useGpu, + bool normByTimes, + LayerPtr dataLayer, + LayerPtr labelLayer) { + LayerMap layerMap; + layerMap[dataLayer->getName()] = dataLayer; + layerMap[labelLayer->getName()] = labelLayer; + + ParameterMap parameterMap; + + LayerConfig layerConfig; + layerConfig.set_name(name); + layerConfig.set_type("ctc"); + layerConfig.set_size(numClasses); + layerConfig.set_norm_by_times(normByTimes); + + layerConfig.add_inputs(); + LayerInputConfig& input0 = *(layerConfig.mutable_inputs(0)); + input0.set_input_layer_name(dataLayer->getName()); + + 
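+  // The second input wired up below is the label sequence from the label layer.
+  // Note: unlike warp_ctc, which integrates softmax internally, the plain "ctc"
+  // reference layer is fed probabilities, so this helper applies a softmax
+  // activation to the data layer's output before forward() and propagates its
+  // gradient after backward() further down.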
layerConfig.add_inputs(); + LayerInputConfig& input1 = *(layerConfig.mutable_inputs(1)); + input1.set_input_layer_name(labelLayer->getName()); + + LayerPtr layer = LayerPtr(new CTCLayer(layerConfig)); + layerMap[layer->getName()] = layer; + layer->init(layerMap, parameterMap); + + ActivationFunction* softmaxActivation = ActivationFunction::create("softmax"); + + softmaxActivation->forward(dataLayer->getOutput()); + layer->forward(PASS_GC); + + layer->backward(); + softmaxActivation->backward(dataLayer->getOutput()); + + return layer; +} + +LayerPtr createWarpCTCLayer(string name, + size_t numClasses, + bool useGpu, + bool normByTimes, + LayerPtr dataLayer, + LayerPtr labelLayer) { + LayerMap layerMap; + layerMap[dataLayer->getName()] = dataLayer; + layerMap[labelLayer->getName()] = labelLayer; + + ParameterMap parameterMap; + + LayerConfig layerConfig; + layerConfig.set_name(name); + layerConfig.set_type("warp_ctc"); + layerConfig.set_size(numClasses); + layerConfig.set_blank(numClasses - 1); + layerConfig.set_norm_by_times(normByTimes); + + layerConfig.add_inputs(); + LayerInputConfig& input0 = *(layerConfig.mutable_inputs(0)); + input0.set_input_layer_name(dataLayer->getName()); + + layerConfig.add_inputs(); + LayerInputConfig& input1 = *(layerConfig.mutable_inputs(1)); + input1.set_input_layer_name(labelLayer->getName()); + + LayerPtr layer = LayerPtr(new WarpCTCLayer(layerConfig)); + layerMap[layer->getName()] = layer; + layer->init(layerMap, parameterMap); + + layer->forward(PASS_GC); + layer->backward(); + + return layer; +} + +TEST(Layer, WarpCTCLayer) { + for (auto layerSize : {10, 64, 128}) { + for (auto batchSize : {1, 10, 20, 64}) { + for (auto useGpu : {false, true}) { +#ifdef PADDLE_ONLY_CPU + if (useGpu) continue; +#endif + LOG(INFO) << " layerSize=" << layerSize << " batchSize=" << batchSize + << " useGpu=" << useGpu; + + FLAGS_use_gpu = useGpu; + + Argument data0; + initArgument(batchSize, layerSize, useGpu, data0); + + Argument data1; + data1.resizeAndCopyFrom(data0); + + LayerPtr dataLayer0 = + createDataLayer("data", batchSize, layerSize, useGpu, data0); + LayerPtr dataLayer1 = + createDataLayer("data", batchSize, layerSize, useGpu, data1); + + LayerPtr labelLayer = + createLabelLayer("label", batchSize, layerSize, useGpu); + + LayerPtr warpctcLayer = createWarpCTCLayer( + "cost", layerSize, useGpu, false, dataLayer0, labelLayer); + LayerPtr ctcLayer = createCTCLayer( + "cost", layerSize, useGpu, false, dataLayer1, labelLayer); + + /// Check loss + checkError(*(warpctcLayer->getOutput().value), + *(ctcLayer->getOutput().value)); + + /// Check gradients + checkError(*(dataLayer0->getOutput().grad), + *(dataLayer1->getOutput().grad)); + } + } + } +} + +int main(int argc, char** argv) { + testing::InitGoogleTest(&argc, argv); + initMain(argc, argv); + return RUN_ALL_TESTS(); +} diff --git a/proto/ModelConfig.proto.m4 b/proto/ModelConfig.proto.m4 index 68a5eb9dd2..08108a4666 100644 --- a/proto/ModelConfig.proto.m4 +++ b/proto/ModelConfig.proto.m4 @@ -414,6 +414,8 @@ sinclude(`ModelConfigLayer.proto.m4') // to string and reinterpreted in the user's own layer implementation. 
optional string user_arg = 49; + // For WarpCTCLayer + optional uint32 blank = 50 [default = 0]; } message EvaluatorConfig { diff --git a/python/paddle/trainer/config_parser.py b/python/paddle/trainer/config_parser.py index 9db42bf172..e987ad17d6 100644 --- a/python/paddle/trainer/config_parser.py +++ b/python/paddle/trainer/config_parser.py @@ -2993,6 +2993,27 @@ class CTCLayer(LayerBase): config_assert(len(self.inputs) == 2, 'CTCLayer must have 2 inputs') +@config_layer('warp_ctc') +class WarpCTCLayer(LayerBase): + def __init__(self, + name, + size, + inputs, + blank=0, + norm_by_times=False, + device=None): + super(WarpCTCLayer, self).__init__( + name, 'warp_ctc', size=size, inputs=inputs, device=device) + self.config.blank = blank + self.config.norm_by_times = norm_by_times + config_assert(len(self.inputs) == 2, 'WarpCTCLayer must have 2 inputs') + input_layer = self.get_input_layer(0) + config_assert( + (input_layer.active_type == '' or + input_layer.active_type == 'linear'), + "Expecting the active_type of input layer to be linear or null") + + @config_layer('recurrent_layer_group') class RecurrentLayerGroup(LayerBase): def __init__(self, name, device=None): diff --git a/python/paddle/trainer_config_helpers/layers.py b/python/paddle/trainer_config_helpers/layers.py index 9a45a51589..888d48722a 100644 --- a/python/paddle/trainer_config_helpers/layers.py +++ b/python/paddle/trainer_config_helpers/layers.py @@ -91,6 +91,7 @@ __all__ = [ 'linear_comb_layer', 'convex_comb_layer', 'ctc_layer', + 'warp_ctc_layer', 'crf_layer', 'crf_decoding_layer', 'nce_layer', @@ -169,6 +170,7 @@ class LayerType(object): PRINT_LAYER = "print" CTC_LAYER = "ctc" + WARP_CTC_LAYER = "warp_ctc" CRF_LAYER = "crf" CRF_DECODING_LAYER = "crf_decoding" NCE_LAYER = 'nce' @@ -4085,6 +4087,83 @@ def ctc_layer(input, return LayerOutput(name, LayerType.CTC_LAYER, [input, label], size=size) +@wrap_name_default() +@layer_support() +def warp_ctc_layer(input, + label, + size=None, + name=None, + blank=0, + norm_by_times=False, + layer_attr=None): + """ + A layer intergrating the open-source `warp-ctc + ` library, which is used in + `Deep Speech 2: End-toEnd Speech Recognition in English and Mandarin + `, to compute Connectionist Temporal + Classification (CTC) loss. + + More details of CTC can be found by referring to `Connectionist Temporal + Classification: Labelling Unsegmented Sequence Data with Recurrent + Neural Networks `_ + + Note: + - Let num_classes represent the category number. Considering the 'blank' + label needed by CTC, you need to use (num_classes + 1) as the input size. + Thus, the size of both warp_ctc_layer and 'input' layer should be set to + num_classes + 1. + - You can set 'blank' to [0, num_classes - 1], which should be consistent + as that used in your labels. + - As a native 'softmax' activation is interated to the warp-ctc library, + 'linear' activation is expected instead in the 'input' layer. + + The simple usage: + + .. code-block:: python + + ctc = warp_ctc_layer(input=input, + label=label, + size=1001, + blank=1000, + norm_by_times=False) + + :param input: The input layer. + :type input: LayerOutput + :param label: The data layer of label with variable length. + :type label: LayerOutput + :param size: category numbers + 1. + :type size: int + :param name: The name of this layer, which can not specify. + :type name: basestring|None + :param blank: the 'blank' label used in ctc + :type blank: int + :param norm_by_times: Whether to normalization by times. False by default. 
+ :type norm_by_times: bool + :param layer_attr: Extra Layer config. + :type layer_attr: ExtraLayerAttribute|None + :return: LayerOutput object. + :rtype: LayerOutput + """ + assert isinstance(input, LayerOutput) + assert isinstance(label, LayerOutput) + if label.size is not None: + if size is not None: + assert size == label.size + 1 + else: + size = label.size + 1 + Layer( + name=name, + type=LayerType.WARP_CTC_LAYER, + size=size, + blank=blank, + norm_by_times=norm_by_times, + inputs=[input.name, label.name], + **ExtraLayerAttribute.to_kwargs(layer_attr)) + return LayerOutput( + name, LayerType.WARP_CTC_LAYER, parents=[input, label], size=size) + + @wrap_name_default() @wrap_param_attr_default() @layer_support() diff --git a/python/paddle/trainer_config_helpers/tests/configs/protostr/test_cost_layers.protostr b/python/paddle/trainer_config_helpers/tests/configs/protostr/test_cost_layers.protostr index f6045fe1f6..10e59e21bc 100644 --- a/python/paddle/trainer_config_helpers/tests/configs/protostr/test_cost_layers.protostr +++ b/python/paddle/trainer_config_helpers/tests/configs/protostr/test_cost_layers.protostr @@ -47,6 +47,20 @@ layers { } norm_by_times: false } +layers { + name: "__warp_ctc_layer_0__" + type: "warp_ctc" + size: 5001 + active_type: "" + inputs { + input_layer_name: "input" + } + inputs { + input_layer_name: "labels" + } + norm_by_times: false + blank: 0 +} layers { name: "crf_label" type: "data" @@ -244,6 +258,7 @@ input_layer_names: "xe-label" input_layer_names: "huber_probs" input_layer_names: "huber_label" output_layer_names: "__ctc_layer_0__" +output_layer_names: "__warp_ctc_layer_0__" output_layer_names: "__crf_layer_0__" output_layer_names: "__rank_cost_0__" output_layer_names: "__lambda_cost_0__" @@ -260,6 +275,7 @@ sub_models { layer_names: "xe-label" layer_names: "__fc_layer_0__" layer_names: "__ctc_layer_0__" + layer_names: "__warp_ctc_layer_0__" layer_names: "crf_label" layer_names: "__crf_layer_0__" layer_names: "left" @@ -289,6 +305,7 @@ sub_models { input_layer_names: "huber_probs" input_layer_names: "huber_label" output_layer_names: "__ctc_layer_0__" + output_layer_names: "__warp_ctc_layer_0__" output_layer_names: "__crf_layer_0__" output_layer_names: "__rank_cost_0__" output_layer_names: "__lambda_cost_0__" diff --git a/python/paddle/trainer_config_helpers/tests/configs/test_cost_layers.py b/python/paddle/trainer_config_helpers/tests/configs/test_cost_layers.py index fd979a1e9f..18ff6b48c4 100644 --- a/python/paddle/trainer_config_helpers/tests/configs/test_cost_layers.py +++ b/python/paddle/trainer_config_helpers/tests/configs/test_cost_layers.py @@ -12,6 +12,8 @@ hidden = fc_layer(input=seq_in, size=4) outputs( ctc_layer( input=seq_in, label=labels), + warp_ctc_layer( + input=seq_in, label=labels, blank=0), crf_layer( input=hidden, label=data_layer( name='crf_label', size=4)), From 18645134bd07d7a1658c08102d51af8a5d0195cd Mon Sep 17 00:00:00 2001 From: Yu Yang Date: Tue, 29 Nov 2016 16:16:28 +0800 Subject: [PATCH 024/265] Follow comments --- doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst | 6 +++--- .../tests/sequence_nest_rnn_multi_unequalength_inputs.py | 4 ++-- 2 files changed, 5 insertions(+), 5 deletions(-) diff --git a/doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst b/doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst index a13d4728a9..7ae9f5ef8e 100644 --- a/doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst +++ b/doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst @@ -4,14 +4,14 @@ 单双层RNN API对比介绍 ##################### -这篇教程主要介绍了\ :ref:`glossary_双层RNN`\ 
的API接口。本文中的以PaddlePaddle的\ :ref:`glossary_双层RNN`\ 单元测试为示例,用多对效果完全相同的、分别使用单、双层RNN作为网络配置的模型,来讲解如何使用\ :ref:`glossary_双层RNN`\ 。本文中所有的例子,都只是介绍\ :ref:`glossary_双层RNN`\ 的API接口,并不是使用\ :ref:`glossary_双层RNN`\ 解决实际的问题。如果想要了解\ :ref:`glossary_双层RNN`\ 在具体问题中的使用,请参考\ :ref:`algo_hrnn_demo`\ 。文章中示例所使用的单元测试文件是\ `test_RecurrentGradientMachine.cpp `_\ 。 +这篇教程主要介绍了\ :ref:`glossary_双层RNN`\ 的API接口。本文以PaddlePaddle的\ :ref:`glossary_双层RNN`\ 单元测试为示例,用多对效果完全相同的、分别使用单双层RNN作为网络配置的模型,来讲解如何使用\ :ref:`glossary_双层RNN`\ 。本文中所有的例子,都只是介绍\ :ref:`glossary_双层RNN`\ 的API接口,并不是使用\ :ref:`glossary_双层RNN`\ 解决实际的问题。如果想要了解\ :ref:`glossary_双层RNN`\ 在具体问题中的使用,请参考\ :ref:`algo_hrnn_demo`\ 。本文中示例所使用的单元测试文件是\ `test_RecurrentGradientMachine.cpp `_\ 。 示例1:双层RNN,子序列间无Memory ================================ 在\ :ref:`glossary_双层RNN`\ 中的经典情况是将内层的每一个\ :ref:`glossary_sequence`\ 数据,分别进行序列操作。并且内层的序列操作之间是独立没有依赖的,即不需要使用\ :ref:`glossary_Memory`\ 的。 -在本问题中,单层\ :ref:`glossary_RNN`\ 和\ :ref:`glossary_双层RNN`\ 的网络配置,都是将每一句分好词后的句子,使用LSTM作为encoder,压缩成一个向量。区别是\ :ref:`glossary_RNN`\ 使用两层序列模型,将多句话看成一个整体,同时使用encoder压缩,二者语意上完全一致。这组语意相同的示例配置如下 +在本示例中,单层\ :ref:`glossary_RNN`\ 和\ :ref:`glossary_双层RNN`\ 的网络配置,都是将每一句分好词后的句子,使用LSTM作为encoder,压缩成一个向量。区别是\ :ref:`glossary_RNN`\ 使用两层序列模型,将多句话看成一个整体,同时使用encoder压缩,二者语意上完全一致。这组语意相同的示例配置如下 * 单层\ :ref:`glossary_RNN`\: `sequence_layer_group.conf `_ * :ref:`glossary_双层RNN`\: `sequence_nest_layer_group.conf `_ @@ -22,7 +22,7 @@ 首先,本示例中使用的原始数据如下\: -- 本里中的原始数据一共有10个样本。每个样本由两部分组成,一个label(此处都为2)和一个已经分词后的句子。这个数据也被单层\ :ref:`glossary_RNN`\ 网络直接使用。 +- 本例中的原始数据一共有10个样本。每个样本由两部分组成,一个label(此处都为2)和一个已经分词后的句子。这个数据也被单层\ :ref:`glossary_RNN`\ 网络直接使用。 .. literalinclude:: ../../../paddle/gserver/tests/Sequence/tour_train_wdseg :language: text diff --git a/paddle/gserver/tests/sequence_nest_rnn_multi_unequalength_inputs.py b/paddle/gserver/tests/sequence_nest_rnn_multi_unequalength_inputs.py index bf88d00f2d..163fce956e 100644 --- a/paddle/gserver/tests/sequence_nest_rnn_multi_unequalength_inputs.py +++ b/paddle/gserver/tests/sequence_nest_rnn_multi_unequalength_inputs.py @@ -64,8 +64,8 @@ def outer_step(x1, x2): last = last_seq(name="outer_rnn_state_%d" % i, input=encoder) return encoder, last - _, sentence_last_state1 = inner_step(ipt=x1) - encoder2, _ = inner_step(ipt=x2) + encoder1, sentence_last_state1 = inner_step(ipt=x1) + encoder2, sentence_last_state2 = inner_step(ipt=x2) encoder1_expand = expand_layer( input=sentence_last_state1, expand_as=encoder2) From 63df7ae65d5f003f0166d5a403991b73081db83f Mon Sep 17 00:00:00 2001 From: liaogang Date: Wed, 30 Nov 2016 12:51:13 +0800 Subject: [PATCH 025/265] Refine docker install doc and FAQ for gpu driver --- .../build_and_install/docker_install.rst | 33 +++++++++++++------ .../install/docker_install.rst | 33 ++++++++++++++----- doc_cn/faq/index.rst | 15 +++++++++ 3 files changed, 63 insertions(+), 18 deletions(-) diff --git a/doc/getstarted/build_and_install/docker_install.rst b/doc/getstarted/build_and_install/docker_install.rst index e95de35f4d..88d8357304 100644 --- a/doc/getstarted/build_and_install/docker_install.rst +++ b/doc/getstarted/build_and_install/docker_install.rst @@ -56,25 +56,38 @@ The PaddlePaddle images don't contain any entry command. You need to write your Download and Run Docker images ------------------------------ -You have to install Docker in your machine which has linux kernel version 3.10+ first. You can refer to the official guide https://docs.docker.com/engine/installation/ for further information. +Currently, Docker is supported on macOS, Windows and Linux distributions. 
Please check out `Install Docker Engine `_ to find out much more details. -You can use :code:`docker pull ` to download images first, or just launch a container with :code:`docker run` \: +PaddlePaddle on CPU +..................... -.. code-block:: bash +You can use :code:`docker pull ` to download images, or directly launch a container with :code:`docker run` \: - docker run -it paddledev/paddle:cpu-latest + .. code-block:: bash + docker run -it paddledev/paddle:cpu-latest -If you want to launch container with GPU support, you need to set some environment variables at the same time: +PaddlePaddle on GPU +..................... -.. code-block:: bash +To build GPU version, you will need the following installed: + + .. code-block:: bash + + 1. a CUDA-capable GPU + 2. NVIDIA CUDA Toolkit (available at http://developer.nvidia.com/cuda-downloads) + + +Then, you will need to mount related CUDA driver and library into container. + + .. code-block:: bash - export CUDA_SO="$(\ls /usr/lib64/libcuda* | xargs -I{} echo '-v {}:{}') $(\ls /usr/lib64/libnvidia* | xargs -I{} echo '-v {}:{}')" - export DEVICES=$(\ls /dev/nvidia* | xargs -I{} echo '--device {}:{}') - docker run ${CUDA_SO} ${DEVICES} -it paddledev/paddle:gpu-latest + export CUDA_SO="$(\ls /usr/lib64/libcuda* | xargs -I{} echo '-v {}:{}') $(\ls /usr/lib64/libnvidia* | xargs -I{} echo '-v {}:{}')" + export DEVICES=$(\ls /dev/nvidia* | xargs -I{} echo '--device {}:{}') + docker run ${CUDA_SO} ${DEVICES} -it paddledev/paddle:gpu-latest -Some notes for docker +Some notes for Docker --------------------- Performance diff --git a/doc_cn/build_and_install/install/docker_install.rst b/doc_cn/build_and_install/install/docker_install.rst index 40339659be..90a5c93709 100644 --- a/doc_cn/build_and_install/install/docker_install.rst +++ b/doc_cn/build_and_install/install/docker_install.rst @@ -60,21 +60,38 @@ mac osx或者是windows机器,请参考 `mac osx的安装文档 `_ 和 `windows 的安装文档 `_ 。 + +启动CPU版Docker镜像 +................... + 您可以使用 :code:`docker pull` 命令预先下载镜像,也可以直接执行 :code:`docker run` 命令运行镜像。执行方法如下: -.. code-block:: bash + .. code-block:: bash + + $ docker run -it paddledev/paddlepaddle:cpu-latest + +即可启动和进入PaddlePaddle的container。 + +启动GPU版Docker镜像 +................... + +首先, 请参考以下链接,在机器上安装CUDA Toolkit。 + + .. code-block:: bash - $ docker run -it paddledev/paddle:cpu-latest + NVIDIA CUDA Toolkit (available at http://developer.nvidia.com/cuda-downloads) -即可启动和进入PaddlePaddle的container。如果运行GPU版本的PaddlePaddle,则需要先将 -cuda相关的Driver和设备映射进container中,脚本类似于 +其次,需要将cuda相关的驱动和设备映射进container中,脚本类似于 -.. code-block:: bash + .. code-block:: bash + + $ export CUDA_SO="$(\ls /usr/lib64/libcuda* | xargs -I{} echo '-v {}:{}') $(\ls /usr/lib64/libnvidia* | xargs -I{} echo '-v {}:{}')" + $ export DEVICES=$(\ls /dev/nvidia* | xargs -I{} echo '--device {}:{}') + $ docker run ${CUDA_SO} ${DEVICES} -it paddledev/paddlepaddle:latest-gpu - $ export CUDA_SO="$(\ls /usr/lib64/libcuda* | xargs -I{} echo '-v {}:{}') $(\ls /usr/lib64/libnvidia* | xargs -I{} echo '-v {}:{}')" - $ export DEVICES=$(\ls /dev/nvidia* | xargs -I{} echo '--device {}:{}') - $ docker run ${CUDA_SO} ${DEVICES} -it paddledev/paddle:gpu-latest +使用PaddlePaddle +.................. 
进入Docker container后,运行 :code:`paddle version` 即可打印出PaddlePaddle的版本和构建 信息。安装完成的PaddlePaddle主体包括三个部分, :code:`paddle` 脚本, python的 diff --git a/doc_cn/faq/index.rst b/doc_cn/faq/index.rst index 551430eb41..838fa651d8 100644 --- a/doc_cn/faq/index.rst +++ b/doc_cn/faq/index.rst @@ -202,3 +202,18 @@ PaddlePaddle的参数使用名字 :code:`name` 作为参数的ID,相同名字 解决办法是: * 卸载PaddlePaddle包 :code:`pip uninstall paddle`, 清理掉老旧的PaddlePaddle安装包,使得单元测试有一个干净的环境。如果PaddlePaddle包已经在python的site-packages里面,单元测试会引用site-packages里面的python包,而不是源码目录里 :code:`/python` 目录下的python包。同时,即便设置 :code:`PYTHONPATH` 到 :code:`/python` 也没用,因为python的搜索路径是优先已经安装的python包。 + + +9. 运行Docker GPU镜像出现 "CUDA driver version is insufficient" +---------------------------------------------------------------- + +用户在使用PaddlePaddle GPU的Docker镜像的时候,常常出现 `Cuda Error: CUDA driver version is insufficient for CUDA runtime version`, 原因在于没有把机器上CUDA相关的驱动和库映射到容器内部。 +具体的解决方法是: + +.. code-block:: bash + + $ export CUDA_SO="$(\ls usr/lib64/libcuda* | xargs -I{} echo '-v {}:{}') $(\ls /usr/lib64/libnvidia* | xargs -I{} echo '-v {}:{}')" + $ export DEVICES=$(\ls /dev/nvidia* | xargs -I{} echo '--device {}:{}') + $ docker run ${CUDA_SO} ${DEVICES} -it paddledev/paddlepaddle:latest-gpu + +更多关于Docker的安装与使用, 请参考 `PaddlePaddle Docker 文档 `_ 。 From 9d72cab0a4323a6d96bdc443f9cbac5c5658edbc Mon Sep 17 00:00:00 2001 From: dangqingqing Date: Wed, 30 Nov 2016 11:55:00 +0800 Subject: [PATCH 026/265] Accelerating image processing for CNN --- CMakeLists.txt | 4 + plugin/opencv/CMakeLists.txt | 39 ++++++ plugin/opencv/DataTransformer.cpp | 179 +++++++++++++++++++++++++ plugin/opencv/DataTransformer.h | 123 +++++++++++++++++ plugin/opencv/PyDecodejpeg.cpp | 173 ++++++++++++++++++++++++ python/paddle/utils/image_multiproc.py | 170 +++++++++++++++++++++++ python/paddle/utils/image_util.py | 31 +++-- 7 files changed, 705 insertions(+), 14 deletions(-) create mode 100644 plugin/opencv/CMakeLists.txt create mode 100644 plugin/opencv/DataTransformer.cpp create mode 100644 plugin/opencv/DataTransformer.h create mode 100644 plugin/opencv/PyDecodejpeg.cpp create mode 100644 python/paddle/utils/image_multiproc.py diff --git a/CMakeLists.txt b/CMakeLists.txt index af193c27ae..40f18f1550 100644 --- a/CMakeLists.txt +++ b/CMakeLists.txt @@ -195,3 +195,7 @@ if(WITH_DOC) add_subdirectory(doc) add_subdirectory(doc_cn) endif() + +if(USE_OPENCV) + add_subdirectory(plugin/opencv) +endif() diff --git a/plugin/opencv/CMakeLists.txt b/plugin/opencv/CMakeLists.txt new file mode 100644 index 0000000000..4a253f346a --- /dev/null +++ b/plugin/opencv/CMakeLists.txt @@ -0,0 +1,39 @@ +# use opencv plugin + +project(DeJpeg CXX C) +set(CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH} "${CMAKE_SOURCE_DIR}/cmake") +set(PROJ_ROOT ${CMAKE_SOURCE_DIR}) +list(APPEND CMAKE_MODULE_PATH ${PROJECT_SOURCE_DIR}/cmake/Modules) +set(DEJPEG_LINKER_LIBS "") + +# opencv +find_package(OpenCV REQUIRED COMPONENTS core highgui imgproc) +include_directories(${OpenCV_INCLUDE_DIRS}) +list(APPEND DEJPEG_LINKER_LIBS ${OpenCV_LIBS}) +message(STATUS "OpenCV found (${OpenCV_CONFIG_PATH})") +add_definitions(-DUSE_OPENCV) + +# boost-python +set(Boost_NO_SYSTEM_PATHS ON) +if (Boost_NO_SYSTEM_PATHS) + set(BOOST_ROOT $ENV{BOOST_ROOT}) + set(Boost_DIR ${BOOST_ROOT}) + set(Boost_INCLUDE_DIR "${BOOST_ROOT}/include") + set(Boost_LIBRARIES "${BOOST_ROOT}/lib/") +endif (Boost_NO_SYSTEM_PATHS) +find_package(Boost 1.46 COMPONENTS python) +include_directories(SYSTEM ${Boost_INCLUDE_DIR}) +link_directories(${Boost_INCLUDE_DIR}) +message(STATUS "Boost found 
(${Boost_INCLUDE_DIR})") +message(STATUS "Boost found (${Boost_LIBRARIES})") +list(APPEND DEJPEG_LINKER_LIBS ${Boost_LIBRARIES}) + + +file(GLOB DEJPEG_HEADER "${CMAKE_CURRENT_SOURCE_DIR}" "*.h") +file(GLOB DEJPEG_SOURCES "${CMAKE_CURRENT_SOURCE_DIR}" "*.cpp") + +set(CMAKE_CXX_FLAGS "-std=c++11 -O3 -fPIC -Wno-unused-parameter") + +add_library(DeJpeg SHARED ${DEJPEG_SOURCES}) +target_link_libraries(DeJpeg ${DEJPEG_LINKER_LIBS}) +set_target_properties(DeJpeg PROPERTIES PREFIX "") diff --git a/plugin/opencv/DataTransformer.cpp b/plugin/opencv/DataTransformer.cpp new file mode 100644 index 0000000000..f4e21db886 --- /dev/null +++ b/plugin/opencv/DataTransformer.cpp @@ -0,0 +1,179 @@ +/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. */ + +#include "DataTransformer.h" +#include +#include + +DataTransformer::DataTransformer(int threadNum, + int capacity, + bool isTest, + bool isColor, + int cropHeight, + int cropWidth, + int imgSize, + bool isEltMean, + bool isChannelMean, + float* meanValues) + : isTest_(isTest), + isColor_(isColor), + cropHeight_(cropHeight), + cropWidth_(cropWidth), + imgSize_(imgSize), + capacity_(capacity), + prefetchFree_(capacity), + prefetchFull_(capacity) { + fetchCount_ = -1; + scale_ = 1.0; + isChannelMean_ = isChannelMean; + isEltMean_ = isEltMean; + loadMean(meanValues); + + imgPixels_ = cropHeight * cropWidth * (isColor_ ? 3 : 1); + + prefetch_.reserve(capacity); + for (int i = 0; i < capacity; i++) { + auto d = std::make_shared(new float[imgPixels_ * 3], 0); + prefetch_.push_back(d); + memset(prefetch_[i]->first, 0, imgPixels_ * sizeof(float)); + prefetchFree_.enqueue(prefetch_[i]); + } + + numThreads_ = 12; + syncThreadPool_.reset(new SyncThreadPool(numThreads_, false)); +} + +void DataTransformer::loadMean(float* values) { + if (values) { + int c = isColor_ ? 3 : 1; + int sz = isChannelMean_ ? c : cropHeight_ * cropWidth_ * c; + meanValues_ = new float[sz]; + memcpy(meanValues_, values, sz * sizeof(float)); + } +} + +void DataTransformer::startFetching(const char* src, + const int size, + float* trg) { + vector imbuf(src, src + size); + int cvFlag = (isColor_ ? 
CV_LOAD_IMAGE_COLOR : CV_LOAD_IMAGE_GRAYSCALE); + cv::Mat im = cv::imdecode(cv::Mat(imbuf), cvFlag); + if (!im.data) { + LOG(ERROR) << "Could not decode image"; + LOG(ERROR) << im.channels() << " " << im.rows << " " << im.cols; + } + this->transform(im, trg); +} + +int DataTransformer::Rand(int min, int max) { + std::random_device source; + std::mt19937 rng(source()); + std::uniform_int_distribution dist(min, max); + return dist(rng); +} + +void DataTransformer::transform(Mat& cvImgOri, float* target) { + const int imgChannels = cvImgOri.channels(); + const int imgHeight = cvImgOri.rows; + const int imgWidth = cvImgOri.cols; + const bool doMirror = (!isTest_) && Rand(0, 1); + int h_off = 0; + int w_off = 0; + int th = imgHeight; + int tw = imgWidth; + cv::Mat img; + if (imgSize_ > 0) { + if (imgHeight > imgWidth) { + tw = imgSize_; + th = int(double(imgHeight) / imgWidth * tw); + th = th > imgSize_ ? th : imgSize_; + } else { + th = imgSize_; + tw = int(double(imgWidth) / imgHeight * th); + tw = tw > imgSize_ ? tw : imgSize_; + } + cv::resize(cvImgOri, img, cv::Size(tw, th)); + } else { + cv::Mat img = cvImgOri; + } + + cv::Mat cv_cropped_img = img; + if (cropHeight_ && cropWidth_) { + if (!isTest_) { + h_off = Rand(0, th - cropHeight_); + w_off = Rand(0, tw - cropWidth_); + } else { + h_off = (th - cropHeight_) / 2; + w_off = (tw - cropWidth_) / 2; + } + cv::Rect roi(w_off, h_off, cropWidth_, cropHeight_); + cv_cropped_img = img(roi); + } else { + CHECK_EQ(cropHeight_, imgHeight); + CHECK_EQ(cropWidth_, imgWidth); + } + int height = cropHeight_; + int width = cropWidth_; + int top_index; + for (int h = 0; h < height; ++h) { + const uchar* ptr = cv_cropped_img.ptr(h); + int img_index = 0; + for (int w = 0; w < width; ++w) { + for (int c = 0; c < imgChannels; ++c) { + if (doMirror) { + top_index = (c * height + h) * width + width - 1 - w; + } else { + top_index = (c * height + h) * width + w; + } + float pixel = static_cast(ptr[img_index++]); + if (isEltMean_) { + int mean_index = (c * imgHeight + h) * imgWidth + w; + target[top_index] = (pixel - meanValues_[mean_index]) * scale_; + } else { + if (isChannelMean_) { + target[top_index] = (pixel - meanValues_[c]) * scale_; + } else { + target[top_index] = pixel * scale_; + } + } + } + } + } // target: BGR +} + +void DataTransformer::start(vector& data, int* datalen, int* labels) { + auto job = [&](int tid, int numThreads) { + for (int i = tid; i < data.size(); i += numThreads) { + DataTypePtr ret = prefetchFree_.dequeue(); + char* buf = data[i]; + int size = datalen[i]; + ret->second = labels[i]; + this->startFetching(buf, size, ret->first); + prefetchFull_.enqueue(ret); + } + }; + syncThreadPool_->exec(job); + fetchCount_ = data.size(); +} + +void DataTransformer::obtain(float* data, int* label) { + fetchCount_--; + if (fetchCount_ < 0) { + LOG(FATAL) << "Empty data"; + } + DataTypePtr ret = prefetchFull_.dequeue(); + *label = ret->second; + memcpy(data, ret->first, sizeof(float) * imgPixels_); + prefetchFree_.enqueue(ret); +} diff --git a/plugin/opencv/DataTransformer.h b/plugin/opencv/DataTransformer.h new file mode 100644 index 0000000000..c4f04a5878 --- /dev/null +++ b/plugin/opencv/DataTransformer.h @@ -0,0 +1,123 @@ +/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. 
+You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. */ + +#include +#include +// #define OPENCV_CAN_BREAK_BINARY_COMPATIBILITY +#include +#include +#include +#include + +#include "paddle/utils/Thread.h" + +using namespace std; +using namespace cv; +using namespace paddle; + +/** + * This is an image processing module with OpenCV, such as + * resizing, scaling, mirroring, substracting the image mean... + * + * This class has a double BlockQueue and they shared the same memory. + * It is used to avoid create memory each time. And it also can + * return the data even if the data are processing in multi-threads. + */ +class DataTransformer { +public: + DataTransformer(int threadNum, + int capacity, + bool isTest, + bool isColor, + int cropHeight, + int cropWidth, + int imgSize, + bool isEltMean, + bool isChannelMean, + float* meanValues); + virtual ~DataTransformer() { + if (meanValues_) { + free(meanValues_); + } + } + + /** + * @brief Start multi-threads to transform a list of input data. + * The processed data will be saved in Queue of prefetchFull_. + * + * @param data Data containing the image string to be transformed. + * @param label The label of input image. + */ + void start(vector& data, int* datalen, int* labels); + + /** + * @brief Applies the transformation on one image Mat. + * + * @param img The input img to be transformed. + * @param target target is used to save the transformed data. + */ + void transform(Mat& img, float* target); + + /** + * @brief Decode the image string, then calls transform() function. + * + * @param src The input image string. + * @param size The length of string. + * @param trg trg is used to save the transformed data. + */ + void startFetching(const char* src, const int size, float* trg); + + /** + * @brief Return the transformed data and its label. + */ + void obtain(float* data, int* label); + +private: + int isTest_; + int isColor_; + int cropHeight_; + int cropWidth_; + int imgSize_; + int capacity_; + int fetchCount_; + bool isEltMean_; + bool isChannelMean_; + int numThreads_; + float scale_; + int imgPixels_; + float* meanValues_; + + /** + * Initialize the mean values. + */ + void loadMean(float* values); + + /** + * @brief Generates a random integer from Uniform({min, min + 1, ..., max}). + * @param min The lower bound (inclusive) value of the random number. + * @param max The upper bound (inclusive) value of the random number. + * + * @return + * A uniformly random integer value from ({min, min + 1, ..., max}). + */ + int Rand(int min, int max); + + typedef pair DataType; + typedef std::shared_ptr DataTypePtr; + std::vector prefetch_; + std::unique_ptr syncThreadPool_; + BlockingQueue prefetchFree_; + BlockingQueue prefetchFull_; + +}; // class DataTransformer diff --git a/plugin/opencv/PyDecodejpeg.cpp b/plugin/opencv/PyDecodejpeg.cpp new file mode 100644 index 0000000000..b004d7cad8 --- /dev/null +++ b/plugin/opencv/PyDecodejpeg.cpp @@ -0,0 +1,173 @@ +/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. 
+You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. */ + +#include +#include +#include +#include +#include +#include +#include + +#include + +#include "DataTransformer.h" + +using namespace boost::python; +using namespace std; + +/** + * DecodeJpeg is an image processing API for interfacing Python and C++ + * code DataTransformer, which used OpenCV and multi-threads to accelerate + * image processing. + * The Boost Python Library is used to wrap C++ interfaces. + */ + +class DecodeJpeg { +public: + /** + * The constructor will create and nitialize an object of DataTransformer. + */ + DecodeJpeg(int threadNum, + int capacity, + bool isTest, + bool isColor, + int resize_min_size, + int cropSizeH, + int cropSizeW, + PyObject* meanValues) { + int channel = isColor ? 3 : 1; + bool isEltMean = false; + bool isChannelMean = false; + float* mean = NULL; + if (meanValues || meanValues != Py_None) { + if (!PyArray_Check(meanValues)) { + LOG(FATAL) << "Object is not a numpy array"; + } + pyTypeCheck(meanValues); + int size = PyArray_SIZE(meanValues); + isChannelMean = (size == channel) ? true : false; + isEltMean = (size == channel * cropSizeH * cropSizeW) ? true : false; + CHECK(isChannelMean != isEltMean); + mean = (float*)PyArray_DATA(meanValues); + } + tfhandlerPtr_ = std::make_shared(threadNum, + capacity, + isTest, + isColor, + cropSizeH, + cropSizeW, + resize_min_size, + isEltMean, + isChannelMean, + mean); + } + + ~DecodeJpeg() {} + + /** + * @brief This function is used to parse the Python object and convert + * the data to C++ format. Then it called the function of + * DataTransformer to start image processing. + * @param pysrc The input image list with string type. + * @param pylabel The input label of image. + * It's type is numpy.array with int32. + */ + void start(boost::python::list& pysrc, PyObject* pydlen, PyObject* pylabel) { + vector data; + int num = len(pysrc); + for (int t = 0; t < num; ++t) { + char* src = boost::python::extract(pysrc[t]); + data.push_back(src); + } + int* dlen = (int*)PyArray_DATA(pydlen); + int* dlabels = (int*)PyArray_DATA(pylabel); + tfhandlerPtr_->start(data, dlen, dlabels); + } + + /** + * @brief Return one processed data. + * @param pytrg The processed image. + * @param pylabel The label of processed image. + */ + void get(PyObject* pytrg, PyObject* pylab) { + pyWritableCheck(pytrg); + pyWritableCheck(pylab); + pyContinuousCheck(pytrg); + pyContinuousCheck(pylab); + float* data = (float*)PyArray_DATA(pytrg); + int* label = (int*)PyArray_DATA(pylab); + tfhandlerPtr_->obtain(data, label); + } + + /** + * @brief An object of DataTransformer, which is used to call + * the image processing funtions. + */ + std::shared_ptr tfhandlerPtr_; + +private: + /** + * @brief Check whether the type of PyObject is valid or not. + */ + void pyTypeCheck(const PyObject* o) { + int typenum = PyArray_TYPE(o); + + // clang-format off + int type = + typenum == NPY_UBYTE ? CV_8U : + typenum == NPY_BYTE ? CV_8S : + typenum == NPY_USHORT ? CV_16U : + typenum == NPY_SHORT ? CV_16S : + typenum == NPY_INT || typenum == NPY_LONG ? CV_32S : + typenum == NPY_FLOAT ? CV_32F : + typenum == NPY_DOUBLE ? 
CV_64F : -1; + // clang-format on + + if (type < 0) { + LOG(FATAL) << "toMat: Data type = " << type << " is not supported"; + } + } + + /** + * @brief Check whether the PyObject is writable or not. + */ + void pyWritableCheck(PyObject* o) { CHECK(PyArray_ISWRITEABLE(o)); } + + /** + * @brief Check whether the PyObject is c-contiguous or not. + */ + void pyContinuousCheck(PyObject* o) { CHECK(PyArray_IS_C_CONTIGUOUS(o)); } +}; + +/** + * @brief Initialize the Python interpreter and numpy. + */ +static void initPython() { + Py_Initialize(); + PyOS_sighandler_t sighandler = PyOS_getsig(SIGINT); + import_array(); + PyOS_setsig(SIGINT, sighandler); +} + +/** + * Use Boost.Python to expose C++ interface to Python. + */ +BOOST_PYTHON_MODULE(DeJpeg) { + initPython(); + class_("DecodeJpeg", + init()) + .def("start", &DecodeJpeg::start) + .def("get", &DecodeJpeg::get); +}; diff --git a/python/paddle/utils/image_multiproc.py b/python/paddle/utils/image_multiproc.py new file mode 100644 index 0000000000..ccc0a531a7 --- /dev/null +++ b/python/paddle/utils/image_multiproc.py @@ -0,0 +1,170 @@ +import os, psutil +import cv2 +from paddle.utils.image_util import * +import multiprocessing +import subprocess, signal, sys + + +class CvImageTransfomer(ImageTransformer): + """ + CvImageTransfomer used python-opencv to process image. + """ + + def __init__(self, + min_size=None, + crop_size=None, + transpose=None, + channel_swap=None, + mean=None, + is_train=True, + is_color=True): + ImageTransformer.__init__(self, transpose, channel_swap, mean, is_color) + self.min_size = min_size + self.crop_size = crop_size + self.is_train = is_train + + def cv_resize_fixed_short_side(self, im, min_size): + row, col = im.shape[:2] + scale = min_size / float(min(row, col)) + if row < col: + row = min_size + col = int(round(col * scale)) + col = col if col > min_size else min_size + else: + col = min_size + row = int(round(row * scale)) + row = row if row > min_size else min_size + resized_size = row, col + im = cv2.resize(im, resized_size, interpolation=cv2.INTER_CUBIC) + return im + + def crop_img(self, im): + """ + Return cropped image. + The size of the cropped image is inner_size * inner_size. 
+ im: (H x W x K) ndarrays + """ + row, col = im.shape[:2] + start_h, start_w = 0, 0 + if self.is_train: + start_h = np.random.randint(0, row - self.crop_size + 1) + start_w = np.random.randint(0, col - self.crop_size + 1) + else: + start_h = (row - self.crop_size) / 2 + start_w = (col - self.crop_size) / 2 + end_h, end_w = start_h + self.crop_size, start_w + self.crop_size + if self.is_color: + im = im[start_h:end_h, start_w:end_w, :] + else: + im = im[start_h:end_h, start_w:end_w] + if (self.is_train) and (np.random.randint(2) == 0): + if self.is_color: + im = im[:, ::-1, :] + else: + im = im[:, ::-1] + return im + + def transform(self, im): + im = self.cv_resize_fixed_short_side(im, self.min_size) + im = self.crop_img(im) + # transpose, swap channel, sub mean + im = im.astype('float32') + ImageTransformer.transformer(self, im) + return im + + def load_image_from_string(self, data): + flag = cv2.CV_LOAD_IMAGE_COLOR if self.is_color else cv2.CV_LOAD_IMAGE_GRAYSCALE + im = cv2.imdecode(np.fromstring(data, np.uint8), flag) + return im + + def transform_from_string(self, data): + im = self.load_image_from_string(data) + return self.transform(im) + + +class MultiProcessImageTransfomer(): + def __init__(self, + procnum=10, + capacity=10240, + min_size=None, + crop_size=None, + transpose=None, + channel_swap=None, + mean=None, + is_train=True, + is_color=True): + self.procnum = procnum + self.capacity = capacity + self.size = 0 + self.count = 0 + signal.signal(signal.SIGTERM, self.kill_child_processes) + self.fetch_queue = multiprocessing.Queue(maxsize=capacity) + self.cv_transformer = CvImageTransfomer(min_size, crop_size, transpose, + channel_swap, mean, is_train, + is_color) + + def __del__(self): + try: + for p in self.procs: + p.join() + except Exception as e: + print str(e) + + def reset(self, size): + self.size = size + self.count = 0 + self.procs = [] + + def run_proc(self, data, label): + dlen = len(label) + self.reset(dlen) + for i in xrange(self.procnum): + start = dlen * i / self.procnum + end = dlen * (i + 1) / self.procnum + proc = multiprocessing.Process( + target=self.batch_transfomer, + args=(data[start:end], label[start:end])) + proc.daemon = True + self.procs.append(proc) + for p in self.procs: + p.start() + + def get(self): + """ + Return one processed image. + """ + # block if necessary until an item is available + data, lab = self.fetch_queue.get(block=True) + self.count += 1 + if self.count == self.size: + try: + for p in self.procs: + p.join() + except Exception as e: + print str(e) + return data, lab + + def batch_transfomer(self, data, label): + """ + param data: input data in format of image string + type data: a list of string + label: the label of image + """ + for i in xrange(len(label)): + res = self.cv_transformer.transform_from_string(data[i]) + self.fetch_queue.put((res, int(label[i]))) + + def kill_child_processes(self, signum, frame): + """ + Kill a process's child processes in python. 
+ """ + parent_id = os.getpid() + ps_command = subprocess.Popen( + "ps -o pid --ppid %d --noheaders" % parent_id, + shell=True, + stdout=subprocess.PIPE) + ps_output = ps_command.stdout.read() + retcode = ps_command.wait() + for pid_str in ps_output.strip().split("\n")[:-1]: + os.kill(int(pid_str), signal.SIGTERM) + sys.exit() diff --git a/python/paddle/utils/image_util.py b/python/paddle/utils/image_util.py index b5c6431c06..e6c6b04de0 100644 --- a/python/paddle/utils/image_util.py +++ b/python/paddle/utils/image_util.py @@ -186,29 +186,32 @@ class ImageTransformer: channel_swap=None, mean=None, is_color=True): - self.transpose = transpose - self.channel_swap = None - self.mean = None self.is_color = is_color + self.set_transpose(transpose) + self.set_channel_swap(channel_swap) + self.set_mean(mean) def set_transpose(self, order): - if self.is_color: - assert 3 == len(order) + if order is not None: + if self.is_color: + assert 3 == len(order) self.transpose = order def set_channel_swap(self, order): - if self.is_color: - assert 3 == len(order) + if order is not None: + if self.is_color: + assert 3 == len(order) self.channel_swap = order def set_mean(self, mean): - # mean value, may be one value per channel - if mean.ndim == 1: - mean = mean[:, np.newaxis, np.newaxis] - else: - # elementwise mean - if self.is_color: - assert len(mean.shape) == 3 + if mean is not None: + # mean value, may be one value per channel + if mean.ndim == 1: + mean = mean[:, np.newaxis, np.newaxis] + else: + # elementwise mean + if self.is_color: + assert len(mean.shape) == 3 self.mean = mean def transformer(self, data): From a816443e118f3abc69d0a474e01135d06d5f638e Mon Sep 17 00:00:00 2001 From: Liu Yiqun Date: Wed, 30 Nov 2016 05:32:16 +0000 Subject: [PATCH 027/265] Add submodule warp-ctc. 
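The warp-ctc library is now tracked as a git submodule (see the .gitmodules
entry below) instead of being assumed to be pre-installed. A typical way to
fetch it after checking out this revision, assuming the default submodule
path, is:

    git submodule update --init --recursive

At runtime libwarpctc is still loaded dynamically through hl_dso_loader, which
searches the directory given by the warpctc_dir flag and otherwise relies on
the library being linked into paddle or preloaded via LD_PRELOAD.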
--- .gitmodules | 3 ++ .pre-commit-config.yaml | 2 + paddle/cuda/include/hl_dso_loader.h | 2 +- paddle/cuda/include/hl_warpctc_wrap.h | 1 - paddle/cuda/src/hl_cuda_sequence.cu | 24 +++------- paddle/cuda/src/hl_dso_loader.cc | 2 +- paddle/cuda/src/hl_warpctc_wrap.cc | 28 +++++------ paddle/gserver/layers/WarpCTCLayer.cpp | 18 +++----- paddle/gserver/tests/test_WarpCTCLayer.cpp | 54 +++++++++++----------- warp-ctc | 1 + 10 files changed, 62 insertions(+), 73 deletions(-) create mode 160000 warp-ctc diff --git a/.gitmodules b/.gitmodules index e69de29bb2..f635e65784 100644 --- a/.gitmodules +++ b/.gitmodules @@ -0,0 +1,3 @@ +[submodule "warp-ctc"] + path = warp-ctc + url = https://github.com/baidu-research/warp-ctc.git diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml index 90c25e4350..942669c41f 100644 --- a/.pre-commit-config.yaml +++ b/.pre-commit-config.yaml @@ -2,6 +2,7 @@ sha: c25201a00e6b0514370501050cf2a8538ac12270 hooks: - id: remove-crlf + files: (?!.*warp-ctc)^.*$ - repo: https://github.com/reyoung/mirrors-yapf.git sha: v0.13.2 hooks: @@ -13,6 +14,7 @@ - id: check-merge-conflict - id: check-symlinks - id: detect-private-key + files: (?!.*warp-ctc)^.*$ - id: end-of-file-fixer - repo: https://github.com/PaddlePaddle/clang-format-pre-commit-hook.git sha: 28c0ea8a67a3e2dbbf4822ef44e85b63a0080a29 diff --git a/paddle/cuda/include/hl_dso_loader.h b/paddle/cuda/include/hl_dso_loader.h index c52066e3d7..e5d3d40311 100644 --- a/paddle/cuda/include/hl_dso_loader.h +++ b/paddle/cuda/include/hl_dso_loader.h @@ -58,6 +58,6 @@ void GetCurandDsoHandle(void** dso_handle); * @param **dso_handle dso handler * */ -void GetWarpctcDsoHandle(void** dso_handle); +void GetWarpCTCDsoHandle(void** dso_handle); #endif // HL_DSO_LOADER_H_ diff --git a/paddle/cuda/include/hl_warpctc_wrap.h b/paddle/cuda/include/hl_warpctc_wrap.h index 9d2379a024..dc50cf9d20 100644 --- a/paddle/cuda/include/hl_warpctc_wrap.h +++ b/paddle/cuda/include/hl_warpctc_wrap.h @@ -16,7 +16,6 @@ limitations under the License. */ #define HL_WARPCTC_WRAP_H_ #include "hl_base.h" -/// #include "hl_cuda.h" #include "warp-ctc/include/ctc.h" typedef ctcStatus_t hl_warpctc_status_t; diff --git a/paddle/cuda/src/hl_cuda_sequence.cu b/paddle/cuda/src/hl_cuda_sequence.cu index 0f1d720439..e83a60ad72 100644 --- a/paddle/cuda/src/hl_cuda_sequence.cu +++ b/paddle/cuda/src/hl_cuda_sequence.cu @@ -463,30 +463,18 @@ void KeSequence2BatchPadding(real* batch, int batchBaseIdx = (sequenceIdx * numSequences + batchIdx) * sequenceWidth; int sequenceBaseIdx = (sequenceStart + sequenceIdx) * sequenceWidth; + real scale = normByTimes ? 
(1.0f / (real)sequenceLength) : 1.0f; + if (sequenceIdx < sequenceLength) { if (seq2batch) { /* sequence -> batch */ - if (normByTimes) { - real scale = 1.0f / (real)sequenceLength; - for (int i = threadIdx.x; i < sequenceWidth; i += blockDim.x) { - batch[batchBaseIdx + i] = scale * sequence[sequenceBaseIdx + i]; - } - } else { - for (int i = threadIdx.x; i < sequenceWidth; i += blockDim.x) { - batch[batchBaseIdx + i] = sequence[sequenceBaseIdx + i]; - } + for (int i = threadIdx.x; i < sequenceWidth; i += blockDim.x) { + batch[batchBaseIdx + i] = scale * sequence[sequenceBaseIdx + i]; } } else { /* batch -> sequence */ - if (normByTimes) { - real scale = 1.0f / (real)sequenceLength; - for (int i = threadIdx.x; i < sequenceWidth; i += blockDim.x) { - sequence[sequenceBaseIdx + i] = scale * batch[batchBaseIdx + i]; - } - } else { - for (int i = threadIdx.x; i < sequenceWidth; i += blockDim.x) { - sequence[sequenceBaseIdx + i] = batch[batchBaseIdx + i]; - } + for (int i = threadIdx.x; i < sequenceWidth; i += blockDim.x) { + sequence[sequenceBaseIdx + i] = scale * batch[batchBaseIdx + i]; } } } else if (sequenceIdx < maxSequenceLength) { diff --git a/paddle/cuda/src/hl_dso_loader.cc b/paddle/cuda/src/hl_dso_loader.cc index a6ea2a3b9f..ce19073626 100644 --- a/paddle/cuda/src/hl_dso_loader.cc +++ b/paddle/cuda/src/hl_dso_loader.cc @@ -163,7 +163,7 @@ void GetCurandDsoHandle(void** dso_handle) { #endif } -void GetWarpctcDsoHandle(void** dso_handle) { +void GetWarpCTCDsoHandle(void** dso_handle) { #if defined(__APPLE__) || defined(__OSX__) GetDsoHandleFromSearchPath(FLAGS_warpctc_dir, "libwarpctc.dylib", dso_handle); #else diff --git a/paddle/cuda/src/hl_warpctc_wrap.cc b/paddle/cuda/src/hl_warpctc_wrap.cc index 99db0f242d..3d3bf46158 100644 --- a/paddle/cuda/src/hl_warpctc_wrap.cc +++ b/paddle/cuda/src/hl_warpctc_wrap.cc @@ -30,32 +30,32 @@ void* warpctc_dso_handle = nullptr; * the linked-libs of paddle or to LD_PRELOAD. */ #ifdef PADDLE_USE_DSO -#define DYNAMIC_LOAD_WARPCTC_WRAP(__name, __type) \ +#define DYNAMIC_LOAD_WARPCTC_WRAP(__name) \ struct DynLoad__##__name { \ template \ - __type operator()(Args... args) { \ - typedef __type (*warpctcFunc)(Args...); \ + auto operator()(Args... args) -> decltype(__name(args...)) { \ + using warpctcFunc = decltype(__name(args...)) (*)(Args...); \ std::call_once( \ - warpctc_dso_flag, GetWarpctcDsoHandle, &warpctc_dso_handle); \ + warpctc_dso_flag, GetWarpCTCDsoHandle, &warpctc_dso_handle); \ void* p_##_name = dlsym(warpctc_dso_handle, #__name); \ return reinterpret_cast(p_##_name)(args...); \ } \ } __name; // struct DynLoad__##__name #else -#define DYNAMIC_LOAD_WARPCTC_WRAP(__name, __type) \ - struct DynLoad__##__name { \ - template \ - __type operator()(Args... args) { \ - return __name(args...); \ - } \ +#define DYNAMIC_LOAD_WARPCTC_WRAP(__name) \ + struct DynLoad__##__name { \ + template \ + auto operator()(Args... 
args) -> decltype(__name(args...)) { \ + return __name(args...); \ + } \ } __name; // struct DynLoad__##__name #endif // include all needed warp-ctc functions -DYNAMIC_LOAD_WARPCTC_WRAP(get_warpctc_version, int) -DYNAMIC_LOAD_WARPCTC_WRAP(ctcGetStatusString, const char*) -DYNAMIC_LOAD_WARPCTC_WRAP(compute_ctc_loss, hl_warpctc_status_t) -DYNAMIC_LOAD_WARPCTC_WRAP(get_workspace_size, hl_warpctc_status_t) +DYNAMIC_LOAD_WARPCTC_WRAP(get_warpctc_version) +DYNAMIC_LOAD_WARPCTC_WRAP(ctcGetStatusString) +DYNAMIC_LOAD_WARPCTC_WRAP(compute_ctc_loss) +DYNAMIC_LOAD_WARPCTC_WRAP(get_workspace_size) #undef DYNAMIC_LOAD_WARPCTC_WRAP diff --git a/paddle/gserver/layers/WarpCTCLayer.cpp b/paddle/gserver/layers/WarpCTCLayer.cpp index b99e9b9c7a..e68363a1b2 100644 --- a/paddle/gserver/layers/WarpCTCLayer.cpp +++ b/paddle/gserver/layers/WarpCTCLayer.cpp @@ -100,8 +100,8 @@ void WarpCTCLayer::forward(PassType passType) { /* labels always in CPU memory */ Matrix::resizeOrCreate(cpuCosts_, - /* width */ numSequences, - /* height */ 1, + /* height */ numSequences, + /* width */ 1, /* trans */ false, /* useGpu */ false); @@ -209,17 +209,11 @@ void WarpCTCLayer::batch2seqPadding(const MatrixPtr& seqValue, int sequenceStart = seqStartPositionsData[i]; int sequenceLength = seqStartPositionsData[i + 1] - seqStartPositionsData[i]; + real scale = normByTimes ? (1.0f / (real)sequenceLength) : 1.0f; for (int j = 0; j < sequenceLength; j++) { - if (normByTimes) { - for (size_t k = 0; k < numClasses_; k++) { - seqData[(sequenceStart + j) * numClasses_ + k] = - batchData[(j * numSequences + i) * numClasses_ + k] / - sequenceLength; - } - } else { - memcpy(seqData + (sequenceStart + j) * numClasses_, - batchData + (j * numSequences + i) * numClasses_, - numClasses_ * sizeof(real)); + for (size_t k = 0; k < numClasses_; k++) { + seqData[(sequenceStart + j) * numClasses_ + k] = + batchData[(j * numSequences + i) * numClasses_ + k] * scale; } } } diff --git a/paddle/gserver/tests/test_WarpCTCLayer.cpp b/paddle/gserver/tests/test_WarpCTCLayer.cpp index 5289c9892c..aba48935a6 100644 --- a/paddle/gserver/tests/test_WarpCTCLayer.cpp +++ b/paddle/gserver/tests/test_WarpCTCLayer.cpp @@ -30,7 +30,7 @@ P_DECLARE_bool(use_gpu); const real* getData(const Matrix& matrix) { if (matrix.useGpu()) { MatrixPtr cpuMatrix = Matrix::create( - matrix.getWidth(), matrix.getHeight(), matrix.isTransposed(), false); + matrix.getHeight(), matrix.getWidth(), matrix.isTransposed(), false); cpuMatrix->copyFrom(matrix); return cpuMatrix->getData(); } else { @@ -200,41 +200,43 @@ LayerPtr createWarpCTCLayer(string name, TEST(Layer, WarpCTCLayer) { for (auto layerSize : {10, 64, 128}) { for (auto batchSize : {1, 10, 20, 64}) { - for (auto useGpu : {false, true}) { + for (auto normByTimes : {false, true}) { + for (auto useGpu : {false, true}) { #ifdef PADDLE_ONLY_CPU - if (useGpu) continue; + if (useGpu) continue; #endif - LOG(INFO) << " layerSize=" << layerSize << " batchSize=" << batchSize - << " useGpu=" << useGpu; + LOG(INFO) << " layerSize=" << layerSize << " batchSize=" << batchSize + << " normByTimes = " << normByTimes << " useGpu=" << useGpu; - FLAGS_use_gpu = useGpu; + FLAGS_use_gpu = useGpu; - Argument data0; - initArgument(batchSize, layerSize, useGpu, data0); + Argument data0; + initArgument(batchSize, layerSize, useGpu, data0); - Argument data1; - data1.resizeAndCopyFrom(data0); + Argument data1; + data1.resizeAndCopyFrom(data0); - LayerPtr dataLayer0 = - createDataLayer("data", batchSize, layerSize, useGpu, data0); - LayerPtr dataLayer1 = - 
createDataLayer("data", batchSize, layerSize, useGpu, data1); + LayerPtr dataLayer0 = + createDataLayer("data", batchSize, layerSize, useGpu, data0); + LayerPtr dataLayer1 = + createDataLayer("data", batchSize, layerSize, useGpu, data1); - LayerPtr labelLayer = - createLabelLayer("label", batchSize, layerSize, useGpu); + LayerPtr labelLayer = + createLabelLayer("label", batchSize, layerSize, useGpu); - LayerPtr warpctcLayer = createWarpCTCLayer( - "cost", layerSize, useGpu, false, dataLayer0, labelLayer); - LayerPtr ctcLayer = createCTCLayer( - "cost", layerSize, useGpu, false, dataLayer1, labelLayer); + LayerPtr warpctcLayer = createWarpCTCLayer( + "cost", layerSize, useGpu, normByTimes, dataLayer0, labelLayer); + LayerPtr ctcLayer = createCTCLayer( + "cost", layerSize, useGpu, normByTimes, dataLayer1, labelLayer); - /// Check loss - checkError(*(warpctcLayer->getOutput().value), - *(ctcLayer->getOutput().value)); + /// Check loss + checkError(*(warpctcLayer->getOutput().value), + *(ctcLayer->getOutput().value)); - /// Check gradients - checkError(*(dataLayer0->getOutput().grad), - *(dataLayer1->getOutput().grad)); + /// Check gradients + checkError(*(dataLayer0->getOutput().grad), + *(dataLayer1->getOutput().grad)); + } } } } diff --git a/warp-ctc b/warp-ctc new file mode 160000 index 0000000000..bd535c8d44 --- /dev/null +++ b/warp-ctc @@ -0,0 +1 @@ +Subproject commit bd535c8d44e03c8ebd2d768e06c8c05fdccd11d2 From f340f37f021274598c2561e2346987ec50463541 Mon Sep 17 00:00:00 2001 From: liaogang Date: Wed, 30 Nov 2016 20:00:51 +0800 Subject: [PATCH 028/265] Change atomicAdd to paddleAtomicAdd --- paddle/cuda/src/hl_cuda_cnn.cu | 9 +++++---- 1 file changed, 5 insertions(+), 4 deletions(-) diff --git a/paddle/cuda/src/hl_cuda_cnn.cu b/paddle/cuda/src/hl_cuda_cnn.cu index ae387a8bc0..7f2f6897b4 100644 --- a/paddle/cuda/src/hl_cuda_cnn.cu +++ b/paddle/cuda/src/hl_cuda_cnn.cu @@ -16,6 +16,7 @@ limitations under the License. 
*/ #include #include "hl_base.h" #include "hl_cnn.h" +#include "hl_device_functions.cuh" __global__ void KeFeature2col(size_t n, size_t height, const real* data_im, size_t blockH, size_t blockW, size_t width, @@ -641,10 +642,10 @@ __global__ void KeBilinearInterpBw(real* in, real* inPos = &in[outIdH * inputW + channelId * inImgSize + inImgIdy * inImgW + inImgIdx]; const real* outPos = &out[outIdH * outputW + outIdW]; - atomicAdd(&inPos[0], h2lambda * w2lambda * outPos[0]); - atomicAdd(&inPos[wId], h2lambda * w1lambda * outPos[0]); - atomicAdd(&inPos[hId * inImgW], h1lambda * w2lambda * outPos[0]); - atomicAdd(&inPos[hId * inImgW + wId], h1lambda * w1lambda * outPos[0]); + paddle::paddleAtomicAdd(&inPos[0], h2lambda * w2lambda * outPos[0]); + paddle::paddleAtomicAdd(&inPos[wId], h2lambda * w1lambda * outPos[0]); + paddle::paddleAtomicAdd(&inPos[hId * inImgW], h1lambda * w2lambda * outPos[0]); + paddle::paddleAtomicAdd(&inPos[hId * inImgW + wId], h1lambda * w1lambda * outPos[0]); } } From fe073d1f2a81c37c22ddc245722014c13e78e7af Mon Sep 17 00:00:00 2001 From: dangqingqing Date: Wed, 30 Nov 2016 20:01:42 +0800 Subject: [PATCH 029/265] Add style check and remove 'using namespace' --- CMakeLists.txt | 1 + plugin/opencv/CMakeLists.txt | 3 +++ plugin/opencv/DataTransformer.cpp | 10 ++++++---- plugin/opencv/DataTransformer.h | 19 ++++++++++--------- plugin/opencv/PyDecodejpeg.cpp | 9 ++++----- 5 files changed, 24 insertions(+), 18 deletions(-) diff --git a/CMakeLists.txt b/CMakeLists.txt index fcb8398517..c9cdfc2c2b 100644 --- a/CMakeLists.txt +++ b/CMakeLists.txt @@ -43,6 +43,7 @@ option(WITH_SWIG_PY "Compile PaddlePaddle with py PaddlePaddle prediction api" $ option(ON_TRAVIS "Running test on travis-ci or not." OFF) option(ON_COVERALLS "Generating code coverage data on coveralls or not." OFF) option(COVERALLS_UPLOAD "Uploading the generated coveralls json." ON) +option(USE_OPENCV "Compile PaddlePaddle with opencv" OFF) if(NOT CMAKE_BUILD_TYPE) set(CMAKE_BUILD_TYPE "RelWithDebInfo" CACHE STRING diff --git a/plugin/opencv/CMakeLists.txt b/plugin/opencv/CMakeLists.txt index bc0a6e6354..7a6b22c899 100644 --- a/plugin/opencv/CMakeLists.txt +++ b/plugin/opencv/CMakeLists.txt @@ -42,3 +42,6 @@ add_library(DeJpeg SHARED ${DEJPEG_SOURCES}) target_compile_options(DeJpeg BEFORE PRIVATE ${BUILD_PRIVATE_FLAGS}) target_link_libraries(DeJpeg ${DEJPEG_LINKER_LIBS}) set_target_properties(DeJpeg PROPERTIES PREFIX "") + +add_style_check_target(DeJpeg ${DEJPEG_SOURCES}) +add_style_check_target(DeJpeg ${DEJPEG_HEADER}) diff --git a/plugin/opencv/DataTransformer.cpp b/plugin/opencv/DataTransformer.cpp index d9e8883443..dd123639f4 100644 --- a/plugin/opencv/DataTransformer.cpp +++ b/plugin/opencv/DataTransformer.cpp @@ -51,7 +51,7 @@ DataTransformer::DataTransformer(int threadNum, } numThreads_ = threadNum; - syncThreadPool_.reset(new SyncThreadPool(numThreads_, false)); + syncThreadPool_.reset(new paddle::SyncThreadPool(numThreads_, false)); } void DataTransformer::loadMean(float* values) { @@ -66,7 +66,7 @@ void DataTransformer::loadMean(float* values) { void DataTransformer::startFetching(const char* src, const int size, float* trg) { - vector imbuf(src, src + size); + std::vector imbuf(src, src + size); int cvFlag = (isColor_ ? 
CV_LOAD_IMAGE_COLOR : CV_LOAD_IMAGE_GRAYSCALE); cv::Mat im = cv::imdecode(cv::Mat(imbuf), cvFlag); if (!im.data) { @@ -83,7 +83,7 @@ int DataTransformer::Rand(int min, int max) { return dist(rng); } -void DataTransformer::transform(Mat& cvImgOri, float* target) { +void DataTransformer::transform(cv::Mat& cvImgOri, float* target) { const int imgChannels = cvImgOri.channels(); const int imgHeight = cvImgOri.rows; const int imgWidth = cvImgOri.cols; @@ -152,7 +152,9 @@ void DataTransformer::transform(Mat& cvImgOri, float* target) { } // target: BGR } -void DataTransformer::start(vector& data, int* datalen, int* labels) { +void DataTransformer::start(std::vector& data, + int* datalen, + int* labels) { auto job = [&](int tid, int numThreads) { for (size_t i = tid; i < data.size(); i += numThreads) { DataTypePtr ret = prefetchFree_.dequeue(); diff --git a/plugin/opencv/DataTransformer.h b/plugin/opencv/DataTransformer.h index 52abab928b..603cea3059 100644 --- a/plugin/opencv/DataTransformer.h +++ b/plugin/opencv/DataTransformer.h @@ -12,6 +12,9 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ +#ifndef DATATRANSFORMER_H_ +#define DATATRANSFORMER_H_ + #include #include #include @@ -21,9 +24,6 @@ limitations under the License. */ #include "paddle/utils/Thread.h" -using namespace cv; -using namespace paddle; - /** * This is an image processing module with OpenCV, such as * resizing, scaling, mirroring, substracting the image mean... @@ -57,7 +57,7 @@ public: * @param data Data containing the image string to be transformed. * @param label The label of input image. */ - void start(vector& data, int* datalen, int* labels); + void start(std::vector& data, int* datalen, int* labels); /** * @brief Applies the transformation on one image Mat. @@ -65,7 +65,7 @@ public: * @param img The input img to be transformed. * @param target target is used to save the transformed data. */ - void transform(Mat& img, float* target); + void transform(cv::Mat& img, float* target); /** * @brief Decode the image string, then calls transform() function. @@ -114,8 +114,9 @@ private: typedef std::pair DataType; typedef std::shared_ptr DataTypePtr; std::vector prefetch_; - std::unique_ptr syncThreadPool_; - BlockingQueue prefetchFree_; - BlockingQueue prefetchFull_; - + std::unique_ptr syncThreadPool_; + paddle::BlockingQueue prefetchFree_; + paddle::BlockingQueue prefetchFull_; }; // class DataTransformer + +#endif // DATATRANSFORMER_H_ diff --git a/plugin/opencv/PyDecodejpeg.cpp b/plugin/opencv/PyDecodejpeg.cpp index 66054302f8..a32e6430e1 100644 --- a/plugin/opencv/PyDecodejpeg.cpp +++ b/plugin/opencv/PyDecodejpeg.cpp @@ -23,8 +23,6 @@ limitations under the License. */ #include "DataTransformer.h" -using namespace boost::python; - /** * DecodeJpeg is an image processing API for interfacing Python and C++ * code DataTransformer, which used OpenCV and multi-threads to accelerate @@ -83,7 +81,7 @@ public: * It's type is numpy.array with int32. 
*/ void start(boost::python::list& pysrc, PyObject* pydlen, PyObject* pylabel) { - vector data; + std::vector data; int num = len(pysrc); for (int t = 0; t < num; ++t) { char* src = boost::python::extract(pysrc[t]); @@ -169,8 +167,9 @@ static void initPython() { */ BOOST_PYTHON_MODULE(DeJpeg) { initPython(); - class_("DecodeJpeg", - init()) + boost::python::class_( + "DecodeJpeg", + boost::python::init()) .def("start", &DecodeJpeg::start) .def("get", &DecodeJpeg::get); }; From 18b85e558a35009c3d7108e59c5ce511cf494946 Mon Sep 17 00:00:00 2001 From: Liu Yiqun Date: Thu, 1 Dec 2016 05:49:51 +0000 Subject: [PATCH 030/265] Add a script to auto compile the warp-ctc submodule. --- paddle/cuda/CMakeLists.txt | 3 +-- paddle/gserver/tests/CMakeLists.txt | 6 ++++- paddle/gserver/tests/test_WarpCTCLayer.cpp | 27 +++++++++++----------- paddle/scripts/travis/build_and_test.sh | 1 + paddle/scripts/travis/submodules.sh | 18 +++++++++++++++ 5 files changed, 39 insertions(+), 16 deletions(-) create mode 100755 paddle/scripts/travis/submodules.sh diff --git a/paddle/cuda/CMakeLists.txt b/paddle/cuda/CMakeLists.txt index 7e45d3d578..10fa34b927 100755 --- a/paddle/cuda/CMakeLists.txt +++ b/paddle/cuda/CMakeLists.txt @@ -18,8 +18,7 @@ set(CUDA_CXX_WITH_GPU_SOURCES src/hl_cudart_wrap.cc src/hl_cuda_cublas.cc src/hl_cuda_cudnn.cc - src/hl_cuda_device.cc - ) + src/hl_cuda_device.cc) if(WITH_GPU) set(CUDA_CXX_SOURCES diff --git a/paddle/gserver/tests/CMakeLists.txt b/paddle/gserver/tests/CMakeLists.txt index 8fc6656bf4..310c8ad088 100644 --- a/paddle/gserver/tests/CMakeLists.txt +++ b/paddle/gserver/tests/CMakeLists.txt @@ -71,9 +71,13 @@ add_unittest(test_RecurrentLayer ############### test_WarpCTCLayer ####################### if(NOT WITH_DOUBLE) - add_unittest(test_WarpCTCLayer + add_unittest_without_exec(test_WarpCTCLayer test_WarpCTCLayer.cpp TestUtil.cpp) + + add_test(NAME test_WarpCTCLayer + COMMAND ${CMAKE_CURRENT_BINARY_DIR}/test_WarpCTCLayer --warpctc_dir=${PROJ_ROOT}/warp-ctc/build + WORKING_DIRECTORY ${PROJ_ROOT}/paddle) endif() ############### test_RecurrentGradientMachine ############### diff --git a/paddle/gserver/tests/test_WarpCTCLayer.cpp b/paddle/gserver/tests/test_WarpCTCLayer.cpp index aba48935a6..2dd83db345 100644 --- a/paddle/gserver/tests/test_WarpCTCLayer.cpp +++ b/paddle/gserver/tests/test_WarpCTCLayer.cpp @@ -38,7 +38,7 @@ const real* getData(const Matrix& matrix) { } } -void checkError(const Matrix& matrix1, const Matrix& matrix2) { +int checkError(const Matrix& matrix1, const Matrix& matrix2) { CHECK_EQ(matrix1.getHeight(), matrix2.getHeight()); CHECK_EQ(matrix1.getWidth(), matrix2.getWidth()); CHECK_EQ(matrix1.isTransposed(), matrix2.isTransposed()); @@ -62,6 +62,7 @@ void checkError(const Matrix& matrix1, const Matrix& matrix2) { } } EXPECT_EQ(count, 0) << "There are " << count << " different element."; + return count; } void initArgument(size_t batchSize, @@ -72,7 +73,6 @@ void initArgument(size_t batchSize, data.grad = Matrix::create(batchSize, layerSize, false, useGpu); data.value->randomizeUniform(); data.value->add(-0.5); - /// data.value->sigmoid(*data.value); data.grad->zeroMem(); generateSequenceStartPositions(batchSize, data.sequenceStartPositions); @@ -90,9 +90,6 @@ LayerPtr createDataLayer( dataLayer->setData(data); dataLayer->forward(PASS_GC); - /// std::cout << "dataLayer: " << std::endl; - /// (dataLayer->getOutput().value)->print(std::cout); - return layer; } @@ -198,14 +195,14 @@ LayerPtr createWarpCTCLayer(string name, } TEST(Layer, WarpCTCLayer) { - for (auto 
layerSize : {10, 64, 128}) { - for (auto batchSize : {1, 10, 20, 64}) { + for (auto layerSize : {10, 64}) { + for (auto batchSize : {1, 10, 32}) { for (auto normByTimes : {false, true}) { for (auto useGpu : {false, true}) { #ifdef PADDLE_ONLY_CPU if (useGpu) continue; #endif - LOG(INFO) << " layerSize=" << layerSize << " batchSize=" << batchSize + LOG(INFO) << "layerSize=" << layerSize << " batchSize=" << batchSize << " normByTimes = " << normByTimes << " useGpu=" << useGpu; FLAGS_use_gpu = useGpu; @@ -229,13 +226,17 @@ TEST(Layer, WarpCTCLayer) { LayerPtr ctcLayer = createCTCLayer( "cost", layerSize, useGpu, normByTimes, dataLayer1, labelLayer); - /// Check loss - checkError(*(warpctcLayer->getOutput().value), - *(ctcLayer->getOutput().value)); + /// Check cost + LOG(INFO) << "Check cost: " + << checkError(*(warpctcLayer->getOutput().value), + *(ctcLayer->getOutput().value)) + << " different elements."; /// Check gradients - checkError(*(dataLayer0->getOutput().grad), - *(dataLayer1->getOutput().grad)); + LOG(INFO) << "Check gradients: " + << checkError(*(dataLayer0->getOutput().grad), + *(dataLayer1->getOutput().grad)) + << " different elements"; } } } diff --git a/paddle/scripts/travis/build_and_test.sh b/paddle/scripts/travis/build_and_test.sh index 242fd982aa..c46c119dae 100755 --- a/paddle/scripts/travis/build_and_test.sh +++ b/paddle/scripts/travis/build_and_test.sh @@ -1,4 +1,5 @@ #!/bin/bash +./submodules.sh source ./common.sh CMAKE_EXTRA="" if [[ "$TRAVIS_OS_NAME" == "osx" ]]; then diff --git a/paddle/scripts/travis/submodules.sh b/paddle/scripts/travis/submodules.sh new file mode 100755 index 0000000000..47bd8d87ac --- /dev/null +++ b/paddle/scripts/travis/submodules.sh @@ -0,0 +1,18 @@ +#!/bin/bash +set -e +PROJ_ROOT=$(git rev-parse --show-cdup) +SUBMODULES=$(grep path ${PROJ_ROOT}.gitmodules | sed 's/^.*path = //') + +for module in $SUBMODULES +do + case $module in + "warp-ctc") + if [ -d ${PROJ_ROOT}warp-ctc/build ]; then + rm -rf ${PROJ_ROOT}warp-ctc/build + fi + mkdir ${PROJ_ROOT}warp-ctc/build + cd ${PROJ_ROOT}warp-ctc/build + cmake ..; make + ;; + esac +done From 3d5060a13af7edc36fc0458ade81a6dd69591cc9 Mon Sep 17 00:00:00 2001 From: Luo Tao Date: Fri, 2 Dec 2016 14:15:04 +0800 Subject: [PATCH 031/265] add a example for explaining sparse_vector --- doc_cn/ui/data_provider/pydataprovider2.rst | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/doc_cn/ui/data_provider/pydataprovider2.rst b/doc_cn/ui/data_provider/pydataprovider2.rst index c0b3286ad5..dce373118c 100644 --- a/doc_cn/ui/data_provider/pydataprovider2.rst +++ b/doc_cn/ui/data_provider/pydataprovider2.rst @@ -156,6 +156,11 @@ PaddlePaddle的数据包括四种主要类型,和三种序列模式。 其中,f代表一个浮点数,i代表一个整数。 +注意:对sparse_binary_vector和sparse_float_vector,PaddlePaddle存的是有值位置的索引。例如, + +- 对一个5维非序列的稀疏01向量 ``[0, 1, 1, 0, 0]`` ,类型是sparse_binary_vector,返回的是 ``[1, 2]`` 。 +- 对一个5维非序列的稀疏浮点向量 ``[0, 0.5, 0.7, 0, 0]`` ,类型是sparse_float_vector,返回的是 ``[(1, 0.5), (2, 0.7)]`` 。 + init_hook +++++++++ From ae06debf2348d5f4df4aeeff8f99e89d31ab30a6 Mon Sep 17 00:00:00 2001 From: dangqingqing Date: Fri, 2 Dec 2016 15:09:27 +0800 Subject: [PATCH 032/265] Remove the C++ code and refine Python code. 
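
This patch removes the OpenCV/Boost.Python plugin under plugin/opencv and moves the
image pre-processing into pure Python: python/paddle/utils/image_multiproc.py now
provides CvTransfomer (python-opencv), PILTransfomer (a PIL fallback used when cv2 is
not installed), and MultiProcessImageTransformer, which runs the transformations in a
multiprocessing.Pool instead of the old C++ thread pool.

The sketch below shows how the new transformer is intended to be wired into a
PyDataProvider2 provider. It is illustrative only, not part of this patch: the import
lines, the pickled-batch layout (a file per line holding 'data' image strings and
'label' arrays), the mean values, and the 1000-class label size are assumptions taken
from the docstring example added in image_multiproc.py; only MultiProcessImageTransformer
and its constructor arguments are defined by this change.

    import numpy as np
    from paddle.trainer.PyDataProvider2 import provider, dense_vector, integer_value
    from paddle.utils.image_multiproc import MultiProcessImageTransformer

    def hook(settings, is_train, **kwargs):
        settings.is_train = is_train
        # per-channel BGR mean; these values are an assumed example
        settings.mean_values = np.array([103.939, 116.779, 123.68], dtype=np.float32)
        # 1000 classes is an assumed example value
        settings.input_types = [dense_vector(3 * 224 * 224), integer_value(1000)]
        # worker processes decode, resize, crop, mirror and normalize the images
        settings.transformer = MultiProcessImageTransformer(
            procnum=10,
            resize_size=256,
            crop_size=224,
            transpose=(2, 0, 1),
            mean=settings.mean_values,
            is_train=settings.is_train)

    @provider(init_hook=hook, pool_size=20480)
    def process(settings, file_list):
        with open(file_list, 'r') as fdata:
            for line in fdata:
                # each line names a pickled batch holding 'data' (image strings)
                # and 'label' arrays (an assumed on-disk layout)
                batch = np.load(line.strip())
                images = batch['data']
                labels = batch['label']
                for im, lab in settings.transformer.run(images, labels):
                    yield [im.astype('float32'), int(lab)]

run() returns an unordered iterator from the process pool, so the provider simply yields
the (image, label) pairs in whatever order the workers finish them.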
--- CMakeLists.txt | 5 - plugin/opencv/CMakeLists.txt | 47 ---- plugin/opencv/DataTransformer.cpp | 181 -------------- plugin/opencv/DataTransformer.h | 122 ---------- plugin/opencv/PyDecodejpeg.cpp | 175 -------------- python/paddle/utils/image_multiproc.py | 313 ++++++++++++++++--------- 6 files changed, 208 insertions(+), 635 deletions(-) delete mode 100644 plugin/opencv/CMakeLists.txt delete mode 100644 plugin/opencv/DataTransformer.cpp delete mode 100644 plugin/opencv/DataTransformer.h delete mode 100644 plugin/opencv/PyDecodejpeg.cpp diff --git a/CMakeLists.txt b/CMakeLists.txt index c9cdfc2c2b..7d685587a7 100644 --- a/CMakeLists.txt +++ b/CMakeLists.txt @@ -43,7 +43,6 @@ option(WITH_SWIG_PY "Compile PaddlePaddle with py PaddlePaddle prediction api" $ option(ON_TRAVIS "Running test on travis-ci or not." OFF) option(ON_COVERALLS "Generating code coverage data on coveralls or not." OFF) option(COVERALLS_UPLOAD "Uploading the generated coveralls json." ON) -option(USE_OPENCV "Compile PaddlePaddle with opencv" OFF) if(NOT CMAKE_BUILD_TYPE) set(CMAKE_BUILD_TYPE "RelWithDebInfo" CACHE STRING @@ -196,7 +195,3 @@ if(WITH_DOC) add_subdirectory(doc) add_subdirectory(doc_cn) endif() - -if(USE_OPENCV) - add_subdirectory(plugin/opencv) -endif() diff --git a/plugin/opencv/CMakeLists.txt b/plugin/opencv/CMakeLists.txt deleted file mode 100644 index 7a6b22c899..0000000000 --- a/plugin/opencv/CMakeLists.txt +++ /dev/null @@ -1,47 +0,0 @@ -# use opencv plugin - -project(DeJpeg CXX C) -set(CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH} "${CMAKE_SOURCE_DIR}/cmake") -set(PROJ_ROOT ${CMAKE_SOURCE_DIR}) -list(APPEND CMAKE_MODULE_PATH ${PROJECT_SOURCE_DIR}/cmake/Modules) -set(DEJPEG_LINKER_LIBS "") - -# opencv -find_package(OpenCV REQUIRED COMPONENTS core highgui imgproc) -include_directories(${OpenCV_INCLUDE_DIRS}) -list(APPEND DEJPEG_LINKER_LIBS ${OpenCV_LIBS}) -message(STATUS "OpenCV found (${OpenCV_CONFIG_PATH})") -add_definitions(-DUSE_OPENCV) - -# boost-python -set(Boost_NO_SYSTEM_PATHS ON) -if (Boost_NO_SYSTEM_PATHS) - set(BOOST_ROOT $ENV{BOOST_ROOT}) - set(Boost_DIR ${BOOST_ROOT}) - set(Boost_INCLUDE_DIR "${BOOST_ROOT}/include") - set(Boost_LIBRARIES "${BOOST_ROOT}/lib/") -endif (Boost_NO_SYSTEM_PATHS) -find_package(Boost 1.46 COMPONENTS python) -include_directories(SYSTEM ${Boost_INCLUDE_DIR}) -link_directories(${Boost_INCLUDE_DIR}) -message(STATUS "Boost found (${Boost_INCLUDE_DIR})") -message(STATUS "Boost found (${Boost_LIBRARIES})") -list(APPEND DEJPEG_LINKER_LIBS ${Boost_LIBRARIES}) - - -file(GLOB DEJPEG_HEADER "${CMAKE_CURRENT_SOURCE_DIR}" "*.h") -file(GLOB DEJPEG_SOURCES "${CMAKE_CURRENT_SOURCE_DIR}" "*.cpp") - -set(BUILD_PRIVATE_FLAGS - -Wno-all - -Wno-error - -Wno-non-virtual-dtor - -Wno-delete-non-virtual-dtor) - -add_library(DeJpeg SHARED ${DEJPEG_SOURCES}) -target_compile_options(DeJpeg BEFORE PRIVATE ${BUILD_PRIVATE_FLAGS}) -target_link_libraries(DeJpeg ${DEJPEG_LINKER_LIBS}) -set_target_properties(DeJpeg PROPERTIES PREFIX "") - -add_style_check_target(DeJpeg ${DEJPEG_SOURCES}) -add_style_check_target(DeJpeg ${DEJPEG_HEADER}) diff --git a/plugin/opencv/DataTransformer.cpp b/plugin/opencv/DataTransformer.cpp deleted file mode 100644 index dd123639f4..0000000000 --- a/plugin/opencv/DataTransformer.cpp +++ /dev/null @@ -1,181 +0,0 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. 
-You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. */ - -#include "DataTransformer.h" -#include -#include - -DataTransformer::DataTransformer(int threadNum, - int capacity, - bool isTest, - bool isColor, - int cropHeight, - int cropWidth, - int imgSize, - bool isEltMean, - bool isChannelMean, - float* meanValues) - : isTest_(isTest), - isColor_(isColor), - cropHeight_(cropHeight), - cropWidth_(cropWidth), - imgSize_(imgSize), - capacity_(capacity), - prefetchFree_(capacity), - prefetchFull_(capacity) { - fetchCount_ = -1; - scale_ = 1.0; - isChannelMean_ = isChannelMean; - isEltMean_ = isEltMean; - loadMean(meanValues); - - imgPixels_ = cropHeight * cropWidth * (isColor_ ? 3 : 1); - - prefetch_.reserve(capacity); - for (int i = 0; i < capacity; i++) { - auto d = std::make_shared(new float[imgPixels_ * 3], 0); - prefetch_.push_back(d); - memset(prefetch_[i]->first, 0, imgPixels_ * sizeof(float)); - prefetchFree_.enqueue(prefetch_[i]); - } - - numThreads_ = threadNum; - syncThreadPool_.reset(new paddle::SyncThreadPool(numThreads_, false)); -} - -void DataTransformer::loadMean(float* values) { - if (values) { - int c = isColor_ ? 3 : 1; - int sz = isChannelMean_ ? c : cropHeight_ * cropWidth_ * c; - meanValues_ = new float[sz]; - memcpy(meanValues_, values, sz * sizeof(float)); - } -} - -void DataTransformer::startFetching(const char* src, - const int size, - float* trg) { - std::vector imbuf(src, src + size); - int cvFlag = (isColor_ ? CV_LOAD_IMAGE_COLOR : CV_LOAD_IMAGE_GRAYSCALE); - cv::Mat im = cv::imdecode(cv::Mat(imbuf), cvFlag); - if (!im.data) { - LOG(ERROR) << "Could not decode image"; - LOG(ERROR) << im.channels() << " " << im.rows << " " << im.cols; - } - this->transform(im, trg); -} - -int DataTransformer::Rand(int min, int max) { - std::random_device source; - std::mt19937 rng(source()); - std::uniform_int_distribution dist(min, max); - return dist(rng); -} - -void DataTransformer::transform(cv::Mat& cvImgOri, float* target) { - const int imgChannels = cvImgOri.channels(); - const int imgHeight = cvImgOri.rows; - const int imgWidth = cvImgOri.cols; - const bool doMirror = (!isTest_) && Rand(0, 1); - int h_off = 0; - int w_off = 0; - int th = imgHeight; - int tw = imgWidth; - cv::Mat img; - if (imgSize_ > 0) { - if (imgHeight > imgWidth) { - tw = imgSize_; - th = int(double(imgHeight) / imgWidth * tw); - th = th > imgSize_ ? th : imgSize_; - } else { - th = imgSize_; - tw = int(double(imgWidth) / imgHeight * th); - tw = tw > imgSize_ ? 
tw : imgSize_; - } - cv::resize(cvImgOri, img, cv::Size(tw, th)); - } else { - cv::Mat img = cvImgOri; - } - - cv::Mat cv_cropped_img = img; - if (cropHeight_ && cropWidth_) { - if (!isTest_) { - h_off = Rand(0, th - cropHeight_); - w_off = Rand(0, tw - cropWidth_); - } else { - h_off = (th - cropHeight_) / 2; - w_off = (tw - cropWidth_) / 2; - } - cv::Rect roi(w_off, h_off, cropWidth_, cropHeight_); - cv_cropped_img = img(roi); - } else { - CHECK_EQ(cropHeight_, imgHeight); - CHECK_EQ(cropWidth_, imgWidth); - } - int height = cropHeight_; - int width = cropWidth_; - int top_index; - for (int h = 0; h < height; ++h) { - const uchar* ptr = cv_cropped_img.ptr(h); - int img_index = 0; - for (int w = 0; w < width; ++w) { - for (int c = 0; c < imgChannels; ++c) { - if (doMirror) { - top_index = (c * height + h) * width + width - 1 - w; - } else { - top_index = (c * height + h) * width + w; - } - float pixel = static_cast(ptr[img_index++]); - if (isEltMean_) { - int mean_index = (c * imgHeight + h) * imgWidth + w; - target[top_index] = (pixel - meanValues_[mean_index]) * scale_; - } else { - if (isChannelMean_) { - target[top_index] = (pixel - meanValues_[c]) * scale_; - } else { - target[top_index] = pixel * scale_; - } - } - } - } - } // target: BGR -} - -void DataTransformer::start(std::vector& data, - int* datalen, - int* labels) { - auto job = [&](int tid, int numThreads) { - for (size_t i = tid; i < data.size(); i += numThreads) { - DataTypePtr ret = prefetchFree_.dequeue(); - char* buf = data[i]; - int size = datalen[i]; - ret->second = labels[i]; - this->startFetching(buf, size, ret->first); - prefetchFull_.enqueue(ret); - } - }; - syncThreadPool_->exec(job); - fetchCount_ = data.size(); -} - -void DataTransformer::obtain(float* data, int* label) { - fetchCount_--; - if (fetchCount_ < 0) { - LOG(FATAL) << "Empty data"; - } - DataTypePtr ret = prefetchFull_.dequeue(); - *label = ret->second; - memcpy(data, ret->first, sizeof(float) * imgPixels_); - prefetchFree_.enqueue(ret); -} diff --git a/plugin/opencv/DataTransformer.h b/plugin/opencv/DataTransformer.h deleted file mode 100644 index 603cea3059..0000000000 --- a/plugin/opencv/DataTransformer.h +++ /dev/null @@ -1,122 +0,0 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. */ - -#ifndef DATATRANSFORMER_H_ -#define DATATRANSFORMER_H_ - -#include -#include -#include -#include -#include -#include - -#include "paddle/utils/Thread.h" - -/** - * This is an image processing module with OpenCV, such as - * resizing, scaling, mirroring, substracting the image mean... - * - * This class has a double BlockQueue and they shared the same memory. - * It is used to avoid create memory each time. And it also can - * return the data even if the data are processing in multi-threads. 
- */ -class DataTransformer { -public: - DataTransformer(int threadNum, - int capacity, - bool isTest, - bool isColor, - int cropHeight, - int cropWidth, - int imgSize, - bool isEltMean, - bool isChannelMean, - float* meanValues); - virtual ~DataTransformer() { - if (meanValues_) { - free(meanValues_); - } - } - - /** - * @brief Start multi-threads to transform a list of input data. - * The processed data will be saved in Queue of prefetchFull_. - * - * @param data Data containing the image string to be transformed. - * @param label The label of input image. - */ - void start(std::vector& data, int* datalen, int* labels); - - /** - * @brief Applies the transformation on one image Mat. - * - * @param img The input img to be transformed. - * @param target target is used to save the transformed data. - */ - void transform(cv::Mat& img, float* target); - - /** - * @brief Decode the image string, then calls transform() function. - * - * @param src The input image string. - * @param size The length of string. - * @param trg trg is used to save the transformed data. - */ - void startFetching(const char* src, const int size, float* trg); - - /** - * @brief Return the transformed data and its label. - */ - void obtain(float* data, int* label); - -private: - int isTest_; - int isColor_; - int cropHeight_; - int cropWidth_; - int imgSize_; - int capacity_; - int fetchCount_; - bool isEltMean_; - bool isChannelMean_; - int numThreads_; - float scale_; - int imgPixels_; - float* meanValues_; - - /** - * Initialize the mean values. - */ - void loadMean(float* values); - - /** - * @brief Generates a random integer from Uniform({min, min + 1, ..., max}). - * @param min The lower bound (inclusive) value of the random number. - * @param max The upper bound (inclusive) value of the random number. - * - * @return - * A uniformly random integer value from ({min, min + 1, ..., max}). - */ - int Rand(int min, int max); - - typedef std::pair DataType; - typedef std::shared_ptr DataTypePtr; - std::vector prefetch_; - std::unique_ptr syncThreadPool_; - paddle::BlockingQueue prefetchFree_; - paddle::BlockingQueue prefetchFull_; -}; // class DataTransformer - -#endif // DATATRANSFORMER_H_ diff --git a/plugin/opencv/PyDecodejpeg.cpp b/plugin/opencv/PyDecodejpeg.cpp deleted file mode 100644 index a32e6430e1..0000000000 --- a/plugin/opencv/PyDecodejpeg.cpp +++ /dev/null @@ -1,175 +0,0 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. */ - -#include -#include -#include -#include -#include -#include -#include -#include - -#include "DataTransformer.h" - -/** - * DecodeJpeg is an image processing API for interfacing Python and C++ - * code DataTransformer, which used OpenCV and multi-threads to accelerate - * image processing. - * The Boost Python Library is used to wrap C++ interfaces. - */ - -class DecodeJpeg { -public: - /** - * The constructor will create and initialize an object of DataTransformer. 
- */ - DecodeJpeg(int threadNum, - int capacity, - bool isTest, - bool isColor, - int resize_min_size, - int cropSizeH, - int cropSizeW, - PyObject* meanValues) { - int channel = isColor ? 3 : 1; - bool isEltMean = false; - bool isChannelMean = false; - float* mean = NULL; - if (meanValues || meanValues != Py_None) { - if (!PyArray_Check(meanValues)) { - LOG(FATAL) << "Object is not a numpy array"; - } - pyTypeCheck(meanValues); - int size = PyArray_SIZE(reinterpret_cast(meanValues)); - isChannelMean = (size == channel) ? true : false; - isEltMean = (size == channel * cropSizeH * cropSizeW) ? true : false; - CHECK(isChannelMean != isEltMean); - mean = (float*)PyArray_DATA(reinterpret_cast(meanValues)); - } - tfhandlerPtr_ = std::make_shared(threadNum, - capacity, - isTest, - isColor, - cropSizeH, - cropSizeW, - resize_min_size, - isEltMean, - isChannelMean, - mean); - } - - ~DecodeJpeg() {} - - /** - * @brief This function is used to parse the Python object and convert - * the data to C++ format. Then it called the function of - * DataTransformer to start image processing. - * @param pysrc The input image list with string type. - * @param pylabel The input label of image. - * It's type is numpy.array with int32. - */ - void start(boost::python::list& pysrc, PyObject* pydlen, PyObject* pylabel) { - std::vector data; - int num = len(pysrc); - for (int t = 0; t < num; ++t) { - char* src = boost::python::extract(pysrc[t]); - data.push_back(src); - } - int* dlen = (int*)PyArray_DATA(reinterpret_cast(pydlen)); - int* dlabels = - (int*)PyArray_DATA(reinterpret_cast(pylabel)); - tfhandlerPtr_->start(data, dlen, dlabels); - } - - /** - * @brief Return one processed data. - * @param pytrg The processed image. - * @param pylabel The label of processed image. - */ - void get(PyObject* pytrg, PyObject* pylab) { - pyWritableCheck(pytrg); - pyWritableCheck(pylab); - pyContinuousCheck(pytrg); - pyContinuousCheck(pylab); - float* data = (float*)PyArray_DATA(reinterpret_cast(pytrg)); - int* label = (int*)PyArray_DATA(reinterpret_cast(pylab)); - tfhandlerPtr_->obtain(data, label); - } - - /** - * @brief An object of DataTransformer, which is used to call - * the image processing funtions. - */ - std::shared_ptr tfhandlerPtr_; - -private: - /** - * @brief Check whether the type of PyObject is valid or not. - */ - void pyTypeCheck(PyObject* o) { - int typenum = PyArray_TYPE(reinterpret_cast(o)); - - // clang-format off - int type = - typenum == NPY_UBYTE ? CV_8U : - typenum == NPY_BYTE ? CV_8S : - typenum == NPY_USHORT ? CV_16U : - typenum == NPY_SHORT ? CV_16S : - typenum == NPY_INT || typenum == NPY_LONG ? CV_32S : - typenum == NPY_FLOAT ? CV_32F : - typenum == NPY_DOUBLE ? CV_64F : -1; - // clang-format on - - if (type < 0) { - LOG(FATAL) << "toMat: Data type = " << type << " is not supported"; - } - } - - /** - * @brief Check whether the PyObject is writable or not. - */ - void pyWritableCheck(PyObject* o) { - CHECK(PyArray_ISWRITEABLE(reinterpret_cast(o))); - } - - /** - * @brief Check whether the PyObject is c-contiguous or not. - */ - void pyContinuousCheck(PyObject* o) { - CHECK(PyArray_IS_C_CONTIGUOUS(reinterpret_cast(o))); - } -}; // DecodeJpeg - -/** - * @brief Initialize the Python interpreter and numpy. - */ -static void initPython() { - Py_Initialize(); - PyOS_sighandler_t sighandler = PyOS_getsig(SIGINT); - import_array(); - PyOS_setsig(SIGINT, sighandler); -} - -/** - * Use Boost.Python to expose C++ interface to Python. 
- */ -BOOST_PYTHON_MODULE(DeJpeg) { - initPython(); - boost::python::class_( - "DecodeJpeg", - boost::python::init()) - .def("start", &DecodeJpeg::start) - .def("get", &DecodeJpeg::get); -}; diff --git a/python/paddle/utils/image_multiproc.py b/python/paddle/utils/image_multiproc.py index ccc0a531a7..82df6d6c0c 100644 --- a/python/paddle/utils/image_multiproc.py +++ b/python/paddle/utils/image_multiproc.py @@ -1,44 +1,50 @@ -import os, psutil -import cv2 -from paddle.utils.image_util import * +import os, sys +import numpy as np +from PIL import Image +from cStringIO import StringIO import multiprocessing -import subprocess, signal, sys +from functools import partial + +from paddle.utils.image_util import * +from paddle.trainer.config_parser import logger +try: + import cv2 +except ImportError: + logger.warning("OpenCV2 is not installed, using PIL to prcoess") + cv2 = None -class CvImageTransfomer(ImageTransformer): + +class CvTransfomer(ImageTransformer): """ - CvImageTransfomer used python-opencv to process image. + CvTransfomer used python-opencv to process image. """ - def __init__(self, - min_size=None, - crop_size=None, - transpose=None, - channel_swap=None, - mean=None, - is_train=True, - is_color=True): + def __init__( + self, + min_size=None, + crop_size=None, + transpose=(2, 0, 1), # transpose to C * H * W + channel_swap=None, + mean=None, + is_train=True, + is_color=True): ImageTransformer.__init__(self, transpose, channel_swap, mean, is_color) self.min_size = min_size self.crop_size = crop_size self.is_train = is_train - def cv_resize_fixed_short_side(self, im, min_size): + def resize(self, im, min_size): row, col = im.shape[:2] - scale = min_size / float(min(row, col)) - if row < col: - row = min_size - col = int(round(col * scale)) - col = col if col > min_size else min_size + new_row, new_col = min_size, min_size + if row > col: + new_row = min_size * row / col else: - col = min_size - row = int(round(row * scale)) - row = row if row > min_size else min_size - resized_size = row, col - im = cv2.resize(im, resized_size, interpolation=cv2.INTER_CUBIC) + new_col = min_size * col / row + im = cv2.resize(im, (new_row, new_col), interpolation=cv2.INTER_CUBIC) return im - def crop_img(self, im): + def crop_and_flip(self, im): """ Return cropped image. The size of the cropped image is inner_size * inner_size. @@ -65,8 +71,8 @@ class CvImageTransfomer(ImageTransformer): return im def transform(self, im): - im = self.cv_resize_fixed_short_side(im, self.min_size) - im = self.crop_img(im) + im = self.resize(im, self.min_size) + im = self.crop_and_flip(im) # transpose, swap channel, sub mean im = im.astype('float32') ImageTransformer.transformer(self, im) @@ -81,90 +87,187 @@ class CvImageTransfomer(ImageTransformer): im = self.load_image_from_string(data) return self.transform(im) + def load_image_from_file(self, file): + flag = cv2.CV_LOAD_IMAGE_COLOR if self.is_color else cv2.CV_LOAD_IMAGE_GRAYSCALE + im = cv2.imread(file, flag) + return im + + def transform_from_file(self, file): + im = self.load_image_from_file(file) + return self.transform(im) + + +class PILTransfomer(ImageTransformer): + """ + PILTransfomer used PIL to process image. 
+ """ + + def __init__( + self, + min_size=None, + crop_size=None, + transpose=(2, 0, 1), # transpose to C * H * W + channel_swap=None, + mean=None, + is_train=True, + is_color=True): + ImageTransformer.__init__(self, transpose, channel_swap, mean, is_color) + self.min_size = min_size + self.crop_size = crop_size + self.is_train = is_train + + def resize(self, im, min_size): + row, col = im.size[:2] + new_row, new_col = min_size, min_size + if row > col: + new_row = min_size * row / col + else: + new_col = min_size * col / row + im = im.resize((new_row, new_col), Image.ANTIALIAS) + return im -class MultiProcessImageTransfomer(): + def crop_and_flip(self, im): + """ + Return cropped image. + The size of the cropped image is inner_size * inner_size. + """ + row, col = im.size[:2] + start_h, start_w = 0, 0 + if self.is_train: + start_h = np.random.randint(0, row - self.crop_size + 1) + start_w = np.random.randint(0, col - self.crop_size + 1) + else: + start_h = (row - self.crop_size) / 2 + start_w = (col - self.crop_size) / 2 + end_h, end_w = start_h + self.crop_size, start_w + self.crop_size + im = im.crop((start_h, start_w, end_h, end_w)) + if (self.is_train) and (np.random.randint(2) == 0): + im = im.transpose(Image.FLIP_LEFT_RIGHT) + return im + + def transform(self, im): + im = self.resize(im, self.min_size) + im = self.crop_and_flip(im) + im = np.array(im, dtype=np.float32) # convert to numpy.array + # transpose, swap channel, sub mean + ImageTransformer.transformer(self, im) + return im + + def load_image_from_string(self, data): + im = Image.open(StringIO(data)) + return im + + def transform_from_string(self, data): + im = self.load_image_from_string(data) + return self.transform(im) + + def load_image_from_file(self, file): + im = Image.open(file) + return im + + def transform_from_file(self, file): + im = self.load_image_from_file(file) + return self.transform(im) + + +def warpper(cls, (dat, label)): + return cls.job(dat, label) + + +class MultiProcessImageTransformer(object): def __init__(self, procnum=10, - capacity=10240, - min_size=None, + resize_size=None, crop_size=None, - transpose=None, + transpose=(2, 0, 1), channel_swap=None, mean=None, is_train=True, - is_color=True): - self.procnum = procnum - self.capacity = capacity - self.size = 0 - self.count = 0 - signal.signal(signal.SIGTERM, self.kill_child_processes) - self.fetch_queue = multiprocessing.Queue(maxsize=capacity) - self.cv_transformer = CvImageTransfomer(min_size, crop_size, transpose, - channel_swap, mean, is_train, - is_color) - - def __del__(self): - try: - for p in self.procs: - p.join() - except Exception as e: - print str(e) - - def reset(self, size): - self.size = size - self.count = 0 - self.procs = [] - - def run_proc(self, data, label): - dlen = len(label) - self.reset(dlen) - for i in xrange(self.procnum): - start = dlen * i / self.procnum - end = dlen * (i + 1) / self.procnum - proc = multiprocessing.Process( - target=self.batch_transfomer, - args=(data[start:end], label[start:end])) - proc.daemon = True - self.procs.append(proc) - for p in self.procs: - p.start() - - def get(self): - """ - Return one processed image. 
- """ - # block if necessary until an item is available - data, lab = self.fetch_queue.get(block=True) - self.count += 1 - if self.count == self.size: - try: - for p in self.procs: - p.join() - except Exception as e: - print str(e) - return data, lab - - def batch_transfomer(self, data, label): + is_color=True, + is_img_string=True): """ - param data: input data in format of image string - type data: a list of string - label: the label of image - """ - for i in xrange(len(label)): - res = self.cv_transformer.transform_from_string(data[i]) - self.fetch_queue.put((res, int(label[i]))) + Processing image with multi-process. If it is used in PyDataProvider, + the simple usage for CNN is as follows: + + .. code-block:: python - def kill_child_processes(self, signum, frame): - """ - Kill a process's child processes in python. + def hool(settings, is_train, **kwargs): + settings.is_train = is_train + settings.mean_value = np.array([103.939,116.779,123.68], dtype=np.float32) + settings.input_types = [ + dense_vector(3 * 224 * 224), + integer_value(1)] + settings.transformer = MultiProcessImageTransformer( + procnum=10, + resize_size=256, + crop_size=224, + transpose=(2, 0, 1), + mean=settings.mean_values, + is_train=settings.is_train) + + + @provider(init_hook=hook, pool_size=20480) + def process(settings, file_list): + with open(file_list, 'r') as fdata: + for line in fdata: + data_dic = np.load(line.strip()) # load the data batch pickled by Pickle. + data = data_dic['data'] + labels = data_dic['label'] + labels = np.array(labels, dtype=np.float32) + for im, lab in settings.dp.run(data, labels): + yield [im.astype('float32'), int(lab)] + + :param procnum: processor number. + :type procnum: int + :param resize_size: the shorter edge size of image after resizing. + :type resize_size: int + :param crop_size: the croping size. + :type crop_size: int + :param transpose: the transpose order, Paddle only allow C * H * W order. + :type transpose: tuple or list + :param channel_swap: the channel swap order, RGB or BRG. + :type channel_swap: tuple or list + :param mean: the mean values of image, per-channel mean or element-wise mean. + :type mean: array, The dimension is 1 for per-channel mean. + The dimension is 3 for element-wise mean. + :param is_train: training peroid or testing peroid. + :type is_train: bool. + :param is_color: the image is color or gray. + :type is_color: bool. + :param is_img_string: The input can be the file name of image or image string. + :type is_img_string: bool. 
""" - parent_id = os.getpid() - ps_command = subprocess.Popen( - "ps -o pid --ppid %d --noheaders" % parent_id, - shell=True, - stdout=subprocess.PIPE) - ps_output = ps_command.stdout.read() - retcode = ps_command.wait() - for pid_str in ps_output.strip().split("\n")[:-1]: - os.kill(int(pid_str), signal.SIGTERM) - sys.exit() + + self.pool = multiprocessing.Pool(procnum) + self.is_img_string = is_img_string + if cv2 is not None: + self.transformer = CvTransfomer(resize_size, crop_size, transpose, + channel_swap, mean, is_train, + is_color) + else: + self.transformer = PILTransfomer(resize_size, crop_size, transpose, + channel_swap, mean, is_train, + is_color) + + def run(self, data, label): + try: + fun = partial(warpper, self) + return self.pool.imap_unordered(fun, zip(data, label), chunksize=5) + except KeyboardInterrupt: + self.pool.terminate() + except Exception, e: + self.pool.terminate() + + def job(self, data, label): + if self.is_img_string: + return self.transformer.transform_from_string(data), label + else: + return self.transformer.transform_from_file(data), label + + def __getstate__(self): + self_dict = self.__dict__.copy() + del self_dict['pool'] + return self_dict + + def __setstate__(self, state): + self.__dict__.update(state) From 7bb7fed8336232321bcb8dfff002c224ae746cf2 Mon Sep 17 00:00:00 2001 From: Liu Yiqun Date: Fri, 2 Dec 2016 09:22:51 +0000 Subject: [PATCH 033/265] Simplify the CMakelist.txt and fix typos. --- CMakeLists.txt | 13 ++++--------- paddle/scripts/travis/build_and_test.sh | 2 +- .../travis/{submodules.sh => build_submodules.sh} | 2 ++ python/paddle/trainer_config_helpers/layers.py | 12 ++++++------ 4 files changed, 13 insertions(+), 16 deletions(-) rename paddle/scripts/travis/{submodules.sh => build_submodules.sh} (93%) diff --git a/CMakeLists.txt b/CMakeLists.txt index 28375d0cd0..dfb5159ea1 100644 --- a/CMakeLists.txt +++ b/CMakeLists.txt @@ -77,15 +77,10 @@ find_package(Git REQUIRED) include(version) add_definitions(-DPADDLE_VERSION=\"${PADDLE_VERSION}\") - if(NOT WITH_GPU) add_definitions(-DPADDLE_ONLY_CPU) add_definitions(-DHPPL_STUB_FUNC) - if(WITH_DSO) - add_definitions(-DPADDLE_USE_DSO) - endif(WITH_DSO) - list(APPEND CMAKE_CXX_SOURCE_FILE_EXTENSIONS cu) else() if(${CUDA_VERSION_MAJOR} GREATER 6) @@ -107,15 +102,15 @@ else() set(CUDA_NVCC_FLAGS ${CUDA_NVCC_FLAGS} "-Xcompiler ${SSE3_FLAG}") endif(WITH_AVX) - if(WITH_DSO) - add_definitions(-DPADDLE_USE_DSO) - endif(WITH_DSO) - # Include cuda and cudnn include_directories(${CUDNN_INCLUDE_DIR}) include_directories(${CUDA_TOOLKIT_INCLUDE}) endif(NOT WITH_GPU) +if(WITH_DSO) + add_definitions(-DPADDLE_USE_DSO) +endif(WITH_DSO) + if(WITH_DOUBLE) add_definitions(-DPADDLE_TYPE_DOUBLE) set(ACCURACY double) diff --git a/paddle/scripts/travis/build_and_test.sh b/paddle/scripts/travis/build_and_test.sh index c46c119dae..9caeb21beb 100755 --- a/paddle/scripts/travis/build_and_test.sh +++ b/paddle/scripts/travis/build_and_test.sh @@ -1,5 +1,5 @@ #!/bin/bash -./submodules.sh +./build_submodules.sh source ./common.sh CMAKE_EXTRA="" if [[ "$TRAVIS_OS_NAME" == "osx" ]]; then diff --git a/paddle/scripts/travis/submodules.sh b/paddle/scripts/travis/build_submodules.sh similarity index 93% rename from paddle/scripts/travis/submodules.sh rename to paddle/scripts/travis/build_submodules.sh index 47bd8d87ac..d458bf92bf 100755 --- a/paddle/scripts/travis/submodules.sh +++ b/paddle/scripts/travis/build_submodules.sh @@ -1,5 +1,6 @@ #!/bin/bash set -e +WORK_DIR=$PWD PROJ_ROOT=$(git rev-parse --show-cdup) SUBMODULES=$(grep 
path ${PROJ_ROOT}.gitmodules | sed 's/^.*path = //') @@ -16,3 +17,4 @@ do ;; esac done +cd $WORK_DIR diff --git a/python/paddle/trainer_config_helpers/layers.py b/python/paddle/trainer_config_helpers/layers.py index bf043c3674..bec675a8ce 100644 --- a/python/paddle/trainer_config_helpers/layers.py +++ b/python/paddle/trainer_config_helpers/layers.py @@ -1874,7 +1874,7 @@ def img_conv_layer(input, param_attr.attr["initial_std"] = init_w param_attr.attr["initial_strategy"] = 0 param_attr.attr["initial_smart"] = False - + if layer_type: if trans: assert layer_type in ["exconvt"] @@ -4125,11 +4125,11 @@ def warp_ctc_layer(input, Note: - Let num_classes represent the category number. Considering the 'blank' - label needed by CTC, you need to use (num_classes + 1) as the input size. - Thus, the size of both warp_ctc_layer and 'input' layer should be set to - num_classes + 1. - - You can set 'blank' to [0, num_classes - 1], which should be consistent - as that used in your labels. + label needed by CTC, you need to use (num_classes + 1) as the input + size. Thus, the size of both warp_ctc_layer and 'input' layer should + be set to num_classes + 1. + - You can set 'blank' to any value ranged in [0, num_classes], which + should be consistent as that used in your labels. - As a native 'softmax' activation is interated to the warp-ctc library, 'linear' activation is expected instead in the 'input' layer. From 8e9ac0cc5541654aa44bf9b07a679c9b335f1f95 Mon Sep 17 00:00:00 2001 From: zhanghaichao Date: Fri, 2 Dec 2016 04:36:56 -0800 Subject: [PATCH 034/265] adding input type check for data provider --- python/paddle/trainer/PyDataProvider2.py | 21 +++++++++++++++++++++ 1 file changed, 21 insertions(+) diff --git a/python/paddle/trainer/PyDataProvider2.py b/python/paddle/trainer/PyDataProvider2.py index 0c577ec657..081e540848 100644 --- a/python/paddle/trainer/PyDataProvider2.py +++ b/python/paddle/trainer/PyDataProvider2.py @@ -202,6 +202,24 @@ class CheckWrapper(object): for each in item: callback(each) +class CheckInputTypeWrapper(object): + def __init__(self, generator, input_types, logger): + self.generator = generator + self.input_types = input_types + self.logger = logger + + def __call__(self, obj, filename): + for items in self.generator(obj, filename): + try: + # dict type is required for input_types when item is dict type + assert (isinstance(items, dict) and \ + not isinstance(self.input_types, dict))==False + yield items + except AssertionError as e: + self.logger.error( + "%s type is required for input type but got %s" % + (repr(type(items)), repr(type(self.input_types)))) + raise def provider(input_types=None, should_shuffle=None, @@ -355,6 +373,9 @@ def provider(input_types=None, if use_dynamic_order: self.generator = InputOrderWrapper(self.generator, self.input_order) + else: + self.generator = CheckInputTypeWrapper(self.generator, self.slots, + self.logger) if self.check: self.generator = CheckWrapper(self.generator, self.slots, check_fail_continue, From 26b2996b0ac231a1df54d5a1c9b6ce258dcd6fa8 Mon Sep 17 00:00:00 2001 From: liaogang Date: Mon, 5 Dec 2016 16:57:09 +0800 Subject: [PATCH 035/265] =?UTF-8?q?Upgrade=20compiler=E2=80=98s=20minimum?= =?UTF-8?q?=20version?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * for modern language properties --- CMakeLists.txt | 13 ++--------- cmake/enableCXX11.cmake | 13 ----------- cmake/flags.cmake | 46 +++++++++++++++++++++++++++++++++++--- paddle/cuda/CMakeLists.txt | 3 --- 4 files changed, 45 
insertions(+), 30 deletions(-) delete mode 100644 cmake/enableCXX11.cmake diff --git a/CMakeLists.txt b/CMakeLists.txt index 7b42423749..5b36088b75 100644 --- a/CMakeLists.txt +++ b/CMakeLists.txt @@ -51,13 +51,7 @@ option(ON_TRAVIS "Running test on travis-ci or not." OFF) option(ON_COVERALLS "Generating code coverage data on coveralls or not." OFF) option(COVERALLS_UPLOAD "Uploading the generated coveralls json." ON) -if(NOT CMAKE_BUILD_TYPE) - set(CMAKE_BUILD_TYPE "RelWithDebInfo" CACHE STRING - "Choose the type of build, options are: Debug Release RelWithDebInfo MinSizeRel" - FORCE) -endif() -include(enableCXX11) include(cpplint) include(ccache) if(WITH_RDMA) @@ -84,17 +78,14 @@ if(NOT WITH_GPU) list(APPEND CMAKE_CXX_SOURCE_FILE_EXTENSIONS cu) else() if(${CUDA_VERSION_MAJOR} GREATER 6) - if(COMPILER_SUPPORT_CXX11) - LIST(APPEND CUDA_NVCC_FLAGS -std=c++11) - endif() + LIST(APPEND CUDA_NVCC_FLAGS -std=c++11) endif() - # TODO(yuyang18): Change it to remove std=c++11 in cuda compile. set(CUDA_PROPAGATE_HOST_FLAGS OFF) + if(NOT CUDNN_FOUND) message(FATAL_ERROR "Paddle need cudnn to compile") endif() - set(CUDA_NVCC_FLAGS ${CUDA_NVCC_FLAGS} "-g -O3 --use_fast_math") if(WITH_AVX) set(CUDA_NVCC_FLAGS ${CUDA_NVCC_FLAGS} "-Xcompiler ${AVX_FLAG}") diff --git a/cmake/enableCXX11.cmake b/cmake/enableCXX11.cmake deleted file mode 100644 index dc8cc3371a..0000000000 --- a/cmake/enableCXX11.cmake +++ /dev/null @@ -1,13 +0,0 @@ -# Enable C++ 11 for GCC. -# NOTE: It's only tested for gcc. -include(CheckCXXCompilerFlag) -CHECK_CXX_COMPILER_FLAG("-std=c++11" COMPILER_SUPPORT_CXX11) -CHECK_CXX_COMPILER_FLAG("-std=c++0x" COMPILER_SUPPORT_CXX0X) - -if(COMPILER_SUPPORT_CXX11) - set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11") -elseif(COMPILER_SUPPORT_CXX0X) - set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++0x") -else() - message(FATAL_ERROR "Your compiler must support c++11") -endif() \ No newline at end of file diff --git a/cmake/flags.cmake b/cmake/flags.cmake index e087770991..4531efb1d5 100644 --- a/cmake/flags.cmake +++ b/cmake/flags.cmake @@ -2,6 +2,37 @@ include(CheckCXXCompilerFlag) include(CheckCCompilerFlag) include(CheckCXXSymbolExists) + +if(NOT CMAKE_BUILD_TYPE) + set(CMAKE_BUILD_TYPE "RelWithDebInfo" CACHE STRING + "Choose the type of build, options are: Debug Release RelWithDebInfo MinSizeRel" + FORCE) +endif() + +function(CheckCompilerCXX11Flag) + if(CMAKE_CXX_COMPILER_ID STREQUAL "GNU") + if(${CMAKE_CXX_COMPILER_VERSION} VERSION_LESS 4.8) + message(FATAL_ERROR "Unsupported GCC version. GCC >= 4.8 required.") + endif() + elseif(CMAKE_CXX_COMPILER_ID STREQUAL "AppleClang" OR CMAKE_CXX_COMPILER_ID STREQUAL "Clang") + # cmake >= 3.0 compiler id "AppleClang" on Mac OS X, otherwise "Clang" + # Apple Clang is a different compiler than upstream Clang which havs different version numbers. + # https://gist.github.com/yamaya/2924292 + if(APPLE) # cmake < 3.0 compiler id "Clang" on Mac OS X + if(${CMAKE_CXX_COMPILER_VERSION} VERSION_LESS 5.1) + message(FATAL_ERROR "Unsupported AppleClang version. AppleClang >= 5.1 required.") + endif() + else() + if (${CMAKE_CXX_COMPILER_VERSION} VERSION_LESS 3.3) + message(FATAL_ERROR "Unsupported Clang version. 
Clang >= 3.3 required.") + endif() + endif() + endif() +endfunction() + +CheckCompilerCXX11Flag() +LIST(APPEND CMAKE_CXX_FLAGS -std=c++11) + # safe_set_flag # # Set a compile flag only if compiler is support @@ -41,9 +72,7 @@ macro(safe_set_nvflag flag_name) CHECK_C_COMPILER_FLAG(${flag_name} C_COMPILER_SUPPORT_FLAG_${safe_name}) set(safe_name C_COMPILER_SUPPORT_FLAG_${safe_name}) if(${safe_name}) - set(CUDA_NVCC_FLAGS - --compiler-options;${flag_name} - ${CUDA_NVCC_FLAGS}) + LIST(APPEND CUDA_NVCC_FLAGS -Xcompiler ${flag_name}) endif() endmacro() @@ -111,6 +140,17 @@ endforeach() # Release/Debug flags set by cmake. Such as -O3 -g -DNDEBUG etc. # So, don't set these flags here. +LIST(APPEND CUDA_NVCC_FLAGS --use_fast_math) + +if(CMAKE_BUILD_TYPE STREQUAL "Debug") + LIST(APPEND CUDA_NVCC_FLAGS ${CMAKE_CXX_FLAGS_DEBUG}) +elseif(CMAKE_BUILD_TYPE STREQUAL "Release") + LIST(APPEND CUDA_NVCC_FLAGS ${CMAKE_CXX_FLAGS_RELEASE}) +elseif(CMAKE_BUILD_TYPE STREQUAL "RelWithDebInfo") + LIST(APPEND CUDA_NVCC_FLAGS ${CMAKE_CXX_FLAGS_RELWITHDEBINFO}) +elseif(CMAKE_BUILD_TYPE STREQUAL "MinSizeRel") + LIST(APPEND CUDA_NVCC_FLAGS ${CMAKE_CXX_FLAGS_MINSIZEREL}) +endif() function(specify_cuda_arch cuda_version cuda_arch) if(${cuda_version} VERSION_GREATER "8.0") diff --git a/paddle/cuda/CMakeLists.txt b/paddle/cuda/CMakeLists.txt index 11dbfb54b2..2a1eb307de 100755 --- a/paddle/cuda/CMakeLists.txt +++ b/paddle/cuda/CMakeLists.txt @@ -22,9 +22,6 @@ set(CUDA_CXX_WITH_GPU_SOURCES set_source_files_properties(${CUDA_CXX_WITH_GPU_SOURCES} PROPERTIES COMPILE_FLAGS "-D__NVCC__") -set_source_files_properties(${AVX_SOURCES} - PROPERTIES COMPILE_FLAGS "-mavx") - set(CUDA_DSO_SOURCES src/hl_dso_loader.cc src/hl_cudart_wrap.cc) From aef68475b8f9b31641d94c6fb61c2d03dc881d1b Mon Sep 17 00:00:00 2001 From: liaogang Date: Mon, 5 Dec 2016 17:21:27 +0800 Subject: [PATCH 036/265] Add docs for compiler version --- doc/getstarted/build_and_install/build_from_source.md | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-) diff --git a/doc/getstarted/build_and_install/build_from_source.md b/doc/getstarted/build_and_install/build_from_source.md index b932fbc0fa..a71a9c3143 100644 --- a/doc/getstarted/build_and_install/build_from_source.md +++ b/doc/getstarted/build_and_install/build_from_source.md @@ -15,13 +15,15 @@ cd paddle ## Requirements -To compile the source code, your computer must be equipped with GCC >=4.6 or Clang compiler. +To compile the source code, your computer must be equipped with the following dependencies. 
+ ### Dependencies +- **Compiler**: GCC >= 4.8 or Clang >= 3.3 (AppleClang >= 5.1) - **CMake**: version >= 2.8 - **BLAS**: MKL, OpenBlas or ATLAS -- **protobuf**: version >= 2.4, **Note: 3.x is not supported** -- **python**: only python 2.7 is supported currently +- **Protocol Buffers**: version >= 2.4, **Note: 3.x is not supported** +- **Python**: only python 2.7 is supported currently ### Options From 7f78912c9fa4e5b9b265738584a75e89a63513c1 Mon Sep 17 00:00:00 2001 From: hanchao Date: Mon, 5 Dec 2016 21:07:31 +0800 Subject: [PATCH 037/265] test code for issue #729 --- .../tests/configs/generate_protostr.sh | 2 + .../tests/configs/run_tests.sh | 11 +++-- .../test_config_parser_for_non_file_config.py | 40 +++++++++++++++++++ 3 files changed, 49 insertions(+), 4 deletions(-) create mode 100644 python/paddle/trainer_config_helpers/tests/configs/test_config_parser_for_non_file_config.py diff --git a/python/paddle/trainer_config_helpers/tests/configs/generate_protostr.sh b/python/paddle/trainer_config_helpers/tests/configs/generate_protostr.sh index e55f9bd388..a54af94ce3 100755 --- a/python/paddle/trainer_config_helpers/tests/configs/generate_protostr.sh +++ b/python/paddle/trainer_config_helpers/tests/configs/generate_protostr.sh @@ -11,10 +11,12 @@ for conf in ${configs[*]} do echo "Generating " $conf python -m paddle.utils.dump_config $conf.py > $protostr/$conf.protostr.unittest + cat ${conf}.py |python test_config_parser_for_non_file_config.py > $protostr/$conf.protostr.non_file_config.unittest done for conf in ${whole_configs[*]} do echo "Generating " $conf python -m paddle.utils.dump_config $conf.py "" --whole > $protostr/$conf.protostr.unittest + cat ${conf}.py |python test_config_parser_for_non_file_config.py --whole > $protostr/$conf.protostr.non_file_config.unittest done diff --git a/python/paddle/trainer_config_helpers/tests/configs/run_tests.sh b/python/paddle/trainer_config_helpers/tests/configs/run_tests.sh index 73f8b333b2..ed2ac6ed18 100755 --- a/python/paddle/trainer_config_helpers/tests/configs/run_tests.sh +++ b/python/paddle/trainer_config_helpers/tests/configs/run_tests.sh @@ -16,20 +16,23 @@ if [ -z $1 ]; then do base_protostr=$protostr/$file new_protostr=$protostr/$file.unittest - diff $base_protostr $new_protostr -u + diff $base_protostr $new_protostr -u && + diff $protostr/$file $protostr/$file.non_file_config.unittest -u done else for file in ${configs[*]} do if ! $1 $protostr/$file.protostr $protostr/$file.protostr.unittest; then - diff $protostr/$file.protostr $protostr/$file.protostr.unittest -u + diff $protostr/$file.protostr $protostr/$file.protostr.unittest -u && + diff $protostr/$file.protostr $protostr/$file.protostr.non_file_config.unittest -u fi done for file in ${whole_configs[*]} - do +do if ! $1 $protostr/$file.protostr $protostr/$file.protostr.unittest --whole; then - diff $protostr/$file.protostr $protostr/$file.protostr.unittest -u + diff $protostr/$file.protostr $protostr/$file.protostr.unittest -u && + diff $protostr/$file.protostr $protostr/$file.protostr.non_file_config.unittest -u fi done fi diff --git a/python/paddle/trainer_config_helpers/tests/configs/test_config_parser_for_non_file_config.py b/python/paddle/trainer_config_helpers/tests/configs/test_config_parser_for_non_file_config.py new file mode 100644 index 0000000000..71ee0499d1 --- /dev/null +++ b/python/paddle/trainer_config_helpers/tests/configs/test_config_parser_for_non_file_config.py @@ -0,0 +1,40 @@ +#!/usr/bin/env python +# Copyright (c) 2016 Baidu, Inc. 
All Rights Reserved +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import sys +import getopt + +whole = False +opts, args = getopt.getopt(sys.argv[1:], "", ["whole"]) +for op, value in opts: + if op == "--whole": + whole = True + +cmdstr = """ +from paddle.trainer.config_parser import * +from paddle.trainer_config_helpers import * +def configs():\n""" + +for line in sys.stdin: + if "import" in line and "from" in line: + continue + cmdstr = cmdstr + " " + line + +if whole: + cmdstr = cmdstr + """print parse_config(configs, "")""" +else: + cmdstr = cmdstr + """print parse_config(configs, "").model_config""" + +exec(cmdstr) From 84d47ac205b81ff06efd73892cd714f7874dda63 Mon Sep 17 00:00:00 2001 From: dangqingqing Date: Tue, 6 Dec 2016 11:43:14 +0800 Subject: [PATCH 038/265] follow comments --- python/paddle/utils/image_multiproc.py | 55 +++++++++++--------------- 1 file changed, 22 insertions(+), 33 deletions(-) diff --git a/python/paddle/utils/image_multiproc.py b/python/paddle/utils/image_multiproc.py index 82df6d6c0c..6ce32f7811 100644 --- a/python/paddle/utils/image_multiproc.py +++ b/python/paddle/utils/image_multiproc.py @@ -3,7 +3,8 @@ import numpy as np from PIL import Image from cStringIO import StringIO import multiprocessing -from functools import partial +import functools +import itertools from paddle.utils.image_util import * from paddle.trainer.config_parser import logger @@ -14,10 +15,12 @@ except ImportError: logger.warning("OpenCV2 is not installed, using PIL to prcoess") cv2 = None +__all__ = ["CvTransformer", "PILTransformer", "MultiProcessImageTransformer"] -class CvTransfomer(ImageTransformer): + +class CvTransformer(ImageTransformer): """ - CvTransfomer used python-opencv to process image. + CvTransformer used python-opencv to process image. """ def __init__( @@ -97,9 +100,9 @@ class CvTransfomer(ImageTransformer): return self.transform(im) -class PILTransfomer(ImageTransformer): +class PILTransformer(ImageTransformer): """ - PILTransfomer used PIL to process image. + PILTransformer used PIL to process image. """ def __init__( @@ -170,8 +173,11 @@ class PILTransfomer(ImageTransformer): return self.transform(im) -def warpper(cls, (dat, label)): - return cls.job(dat, label) +def job(is_img_string, transformer, (data, label)): + if is_img_string: + return transformer.transform_from_string(data), label + else: + return transformer.transform_from_file(data), label class MultiProcessImageTransformer(object): @@ -238,36 +244,19 @@ class MultiProcessImageTransformer(object): :type is_img_string: bool. 
""" + self.procnum = procnum self.pool = multiprocessing.Pool(procnum) self.is_img_string = is_img_string if cv2 is not None: - self.transformer = CvTransfomer(resize_size, crop_size, transpose, - channel_swap, mean, is_train, - is_color) - else: - self.transformer = PILTransfomer(resize_size, crop_size, transpose, + self.transformer = CvTransformer(resize_size, crop_size, transpose, channel_swap, mean, is_train, is_color) - - def run(self, data, label): - try: - fun = partial(warpper, self) - return self.pool.imap_unordered(fun, zip(data, label), chunksize=5) - except KeyboardInterrupt: - self.pool.terminate() - except Exception, e: - self.pool.terminate() - - def job(self, data, label): - if self.is_img_string: - return self.transformer.transform_from_string(data), label else: - return self.transformer.transform_from_file(data), label - - def __getstate__(self): - self_dict = self.__dict__.copy() - del self_dict['pool'] - return self_dict + self.transformer = PILTransformer(resize_size, crop_size, transpose, + channel_swap, mean, is_train, + is_color) - def __setstate__(self, state): - self.__dict__.update(state) + def run(self, data, label): + fun = functools.partial(job, self.is_img_string, self.transformer) + return self.pool.imap_unordered( + fun, itertools.izip(data, label), chunksize=100 * self.procnum) From 4453d767593d15d8f6486a1e99bc7a0482081178 Mon Sep 17 00:00:00 2001 From: liaogang Date: Tue, 6 Dec 2016 14:01:54 +0800 Subject: [PATCH 039/265] Upgrade cuda minimum version to 7.0 --- CMakeLists.txt | 6 ++---- cmake/flags.cmake | 3 +++ doc/getstarted/build_and_install/build_from_source.md | 9 +++++---- 3 files changed, 10 insertions(+), 8 deletions(-) diff --git a/CMakeLists.txt b/CMakeLists.txt index 5b36088b75..ba6fb95315 100644 --- a/CMakeLists.txt +++ b/CMakeLists.txt @@ -77,12 +77,10 @@ if(NOT WITH_GPU) add_definitions(-DHPPL_STUB_FUNC) list(APPEND CMAKE_CXX_SOURCE_FILE_EXTENSIONS cu) else() - if(${CUDA_VERSION_MAJOR} GREATER 6) - LIST(APPEND CUDA_NVCC_FLAGS -std=c++11) + if(${CUDA_VERSION_MAJOR} VERSION_LESS 7) + message(FATAL_ERROR "Paddle need CUDA >= 7.0 to compile") endif() - set(CUDA_PROPAGATE_HOST_FLAGS OFF) - if(NOT CUDNN_FOUND) message(FATAL_ERROR "Paddle need cudnn to compile") endif() diff --git a/cmake/flags.cmake b/cmake/flags.cmake index 4531efb1d5..0983d83b73 100644 --- a/cmake/flags.cmake +++ b/cmake/flags.cmake @@ -138,8 +138,11 @@ foreach(flag ${GPU_COMMON_FLAGS}) endforeach() +set(CUDA_PROPAGATE_HOST_FLAGS OFF) + # Release/Debug flags set by cmake. Such as -O3 -g -DNDEBUG etc. # So, don't set these flags here. +LIST(APPEND CUDA_NVCC_FLAGS -std=c++11) LIST(APPEND CUDA_NVCC_FLAGS --use_fast_math) if(CMAKE_BUILD_TYPE STREQUAL "Debug") diff --git a/doc/getstarted/build_and_install/build_from_source.md b/doc/getstarted/build_and_install/build_from_source.md index a71a9c3143..92389905e5 100644 --- a/doc/getstarted/build_and_install/build_from_source.md +++ b/doc/getstarted/build_and_install/build_from_source.md @@ -17,14 +17,15 @@ cd paddle To compile the source code, your computer must be equipped with the following dependencies. -### Dependencies - - **Compiler**: GCC >= 4.8 or Clang >= 3.3 (AppleClang >= 5.1) - **CMake**: version >= 2.8 - **BLAS**: MKL, OpenBlas or ATLAS - **Protocol Buffers**: version >= 2.4, **Note: 3.x is not supported** - **Python**: only python 2.7 is supported currently +**Note:** For CUDA 7.0 and CUDA 7.5, GCC 5.0 and up are not supported! +For CUDA 8.0, GCC versions later than 5.3 are not supported! 
+ ### Options PaddlePaddle supports some build options. To enable it, first you need to install the related libraries. @@ -51,8 +52,8 @@ PaddlePaddle supports some build options. To enable it, first you need to instal **Note:** - - The GPU version works best with Cuda Toolkit 7.5 and cuDNN v5. - - Other versions like Cuda Toolkit 6.5, 7.0, 8.0 and cuDNN v2, v3, v4 are also supported. + - The GPU version works best with Cuda Toolkit 8.0 and cuDNN v5. + - Other versions like Cuda Toolkit 7.0, 7.5 and cuDNN v3, v4 are also supported. - **To utilize cuDNN v5, Cuda Toolkit 7.5 is prerequisite and vice versa.** As a simple example, consider the following: From 75351f586d08c110f0dd2489a29875f4fc77eef9 Mon Sep 17 00:00:00 2001 From: gangliao Date: Tue, 6 Dec 2016 14:17:18 +0800 Subject: [PATCH 040/265] Revert test_Matrix.cpp when merge with the develop branch, some mistakes happened --- paddle/math/tests/{test_matrix.cpp => test_Matrix.cpp} | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename paddle/math/tests/{test_matrix.cpp => test_Matrix.cpp} (100%) diff --git a/paddle/math/tests/test_matrix.cpp b/paddle/math/tests/test_Matrix.cpp similarity index 100% rename from paddle/math/tests/test_matrix.cpp rename to paddle/math/tests/test_Matrix.cpp From 1539335383171dcc39e251ed7b5e194c01f6f282 Mon Sep 17 00:00:00 2001 From: Yu Yang Date: Wed, 7 Dec 2016 10:07:40 +0800 Subject: [PATCH 041/265] Add unittest related #653 * But not reproduce the problem. --- paddle/gserver/tests/test_PyDataProvider2.cpp | 61 +++++++++++++------ paddle/gserver/tests/test_PyDataProvider2.py | 10 +++ 2 files changed, 52 insertions(+), 19 deletions(-) diff --git a/paddle/gserver/tests/test_PyDataProvider2.cpp b/paddle/gserver/tests/test_PyDataProvider2.cpp index 24aa73910f..6674e6b87c 100644 --- a/paddle/gserver/tests/test_PyDataProvider2.cpp +++ b/paddle/gserver/tests/test_PyDataProvider2.cpp @@ -15,16 +15,16 @@ limitations under the License. 
*/ #ifndef PADDLE_NO_PYTHON #include #include -#include "paddle/utils/Util.h" -#include "paddle/utils/PythonUtil.h" #include "paddle/gserver/dataproviders/DataProvider.h" +#include "paddle/utils/PythonUtil.h" +#include "paddle/utils/Util.h" P_DEFINE_string(train_list, "unittest.list", "file list for unittest"); namespace paddle { namespace unittest { namespace pydp2 { -extern void setOnPoolFilledHook(const std::function& func); +extern void setOnPoolFilledHook(const std::function &func); extern void clearOnPoolFilledHook(); } // namespace pydp2 @@ -33,8 +33,8 @@ extern void clearOnPoolFilledHook(); const paddle::real epsilon = 1e-5; -static inline int64_t readDataBatch(paddle::DataBatch* batch, - const std::string& funcName, +static inline int64_t readDataBatch(paddle::DataBatch *batch, + const std::string &funcName, int64_t batchSize = 65535) { paddle::DataConfig config; config.set_type("py2"); @@ -143,7 +143,7 @@ TEST(PyDataProvider2, init_hook) { paddle::DataBatch batch; int64_t num = provider->getNextBatchInternal(100000, &batch); ASSERT_EQ(num, 200); - auto& mat = batch.getStreams()[0].value; + auto &mat = batch.getStreams()[0].value; ASSERT_EQ((size_t)mat->getWidth(), (size_t)20); for (size_t i = 0; i < 200; ++i) { for (size_t j = 0; j < 20; ++j) { @@ -170,7 +170,7 @@ TEST(PyDataProvider2, sparse_no_value_no_seq) { CHECK(csm != nullptr); for (int i = 0; i < 200; ++i) { CHECK_EQ(csm->getColNum(i), (size_t)10); - int* cols = csm->getRowCols(i); + int *cols = csm->getRowCols(i); for (int j = 0; j < 10; ++j) { CHECK_EQ(cols[j], (i + 1) * (j + 1)); } @@ -185,8 +185,8 @@ TEST(PyDataProvider2, sparse_value_no_seq) { CHECK(csm != nullptr); for (int i = 0; i < 200; ++i) { CHECK_EQ(csm->getColNum(i), (size_t)10); - int* cols = csm->getRowCols(i); - real* dat = csm->getRowValues(i); + int *cols = csm->getRowCols(i); + real *dat = csm->getRowValues(i); for (int j = 0; j < 10; ++j) { EXPECT_EQ(cols[j], (i + 1) * (j + 1)); EXPECT_EQ(dat[j], real(j) / real(i + 1)); @@ -197,7 +197,7 @@ TEST(PyDataProvider2, sparse_value_no_seq) { TEST(PyDataProvider2, index_seq) { paddle::DataBatch batch; CHECK_EQ(readDataBatch(&batch, "test_index_seq"), 200); - auto& arg = batch.getStreams()[0]; + auto &arg = batch.getStreams()[0]; CHECK_EQ((int)arg.ids->getSize(), (200 + 1) * 200 / 2); size_t tmp = 0; for (size_t i = 0; i < 200; ++i) { // CHECK DATA CORRECT @@ -219,7 +219,7 @@ TEST(PyDataProvider2, index_seq) { TEST(PyDataProvider2, index_sub_seq) { paddle::DataBatch batch; ASSERT_EQ(readDataBatch(&batch, "test_index_sub_seq"), 200); - auto& arg = batch.getStreams()[0]; + auto &arg = batch.getStreams()[0]; size_t tmp = 0; for (size_t i = 0; i < 200; ++i) { for (size_t j = 0; j < i + 1; ++j) { @@ -268,7 +268,7 @@ TEST(PyDataProvider2, min_pool_size) { } }); while (true) { - size_t realBatchSize = provider->getNextBatchInternal(batchSize, &batch); + int64_t realBatchSize = provider->getNextBatchInternal(batchSize, &batch); if (realBatchSize) { totalData -= realBatchSize; } else { @@ -291,7 +291,7 @@ TEST(PyDataProvider2, can_over_batch_size) { provider->reset(); constexpr size_t batchSize = 100; while (true) { - size_t realBatchSize = provider->getNextBatchInternal(batchSize, &batch); + int64_t realBatchSize = provider->getNextBatchInternal(batchSize, &batch); if (realBatchSize) { CHECK_LE(realBatchSize, batchSize); } else { @@ -317,12 +317,12 @@ TEST(PyDataProvider2, input_order) { provider->reset(); constexpr size_t batchSize = 100; while (true) { - size_t realBatchSize = provider->getNextBatchInternal(batchSize, 
&batch); + int64_t realBatchSize = provider->getNextBatchInternal(batchSize, &batch); if (!realBatchSize) { break; } - ASSERT_EQ(batch.getStreams().size(), (size_t)2); - for (size_t i = 0; i < realBatchSize; ++i) { + ASSERT_EQ(batch.getStreams().size(), static_cast(2)); + for (int64_t i = 0; i < realBatchSize; ++i) { ASSERT_EQ(batch.getStream(0).ids->getData()[i], 0); ASSERT_EQ(batch.getStream(1).ids->getData()[i], 1); } @@ -341,11 +341,11 @@ TEST(PyDataProvider2, test_check) { paddle::DataProvider::create(config, false)); provider->reset(); while (true) { - size_t realBatchSize = provider->getNextBatchInternal(100, &batch); + int64_t realBatchSize = provider->getNextBatchInternal(100, &batch); if (!realBatchSize) { break; } else { - auto& ivec = batch.getStream(0).ids; + auto &ivec = batch.getStream(0).ids; for (size_t i = 0; i < ivec->getSize(); ++i) { CHECK_LT(ivec->getData()[i], 10); } @@ -370,7 +370,30 @@ TEST(PyDataProvider2, multiThread) { provider.reset(); } -int main(int argc, char** argv) { +TEST(PyDataProvider2, minPoolSizeWithCache) { + paddle::DataConfig config; + config.set_type("py2"); + config.set_files(FLAGS_train_list.c_str()); + config.set_load_data_module("test_PyDataProvider2"); + config.set_load_data_object("test_min_pool_size_with_cache"); + config.set_async_load_data(true); + + std::unique_ptr provider( + paddle::DataProvider::create(config, false)); + + paddle::DataBatch batch; + + for (int i = 0; i < 10; ++i) { + provider->reset(); + int64_t sum = 0; + while (int64_t actualNum = provider->getNextBatch(100, &batch)) { + sum += actualNum; + } + ASSERT_EQ(1 << 20, sum); + } +} + +int main(int argc, char **argv) { testing::InitGoogleTest(&argc, argv); paddle::initMain(argc, argv); paddle::initPython(argc, argv); diff --git a/paddle/gserver/tests/test_PyDataProvider2.py b/paddle/gserver/tests/test_PyDataProvider2.py index 7ca30198fb..bf23c52fd7 100644 --- a/paddle/gserver/tests/test_PyDataProvider2.py +++ b/paddle/gserver/tests/test_PyDataProvider2.py @@ -111,3 +111,13 @@ def test_check(settings, filename): if i < 10: yield_good_value = True yield i + + +@provider( + input_types=[index_slot(10)], + min_pool_size=1000, + cache=CacheType.CACHE_PASS_IN_MEM, ) +def test_min_pool_size_with_cache(settings, filename): + import random + for _ in xrange(2**20): + yield random.randint(0, 9) From 0eaf9f64e63315bc3e6735c5ac4349cb6e50b68a Mon Sep 17 00:00:00 2001 From: liaogang Date: Wed, 7 Dec 2016 15:26:15 +0800 Subject: [PATCH 042/265] Remove sparse length limits --- paddle/math/SparseRowMatrix.cpp | 5 +---- 1 file changed, 1 insertion(+), 4 deletions(-) diff --git a/paddle/math/SparseRowMatrix.cpp b/paddle/math/SparseRowMatrix.cpp index eefaf4b71f..100827e321 100644 --- a/paddle/math/SparseRowMatrix.cpp +++ b/paddle/math/SparseRowMatrix.cpp @@ -15,15 +15,14 @@ limitations under the License. 
*/ #include "SparseRowMatrix.h" #include "CpuSparseMatrix.h" -#include #include #include "paddle/utils/Logging.h" #include "SIMDFunctions.h" -#include "paddle/utils/Util.h" #include "paddle/utils/Thread.h" +#include "paddle/utils/Util.h" P_DEFINE_bool(allow_inefficient_sparse_update, false, @@ -34,8 +33,6 @@ namespace paddle { const unsigned int SparseRowCpuMatrix::kUnusedId_ = -1U; void SparseRowCpuMatrix::init(size_t height, size_t width) { - // @TODO(yuyang18) Just remove this limit - CHECK(simd::vec_check(width)) << width; height_ = height; if (!indexDictHandle_) { indexDictHandle_.reset(new IndexDict); From e6cdd18015f33492e49110f96a6986a97767cb4a Mon Sep 17 00:00:00 2001 From: Liu Yiqun Date: Wed, 7 Dec 2016 08:19:57 +0000 Subject: [PATCH 043/265] Update documents to init submodule. --- doc/getstarted/build_and_install/build_from_source.md | 1 + doc/getstarted/build_and_install/docker_install.rst | 1 + doc/howto/contribute_to_paddle.md | 5 +++-- 3 files changed, 5 insertions(+), 2 deletions(-) diff --git a/doc/getstarted/build_and_install/build_from_source.md b/doc/getstarted/build_and_install/build_from_source.md index b932fbc0fa..a2a96f6f48 100644 --- a/doc/getstarted/build_and_install/build_from_source.md +++ b/doc/getstarted/build_and_install/build_from_source.md @@ -11,6 +11,7 @@ You can download PaddlePaddle from the [github source](https://github.com/Paddle ```bash git clone https://github.com/PaddlePaddle/Paddle paddle cd paddle +git submodule update --init --recursive ``` ## Requirements diff --git a/doc/getstarted/build_and_install/docker_install.rst b/doc/getstarted/build_and_install/docker_install.rst index 5abb3b9a3f..1ab6fc6a72 100644 --- a/doc/getstarted/build_and_install/docker_install.rst +++ b/doc/getstarted/build_and_install/docker_install.rst @@ -79,6 +79,7 @@ source code: cd ~ git clone github.com/PaddlePaddle/Paddle cd Paddle + git submodule update --init --recursive docker build --build-arg WITH_AVX=OFF -t paddle:cpu-noavx -f paddle/scripts/docker/Dockerfile . docker build --build-arg WITH_AVX=OFF -t paddle:gpu-noavx -f paddle/scripts/docker/Dockerfile.gpu . diff --git a/doc/howto/contribute_to_paddle.md b/doc/howto/contribute_to_paddle.md index d1f12c6ab2..1decc91d62 100644 --- a/doc/howto/contribute_to_paddle.md +++ b/doc/howto/contribute_to_paddle.md @@ -36,8 +36,9 @@ If your repository doesn't contain **develop** branch, just create it by your ow git clone https://github.com/USERNAME/Paddle.git Paddle cd Paddle git checkout -b develop # create develop branch. -git remote add upstream https://github.com/baidu/Paddle.git # add upstream to baidu/Paddle +git remote add upstream https://github.com/PaddlePaddle/Paddle.git # add upstream to baidu/Paddle git pull upstream develop # update to upstream +git submodule update --init --recursive ``` Then you can start to develop by making a local developement branch @@ -69,7 +70,7 @@ To do this, you'll need to add a remote at first: # see the current configured remote repository git remote -v # add upstream repository -git remote add upstream https://github.com/baidu/Paddle.git +git remote add upstream https://github.com/PaddlePaddle/Paddle.git # verify the new upstream git remote -v ``` From 0592d1b76eb3ac9bc3c6e3a479cb72dfab1d45e8 Mon Sep 17 00:00:00 2001 From: Li Peng Date: Wed, 7 Dec 2016 18:10:17 +0800 Subject: [PATCH 044/265] Add bazel installation to the Dockerfiles Also add DEBIAN_FRONTEND=noninteractive to suppress the debconf messages. 
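A minimal sketch of why DEBIAN_FRONTEND is introduced with ARG rather than ENV (illustrative only, not taken from the Paddle Dockerfiles; the base image and package below are placeholders): a build argument only exists while `docker build` runs, so debconf prompts are suppressed during the apt-get steps without the setting leaking into containers started from the resulting image.

    # Hypothetical stand-alone example, not part of this patch.
    FROM ubuntu:14.04
    # Build-time only: applies to debconf/apt during RUN steps,
    # and is not present when the image is later run.
    ARG DEBIAN_FRONTEND=noninteractive
    RUN apt-get update && apt-get install -y curl
    # Using ENV DEBIAN_FRONTEND=noninteractive instead would persist the
    # variable into every container created from this image.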
--- paddle/scripts/docker/Dockerfile | 8 ++++++++ paddle/scripts/docker/Dockerfile.gpu | 8 ++++++++ 2 files changed, 16 insertions(+) diff --git a/paddle/scripts/docker/Dockerfile b/paddle/scripts/docker/Dockerfile index 2a1a842336..a9a72b355e 100644 --- a/paddle/scripts/docker/Dockerfile +++ b/paddle/scripts/docker/Dockerfile @@ -12,6 +12,14 @@ RUN apt-get update \ RUN pip install -U BeautifulSoup docopt PyYAML pillow \ sphinx sphinx_rtd_theme breathe recommonmark +ARG DEBIAN_FRONTEND=noninteractive +RUN apt-get update && apt-get install -y curl software-properties-common \ + && add-apt-repository ppa:webupd8team/java \ + && echo "oracle-java8-installer shared/accepted-oracle-license-v1-1 select true" | debconf-set-selections \ + && echo "deb [arch=amd64] http://storage.googleapis.com/bazel-apt stable jdk1.8" | tee /etc/apt/sources.list.d/bazel.list \ + && curl https://bazel.build/bazel-release.pub.gpg | apt-key add - \ + && apt-get update && apt-get install -y oracle-java8-installer bazel + ARG WITH_AVX ARG WITH_DOC ARG WITH_SWIG_PY diff --git a/paddle/scripts/docker/Dockerfile.gpu b/paddle/scripts/docker/Dockerfile.gpu index b3253d23c3..a147e3840e 100644 --- a/paddle/scripts/docker/Dockerfile.gpu +++ b/paddle/scripts/docker/Dockerfile.gpu @@ -12,6 +12,14 @@ RUN apt-get update \ RUN pip install -U BeautifulSoup docopt PyYAML pillow \ sphinx sphinx_rtd_theme breathe recommonmark +ARG DEBIAN_FRONTEND=noninteractive +RUN apt-get update && apt-get install -y curl software-properties-common \ + && add-apt-repository ppa:webupd8team/java \ + && echo "oracle-java8-installer shared/accepted-oracle-license-v1-1 select true" | debconf-set-selections \ + && echo "deb [arch=amd64] http://storage.googleapis.com/bazel-apt stable jdk1.8" | tee /etc/apt/sources.list.d/bazel.list \ + && curl https://bazel.build/bazel-release.pub.gpg | apt-key add - \ + && apt-get update && apt-get install -y oracle-java8-installer bazel + ARG WITH_AVX ARG WITH_DOC ARG WITH_SWIG_PY From 00b86b1f3a967357517bd0acace0e166273afb82 Mon Sep 17 00:00:00 2001 From: Yu Yang Date: Wed, 7 Dec 2016 21:37:19 +0800 Subject: [PATCH 045/265] Follow comments --- doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst | 58 +++++++++++-------- 1 file changed, 35 insertions(+), 23 deletions(-) diff --git a/doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst b/doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst index 7ae9f5ef8e..3f5100cbf1 100644 --- a/doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst +++ b/doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst @@ -4,14 +4,14 @@ 单双层RNN API对比介绍 ##################### -这篇教程主要介绍了\ :ref:`glossary_双层RNN`\ 的API接口。本文以PaddlePaddle的\ :ref:`glossary_双层RNN`\ 单元测试为示例,用多对效果完全相同的、分别使用单双层RNN作为网络配置的模型,来讲解如何使用\ :ref:`glossary_双层RNN`\ 。本文中所有的例子,都只是介绍\ :ref:`glossary_双层RNN`\ 的API接口,并不是使用\ :ref:`glossary_双层RNN`\ 解决实际的问题。如果想要了解\ :ref:`glossary_双层RNN`\ 在具体问题中的使用,请参考\ :ref:`algo_hrnn_demo`\ 。本文中示例所使用的单元测试文件是\ `test_RecurrentGradientMachine.cpp `_\ 。 +本文以PaddlePaddle的\ :ref:`glossary_双层RNN`\ 单元测试为示例,用多对效果完全相同的、分别使用单双层RNN作为网络配置的模型,来讲解如何使用\ :ref:`glossary_双层RNN`\ 。本文中所有的例子,都只是介绍\ :ref:`glossary_双层RNN`\ 的API接口,并不是使用\ :ref:`glossary_双层RNN`\ 解决实际的问题。如果想要了解\ :ref:`glossary_双层RNN`\ 在具体问题中的使用,请参考\ :ref:`algo_hrnn_demo`\ 。本文中示例所使用的单元测试文件是\ `test_RecurrentGradientMachine.cpp `_\ 。 示例1:双层RNN,子序列间无Memory ================================ -在\ :ref:`glossary_双层RNN`\ 中的经典情况是将内层的每一个\ :ref:`glossary_sequence`\ 数据,分别进行序列操作。并且内层的序列操作之间是独立没有依赖的,即不需要使用\ :ref:`glossary_Memory`\ 的。 +在\ :ref:`glossary_双层RNN`\ 中的经典情况是将内层的每一个\ :ref:`glossary_sequence`\ 
数据,分别进行序列操作;并且内层的序列操作之间独立无依赖,即不需要使用\ :ref:`glossary_Memory`\ 。 -在本示例中,单层\ :ref:`glossary_RNN`\ 和\ :ref:`glossary_双层RNN`\ 的网络配置,都是将每一句分好词后的句子,使用LSTM作为encoder,压缩成一个向量。区别是\ :ref:`glossary_RNN`\ 使用两层序列模型,将多句话看成一个整体,同时使用encoder压缩,二者语意上完全一致。这组语意相同的示例配置如下 +在本示例中,单层\ :ref:`glossary_RNN`\ 和\ :ref:`glossary_双层RNN`\ 的网络配置,都是将每一句分好词后的句子,使用LSTM作为encoder,压缩成一个向量。区别是\ :ref:`glossary_RNN`\ 使用两层序列模型,将多句话看成一个整体同时使用encoder压缩。二者语意上完全一致。这组语义相同的示例配置如下: * 单层\ :ref:`glossary_RNN`\: `sequence_layer_group.conf `_ * :ref:`glossary_双层RNN`\: `sequence_nest_layer_group.conf `_ @@ -28,7 +28,7 @@ :language: text -- 双层序列数据一共有4个样本。 每个样本间用空行分开,整体数据和原始数据完全一样。而对于双层序列的LSTM来说,第一条数据同时encode两条数据成两个向量。这四条数据同时处理的句子为\ :code:`[2, 3, 2, 3]`\ 。 +- 双层序列数据一共有4个样本。 每个样本间用空行分开,整体数据和原始数据完全一样。但于双层序列的LSTM来说,第一个样本同时encode两条数据成两个向量。这四条数据同时处理的句子数量为\ :code:`[2, 3, 2, 3]`\ 。 .. literalinclude:: ../../../paddle/gserver/tests/Sequence/tour_train_wdseg.nest :language: text @@ -51,17 +51,17 @@ :lines: 42-71 :linenos: -- 这是对于同样的数据,本示例中双层\ :ref:`glossary_sequence`\ 的\ :ref:`glossary_DataProvider`\ 代码,其说明如下: +- 对于同样的数据,双层\ :ref:`glossary_sequence`\ 的\ :ref:`glossary_DataProvider`\ 的代码。其说明如下: - :ref:`glossary_DataProvider`\ 共返回两组数据,分别是sentences和labels。即在双层序列的原始数据中,每一组内的所有句子和labels - - sentences是双层\ :ref:`glossary_sequence`\ 的数据。他内部包括了每组数据中的所有句子,又使用句子中每一个单词的词表index表示每一个句子,故为双层\ :ref:`glossary_sequence`\ 。类型为 integer_value_sub_sequence 。 - - labels是每组内每一个句子的标签,故而是一个单层\ :ref:`glossary_sequence`\ 。 + - sentences是双层\ :ref:`glossary_sequence`\ 的数据。由于它内部包含了每组数据中的所有句子,且每个句子表示为对应的词表索引数组,因此它是integer_value_sub_sequence 类型的,即双层\ :ref:`glossary_sequence`\ 。 + - labels是每组内每个句子的标签,故而是一个单层\ :ref:`glossary_sequence`\ 。 :ref:`glossary_trainer_config`\ 的模型配置 ------------------------------------------ -首先,我们看一下单层\ :ref:`glossary_RNN`\ 的配置。代码中9-15行即为单层RNN序列的使用代码。这里使用了PaddlePaddle预定义好的\ :ref:`glossary_RNN`\ 处理函数。在这个函数中,\ :ref:`glossary_RNN`\ 对于每一个\ :ref:`glossary_timestep`\ 通过了一个LSTM网络。 +首先,我们看一下单层\ :ref:`glossary_RNN`\ 的配置。代码中9-15行(高亮部分)即为单层RNN序列的使用代码。这里使用了PaddlePaddle预定义好的\ :ref:`glossary_RNN`\ 处理函数。在这个函数中,\ :ref:`glossary_RNN`\ 对于每一个\ :ref:`glossary_timestep`\ 通过了一个LSTM网络。 .. 
literalinclude:: ../../../paddle/gserver/tests/sequence_layer_group.conf :language: python @@ -70,19 +70,19 @@ :emphasize-lines: 9-15 -其次,我们看一下语义相同的\ :ref:`glossary_双层RNN`\ 的网络配置。 +其次,我们看一下语义相同的\ :ref:`glossary_双层RNN`\ 的网络配置\: * PaddlePaddle中的许多layer并不在意输入是否是\ :ref:`glossary_sequence`\ ,例如\ :code:`embedding_layer`\ 。在这些layer中,所有的操作都是针对每一个\ :ref:`glossary_timestep`\ 来进行的。 -* 在该配置中,7-26行将双层\ :ref:`glossary_sequence`\ 数据,先变换成单层\ :ref:`glossary_sequence`\ 数据,在对每一个单层\ :ref:`glossary_sequence`\ 进行处理。 +* 在该配置的7-26行(高亮部分),将双层\ :ref:`glossary_sequence`\ 数据先变换成单层\ :ref:`glossary_sequence`\ 数据,再对每一个单层\ :ref:`glossary_sequence`\ 进行处理。 * 使用\ :code:`recurrent_group`\ 这个函数进行变换,在变换时需要将输入序列传入。由于我们想要的变换是双层\ :ref:`glossary_sequence`\ => 单层\ :ref:`glossary_sequence`\ ,所以我们需要将输入数据标记成\ :code:`SubsequenceInput`\ 。 * 在本例中,我们将原始数据的每一组,通过\ :code:`recurrent_group`\ 进行拆解,拆解成的每一句话再通过一个LSTM网络。这和单层\ :ref:`glossary_RNN`\ 的配置是等价的。 -* 与单层\ :ref:`glossary_RNN`\ 的配置类似,我们只需要知道使用LSTM encode成的最后一个向量。所以对\ :code:`recurrent_group`\ 进行了\ :code:`last_seq`\ 操作。但是,和单层\ :ref:`glossary_RNN`\ 有区别的地方是,我们是对每一个子序列取最后一个元素。于是我们设置\ :code:`agg_level=AggregateLevel.EACH_SEQUENCE`\ 。 +* 与单层\ :ref:`glossary_RNN`\ 的配置类似,我们只需要使用LSTM encode成的最后一个向量。所以对\ :code:`recurrent_group`\ 进行了\ :code:`last_seq`\ 操作。但和单层\ :ref:`glossary_RNN`\ 不同,我们是对每一个子序列取最后一个元素,因此\ :code:`agg_level=AggregateLevel.EACH_SEQUENCE`\ 。 -* 至此,\ :code:`lstm_last`\ 便和单层\ :ref:`glossary_RNN`\ 的配置中的\ :code:`lstm_last`\ 具有相同的结果了。 +* 至此,\ :code:`lstm_last`\ 便和单层\ :ref:`glossary_RNN`\ 配置中的\ :code:`lstm_last`\ 具有相同的结果了。 .. literalinclude:: ../../../paddle/gserver/tests/sequence_nest_layer_group.conf :language: python @@ -93,30 +93,34 @@ 示例2::ref:`glossary_双层RNN`,子序列间有\ :ref:`glossary_Memory` ================================================================== -本示例中,意图使用单层\ :ref:`glossary_RNN`\ 和\ :ref:`glossary_双层RNN`\ 同时实现一个完全等价的全连接\ :ref:`glossary_RNN`\ 。对于单层\ :ref:`glossary_RNN`\ ,输入数据为一个完整的\ :ref:`glossary_sequence`\ ,例如\ :code:`[4, 5, 2, 0, 9, 8, 1, 4]`\ 。而对于\ :ref:`glossary_双层RNN`\ ,输入数据为在单层\ :ref:`glossary_RNN`\ 数据里面,任意将一些数据组合成双层\ :ref:`glossary_sequence`\ ,例如\ :code:`[ [4, 5, 2], [0, 9], [8, 1, 4]]`。 +本示例意图使用单层\ :ref:`glossary_RNN`\ 和\ :ref:`glossary_双层RNN`\ 实现两个完全等价的全连接\ :ref:`glossary_RNN`\ 。 + +* 对于单层\ :ref:`glossary_RNN`\ ,输入数据为一个完整的\ :ref:`glossary_sequence`\ ,例如\ :code:`[4, 5, 2, 0, 9, 8, 1, 4]`\ 。 + +* 对于\ :ref:`glossary_双层RNN`\ ,输入数据为在单层\ :ref:`glossary_RNN`\ 数据里面,任意将一些数据组合成双层\ :ref:`glossary_sequence`\ ,例如\ :code:`[ [4, 5, 2], [0, 9], [8, 1, 4]]`。 :ref:`glossary_trainer_config`\ 的模型配置 ------------------------------------------ 我们选取单双层序列配置中的不同部分,来对比分析两者语义相同的原因。 -- 单层序列:过了一个很简单的recurrent_group。每一个时间步,当前的输入y和上一个时间步的输出rnn_state做了一个全链接。 +- 单层\ :ref:`glossary_rnn`\ :过了一个很简单的recurrent_group。每一个时间步,当前的输入y和上一个时间步的输出rnn_state做了一个全链接。 .. literalinclude:: ../../../paddle/gserver/tests/sequence_rnn.conf :language: python :lines: 36-48 -- 双层序列,外层memory是一个元素: +- \ :ref:`glossary_双层RNN`\ ,外层memory是一个元素: - 内层inner_step的recurrent_group和单层序列的几乎一样。除了boot_layer=outer_mem,表示将外层的outer_mem作为内层memory的初始状态。外层outer_step中,outer_mem是一个子句的最后一个向量,即整个双层group是将前一个子句的最后一个向量,作为下一个子句memory的初始状态。 - - 从输入数据上看,单双层序列的句子是一样的,只是双层序列将其又做了子序列划分。因此双层序列的配置中,必须将前一个子句的最后一个元素,作为boot_layer传给下一个子句的memory,才能保证和单层序列的配置中“每一个时间步都用了上一个时间步的输出结果”一致。 + - 从输入数据上看,单双层序列的句子是一样的,只是双层序列将其又做了子序列划分。因此双层序列的配置中,必须将前一个子句的最后一个元素,作为boot_layer传给下一个子句的memory,才能保证和单层序列的配置中“每个时间步都用了上一个时间步的输出结果”一致。 .. literalinclude:: ../../../paddle/gserver/tests/sequence_nest_rnn.conf :language: python :lines: 39-66 .. 
warning:: - PaddlePaddle目前只支持在每一个时间步中,Memory的sequence长度一致的情况。 + PaddlePaddle目前只支持在每个时间步中,Memory的\ :ref:`glossary_sequence`\ 长度一致的情况。 示例3:双层RNN,输入不等长 ========================== @@ -127,31 +131,39 @@ -**输入不等长** 是指recurrent_group的多个输入序列,在每个\ :ref:`glossary_timestep`\ 的子序列长度可以不相等。但\ :ref:`glossary_双层RNN`\ 目前需要整体的输出,与某一个输入的序列信息是一致的。使用\ :red:`targetInlink`\ 可以指定和输出序列信息一致。 +**输入不等长** 是指recurrent_group的多个输入序列,在每个\ :ref:`glossary_timestep`\ 的子序列长度可以不相等。但序列输出时,需要指定与某一个输入的序列信息是一致的。使用\ :red:`targetInlink`\ 可以指定哪一个输入和输出序列信息一致,默认指定第一个输入。 + +示例3的配置分别为\ `单层不等长RNN `_\ 和\ `双层不等长RNN `_\ 。 -本例参考配置分别为\ `单层不等长RNN `_\ 和\ `双层不等长RNN `_\ 。 +示例3对于单层\ :ref:`glossary_RNN`\ 和\ :ref:`glossary_双层RNN`\ 数据完全相同。 -本例中对于单层\ :ref:`glossary_RNN`\ 和\ :ref:`glossary_双层RNN`\ 数据完全相同,对于单层\ :ref:`glossary_RNN`\ 的数据一共有两个样本,他们分别是\ :code:`[1, 2, 4, 5, 2], [5, 4, 1, 3, 1]`\ 和\ :code:`[0, 2, 2, 5, 0, 1, 2], [1, 5, 4, 2, 3, 6, 1]`\ 。对于每一个单层\ :ref:`glossary_RNN`\ 的数据,均有两组特征。在单层数据的基础上,\ :ref:`glossary_双层RNN`\ 数据随意加了一些隔断,例如将第一条数据转化为\ :code:`[[0, 2], [2, 5], [0, 1, 2]],[[1, 5], [4], [2, 3, 6, 1]]`\ 。但是需要注意的是Paddle目前只支持序列数目一样的多输入\ :ref:`glossary_双层RNN`\ 。即两个特征,均有三个子序列。每个子序列长度可以不一致,但是子序列的数目必须一样。 +* 对于单层\ :ref:`glossary_RNN`\ 的数据一共有两个样本,他们分别是\ :code:`[1, 2, 4, 5, 2], [5, 4, 1, 3, 1]`\ 和\ :code:`[0, 2, 2, 5, 0, 1, 2], [1, 5, 4, 2, 3, 6, 1]`\ 。对于每一个单层\ :ref:`glossary_RNN`\ 的数据,均有两组特征。 + +* 在单层数据的基础上,\ :ref:`glossary_双层RNN`\ 数据随意加了一些隔断,例如将第一条数据转化为\ :code:`[[0, 2], [2, 5], [0, 1, 2]],[[1, 5], [4], [2, 3, 6, 1]]`\ 。 + +* 需要注意的是Paddle目前只支持子序列数目一样的多输入\ :ref:`glossary_双层RNN`\ 。例如本里中的两个特征,均有三个子序列。每个子序列长度可以不一致,但是子序列的数目必须一样。 :ref:`glossary_trainer_config`\ 的模型配置 ------------------------------------------ -本例中的配置,使用了单层\ :ref:`glossary_RNN`\ 和\ :ref:`glossary_双层RNN`\ 使用一个\ :code:`recurrent_group`\ 将两个序列同时过完全连接的\ :ref:`glossary_RNN`\ 。对于单层\ :ref:`glossary_RNN`\ 的code如下。 +和示例2中的配置累死,示例3的配置使用了单层\ :ref:`glossary_RNN`\ 和\ :ref:`glossary_双层RNN`\ ,实现两个完全等价的全连接\ :ref:`glossary_RNN`\ 。 + +* 单层\ :ref:`glossary_RNN`\ \: .. literalinclude:: ../../../paddle/gserver/tests/sequence_rnn_multi_unequalength_inputs.py :language: python :lines: 42-59 :linenos: -而双层序列的代码如下。 +* :ref:`glossary_双层RNN`\ \: .. literalinclude:: ../../../paddle/gserver/tests/sequence_nest_rnn_multi_unequalength_inputs.py :language: python :lines: 41-80 :linenos: -在上面代码中,单层和双层序列的使用和示例2中的示例类似,区别是同时处理了两个输入。而对于双层序列,两个输入的子序列长度也并不相同。但是,我们使用了\ :code:`targetInlink`\ 参数设置了外层\ :code:`recurrent_group`\ 的输出格式。所以外层输出的序列形状,和\ :code:`emb2`的序列形状一致。 +在上面代码中,单层和双层序列的使用和示例2中的示例类似,区别是同时处理了两个输入。而对于双层序列,两个输入的子序列长度也并不相同。但是,我们使用了\ :code:`targetInlink`\ 参数设置了外层\ :code:`recurrent_group`\ 的输出格式。所以外层输出的序列形状,和\ :code:`emb2`\ 的序列形状一致。 示例4:beam_search的生成 ======================== From 3cf7337f3055f8f6aec15365a0fe2756acc5cf48 Mon Sep 17 00:00:00 2001 From: xuwei06 Date: Wed, 7 Dec 2016 13:51:20 -0800 Subject: [PATCH 046/265] Correctly handling multiple calls to parse_config() To solve this, we maintain the list of DefaultNameFactory used in by trainer_config_helper, and reset the state at the beginning of each parse_config call. 
Change-Id: I13c7574dc8f0b6bc6f6b7c92eb425e2c52c926e8 --- python/paddle/trainer/config_parser.py | 18 +++++++++--- .../default_decorators.py | 15 +++++++++- .../tests/CMakeLists.txt | 5 ++++ .../tests/test_reset_hook.py | 28 +++++++++++++++++++ 4 files changed, 61 insertions(+), 5 deletions(-) create mode 100644 python/paddle/trainer_config_helpers/tests/test_reset_hook.py diff --git a/python/paddle/trainer/config_parser.py b/python/paddle/trainer/config_parser.py index c6c0c9c151..699fe6630a 100644 --- a/python/paddle/trainer/config_parser.py +++ b/python/paddle/trainer/config_parser.py @@ -141,9 +141,9 @@ def init_config_environment( g_add_submodel_suffix=False, # Whether current layer needs to pass the image height and width. - # Default value is true, but if it encounters recurrent_layer_group, - # it will be false. The reason is that image is converted to be sequence, - # image height will be sequence length, and image width will be feature + # Default value is true, but if it encounters recurrent_layer_group, + # it will be false. The reason is that image is converted to be sequence, + # image height will be sequence length, and image width will be feature # length of each timestep. g_pass_height_width=True, ): @@ -1067,7 +1067,7 @@ def cnn_output_size(img_size, filter_size, padding, stride, caffe_mode): return 1 + int(math.ceil(output)) -#calcualte image_size based on output_size for de-convolution (ConvTransLayer). +#calcualte image_size based on output_size for de-convolution (ConvTransLayer). #It is the reverse function of cnn_output_size def cnn_image_size(output_size, filter_size, padding, stride, caffe_mode): img_size = (output_size - 1) * stride + filter_size - 2 * padding @@ -3364,6 +3364,14 @@ def my_fatal(s): logger.critical(s) raise Exception() +_parse_config_hooks = set() +def register_parse_config_hook(f): + """ + Register a hook function for parse_config. parse_config will invoke the hook + at the beginning of parse. This make it possible to reset global state for + for constructing the model. 
+ """ + _parse_config_hooks.add(f) def parse_config(config_file, config_arg_str): ''' @@ -3371,6 +3379,8 @@ def parse_config(config_file, config_arg_str): passed to config script as a dictionary CONFIG_ARGS ''' init_config_environment() + for hook in _parse_config_hooks: + hook() config_args = {} diff --git a/python/paddle/trainer_config_helpers/default_decorators.py b/python/paddle/trainer_config_helpers/default_decorators.py index c01050e338..23a4fa241d 100644 --- a/python/paddle/trainer_config_helpers/default_decorators.py +++ b/python/paddle/trainer_config_helpers/default_decorators.py @@ -78,6 +78,17 @@ class DefaultNameFactory(object): """ pass + def reset(self): + self.__counter__ = 0 + + +_name_factories = [] + +def reset_hook(): + for factory in _name_factories: + factory.reset() + +register_parse_config_hook(reset_hook) def wrap_name_default(name_prefix=None): """ @@ -95,7 +106,9 @@ def wrap_name_default(name_prefix=None): :return: a decorator to set default name :rtype: callable """ - return wrap_param_default(["name"], DefaultNameFactory(name_prefix)) + factory = DefaultNameFactory(name_prefix) + _name_factories.append(factory) + return wrap_param_default(["name"], factory) def wrap_param_attr_default(param_names=None, default_factory=None): diff --git a/python/paddle/trainer_config_helpers/tests/CMakeLists.txt b/python/paddle/trainer_config_helpers/tests/CMakeLists.txt index 6180b2efbc..bff82f7505 100644 --- a/python/paddle/trainer_config_helpers/tests/CMakeLists.txt +++ b/python/paddle/trainer_config_helpers/tests/CMakeLists.txt @@ -4,6 +4,11 @@ add_test(NAME layers_test python ${PROJ_ROOT}/python/paddle/trainer_config_helpers/tests/layers_test.py WORKING_DIRECTORY ${PROJ_ROOT}/python/paddle) +add_test(NAME test_reset_hook + COMMAND ${PROJ_ROOT}/paddle/.set_python_path.sh -d ${PROJ_ROOT}/python/ + python ${PROJ_ROOT}/python/paddle/trainer_config_helpers/tests/test_rest_hook.py + WORKING_DIRECTORY ${PROJ_ROOT}/python/paddle) + if (PROTOBUF_3) add_paddle_exe(protobuf_equal ProtobufEqualMain.cpp) diff --git a/python/paddle/trainer_config_helpers/tests/test_reset_hook.py b/python/paddle/trainer_config_helpers/tests/test_reset_hook.py new file mode 100644 index 0000000000..dc494d0eef --- /dev/null +++ b/python/paddle/trainer_config_helpers/tests/test_reset_hook.py @@ -0,0 +1,28 @@ +# Copyright PaddlePaddle contributors. All Rights Reserved +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+import unittest +from paddle.trainer.config_parser import parse_config + +class TestParse(unittest.TestCase): + + def test_parse(self): + a = parse_config( + 'trainer_config_helpers/tests/layers_test_config.py', '') + b = parse_config( + 'trainer_config_helpers/tests/layers_test_config.py', '') + self.assertEqual(a, b) + + +if __name__ == '__main__': + unittest.main() From 6772acab127849f0e94a471d20ef578a339f4179 Mon Sep 17 00:00:00 2001 From: Li Peng Date: Thu, 8 Dec 2016 11:00:02 +0800 Subject: [PATCH 047/265] Update according Yi's comments - Put the 'ARG DEBIAN_FRONTEND=noninteractive' ahead of the first 'apt-get install' invoke. - Add comments to Dockerfiles to explain why we're adding Bazel installation. Signed-off-by: Li Peng --- paddle/scripts/docker/Dockerfile | 7 ++++++- paddle/scripts/docker/Dockerfile.gpu | 7 ++++++- 2 files changed, 12 insertions(+), 2 deletions(-) diff --git a/paddle/scripts/docker/Dockerfile b/paddle/scripts/docker/Dockerfile index a9a72b355e..edb84712d8 100644 --- a/paddle/scripts/docker/Dockerfile +++ b/paddle/scripts/docker/Dockerfile @@ -1,6 +1,7 @@ FROM ubuntu:14.04 MAINTAINER PaddlePaddle Authors +ARG DEBIAN_FRONTEND=noninteractive RUN apt-get update \ && apt-get install -y cmake libprotobuf-dev protobuf-compiler git \ libgoogle-glog-dev libgflags-dev libatlas-dev libatlas3-base g++ m4 python-pip \ @@ -12,7 +13,11 @@ RUN apt-get update \ RUN pip install -U BeautifulSoup docopt PyYAML pillow \ sphinx sphinx_rtd_theme breathe recommonmark -ARG DEBIAN_FRONTEND=noninteractive +# cmake tends to hide and blur the dependencies between code modules, as +# noted here https://github.com/PaddlePaddle/Paddle/issues/763. We are +# thinking about using Bazel to fix this problem, e.g., +# https://github.com/PaddlePaddle/Paddle/issues/681#issuecomment-263996102. To +# start the trail of fixing, we add Bazel to our Dockerfiles. RUN apt-get update && apt-get install -y curl software-properties-common \ && add-apt-repository ppa:webupd8team/java \ && echo "oracle-java8-installer shared/accepted-oracle-license-v1-1 select true" | debconf-set-selections \ diff --git a/paddle/scripts/docker/Dockerfile.gpu b/paddle/scripts/docker/Dockerfile.gpu index a147e3840e..5d175e15a7 100644 --- a/paddle/scripts/docker/Dockerfile.gpu +++ b/paddle/scripts/docker/Dockerfile.gpu @@ -1,6 +1,7 @@ FROM nvidia/cuda:7.5-cudnn5-devel-ubuntu14.04 MAINTAINER PaddlePaddle Authors +ARG DEBIAN_FRONTEND=noninteractive RUN apt-get update \ && apt-get install -y cmake libprotobuf-dev protobuf-compiler git \ libgoogle-glog-dev libgflags-dev libatlas-dev libatlas3-base g++ m4 python-pip \ @@ -12,7 +13,11 @@ RUN apt-get update \ RUN pip install -U BeautifulSoup docopt PyYAML pillow \ sphinx sphinx_rtd_theme breathe recommonmark -ARG DEBIAN_FRONTEND=noninteractive +# cmake tends to hide and blur the dependencies between code modules, as +# noted here https://github.com/PaddlePaddle/Paddle/issues/763. We are +# thinking about using Bazel to fix this problem, e.g., +# https://github.com/PaddlePaddle/Paddle/issues/681#issuecomment-263996102. To +# start the trail of fixing, we add Bazel to our Dockerfiles. 
RUN apt-get update && apt-get install -y curl software-properties-common \ && add-apt-repository ppa:webupd8team/java \ && echo "oracle-java8-installer shared/accepted-oracle-license-v1-1 select true" | debconf-set-selections \ From aaecfcc47f0319cb74b80a7e095a8406ce8354ff Mon Sep 17 00:00:00 2001 From: dangqingqing Date: Thu, 8 Dec 2016 11:03:21 +0800 Subject: [PATCH 048/265] Support predicting the samples from sys.stdin --- demo/sentiment/predict.py | 73 ++++++++++++++++++++++----------------- demo/sentiment/predict.sh | 12 +++---- 2 files changed, 47 insertions(+), 38 deletions(-) diff --git a/demo/sentiment/predict.py b/demo/sentiment/predict.py index bc0f6f3126..e01dc6d228 100755 --- a/demo/sentiment/predict.py +++ b/demo/sentiment/predict.py @@ -12,7 +12,7 @@ # See the License for the specific language governing permissions and # limitations under the License. -import os +import os, sys import numpy as np from optparse import OptionParser from py_paddle import swig_paddle, DataProviderConverter @@ -66,35 +66,42 @@ class SentimentPrediction(): for v in open(label_file, 'r'): self.label[int(v.split('\t')[1])] = v.split('\t')[0] - def get_data(self, data_file): + def get_data(self, data): """ Get input data of paddle format. """ - with open(data_file, 'r') as fdata: - for line in fdata: - words = line.strip().split() - word_slot = [ - self.word_dict[w] for w in words if w in self.word_dict - ] - if not word_slot: - print "all words are not in dictionary: %s", line - continue - yield [word_slot] - - def predict(self, data_file): - """ - data_file: file name of input data. - """ - input = self.converter(self.get_data(data_file)) - output = self.network.forwardTest(input) - prob = output[0]["value"] - lab = np.argsort(-prob) - if self.label is None: - print("%s: predicting label is %d" % (data_file, lab[0][0])) - else: - print("%s: predicting label is %s" % - (data_file, self.label[lab[0][0]])) + for line in data: + words = line.strip().split() + word_slot = [ + self.word_dict[w] for w in words if w in self.word_dict + ] + if not word_slot: + print "all words are not in dictionary: %s", line + continue + yield [word_slot] + + def predict(self, batch_size): + + def batch_predict(batch_data): + input = self.converter(self.get_data(batch_data)) + output = self.network.forwardTest(input) + prob = output[0]["value"] + labs = np.argsort(-prob) + for idx, lab in enumerate(labs): + if self.label is None: + print("predicting label is %d" % (lab[0])) + else: + print("predicting label is %s" % + (self.label[lab[0]])) + batch = [] + for line in sys.stdin: + batch.append(line) + if len(batch) == batch_size: + batch_predict(batch) + batch=[] + if len(batch) > 0: + batch_predict(batch) def option_parser(): usage = "python predict.py -n config -w model_dir -d dictionary -i input_file " @@ -119,11 +126,13 @@ def option_parser(): default=None, help="dictionary file") parser.add_option( - "-i", - "--data", + "-c", + "--batch_size", + type="int", action="store", - dest="data", - help="data file to predict") + dest="batch_size", + default=1, + help="the batch size for prediction") parser.add_option( "-w", "--model", @@ -137,13 +146,13 @@ def option_parser(): def main(): options, args = option_parser() train_conf = options.train_conf - data = options.data + batch_size = options.batch_size dict_file = options.dict_file model_path = options.model_path label = options.label swig_paddle.initPaddle("--use_gpu=0") predict = SentimentPrediction(train_conf, dict_file, model_path, label) - predict.predict(data) + 
predict.predict(batch_size) if __name__ == '__main__': diff --git a/demo/sentiment/predict.sh b/demo/sentiment/predict.sh index 053f23e491..219d2d2025 100755 --- a/demo/sentiment/predict.sh +++ b/demo/sentiment/predict.sh @@ -19,9 +19,9 @@ set -e model=model_output/pass-00002/ config=trainer_config.py label=data/pre-imdb/labels.list -python predict.py \ - -n $config\ - -w $model \ - -b $label \ - -d ./data/pre-imdb/dict.txt \ - -i ./data/aclImdb/test/pos/10007_10.txt +cat ./data/aclImdb/test/pos/10007_10.txt | python predict.py \ + --tconf=$config\ + --model=$model \ + --label=$label \ + --dict=./data/pre-imdb/dict.txt \ + --batch_size=1 From 70e206e8f15eb62c8c0e36c4feb7b585a8bb6e50 Mon Sep 17 00:00:00 2001 From: dangqingqing Date: Thu, 8 Dec 2016 10:36:18 +0800 Subject: [PATCH 049/265] follow yanfei's comments --- doc_cn/concepts/use_concepts.rst | 44 ++++++++++++++------------------ doc_cn/faq/index.rst | 2 +- 2 files changed, 20 insertions(+), 26 deletions(-) diff --git a/doc_cn/concepts/use_concepts.rst b/doc_cn/concepts/use_concepts.rst index ea7a5c7d18..5122646d27 100644 --- a/doc_cn/concepts/use_concepts.rst +++ b/doc_cn/concepts/use_concepts.rst @@ -4,9 +4,7 @@ PaddlePaddle 基本使用概念 PaddlePaddle是一个深度学习框架,支持单机模式和多机模式。 -单节模式用命令 ``paddle train`` 可以启动一个trainer进程,一个单机训练作业只包括一个trainer进程,单机的所有设备使用,均在单机进程内调度完成。 - -如果数据规模比较大,希望加速训练,可以启动分布式作业。一个分布式作业里包括若干trainer进程和若干Parameter Server(或称pserver)进程。用命令 ``paddle pserver`` 可以启动 pserver 进程,pserver进程用于协调多个trainer进程之间的通信。 +单机模式用命令 ``paddle train`` 可以启动一个trainer进程,单机训练通常只包括一个trainer进程。如果数据规模比较大,希望加速训练,可以启动分布式作业。一个分布式作业里包括若干trainer进程和若干Parameter Server(或称pserver)进程。用命令 ``paddle pserver`` 可以启动 pserver 进程,pserver进程用于协调多个trainer进程之间的通信。 本文首先介绍trainer进程中的一些使用概念,然后介绍pserver进程中概念。 @@ -15,7 +13,7 @@ PaddlePaddle是一个深度学习框架,支持单机模式和多机模式。 系统框图 ======== -下图描述了用户使用框图,PaddlePaddle的trainer进程里内嵌了Python解释器,trainer进程可以利用这个解释器执行Python脚本,Python脚本里定义了模型配置、训练算法、以及数据读取函数。其中,数据读取程序往往定义在一个单独Python脚本文件里,被称为DataProvider,通常是一个Python函数。模型配置、训练算法通常定义在另一单独Python文件中, 称为训练配置文件。下面将分别介绍这两部分。 +下图描述了用户使用框图,PaddlePaddle的trainer进程里内嵌了Python解释器,trainer进程可以利用这个解释器执行Python脚本,Python脚本里定义了模型配置、训练算法、以及数据读取函数。其中,数据读取程序往往定义在一个单独Python脚本文件里,被称为数据提供器(DataProvider),通常是一个Python函数。模型配置、训练算法通常定义在另一单独Python文件中, 称为训练配置文件。下面将分别介绍这两部分。 .. 
graphviz:: @@ -34,8 +32,8 @@ PaddlePaddle是一个深度学习框架,支持单机模式和多机模式。 py -> data_provider [dir="back"]; } -DataProvider -============ +数据提供器 +========== DataProvider是PaddlePaddle系统的数据提供器,将用户的原始数据转换成系统可以识别的数据类型。每当系统需要新的数据训练时, trainer进程会调用DataProvider函数返回数据。当所有数据读取完一轮后,DataProvider返回空数据,通知系统一轮数据读取结束,并且系统每一轮训练开始时会重置DataProvider。需要注意的是,DataProvider是被系统调用,而不是新数据驱动系统,一些随机化噪声添加都应该在DataProvider中完成。 @@ -45,7 +43,7 @@ DataProvider是PaddlePaddle系统的数据提供器,将用户的原始数据 训练配置文件 ============ -训练配置文件主要包括数据传入接口定义(DataConfig)、优化算法(OptimizationConfig)、网络结构(ModelConfig)。 其中数据传入接口定义与DataProvider的关系是:DataProvider里定义数据读取函数,配置文件的DataConfig里指定DataProvider文件名字、生成数据函数接口,请不要混淆。 +训练配置文件主要包括数据源、优化算法、网络结构配置三部分。 其中数据源配置与DataProvider的关系是:DataProvider里定义数据读取函数,训练配置文件的数据源配置中指定DataProvider文件名字、生成数据函数接口,请不要混淆。 一个简单的训练配置文件为: @@ -54,26 +52,22 @@ DataProvider是PaddlePaddle系统的数据提供器,将用户的原始数据 文件开头 ``from paddle.trainer_config_helpers import *`` ,是因为PaddlePaddle配置文件与C++模块通信的最基础协议是protobuf,为了避免用户直接写复杂的protobuf string,我们为用户定以Python接口来配置网络,该Python代码可以生成protobuf包,这就是`trainer_config_helpers`_的作用。因此,在文件的开始,需要import这些函数。 这个包里面包含了模型配置需要的各个模块。 -需要注意的是,这个 ``paddle.trainer_config_helpers`` 包是标准的 Python 包,这意味着用户可以选择自己喜欢的 IDE 或者编辑器来编写配置文件,这个 Python 包注释文档比较完善,并且考虑了 IDE 的代码提示与类型注释。 +下面分别介绍数据源配置、优化算法配置、网络结构配置这三部分该概念。 -下面分别介绍DataConfig、OptimizationConfig、ModelConfig这三部分该概念。 - -DataConfig +数据源配置 ---------- -使用 `PyDataProvider`_ 的函数 ``define_py_data_sources2`` 配置数据源,后缀 2 是Paddle历史遗留问题,因为Paddle之前使用的PyDataProvider性能问题,重构了一个新的 `PyDataProvider`_ 。 - -``define_py_data_sources2`` 里通过train_list和test_list指定是训练文件列表和测试文件列表。 如果传入字符串的话,是指一个数据列表文件。这个数据列表文件中包含的是每一个训练或者测试文件的路径。如果传入一个list的话,则会默认生成一个list文件,再传入给train.list或者test.list。 +使用 `PyDataProvider`_ 的函数 ``define_py_data_sources2`` 配置数据源。``define_py_data_sources2`` 里通过train_list和test_list指定是训练文件列表和测试文件列表。 如果传入字符串的话,是指一个数据列表文件。这个数据列表文件中包含的是每一个训练或者测试文件的路径。如果传入一个list的话,则会默认生成一个list文件,再传入给train.list或者test.list。 ``module`` 和 ``obj`` 指定了DataProvider的文件名和返回数据的函数名。更详细的使用,请参考 `PyDataProvider`_ 。 -OptimizationConfig ------------------- +优化算法配置 +------------ -通过`settings`_ 接口设置神经网络所使用的训练参数和 `优化算法`_ ,包括学习率、batch_size、优化算法、正则方法等,具体的使用方法请参考 `settings`_ 文档。 +通过 `settings`_ 接口设置神经网络所使用的训练参数和 `优化算法`_ ,包括学习率、batch_size、优化算法、正则方法等,具体的使用方法请参考 `settings`_ 文档。 -ModelConfig ------------ +网络结构配置 +------------ 神经网络配置主要包括网络连接、激活函数、损失函数、评估器。 @@ -126,11 +120,11 @@ PaddlePaddle多机采用经典的 Parameter Server 架构对多个节点的 trai .. 
code-block:: bash - paddle pserver --port=5000 --num_gradient_servers=4 --nics='eth0' + paddle pserver --port=5000 --num_gradient_servers=4 --tcp_rdma='tcp' --nics='eth0' -* 指定 pserver 进程端口是 5000 。 -* 有四个训练进程(即 ``--gradient_servers=4`` ,PaddlePaddle同时将 trainer 称作 GradientServer 。因为其为负责提供Gradient) 。 -* 指定以太网类型为TCP网络。 +* ``--port=5000`` : 指定 pserver 进程端口是 5000 。 +* ``--gradient_servers=4`` : 有四个训练进程(PaddlePaddle 将 trainer 也称作 GradientServer ,因为其为负责提供Gradient) 。 +* ``--tcp_rdma='tcp' --nics=`eth0```: 指定以太网类型为TCP网络,指定网络接口名字为eth0。 启动之后 pserver 进程之后,需要启动 trainer 训练进程,在各个机器上运行如下命令\: @@ -140,8 +134,8 @@ PaddlePaddle多机采用经典的 Parameter Server 架构对多个节点的 trai 对于简单的多机协同训练使用上述方式即可。另外,pserver/train 通常在高级情况下,还需要设置下面两个参数\: -* --ports_num\: 一个 pserver 进程共绑定多少个端口用来做稠密更新。默认是1 -* --ports_num_for_sparse\: 一个pserver进程共绑定多少端口用来做稀疏更新,默认是0 +* --ports_num\: 一个 pserver 进程共绑定多少个端口用来做稠密更新,默认是1。 +* --ports_num_for_sparse\: 一个pserver进程共绑定多少端口用来做稀疏更新,默认是0。 使用手工指定端口数量,是因为Paddle的网络通信中,使用了 int32 作为消息长度,比较容易在大模型下溢出。所以,在 pserver 进程中可以启动多个子线程去接受 trainer 的数据,这样单个子线程的长度就不会溢出了。但是这个值不可以调的过大,因为增加这个值,对性能尤其是内存占用有一定的开销,另外稀疏更新的端口如果太大的话,很容易导致某一个参数服务器没有分配到任何参数。 diff --git a/doc_cn/faq/index.rst b/doc_cn/faq/index.rst index 2eba575c2e..2a202527d5 100644 --- a/doc_cn/faq/index.rst +++ b/doc_cn/faq/index.rst @@ -255,7 +255,7 @@ PaddlePaddle的参数使用名字 :code:`name` 作为参数的ID,相同名字 完整源码可参考 `seqToseq `_ 示例。 11. 如何指定GPU设备 ------------------ +------------------- 例如机器上有4块GPU,编号从0开始,指定使用2、3号GPU: From d87b2c11acf00a437925c531890f81c7f38d4eb8 Mon Sep 17 00:00:00 2001 From: xuwei06 Date: Wed, 7 Dec 2016 21:34:56 -0800 Subject: [PATCH 050/265] Fix bug in CMakeList.txt test_rest_hook.py => test_reset_hook.py Change-Id: I84909ade4a1ea2bec4311264626dd000cd6bc86c --- python/paddle/trainer_config_helpers/tests/CMakeLists.txt | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/python/paddle/trainer_config_helpers/tests/CMakeLists.txt b/python/paddle/trainer_config_helpers/tests/CMakeLists.txt index bff82f7505..d1a9843d32 100644 --- a/python/paddle/trainer_config_helpers/tests/CMakeLists.txt +++ b/python/paddle/trainer_config_helpers/tests/CMakeLists.txt @@ -6,7 +6,7 @@ add_test(NAME layers_test add_test(NAME test_reset_hook COMMAND ${PROJ_ROOT}/paddle/.set_python_path.sh -d ${PROJ_ROOT}/python/ - python ${PROJ_ROOT}/python/paddle/trainer_config_helpers/tests/test_rest_hook.py + python ${PROJ_ROOT}/python/paddle/trainer_config_helpers/tests/test_reset_hook.py WORKING_DIRECTORY ${PROJ_ROOT}/python/paddle) if (PROTOBUF_3) From dc577f0e79471ff422a097b880a2a07f8674e65a Mon Sep 17 00:00:00 2001 From: liaogang Date: Thu, 8 Dec 2016 14:36:51 +0800 Subject: [PATCH 051/265] Add links for terminology --- doc_cn/demo/quick_start/index.rst | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/doc_cn/demo/quick_start/index.rst b/doc_cn/demo/quick_start/index.rst index db73cb3f34..0536936dc4 100644 --- a/doc_cn/demo/quick_start/index.rst +++ b/doc_cn/demo/quick_start/index.rst @@ -242,7 +242,7 @@ embedding模型需要稍微改变提供数据的Python脚本,即 ``dataprovide :align: center :scale: 80% -时序模型,也称为RNN模型, 包括简单的RNN模型, GRU模型和LSTM模型等等。 +时序模型,也称为RNN模型, 包括简单的 `RNN模型 `_, `GRU模型 `_ 和 `LSTM模型 `_ 等等。 - GRU模型配置: @@ -269,7 +269,7 @@ embedding模型需要稍微改变提供数据的Python脚本,即 ``dataprovide ========= `优化算法 `_ 包括 -Momentum, RMSProp,AdaDelta,AdaGrad,ADAM,Adamax等,这里采用Adam优化方法,同时使用了L2正则和梯度截断。 +Momentum, RMSProp,AdaDelta,AdaGrad,ADAM,Adamax等,这里采用Adam优化方法,同时使用了L2正则(L2 Regularization)和梯度截断(Gradient Clipping)。 .. 
code-block:: python From 20e32ea043ae52262b2eee33f2dee648cddceba7 Mon Sep 17 00:00:00 2001 From: Yu Yang Date: Thu, 8 Dec 2016 16:04:56 +0800 Subject: [PATCH 052/265] Follow comments --- doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst b/doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst index 3f5100cbf1..018f93dbf5 100644 --- a/doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst +++ b/doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst @@ -141,13 +141,13 @@ * 在单层数据的基础上,\ :ref:`glossary_双层RNN`\ 数据随意加了一些隔断,例如将第一条数据转化为\ :code:`[[0, 2], [2, 5], [0, 1, 2]],[[1, 5], [4], [2, 3, 6, 1]]`\ 。 -* 需要注意的是Paddle目前只支持子序列数目一样的多输入\ :ref:`glossary_双层RNN`\ 。例如本里中的两个特征,均有三个子序列。每个子序列长度可以不一致,但是子序列的数目必须一样。 +* 需要注意的是PaddlePaddle目前只支持子序列数目一样的多输入\ :ref:`glossary_双层RNN`\ 。例如本例中的两个特征,均有三个子序列。每个子序列长度可以不一致,但是子序列的数目必须一样。 :ref:`glossary_trainer_config`\ 的模型配置 ------------------------------------------ -和示例2中的配置累死,示例3的配置使用了单层\ :ref:`glossary_RNN`\ 和\ :ref:`glossary_双层RNN`\ ,实现两个完全等价的全连接\ :ref:`glossary_RNN`\ 。 +和示例2中的配置类似,示例3的配置使用了单层\ :ref:`glossary_RNN`\ 和\ :ref:`glossary_双层RNN`\ ,实现两个完全等价的全连接\ :ref:`glossary_RNN`\ 。 * 单层\ :ref:`glossary_RNN`\ \: From bc5e0a93bf144f344a5b45aba9c96c968d08f435 Mon Sep 17 00:00:00 2001 From: Yu Yang Date: Thu, 8 Dec 2016 16:13:00 +0800 Subject: [PATCH 053/265] Remove ref tags --- doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst | 96 +++++++++---------- 1 file changed, 48 insertions(+), 48 deletions(-) diff --git a/doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst b/doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst index 018f93dbf5..9baa0b5780 100644 --- a/doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst +++ b/doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst @@ -4,17 +4,17 @@ 单双层RNN API对比介绍 ##################### -本文以PaddlePaddle的\ :ref:`glossary_双层RNN`\ 单元测试为示例,用多对效果完全相同的、分别使用单双层RNN作为网络配置的模型,来讲解如何使用\ :ref:`glossary_双层RNN`\ 。本文中所有的例子,都只是介绍\ :ref:`glossary_双层RNN`\ 的API接口,并不是使用\ :ref:`glossary_双层RNN`\ 解决实际的问题。如果想要了解\ :ref:`glossary_双层RNN`\ 在具体问题中的使用,请参考\ :ref:`algo_hrnn_demo`\ 。本文中示例所使用的单元测试文件是\ `test_RecurrentGradientMachine.cpp `_\ 。 +本文以PaddlePaddle的双层RNN单元测试为示例,用多对效果完全相同的、分别使用单双层RNN作为网络配置的模型,来讲解如何使用双层RNN。本文中所有的例子,都只是介绍双层RNN的API接口,并不是使用双层RNN解决实际的问题。如果想要了解双层RNN在具体问题中的使用,请参考\ :ref:`algo_hrnn_demo`\ 。本文中示例所使用的单元测试文件是\ `test_RecurrentGradientMachine.cpp `_\ 。 示例1:双层RNN,子序列间无Memory ================================ -在\ :ref:`glossary_双层RNN`\ 中的经典情况是将内层的每一个\ :ref:`glossary_sequence`\ 数据,分别进行序列操作;并且内层的序列操作之间独立无依赖,即不需要使用\ :ref:`glossary_Memory`\ 。 +在双层RNN中的经典情况是将内层的每一个时间序列数据,分别进行序列操作;并且内层的序列操作之间独立无依赖,即不需要使用Memory\ 。 -在本示例中,单层\ :ref:`glossary_RNN`\ 和\ :ref:`glossary_双层RNN`\ 的网络配置,都是将每一句分好词后的句子,使用LSTM作为encoder,压缩成一个向量。区别是\ :ref:`glossary_RNN`\ 使用两层序列模型,将多句话看成一个整体同时使用encoder压缩。二者语意上完全一致。这组语义相同的示例配置如下: +在本示例中,单层RNN和双层RNN的网络配置,都是将每一句分好词后的句子,使用LSTM作为encoder,压缩成一个向量。区别是RNN使用两层序列模型,将多句话看成一个整体同时使用encoder压缩。二者语意上完全一致。这组语义相同的示例配置如下: -* 单层\ :ref:`glossary_RNN`\: `sequence_layer_group.conf `_ -* :ref:`glossary_双层RNN`\: `sequence_nest_layer_group.conf `_ +* 单层RNN\: `sequence_layer_group.conf `_ +* 双层RNN\: `sequence_nest_layer_group.conf `_ 读取双层序列数据 @@ -22,7 +22,7 @@ 首先,本示例中使用的原始数据如下\: -- 本例中的原始数据一共有10个样本。每个样本由两部分组成,一个label(此处都为2)和一个已经分词后的句子。这个数据也被单层\ :ref:`glossary_RNN`\ 网络直接使用。 +- 本例中的原始数据一共有10个样本。每个样本由两部分组成,一个label(此处都为2)和一个已经分词后的句子。这个数据也被单层RNN网络直接使用。 .. literalinclude:: ../../../paddle/gserver/tests/Sequence/tour_train_wdseg :language: text @@ -33,17 +33,17 @@ .. 
literalinclude:: ../../../paddle/gserver/tests/Sequence/tour_train_wdseg.nest :language: text -其次,对于两种不同的输入数据类型,不同\ :ref:`glossary_DataProvider`\ 对比如下(`sequenceGen.py `_)\: +其次,对于两种不同的输入数据类型,不同DataProvider对比如下(`sequenceGen.py `_)\: .. literalinclude:: ../../../paddle/gserver/tests/sequenceGen.py :language: python :lines: 21-39 :linenos: -- 这是普通的单层\ :ref:`glossary_sequence`\ 的\ :ref:`glossary_DataProvider`\ 代码,其说明如下: +- 这是普通的单层时间序列的DataProvider代码,其说明如下: - * :ref:`glossary_DataProvider`\ 共返回两个数据,分别是words和label。即上述代码中的第19行。 - - words是原始数据中的每一句话,所对应的词表index数组。它是integer_value_sequence类型的,即整数数组。words即为这个数据中的单层\ :ref:`glossary_sequence`\ 。 + * DataProvider共返回两个数据,分别是words和label。即上述代码中的第19行。 + - words是原始数据中的每一句话,所对应的词表index数组。它是integer_value_sequence类型的,即整数数组。words即为这个数据中的单层时间序列。 - label是原始数据中对于每一句话的分类标签,它是integer_value类型的。 .. literalinclude:: ../../../paddle/gserver/tests/sequenceGen.py @@ -51,17 +51,17 @@ :lines: 42-71 :linenos: -- 对于同样的数据,双层\ :ref:`glossary_sequence`\ 的\ :ref:`glossary_DataProvider`\ 的代码。其说明如下: +- 对于同样的数据,双层时间序列的DataProvider的代码。其说明如下: - - :ref:`glossary_DataProvider`\ 共返回两组数据,分别是sentences和labels。即在双层序列的原始数据中,每一组内的所有句子和labels - - sentences是双层\ :ref:`glossary_sequence`\ 的数据。由于它内部包含了每组数据中的所有句子,且每个句子表示为对应的词表索引数组,因此它是integer_value_sub_sequence 类型的,即双层\ :ref:`glossary_sequence`\ 。 - - labels是每组内每个句子的标签,故而是一个单层\ :ref:`glossary_sequence`\ 。 + - DataProvider共返回两组数据,分别是sentences和labels。即在双层序列的原始数据中,每一组内的所有句子和labels + - sentences是双层时间序列的数据。由于它内部包含了每组数据中的所有句子,且每个句子表示为对应的词表索引数组,因此它是integer_value_sub_sequence 类型的,即双层时间序列。 + - labels是每组内每个句子的标签,故而是一个单层时间序列。 -:ref:`glossary_trainer_config`\ 的模型配置 +模型配置的模型配置 ------------------------------------------ -首先,我们看一下单层\ :ref:`glossary_RNN`\ 的配置。代码中9-15行(高亮部分)即为单层RNN序列的使用代码。这里使用了PaddlePaddle预定义好的\ :ref:`glossary_RNN`\ 处理函数。在这个函数中,\ :ref:`glossary_RNN`\ 对于每一个\ :ref:`glossary_timestep`\ 通过了一个LSTM网络。 +首先,我们看一下单层RNN的配置。代码中9-15行(高亮部分)即为单层RNN序列的使用代码。这里使用了PaddlePaddle预定义好的RNN处理函数。在这个函数中,RNN对于每一个时间步通过了一个LSTM网络。 .. 
literalinclude:: ../../../paddle/gserver/tests/sequence_layer_group.conf :language: python @@ -70,19 +70,19 @@ :emphasize-lines: 9-15 -其次,我们看一下语义相同的\ :ref:`glossary_双层RNN`\ 的网络配置\: +其次,我们看一下语义相同的双层RNN的网络配置\: -* PaddlePaddle中的许多layer并不在意输入是否是\ :ref:`glossary_sequence`\ ,例如\ :code:`embedding_layer`\ 。在这些layer中,所有的操作都是针对每一个\ :ref:`glossary_timestep`\ 来进行的。 +* PaddlePaddle中的许多layer并不在意输入是否是时间序列,例如\ :code:`embedding_layer`\ 。在这些layer中,所有的操作都是针对每一个时间步来进行的。 -* 在该配置的7-26行(高亮部分),将双层\ :ref:`glossary_sequence`\ 数据先变换成单层\ :ref:`glossary_sequence`\ 数据,再对每一个单层\ :ref:`glossary_sequence`\ 进行处理。 +* 在该配置的7-26行(高亮部分),将双层时间序列数据先变换成单层时间序列数据,再对每一个单层时间序列进行处理。 - * 使用\ :code:`recurrent_group`\ 这个函数进行变换,在变换时需要将输入序列传入。由于我们想要的变换是双层\ :ref:`glossary_sequence`\ => 单层\ :ref:`glossary_sequence`\ ,所以我们需要将输入数据标记成\ :code:`SubsequenceInput`\ 。 + * 使用\ :code:`recurrent_group`\ 这个函数进行变换,在变换时需要将输入序列传入。由于我们想要的变换是双层时间序列=> 单层时间序列,所以我们需要将输入数据标记成\ :code:`SubsequenceInput`\ 。 - * 在本例中,我们将原始数据的每一组,通过\ :code:`recurrent_group`\ 进行拆解,拆解成的每一句话再通过一个LSTM网络。这和单层\ :ref:`glossary_RNN`\ 的配置是等价的。 + * 在本例中,我们将原始数据的每一组,通过\ :code:`recurrent_group`\ 进行拆解,拆解成的每一句话再通过一个LSTM网络。这和单层RNN的配置是等价的。 -* 与单层\ :ref:`glossary_RNN`\ 的配置类似,我们只需要使用LSTM encode成的最后一个向量。所以对\ :code:`recurrent_group`\ 进行了\ :code:`last_seq`\ 操作。但和单层\ :ref:`glossary_RNN`\ 不同,我们是对每一个子序列取最后一个元素,因此\ :code:`agg_level=AggregateLevel.EACH_SEQUENCE`\ 。 +* 与单层RNN的配置类似,我们只需要使用LSTM encode成的最后一个向量。所以对\ :code:`recurrent_group`\ 进行了\ :code:`last_seq`\ 操作。但和单层RNN不同,我们是对每一个子序列取最后一个元素,因此\ :code:`agg_level=AggregateLevel.EACH_SEQUENCE`\ 。 -* 至此,\ :code:`lstm_last`\ 便和单层\ :ref:`glossary_RNN`\ 配置中的\ :code:`lstm_last`\ 具有相同的结果了。 +* 至此,\ :code:`lstm_last`\ 便和单层RNN配置中的\ :code:`lstm_last`\ 具有相同的结果了。 .. literalinclude:: ../../../paddle/gserver/tests/sequence_nest_layer_group.conf :language: python @@ -90,27 +90,27 @@ :linenos: :emphasize-lines: 7-26 -示例2::ref:`glossary_双层RNN`,子序列间有\ :ref:`glossary_Memory` -================================================================== +示例2:双层RNN,子序列间有Memory +================================ -本示例意图使用单层\ :ref:`glossary_RNN`\ 和\ :ref:`glossary_双层RNN`\ 实现两个完全等价的全连接\ :ref:`glossary_RNN`\ 。 +本示例意图使用单层RNN和双层RNN实现两个完全等价的全连接RNN。 -* 对于单层\ :ref:`glossary_RNN`\ ,输入数据为一个完整的\ :ref:`glossary_sequence`\ ,例如\ :code:`[4, 5, 2, 0, 9, 8, 1, 4]`\ 。 +* 对于单层RNN,输入数据为一个完整的时间序列,例如\ :code:`[4, 5, 2, 0, 9, 8, 1, 4]`\ 。 -* 对于\ :ref:`glossary_双层RNN`\ ,输入数据为在单层\ :ref:`glossary_RNN`\ 数据里面,任意将一些数据组合成双层\ :ref:`glossary_sequence`\ ,例如\ :code:`[ [4, 5, 2], [0, 9], [8, 1, 4]]`。 +* 对于双层RNN,输入数据为在单层RNN数据里面,任意将一些数据组合成双层时间序列,例如\ :code:`[ [4, 5, 2], [0, 9], [8, 1, 4]]`。 -:ref:`glossary_trainer_config`\ 的模型配置 ------------------------------------------- +模型配置的模型配置 +------------------ 我们选取单双层序列配置中的不同部分,来对比分析两者语义相同的原因。 -- 单层\ :ref:`glossary_rnn`\ :过了一个很简单的recurrent_group。每一个时间步,当前的输入y和上一个时间步的输出rnn_state做了一个全链接。 +- 单层RNN:过了一个很简单的recurrent_group。每一个时间步,当前的输入y和上一个时间步的输出rnn_state做了一个全链接。 .. literalinclude:: ../../../paddle/gserver/tests/sequence_rnn.conf :language: python :lines: 36-48 -- \ :ref:`glossary_双层RNN`\ ,外层memory是一个元素: +- 双层RNN,外层memory是一个元素: - 内层inner_step的recurrent_group和单层序列的几乎一样。除了boot_layer=outer_mem,表示将外层的outer_mem作为内层memory的初始状态。外层outer_step中,outer_mem是一个子句的最后一个向量,即整个双层group是将前一个子句的最后一个向量,作为下一个子句memory的初始状态。 - 从输入数据上看,单双层序列的句子是一样的,只是双层序列将其又做了子序列划分。因此双层序列的配置中,必须将前一个子句的最后一个元素,作为boot_layer传给下一个子句的memory,才能保证和单层序列的配置中“每个时间步都用了上一个时间步的输出结果”一致。 @@ -120,7 +120,7 @@ :lines: 39-66 .. 
warning:: - PaddlePaddle目前只支持在每个时间步中,Memory的\ :ref:`glossary_sequence`\ 长度一致的情况。 + PaddlePaddle目前只支持在每个时间步中,Memory的时间序列长度一致的情况。 示例3:双层RNN,输入不等长 ========================== @@ -131,32 +131,32 @@ -**输入不等长** 是指recurrent_group的多个输入序列,在每个\ :ref:`glossary_timestep`\ 的子序列长度可以不相等。但序列输出时,需要指定与某一个输入的序列信息是一致的。使用\ :red:`targetInlink`\ 可以指定哪一个输入和输出序列信息一致,默认指定第一个输入。 +**输入不等长** 是指recurrent_group的多个输入序列,在每个时间步的子序列长度可以不相等。但序列输出时,需要指定与某一个输入的序列信息是一致的。使用\ :red:`targetInlink`\ 可以指定哪一个输入和输出序列信息一致,默认指定第一个输入。 示例3的配置分别为\ `单层不等长RNN `_\ 和\ `双层不等长RNN `_\ 。 -示例3对于单层\ :ref:`glossary_RNN`\ 和\ :ref:`glossary_双层RNN`\ 数据完全相同。 +示例3对于单层RNN和双层RNN数据完全相同。 -* 对于单层\ :ref:`glossary_RNN`\ 的数据一共有两个样本,他们分别是\ :code:`[1, 2, 4, 5, 2], [5, 4, 1, 3, 1]`\ 和\ :code:`[0, 2, 2, 5, 0, 1, 2], [1, 5, 4, 2, 3, 6, 1]`\ 。对于每一个单层\ :ref:`glossary_RNN`\ 的数据,均有两组特征。 +* 对于单层RNN的数据一共有两个样本,他们分别是\ :code:`[1, 2, 4, 5, 2], [5, 4, 1, 3, 1]`\ 和\ :code:`[0, 2, 2, 5, 0, 1, 2], [1, 5, 4, 2, 3, 6, 1]`\ 。对于每一个单层RNN的数据,均有两组特征。 -* 在单层数据的基础上,\ :ref:`glossary_双层RNN`\ 数据随意加了一些隔断,例如将第一条数据转化为\ :code:`[[0, 2], [2, 5], [0, 1, 2]],[[1, 5], [4], [2, 3, 6, 1]]`\ 。 +* 在单层数据的基础上,双层RNN数据随意加了一些隔断,例如将第一条数据转化为\ :code:`[[0, 2], [2, 5], [0, 1, 2]],[[1, 5], [4], [2, 3, 6, 1]]`\ 。 -* 需要注意的是PaddlePaddle目前只支持子序列数目一样的多输入\ :ref:`glossary_双层RNN`\ 。例如本例中的两个特征,均有三个子序列。每个子序列长度可以不一致,但是子序列的数目必须一样。 +* 需要注意的是PaddlePaddle目前只支持子序列数目一样的多输入双层RNN。例如本例中的两个特征,均有三个子序列。每个子序列长度可以不一致,但是子序列的数目必须一样。 -:ref:`glossary_trainer_config`\ 的模型配置 ------------------------------------------- +模型配置 +-------- -和示例2中的配置类似,示例3的配置使用了单层\ :ref:`glossary_RNN`\ 和\ :ref:`glossary_双层RNN`\ ,实现两个完全等价的全连接\ :ref:`glossary_RNN`\ 。 +和示例2中的配置类似,示例3的配置使用了单层RNN和双层RNN,实现两个完全等价的全连接RNN。 -* 单层\ :ref:`glossary_RNN`\ \: +* 单层RNN\: .. literalinclude:: ../../../paddle/gserver/tests/sequence_rnn_multi_unequalength_inputs.py :language: python :lines: 42-59 :linenos: -* :ref:`glossary_双层RNN`\ \: +* 双层RNN\ \: .. literalinclude:: ../../../paddle/gserver/tests/sequence_nest_rnn_multi_unequalength_inputs.py :language: python @@ -179,11 +179,11 @@ TBD Memory ------ -Memory是PaddlePaddle实现 :ref:`glossary_RNN` 时候使用的一个概念。 :ref:`glossary_RNN` 即时间递归神经网络,通常要求时间步之间具有一些依赖性,即当前时间步下的神经网络依赖前一个时间步神经网络中某一个神经元输出。如下图所示。 +Memory是PaddlePaddle实现RNN时候使用的一个概念。RNN即时间递归神经网络,通常要求时间步之间具有一些依赖性,即当前时间步下的神经网络依赖前一个时间步神经网络中某一个神经元输出。如下图所示。 .. graphviz:: glossary_rnn.dot -上图中虚线的连接,即是跨越时间步的网络连接。PaddlePaddle在实现 :ref:`glossary_RNN` 的时候,将这种跨越时间步的连接用一个特殊的神经网络单元实现。这个神经网络单元就叫Memory。Memory可以缓存上一个时刻某一个神经元的输出,然后在下一个时间步输入给另一个神经元。使用Memory的 :ref:`glossary_RNN` 实现便如下图所示。 +上图中虚线的连接,即是跨越时间步的网络连接。PaddlePaddle在实现RNN的时候,将这种跨越时间步的连接用一个特殊的神经网络单元实现。这个神经网络单元就叫Memory。Memory可以缓存上一个时刻某一个神经元的输出,然后在下一个时间步输入给另一个神经元。使用Memory的RNN实现便如下图所示。 .. graphviz:: glossary_rnn_with_memory.dot @@ -194,7 +194,7 @@ Memory是PaddlePaddle实现 :ref:`glossary_RNN` 时候使用的一个概念。 : 时间步 ------ -参考 :ref:`glossary_sequence` 。 +参考时间序列。 .. _glossary_sequence: @@ -217,14 +217,14 @@ RNN RNN 在PaddlePaddle的文档中,一般表示 :code:`Recurrent neural network`,即时间递归神经网络。详细介绍可以参考 `维基百科页面 Recurrent neural network `_ 或者 `中文维基百科页面 `_ 中关于时间递归神经网络的介绍。 -RNN 一般在PaddlePaddle中,指对于一个 :ref:`glossary_sequence` 输入数据,每一个时间步之间的神经网络具有一定的相关性。例如,某一个神经元的一个输入为上一个时间步网络中某一个神经元的输出。或者,从每一个时间步来看,神经网络的网络结构中具有有向环结构。 +RNN 一般在PaddlePaddle中,指对于一个时间序列输入数据,每一个时间步之间的神经网络具有一定的相关性。例如,某一个神经元的一个输入为上一个时间步网络中某一个神经元的输出。或者,从每一个时间步来看,神经网络的网络结构中具有有向环结构。 .. 
_glossary_双层RNN: 双层RNN ------- -双层RNN顾名思义,即 :ref:`glossary_RNN` 之间有一次嵌套关系。输入数据整体上是一个时间序列,而对于每一个内层特征数据而言,也是一个时间序列。即二维数组,或者数组的数组这个概念。 而双层RNN是可以处理这种输入数据的网络结构。 +双层RNN顾名思义,即RNN之间有一次嵌套关系。输入数据整体上是一个时间序列,而对于每一个内层特征数据而言,也是一个时间序列。即二维数组,或者数组的数组这个概念。 而双层RNN是可以处理这种输入数据的网络结构。 例如,对于段落的文本分类,即将一段话进行分类。我们将一段话看成句子的数组,每个句子又是单词的数组。这便是一种双层RNN的输入数据。而将这个段落的每一句话用lstm编码成一个向量,再对每一句话的编码向量用lstm编码成一个段落的向量。再对这个段落向量进行分类,即为这个双层RNN的网络结构。 From 23d57335f1e76dbc7e63647c092e7ee521844932 Mon Sep 17 00:00:00 2001 From: hanchao Date: Thu, 8 Dec 2016 18:14:25 +0800 Subject: [PATCH 054/265] update cr for pull #734. --- .../tests/configs/run_tests.sh | 10 ++-- .../test_config_parser_for_non_file_config.py | 47 +++++++++++-------- 2 files changed, 35 insertions(+), 22 deletions(-) diff --git a/python/paddle/trainer_config_helpers/tests/configs/run_tests.sh b/python/paddle/trainer_config_helpers/tests/configs/run_tests.sh index ed2ac6ed18..72dd55bb67 100755 --- a/python/paddle/trainer_config_helpers/tests/configs/run_tests.sh +++ b/python/paddle/trainer_config_helpers/tests/configs/run_tests.sh @@ -16,14 +16,16 @@ if [ -z $1 ]; then do base_protostr=$protostr/$file new_protostr=$protostr/$file.unittest - diff $base_protostr $new_protostr -u && + diff $base_protostr $new_protostr -u diff $protostr/$file $protostr/$file.non_file_config.unittest -u done else for file in ${configs[*]} do if ! $1 $protostr/$file.protostr $protostr/$file.protostr.unittest; then - diff $protostr/$file.protostr $protostr/$file.protostr.unittest -u && + diff $protostr/$file.protostr $protostr/$file.protostr.unittest -u + fi + if ! $1 $protostr/$file.protostr $protostr/$file.protostr.non_file_config.unittest; then diff $protostr/$file.protostr $protostr/$file.protostr.non_file_config.unittest -u fi done @@ -31,7 +33,9 @@ else for file in ${whole_configs[*]} do if ! $1 $protostr/$file.protostr $protostr/$file.protostr.unittest --whole; then - diff $protostr/$file.protostr $protostr/$file.protostr.unittest -u && + diff $protostr/$file.protostr $protostr/$file.protostr.unittest -u + fi + if ! $1 $protostr/$file.protostr $protostr/$file.protostr.non_file_config.unittest --whole; then diff $protostr/$file.protostr $protostr/$file.protostr.non_file_config.unittest -u fi done diff --git a/python/paddle/trainer_config_helpers/tests/configs/test_config_parser_for_non_file_config.py b/python/paddle/trainer_config_helpers/tests/configs/test_config_parser_for_non_file_config.py index 71ee0499d1..87a607acf4 100644 --- a/python/paddle/trainer_config_helpers/tests/configs/test_config_parser_for_non_file_config.py +++ b/python/paddle/trainer_config_helpers/tests/configs/test_config_parser_for_non_file_config.py @@ -1,5 +1,5 @@ #!/usr/bin/env python -# Copyright (c) 2016 Baidu, Inc. All Rights Reserved +# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -14,27 +14,36 @@ # limitations under the License. 
import sys +import re import getopt -whole = False -opts, args = getopt.getopt(sys.argv[1:], "", ["whole"]) -for op, value in opts: - if op == "--whole": - whole = True +def main(print_whole_config, globals, locals): + ''' + this test will all test_config.py + ''' + cmdstr = """from paddle.trainer.config_parser import parse_config\n""" + importstr = "" + functionstr = "" -cmdstr = """ -from paddle.trainer.config_parser import * -from paddle.trainer_config_helpers import * -def configs():\n""" + for line in sys.stdin: + if re.match("^import", line) or re.match("^from.*import", line): + importstr = importstr + line + else: + functionstr = functionstr + " " + line -for line in sys.stdin: - if "import" in line and "from" in line: - continue - cmdstr = cmdstr + " " + line + cmdstr = cmdstr + importstr + """def configs():\n""" + functionstr + #cmdstr = cmdstr + """def configs():\n""" + importstr + functionstr + if print_whole_config: + cmdstr = cmdstr + """print parse_config(configs, "")""" + else: + cmdstr = cmdstr + """print parse_config(configs, "").model_config""" -if whole: - cmdstr = cmdstr + """print parse_config(configs, "")""" -else: - cmdstr = cmdstr + """print parse_config(configs, "").model_config""" + exec(cmdstr, globals, locals) -exec(cmdstr) +if __name__ == '__main__': + whole = False + opts, args = getopt.getopt(sys.argv[1:], "", ["whole"]) + for op, value in opts: + if op == "--whole": + whole = True + main(whole, globals(), locals()) From 957794cd29974acc3d36f201d2754a6e8c096371 Mon Sep 17 00:00:00 2001 From: hanchao Date: Thu, 8 Dec 2016 19:00:10 +0800 Subject: [PATCH 055/265] fix bad ident --- python/paddle/trainer_config_helpers/tests/configs/run_tests.sh | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/python/paddle/trainer_config_helpers/tests/configs/run_tests.sh b/python/paddle/trainer_config_helpers/tests/configs/run_tests.sh index 72dd55bb67..e984ee7062 100755 --- a/python/paddle/trainer_config_helpers/tests/configs/run_tests.sh +++ b/python/paddle/trainer_config_helpers/tests/configs/run_tests.sh @@ -31,7 +31,7 @@ else done for file in ${whole_configs[*]} -do + do if ! 
$1 $protostr/$file.protostr $protostr/$file.protostr.unittest --whole; then diff $protostr/$file.protostr $protostr/$file.protostr.unittest -u fi From 3adfdf0c583772f5655c31843b802d31b4328c83 Mon Sep 17 00:00:00 2001 From: FoREacH Date: Thu, 8 Dec 2016 13:43:37 +0200 Subject: [PATCH 056/265] Fix nvcc stray character Issue #760 --- CMakeLists.txt | 2 +- paddle/utils/Version.cpp | 6 +++++- 2 files changed, 6 insertions(+), 2 deletions(-) diff --git a/CMakeLists.txt b/CMakeLists.txt index 0c40e3c6f7..0a44e56719 100644 --- a/CMakeLists.txt +++ b/CMakeLists.txt @@ -69,7 +69,7 @@ include(coveralls) find_package(Git REQUIRED) # version.cmake will get the current PADDLE_VERSION include(version) -add_definitions(-DPADDLE_VERSION=\"${PADDLE_VERSION}\") +add_definitions(-DPADDLE_VERSION=${PADDLE_VERSION}) if(NOT WITH_GPU) add_definitions(-DPADDLE_ONLY_CPU) diff --git a/paddle/utils/Version.cpp b/paddle/utils/Version.cpp index e706983918..ed4ae6115f 100644 --- a/paddle/utils/Version.cpp +++ b/paddle/utils/Version.cpp @@ -33,7 +33,11 @@ void printVersion(std::ostream& os) { #ifndef PADDLE_VERSION #define PADDLE_VERSION "unknown" #endif - os << "paddle version: " << PADDLE_VERSION << std::endl +// converts macro to string https://gcc.gnu.org/onlinedocs/cpp/Stringification.html +#define xstr(s) str(s) +#define str(s) #s + + os << "paddle version: " << str(PADDLE_VERSION) << std::endl << std::boolalpha << "\t" << "withGpu: " << version::isWithGpu() << std::endl << "\t" From 4690c2e65a91cdf549a344a4b4d4ee284cd6a811 Mon Sep 17 00:00:00 2001 From: livc Date: Thu, 8 Dec 2016 20:47:12 +0800 Subject: [PATCH 057/265] add Chinese version --- .../semantic_role_labeling_cn.md | 201 ++++++++++++++++++ 1 file changed, 201 insertions(+) create mode 100644 doc/tutorials/semantic_role_labeling/semantic_role_labeling_cn.md diff --git a/doc/tutorials/semantic_role_labeling/semantic_role_labeling_cn.md b/doc/tutorials/semantic_role_labeling/semantic_role_labeling_cn.md new file mode 100644 index 0000000000..f3c855a9fd --- /dev/null +++ b/doc/tutorials/semantic_role_labeling/semantic_role_labeling_cn.md @@ -0,0 +1,201 @@ +# 语义角色标注教程 # + +语义角色标注(Semantic role labeling, SRL)是浅语义解析的一种形式,其目的是在给定的输入句子中发现每个谓词的谓词参数结构。 SRL作为很多自然语言处理任务中的中间步骤是很有用的,如信息提取、文档自动分类和问答。 实例如下 [1]: + + [ A0 他 ] [ AM-MOD 将 ][ AM-NEG 不会 ] [ V 接受] [ A1 任何东西 ] 从 [A2 那些他写的东西中 ]。 + +- V: 动词 +- A0: 接受者 +- A1: 接受的东西 +- A2: 从……接受 +- A3: 属性 +- AM-MOD: 情态动词 +- AM-NEG: 否定 + +给定动词“接受”,句子中的大部分将会扮演某些语义角色。这里,标签方案来自 Penn Proposition Bank。 + +到目前为止,大多数成功的SRL系统是建立在某种形式的解析结果之上的,其中在语法结构上使用了预先定义的特征模板。 本教程将介绍使用深度双向长短期记忆(DB-LSTM)模型[2]的端到端系统来解决SRL任务,这在很大程度上优于先前的最先进的系统。 这个系统将SRL任务视为序列标记问题。 + +## 数据描述 +相关论文[2]采用 CoNLL-2005&2012 共享任务中设置的数据进行训练和测试。根据数据许可证,演示采用 CoNLL-2005 的测试数据集,可以在网站上找到。 + +用户只需执行以下命令就可以下载并处理原始数据: + +```bash +cd data +./get_data.sh +``` +`data `目录会出现如下几个新的文件: +```bash +conll05st-release:the test data set of CoNll-2005 shared task +test.wsj.words:the Wall Street Journal data sentences +test.wsj.props: the propositional arguments +feature: the extracted features from data set +``` + +## 训练 +### DB-LSTM +请参阅情绪分析的演示以了解有关长期短期记忆单元的更多信息。 + +与在 Sentiment Analysis 演示中使用的 Bidirectional-LSTM 不同,DB-LSTM 采用另一种方法来堆叠LSTM层。首先,标准LSTM以正向处理该序列。该 LSTM 层的输入和输出作为下一个 LSTM 层的输入,并被反向处理。这两个标准 LSTM 层组成一对 LSTM。然后我们堆叠一对对的 LSTM 层后得到深度 LSTM 模型。 + +下图展示了时间扩展的2层 DB-LSTM 网络。 +
+![pic](./network_arch.png) +
+ +### 特征 +两个输入特性在这个管道中起着至关重要的作用:predicate(pred)和argument(arguments)。 还采用了两个其他特征:谓词上下文(ctx-p)和区域标记(mr)。 因为单个谓词不能精确地描述谓词信息,特别是当相同的词在句子中出现多于一次时。 使用谓词上下文,可以在很大程度上消除歧义。类似地,如果它位于谓词上下文区域中,则使用区域标记 mr = 1 来表示参数位置,反之则 mr = 0。这四个简单的特征是我们的SRL系统所需要的。上下文大小设置为1的一个样本的特征如下[2]所示: +
+![pic](./feature.jpg) +
+ +在这个示例中,相应的标记句子是: + +[ A1 A record date ] has [ AM-NEG n't ] been [ V set ] . + +在演示中, 我们采用上面的特征模板, 包括: `argument`, `predicate`, `ctx-p (p=-1,0,1)`, `mark` 并使用 `B/I/O` 方案来标记每个参数。这些特征和标签存储在 `feature` 文件中, 用`\t`分割。 + +### 数据提供 + +`dataprovider.py` 是一个包装数据的 Python 文件。 函数 `hook()` 定义了网络的数据槽。六个特征和标签都是索引槽。 +``` +def hook(settings, word_dict, label_dict, **kwargs): + settings.word_dict = word_dict + settings.label_dict = label_dict + #all inputs are integral and sequential type + settings.slots = [ + integer_value_sequence(len(word_dict)), + integer_value_sequence(len(predicate_dict)), + integer_value_sequence(len(word_dict)), + integer_value_sequence(len(word_dict)), + integer_value_sequence(len(word_dict)), + integer_value_sequence(len(word_dict)), + integer_value_sequence(len(word_dict)), + integer_value_sequence(2), + integer_value_sequence(len(label_dict))] +``` +相应的数据迭代器如下: +``` +@provider(init_hook=hook, should_shuffle=True, calc_batch_size=get_batch_size, + can_over_batch_size=False, cache=CacheType.CACHE_PASS_IN_MEM) +def process(settings, file_name): + with open(file_name, 'r') as fdata: + for line in fdata: + sentence, predicate, ctx_n2, ctx_n1, ctx_0, ctx_p1, ctx_p2, mark, label = \ + line.strip().split('\t') + + words = sentence.split() + sen_len = len(words) + word_slot = [settings.word_dict.get(w, UNK_IDX) for w in words] + + predicate_slot = [settings.predicate_dict.get(predicate)] * sen_len + ctx_n2_slot = [settings.word_dict.get(ctx_n2, UNK_IDX)] * sen_len + ctx_n1_slot = [settings.word_dict.get(ctx_n1, UNK_IDX)] * sen_len + ctx_0_slot = [settings.word_dict.get(ctx_0, UNK_IDX)] * sen_len + ctx_p1_slot = [settings.word_dict.get(ctx_p1, UNK_IDX)] * sen_len + ctx_p2_slot = [settings.word_dict.get(ctx_p2, UNK_IDX)] * sen_len + + marks = mark.split() + mark_slot = [int(w) for w in marks] + + label_list = label.split() + label_slot = [settings.label_dict.get(w) for w in label_list] + yield word_slot, predicate_slot, ctx_n2_slot, ctx_n1_slot, \ + ctx_0_slot, ctx_p1_slot, ctx_p2_slot, mark_slot, label_slot +``` +函数 `process` 产出有8个特征和标签的9个表。 + +### 神经网络配置 + +`db_lstm.py` 是在训练过程中加载字典并定义数据提供程序模块和网络架构的神经网络配置文件。 + +九个 `data_layer` 从数据提供程序加载实例。八个特征分别转换为嵌入,并由`mixed_layer`混合。 深度双向LSTM层提取softmax层的特征。目标函数是标签的交叉熵。 + +### 训练 +训练的脚本是 `train.sh`,用户只需执行: +```bash + ./train.sh +``` +`train.sh` 中的内容: +``` +paddle train \ + --config=./db_lstm.py \ + --use_gpu=0 \ + --log_period=5000 \ + --trainer_count=1 \ + --show_parameter_stats_period=5000 \ + --save_dir=./output \ + --num_passes=10000 \ + --average_test_period=10000000 \ + --init_model_path=./data \ + --load_missing_parameter_strategy=rand \ + --test_all_data_in_one_period=1 \ +2>&1 | tee 'train.log' +``` + +- \--config=./db_lstm.py : 网络配置文件 +- \--use_gpu=false: 使用 CPU 训练(如果已安装 PaddlePaddle GPU版本并想使用 GPU 训练可以设置为true,目前 crf_layer 不支持 GPU) +- \--log_period=500: 每20批(batch)输出日志 +- \--trainer_count=1: 设置线程数(或 GPU 数) +- \--show_parameter_stats_period=5000: 每100批显示参数统计 +- \--save_dir=./output: 模型输出路径 +- \--num_passes=10000: 设置通过数,一次通过意味着PaddlePaddle训练数据集中的所有样本一次 +- \--average_test_period=10000000: 每个 average_test_period 批次对平均参数进行测试 +- \--init_model_path=./data: 参数初始化路径 +- \--load_missing_parameter_strategy=rand: 随机初始不存在的参数 +- \--test_all_data_in_one_period=1: 在一个周期内测试所有数据 + + +训练后,模型将保存在目录`output`中。 我们的训练曲线如下: +
+![pic](./curve.jpg) +
+ +### 测试 +测试脚本是 `test.sh`, 执行: +```bash + ./test.sh +``` +`tesh.sh` 的主要部分: +``` +paddle train \ + --config=./db_lstm.py \ + --model_list=$model_list \ + --job=test \ + --config_args=is_test=1 \ +``` + + - \--config=./db_lstm.py: 网络配置文件 + - \--model_list=$model_list.list: 模型列表文件 + - \--job=test: 指示测试任务 + - \--config_args=is_test=1: 指示测试任务的标记 + - \--test_all_data_in_one_period=1: 在一个周期内测试所有数据 + + +### 预测 +预测脚本是 `predict.sh`,用户只需执行: +```bash + ./predict.sh + +``` +在`predict.sh`中,用户应该提供网络配置文件,模型路径,标签文件,字典文件,特征文件。 +``` +python predict.py + -c $config_file \ + -w $best_model_path \ + -l $label_file \ + -p $predicate_dict_file \ + -d $dict_file \ + -i $input_file \ + -o $output_file +``` + +`predict.py` 是主要的可执行python脚本,其中包括函数:加载模型,加载数据,数据预测。网络模型将输出标签的概率分布。 在演示中,我们使用最大概率的标签作为结果。用户还可以根据概率分布矩阵实现集束搜索或维特比解码。 + +预测后,结果保存在 `predict.res` 中。 + +## 引用 +[1] Martha Palmer, Dan Gildea, and Paul Kingsbury. The Proposition Bank: An Annotated Corpus of Semantic Roles , Computational Linguistics, 31(1), 2005. + +[2] Zhou, Jie, and Wei Xu. "End-to-end learning of semantic role labeling using recurrent neural networks." Proceedings of the Annual Meeting of the Association for Computational Linguistics. 2015. From 38eddca9c26230ab0e6997f2c561cb786c23662b Mon Sep 17 00:00:00 2001 From: FoREacH Date: Thu, 8 Dec 2016 15:12:40 +0200 Subject: [PATCH 058/265] Fixed incorrect macro --- paddle/utils/Version.cpp | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/paddle/utils/Version.cpp b/paddle/utils/Version.cpp index ed4ae6115f..84060a1005 100644 --- a/paddle/utils/Version.cpp +++ b/paddle/utils/Version.cpp @@ -37,7 +37,7 @@ void printVersion(std::ostream& os) { #define xstr(s) str(s) #define str(s) #s - os << "paddle version: " << str(PADDLE_VERSION) << std::endl + os << "paddle version: " << xstr(PADDLE_VERSION) << std::endl << std::boolalpha << "\t" << "withGpu: " << version::isWithGpu() << std::endl << "\t" From e9549cbb78b01a570ded56b54cd1f5a0e5f44f12 Mon Sep 17 00:00:00 2001 From: Yi Wang Date: Thu, 8 Dec 2016 15:37:00 -0800 Subject: [PATCH 059/265] Change "Baidu, Inc" into "PaddlePaddle Authors" --- LICENSE | 4 ++-- demo/gan/data/download_cifar.sh | 2 +- demo/gan/gan_conf.py | 2 +- demo/gan/gan_conf_image.py | 2 +- demo/gan/gan_trainer.py | 2 +- demo/image_classification/data/download_cifar.sh | 2 +- demo/image_classification/data/process_cifar.py | 2 +- demo/image_classification/image_provider.py | 2 +- demo/image_classification/image_util.py | 2 +- demo/image_classification/predict.sh | 2 +- demo/image_classification/prediction.py | 2 +- demo/image_classification/preprocess.py | 2 +- demo/image_classification/preprocess.sh | 2 +- demo/image_classification/train.sh | 2 +- demo/image_classification/vgg_16_cifar.py | 2 +- demo/introduction/dataprovider.py | 2 +- demo/introduction/evaluate_model.py | 2 +- demo/introduction/train.sh | 2 +- demo/introduction/trainer_config.py | 2 +- demo/mnist/data/generate_list.py | 2 +- demo/mnist/train.sh | 2 +- demo/mnist/vgg_16_mnist.py | 2 +- demo/model_zoo/embedding/extract_para.py | 2 +- demo/model_zoo/embedding/paraconvert.py | 2 +- demo/model_zoo/embedding/pre_DictAndModel.sh | 2 +- demo/model_zoo/resnet/classify.py | 2 +- demo/model_zoo/resnet/example/__init__.py | 2 +- demo/model_zoo/resnet/example/image_list_provider.py | 2 +- demo/model_zoo/resnet/extract_fea_c++.sh | 2 +- demo/model_zoo/resnet/extract_fea_py.sh | 2 +- demo/model_zoo/resnet/get_model.sh | 2 +- demo/model_zoo/resnet/load_feature.py | 2 +- demo/model_zoo/resnet/net_diagram.sh | 2 +- 
demo/model_zoo/resnet/predict.sh | 2 +- demo/model_zoo/resnet/resnet.py | 2 +- demo/quick_start/api_train.py | 2 +- demo/quick_start/api_train.sh | 2 +- demo/quick_start/data/get_data.sh | 2 +- demo/quick_start/data/proc_from_raw_data/get_data.sh | 2 +- demo/quick_start/data/proc_from_raw_data/preprocess.py | 2 +- demo/quick_start/dataprovider_bow.py | 2 +- demo/quick_start/dataprovider_emb.py | 2 +- demo/quick_start/predict.sh | 2 +- demo/quick_start/train.sh | 2 +- demo/quick_start/trainer_config.bidi-lstm.py | 2 +- demo/quick_start/trainer_config.cnn.py | 2 +- demo/quick_start/trainer_config.db-lstm.py | 2 +- demo/quick_start/trainer_config.emb.py | 2 +- demo/quick_start/trainer_config.lr.py | 2 +- demo/quick_start/trainer_config.lstm.py | 2 +- demo/quick_start/trainer_config.resnet-lstm.py | 2 +- demo/recommendation/common_utils.py | 2 +- demo/recommendation/data/config_generator.py | 2 +- demo/recommendation/data/meta_generator.py | 2 +- demo/recommendation/data/ml_data.sh | 2 +- demo/recommendation/data/split.py | 2 +- demo/recommendation/dataprovider.py | 2 +- demo/recommendation/evaluate.sh | 2 +- demo/recommendation/prediction.py | 2 +- demo/recommendation/preprocess.sh | 2 +- demo/recommendation/run.sh | 2 +- demo/recommendation/trainer_config.py | 2 +- demo/semantic_role_labeling/data/extract_dict_feature.py | 2 +- demo/semantic_role_labeling/data/extract_pairs.py | 2 +- demo/semantic_role_labeling/data/get_data.sh | 2 +- demo/semantic_role_labeling/dataprovider.py | 2 +- demo/semantic_role_labeling/db_lstm.py | 2 +- demo/semantic_role_labeling/predict.py | 2 +- demo/semantic_role_labeling/predict.sh | 2 +- demo/semantic_role_labeling/test.sh | 2 +- demo/semantic_role_labeling/train.sh | 2 +- demo/sentiment/data/get_imdb.sh | 2 +- demo/sentiment/dataprovider.py | 2 +- demo/sentiment/predict.py | 2 +- demo/sentiment/predict.sh | 2 +- demo/sentiment/preprocess.py | 2 +- demo/sentiment/preprocess.sh | 2 +- demo/sentiment/sentiment_net.py | 2 +- demo/sentiment/test.sh | 2 +- demo/sentiment/train.sh | 2 +- demo/sentiment/trainer_config.py | 2 +- demo/seqToseq/data/paraphrase_data.sh | 2 +- demo/seqToseq/data/paraphrase_model.sh | 2 +- demo/seqToseq/data/wmt14_data.sh | 2 +- demo/seqToseq/data/wmt14_model.sh | 2 +- demo/seqToseq/dataprovider.py | 2 +- demo/seqToseq/paraphrase/train.conf | 2 +- demo/seqToseq/paraphrase/train.sh | 2 +- demo/seqToseq/preprocess.py | 2 +- demo/seqToseq/seqToseq_net.py | 2 +- demo/seqToseq/translation/eval_bleu.sh | 2 +- demo/seqToseq/translation/gen.conf | 2 +- demo/seqToseq/translation/gen.sh | 2 +- demo/seqToseq/translation/moses_bleu.sh | 2 +- demo/seqToseq/translation/train.conf | 2 +- demo/seqToseq/translation/train.sh | 2 +- demo/sequence_tagging/data/get_data.sh | 2 +- demo/sequence_tagging/dataprovider.py | 2 +- demo/sequence_tagging/linear_crf.py | 2 +- demo/sequence_tagging/rnn_crf.py | 2 +- doc/api/predict/predict_sample.py | 2 +- doc_cn/cluster/k8s/start_paddle.py | 2 +- paddle/.common_test_util.sh | 2 +- paddle/.set_port.sh | 2 +- paddle/.set_python_path.sh | 2 +- paddle/api/Arguments.cpp | 2 +- paddle/api/ConfigParser.cpp | 2 +- paddle/api/GradientMachine.cpp | 2 +- paddle/api/Internal.h | 2 +- paddle/api/Matrix.cpp | 2 +- paddle/api/PaddleAPI.h | 2 +- paddle/api/PaddleAPIPrivate.h | 2 +- paddle/api/Parameter.cpp | 2 +- paddle/api/ParameterOptimizer.cpp | 2 +- paddle/api/SequenceGenerator.cpp | 2 +- paddle/api/Trainer.cpp | 2 +- paddle/api/Util.cpp | 2 +- paddle/api/Vector.cpp | 2 +- paddle/api/__init__.py | 2 +- 
paddle/api/paddle_ld_flags.py | 2 +- paddle/api/test/run_tests.sh | 2 +- paddle/api/test/testArguments.py | 2 +- paddle/api/test/testGradientMachine.py | 2 +- paddle/api/test/testMatrix.py | 2 +- paddle/api/test/testTrain.py | 2 +- paddle/api/test/testTrainer.py | 2 +- paddle/api/test/testVector.py | 2 +- paddle/api/test/util.py | 2 +- paddle/cuda/include/hl_activation_functions.h | 2 +- paddle/cuda/include/hl_aggregate.h | 2 +- paddle/cuda/include/hl_avx_functions.h | 2 +- paddle/cuda/include/hl_base.h | 2 +- paddle/cuda/include/hl_batch_transpose.h | 2 +- paddle/cuda/include/hl_cnn.h | 2 +- paddle/cuda/include/hl_cpu_gru.cuh | 2 +- paddle/cuda/include/hl_cpu_lstm.cuh | 2 +- paddle/cuda/include/hl_cpu_matrix_kernel.cuh | 2 +- paddle/cuda/include/hl_cuda.h | 2 +- paddle/cuda/include/hl_cuda.ph | 2 +- paddle/cuda/include/hl_cuda_cublas.h | 2 +- paddle/cuda/include/hl_cuda_cudnn.h | 2 +- paddle/cuda/include/hl_cuda_cudnn.ph | 2 +- paddle/cuda/include/hl_device_functions.cuh | 2 +- paddle/cuda/include/hl_dso_loader.h | 2 +- paddle/cuda/include/hl_functions.h | 2 +- paddle/cuda/include/hl_gpu.h | 2 +- paddle/cuda/include/hl_gpu_functions.cuh | 2 +- paddle/cuda/include/hl_gpu_gru.cuh | 2 +- paddle/cuda/include/hl_gpu_lstm.cuh | 2 +- paddle/cuda/include/hl_gpu_matrix_kernel.cuh | 2 +- paddle/cuda/include/hl_gru_ops.cuh | 2 +- paddle/cuda/include/hl_lstm.h | 2 +- paddle/cuda/include/hl_lstm_ops.cuh | 2 +- paddle/cuda/include/hl_matrix.h | 2 +- paddle/cuda/include/hl_matrix_apply.cuh | 2 +- paddle/cuda/include/hl_matrix_base.cuh | 2 +- paddle/cuda/include/hl_matrix_base_sse.cuh | 2 +- paddle/cuda/include/hl_matrix_ops.cuh | 2 +- paddle/cuda/include/hl_matrix_type.cuh | 2 +- paddle/cuda/include/hl_perturbation_util.cuh | 2 +- paddle/cuda/include/hl_recurrent_apply.cuh | 2 +- paddle/cuda/include/hl_sequence.h | 2 +- paddle/cuda/include/hl_sparse.h | 2 +- paddle/cuda/include/hl_sparse.ph | 2 +- paddle/cuda/include/hl_sse_matrix_kernel.cuh | 2 +- paddle/cuda/include/hl_table_apply.h | 2 +- paddle/cuda/include/hl_tensor_ops.h | 2 +- paddle/cuda/include/hl_thread.ph | 2 +- paddle/cuda/include/hl_time.h | 2 +- paddle/cuda/include/hl_top_k.h | 2 +- paddle/cuda/include/hl_warpctc_wrap.h | 2 +- paddle/cuda/include/stub/hl_aggregate_stub.h | 2 +- paddle/cuda/include/stub/hl_cnn_stub.h | 2 +- paddle/cuda/include/stub/hl_cuda_cublas_stub.h | 2 +- paddle/cuda/include/stub/hl_cuda_cudnn_stub.h | 2 +- paddle/cuda/include/stub/hl_cuda_stub.h | 2 +- paddle/cuda/include/stub/hl_lstm_stub.h | 2 +- paddle/cuda/include/stub/hl_matrix_stub.h | 2 +- paddle/cuda/include/stub/hl_sequence_stub.h | 2 +- paddle/cuda/include/stub/hl_sparse_stub.h | 2 +- paddle/cuda/src/hl_avx_functions.cc | 2 +- paddle/cuda/src/hl_batch_transpose.cu | 2 +- paddle/cuda/src/hl_cpu_functions.cc | 2 +- paddle/cuda/src/hl_cuda_aggregate.cu | 2 +- paddle/cuda/src/hl_cuda_cnn.cu | 2 +- paddle/cuda/src/hl_cuda_cublas.cc | 2 +- paddle/cuda/src/hl_cuda_cudnn.cc | 2 +- paddle/cuda/src/hl_cuda_device.cc | 2 +- paddle/cuda/src/hl_cuda_lstm.cu | 2 +- paddle/cuda/src/hl_cuda_matrix.cu | 2 +- paddle/cuda/src/hl_cuda_sequence.cu | 2 +- paddle/cuda/src/hl_cuda_sparse.cu | 2 +- paddle/cuda/src/hl_cuda_sparse.cuh | 2 +- paddle/cuda/src/hl_cudart_wrap.cc | 2 +- paddle/cuda/src/hl_dso_loader.cc | 2 +- paddle/cuda/src/hl_math.cc | 2 +- paddle/cuda/src/hl_perturbation_util.cu | 2 +- paddle/cuda/src/hl_table_apply.cu | 2 +- paddle/cuda/src/hl_time.cc | 2 +- paddle/cuda/src/hl_top_k.cu | 2 +- paddle/cuda/src/hl_warpctc_wrap.cc | 2 +- 
paddle/gserver/activations/ActivationFunction.cpp | 2 +- paddle/gserver/activations/ActivationFunction.h | 2 +- paddle/gserver/dataproviders/DataProvider.cpp | 2 +- paddle/gserver/dataproviders/DataProvider.h | 2 +- paddle/gserver/dataproviders/DataProviderGroup.h | 2 +- paddle/gserver/dataproviders/MultiDataProvider.cpp | 2 +- paddle/gserver/dataproviders/MultiDataProvider.h | 2 +- paddle/gserver/dataproviders/ProtoDataProvider.cpp | 2 +- paddle/gserver/dataproviders/ProtoDataProvider.h | 2 +- paddle/gserver/dataproviders/ProtoReader.h | 2 +- paddle/gserver/dataproviders/PyDataProvider.cpp | 2 +- paddle/gserver/dataproviders/PyDataProvider.h | 2 +- paddle/gserver/dataproviders/PyDataProvider2.cpp | 2 +- paddle/gserver/evaluators/CTCErrorEvaluator.cpp | 2 +- paddle/gserver/evaluators/ChunkEvaluator.cpp | 2 +- paddle/gserver/evaluators/Evaluator.cpp | 2 +- paddle/gserver/evaluators/Evaluator.h | 2 +- paddle/gserver/gradientmachines/GradientMachine.cpp | 2 +- paddle/gserver/gradientmachines/GradientMachine.h | 2 +- paddle/gserver/gradientmachines/GradientMachineMode.cpp | 2 +- paddle/gserver/gradientmachines/GradientMachineMode.h | 2 +- paddle/gserver/gradientmachines/MultiGradientMachine.cpp | 2 +- paddle/gserver/gradientmachines/MultiGradientMachine.h | 2 +- paddle/gserver/gradientmachines/MultiNetwork.cpp | 2 +- paddle/gserver/gradientmachines/MultiNetwork.h | 2 +- paddle/gserver/gradientmachines/NeuralNetwork.cpp | 2 +- paddle/gserver/gradientmachines/NeuralNetwork.h | 2 +- paddle/gserver/gradientmachines/ParallelNeuralNetwork.cpp | 2 +- paddle/gserver/gradientmachines/ParallelNeuralNetwork.h | 2 +- paddle/gserver/gradientmachines/RecurrentGradientMachine.cpp | 2 +- paddle/gserver/gradientmachines/RecurrentGradientMachine.h | 2 +- paddle/gserver/layers/AddtoLayer.cpp | 2 +- paddle/gserver/layers/AddtoLayer.h | 2 +- paddle/gserver/layers/AgentLayer.cpp | 2 +- paddle/gserver/layers/AgentLayer.h | 2 +- paddle/gserver/layers/AverageLayer.cpp | 2 +- paddle/gserver/layers/AverageLayer.h | 2 +- paddle/gserver/layers/BatchNormBaseLayer.cpp | 2 +- paddle/gserver/layers/BatchNormBaseLayer.h | 2 +- paddle/gserver/layers/BatchNormalizationLayer.cpp | 2 +- paddle/gserver/layers/BatchNormalizationLayer.h | 2 +- paddle/gserver/layers/BilinearInterpLayer.cpp | 2 +- paddle/gserver/layers/BilinearInterpLayer.h | 2 +- paddle/gserver/layers/BlockExpandLayer.cpp | 2 +- paddle/gserver/layers/BlockExpandLayer.h | 2 +- paddle/gserver/layers/CRFDecodingLayer.cpp | 2 +- paddle/gserver/layers/CRFDecodingLayer.h | 2 +- paddle/gserver/layers/CRFLayer.cpp | 2 +- paddle/gserver/layers/CRFLayer.h | 2 +- paddle/gserver/layers/CTCLayer.cpp | 2 +- paddle/gserver/layers/CTCLayer.h | 2 +- paddle/gserver/layers/ConcatenateLayer.cpp | 2 +- paddle/gserver/layers/ContextProjection.cpp | 2 +- paddle/gserver/layers/ContextProjection.h | 2 +- paddle/gserver/layers/ConvBaseLayer.cpp | 2 +- paddle/gserver/layers/ConvBaseLayer.h | 2 +- paddle/gserver/layers/ConvOperator.cpp | 2 +- paddle/gserver/layers/ConvProjection.cpp | 2 +- paddle/gserver/layers/ConvProjection.h | 2 +- paddle/gserver/layers/ConvShiftLayer.cpp | 2 +- paddle/gserver/layers/ConvexCombinationLayer.cpp | 2 +- paddle/gserver/layers/CosSimLayer.cpp | 2 +- paddle/gserver/layers/CosSimLayer.h | 2 +- paddle/gserver/layers/CosSimVecMatLayer.cpp | 2 +- paddle/gserver/layers/CostLayer.cpp | 2 +- paddle/gserver/layers/CostLayer.h | 2 +- paddle/gserver/layers/CudnnBatchNormLayer.cpp | 2 +- paddle/gserver/layers/CudnnBatchNormLayer.h | 2 +- 
paddle/gserver/layers/CudnnConvLayer.cpp | 2 +- paddle/gserver/layers/CudnnConvLayer.h | 2 +- paddle/gserver/layers/CudnnPoolLayer.cpp | 2 +- paddle/gserver/layers/CudnnPoolLayer.h | 2 +- paddle/gserver/layers/DataLayer.cpp | 2 +- paddle/gserver/layers/DataLayer.h | 2 +- paddle/gserver/layers/DataNormLayer.cpp | 2 +- paddle/gserver/layers/DataNormLayer.h | 2 +- paddle/gserver/layers/DotMulOperator.cpp | 2 +- paddle/gserver/layers/DotMulProjection.cpp | 2 +- paddle/gserver/layers/EosIdCheckLayer.cpp | 2 +- paddle/gserver/layers/ExpandConvBaseLayer.cpp | 2 +- paddle/gserver/layers/ExpandConvBaseLayer.h | 2 +- paddle/gserver/layers/ExpandConvLayer.cpp | 2 +- paddle/gserver/layers/ExpandConvLayer.h | 2 +- paddle/gserver/layers/ExpandConvTransLayer.cpp | 2 +- paddle/gserver/layers/ExpandConvTransLayer.h | 2 +- paddle/gserver/layers/ExpandLayer.cpp | 2 +- paddle/gserver/layers/ExpandLayer.h | 2 +- paddle/gserver/layers/FeatureMapExpandLayer.cpp | 2 +- paddle/gserver/layers/FullMatrixProjection.cpp | 2 +- paddle/gserver/layers/FullMatrixProjection.h | 2 +- paddle/gserver/layers/FullyConnectedLayer.cpp | 2 +- paddle/gserver/layers/FullyConnectedLayer.h | 2 +- paddle/gserver/layers/GatedRecurrentLayer.cpp | 2 +- paddle/gserver/layers/GatedRecurrentLayer.h | 2 +- paddle/gserver/layers/GetOutputLayer.cpp | 2 +- paddle/gserver/layers/GruCompute.cpp | 2 +- paddle/gserver/layers/GruCompute.cu | 2 +- paddle/gserver/layers/GruCompute.h | 2 +- paddle/gserver/layers/GruStepLayer.cpp | 2 +- paddle/gserver/layers/HierarchicalSigmoidLayer.cpp | 2 +- paddle/gserver/layers/HierarchicalSigmoidLayer.h | 2 +- paddle/gserver/layers/IdentityProjection.cpp | 2 +- paddle/gserver/layers/InterpolationLayer.cpp | 2 +- paddle/gserver/layers/Layer.cpp | 2 +- paddle/gserver/layers/Layer.h | 2 +- paddle/gserver/layers/LinearChainCRF.cpp | 2 +- paddle/gserver/layers/LinearChainCRF.h | 2 +- paddle/gserver/layers/LinearChainCTC.cpp | 2 +- paddle/gserver/layers/LinearChainCTC.h | 2 +- paddle/gserver/layers/LstmCompute.cpp | 2 +- paddle/gserver/layers/LstmCompute.cu | 2 +- paddle/gserver/layers/LstmCompute.h | 2 +- paddle/gserver/layers/LstmLayer.cpp | 2 +- paddle/gserver/layers/LstmLayer.h | 2 +- paddle/gserver/layers/LstmStepLayer.cpp | 2 +- paddle/gserver/layers/MDLstmLayer.cpp | 2 +- paddle/gserver/layers/MaxIdLayer.cpp | 2 +- paddle/gserver/layers/MaxLayer.cpp | 2 +- paddle/gserver/layers/MaxLayer.h | 2 +- paddle/gserver/layers/MaxOutLayer.cpp | 2 +- paddle/gserver/layers/MaxOutLayer.h | 2 +- paddle/gserver/layers/MixedLayer.cpp | 2 +- paddle/gserver/layers/MixedLayer.h | 2 +- paddle/gserver/layers/MultinomialSampler.cpp | 2 +- paddle/gserver/layers/MultinomialSampler.h | 2 +- paddle/gserver/layers/MultiplexLayer.cpp | 2 +- paddle/gserver/layers/NCELayer.cpp | 2 +- paddle/gserver/layers/NormLayer.cpp | 2 +- paddle/gserver/layers/NormLayer.h | 2 +- paddle/gserver/layers/NormProjectionLayer.cpp | 2 +- paddle/gserver/layers/NormProjectionLayer.h | 2 +- paddle/gserver/layers/Operator.cpp | 2 +- paddle/gserver/layers/Operator.h | 2 +- paddle/gserver/layers/OuterProdLayer.cpp | 2 +- paddle/gserver/layers/ParameterReluLayer.cpp | 2 +- paddle/gserver/layers/ParameterReluLayer.h | 2 +- paddle/gserver/layers/PoolLayer.cpp | 2 +- paddle/gserver/layers/PoolLayer.h | 2 +- paddle/gserver/layers/PoolProjection.cpp | 2 +- paddle/gserver/layers/PoolProjection.h | 2 +- paddle/gserver/layers/PoolProjectionLayer.cpp | 2 +- paddle/gserver/layers/PoolProjectionLayer.h | 2 +- paddle/gserver/layers/PowerLayer.cpp | 2 +- 
paddle/gserver/layers/PrintLayer.cpp | 2 +- paddle/gserver/layers/Projection.cpp | 2 +- paddle/gserver/layers/Projection.h | 2 +- paddle/gserver/layers/RecurrentLayer.cpp | 2 +- paddle/gserver/layers/RecurrentLayerGroup.cpp | 2 +- paddle/gserver/layers/ResizeLayer.cpp | 2 +- paddle/gserver/layers/SamplingIdLayer.cpp | 2 +- paddle/gserver/layers/ScalingLayer.cpp | 2 +- paddle/gserver/layers/ScalingProjection.cpp | 2 +- paddle/gserver/layers/SelectiveFullyConnectedLayer.cpp | 2 +- paddle/gserver/layers/SelectiveFullyConnectedLayer.h | 2 +- paddle/gserver/layers/SequenceConcatLayer.cpp | 2 +- paddle/gserver/layers/SequenceLastInstanceLayer.cpp | 2 +- paddle/gserver/layers/SequencePoolLayer.cpp | 2 +- paddle/gserver/layers/SequencePoolLayer.h | 2 +- paddle/gserver/layers/SequenceReshapeLayer.cpp | 2 +- paddle/gserver/layers/SequenceToBatch.cpp | 2 +- paddle/gserver/layers/SequenceToBatch.h | 2 +- paddle/gserver/layers/SlopeInterceptLayer.cpp | 2 +- paddle/gserver/layers/SpatialPyramidPoolLayer.cpp | 2 +- paddle/gserver/layers/SpatialPyramidPoolLayer.h | 2 +- paddle/gserver/layers/SubSequenceLayer.cpp | 2 +- paddle/gserver/layers/SumToOneNormLayer.cpp | 2 +- paddle/gserver/layers/TableProjection.cpp | 2 +- paddle/gserver/layers/TableProjection.h | 2 +- paddle/gserver/layers/TensorLayer.cpp | 2 +- paddle/gserver/layers/TensorLayer.h | 2 +- paddle/gserver/layers/TransLayer.cpp | 2 +- paddle/gserver/layers/TransLayer.h | 2 +- paddle/gserver/layers/TransposedFullMatrixProjection.cpp | 2 +- paddle/gserver/layers/ValidationLayer.cpp | 2 +- paddle/gserver/layers/ValidationLayer.h | 2 +- paddle/gserver/layers/WarpCTCLayer.cpp | 2 +- paddle/gserver/layers/WarpCTCLayer.h | 2 +- paddle/gserver/tests/LayerGradUtil.cpp | 2 +- paddle/gserver/tests/LayerGradUtil.h | 2 +- paddle/gserver/tests/TestUtil.cpp | 2 +- paddle/gserver/tests/TestUtil.h | 2 +- paddle/gserver/tests/__init__.py | 2 +- paddle/gserver/tests/concat_dotmul_a.conf | 2 +- paddle/gserver/tests/concat_dotmul_b.conf | 2 +- paddle/gserver/tests/concat_fullmatrix_a.conf | 2 +- paddle/gserver/tests/concat_fullmatrix_b.conf | 2 +- paddle/gserver/tests/concat_table_a.conf | 2 +- paddle/gserver/tests/concat_table_b.conf | 2 +- paddle/gserver/tests/img_conv_a.conf | 2 +- paddle/gserver/tests/img_conv_b.conf | 2 +- paddle/gserver/tests/img_conv_c.conf | 2 +- paddle/gserver/tests/img_pool_a.conf | 2 +- paddle/gserver/tests/img_pool_b.conf | 2 +- paddle/gserver/tests/pyDataProvider.py | 2 +- paddle/gserver/tests/pyDataProvider/trainer.conf | 2 +- paddle/gserver/tests/rnn_data_provider.py | 2 +- paddle/gserver/tests/sequenceGen.py | 2 +- paddle/gserver/tests/sequence_layer_group.conf | 2 +- paddle/gserver/tests/sequence_nest_layer_group.conf | 2 +- paddle/gserver/tests/sequence_nest_rnn.conf | 2 +- paddle/gserver/tests/sequence_nest_rnn_multi_input.conf | 2 +- .../tests/sequence_nest_rnn_multi_unequalength_inputs.py | 2 +- paddle/gserver/tests/sequence_rnn.conf | 2 +- paddle/gserver/tests/sequence_rnn_multi_input.conf | 2 +- .../gserver/tests/sequence_rnn_multi_unequalength_inputs.py | 2 +- paddle/gserver/tests/test_ActivationGrad.cpp | 2 +- paddle/gserver/tests/test_BatchNorm.cpp | 2 +- paddle/gserver/tests/test_ConvTrans.cpp | 2 +- paddle/gserver/tests/test_ConvUnify.cpp | 2 +- paddle/gserver/tests/test_Evaluator.cpp | 2 +- paddle/gserver/tests/test_LayerGrad.cpp | 2 +- paddle/gserver/tests/test_LinearChainCRF.cpp | 2 +- paddle/gserver/tests/test_MultinomialSampler.cpp | 2 +- paddle/gserver/tests/test_NetworkCompare.cpp | 2 +- 
paddle/gserver/tests/test_ProtoDataProvider.cpp | 2 +- paddle/gserver/tests/test_PyDataProvider.cpp | 2 +- paddle/gserver/tests/test_PyDataProvider2.cpp | 2 +- paddle/gserver/tests/test_PyDataProvider2.py | 2 +- paddle/gserver/tests/test_RecurrentGradientMachine.cpp | 2 +- paddle/gserver/tests/test_RecurrentLayer.cpp | 2 +- paddle/gserver/tests/test_SelectiveFCLayer.cpp | 2 +- paddle/gserver/tests/test_WarpCTCLayer.cpp | 2 +- paddle/math/Allocator.h | 2 +- paddle/math/BaseMatrix.cu | 2 +- paddle/math/BaseMatrix.h | 2 +- paddle/math/CpuSparseMatrix.cpp | 2 +- paddle/math/CpuSparseMatrix.h | 2 +- paddle/math/ExecViaCpu.h | 2 +- paddle/math/MathFunctions.cpp | 2 +- paddle/math/MathFunctions.h | 2 +- paddle/math/MathUtils.cpp | 2 +- paddle/math/MathUtils.h | 2 +- paddle/math/Matrix.cpp | 2 +- paddle/math/Matrix.h | 2 +- paddle/math/MatrixBitCode.cpp | 2 +- paddle/math/MemoryHandle.cpp | 2 +- paddle/math/MemoryHandle.h | 2 +- paddle/math/PoolAllocator.cpp | 2 +- paddle/math/PoolAllocator.h | 2 +- paddle/math/SIMDFunctions.cpp | 2 +- paddle/math/SIMDFunctions.h | 2 +- paddle/math/SparseMatrix.cpp | 2 +- paddle/math/SparseMatrix.h | 2 +- paddle/math/SparseRowMatrix.cpp | 2 +- paddle/math/SparseRowMatrix.h | 2 +- paddle/math/Storage.cpp | 2 +- paddle/math/Storage.h | 2 +- paddle/math/TensorApply.h | 2 +- paddle/math/TensorAssign.h | 2 +- paddle/math/TensorEvaluate.h | 2 +- paddle/math/TensorExpression.h | 2 +- paddle/math/TrainingAlgorithmOp.cu | 2 +- paddle/math/TrainingAlgorithmOp.h | 2 +- paddle/math/Vector.cpp | 2 +- paddle/math/Vector.h | 2 +- paddle/math/tests/OriginalOptimizerApi.h | 2 +- paddle/math/tests/PerfUtils.h | 2 +- paddle/math/tests/TensorCheck.h | 2 +- paddle/math/tests/TestUtils.h | 2 +- paddle/math/tests/test_Allocator.cpp | 2 +- paddle/math/tests/test_BaseMatrix.cpp | 2 +- paddle/math/tests/test_CpuGpuVector.cpp | 2 +- paddle/math/tests/test_ExecViaCpu.cpp | 2 +- paddle/math/tests/test_FPException.cpp | 2 +- paddle/math/tests/test_GpuProfiler.cpp | 2 +- paddle/math/tests/test_Matrix.cpp | 2 +- paddle/math/tests/test_SIMDFunctions.cpp | 2 +- paddle/math/tests/test_SparseMatrix.cpp | 2 +- paddle/math/tests/test_Tensor.cu | 2 +- paddle/math/tests/test_TrainingAlgorithm.cpp | 2 +- paddle/math/tests/test_batchTranspose.cpp | 2 +- paddle/math/tests/test_lazyAssign.cu | 2 +- paddle/math/tests/test_matrixCompare.cpp | 2 +- paddle/math/tests/test_matrixUtil.h | 2 +- paddle/math/tests/test_perturbation.cpp | 2 +- paddle/math/tests/test_sparseMatrixCompare.cpp | 2 +- paddle/parameter/Argument.cpp | 2 +- paddle/parameter/Argument.h | 2 +- paddle/parameter/AverageOptimizer.cpp | 2 +- paddle/parameter/AverageOptimizer.h | 2 +- paddle/parameter/FirstOrderOptimizer.cpp | 2 +- paddle/parameter/FirstOrderOptimizer.h | 2 +- paddle/parameter/LearningRateScheduler.cpp | 2 +- paddle/parameter/LearningRateScheduler.h | 2 +- paddle/parameter/OptimizerFunctions.cpp | 2 +- paddle/parameter/OptimizerFunctions.h | 2 +- paddle/parameter/OptimizerWithRegularizer.cpp | 2 +- paddle/parameter/OptimizerWithRegularizer.h | 2 +- paddle/parameter/ParallelParameter.cpp | 2 +- paddle/parameter/ParallelParameter.h | 2 +- paddle/parameter/Parameter.cpp | 2 +- paddle/parameter/Parameter.h | 2 +- paddle/parameter/ParameterOptimizer.cpp | 2 +- paddle/parameter/ParameterOptimizer.h | 2 +- paddle/parameter/ParameterUpdateFunctions.cpp | 2 +- paddle/parameter/ParameterUpdateFunctions.h | 2 +- paddle/parameter/ParameterUpdaterBase.cpp | 2 +- paddle/parameter/ParameterUpdaterBase.h | 2 +- 
paddle/parameter/ParameterUpdaterHook.cpp | 2 +- paddle/parameter/ParameterUpdaterHook.h | 2 +- paddle/parameter/Regularizer.cpp | 2 +- paddle/parameter/Regularizer.h | 2 +- paddle/parameter/Weight.cpp | 2 +- paddle/parameter/Weight.h | 2 +- paddle/parameter/tests/test_common.cpp | 2 +- paddle/pserver/BaseClient.cpp | 2 +- paddle/pserver/BaseClient.h | 2 +- paddle/pserver/LightNetwork.cpp | 2 +- paddle/pserver/LightNetwork.h | 2 +- paddle/pserver/ParameterClient2.cpp | 2 +- paddle/pserver/ParameterClient2.h | 2 +- paddle/pserver/ParameterServer2.cpp | 2 +- paddle/pserver/ParameterServer2.h | 2 +- paddle/pserver/ParameterServer2Main.cpp | 2 +- paddle/pserver/ProtoServer.cpp | 2 +- paddle/pserver/ProtoServer.h | 2 +- paddle/pserver/RDMANetwork.h | 2 +- paddle/pserver/SocketChannel.cpp | 2 +- paddle/pserver/SocketChannel.h | 2 +- paddle/pserver/SparseParameterDistribution.cpp | 2 +- paddle/pserver/SparseParameterDistribution.h | 2 +- paddle/pserver/test/SocketTest.cpp | 2 +- paddle/pserver/test/test_ParameterServer2.cpp | 2 +- paddle/pserver/test/test_ProtoServer.cpp | 2 +- paddle/pserver/test/test_ProtoServer.sh | 2 +- paddle/py_paddle/__init__.py | 2 +- paddle/py_paddle/dataprovider_converter.py | 2 +- paddle/py_paddle/util.py | 2 +- paddle/scripts/cluster_train/conf.py | 2 +- paddle/scripts/cluster_train/paddle.py | 2 +- paddle/setup.py.in | 2 +- paddle/trainer/MergeModel.cpp | 2 +- paddle/trainer/ParamUtil.cpp | 2 +- paddle/trainer/ParamUtil.h | 2 +- paddle/trainer/ParameterUpdater.cpp | 2 +- paddle/trainer/ParameterUpdater.h | 2 +- paddle/trainer/RemoteParameterUpdater.cpp | 2 +- paddle/trainer/RemoteParameterUpdater.h | 2 +- paddle/trainer/Tester.cpp | 2 +- paddle/trainer/Tester.h | 2 +- paddle/trainer/TesterConfig.h | 2 +- paddle/trainer/ThreadParameterUpdater.cpp | 2 +- paddle/trainer/ThreadParameterUpdater.h | 2 +- paddle/trainer/Trainer.cpp | 2 +- paddle/trainer/Trainer.h | 2 +- paddle/trainer/TrainerBenchmark.cpp | 2 +- paddle/trainer/TrainerConfigHelper.cpp | 2 +- paddle/trainer/TrainerConfigHelper.h | 2 +- paddle/trainer/TrainerInternal.cpp | 2 +- paddle/trainer/TrainerInternal.h | 2 +- paddle/trainer/TrainerInternalConfig.cpp | 2 +- paddle/trainer/TrainerInternalConfig.h | 2 +- paddle/trainer/TrainerMain.cpp | 2 +- paddle/trainer/tests/__init__.py | 2 +- paddle/trainer/tests/chunking.conf | 2 +- paddle/trainer/tests/config_parser_test.py | 2 +- paddle/trainer/tests/gen_proto_data.py | 2 +- paddle/trainer/tests/sample_trainer_config.conf | 2 +- paddle/trainer/tests/sample_trainer_config_hsigmoid.conf | 2 +- paddle/trainer/tests/sample_trainer_config_opt_a.conf | 2 +- paddle/trainer/tests/sample_trainer_config_opt_b.conf | 2 +- paddle/trainer/tests/sample_trainer_config_parallel.conf | 2 +- paddle/trainer/tests/sample_trainer_config_qb_rnn.conf | 2 +- paddle/trainer/tests/sample_trainer_config_rnn.conf | 2 +- paddle/trainer/tests/sample_trainer_nest_rnn_gen.conf | 2 +- paddle/trainer/tests/sample_trainer_rnn_gen.conf | 2 +- paddle/trainer/tests/testPyDataWrapper.py | 2 +- paddle/trainer/tests/test_Compare.cpp | 2 +- paddle/trainer/tests/test_CompareSparse.cpp | 2 +- paddle/trainer/tests/test_CompareTwoNets.cpp | 2 +- paddle/trainer/tests/test_CompareTwoOpts.cpp | 2 +- paddle/trainer/tests/test_Prediction.cpp | 2 +- paddle/trainer/tests/test_PyDataProviderWrapper.cpp | 2 +- paddle/trainer/tests/test_Trainer.cpp | 2 +- paddle/trainer/tests/test_TrainerOnePass.cpp | 2 +- paddle/trainer/tests/test_config.conf | 2 +- paddle/trainer/tests/test_recurrent_machine_generation.cpp | 2 +- 
paddle/utils/BarrierStat.cpp | 2 +- paddle/utils/BarrierStat.h | 2 +- paddle/utils/ClassRegistrar.h | 2 +- paddle/utils/CommandLineParser.cpp | 2 +- paddle/utils/CommandLineParser.h | 2 +- paddle/utils/CompilerMacros.h | 2 +- paddle/utils/CustomStackTrace.cpp | 2 +- paddle/utils/CustomStackTrace.h | 2 +- paddle/utils/DisableCopy.h | 2 +- paddle/utils/Excepts.cpp | 2 +- paddle/utils/Excepts.h | 2 +- paddle/utils/Flags.cpp | 2 +- paddle/utils/Flags.h | 2 +- paddle/utils/GlobalConstants.cpp | 2 +- paddle/utils/GlobalConstants.h | 2 +- paddle/utils/Locks.h | 2 +- paddle/utils/Logging.cpp | 2 +- paddle/utils/Logging.h | 2 +- paddle/utils/PythonUtil.cpp | 2 +- paddle/utils/PythonUtil.h | 2 +- paddle/utils/Queue.h | 2 +- paddle/utils/Stat.cpp | 2 +- paddle/utils/Stat.h | 2 +- paddle/utils/StringUtil.cpp | 2 +- paddle/utils/StringUtil.h | 2 +- paddle/utils/Thread.h | 2 +- paddle/utils/ThreadLocal.cpp | 2 +- paddle/utils/ThreadLocal.h | 2 +- paddle/utils/TypeDefs.h | 2 +- paddle/utils/Util.cpp | 2 +- paddle/utils/Util.h | 2 +- paddle/utils/Version.cpp | 2 +- paddle/utils/Version.h | 2 +- paddle/utils/arch/linux/Locks.cpp | 2 +- paddle/utils/arch/osx/Locks.cpp | 2 +- paddle/utils/tests/test_CommandLineParser.cpp | 2 +- paddle/utils/tests/test_CustomStackTrace.cpp | 2 +- paddle/utils/tests/test_CustomStackTracePrint.cpp | 2 +- paddle/utils/tests/test_Logging.cpp | 2 +- paddle/utils/tests/test_SpinLock.cpp | 2 +- paddle/utils/tests/test_StringUtils.cpp | 2 +- paddle/utils/tests/test_Thread.cpp | 2 +- paddle/utils/tests/test_ThreadBarrier.cpp | 2 +- proto/DataConfig.proto.m4 | 2 +- proto/DataFormat.proto.m4 | 2 +- proto/ModelConfig.proto.m4 | 2 +- proto/ParameterConfig.proto.m4 | 2 +- proto/ParameterService.proto.m4 | 2 +- proto/TrainerConfig.proto.m4 | 2 +- python/paddle/__init__.py | 2 +- python/paddle/proto/__init__.py | 2 +- python/paddle/trainer/PyDataProvider2.py | 2 +- python/paddle/trainer/PyDataProviderWrapper.py | 2 +- python/paddle/trainer/__init__.py | 2 +- python/paddle/trainer/config_parser.py | 2 +- python/paddle/trainer/config_parser_extension.py | 2 +- python/paddle/trainer/recurrent_units.py | 2 +- python/paddle/trainer_config_helpers/__init__.py | 2 +- python/paddle/trainer_config_helpers/activations.py | 2 +- python/paddle/trainer_config_helpers/attrs.py | 2 +- python/paddle/trainer_config_helpers/data_sources.py | 2 +- python/paddle/trainer_config_helpers/default_decorators.py | 2 +- python/paddle/trainer_config_helpers/evaluators.py | 2 +- python/paddle/trainer_config_helpers/layers.py | 2 +- python/paddle/trainer_config_helpers/math.py | 2 +- python/paddle/trainer_config_helpers/networks.py | 2 +- python/paddle/trainer_config_helpers/optimizers.py | 2 +- python/paddle/trainer_config_helpers/poolings.py | 2 +- .../paddle/trainer_config_helpers/tests/ProtobufEqualMain.cpp | 2 +- python/paddle/trainer_config_helpers/tests/layers_test.py | 2 +- .../paddle/trainer_config_helpers/tests/layers_test_config.py | 2 +- python/paddle/trainer_config_helpers/utils.py | 2 +- python/paddle/utils/__init__.py | 2 +- python/paddle/utils/dump_config.py | 2 +- python/paddle/utils/image_util.py | 2 +- python/paddle/utils/make_model_diagram.py | 2 +- python/paddle/utils/plotcurve.py | 2 +- python/paddle/utils/predefined_net.py | 2 +- python/paddle/utils/preprocess_img.py | 2 +- python/paddle/utils/preprocess_util.py | 2 +- python/paddle/utils/show_pb.py | 2 +- python/paddle/utils/torch2paddle.py | 2 +- 660 files changed, 661 insertions(+), 661 deletions(-) diff --git a/LICENSE b/LICENSE index 
2ff3140db0..e77bd090ee 100644 --- a/LICENSE +++ b/LICENSE @@ -1,4 +1,4 @@ -Copyright (c) 2016 Baidu, Inc. All Rights Reserved +Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved Apache License Version 2.0, January 2004 @@ -188,7 +188,7 @@ Copyright (c) 2016 Baidu, Inc. All Rights Reserved same "printed page" as the copyright notice for easier identification within third-party archives. - Copyright (c) 2016 Baidu, Inc. All Rights Reserve. + Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/demo/gan/data/download_cifar.sh b/demo/gan/data/download_cifar.sh index ea3be594cd..32e73b3d8e 100755 --- a/demo/gan/data/download_cifar.sh +++ b/demo/gan/data/download_cifar.sh @@ -1,4 +1,4 @@ -# Copyright (c) 2016 Baidu, Inc. All Rights Reserved +# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/demo/gan/gan_conf.py b/demo/gan/gan_conf.py index 05eee3a9b9..58ba9dde58 100644 --- a/demo/gan/gan_conf.py +++ b/demo/gan/gan_conf.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016 Baidu, Inc. All Rights Reserved +# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/demo/gan/gan_conf_image.py b/demo/gan/gan_conf_image.py index dc5910e9f0..5c2b140537 100644 --- a/demo/gan/gan_conf_image.py +++ b/demo/gan/gan_conf_image.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016 Baidu, Inc. All Rights Reserved +# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/demo/gan/gan_trainer.py b/demo/gan/gan_trainer.py index 72699952b9..a8c1bd0414 100644 --- a/demo/gan/gan_trainer.py +++ b/demo/gan/gan_trainer.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016 Baidu, Inc. All Rights Reserved +# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/demo/image_classification/data/download_cifar.sh b/demo/image_classification/data/download_cifar.sh index ca9b0b5c90..52e82d0d98 100755 --- a/demo/image_classification/data/download_cifar.sh +++ b/demo/image_classification/data/download_cifar.sh @@ -1,4 +1,4 @@ -# Copyright (c) 2016 Baidu, Inc. All Rights Reserved +# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/demo/image_classification/data/process_cifar.py b/demo/image_classification/data/process_cifar.py index b235010e4e..db6666189e 100644 --- a/demo/image_classification/data/process_cifar.py +++ b/demo/image_classification/data/process_cifar.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016 Baidu, Inc. All Rights Reserved +# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
diff --git a/demo/image_classification/image_provider.py b/demo/image_classification/image_provider.py index 28bf1bb02c..87eed5eebd 100644 --- a/demo/image_classification/image_provider.py +++ b/demo/image_classification/image_provider.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016 Baidu, Inc. All Rights Reserved +# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/demo/image_classification/image_util.py b/demo/image_classification/image_util.py index b5c6431c06..f09605394a 100644 --- a/demo/image_classification/image_util.py +++ b/demo/image_classification/image_util.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016 Baidu, Inc. All Rights Reserved +# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/demo/image_classification/predict.sh b/demo/image_classification/predict.sh index 35ffae6c8c..9d5785c9a1 100755 --- a/demo/image_classification/predict.sh +++ b/demo/image_classification/predict.sh @@ -1,5 +1,5 @@ #!/bin/bash -# Copyright (c) 2016 Baidu, Inc. All Rights Reserved +# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/demo/image_classification/prediction.py b/demo/image_classification/prediction.py index 6a47bd5851..9a86aafcb2 100755 --- a/demo/image_classification/prediction.py +++ b/demo/image_classification/prediction.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016 Baidu, Inc. All Rights Reserved +# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/demo/image_classification/preprocess.py b/demo/image_classification/preprocess.py index 10b9c1691b..2947ad239c 100755 --- a/demo/image_classification/preprocess.py +++ b/demo/image_classification/preprocess.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016 Baidu, Inc. All Rights Reserved +# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/demo/image_classification/preprocess.sh b/demo/image_classification/preprocess.sh index e3e86ff106..c7396c6393 100755 --- a/demo/image_classification/preprocess.sh +++ b/demo/image_classification/preprocess.sh @@ -1,5 +1,5 @@ #!/bin/bash -# Copyright (c) 2016 Baidu, Inc. All Rights Reserved +# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/demo/image_classification/train.sh b/demo/image_classification/train.sh index db0a057bf3..6fc11caf1c 100755 --- a/demo/image_classification/train.sh +++ b/demo/image_classification/train.sh @@ -1,5 +1,5 @@ #!/bin/bash -# Copyright (c) 2016 Baidu, Inc. All Rights Reserved +# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
diff --git a/demo/image_classification/vgg_16_cifar.py b/demo/image_classification/vgg_16_cifar.py
index 58ceff5fc2..8ee4a64c15 100755
--- a/demo/image_classification/vgg_16_cifar.py
+++ b/demo/image_classification/vgg_16_cifar.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2016 Baidu, Inc. All Rights Reserved
+# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.

The hunks in this group all make the same one-line change to the first line of each file's license header. Shell scripts, Python sources, and .conf files change

-# Copyright (c) 2016 Baidu, Inc. All Rights Reserved
+# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved

while C/C++/CUDA sources and headers change

-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.

The files covered by this group of hunks are:

demo/introduction/dataprovider.py
demo/introduction/evaluate_model.py
demo/introduction/train.sh
demo/introduction/trainer_config.py
demo/mnist/data/generate_list.py
demo/mnist/train.sh
demo/mnist/vgg_16_mnist.py
demo/model_zoo/embedding/extract_para.py
demo/model_zoo/embedding/paraconvert.py
demo/model_zoo/embedding/pre_DictAndModel.sh
demo/model_zoo/resnet/classify.py
demo/model_zoo/resnet/example/__init__.py
demo/model_zoo/resnet/example/image_list_provider.py
demo/model_zoo/resnet/extract_fea_c++.sh
demo/model_zoo/resnet/extract_fea_py.sh
demo/model_zoo/resnet/get_model.sh
demo/model_zoo/resnet/load_feature.py
demo/model_zoo/resnet/net_diagram.sh
demo/model_zoo/resnet/predict.sh
demo/model_zoo/resnet/resnet.py
demo/quick_start/api_train.py
demo/quick_start/api_train.sh
demo/quick_start/data/get_data.sh
demo/quick_start/data/proc_from_raw_data/get_data.sh
demo/quick_start/data/proc_from_raw_data/preprocess.py
demo/quick_start/dataprovider_bow.py
demo/quick_start/dataprovider_emb.py
demo/quick_start/predict.sh
demo/quick_start/train.sh
demo/quick_start/trainer_config.bidi-lstm.py
demo/quick_start/trainer_config.cnn.py
demo/quick_start/trainer_config.db-lstm.py
demo/quick_start/trainer_config.emb.py
demo/quick_start/trainer_config.lr.py
demo/quick_start/trainer_config.lstm.py
demo/quick_start/trainer_config.resnet-lstm.py
demo/recommendation/common_utils.py
demo/recommendation/data/config_generator.py
demo/recommendation/data/meta_generator.py
demo/recommendation/data/ml_data.sh
demo/recommendation/data/split.py
demo/recommendation/dataprovider.py
demo/recommendation/evaluate.sh
demo/recommendation/prediction.py
demo/recommendation/preprocess.sh
demo/recommendation/run.sh
demo/recommendation/trainer_config.py
demo/semantic_role_labeling/data/extract_dict_feature.py
demo/semantic_role_labeling/data/extract_pairs.py
demo/semantic_role_labeling/data/get_data.sh
demo/semantic_role_labeling/dataprovider.py
demo/semantic_role_labeling/db_lstm.py
demo/semantic_role_labeling/predict.py
demo/semantic_role_labeling/predict.sh
demo/semantic_role_labeling/test.sh
demo/semantic_role_labeling/train.sh
demo/sentiment/data/get_imdb.sh
demo/sentiment/dataprovider.py
demo/sentiment/predict.py
demo/sentiment/predict.sh
demo/sentiment/preprocess.py
demo/sentiment/preprocess.sh
demo/sentiment/sentiment_net.py
demo/sentiment/test.sh
demo/sentiment/train.sh
demo/sentiment/trainer_config.py
demo/seqToseq/data/paraphrase_data.sh
demo/seqToseq/data/paraphrase_model.sh
demo/seqToseq/data/wmt14_data.sh
demo/seqToseq/data/wmt14_model.sh
demo/seqToseq/dataprovider.py
demo/seqToseq/paraphrase/train.conf
demo/seqToseq/paraphrase/train.sh
demo/seqToseq/preprocess.py
demo/seqToseq/seqToseq_net.py
demo/seqToseq/translation/eval_bleu.sh
demo/seqToseq/translation/gen.conf
demo/seqToseq/translation/gen.sh
demo/seqToseq/translation/moses_bleu.sh
demo/seqToseq/translation/train.conf
demo/seqToseq/translation/train.sh
demo/sequence_tagging/data/get_data.sh
demo/sequence_tagging/dataprovider.py
demo/sequence_tagging/linear_crf.py
demo/sequence_tagging/rnn_crf.py
doc/api/predict/predict_sample.py
doc_cn/cluster/k8s/start_paddle.py
paddle/.common_test_util.sh
paddle/.set_port.sh
paddle/.set_python_path.sh
paddle/api/Arguments.cpp
paddle/api/ConfigParser.cpp
paddle/api/GradientMachine.cpp
paddle/api/Internal.h
paddle/api/Matrix.cpp
paddle/api/PaddleAPI.h
paddle/api/PaddleAPIPrivate.h
paddle/api/Parameter.cpp
paddle/api/ParameterOptimizer.cpp
paddle/api/SequenceGenerator.cpp
paddle/api/Trainer.cpp
paddle/api/Util.cpp
paddle/api/Vector.cpp
paddle/api/__init__.py
paddle/api/paddle_ld_flags.py
paddle/api/test/run_tests.sh
paddle/api/test/testArguments.py
paddle/api/test/testGradientMachine.py
paddle/api/test/testMatrix.py
paddle/api/test/testTrain.py
paddle/api/test/testTrainer.py
paddle/api/test/testVector.py
paddle/api/test/util.py
paddle/cuda/include/hl_activation_functions.h
paddle/cuda/include/hl_aggregate.h
paddle/cuda/include/hl_avx_functions.h
paddle/cuda/include/hl_base.h
paddle/cuda/include/hl_batch_transpose.h
paddle/cuda/include/hl_cnn.h
paddle/cuda/include/hl_cpu_gru.cuh
paddle/cuda/include/hl_cpu_lstm.cuh
paddle/cuda/include/hl_cpu_matrix_kernel.cuh
paddle/cuda/include/hl_cuda.h
paddle/cuda/include/hl_cuda.ph
paddle/cuda/include/hl_cuda_cublas.h
paddle/cuda/include/hl_cuda_cudnn.h
paddle/cuda/include/hl_cuda_cudnn.ph
paddle/cuda/include/hl_device_functions.cuh
paddle/cuda/include/hl_dso_loader.h
paddle/cuda/include/hl_functions.h
paddle/cuda/include/hl_gpu.h
paddle/cuda/include/hl_gpu_functions.cuh
paddle/cuda/include/hl_gpu_gru.cuh
paddle/cuda/include/hl_gpu_lstm.cuh
paddle/cuda/include/hl_gpu_matrix_kernel.cuh
paddle/cuda/include/hl_gru_ops.cuh
paddle/cuda/include/hl_lstm.h
paddle/cuda/include/hl_lstm_ops.cuh
paddle/cuda/include/hl_matrix.h
paddle/cuda/include/hl_matrix_apply.cuh
paddle/cuda/include/hl_matrix_base.cuh
paddle/cuda/include/hl_matrix_base_sse.cuh
paddle/cuda/include/hl_matrix_ops.cuh
paddle/cuda/include/hl_matrix_type.cuh
paddle/cuda/include/hl_perturbation_util.cuh
paddle/cuda/include/hl_recurrent_apply.cuh
paddle/cuda/include/hl_sequence.h
paddle/cuda/include/hl_sparse.h
paddle/cuda/include/hl_sparse.ph
paddle/cuda/include/hl_sse_matrix_kernel.cuh
paddle/cuda/include/hl_table_apply.h
diff --git a/paddle/cuda/include/hl_tensor_ops.h b/paddle/cuda/include/hl_tensor_ops.h index cc95620e37..7945b98201 100644 --- a/paddle/cuda/include/hl_tensor_ops.h +++ b/paddle/cuda/include/hl_tensor_ops.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/cuda/include/hl_thread.ph b/paddle/cuda/include/hl_thread.ph index 0cfc459936..a3830ff8d8 100644 --- a/paddle/cuda/include/hl_thread.ph +++ b/paddle/cuda/include/hl_thread.ph @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/cuda/include/hl_time.h b/paddle/cuda/include/hl_time.h index b0a88c66a1..f214b055f9 100644 --- a/paddle/cuda/include/hl_time.h +++ b/paddle/cuda/include/hl_time.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/cuda/include/hl_top_k.h b/paddle/cuda/include/hl_top_k.h index e8cfebbf6a..77949ed295 100644 --- a/paddle/cuda/include/hl_top_k.h +++ b/paddle/cuda/include/hl_top_k.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/cuda/include/hl_warpctc_wrap.h b/paddle/cuda/include/hl_warpctc_wrap.h index dc50cf9d20..79bf6c3db7 100644 --- a/paddle/cuda/include/hl_warpctc_wrap.h +++ b/paddle/cuda/include/hl_warpctc_wrap.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/cuda/include/stub/hl_aggregate_stub.h b/paddle/cuda/include/stub/hl_aggregate_stub.h index bb53fc581e..bbfa9b8fad 100644 --- a/paddle/cuda/include/stub/hl_aggregate_stub.h +++ b/paddle/cuda/include/stub/hl_aggregate_stub.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/cuda/include/stub/hl_cnn_stub.h b/paddle/cuda/include/stub/hl_cnn_stub.h index 2f73b9671e..52c9787352 100644 --- a/paddle/cuda/include/stub/hl_cnn_stub.h +++ b/paddle/cuda/include/stub/hl_cnn_stub.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
diff --git a/paddle/cuda/include/stub/hl_cuda_cublas_stub.h b/paddle/cuda/include/stub/hl_cuda_cublas_stub.h index 85f7c390c4..e86fd853f4 100644 --- a/paddle/cuda/include/stub/hl_cuda_cublas_stub.h +++ b/paddle/cuda/include/stub/hl_cuda_cublas_stub.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/cuda/include/stub/hl_cuda_cudnn_stub.h b/paddle/cuda/include/stub/hl_cuda_cudnn_stub.h index 3beb0e5b51..abd0d6b099 100644 --- a/paddle/cuda/include/stub/hl_cuda_cudnn_stub.h +++ b/paddle/cuda/include/stub/hl_cuda_cudnn_stub.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/cuda/include/stub/hl_cuda_stub.h b/paddle/cuda/include/stub/hl_cuda_stub.h index 24923a0d4a..5246a8d5a4 100644 --- a/paddle/cuda/include/stub/hl_cuda_stub.h +++ b/paddle/cuda/include/stub/hl_cuda_stub.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/cuda/include/stub/hl_lstm_stub.h b/paddle/cuda/include/stub/hl_lstm_stub.h index 7ccda032d2..246ba79f63 100644 --- a/paddle/cuda/include/stub/hl_lstm_stub.h +++ b/paddle/cuda/include/stub/hl_lstm_stub.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/cuda/include/stub/hl_matrix_stub.h b/paddle/cuda/include/stub/hl_matrix_stub.h index 1bd78d23fb..0b669f6735 100644 --- a/paddle/cuda/include/stub/hl_matrix_stub.h +++ b/paddle/cuda/include/stub/hl_matrix_stub.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/cuda/include/stub/hl_sequence_stub.h b/paddle/cuda/include/stub/hl_sequence_stub.h index 3343463a8d..d6b07556f8 100644 --- a/paddle/cuda/include/stub/hl_sequence_stub.h +++ b/paddle/cuda/include/stub/hl_sequence_stub.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/cuda/include/stub/hl_sparse_stub.h b/paddle/cuda/include/stub/hl_sparse_stub.h index d47bdd2c47..bd17461d88 100644 --- a/paddle/cuda/include/stub/hl_sparse_stub.h +++ b/paddle/cuda/include/stub/hl_sparse_stub.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
diff --git a/paddle/cuda/src/hl_avx_functions.cc b/paddle/cuda/src/hl_avx_functions.cc index c1e0c7f9d9..9066475876 100644 --- a/paddle/cuda/src/hl_avx_functions.cc +++ b/paddle/cuda/src/hl_avx_functions.cc @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/cuda/src/hl_batch_transpose.cu b/paddle/cuda/src/hl_batch_transpose.cu index 00fd18e7f3..f047403da1 100644 --- a/paddle/cuda/src/hl_batch_transpose.cu +++ b/paddle/cuda/src/hl_batch_transpose.cu @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/cuda/src/hl_cpu_functions.cc b/paddle/cuda/src/hl_cpu_functions.cc index af00f352e5..c2117a7315 100644 --- a/paddle/cuda/src/hl_cpu_functions.cc +++ b/paddle/cuda/src/hl_cpu_functions.cc @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/cuda/src/hl_cuda_aggregate.cu b/paddle/cuda/src/hl_cuda_aggregate.cu index 4eb775eb79..97034a9177 100644 --- a/paddle/cuda/src/hl_cuda_aggregate.cu +++ b/paddle/cuda/src/hl_cuda_aggregate.cu @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/cuda/src/hl_cuda_cnn.cu b/paddle/cuda/src/hl_cuda_cnn.cu index 7f2f6897b4..0992286f36 100644 --- a/paddle/cuda/src/hl_cuda_cnn.cu +++ b/paddle/cuda/src/hl_cuda_cnn.cu @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/cuda/src/hl_cuda_cublas.cc b/paddle/cuda/src/hl_cuda_cublas.cc index e8ba232d44..7cede8c63c 100644 --- a/paddle/cuda/src/hl_cuda_cublas.cc +++ b/paddle/cuda/src/hl_cuda_cublas.cc @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/cuda/src/hl_cuda_cudnn.cc b/paddle/cuda/src/hl_cuda_cudnn.cc index 9d4ff08a78..9c9b8906c2 100644 --- a/paddle/cuda/src/hl_cuda_cudnn.cc +++ b/paddle/cuda/src/hl_cuda_cudnn.cc @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/cuda/src/hl_cuda_device.cc b/paddle/cuda/src/hl_cuda_device.cc index 6b71a53848..d181448292 100644 --- a/paddle/cuda/src/hl_cuda_device.cc +++ b/paddle/cuda/src/hl_cuda_device.cc @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. 
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/cuda/src/hl_cuda_lstm.cu b/paddle/cuda/src/hl_cuda_lstm.cu index cf009620bf..b869d903ba 100644 --- a/paddle/cuda/src/hl_cuda_lstm.cu +++ b/paddle/cuda/src/hl_cuda_lstm.cu @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/cuda/src/hl_cuda_matrix.cu b/paddle/cuda/src/hl_cuda_matrix.cu index 0b7cd33756..2b4c6f7c39 100644 --- a/paddle/cuda/src/hl_cuda_matrix.cu +++ b/paddle/cuda/src/hl_cuda_matrix.cu @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/cuda/src/hl_cuda_sequence.cu b/paddle/cuda/src/hl_cuda_sequence.cu index e83a60ad72..4e33ac443c 100644 --- a/paddle/cuda/src/hl_cuda_sequence.cu +++ b/paddle/cuda/src/hl_cuda_sequence.cu @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/cuda/src/hl_cuda_sparse.cu b/paddle/cuda/src/hl_cuda_sparse.cu index 1687fcc221..ab9ab57c88 100644 --- a/paddle/cuda/src/hl_cuda_sparse.cu +++ b/paddle/cuda/src/hl_cuda_sparse.cu @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/cuda/src/hl_cuda_sparse.cuh b/paddle/cuda/src/hl_cuda_sparse.cuh index 9cf2d5a843..72572756a6 100644 --- a/paddle/cuda/src/hl_cuda_sparse.cuh +++ b/paddle/cuda/src/hl_cuda_sparse.cuh @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/cuda/src/hl_cudart_wrap.cc b/paddle/cuda/src/hl_cudart_wrap.cc index a95f5557af..a3ac750b53 100644 --- a/paddle/cuda/src/hl_cudart_wrap.cc +++ b/paddle/cuda/src/hl_cudart_wrap.cc @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/cuda/src/hl_dso_loader.cc b/paddle/cuda/src/hl_dso_loader.cc index ce19073626..f509b89243 100644 --- a/paddle/cuda/src/hl_dso_loader.cc +++ b/paddle/cuda/src/hl_dso_loader.cc @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
diff --git a/paddle/cuda/src/hl_math.cc b/paddle/cuda/src/hl_math.cc index f4bf888bab..3048693fb8 100644 --- a/paddle/cuda/src/hl_math.cc +++ b/paddle/cuda/src/hl_math.cc @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/cuda/src/hl_perturbation_util.cu b/paddle/cuda/src/hl_perturbation_util.cu index a10d06f8a9..2a945bcdb8 100644 --- a/paddle/cuda/src/hl_perturbation_util.cu +++ b/paddle/cuda/src/hl_perturbation_util.cu @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/cuda/src/hl_table_apply.cu b/paddle/cuda/src/hl_table_apply.cu index 52ee4610ed..61edbe3ccc 100644 --- a/paddle/cuda/src/hl_table_apply.cu +++ b/paddle/cuda/src/hl_table_apply.cu @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/cuda/src/hl_time.cc b/paddle/cuda/src/hl_time.cc index d52b2a1df0..3005065899 100644 --- a/paddle/cuda/src/hl_time.cc +++ b/paddle/cuda/src/hl_time.cc @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/cuda/src/hl_top_k.cu b/paddle/cuda/src/hl_top_k.cu index ed74787b61..f0ef0cc3c5 100644 --- a/paddle/cuda/src/hl_top_k.cu +++ b/paddle/cuda/src/hl_top_k.cu @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/cuda/src/hl_warpctc_wrap.cc b/paddle/cuda/src/hl_warpctc_wrap.cc index 3d3bf46158..619b90120f 100644 --- a/paddle/cuda/src/hl_warpctc_wrap.cc +++ b/paddle/cuda/src/hl_warpctc_wrap.cc @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/activations/ActivationFunction.cpp b/paddle/gserver/activations/ActivationFunction.cpp index 220f220e0f..f1d09c568d 100644 --- a/paddle/gserver/activations/ActivationFunction.cpp +++ b/paddle/gserver/activations/ActivationFunction.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
diff --git a/paddle/gserver/activations/ActivationFunction.h b/paddle/gserver/activations/ActivationFunction.h index e9ed5c619a..601e3b6c0c 100644 --- a/paddle/gserver/activations/ActivationFunction.h +++ b/paddle/gserver/activations/ActivationFunction.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/dataproviders/DataProvider.cpp b/paddle/gserver/dataproviders/DataProvider.cpp index e6cc4a246a..55ca62543a 100644 --- a/paddle/gserver/dataproviders/DataProvider.cpp +++ b/paddle/gserver/dataproviders/DataProvider.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/dataproviders/DataProvider.h b/paddle/gserver/dataproviders/DataProvider.h index 8247693822..5b854936c6 100644 --- a/paddle/gserver/dataproviders/DataProvider.h +++ b/paddle/gserver/dataproviders/DataProvider.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/dataproviders/DataProviderGroup.h b/paddle/gserver/dataproviders/DataProviderGroup.h index 6c178e29ee..69ac2590b9 100644 --- a/paddle/gserver/dataproviders/DataProviderGroup.h +++ b/paddle/gserver/dataproviders/DataProviderGroup.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/dataproviders/MultiDataProvider.cpp b/paddle/gserver/dataproviders/MultiDataProvider.cpp index 51fb1f2666..e1fc4c9365 100644 --- a/paddle/gserver/dataproviders/MultiDataProvider.cpp +++ b/paddle/gserver/dataproviders/MultiDataProvider.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/dataproviders/MultiDataProvider.h b/paddle/gserver/dataproviders/MultiDataProvider.h index 876467c04f..4c8fb2cd0d 100644 --- a/paddle/gserver/dataproviders/MultiDataProvider.h +++ b/paddle/gserver/dataproviders/MultiDataProvider.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/dataproviders/ProtoDataProvider.cpp b/paddle/gserver/dataproviders/ProtoDataProvider.cpp index 0a7ff80246..6a0cb5ef63 100644 --- a/paddle/gserver/dataproviders/ProtoDataProvider.cpp +++ b/paddle/gserver/dataproviders/ProtoDataProvider.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. 
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/dataproviders/ProtoDataProvider.h b/paddle/gserver/dataproviders/ProtoDataProvider.h index ffdcc8fdc9..9ec5cb97c0 100644 --- a/paddle/gserver/dataproviders/ProtoDataProvider.h +++ b/paddle/gserver/dataproviders/ProtoDataProvider.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/dataproviders/ProtoReader.h b/paddle/gserver/dataproviders/ProtoReader.h index b8fca3cd7f..6708e7cde7 100644 --- a/paddle/gserver/dataproviders/ProtoReader.h +++ b/paddle/gserver/dataproviders/ProtoReader.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/dataproviders/PyDataProvider.cpp b/paddle/gserver/dataproviders/PyDataProvider.cpp index bee6ca14a2..f5dcbfcf34 100644 --- a/paddle/gserver/dataproviders/PyDataProvider.cpp +++ b/paddle/gserver/dataproviders/PyDataProvider.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/dataproviders/PyDataProvider.h b/paddle/gserver/dataproviders/PyDataProvider.h index 6bb7c831fd..1401c13a1e 100644 --- a/paddle/gserver/dataproviders/PyDataProvider.h +++ b/paddle/gserver/dataproviders/PyDataProvider.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/dataproviders/PyDataProvider2.cpp b/paddle/gserver/dataproviders/PyDataProvider2.cpp index 967fc9026a..8b04a03f6d 100644 --- a/paddle/gserver/dataproviders/PyDataProvider2.cpp +++ b/paddle/gserver/dataproviders/PyDataProvider2.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/evaluators/CTCErrorEvaluator.cpp b/paddle/gserver/evaluators/CTCErrorEvaluator.cpp index 8f7d2fb80e..05aa6c012a 100644 --- a/paddle/gserver/evaluators/CTCErrorEvaluator.cpp +++ b/paddle/gserver/evaluators/CTCErrorEvaluator.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/evaluators/ChunkEvaluator.cpp b/paddle/gserver/evaluators/ChunkEvaluator.cpp index 923e77fc9d..3d8af5bcd4 100644 --- a/paddle/gserver/evaluators/ChunkEvaluator.cpp +++ b/paddle/gserver/evaluators/ChunkEvaluator.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. 
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/evaluators/Evaluator.cpp b/paddle/gserver/evaluators/Evaluator.cpp index f5df2b18de..aa6dc7cb86 100644 --- a/paddle/gserver/evaluators/Evaluator.cpp +++ b/paddle/gserver/evaluators/Evaluator.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/evaluators/Evaluator.h b/paddle/gserver/evaluators/Evaluator.h index 732abb6079..a26c650c38 100644 --- a/paddle/gserver/evaluators/Evaluator.h +++ b/paddle/gserver/evaluators/Evaluator.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/gradientmachines/GradientMachine.cpp b/paddle/gserver/gradientmachines/GradientMachine.cpp index 3761fda5f3..6adee05dbe 100644 --- a/paddle/gserver/gradientmachines/GradientMachine.cpp +++ b/paddle/gserver/gradientmachines/GradientMachine.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/gradientmachines/GradientMachine.h b/paddle/gserver/gradientmachines/GradientMachine.h index 27cdf7f789..f3e44a9e39 100644 --- a/paddle/gserver/gradientmachines/GradientMachine.h +++ b/paddle/gserver/gradientmachines/GradientMachine.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/gradientmachines/GradientMachineMode.cpp b/paddle/gserver/gradientmachines/GradientMachineMode.cpp index 4a90a4a566..3583fb4de8 100644 --- a/paddle/gserver/gradientmachines/GradientMachineMode.cpp +++ b/paddle/gserver/gradientmachines/GradientMachineMode.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/gradientmachines/GradientMachineMode.h b/paddle/gserver/gradientmachines/GradientMachineMode.h index f2f55a7067..7bc885fe99 100644 --- a/paddle/gserver/gradientmachines/GradientMachineMode.h +++ b/paddle/gserver/gradientmachines/GradientMachineMode.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
diff --git a/paddle/gserver/gradientmachines/MultiGradientMachine.cpp b/paddle/gserver/gradientmachines/MultiGradientMachine.cpp index 148451f18d..a7324f5545 100644 --- a/paddle/gserver/gradientmachines/MultiGradientMachine.cpp +++ b/paddle/gserver/gradientmachines/MultiGradientMachine.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/gradientmachines/MultiGradientMachine.h b/paddle/gserver/gradientmachines/MultiGradientMachine.h index 58c5486810..fe6d96e8ea 100644 --- a/paddle/gserver/gradientmachines/MultiGradientMachine.h +++ b/paddle/gserver/gradientmachines/MultiGradientMachine.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/gradientmachines/MultiNetwork.cpp b/paddle/gserver/gradientmachines/MultiNetwork.cpp index e5be19cad6..61af82fcb7 100644 --- a/paddle/gserver/gradientmachines/MultiNetwork.cpp +++ b/paddle/gserver/gradientmachines/MultiNetwork.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/gradientmachines/MultiNetwork.h b/paddle/gserver/gradientmachines/MultiNetwork.h index 779a2267f5..89fbf32b4f 100644 --- a/paddle/gserver/gradientmachines/MultiNetwork.h +++ b/paddle/gserver/gradientmachines/MultiNetwork.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/gradientmachines/NeuralNetwork.cpp b/paddle/gserver/gradientmachines/NeuralNetwork.cpp index 9932ea655e..dbcb97b42b 100644 --- a/paddle/gserver/gradientmachines/NeuralNetwork.cpp +++ b/paddle/gserver/gradientmachines/NeuralNetwork.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/gradientmachines/NeuralNetwork.h b/paddle/gserver/gradientmachines/NeuralNetwork.h index 55ef45c5ee..fd885b436a 100644 --- a/paddle/gserver/gradientmachines/NeuralNetwork.h +++ b/paddle/gserver/gradientmachines/NeuralNetwork.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/gradientmachines/ParallelNeuralNetwork.cpp b/paddle/gserver/gradientmachines/ParallelNeuralNetwork.cpp index 9dbf418c31..980a5851a2 100644 --- a/paddle/gserver/gradientmachines/ParallelNeuralNetwork.cpp +++ b/paddle/gserver/gradientmachines/ParallelNeuralNetwork.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. 
All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/gradientmachines/ParallelNeuralNetwork.h b/paddle/gserver/gradientmachines/ParallelNeuralNetwork.h index 71488bc3b7..934a7cfc7b 100644 --- a/paddle/gserver/gradientmachines/ParallelNeuralNetwork.h +++ b/paddle/gserver/gradientmachines/ParallelNeuralNetwork.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/gradientmachines/RecurrentGradientMachine.cpp b/paddle/gserver/gradientmachines/RecurrentGradientMachine.cpp index 516b617576..4fb1a44ab7 100644 --- a/paddle/gserver/gradientmachines/RecurrentGradientMachine.cpp +++ b/paddle/gserver/gradientmachines/RecurrentGradientMachine.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/gradientmachines/RecurrentGradientMachine.h b/paddle/gserver/gradientmachines/RecurrentGradientMachine.h index cb74a67e52..369c8c3d98 100644 --- a/paddle/gserver/gradientmachines/RecurrentGradientMachine.h +++ b/paddle/gserver/gradientmachines/RecurrentGradientMachine.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/AddtoLayer.cpp b/paddle/gserver/layers/AddtoLayer.cpp index 8a9aecfa19..5338530113 100644 --- a/paddle/gserver/layers/AddtoLayer.cpp +++ b/paddle/gserver/layers/AddtoLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/AddtoLayer.h b/paddle/gserver/layers/AddtoLayer.h index 883d186f3e..53d3f99cdd 100644 --- a/paddle/gserver/layers/AddtoLayer.h +++ b/paddle/gserver/layers/AddtoLayer.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/AgentLayer.cpp b/paddle/gserver/layers/AgentLayer.cpp index eb89281cb1..2d30029027 100644 --- a/paddle/gserver/layers/AgentLayer.cpp +++ b/paddle/gserver/layers/AgentLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
diff --git a/paddle/gserver/layers/AgentLayer.h b/paddle/gserver/layers/AgentLayer.h index 0186653c0f..41683ad671 100644 --- a/paddle/gserver/layers/AgentLayer.h +++ b/paddle/gserver/layers/AgentLayer.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/AverageLayer.cpp b/paddle/gserver/layers/AverageLayer.cpp index af64e15fe3..b8955ab04f 100644 --- a/paddle/gserver/layers/AverageLayer.cpp +++ b/paddle/gserver/layers/AverageLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/AverageLayer.h b/paddle/gserver/layers/AverageLayer.h index 1edc2ace49..b3c4ecec8b 100644 --- a/paddle/gserver/layers/AverageLayer.h +++ b/paddle/gserver/layers/AverageLayer.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/BatchNormBaseLayer.cpp b/paddle/gserver/layers/BatchNormBaseLayer.cpp index fd534b2ac4..51463f1118 100644 --- a/paddle/gserver/layers/BatchNormBaseLayer.cpp +++ b/paddle/gserver/layers/BatchNormBaseLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/BatchNormBaseLayer.h b/paddle/gserver/layers/BatchNormBaseLayer.h index f956646a6d..f5a555a6d0 100644 --- a/paddle/gserver/layers/BatchNormBaseLayer.h +++ b/paddle/gserver/layers/BatchNormBaseLayer.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/BatchNormalizationLayer.cpp b/paddle/gserver/layers/BatchNormalizationLayer.cpp index bdc20c9d81..e6a0624636 100644 --- a/paddle/gserver/layers/BatchNormalizationLayer.cpp +++ b/paddle/gserver/layers/BatchNormalizationLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/BatchNormalizationLayer.h b/paddle/gserver/layers/BatchNormalizationLayer.h index 36925a5ed2..56be473568 100644 --- a/paddle/gserver/layers/BatchNormalizationLayer.h +++ b/paddle/gserver/layers/BatchNormalizationLayer.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
diff --git a/paddle/gserver/layers/BilinearInterpLayer.cpp b/paddle/gserver/layers/BilinearInterpLayer.cpp index 11028290dc..1976cb0017 100644 --- a/paddle/gserver/layers/BilinearInterpLayer.cpp +++ b/paddle/gserver/layers/BilinearInterpLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/BilinearInterpLayer.h b/paddle/gserver/layers/BilinearInterpLayer.h index eba3c054fa..4ff4b0ea79 100644 --- a/paddle/gserver/layers/BilinearInterpLayer.h +++ b/paddle/gserver/layers/BilinearInterpLayer.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/BlockExpandLayer.cpp b/paddle/gserver/layers/BlockExpandLayer.cpp index 17d77879b2..2bafeb9215 100644 --- a/paddle/gserver/layers/BlockExpandLayer.cpp +++ b/paddle/gserver/layers/BlockExpandLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/BlockExpandLayer.h b/paddle/gserver/layers/BlockExpandLayer.h index 1496fb681a..cc96fdd03f 100644 --- a/paddle/gserver/layers/BlockExpandLayer.h +++ b/paddle/gserver/layers/BlockExpandLayer.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/CRFDecodingLayer.cpp b/paddle/gserver/layers/CRFDecodingLayer.cpp index 8986741dc3..fdb46aba68 100644 --- a/paddle/gserver/layers/CRFDecodingLayer.cpp +++ b/paddle/gserver/layers/CRFDecodingLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/CRFDecodingLayer.h b/paddle/gserver/layers/CRFDecodingLayer.h index 1914062011..1fd444ad10 100644 --- a/paddle/gserver/layers/CRFDecodingLayer.h +++ b/paddle/gserver/layers/CRFDecodingLayer.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/CRFLayer.cpp b/paddle/gserver/layers/CRFLayer.cpp index ed4f864ba9..02b7aaf17e 100644 --- a/paddle/gserver/layers/CRFLayer.cpp +++ b/paddle/gserver/layers/CRFLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
diff --git a/paddle/gserver/layers/CRFLayer.h b/paddle/gserver/layers/CRFLayer.h index 21c7fc61e1..d21b32b68c 100644 --- a/paddle/gserver/layers/CRFLayer.h +++ b/paddle/gserver/layers/CRFLayer.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/CTCLayer.cpp b/paddle/gserver/layers/CTCLayer.cpp index be5d2c8c75..14ec851551 100644 --- a/paddle/gserver/layers/CTCLayer.cpp +++ b/paddle/gserver/layers/CTCLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/CTCLayer.h b/paddle/gserver/layers/CTCLayer.h index 18ba12583b..70d429bad6 100644 --- a/paddle/gserver/layers/CTCLayer.h +++ b/paddle/gserver/layers/CTCLayer.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/ConcatenateLayer.cpp b/paddle/gserver/layers/ConcatenateLayer.cpp index 910eec8bbc..f6b3d86b8c 100644 --- a/paddle/gserver/layers/ConcatenateLayer.cpp +++ b/paddle/gserver/layers/ConcatenateLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/ContextProjection.cpp b/paddle/gserver/layers/ContextProjection.cpp index 30dbf168fb..6080aa51b9 100644 --- a/paddle/gserver/layers/ContextProjection.cpp +++ b/paddle/gserver/layers/ContextProjection.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/ContextProjection.h b/paddle/gserver/layers/ContextProjection.h index 188dec0fb3..2df43bd04f 100644 --- a/paddle/gserver/layers/ContextProjection.h +++ b/paddle/gserver/layers/ContextProjection.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/ConvBaseLayer.cpp b/paddle/gserver/layers/ConvBaseLayer.cpp index b5a2f8b8e1..473ca24a94 100644 --- a/paddle/gserver/layers/ConvBaseLayer.cpp +++ b/paddle/gserver/layers/ConvBaseLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
diff --git a/paddle/gserver/layers/ConvBaseLayer.h b/paddle/gserver/layers/ConvBaseLayer.h index 85f57dbe0b..aedf4100e3 100644 --- a/paddle/gserver/layers/ConvBaseLayer.h +++ b/paddle/gserver/layers/ConvBaseLayer.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/ConvOperator.cpp b/paddle/gserver/layers/ConvOperator.cpp index dc06c89dab..3ede98ba4b 100644 --- a/paddle/gserver/layers/ConvOperator.cpp +++ b/paddle/gserver/layers/ConvOperator.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/ConvProjection.cpp b/paddle/gserver/layers/ConvProjection.cpp index 5a68fb08da..e72dc37ec8 100644 --- a/paddle/gserver/layers/ConvProjection.cpp +++ b/paddle/gserver/layers/ConvProjection.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/ConvProjection.h b/paddle/gserver/layers/ConvProjection.h index 779fe1455a..c32e5e1d3a 100644 --- a/paddle/gserver/layers/ConvProjection.h +++ b/paddle/gserver/layers/ConvProjection.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/ConvShiftLayer.cpp b/paddle/gserver/layers/ConvShiftLayer.cpp index 6e77c1f14e..527d885d86 100644 --- a/paddle/gserver/layers/ConvShiftLayer.cpp +++ b/paddle/gserver/layers/ConvShiftLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/ConvexCombinationLayer.cpp b/paddle/gserver/layers/ConvexCombinationLayer.cpp index 7e1fef8bc6..57ff95fe37 100644 --- a/paddle/gserver/layers/ConvexCombinationLayer.cpp +++ b/paddle/gserver/layers/ConvexCombinationLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/CosSimLayer.cpp b/paddle/gserver/layers/CosSimLayer.cpp index 894cb5b0d8..254120443d 100644 --- a/paddle/gserver/layers/CosSimLayer.cpp +++ b/paddle/gserver/layers/CosSimLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
diff --git a/paddle/gserver/layers/CosSimLayer.h b/paddle/gserver/layers/CosSimLayer.h index bc47998c11..5dcc5d8a5b 100644 --- a/paddle/gserver/layers/CosSimLayer.h +++ b/paddle/gserver/layers/CosSimLayer.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/CosSimVecMatLayer.cpp b/paddle/gserver/layers/CosSimVecMatLayer.cpp index 56d177da64..e8a7f671ee 100644 --- a/paddle/gserver/layers/CosSimVecMatLayer.cpp +++ b/paddle/gserver/layers/CosSimVecMatLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/CostLayer.cpp b/paddle/gserver/layers/CostLayer.cpp index 5c839f2d6c..90cd473c42 100644 --- a/paddle/gserver/layers/CostLayer.cpp +++ b/paddle/gserver/layers/CostLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/CostLayer.h b/paddle/gserver/layers/CostLayer.h index 120ff9bd2d..7f73bdb3f7 100644 --- a/paddle/gserver/layers/CostLayer.h +++ b/paddle/gserver/layers/CostLayer.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/CudnnBatchNormLayer.cpp b/paddle/gserver/layers/CudnnBatchNormLayer.cpp index 6be62b1a25..d44c217105 100644 --- a/paddle/gserver/layers/CudnnBatchNormLayer.cpp +++ b/paddle/gserver/layers/CudnnBatchNormLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/CudnnBatchNormLayer.h b/paddle/gserver/layers/CudnnBatchNormLayer.h index 6220e77ceb..a52a683e15 100644 --- a/paddle/gserver/layers/CudnnBatchNormLayer.h +++ b/paddle/gserver/layers/CudnnBatchNormLayer.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/CudnnConvLayer.cpp b/paddle/gserver/layers/CudnnConvLayer.cpp index 93c5565d2f..6e28d5eb42 100644 --- a/paddle/gserver/layers/CudnnConvLayer.cpp +++ b/paddle/gserver/layers/CudnnConvLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
diff --git a/paddle/gserver/layers/CudnnConvLayer.h b/paddle/gserver/layers/CudnnConvLayer.h index 6cfbadfb53..6317fab6f8 100644 --- a/paddle/gserver/layers/CudnnConvLayer.h +++ b/paddle/gserver/layers/CudnnConvLayer.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/CudnnPoolLayer.cpp b/paddle/gserver/layers/CudnnPoolLayer.cpp index 21d8e2579f..d0e71c6345 100644 --- a/paddle/gserver/layers/CudnnPoolLayer.cpp +++ b/paddle/gserver/layers/CudnnPoolLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/CudnnPoolLayer.h b/paddle/gserver/layers/CudnnPoolLayer.h index 6a6b28db96..072b2f9513 100644 --- a/paddle/gserver/layers/CudnnPoolLayer.h +++ b/paddle/gserver/layers/CudnnPoolLayer.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/DataLayer.cpp b/paddle/gserver/layers/DataLayer.cpp index 67c4923036..66f0606a38 100644 --- a/paddle/gserver/layers/DataLayer.cpp +++ b/paddle/gserver/layers/DataLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/DataLayer.h b/paddle/gserver/layers/DataLayer.h index da74702201..d3bc97bb6c 100644 --- a/paddle/gserver/layers/DataLayer.h +++ b/paddle/gserver/layers/DataLayer.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/DataNormLayer.cpp b/paddle/gserver/layers/DataNormLayer.cpp index b398f3dbed..afd532c949 100644 --- a/paddle/gserver/layers/DataNormLayer.cpp +++ b/paddle/gserver/layers/DataNormLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/DataNormLayer.h b/paddle/gserver/layers/DataNormLayer.h index 1179d94fbb..b3043cffd2 100644 --- a/paddle/gserver/layers/DataNormLayer.h +++ b/paddle/gserver/layers/DataNormLayer.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
diff --git a/paddle/gserver/layers/DotMulOperator.cpp b/paddle/gserver/layers/DotMulOperator.cpp index 9409493fda..55dabd79d0 100644 --- a/paddle/gserver/layers/DotMulOperator.cpp +++ b/paddle/gserver/layers/DotMulOperator.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/DotMulProjection.cpp b/paddle/gserver/layers/DotMulProjection.cpp index 862eeb6f01..0a1ede3618 100644 --- a/paddle/gserver/layers/DotMulProjection.cpp +++ b/paddle/gserver/layers/DotMulProjection.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/EosIdCheckLayer.cpp b/paddle/gserver/layers/EosIdCheckLayer.cpp index 3a43705d26..dc3c6e6b64 100644 --- a/paddle/gserver/layers/EosIdCheckLayer.cpp +++ b/paddle/gserver/layers/EosIdCheckLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/ExpandConvBaseLayer.cpp b/paddle/gserver/layers/ExpandConvBaseLayer.cpp index 3724609720..25948747fe 100644 --- a/paddle/gserver/layers/ExpandConvBaseLayer.cpp +++ b/paddle/gserver/layers/ExpandConvBaseLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/ExpandConvBaseLayer.h b/paddle/gserver/layers/ExpandConvBaseLayer.h index 5939d27e2a..e14f6e6f44 100644 --- a/paddle/gserver/layers/ExpandConvBaseLayer.h +++ b/paddle/gserver/layers/ExpandConvBaseLayer.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/ExpandConvLayer.cpp b/paddle/gserver/layers/ExpandConvLayer.cpp index 0649289c1c..dcc7839960 100644 --- a/paddle/gserver/layers/ExpandConvLayer.cpp +++ b/paddle/gserver/layers/ExpandConvLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/ExpandConvLayer.h b/paddle/gserver/layers/ExpandConvLayer.h index 82a9e88a42..6f8504b50a 100644 --- a/paddle/gserver/layers/ExpandConvLayer.h +++ b/paddle/gserver/layers/ExpandConvLayer.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
diff --git a/paddle/gserver/layers/ExpandConvTransLayer.cpp b/paddle/gserver/layers/ExpandConvTransLayer.cpp index 1132ab4f92..cd4965c3c5 100644 --- a/paddle/gserver/layers/ExpandConvTransLayer.cpp +++ b/paddle/gserver/layers/ExpandConvTransLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/ExpandConvTransLayer.h b/paddle/gserver/layers/ExpandConvTransLayer.h index 47efe3f656..fa9d7fb481 100644 --- a/paddle/gserver/layers/ExpandConvTransLayer.h +++ b/paddle/gserver/layers/ExpandConvTransLayer.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/ExpandLayer.cpp b/paddle/gserver/layers/ExpandLayer.cpp index 9290ce4f6d..de5acfde05 100644 --- a/paddle/gserver/layers/ExpandLayer.cpp +++ b/paddle/gserver/layers/ExpandLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/ExpandLayer.h b/paddle/gserver/layers/ExpandLayer.h index fbe0ced9b1..5c63614423 100644 --- a/paddle/gserver/layers/ExpandLayer.h +++ b/paddle/gserver/layers/ExpandLayer.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/FeatureMapExpandLayer.cpp b/paddle/gserver/layers/FeatureMapExpandLayer.cpp index 97c8d143fe..d023074c52 100644 --- a/paddle/gserver/layers/FeatureMapExpandLayer.cpp +++ b/paddle/gserver/layers/FeatureMapExpandLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/FullMatrixProjection.cpp b/paddle/gserver/layers/FullMatrixProjection.cpp index 35a5cb5b7a..9e72a33a3c 100644 --- a/paddle/gserver/layers/FullMatrixProjection.cpp +++ b/paddle/gserver/layers/FullMatrixProjection.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/FullMatrixProjection.h b/paddle/gserver/layers/FullMatrixProjection.h index ddb1e7b18c..58499f2e1e 100644 --- a/paddle/gserver/layers/FullMatrixProjection.h +++ b/paddle/gserver/layers/FullMatrixProjection.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
diff --git a/paddle/gserver/layers/FullyConnectedLayer.cpp b/paddle/gserver/layers/FullyConnectedLayer.cpp index 70c56499a7..d2a028dd80 100644 --- a/paddle/gserver/layers/FullyConnectedLayer.cpp +++ b/paddle/gserver/layers/FullyConnectedLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/FullyConnectedLayer.h b/paddle/gserver/layers/FullyConnectedLayer.h index e15e1236cd..ccd584585c 100644 --- a/paddle/gserver/layers/FullyConnectedLayer.h +++ b/paddle/gserver/layers/FullyConnectedLayer.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/GatedRecurrentLayer.cpp b/paddle/gserver/layers/GatedRecurrentLayer.cpp index 495c2174f3..01b210ba70 100644 --- a/paddle/gserver/layers/GatedRecurrentLayer.cpp +++ b/paddle/gserver/layers/GatedRecurrentLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/GatedRecurrentLayer.h b/paddle/gserver/layers/GatedRecurrentLayer.h index 3b8706a44e..e099b4d18b 100644 --- a/paddle/gserver/layers/GatedRecurrentLayer.h +++ b/paddle/gserver/layers/GatedRecurrentLayer.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/GetOutputLayer.cpp b/paddle/gserver/layers/GetOutputLayer.cpp index 01579d55fd..b77fdbb30e 100644 --- a/paddle/gserver/layers/GetOutputLayer.cpp +++ b/paddle/gserver/layers/GetOutputLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/GruCompute.cpp b/paddle/gserver/layers/GruCompute.cpp index d9d423af44..7d4e8001a8 100644 --- a/paddle/gserver/layers/GruCompute.cpp +++ b/paddle/gserver/layers/GruCompute.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/GruCompute.cu b/paddle/gserver/layers/GruCompute.cu index 4a3cf6b1ca..d5e547dce3 100644 --- a/paddle/gserver/layers/GruCompute.cu +++ b/paddle/gserver/layers/GruCompute.cu @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
diff --git a/paddle/gserver/layers/GruCompute.h b/paddle/gserver/layers/GruCompute.h index 58b5aacba0..2a5da72068 100644 --- a/paddle/gserver/layers/GruCompute.h +++ b/paddle/gserver/layers/GruCompute.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/GruStepLayer.cpp b/paddle/gserver/layers/GruStepLayer.cpp index 6c9b0c5771..c48b5e40e6 100644 --- a/paddle/gserver/layers/GruStepLayer.cpp +++ b/paddle/gserver/layers/GruStepLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/HierarchicalSigmoidLayer.cpp b/paddle/gserver/layers/HierarchicalSigmoidLayer.cpp index 61bc777785..d62a8d846e 100644 --- a/paddle/gserver/layers/HierarchicalSigmoidLayer.cpp +++ b/paddle/gserver/layers/HierarchicalSigmoidLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/HierarchicalSigmoidLayer.h b/paddle/gserver/layers/HierarchicalSigmoidLayer.h index 10762bc926..70da3ac126 100644 --- a/paddle/gserver/layers/HierarchicalSigmoidLayer.h +++ b/paddle/gserver/layers/HierarchicalSigmoidLayer.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/IdentityProjection.cpp b/paddle/gserver/layers/IdentityProjection.cpp index b38656c960..8660631b5a 100644 --- a/paddle/gserver/layers/IdentityProjection.cpp +++ b/paddle/gserver/layers/IdentityProjection.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/InterpolationLayer.cpp b/paddle/gserver/layers/InterpolationLayer.cpp index b00bee2356..94d4614b21 100644 --- a/paddle/gserver/layers/InterpolationLayer.cpp +++ b/paddle/gserver/layers/InterpolationLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/Layer.cpp b/paddle/gserver/layers/Layer.cpp index a83b0e9ab4..3c539f3076 100644 --- a/paddle/gserver/layers/Layer.cpp +++ b/paddle/gserver/layers/Layer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
diff --git a/paddle/gserver/layers/Layer.h b/paddle/gserver/layers/Layer.h index 3d427a1ac6..6609e16c4c 100644 --- a/paddle/gserver/layers/Layer.h +++ b/paddle/gserver/layers/Layer.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/LinearChainCRF.cpp b/paddle/gserver/layers/LinearChainCRF.cpp index e2a4f69e71..c6414c822e 100644 --- a/paddle/gserver/layers/LinearChainCRF.cpp +++ b/paddle/gserver/layers/LinearChainCRF.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/LinearChainCRF.h b/paddle/gserver/layers/LinearChainCRF.h index 6368f2b9de..a905bf803d 100644 --- a/paddle/gserver/layers/LinearChainCRF.h +++ b/paddle/gserver/layers/LinearChainCRF.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/LinearChainCTC.cpp b/paddle/gserver/layers/LinearChainCTC.cpp index 3368eb4d8a..60e814fc30 100644 --- a/paddle/gserver/layers/LinearChainCTC.cpp +++ b/paddle/gserver/layers/LinearChainCTC.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/LinearChainCTC.h b/paddle/gserver/layers/LinearChainCTC.h index 0a93d2e9a6..737c9d5c31 100644 --- a/paddle/gserver/layers/LinearChainCTC.h +++ b/paddle/gserver/layers/LinearChainCTC.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/LstmCompute.cpp b/paddle/gserver/layers/LstmCompute.cpp index 38057636ed..18f7996958 100644 --- a/paddle/gserver/layers/LstmCompute.cpp +++ b/paddle/gserver/layers/LstmCompute.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/LstmCompute.cu b/paddle/gserver/layers/LstmCompute.cu index af271d682f..f75c0c40cc 100644 --- a/paddle/gserver/layers/LstmCompute.cu +++ b/paddle/gserver/layers/LstmCompute.cu @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
diff --git a/paddle/gserver/layers/LstmCompute.h b/paddle/gserver/layers/LstmCompute.h index 97be7218f2..9b7aee19dd 100644 --- a/paddle/gserver/layers/LstmCompute.h +++ b/paddle/gserver/layers/LstmCompute.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/LstmLayer.cpp b/paddle/gserver/layers/LstmLayer.cpp index e70a20e5c0..975edcfe7f 100644 --- a/paddle/gserver/layers/LstmLayer.cpp +++ b/paddle/gserver/layers/LstmLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/LstmLayer.h b/paddle/gserver/layers/LstmLayer.h index 5b936ff44e..16c62aa88d 100644 --- a/paddle/gserver/layers/LstmLayer.h +++ b/paddle/gserver/layers/LstmLayer.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/LstmStepLayer.cpp b/paddle/gserver/layers/LstmStepLayer.cpp index e7a8d519f2..5fc6474b86 100644 --- a/paddle/gserver/layers/LstmStepLayer.cpp +++ b/paddle/gserver/layers/LstmStepLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/MDLstmLayer.cpp b/paddle/gserver/layers/MDLstmLayer.cpp index 93f52c1c31..9d3797d16f 100644 --- a/paddle/gserver/layers/MDLstmLayer.cpp +++ b/paddle/gserver/layers/MDLstmLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/MaxIdLayer.cpp b/paddle/gserver/layers/MaxIdLayer.cpp index 22670fa121..80555f3f7b 100644 --- a/paddle/gserver/layers/MaxIdLayer.cpp +++ b/paddle/gserver/layers/MaxIdLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/MaxLayer.cpp b/paddle/gserver/layers/MaxLayer.cpp index 42bc6bb815..23629e1986 100644 --- a/paddle/gserver/layers/MaxLayer.cpp +++ b/paddle/gserver/layers/MaxLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
diff --git a/paddle/gserver/layers/MaxLayer.h b/paddle/gserver/layers/MaxLayer.h index 74df0b8b57..472ee0ccca 100644 --- a/paddle/gserver/layers/MaxLayer.h +++ b/paddle/gserver/layers/MaxLayer.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/MaxOutLayer.cpp b/paddle/gserver/layers/MaxOutLayer.cpp index b7f1b98041..4fb99ce2a2 100644 --- a/paddle/gserver/layers/MaxOutLayer.cpp +++ b/paddle/gserver/layers/MaxOutLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/MaxOutLayer.h b/paddle/gserver/layers/MaxOutLayer.h index 9011a5c332..59c2245e0d 100644 --- a/paddle/gserver/layers/MaxOutLayer.h +++ b/paddle/gserver/layers/MaxOutLayer.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/MixedLayer.cpp b/paddle/gserver/layers/MixedLayer.cpp index 1392188fca..490b217347 100644 --- a/paddle/gserver/layers/MixedLayer.cpp +++ b/paddle/gserver/layers/MixedLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/MixedLayer.h b/paddle/gserver/layers/MixedLayer.h index 271e0c2538..d73ba6b7a1 100644 --- a/paddle/gserver/layers/MixedLayer.h +++ b/paddle/gserver/layers/MixedLayer.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/MultinomialSampler.cpp b/paddle/gserver/layers/MultinomialSampler.cpp index e85dca72d3..0b285ed20f 100644 --- a/paddle/gserver/layers/MultinomialSampler.cpp +++ b/paddle/gserver/layers/MultinomialSampler.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/MultinomialSampler.h b/paddle/gserver/layers/MultinomialSampler.h index 59683d2ee2..6e50f8738e 100644 --- a/paddle/gserver/layers/MultinomialSampler.h +++ b/paddle/gserver/layers/MultinomialSampler.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
diff --git a/paddle/gserver/layers/MultiplexLayer.cpp b/paddle/gserver/layers/MultiplexLayer.cpp index c681eb0623..dc4a1ec321 100644 --- a/paddle/gserver/layers/MultiplexLayer.cpp +++ b/paddle/gserver/layers/MultiplexLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/NCELayer.cpp b/paddle/gserver/layers/NCELayer.cpp index 50b29cdea5..540db46545 100644 --- a/paddle/gserver/layers/NCELayer.cpp +++ b/paddle/gserver/layers/NCELayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/NormLayer.cpp b/paddle/gserver/layers/NormLayer.cpp index 445a1a0c52..b8682a1422 100644 --- a/paddle/gserver/layers/NormLayer.cpp +++ b/paddle/gserver/layers/NormLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/NormLayer.h b/paddle/gserver/layers/NormLayer.h index fcc57849d6..aedbb95b4f 100644 --- a/paddle/gserver/layers/NormLayer.h +++ b/paddle/gserver/layers/NormLayer.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/NormProjectionLayer.cpp b/paddle/gserver/layers/NormProjectionLayer.cpp index da36cc2c99..ea301292e0 100644 --- a/paddle/gserver/layers/NormProjectionLayer.cpp +++ b/paddle/gserver/layers/NormProjectionLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/NormProjectionLayer.h b/paddle/gserver/layers/NormProjectionLayer.h index b42e98ab09..0db8e2551f 100644 --- a/paddle/gserver/layers/NormProjectionLayer.h +++ b/paddle/gserver/layers/NormProjectionLayer.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/Operator.cpp b/paddle/gserver/layers/Operator.cpp index b89c474014..a638933914 100644 --- a/paddle/gserver/layers/Operator.cpp +++ b/paddle/gserver/layers/Operator.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
diff --git a/paddle/gserver/layers/Operator.h b/paddle/gserver/layers/Operator.h index ff6558dc73..b0586b59e9 100644 --- a/paddle/gserver/layers/Operator.h +++ b/paddle/gserver/layers/Operator.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/OuterProdLayer.cpp b/paddle/gserver/layers/OuterProdLayer.cpp index 9b24a4f440..42587dcce5 100644 --- a/paddle/gserver/layers/OuterProdLayer.cpp +++ b/paddle/gserver/layers/OuterProdLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/ParameterReluLayer.cpp b/paddle/gserver/layers/ParameterReluLayer.cpp index cd3bffa2e1..836c1981ba 100644 --- a/paddle/gserver/layers/ParameterReluLayer.cpp +++ b/paddle/gserver/layers/ParameterReluLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/ParameterReluLayer.h b/paddle/gserver/layers/ParameterReluLayer.h index 029c09381f..a82497fc01 100644 --- a/paddle/gserver/layers/ParameterReluLayer.h +++ b/paddle/gserver/layers/ParameterReluLayer.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/PoolLayer.cpp b/paddle/gserver/layers/PoolLayer.cpp index 511dfd87c1..36e396487e 100644 --- a/paddle/gserver/layers/PoolLayer.cpp +++ b/paddle/gserver/layers/PoolLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/PoolLayer.h b/paddle/gserver/layers/PoolLayer.h index 59be295a53..c05d7a364d 100644 --- a/paddle/gserver/layers/PoolLayer.h +++ b/paddle/gserver/layers/PoolLayer.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/PoolProjection.cpp b/paddle/gserver/layers/PoolProjection.cpp index 1b227c8084..d90b438448 100644 --- a/paddle/gserver/layers/PoolProjection.cpp +++ b/paddle/gserver/layers/PoolProjection.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
diff --git a/paddle/gserver/layers/PoolProjection.h b/paddle/gserver/layers/PoolProjection.h index 9c3191bd80..9a75f465f6 100644 --- a/paddle/gserver/layers/PoolProjection.h +++ b/paddle/gserver/layers/PoolProjection.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/PoolProjectionLayer.cpp b/paddle/gserver/layers/PoolProjectionLayer.cpp index aabc60af19..392c548d45 100644 --- a/paddle/gserver/layers/PoolProjectionLayer.cpp +++ b/paddle/gserver/layers/PoolProjectionLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/PoolProjectionLayer.h b/paddle/gserver/layers/PoolProjectionLayer.h index 777b6f39e7..3dc6af2f0e 100644 --- a/paddle/gserver/layers/PoolProjectionLayer.h +++ b/paddle/gserver/layers/PoolProjectionLayer.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/PowerLayer.cpp b/paddle/gserver/layers/PowerLayer.cpp index dbe70a1d87..eb69249270 100644 --- a/paddle/gserver/layers/PowerLayer.cpp +++ b/paddle/gserver/layers/PowerLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/PrintLayer.cpp b/paddle/gserver/layers/PrintLayer.cpp index 95be7b34cb..ac7f658864 100644 --- a/paddle/gserver/layers/PrintLayer.cpp +++ b/paddle/gserver/layers/PrintLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/Projection.cpp b/paddle/gserver/layers/Projection.cpp index c7eb4b6442..974b3cf059 100644 --- a/paddle/gserver/layers/Projection.cpp +++ b/paddle/gserver/layers/Projection.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/Projection.h b/paddle/gserver/layers/Projection.h index 798503113d..8cd8042479 100644 --- a/paddle/gserver/layers/Projection.h +++ b/paddle/gserver/layers/Projection.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
diff --git a/paddle/gserver/layers/RecurrentLayer.cpp b/paddle/gserver/layers/RecurrentLayer.cpp index 08453e21b8..0832eeaa10 100644 --- a/paddle/gserver/layers/RecurrentLayer.cpp +++ b/paddle/gserver/layers/RecurrentLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/RecurrentLayerGroup.cpp b/paddle/gserver/layers/RecurrentLayerGroup.cpp index a5443975da..5cb4220623 100644 --- a/paddle/gserver/layers/RecurrentLayerGroup.cpp +++ b/paddle/gserver/layers/RecurrentLayerGroup.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/ResizeLayer.cpp b/paddle/gserver/layers/ResizeLayer.cpp index 3c478a33e3..e79732155a 100644 --- a/paddle/gserver/layers/ResizeLayer.cpp +++ b/paddle/gserver/layers/ResizeLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/SamplingIdLayer.cpp b/paddle/gserver/layers/SamplingIdLayer.cpp index b39c9948b5..59ff5d41b5 100644 --- a/paddle/gserver/layers/SamplingIdLayer.cpp +++ b/paddle/gserver/layers/SamplingIdLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/ScalingLayer.cpp b/paddle/gserver/layers/ScalingLayer.cpp index 71570810f9..013bff6b98 100644 --- a/paddle/gserver/layers/ScalingLayer.cpp +++ b/paddle/gserver/layers/ScalingLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/ScalingProjection.cpp b/paddle/gserver/layers/ScalingProjection.cpp index 7999d02d38..ddb8c87110 100644 --- a/paddle/gserver/layers/ScalingProjection.cpp +++ b/paddle/gserver/layers/ScalingProjection.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/SelectiveFullyConnectedLayer.cpp b/paddle/gserver/layers/SelectiveFullyConnectedLayer.cpp index 4dfa2c179d..75d9fa8a97 100644 --- a/paddle/gserver/layers/SelectiveFullyConnectedLayer.cpp +++ b/paddle/gserver/layers/SelectiveFullyConnectedLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
diff --git a/paddle/gserver/layers/SelectiveFullyConnectedLayer.h b/paddle/gserver/layers/SelectiveFullyConnectedLayer.h index 9f92ae0605..bdf9a4652c 100644 --- a/paddle/gserver/layers/SelectiveFullyConnectedLayer.h +++ b/paddle/gserver/layers/SelectiveFullyConnectedLayer.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/SequenceConcatLayer.cpp b/paddle/gserver/layers/SequenceConcatLayer.cpp index bd72ba3d16..d3e0e16e96 100644 --- a/paddle/gserver/layers/SequenceConcatLayer.cpp +++ b/paddle/gserver/layers/SequenceConcatLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/SequenceLastInstanceLayer.cpp b/paddle/gserver/layers/SequenceLastInstanceLayer.cpp index 0e9531eabb..4bfce766c7 100644 --- a/paddle/gserver/layers/SequenceLastInstanceLayer.cpp +++ b/paddle/gserver/layers/SequenceLastInstanceLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/SequencePoolLayer.cpp b/paddle/gserver/layers/SequencePoolLayer.cpp index c9f19b7d3b..856c889e3b 100644 --- a/paddle/gserver/layers/SequencePoolLayer.cpp +++ b/paddle/gserver/layers/SequencePoolLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/SequencePoolLayer.h b/paddle/gserver/layers/SequencePoolLayer.h index 669af80e1d..aa9c132586 100644 --- a/paddle/gserver/layers/SequencePoolLayer.h +++ b/paddle/gserver/layers/SequencePoolLayer.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/SequenceReshapeLayer.cpp b/paddle/gserver/layers/SequenceReshapeLayer.cpp index 5ca9b8b300..4b90424215 100644 --- a/paddle/gserver/layers/SequenceReshapeLayer.cpp +++ b/paddle/gserver/layers/SequenceReshapeLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/SequenceToBatch.cpp b/paddle/gserver/layers/SequenceToBatch.cpp index 04402db9c8..c12ed82197 100644 --- a/paddle/gserver/layers/SequenceToBatch.cpp +++ b/paddle/gserver/layers/SequenceToBatch.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. 
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/SequenceToBatch.h b/paddle/gserver/layers/SequenceToBatch.h index 6bc12f207e..fe9b34b224 100644 --- a/paddle/gserver/layers/SequenceToBatch.h +++ b/paddle/gserver/layers/SequenceToBatch.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/SlopeInterceptLayer.cpp b/paddle/gserver/layers/SlopeInterceptLayer.cpp index dd6ffcd50b..5c00e54f8c 100644 --- a/paddle/gserver/layers/SlopeInterceptLayer.cpp +++ b/paddle/gserver/layers/SlopeInterceptLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/SpatialPyramidPoolLayer.cpp b/paddle/gserver/layers/SpatialPyramidPoolLayer.cpp index dce660a5bc..14fe88ff8a 100644 --- a/paddle/gserver/layers/SpatialPyramidPoolLayer.cpp +++ b/paddle/gserver/layers/SpatialPyramidPoolLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/SpatialPyramidPoolLayer.h b/paddle/gserver/layers/SpatialPyramidPoolLayer.h index 79db574d99..32e88cf141 100644 --- a/paddle/gserver/layers/SpatialPyramidPoolLayer.h +++ b/paddle/gserver/layers/SpatialPyramidPoolLayer.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/SubSequenceLayer.cpp b/paddle/gserver/layers/SubSequenceLayer.cpp index 664f9e13c0..8b35456391 100644 --- a/paddle/gserver/layers/SubSequenceLayer.cpp +++ b/paddle/gserver/layers/SubSequenceLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/SumToOneNormLayer.cpp b/paddle/gserver/layers/SumToOneNormLayer.cpp index bcf3916840..e6759171cb 100644 --- a/paddle/gserver/layers/SumToOneNormLayer.cpp +++ b/paddle/gserver/layers/SumToOneNormLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/TableProjection.cpp b/paddle/gserver/layers/TableProjection.cpp index 2bc0d329d9..270acdd34b 100644 --- a/paddle/gserver/layers/TableProjection.cpp +++ b/paddle/gserver/layers/TableProjection.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. 
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/TableProjection.h b/paddle/gserver/layers/TableProjection.h index 97c672508a..fb6c0e17c2 100644 --- a/paddle/gserver/layers/TableProjection.h +++ b/paddle/gserver/layers/TableProjection.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/TensorLayer.cpp b/paddle/gserver/layers/TensorLayer.cpp index 03586cc6ff..642eb1bdd3 100644 --- a/paddle/gserver/layers/TensorLayer.cpp +++ b/paddle/gserver/layers/TensorLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/TensorLayer.h b/paddle/gserver/layers/TensorLayer.h index 9ac651de4d..ac38ffb620 100644 --- a/paddle/gserver/layers/TensorLayer.h +++ b/paddle/gserver/layers/TensorLayer.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/TransLayer.cpp b/paddle/gserver/layers/TransLayer.cpp index 53a24d4cc4..5cbaaf8f08 100644 --- a/paddle/gserver/layers/TransLayer.cpp +++ b/paddle/gserver/layers/TransLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/TransLayer.h b/paddle/gserver/layers/TransLayer.h index 25b091f9f4..8189700759 100644 --- a/paddle/gserver/layers/TransLayer.h +++ b/paddle/gserver/layers/TransLayer.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/TransposedFullMatrixProjection.cpp b/paddle/gserver/layers/TransposedFullMatrixProjection.cpp index c883283f78..8282584ab4 100644 --- a/paddle/gserver/layers/TransposedFullMatrixProjection.cpp +++ b/paddle/gserver/layers/TransposedFullMatrixProjection.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/ValidationLayer.cpp b/paddle/gserver/layers/ValidationLayer.cpp index 0fee4bd246..f029ea4c51 100644 --- a/paddle/gserver/layers/ValidationLayer.cpp +++ b/paddle/gserver/layers/ValidationLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. 
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/ValidationLayer.h b/paddle/gserver/layers/ValidationLayer.h index eef9c80a7b..f9c61503aa 100644 --- a/paddle/gserver/layers/ValidationLayer.h +++ b/paddle/gserver/layers/ValidationLayer.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/WarpCTCLayer.cpp b/paddle/gserver/layers/WarpCTCLayer.cpp index e68363a1b2..23ca5257b6 100644 --- a/paddle/gserver/layers/WarpCTCLayer.cpp +++ b/paddle/gserver/layers/WarpCTCLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/layers/WarpCTCLayer.h b/paddle/gserver/layers/WarpCTCLayer.h index 1b0f5ba267..3d9ae9249a 100644 --- a/paddle/gserver/layers/WarpCTCLayer.h +++ b/paddle/gserver/layers/WarpCTCLayer.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/tests/LayerGradUtil.cpp b/paddle/gserver/tests/LayerGradUtil.cpp index 4757516917..dffc24936f 100644 --- a/paddle/gserver/tests/LayerGradUtil.cpp +++ b/paddle/gserver/tests/LayerGradUtil.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/tests/LayerGradUtil.h b/paddle/gserver/tests/LayerGradUtil.h index a061c7fc53..2b8f334f19 100644 --- a/paddle/gserver/tests/LayerGradUtil.h +++ b/paddle/gserver/tests/LayerGradUtil.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/tests/TestUtil.cpp b/paddle/gserver/tests/TestUtil.cpp index 84d516683c..dc00711697 100644 --- a/paddle/gserver/tests/TestUtil.cpp +++ b/paddle/gserver/tests/TestUtil.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/tests/TestUtil.h b/paddle/gserver/tests/TestUtil.h index 000f8884e8..ec86469aeb 100644 --- a/paddle/gserver/tests/TestUtil.h +++ b/paddle/gserver/tests/TestUtil.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
diff --git a/paddle/gserver/tests/__init__.py b/paddle/gserver/tests/__init__.py index c90af2ee00..f662d68263 100644 --- a/paddle/gserver/tests/__init__.py +++ b/paddle/gserver/tests/__init__.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016 Baidu, Inc. All Rights Reserved +# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/paddle/gserver/tests/concat_dotmul_a.conf b/paddle/gserver/tests/concat_dotmul_a.conf index 52340596b9..db02ca7e80 100644 --- a/paddle/gserver/tests/concat_dotmul_a.conf +++ b/paddle/gserver/tests/concat_dotmul_a.conf @@ -1,5 +1,5 @@ #edit-mode: -*- python -*- -# Copyright (c) 2016 Baidu, Inc. All Rights Reserved +# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/paddle/gserver/tests/concat_dotmul_b.conf b/paddle/gserver/tests/concat_dotmul_b.conf index 68859867bf..5e64970e44 100644 --- a/paddle/gserver/tests/concat_dotmul_b.conf +++ b/paddle/gserver/tests/concat_dotmul_b.conf @@ -1,5 +1,5 @@ #edit-mode: -*- python -*- -# Copyright (c) 2016 Baidu, Inc. All Rights Reserved +# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/paddle/gserver/tests/concat_fullmatrix_a.conf b/paddle/gserver/tests/concat_fullmatrix_a.conf index 35bafc58ac..940d1efc58 100644 --- a/paddle/gserver/tests/concat_fullmatrix_a.conf +++ b/paddle/gserver/tests/concat_fullmatrix_a.conf @@ -1,5 +1,5 @@ #edit-mode: -*- python -*- -# Copyright (c) 2016 Baidu, Inc. All Rights Reserved +# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/paddle/gserver/tests/concat_fullmatrix_b.conf b/paddle/gserver/tests/concat_fullmatrix_b.conf index 00a957d97d..931e5b38ef 100644 --- a/paddle/gserver/tests/concat_fullmatrix_b.conf +++ b/paddle/gserver/tests/concat_fullmatrix_b.conf @@ -1,5 +1,5 @@ #edit-mode: -*- python -*- -# Copyright (c) 2016 Baidu, Inc. All Rights Reserved +# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/paddle/gserver/tests/concat_table_a.conf b/paddle/gserver/tests/concat_table_a.conf index a8ff70f883..047cb44d15 100644 --- a/paddle/gserver/tests/concat_table_a.conf +++ b/paddle/gserver/tests/concat_table_a.conf @@ -1,5 +1,5 @@ #edit-mode: -*- python -*- -# Copyright (c) 2016 Baidu, Inc. All Rights Reserved +# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/paddle/gserver/tests/concat_table_b.conf b/paddle/gserver/tests/concat_table_b.conf index 95d7c10f7b..c666ab9942 100644 --- a/paddle/gserver/tests/concat_table_b.conf +++ b/paddle/gserver/tests/concat_table_b.conf @@ -1,5 +1,5 @@ #edit-mode: -*- python -*- -# Copyright (c) 2016 Baidu, Inc. All Rights Reserved +# Copyright (c) 2016 PaddlePaddle Authors. 
All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/paddle/gserver/tests/img_conv_a.conf b/paddle/gserver/tests/img_conv_a.conf index 20c89b875e..3ad15c64fe 100644 --- a/paddle/gserver/tests/img_conv_a.conf +++ b/paddle/gserver/tests/img_conv_a.conf @@ -1,5 +1,5 @@ #edit-mode: -*- python -*- -# Copyright (c) 2016 Baidu, Inc. All Rights Reserved +# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/paddle/gserver/tests/img_conv_b.conf b/paddle/gserver/tests/img_conv_b.conf index 19b99c77fd..e68008155e 100644 --- a/paddle/gserver/tests/img_conv_b.conf +++ b/paddle/gserver/tests/img_conv_b.conf @@ -1,5 +1,5 @@ #edit-mode: -*- python -*- -# Copyright (c) 2016 Baidu, Inc. All Rights Reserved +# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/paddle/gserver/tests/img_conv_c.conf b/paddle/gserver/tests/img_conv_c.conf index fea332f6d1..4598ffbdb2 100644 --- a/paddle/gserver/tests/img_conv_c.conf +++ b/paddle/gserver/tests/img_conv_c.conf @@ -1,5 +1,5 @@ #edit-mode: -*- python -*- -# Copyright (c) 2016 Baidu, Inc. All Rights Reserved +# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/paddle/gserver/tests/img_pool_a.conf b/paddle/gserver/tests/img_pool_a.conf index 9bd046b533..afd271055d 100644 --- a/paddle/gserver/tests/img_pool_a.conf +++ b/paddle/gserver/tests/img_pool_a.conf @@ -1,5 +1,5 @@ #edit-mode: -*- python -*- -# Copyright (c) 2016 Baidu, Inc. All Rights Reserved +# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/paddle/gserver/tests/img_pool_b.conf b/paddle/gserver/tests/img_pool_b.conf index 6ea9649b3f..e8deb9edbe 100644 --- a/paddle/gserver/tests/img_pool_b.conf +++ b/paddle/gserver/tests/img_pool_b.conf @@ -1,5 +1,5 @@ #edit-mode: -*- python -*- -# Copyright (c) 2016 Baidu, Inc. All Rights Reserved +# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/paddle/gserver/tests/pyDataProvider.py b/paddle/gserver/tests/pyDataProvider.py index 91863b4175..7235a23943 100644 --- a/paddle/gserver/tests/pyDataProvider.py +++ b/paddle/gserver/tests/pyDataProvider.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016 Baidu, Inc. All Rights Reserved +# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/paddle/gserver/tests/pyDataProvider/trainer.conf b/paddle/gserver/tests/pyDataProvider/trainer.conf index 7957814c01..7d910df20d 100644 --- a/paddle/gserver/tests/pyDataProvider/trainer.conf +++ b/paddle/gserver/tests/pyDataProvider/trainer.conf @@ -1,4 +1,4 @@ -# Copyright (c) 2016 Baidu, Inc. All Rights Reserved +# Copyright (c) 2016 PaddlePaddle Authors. 
All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/paddle/gserver/tests/rnn_data_provider.py b/paddle/gserver/tests/rnn_data_provider.py index 715ac08a42..3afd45c72f 100644 --- a/paddle/gserver/tests/rnn_data_provider.py +++ b/paddle/gserver/tests/rnn_data_provider.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016 Baidu, Inc. All Rights Reserved +# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/paddle/gserver/tests/sequenceGen.py b/paddle/gserver/tests/sequenceGen.py index 99440ada53..fd725727c0 100644 --- a/paddle/gserver/tests/sequenceGen.py +++ b/paddle/gserver/tests/sequenceGen.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016 Baidu, Inc. All Rights Reserved +# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/paddle/gserver/tests/sequence_layer_group.conf b/paddle/gserver/tests/sequence_layer_group.conf index 087aa96ccb..68d150d553 100644 --- a/paddle/gserver/tests/sequence_layer_group.conf +++ b/paddle/gserver/tests/sequence_layer_group.conf @@ -1,5 +1,5 @@ #!/usr/bin/env python -# Copyright (c) 2016 Baidu, Inc. All Rights Reserved +# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/paddle/gserver/tests/sequence_nest_layer_group.conf b/paddle/gserver/tests/sequence_nest_layer_group.conf index 93a0f6da79..88cb42798b 100644 --- a/paddle/gserver/tests/sequence_nest_layer_group.conf +++ b/paddle/gserver/tests/sequence_nest_layer_group.conf @@ -1,5 +1,5 @@ #!/usr/bin/env python -# Copyright (c) 2016 Baidu, Inc. All Rights Reserved +# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/paddle/gserver/tests/sequence_nest_rnn.conf b/paddle/gserver/tests/sequence_nest_rnn.conf index 524760be76..2873a59966 100644 --- a/paddle/gserver/tests/sequence_nest_rnn.conf +++ b/paddle/gserver/tests/sequence_nest_rnn.conf @@ -1,5 +1,5 @@ #edit-mode: -*- python -*- -# Copyright (c) 2016 Baidu, Inc. All Rights Reserved +# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/paddle/gserver/tests/sequence_nest_rnn_multi_input.conf b/paddle/gserver/tests/sequence_nest_rnn_multi_input.conf index 0614958b47..ad14a2c927 100644 --- a/paddle/gserver/tests/sequence_nest_rnn_multi_input.conf +++ b/paddle/gserver/tests/sequence_nest_rnn_multi_input.conf @@ -1,5 +1,5 @@ #edit-mode: -*- python -*- -# Copyright (c) 2016 Baidu, Inc. All Rights Reserved +# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
diff --git a/paddle/gserver/tests/sequence_nest_rnn_multi_unequalength_inputs.py b/paddle/gserver/tests/sequence_nest_rnn_multi_unequalength_inputs.py index 163fce956e..7303d08804 100644 --- a/paddle/gserver/tests/sequence_nest_rnn_multi_unequalength_inputs.py +++ b/paddle/gserver/tests/sequence_nest_rnn_multi_unequalength_inputs.py @@ -1,5 +1,5 @@ # edit-mode: -*- python -*- -# Copyright (c) 2016 Baidu, Inc. All Rights Reserved +# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/paddle/gserver/tests/sequence_rnn.conf b/paddle/gserver/tests/sequence_rnn.conf index 3294c2c3fc..1084edfe70 100644 --- a/paddle/gserver/tests/sequence_rnn.conf +++ b/paddle/gserver/tests/sequence_rnn.conf @@ -1,5 +1,5 @@ #edit-mode: -*- python -*- -# Copyright (c) 2016 Baidu, Inc. All Rights Reserved +# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/paddle/gserver/tests/sequence_rnn_multi_input.conf b/paddle/gserver/tests/sequence_rnn_multi_input.conf index 51881e21d9..40d0317415 100644 --- a/paddle/gserver/tests/sequence_rnn_multi_input.conf +++ b/paddle/gserver/tests/sequence_rnn_multi_input.conf @@ -1,5 +1,5 @@ #edit-mode: -*- python -*- -# Copyright (c) 2016 Baidu, Inc. All Rights Reserved +# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/paddle/gserver/tests/sequence_rnn_multi_unequalength_inputs.py b/paddle/gserver/tests/sequence_rnn_multi_unequalength_inputs.py index 4cf7035477..786a0c6d78 100644 --- a/paddle/gserver/tests/sequence_rnn_multi_unequalength_inputs.py +++ b/paddle/gserver/tests/sequence_rnn_multi_unequalength_inputs.py @@ -1,5 +1,5 @@ #edit-mode: -*- python -*- -# Copyright (c) 2016 Baidu, Inc. All Rights Reserved +# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/paddle/gserver/tests/test_ActivationGrad.cpp b/paddle/gserver/tests/test_ActivationGrad.cpp index e54c5109e7..0181d62519 100644 --- a/paddle/gserver/tests/test_ActivationGrad.cpp +++ b/paddle/gserver/tests/test_ActivationGrad.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/tests/test_BatchNorm.cpp b/paddle/gserver/tests/test_BatchNorm.cpp index 0cb6f58dc0..8575999aba 100644 --- a/paddle/gserver/tests/test_BatchNorm.cpp +++ b/paddle/gserver/tests/test_BatchNorm.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
diff --git a/paddle/gserver/tests/test_ConvTrans.cpp b/paddle/gserver/tests/test_ConvTrans.cpp index f3efdfb428..3af3f08f40 100644 --- a/paddle/gserver/tests/test_ConvTrans.cpp +++ b/paddle/gserver/tests/test_ConvTrans.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/tests/test_ConvUnify.cpp b/paddle/gserver/tests/test_ConvUnify.cpp index 5acf02bea0..d59acf96ac 100644 --- a/paddle/gserver/tests/test_ConvUnify.cpp +++ b/paddle/gserver/tests/test_ConvUnify.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/tests/test_Evaluator.cpp b/paddle/gserver/tests/test_Evaluator.cpp index be639ea093..2c20f3a52f 100644 --- a/paddle/gserver/tests/test_Evaluator.cpp +++ b/paddle/gserver/tests/test_Evaluator.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/tests/test_LayerGrad.cpp b/paddle/gserver/tests/test_LayerGrad.cpp index 099e96aa6c..7983d9fe64 100644 --- a/paddle/gserver/tests/test_LayerGrad.cpp +++ b/paddle/gserver/tests/test_LayerGrad.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/tests/test_LinearChainCRF.cpp b/paddle/gserver/tests/test_LinearChainCRF.cpp index 913d6ed751..330adee8f7 100644 --- a/paddle/gserver/tests/test_LinearChainCRF.cpp +++ b/paddle/gserver/tests/test_LinearChainCRF.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/tests/test_MultinomialSampler.cpp b/paddle/gserver/tests/test_MultinomialSampler.cpp index 3fc099adbd..fc164da8ea 100644 --- a/paddle/gserver/tests/test_MultinomialSampler.cpp +++ b/paddle/gserver/tests/test_MultinomialSampler.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/tests/test_NetworkCompare.cpp b/paddle/gserver/tests/test_NetworkCompare.cpp index 71ed3bc4b6..ff6b5ab0d0 100644 --- a/paddle/gserver/tests/test_NetworkCompare.cpp +++ b/paddle/gserver/tests/test_NetworkCompare.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
diff --git a/paddle/gserver/tests/test_ProtoDataProvider.cpp b/paddle/gserver/tests/test_ProtoDataProvider.cpp index 01070bc1cb..d5b8017cd1 100644 --- a/paddle/gserver/tests/test_ProtoDataProvider.cpp +++ b/paddle/gserver/tests/test_ProtoDataProvider.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/tests/test_PyDataProvider.cpp b/paddle/gserver/tests/test_PyDataProvider.cpp index 802f9aa4cb..0f264ecf91 100644 --- a/paddle/gserver/tests/test_PyDataProvider.cpp +++ b/paddle/gserver/tests/test_PyDataProvider.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/tests/test_PyDataProvider2.cpp b/paddle/gserver/tests/test_PyDataProvider2.cpp index 6674e6b87c..436318d356 100644 --- a/paddle/gserver/tests/test_PyDataProvider2.cpp +++ b/paddle/gserver/tests/test_PyDataProvider2.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/tests/test_PyDataProvider2.py b/paddle/gserver/tests/test_PyDataProvider2.py index bf23c52fd7..f7b540013e 100644 --- a/paddle/gserver/tests/test_PyDataProvider2.py +++ b/paddle/gserver/tests/test_PyDataProvider2.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016 Baidu, Inc. All Rights Reserved +# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/paddle/gserver/tests/test_RecurrentGradientMachine.cpp b/paddle/gserver/tests/test_RecurrentGradientMachine.cpp index 9d86067fb5..a351667d8b 100644 --- a/paddle/gserver/tests/test_RecurrentGradientMachine.cpp +++ b/paddle/gserver/tests/test_RecurrentGradientMachine.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/tests/test_RecurrentLayer.cpp b/paddle/gserver/tests/test_RecurrentLayer.cpp index 0643cec38b..3f26b710e9 100644 --- a/paddle/gserver/tests/test_RecurrentLayer.cpp +++ b/paddle/gserver/tests/test_RecurrentLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/tests/test_SelectiveFCLayer.cpp b/paddle/gserver/tests/test_SelectiveFCLayer.cpp index 204b03332f..c588f69446 100644 --- a/paddle/gserver/tests/test_SelectiveFCLayer.cpp +++ b/paddle/gserver/tests/test_SelectiveFCLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. 
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/gserver/tests/test_WarpCTCLayer.cpp b/paddle/gserver/tests/test_WarpCTCLayer.cpp index 2dd83db345..e526a27906 100644 --- a/paddle/gserver/tests/test_WarpCTCLayer.cpp +++ b/paddle/gserver/tests/test_WarpCTCLayer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/math/Allocator.h b/paddle/math/Allocator.h index cba8b37289..4d0a1506be 100644 --- a/paddle/math/Allocator.h +++ b/paddle/math/Allocator.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/math/BaseMatrix.cu b/paddle/math/BaseMatrix.cu index 05faeff2e4..0a0d92d1ae 100644 --- a/paddle/math/BaseMatrix.cu +++ b/paddle/math/BaseMatrix.cu @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/math/BaseMatrix.h b/paddle/math/BaseMatrix.h index f4576985b8..368557bb26 100644 --- a/paddle/math/BaseMatrix.h +++ b/paddle/math/BaseMatrix.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/math/CpuSparseMatrix.cpp b/paddle/math/CpuSparseMatrix.cpp index ad3f8e64ef..324c7ec0ca 100644 --- a/paddle/math/CpuSparseMatrix.cpp +++ b/paddle/math/CpuSparseMatrix.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/math/CpuSparseMatrix.h b/paddle/math/CpuSparseMatrix.h index 50f3c1569a..9676f8864f 100644 --- a/paddle/math/CpuSparseMatrix.h +++ b/paddle/math/CpuSparseMatrix.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/math/ExecViaCpu.h b/paddle/math/ExecViaCpu.h index 67fb6c0cda..1e03cc5f45 100644 --- a/paddle/math/ExecViaCpu.h +++ b/paddle/math/ExecViaCpu.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/math/MathFunctions.cpp b/paddle/math/MathFunctions.cpp index 1217163bee..037525b402 100644 --- a/paddle/math/MathFunctions.cpp +++ b/paddle/math/MathFunctions.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. 
All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/math/MathFunctions.h b/paddle/math/MathFunctions.h index 0741c45678..c8559eefd8 100644 --- a/paddle/math/MathFunctions.h +++ b/paddle/math/MathFunctions.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/math/MathUtils.cpp b/paddle/math/MathUtils.cpp index 878e0b8723..1fb7655c5a 100644 --- a/paddle/math/MathUtils.cpp +++ b/paddle/math/MathUtils.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/math/MathUtils.h b/paddle/math/MathUtils.h index 907116c002..f2b2980138 100644 --- a/paddle/math/MathUtils.h +++ b/paddle/math/MathUtils.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/math/Matrix.cpp b/paddle/math/Matrix.cpp index b70b47a5fc..c69e074a76 100644 --- a/paddle/math/Matrix.cpp +++ b/paddle/math/Matrix.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/math/Matrix.h b/paddle/math/Matrix.h index 5de78bb84c..395143a4b1 100644 --- a/paddle/math/Matrix.h +++ b/paddle/math/Matrix.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/math/MatrixBitCode.cpp b/paddle/math/MatrixBitCode.cpp index ac5b10c7bd..6390d4b6a5 100644 --- a/paddle/math/MatrixBitCode.cpp +++ b/paddle/math/MatrixBitCode.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/math/MemoryHandle.cpp b/paddle/math/MemoryHandle.cpp index 9101957fc6..4c4a827b23 100644 --- a/paddle/math/MemoryHandle.cpp +++ b/paddle/math/MemoryHandle.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/math/MemoryHandle.h b/paddle/math/MemoryHandle.h index f12635d5d4..0828d377c9 100644 --- a/paddle/math/MemoryHandle.h +++ b/paddle/math/MemoryHandle.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. 
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/math/PoolAllocator.cpp b/paddle/math/PoolAllocator.cpp index 2c150949dd..4282c7243a 100644 --- a/paddle/math/PoolAllocator.cpp +++ b/paddle/math/PoolAllocator.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/math/PoolAllocator.h b/paddle/math/PoolAllocator.h index 5d33b45312..1544cb2cfc 100644 --- a/paddle/math/PoolAllocator.h +++ b/paddle/math/PoolAllocator.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/math/SIMDFunctions.cpp b/paddle/math/SIMDFunctions.cpp index 1fb156f29b..95219debf5 100644 --- a/paddle/math/SIMDFunctions.cpp +++ b/paddle/math/SIMDFunctions.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/math/SIMDFunctions.h b/paddle/math/SIMDFunctions.h index ac82f10910..9b0a8719b2 100644 --- a/paddle/math/SIMDFunctions.h +++ b/paddle/math/SIMDFunctions.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/math/SparseMatrix.cpp b/paddle/math/SparseMatrix.cpp index 2b0bff9535..d2779cc9f5 100644 --- a/paddle/math/SparseMatrix.cpp +++ b/paddle/math/SparseMatrix.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/math/SparseMatrix.h b/paddle/math/SparseMatrix.h index 175ef54b85..f8d9ffc29f 100644 --- a/paddle/math/SparseMatrix.h +++ b/paddle/math/SparseMatrix.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/math/SparseRowMatrix.cpp b/paddle/math/SparseRowMatrix.cpp index 100827e321..3091743123 100644 --- a/paddle/math/SparseRowMatrix.cpp +++ b/paddle/math/SparseRowMatrix.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/math/SparseRowMatrix.h b/paddle/math/SparseRowMatrix.h index 56f113a361..2fee1b39fe 100644 --- a/paddle/math/SparseRowMatrix.h +++ b/paddle/math/SparseRowMatrix.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. 
All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/math/Storage.cpp b/paddle/math/Storage.cpp index 57ea5c9266..0170b4efb8 100644 --- a/paddle/math/Storage.cpp +++ b/paddle/math/Storage.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/math/Storage.h b/paddle/math/Storage.h index 725de247e6..3658320182 100644 --- a/paddle/math/Storage.h +++ b/paddle/math/Storage.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/math/TensorApply.h b/paddle/math/TensorApply.h index 8b2a9a7cd2..11c7acb441 100644 --- a/paddle/math/TensorApply.h +++ b/paddle/math/TensorApply.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/math/TensorAssign.h b/paddle/math/TensorAssign.h index 03f7048d2d..943fb5649e 100644 --- a/paddle/math/TensorAssign.h +++ b/paddle/math/TensorAssign.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/math/TensorEvaluate.h b/paddle/math/TensorEvaluate.h index 39981246f0..346ed7ab13 100644 --- a/paddle/math/TensorEvaluate.h +++ b/paddle/math/TensorEvaluate.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/math/TensorExpression.h b/paddle/math/TensorExpression.h index b28ea2be1d..7f28ad83bb 100644 --- a/paddle/math/TensorExpression.h +++ b/paddle/math/TensorExpression.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/math/TrainingAlgorithmOp.cu b/paddle/math/TrainingAlgorithmOp.cu index d8d9c793fb..72ff077270 100644 --- a/paddle/math/TrainingAlgorithmOp.cu +++ b/paddle/math/TrainingAlgorithmOp.cu @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/math/TrainingAlgorithmOp.h b/paddle/math/TrainingAlgorithmOp.h index 68eb98a93e..2dc56f69e5 100644 --- a/paddle/math/TrainingAlgorithmOp.h +++ b/paddle/math/TrainingAlgorithmOp.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. 
All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/math/Vector.cpp b/paddle/math/Vector.cpp index b2ade83138..484f4c9252 100644 --- a/paddle/math/Vector.cpp +++ b/paddle/math/Vector.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/math/Vector.h b/paddle/math/Vector.h index bcd8ff3fa3..535580ac37 100644 --- a/paddle/math/Vector.h +++ b/paddle/math/Vector.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/math/tests/OriginalOptimizerApi.h b/paddle/math/tests/OriginalOptimizerApi.h index fe4d1ae542..ddcdd6bb51 100644 --- a/paddle/math/tests/OriginalOptimizerApi.h +++ b/paddle/math/tests/OriginalOptimizerApi.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/math/tests/PerfUtils.h b/paddle/math/tests/PerfUtils.h index c32f4c634a..9c6a63ce6c 100644 --- a/paddle/math/tests/PerfUtils.h +++ b/paddle/math/tests/PerfUtils.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/math/tests/TensorCheck.h b/paddle/math/tests/TensorCheck.h index 956bcf61a4..5bc4a03067 100644 --- a/paddle/math/tests/TensorCheck.h +++ b/paddle/math/tests/TensorCheck.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/math/tests/TestUtils.h b/paddle/math/tests/TestUtils.h index 2edb07de01..5f9fab7245 100644 --- a/paddle/math/tests/TestUtils.h +++ b/paddle/math/tests/TestUtils.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/math/tests/test_Allocator.cpp b/paddle/math/tests/test_Allocator.cpp index 084322a1ca..440fcda0fe 100644 --- a/paddle/math/tests/test_Allocator.cpp +++ b/paddle/math/tests/test_Allocator.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
diff --git a/paddle/math/tests/test_BaseMatrix.cpp b/paddle/math/tests/test_BaseMatrix.cpp index f8c795a639..a4683918ca 100644 --- a/paddle/math/tests/test_BaseMatrix.cpp +++ b/paddle/math/tests/test_BaseMatrix.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/math/tests/test_CpuGpuVector.cpp b/paddle/math/tests/test_CpuGpuVector.cpp index 7b50b020cd..c671735875 100644 --- a/paddle/math/tests/test_CpuGpuVector.cpp +++ b/paddle/math/tests/test_CpuGpuVector.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/math/tests/test_ExecViaCpu.cpp b/paddle/math/tests/test_ExecViaCpu.cpp index b3eca19a72..b328ebf554 100644 --- a/paddle/math/tests/test_ExecViaCpu.cpp +++ b/paddle/math/tests/test_ExecViaCpu.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/math/tests/test_FPException.cpp b/paddle/math/tests/test_FPException.cpp index f996e0dadd..6aa5891bce 100644 --- a/paddle/math/tests/test_FPException.cpp +++ b/paddle/math/tests/test_FPException.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/math/tests/test_GpuProfiler.cpp b/paddle/math/tests/test_GpuProfiler.cpp index c3542b7834..e5fd6f4523 100644 --- a/paddle/math/tests/test_GpuProfiler.cpp +++ b/paddle/math/tests/test_GpuProfiler.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/math/tests/test_Matrix.cpp b/paddle/math/tests/test_Matrix.cpp index edc9d74103..adb5fbd9fa 100644 --- a/paddle/math/tests/test_Matrix.cpp +++ b/paddle/math/tests/test_Matrix.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/math/tests/test_SIMDFunctions.cpp b/paddle/math/tests/test_SIMDFunctions.cpp index 8405b96fc2..2c54121d99 100644 --- a/paddle/math/tests/test_SIMDFunctions.cpp +++ b/paddle/math/tests/test_SIMDFunctions.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
diff --git a/paddle/math/tests/test_SparseMatrix.cpp b/paddle/math/tests/test_SparseMatrix.cpp index 3788218aab..88b75b6d83 100644 --- a/paddle/math/tests/test_SparseMatrix.cpp +++ b/paddle/math/tests/test_SparseMatrix.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/math/tests/test_Tensor.cu b/paddle/math/tests/test_Tensor.cu index 8fa402055a..1859b9fc13 100644 --- a/paddle/math/tests/test_Tensor.cu +++ b/paddle/math/tests/test_Tensor.cu @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/math/tests/test_TrainingAlgorithm.cpp b/paddle/math/tests/test_TrainingAlgorithm.cpp index b40c8d9dae..93a930cc2f 100644 --- a/paddle/math/tests/test_TrainingAlgorithm.cpp +++ b/paddle/math/tests/test_TrainingAlgorithm.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/math/tests/test_batchTranspose.cpp b/paddle/math/tests/test_batchTranspose.cpp index a9596992b2..88631c62b8 100644 --- a/paddle/math/tests/test_batchTranspose.cpp +++ b/paddle/math/tests/test_batchTranspose.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/math/tests/test_lazyAssign.cu b/paddle/math/tests/test_lazyAssign.cu index 52dfdacffe..16541edb54 100644 --- a/paddle/math/tests/test_lazyAssign.cu +++ b/paddle/math/tests/test_lazyAssign.cu @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/math/tests/test_matrixCompare.cpp b/paddle/math/tests/test_matrixCompare.cpp index 0883066947..713792d82b 100644 --- a/paddle/math/tests/test_matrixCompare.cpp +++ b/paddle/math/tests/test_matrixCompare.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/math/tests/test_matrixUtil.h b/paddle/math/tests/test_matrixUtil.h index 5300e7168b..9aa74b1519 100644 --- a/paddle/math/tests/test_matrixUtil.h +++ b/paddle/math/tests/test_matrixUtil.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
diff --git a/paddle/math/tests/test_perturbation.cpp b/paddle/math/tests/test_perturbation.cpp index 837c2f47ba..eaf4dfea66 100644 --- a/paddle/math/tests/test_perturbation.cpp +++ b/paddle/math/tests/test_perturbation.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/math/tests/test_sparseMatrixCompare.cpp b/paddle/math/tests/test_sparseMatrixCompare.cpp index d7aa20eb98..eff2c502bb 100644 --- a/paddle/math/tests/test_sparseMatrixCompare.cpp +++ b/paddle/math/tests/test_sparseMatrixCompare.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/parameter/Argument.cpp b/paddle/parameter/Argument.cpp index 0f414b4463..b632a11bbd 100644 --- a/paddle/parameter/Argument.cpp +++ b/paddle/parameter/Argument.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/parameter/Argument.h b/paddle/parameter/Argument.h index 2b20122deb..69d57a28c0 100644 --- a/paddle/parameter/Argument.h +++ b/paddle/parameter/Argument.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/parameter/AverageOptimizer.cpp b/paddle/parameter/AverageOptimizer.cpp index 593594761e..e51ca56520 100644 --- a/paddle/parameter/AverageOptimizer.cpp +++ b/paddle/parameter/AverageOptimizer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/parameter/AverageOptimizer.h b/paddle/parameter/AverageOptimizer.h index ccc2612608..9fd3f75baa 100644 --- a/paddle/parameter/AverageOptimizer.h +++ b/paddle/parameter/AverageOptimizer.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/parameter/FirstOrderOptimizer.cpp b/paddle/parameter/FirstOrderOptimizer.cpp index 9e363fb20d..17268d3715 100644 --- a/paddle/parameter/FirstOrderOptimizer.cpp +++ b/paddle/parameter/FirstOrderOptimizer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
diff --git a/paddle/parameter/FirstOrderOptimizer.h b/paddle/parameter/FirstOrderOptimizer.h index a9a2ffdd41..095019b74f 100644 --- a/paddle/parameter/FirstOrderOptimizer.h +++ b/paddle/parameter/FirstOrderOptimizer.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/parameter/LearningRateScheduler.cpp b/paddle/parameter/LearningRateScheduler.cpp index a7412500cc..66448b2c5f 100644 --- a/paddle/parameter/LearningRateScheduler.cpp +++ b/paddle/parameter/LearningRateScheduler.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/parameter/LearningRateScheduler.h b/paddle/parameter/LearningRateScheduler.h index e987c3dcde..53b9dba446 100644 --- a/paddle/parameter/LearningRateScheduler.h +++ b/paddle/parameter/LearningRateScheduler.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/parameter/OptimizerFunctions.cpp b/paddle/parameter/OptimizerFunctions.cpp index 6fd7964347..a4af1b4705 100644 --- a/paddle/parameter/OptimizerFunctions.cpp +++ b/paddle/parameter/OptimizerFunctions.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/parameter/OptimizerFunctions.h b/paddle/parameter/OptimizerFunctions.h index a5f8b2c569..4f7370b6ba 100644 --- a/paddle/parameter/OptimizerFunctions.h +++ b/paddle/parameter/OptimizerFunctions.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/parameter/OptimizerWithRegularizer.cpp b/paddle/parameter/OptimizerWithRegularizer.cpp index 5381e7bef3..85f13c8bc0 100644 --- a/paddle/parameter/OptimizerWithRegularizer.cpp +++ b/paddle/parameter/OptimizerWithRegularizer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/parameter/OptimizerWithRegularizer.h b/paddle/parameter/OptimizerWithRegularizer.h index ebe23c7397..0e1c444d28 100644 --- a/paddle/parameter/OptimizerWithRegularizer.h +++ b/paddle/parameter/OptimizerWithRegularizer.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
diff --git a/paddle/parameter/ParallelParameter.cpp b/paddle/parameter/ParallelParameter.cpp index 99b20a59ca..b3182306a4 100644 --- a/paddle/parameter/ParallelParameter.cpp +++ b/paddle/parameter/ParallelParameter.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/parameter/ParallelParameter.h b/paddle/parameter/ParallelParameter.h index 2b65321fe2..b0fe82d3c4 100644 --- a/paddle/parameter/ParallelParameter.h +++ b/paddle/parameter/ParallelParameter.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/parameter/Parameter.cpp b/paddle/parameter/Parameter.cpp index 7e37bf225b..3b06650e0c 100644 --- a/paddle/parameter/Parameter.cpp +++ b/paddle/parameter/Parameter.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/parameter/Parameter.h b/paddle/parameter/Parameter.h index 1c159d669a..6b0600517a 100644 --- a/paddle/parameter/Parameter.h +++ b/paddle/parameter/Parameter.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/parameter/ParameterOptimizer.cpp b/paddle/parameter/ParameterOptimizer.cpp index 2a71d6aee4..7c8c6978e2 100644 --- a/paddle/parameter/ParameterOptimizer.cpp +++ b/paddle/parameter/ParameterOptimizer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/parameter/ParameterOptimizer.h b/paddle/parameter/ParameterOptimizer.h index 21a148333c..2bdc793d60 100644 --- a/paddle/parameter/ParameterOptimizer.h +++ b/paddle/parameter/ParameterOptimizer.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/parameter/ParameterUpdateFunctions.cpp b/paddle/parameter/ParameterUpdateFunctions.cpp index 510ec5bf48..c8af7105c7 100644 --- a/paddle/parameter/ParameterUpdateFunctions.cpp +++ b/paddle/parameter/ParameterUpdateFunctions.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
diff --git a/paddle/parameter/ParameterUpdateFunctions.h b/paddle/parameter/ParameterUpdateFunctions.h index 2d98030bd2..7374843d80 100644 --- a/paddle/parameter/ParameterUpdateFunctions.h +++ b/paddle/parameter/ParameterUpdateFunctions.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/parameter/ParameterUpdaterBase.cpp b/paddle/parameter/ParameterUpdaterBase.cpp index e706742053..b938270ce1 100644 --- a/paddle/parameter/ParameterUpdaterBase.cpp +++ b/paddle/parameter/ParameterUpdaterBase.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/parameter/ParameterUpdaterBase.h b/paddle/parameter/ParameterUpdaterBase.h index ffd2980261..5401046f67 100644 --- a/paddle/parameter/ParameterUpdaterBase.h +++ b/paddle/parameter/ParameterUpdaterBase.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/parameter/ParameterUpdaterHook.cpp b/paddle/parameter/ParameterUpdaterHook.cpp index 7d85a32c0c..466560c437 100644 --- a/paddle/parameter/ParameterUpdaterHook.cpp +++ b/paddle/parameter/ParameterUpdaterHook.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/parameter/ParameterUpdaterHook.h b/paddle/parameter/ParameterUpdaterHook.h index 553282bcaa..1f4506441d 100644 --- a/paddle/parameter/ParameterUpdaterHook.h +++ b/paddle/parameter/ParameterUpdaterHook.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/parameter/Regularizer.cpp b/paddle/parameter/Regularizer.cpp index a9bddc1596..4420ee0031 100644 --- a/paddle/parameter/Regularizer.cpp +++ b/paddle/parameter/Regularizer.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/parameter/Regularizer.h b/paddle/parameter/Regularizer.h index 5baaccc00d..6d54773098 100644 --- a/paddle/parameter/Regularizer.h +++ b/paddle/parameter/Regularizer.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
diff --git a/paddle/parameter/Weight.cpp b/paddle/parameter/Weight.cpp index c138010607..f366a2b53f 100644 --- a/paddle/parameter/Weight.cpp +++ b/paddle/parameter/Weight.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/parameter/Weight.h b/paddle/parameter/Weight.h index 531b571cbc..6e7a49154e 100644 --- a/paddle/parameter/Weight.h +++ b/paddle/parameter/Weight.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/parameter/tests/test_common.cpp b/paddle/parameter/tests/test_common.cpp index 1a64fe3352..4e4d0ccfa2 100644 --- a/paddle/parameter/tests/test_common.cpp +++ b/paddle/parameter/tests/test_common.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/pserver/BaseClient.cpp b/paddle/pserver/BaseClient.cpp index ff83970ab1..62fafc1891 100644 --- a/paddle/pserver/BaseClient.cpp +++ b/paddle/pserver/BaseClient.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/pserver/BaseClient.h b/paddle/pserver/BaseClient.h index 3a501172b7..5924f80684 100644 --- a/paddle/pserver/BaseClient.h +++ b/paddle/pserver/BaseClient.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/pserver/LightNetwork.cpp b/paddle/pserver/LightNetwork.cpp index 1830170a16..9a398d4f45 100644 --- a/paddle/pserver/LightNetwork.cpp +++ b/paddle/pserver/LightNetwork.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/pserver/LightNetwork.h b/paddle/pserver/LightNetwork.h index b7d7bc7902..7aff007a27 100644 --- a/paddle/pserver/LightNetwork.h +++ b/paddle/pserver/LightNetwork.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/paddle/pserver/ParameterClient2.cpp b/paddle/pserver/ParameterClient2.cpp index 28cc0ae2dd..31418822b3 100644 --- a/paddle/pserver/ParameterClient2.cpp +++ b/paddle/pserver/ParameterClient2.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. 
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/pserver/ParameterClient2.h b/paddle/pserver/ParameterClient2.h
index af8dd41ec4..0f180722e3 100644
--- a/paddle/pserver/ParameterClient2.h
+++ b/paddle/pserver/ParameterClient2.h
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/pserver/ParameterServer2.cpp b/paddle/pserver/ParameterServer2.cpp
index b7f999f8b1..ac70efc64f 100644
--- a/paddle/pserver/ParameterServer2.cpp
+++ b/paddle/pserver/ParameterServer2.cpp
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/pserver/ParameterServer2.h b/paddle/pserver/ParameterServer2.h
index ccaea42e7d..47122f3632 100644
--- a/paddle/pserver/ParameterServer2.h
+++ b/paddle/pserver/ParameterServer2.h
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/pserver/ParameterServer2Main.cpp b/paddle/pserver/ParameterServer2Main.cpp
index b15ef8c3cc..1ba9b48c23 100644
--- a/paddle/pserver/ParameterServer2Main.cpp
+++ b/paddle/pserver/ParameterServer2Main.cpp
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/pserver/ProtoServer.cpp b/paddle/pserver/ProtoServer.cpp
index 2f6d911a01..410317ece2 100644
--- a/paddle/pserver/ProtoServer.cpp
+++ b/paddle/pserver/ProtoServer.cpp
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/pserver/ProtoServer.h b/paddle/pserver/ProtoServer.h
index cf08e24ff3..97b7bf167d 100644
--- a/paddle/pserver/ProtoServer.h
+++ b/paddle/pserver/ProtoServer.h
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/pserver/RDMANetwork.h b/paddle/pserver/RDMANetwork.h
index 4e492a3afd..caef65134b 100644
--- a/paddle/pserver/RDMANetwork.h
+++ b/paddle/pserver/RDMANetwork.h
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/pserver/SocketChannel.cpp b/paddle/pserver/SocketChannel.cpp
index 4ebc47d326..f3e74257f6 100644
--- a/paddle/pserver/SocketChannel.cpp
+++ b/paddle/pserver/SocketChannel.cpp
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/pserver/SocketChannel.h b/paddle/pserver/SocketChannel.h
index 472b37a122..6c3dd20d7b 100644
--- a/paddle/pserver/SocketChannel.h
+++ b/paddle/pserver/SocketChannel.h
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/pserver/SparseParameterDistribution.cpp b/paddle/pserver/SparseParameterDistribution.cpp
index 2085b22a95..0068f85b52 100644
--- a/paddle/pserver/SparseParameterDistribution.cpp
+++ b/paddle/pserver/SparseParameterDistribution.cpp
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/pserver/SparseParameterDistribution.h b/paddle/pserver/SparseParameterDistribution.h
index af2b43af0f..dc63b065a7 100644
--- a/paddle/pserver/SparseParameterDistribution.h
+++ b/paddle/pserver/SparseParameterDistribution.h
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/pserver/test/SocketTest.cpp b/paddle/pserver/test/SocketTest.cpp
index 24c90f1078..528f5e381e 100644
--- a/paddle/pserver/test/SocketTest.cpp
+++ b/paddle/pserver/test/SocketTest.cpp
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/pserver/test/test_ParameterServer2.cpp b/paddle/pserver/test/test_ParameterServer2.cpp
index eb813e92d6..493b6d060c 100644
--- a/paddle/pserver/test/test_ParameterServer2.cpp
+++ b/paddle/pserver/test/test_ParameterServer2.cpp
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/pserver/test/test_ProtoServer.cpp b/paddle/pserver/test/test_ProtoServer.cpp
index 79d1f2743a..cfed0d30d3 100644
--- a/paddle/pserver/test/test_ProtoServer.cpp
+++ b/paddle/pserver/test/test_ProtoServer.cpp
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/pserver/test/test_ProtoServer.sh b/paddle/pserver/test/test_ProtoServer.sh
index a87b1b1ddc..970c90b494 100755
--- a/paddle/pserver/test/test_ProtoServer.sh
+++ b/paddle/pserver/test/test_ProtoServer.sh
@@ -1,4 +1,4 @@
-# Copyright (c) 2016 Baidu, Inc. All Rights Reserved
+# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
diff --git a/paddle/py_paddle/__init__.py b/paddle/py_paddle/__init__.py
index f8399f9c63..5504d1d50c 100644
--- a/paddle/py_paddle/__init__.py
+++ b/paddle/py_paddle/__init__.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2016 Baidu, Inc. All Rights Reserved
+# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
diff --git a/paddle/py_paddle/dataprovider_converter.py b/paddle/py_paddle/dataprovider_converter.py
index d64c7b20cb..edcefba6a8 100644
--- a/paddle/py_paddle/dataprovider_converter.py
+++ b/paddle/py_paddle/dataprovider_converter.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2016 Baidu, Inc. All Rights Reserved
+# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
diff --git a/paddle/py_paddle/util.py b/paddle/py_paddle/util.py
index 35a355ef29..d6bbf9a5a9 100644
--- a/paddle/py_paddle/util.py
+++ b/paddle/py_paddle/util.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2016 Baidu, Inc. All Rights Reserved
+# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
diff --git a/paddle/scripts/cluster_train/conf.py b/paddle/scripts/cluster_train/conf.py
index f1114a5920..c77d7584d3 100644
--- a/paddle/scripts/cluster_train/conf.py
+++ b/paddle/scripts/cluster_train/conf.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2016 Baidu, Inc. All Rights Reserved
+# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
diff --git a/paddle/scripts/cluster_train/paddle.py b/paddle/scripts/cluster_train/paddle.py
index 7343a600c1..9b03ed1d8f 100644
--- a/paddle/scripts/cluster_train/paddle.py
+++ b/paddle/scripts/cluster_train/paddle.py
@@ -1,5 +1,5 @@
 #!/usr/bin/python
-# Copyright (c) 2016 Baidu, Inc. All Rights Reserved
+# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
diff --git a/paddle/setup.py.in b/paddle/setup.py.in
index 1a15eafd55..b4c38a41b8 100644
--- a/paddle/setup.py.in
+++ b/paddle/setup.py.in
@@ -1,4 +1,4 @@
-# Copyright (c) 2016 Baidu, Inc. All Rights Reserved
+# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
diff --git a/paddle/trainer/MergeModel.cpp b/paddle/trainer/MergeModel.cpp
index 1d15c66d4d..8cb2873feb 100644
--- a/paddle/trainer/MergeModel.cpp
+++ b/paddle/trainer/MergeModel.cpp
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/trainer/ParamUtil.cpp b/paddle/trainer/ParamUtil.cpp
index 2be9cd6223..200417ebfc 100644
--- a/paddle/trainer/ParamUtil.cpp
+++ b/paddle/trainer/ParamUtil.cpp
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/trainer/ParamUtil.h b/paddle/trainer/ParamUtil.h
index 3923941c3d..8fa6fda75c 100644
--- a/paddle/trainer/ParamUtil.h
+++ b/paddle/trainer/ParamUtil.h
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/trainer/ParameterUpdater.cpp b/paddle/trainer/ParameterUpdater.cpp
index 6001a0b391..8b5b95da5b 100644
--- a/paddle/trainer/ParameterUpdater.cpp
+++ b/paddle/trainer/ParameterUpdater.cpp
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/trainer/ParameterUpdater.h b/paddle/trainer/ParameterUpdater.h
index b83b4cf55e..81ac374425 100644
--- a/paddle/trainer/ParameterUpdater.h
+++ b/paddle/trainer/ParameterUpdater.h
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/trainer/RemoteParameterUpdater.cpp b/paddle/trainer/RemoteParameterUpdater.cpp
index d83bb5b10a..702ea07f8a 100644
--- a/paddle/trainer/RemoteParameterUpdater.cpp
+++ b/paddle/trainer/RemoteParameterUpdater.cpp
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/trainer/RemoteParameterUpdater.h b/paddle/trainer/RemoteParameterUpdater.h
index a40884724c..46ce4be146 100644
--- a/paddle/trainer/RemoteParameterUpdater.h
+++ b/paddle/trainer/RemoteParameterUpdater.h
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/trainer/Tester.cpp b/paddle/trainer/Tester.cpp
index 6a5b7241a0..97d1b53934 100644
--- a/paddle/trainer/Tester.cpp
+++ b/paddle/trainer/Tester.cpp
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/trainer/Tester.h b/paddle/trainer/Tester.h
index a9de9fe208..ae7e0e93bf 100644
--- a/paddle/trainer/Tester.h
+++ b/paddle/trainer/Tester.h
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/trainer/TesterConfig.h b/paddle/trainer/TesterConfig.h
index f490e57344..9ff145a8a1 100644
--- a/paddle/trainer/TesterConfig.h
+++ b/paddle/trainer/TesterConfig.h
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/trainer/ThreadParameterUpdater.cpp b/paddle/trainer/ThreadParameterUpdater.cpp
index cc22851d8e..bee7f061fe 100644
--- a/paddle/trainer/ThreadParameterUpdater.cpp
+++ b/paddle/trainer/ThreadParameterUpdater.cpp
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/trainer/ThreadParameterUpdater.h b/paddle/trainer/ThreadParameterUpdater.h
index 5a5e3f1d4b..492692dbe5 100644
--- a/paddle/trainer/ThreadParameterUpdater.h
+++ b/paddle/trainer/ThreadParameterUpdater.h
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/trainer/Trainer.cpp b/paddle/trainer/Trainer.cpp
index 9c83c207ed..85610ec04e 100644
--- a/paddle/trainer/Trainer.cpp
+++ b/paddle/trainer/Trainer.cpp
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/trainer/Trainer.h b/paddle/trainer/Trainer.h
index 899607c7c0..f50b56143d 100644
--- a/paddle/trainer/Trainer.h
+++ b/paddle/trainer/Trainer.h
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/trainer/TrainerBenchmark.cpp b/paddle/trainer/TrainerBenchmark.cpp
index 54862e95b4..5c3177c808 100644
--- a/paddle/trainer/TrainerBenchmark.cpp
+++ b/paddle/trainer/TrainerBenchmark.cpp
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/trainer/TrainerConfigHelper.cpp b/paddle/trainer/TrainerConfigHelper.cpp
index ee5b1e0a9c..2017a08d20 100644
--- a/paddle/trainer/TrainerConfigHelper.cpp
+++ b/paddle/trainer/TrainerConfigHelper.cpp
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/trainer/TrainerConfigHelper.h b/paddle/trainer/TrainerConfigHelper.h
index d206849641..2c5c492ce8 100644
--- a/paddle/trainer/TrainerConfigHelper.h
+++ b/paddle/trainer/TrainerConfigHelper.h
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/trainer/TrainerInternal.cpp b/paddle/trainer/TrainerInternal.cpp
index b1c3bf26d2..1b49d4aa28 100644
--- a/paddle/trainer/TrainerInternal.cpp
+++ b/paddle/trainer/TrainerInternal.cpp
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/trainer/TrainerInternal.h b/paddle/trainer/TrainerInternal.h
index 962d53a30e..b67711a721 100644
--- a/paddle/trainer/TrainerInternal.h
+++ b/paddle/trainer/TrainerInternal.h
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/trainer/TrainerInternalConfig.cpp b/paddle/trainer/TrainerInternalConfig.cpp
index 0dc74cb3b3..a017cdec9d 100644
--- a/paddle/trainer/TrainerInternalConfig.cpp
+++ b/paddle/trainer/TrainerInternalConfig.cpp
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/trainer/TrainerInternalConfig.h b/paddle/trainer/TrainerInternalConfig.h
index b7bfd29abd..fd6fdf45e6 100644
--- a/paddle/trainer/TrainerInternalConfig.h
+++ b/paddle/trainer/TrainerInternalConfig.h
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/trainer/TrainerMain.cpp b/paddle/trainer/TrainerMain.cpp
index e23e745d99..7a18f9836c 100644
--- a/paddle/trainer/TrainerMain.cpp
+++ b/paddle/trainer/TrainerMain.cpp
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/trainer/tests/__init__.py b/paddle/trainer/tests/__init__.py
index c90af2ee00..f662d68263 100644
--- a/paddle/trainer/tests/__init__.py
+++ b/paddle/trainer/tests/__init__.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2016 Baidu, Inc. All Rights Reserved
+# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
diff --git a/paddle/trainer/tests/chunking.conf b/paddle/trainer/tests/chunking.conf
index 01c15fab5f..d88df919df 100644
--- a/paddle/trainer/tests/chunking.conf
+++ b/paddle/trainer/tests/chunking.conf
@@ -1,5 +1,5 @@
 #edit-mode: -*- python -*-
-# Copyright (c) 2016 Baidu, Inc. All Rights Reserved
+# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
diff --git a/paddle/trainer/tests/config_parser_test.py b/paddle/trainer/tests/config_parser_test.py
index c5ec315d6b..db66ebb5b7 100644
--- a/paddle/trainer/tests/config_parser_test.py
+++ b/paddle/trainer/tests/config_parser_test.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2016 Baidu, Inc. All Rights Reserved
+# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
diff --git a/paddle/trainer/tests/gen_proto_data.py b/paddle/trainer/tests/gen_proto_data.py
index a3dbc10c88..8cc6d44673 100644
--- a/paddle/trainer/tests/gen_proto_data.py
+++ b/paddle/trainer/tests/gen_proto_data.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2016 Baidu, Inc. All Rights Reserved
+# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
diff --git a/paddle/trainer/tests/sample_trainer_config.conf b/paddle/trainer/tests/sample_trainer_config.conf
index 15901065b2..2697832840 100644
--- a/paddle/trainer/tests/sample_trainer_config.conf
+++ b/paddle/trainer/tests/sample_trainer_config.conf
@@ -1,5 +1,5 @@
 #edit-mode: -*- python -*-
-# Copyright (c) 2016 Baidu, Inc. All Rights Reserved
+# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
diff --git a/paddle/trainer/tests/sample_trainer_config_hsigmoid.conf b/paddle/trainer/tests/sample_trainer_config_hsigmoid.conf
index 174cb5e25f..e4abe31d48 100644
--- a/paddle/trainer/tests/sample_trainer_config_hsigmoid.conf
+++ b/paddle/trainer/tests/sample_trainer_config_hsigmoid.conf
@@ -1,5 +1,5 @@
 #edit-mode: -*- python -*-
-# Copyright (c) 2016 Baidu, Inc. All Rights Reserved
+# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
diff --git a/paddle/trainer/tests/sample_trainer_config_opt_a.conf b/paddle/trainer/tests/sample_trainer_config_opt_a.conf
index f5b1988dda..b1744db8d6 100644
--- a/paddle/trainer/tests/sample_trainer_config_opt_a.conf
+++ b/paddle/trainer/tests/sample_trainer_config_opt_a.conf
@@ -1,4 +1,4 @@
-# Copyright (c) 2016 Baidu, Inc. All Rights Reserved
+# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
diff --git a/paddle/trainer/tests/sample_trainer_config_opt_b.conf b/paddle/trainer/tests/sample_trainer_config_opt_b.conf
index f5b1988dda..b1744db8d6 100644
--- a/paddle/trainer/tests/sample_trainer_config_opt_b.conf
+++ b/paddle/trainer/tests/sample_trainer_config_opt_b.conf
@@ -1,4 +1,4 @@
-# Copyright (c) 2016 Baidu, Inc. All Rights Reserved
+# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
diff --git a/paddle/trainer/tests/sample_trainer_config_parallel.conf b/paddle/trainer/tests/sample_trainer_config_parallel.conf
index e35a1f26da..e2b8b3ecda 100644
--- a/paddle/trainer/tests/sample_trainer_config_parallel.conf
+++ b/paddle/trainer/tests/sample_trainer_config_parallel.conf
@@ -1,5 +1,5 @@
 #edit-mode: -*- python -*-
-# Copyright (c) 2016 Baidu, Inc. All Rights Reserved
+# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
diff --git a/paddle/trainer/tests/sample_trainer_config_qb_rnn.conf b/paddle/trainer/tests/sample_trainer_config_qb_rnn.conf
index d254cc5700..d19222360c 100644
--- a/paddle/trainer/tests/sample_trainer_config_qb_rnn.conf
+++ b/paddle/trainer/tests/sample_trainer_config_qb_rnn.conf
@@ -1,5 +1,5 @@
 #edit-mode: -*- python -*-
-# Copyright (c) 2016 Baidu, Inc. All Rights Reserved
+# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
diff --git a/paddle/trainer/tests/sample_trainer_config_rnn.conf b/paddle/trainer/tests/sample_trainer_config_rnn.conf
index cbb6663029..b720d4d5a6 100644
--- a/paddle/trainer/tests/sample_trainer_config_rnn.conf
+++ b/paddle/trainer/tests/sample_trainer_config_rnn.conf
@@ -1,5 +1,5 @@
 #edit-mode: -*- python -*-
-# Copyright (c) 2016 Baidu, Inc. All Rights Reserved
+# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
diff --git a/paddle/trainer/tests/sample_trainer_nest_rnn_gen.conf b/paddle/trainer/tests/sample_trainer_nest_rnn_gen.conf
index 613fd325e1..d669fbc40c 100644
--- a/paddle/trainer/tests/sample_trainer_nest_rnn_gen.conf
+++ b/paddle/trainer/tests/sample_trainer_nest_rnn_gen.conf
@@ -1,5 +1,5 @@
 #edit-mode: -*- python -*-
-# Copyright (c) 2016 Baidu, Inc. All Rights Reserved
+# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
diff --git a/paddle/trainer/tests/sample_trainer_rnn_gen.conf b/paddle/trainer/tests/sample_trainer_rnn_gen.conf
index ec1c12cc89..2b337282f6 100644
--- a/paddle/trainer/tests/sample_trainer_rnn_gen.conf
+++ b/paddle/trainer/tests/sample_trainer_rnn_gen.conf
@@ -1,5 +1,5 @@
 #edit-mode: -*- python -*-
-# Copyright (c) 2016 Baidu, Inc. All Rights Reserved
+# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
diff --git a/paddle/trainer/tests/testPyDataWrapper.py b/paddle/trainer/tests/testPyDataWrapper.py
index 4607bec24e..2c29a27433 100644
--- a/paddle/trainer/tests/testPyDataWrapper.py
+++ b/paddle/trainer/tests/testPyDataWrapper.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2016 Baidu, Inc. All Rights Reserved
+# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
diff --git a/paddle/trainer/tests/test_Compare.cpp b/paddle/trainer/tests/test_Compare.cpp
index 03312f9e47..07a47b2990 100644
--- a/paddle/trainer/tests/test_Compare.cpp
+++ b/paddle/trainer/tests/test_Compare.cpp
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/trainer/tests/test_CompareSparse.cpp b/paddle/trainer/tests/test_CompareSparse.cpp
index a7c6862ce3..3fea3a3c24 100644
--- a/paddle/trainer/tests/test_CompareSparse.cpp
+++ b/paddle/trainer/tests/test_CompareSparse.cpp
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/trainer/tests/test_CompareTwoNets.cpp b/paddle/trainer/tests/test_CompareTwoNets.cpp
index 81320da6ac..7e5449dcba 100644
--- a/paddle/trainer/tests/test_CompareTwoNets.cpp
+++ b/paddle/trainer/tests/test_CompareTwoNets.cpp
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/trainer/tests/test_CompareTwoOpts.cpp b/paddle/trainer/tests/test_CompareTwoOpts.cpp
index a52f2fa7e7..4d051b537c 100644
--- a/paddle/trainer/tests/test_CompareTwoOpts.cpp
+++ b/paddle/trainer/tests/test_CompareTwoOpts.cpp
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/trainer/tests/test_Prediction.cpp b/paddle/trainer/tests/test_Prediction.cpp
index 6db33439b3..322121a579 100644
--- a/paddle/trainer/tests/test_Prediction.cpp
+++ b/paddle/trainer/tests/test_Prediction.cpp
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/trainer/tests/test_PyDataProviderWrapper.cpp b/paddle/trainer/tests/test_PyDataProviderWrapper.cpp
index e53291386c..5c5c6d5346 100644
--- a/paddle/trainer/tests/test_PyDataProviderWrapper.cpp
+++ b/paddle/trainer/tests/test_PyDataProviderWrapper.cpp
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/trainer/tests/test_Trainer.cpp b/paddle/trainer/tests/test_Trainer.cpp
index 900c05af85..0fede59f8d 100644
--- a/paddle/trainer/tests/test_Trainer.cpp
+++ b/paddle/trainer/tests/test_Trainer.cpp
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/trainer/tests/test_TrainerOnePass.cpp b/paddle/trainer/tests/test_TrainerOnePass.cpp
index da2954d166..1d9dce1b0e 100644
--- a/paddle/trainer/tests/test_TrainerOnePass.cpp
+++ b/paddle/trainer/tests/test_TrainerOnePass.cpp
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/trainer/tests/test_config.conf b/paddle/trainer/tests/test_config.conf
index 2a4548896f..d1bb9b877f 100644
--- a/paddle/trainer/tests/test_config.conf
+++ b/paddle/trainer/tests/test_config.conf
@@ -1,5 +1,5 @@
 #edit-mode: -*- python -*-
-# Copyright (c) 2016 Baidu, Inc. All Rights Reserved
+# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
diff --git a/paddle/trainer/tests/test_recurrent_machine_generation.cpp b/paddle/trainer/tests/test_recurrent_machine_generation.cpp
index 49e8a97ad0..b52acc2ca7 100644
--- a/paddle/trainer/tests/test_recurrent_machine_generation.cpp
+++ b/paddle/trainer/tests/test_recurrent_machine_generation.cpp
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/utils/BarrierStat.cpp b/paddle/utils/BarrierStat.cpp
index 82c5b84e59..5040deefd0 100644
--- a/paddle/utils/BarrierStat.cpp
+++ b/paddle/utils/BarrierStat.cpp
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/utils/BarrierStat.h b/paddle/utils/BarrierStat.h
index 661340ad27..3c5c0885d6 100644
--- a/paddle/utils/BarrierStat.h
+++ b/paddle/utils/BarrierStat.h
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/utils/ClassRegistrar.h b/paddle/utils/ClassRegistrar.h
index ee58ccb2ad..1ac27bafab 100644
--- a/paddle/utils/ClassRegistrar.h
+++ b/paddle/utils/ClassRegistrar.h
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/utils/CommandLineParser.cpp b/paddle/utils/CommandLineParser.cpp
index 307e304bb0..14f83241c5 100644
--- a/paddle/utils/CommandLineParser.cpp
+++ b/paddle/utils/CommandLineParser.cpp
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/utils/CommandLineParser.h b/paddle/utils/CommandLineParser.h
index c46567913e..3d25bc3b0b 100644
--- a/paddle/utils/CommandLineParser.h
+++ b/paddle/utils/CommandLineParser.h
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/utils/CompilerMacros.h b/paddle/utils/CompilerMacros.h
index 4236d750c4..e50093f7fc 100644
--- a/paddle/utils/CompilerMacros.h
+++ b/paddle/utils/CompilerMacros.h
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/utils/CustomStackTrace.cpp b/paddle/utils/CustomStackTrace.cpp
index 8740fe662e..730788cb98 100644
--- a/paddle/utils/CustomStackTrace.cpp
+++ b/paddle/utils/CustomStackTrace.cpp
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/utils/CustomStackTrace.h b/paddle/utils/CustomStackTrace.h
index 878e14eb5f..5686f3c84c 100644
--- a/paddle/utils/CustomStackTrace.h
+++ b/paddle/utils/CustomStackTrace.h
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/utils/DisableCopy.h b/paddle/utils/DisableCopy.h
index e991c07cdf..41de98bbde 100644
--- a/paddle/utils/DisableCopy.h
+++ b/paddle/utils/DisableCopy.h
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/utils/Excepts.cpp b/paddle/utils/Excepts.cpp
index b2fad3ac9d..4ddce35ed3 100644
--- a/paddle/utils/Excepts.cpp
+++ b/paddle/utils/Excepts.cpp
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/utils/Excepts.h b/paddle/utils/Excepts.h
index a84a2d33a6..dc3369b7e8 100644
--- a/paddle/utils/Excepts.h
+++ b/paddle/utils/Excepts.h
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/utils/Flags.cpp b/paddle/utils/Flags.cpp
index 6fae24e1b5..1c9e602f45 100644
--- a/paddle/utils/Flags.cpp
+++ b/paddle/utils/Flags.cpp
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/utils/Flags.h b/paddle/utils/Flags.h
index dda60c3f96..922533d63e 100644
--- a/paddle/utils/Flags.h
+++ b/paddle/utils/Flags.h
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/utils/GlobalConstants.cpp b/paddle/utils/GlobalConstants.cpp
index d769cd1ee7..9e8dade0b2 100644
--- a/paddle/utils/GlobalConstants.cpp
+++ b/paddle/utils/GlobalConstants.cpp
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/utils/GlobalConstants.h b/paddle/utils/GlobalConstants.h
index 4c74c17a50..707346f2c7 100644
--- a/paddle/utils/GlobalConstants.h
+++ b/paddle/utils/GlobalConstants.h
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/utils/Locks.h b/paddle/utils/Locks.h
index 5990e16570..0f922f3548 100644
--- a/paddle/utils/Locks.h
+++ b/paddle/utils/Locks.h
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/utils/Logging.cpp b/paddle/utils/Logging.cpp
index 14303bd4c7..3c31633e58 100644
--- a/paddle/utils/Logging.cpp
+++ b/paddle/utils/Logging.cpp
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/utils/Logging.h b/paddle/utils/Logging.h
index e9029b421f..c91ca9fecc 100644
--- a/paddle/utils/Logging.h
+++ b/paddle/utils/Logging.h
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/utils/PythonUtil.cpp b/paddle/utils/PythonUtil.cpp
index 7f17a82522..a9c6a20997 100644
--- a/paddle/utils/PythonUtil.cpp
+++ b/paddle/utils/PythonUtil.cpp
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/utils/PythonUtil.h b/paddle/utils/PythonUtil.h
index 65677d9010..2cbc2fdd37 100644
--- a/paddle/utils/PythonUtil.h
+++ b/paddle/utils/PythonUtil.h
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/utils/Queue.h b/paddle/utils/Queue.h
index 58d17e86c4..37748345a4 100644
--- a/paddle/utils/Queue.h
+++ b/paddle/utils/Queue.h
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/utils/Stat.cpp b/paddle/utils/Stat.cpp
index ab140c3350..01ea535cfd 100644
--- a/paddle/utils/Stat.cpp
+++ b/paddle/utils/Stat.cpp
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/utils/Stat.h b/paddle/utils/Stat.h
index 1ef688ea8d..9be79e8859 100644
--- a/paddle/utils/Stat.h
+++ b/paddle/utils/Stat.h
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/utils/StringUtil.cpp b/paddle/utils/StringUtil.cpp
index b416cda4af..0c98e6db34 100644
--- a/paddle/utils/StringUtil.cpp
+++ b/paddle/utils/StringUtil.cpp
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/utils/StringUtil.h b/paddle/utils/StringUtil.h
index 8b44dad192..8a63ca23b4 100644
--- a/paddle/utils/StringUtil.h
+++ b/paddle/utils/StringUtil.h
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/utils/Thread.h b/paddle/utils/Thread.h
index ade0ee496f..435dff2f66 100644
--- a/paddle/utils/Thread.h
+++ b/paddle/utils/Thread.h
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/utils/ThreadLocal.cpp b/paddle/utils/ThreadLocal.cpp
index 49d4b15265..c9b32784d9 100644
--- a/paddle/utils/ThreadLocal.cpp
+++ b/paddle/utils/ThreadLocal.cpp
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/utils/ThreadLocal.h b/paddle/utils/ThreadLocal.h
index 06c8b392af..b6e31bd05b 100644
--- a/paddle/utils/ThreadLocal.h
+++ b/paddle/utils/ThreadLocal.h
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/utils/TypeDefs.h b/paddle/utils/TypeDefs.h
index e8be779bea..c50a05e82d 100644
--- a/paddle/utils/TypeDefs.h
+++ b/paddle/utils/TypeDefs.h
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/utils/Util.cpp b/paddle/utils/Util.cpp
index bc727cfa74..f48726bff0 100644
--- a/paddle/utils/Util.cpp
+++ b/paddle/utils/Util.cpp
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/utils/Util.h b/paddle/utils/Util.h
index ed38f8fa60..ff67439da6 100644
--- a/paddle/utils/Util.h
+++ b/paddle/utils/Util.h
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/utils/Version.cpp b/paddle/utils/Version.cpp
index e706983918..88fb333f79 100644
--- a/paddle/utils/Version.cpp
+++ b/paddle/utils/Version.cpp
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/utils/Version.h b/paddle/utils/Version.h
index e6c799644e..ac04963c2c 100644
--- a/paddle/utils/Version.h
+++ b/paddle/utils/Version.h
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/utils/arch/linux/Locks.cpp b/paddle/utils/arch/linux/Locks.cpp
index 93016daeae..2a6f96e04d 100644
--- a/paddle/utils/arch/linux/Locks.cpp
+++ b/paddle/utils/arch/linux/Locks.cpp
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/utils/arch/osx/Locks.cpp b/paddle/utils/arch/osx/Locks.cpp
index ae563a6afd..8590226431 100644
--- a/paddle/utils/arch/osx/Locks.cpp
+++ b/paddle/utils/arch/osx/Locks.cpp
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/utils/tests/test_CommandLineParser.cpp b/paddle/utils/tests/test_CommandLineParser.cpp
index 5ecfb2b4f5..9a1d2391a8 100644
--- a/paddle/utils/tests/test_CommandLineParser.cpp
+++ b/paddle/utils/tests/test_CommandLineParser.cpp
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/utils/tests/test_CustomStackTrace.cpp b/paddle/utils/tests/test_CustomStackTrace.cpp
index 3bfb381ed9..512330b49e 100644
--- a/paddle/utils/tests/test_CustomStackTrace.cpp
+++ b/paddle/utils/tests/test_CustomStackTrace.cpp
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/utils/tests/test_CustomStackTracePrint.cpp b/paddle/utils/tests/test_CustomStackTracePrint.cpp
index d39a190961..60ba210b70 100644
--- a/paddle/utils/tests/test_CustomStackTracePrint.cpp
+++ b/paddle/utils/tests/test_CustomStackTracePrint.cpp
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/utils/tests/test_Logging.cpp b/paddle/utils/tests/test_Logging.cpp
index 9f477fab14..667864aa75 100644
--- a/paddle/utils/tests/test_Logging.cpp
+++ b/paddle/utils/tests/test_Logging.cpp
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/utils/tests/test_SpinLock.cpp b/paddle/utils/tests/test_SpinLock.cpp
index 77d281962c..9c7ad05b0b 100644
--- a/paddle/utils/tests/test_SpinLock.cpp
+++ b/paddle/utils/tests/test_SpinLock.cpp
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/utils/tests/test_StringUtils.cpp b/paddle/utils/tests/test_StringUtils.cpp
index 2c699b791f..fdc914d1bc 100644
--- a/paddle/utils/tests/test_StringUtils.cpp
+++ b/paddle/utils/tests/test_StringUtils.cpp
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/utils/tests/test_Thread.cpp b/paddle/utils/tests/test_Thread.cpp
index 154db5d9c6..b069be1d7a 100644
--- a/paddle/utils/tests/test_Thread.cpp
+++ b/paddle/utils/tests/test_Thread.cpp
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/paddle/utils/tests/test_ThreadBarrier.cpp b/paddle/utils/tests/test_ThreadBarrier.cpp
index 20b9babd94..997a393683 100644
--- a/paddle/utils/tests/test_ThreadBarrier.cpp
+++ b/paddle/utils/tests/test_ThreadBarrier.cpp
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/proto/DataConfig.proto.m4 b/proto/DataConfig.proto.m4
index 01d451ff7d..1f8e3f4f3e 100644
--- a/proto/DataConfig.proto.m4
+++ b/proto/DataConfig.proto.m4
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/proto/DataFormat.proto.m4 b/proto/DataFormat.proto.m4
index 8a4a0be1b3..54e9fd008e 100644
--- a/proto/DataFormat.proto.m4
+++ b/proto/DataFormat.proto.m4
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/proto/ModelConfig.proto.m4 b/proto/ModelConfig.proto.m4
index 4e8ed36f4e..ccad69a3c2 100644
--- a/proto/ModelConfig.proto.m4
+++ b/proto/ModelConfig.proto.m4
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/proto/ParameterConfig.proto.m4 b/proto/ParameterConfig.proto.m4
index 26e7c3ef77..b5c0fea6c3 100644
--- a/proto/ParameterConfig.proto.m4
+++ b/proto/ParameterConfig.proto.m4
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/proto/ParameterService.proto.m4 b/proto/ParameterService.proto.m4
index 0b3f14a2ee..25b0991583 100644
--- a/proto/ParameterService.proto.m4
+++ b/proto/ParameterService.proto.m4
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/proto/TrainerConfig.proto.m4 b/proto/TrainerConfig.proto.m4
index 965c9cd393..4684203b03 100644
--- a/proto/TrainerConfig.proto.m4
+++ b/proto/TrainerConfig.proto.m4
@@ -1,4 +1,4 @@
-/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve.
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
diff --git a/python/paddle/__init__.py b/python/paddle/__init__.py
index c90af2ee00..f662d68263 100644
--- a/python/paddle/__init__.py
+++ b/python/paddle/__init__.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2016 Baidu, Inc. All Rights Reserved
+# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
diff --git a/python/paddle/proto/__init__.py b/python/paddle/proto/__init__.py
index cd6a59ecbb..07406a841e 100644
--- a/python/paddle/proto/__init__.py
+++ b/python/paddle/proto/__init__.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2016 Baidu, Inc. All Rights Reserved
+# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
diff --git a/python/paddle/trainer/PyDataProvider2.py b/python/paddle/trainer/PyDataProvider2.py
index 0c577ec657..6e8cce1cce 100644
--- a/python/paddle/trainer/PyDataProvider2.py
+++ b/python/paddle/trainer/PyDataProvider2.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2016 Baidu, Inc. All Rights Reserved
+# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
diff --git a/python/paddle/trainer/PyDataProviderWrapper.py b/python/paddle/trainer/PyDataProviderWrapper.py
index 90b684a000..6af2507728 100644
--- a/python/paddle/trainer/PyDataProviderWrapper.py
+++ b/python/paddle/trainer/PyDataProviderWrapper.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2016 Baidu, Inc. All Rights Reserved
+# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
diff --git a/python/paddle/trainer/__init__.py b/python/paddle/trainer/__init__.py
index c90af2ee00..f662d68263 100644
--- a/python/paddle/trainer/__init__.py
+++ b/python/paddle/trainer/__init__.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2016 Baidu, Inc. All Rights Reserved
+# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
diff --git a/python/paddle/trainer/config_parser.py b/python/paddle/trainer/config_parser.py
index c6c0c9c151..fd7fb40822 100644
--- a/python/paddle/trainer/config_parser.py
+++ b/python/paddle/trainer/config_parser.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2016 Baidu, Inc. All Rights Reserved
+# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
diff --git a/python/paddle/trainer/config_parser_extension.py b/python/paddle/trainer/config_parser_extension.py
index ba4c79efdc..b9e0f3eb13 100644
--- a/python/paddle/trainer/config_parser_extension.py
+++ b/python/paddle/trainer/config_parser_extension.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2016 Baidu, Inc. All Rights Reserved
+# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
diff --git a/python/paddle/trainer/recurrent_units.py b/python/paddle/trainer/recurrent_units.py
index a80ad13d1e..edca279dca 100644
--- a/python/paddle/trainer/recurrent_units.py
+++ b/python/paddle/trainer/recurrent_units.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2016 Baidu, Inc. All Rights Reserved
+# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
diff --git a/python/paddle/trainer_config_helpers/__init__.py b/python/paddle/trainer_config_helpers/__init__.py
index adebebba25..3ac1454934 100644
--- a/python/paddle/trainer_config_helpers/__init__.py
+++ b/python/paddle/trainer_config_helpers/__init__.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2016 Baidu, Inc. All Rights Reserved
+# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
diff --git a/python/paddle/trainer_config_helpers/activations.py b/python/paddle/trainer_config_helpers/activations.py index eeed18a98a..06be3e4599 100644 --- a/python/paddle/trainer_config_helpers/activations.py +++ b/python/paddle/trainer_config_helpers/activations.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016 Baidu, Inc. All Rights Reserved +# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/python/paddle/trainer_config_helpers/attrs.py b/python/paddle/trainer_config_helpers/attrs.py index 54169f382f..59bb18bfca 100644 --- a/python/paddle/trainer_config_helpers/attrs.py +++ b/python/paddle/trainer_config_helpers/attrs.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016 Baidu, Inc. All Rights Reserved +# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/python/paddle/trainer_config_helpers/data_sources.py b/python/paddle/trainer_config_helpers/data_sources.py index b41097953d..b6ecd42857 100644 --- a/python/paddle/trainer_config_helpers/data_sources.py +++ b/python/paddle/trainer_config_helpers/data_sources.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016 Baidu, Inc. All Rights Reserved +# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/python/paddle/trainer_config_helpers/default_decorators.py b/python/paddle/trainer_config_helpers/default_decorators.py index c01050e338..1caad19349 100644 --- a/python/paddle/trainer_config_helpers/default_decorators.py +++ b/python/paddle/trainer_config_helpers/default_decorators.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016 Baidu, Inc. All Rights Reserved +# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/python/paddle/trainer_config_helpers/evaluators.py b/python/paddle/trainer_config_helpers/evaluators.py index dc6a36392f..0ee116d8c4 100644 --- a/python/paddle/trainer_config_helpers/evaluators.py +++ b/python/paddle/trainer_config_helpers/evaluators.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016 Baidu, Inc. All Rights Reserved +# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/python/paddle/trainer_config_helpers/layers.py b/python/paddle/trainer_config_helpers/layers.py index 4541b6fd8d..8dd6b7b7d2 100644 --- a/python/paddle/trainer_config_helpers/layers.py +++ b/python/paddle/trainer_config_helpers/layers.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016 Baidu, Inc. All Rights Reserved +# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/python/paddle/trainer_config_helpers/math.py b/python/paddle/trainer_config_helpers/math.py index 30a9b1c4e8..2d9e36f2b0 100644 --- a/python/paddle/trainer_config_helpers/math.py +++ b/python/paddle/trainer_config_helpers/math.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016 Baidu, Inc. 
All Rights Reserved +# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/python/paddle/trainer_config_helpers/networks.py b/python/paddle/trainer_config_helpers/networks.py index ff6d2e1cff..375bea34e8 100644 --- a/python/paddle/trainer_config_helpers/networks.py +++ b/python/paddle/trainer_config_helpers/networks.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016 Baidu, Inc. All Rights Reserved +# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/python/paddle/trainer_config_helpers/optimizers.py b/python/paddle/trainer_config_helpers/optimizers.py index 501fc3211b..d95b2cfe46 100644 --- a/python/paddle/trainer_config_helpers/optimizers.py +++ b/python/paddle/trainer_config_helpers/optimizers.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016 Baidu, Inc. All Rights Reserved +# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/python/paddle/trainer_config_helpers/poolings.py b/python/paddle/trainer_config_helpers/poolings.py index 6f13a76f25..0c38a8dce5 100644 --- a/python/paddle/trainer_config_helpers/poolings.py +++ b/python/paddle/trainer_config_helpers/poolings.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016 Baidu, Inc. All Rights Reserved +# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/python/paddle/trainer_config_helpers/tests/ProtobufEqualMain.cpp b/python/paddle/trainer_config_helpers/tests/ProtobufEqualMain.cpp index 06f7de9306..fc53422afd 100644 --- a/python/paddle/trainer_config_helpers/tests/ProtobufEqualMain.cpp +++ b/python/paddle/trainer_config_helpers/tests/ProtobufEqualMain.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/python/paddle/trainer_config_helpers/tests/layers_test.py b/python/paddle/trainer_config_helpers/tests/layers_test.py index 3b55667354..05902ea293 100644 --- a/python/paddle/trainer_config_helpers/tests/layers_test.py +++ b/python/paddle/trainer_config_helpers/tests/layers_test.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016 Baidu, Inc. All Rights Reserved +# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/python/paddle/trainer_config_helpers/tests/layers_test_config.py b/python/paddle/trainer_config_helpers/tests/layers_test_config.py index 44d134d1f7..ae275735aa 100644 --- a/python/paddle/trainer_config_helpers/tests/layers_test_config.py +++ b/python/paddle/trainer_config_helpers/tests/layers_test_config.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016 Baidu, Inc. All Rights Reserved +# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
diff --git a/python/paddle/trainer_config_helpers/utils.py b/python/paddle/trainer_config_helpers/utils.py index c0235b28cd..fe6e9cd53c 100644 --- a/python/paddle/trainer_config_helpers/utils.py +++ b/python/paddle/trainer_config_helpers/utils.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016 Baidu, Inc. All Rights Reserved +# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/python/paddle/utils/__init__.py b/python/paddle/utils/__init__.py index 3e93f41c2e..15595d2085 100644 --- a/python/paddle/utils/__init__.py +++ b/python/paddle/utils/__init__.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016 Baidu, Inc. All Rights Reserved +# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/python/paddle/utils/dump_config.py b/python/paddle/utils/dump_config.py index c5ce5c8d9a..73bf349c46 100644 --- a/python/paddle/utils/dump_config.py +++ b/python/paddle/utils/dump_config.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016 Baidu, Inc. All Rights Reserved +# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/python/paddle/utils/image_util.py b/python/paddle/utils/image_util.py index e6c6b04de0..d3d79b1440 100644 --- a/python/paddle/utils/image_util.py +++ b/python/paddle/utils/image_util.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016 Baidu, Inc. All Rights Reserved +# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/python/paddle/utils/make_model_diagram.py b/python/paddle/utils/make_model_diagram.py index 29e271717d..1370ea83a4 100644 --- a/python/paddle/utils/make_model_diagram.py +++ b/python/paddle/utils/make_model_diagram.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016 Baidu, Inc. All Rights Reserved +# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/python/paddle/utils/plotcurve.py b/python/paddle/utils/plotcurve.py index 7bc7c5f8d2..27bd8157d3 100644 --- a/python/paddle/utils/plotcurve.py +++ b/python/paddle/utils/plotcurve.py @@ -1,5 +1,5 @@ #!/usr/bin/python -# Copyright (c) 2016 Baidu, Inc. All Rights Reserved +# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/python/paddle/utils/predefined_net.py b/python/paddle/utils/predefined_net.py index e9033432ed..fa05f981f2 100644 --- a/python/paddle/utils/predefined_net.py +++ b/python/paddle/utils/predefined_net.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016 Baidu, Inc. All Rights Reserved +# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
diff --git a/python/paddle/utils/preprocess_img.py b/python/paddle/utils/preprocess_img.py index f3c609e4cd..975f1e9ede 100644 --- a/python/paddle/utils/preprocess_img.py +++ b/python/paddle/utils/preprocess_img.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016 Baidu, Inc. All Rights Reserved +# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/python/paddle/utils/preprocess_util.py b/python/paddle/utils/preprocess_util.py index e5067a80ea..1d17a48824 100644 --- a/python/paddle/utils/preprocess_util.py +++ b/python/paddle/utils/preprocess_util.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016 Baidu, Inc. All Rights Reserved +# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/python/paddle/utils/show_pb.py b/python/paddle/utils/show_pb.py index 3b371727b8..20614826d1 100644 --- a/python/paddle/utils/show_pb.py +++ b/python/paddle/utils/show_pb.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016 Baidu, Inc. All Rights Reserved +# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/python/paddle/utils/torch2paddle.py b/python/paddle/utils/torch2paddle.py index 958f55dbc4..91490111a1 100644 --- a/python/paddle/utils/torch2paddle.py +++ b/python/paddle/utils/torch2paddle.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016 Baidu, Inc. All Rights Reserved +# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
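Note on the relicensing hunks above: this is a mechanical, repo-wide substitution. In every touched header only the holder string changes, from `Copyright (c) 2016 Baidu, Inc.` to `Copyright (c) 2016 PaddlePaddle Authors.`, while the rest of the Apache 2.0 notice (including the existing "All Rights Reserve"/"All Rights Reserved" wording) is left exactly as it was. A change of this shape is normally generated by a small script rather than edited by hand; the sketch below is one illustrative way to produce it. It is hypothetical and not part of this patch series — the file-extension filter and the traversal root are assumptions, only the two holder strings are taken from the hunks above.

```python
import os

OLD = "Copyright (c) 2016 Baidu, Inc."
NEW = "Copyright (c) 2016 PaddlePaddle Authors."

def relicense(root, extensions=(".py", ".cpp", ".h", ".cu", ".cuh", ".ph", ".m4")):
    """Rewrite the copyright holder in every source header under `root`."""
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(extensions):
                continue
            path = os.path.join(dirpath, name)
            with open(path) as f:
                text = f.read()
            if OLD in text:
                # Only the holder string changes; the remainder of the
                # Apache notice is preserved byte-for-byte.
                with open(path, "w") as f:
                    f.write(text.replace(OLD, NEW))

if __name__ == "__main__":
    relicense(".")
```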
From 89f0072a38caec86ffe03e4437bc84434d7c34b7 Mon Sep 17 00:00:00 2001 From: Luo Tao Date: Fri, 9 Dec 2016 12:02:57 +0800 Subject: [PATCH 060/265] add file extensions in doc/ --- doc/about/{index.rst => index_en.rst} | 0 .../data_provider/{index.rst => index_en.rst} | 8 ------- ...taprovider2.rst => pydataprovider2_en.rst} | 0 doc/api/{index.rst => index_en.rst} | 7 +++--- .../{basic_usage.rst => index_en.rst} | 0 ...from_source.md => build_from_source_en.md} | 0 ...cker_install.rst => docker_install_en.rst} | 0 .../{index.rst => index_en.rst} | 8 +++---- ...untu_install.rst => ubuntu_install_en.rst} | 0 doc/getstarted/index.rst | 8 ------- doc/getstarted/index_en.rst | 8 +++++++ .../{cluster_train.md => cluster_train_en.md} | 0 .../{arguments.md => arguments_en.md} | 0 ...roduction.md => detail_introduction_en.md} | 0 doc/howto/cmd_parameter/index.md | 5 ---- doc/howto/cmd_parameter/index_en.md | 5 ++++ .../{use_case.md => use_case_en.md} | 0 ...o_paddle.md => contribute_to_paddle_en.md} | 0 .../deep_model/{index.rst => index_en.rst} | 2 +- .../deep_model/rnn/{rnn.rst => rnn_en.rst} | 0 doc/howto/{index.rst => index_en.rst} | 12 +++++----- .../new_layer/{index.rst => index_en.rst} | 0 ...gpu_profiling.rst => gpu_profiling_en.rst} | 0 .../optimization/{index.rst => index_en.rst} | 2 +- doc/index.rst | 11 ++++----- .../embedding_model/{index.md => index_en.md} | 0 doc/tutorials/image_classification/index.rst | 10 -------- .../{image_classification.md => index_en.md} | 0 .../{resnet_model.md => resnet_model_en.md} | 0 doc/tutorials/index.md | 21 ----------------- doc/tutorials/index_en.md | 23 +++++++++++++++++++ .../rec/{ml_dataset.md => ml_dataset_en.md} | 0 ...ml_regression.rst => ml_regression_en.rst} | 0 .../semantic_role_labeling/index.rst | 7 ------ ...{semantic_role_labeling.md => index_en.md} | 0 doc/tutorials/sentiment_analysis/index.rst | 9 -------- .../{sentiment_analysis.md => index_en.md} | 0 doc/tutorials/text_generation/index.rst | 9 -------- .../{text_generation.md => index_en.md} | 0 39 files changed, 55 insertions(+), 100 deletions(-) rename doc/about/{index.rst => index_en.rst} (100%) rename doc/api/data_provider/{index.rst => index_en.rst} (89%) rename doc/api/data_provider/{pydataprovider2.rst => pydataprovider2_en.rst} (100%) rename doc/api/{index.rst => index_en.rst} (80%) rename doc/getstarted/basic_usage/{basic_usage.rst => index_en.rst} (100%) rename doc/getstarted/build_and_install/{build_from_source.md => build_from_source_en.md} (100%) rename doc/getstarted/build_and_install/{docker_install.rst => docker_install_en.rst} (100%) rename doc/getstarted/build_and_install/{index.rst => index_en.rst} (79%) rename doc/getstarted/build_and_install/{ubuntu_install.rst => ubuntu_install_en.rst} (100%) delete mode 100644 doc/getstarted/index.rst create mode 100644 doc/getstarted/index_en.rst rename doc/howto/cluster/{cluster_train.md => cluster_train_en.md} (100%) rename doc/howto/cmd_parameter/{arguments.md => arguments_en.md} (100%) rename doc/howto/cmd_parameter/{detail_introduction.md => detail_introduction_en.md} (100%) delete mode 100644 doc/howto/cmd_parameter/index.md create mode 100644 doc/howto/cmd_parameter/index_en.md rename doc/howto/cmd_parameter/{use_case.md => use_case_en.md} (100%) rename doc/howto/{contribute_to_paddle.md => contribute_to_paddle_en.md} (100%) rename doc/howto/deep_model/{index.rst => index_en.rst} (83%) rename doc/howto/deep_model/rnn/{rnn.rst => rnn_en.rst} (100%) rename doc/howto/{index.rst => index_en.rst} (51%) rename 
doc/howto/new_layer/{index.rst => index_en.rst} (100%) rename doc/howto/optimization/{gpu_profiling.rst => gpu_profiling_en.rst} (100%) rename doc/howto/optimization/{index.rst => index_en.rst} (78%) rename doc/tutorials/embedding_model/{index.md => index_en.md} (100%) delete mode 100644 doc/tutorials/image_classification/index.rst rename doc/tutorials/image_classification/{image_classification.md => index_en.md} (100%) rename doc/tutorials/imagenet_model/{resnet_model.md => resnet_model_en.md} (100%) delete mode 100644 doc/tutorials/index.md create mode 100644 doc/tutorials/index_en.md rename doc/tutorials/rec/{ml_dataset.md => ml_dataset_en.md} (100%) rename doc/tutorials/rec/{ml_regression.rst => ml_regression_en.rst} (100%) delete mode 100644 doc/tutorials/semantic_role_labeling/index.rst rename doc/tutorials/semantic_role_labeling/{semantic_role_labeling.md => index_en.md} (100%) delete mode 100644 doc/tutorials/sentiment_analysis/index.rst rename doc/tutorials/sentiment_analysis/{sentiment_analysis.md => index_en.md} (100%) delete mode 100644 doc/tutorials/text_generation/index.rst rename doc/tutorials/text_generation/{text_generation.md => index_en.md} (100%) diff --git a/doc/about/index.rst b/doc/about/index_en.rst similarity index 100% rename from doc/about/index.rst rename to doc/about/index_en.rst diff --git a/doc/api/data_provider/index.rst b/doc/api/data_provider/index_en.rst similarity index 89% rename from doc/api/data_provider/index.rst rename to doc/api/data_provider/index_en.rst index 5e7a49d632..96efbb1da9 100644 --- a/doc/api/data_provider/index.rst +++ b/doc/api/data_provider/index_en.rst @@ -32,11 +32,3 @@ Each line of train.list and test.list is an absolute or relative path (relative to the PaddePaddle program runtime) of data file. Fascinatingly more, each line can also be a HDFS file path or a SQL connection string. As long as the user assures how to access each file in DataProvider. - -Please refer to the following articles for more information about the detail -usages of DataProvider and how to implement a new DataProvider, - -.. toctree:: - - pydataprovider2.rst - write_new_dataprovider.rst diff --git a/doc/api/data_provider/pydataprovider2.rst b/doc/api/data_provider/pydataprovider2_en.rst similarity index 100% rename from doc/api/data_provider/pydataprovider2.rst rename to doc/api/data_provider/pydataprovider2_en.rst diff --git a/doc/api/index.rst b/doc/api/index_en.rst similarity index 80% rename from doc/api/index.rst rename to doc/api/index_en.rst index ccee7a0f1f..9930f93e10 100644 --- a/doc/api/index.rst +++ b/doc/api/index_en.rst @@ -7,8 +7,8 @@ DataProvider API .. toctree:: :maxdepth: 1 - data_provider/index.rst - data_provider/pydataprovider2.rst + data_provider/index_en.rst + data_provider/pydataprovider2_en.rst Model Config API ---------------- @@ -16,7 +16,6 @@ Model Config API .. toctree:: :maxdepth: 1 - trainer_config_helpers/index.rst trainer_config_helpers/optimizers.rst trainer_config_helpers/data_sources.rst trainer_config_helpers/layers.rst @@ -33,4 +32,4 @@ Applications API .. 
toctree:: :maxdepth: 1 - predict/swig_py_paddle_en.rst \ No newline at end of file + predict/swig_py_paddle_en.rst diff --git a/doc/getstarted/basic_usage/basic_usage.rst b/doc/getstarted/basic_usage/index_en.rst similarity index 100% rename from doc/getstarted/basic_usage/basic_usage.rst rename to doc/getstarted/basic_usage/index_en.rst diff --git a/doc/getstarted/build_and_install/build_from_source.md b/doc/getstarted/build_and_install/build_from_source_en.md similarity index 100% rename from doc/getstarted/build_and_install/build_from_source.md rename to doc/getstarted/build_and_install/build_from_source_en.md diff --git a/doc/getstarted/build_and_install/docker_install.rst b/doc/getstarted/build_and_install/docker_install_en.rst similarity index 100% rename from doc/getstarted/build_and_install/docker_install.rst rename to doc/getstarted/build_and_install/docker_install_en.rst diff --git a/doc/getstarted/build_and_install/index.rst b/doc/getstarted/build_and_install/index_en.rst similarity index 79% rename from doc/getstarted/build_and_install/index.rst rename to doc/getstarted/build_and_install/index_en.rst index 6187be9d72..1bfd4f75c0 100644 --- a/doc/getstarted/build_and_install/index.rst +++ b/doc/getstarted/build_and_install/index_en.rst @@ -6,10 +6,9 @@ Install PaddlePaddle .. toctree:: :maxdepth: 1 - :glob: - docker_install.rst - ubuntu_install.rst + docker_install_en.rst + ubuntu_install_en.rst Build from Source ----------------- @@ -20,6 +19,5 @@ Build from Source .. toctree:: :maxdepth: 1 - :glob: - build_from_source.md \ No newline at end of file + build_from_source_en.md diff --git a/doc/getstarted/build_and_install/ubuntu_install.rst b/doc/getstarted/build_and_install/ubuntu_install_en.rst similarity index 100% rename from doc/getstarted/build_and_install/ubuntu_install.rst rename to doc/getstarted/build_and_install/ubuntu_install_en.rst diff --git a/doc/getstarted/index.rst b/doc/getstarted/index.rst deleted file mode 100644 index 5f2787066e..0000000000 --- a/doc/getstarted/index.rst +++ /dev/null @@ -1,8 +0,0 @@ -GET STARTED -============ - -.. toctree:: - :maxdepth: 2 - - build_and_install/index.rst - basic_usage/basic_usage.rst diff --git a/doc/getstarted/index_en.rst b/doc/getstarted/index_en.rst new file mode 100644 index 0000000000..55d95d8015 --- /dev/null +++ b/doc/getstarted/index_en.rst @@ -0,0 +1,8 @@ +GET STARTED +============ + +.. 
toctree:: + :maxdepth: 2 + + build_and_install/index_en.rst + basic_usage/index_en.rst diff --git a/doc/howto/cluster/cluster_train.md b/doc/howto/cluster/cluster_train_en.md similarity index 100% rename from doc/howto/cluster/cluster_train.md rename to doc/howto/cluster/cluster_train_en.md diff --git a/doc/howto/cmd_parameter/arguments.md b/doc/howto/cmd_parameter/arguments_en.md similarity index 100% rename from doc/howto/cmd_parameter/arguments.md rename to doc/howto/cmd_parameter/arguments_en.md diff --git a/doc/howto/cmd_parameter/detail_introduction.md b/doc/howto/cmd_parameter/detail_introduction_en.md similarity index 100% rename from doc/howto/cmd_parameter/detail_introduction.md rename to doc/howto/cmd_parameter/detail_introduction_en.md diff --git a/doc/howto/cmd_parameter/index.md b/doc/howto/cmd_parameter/index.md deleted file mode 100644 index 48cf835de1..0000000000 --- a/doc/howto/cmd_parameter/index.md +++ /dev/null @@ -1,5 +0,0 @@ -# How to Set Command-line Parameters - -* [Use Case](use_case.md) -* [Arguments](arguments.md) -* [Detailed Descriptions](detail_introduction.md) diff --git a/doc/howto/cmd_parameter/index_en.md b/doc/howto/cmd_parameter/index_en.md new file mode 100644 index 0000000000..bd16affdd8 --- /dev/null +++ b/doc/howto/cmd_parameter/index_en.md @@ -0,0 +1,5 @@ +# How to Set Command-line Parameters + +* [Use Case](use_case_en.md) +* [Arguments](arguments_en.md) +* [Detailed Descriptions](detail_introduction_en.md) diff --git a/doc/howto/cmd_parameter/use_case.md b/doc/howto/cmd_parameter/use_case_en.md similarity index 100% rename from doc/howto/cmd_parameter/use_case.md rename to doc/howto/cmd_parameter/use_case_en.md diff --git a/doc/howto/contribute_to_paddle.md b/doc/howto/contribute_to_paddle_en.md similarity index 100% rename from doc/howto/contribute_to_paddle.md rename to doc/howto/contribute_to_paddle_en.md diff --git a/doc/howto/deep_model/index.rst b/doc/howto/deep_model/index_en.rst similarity index 83% rename from doc/howto/deep_model/index.rst rename to doc/howto/deep_model/index_en.rst index 06ef443f62..00a45641e6 100644 --- a/doc/howto/deep_model/index.rst +++ b/doc/howto/deep_model/index_en.rst @@ -4,4 +4,4 @@ How to Configure Deep Models .. toctree:: :maxdepth: 1 - rnn/rnn.rst + rnn/rnn_en.rst diff --git a/doc/howto/deep_model/rnn/rnn.rst b/doc/howto/deep_model/rnn/rnn_en.rst similarity index 100% rename from doc/howto/deep_model/rnn/rnn.rst rename to doc/howto/deep_model/rnn/rnn_en.rst diff --git a/doc/howto/index.rst b/doc/howto/index_en.rst similarity index 51% rename from doc/howto/index.rst rename to doc/howto/index_en.rst index 41877a64a5..bd64c5b1fb 100644 --- a/doc/howto/index.rst +++ b/doc/howto/index_en.rst @@ -7,9 +7,9 @@ Usage .. toctree:: :maxdepth: 1 - cmd_parameter/index.md - deep_model/index.rst - cluster/cluster_train.md + cmd_parameter/index_en.md + deep_model/index_en.rst + cluster/cluster_train_en.md Development ------------ @@ -17,8 +17,8 @@ Development .. toctree:: :maxdepth: 1 - new_layer/index.rst - contribute_to_paddle.md + new_layer/index_en.rst + contribute_to_paddle_en.md Optimization ------------- @@ -26,4 +26,4 @@ Optimization .. 
toctree:: :maxdepth: 1 - optimization/index.rst + optimization/index_en.rst diff --git a/doc/howto/new_layer/index.rst b/doc/howto/new_layer/index_en.rst similarity index 100% rename from doc/howto/new_layer/index.rst rename to doc/howto/new_layer/index_en.rst diff --git a/doc/howto/optimization/gpu_profiling.rst b/doc/howto/optimization/gpu_profiling_en.rst similarity index 100% rename from doc/howto/optimization/gpu_profiling.rst rename to doc/howto/optimization/gpu_profiling_en.rst diff --git a/doc/howto/optimization/index.rst b/doc/howto/optimization/index_en.rst similarity index 78% rename from doc/howto/optimization/index.rst rename to doc/howto/optimization/index_en.rst index e2822a0098..1e2f16b5da 100644 --- a/doc/howto/optimization/index.rst +++ b/doc/howto/optimization/index_en.rst @@ -4,4 +4,4 @@ How to Tune GPU Performance .. toctree:: :maxdepth: 3 - gpu_profiling.rst + gpu_profiling_en.rst diff --git a/doc/index.rst b/doc/index.rst index 3555da1dfc..c107239438 100644 --- a/doc/index.rst +++ b/doc/index.rst @@ -4,9 +4,8 @@ PaddlePaddle Documentation .. toctree:: :maxdepth: 1 - getstarted/index.rst - tutorials/index.md - howto/index.rst - api/index.rst - about/index.rst - \ No newline at end of file + getstarted/index_en.rst + tutorials/index_en.md + howto/index_en.rst + api/index_en.rst + about/index_en.rst diff --git a/doc/tutorials/embedding_model/index.md b/doc/tutorials/embedding_model/index_en.md similarity index 100% rename from doc/tutorials/embedding_model/index.md rename to doc/tutorials/embedding_model/index_en.md diff --git a/doc/tutorials/image_classification/index.rst b/doc/tutorials/image_classification/index.rst deleted file mode 100644 index 1ea68f1416..0000000000 --- a/doc/tutorials/image_classification/index.rst +++ /dev/null @@ -1,10 +0,0 @@ -Image Classification Tutorial -============================= - -.. toctree:: - :maxdepth: 3 - :glob: - - Training Locally - cluster_train/internal/cluster_train.md - cluster_train/opensource/cluster_train.md diff --git a/doc/tutorials/image_classification/image_classification.md b/doc/tutorials/image_classification/index_en.md similarity index 100% rename from doc/tutorials/image_classification/image_classification.md rename to doc/tutorials/image_classification/index_en.md diff --git a/doc/tutorials/imagenet_model/resnet_model.md b/doc/tutorials/imagenet_model/resnet_model_en.md similarity index 100% rename from doc/tutorials/imagenet_model/resnet_model.md rename to doc/tutorials/imagenet_model/resnet_model_en.md diff --git a/doc/tutorials/index.md b/doc/tutorials/index.md deleted file mode 100644 index ebf5397391..0000000000 --- a/doc/tutorials/index.md +++ /dev/null @@ -1,21 +0,0 @@ -# TUTORIALS -There are serveral examples and demos here. - -## Image - -* [Image Classification](image_classification/index.rst) - -## NLP - -* [Sentiment Analysis](sentiment_analysis/index.rst) -* [Text Generation](text_generation/index.rst) -* [Semantic Role Labeling](semantic_role_labeling/index.rst) - -## Recommendation - -* [MovieLens Dataset](rec/ml_dataset.md) -* [MovieLens Regression](rec/ml_regression.rst) - -## Model Zoo -* [ImageNet: ResNet](imagenet_model/resnet_model.md) -* [Embedding: Chinese Word](embedding_model/index.md) diff --git a/doc/tutorials/index_en.md b/doc/tutorials/index_en.md new file mode 100644 index 0000000000..97de356665 --- /dev/null +++ b/doc/tutorials/index_en.md @@ -0,0 +1,23 @@ +# TUTORIALS +There are serveral examples and demos here. 
+ +## [Quick Start](quick_start/index_en.md) + +## Image + +* [Image Classification](image_classification/index_en.md) + +## NLP + +* [Sentiment Analysis](sentiment_analysis/index_en.md) +* [Text Generation](text_generation/index_en.md) +* [Semantic Role Labeling](semantic_role_labeling/index_en.md) + +## Recommendation + +* [MovieLens Dataset](rec/ml_dataset_en.md) +* [MovieLens Regression](rec/ml_regression_en.rst) + +## Model Zoo +* [ImageNet: ResNet](imagenet_model/resnet_model_en.md) +* [Embedding: Chinese Word](embedding_model/index_en.md) diff --git a/doc/tutorials/rec/ml_dataset.md b/doc/tutorials/rec/ml_dataset_en.md similarity index 100% rename from doc/tutorials/rec/ml_dataset.md rename to doc/tutorials/rec/ml_dataset_en.md diff --git a/doc/tutorials/rec/ml_regression.rst b/doc/tutorials/rec/ml_regression_en.rst similarity index 100% rename from doc/tutorials/rec/ml_regression.rst rename to doc/tutorials/rec/ml_regression_en.rst diff --git a/doc/tutorials/semantic_role_labeling/index.rst b/doc/tutorials/semantic_role_labeling/index.rst deleted file mode 100644 index ff3035059b..0000000000 --- a/doc/tutorials/semantic_role_labeling/index.rst +++ /dev/null @@ -1,7 +0,0 @@ -Semantic Role Labeling Tutorial -=============================== - -.. toctree:: - :maxdepth: 3 - - semantic_role_labeling.md diff --git a/doc/tutorials/semantic_role_labeling/semantic_role_labeling.md b/doc/tutorials/semantic_role_labeling/index_en.md similarity index 100% rename from doc/tutorials/semantic_role_labeling/semantic_role_labeling.md rename to doc/tutorials/semantic_role_labeling/index_en.md diff --git a/doc/tutorials/sentiment_analysis/index.rst b/doc/tutorials/sentiment_analysis/index.rst deleted file mode 100644 index 9ee6d3a177..0000000000 --- a/doc/tutorials/sentiment_analysis/index.rst +++ /dev/null @@ -1,9 +0,0 @@ -Sentiment Analasis Tutorial -=========================== - -.. toctree:: - :maxdepth: 3 - :glob: - - Training Locally - internal/cluster_train.md diff --git a/doc/tutorials/sentiment_analysis/sentiment_analysis.md b/doc/tutorials/sentiment_analysis/index_en.md similarity index 100% rename from doc/tutorials/sentiment_analysis/sentiment_analysis.md rename to doc/tutorials/sentiment_analysis/index_en.md diff --git a/doc/tutorials/text_generation/index.rst b/doc/tutorials/text_generation/index.rst deleted file mode 100644 index 82da552419..0000000000 --- a/doc/tutorials/text_generation/index.rst +++ /dev/null @@ -1,9 +0,0 @@ -Text Generation Tutorial -======================== - -.. 
toctree:: - :maxdepth: 3 - :glob: - - Training Locally - internal/cluster_train.md diff --git a/doc/tutorials/text_generation/text_generation.md b/doc/tutorials/text_generation/index_en.md similarity index 100% rename from doc/tutorials/text_generation/text_generation.md rename to doc/tutorials/text_generation/index_en.md From 4f73e3f666e2144fba853eaadd69577647f40238 Mon Sep 17 00:00:00 2001 From: liaogang Date: Fri, 9 Dec 2016 13:33:31 +0800 Subject: [PATCH 061/265] Add SIMD flags for runtime check --- paddle/utils/CpuId.cpp | 61 +++++++++++++++++++++++ paddle/utils/CpuId.h | 71 +++++++++++++++++++++++++++ paddle/utils/tests/CMakeLists.txt | 1 + paddle/utils/tests/test_SIMDFlags.cpp | 55 +++++++++++++++++++++ 4 files changed, 188 insertions(+) create mode 100644 paddle/utils/CpuId.cpp create mode 100644 paddle/utils/CpuId.h create mode 100644 paddle/utils/tests/test_SIMDFlags.cpp diff --git a/paddle/utils/CpuId.cpp b/paddle/utils/CpuId.cpp new file mode 100644 index 0000000000..ab73be9e89 --- /dev/null +++ b/paddle/utils/CpuId.cpp @@ -0,0 +1,61 @@ +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved. +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + http://www.apache.org/licenses/LICENSE-2.0 +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. */ + +#include "paddle/utils/CpuId.h" +#include "paddle/utils/Util.h" + +#ifdef _WIN32 + +/// for MSVC +#define CPUID(info, x) __cpuidex(info, x, 0) + +#else + +#include + +/// for GCC/Clang +#define CPUID(info, x) __cpuid_count(x, 0, info[0], info[1], info[2], info[3]) + +#endif + +namespace paddle { + +/// init simd instance +static InitFunction __init_simd_flags( + []{ SIMDFlags::instance(); }, std::numeric_limits::max()); + +SIMDFlags::SIMDFlags() { + unsigned int cpuInfo[4]; + + CPUID(cpuInfo, 0x00000001); + simd_flags_ |= cpuInfo[3] & (1 << 25) ? SIMD_SSE : SIMD_NONE; + simd_flags_ |= cpuInfo[3] & (1 << 26) ? SIMD_SSE2 : SIMD_NONE; + simd_flags_ |= cpuInfo[2] & (1 << 0) ? SIMD_SSE3 : SIMD_NONE; + simd_flags_ |= cpuInfo[2] & (1 << 9) ? SIMD_SSSE3 : SIMD_NONE; + simd_flags_ |= cpuInfo[2] & (1 << 19) ? SIMD_SSE41 : SIMD_NONE; + simd_flags_ |= cpuInfo[2] & (1 << 20) ? SIMD_SSE42 : SIMD_NONE; + simd_flags_ |= cpuInfo[2] & (1 << 12) ? SIMD_FMA3 : SIMD_NONE; + simd_flags_ |= cpuInfo[2] & (1 << 28) ? SIMD_AVX : SIMD_NONE; + + CPUID(cpuInfo, 0x00000007); + simd_flags_ |= cpuInfo[1] & (1 << 5) ? SIMD_AVX2 : SIMD_NONE; + simd_flags_ |= cpuInfo[1] & (1 << 16) ? SIMD_AVX512: SIMD_NONE; + + CPUID(cpuInfo, 0x80000001); + simd_flags_ |= cpuInfo[2] & (1 << 16) ? SIMD_FMA4 : SIMD_NONE; +} + +SIMDFlags* SIMDFlags::instance() { + static SIMDFlags instance; + return &instance; +} + +} // namespace paddle diff --git a/paddle/utils/CpuId.h b/paddle/utils/CpuId.h new file mode 100644 index 0000000000..bb77d37712 --- /dev/null +++ b/paddle/utils/CpuId.h @@ -0,0 +1,71 @@ +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved. +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. 
+You may obtain a copy of the License at + http://www.apache.org/licenses/LICENSE-2.0 +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. */ + +#pragma once + +#include +#include "DisableCopy.h" + +namespace paddle { + +class SIMDFlags final { +public: + DISABLE_COPY(SIMDFlags); + + SIMDFlags(); + + static SIMDFlags* instance(); + + inline bool isSSE() { return simd_flags_ & SIMD_SSE; } + inline bool isSSE2() { return simd_flags_ & SIMD_SSE2; } + inline bool isSSE3() { return simd_flags_ & SIMD_SSE3; } + inline bool isSSSE3() { return simd_flags_ & SIMD_SSSE3; } + inline bool isSSE41() { return simd_flags_ & SIMD_SSE41; } + inline bool isSSE42() { return simd_flags_ & SIMD_SSE42; } + inline bool isFMA3() { return simd_flags_ & SIMD_FMA3; } + inline bool isFMA4() { return simd_flags_ & SIMD_FMA4; } + inline bool isAVX() { return simd_flags_ & SIMD_AVX; } + inline bool isAVX2() { return simd_flags_ & SIMD_AVX2; } + inline bool isAVX512() { return simd_flags_ & SIMD_AVX512;} + +private: + enum simd_t { + SIMD_NONE = 0, ///< None + SIMD_SSE = 1 << 0, ///< SSE + SIMD_SSE2 = 1 << 1, ///< SSE 2 + SIMD_SSE3 = 1 << 2, ///< SSE 3 + SIMD_SSSE3 = 1 << 3, ///< SSSE 3 + SIMD_SSE41 = 1 << 4, ///< SSE 4.1 + SIMD_SSE42 = 1 << 5, ///< SSE 4.2 + SIMD_FMA3 = 1 << 6, ///< FMA 3 + SIMD_FMA4 = 1 << 7, ///< FMA 4 + SIMD_AVX = 1 << 8, ///< AVX + SIMD_AVX2 = 1 << 9, ///< AVX 2 + SIMD_AVX512 = 1 << 10, ///< AVX 512 + }; + + /// simd flags + int simd_flags_ = SIMD_NONE; +}; + +#define HAS_SSE SIMDFlags::instance()->isSSE() +#define HAS_SSE2 SIMDFlags::instance()->isSSE2() +#define HAS_SSE3 SIMDFlags::instance()->isSSE3() +#define HAS_SSSE3 SIMDFlags::instance()->isSSSE3() +#define HAS_SSE41 SIMDFlags::instance()->isSSE41() +#define HAS_SSS42 SIMDFlags::instance()->isSSE42() +#define HAS_FMA3 SIMDFlags::instance()->isFMA3() +#define HAS_FMA4 SIMDFlags::instance()->isFMA4() +#define HAS_AVX SIMDFlags::instance()->isAVX() +#define HAS_AVX2 SIMDFlags::instance()->isAVX2() +#define HAS_AVX512 SIMDFlags::instance()->isAVX512() + +} // namespace paddle diff --git a/paddle/utils/tests/CMakeLists.txt b/paddle/utils/tests/CMakeLists.txt index adf489fafe..298ede5cd6 100644 --- a/paddle/utils/tests/CMakeLists.txt +++ b/paddle/utils/tests/CMakeLists.txt @@ -5,6 +5,7 @@ add_simple_unittest(test_StringUtils) add_simple_unittest(test_CustomStackTrace) add_simple_unittest(test_ThreadBarrier) add_simple_unittest(test_SpinLock) +add_simple_unittest(test_SIMDFlags) add_executable( test_CustomStackTracePrint diff --git a/paddle/utils/tests/test_SIMDFlags.cpp b/paddle/utils/tests/test_SIMDFlags.cpp new file mode 100644 index 0000000000..583a649b84 --- /dev/null +++ b/paddle/utils/tests/test_SIMDFlags.cpp @@ -0,0 +1,55 @@ +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved. +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + http://www.apache.org/licenses/LICENSE-2.0 +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+See the License for the specific language governing permissions and +limitations under the License. */ + + +#include + +#include "paddle/utils/CpuId.h" +#include "paddle/utils/Logging.h" +#include "paddle/utils/Util.h" + +using namespace paddle; // NOLINT + +TEST(SIMDFlags, gccTest) { +#if (defined(__GNUC__) || defined(__GNUG__)) && !(defined(__clang__)) + CHECK(__builtin_cpu_supports("sse") == HAS_SSE); + CHECK(__builtin_cpu_supports("sse2") == HAS_SSE2); + CHECK(__builtin_cpu_supports("sse3") == HAS_SSE3); + CHECK(__builtin_cpu_supports("ssse3")== HAS_SSSE3); + CHECK(__builtin_cpu_supports("sse4.1")== HAS_SSE41); + CHECK(__builtin_cpu_supports("sse4.2")== HAS_SSE42); + CHECK(__builtin_cpu_supports("fma3")== HAS_FMA3); + CHECK(__builtin_cpu_supports("fma4")== HAS_FMA4); + CHECK(__builtin_cpu_supports("avx")== HAS_AVX); + CHECK(__builtin_cpu_supports("avx2")== HAS_AVX2); + CHECK(__builtin_cpu_supports("avx512f")== HAS_AVX512); +#endif +} + +TEST(SIMDFlags, normalPrint) { + auto simd = SIMDFlags::instance(); + LOG(INFO) << "Has SSE2: " << std::boolalpha << simd->isSSE2(); + LOG(INFO) << "Has SSE3: " << std::boolalpha << simd->isSSE3(); + LOG(INFO) << "Has SSSE3: " << std::boolalpha << simd->isSSSE3(); + LOG(INFO) << "Has SSE4.1: " << std::boolalpha << simd->isSSE41(); + LOG(INFO) << "Has SSE4.2: " << std::boolalpha << simd->isSSE42(); + LOG(INFO) << "Has FMA3: " << std::boolalpha << simd->isFMA3(); + LOG(INFO) << "Has FMA4: " << std::boolalpha << simd->isFMA4(); + LOG(INFO) << "Has AVX: " << std::boolalpha << simd->isAVX(); + LOG(INFO) << "Has AVX2: " << std::boolalpha << simd->isAVX2(); + LOG(INFO) << "Has AVX512: " << std::boolalpha << simd->isAVX512(); +} + +int main(int argc, char** argv) { + testing::InitGoogleTest(&argc, argv); + paddle::initMain(argc, argv); + return RUN_ALL_TESTS(); +} From 3d2896493b90283b6b5b8e49869d90f656f39783 Mon Sep 17 00:00:00 2001 From: dangqingqing Date: Fri, 9 Dec 2016 14:00:04 +0800 Subject: [PATCH 062/265] follow comments --- demo/sentiment/predict.py | 62 ++++++++----------- .../sentiment_analysis/sentiment_analysis.md | 29 ++++----- .../sentiment_analysis/sentiment_analysis.md | 29 ++++----- 3 files changed, 57 insertions(+), 63 deletions(-) diff --git a/demo/sentiment/predict.py b/demo/sentiment/predict.py index e01dc6d228..3920a1bade 100755 --- a/demo/sentiment/predict.py +++ b/demo/sentiment/predict.py @@ -66,42 +66,27 @@ class SentimentPrediction(): for v in open(label_file, 'r'): self.label[int(v.split('\t')[1])] = v.split('\t')[0] - def get_data(self, data): + def get_index(self, data): """ - Get input data of paddle format. + transform word into integer index according to the dictionary. 
""" - for line in data: - words = line.strip().split() - word_slot = [ - self.word_dict[w] for w in words if w in self.word_dict - ] - if not word_slot: - print "all words are not in dictionary: %s", line - continue - yield [word_slot] - - def predict(self, batch_size): - - def batch_predict(batch_data): - input = self.converter(self.get_data(batch_data)) - output = self.network.forwardTest(input) - prob = output[0]["value"] - labs = np.argsort(-prob) - for idx, lab in enumerate(labs): - if self.label is None: - print("predicting label is %d" % (lab[0])) - else: - print("predicting label is %s" % - (self.label[lab[0]])) - - batch = [] - for line in sys.stdin: - batch.append(line) - if len(batch) == batch_size: - batch_predict(batch) - batch=[] - if len(batch) > 0: - batch_predict(batch) + words = data.strip().split() + word_slot = [ + self.word_dict[w] for w in words if w in self.word_dict + ] + return word_slot + + def batch_predict(self, data_batch): + input = self.converter(data_batch) + output = self.network.forwardTest(input) + prob = output[0]["value"] + labs = np.argsort(-prob) + for idx, lab in enumerate(labs): + if self.label is None: + print("predicting label is %d" % (lab[0])) + else: + print("predicting label is %s" % + (self.label[lab[0]])) def option_parser(): usage = "python predict.py -n config -w model_dir -d dictionary -i input_file " @@ -152,8 +137,15 @@ def main(): label = options.label swig_paddle.initPaddle("--use_gpu=0") predict = SentimentPrediction(train_conf, dict_file, model_path, label) - predict.predict(batch_size) + batch = [] + for line in sys.stdin: + batch.append([predict.get_index(line)]) + if len(batch) == batch_size: + predict.batch_predict(batch) + batch=[] + if len(batch) > 0: + predict.batch_predict(batch) if __name__ == '__main__': main() diff --git a/doc/tutorials/sentiment_analysis/sentiment_analysis.md b/doc/tutorials/sentiment_analysis/sentiment_analysis.md index c53952c544..bb7681db44 100644 --- a/doc/tutorials/sentiment_analysis/sentiment_analysis.md +++ b/doc/tutorials/sentiment_analysis/sentiment_analysis.md @@ -293,20 +293,21 @@ predict.sh: model=model_output/pass-00002/ config=trainer_config.py label=data/pre-imdb/labels.list -python predict.py \ - -n $config\ - -w $model \ - -b $label \ - -d data/pre-imdb/dict.txt \ - -i data/aclImdb/test/pos/10007_10.txt -``` - -* `predict.py`: predicting interface. -* -n $config : set network configure. -* -w $model: set model path. -* -b $label: set dictionary about corresponding relation between integer label and string label. -* -d data/pre-imdb/dict.txt: set dictionary. -* -i data/aclImdb/test/pos/10014_7.txt: set one example file to predict. +cat ./data/aclImdb/test/pos/10007_10.txt | python predict.py \ + --tconf=$config\ + --model=$model \ + --label=$label \ + --dict=./data/pre-imdb/dict.txt \ + --batch_size=1 +``` + +* `cat ./data/aclImdb/test/pos/10007_10.txt` : the input sample. +* `predict.py` : predicting interface. +* `--tconf=$config` : set network configure. +* ` --model=$model` : set model path. +* `--label=$label` : set dictionary about corresponding relation between integer label and string label. +* `--dict=data/pre-imdb/dict.txt` : set dictionary. +* `--batch_size=1` : set batch size. Note you should make sure the default model path `model_output/pass-00002` exists or change the model path. 
diff --git a/doc_cn/demo/sentiment_analysis/sentiment_analysis.md b/doc_cn/demo/sentiment_analysis/sentiment_analysis.md index b70f2d5967..ba307e97e3 100644 --- a/doc_cn/demo/sentiment_analysis/sentiment_analysis.md +++ b/doc_cn/demo/sentiment_analysis/sentiment_analysis.md @@ -291,20 +291,21 @@ predict.sh: model=model_output/pass-00002/ config=trainer_config.py label=data/pre-imdb/labels.list -python predict.py \ - -n $config\ - -w $model \ - -b $label \ - -d data/pre-imdb/dict.txt \ - -i data/aclImdb/test/pos/10007_10.txt -``` - -* `predict.py`: 预测接口脚本。 -* -n $config : 设置网络配置。 -* -w $model: 设置模型路径。 -* -b $label: 设置标签类别字典,这个字典是整数标签和字符串标签的一个对应。 -* -d data/pre-imdb/dict.txt: 设置字典文件。 -* -i data/aclImdb/test/pos/10014_7.txt: 设置一个要预测的示例文件。 +cat ./data/aclImdb/test/pos/10007_10.txt | python predict.py \ + --tconf=$config\ + --model=$model \ + --label=$label \ + --dict=./data/pre-imdb/dict.txt \ + --batch_size=1 +``` + +* `cat ./data/aclImdb/test/pos/10007_10.txt` : 输入预测样本。 +* `predict.py` : 预测接口脚本。 +* `--tconf=$config` : 设置网络配置。 +* `--model=$model` : 设置模型路径。 +* `--label=$label` : 设置标签类别字典,这个字典是整数标签和字符串标签的一个对应。 +* `--dict=data/pre-imdb/dict.txt` : 设置字典文件。 +* `--batch_size=1` : 设置batch size。 注意应该确保默认模型路径`model_output / pass-00002`存在或更改为其它模型路径。 From 5c480d6fcb2f4ce8f912c4c242c6dcd16f7ce3a6 Mon Sep 17 00:00:00 2001 From: liaogang Date: Fri, 9 Dec 2016 15:15:41 +0800 Subject: [PATCH 063/265] Add reference to CPUID --- paddle/utils/CpuId.cpp | 2 +- paddle/utils/CpuId.h | 2 +- paddle/utils/tests/test_SIMDFlags.cpp | 15 ++++++--------- 3 files changed, 8 insertions(+), 11 deletions(-) diff --git a/paddle/utils/CpuId.cpp b/paddle/utils/CpuId.cpp index ab73be9e89..909c347945 100644 --- a/paddle/utils/CpuId.cpp +++ b/paddle/utils/CpuId.cpp @@ -34,7 +34,7 @@ static InitFunction __init_simd_flags( SIMDFlags::SIMDFlags() { unsigned int cpuInfo[4]; - + // CPUID: https://en.wikipedia.org/wiki/CPUID CPUID(cpuInfo, 0x00000001); simd_flags_ |= cpuInfo[3] & (1 << 25) ? SIMD_SSE : SIMD_NONE; simd_flags_ |= cpuInfo[3] & (1 << 26) ? 
SIMD_SSE2 : SIMD_NONE; diff --git a/paddle/utils/CpuId.h b/paddle/utils/CpuId.h index bb77d37712..59a917c857 100644 --- a/paddle/utils/CpuId.h +++ b/paddle/utils/CpuId.h @@ -61,7 +61,7 @@ private: #define HAS_SSE3 SIMDFlags::instance()->isSSE3() #define HAS_SSSE3 SIMDFlags::instance()->isSSSE3() #define HAS_SSE41 SIMDFlags::instance()->isSSE41() -#define HAS_SSS42 SIMDFlags::instance()->isSSE42() +#define HAS_SSE42 SIMDFlags::instance()->isSSE42() #define HAS_FMA3 SIMDFlags::instance()->isFMA3() #define HAS_FMA4 SIMDFlags::instance()->isFMA4() #define HAS_AVX SIMDFlags::instance()->isAVX() diff --git a/paddle/utils/tests/test_SIMDFlags.cpp b/paddle/utils/tests/test_SIMDFlags.cpp index 583a649b84..382d69a4a6 100644 --- a/paddle/utils/tests/test_SIMDFlags.cpp +++ b/paddle/utils/tests/test_SIMDFlags.cpp @@ -20,17 +20,14 @@ using namespace paddle; // NOLINT TEST(SIMDFlags, gccTest) { #if (defined(__GNUC__) || defined(__GNUG__)) && !(defined(__clang__)) - CHECK(__builtin_cpu_supports("sse") == HAS_SSE); - CHECK(__builtin_cpu_supports("sse2") == HAS_SSE2); - CHECK(__builtin_cpu_supports("sse3") == HAS_SSE3); - CHECK(__builtin_cpu_supports("ssse3")== HAS_SSSE3); + CHECK(__builtin_cpu_supports("sse") == HAS_SSE); + CHECK(__builtin_cpu_supports("sse2") == HAS_SSE2); + CHECK(__builtin_cpu_supports("sse3") == HAS_SSE3); + CHECK(__builtin_cpu_supports("ssse3") == HAS_SSSE3); CHECK(__builtin_cpu_supports("sse4.1")== HAS_SSE41); CHECK(__builtin_cpu_supports("sse4.2")== HAS_SSE42); - CHECK(__builtin_cpu_supports("fma3")== HAS_FMA3); - CHECK(__builtin_cpu_supports("fma4")== HAS_FMA4); - CHECK(__builtin_cpu_supports("avx")== HAS_AVX); - CHECK(__builtin_cpu_supports("avx2")== HAS_AVX2); - CHECK(__builtin_cpu_supports("avx512f")== HAS_AVX512); + CHECK(__builtin_cpu_supports("avx") == HAS_AVX); + CHECK(__builtin_cpu_supports("avx2") == HAS_AVX2); #endif } From bf5342702f566703a67289682b38682b95533a7a Mon Sep 17 00:00:00 2001 From: liaogang Date: Fri, 9 Dec 2016 15:47:44 +0800 Subject: [PATCH 064/265] Fix bugs on SIMD flags and unit test --- paddle/utils/CpuId.cpp | 4 ---- paddle/utils/CpuId.h | 22 +++++++++++----------- paddle/utils/tests/test_SIMDFlags.cpp | 16 ++++++++-------- 3 files changed, 19 insertions(+), 23 deletions(-) diff --git a/paddle/utils/CpuId.cpp b/paddle/utils/CpuId.cpp index 909c347945..734b2e0924 100644 --- a/paddle/utils/CpuId.cpp +++ b/paddle/utils/CpuId.cpp @@ -28,10 +28,6 @@ limitations under the License. 
*/ namespace paddle { -/// init simd instance -static InitFunction __init_simd_flags( - []{ SIMDFlags::instance(); }, std::numeric_limits::max()); - SIMDFlags::SIMDFlags() { unsigned int cpuInfo[4]; // CPUID: https://en.wikipedia.org/wiki/CPUID diff --git a/paddle/utils/CpuId.h b/paddle/utils/CpuId.h index 59a917c857..d15e58d1dd 100644 --- a/paddle/utils/CpuId.h +++ b/paddle/utils/CpuId.h @@ -24,17 +24,17 @@ public: static SIMDFlags* instance(); - inline bool isSSE() { return simd_flags_ & SIMD_SSE; } - inline bool isSSE2() { return simd_flags_ & SIMD_SSE2; } - inline bool isSSE3() { return simd_flags_ & SIMD_SSE3; } - inline bool isSSSE3() { return simd_flags_ & SIMD_SSSE3; } - inline bool isSSE41() { return simd_flags_ & SIMD_SSE41; } - inline bool isSSE42() { return simd_flags_ & SIMD_SSE42; } - inline bool isFMA3() { return simd_flags_ & SIMD_FMA3; } - inline bool isFMA4() { return simd_flags_ & SIMD_FMA4; } - inline bool isAVX() { return simd_flags_ & SIMD_AVX; } - inline bool isAVX2() { return simd_flags_ & SIMD_AVX2; } - inline bool isAVX512() { return simd_flags_ & SIMD_AVX512;} + inline bool isSSE() const { return simd_flags_ & SIMD_SSE; } + inline bool isSSE2() const { return simd_flags_ & SIMD_SSE2; } + inline bool isSSE3() const { return simd_flags_ & SIMD_SSE3; } + inline bool isSSSE3() const { return simd_flags_ & SIMD_SSSE3; } + inline bool isSSE41() const { return simd_flags_ & SIMD_SSE41; } + inline bool isSSE42() const { return simd_flags_ & SIMD_SSE42; } + inline bool isFMA3() const { return simd_flags_ & SIMD_FMA3; } + inline bool isFMA4() const { return simd_flags_ & SIMD_FMA4; } + inline bool isAVX() const { return simd_flags_ & SIMD_AVX; } + inline bool isAVX2() const { return simd_flags_ & SIMD_AVX2; } + inline bool isAVX512()const { return simd_flags_ & SIMD_AVX512;} private: enum simd_t { diff --git a/paddle/utils/tests/test_SIMDFlags.cpp b/paddle/utils/tests/test_SIMDFlags.cpp index 382d69a4a6..a544901aa3 100644 --- a/paddle/utils/tests/test_SIMDFlags.cpp +++ b/paddle/utils/tests/test_SIMDFlags.cpp @@ -20,14 +20,14 @@ using namespace paddle; // NOLINT TEST(SIMDFlags, gccTest) { #if (defined(__GNUC__) || defined(__GNUG__)) && !(defined(__clang__)) - CHECK(__builtin_cpu_supports("sse") == HAS_SSE); - CHECK(__builtin_cpu_supports("sse2") == HAS_SSE2); - CHECK(__builtin_cpu_supports("sse3") == HAS_SSE3); - CHECK(__builtin_cpu_supports("ssse3") == HAS_SSSE3); - CHECK(__builtin_cpu_supports("sse4.1")== HAS_SSE41); - CHECK(__builtin_cpu_supports("sse4.2")== HAS_SSE42); - CHECK(__builtin_cpu_supports("avx") == HAS_AVX); - CHECK(__builtin_cpu_supports("avx2") == HAS_AVX2); + CHECK(!__builtin_cpu_supports("sse") != HAS_SSE); + CHECK(!__builtin_cpu_supports("sse2") != HAS_SSE2); + CHECK(!__builtin_cpu_supports("sse3") != HAS_SSE3); + CHECK(!__builtin_cpu_supports("ssse3") != HAS_SSSE3); + CHECK(!__builtin_cpu_supports("sse4.1")!= HAS_SSE41); + CHECK(!__builtin_cpu_supports("sse4.2")!= HAS_SSE42); + CHECK(!__builtin_cpu_supports("avx") != HAS_AVX); + CHECK(!__builtin_cpu_supports("avx2") != HAS_AVX2); #endif } From cb9ebedaeff37c3eee5f72a836528c97acd36e05 Mon Sep 17 00:00:00 2001 From: liaogang Date: Fri, 9 Dec 2016 16:54:58 +0800 Subject: [PATCH 065/265] Remove ostream and add intrin header file --- paddle/utils/CpuId.cpp | 4 +++- paddle/utils/CpuId.h | 3 +-- 2 files changed, 4 insertions(+), 3 deletions(-) diff --git a/paddle/utils/CpuId.cpp b/paddle/utils/CpuId.cpp index 734b2e0924..ae1fb40f04 100644 --- a/paddle/utils/CpuId.cpp +++ b/paddle/utils/CpuId.cpp @@ -14,6 +14,8 
@@ limitations under the License. */ #ifdef _WIN32 +#include + /// for MSVC #define CPUID(info, x) __cpuidex(info, x, 0) @@ -49,7 +51,7 @@ SIMDFlags::SIMDFlags() { simd_flags_ |= cpuInfo[2] & (1 << 16) ? SIMD_FMA4 : SIMD_NONE; } -SIMDFlags* SIMDFlags::instance() { +const SIMDFlags* SIMDFlags::instance() { static SIMDFlags instance; return &instance; } diff --git a/paddle/utils/CpuId.h b/paddle/utils/CpuId.h index d15e58d1dd..19096332b6 100644 --- a/paddle/utils/CpuId.h +++ b/paddle/utils/CpuId.h @@ -11,7 +11,6 @@ limitations under the License. */ #pragma once -#include #include "DisableCopy.h" namespace paddle { @@ -22,7 +21,7 @@ public: SIMDFlags(); - static SIMDFlags* instance(); + static const SIMDFlags* instance(); inline bool isSSE() const { return simd_flags_ & SIMD_SSE; } inline bool isSSE2() const { return simd_flags_ & SIMD_SSE2; } From 0c96b26c3ce3e7f1a3e6c9501d9675874d846564 Mon Sep 17 00:00:00 2001 From: Luo Tao Date: Fri, 9 Dec 2016 17:01:59 +0800 Subject: [PATCH 066/265] remove breathe and doc/howto/source directory --- .travis.yml | 2 +- cmake/check_packages.cmake | 1 - .../build_and_install/build_from_source_en.md | 2 +- doc/howto/source/api.rst | 5 - doc/howto/source/cuda/index.rst | 9 - doc/howto/source/cuda/matrix.rst | 59 -- doc/howto/source/cuda/nn.rst | 39 -- doc/howto/source/cuda/utils.rst | 37 -- doc/howto/source/gserver/activations.rst | 5 - doc/howto/source/gserver/dataproviders.rst | 87 --- doc/howto/source/gserver/evaluators.rst | 103 ---- doc/howto/source/gserver/gradientmachines.rst | 27 - doc/howto/source/gserver/index.rst | 12 - doc/howto/source/gserver/layers.rst | 566 ------------------ doc/howto/source/gserver/neworks.rst | 12 - doc/howto/source/index.rst | 14 - doc/howto/source/math/functions.rst | 10 - doc/howto/source/math/index.rst | 10 - doc/howto/source/math/matrix.rst | 76 --- doc/howto/source/math/utils.rst | 18 - doc/howto/source/math/vector.rst | 37 -- doc/howto/source/parameter/index.rst | 9 - doc/howto/source/parameter/optimizer.rst | 22 - doc/howto/source/parameter/parameter.rst | 12 - doc/howto/source/parameter/updater.rst | 14 - doc/howto/source/pserver/client.rst | 12 - doc/howto/source/pserver/index.rst | 10 - doc/howto/source/pserver/network.rst | 27 - doc/howto/source/pserver/server.rst | 12 - doc/howto/source/trainer.rst | 32 - doc/howto/source/utils/customStackTrace.rst | 4 - doc/howto/source/utils/enum.rst | 3 - doc/howto/source/utils/index.rst | 11 - doc/howto/source/utils/lock.rst | 32 - doc/howto/source/utils/queue.rst | 12 - doc/howto/source/utils/thread.rst | 27 - paddle/scripts/docker/Dockerfile | 2 +- paddle/scripts/docker/Dockerfile.gpu | 2 +- paddle/scripts/tools/build_docs/Dockerfile | 2 +- 39 files changed, 5 insertions(+), 1371 deletions(-) delete mode 100644 doc/howto/source/api.rst delete mode 100644 doc/howto/source/cuda/index.rst delete mode 100644 doc/howto/source/cuda/matrix.rst delete mode 100644 doc/howto/source/cuda/nn.rst delete mode 100644 doc/howto/source/cuda/utils.rst delete mode 100644 doc/howto/source/gserver/activations.rst delete mode 100644 doc/howto/source/gserver/dataproviders.rst delete mode 100644 doc/howto/source/gserver/evaluators.rst delete mode 100644 doc/howto/source/gserver/gradientmachines.rst delete mode 100644 doc/howto/source/gserver/index.rst delete mode 100644 doc/howto/source/gserver/layers.rst delete mode 100644 doc/howto/source/gserver/neworks.rst delete mode 100644 doc/howto/source/index.rst delete mode 100644 doc/howto/source/math/functions.rst delete mode 100644 
doc/howto/source/math/index.rst delete mode 100644 doc/howto/source/math/matrix.rst delete mode 100644 doc/howto/source/math/utils.rst delete mode 100644 doc/howto/source/math/vector.rst delete mode 100644 doc/howto/source/parameter/index.rst delete mode 100644 doc/howto/source/parameter/optimizer.rst delete mode 100644 doc/howto/source/parameter/parameter.rst delete mode 100644 doc/howto/source/parameter/updater.rst delete mode 100644 doc/howto/source/pserver/client.rst delete mode 100644 doc/howto/source/pserver/index.rst delete mode 100644 doc/howto/source/pserver/network.rst delete mode 100644 doc/howto/source/pserver/server.rst delete mode 100644 doc/howto/source/trainer.rst delete mode 100644 doc/howto/source/utils/customStackTrace.rst delete mode 100644 doc/howto/source/utils/enum.rst delete mode 100644 doc/howto/source/utils/index.rst delete mode 100644 doc/howto/source/utils/lock.rst delete mode 100644 doc/howto/source/utils/queue.rst delete mode 100644 doc/howto/source/utils/thread.rst diff --git a/.travis.yml b/.travis.yml index 6215060e33..cf0cca1134 100644 --- a/.travis.yml +++ b/.travis.yml @@ -50,7 +50,7 @@ before_install: fi - if [[ "$TRAVIS_OS_NAME" == "linux" ]]; then sudo paddle/scripts/travis/before_install.linux.sh; fi - if [[ "$TRAVIS_OS_NAME" == "osx" ]]; then paddle/scripts/travis/before_install.osx.sh; fi - - pip install wheel protobuf sphinx breathe recommonmark virtualenv numpy sphinx_rtd_theme + - pip install wheel protobuf sphinx recommonmark virtualenv numpy sphinx_rtd_theme script: - paddle/scripts/travis/main.sh notifications: diff --git a/cmake/check_packages.cmake b/cmake/check_packages.cmake index 3bc0c1fd18..0688745541 100644 --- a/cmake/check_packages.cmake +++ b/cmake/check_packages.cmake @@ -30,7 +30,6 @@ if(WITH_DOC) find_package(Sphinx REQUIRED) find_package(Doxygen REQUIRED) find_python_module(recommonmark REQUIRED) - find_python_module(breathe REQUIRED) endif() if(WITH_SWIG_PY) diff --git a/doc/getstarted/build_and_install/build_from_source_en.md b/doc/getstarted/build_and_install/build_from_source_en.md index 150d7fc437..3771d316a1 100644 --- a/doc/getstarted/build_and_install/build_from_source_en.md +++ b/doc/getstarted/build_and_install/build_from_source_en.md @@ -79,7 +79,7 @@ As a simple example, consider the following: ```bash pip install 'sphinx>=1.4.0' - pip install sphinx_rtd_theme breathe recommonmark + pip install sphinx_rtd_theme recommonmark # install doxygen on Ubuntu sudo apt-get install doxygen diff --git a/doc/howto/source/api.rst b/doc/howto/source/api.rst deleted file mode 100644 index 30396c26b6..0000000000 --- a/doc/howto/source/api.rst +++ /dev/null @@ -1,5 +0,0 @@ -API -=== - -.. doxygenfile:: paddle/api/PaddleAPI.h -.. doxygenfile:: paddle/api/Internal.h diff --git a/doc/howto/source/cuda/index.rst b/doc/howto/source/cuda/index.rst deleted file mode 100644 index b0fed2e7f7..0000000000 --- a/doc/howto/source/cuda/index.rst +++ /dev/null @@ -1,9 +0,0 @@ -CUDA -==== - -.. toctree:: - :maxdepth: 2 - - matrix.rst - nn.rst - utils.rst diff --git a/doc/howto/source/cuda/matrix.rst b/doc/howto/source/cuda/matrix.rst deleted file mode 100644 index b7699c83ed..0000000000 --- a/doc/howto/source/cuda/matrix.rst +++ /dev/null @@ -1,59 +0,0 @@ -Matrix -====== - -Base ----- - -hl_matrix.h -``````````` -.. doxygenfile:: paddle/cuda/include/hl_matrix.h - -hl_matrix_base.h -```````````````` -.. doxygenfile:: paddle/cuda/include/hl_matrix_base.cuh - -hl_matrix_apply.cuh -``````````````````` -.. 
doxygenfile:: paddle/cuda/include/hl_matrix_apply.cuh - -hl_matrix_ops.cuh -````````````````` -.. doxygenfile:: paddle/cuda/include/hl_matrix_ops.cuh - -hl_matrix_type.cuh -`````````````````` -.. doxygenfile:: paddle/cuda/include/hl_matrix_type.cuh - -hl_sse_matrix_kernel.cuh -```````````````````````` -.. doxygenfile:: paddle/cuda/include/hl_sse_matrix_kernel.cuh - -Matrix Function ---------------- - -hl_batch_transpose.h -```````````````````` -.. doxygenfile:: paddle/cuda/include/hl_batch_transpose.h - -hl_aggregate.h -`````````````` -.. doxygenfile:: paddle/cuda/include/hl_aggregate.h - -hl_top_k.h -`````````` -.. doxygenfile:: paddle/cuda/include/hl_top_k.h - -hl_table_apply.h -```````````````` -.. doxygenfile:: paddle/cuda/include/hl_table_apply.h - -Sparse Matrix -------------- - -hl_sparse.h -``````````` -.. doxygenfile:: paddle/cuda/include/hl_sparse.h - -hl_sparse.ph -```````````` -.. doxygenfile:: paddle/cuda/include/hl_sparse.ph diff --git a/doc/howto/source/cuda/nn.rst b/doc/howto/source/cuda/nn.rst deleted file mode 100644 index 5577d01e72..0000000000 --- a/doc/howto/source/cuda/nn.rst +++ /dev/null @@ -1,39 +0,0 @@ -Neural Network -============== - -Base ----- - -.. doxygenfile:: paddle/cuda/include/hl_gpu.h -.. doxygenfile:: paddle/cuda/include/hl_functions.h -.. doxygenfile:: paddle/cuda/include/hl_avx_functions.h -.. doxygenfile:: paddle/cuda/include/hl_gpu_functions.cuh -.. doxygenfile:: paddle/cuda/include/hl_activation_functions.h - - -CNN Related APIs ----------------- -.. doxygenfile:: paddle/cuda/include/hl_cnn.h -.. doxygenfile:: paddle/cuda/include/hl_cuda_cudnn.h -.. doxygenfile:: paddle/cuda/include/hl_cuda_cudnn.ph - -RNN Related APIs ----------------- - -.. doxygenfile:: paddle/cuda/include/hl_recurrent_apply.cuh -.. doxygenfile:: paddle/cuda/include/hl_sequence.h - -LSTM Model -`````````` - -.. doxygenfile:: paddle/cuda/include/hl_lstm.h -.. dpxygenfile:: paddle/cuda/include/hl_cpu_lstm.cuh -.. doxygenfile:: paddle/cuda/include/hl_gpu_lstm.cuh -.. doxygenfile:: paddle/cuda/include/hl_lstm_ops.cuh - -GRU Model -````````` - -.. doxygenfile:: paddle/cuda/include/hl_gru_ops.cuh -.. doxygenfile:: paddle/cuda/include/hl_cpu_gru.cuh -.. doxygenfile:: paddle/cuda/include/hl_gpu_gru.cuh diff --git a/doc/howto/source/cuda/utils.rst b/doc/howto/source/cuda/utils.rst deleted file mode 100644 index 850e8bd1c6..0000000000 --- a/doc/howto/source/cuda/utils.rst +++ /dev/null @@ -1,37 +0,0 @@ -Utils -===== - -Dynamic Link Libs ------------------ -.. doxygenfile:: paddle/cuda/include/hl_dso_loader.h - -GPU Resources -------------- - -hl_cuda.ph -`````````` -.. doxygenfile:: paddle/cuda/include/hl_cuda.ph - -hl_cuda.h -````````` -.. doxygenfile:: paddle/cuda/include/hl_cuda.h - -HPPL Base ---------- -.. doxygenfile:: paddle/cuda/include/hl_base.h - -CUBLAS Wrapper --------------- -.. doxygenfile:: paddle/cuda/include/hl_cuda_cublas.h - -Timer ------ -.. doxygenfile:: paddle/cuda/include/hl_time.h - -Thread Resource ---------------- -.. doxygenfile:: paddle/cuda/include/hl_thread.ph - -Device Function ---------------- -.. doxygenfile:: paddle/cuda/include/hl_device_functions.cuh diff --git a/doc/howto/source/gserver/activations.rst b/doc/howto/source/gserver/activations.rst deleted file mode 100644 index 55b9d3be38..0000000000 --- a/doc/howto/source/gserver/activations.rst +++ /dev/null @@ -1,5 +0,0 @@ -Activations -=========== - -.. 
doxygenclass:: paddle::ActivationFunction - :members: diff --git a/doc/howto/source/gserver/dataproviders.rst b/doc/howto/source/gserver/dataproviders.rst deleted file mode 100644 index c30d9d6a36..0000000000 --- a/doc/howto/source/gserver/dataproviders.rst +++ /dev/null @@ -1,87 +0,0 @@ -============== -Data Providers -============== - -DataProviders -============= - -Base ----- -.. doxygenclass:: paddle::DataProvider - :members: - -DataProviderGroup ------------------ -.. doxygenclass:: paddle::DataProviderGroup - :members: - -MultiDataProvider ------------------ -.. doxygenclass:: paddle::MultiDataProvider - :members: - -PyDataProvider -============== - -IFieldScanner -------------- -.. doxygenclass:: paddle::IFieldScanner - :members: - -DenseScanner -------------- -.. doxygenclass:: paddle::DenseScanner - :members: - -IndexScanner -------------- -.. doxygenclass:: paddle::IndexScanner - :members: - -SparseNonValueScanner ---------------------- -.. doxygenclass:: paddle::SparseNonValueScanner - :members: - -SparseValueScanner ------------------- -.. doxygenclass:: paddle::SparseValueScanner - :members: - -SequenceScanner ---------------- -.. doxygenclass:: paddle::SparseValueScanner - :members: - -IPyDataProviderCache --------------------- -.. doxygenclass:: paddle::IPyDataProviderCache - :members: - -NoCacheStrategy ---------------- -.. doxygenclass:: paddle::NoCacheStrategy - :members: - -CacheOnePassInMemory --------------------- -.. doxygenclass:: paddle::CacheOnePassInMemory - :members: - -IPyDataProvider ---------------- -.. doxygenclass:: paddle::PyDataProvider2 - :members: - -ProtoDataProvider -================= - -ProtoDataProvider ----------------- -.. doxygenclass:: paddle::ProtoDataProvider - :members: - -ProtoSequenceDataProvider -------------------------- -.. doxygenclass:: paddle::ProtoSequenceDataProvider - :members: diff --git a/doc/howto/source/gserver/evaluators.rst b/doc/howto/source/gserver/evaluators.rst deleted file mode 100644 index f5361f76cd..0000000000 --- a/doc/howto/source/gserver/evaluators.rst +++ /dev/null @@ -1,103 +0,0 @@ -========== -Evaluators -========== - -Base -==== - -.. doxygenclass:: paddle::Evaluator - :members: - -Sum -=== - -SumEvaluator ------------- -.. doxygenclass:: paddle::SumEvaluator - :members: - -ColumnSumEvaluator ------------------- -.. doxygenclass:: paddle::ColumnSumEvaluator - :members: - -Classification -============== - -ClassificationErrorEvaluator ---------------------------- -.. doxygenclass:: paddle::ClassificationErrorEvaluator - :members: - -SequenceClassificationErrorEvaluator ------------------------------------- -.. doxygenclass:: paddle::SequenceClassificationErrorEvaluator - :members: - -AucEvaluator -------------- -.. doxygenclass:: paddle::AucEvaluator - :members: - -PrecisionRecallEvaluator ------------------------- -.. doxygenclass:: paddle::PrecisionRecallEvaluator - :members: - -ChunkEvaluator --------------- -.. doxygenclass:: paddle::ChunkEvaluator - :members: - -CTCEvaluator ------------- -.. doxygenclass:: paddle::CTCErrorEvaluator - :members: - - -Rank -==== - -PnpairEvaluator -------------- -.. doxygenclass:: paddle::PnpairEvaluator - :members: - -AucEvaluator -------------- -.. doxygenclass:: paddle::RankAucEvaluator - :members: - - -Printer -======= - -ValuePrinter -------------- -.. doxygenclass:: paddle::ValuePrinter - :members: - -GradientPrinter ---------------- -.. doxygenclass:: paddle::GradientPrinter - :members: - -MaxIdPrinter ------------- -.. 
doxygenclass:: paddle::MaxIdPrinter - :members: - -MaxFramePrinter ---------------- -.. doxygenclass:: paddle::MaxFramePrinter - :members: - -SequenceTextPrinter ------------------- -.. doxygenclass:: paddle::SequenceTextPrinter - :members: - -ClassificationErrorPrinter --------------------------- -.. doxygenclass:: paddle::ClassificationErrorPrinter - :members: diff --git a/doc/howto/source/gserver/gradientmachines.rst b/doc/howto/source/gserver/gradientmachines.rst deleted file mode 100644 index 04c8e91d03..0000000000 --- a/doc/howto/source/gserver/gradientmachines.rst +++ /dev/null @@ -1,27 +0,0 @@ -Gradient Machines -================= - -GradientMachine ---------------- -.. doxygenclass:: paddle::GradientMachine - :members: - -GradientMachineMode -------------------- -.. doxygenclass:: paddle::IGradientMachineMode - :members: - -MultiGradientMachine --------------------- -.. doxygenclass:: paddle::MultiGradientMachine - :members: - -TrainerThread -````````````` -.. doxygenclass:: paddle::TrainerThread - :members: - -RecurrentGradientMachine ------------------------- -.. doxygenclass:: paddle::RecurrentGradientMachine - :members: diff --git a/doc/howto/source/gserver/index.rst b/doc/howto/source/gserver/index.rst deleted file mode 100644 index 223b00b9a9..0000000000 --- a/doc/howto/source/gserver/index.rst +++ /dev/null @@ -1,12 +0,0 @@ -GServer -======= - -.. toctree:: - :maxdepth: 2 - - activations.rst - dataproviders.rst - evaluators.rst - gradientmachines.rst - layers.rst - neworks.rst diff --git a/doc/howto/source/gserver/layers.rst b/doc/howto/source/gserver/layers.rst deleted file mode 100644 index 191b2bdff2..0000000000 --- a/doc/howto/source/gserver/layers.rst +++ /dev/null @@ -1,566 +0,0 @@ -====== -Layers -====== - -Base -==== - -Layer ------ -.. doxygenclass:: paddle::Layer - :members: - -Projection ----------- -.. doxygenclass:: paddle::Projection - :members: - -Operator --------- -.. doxygenclass:: paddle::Operator - :members: - -Data Layer -========== - -.. doxygenclass:: paddle::DataLayer - :members: - -Fully Connected Layers -====================== - -FullyConnectedLayer -------------------- -.. doxygenclass:: paddle::FullyConnectedLayer - :members: - -SelectiveFullyConnectedLayer ----------------------------- -.. doxygenclass:: paddle::SelectiveFullyConnectedLayer - :members: - -Conv Layers -=========== - -ConvBaseLayer -------------- -.. doxygenclass:: paddle::ConvBaseLayer - :members: - -ConvOperator ------------- -.. doxygenclass:: paddle::ConvOperator - :members: - -ConvShiftLayer --------------- -.. doxygenclass:: paddle::ConvShiftLayer - :members: - -CudnnConvLayer --------------- -.. doxygenclass:: paddle::CudnnConvLayer - :members: - -ExpandConvBaseLayer -------------------- -.. doxygenclass:: paddle::ExpandConvBaseLayer - :members: - -ExpandConvLayer ---------------- -.. doxygenclass:: paddle::ExpandConvLayer - :members: - -ContextProjection ------------------ -.. doxygenclass:: paddle::ContextProjection - :members: - -Pooling Layers -============== - -PoolLayer ---------- -.. doxygenclass:: paddle::PoolLayer - :members: - -PoolProjectionLayer -------------------- -.. doxygenclass:: paddle::PoolProjectionLayer - :members: - -CudnnPoolLayer --------------- -.. doxygenclass:: paddle::CudnnPoolLayer - :members: - -SpatialPyramidPoolLayer ------------------------ -.. doxygenclass:: paddle::SpatialPyramidPoolLayer - :members: - -MaxOutLayer ------------ -.. doxygenclass:: paddle::MaxOutLayer - :members: - -Norm Layers -=========== - -NormLayer ---------- -.. 
doxygenclass:: paddle::NormLayer - :members: - -CMRProjectionNormLayer ----------------------- -.. doxygenclass:: paddle::CMRProjectionNormLayer - :members: - -DataNormLayer -------------- -.. doxygenclass:: paddle::DataNormLayer - :members: - -ResponseNormLayer ------------------ -.. doxygenclass:: paddle::ResponseNormLayer - :members: - -BatchNormBaseLayer ------------------- -.. doxygenclass:: paddle::BatchNormBaseLayer - :members: - -BatchNormalizationLayer ------------------------ -.. doxygenclass:: paddle::BatchNormalizationLayer - :members: - -CudnnBatchNormLayer ------------------------ -.. doxygenclass:: paddle::CudnnBatchNormLayer - :members: - -SumToOneNormLayer ------------------ -.. doxygenclass:: paddle::SumToOneNormLayer - :members: - -Activation Layer -================ - -ParameterReluLayer ------------------- -.. doxygenclass:: paddle::ParameterReluLayer - :members: - -Recurrent Layers -================ - -RecurrentLayer --------------- -.. doxygenclass:: paddle::RecurrentLayer - :members: - -SequenceToBatch ---------------- -.. doxygenclass:: paddle::SequenceToBatch - :members: - -LSTM ----- -LstmLayer -````````` -.. doxygenclass:: paddle::LstmLayer - :members: - -LstmStepLayer -````````````` -.. doxygenclass:: paddle::LstmStepLayer - :members: - -LstmCompute -``````````` -.. doxygenclass:: paddle::LstmCompute - :members: - -MDLSTM ------- -MDLstmLayer -``````````` -.. doxygenclass:: paddle::MDLstmLayer - :members: - -CoordIterator -````````````` -.. doxygenclass:: paddle::CoordIterator - :members: - -GRU ---- -GatedRecurrentLayer -``````````````````` -.. doxygenclass:: paddle::GatedRecurrentLayer - :members: - -GruStepLayer -```````````` -.. doxygenclass:: paddle::GruStepLayer - :members: - -GruCompute -`````````` -.. doxygenclass:: paddle::GruCompute - :members: - -Recurrent Layer Group -===================== - -AgentLayer ----------- -.. doxygenclass:: paddle::AgentLayer - :members: - -SequenceAgentLayer ------------------- -.. doxygenclass:: paddle::SequenceAgentLayer - :members: - -GatherAgentLayer ----------------- -.. doxygenclass:: paddle::GatherAgentLayer - :members: - -SequenceGatherAgentLayer ------------------------- -.. doxygenclass:: paddle::SequenceGatherAgentLayer - :members: - -ScatterAgentLayer ------------------ -.. doxygenclass:: paddle::ScatterAgentLayer - :members: - -SequenceScatterAgentLayer -------------------------- -.. doxygenclass:: paddle::SequenceScatterAgentLayer - :members: - -GetOutputLayer --------------- -.. doxygenclass:: paddle::GetOutputLayer - :members: - -Mixed Layer -=========== -.. doxygenclass:: paddle::MixedLayer - :members: - -DotMulProjection ----------------- -.. doxygenclass:: paddle::DotMulProjection - :members: - -DotMulOperator --------------- -.. doxygenclass:: paddle::DotMulOperator - :members: - -FullMatrixProjection --------------------- -.. doxygenclass:: paddle::FullMatrixProjection - :members: - -IdentityProjection ------------------- -.. doxygenclass:: paddle::IdentityProjection - :members: - -IdentityOffsetProjection ------------------------- -.. doxygenclass:: paddle::IdentityOffsetProjection - :members: - -TableProjection ---------------- -.. doxygenclass:: paddle::TableProjection - :members: - -TransposedFullMatrixProjection ------------------------------- -.. doxygenclass:: paddle::TransposedFullMatrixProjection - :members: - -Aggregate Layers -================ - -Aggregate ---------- -AverageLayer -```````````` -.. doxygenclass:: paddle::AverageLayer - :members: - -MaxLayer -```````` -.. 
doxygenclass:: paddle::MaxLayer - :members: - -SequenceLastInstanceLayer -````````````````````````` -.. doxygenclass:: paddle::SequenceLastInstanceLayer - :members: - -Concat ------- -ConcatenateLayer -```````````````` -.. doxygenclass:: paddle::ConcatenateLayer - :members: - -ConcatenateLayer2 -````````````````` -.. doxygenclass:: paddle::ConcatenateLayer2 - :members: - -SequenceConcatLayer -``````````````````` -.. doxygenclass:: paddle::SequenceConcatLayer - :members: - -Subset ------- -SubSequenceLayer -```````````````` -.. doxygenclass:: paddle::SubSequenceLayer - :members: - -Reshaping Layers -================ - -BlockExpandLayer ----------------- -.. doxygenclass:: paddle::BlockExpandLayer - :members: - -ExpandLayer ------------ -.. doxygenclass:: paddle::ExpandLayer - :members: - -FeatureMapExpandLayer ---------------------- -.. doxygenclass:: paddle::FeatureMapExpandLayer - :members: - -ResizeLayer ------------ -.. doxygenclass:: paddle::ResizeLayer - :members: - -SequenceReshapeLayer --------------------- -.. doxygenclass:: paddle::SequenceReshapeLayer - :members: - -Math Layers -=========== - -AddtoLayer ----------- -.. doxygenclass:: paddle::AddtoLayer - :members: - -ConvexCombinationLayer ----------------------- -.. doxygenclass:: paddle::ConvexCombinationLayer - :members: - -InterpolationLayer ------------------- -.. doxygenclass:: paddle::InterpolationLayer - :members: - -MultiplexLayer --------------- -.. doxygenclass:: paddle::MultiplexLayer - :members: - -OuterProdLayer --------------- -.. doxygenclass:: paddle::OuterProdLayer - :members: - -PowerLayer ----------- -.. doxygenclass:: paddle::PowerLayer - :members: - -ScalingLayer ------------- -.. doxygenclass:: paddle::ScalingLayer - :members: - -SlopeInterceptLayer -------------------- -.. doxygenclass:: paddle::SlopeInterceptLayer - :members: - -TensorLayer ------------- -.. doxygenclass:: paddle::TensorLayer - :members: - -TransLayer ----------- -.. doxygenclass:: paddle::TransLayer - :members: - -Sampling Layers -=============== - -BilinearInterpLayer -------------------- -.. doxygenclass:: paddle::BilinearInterpLayer - :members: - -MultinomialSampler ------------------- -.. doxygenclass:: paddle::MultinomialSampler - :members: - -MaxIdLayer ----------- -.. doxygenclass:: paddle::MaxIdLayer - :members: - -SamplingIdLayer ---------------- -.. doxygenclass:: paddle::SamplingIdLayer - :members: - -Cost Layers -=========== - -CostLayer ------------ -.. doxygenclass:: paddle::CostLayer - :members: - -HuberTwoClass -````````````` -.. doxygenclass:: paddle::HuberTwoClass - :members: - -LambdaCost -``````````` -.. doxygenclass:: paddle::LambdaCost - :members: - -MultiBinaryLabelCrossEntropy -```````````````````````````` -.. doxygenclass:: paddle::MultiBinaryLabelCrossEntropy - :members: - -MultiClassCrossEntropy -``````````````````````` -.. doxygenclass:: paddle::MultiClassCrossEntropy - :members: - -MultiClassCrossEntropyWithSelfNorm -`````````````````````````````````` -.. doxygenclass:: paddle::MultiClassCrossEntropyWithSelfNorm - :members: - -RankingCost -``````````` -.. doxygenclass:: paddle::RankingCost - :members: - -SoftBinaryClassCrossEntropy -``````````````````````````` -.. doxygenclass:: paddle::SoftBinaryClassCrossEntropy - :members: - -SumOfSquaresCostLayer -````````````````````` -.. doxygenclass:: paddle::SumOfSquaresCostLayer - :members: - -SumCostLayer -````````````````````` -.. doxygenclass:: paddle::SumCostLayer - :members: - -CosSimLayer ------------ -.. 
doxygenclass:: paddle::CosSimLayer - :members: - -CosSimVecMatLayer ------------------ -.. doxygenclass:: paddle::CosSimVecMatLayer - :members: - -CRFDecodingLayer ----------------- -.. doxygenclass:: paddle::CRFDecodingLayer - :members: - -CRFLayer --------- -.. doxygenclass:: paddle::CRFLayer - :members: - -CTCLayer --------- -.. doxygenclass:: paddle::CTCLayer - :members: - -HierarchicalSigmoidLayer ------------------------- -.. doxygenclass:: paddle::HierarchicalSigmoidLayer - :members: - -LinearChainCRF --------------- -.. doxygenclass:: paddle::LinearChainCRF - :members: - -LinearChainCTC --------------- -.. doxygenclass:: paddle::LinearChainCTC - :members: - -NCELayer --------- -.. doxygenclass:: paddle::NCELayer - :members: - -Validation Layers ------------------ - -ValidationLayer -``````````````` -.. doxygenclass:: paddle::ValidationLayer - :members: - -AucValidation -````````````` -.. doxygenclass:: paddle::AucValidation - :members: - -PnpairValidation -```````````````` -.. doxygenclass:: paddle::PnpairValidation - :members: - -Check Layers -============ - -EosIdCheckLayer ---------------- -.. doxygenclass:: paddle::EosIdCheckLayer - :members: diff --git a/doc/howto/source/gserver/neworks.rst b/doc/howto/source/gserver/neworks.rst deleted file mode 100644 index 73fb60d549..0000000000 --- a/doc/howto/source/gserver/neworks.rst +++ /dev/null @@ -1,12 +0,0 @@ -Networks -======== - -NeuralNetwork -------------- -.. doxygenclass:: paddle::NeuralNetwork - :members: - -ParallelNeuralNetwork ---------------------- -.. doxygenclass:: paddle::ParallelNeuralNetwork - :members: diff --git a/doc/howto/source/index.rst b/doc/howto/source/index.rst deleted file mode 100644 index 36323c888e..0000000000 --- a/doc/howto/source/index.rst +++ /dev/null @@ -1,14 +0,0 @@ -Source Code Documents -===================== - -.. toctree:: - :maxdepth: 1 - - gserver/index.rst - trainer.rst - parameter/index.rst - pserver/index.rst - api.rst - cuda/index.rst - math/index.rst - utils/index.rst diff --git a/doc/howto/source/math/functions.rst b/doc/howto/source/math/functions.rst deleted file mode 100644 index aef12e0f00..0000000000 --- a/doc/howto/source/math/functions.rst +++ /dev/null @@ -1,10 +0,0 @@ -Functions -========= - -MathFunctions -------------- -.. doxygenfile:: paddle/math/MathFunctions.h - -SIMDFunctions -------------- -.. doxygenfile:: paddle/math/SIMDFunctions.h diff --git a/doc/howto/source/math/index.rst b/doc/howto/source/math/index.rst deleted file mode 100644 index 2ec16f2b44..0000000000 --- a/doc/howto/source/math/index.rst +++ /dev/null @@ -1,10 +0,0 @@ -Math -==== - -.. toctree:: - :maxdepth: 2 - - vector.rst - matrix.rst - functions.rst - utils.rst diff --git a/doc/howto/source/math/matrix.rst b/doc/howto/source/math/matrix.rst deleted file mode 100644 index 9bb20f618d..0000000000 --- a/doc/howto/source/math/matrix.rst +++ /dev/null @@ -1,76 +0,0 @@ -Matrix -====== - -Base ----- - -BaseMatrix Template -``````````````````` -.. doxygenclass:: paddle::BaseMatrixT - :members: - -Matrix -`````` -.. doxygenclass:: paddle::Matrix - :members: - -MatrixOffset -```````````` -.. doxygenclass:: paddle::MatrixOffset - :members: - -CpuMatrix ---------- - -CpuMatrix -````````` -.. doxygenclass:: paddle::CpuMatrix - :members: - -SharedCpuMatrix -``````````````` -.. doxygenclass:: paddle::SharedCpuMatrix - :members: - -GpuMatrix ---------- -.. doxygenclass:: paddle::GpuMatrix - :members: - -CpuSparseMatrix ---------------- - -CpuSparseMatrix -``````````````` -.. 
doxygenclass:: paddle::CpuSparseMatrix - :members: - -SparseRowCpuMatrix -`````````````````` -.. doxygenclass:: paddle::SparseRowCpuMatrix - :members: - -SparseAutoGrowRowCpuMatrix -`````````````````````````` -.. doxygenclass:: paddle::SparseAutoGrowRowCpuMatrix - :members: - -SparsePrefetchRowCpuMatrix -`````````````````````````` -.. doxygenclass:: paddle::SparsePrefetchRowCpuMatrix - :members: - -SparseRowIdsCpuMatrix -````````````````````` -.. doxygenclass:: paddle::SparseRowIdsCpuMatrix - :members: - -CacheRowCpuMatrix -````````````````` -.. doxygenclass:: paddle::CacheRowCpuMatrix - :members: - -GpuSparseMatrix ---------------- -.. doxygenclass:: paddle::GpuSparseMatrix - :members: diff --git a/doc/howto/source/math/utils.rst b/doc/howto/source/math/utils.rst deleted file mode 100644 index 55d9961a39..0000000000 --- a/doc/howto/source/math/utils.rst +++ /dev/null @@ -1,18 +0,0 @@ -Memory Manager -============== - -Memory Handle -------------- -.. doxygenfile:: paddle/math/MemoryHandle.h - -Allocator ---------- -.. doxygenfile:: paddle/math/Allocator.h - -PoolAllocator -````````````` -.. doxygenfile:: paddle/math/PoolAllocator.h - -Storage -------- -.. doxygenfile:: paddle/math/Storage.h diff --git a/doc/howto/source/math/vector.rst b/doc/howto/source/math/vector.rst deleted file mode 100644 index 07f7062aba..0000000000 --- a/doc/howto/source/math/vector.rst +++ /dev/null @@ -1,37 +0,0 @@ -Vector -====== - -BaseVector -`````````` -.. doxygenclass:: paddle::BaseVector - :members: - -Vector Template -``````````````` -.. doxygenclass:: paddle::VectorT - :members: - -CpuVector Template -`````````````````` -.. doxygenclass:: paddle::CpuVectorT - :members: - -GpuVector Template -`````````````````` -.. doxygenclass:: paddle::GpuVectorT - :members: - -ParallelCpuVector Template -`````````````````````````` -.. doxygenclass:: paddle::ParallelCpuVectorT - :members: - -ParallelGpuVector Template -`````````````````````````` -.. doxygenclass:: paddle::ParallelGpuVectorT - :members: - -CpuGpuVector Template -````````````````````` -.. doxygenclass:: paddle::CpuGpuVectorT - :members: diff --git a/doc/howto/source/parameter/index.rst b/doc/howto/source/parameter/index.rst deleted file mode 100644 index 3bf6948dc3..0000000000 --- a/doc/howto/source/parameter/index.rst +++ /dev/null @@ -1,9 +0,0 @@ -Parameter -========= - -.. toctree:: - :maxdepth: 2 - - parameter.rst - optimizer.rst - updater.rst diff --git a/doc/howto/source/parameter/optimizer.rst b/doc/howto/source/parameter/optimizer.rst deleted file mode 100644 index b5b8b850b3..0000000000 --- a/doc/howto/source/parameter/optimizer.rst +++ /dev/null @@ -1,22 +0,0 @@ -Optimizer -========= - -ParameterOptimizer ------------------- -.. doxygenfile:: paddle/parameter/ParameterOptimizer.h - -Regularizer ------------ -.. doxygenfile:: paddle/parameter/Regularizer.h - -FirstOrderOptimizer -------------------- -.. doxygenfile:: paddle/parameter/FirstOrderOptimizer.h - -AverageOptimizer ----------------- -.. doxygenfile:: paddle/parameter/AverageOptimizer.h - -OptimizerWithRegularizer ------------------------- -.. doxygenfile:: paddle/parameter/OptimizerWithRegularizer.h diff --git a/doc/howto/source/parameter/parameter.rst b/doc/howto/source/parameter/parameter.rst deleted file mode 100644 index 2daa62d4e6..0000000000 --- a/doc/howto/source/parameter/parameter.rst +++ /dev/null @@ -1,12 +0,0 @@ -Parameter -========= - -Parameter ---------- -.. doxygenfile:: paddle/parameter/Argument.h -.. doxygenfile:: paddle/parameter/Parameter.h -.. 
doxygenfile:: paddle/parameter/ParallelParameter.h - -Weight ------- -.. doxygenfile:: paddle/parameter/Weight.h diff --git a/doc/howto/source/parameter/updater.rst b/doc/howto/source/parameter/updater.rst deleted file mode 100644 index dfa22e8e7d..0000000000 --- a/doc/howto/source/parameter/updater.rst +++ /dev/null @@ -1,14 +0,0 @@ -Updater -======= - -Base ----- -.. doxygenfile:: paddle/parameter/ParameterUpdaterBase.h - -Hook ----- -.. doxygenfile:: paddle/parameter/ParameterUpdaterHook.h - -Functions ---------- -.. doxygenfile:: paddle/parameter/ParameterUpdateFunctions.h diff --git a/doc/howto/source/pserver/client.rst b/doc/howto/source/pserver/client.rst deleted file mode 100644 index e5bba0706a..0000000000 --- a/doc/howto/source/pserver/client.rst +++ /dev/null @@ -1,12 +0,0 @@ -Client -====== - -BaseClient ----------- -.. doxygenclass:: paddle::BaseClient - :members: - -ParameterClient2 ----------------- -.. doxygenclass:: paddle::ParameterClient2 - :members: diff --git a/doc/howto/source/pserver/index.rst b/doc/howto/source/pserver/index.rst deleted file mode 100644 index 0031e9476b..0000000000 --- a/doc/howto/source/pserver/index.rst +++ /dev/null @@ -1,10 +0,0 @@ -PServer -======= - -.. toctree:: - :maxdepth: 2 - - client.rst - network.rst - server.rst - utils.rst diff --git a/doc/howto/source/pserver/network.rst b/doc/howto/source/pserver/network.rst deleted file mode 100644 index 7004c9d91f..0000000000 --- a/doc/howto/source/pserver/network.rst +++ /dev/null @@ -1,27 +0,0 @@ -Network -======= - -SocketServer ------------- -.. doxygenclass:: paddle::SocketServer - :members: - -SocketWorker ------------- -.. doxygenclass:: paddle::SocketWorker - :members: - -SocketClient ------------- -.. doxygenclass:: paddle::SocketClient - :members: - -SocketChannel -------------- -.. doxygenclass:: paddle::SocketChannel - :members: - -MessageReader -------------- -.. doxygenclass:: paddle::MsgReader - :members: diff --git a/doc/howto/source/pserver/server.rst b/doc/howto/source/pserver/server.rst deleted file mode 100644 index 35301acf8f..0000000000 --- a/doc/howto/source/pserver/server.rst +++ /dev/null @@ -1,12 +0,0 @@ -Server -====== - -ProtoServer ------------ -.. doxygenclass:: paddle::ProtoServer - :members: - -ParameterServer2 ----------------- -.. doxygenclass:: paddle::ParameterServer2 - :members: diff --git a/doc/howto/source/trainer.rst b/doc/howto/source/trainer.rst deleted file mode 100644 index 85f1feb4fc..0000000000 --- a/doc/howto/source/trainer.rst +++ /dev/null @@ -1,32 +0,0 @@ -Trainer -======= - -TrainerStats ------------- - -.. doxygenclass:: paddle::TrainerStats - :members: - -RemoteParameterUpdater ------------------------ - -.. doxygenclass:: paddle::RemoteParameterUpdater - :members: - -ConcurrentRemoteParameterUpdater --------------------------------- - -.. doxygenclass:: paddle::ConcurrentRemoteParameterUpdater - :members: - -SparseRemoteParameterUpdater ----------------------------- - -.. doxygenclass:: paddle::SparseRemoteParameterUpdater - :members: - -SparseRemoteParameterUpdaterComposite -------------------------------------- - -.. doxygenclass:: paddle::SparseRemoteParameterUpdaterComposite - :members: diff --git a/doc/howto/source/utils/customStackTrace.rst b/doc/howto/source/utils/customStackTrace.rst deleted file mode 100644 index cdc8930739..0000000000 --- a/doc/howto/source/utils/customStackTrace.rst +++ /dev/null @@ -1,4 +0,0 @@ -CustomStackTrace -================ -.. 
doxygenclass:: paddle::CustomStackTrace - :members: diff --git a/doc/howto/source/utils/enum.rst b/doc/howto/source/utils/enum.rst deleted file mode 100644 index e0da75afe1..0000000000 --- a/doc/howto/source/utils/enum.rst +++ /dev/null @@ -1,3 +0,0 @@ -Enumeration wrapper -=================== -.. doxygennamespace:: paddle::enumeration_wrapper diff --git a/doc/howto/source/utils/index.rst b/doc/howto/source/utils/index.rst deleted file mode 100644 index 7ddc47d172..0000000000 --- a/doc/howto/source/utils/index.rst +++ /dev/null @@ -1,11 +0,0 @@ -Utils -===== - -.. toctree:: - :maxdepth: 2 - - lock.rst - queue.rst - thread.rst - customStackTrace.rst - enum.rst diff --git a/doc/howto/source/utils/lock.rst b/doc/howto/source/utils/lock.rst deleted file mode 100644 index f011acb943..0000000000 --- a/doc/howto/source/utils/lock.rst +++ /dev/null @@ -1,32 +0,0 @@ -Lock -==== - -RWLock ------- -.. doxygenclass:: paddle::RWLock - :members: - -ReadLockGuard -------------- -.. doxygenclass:: paddle::ReadLockGuard - :members: - -SpinLock --------- -.. doxygenclass:: paddle::SpinLock - :members: - -Semaphore ---------- -.. doxygenclass:: paddle::Semaphore - :members: - -ThreadBarrier -------------- -.. doxygenclass:: paddle::ThreadBarrier - :members: - -LockedCondition ---------------- -.. doxygenclass:: paddle::LockedCondition - :members: diff --git a/doc/howto/source/utils/queue.rst b/doc/howto/source/utils/queue.rst deleted file mode 100644 index 98192648e2..0000000000 --- a/doc/howto/source/utils/queue.rst +++ /dev/null @@ -1,12 +0,0 @@ -Queue -===== - -Queue ------ -.. doxygenclass:: paddle::Queue - :members: - -BlockingQueue -------------- -.. doxygenclass:: paddle::BlockingQueue - :members: diff --git a/doc/howto/source/utils/thread.rst b/doc/howto/source/utils/thread.rst deleted file mode 100644 index 23d379a989..0000000000 --- a/doc/howto/source/utils/thread.rst +++ /dev/null @@ -1,27 +0,0 @@ -Thread -====== - -Thread ------- -.. doxygenclass:: paddle::Thread - :members: - -ThreadWorker ------------- -.. doxygenclass:: paddle::ThreadWorker - :members: - -SyncThreadPool --------------- -.. doxygenclass:: paddle::SyncThreadPool - :members: - -MultiThreadWorker ------------------ -.. doxygenclass:: paddle::MultiThreadWorker - :members: - -AsyncThreadPool ---------------- -.. doxygenclass:: paddle::AsyncThreadPool - :members: diff --git a/paddle/scripts/docker/Dockerfile b/paddle/scripts/docker/Dockerfile index edb84712d8..6f0554caa5 100644 --- a/paddle/scripts/docker/Dockerfile +++ b/paddle/scripts/docker/Dockerfile @@ -11,7 +11,7 @@ RUN apt-get update \ clang-3.8 llvm-3.8 libclang-3.8-dev \ && apt-get clean -y RUN pip install -U BeautifulSoup docopt PyYAML pillow \ - sphinx sphinx_rtd_theme breathe recommonmark + sphinx sphinx_rtd_theme recommonmark # cmake tends to hide and blur the dependencies between code modules, as # noted here https://github.com/PaddlePaddle/Paddle/issues/763. We are diff --git a/paddle/scripts/docker/Dockerfile.gpu b/paddle/scripts/docker/Dockerfile.gpu index 5d175e15a7..c707d3b553 100644 --- a/paddle/scripts/docker/Dockerfile.gpu +++ b/paddle/scripts/docker/Dockerfile.gpu @@ -11,7 +11,7 @@ RUN apt-get update \ clang-3.8 llvm-3.8 libclang-3.8-dev \ && apt-get clean -y RUN pip install -U BeautifulSoup docopt PyYAML pillow \ - sphinx sphinx_rtd_theme breathe recommonmark + sphinx sphinx_rtd_theme recommonmark # cmake tends to hide and blur the dependencies between code modules, as # noted here https://github.com/PaddlePaddle/Paddle/issues/763. 
We are diff --git a/paddle/scripts/tools/build_docs/Dockerfile b/paddle/scripts/tools/build_docs/Dockerfile index 506b13210b..78dc756bd1 100644 --- a/paddle/scripts/tools/build_docs/Dockerfile +++ b/paddle/scripts/tools/build_docs/Dockerfile @@ -3,5 +3,5 @@ COPY build.sh / RUN pip install sphinx &&\ pip install sphinx_rtd_theme &&\ apt install -y doxygen graphviz &&\ - pip install breathe recommonmark numpy protobuf==2.6.1 + pip install recommonmark numpy protobuf==2.6.1 CMD /build.sh From b9579a5f01ee8fcf28c18081f65fe8d2b95cb03a Mon Sep 17 00:00:00 2001 From: Yu Yang Date: Fri, 9 Dec 2016 17:27:41 +0800 Subject: [PATCH 067/265] Fix dead link after rearrange english documentation. * Fix #803 --- doc/api/data_provider/pydataprovider2_en.rst | 4 +- doc/api/index_en.rst | 32 ++++++++-------- .../trainer_config_helpers/data_sources.rst | 2 + doc/api/trainer_config_helpers/layers.rst | 12 ++++++ doc/api/trainer_config_helpers/networks.rst | 2 + doc/conf.py.in | 1 + doc/howto/cmd_parameter/index_en.md | 3 ++ doc/tutorials/rec/ml_dataset_en.md | 5 +++ doc/tutorials/rec/ml_regression_en.rst | 37 +++++++------------ 9 files changed, 58 insertions(+), 40 deletions(-) diff --git a/doc/api/data_provider/pydataprovider2_en.rst b/doc/api/data_provider/pydataprovider2_en.rst index b42cbca576..083436e271 100644 --- a/doc/api/data_provider/pydataprovider2_en.rst +++ b/doc/api/data_provider/pydataprovider2_en.rst @@ -1,5 +1,7 @@ +.. _api_pydataprovider: + PyDataProvider2 -================= +=============== We highly recommand users to use PyDataProvider2 to provide training or testing data to PaddlePaddle. The user only needs to focus on how to read a single diff --git a/doc/api/index_en.rst b/doc/api/index_en.rst index 9930f93e10..6fdee9f928 100644 --- a/doc/api/index_en.rst +++ b/doc/api/index_en.rst @@ -1,35 +1,37 @@ API -==== +=== DataProvider API ---------------- .. toctree:: - :maxdepth: 1 + :maxdepth: 1 - data_provider/index_en.rst - data_provider/pydataprovider2_en.rst + data_provider/index_en.rst + data_provider/pydataprovider2_en.rst + +.. _api_trainer_config: Model Config API ---------------- .. toctree:: - :maxdepth: 1 + :maxdepth: 1 - trainer_config_helpers/optimizers.rst - trainer_config_helpers/data_sources.rst - trainer_config_helpers/layers.rst - trainer_config_helpers/activations.rst - trainer_config_helpers/poolings.rst - trainer_config_helpers/networks.rst - trainer_config_helpers/evaluators.rst - trainer_config_helpers/attrs.rst + trainer_config_helpers/optimizers.rst + trainer_config_helpers/data_sources.rst + trainer_config_helpers/layers.rst + trainer_config_helpers/activations.rst + trainer_config_helpers/poolings.rst + trainer_config_helpers/networks.rst + trainer_config_helpers/evaluators.rst + trainer_config_helpers/attrs.rst Applications API ---------------- .. toctree:: - :maxdepth: 1 + :maxdepth: 1 - predict/swig_py_paddle_en.rst + predict/swig_py_paddle_en.rst diff --git a/doc/api/trainer_config_helpers/data_sources.rst b/doc/api/trainer_config_helpers/data_sources.rst index 44ea59df43..b9dd4dda01 100644 --- a/doc/api/trainer_config_helpers/data_sources.rst +++ b/doc/api/trainer_config_helpers/data_sources.rst @@ -1,3 +1,5 @@ +.. 
_api_trainer_config_helpers_data_sources: + DataSources =========== diff --git a/doc/api/trainer_config_helpers/layers.rst b/doc/api/trainer_config_helpers/layers.rst index b487b739a7..12a75080d0 100644 --- a/doc/api/trainer_config_helpers/layers.rst +++ b/doc/api/trainer_config_helpers/layers.rst @@ -20,6 +20,8 @@ LayerOutput Data layer =========== +.. _api_trainer_config_helpers_layers_data_layer: + data_layer ---------- .. automodule:: paddle.trainer_config_helpers.layers @@ -29,6 +31,8 @@ data_layer Fully Connected Layers ====================== +.. _api_trainer_config_helpers_layers_fc_layer: + fc_layer -------- .. automodule:: paddle.trainer_config_helpers.layers @@ -68,6 +72,8 @@ img_conv_layer :members: img_conv_layer :noindex: +.. _api_trainer_config_helpers_layers_context_projection: + context_projection ------------------ .. automodule:: paddle.trainer_config_helpers.layers @@ -185,6 +191,8 @@ mixed_layer :members: mixed_layer :noindex: +.. _api_trainer_config_helpers_layers_embedding_layer: + embedding_layer --------------- .. automodule:: paddle.trainer_config_helpers.layers @@ -237,6 +245,8 @@ trans_full_matrix_projection Aggregate Layers ================ +.. _api_trainer_config_helpers_layers_pooling_layer: + pooling_layer ------------- .. automodule:: paddle.trainer_config_helpers.layers @@ -333,6 +343,8 @@ tensor_layer :members: tensor_layer :noindex: +.. _api_trainer_config_helpers_layers_cos_sim: + cos_sim ------- .. automodule:: paddle.trainer_config_helpers.layers diff --git a/doc/api/trainer_config_helpers/networks.rst b/doc/api/trainer_config_helpers/networks.rst index 29c52c5ce3..e13c368051 100644 --- a/doc/api/trainer_config_helpers/networks.rst +++ b/doc/api/trainer_config_helpers/networks.rst @@ -13,6 +13,8 @@ sequence_conv_pool :members: sequence_conv_pool :noindex: +.. _api_trainer_config_helpers_network_text_conv_pool: + text_conv_pool -------------- .. automodule:: paddle.trainer_config_helpers.networks diff --git a/doc/conf.py.in b/doc/conf.py.in index 5fb307e3a9..01d156e887 100644 --- a/doc/conf.py.in +++ b/doc/conf.py.in @@ -144,5 +144,6 @@ def setup(app): # no c++ API for now app.add_config_value('recommonmark_config', { 'url_resolver': lambda url: github_doc_root + url, + 'enable_eval_rst': True, }, True) app.add_transform(AutoStructify) diff --git a/doc/howto/cmd_parameter/index_en.md b/doc/howto/cmd_parameter/index_en.md index bd16affdd8..a6c236db61 100644 --- a/doc/howto/cmd_parameter/index_en.md +++ b/doc/howto/cmd_parameter/index_en.md @@ -1,3 +1,6 @@ +```eval_rst +.. _cmd_line_index: +``` # How to Set Command-line Parameters * [Use Case](use_case_en.md) diff --git a/doc/tutorials/rec/ml_dataset_en.md b/doc/tutorials/rec/ml_dataset_en.md index c93a4585e4..73879d6537 100644 --- a/doc/tutorials/rec/ml_dataset_en.md +++ b/doc/tutorials/rec/ml_dataset_en.md @@ -1,3 +1,8 @@ +```eval_rst +.. _demo_ml_dataset: + +``` + # MovieLens Dataset The [MovieLens Dataset](http://grouplens.org/datasets/movielens/) was collected by GroupLens Research. diff --git a/doc/tutorials/rec/ml_regression_en.rst b/doc/tutorials/rec/ml_regression_en.rst index 0c14e4f5bb..b9c2f8cb59 100644 --- a/doc/tutorials/rec/ml_regression_en.rst +++ b/doc/tutorials/rec/ml_regression_en.rst @@ -16,7 +16,7 @@ Data Preparation ```````````````` Download and extract dataset '''''''''''''''''''''''''''' -We use `movielens 1m dataset `_ here. +We use :ref:`demo_ml_dataset` here. To download and unzip the dataset, simply run the following commands. .. 
code-block:: bash @@ -239,26 +239,16 @@ Then we combine each features of movie into one movie feature by a get one user feature. Then we calculate the cosine similarity of these two features. -In these network, we use several api in `trainer_config_helpers -<../../ui/api/trainer_config_helpers/index.html>`_. There are - -* Data Layer, `data_layer - <../../ui/api/trainer_config_helpers/layers.html#id1>`_ -* Fully Connected Layer, `fc_layer - <../../ui/api/trainer_config_helpers/layers.html#fc-layer>`_ -* Embedding Layer, `embedding_layer - <../../ui/api/trainer_config_helpers/layers.html#embedding-layer>`_ -* Context Projection Layer, `context_projection - <../../ui/api/trainer_config_helpers/layers.html#context-projection>`_ -* Pooling Layer, `pooling_layer - <../../ui/api/trainer_config_helpers/layers.html#pooling-layer>`_ -* Cosine Similarity Layer, `cos_sim - <../../ui/api/trainer_config_helpers/layers.html#cos-sim>`_ -* Text Convolution Pooling Layer, `text_conv_pool - <../../ui/api/trainer_config_helpers/networks.html - #trainer_config_helpers.networks.text_conv_pool>`_ -* Declare Python Data Sources, `define_py_data_sources2 - <../../ui/api/trainer_config_helpers/data_sources.html>`_ +In these network, we use several api in :ref:`api_trainer_config` . There are + +* Data Layer, :ref:`api_trainer_config_helpers_layers_data_layer` +* Fully Connected Layer, :ref:`api_trainer_config_helpers_layers_fc_layer` +* Embedding Layer, :ref:`api_trainer_config_helpers_layers_embedding_layer` +* Context Projection Layer, :ref:`api_trainer_config_helpers_layers_context_projection` +* Pooling Layer, :ref:`api_trainer_config_helpers_layers_pooling_layer` +* Cosine Similarity Layer, :ref:`api_trainer_config_helpers_layers_cos_sim` +* Text Convolution Pooling Layer, :ref:`api_trainer_config_helpers_network_text_conv_pool` +* Declare Python Data Sources :ref:`api_trainer_config_helpers_data_sources`. Data Provider ''''''''''''' @@ -274,7 +264,7 @@ In this :code:`dataprovider.py`, we should set\: * use_seq\: Whether this :code:`dataprovider.py` in sequence mode or not. * process\: Return each sample of data to :code:`paddle`. -The data provider details document see `there <../../ui/data_provider/pydataprovider2.html>`_. +The data provider details document see :ref:`api_pydataprovider`. Train ````` @@ -290,8 +280,7 @@ The run.sh is shown as follow: It just start a paddle training process, write the log to `log.txt`, then print it on screen. -Each command line argument in :code:`run.sh`, please refer to the `command line -arguments <../../ui/index.html#command-line-argument>`_ page. The short description of these arguments is shown as follow. +Each command line argument in :code:`run.sh`, please refer to the :ref:`cmd_line_index` page. The short description of these arguments is shown as follow. * config\: Tell paddle which file is neural network configuration. 
* save_dir\: Tell paddle save model into './output' From b8595196292378c596e4663a843612573588a0e4 Mon Sep 17 00:00:00 2001 From: livc Date: Fri, 9 Dec 2016 17:39:04 +0800 Subject: [PATCH 068/265] change filename to index_cn.md --- .../semantic_role_labeling/index_cn.md | 201 ++++++++++++++++++ 1 file changed, 201 insertions(+) create mode 100644 doc/tutorials/semantic_role_labeling/index_cn.md diff --git a/doc/tutorials/semantic_role_labeling/index_cn.md b/doc/tutorials/semantic_role_labeling/index_cn.md new file mode 100644 index 0000000000..f3c855a9fd --- /dev/null +++ b/doc/tutorials/semantic_role_labeling/index_cn.md @@ -0,0 +1,201 @@ +# 语义角色标注教程 # + +语义角色标注(Semantic role labeling, SRL)是浅语义解析的一种形式,其目的是在给定的输入句子中发现每个谓词的谓词参数结构。 SRL作为很多自然语言处理任务中的中间步骤是很有用的,如信息提取、文档自动分类和问答。 实例如下 [1]: + + [ A0 他 ] [ AM-MOD 将 ][ AM-NEG 不会 ] [ V 接受] [ A1 任何东西 ] 从 [A2 那些他写的东西中 ]。 + +- V: 动词 +- A0: 接受者 +- A1: 接受的东西 +- A2: 从……接受 +- A3: 属性 +- AM-MOD: 情态动词 +- AM-NEG: 否定 + +给定动词“接受”,句子中的大部分将会扮演某些语义角色。这里,标签方案来自 Penn Proposition Bank。 + +到目前为止,大多数成功的SRL系统是建立在某种形式的解析结果之上的,其中在语法结构上使用了预先定义的特征模板。 本教程将介绍使用深度双向长短期记忆(DB-LSTM)模型[2]的端到端系统来解决SRL任务,这在很大程度上优于先前的最先进的系统。 这个系统将SRL任务视为序列标记问题。 + +## 数据描述 +相关论文[2]采用 CoNLL-2005&2012 共享任务中设置的数据进行训练和测试。根据数据许可证,演示采用 CoNLL-2005 的测试数据集,可以在网站上找到。 + +用户只需执行以下命令就可以下载并处理原始数据: + +```bash +cd data +./get_data.sh +``` +`data `目录会出现如下几个新的文件: +```bash +conll05st-release:the test data set of CoNll-2005 shared task +test.wsj.words:the Wall Street Journal data sentences +test.wsj.props: the propositional arguments +feature: the extracted features from data set +``` + +## 训练 +### DB-LSTM +请参阅情绪分析的演示以了解有关长期短期记忆单元的更多信息。 + +与在 Sentiment Analysis 演示中使用的 Bidirectional-LSTM 不同,DB-LSTM 采用另一种方法来堆叠LSTM层。首先,标准LSTM以正向处理该序列。该 LSTM 层的输入和输出作为下一个 LSTM 层的输入,并被反向处理。这两个标准 LSTM 层组成一对 LSTM。然后我们堆叠一对对的 LSTM 层后得到深度 LSTM 模型。 + +下图展示了时间扩展的2层 DB-LSTM 网络。 +
+![pic](./network_arch.png) +
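The stacking scheme described above can be summarized in a few lines. The sketch below is only an illustration of the wiring, assuming a hypothetical `lstm_layer` placeholder in place of a real LSTM; it is not the demo's configuration code. Each layer consumes the previous layer's output, and the processing direction alternates layer by layer.

```python
def lstm_layer(sequence, reverse=False):
    # Hypothetical placeholder: a real LSTM would return hidden states here.
    # It only models the direction in which the sequence is consumed.
    return list(reversed(sequence)) if reverse else list(sequence)

def db_lstm_stack(sequence, depth=4):
    # Alternate forward / backward layers; every layer feeds the next one.
    hidden = sequence
    for layer_index in range(depth):
        hidden = lstm_layer(hidden, reverse=(layer_index % 2 == 1))
    return hidden

print(db_lstm_stack(["w0", "w1", "w2", "w3"], depth=2))
```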
+ +### 特征 +两个输入特性在这个管道中起着至关重要的作用:predicate(pred)和argument(arguments)。 还采用了两个其他特征:谓词上下文(ctx-p)和区域标记(mr)。 因为单个谓词不能精确地描述谓词信息,特别是当相同的词在句子中出现多于一次时。 使用谓词上下文,可以在很大程度上消除歧义。类似地,如果它位于谓词上下文区域中,则使用区域标记 mr = 1 来表示参数位置,反之则 mr = 0。这四个简单的特征是我们的SRL系统所需要的。上下文大小设置为1的一个样本的特征如下[2]所示: +
+![pic](./feature.jpg) +
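The four features can be reproduced with a short script. The sketch below is a hypothetical illustration rather than the demo's actual extraction code (the function and variable names are assumptions): given the sentence shown in the figure above, a predicate position, and a context window of 1, it emits one row of argument, predicate, predicate context (ctx-p), and region mark (mr) per word.

```python
def extract_features(words, predicate_index, window=1):
    predicate = words[predicate_index]
    lo = max(0, predicate_index - window)               # left edge of the context region
    hi = min(len(words), predicate_index + window + 1)  # right edge (exclusive)
    context = " ".join(words[lo:hi])                    # ctx-p with window = 1
    rows = []
    for i, word in enumerate(words):
        mr = 1 if lo <= i < hi else 0                   # inside the predicate-context region?
        rows.append((word, predicate, context, mr))
    return rows

sentence = "A record date has n't been set .".split()
for row in extract_features(sentence, predicate_index=6):  # predicate: "set"
    print(row)
```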
+ +在这个示例中,相应的标记句子是: + +[ A1 A record date ] has [ AM-NEG n't ] been [ V set ] . + +在演示中, 我们采用上面的特征模板, 包括: `argument`, `predicate`, `ctx-p (p=-1,0,1)`, `mark` 并使用 `B/I/O` 方案来标记每个参数。这些特征和标签存储在 `feature` 文件中, 用`\t`分割。 + +### 数据提供 + +`dataprovider.py` 是一个包装数据的 Python 文件。 函数 `hook()` 定义了网络的数据槽。六个特征和标签都是索引槽。 +``` +def hook(settings, word_dict, label_dict, **kwargs): + settings.word_dict = word_dict + settings.label_dict = label_dict + #all inputs are integral and sequential type + settings.slots = [ + integer_value_sequence(len(word_dict)), + integer_value_sequence(len(predicate_dict)), + integer_value_sequence(len(word_dict)), + integer_value_sequence(len(word_dict)), + integer_value_sequence(len(word_dict)), + integer_value_sequence(len(word_dict)), + integer_value_sequence(len(word_dict)), + integer_value_sequence(2), + integer_value_sequence(len(label_dict))] +``` +相应的数据迭代器如下: +``` +@provider(init_hook=hook, should_shuffle=True, calc_batch_size=get_batch_size, + can_over_batch_size=False, cache=CacheType.CACHE_PASS_IN_MEM) +def process(settings, file_name): + with open(file_name, 'r') as fdata: + for line in fdata: + sentence, predicate, ctx_n2, ctx_n1, ctx_0, ctx_p1, ctx_p2, mark, label = \ + line.strip().split('\t') + + words = sentence.split() + sen_len = len(words) + word_slot = [settings.word_dict.get(w, UNK_IDX) for w in words] + + predicate_slot = [settings.predicate_dict.get(predicate)] * sen_len + ctx_n2_slot = [settings.word_dict.get(ctx_n2, UNK_IDX)] * sen_len + ctx_n1_slot = [settings.word_dict.get(ctx_n1, UNK_IDX)] * sen_len + ctx_0_slot = [settings.word_dict.get(ctx_0, UNK_IDX)] * sen_len + ctx_p1_slot = [settings.word_dict.get(ctx_p1, UNK_IDX)] * sen_len + ctx_p2_slot = [settings.word_dict.get(ctx_p2, UNK_IDX)] * sen_len + + marks = mark.split() + mark_slot = [int(w) for w in marks] + + label_list = label.split() + label_slot = [settings.label_dict.get(w) for w in label_list] + yield word_slot, predicate_slot, ctx_n2_slot, ctx_n1_slot, \ + ctx_0_slot, ctx_p1_slot, ctx_p2_slot, mark_slot, label_slot +``` +函数 `process` 产出有8个特征和标签的9个表。 + +### 神经网络配置 + +`db_lstm.py` 是在训练过程中加载字典并定义数据提供程序模块和网络架构的神经网络配置文件。 + +九个 `data_layer` 从数据提供程序加载实例。八个特征分别转换为嵌入,并由`mixed_layer`混合。 深度双向LSTM层提取softmax层的特征。目标函数是标签的交叉熵。 + +### 训练 +训练的脚本是 `train.sh`,用户只需执行: +```bash + ./train.sh +``` +`train.sh` 中的内容: +``` +paddle train \ + --config=./db_lstm.py \ + --use_gpu=0 \ + --log_period=5000 \ + --trainer_count=1 \ + --show_parameter_stats_period=5000 \ + --save_dir=./output \ + --num_passes=10000 \ + --average_test_period=10000000 \ + --init_model_path=./data \ + --load_missing_parameter_strategy=rand \ + --test_all_data_in_one_period=1 \ +2>&1 | tee 'train.log' +``` + +- \--config=./db_lstm.py : 网络配置文件 +- \--use_gpu=false: 使用 CPU 训练(如果已安装 PaddlePaddle GPU版本并想使用 GPU 训练可以设置为true,目前 crf_layer 不支持 GPU) +- \--log_period=500: 每20批(batch)输出日志 +- \--trainer_count=1: 设置线程数(或 GPU 数) +- \--show_parameter_stats_period=5000: 每100批显示参数统计 +- \--save_dir=./output: 模型输出路径 +- \--num_passes=10000: 设置通过数,一次通过意味着PaddlePaddle训练数据集中的所有样本一次 +- \--average_test_period=10000000: 每个 average_test_period 批次对平均参数进行测试 +- \--init_model_path=./data: 参数初始化路径 +- \--load_missing_parameter_strategy=rand: 随机初始不存在的参数 +- \--test_all_data_in_one_period=1: 在一个周期内测试所有数据 + + +训练后,模型将保存在目录`output`中。 我们的训练曲线如下: +
+![pic](./curve.jpg) +
+ +### 测试 +测试脚本是 `test.sh`, 执行: +```bash + ./test.sh +``` +`tesh.sh` 的主要部分: +``` +paddle train \ + --config=./db_lstm.py \ + --model_list=$model_list \ + --job=test \ + --config_args=is_test=1 \ +``` + + - \--config=./db_lstm.py: 网络配置文件 + - \--model_list=$model_list.list: 模型列表文件 + - \--job=test: 指示测试任务 + - \--config_args=is_test=1: 指示测试任务的标记 + - \--test_all_data_in_one_period=1: 在一个周期内测试所有数据 + + +### 预测 +预测脚本是 `predict.sh`,用户只需执行: +```bash + ./predict.sh + +``` +在`predict.sh`中,用户应该提供网络配置文件,模型路径,标签文件,字典文件,特征文件。 +``` +python predict.py + -c $config_file \ + -w $best_model_path \ + -l $label_file \ + -p $predicate_dict_file \ + -d $dict_file \ + -i $input_file \ + -o $output_file +``` + +`predict.py` 是主要的可执行python脚本,其中包括函数:加载模型,加载数据,数据预测。网络模型将输出标签的概率分布。 在演示中,我们使用最大概率的标签作为结果。用户还可以根据概率分布矩阵实现集束搜索或维特比解码。 + +预测后,结果保存在 `predict.res` 中。 + +## 引用 +[1] Martha Palmer, Dan Gildea, and Paul Kingsbury. The Proposition Bank: An Annotated Corpus of Semantic Roles , Computational Linguistics, 31(1), 2005. + +[2] Zhou, Jie, and Wei Xu. "End-to-end learning of semantic role labeling using recurrent neural networks." Proceedings of the Annual Meeting of the Association for Computational Linguistics. 2015. From aa0e415f96cb2aa95a0bb64cd68b58bd1218f7fc Mon Sep 17 00:00:00 2001 From: livc Date: Fri, 9 Dec 2016 17:57:35 +0800 Subject: [PATCH 069/265] Modify the translation details --- .../semantic_role_labeling/index_cn.md | 26 +++++++++---------- 1 file changed, 13 insertions(+), 13 deletions(-) diff --git a/doc/tutorials/semantic_role_labeling/index_cn.md b/doc/tutorials/semantic_role_labeling/index_cn.md index f3c855a9fd..7a3eff90e3 100644 --- a/doc/tutorials/semantic_role_labeling/index_cn.md +++ b/doc/tutorials/semantic_role_labeling/index_cn.md @@ -1,8 +1,8 @@ # 语义角色标注教程 # -语义角色标注(Semantic role labeling, SRL)是浅语义解析的一种形式,其目的是在给定的输入句子中发现每个谓词的谓词参数结构。 SRL作为很多自然语言处理任务中的中间步骤是很有用的,如信息提取、文档自动分类和问答。 实例如下 [1]: +语义角色标注(Semantic role labeling, SRL)是浅层语义解析的一种形式,其目的是在给定的输入句子中发现每个谓词的谓词参数结构。 SRL作为很多自然语言处理任务中的中间步骤是很有用的,如信息提取、文档自动分类和问答。 实例如下 [1]: - [ A0 他 ] [ AM-MOD 将 ][ AM-NEG 不会 ] [ V 接受] [ A1 任何东西 ] 从 [A2 那些他写的东西中 ]。 + [ A0 He ] [ AM-MOD would ][ AM-NEG n’t ] [ V accept] [ A1 anything of value ] from [A2 those he was writing about ]. 
- V: 动词 - A0: 接受者 @@ -12,12 +12,12 @@ - AM-MOD: 情态动词 - AM-NEG: 否定 -给定动词“接受”,句子中的大部分将会扮演某些语义角色。这里,标签方案来自 Penn Proposition Bank。 +给定动词“accept”,句子中的大部分将会扮演某些语义角色。这里,标签方案来自 Penn Proposition Bank。 -到目前为止,大多数成功的SRL系统是建立在某种形式的解析结果之上的,其中在语法结构上使用了预先定义的特征模板。 本教程将介绍使用深度双向长短期记忆(DB-LSTM)模型[2]的端到端系统来解决SRL任务,这在很大程度上优于先前的最先进的系统。 这个系统将SRL任务视为序列标记问题。 +到目前为止,大多数成功的SRL系统是建立在某种形式的解析结果之上的,其中在语法结构上使用了预先定义的特征模板。 本教程将介绍使用深度双向长短期记忆(DB-LSTM)模型[2]的端到端系统来解决SRL任务,这在很大程度上优于先前的最先进的系统。 这个系统将SRL任务视为序列标注问题。 ## 数据描述 -相关论文[2]采用 CoNLL-2005&2012 共享任务中设置的数据进行训练和测试。根据数据许可证,演示采用 CoNLL-2005 的测试数据集,可以在网站上找到。 +相关论文[2]采用 CoNLL-2005&2012 共享任务中设置的数据进行训练和测试。由于数据许可的原因,演示采用 CoNLL-2005 的测试数据集,可以在网站上找到。 用户只需执行以下命令就可以下载并处理原始数据: @@ -35,7 +35,7 @@ feature: the extracted features from data set ## 训练 ### DB-LSTM -请参阅情绪分析的演示以了解有关长期短期记忆单元的更多信息。 +请参阅情感分析的演示以了解有关长期短期记忆单元的更多信息。 与在 Sentiment Analysis 演示中使用的 Bidirectional-LSTM 不同,DB-LSTM 采用另一种方法来堆叠LSTM层。首先,标准LSTM以正向处理该序列。该 LSTM 层的输入和输出作为下一个 LSTM 层的输入,并被反向处理。这两个标准 LSTM 层组成一对 LSTM。然后我们堆叠一对对的 LSTM 层后得到深度 LSTM 模型。 @@ -45,7 +45,7 @@ feature: the extracted features from data set ### 特征 -两个输入特性在这个管道中起着至关重要的作用:predicate(pred)和argument(arguments)。 还采用了两个其他特征:谓词上下文(ctx-p)和区域标记(mr)。 因为单个谓词不能精确地描述谓词信息,特别是当相同的词在句子中出现多于一次时。 使用谓词上下文,可以在很大程度上消除歧义。类似地,如果它位于谓词上下文区域中,则使用区域标记 mr = 1 来表示参数位置,反之则 mr = 0。这四个简单的特征是我们的SRL系统所需要的。上下文大小设置为1的一个样本的特征如下[2]所示: +两个输入特性在这个流程中起着至关重要的作用:predicate(pred)和argument(arguments)。 还采用了两个其他特征:谓词上下文(ctx-p)和区域标记(mr)。 因为单个谓词不能精确地描述谓词信息,特别是当相同的词在句子中出现多于一次时。 使用谓词上下文,可以在很大程度上消除歧义。类似地,如果它位于谓词上下文区域中,则使用区域标记 mr = 1 来表示参数位置,反之则 mr = 0。这四个简单的特征是我们的SRL系统所需要的。上下文大小设置为1的一个样本的特征如下[2]所示:
![pic](./feature.jpg)
@@ -104,13 +104,13 @@ def process(settings, file_name): yield word_slot, predicate_slot, ctx_n2_slot, ctx_n1_slot, \ ctx_0_slot, ctx_p1_slot, ctx_p2_slot, mark_slot, label_slot ``` -函数 `process` 产出有8个特征和标签的9个表。 +函数 `process` 返回8个特征list和1个标签list。 ### 神经网络配置 `db_lstm.py` 是在训练过程中加载字典并定义数据提供程序模块和网络架构的神经网络配置文件。 -九个 `data_layer` 从数据提供程序加载实例。八个特征分别转换为嵌入,并由`mixed_layer`混合。 深度双向LSTM层提取softmax层的特征。目标函数是标签的交叉熵。 +九个 `data_layer` 从数据提供程序加载实例。八个特征分别转换为向量,并由`mixed_layer`混合。 深度双向LSTM层提取softmax层的特征。目标函数是标签的交叉熵。 ### 训练 训练的脚本是 `train.sh`,用户只需执行: @@ -136,11 +136,11 @@ paddle train \ - \--config=./db_lstm.py : 网络配置文件 - \--use_gpu=false: 使用 CPU 训练(如果已安装 PaddlePaddle GPU版本并想使用 GPU 训练可以设置为true,目前 crf_layer 不支持 GPU) -- \--log_period=500: 每20批(batch)输出日志 +- \--log_period=500: 每20个batch输出日志 - \--trainer_count=1: 设置线程数(或 GPU 数) -- \--show_parameter_stats_period=5000: 每100批显示参数统计 +- \--show_parameter_stats_period=5000: 每100个batch显示参数统计 - \--save_dir=./output: 模型输出路径 -- \--num_passes=10000: 设置通过数,一次通过意味着PaddlePaddle训练数据集中的所有样本一次 +- \--num_passes=10000: 设置数据遍历次数,一个pass意味着PaddlePaddle训练数据集中的所有样本被遍历一次 - \--average_test_period=10000000: 每个 average_test_period 批次对平均参数进行测试 - \--init_model_path=./data: 参数初始化路径 - \--load_missing_parameter_strategy=rand: 随机初始不存在的参数 @@ -191,7 +191,7 @@ python predict.py -o $output_file ``` -`predict.py` 是主要的可执行python脚本,其中包括函数:加载模型,加载数据,数据预测。网络模型将输出标签的概率分布。 在演示中,我们使用最大概率的标签作为结果。用户还可以根据概率分布矩阵实现集束搜索或维特比解码。 +`predict.py` 是主要的可执行python脚本,其中包括函数:加载模型,加载数据,数据预测。网络模型将输出标签的概率分布。 在演示中,我们使用最大概率的标签作为结果。用户还可以根据概率分布矩阵实现柱搜索或维特比解码。 预测后,结果保存在 `predict.res` 中。 From 9aa88426e1d78debe4853a3ac909868f590e9b0c Mon Sep 17 00:00:00 2001 From: zhangjcqq <664122220@qq.com> Date: Fri, 9 Dec 2016 19:02:43 +0800 Subject: [PATCH 070/265] modify somewhere --- doc/tutorials/semantic_role_labeling/index_cn.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/doc/tutorials/semantic_role_labeling/index_cn.md b/doc/tutorials/semantic_role_labeling/index_cn.md index 7a3eff90e3..c7e0a78f50 100644 --- a/doc/tutorials/semantic_role_labeling/index_cn.md +++ b/doc/tutorials/semantic_role_labeling/index_cn.md @@ -1,6 +1,6 @@ # 语义角色标注教程 # -语义角色标注(Semantic role labeling, SRL)是浅层语义解析的一种形式,其目的是在给定的输入句子中发现每个谓词的谓词参数结构。 SRL作为很多自然语言处理任务中的中间步骤是很有用的,如信息提取、文档自动分类和问答。 实例如下 [1]: +语义角色标注(Semantic role labeling, SRL)是浅层语义解析的一种形式,其目的是在给定的输入句子中发现每个谓词的谓词论元结构。 SRL作为很多自然语言处理任务中的中间步骤是很有用的,如信息提取、文档自动分类和问答。 实例如下 [1]: [ A0 He ] [ AM-MOD would ][ AM-NEG n’t ] [ V accept] [ A1 anything of value ] from [A2 those he was writing about ]. 
@@ -12,9 +12,9 @@ - AM-MOD: 情态动词 - AM-NEG: 否定 -给定动词“accept”,句子中的大部分将会扮演某些语义角色。这里,标签方案来自 Penn Proposition Bank。 +给定动词“accept”,句子中的组块将会扮演某些语义角色。这里,标签方案来自 Penn Proposition Bank。 -到目前为止,大多数成功的SRL系统是建立在某种形式的解析结果之上的,其中在语法结构上使用了预先定义的特征模板。 本教程将介绍使用深度双向长短期记忆(DB-LSTM)模型[2]的端到端系统来解决SRL任务,这在很大程度上优于先前的最先进的系统。 这个系统将SRL任务视为序列标注问题。 +到目前为止,大多数成功的SRL系统是建立在某种形式的句法分析结果之上的,使用了基于句法结构的预定义特征模板。 本教程将介绍使用深度双向长短期记忆(DB-LSTM)模型[2]的端到端系统来解决SRL任务,这在很大程度上优于先前的最先进的系统。 这个系统将SRL任务视为序列标注问题。 ## 数据描述 相关论文[2]采用 CoNLL-2005&2012 共享任务中设置的数据进行训练和测试。由于数据许可的原因,演示采用 CoNLL-2005 的测试数据集,可以在网站上找到。 @@ -45,7 +45,7 @@ feature: the extracted features from data set ### 特征 -两个输入特性在这个流程中起着至关重要的作用:predicate(pred)和argument(arguments)。 还采用了两个其他特征:谓词上下文(ctx-p)和区域标记(mr)。 因为单个谓词不能精确地描述谓词信息,特别是当相同的词在句子中出现多于一次时。 使用谓词上下文,可以在很大程度上消除歧义。类似地,如果它位于谓词上下文区域中,则使用区域标记 mr = 1 来表示参数位置,反之则 mr = 0。这四个简单的特征是我们的SRL系统所需要的。上下文大小设置为1的一个样本的特征如下[2]所示: +两个输入特征在这个流程中起着至关重要的作用:predicate(pred)和argument(arguments)。 还采用了两个其他特征:谓词上下文(ctx-p)和区域标记(mr)。 因为单个谓词不能精确地描述谓词信息,特别是当相同的词在句子中出现多于一次时。 使用谓词上下文,可以在很大程度上消除歧义。类似地,如果它位于谓词上下文区域中,则使用区域标记 mr = 1 来表示参数位置,反之则 mr = 0。这四个简单的特征是我们的SRL系统所需要的。上下文大小设置为1的一个样本的特征如下[2]所示:
![pic](./feature.jpg)
From 917c9cc67e07a97149fbecd67fc17422e8c7e683 Mon Sep 17 00:00:00 2001 From: liaogang Date: Fri, 9 Dec 2016 20:35:05 +0800 Subject: [PATCH 071/265] Change interface and comments in Cpuid.cpp --- paddle/utils/CpuId.cpp | 6 +- paddle/utils/CpuId.h | 91 +++++++++++++++------------ paddle/utils/tests/test_SIMDFlags.cpp | 21 +++---- 3 files changed, 65 insertions(+), 53 deletions(-) diff --git a/paddle/utils/CpuId.cpp b/paddle/utils/CpuId.cpp index ae1fb40f04..815e0316b1 100644 --- a/paddle/utils/CpuId.cpp +++ b/paddle/utils/CpuId.cpp @@ -51,7 +51,11 @@ SIMDFlags::SIMDFlags() { simd_flags_ |= cpuInfo[2] & (1 << 16) ? SIMD_FMA4 : SIMD_NONE; } -const SIMDFlags* SIMDFlags::instance() { +bool SIMDFlags::check(int flags) const { + return (simd_flags_ & flags) == flags; +} + +SIMDFlags const* SIMDFlags::instance() { static SIMDFlags instance; return &instance; } diff --git a/paddle/utils/CpuId.h b/paddle/utils/CpuId.h index 19096332b6..a1ef7e48e6 100644 --- a/paddle/utils/CpuId.h +++ b/paddle/utils/CpuId.h @@ -15,56 +15,65 @@ limitations under the License. */ namespace paddle { +enum simd_t { + SIMD_NONE = 0, ///< None + SIMD_SSE = 1 << 0, ///< SSE + SIMD_SSE2 = 1 << 1, ///< SSE 2 + SIMD_SSE3 = 1 << 2, ///< SSE 3 + SIMD_SSSE3 = 1 << 3, ///< SSSE 3 + SIMD_SSE41 = 1 << 4, ///< SSE 4.1 + SIMD_SSE42 = 1 << 5, ///< SSE 4.2 + SIMD_FMA3 = 1 << 6, ///< FMA 3 + SIMD_FMA4 = 1 << 7, ///< FMA 4 + SIMD_AVX = 1 << 8, ///< AVX + SIMD_AVX2 = 1 << 9, ///< AVX 2 + SIMD_AVX512 = 1 << 10, ///< AVX 512 +}; + class SIMDFlags final { public: DISABLE_COPY(SIMDFlags); - SIMDFlags(); - static const SIMDFlags* instance(); - - inline bool isSSE() const { return simd_flags_ & SIMD_SSE; } - inline bool isSSE2() const { return simd_flags_ & SIMD_SSE2; } - inline bool isSSE3() const { return simd_flags_ & SIMD_SSE3; } - inline bool isSSSE3() const { return simd_flags_ & SIMD_SSSE3; } - inline bool isSSE41() const { return simd_flags_ & SIMD_SSE41; } - inline bool isSSE42() const { return simd_flags_ & SIMD_SSE42; } - inline bool isFMA3() const { return simd_flags_ & SIMD_FMA3; } - inline bool isFMA4() const { return simd_flags_ & SIMD_FMA4; } - inline bool isAVX() const { return simd_flags_ & SIMD_AVX; } - inline bool isAVX2() const { return simd_flags_ & SIMD_AVX2; } - inline bool isAVX512()const { return simd_flags_ & SIMD_AVX512;} + static SIMDFlags const* instance(); + bool check(int flags) const; private: - enum simd_t { - SIMD_NONE = 0, ///< None - SIMD_SSE = 1 << 0, ///< SSE - SIMD_SSE2 = 1 << 1, ///< SSE 2 - SIMD_SSE3 = 1 << 2, ///< SSE 3 - SIMD_SSSE3 = 1 << 3, ///< SSSE 3 - SIMD_SSE41 = 1 << 4, ///< SSE 4.1 - SIMD_SSE42 = 1 << 5, ///< SSE 4.2 - SIMD_FMA3 = 1 << 6, ///< FMA 3 - SIMD_FMA4 = 1 << 7, ///< FMA 4 - SIMD_AVX = 1 << 8, ///< AVX - SIMD_AVX2 = 1 << 9, ///< AVX 2 - SIMD_AVX512 = 1 << 10, ///< AVX 512 - }; - - /// simd flags int simd_flags_ = SIMD_NONE; }; -#define HAS_SSE SIMDFlags::instance()->isSSE() -#define HAS_SSE2 SIMDFlags::instance()->isSSE2() -#define HAS_SSE3 SIMDFlags::instance()->isSSE3() -#define HAS_SSSE3 SIMDFlags::instance()->isSSSE3() -#define HAS_SSE41 SIMDFlags::instance()->isSSE41() -#define HAS_SSE42 SIMDFlags::instance()->isSSE42() -#define HAS_FMA3 SIMDFlags::instance()->isFMA3() -#define HAS_FMA4 SIMDFlags::instance()->isFMA4() -#define HAS_AVX SIMDFlags::instance()->isAVX() -#define HAS_AVX2 SIMDFlags::instance()->isAVX2() -#define HAS_AVX512 SIMDFlags::instance()->isAVX512() +/** + * @brief Check SIMD flags at runtime. + * + * For example. 
+ * @code{.cpp} + * + * if (HAS_SIMD(SIMD_AVX2 | SIMD_FMA4)) { + * avx2_fm4_stub(); + * } else if (HAS_SIMD(SIMD_AVX)) { + * avx_stub(); + * } + * + * @endcode + */ +#define HAS_SIMD(__flags) SIMDFlags::instance()->check(__flags) + +/** + * @brief Check SIMD flags at runtime. + * + * 1. Check all SIMD flags at runtime: HAS_SSE && HAS_SSE2 && HAS_SSE3 + * 2. Check one SIMD flags at runtime: HAS_SSE || HAS_SSE2 || HAS_SSE3 + */ +#define HAS_SSE HAS_SIMD(SIMD_SSE) +#define HAS_SSE2 HAS_SIMD(SIMD_SSE2) +#define HAS_SSE3 HAS_SIMD(SIMD_SSE3) +#define HAS_SSSE3 HAS_SIMD(SIMD_SSSE3) +#define HAS_SSE41 HAS_SIMD(SIMD_SSE41) +#define HAS_SSE42 HAS_SIMD(SIMD_SSE42) +#define HAS_FMA3 HAS_SIMD(SIMD_FMA3) +#define HAS_FMA4 HAS_SIMD(SIMD_FMA4) +#define HAS_AVX HAS_SIMD(SIMD_AVX) +#define HAS_AVX2 HAS_SIMD(SIMD_AVX2) +#define HAS_AVX512 HAS_SIMD(SIMD_AVX512) } // namespace paddle diff --git a/paddle/utils/tests/test_SIMDFlags.cpp b/paddle/utils/tests/test_SIMDFlags.cpp index a544901aa3..47b6c542ea 100644 --- a/paddle/utils/tests/test_SIMDFlags.cpp +++ b/paddle/utils/tests/test_SIMDFlags.cpp @@ -32,17 +32,16 @@ TEST(SIMDFlags, gccTest) { } TEST(SIMDFlags, normalPrint) { - auto simd = SIMDFlags::instance(); - LOG(INFO) << "Has SSE2: " << std::boolalpha << simd->isSSE2(); - LOG(INFO) << "Has SSE3: " << std::boolalpha << simd->isSSE3(); - LOG(INFO) << "Has SSSE3: " << std::boolalpha << simd->isSSSE3(); - LOG(INFO) << "Has SSE4.1: " << std::boolalpha << simd->isSSE41(); - LOG(INFO) << "Has SSE4.2: " << std::boolalpha << simd->isSSE42(); - LOG(INFO) << "Has FMA3: " << std::boolalpha << simd->isFMA3(); - LOG(INFO) << "Has FMA4: " << std::boolalpha << simd->isFMA4(); - LOG(INFO) << "Has AVX: " << std::boolalpha << simd->isAVX(); - LOG(INFO) << "Has AVX2: " << std::boolalpha << simd->isAVX2(); - LOG(INFO) << "Has AVX512: " << std::boolalpha << simd->isAVX512(); + LOG(INFO) << "Has SSE: " << std::boolalpha << HAS_SSE; + LOG(INFO) << "Has SSE2: " << std::boolalpha << HAS_SSE2; + LOG(INFO) << "Has SSE3: " << std::boolalpha << HAS_SSE3; + LOG(INFO) << "Has SSSE3: " << std::boolalpha << HAS_SSSE3; + LOG(INFO) << "Has SSE4: " << std::boolalpha << HAS_SSE41 || HAS_SSE42; + LOG(INFO) << "Has FMA3: " << std::boolalpha << HAS_FMA3; + LOG(INFO) << "Has FMA4: " << std::boolalpha << HAS_FMA4; + LOG(INFO) << "Has AVX: " << std::boolalpha << HAS_AVX; + LOG(INFO) << "Has AVX2: " << std::boolalpha << HAS_AVX2; + LOG(INFO) << "Has AVX512: " << std::boolalpha << HAS_AVX512; } int main(int argc, char** argv) { From 56b15bd530a448e44264ad7b7643ead70c2f56c6 Mon Sep 17 00:00:00 2001 From: Yi Wang Date: Sat, 10 Dec 2016 09:25:18 -0800 Subject: [PATCH 072/265] Install libgtest-dev in Dockerfiles --- paddle/scripts/docker/Dockerfile | 4 +++- paddle/scripts/docker/Dockerfile.gpu | 4 +++- 2 files changed, 6 insertions(+), 2 deletions(-) diff --git a/paddle/scripts/docker/Dockerfile b/paddle/scripts/docker/Dockerfile index edb84712d8..6243d1d188 100644 --- a/paddle/scripts/docker/Dockerfile +++ b/paddle/scripts/docker/Dockerfile @@ -4,12 +4,14 @@ MAINTAINER PaddlePaddle Authors ARG DEBIAN_FRONTEND=noninteractive RUN apt-get update \ && apt-get install -y cmake libprotobuf-dev protobuf-compiler git \ - libgoogle-glog-dev libgflags-dev libatlas-dev libatlas3-base g++ m4 python-pip \ + libgoogle-glog-dev libgflags-dev libgtest-dev \ + libatlas-dev libatlas3-base g++ m4 python-pip \ python-protobuf python-numpy python-dev swig openssh-server \ wget unzip python-matplotlib tar xz-utils bzip2 gzip coreutils \ sed grep graphviz libjpeg-dev zlib1g-dev 
doxygen \ clang-3.8 llvm-3.8 libclang-3.8-dev \ && apt-get clean -y +RUN cd /usr/src/gtest && cmake . && make && sudo cp *.a /usr/lib RUN pip install -U BeautifulSoup docopt PyYAML pillow \ sphinx sphinx_rtd_theme breathe recommonmark diff --git a/paddle/scripts/docker/Dockerfile.gpu b/paddle/scripts/docker/Dockerfile.gpu index 5d175e15a7..f9821f2760 100644 --- a/paddle/scripts/docker/Dockerfile.gpu +++ b/paddle/scripts/docker/Dockerfile.gpu @@ -4,12 +4,14 @@ MAINTAINER PaddlePaddle Authors ARG DEBIAN_FRONTEND=noninteractive RUN apt-get update \ && apt-get install -y cmake libprotobuf-dev protobuf-compiler git \ - libgoogle-glog-dev libgflags-dev libatlas-dev libatlas3-base g++ m4 python-pip \ + libgoogle-glog-dev libgflags-dev libgtest-dev \ + libatlas-dev libatlas3-base g++ m4 python-pip \ python-protobuf python-numpy python-dev swig openssh-server \ wget unzip python-matplotlib tar xz-utils bzip2 gzip coreutils \ sed grep graphviz libjpeg-dev zlib1g-dev doxygen \ clang-3.8 llvm-3.8 libclang-3.8-dev \ && apt-get clean -y +RUN cd /usr/src/gtest && cmake . && make && sudo cp *.a /usr/lib RUN pip install -U BeautifulSoup docopt PyYAML pillow \ sphinx sphinx_rtd_theme breathe recommonmark From 174ca2723a5290f0130c616c2cfee248409de898 Mon Sep 17 00:00:00 2001 From: Yi Wang Date: Sat, 10 Dec 2016 20:12:26 -0800 Subject: [PATCH 073/265] Remove sudo from Dockerfile.* --- paddle/scripts/docker/Dockerfile | 2 +- paddle/scripts/docker/Dockerfile.gpu | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/paddle/scripts/docker/Dockerfile b/paddle/scripts/docker/Dockerfile index 6243d1d188..bfe287f080 100644 --- a/paddle/scripts/docker/Dockerfile +++ b/paddle/scripts/docker/Dockerfile @@ -11,7 +11,7 @@ RUN apt-get update \ sed grep graphviz libjpeg-dev zlib1g-dev doxygen \ clang-3.8 llvm-3.8 libclang-3.8-dev \ && apt-get clean -y -RUN cd /usr/src/gtest && cmake . && make && sudo cp *.a /usr/lib +RUN cd /usr/src/gtest && cmake . && make && cp *.a /usr/lib RUN pip install -U BeautifulSoup docopt PyYAML pillow \ sphinx sphinx_rtd_theme breathe recommonmark diff --git a/paddle/scripts/docker/Dockerfile.gpu b/paddle/scripts/docker/Dockerfile.gpu index f9821f2760..26a88e414b 100644 --- a/paddle/scripts/docker/Dockerfile.gpu +++ b/paddle/scripts/docker/Dockerfile.gpu @@ -11,7 +11,7 @@ RUN apt-get update \ sed grep graphviz libjpeg-dev zlib1g-dev doxygen \ clang-3.8 llvm-3.8 libclang-3.8-dev \ && apt-get clean -y -RUN cd /usr/src/gtest && cmake . && make && sudo cp *.a /usr/lib +RUN cd /usr/src/gtest && cmake . && make && cp *.a /usr/lib RUN pip install -U BeautifulSoup docopt PyYAML pillow \ sphinx sphinx_rtd_theme breathe recommonmark From 469ea64d5780533d0892db9be90a39c403425c95 Mon Sep 17 00:00:00 2001 From: Yu Yang Date: Sun, 11 Dec 2016 12:26:22 +0800 Subject: [PATCH 074/265] Follow comments --- doc/howto/cmd_parameter/index_en.md | 2 +- doc/tutorials/rec/ml_dataset_en.md | 2 +- doc/tutorials/rec/ml_regression_en.rst | 4 ++-- 3 files changed, 4 insertions(+), 4 deletions(-) diff --git a/doc/howto/cmd_parameter/index_en.md b/doc/howto/cmd_parameter/index_en.md index a6c236db61..fb658f2aa5 100644 --- a/doc/howto/cmd_parameter/index_en.md +++ b/doc/howto/cmd_parameter/index_en.md @@ -1,5 +1,5 @@ ```eval_rst -.. _cmd_line_index: +.. 
_cmd_line_index_en: ``` # How to Set Command-line Parameters diff --git a/doc/tutorials/rec/ml_dataset_en.md b/doc/tutorials/rec/ml_dataset_en.md index 73879d6537..dc11a5e060 100644 --- a/doc/tutorials/rec/ml_dataset_en.md +++ b/doc/tutorials/rec/ml_dataset_en.md @@ -1,5 +1,5 @@ ```eval_rst -.. _demo_ml_dataset: +.. _demo_ml_dataset_en: ``` diff --git a/doc/tutorials/rec/ml_regression_en.rst b/doc/tutorials/rec/ml_regression_en.rst index b9c2f8cb59..ddc00dc706 100644 --- a/doc/tutorials/rec/ml_regression_en.rst +++ b/doc/tutorials/rec/ml_regression_en.rst @@ -16,7 +16,7 @@ Data Preparation ```````````````` Download and extract dataset '''''''''''''''''''''''''''' -We use :ref:`demo_ml_dataset` here. +We use :ref:`demo_ml_dataset_en` here. To download and unzip the dataset, simply run the following commands. .. code-block:: bash @@ -280,7 +280,7 @@ The run.sh is shown as follow: It just start a paddle training process, write the log to `log.txt`, then print it on screen. -Each command line argument in :code:`run.sh`, please refer to the :ref:`cmd_line_index` page. The short description of these arguments is shown as follow. +Each command line argument in :code:`run.sh`, please refer to the :ref:`cmd_line_index_en` page. The short description of these arguments is shown as follow. * config\: Tell paddle which file is neural network configuration. * save_dir\: Tell paddle save model into './output' From 62a7e5cf010a01601f8af4ddbe142a570752d4ed Mon Sep 17 00:00:00 2001 From: Yi Wang Date: Sun, 11 Dec 2016 11:16:13 -0800 Subject: [PATCH 075/265] Initialize Bazel WORKSPACE and a testing packages --- .gitignore | 3 +++ WORKSPACE | 8 ++++++++ third_party/protobuf_test/BUILD | 17 +++++++++++++++++ third_party/protobuf_test/README.md | 1 + third_party/protobuf_test/example.proto | 7 +++++++ third_party/protobuf_test/example_lib.cc | 6 ++++++ third_party/protobuf_test/example_lib.h | 7 +++++++ 7 files changed, 49 insertions(+) create mode 100644 WORKSPACE create mode 100644 third_party/protobuf_test/BUILD create mode 100644 third_party/protobuf_test/README.md create mode 100644 third_party/protobuf_test/example.proto create mode 100644 third_party/protobuf_test/example_lib.cc create mode 100644 third_party/protobuf_test/example_lib.h diff --git a/.gitignore b/.gitignore index 35bed0accd..1c9730a5ad 100644 --- a/.gitignore +++ b/.gitignore @@ -9,3 +9,6 @@ build/ .pydevproject Makefile .test_env/ + +*~ +bazel-* diff --git a/WORKSPACE b/WORKSPACE new file mode 100644 index 0000000000..06495690ab --- /dev/null +++ b/WORKSPACE @@ -0,0 +1,8 @@ +git_repository( + name = "org_pubref_rules_protobuf", + remote = "https://github.com/pubref/rules_protobuf", + tag = "v0.7.1", +) + +load("@org_pubref_rules_protobuf//cpp:rules.bzl", "cpp_proto_repositories") +cpp_proto_repositories() diff --git a/third_party/protobuf_test/BUILD b/third_party/protobuf_test/BUILD new file mode 100644 index 0000000000..7c5b1c6994 --- /dev/null +++ b/third_party/protobuf_test/BUILD @@ -0,0 +1,17 @@ +licenses(["notice"]) # Apache 2.0 + +load("@org_pubref_rules_protobuf//cpp:rules.bzl", "cpp_proto_library") + +cpp_proto_library( + name = "example_proto", + protos = [ + "example.proto" + ], +) + +cc_library( + name = "example_lib", + srcs = ["example_lib.cc"], + hdrs = ["example_lib.h"], + deps = [":example_proto"], +) diff --git a/third_party/protobuf_test/README.md b/third_party/protobuf_test/README.md new file mode 100644 index 0000000000..e8bdeee6fe --- /dev/null +++ b/third_party/protobuf_test/README.md @@ -0,0 +1 @@ +This package 
tests that Bazel can build protobuf related rules. diff --git a/third_party/protobuf_test/example.proto b/third_party/protobuf_test/example.proto new file mode 100644 index 0000000000..57c52d3521 --- /dev/null +++ b/third_party/protobuf_test/example.proto @@ -0,0 +1,7 @@ +syntax = "proto3"; + +package protos; + +message Greeting { + string name = 1; +} diff --git a/third_party/protobuf_test/example_lib.cc b/third_party/protobuf_test/example_lib.cc new file mode 100644 index 0000000000..8d55ed66de --- /dev/null +++ b/third_party/protobuf_test/example_lib.cc @@ -0,0 +1,6 @@ +#include "third_party/protobuf_test/example_lib.h" +#include + +std::string get_greet(const ::protos::Greeting& who) { + return "Hello " + who.name(); +} diff --git a/third_party/protobuf_test/example_lib.h b/third_party/protobuf_test/example_lib.h new file mode 100644 index 0000000000..eaf4dd4cea --- /dev/null +++ b/third_party/protobuf_test/example_lib.h @@ -0,0 +1,7 @@ +#pragma once + +#include "third_party/protobuf_test/example.pb.h" + +#include + +std::string get_greet(const ::protos::Greeting &who); From e74ae54a594ca2a68c936b570076de186bc87ae9 Mon Sep 17 00:00:00 2001 From: Yi Wang Date: Sun, 11 Dec 2016 11:37:28 -0800 Subject: [PATCH 076/265] Update development document --- .../build_and_install/docker_install_en.rst | 66 +++++++++++++++++++ 1 file changed, 66 insertions(+) diff --git a/doc/getstarted/build_and_install/docker_install_en.rst b/doc/getstarted/build_and_install/docker_install_en.rst index 1ab6fc6a72..a3fa8bccbf 100644 --- a/doc/getstarted/build_and_install/docker_install_en.rst +++ b/doc/getstarted/build_and_install/docker_install_en.rst @@ -104,3 +104,69 @@ container: Then we can direct our Web browser to the HTML version of source code at http://localhost:8088/paddle/ + + +Development Using Docker +------------------------ + +Develpers can work on PaddlePaddle using Docker. This allows +developers to work on different platforms -- Linux, Mac OS X, and +Windows -- in a consistent way. + +The general development workflow with Docker and Bazel is as follows: + +1. Get the source code of Paddle: + + .. code-block:: bash + + git clone --recursive https://github.com/paddlepaddle/paddle + + +1. Build a development Docker image `paddle:dev` from the source code. + This image contains all the development tools and dependencies of + PaddlePaddle. + + + .. code-block:: bash + + cd paddle + docker build -t paddle:dev -f paddle/scripts/docker/Dockerfile . + + +1. Run the image as a container and mounting local source code + directory into the container. This allows us to change the code on + the host and build it within the container. + + .. code-block:: bash + + docker run \ + -d # run the container in background mode \ + --name paddle # we can run a nginx container to serve documents \ + -p 2022:22 # so we can SSH into this container \ + -v $PWD:/paddle # mount the source code \ + -v $HOME/.cache/bazel:/root/.cache/bazel # mount Bazel cache \ + paddle:dev + +1. SSH into the container: + + .. code-block:: bash + + ssh root@localhost -p 2022 + +1. We can edit the source code in the container or on this host. Then + we can build using cmake + + .. code-block:: bash + + cd /paddle # where paddle source code has been mounted into the container + mkdir -p build + cd build + cmake .. + make -j `nproc` + + or Bazel in the container: + + .. code-block:: bash + + cd /paddle + bazel build ... 
From 06d780070439761da6947a3f8b3c2342cf52f8cf Mon Sep 17 00:00:00 2001 From: Yi Wang Date: Sun, 11 Dec 2016 12:02:04 -0800 Subject: [PATCH 077/265] Correct indentation in .rst file --- .../build_and_install/docker_install_en.rst | 36 +++++++++---------- 1 file changed, 18 insertions(+), 18 deletions(-) diff --git a/doc/getstarted/build_and_install/docker_install_en.rst b/doc/getstarted/build_and_install/docker_install_en.rst index a3fa8bccbf..95b6954789 100644 --- a/doc/getstarted/build_and_install/docker_install_en.rst +++ b/doc/getstarted/build_and_install/docker_install_en.rst @@ -119,7 +119,7 @@ The general development workflow with Docker and Bazel is as follows: .. code-block:: bash - git clone --recursive https://github.com/paddlepaddle/paddle + git clone --recursive https://github.com/paddlepaddle/paddle 1. Build a development Docker image `paddle:dev` from the source code. @@ -129,8 +129,8 @@ The general development workflow with Docker and Bazel is as follows: .. code-block:: bash - cd paddle - docker build -t paddle:dev -f paddle/scripts/docker/Dockerfile . + cd paddle + docker build -t paddle:dev -f paddle/scripts/docker/Dockerfile . 1. Run the image as a container and mounting local source code @@ -139,34 +139,34 @@ The general development workflow with Docker and Bazel is as follows: .. code-block:: bash - docker run \ - -d # run the container in background mode \ - --name paddle # we can run a nginx container to serve documents \ - -p 2022:22 # so we can SSH into this container \ - -v $PWD:/paddle # mount the source code \ - -v $HOME/.cache/bazel:/root/.cache/bazel # mount Bazel cache \ - paddle:dev + docker run \ + -d # run the container in background mode \ + --name paddle # we can run a nginx container to serve documents \ + -p 2022:22 # so we can SSH into this container \ + -v $PWD:/paddle # mount the source code \ + -v $HOME/.cache/bazel:/root/.cache/bazel # mount Bazel cache \ + paddle:dev 1. SSH into the container: .. code-block:: bash - ssh root@localhost -p 2022 + ssh root@localhost -p 2022 1. We can edit the source code in the container or on this host. Then we can build using cmake .. code-block:: bash - cd /paddle # where paddle source code has been mounted into the container - mkdir -p build - cd build - cmake .. - make -j `nproc` + cd /paddle # where paddle source code has been mounted into the container + mkdir -p build + cd build + cmake .. + make -j `nproc` or Bazel in the container: .. code-block:: bash - cd /paddle - bazel build ... + cd /paddle + bazel build ... From 8013d17dfae690d6b0408fb6b0d8e4eceee30f2d Mon Sep 17 00:00:00 2001 From: Yi Wang Date: Sun, 11 Dec 2016 12:03:05 -0800 Subject: [PATCH 078/265] Correct enumeration in lists --- doc/getstarted/build_and_install/docker_install_en.rst | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/doc/getstarted/build_and_install/docker_install_en.rst b/doc/getstarted/build_and_install/docker_install_en.rst index 95b6954789..f6f1bbab42 100644 --- a/doc/getstarted/build_and_install/docker_install_en.rst +++ b/doc/getstarted/build_and_install/docker_install_en.rst @@ -122,7 +122,7 @@ The general development workflow with Docker and Bazel is as follows: git clone --recursive https://github.com/paddlepaddle/paddle -1. Build a development Docker image `paddle:dev` from the source code. +2. Build a development Docker image `paddle:dev` from the source code. This image contains all the development tools and dependencies of PaddlePaddle. 
@@ -133,7 +133,7 @@ The general development workflow with Docker and Bazel is as follows: docker build -t paddle:dev -f paddle/scripts/docker/Dockerfile . -1. Run the image as a container and mounting local source code +3. Run the image as a container and mounting local source code directory into the container. This allows us to change the code on the host and build it within the container. @@ -147,13 +147,13 @@ The general development workflow with Docker and Bazel is as follows: -v $HOME/.cache/bazel:/root/.cache/bazel # mount Bazel cache \ paddle:dev -1. SSH into the container: +4. SSH into the container: .. code-block:: bash ssh root@localhost -p 2022 -1. We can edit the source code in the container or on this host. Then +5. We can edit the source code in the container or on this host. Then we can build using cmake .. code-block:: bash From 9b94773a7f546f5d1a825c3a6f07420a5ced6404 Mon Sep 17 00:00:00 2001 From: Yi Wang Date: Sun, 11 Dec 2016 12:39:02 -0800 Subject: [PATCH 079/265] Add gtest --- WORKSPACE | 14 ++++++++++++++ .../build_and_install/docker_install_en.rst | 5 +++-- third_party/gtest.BUILD | 14 ++++++++++++++ third_party/protobuf_test/BUILD | 11 +++++++++++ third_party/protobuf_test/example.proto | 2 +- third_party/protobuf_test/example_lib.cc | 9 +++++++-- third_party/protobuf_test/example_lib.h | 8 +++++++- third_party/protobuf_test/example_lib_test.cc | 15 +++++++++++++++ 8 files changed, 72 insertions(+), 6 deletions(-) create mode 100644 third_party/gtest.BUILD create mode 100644 third_party/protobuf_test/example_lib_test.cc diff --git a/WORKSPACE b/WORKSPACE index 06495690ab..38e1628d11 100644 --- a/WORKSPACE +++ b/WORKSPACE @@ -1,8 +1,22 @@ +# External dependency to grpc-enabled Google-styleprotobuf bulding +# rules. This method comes from +# https://github.com/pubref/rules_protobuf#usage. git_repository( name = "org_pubref_rules_protobuf", remote = "https://github.com/pubref/rules_protobuf", tag = "v0.7.1", ) +# External dependency to gtest 1.7.0. This method comes from +# https://www.bazel.io/versions/master/docs/tutorial/cpp.html. +new_http_archive( + name = "gtest", + url = "https://github.com/google/googletest/archive/release-1.7.0.zip", + sha256 = "b58cb7547a28b2c718d1e38aee18a3659c9e3ff52440297e965f5edffe34b6d0", + build_file = "third_party/gtest.BUILD", + strip_prefix = "googletest-release-1.7.0", +) + + load("@org_pubref_rules_protobuf//cpp:rules.bzl", "cpp_proto_repositories") cpp_proto_repositories() diff --git a/doc/getstarted/build_and_install/docker_install_en.rst b/doc/getstarted/build_and_install/docker_install_en.rst index f6f1bbab42..feb027ccbb 100644 --- a/doc/getstarted/build_and_install/docker_install_en.rst +++ b/doc/getstarted/build_and_install/docker_install_en.rst @@ -161,12 +161,13 @@ The general development workflow with Docker and Bazel is as follows: cd /paddle # where paddle source code has been mounted into the container mkdir -p build cd build - cmake .. + cmake -DWITH_TESTING=ON .. make -j `nproc` + CTEST_OUTPUT_ON_FAILURE=1 ctest or Bazel in the container: .. code-block:: bash cd /paddle - bazel build ... + bazel test ... 
diff --git a/third_party/gtest.BUILD b/third_party/gtest.BUILD new file mode 100644 index 0000000000..3e68a1d879 --- /dev/null +++ b/third_party/gtest.BUILD @@ -0,0 +1,14 @@ +cc_library( + name = "main", + srcs = glob( + ["src/*.cc"], + exclude = ["src/gtest-all.cc"] + ), + hdrs = glob([ + "include/**/*.h", + "src/*.h" + ]), + copts = ["-Iexternal/gtest/include"], + linkopts = ["-pthread"], + visibility = ["//visibility:public"], +) diff --git a/third_party/protobuf_test/BUILD b/third_party/protobuf_test/BUILD index 7c5b1c6994..6208b87082 100644 --- a/third_party/protobuf_test/BUILD +++ b/third_party/protobuf_test/BUILD @@ -7,6 +7,7 @@ cpp_proto_library( protos = [ "example.proto" ], + with_grpc = True, ) cc_library( @@ -15,3 +16,13 @@ cc_library( hdrs = ["example_lib.h"], deps = [":example_proto"], ) + +cc_test( + name = "example_lib_test", + srcs = ["example_lib_test.cc"], + copts = ["-Iexternal/gtest/include"], + deps =[ + "@gtest//:main", + ":example_lib", + ], +) diff --git a/third_party/protobuf_test/example.proto b/third_party/protobuf_test/example.proto index 57c52d3521..6a7eada9c1 100644 --- a/third_party/protobuf_test/example.proto +++ b/third_party/protobuf_test/example.proto @@ -1,6 +1,6 @@ syntax = "proto3"; -package protos; +package third_party.protobuf_test; message Greeting { string name = 1; diff --git a/third_party/protobuf_test/example_lib.cc b/third_party/protobuf_test/example_lib.cc index 8d55ed66de..56341a0124 100644 --- a/third_party/protobuf_test/example_lib.cc +++ b/third_party/protobuf_test/example_lib.cc @@ -1,6 +1,11 @@ #include "third_party/protobuf_test/example_lib.h" -#include -std::string get_greet(const ::protos::Greeting& who) { +namespace third_party { +namespace protobuf_test { + +std::string get_greet(const Greeting& who) { return "Hello " + who.name(); } + +} // namespace protobuf_test +} // namespace thrid_party diff --git a/third_party/protobuf_test/example_lib.h b/third_party/protobuf_test/example_lib.h index eaf4dd4cea..516326e812 100644 --- a/third_party/protobuf_test/example_lib.h +++ b/third_party/protobuf_test/example_lib.h @@ -4,4 +4,10 @@ #include -std::string get_greet(const ::protos::Greeting &who); +namespace third_party { +namespace protobuf_test { + +std::string get_greet(const Greeting &who); + +} // namespace protobuf_test +} // namespace third_party diff --git a/third_party/protobuf_test/example_lib_test.cc b/third_party/protobuf_test/example_lib_test.cc new file mode 100644 index 0000000000..6229f56e60 --- /dev/null +++ b/third_party/protobuf_test/example_lib_test.cc @@ -0,0 +1,15 @@ +#include "third_party/protobuf_test/example_lib.h" + +#include "gtest/gtest.h" + +namespace third_party { +namespace protobuf_test { + +TEST(ProtobufTest, GetGreet) { + Greeting g; + g.set_name("Paddle"); + EXPECT_EQ("Hello Paddle", get_greet(g)); +} + +} // namespace protobuf_test +} // namespace third_party From dab81d4d030f08466448312a7697b7bf94806927 Mon Sep 17 00:00:00 2001 From: dangqingqing Date: Mon, 12 Dec 2016 11:28:05 +0800 Subject: [PATCH 080/265] fix dead links in quick start after reorganzing documentaion directory. 
--- doc/tutorials/quick_start/index_en.md | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/doc/tutorials/quick_start/index_en.md b/doc/tutorials/quick_start/index_en.md index ec548b5393..2ade741799 100644 --- a/doc/tutorials/quick_start/index_en.md +++ b/doc/tutorials/quick_start/index_en.md @@ -12,7 +12,7 @@ This tutorial will teach the basics of deep learning (DL), including how to impl To get started, please install PaddlePaddle on your computer. Throughout this tutorial, you will learn by implementing different DL models for text classification. -To install PaddlePaddle, please follow the instructions here: Build and Install. +To install PaddlePaddle, please follow the instructions here: Build and Install. ## Overview For the first step, you will use PaddlePaddle to build a **text classification** system. For example, suppose you run an e-commence website, and you want to analyze the sentiment of user reviews to evaluate product quality. @@ -156,14 +156,14 @@ define_py_data_sources2(train_list='data/train.list', obj="process", args={"dictionary": word_dict}) ``` -You can refer to the following link for more detailed examples and data formats: PyDataProvider2. +You can refer to the following link for more detailed examples and data formats: PyDataProvider2. ## Network Architecture You will describe four kinds of network architectures in this section.
![](./PipelineNetwork_en.jpg)
First, you will build a logistic regression model. Later, you will also get chance to build other more powerful network architectures. -For more detailed documentation, you could refer to: Layer documentation。All configuration files are in `demo/quick_start` directory. +For more detailed documentation, you could refer to: layer documentation. All configuration files are in `demo/quick_start` directory. ### Logistic Regression The architecture is illustrated in the following picture: @@ -366,7 +366,7 @@ You can use single layer LSTM model with Dropout for our text classification pro
## Optimization Algorithm -Optimization algorithms include Momentum, RMSProp, AdaDelta, AdaGrad, Adam, and Adamax. You can use Adam optimization method here, with L2 regularization and gradient clipping, because Adam has been proved to work very well for training recurrent neural network. +Optimization algorithms include Momentum, RMSProp, AdaDelta, AdaGrad, Adam, and Adamax. You can use Adam optimization method here, with L2 regularization and gradient clipping, because Adam has been proved to work very well for training recurrent neural network. ```python settings(batch_size=128, @@ -391,7 +391,7 @@ paddle train \ --use_gpu=false ``` -If you want to install the remote training platform, which enables distributed training on clusters, follow the instructions here: Platform documentation. We do not provide examples on how to train on clusters. Please refer to other demos or platform training documentation for mode details on training on clusters. +If you want to install the remote training platform, which enables distributed training on clusters, follow the instructions here: Platform documentation. We do not provide examples on how to train on clusters. Please refer to other demos or platform training documentation for mode details on training on clusters. ## Inference You can use the trained model to perform prediction on the dataset with no labels. You can also evaluate the model on dataset with labels to obtain its test accuracy.
![](./PipelineTest_en.png)
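For readers who want a concrete picture of the Python prediction path referenced in this section, below is a rough, assumed sketch of scoring a single sentence with a trained model. The import paths, `GradientMachine.createFromConfigProto`, `loadParameters`, the config and parameter paths, and the toy dictionary are placeholders based on typical PaddlePaddle demo code, not code taken from this tutorial.

```python
# Hypothetical sketch only -- paths, constructor and loader names are assumptions.
from py_paddle import swig_paddle, DataProviderConverter
from paddle.trainer.PyDataProvider2 import integer_value_sequence
from paddle.trainer.config_parser import parse_config

swig_paddle.initPaddle("--use_gpu=0")                         # command-line flags
conf = parse_config("trainer_config.py", "is_predict=1")      # predict-mode config
network = swig_paddle.GradientMachine.createFromConfigProto(conf.model_config)
network.loadParameters("./output/pass-00000")                 # trained parameters

word_dict = {"the": 0, "movie": 1, "is": 2, "great": 3}       # toy dictionary
sentence = [word_dict[w] for w in "the movie is great".split()]
converter = DataProviderConverter([integer_value_sequence(len(word_dict))])
probs = network.forwardTest(converter([[sentence]]))
print(probs[0]["value"])   # class probabilities from the output layer
```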
@@ -406,7 +406,7 @@ paddle train \ --init_model_path=./output/pass-0000x ``` -We will give an example of performing prediction using Recurrent model on a dataset with no labels. You can refer to: Python Prediction API tutorial,or other demo for the prediction process using Python. You can also use the following script for inference or evaluation. +We will give an example of performing prediction using Recurrent model on a dataset with no labels. You can refer to Python Prediction API tutorial,or other demo for the prediction process using Python. You can also use the following script for inference or evaluation. inference script (predict.sh): @@ -508,7 +508,7 @@ The scripts of data downloading, network configurations, and training scrips are * \--config_args:Other configuration arguments. * \--init_model_path:The path of the initial model parameter. -By default, the trainer will save model every pass. You can also specify `saving_period_by_batches` to set the frequency of batch saving. You can use `show_parameter_stats_period` to print the statistics of the parameters, which are very useful for tuning parameters. Other command line arguments can be found in command line argument documentation。 +By default, the trainer will save model every pass. You can also specify `saving_period_by_batches` to set the frequency of batch saving. You can use `show_parameter_stats_period` to print the statistics of the parameters, which are very useful for tuning parameters. Other command line arguments can be found in command line argument documentation。 ### Log From f722ee21acdc190f456a64173647c38bffabbdb8 Mon Sep 17 00:00:00 2001 From: Yi Wang Date: Mon, 12 Dec 2016 04:19:34 +0000 Subject: [PATCH 081/265] Do not use https://github.com/pubref/rules_protobuf, but use https://github.com/google/protobuf/ as external dependency for protobuf --- .pre-commit-config.yaml | 3 ++- WORKSPACE | 17 ++++++----------- third_party/protobuf_test/BUILD | 11 +++++------ 3 files changed, 13 insertions(+), 18 deletions(-) diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml index 942669c41f..641ac583b5 100644 --- a/.pre-commit-config.yaml +++ b/.pre-commit-config.yaml @@ -6,7 +6,8 @@ - repo: https://github.com/reyoung/mirrors-yapf.git sha: v0.13.2 hooks: - - id: yapf + - id: yapf + files: (.*\.(py|bzl)|BUILD|.*\.BUILD)$ # Bazel BUILD files follow Python syntax. - repo: https://github.com/pre-commit/pre-commit-hooks sha: 7539d8bd1a00a3c1bfd34cdb606d3a6372e83469 hooks: diff --git a/WORKSPACE b/WORKSPACE index 38e1628d11..d6ae2af8eb 100644 --- a/WORKSPACE +++ b/WORKSPACE @@ -1,10 +1,9 @@ -# External dependency to grpc-enabled Google-styleprotobuf bulding -# rules. This method comes from -# https://github.com/pubref/rules_protobuf#usage. -git_repository( - name = "org_pubref_rules_protobuf", - remote = "https://github.com/pubref/rules_protobuf", - tag = "v0.7.1", +# External dependency to Google protobuf. +http_archive( + name = "protobuf", + url = "http://github.com/google/protobuf/archive/v3.1.0.tar.gz", + sha256 = "0a0ae63cbffc274efb573bdde9a253e3f32e458c41261df51c5dbc5ad541e8f7", + strip_prefix = "protobuf-3.1.0", ) # External dependency to gtest 1.7.0. 
This method comes from @@ -16,7 +15,3 @@ new_http_archive( build_file = "third_party/gtest.BUILD", strip_prefix = "googletest-release-1.7.0", ) - - -load("@org_pubref_rules_protobuf//cpp:rules.bzl", "cpp_proto_repositories") -cpp_proto_repositories() diff --git a/third_party/protobuf_test/BUILD b/third_party/protobuf_test/BUILD index 6208b87082..46f769da5f 100644 --- a/third_party/protobuf_test/BUILD +++ b/third_party/protobuf_test/BUILD @@ -1,13 +1,12 @@ licenses(["notice"]) # Apache 2.0 -load("@org_pubref_rules_protobuf//cpp:rules.bzl", "cpp_proto_library") +load("@protobuf//:protobuf.bzl", "cc_proto_library") -cpp_proto_library( +cc_proto_library( name = "example_proto", - protos = [ - "example.proto" - ], - with_grpc = True, + srcs = ["example.proto"], + protoc = "@protobuf//:protoc", + default_runtime = "@protobuf//:protobuf", ) cc_library( From 7e7b2bb7138a481c2004ae9e27ce15cc5e3c2adc Mon Sep 17 00:00:00 2001 From: Yi Wang Date: Sun, 11 Dec 2016 20:26:22 -0800 Subject: [PATCH 082/265] Make Git pre-commit hook auto-format WORKSPACE as well. --- .pre-commit-config.yaml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml index 641ac583b5..b9902a863d 100644 --- a/.pre-commit-config.yaml +++ b/.pre-commit-config.yaml @@ -7,7 +7,7 @@ sha: v0.13.2 hooks: - id: yapf - files: (.*\.(py|bzl)|BUILD|.*\.BUILD)$ # Bazel BUILD files follow Python syntax. + files: (.*\.(py|bzl)|BUILD|.*\.BUILD|WORKSPACE)$ # Bazel BUILD files follow Python syntax. - repo: https://github.com/pre-commit/pre-commit-hooks sha: 7539d8bd1a00a3c1bfd34cdb606d3a6372e83469 hooks: From 3f9f222328f38af25ba275af1ecd38e0fb9b7121 Mon Sep 17 00:00:00 2001 From: Luo Tao Date: Mon, 12 Dec 2016 13:41:20 +0800 Subject: [PATCH 083/265] fix some dead links in doc/ --- doc/api/data_provider/pydataprovider2_en.rst | 4 +- doc/api/predict/swig_py_paddle_en.rst | 4 +- doc/api/trainer_config_helpers/layers.rst | 2 + doc/getstarted/basic_usage/index_en.rst | 8 - .../cmd_parameter/detail_introduction_en.md | 4 + doc/howto/deep_model/rnn/rnn_en.rst | 6 +- doc/howto/optimization/gpu_profiling_en.rst | 6 +- doc/tutorials/embedding_model/index_en.md | 2 +- doc/tutorials/rec/ml_regression_en.rst | 2 +- .../semantic_role_labeling_cn.md | 201 ------------------ doc/tutorials/sentiment_analysis/index_en.md | 4 + .../trainer_config_helpers/data_sources.py | 3 +- 12 files changed, 24 insertions(+), 222 deletions(-) delete mode 100644 doc/tutorials/semantic_role_labeling/semantic_role_labeling_cn.md diff --git a/doc/api/data_provider/pydataprovider2_en.rst b/doc/api/data_provider/pydataprovider2_en.rst index 083436e271..50e8b0d329 100644 --- a/doc/api/data_provider/pydataprovider2_en.rst +++ b/doc/api/data_provider/pydataprovider2_en.rst @@ -1,4 +1,4 @@ -.. _api_pydataprovider: +.. _api_pydataprovider2_en: PyDataProvider2 =============== @@ -104,6 +104,8 @@ And PaddlePadle will do all of the rest things\: Is this cool? +.. _api_pydataprovider2_en_sequential_model: + DataProvider for the sequential model ------------------------------------- A sequence model takes sequences as its input. A sequence is made up of several diff --git a/doc/api/predict/swig_py_paddle_en.rst b/doc/api/predict/swig_py_paddle_en.rst index 9845cd1607..8b145e5b30 100644 --- a/doc/api/predict/swig_py_paddle_en.rst +++ b/doc/api/predict/swig_py_paddle_en.rst @@ -23,7 +23,7 @@ python's :code:`help()` function. 
Let's walk through the above python script: * At the beginning, use :code:`swig_paddle.initPaddle()` to initialize PaddlePaddle with command line arguments, for more about command line arguments - see `Command Line Arguments <../cmd_argument/detail_introduction.html>`_. + see :ref:`cmd_detail_introduction_en` . * Parse the configuration file that is used in training with :code:`parse_config()`. Because data to predict with always have no label, and output of prediction work normally is the output layer rather than the cost layer, so you should modify @@ -36,7 +36,7 @@ python's :code:`help()` function. Let's walk through the above python script: - Note: As swig_paddle can only accept C++ matrices, we offer a utility class DataProviderConverter that can accept the same input data with PyDataProvider2, for more information please refer to document - of `PyDataProvider2 <../data_provider/pydataprovider2.html>`_. + of :ref:`api_pydataprovider2_en` . * Do the prediction with :code:`forwardTest()`, which takes the converted input data and outputs the activations of the output layer. diff --git a/doc/api/trainer_config_helpers/layers.rst b/doc/api/trainer_config_helpers/layers.rst index 12a75080d0..52a6cfb120 100644 --- a/doc/api/trainer_config_helpers/layers.rst +++ b/doc/api/trainer_config_helpers/layers.rst @@ -1,3 +1,5 @@ +.. _api_trainer_config_helpers_layers: + ====== Layers ====== diff --git a/doc/getstarted/basic_usage/index_en.rst b/doc/getstarted/basic_usage/index_en.rst index dca7a6b1f4..4ffadc68ee 100644 --- a/doc/getstarted/basic_usage/index_en.rst +++ b/doc/getstarted/basic_usage/index_en.rst @@ -99,11 +99,3 @@ In PaddlePaddle, training is just to get a collection of model parameters, which Although starts from a random guess, you can see that value of ``w`` changes quickly towards 2 and ``b`` changes quickly towards 0.3. In the end, the predicted line is almost identical with real answer. There, you have recovered the underlying pattern between ``X`` and ``Y`` only from observed data. - - -5. Where to Go from Here -------------------------- - -- `Install and Build <../build_and_install/index.html>`_ -- `Tutorials <../demo/quick_start/index_en.html>`_ -- `Example and Demo <../demo/index.html>`_ diff --git a/doc/howto/cmd_parameter/detail_introduction_en.md b/doc/howto/cmd_parameter/detail_introduction_en.md index 510396b629..82136b7d4f 100644 --- a/doc/howto/cmd_parameter/detail_introduction_en.md +++ b/doc/howto/cmd_parameter/detail_introduction_en.md @@ -1,3 +1,7 @@ +```eval_rst +.. _cmd_detail_introduction_en: +``` + # Detail Description ## Common diff --git a/doc/howto/deep_model/rnn/rnn_en.rst b/doc/howto/deep_model/rnn/rnn_en.rst index da29b8efad..64f464b1dc 100644 --- a/doc/howto/deep_model/rnn/rnn_en.rst +++ b/doc/howto/deep_model/rnn/rnn_en.rst @@ -30,7 +30,7 @@ Then at the :code:`process` function, each :code:`yield` function will return th yield src_ids, trg_ids, trg_ids_next -For more details description of how to write a data provider, please refer to `PyDataProvider2 <../../ui/data_provider/index.html>`_. The full data provider file is located at :code:`demo/seqToseq/dataprovider.py`. +For more details description of how to write a data provider, please refer to :ref:`api_pydataprovider2_en` . The full data provider file is located at :code:`demo/seqToseq/dataprovider.py`. 
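As a concrete illustration of the three slots yielded above, the following is one plausible shape of such a data provider. The `<s>`/`<e>` sentence markers, the tab-separated file format, the `UNK_IDX` value and the dictionary handling are assumptions for illustration, not a copy of `demo/seqToseq/dataprovider.py`.

```python
# Illustrative only -- not the real demo/seqToseq/dataprovider.py.
from paddle.trainer.PyDataProvider2 import provider, integer_value_sequence

UNK_IDX = 2  # assumed index of the unknown-word token

def hook(settings, src_dict, trg_dict, **kwargs):
    settings.src_dict = src_dict
    settings.trg_dict = trg_dict
    settings.input_types = [
        integer_value_sequence(len(src_dict)),   # src_ids
        integer_value_sequence(len(trg_dict)),   # trg_ids
        integer_value_sequence(len(trg_dict))]   # trg_ids_next

@provider(init_hook=hook)
def process(settings, file_name):
    with open(file_name) as fdata:
        for line in fdata:
            src, trg = line.strip().split('\t')  # assumed "source<TAB>target" format
            src_ids = [settings.src_dict.get(w, UNK_IDX) for w in src.split()]
            trg_words = trg.split()
            trg_ids = [settings.trg_dict['<s>']] + \
                [settings.trg_dict.get(w, UNK_IDX) for w in trg_words]
            trg_ids_next = [settings.trg_dict.get(w, UNK_IDX) for w in trg_words] + \
                [settings.trg_dict['<e>']]
            yield src_ids, trg_ids, trg_ids_next
```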
=============================================== Configure Recurrent Neural Network Architecture @@ -106,7 +106,7 @@ We will use the sequence to sequence model with attention as an example to demon In this model, the source sequence :math:`S = \{s_1, \dots, s_T\}` is encoded with a bidirectional gated recurrent neural networks. The hidden states of the bidirectional gated recurrent neural network :math:`H_S = \{H_1, \dots, H_T\}` is called *encoder vector* The decoder is a gated recurrent neural network. When decoding each token :math:`y_t`, the gated recurrent neural network generates a set of weights :math:`W_S^t = \{W_1^t, \dots, W_T^t\}`, which are used to compute a weighted sum of the encoder vector. The weighted sum of the encoder vector is utilized to condition the generation of the token :math:`y_t`. -The encoder part of the model is listed below. It calls :code:`grumemory` to represent gated recurrent neural network. It is the recommended way of using recurrent neural network if the network architecture is simple, because it is faster than :code:`recurrent_group`. We have implemented most of the commonly used recurrent neural network architectures, you can refer to `Layers <../../ui/api/trainer_config_helpers/layers_index.html>`_ for more details. +The encoder part of the model is listed below. It calls :code:`grumemory` to represent gated recurrent neural network. It is the recommended way of using recurrent neural network if the network architecture is simple, because it is faster than :code:`recurrent_group`. We have implemented most of the commonly used recurrent neural network architectures, you can refer to :ref:`api_trainer_config_helpers_layers` for more details. We also project the encoder vector to :code:`decoder_size` dimensional space, get the first instance of the backward recurrent network, and project it to :code:`decoder_size` dimensional space: @@ -246,6 +246,6 @@ The code is listed below: outputs(beam_gen) -Notice that this generation technique is only useful for decoder like generation process. If you are working on sequence tagging tasks, please refer to `Semantic Role Labeling Demo <../../demo/semantic_role_labeling/index.html>`_ for more details. +Notice that this generation technique is only useful for decoder like generation process. If you are working on sequence tagging tasks, please refer to :ref:`sentiment_analysis_en` for more details. The full configuration file is located at :code:`demo/seqToseq/seqToseq_net.py`. diff --git a/doc/howto/optimization/gpu_profiling_en.rst b/doc/howto/optimization/gpu_profiling_en.rst index 667bf1364e..40ba698f4e 100644 --- a/doc/howto/optimization/gpu_profiling_en.rst +++ b/doc/howto/optimization/gpu_profiling_en.rst @@ -51,7 +51,7 @@ In this tutorial, we will focus on nvprof and nvvp. :code:`test_GpuProfiler` from :code:`paddle/math/tests` directory will be used to evaluate above profilers. -.. literalinclude:: ../../paddle/math/tests/test_GpuProfiler.cpp +.. literalinclude:: ../../../paddle/math/tests/test_GpuProfiler.cpp :language: c++ :lines: 111-124 :linenos: @@ -77,7 +77,7 @@ As a simple example, consider the following: 1. Add :code:`REGISTER_TIMER_INFO` and :code:`printAllStatus` functions (see the emphasize-lines). - .. literalinclude:: ../../paddle/math/tests/test_GpuProfiler.cpp + .. literalinclude:: ../../../paddle/math/tests/test_GpuProfiler.cpp :language: c++ :lines: 111-124 :emphasize-lines: 8-10,13 @@ -124,7 +124,7 @@ To use this command line profiler **nvprof**, you can simply issue the following 1. 
Add :code:`REGISTER_GPU_PROFILER` function (see the emphasize-lines). - .. literalinclude:: ../../paddle/math/tests/test_GpuProfiler.cpp + .. literalinclude:: ../../../paddle/math/tests/test_GpuProfiler.cpp :language: c++ :lines: 111-124 :emphasize-lines: 6-7 diff --git a/doc/tutorials/embedding_model/index_en.md b/doc/tutorials/embedding_model/index_en.md index 06f3ff1f00..d793a50f48 100644 --- a/doc/tutorials/embedding_model/index_en.md +++ b/doc/tutorials/embedding_model/index_en.md @@ -93,7 +93,7 @@ where `train.sh` is almost the same as `demo/seqToseq/translation/train.sh`, the - `--init_model_path`: path of the initialization model, here is `data/paraphrase_model` - `--load_missing_parameter_strategy`: operations when model file is missing, here use a normal distibution to initialize the other parameters except for the embedding layer -For users who want to understand the dataset format, model architecture and training procedure in detail, please refer to [Text generation Tutorial](../text_generation/text_generation.md). +For users who want to understand the dataset format, model architecture and training procedure in detail, please refer to [Text generation Tutorial](../text_generation/index_en.md). ## Optional Function ## ### Embedding Parameters Observation diff --git a/doc/tutorials/rec/ml_regression_en.rst b/doc/tutorials/rec/ml_regression_en.rst index ddc00dc706..6346090a84 100644 --- a/doc/tutorials/rec/ml_regression_en.rst +++ b/doc/tutorials/rec/ml_regression_en.rst @@ -264,7 +264,7 @@ In this :code:`dataprovider.py`, we should set\: * use_seq\: Whether this :code:`dataprovider.py` in sequence mode or not. * process\: Return each sample of data to :code:`paddle`. -The data provider details document see :ref:`api_pydataprovider`. +The data provider details document see :ref:`api_pydataprovider2_en`. Train ````` diff --git a/doc/tutorials/semantic_role_labeling/semantic_role_labeling_cn.md b/doc/tutorials/semantic_role_labeling/semantic_role_labeling_cn.md deleted file mode 100644 index f3c855a9fd..0000000000 --- a/doc/tutorials/semantic_role_labeling/semantic_role_labeling_cn.md +++ /dev/null @@ -1,201 +0,0 @@ -# 语义角色标注教程 # - -语义角色标注(Semantic role labeling, SRL)是浅语义解析的一种形式,其目的是在给定的输入句子中发现每个谓词的谓词参数结构。 SRL作为很多自然语言处理任务中的中间步骤是很有用的,如信息提取、文档自动分类和问答。 实例如下 [1]: - - [ A0 他 ] [ AM-MOD 将 ][ AM-NEG 不会 ] [ V 接受] [ A1 任何东西 ] 从 [A2 那些他写的东西中 ]。 - -- V: 动词 -- A0: 接受者 -- A1: 接受的东西 -- A2: 从……接受 -- A3: 属性 -- AM-MOD: 情态动词 -- AM-NEG: 否定 - -给定动词“接受”,句子中的大部分将会扮演某些语义角色。这里,标签方案来自 Penn Proposition Bank。 - -到目前为止,大多数成功的SRL系统是建立在某种形式的解析结果之上的,其中在语法结构上使用了预先定义的特征模板。 本教程将介绍使用深度双向长短期记忆(DB-LSTM)模型[2]的端到端系统来解决SRL任务,这在很大程度上优于先前的最先进的系统。 这个系统将SRL任务视为序列标记问题。 - -## 数据描述 -相关论文[2]采用 CoNLL-2005&2012 共享任务中设置的数据进行训练和测试。根据数据许可证,演示采用 CoNLL-2005 的测试数据集,可以在网站上找到。 - -用户只需执行以下命令就可以下载并处理原始数据: - -```bash -cd data -./get_data.sh -``` -`data `目录会出现如下几个新的文件: -```bash -conll05st-release:the test data set of CoNll-2005 shared task -test.wsj.words:the Wall Street Journal data sentences -test.wsj.props: the propositional arguments -feature: the extracted features from data set -``` - -## 训练 -### DB-LSTM -请参阅情绪分析的演示以了解有关长期短期记忆单元的更多信息。 - -与在 Sentiment Analysis 演示中使用的 Bidirectional-LSTM 不同,DB-LSTM 采用另一种方法来堆叠LSTM层。首先,标准LSTM以正向处理该序列。该 LSTM 层的输入和输出作为下一个 LSTM 层的输入,并被反向处理。这两个标准 LSTM 层组成一对 LSTM。然后我们堆叠一对对的 LSTM 层后得到深度 LSTM 模型。 - -下图展示了时间扩展的2层 DB-LSTM 网络。 -
-![pic](./network_arch.png) -
- -### 特征 -两个输入特性在这个管道中起着至关重要的作用:predicate(pred)和argument(arguments)。 还采用了两个其他特征:谓词上下文(ctx-p)和区域标记(mr)。 因为单个谓词不能精确地描述谓词信息,特别是当相同的词在句子中出现多于一次时。 使用谓词上下文,可以在很大程度上消除歧义。类似地,如果它位于谓词上下文区域中,则使用区域标记 mr = 1 来表示参数位置,反之则 mr = 0。这四个简单的特征是我们的SRL系统所需要的。上下文大小设置为1的一个样本的特征如下[2]所示: -
-![pic](./feature.jpg) -
- -在这个示例中,相应的标记句子是: - -[ A1 A record date ] has [ AM-NEG n't ] been [ V set ] . - -在演示中, 我们采用上面的特征模板, 包括: `argument`, `predicate`, `ctx-p (p=-1,0,1)`, `mark` 并使用 `B/I/O` 方案来标记每个参数。这些特征和标签存储在 `feature` 文件中, 用`\t`分割。 - -### 数据提供 - -`dataprovider.py` 是一个包装数据的 Python 文件。 函数 `hook()` 定义了网络的数据槽。六个特征和标签都是索引槽。 -``` -def hook(settings, word_dict, label_dict, **kwargs): - settings.word_dict = word_dict - settings.label_dict = label_dict - #all inputs are integral and sequential type - settings.slots = [ - integer_value_sequence(len(word_dict)), - integer_value_sequence(len(predicate_dict)), - integer_value_sequence(len(word_dict)), - integer_value_sequence(len(word_dict)), - integer_value_sequence(len(word_dict)), - integer_value_sequence(len(word_dict)), - integer_value_sequence(len(word_dict)), - integer_value_sequence(2), - integer_value_sequence(len(label_dict))] -``` -相应的数据迭代器如下: -``` -@provider(init_hook=hook, should_shuffle=True, calc_batch_size=get_batch_size, - can_over_batch_size=False, cache=CacheType.CACHE_PASS_IN_MEM) -def process(settings, file_name): - with open(file_name, 'r') as fdata: - for line in fdata: - sentence, predicate, ctx_n2, ctx_n1, ctx_0, ctx_p1, ctx_p2, mark, label = \ - line.strip().split('\t') - - words = sentence.split() - sen_len = len(words) - word_slot = [settings.word_dict.get(w, UNK_IDX) for w in words] - - predicate_slot = [settings.predicate_dict.get(predicate)] * sen_len - ctx_n2_slot = [settings.word_dict.get(ctx_n2, UNK_IDX)] * sen_len - ctx_n1_slot = [settings.word_dict.get(ctx_n1, UNK_IDX)] * sen_len - ctx_0_slot = [settings.word_dict.get(ctx_0, UNK_IDX)] * sen_len - ctx_p1_slot = [settings.word_dict.get(ctx_p1, UNK_IDX)] * sen_len - ctx_p2_slot = [settings.word_dict.get(ctx_p2, UNK_IDX)] * sen_len - - marks = mark.split() - mark_slot = [int(w) for w in marks] - - label_list = label.split() - label_slot = [settings.label_dict.get(w) for w in label_list] - yield word_slot, predicate_slot, ctx_n2_slot, ctx_n1_slot, \ - ctx_0_slot, ctx_p1_slot, ctx_p2_slot, mark_slot, label_slot -``` -函数 `process` 产出有8个特征和标签的9个表。 - -### 神经网络配置 - -`db_lstm.py` 是在训练过程中加载字典并定义数据提供程序模块和网络架构的神经网络配置文件。 - -九个 `data_layer` 从数据提供程序加载实例。八个特征分别转换为嵌入,并由`mixed_layer`混合。 深度双向LSTM层提取softmax层的特征。目标函数是标签的交叉熵。 - -### 训练 -训练的脚本是 `train.sh`,用户只需执行: -```bash - ./train.sh -``` -`train.sh` 中的内容: -``` -paddle train \ - --config=./db_lstm.py \ - --use_gpu=0 \ - --log_period=5000 \ - --trainer_count=1 \ - --show_parameter_stats_period=5000 \ - --save_dir=./output \ - --num_passes=10000 \ - --average_test_period=10000000 \ - --init_model_path=./data \ - --load_missing_parameter_strategy=rand \ - --test_all_data_in_one_period=1 \ -2>&1 | tee 'train.log' -``` - -- \--config=./db_lstm.py : 网络配置文件 -- \--use_gpu=false: 使用 CPU 训练(如果已安装 PaddlePaddle GPU版本并想使用 GPU 训练可以设置为true,目前 crf_layer 不支持 GPU) -- \--log_period=500: 每20批(batch)输出日志 -- \--trainer_count=1: 设置线程数(或 GPU 数) -- \--show_parameter_stats_period=5000: 每100批显示参数统计 -- \--save_dir=./output: 模型输出路径 -- \--num_passes=10000: 设置通过数,一次通过意味着PaddlePaddle训练数据集中的所有样本一次 -- \--average_test_period=10000000: 每个 average_test_period 批次对平均参数进行测试 -- \--init_model_path=./data: 参数初始化路径 -- \--load_missing_parameter_strategy=rand: 随机初始不存在的参数 -- \--test_all_data_in_one_period=1: 在一个周期内测试所有数据 - - -训练后,模型将保存在目录`output`中。 我们的训练曲线如下: -
-![pic](./curve.jpg) -
- -### 测试 -测试脚本是 `test.sh`, 执行: -```bash - ./test.sh -``` -`tesh.sh` 的主要部分: -``` -paddle train \ - --config=./db_lstm.py \ - --model_list=$model_list \ - --job=test \ - --config_args=is_test=1 \ -``` - - - \--config=./db_lstm.py: 网络配置文件 - - \--model_list=$model_list.list: 模型列表文件 - - \--job=test: 指示测试任务 - - \--config_args=is_test=1: 指示测试任务的标记 - - \--test_all_data_in_one_period=1: 在一个周期内测试所有数据 - - -### 预测 -预测脚本是 `predict.sh`,用户只需执行: -```bash - ./predict.sh - -``` -在`predict.sh`中,用户应该提供网络配置文件,模型路径,标签文件,字典文件,特征文件。 -``` -python predict.py - -c $config_file \ - -w $best_model_path \ - -l $label_file \ - -p $predicate_dict_file \ - -d $dict_file \ - -i $input_file \ - -o $output_file -``` - -`predict.py` 是主要的可执行python脚本,其中包括函数:加载模型,加载数据,数据预测。网络模型将输出标签的概率分布。 在演示中,我们使用最大概率的标签作为结果。用户还可以根据概率分布矩阵实现集束搜索或维特比解码。 - -预测后,结果保存在 `predict.res` 中。 - -## 引用 -[1] Martha Palmer, Dan Gildea, and Paul Kingsbury. The Proposition Bank: An Annotated Corpus of Semantic Roles , Computational Linguistics, 31(1), 2005. - -[2] Zhou, Jie, and Wei Xu. "End-to-end learning of semantic role labeling using recurrent neural networks." Proceedings of the Annual Meeting of the Association for Computational Linguistics. 2015. diff --git a/doc/tutorials/sentiment_analysis/index_en.md b/doc/tutorials/sentiment_analysis/index_en.md index bb7681db44..279ebddf19 100644 --- a/doc/tutorials/sentiment_analysis/index_en.md +++ b/doc/tutorials/sentiment_analysis/index_en.md @@ -1,3 +1,7 @@ +```eval_rst +.. _sentiment_analysis_en: +``` + # Sentiment Analysis Tutorial Sentiment analysis has many applications. A basic task in sentiment analysis is classifying the polarity of a given text at the document, sentence or feature/aspect level. One simple example is to classify the customer reviews in a shopping website, a tourism website, and group buying websites like Amazon, TaoBao, Tmall etc. diff --git a/python/paddle/trainer_config_helpers/data_sources.py b/python/paddle/trainer_config_helpers/data_sources.py index b6ecd42857..c62553f54c 100644 --- a/python/paddle/trainer_config_helpers/data_sources.py +++ b/python/paddle/trainer_config_helpers/data_sources.py @@ -186,8 +186,7 @@ def define_py_data_sources2(train_list, test_list, module, obj, args=None): obj="process", args={"dictionary": dict_name}) - The related data provider can refer to - `here <../../data_provider/pydataprovider2.html#dataprovider-for-the-sequential-model>`__. + The related data provider can refer to :ref:`api_pydataprovider2_en_sequential_model` . :param train_list: Train list name. :type train_list: basestring From ca476f48560a39e37ef0f5c4f8e0e25d30a150ff Mon Sep 17 00:00:00 2001 From: Yi Wang Date: Mon, 12 Dec 2016 06:07:37 +0000 Subject: [PATCH 084/265] Add dependency to gflags and related tests --- WORKSPACE | 8 ++++++ third_party/gflags_test/BUILD | 11 ++++++++ third_party/gflags_test/gflags_test.cc | 36 ++++++++++++++++++++++++++ 3 files changed, 55 insertions(+) create mode 100644 third_party/gflags_test/BUILD create mode 100644 third_party/gflags_test/gflags_test.cc diff --git a/WORKSPACE b/WORKSPACE index d6ae2af8eb..8060047744 100644 --- a/WORKSPACE +++ b/WORKSPACE @@ -15,3 +15,11 @@ new_http_archive( build_file = "third_party/gtest.BUILD", strip_prefix = "googletest-release-1.7.0", ) + +# External dependency to gflags. This method comes from +# https://github.com/gflags/example/blob/master/WORKSPACE. 
+git_repository( + name = "gflags", + tag = "v2.2.0", + remote = "https://github.com/gflags/gflags.git" +) diff --git a/third_party/gflags_test/BUILD b/third_party/gflags_test/BUILD new file mode 100644 index 0000000000..c3e53afb40 --- /dev/null +++ b/third_party/gflags_test/BUILD @@ -0,0 +1,11 @@ +licenses(["notice"]) # Apache 2.0 + +cc_test( + name = "gflags_test", + srcs = ["gflags_test.cc"], + copts = ["-Iexternal/gtest/include"], + deps = [ + "@gtest//:main", + "@gflags//:gflags", + ], +) diff --git a/third_party/gflags_test/gflags_test.cc b/third_party/gflags_test/gflags_test.cc new file mode 100644 index 0000000000..5e588c2279 --- /dev/null +++ b/third_party/gflags_test/gflags_test.cc @@ -0,0 +1,36 @@ +#include +#include + +#include "gflags/gflags.h" +#include "gtest/gtest.h" + + +DEFINE_bool(verbose, false, "Display program name before message"); +DEFINE_string(message, "Hello world!", "Message to print"); + +static bool IsNonEmptyMessage(const char *flagname, const std::string &value) +{ + return value[0] != '\0'; +} +DEFINE_validator(message, &IsNonEmptyMessage); + + +namespace third_party { +namespace gflags_test { + +TEST(GflagsTest, ParseAndPrint) { + gflags::SetUsageMessage("some usage message"); + gflags::SetVersionString("1.0.0"); + int argc = 1; + char program_name[] = "gflags_test"; + char** argv = new char*[2]; + argv[0] = program_name; + argv[1] = NULL; + gflags::ParseCommandLineFlags(&argc, reinterpret_cast(&argv), true); + EXPECT_EQ("gflags_test", std::string(gflags::ProgramInvocationShortName())); + EXPECT_EQ("Hello world!", FLAGS_message); + gflags::ShutDownCommandLineFlags(); +} + +} // namespace gflags_test +} // namespace third_party From 2a5a8e79e5e01c99670cd647d7083ea00cbaf0ec Mon Sep 17 00:00:00 2001 From: Yu Yang Date: Mon, 12 Dec 2016 14:15:46 +0800 Subject: [PATCH 085/265] Fix Travis-CI when using rebase. * The $TRAVIS_COMMIT_RANGE could be not existed, because the git history can be changed and rebased. Here we alway run Travis-CI when `git diff` cannot return the modified files. * Also do not push docs update if is not PaddlePaddle/Paddle. --- .travis.yml | 11 +++++++---- paddle/scripts/travis/docs.sh | 31 +++++++++++++++++-------------- 2 files changed, 24 insertions(+), 18 deletions(-) diff --git a/.travis.yml b/.travis.yml index 6215060e33..145d22c308 100644 --- a/.travis.yml +++ b/.travis.yml @@ -42,10 +42,13 @@ addons: before_install: - | if [ ${JOB} == "BUILD_AND_TEST" ]; then - if ! git diff --name-only $TRAVIS_COMMIT_RANGE | grep -qvE '(\.md$)|(\.rst$)|(\.jpg$)|(\.png$)' - then - echo "Only markdown docs were updated, stopping build process." - exit + local change_list=`git diff --name-only $TRAVIS_COMMIT_RANGE` + if [ $? -eq 0 ]; then # if git diff return no zero, then rerun unit test. + if ! echo ${change_list} | grep -qvE '(\.md$)|(\.rst$)|(\.jpg$)|(\.png$)' + then + echo "Only markdown docs were updated, stopping build process." + exit + fi fi fi - if [[ "$TRAVIS_OS_NAME" == "linux" ]]; then sudo paddle/scripts/travis/before_install.linux.sh; fi diff --git a/paddle/scripts/travis/docs.sh b/paddle/scripts/travis/docs.sh index c2a4809d75..0bbb76a8a3 100755 --- a/paddle/scripts/travis/docs.sh +++ b/paddle/scripts/travis/docs.sh @@ -47,17 +47,20 @@ if [ $? -eq 0 ]; then fi set -e -# Commit -git add . 
-git config user.name "Travis CI" -git config user.email "paddle-dev@baidu.com" -git commit -m "Deploy to GitHub Pages: ${SHA}" - -# Set ssh private key -openssl aes-256-cbc -K $SSL_KEY -iv $SSL_IV -in ../../paddle/scripts/travis/deploy_key.enc -out deploy_key -d -chmod 600 deploy_key -eval `ssh-agent -s` -ssh-add deploy_key - -# Push -git push $SSH_REPO $TARGET_BRANCH +if [ -n $SSL_KEY ]; then # Only push updated docs for github.com/PaddlePaddle/Paddle. + # Commit + git add . + git config user.name "Travis CI" + git config user.email "paddle-dev@baidu.com" + git commit -m "Deploy to GitHub Pages: ${SHA}" + + # Set ssh private key + openssl aes-256-cbc -K $SSL_KEY -iv $SSL_IV -in ../../paddle/scripts/travis/deploy_key.enc -out deploy_key -d + chmod 600 deploy_key + eval `ssh-agent -s` + ssh-add deploy_key + + # Push + git push $SSH_REPO $TARGET_BRANCH + +fi From 068bfbb817611c856acd8c535de2b33a6126786c Mon Sep 17 00:00:00 2001 From: Yu Yang Date: Mon, 12 Dec 2016 14:48:32 +0800 Subject: [PATCH 086/265] All file pass pre-commit hook --- .travis.yml | 9 +- benchmark/tensorflow/rnn/run_multi.sh | 1 - demo/gan/README.md | 2 +- demo/gan/data/download_cifar.sh | 1 - demo/gan/data/get_mnist_data.sh | 2 - demo/gan/gan_conf.py | 147 +++---- demo/gan/gan_conf_image.py | 358 ++++++++++-------- demo/gan/gan_trainer.py | 134 ++++--- .../quick_start/trainer_config.resnet-lstm.py | 27 +- .../data/extract_dict_feature.py | 5 +- .../data/extract_pairs.py | 8 +- demo/semantic_role_labeling/dataprovider.py | 19 +- demo/semantic_role_labeling/db_lstm.py | 174 ++++----- demo/semantic_role_labeling/predict.py | 44 +-- .../k8s/distributed_training_on_kubernetes.md | 2 +- doc_cn/cluster/k8s/job.yaml | 2 +- doc_cn/cluster/k8s/start_paddle.py | 5 +- doc_cn/demo/sentiment_analysis/index.rst | 14 +- doc_theme/static/js/paddle_doc_init.js | 2 +- paddle/api/GradientMachine.cpp | 2 +- paddle/api/Internal.h | 9 +- paddle/api/Matrix.cpp | 8 +- paddle/api/PaddleAPI.h | 2 +- paddle/api/Parameter.cpp | 2 +- paddle/api/ParameterOptimizer.cpp | 23 +- paddle/api/SequenceGenerator.cpp | 8 +- paddle/api/Trainer.cpp | 4 +- paddle/api/Util.cpp | 10 +- paddle/api/Vector.cpp | 2 +- paddle/api/test/testMatrix.py | 5 +- paddle/api/test/testVector.py | 18 +- paddle/cuda/include/hl_base.h | 4 +- paddle/cuda/include/hl_dso_loader.h | 2 +- paddle/cuda/include/hl_gpu.h | 18 +- paddle/cuda/src/hl_cuda_cublas.cc | 4 +- paddle/cuda/src/hl_cuda_cudnn.cc | 6 +- paddle/cuda/src/hl_cudart_wrap.cc | 2 +- paddle/cuda/src/hl_time.cc | 4 +- paddle/cuda/src/hl_warpctc_wrap.cc | 2 +- .../activations/ActivationFunction.cpp | 6 +- paddle/gserver/dataproviders/DataProvider.cpp | 8 +- paddle/gserver/dataproviders/DataProvider.h | 28 +- .../dataproviders/MultiDataProvider.cpp | 4 +- .../dataproviders/ProtoDataProvider.cpp | 42 +- .../gserver/dataproviders/ProtoDataProvider.h | 2 +- paddle/gserver/dataproviders/ProtoReader.h | 4 +- .../gserver/dataproviders/PyDataProvider.cpp | 40 +- .../gserver/dataproviders/PyDataProvider2.cpp | 23 +- paddle/gserver/evaluators/Evaluator.cpp | 8 +- paddle/gserver/evaluators/Evaluator.h | 6 +- .../gradientmachines/GradientMachine.cpp | 12 +- .../gradientmachines/GradientMachine.h | 10 +- .../gradientmachines/MultiGradientMachine.h | 4 +- .../gserver/gradientmachines/MultiNetwork.cpp | 2 +- .../gradientmachines/NeuralNetwork.cpp | 8 +- .../gserver/gradientmachines/NeuralNetwork.h | 25 +- .../gradientmachines/ParallelNeuralNetwork.h | 15 +- .../RecurrentGradientMachine.cpp | 49 +-- .../RecurrentGradientMachine.h | 2 +- 
paddle/gserver/layers/BatchNormBaseLayer.cpp | 4 +- paddle/gserver/layers/BatchNormBaseLayer.h | 2 +- .../gserver/layers/BatchNormalizationLayer.h | 2 +- paddle/gserver/layers/ConcatenateLayer.cpp | 2 +- paddle/gserver/layers/ContextProjection.cpp | 2 +- paddle/gserver/layers/ConvBaseLayer.cpp | 2 +- paddle/gserver/layers/ConvOperator.cpp | 4 +- paddle/gserver/layers/ConvProjection.cpp | 2 +- paddle/gserver/layers/ConvShiftLayer.cpp | 2 +- .../gserver/layers/ConvexCombinationLayer.cpp | 2 +- paddle/gserver/layers/CosSimVecMatLayer.cpp | 2 +- paddle/gserver/layers/CostLayer.cpp | 6 +- paddle/gserver/layers/CudnnBatchNormLayer.cpp | 4 +- paddle/gserver/layers/CudnnBatchNormLayer.h | 4 +- paddle/gserver/layers/CudnnConvLayer.cpp | 2 +- paddle/gserver/layers/CudnnConvLayer.h | 4 +- paddle/gserver/layers/CudnnPoolLayer.cpp | 4 +- paddle/gserver/layers/EosIdCheckLayer.cpp | 2 +- paddle/gserver/layers/ExpandConvBaseLayer.h | 2 +- paddle/gserver/layers/ExpandConvLayer.cpp | 2 +- paddle/gserver/layers/ExpandConvLayer.h | 2 +- .../gserver/layers/ExpandConvTransLayer.cpp | 2 +- paddle/gserver/layers/ExpandConvTransLayer.h | 2 +- paddle/gserver/layers/FullyConnectedLayer.cpp | 6 +- paddle/gserver/layers/GatedRecurrentLayer.cpp | 7 +- paddle/gserver/layers/GatedRecurrentLayer.h | 4 +- paddle/gserver/layers/GruCompute.cpp | 2 +- paddle/gserver/layers/GruCompute.h | 2 +- paddle/gserver/layers/GruStepLayer.cpp | 2 +- paddle/gserver/layers/IdentityProjection.cpp | 2 +- paddle/gserver/layers/InterpolationLayer.cpp | 2 +- paddle/gserver/layers/Layer.cpp | 6 +- paddle/gserver/layers/Layer.h | 10 +- paddle/gserver/layers/LinearChainCRF.cpp | 2 +- paddle/gserver/layers/LinearChainCTC.cpp | 2 +- paddle/gserver/layers/LstmCompute.cpp | 4 +- paddle/gserver/layers/LstmCompute.h | 2 +- paddle/gserver/layers/LstmLayer.cpp | 2 +- paddle/gserver/layers/LstmLayer.h | 6 +- paddle/gserver/layers/MDLstmLayer.cpp | 4 +- paddle/gserver/layers/MaxOutLayer.cpp | 2 +- paddle/gserver/layers/MixedLayer.cpp | 2 +- paddle/gserver/layers/MixedLayer.h | 2 +- paddle/gserver/layers/MultiplexLayer.cpp | 2 +- paddle/gserver/layers/NormLayer.cpp | 2 +- paddle/gserver/layers/NormLayer.h | 2 +- paddle/gserver/layers/NormProjectionLayer.cpp | 2 +- paddle/gserver/layers/NormProjectionLayer.h | 2 +- paddle/gserver/layers/Operator.h | 4 +- paddle/gserver/layers/OuterProdLayer.cpp | 2 +- paddle/gserver/layers/PoolLayer.cpp | 2 +- paddle/gserver/layers/PoolLayer.h | 4 +- paddle/gserver/layers/PoolProjectionLayer.cpp | 2 +- paddle/gserver/layers/PowerLayer.cpp | 2 +- paddle/gserver/layers/RecurrentLayer.cpp | 2 +- paddle/gserver/layers/RecurrentLayerGroup.cpp | 2 +- paddle/gserver/layers/ResizeLayer.cpp | 2 +- paddle/gserver/layers/ScalingLayer.cpp | 2 +- .../layers/SelectiveFullyConnectedLayer.cpp | 6 +- paddle/gserver/layers/SequenceConcatLayer.cpp | 2 +- paddle/gserver/layers/SequencePoolLayer.cpp | 2 +- .../gserver/layers/SequenceReshapeLayer.cpp | 2 +- paddle/gserver/layers/SequenceToBatch.cpp | 6 +- paddle/gserver/layers/SequenceToBatch.h | 2 +- paddle/gserver/layers/SlopeInterceptLayer.cpp | 2 +- paddle/gserver/layers/SubSequenceLayer.cpp | 2 +- paddle/gserver/layers/SumToOneNormLayer.cpp | 2 +- paddle/gserver/layers/TransLayer.cpp | 2 +- paddle/gserver/layers/TransLayer.h | 2 +- .../layers/TransposedFullMatrixProjection.cpp | 2 +- paddle/gserver/layers/ValidationLayer.cpp | 4 +- paddle/gserver/layers/ValidationLayer.h | 2 +- paddle/gserver/tests/LayerGradUtil.h | 4 +- paddle/gserver/tests/TestUtil.cpp | 10 +- 
paddle/gserver/tests/test_ActivationGrad.cpp | 6 +- paddle/gserver/tests/test_BatchNorm.cpp | 157 ++++---- paddle/gserver/tests/test_ConvTrans.cpp | 10 +- paddle/gserver/tests/test_ConvUnify.cpp | 246 ++++++------ paddle/gserver/tests/test_Evaluator.cpp | 2 +- paddle/gserver/tests/test_LayerGrad.cpp | 2 +- .../gserver/tests/test_MultinomialSampler.cpp | 2 +- paddle/gserver/tests/test_NetworkCompare.cpp | 6 +- .../gserver/tests/test_ProtoDataProvider.cpp | 2 +- paddle/gserver/tests/test_RecurrentLayer.cpp | 6 +- .../gserver/tests/test_SelectiveFCLayer.cpp | 10 +- paddle/gserver/tests/test_WarpCTCLayer.cpp | 6 +- paddle/math/Allocator.h | 2 +- paddle/math/BaseMatrix.h | 4 +- paddle/math/CpuSparseMatrix.cpp | 10 +- paddle/math/MathFunctions.cpp | 2 +- paddle/math/MathUtils.cpp | 2 +- paddle/math/Matrix.h | 6 +- paddle/math/MatrixBitCode.cpp | 4 +- paddle/math/MemoryHandle.cpp | 2 +- paddle/math/PoolAllocator.h | 4 +- paddle/math/SparseMatrix.cpp | 14 +- paddle/math/SparseMatrix.h | 2 +- paddle/math/SparseRowMatrix.h | 4 +- paddle/math/Storage.cpp | 4 +- paddle/math/Storage.h | 2 +- paddle/math/TensorEvaluate.h | 2 +- paddle/math/TensorExpression.h | 6 +- paddle/math/TrainingAlgorithmOp.h | 2 +- paddle/math/Vector.cpp | 13 +- paddle/math/Vector.h | 6 +- paddle/math/tests/OriginalOptimizerApi.h | 2 +- paddle/math/tests/TestUtils.h | 2 +- paddle/math/tests/test_Allocator.cpp | 4 +- paddle/math/tests/test_BaseMatrix.cpp | 2 +- paddle/math/tests/test_CpuGpuVector.cpp | 4 +- paddle/math/tests/test_ExecViaCpu.cpp | 4 +- paddle/math/tests/test_GpuProfiler.cpp | 64 +++- paddle/math/tests/test_SIMDFunctions.cpp | 4 +- paddle/math/tests/test_TrainingAlgorithm.cpp | 6 +- paddle/math/tests/test_batchTranspose.cpp | 2 +- paddle/math/tests/test_matrixCompare.cpp | 8 +- paddle/math/tests/test_perturbation.cpp | 4 +- .../math/tests/test_sparseMatrixCompare.cpp | 4 +- paddle/parameter/Argument.cpp | 9 +- paddle/parameter/Argument.h | 2 +- paddle/parameter/FirstOrderOptimizer.cpp | 6 +- paddle/parameter/ParallelParameter.cpp | 2 +- paddle/parameter/ParallelParameter.h | 10 +- paddle/parameter/Parameter.cpp | 10 +- paddle/parameter/Parameter.h | 10 +- paddle/parameter/ParameterUpdateFunctions.h | 2 +- paddle/parameter/ParameterUpdaterBase.cpp | 4 +- paddle/parameter/ParameterUpdaterHook.cpp | 9 +- paddle/parameter/Regularizer.cpp | 4 +- paddle/parameter/Weight.cpp | 2 +- paddle/parameter/tests/test_common.cpp | 4 +- paddle/pserver/BaseClient.cpp | 6 +- paddle/pserver/BaseClient.h | 4 +- paddle/pserver/LightNetwork.cpp | 12 +- paddle/pserver/LightNetwork.h | 2 +- paddle/pserver/ParameterClient2.cpp | 4 +- paddle/pserver/ParameterClient2.h | 12 +- paddle/pserver/ParameterServer2.cpp | 6 +- paddle/pserver/ParameterServer2.h | 12 +- paddle/pserver/ParameterServer2Main.cpp | 6 +- paddle/pserver/ProtoServer.h | 3 +- paddle/pserver/SocketChannel.cpp | 6 +- paddle/pserver/SparseParameterDistribution.h | 2 +- paddle/pserver/test/SocketTest.cpp | 6 +- paddle/pserver/test/test_ParameterServer2.cpp | 2 +- paddle/pserver/test/test_ProtoServer.cpp | 4 +- paddle/py_paddle/util.py | 10 +- paddle/scripts/travis/main.sh | 2 + paddle/scripts/travis/precommit.sh | 6 + paddle/trainer/MergeModel.cpp | 4 +- paddle/trainer/ParamUtil.cpp | 8 +- paddle/trainer/ParamUtil.h | 6 +- paddle/trainer/ParameterUpdater.h | 2 +- paddle/trainer/RemoteParameterUpdater.cpp | 2 +- paddle/trainer/RemoteParameterUpdater.h | 6 +- paddle/trainer/Tester.h | 6 +- paddle/trainer/TesterConfig.h | 4 +- paddle/trainer/ThreadParameterUpdater.h | 2 +- 
paddle/trainer/Trainer.h | 10 +- paddle/trainer/TrainerConfigHelper.h | 2 +- paddle/trainer/TrainerInternal.cpp | 12 +- paddle/trainer/TrainerInternal.h | 8 +- paddle/trainer/TrainerInternalConfig.h | 4 +- paddle/trainer/TrainerMain.cpp | 4 +- paddle/trainer/tests/picojson.h | 2 +- paddle/trainer/tests/test_Compare.cpp | 2 +- paddle/trainer/tests/test_CompareTwoNets.cpp | 4 +- paddle/trainer/tests/test_CompareTwoOpts.cpp | 4 +- .../tests/test_PyDataProviderWrapper.cpp | 8 +- paddle/trainer/tests/test_TrainerOnePass.cpp | 2 +- .../test_recurrent_machine_generation.cpp | 2 +- paddle/utils/BarrierStat.cpp | 8 +- paddle/utils/BarrierStat.h | 10 +- paddle/utils/CommandLineParser.cpp | 27 +- paddle/utils/CommandLineParser.h | 4 +- paddle/utils/CpuId.cpp | 46 +-- paddle/utils/CpuId.h | 84 ++-- paddle/utils/CustomStackTrace.cpp | 2 +- paddle/utils/CustomStackTrace.h | 5 +- paddle/utils/Logging.cpp | 8 +- paddle/utils/Logging.h | 2 +- paddle/utils/PythonUtil.cpp | 2 +- paddle/utils/PythonUtil.h | 4 +- paddle/utils/Queue.h | 6 +- paddle/utils/Stat.cpp | 9 +- paddle/utils/StringUtil.h | 2 +- paddle/utils/Thread.h | 4 +- paddle/utils/ThreadLocal.cpp | 2 +- paddle/utils/ThreadLocal.h | 4 +- paddle/utils/Util.cpp | 40 +- paddle/utils/Util.h | 16 +- paddle/utils/Version.cpp | 7 +- paddle/utils/Version.h | 2 +- paddle/utils/arch/osx/Locks.cpp | 4 +- paddle/utils/tests/test_CommandLineParser.cpp | 2 +- paddle/utils/tests/test_CustomStackTrace.cpp | 4 +- .../tests/test_CustomStackTracePrint.cpp | 2 +- paddle/utils/tests/test_Logging.cpp | 4 +- paddle/utils/tests/test_SIMDFlags.cpp | 41 +- paddle/utils/tests/test_SpinLock.cpp | 4 +- paddle/utils/tests/test_Thread.cpp | 22 +- paddle/utils/tests/test_ThreadBarrier.cpp | 4 +- 261 files changed, 1524 insertions(+), 1404 deletions(-) create mode 100755 paddle/scripts/travis/precommit.sh diff --git a/.travis.yml b/.travis.yml index 6215060e33..9fc518b0cc 100644 --- a/.travis.yml +++ b/.travis.yml @@ -8,10 +8,13 @@ os: env: - JOB=DOCS - JOB=BUILD_AND_TEST + - JOB=PRE_COMMIT matrix: exclude: - os: osx - env: JOB=DOCS # Only generate documentation in linux + env: JOB=DOCS # Only generate documentation in linux. + - os: osx + env: JOB=PRE_COMMIT # Only check pre-commit hook in linux addons: apt: @@ -39,6 +42,7 @@ addons: - lcov - graphviz - swig + - clang-format-3.8 before_install: - | if [ ${JOB} == "BUILD_AND_TEST" ]; then @@ -50,7 +54,8 @@ before_install: fi - if [[ "$TRAVIS_OS_NAME" == "linux" ]]; then sudo paddle/scripts/travis/before_install.linux.sh; fi - if [[ "$TRAVIS_OS_NAME" == "osx" ]]; then paddle/scripts/travis/before_install.osx.sh; fi - - pip install wheel protobuf sphinx breathe recommonmark virtualenv numpy sphinx_rtd_theme + - if [[ "$JOB" == "PRE_COMMIT" ]]; then sudo ln -s /usr/bin/clang-format-3.8 /usr/bin/clang-format; fi + - pip install wheel protobuf sphinx breathe recommonmark virtualenv numpy sphinx_rtd_theme pre-commit script: - paddle/scripts/travis/main.sh notifications: diff --git a/benchmark/tensorflow/rnn/run_multi.sh b/benchmark/tensorflow/rnn/run_multi.sh index f7f52e01e3..c2d7dd597e 100755 --- a/benchmark/tensorflow/rnn/run_multi.sh +++ b/benchmark/tensorflow/rnn/run_multi.sh @@ -25,4 +25,3 @@ test 4 2 256 512 test 4 2 512 128 test 4 2 512 256 test 4 2 512 512 - diff --git a/demo/gan/README.md b/demo/gan/README.md index fdc970a07b..1908b534b0 100644 --- a/demo/gan/README.md +++ b/demo/gan/README.md @@ -10,4 +10,4 @@ Then you can run the command below. 
The flag -d specifies the training data (cif $python gan_trainer.py -d cifar --use_gpu 1 The generated images will be stored in ./cifar_samples/ -The corresponding models will be stored in ./cifar_params/ \ No newline at end of file +The corresponding models will be stored in ./cifar_params/ diff --git a/demo/gan/data/download_cifar.sh b/demo/gan/data/download_cifar.sh index 32e73b3d8e..ae24ef2b7f 100755 --- a/demo/gan/data/download_cifar.sh +++ b/demo/gan/data/download_cifar.sh @@ -15,4 +15,3 @@ set -e wget https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz tar zxf cifar-10-python.tar.gz rm cifar-10-python.tar.gz - diff --git a/demo/gan/data/get_mnist_data.sh b/demo/gan/data/get_mnist_data.sh index d21bf70671..a77c81bf5a 100644 --- a/demo/gan/data/get_mnist_data.sh +++ b/demo/gan/data/get_mnist_data.sh @@ -15,5 +15,3 @@ do gunzip ${fname}.gz fi done - - diff --git a/demo/gan/gan_conf.py b/demo/gan/gan_conf.py index 58ba9dde58..86ac2dffe5 100644 --- a/demo/gan/gan_conf.py +++ b/demo/gan/gan_conf.py @@ -14,10 +14,9 @@ from paddle.trainer_config_helpers import * mode = get_config_arg("mode", str, "generator") -assert mode in set(["generator", - "discriminator", - "generator_training", - "discriminator_training"]) +assert mode in set([ + "generator", "discriminator", "generator_training", "discriminator_training" +]) is_generator_training = mode == "generator_training" is_discriminator_training = mode == "discriminator_training" @@ -38,8 +37,8 @@ sample_dim = 2 settings( batch_size=128, learning_rate=1e-4, - learning_method=AdamOptimizer(beta1=0.5) -) + learning_method=AdamOptimizer(beta1=0.5)) + def discriminator(sample): """ @@ -50,70 +49,87 @@ def discriminator(sample): of the sample is from real data. """ param_attr = ParamAttr(is_static=is_generator_training) - bias_attr = ParamAttr(is_static=is_generator_training, - initial_mean=1.0, - initial_std=0) - - hidden = fc_layer(input=sample, name="dis_hidden", size=hidden_dim, - bias_attr=bias_attr, - param_attr=param_attr, - act=ReluActivation()) - - hidden2 = fc_layer(input=hidden, name="dis_hidden2", size=hidden_dim, - bias_attr=bias_attr, - param_attr=param_attr, - act=LinearActivation()) - - hidden_bn = batch_norm_layer(hidden2, - act=ReluActivation(), - name="dis_hidden_bn", - bias_attr=bias_attr, - param_attr=ParamAttr(is_static=is_generator_training, - initial_mean=1.0, - initial_std=0.02), - use_global_stats=False) - - return fc_layer(input=hidden_bn, name="dis_prob", size=2, - bias_attr=bias_attr, - param_attr=param_attr, - act=SoftmaxActivation()) + bias_attr = ParamAttr( + is_static=is_generator_training, initial_mean=1.0, initial_std=0) + + hidden = fc_layer( + input=sample, + name="dis_hidden", + size=hidden_dim, + bias_attr=bias_attr, + param_attr=param_attr, + act=ReluActivation()) + + hidden2 = fc_layer( + input=hidden, + name="dis_hidden2", + size=hidden_dim, + bias_attr=bias_attr, + param_attr=param_attr, + act=LinearActivation()) + + hidden_bn = batch_norm_layer( + hidden2, + act=ReluActivation(), + name="dis_hidden_bn", + bias_attr=bias_attr, + param_attr=ParamAttr( + is_static=is_generator_training, initial_mean=1.0, + initial_std=0.02), + use_global_stats=False) + + return fc_layer( + input=hidden_bn, + name="dis_prob", + size=2, + bias_attr=bias_attr, + param_attr=param_attr, + act=SoftmaxActivation()) + def generator(noise): """ generator generates a sample given noise """ param_attr = ParamAttr(is_static=is_discriminator_training) - bias_attr = ParamAttr(is_static=is_discriminator_training, - initial_mean=1.0, - 
initial_std=0) - - hidden = fc_layer(input=noise, - name="gen_layer_hidden", - size=hidden_dim, - bias_attr=bias_attr, - param_attr=param_attr, - act=ReluActivation()) - - hidden2 = fc_layer(input=hidden, name="gen_hidden2", size=hidden_dim, - bias_attr=bias_attr, - param_attr=param_attr, - act=LinearActivation()) - - hidden_bn = batch_norm_layer(hidden2, - act=ReluActivation(), - name="gen_layer_hidden_bn", - bias_attr=bias_attr, - param_attr=ParamAttr(is_static=is_discriminator_training, - initial_mean=1.0, - initial_std=0.02), - use_global_stats=False) - - return fc_layer(input=hidden_bn, - name="gen_layer1", - size=sample_dim, - bias_attr=bias_attr, - param_attr=param_attr, - act=LinearActivation()) + bias_attr = ParamAttr( + is_static=is_discriminator_training, initial_mean=1.0, initial_std=0) + + hidden = fc_layer( + input=noise, + name="gen_layer_hidden", + size=hidden_dim, + bias_attr=bias_attr, + param_attr=param_attr, + act=ReluActivation()) + + hidden2 = fc_layer( + input=hidden, + name="gen_hidden2", + size=hidden_dim, + bias_attr=bias_attr, + param_attr=param_attr, + act=LinearActivation()) + + hidden_bn = batch_norm_layer( + hidden2, + act=ReluActivation(), + name="gen_layer_hidden_bn", + bias_attr=bias_attr, + param_attr=ParamAttr( + is_static=is_discriminator_training, + initial_mean=1.0, + initial_std=0.02), + use_global_stats=False) + + return fc_layer( + input=hidden_bn, + name="gen_layer1", + size=sample_dim, + bias_attr=bias_attr, + param_attr=param_attr, + act=LinearActivation()) + if is_generator_training: noise = data_layer(name="noise", size=noise_dim) @@ -126,7 +142,8 @@ if is_generator_training or is_discriminator_training: label = data_layer(name="label", size=1) prob = discriminator(sample) cost = cross_entropy(input=prob, label=label) - classification_error_evaluator(input=prob, label=label, name=mode+'_error') + classification_error_evaluator( + input=prob, label=label, name=mode + '_error') outputs(cost) if is_generator: diff --git a/demo/gan/gan_conf_image.py b/demo/gan/gan_conf_image.py index 5c2b140537..f89a4e706c 100644 --- a/demo/gan/gan_conf_image.py +++ b/demo/gan/gan_conf_image.py @@ -15,10 +15,9 @@ from paddle.trainer_config_helpers import * mode = get_config_arg("mode", str, "generator") dataSource = get_config_arg("data", str, "mnist") -assert mode in set(["generator", - "discriminator", - "generator_training", - "discriminator_training"]) +assert mode in set([ + "generator", "discriminator", "generator_training", "discriminator_training" +]) is_generator_training = mode == "generator_training" is_discriminator_training = mode == "discriminator_training" @@ -36,24 +35,33 @@ noise_dim = 100 gf_dim = 64 df_dim = 64 if dataSource == "mnist": - sample_dim = 28 # image dim - c_dim = 1 # image color + sample_dim = 28 # image dim + c_dim = 1 # image color else: sample_dim = 32 c_dim = 3 -s2, s4 = int(sample_dim/2), int(sample_dim/4), -s8, s16 = int(sample_dim/8), int(sample_dim/16) +s2, s4 = int(sample_dim / 2), int(sample_dim / 4), +s8, s16 = int(sample_dim / 8), int(sample_dim / 16) settings( batch_size=128, learning_rate=2e-4, - learning_method=AdamOptimizer(beta1=0.5) -) + learning_method=AdamOptimizer(beta1=0.5)) -def conv_bn(input, channels, imgSize, num_filters, output_x, stride, name, - param_attr, bias_attr, param_attr_bn, bn, trans=False, - act=ReluActivation()): - + +def conv_bn(input, + channels, + imgSize, + num_filters, + output_x, + stride, + name, + param_attr, + bias_attr, + param_attr_bn, + bn, + trans=False, + act=ReluActivation()): 
""" conv_bn is a utility function that constructs a convolution/deconv layer with an optional batch_norm layer @@ -63,10 +71,10 @@ def conv_bn(input, channels, imgSize, num_filters, output_x, stride, name, :param trans: whether to use conv (False) or deconv (True) :type trans: bool """ - + # calculate the filter_size and padding size based on the given # imgSize and ouput size - tmp = imgSize - (output_x - 1) * stride + tmp = imgSize - (output_x - 1) * stride if tmp <= 1 or tmp > 5: raise ValueError("conv input-output dimension does not fit") elif tmp <= 3: @@ -76,111 +84,134 @@ def conv_bn(input, channels, imgSize, num_filters, output_x, stride, name, filter_size = tmp padding = 0 - print (imgSize, output_x, stride, filter_size, padding) - + print(imgSize, output_x, stride, filter_size, padding) + if trans: nameApx = "_conv" else: nameApx = "_convt" - + if bn: - conv = img_conv_layer(input, filter_size=filter_size, - num_filters=num_filters, - name=name + nameApx, num_channels=channels, - act=LinearActivation(), groups=1, stride=stride, - padding=padding, bias_attr=bias_attr, - param_attr=param_attr, shared_biases=True, layer_attr=None, - filter_size_y=None, stride_y=None, padding_y=None, - trans=trans) - - conv_bn = batch_norm_layer(conv, - act=act, - name=name + nameApx + "_bn", - bias_attr=bias_attr, - param_attr=param_attr_bn, - use_global_stats=False) - + conv = img_conv_layer( + input, + filter_size=filter_size, + num_filters=num_filters, + name=name + nameApx, + num_channels=channels, + act=LinearActivation(), + groups=1, + stride=stride, + padding=padding, + bias_attr=bias_attr, + param_attr=param_attr, + shared_biases=True, + layer_attr=None, + filter_size_y=None, + stride_y=None, + padding_y=None, + trans=trans) + + conv_bn = batch_norm_layer( + conv, + act=act, + name=name + nameApx + "_bn", + bias_attr=bias_attr, + param_attr=param_attr_bn, + use_global_stats=False) + return conv_bn else: - conv = img_conv_layer(input, filter_size=filter_size, - num_filters=num_filters, - name=name + nameApx, num_channels=channels, - act=act, groups=1, stride=stride, - padding=padding, bias_attr=bias_attr, - param_attr=param_attr, shared_biases=True, layer_attr=None, - filter_size_y=None, stride_y=None, padding_y=None, - trans=trans) + conv = img_conv_layer( + input, + filter_size=filter_size, + num_filters=num_filters, + name=name + nameApx, + num_channels=channels, + act=act, + groups=1, + stride=stride, + padding=padding, + bias_attr=bias_attr, + param_attr=param_attr, + shared_biases=True, + layer_attr=None, + filter_size_y=None, + stride_y=None, + padding_y=None, + trans=trans) return conv - + + def generator(noise): """ generator generates a sample given noise """ - param_attr = ParamAttr(is_static=is_discriminator_training, - initial_mean=0.0, - initial_std=0.02) - bias_attr = ParamAttr(is_static=is_discriminator_training, - initial_mean=0.0, - initial_std=0.0) - - param_attr_bn=ParamAttr(is_static=is_discriminator_training, - initial_mean=1.0, - initial_std=0.02) - - h1 = fc_layer(input=noise, - name="gen_layer_h1", - size=s8 * s8 * gf_dim * 4, - bias_attr=bias_attr, - param_attr=param_attr, - act=LinearActivation()) - - h1_bn = batch_norm_layer(h1, - act=ReluActivation(), - name="gen_layer_h1_bn", - bias_attr=bias_attr, - param_attr=param_attr_bn, - use_global_stats=False) - - h2_bn = conv_bn(h1_bn, - channels=gf_dim*4, - output_x=s8, - num_filters=gf_dim*2, - imgSize=s4, - stride=2, - name="gen_layer_h2", - param_attr=param_attr, - bias_attr=bias_attr, - param_attr_bn=param_attr_bn, 
- bn=True, - trans=True) - - h3_bn = conv_bn(h2_bn, - channels=gf_dim*2, - output_x=s4, - num_filters=gf_dim, - imgSize=s2, - stride=2, - name="gen_layer_h3", - param_attr=param_attr, - bias_attr=bias_attr, - param_attr_bn=param_attr_bn, - bn=True, - trans=True) - - - return conv_bn(h3_bn, - channels=gf_dim, - output_x=s2, - num_filters=c_dim, - imgSize=sample_dim, - stride=2, - name="gen_layer_h4", - param_attr=param_attr, - bias_attr=bias_attr, - param_attr_bn=param_attr_bn, - bn=False, - trans=True, - act=TanhActivation()) + param_attr = ParamAttr( + is_static=is_discriminator_training, initial_mean=0.0, initial_std=0.02) + bias_attr = ParamAttr( + is_static=is_discriminator_training, initial_mean=0.0, initial_std=0.0) + + param_attr_bn = ParamAttr( + is_static=is_discriminator_training, initial_mean=1.0, initial_std=0.02) + + h1 = fc_layer( + input=noise, + name="gen_layer_h1", + size=s8 * s8 * gf_dim * 4, + bias_attr=bias_attr, + param_attr=param_attr, + act=LinearActivation()) + + h1_bn = batch_norm_layer( + h1, + act=ReluActivation(), + name="gen_layer_h1_bn", + bias_attr=bias_attr, + param_attr=param_attr_bn, + use_global_stats=False) + + h2_bn = conv_bn( + h1_bn, + channels=gf_dim * 4, + output_x=s8, + num_filters=gf_dim * 2, + imgSize=s4, + stride=2, + name="gen_layer_h2", + param_attr=param_attr, + bias_attr=bias_attr, + param_attr_bn=param_attr_bn, + bn=True, + trans=True) + + h3_bn = conv_bn( + h2_bn, + channels=gf_dim * 2, + output_x=s4, + num_filters=gf_dim, + imgSize=s2, + stride=2, + name="gen_layer_h3", + param_attr=param_attr, + bias_attr=bias_attr, + param_attr_bn=param_attr_bn, + bn=True, + trans=True) + + return conv_bn( + h3_bn, + channels=gf_dim, + output_x=s2, + num_filters=c_dim, + imgSize=sample_dim, + stride=2, + name="gen_layer_h4", + param_attr=param_attr, + bias_attr=bias_attr, + param_attr_bn=param_attr_bn, + bn=False, + trans=True, + act=TanhActivation()) def discriminator(sample): @@ -191,58 +222,60 @@ def discriminator(sample): of the sample is from generator and dimension 1 is the probabblity of the sample is from real data. 
""" - param_attr = ParamAttr(is_static=is_generator_training, - initial_mean=0.0, - initial_std=0.02) - bias_attr = ParamAttr(is_static=is_generator_training, - initial_mean=0.0, - initial_std=0.0) - - param_attr_bn=ParamAttr(is_static=is_generator_training, - initial_mean=1.0, - initial_std=0.02) - - h0 = conv_bn(sample, - channels=c_dim, - imgSize=sample_dim, - num_filters=df_dim, - output_x=s2, - stride=2, - name="dis_h0", - param_attr=param_attr, - bias_attr=bias_attr, - param_attr_bn=param_attr_bn, - bn=False) - - h1_bn = conv_bn(h0, - channels=df_dim, - imgSize=s2, - num_filters=df_dim*2, - output_x=s4, - stride=2, - name="dis_h1", - param_attr=param_attr, - bias_attr=bias_attr, - param_attr_bn=param_attr_bn, - bn=True) - - h2_bn = conv_bn(h1_bn, - channels=df_dim*2, - imgSize=s4, - num_filters=df_dim*4, - output_x=s8, - stride=2, - name="dis_h2", - param_attr=param_attr, - bias_attr=bias_attr, - param_attr_bn=param_attr_bn, - bn=True) - - return fc_layer(input=h2_bn, name="dis_prob", size=2, - bias_attr=bias_attr, - param_attr=param_attr, - act=SoftmaxActivation()) + param_attr = ParamAttr( + is_static=is_generator_training, initial_mean=0.0, initial_std=0.02) + bias_attr = ParamAttr( + is_static=is_generator_training, initial_mean=0.0, initial_std=0.0) + + param_attr_bn = ParamAttr( + is_static=is_generator_training, initial_mean=1.0, initial_std=0.02) + + h0 = conv_bn( + sample, + channels=c_dim, + imgSize=sample_dim, + num_filters=df_dim, + output_x=s2, + stride=2, + name="dis_h0", + param_attr=param_attr, + bias_attr=bias_attr, + param_attr_bn=param_attr_bn, + bn=False) + + h1_bn = conv_bn( + h0, + channels=df_dim, + imgSize=s2, + num_filters=df_dim * 2, + output_x=s4, + stride=2, + name="dis_h1", + param_attr=param_attr, + bias_attr=bias_attr, + param_attr_bn=param_attr_bn, + bn=True) + + h2_bn = conv_bn( + h1_bn, + channels=df_dim * 2, + imgSize=s4, + num_filters=df_dim * 4, + output_x=s8, + stride=2, + name="dis_h2", + param_attr=param_attr, + bias_attr=bias_attr, + param_attr_bn=param_attr_bn, + bn=True) + return fc_layer( + input=h2_bn, + name="dis_prob", + size=2, + bias_attr=bias_attr, + param_attr=param_attr, + act=SoftmaxActivation()) if is_generator_training: @@ -250,13 +283,14 @@ if is_generator_training: sample = generator(noise) if is_discriminator_training: - sample = data_layer(name="sample", size=sample_dim * sample_dim*c_dim) + sample = data_layer(name="sample", size=sample_dim * sample_dim * c_dim) if is_generator_training or is_discriminator_training: label = data_layer(name="label", size=1) prob = discriminator(sample) cost = cross_entropy(input=prob, label=label) - classification_error_evaluator(input=prob, label=label, name=mode+'_error') + classification_error_evaluator( + input=prob, label=label, name=mode + '_error') outputs(cost) if is_generator: diff --git a/demo/gan/gan_trainer.py b/demo/gan/gan_trainer.py index a8c1bd0414..4a26c230f7 100644 --- a/demo/gan/gan_trainer.py +++ b/demo/gan/gan_trainer.py @@ -16,7 +16,7 @@ import argparse import random import numpy import cPickle -import sys,os +import sys, os from PIL import Image from paddle.trainer.config_parser import parse_config @@ -24,6 +24,7 @@ from paddle.trainer.config_parser import logger import py_paddle.swig_paddle as api import matplotlib.pyplot as plt + def plot2DScatter(data, outputfile): ''' Plot the data as a 2D scatter plot and save to outputfile @@ -41,9 +42,11 @@ def plot2DScatter(data, outputfile): plt.scatter(x, y) plt.savefig(outputfile, bbox_inches='tight') + def CHECK_EQ(a, b): 
assert a == b, "a=%s, b=%s" % (a, b) + def copy_shared_parameters(src, dst): ''' copy the parameters from src to dst @@ -52,11 +55,9 @@ def copy_shared_parameters(src, dst): :param dst: the destination of the parameters :type dst: GradientMachine ''' - src_params = [src.getParameter(i) - for i in xrange(src.getParameterSize())] + src_params = [src.getParameter(i) for i in xrange(src.getParameterSize())] src_params = dict([(p.getName(), p) for p in src_params]) - for i in xrange(dst.getParameterSize()): dst_param = dst.getParameter(i) src_param = src_params.get(dst_param.getName(), None) @@ -67,15 +68,17 @@ def copy_shared_parameters(src, dst): CHECK_EQ(len(src_value), len(dst_value)) dst_value.copyFrom(src_value) dst_param.setValueUpdated() - + + def print_parameters(src): - src_params = [src.getParameter(i) - for i in xrange(src.getParameterSize())] + src_params = [src.getParameter(i) for i in xrange(src.getParameterSize())] print "***************" for p in src_params: print "Name is %s" % p.getName() - print "value is %s \n" % p.getBuf(api.PARAMETER_VALUE).copyToNumpyArray() + print "value is %s \n" % p.getBuf(api.PARAMETER_VALUE).copyToNumpyArray( + ) + def load_mnist_data(imageFile): f = open(imageFile, "rb") @@ -86,33 +89,36 @@ def load_mnist_data(imageFile): n = 60000 else: n = 10000 - - data = numpy.fromfile(f, 'ubyte', count=n*28*28).reshape((n, 28*28)) + + data = numpy.fromfile(f, 'ubyte', count=n * 28 * 28).reshape((n, 28 * 28)) data = data / 255.0 * 2.0 - 1.0 f.close() return data.astype('float32') + def load_cifar_data(cifar_path): batch_size = 10000 - data = numpy.zeros((5*batch_size, 32*32*3), dtype = "float32") + data = numpy.zeros((5 * batch_size, 32 * 32 * 3), dtype="float32") for i in range(1, 6): file = cifar_path + "/data_batch_" + str(i) fo = open(file, 'rb') dict = cPickle.load(fo) fo.close() - data[(i - 1)*batch_size:(i*batch_size), :] = dict["data"] - + data[(i - 1) * batch_size:(i * batch_size), :] = dict["data"] + data = data / 255.0 * 2.0 - 1.0 return data + # synthesize 2-D uniform data def load_uniform_data(): data = numpy.random.rand(1000000, 2).astype('float32') return data + def merge(images, size): - if images.shape[1] == 28*28: + if images.shape[1] == 28 * 28: h, w, c = 28, 28, 1 else: h, w, c = 32, 32, 3 @@ -124,6 +130,7 @@ def merge(images, size): ((images[idx, :].reshape((h, w, c), order="F").transpose(1, 0, 2) + 1.0) / 2.0 * 255.0) return img.astype('uint8') + def save_images(images, path): merged_img = merge(images, [8, 8]) if merged_img.shape[2] == 1: @@ -131,14 +138,17 @@ def save_images(images, path): else: im = Image.fromarray(merged_img, mode="RGB") im.save(path) - + + def get_real_samples(batch_size, data_np): - return data_np[numpy.random.choice(data_np.shape[0], batch_size, - replace=False),:] - + return data_np[numpy.random.choice( + data_np.shape[0], batch_size, replace=False), :] + + def get_noise(batch_size, noise_dim): return numpy.random.normal(size=(batch_size, noise_dim)).astype('float32') + def get_fake_samples(generator_machine, batch_size, noise): gen_inputs = api.Arguments.createArguments(1) gen_inputs.setSlotValue(0, api.Matrix.createDenseFromNumpy(noise)) @@ -147,12 +157,14 @@ def get_fake_samples(generator_machine, batch_size, noise): fake_samples = gen_outputs.getSlotValue(0).copyToNumpyMat() return fake_samples + def get_training_loss(training_machine, inputs): outputs = api.Arguments.createArguments(0) training_machine.forward(inputs, outputs, api.PASS_TEST) loss = outputs.getSlotValue(0).copyToNumpyMat() return 
numpy.mean(loss) + def prepare_discriminator_data_batch_pos(batch_size, data_np): real_samples = get_real_samples(batch_size, data_np) labels = numpy.ones(batch_size, dtype='int32') @@ -161,6 +173,7 @@ def prepare_discriminator_data_batch_pos(batch_size, data_np): inputs.setSlotIds(1, api.IVector.createVectorFromNumpy(labels)) return inputs + def prepare_discriminator_data_batch_neg(generator_machine, batch_size, noise): fake_samples = get_fake_samples(generator_machine, batch_size, noise) labels = numpy.zeros(batch_size, dtype='int32') @@ -169,6 +182,7 @@ def prepare_discriminator_data_batch_neg(generator_machine, batch_size, noise): inputs.setSlotIds(1, api.IVector.createVectorFromNumpy(labels)) return inputs + def prepare_generator_data_batch(batch_size, noise): label = numpy.ones(batch_size, dtype='int32') inputs = api.Arguments.createArguments(2) @@ -193,10 +207,9 @@ def get_layer_size(model_conf, layer_name): def main(): parser = argparse.ArgumentParser() parser.add_argument("-d", "--data_source", help="mnist or cifar or uniform") - parser.add_argument("--use_gpu", default="1", - help="1 means use gpu for training") - parser.add_argument("--gpu_id", default="0", - help="the gpu_id parameter") + parser.add_argument( + "--use_gpu", default="1", help="1 means use gpu for training") + parser.add_argument("--gpu_id", default="0", help="the gpu_id parameter") args = parser.parse_args() data_source = args.data_source use_gpu = args.use_gpu @@ -208,30 +221,32 @@ def main(): if not os.path.exists("./%s_params/" % data_source): os.makedirs("./%s_params/" % data_source) - - api.initPaddle('--use_gpu=' + use_gpu, '--dot_period=10', '--log_period=100', - '--gpu_id=' + args.gpu_id, '--save_dir=' + "./%s_params/" % data_source) - + + api.initPaddle('--use_gpu=' + use_gpu, '--dot_period=10', + '--log_period=100', '--gpu_id=' + args.gpu_id, + '--save_dir=' + "./%s_params/" % data_source) + if data_source == "uniform": conf = "gan_conf.py" num_iter = 10000 else: conf = "gan_conf_image.py" num_iter = 1000 - + gen_conf = parse_config(conf, "mode=generator_training,data=" + data_source) - dis_conf = parse_config(conf, "mode=discriminator_training,data=" + data_source) + dis_conf = parse_config(conf, + "mode=discriminator_training,data=" + data_source) generator_conf = parse_config(conf, "mode=generator,data=" + data_source) batch_size = dis_conf.opt_config.batch_size noise_dim = get_layer_size(gen_conf.model_config, "noise") - + if data_source == "mnist": data_np = load_mnist_data("./data/mnist_data/train-images-idx3-ubyte") elif data_source == "cifar": data_np = load_cifar_data("./data/cifar-10-batches-py/") else: data_np = load_uniform_data() - + # this creates a gradient machine for discriminator dis_training_machine = api.GradientMachine.createFromConfigProto( dis_conf.model_config) @@ -244,26 +259,24 @@ def main(): logger.info(str(generator_conf.model_config)) generator_machine = api.GradientMachine.createFromConfigProto( generator_conf.model_config) - - dis_trainer = api.Trainer.create( - dis_conf, dis_training_machine) - gen_trainer = api.Trainer.create( - gen_conf, gen_training_machine) - + dis_trainer = api.Trainer.create(dis_conf, dis_training_machine) + + gen_trainer = api.Trainer.create(gen_conf, gen_training_machine) + dis_trainer.startTrain() gen_trainer.startTrain() - + # Sync parameters between networks (GradientMachine) at the beginning copy_shared_parameters(gen_training_machine, dis_training_machine) copy_shared_parameters(gen_training_machine, generator_machine) - + # constrain that 
either discriminator or generator can not be trained # consecutively more than MAX_strike times curr_train = "dis" curr_strike = 0 MAX_strike = 5 - + for train_pass in xrange(100): dis_trainer.startTrainPass() gen_trainer.startTrainPass() @@ -272,23 +285,25 @@ def main(): noise = get_noise(batch_size, noise_dim) data_batch_dis_pos = prepare_discriminator_data_batch_pos( batch_size, data_np) - dis_loss_pos = get_training_loss(dis_training_machine, data_batch_dis_pos) - + dis_loss_pos = get_training_loss(dis_training_machine, + data_batch_dis_pos) + data_batch_dis_neg = prepare_discriminator_data_batch_neg( generator_machine, batch_size, noise) - dis_loss_neg = get_training_loss(dis_training_machine, data_batch_dis_neg) - + dis_loss_neg = get_training_loss(dis_training_machine, + data_batch_dis_neg) + dis_loss = (dis_loss_pos + dis_loss_neg) / 2.0 - + # Do forward pass in generator to get the gen_loss - data_batch_gen = prepare_generator_data_batch( - batch_size, noise) + data_batch_gen = prepare_generator_data_batch(batch_size, noise) gen_loss = get_training_loss(gen_training_machine, data_batch_gen) - + if i % 100 == 0: - print "d_pos_loss is %s d_neg_loss is %s" % (dis_loss_pos, dis_loss_neg) + print "d_pos_loss is %s d_neg_loss is %s" % (dis_loss_pos, + dis_loss_neg) print "d_loss is %s g_loss is %s" % (dis_loss, gen_loss) - + # Decide which network to train based on the training history # And the relative size of the loss if (not (curr_train == "dis" and curr_strike == MAX_strike)) and \ @@ -297,11 +312,12 @@ def main(): curr_strike += 1 else: curr_train = "dis" - curr_strike = 1 + curr_strike = 1 dis_trainer.trainOneDataBatch(batch_size, data_batch_dis_neg) - dis_trainer.trainOneDataBatch(batch_size, data_batch_dis_pos) - copy_shared_parameters(dis_training_machine, gen_training_machine) - + dis_trainer.trainOneDataBatch(batch_size, data_batch_dis_pos) + copy_shared_parameters(dis_training_machine, + gen_training_machine) + else: if curr_train == "gen": curr_strike += 1 @@ -311,19 +327,23 @@ def main(): gen_trainer.trainOneDataBatch(batch_size, data_batch_gen) # TODO: add API for paddle to allow true parameter sharing between different GradientMachines # so that we do not need to copy shared parameters. - copy_shared_parameters(gen_training_machine, dis_training_machine) + copy_shared_parameters(gen_training_machine, + dis_training_machine) copy_shared_parameters(gen_training_machine, generator_machine) - + dis_trainer.finishTrainPass() gen_trainer.finishTrainPass() # At the end of each pass, save the generated samples/images fake_samples = get_fake_samples(generator_machine, batch_size, noise) if data_source == "uniform": - plot2DScatter(fake_samples, "./%s_samples/train_pass%s.png" % (data_source, train_pass)) + plot2DScatter(fake_samples, "./%s_samples/train_pass%s.png" % + (data_source, train_pass)) else: - save_images(fake_samples, "./%s_samples/train_pass%s.png" % (data_source, train_pass)) + save_images(fake_samples, "./%s_samples/train_pass%s.png" % + (data_source, train_pass)) dis_trainer.finishTrain() gen_trainer.finishTrain() + if __name__ == '__main__': main() diff --git a/demo/quick_start/trainer_config.resnet-lstm.py b/demo/quick_start/trainer_config.resnet-lstm.py index 5bed925d84..89a837abb7 100644 --- a/demo/quick_start/trainer_config.resnet-lstm.py +++ b/demo/quick_start/trainer_config.resnet-lstm.py @@ -13,7 +13,6 @@ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and # limitations under the License. - """ This configuration is a demonstration of how to implement the stacked LSTM with residual connections, i.e. an LSTM layer takes the sum of the hidden states @@ -46,11 +45,12 @@ is_predict = get_config_arg('is_predict', bool, False) trn = 'data/train.list' if not is_predict else None tst = 'data/test.list' if not is_predict else 'data/pred.list' process = 'process' if not is_predict else 'process_predict' -define_py_data_sources2(train_list=trn, - test_list=tst, - module="dataprovider_emb", - obj=process, - args={"dictionary": word_dict}) +define_py_data_sources2( + train_list=trn, + test_list=tst, + module="dataprovider_emb", + obj=process, + args={"dictionary": word_dict}) batch_size = 128 if not is_predict else 1 settings( @@ -58,10 +58,9 @@ settings( learning_rate=2e-3, learning_method=AdamOptimizer(), regularization=L2Regularization(8e-4), - gradient_clipping_threshold=25 -) + gradient_clipping_threshold=25) -bias_attr = ParamAttr(initial_std=0.,l2_rate=0.) +bias_attr = ParamAttr(initial_std=0., l2_rate=0.) data = data_layer(name="word", size=len(word_dict)) emb = embedding_layer(input=data, size=128) @@ -73,17 +72,15 @@ for i in range(3): # The input to the current layer is the sum of the hidden state # and input of the previous layer. current_input = addto_layer(input=[previous_input, previous_hidden_state]) - hidden_state = simple_lstm(input=current_input, size=128, - lstm_cell_attr=ExtraAttr(drop_rate=0.1)) + hidden_state = simple_lstm( + input=current_input, size=128, lstm_cell_attr=ExtraAttr(drop_rate=0.1)) previous_input, previous_hidden_state = current_input, hidden_state lstm = previous_hidden_state lstm_last = pooling_layer(input=lstm, pooling_type=MaxPooling()) -output = fc_layer(input=lstm_last, size=2, - bias_attr=bias_attr, - act=SoftmaxActivation()) - +output = fc_layer( + input=lstm_last, size=2, bias_attr=bias_attr, act=SoftmaxActivation()) if is_predict: maxid = maxid_layer(output) diff --git a/demo/semantic_role_labeling/data/extract_dict_feature.py b/demo/semantic_role_labeling/data/extract_dict_feature.py index 123df022f5..a02a49a86e 100644 --- a/demo/semantic_role_labeling/data/extract_dict_feature.py +++ b/demo/semantic_role_labeling/data/extract_dict_feature.py @@ -33,7 +33,7 @@ def extract_dict_features(pair_file, feature_file): ctx_n1 = sentence_list[verb_index - 1] else: ctx_n1 = 'bos' - + if verb_index > 1: mark[verb_index - 2] = 1 ctx_n2 = sentence_list[verb_index - 2] @@ -48,7 +48,7 @@ def extract_dict_features(pair_file, feature_file): ctx_p1 = sentence_list[verb_index + 1] else: ctx_p1 = 'eos' - + if verb_index < len(labels_list) - 3: mark[verb_index + 2] = 1 ctx_p2 = sentence_list[verb_index + 2] @@ -69,7 +69,6 @@ def extract_dict_features(pair_file, feature_file): feature_out.write(feature_str + '\n') - if __name__ == '__main__': usage = '-p pair_file -f feature_file' diff --git a/demo/semantic_role_labeling/data/extract_pairs.py b/demo/semantic_role_labeling/data/extract_pairs.py index 2d0d535c53..94a8488c16 100644 --- a/demo/semantic_role_labeling/data/extract_pairs.py +++ b/demo/semantic_role_labeling/data/extract_pairs.py @@ -66,8 +66,8 @@ def transform_labels(sentences, labels): else: verb_list = [] for x in labels[i][0]: - if x !='-': - verb_list.append(x) + if x != '-': + verb_list.append(x) for j in xrange(1, len(labels[i])): label_list = labels[i][j] @@ -93,7 +93,7 @@ def transform_labels(sentences, labels): is_in_bracket = True else: 
print 'error:', ll - sen_lab_pair.append((sentences[i], verb_list[j-1], label_seq)) + sen_lab_pair.append((sentences[i], verb_list[j - 1], label_seq)) return sen_lab_pair @@ -103,7 +103,7 @@ def write_file(sen_lab_pair, output_file): sentence = x[0] label_seq = ' '.join(x[2]) assert len(sentence.split()) == len(x[2]) - fout.write(sentence + '\t' + x[1]+'\t' +label_seq + '\n') + fout.write(sentence + '\t' + x[1] + '\t' + label_seq + '\n') if __name__ == '__main__': diff --git a/demo/semantic_role_labeling/dataprovider.py b/demo/semantic_role_labeling/dataprovider.py index d12f10bfcb..042cd4e7a9 100644 --- a/demo/semantic_role_labeling/dataprovider.py +++ b/demo/semantic_role_labeling/dataprovider.py @@ -21,7 +21,7 @@ def hook(settings, word_dict, label_dict, predicate_dict, **kwargs): settings.word_dict = word_dict settings.label_dict = label_dict settings.predicate_dict = predicate_dict - + #all inputs are integral and sequential type settings.slots = [ integer_value_sequence(len(word_dict)), @@ -29,25 +29,28 @@ def hook(settings, word_dict, label_dict, predicate_dict, **kwargs): integer_value_sequence(len(word_dict)), integer_value_sequence(len(word_dict)), integer_value_sequence(len(word_dict)), - integer_value_sequence(len(word_dict)), - integer_value_sequence(len(predicate_dict)), - integer_value_sequence(2), + integer_value_sequence(len(word_dict)), + integer_value_sequence(len(predicate_dict)), integer_value_sequence(2), integer_value_sequence(len(label_dict)) ] def get_batch_size(yeild_data): return len(yeild_data[0]) - -@provider(init_hook=hook, should_shuffle=True, calc_batch_size=get_batch_size, - can_over_batch_size=False, cache=CacheType.CACHE_PASS_IN_MEM) + +@provider( + init_hook=hook, + should_shuffle=True, + calc_batch_size=get_batch_size, + can_over_batch_size=False, + cache=CacheType.CACHE_PASS_IN_MEM) def process(settings, file_name): with open(file_name, 'r') as fdata: for line in fdata: sentence, predicate, ctx_n2, ctx_n1, ctx_0, ctx_p1, ctx_p2, mark, label = \ line.strip().split('\t') - + words = sentence.split() sen_len = len(words) word_slot = [settings.word_dict.get(w, UNK_IDX) for w in words] diff --git a/demo/semantic_role_labeling/db_lstm.py b/demo/semantic_role_labeling/db_lstm.py index 75946bd72e..04e2a559b1 100644 --- a/demo/semantic_role_labeling/db_lstm.py +++ b/demo/semantic_role_labeling/db_lstm.py @@ -20,7 +20,7 @@ from paddle.trainer_config_helpers import * #file paths word_dict_file = './data/wordDict.txt' label_dict_file = './data/targetDict.txt' -predicate_file= './data/verbDict.txt' +predicate_file = './data/verbDict.txt' train_list_file = './data/train.list' test_list_file = './data/test.list' @@ -47,7 +47,6 @@ if not is_predict: w = line.strip() predicate_dict[w] = i - if is_test: train_list_file = None @@ -57,9 +56,11 @@ if not is_predict: test_list=test_list_file, module='dataprovider', obj='process', - args={'word_dict': word_dict, - 'label_dict': label_dict, - 'predicate_dict': predicate_dict }) + args={ + 'word_dict': word_dict, + 'label_dict': label_dict, + 'predicate_dict': predicate_dict + }) word_dict_len = len(word_dict) label_dict_len = len(label_dict) @@ -77,24 +78,16 @@ mark_dim = 5 hidden_dim = 512 depth = 8 - - ########################### Optimizer ####################################### - settings( batch_size=150, learning_method=MomentumOptimizer(momentum=0), learning_rate=2e-2, regularization=L2Regularization(8e-4), is_async=False, - model_average=ModelAverage(average_window=0.5, - max_average_window=10000), - -) - - - + 
model_average=ModelAverage( + average_window=0.5, max_average_window=10000), ) ####################################### network ############################## #8 features and 1 target @@ -108,22 +101,28 @@ ctx_p1 = data_layer(name='ctx_p1_data', size=word_dict_len) ctx_p2 = data_layer(name='ctx_p2_data', size=word_dict_len) mark = data_layer(name='mark_data', size=mark_dict_len) - if not is_predict: target = data_layer(name='target', size=label_dict_len) - -default_std=1/math.sqrt(hidden_dim)/3.0 +default_std = 1 / math.sqrt(hidden_dim) / 3.0 emb_para = ParameterAttribute(name='emb', initial_std=0., learning_rate=0.) std_0 = ParameterAttribute(initial_std=0.) -std_default = ParameterAttribute(initial_std=default_std) - -predicate_embedding = embedding_layer(size=word_dim, input=predicate, param_attr=ParameterAttribute(name='vemb',initial_std=default_std)) -mark_embedding = embedding_layer(name='word_ctx-in_embedding', size=mark_dim, input=mark, param_attr=std_0) - -word_input=[word, ctx_n2, ctx_n1, ctx_0, ctx_p1, ctx_p2] -emb_layers = [embedding_layer(size=word_dim, input=x, param_attr=emb_para) for x in word_input] +std_default = ParameterAttribute(initial_std=default_std) + +predicate_embedding = embedding_layer( + size=word_dim, + input=predicate, + param_attr=ParameterAttribute( + name='vemb', initial_std=default_std)) +mark_embedding = embedding_layer( + name='word_ctx-in_embedding', size=mark_dim, input=mark, param_attr=std_0) + +word_input = [word, ctx_n2, ctx_n1, ctx_0, ctx_p1, ctx_p2] +emb_layers = [ + embedding_layer( + size=word_dim, input=x, param_attr=emb_para) for x in word_input +] emb_layers.append(predicate_embedding) emb_layers.append(mark_embedding) @@ -131,84 +130,89 @@ hidden_0 = mixed_layer( name='hidden0', size=hidden_dim, bias_attr=std_default, - input=[ full_matrix_projection(input=emb, param_attr=std_default ) for emb in emb_layers ]) - + input=[ + full_matrix_projection( + input=emb, param_attr=std_default) for emb in emb_layers + ]) mix_hidden_lr = 1e-3 lstm_para_attr = ParameterAttribute(initial_std=0.0, learning_rate=1.0) -hidden_para_attr = ParameterAttribute(initial_std=default_std, learning_rate=mix_hidden_lr) - -lstm_0 = lstmemory(name='lstm0', - input=hidden_0, - act=ReluActivation(), - gate_act=SigmoidActivation(), - state_act=SigmoidActivation(), - bias_attr=std_0, - param_attr=lstm_para_attr) +hidden_para_attr = ParameterAttribute( + initial_std=default_std, learning_rate=mix_hidden_lr) + +lstm_0 = lstmemory( + name='lstm0', + input=hidden_0, + act=ReluActivation(), + gate_act=SigmoidActivation(), + state_act=SigmoidActivation(), + bias_attr=std_0, + param_attr=lstm_para_attr) #stack L-LSTM and R-LSTM with direct edges input_tmp = [hidden_0, lstm_0] - for i in range(1, depth): - mix_hidden = mixed_layer(name='hidden'+str(i), - size=hidden_dim, - bias_attr=std_default, - input=[full_matrix_projection(input=input_tmp[0], param_attr=hidden_para_attr), - full_matrix_projection(input=input_tmp[1], param_attr=lstm_para_attr) - ] - ) - - lstm = lstmemory(name='lstm'+str(i), - input=mix_hidden, - act=ReluActivation(), - gate_act=SigmoidActivation(), - state_act=SigmoidActivation(), - reverse=((i % 2)==1), - bias_attr=std_0, - param_attr=lstm_para_attr) + mix_hidden = mixed_layer( + name='hidden' + str(i), + size=hidden_dim, + bias_attr=std_default, + input=[ + full_matrix_projection( + input=input_tmp[0], param_attr=hidden_para_attr), + full_matrix_projection( + input=input_tmp[1], param_attr=lstm_para_attr) + ]) + + lstm = lstmemory( + name='lstm' + str(i), + 
input=mix_hidden, + act=ReluActivation(), + gate_act=SigmoidActivation(), + state_act=SigmoidActivation(), + reverse=((i % 2) == 1), + bias_attr=std_0, + param_attr=lstm_para_attr) input_tmp = [mix_hidden, lstm] -feature_out = mixed_layer(name='output', - size=label_dict_len, - bias_attr=std_default, - input=[full_matrix_projection(input=input_tmp[0], param_attr=hidden_para_attr), - full_matrix_projection(input=input_tmp[1], param_attr=lstm_para_attr) - ], - ) - - +feature_out = mixed_layer( + name='output', + size=label_dict_len, + bias_attr=std_default, + input=[ + full_matrix_projection( + input=input_tmp[0], param_attr=hidden_para_attr), + full_matrix_projection( + input=input_tmp[1], param_attr=lstm_para_attr) + ], ) if not is_predict: - crf_l = crf_layer( name = 'crf', - size = label_dict_len, - input = feature_out, - label = target, - param_attr=ParameterAttribute(name='crfw',initial_std=default_std, learning_rate=mix_hidden_lr) - - ) - - - crf_dec_l = crf_decoding_layer(name = 'crf_dec_l', - size = label_dict_len, - input = feature_out, - label = target, - param_attr=ParameterAttribute(name='crfw') - ) - + crf_l = crf_layer( + name='crf', + size=label_dict_len, + input=feature_out, + label=target, + param_attr=ParameterAttribute( + name='crfw', initial_std=default_std, learning_rate=mix_hidden_lr)) + + crf_dec_l = crf_decoding_layer( + name='crf_dec_l', + size=label_dict_len, + input=feature_out, + label=target, + param_attr=ParameterAttribute(name='crfw')) eval = sum_evaluator(input=crf_dec_l) - + outputs(crf_l) else: - crf_dec_l = crf_decoding_layer(name = 'crf_dec_l', - size = label_dict_len, - input = feature_out, - param_attr=ParameterAttribute(name='crfw') - ) + crf_dec_l = crf_decoding_layer( + name='crf_dec_l', + size=label_dict_len, + input=feature_out, + param_attr=ParameterAttribute(name='crfw')) outputs(crf_dec_l) - diff --git a/demo/semantic_role_labeling/predict.py b/demo/semantic_role_labeling/predict.py index 15145fafce..372fd090b6 100644 --- a/demo/semantic_role_labeling/predict.py +++ b/demo/semantic_role_labeling/predict.py @@ -26,7 +26,8 @@ UNK_IDX = 0 class Prediction(): - def __init__(self, train_conf, dict_file, model_dir, label_file, predicate_dict_file): + def __init__(self, train_conf, dict_file, model_dir, label_file, + predicate_dict_file): """ train_conf: trainer configure. dict_file: word dictionary file name. 
@@ -35,7 +36,7 @@ class Prediction(): self.dict = {} self.labels = {} - self.predicate_dict={} + self.predicate_dict = {} self.labels_reverse = {} self.load_dict_label(dict_file, label_file, predicate_dict_file) @@ -44,25 +45,18 @@ class Prediction(): len_pred = len(self.predicate_dict) conf = parse_config( - train_conf, - 'dict_len=' + str(len_dict) + - ',label_len=' + str(len_label) + - ',pred_len=' + str(len_pred) + - ',is_predict=True') + train_conf, 'dict_len=' + str(len_dict) + ',label_len=' + + str(len_label) + ',pred_len=' + str(len_pred) + ',is_predict=True') self.network = swig_paddle.GradientMachine.createFromConfigProto( conf.model_config) self.network.loadParameters(model_dir) slots = [ - integer_value_sequence(len_dict), - integer_value_sequence(len_dict), - integer_value_sequence(len_dict), - integer_value_sequence(len_dict), - integer_value_sequence(len_dict), - integer_value_sequence(len_dict), - integer_value_sequence(len_pred), - integer_value_sequence(2) - ] + integer_value_sequence(len_dict), integer_value_sequence(len_dict), + integer_value_sequence(len_dict), integer_value_sequence(len_dict), + integer_value_sequence(len_dict), integer_value_sequence(len_dict), + integer_value_sequence(len_pred), integer_value_sequence(2) + ] self.converter = DataProviderConverter(slots) def load_dict_label(self, dict_file, label_file, predicate_dict_file): @@ -78,6 +72,7 @@ class Prediction(): for line_count, line in enumerate(open(predicate_dict_file, 'r')): self.predicate_dict[line.strip()] = line_count + def get_data(self, data_file): """ Get input data of paddle format. @@ -88,9 +83,10 @@ class Prediction(): ).split('\t') words = sentence.split() sen_len = len(words) - + word_slot = [self.dict.get(w, UNK_IDX) for w in words] - predicate_slot = [self.predicate_dict.get(predicate, UNK_IDX)] * sen_len + predicate_slot = [self.predicate_dict.get(predicate, UNK_IDX) + ] * sen_len ctx_n2_slot = [self.dict.get(ctx_n2, UNK_IDX)] * sen_len ctx_n1_slot = [self.dict.get(ctx_n1, UNK_IDX)] * sen_len ctx_0_slot = [self.dict.get(ctx_0, UNK_IDX)] * sen_len @@ -99,7 +95,7 @@ class Prediction(): marks = mark.split() mark_slot = [int(w) for w in marks] - + yield word_slot, ctx_n2_slot, ctx_n1_slot, \ ctx_0_slot, ctx_p1_slot, ctx_p2_slot, predicate_slot, mark_slot @@ -123,8 +119,9 @@ class Prediction(): def option_parser(): - usage = ("python predict.py -c config -w model_dir " - "-d word dictionary -l label_file -i input_file -p pred_dict_file") + usage = ( + "python predict.py -c config -w model_dir " + "-d word dictionary -l label_file -i input_file -p pred_dict_file") parser = OptionParser(usage="usage: %s [options]" % usage) parser.add_option( "-c", @@ -187,8 +184,9 @@ def main(): output_file = options.output_file swig_paddle.initPaddle("--use_gpu=0") - predict = Prediction(train_conf, dict_file, model_path, label_file, predict_dict_file) - predict.predict(data_file,output_file) + predict = Prediction(train_conf, dict_file, model_path, label_file, + predict_dict_file) + predict.predict(data_file, output_file) if __name__ == '__main__': diff --git a/doc_cn/cluster/k8s/distributed_training_on_kubernetes.md b/doc_cn/cluster/k8s/distributed_training_on_kubernetes.md index d9ed431ec0..64f8fd4b43 100644 --- a/doc_cn/cluster/k8s/distributed_training_on_kubernetes.md +++ b/doc_cn/cluster/k8s/distributed_training_on_kubernetes.md @@ -306,4 +306,4 @@ I1116 09:10:18.019069 50 ParameterClient2.cpp:122] pserver 2 192.168.223.143: I1116 09:10:18.019492 50 ParameterClient2.cpp:122] pserver 3 
192.168.223.143:7165 I1116 09:10:18.019716 50 ParameterClient2.cpp:122] pserver 4 192.168.129.71:7164 I1116 09:10:18.019836 50 ParameterClient2.cpp:122] pserver 5 192.168.129.71:7165 -``` \ No newline at end of file +``` diff --git a/doc_cn/cluster/k8s/job.yaml b/doc_cn/cluster/k8s/job.yaml index 1e0ac464b2..488aad0bed 100644 --- a/doc_cn/cluster/k8s/job.yaml +++ b/doc_cn/cluster/k8s/job.yaml @@ -40,4 +40,4 @@ spec: - name: jobpath mountPath: /home/jobpath restartPolicy: Never - \ No newline at end of file + diff --git a/doc_cn/cluster/k8s/start_paddle.py b/doc_cn/cluster/k8s/start_paddle.py index 6a46161410..df00d82919 100755 --- a/doc_cn/cluster/k8s/start_paddle.py +++ b/doc_cn/cluster/k8s/start_paddle.py @@ -19,7 +19,6 @@ import socket import os import argparse - # configuration for cluster API = "/api/v1/namespaces/" JOBSELECTOR = "labelSelector=job-name=" @@ -145,8 +144,8 @@ def startPaddle(idMap={}, train_args_dict=None): if __name__ == '__main__': - parser = argparse.ArgumentParser(prog="start_paddle.py", - description='simple tool for k8s') + parser = argparse.ArgumentParser( + prog="start_paddle.py", description='simple tool for k8s') args, train_args_list = parser.parse_known_args() train_args = refine_unknown_args(train_args_list) train_args_dict = dict(zip(train_args[:-1:2], train_args[1::2])) diff --git a/doc_cn/demo/sentiment_analysis/index.rst b/doc_cn/demo/sentiment_analysis/index.rst index 82400b2459..9d7972b219 100644 --- a/doc_cn/demo/sentiment_analysis/index.rst +++ b/doc_cn/demo/sentiment_analysis/index.rst @@ -1,8 +1,8 @@ -情感分析教程 -=========================== - -.. toctree:: - :maxdepth: 3 - :glob: - +情感分析教程 +=========================== + +.. toctree:: + :maxdepth: 3 + :glob: + Training Locally \ No newline at end of file diff --git a/doc_theme/static/js/paddle_doc_init.js b/doc_theme/static/js/paddle_doc_init.js index 5c815a8d3a..153ce30745 100644 --- a/doc_theme/static/js/paddle_doc_init.js +++ b/doc_theme/static/js/paddle_doc_init.js @@ -28,4 +28,4 @@ $(document).ready(function(){ $('.doc-menu-vertical').find('li.current').last().addClass('active'); $('.doc-menu-vertical').perfectScrollbar(); -}); \ No newline at end of file +}); diff --git a/paddle/api/GradientMachine.cpp b/paddle/api/GradientMachine.cpp index c1b546dbcb..297eaa19bb 100644 --- a/paddle/api/GradientMachine.cpp +++ b/paddle/api/GradientMachine.cpp @@ -15,8 +15,8 @@ limitations under the License. */ #include "PaddleAPI.h" #include "PaddleAPIPrivate.h" -#include "paddle/gserver/gradientmachines/NeuralNetwork.h" #include "Internal.h" +#include "paddle/gserver/gradientmachines/NeuralNetwork.h" std::vector GradientMachine::defaultParamTypes = { PARAMETER_VALUE, PARAMETER_GRADIENT, PARAMETER_MOMENTUM}; diff --git a/paddle/api/Internal.h b/paddle/api/Internal.h index 4a07880d80..d48dd3a04c 100644 --- a/paddle/api/Internal.h +++ b/paddle/api/Internal.h @@ -16,14 +16,13 @@ limitations under the License. */ #include "PaddleAPI.h" -#include #include +#include template void staticCastVector(std::vector* dest, const std::vector& src) { dest->resize(src.size()); - std::transform(src.begin(), - src.end(), - dest->begin(), - [](T1 t) { return static_cast(t); }); + std::transform(src.begin(), src.end(), dest->begin(), [](T1 t) { + return static_cast(t); + }); } diff --git a/paddle/api/Matrix.cpp b/paddle/api/Matrix.cpp index d4c00e7093..7c375e5cfb 100644 --- a/paddle/api/Matrix.cpp +++ b/paddle/api/Matrix.cpp @@ -12,12 +12,12 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the License for the specific language governing permissions and limitations under the License. */ -#include "PaddleAPI.h" #include "paddle/math/Matrix.h" -#include "paddle/math/SparseMatrix.h" -#include "paddle/math/CpuSparseMatrix.h" -#include #include +#include +#include "PaddleAPI.h" +#include "paddle/math/CpuSparseMatrix.h" +#include "paddle/math/SparseMatrix.h" struct MatrixPrivate { std::shared_ptr mat; diff --git a/paddle/api/PaddleAPI.h b/paddle/api/PaddleAPI.h index f3c80e3b06..84a66719c3 100644 --- a/paddle/api/PaddleAPI.h +++ b/paddle/api/PaddleAPI.h @@ -16,8 +16,8 @@ limitations under the License. */ #include #include -#include #include +#include #include #include "paddle/utils/GlobalConstants.h" #include "paddle/utils/TypeDefs.h" diff --git a/paddle/api/Parameter.cpp b/paddle/api/Parameter.cpp index 742ad0679c..4eed00a84a 100644 --- a/paddle/api/Parameter.cpp +++ b/paddle/api/Parameter.cpp @@ -12,8 +12,8 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ -#include "PaddleAPI.h" #include "paddle/parameter/Parameter.h" +#include "PaddleAPI.h" struct ParameterPrivate { std::shared_ptr sharedPtr; diff --git a/paddle/api/ParameterOptimizer.cpp b/paddle/api/ParameterOptimizer.cpp index 606dccd5ac..21b851dd5e 100644 --- a/paddle/api/ParameterOptimizer.cpp +++ b/paddle/api/ParameterOptimizer.cpp @@ -12,11 +12,11 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ -#include "PaddleAPI.h" -#include "PaddleAPIPrivate.h" #include "paddle/parameter/ParameterOptimizer.h" -#include "Internal.h" #include +#include "Internal.h" +#include "PaddleAPI.h" +#include "PaddleAPIPrivate.h" struct ParameterOptimizerPrivate { std::unique_ptr optimizer; @@ -36,16 +36,13 @@ struct ParameterTraverseCallbackPrivate { size_t sparseId) { std::vector real_vecs; real_vecs.resize(vecs.size()); - std::transform(vecs.begin(), - vecs.end(), - real_vecs.begin(), - [](Vector* v) { - if (v) { - return *(paddle::VectorPtr*)(v->getSharedPtr()); - } else { - return paddle::VectorPtr(); - } - }); + std::transform(vecs.begin(), vecs.end(), real_vecs.begin(), [](Vector* v) { + if (v) { + return *(paddle::VectorPtr*)(v->getSharedPtr()); + } else { + return paddle::VectorPtr(); + } + }); paddle::ParameterConfig& real_conf = *(paddle::ParameterConfig*)(const_cast(conf) diff --git a/paddle/api/SequenceGenerator.cpp b/paddle/api/SequenceGenerator.cpp index 5c65b34f23..8428edc60d 100644 --- a/paddle/api/SequenceGenerator.cpp +++ b/paddle/api/SequenceGenerator.cpp @@ -12,14 +12,14 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ +#include +#include +#include +#include #include "PaddleAPI.h" #include "paddle/gserver/gradientmachines/GradientMachine.h" #include "paddle/parameter/Argument.h" #include "paddle/utils/Flags.h" -#include -#include -#include -#include // used to represent partial sequence struct Path { diff --git a/paddle/api/Trainer.cpp b/paddle/api/Trainer.cpp index 9aeb874bdc..59b47d4b1c 100644 --- a/paddle/api/Trainer.cpp +++ b/paddle/api/Trainer.cpp @@ -16,12 +16,12 @@ limitations under the License. 
*/ #include "PaddleAPIPrivate.h" #include -#include #include +#include +#include "paddle/gserver/gradientmachines/NeuralNetwork.h" #include "paddle/trainer/ParamUtil.h" #include "paddle/trainer/Trainer.h" -#include "paddle/gserver/gradientmachines/NeuralNetwork.h" #include "paddle/trainer/TrainerInternal.h" #include "paddle/utils/Flags.h" diff --git a/paddle/api/Util.cpp b/paddle/api/Util.cpp index 0c9c048099..c3f739568f 100644 --- a/paddle/api/Util.cpp +++ b/paddle/api/Util.cpp @@ -14,16 +14,16 @@ limitations under the License. */ #include "PaddleAPI.h" -#include "paddle/utils/Util.h" -#include "paddle/utils/PythonUtil.h" -#include "paddle/utils/Flags.h" -#include "paddle/utils/Excepts.h" #include "paddle/parameter/Parameter.h" +#include "paddle/utils/Excepts.h" +#include "paddle/utils/Flags.h" +#include "paddle/utils/PythonUtil.h" +#include "paddle/utils/Util.h" #include +#include #include #include -#include void initPaddle(int argc, char** argv) { paddle::initMain(argc, argv); diff --git a/paddle/api/Vector.cpp b/paddle/api/Vector.cpp index 4f3ab7de60..874f2fd044 100644 --- a/paddle/api/Vector.cpp +++ b/paddle/api/Vector.cpp @@ -282,7 +282,7 @@ FloatArray Vector::getData() const { } void Vector::copyFrom(Vector* src) throw(RangeError) { - if (src->m->vec->getSize() != m->vec->getSize()) { + if (src->m->vec->getSize() != m->vec->getSize()) { throw RangeError(); } m->vec->copyFrom(*src->m->vec); diff --git a/paddle/api/test/testMatrix.py b/paddle/api/test/testMatrix.py index f76f84d2e1..37666bdccc 100644 --- a/paddle/api/test/testMatrix.py +++ b/paddle/api/test/testMatrix.py @@ -100,11 +100,12 @@ class TestMatrix(unittest.TestCase): for a, e in zip(gpu_m.getData(), [1.0, 3.23, 3.0, 4.0, 5.0, 6.0]): self.assertAlmostEqual(a, e) - + def test_numpy(self): numpy_mat = np.matrix([[1, 2], [3, 4], [5, 6]], dtype="float32") m = swig_paddle.Matrix.createDenseFromNumpy(numpy_mat) - self.assertEqual((int(m.getHeight()), int(m.getWidth())), numpy_mat.shape) + self.assertEqual((int(m.getHeight()), int(m.getWidth())), + numpy_mat.shape) self.assertEqual(m.isGpu(), swig_paddle.isUsingGpu()) for a, e in zip(m.getData(), [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]): self.assertAlmostEqual(a, e) diff --git a/paddle/api/test/testVector.py b/paddle/api/test/testVector.py index 525ed97edd..1ab095c1d3 100644 --- a/paddle/api/test/testVector.py +++ b/paddle/api/test/testVector.py @@ -26,17 +26,17 @@ class TestIVector(unittest.TestCase): self.assertEqual(m[i], 0) m[i] = i self.assertEqual(m[i], i) - + m = swig_paddle.IVector.createZero(10) self.assertEqual(m.isGpu(), swig_paddle.isUsingGpu()) - self.assertEqual(m.getData(), [0]*10) + self.assertEqual(m.getData(), [0] * 10) def test_create(self): m = swig_paddle.IVector.create(range(10), False) self.assertIsNotNone(m) for i in xrange(10): self.assertEqual(m[i], i) - + m = swig_paddle.IVector.create(range(10)) self.assertEqual(m.isGpu(), swig_paddle.isUsingGpu()) self.assertEqual(m.getData(), range(10)) @@ -69,7 +69,7 @@ class TestIVector(unittest.TestCase): expect_vec = range(0, 10) expect_vec[4] = 7 self.assertEqual(vec.getData(), expect_vec) - + def test_numpy(self): vec = np.array([1, 3, 4, 65, 78, 1, 4], dtype="int32") iv = swig_paddle.IVector.createVectorFromNumpy(vec) @@ -85,10 +85,10 @@ class TestVector(unittest.TestCase): self.assertTrue(util.doubleEqual(v[i], 0)) v[i] = i self.assertTrue(util.doubleEqual(v[i], i)) - + v = swig_paddle.Vector.createZero(10) self.assertEqual(v.isGpu(), swig_paddle.isUsingGpu()) - self.assertEqual(v.getData(), [0]*10) + 
self.assertEqual(v.getData(), [0] * 10) def testCreate(self): v = swig_paddle.Vector.create([x / 100.0 for x in xrange(100)], False) @@ -96,14 +96,13 @@ class TestVector(unittest.TestCase): for i in xrange(len(v)): self.assertTrue(util.doubleEqual(v[i], i / 100.0)) self.assertEqual(100, len(v)) - + v = swig_paddle.Vector.create([x / 100.0 for x in xrange(100)]) self.assertEqual(v.isGpu(), swig_paddle.isUsingGpu()) self.assertEqual(100, len(v)) vdata = v.getData() for i in xrange(len(v)): self.assertTrue(util.doubleEqual(vdata[i], i / 100.0)) - def testCpuNumpy(self): numpy_arr = np.array([1.2, 2.3, 3.4, 4.5], dtype="float32") @@ -128,7 +127,7 @@ class TestVector(unittest.TestCase): for i in xrange(1, len(numpy_3)): util.doubleEqual(numpy_3[i], vec[i]) - + def testNumpy(self): numpy_arr = np.array([1.2, 2.3, 3.4, 4.5], dtype="float32") vec = swig_paddle.Vector.createVectorFromNumpy(numpy_arr) @@ -136,7 +135,6 @@ class TestVector(unittest.TestCase): vecData = vec.getData() for n, v in zip(numpy_arr, vecData): self.assertTrue(util.doubleEqual(n, v)) - def testCopyFromNumpy(self): vec = swig_paddle.Vector.createZero(1, False) diff --git a/paddle/cuda/include/hl_base.h b/paddle/cuda/include/hl_base.h index 0b9dfc6117..84c5f2d5c9 100644 --- a/paddle/cuda/include/hl_base.h +++ b/paddle/cuda/include/hl_base.h @@ -223,9 +223,9 @@ typedef struct { #ifdef __NVCC__ -#include "paddle/utils/Logging.h" -#include "hl_cuda.h" #include "cuda_runtime.h" +#include "hl_cuda.h" +#include "paddle/utils/Logging.h" extern __thread bool g_sync_flag; extern __thread cudaStream_t default_stream; diff --git a/paddle/cuda/include/hl_dso_loader.h b/paddle/cuda/include/hl_dso_loader.h index 9ddf0e61ee..20c13f21e6 100644 --- a/paddle/cuda/include/hl_dso_loader.h +++ b/paddle/cuda/include/hl_dso_loader.h @@ -16,8 +16,8 @@ limitations under the License. */ #define HL_DSO_LOADER_H_ #include -#include #include +#include #include "hl_base.h" /** diff --git a/paddle/cuda/include/hl_gpu.h b/paddle/cuda/include/hl_gpu.h index aad0450c8c..ede2670882 100644 --- a/paddle/cuda/include/hl_gpu.h +++ b/paddle/cuda/include/hl_gpu.h @@ -15,28 +15,28 @@ limitations under the License. */ #ifndef HL_GPU_H_ #define HL_GPU_H_ +#include "hl_aggregate.h" #include "hl_base.h" +#include "hl_cnn.h" #include "hl_cuda.h" #include "hl_cuda_cublas.h" #include "hl_cuda_cudnn.h" -#include "hl_matrix.h" -#include "hl_aggregate.h" -#include "hl_cnn.h" -#include "hl_sparse.h" #include "hl_lstm.h" +#include "hl_matrix.h" #include "hl_sequence.h" +#include "hl_sparse.h" #include "hl_warpctc_wrap.h" #ifdef HPPL_STUB_FUNC -#include "stub/hl_cuda_stub.h" -#include "stub/hl_cuda_cublas_stub.h" -#include "stub/hl_cuda_cudnn_stub.h" -#include "stub/hl_matrix_stub.h" #include "stub/hl_aggregate_stub.h" #include "stub/hl_cnn_stub.h" -#include "stub/hl_sparse_stub.h" +#include "stub/hl_cuda_cublas_stub.h" +#include "stub/hl_cuda_cudnn_stub.h" +#include "stub/hl_cuda_stub.h" #include "stub/hl_lstm_stub.h" +#include "stub/hl_matrix_stub.h" #include "stub/hl_sequence_stub.h" +#include "stub/hl_sparse_stub.h" #endif #endif /* HL_GPU_H_ */ diff --git a/paddle/cuda/src/hl_cuda_cublas.cc b/paddle/cuda/src/hl_cuda_cublas.cc index 7cede8c63c..182e8ab218 100644 --- a/paddle/cuda/src/hl_cuda_cublas.cc +++ b/paddle/cuda/src/hl_cuda_cublas.cc @@ -12,12 +12,12 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 
*/ +#include "hl_cuda_cublas.h" #include #include #include "hl_cuda.h" -#include "hl_cuda_cublas.h" -#include "hl_thread.ph" #include "hl_dso_loader.h" +#include "hl_thread.ph" #include "paddle/utils/Logging.h" namespace dynload { diff --git a/paddle/cuda/src/hl_cuda_cudnn.cc b/paddle/cuda/src/hl_cuda_cudnn.cc index 9c9b8906c2..7111224d59 100644 --- a/paddle/cuda/src/hl_cuda_cudnn.cc +++ b/paddle/cuda/src/hl_cuda_cudnn.cc @@ -12,14 +12,14 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ +#include "hl_cuda_cudnn.h" #include #include -#include "hl_cuda_cudnn.h" #include "hl_cuda_cudnn.ph" -#include "hl_thread.ph" #include "hl_dso_loader.h" -#include "paddle/utils/Logging.h" +#include "hl_thread.ph" #include "paddle/utils/CommandLineParser.h" +#include "paddle/utils/Logging.h" P_DEFINE_int32(cudnn_conv_workspace_limit_in_mb, 4096, diff --git a/paddle/cuda/src/hl_cudart_wrap.cc b/paddle/cuda/src/hl_cudart_wrap.cc index a3ac750b53..ecc03a729d 100644 --- a/paddle/cuda/src/hl_cudart_wrap.cc +++ b/paddle/cuda/src/hl_cudart_wrap.cc @@ -14,8 +14,8 @@ limitations under the License. */ #ifdef PADDLE_USE_DSO -#include #include +#include #include "hl_dso_loader.h" /** diff --git a/paddle/cuda/src/hl_time.cc b/paddle/cuda/src/hl_time.cc index 3005065899..2bb69d25e5 100644 --- a/paddle/cuda/src/hl_time.cc +++ b/paddle/cuda/src/hl_time.cc @@ -12,10 +12,10 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ -#include +#include "hl_time.h" #include +#include #include -#include "hl_time.h" using std::chrono::high_resolution_clock; diff --git a/paddle/cuda/src/hl_warpctc_wrap.cc b/paddle/cuda/src/hl_warpctc_wrap.cc index 619b90120f..9ae8bc0f22 100644 --- a/paddle/cuda/src/hl_warpctc_wrap.cc +++ b/paddle/cuda/src/hl_warpctc_wrap.cc @@ -12,8 +12,8 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ -#include #include "hl_warpctc_wrap.h" +#include #include "hl_dso_loader.h" #include "paddle/utils/Logging.h" diff --git a/paddle/gserver/activations/ActivationFunction.cpp b/paddle/gserver/activations/ActivationFunction.cpp index f1d09c568d..f8c4bcac2f 100644 --- a/paddle/gserver/activations/ActivationFunction.cpp +++ b/paddle/gserver/activations/ActivationFunction.cpp @@ -15,13 +15,13 @@ limitations under the License. */ #include "ActivationFunction.h" #include -#include #include -#include +#include #include #include -#include "paddle/utils/ClassRegistrar.h" +#include #include "paddle/parameter/Argument.h" +#include "paddle/utils/ClassRegistrar.h" #include "paddle/utils/Logging.h" diff --git a/paddle/gserver/dataproviders/DataProvider.cpp b/paddle/gserver/dataproviders/DataProvider.cpp index 55ca62543a..0478256f9c 100644 --- a/paddle/gserver/dataproviders/DataProvider.cpp +++ b/paddle/gserver/dataproviders/DataProvider.cpp @@ -14,12 +14,12 @@ limitations under the License. 
*/ #include "DataProvider.h" -#include "paddle/utils/Util.h" -#include "paddle/utils/StringUtil.h" -#include "paddle/utils/Logging.h" -#include #include +#include #include "ProtoDataProvider.h" +#include "paddle/utils/Logging.h" +#include "paddle/utils/StringUtil.h" +#include "paddle/utils/Util.h" namespace paddle { diff --git a/paddle/gserver/dataproviders/DataProvider.h b/paddle/gserver/dataproviders/DataProvider.h index 5b854936c6..9b7f7e36ce 100644 --- a/paddle/gserver/dataproviders/DataProvider.h +++ b/paddle/gserver/dataproviders/DataProvider.h @@ -14,28 +14,28 @@ limitations under the License. */ #pragma once -#include -#include -#include -#include -#include #include -#include -#include #include +#include +#include +#include +#include +#include +#include +#include +#include "DataConfig.pb.h" +#include "paddle/math/Matrix.h" +#include "paddle/math/SparseMatrix.h" +#include "paddle/math/Vector.h" +#include "paddle/parameter/Argument.h" +#include "paddle/utils/ClassRegistrar.h" +#include "paddle/utils/Locks.h" #include "paddle/utils/Logging.h" #include "paddle/utils/Queue.h" -#include "paddle/utils/Locks.h" #include "paddle/utils/ThreadLocal.h" #include "paddle/utils/TypeDefs.h" -#include "paddle/math/Matrix.h" -#include "paddle/math/SparseMatrix.h" #include "paddle/utils/Util.h" -#include "paddle/math/Vector.h" -#include "DataConfig.pb.h" -#include "paddle/utils/ClassRegistrar.h" -#include "paddle/parameter/Argument.h" namespace paddle { /** diff --git a/paddle/gserver/dataproviders/MultiDataProvider.cpp b/paddle/gserver/dataproviders/MultiDataProvider.cpp index e1fc4c9365..46fe053768 100644 --- a/paddle/gserver/dataproviders/MultiDataProvider.cpp +++ b/paddle/gserver/dataproviders/MultiDataProvider.cpp @@ -12,10 +12,10 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ -#include "paddle/utils/Util.h" #include "MultiDataProvider.h" -#include "paddle/utils/Logging.h" #include +#include "paddle/utils/Logging.h" +#include "paddle/utils/Util.h" namespace paddle { diff --git a/paddle/gserver/dataproviders/ProtoDataProvider.cpp b/paddle/gserver/dataproviders/ProtoDataProvider.cpp index 6a0cb5ef63..d16ecca2d9 100644 --- a/paddle/gserver/dataproviders/ProtoDataProvider.cpp +++ b/paddle/gserver/dataproviders/ProtoDataProvider.cpp @@ -13,14 +13,14 @@ See the License for the specific language governing permissions and limitations under the License. 
*/ #include "ProtoDataProvider.h" -#include "paddle/utils/Util.h" -#include "paddle/utils/StringUtil.h" #include #include #include +#include "paddle/utils/StringUtil.h" +#include "paddle/utils/Util.h" -#include "paddle/utils/Logging.h" #include "DataProviderGroup.h" +#include "paddle/utils/Logging.h" P_DEFINE_double(memory_threshold_on_load_data, 1.0, @@ -562,16 +562,16 @@ int64_t ProtoDataProvider::getNextBatchInternal(int64_t size, auto mat = cpuArguments[slot].value; mat->resize(size, dim); if (std::dynamic_pointer_cast(mat)) { - std::dynamic_pointer_cast(mat) - ->copyFrom(dataPos.data(), - slots_[slot].indices.data(), - slots_[slot].sparseNonValueData.data(), - HPPL_STREAM_1); + std::dynamic_pointer_cast(mat)->copyFrom( + dataPos.data(), + slots_[slot].indices.data(), + slots_[slot].sparseNonValueData.data(), + HPPL_STREAM_1); } else if (std::dynamic_pointer_cast(mat)) { - std::dynamic_pointer_cast(mat) - ->copyFrom(dataPos.data(), - slots_[slot].indices.data(), - slots_[slot].sparseNonValueData.data()); + std::dynamic_pointer_cast(mat)->copyFrom( + dataPos.data(), + slots_[slot].indices.data(), + slots_[slot].sparseNonValueData.data()); } else { LOG(FATAL) << "Not Supported"; } @@ -598,16 +598,16 @@ int64_t ProtoDataProvider::getNextBatchInternal(int64_t size, auto mat = cpuArguments[slot].value; mat->resize(size, dim); if (std::dynamic_pointer_cast(mat)) { - std::dynamic_pointer_cast(mat) - ->copyFrom(dataPos.data(), - slots_[slot].indices.data(), - slots_[slot].sparseFloatValueData.data(), - HPPL_STREAM_1); + std::dynamic_pointer_cast(mat)->copyFrom( + dataPos.data(), + slots_[slot].indices.data(), + slots_[slot].sparseFloatValueData.data(), + HPPL_STREAM_1); } else if (std::dynamic_pointer_cast(mat)) { - std::dynamic_pointer_cast(mat) - ->copyFrom(dataPos.data(), - slots_[slot].indices.data(), - slots_[slot].sparseFloatValueData.data()); + std::dynamic_pointer_cast(mat)->copyFrom( + dataPos.data(), + slots_[slot].indices.data(), + slots_[slot].sparseFloatValueData.data()); } else { LOG(FATAL) << "Not Supported"; } diff --git a/paddle/gserver/dataproviders/ProtoDataProvider.h b/paddle/gserver/dataproviders/ProtoDataProvider.h index 9ec5cb97c0..7dd45e0622 100644 --- a/paddle/gserver/dataproviders/ProtoDataProvider.h +++ b/paddle/gserver/dataproviders/ProtoDataProvider.h @@ -16,8 +16,8 @@ limitations under the License. */ #include -#include "paddle/utils/Stat.h" #include "DataFormat.pb.h" +#include "paddle/utils/Stat.h" #include "DataProvider.h" #include "ProtoReader.h" diff --git a/paddle/gserver/dataproviders/ProtoReader.h b/paddle/gserver/dataproviders/ProtoReader.h index 6708e7cde7..4e6f58a529 100644 --- a/paddle/gserver/dataproviders/ProtoReader.h +++ b/paddle/gserver/dataproviders/ProtoReader.h @@ -16,10 +16,10 @@ limitations under the License. */ #include -#include #include -#include #include +#include +#include namespace paddle { diff --git a/paddle/gserver/dataproviders/PyDataProvider.cpp b/paddle/gserver/dataproviders/PyDataProvider.cpp index f5dcbfcf34..5bdd55309c 100644 --- a/paddle/gserver/dataproviders/PyDataProvider.cpp +++ b/paddle/gserver/dataproviders/PyDataProvider.cpp @@ -13,10 +13,10 @@ See the License for the specific language governing permissions and limitations under the License. 
*/ #include "PyDataProvider.h" -#include "paddle/utils/PythonUtil.h" #include -#include "paddle/utils/Util.h" #include "paddle/utils/Excepts.h" +#include "paddle/utils/PythonUtil.h" +#include "paddle/utils/Util.h" namespace paddle { @@ -316,16 +316,16 @@ void PyDataProvider::handleSparseNonValueSlot( auto mat = cpuArguments[slotIndex].value; mat->resize(slot.sampleNum, dim, slot.sampleNum, NO_VALUE, SPARSE_CSR); if (std::dynamic_pointer_cast(mat)) { - std::dynamic_pointer_cast(mat) - ->copyFrom(slot.sampleSequenceIdVec.data(), - slot.indices.data(), - slot.sparseNonValueData.data(), - HPPL_STREAM_1); + std::dynamic_pointer_cast(mat)->copyFrom( + slot.sampleSequenceIdVec.data(), + slot.indices.data(), + slot.sparseNonValueData.data(), + HPPL_STREAM_1); } else if (std::dynamic_pointer_cast(mat)) { - std::dynamic_pointer_cast(mat) - ->copyFrom(slot.sampleSequenceIdVec.data(), - slot.indices.data(), - slot.sparseNonValueData.data()); + std::dynamic_pointer_cast(mat)->copyFrom( + slot.sampleSequenceIdVec.data(), + slot.indices.data(), + slot.sparseNonValueData.data()); } else { LOG(FATAL) << "Not Supported"; } @@ -347,16 +347,16 @@ void PyDataProvider::handleSparseValueSlot( auto mat = cpuArguments[slotIndex].value; mat->resize(slot.sampleNum, dim, slot.sampleNum, FLOAT_VALUE, SPARSE_CSR); if (std::dynamic_pointer_cast(mat)) { - std::dynamic_pointer_cast(mat) - ->copyFrom(slot.sampleSequenceIdVec.data(), - slot.indices.data(), - slot.sparseFloatValueData.data(), - HPPL_STREAM_DEFAULT); + std::dynamic_pointer_cast(mat)->copyFrom( + slot.sampleSequenceIdVec.data(), + slot.indices.data(), + slot.sparseFloatValueData.data(), + HPPL_STREAM_DEFAULT); } else if (std::dynamic_pointer_cast(mat)) { - std::dynamic_pointer_cast(mat) - ->copyFrom(slot.sampleSequenceIdVec.data(), - slot.indices.data(), - slot.sparseFloatValueData.data()); + std::dynamic_pointer_cast(mat)->copyFrom( + slot.sampleSequenceIdVec.data(), + slot.indices.data(), + slot.sparseFloatValueData.data()); } else { LOG(FATAL) << "Not Supported"; } diff --git a/paddle/gserver/dataproviders/PyDataProvider2.cpp b/paddle/gserver/dataproviders/PyDataProvider2.cpp index 8b04a03f6d..460efc5adc 100644 --- a/paddle/gserver/dataproviders/PyDataProvider2.cpp +++ b/paddle/gserver/dataproviders/PyDataProvider2.cpp @@ -15,18 +15,18 @@ limitations under the License. */ #ifndef PADDLE_NO_PYTHON #include +#include #include #include -#include #include -#include +#include #define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION #include #include "DataProvider.h" -#include "paddle/utils/PythonUtil.h" #include "paddle/utils/Locks.h" +#include "paddle/utils/PythonUtil.h" #include "paddle/utils/Stat.h" namespace paddle { @@ -400,10 +400,9 @@ private: if (this->loadThread_) { // wait poolActualSize < poolSize; std::unique_lock l(mtx_); - pushCV_.wait(l, - [this, additionalBatchSize] { - return this->poolActualSize_ < poolSize_; - }); + pushCV_.wait(l, [this, additionalBatchSize] { + return this->poolActualSize_ < poolSize_; + }); } { @@ -529,12 +528,10 @@ public: // but, loading from cache, cache object should ensure // data pool ready. 
std::unique_lock l(mtx_); - pullCV_.wait(l, - [this, &size] { - return this->poolActualSize_ >= - std::max(size, this->minPoolSize_) || - callingContexts_.empty(); - }); + pullCV_.wait(l, [this, &size] { + return this->poolActualSize_ >= std::max(size, this->minPoolSize_) || + callingContexts_.empty(); + }); if (unittest::OnPoolFilled) { (*unittest::OnPoolFilled)(this->poolActualSize_); diff --git a/paddle/gserver/evaluators/Evaluator.cpp b/paddle/gserver/evaluators/Evaluator.cpp index aa6dc7cb86..7556d21e01 100644 --- a/paddle/gserver/evaluators/Evaluator.cpp +++ b/paddle/gserver/evaluators/Evaluator.cpp @@ -12,8 +12,8 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ -#include "paddle/utils/Stat.h" #include "paddle/gserver/evaluators/Evaluator.h" +#include "paddle/utils/Stat.h" #include "paddle/gserver/gradientmachines/NeuralNetwork.h" @@ -842,9 +842,9 @@ void PnpairEvaluator::calc(std::vector& predictArray) { auto start = predictArray.begin(); while (start != predictArray.end()) { auto end = std::find_if( - start + 1, - predictArray.end(), - [=](const PredictionResult& x) { return x.queryid != start->queryid; }); + start + 1, predictArray.end(), [=](const PredictionResult& x) { + return x.queryid != start->queryid; + }); CHECK(end != start); stat(start - predictArray.begin(), end - predictArray.begin(), diff --git a/paddle/gserver/evaluators/Evaluator.h b/paddle/gserver/evaluators/Evaluator.h index a26c650c38..5770847309 100644 --- a/paddle/gserver/evaluators/Evaluator.h +++ b/paddle/gserver/evaluators/Evaluator.h @@ -14,11 +14,11 @@ limitations under the License. */ #pragma once -#include "paddle/pserver/ParameterClient2.h" -#include "paddle/utils/ClassRegistrar.h" +#include #include "ModelConfig.pb.h" #include "paddle/parameter/Argument.h" -#include +#include "paddle/pserver/ParameterClient2.h" +#include "paddle/utils/ClassRegistrar.h" namespace paddle { diff --git a/paddle/gserver/gradientmachines/GradientMachine.cpp b/paddle/gserver/gradientmachines/GradientMachine.cpp index 6adee05dbe..36ca05b919 100644 --- a/paddle/gserver/gradientmachines/GradientMachine.cpp +++ b/paddle/gserver/gradientmachines/GradientMachine.cpp @@ -14,16 +14,16 @@ limitations under the License. */ #include "GradientMachine.h" -#include "paddle/utils/Logging.h" #include +#include "paddle/utils/Logging.h" -#include "hl_gpu.h" -#include "NeuralNetwork.h" -#include "ParallelNeuralNetwork.h" +#include "GradientMachineMode.h" #include "MultiGradientMachine.h" -#include "NeuralNetwork.h" #include "MultiNetwork.h" -#include "GradientMachineMode.h" +#include "NeuralNetwork.h" +#include "NeuralNetwork.h" +#include "ParallelNeuralNetwork.h" +#include "hl_gpu.h" namespace paddle { diff --git a/paddle/gserver/gradientmachines/GradientMachine.h b/paddle/gserver/gradientmachines/GradientMachine.h index f3e44a9e39..579eca71d4 100644 --- a/paddle/gserver/gradientmachines/GradientMachine.h +++ b/paddle/gserver/gradientmachines/GradientMachine.h @@ -17,15 +17,15 @@ limitations under the License. 
*/ #include #include -#include "paddle/math/Matrix.h" -#include "paddle/parameter/Parameter.h" -#include "paddle/parameter/ParameterUpdaterBase.h" -#include "paddle/utils/Thread.h" -#include "TrainerConfig.pb.h" #include "ModelConfig.pb.h" +#include "TrainerConfig.pb.h" #include "paddle/gserver/dataproviders/DataProvider.h" #include "paddle/gserver/evaluators/Evaluator.h" #include "paddle/gserver/layers/Layer.h" +#include "paddle/math/Matrix.h" +#include "paddle/parameter/Parameter.h" +#include "paddle/parameter/ParameterUpdaterBase.h" +#include "paddle/utils/Thread.h" namespace paddle { /** diff --git a/paddle/gserver/gradientmachines/MultiGradientMachine.h b/paddle/gserver/gradientmachines/MultiGradientMachine.h index fe6d96e8ea..5f9855c4be 100644 --- a/paddle/gserver/gradientmachines/MultiGradientMachine.h +++ b/paddle/gserver/gradientmachines/MultiGradientMachine.h @@ -18,9 +18,9 @@ limitations under the License. */ #include "GradientMachine.h" -#include "paddle/utils/Queue.h" -#include "paddle/utils/Locks.h" #include "hl_gpu.h" +#include "paddle/utils/Locks.h" +#include "paddle/utils/Queue.h" namespace paddle { diff --git a/paddle/gserver/gradientmachines/MultiNetwork.cpp b/paddle/gserver/gradientmachines/MultiNetwork.cpp index 61af82fcb7..6eb3d8db96 100644 --- a/paddle/gserver/gradientmachines/MultiNetwork.cpp +++ b/paddle/gserver/gradientmachines/MultiNetwork.cpp @@ -12,9 +12,9 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ +#include #include "paddle/utils/Stat.h" #include "paddle/utils/Util.h" -#include #include "MultiNetwork.h" diff --git a/paddle/gserver/gradientmachines/NeuralNetwork.cpp b/paddle/gserver/gradientmachines/NeuralNetwork.cpp index dbcb97b42b..ee36a87b9d 100644 --- a/paddle/gserver/gradientmachines/NeuralNetwork.cpp +++ b/paddle/gserver/gradientmachines/NeuralNetwork.cpp @@ -14,15 +14,15 @@ limitations under the License. */ #include "paddle/utils/Util.h" -#include "paddle/utils/Logging.h" #include "paddle/utils/CustomStackTrace.h" +#include "paddle/utils/Logging.h" -#include "paddle/utils/Stat.h" -#include "hl_gpu.h" +#include "MultiNetwork.h" #include "NeuralNetwork.h" #include "RecurrentGradientMachine.h" -#include "MultiNetwork.h" +#include "hl_gpu.h" #include "paddle/gserver/layers/AgentLayer.h" +#include "paddle/utils/Stat.h" namespace paddle { void parameterInitNN(int paramId, diff --git a/paddle/gserver/gradientmachines/NeuralNetwork.h b/paddle/gserver/gradientmachines/NeuralNetwork.h index fd885b436a..384ca88f47 100644 --- a/paddle/gserver/gradientmachines/NeuralNetwork.h +++ b/paddle/gserver/gradientmachines/NeuralNetwork.h @@ -14,18 +14,18 @@ limitations under the License. 
*/ #pragma once -#include -#include #include +#include +#include -#include "paddle/utils/ClassRegistrar.h" -#include "paddle/parameter/Parameter.h" #include "ModelConfig.pb.h" +#include "paddle/gserver/dataproviders/DataProvider.h" #include "paddle/gserver/gradientmachines/GradientMachine.h" #include "paddle/gserver/layers/CostLayer.h" #include "paddle/gserver/layers/DataLayer.h" -#include "paddle/gserver/dataproviders/DataProvider.h" #include "paddle/gserver/layers/Layer.h" +#include "paddle/parameter/Parameter.h" +#include "paddle/utils/ClassRegistrar.h" namespace paddle { /* @@ -57,14 +57,13 @@ void parameterInitNN(int paramId, class NeuralNetwork : public GradientMachine { public: - virtual void init( - const ModelConfig& config, - ParamInitCallback callback = nullptr, - const std::vector& - parameterTypes = std::vector{PARAMETER_VALUE, - PARAMETER_GRADIENT, - PARAMETER_MOMENTUM}, - bool useGpu = FLAGS_use_gpu); + virtual void init(const ModelConfig& config, + ParamInitCallback callback = nullptr, + const std::vector& parameterTypes = + std::vector{PARAMETER_VALUE, + PARAMETER_GRADIENT, + PARAMETER_MOMENTUM}, + bool useGpu = FLAGS_use_gpu); /** * Connect two submodels and diff --git a/paddle/gserver/gradientmachines/ParallelNeuralNetwork.h b/paddle/gserver/gradientmachines/ParallelNeuralNetwork.h index 934a7cfc7b..8f445b1ded 100644 --- a/paddle/gserver/gradientmachines/ParallelNeuralNetwork.h +++ b/paddle/gserver/gradientmachines/ParallelNeuralNetwork.h @@ -37,14 +37,13 @@ public: NeuralNetwork *rootNetwork = nullptr) : NeuralNetwork(subModelName, rootNetwork) {} - virtual void init( - const ModelConfig &config, - ParamInitCallback callback = nullptr, - const std::vector - ¶meterTypes = std::vector{PARAMETER_VALUE, - PARAMETER_GRADIENT, - PARAMETER_MOMENTUM}, - bool useGpu = FLAGS_use_gpu); + virtual void init(const ModelConfig &config, + ParamInitCallback callback = nullptr, + const std::vector ¶meterTypes = + std::vector{PARAMETER_VALUE, + PARAMETER_GRADIENT, + PARAMETER_MOMENTUM}, + bool useGpu = FLAGS_use_gpu); virtual void forward(const std::vector &inArgs, std::vector *outArgs, diff --git a/paddle/gserver/gradientmachines/RecurrentGradientMachine.cpp b/paddle/gserver/gradientmachines/RecurrentGradientMachine.cpp index 4fb1a44ab7..ee1c92bdf5 100644 --- a/paddle/gserver/gradientmachines/RecurrentGradientMachine.cpp +++ b/paddle/gserver/gradientmachines/RecurrentGradientMachine.cpp @@ -12,17 +12,17 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 
*/ -#include "paddle/utils/Stat.h" -#include "paddle/utils/Util.h" -#include "paddle/utils/Flags.h" +#include "RecurrentGradientMachine.h" +#include #include +#include #include -#include #include -#include -#include "RecurrentGradientMachine.h" #include "NeuralNetwork.h" #include "paddle/gserver/layers/AgentLayer.h" +#include "paddle/utils/Flags.h" +#include "paddle/utils/Stat.h" +#include "paddle/utils/Util.h" P_DEFINE_string(diy_beam_search_prob_so, "", "the diy beam search cost so"); @@ -78,20 +78,22 @@ static inline SymbolType loadDiySymbol(const char* symbolName) { return reinterpret_cast(sym); } -static InitFunction __init__diy_prob_method([] { - std::string soName = FLAGS_diy_beam_search_prob_so; - if (!soName.empty()) { - gDiyProbHandle = dlopen(soName.c_str(), RTLD_LAZY); - CHECK(gDiyProbHandle) << "Cannot Open DIY Prob So " << soName; - atexit(exit_diy_prob); - gDiyProbMethod = - loadDiySymbol(DIY_CALC_PROB_SYMBOL_NAME); - gDiyProbStart = - loadDiySymbol(DIY_START_CALC_PROB_SYMBOL_NAME); - gDiyProbStop = - loadDiySymbol(DIY_FINISH_CALC_PROB_SYMBOL_NAME); - } -}, std::numeric_limits::max()); +static InitFunction __init__diy_prob_method( + [] { + std::string soName = FLAGS_diy_beam_search_prob_so; + if (!soName.empty()) { + gDiyProbHandle = dlopen(soName.c_str(), RTLD_LAZY); + CHECK(gDiyProbHandle) << "Cannot Open DIY Prob So " << soName; + atexit(exit_diy_prob); + gDiyProbMethod = + loadDiySymbol(DIY_CALC_PROB_SYMBOL_NAME); + gDiyProbStart = loadDiySymbol( + DIY_START_CALC_PROB_SYMBOL_NAME); + gDiyProbStop = loadDiySymbol( + DIY_FINISH_CALC_PROB_SYMBOL_NAME); + } + }, + std::numeric_limits::max()); class BeamSearchControlCallbacks { public: @@ -1281,10 +1283,9 @@ void RecurrentGradientMachine::beamSearch(size_t batchSize) { std::vector*> prefixes; prefixes.resize(paths.size()); std::transform( - paths.begin(), - paths.end(), - prefixes.begin(), - [](const Path& p) { return const_cast*>(&p.ids); }); + paths.begin(), paths.end(), prefixes.begin(), [](const Path& p) { + return const_cast*>(&p.ids); + }); beamSearchCtrlCallbacks_->beamSearchCandidateAdjust( prefixes, frames_[machineCur].get(), i); } diff --git a/paddle/gserver/gradientmachines/RecurrentGradientMachine.h b/paddle/gserver/gradientmachines/RecurrentGradientMachine.h index 369c8c3d98..db7d8aff6d 100644 --- a/paddle/gserver/gradientmachines/RecurrentGradientMachine.h +++ b/paddle/gserver/gradientmachines/RecurrentGradientMachine.h @@ -14,9 +14,9 @@ limitations under the License. */ #pragma once +#include #include "GradientMachine.h" #include "NeuralNetwork.h" -#include #include "paddle/utils/Locks.h" diff --git a/paddle/gserver/layers/BatchNormBaseLayer.cpp b/paddle/gserver/layers/BatchNormBaseLayer.cpp index 51463f1118..1ceaaaa206 100644 --- a/paddle/gserver/layers/BatchNormBaseLayer.cpp +++ b/paddle/gserver/layers/BatchNormBaseLayer.cpp @@ -12,10 +12,10 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 
*/ -#include "paddle/utils/Stat.h" -#include "Layer.h" #include "BatchNormBaseLayer.h" #include "BatchNormalizationLayer.h" +#include "Layer.h" +#include "paddle/utils/Stat.h" #ifndef PADDLE_ONLY_CPU #include "CudnnBatchNormLayer.h" #endif diff --git a/paddle/gserver/layers/BatchNormBaseLayer.h b/paddle/gserver/layers/BatchNormBaseLayer.h index f5a555a6d0..75bda95de1 100644 --- a/paddle/gserver/layers/BatchNormBaseLayer.h +++ b/paddle/gserver/layers/BatchNormBaseLayer.h @@ -14,8 +14,8 @@ limitations under the License. */ #pragma once -#include "paddle/utils/Stat.h" #include "Layer.h" +#include "paddle/utils/Stat.h" namespace paddle { diff --git a/paddle/gserver/layers/BatchNormalizationLayer.h b/paddle/gserver/layers/BatchNormalizationLayer.h index 56be473568..052c207732 100644 --- a/paddle/gserver/layers/BatchNormalizationLayer.h +++ b/paddle/gserver/layers/BatchNormalizationLayer.h @@ -14,8 +14,8 @@ limitations under the License. */ #pragma once -#include "Layer.h" #include "BatchNormBaseLayer.h" +#include "Layer.h" namespace paddle { diff --git a/paddle/gserver/layers/ConcatenateLayer.cpp b/paddle/gserver/layers/ConcatenateLayer.cpp index f6b3d86b8c..d19adace7d 100644 --- a/paddle/gserver/layers/ConcatenateLayer.cpp +++ b/paddle/gserver/layers/ConcatenateLayer.cpp @@ -12,9 +12,9 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ -#include "paddle/utils/Stat.h" #include "Layer.h" #include "Projection.h" +#include "paddle/utils/Stat.h" namespace paddle { diff --git a/paddle/gserver/layers/ContextProjection.cpp b/paddle/gserver/layers/ContextProjection.cpp index 6080aa51b9..7ac56e3a2a 100644 --- a/paddle/gserver/layers/ContextProjection.cpp +++ b/paddle/gserver/layers/ContextProjection.cpp @@ -12,8 +12,8 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ -#include "paddle/utils/Stat.h" #include "ContextProjection.h" +#include "paddle/utils/Stat.h" namespace paddle { diff --git a/paddle/gserver/layers/ConvBaseLayer.cpp b/paddle/gserver/layers/ConvBaseLayer.cpp index 473ca24a94..7b234dc2a6 100644 --- a/paddle/gserver/layers/ConvBaseLayer.cpp +++ b/paddle/gserver/layers/ConvBaseLayer.cpp @@ -12,9 +12,9 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ -#include "paddle/utils/Logging.h" #include "ConvBaseLayer.h" #include "paddle/math/MathUtils.h" +#include "paddle/utils/Logging.h" namespace paddle { bool ConvBaseLayer::init(const LayerMap& layerMap, diff --git a/paddle/gserver/layers/ConvOperator.cpp b/paddle/gserver/layers/ConvOperator.cpp index 3ede98ba4b..f943410dee 100644 --- a/paddle/gserver/layers/ConvOperator.cpp +++ b/paddle/gserver/layers/ConvOperator.cpp @@ -12,9 +12,9 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 
*/ -#include "paddle/math/Matrix.h" -#include "paddle/math/MathUtils.h" #include "Operator.h" +#include "paddle/math/MathUtils.h" +#include "paddle/math/Matrix.h" namespace paddle { diff --git a/paddle/gserver/layers/ConvProjection.cpp b/paddle/gserver/layers/ConvProjection.cpp index e72dc37ec8..aa634b3287 100644 --- a/paddle/gserver/layers/ConvProjection.cpp +++ b/paddle/gserver/layers/ConvProjection.cpp @@ -12,8 +12,8 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ -#include "paddle/utils/Stat.h" #include "ConvProjection.h" +#include "paddle/utils/Stat.h" namespace paddle { diff --git a/paddle/gserver/layers/ConvShiftLayer.cpp b/paddle/gserver/layers/ConvShiftLayer.cpp index 527d885d86..9bfb1ab7a4 100644 --- a/paddle/gserver/layers/ConvShiftLayer.cpp +++ b/paddle/gserver/layers/ConvShiftLayer.cpp @@ -12,9 +12,9 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ -#include "paddle/utils/Logging.h" #include "Layer.h" #include "paddle/math/Matrix.h" +#include "paddle/utils/Logging.h" #include "paddle/utils/Stat.h" namespace paddle { diff --git a/paddle/gserver/layers/ConvexCombinationLayer.cpp b/paddle/gserver/layers/ConvexCombinationLayer.cpp index 57ff95fe37..3f4d77a2fe 100644 --- a/paddle/gserver/layers/ConvexCombinationLayer.cpp +++ b/paddle/gserver/layers/ConvexCombinationLayer.cpp @@ -12,9 +12,9 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ -#include "paddle/utils/Logging.h" #include "Layer.h" #include "paddle/math/Matrix.h" +#include "paddle/utils/Logging.h" #include "paddle/utils/Stat.h" namespace paddle { diff --git a/paddle/gserver/layers/CosSimVecMatLayer.cpp b/paddle/gserver/layers/CosSimVecMatLayer.cpp index e8a7f671ee..ad490b0b8c 100644 --- a/paddle/gserver/layers/CosSimVecMatLayer.cpp +++ b/paddle/gserver/layers/CosSimVecMatLayer.cpp @@ -12,9 +12,9 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ -#include "paddle/utils/Logging.h" #include "Layer.h" #include "paddle/math/Matrix.h" +#include "paddle/utils/Logging.h" #include "paddle/utils/Stat.h" namespace paddle { diff --git a/paddle/gserver/layers/CostLayer.cpp b/paddle/gserver/layers/CostLayer.cpp index 90cd473c42..7e9519f6b3 100644 --- a/paddle/gserver/layers/CostLayer.cpp +++ b/paddle/gserver/layers/CostLayer.cpp @@ -12,11 +12,11 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ -#include +#include "CostLayer.h" #include -#include "paddle/utils/Logging.h" #include -#include "CostLayer.h" +#include +#include "paddle/utils/Logging.h" #include "paddle/math/SparseMatrix.h" diff --git a/paddle/gserver/layers/CudnnBatchNormLayer.cpp b/paddle/gserver/layers/CudnnBatchNormLayer.cpp index d44c217105..09dac05a7a 100644 --- a/paddle/gserver/layers/CudnnBatchNormLayer.cpp +++ b/paddle/gserver/layers/CudnnBatchNormLayer.cpp @@ -12,9 +12,9 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the License for the specific language governing permissions and limitations under the License. */ -#include "paddle/utils/Stat.h" -#include "Layer.h" #include "CudnnBatchNormLayer.h" +#include "Layer.h" +#include "paddle/utils/Stat.h" namespace paddle { diff --git a/paddle/gserver/layers/CudnnBatchNormLayer.h b/paddle/gserver/layers/CudnnBatchNormLayer.h index a52a683e15..b1e7d2082f 100644 --- a/paddle/gserver/layers/CudnnBatchNormLayer.h +++ b/paddle/gserver/layers/CudnnBatchNormLayer.h @@ -14,9 +14,9 @@ limitations under the License. */ #pragma once -#include "paddle/utils/Stat.h" -#include "Layer.h" #include "BatchNormBaseLayer.h" +#include "Layer.h" +#include "paddle/utils/Stat.h" namespace paddle { diff --git a/paddle/gserver/layers/CudnnConvLayer.cpp b/paddle/gserver/layers/CudnnConvLayer.cpp index 6e28d5eb42..978c2c1479 100644 --- a/paddle/gserver/layers/CudnnConvLayer.cpp +++ b/paddle/gserver/layers/CudnnConvLayer.cpp @@ -12,9 +12,9 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ +#include "CudnnConvLayer.h" #include "paddle/utils/Logging.h" #include "paddle/utils/Stat.h" -#include "CudnnConvLayer.h" namespace paddle { diff --git a/paddle/gserver/layers/CudnnConvLayer.h b/paddle/gserver/layers/CudnnConvLayer.h index 6317fab6f8..b869c695bd 100644 --- a/paddle/gserver/layers/CudnnConvLayer.h +++ b/paddle/gserver/layers/CudnnConvLayer.h @@ -14,10 +14,10 @@ limitations under the License. */ #pragma once +#include #include "ConvBaseLayer.h" -#include "paddle/math/Matrix.h" #include "Projection.h" -#include +#include "paddle/math/Matrix.h" namespace paddle { diff --git a/paddle/gserver/layers/CudnnPoolLayer.cpp b/paddle/gserver/layers/CudnnPoolLayer.cpp index d0e71c6345..4adb2d4709 100644 --- a/paddle/gserver/layers/CudnnPoolLayer.cpp +++ b/paddle/gserver/layers/CudnnPoolLayer.cpp @@ -12,10 +12,10 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ +#include "CudnnPoolLayer.h" +#include "paddle/math/Matrix.h" #include "paddle/utils/Logging.h" #include "paddle/utils/Stat.h" -#include "paddle/math/Matrix.h" -#include "CudnnPoolLayer.h" namespace paddle { diff --git a/paddle/gserver/layers/EosIdCheckLayer.cpp b/paddle/gserver/layers/EosIdCheckLayer.cpp index dc3c6e6b64..fa53e2e4cf 100644 --- a/paddle/gserver/layers/EosIdCheckLayer.cpp +++ b/paddle/gserver/layers/EosIdCheckLayer.cpp @@ -12,8 +12,8 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ -#include "paddle/utils/Logging.h" #include "Layer.h" +#include "paddle/utils/Logging.h" namespace paddle { /** diff --git a/paddle/gserver/layers/ExpandConvBaseLayer.h b/paddle/gserver/layers/ExpandConvBaseLayer.h index e14f6e6f44..8445642217 100644 --- a/paddle/gserver/layers/ExpandConvBaseLayer.h +++ b/paddle/gserver/layers/ExpandConvBaseLayer.h @@ -14,9 +14,9 @@ limitations under the License. 
*/ #pragma once +#include #include "ConvBaseLayer.h" #include "paddle/math/Matrix.h" -#include namespace paddle { diff --git a/paddle/gserver/layers/ExpandConvLayer.cpp b/paddle/gserver/layers/ExpandConvLayer.cpp index dcc7839960..f9267b81a7 100644 --- a/paddle/gserver/layers/ExpandConvLayer.cpp +++ b/paddle/gserver/layers/ExpandConvLayer.cpp @@ -12,9 +12,9 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ +#include "ExpandConvLayer.h" #include "paddle/utils/Logging.h" #include "paddle/utils/Stat.h" -#include "ExpandConvLayer.h" namespace paddle { diff --git a/paddle/gserver/layers/ExpandConvLayer.h b/paddle/gserver/layers/ExpandConvLayer.h index 6f8504b50a..de81a017e1 100644 --- a/paddle/gserver/layers/ExpandConvLayer.h +++ b/paddle/gserver/layers/ExpandConvLayer.h @@ -14,9 +14,9 @@ limitations under the License. */ #pragma once -#include "paddle/math/Matrix.h" #include #include "ExpandConvBaseLayer.h" +#include "paddle/math/Matrix.h" namespace paddle { diff --git a/paddle/gserver/layers/ExpandConvTransLayer.cpp b/paddle/gserver/layers/ExpandConvTransLayer.cpp index cd4965c3c5..520586b138 100644 --- a/paddle/gserver/layers/ExpandConvTransLayer.cpp +++ b/paddle/gserver/layers/ExpandConvTransLayer.cpp @@ -12,9 +12,9 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ +#include "ExpandConvTransLayer.h" #include "paddle/utils/Logging.h" #include "paddle/utils/Stat.h" -#include "ExpandConvTransLayer.h" /* The implementation of the convTransLayer is basically a swap of forward and * backward of the original convLayer. diff --git a/paddle/gserver/layers/ExpandConvTransLayer.h b/paddle/gserver/layers/ExpandConvTransLayer.h index fa9d7fb481..4a527d6799 100644 --- a/paddle/gserver/layers/ExpandConvTransLayer.h +++ b/paddle/gserver/layers/ExpandConvTransLayer.h @@ -14,9 +14,9 @@ limitations under the License. */ #pragma once -#include "paddle/math/Matrix.h" #include #include "ExpandConvBaseLayer.h" +#include "paddle/math/Matrix.h" namespace paddle { diff --git a/paddle/gserver/layers/FullyConnectedLayer.cpp b/paddle/gserver/layers/FullyConnectedLayer.cpp index d2a028dd80..89afe33c36 100644 --- a/paddle/gserver/layers/FullyConnectedLayer.cpp +++ b/paddle/gserver/layers/FullyConnectedLayer.cpp @@ -13,11 +13,11 @@ See the License for the specific language governing permissions and limitations under the License. */ #include "FullyConnectedLayer.h" +#include +#include +#include "paddle/math/SparseMatrix.h" #include "paddle/utils/Logging.h" #include "paddle/utils/Stat.h" -#include "paddle/math/SparseMatrix.h" -#include -#include namespace paddle { diff --git a/paddle/gserver/layers/GatedRecurrentLayer.cpp b/paddle/gserver/layers/GatedRecurrentLayer.cpp index 01b210ba70..930d9a0561 100644 --- a/paddle/gserver/layers/GatedRecurrentLayer.cpp +++ b/paddle/gserver/layers/GatedRecurrentLayer.cpp @@ -12,8 +12,8 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 
*/ -#include "Layer.h" #include "GatedRecurrentLayer.h" +#include "Layer.h" #include "paddle/utils/Stat.h" namespace paddle { @@ -386,8 +386,9 @@ void GatedRecurrentLayer::backwardBatch(int batchSize, MatrixPtr inputGrad) { { batchSize = outputGradTmp->getHeight(); gruValue.prevOutValue = - (n == 0 ? nullptr : (batchValue_->getBatchValue(n - 1, batchSize)) - ->getData()); + (n == 0 + ? nullptr + : (batchValue_->getBatchValue(n - 1, batchSize))->getData()); gruGrad.prevOutGrad = (n == 0 ? nullptr : (batchGrad_->getBatchValue(n - 1, batchSize))->getData()); diff --git a/paddle/gserver/layers/GatedRecurrentLayer.h b/paddle/gserver/layers/GatedRecurrentLayer.h index e099b4d18b..25770ce57f 100644 --- a/paddle/gserver/layers/GatedRecurrentLayer.h +++ b/paddle/gserver/layers/GatedRecurrentLayer.h @@ -14,10 +14,10 @@ limitations under the License. */ #pragma once -#include "paddle/math/Matrix.h" -#include "SequenceToBatch.h" #include "GruCompute.h" #include "Layer.h" +#include "SequenceToBatch.h" +#include "paddle/math/Matrix.h" namespace paddle { diff --git a/paddle/gserver/layers/GruCompute.cpp b/paddle/gserver/layers/GruCompute.cpp index 7d4e8001a8..06907768e9 100644 --- a/paddle/gserver/layers/GruCompute.cpp +++ b/paddle/gserver/layers/GruCompute.cpp @@ -12,9 +12,9 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ -#include "paddle/utils/Util.h" #include "GruCompute.h" #include "hl_recurrent_apply.cuh" +#include "paddle/utils/Util.h" namespace paddle { diff --git a/paddle/gserver/layers/GruCompute.h b/paddle/gserver/layers/GruCompute.h index 2a5da72068..42c0019319 100644 --- a/paddle/gserver/layers/GruCompute.h +++ b/paddle/gserver/layers/GruCompute.h @@ -14,9 +14,9 @@ limitations under the License. */ #pragma once -#include "paddle/utils/TypeDefs.h" #include "ModelConfig.pb.h" #include "hl_gpu.h" +#include "paddle/utils/TypeDefs.h" namespace paddle { diff --git a/paddle/gserver/layers/GruStepLayer.cpp b/paddle/gserver/layers/GruStepLayer.cpp index c48b5e40e6..4a1006aa94 100644 --- a/paddle/gserver/layers/GruStepLayer.cpp +++ b/paddle/gserver/layers/GruStepLayer.cpp @@ -12,8 +12,8 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ -#include "Layer.h" #include "GruCompute.h" +#include "Layer.h" #include "paddle/utils/Stat.h" namespace paddle { diff --git a/paddle/gserver/layers/IdentityProjection.cpp b/paddle/gserver/layers/IdentityProjection.cpp index 8660631b5a..f1d41a33d4 100644 --- a/paddle/gserver/layers/IdentityProjection.cpp +++ b/paddle/gserver/layers/IdentityProjection.cpp @@ -12,8 +12,8 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ -#include "paddle/utils/Stat.h" #include "Projection.h" +#include "paddle/utils/Stat.h" namespace paddle { diff --git a/paddle/gserver/layers/InterpolationLayer.cpp b/paddle/gserver/layers/InterpolationLayer.cpp index 94d4614b21..44fe1fb1fe 100644 --- a/paddle/gserver/layers/InterpolationLayer.cpp +++ b/paddle/gserver/layers/InterpolationLayer.cpp @@ -12,9 +12,9 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 
*/ -#include "paddle/utils/Logging.h" #include "Layer.h" #include "paddle/math/Matrix.h" +#include "paddle/utils/Logging.h" #include "paddle/utils/Stat.h" namespace paddle { diff --git a/paddle/gserver/layers/Layer.cpp b/paddle/gserver/layers/Layer.cpp index 3c539f3076..c9e121047b 100644 --- a/paddle/gserver/layers/Layer.cpp +++ b/paddle/gserver/layers/Layer.cpp @@ -14,15 +14,15 @@ limitations under the License. */ #include "paddle/utils/Util.h" -#include "paddle/utils/Logging.h" #include "paddle/math/SparseMatrix.h" +#include "paddle/utils/Logging.h" #include "AddtoLayer.h" +#include "CRFLayer.h" #include "CosSimLayer.h" #include "CostLayer.h" -#include "ExpandConvLayer.h" -#include "CRFLayer.h" #include "DataLayer.h" +#include "ExpandConvLayer.h" #include "FullyConnectedLayer.h" #include "HierarchicalSigmoidLayer.h" #include "MaxLayer.h" diff --git a/paddle/gserver/layers/Layer.h b/paddle/gserver/layers/Layer.h index 6609e16c4c..172e558b82 100644 --- a/paddle/gserver/layers/Layer.h +++ b/paddle/gserver/layers/Layer.h @@ -14,18 +14,18 @@ limitations under the License. */ #pragma once -#include -#include #include -#include "paddle/utils/ClassRegistrar.h" +#include +#include +#include "ModelConfig.pb.h" #include "paddle/math/CpuSparseMatrix.h" #include "paddle/parameter/Parameter.h" +#include "paddle/utils/ClassRegistrar.h" #include "paddle/utils/Util.h" -#include "ModelConfig.pb.h" -#include "paddle/gserver/activations/ActivationFunction.h" #include #include +#include "paddle/gserver/activations/ActivationFunction.h" /// Macro for registering a layer type. /// Example: REGISTER_LAYER(crf_error, CRFDecodingErrorLayer); diff --git a/paddle/gserver/layers/LinearChainCRF.cpp b/paddle/gserver/layers/LinearChainCRF.cpp index c6414c822e..af550c7a01 100644 --- a/paddle/gserver/layers/LinearChainCRF.cpp +++ b/paddle/gserver/layers/LinearChainCRF.cpp @@ -12,8 +12,8 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ -#include #include "LinearChainCRF.h" +#include namespace paddle { diff --git a/paddle/gserver/layers/LinearChainCTC.cpp b/paddle/gserver/layers/LinearChainCTC.cpp index 60e814fc30..cb2b249110 100644 --- a/paddle/gserver/layers/LinearChainCTC.cpp +++ b/paddle/gserver/layers/LinearChainCTC.cpp @@ -12,8 +12,8 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ -#include #include "LinearChainCTC.h" +#include #include namespace paddle { diff --git a/paddle/gserver/layers/LstmCompute.cpp b/paddle/gserver/layers/LstmCompute.cpp index 18f7996958..4c42970964 100644 --- a/paddle/gserver/layers/LstmCompute.cpp +++ b/paddle/gserver/layers/LstmCompute.cpp @@ -12,9 +12,9 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ -#include "paddle/utils/Util.h" -#include "hl_recurrent_apply.cuh" #include "LstmCompute.h" +#include "hl_recurrent_apply.cuh" +#include "paddle/utils/Util.h" namespace paddle { diff --git a/paddle/gserver/layers/LstmCompute.h b/paddle/gserver/layers/LstmCompute.h index 9b7aee19dd..140a4c6ecf 100644 --- a/paddle/gserver/layers/LstmCompute.h +++ b/paddle/gserver/layers/LstmCompute.h @@ -14,9 +14,9 @@ limitations under the License. 
*/ #pragma once -#include "paddle/utils/TypeDefs.h" #include "ModelConfig.pb.h" #include "hl_gpu.h" +#include "paddle/utils/TypeDefs.h" namespace paddle { diff --git a/paddle/gserver/layers/LstmLayer.cpp b/paddle/gserver/layers/LstmLayer.cpp index 975edcfe7f..452091eff4 100644 --- a/paddle/gserver/layers/LstmLayer.cpp +++ b/paddle/gserver/layers/LstmLayer.cpp @@ -13,8 +13,8 @@ See the License for the specific language governing permissions and limitations under the License. */ #include "LstmLayer.h" -#include "paddle/math/Matrix.h" #include "paddle/math/BaseMatrix.h" +#include "paddle/math/Matrix.h" #include "paddle/utils/Stat.h" P_DECLARE_bool(prev_batch_state); diff --git a/paddle/gserver/layers/LstmLayer.h b/paddle/gserver/layers/LstmLayer.h index 16c62aa88d..f49df2c412 100644 --- a/paddle/gserver/layers/LstmLayer.h +++ b/paddle/gserver/layers/LstmLayer.h @@ -15,10 +15,10 @@ limitations under the License. */ #pragma once #include "Layer.h" -#include "paddle/math/Matrix.h" -#include "paddle/math/BaseMatrix.h" -#include "SequenceToBatch.h" #include "LstmCompute.h" +#include "SequenceToBatch.h" +#include "paddle/math/BaseMatrix.h" +#include "paddle/math/Matrix.h" namespace paddle { /** diff --git a/paddle/gserver/layers/MDLstmLayer.cpp b/paddle/gserver/layers/MDLstmLayer.cpp index 9d3797d16f..1243c12889 100644 --- a/paddle/gserver/layers/MDLstmLayer.cpp +++ b/paddle/gserver/layers/MDLstmLayer.cpp @@ -13,8 +13,8 @@ See the License for the specific language governing permissions and limitations under the License. */ #include "LstmLayer.h" -#include "paddle/math/Matrix.h" #include "paddle/math/BaseMatrix.h" +#include "paddle/math/Matrix.h" namespace paddle { @@ -318,7 +318,7 @@ void MDLstmLayer::forward(PassType passType) { CHECK_EQ(starts[numSequences], batchSize); int* dimsData = input.cpuSequenceDims->getData(); - CHECK_EQ(int(input.cpuSequenceDims->getSize()), numDims_ * numSequences); + CHECK_EQ(int(input.cpuSequenceDims->getSize()), numDims_* numSequences); for (int i = 0; i < numSequences; i++) { std::vector dims; diff --git a/paddle/gserver/layers/MaxOutLayer.cpp b/paddle/gserver/layers/MaxOutLayer.cpp index 4fb99ce2a2..3a86a95321 100644 --- a/paddle/gserver/layers/MaxOutLayer.cpp +++ b/paddle/gserver/layers/MaxOutLayer.cpp @@ -13,8 +13,8 @@ See the License for the specific language governing permissions and limitations under the License. */ #include "MaxOutLayer.h" -#include "hl_gpu.h" #include "hl_cnn.h" +#include "hl_gpu.h" namespace paddle { diff --git a/paddle/gserver/layers/MixedLayer.cpp b/paddle/gserver/layers/MixedLayer.cpp index 490b217347..2525b1984b 100644 --- a/paddle/gserver/layers/MixedLayer.cpp +++ b/paddle/gserver/layers/MixedLayer.cpp @@ -12,8 +12,8 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ -#include "paddle/utils/Stat.h" #include "MixedLayer.h" +#include "paddle/utils/Stat.h" namespace paddle { diff --git a/paddle/gserver/layers/MixedLayer.h b/paddle/gserver/layers/MixedLayer.h index d73ba6b7a1..9655a152c7 100644 --- a/paddle/gserver/layers/MixedLayer.h +++ b/paddle/gserver/layers/MixedLayer.h @@ -15,8 +15,8 @@ limitations under the License. 
*/ #pragma once #include "Layer.h" -#include "Projection.h" #include "Operator.h" +#include "Projection.h" namespace paddle { diff --git a/paddle/gserver/layers/MultiplexLayer.cpp b/paddle/gserver/layers/MultiplexLayer.cpp index dc4a1ec321..d09720c525 100644 --- a/paddle/gserver/layers/MultiplexLayer.cpp +++ b/paddle/gserver/layers/MultiplexLayer.cpp @@ -12,9 +12,9 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ -#include "paddle/utils/Logging.h" #include "Layer.h" #include "paddle/math/Matrix.h" +#include "paddle/utils/Logging.h" #include "paddle/utils/Stat.h" namespace paddle { diff --git a/paddle/gserver/layers/NormLayer.cpp b/paddle/gserver/layers/NormLayer.cpp index b8682a1422..3db0af2515 100644 --- a/paddle/gserver/layers/NormLayer.cpp +++ b/paddle/gserver/layers/NormLayer.cpp @@ -12,9 +12,9 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ -#include "paddle/utils/Logging.h" #include "NormLayer.h" #include "NormProjectionLayer.h" +#include "paddle/utils/Logging.h" namespace paddle { REGISTER_LAYER_CREATE_FUNC(norm, &NormLayer::create); diff --git a/paddle/gserver/layers/NormLayer.h b/paddle/gserver/layers/NormLayer.h index aedbb95b4f..86255b231b 100644 --- a/paddle/gserver/layers/NormLayer.h +++ b/paddle/gserver/layers/NormLayer.h @@ -16,8 +16,8 @@ limitations under the License. */ #include #include "Layer.h" -#include "paddle/math/Matrix.h" #include "NormLayer.h" +#include "paddle/math/Matrix.h" namespace paddle { diff --git a/paddle/gserver/layers/NormProjectionLayer.cpp b/paddle/gserver/layers/NormProjectionLayer.cpp index ea301292e0..934fc31e0a 100644 --- a/paddle/gserver/layers/NormProjectionLayer.cpp +++ b/paddle/gserver/layers/NormProjectionLayer.cpp @@ -12,9 +12,9 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ +#include "NormProjectionLayer.h" #include "paddle/utils/Logging.h" #include "paddle/utils/Stat.h" -#include "NormProjectionLayer.h" namespace paddle { size_t CMRProjectionNormLayer::getSize() { diff --git a/paddle/gserver/layers/NormProjectionLayer.h b/paddle/gserver/layers/NormProjectionLayer.h index 0db8e2551f..4f7b638334 100644 --- a/paddle/gserver/layers/NormProjectionLayer.h +++ b/paddle/gserver/layers/NormProjectionLayer.h @@ -14,9 +14,9 @@ limitations under the License. */ #pragma once +#include #include "NormLayer.h" #include "paddle/math/Matrix.h" -#include namespace paddle { diff --git a/paddle/gserver/layers/Operator.h b/paddle/gserver/layers/Operator.h index b0586b59e9..6fd331382f 100644 --- a/paddle/gserver/layers/Operator.h +++ b/paddle/gserver/layers/Operator.h @@ -14,11 +14,11 @@ limitations under the License. */ #pragma once -#include "paddle/parameter/Parameter.h" #include "ModelConfig.pb.h" +#include "paddle/parameter/Parameter.h" -#include "paddle/parameter/Argument.h" #include "Layer.h" +#include "paddle/parameter/Argument.h" namespace paddle { diff --git a/paddle/gserver/layers/OuterProdLayer.cpp b/paddle/gserver/layers/OuterProdLayer.cpp index 42587dcce5..cf9a008318 100644 --- a/paddle/gserver/layers/OuterProdLayer.cpp +++ b/paddle/gserver/layers/OuterProdLayer.cpp @@ -12,9 +12,9 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the License for the specific language governing permissions and limitations under the License. */ -#include "paddle/utils/Logging.h" #include "Layer.h" #include "paddle/math/Matrix.h" +#include "paddle/utils/Logging.h" #include "paddle/utils/Stat.h" namespace paddle { diff --git a/paddle/gserver/layers/PoolLayer.cpp b/paddle/gserver/layers/PoolLayer.cpp index 36e396487e..96d5c54acc 100644 --- a/paddle/gserver/layers/PoolLayer.cpp +++ b/paddle/gserver/layers/PoolLayer.cpp @@ -12,9 +12,9 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ -#include "paddle/utils/Logging.h" #include "PoolLayer.h" #include "PoolProjectionLayer.h" +#include "paddle/utils/Logging.h" #ifndef PADDLE_ONLY_CPU #include "CudnnPoolLayer.h" #endif diff --git a/paddle/gserver/layers/PoolLayer.h b/paddle/gserver/layers/PoolLayer.h index c05d7a364d..318b89d7c2 100644 --- a/paddle/gserver/layers/PoolLayer.h +++ b/paddle/gserver/layers/PoolLayer.h @@ -14,10 +14,10 @@ limitations under the License. */ #pragma once +#include #include "Layer.h" -#include "paddle/math/Matrix.h" #include "paddle/math/MathUtils.h" -#include +#include "paddle/math/Matrix.h" namespace paddle { diff --git a/paddle/gserver/layers/PoolProjectionLayer.cpp b/paddle/gserver/layers/PoolProjectionLayer.cpp index 392c548d45..ed5011ab89 100644 --- a/paddle/gserver/layers/PoolProjectionLayer.cpp +++ b/paddle/gserver/layers/PoolProjectionLayer.cpp @@ -12,9 +12,9 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ +#include "PoolProjectionLayer.h" #include "paddle/utils/Logging.h" #include "paddle/utils/Stat.h" -#include "PoolProjectionLayer.h" namespace paddle { diff --git a/paddle/gserver/layers/PowerLayer.cpp b/paddle/gserver/layers/PowerLayer.cpp index eb69249270..64fecab5b0 100644 --- a/paddle/gserver/layers/PowerLayer.cpp +++ b/paddle/gserver/layers/PowerLayer.cpp @@ -12,9 +12,9 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ -#include "paddle/utils/Logging.h" #include "Layer.h" #include "paddle/math/Matrix.h" +#include "paddle/utils/Logging.h" #include "paddle/utils/Stat.h" namespace paddle { diff --git a/paddle/gserver/layers/RecurrentLayer.cpp b/paddle/gserver/layers/RecurrentLayer.cpp index 0832eeaa10..9f3bf76a2d 100644 --- a/paddle/gserver/layers/RecurrentLayer.cpp +++ b/paddle/gserver/layers/RecurrentLayer.cpp @@ -13,9 +13,9 @@ See the License for the specific language governing permissions and limitations under the License. */ #include "Layer.h" -#include "paddle/utils/Stat.h" #include "SequenceToBatch.h" #include "paddle/utils/CommandLineParser.h" +#include "paddle/utils/Stat.h" P_DEFINE_bool(rnn_use_batch, false, "Using the batch method for calculation."); diff --git a/paddle/gserver/layers/RecurrentLayerGroup.cpp b/paddle/gserver/layers/RecurrentLayerGroup.cpp index 5cb4220623..af8dd61d84 100644 --- a/paddle/gserver/layers/RecurrentLayerGroup.cpp +++ b/paddle/gserver/layers/RecurrentLayerGroup.cpp @@ -12,8 +12,8 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 
*/ -#include "paddle/gserver/layers/Layer.h" #include +#include "paddle/gserver/layers/Layer.h" #include "paddle/gserver/gradientmachines/RecurrentGradientMachine.h" #include "paddle/utils/Stat.h" diff --git a/paddle/gserver/layers/ResizeLayer.cpp b/paddle/gserver/layers/ResizeLayer.cpp index e79732155a..7fcb3adea0 100644 --- a/paddle/gserver/layers/ResizeLayer.cpp +++ b/paddle/gserver/layers/ResizeLayer.cpp @@ -13,8 +13,8 @@ See the License for the specific language governing permissions and limitations under the License. */ #include "Layer.h" -#include "paddle/math/Matrix.h" #include "paddle/math/BaseMatrix.h" +#include "paddle/math/Matrix.h" namespace paddle { /** diff --git a/paddle/gserver/layers/ScalingLayer.cpp b/paddle/gserver/layers/ScalingLayer.cpp index 013bff6b98..7f0084be6b 100644 --- a/paddle/gserver/layers/ScalingLayer.cpp +++ b/paddle/gserver/layers/ScalingLayer.cpp @@ -12,9 +12,9 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ -#include "paddle/utils/Logging.h" #include "Layer.h" #include "paddle/math/Matrix.h" +#include "paddle/utils/Logging.h" #include "paddle/utils/Stat.h" namespace paddle { diff --git a/paddle/gserver/layers/SelectiveFullyConnectedLayer.cpp b/paddle/gserver/layers/SelectiveFullyConnectedLayer.cpp index 75d9fa8a97..9200a01eee 100644 --- a/paddle/gserver/layers/SelectiveFullyConnectedLayer.cpp +++ b/paddle/gserver/layers/SelectiveFullyConnectedLayer.cpp @@ -13,11 +13,11 @@ See the License for the specific language governing permissions and limitations under the License. */ #include "SelectiveFullyConnectedLayer.h" +#include +#include +#include "paddle/math/SparseMatrix.h" #include "paddle/utils/Logging.h" #include "paddle/utils/Stat.h" -#include "paddle/math/SparseMatrix.h" -#include -#include namespace paddle { diff --git a/paddle/gserver/layers/SequenceConcatLayer.cpp b/paddle/gserver/layers/SequenceConcatLayer.cpp index d3e0e16e96..069bc26e60 100644 --- a/paddle/gserver/layers/SequenceConcatLayer.cpp +++ b/paddle/gserver/layers/SequenceConcatLayer.cpp @@ -12,9 +12,9 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ -#include "paddle/utils/Logging.h" #include "Layer.h" #include "paddle/math/Matrix.h" +#include "paddle/utils/Logging.h" #include "paddle/utils/Stat.h" namespace paddle { diff --git a/paddle/gserver/layers/SequencePoolLayer.cpp b/paddle/gserver/layers/SequencePoolLayer.cpp index 856c889e3b..35260ca912 100644 --- a/paddle/gserver/layers/SequencePoolLayer.cpp +++ b/paddle/gserver/layers/SequencePoolLayer.cpp @@ -12,8 +12,8 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ -#include "paddle/utils/Logging.h" #include "SequencePoolLayer.h" +#include "paddle/utils/Logging.h" namespace paddle { diff --git a/paddle/gserver/layers/SequenceReshapeLayer.cpp b/paddle/gserver/layers/SequenceReshapeLayer.cpp index 4b90424215..23924b0490 100644 --- a/paddle/gserver/layers/SequenceReshapeLayer.cpp +++ b/paddle/gserver/layers/SequenceReshapeLayer.cpp @@ -12,9 +12,9 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 
*/ -#include "paddle/utils/Logging.h" #include "Layer.h" #include "paddle/math/Matrix.h" +#include "paddle/utils/Logging.h" #include "paddle/utils/Stat.h" namespace paddle { diff --git a/paddle/gserver/layers/SequenceToBatch.cpp b/paddle/gserver/layers/SequenceToBatch.cpp index c12ed82197..5fa7b6f488 100644 --- a/paddle/gserver/layers/SequenceToBatch.cpp +++ b/paddle/gserver/layers/SequenceToBatch.cpp @@ -12,11 +12,11 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ -#include -#include #include "SequenceToBatch.h" -#include #include +#include +#include +#include namespace paddle { diff --git a/paddle/gserver/layers/SequenceToBatch.h b/paddle/gserver/layers/SequenceToBatch.h index fe9b34b224..17e735a135 100644 --- a/paddle/gserver/layers/SequenceToBatch.h +++ b/paddle/gserver/layers/SequenceToBatch.h @@ -13,8 +13,8 @@ See the License for the specific language governing permissions and limitations under the License. */ #pragma once -#include "paddle/math/Vector.h" #include "paddle/math/Matrix.h" +#include "paddle/math/Vector.h" namespace paddle { diff --git a/paddle/gserver/layers/SlopeInterceptLayer.cpp b/paddle/gserver/layers/SlopeInterceptLayer.cpp index 5c00e54f8c..b678f414b6 100644 --- a/paddle/gserver/layers/SlopeInterceptLayer.cpp +++ b/paddle/gserver/layers/SlopeInterceptLayer.cpp @@ -12,9 +12,9 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ -#include "paddle/utils/Logging.h" #include "Layer.h" #include "paddle/math/Matrix.h" +#include "paddle/utils/Logging.h" #include "paddle/utils/Stat.h" namespace paddle { diff --git a/paddle/gserver/layers/SubSequenceLayer.cpp b/paddle/gserver/layers/SubSequenceLayer.cpp index 8b35456391..c52fbee262 100644 --- a/paddle/gserver/layers/SubSequenceLayer.cpp +++ b/paddle/gserver/layers/SubSequenceLayer.cpp @@ -12,10 +12,10 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ -#include "paddle/utils/Logging.h" #include "Layer.h" #include "paddle/math/Matrix.h" #include "paddle/math/Vector.h" +#include "paddle/utils/Logging.h" #include "paddle/utils/Stat.h" namespace paddle { diff --git a/paddle/gserver/layers/SumToOneNormLayer.cpp b/paddle/gserver/layers/SumToOneNormLayer.cpp index e6759171cb..aa99b49380 100644 --- a/paddle/gserver/layers/SumToOneNormLayer.cpp +++ b/paddle/gserver/layers/SumToOneNormLayer.cpp @@ -12,9 +12,9 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ -#include "paddle/utils/Logging.h" #include "Layer.h" #include "paddle/math/Matrix.h" +#include "paddle/utils/Logging.h" #include "paddle/utils/Stat.h" namespace paddle { diff --git a/paddle/gserver/layers/TransLayer.cpp b/paddle/gserver/layers/TransLayer.cpp index 5cbaaf8f08..d1fa90f384 100644 --- a/paddle/gserver/layers/TransLayer.cpp +++ b/paddle/gserver/layers/TransLayer.cpp @@ -12,8 +12,8 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 
*/ -#include "paddle/utils/Logging.h" #include "TransLayer.h" +#include "paddle/utils/Logging.h" namespace paddle { REGISTER_LAYER(trans, TransLayer); diff --git a/paddle/gserver/layers/TransLayer.h b/paddle/gserver/layers/TransLayer.h index 8189700759..b43fa1ebfb 100644 --- a/paddle/gserver/layers/TransLayer.h +++ b/paddle/gserver/layers/TransLayer.h @@ -14,9 +14,9 @@ limitations under the License. */ #pragma once +#include #include "Layer.h" #include "paddle/math/Matrix.h" -#include namespace paddle { /** diff --git a/paddle/gserver/layers/TransposedFullMatrixProjection.cpp b/paddle/gserver/layers/TransposedFullMatrixProjection.cpp index 8282584ab4..3f7ff04882 100644 --- a/paddle/gserver/layers/TransposedFullMatrixProjection.cpp +++ b/paddle/gserver/layers/TransposedFullMatrixProjection.cpp @@ -12,8 +12,8 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ -#include "paddle/utils/Stat.h" #include "Projection.h" +#include "paddle/utils/Stat.h" namespace paddle { diff --git a/paddle/gserver/layers/ValidationLayer.cpp b/paddle/gserver/layers/ValidationLayer.cpp index f029ea4c51..5127bcaba3 100644 --- a/paddle/gserver/layers/ValidationLayer.cpp +++ b/paddle/gserver/layers/ValidationLayer.cpp @@ -12,12 +12,12 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ -#include #include #include +#include -#include "paddle/utils/Logging.h" #include "ValidationLayer.h" +#include "paddle/utils/Logging.h" namespace paddle { diff --git a/paddle/gserver/layers/ValidationLayer.h b/paddle/gserver/layers/ValidationLayer.h index f9c61503aa..471055429d 100644 --- a/paddle/gserver/layers/ValidationLayer.h +++ b/paddle/gserver/layers/ValidationLayer.h @@ -15,8 +15,8 @@ limitations under the License. */ #pragma once #include -#include "paddle/gserver/evaluators/Evaluator.h" #include "Layer.h" +#include "paddle/gserver/evaluators/Evaluator.h" P_DECLARE_int32(trainer_id); diff --git a/paddle/gserver/tests/LayerGradUtil.h b/paddle/gserver/tests/LayerGradUtil.h index 2b8f334f19..62ac2d160f 100644 --- a/paddle/gserver/tests/LayerGradUtil.h +++ b/paddle/gserver/tests/LayerGradUtil.h @@ -13,9 +13,9 @@ See the License for the specific language governing permissions and limitations under the License. */ #pragma once -#include "paddle/trainer/Trainer.h" -#include "paddle/gserver/layers/DataLayer.h" #include "ModelConfig.pb.h" +#include "paddle/gserver/layers/DataLayer.h" +#include "paddle/trainer/Trainer.h" #include "TestUtil.h" using namespace std; // NOLINT diff --git a/paddle/gserver/tests/TestUtil.cpp b/paddle/gserver/tests/TestUtil.cpp index dc00711697..e656da5b8f 100644 --- a/paddle/gserver/tests/TestUtil.cpp +++ b/paddle/gserver/tests/TestUtil.cpp @@ -14,8 +14,8 @@ limitations under the License. 
*/ #include "TestUtil.h" -#include "paddle/utils/CommandLineParser.h" #include "paddle/math/SparseMatrix.h" +#include "paddle/utils/CommandLineParser.h" P_DEFINE_int32(fixed_seq_length, 0, "Produce some sequence of fixed length"); @@ -63,8 +63,8 @@ MatrixPtr makeRandomSparseMatrix(size_t height, std::dynamic_pointer_cast(mat)->copyFrom( ids.data(), indices.data(), data.data(), HPPL_STREAM_DEFAULT); } else { - std::dynamic_pointer_cast(mat) - ->copyFrom(ids.data(), indices.data(), data.data()); + std::dynamic_pointer_cast(mat)->copyFrom( + ids.data(), indices.data(), data.data()); } return mat; } else { @@ -80,8 +80,8 @@ MatrixPtr makeRandomSparseMatrix(size_t height, std::dynamic_pointer_cast(mat)->copyFrom( ids.data(), indices.data(), data.data(), HPPL_STREAM_DEFAULT); } else { - std::dynamic_pointer_cast(mat) - ->copyFrom(ids.data(), indices.data(), data.data()); + std::dynamic_pointer_cast(mat)->copyFrom( + ids.data(), indices.data(), data.data()); } return mat; } diff --git a/paddle/gserver/tests/test_ActivationGrad.cpp b/paddle/gserver/tests/test_ActivationGrad.cpp index 0181d62519..20a6126d0b 100644 --- a/paddle/gserver/tests/test_ActivationGrad.cpp +++ b/paddle/gserver/tests/test_ActivationGrad.cpp @@ -13,14 +13,14 @@ See the License for the specific language governing permissions and limitations under the License. */ #include -#include #include -#include "paddle/gserver/layers/DataLayer.h" +#include #include "ModelConfig.pb.h" +#include "paddle/gserver/layers/DataLayer.h" #include "paddle/trainer/Trainer.h" -#include "TestUtil.h" #include "LayerGradUtil.h" +#include "TestUtil.h" using namespace paddle; // NOLINT using namespace std; // NOLINT diff --git a/paddle/gserver/tests/test_BatchNorm.cpp b/paddle/gserver/tests/test_BatchNorm.cpp index 8575999aba..3bd4e321b7 100644 --- a/paddle/gserver/tests/test_BatchNorm.cpp +++ b/paddle/gserver/tests/test_BatchNorm.cpp @@ -13,16 +13,16 @@ See the License for the specific language governing permissions and limitations under the License. 
*/ #include -#include #include -#include "paddle/gserver/layers/DataLayer.h" +#include #include "ModelConfig.pb.h" +#include "paddle/gserver/layers/DataLayer.h" +#include "paddle/gserver/layers/ExpandConvTransLayer.h" #include "paddle/trainer/Trainer.h" #include "paddle/utils/GlobalConstants.h" -#include "paddle/gserver/layers/ExpandConvTransLayer.h" -#include "TestUtil.h" #include "LayerGradUtil.h" +#include "TestUtil.h" using namespace paddle; // NOLINT using namespace std; // NOLINT @@ -35,80 +35,87 @@ P_DECLARE_bool(prev_batch_state); // Test that the batchNormLayer can be followed by a ConvLayer TEST(Layer, batchNorm) { - FLAGS_use_gpu = false; - TestConfig configBN; - const int CHANNELS = 6272; - const int IMG_SIZE = 1; - configBN.layerConfig.set_type("batch_norm"); - configBN.layerConfig.set_name("bn"); - configBN.layerConfig.set_size(CHANNELS * IMG_SIZE * IMG_SIZE); - configBN.layerConfig.set_active_type("relu"); - configBN.biasSize = CHANNELS; - configBN.inputDefs.push_back({INPUT_DATA, "layer_0", + FLAGS_use_gpu = false; + TestConfig configBN; + const int CHANNELS = 6272; + const int IMG_SIZE = 1; + configBN.layerConfig.set_type("batch_norm"); + configBN.layerConfig.set_name("bn"); + configBN.layerConfig.set_size(CHANNELS * IMG_SIZE * IMG_SIZE); + configBN.layerConfig.set_active_type("relu"); + configBN.biasSize = CHANNELS; + configBN.inputDefs.push_back({INPUT_DATA, + "layer_0", /* dim= */ IMG_SIZE * IMG_SIZE * CHANNELS, /* paraSize= */ CHANNELS}); - configBN.inputDefs.push_back({INPUT_DATA, "layer_1_running_mean", - 1, CHANNELS}); - configBN.inputDefs.back().isStatic = true; - configBN.inputDefs.push_back({INPUT_DATA, "layer_2_running_var", - 1, CHANNELS}); - configBN.inputDefs.back().isStatic = true; - - LayerInputConfig* input = configBN.layerConfig.add_inputs(); - configBN.layerConfig.add_inputs(); - configBN.layerConfig.add_inputs(); - - ImageConfig* img_conf = input->mutable_image_conf(); - img_conf->set_channels(CHANNELS); - img_conf->set_img_size(IMG_SIZE); - - // Setting up conv-layer config - TestConfig config; - config.biasSize = 64; - config.layerConfig.set_type("exconv"); - config.layerConfig.set_num_filters(64); - config.layerConfig.set_partial_sum(1); - config.layerConfig.set_shared_biases(true); - - config.inputDefs.push_back({INPUT_DATA, "bn", 6272, 204800}); - input = config.layerConfig.add_inputs(); - ConvConfig* conv = input->mutable_conv_conf(); - conv->set_filter_size(5); - conv->set_filter_size_y(5); - conv->set_channels(128); - conv->set_padding(1); - conv->set_padding_y(1); - conv->set_stride(2); - conv->set_stride_y(2); - conv->set_groups(1); - conv->set_filter_channels(conv->channels() / conv->groups()); - conv->set_img_size(7); - conv->set_output_x(3); - config.layerConfig.set_size(conv->output_x() * conv->output_x() * - config.layerConfig.num_filters()); - config.layerConfig.set_name("conv"); - - // data layer initialize - std::vector dataLayers; - LayerMap layerMap; - vector datas; - initDataLayer(configBN, &dataLayers, &datas, &layerMap, "batch_norm", - 100, false, false); - // test layer initialize - std::vector parameters; - LayerPtr bnLayer; - initTestLayer(configBN, &layerMap, ¶meters, &bnLayer); - - std::vector parameters2; - LayerPtr convLayer; - initTestLayer(config, &layerMap, ¶meters2, &convLayer); - - bnLayer->forward(PASS_GC); - convLayer->forward(PASS_GC); - - CHECK_EQ(convLayer->getOutputValue()->getHeight(), 100); - CHECK_EQ(convLayer->getOutputValue()->getWidth(), 576); + configBN.inputDefs.push_back( + {INPUT_DATA, 
"layer_1_running_mean", 1, CHANNELS}); + configBN.inputDefs.back().isStatic = true; + configBN.inputDefs.push_back( + {INPUT_DATA, "layer_2_running_var", 1, CHANNELS}); + configBN.inputDefs.back().isStatic = true; + + LayerInputConfig* input = configBN.layerConfig.add_inputs(); + configBN.layerConfig.add_inputs(); + configBN.layerConfig.add_inputs(); + + ImageConfig* img_conf = input->mutable_image_conf(); + img_conf->set_channels(CHANNELS); + img_conf->set_img_size(IMG_SIZE); + + // Setting up conv-layer config + TestConfig config; + config.biasSize = 64; + config.layerConfig.set_type("exconv"); + config.layerConfig.set_num_filters(64); + config.layerConfig.set_partial_sum(1); + config.layerConfig.set_shared_biases(true); + + config.inputDefs.push_back({INPUT_DATA, "bn", 6272, 204800}); + input = config.layerConfig.add_inputs(); + ConvConfig* conv = input->mutable_conv_conf(); + conv->set_filter_size(5); + conv->set_filter_size_y(5); + conv->set_channels(128); + conv->set_padding(1); + conv->set_padding_y(1); + conv->set_stride(2); + conv->set_stride_y(2); + conv->set_groups(1); + conv->set_filter_channels(conv->channels() / conv->groups()); + conv->set_img_size(7); + conv->set_output_x(3); + config.layerConfig.set_size(conv->output_x() * conv->output_x() * + config.layerConfig.num_filters()); + config.layerConfig.set_name("conv"); + + // data layer initialize + std::vector dataLayers; + LayerMap layerMap; + vector datas; + initDataLayer(configBN, + &dataLayers, + &datas, + &layerMap, + "batch_norm", + 100, + false, + false); + // test layer initialize + std::vector parameters; + LayerPtr bnLayer; + initTestLayer(configBN, &layerMap, ¶meters, &bnLayer); + + std::vector parameters2; + LayerPtr convLayer; + initTestLayer(config, &layerMap, ¶meters2, &convLayer); + + bnLayer->forward(PASS_GC); + convLayer->forward(PASS_GC); + + CHECK_EQ(convLayer->getOutputValue()->getHeight(), 100); + CHECK_EQ(convLayer->getOutputValue()->getWidth(), 576); } int main(int argc, char** argv) { diff --git a/paddle/gserver/tests/test_ConvTrans.cpp b/paddle/gserver/tests/test_ConvTrans.cpp index 3af3f08f40..83100e3bec 100644 --- a/paddle/gserver/tests/test_ConvTrans.cpp +++ b/paddle/gserver/tests/test_ConvTrans.cpp @@ -13,17 +13,17 @@ See the License for the specific language governing permissions and limitations under the License. */ #include -#include #include -#include "paddle/gserver/layers/DataLayer.h" +#include #include "ModelConfig.pb.h" -#include "paddle/trainer/Trainer.h" -#include "paddle/utils/GlobalConstants.h" +#include "paddle/gserver/layers/DataLayer.h" #include "paddle/gserver/layers/ExpandConvTransLayer.h" #include "paddle/math/MathUtils.h" +#include "paddle/trainer/Trainer.h" +#include "paddle/utils/GlobalConstants.h" -#include "TestUtil.h" #include "LayerGradUtil.h" +#include "TestUtil.h" using namespace paddle; // NOLINT using namespace std; // NOLINT diff --git a/paddle/gserver/tests/test_ConvUnify.cpp b/paddle/gserver/tests/test_ConvUnify.cpp index d59acf96ac..02763406a3 100644 --- a/paddle/gserver/tests/test_ConvUnify.cpp +++ b/paddle/gserver/tests/test_ConvUnify.cpp @@ -13,17 +13,17 @@ See the License for the specific language governing permissions and limitations under the License. 
*/ #include -#include #include -#include "paddle/gserver/layers/DataLayer.h" +#include #include "ModelConfig.pb.h" -#include "paddle/trainer/Trainer.h" -#include "paddle/utils/GlobalConstants.h" +#include "paddle/gserver/layers/DataLayer.h" #include "paddle/gserver/layers/ExpandConvTransLayer.h" #include "paddle/math/MathUtils.h" +#include "paddle/trainer/Trainer.h" +#include "paddle/utils/GlobalConstants.h" -#include "TestUtil.h" #include "LayerGradUtil.h" +#include "TestUtil.h" using namespace paddle; // NOLINT using namespace std; // NOLINT @@ -36,10 +36,17 @@ P_DECLARE_bool(prev_batch_state); // Do one forward pass of convTrans layer and check to see if its output // matches the given result -MatrixPtr doOneConvTest(size_t imgSize, size_t output_x, size_t stride, - size_t padding, size_t filter_size, size_t channel, - size_t numfilters, size_t groups, MatrixPtr& inputData, - real* param, bool useGpu) { +MatrixPtr doOneConvTest(size_t imgSize, + size_t output_x, + size_t stride, + size_t padding, + size_t filter_size, + size_t channel, + size_t numfilters, + size_t groups, + MatrixPtr& inputData, + real* param, + bool useGpu) { TestConfig config; config.biasSize = numfilters; if (useGpu) { @@ -51,11 +58,10 @@ MatrixPtr doOneConvTest(size_t imgSize, size_t output_x, size_t stride, config.layerConfig.set_partial_sum(1); config.layerConfig.set_shared_biases(true); - size_t weightSize = channel* filter_size * filter_size * - config.layerConfig.num_filters() / groups; - config.inputDefs.push_back({INPUT_DATA, "layer_0", - imgSize * imgSize * channel, - weightSize}); + size_t weightSize = channel * filter_size * filter_size * + config.layerConfig.num_filters() / groups; + config.inputDefs.push_back( + {INPUT_DATA, "layer_0", imgSize * imgSize * channel, weightSize}); LayerInputConfig* input = config.layerConfig.add_inputs(); ConvConfig* conv = input->mutable_conv_conf(); conv->set_filter_size(filter_size); @@ -66,7 +72,7 @@ MatrixPtr doOneConvTest(size_t imgSize, size_t output_x, size_t stride, conv->set_stride(stride); conv->set_stride_y(stride); conv->set_groups(groups); - conv->set_filter_channels(channel/groups); + conv->set_filter_channels(channel / groups); conv->set_img_size(imgSize); conv->set_output_x(output_x); @@ -77,8 +83,8 @@ MatrixPtr doOneConvTest(size_t imgSize, size_t output_x, size_t stride, std::vector dataLayers; LayerMap layerMap; vector datas; - initDataLayer(config, &dataLayers, &datas, &layerMap, "conv", - 1, false, useGpu); + initDataLayer( + config, &dataLayers, &datas, &layerMap, "conv", 1, false, useGpu); dataLayers[0]->getOutputValue()->zeroMem(); dataLayers[0]->getOutputValue()->copyFrom(*inputData); @@ -88,106 +94,124 @@ MatrixPtr doOneConvTest(size_t imgSize, size_t output_x, size_t stride, initTestLayer(config, &layerMap, ¶meters, &convLayer); convLayer->getBiasParameter()->zeroMem(); convLayer->getParameters()[0]->zeroMem(); - convLayer->getParameters()[0]->getBuf(PARAMETER_VALUE)->copyFrom(param, - weightSize); + convLayer->getParameters()[0] + ->getBuf(PARAMETER_VALUE) + ->copyFrom(param, weightSize); convLayer->forward(PASS_GC); return convLayer->getOutputValue(); } TEST(Layer, convParaUnified) { - #ifndef PADDLE_ONLY_CPU - MatrixPtr input, resultCpu, resultGpu; - input = Matrix::create(1, 4 * 4, false, false); - float inputData[] = {1, 2, 3, 4, - 5, 6, 7, 8, - 9, 10, 11, 12, - 13, 14, 15, 16}; - float param[] = {1, 2, 3, 4, 5, 6, 7, 8, 9, - 9, 8, 7, 6, 5, 4, 3, 2, 1}; - - input->setData(inputData); - - resultCpu = doOneConvTest(/* imgSize */ 4, - /* 
output_x */ 2, - /* stride */ 1, - /* padding */ 0, - /* filter_size */ 3, - /*channel*/ 1, - /*numfilters*/ 2, - /*groups*/ 1, - input, param, false); - - resultGpu = doOneConvTest(/* imgSize */ 4, - /* output_x */ 2, - /* stride */ 1, - /* padding */ 0, - /* filter_size */ 3, - /*channel*/ 1, - /*numfilters*/ 2, - /*groups*/ 1, - input, param, true); - checkMatrixEqual(resultCpu, resultGpu); - - input = Matrix::create(1, 3 * 3 * 2, false, false); - float inputData2[] = {1, 2, 3, - 4, 5, 6, - 7, 8, 9, - - 10, 11, 12, - 13, 14, 15, - 16, 17, 18}; - float param2[] = {1, 2, 3, 4, 5, 6, 7, 8, - 8, 7, 6, 5, 4, 3, 2, 1}; - - input->setData(inputData2); - - resultCpu = doOneConvTest(/* imgSize */ 3, - /* output_x */ 2, - /* stride */ 1, - /* padding */ 0, - /* filter_size */ 2, - /*channel*/ 2, - /*numfilters*/ 2, - /*groups*/ 1, - input, param2, false); - - resultGpu = doOneConvTest(/* imgSize */ 3, - /* output_x */ 2, - /* stride */ 1, - /* padding */ 0, - /* filter_size */ 2, - /*channel*/ 2, - /*numfilters*/ 2, - /*groups*/ 1, - input, param2, true); - checkMatrixEqual(resultCpu, resultGpu); - - - float param3[] = {1, 2, 3, 4, - 4, 3, 2, 1}; - - resultCpu = doOneConvTest(/* imgSize */ 3, - /* output_x */ 2, - /* stride */ 1, - /* padding */ 0, - /* filter_size */ 2, - /*channel*/ 2, - /*numfilters*/ 2, - /*groups*/ 2, - input, param3, false); - - resultGpu = doOneConvTest(/* imgSize */ 3, - /* output_x */ 2, - /* stride */ 1, - /* padding */ 0, - /* filter_size */ 2, - /*channel*/ 2, - /*numfilters*/ 2, - /*groups*/ 2, - input, param3, true); - checkMatrixEqual(resultCpu, resultGpu); - #endif +#ifndef PADDLE_ONLY_CPU + MatrixPtr input, resultCpu, resultGpu; + input = Matrix::create(1, 4 * 4, false, false); + float inputData[] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16}; + float param[] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 9, 8, 7, 6, 5, 4, 3, 2, 1}; + + input->setData(inputData); + + resultCpu = doOneConvTest(/* imgSize */ 4, + /* output_x */ 2, + /* stride */ 1, + /* padding */ 0, + /* filter_size */ 3, + /*channel*/ 1, + /*numfilters*/ 2, + /*groups*/ 1, + input, + param, + false); + + resultGpu = doOneConvTest(/* imgSize */ 4, + /* output_x */ 2, + /* stride */ 1, + /* padding */ 0, + /* filter_size */ 3, + /*channel*/ 1, + /*numfilters*/ 2, + /*groups*/ 1, + input, + param, + true); + checkMatrixEqual(resultCpu, resultGpu); + + input = Matrix::create(1, 3 * 3 * 2, false, false); + float inputData2[] = {1, + 2, + 3, + 4, + 5, + 6, + 7, + 8, + 9, + + 10, + 11, + 12, + 13, + 14, + 15, + 16, + 17, + 18}; + float param2[] = {1, 2, 3, 4, 5, 6, 7, 8, 8, 7, 6, 5, 4, 3, 2, 1}; + + input->setData(inputData2); + + resultCpu = doOneConvTest(/* imgSize */ 3, + /* output_x */ 2, + /* stride */ 1, + /* padding */ 0, + /* filter_size */ 2, + /*channel*/ 2, + /*numfilters*/ 2, + /*groups*/ 1, + input, + param2, + false); + + resultGpu = doOneConvTest(/* imgSize */ 3, + /* output_x */ 2, + /* stride */ 1, + /* padding */ 0, + /* filter_size */ 2, + /*channel*/ 2, + /*numfilters*/ 2, + /*groups*/ 1, + input, + param2, + true); + checkMatrixEqual(resultCpu, resultGpu); + + float param3[] = {1, 2, 3, 4, 4, 3, 2, 1}; + + resultCpu = doOneConvTest(/* imgSize */ 3, + /* output_x */ 2, + /* stride */ 1, + /* padding */ 0, + /* filter_size */ 2, + /*channel*/ 2, + /*numfilters*/ 2, + /*groups*/ 2, + input, + param3, + false); + + resultGpu = doOneConvTest(/* imgSize */ 3, + /* output_x */ 2, + /* stride */ 1, + /* padding */ 0, + /* filter_size */ 2, + /*channel*/ 2, + /*numfilters*/ 2, + /*groups*/ 2, + 
input, + param3, + true); + checkMatrixEqual(resultCpu, resultGpu); +#endif } int main(int argc, char** argv) { diff --git a/paddle/gserver/tests/test_Evaluator.cpp b/paddle/gserver/tests/test_Evaluator.cpp index 2c20f3a52f..7a930aebcf 100644 --- a/paddle/gserver/tests/test_Evaluator.cpp +++ b/paddle/gserver/tests/test_Evaluator.cpp @@ -15,8 +15,8 @@ limitations under the License. */ #include #include #include "ModelConfig.pb.h" -#include "paddle/trainer/Trainer.h" #include "TestUtil.h" +#include "paddle/trainer/Trainer.h" using namespace paddle; // NOLINT using namespace std; // NOLINT diff --git a/paddle/gserver/tests/test_LayerGrad.cpp b/paddle/gserver/tests/test_LayerGrad.cpp index 7983d9fe64..9f8b197df5 100644 --- a/paddle/gserver/tests/test_LayerGrad.cpp +++ b/paddle/gserver/tests/test_LayerGrad.cpp @@ -17,8 +17,8 @@ limitations under the License. */ #include #include "ModelConfig.pb.h" #include "paddle/gserver/layers/DataLayer.h" -#include "paddle/trainer/Trainer.h" #include "paddle/math/MathUtils.h" +#include "paddle/trainer/Trainer.h" #include "LayerGradUtil.h" #include "TestUtil.h" diff --git a/paddle/gserver/tests/test_MultinomialSampler.cpp b/paddle/gserver/tests/test_MultinomialSampler.cpp index fc164da8ea..eadf40ade0 100644 --- a/paddle/gserver/tests/test_MultinomialSampler.cpp +++ b/paddle/gserver/tests/test_MultinomialSampler.cpp @@ -20,8 +20,8 @@ limitations under the License. */ #undef PADDLE_DISABLE_TIMER #include "paddle/utils/Stat.h" -#include "paddle/utils/Util.h" #include "paddle/gserver/layers/MultinomialSampler.h" +#include "paddle/utils/Util.h" using namespace paddle; // NOLINT using namespace std; // NOLINT diff --git a/paddle/gserver/tests/test_NetworkCompare.cpp b/paddle/gserver/tests/test_NetworkCompare.cpp index ff6b5ab0d0..baa55aa025 100644 --- a/paddle/gserver/tests/test_NetworkCompare.cpp +++ b/paddle/gserver/tests/test_NetworkCompare.cpp @@ -13,14 +13,14 @@ See the License for the specific language governing permissions and limitations under the License. */ #undef PADDLE_DISABLE_TIMER +#include #include -#include #include -#include +#include +#include "TestUtil.h" #include "paddle/trainer/Trainer.h" #include "paddle/utils/Stat.h" -#include "TestUtil.h" using namespace paddle; // NOLINT using namespace std; // NOLINT diff --git a/paddle/gserver/tests/test_ProtoDataProvider.cpp b/paddle/gserver/tests/test_ProtoDataProvider.cpp index d5b8017cd1..d421b6e2f2 100644 --- a/paddle/gserver/tests/test_ProtoDataProvider.cpp +++ b/paddle/gserver/tests/test_ProtoDataProvider.cpp @@ -17,8 +17,8 @@ limitations under the License. */ #include -#include "paddle/utils/Util.h" #include "paddle/gserver/dataproviders/ProtoDataProvider.h" +#include "paddle/utils/Util.h" #include "TestUtil.h" diff --git a/paddle/gserver/tests/test_RecurrentLayer.cpp b/paddle/gserver/tests/test_RecurrentLayer.cpp index 3f26b710e9..cd96ca7c84 100644 --- a/paddle/gserver/tests/test_RecurrentLayer.cpp +++ b/paddle/gserver/tests/test_RecurrentLayer.cpp @@ -13,11 +13,11 @@ See the License for the specific language governing permissions and limitations under the License. 
*/ #include -#include #include +#include +#include "ModelConfig.pb.h" #include "paddle/gserver/layers/DataLayer.h" #include "paddle/gserver/layers/Layer.h" -#include "ModelConfig.pb.h" #include "TestUtil.h" @@ -220,8 +220,8 @@ TEST(Layer, RecurrentLayer) { } #define protected public -#include "paddle/gserver/layers/LstmLayer.h" #include "paddle/gserver/layers/GatedRecurrentLayer.h" +#include "paddle/gserver/layers/LstmLayer.h" template class TestRecurrentLayer { public: diff --git a/paddle/gserver/tests/test_SelectiveFCLayer.cpp b/paddle/gserver/tests/test_SelectiveFCLayer.cpp index c588f69446..4f3a95a535 100644 --- a/paddle/gserver/tests/test_SelectiveFCLayer.cpp +++ b/paddle/gserver/tests/test_SelectiveFCLayer.cpp @@ -12,17 +12,17 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ +#include +#include #include +#include #include #include -#include -#include -#include +#include "ModelConfig.pb.h" #include "paddle/gserver/layers/DataLayer.h" -#include "paddle/gserver/layers/Layer.h" #include "paddle/gserver/layers/FullyConnectedLayer.h" +#include "paddle/gserver/layers/Layer.h" #include "paddle/gserver/layers/SelectiveFullyConnectedLayer.h" -#include "ModelConfig.pb.h" #include "paddle/math/CpuSparseMatrix.h" #include "paddle/trainer/Trainer.h" diff --git a/paddle/gserver/tests/test_WarpCTCLayer.cpp b/paddle/gserver/tests/test_WarpCTCLayer.cpp index e526a27906..700425412c 100644 --- a/paddle/gserver/tests/test_WarpCTCLayer.cpp +++ b/paddle/gserver/tests/test_WarpCTCLayer.cpp @@ -14,11 +14,11 @@ limitations under the License. */ #include #include -#include "paddle/gserver/layers/Layer.h" -#include "paddle/gserver/layers/DataLayer.h" +#include "ModelConfig.pb.h" #include "paddle/gserver/layers/CTCLayer.h" +#include "paddle/gserver/layers/DataLayer.h" +#include "paddle/gserver/layers/Layer.h" #include "paddle/gserver/layers/WarpCTCLayer.h" -#include "ModelConfig.pb.h" #include "TestUtil.h" diff --git a/paddle/math/Allocator.h b/paddle/math/Allocator.h index 4d0a1506be..666a8b8368 100644 --- a/paddle/math/Allocator.h +++ b/paddle/math/Allocator.h @@ -14,8 +14,8 @@ limitations under the License. */ #pragma once -#include #include +#include #include "hl_gpu.h" #include "paddle/utils/Logging.h" diff --git a/paddle/math/BaseMatrix.h b/paddle/math/BaseMatrix.h index 368557bb26..2933c20fba 100644 --- a/paddle/math/BaseMatrix.h +++ b/paddle/math/BaseMatrix.h @@ -13,10 +13,10 @@ See the License for the specific language governing permissions and limitations under the License. */ #pragma once -#include #include -#include "paddle/utils/TypeDefs.h" +#include #include "TensorExpression.h" +#include "paddle/utils/TypeDefs.h" namespace paddle { diff --git a/paddle/math/CpuSparseMatrix.cpp b/paddle/math/CpuSparseMatrix.cpp index 324c7ec0ca..b5d5b6ef61 100644 --- a/paddle/math/CpuSparseMatrix.cpp +++ b/paddle/math/CpuSparseMatrix.cpp @@ -12,12 +12,12 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 
*/ -#include "hl_gpu.h" #include "CpuSparseMatrix.h" #include "SparseMatrix.h" +#include "float.h" +#include "hl_gpu.h" #include "paddle/math/MathUtils.h" #include "paddle/utils/Util.h" -#include "float.h" namespace paddle { @@ -656,9 +656,9 @@ void CpuSparseMatrix::trimFrom(const CpuSparseMatrix& src) { if (format_ == SPARSE_CSR) { int* srcCols = src.getCols(); size_t numLessWidth = - std::count_if(srcCols, - srcCols + src.getElementCnt(), - [this](size_t n) { return n < this->width_; }); + std::count_if(srcCols, srcCols + src.getElementCnt(), [this](size_t n) { + return n < this->width_; + }); resize(height_, width_, numLessWidth, valueType_, format_); rows_[0] = 0; size_t index = 0; diff --git a/paddle/math/MathFunctions.cpp b/paddle/math/MathFunctions.cpp index 037525b402..d7aa118487 100644 --- a/paddle/math/MathFunctions.cpp +++ b/paddle/math/MathFunctions.cpp @@ -13,8 +13,8 @@ See the License for the specific language governing permissions and limitations under the License. */ #include "MathFunctions.h" -#include "hl_matrix_ops.cuh" #include "hl_matrix_apply.cuh" +#include "hl_matrix_ops.cuh" namespace paddle { diff --git a/paddle/math/MathUtils.cpp b/paddle/math/MathUtils.cpp index 1fb7655c5a..5bbc3e4e37 100644 --- a/paddle/math/MathUtils.cpp +++ b/paddle/math/MathUtils.cpp @@ -14,8 +14,8 @@ limitations under the License. */ #include "MathUtils.h" #include -#include "paddle/utils/Logging.h" #include "Vector.h" +#include "paddle/utils/Logging.h" namespace paddle { diff --git a/paddle/math/Matrix.h b/paddle/math/Matrix.h index 395143a4b1..4342ca52a3 100644 --- a/paddle/math/Matrix.h +++ b/paddle/math/Matrix.h @@ -14,20 +14,20 @@ limitations under the License. */ #pragma once +#include #include #include -#include #include "paddle/utils/Logging.h" #include "paddle/utils/ThreadLocal.h" #include +#include "BaseMatrix.h" #include "MemoryHandle.h" -#include "paddle/utils/TypeDefs.h" #include "Vector.h" #include "paddle/utils/ThreadLocal.h" -#include "BaseMatrix.h" +#include "paddle/utils/TypeDefs.h" namespace paddle { diff --git a/paddle/math/MatrixBitCode.cpp b/paddle/math/MatrixBitCode.cpp index 6390d4b6a5..cea912d3ca 100644 --- a/paddle/math/MatrixBitCode.cpp +++ b/paddle/math/MatrixBitCode.cpp @@ -12,10 +12,10 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ -#include "paddle/utils/Logging.h" -#include "paddle/utils/Util.h" #include "Matrix.h" #include "hl_gpu.h" +#include "paddle/utils/Logging.h" +#include "paddle/utils/Util.h" namespace paddle { diff --git a/paddle/math/MemoryHandle.cpp b/paddle/math/MemoryHandle.cpp index 4c4a827b23..84afb5944c 100644 --- a/paddle/math/MemoryHandle.cpp +++ b/paddle/math/MemoryHandle.cpp @@ -12,8 +12,8 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ -#include #include "MemoryHandle.h" +#include #include "Storage.h" namespace paddle { diff --git a/paddle/math/PoolAllocator.h b/paddle/math/PoolAllocator.h index 1544cb2cfc..c06efa9ac7 100644 --- a/paddle/math/PoolAllocator.h +++ b/paddle/math/PoolAllocator.h @@ -14,11 +14,11 @@ limitations under the License. 
*/ #pragma once +#include #include #include -#include #include -#include +#include #include "Allocator.h" namespace paddle { diff --git a/paddle/math/SparseMatrix.cpp b/paddle/math/SparseMatrix.cpp index d2779cc9f5..9154503c21 100644 --- a/paddle/math/SparseMatrix.cpp +++ b/paddle/math/SparseMatrix.cpp @@ -12,13 +12,13 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ +#include "SparseMatrix.h" #include +#include #include #include "hl_gpu.h" -#include "SparseMatrix.h" -#include "paddle/utils/Util.h" #include "hl_top_k.h" -#include +#include "paddle/utils/Util.h" namespace paddle { @@ -537,11 +537,9 @@ void GpuSparseMatrix::transpose(MatrixPtr matTrans, bool memAlloc) { dataVec.emplace_back( rows.getData()[i], cols_full.getData()[i], value.getData()[i]); } - std::sort(dataVec.begin(), - dataVec.end(), - [](Element a, Element b) { - return a.row < b.row || (a.row == b.row && a.col < b.col); - }); + std::sort(dataVec.begin(), dataVec.end(), [](Element a, Element b) { + return a.row < b.row || (a.row == b.row && a.col < b.col); + }); /*get sorted data, row index, and col index, put them in the right place*/ cols.resize(height_ + 1); diff --git a/paddle/math/SparseMatrix.h b/paddle/math/SparseMatrix.h index f8d9ffc29f..bd96a3301d 100644 --- a/paddle/math/SparseMatrix.h +++ b/paddle/math/SparseMatrix.h @@ -14,8 +14,8 @@ limitations under the License. */ #pragma once #include -#include "Matrix.h" #include "CpuSparseMatrix.h" +#include "Matrix.h" namespace paddle { diff --git a/paddle/math/SparseRowMatrix.h b/paddle/math/SparseRowMatrix.h index 2fee1b39fe..badb4b9c1c 100644 --- a/paddle/math/SparseRowMatrix.h +++ b/paddle/math/SparseRowMatrix.h @@ -14,10 +14,10 @@ limitations under the License. */ #pragma once -#include #include -#include "paddle/utils/CommandLineParser.h" +#include #include "Matrix.h" +#include "paddle/utils/CommandLineParser.h" #include "paddle/utils/Util.h" P_DECLARE_bool(allow_inefficient_sparse_update); diff --git a/paddle/math/Storage.cpp b/paddle/math/Storage.cpp index 0170b4efb8..f9a2c12cd5 100644 --- a/paddle/math/Storage.cpp +++ b/paddle/math/Storage.cpp @@ -12,9 +12,9 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ -#include "paddle/utils/Util.h" -#include "Allocator.h" #include "Storage.h" +#include "Allocator.h" +#include "paddle/utils/Util.h" P_DEFINE_int32(pool_limit_size, 536870912, diff --git a/paddle/math/Storage.h b/paddle/math/Storage.h index 3658320182..06a66b5f14 100644 --- a/paddle/math/Storage.h +++ b/paddle/math/Storage.h @@ -16,8 +16,8 @@ limitations under the License. */ #include #include -#include "paddle/utils/Locks.h" #include "PoolAllocator.h" +#include "paddle/utils/Locks.h" namespace paddle { diff --git a/paddle/math/TensorEvaluate.h b/paddle/math/TensorEvaluate.h index 346ed7ab13..9de2099b85 100644 --- a/paddle/math/TensorEvaluate.h +++ b/paddle/math/TensorEvaluate.h @@ -15,8 +15,8 @@ limitations under the License. 
*/ #pragma once #include -#include "paddle/utils/Logging.h" #include "hl_base.h" +#include "paddle/utils/Logging.h" namespace paddle { diff --git a/paddle/math/TensorExpression.h b/paddle/math/TensorExpression.h index 7f28ad83bb..9bd789e8c5 100644 --- a/paddle/math/TensorExpression.h +++ b/paddle/math/TensorExpression.h @@ -13,11 +13,11 @@ See the License for the specific language governing permissions and limitations under the License. */ #pragma once -#include #include -#include "paddle/utils/TypeDefs.h" -#include "paddle/utils/Logging.h" +#include #include "hl_tensor_ops.h" +#include "paddle/utils/Logging.h" +#include "paddle/utils/TypeDefs.h" namespace paddle { diff --git a/paddle/math/TrainingAlgorithmOp.h b/paddle/math/TrainingAlgorithmOp.h index 2dc56f69e5..881a8d72d8 100644 --- a/paddle/math/TrainingAlgorithmOp.h +++ b/paddle/math/TrainingAlgorithmOp.h @@ -14,8 +14,8 @@ limitations under the License. */ #pragma once -#include "paddle/utils/Logging.h" #include "BaseMatrix.h" +#include "paddle/utils/Logging.h" namespace paddle { diff --git a/paddle/math/Vector.cpp b/paddle/math/Vector.cpp index 484f4c9252..eaa1cdce30 100644 --- a/paddle/math/Vector.cpp +++ b/paddle/math/Vector.cpp @@ -12,17 +12,17 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ -#include "paddle/utils/Util.h" #include "Vector.h" +#include "paddle/utils/Util.h" #include -#include "paddle/utils/Logging.h" -#include "paddle/utils/ThreadLocal.h" -#include "paddle/utils/Thread.h" -#include "paddle/utils/Flags.h" #include "Matrix.h" #include "hl_gpu.h" #include "hl_table_apply.h" +#include "paddle/utils/Flags.h" +#include "paddle/utils/Logging.h" +#include "paddle/utils/Thread.h" +#include "paddle/utils/ThreadLocal.h" namespace paddle { @@ -754,8 +754,7 @@ void ParallelCpuVectorT::exec(SyncThreadPool::JobFunc func) { } template -CpuGpuVectorT::CpuGpuVectorT(size_t size, bool useGpu) - : sync_(nullptr) { +CpuGpuVectorT::CpuGpuVectorT(size_t size, bool useGpu) : sync_(nullptr) { if (!useGpu) { cpuVectorT_ = std::make_shared>(size); } else { diff --git a/paddle/math/Vector.h b/paddle/math/Vector.h index 535580ac37..8a24103bd4 100644 --- a/paddle/math/Vector.h +++ b/paddle/math/Vector.h @@ -14,15 +14,15 @@ limitations under the License. */ #pragma once -#include #include +#include #include -#include "MemoryHandle.h" -#include "paddle/utils/TypeDefs.h" #include "BaseMatrix.h" +#include "MemoryHandle.h" #include "paddle/utils/Thread.h" +#include "paddle/utils/TypeDefs.h" namespace paddle { diff --git a/paddle/math/tests/OriginalOptimizerApi.h b/paddle/math/tests/OriginalOptimizerApi.h index ddcdd6bb51..0188372771 100644 --- a/paddle/math/tests/OriginalOptimizerApi.h +++ b/paddle/math/tests/OriginalOptimizerApi.h @@ -14,8 +14,8 @@ limitations under the License. */ #pragma once -#include "paddle/utils/GlobalConstants.h" #include "paddle/math/Vector.h" +#include "paddle/utils/GlobalConstants.h" using namespace paddle; // NOLINT diff --git a/paddle/math/tests/TestUtils.h b/paddle/math/tests/TestUtils.h index 5f9fab7245..c302096188 100644 --- a/paddle/math/tests/TestUtils.h +++ b/paddle/math/tests/TestUtils.h @@ -40,9 +40,9 @@ limitations under the License. 
*/ */ #include +#include "TensorCheck.h" #include "paddle/math/Matrix.h" #include "paddle/math/SparseMatrix.h" -#include "TensorCheck.h" namespace autotest { diff --git a/paddle/math/tests/test_Allocator.cpp b/paddle/math/tests/test_Allocator.cpp index 440fcda0fe..33e0952efe 100644 --- a/paddle/math/tests/test_Allocator.cpp +++ b/paddle/math/tests/test_Allocator.cpp @@ -13,11 +13,11 @@ See the License for the specific language governing permissions and limitations under the License. */ #include -#include "paddle/utils/Util.h" #include "paddle/utils/Logging.h" +#include "paddle/utils/Util.h" #define private public -#include "paddle/math/MemoryHandle.h" #include "paddle/math/Allocator.h" +#include "paddle/math/MemoryHandle.h" #include "paddle/math/PoolAllocator.h" using namespace paddle; // NOLINT diff --git a/paddle/math/tests/test_BaseMatrix.cpp b/paddle/math/tests/test_BaseMatrix.cpp index a4683918ca..cc7c1e7eb2 100644 --- a/paddle/math/tests/test_BaseMatrix.cpp +++ b/paddle/math/tests/test_BaseMatrix.cpp @@ -20,8 +20,8 @@ limitations under the License. */ */ #include -#include "paddle/math/BaseMatrix.h" #include "TestUtils.h" +#include "paddle/math/BaseMatrix.h" using paddle::BaseMatrix; using paddle::Matrix; diff --git a/paddle/math/tests/test_CpuGpuVector.cpp b/paddle/math/tests/test_CpuGpuVector.cpp index c671735875..624fa20ca5 100644 --- a/paddle/math/tests/test_CpuGpuVector.cpp +++ b/paddle/math/tests/test_CpuGpuVector.cpp @@ -14,10 +14,10 @@ limitations under the License. */ #ifndef PADDLE_ONLY_CPU -#include "paddle/utils/Util.h" +#include #include "paddle/math/Vector.h" +#include "paddle/utils/Util.h" #include "test_matrixUtil.h" -#include using namespace paddle; // NOLINT diff --git a/paddle/math/tests/test_ExecViaCpu.cpp b/paddle/math/tests/test_ExecViaCpu.cpp index b328ebf554..27216ddb58 100644 --- a/paddle/math/tests/test_ExecViaCpu.cpp +++ b/paddle/math/tests/test_ExecViaCpu.cpp @@ -12,10 +12,10 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ -#include #include -#include +#include #include +#include #include "paddle/math/SparseMatrix.h" using namespace paddle; // NOLINT diff --git a/paddle/math/tests/test_GpuProfiler.cpp b/paddle/math/tests/test_GpuProfiler.cpp index e5fd6f4523..d490078d90 100644 --- a/paddle/math/tests/test_GpuProfiler.cpp +++ b/paddle/math/tests/test_GpuProfiler.cpp @@ -14,12 +14,12 @@ limitations under the License. 
*/ #ifndef PADDLE_ONLY_CPU -#include "paddle/utils/Util.h" -#include "paddle/math/Matrix.h" -#include "paddle/math/SparseMatrix.h" #include #include "paddle/gserver/tests/TestUtil.h" +#include "paddle/math/Matrix.h" +#include "paddle/math/SparseMatrix.h" #include "paddle/utils/Stat.h" +#include "paddle/utils/Util.h" using namespace paddle; // NOLINT using namespace std; // NOLINT @@ -52,7 +52,9 @@ void MatrixCheckErr(const Matrix& matrix1, const Matrix& matrix2) { EXPECT_EQ(count, 0) << "There are " << count << " different element."; } -void testBilinearFwdBwd(int numSamples, int imgSizeH, int imgSizeW, +void testBilinearFwdBwd(int numSamples, + int imgSizeH, + int imgSizeW, int channels) { int inWidth = imgSizeH * imgSizeW * channels; int outWidth = 2 * imgSizeH * 2 * imgSizeW * channels; @@ -73,10 +75,22 @@ void testBilinearFwdBwd(int numSamples, int imgSizeH, int imgSizeW, { // nvprof: GPU Proflier REGISTER_GPU_PROFILER("testBilinearFwdBwd"); - target->bilinearForward(*input, imgSizeH, imgSizeW, - 2 * imgSizeH, 2 * imgSizeW, channels, ratioH, ratioW); - targetGpu->bilinearForward(*inputGpu, imgSizeH, imgSizeW, - 2 * imgSizeH, 2 * imgSizeW, channels, ratioH, ratioW); + target->bilinearForward(*input, + imgSizeH, + imgSizeW, + 2 * imgSizeH, + 2 * imgSizeW, + channels, + ratioH, + ratioW); + targetGpu->bilinearForward(*inputGpu, + imgSizeH, + imgSizeW, + 2 * imgSizeH, + 2 * imgSizeW, + channels, + ratioH, + ratioW); } // check @@ -88,8 +102,8 @@ void testBilinearFwdBwd(int numSamples, int imgSizeH, int imgSizeW, MatrixPtr inputGpuGrad = GpuMatrix::create(numSamples, inWidth, false, true); MatrixPtr targetGrad = CpuMatrix::create(numSamples, outWidth, false, false); - MatrixPtr targetGpuGrad = GpuMatrix::create(numSamples, outWidth, false, - true); + MatrixPtr targetGpuGrad = + GpuMatrix::create(numSamples, outWidth, false, true); MatrixPtr targetCheckGrad = CpuMatrix::create(numSamples, inWidth, false, false); @@ -98,10 +112,22 @@ void testBilinearFwdBwd(int numSamples, int imgSizeH, int imgSizeW, inputGpuGrad->copyFrom(*inputGrad); targetGpuGrad->copyFrom(*targetGrad); - inputGrad->bilinearBackward(*targetGrad, 2 * imgSizeH, 2 * imgSizeW, - imgSizeH, imgSizeW, channels, ratioH, ratioW); - inputGpuGrad->bilinearBackward(*targetGpuGrad, 2 * imgSizeH, 2 * imgSizeW, - imgSizeH, imgSizeW, channels, ratioH, ratioW); + inputGrad->bilinearBackward(*targetGrad, + 2 * imgSizeH, + 2 * imgSizeW, + imgSizeH, + imgSizeW, + channels, + ratioH, + ratioW); + inputGpuGrad->bilinearBackward(*targetGpuGrad, + 2 * imgSizeH, + 2 * imgSizeW, + imgSizeH, + imgSizeW, + channels, + ratioH, + ratioW); // check targetCheckGrad->copyFrom(*inputGpuGrad); @@ -116,8 +142,9 @@ TEST(Profiler, testBilinearFwdBwd) { // nvprof: GPU Proflier REGISTER_GPU_PROFILER("testBilinearFwdBwd"); // Paddle built-in timer - REGISTER_TIMER_INFO("testBilinearFwdBwd", - "numSamples = 10, channels = 16, imgSizeX = 64, imgSizeY = 64"); + REGISTER_TIMER_INFO( + "testBilinearFwdBwd", + "numSamples = 10, channels = 16, imgSizeX = 64, imgSizeY = 64"); testBilinearFwdBwd(numSamples, imgSize, imgSize, channels); } globalStat.printAllStatus(); @@ -128,8 +155,9 @@ int main(int argc, char** argv) { initMain(argc, argv); // nvprof: GPU Proflier - REGISTER_GPU_PROFILER("RecursiveProfilingTest", - "numSamples = 10, channels = 16, imgSizeX = 64, imgSizeY = 64"); + REGISTER_GPU_PROFILER( + "RecursiveProfilingTest", + "numSamples = 10, channels = 16, imgSizeX = 64, imgSizeY = 64"); return RUN_ALL_TESTS(); } diff --git 
a/paddle/math/tests/test_SIMDFunctions.cpp b/paddle/math/tests/test_SIMDFunctions.cpp index 2c54121d99..f62843310d 100644 --- a/paddle/math/tests/test_SIMDFunctions.cpp +++ b/paddle/math/tests/test_SIMDFunctions.cpp @@ -17,10 +17,10 @@ limitations under the License. */ #include -#include -#include #include +#include #include +#include #include #include diff --git a/paddle/math/tests/test_TrainingAlgorithm.cpp b/paddle/math/tests/test_TrainingAlgorithm.cpp index 93a930cc2f..1bf6a0cc43 100644 --- a/paddle/math/tests/test_TrainingAlgorithm.cpp +++ b/paddle/math/tests/test_TrainingAlgorithm.cpp @@ -13,11 +13,11 @@ See the License for the specific language governing permissions and limitations under the License. */ #include -#include "paddle/utils/Util.h" -#include "paddle/math/TrainingAlgorithmOp.h" #include "OriginalOptimizerApi.h" -#include "TensorCheck.h" #include "PerfUtils.h" +#include "TensorCheck.h" +#include "paddle/math/TrainingAlgorithmOp.h" +#include "paddle/utils/Util.h" using namespace paddle; // NOLINT diff --git a/paddle/math/tests/test_batchTranspose.cpp b/paddle/math/tests/test_batchTranspose.cpp index 88631c62b8..9925e24dc1 100644 --- a/paddle/math/tests/test_batchTranspose.cpp +++ b/paddle/math/tests/test_batchTranspose.cpp @@ -12,8 +12,8 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ -#include "test_matrixUtil.h" #include "hl_batch_transpose.h" +#include "test_matrixUtil.h" using namespace paddle; // NOLINT diff --git a/paddle/math/tests/test_matrixCompare.cpp b/paddle/math/tests/test_matrixCompare.cpp index 713792d82b..62de5b25e4 100644 --- a/paddle/math/tests/test_matrixCompare.cpp +++ b/paddle/math/tests/test_matrixCompare.cpp @@ -16,13 +16,13 @@ limitations under the License. */ /// This unittest checks GpuMatrix/CpuMatrix get same result, so disable when /// only cpu version. -#include "paddle/utils/Util.h" -#include "paddle/math/Matrix.h" -#include "paddle/math/SparseMatrix.h" #include +#include "TensorCheck.h" #include "paddle/gserver/tests/TestUtil.h" +#include "paddle/math/Matrix.h" +#include "paddle/math/SparseMatrix.h" #include "paddle/utils/Stat.h" -#include "TensorCheck.h" +#include "paddle/utils/Util.h" using namespace paddle; // NOLINT using namespace std; // NOLINT diff --git a/paddle/math/tests/test_perturbation.cpp b/paddle/math/tests/test_perturbation.cpp index eaf4dfea66..60ebae0153 100644 --- a/paddle/math/tests/test_perturbation.cpp +++ b/paddle/math/tests/test_perturbation.cpp @@ -14,10 +14,10 @@ limitations under the License. */ #ifndef PADDLE_ONLY_CPU -#include +#include #include +#include #include -#include #include "hl_cuda.h" #include "hl_perturbation_util.cuh" diff --git a/paddle/math/tests/test_sparseMatrixCompare.cpp b/paddle/math/tests/test_sparseMatrixCompare.cpp index eff2c502bb..6f6de238ba 100644 --- a/paddle/math/tests/test_sparseMatrixCompare.cpp +++ b/paddle/math/tests/test_sparseMatrixCompare.cpp @@ -17,10 +17,10 @@ limitations under the License. */ // so disable when /// only cpu version. 
-#include "paddle/utils/Util.h" +#include #include "paddle/math/Matrix.h" +#include "paddle/utils/Util.h" #include "test_matrixUtil.h" -#include using namespace paddle; // NOLINT using namespace std; // NOLINT diff --git a/paddle/parameter/Argument.cpp b/paddle/parameter/Argument.cpp index b632a11bbd..e91daa3717 100644 --- a/paddle/parameter/Argument.cpp +++ b/paddle/parameter/Argument.cpp @@ -551,11 +551,10 @@ void Argument::getSeqInfo(std::vector* seqInfo) const { } seqInfo->push_back(info); } - std::sort(seqInfo->begin(), - seqInfo->end(), - [](const SeqInfo& a, const SeqInfo& b) { - return a.topLevelLength > b.topLevelLength; - }); + std::sort( + seqInfo->begin(), seqInfo->end(), [](const SeqInfo& a, const SeqInfo& b) { + return a.topLevelLength > b.topLevelLength; + }); } void Argument::checkSubset() const { diff --git a/paddle/parameter/Argument.h b/paddle/parameter/Argument.h index 69d57a28c0..afd2de0202 100644 --- a/paddle/parameter/Argument.h +++ b/paddle/parameter/Argument.h @@ -18,9 +18,9 @@ limitations under the License. */ #include "paddle/math/Matrix.h" #include "paddle/math/Vector.h" +#include "paddle/parameter/Parameter.h" #include "paddle/utils/Locks.h" #include "paddle/utils/Util.h" -#include "paddle/parameter/Parameter.h" namespace paddle { diff --git a/paddle/parameter/FirstOrderOptimizer.cpp b/paddle/parameter/FirstOrderOptimizer.cpp index 17268d3715..630f15c8cf 100644 --- a/paddle/parameter/FirstOrderOptimizer.cpp +++ b/paddle/parameter/FirstOrderOptimizer.cpp @@ -12,10 +12,10 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ -#include "paddle/utils/Util.h" -#include "paddle/utils/Flags.h" -#include "paddle/math/TrainingAlgorithmOp.h" #include "FirstOrderOptimizer.h" +#include "paddle/math/TrainingAlgorithmOp.h" +#include "paddle/utils/Flags.h" +#include "paddle/utils/Util.h" #include diff --git a/paddle/parameter/ParallelParameter.cpp b/paddle/parameter/ParallelParameter.cpp index b3182306a4..cea77e5b17 100644 --- a/paddle/parameter/ParallelParameter.cpp +++ b/paddle/parameter/ParallelParameter.cpp @@ -12,8 +12,8 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ -#include "paddle/utils/Logging.h" #include +#include "paddle/utils/Logging.h" #include "ParallelParameter.h" diff --git a/paddle/parameter/ParallelParameter.h b/paddle/parameter/ParallelParameter.h index b0fe82d3c4..417e386dc7 100644 --- a/paddle/parameter/ParallelParameter.h +++ b/paddle/parameter/ParallelParameter.h @@ -16,19 +16,19 @@ limitations under the License. */ #include +#include +#include #include #include #include -#include -#include #include "hl_gpu.h" -#include "paddle/utils/Flags.h" -#include "paddle/utils/Locks.h" +#include "paddle/math/Vector.h" #include "paddle/parameter/Parameter.h" #include "paddle/parameter/ParameterUpdateFunctions.h" +#include "paddle/utils/Flags.h" +#include "paddle/utils/Locks.h" #include "paddle/utils/TypeDefs.h" -#include "paddle/math/Vector.h" #include "ParameterConfig.pb.h" diff --git a/paddle/parameter/Parameter.cpp b/paddle/parameter/Parameter.cpp index 3b06650e0c..986ae1539b 100644 --- a/paddle/parameter/Parameter.cpp +++ b/paddle/parameter/Parameter.cpp @@ -12,19 +12,19 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the License for the specific language governing permissions and limitations under the License. */ +#include "Parameter.h" #include -#include "paddle/math/MathUtils.h" #include "AverageOptimizer.h" #include "FirstOrderOptimizer.h" -#include "Parameter.h" -#include "paddle/utils/Logging.h" #include "OptimizerFunctions.h" #include "OptimizerWithRegularizer.h" #include "ParameterUpdateFunctions.h" -#include "paddle/math/SparseRowMatrix.h" -#include "paddle/math/CpuSparseMatrix.h" #include "hl_gpu.h" +#include "paddle/math/CpuSparseMatrix.h" +#include "paddle/math/MathUtils.h" +#include "paddle/math/SparseRowMatrix.h" #include "paddle/utils/CommandLineParser.h" +#include "paddle/utils/Logging.h" P_DEFINE_int32(enable_grad_share, (100 * 1024 * 1024), diff --git a/paddle/parameter/Parameter.h b/paddle/parameter/Parameter.h index 6b0600517a..532c6770e5 100644 --- a/paddle/parameter/Parameter.h +++ b/paddle/parameter/Parameter.h @@ -23,14 +23,14 @@ limitations under the License. */ #include "ParameterConfig.pb.h" #include "TrainerConfig.pb.h" +#include "ParameterUpdaterHook.h" +#include "paddle/math/Matrix.h" +#include "paddle/math/Vector.h" +#include "paddle/utils/GlobalConstants.h" #include "paddle/utils/Locks.h" +#include "paddle/utils/ThreadLocal.h" #include "paddle/utils/TypeDefs.h" -#include "paddle/math/Vector.h" -#include "paddle/math/Matrix.h" #include "paddle/utils/Util.h" -#include "paddle/utils/ThreadLocal.h" -#include "ParameterUpdaterHook.h" -#include "paddle/utils/GlobalConstants.h" namespace paddle { diff --git a/paddle/parameter/ParameterUpdateFunctions.h b/paddle/parameter/ParameterUpdateFunctions.h index 7374843d80..2d277e47e7 100644 --- a/paddle/parameter/ParameterUpdateFunctions.h +++ b/paddle/parameter/ParameterUpdateFunctions.h @@ -14,8 +14,8 @@ limitations under the License. */ #pragma once -#include "paddle/utils/TypeDefs.h" #include "paddle/math/Vector.h" +#include "paddle/utils/TypeDefs.h" namespace paddle { diff --git a/paddle/parameter/ParameterUpdaterBase.cpp b/paddle/parameter/ParameterUpdaterBase.cpp index b938270ce1..49e2ae2b39 100644 --- a/paddle/parameter/ParameterUpdaterBase.cpp +++ b/paddle/parameter/ParameterUpdaterBase.cpp @@ -12,10 +12,10 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ -#include -#include "paddle/utils/Logging.h" #include "ParameterUpdaterBase.h" +#include #include "hl_gpu.h" +#include "paddle/utils/Logging.h" namespace paddle { diff --git a/paddle/parameter/ParameterUpdaterHook.cpp b/paddle/parameter/ParameterUpdaterHook.cpp index 466560c437..f826e8448c 100644 --- a/paddle/parameter/ParameterUpdaterHook.cpp +++ b/paddle/parameter/ParameterUpdaterHook.cpp @@ -14,16 +14,16 @@ limitations under the License. */ #include "ParameterUpdaterHook.h" +#include #include -#include #include -#include #include +#include #include "paddle/math/Vector.h" #include "paddle/parameter/Parameter.h" -#include "paddle/utils/Util.h" #include "paddle/utils/Flags.h" +#include "paddle/utils/Util.h" namespace paddle { @@ -156,7 +156,8 @@ private: static WeakKVCache, IParameterUpdaterHook, - StringIntPairHasher> g_hookCache_; + StringIntPairHasher> + g_hookCache_; /** * ParameterUpdaterHook actually factory method. 
diff --git a/paddle/parameter/Regularizer.cpp b/paddle/parameter/Regularizer.cpp index 4420ee0031..8511900150 100644 --- a/paddle/parameter/Regularizer.cpp +++ b/paddle/parameter/Regularizer.cpp @@ -12,9 +12,9 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ -#include "paddle/utils/Util.h" -#include "paddle/utils/Flags.h" #include "Regularizer.h" +#include "paddle/utils/Flags.h" +#include "paddle/utils/Util.h" namespace paddle { diff --git a/paddle/parameter/Weight.cpp b/paddle/parameter/Weight.cpp index f366a2b53f..3738a58d7f 100644 --- a/paddle/parameter/Weight.cpp +++ b/paddle/parameter/Weight.cpp @@ -12,8 +12,8 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ -#include "paddle/utils/Logging.h" #include "Weight.h" +#include "paddle/utils/Logging.h" namespace paddle { diff --git a/paddle/parameter/tests/test_common.cpp b/paddle/parameter/tests/test_common.cpp index 4e4d0ccfa2..aa57a63469 100644 --- a/paddle/parameter/tests/test_common.cpp +++ b/paddle/parameter/tests/test_common.cpp @@ -12,12 +12,12 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ -#include #include +#include #include -#include #include +#include #include #include diff --git a/paddle/pserver/BaseClient.cpp b/paddle/pserver/BaseClient.cpp index 62fafc1891..a43def98c5 100644 --- a/paddle/pserver/BaseClient.cpp +++ b/paddle/pserver/BaseClient.cpp @@ -12,11 +12,11 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ -#include +#include "BaseClient.h" #include -#include "paddle/utils/Stat.h" +#include #include "paddle/utils/CommandLineParser.h" -#include "BaseClient.h" +#include "paddle/utils/Stat.h" P_DECLARE_string(pservers); diff --git a/paddle/pserver/BaseClient.h b/paddle/pserver/BaseClient.h index 5924f80684..262afafbe2 100644 --- a/paddle/pserver/BaseClient.h +++ b/paddle/pserver/BaseClient.h @@ -14,11 +14,11 @@ limitations under the License. */ #pragma once -#include "paddle/pserver/ProtoServer.h" +#include "ParameterService.pb.h" #include "paddle/math/Matrix.h" +#include "paddle/pserver/ProtoServer.h" #include "paddle/utils/Queue.h" #include "paddle/utils/TypeDefs.h" -#include "ParameterService.pb.h" namespace paddle { diff --git a/paddle/pserver/LightNetwork.cpp b/paddle/pserver/LightNetwork.cpp index 9a398d4f45..329dfb0fb3 100644 --- a/paddle/pserver/LightNetwork.cpp +++ b/paddle/pserver/LightNetwork.cpp @@ -12,23 +12,23 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 
*/ -#include -#include +#include #include #include #include -#include +#include +#include #include -#include #include #include +#include #include #include "LightNetwork.h" -#include "paddle/utils/Util.h" -#include "paddle/utils/StringUtil.h" #include "RDMANetwork.h" +#include "paddle/utils/StringUtil.h" +#include "paddle/utils/Util.h" /// quick ack can reduce the latency of small message P_DEFINE_bool(small_messages, diff --git a/paddle/pserver/LightNetwork.h b/paddle/pserver/LightNetwork.h index 7aff007a27..c4a06deb94 100644 --- a/paddle/pserver/LightNetwork.h +++ b/paddle/pserver/LightNetwork.h @@ -16,10 +16,10 @@ limitations under the License. */ #include "SocketChannel.h" +#include #include #include #include -#include #include "paddle/utils/Thread.h" diff --git a/paddle/pserver/ParameterClient2.cpp b/paddle/pserver/ParameterClient2.cpp index 31418822b3..84d965a66a 100644 --- a/paddle/pserver/ParameterClient2.cpp +++ b/paddle/pserver/ParameterClient2.cpp @@ -15,10 +15,10 @@ limitations under the License. */ #include #include "ParameterClient2.h" -#include "paddle/utils/StringUtil.h" +#include "paddle/math/SparseRowMatrix.h" #include "paddle/utils/Flags.h" #include "paddle/utils/Stat.h" -#include "paddle/math/SparseRowMatrix.h" +#include "paddle/utils/StringUtil.h" P_DEFINE_string(pservers, "127.0.0.1", "Comma separated addresses of pservers"); P_DEFINE_int32(parallel_thread_num, 1, "Thread number for parameter send"); diff --git a/paddle/pserver/ParameterClient2.h b/paddle/pserver/ParameterClient2.h index 0f180722e3..5255394949 100644 --- a/paddle/pserver/ParameterClient2.h +++ b/paddle/pserver/ParameterClient2.h @@ -16,23 +16,23 @@ limitations under the License. */ #include #include -#include #include +#include -#include "paddle/utils/Locks.h" #include "paddle/math/Matrix.h" +#include "paddle/math/Vector.h" #include "paddle/parameter/Parameter.h" +#include "paddle/pserver/BaseClient.h" +#include "paddle/utils/Flags.h" +#include "paddle/utils/Locks.h" #include "paddle/utils/Queue.h" #include "paddle/utils/TypeDefs.h" #include "paddle/utils/Util.h" -#include "paddle/math/Vector.h" -#include "paddle/utils/Flags.h" -#include "paddle/pserver/BaseClient.h" #include "ParameterService.pb.h" -#include "SparseParameterDistribution.h" #include "ProtoServer.h" +#include "SparseParameterDistribution.h" P_DECLARE_int32(parallel_thread_num); diff --git a/paddle/pserver/ParameterServer2.cpp b/paddle/pserver/ParameterServer2.cpp index ac70efc64f..2cb4c93535 100644 --- a/paddle/pserver/ParameterServer2.cpp +++ b/paddle/pserver/ParameterServer2.cpp @@ -21,14 +21,14 @@ limitations under the License. 
*/ #include "paddle/parameter/AverageOptimizer.h" #include "paddle/parameter/FirstOrderOptimizer.h" -#include "paddle/utils/Flags.h" #include "paddle/parameter/OptimizerFunctions.h" #include "paddle/parameter/OptimizerWithRegularizer.h" -#include "paddle/parameter/ParameterUpdateFunctions.h" #include "paddle/parameter/ParameterOptimizer.h" +#include "paddle/parameter/ParameterUpdateFunctions.h" #include "paddle/parameter/Regularizer.h" -#include "paddle/utils/Stat.h" +#include "paddle/utils/Flags.h" #include "paddle/utils/GlobalConstants.h" +#include "paddle/utils/Stat.h" P_DEFINE_int32(pserver_num_threads, 1, "number of threads for sync op exec"); P_DEFINE_double(async_lagged_ratio_min, diff --git a/paddle/pserver/ParameterServer2.h b/paddle/pserver/ParameterServer2.h index 47122f3632..61c139981e 100644 --- a/paddle/pserver/ParameterServer2.h +++ b/paddle/pserver/ParameterServer2.h @@ -15,24 +15,24 @@ limitations under the License. */ #pragma once #include +#include #include #include -#include -#include #include -#include +#include +#include #include #include -#include "paddle/utils/Locks.h" #include "paddle/math/Matrix.h" +#include "paddle/math/Vector.h" #include "paddle/parameter/Parameter.h" #include "paddle/parameter/ParameterOptimizer.h" +#include "paddle/utils/Locks.h" +#include "paddle/utils/Stat.h" #include "paddle/utils/ThreadLocal.h" #include "paddle/utils/TypeDefs.h" -#include "paddle/math/Vector.h" -#include "paddle/utils/Stat.h" #include "ParameterService.pb.h" diff --git a/paddle/pserver/ParameterServer2Main.cpp b/paddle/pserver/ParameterServer2Main.cpp index 1ba9b48c23..ffc521f2c1 100644 --- a/paddle/pserver/ParameterServer2Main.cpp +++ b/paddle/pserver/ParameterServer2Main.cpp @@ -12,13 +12,13 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ -#include "paddle/utils/Util.h" -#include "paddle/utils/StringUtil.h" #include +#include "paddle/utils/StringUtil.h" +#include "paddle/utils/Util.h" -#include "paddle/utils/Flags.h" #include "ParameterServer2.h" #include "RDMANetwork.h" +#include "paddle/utils/Flags.h" using namespace paddle; // NOLINT diff --git a/paddle/pserver/ProtoServer.h b/paddle/pserver/ProtoServer.h index 97b7bf167d..3acdcc27da 100644 --- a/paddle/pserver/ProtoServer.h +++ b/paddle/pserver/ProtoServer.h @@ -100,7 +100,8 @@ protected: ResponseCallback callback); typedef std::function msgReader, - ResponseCallback callback)> ServiceFunction; + ResponseCallback callback)> + ServiceFunction; /** * @brief register one RPC function in function mapping diff --git a/paddle/pserver/SocketChannel.cpp b/paddle/pserver/SocketChannel.cpp index f3e74257f6..0599889164 100644 --- a/paddle/pserver/SocketChannel.cpp +++ b/paddle/pserver/SocketChannel.cpp @@ -14,11 +14,11 @@ limitations under the License. */ #include "SocketChannel.h" -#include -#include -#include #include #include +#include +#include +#include #include #include "RDMANetwork.h" diff --git a/paddle/pserver/SparseParameterDistribution.h b/paddle/pserver/SparseParameterDistribution.h index dc63b065a7..24b14106cf 100644 --- a/paddle/pserver/SparseParameterDistribution.h +++ b/paddle/pserver/SparseParameterDistribution.h @@ -15,8 +15,8 @@ limitations under the License. 
*/ #pragma once #include -#include "paddle/utils/Logging.h" #include +#include "paddle/utils/Logging.h" namespace paddle { diff --git a/paddle/pserver/test/SocketTest.cpp b/paddle/pserver/test/SocketTest.cpp index 528f5e381e..6e63c4f678 100644 --- a/paddle/pserver/test/SocketTest.cpp +++ b/paddle/pserver/test/SocketTest.cpp @@ -14,11 +14,11 @@ limitations under the License. */ #include "paddle/utils/Util.h" -#include -#include -#include #include #include +#include +#include +#include #include diff --git a/paddle/pserver/test/test_ParameterServer2.cpp b/paddle/pserver/test/test_ParameterServer2.cpp index 493b6d060c..4257a2308d 100644 --- a/paddle/pserver/test/test_ParameterServer2.cpp +++ b/paddle/pserver/test/test_ParameterServer2.cpp @@ -12,9 +12,9 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ +#include #include #include -#include #include #include diff --git a/paddle/pserver/test/test_ProtoServer.cpp b/paddle/pserver/test/test_ProtoServer.cpp index cfed0d30d3..3880dde5e3 100644 --- a/paddle/pserver/test/test_ProtoServer.cpp +++ b/paddle/pserver/test/test_ProtoServer.cpp @@ -16,10 +16,10 @@ limitations under the License. */ #include -#include "paddle/utils/Stat.h" +#include "ParameterService.pb.h" #include "paddle/math/Vector.h" #include "paddle/pserver/ProtoServer.h" -#include "ParameterService.pb.h" +#include "paddle/utils/Stat.h" P_DEFINE_string(server_addr, "127.0.0.1", "Server address"); P_DEFINE_int64(dim, 50000000, "Data size"); diff --git a/paddle/py_paddle/util.py b/paddle/py_paddle/util.py index d6bbf9a5a9..ce105d249a 100644 --- a/paddle/py_paddle/util.py +++ b/paddle/py_paddle/util.py @@ -559,10 +559,10 @@ def __monkey_patch_trainer__(): def monkeypatches(): - patches = [__monkeypatch_init_paddle__, - __monkeypatch_gradient_machine__, - __monkey_patch_protobuf_objects__, - __monkey_patch_parameter__, - __monkey_patch_trainer__] + patches = [ + __monkeypatch_init_paddle__, __monkeypatch_gradient_machine__, + __monkey_patch_protobuf_objects__, __monkey_patch_parameter__, + __monkey_patch_trainer__ + ] for patch in patches: patch() diff --git a/paddle/scripts/travis/main.sh b/paddle/scripts/travis/main.sh index c49d4546c2..13f2552d29 100755 --- a/paddle/scripts/travis/main.sh +++ b/paddle/scripts/travis/main.sh @@ -5,6 +5,8 @@ if [ ${JOB} == "BUILD_AND_TEST" ]; then ./build_and_test.sh elif [ ${JOB} == "DOCS" ]; then ./docs.sh +elif [ ${JOB} == "PRE_COMMIT" ]; then + ./precommit.sh else echo Unknown job ${JOB} exit 1 diff --git a/paddle/scripts/travis/precommit.sh b/paddle/scripts/travis/precommit.sh new file mode 100755 index 0000000000..3e70bc118e --- /dev/null +++ b/paddle/scripts/travis/precommit.sh @@ -0,0 +1,6 @@ +#!/bin/bash +set -e +source common.sh +cd .. +pre-commit install +pre-commit run -a diff --git a/paddle/trainer/MergeModel.cpp b/paddle/trainer/MergeModel.cpp index 8cb2873feb..1cf29a39b9 100644 --- a/paddle/trainer/MergeModel.cpp +++ b/paddle/trainer/MergeModel.cpp @@ -14,10 +14,10 @@ limitations under the License. 
*/ #include -#include "paddle/utils/PythonUtil.h" -#include "paddle/pserver/ParameterServer2.h" #include "ParamUtil.h" #include "Trainer.h" +#include "paddle/pserver/ParameterServer2.h" +#include "paddle/utils/PythonUtil.h" P_DEFINE_string(model_dir, "", "Directory for separated model files"); P_DEFINE_string(model_file, "", "File for merged model file"); diff --git a/paddle/trainer/ParamUtil.cpp b/paddle/trainer/ParamUtil.cpp index 200417ebfc..ffbca42e10 100644 --- a/paddle/trainer/ParamUtil.cpp +++ b/paddle/trainer/ParamUtil.cpp @@ -17,22 +17,22 @@ limitations under the License. */ #include #include -#include #include -#include +#include #include +#include #include #include +#include "paddle/utils/GlobalConstants.h" #include "paddle/utils/PythonUtil.h" #include "paddle/utils/Stat.h" #include "paddle/utils/Util.h" -#include "paddle/utils/GlobalConstants.h" +#include "TesterConfig.h" #include "paddle/gserver/gradientmachines/NeuralNetwork.h" #include "paddle/gserver/layers/ValidationLayer.h" -#include "TesterConfig.h" namespace paddle { diff --git a/paddle/trainer/ParamUtil.h b/paddle/trainer/ParamUtil.h index 8fa6fda75c..2e05595848 100644 --- a/paddle/trainer/ParamUtil.h +++ b/paddle/trainer/ParamUtil.h @@ -22,11 +22,11 @@ limitations under the License. */ #include "paddle/gserver/dataproviders/DataProvider.h" #include "paddle/gserver/gradientmachines/GradientMachine.h" +#include +#include +#include "ParameterUpdater.h" #include "TrainerConfig.pb.h" #include "TrainerConfigHelper.h" -#include "ParameterUpdater.h" -#include -#include namespace paddle { diff --git a/paddle/trainer/ParameterUpdater.h b/paddle/trainer/ParameterUpdater.h index 81ac374425..e52b5cd318 100644 --- a/paddle/trainer/ParameterUpdater.h +++ b/paddle/trainer/ParameterUpdater.h @@ -24,8 +24,8 @@ limitations under the License. */ #include "paddle/parameter/Parameter.h" #include "paddle/parameter/ParameterUpdaterBase.h" -#include "paddle/gserver/layers/Layer.h" #include "TrainerConfig.pb.h" +#include "paddle/gserver/layers/Layer.h" #include #include diff --git a/paddle/trainer/RemoteParameterUpdater.cpp b/paddle/trainer/RemoteParameterUpdater.cpp index 702ea07f8a..b7f7b93b8d 100644 --- a/paddle/trainer/RemoteParameterUpdater.cpp +++ b/paddle/trainer/RemoteParameterUpdater.cpp @@ -14,8 +14,8 @@ limitations under the License. */ #include "RemoteParameterUpdater.h" #include "Trainer.h" -#include "paddle/utils/Stat.h" #include "paddle/utils/GlobalConstants.h" +#include "paddle/utils/Stat.h" P_DECLARE_int32(trainer_id); P_DECLARE_string(save_dir); diff --git a/paddle/trainer/RemoteParameterUpdater.h b/paddle/trainer/RemoteParameterUpdater.h index 46ce4be146..66055c778e 100644 --- a/paddle/trainer/RemoteParameterUpdater.h +++ b/paddle/trainer/RemoteParameterUpdater.h @@ -14,12 +14,12 @@ limitations under the License. */ #pragma once -#include #include -#include "paddle/pserver/ParameterClient2.h" +#include #include "ParameterUpdater.h" -#include "paddle/utils/Util.h" +#include "paddle/pserver/ParameterClient2.h" #include "paddle/utils/Queue.h" +#include "paddle/utils/Util.h" namespace paddle { diff --git a/paddle/trainer/Tester.h b/paddle/trainer/Tester.h index ae7e0e93bf..e892744db2 100644 --- a/paddle/trainer/Tester.h +++ b/paddle/trainer/Tester.h @@ -24,12 +24,12 @@ limitations under the License. 
*/ #include "TrainerConfig.pb.h" -#include "ParameterUpdater.h" +#include +#include #include "ParamUtil.h" +#include "ParameterUpdater.h" #include "TesterConfig.h" #include "TrainerInternalConfig.h" -#include -#include namespace paddle { diff --git a/paddle/trainer/TesterConfig.h b/paddle/trainer/TesterConfig.h index 9ff145a8a1..68d4c931ff 100644 --- a/paddle/trainer/TesterConfig.h +++ b/paddle/trainer/TesterConfig.h @@ -23,9 +23,9 @@ limitations under the License. */ #include "TrainerConfig.pb.h" -#include "ParameterUpdater.h" -#include #include +#include +#include "ParameterUpdater.h" namespace paddle { diff --git a/paddle/trainer/ThreadParameterUpdater.h b/paddle/trainer/ThreadParameterUpdater.h index 492692dbe5..d01ac689f9 100644 --- a/paddle/trainer/ThreadParameterUpdater.h +++ b/paddle/trainer/ThreadParameterUpdater.h @@ -14,13 +14,13 @@ limitations under the License. */ #pragma once -#include "paddle/utils/Util.h" #include "paddle/parameter/AverageOptimizer.h" #include "paddle/parameter/FirstOrderOptimizer.h" #include "paddle/parameter/OptimizerFunctions.h" #include "paddle/parameter/OptimizerWithRegularizer.h" #include "paddle/parameter/Parameter.h" #include "paddle/parameter/Regularizer.h" +#include "paddle/utils/Util.h" #include #include diff --git a/paddle/trainer/Trainer.h b/paddle/trainer/Trainer.h index f50b56143d..cabbb4acd1 100644 --- a/paddle/trainer/Trainer.h +++ b/paddle/trainer/Trainer.h @@ -22,13 +22,13 @@ limitations under the License. */ #include "paddle/gserver/dataproviders/DataProvider.h" #include "paddle/gserver/gradientmachines/GradientMachine.h" -#include "TrainerConfigHelper.h" +#include +#include +#include "ParamUtil.h" #include "ParameterUpdater.h" -#include "TrainerInternal.h" #include "Tester.h" -#include "ParamUtil.h" -#include -#include +#include "TrainerConfigHelper.h" +#include "TrainerInternal.h" #ifdef PADDLE_METRIC_LEARNING #include "paddle/internals/metric_learning/MetricTrainer.h" diff --git a/paddle/trainer/TrainerConfigHelper.h b/paddle/trainer/TrainerConfigHelper.h index 2c5c492ce8..f1366cc041 100644 --- a/paddle/trainer/TrainerConfigHelper.h +++ b/paddle/trainer/TrainerConfigHelper.h @@ -14,9 +14,9 @@ limitations under the License. */ #pragma once -#include #include #include +#include namespace paddle { diff --git a/paddle/trainer/TrainerInternal.cpp b/paddle/trainer/TrainerInternal.cpp index 1b49d4aa28..f3b465b444 100644 --- a/paddle/trainer/TrainerInternal.cpp +++ b/paddle/trainer/TrainerInternal.cpp @@ -17,22 +17,22 @@ limitations under the License. */ #include #include -#include #include -#include +#include #include +#include #include +#include "paddle/gserver/gradientmachines/NeuralNetwork.h" +#include "paddle/gserver/layers/ValidationLayer.h" +#include "paddle/utils/GlobalConstants.h" #include "paddle/utils/PythonUtil.h" #include "paddle/utils/Stat.h" #include "paddle/utils/Util.h" -#include "paddle/utils/GlobalConstants.h" -#include "paddle/gserver/gradientmachines/NeuralNetwork.h" -#include "paddle/gserver/layers/ValidationLayer.h" -#include "ThreadParameterUpdater.h" #include "RemoteParameterUpdater.h" +#include "ThreadParameterUpdater.h" namespace paddle { diff --git a/paddle/trainer/TrainerInternal.h b/paddle/trainer/TrainerInternal.h index b67711a721..7018faab24 100644 --- a/paddle/trainer/TrainerInternal.h +++ b/paddle/trainer/TrainerInternal.h @@ -17,15 +17,15 @@ limitations under the License. 
*/ #include "paddle/utils/Util.h" #include -#include #include +#include -#include "hl_gpu.h" -#include "paddle/gserver/gradientmachines/GradientMachine.h" -#include "TrainerConfig.pb.h" #include "ParameterUpdater.h" +#include "TrainerConfig.pb.h" #include "TrainerConfigHelper.h" #include "TrainerInternalConfig.h" +#include "hl_gpu.h" +#include "paddle/gserver/gradientmachines/GradientMachine.h" namespace paddle { diff --git a/paddle/trainer/TrainerInternalConfig.h b/paddle/trainer/TrainerInternalConfig.h index fd6fdf45e6..b47692720e 100644 --- a/paddle/trainer/TrainerInternalConfig.h +++ b/paddle/trainer/TrainerInternalConfig.h @@ -23,10 +23,10 @@ limitations under the License. */ #include "TrainerConfig.pb.h" -#include "ParameterUpdater.h" +#include #include #include -#include +#include "ParameterUpdater.h" namespace paddle { /** diff --git a/paddle/trainer/TrainerMain.cpp b/paddle/trainer/TrainerMain.cpp index 7a18f9836c..0a4d56b892 100644 --- a/paddle/trainer/TrainerMain.cpp +++ b/paddle/trainer/TrainerMain.cpp @@ -13,10 +13,10 @@ See the License for the specific language governing permissions and limitations under the License. */ #include +#include "paddle/pserver/ParameterServer2.h" +#include "paddle/utils/Excepts.h" #include "paddle/utils/PythonUtil.h" #include "paddle/utils/StringUtil.h" -#include "paddle/utils/Excepts.h" -#include "paddle/pserver/ParameterServer2.h" #include "ParamUtil.h" #include "Trainer.h" diff --git a/paddle/trainer/tests/picojson.h b/paddle/trainer/tests/picojson.h index cb657d219e..23bfa16408 100644 --- a/paddle/trainer/tests/picojson.h +++ b/paddle/trainer/tests/picojson.h @@ -30,10 +30,10 @@ #define picojson_h #include +#include #include #include #include -#include #include #include #include diff --git a/paddle/trainer/tests/test_Compare.cpp b/paddle/trainer/tests/test_Compare.cpp index 07a47b2990..63fa48540c 100644 --- a/paddle/trainer/tests/test_Compare.cpp +++ b/paddle/trainer/tests/test_Compare.cpp @@ -16,8 +16,8 @@ limitations under the License. */ #include "paddle/trainer/Trainer.h" -#include #include +#include using namespace paddle; // NOLINT using namespace std; // NOLINT diff --git a/paddle/trainer/tests/test_CompareTwoNets.cpp b/paddle/trainer/tests/test_CompareTwoNets.cpp index 7e5449dcba..8a4556721d 100644 --- a/paddle/trainer/tests/test_CompareTwoNets.cpp +++ b/paddle/trainer/tests/test_CompareTwoNets.cpp @@ -12,10 +12,10 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ +#include #include -#include #include -#include +#include #include "paddle/trainer/Trainer.h" diff --git a/paddle/trainer/tests/test_CompareTwoOpts.cpp b/paddle/trainer/tests/test_CompareTwoOpts.cpp index 4d051b537c..673ef289d8 100644 --- a/paddle/trainer/tests/test_CompareTwoOpts.cpp +++ b/paddle/trainer/tests/test_CompareTwoOpts.cpp @@ -12,10 +12,10 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 
*/ +#include #include -#include #include -#include +#include #include "paddle/trainer/Trainer.h" diff --git a/paddle/trainer/tests/test_PyDataProviderWrapper.cpp b/paddle/trainer/tests/test_PyDataProviderWrapper.cpp index 5c5c6d5346..66ec65e340 100644 --- a/paddle/trainer/tests/test_PyDataProviderWrapper.cpp +++ b/paddle/trainer/tests/test_PyDataProviderWrapper.cpp @@ -13,16 +13,16 @@ See the License for the specific language governing permissions and limitations under the License. */ #ifndef PADDLE_NO_PYTHON +#include #include -#include #include -#include #include #include +#include +#include +#include #include #include -#include -#include #include "picojson.h" void checkEqual(const paddle::Argument& expect, const paddle::Argument& actual); diff --git a/paddle/trainer/tests/test_TrainerOnePass.cpp b/paddle/trainer/tests/test_TrainerOnePass.cpp index 1d9dce1b0e..0b587ecce1 100644 --- a/paddle/trainer/tests/test_TrainerOnePass.cpp +++ b/paddle/trainer/tests/test_TrainerOnePass.cpp @@ -12,8 +12,8 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ -#include #include +#include #include "paddle/trainer/Trainer.h" #include "paddle/trainer/TrainerInternal.h" diff --git a/paddle/trainer/tests/test_recurrent_machine_generation.cpp b/paddle/trainer/tests/test_recurrent_machine_generation.cpp index b52acc2ca7..7d8dfd788f 100644 --- a/paddle/trainer/tests/test_recurrent_machine_generation.cpp +++ b/paddle/trainer/tests/test_recurrent_machine_generation.cpp @@ -14,8 +14,8 @@ limitations under the License. */ #include -#include #include +#include #include diff --git a/paddle/utils/BarrierStat.cpp b/paddle/utils/BarrierStat.cpp index 5040deefd0..9dde155aca 100644 --- a/paddle/utils/BarrierStat.cpp +++ b/paddle/utils/BarrierStat.cpp @@ -12,13 +12,13 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ +#include "paddle/utils/BarrierStat.h" +#include #include -#include #include -#include -#include "paddle/utils/Stat.h" -#include "paddle/utils/BarrierStat.h" +#include #include "paddle/utils/Flags.h" +#include "paddle/utils/Stat.h" P_DEFINE_bool(log_barrier_abstract, true, diff --git a/paddle/utils/BarrierStat.h b/paddle/utils/BarrierStat.h index 3c5c0885d6..352f641b6e 100644 --- a/paddle/utils/BarrierStat.h +++ b/paddle/utils/BarrierStat.h @@ -15,18 +15,18 @@ limitations under the License. */ #pragma once #include -#include #include -#include #include +#include +#include #include +#include #include -#include -#include "Logging.h" #include "Locks.h" -#include "ThreadLocal.h" +#include "Logging.h" #include "Stat.h" +#include "ThreadLocal.h" namespace paddle { diff --git a/paddle/utils/CommandLineParser.cpp b/paddle/utils/CommandLineParser.cpp index 14f83241c5..51558b45a1 100644 --- a/paddle/utils/CommandLineParser.cpp +++ b/paddle/utils/CommandLineParser.cpp @@ -14,15 +14,15 @@ limitations under the License. 
*/ #include "CommandLineParser.h" #ifndef PADDLE_USE_GFLAGS -#include "paddle/utils/StringUtil.h" +#include #include -#include #include -#include +#include #include -#include -#include #include +#include +#include +#include "paddle/utils/StringUtil.h" namespace paddle { @@ -46,16 +46,13 @@ template <> bool StringToValue(const std::string& content, bool* value) { std::string tmp = content; - std::transform(tmp.begin(), - tmp.end(), - tmp.begin(), - [](char in) -> char { - if (in <= 'Z' && in >= 'A') { - return in - ('Z' - 'z'); - } else { - return in; - } - }); // tolower. + std::transform(tmp.begin(), tmp.end(), tmp.begin(), [](char in) -> char { + if (in <= 'Z' && in >= 'A') { + return in - ('Z' - 'z'); + } else { + return in; + } + }); // tolower. if (tmp == "true" || tmp == "1") { *value = true; diff --git a/paddle/utils/CommandLineParser.h b/paddle/utils/CommandLineParser.h index 3d25bc3b0b..b4449c6f09 100644 --- a/paddle/utils/CommandLineParser.h +++ b/paddle/utils/CommandLineParser.h @@ -14,10 +14,10 @@ limitations under the License. */ #pragma once #ifndef PADDLE_USE_GFLAGS -#include "DisableCopy.h" +#include #include #include -#include +#include "DisableCopy.h" namespace paddle { diff --git a/paddle/utils/CpuId.cpp b/paddle/utils/CpuId.cpp index 734b2e0924..53db82e48a 100644 --- a/paddle/utils/CpuId.cpp +++ b/paddle/utils/CpuId.cpp @@ -15,43 +15,43 @@ limitations under the License. */ #ifdef _WIN32 /// for MSVC -#define CPUID(info, x) __cpuidex(info, x, 0) +#define CPUID(info, x) __cpuidex(info, x, 0) #else #include /// for GCC/Clang -#define CPUID(info, x) __cpuid_count(x, 0, info[0], info[1], info[2], info[3]) +#define CPUID(info, x) __cpuid_count(x, 0, info[0], info[1], info[2], info[3]) #endif namespace paddle { SIMDFlags::SIMDFlags() { - unsigned int cpuInfo[4]; - // CPUID: https://en.wikipedia.org/wiki/CPUID - CPUID(cpuInfo, 0x00000001); - simd_flags_ |= cpuInfo[3] & (1 << 25) ? SIMD_SSE : SIMD_NONE; - simd_flags_ |= cpuInfo[3] & (1 << 26) ? SIMD_SSE2 : SIMD_NONE; - simd_flags_ |= cpuInfo[2] & (1 << 0) ? SIMD_SSE3 : SIMD_NONE; - simd_flags_ |= cpuInfo[2] & (1 << 9) ? SIMD_SSSE3 : SIMD_NONE; - simd_flags_ |= cpuInfo[2] & (1 << 19) ? SIMD_SSE41 : SIMD_NONE; - simd_flags_ |= cpuInfo[2] & (1 << 20) ? SIMD_SSE42 : SIMD_NONE; - simd_flags_ |= cpuInfo[2] & (1 << 12) ? SIMD_FMA3 : SIMD_NONE; - simd_flags_ |= cpuInfo[2] & (1 << 28) ? SIMD_AVX : SIMD_NONE; - - CPUID(cpuInfo, 0x00000007); - simd_flags_ |= cpuInfo[1] & (1 << 5) ? SIMD_AVX2 : SIMD_NONE; - simd_flags_ |= cpuInfo[1] & (1 << 16) ? SIMD_AVX512: SIMD_NONE; - - CPUID(cpuInfo, 0x80000001); - simd_flags_ |= cpuInfo[2] & (1 << 16) ? SIMD_FMA4 : SIMD_NONE; + unsigned int cpuInfo[4]; + // CPUID: https://en.wikipedia.org/wiki/CPUID + CPUID(cpuInfo, 0x00000001); + simd_flags_ |= cpuInfo[3] & (1 << 25) ? SIMD_SSE : SIMD_NONE; + simd_flags_ |= cpuInfo[3] & (1 << 26) ? SIMD_SSE2 : SIMD_NONE; + simd_flags_ |= cpuInfo[2] & (1 << 0) ? SIMD_SSE3 : SIMD_NONE; + simd_flags_ |= cpuInfo[2] & (1 << 9) ? SIMD_SSSE3 : SIMD_NONE; + simd_flags_ |= cpuInfo[2] & (1 << 19) ? SIMD_SSE41 : SIMD_NONE; + simd_flags_ |= cpuInfo[2] & (1 << 20) ? SIMD_SSE42 : SIMD_NONE; + simd_flags_ |= cpuInfo[2] & (1 << 12) ? SIMD_FMA3 : SIMD_NONE; + simd_flags_ |= cpuInfo[2] & (1 << 28) ? SIMD_AVX : SIMD_NONE; + + CPUID(cpuInfo, 0x00000007); + simd_flags_ |= cpuInfo[1] & (1 << 5) ? SIMD_AVX2 : SIMD_NONE; + simd_flags_ |= cpuInfo[1] & (1 << 16) ? SIMD_AVX512 : SIMD_NONE; + + CPUID(cpuInfo, 0x80000001); + simd_flags_ |= cpuInfo[2] & (1 << 16) ? 
SIMD_FMA4 : SIMD_NONE; } SIMDFlags* SIMDFlags::instance() { - static SIMDFlags instance; - return &instance; + static SIMDFlags instance; + return &instance; } -} // namespace paddle +} // namespace paddle diff --git a/paddle/utils/CpuId.h b/paddle/utils/CpuId.h index d15e58d1dd..66ac59cf3e 100644 --- a/paddle/utils/CpuId.h +++ b/paddle/utils/CpuId.h @@ -18,54 +18,54 @@ namespace paddle { class SIMDFlags final { public: - DISABLE_COPY(SIMDFlags); + DISABLE_COPY(SIMDFlags); - SIMDFlags(); + SIMDFlags(); - static SIMDFlags* instance(); + static SIMDFlags* instance(); - inline bool isSSE() const { return simd_flags_ & SIMD_SSE; } - inline bool isSSE2() const { return simd_flags_ & SIMD_SSE2; } - inline bool isSSE3() const { return simd_flags_ & SIMD_SSE3; } - inline bool isSSSE3() const { return simd_flags_ & SIMD_SSSE3; } - inline bool isSSE41() const { return simd_flags_ & SIMD_SSE41; } - inline bool isSSE42() const { return simd_flags_ & SIMD_SSE42; } - inline bool isFMA3() const { return simd_flags_ & SIMD_FMA3; } - inline bool isFMA4() const { return simd_flags_ & SIMD_FMA4; } - inline bool isAVX() const { return simd_flags_ & SIMD_AVX; } - inline bool isAVX2() const { return simd_flags_ & SIMD_AVX2; } - inline bool isAVX512()const { return simd_flags_ & SIMD_AVX512;} + inline bool isSSE() const { return simd_flags_ & SIMD_SSE; } + inline bool isSSE2() const { return simd_flags_ & SIMD_SSE2; } + inline bool isSSE3() const { return simd_flags_ & SIMD_SSE3; } + inline bool isSSSE3() const { return simd_flags_ & SIMD_SSSE3; } + inline bool isSSE41() const { return simd_flags_ & SIMD_SSE41; } + inline bool isSSE42() const { return simd_flags_ & SIMD_SSE42; } + inline bool isFMA3() const { return simd_flags_ & SIMD_FMA3; } + inline bool isFMA4() const { return simd_flags_ & SIMD_FMA4; } + inline bool isAVX() const { return simd_flags_ & SIMD_AVX; } + inline bool isAVX2() const { return simd_flags_ & SIMD_AVX2; } + inline bool isAVX512() const { return simd_flags_ & SIMD_AVX512; } private: - enum simd_t { - SIMD_NONE = 0, ///< None - SIMD_SSE = 1 << 0, ///< SSE - SIMD_SSE2 = 1 << 1, ///< SSE 2 - SIMD_SSE3 = 1 << 2, ///< SSE 3 - SIMD_SSSE3 = 1 << 3, ///< SSSE 3 - SIMD_SSE41 = 1 << 4, ///< SSE 4.1 - SIMD_SSE42 = 1 << 5, ///< SSE 4.2 - SIMD_FMA3 = 1 << 6, ///< FMA 3 - SIMD_FMA4 = 1 << 7, ///< FMA 4 - SIMD_AVX = 1 << 8, ///< AVX - SIMD_AVX2 = 1 << 9, ///< AVX 2 - SIMD_AVX512 = 1 << 10, ///< AVX 512 - }; + enum simd_t { + SIMD_NONE = 0, ///< None + SIMD_SSE = 1 << 0, ///< SSE + SIMD_SSE2 = 1 << 1, ///< SSE 2 + SIMD_SSE3 = 1 << 2, ///< SSE 3 + SIMD_SSSE3 = 1 << 3, ///< SSSE 3 + SIMD_SSE41 = 1 << 4, ///< SSE 4.1 + SIMD_SSE42 = 1 << 5, ///< SSE 4.2 + SIMD_FMA3 = 1 << 6, ///< FMA 3 + SIMD_FMA4 = 1 << 7, ///< FMA 4 + SIMD_AVX = 1 << 8, ///< AVX + SIMD_AVX2 = 1 << 9, ///< AVX 2 + SIMD_AVX512 = 1 << 10, ///< AVX 512 + }; - /// simd flags - int simd_flags_ = SIMD_NONE; + /// simd flags + int simd_flags_ = SIMD_NONE; }; -#define HAS_SSE SIMDFlags::instance()->isSSE() -#define HAS_SSE2 SIMDFlags::instance()->isSSE2() -#define HAS_SSE3 SIMDFlags::instance()->isSSE3() -#define HAS_SSSE3 SIMDFlags::instance()->isSSSE3() -#define HAS_SSE41 SIMDFlags::instance()->isSSE41() -#define HAS_SSE42 SIMDFlags::instance()->isSSE42() -#define HAS_FMA3 SIMDFlags::instance()->isFMA3() -#define HAS_FMA4 SIMDFlags::instance()->isFMA4() -#define HAS_AVX SIMDFlags::instance()->isAVX() -#define HAS_AVX2 SIMDFlags::instance()->isAVX2() -#define HAS_AVX512 SIMDFlags::instance()->isAVX512() +#define HAS_SSE 
SIMDFlags::instance()->isSSE() +#define HAS_SSE2 SIMDFlags::instance()->isSSE2() +#define HAS_SSE3 SIMDFlags::instance()->isSSE3() +#define HAS_SSSE3 SIMDFlags::instance()->isSSSE3() +#define HAS_SSE41 SIMDFlags::instance()->isSSE41() +#define HAS_SSE42 SIMDFlags::instance()->isSSE42() +#define HAS_FMA3 SIMDFlags::instance()->isFMA3() +#define HAS_FMA4 SIMDFlags::instance()->isFMA4() +#define HAS_AVX SIMDFlags::instance()->isAVX() +#define HAS_AVX2 SIMDFlags::instance()->isAVX2() +#define HAS_AVX512 SIMDFlags::instance()->isAVX512() -} // namespace paddle +} // namespace paddle diff --git a/paddle/utils/CustomStackTrace.cpp b/paddle/utils/CustomStackTrace.cpp index 730788cb98..083f5c509a 100644 --- a/paddle/utils/CustomStackTrace.cpp +++ b/paddle/utils/CustomStackTrace.cpp @@ -13,8 +13,8 @@ See the License for the specific language governing permissions and limitations under the License. */ #include "CustomStackTrace.h" -#include "CommandLineParser.h" #include +#include "CommandLineParser.h" P_DEFINE_bool( layer_stack_error_only_current_thread, diff --git a/paddle/utils/CustomStackTrace.h b/paddle/utils/CustomStackTrace.h index 5686f3c84c..6992e85622 100644 --- a/paddle/utils/CustomStackTrace.h +++ b/paddle/utils/CustomStackTrace.h @@ -14,10 +14,10 @@ limitations under the License. */ #pragma once +#include #include #include #include -#include #include "ThreadLocal.h" @@ -96,7 +96,8 @@ public: */ typedef std::function DumpCallback; + const T& /*item*/)> + DumpCallback; /** * Dump all thread stack, and all stack will be cleared. diff --git a/paddle/utils/Logging.cpp b/paddle/utils/Logging.cpp index 3c31633e58..20f32466a5 100644 --- a/paddle/utils/Logging.cpp +++ b/paddle/utils/Logging.cpp @@ -22,13 +22,13 @@ limitations under the License. */ #include #include #include -#include -#include #include +#include +#include -#include -#include #include +#include +#include #include namespace paddle { diff --git a/paddle/utils/Logging.h b/paddle/utils/Logging.h index c91ca9fecc..4379289f6d 100644 --- a/paddle/utils/Logging.h +++ b/paddle/utils/Logging.h @@ -18,8 +18,8 @@ limitations under the License. */ */ #pragma once -#include #include +#include #include #ifndef PADDLE_USE_GLOG diff --git a/paddle/utils/PythonUtil.cpp b/paddle/utils/PythonUtil.cpp index a9c6a20997..2ee4e4fb7e 100644 --- a/paddle/utils/PythonUtil.cpp +++ b/paddle/utils/PythonUtil.cpp @@ -13,8 +13,8 @@ See the License for the specific language governing permissions and limitations under the License. */ #include "PythonUtil.h" -#include #include +#include namespace paddle { diff --git a/paddle/utils/PythonUtil.h b/paddle/utils/PythonUtil.h index 2cbc2fdd37..9e2a1c360c 100644 --- a/paddle/utils/PythonUtil.h +++ b/paddle/utils/PythonUtil.h @@ -36,10 +36,10 @@ limitations under the License. 
*/ #endif -#include "paddle/utils/Util.h" #include -#include #include +#include +#include "paddle/utils/Util.h" namespace paddle { diff --git a/paddle/utils/Queue.h b/paddle/utils/Queue.h index 37748345a4..f054738f87 100644 --- a/paddle/utils/Queue.h +++ b/paddle/utils/Queue.h @@ -142,9 +142,9 @@ public: */ bool waitNotEmptyFor(int seconds) { std::unique_lock lock(queueLock_); - return queueCV_.wait_for(lock, - std::chrono::seconds(seconds), - [this] { return numElements_ != 0; }); + return queueCV_.wait_for(lock, std::chrono::seconds(seconds), [this] { + return numElements_ != 0; + }); } private: diff --git a/paddle/utils/Stat.cpp b/paddle/utils/Stat.cpp index 01ea535cfd..44acee2495 100644 --- a/paddle/utils/Stat.cpp +++ b/paddle/utils/Stat.cpp @@ -13,9 +13,9 @@ See the License for the specific language governing permissions and limitations under the License. */ #include "Stat.h" -#include "Util.h" -#include #include +#include +#include "Util.h" namespace paddle { @@ -207,10 +207,9 @@ static unsigned g_profileCount = 0; static std::recursive_mutex g_profileMutex; GpuProfiler::GpuProfiler(std::string statName, std::string info) - : guard_(g_profileMutex) { + : guard_(g_profileMutex) { if (++g_profileCount == 1) { - LOG(INFO) << "Enable GPU Profiler Stat: [" - << statName << "] " << info; + LOG(INFO) << "Enable GPU Profiler Stat: [" << statName << "] " << info; hl_profiler_start(); } } diff --git a/paddle/utils/StringUtil.h b/paddle/utils/StringUtil.h index 8a63ca23b4..0b4f4c9113 100644 --- a/paddle/utils/StringUtil.h +++ b/paddle/utils/StringUtil.h @@ -14,9 +14,9 @@ limitations under the License. */ #pragma once +#include #include #include -#include #include "Logging.h" namespace paddle { diff --git a/paddle/utils/Thread.h b/paddle/utils/Thread.h index 435dff2f66..ef36a8c5b2 100644 --- a/paddle/utils/Thread.h +++ b/paddle/utils/Thread.h @@ -13,9 +13,9 @@ See the License for the specific language governing permissions and limitations under the License. */ #pragma once -#include "Util.h" -#include "Logging.h" #include +#include "Logging.h" +#include "Util.h" #include "Queue.h" #include "ThreadLocal.h" diff --git a/paddle/utils/ThreadLocal.cpp b/paddle/utils/ThreadLocal.cpp index c9b32784d9..8a2878fc4b 100644 --- a/paddle/utils/ThreadLocal.cpp +++ b/paddle/utils/ThreadLocal.cpp @@ -12,9 +12,9 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ -#include "Util.h" #include "ThreadLocal.h" #include "CommandLineParser.h" +#include "Util.h" P_DEFINE_bool(thread_local_rand_use_global_seed, false, diff --git a/paddle/utils/ThreadLocal.h b/paddle/utils/ThreadLocal.h index b6e31bd05b..a4987c9ec2 100644 --- a/paddle/utils/ThreadLocal.h +++ b/paddle/utils/ThreadLocal.h @@ -15,14 +15,14 @@ limitations under the License. */ #pragma once #include -#include #include +#include #include #include #include #include -#include "Util.h" #include "Logging.h" +#include "Util.h" namespace paddle { diff --git a/paddle/utils/Util.cpp b/paddle/utils/Util.cpp index f48726bff0..26ff385c84 100644 --- a/paddle/utils/Util.cpp +++ b/paddle/utils/Util.cpp @@ -15,11 +15,11 @@ limitations under the License. */ #include "Util.h" #include +#include #include #include #include #include -#include #include #include @@ -28,10 +28,10 @@ limitations under the License. 
*/ #include "CommandLineParser.h" #include "CustomStackTrace.h" +#include "StringUtil.h" #include "Thread.h" #include "ThreadLocal.h" #include "Version.h" -#include "StringUtil.h" P_DEFINE_int32(seed, 1, "random number seed. 0 for srand(time)"); @@ -126,25 +126,23 @@ void registerInitFunction(std::function func, int priority) { } void runInitFunctions() { - std::call_once( - g_onceFlag, - []() { - LOG(INFO) << "Calling runInitFunctions"; - if (g_initFuncs) { - std::sort(g_initFuncs->begin(), - g_initFuncs->end(), - [](const PriorityFuncPair& x, const PriorityFuncPair& y) { - return x.first > y.first; - }); - for (auto& f : *g_initFuncs) { - f.second(); - } - delete g_initFuncs; - g_initFuncs = nullptr; - } - g_initialized = true; - LOG(INFO) << "Call runInitFunctions done."; - }); + std::call_once(g_onceFlag, []() { + LOG(INFO) << "Calling runInitFunctions"; + if (g_initFuncs) { + std::sort(g_initFuncs->begin(), + g_initFuncs->end(), + [](const PriorityFuncPair& x, const PriorityFuncPair& y) { + return x.first > y.first; + }); + for (auto& f : *g_initFuncs) { + f.second(); + } + delete g_initFuncs; + g_initFuncs = nullptr; + } + g_initialized = true; + LOG(INFO) << "Call runInitFunctions done."; + }); } void initMain(int argc, char** argv) { diff --git a/paddle/utils/Util.h b/paddle/utils/Util.h index ff67439da6..24ddde28e7 100644 --- a/paddle/utils/Util.h +++ b/paddle/utils/Util.h @@ -14,25 +14,25 @@ limitations under the License. */ #pragma once +#include // for syscall() +#include #include #include -#include -#include +#include #include +#include +#include #include #include -#include -#include -#include // for syscall() -#include +#include #include "CommandLineParser.h" +#include "DisableCopy.h" #include "Logging.h" #include "TrainerConfig.pb.h" -#include "DisableCopy.h" -#include "TypeDefs.h" #include "Flags.h" +#include "TypeDefs.h" #include "hl_gpu.h" /** diff --git a/paddle/utils/Version.cpp b/paddle/utils/Version.cpp index 086515791d..a9e351b69f 100644 --- a/paddle/utils/Version.cpp +++ b/paddle/utils/Version.cpp @@ -14,10 +14,10 @@ limitations under the License. */ #include "Version.h" -#include "Flags.h" -#include "Util.h" #include #include +#include "Flags.h" +#include "Util.h" //! TODO(yuyang18) in gflags, version has another define. Use another flag //! instead. #ifndef PADDLE_USE_GFLAGS @@ -33,7 +33,8 @@ void printVersion(std::ostream& os) { #ifndef PADDLE_VERSION #define PADDLE_VERSION "unknown" #endif -// converts macro to string https://gcc.gnu.org/onlinedocs/cpp/Stringification.html +// converts macro to string +// https://gcc.gnu.org/onlinedocs/cpp/Stringification.html #define xstr(s) str(s) #define str(s) #s diff --git a/paddle/utils/Version.h b/paddle/utils/Version.h index ac04963c2c..d1a07d9485 100644 --- a/paddle/utils/Version.h +++ b/paddle/utils/Version.h @@ -14,8 +14,8 @@ limitations under the License. */ #pragma once #include -#include "TypeDefs.h" #include +#include "TypeDefs.h" namespace paddle { diff --git a/paddle/utils/arch/osx/Locks.cpp b/paddle/utils/arch/osx/Locks.cpp index 8590226431..e03992363f 100644 --- a/paddle/utils/arch/osx/Locks.cpp +++ b/paddle/utils/arch/osx/Locks.cpp @@ -13,10 +13,10 @@ See the License for the specific language governing permissions and limitations under the License. 
*/ #include "paddle/utils/Locks.h" -#include "paddle/utils/Logging.h" #include -#include #include +#include +#include "paddle/utils/Logging.h" namespace paddle { diff --git a/paddle/utils/tests/test_CommandLineParser.cpp b/paddle/utils/tests/test_CommandLineParser.cpp index 9a1d2391a8..ed2b3068d5 100644 --- a/paddle/utils/tests/test_CommandLineParser.cpp +++ b/paddle/utils/tests/test_CommandLineParser.cpp @@ -15,8 +15,8 @@ limitations under the License. */ #ifndef PADDLE_USE_GFLAGS //! Test Command Line Parser for paddle internal implement. -#include #include +#include P_DEFINE_int32(i1, 1, "test int flag 1"); P_DEFINE_int32(i2, 2, "test int flag 2"); diff --git a/paddle/utils/tests/test_CustomStackTrace.cpp b/paddle/utils/tests/test_CustomStackTrace.cpp index 512330b49e..292ed4619d 100644 --- a/paddle/utils/tests/test_CustomStackTrace.cpp +++ b/paddle/utils/tests/test_CustomStackTrace.cpp @@ -15,10 +15,10 @@ limitations under the License. */ #include #include -#include "paddle/utils/CustomStackTrace.h" #include "paddle/utils/CommandLineParser.h" -#include "paddle/utils/Util.h" +#include "paddle/utils/CustomStackTrace.h" #include "paddle/utils/Locks.h" +#include "paddle/utils/Util.h" P_DEFINE_int32(test_thread_num, 10, "testing thread number"); diff --git a/paddle/utils/tests/test_CustomStackTracePrint.cpp b/paddle/utils/tests/test_CustomStackTracePrint.cpp index 60ba210b70..611b16aa71 100644 --- a/paddle/utils/tests/test_CustomStackTracePrint.cpp +++ b/paddle/utils/tests/test_CustomStackTracePrint.cpp @@ -12,8 +12,8 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ -#include "paddle/utils/Util.h" #include "paddle/utils/CustomStackTrace.h" +#include "paddle/utils/Util.h" int main(int argc, char** argv) { paddle::initMain(argc, argv); diff --git a/paddle/utils/tests/test_Logging.cpp b/paddle/utils/tests/test_Logging.cpp index 667864aa75..fbfffcc65a 100644 --- a/paddle/utils/tests/test_Logging.cpp +++ b/paddle/utils/tests/test_Logging.cpp @@ -17,10 +17,10 @@ limitations under the License. */ * Used in embedded system where there is no glogs. */ +#include #include -#include #include -#include +#include #include "paddle/utils/Logging.h" #include "paddle/utils/Util.h" #ifndef PADDLE_USE_GLOG diff --git a/paddle/utils/tests/test_SIMDFlags.cpp b/paddle/utils/tests/test_SIMDFlags.cpp index a544901aa3..41532953a7 100644 --- a/paddle/utils/tests/test_SIMDFlags.cpp +++ b/paddle/utils/tests/test_SIMDFlags.cpp @@ -9,40 +9,39 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 
*/ - #include #include "paddle/utils/CpuId.h" #include "paddle/utils/Logging.h" #include "paddle/utils/Util.h" -using namespace paddle; // NOLINT +using namespace paddle; // NOLINT TEST(SIMDFlags, gccTest) { #if (defined(__GNUC__) || defined(__GNUG__)) && !(defined(__clang__)) - CHECK(!__builtin_cpu_supports("sse") != HAS_SSE); - CHECK(!__builtin_cpu_supports("sse2") != HAS_SSE2); - CHECK(!__builtin_cpu_supports("sse3") != HAS_SSE3); - CHECK(!__builtin_cpu_supports("ssse3") != HAS_SSSE3); - CHECK(!__builtin_cpu_supports("sse4.1")!= HAS_SSE41); - CHECK(!__builtin_cpu_supports("sse4.2")!= HAS_SSE42); - CHECK(!__builtin_cpu_supports("avx") != HAS_AVX); - CHECK(!__builtin_cpu_supports("avx2") != HAS_AVX2); + CHECK(!__builtin_cpu_supports("sse") != HAS_SSE); + CHECK(!__builtin_cpu_supports("sse2") != HAS_SSE2); + CHECK(!__builtin_cpu_supports("sse3") != HAS_SSE3); + CHECK(!__builtin_cpu_supports("ssse3") != HAS_SSSE3); + CHECK(!__builtin_cpu_supports("sse4.1") != HAS_SSE41); + CHECK(!__builtin_cpu_supports("sse4.2") != HAS_SSE42); + CHECK(!__builtin_cpu_supports("avx") != HAS_AVX); + CHECK(!__builtin_cpu_supports("avx2") != HAS_AVX2); #endif } TEST(SIMDFlags, normalPrint) { - auto simd = SIMDFlags::instance(); - LOG(INFO) << "Has SSE2: " << std::boolalpha << simd->isSSE2(); - LOG(INFO) << "Has SSE3: " << std::boolalpha << simd->isSSE3(); - LOG(INFO) << "Has SSSE3: " << std::boolalpha << simd->isSSSE3(); - LOG(INFO) << "Has SSE4.1: " << std::boolalpha << simd->isSSE41(); - LOG(INFO) << "Has SSE4.2: " << std::boolalpha << simd->isSSE42(); - LOG(INFO) << "Has FMA3: " << std::boolalpha << simd->isFMA3(); - LOG(INFO) << "Has FMA4: " << std::boolalpha << simd->isFMA4(); - LOG(INFO) << "Has AVX: " << std::boolalpha << simd->isAVX(); - LOG(INFO) << "Has AVX2: " << std::boolalpha << simd->isAVX2(); - LOG(INFO) << "Has AVX512: " << std::boolalpha << simd->isAVX512(); + auto simd = SIMDFlags::instance(); + LOG(INFO) << "Has SSE2: " << std::boolalpha << simd->isSSE2(); + LOG(INFO) << "Has SSE3: " << std::boolalpha << simd->isSSE3(); + LOG(INFO) << "Has SSSE3: " << std::boolalpha << simd->isSSSE3(); + LOG(INFO) << "Has SSE4.1: " << std::boolalpha << simd->isSSE41(); + LOG(INFO) << "Has SSE4.2: " << std::boolalpha << simd->isSSE42(); + LOG(INFO) << "Has FMA3: " << std::boolalpha << simd->isFMA3(); + LOG(INFO) << "Has FMA4: " << std::boolalpha << simd->isFMA4(); + LOG(INFO) << "Has AVX: " << std::boolalpha << simd->isAVX(); + LOG(INFO) << "Has AVX2: " << std::boolalpha << simd->isAVX2(); + LOG(INFO) << "Has AVX512: " << std::boolalpha << simd->isAVX512(); } int main(int argc, char** argv) { diff --git a/paddle/utils/tests/test_SpinLock.cpp b/paddle/utils/tests/test_SpinLock.cpp index 9c7ad05b0b..22f8584ef5 100644 --- a/paddle/utils/tests/test_SpinLock.cpp +++ b/paddle/utils/tests/test_SpinLock.cpp @@ -14,10 +14,10 @@ limitations under the License. */ #include #include -#include "paddle/utils/Logging.h" #include "paddle/utils/CommandLineParser.h" -#include "paddle/utils/Util.h" #include "paddle/utils/Locks.h" +#include "paddle/utils/Logging.h" +#include "paddle/utils/Util.h" P_DEFINE_int32(test_thread_num, 100, "testing thread number"); diff --git a/paddle/utils/tests/test_Thread.cpp b/paddle/utils/tests/test_Thread.cpp index b069be1d7a..2f5c5bbce0 100644 --- a/paddle/utils/tests/test_Thread.cpp +++ b/paddle/utils/tests/test_Thread.cpp @@ -12,9 +12,9 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the License for the specific language governing permissions and limitations under the License. */ -#include -#include #include +#include +#include using paddle::AsyncThreadPool; // NOLINT @@ -52,17 +52,13 @@ TEST(AsyncThreadPool, multiThreadAddBatchJob) { int counter = 0; const int numMonitors = 300; const int numSlaves = 300; - std::vector moniterJobs( - numMonitors, - [&] { - std::vector slaveJobs( - numSlaves, - [mut, &counter] { - std::lock_guard lk(*mut); - counter++; - }); - levelTwoPool.addBatchJobs(slaveJobs); - }); + std::vector moniterJobs(numMonitors, [&] { + std::vector slaveJobs(numSlaves, [mut, &counter] { + std::lock_guard lk(*mut); + counter++; + }); + levelTwoPool.addBatchJobs(slaveJobs); + }); levelOnePool.addBatchJobs(moniterJobs); ASSERT_EQ(counter, numMonitors * numSlaves); } diff --git a/paddle/utils/tests/test_ThreadBarrier.cpp b/paddle/utils/tests/test_ThreadBarrier.cpp index 997a393683..4a8af5b97e 100644 --- a/paddle/utils/tests/test_ThreadBarrier.cpp +++ b/paddle/utils/tests/test_ThreadBarrier.cpp @@ -15,10 +15,10 @@ limitations under the License. */ #include #include #include -#include "paddle/utils/Logging.h" #include "paddle/utils/CommandLineParser.h" -#include "paddle/utils/Util.h" #include "paddle/utils/Locks.h" +#include "paddle/utils/Logging.h" +#include "paddle/utils/Util.h" P_DEFINE_int32(test_thread_num, 100, "testing thread number"); From 3aa00440b1864badfe612b6bf241f896c829cc3e Mon Sep 17 00:00:00 2001 From: Yi Wang Date: Sun, 11 Dec 2016 23:02:47 -0800 Subject: [PATCH 087/265] Fix bugs in Docker document --- .../build_and_install/docker_install_en.rst | 26 ++++++++++++------- 1 file changed, 17 insertions(+), 9 deletions(-) diff --git a/doc/getstarted/build_and_install/docker_install_en.rst b/doc/getstarted/build_and_install/docker_install_en.rst index feb027ccbb..8df7e063a1 100644 --- a/doc/getstarted/build_and_install/docker_install_en.rst +++ b/doc/getstarted/build_and_install/docker_install_en.rst @@ -122,9 +122,9 @@ The general development workflow with Docker and Bazel is as follows: git clone --recursive https://github.com/paddlepaddle/paddle -2. Build a development Docker image `paddle:dev` from the source code. - This image contains all the development tools and dependencies of - PaddlePaddle. +2. Build a development Docker image :code:`paddle:dev` from the source + code. This image contains all the development tools and + dependencies of PaddlePaddle. .. code-block:: bash @@ -139,14 +139,22 @@ The general development workflow with Docker and Bazel is as follows: .. code-block:: bash - docker run \ - -d # run the container in background mode \ - --name paddle # we can run a nginx container to serve documents \ - -p 2022:22 # so we can SSH into this container \ - -v $PWD:/paddle # mount the source code \ - -v $HOME/.cache/bazel:/root/.cache/bazel # mount Bazel cache \ + docker run \ + -d \ + --name paddle \ + -p 2022:22 \ + -v $PWD:/paddle \ + -v $HOME/.cache/bazel:/root/.cache/bazel \ paddle:dev + where :code:`-d` makes the container running in background, + :code:`--name paddle` allows us to run a nginx container to serve + documents in this container, :code:`-p 2022:22` allows us to SSH + into this container, :code:`-v $PWD:/paddle` shares the source code + on the host with the container, :code:`-v + $HOME/.cache/bazel:/root/.cache/bazel` shares Bazel cache on the + host with the container. + 4. SSH into the container: .. 
code-block:: bash From be1b70e64e719bb1f1af5ea414e9e1aaf8f0a89b Mon Sep 17 00:00:00 2001 From: Yu Yang Date: Mon, 12 Dec 2016 15:08:55 +0800 Subject: [PATCH 088/265] Tuning travis --- paddle/cuda/src/hl_cuda_device.cc | 2 +- paddle/scripts/travis/precommit.sh | 2 ++ 2 files changed, 3 insertions(+), 1 deletion(-) diff --git a/paddle/cuda/src/hl_cuda_device.cc b/paddle/cuda/src/hl_cuda_device.cc index d181448292..b0bba73594 100644 --- a/paddle/cuda/src/hl_cuda_device.cc +++ b/paddle/cuda/src/hl_cuda_device.cc @@ -12,13 +12,13 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ +#include "hl_cuda.h" #include #include #include #include #include #include -#include "hl_cuda.h" #include "hl_cuda.ph" #include "hl_dso_loader.h" #include "hl_thread.ph" diff --git a/paddle/scripts/travis/precommit.sh b/paddle/scripts/travis/precommit.sh index 3e70bc118e..48d2f437a1 100755 --- a/paddle/scripts/travis/precommit.sh +++ b/paddle/scripts/travis/precommit.sh @@ -2,5 +2,7 @@ set -e source common.sh cd .. +export PATH=/usr/bin:$PATH pre-commit install +clang-format --version pre-commit run -a From 4afaaa4b9210a851e735bf79aadb92420d773477 Mon Sep 17 00:00:00 2001 From: Yu Yang Date: Mon, 12 Dec 2016 15:33:41 +0800 Subject: [PATCH 089/265] Autoformat all files --- WORKSPACE | 20 ++++----- demo/sentiment/predict.py | 11 +++-- python/paddle/trainer/config_parser.py | 4 ++ .../default_decorators.py | 3 ++ .../test_config_parser_for_non_file_config.py | 44 ++++++++++--------- .../tests/test_reset_hook.py | 10 ++--- third_party/gtest.BUILD | 20 +++------ third_party/protobuf_test/BUILD | 29 ++++++------ third_party/protobuf_test/example_lib.cc | 4 +- 9 files changed, 70 insertions(+), 75 deletions(-) diff --git a/WORKSPACE b/WORKSPACE index d6ae2af8eb..0b8299905a 100644 --- a/WORKSPACE +++ b/WORKSPACE @@ -1,17 +1,15 @@ # External dependency to Google protobuf. http_archive( - name = "protobuf", - url = "http://github.com/google/protobuf/archive/v3.1.0.tar.gz", - sha256 = "0a0ae63cbffc274efb573bdde9a253e3f32e458c41261df51c5dbc5ad541e8f7", - strip_prefix = "protobuf-3.1.0", -) + name="protobuf", + url="http://github.com/google/protobuf/archive/v3.1.0.tar.gz", + sha256="0a0ae63cbffc274efb573bdde9a253e3f32e458c41261df51c5dbc5ad541e8f7", + strip_prefix="protobuf-3.1.0", ) # External dependency to gtest 1.7.0. This method comes from # https://www.bazel.io/versions/master/docs/tutorial/cpp.html. new_http_archive( - name = "gtest", - url = "https://github.com/google/googletest/archive/release-1.7.0.zip", - sha256 = "b58cb7547a28b2c718d1e38aee18a3659c9e3ff52440297e965f5edffe34b6d0", - build_file = "third_party/gtest.BUILD", - strip_prefix = "googletest-release-1.7.0", -) + name="gtest", + url="https://github.com/google/googletest/archive/release-1.7.0.zip", + sha256="b58cb7547a28b2c718d1e38aee18a3659c9e3ff52440297e965f5edffe34b6d0", + build_file="third_party/gtest.BUILD", + strip_prefix="googletest-release-1.7.0", ) diff --git a/demo/sentiment/predict.py b/demo/sentiment/predict.py index 0095c6f727..8ec490f646 100755 --- a/demo/sentiment/predict.py +++ b/demo/sentiment/predict.py @@ -71,9 +71,7 @@ class SentimentPrediction(): transform word into integer index according to the dictionary. 
""" words = data.strip().split() - word_slot = [ - self.word_dict[w] for w in words if w in self.word_dict - ] + word_slot = [self.word_dict[w] for w in words if w in self.word_dict] return word_slot def batch_predict(self, data_batch): @@ -85,8 +83,8 @@ class SentimentPrediction(): if self.label is None: print("predicting label is %d" % (lab[0])) else: - print("predicting label is %s" % - (self.label[lab[0]])) + print("predicting label is %s" % (self.label[lab[0]])) + def option_parser(): usage = "python predict.py -n config -w model_dir -d dictionary -i input_file " @@ -143,9 +141,10 @@ def main(): batch.append([predict.get_index(line)]) if len(batch) == batch_size: predict.batch_predict(batch) - batch=[] + batch = [] if len(batch) > 0: predict.batch_predict(batch) + if __name__ == '__main__': main() diff --git a/python/paddle/trainer/config_parser.py b/python/paddle/trainer/config_parser.py index 42a7a29403..5b7f4d85e2 100644 --- a/python/paddle/trainer/config_parser.py +++ b/python/paddle/trainer/config_parser.py @@ -3364,7 +3364,10 @@ def my_fatal(s): logger.critical(s) raise Exception() + _parse_config_hooks = set() + + def register_parse_config_hook(f): """ Register a hook function for parse_config. parse_config will invoke the hook @@ -3373,6 +3376,7 @@ def register_parse_config_hook(f): """ _parse_config_hooks.add(f) + def parse_config(config_file, config_arg_str): ''' @param config_arg_str: a string of the form var1=val1,var2=val2. It will be diff --git a/python/paddle/trainer_config_helpers/default_decorators.py b/python/paddle/trainer_config_helpers/default_decorators.py index 13712aad7b..ad3efcbf36 100644 --- a/python/paddle/trainer_config_helpers/default_decorators.py +++ b/python/paddle/trainer_config_helpers/default_decorators.py @@ -84,12 +84,15 @@ class DefaultNameFactory(object): _name_factories = [] + def reset_hook(): for factory in _name_factories: factory.reset() + register_parse_config_hook(reset_hook) + def wrap_name_default(name_prefix=None): """ Decorator to set "name" arguments default to "{name_prefix}_{invoke_count}". 
diff --git a/python/paddle/trainer_config_helpers/tests/configs/test_config_parser_for_non_file_config.py b/python/paddle/trainer_config_helpers/tests/configs/test_config_parser_for_non_file_config.py index 87a607acf4..9b791a0222 100644 --- a/python/paddle/trainer_config_helpers/tests/configs/test_config_parser_for_non_file_config.py +++ b/python/paddle/trainer_config_helpers/tests/configs/test_config_parser_for_non_file_config.py @@ -17,33 +17,35 @@ import sys import re import getopt + def main(print_whole_config, globals, locals): - ''' + ''' this test will all test_config.py ''' - cmdstr = """from paddle.trainer.config_parser import parse_config\n""" - importstr = "" - functionstr = "" + cmdstr = """from paddle.trainer.config_parser import parse_config\n""" + importstr = "" + functionstr = "" + + for line in sys.stdin: + if re.match("^import", line) or re.match("^from.*import", line): + importstr = importstr + line + else: + functionstr = functionstr + " " + line - for line in sys.stdin: - if re.match("^import", line) or re.match("^from.*import", line): - importstr = importstr + line + cmdstr = cmdstr + importstr + """def configs():\n""" + functionstr + #cmdstr = cmdstr + """def configs():\n""" + importstr + functionstr + if print_whole_config: + cmdstr = cmdstr + """print parse_config(configs, "")""" else: - functionstr = functionstr + " " + line + cmdstr = cmdstr + """print parse_config(configs, "").model_config""" - cmdstr = cmdstr + importstr + """def configs():\n""" + functionstr - #cmdstr = cmdstr + """def configs():\n""" + importstr + functionstr - if print_whole_config: - cmdstr = cmdstr + """print parse_config(configs, "")""" - else: - cmdstr = cmdstr + """print parse_config(configs, "").model_config""" + exec (cmdstr, globals, locals) - exec(cmdstr, globals, locals) if __name__ == '__main__': - whole = False - opts, args = getopt.getopt(sys.argv[1:], "", ["whole"]) - for op, value in opts: - if op == "--whole": - whole = True - main(whole, globals(), locals()) + whole = False + opts, args = getopt.getopt(sys.argv[1:], "", ["whole"]) + for op, value in opts: + if op == "--whole": + whole = True + main(whole, globals(), locals()) diff --git a/python/paddle/trainer_config_helpers/tests/test_reset_hook.py b/python/paddle/trainer_config_helpers/tests/test_reset_hook.py index dc494d0eef..0423babdb7 100644 --- a/python/paddle/trainer_config_helpers/tests/test_reset_hook.py +++ b/python/paddle/trainer_config_helpers/tests/test_reset_hook.py @@ -14,13 +14,13 @@ import unittest from paddle.trainer.config_parser import parse_config -class TestParse(unittest.TestCase): +class TestParse(unittest.TestCase): def test_parse(self): - a = parse_config( - 'trainer_config_helpers/tests/layers_test_config.py', '') - b = parse_config( - 'trainer_config_helpers/tests/layers_test_config.py', '') + a = parse_config('trainer_config_helpers/tests/layers_test_config.py', + '') + b = parse_config('trainer_config_helpers/tests/layers_test_config.py', + '') self.assertEqual(a, b) diff --git a/third_party/gtest.BUILD b/third_party/gtest.BUILD index 3e68a1d879..71c74af513 100644 --- a/third_party/gtest.BUILD +++ b/third_party/gtest.BUILD @@ -1,14 +1,8 @@ cc_library( - name = "main", - srcs = glob( - ["src/*.cc"], - exclude = ["src/gtest-all.cc"] - ), - hdrs = glob([ - "include/**/*.h", - "src/*.h" - ]), - copts = ["-Iexternal/gtest/include"], - linkopts = ["-pthread"], - visibility = ["//visibility:public"], -) + name="main", + srcs=glob( + ["src/*.cc"], exclude=["src/gtest-all.cc"]), + 
hdrs=glob(["include/**/*.h", "src/*.h"]), + copts=["-Iexternal/gtest/include"], + linkopts=["-pthread"], + visibility=["//visibility:public"], ) diff --git a/third_party/protobuf_test/BUILD b/third_party/protobuf_test/BUILD index 46f769da5f..95a687a356 100644 --- a/third_party/protobuf_test/BUILD +++ b/third_party/protobuf_test/BUILD @@ -3,25 +3,22 @@ licenses(["notice"]) # Apache 2.0 load("@protobuf//:protobuf.bzl", "cc_proto_library") cc_proto_library( - name = "example_proto", - srcs = ["example.proto"], - protoc = "@protobuf//:protoc", - default_runtime = "@protobuf//:protobuf", -) + name="example_proto", + srcs=["example.proto"], + protoc="@protobuf//:protoc", + default_runtime="@protobuf//:protobuf", ) cc_library( - name = "example_lib", - srcs = ["example_lib.cc"], - hdrs = ["example_lib.h"], - deps = [":example_proto"], -) + name="example_lib", + srcs=["example_lib.cc"], + hdrs=["example_lib.h"], + deps=[":example_proto"], ) cc_test( - name = "example_lib_test", - srcs = ["example_lib_test.cc"], - copts = ["-Iexternal/gtest/include"], - deps =[ + name="example_lib_test", + srcs=["example_lib_test.cc"], + copts=["-Iexternal/gtest/include"], + deps=[ "@gtest//:main", ":example_lib", - ], -) + ], ) diff --git a/third_party/protobuf_test/example_lib.cc b/third_party/protobuf_test/example_lib.cc index 56341a0124..ced377bc0a 100644 --- a/third_party/protobuf_test/example_lib.cc +++ b/third_party/protobuf_test/example_lib.cc @@ -3,9 +3,7 @@ namespace third_party { namespace protobuf_test { -std::string get_greet(const Greeting& who) { - return "Hello " + who.name(); -} +std::string get_greet(const Greeting& who) { return "Hello " + who.name(); } } // namespace protobuf_test } // namespace thrid_party From 6ea5a9fd0361d58bcea205bc9736166cf17e1b94 Mon Sep 17 00:00:00 2001 From: Yu Yang Date: Mon, 12 Dec 2016 15:43:06 +0800 Subject: [PATCH 090/265] Add tips when unittest error --- paddle/scripts/travis/precommit.sh | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/paddle/scripts/travis/precommit.sh b/paddle/scripts/travis/precommit.sh index 48d2f437a1..5ad84f1821 100755 --- a/paddle/scripts/travis/precommit.sh +++ b/paddle/scripts/travis/precommit.sh @@ -1,4 +1,11 @@ #!/bin/bash +function abort(){ + echo "Your commit not fit PaddlePaddle code style" 1>&2 + echo "Please use pre-commit scripts to auto-format your code" 1>&2 + exit 1 +} + +trap 'abort' 0 set -e source common.sh cd .. @@ -6,3 +13,5 @@ export PATH=/usr/bin:$PATH pre-commit install clang-format --version pre-commit run -a + +trap : 0 From 579e59120708b3fb0190347da1dab5189dfd23d3 Mon Sep 17 00:00:00 2001 From: Yu Yang Date: Mon, 12 Dec 2016 15:45:28 +0800 Subject: [PATCH 091/265] Try to fix unittest error --- paddle/cuda/include/hl_time.h | 2 +- paddle/cuda/src/hl_time.cc | 1 + paddle/utils/BarrierStat.h | 1 - paddle/utils/PythonUtil.h | 5 +++-- 4 files changed, 5 insertions(+), 4 deletions(-) diff --git a/paddle/cuda/include/hl_time.h b/paddle/cuda/include/hl_time.h index f214b055f9..f63f025820 100644 --- a/paddle/cuda/include/hl_time.h +++ b/paddle/cuda/include/hl_time.h @@ -14,7 +14,7 @@ limitations under the License. */ #ifndef HL_TIME_H_ #define HL_TIME_H_ - +#include /** * @brief High resolution timer. * diff --git a/paddle/cuda/src/hl_time.cc b/paddle/cuda/src/hl_time.cc index 2bb69d25e5..7e5d7e8aae 100644 --- a/paddle/cuda/src/hl_time.cc +++ b/paddle/cuda/src/hl_time.cc @@ -15,6 +15,7 @@ limitations under the License. 
*/ #include "hl_time.h" #include #include +#include #include using std::chrono::high_resolution_clock; diff --git a/paddle/utils/BarrierStat.h b/paddle/utils/BarrierStat.h index 352f641b6e..a9c925eff6 100644 --- a/paddle/utils/BarrierStat.h +++ b/paddle/utils/BarrierStat.h @@ -25,7 +25,6 @@ limitations under the License. */ #include "Locks.h" #include "Logging.h" -#include "Stat.h" #include "ThreadLocal.h" namespace paddle { diff --git a/paddle/utils/PythonUtil.h b/paddle/utils/PythonUtil.h index 9e2a1c360c..daebaffc85 100644 --- a/paddle/utils/PythonUtil.h +++ b/paddle/utils/PythonUtil.h @@ -13,6 +13,8 @@ See the License for the specific language governing permissions and limitations under the License. */ #pragma once +// clang-format off +#include "paddle/utils/Util.h" #ifndef PADDLE_NO_PYTHON // must include the following two blocks, otherwise, @@ -33,13 +35,12 @@ limitations under the License. */ #endif #include #include - #endif #include #include #include -#include "paddle/utils/Util.h" +// clang-format on namespace paddle { From 81afc1e8324f399c1b3a4dffee09489e664b8678 Mon Sep 17 00:00:00 2001 From: CrossLee1 Date: Mon, 12 Dec 2016 17:34:44 +0800 Subject: [PATCH 092/265] translation for imagenet_model --- doc_cn/demo/imagenet_model/resnet_model_cn.md | 286 ++++++++++++++++++ 1 file changed, 286 insertions(+) create mode 100644 doc_cn/demo/imagenet_model/resnet_model_cn.md diff --git a/doc_cn/demo/imagenet_model/resnet_model_cn.md b/doc_cn/demo/imagenet_model/resnet_model_cn.md new file mode 100644 index 0000000000..6b2acb160c --- /dev/null +++ b/doc_cn/demo/imagenet_model/resnet_model_cn.md @@ -0,0 +1,286 @@ +# Model Zoo - ImageNet # + +[ImageNet](http://www.image-net.org/) 是通用物体分类领域一个众所周知的数据库。本教程提供了一个用于ImageNet上的卷积分类网络模型。 + +## ResNet 介绍 + +论文 [Deep Residual Learning for Image Recognition](http://arxiv.org/abs/1512.03385) 中提出的ResNet网络结构在2015年ImageNet大规模视觉识别竞赛(ILSVRC 2015)的分类任务中赢得了第一名。他们提出残差学习的框架来简化网络的训练,所构建网络结构的的深度比之前使用的网络有大幅度的提高。下图展示的是基于残差的连接方式。左图构造网络模块的方式被用于34层的网络中,而右图的瓶颈连接模块用于50层,101层和152层的网络结构中。 + +
![resnet_block](./resnet_block.jpg)
+Figure 1. ResNet Block
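The two units in Figure 1 differ only in the transform branch that is added back onto the identity shortcut. Below is a minimal sketch of that structure, assuming plain NumPy arrays and caller-supplied convolution callables (`conv_a`, `conv_b`, `reduce_1x1`, `conv_3x3`, `expand_1x1` are placeholders, and batch normalization is omitted); it is an illustration, not the PaddlePaddle configuration used below.

```python
import numpy as np

def relu(t):
    return np.maximum(t, 0)

def basic_block(x, conv_a, conv_b):
    # left unit of Figure 1 (used in the 34-layer network): 3x3 -> 3x3, plus shortcut
    return relu(conv_b(relu(conv_a(x))) + x)

def bottleneck_block(x, reduce_1x1, conv_3x3, expand_1x1):
    # right unit of Figure 1 (ResNet-50/101/152): 1x1 reduce -> 3x3 -> 1x1 expand, plus shortcut
    return relu(expand_1x1(conv_3x3(relu(reduce_1x1(x)))) + x)
```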
+ +本教程中我们给出了三个ResNet模型,这些模型都是由原作者提供的模型转换过来的。我们使用PaddlePaddle在ILSVRC的验证集共5000幅图像上测试了模型的分类错误率,其中输入图像的颜色通道顺序为**BGR**,保持宽高比缩放到短边为256,只截取中心方形的图像区域。分类误差和模型大小由下表给出。 +
+| ResNet     | Top-1 | Model Size |
+| ---------- | ----- | ---------- |
+| ResNet-50  | 24.9% | 99M        |
+| ResNet-101 | 23.7% | 173M       |
+| ResNet-152 | 23.2% | 234M       |
+
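The evaluation protocol behind these numbers is described above: BGR channel order, the image scaled so its short side is 256 while keeping the aspect ratio, and only the central square region used. A sketch of one way to reproduce that preprocessing with Pillow and NumPy follows; it is not the script that produced the table, and the 224×224 crop size is an assumption taken from the `mean_meta_224` file mentioned below.

```python
import numpy as np
from PIL import Image

def center_crop_bgr(path, short_side=256, crop=224):
    img = Image.open(path).convert('RGB')
    w, h = img.size
    scale = float(short_side) / min(w, h)                    # keep aspect ratio
    img = img.resize((int(round(w * scale)), int(round(h * scale))), Image.BILINEAR)
    w, h = img.size
    left, top = (w - crop) // 2, (h - crop) // 2             # central square region
    img = img.crop((left, top, left + crop, top + crop))
    return np.asarray(img)[:, :, ::-1]                       # RGB -> BGR
```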
+ +## ResNet 模型 + +50层,101层和152层的网络配置文件可参照```demo/model_zoo/resnet/resnet.py```。你也可以通过在命令行参数中增加一个参数如```--config_args=layer_num=50```来指定网络层的数目。 + +### 网络可视化 + +你可以通过执行下面的命令来得到ResNet网络的结构图解。该脚本会生成一个dot文件,然后利用我们服务器上已安装好的draw_dot工具将dot文件转成PNG图像。如果你不是在该服务器上运行,请自行安装graphviz来转换dot文件。 + +``` +cd demo/model_zoo/resnet +./net_diagram.sh +``` + +### 模型下载 + +``` +cd demo/model_zoo/resnet +./get_model.sh +``` +你可以执行上述命令来下载所有的模型和均值文件,如果下载成功,这些文件将会被保存在```demo/model_zoo/resnet/model```路径下。 + +``` +mean_meta_224 resnet_101 resnet_152 resnet_50 +``` + * resnet_50: 50层网络模型。 + * resnet_101: 101层网络模型。 + * resnet_152: 152层网络模型。 + * mean\_meta\_224: 均值图像文件,图像大小为3 x 224 x 224,颜色通道顺序为**BGR**。你也可以使用这三个值: 103.939, 116.779, 123.68。 + +### 参数信息 + +* **卷积层权重** + + 由于每个卷积层后面连接的是batch normalization层,因此该层中没有偏置(bias)参数,并且只有一个权重。 + 形状: `(Co, ky, kx, Ci)` + * Co: 输出特征图的通道数目 + * ky: 滤波器核在垂直方向上的尺寸 + * kx: 滤波器核在水平方向上的尺寸 + * Ci: 输入特征图的通道数目 + + 二维矩阵: (Co * ky * kx, Ci), 行优先次序存储。 + +* **全连接层权重** + + 二维矩阵: (输入层尺寸, 本层尺寸), 行优先次序存储。 + +* **[Batch Normalization]() 层权重** + +本层有四个参数,实际上只有.w0和.wbias是需要学习的参数,另外两个分别是均值和方差。在测试阶段它们将会被加载到模型中。下表展示了batch normalization层的参数。 +
+| 参数名 | 尺寸 | 含义 |
+| ------ | ---- | ---- |
+| `_res2_1_branch1_bn.w0` | 256 | gamma, 缩放参数 |
+| `_res2_1_branch1_bn.w1` | 256 | 特征图均值 |
+| `_res2_1_branch1_bn.w2` | 256 | 特征图方差 |
+| `_res2_1_branch1_bn.wbias` | 256 | beta, 偏置参数 |
+
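At test time these four parameters enter the standard batch-normalization formula: the feature map is normalized with the stored mean (`.w1`) and variance (`.w2`), then scaled by gamma (`.w0`) and shifted by beta (`.wbias`). A minimal sketch follows, with `eps` an assumed small constant rather than a value read from the model files.

```python
import numpy as np

def batch_norm_infer(x, gamma, mean, var, beta, eps=1e-5):
    # x: activations of one channel; gamma/beta are the learned parameters,
    # mean/var are the statistics loaded at test time
    return gamma * (x - mean) / np.sqrt(var + eps) + beta
```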
+ +### 参数观察 + +使用者可以使用下面的python脚本来读取参数值: + +``` +import sys +import numpy as np + +def load(file_name): + with open(file_name, 'rb') as f: + f.read(16) # skip header for float type. + return np.fromfile(f, dtype=np.float32) + +if __name__=='__main__': + weight = load(sys.argv[1]) +``` + +或者直接使用下面的shell命令: + +``` +od -j 16 -f _res2_1_branch1_bn.w0 +``` + +## 特征提取 + +我们提供了C++和Python接口来提取特征。下面的例子使用了`demo/model_zoo/resnet/example`中的数据,详细地展示了整个特征提取的过程。 + +### C++接口 + +首先,在配置文件中的`define_py_data_sources`里指定图像数据列表,具体请参照示例`demo/model_zoo/resnet/resnet.py`。 + +``` + train_list = 'train.list' if not is_test else None + # mean.meta is mean file of ImageNet dataset. + # mean.meta size : 3 x 224 x 224. + # If you use three mean value, set like: + # "mean_value:103.939,116.779,123.68;" + args={ + 'mean_meta': "model/mean_meta_224/mean.meta", + 'image_size': 224, 'crop_size': 224, + 'color': True,'swap_channel:': [2, 1, 0]} + define_py_data_sources2(train_list, + 'example/test.list', + module="example.image_list_provider", + obj="processData", + args=args) +``` + +第二步,在`resnet.py`文件中指定要提取特征的网络层的名字。例如, + +``` +Outputs("res5_3_branch2c_conv", "res5_3_branch2c_bn") +``` + +第三步,在`extract_fea_c++.sh`文件中指定模型路径和输出的目录,然后执行下面的命令。 + +``` +cd demo/model_zoo/resnet +./extract_fea_c++.sh +``` + +如果执行成功,特征将会存到`fea_output/rank-00000`文件中,如下所示。同时你可以使用`load_feature.py`文件中的`load_feature_c`接口来加载该文件。 + +``` +-0.115318 -0.108358 ... -0.087884;-1.27664 ... -1.11516 -2.59123; +-0.126383 -0.116248 ... -0.00534909;-1.42593 ... -1.04501 -1.40769; +``` + +* 每行存储的是一个样本的特征。其中,第一行存的是图像`example/dog.jpg`的特征,第二行存的是图像`example/cat.jpg`的特征。 +* 不同层的特征由分号`;`隔开,并且它们的顺序与`Outputs()`中指定的层顺序一致。这里,左边是`res5_3_branch2c_conv`层的特征,右边是`res5_3_branch2c_bn`层特征。 + +### Python接口 + +示例`demo/model_zoo/resnet/classify.py`中展示了如何使用python来提取特征。下面的例子同样使用了`./example/test.list`中的数据。执行的命令如下: + +``` +cd demo/model_zoo/resnet +./extract_fea_py.sh +``` + +extract_fea_py.sh: + +``` +python classify.py \ + --job=extract \ + --conf=resnet.py\ + --use_gpu=1 \ + --mean=model/mean_meta_224/mean.meta \ + --model=model/resnet_50 \ + --data=./example/test.list \ + --output_layer="res5_3_branch2c_conv,res5_3_branch2c_bn" \ + --output_dir=features + +``` +* \--job=extract: 指定工作模式来提取特征。 +* \--conf=resnet.py: 网络配置文件。 +* \--use_gpu=1: 指定是否使用GPU。 +* \--model=model/resnet_5: 模型路径。 +* \--data=./example/test.list: 数据列表。 +* \--output_layer="xxx,xxx": 指定提取特征的层。 +* \--output_dir=features: 输出目录。 + +需要注意的是,这些ResNet模型中的卷积层适配于cudnn的实现,因此只支持GPU上操作。由于兼容性问题,它暂不支持CPU,我们以后将会修复该问题。 + +如果运行成功,你将会看到特征存储在`features/batch_0`文件中,该文件是由cPickle产生的。你可以使用`load_feature.py`中的`load_feature_py`接口来打开该文件,它将返回如下的字典: + +``` +{ +'cat.jpg': {'res5_3_branch2c_conv': array([[-0.12638293, -0.116248 , -0.11883899, ..., -0.00895038, 0.01994277, -0.00534909]], dtype=float32), 'res5_3_branch2c_bn': array([[-1.42593431, -1.28918779, -1.32414699, ..., -1.45933616, -1.04501402, -1.40769434]], dtype=float32)}, +'dog.jpg': {'res5_3_branch2c_conv': array([[-0.11531784, -0.10835785, -0.08809858, ...,0.0055237, 0.01505112, -0.08788397]], dtype=float32), 'res5_3_branch2c_bn': array([[-1.27663755, -1.18272924, -0.90937918, ..., -1.25178063, -1.11515927, -2.59122872]], dtype=float32)} +} +``` + +仔细观察,这些特征值与上述使用C++接口提取的结果是一致的。 + +## 预测 + +`classify.py`文件也可以用于对新样本进行预测。我们提供了一个示例脚本`predict.sh`,它可以使用50层的ResNet模型来对`example/test.list`中的数据进行预测。 + +``` +cd demo/model_zoo/resnet +./predict.sh +``` + +predict.sh调用了`classify.py`: + +``` +python classify.py \ + --job=predict \ + --conf=resnet.py\ + --multi_crop \ + --model=model/resnet_50 \ + 
--use_gpu=1 \ + --data=./example/test.list +``` +* \--job=extract: 指定工作模型进行预测。 +* \--conf=resnet.py: 网络配置文件。network configure. +* \--multi_crop: 使用10个裁剪图像块,预测概率取平均。 +* \--use_gpu=1: 指定是否使用GPU。 +* \--model=model/resnet_50: 模型路径。 +* \--data=./example/test.list: 数据列表。 + +如果运行成功,你将会看到如下结果,其中156和285是这些图像的分类标签。 + +``` +Label of example/dog.jpg is: 156 +Label of example/cat.jpg is: 282 +``` From c91373e52b2a7d5594b687debf4d0649071c1ee8 Mon Sep 17 00:00:00 2001 From: CrossLee1 Date: Mon, 12 Dec 2016 17:47:13 +0800 Subject: [PATCH 093/265] translate for imagenet_model --- doc_cn/demo/imagenet_model/resnet_block.jpg | Bin 0 -> 22422 bytes 1 file changed, 0 insertions(+), 0 deletions(-) create mode 100644 doc_cn/demo/imagenet_model/resnet_block.jpg diff --git a/doc_cn/demo/imagenet_model/resnet_block.jpg b/doc_cn/demo/imagenet_model/resnet_block.jpg new file mode 100644 index 0000000000000000000000000000000000000000..e16bd3c624030c4c09b358a015b491141b42d8f1 GIT binary patch literal 22422 zcmce-2S8KH);7FD=q>aPp@UTENDGR9h)7YofHdi#6ahh^h;&3iK|nz1RisM|9RZOJ z0@8wwrU?p02+6b^LY!4Ct zNBbBLfb{n^V4pt=co(oQ;OK7PU+5F?`YC~RPe}e~4as@}{X+&0!0!R#HlS+a?ic76 z;O=*oUqR+LpnA%{i1es;u>6!oeu@lJyoqzkKrQL|EviRbxKj~CG_93^j*f$gnX#V1 zIbCqF06=@@oQJO;2?GH5_y+o$>7U}avbN!;cn!{x3LppM0Km~X;HuFDy^BYE8tCfq z2Y`)!_W$R{-ptQ=0F%;ZO!)c#tp3*oM(3;kf#3p|fXx+LTmzgzISQ1$g9EP~)nh=J z#mVbPCOMiTi$6F(Q06$2-F}m$ezy5d);^M5e0*KNHb-N=>f+;aB=>@H?DZg5P==lZ z<>>1kuEC%@17V34*AQ@=0HEV{HPqk3%{`D`-1!*4+=&ya{AXN)yE!`KKd2FbhNho;36x_olsVmmX`sS|IhM&{PNFM|7YRo+5XnztLe{Y24Q>kkGg;S z_8)bA`Jhhif@^dCA9YTd0MK+7066CVQ78Na0ANu7&^Y=}^I<#E7x%!xt7@{c!NI{Y z9@{^@=)NB8@4?f6f-UU3Za4&*;tRA)bLzaW49fUAzquKdz} zRpS5nhyOIJe;S9Bnd=o-e^+1dRu-U^dHA}4kK5P9BhbUom*2zpzk0&|;m`hQ3`gQ` zdJPK9g#>`bK?YzLWdWeuuK-eJ8UT7J7yJhD=W{zpWdR($d6w{xzv(?FgWv!B{QrHB zB!mAX3Gi^^KdRO-yU6bxEunfB`H3C%^{?0pfr(AP=Yj>VP(&3!DXv zfs24OU=KJ0?tl;A4+I0@z-=HJNC55w4}mNo4=4o6fM-B0&;+yrT|hrD42%KOzyh!W zyazS`46p|rfSWS~gbu)n${;n6W=I!g5Hb##gRDZhdvS_k&vSP9ZvOcnDviD?rwP z$g9aa$;Zjp$ag6yC^#u3DKsgJDI6&RDWWMdDJm&Y6cZHhDR7iDlyFLUNR18!iRBBYlR4!DZRLN9@R4r7aRPU({s2Qons5PlCQhQS0rhY{I zjJl8dE%jF#8X7^G6EvnY9yGUT9??9fc}cTM^MjU&R)SWC_7ZIn?LFEO+D_Uz+OKqU zbfR?HbT)K>boc1W=z8du=y3GR^fL4Y^e*&B`fU0}`U(2aFe;b`>?F({b_13UtAmZg zwiu`wL>Y7#92p`QvKg8gW*K%FnHl96jTwCy6B)}H2N^#xQ80-z=`y)6MKKjH^)Rh5 zLz#t`b(o!*?=TlK_c6a`A!iX|F<|jzNnojBdCh`hWnxuiwO|cl&1P+9U15W=iLx26 zd9x+6)w0d7?X&Z;pJaDsk7IwvKEeK-gPTK}!<8eRqncxyW1o|sQbCHXb zOM=UkE0`;ntB-4on}u7Q+nGC_yOw*NhlEFh$BZY8r-0`b&n_h^mOYNQTIO$ahgOQA^Pn(I(MPVjN-y zVmHJp#NLY2i6g|XiRX(?9V0)cdd%xs&ap8G5(y;Wl0ao9LaGhGN}_% zzEb&8v(mKEC#A1TS4gkQu*sa0iIi!Q`7A3eYa^Q?J0MFqu5{e{c;WE{IVQQYa*=W^ zay#-8^3L+v@>2?Q3VI4ig%=7~MM*_h#azWVO3X?|N-;{^N{7lS%KpmFl-E^+RW7SM zR+&|0Q8iIbRDF4Z?8K=P$P;ZReyAy{1*+AmeNmTI_fjuc|D++N;i^%nv8E}g>8SZs zb480^>$27ptrdg-!V!^=Kx+$WyJ#0{e>{2Yr02=Xlb=r=KjnX_;na5>HJzI}ou{Fv z^-ss2eyz)-Yp$E6yQC+m=cZSohtXHk57X~FLw3gSOv;&A16~7XgE9lmS(URl&-NP9 z8eTBWGDM$~IOl(^#fZe{tWlcL+w&slea|-;1I7l%X~v5tVkXy2T20AKO-!>*KbXmx zg_{jrV7}mRq5Q($#ZwoPFD{sin+KV9Tfi(XS(I6Pw>)i`YPn)1Yjx9V*qY1Q!}^5{ zg^h(xvCXcnuI)qH_m`9|MPHh+6SE7k8?@)L_qK0$fH^oi)LkaKYop?9Mwj5N$StUFvF{C4>2 z&678uMnEE*BD#@6$SCCdTl%+3Z&Tm)x;+wkJTf)%+a1e0Em6Etx1!eW>ff!1rjPcI zo{dqD$%`e8^@x2Prx2GJM~HWcA4)i$ke+}~bWR*fl1qA&M7($9-s@!LM(fsrR@`CFH9||oB2Z~gS%8I#) z6N(Q@d`nhJ%}RUARLaWAdCHS3NGgIV)+;Ynj#Ztmdhty9+0$ya>V)UO^PuM&HJ58< zY7J|<>s0HW)r-_WZeVVRZ-g|4He#CGn^s;}zZh>m+uYrv-qO%2(^}Fd(3aiK+Me7& z-4WG6MBPB`c3$n=>T>H^>vrf~=&|aV>NV|s-DlYMvR|*iXW--j>ZRt(wn4SQ<{{Og zreWpb##c(O8b*{x8eS{EZX8t^eKB@otYutdyki0}(KUH`vVY28YIyqm^!SYV%$wOu 
zvny|0-h7(#p2N;xpFdc*^_J{y{363*`V!Al{_?Tq>J{ac_SMs?uh19J3v146o9_bN z;onDnp!<;i5&p64ll-T)b-nem4cm?Pn^!jvwxYKgw;z8N|6Kn?`^zhg73MwGA4}Xx z{L1;Y4Jr9Gd$gYWS;PF(2^)gJ@hA` z*%5$&+yuV|NEykP1msSVGn+b62wr26zn$`wQs`7o7pvJgMp(hgKaz@?jh%y&OGH#m z{FsEIlCp~G2{oP5x_bI&49;G-Xl`K%x;M@)u2<+XM74UJ7Nnp?Vidi(kZUJedTOioSD%)XhMUq!FId;j6% zr}Yi&&ew0dd*5+C_K)TTBDLS9_2TXP*RGcc|k~mk0#DYN+uvj&UDh0 z!toljp!{u0mQyKDYr3d}6wENJPX6Q6Y{H7GBG{v;{hZmqHnGV6Xl8#->>u-50CWJ7 zp9M-n0wse&p=9J_U?HajJrQzpN@~iVh5GkG`?Js=71*DJ2zCMi`yeGHr2zkDprN8+ z__qsj5%egzi1Pp~6olH0P(}a&9OCoi#ex3^7nC^mH}`65T7xC-`ay7KdoGQeiM?pX z!lDBknUA^dC?)w0pMSwmQW6rqI>fW0OhrlpD>H%Uvi#E(VU9O#>%i3q~Rb=G`Z>KjMCoJIzWEC1QkFhQjjRw%7lCSq#k*3FN);l>!} zwUTbxJILCS&f(;Eg3NQw%W1qoepQ9TE1|FRbXsqQSaOSkpGD-?`)VnMJK}P%(B53$ z!yEz3+kN&sKJ|0H)w_)?_dSEviprOJ$TAyT8J;wzih8^?cy)_1*32Ecbwl!Ad3#Q}YCfL#R*@+8b8E>m z71&{)P->Aq{!kG^=4KX7?qQ zNxTxewPD7&CUw~?re{BD-sIlCaJovG$(gK0bMwxW#7BChaxayB5CLqRJ?;lKc~1q@ ztr-Nj?K**H+PmT9HeModA;7yuv!S;!G!;Vxif+~qaAse<#u4!Tht)eEqC--$)RjL{ zp7N8fwE1%)kj#zT|B(06yc`hEj|Yh3zr3K(kFfdLy^rT{h90HXZ;Z-~z0k{~hP*j7 zac<3lSgl9Hw0fa}vFa%e=TZep^gfsma=|gpT7pDioC=#qAP=!50vS^g6U#g2>yr<7 z@Ki*g%p${{2x#*`hyYC*TK143kCO`A5&#L>Y4k-rlV8nfCtD1X8>b5s2c}!CY@TkO;ZctD*&; zaGvmK(6l5l(zH(UIL4+dHgu)StH0Fld^-S8AIj(7(t_h?hyWFPr)<;`H_mnyb*n?_ zSpM8XPf&m_fu(_cHl;UZGz1G{(3%r-U7EXcLZfE$5|5I?C=yzt)AIW$8}(<70j$g*P(yAFBsZy^v7Y$2n)M4&cf z>nkhqIIzevrlq1QH&EJ|Xp{5WX_Xh^iF*55@6@}y)ngJE*!%#& z$ZbvRFfLbs2uudazS-7x;W?gYj9TU8ni&)Ky;6;u(fQn*=s~`gD44i?=$NU}wR1Z8dmnAsCYjhpltL`i$9@Lc)5k>H7H; zEYh)9HMAY85lq|KzoftlO|>w03Y7|d{qbKJ>ops(t=lUjTt;v`KNei%K_`KC%6U3) zl={*oTjdn50Mg$R7_(%)SjV8fmk+UM88Zt8gMLRL1e(v^&!^9}WA<2X9l2jfep4BxaKkNl+jH-Cz0~@bv|M zv&>&SJiHve=#m~7lC`%;UF*LkA;{lde!CN(g*fnp4D1PG^b<8S(S88fi3d7|yE9iJ zcVlO>C+p^7;f?TMOMlLyRPxjb!$)JU`~CQh-f_a@9*V$9kv)odH6ri?l}H3ItypkB zkQOHb(H^)UB5)E3zWS@d=xTzE2}OG!R}a&vW&sxQ(Myo{)Hdkem9SBq`7B#-)AyFV zJ-4_vhms!Ur@@+svvb}Wx~20A`XVEf!-^f3J8UTMYH43wsxErFf!d*Np4eWC7vKJ}rR;YQR;QqT*ay)wcmhM_ zw?PSvqx0E>c=#OzCTR-!qp;ww75}%J4IAU6M!T2U?AGl89BNg)V)6OgB=Vi5CF}T1 zk$?ba!R{D$N5PjgYa)p{C#WTNWA*}YuECgq6~9OV*M_D4vL}~l4wL<|GlOI1Tx$}pZy73 z#5^j=e#p*q*cwZpqS11*Ej_&s>9on77Y^aC&>d6rxjEu!p`{?3>94HcjLA7ZjuoZw^ zCKTBe9qHTFBoPqa1UJH((E;TCL(hl5nO6vL@(^DQVBz>2m7Y=1l)4gu5;(mjYW7x0 zGU0so>POHX%Y*hfIYu11{0US}$(ID#kP5U)OyD;2Ih^H_fy&J#fvJM$)}LpbOFW*3 zni)yIUybtHnRa?p&S^OGsWm#lzS*7~JK0Y-h7rxq_q0Jq`D!??dJPZMVR_r{wl!rX zb>wLwzPJ#?kYA=?_CI*Cns+ zpCbbE$9oi|bbi#@)?abG7nCyR?9FzyB>5aJ4x2yOGs7n{PJnytgiH=-an#l~tBGWt zeFZm78>(kMm(rqF86b4e&e0;xcZ>?x+GQb+Q|N|sS*|8`FJAe+nin^FezHw=)kx=j zTjRa27d!5`D{*!VUainf&lWPEF%ll+0#aYTc)94eQW8T@ju=AFhx=eEj4%(DG_V%Y zGiry$_4nU$gd4c3_t3p>ZoYn5iAKC2YZ;PH#KzpkgG(XUZ3{ZUW(_;?D4m4_jog#bonm}nHRDeFCz77 z0Aj9ZAT~yg-^O{*YoZ1g4biABV@OC^k3Gu^452J^jiXvIZ1MQUJ}R4KMsEJil9aLL zH{|-scX8AVt178_TCIENBk+Sz*_dd_3-rSQI@34%WiU{S$P7|S2;l`Mbp zlJ;uY_b0GN5gddu6q}aFCLafuEv-1j7fFqKb}?6)OH*eUtCXH_m9whBdcvAbOy5|> z@;390{xz?comJOSTu65#!%jG38x7`T*S@z})$4{TkLy|Qd;1b?eD0M<_5j?R0RdNE zeh@hU5xnj=M2>Urgj1o%3EX%AocikW-TB-E?42$x!3mM6hQ09*(eq1mYM*X)eA~Pt zM>f(vov&efDy>z%MgRS+M-KY9hZxztV$9|zvrkKc;MNfPZn{I@jaL)9jsfvr$$ZFs)3;8a`lL+jczseGg#XE&kAWrT?U+W8 z5`i}#Ly17?vvv@(*h4@gaaOi7?||nEVwZKcV)@h00jXHo*?miRkHO#9E$0wnOAbsD zX?g3TW&MybB@^m=)xu0qoByi=oG>y!+kYKfGT=jlj%fBlS2KhlZE6fvx1$`SB!O9n;U$zCN6~P^cbPU0>1J2FHBtx2MG}_dD=fH{5;KaiZfnYJ4Tt!`WoU zMziYPa1%pKtRjZ_QSXby=>DxI06Kz|Fnt})u5lqhguBO{vpJ!c@no=Uf{%4+0Hgf}fs(OP05S;MB|ZWs^LWc{`%)wcIG zwoFRPrA%>>Q-&Jj@{&8m-1KoDrex0-mx0L}m`C){z!?|3xV|T(J7RfdTyvxwsNUAk z%-Z7VDAGqs<()n|873mw7jm4_McK@)*IpTk5A@jzLzfWwk)QM{esLY-Si zUE-I;)}$97ZfQ26Ld(MsqJ$=9_~wGdhBqwt98*a*u%R;$!~%V^+wNnoqmyc$-_j|)}xS} 
zUnMKT^}30WQ)|97r=?#uUI$~ihKznjkiZ46QeoZ^xN*(FS_Mk4t2C!nZfI^FKDG$H zcVEX(F}S9-yS6_`iu8m;Z?z(@JYRxF#Uocuya?<#{b%JNi8wtX(3>&87;PGOC^0`_ z;QZoNv{UNx*sRFW5_^o zJxCp~uJG}I9#Jta=Ex(ugZ#FJR{Oq>fzcxeC&P6>3#HJ7+P{Y71znQ=(56j+Htms? zL`b|W6n_(kT(fgHJL4B<6|R7;(z5=R#BMgzmR-@hNbN54P;ntFBl(q~M|@rBg_`Jq zzMBYL00e%DO(5_i4FQ24r7;Nno+aZv{6OGG2?9Tc+#}#;_d7>0u?zw~MiBT3Xn??P zs0FcF)ByrNArSbT3IT!N!=NwUOek(4dkn*m0Ezw)Ac;BxBod!MK$3n0NVJoW00|Wc zNX%zJJ^`+%33jUVioT|qJznhk~tF6cs*i-8fvRH-vLc5lQ< z@6#t$Z?j{%`)9Z%rD^)v_Vzy2^sz*cV9yE)(n^#kfw?psz*RDy*@D2AL<0oAQDPwQwbcJ*=Iw-d00aAF`f$u&Xa0TvXqWu!^aKB9=9|l( zp1^>^%+#+0)}X%k+jl=_9+sf}*f$t@N^OiCgZ6BR0Hddh#e{)RV2TlYCT*EuWo%W$ zZ9#A>KS^WoK3?!2N_mNSd!iwr78y;>f(}c>3}VL%!HL*9hlLTcORAAiNaDr+yyy}A zC|EcYD|t5I6fSACN}6dHq<58kQ+ z$sWJ1Je;w|GT!O;m2mko=SGNK1~FRhBnsCD#|$pljvg|i$dRj>B}i=i@9U6;a?RTp zil7C7YuI@ru*QPy5sMf`eAB~&PdjQahY0jb{#*x84bC5?BR?e_fgY-D(7GgAh{YjX zjpuH_=r`)RFkxNegS?enl^yWOJCc-xq>SVz$vd(t;jGA)HI>NOoB=|DC-6gPMSlx* zptuQ#68Jplo4Oq+mzM2u!3Mc#^i2xmRO_PD*P@}q*KgG+$ca2kH8`QPvHaJOFCK!U zdbdpAM82uX2SEvo`PN0qk1%XP7rd^idvso?i;v#;HUU0cpUfB%V3VV0C)BNGH2wZV zkyntDfNLm!@-SoPcd~>L9;<`i#xC2m<%rI7d(z|qo0u*-rpYTPc(ly7W({7)!wT1m zAr+DOnF>ay*VGSo5^whukIYp&rQWwb22{aDeM16YV3x(?6jcl!og0s4&P`P{%- zOm4O!+n&vxm(hEn;SE+*Rc&o=&)^oCPsNBq$J-p)leE~wt#LKjeK;)1R_{)inxe%S zKc8)~_e&Yq%8WI*-#IzWr}YQ7kf}c}ysxd8eAiwa8{E5`$R~zn%i8D5z_NEO@lAMt zK9@2mbwSGJetWXh^OSSsA9&I)(nqYTsscL*@}2{FkbUD~dq~mAD|vRJ%}%KAiC0$g zo%+q2l2$3EMdpne!}7<{g*^nmt|g3Up9_2OdU-1dZ5-Xffk}?6S4YsSXTcL9M5<#y z3pbAkhVjaCKhjVt(`-pdapR%3QT@#5^n)8W$e(IEqD3Du!Y4SXgoy5dV=a4pt_*$c zY_bU0P@5@OlU&r-2~~V)rE{&wW4%nBJ>C3N&0Cw^`~9zqY$4~r-_c?X_rw{Ogfn9k zBPd>Ac|S%bCvFA#i1zPw$vT77<|er6%x^F0`tX~Q_&KN^X9^s84#0zPaJ0;AEXl_{ zwk)pm$*a6hKJ;H->uUQxiROGX5bZPA7}TYDVw5LfX?7SSAiYDvO@wx{QNDTt_?l<P={w5 z&K?=3xe|iGOf?a3&bvnhb|t34clW0oMx6eYz3xPK_<_EnKIkigzT8xW;ZI-j*Zr4T zyn2+|gP=BeV2$^lB+&Le(as;}{%W`6h2F4ANU*)l+Ga9e@00N(Vdzq9=Ow-OFMvb( ze3TYm{lFCOge$AU!Movf;hF^=YNK(AJ`o=+)LOx1EA-^!n|eCwdr`Xe^eiK%^)w5| zT$50SNoCcy;H_P(3P{f~7$WN0{#8w%2V34cR zAsME+Ch^^@(dyxYe2MbUJMg&W25Zn|3s`gOmiFphV#40VV(6=?Y&`8@FH#LndH7Fi zHoQ_WP#BQlbauHWz*OekK%Jruub#3Tjez2cDmNoEaY-wN3ATZC7?FE%(uL*lkw+VB zej1UNYB(&k+1+`9MuOk~fqx^9O^XW0jrMKwqbq)jPU0Ij&E>w@&GD`|&c!uRUBQPG zn=ZC=OVb~;3ZVI^M82DekVn4BM8gkNjDLFC2UZ|*uEe_>`5?WBec#;Q{rNxRRFb_~ z3b>|$d9v?4M1U^*MC+vDwqbuRf7;p9FXlDbrje8a6H2@WnLaD6wO6itiD;ke1bSB< zTOjd5c?~LuEC|p@aF-H+uLgf`3V*HF{~xw=pg?w=371@3MDCemMO!fom!|49y}G5c z2Gx9~(yOj{_*Q0IJ3C(cIq7oBdkKF9j%t7KO$pd5CIa~}E2DU!AUMH8BE4V*HScUW z`MEyl!ay!nOI>^1N-n1cXKAegOT78`Z7f{i!F~NAARK%nNDDF5$$`sr(X;(|3`!&V z=GlG0Mlv~MyHk_;ZpGHgb+!YvW;IrKqlYD0?h&HgaL`77Cl<;524RXA{R6&>HD`=0u zT=}|fctnL!e%uDBXY9mUSwEoY3zP)4$_kc30qa#`v@< z2f_e?mk4Z_ms_I7jsngU#F>9>ZyEWru({WSbH&In_c+kwJzOiao|bwpabjr|Y~~Bo zizoGFw`FB6jD#|O@0PUe20~p1P3GWUcs^Xo>X#%GU7&1y1WRLk{e*@N^N0`MQ;NCI zr#*!9o7aW`j^g+H_lI<#k>b#aigzQyE3$oIutfXgxGqdJTC@1JN|JQof_jy@={S9n z7uWiDn}CLMnyF1%m&4O2I!;)IXK1TKyl&X=gx;Ghn{WGRx?JtMlnSQ;DiPu*l`(HE}G#y z_K*l9Yys)|X}e^7=Ge8?n+w(r@3u92!ka|{sh^aS(e@sO>H^^fX!zXsfdMpoEZzb0 zbpRXU+ta^)C9ilhebVF9-kMstzYg!n-GO;0+t93I)De5f#u&Ig`X%2)(BMyi2ulZ( z(FL-D$a+KrYL#VTTXDJJhrRy1)-Xr%tDUC7y6Xx}Va~-yH++7E!oaXsLoyil>Nf_t zIuI)ruWv4I{*5>MQ-S`-TA;WU4sL0w2^cZLfcog6ZqUZ}{U1Ghw@_&(X9uTtHmJV3z{1rau~3TH|;qr z-P5%IFxT#MRWpH#U$KlEHo)D*s`n8@u;i_)yy_-(m>+dGBM>sqEPXLe)D_u%r*(QH zK(vqMec^pQh&N!%51j$spm7vahYW5zf<2FJ!QJM|?bR2_1GTSe2mRc_4f~wQvx3VW z*lnqt-Lf`Y_I&(ujSp5Ht}vlRg?Ga#6wUQi&g0U3Je6h%0`~QhqrGPKw&#T>uTUwz z8xFp@{$5{xfJL38PvCO-j0W$jUsMOljD`{#NwC-LN(@&k=;S8^9rWX^EnlN(Cl-fJg?Ftb)+OpYn1^#8mMU&v(3@3SW;H^OGJ3T1 
z*Mx?xZdUjKxVz43Wc{->)H5yNHAC4{a<4A6r923x?9KlHOSn>>eG$6_r)@Juf1$$p zp~I`LJ+n5%wDz&j%Gnutr)Q{e1&z)LSPAEH+*iqF*?2}%N{YUvhMd2HX2ext)cZQ9 za9+=hWRg8ryiiPDS>4C)S=MKqPhC4-XBPkT3d)t~6-iR#(2#jJZsg`5X7>(`qURj_ z)_QvQE*PVgbjU_AYKR?B>^NK!=Q98vhfRt`Lf5J79ch|Vq zU5=&dxjCi{^BLO!`!is;@LvGdU|8!{a^zS1jgFweYh%9m5rHIobzI)cnO*K=>ZK5` z9yOoFr=dj->-XQS#j`NZpaBT>;6wQ41yl@M>iKPu5~!&l>bs20?#CJn#zv zp^sNeKEmp{X`op*SzpFS{EpSV;2>5{5CgILQxK~k!Z9-l5UaBpgKn4(h}HLxusU{t zKrQKWph}nq`GI3@De_lQY|IbssP%!r?ik`)1@6Rb5rI^ixo2;2vb2&KO+;t`<)l5Ka} ztXCa)+$}neVJzxg(R6IzktsJz-s%T?sibdn{|GqsDyd=m;q&tx458#L)c-Zw`5Q~~ z-|;U0%)uPVpy!>20zGd!5MO1kemPV*`J3x4h6Fut@MWt}=vm(PjRZY!Ef{Vg9|tm< zhz26ib5m&bOB{1|uq7WoR&2fjUfUOXjWvJS!@GO_*jHHA1m&!}Q2En<`bNqXkW)(n z!zdIn+g}Jw188pUYA#$Dn(g-Loo9n3{rqhcyPLXg@gCbOQPybdYl6Mbw9edTDU^1D z`()?7=;N~2s_F4YG2qj?t6{uqs6MQz_65v^BJ~Q#Dpm6Z+s8|Ow_@!p-_r!4C$_+041ibU=f)BZ5tjx@~v{v_b`Gry13er?v{;-lIB7hiT z_k^b2Fs8x7$28Q@NCsU0=g%K4r`jwF*SYixWrhq$YdmQDOIMYB7iH%QzXrFG|G3JY z(+MZX3mg_jDB|w)I?#vf+zdd&m~C($KFmwI>1k1doPHJGvT0&lFq>M1t!yT5WwcVt z$8_B+`H5ras6QcGlX-F3dUZHvFk)leZ1v5U*#@uE(h=34IAK`4$$0Zo41ewR!RbtD zZSSv-srQXRLW&{=#IU$9w*<2 z2eH4%m?O58xhjc(;Uv#u^5Pr5(0N8cUQAz^*bO`8J^xL#LBS}DwBnkVbbZR1SI4<{ zl41ky8vvVHWFcRBJ5;9##}-23-}^da1`@05!+@In(Cb%itcOxkZ}|$m4`{A^yOK+; zx!)GE^05m^hi9JH-B#$yiY_RJGi`j`3~u5o+`Oid(IltCGTLsc8bIk!sd~lR@p}3G z?tilDf*Gy#kASnnt5IJ6OH)m;^8%s#lg-3|3(*IDHef=$vqA!kq89`-6Up>=wmmqQ-`K zq8NnrNF^mnj%#Zv;TS7Jmb)XkaA*{nhKguf*y^GE>+ffA$ZF2JBS}x|6GymMQwm}Z zP7f%i=W|}U?GPCnWNeFZ=tVFR=7<12-XUy-`$sq@=Bsu5^7#p+yYZ)IWL)$wl%FXW zNj%AwG?S8Y(8Zr>^5!C4k=WMoTD$XE?E8;#T0#)?mpEsmn(19lm2(F?7W*=JAE3@9 zr)JICV1h>S*_ombZ{L4@uKYIZ$E~3JH|tA^n3$d~j3X<{Ezdn^}*>1C4a7T;vmILfypGnjYN6;?NzJ zCp)OZE&Z6A`Vg@M%_gi^JRb36!^Scu`eD9ko}ZftE~&URM64}un_XU!k*-S7E2U}_nM?mNM%AS-t|zqijq;grVW4K09D#n zyUD!@AIO8_bb4D{za%thYJP4D3_e9oO;zZv-EV9bV9d2G)ke2@QbT*XAa z@tjKU9+&am4i3*;Ekn@aM9@x42WhJ7b6F*u1x`NGnsjyxDm^SpH4F=7Q`ng=yvHDk zF!Kr5J-mZ*gYjJQP-)~N9}D!VF!G!=?|7(K>T>fGM^0{-^_gQkqj{P4z62>k_JUYh zj`#V{Uf=r!M!+fK3&03C03D-F|0C%7?+|zYktxH%5((1Sz1ugzeF}uyzTZaF&9Ei0 z!V`t9)81F6E=N)ZCh419u?>;g%FM|cG+{mt$f=64=KUKa!{0Y`Q1MJh)knmTTH4I^ z!AaIJqZ-QN?E_KOXv^FWy73$jP;r2)RAOoTi7Nuy!nbW4F!v&+mcN<&3^w={5P=a9 zFq{?+S{gF4i3rg}!f=EHmZBTMxDmmHWxq?}WvHAHB=UID*2ZnyR3qhnA1Uq5_NX-1 zY`zy;UOkwBT+trvfGb$BQ!siS_p!O98S@Z-T!+P3YUd0=m^%K6OAS9{rqVt{XD@rI z^5gRk?3YMiY2O0(q9ptgX~dmWSqY29oTMPVVM5W5#GF|aLwo1dMRlm)?xKy1EuYUk zGaEfw*WMDA5GE~aaKSt8ioV#8McS*xd~Bm|XZ5tr53!y+n3i%lABHX_`kOHv$J~?4 zf@#k>Ntw}oBeloJhh&9QG`qL=F}?LWr{4FOal%vk=TX=K2h3Z7!dQo5_=%}-glDbd zd{R^Nvk!GiPTYmEY9WvMpHfH}UR4{TnEqxt7{QKJ>6hiXWbCn0os>t1KdD(8ldg&A z3AXg(V!rkIIW99kGGwsAMzk7>>TM|^#H@}c=F#uk=^Lv1ZdYl@g=*|=(aT9)!#%16 zeXbB{V88sx!`DZ6`t6JVj0CBGyvK*s{)I>4cznO-m5j$Wm@POek=- zX)jtzDSEr^u*2FsE&VRu>a*;Z?^_2X^0h5D_sGsej$rwpv9f<7a{+Obja|)vOUA4X z%!5vNn*x@5bu{kVC~U(XKB;t5;>@m{ppOmJ%5?GboJw|;Zlotx80f-o z{>+>HMfKpnYUA(D?*Bs59g5y|+r45iroNzwBrrRWd&0eD0;Pq(pg@$qX@cM0QiWG> zXe(+}Ap8fyUk5|9_-GiF91zG=;Bi8>mVpn1U3|tFg;lvGg z<@=3u&(Lo>+L0EPf;?t0woZ!yd1l#CXb-}>I{Jo80hpzL%M2>rs6rNneC@iwFqd@dTJfqQUwVz_ntHR~vEP1dH6+7A=Gn)*12+dEXAjhS=`tERQk@&- zyD}t$Q^RvghqRp}D{x8ALQ2pPnT6{%M4$`CXWLWYrDoqu)!2bat-G|&Zo1f6jk3M! 
z>>&i{gv7rizwq*Xn_;2x8j@>cNeMg9>Bk##x50?X7$$Oe`+HPw|Lg8|Y1gF-?0l?U zT7HDDbI!${HkT+Mg+4rd5y9}*Yq@41uDU}2S71J)v`iJ^ANnDEV#7#nz@$CNLPxqU zspgFENx=__L6-}7(o&J9!7~mQ^G-TYfPQ>jP~I(eL|tP$CJ${9~mm^KNNQ-ScPCC%||4b5`ykyGMI4m8~v zGx3P8>?E6Kz7P(&r~Hby&}7 z4F>HA_H&07202O@DG0@O4!?{AG-3VhC~A!$tYLQ^cepF&OZQcrjv=lB9uwFO*PpH_ z;@%A7j%=DwdMA-M)4#EF<(}fPkMep&V{9hs$y~S$`@?e3dxZ5`EYoTUxo07BigDHF zTF3pu`Dkmo14Jt1H*6B0u9+TBu{gtcr(Dk5wbK-1Dw*zmxKri79OLa zl#eSm1~xhb#+9n6YzMC-(bY*;VvTs_Qm!#t$y*OZ%awImKh6u>PEdVYakcf9aGWHvCxUU-yMYdA zhXtX$UmTpV0m--yTRq5bzkbk_X(woQuJM|VbA?TJ%Fy|BEB*GDWZAdK5TqeOy#z+A zN;KY}GF%l`wgy=#b-mLwll8i6Wl;KCLl(TQZeZ`G1(j&VpfuC4|4vH*1HFMT^qa+z zMfsQBE099g0J&%}BQSBIca$6q{SWj8{*uIMf$aW|TEk4^DSx*p zjQBIB{%6X^A7=%jU)}Z&51(~#;i+SK5cF+s#hc6yo@&+6Z8~&)MfLtR+;-d!_3bSQ zuR;s7LgJ?zpP0!Hri0-QpKu;r*%Q1E&f{tLMNE6nZgd)&BR9-Fh(ou1EXl>aN5k_q zaI^J(w7~re`UVUF&q}dUeFyY7v@qB6kmwv(kCy0oLq&j}Oik(h8@KbQ6rV3=Zx&nK z@$L;^pt{1yqWyXZ_Nk<26G5?_OEDIRjQP6oHpK5-ZRDtNn6*Vn#E5sKc=ws83ZR_) zLL@nbsS>r>k5GrIC^k^~JnB$xlK1ax?PeJgx^rv*$RwYbbjgMNeSG z84{QluY_3#9otgW)D#|949pa6Zh7%gfSf%`>A0=T>C`g^4>;By-iLmZIob>UlDQQ6 zrF#@)tZP2@jY2`>#Gs*qxzOD>e?9-Z+yL)#^tbv)+=g+|dHw@D@sN;5&p7ZeUejv> z2jqX#)h|g;Yf6Mk^*Ky#c&m9U?KPG*D8C4_aIAmzNg^r-lIZ#T`0Yyx8DvyH!oSFTGiP z%TXQdAz4edVZ1W^lGjQ%_PsYQ+AV>38gh}657k#A$qd-!lZ0C{pC&I6j^mKFShsFt z@U&i!8D+ywH=U`<%E#YqvlBdoVsrKN9p75s2|wmH&)tup(~v{^QG<-Xi<&th#~X7h zoV)b%?bD9u=n<igZ<`0!)0F^Y#? z)|gkAg@!3-lLp)2!Ef2I1<`sBT~i7VG<(j9I4t&K8{ZNVKF#-Mpr!c+dbvb~>^ShjPTJdX z(&;``u~7tT%7k2WQ2g;;CdPnzkfnuaP#s)wN04d23X?f-mU>o-37U=U5zMGu^dOgr zYh&u^*;NbolZ=n`p7?aGX8{MkN4obv!c)Kg`oDx;|IeEKO>)uZf8^!=zIlS_um8Mx zf--c?VZLYOWA_&ZT!0ZaqGB~$Y}Pm2e5K+<^;+^^Yx)wB$zx2z--vBCXlL;OxtOp2 z3st3WAMa>DR+3|EY~4FZ$13Z&#R4P;+(=FdKkWFH$_lhL1v^=NS9-r(-vnka2!YuP zB*+c(qFFF`!6BND9fRyTl-v`UA)Ek@J-gs)R@?P0)O<^Lw@OUvr_s~JY&PCXx@F-hy)1a9t6Q!?9G^-QJ4*Q4&12fn~iz?rA|ZkSN@fb_dh%N zQ=Sn1t_Z>0Lt*^7kZc-Z2WFcH#;u|$-+IJW|D=;wX1cWerx2&tS6(z+47&LEh4Iqb zbp}9N3O2@o+grYLm?S-EGb@nS*V5Dj+YWUMyL;f>_3i10#n7;}?hdcZ9da+$PAKi# zgxL1TjZ|J5D6}r@5ReE`VZ%muNHTSzJk$e}1x)gX5L~i(wog8jZGWfklI7r>be(_tusjSFzZSx{WyiULTx@$EH6lN~(aniGmuVy7m;<%;G! zTF5$EC#xK5BUPoInOmQp!IqQ7Yd>321KwQWd^bk%fj2KnG28D=!fIp2wMxB~_Sxk6 z98D($SiMp&pEOKSB%sy*bfUGf?q(P3{<6Il3aUjd7AuRl>Hx;bo63iIz6USQT}OGm z$U2saJ|Ubjy!5(GFEv{D{zi4q5aq7gZ)f5DI1~3beXy_^7dB~7*u+qYmrdkWHR_s_4uJ7IU! 
zKpvI}6ggF6j#MgXxlWJbhxU*^k;QWBYJ@tct|;RC;B!6xr~r_nmK_I{RgF z&&zJ>U~SR;WtcS`obHo#HJ{mg;Yt;r->a+YzZRzj7uqk(UDC6dD^Q`VGZ5B>aymDB z7boNH`2Tk0A45(2B|GgJ#~<@QtbSCwtPwc*D)@%?YTv?z^_i}jf9EHQ&hpJWU71(& zdE<=fY(`wG-OBQEfir~mo7Fl$EI)FO{aTIfnzOdkoBrsptSE6T7N~r$dgXm?{<6K( z3Lda$=HA%E@xiC1X<>nbRsGG^kMcYJssB(9-2L%r@6z_p8q2j=v)GpGsBsV7)cPhue0V=P`+-M$CS;JPc6z*+*Ef)Ogx@zPw*}(l4}N#5Y!y7jB78gMu*qbbqJ`7U+=T?SrdbE96#dTLR%ev`z1&{1Wcj9T zf`9bARW~HammZkyy!}f5{LcxGJiU5n*9BOAj<8euJGV{+xW~i2?~ndR{YNI^)9v&= z{P`!p?ceE(xBV}#33^=id)wyhleendeir$>5OoaUk_1lWt-rba`1O4nAD@12{HNsl z@GkeG(y42=?aWr2ps#l+I$HJepV!>;w;Y+UW!j6bLmp|j!iyfvW3X%d;ri(J`;K@n z8}Y|)gCyR*ukl_4tgAmd?{JmbKJDU}SF<~F7tU#%fg)`d+m>DM@T^Ooxc$rg zZxz>pi*#2%=x??O{+OG)_+#4E<*v2WvP*RHMe4K8Y;=FR+cMl^rkGzrj3IDKxcj%J zzw`bu{`UM~bo{`5!K6vP_WoCOwnqF}#9X# z?J0q|N8Y7MJel}@>toP_^PyVn$MX-?OZ-v4(!}ynZm#YgeT7p;OZM;CHly+Ssky** z=mQhR&*FcCmw)U0(0`mCctXR6^{pjy!%q3P&C+XM7=P#fL6NfU<`W`yg1FuLGPzet zY${(Ru&!NZg5BTs6YLY?Z~prHE$8oS;PT1;46$3i3^ryA?JWA3N7` n;mF2Ak&;t>Rt5n#3G9dOet>l`k!gM`>kDDsY--cY|8D{S>XFvd literal 0 HcmV?d00001 From fc640e386bbbc8ebf68895f74354507e08ad7b89 Mon Sep 17 00:00:00 2001 From: CrossLee1 Date: Mon, 12 Dec 2016 17:55:24 +0800 Subject: [PATCH 094/265] Update resnet_model_cn.md --- doc_cn/demo/imagenet_model/resnet_model_cn.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc_cn/demo/imagenet_model/resnet_model_cn.md b/doc_cn/demo/imagenet_model/resnet_model_cn.md index 6b2acb160c..7e402a5040 100644 --- a/doc_cn/demo/imagenet_model/resnet_model_cn.md +++ b/doc_cn/demo/imagenet_model/resnet_model_cn.md @@ -7,7 +7,7 @@ 论文 [Deep Residual Learning for Image Recognition](http://arxiv.org/abs/1512.03385) 中提出的ResNet网络结构在2015年ImageNet大规模视觉识别竞赛(ILSVRC 2015)的分类任务中赢得了第一名。他们提出残差学习的框架来简化网络的训练,所构建网络结构的的深度比之前使用的网络有大幅度的提高。下图展示的是基于残差的连接方式。左图构造网络模块的方式被用于34层的网络中,而右图的瓶颈连接模块用于50层,101层和152层的网络结构中。
 ![resnet_block](./resnet_block.jpg)
-Figure 1. ResNet Block
+图 1. ResNet 网络模块
From 529f24c262850974dd8ba4c5b7ad1a4e3e0230fc Mon Sep 17 00:00:00 2001 From: hedaoyuan Date: Mon, 12 Dec 2016 18:17:27 +0800 Subject: [PATCH 095/265] cpu cmrnorm --- paddle/cuda/src/hl_cuda_cnn.cu | 192 +++++++++-------------- paddle/gserver/tests/test_LayerGrad.cpp | 3 +- paddle/math/Matrix.cpp | 137 ++++++++++------ paddle/math/tests/test_matrixCompare.cpp | 115 ++++++++++++++ 4 files changed, 279 insertions(+), 168 deletions(-) diff --git a/paddle/cuda/src/hl_cuda_cnn.cu b/paddle/cuda/src/hl_cuda_cnn.cu index 0992286f36..1516accaae 100644 --- a/paddle/cuda/src/hl_cuda_cnn.cu +++ b/paddle/cuda/src/hl_cuda_cnn.cu @@ -381,57 +381,45 @@ void hl_avgpool_backward(const int frameCnt, const real* outGrad, CHECK_SYNC("hl_avgpool_backward failed"); } -__global__ void KeCMRNormFillScale(size_t nthreads, const real* in, +__global__ void KeCMRNormFillScale(size_t imageSize, const real* in, real* scale, size_t channels, size_t height, size_t width, size_t size, real alpha) { - size_t index = threadIdx.x + blockIdx.x * blockDim.x; - if (index < nthreads) { - // find out the local offset - size_t w = index % width; - size_t h = (index / width) % height; - size_t n = index / width / height; - size_t offset = (n * channels * height + h) * width + w; - size_t step = height * width; + const int idx = threadIdx.x + blockIdx.x * blockDim.x; + if (idx < imageSize) { + const int w = idx % width; + const int h = (idx / width) % height; + const int n = idx / width / height; + const int offset = (n * channels * height + h) * width + w; + in += offset; scale += offset; - size_t head = 0; - size_t pre_pad = (size - 1) / 2; - size_t post_pad = size - pre_pad - 1; - real accum_scale = 0; - // fill the scale at [n, :, h, w] - // accumulate values - while (head < post_pad) { - accum_scale += in[head * step] * in[head * step]; - ++head; - } - // until we reach size, nothing needs to be subtracted - while (head < size) { - accum_scale += in[head * step] * in[head * step]; - scale[(head - post_pad) * step] = 1. + accum_scale * alpha; - ++head; - } - // both add and subtract - while (head < channels) { - accum_scale += in[head * step] * in[head * step]; - accum_scale -= in[(head - size) * step] * in[(head - size) * step]; - scale[(head - post_pad) * step] = 1. + accum_scale * alpha; - ++head; - } - // subtract only - while (head < channels + post_pad) { - accum_scale -= in[(head - size) * step] * in[(head - size) * step]; - scale[(head - post_pad) * step] = 1. + accum_scale * alpha; - ++head; + const int step = height * width; + const int pre_pad = (size - 1) / 2; + const int post_pad = size - pre_pad - 1; + + real accum = 0; + int index = 0; + while (index < channels + post_pad) { + if (index < channels) { + accum += in[index * step] * in[index * step]; + } + if (index >= size) { + accum -= in[(index - size) * step] * in[(index - size) * step]; + } + if (index >= post_pad) { + scale[(index - post_pad) * step] = 1. 
+ accum * alpha; + } + ++index; } } } - __global__ void KeCMRNormOutput(size_t nthreads, const real* in, - const real* scale, real negative_beta, - real* out) { - size_t index = threadIdx.x + blockIdx.x * blockDim.x; - if (index < nthreads) { +__global__ void KeCMRNormOutput(size_t inputSize, const real* in, + const real* scale, real negative_beta, + real* out) { + const int index = threadIdx.x + blockIdx.x * blockDim.x; + if (index < inputSize) { out[index] = in[index] * pow(scale[index], negative_beta); } } @@ -440,84 +428,60 @@ void hl_CMRNorm_forward(size_t frameCnt, const real* in, real* scale, real* out, size_t channels, size_t height, size_t width, size_t sizeX, real alpha, real beta) { - size_t threadsNum = frameCnt * height * width; - size_t blocksX = (threadsNum + 1024 - 1) / 1024; - size_t blocksY = 1; - dim3 threads(1024, 1); - dim3 grid(blocksX, blocksY); - - KeCMRNormFillScale<<>> - (threadsNum, in, scale, channels, height, width, sizeX, alpha); - - threadsNum = frameCnt * height * width *channels; - blocksX = (threadsNum + 1024 -1) / 1024; - dim3 threads2(1024, 1); - dim3 grid2(blocksX, blocksY); - KeCMRNormOutput<<>> - (threadsNum, in, scale, beta, out); + size_t imageSize = frameCnt * height * width; + int blockSize = 1024; + int gridSize = (imageSize + 1024 - 1) / 1024; + KeCMRNormFillScale<<>> + (imageSize, in, scale, channels, height, width, sizeX, alpha); + + size_t inputSize = frameCnt * height * width *channels; + blockSize = 1024; + gridSize = (inputSize + 1024 - 1) / 1024; + KeCMRNormOutput<<>> + (inputSize, in, scale, beta, out); CHECK_SYNC("hl_CMRNorm_forward"); } -__global__ void KeCMRNormDiff(size_t nthreads, const real* bottom_data, +__global__ void KeCMRNormDiff(size_t imageSize, const real* bottom_data, const real* top_data, const real* scale, const real* top_diff, size_t channels, size_t height, size_t width, size_t size, real negative_beta, real cache_ratio, real* bottom_diff ) { - int index = threadIdx.x + blockIdx.x * blockDim.x; - if (index < nthreads) { - // find out the local offset - size_t w = index % width; - size_t h = (index / width) % height; - size_t n = index / width / height; - size_t offset = (n * channels * height + h) * width + w; - size_t step = height * width; + const int idx = threadIdx.x + blockIdx.x * blockDim.x; + if (idx < imageSize) { + const int w = idx % width; + const int h = (idx / width) % height; + const int n = idx / width / height; + const int offset = (n * channels * height + h) * width + w; bottom_data += offset; top_data += offset; scale += offset; top_diff += offset; bottom_diff += offset; - int head = 0; - int pre_pad = size - (size + 1) / 2; - int post_pad = size - pre_pad - 1; - real accum_ratio = 0; - // accumulate values - while (head < post_pad) { - accum_ratio += top_diff[head * step] * - top_data[head * step] / scale[head * step]; - ++head; - } - // until we reach size, nothing needs to be subtracted - while (head < size) { - accum_ratio += top_diff[head * step] * - top_data[head * step] / scale[head * step]; - bottom_diff[(head - post_pad) * step] += - top_diff[(head - post_pad) * step] * - pow(scale[(head - post_pad) * step], negative_beta) - cache_ratio * - bottom_data[(head - post_pad) * step] * accum_ratio; - ++head; - } - // both add and subtract - while (head < channels) { - accum_ratio += top_diff[head * step] * top_data[head * step] / - scale[head * step]; - accum_ratio -= top_diff[(head - size) * step] * - top_data[(head - size) * step] / scale[(head - size) * step]; - bottom_diff[(head - post_pad) * 
step] += - top_diff[(head - post_pad) * step] * - pow(scale[(head - post_pad) * step], negative_beta) - cache_ratio * - bottom_data[(head - post_pad) * step] * accum_ratio; - ++head; - } - // subtract only - while (head < channels + post_pad) { - accum_ratio -= top_diff[(head - size) * step] * - top_data[(head - size) * step] / scale[(head - size) * step]; - bottom_diff[(head - post_pad) * step] += - top_diff[(head - post_pad) * step] * - pow(scale[(head - post_pad) * step], negative_beta) - cache_ratio * - bottom_data[(head - post_pad) * step] * accum_ratio; - ++head; + + const int step = height * width; + const int pre_pad = size - (size + 1) / 2; + const int post_pad = size - pre_pad - 1; + + int index = 0; + real accum = 0; + while (index < channels + post_pad) { + if (index < channels) { + accum += top_diff[index * step] * top_data[index * step] / + scale[index * step]; + } + if (index >= size) { + accum -= top_diff[(index - size) * step] * + top_data[(index - size) * step] / scale[(index - size) * step]; + } + if (index >= post_pad) { + bottom_diff[(index - post_pad) * step] += + top_diff[(index - post_pad) * step] * + pow(scale[(index - post_pad) * step], negative_beta) - cache_ratio * + bottom_data[(index - post_pad) * step] * accum; + } + ++index; } } } @@ -528,14 +492,12 @@ void hl_CMRNorm_backward(size_t frameCnt, const real* inV, real *inDiff, size_t channels, size_t height, size_t width, size_t sizeX, real alpha, real beta) { - size_t threadsNum = frameCnt * height * width; - size_t blocksX = (threadsNum + 1024 - 1) / 1024; - size_t blocksY = 1; - dim3 threads(1024, 1); - dim3 grid(blocksX, blocksY); - KeCMRNormDiff <<>> - (threadsNum, inV, outV, scale, outDiff, channels, - height, width, sizeX, alpha, beta, inDiff); + size_t imageSize = frameCnt * height * width; + int blockSize = 1024; + int gridSize = (imageSize + 1024 - 1) / 1024; + KeCMRNormDiff <<>> + (imageSize, inV, outV, scale, outDiff, channels, + height, width, sizeX, alpha, beta, inDiff); CHECK_SYNC("hl_CMRNorm_backward"); } diff --git a/paddle/gserver/tests/test_LayerGrad.cpp b/paddle/gserver/tests/test_LayerGrad.cpp index 7983d9fe64..8ade15daac 100644 --- a/paddle/gserver/tests/test_LayerGrad.cpp +++ b/paddle/gserver/tests/test_LayerGrad.cpp @@ -1021,11 +1021,10 @@ void testNormLayer(const string& normType, bool trans, bool useGpu) { testLayerGrad(config, "norm", 100, trans, useGpu); } -#ifndef PADDLE_ONLY_CPU TEST(Layer, NormLayer) { testNormLayer("cmrnorm-projection", /* trans= */ false, /* useGpu= */ true); + testNormLayer("cmrnorm-projection", /* trans= */ false, /* useGpu= */ false); } -#endif void setPoolConfig(TestConfig* config, PoolConfig* pool, diff --git a/paddle/math/Matrix.cpp b/paddle/math/Matrix.cpp index c69e074a76..2cde11dd47 100644 --- a/paddle/math/Matrix.cpp +++ b/paddle/math/Matrix.cpp @@ -2227,52 +2227,43 @@ void CpuMatrix::crossMapNormalFwd(Matrix& input, size_t sizeX, float scale, float pow) { - size_t num = input.getHeight(); + CHECK(isContiguous()); + CHECK(input.isContiguous()); + CHECK(denoms.isContiguous()); + CHECK_EQ(getHeight(), input.getHeight()); + CHECK_EQ(getWidth(), input.getWidth()); + CHECK_EQ(getHeight(), denoms.getHeight()); + CHECK_EQ(getWidth(), denoms.getWidth()); + + size_t numSample = input.getHeight(); + size_t numCols = input.getWidth(); size_t height = imgSizeH; size_t width = imgSizeW; - size_t numCols = input.getWidth(); - CHECK(height * width * channels == input.getWidth()); - CHECK(denoms.getHeight() == input.getHeight() && - denoms.getWidth() == 
input.getWidth() && input.getHeight() == height_ && - input.getWidth() == width_); - real* imgData = input.getData(); - real* diffData = input.getData(); - real* targetData = getData(); - size_t halfSize = sizeX / 2; - size_t imgPixels = height * width; - - // use integral vector to implement the sum in local window - real* integralData = - (real*)malloc((channels + sizeX + 1) * sizeof(real)); // NOLINT // TODO: - for (size_t i = 0; i <= halfSize; i++) { - integralData[i] = 0; - } - for (size_t i = 0; i < num; i++) { - real* targetPtr = targetData + i * numCols; - real* imgPtr = imgData + i * numCols; - real* diffPtr = diffData + i * numCols; - for (size_t m = 0; m < height; m++) { - for (size_t n = 0; n < width; n++) { - for (size_t c = 0; c < channels; c++) { - integralData[c + halfSize + 1] = - integralData[c + halfSize] + _square(*(diffPtr + c * imgPixels)); - } - for (size_t k = channels + halfSize + 1; k <= channels + sizeX; k++) { - integralData[k] = integralData[channels + halfSize]; + CHECK(height * width * channels == numCols); + + // TODO(hedaoyuan) After commit TensorExpress code, + // Reconstruction this code to remove the temporary memory. + CpuMatrix tmp(channels, height * width); + CpuMatrix tmp2(tmp.getData(), 1, channels * height * width); + denoms.zero(); + const int start = -((int)sizeX - 1) / 2; + const int end = (int)sizeX + start; + for (size_t i = 0; i < numSample; i++) { + input.subMatrix(i, 1)->square2(tmp2); + CpuMatrix subDen( + denoms.subMatrix(i, 1)->getData(), channels, height * width); + for (int c = 0; c < (int)channels; c++) { + for (int s = start; s < end; s++) { + if (c + s >= 0 && c + s < (int)channels) { + subDen.subMatrix(c, 1)->add(*tmp.subMatrix(c + s, 1)); } - for (size_t k = 0; k < channels; k += 1) { - real a = integralData[k + sizeX] - integralData[k]; - a = scale * a + 1; - targetPtr[k * imgPixels] = imgPtr[k * imgPixels] * _pow(a, -pow); - } - diffPtr++; - targetPtr++; - imgPtr++; } } } - free(integralData); - integralData = NULL; + + denoms.add(scale, (real)1); + this->pow2(denoms, -pow); + this->dotMul(input); } void CpuMatrix::crossMapNormalBwd(Matrix& localGrad, @@ -2282,19 +2273,63 @@ void CpuMatrix::crossMapNormalBwd(Matrix& localGrad, size_t channels, size_t imgSizeH, size_t imgSizeW, - size_t size, + size_t sizeX, float scale, float pow) { - LOG(FATAL) << "Not implemented"; - - CHECK(imgSizeH * imgSizeW * channels == preOutV.getWidth()); - CHECK(denoms.getHeight() == preOutV.getHeight() && - denoms.getWidth() == preOutV.getWidth() && - preOutV.getHeight() == height_ && preOutV.getWidth() == width_); - CHECK(denoms.getHeight() == localGrad.getHeight() && - denoms.getWidth() == localGrad.getWidth()); - - // NOLINT // TODO: + CHECK(isContiguous()); + CHECK(localGrad.isContiguous()); + CHECK(denoms.isContiguous()); + CHECK(preOutV.isContiguous()); + CHECK(localOutV.isContiguous()); + CHECK_EQ(getHeight(), localGrad.getHeight()); + CHECK_EQ(getWidth(), localGrad.getWidth()); + CHECK_EQ(getHeight(), denoms.getHeight()); + CHECK_EQ(getWidth(), denoms.getWidth()); + CHECK_EQ(getHeight(), preOutV.getHeight()); + CHECK_EQ(getWidth(), preOutV.getWidth()); + CHECK_EQ(getHeight(), localOutV.getHeight()); + CHECK_EQ(getWidth(), localOutV.getWidth()); + + size_t numSample = getHeight(); + size_t numCols = getWidth(); + size_t height = imgSizeH; + size_t width = imgSizeW; + CHECK(height * width * channels == numCols); + + // TODO(hedaoyuan) After commit TensorExpress code, + // Reconstruction this code to remove the temporary memory. 
+ CpuMatrix tmp(1, height * width); + + const int start = -((int)sizeX) / 2; + const int end = (int)sizeX + start; + const real ratio = -(real)2 * scale * pow; + for (size_t i = 0; i < numSample; i++) { + CpuMatrix inputDiff( + this->subMatrix(i, 1)->getData(), channels, height * width); + CpuMatrix outDiff( + localGrad.subMatrix(i, 1)->getData(), channels, height * width); + CpuMatrix input( + preOutV.subMatrix(i, 1)->getData(), channels, height * width); + CpuMatrix output( + localOutV.subMatrix(i, 1)->getData(), channels, height * width); + CpuMatrix subDen( + denoms.subMatrix(i, 1)->getData(), channels, height * width); + + for (int c = 0; c < (int)channels; c++) { + tmp.pow2(*subDen.subMatrix(c, 1), -pow); + inputDiff.subMatrix(c, 1) + ->addDotMul(tmp, *outDiff.subMatrix(c, 1), (real)1, (real)1); + for (int s = start; s < end; s++) { + if (c + s >= 0 && c + s < (int)channels) { + tmp.dotMul(*outDiff.subMatrix(c + s, 1), *output.subMatrix(c + s, 1)); + tmp.mulScalar(ratio); + tmp.dotDiv(tmp, *subDen.subMatrix(c + s, 1)); + tmp.dotMul(*input.subMatrix(c, 1)); + inputDiff.subMatrix(c, 1)->add(tmp); + } + } + } + } } /** diff --git a/paddle/math/tests/test_matrixCompare.cpp b/paddle/math/tests/test_matrixCompare.cpp index 713792d82b..5233a9af40 100644 --- a/paddle/math/tests/test_matrixCompare.cpp +++ b/paddle/math/tests/test_matrixCompare.cpp @@ -1261,6 +1261,121 @@ TEST(Matrix, MaxOutFwdBwd) { } } } +void testCrossMapNormalFwd( + int numSamples, int channels, int imgSizeH, int imgSizeW, int sizeX) { + float scale = 1.5; + float pow = 0.5; + int width = imgSizeH * imgSizeW * channels; + MatrixPtr input = CpuMatrix::create(numSamples, width, false, false); + MatrixPtr denorms = CpuMatrix::create(numSamples, width, false, false); + MatrixPtr target = CpuMatrix::create(numSamples, width, false, false); + MatrixPtr inputGpu = GpuMatrix::create(numSamples, width, false, true); + MatrixPtr denormsGpu = GpuMatrix::create(numSamples, width, false, true); + MatrixPtr targetGpu = GpuMatrix::create(numSamples, width, false, true); + + input->randomizeUniform(); + target->randomizeUniform(); + inputGpu->copyFrom(*input); + targetGpu->copyFrom(*target); + + target->crossMapNormalFwd( + *input, imgSizeH, imgSizeW, *denorms, channels, sizeX, scale, pow); + targetGpu->crossMapNormalFwd( + *inputGpu, imgSizeH, imgSizeW, *denormsGpu, channels, sizeX, scale, pow); + + TensorCheckErr(*target, *targetGpu); + TensorCheckErr(*denorms, *denormsGpu); +} + +TEST(Matrix, crossMapNormalFwd) { + for (auto numSamples : {5, 32}) { + for (auto channels : {1, 5, 32}) { + for (auto imgSizeH : {5, 33, 100}) { + for (auto imgSizeW : {5, 32, 96}) { + for (auto sizeX : {1, 2, 3, 5, 7}) { + VLOG(3) << " numSamples=" << numSamples << " channels=" << channels + << " imgSizeH=" << imgSizeH << " imgSizeW=" << imgSizeW + << " sizeX=" << sizeX; + testCrossMapNormalFwd( + numSamples, channels, imgSizeH, imgSizeW, sizeX); + } + } + } + } + } +} + +void testCrossMapNormalBwd( + int numSamples, int channels, int imgSizeH, int imgSizeW, int sizeX) { + float scale = 1.5; + float pow = 0.5; + size_t width = imgSizeH * imgSizeW * channels; + MatrixPtr localGrad = CpuMatrix::create(numSamples, width, false, false); + MatrixPtr denoms = CpuMatrix::create(numSamples, width, false, false); + MatrixPtr output = CpuMatrix::create(numSamples, width, false, false); + MatrixPtr preOutV = CpuMatrix::create(numSamples, width, false, false); + MatrixPtr localOutV = CpuMatrix::create(numSamples, width, false, false); + + localGrad->randomizeUniform(); 
+ denoms->randomizeUniform(); + preOutV->randomizeUniform(); + localOutV->randomizeUniform(); + output->randomizeUniform(); + denoms->add(0.01); + + MatrixPtr localGradGpu = GpuMatrix::create(numSamples, width, false, true); + MatrixPtr denomsGpu = GpuMatrix::create(numSamples, width, false, true); + MatrixPtr outputGpu = GpuMatrix::create(numSamples, width, false, true); + MatrixPtr preOutVGpu = GpuMatrix::create(numSamples, width, false, true); + MatrixPtr localOutVGpu = GpuMatrix::create(numSamples, width, false, true); + + localGradGpu->copyFrom(*localGrad); + denomsGpu->copyFrom(*denoms); + preOutVGpu->copyFrom(*preOutV); + localOutVGpu->copyFrom(*localOutV); + outputGpu->copyFrom(*output); + + output->crossMapNormalBwd(*localGrad, + *denoms, + *preOutV, + *localOutV, + channels, + imgSizeH, + imgSizeW, + sizeX, + scale, + pow); + outputGpu->crossMapNormalBwd(*localGradGpu, + *denomsGpu, + *preOutVGpu, + *localOutVGpu, + channels, + imgSizeH, + imgSizeW, + sizeX, + scale, + pow); + + TensorCheckErr(*output, *outputGpu); +} + +TEST(Matrix, crossMapNormalBwd) { + for (auto numSamples : {5, 32}) { + for (auto channels : {1, 5, 32}) { + for (auto imgSizeH : {5, 33, 100}) { + for (auto imgSizeW : {5, 32, 96}) { + for (auto sizeX : {1, 2, 3, 5, 7}) { + VLOG(3) << " numSamples=" << numSamples << " channels=" << channels + << " imgSizeH=" << imgSizeH << " imgSizeW=" << imgSizeW + << " sizeX=" << sizeX; + testCrossMapNormalBwd( + numSamples, channels, imgSizeH, imgSizeW, sizeX); + } + } + } + } + } +} int main(int argc, char** argv) { testing::InitGoogleTest(&argc, argv); From 976063c2774be24f8eda783eccae3affe60e3ae5 Mon Sep 17 00:00:00 2001 From: liaogang Date: Mon, 12 Dec 2016 19:37:40 +0800 Subject: [PATCH 096/265] format code with clang-format3.8 --- paddle/cuda/src/hl_cuda_device.cc | 2 +- paddle/utils/tests/test_SIMDFlags.cpp | 22 +++++++++++----------- 2 files changed, 12 insertions(+), 12 deletions(-) diff --git a/paddle/cuda/src/hl_cuda_device.cc b/paddle/cuda/src/hl_cuda_device.cc index b0bba73594..d181448292 100644 --- a/paddle/cuda/src/hl_cuda_device.cc +++ b/paddle/cuda/src/hl_cuda_device.cc @@ -12,13 +12,13 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 
*/ -#include "hl_cuda.h" #include #include #include #include #include #include +#include "hl_cuda.h" #include "hl_cuda.ph" #include "hl_dso_loader.h" #include "hl_thread.ph" diff --git a/paddle/utils/tests/test_SIMDFlags.cpp b/paddle/utils/tests/test_SIMDFlags.cpp index e056be5f7c..42edede209 100644 --- a/paddle/utils/tests/test_SIMDFlags.cpp +++ b/paddle/utils/tests/test_SIMDFlags.cpp @@ -19,7 +19,7 @@ using namespace paddle; // NOLINT TEST(SIMDFlags, gccTest) { #if (defined(__GNUC__) || defined(__GNUG__)) && !(defined(__clang__)) -// clang-format off + // clang-format off CHECK(!__builtin_cpu_supports("sse") != HAS_SSE); CHECK(!__builtin_cpu_supports("sse2") != HAS_SSE2); CHECK(!__builtin_cpu_supports("sse3") != HAS_SSE3); @@ -33,16 +33,16 @@ TEST(SIMDFlags, gccTest) { } TEST(SIMDFlags, normalPrint) { - LOG(INFO) << "Has SSE: " << std::boolalpha << HAS_SSE; - LOG(INFO) << "Has SSE2: " << std::boolalpha << HAS_SSE2; - LOG(INFO) << "Has SSE3: " << std::boolalpha << HAS_SSE3; - LOG(INFO) << "Has SSSE3: " << std::boolalpha << HAS_SSSE3; - LOG(INFO) << "Has SSE4: " << std::boolalpha << HAS_SSE41 || HAS_SSE42; - LOG(INFO) << "Has FMA3: " << std::boolalpha << HAS_FMA3; - LOG(INFO) << "Has FMA4: " << std::boolalpha << HAS_FMA4; - LOG(INFO) << "Has AVX: " << std::boolalpha << HAS_AVX; - LOG(INFO) << "Has AVX2: " << std::boolalpha << HAS_AVX2; - LOG(INFO) << "Has AVX512: " << std::boolalpha << HAS_AVX512; + LOG(INFO) << "Has SSE: " << std::boolalpha << HAS_SSE; + LOG(INFO) << "Has SSE2: " << std::boolalpha << HAS_SSE2; + LOG(INFO) << "Has SSE3: " << std::boolalpha << HAS_SSE3; + LOG(INFO) << "Has SSSE3: " << std::boolalpha << HAS_SSSE3; + LOG(INFO) << "Has SSE4: " << std::boolalpha << HAS_SSE41 || HAS_SSE42; + LOG(INFO) << "Has FMA3: " << std::boolalpha << HAS_FMA3; + LOG(INFO) << "Has FMA4: " << std::boolalpha << HAS_FMA4; + LOG(INFO) << "Has AVX: " << std::boolalpha << HAS_AVX; + LOG(INFO) << "Has AVX2: " << std::boolalpha << HAS_AVX2; + LOG(INFO) << "Has AVX512: " << std::boolalpha << HAS_AVX512; } int main(int argc, char** argv) { From 081eb1c42bc8a65a836df29e6f889464cbddc1cf Mon Sep 17 00:00:00 2001 From: liaogang Date: Mon, 12 Dec 2016 19:48:36 +0800 Subject: [PATCH 097/265] Format code using clang-format3.8 --- paddle/cuda/src/hl_cuda_device.cc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/paddle/cuda/src/hl_cuda_device.cc b/paddle/cuda/src/hl_cuda_device.cc index d181448292..b0bba73594 100644 --- a/paddle/cuda/src/hl_cuda_device.cc +++ b/paddle/cuda/src/hl_cuda_device.cc @@ -12,13 +12,13 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 
*/ +#include "hl_cuda.h" #include #include #include #include #include #include -#include "hl_cuda.h" #include "hl_cuda.ph" #include "hl_dso_loader.h" #include "hl_thread.ph" From 83d78da153532be47097147957e083df3f59fda3 Mon Sep 17 00:00:00 2001 From: wangyang59 Date: Mon, 12 Dec 2016 11:52:18 -0800 Subject: [PATCH 098/265] add submodule instructions in build_and_install doc --- .../build_and_install/build_from_source_en.md | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/doc/getstarted/build_and_install/build_from_source_en.md b/doc/getstarted/build_and_install/build_from_source_en.md index 3771d316a1..547742a4b3 100644 --- a/doc/getstarted/build_and_install/build_from_source_en.md +++ b/doc/getstarted/build_and_install/build_from_source_en.md @@ -14,6 +14,16 @@ cd paddle git submodule update --init --recursive ``` +If you already have a local PaddlePaddle repo and have not initialized the submodule, you can simply run the following command in your PaddlePaddle home directory. +``` +git submodule update --init --recursive +``` + +To sync with the upstream submodule repo, you can run the following command +``` +git submodule update --remote +``` + ## Requirements To compile the source code, your computer must be equipped with the following dependencies. From 76e9e0893470c8c3c1343457edbae0e52209af61 Mon Sep 17 00:00:00 2001 From: wangyang59 Date: Mon, 12 Dec 2016 11:57:35 -0800 Subject: [PATCH 099/265] minor wording changes --- doc/getstarted/build_and_install/build_from_source_en.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/doc/getstarted/build_and_install/build_from_source_en.md b/doc/getstarted/build_and_install/build_from_source_en.md index 547742a4b3..39f8565adf 100644 --- a/doc/getstarted/build_and_install/build_from_source_en.md +++ b/doc/getstarted/build_and_install/build_from_source_en.md @@ -14,12 +14,12 @@ cd paddle git submodule update --init --recursive ``` -If you already have a local PaddlePaddle repo and have not initialized the submodule, you can simply run the following command in your PaddlePaddle home directory. +If you already have a local PaddlePaddle repo and have not initialized the submodule, your local submodule folder will be empty. You can simply run the following command in your PaddlePaddle home directory to initialze your submodule folder. ``` git submodule update --init --recursive ``` -To sync with the upstream submodule repo, you can run the following command +If you have already initialized your submodule and you would like to sync with the upstream submodule repo, you can run the following command ``` git submodule update --remote ``` From b45246b697ea587e47006e4ee296ddc885488be9 Mon Sep 17 00:00:00 2001 From: wangyang59 Date: Mon, 12 Dec 2016 13:11:30 -0800 Subject: [PATCH 100/265] change following emailweixu comments --- doc/getstarted/build_and_install/build_from_source_en.md | 5 +---- 1 file changed, 1 insertion(+), 4 deletions(-) diff --git a/doc/getstarted/build_and_install/build_from_source_en.md b/doc/getstarted/build_and_install/build_from_source_en.md index 39f8565adf..3441ebf222 100644 --- a/doc/getstarted/build_and_install/build_from_source_en.md +++ b/doc/getstarted/build_and_install/build_from_source_en.md @@ -14,10 +14,7 @@ cd paddle git submodule update --init --recursive ``` -If you already have a local PaddlePaddle repo and have not initialized the submodule, your local submodule folder will be empty. 
You can simply run the following command in your PaddlePaddle home directory to initialze your submodule folder. -``` -git submodule update --init --recursive -``` +If you already have a local PaddlePaddle repo and have not initialized the submodule, your local submodule folder will be empty. You can simply run the last line of the above codes in your PaddlePaddle home directory to initialze your submodule folder. If you have already initialized your submodule and you would like to sync with the upstream submodule repo, you can run the following command ``` From 2489b885c437b13c452d64ca9dc35ba5bc0f44d7 Mon Sep 17 00:00:00 2001 From: wangyang59 Date: Mon, 12 Dec 2016 13:59:56 -0800 Subject: [PATCH 101/265] fixed a typo --- doc/getstarted/build_and_install/build_from_source_en.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc/getstarted/build_and_install/build_from_source_en.md b/doc/getstarted/build_and_install/build_from_source_en.md index 3441ebf222..5db871d59a 100644 --- a/doc/getstarted/build_and_install/build_from_source_en.md +++ b/doc/getstarted/build_and_install/build_from_source_en.md @@ -14,7 +14,7 @@ cd paddle git submodule update --init --recursive ``` -If you already have a local PaddlePaddle repo and have not initialized the submodule, your local submodule folder will be empty. You can simply run the last line of the above codes in your PaddlePaddle home directory to initialze your submodule folder. +If you already have a local PaddlePaddle repo and have not initialized the submodule, your local submodule folder will be empty. You can simply run the last line of the above codes in your PaddlePaddle home directory to initialize your submodule folder. If you have already initialized your submodule and you would like to sync with the upstream submodule repo, you can run the following command ``` From d5c0eeda00222bd78bd703cc5bf5018a00c1b470 Mon Sep 17 00:00:00 2001 From: Yu Yang Date: Mon, 12 Dec 2016 22:58:06 +0800 Subject: [PATCH 102/265] Remove m4 when generate protobuf Also fix compile issues --- paddle/gserver/layers/MultinomialSampler.h | 13 ++++++- paddle/gserver/layers/NCELayer.cpp | 4 +-- paddle/pserver/ParameterClient2.cpp | 36 ++++++++++--------- proto/CMakeLists.txt | 30 +++------------- .../{DataConfig.proto.m4 => DataConfig.proto} | 12 +++---- .../{DataFormat.proto.m4 => DataFormat.proto} | 2 +- ...ModelConfig.proto.m4 => ModelConfig.proto} | 36 +++++++++---------- ...rConfig.proto.m4 => ParameterConfig.proto} | 16 ++++----- ...ervice.proto.m4 => ParameterService.proto} | 12 +++---- ...nerConfig.proto.m4 => TrainerConfig.proto} | 32 ++++++++--------- 10 files changed, 92 insertions(+), 101 deletions(-) rename proto/{DataConfig.proto.m4 => DataConfig.proto} (93%) rename proto/{DataFormat.proto.m4 => DataFormat.proto} (98%) rename proto/{ModelConfig.proto.m4 => ModelConfig.proto} (95%) rename proto/{ParameterConfig.proto.m4 => ParameterConfig.proto} (87%) rename proto/{ParameterService.proto.m4 => ParameterService.proto} (97%) rename proto/{TrainerConfig.proto.m4 => TrainerConfig.proto} (87%) diff --git a/paddle/gserver/layers/MultinomialSampler.h b/paddle/gserver/layers/MultinomialSampler.h index 6e50f8738e..677b047029 100644 --- a/paddle/gserver/layers/MultinomialSampler.h +++ b/paddle/gserver/layers/MultinomialSampler.h @@ -14,8 +14,8 @@ limitations under the License. 
*/ #pragma once +#include #include - #include "paddle/utils/TypeDefs.h" namespace paddle { @@ -32,6 +32,17 @@ class MultinomialSampler { public: MultinomialSampler(const real* prob, int size); + //! protobuf always using double. + static MultinomialSampler* create(const double* prob, int size) { +#ifdef PADDLE_TYPE_DOUBLE + return new MultinomialSampler(prob, size); +#else + std::unique_ptr tmp(new real[size]); + std::copy(prob, prob + size, tmp.get()); + return new MultinomialSampler(tmp.get(), size); +#endif + } + /** * @brief Generate a random sample. * @param g is a random number engine. See . diff --git a/paddle/gserver/layers/NCELayer.cpp b/paddle/gserver/layers/NCELayer.cpp index 540db46545..5ab765247f 100644 --- a/paddle/gserver/layers/NCELayer.cpp +++ b/paddle/gserver/layers/NCELayer.cpp @@ -99,8 +99,8 @@ public: if (config_.neg_sampling_dist_size()) { CHECK_EQ(numClasses_, config_.neg_sampling_dist_size()); - sampler_.reset(new MultinomialSampler(config_.neg_sampling_dist().data(), - numClasses_)); + sampler_.reset(MultinomialSampler::create( + config_.neg_sampling_dist().data(), numClasses_)); } return true; diff --git a/paddle/pserver/ParameterClient2.cpp b/paddle/pserver/ParameterClient2.cpp index 84d965a66a..887168075e 100644 --- a/paddle/pserver/ParameterClient2.cpp +++ b/paddle/pserver/ParameterClient2.cpp @@ -25,24 +25,17 @@ P_DEFINE_int32(parallel_thread_num, 1, "Thread number for parameter send"); namespace paddle { -template -void copyToRepeatedField(google::protobuf::RepeatedField* dest, - const T* src, +template +void copyToRepeatedField(google::protobuf::RepeatedField* dest, + const T2* src, size_t size) { dest->Clear(); dest->Reserve(size); - for (size_t i = 0; i < size; ++i) { dest->AddAlreadyReserved(src[i]); } } -template -void copyToRepeatedField(const std::vector& src, - google::protobuf::RepeatedField* dest) { - copyToRepeatedField(dest, &src[0], src.size()); -} - ParameterClient2::ParameterClient2(bool separate, int port, int numPorts) : BaseClient(separate, numPorts), port_(port) { #ifndef PADDLE_DISABLE_TIMER @@ -618,6 +611,11 @@ void PreparedOperations::addOperationHelper(Operation* op, CpuMatrixPtr mat) { pmat.mutable_values(), mat->getData(), pmat.num_cols() * pmat.num_rows()); } +template +static inline auto add(T1 a, T2 b) -> decltype(a + b) { + return a + b; +} + void ParameterClient2::doOperation(PreparedOperations& ops, bool waitForGradient, bool sendBackGradient, @@ -682,8 +680,11 @@ void ParameterClient2::doOperation(PreparedOperations& ops, CpuVectorPtr rvec = resultVectors[i]; if (!rvec) continue; CHECK_EQ(rvec->getSize(), (size_t)vec.dim()); - CpuVector avec(rvec->getSize(), const_cast(vec.values().data())); - rvec->add(avec); + std::transform(rvec->getData(), + rvec->getData() + rvec->getSize(), + vec.values().data(), + rvec->getData(), + add); } CHECK_EQ(resultMatrices.size(), (size_t)result.matrices_size()); @@ -693,11 +694,12 @@ void ParameterClient2::doOperation(PreparedOperations& ops, if (!rmat) continue; CHECK_EQ(rmat->getHeight(), (size_t)mat.num_rows()); CHECK_EQ(rmat->getWidth(), (size_t)mat.num_cols()); - CpuMatrixPtr amat = - std::make_shared(const_cast(mat.values().data()), - rmat->getHeight(), - rmat->getWidth()); - rmat->add(*amat); + + std::transform(rmat->getData(), + rmat->getData() + rmat->getElementCnt(), + mat.values().data(), + rmat->getData(), + add); } } } diff --git a/proto/CMakeLists.txt b/proto/CMakeLists.txt index d7f523bc8d..2c40070eca 100644 --- a/proto/CMakeLists.txt +++ b/proto/CMakeLists.txt @@ -6,25 +6,6 @@ 
set(proto_filenames ParameterService.proto TrainerConfig.proto) -set(real_proto_files) - -# TODO(yuyang18): Some internal proto will also be depended on. -# Find a way to automatically calculate all depends. -foreach(filename ${proto_filenames}) - set(PROTOBUF_3_FLAGS "") - if (PROTOBUF_3) - set(PROTOBUF_3_FLAGS "-Dproto3") - endif() - add_custom_command(OUTPUT ${filename} - COMMAND ${M4_EXECUTABLE} -Dreal=${ACCURACY} ${PROTOBUF_3_FLAGS} -I '${INTERNAL_PROTO_PATH}' - ${PROJ_ROOT}/proto/${filename}.m4 > ${filename} - DEPENDS ${PROJ_ROOT}/proto/${filename}.m4 - COMMENT "Generate ${filename}") -endforeach() - -add_custom_target(proto_accuracy ALL - DEPENDS ${proto_filenames}) - set(PROTO_GEN) set(PROTO_GEN_PY) @@ -39,9 +20,8 @@ foreach(filename ${proto_filenames}) add_custom_command(OUTPUT ${CUR_PROTO_GEN} COMMAND ${PROTOBUF_PROTOC_EXECUTABLE} --cpp_out ${CMAKE_CURRENT_BINARY_DIR} - --proto_path ${CMAKE_CURRENT_BINARY_DIR} ${CMAKE_CURRENT_BINARY_DIR}/${filename} - DEPENDS proto_accuracy - ${PROJ_ROOT}/proto/${filename}.m4) + --proto_path ${PROJ_ROOT}/proto ${PROJ_ROOT}/proto/${filename} + DEPENDS ${filename}) set(CUR_PROTO_GEN_PY ${PROJ_ROOT}/paddle/python/paddle/proto/${base_filename}_pb2.py) @@ -50,9 +30,8 @@ foreach(filename ${proto_filenames}) ${PROTO_GEN_PY}) add_custom_command(OUTPUT ${CUR_PROTO_GEN_PY} COMMAND ${PROTOBUF_PROTOC_EXECUTABLE} --python_out ${PROJ_ROOT}/python/paddle/proto - --proto_path ${CMAKE_CURRENT_BINARY_DIR} ${CMAKE_CURRENT_BINARY_DIR}/${filename} - DEPENDS proto_accuracy - ${PROJ_ROOT}/proto/${filename}.m4) + --proto_path ${PROJ_ROOT}/proto ${PROJ_ROOT}/proto/${filename} + DEPENDS ${filename}) endforeach() include_directories(${CMAKE_CURRENT_BINARY_DIR}/proto) @@ -61,5 +40,4 @@ add_custom_target(gen_proto_cpp ALL DEPENDS ${PROTO_GEN}) add_custom_target(gen_proto_py ALL DEPENDS ${PROTO_GEN_PY}) add_library(paddle_proto STATIC ${PROTO_GEN}) -add_dependencies(paddle_proto proto_accuracy) target_include_directories(paddle_proto PUBLIC ${CMAKE_CURRENT_BINARY_DIR}) diff --git a/proto/DataConfig.proto.m4 b/proto/DataConfig.proto similarity index 93% rename from proto/DataConfig.proto.m4 rename to proto/DataConfig.proto index 1f8e3f4f3e..e895c184d9 100644 --- a/proto/DataConfig.proto.m4 +++ b/proto/DataConfig.proto @@ -11,11 +11,11 @@ distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ -ifdef(`proto3', `syntax = "proto2";') +syntax = "proto2"; package paddle; -sinclude(`DataConfigExt.proto.m4') + message FileGroupConf { optional uint32 queue_capacity = 1 [default = 1]; // how many files to load for a load file thread @@ -26,7 +26,7 @@ message FileGroupConf { }; message DataConfig { -sinclude(`DataConfigInter.proto.m4') + required string type = 1; // name of a text file which contains a list of file names at each line @@ -51,11 +51,11 @@ sinclude(`DataConfigInter.proto.m4') /// Note the field number 17, 18 and 19 have been deprecated. - // a list of values which will be used to create additional one dimensional real + // a list of values which will be used to create additional one dimensional float // values slots. These one dimensional slots can be used as the weight input // for cost layers. // Currently this is only supported by ProtoDataProvider. - repeated real constant_slots = 20; + repeated double constant_slots = 20; // for PyDataProvider. 
// Specify the load data script module name, object name and user args @@ -80,6 +80,6 @@ sinclude(`DataConfigInter.proto.m4') optional bool is_main_data = 26 [default = true]; // the usage ratio of instances. Setting to 1.0 means the use of all instances. - optional real usage_ratio = 27 [default = 1.0]; + optional double usage_ratio = 27 [default = 1.0]; }; diff --git a/proto/DataFormat.proto.m4 b/proto/DataFormat.proto similarity index 98% rename from proto/DataFormat.proto.m4 rename to proto/DataFormat.proto index 54e9fd008e..19b1499b02 100644 --- a/proto/DataFormat.proto.m4 +++ b/proto/DataFormat.proto @@ -11,7 +11,7 @@ distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ -ifdef(`proto3', `syntax = "proto2";') +syntax = "proto2"; package paddle; diff --git a/proto/ModelConfig.proto.m4 b/proto/ModelConfig.proto similarity index 95% rename from proto/ModelConfig.proto.m4 rename to proto/ModelConfig.proto index ccad69a3c2..b34e1ebded 100644 --- a/proto/ModelConfig.proto.m4 +++ b/proto/ModelConfig.proto @@ -11,7 +11,7 @@ distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ -ifdef(`proto3', `syntax = "proto2";') +syntax = "proto2"; import "ParameterConfig.proto"; @@ -20,7 +20,7 @@ package paddle; /** * Various structs for the configuration of a neural network */ -sinclude(`ModelConfigExt.proto.m4') + message ExternalConfig { repeated string layer_names = 1; @@ -146,8 +146,8 @@ message NormConfig { // the parameters for normalization // u = u / (1+scale*sum(u^2 in window))^pow - required real scale = 4; - required real pow = 5; + required double scale = 4; + required double pow = 5; // The size of output feature map. required uint32 output_x = 6; @@ -223,7 +223,7 @@ message OperatorConfig { required uint64 output_size = 4; // For DotMulOperator - optional real dotmul_scale = 5 [default = 1.0]; + optional double dotmul_scale = 5 [default = 1.0]; // For ConvOperator optional ConvConfig conv_conf = 6; @@ -266,7 +266,7 @@ message LayerInputConfig { } message LayerConfig { -sinclude(`ModelConfigLayer.proto.m4') + required string name = 1; required string type = 2; optional uint64 size = 3; @@ -293,7 +293,7 @@ sinclude(`ModelConfigLayer.proto.m4') optional uint32 partial_sum = 9; // for dropout - optional real drop_rate = 10; + optional double drop_rate = 10; // for HierarchicalSoftmaxLayer and NCELayer // the number of classes @@ -317,17 +317,17 @@ sinclude(`ModelConfigLayer.proto.m4') // For NCELayer // The distribution for generating the random negative labels. // A uniform distribution will be used if not provided - repeated real neg_sampling_dist = 17 [packed = true]; + repeated double neg_sampling_dist = 17 [packed = true]; // For MaxLayer // default: output VALUE of MaxLayer. set this flag to true for output INDEX - // INDEX will be put in Argument::value as real values. + // INDEX will be put in Argument::value as double values. optional bool output_max_index = 19 [default = false]; /// The filed number 20 have been deprecated. 
// For self-normalized estimation - optional real softmax_selfnorm_alpha = 21 [default = 0.1]; + optional double softmax_selfnorm_alpha = 21 [default = 0.1]; /// The filed numbers 22 and 23 have been deprecated. @@ -338,14 +338,14 @@ sinclude(`ModelConfigLayer.proto.m4') optional bool norm_by_times = 25; // for CostLayers - optional real coeff = 26 [default = 1.0]; + optional double coeff = 26 [default = 1.0]; // for AverageLayer // can be set to: 'average', 'sum' or 'squarerootn' optional string average_strategy = 27; // for error clipping - optional real error_clipping_threshold = 28 [default = 0.0]; + optional double error_clipping_threshold = 28 [default = 0.0]; // for operators used by mixed layer repeated OperatorConfig operator_confs = 29; @@ -355,11 +355,11 @@ sinclude(`ModelConfigLayer.proto.m4') optional int32 max_sort_size = 31; // for SlopeInterceptLayer - optional real slope = 32; - optional real intercept = 33; + optional double slope = 32; + optional double intercept = 33; // for CosSimVecMatLayer and CosSimLayer - optional real cos_scale = 34; + optional double cos_scale = 34; // for DataNormLayer // can be set to: 'z-score', 'min-max' or 'decimal-scaling' @@ -394,7 +394,7 @@ sinclude(`ModelConfigLayer.proto.m4') // if number of the selected columns is less than // sample number * selective_fc output size * selective_fc_mull_mull_ratio // sparse multiplication is used, otherwise, using full multiplication. - optional real selective_fc_full_mul_ratio = 44 [default = 0.02]; + optional double selective_fc_full_mul_ratio = 44 [default = 0.02]; // to indicate how many threads selective_fc use to to accelate // the plain_mul period @@ -406,7 +406,7 @@ sinclude(`ModelConfigLayer.proto.m4') optional bool use_global_stats = 46; // use to compute moving mean and variance. - optional real moving_average_fraction = 47 [default = 0.9]; + optional double moving_average_fraction = 47 [default = 0.9]; // bias size optional uint32 bias_size = 48 [default = 0]; @@ -438,7 +438,7 @@ message EvaluatorConfig { // Used by PrecisionRecallEvaluator and ClassificationErrorEvaluator // For multi binary labels: true if output > classification_threshold - optional real classification_threshold = 6 [default = 0.5]; + optional double classification_threshold = 6 [default = 0.5]; // The positive label. -1 means average precision and recall optional int32 positive_label = 7 [default = -1]; diff --git a/proto/ParameterConfig.proto.m4 b/proto/ParameterConfig.proto similarity index 87% rename from proto/ParameterConfig.proto.m4 rename to proto/ParameterConfig.proto index b5c0fea6c3..cbcd0af598 100644 --- a/proto/ParameterConfig.proto.m4 +++ b/proto/ParameterConfig.proto @@ -11,7 +11,7 @@ distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 
*/ -ifdef(`proto3', `syntax = "proto2";') +syntax = "proto2"; package paddle; @@ -32,14 +32,14 @@ message ParameterUpdaterHookConfig { message ParameterConfig { required string name = 1; required uint64 size = 2; - optional real learning_rate = 3 [default = 1.0]; - optional real momentum = 4 [default = 0.0]; - optional real initial_mean = 5 [default = 0.0]; - optional real initial_std = 6 [default = 0.01]; + optional double learning_rate = 3 [default = 1.0]; + optional double momentum = 4 [default = 0.0]; + optional double initial_mean = 5 [default = 0.0]; + optional double initial_std = 6 [default = 0.01]; // use L2-regularization if decay_rate set and decay_rate_l1 not set - optional real decay_rate = 7 [default = 0.0]; + optional double decay_rate = 7 [default = 0.0]; // use L1-regularization if decay_rate_l1 set - optional real decay_rate_l1 = 8 [default = 0.0]; + optional double decay_rate_l1 = 8 [default = 0.0]; // dims of Parameter, e.g. dims[0] as height, dims[1] as width.. repeated uint64 dims = 9; // the gpu device which the parameter in. @@ -60,7 +60,7 @@ message ParameterConfig { // sparse remote update or not optional bool sparse_remote_update = 16 [default = false]; // gradient clipping threshold, no clipping by default - optional real gradient_clipping_threshold = 17 [default = 0.0]; + optional double gradient_clipping_threshold = 17 [default = 0.0]; // static parameters are fixed when training optional bool is_static = 18 [default = false]; // para_id should NOT be set by config_parser. It is for diff --git a/proto/ParameterService.proto.m4 b/proto/ParameterService.proto similarity index 97% rename from proto/ParameterService.proto.m4 rename to proto/ParameterService.proto index 25b0991583..c1c04d8cc5 100644 --- a/proto/ParameterService.proto.m4 +++ b/proto/ParameterService.proto @@ -11,7 +11,7 @@ distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ -ifdef(`proto3', `syntax = "proto2";') +syntax = "proto2"; import "ParameterConfig.proto"; import "TrainerConfig.proto"; @@ -73,7 +73,7 @@ message SendParameterRequest { optional int64 num_samples = 4; // cost will be used to calculate global objective value - optional real cost = 5; + optional double cost = 5; required BatchStatus batch_status = 6; @@ -245,13 +245,13 @@ enum MatrixVectorOperation { message ProtoVector { required int64 dim = 1; - repeated real values = 2 [packed = true]; + repeated double values = 2 [packed = true]; } message ProtoMatrix { required int64 num_rows = 1; required int64 num_cols = 2; - repeated real values = 3 [packed = true]; + repeated double values = 3 [packed = true]; } message Operation { @@ -263,7 +263,7 @@ message Operation { // matrix handles created on the pserver repeated int64 pmatrices = 3; // A, B, C - repeated real scalars = 4; // a, b, c + repeated double scalars = 4; // a, b, c repeated ProtoVector vectors = 5; // x, y, z repeated ProtoMatrix matrices = 6; // X, Y, Z } @@ -272,7 +272,7 @@ message OperationResult { // error message. 
Empty if success optional string return_message = 1; // - repeated real scalars = 2; // d, e, f + repeated double scalars = 2; // d, e, f repeated ProtoVector vectors = 3; // p, q, r repeated ProtoMatrix matrices = 4; // P, Q, R } diff --git a/proto/TrainerConfig.proto.m4 b/proto/TrainerConfig.proto similarity index 87% rename from proto/TrainerConfig.proto.m4 rename to proto/TrainerConfig.proto index 4684203b03..a334e07b62 100644 --- a/proto/TrainerConfig.proto.m4 +++ b/proto/TrainerConfig.proto @@ -11,7 +11,7 @@ distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ -ifdef(`proto3', `syntax = "proto2";') +syntax = "proto2"; import "DataConfig.proto"; import "ModelConfig.proto"; @@ -24,9 +24,9 @@ message OptimizationConfig { optional int32 num_batches_per_send_parameter = 5 [default = 1]; optional int32 num_batches_per_get_parameter = 6 [default = 1]; - required real learning_rate = 7; - optional real learning_rate_decay_a = 8 [default = 0]; - optional real learning_rate_decay_b = 9 [default = 0]; + required double learning_rate = 7; + optional double learning_rate_decay_a = 8 [default = 0]; + optional double learning_rate_decay_b = 9 [default = 0]; optional string learning_rate_schedule = 27 [default = "constant"]; // learning rate will be scaled according to learning_rate_schedule // 1), constant: @@ -49,14 +49,14 @@ message OptimizationConfig { // owlqn related // L1-regularization - optional real l1weight = 10 [default = 0.1]; + optional double l1weight = 10 [default = 0.1]; // L2-regularization - optional real l2weight = 11 [default = 0]; + optional double l2weight = 11 [default = 0]; // "c1" in wolfe condition: if (newobj <= oldobj + c1 * origDirDeriv * step) // then accept the step - optional real c1 = 12 [default = 0.0001]; + optional double c1 = 12 [default = 0.0001]; // multiply the step with "backoff", when wolfe condition doesn't satisfy - optional real backoff = 13 [default = 0.5]; + optional double backoff = 13 [default = 0.5]; // how many "s"s and "y"s are kept in owlqn optional int32 owlqn_steps = 14 [default = 10]; // accept the step if encountered "max_backoff" times of "reduce the step" @@ -82,15 +82,15 @@ message OptimizationConfig { // default learning method("momentum") use global decayed learning rate with momentum. // "adagrad", "adadelta" and "rmsprop" can set momentum too. optional string learning_method = 23 [default = "momentum"]; - optional real ada_epsilon = 24 [default = 1e-6]; - optional real ada_rou = 26 [default = 0.95]; + optional double ada_epsilon = 24 [default = 1e-6]; + optional double ada_rou = 26 [default = 0.95]; // Force to do average in cpu in order to save gpu memory usage optional bool do_average_in_cpu = 25 [default = false]; // delta add rate in pserver, used while num_batches_per_send_parameter>1 // will be divided by #machines automatically. - optional real delta_add_rate = 28 [default = 1.0]; + optional double delta_add_rate = 28 [default = 1.0]; // We split a large size into smaller mini-batches, whose sizes are // determined by mini_batch_size. 
It only takes effect when there is @@ -108,14 +108,14 @@ message OptimizationConfig { // shrink sparse parameter value // only works if parameter is remote sparse update and has L1 decay rate - optional real shrink_parameter_value = 32 [default = 0]; + optional double shrink_parameter_value = 32 [default = 0]; //////////////////////////// // Options Adam Optimizer // //////////////////////////// - optional real adam_beta1 = 33 [default = 0.9]; - optional real adam_beta2 = 34 [default = 0.999]; - optional real adam_epsilon = 35 [default = 1e-8]; + optional double adam_beta1 = 33 [default = 0.9]; + optional double adam_beta2 = 34 [default = 0.999]; + optional double adam_epsilon = 35 [default = 1e-8]; // arguments for learning rate scheduler // Format: num1:rate1,num2:rate2,...,numK:rateK @@ -127,7 +127,7 @@ message OptimizationConfig { // for async sgd gradient commit control. // when async_lagged_grad_discard_ratio * num_gradient_servers commit passed, // current async gradient will be discard silently. - optional real async_lagged_grad_discard_ratio = 37 [default = 1.5]; + optional double async_lagged_grad_discard_ratio = 37 [default = 1.5]; }; message TrainerConfig { From 3d817037a08ac50d9c135f6aafd8ff4c2ac4a29e Mon Sep 17 00:00:00 2001 From: Luo Tao Date: Tue, 13 Dec 2016 12:31:30 +0800 Subject: [PATCH 103/265] fix url error in rnn_en.rst --- doc/howto/deep_model/rnn/rnn_en.rst | 2 +- doc/tutorials/semantic_role_labeling/index_en.md | 4 ++++ doc/tutorials/sentiment_analysis/index_en.md | 4 ---- 3 files changed, 5 insertions(+), 5 deletions(-) diff --git a/doc/howto/deep_model/rnn/rnn_en.rst b/doc/howto/deep_model/rnn/rnn_en.rst index 64f464b1dc..b4c0c8bb4c 100644 --- a/doc/howto/deep_model/rnn/rnn_en.rst +++ b/doc/howto/deep_model/rnn/rnn_en.rst @@ -246,6 +246,6 @@ The code is listed below: outputs(beam_gen) -Notice that this generation technique is only useful for decoder like generation process. If you are working on sequence tagging tasks, please refer to :ref:`sentiment_analysis_en` for more details. +Notice that this generation technique is only useful for decoder like generation process. If you are working on sequence tagging tasks, please refer to :ref:`semantic_role_labeling_en` for more details. The full configuration file is located at :code:`demo/seqToseq/seqToseq_net.py`. diff --git a/doc/tutorials/semantic_role_labeling/index_en.md b/doc/tutorials/semantic_role_labeling/index_en.md index f5bdf64487..bdd12c0d9a 100644 --- a/doc/tutorials/semantic_role_labeling/index_en.md +++ b/doc/tutorials/semantic_role_labeling/index_en.md @@ -1,3 +1,7 @@ +```eval_rst +.. _semantic_role_labeling_en: +``` + # Semantic Role labeling Tutorial # Semantic role labeling (SRL) is a form of shallow semantic parsing whose goal is to discover the predicate-argument structure of each predicate in a given input sentence. SRL is useful as an intermediate step in a wide range of natural language processing tasks, such as information extraction. automatic document categorization and question answering. An instance is as following [1]: diff --git a/doc/tutorials/sentiment_analysis/index_en.md b/doc/tutorials/sentiment_analysis/index_en.md index 279ebddf19..bb7681db44 100644 --- a/doc/tutorials/sentiment_analysis/index_en.md +++ b/doc/tutorials/sentiment_analysis/index_en.md @@ -1,7 +1,3 @@ -```eval_rst -.. _sentiment_analysis_en: -``` - # Sentiment Analysis Tutorial Sentiment analysis has many applications. 
A basic task in sentiment analysis is classifying the polarity of a given text at the document, sentence or feature/aspect level. One simple example is to classify the customer reviews in a shopping website, a tourism website, and group buying websites like Amazon, TaoBao, Tmall etc. From c93df580113464777cb4b01951beaa280da6bf9f Mon Sep 17 00:00:00 2001 From: jiangfeng <103531948@qq.com> Date: Mon, 12 Dec 2016 23:08:56 +0800 Subject: [PATCH 104/265] translation about to chinese --- doc/about/index_cn.md | 13 +++++++++++++ 1 file changed, 13 insertions(+) create mode 100644 doc/about/index_cn.md diff --git a/doc/about/index_cn.md b/doc/about/index_cn.md new file mode 100644 index 0000000000..d19e2ea5f3 --- /dev/null +++ b/doc/about/index_cn.md @@ -0,0 +1,13 @@ +关于PaddlePaddle +================ + +PaddlePaddle是一个由百度的科学家和工程师开发,目前已应用于多个产品线,易用的、高效的、灵活的和可扩展的深度学习平台 +。 +PaddlePaddle目前已经开放源码, 但是远未完善,我们希望能在这个基础上不断的改进、扩展和延伸。 +同时我们希望广大开发者积极提供反馈和贡献源代码,建立一个活跃的开源社区。 + + +致谢 +-------- + +在这里我们欠所有`PaddlePaddle开发者`一声感谢。 From 55afeccf33826a7c7ba979a0bbbe1a5bb7c85362 Mon Sep 17 00:00:00 2001 From: jiangfeng <103531948@qq.com> Date: Tue, 13 Dec 2016 13:03:50 +0800 Subject: [PATCH 105/265] add Chinese document for about --- doc/about/index_cn.md | 6 ++---- 1 file changed, 2 insertions(+), 4 deletions(-) diff --git a/doc/about/index_cn.md b/doc/about/index_cn.md index d19e2ea5f3..393ba07b58 100644 --- a/doc/about/index_cn.md +++ b/doc/about/index_cn.md @@ -1,13 +1,11 @@ 关于PaddlePaddle ================ -PaddlePaddle是一个由百度的科学家和工程师开发,目前已应用于多个产品线,易用的、高效的、灵活的和可扩展的深度学习平台 -。 +PaddlePaddle是一个最早由百度科学家和工程师共同研发的并行分布式深度学习平台,兼备易用性、高效性、灵活性和可扩展性,目前已被百度内部多个产品线广泛使用。 PaddlePaddle目前已经开放源码, 但是远未完善,我们希望能在这个基础上不断的改进、扩展和延伸。 同时我们希望广大开发者积极提供反馈和贡献源代码,建立一个活跃的开源社区。 - 致谢 -------- -在这里我们欠所有`PaddlePaddle开发者`一声感谢。 +在此,特别感谢PaddlePaddle的所有贡献者。 From 98b3ee26a42076f6a345eb9357570f31b7e54826 Mon Sep 17 00:00:00 2001 From: CrossLee1 Date: Tue, 13 Dec 2016 13:24:43 +0800 Subject: [PATCH 106/265] move file --- {doc_cn/demo => doc/tutorials}/imagenet_model/resnet_model_cn.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {doc_cn/demo => doc/tutorials}/imagenet_model/resnet_model_cn.md (100%) diff --git a/doc_cn/demo/imagenet_model/resnet_model_cn.md b/doc/tutorials/imagenet_model/resnet_model_cn.md similarity index 100% rename from doc_cn/demo/imagenet_model/resnet_model_cn.md rename to doc/tutorials/imagenet_model/resnet_model_cn.md From 67ddaff77b368d3291b8d36470abac53279d8d5d Mon Sep 17 00:00:00 2001 From: CrossLee1 Date: Tue, 13 Dec 2016 13:48:37 +0800 Subject: [PATCH 107/265] delete useless folder --- doc_cn/demo/imagenet_model/resnet_block.jpg | Bin 22422 -> 0 bytes 1 file changed, 0 insertions(+), 0 deletions(-) delete mode 100644 doc_cn/demo/imagenet_model/resnet_block.jpg diff --git a/doc_cn/demo/imagenet_model/resnet_block.jpg b/doc_cn/demo/imagenet_model/resnet_block.jpg deleted file mode 100644 index e16bd3c624030c4c09b358a015b491141b42d8f1..0000000000000000000000000000000000000000 GIT binary patch literal 0 HcmV?d00001 literal 22422 zcmce-2S8KH);7FD=q>aPp@UTENDGR9h)7YofHdi#6ahh^h;&3iK|nz1RisM|9RZOJ z0@8wwrU?p02+6b^LY!4Ct zNBbBLfb{n^V4pt=co(oQ;OK7PU+5F?`YC~RPe}e~4as@}{X+&0!0!R#HlS+a?ic76 z;O=*oUqR+LpnA%{i1es;u>6!oeu@lJyoqzkKrQL|EviRbxKj~CG_93^j*f$gnX#V1 zIbCqF06=@@oQJO;2?GH5_y+o$>7U}avbN!;cn!{x3LppM0Km~X;HuFDy^BYE8tCfq z2Y`)!_W$R{-ptQ=0F%;ZO!)c#tp3*oM(3;kf#3p|fXx+LTmzgzISQ1$g9EP~)nh=J z#mVbPCOMiTi$6F(Q06$2-F}m$ezy5d);^M5e0*KNHb-N=>f+;aB=>@H?DZg5P==lZ 
From 95035908b4f47e61bad12d0ed49bf62a1734b2cf Mon Sep 17 00:00:00 2001
From: hedaoyuan
Date: Tue, 13 Dec 2016 14:27:42 +0800
Subject: [PATCH 108/265] add CrossMapNormal

---
 paddle/math/cross_map_normal_op.cpp      | 129 +++++++++++++++++++++
 paddle/math/cross_map_normal_op.h        |  47 ++++++++
 paddle/math/tests/test_matrixCompare.cpp | 137 ++++++++++++-----------
 3 files changed, 248 insertions(+), 65 deletions(-)
 create mode 100644 paddle/math/cross_map_normal_op.cpp
 create mode 100644 paddle/math/cross_map_normal_op.h

diff --git a/paddle/math/cross_map_normal_op.cpp b/paddle/math/cross_map_normal_op.cpp
new file mode 100644
index 0000000000..3eb51b5998
--- /dev/null
+++ b/paddle/math/cross_map_normal_op.cpp
@@ -0,0 +1,129 @@
+/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
*/ + +#include "cross_map_normal_op.h" + +namespace paddle { + +// NCHW +void CrossMapNormal::operator()(CpuMatrix& outputs, + CpuMatrix& denoms, + CpuMatrix& inputs, + size_t channels, + size_t imgSizeH, + size_t imgSizeW, + size_t sizeX, + real scale, + real pow) { + CHECK(outputs.isContiguous()); + CHECK(inputs.isContiguous()); + CHECK(denoms.isContiguous()); + CHECK_EQ(outputs.getHeight(), inputs.getHeight()); + CHECK_EQ(outputs.getWidth(), inputs.getWidth()); + CHECK_EQ(outputs.getHeight(), denoms.getHeight()); + CHECK_EQ(outputs.getWidth(), denoms.getWidth()); + + size_t numSample = inputs.getHeight(); + size_t numCols = inputs.getWidth(); + size_t imageSize = imgSizeH * imgSizeW; + CHECK(imageSize * channels == numCols); + + denoms = denoms.constant(1.0); + const int start = -((int)sizeX - 1) / 2; + const int end = (int)sizeX + start; + for (size_t i = 0; i < numSample; i++) { + real* denomsData = denoms.getData() + i * numCols; + real* inputData = inputs.getData() + i * numCols; + for (int c = 0; c < (int)channels; c++) { + CpuVector denom(imageSize, denomsData + c * imageSize); + for (int s = start; s < end; s++) { + if (c + s >= 0 && c + s < (int)channels) { + CpuVector input(imageSize, inputData + (c + s) * imageSize); + denom += input.square() * scale; + } + } + } + } + outputs = inputs * denoms.pow(-pow); +} + +void CrossMapNormalGrad::operator()(CpuMatrix& inputsGrad, + CpuMatrix& inputsValue, + CpuMatrix& outputsGrad, + CpuMatrix& outputsValue, + CpuMatrix& denoms, + size_t channels, + size_t imgSizeH, + size_t imgSizeW, + size_t sizeX, + real scale, + real pow) { + CHECK(inputsGrad.isContiguous()); + CHECK(outputsGrad.isContiguous()); + CHECK(denoms.isContiguous()); + CHECK(inputsValue.isContiguous()); + CHECK(outputsValue.isContiguous()); + CHECK_EQ(inputsGrad.getHeight(), outputsGrad.getHeight()); + CHECK_EQ(inputsGrad.getWidth(), outputsGrad.getWidth()); + CHECK_EQ(inputsGrad.getHeight(), denoms.getHeight()); + CHECK_EQ(inputsGrad.getWidth(), denoms.getWidth()); + CHECK_EQ(inputsGrad.getHeight(), inputsValue.getHeight()); + CHECK_EQ(inputsGrad.getWidth(), inputsValue.getWidth()); + CHECK_EQ(inputsGrad.getHeight(), outputsValue.getHeight()); + CHECK_EQ(inputsGrad.getWidth(), outputsValue.getWidth()); + + size_t numSample = inputsGrad.getHeight(); + size_t numCols = inputsGrad.getWidth(); + size_t imageSize = imgSizeH * imgSizeW; + CHECK(imageSize * channels == numCols); + + std::function oneImage = [=](real* data, + size_t offset) { + return CpuVector(imageSize, data + offset); + }; + + const int start = -((int)sizeX) / 2; + const int end = (int)sizeX + start; + const real ratio = -(real)2 * scale * pow; + for (size_t i = 0; i < numSample; i++) { + size_t sOffset = i * numCols; + real* inputGradData = inputsGrad.getData() + sOffset; + real* inputData = inputsValue.getData() + sOffset; + real* denomData = denoms.getData() + sOffset; + real* outputGradData = outputsGrad.getData() + sOffset; + real* outputData = outputsValue.getData() + sOffset; + + for (int c = 0; c < (int)channels; c++) { + size_t cOffset = c * imageSize; + CpuVector inputGrad = oneImage(inputGradData, cOffset); + CpuVector inputValue = oneImage(inputData, cOffset); + CpuVector denom = oneImage(denomData, cOffset); + CpuVector outputGrad = oneImage(outputGradData, cOffset); + + inputGrad = inputGrad + denom.pow(-pow) * outputGrad; + for (int s = start; s < end; s++) { + if (c + s >= 0 && c + s < (int)channels) { + size_t offset = (c + s) * imageSize; + CpuVector output = oneImage(outputData, offset); + 
CpuVector outputGrad = oneImage(outputGradData, offset); + CpuVector denom = oneImage(denomData, offset); + + inputGrad += ((outputGrad * output * ratio) / denom) * inputValue; + } + } + } + } +} + +} // namespace paddle diff --git a/paddle/math/cross_map_normal_op.h b/paddle/math/cross_map_normal_op.h new file mode 100644 index 0000000000..2f99607252 --- /dev/null +++ b/paddle/math/cross_map_normal_op.h @@ -0,0 +1,47 @@ +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. */ + +#pragma once + +#include "paddle/math/Matrix.h" + +namespace paddle { + +struct CrossMapNormal { + void operator()(CpuMatrix& outputs, + CpuMatrix& denoms, + CpuMatrix& inputs, + size_t channels, + size_t imgSizeH, + size_t imgSizeW, + size_t sizeX, + real scale, + real pow); +}; + +struct CrossMapNormalGrad { + void operator()(CpuMatrix& inputsGrad, + CpuMatrix& inputsValue, + CpuMatrix& outputsGrad, + CpuMatrix& outputsValue, + CpuMatrix& denoms, + size_t channels, + size_t imgSizeH, + size_t imgSizeW, + size_t sizeX, + real scale, + real pow); +}; + +} // namespace paddle diff --git a/paddle/math/tests/test_matrixCompare.cpp b/paddle/math/tests/test_matrixCompare.cpp index 5233a9af40..9bb1fdbdab 100644 --- a/paddle/math/tests/test_matrixCompare.cpp +++ b/paddle/math/tests/test_matrixCompare.cpp @@ -23,6 +23,7 @@ limitations under the License. 
*/ #include "paddle/gserver/tests/TestUtil.h" #include "paddle/utils/Stat.h" #include "TensorCheck.h" +#include "paddle/math/cross_map_normal_op.h" using namespace paddle; // NOLINT using namespace std; // NOLINT @@ -1261,30 +1262,32 @@ TEST(Matrix, MaxOutFwdBwd) { } } } + void testCrossMapNormalFwd( int numSamples, int channels, int imgSizeH, int imgSizeW, int sizeX) { float scale = 1.5; float pow = 0.5; int width = imgSizeH * imgSizeW * channels; - MatrixPtr input = CpuMatrix::create(numSamples, width, false, false); - MatrixPtr denorms = CpuMatrix::create(numSamples, width, false, false); - MatrixPtr target = CpuMatrix::create(numSamples, width, false, false); - MatrixPtr inputGpu = GpuMatrix::create(numSamples, width, false, true); - MatrixPtr denormsGpu = GpuMatrix::create(numSamples, width, false, true); - MatrixPtr targetGpu = GpuMatrix::create(numSamples, width, false, true); - - input->randomizeUniform(); - target->randomizeUniform(); - inputGpu->copyFrom(*input); - targetGpu->copyFrom(*target); - - target->crossMapNormalFwd( - *input, imgSizeH, imgSizeW, *denorms, channels, sizeX, scale, pow); - targetGpu->crossMapNormalFwd( - *inputGpu, imgSizeH, imgSizeW, *denormsGpu, channels, sizeX, scale, pow); - - TensorCheckErr(*target, *targetGpu); - TensorCheckErr(*denorms, *denormsGpu); + CpuMatrix inputs(numSamples, width); + CpuMatrix denoms(numSamples, width); + CpuMatrix outputs(numSamples, width); + GpuMatrix inputsGpu(numSamples, width); + GpuMatrix denomsGpu(numSamples, width); + GpuMatrix outputsGpu(numSamples, width); + + inputs.randomizeUniform(); + outputs.randomizeUniform(); + inputsGpu.copyFrom(inputs); + outputsGpu.copyFrom(outputs); + + CrossMapNormal cross; + cross( + outputs, denoms, inputs, channels, imgSizeH, imgSizeW, sizeX, scale, pow); + outputsGpu.crossMapNormalFwd( + inputsGpu, imgSizeH, imgSizeW, denomsGpu, channels, sizeX, scale, pow); + + TensorCheckErr(outputs, outputsGpu); + TensorCheckErr(denoms, denomsGpu); } TEST(Matrix, crossMapNormalFwd) { @@ -1310,53 +1313,57 @@ void testCrossMapNormalBwd( float scale = 1.5; float pow = 0.5; size_t width = imgSizeH * imgSizeW * channels; - MatrixPtr localGrad = CpuMatrix::create(numSamples, width, false, false); - MatrixPtr denoms = CpuMatrix::create(numSamples, width, false, false); - MatrixPtr output = CpuMatrix::create(numSamples, width, false, false); - MatrixPtr preOutV = CpuMatrix::create(numSamples, width, false, false); - MatrixPtr localOutV = CpuMatrix::create(numSamples, width, false, false); - - localGrad->randomizeUniform(); - denoms->randomizeUniform(); - preOutV->randomizeUniform(); - localOutV->randomizeUniform(); - output->randomizeUniform(); - denoms->add(0.01); - - MatrixPtr localGradGpu = GpuMatrix::create(numSamples, width, false, true); - MatrixPtr denomsGpu = GpuMatrix::create(numSamples, width, false, true); - MatrixPtr outputGpu = GpuMatrix::create(numSamples, width, false, true); - MatrixPtr preOutVGpu = GpuMatrix::create(numSamples, width, false, true); - MatrixPtr localOutVGpu = GpuMatrix::create(numSamples, width, false, true); - - localGradGpu->copyFrom(*localGrad); - denomsGpu->copyFrom(*denoms); - preOutVGpu->copyFrom(*preOutV); - localOutVGpu->copyFrom(*localOutV); - outputGpu->copyFrom(*output); - output->crossMapNormalBwd(*localGrad, - *denoms, - *preOutV, - *localOutV, - channels, - imgSizeH, - imgSizeW, - sizeX, - scale, - pow); - outputGpu->crossMapNormalBwd(*localGradGpu, - *denomsGpu, - *preOutVGpu, - *localOutVGpu, - channels, - imgSizeH, - imgSizeW, - sizeX, - scale, - pow); - - 
TensorCheckErr(*output, *outputGpu); + CpuMatrix inputsGrad(numSamples, width); + CpuMatrix inputsValue(numSamples, width); + CpuMatrix outputsGrad(numSamples, width); + CpuMatrix outputsValue(numSamples, width); + CpuMatrix denoms(numSamples, width); + + outputsGrad.randomizeUniform(); + denoms.randomizeUniform(); + inputsValue.randomizeUniform(); + outputsValue.randomizeUniform(); + inputsGrad.randomizeUniform(); + denoms.add(0.01); + + GpuMatrix inputsGradGpu(numSamples, width); + GpuMatrix inputsValueGpu(numSamples, width); + GpuMatrix outputsGradGpu(numSamples, width); + GpuMatrix outputsValueGpu(numSamples, width); + GpuMatrix denomsGpu(numSamples, width); + + outputsGradGpu.copyFrom(outputsGrad); + denomsGpu.copyFrom(denoms); + inputsValueGpu.copyFrom(inputsValue); + outputsValueGpu.copyFrom(outputsValue); + inputsGradGpu.copyFrom(inputsGrad); + + CrossMapNormalGrad cross; + cross(inputsGrad, + inputsValue, + outputsGrad, + outputsValue, + denoms, + channels, + imgSizeH, + imgSizeW, + sizeX, + scale, + pow); + + inputsGradGpu.crossMapNormalBwd(outputsGradGpu, + denomsGpu, + inputsValueGpu, + outputsValueGpu, + channels, + imgSizeH, + imgSizeW, + sizeX, + scale, + pow); + + TensorCheckErr(inputsGrad, inputsGradGpu); } TEST(Matrix, crossMapNormalBwd) { From f62f5181f1193a7bd425e1a9fd67927a22a7e722 Mon Sep 17 00:00:00 2001 From: Yu Yang Date: Tue, 13 Dec 2016 14:56:22 +0800 Subject: [PATCH 109/265] Use explicit type for std::transform. * Also fix a protential bug in WarpCTCLayer.cpp --- paddle/gserver/layers/WarpCTCLayer.cpp | 1 - paddle/pserver/ParameterClient2.cpp | 9 +++------ 2 files changed, 3 insertions(+), 7 deletions(-) diff --git a/paddle/gserver/layers/WarpCTCLayer.cpp b/paddle/gserver/layers/WarpCTCLayer.cpp index 23ca5257b6..94e926a8d8 100644 --- a/paddle/gserver/layers/WarpCTCLayer.cpp +++ b/paddle/gserver/layers/WarpCTCLayer.cpp @@ -31,7 +31,6 @@ bool WarpCTCLayer::init(const LayerMap& layerMap, CHECK_EQ(numClasses_, inputLayers_[0]->getSize()); blank_ = config_.blank(); - CHECK_GE(blank_, 0UL); CHECK_LT(blank_, numClasses_); normByTimes_ = config_.norm_by_times(); diff --git a/paddle/pserver/ParameterClient2.cpp b/paddle/pserver/ParameterClient2.cpp index 887168075e..86fd1c5276 100644 --- a/paddle/pserver/ParameterClient2.cpp +++ b/paddle/pserver/ParameterClient2.cpp @@ -611,10 +611,7 @@ void PreparedOperations::addOperationHelper(Operation* op, CpuMatrixPtr mat) { pmat.mutable_values(), mat->getData(), pmat.num_cols() * pmat.num_rows()); } -template -static inline auto add(T1 a, T2 b) -> decltype(a + b) { - return a + b; -} +static inline real addTwo(real a, double b) { return a + b; } void ParameterClient2::doOperation(PreparedOperations& ops, bool waitForGradient, @@ -684,7 +681,7 @@ void ParameterClient2::doOperation(PreparedOperations& ops, rvec->getData() + rvec->getSize(), vec.values().data(), rvec->getData(), - add); + addTwo); } CHECK_EQ(resultMatrices.size(), (size_t)result.matrices_size()); @@ -699,7 +696,7 @@ void ParameterClient2::doOperation(PreparedOperations& ops, rmat->getData() + rmat->getElementCnt(), mat.values().data(), rmat->getData(), - add); + addTwo); } } } From 4736246515080626f621c5c9aeaf9c301322c46a Mon Sep 17 00:00:00 2001 From: hedaoyuan Date: Tue, 13 Dec 2016 15:49:18 +0800 Subject: [PATCH 110/265] fix warning --- paddle/math/Matrix.h | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/paddle/math/Matrix.h b/paddle/math/Matrix.h index 395143a4b1..2c918f7b3a 100644 --- a/paddle/math/Matrix.h +++ b/paddle/math/Matrix.h @@ 
-408,7 +408,7 @@ public: LOG(FATAL) << "Not implemented"; } - virtual void addBias(Matrix& b, real scale, bool sharedBias) { + void addBias(Matrix& b, real scale, bool sharedBias) { if (!sharedBias) { addBias(b, scale); } else { @@ -425,7 +425,7 @@ public: LOG(FATAL) << "Not implemented"; } - virtual void collectBias(Matrix& a, real scale, bool sharedBias) { + void collectBias(Matrix& a, real scale, bool sharedBias) { if (!sharedBias) { collectBias(a, scale); } else { From 454ca01af37292a83f869bd9d7d0d3ae3dd7a4f5 Mon Sep 17 00:00:00 2001 From: Yu Yang Date: Tue, 13 Dec 2016 15:50:36 +0800 Subject: [PATCH 111/265] Fix Travis-CI build error. * error because merge of #711. The old issue don't check pre-commit hooks. --- python/paddle/trainer/PyDataProvider2.py | 26 +++++++++++++----------- 1 file changed, 14 insertions(+), 12 deletions(-) diff --git a/python/paddle/trainer/PyDataProvider2.py b/python/paddle/trainer/PyDataProvider2.py index f3e3fbf483..6618153df3 100644 --- a/python/paddle/trainer/PyDataProvider2.py +++ b/python/paddle/trainer/PyDataProvider2.py @@ -202,24 +202,26 @@ class CheckWrapper(object): for each in item: callback(each) + class CheckInputTypeWrapper(object): def __init__(self, generator, input_types, logger): self.generator = generator self.input_types = input_types self.logger = logger - def __call__(self, obj, filename): - for items in self.generator(obj, filename): - try: - # dict type is required for input_types when item is dict type - assert (isinstance(items, dict) and \ - not isinstance(self.input_types, dict))==False - yield items - except AssertionError as e: - self.logger.error( + def __call__(self, obj, filename): + for items in self.generator(obj, filename): + try: + # dict type is required for input_types when item is dict type + assert (isinstance(items, dict) and \ + not isinstance(self.input_types, dict))==False + yield items + except AssertionError as e: + self.logger.error( "%s type is required for input type but got %s" % (repr(type(items)), repr(type(self.input_types)))) - raise + raise + def provider(input_types=None, should_shuffle=None, @@ -374,8 +376,8 @@ def provider(input_types=None, self.generator = InputOrderWrapper(self.generator, self.input_order) else: - self.generator = CheckInputTypeWrapper(self.generator, self.slots, - self.logger) + self.generator = CheckInputTypeWrapper( + self.generator, self.slots, self.logger) if self.check: self.generator = CheckWrapper(self.generator, self.slots, check_fail_continue, From 72bb211b2fa9b6a81e8ddbdfad8ff1d97cb3a595 Mon Sep 17 00:00:00 2001 From: hedaoyuan Date: Tue, 13 Dec 2016 15:58:18 +0800 Subject: [PATCH 112/265] remove COMPILER_SUPPORT_CXX11 --- paddle/math/tests/CMakeLists.txt | 10 ++++------ 1 file changed, 4 insertions(+), 6 deletions(-) diff --git a/paddle/math/tests/CMakeLists.txt b/paddle/math/tests/CMakeLists.txt index fe5177291c..a3ea078509 100644 --- a/paddle/math/tests/CMakeLists.txt +++ b/paddle/math/tests/CMakeLists.txt @@ -16,12 +16,10 @@ add_simple_unittest(test_CpuGpuVector) add_simple_unittest(test_Allocator) if(WITH_GPU) - if(COMPILER_SUPPORT_CXX11) - CUDA_ADD_EXECUTABLE(test_Tensor test_Tensor.cu) - link_paddle_test(test_Tensor) - CUDA_ADD_EXECUTABLE(test_lazyAssign test_lazyAssign.cu) - link_paddle_test(test_lazyAssign) - endif() + CUDA_ADD_EXECUTABLE(test_Tensor test_Tensor.cu) + link_paddle_test(test_Tensor) + CUDA_ADD_EXECUTABLE(test_lazyAssign test_lazyAssign.cu) + link_paddle_test(test_lazyAssign) else() compile_cu_as_cpp(test_Tensor.cu) add_unittest(test_Tensor 
test_Tensor.cu) From 62b20ca033568212981c5836aecb3fb1d025c3f2 Mon Sep 17 00:00:00 2001 From: wangyanfei01 Date: Tue, 13 Dec 2016 16:52:21 +0800 Subject: [PATCH 113/265] refine data_sources.py and PyDataProvider2.py to make more readable --- python/paddle/trainer/PyDataProvider2.py | 4 +-- .../trainer_config_helpers/data_sources.py | 28 ++++++++----------- 2 files changed, 13 insertions(+), 19 deletions(-) diff --git a/python/paddle/trainer/PyDataProvider2.py b/python/paddle/trainer/PyDataProvider2.py index f3e3fbf483..dfa7496cf5 100644 --- a/python/paddle/trainer/PyDataProvider2.py +++ b/python/paddle/trainer/PyDataProvider2.py @@ -106,9 +106,7 @@ def integer_value_sequence(dim): def integer_value_sub_sequence(dim): return integer_value(dim, seq_type=SequenceType.SUB_SEQUENCE) - -def integer_sequence(dim): - return index_slot(dim, seq_type=SequenceType.SEQUENCE) +integer_sequence = integer_value_sequence class SingleSlotWrapper(object): diff --git a/python/paddle/trainer_config_helpers/data_sources.py b/python/paddle/trainer_config_helpers/data_sources.py index c62553f54c..fc72014c91 100644 --- a/python/paddle/trainer_config_helpers/data_sources.py +++ b/python/paddle/trainer_config_helpers/data_sources.py @@ -78,21 +78,6 @@ def define_py_data_source(file_list, if not isinstance(args, basestring) and args is not None: args = pickle.dumps(args, 0) - if data_cls is None: - - def py_data2(files, load_data_module, load_data_object, load_data_args, - **kwargs): - data = DataBase() - data.type = 'py2' - data.files = files - data.load_data_module = load_data_module - data.load_data_object = load_data_object - data.load_data_args = load_data_args - data.async_load_data = True - return data - - data_cls = py_data2 - cls( data_cls( files=file_list, @@ -207,10 +192,21 @@ def define_py_data_sources2(train_list, test_list, module, obj, args=None): :return: None :rtype: None """ + def py_data2(files, load_data_module, load_data_object, load_data_args, + **kwargs): + data = DataBase() + data.type = 'py2' + data.files = files + data.load_data_module = load_data_module + data.load_data_object = load_data_object + data.load_data_args = load_data_args + data.async_load_data = True + return data + define_py_data_sources( train_list=train_list, test_list=test_list, module=module, obj=obj, args=args, - data_cls=None) + data_cls=py_data2) From f17c6c759ec6edb815696fe685d5cda610a33fca Mon Sep 17 00:00:00 2001 From: CrossLee1 Date: Tue, 13 Dec 2016 17:43:17 +0800 Subject: [PATCH 114/265] Update resnet_model_cn.md --- doc/tutorials/imagenet_model/resnet_model_cn.md | 14 ++++++-------- 1 file changed, 6 insertions(+), 8 deletions(-) diff --git a/doc/tutorials/imagenet_model/resnet_model_cn.md b/doc/tutorials/imagenet_model/resnet_model_cn.md index 7e402a5040..03e4c6f258 100644 --- a/doc/tutorials/imagenet_model/resnet_model_cn.md +++ b/doc/tutorials/imagenet_model/resnet_model_cn.md @@ -9,7 +9,7 @@
![resnet_block](./resnet_block.jpg)
图 1. ResNet 网络模块
-本教程中我们给出了三个ResNet模型,这些模型都是由原作者提供的模型转换过来的。我们使用PaddlePaddle在ILSVRC的验证集共5000幅图像上测试了模型的分类错误率,其中输入图像的颜色通道顺序为**BGR**,保持宽高比缩放到短边为256,只截取中心方形的图像区域。分类误差和模型大小由下表给出。 +本教程中我们给出了三个ResNet模型,这些模型都是由原作者提供的模型转换过来的。我们使用PaddlePaddle在ILSVRC的验证集共50,000幅图像上测试了模型的分类错误率,其中输入图像的颜色通道顺序为**BGR**,保持宽高比缩放到短边为256,只截取中心方形的图像区域。分类错误率和模型大小由下表给出。
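The evaluation protocol stated above (keep the aspect ratio, scale the shorter side to 256, take the central square region, feed pixels in BGR channel order) can be sketched in a few lines of Python. This is only an illustration, not the preprocessing code shipped with the demo; the PIL/NumPy calls, the function name and the bilinear interpolation choice are assumptions.

```python
# Hedged sketch of the evaluation preprocessing described above.
import numpy as np
from PIL import Image

def load_eval_image(path, short_side=256):
    img = Image.open(path).convert('RGB')
    w, h = img.size
    scale = float(short_side) / min(w, h)               # shorter side -> 256
    img = img.resize((int(round(w * scale)), int(round(h * scale))), Image.BILINEAR)
    w, h = img.size
    left, top = (w - short_side) // 2, (h - short_side) // 2
    img = img.crop((left, top, left + short_side, top + short_side))  # central square
    rgb = np.asarray(img, dtype=np.float32)
    return rgb[:, :, ::-1]                              # reorder RGB -> BGR
```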
@@ -52,7 +52,7 @@ ### 网络可视化 -你可以通过执行下面的命令来得到ResNet网络的结构图解。该脚本会生成一个dot文件,然后利用我们服务器上已安装好的draw_dot工具将dot文件转成PNG图像。如果你不是在该服务器上运行,请自行安装graphviz来转换dot文件。 +你可以通过执行下面的命令来得到ResNet网络的结构可视化图。该脚本会生成一个dot文件,然后可以转换为图片。需要安装graphviz来转换dot文件为图片。 ``` cd demo/model_zoo/resnet @@ -94,7 +94,7 @@ mean_meta_224 resnet_101 resnet_152 resnet_50 * **[Batch Normalization]() 层权重** -本层有四个参数,实际上只有.w0和.wbias是需要学习的参数,另外两个分别是均值和方差。在测试阶段它们将会被加载到模型中。下表展示了batch normalization层的参数。 +本层有四个参数,实际上只有.w0和.wbias是需要学习的参数,另外两个分别是滑动均值和方差。在测试阶段它们将会被加载到模型中。下表展示了batch normalization层的参数。
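As a reminder of how the four tensors mentioned above are combined at test time, below is a hedged NumPy sketch of inference-mode batch normalization. Here `w`, `b`, `moving_mean` and `moving_var` stand for `.w0`, `.wbias` and the two accumulated statistics; the epsilon value is an assumption, not taken from PaddlePaddle.

```python
# Hedged sketch: inference-time batch normalization with the four per-channel
# tensors discussed above (scale, bias, moving mean, moving variance).
import numpy as np

def batch_norm_infer(x, w, b, moving_mean, moving_var, eps=1e-5):
    # x has shape (batch, channels); the four parameters are (channels,) vectors.
    x_hat = (x - moving_mean) / np.sqrt(moving_var + eps)
    return w * x_hat + b
```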
@@ -165,7 +165,7 @@ od -j 16 -f _res2_1_branch1_bn.w0 ### C++接口 -首先,在配置文件中的`define_py_data_sources`里指定图像数据列表,具体请参照示例`demo/model_zoo/resnet/resnet.py`。 +首先,在配置文件中的`define_py_data_sources2`里指定图像数据列表,具体请参照示例`demo/model_zoo/resnet/resnet.py`。 ``` train_list = 'train.list' if not is_test else None @@ -233,13 +233,11 @@ python classify.py \ * \--job=extract: 指定工作模式来提取特征。 * \--conf=resnet.py: 网络配置文件。 * \--use_gpu=1: 指定是否使用GPU。 -* \--model=model/resnet_5: 模型路径。 +* \--model=model/resnet_50: 模型路径。 * \--data=./example/test.list: 数据列表。 * \--output_layer="xxx,xxx": 指定提取特征的层。 * \--output_dir=features: 输出目录。 -需要注意的是,这些ResNet模型中的卷积层适配于cudnn的实现,因此只支持GPU上操作。由于兼容性问题,它暂不支持CPU,我们以后将会修复该问题。 - 如果运行成功,你将会看到特征存储在`features/batch_0`文件中,该文件是由cPickle产生的。你可以使用`load_feature.py`中的`load_feature_py`接口来打开该文件,它将返回如下的字典: ``` @@ -253,7 +251,7 @@ python classify.py \ ## 预测 -`classify.py`文件也可以用于对新样本进行预测。我们提供了一个示例脚本`predict.sh`,它可以使用50层的ResNet模型来对`example/test.list`中的数据进行预测。 +`classify.py`文件也可以用于对样本进行预测。我们提供了一个示例脚本`predict.sh`,它使用50层的ResNet模型来对`example/test.list`中的数据进行预测。 ``` cd demo/model_zoo/resnet From a54bd6cd84f22556f1b80feeb77837d2f27ce6d6 Mon Sep 17 00:00:00 2001 From: CrossLee1 Date: Tue, 13 Dec 2016 17:48:37 +0800 Subject: [PATCH 115/265] Update resnet_model_en.md --- doc/tutorials/imagenet_model/resnet_model_en.md | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/doc/tutorials/imagenet_model/resnet_model_en.md b/doc/tutorials/imagenet_model/resnet_model_en.md index 5403ab9f17..93864b82ec 100644 --- a/doc/tutorials/imagenet_model/resnet_model_en.md +++ b/doc/tutorials/imagenet_model/resnet_model_en.md @@ -52,7 +52,7 @@ See ```demo/model_zoo/resnet/resnet.py```. This config contains network of 50, 1 ### Network Visualization -You can get a diagram of ResNet network by running the following commands. The script generates dot file and then converts dot file to PNG file, which uses installed draw_dot tool in our server. If you can not access the server, just install graphviz to convert dot file. +You can get a diagram of ResNet network by running the following commands. The script generates dot file and then converts dot file to PNG file, which needs to install graphviz to convert. ``` cd demo/model_zoo/resnet @@ -238,8 +238,6 @@ python classify.py \ * \--output_layer="xxx,xxx": specify layers to extract features. * \--output_dir=features: output diretcoty. -Note, since the convolution layer in these ResNet models is suitable for the cudnn implementation which only support GPU. It not support CPU mode because of compatibility issue and we will fix later. - If run successfully, you will see features saved in `features/batch_0`, this file is produced with cPickle. 
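For a quick look at such a file without going through `load_feature.py`, a minimal Python 2 sketch (using `cPickle`, which is how the file was written) might look like the following; the path is just the example output location mentioned above.

```python
# Hedged illustration only; adjust the path to your own --output_dir.
import cPickle

with open('features/batch_0', 'rb') as f:
    feats = cPickle.load(f)
print(type(feats))
```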
You can use `load_feature_py` interface in `load_feature.py` to open the file, and it returns a dictionary as follows: ``` From e357f2715843cd531ce0b0143647ed5561d2fceb Mon Sep 17 00:00:00 2001 From: hedaoyuan Date: Tue, 13 Dec 2016 17:55:31 +0800 Subject: [PATCH 116/265] add GPU CrossMapNormal --- paddle/math/cross_map_normal_op.cpp | 42 ++--- paddle/math/cross_map_normal_op.h | 37 ++++- paddle/math/cross_map_normal_op_gpu.cu | 194 +++++++++++++++++++++++ paddle/math/tests/test_matrixCompare.cpp | 66 +++++--- 4 files changed, 286 insertions(+), 53 deletions(-) create mode 100644 paddle/math/cross_map_normal_op_gpu.cu diff --git a/paddle/math/cross_map_normal_op.cpp b/paddle/math/cross_map_normal_op.cpp index 3eb51b5998..be242926af 100644 --- a/paddle/math/cross_map_normal_op.cpp +++ b/paddle/math/cross_map_normal_op.cpp @@ -17,15 +17,16 @@ limitations under the License. */ namespace paddle { // NCHW -void CrossMapNormal::operator()(CpuMatrix& outputs, - CpuMatrix& denoms, - CpuMatrix& inputs, - size_t channels, - size_t imgSizeH, - size_t imgSizeW, - size_t sizeX, - real scale, - real pow) { +template <> +void CrossMapNormal::operator()(CpuMatrix& outputs, + CpuMatrix& denoms, + CpuMatrix& inputs, + size_t channels, + size_t imgSizeH, + size_t imgSizeW, + size_t sizeX, + real scale, + real pow) { CHECK(outputs.isContiguous()); CHECK(inputs.isContiguous()); CHECK(denoms.isContiguous()); @@ -58,17 +59,18 @@ void CrossMapNormal::operator()(CpuMatrix& outputs, outputs = inputs * denoms.pow(-pow); } -void CrossMapNormalGrad::operator()(CpuMatrix& inputsGrad, - CpuMatrix& inputsValue, - CpuMatrix& outputsGrad, - CpuMatrix& outputsValue, - CpuMatrix& denoms, - size_t channels, - size_t imgSizeH, - size_t imgSizeW, - size_t sizeX, - real scale, - real pow) { +template <> +void CrossMapNormalGrad::operator()(CpuMatrix& inputsGrad, + CpuMatrix& inputsValue, + CpuMatrix& outputsGrad, + CpuMatrix& outputsValue, + CpuMatrix& denoms, + size_t channels, + size_t imgSizeH, + size_t imgSizeW, + size_t sizeX, + real scale, + real pow) { CHECK(inputsGrad.isContiguous()); CHECK(outputsGrad.isContiguous()); CHECK(denoms.isContiguous()); diff --git a/paddle/math/cross_map_normal_op.h b/paddle/math/cross_map_normal_op.h index 2f99607252..c2bb95f6b1 100644 --- a/paddle/math/cross_map_normal_op.h +++ b/paddle/math/cross_map_normal_op.h @@ -18,10 +18,30 @@ limitations under the License. 
*/ namespace paddle { +enum DeviceType { + DEVICE_TYPE_UNSPECIFIED = 0, + DEVICE_TYPE_CPU = 1, + DEVICE_TYPE_GPU = 2, +}; + +template +struct MatrixT; + +template <> +struct MatrixT { + using type = CpuMatrix; +}; + +template <> +struct MatrixT { + using type = GpuMatrix; +}; + +template struct CrossMapNormal { - void operator()(CpuMatrix& outputs, - CpuMatrix& denoms, - CpuMatrix& inputs, + void operator()(typename MatrixT::type& outputs, + typename MatrixT::type& denoms, + typename MatrixT::type& inputs, size_t channels, size_t imgSizeH, size_t imgSizeW, @@ -30,12 +50,13 @@ struct CrossMapNormal { real pow); }; +template struct CrossMapNormalGrad { - void operator()(CpuMatrix& inputsGrad, - CpuMatrix& inputsValue, - CpuMatrix& outputsGrad, - CpuMatrix& outputsValue, - CpuMatrix& denoms, + void operator()(typename MatrixT::type& inputsGrad, + typename MatrixT::type& inputsValue, + typename MatrixT::type& outputsGrad, + typename MatrixT::type& outputsValue, + typename MatrixT::type& denoms, size_t channels, size_t imgSizeH, size_t imgSizeW, diff --git a/paddle/math/cross_map_normal_op_gpu.cu b/paddle/math/cross_map_normal_op_gpu.cu new file mode 100644 index 0000000000..0a154d97ac --- /dev/null +++ b/paddle/math/cross_map_normal_op_gpu.cu @@ -0,0 +1,194 @@ +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. */ + +#include "hl_base.h" +#include "cross_map_normal_op.h" + +namespace paddle { + +__global__ void KeCMRNormFillScale(size_t imageSize, const real* in, + real* scale, size_t channels, + size_t height, size_t width, size_t size, + real alpha) { + const int idx = threadIdx.x + blockIdx.x * blockDim.x; + if (idx < imageSize) { + const int w = idx % width; + const int h = (idx / width) % height; + const int n = idx / width / height; + const int offset = (n * channels * height + h) * width + w; + + in += offset; + scale += offset; + const int step = height * width; + const int pre_pad = (size - 1) / 2; + const int post_pad = size - pre_pad - 1; + + real accum = 0; + int index = 0; + while (index < channels + post_pad) { + if (index < channels) { + accum += in[index * step] * in[index * step]; + } + if (index >= size) { + accum -= in[(index - size) * step] * in[(index - size) * step]; + } + if (index >= post_pad) { + scale[(index - post_pad) * step] = 1. 
+ accum * alpha; + } + ++index; + } + } +} + +__global__ void KeCMRNormOutput(size_t inputSize, const real* in, + const real* scale, real negative_beta, + real* out) { + const int index = threadIdx.x + blockIdx.x * blockDim.x; + if (index < inputSize) { + out[index] = in[index] * pow(scale[index], negative_beta); + } +} + +template <> +void CrossMapNormal::operator()(GpuMatrix& outputs, + GpuMatrix& denoms, + GpuMatrix& inputs, + size_t channels, + size_t imgSizeH, + size_t imgSizeW, + size_t sizeX, + real scale, + real pow) { + CHECK(outputs.isContiguous()); + CHECK(inputs.isContiguous()); + CHECK(denoms.isContiguous()); + CHECK_EQ(outputs.getHeight(), inputs.getHeight()); + CHECK_EQ(outputs.getWidth(), inputs.getWidth()); + CHECK_EQ(outputs.getHeight(), denoms.getHeight()); + CHECK_EQ(outputs.getWidth(), denoms.getWidth()); + + size_t numSample = inputs.getHeight(); + size_t numCols = inputs.getWidth(); + CHECK(imgSizeH * imgSizeW * channels == numCols); + + real* inputsData = inputs.getData(); + real* denomsData = denoms.getData(); + real* outputsData = outputs.getData(); + + size_t imageSize = numSample * imgSizeH * imgSizeW; + int blockSize = 1024; + int gridSize = (imageSize + 1024 - 1) / 1024; + KeCMRNormFillScale<<>> + (imageSize, inputsData, denomsData, + channels, imgSizeH, imgSizeW, sizeX, scale); + + size_t inputSize = numSample * imgSizeH * imgSizeW *channels; + blockSize = 1024; + gridSize = (inputSize + 1024 - 1) / 1024; + KeCMRNormOutput<<>> + (inputSize, inputsData, denomsData, -pow, outputsData); + + CHECK_SYNC("CrossMapNormalFwd"); +} + +__global__ void KeCMRNormDiff(size_t imageSize, const real* bottom_data, + const real* top_data, const real* scale, + const real* top_diff, size_t channels, + size_t height, size_t width, size_t size, + real negative_beta, real cache_ratio, + real* bottom_diff ) { + const int idx = threadIdx.x + blockIdx.x * blockDim.x; + if (idx < imageSize) { + const int w = idx % width; + const int h = (idx / width) % height; + const int n = idx / width / height; + const int offset = (n * channels * height + h) * width + w; + bottom_data += offset; + top_data += offset; + scale += offset; + top_diff += offset; + bottom_diff += offset; + + const int step = height * width; + const int pre_pad = size - (size + 1) / 2; + const int post_pad = size - pre_pad - 1; + + int index = 0; + real accum = 0; + while (index < channels + post_pad) { + if (index < channels) { + accum += top_diff[index * step] * top_data[index * step] / + scale[index * step]; + } + if (index >= size) { + accum -= top_diff[(index - size) * step] * + top_data[(index - size) * step] / scale[(index - size) * step]; + } + if (index >= post_pad) { + bottom_diff[(index - post_pad) * step] += + top_diff[(index - post_pad) * step] * + pow(scale[(index - post_pad) * step], negative_beta) - cache_ratio * + bottom_data[(index - post_pad) * step] * accum; + } + ++index; + } + } +} + +template <> +void CrossMapNormalGrad::operator()(GpuMatrix& inputsGrad, + GpuMatrix& inputsValue, + GpuMatrix& outputsGrad, + GpuMatrix& outputsValue, + GpuMatrix& denoms, + size_t channels, + size_t imgSizeH, + size_t imgSizeW, + size_t sizeX, + real scale, + real pow) { + CHECK(inputsGrad.isContiguous()); + CHECK(outputsGrad.isContiguous()); + CHECK(denoms.isContiguous()); + CHECK(inputsValue.isContiguous()); + CHECK(outputsValue.isContiguous()); + CHECK_EQ(inputsGrad.getHeight(), outputsGrad.getHeight()); + CHECK_EQ(inputsGrad.getWidth(), outputsGrad.getWidth()); + CHECK_EQ(inputsGrad.getHeight(), 
denoms.getHeight()); + CHECK_EQ(inputsGrad.getWidth(), denoms.getWidth()); + CHECK_EQ(inputsGrad.getHeight(), inputsValue.getHeight()); + CHECK_EQ(inputsGrad.getWidth(), inputsValue.getWidth()); + CHECK_EQ(inputsGrad.getHeight(), outputsValue.getHeight()); + CHECK_EQ(inputsGrad.getWidth(), outputsValue.getWidth()); + + size_t numSample = inputsGrad.getHeight(); + size_t numCols = inputsGrad.getWidth(); + CHECK(imgSizeH * imgSizeW * channels == numCols); + + size_t imageSize = numSample * imgSizeH * imgSizeW; + real* inputsGradData = inputsGrad.getData(); + real* inputsData = inputsValue.getData(); + real* denomsData = denoms.getData(); + real* outputsGradData = outputsGrad.getData(); + real* outputsData = outputsValue.getData(); + + int blockSize = 1024; + int gridSize = (imageSize + 1024 - 1) / 1024; + KeCMRNormDiff <<>> + (imageSize, inputsData, outputsData, denomsData, outputsGradData, channels, + imgSizeH, imgSizeW, sizeX, -pow, 2.0f * pow * scale, inputsGradData); + CHECK_SYNC("KeCMRNormDiff"); +} + +} // namespace paddle diff --git a/paddle/math/tests/test_matrixCompare.cpp b/paddle/math/tests/test_matrixCompare.cpp index 9bb1fdbdab..8d7a4fb94d 100644 --- a/paddle/math/tests/test_matrixCompare.cpp +++ b/paddle/math/tests/test_matrixCompare.cpp @@ -1280,11 +1280,25 @@ void testCrossMapNormalFwd( inputsGpu.copyFrom(inputs); outputsGpu.copyFrom(outputs); - CrossMapNormal cross; - cross( + CrossMapNormal cpuCross; + cpuCross( outputs, denoms, inputs, channels, imgSizeH, imgSizeW, sizeX, scale, pow); + + CrossMapNormal gpuCross; + gpuCross(outputsGpu, + denomsGpu, + inputsGpu, + channels, + imgSizeH, + imgSizeW, + sizeX, + scale, + pow); + +#if 0 outputsGpu.crossMapNormalFwd( inputsGpu, imgSizeH, imgSizeW, denomsGpu, channels, sizeX, scale, pow); +#endif TensorCheckErr(outputs, outputsGpu); TensorCheckErr(denoms, denomsGpu); @@ -1339,29 +1353,31 @@ void testCrossMapNormalBwd( outputsValueGpu.copyFrom(outputsValue); inputsGradGpu.copyFrom(inputsGrad); - CrossMapNormalGrad cross; - cross(inputsGrad, - inputsValue, - outputsGrad, - outputsValue, - denoms, - channels, - imgSizeH, - imgSizeW, - sizeX, - scale, - pow); - - inputsGradGpu.crossMapNormalBwd(outputsGradGpu, - denomsGpu, - inputsValueGpu, - outputsValueGpu, - channels, - imgSizeH, - imgSizeW, - sizeX, - scale, - pow); + CrossMapNormalGrad cpuCross; + cpuCross(inputsGrad, + inputsValue, + outputsGrad, + outputsValue, + denoms, + channels, + imgSizeH, + imgSizeW, + sizeX, + scale, + pow); + + CrossMapNormalGrad gpuCross; + gpuCross(inputsGradGpu, + inputsValueGpu, + outputsGradGpu, + outputsValueGpu, + denomsGpu, + channels, + imgSizeH, + imgSizeW, + sizeX, + scale, + pow); TensorCheckErr(inputsGrad, inputsGradGpu); } From 090385155ebb46f42076121ef074d91c8ee36a39 Mon Sep 17 00:00:00 2001 From: dayhaha <18800111918@163.com> Date: Tue, 13 Dec 2016 18:17:11 +0800 Subject: [PATCH 117/265] add chinese doc and modify ml_regression_en.rst --- doc/tutorials/rec/ml_dataset_cn.md | 105 ++++++++ doc/tutorials/rec/ml_regression_ch.rst | 347 +++++++++++++++++++++++++ doc/tutorials/rec/ml_regression_en.rst | 14 +- 3 files changed, 459 insertions(+), 7 deletions(-) create mode 100644 doc/tutorials/rec/ml_dataset_cn.md create mode 100644 doc/tutorials/rec/ml_regression_ch.rst diff --git a/doc/tutorials/rec/ml_dataset_cn.md b/doc/tutorials/rec/ml_dataset_cn.md new file mode 100644 index 0000000000..d500294e7d --- /dev/null +++ b/doc/tutorials/rec/ml_dataset_cn.md @@ -0,0 +1,105 @@ +```eval_rst +.. 
_demo_ml_dataset_en: + +``` + +# MovieLens数据集 + +[MovieLens 数据集](http://grouplens.org/datasets/movielens/)由GroupLens Research实验室搜集整理。 +该数据集包含一些用户信息、电影信息以及电影评分\[1-5\]。根据数据量规模,该数据及有很多不同的版本。 +我们用[MovieLens 百万数据集](http://files.grouplens.org/datasets/movielens/ml-1m.zip)作为示例数据 +集,其中包含6,000位用户对4,000部电影的1,000,000条评价。该数据集于2003年2月发布。 + +## 数据集特征 + +在[ml-1m 数据集](http://files.grouplens.org/datasets/movielens/ml-1m.zip)中有许多的特征。在[ml-1m 数据集] +(http://files.grouplens.org/datasets/movielens/ml-1m.zip)中的这些数据文件(含有".dat"的后缀)实际上是CSV文件, +分隔符为"::"。以下我们翻译数据集网站中README文件的描述: + +### 评分文件描述(ratings.dat) + + +所有的评分数据都包含在"ratings.dat"文件中,遵循如下的格式: + +用户ID::电影ID::评分::时间戳 + +- 用户ID范围从1到6040 +- 电影ID范围从1到3952 +- 评分被调整为5星的规模(只允许整数的星级) +- 时间戳表示为从1970-01-01(UTC)来的秒数,与time(2)的返回值一致 +- 每位用户至少有20条评分 + +### 用户文件描述(users.dat) + +所有的用户信息都包含在"users.dat"文件中,遵循如下的格式: + +用户ID::性别::年龄::职业::邮编 + +所有的人口统计学信息由用户自愿提供,没有进行正确性的检查。只有含有人 +口统计学信息的用户才被包含在数据集中。 + +- 性别,用"M"表示男性,"F"表示女性 +- 年龄从下列列表范围中选取: + + * 1: "18岁以下" + * 18: "18-24岁" + * 25: "25-34岁" + * 35: "35-44岁" + * 45: "45-49岁" + * 50: "50-55岁" + * 56: "56+" + +- 职业从下面所列中选择: + + * 0: "其他"或不确定 + * 1: "学术/教育工作者" + * 2: "艺术家" + * 3: "文书工作/管理员" + * 4: "大学生/研究生" + * 5: "客户服务" + * 6: "医生/医疗保健" + * 7: "行政工作/管理人员" + * 8: "农民" + * 9: "操持家务者" + * 10: "高中毕业生" + * 11: "律师" + * 12: "程序员" + * 13: "退休人员" + * 14: "销售/市场" + * 15: "科学家" + * 16: "自由职业者" + * 17: "技术员/工程师" + * 18: "推销员/手工艺者" + * 19: "无业人士" + * 20: "作家" + +### 电影文件描述(movies.dat) + +所有的电影信息都包含在"movies.dat"文件中,遵循如下的格式: + +电影ID::电影名称::电影类型 + +- 电影名称(包括发行时间)与IMDB网站提供的一致 +- 电影类型如符合多种用管道符号|分割,选自下列类型: + + * 动作片 + * 冒险片 + * 动画片 + * 儿童片 + * 喜剧片 + * 犯罪片 + * 纪录片 + * 戏剧 + * 奇幻片 + * 黑色电影 + * 恐怖片 + * 音乐剧 + * 悬疑片 + * 浪漫片 + * 科幻片 + * 惊险电影 + * 战争片 + * 西部片 + +- 由于意外的副本记录和测试记录,有些电影ID可能与实际电影不相符合 +- 电影大部分是手工输入数据,因此可能会有一些错误和不一致发生 diff --git a/doc/tutorials/rec/ml_regression_ch.rst b/doc/tutorials/rec/ml_regression_ch.rst new file mode 100644 index 0000000000..19a89d270d --- /dev/null +++ b/doc/tutorials/rec/ml_regression_ch.rst @@ -0,0 +1,347 @@ +MovieLens数据集评分回归模型 +========================= + +这里我们在MovieLens数据集描述一种**余弦相似度回归**任务。 +该示例将展示paddle如何进行词向量嵌入,处理相似度回归,针对文本 +的单词级别的卷积神经网络,以及paddle如何处理多种类型的输入。 +需要注意的是,该模型网络只是用于进行demo展示paddle如何工作,而 +没有进行结构的微调。 + + +**我们非常欢迎您用PADDLEPADDLE构建更好的示例,如果您有好的建议来 +让这个示例变得更好,希望能让我们知晓。** + +数据准备 +``````` +下载并解压数据集 +'''''''''''''' +这里我们使用:ref:`demo_ml_dataset_en`。 +要下载和解压数据集,只需要简单的运行下面的命令即可。 + +.. code-block:: bash + + cd demo/recommendation/data + ./ml_data.sh + +:code:`demo/recommendation/data/ml-1m`的目录结构为: + +.. code-block:: text + + +--ml-1m + +--- movies.dat # 电影特征 + +--- ratings.dat # 评分 + +--- users.dat # 用户特征 + +--- README # 数据集描述 + +字段配置文件 +'''''''''' +**字段配置文件**用来具体说明数据集的字段和文件格式, +例如,说明每个特征文件具体字段是**什么**类型。 + +ml-1m的字段配置文件在目录:code:`demo/recommendation/data/config.json`中。 +其具体说明了字段类型和文件名称: +1) 用户文件中有四种类型的字段\: 编号,性别,年龄和职业; +2) 文件名称为"users.dat",文件的分隔符为"::"。 + +.. include:: ../../../demo/recommendation/data/config.json + :code: json + :literal: + +准备数据 +``````` +你需要安装python的第三方库。 +**强烈推荐使用VIRTUALENV来创造一个干净的python环境。** + +.. code-block:: bash + + pip install -r requirements.txt + +预处理数据一般的命令为: + +.. code-block:: bash + + cd demo/recommendation + ./preprocess.sh + +下面介绍预处理过程具体的步骤。 + +提取电影或用户的特征并生成python对象 +'''''''''''''''''''''''''''''''' + +在movielens 1m数据集中,电影和用户有许多的特征。 +评分文件的每一行仅仅提供电影或用户的编号来代表相应的电影或用户。 +我们首先处理电影或用户的特征文件,然后用pickle命令将特征(**Meta**)对象存储为文件。 + +Meta配置文件 +........... + +**Meta配置文件**用来具体描述**如何**解析数据集中的每一个字段。 +该文件可以从字段配置文件生成,或是手动编辑生成。文件的格式可以 +为json或yaml格式。解析器能通过文件的扩展名自动识别文件的格式。 + +要将字段配置文件转化为meta配置文件,只需要运行: + +.. 
code-block:: bash + + cd demo/recommendation/data + python config_generator.py config.json > meta_config.json + +生成的meta配置文件如下所示: + +.. include:: ../../../demo/recommendation/data/meta_config.json + :code: json + :literal: + +在meta文件中有两种特征\: 电影和用户。 + +* 在电影文件movies.dat中 + * 我们仅用"::"来分隔每一行 + * pos 0 代表编号。 + * pos 1 特征: + * name是电影名。 + * 利用正则表达式来解析该特征。 + * 基于字母的词嵌入特征。 + * 是序列。 + * pos 2 特征: + * name是体裁。 + * type是one hot稠密向量。 + * dictionary由解析自动生成,每一个key由'|'分隔。 +* 在用户文件users.dat中 + * 我们仅用"::"来分隔每一行 + * pos 0 代表编号。 + * pos 1 特征: + * name是性别。 + * 简单的基于字母的词嵌入。 + * pos 2 特征: + * name是年龄 + * 是整个的词嵌入 + * 嵌入编号会根据单词排序 + * pos 3 特征: + * name是职业 + * 简单的整个词嵌入 + + +Meta文件 +'''''''' + +有了meta配置文件之后,我们可以生成**Meta文件**,该文件是python的pickle对象, +存储着电影或用户信息。可以运行下面的命令来生成。 + +.. code-block:: bash + + python meta_generator.py ml-1m meta.bin --config=meta_config.json + +meta文件:code:`meta.bin`的结构如下: + +.. code-block:: text + + +--+ movie + | +--+ __meta__ + | | +--+ raw_meta # 每个特征的meta配置。列表 + | | | + + | | | | # 编号字段,我们用编号作为key + | | | +--+ {'count': 3883, 'max': 3952, 'is_key': True, 'type': 'id', 'min': 1} + | | | | + | | | | # 电影名字段,嵌入特征字典 + | | | +--+ {'dict': [ ... ], 'type': 'embedding', 'name': 'title', 'seq': 'sequence'} + | | | | + | | | | # 体裁字段,体裁字典 + | | | +--+ {'dict': [ ... ], 'type': 'one_hot_dense', 'name': 'genres'} + | | | + | | +--+ feature_map [1, 2] # a list for raw_meta index for feature field. + | | # it means there are 2 features for each key. + | | # * 0 offset of feature is raw_meta[1], Title. + | | # * 1 offset of feature is raw_meta[2], Genres. + | | + | +--+ 1 # 电影1的特征 + | | + + | | +---+ [[...], [...]] # title ids, genres dense vector + | | + | +--+ 2 + | | + | +--+ ... + | + +--- user + +--+ __meta__ + | + + | +--+ raw_meta + | | + + | | +--+ id field as user + | | | + | | +--+ {'dict': ['F', 'M'], 'type': 'embedding', 'name': 'gender', 'seq': 'no_sequence'} + | | | + | | +--+ {'dict': ['1', '18', '25', '35', '45', '50', '56'], 'type': 'embedding', 'name': 'age', 'seq': 'no_sequence'} + | | | + | | +--+ {'dict': [...], 'type': 'embedding', 'name': 'occupation', 'seq': 'no_sequence'} + | | + | +--+ feature_map [1, 2, 3] + | + +--+ 1 # 用户1的特征 + | + +--+ 2 + +--+ ... + + +分割训练/测试文件 +''''''''''''''' + +我们将:code:`ml-1m/ratings.dat`文件分割为训练和测试文件。分割文件的方法是:对于每位用户,我们将评分分成两部分。 +这样的话每位用户在测试文件中将与训练文件含有同样的信息。 + +用:code:`separate.py`来分离训练和测试文件。 + +.. code-block:: bash + + python split.py ml-1m/ratings.dat --delimiter="::" --test_ratio=0.1 + +这样就会生成两个文件::code:`ml-1m/ratings.dat.train`和:code:`ml-1m/ratings.data.test`。 +将他们移动到目录:code:`data`,然后进行随机打乱,再为paddle的训练过程提供文件列表。 + +.. code-block:: bash + + shuf ml-1m/ratings.dat.train > ratings.dat.train + cp ml-1m/ratings.dat.test . + echo "./data/ratings.dat.train" > train.list + echo "./data/ratings.dat.test" > test.list + + +神经网络结构配置 +`````````````` + +训练器配置文件 +'''''''''''' + +网络结构如下图所示: + +.. image:: rec_regression_network.png + :align: center + :alt: rec_regression_network + +该示例的神经网络配置文件:code:`trainer_config.py`如下所示: + +.. 
literalinclude:: ../../../demo/recommendation/trainer_config.py + :language: python + :lines: 15- + +在文件:code:`trainer_config.py`中,我们仅仅是讲每个特征种类映射到一个特征向量中,以下 +展示了如何将每个特征映射到一个向量。 + +* :code:`id`\: 仅仅是简单的嵌入,然后添加一个全连接层。 +* :code:`embedding`\: + - 如果是序列,则先做嵌入,然后再做一次文本卷积网络操作, + 然后得到平均采样的结果 + - 如果不是序列,则先做嵌入,然后添加一个全连接层。 +* :code:`one_host_dense`\: + - 仅仅是两个全连接层。 + +然后我们利用多输入的:code:`fc_layer`全连接层将电影的每个特征结合成一个电影特征, +并且对用户的特征做同样的操作,也得到一个用户特征。然后我们求这两个特征的余弦相似度。 + +在这些网络中,我们用以下的一些:ref:`api_trainer_config`中的接口。 + +* 数据层, :ref:`api_trainer_config_helpers_layers_data_layer` +* 全连接层, :ref:`api_trainer_config_helpers_layers_fc_layer` +* 嵌入层, :ref:`api_trainer_config_helpers_layers_embedding_layer` +* 文本投影层, :ref:`api_trainer_config_helpers_layers_context_projection` +* 采样层, :ref:`api_trainer_config_helpers_layers_pooling_layer` +* 余弦相似度层, :ref:`api_trainer_config_helpers_layers_cos_sim` +* 文本卷积采样层, :ref:`api_trainer_config_helpers_network_text_conv_pool` +* 声明Python数据源, :ref:`api_trainer_config_helpers_data_sources`. + +数据提供脚本 +''''''''''' + +.. literalinclude:: ../../../demo/recommendation/dataprovider.py + :language: python + :lines: 15- + +数据提供脚本仅仅是读取meta.bin和评分文件,生成训练需要的样本。 +在脚本:code:`dataprovider.py`中,我们需要设置: + +* obj.slots\: 特征的类型和维度。 +* use_seq\: :code:`dataprovider.py`中的数据是否为序列模式。 +* process\: 返回数据的每一条样本给:code:`paddle`. + +数据提供脚本的细节文档可以参考:ref:`api_pydataprovider`. + +训练 +```` + +准备好数据,配置了网络,编写好数据提供脚本后,现在我们可以开始paddle训练了。 + +代码:code:`run.sh`如下: + +.. literalinclude:: ../../../demo/recommendation/run.sh + :language: bash + :lines: 16- + +该脚本仅仅是开始一个paddle训练过程,将日志写入文件:code:`log.txt`,然后 +打印在屏幕上。 + +脚本:code:`run.sh`中的每一行命令,请参考页面:ref:`cmd_line_index_en`。 +这些参数的简短介绍如下: + +* config\: 告诉paddle哪个文件是神经网络的配置文件。 +* save_dir\: 告诉paddle将模型保存在:code:`./output`中。 +* use_gpu\: 是否使用GPU,默认为不使用。 +* trainer_count\: 一台机器上面的线程数量。 +* test_all_data_in_one_period\: 每一个测试周期测试一次所有数据。否则, + 每个测试周期测试:code:`batch_size`批次的数据。 +* log_period\: 在训练了:code:`log_period`批次后打印日志. +* dot_period\: 在每训练:code:`dot_period`个批次后打印一个:code:`.`. +* num_passes\: 训练至多:code:`num_passes`轮. + +如果训练过程启动成功的话,输出应该类似如下: + +.. code-block:: text + + I0601 08:07:22.832059 10549 TrainerInternal.cpp:157] Batch=100 samples=160000 AvgCost=4.13494 CurrentCost=4.13494 Eval: CurrentEval: + + I0601 08:07:50.672627 10549 TrainerInternal.cpp:157] Batch=200 samples=320000 AvgCost=3.80957 CurrentCost=3.48421 Eval: CurrentEval: + + I0601 08:08:18.877369 10549 TrainerInternal.cpp:157] Batch=300 samples=480000 AvgCost=3.68145 CurrentCost=3.42519 Eval: CurrentEval: + + I0601 08:08:46.863963 10549 TrainerInternal.cpp:157] Batch=400 samples=640000 AvgCost=3.6007 CurrentCost=3.35847 Eval: CurrentEval: + + I0601 08:09:15.413025 10549 TrainerInternal.cpp:157] Batch=500 samples=800000 AvgCost=3.54811 CurrentCost=3.33773 Eval: CurrentEval: + I0601 08:09:36.058670 10549 TrainerInternal.cpp:181] Pass=0 Batch=565 samples=902826 AvgCost=3.52368 Eval: + I0601 08:09:46.215489 10549 Tester.cpp:101] Test samples=97383 cost=3.32155 Eval: + I0601 08:09:46.215966 10549 GradientMachine.cpp:132] Saving parameters to ./output/model/pass-00000 + I0601 08:09:46.233397 10549 ParamUtil.cpp:99] save dir ./output/model/pass-00000 + I0601 08:09:46.233438 10549 Util.cpp:209] copy trainer_config.py to ./output/model/pass-00000 + I0601 08:09:46.233541 10549 ParamUtil.cpp:147] fileName trainer_config.py + +模型被保存在:code:`output/`目录中。你可以在任何时候用:code:`Ctrl-C`来停止训练。 + +模型评估和预测 +```````````` + +在训练了几个轮次以后,你可以对模型进行评估,得到最好轮次下的模型。运行下面命令即可: + +.. code-block:: bash + + ./evaluate.sh + +你讲看到如下的信息: + +.. 
code-block:: text + + I0601 08:07:22.832059 10549 TrainerInternal.cpp:157] Batch=100 samples=160000 AvgCost=4.13494 CurrentCost=4.13494 Eval: CurrentEval: + I0601 08:07:50.672627 10549 TrainerInternal.cpp:157] Batch=200 samples=320000 AvgCost=3.80957 CurrentCost=3.48421 Eval: CurrentEval: + I0601 08:08:18.877369 10549 TrainerInternal.cpp:157] Batch=300 samples=480000 AvgCost=3.68145 CurrentCost=3.42519 Eval: CurrentEval: + I0601 08:08:46.863963 10549 TrainerInternal.cpp:157] Batch=400 samples=640000 AvgCost=3.6007 CurrentCost=3.35847 Eval: CurrentEval: + I0601 08:09:15.413025 10549 TrainerInternal.cpp:157] Batch=500 samples=800000 AvgCost=3.54811 CurrentCost=3.33773 Eval: CurrentEval: + I0601 08:09:36.058670 10549 TrainerInternal.cpp:181] Pass=0 Batch=565 samples=902826 AvgCost=3.52368 Eval: + I0601 08:09:46.215489 10549 Tester.cpp:101] Test samples=97383 cost=3.32155 Eval: + I0601 08:09:46.215966 10549 GradientMachine.cpp:132] Saving parameters to ./output/model/pass-00000 + I0601 08:09:46.233397 10549 ParamUtil.cpp:99] save dir ./output/model/pass-00000 + I0601 08:09:46.233438 10549 Util.cpp:209] copy trainer_config.py to ./output/model/pass-00000 + I0601 08:09:46.233541 10549 ParamUtil.cpp:147] fileName trainer_config.py + +模型被保存在:code:`output/`目录中。你可以在任何时候用:code:`Ctrl-C`来停止训练。 + +模型评估和预测 +```````````` + +在训练了几个轮次以后,你可以对模型进行评估,得到最好轮次下的模型。运行下面命令即可: + +.. code-block:: bash + + ./evaluate.sh + +你将看到如下的信息: + +.. 
Otherwise, From 5848288142522c0a587e810116fd73c3488f081f Mon Sep 17 00:00:00 2001 From: liaogang Date: Tue, 13 Dec 2016 19:07:34 +0800 Subject: [PATCH 118/265] Integrate doc en/cn into single doc --- CMakeLists.txt | 3 +- doc/CMakeLists.txt | 43 +- .../api/data_provider/dataprovider_cn.rst | 12 +- .../{index_en.rst => dataprovider_en.rst} | 0 .../api/data_provider/pydataprovider2_cn.rst | 454 ++++++------ doc/api/data_provider/pydataprovider2_en.rst | 16 +- .../api/data_provider/src}/mnist_config.py | 0 .../data_provider/src}/mnist_provider.dict.py | 0 .../api/data_provider/src}/mnist_train.txt | 0 .../data_provider/src}/sentimental_config.py | 0 .../src}/sentimental_provider.py | 0 .../data_provider/src}/sentimental_train.txt | 0 .../api/data_provider/src}/train.list | 0 doc/api/index_cn.rst | 37 + doc/api/index_en.rst | 2 +- doc/api/predict/{ => src}/predict_sample.py | 0 .../api/predict/swig_py_paddle_cn.rst | 2 +- doc/api/predict/swig_py_paddle_en.rst | 2 +- doc_cn/conf.py.in => doc/conf.py.cn.in | 2 +- doc/{conf.py.in => conf.py.en.in} | 4 +- doc_cn/faq/index.rst => doc/faq/index_cn.rst | 19 +- .../faq/src}/reduce_min_pool_size.py | 0 .../faq => doc/faq/src}/word2vec_config.py | 0 .../faq/src}/word2vec_dataprovider.py | 0 .../getstarted/basic_usage/index_cn.rst | 13 +- .../cmake/build_from_source_cn.rst | 84 +-- .../cmake/cblas_settings.csv | 0 .../cmake/compile_options.csv | 26 +- .../build_and_install/docker_install_cn.rst | 19 +- .../build_and_install/docker_install_en.rst | 2 +- .../getstarted/build_and_install/index_cn.rst | 12 +- .../build_and_install/ubuntu_install_cn.rst | 19 +- doc/getstarted/index_cn.rst | 8 + {doc_cn => doc/howto}/cluster/k8s/Dockerfile | 0 .../k8s/distributed_training_on_k8s_cn.md | 0 {doc_cn => doc/howto}/cluster/k8s/job.yaml | 0 .../howto}/cluster/k8s/k8s-paddle-arch.png | Bin .../howto/cluster/k8s/paddle_on_k8s_cn.md | 0 {doc_cn => doc/howto}/cluster/k8s/start.sh | 0 .../howto}/cluster/k8s/start_paddle.py | 0 .../nn.rst => doc/howto/concepts/nn_cn.rst | 0 .../howto/concepts/program_concepts_cn.rst | 0 .../howto/concepts/src}/pserver_topology.dot | 0 .../howto/concepts/src}/trainer_config.py | 0 .../howto/concepts/use_concepts_cn.rst | 54 +- doc/howto/deep_model/index_cn.rst | 10 + .../deep_model/rnn/hierarchical_layer_cn.rst | 0 .../howto/deep_model/rnn/hrnn_demo_cn.rst | 0 .../rnn/hrnn_rnn_api_compare_cn.rst | 29 +- .../deep_model/rnn/recurrent_group_cn.md | 190 ++--- doc/howto/deep_model/rnn/rnn_en.rst | 6 +- .../deep_model/rnn/src}/glossary_rnn.dot | 0 .../rnn/src}/glossary_rnn_with_memory.dot | 0 .../simple_full_hierarchical_recurrent.dot | 0 .../rnn/src}/simple_full_recurrent.dot | 0 doc/howto/index_cn.rst | 27 + doc/howto/optimization/gpu_profiling_en.rst | 6 +- .../howto/write_docs/index_cn.rst | 0 doc/index_cn.rst | 11 + doc/{index.rst => index_en.rst} | 3 +- .../image_classification/src/cifar.png | Bin 0 -> 466572 bytes .../src/image_classification.png | Bin 0 -> 52635 bytes .../image_classification/src/lenet.png | Bin 0 -> 49835 bytes .../image_classification/src/plot.png | Bin 0 -> 31006 bytes doc/tutorials/index_cn.md | 23 + doc/tutorials/index_en.md | 6 +- .../tutorials/quick_start/index_cn.rst | 56 +- doc/tutorials/quick_start/index_en.md | 28 +- .../quick_start/src/NetContinuous_cn.jpg | Bin .../{ => src}/NetContinuous_en.png | Bin .../tutorials/quick_start/src/NetConv_cn.jpg | Bin .../quick_start/{ => src}/NetConv_en.png | Bin .../tutorials/quick_start/src/NetLR_cn.jpg | Bin .../quick_start/{ => src}/NetLR_en.png | Bin 
.../tutorials/quick_start/src/NetRNN_cn.jpg | Bin .../quick_start/{ => src}/NetRNN_en.png | Bin .../quick_start/src/PipelineNetwork_cn.jpg | Bin .../{ => src}/PipelineNetwork_en.jpg | Bin .../quick_start/src/PipelineTest_cn.jpg | Bin .../quick_start/{ => src}/PipelineTest_en.png | Bin .../quick_start/src/PipelineTrain_cn.jpg | Bin .../{ => src}/PipelineTrain_en.png | Bin .../tutorials/quick_start/src/Pipeline_cn.jpg | Bin .../quick_start/{ => src}/Pipeline_en.jpg | Bin .../semantic_role_labeling/index_cn.md | 2 +- .../semantic_role_labeling/index_en.md | 6 +- .../semantic_role_labeling_cn.md | 201 ------ .../{ => src}/curve.jpg | Bin .../semantic_role_labeling/src/feature.jpg | Bin 0 -> 31204 bytes .../src/network_arch.png | Bin 0 -> 27822 bytes .../tutorials/sentiment_analysis/index_cn.md | 650 +++++++++--------- .../sentiment_analysis/src/bi_lstm.jpg | Bin 0 -> 35593 bytes doc/tutorials/sentiment_analysis/src/lstm.png | Bin 0 -> 50694 bytes .../sentiment_analysis/src/stacked_lstm.jpg | Bin 0 -> 31077 bytes doc_cn/CMakeLists.txt | 31 - doc_cn/build_and_install/cmake/index.rst | 8 - .../build_and_install/cmake/install_deps.rst | 4 - .../cmake/make_and_install.rst | 4 - .../install/paddle_ssh.Dockerfile | 15 - .../install/paddle_version.txt | 11 - doc_cn/cluster/index.rst | 11 - doc_cn/demo/index.rst | 26 - doc_cn/demo/quick_start/index.md | 543 --------------- doc_cn/demo/sentiment_analysis/index.rst | 8 - doc_cn/howto/build_docker_image.rst | 35 - doc_cn/index.rst | 32 - doc_cn/introduction/parameters.png | Bin 44469 -> 0 bytes doc_cn/ui/cmd/index.rst | 20 - doc_cn/ui/cmd/paddle_version.txt | 11 - doc_cn/ui/index.rst | 32 - 110 files changed, 1030 insertions(+), 1848 deletions(-) rename doc_cn/ui/data_provider/dataprovider.rst => doc/api/data_provider/dataprovider_cn.rst (99%) rename doc/api/data_provider/{index_en.rst => dataprovider_en.rst} (100%) rename doc_cn/ui/data_provider/pydataprovider2.rst => doc/api/data_provider/pydataprovider2_cn.rst (95%) rename {doc_cn/ui/data_provider => doc/api/data_provider/src}/mnist_config.py (100%) rename {doc_cn/ui/data_provider => doc/api/data_provider/src}/mnist_provider.dict.py (100%) rename {doc_cn/ui/data_provider => doc/api/data_provider/src}/mnist_train.txt (100%) rename {doc_cn/ui/data_provider => doc/api/data_provider/src}/sentimental_config.py (100%) rename {doc_cn/ui/data_provider => doc/api/data_provider/src}/sentimental_provider.py (100%) rename {doc_cn/ui/data_provider => doc/api/data_provider/src}/sentimental_train.txt (100%) rename {doc_cn/ui/data_provider => doc/api/data_provider/src}/train.list (100%) create mode 100644 doc/api/index_cn.rst rename doc/api/predict/{ => src}/predict_sample.py (100%) rename doc_cn/ui/predict/swig_py_paddle.rst => doc/api/predict/swig_py_paddle_cn.rst (97%) rename doc_cn/conf.py.in => doc/conf.py.cn.in (99%) rename doc/{conf.py.in => conf.py.en.in} (98%) rename doc_cn/faq/index.rst => doc/faq/index_cn.rst (97%) rename {doc_cn/faq => doc/faq/src}/reduce_min_pool_size.py (100%) rename {doc_cn/faq => doc/faq/src}/word2vec_config.py (100%) rename {doc_cn/faq => doc/faq/src}/word2vec_dataprovider.py (100%) rename doc_cn/introduction/index.rst => doc/getstarted/basic_usage/index_cn.rst (87%) rename doc_cn/build_and_install/cmake/compile_options.rst => doc/getstarted/build_and_install/cmake/build_from_source_cn.rst (94%) rename {doc_cn => doc/getstarted}/build_and_install/cmake/cblas_settings.csv (100%) rename {doc_cn => doc/getstarted}/build_and_install/cmake/compile_options.csv (94%) rename 
doc_cn/build_and_install/install/docker_install.rst => doc/getstarted/build_and_install/docker_install_cn.rst (93%) rename doc_cn/build_and_install/index.rst => doc/getstarted/build_and_install/index_cn.rst (61%) rename doc_cn/build_and_install/install/ubuntu_install.rst => doc/getstarted/build_and_install/ubuntu_install_cn.rst (69%) create mode 100644 doc/getstarted/index_cn.rst rename {doc_cn => doc/howto}/cluster/k8s/Dockerfile (100%) rename doc_cn/cluster/k8s/distributed_training_on_kubernetes.md => doc/howto/cluster/k8s/distributed_training_on_k8s_cn.md (100%) rename {doc_cn => doc/howto}/cluster/k8s/job.yaml (100%) rename {doc_cn => doc/howto}/cluster/k8s/k8s-paddle-arch.png (100%) rename doc_cn/build_and_install/paddle_on_kubernetes.md => doc/howto/cluster/k8s/paddle_on_k8s_cn.md (100%) rename {doc_cn => doc/howto}/cluster/k8s/start.sh (100%) rename {doc_cn => doc/howto}/cluster/k8s/start_paddle.py (100%) rename doc_cn/concepts/nn.rst => doc/howto/concepts/nn_cn.rst (100%) rename doc_cn/concepts/program_concepts.rst => doc/howto/concepts/program_concepts_cn.rst (100%) rename {doc_cn/concepts => doc/howto/concepts/src}/pserver_topology.dot (100%) rename {doc_cn/concepts => doc/howto/concepts/src}/trainer_config.py (100%) rename doc_cn/concepts/use_concepts.rst => doc/howto/concepts/use_concepts_cn.rst (89%) create mode 100644 doc/howto/deep_model/index_cn.rst rename doc_cn/algorithm/rnn/hierarchical-layer.rst => doc/howto/deep_model/rnn/hierarchical_layer_cn.rst (100%) rename doc_cn/algorithm/rnn/hrnn_demo.rst => doc/howto/deep_model/rnn/hrnn_demo_cn.rst (100%) rename doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst => doc/howto/deep_model/rnn/hrnn_rnn_api_compare_cn.rst (91%) rename doc_cn/algorithm/rnn/rnn-tutorial.md => doc/howto/deep_model/rnn/recurrent_group_cn.md (98%) rename {doc_cn/algorithm/rnn => doc/howto/deep_model/rnn/src}/glossary_rnn.dot (100%) rename {doc_cn/algorithm/rnn => doc/howto/deep_model/rnn/src}/glossary_rnn_with_memory.dot (100%) rename {doc_cn/algorithm/rnn => doc/howto/deep_model/rnn/src}/simple_full_hierarchical_recurrent.dot (100%) rename {doc_cn/algorithm/rnn => doc/howto/deep_model/rnn/src}/simple_full_recurrent.dot (100%) create mode 100644 doc/howto/index_cn.rst rename doc_cn/howto/how_to_write_docs/index.rst => doc/howto/write_docs/index_cn.rst (100%) create mode 100644 doc/index_cn.rst rename doc/{index.rst => index_en.rst} (88%) create mode 100644 doc/tutorials/image_classification/src/cifar.png create mode 100644 doc/tutorials/image_classification/src/image_classification.png create mode 100644 doc/tutorials/image_classification/src/lenet.png create mode 100644 doc/tutorials/image_classification/src/plot.png create mode 100644 doc/tutorials/index_cn.md rename doc_cn/demo/quick_start/index.rst => doc/tutorials/quick_start/index_cn.rst (87%) rename doc_cn/demo/quick_start/NetContinuous.jpg => doc/tutorials/quick_start/src/NetContinuous_cn.jpg (100%) rename doc/tutorials/quick_start/{ => src}/NetContinuous_en.png (100%) rename doc_cn/demo/quick_start/NetConv.jpg => doc/tutorials/quick_start/src/NetConv_cn.jpg (100%) rename doc/tutorials/quick_start/{ => src}/NetConv_en.png (100%) rename doc_cn/demo/quick_start/NetLR.jpg => doc/tutorials/quick_start/src/NetLR_cn.jpg (100%) rename doc/tutorials/quick_start/{ => src}/NetLR_en.png (100%) rename doc_cn/demo/quick_start/NetRNN.jpg => doc/tutorials/quick_start/src/NetRNN_cn.jpg (100%) rename doc/tutorials/quick_start/{ => src}/NetRNN_en.png (100%) rename doc_cn/demo/quick_start/PipelineNetwork.jpg => 
doc/tutorials/quick_start/src/PipelineNetwork_cn.jpg (100%) rename doc/tutorials/quick_start/{ => src}/PipelineNetwork_en.jpg (100%) rename doc_cn/demo/quick_start/PipelineTest.jpg => doc/tutorials/quick_start/src/PipelineTest_cn.jpg (100%) rename doc/tutorials/quick_start/{ => src}/PipelineTest_en.png (100%) rename doc_cn/demo/quick_start/PipelineTrain.jpg => doc/tutorials/quick_start/src/PipelineTrain_cn.jpg (100%) rename doc/tutorials/quick_start/{ => src}/PipelineTrain_en.png (100%) rename doc_cn/demo/quick_start/Pipeline.jpg => doc/tutorials/quick_start/src/Pipeline_cn.jpg (100%) rename doc/tutorials/quick_start/{ => src}/Pipeline_en.jpg (100%) delete mode 100644 doc/tutorials/semantic_role_labeling/semantic_role_labeling_cn.md rename doc/tutorials/semantic_role_labeling/{ => src}/curve.jpg (100%) create mode 100644 doc/tutorials/semantic_role_labeling/src/feature.jpg create mode 100644 doc/tutorials/semantic_role_labeling/src/network_arch.png rename doc_cn/demo/sentiment_analysis/sentiment_analysis.md => doc/tutorials/sentiment_analysis/index_cn.md (96%) create mode 100644 doc/tutorials/sentiment_analysis/src/bi_lstm.jpg create mode 100644 doc/tutorials/sentiment_analysis/src/lstm.png create mode 100644 doc/tutorials/sentiment_analysis/src/stacked_lstm.jpg delete mode 100644 doc_cn/CMakeLists.txt delete mode 100644 doc_cn/build_and_install/cmake/index.rst delete mode 100644 doc_cn/build_and_install/cmake/install_deps.rst delete mode 100644 doc_cn/build_and_install/cmake/make_and_install.rst delete mode 100644 doc_cn/build_and_install/install/paddle_ssh.Dockerfile delete mode 100644 doc_cn/build_and_install/install/paddle_version.txt delete mode 100644 doc_cn/cluster/index.rst delete mode 100644 doc_cn/demo/index.rst delete mode 100644 doc_cn/demo/quick_start/index.md delete mode 100644 doc_cn/demo/sentiment_analysis/index.rst delete mode 100644 doc_cn/howto/build_docker_image.rst delete mode 100644 doc_cn/index.rst delete mode 100644 doc_cn/introduction/parameters.png delete mode 100644 doc_cn/ui/cmd/index.rst delete mode 100644 doc_cn/ui/cmd/paddle_version.txt delete mode 100644 doc_cn/ui/index.rst diff --git a/CMakeLists.txt b/CMakeLists.txt index 0a44e56719..d82d8f633c 100644 --- a/CMakeLists.txt +++ b/CMakeLists.txt @@ -11,7 +11,7 @@ find_package(Protobuf REQUIRED) # Check protobuf library version. 
execute_process(COMMAND ${PROTOBUF_PROTOC_EXECUTABLE} --version - OUTPUT_VARIABLE PROTOBUF_VERSION) + OUTPUT_VARIABLE PROTOBUF_VERSION) string(REPLACE "libprotoc " "" PROTOBUF_VERSION ${PROTOBUF_VERSION}) set(PROTOBUF_3 OFF) @@ -169,5 +169,4 @@ add_subdirectory(paddle) add_subdirectory(python) if(WITH_DOC) add_subdirectory(doc) - add_subdirectory(doc_cn) endif() diff --git a/doc/CMakeLists.txt b/doc/CMakeLists.txt index efcf8b0ad3..1b0fbadeb3 100644 --- a/doc/CMakeLists.txt +++ b/doc/CMakeLists.txt @@ -7,25 +7,50 @@ if(NOT DEFINED SPHINX_THEME_DIR) endif() # configured documentation tools and intermediate build results -set(BINARY_BUILD_DIR "${CMAKE_CURRENT_BINARY_DIR}/_build") +set(BINARY_BUILD_DIR_EN "${CMAKE_CURRENT_BINARY_DIR}/en/_build") # Sphinx cache with pickled ReST documents -set(SPHINX_CACHE_DIR "${CMAKE_CURRENT_BINARY_DIR}/_doctrees") +set(SPHINX_CACHE_DIR_EN "${CMAKE_CURRENT_BINARY_DIR}/en/_doctrees") -# HTML output directory -set(SPHINX_HTML_DIR "${CMAKE_CURRENT_BINARY_DIR}/html") +# HTML output director +set(SPHINX_HTML_DIR_EN "${CMAKE_CURRENT_BINARY_DIR}/en/html") configure_file( - "${CMAKE_CURRENT_SOURCE_DIR}/conf.py.in" - "${BINARY_BUILD_DIR}/conf.py" + "${CMAKE_CURRENT_SOURCE_DIR}/conf.py.en.in" + "${BINARY_BUILD_DIR_EN}/conf.py" @ONLY) sphinx_add_target(paddle_docs html - ${BINARY_BUILD_DIR} - ${SPHINX_CACHE_DIR} + ${BINARY_BUILD_DIR_EN} + ${SPHINX_CACHE_DIR_EN} ${CMAKE_CURRENT_SOURCE_DIR} - ${SPHINX_HTML_DIR}) + ${SPHINX_HTML_DIR_EN}) add_dependencies(paddle_docs gen_proto_py) + + +# configured documentation tools and intermediate build results +set(BINARY_BUILD_DIR_CN "${CMAKE_CURRENT_BINARY_DIR}/cn/_build") + +# Sphinx cache with pickled ReST documents +set(SPHINX_CACHE_DIR_CN "${CMAKE_CURRENT_BINARY_DIR}/cn/_doctrees") + +# HTML output directory +set(SPHINX_HTML_DIR_CN "${CMAKE_CURRENT_BINARY_DIR}/cn/html") + +configure_file( + "${CMAKE_CURRENT_SOURCE_DIR}/conf.py.cn.in" + "${BINARY_BUILD_DIR_CN}/conf.py" + @ONLY) + +sphinx_add_target(paddle_docs_cn + html + ${BINARY_BUILD_DIR_CN} + ${SPHINX_CACHE_DIR_CN} + ${CMAKE_CURRENT_SOURCE_DIR} + ${SPHINX_HTML_DIR_CN}) + +add_dependencies(paddle_docs_cn + gen_proto_py) diff --git a/doc_cn/ui/data_provider/dataprovider.rst b/doc/api/data_provider/dataprovider_cn.rst similarity index 99% rename from doc_cn/ui/data_provider/dataprovider.rst rename to doc/api/data_provider/dataprovider_cn.rst index e6796429a7..6861ecece8 100644 --- a/doc_cn/ui/data_provider/dataprovider.rst +++ b/doc/api/data_provider/dataprovider_cn.rst @@ -1,13 +1,13 @@ DataProvider的介绍 ================== -DataProvider是PaddlePaddle负责提供数据的模块。其作用是将数据传入内存或显存,让神经网络可以进行训练或预测。用户可以通过简单使用Python接口 `PyDataProvider2 `_ ,来自定义传数据的过程。如果有更复杂的使用,或者需要更高的效率,用户也可以在C++端自定义一个 ``DataProvider`` 。 +DataProvider是PaddlePaddle负责提供数据的模块。其作用是将数据传入内存或显存,让神经网络可以进行训练或预测。用户可以通过简单使用Python接口 `PyDataProvider2 `_ ,来自定义传数据的过程。如果有更复杂的使用,或者需要更高的效率,用户也可以在C++端自定义一个 ``DataProvider`` 。 PaddlePaddle需要用户在网络配置(trainer_config.py)中定义使用哪种DataProvider,并且在DataProvider中实现如何访问训练文件列表(train.list)或测试文件列表(test.list)。 -- train.list和test.list存放在本地(推荐直接存放到训练目录,以相对路径引用)。一般情况下,两者均为纯文本文件,其中每一行对应一个数据文件地址: - - - 如果数据文件存于本地磁盘,这个地址则为它的绝对路径或相对路径(相对于PaddlePaddle程序运行时的路径)。 - - 地址也可以为hdfs文件路径,或者数据库连接路径等。 - - 由于这个地址会被DataProvider使用,因此,如何解析该地址也是用户自定义DataProvider时需要考虑的地方。 +- train.list和test.list存放在本地(推荐直接存放到训练目录,以相对路径引用)。一般情况下,两者均为纯文本文件,其中每一行对应一个数据文件地址: + + - 如果数据文件存于本地磁盘,这个地址则为它的绝对路径或相对路径(相对于PaddlePaddle程序运行时的路径)。 + - 地址也可以为hdfs文件路径,或者数据库连接路径等。 + - 由于这个地址会被DataProvider使用,因此,如何解析该地址也是用户自定义DataProvider时需要考虑的地方。 - 
如果没有设置test.list,或设置为None,那么在训练过程中不会执行测试操作;否则,会根据命令行参数指定的测试方式,在训练过程中进行测试,从而防止过拟合。 diff --git a/doc/api/data_provider/index_en.rst b/doc/api/data_provider/dataprovider_en.rst similarity index 100% rename from doc/api/data_provider/index_en.rst rename to doc/api/data_provider/dataprovider_en.rst diff --git a/doc_cn/ui/data_provider/pydataprovider2.rst b/doc/api/data_provider/pydataprovider2_cn.rst similarity index 95% rename from doc_cn/ui/data_provider/pydataprovider2.rst rename to doc/api/data_provider/pydataprovider2_cn.rst index dce373118c..f243ea775a 100644 --- a/doc_cn/ui/data_provider/pydataprovider2.rst +++ b/doc/api/data_provider/pydataprovider2_cn.rst @@ -1,227 +1,227 @@ -PyDataProvider2的使用 -===================== - -PyDataProvider2是PaddlePaddle使用Python提供数据的推荐接口。该接口使用多线程读取数据,并提供了简单的Cache功能;同时可以使用户只关注如何从文件中读取每一条数据,而不用关心数据如何传输,如何存储等等。 - -.. contents:: - -MNIST的使用场景 ---------------- - -我们以MNIST手写识别为例,来说明PyDataProvider2的简单使用场景。 - -样例数据 -++++++++ - -MNIST是一个包含有70,000张灰度图片的数字分类数据集。样例数据 ``mnist_train.txt`` 如下: - -.. literalinclude:: mnist_train.txt - -其中每行数据代表一张图片,行内使用 ``;`` 分成两部分。第一部分是图片的标签,为0-9中的一个数字;第二部分是28*28的图片像素灰度值。 对应的 ``train.list`` 即为这个数据文件的名字: - -.. literalinclude:: train.list - -dataprovider的使用 -++++++++++++++++++ - -.. literalinclude:: mnist_provider.dict.py - -- 首先,引入PaddlePaddle的PyDataProvider2包。 -- 其次,定义一个Python的 `Decorator `_ `@provider`_ 。用于将下一行的数据输入函数标记成一个PyDataProvider2,同时设置它的input_types属性。 - - - `input_types`_:设置这个PyDataProvider2返回什么样的数据。本例根据网络配置中 ``data_layer`` 的名字,显式指定返回的是一个28*28维的稠密浮点数向量和一个[0-9]的10维整数标签。 - - .. literalinclude:: mnist_config.py - :lines: 9-10 - - - 注意:如果用户不显示指定返回数据的对应关系,那么PaddlePaddle会根据layer的声明顺序,来确定对应关系。但这个关系可能不正确,所以推荐使用显式指定的方式来设置input_types。 -- 最后,实现数据输入函数(如本例的 ``process`` 函数)。 - - - 该函数的功能是:打开文本文件,读取每一行,将行中的数据转换成与input_types一致的格式,然后返回给PaddlePaddle进程。注意, - - - 返回的顺序需要和input_types中定义的顺序一致。 - - 返回时,必须使用Python关键词 ``yield`` ,相关概念是 ``generator`` 。 - - 一次yield调用,返回一条完整的样本。如果想为一个数据文件返回多条样本,只需要在函数中调用多次yield即可(本例中使用for循环进行多次调用)。 - - - 该函数具有两个参数: - - - settings:在本例中没有使用,具体可以参考 `init_hook`_ 中的说明。 - - filename:为 ``train.list`` 或 ``test.list`` 中的一行,即若干数据文件路径的某一个。 - -网络配置中的调用 -++++++++++++++++ - -在网络配置里,只需要一行代码就可以调用这个PyDataProvider2,如, - -.. literalinclude:: mnist_config.py - :lines: 1-7 - -训练数据是 ``train.list`` ,没有测试数据,调用的PyDataProvider2是 ``mnist_provider`` 模块中的 ``process`` 函数。 - -小结 -+++++ - -至此,简单的PyDataProvider2样例就说明完毕了。对用户来说,仅需要知道如何从 **一个文件** 中读取 **一条样本** ,就可以将数据传送给PaddlePaddle了。而PaddlePaddle则会帮用户做以下工作: - -* 将数据组合成Batch进行训练 -* 对训练数据进行Shuffle -* 多线程的数据读取 -* 缓存训练数据到内存(可选) -* CPU->GPU双缓存 - -是不是很简单呢? - -时序模型的使用场景 ------------------- -样例数据 -++++++++ - -时序模型是指数据的某一维度是一个序列形式,即包含时间步信息。所谓时间步信息,不一定和时间有关系,只是说明数据的顺序是重要的。例如,文本信息就是一个序列数据。 - -本例采用英文情感分类的数据,即将一段英文文本数据,分类成正面情绪和负面情绪两类(用0和1表示)。样例数据 ``sentimental_train.txt`` 如下: - -.. literalinclude:: sentimental_train.txt - -dataprovider的使用 -++++++++++++++++++ - -相对MNIST而言,这个dataprovider较复杂,主要原因是增加了初始化机制 `init_hook`_。本例的 ``on_init`` 函数就是根据该机制配置的,它会在dataprovider创建的时候执行。 - -- 其中 ``input_types`` 和在 `@provider`_ 中配置的效果一致。本例中的输入特征是词ID的序列,因此使用 ``integer_value_sequence`` 类型来设置。 -- 将 ``dictionary`` 存入settings对象,在 ``process`` 函数中使用。 dictionary是从网络配置中传入的dict对象,即一个将单词字符串映射到单词ID的字典。 - -.. literalinclude:: sentimental_provider.py - -网络配置中的调用 -++++++++++++++++ - -调用这个PyDataProvider2的方法,基本上和MNIST样例一致,除了 - -* 在配置中需要读取外部字典。 -* 在声明DataProvider的时候传入dictionary作为参数。 - -.. 
literalinclude:: sentimental_config.py - :emphasize-lines: 12-14 - -参考(Reference) ---------------- - -@provider -+++++++++ - -``@provider`` 是一个Python的 `Decorator`_ ,可以将某一个函数标记成一个PyDataProvider2。如果不了解 `Decorator`_ 是什么也没关系,只需知道这是一个标记属性的方法就可以了。它包含的属性参数如下: - -* input_types:数据输入格式。具体的格式说明,请参考 `input_types`_ 。 -* should_shuffle:是不是要对数据做Shuffle。训练时默认shuffle,测试时默认不shuffle。 -* min_pool_size:设置内存中最小暂存的数据条数,也是PaddlePaddle所能够保证的shuffle粒度。如果为-1,则会预先读取全部数据到内存中。 -* pool_size: 设置内存中暂存的数据条数。如果为-1(默认),则不在乎内存暂存多少条数据。如果设置,则推荐大于训练时batch size的值,并且在内存足够的情况下越大越好。 -* can_over_batch_size:是否允许暂存略微多余pool_size的数据。由于这样做可以避免很多死锁问题,一般推荐设置成True。 -* calc_batch_size:可以传入一个函数,用于自定义每条数据的batch size(默认为1)。 -* cache: 数据缓存的策略,具体请参考 `cache`_ 。 -* init_hook:初始化时调用的函数,具体请参考 `init_hook`_ 。 -* check:如果为true,会根据input_types检查数据的合法性。 -* check_fail_continue:如果为true,那么当check出数据不合法时,会扔到这条数据,继续训练或预测。(对check=false的情况,没有作用) - -input_types -+++++++++++ - -PaddlePaddle的数据包括四种主要类型,和三种序列模式。 - -四种数据类型: - -* dense_vector:稠密的浮点数向量。 -* sparse_binary_vector:稀疏的01向量,即大部分值为0,但有值的地方必须为1。 -* sparse_float_vector:稀疏的向量,即大部分值为0,但有值的部分可以是任何浮点数。 -* integer:整数标签。 - -三种序列模式: - -* SequenceType.NO_SEQUENCE:不是一条序列 -* SequenceType.SEQUENCE:是一条时间序列 -* SequenceType.SUB_SEQUENCE: 是一条时间序列,且序列的每一个元素还是一个时间序列。 - -不同的数据类型和序列模式返回的格式不同,列表如下: - -+----------------------+---------------------+-----------------------------------+------------------------------------------------+ -| | NO_SEQUENCE | SEQUENCE | SUB_SEQUENCE | -+======================+=====================+===================================+================================================+ -| dense_vector | [f, f, ...] | [[f, ...], [f, ...], ...] | [[[f, ...], ...], [[f, ...], ...],...] | -+----------------------+---------------------+-----------------------------------+------------------------------------------------+ -| sparse_binary_vector | [i, i, ...] | [[i, ...], [i, ...], ...] | [[[i, ...], ...], [[i, ...], ...],...] | -+----------------------+---------------------+-----------------------------------+------------------------------------------------+ -| sparse_float_vector | [(i,f), (i,f), ...] | [[(i,f), ...], [(i,f), ...], ...] | [[[(i,f), ...], ...], [[(i,f), ...], ...],...] | -+----------------------+---------------------+-----------------------------------+------------------------------------------------+ -| integer_value | i | [i, i, ...] | [[i, ...], [i, ...], ...] 
| -+----------------------+---------------------+-----------------------------------+------------------------------------------------+ - -其中,f代表一个浮点数,i代表一个整数。 - -注意:对sparse_binary_vector和sparse_float_vector,PaddlePaddle存的是有值位置的索引。例如, - -- 对一个5维非序列的稀疏01向量 ``[0, 1, 1, 0, 0]`` ,类型是sparse_binary_vector,返回的是 ``[1, 2]`` 。 -- 对一个5维非序列的稀疏浮点向量 ``[0, 0.5, 0.7, 0, 0]`` ,类型是sparse_float_vector,返回的是 ``[(1, 0.5), (2, 0.7)]`` 。 - -init_hook -+++++++++ - -init_hook可以传入一个函数。该函数在初始化的时候会被调用,其参数如下: - -* 第一个参数是settings对象,它和数据传入函数的第一个参数(如本例中 ``process`` 函数的 ``settings`` 参数)必须一致。该对象具有以下两个属性: - * settings.input_types:数据输入格式,具体请参考 `input_types`_ 。 - * settings.logger:一个logging对象。 -* 其他参数使用 ``kwargs`` (key word arguments)传入,包括以下两种: - * PaddlePaddle定义的参数: 1)is_train:bool型参数,表示用于训练或预测;2)file_list:所有文件列表。 - * 用户定义的参数:使用args在网络配置中设置。 - -注意:PaddlePaddle保留添加参数的权力,因此init_hook尽量使用 ``**kwargs`` 来接受不使用的函数以保证兼容性。 - -cache -+++++ - -PyDataProvider2提供了两种简单的Cache策略: - -* CacheType.NO_CACHE:不缓存任何数据,每次都会从python端读取数据 -* CacheType.CACHE_PASS_IN_MEM:第一个pass会从python端读取数据,剩下的pass会直接从内存里 - 读取数据。 - - -注意事项 --------- - -可能的内存泄露问题 -++++++++++++++++++ - -PaddlePaddle将train.list中的每一行都传递给process函数,从而生成多个generator。当训练数据非常多时,就会生成非常多的generator。 - -虽然每个generator在没有调用的时候,是几乎不占内存的;但当调用过一次后,generator便会存下当前的上下文(Context),而这个Context可能会非常大。并且,generator至少需要调用两次才会知道是否停止。所以,即使process函数里面只有一个yield,也需要两次随机选择到相同generator的时候,才会释放该段内存。 - -.. code-block:: python - - def func(): - yield 0 - - f = func() # 创建generator - tmp = next(f) # 调用一次,返回0 - tmp = next(f) # 调用第二次的时候,才会Stop Iteration - -由于顺序调用这些generator不会出现上述问题,因此有两种解决方案: - -1. **最佳推荐**:将样本的地址放入另一个文本文件,train.list写入那个文本文件的地址。即不要将每一个样本都放入train.list。 -2. 在generator的上下文中尽量留下非常少的变量引用,例如 - -.. code-block:: python - - def real_process(fn): - # ... read from fn - return result # 当函数返回的时候,python可以解除掉内部变量的引用。 - - def process(fn): - yield real_process(fn) - -注意:这个问题是PyDataProvider读数据时候的逻辑问题,很难整体修正。 - -内存不够用的情况 -++++++++++++++++ - -PyDataProvider2会尽可能多的使用内存。因此,对于内存较小的机器,推荐使用 ``pool_size`` 变量来设置内存中暂存的数据条。具体请参考 `@provider`_ 中的说明。 - +PyDataProvider2的使用 +===================== + +PyDataProvider2是PaddlePaddle使用Python提供数据的推荐接口。该接口使用多线程读取数据,并提供了简单的Cache功能;同时可以使用户只关注如何从文件中读取每一条数据,而不用关心数据如何传输,如何存储等等。 + +.. contents:: + +MNIST的使用场景 +--------------- + +我们以MNIST手写识别为例,来说明PyDataProvider2的简单使用场景。 + +样例数据 +++++++++ + +MNIST是一个包含有70,000张灰度图片的数字分类数据集。样例数据 ``mnist_train.txt`` 如下: + +.. literalinclude:: src/mnist_train.txt + +其中每行数据代表一张图片,行内使用 ``;`` 分成两部分。第一部分是图片的标签,为0-9中的一个数字;第二部分是28*28的图片像素灰度值。 对应的 ``train.list`` 即为这个数据文件的名字: + +.. literalinclude:: src/train.list + +dataprovider的使用 +++++++++++++++++++ + +.. literalinclude:: src/mnist_provider.dict.py + +- 首先,引入PaddlePaddle的PyDataProvider2包。 +- 其次,定义一个Python的 `Decorator `_ `@provider`_ 。用于将下一行的数据输入函数标记成一个PyDataProvider2,同时设置它的input_types属性。 + + - `input_types`_:设置这个PyDataProvider2返回什么样的数据。本例根据网络配置中 ``data_layer`` 的名字,显式指定返回的是一个28*28维的稠密浮点数向量和一个[0-9]的10维整数标签。 + + .. 
literalinclude:: src/mnist_config.py + :lines: 9-10 + + - 注意:如果用户不显示指定返回数据的对应关系,那么PaddlePaddle会根据layer的声明顺序,来确定对应关系。但这个关系可能不正确,所以推荐使用显式指定的方式来设置input_types。 +- 最后,实现数据输入函数(如本例的 ``process`` 函数)。 + + - 该函数的功能是:打开文本文件,读取每一行,将行中的数据转换成与input_types一致的格式,然后返回给PaddlePaddle进程。注意, + + - 返回的顺序需要和input_types中定义的顺序一致。 + - 返回时,必须使用Python关键词 ``yield`` ,相关概念是 ``generator`` 。 + - 一次yield调用,返回一条完整的样本。如果想为一个数据文件返回多条样本,只需要在函数中调用多次yield即可(本例中使用for循环进行多次调用)。 + + - 该函数具有两个参数: + + - settings:在本例中没有使用,具体可以参考 `init_hook`_ 中的说明。 + - filename:为 ``train.list`` 或 ``test.list`` 中的一行,即若干数据文件路径的某一个。 + +网络配置中的调用 +++++++++++++++++ + +在网络配置里,只需要一行代码就可以调用这个PyDataProvider2,如, + +.. literalinclude:: src/mnist_config.py + :lines: 1-7 + +训练数据是 ``train.list`` ,没有测试数据,调用的PyDataProvider2是 ``mnist_provider`` 模块中的 ``process`` 函数。 + +小结 ++++++ + +至此,简单的PyDataProvider2样例就说明完毕了。对用户来说,仅需要知道如何从 **一个文件** 中读取 **一条样本** ,就可以将数据传送给PaddlePaddle了。而PaddlePaddle则会帮用户做以下工作: + +* 将数据组合成Batch进行训练 +* 对训练数据进行Shuffle +* 多线程的数据读取 +* 缓存训练数据到内存(可选) +* CPU->GPU双缓存 + +是不是很简单呢? + +时序模型的使用场景 +------------------ +样例数据 +++++++++ + +时序模型是指数据的某一维度是一个序列形式,即包含时间步信息。所谓时间步信息,不一定和时间有关系,只是说明数据的顺序是重要的。例如,文本信息就是一个序列数据。 + +本例采用英文情感分类的数据,即将一段英文文本数据,分类成正面情绪和负面情绪两类(用0和1表示)。样例数据 ``sentimental_train.txt`` 如下: + +.. literalinclude:: src/sentimental_train.txt + +dataprovider的使用 +++++++++++++++++++ + +相对MNIST而言,这个dataprovider较复杂,主要原因是增加了初始化机制 `init_hook`_。本例的 ``on_init`` 函数就是根据该机制配置的,它会在dataprovider创建的时候执行。 + +- 其中 ``input_types`` 和在 `@provider`_ 中配置的效果一致。本例中的输入特征是词ID的序列,因此使用 ``integer_value_sequence`` 类型来设置。 +- 将 ``dictionary`` 存入settings对象,在 ``process`` 函数中使用。 dictionary是从网络配置中传入的dict对象,即一个将单词字符串映射到单词ID的字典。 + +.. literalinclude:: src/sentimental_provider.py + +网络配置中的调用 +++++++++++++++++ + +调用这个PyDataProvider2的方法,基本上和MNIST样例一致,除了 + +* 在配置中需要读取外部字典。 +* 在声明DataProvider的时候传入dictionary作为参数。 + +.. literalinclude:: src/sentimental_config.py + :emphasize-lines: 12-14 + +参考(Reference) +--------------- + +@provider ++++++++++ + +``@provider`` 是一个Python的 `Decorator`_ ,可以将某一个函数标记成一个PyDataProvider2。如果不了解 `Decorator`_ 是什么也没关系,只需知道这是一个标记属性的方法就可以了。它包含的属性参数如下: + +* input_types:数据输入格式。具体的格式说明,请参考 `input_types`_ 。 +* should_shuffle:是不是要对数据做Shuffle。训练时默认shuffle,测试时默认不shuffle。 +* min_pool_size:设置内存中最小暂存的数据条数,也是PaddlePaddle所能够保证的shuffle粒度。如果为-1,则会预先读取全部数据到内存中。 +* pool_size: 设置内存中暂存的数据条数。如果为-1(默认),则不在乎内存暂存多少条数据。如果设置,则推荐大于训练时batch size的值,并且在内存足够的情况下越大越好。 +* can_over_batch_size:是否允许暂存略微多余pool_size的数据。由于这样做可以避免很多死锁问题,一般推荐设置成True。 +* calc_batch_size:可以传入一个函数,用于自定义每条数据的batch size(默认为1)。 +* cache: 数据缓存的策略,具体请参考 `cache`_ 。 +* init_hook:初始化时调用的函数,具体请参考 `init_hook`_ 。 +* check:如果为true,会根据input_types检查数据的合法性。 +* check_fail_continue:如果为true,那么当check出数据不合法时,会扔到这条数据,继续训练或预测。(对check=false的情况,没有作用) + +input_types ++++++++++++ + +PaddlePaddle的数据包括四种主要类型,和三种序列模式。 + +四种数据类型: + +* dense_vector:稠密的浮点数向量。 +* sparse_binary_vector:稀疏的01向量,即大部分值为0,但有值的地方必须为1。 +* sparse_float_vector:稀疏的向量,即大部分值为0,但有值的部分可以是任何浮点数。 +* integer:整数标签。 + +三种序列模式: + +* SequenceType.NO_SEQUENCE:不是一条序列 +* SequenceType.SEQUENCE:是一条时间序列 +* SequenceType.SUB_SEQUENCE: 是一条时间序列,且序列的每一个元素还是一个时间序列。 + +不同的数据类型和序列模式返回的格式不同,列表如下: + ++----------------------+---------------------+-----------------------------------+------------------------------------------------+ +| | NO_SEQUENCE | SEQUENCE | SUB_SEQUENCE | ++======================+=====================+===================================+================================================+ +| dense_vector | [f, f, ...] | [[f, ...], [f, ...], ...] | [[[f, ...], ...], [[f, ...], ...],...] 
| ++----------------------+---------------------+-----------------------------------+------------------------------------------------+ +| sparse_binary_vector | [i, i, ...] | [[i, ...], [i, ...], ...] | [[[i, ...], ...], [[i, ...], ...],...] | ++----------------------+---------------------+-----------------------------------+------------------------------------------------+ +| sparse_float_vector | [(i,f), (i,f), ...] | [[(i,f), ...], [(i,f), ...], ...] | [[[(i,f), ...], ...], [[(i,f), ...], ...],...] | ++----------------------+---------------------+-----------------------------------+------------------------------------------------+ +| integer_value | i | [i, i, ...] | [[i, ...], [i, ...], ...] | ++----------------------+---------------------+-----------------------------------+------------------------------------------------+ + +其中,f代表一个浮点数,i代表一个整数。 + +注意:对sparse_binary_vector和sparse_float_vector,PaddlePaddle存的是有值位置的索引。例如, + +- 对一个5维非序列的稀疏01向量 ``[0, 1, 1, 0, 0]`` ,类型是sparse_binary_vector,返回的是 ``[1, 2]`` 。 +- 对一个5维非序列的稀疏浮点向量 ``[0, 0.5, 0.7, 0, 0]`` ,类型是sparse_float_vector,返回的是 ``[(1, 0.5), (2, 0.7)]`` 。 + +init_hook ++++++++++ + +init_hook可以传入一个函数。该函数在初始化的时候会被调用,其参数如下: + +* 第一个参数是settings对象,它和数据传入函数的第一个参数(如本例中 ``process`` 函数的 ``settings`` 参数)必须一致。该对象具有以下两个属性: + * settings.input_types:数据输入格式,具体请参考 `input_types`_ 。 + * settings.logger:一个logging对象。 +* 其他参数使用 ``kwargs`` (key word arguments)传入,包括以下两种: + * PaddlePaddle定义的参数: 1)is_train:bool型参数,表示用于训练或预测;2)file_list:所有文件列表。 + * 用户定义的参数:使用args在网络配置中设置。 + +注意:PaddlePaddle保留添加参数的权力,因此init_hook尽量使用 ``**kwargs`` 来接受不使用的函数以保证兼容性。 + +cache ++++++ + +PyDataProvider2提供了两种简单的Cache策略: + +* CacheType.NO_CACHE:不缓存任何数据,每次都会从python端读取数据 +* CacheType.CACHE_PASS_IN_MEM:第一个pass会从python端读取数据,剩下的pass会直接从内存里 + 读取数据。 + + +注意事项 +-------- + +可能的内存泄露问题 +++++++++++++++++++ + +PaddlePaddle将train.list中的每一行都传递给process函数,从而生成多个generator。当训练数据非常多时,就会生成非常多的generator。 + +虽然每个generator在没有调用的时候,是几乎不占内存的;但当调用过一次后,generator便会存下当前的上下文(Context),而这个Context可能会非常大。并且,generator至少需要调用两次才会知道是否停止。所以,即使process函数里面只有一个yield,也需要两次随机选择到相同generator的时候,才会释放该段内存。 + +.. code-block:: python + + def func(): + yield 0 + + f = func() # 创建generator + tmp = next(f) # 调用一次,返回0 + tmp = next(f) # 调用第二次的时候,才会Stop Iteration + +由于顺序调用这些generator不会出现上述问题,因此有两种解决方案: + +1. **最佳推荐**:将样本的地址放入另一个文本文件,train.list写入那个文本文件的地址。即不要将每一个样本都放入train.list。 +2. 在generator的上下文中尽量留下非常少的变量引用,例如 + +.. code-block:: python + + def real_process(fn): + # ... read from fn + return result # 当函数返回的时候,python可以解除掉内部变量的引用。 + + def process(fn): + yield real_process(fn) + +注意:这个问题是PyDataProvider读数据时候的逻辑问题,很难整体修正。 + +内存不够用的情况 +++++++++++++++++ + +PyDataProvider2会尽可能多的使用内存。因此,对于内存较小的机器,推荐使用 ``pool_size`` 变量来设置内存中暂存的数据条。具体请参考 `@provider`_ 中的说明。 + diff --git a/doc/api/data_provider/pydataprovider2_en.rst b/doc/api/data_provider/pydataprovider2_en.rst index 083436e271..6881805266 100644 --- a/doc/api/data_provider/pydataprovider2_en.rst +++ b/doc/api/data_provider/pydataprovider2_en.rst @@ -24,18 +24,18 @@ of 28 x 28 pixels. A small part of the original data as an example is shown as below: -.. literalinclude:: ../../../doc_cn/ui/data_provider/mnist_train.txt +.. literalinclude:: src/mnist_train.txt Each line of the data contains two parts, separated by :code:`;`. The first part is label of an image. The second part contains 28x28 pixel float values. Just write path of the above data into train.list. It looks like this: -.. literalinclude:: ../../../doc_cn/ui/data_provider/train.list +.. 
literalinclude:: src/train.list The corresponding dataprovider is shown as below: -.. literalinclude:: ../../../doc_cn/ui/data_provider/mnist_provider.py +.. literalinclude:: src/mnist_provider.dict.py The first line imports PyDataProvider2 package. The main function is the process function, that has two parameters. @@ -74,7 +74,7 @@ sample by using keywords :code:`yield`. Only a few lines of codes need to be added into the training configuration file, you can take this as an example. -.. literalinclude:: ../../../doc_cn/ui/data_provider/mnist_config.py +.. literalinclude:: src/mnist_config.py Here we specify training data by :code:`train.list`, and no testing data is specified. The method which actually provide data is :code:`process`. @@ -83,7 +83,7 @@ User also can use another style to provide data, which defines the :code:`data_layer`'s name explicitly when `yield`. For example, the :code:`dataprovider` is shown as below. -.. literalinclude:: ../../../doc_cn/ui/data_provider/mnist_provider.dict.py +.. literalinclude:: src/mnist_provider.dict.py :linenos: If user did't give the :code:`data_layer`'s name, PaddlePaddle will use @@ -119,11 +119,11 @@ negative sentiment (marked by 0 and 1 respectively). A small part of the original data as an example can be found in the path below: -.. literalinclude:: ../../../doc_cn/ui/data_provider/sentimental_train.txt +.. literalinclude:: src/sentimental_train.txt The corresponding data provider can be found in the path below: -.. literalinclude:: ../../../doc_cn/ui/data_provider/sentimental_provider.py +.. literalinclude:: src/sentimental_provider.py This data provider for sequential model is a little more complex than that for MINST dataset. @@ -141,7 +141,7 @@ initialized. The :code:`on_init` function has the following parameters: To pass these parameters into DataProvider, the following lines should be added into trainer configuration file. -.. literalinclude:: ../../../doc_cn/ui/data_provider/sentimental_config.py +.. 
literalinclude:: src/sentimental_config.py The definition is basically same as MNIST example, except: * Load dictionary in this configuration diff --git a/doc_cn/ui/data_provider/mnist_config.py b/doc/api/data_provider/src/mnist_config.py similarity index 100% rename from doc_cn/ui/data_provider/mnist_config.py rename to doc/api/data_provider/src/mnist_config.py diff --git a/doc_cn/ui/data_provider/mnist_provider.dict.py b/doc/api/data_provider/src/mnist_provider.dict.py similarity index 100% rename from doc_cn/ui/data_provider/mnist_provider.dict.py rename to doc/api/data_provider/src/mnist_provider.dict.py diff --git a/doc_cn/ui/data_provider/mnist_train.txt b/doc/api/data_provider/src/mnist_train.txt similarity index 100% rename from doc_cn/ui/data_provider/mnist_train.txt rename to doc/api/data_provider/src/mnist_train.txt diff --git a/doc_cn/ui/data_provider/sentimental_config.py b/doc/api/data_provider/src/sentimental_config.py similarity index 100% rename from doc_cn/ui/data_provider/sentimental_config.py rename to doc/api/data_provider/src/sentimental_config.py diff --git a/doc_cn/ui/data_provider/sentimental_provider.py b/doc/api/data_provider/src/sentimental_provider.py similarity index 100% rename from doc_cn/ui/data_provider/sentimental_provider.py rename to doc/api/data_provider/src/sentimental_provider.py diff --git a/doc_cn/ui/data_provider/sentimental_train.txt b/doc/api/data_provider/src/sentimental_train.txt similarity index 100% rename from doc_cn/ui/data_provider/sentimental_train.txt rename to doc/api/data_provider/src/sentimental_train.txt diff --git a/doc_cn/ui/data_provider/train.list b/doc/api/data_provider/src/train.list similarity index 100% rename from doc_cn/ui/data_provider/train.list rename to doc/api/data_provider/src/train.list diff --git a/doc/api/index_cn.rst b/doc/api/index_cn.rst new file mode 100644 index 0000000000..2d54af84b8 --- /dev/null +++ b/doc/api/index_cn.rst @@ -0,0 +1,37 @@ +API +=== + +DataProvider API +---------------- + +.. toctree:: + :maxdepth: 1 + + data_provider/dataprovider_cn.rst + data_provider/pydataprovider2_cn.rst + +.. _api_trainer_config: + +Model Config API +---------------- + +.. toctree:: + :maxdepth: 1 + + trainer_config_helpers/optimizers.rst + trainer_config_helpers/data_sources.rst + trainer_config_helpers/layers.rst + trainer_config_helpers/activations.rst + trainer_config_helpers/poolings.rst + trainer_config_helpers/networks.rst + trainer_config_helpers/evaluators.rst + trainer_config_helpers/attrs.rst + + +Applications API +---------------- + +.. toctree:: + :maxdepth: 1 + + predict/swig_py_paddle_cn.rst diff --git a/doc/api/index_en.rst b/doc/api/index_en.rst index 6fdee9f928..10c297a71d 100644 --- a/doc/api/index_en.rst +++ b/doc/api/index_en.rst @@ -7,7 +7,7 @@ DataProvider API .. toctree:: :maxdepth: 1 - data_provider/index_en.rst + data_provider/dataprovider_en.rst data_provider/pydataprovider2_en.rst .. 
_api_trainer_config: diff --git a/doc/api/predict/predict_sample.py b/doc/api/predict/src/predict_sample.py similarity index 100% rename from doc/api/predict/predict_sample.py rename to doc/api/predict/src/predict_sample.py diff --git a/doc_cn/ui/predict/swig_py_paddle.rst b/doc/api/predict/swig_py_paddle_cn.rst similarity index 97% rename from doc_cn/ui/predict/swig_py_paddle.rst rename to doc/api/predict/swig_py_paddle_cn.rst index 05f25345c5..15e35353bb 100644 --- a/doc_cn/ui/predict/swig_py_paddle.rst +++ b/doc/api/predict/swig_py_paddle_cn.rst @@ -34,7 +34,7 @@ PaddlePaddle使用swig对常用的预测接口进行了封装,通过编译会 如下是一段使用mnist model来实现手写识别的预测代码。完整的代码见 ``src_root/doc/ui/predict/predict_sample.py`` 。mnist model可以通过 ``src_root\demo\mnist`` 目录下的demo训练出来。 -.. literalinclude:: ../../../doc/ui/predict/predict_sample.py +.. literalinclude:: src/predict_sample.py :language: python :lines: 15-18,121-136 diff --git a/doc/api/predict/swig_py_paddle_en.rst b/doc/api/predict/swig_py_paddle_en.rst index 9845cd1607..16de50c0dd 100644 --- a/doc/api/predict/swig_py_paddle_en.rst +++ b/doc/api/predict/swig_py_paddle_en.rst @@ -13,7 +13,7 @@ Here is a sample python script that shows the typical prediction process for the MNIST classification problem. A complete sample code could be found at :code:`src_root/doc/ui/predict/predict_sample.py`. -.. literalinclude:: ./predict_sample.py +.. literalinclude:: src/predict_sample.py :language: python :lines: 15-18,90-100,101-104 diff --git a/doc_cn/conf.py.in b/doc/conf.py.cn.in similarity index 99% rename from doc_cn/conf.py.in rename to doc/conf.py.cn.in index 4f3afb814f..92d72f797e 100644 --- a/doc_cn/conf.py.in +++ b/doc/conf.py.cn.in @@ -62,7 +62,7 @@ source_suffix = ['.rst', '.md', '.Rmd'] source_encoding = 'utf-8' # The master toctree document. -master_doc = 'index' +master_doc = 'index_cn' # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. diff --git a/doc/conf.py.in b/doc/conf.py.en.in similarity index 98% rename from doc/conf.py.in rename to doc/conf.py.en.in index 01d156e887..f942f166fc 100644 --- a/doc/conf.py.in +++ b/doc/conf.py.en.in @@ -63,7 +63,7 @@ source_suffix = ['.rst', '.md', '.Rmd'] source_encoding = 'utf-8' # The master toctree document. -master_doc = 'index' +master_doc = 'index_en' # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. @@ -144,6 +144,6 @@ def setup(app): # no c++ API for now app.add_config_value('recommonmark_config', { 'url_resolver': lambda url: github_doc_root + url, - 'enable_eval_rst': True, + 'enable_eval_rst': True, }, True) app.add_transform(AutoStructify) diff --git a/doc_cn/faq/index.rst b/doc/faq/index_cn.rst similarity index 97% rename from doc_cn/faq/index.rst rename to doc/faq/index_cn.rst index df8f1308cb..abdb5c7cf9 100644 --- a/doc_cn/faq/index.rst +++ b/doc/faq/index_cn.rst @@ -1,5 +1,5 @@ #################### -PaddlePaddle常见问题 +FAQ #################### .. contents:: @@ -33,10 +33,9 @@ PyDataProvider使用的是异步加载,同时在内存里直接随即选取数 个内存池实际上决定了shuffle的粒度。所以,如果将这个内存池减小,又要保证数据是随机的, 那么最好将数据文件在每次读取之前做一次shuffle。可能的代码为 -.. literalinclude:: reduce_min_pool_size.py +.. 
literalinclude:: src/reduce_min_pool_size.py -这样做可以极大的减少内存占用,并且可能会加速训练过程,详细文档参考 `这里 -<../ui/data_provider/pydataprovider2.html#provider>`_ 。 +这样做可以极大的减少内存占用,并且可能会加速训练过程,详细文档参考 `这里 <../ui/data_provider/pydataprovider2.html#provider>`_ 。 神经元激活内存 ++++++++++++++ @@ -76,7 +75,7 @@ PaddlePaddle支持非常多的优化算法(Optimizer),不同的优化算法需 使用 :code:`pydataprovider`时,可以减少缓存池的大小,同时设置内存缓存功能,即可以极大的加速数据载入流程。 :code:`DataProvider` 缓存池的减小,和之前减小通过减小缓存池来减小内存占用的原理一致。 -.. literalinclude:: reduce_min_pool_size.py +.. literalinclude:: src/reduce_min_pool_size.py 同时 :code:`@provider` 接口有一个 :code:`cache` 参数来控制缓存方法,将其设置成 :code:`CacheType.CACHE_PASS_IN_MEM` 的话,会将第一个 :code:`pass` (过完所有训练数据即为一个pass)生成的数据缓存在内存里,在之后的 :code:`pass` 中,不会再从 :code:`python` 端读取数据,而是直接从内存的缓存里读取数据。这也会极大减少数据读入的耗时。 @@ -90,11 +89,11 @@ PaddlePaddle支持Sparse的训练,sparse训练需要训练特征是 :code:`spa 使用一个词前两个词和后两个词,来预测这个中间的词。这个任务的DataProvider为\: -.. literalinclude:: word2vec_dataprovider.py +.. literalinclude:: src/word2vec_dataprovider.py 这个任务的配置为\: -.. literalinclude:: word2vec_config.py +.. literalinclude:: src/word2vec_config.py 更多关于sparse训练的内容请参考 `sparse训练的文档 `_ @@ -158,7 +157,7 @@ PaddlePaddle的参数使用名字 :code:`name` 作为参数的ID,相同名字 这里 :code:`hidden_a` 和 :code:`hidden_b` 使用了同样的parameter和bias。并且softmax层的两个输入也使用了同样的参数 :code:`softmax_param`。 7. *-cp27mu-linux_x86_64.whl is not a supported wheel on this platform. ------------------------------------------------------------------------ +--------------------------------------------------------------------------- 出现这个问题的主要原因是,系统编译wheel包的时候,使用的 :code:`wheel` 包是最新的, 而系统中的 :code:`pip` 包比较老。具体的解决方法是,更新 :code:`pip` 包并重新编译PaddlePaddle。 @@ -220,7 +219,7 @@ PaddlePaddle的参数使用名字 :code:`name` 作为参数的ID,相同名字 10. CMake源码编译, 找到的PythonLibs和PythonInterp版本不一致 ----------------------------------------------------------- +---------------------------------------------------------------- 这是目前CMake寻找Python的逻辑存在缺陷,如果系统安装了多个Python版本,CMake找到的Python库和Python解释器版本可能有不一致现象,导致编译PaddlePaddle失败。正确的解决方法是, 用户强制指定特定的Python版本,具体操作如下: @@ -231,7 +230,7 @@ PaddlePaddle的参数使用名字 :code:`name` 作为参数的ID,相同名字 用户需要指定本机上Python的路径:````, ````, ```` -10. A protocol message was rejected because it was too big +10. 
A protocol message was rejected because it was too big ---------------------------------------------------------- 如果在训练NLP相关模型时,出现以下错误: diff --git a/doc_cn/faq/reduce_min_pool_size.py b/doc/faq/src/reduce_min_pool_size.py similarity index 100% rename from doc_cn/faq/reduce_min_pool_size.py rename to doc/faq/src/reduce_min_pool_size.py diff --git a/doc_cn/faq/word2vec_config.py b/doc/faq/src/word2vec_config.py similarity index 100% rename from doc_cn/faq/word2vec_config.py rename to doc/faq/src/word2vec_config.py diff --git a/doc_cn/faq/word2vec_dataprovider.py b/doc/faq/src/word2vec_dataprovider.py similarity index 100% rename from doc_cn/faq/word2vec_dataprovider.py rename to doc/faq/src/word2vec_dataprovider.py diff --git a/doc_cn/introduction/index.rst b/doc/getstarted/basic_usage/index_cn.rst similarity index 87% rename from doc_cn/introduction/index.rst rename to doc/getstarted/basic_usage/index_cn.rst index c996f5f4ac..8b84306ed7 100644 --- a/doc_cn/introduction/index.rst +++ b/doc/getstarted/basic_usage/index_cn.rst @@ -58,6 +58,7 @@ PaddlePaddle是源于百度的一个深度学习平台。这份简短的介绍 cost = regression_cost(input= ȳ, label=y) outputs(cost) + 这段简短的配置展示了PaddlePaddle的基本用法: - 第一部分定义了数据输入。一般情况下,PaddlePaddle先从一个文件列表里获得数据文件地址,然后交给用户自定义的函数(例如上面的 `process`函数)进行读入和预处理从而得到真实输入。本文中由于输入数据是随机生成的不需要读输入文件,所以放一个空列表(`empty.list`)即可。 @@ -65,10 +66,10 @@ PaddlePaddle是源于百度的一个深度学习平台。这份简短的介绍 - 第二部分主要是选择学习算法,它定义了模型参数改变的规则。PaddlePaddle提供了很多优秀的学习算法,这里使用一个基于momentum的随机梯度下降(SGD)算法,该算法每批量(batch)读取12个采样数据进行随机梯度计算来更新更新。 - 最后一部分是神经网络的配置。由于PaddlePaddle已经实现了丰富的网络层,所以很多时候你需要做的只是定义正确的网络层并把它们连接起来。这里使用了三种网络单元: - - - **数据层**:数据层 `data_layer` 是神经网络的入口,它读入数据并将它们传输到接下来的网络层。这里数据层有两个,分别对应于变量 `x` 和 `y`。 - - **全连接层**:全连接层 `fc_layer` 是基础的计算单元,这里利用它建模变量之间的线性关系。计算单元是神经网络的核心,PaddlePaddle支持大量的计算单元和任意深度的网络连接,从而可以拟合任意的函数来学习复杂的数据关系。 - - **回归误差代价层**:回归误差代价层 `regression_cost` 是众多误差代价函数层的一种,它们在训练过程作为网络的出口,用来计算模型的误差,是模型参数优化的目标函数。 + + - **数据层**:数据层 `data_layer` 是神经网络的入口,它读入数据并将它们传输到接下来的网络层。这里数据层有两个,分别对应于变量 `x` 和 `y`。 + - **全连接层**:全连接层 `fc_layer` 是基础的计算单元,这里利用它建模变量之间的线性关系。计算单元是神经网络的核心,PaddlePaddle支持大量的计算单元和任意深度的网络连接,从而可以拟合任意的函数来学习复杂的数据关系。 + - **回归误差代价层**:回归误差代价层 `regression_cost` 是众多误差代价函数层的一种,它们在训练过程作为网络的出口,用来计算模型的误差,是模型参数优化的目标函数。 定义了网络结构并保存为 `trainer_config.py` 之后,运行以下训练命令: @@ -99,8 +100,8 @@ PaddlePaddle将每个模型参数作为一个numpy数组单独存为一个文件 # w=1.999743, b=0.300137 .. image:: ./parameters.png - :align: center - :scale: 80 % + :align: center + :scale: 80 % 从图中可以看到,虽然 `w` 和 `b` 都使用随机值初始化,但在起初的几轮训练中它们都在快速逼近真实值,并且后续仍在不断改进,使得最终得到的模型几乎与真实模型一致。 diff --git a/doc_cn/build_and_install/cmake/compile_options.rst b/doc/getstarted/build_and_install/cmake/build_from_source_cn.rst similarity index 94% rename from doc_cn/build_and_install/cmake/compile_options.rst rename to doc/getstarted/build_and_install/cmake/build_from_source_cn.rst index f345ead2bf..3a52c8723b 100644 --- a/doc_cn/build_and_install/cmake/compile_options.rst +++ b/doc/getstarted/build_and_install/cmake/build_from_source_cn.rst @@ -1,43 +1,43 @@ -PaddlePaddle的编译选项 -====================== - -PaddlePaddle的编译选项,包括生成CPU/GPU二进制文件、链接何种BLAS库等。用户可在调用cmake的时候设置它们,详细的cmake使用方法可以参考 `官方文档 `_ 。 - -Bool型的编译选项 ----------------- -用户可在cmake的命令行中,通过使用 ``-D`` 命令设置该类编译选项,例如 - -.. code-block:: bash - - cmake .. -DWITH_GPU=OFF - -.. csv-table:: Bool型的编译选项 - :widths: 1, 7, 2 - :file: compile_options.csv - -BLAS/CUDA/Cudnn的编译选项 --------------------------- -BLAS -+++++ - -PaddlePaddle支持以下任意一种BLAS库:`MKL `_ ,`ATLAS `_ ,`OpenBlAS `_ 和 `REFERENCE BLAS `_ 。 - -.. 
csv-table:: BLAS路径相关的编译选项 - :widths: 1, 2, 7 - :file: cblas_settings.csv - -CUDA/Cudnn -+++++++++++ - -PaddlePaddle可以使用cudnn v2之后的任何一个版本来编译运行,但尽量请保持编译和运行使用的cudnn是同一个版本。 我们推荐使用最新版本的cudnn v5.1。 - -编译选项的设置 -++++++++++++++ - -PaddePaddle通过编译时指定路径来实现引用各种BLAS/CUDA/Cudnn库。cmake编译时,首先在系统路径(/usr/lib\:/usr/local/lib)中搜索这几个库,同时也会读取相关路径变量来进行搜索。 通过使用 ``-D`` 命令可以设置,例如 - -.. code-block:: bash - - cmake .. -DMKL_ROOT=/opt/mkl/ -DCUDNN_ROOT=/opt/cudnnv5 - +PaddlePaddle的编译选项 +====================== + +PaddlePaddle的编译选项,包括生成CPU/GPU二进制文件、链接何种BLAS库等。用户可在调用cmake的时候设置它们,详细的cmake使用方法可以参考 `官方文档 `_ 。 + +Bool型的编译选项 +---------------- +用户可在cmake的命令行中,通过使用 ``-D`` 命令设置该类编译选项,例如 + +.. code-block:: bash + + cmake .. -DWITH_GPU=OFF + +.. csv-table:: Bool型的编译选项 + :widths: 1, 7, 2 + :file: compile_options.csv + +BLAS/CUDA/Cudnn的编译选项 +-------------------------- +BLAS ++++++ + +PaddlePaddle支持以下任意一种BLAS库:`MKL `_ ,`ATLAS `_ ,`OpenBlAS `_ 和 `REFERENCE BLAS `_ 。 + +.. csv-table:: BLAS路径相关的编译选项 + :widths: 1, 2, 7 + :file: cblas_settings.csv + +CUDA/Cudnn ++++++++++++ + +PaddlePaddle可以使用cudnn v2之后的任何一个版本来编译运行,但尽量请保持编译和运行使用的cudnn是同一个版本。 我们推荐使用最新版本的cudnn v5.1。 + +编译选项的设置 +++++++++++++++ + +PaddePaddle通过编译时指定路径来实现引用各种BLAS/CUDA/Cudnn库。cmake编译时,首先在系统路径(/usr/lib\:/usr/local/lib)中搜索这几个库,同时也会读取相关路径变量来进行搜索。 通过使用 ``-D`` 命令可以设置,例如 + +.. code-block:: bash + + cmake .. -DMKL_ROOT=/opt/mkl/ -DCUDNN_ROOT=/opt/cudnnv5 + 注意:这几个编译选项的设置,只在第一次cmake的时候有效。如果之后想要重新设置,推荐清理整个编译目录(``rm -rf``)后,再指定。 \ No newline at end of file diff --git a/doc_cn/build_and_install/cmake/cblas_settings.csv b/doc/getstarted/build_and_install/cmake/cblas_settings.csv similarity index 100% rename from doc_cn/build_and_install/cmake/cblas_settings.csv rename to doc/getstarted/build_and_install/cmake/cblas_settings.csv diff --git a/doc_cn/build_and_install/cmake/compile_options.csv b/doc/getstarted/build_and_install/cmake/compile_options.csv similarity index 94% rename from doc_cn/build_and_install/cmake/compile_options.csv rename to doc/getstarted/build_and_install/cmake/compile_options.csv index 12b45eebb2..171d8fba71 100644 --- a/doc_cn/build_and_install/cmake/compile_options.csv +++ b/doc/getstarted/build_and_install/cmake/compile_options.csv @@ -1,14 +1,14 @@ -选项,说明,默认值 -WITH_GPU,是否支持GPU。,取决于是否寻找到CUDA工具链 -WITH_DOUBLE,是否使用双精度浮点数。,否 -WITH_DSO,是否运行时动态加载CUDA动态库,而非静态加载CUDA动态库。,是 -WITH_AVX,是否编译含有AVX指令集的PaddlePaddle二进制文件,是 -WITH_PYTHON,是否内嵌PYTHON解释器。方便今后的嵌入式移植工作。,是 -WITH_STYLE_CHECK,是否编译时进行代码风格检查,是 -WITH_RDMA,是否开启RDMA,否 -WITH_GLOG,是否开启GLOG。如果不开启,则会使用一个简化版的日志,同时方便今后的嵌入式移植工作。,取决于是否寻找到GLOG -WITH_GFLAGS,是否使用GFLAGS。如果不开启,则会使用一个简化版的命令行参数解析器,同时方便今后的嵌入式移植工作。,取决于是否寻找到GFLAGS -WITH_TIMER,是否开启计时功能。如果开启会导致运行略慢,打印的日志变多,但是方便调试和测Benchmark,否 -WITH_TESTING,是否开启单元测试,取决于是否寻找到GTEST -WITH_DOC,是否编译中英文文档,否 +选项,说明,默认值 +WITH_GPU,是否支持GPU。,取决于是否寻找到CUDA工具链 +WITH_DOUBLE,是否使用双精度浮点数。,否 +WITH_DSO,是否运行时动态加载CUDA动态库,而非静态加载CUDA动态库。,是 +WITH_AVX,是否编译含有AVX指令集的PaddlePaddle二进制文件,是 +WITH_PYTHON,是否内嵌PYTHON解释器。方便今后的嵌入式移植工作。,是 +WITH_STYLE_CHECK,是否编译时进行代码风格检查,是 +WITH_RDMA,是否开启RDMA,否 +WITH_GLOG,是否开启GLOG。如果不开启,则会使用一个简化版的日志,同时方便今后的嵌入式移植工作。,取决于是否寻找到GLOG +WITH_GFLAGS,是否使用GFLAGS。如果不开启,则会使用一个简化版的命令行参数解析器,同时方便今后的嵌入式移植工作。,取决于是否寻找到GFLAGS +WITH_TIMER,是否开启计时功能。如果开启会导致运行略慢,打印的日志变多,但是方便调试和测Benchmark,否 +WITH_TESTING,是否开启单元测试,取决于是否寻找到GTEST +WITH_DOC,是否编译中英文文档,否 WITH_SWIG_PY,是否编译PYTHON的SWIG接口,该接口可用于预测和定制化训练,取决于是否寻找到SWIG \ No newline at end of file diff --git a/doc_cn/build_and_install/install/docker_install.rst b/doc/getstarted/build_and_install/docker_install_cn.rst similarity index 93% rename from 
doc_cn/build_and_install/install/docker_install.rst rename to doc/getstarted/build_and_install/docker_install_cn.rst index 40339659be..35234e0eb3 100644 --- a/doc_cn/build_and_install/install/docker_install.rst +++ b/doc/getstarted/build_and_install/docker_install_cn.rst @@ -111,7 +111,24 @@ cuda相关的Driver和设备映射进container中,脚本类似于 简单的含有ssh的Dockerfile如下: -.. literalinclude:: paddle_ssh.Dockerfile +.. code-block:: bash + + FROM paddledev/paddle:cpu-latest + + MAINTAINER PaddlePaddle dev team + + RUN apt-get update + RUN apt-get install -y openssh-server + RUN mkdir /var/run/sshd + RUN echo 'root:root' | chpasswd + + RUN sed -ri 's/^PermitRootLogin\s+.*/PermitRootLogin yes/' /etc/ssh/sshd_config + RUN sed -ri 's/UsePAM yes/#UsePAM yes/g' /etc/ssh/sshd_config + + EXPOSE 22 + + CMD ["/usr/sbin/sshd", "-D"] + 使用该Dockerfile构建出镜像,然后运行这个container即可。相关命令为\: diff --git a/doc/getstarted/build_and_install/docker_install_en.rst b/doc/getstarted/build_and_install/docker_install_en.rst index 8df7e063a1..4708890e48 100644 --- a/doc/getstarted/build_and_install/docker_install_en.rst +++ b/doc/getstarted/build_and_install/docker_install_en.rst @@ -17,7 +17,7 @@ CPU-only one and a CUDA GPU one. We do so by configuring `dockerhub.com `_ automatically runs the following commands: -.. code-block:: base +.. code-block:: bash docker build -t paddle:cpu -f paddle/scripts/docker/Dockerfile . docker build -t paddle:gpu -f paddle/scripts/docker/Dockerfile.gpu . diff --git a/doc_cn/build_and_install/index.rst b/doc/getstarted/build_and_install/index_cn.rst similarity index 61% rename from doc_cn/build_and_install/index.rst rename to doc/getstarted/build_and_install/index_cn.rst index 48163fb36e..e599aab2cb 100644 --- a/doc_cn/build_and_install/index.rst +++ b/doc/getstarted/build_and_install/index_cn.rst @@ -9,8 +9,8 @@ PaddlePaddle提供数个预编译的二进制来进行安装,包括Docker镜 .. toctree:: :maxdepth: 1 - install/docker_install.rst - install/ubuntu_install.rst + docker_install_cn.rst + ubuntu_install_cn.rst @@ -19,9 +19,9 @@ PaddlePaddle提供数个预编译的二进制来进行安装,包括Docker镜 .. warning:: - 编译选项主要推荐高级用户查看,普通用户请走安装流程。 + 编译选项主要推荐高级用户查看,普通用户请走安装流程。 -.. toctree:: - :maxdepth: 1 +.. toctree:: + :maxdepth: 1 - cmake/index.rst + cmake/build_from_source_cn.rst \ No newline at end of file diff --git a/doc_cn/build_and_install/install/ubuntu_install.rst b/doc/getstarted/build_and_install/ubuntu_install_cn.rst similarity index 69% rename from doc_cn/build_and_install/install/ubuntu_install.rst rename to doc/getstarted/build_and_install/ubuntu_install_cn.rst index 4500d6e0b0..f923a1917c 100644 --- a/doc_cn/build_and_install/install/ubuntu_install.rst +++ b/doc/getstarted/build_and_install/ubuntu_install_cn.rst @@ -38,7 +38,20 @@ PaddlePaddle提供了ubuntu 14.04 deb安装包。 安装完成后,可以使用命令 :code:`paddle version` 查看安装后的paddle 版本: -.. literalinclude:: paddle_version.txt +.. code-block:: shell + + PaddlePaddle 0.8.0b1, compiled with + with_avx: ON + with_gpu: OFF + with_double: OFF + with_python: ON + with_rdma: OFF + with_glog: ON + with_gflags: ON + with_metric_learning: + with_timer: OFF + with_predict_sdk: + 可能遇到的问题 -------------- @@ -48,9 +61,9 @@ libcudart.so/libcudnn.so找不到 安装完成后,运行 :code:`paddle train` 报错\: -.. code-block:: shell +.. code-block:: shell - 0831 12:36:04.151525 1085 hl_dso_loader.cc:70] Check failed: nullptr != *dso_handle For Gpu version of PaddlePaddle, it couldn't find CUDA library: libcudart.so Please make sure you already specify its path.Note: for training data on Cpu using Gpu version of PaddlePaddle,you must specify libcudart.so via LD_LIBRARY_PATH. 
+ 0831 12:36:04.151525 1085 hl_dso_loader.cc:70] Check failed: nullptr != *dso_handle For Gpu version of PaddlePaddle, it couldn't find CUDA library: libcudart.so Please make sure you already specify its path.Note: for training data on Cpu using Gpu version of PaddlePaddle,you must specify libcudart.so via LD_LIBRARY_PATH. 原因是未设置cuda运行时环境变量。 如果使用GPU版本的PaddlePaddle,请安装CUDA 7.5 和CUDNN 5到本地环境中,并设置: diff --git a/doc/getstarted/index_cn.rst b/doc/getstarted/index_cn.rst new file mode 100644 index 0000000000..a0867a6e59 --- /dev/null +++ b/doc/getstarted/index_cn.rst @@ -0,0 +1,8 @@ +GET STARTED +============ + +.. toctree:: + :maxdepth: 2 + + build_and_install/index_cn.rst + basic_usage/index_cn.rst diff --git a/doc_cn/cluster/k8s/Dockerfile b/doc/howto/cluster/k8s/Dockerfile similarity index 100% rename from doc_cn/cluster/k8s/Dockerfile rename to doc/howto/cluster/k8s/Dockerfile diff --git a/doc_cn/cluster/k8s/distributed_training_on_kubernetes.md b/doc/howto/cluster/k8s/distributed_training_on_k8s_cn.md similarity index 100% rename from doc_cn/cluster/k8s/distributed_training_on_kubernetes.md rename to doc/howto/cluster/k8s/distributed_training_on_k8s_cn.md diff --git a/doc_cn/cluster/k8s/job.yaml b/doc/howto/cluster/k8s/job.yaml similarity index 100% rename from doc_cn/cluster/k8s/job.yaml rename to doc/howto/cluster/k8s/job.yaml diff --git a/doc_cn/cluster/k8s/k8s-paddle-arch.png b/doc/howto/cluster/k8s/k8s-paddle-arch.png similarity index 100% rename from doc_cn/cluster/k8s/k8s-paddle-arch.png rename to doc/howto/cluster/k8s/k8s-paddle-arch.png diff --git a/doc_cn/build_and_install/paddle_on_kubernetes.md b/doc/howto/cluster/k8s/paddle_on_k8s_cn.md similarity index 100% rename from doc_cn/build_and_install/paddle_on_kubernetes.md rename to doc/howto/cluster/k8s/paddle_on_k8s_cn.md diff --git a/doc_cn/cluster/k8s/start.sh b/doc/howto/cluster/k8s/start.sh similarity index 100% rename from doc_cn/cluster/k8s/start.sh rename to doc/howto/cluster/k8s/start.sh diff --git a/doc_cn/cluster/k8s/start_paddle.py b/doc/howto/cluster/k8s/start_paddle.py similarity index 100% rename from doc_cn/cluster/k8s/start_paddle.py rename to doc/howto/cluster/k8s/start_paddle.py diff --git a/doc_cn/concepts/nn.rst b/doc/howto/concepts/nn_cn.rst similarity index 100% rename from doc_cn/concepts/nn.rst rename to doc/howto/concepts/nn_cn.rst diff --git a/doc_cn/concepts/program_concepts.rst b/doc/howto/concepts/program_concepts_cn.rst similarity index 100% rename from doc_cn/concepts/program_concepts.rst rename to doc/howto/concepts/program_concepts_cn.rst diff --git a/doc_cn/concepts/pserver_topology.dot b/doc/howto/concepts/src/pserver_topology.dot similarity index 100% rename from doc_cn/concepts/pserver_topology.dot rename to doc/howto/concepts/src/pserver_topology.dot diff --git a/doc_cn/concepts/trainer_config.py b/doc/howto/concepts/src/trainer_config.py similarity index 100% rename from doc_cn/concepts/trainer_config.py rename to doc/howto/concepts/src/trainer_config.py diff --git a/doc_cn/concepts/use_concepts.rst b/doc/howto/concepts/use_concepts_cn.rst similarity index 89% rename from doc_cn/concepts/use_concepts.rst rename to doc/howto/concepts/use_concepts_cn.rst index 2d27e29fac..6b87522088 100644 --- a/doc_cn/concepts/use_concepts.rst +++ b/doc/howto/concepts/use_concepts_cn.rst @@ -8,29 +8,29 @@ PaddlePaddle是一个深度学习框架,支持单机模式和多机模式。 本文首先介绍trainer进程中的一些使用概念,然后介绍pserver进程中概念。 -.. contents:: +.. 
contents:: 系统框图 ======== 下图描述了用户使用框图,PaddlePaddle的trainer进程里内嵌了Python解释器,trainer进程可以利用这个解释器执行Python脚本,Python脚本里定义了模型配置、训练算法、以及数据读取函数。其中,数据读取程序往往定义在一个单独Python脚本文件里,被称为数据提供器(DataProvider),通常是一个Python函数。模型配置、训练算法通常定义在另一单独Python文件中, 称为训练配置文件。下面将分别介绍这两部分。 -.. graphviz:: - - digraph pp_process { - rankdir=LR; - config_file [label="用户神经网络配置"]; - subgraph cluster_pp { - style=filled; - color=lightgrey; - node [style=filled, color=white, shape=box]; - label = "PaddlePaddle C++"; - py [label="Python解释器"]; - } - data_provider [label="用户数据解析"]; - config_file -> py; - py -> data_provider [dir="back"]; - } +.. graphviz:: + + digraph pp_process { + rankdir=LR; + config_file [label="用户神经网络配置"]; + subgraph cluster_pp { + style=filled; + color=lightgrey; + node [style=filled, color=white, shape=box]; + label = "PaddlePaddle C++"; + py [label="Python解释器"]; + } + data_provider [label="用户数据解析"]; + config_file -> py; + py -> data_provider [dir="back"]; + } 数据提供器 ========== @@ -47,7 +47,7 @@ DataProvider是PaddlePaddle系统的数据提供器,将用户的原始数据 一个简单的训练配置文件为: -.. literalinclude:: trainer_config.py +.. literalinclude:: src/trainer_config.py :linenos: 文件开头 ``from paddle.trainer_config_helpers import *`` ,是因为PaddlePaddle配置文件与C++模块通信的最基础协议是protobuf,为了避免用户直接写复杂的protobuf string,我们为用户定以Python接口来配置网络,该Python代码可以生成protobuf包,这就是`trainer_config_helpers`_的作用。因此,在文件的开始,需要import这些函数。 这个包里面包含了模型配置需要的各个模块。 @@ -100,11 +100,11 @@ DataProvider是PaddlePaddle系统的数据提供器,将用户的原始数据 例如,和 ``fc_layer`` 同样功能的 ``mixed_layer`` 是: -.. code-block:: python +.. code-block:: python - data = data_layer(name='data', size=200) - with mixed_layer(size=200) as out: - out += full_matrix_projection(input=data) + data = data_layer(name='data', size=200) + with mixed_layer(size=200) as out: + out += full_matrix_projection(input=data) PaddlePaddle 可以使用 ``mixed layer`` 配置出非常复杂的网络,甚至可以直接配置一个完整的LSTM。用户可以参考 `mixed_layer`_ 的相关文档进行配置。 @@ -114,13 +114,13 @@ PaddlePaddle 可以使用 ``mixed layer`` 配置出非常复杂的网络,甚 PaddlePaddle多机采用经典的 Parameter Server 架构对多个节点的 trainer 进行同步。多机训练的经典拓扑结构如下\: -.. graphviz:: pserver_topology.dot +.. graphviz:: src/pserver_topology.dot 图中每个灰色方块是一台机器,在每个机器中,先使用命令 ``paddle pserver`` 启动一个pserver进程,并指定端口号,可能的参数是\: -.. code-block:: bash +.. code-block:: bash - paddle pserver --port=5000 --num_gradient_servers=4 --tcp_rdma='tcp' --nics='eth0' + paddle pserver --port=5000 --num_gradient_servers=4 --tcp_rdma='tcp' --nics='eth0' * ``--port=5000`` : 指定 pserver 进程端口是 5000 。 * ``--gradient_servers=4`` : 有四个训练进程(PaddlePaddle 将 trainer 也称作 GradientServer ,因为其为负责提供Gradient) 。 @@ -128,9 +128,9 @@ PaddlePaddle多机采用经典的 Parameter Server 架构对多个节点的 trai 启动之后 pserver 进程之后,需要启动 trainer 训练进程,在各个机器上运行如下命令\: -.. code-block:: bash +.. code-block:: bash - paddle train --port=5000 --pservers=192.168.100.101,192.168.100.102,192.168.100.103,192.168.100.104 --config=... + paddle train --port=5000 --pservers=192.168.100.101,192.168.100.102,192.168.100.103,192.168.100.104 --config=... 对于简单的多机协同训练使用上述方式即可。另外,pserver/train 通常在高级情况下,还需要设置下面两个参数\: diff --git a/doc/howto/deep_model/index_cn.rst b/doc/howto/deep_model/index_cn.rst new file mode 100644 index 0000000000..31f8c39af6 --- /dev/null +++ b/doc/howto/deep_model/index_cn.rst @@ -0,0 +1,10 @@ +How to Configure Deep Models +============================ + +.. 
toctree:: + :maxdepth: 1 + + rnn/recurrent_group_cn.md + rnn/hierarchical_layer_cn.rst + rnn/hrnn_rnn_api_compare_cn.rst + rnn/hrnn_demo_cn.rst diff --git a/doc_cn/algorithm/rnn/hierarchical-layer.rst b/doc/howto/deep_model/rnn/hierarchical_layer_cn.rst similarity index 100% rename from doc_cn/algorithm/rnn/hierarchical-layer.rst rename to doc/howto/deep_model/rnn/hierarchical_layer_cn.rst diff --git a/doc_cn/algorithm/rnn/hrnn_demo.rst b/doc/howto/deep_model/rnn/hrnn_demo_cn.rst similarity index 100% rename from doc_cn/algorithm/rnn/hrnn_demo.rst rename to doc/howto/deep_model/rnn/hrnn_demo_cn.rst diff --git a/doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst b/doc/howto/deep_model/rnn/hrnn_rnn_api_compare_cn.rst similarity index 91% rename from doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst rename to doc/howto/deep_model/rnn/hrnn_rnn_api_compare_cn.rst index 9baa0b5780..96e52b910a 100644 --- a/doc_cn/algorithm/rnn/hrnn_rnn_api_compare.rst +++ b/doc/howto/deep_model/rnn/hrnn_rnn_api_compare_cn.rst @@ -24,18 +24,18 @@ - 本例中的原始数据一共有10个样本。每个样本由两部分组成,一个label(此处都为2)和一个已经分词后的句子。这个数据也被单层RNN网络直接使用。 -.. literalinclude:: ../../../paddle/gserver/tests/Sequence/tour_train_wdseg +.. literalinclude:: ../../../../paddle/gserver/tests/Sequence/tour_train_wdseg :language: text - 双层序列数据一共有4个样本。 每个样本间用空行分开,整体数据和原始数据完全一样。但于双层序列的LSTM来说,第一个样本同时encode两条数据成两个向量。这四条数据同时处理的句子数量为\ :code:`[2, 3, 2, 3]`\ 。 -.. literalinclude:: ../../../paddle/gserver/tests/Sequence/tour_train_wdseg.nest +.. literalinclude:: ../../../../paddle/gserver/tests/Sequence/tour_train_wdseg.nest :language: text 其次,对于两种不同的输入数据类型,不同DataProvider对比如下(`sequenceGen.py `_)\: -.. literalinclude:: ../../../paddle/gserver/tests/sequenceGen.py +.. literalinclude:: ../../../../paddle/gserver/tests/sequenceGen.py :language: python :lines: 21-39 :linenos: @@ -43,10 +43,11 @@ - 这是普通的单层时间序列的DataProvider代码,其说明如下: * DataProvider共返回两个数据,分别是words和label。即上述代码中的第19行。 - - words是原始数据中的每一句话,所对应的词表index数组。它是integer_value_sequence类型的,即整数数组。words即为这个数据中的单层时间序列。 - - label是原始数据中对于每一句话的分类标签,它是integer_value类型的。 -.. literalinclude:: ../../../paddle/gserver/tests/sequenceGen.py + - words是原始数据中的每一句话,所对应的词表index数组。它是integer_value_sequence类型的,即整数数组。words即为这个数据中的单层时间序列。 + - label是原始数据中对于每一句话的分类标签,它是integer_value类型的。 + +.. literalinclude:: ../../../../paddle/gserver/tests/sequenceGen.py :language: python :lines: 42-71 :linenos: @@ -63,7 +64,7 @@ 首先,我们看一下单层RNN的配置。代码中9-15行(高亮部分)即为单层RNN序列的使用代码。这里使用了PaddlePaddle预定义好的RNN处理函数。在这个函数中,RNN对于每一个时间步通过了一个LSTM网络。 -.. literalinclude:: ../../../paddle/gserver/tests/sequence_layer_group.conf +.. literalinclude:: ../../../../paddle/gserver/tests/sequence_layer_group.conf :language: python :lines: 38-63 :linenos: @@ -84,7 +85,7 @@ * 至此,\ :code:`lstm_last`\ 便和单层RNN配置中的\ :code:`lstm_last`\ 具有相同的结果了。 -.. literalinclude:: ../../../paddle/gserver/tests/sequence_nest_layer_group.conf +.. literalinclude:: ../../../../paddle/gserver/tests/sequence_nest_layer_group.conf :language: python :lines: 38-64 :linenos: @@ -106,7 +107,7 @@ - 单层RNN:过了一个很简单的recurrent_group。每一个时间步,当前的输入y和上一个时间步的输出rnn_state做了一个全链接。 -.. literalinclude:: ../../../paddle/gserver/tests/sequence_rnn.conf +.. 
literalinclude:: ../../../../paddle/gserver/tests/sequence_rnn.conf :language: python :lines: 36-48 @@ -115,7 +116,7 @@ - 内层inner_step的recurrent_group和单层序列的几乎一样。除了boot_layer=outer_mem,表示将外层的outer_mem作为内层memory的初始状态。外层outer_step中,outer_mem是一个子句的最后一个向量,即整个双层group是将前一个子句的最后一个向量,作为下一个子句memory的初始状态。 - 从输入数据上看,单双层序列的句子是一样的,只是双层序列将其又做了子序列划分。因此双层序列的配置中,必须将前一个子句的最后一个元素,作为boot_layer传给下一个子句的memory,才能保证和单层序列的配置中“每个时间步都用了上一个时间步的输出结果”一致。 -.. literalinclude:: ../../../paddle/gserver/tests/sequence_nest_rnn.conf +.. literalinclude:: ../../../../paddle/gserver/tests/sequence_nest_rnn.conf :language: python :lines: 39-66 @@ -151,14 +152,14 @@ * 单层RNN\: -.. literalinclude:: ../../../paddle/gserver/tests/sequence_rnn_multi_unequalength_inputs.py +.. literalinclude:: ../../../../paddle/gserver/tests/sequence_rnn_multi_unequalength_inputs.py :language: python :lines: 42-59 :linenos: * 双层RNN\ \: -.. literalinclude:: ../../../paddle/gserver/tests/sequence_nest_rnn_multi_unequalength_inputs.py +.. literalinclude:: ../../../../paddle/gserver/tests/sequence_nest_rnn_multi_unequalength_inputs.py :language: python :lines: 41-80 :linenos: @@ -181,11 +182,11 @@ Memory Memory是PaddlePaddle实现RNN时候使用的一个概念。RNN即时间递归神经网络,通常要求时间步之间具有一些依赖性,即当前时间步下的神经网络依赖前一个时间步神经网络中某一个神经元输出。如下图所示。 -.. graphviz:: glossary_rnn.dot +.. graphviz:: src/glossary_rnn.dot 上图中虚线的连接,即是跨越时间步的网络连接。PaddlePaddle在实现RNN的时候,将这种跨越时间步的连接用一个特殊的神经网络单元实现。这个神经网络单元就叫Memory。Memory可以缓存上一个时刻某一个神经元的输出,然后在下一个时间步输入给另一个神经元。使用Memory的RNN实现便如下图所示。 -.. graphviz:: glossary_rnn_with_memory.dot +.. graphviz:: src/glossary_rnn_with_memory.dot 使用这种方式,PaddlePaddle可以比较简单的判断哪些输出是应该跨越时间步的,哪些不是。 diff --git a/doc_cn/algorithm/rnn/rnn-tutorial.md b/doc/howto/deep_model/rnn/recurrent_group_cn.md similarity index 98% rename from doc_cn/algorithm/rnn/rnn-tutorial.md rename to doc/howto/deep_model/rnn/recurrent_group_cn.md index 9e488b0d51..984fdcc505 100644 --- a/doc_cn/algorithm/rnn/rnn-tutorial.md +++ b/doc/howto/deep_model/rnn/recurrent_group_cn.md @@ -1,96 +1,96 @@ -# Recurrent Group教程 - -## 概述 - -序列数据是自然语言处理任务面对的一种主要输入数据类型。 - -一句话是由词语构成的序列,多句话进一步构成了段落。因此,段落可以看作是一个嵌套的双层的序列,这个序列的每个元素又是一个序列。 - -双层序列是PaddlePaddle支持的一种非常灵活的数据组织方式,帮助我们更好地描述段落、多轮对话等更为复杂的语言数据。基于双层序列输入,我们可以设计搭建一个灵活的、层次化的RNN,分别从词语和句子级别编码输入数据,同时也能够引入更加复杂的记忆机制,更好地完成一些复杂的语言理解任务。 - -在PaddlePaddle中,`recurrent_group`是一种任意复杂的RNN单元,用户只需定义RNN在一个时间步内完成的计算,PaddlePaddle负责完成信息和误差在时间序列上的传播。 - -更进一步,`recurrent_group`同样可以扩展到双层序列的处理上。通过两个嵌套的`recurrent_group`分别定义子句级别和词语级别上需要完成的运算,最终实现一个层次化的复杂RNN。 - -目前,在PaddlePaddle中,能够对双向序列进行处理的有`recurrent_group`和部分Layer,具体可参考文档:支持双层序列作为输入的Layer。 - -## 相关概念 - -### 基本原理 -`recurrent_group` 是PaddlePaddle支持的一种任意复杂的RNN单元。使用者只需要关注于设计RNN在一个时间步之内完成的计算,PaddlePaddle负责完成信息和梯度在时间序列上的传播。 - -PaddlePaddle中,`recurrent_group`的一个简单调用如下: - -``` python -recurrent_group(step, input, reverse) -``` -- step:一个可调用的函数,定义一个时间步之内RNN单元完成的计算 -- input:输入,必须是一个单层序列,或者一个双层序列 -- reverse:是否以逆序处理输入序列 - -使用`recurrent_group`的核心是设计step函数的计算逻辑。step函数内部可以自由组合PaddlePaddle支持的各种layer,完成任意的运算逻辑。`recurrent_group` 的输入(即input)会成为step函数的输入,由于step 函数只关注于RNN一个时间步之内的计算,在这里`recurrent_group`替我们完成了原始输入数据的拆分。 - -### 输入 -`recurrent_group`处理的输入序列主要分为以下三种类型: - -- **数据输入**:一个双层序列进入`recurrent_group`会被拆解为一个单层序列,一个单层序列进入`recurrent_group`会被拆解为非序列,然后交给step函数,这一过程对用户是完全透明的。可以有以下两种:1)通过data_layer拿到的用户输入;2)其它layer的输出。 - -- **只读Memory输入**:`StaticInput` 定义了一个只读的Memory,由`StaticInput`指定的输入不会被`recurrent_group`拆解,`recurrent_group` 循环展开的每个时间步总是能够引用所有输入,可以是一个非序列,或者一个单层序列。 - -- **序列生成任务的输入**:`GeneratedInput`只用于在序列生成任务中指定输入数据。 - -### 输入示例 - 
-序列生成任务大多遵循encoder-decoer架构,encoder和decoder可以是能够处理序列的任意神经网络单元,而RNN是最流行的选择。 - -给定encoder输出和当前词,decoder每次预测产生下一个最可能的词语。在这种结构中,decoder接受两个输入: - -- 要生成的目标序列:是decoder的数据输入,也是decoder循环展开的依据,`recurrent_group`会对这类输入进行拆解。 - -- encoder输出,可以是一个非序列,或者一个单层序列:是一个unbounded memory,decoder循环展开的每一个时间步会引用全部结果,不应该被拆解,这种类型的输入必须通过`StaticInput`指定。关于Unbounded Memory的更多讨论请参考论文 [Neural Turning Machine](https://arxiv.org/abs/1410.5401)。 - -在序列生成任务中,decoder RNN总是引用上一时刻预测出的词的词向量,作为当前时刻输入。`GeneratedInput`自动完成这一过程。 - -### 输出 -`step`函数必须返回一个或多个Layer的输出,这个Layer的输出会作为整个`recurrent_group` 最终的输出结果。在输出的过程中,`recurrent_group` 会将每个时间步的输出拼接,这个过程对用户也是透明的。 - -### memory -memory只能在`recurrent_group`中定义和使用。memory不能独立存在,必须指向一个PaddlePaddle定义的Layer。引用memory得到这layer上一时刻输出,因此,可以将memory理解为一个时延操作。 - -可以显示地指定一个layer的输出用于初始化memory。不指定时,memory默认初始化为0。 - -## 双层RNN介绍 -`recurrent_group`帮助我们完成对输入序列的拆分,对输出的合并,以及计算逻辑在序列上的循环展开。 - -利用这种特性,两个嵌套的`recurrent_group`能够处理双层序列,实现词语和句子两个级别的双层RNN结构。 - -- 单层(word-level)RNN:每个状态(state)对应一个词(word)。 -- 双层(sequence-level)RNN:一个双层RNN由多个单层RNN组成,每个单层RNN(即双层RNN的每个状态)对应一个子句(subseq)。 - -为了描述方便,下文以NLP任务为例,将含有子句(subseq)的段落定义为一个双层序列,将含有词语的句子定义为一个单层序列,那么0层序列即为一个词语。 - -## 双层RNN的使用 - -### 训练流程的使用方法 -使用 `recurrent_group`需要遵循以下约定: - -- **单进单出**:输入和输出都是单层序列。 - - 如果有多个输入,不同输入序列含有的词语数必须严格相等。 - - 输出一个单层序列,输出序列的词语数和输入序列一致。 - - memory:在step函数中定义 memory指向一个layer,通过引用memory得到这个layer上一个时刻输出,形成recurrent 连接。memory的is_seq参数必须为false。如果没有定义memory,每个时间步之内的运算是独立的。 - - boot_layer:memory的初始状态,默认初始状为0,memory的is_seq参数必须为false。 - -- **双进双出**:输入和输出都是双层序列。 - - 如果有多个输入序列,不同输入含有的子句(subseq)数必须严格相等,但子句含有的词语数可以不相等。 - - 输出一个双层序列,子句(subseq)数、子句的单词数和指定的一个输入序列一致,默认为第一个输入。 - - memory:在step函数中定义memory,指向一个layer,通过引用memory得到这个layer上一个时刻的输出,形成recurrent连接。定义在外层`recurrent_group` step函数中的memory,能够记录上一个subseq 的状态,可以是一个单层序列(只作为read-only memory),也可以是一个词语。如果没有定义memory,那么 subseq 之间的运算是独立的。 - - boot_layer:memory 初始状态,可以是一个单层序列(只作为read-only memory)或一个向量。默认不设置,即初始状态为0。 - -- **双进单出**:目前还未支持,会报错"In hierachical RNN, all out links should be from sequences now"。 - - -### 生成流程的使用方法 -使用`beam_search`需要遵循以下约定: - -- 单层RNN:从一个word生成下一个word。 +# Recurrent Group教程 + +## 概述 + +序列数据是自然语言处理任务面对的一种主要输入数据类型。 + +一句话是由词语构成的序列,多句话进一步构成了段落。因此,段落可以看作是一个嵌套的双层的序列,这个序列的每个元素又是一个序列。 + +双层序列是PaddlePaddle支持的一种非常灵活的数据组织方式,帮助我们更好地描述段落、多轮对话等更为复杂的语言数据。基于双层序列输入,我们可以设计搭建一个灵活的、层次化的RNN,分别从词语和句子级别编码输入数据,同时也能够引入更加复杂的记忆机制,更好地完成一些复杂的语言理解任务。 + +在PaddlePaddle中,`recurrent_group`是一种任意复杂的RNN单元,用户只需定义RNN在一个时间步内完成的计算,PaddlePaddle负责完成信息和误差在时间序列上的传播。 + +更进一步,`recurrent_group`同样可以扩展到双层序列的处理上。通过两个嵌套的`recurrent_group`分别定义子句级别和词语级别上需要完成的运算,最终实现一个层次化的复杂RNN。 + +目前,在PaddlePaddle中,能够对双向序列进行处理的有`recurrent_group`和部分Layer,具体可参考文档:支持双层序列作为输入的Layer。 + +## 相关概念 + +### 基本原理 +`recurrent_group` 是PaddlePaddle支持的一种任意复杂的RNN单元。使用者只需要关注于设计RNN在一个时间步之内完成的计算,PaddlePaddle负责完成信息和梯度在时间序列上的传播。 + +PaddlePaddle中,`recurrent_group`的一个简单调用如下: + +``` python +recurrent_group(step, input, reverse) +``` +- step:一个可调用的函数,定义一个时间步之内RNN单元完成的计算 +- input:输入,必须是一个单层序列,或者一个双层序列 +- reverse:是否以逆序处理输入序列 + +使用`recurrent_group`的核心是设计step函数的计算逻辑。step函数内部可以自由组合PaddlePaddle支持的各种layer,完成任意的运算逻辑。`recurrent_group` 的输入(即input)会成为step函数的输入,由于step 函数只关注于RNN一个时间步之内的计算,在这里`recurrent_group`替我们完成了原始输入数据的拆分。 + +### 输入 +`recurrent_group`处理的输入序列主要分为以下三种类型: + +- **数据输入**:一个双层序列进入`recurrent_group`会被拆解为一个单层序列,一个单层序列进入`recurrent_group`会被拆解为非序列,然后交给step函数,这一过程对用户是完全透明的。可以有以下两种:1)通过data_layer拿到的用户输入;2)其它layer的输出。 + +- **只读Memory输入**:`StaticInput` 定义了一个只读的Memory,由`StaticInput`指定的输入不会被`recurrent_group`拆解,`recurrent_group` 循环展开的每个时间步总是能够引用所有输入,可以是一个非序列,或者一个单层序列。 + +- 
**序列生成任务的输入**:`GeneratedInput`只用于在序列生成任务中指定输入数据。 + +### 输入示例 + +序列生成任务大多遵循encoder-decoer架构,encoder和decoder可以是能够处理序列的任意神经网络单元,而RNN是最流行的选择。 + +给定encoder输出和当前词,decoder每次预测产生下一个最可能的词语。在这种结构中,decoder接受两个输入: + +- 要生成的目标序列:是decoder的数据输入,也是decoder循环展开的依据,`recurrent_group`会对这类输入进行拆解。 + +- encoder输出,可以是一个非序列,或者一个单层序列:是一个unbounded memory,decoder循环展开的每一个时间步会引用全部结果,不应该被拆解,这种类型的输入必须通过`StaticInput`指定。关于Unbounded Memory的更多讨论请参考论文 [Neural Turning Machine](https://arxiv.org/abs/1410.5401)。 + +在序列生成任务中,decoder RNN总是引用上一时刻预测出的词的词向量,作为当前时刻输入。`GeneratedInput`自动完成这一过程。 + +### 输出 +`step`函数必须返回一个或多个Layer的输出,这个Layer的输出会作为整个`recurrent_group` 最终的输出结果。在输出的过程中,`recurrent_group` 会将每个时间步的输出拼接,这个过程对用户也是透明的。 + +### memory +memory只能在`recurrent_group`中定义和使用。memory不能独立存在,必须指向一个PaddlePaddle定义的Layer。引用memory得到这layer上一时刻输出,因此,可以将memory理解为一个时延操作。 + +可以显示地指定一个layer的输出用于初始化memory。不指定时,memory默认初始化为0。 + +## 双层RNN介绍 +`recurrent_group`帮助我们完成对输入序列的拆分,对输出的合并,以及计算逻辑在序列上的循环展开。 + +利用这种特性,两个嵌套的`recurrent_group`能够处理双层序列,实现词语和句子两个级别的双层RNN结构。 + +- 单层(word-level)RNN:每个状态(state)对应一个词(word)。 +- 双层(sequence-level)RNN:一个双层RNN由多个单层RNN组成,每个单层RNN(即双层RNN的每个状态)对应一个子句(subseq)。 + +为了描述方便,下文以NLP任务为例,将含有子句(subseq)的段落定义为一个双层序列,将含有词语的句子定义为一个单层序列,那么0层序列即为一个词语。 + +## 双层RNN的使用 + +### 训练流程的使用方法 +使用 `recurrent_group`需要遵循以下约定: + +- **单进单出**:输入和输出都是单层序列。 + - 如果有多个输入,不同输入序列含有的词语数必须严格相等。 + - 输出一个单层序列,输出序列的词语数和输入序列一致。 + - memory:在step函数中定义 memory指向一个layer,通过引用memory得到这个layer上一个时刻输出,形成recurrent 连接。memory的is_seq参数必须为false。如果没有定义memory,每个时间步之内的运算是独立的。 + - boot_layer:memory的初始状态,默认初始状为0,memory的is_seq参数必须为false。 + +- **双进双出**:输入和输出都是双层序列。 + - 如果有多个输入序列,不同输入含有的子句(subseq)数必须严格相等,但子句含有的词语数可以不相等。 + - 输出一个双层序列,子句(subseq)数、子句的单词数和指定的一个输入序列一致,默认为第一个输入。 + - memory:在step函数中定义memory,指向一个layer,通过引用memory得到这个layer上一个时刻的输出,形成recurrent连接。定义在外层`recurrent_group` step函数中的memory,能够记录上一个subseq 的状态,可以是一个单层序列(只作为read-only memory),也可以是一个词语。如果没有定义memory,那么 subseq 之间的运算是独立的。 + - boot_layer:memory 初始状态,可以是一个单层序列(只作为read-only memory)或一个向量。默认不设置,即初始状态为0。 + +- **双进单出**:目前还未支持,会报错"In hierachical RNN, all out links should be from sequences now"。 + + +### 生成流程的使用方法 +使用`beam_search`需要遵循以下约定: + +- 单层RNN:从一个word生成下一个word。 - 双层RNN:即把单层RNN生成后的subseq给拼接成一个新的双层seq。从语义上看,也不存在一个subseq直接生成下一个subseq的情况。 diff --git a/doc/howto/deep_model/rnn/rnn_en.rst b/doc/howto/deep_model/rnn/rnn_en.rst index da29b8efad..51c52f3b52 100644 --- a/doc/howto/deep_model/rnn/rnn_en.rst +++ b/doc/howto/deep_model/rnn/rnn_en.rst @@ -42,8 +42,8 @@ Simple Gated Recurrent Neural Network Recurrent neural network process a sequence at each time step sequentially. An example of the architecture of LSTM is listed below. -.. image:: ../../../tutorials/sentiment_analysis/bi_lstm.jpg - :align: center +.. image:: ../../../tutorials/sentiment_analysis/src/bi_lstm.jpg + :align: center Generally speaking, a recurrent network perform the following operations from :math:`t=1` to :math:`t=T`, or reversely from :math:`t=T` to :math:`t=1`. @@ -102,7 +102,7 @@ Sequence to Sequence Model with Attention We will use the sequence to sequence model with attention as an example to demonstrate how you can configure complex recurrent neural network models. An illustration of the sequence to sequence model with attention is shown in the following figure. .. image:: ../../../tutorials/text_generation/encoder-decoder-attention-model.png - :align: center + :align: center In this model, the source sequence :math:`S = \{s_1, \dots, s_T\}` is encoded with a bidirectional gated recurrent neural networks. 
+
+## 双层RNN介绍
+`recurrent_group`帮助我们完成对输入序列的拆分,对输出的合并,以及计算逻辑在序列上的循环展开。
+
+利用这种特性,两个嵌套的`recurrent_group`能够处理双层序列,实现词语和句子两个级别的双层RNN结构。
+
+- 单层(word-level)RNN:每个状态(state)对应一个词(word)。
+- 双层(sequence-level)RNN:一个双层RNN由多个单层RNN组成,每个单层RNN(即双层RNN的每个状态)对应一个子句(subseq)。
+
+为了描述方便,下文以NLP任务为例,将含有子句(subseq)的段落定义为一个双层序列,将含有词语的句子定义为一个单层序列,那么0层序列即为一个词语。
+
+## 双层RNN的使用
+
+### 训练流程的使用方法
+使用 `recurrent_group`需要遵循以下约定:
+
+- **单进单出**:输入和输出都是单层序列。
+  - 如果有多个输入,不同输入序列含有的词语数必须严格相等。
+  - 输出一个单层序列,输出序列的词语数和输入序列一致。
+  - memory:在step函数中定义 memory指向一个layer,通过引用memory得到这个layer上一个时刻输出,形成recurrent 连接。memory的is_seq参数必须为false。如果没有定义memory,每个时间步之内的运算是独立的。
+  - boot_layer:memory的初始状态,默认初始状态为0,memory的is_seq参数必须为false。
+
+- **双进双出**:输入和输出都是双层序列。
+  - 如果有多个输入序列,不同输入含有的子句(subseq)数必须严格相等,但子句含有的词语数可以不相等。
+  - 输出一个双层序列,子句(subseq)数、子句的单词数和指定的一个输入序列一致,默认为第一个输入。
+  - memory:在step函数中定义memory,指向一个layer,通过引用memory得到这个layer上一个时刻的输出,形成recurrent连接。定义在外层`recurrent_group` step函数中的memory,能够记录上一个subseq 的状态,可以是一个单层序列(只作为read-only memory),也可以是一个词语。如果没有定义memory,那么 subseq 之间的运算是独立的。
+  - boot_layer:memory 初始状态,可以是一个单层序列(只作为read-only memory)或一个向量。默认不设置,即初始状态为0。
+
+- **双进单出**:目前还未支持,会报错"In hierachical RNN, all out links should be from sequences now"。
+
+
+### 生成流程的使用方法
+使用`beam_search`需要遵循以下约定:
+
+- 单层RNN:从一个word生成下一个word。
- 双层RNN:即把单层RNN生成后的subseq给拼接成一个新的双层seq。从语义上看,也不存在一个subseq直接生成下一个subseq的情况。
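The double-in/double-out convention above amounts to nesting one `recurrent_group` inside another: the outer group iterates over sub-sequences, and the inner group iterates over the words of each sub-sequence. The outline below is a hedged sketch rather than code taken from this patch; `SubsequenceInput`, the layer names, and the sizes are assumptions about the config-helper API.

```python
from paddle.trainer_config_helpers import *

dict_dim, word_dim, hidden_dim = 10000, 128, 128

# "word" is assumed to be fed as a double-level sequence
# (a paragraph made of sub-sequences of word ids).
paragraph = data_layer(name="word", size=dict_dim)
emb = embedding_layer(input=paragraph, size=word_dim)

def inner_step(word_emb):
    # Word-level RNN: runs over the words of one sub-sequence.
    prev = memory(name="inner_out", size=hidden_dim)
    return fc_layer(input=[word_emb, prev], size=hidden_dim,
                    act=TanhActivation(), name="inner_out")

def outer_step(subseq_emb):
    # Sentence-level step: subseq_emb is one sub-sequence per outer time
    # step, so the inner recurrent_group sees an ordinary single-level
    # sequence and returns a single-level sequence of hidden states.
    return recurrent_group(step=inner_step, input=subseq_emb)

# SubsequenceInput marks the input as a double-level sequence, so the outer
# group splits it into sub-sequences rather than into single time steps.
out = recurrent_group(step=outer_step, input=SubsequenceInput(emb))
```

Under this convention the outer group's output is again a double-level sequence whose sub-sequence and word counts follow the input, as stated in the rules above.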
diff --git a/doc/howto/deep_model/rnn/rnn_en.rst b/doc/howto/deep_model/rnn/rnn_en.rst
index da29b8efad..51c52f3b52 100644
--- a/doc/howto/deep_model/rnn/rnn_en.rst
+++ b/doc/howto/deep_model/rnn/rnn_en.rst
@@ -42,8 +42,8 @@ Simple Gated Recurrent Neural Network
 Recurrent neural networks process a sequence at each time step sequentially. An example of the architecture of LSTM is listed below.
 
-.. image:: ../../../tutorials/sentiment_analysis/bi_lstm.jpg
-   :align: center
+.. image:: ../../../tutorials/sentiment_analysis/src/bi_lstm.jpg
+   :align: center
 
 Generally speaking, a recurrent network performs the following operations from :math:`t=1` to :math:`t=T`, or reversely from :math:`t=T` to :math:`t=1`.
 
@@ -102,7 +102,7 @@ Sequence to Sequence Model with Attention
 We will use the sequence to sequence model with attention as an example to demonstrate how you can configure complex recurrent neural network models. An illustration of the sequence to sequence model with attention is shown in the following figure.
 
 .. image:: ../../../tutorials/text_generation/encoder-decoder-attention-model.png
-   :align: center
+   :align: center
 
 In this model, the source sequence :math:`S = \{s_1, \dots, s_T\}` is encoded with a bidirectional gated recurrent neural network. The hidden states of the bidirectional gated recurrent neural network :math:`H_S = \{H_1, \dots, H_T\}` are called the *encoder vector*. The decoder is a gated recurrent neural network. When decoding each token :math:`y_t`, the gated recurrent neural network generates a set of weights :math:`W_S^t = \{W_1^t, \dots, W_T^t\}`, which are used to compute a weighted sum of the encoder vector. The weighted sum of the encoder vector is utilized to condition the generation of the token :math:`y_t`.
diff --git a/doc_cn/algorithm/rnn/glossary_rnn.dot b/doc/howto/deep_model/rnn/src/glossary_rnn.dot
similarity index 100%
rename from doc_cn/algorithm/rnn/glossary_rnn.dot
rename to doc/howto/deep_model/rnn/src/glossary_rnn.dot
diff --git a/doc_cn/algorithm/rnn/glossary_rnn_with_memory.dot b/doc/howto/deep_model/rnn/src/glossary_rnn_with_memory.dot
similarity index 100%
rename from doc_cn/algorithm/rnn/glossary_rnn_with_memory.dot
rename to doc/howto/deep_model/rnn/src/glossary_rnn_with_memory.dot
diff --git a/doc_cn/algorithm/rnn/simple_full_hierarchical_recurrent.dot b/doc/howto/deep_model/rnn/src/simple_full_hierarchical_recurrent.dot
similarity index 100%
rename from doc_cn/algorithm/rnn/simple_full_hierarchical_recurrent.dot
rename to doc/howto/deep_model/rnn/src/simple_full_hierarchical_recurrent.dot
diff --git a/doc_cn/algorithm/rnn/simple_full_recurrent.dot b/doc/howto/deep_model/rnn/src/simple_full_recurrent.dot
similarity index 100%
rename from doc_cn/algorithm/rnn/simple_full_recurrent.dot
rename to doc/howto/deep_model/rnn/src/simple_full_recurrent.dot
diff --git a/doc/howto/index_cn.rst b/doc/howto/index_cn.rst
new file mode 100644
index 0000000000..4706d9339a
--- /dev/null
+++ b/doc/howto/index_cn.rst
@@ -0,0 +1,27 @@
+HOW TO
+=======
+
+Usage
+-------
+
+.. toctree::
+  :maxdepth: 1
+
+  concepts/use_concepts_cn.rst
+  cluster/k8s/paddle_on_k8s_cn.md
+  cluster/k8s/distributed_training_on_k8s_cn.md
+
+Development
+------------
+
+.. toctree::
+  :maxdepth: 1
+
+  write_docs/index_cn.rst
+  deep_model/index_cn.rst
+
+Optimization
+-------------
+
+.. toctree::
+  :maxdepth: 1
diff --git a/doc/howto/optimization/gpu_profiling_en.rst b/doc/howto/optimization/gpu_profiling_en.rst
index 667bf1364e..40ba698f4e 100644
--- a/doc/howto/optimization/gpu_profiling_en.rst
+++ b/doc/howto/optimization/gpu_profiling_en.rst
@@ -51,7 +51,7 @@ In this tutorial, we will focus on nvprof and nvvp.
 :code:`test_GpuProfiler` from :code:`paddle/math/tests` directory will be used to evaluate
 above profilers.
 
-.. literalinclude:: ../../paddle/math/tests/test_GpuProfiler.cpp
+.. literalinclude:: ../../../paddle/math/tests/test_GpuProfiler.cpp
    :language: c++
    :lines: 111-124
    :linenos:
@@ -77,7 +77,7 @@ As a simple example, consider the following:
 1. Add :code:`REGISTER_TIMER_INFO` and :code:`printAllStatus` functions (see the emphasize-lines).
 
-	.. literalinclude:: ../../paddle/math/tests/test_GpuProfiler.cpp
+	.. literalinclude:: ../../../paddle/math/tests/test_GpuProfiler.cpp
 		:language: c++
 		:lines: 111-124
 		:emphasize-lines: 8-10,13
@@ -124,7 +124,7 @@ To use this command line profiler **nvprof**, you can simply issue the following
 1. Add :code:`REGISTER_GPU_PROFILER` function (see the emphasize-lines).
 
-	.. literalinclude:: ../../paddle/math/tests/test_GpuProfiler.cpp
+	.. literalinclude:: ../../../paddle/math/tests/test_GpuProfiler.cpp
 		:language: c++
 		:lines: 111-124
 		:emphasize-lines: 6-7
diff --git a/doc_cn/howto/how_to_write_docs/index.rst b/doc/howto/write_docs/index_cn.rst
similarity index 100%
rename from doc_cn/howto/how_to_write_docs/index.rst
rename to doc/howto/write_docs/index_cn.rst
diff --git a/doc/index_cn.rst b/doc/index_cn.rst
new file mode 100644
index 0000000000..460fedb565
--- /dev/null
+++ b/doc/index_cn.rst
@@ -0,0 +1,11 @@
+PaddlePaddle 文档
+======================
+
+.. toctree::
+  :maxdepth: 1
+
+  getstarted/index_cn.rst
+  tutorials/index_cn.md
+  howto/index_cn.rst
+  api/index_cn.rst
+  faq/index_cn.rst
diff --git a/doc/index.rst b/doc/index_en.rst
similarity index 88%
rename from doc/index.rst
rename to doc/index_en.rst
index c107239438..1d9cca7de7 100644
--- a/doc/index.rst
+++ b/doc/index_en.rst
@@ -8,4 +8,5 @@ PaddlePaddle Documentation
    tutorials/index_en.md
    howto/index_en.rst
    api/index_en.rst
-   about/index_en.rst
+   about/index_en.rst
+   
\ No newline at end of file
diff --git a/doc/tutorials/image_classification/src/cifar.png b/doc/tutorials/image_classification/src/cifar.png
new file mode 100644
index 0000000000000000000000000000000000000000..f54a0c58837cb3385b32dc57d02cec92666ef0f1
GIT binary patch
literal 466572
[466572 bytes of base85-encoded PNG image data omitted]
z{zFM~=PAM5PfRPA#YuuIG{e8Az4R3)_f z)+I=9jj&ht!JiuB!=;Uv->O*g>#nOc7lH& z@ya-XJOxy5m-C)VQr4vkeOYgkA<&#Y$6RT~FNvd-SCjz{7CUmO!|zCa#UA_M`XJ^s zJJMz!dhKoWl)B!&zHicMh?4#JsZo1${})Y+rQ2_Oh23%0&hww!^w7`jZ+`v|C9IWb zYC#-#TT8AqnW9m}TQwc*!+{9ikssLay!ZFe%^9|JuYR+A;N$Nz2a$Bd57{4m_}%v9 zA6-DK*1qfy?4xgaK`86zhNco@F*ZEqmBT^QO&M^?M-&$!93V@o;7a~WeZW%oX@AE9 zz=5f>$%K9LODJ)F`U{dF@gpO*(SGvq-R=NM-ysgnn>rlgpFCoJ_s?Io-YhIR;Q3M%rbA1L|i&|;jxxayD$f2tJy*S~kaT|reAj1!Hlyxz9`E0Ifbfk~2Lfo^uzy_$m8dd~Jgry1=k%vct zrhB!qEB%L4Qw%|}oO@d)qqXRmq{?uJ7AR4S>KsxXIjTM*iks6z{4)_<`3#k6i|r(^ zqi=W?%_+_+#+<=mXJVv+JU}LXKp`aGGMf7q!k?2$GEGDC=1_zNE^h}V?Bn<0G&n|V z`jF4_C_}F-MzB#LhC&xX7!t#fXF<9Osdk)n0oEnXTh&(F>j?8{1ScoPB44p;joozP z&Gw=fqWg5mOYMf6UuZA9?WK16i(g?oNViqNx2U9aS^?iM=@+@FE751FW88)z(I0-6 zY9vxH+DIjcItUkUa1=c!(9BVBX(-25dR<&SZQSZfr(S~nY`8Mw8`iI7K2yP|4L~H+ zMTf3K93+XpO6$q1@)_G&w89$D+ESBiiE+Bb2S0^Z#6d>8nU@_@=gCo;5n96Bi;(&l zWUWT}DH(^eF@#chFXPcc>Z2JB(@C7jW;UUAI6rMT&{3{A1IIlV1Z@rkbQH&=g=;x;a#&E!=unEP`DjFJA*xEIX9^7WpLTH+b zxy;HMTefM7t=)5_RdbKk_&2|R!VJ*?mOu`aZIYN}Ho+CS2#YfRhxo1tV~(tz^CiU` zi{UuMkeg62FpWJbq%w8cnhjR6akGs*ytg7~P&1+Om}mws}*dN9j^+ zT>;9lZqs)A?T>xh-f;D5VB$P$T(#ToeEEyX{fGW+*WSqI96V)pe9kJ+ifeAV1%bO; z5YWoDE3UoWHrJvT0=!>@uvZOwC-p#kt2u`|-}r9($G`fht*cHdAu+L@ee8hMldHUf z#dj;bmz5B0g+R5O?xbP|$ecy^;@7;z)>J0jZ~%k$_l&tn-c7f?#ID@B&KnD+6(hL! zl4~fBrNXG9mJ|@E05JUOH4qhdTxEaxm2cXe8v?wjuG;;*EjFA%iR`*t?L!}U2R+SD zoCR833BuCQxE`&i_u5~5>iv|*h9e4OJIGCKKzyrV*KPLJmu>fw($2QCq~sdu5`QJt zBUJ8Dx~}70@B4MDNQcx&lhaBd|7uE@B_?+Doxf)P{N=CNH7hP4f|p$N{||p`IzeiR z3ab({nub7%0zq6SHraNZGOBw(Zo0UR6O_n`pAo}hk!BEbPM)9F$3{`T-FD6GR=TJM z)AxSxW9z`M7DG%H;wa*$QgY&fh|?rV0_h?afK;i?#~h7S<@C=0S!b1~9RrR62!nWB z5*nNn>coZ-BhrjXqeKztvT}>BskB%_$_B z^JE;>)nJgS84sKu?|}vPY6|Xv_(;W>QS_k(Fjiy8MNEPef--PEWx0HI$%K{XM<@~* zvGp}`cH_1*-1{5L9#a&5v;^I!Qe#Rv7NkA|r zqBx&5f-~}g-}}9V)`t21&!798H|7=jehQ>PV>JoFCj?UVy^?Dc>AL~)cLivO&}3DZ zi3_N-O+P}RYG$tSpOM$i2#gHSR}Se6wL=jBS_q8ROOB8)RRns_hV#K#NW02=)3NTg z9waZZj6kpoPM93fTB;eBEnl(Vyyy3xlcQ5#Q{mZ(E(ql65)kLJRMa8BPqHZr7W*i0 z7{umD5K&rlTCd}vi$Wg9NdHPd?izTXx-Y%hhzRy<6emfxQH3t5EUtuaBqJeN79m{X zstIXQdaMI(S3gd-)aaLLqoQ@x3SuSVxYWm0%_lvme9(3Yhbq`oaIbS|MV=x~AScG3 z5S^IpgvLw(7tVwll7)=RIsF7$=WyO8Ry0^XnTc%XN;>J;0>-fjLqzE1_9#u+EcT-W2;MAJuIU8D zv!Gzk-BUwt6;9xQ5Ad0t1VGKqh0_F2J=kjrH*Z<9-qymaEoQ$iMWW=nj`M9)QZE>h zkQL}i1I?xRttd=^3dW@NTWo?P@TT_?c48Yka~ED4-othl;3eew@}qkgTZ2&_I0(}+duMmwtDjw3(ou3-}(;PupA)OS?}#v3gJMlTDbzF zPQ?dk%m^D=E_14Wc^&B(99WFNz;F*DZ#0yoBX^}*fZ&`Ue~h9n+DB+MMI|7pDmKnH zgJ2>Z4o4;epJ@m!P7+ndB#6kpQk7W@M<9D2qLi)OcAVJm2=PI7QHHH%gRaE@tw0g^ zBj2RNf8G)6dDhN7_F2Rl`|J$TVkJ|ZwyAOihe*PLP|mS@6aZUmZ`|%a8n;KfPq#g=)V8DsnJUV_fQxS=6=e~Od-|x}8YHc#wiKFEJk*M!yc%e~ zjLIksEGi7HxI716rGGrbj+~mY@BOsR{^vIiyPEU-!!H{+$2 zC{nDhtS04L*9ZrLbOA9m%^Y3JbcQpi-ioQ1918`afL2T9&z%_EOPobmp_!Ey&rMcPuRO2Cx#pg=R8H~uEE=1#CqDS7J_brq(5+e zy0OngICOc4oK7&`qYOP52kAjqf&z^qy`xl#`aQCFYeO#r{V9iYl7~|+Zp$dukfRcP z<@+RV7}Yq59JA=S3Is`rnj>MLB}yTcmmwEYfQml%IG1!w0rOGi_^y`)S}C<{+1`H6 z?;ILKax$jX?n1(t5O>?poOHc51v_$tB!?ocKXJUgl2B;1$n8m_Nhq#(P?DEeO_0BB z^I9~|m=kQSJ9b}bS6*}Ng0r6AMnzHqd$w=#vj3Bi)g?xnk4~J>O?_Sk^a3I6M`$#x zz)$HPPfE&}<5P7G;%2Ob;_yK@plXfu_!CcfDZZSc8P;|=4$WpNZZ!{Lq;9(*hZi6{ zp+bdhT{Cm*(I@v&>VM3If0t^bBI%w75I}alEpN1|>NphPu^VR~p37OON9sblO%J1q zH;JR0EoX_~A&zLU8e!Ln*o|FNI#h_NTKdSzkYcuGJq?v-Sqk@y& zswC(NN|pgND2sqbvJG+cxMr}8vNNJF%wg-P!L@b_M=$fD8-84$oG8|L4)<*}V}JXt zx7+r&zQyv`vsU2$JlAm%_E!qR)A$QIO#09?wMzMSa+vgaHB^=~g50d&Cj4hPO#0s3 zVk%R}|E}hHe^G}?@5_gPTLh8&-_K#vr)Jx-6{~E`+BN^F4wJrF5uBjmDSBn8!s9r^ z>-_+!KvutxK4M4dgLdvT5rI^KRawPEQ?%5AuF%ha{&V}uPwun(e}>@Y{rB;>*Y5l2 zPwX)^gLWJll^^fhPZi9AKehYrMGW(SpV@;CJY)|(_^>_t$Rl==$v>e6jb1?_?V6bN 
zbtAEV_*n$h5Mn!V)X^_Bmt2Y$DJRU^x{w0ULYLyN#QRt^UaFr<#FvGmN1#QjXx!R* zBi020IxZ09G<&KY&c(2z<#AGURJ;(O&&G+$ zhpBu!y$_McQ;ZYvumse!S+QVMp`^_%XuXIGuZ42)5|pE)2CFJd&J=l@6r3VOZ3rq; zR`cOfYPQQjfO13@h5w-s&Vq-N3E6xO35q$;kF-X#f3~c&#M)XL0#O=Aqhw}vjm=P^ zK0#@29OPg&ic=!hZ&icwz2b;Sr{o+)=_w-P6F5^tsvKi;?7{FxNb}6db6j-fl~M-3 z6-RyzBu@~s$XGhO@RsxW{ohe)53d`=W1_#A~386yM&^otX# zq#W?L3Js!)5_pn`j}{j|)XtJdjlmbpv@JC6-BO)n8_Tk71K+!n*Oma|XK`!OF^IDS zYwGTvHVB8SAHh%cT!Z-Oav6&brOCS3n0i4whcKuk1Ox4Dl&>E;z~=X?wWxIZ%yH)O z{r1>n4^jn%HVpS*7=B700;Cg+iKHO%iQEfFKd5o5()_Aol&+T0Ri)|r#Al&3Lp#xK z8>t#qOf{crwqwU30wcAdX4w5yi#b}td5X0pUW?Sd7ayLcaVJciP&J}fDqAj&oPlgt zz?W{BS{ca>r%OwUwG&}J4Y9Gbkalk2o_1;@<@~x-g~XSf!9nU`9u0Ay##F(l6cRzu zl-7Q-Oj<}$j5!}+oHVCZT%go=1Cf3$T%{r$A-M|k#9YpwzRf(Y%z; zb)7vpJ|}4rbQF1#0}#EReEe~H^wEbI^GEFIr=DOv9mPTIA#s3g2CXvECPt_-Lx+lM zP(id%Jvm}Rx+lb;0d0#2*#_}ZS-gg{n^w-6Tb9ZTUbRgzlD*4hCH&3wM{IaQKr z(m$!YTd~KU+XApQ0PUDMT*Sn@$Aq3Q$8Kc1uVB?pd65DI!t$}At6Lpi#7WjJK%IAr6j#W=cUa|COe zx6`6pfM_aqiE{3}JlD-1A_T-E1ZT}$_cPekLk#v=ZsZs?u#o$=96MS=|H%x-LTdU7 zxG!~hdhU|1xctXosu4JgxLJRHk2g!r0&(aiIvU3q$v`L?EJX_>kH|#amw+^#bEio; zd%>+DO$vV2L&jwlD3W0DpZw%rDwIwVQFXfk6jZGUS?Ry!5WshC-%hHd*dsBeH=-E5 zaiCU3U5ZKtJqwi(kKx8!Ub^6(Oi&Kodh~G?G{iuaxH?>~?MNCZGDhU%i9poHN+O#| z-Hm|?je!_AxJz`MmQzexj*WhVe0LmY3dclb$qAw?p<+@9&xBx_LDzJeUcp~Mw*=Iy zXMppiqq@=t(l<$f$`w=jQO>3&@oYIwGB7)@eK7)@3-!Rh`jxNvhFa_#CeqRn9Bbev zNdHEP&lHYZ6%L=;fvFu!f^^mt4$z$7I9YG5Q61Q4vf*n%>K@{|=<#jSM1mO@>OxS0 zInsJjoR&$P7FD*%3Qr3aQ56jlTni$Yheck;aHolEkMg+W2DE9ak>*R@{GJ74ot7ry z%2t|~KSdGmDq#CcjP@*OrqavD;Y*0SB4p&)u~P`a9{0yl9*5aPj>2C#0#a~@78m=e z5Ze!`c>LH2-{_FyLw0MZ)qxGCAu8IF7$RIj# z60$o{;@6N4W1UDvy@Ac#O^o_pLCw4LFH7AuW=s*=^4bHs4ut>J2et^NfWILwPKPk68B6F ztkR+y0|ib(d4&Rn_9vuwRQZUbRmaA>k~Gu&V?lZjCsg-Yd_9eAjL(&mq-&9oqzYt| zmeuo0jfwv$asP=EmyZ6U&Z8IsDp^sdAyO{O}<7oL$QD)&5({Xje>73)k%d7Wyx z9%vZ~5+=9@8N5dno~@uXb@YnUT#}SQ4MhOOpn>y`QRoeFuGC&f{Q}G3NmZ>QjYq$_ zG*GHBoFUCyaUQF=Z)z*k!I;FU1S~+4s*u8&LYz^F^p;7i7uD-D_Noa`mSJ8$z%{gT zohPunT_9(fYE+HQK7hk`IBn8KFeF(#YPmBjamtqSc~zS>TIrR0DEwLjnwgK|M=<>y z$7QGKm&zr2{)_FzNcR(2&kzk{G7;4!y9^^CNLuB?CFFXua7;vC7J0Fky#7;9?ZpWi z@efy{(;)mtkpZEw1Dk<&yZdhYNe-ZM)^NRE+SFXQjV9wV6N<*YnZzAd;)C;^Li z6eBvyIgPVvkB;F?ke4P zVSF+P?J7v^mK3Ogv5*4Fd|ALduY1u&PSaiA_&3_rbi?nU`70=skVQqNik2mQm#2Pe zZ0-{{Hqjo+t|9Tx5IrX_G1llDO(GdEj4;~}o2d8(Fme^ z2wkRgy_rN{6OvFsW})gIH|-tVp9x#LiE1uVl9^~4r9oxuAEqj03L-rtG|pBmE@TG23S5Oak1^QJ{Wyei=4v5a z$;NBffg0a{v%HWlaPLpP2M2|+e!hPSL`oAtQV&9D#zAB{k&fs@Fj0!eLZW&=rCQ;5 zwm^7Rb7V!#wNyeDlb+Egc8Xqd4?X;-9i`c56Fulo(fZ*OZ7WWnZt@DSb5u*|{St%i zL?OEeNdZ+~$~n+?%VAQ$p!b|>X=f8%O=IH=?VdE~{iIJj5%2R7csVgRNkX2aRIMwR zVWN(XL2QQzD@RuqS8~#|PNZQ3HPRA719<-g^Iphe1{;59k_{TdLlM7_2sua!aMuFi6k$FL#UZhC>3&eYK_fTsd;V4;w7m!fs~2G_YrqeM-GT_C^#XVBs(Jq2?tR{ zH$sH;85#qRDN>FkpDpBckhRl+I{qpgrfpldJg3tnB&)g_ayqTQ^f?+EiOrT{`^x1Y zQ9&jLNvhwDzJbitdqpu^DIVf#$>vGry$;@*P%81Qq$J(|N~ctts)KV0LdES7XDWsr zlKz(*GBtG;dHWi)=9%c+#j)aR(uQoJW2a+8c_HoU(q z?~4bjEiNSULiSb#Nu%l|3PL9>s!>qSIKhQdu%6!Hb-D&|&*~tU3lSH=z6EK8AsjZL zJjDcDYpBwz##z+(O>u9gnWy!npi3nyFz*e)vHy%=G2>B%GgJYeI3JX1uC|H_ zVVqn7+HB@RF6)1g{b+>wG+k7VLkXJ9Jf5Utv6S{iWl%7t`FAUJ>nzvW&)jMwh&@ha zrAou<7|RjX$_d)=;7VFKHeeN>*N7ukCPj7DUfESwSqXbkE`9S9c-K)ye%Wbyet+Te zD;R-Zgu(_vn&NCs0~o3~7P|0WU%6 zz(S%LaiH?}UKjafvlZuHCq(eYNYhW~-8V{kbsfgS$&{42E5a7RR47@x92^XnNWV&P zjf2d|VTvMwFp9B@0G$s}`aMijST%+nBn1^AVvG{0D|%N$;$hx13fX%Ma;@4-4Ds&~ zs;6Q?`P5(;h`Jyz!KT25NE$E>!Xj>$crUp$*M9qAr)iFBjh&>LY>YMw!*=XQ7v1bp zSw^|Mva!@E8Zi27m;=M`l7=BzQx#am->PyFa29BM3|bIV1uO_&?rrYC`r9VFO(qD6R3O;nGb?nW=Dhg4JltR3%++gSv32XV0aCbI4HaJDsr_KnXl zpQrXmWnr3 
z%Sqiom$8c=&auTAd2$Rf#(8rcrG+FyHGzk!qD9t^agCB~&;}ht$Tb6}K<6YRDG=^iuX7x#Az|YLQ5u78&`MR?b(6A;onBo0;^`%oG|*Er`qz@v=$ zvXRdpV}6a`M6F^Sz*e(WRQ(m9J(hvPl#M8D<+V3jSpxwV@-bPAXDz9~=Q=JTf?xI= zez`|r{}Vs6XPtTDmY~EEdujGD~=$8;hslrd4;(r90@6-u2Yj6%+ zjsRn-h*(hWt+(E48U?V zd_PEUtQdhkyXIFmAqye&6wZ*_+Ku#Ndt7|{=w(}*mYd;DBG8QiCX$dDnSi{ z<06K$xV6yYTJ)+0!SJWi{BZ_>K2vPE0xE*_u+u2@-Lr76$1V>9coy{xi%BXfTW3wKI>mXHD} zpOnv^^xT&WI1wTSBm-Vc7UPjex?zHiD2GkGk}4)iLS$z%#;Br?MQOm>GiqHRQj;Rb z^FbSEgnR$J3&m>Zq9<+Q%r{6$joCwwwAk0beFz2TQ5qNL+1p;bdJ`^;o}K= zdLOVj8d+IHxQzvUwyUV!@>KZ+YBa1KcSs7ijSxXbvutyX6wGm$I?+rSk{p2yD)$AF zxT4)Ww0erTALAIm0vZ5sM1ZlBYv_(5C<^yzSw#Vgz?*FAE9PsewPhzM_ z^QCmuQcjtpO_GgJgbyN-CTB{{ha51G^n^I5D+3#;Vh~jfJ^?RrAzg3IzVff1b!SHe zd(BJDUFl>=AhHV-`d-jZd$RFfIfLSn<+Hx! zC{8FyAtce+Dz%s{y(%H&`Ybtz(m2v*gW7mnjj>j$PWR+D-~M)c$J^evV65Nuu6Nnz zsQS|(v`vDh#kBB}lOqRBhmViOPQ+qIO5gYuDD~nB&LtgtAf#4lHWk85v%V&!_AlE=z@>r$ape-s6+JJ$k}3e>;^`4MjNyQc zB@~3k5nUz)%zALtk+~th*cid8su6`o3L&o~4JRF^y1ErsgL78KHNEr|zhWDZBba{- zg6zh%r)Ui2xbqoP(t{!&pOAc-{ zF4cXtD2dl>*i6ed_OpEUC8~Y5E-$<6H2reBLcjRcKmEvkcIcVCu9$r6^jTD1CE^BG z2jij0RjC%i(;_$45y`9}51q{hujY*66*!SR8Epy9m3XBPCyCbyV%J8c(pryZ*$70^ zoMXaw;4pP~vs&6H;4}=Wq??6bis9Nq#JZh+Rdqnxad;2!{lupiTu>j?b3gpoKLUo1 zU{rC)*s#<_M6iY+dsRrOzn{bR;zO_~6D0Lj{Ul8;Md;JOD^Z*ak*2fQ2+QGaY2z7S zLlOr@hNxi$oyt`ft&JZ|n+R!SQd+S<4hDuEe-wEFHXo@{lirKbBrg(YlP_6q*W5^H`3wIS zq3^;n=Wl)WA3!(Sov16MLJozulkmCfEmM7N%t+jidnTf>oDFT<`cG|W^hz}(*Khv+ z;%BMVh8zRmOv7CA>QQ%4g5&D`je`&&EvWw~JrPg|6)sHJwjEd6+u!w}1!H~m&|Z7= zp`RqxnbqqLbPp1_(!%OHryWsFu^bXr`$oi*5RV4u zB*WA3C{?@jB~@vD=loSnp=;B5hl)KF93WO&USzL*?HlZ+cfMl5Sik2z@3GH*_Onj* zSBgv5ta~JfNlvIvl1~bnbebPJ%VdBkC(mbz#znU@Su#JzcRL9$skJ0AMZHHv~ z;QEKwDN0>>rwZLe-FYGKx~Ib9;Jo2@X#Py&WF=IQ>AHD1IRp`LP+0LZ$Au88_n@@A zgyL(CB+~XF1;y z&Q!)L-T_{o-E3 zFZnCm&zweaxz*D_ed@Es0#S*zqVt3=x?q@nK}iP%kG=#w4)K>H{V!6p_xDm4a4EjB z^SROCkQ1fnl$Hw>FqDcCG4?=IY;U8q7p{W{k!ma{Wn-+imp+$c?3CUp#*io-c=gQ- z(-;#htTTr(-b7Maq}05MC6%h+`=xE;Tnyic1MDn^P0_TXf46DQW0J6(mU45Ioe?o+a?SH>g8^{dT_stD5jX;`md9U*{9A@V;a89ni=}xPxS+-zy z-FM&po{m_L9$6_-S~qD)IiC`%EMmhe#mE-m{N&P>V2-vKV{#fWa@s6Jgv?Q6Q|-;J zH>GP+M2;cC=zE$}Elohu)a5?Y(=obEDf`m!62~c~=Ah*H6Qp}4;1EsCQTC02DPo*T z5CyHF%=i_rUzmQGj`moz?P+*#5!;6@+Oti)Zl;U4rxm0n(;(9ilSUhefHKX3kb%O; zAysqI*#r%RwV`otSTjctaxj$Bph*4jliG2Z`e>XrFsAqO6P>4lA%p=3X$0QK=1VzD z*8u(N5Z4@_S6$yIX*xJfCrRP8#u0eLaa*@M*YelAl**fhMDXKBpK&>c6uOcUX$_lL zaD0S{i1U_<(am9#Rf)Msx#HdB;;HGriYT4Kc=~IU=5@!yyZgEG5jHJ3TSyjg549O- zvkJTx&54lO(V^$Gi7B#|KHTmrZ&)xt`g?;;ll&v0Ohd!!e@0XdXK)cjFDI3bWqwFz0$=Id>;l8%xK*oyn)U2NHa zl9D3&9;GGU^PBIy$YM6JktyJi)2np8RI$&%+d54;rweVL2s|t~`kIg`(szpEdAKTY zr}RX)9Q_vpF>ej+JS+#{&;M;PMX=z?A)RLOuL5IZax}It%rplUI~b^{~#I@yI2WlZ%Z~+D>EY zX@4PF;(}>zhy?HUC!`Nk4yyC9l8(DvE+JstNK(Zi-5UgisTc#@ms6F-XQ|E9{6i>F zD;zqn)@7}G#}3WsltD}W+ijb4zJwe)@gL+3WYmA+bgD0-{w*|9=P$0BIydOMRe&Rb zOmVry-%_Al4Yy0OC?b5z*{f44A@Q`}BUPh;Rfz_d#;^!t`?8gd-r8jTp*5g!^tGqh zR_G?bCw~Ok$UN}@yv_sVB*vX)RmgO*E5DOq3D*aK0 z;VZ&8N`vFt7cQ=zi`cnikNbO-E>tKj8y~8!0(`IMJ;mS=zETs~KzNSN)+UkVsR$8X zaUNXAQ)vxyIMjXC8{n(t#?1Os#66P_qmikFkqW zRe1HyI8D;fnRoo`frs2csx^(GIVnZ!dviJO5`I>*c&qD46)uAVBW3;q_)@6WRu@s- zgEmnKXiEX#n}?B8XZQ-lE-On_=1e1O#;%NN!3vx>@oJ=epAW~Tk}~xwWD0UP&wNfX zhm!3P&_pDsaCA|bW{mOuNY72VqWUY|@YV(AJTo&0vU3Ff(S)6BiC9znuv@u8j9VoL zTM>LUO7?KbsIH`ekU~Tqw{EGl^|g7H z*9hlrk<)bW>BrzG_4{o}ZB{8IdKYY!^MNCxhQDe`Je{nX47xgs&I6fOd%fZYPk|8? 
z2lZ*Cg7uJ)p0sX)-i^2@`fL?ND05-%2SAxrpGcEvIqZ zAq(R$DP1A$wiE)Ujo0@dIZB=e3JWx#NMZ8~PN9$h6+sA*5$8&r9~=zd81sG8^vFRA z+|y*B7ETD~Cy(I0@OK3j{<%U))VFNP-t*3d;-@-q$p$E(5^^V`RZfQMC+U38qak5~ z_j6Tx?Yxtw-vh#>ONkRp`rkYeL$W0X~ z)oSK@Ex*@6B-UD436DjRF7uBEA9&DCP{}A%Tuy`AH5FVWQFFH!I399LbkB8oP$ETk zIhpPN2F%h6T>|Xm2z5Zx96$3{$aXq5CpzUoITDw0s*@Zysb)LEs6rZXLzy>f z?UXOOz}i+Jg!o6|2CCZGD+#HxEh2C(LVK#1heSqf{;q!Ot2b z<1h-Jat!&R36T7`OsXr1n)u0+-Jy*yYT~=P|E_33KWq_XU5ZAQg;Ok<#$DSxu76g z=#!-MX4!;fY_(~~nNZZI${E4ES|7zyvBl7d%qSE^YPG%@VO=vpI|3)fNY6nkt!Dwi zTl)RK`s{-Hr1lCw{kOls@lx|!^+OY1N!+Ti2t1-ygoslXN|7(Lrcf47=1fa?hEzd} z4NB4cAO!0KoBI&gKSWhp8r%+5o`@=xM)&UwnwbjOPJ?rkPn1$#2A2umllKEtibP16 zljy0+Ly+=!KJgE>Zp&2*&ifBP{z>$K&LMKx2!f@G724`B3VDoYIlPq$)U%7$QeI2T zlr$FiG@D$W2)H6-;=HLJ)C9&Zk`S7Q(*q|U51peZ4vPe>#H$f{;uUBZdKpBK;?-|$ zj(a*cfO0Y28>?-@s#=UF)hdi*b1zOMjgu!u+QuDcGP}7pCVv`*(k0?Np+v@>834s{Kzu1Kp2KWj6d5RqUxm{``|lAGwDj z!mrXUAV}V3CTx73zH@}c|4%dE{BQc~R1Ze~o2PM7Es3bUB!b?3#dY>u@BjFMu|7;o zh@aheHx3iM-$uthQDRP=Z;}}h4@l^b?{daA)I*)*gIGW~BJ<~(P zZ#hN$V~T4PA|xj)b$(8&#@SL35z>=##)SR^XFzLx6y)v*J--$o-?{4xuAQXsk=;-P zBXV~?PIC`jux^5>9#Z}z@Fu+zm$?l5T8tq zy4mE4ktQi)`Ec%r>9dLBvXnG8^b_db^v)#VvOTyqeVkeZ*=ad3W=QD*rr>Rv+G6;2k6xWD5 z=#EmI`fl?5;)2P^Q8QghM|2Qmjt22ZYH(W0er@QVP-{mPkA*uQ-cLn8;VRLXQ-MQ60bDKY7n|ANE zIw%_R4-Xi`DfHB(9DRL`oJXx8(#_%#>T^|*l3ZVsKxxLQJwT=Q0KB+#KBK+4$@*zU zQ^>riudSh_49(6_o97oQ{%~fM(NF_)#Fg{ubW&Je?dYmIx^fQP(Zz-c3DjhjbINn# zzwM#*)N>t|k*4PynqSU8?A!ZRTCgOpDndS1hUW-bOiSlqH(n$Zr zRmr0~wMvEA(6ts6iH}p2rqq@6Q`}Jz!B66qsy1SAdgDfKP^*YhsU}i(7-yvyWTX}S zE8^2)S_T}f^5qE#rZ?TOkdGjN$mVAs^UqZ@sEy5+vW%~yX8!YBzS1KYFL59gX$O^M z7P+VF)J+>iibz}zoZw#Z0^|^ixS0=sND9HZ$!a{c;SpM|QrmI_B+IA}E5h()i6rYF zQ)*+>=fv&G8((JSRCmok9(m+Z?~v|-o@|I(#2(!%D(5@gh)(h4A`g58sD|4ONNmK#H>w$lywmX|qtTF>M4&t7@)H11Wim74oQ3NkUD; z<8ide$>DNFL=+cyNbY`kpZ_k0$~OSL_JWg?a+c=*^q>QWr54SetFC#$g0XI)Lg~<< zeT*@rTh@}C8eOy2&Lj>~N>Sf8V?QTBPQhVHWxLL!twKtiE0gP~^i(SlpKrSr z@1!&510^IV_zb?8={ZM7k_eJ`E$@5xZ_fKR-{TOfd>DfBxTiWjZOl4UN>Ey9HB^b# z(4s_@gjKYU5_wp}XQ5s}3rTA;RPL!z#M4Clls24F^->U*y%6zObbu=Q@GT?twq_-w z%q!|}vf;Gx9i>coRY{5mbzXDIksS(D8dlOqW#xjgzV|0TK`-lRw+r4!lliOHiofVc zv_1zr;e09at30(O#rRN(tJGjfYlWXm?aGd*0@u4Ks3AA#uFzW{NToP-s?e(-U06{@ zQH}o&&4J)W_d!8r1dS!hkMv3m6`S?@7p3Q@AVcm#_j-QXIpr{&Z^Lj|8iMU~DK`mZ#vCmkFC3>aWo*Eo%QR|7%aWvA(v(+B#$zUuw^ zp0c*qa}EplA`#Hj9kgYflYzId$(79_DMEH$+9h(FWUK}K=Mu$B1=icCBqw5saJ2{( zwF)N20m1kclWr65Nf5g#GPHrI1X&d|UbTUPFifS8h~Y}ntwPwW0@8RK4$e(4TIe)Q zP)*i^(=^Q@lG72w?2w=K2!g|jvEo!KRY#IM5N<5Y=d#imWOEc+DTfo1u?nyUh8h@MRXJ=q-PLk10GjS2?nK{4HXB3 zhqx8vq)Gc}aWmM`X;JRs7!m$h4{1|<&N;Q^eooha??~@V4+7;RvovGQ)EoVM?WZ!rwxgTNOuFUAx$6I)`%h zvo0y%DRNh!4ycx2g3us2`$J@ElV1;$|7SnFk6)(Y_Z2zMh7*hr;{U5_AXEork_93MFl>dBW6VkBotsVXUCi(BJAS#RPfm81DogVVLF zzM2jW^>CHo*2v)pX%N>>=j}W*anRg3TOR#hpq6$sOp?xe6br{U3*2*pfh@6hcFF`3)fM&Kpg zq20>)6bPG*J5FA45#Za9PEeWjbB}jPL9yNbf@^K-h7~jm9kjpt{C5_-xVNpzzWdJ~ zr?j8+oYcx4E*BD%B((}@N|JCLIv0HG4vtE4;lmKQi&U%fa}m6ylB-S1EA5oJa6&j~ z1W6VU;R`l*5;LEnb`~nh&IEeOmQIzHosbHl4cbikeGJKg5B`rY+NSLb`(6G1hyRei zeb0oZuOegWeBGG9gPB2?Eip=(fa*oavm^Z|k?`FmY647!pTr?4F0mp-*;}^Zkgag8 zjhw<#`jd3FH(ATG`)mTIs6ZvyiW=Dnr=--2_Rg%}f2AwYCz8T>BgSnD8tVkqHlmC^?=AHmSG z{iO2FjT_1Y9+*l^@_dmYP+`5WI8);C3k$rGV*H{ocAAn?6>U9>0>St;E0Jk z$vRSKP~e^fhbK5kp+S4-Cl;Kl#ZFQ9d&((V`1^vd5AT(ylialC|22j)dF_q2+Q&Zi z*9+cs?C4>8{LzQJDs@I1lcHiKRYFav;ZG^0CH|GCHbnfE(NA8MCpMUWVFSAG5VuLr zgitlF5_Dcw5}QiCH&hA+$IuU+_cXbhU28M-omYInN&9af{euPPtZV$^Pkj`$a1J5F z%CL^S`YepDHs$0t<)~bWD6f8JD!PdBnQCe~ij;!XxTVq_L^erTaV>HHi^S1`<02=} zktdvsVV19T$p#M0Lx8bPIc~?1HC4`CiUrQlxrpzVzl?NK-N(Jk*@nB^cBj((gg2CJeF>;WnuV zP-vDr*cXT zKK4Ve&=ShxMa+)!1Tw6CIyXUll+-4Y)1hrIw!suWQ@u 
zDIJ!3cHy|^?r(sqaZR9vGVB-NAa!!%zAc!qp@=*Xg;FLvM4|rNJS^$Lzrv>CljXwh zxZoU;?5Fdad-x*;o>WZZ>J9dym%eJj>mw9YbW?ez0tb0TDzMTa@xve^kaR@!Df)Wm zc5oK^X$<01F2j`!R($+$iHNOH-7y2r$L%eXo!5@G2 z(|g=;D8{)7~rm)5oB0v{79y)LAJ zgwVQE8(8M&of`d1o1&^=o%GL7ZLE!Svu|8f1}`1CM;>{^5iVaRjyh|c0^de=H4v9C z@RPzQB*#AKcP72!3qvA@Y5tLNoRg`-FyulGgU!_Xk&~xQ-L%XAg~y%&zu*`<&wUmHG#!SKm`x;ayxHYHJ@Ii<;{ z*UI@yp{tsg{??GjQ&7ANo@6O9EenPqxj!s+l$^-!-q_NI-(JmO{|-4yn#;QHA$SN^ z2(5$PT#!=|1fD9Fv#v^Fu7x*?$FKR~yhV4`T#&)%tfrCDxs4SI?Z^Bxy6iMP=PkPY zhsz^yc?2$xz~vFRJOY@;1T9G6Gn@(5fWfy*Osc?2$xz~vG61xDbq)AS1r(&d+29)ZgvaCrnSkHF;-xI6-v zN8mE2>GB*{Famvjy{-)sswD!Ma1rk{JfC8Ah8k8rBgypCi<8e?ARI0d{(Fup37yCV z0(t857TzF{B)4wew&2N8+M66~8u3QFu1BKJx#+$opGlHfFZ_9a^;3A(#r)1+kJqCl zuW{+g;)K)lBOvDH!q-ipsdU5EYWmWtX6(Z81kHN;X-+Kism9Ptenm%2a*CxFGQTQO zgxb)4M#N& zp_6@*k4=epfoG1$1(NZbW7^OO{`Iv>0nsED*<1znUct5t(;3Ns3su+^8 z?-T-$BB^Ifp_lK7QY9&(>x{_iw9fH@0nWb&mEV%GYRjN`tE4a@%-3mjxJVO?N8U{> zulceL$qx-<;OD9N8H5y7rKg{+$j)5Qd8qg!B^BZ#iIBKO)K^OqsfDWoHzg5~BDbej ztlK7EOQB3Xl$KwZjLFNpaLbYd-qqFRvKV@giW6MG(Hq-R8H~-Lt)vUHO*3w@>fgsd zRXm~r%B1>K0;~EWk@nT(H?BF$e>#87wJ^3pdO?-LK|nH?6@R2`s_t`=DDCC@$+6WP zx#^mP6@Lk;RF9o#a>8{W*lYAdMOZMyNfAr+mGdbZNY75Z7untIM|ggIm0uXHplKDF z*XS9MyDtaikJDs&z_qy2Ik&>19OM{~fT5_y ztDB{fmn86Oq}5(U7Z3zibsjqBP>drky3Z&6%TOl>pT0qCRM(|Gqk2+pfK<$*3QC_J z5bi||SHB^t>3h+UDo!ZAk%;QH^?R1OFn{i{)AXD}^UL{%C!ToRP98t*o9+CHj4&E0 zt9+q^*`Gb{1ZhED*dS6Ph`9?!e^J#$Fen$sd9i-Y!mT%aPmJD7ANat+#-?YwqV~sM zK56|39;$h;J~T8$O`UUcv-awyU^v_?$cSih`??C7msfpoo-*2QV0fjQ#>G3*yy>CW zxD%1w*n+_cJE{J^ZzOJ?``}e}_2#Mtlj^-6_+13`p0P^wM05_W(yZzvzDs6DdQdWm z($vv{R;jHtWTcob5ki%Zt2=pAUGLd+)XqSCo}^!-W~l03?s_UR2pAsq8B+zG3ma1L z*LL;{eVvNEAE2D4EDXOYcJ!UP{(3~={s5)q`9}|ed;42^EG`F5@)2%KrKGF66$z8d zK!u{~;82A@(7!VZO|fEv-t|!xbsHKE3bDwvDRk~SiGYNrFa@EQyMeM*nICh-}atZ^N;WS=sxS~ z8weYPSNVhuAk}Kbm7BGhcsX_Wn_NO|j9!xL8%yXnLkmVqFl~S^jQqoisGZ3eeV4KRwrUqI69{l&WCMKtC%LB&bSn z&K)+br?6hoeDv>(b4lYk6Cb{L{rQbO*n;4wsW)C~&4lsvzxsOJnyPMZul6?s&GN=87xYduUN$)IYsJSseM(dnTyGPJWn%S^t4ItS=lQ%#e#Vzao9 zs<3mXFzmU0q=Kr&MQuNS@aJ~u$g{5TBqHG1u$j&Q#2p!pSC zIKu2h*2Jrgr|dPada>=?v1P$|zwuq~vPYkO+KLMhxaCIFHPDoO6MgOIe4zGDb9|TD z#!S)yfKftnguqVf^mH-EOw(9PdVuqz`YoY$N0J^uz&DGI2L<L`tCnXb@x!y^DhUKvMKh-ny6hCm;Uz(2gPXT#yn-HWE#} zZqd)B{_f&$7=f#5y652^d=ec|=Vj_S#O%V`OJ4{aR$@ADqndR`)e=f6*{39pqxu-cux>}ncYRxCt6 z9>7o!65Yz#@kW_yqAEQa*Tf0$)tIU4y3Rp{U#OHe6F(m!HI)JjWzzE{pfGs|hN@pz zjyiUGZEhHAi4qndDi{p6t`nthHY&Bek=STxU@S#*_1}{NhtvZO*fm!#9P213P%pxx z+N9XlTxP&GqGVL;+kiW1DaSg<2DszqrxT)VIUkwYOc5&`ZaHN`Eln1u|DameMASB9 zumglCir$L!zIeS18%j@=4Xdqo)h5fq=DOF+dj8SW ze9qhVsGK+@dZ^!hQxp2)?M1@xDyR0+a{kk~r{<#fLv!>((iLF?4s9bYY#RY7Ns0VP zR5d^ep&{;hH`3)L+_ERP^QjNn%f4rD144PZ{b`$qUPUnZq$BA zPLpe&dE}i`IZkQ>xnL{vy@>TuN^`+tG_Q*DX@-jKR}m4uw6OBkq}Ur4Gv^qOTy)2B za6a^V9`E(PNtxMYBQ$1aSEw8O{9TjNl#e*SO~-k(PLcvr`A2sug81sl&@e5AhH;|k z->CbC!x6`k@)COYin#X#1##5zrzPgaT+=+zX6gEF^5Ae@bU(s<4W}X}J|L$d@bQqy z$0`Ix0dZSs-#a=wylQUOStfVVJRjrj((e9)IG0pNC z1U0L%IYo}_j*O0=+#cneah4dLe5sg&hzhaI=cm@Odb-JhN>^_;##7fLrz%S&@oN2% z)E&wufXSw*woo@$s^^}*?#^FbaMEfjeDE*&|dy$NN< z_DHQMhx>K?&YPdp0l3U*`sMVEp7W|*2Hzq4bGT;z^#?M7DZsGZOg1V{23N;%Go;aSS8`n#cHzi|DMwR5VXDc(D~+| zJ(TOsJ6Yg}-l;8%7G(Gxg1`xDMHjHI6!@k+HwPo?3gmvTlN-Go#LzqH z0zz=%#J$Is-jm#j^n2)2mQwJY=P;*IBszJ&202*n$NBZ>-->)E$LT>fHf%$5EI)Jn zm>oO*tabGx>d7WLN@SRW0(3PU)JsaMT*F5VqSdaZ0!O1BouDE*m=~5;@tRtzq2a0o zLNA&lNd!#S=FYQ}uiaq^=i(YcLP4a(MLJbzqY%54^p7?Qjh>64dU^^&EC<1Lh~&?N zUS7df5o)C~2=`I{@W@*!QxAJD+(YSSsU^>1KdZ}r0S3H}l1 zbEAk|jsgEfh^`|<-(#Rpdc@U8SdJ7(gXTdVC`ln3kQ}C>)K46zLgr3r{wyVIJ=8Wx z4WacsKmB)5DtE9Db}*3Qy!wY8al~s!akSLjdxE(U(*q}2P8S0e8s9p4 z7}iiYLWJ%Brx3Y$>4gKsTVJLhdq|6GB#n%CVeqI5bDX 
z8clkijAal*bsLpTD~3Qh2w0{9gbRU!f+Eo*d_d2ypZTX=NIR?o#zQtn2x(+21`13c z%ptlP(91K6t_JBiO*&`w3(V&CERfO6d=!|nag@~xDkBJ{rsG%zRdSH|G+wJ928T$g zywJB)&KJ%V+IDKLo``|k3HejI9PX#=pd8o)sMZiD`skRNwa-g`EH&m!-igak{?d+s zw!>7)!lSc*^t-@Z9)?0K7!)zdKnRVfz-g+jt#e1IkWTa2^!&ImGp04Of=!qjnT6h>N zp7RHN)zrK-^>4ji|8@~QJ+J1xJTi&)GC^fBwD@>FllNsX4tjrh=(T*0|2;TZ`rUgn zsfQhZlajLF-;0j(&O{Ot_dO_xtX*&{lKhAB$W4fp-HFsjjTomK58oJM45hTrr^p#n zszEJFWH__|Pl6I9Q1AA1k!&rHx6Ff1L-*GON|U{`f^zh8#Z-aQ;R?XwA2=!#MD0@M zX1>UtI|}YB2!zzN1}D(-nQD5=hb2e@3`HW=O$u!oHR!p_B6v$U-&M;0(hB3|B8lFlGufiSQ-(pgUe9C@ zk)|o)_atf9+)}*+o%5v)FNs)$)K%b+1jqhDDoPgF+m5&}>>{p;HX_|;pW{JDaq*$= z6pA$ELY9F`q>=aMCD_W8pq*_VerS;AoPf_;ZKvgP--TNYt3Vm0DyEv zR8El`Y9SCpDYUVw9gDYQ;o3uo`(!&6UZ?rsJ=pYGQcO7#cICOnD3h)sIjxg(j4?@i zo^l$ccpW9}H2|0I+}U&1*3xP{ot-pvMl_WcN(s`j@z|J2C4GX_?G$L^4E&%3h-7>s z%JXuV#`p{gzD97UB0M)n-%DO2rD!>bo`MfEF{OyEG;1mqztlOO572O!Gif)JIXp^Y zh(V{F4-F74XX|P!-n!0m>MCrUAf*@a*N)zHYwK&b?x9Xv07XDPNDJbCW=V*Z@$^Ke z91=OkjGcsIg%0ORp+5s<`dREK&9#H}XW(iHk#u!&tu3u&M+gG~eet1L6I8fo{XP0$ zK-uJ#1)d@4wREL1LolU@8t{~0TjdDpY0W`b)b^JI3PTV|4eNDQ!j`Eq)wt^Ok|#8u z%P1Q^oBA9-yNeI?3@oiw1Kl!rs+69M&?;(pU=;CU)XhD;N)zH79N}!G2gQLS9Y;EL zE*;@d(xOuY0}`tgQmOz-89L_>O4(ATMh_P=)?#Ju@+oo(bqN~#Oprg>^mN)1sE6bv zbcU%I{54XueA*Icec`FgNYf>U=a=t~&Phpzy;>|Xj#X7vwrtrl(lsT%pvH*|WZ-5n z)U)tNVq*zfh0t6QLo+f!zq*bt8vr$k5glrCnwFoU9zp(|HpH>xZ!%_+ z+%s_)JZT}rE{8|RqMEBZ;uMItlSJ>a1pQAI9U;YrSrd8yhhRodj~n4|q#P4DAg(g3 z=iEWo_xSW-bHQwLp0m3Dj8iFH$P23LEsvWFZS{sNI8XJq za_be`zfv3P?+lI;7IK=;A-Q97IfYhCbYD~7;G5J%=j}*ZXuc}MGh2w0oB%mBLJp)l zO<$@+V#50SI;^L?88W&^=JZ`udka@!*Ww82X3bY^R6+`TqfTy+=f9U>QBdGcsDDpL z+v&T+*Kx;0&XhJh9H10geExY0(Tam0$?~ z)W)na45`DV&e$8yYF#nD<2Yy$oQ)w+nxW|^?R}z_D8ZJjsF~ zHMO0zGd*-o81J#kw3tn2(Ta(m6Vo)6<}opa<2Oe^!yHYKMc7tq9j7Ue-iUe3_v}pU z3TdKDek#z?x|L$}G{{Fnh$qdyT||7zomv+Y+TCP`-z7(0^W0O!8c6Z!5~MM4Kyc{v zxZu$0-wKw3;=UXecif#c4xh?10wRmcrs%^0lB&alC84IewaH9`KG$uNJTR?&Y`qYi zrOrEzV6w212E{9DL7dc|Fq5=;6m)sW21(t{a?R;;?&k+Ca;JJPUhISq3gJ#96FvFoGj_JllhdeU{Y zcG|HcN3H2(ll63UfE0AlmAf6TLN8I{C=L;AJQSId7RVRRND!~W04G_8MHooA7mgGk zB867W?@_eW9npf#%O{;s0II^`P-~G~{+)vplcQX?erKWTIXCt!3PszB2xa&^pHCV> zQJ~;({q*!+77NJJTuS??foO&mX3tq^=9HCZC9FJq!b&ib#kB6o7vIB?9h?+7UpP;C zq)L2vuK%VRLSQ z5ICGXn)2!z)p_0DF2m-WfIuQvmCz*I5s`6+X>2=;^Q)%GOCD|@Q}<2xaPj>M?+Pek z-~f3LXQLQDK$v zNwf(J_I3wnQC|;1RNRDo_$6vsoD7H;oVwBvN>9mo6PL*^h>S(p& zho7_)&pu+kXO3EQpxtKT^!;PUz0lB9sfrNIdUT8N4%ZdhlW1=Vv3zoq^0bD~2654p zLJ$EvG&$AH@YLQCC+z^1?lGHy(sI60Ir1c@LMRi?5t~R{gjimy_*fgRK1#ueh^>Wc zrYs&)0X^GTA4ZdWacx`GVol1v_RPCgBUn$RV2qDq2fx)jO`RidvUm?2o&ei z`8M$l`N^H)`x9^*Cs}Liz^H&FIMm`ss)wG>2mL2RQrL&1$RSm%-kwv;^@|U!I|V0U zj2%@IUrX!G{~||U?U>|1YGQ;`dT<+=n_P36qpLz?^i)7wLpC!YPbs7>MX1;JI3>W* zD1VEuez#4@IpuUap7ZAAOPqC#`#;HfDab4=EoAM|(uwaIC3Q9m+N0tHwJVI1PE&wB z4$>IsZ&Oj$!Z?A0`1hcFX^bgh3zdxuCgFbh8UdX;KmWAXkhaSu4Vg!g5Z{xIU7yL4 z&1DYa914TVA%H2yS*jwnUqln-d_L<^$7QGKlEd@M_s77eNtR&BND>t(qMl{JrBkI6 z0a_o%P-&x$#>cGb%xP<5qv;*&Cap5WM4SQ*U}L~g7l7c1XR?072HUZHCq185L#Cv^ z8P12`R&UnH#7$$ChrUQ*yi+N;)acJIXU9=gF9!%-M}8U&X{W3{XVg~Z^xE>-b5<64 z)-pQxTB2>A&Ga0zj8Xc?C5EjyebP#E(%7ifC0$+JN#iYYEc`l>6?rMxdrn?d@(5kO zeKgKL7dEKmzujq30oTbsh2qn`Pm1JbOaLfY&=B`q*>^B(U)b1%m8>Am(r~_w6{|=k8rv&tY zy8nFt)C44Un(MCGd9`i2Vz<@RS75}ZA8J=lkiL^dbSJC^u7*gvGawsnpdQaW`G_6g|CEh@G!3*J zwZZnIG&OFc;%3OwCr9}iV~S5$hAIp_6%r6Sh14^o0cf}9D3A9JQ^2c@MTiQA( z0!x?v;C!8&u6fTmYMeY!(D!=2NVZPSnHrhP0Tu$!b%2m)Y~2y%-iRXLNDl!R$s`7m zKwx$W^h5VXM0j6sdI-r%roKI;#eDHb$Q)XC}lvl#cxOynP^zMTQF%1~Ng4wtx#K(dT@_`c z^;S&ePbw*CQzkGTb4U!tFi`c`P4bf?|I5b5B<CtZ6+&=yXPHX9tK0_juEV5Vr+I3G~gX4JvWZyF+r+LnoEVn zLFcIxUen1FCwZK(GiT0#V7EdJ?q}1+X@SEuNCWDA(r^9!1K6Wsq(`D&{W%2+!EWfz 
z%HU9+okO~!4Na~HG8|5Tcdst+e1crW(lc6)qS8@$`NdXJQf(#Lw1s0prk_-LwZ6v6 zm)Ei;AbsQH2$@`k1AoP~D{!DTF~(d6h^L%feTk>4K)2ix;XH+|#XwOcwGhYAn_xU7 zZ6GO)xT9$SO`5{%o(5IXjn1NKuYfdOZWSEDE1HPsu_Hp^T0-IloIq z4MN%i>KV{#&3mQ(Ldw$9ZOokn=h55QV<(TDu!B$Uw}VgZx6{W@P&;QH#?n$15JP+tqXjbj5(0uDbPbvLS~;m99z9^wQb+E-PW&P2THJv=&_RS=H*seUTsB1 zRczWhkT%g08ch%>0#RcR3JOcu6jze+TW+PG1B%E(ZeEtwSDYKt@XzP4a`WztBAz^) zrh-BiU?Cf;9z~!{JpA|MBd;)^QTclKYjK?X!E5!ifS>$LzbgkW2Pz+D3!sP%Xwqu3 zqgJ2ZV-*w4mNRhFraKN;^z1V>c=~bcIsS-s9(~9Q8OZonwjMiEkN z5riU#cxc0tri<%0@i|^yNBXf4{hLw<{nB^}KTDh&rqtvmk0f%WlTRUIsg&vk&8@)K zaUKpSlNg9V-2EFG7$F(^0F$p zge!#9YEocPGi$?pxQlPdRkB=m5Scrrc0*JbgUOkbS6 zOe-M>D8uP5We%4RI215=c_~hkZ=502VckkHcMK2nsJa4kSjqB`+?N{6S)i_QC zWv(BU0Qrz+lU_HCb05bEh($omA|NB!!buPgX$Z|RpOrET2r~giI>8T*g0y@q$tkmy z)vIjV+U>Tkew|enRbPD4eqF&CboP$(k%CRoJyww6TsR?4DHLC^6#r>_rg1bBRB1RS zaUS|Xna?&Kv@<6TSlg*)YdPL(Cl8-;12 zQl*sh&R&Hza;~BvO{1h%MXJig7EO|4j#30NiSr~>G8^Q!D65z`E#4Es(=6Of9G&c0 zxS|=wq~(C4;qhtAJ>?D+!DR?kJ&aSD*MorO={Czd-fiVa2d(@>)CzhC66FXJaKt7E zCULqaLDhA?rf|e(#f7_^p1ZW&zx>yQBM_pIfcEH!0;R`@g0kW1G%R0kyLauetsA$3 z8dVaJ72BFdh>;*lenk+ziJ_51x~=1gJsPh(k#_QOnGL70r!WK9Nlp zk$ozHa~>3VrBvmea`w4YisXQncvQaV@U7K31SzK}3Ea4$ai)NCNjzPuGGZd(RJ5;M zRGYrIHrf;v#g7tQOXzZBXed+`5uwM}l;jlXB(*t<7+=Ua6(MR_21iJ`PPsTp1^f(J zETM|W(+W$b=Pq1?qa(@4bv`<{*x}WA%3%}dt>x5VYdwAdr}DTBbe_SnJYh|TpTRgE zv|%>YSvDLQ`8-IOnseu!u_WX#!frtcs^FktQ*4+q2*I-ug;idyF`1b{*C>_eoj2Z| ze&8iu!Nqte%Adk{6w)N*qmvD{8?-~)Penz(m6fY95~$uc(`lTJX`=o~P%WypBJCFG zJ!^v<$E~mRxb?OiwUM6F@Gbh;`amI+YVopf@uzebg>>k1Qrb7kdF*;YAw_s09Xs!& z^x&9grgEj%1K_999KE0Nr!gvS`<|vMoLh@MHgJDZlfT8@rzMCmtI$9 z8O(=C5G}2zNj6(mTg4?dn^RzGR;;sKTlU~!uktlCqx3hM@H9wA8t9`G!*et9u@|hh zbP7`}I7hkZ@E)^^tgNKc)~sA_o7Zl!OA5cQn5Z~p99t_1+}YO(cm^Hp^8aKM)2C)yh70Z z5^Ns4OCfDIO}RKrndwjqBt=O8o;}BXox?875+qKu+8EzC(!`k=dL9Y}j#qwU+;Tg6 zE&WWJO`Yj9>%w{B8nW4s(y^0Mq(vvEq9E`P{2}Tq1*|zGT{vYI}AZx3nnoWfLE+_dj^=}oYsB%{TivMIIIsb3H zUe81H7TldB#zS-99 zzSbJIUukt4H(Sl>wN}-z+%if^Y1t(3dH^RRRSgwHG>NE* z=t3(MsROI35j9^&U_dVLE^oN*x&`MvP9)X?wyfjVAa=%z?n@q*9Oc1RGWshAbg19{3B@ki#ohu5vC_e@qUOyS<^Egiqxpw@Ya;*XKz_|JuY#Isf%*S1%arK9s22P?HZjL!{<-j~#_j@3em zj|=?9)QS0;^i&V+M%b*EgT^j~ZiI^Ip8JNLS0^aozo#m8 z7^_Dw34RhzVnFgF3>rd_U{)5c#k1$mIR5oH8WskIGv{v+q(h!T|kYuuvOWZOy8+|97fg<$l({QaYO1g2AA6fm*eK^w$eK!bG0z}PQ-(Ni z{tu+D59R5S! z{ReyXt6pPof6x2vM^7AEB34`Y8GXud|ND(V;58^`O)CvJNYx+`lfd6St?kx%3bH1gg9-F>X4wdb zI{Tx7GWHYrtgU2{NqQVvrvN-6QXDV(4Q zxH_Y3G9x&gLpTw`@H#|FRke|X8YSYHhvY&r8%1d;VtP0YftLh=B%wSEb*j2Wo0C^- z3Ep=eRL~LfDlV;LZC*lUo+IS){r~)Q3l?*J8daZl92`E%&^IL!+}(lps%m!;oU@a`IjNFST$oJpwQ#;1;ll|`_?Vzp zzQj^cI&KmYsKf9s_!!b%ax}HkjYfz09J3k<6>3p1R>6V_2nL2C6gdpLUzjgR2{!XI zHbglwB8`?nc&n=@q^c5hVYJs`Lmi|GqnxVy1HO^lS)v6jTCt< zIxeW_JN)bO!7)e|@*_ClH1ZrllGtPx$dc5)B`-iV5K%+EE8B}yoasW4ZD&p*C`m;i zXo@txTo6?pxFp9ZMUoEyVz{S{awhNhLWe4>)Wy!nf=l3y&X!?6@>U@p5QU-=JblKL zQIf6{4)GW%8A%=#fg+Znx?K$aF&8*^R&!GbtEvu}#})8jgl=!$vdwnx-euc%?zDAK z2G(xcVmo$SX&b0)D=R5?M~%`l$}?LgJv?x+$FKB^VqgfXrp`tX{E}M`Li-5DG0&+ZWPKQvFu79AF9D zJx&19@oa{w`(lheyc;&@=?Q|7?mmL3UN~HmZyB-XQ>X3t@x!DwTWvHxc=0*&jMnHB z13ZPJCYcbW--9@7lFuZ$4Dsy5NhL^;jDegz@v6i#lMa}-=7Id2O{7p=o@LuNXW4Bp zDzewSuF|f(4sKUAx>%8J5VnXH00i~?z$`Z&>h_4g@HSc1+bs`zjUtefLq^>6k6cy}oTfE{~;+eB>nY@ZvJgg;b z_58t4I!#k{>d-^>{U3bSzWBK>+rehR)R#UI10DACBlp_Fk3MQYe&oLAef{%%90L-! 
zuYUe>_T?{r(Z2n|?;DxU%g6r@Mj#l*z?qSRzYM?}k$rD_t96|{Ze3@NSWn9d8^#%m zqS`zLXQsQe)d`qqMZ)Z8vCb~U>ZIz+W;8A+oy|&BXmt&>Rts090B=To$pN5s!S&sg z3(L4=d%KkjB5En=0XYyZJQ%|Di$+Czd?|~$g3YO%ji7=ZrINfuC25HY9_7d#R8o3f z3BRKfB%z8&B_!_((sET)+UZeFIrtWKagwTWz^V#o;S=zA{GO8mnRA@Yhw6{+ zA&h*r)KBz&=-ukzjWT6Nd5c#>v_Vt5}uJ1I!4sYraeJAEghp& zT$~SDP)JHwWxfU0P+ep-#W;R*(&K@guJUYoOZpB)vicNv%}Zlz3+n z3C}{_9BOZ4K8|>0&*DSpGp!U4o5&35W6d{3+9Ju1(YFpJ3av~ykE(pDugGEJsJGj0 z-iq8t85@i$ws6M0oLNnfK@o7;Dk>oi!}D`}E6;HN^$&OBIJINc;XuU^W>lpQ6Eq)V zohKnap@+Owx=lH}DcjqC1U|}AhZD5rc=&(^K8_0|@^EJf6h_)hB|Jp>E@3VgN%h-X zqJ;Uub_rMIgx&MQ@7fQ)d6)hC$3LPvs>Q32ysZudW+!{gd2+{HsoY=#6yc}Z%cNT2 zXj4=Md8st0uE7hn&M(tYf3CJ5wIGG78`si3pn+losv^~6%u}X}zR*o^Q@m9N!3Y&; z#iXYSN-J?;%G0`WjzFcU98Ps0w@(# z&<=+0m4mvff#Mg?E;(sx+B?k{$7ew1<47Zn!^0Zs<9mjjS5rw^ZsoEDCm*Zd=TQR9 zVbXsrV3&{>LHlk1vh)zutO=Z|#fNi~gqRWpX!IK=bQ^jDOOJnT3x)`8ltUMJKAnVkv_{E>alD%hqI9$QhuXJVm+dnk|)9W z9jPm}>7p`7;LN#6HS(1M#`*&fE<)oiR~m7khm)?WMRCC;j^H#2TK9&39Ae{3?Cg?n zQ>8E6$B7PfOtdeJ3Ovt8Ec`QzMS2x}3WCV~uikw5#s6+2;N;NcHt4fMR!v+EHO+)a z5F4}(xF}7OqqgBJ4O0mKB*)yNL4 zT(OL*x@E`_)Oe)R-$RuQ2#WfMh1M2y56Jd+O=Mcxa=P zlc*G5q#v?7(Pdz1EIeFaS!0qEiquO~nHGaYEZ$5EBvsWj6F8?4P(+R8`ZYDSbIU5L zuPQ}YaT14&h#RCzD2dR6DXu4`*W=tKNZDm6T{jCUd8EU}2ixE{QQbGrM$ASz0U|bq z<0XMcZEADyD$-`zAQADCJZBnZ-HQ)rra0Q8J0k&Di3x&msTZ7@?&35R;4Bs4kcrEr z)S`5m3Yc@bDqTFFGvF12%2 zNFn%`kyS*xop}$Eqo%&6=^=RJ_-UH!9=E+9+mAl>ggyS`)1;{m+F?+cLx-NWr}jN= zhvCQ#LTtVeY^9jcOHOG|SqZ%rr%D2+j%_BTB*qX`>>C_~et>q_Fe%$;7bJ9gbdC}j zB6~4B((jdh5fCSl!NqZ^uUQtp8^^c+-cvD7U@0hBea&*)*tp5A+;O$tc=fGz)3vwR z_U${Yw!V&g#cF3w~5}>WvVODWu0CYB{AWf-YY!Xh>IQm%1 z3&(Cj93~vSwiL2@jx;TKx#N!XgskV z5}?ExP!I}mnzG4{C^k$H`MO5a9=?OWaNu*AWF&U7qYRXVU$OBt{1JQE-F~sfT#uU7STeb zjHtVqJXUFWiPbKv!9iMOn>KH@UE8+VrgiIpe&HhEOqG|@GYliKvT+s0qTY!HV?!ej z+bi--rKx=RB@J}yk}gEB7cNq>g`VA3hT$j!jVL93QpO7_SOlefb_vgyW2GylZqBAt z&2v>es^u(!09Em*#CfWM)Lq5TYW`iJ7BNKe6>_dHW+f8E#5>dXPpcmj+=OzFhC0Tj z9CWG#ZJ`Q2PYzf$8&*CPL|6h35yWaIK7$hTP=UNFUX@Xb>AW*we zdNjjsXq>73$W1htR1-Dn@NH?gtd4#ydIq^zsaIhYtKl3P5P;X)2QSPOY4gB!3Zq9- z14j2*M`pB|8NJ&v2|g!CiFm%d;?0|<6=T{`WacVBkJVXRJKf~FuoP%ve0Uy>dIoWw zYPsf`Emkw$+yJ7xJPV(`I}1kSvM~Bi*=;>`aNalkY)C!X(uZn47x6Vx4rQiq`uju6 zwe5%xrfQ>NH{hQGB`T-{AQL0_XDNU-K*1$>>TxLcjMmL^9oror>4(^V?Nh6ywlXC_JE z;dl(7T0@UgUPctXlYdZ=&NH)P<6Lz}ZmjhhsKx`vi$Q5C346jR^!DerMk%#K2|;>n zW1Fe7w>iD_Esv$QJVqf0-eemvA+fG&Ya^fzfaCZP(v*C6ascK3mmW7Wc{_#6k=+%S z``Y2>Q*0$--#Svr^`vTLD{TLxhmd*_-J`wM1_U+QBKWQ_FuG;80*>wd*392EvVKg% z?mYACr_%5J!SAKt`TgHbANtTAq>p~|Bgg}uLPPl?x+xy7od9?c&;^i>QG3T#QO0qB z`Depjiv+w5(Vx(Bu`(4HL5#yZMVC1{!Cal@D>_cV=NQrhQ+DI9>nrFy+najQzTTs$ zzjJRoy6;$e&iO=+q*HXNn!tCZ8Ppj%v_f$+8?D&zEi%4V&)yw|`|$r8FAWV?!tV58q!sZ9=` z-$uN^>PQrC0G@P{tgI}MQlz+m^!MT-sp;W0Iy^8}5Z$i?&>l|H7#C|UDjOFS*4IZx zzRK5IU&YU(&TiXj*8`qq#5chJw&H1KT$EY$$~su%T<6cGv_xTUZBD^5O|veH6PQmd zq~>6L7M&Rube^^1DsuS)%!}EmYt!Q#C7Rewmqo|n(m?)-fABBUSx&xmj6C=2 z>03VVdFc&rcszaS|MvIe$4j$=>F<5@d(*>gl3(~u|0Mm^@BeoC_Ah)r1HCzY-rxOi z#Xnt1pZK+(Ovm?hMX9rsWbcEYoj&;cPeU>NIVH|_{3v&S`M=}}sJ~Dc$i^^*m|GBR zYE3uoGChxo1lrh`dI3E3Y=koiW2Yz`&FLQdyWu>Hp2kO4cW*j=>=@(Wco;k?{$6Kd z$c(sA+i6?RCPkZ0^m!IT=UCR+(QNv`ZVG*t&vaTzBPgs~% z5f^Vl+16l&*35Y}m?HAQ_f0I=jcn45lm<6(eodCDQa;`cUERDqms(l0TKHK#Ϥ zVKVBuRL$b1J9rcC>)$j2BpaESjVy2td~M*mG_VLa@$&{=uX0CyP*I#3njba=Qlaet zOPe73Zvj&28kAC%VZs+0pCTh~vk@zpQwc$g< zywjb%!t@TV+je1h;SHc`UU~a z8jW2^`>_G9OJ8o*(E2!?iM95?f@D9~`v>yWGMwjOaSBTX*opyCs-SWoIrW~q@! zy$yDG)XO^Y?*L5%HBku{uHWjFa1A2 zN}9A%71S-PugsT$4S!Wk%Ivu8T;Ev2%n=saF|O;f&AqjgU`#f!Ce-N6<9ag%!D- zq}sZu#O;HH?uU)sMb%of8gU!!kaBNpwKRPiDZnZ8kW^sOfnX9AO^`7{72WU%Rggmz z9Pn>Hb8ZP$;!tnDLdrBP!0<%xG1>!_uDRE}o_VNFu(73|6X}3$*H)kaGdvT9nino! 
zLW^__hIl43$E*SM)vf8V2i}st;9Y+!z2{wDls^9*Uy$DQ*`J%<`1o7Wy;L9fkgBvW zLuN$pv^j+Af>s;U%1*%n@jahcy&OPsOYE3Gh~uN!mc`(w+D8;2<6k)St>;gNW#Dm1EJs zz(^XtFqqax(2GqGgdkH`8XF)u7-))M6&sMn*Pk!oS>C~As=|eyNlj+Of z|GxB}{_w^4-QHd8@%3Xr`~B(jzx+GW`LP=<8b`i_m`jkE0E*L z44jNwbuW$_Kl%LzECssJ7#syKJxHqUQT&-6ecdBz?*Ys!*_7+pFk}jBcM#p#5qYla zi2>ayb!{ZKYN#($6aA~$NTFeSC_!!OZedKrY~t`*qSS9*tV`dQ`r_SuC02--FP=S} z&O*YG zd=8Vvr_*cCJe6L3>dExNNB=aPeDV|N0%g%x_}vvMs|N9nnTCRwx_6SVtx@8y3o)9k zOD{c_F7v$0FYw$KpG}u}pNlU%n=bLV^8B;u+|y6sJj({Pua+B2SrQXF- z;gZJdhHH4yq|UTmIDsRk)%BN+H`Fp*-rPG(>xFv25+Dq=%KPHRvH>`3rMk*Yxc*0B zc8G?=DD--&CS0!X9GS>xS2!?`Q| zc1+s$(7#vs8s>sh_`G<4R!u$aNRQ80O*P}OP|ilm0uRHWorZw+@B!EZ7}P`i`zey> ziN>SRY@iWZr?1;4 z+LQ7MgBXGt?tj=xu=dy*r*C&J>3@8cRQtwJ8qYfi0{k`Xtd82bmtd~0%nsgVI?@{q zz|b@_bn@sSdxETB6!3Q8Dxe7q3R1VeM-wDK2V-J4HUWDHE;|8&ja1Fq99aWa=Rz3H z5`d{O3abFd;Ua12SRVm;vOYe?Do2YCl&$Qv?)h4_fYR~5LVaRs4x1^^M#A<`tg$tK zf?&+3NDBi`epYACO?x0RUQGQ9s=%d4C6?3GD`+aGmSU_mP=V=rF3oW-=2xO+&BV|k za}JOvizuI1GDa%`0};kv`z(F zIrqO|;Tph)c1EN`XV7;};`yfztBSz3p1{PhTi~+^nj)~`JT_QMH@MywkJO=AtcDfb zBp6dTU~1hMrQ8Z>KA)`SCUO9?0nuite5dBfH)dzH9AE)?{$szLe&?gFL=Tun7T{xV zd2jlbZ}>pkS6@gc5QclqUU-43g3k0MpZkWiIQVM%<=_5D8pW)uZO@VPJ>UM#Ov<_R zEC2mx)73E+1kgGlI<>$g>|qjo_7{I+ddH*n=?#y35ry0N*z>?=rEmMTZ%PN~^>Z3C zE!}QCYDj&2>p%Uj)VX~bx%-n(^A#9F+-=iK7d&+Y4NNX0JR{LE#e*2NZJ3VW8Tx9p zve7gXHP*7}teB4{-67R&9^KMV(-?Y!=pmAG{6@eERCAfVi*OBk3?l-?n7Zb=8J7UW zb-K5h-ZF*ry4O8^%T*Yqg5?)~_Gi+ko_r#meECE=@!|{V1RL%v_`tk`C(Nl6C-8_m zlP&^q&LX%x_4149WdP2}7hg=Top_0Vzr=I-H-Gb*KYQuLbcyowi)UUa;6qWLk5Wki`ZH96w+H55u`_9!!rg0gqK?Xzz=XDtUt zseVx(6q~9WEnhO&&j05|8Q#EAF_x<^H&_^#(SQ^XNk?^=6-`E&meJ$sWr}eY-Nr1! zTN}{&T8kLn&C|w0b0WpTS`b*Jw5cH-g(x#v7XW=gg1;Q zo6uV{>2*bVp}xrm(M0Y93$|dePoq``Rt1GTfsL<*4X?9})MO_WLi}x&t!-Gkyj9oO^AX<(MNExcBEDp`&neG}afNnP;~qb#5AGib62uiBQ}F2>*a zys7aq#`;U?3V}rj=Zy3zjgTsw9~pp6Krh7nwc)X6m4lPwjl3zP91LUtrp#i>+^HSe z%V*zOAte(kGbmV!>NWmO&!Q>xdS{q}ubw=~_v5^lMF9oWS%sh844idd?{n@cv$ca<&CXAfohS76I9Y~{1IP?%tg zCsV%_=smyWJzsXqXYA&n<@uRr8>^1igY)N7ovNl zJk9})i|G+q=g-+!bjV+XH8Oa#KO-R-&%=gARvs%8a;ng@=;IFUO;wKd5W@2SXs^kH za{tx|DC|jEMQ*T$dHyE!X16e@!FAe6;mHo__G( z{B-)!ANYavb3gx!>7RVX?N9QyXMgt>pN1=t^$fBFRV*Mb2%+6z!J9-dJ2gT(3948o z=}0|G&RLh|HZ)2`;mt&eHav8pG3_xhfJMRJRj6`$HWjsTb~#_9d|q=nBN+d^K*@0} zi*^s`Jb{4}t>7q@x*`)(0uq_Uxb;v({p80#kv{Urf0F*}PyRF>pZMsXrKg^FGQIlJ zE9nA&{52Mr=bn8wJ^jfi(kGwzMEWFEpMHG-Ht9v!pqHP2KD`XUdX>%PBEaXhR{&p% zOZnL=fT=UIc7p?W6TFlT(}B@7y0Oy_u;F_7`iTtiO{MHEkVjEZBlE zX?psPA->*=_`aE+t#dCFXlUdZ5om1WhF&aU(XchA=UmMGLPwDexiSBXxw(J_890Tv zk4yoZkxh+litJ)8Fsov-7g!Z28Jb-r0%!5lj=6?%T1e)TM=4?v!gG%V#N8;nd% ztx=&GA3l>(f(jNp+(cviZap^Gs5VK5Su(9YNT>0~vhngtZ;i26`fFsQYO9I?19uHkv zR38zEvnh;^GAFNM`bQe4wHaN}BJ&t`^A#M;6^jcNGNW0%|2Ki8aCNuHKn`E_xCVQ$1ly&4qyv_@ue&q#!iF}Zqb=ip%X+PIEye*$BMAE! 
z855g~lO^uou*Q>k?@ZzluHavttW|aP=959O71DkTc;E8o&q@zJ@;U%ni=#NFoZ~V$ zF1C-2xb9I(uP+f`GXfZc72D!aUB-H3&rK`6)Ex?PM`&6;gc+;tT9#na7fCg(5DaMF zQ>~dOY_7pa>AiCWqMlOpJA+M`M%Ok21L9iewYSKljR4ef&wh{DZY|<*b$yl&5}+l$gB%CnHw@x zf}k>4MgZe#O?_IUNJgVndqkG8PDJY$gD5V*x(qcyYu!N3H4iVSa@d-3Dr#5JJZZ`B z=bHL%U&x!%*L~|ZrFS0Rm4DT+C;jUm`kw1gn}C)aWc|0I;?74~`nqrYC+X|I=pE^i zM;}5Rk69e)k9<_7cf9X?=>wmaBdfBO-t>Ln_xDoIb}7Q2_a=9L|7o}a4tbe|1(Np@ zEL`@0vZ`ww?Z_-N{~`mlz+;X9KLM3$O3lgG$Y#+?bxb>(b`=|sx#k5H6Bm{R1eTV} zTO<&+^*sEy8__&D^EC!S7-Dgo=lyI9S%{uPx(jnI8f)?#!CnZenPkgWhq`dESuXMU1btHfy3d+IQf#P@ zc#(5olw!6QR?;C0x!b}7u&Arss3yX#)L_%}37dkxNiK#3D}5MCvFx&m z&QNtW$bve|7<54k*r;nbn`jN}OdVR2hSn~yQX2rl#>6|PGOWg1VG;Ffq+fV1mQag@ zL!VS;k1iHsoq@x2k^AR3)!cPnz!XM?Fs+Q=RmW7Zq z6gupKVMLr>+e$hBb3p77Y$Ksb43IVas1&}+3T-_~zRuyO&7<6OW^8Z(VZo+ML5P)e znpQ8M0D=LYr2jHV2tOtk>_w`bhw1sX&gR`u`=jn%o!Ew80yuUJ5DlXhz#@aJ%qn>= zGy~1!!g*l|6+$PBEwu{R>4+g+0{fDYvDp@)?TO5@AhVl_+LlOVqN(9rH@GjGZgM zye83GEq|oq9FwKpe%4k1uUXB=He5!EE?+u7NO`OUR@S)=PO}Eix!E(P9V=cJ$HF*l z-tZ6(<|DM)q3nGbp}(T;D9_Mkn%DR168b&+8(z6g_3@SSv@Sth4S<>k5LN^F`ugeJ zNvhJ|)7But(#kB~rjk{TH}`vks{!amYn!<1R>tP>A=sAriIqbZn~GaOq2nh42wM;d znrPipR!-Avfao+WUnWSQtzt3I*;|zk9q#7b1a&y*lMWvnzDV1e^90X$r?bX6_SGJi z3P1wr7elC#?IPp)G}utOp1Egxa~Ohv5Mjsxz^M<>q4rNhCLB?GrX?4>8x9UxcT zd&}#bqT(SA{VWc)qhh~t)ZP1Nlpx#Io?n<=eoMMO5RmNaKO@ z7XT*9q~;XyHc}O38lnV{wz4f-uaF}#Z?tZMMXH_!ewEd6g_V89=BE5RzWRP{y&-s| zO#(Z>5)*0NW}Bo9*NNOr6R)trFQb83ULi_0qIL3Fox$I(LIvxiLho$SL72s{aOMpk zAjRLg`ROB#DNvkK*ju2fbQlQc+nIqz(WXJu(_%2;^~+|7x%b^o$%X5u zpTWQ2IgbM)cNq%*)?$E808=ZBs7$B#S(^vjqKMkUSW+`Xfe`GA<8Tm$>cYj#(QnR@ zdQ;JkooutA{}l`?tloir2h&3jy)GTQ=Q!8D?z(|#sWv>PwRaxV(ZCkQaUHWsXZdEi zIw_ea<^$Khz*w9>wlE3sbNskY=}z8FAY##rU~-Dm{7kca1uCZ01D?4)psqaytXG5XjfvYd9Yh16jG&V`a;{xkDPYcG8 z&#}x{chv|G$GCGHrhYD!QSj1;tGR_SHQc+f*suVDYoW%g0Z>&BLsd(YWUC7AbZiGr zy!n!bC^sJ%K9!ExIPP4Lcm26r$7f4ZQDDU{<_#lmKJYM6D|7ewtGxo|tfh17schL~ z(U@XFOfYB#7Ap)e!O{`~PQYZT<|3(*5&TQ+6XjzD@MLaUz@!kf)wG=r;Fw1Xq~&b9 zIi;9ylBf;#%BW3`CfO)AE!%vwz>>65m1GM97vDS?6&2^6TU(E42+HO)3&S(b`A$N6 zPw1S@YkKaJOi+I}?*dKfuAQ=%HFX0-_|->&SyzBbP_t7|ydD56PsW0_0MfxSP;7QS zSN#@v3LuI0h7H^n4$DU20oM3Czq-G2%?Mt0bhL?{Upms-x21U8%%nvHVTm;;rsLd) zj2s&jdH15d+1pD1(vPNsUVs49W)_5cu79OoieZHqixAsZ+9VoUg4AXxNT9 zu?z!c?`{%qV?~ysohuc-O)GgkwcEae_XbUHGP zLol0Q8^TO71_9blTi7^RkD0;C7t%|gd^%mD`fQzcEAHxRyyqO7@gnzX!!m2%{x%GF zPiITmL$u&Xt|rLTngFN~txOvrvxfT+iD`Zwtz@{DAv(3X4oW}R8nt~^PLh7nE!?tn z({(b7db_OIL(6JYo@aTy0`UnLgiDuTNJv2}*PN3 zaP-vUKUrOMjix$HLwPL=tOZ&JOd%|uf+3z}1GO>l0(vz)$Qn4`8O$bMB;9=$mPzZ0 z7OYI#=}w@QY=bv4_!0~^kw)D`=YmHbeH_sADF1HH8@E9VdzouS_?CITal+dvG6T0| zCTbM11gmCWLN!z9WSAS}bOTs{b#c8~XKeOU@q6EW_op|!@v-!VH@!YR^2o!?#RHrr z4%A2zUOV+nI`!&P>B^;-sj%b0*P+XJM-80=AkG9c>8h{{u$rd2cMQhnMYNRqOWDe$ z%2qfScDA-*+KF^QK(zIUf*k_|K!G+9ioZQdl7>>SCE)i?nKsKk13h5r1P9Ea3Iy=c z&=_D+)pUq-=)N?!TnE#%N``@+Y!ru;ZH_QjmeSrs)#-uP^`pP*BUnRh-_^$bvKWQn zfdIO|`npa~X+WAE+k+TEr{LIes$Q?cj_6ewP0})^7IC)%bL|}p3j|Eyg!z@KG|qjT zV;tFHX@Kc)`f5UhwoJ+wn}`_-90*1Xb}HvaQ>E$dO`O~}DhbR1ya1(2sy1tFg@Y_W zMz(_WyvmC2BAchxfp#pW@z(#GBAD$<%*tqb;l)p48+GIOAHV-w*ZADu-E@YVbocnHy#lc*OaGHTF?tak*zYQO zTnGJIOa-{ShQS#C0hYjiNp>+Gq?ekkk>&C&EmR}1_}Oo%9?gdgg92jH9`mH@=1E@} zp*Ytqt8K;)M{5GvDFvLSloaLJHX=H8--h^bF`s7APO+eA3OCN8GXdp29^VsL^VR5D z?~etd97X8@n~21Vn}my{m40pt_F=Np%1GMJFZHV?!1czorT&JdD1s$3>W&S?N$c-2 z7*lT%U&CC^9L4tg8s?OyRf@WglFdTtXZ}SdakBT4E54w%hRv@Fk@ucGmboLu=h|Aj zt*}$3XoZb@g_M#Xproa6?l~-@;M@d zkv)C39yeOvC=69!0g#})Su;xYNt^u9GMm-p4Cia>C&kOWr_72Gu=*#fzf76fI^hhW z{nLo{^|3L3qFq5!R8Hx!PUGb|2S*xo`$AnwnRB&e_7tcHQq}MOQMIrYOUnnVEpan9M=$A=3mFd-K1;Bj`5ERc@nGO!w)v#d={dCGd3q zFYsQWpNmQ_bTK&9juZ3u>dkqdrpj2Zf`y;gMBze7DM6 
zsRw-T?(ItZ_wA#t%bv7vUtefDE2{u|bA@#BIO1PRlUp)w3~CV?5rg#%TwI9G1l*e z)Y09T_8s1b&)y-V06k%AU;yOW6m}?j9g%icFuzk2v&{2^tA3`KAc zi%s`{3jFkH2r4z&TZ1{&57I!wYaD$$W8Ky@J>!qc+g$6?<=-(b`_&)$p7dQm@MGyR zXZGaJ{ZRVAcm7QA`8Da0-nXTL{Oe|~%G*txt@rpH-|@d+|Iw3w_!H@C-v6EH5{+Iz z{y%;!z5WB=yK@TTHk0!1k3T(EAU0_?L8I+}6nsx)E^VCI0dm0cK+GEszDzW8<{%0i zRgRFS9%W&$$CrS|6h#-FD+Sy(L(e%_b4U2Xs?%BQAckJHN1Ji^+Qx6w6 zc>$bE8jObznK9V{uZ`M^_}sLA*w8fknNi-?6S^#u4W1t2tmMKxIwZNX%5$y z&whnc`RAT~GM%OO;i{WE_qT@6fbAgZHdq)gQ|@BFJ)k>(PD1oyQQFQ*ZzuXHKbZzw7hxry3( z9H&RyThaP-_U;8NO{b51{FCWp&%Bt}6Jy+4EHMlSb0!&}Sos=d_Yn}-TsW8E$9RkY z#+38UQA_}p;;%BeGJr>PtOpUVADTwyi2V_F_3RF?v+uxxwCCV{){0I(-)uU6k-`fA zQwIX%UKneucsBsW>K_+K^;>l&IBX_mqc&<+Z$AxhyMwt6eWJ`#Fiffg{0z25$%1hE zjBPzOX~R=ifK5aYTn2@_$tKL(LB6>rkg{hj&{Wm6kgOKtnN1e3&h{S6d>vp}5M6-= ztER4lkxyqfA4r)>PSiI8`@4gbr^%y0SVzHB3I|YPx!5oq5`@3l?l|YQWy3lIvfIR&Wc438yVbB?OL8QyD_^zBuO;?7K#rL&WjX<%X_O%LJN4{Nj0+4 z`;Meag3EQ*2!;Bl;;SKlDydA=8d4Us0L!|fR~c&{ckzy!YaTs8UEWGr{-^K%M`=I$ zh&O-3_g#O1UH82){jKl*K>iaaQL(~lnGGU-CJxtUz#}Z%Z=F;%{hyP*_X`eOf4vX? z#=lPo$Zx;x1OF(ka^XrxOx8P|_T688x~{+`E5c@Z9g~^K#BCK6H4$OEq2y+z0wg;DWSn+tY#Iw=arg+X+$c&hPAO^RuhlnM5i&bT$__Sa)>M_% z5-#jT)+s;~i>xeDxJRq?;l*o+_BL36=2$%R?pax75jG!W$j1MqqFszVm(V=PsL4JN z!OBYAs4%^8=c50S-L*Lsv|dkZ6s-qt);WdQ9>UTd=y03K%4o=ZImZ;_x{<|3CmWz{ z0UFwW&m&T|lzC#9zEQ9lb-aeUn01)BO*RA#W(8j1hU@1D;cRiYxs5*Pma&(qu>7N& zrhDnLWQ{`xR*KNeHPnlKcbp%<$24dQz37_a`l27>{mY};$2(`Sw=uIDxoK5;Ai`u` z!)&s!D&U0e;qUB`D4Q4HEh_c4Z;I!r8hh!?)%4QYfpmhe=dTV!?~kW}A=pT4Q@m$J zxu!E?tmvVGVAhJfYJgr1n}fph#&$f#I`*(>^uV5~MIv=#Hv_NftA%^+e}KnB>A;br zm_zD|)&om~pA?`@z1+OoLIB{Eix<;nn2o_Jml5gO8*KvczJw3b?)2ujzB4`g*qb6a z8Ks}u%deeJr!QS&tX5LBxP*~9G$Wze6i4>ey5vqkUj4h(87W2m`~f(l-znFg{oh=waS zd6Ze{X!@4dhJoJ%%vX?(lGU3b^%@o}u-N*yxhX65Za_bC;PAug@X?0}6xs-+%3-}1 z(UY<6ur3Z>y9^VBEy|0pkh+64<(f3;Sw%3^&7}*h$kSA| zTZTNR;DR|1Hjs6I`DA+E_93?Z@pA|J7d}e43fM%hsyRCZ$O*3vmZ>;mv*mO*9*iUMo$0-s@)bz>Ihcb5A<4|p2K zq;p`UI$fDyZNR;JY+B|T&EPcYeeP-X{ynK?KiX;-$#vH53RAE!h-%na02vdEC)rNc zoHF(_8QRKh3U}M%`mnF1a;Y31w0l>7dgtf9E8T~In-RXQxB8Rpd-(0?-~8}Tr7ymh z+??cyQ5~$;D@3}kPwol(LtEuv-#=Nbviim^Fz%p1Js%;W`-E7i`B~XcNPl`=r z)zVmUEi1&Nw=&L(IVn&7wzIPWt27q6D-)Xbgo{a+HRSY zizx#ep1K(6{%q8&4yDZSZ_AO}1?0mVc|=@(UwnkV!(RrVL6i>9Yu~MDG*7loC=|`) zjIslCs7I13f8GE*nd(PAW$9I$r03vq;TVlakX%BubLGzM(r&U~YinA{iAL_MWmY8mt-V zi&+dsM{(31#`YoyOM@fIs@ zfwjW?6DnUYU?SR5Y5(8k6L%jF3PS?ei}&F@bGrr>=um1} zI?zQ_F}A2y8=Fmo<7jqZ)K<(HFmaXzDJgc4(W`$RzOlOfkp=@ zohH&)9nL4p>-P|N9LAUE!3Q2n$7os9#Jn1WRr@3+t$+4swibChy?Ww>GysUQYVq*l zqv%j}!;aLG&R$OE0Cr;wRQ$2A&I77v@V&|#uXnXHgWPOmfXBGYb1eUMrz0}jf+~lE z*UG9xO4HW_6|g5-MOewXi}k>w5g7`_*RyDQ1{te*Ftwm*>_&%Tirx_>DDV_qUo^nf zG*Y0|N+8xmAwVM!#q>%5!fv5B7<1sBn^s54FiP~dAJu^GS{qE#f6`(Yg9u%+Ww`43 zT?CD2*%XQ|ms4rSbJhqvmX~I^ZaBu1BGqdkNY?DSTE>fXIaN&Ta5 z8z&7qIz?J(1PAfKU?E?_w`ziU>ZV-7~x+k4W$3K4xVb#&{Q=eelI)1tBi zdo~K7xc<4Y#m~*>Wqhs6*7DhHGm;ge7MR$cD9bF=e?@vNf)Cndm17cHdoZon98Tp8 z_W`2rhq63^pI09s8{bd-oT-JH6|3-jn|J_kMr+;y3NR{#rFn z9ck}952w$6_j}Tpf5{i8!~6F#`D@d`!}q0kz3X$+mwx#drq6nuPQ|p}xcMlrZcbnI zwO^U;MMUG^kpCTyXi;&)%T`1zy6W*=C{+E2hjF<7JxgGWOw{rcYpam zc?A{#N)-@ON7GF`k32pOiL z%7{f{G<%}Sju@%&``By){NyGRMvYbn{jCptAoZhvxcT_QKl%_=AedXSAj>*_#q6x3DoC`VUXi7-USzh3{-a2WUe(>`Uk*xqiS21jyN5CA$jTiwg@ckMoIu~N2 zEK{d*^MF4(kGqj!;WT&KKv-)xak z0@ydqNi}Fx@5u(r=4^vpS-;Q}frd>fjnbTSo(@~sEIkll^>4rBoF_@|j$sWUcokeZzOD!EV&r1g zoUE_jP>lq>5sbk8>aS+SnsaT_G%q=17e%vBCTK zt)$be)YQH}V_FR&)w-tsOx?L=xYn3nHl^n^sIlD#EUSWYTL87ewzUIP_w*lxecK&i zOT$LzO}_bk08f_Emh+47KxH*^AYK8(7V-joS*62Idq(B zutBB@o$tZq@JAK+U^T)(T2w>~6Qh~5K$ib-T zMANbM{|s3|G0pk~?|lC)|5>j$~U{fadr*l1IBTqC`x zBB--+HdkIooWyy^_|AuJ3@ez%PFU1d?k8CY07@CrMI%6?nY~?~r# 
zK4x>v_FK;T)LBfjxo!;#v;{HxuK{G1C?c928ev_v8E7YkHi)_Hdn>@<{pKuYv9B#- zWi9f~04MH|7BY*Z5LZbJGUC$)hO5m-_od!PkEePX#_K}w9H=3SQVZB}9hFcsND6H# zfDuX{v)Wt$HJ7h@{hM#QgJ#ggliNCXQ+e=Uy5}&P15fYTv!50juS<{Os`~ijuS*Bo zZ#7Y>pp^D7RPp`y-yeX^`XE`woNhwg9Hw)8@Vd^o;w~fTm zEcEYcN{>By6o#xfI*y+|JBroGhA`@5Q=cc*p&FtM(YG#2%eoAix1KBpIpG<~eaEJ)GD1kWSHnoBH*zRw2dHAOYo3Yp*WI{*GVR*tC|AE96VI)n>F0j=gNSi2o5v(g zYxj3k8eB(!_%fAgON_Nu-GimIisjk5fY<0Yj^0X!uNu^Mix-n)UcV(b2vcn=zLS_B zUc96a&owH<0AQv{10*TX&P`LpCEG=xF=9G!ffQd%tRp8U=A$BZz8g8LkHCjE47P97YtYn0A+~+fl)lKnqg;piPBAvAEg4U z9aC8~Rnv&C8`o5t;_A^WNCJgxpW31?QE^}?F-$Iji7{2ZuIJ$LpT3-@H`mwJd!dhs2y3U8K(PR}SeAbUIA>DUP{<>%*x{RUvlz{zynvgtZf z0;UBDoU7(~CTCoD4j8itkI>kqqeoIxNp{02-+Vy)RUH)LY(XU zoyAva0VY8~z2k0*F)7&4&00;>JQavmSd9}I%P9F6#;JM)X6f3s(R3BP)EGLZ0@qwE zmK&Lyg5W9ofw3{GWm{yttpIujGkTb9Jz|@WjmZ6|s8|s|#lN#w$L~Xf$3|{j6O9;c zrO`(EZtwnssgE-F-S`*nJ#s8P`1l*rTR->n(u0pZPOrxvH5O>Kn$y8U_e80_rTzn? z_J#)sLQpSPXZxf)bu#O8`}uY2*~h(#{a8*aSF^2#%?we!3>$M?w7&*Ojr5YfJiM2| zQ3PBXo~oPWGisS(MU$Z3z$S#(tP9OBO%{%{(BxN7Co7^1G>S7R(oSKn8vormc97Y` z#;??%lJ_tGw3nN`GuN;qStqR%qHqgR7xhfa-c6N&=}p+LMZoO{TD418uEqTy-zlRf^07 zfy~dHmpoFDMM1snsWBp&fii+~24Gr-i7PP9>~W`|aJfM==i5l2)x3SvM=T>kDrl-FHFLj$7Gh?hh$WVdK!Zhy~2bHH;=@Ujzg$YOX%f|Ej2A zGPRU@U1@S`6H%K(vM2_XZrTn+FO$T8RVcYk8Mc5)!KVwE&(}#@%Jssfi|NEGr&+j1 zqU^tm1#u6Nvd_DE^)d}IM zDd!m?$#ZnvzIb^UmIkerx&f~To3|AriojjqgEa}3J~IjSK`IHFc9N_V;Zlyuif6MK zWzEHnQiDa#X@bhDF*fE&fYKP7_W*ua=dA?0a0#%9_?L>X^QUMW%+E$?aWlo=&5@=O z94(UGnIS5kWPzErA`G9YN_0T2RF%}R(NvJy3is#$CfVO7$BORqw^<$n*tva%L_09c z4IiA$gp|bO*=H-u?2U(Ix7HAE)&a(wsi3r~N5*U!?<^y{T9}S9Qa4dms1l#QtFfAn zdr=#@w#??~=`)PDU9&;^NM+v8NO9-U(O6YTZB+}YrD`$Nmm}I;!eM-VjB3IOoP$a0 z(ge1g^73^6bV0fwvHu*F1JeN1QCbFF9UM-VE?;F+q=D@4IAZh>s_!nOlk_IMMhEo? z#NDF+ry0^0B~Qe zqBH>W8m*9`S%_4v{VKH{$*i?QODhsL#_$}9b|iW33TQO96g1fg*YsQytWynvoosPd zFXnM$jwst+-1I2@ddo>i!&{{WMetotMOm3vAv`oYtgS*~z=pcyf>)RF8fCwMbFg70}m|8BdCKkd)KC`t3 z7kWl%%9z)a)o9GzL)Gl|L*|wiR8-xy&?~M5_NNRlsZBsnEg-3ZF;>sq@oN<`uZGlO zGh+KrTIjS=Ic8;jbv3~uKxdQ2((`Ca$47?J5JsX4r1m=iNu9iIQzd^(Q{+Z8%35ov zN6>iIHPV21owdD;_iO~THG|M<0Bd<|utH^r32Eh?g&sVw+>QW%w)HC3Q$FV!_jCch zXn}xa1LiR_UZ&K!9&qBRlPbb`f~^hqhfT(G8T!Zy^o30@=DYjQ$sK@g@3 z<)bpJLhqlL0G8A1mYkzu*&ov*E?_w=RT@UQ8KjzzIVov?k3-H2=uccr8;D^)i|Iw4_(1oRf=X*0InRD~7h)00wzt8<^R-k5RkK-Ep(n^G z0R`v4q>@%es>XIPf}z`Crkl2*lB~Am+mw-bhk95-H65Ot!^uVyBYK^r%%x!8K7-y$1w#S4CZeq zI;5Qi{~&bzk^ZjKK|d|dBGsiuqUV(M_B5qKea!%LBGnGZb`}%6V0$m;xsDnz&Z6Zq z)ieZvJ@wj!bmAnt*>ba2(sRy>KnW{kB329@{fDR{8uUvHd{posd z%GJ$BC0sBsTMq9*ZjM_j{TDO|lyb^aHaG*NcxF@+#`r6y^VAfD*I1aYl8$u)lchmO zeVFFJkG$c{NFw&9X~5x?!4azRDE&ulevvfQt1mncgE`B-<{UQ=xZ!zJstw z-Z~9g0e_$`5O>ezU8Y8#x zoRyVn`sZCtgQHh5-DSguNty+?F0t-}_+FD-n70*M^N{M&1-_@Vj%wr9ZI?O1K8#+= z*B1dbfMVU^?Y@AZeueA0NounKP-ag>8#*^I7TfR?Yh_$EGRA9QR`e~~M20XwKSA~7 zC3Rcqr%lE&s#mU;!9i~Nef>Uc zIrGp@X?Dw6puwiWr$qvrEu;*)ThpR*+%<_ndKngP9{Ip572V{WsamD@0^n9&WH3$d z8c;Pi!Q|~ByTKgZsAmqq;OV92{#8X5#4%{G%_3uK1m<{%-htCtZxnDnUy@m-x1}w9 z1oPK7huds4cR|zt{0{s#@-Gc|LtDe(XN!wUww#l<$*$EI*}uiF0WK5^MoX0J5oC`k z0n;dD?xPIMMWT`L2`P#=+cDEqz-#Px<;no8(op23ci}m7oILd5gL^`3{_2^t>BW~{ zg-sZb{%SoiPHtdlPM`n(;krw!yMSgZxX7RtAMv8<35s6@6B>5D%;xx!KYJlP^Wp{C zInYi9n%BzZC^&IbLa;1dzbd1`hZ20b$+@7~d(T|DO=5k=9ag=(#K+=u!}lhP;oRiB z-b`exk91Jh|LW?cAsCNqoC^&@@m{(*IF!zj7MO%lS|z=pYx_K?X@->7^fWHZZ1|B9 zYb4c5)l@B&SFuP*`$ywT8N|r<#3M69f;Kg6+mkV+YnFdg)P_YgSwx|JOgAstkpPv& zN=vJj1OO`KSO5!^O{ofBosQl;r0Qrx!S$J;1<5GhaAW8|R*CdYA>IG*L+SMth#a#L z1`yxH#?#y0#Dcp<8ssV!NEZQzuoThD&CSd`ZF_Xv2Vg%+wa(E4ed!2klE?6edH8rg z8qVE#1+@~*SEpWBfCmomNe>;{#q+4@0buoXBRFP}R_tAY$a@837H-~vyUdhY=B}Ja5o}JM$v*M>U|90=qBRN#TrZk6#+1NvO}!#%u(qDw)Q<*r-@e1a-q_Gr 
zl(lDfPkQ#rPo$Htyd2GCZNuUbL~~RZc~4~+ciujNgY>z22JwCs>TC3N)2tR|*>PdY z*@sL@7>;tzR?04Mjh0}21rah$0w;s8wPL|T8Rt@iS*L-URwovy_`3G2RJ1R<<@JbD z%cZBDd42>szcEyAzT8ov%Dt%NwQHyhkJtgmVTtrpfnpRH@+Esox(;&x3q+PH^kH2u zSb%|UhPhr>Q%K#d>jZzB*g7y)0pK-wqRq`+g{6Kmy>j|fc!ixzS4Xc9co9g-tnz+V zk>`!tdx*ws1^wt8dK-L*D(Q{Zre_^Sj9ZTg+E|}A5Rxwg(yX>?QXI{iI{~m)&uC9Y zjbH7rXmjZ*7%}HgWeuq~SSFi6nPSt5q=Rv~r>`yDdt_JI%U2mIeZ*Y*^#z-xzn;~` zRt78Kpp=elW>afz8FIG|HBWT}$u(R%z0-27-mOQt$+NaC!!R#kY-_Jj3p1LrTjK~(xRMnE`A2Z|N+nw#7gvdgLG zP;KfvhUmTv*1srXUjxj##@UI6Qu978Wv# zd#9r)1i#~R?*8&K=n4#tu;Bv?I$P`bmSAUr!|7qPsF1lnlcpjeu?8V$ZCOE;KMK zinA;N=g*x@U-}haeajsgx~Y23?PKS4F;4{HOQ!5^%Ka zjRv#jMUip|80a$;y`cn>rcUg66qRk3NN2OOa=!nPulZ{9CHrnU@8AFMhhWbJv!No3 zyD6L8T-B;0Kq!%DR1NVvf9^>xlWv&>Bxs)L2hHgkB1xU7X$ewBRA@8ac^=E4yZS}| zK}%lbK;_61w;0D>A|PLJXj)4!{O$d$1e6+#%a($=S)G>}g91h%0AvDl#& z7G;I9a~t;RZ!`}pIA=8|>82HoD{U&QrGlboH?VmZ>8Nzf{#m+O*M*j`^)6+v(mCuHyv_s9DyB8ZJa z;AW3EE7dRzBuH3bo-6*nbonA1?*#L)IrUI!CkrC2snIW%#75B8Ok&T}e*&IJ3Rsm3xmfc*!Tz4!Djb4L6CB08N$i9{` zEY5q`wF>S*6}rN5OW_ss!@k#&QmT?c0hI5u2|6BO`}e*6mhoX_@e9vs5w?Ih0@-pLEjhDL00_j@kv>@8g6=tQJ_qv52eBuV%DzScaqAA?Fr<1fN zTrBL9sVmp|Il#>nDRBebMc8XS?yNVfWd5$(T8evVyBsxqvW7CRy1ySfcm#bbK72PF zFTMInT>ZyV1#aJtp$&UvhOVoeAOk%s6`cJC_NMy&?o^H0Y0JKz)Udl7`+@TCXq#m} zs%Nk0+1H---;4Fe(e_l|hHZsan8B_Ru%bmsu00dS7Jv``a%z-ITmg`jcjCUNO%1-E*x>b28N;FxS<3Z>nSS((vsX zrT24e#xhdffTXq#DxsK&SBGF%21Y`J+D&!Lo<05BZp(v3ow~QZcEaEEfaILimuodef z8=sz5Yl01cj_eQLkeZQQv#keBd~LBn^eix}XCvul)7lOD(m`cT3!8I0-ZDLS$?0yt z%%W`Xuvy%?-B1h6m09#4OMnj<8aHv-KAq1iNh!5a?PbH%8Ls~ukqDRpinXj{Euo@tC7Te}?q!ELyg8fQ6#-Txy>0XXyJCaLYiM zN_O0YCyzvomJTwFY_^=&E=1$auuoNh5KSe=Xp(%AG~El&3S=WYPIowq#hK)k2beY3Q8Yx+?V2X%+A8TQS2M4Z>yqtV+FtTK#~E!%(Em;QpeSbs zmU;m&Ue2{`L3rOzHJ^d4!Gh`KYI?z0k!MdjQ;~wM1sI{p30@OGHsR}1_%H)2Y*EB0 z?2p>MNxoX((@aFt2jFRd!Q6TX;snK;_E02Es7F>g<04xAz?QerQ?qFg9U~6*r2b=l zX_wV-yIZ*~Svy2JWB07T%dxnVOr^*YBLDDDfVd zb&7OBRJmoE-lUSIzO4X}9wM`5X#HB59v)$}!1*<>z&De^XeN)|z~-o?!Lo5TL60b9 z72qhF)`*VC&C*qYLZr8X$)U8B4QUDywu|nz#iIyT6xav5!^0?0ZxUGrSpYYhDxy&v z?D|!IqFVoL&$=Sm^ApJ|O|l7(&bSE3_LC0cJVxp1bm8KqsNhjdu2Z`$WMa_|=1@PH zToskQcszeUQ`UZMKeB{Av_m(8DMgT=H6}pt%=ESVcGSTWag!QHyfv+r7 z_0i6w8erNF6~7yMhEB@j%VDm(e(QjyN8j?!bm*S@VH6Mm!n~LJB0JN;?=;w~1X%mfL0m_0!Cz-@51h8fzxc=l^;z%rsz_ zowFzd#%G?dY~oh4E;H8)%-ftz*7n3lUa(}a$~GvhVVpMAI`+szZ8@SbO0kSy4=YRY zR{;Ta=({ux)tqyRfOeR*b%?-q1XkNZk_kZ6IHmoQLwI5VOgr1@+t;_3pp?I%I!=~! 
z^fW2g&Zm*Ge$S!91UwMvnR^D-TEI*bFijX_@tkW=(?5H*H9hdak@T8}PNt)0J5yP$ zi8z678G(N73U*6Y-50_SZ@7O3gKF)FmPsxtCA!Xs91YiMu+)Y-1V-)kystdMS3CBR zra#Am``77LRKhiwsHEep2rUeS1uO9lD<#mVgj8DFg5FFUlL%G#=8lHcd7zE#K&4+6 z@!AZQA0LHOJ3_e>t1Me_X*unn~n|Ow$T#;8RBCU&Uax<85Xh^U;}cvX*5aR5VVS z6uo1SX5*hljB=JBU<);+)`ESr5R7cb=>Q4iHb$p(tvAXAR{cw!7OAE zXVb}(+34A3W*mAb@)K0`W}y=ha$aEwmW-z^rvjkQlW2=qwmfW}`&B|jS z-~{=qa4}W+oPf!NPGWH1=v*2l02l;48AnxnH98-;T<*W6f)d>=W6^SfEO=@a_#7ew z@e&E-YY>cg9BybC&6hXf^X=Gi0bK&MuxrTsZxE!c!J4)wAV|$Ebl`M!MV37c=VqCW z*UU1wQ1X-XZ_@;nU_dk4W!^_WmvVv<@98qC*PAFoixl{=%7ETlUT&qo3XZP@GnTKIg|# zUheBh)advK!5wxLv!f`^4^h@0CF3`k-Mu<)Li$#NX<70Uhux9}7>*B3 zbvG>G*k#4Dw5wJT41Lo!BB;+wFg6WrsH87&L;$l)>C5iiS)S24+ExuCHY-vng2%&a?HXZJ4 zN=NCm?ChqL-i$i}@*r&lVXYv^Eyym^KmxB;(~IO&epBAAzCFrwTD;7$4usYhG1VMn z>gM$e>C(j)Qg81~_JYZ<>N2{%5(mW>Es>F=>Uq~ z-KUz;kvkgGv2&d7S$JFcpOSu)gi#eB=?)5qV~}T72G38EMU$HneCO&>)jrr`^GRLi zG+?>912n^yj!%)xh1fVsixbD1*b$imGrAwep_BM=|5=0^&vB5x=!27VAzH zbvNRL1vGPGus7V*-=UoI)&@+wZ}5e#w*z~BY;Y)ujH}V z3TqZ``D~fR(2h4%ni~mV>d?Wd2Nh`G-0BfPRKdBL5@N^?pv>HaKw%B_d?FTzy>IJ#Hqukwjht+EmAem3>7C9nYi& z?tzRKG2fuw6KrT^`&slV$;I^vyEP|cUwDX27z{%p&Bwt zZwm}V-7GS4yS+eTN-1~Ry(V}cD$83S1*hTnizT)4=CN4@*NHL^Z?@1O;#ekq;b3X^mQ{}GU$;?P)Fb^eLxC;PnH`Ok{={I z7M&OLVvM=B{m4F^Ae+i75NIq+4yXBXNd3e8por6)Cy5LeXscMxZ0gfDi?z!11fO(+ zdE0^NJ4jPUcMh5kAurH_iw7mWeQ=_VV1ab_=+Sia@Zofd^55y>N7I>;C(=Xr-jg1A z=z(^!DGer71`O7?c`4_J+;^CzBbJ;>&>e51HzH`hJ2;78z!F@PGT6u{ z<#zTE6xxm)Wy36%VDPj-Hn&bNYk7AK^r5N&4I$h9EkOi^ROq84g*Q;C!KT5M^pJ1h zbCnw$XvAyhqupS3Kl%Q-2IyC}hqOp$_eTX6y=}u?%9tz>L=KNo0z@!UyOU}D9U7dV z%tRJXgcEJj3V_Und@3bhvM4Y2pi>aTAsL-mT-Naqsw#l>IMG2nx-zio~Eu|P8*at zY7{7yuVl%H`^to%vscR-yyzY{kj~ym8+Z$7A`_yL@?)jtM<)ksf}Md)%=w)Ws#jlf zFntFZX>(2l*_NQsuFp_)7mv>9tAV#AFQp7LfDNxWzP5ov6XhCSNUuxzKV4XPs0nBZ z-hk5Au+*${f!rq1a7p`?b{&6ti9SlTR8r2uPA;vF>rzr z(B{A?bfrX%B$zW68r3Q#Ogzj5Nrm+^&F#13=HBusaTCg)3zV0mBCOOe7l;tnKi%Ift)94kJo! za1XIm8KsMLgDiE#e?M9RmxrcqdJ(PlbQ9mbeLa_2EK8D9muuc+eX$iCQ2sZ=faJH_=?8F!! zZ#@Fi>8Wvi~Rm}H(T1Q_A`=1_df^eT8pi*b*)-4Qs(eu-KtDQBuy#x6Y zP<#2EEg-V{dyybOzM+kKwUsIc(;!s!MKl++b_-K^wjAi>GWwbqf>{{5m0~=1U#otAL4s%;-mZdet){>?BR5lAm%K>sn*Qp z3mJU$P-;AMSnGfeNT!_xRy+u-e63R_34BiCuXgM}I!<8JO#oJpCsYfKSeXudkRBmB`5q`oNOIIZVRZ5 zD&CTXYY8?w=qNlyHr@row8m?!vx#MW=DCoh{OvLt4NfuaQh_CCrYjg&16VkGFR!{c>XqzPn2m43N}Z#LwLRZ3)LO)z*4<7zuICFDi5 zIwI~+)>f~}a%HG-2anF?hU_INQCG$78aI7g*_j7*Bp-@YE+|DpU?Gy}X~>fc#qvOo zK}j{$^*xpE02SLJ$drd@*u9$|xa*zebPZ_dLH3HgJ42}m_;@HgkgY*@5$3q1vjJ>^ zt__X^&vAs3_zVf>0;Sm{=I=D?=JlcRbeVvAfW&6ePB8gQtXGwMPzm321Y0xwovrcn za2V!5h)q5hm6u>cU$=xBox^vNlbx#}H3HIR3Ah&IuNIg^0n)6O56TJR)&^RmjDxE2 zvtb;ZIfwS|sG7aNwKOp``LtuzF;vi4ITkCdIzDWXC^PB#8}#!yCxge_!%;9w*E_I^ z(k6{FcA(XfShRq`TgKjbG%=SN(Q$JgSVfR<@7V^Yi`Uy??ik!%A;7yTP(3?fr zd<8P!!rT-Sf1b@75+uP;i3o-!lM?D!h&4tvSSn*uR*(g3Qbv<-s|lwff%Dur5{d0% zn+Soy`PfkFD37(#dfd^|oVt1;IKus@Vx!-AxTrM|xU_SlNLVzWXrP4Uzh?gGxHR%; zpzBG(bOYII9YKRN^$j+K5_k{gaI4B8Tiaz{%Q_U9Nke9W zfhD?cCg7KhKqMbyF`nmsS*0u%l}vWz3OW~QRSf=>T}bVevAf}gG?3!9A{N<6Cf`Fj zvyt-j3jC89gkjZ4BPmmo`Wa)sW4oU?PnpN+FdjFX&6h*$0+Huun2%6!0>1^yeaAFyde zV_qUP$bvJq;}Eg)wcW%cU=p#6Kq)^lcbF;(5>a0Sf|hiLu{*gsGz`)^lKL6*d3^{A zf!j3K;=+wv>8V%Fr}Nk0h!Je4mt~N!0v~LHAb1tY34Pa`k4rPGgUqj4%1zUD^|G-` zfQ}A6gOnXd7V;!G)GV5Dkc(rosZxJC=ujQR|0TxN^>mToT%KYDs9Pl-qFSy@Gk)V} zK#h!mwoc*8$hDb*`_zZ^&cGlVQv~9($X;l??Y~oSB{g@x3E%B18dUv*qX@K4gLESv z2^Udu*dflle+<1Uo;QpPfVyIL&1;gqO7{cSS8AIf`{A;%1lMtLeu7dQ^B6|b$jnf> z(tiOD$H53@4jkx?Aj}4VZiG$`pwV@dpFMEqc)IuW@sJK#Bw=5egX%$YzX3gD@aAN? 
z@;YY4SEtg}BI_P9BF&Q4ZN{KXU%rNZ-8i2cmGlh~eUphz612BE%6^#REuXRX%eU1C zOim7@$@)ShL?C zAlVRh2fb3qBwW1>z^wn9>FC?WBC;+r#l}9*4Qew=9WAHrK&!gI*N~-4cd!~$1LL(eXEP zmI(-?iR&KbcDMw*Znays) zCI`IEG${7E$@#nI-XoNG4>I0#dM(qMzK%~H9KR_zDB~bLwsE&I#}CozcIwO%+s=7x z=vo@O{(1x+wuCRUInL=*$Ly-5>|y&jw_X~&Fqo$MM&jP93yvT%9R``87%QN_qIUXo z&2n$8F?KVQCdVM3uQUDzCC;G^g1D2X;4U8T33J{O(2{jFG5I@jZGhPfD3>t=9;Mk} zij7Iv`-k84C$^olSNOG8z7QRxUOy2x?;jJdxJuqXg;Hb$z7Z(pKqm*o`D$7=5e;;{ zw2?g35j-vsgj`4esPE<=fyGSRBQ5ZO+6YWV%W{XF9jCmeWN?BX(LMt(kx$owbE-?S(O$#KD8{fNyGqF-^3N>dNnAIoo}v@QZaml!~%~Yf}dp& zMY(R8pc`PW@tP*Y243Ea3%33HweM?Jqk*dk8;EC!zB2a~c+BzGhw*P4JhVHHmGv9A z0MOkgpUW<1rCLZdOOTr>$3(lS8@?=vf6IQ%tvi_K7)KvN2k$UtKfUmzUEKS?k#r2M z=uvo7M^A#h-hn*(N85$Z2W#N$e4zE(60U&+(kgDMY6Ac^Zx^^wrY)8sSY&J|7ip4z;CaP{U@$`v;s&3s zA%xQ*pcDM)%@nEgZXig&{03H01PBi&AKL9oHHx)9^joq?Xaq?HgRzz5Qc)#APPOGr z0v^TY!f4cUBVetOKM_joz}^H~%`8l<&8Xu-FmGXjY6WrXh_YXP*v;3@?^{6wwHImJ zrS-OuZ{IE^l@Y|$HiD$IbP`}gW(@(+tYzb8FT)8}l5Mg45=7VdKMCm;#)Us{4gbVrBMj@~NGPwR$Fk z`fBhR+jz)KC}W^mHc1J|#umYpoGshf-D)+to`cUe50YSsD!7Tk+lbNT{xgWtM@$1@ z0x%^zK(OiqZPLS#vLhX>l431cS(aOa3@+b9^sA|vjz~0c8g|vQp;ySUYVXvDx8?Qlz?fc3E$RAsmTUkJ%Ri>VYJEWw*1OfEi99VAzUCa}a ztI#*fT0SOb1~k6rBG<-Jj&cPCAI?7K-OeMAZA3G9Vi_bA8YMjEtY)rAujOpL1R2Ai zI%PE8c_NDC_pTeE<;2&bUmN_b?d_JI47z7o=L|kPpAt%~=5m@x+X^qIM&oFe+Z9Ol zzL(#pCe&T)9-n2yaTXc7%@2Zf&fMsN-EDLSYu0T^(>c^dsdpc0>r%@^{yZe? zYk8tt_z_6m?ai&Z8G><818fz~#pT6W#+J@S&O>XUMgojF*H9DE(6r5^F$wQJ9G^g* z3jy+-=U#*FTm2Ex%%doGpN2Dax+k@t!b0U<2;~nQO@|&nmX1AoGM#+#o^Y+X48Y4?LDW^x;R-2S4<1`shdAl|KC5ccn+~eK0+G z&m-xPvyZ;zM!6R>ed##jZS{;y{$^pJhORDB`2E->tn#ZuiDaCclw9mI(o;&&J-ebB z;C?i<_hN<8f$aqRCBANk=-S+k=wmzDAWgKSwjW3h-G|U;!u7iawcn0Iz^o@hl<-8t zb4V+V)_sRl`@TcrH&u_Me;wXLzHVbX*?QZ7_?x?qsI3!fa1+~2QxYhb>f>&@Ip~Oc zHwe%=Z*x<#Xl&A%vxX+rre!Eess?&wSNL9ne zhh>z(n8gi#iO6YPOPr{zxY^S2Si|B9IaMWU1LRO;=Jlc!Q$U*nS@7lFfCIJ-_XOTk zQy4;T60Fy0n?%Izv8;He_0Dwj};o>Up1ubhbz2t|}f3|g5x1gc9A z>E}RV)a70I`s0CK!@9Tgqo0 z>ql9iQpe7to}{CV1_#T?@&z5gdMqB~zR5}3;2gI|U>b~r8t$t z!-|EZ#Xv5^2??xJ311~B5gitJ+0?3r0Bae((t;K&vv@s%s+F<^IQCk-n0(AbzFh-h zYeYNh5NO_R+z+pT+@jD*3SL{L{98WC+DEJ>dc1|6S^k*>_eByS^^sOcaI}W;{1-vW z)M+H>+49H0&(ANRJNyZw z^q3PkIa4!S|2eotOUQJDrh>swsF&vwvCMB1(slK#3TDu*F*^@~647;U799aimXvXD z4pbwJQ7igg@c4^Bp7pU6#Ig-!)8N>SLQzSjHPl0D$GrIzqM=9d^HbEaDv0|TDGZ(& zf3Iwa0lWO5Ry&m8m}wa9d#Vep@_8MV24Alh`hZE)UF)8fHr3oK^75NFhZgQ}=5!;8 zTT2@O6+F&b0`gka-s|kJhSyhBM}k;`S7?J$AtW^lFhgWtW^P}-ifzrMej+?|mRc~R z7GXoA5~{}jo;G|I`AkRl1@bs>9nK$tbu+Kq*^M1cPXlGXR=NP;8RANDbU$SF-rl#| z^Y_YgZ>x~#OMlprp*|L@X%Gs^2p|Q*>;?d47osz6h%NwS231jJWAfWk1ms%|5m2+< z8a9LmOjk9Bw0>SmHeUxCQPaSK5|yxijq0yxd$e^@rzUGDr8}n)PD@<_UPw*+t0FAo zb=7r^VX9k2d7~0eh(^JWz4s&AKFI9SYC9F;;N2o%~!!nBt%^LM-TbV5hDY6$O zz7U(E6&L>1s)18@8L_}wwhIa+H#NbIWc7Ik^CYdai22$}zMk*F4LC>ky=n_GB|ORU zmI<@4CHx{5#2I;Byw(ccg%2Elm=3tMZRhahCYxL|FKM}fXZsG>< zB1j9)U&-4B8&L|f;}#S{m3qGRcgviw^HrhIn|OoubB-k_oy#fIJ5CPdv<(eh<4Tag z76Qc@48bjx%DHjeS0eCYT$uCD?-k~;*F@gWn#$sQj(sR*A3So;wz2-de(Sf=AAb5% z;fJQepo)8$Fz46j?gsxAb+PljRBKX z1WkPB;FuZgc)x|3zT|WM#-Mb8*O&<%Bkon+ qm(2UW_!qx-`}_crd-;V|Loq%O z@sK^xUoU&?MWP}ZUShYP5AjQpwou3+1J%~TCrrK#t%HPLM9V~@R9xe4CDE}TWRFn; zG?GIds^oo%W_lf&Lu-t)hfNh;`+eBmv>XGZ8o@Mkyo@Arl{v7XbdK^L zmN;2kUVi3WdhO-&sR!Ce=YG0)F_K^By*EjIR&^@yTokn9`Hl4tz3FDjynYMzx*)7#=Q?|sO<5!f4ny%E?OfxQvf8-f4S z5!mB2{ilxLe`9|?IEYTu1pY+fPW=YKQ?92n^A&2Z@S6~3ROC(JLJ5bv^wg{BThucN z!^?sIqMcG6zD_uZtE*Iy3@B>vc@H{l!#I zDpw*xvtd=ni@s!q&lN4_aasj&OG%}4$k~A@-E}XM ziZDRo-hN}w=l8Pft(yjv9;|=vJZA7@)BV8v)wRHJ$OQNN$X^!Lh2nX~IpsNUdwp@V zZ{L^M<|hB{jro=5QQ>b2_heDCs%Sp#*oR{Rb5A)GZDRJ*iG1hba~BpSb2sVj@!V|= 
z=5zP6ZQ1e69Eqk2{>;V(A0aJK5YWt~O8I_r=6f3}uBu^l4YCNurCJ%75GPh~J312c zSQPIo*ZX~RqBZH&dB-t7H4uJ=F*>)$+4+>;&oyC7Av)It8$P&Zp}kkoth+s?oQM)D zMF$H><7(C^Y2h@;-NnpzWKrz=i>~H;MdE72Wl_pOVL5T8VcG7+}XzOBxAz5&^F<)-j2~@rQUE%WGB3Vk?p*(`4$%2 zytts&E$S`?{+yu11#fh8 zG(GpqxosC`YI-Pr_UT`s)p$0vmedIG{X9o40lc4G{G&mh4iH7#O4*1?4Q>>C~&t)v*chR}VIEBOkolFYr@%&1>yVR7a!;|XZ zpZb&A#`^h7Q|X`m`|CVEB$MJAscK$Zv5vz#t-Li$si}H}e^E>?F9d|@$k54R7oXR{ z;An%#>P8tRoaJe9sVYsa)~1DxMx+~1*w)IyE<`#NbD-@hsAR<(8b5IEes;aKHtqI9 zGi<{}pD~;#p!@Bg{mbbfD)M(8TXdJ%&7&?@cwafMVrMRHGKGze=g^-Q8_5=(m5Njv zaD-Vfx=ea|+36Q&<3bw&ssWl0U$+pz8AQj1>gzk+#$JrMvJ!MoZE^j!v>V}HJDbX@ z+fq&Iv2Evkb>yk^TK|9G^Rj7THmq)$^JIYzvQnfNYS4|)LP-6krFB~G*DKP%z)ZS? zRm|idon%feu5o#Z`bx`mprQzkS6*uyrqpvMImdl!w~(szh@)zU!S!EeJZI?q znx;Kl`4PM5G_zGj%=Y>fgeDERzVgX`x~+RoRqe0)y062c<|3`>SyJB3*}1DwZa57n zWnut^NyCUvY zezsuesnQ4fAx5Bs`~vsycl`N3pZ>;Q|Lfbv`cuFCZ_;beehxVUJTb%cSEFyV+LV~^ zz6y;rT?E{GT`Nq8@-@!+PVOo;Q7CTKr5aT;H$rm2`DARveDc0_oV@3h5^-H|J{!0! zvvz2$GC`MKKPuz&{-q8i0D9^8`nx~)zim5bJ7=GJ?l+=)PEpRHaHeBn;+8F7)RHqe zL5GVDo{VYS!Lah-dH0-He@RlxgUAQ(R3D@ zDurVm{|UrLUwP$f`odGM)4jNq1vv}rxYt0#&Oh?fWVx0Inbs{G0QH5GaC``v}1A# zTdx{Au^Z5v3R@z&smqZm={SNZHo;qcQxg&}oTKZ$dzJDNm;`T5jixQSypd5)n+8@~ zv-X}h*J|1en% z^M_6nUx!6Ogz)M&Y59l3`W=;EKx`(sK`XN@xdcZBn0`(djwLKP z*0Gq;YfP2vk@3Ft-QV(0(upJ6Z^ZShH`4}f#!72zR$PLv%NBYy(Jt)fZBU~|Oo(%e z;aIB6{-^4>@(SuEt%*Kp-=I5vSX`h+Y~W=k8yPLSGe$xD zT;o<3vMOG)7E{`T-~5f+#`-g_O{5?Gxl1fsjGO4i)(QdBTI#A_Ox^YK=pHSkrkYhA zc$=ZHt#(#uN`+Pp<4-`ehH#}CIxEXsabV}LSdPJPa~fLeOfzfE*um(5rACe|{szxk z@5$iMpwzbNY{_9T$v9xUV2iOR%fiTFy7JWDNyqlziI8O-9LFupV|hI{=Rj9PTW&)# zv?tr@Ofa)XEAj?GQaO4zgm^KY3FY``gQl%NgkMA4)a=!8P9=7ORnujIPk!cx>BMp~ zu|M0%Y#?tNb+l6>@j#pQs&4Of-Kn|ffo)^`@{Qk3PhI-u(7>u7XwtJx#r4t+)XEWv zWzS#(GpDXX;-WHLx;dB5pU3O$5_TR_s$Xm7Tf#&&U;&65LkNrms$DZNnNH z$wq2vFF{@aZ!WBPnh6+c(b-w$GfvrNPat1bfsi43O?8MZ8iXl@v5rIoxNrKlul${D z=d29H*|Vq9#fz8Xb7wIw0*9Xle4WpHo_f{j4TD3|NgY^BYiawp-QMJ+ zNokU%mI&ruFFXPPB^fc`pFvm3wtkVCIyOv`J%2xiJi?8ec z|Nf8v@wW5+zdrU;>Df>J2V_o^N2z066}Uw#G|=)KG`R-Gn#P>=e)oR6jUO?ua-gDq zn8^X3M2No0vdFysyI$JT~DQr+)9n zG>zng_k?S;#;^7m=ks0Vh(xw%M3fIPr%zBMBS${VdCNk%|B3(EHzXkKEH~+6$Vt zjlc`fKApz!t2%u65J6HWLTGJF%GQwGZ^hBR8T6r+#i<(W4~-f9+91lGw;+zz#Nz1U zRn6jS;Gsgcs?>T})w3WebML~YX=4)^f*KwzXgJjsOdkE)CNhFXpNG7DBPNv12(9_t z9IS1m@BRKC*mllKXs=wo`FuuRM2JG27DNfXA1yJ=So~Ss)XMKh4wXv;E;a^x!r<$` zGL_h8RI|XiSeTWH0$Gv_Pcm-9E)aqCFrBqlruB!}GZ}cGax5lnqBR5_^@zUd=T$*g z?dJc`*^e-W+e-plEcONzX5mp*3@7u9*o8+)A^6K1g{H6su{_U(QPa^dQMO#FQZb*Q zLicn%()d{z@4Da@)RzOnH%sPvZ1Oulo9J7BI70W@!MI^E{$5wnhDx)xGpDzW_2Bqw zdiL@huZ!IPBAKlONWBfK>2T|8I?}$7_O-5~EMA7dTvO`mXa||>Mzag8Ew5<{*P3~v zEl(Z7cNzvaR<5I&!~z`(O?n!7Js9s8_iP&@+TfU0qhPHFrNOo1t*Bm=Xax_mRSi35 zQH=kAZ+{YzyF0zy0un1$!!qt;Inj-cAgP1^7ZpXGvq(Uq=;IOr4MLcS?3qlz6c_oD z4T1!;y*AN`*^GbDbunlqxY+dX{d&uQ7vCzr*K~Bt?+w|bWaN7mN~&&UF5X@ngGg<6 z%CQS-=-M{cV{`p!aOPTU;9G2f>uf||FI+=VVD*i$fT=1Y`q^kmII^S{pB+y9*Ywl_ zS-?}PihH4fdDx5yVGH+ME5frK^}VQ`6Z9`J9-vdzW%WF-HmkRnFpo?4dP~bB%xyO; zWT-~o$o+)H1g5RdpLGJMHIP)V*n_9OHG-YuVSw>dKlM}ikWIxs?fq3rfYZX-(TV3? z8;Mti5gR|RLA6Qf-j}%ynJtQ%IPVtL9^D2+dEz+)Z>|%f-;S$uKLRG! 
zh=zuSSSJjk)RYpncC1YVJP)l)-v7Z5rVsPD^SE*8TpAx4VE#A6{H(@Lup0AO&r9^$ zM>%uTe=9`G*wFJ_^?tK_=H}12Ye6IcDS8~XNUj~(>d1T8Ak2SNM9)b;0bw^W)S4yN z#=YSw%T;-a(>&{&0mxtcw(s9|&Z5xQ`d$jVB0rn2tCwcHcHIA-h-F63k-)bM<@u^g z)?rM2+cAsovP{|Ak$Opl4;?&+!uG+iF%pSy!;0kS(W7}E=+w6n4@K-uT3RpzMmI^p zOC25tZ%sw;xLR5QW#T^#Lc^X!#bSii%pNNX_7oajs_h;AP}*k*-SS^0 ziw;X+sw1Fk-lzgmAzMRmbFma5J~qlq0~R*&@TMbM@-vjV*{i0jeFz)k3LEJIXFtTo z)xGV!Hxa_KL=#7#Tv_EiN_9qU!uG0D^Yv|~^$>Wpbf*G%{!^>;1`-BPX1~_W+^4rR z%k^dan^1@}*n)=;_83{9v&hAWGDtw>8Rie$6Bf^gd(Lb-=Ud~;=`*j-#Lwy~*HUNY zSn8-6N&D)?Q%A!JpAmy(kbw@=+nZ5uZ)!P^YU^9Mu9!fxSSxsH*;dodDmN5Lcm^YD zD~K@?Y?#HasbU@O^clU0j~QNELgxeyG0*bmlFd^Js2snT#T)JW|LnU6nzr+Uie)md zzX6%3Q3*)T)(S`;?_a{drR&tQ2!={G_!rSOYN5!Tyc1KU`}Vu;d6 z3D?zttJLQZAeDhIl*P{oun4?LD7}?{iVZ4gaSG?2Ie2W42*Xy?wr?Bj z{>jVf+Qh42NhFPF1usEvD^Mk;2I7KTWCO_=3`ftuG?ZR@eke^-b`iy@q$Jl!K-*ls zoZ9Ot@ic6vLkLnHCQ0d3G?pyClEA2{1X0>l$GwB<_cG|sat+ocpo7fy6}B!0$m#B`P0+Wfuf2k7X9uFn%?`~_off+2%4^6d^L>>-Xy4CUV^lhn>6SS z5J8RsO)lex1xt8BidJSmT^7X69H`)DF&|@0f@sD?QaL}kOPTBF-vkeo#HGl|3S`{! zajZ>w9T5GfLYaRa+QB%6vVZk^{?@i}R-OFT;A?SCl@YY%M9cZD@S{OgIml!MbE~eZ zm4BNFY6yTy#5bhT(L<&{wa_ zVLS~LFE#PisJnrAI!_QYNI7v4vCkEPC(XfUT&r%^i&fgrK5`Y_}3QNbe+B^C^Cm>@tK!)x|SCgIH}(xh#}wd>b` z?Ju$bpp487H%(d2MbXVWj1qZ|w)S=mI5WD{Mt16>74)i^jI)u<)W^=lcZSiz#-iDY zp%18$3tGU`5?xV47O9Yv!D8aMs$^TO5^fOQ4N|hd+h)1Y@J1{obr823$*!B&T-w%n zZ1TCu^hwaeRwXvRD9agWSV9pY5pl}oUNb&-Hp`rHw(dIIykkSjC`j0XxHt>@2g;QH z!kaYCQhruEQSJojke+QpYIlXs3uf@EyaGWu^5N4n*TRKvS(RitV+G=Lk*9I{dTK>e zcnx`HJC7pA%wel8UrlwTW2tttFIBIzIDy*KGas5T)~x|ORLxx_cTvSwzVE<5vz13v zLwndA)KPk=K?%Ox?6h>5*BVLnCH-IXrj$(^7S>B3KWV{-v@=Gn2dIy;(HXGYTI{3s>UN%&<1Mg~g}P_dw5 ze2XtiDqEbwYmUsCjjU`D;xMmKN+4fKFa^;kBUbDDOAxlUGEEFNNlZ4CHu)O^t}Ozu z5(17b%C2M$aUG*X!DHt!$3`(o05AjZZ5a;7JOSYhrRg~`>m>rU5;Ej@-n)-7R3D|@ zO|tR|Hr}cX7G?=*h6e^xKV_l*zJ4~~Ta<|U(%|4w8XucT6O`D72%!3I^rh?9 zZ^Z9>P9M+pd&_Ub21o=-!$ZRa5d8g(o9W8cEAjO_<7RN_e0J_|22oj`v=k5D(8T=GwXt*&glXPVv_Y<^6*qew0im{0 zt8liK2&k7WiEI4?Lb^eDd4t!8M83C-(u<;J#B@P9H3Gy|M~d^b7< zSrvkxb^L(P`5Vi2!s})~(beXH8^J*?Ke=EBlCt9nVlAC{gAf`-0OUE; zP`|GR3Jww`aJEEeGaU)0de8w8uwdy#4*!Ci^gvBxIsw#sg0Bz4OQ>f<(qC+xO=^OT zas?!TyE7IGz3G;@`L$XJbcbw=Wr#(O#TCSBxj<;%dES{*<6@JuR_$nl^Ppc7Kid-G zF9t^j2$tsD{P{2ctzA%#jc8h4m?auVM`G}Z!rZN}X@GcXeFE>tv94m#jB+XCoy)3C zAbvSGj)y28C5pSwnendz0jev5_)Wl6v3fOCER&&c&VyQTeF#h}O+^Wd5*JZOlo3nl zU@5I?;p;l_Jg$Lrhs7P(y@Kr5vSL~uPigLIDqrp=V3`V{zUZRdh7ul@?JTv5?D??# zq5okZYt9c1lI?)mWm|LuG$=07US@VWtwH`=ogPaI6C-JQ_-2~8bv4b7^rzXezO*zx znAYf~+G4TY!X$Q$4Sbo+c^#tc#^g-ez-z9QPOB1rw}E}iCbmP6Kq3&+x@1$H5;iI& z1YIixSyRIUX@0_B44w|zZ!FjZ5hY$9#;Zc^O992bbG+sC+#;x$7{hu4&WPoZIV?{S zV-FQJjgWkN>QgVI&wlDzl8}jXpanFkt(<=`U#4_XSF)VS3D|0}g{hyPNHsHksch;} zYTX!3N1C=Mt1_m+J)scEM;N~~n{HkiCwRgRWV|dbPgbVQCCcn;j&NAOX`y->kDs{PA6pKXy1&GM#=j@||~hcgy;)MIt+|7M%!U8{y``b0%(5>iUnN79JSQ8A2p+>;%s-wJXiL@0_P@!w_UURY8wzN=Uet+E%bj#B{v_1XYYW%t_)n%;h) z&wur5QQ6Mn$D2XG2nLyk9KU#VtSf4!}1`3MmHXdR2XKm(1a0Yq0}d-dFK(ql}_D- z#gb5>z|k;}ig?O(o8U&=$wlxDWXB?0AWhu-!tu=nNnKpyH@F%WcQ7rp@xh&din@5N|#1i>>%~yH&tP9 zXprQ50=C}*ZF09MUT+aHc{4FSKKy-g6TNd&Ebx~^1vc;~TP1*5!XIsp0BL-%KaCIF zBw!j&3!tCNoSR0mt5frs=~4>1!taJB)A~Hubs96=k;ybOh_NkxaPt$Guk!E8EMu@t z;LE_pd=&GfQ*?v*yrx4urKnX&{~%1Ic3eulXG|phhg`!h@Q3EQEg*9?H`ukg7F;6c z83@$m$V58-^3`{tCcV~^#A3lRIp9$vlN27bn`*M!VerXO~7JwLAPXx+#A1>$-M}5#z+bkY& zcTem5K}riHW#_WNK$L+EI2iwV#(tJ@US+(8c3>LkJ>*_2n zpsTSlXN3EWy*Z;aS!y7ci{2=S6(_{qiC6e2#(k6#S<84!1clMz7TvOzfJrzk^%sz% z$Y3n9@tj$%Zy!k5jZ2qPKS9$Pyqs>hHNBl(sTHI18qmiw8dWxsMiF^jnP>hp$0UUp z;2GvYz@gFIC0@Nuqr?!tg+sT7SR;428yTNU0)uky3%h;gSd~+HtD+oNU(-p#djQ1# 
zAP?I9;j`7^-B}|i%f=J$H7OD{vKSDG!ulCaDnoie)LTBDor(mo@CU1H2&e-wZaHbMF;BDZ><)dwhO%EA>yVq}Q%ZrKex* zPuK8^Y}lGl4;`yd@3^NiJ$$q_J+!YZon)@pxUNS)PEyR?(TMqW8)m*#p;^=9r?O;F zDqo(YRLgxnP0~I^*>@7eYL-N1?&hU5fAyubc;&gYeB~8<9H$7JxaM#h-|E;yn*QVC zi~lD7(haCK_$PHiau&@fYGFiBBKHL%re(Bar%tD{4?L8PojDU6s7e-Y$%`d+)N$k6 z2_uoRmcYxKql;R3cUNb$FE?VyTfy%}2u6m-t%(a~Q+^-|SRrHgW{XX~AQup?$BkAj zE8R8tq8yZMa`s840l77AqM1*z0{pzpBMT4kJqtoGv7%*OLM*P`AR*nA5LT^eYx+s) zu&-}8y>_KP-58yUAfiUy6uYooTr`oj3~110YdUvR$Q)S4Ffc2GN}>I-%WMgR%>Q%K zCjU;=<&@9AsJ0KM})v z43ujrJyytA<*yz0d61Gxu;%>;jE(>NLER%dftg&b{yw8_X3JvQf%P zAR?Sg1rh`m1ekSOGpVt3Jk^&?Q*MGIBmiL1(N$OyZItCi8`#b%?M3;Hc^kys$ewqM zc3fsR2+eslB*sLQa@hhVyfZAI)1W|e@MnbaS2aW?aIk~vme>A;3+W4=dojI!?i#!r zg6EML%1I+^jBs)Iew^>GT^pkGJV*;R!O1+GcLX;vSs2$1IwDsIF6Jg_=cXLBMbHxl z%FOW<0;=eu(^NT7!R)Q!0h`Q3HR` zhJZ!RadCEu@!%W@2G{05CpYoji39>k8w>roBGZmrCY_(%R$z?edC;<}gY)V96%bw!$)nBX>120xI@wvC z?&+#Zr@Cv?@qLY{y9KVTbP3MCn&hej!|4tH@;VSH$G(^sTCDjDWBKzVn;3GIU-P^H(9EIt`@jucCXplXaIW~ zRpM{WsC9OqrqndW*z3-2ur$VF(z4Tz19c@hM<7g_S5ocT06Zl417y1vB$p^zUA=fc z{oyCSn11)SKbKx2;Ft!jDWw!w&0_1;Hv)N?VC`}x<>rOShjY(@S$^&n-`f--@E+M+!$O`G=gQW}_9OE*ELE?gT(7cUVA4pO?9hhxRO zaSqqgc;VM81T~v--$0d?CTWA`^G+e>u!=lHQ*~$RZGz-q)3yuJ^v32Mb6Ld5yVL6w z2|zBBxzUvu}8GfEy59u{E-uG?AElUFR326xgL28DW-Ip&C+4N8F+P z?dkL(x^ipF(*mfj$R8S-BhzV!68jh>??LXT zfz{G9xKaw2iZySPQaY89w>tKMrnkP0-gdu@4Ou=yEHv)XS@ApTVF-5A z_;4|7uyq*z(i$0;`Y6>vz$Z?eNT*MqNheR9PWRq>U#NIL_~>J_wH{)#D@WI3K8;VY zs9P@KPMD{3F-$fv&jKD zIZts7M41eR%o%-tP`(S*JQwq<#$6eFCtuDWf(%o=88(|B$jiNAkz8Q|Q1Eh95lNNQ z)yuK{+wv>due#7WeAieLIBwb1yzXWg2G#5pkvvQ%1|4(va{=c$Ll zJCXI?m; ze&E9}(>fn|h2UvPjub)6I%QwKU*%j@DATMnzjhu18>@3@ z*UYRS+R0;lElmu~rT!~psSncZ^aMc-=uZiAsDcf?5?uI_ z3{s>xf}uiCWDJ}uahZ1%2|^)J%1ZtQU4_^374JZo{F@@`R4kG2ij9eO$z5XZt@7D7 zR7i3Oq*S_qf|Ldo3fL+IV3*!FWgq8|W!4tT-%aqv))pr~IBqiM#$|*> zr)4w%NPgz^L}uqH5sQTA8XzL>xdo6aw8*_+@TLlPRqfqKWTE02ygrkNN&=Y**0M?n z&6TWgCU9npa)vf|4+FNjiQzN{pKocFdy%=~ePB1U>mV~Vu7`y61p+G3)&)MF-OyQu zzQ8jzK;=B^S@(|}J(eDL;C^HYItm=A!ZmkB+>vbGIk)TKM>Td6bniQzI*&h`4xD*3 z9X|U|IsxhY$UXu+Bt$&tYOZN#GgS=&sTBgBX%dKO^x?)qJ-G#mY^&3;gRSXkFBAj8 zZuTFOz-bUH#Wv-T$+s=1Yqt#X%2LK=i-2+kF69b5%DLJeT28vt&Bgk3b+#g19T=v; z;A-mcAJ_|;-gc4Cf5HdOHVTFI*#b8tplzYwg3y~V@)%_mNs2=dRi{`q=FuFQAmA9I z)mS3;zJ2>33%0|Fg8M`!eDLs*bc!IVy{jkS|69>sJen#f>D7=`O!69i{r#Cpu3B*6 zy^AgJv1t~v$h$Xk+tjrM$hG)Y7J)fnZ&i(#S)^R(gqlm~DgpP207^w=i9030z|H8F zG^0 zv+4Ig^@s6t_ug}FI)G-+7T0Hy#e0jz&TAgpSB$|jndlTq%`imIG5BOEP`{xB&1Wo# zQpOfzDJLHOZRRxXS_)+|TtiM^2`-84@^a0he<|Y9EOu$YJ~Dzx9vr6UU%8mh5%?@o zs@+Ed(6aAf8i1(!^lLZLXJ5LOKKt^u^y-aK)Qh<~1mWYLLC?H)9mMIm^tsQ!l&)X9 z1uuxrmdtjGi_=g;YriGd$lwW5MHdw*W&BqZiwuNe$H6ACxdL*w!sfS8L)m~0gP^JJ z1`QRX^K{N_plL(vx2083t7U@1DNw8t#%^qGJ&jP-xi!6`4dA=bhA{id_FUqrXGw_=zUDsAa z=tk*Uls`Xgcrd54iKRTXQvT}hYPoBy1I4B^TLrD4wz{2rx08FAf9YPW%BjrH>ouo=U&|tG}B5 z?Qi~OdiLDSFZ~C5&-nlB2!x+iWb?U|KDt)a^e70+g~bLBiQHd&`itp{&pe%8KYtG7 z;8NkkNiyAT+BAzVna~u@*KNE=(>IZi4cQBy*AtlC`YJQ81(8d{#tPSdi3cYGmW6 z#(rXk?ujec`_lOvw?Gue(lt<#VR#(~R>R?_OveuOq~4C!u$0+Ccd90ASQsk<8R2Sz zu}y+7JL%TQ_&3>f?3`Oi4`@yMja|4O@uNZ}9-M(#{8f^6KbKpP3F_LE*nX@^_5!5p z83?@tqnu~owRGKR`-vh?&*pHKhx-~C$p^7Lq!h`cg0aoCrb21%_A*wmvnd4Jcg~Yl z16u;7B{nG`F;O`=UFsZF!}T)QnxV8}me~hCX%UpB`@rFJ^1gdg8;kiU{GR77+)PiO zy8&7=ke+${Mtb^{Yw6bHTsm_0WV-+H$5{Bg;R&rk2p*&~c@B=tWxCDAKyl_7UyxGt z&MM_$ku@s>Xct(hkJEv*TY1ig2Eubtf=c)nYe)ewCqbBI2#QujuQ=yr=FI^2&C8VA zUbs4xt`5WBScR7a(l<-lVRpF$RFS}z>qA#I;-7TAfqGe*cD$?j`$nxM*i^bf0NQO1 zU{`3QL$4L(@G9o>5~T1&G;pda8VUUB(%A3>U3QS2*}U%F91FW8*TB+sBm+UxQMl1y zDI-r?VS~a1;~Hie3j<%v=k7{oI6m_8Hu$^z5RugyC|lk5{3vJ#*(<-U;*dqM0S zx%tm=jtQZ2WzljDX!-;tolye!zQM8d>V<3Rb5FmNe*d!s#{7GQ 
z^8GAmVW7}JsCIO&a8LCkt1xhDFqE{{D6wwHNh6q1f+I@p1k1}HjWf)hWdikT=DZ@G z-iKMjB%^G04^|?y+t%Kb4jnkYYtU3v+d+36ByWh{wKZLI>e)CzR~vsHL~oINlExL# z+`gOF(zDM!P1*VR^zv)3q!(U#F+I&1{?bb?MH1|Fay`&z&XF<+9^if$XRVj~AJPj3 zPL}JU+{ts<3wkI=o;V@5sWlMb;9dqY?s|N8q}qt!ZF|pt_*Mja_1qKcVzIwyc7_ghXarL(DaWjvK2CsInudV^qgdbmH0-nc~9>~+fK675MO>8=}w3_dtS7w=mf zf25%4(?9n2)7O3DH>R)s)8CPP;zvL6|Ji_G@5I0K2pFtXV-RUqQYBg!#Q-YWhh2EQ zS-7z$K`bn-C`70J%FP=$LV!-BNG+Bd5P+@o2fRzbQ^|%=rE)mD8j&ait0_x-EaGx# zBpypJmn$XmRJzwR13FxgN9JTnazelg-sD(x{aN z0n-whi`{?A5M5VTL>CFJ=IOXw;O5>f%iS>0V{y2hl$o8!N}ric*;fu_V~~-)!5ajo zucnt?e=a?L?m0><&!(qedMbVP`Ol}%;F7@{mW;C0m(Pc*4YYY5)6oZZFomeZzho}8$i$0q!w)sl`hS9T;#m7GkoTfGE{ zGz};*f!gyygz!$DIUB*R9dv{J*I1N>=`tHhV~pU z+#*0J7iF?kqA&ytu;h75sv>xnFc4oRZrZ zD9?3}u=Ai%)1VHW@bm6Hel$IL-^q0P*a2F>OKCla@XKQXv|*88HH@HDL8bws+`tqh z$I|FJo0?*)JCAbpb~dVMlU_~zmxfZ`c`S!+Avu8fWMwIv4XEIT2@G@Wwf>3pxtIFW zxj{PL5Th)i3|&b}cng70x5>)EeW`1I58bY)A|sWt0s2{6kL*AfdQB~;C^ynEN{3Z< zV@GNsfUZK|vyt*^BMR4+*;Xlu@MZ*l4RO!j8XHKrhWp-hL5Xs6;fTPgP=3y}oCKOa6gxF2iZ0J+H}u~^5Eh73ZhsHt(EzJ(J9+N~ zpQ*rUDx%nTp1jUe41{zO*%M64X>S5!4VkId5&dU#4kb9{cYMr2fBLqAcDKU7hbC}E(LDW8|=*OXD-LGl0ZGAWR#CkV3Q0O zSFuH5`62i#!9fCP6x!sfGy*qRleY?Tt8UXxk!Yv@6`)H!MEX6=l}IDhvB$X2fKWA6 zrGGdG=c8OEv^l`GS>WYB|SjirlE!`&rJr;aBdea=bfM6j}$bu z6kxZg@9tFE{nGpJ-ZTC`Gy)Z2dWnsOGWfxz$tUkZQ&a&DSL{N^Q%|Rma6=FYvmSR{bqJ9*^&BM7>14y9 zBt*M#Dc83b(q=EgaC2=li#KI37V}z2vc1O+gW~iegsEhO{!>aglu}wndICMCGNd1> z(90?bIfMeSCD5o$4$e+9irMM0mQ|Y7tWv0S=dsKjT#lB1K0lDX)+c7FD%DHqFNYvs z3(C^HuN$5QdNduF2ZK!AymBd>d+kEHHaL9GgTq=z3kMac_2%o)hv&J9X8OQB@U=OjSeQt;G&*$h?);x%R7c^GG? zt2IqmUjNkrOn50}(pXwUz|_Efp@FFZ$t;`M$lNL@qg`=!QOT(Tp&(#7xc_iE0?GEs zp%Y=1(T(wIdv{Ih>Zwep&orh-AMZ|Q?`us5_hFGidA5@KqTG9x&tA%TsdW?gB=cA6 z6ji~ic-|@}a%=3?o33mD81V?vQG+B2#U@-9-~w^7v|1?LN$w_)EG0;&K=iZ%#IlU` z_^m1p@_4|p+l1i1u1yGYVKbTm7+eW}(Y@-Mi9qGDezvOiKCyNs^Ituzh?ZUe4SBc-;6dON-l$p!kGr#n&YEmf*@7qIEY z@Suc_vMRVZon8Cj^mGOB`tAwBPaK0lTn!it&k&jCoYv&+DdMV}1m9bJ9snim=W|@< z+%JQy--5?u;ycNjyhxchO5qHsntTTi(1AzCv#%YJFLIj{hRtm?g*Ph++Zioa4jrP3 zKmcbzn7?Ok40B=3cMF`95K)({B7EnvoJu*@zaGo3-u(yDkpnHMivXw&*#IQ(n2)Tf z+`Dri!ItfoxmJS&OXsdmkjPAmrBey%eBL>(rCoFD@VK_nmXrAZR>!WZqR!D8tFAz#^$0tPIIK1AUaVZ^ z;<#z}kW2B_2RsYv_;M}2)HA=t3+?^-Kl2EPB8lc$4zoUQ*+7$1Ay)&NaOVamwe#%^ z*~RJ8r;!q9kM6l6hmNH7jhr{jIgSse9dE93noW0pW(3@xP{$Hhxzt1rKd0`jdw zmr=G?aAjYom49f6GSC-42N&q~(rf2mq~u2XEuw}Ks03|l)J8TZz7>QqlUbqq7* z?z+_0SW0F___#bmx%71i*w3aHU;cc0_JuF7h(FI}a22SXauMTOi*Q{HwiuQMYRUd8 zSoB44Ht7bk45<;!&Lbz8{Duv#x{+Y1?I^+0(Nx{ilS=EGIXC>Gxc<{B_rd|ukFB*y z&H)k=UIP#pNynB@DhZaVn?Z9VX=1NZN(p1FvKa~w&$X?cYoz5va9do8)PfWwvUC{> zvmJ`cJ!B*yxNt^p&`(jsjWMX^F$1^l+^ersrn??? z9;3pMD{I z9Kq1v`OFLH59x-yN(ru<;QCZgCw_{pyf0n1-0SwM&cGj=0C_BDQ*NPCxD=tuHqe&t zj(ySaFhtj3SxpOp*b1mGLaLNS?M`eVXtL45fY)Hgq{+`B$`nBYKX)o=4Zr_em0RZyWd!AZ7z^6w@jH{E}b37a;E0F4@S`!o1*kMLlP@3 zLLS$G%`?g#@Ocf`>j|6;nrgV$y~%FI*UowjRxv%P z4MX2blBG~6xB9?BlbsLW{MfdVOG^9XFaLD<`j0-5jvqgn-uIEem|nPYjfj^)+x6H? z*D1^V)Ia@h8xvX%ECIcm+bs(FU76((d%S+c_}a;AUx+VH=RkOSNpcRgjrC z4V>f#Lg=2SOtnn*)x@UKv{{GX;8Gf-T(qDLi`i+sPxfiETo3}18iyU6Rj}jisi8gmfdg-mK4MzSTzWZqMRFWlxlJ(o#P*E z&$}I#Zxt?NLgDNN6|p!)f6hYv`mBd06FrEGt+JbY8ppIL_J(Y3*~TtVi+G_0Ae*ekuUF zOom;_c|=P!*V*pHHP&l46}u(tL4?{Vb2dxlXM-zOB$2taNBw@H9kSm> zNRqtCGG#2|UBEfpSW+~|&RuehonvSTjh(&~b?JNVeJ9D&`_dEdcs~i$`_g-#{80M9 zd%i5a?>+BNkH6#b^bQiMcRc=r zeT_k#8h0cO2H;&?z3IM(INx`?Gu{8_qwI~x!|+_RTtu@8K2K*4mQn4HltTst8bT43 zi(R>d?zNe8@xq1l>=&O&ufFigT?3s>iBY6yzPeV0E>Z|?W>3lJg0n#f`or8DNQy}? 
z=e(*oDRW){+W%A_t2GQWYToRw`VB>X}+iM37|!Uv{eYPj94Qnj4f1 zSYz&)*sIG_BN9sTSI?TbB_(2(z#oFXos7!>KYmWHRTW5gE&GejK%0EVN|L7r_@<5i zZ376aRRv1XQWZ1`f2-qm(6l@|lz#ZzzdC){xBTVw@BiQ>vfNwg=}-S`dgR2h^drCZ z!mh9J^l$!bI(p7V_t-<7`XgHNV!|JH9y-~CG&~3;EurqLBq*=VNDx#sjgus4Q~90&o6YctjcRRnO=yru-mS~lAT zT1~38U15={;Iq{NLDZ7PHj&M?5jeFr*J3fi_l+(Hwn&d~pHkNAzW@@1xSnRR28NQ9 zdF0MKMpU9Qf$+{FY+d3r5(r@GIClze6@V68}jwkxulIjs_o`GHY-aCWqVfwSV`&)BU|I+6=FP8nf_e+h$w!?eyhD&neHt&8}& zXlJ2ZTTfswdQ`ECw$wDG;OmgeML3<4W#tO^A=-yOqGrJc(<@l|2*0U$L^XrJY7@ox z^3>f1qJd^WtwG4L0cG-vY0hJm#~{ewaN2jQB<(*=;?uzA zAe$cniI}7sumt~Rm98Z|C@Sf6&QiJ>?ZW~IDS|gW+{F1$6|v1AHs{VbjekqfPIG6K zuSC5abYdUtz&+{V`yNW~dgPJxegdSAy!(mtr#}4t^fh1ck@QV}`cI|rfK&VJU;i!X z8@}f2Xf*ipw4di!fIO*Rma89Z>dwnRlT2Q6aBEpN^vXgkvVcPFGS-U(y8;Yr1h*np z)e69JU7A?ydO*?ZM2~3p5XH&DwGxs`DCr|PKtQV0MeMa)i&iwzP9gkz--8dP(+@t9 z9)9OLS>KKj&?*j@;FUEZ419!EeG`hq1W-)`>;@7_jufm^2!7^4@`m9WUB7w_55md2 zgC?m2az+Ck$_@Y_4P(=cLcs)GL!h36D2k7={!YTba(%Zs$2D3|dyNHkp5WYda2lR! zAWag-xd*H=c&{5YyS5d}tYNlpaOQxSP+Q(6VBFWcKN@}N2;3u(-BtKhFfcc8Sk*9K z3JDdjlU>J*;}#|267G>&E~Q<)r6Oe9Uv|eXGZz=QR-(6!bRBkrMl}&(DZnf(VhY3R z85H`LNEDPGu;FD@Hv~CTBsP^Gee0}ID)cef5Xm z1qhap-}|}$CH<3s{TmURdwk~y|3tb{_@=bHF}?c(UzQ#|dnjFp4=`1HrILdfe!k^7 z@BQ+>;0SoLiS86TOtMZ>?uIakS>dJ=ZPHY?j!ktG0^Mt`K|1cI#Tw`{O0IHKBta9X z$elTK=rEJNI$qbz#+%Rbo??2IRfJaSwQNxw)9yAioe0?OZ~^B=wULt*S0mIImqn`wy%=@ZMC z@;gx@*D*hn=b^uxB}0RYYJ(|OA+?p}v2qw0#Uyp`IvY8h3?XUI5=#$ZIRlbZ2AQ(5 zrRna4+Rx>D2-3Z_2UTf|PPfrXGD0{t<3Q<4kfbGgHjr8GKXx=7zxPZ!N{3m`;oj8P zj+Fx)GhS;mPRpB6NmvLi-H@ph?6pJSY%FKN-k5>wVJ2p`Dx!W`yR2VmtYq?dGC1Ou5@M0=0{W#K(gVfxBFLDe1$b{{^m?MWxpmuQrU5OVA?I|4e~G%dM(PEBlH^RV(Pcl*z3{5fzjU@6Lo5JZK-JtoVlyQ7^`NAS}@pmCt1B^~Leg?)cp zI>F=Q!LD=z9ehP#P{!m zk!_slMr3pD-QSy9XvcT1w_vv00E&Sh6Xws1GwVukPY0Z#F3fmaA%{!e&VF+S8ezQz z4++dK5acMu?h5G{vCti9LTfjy*EE_bSpOh?%Nv~~F$ifC0`@7AH0NGMV!aRYy<%V~ zg6lkp_ZA7&%F6j^`At7h0~*Jd`zJ=qs3nA@To3XKb#}I=Bi!#DtdF;Kv~~k+3W--} z&V_hsmX-)v8-nQI`k7SlkR~Cidw(0%ji&5%x&S>y3Xlk^4h5UsOH15?2E21K2+1;6 z7YJsfdy)HIEkLUwn#PyF)2wivKY>ABX zV?Xhuw}1NZ2ficyi=X@D^h+Q6So&B0==-N(V|GodizevCQ%fFa@eoNmEO$Fp3*qG3bbOtiWY5aFdsE z*TJ0B{-J~tS$L0WKN95}PnC_5g-E><$<>==%cGEYZ;`o8vxs=}=uEDC#wr`BMz;n% zLbPfgRbnks0Xnw9drUy+9;D-CXm%ZL6V?fHY`!G=t3*?4WI5687Je57X?X!ttgi~# zIV?bS91UhZl=`k+znOmJH$I+z{#X7@`sAlSmHJT(t{`w}!WyHok@3}>R;GX&FOs3JH6CSl1oGZ)p1U2P>`+{6L5VhzZJej+&~y`4c(OuMNf0tz;P&5kP}W% zWeJ=cT8rz-<``!LH&xi&C2BHg4iQuyB1v{V4M6T58HF#xd5mEmJw+GX5)qPFyYnFU zb9Q-c8w5Xmp)1q9(=H|EF#ajIbftsmTWE1*vDQwvln# zrrk^6)ZIq=brZ;Hj0?dh*I2D8tvkZt9aNUqjX(|M=!huU>1naLgmTL)n~*BW!|-2h zyPu#%eh%XD(wrqaqxCn-$Te1F&IWSe!{Ab`k()1HGoK~Gm(umsj7-BJcq~T`bf$w{ zbnuz+i;jXcHq%Pp$OhI5+S`hFuK(_1V?INv_a4yd4!(9X7eI`4Dr&Vkc#>0?H)N5g z9K2zK_iRJ1N^z0uWL;^_gF6t4oh@zjnq}8HYf=v;f!_J!fEi%drh%9U#E#= zjDWd=lH5^fEgjumRPkwy!0^`3TgE&sSqBk9%yOB=fDMFOE4Dzr9d&i5)J8Ip3_g;H zDc1)iu?f`U0KwIv1MLJ1yKLkpcABM&urHU_nvn}o5+itjimhtKOaN8lxicS3!m{#y0oN-eg%bR|-yE+d#_BW(n^gma zFHP=tsm?`!WaM=_p7IC&&%~2+s3UQzq5HHz(UM9{mouilH90t5ATu%tTHzj@A&{Ry zi)_|;$+^$k-3rKPn+*6zwx=-=W+7nlj$G*FW;2* z<3o1*p2ySw{y%?L>O^0mcFevGe2a@b5x=*Z9&LlCv;;DC?zPv_i+Hm24-ODOtgx7|k&L<66Maj7X^ zNiEo|bk|R$uG$g&>V{G|uIRf3P0?xRhREL=uuKLKb}`<6UJV+8qbA<}z`;Z5C?(_L z%<032576?>=4Sxy94ubI5E?ab(vQr4?N$;|RQrgMDab-28%Py=CCkYxa;SLU2?G$& zO@#v&)#>3raU2YIf{(J}*wlt)kzhG8YxTv*s1?NIy+oc2L;^r7+=q1rq+H)BMgG`y zYT$%a!F`J|rmgt|@k?l<&C(LTLg16o!P=t4yhW?}W=SIfb_s%_Y&hsZX^iZg8K-m4 zvd$#uJb=B#1cBZ%=L5bKE$m?An(eB)B_4TYFguRhq=v z{RSZ(4&bpf_on+Fen)VwDl99x_VBtn;GpA@PhwC$PYHMlj**7NqUT`=#KZn6QF2j0 zW6MrZsM9Bpq(|;QlO8;MaF;bcugk{MmX75RxsDk;h2>Hx`Z^?{D=2Ft3DSXqa4ja? 
zu5S(edx++p`=HDmaV)uiEz!-<(W{nJY1qmbK(eNJyT=&Eh7jMo{^#DA+|W?T{cQmJ z;DZmuUf_E~N&BwDxvuiK<4cr@=UB@m(=YJv5_4}=bdL&@Q8B=*%$0-2+Eof;_QMZ(AM3O8VH_Bk%3rA z)l{D;lhbHWjX3CA9dV~FtoNr^3eVd2z~kx2$rIcD<}blXC>{~HuMcz4U7Ygqr@kSV z@rp-FLwh>jOG|PbQ$bLDk!&{~Wz=*ZjgE}sapFB6Ne5W4_KrXP5r`YzSsPhGp+?T6 zGgvaXv8L`jP>RV`U%4gadF9Yj`90Pd6}%6I*Ay1j}% z)oZlVU!t5d%}VFHtdQa5l84d@1_;*ST?}(}9p!@^$04BX(`V1%gx(oCNzodnb~rb; zI#Q86N{km~h8tE23ZxiggPKSX82wrqhb1zvNfMu11WDIuOTTz?I9<9yi#mZEWE=Jg z(IDH~C84%JCWjuV*?K6ytH-j7wMPzK<*CS_poGRbRZ&8y zAq%Wmm<~c_mBA*TWYd6xMXL$~u%2_U1Y*h5pvsSHdvZJ7IXx%>cOf9zNfKojP9q2x zpaP22=pR)MrjD|ksl8+Xo10O1JCK*@hOB_w(^$2XdK#D0;g*HeTR(-eJDzx35F#U} z*?wffP`&nWq^F#zklhS|l|B{-n(FA@I}EqZ1{ga+?|OiW^XmscnX#Qd9u*uBIY_FBp#aaM(>E2Uf=Wain-he-u4HN29?RrEfTfrYrJ0b3EPB#CH52Tz&8T_>qryockrXl1#?|c}9 zi87rr4S|wAhEm5vJItm6*Pjd^#XZo-y?N^JL4u~^kbs+_RO5XRWw#K(R2M3D??su= z;LR>AVcJ+vc1z2PL2HIV6vCZ=Xg`fcS`C|XGbptP#eT$C57T%NZkD1=1$W>&Iv2b5 z9i@BjaH?zVjCT6WvE$q+=~uTMN_G3srUnddtKsd1yaxC6l3gk!IW;yyi1;x_DjBmz z(Dydbs-7+a`JOg9W1H_D>bMSBXDlqITssJuJa^?qhS1IH@9QTBo()3#egc})bUYuy zouRjPUpkAu)w`a24+Q(iDXDdY^vl2i#-5MMu|3@HO=x7WAKd&JzV?l|I`N;^HD!7JolszrEjk3NCgDTB| z=tp_c&O;kfAjUZ#4Kht6^OWX^`1JLdX62TYqv3t z&90%KuPzw;!X_+UkUn^;BWB9l`tA;24wd@L}{|jvNJXIu?&3Gzhe{bw>-gb@bOjQ7&G(m~P#=74|8Uknj}zlhF$z1G`|KkEgWlS2=KHWQEW5@6z0VSE?aCKD-xAm zU@h;klS+8Byb|6`1ZwI)RndlDEksOLi>1d>c1qjBVbvnvKb!i7`Y>v}ny%bL9Blz2 zEt_`*q}$xZSvOX>1~2kuS#u$*bOvqm#v7K9=ZKZpm> zwM8b!SQt#!dVZAV3~*NY{Q@N2G4!B@36gG5dVK8)Zr*eTmXYl?wzZ^s2=*0}$+TRN zN8$z&86sm~e7sI6agvU`Nz}tPJkQX-*?oFhmUA=M2-_M0Ii4taHGJqnUPL(TMrk7O zi8g-5PpIB3Rx)PM-%7rf<;iAB11%E2*&yoSc7)(20ijTTV5Q-qQ4`V{<i8H{SKnrv>!jL?zVAxY z(u-<*7g8GSsizlxBsvH8!)xgQW!;Y!)qw;1d3}}d;X6V4S7u<%`AHDuwRXPmJS-zc zz+KQZvU^V=xm>4ptbi&C>$EEA?H$dj0Rh%o0@YV37rlsa^3|IN_E9ozhc9$ue^)wl zxFa3!L0l0Xni|A8CSez}vtQ6FQvU8EZEf-0$bDoRx3@wjR8WcD(q2NB+ zdBmr%{39Y^a9!BM!u-3Sw`F)(!hX=xd4Mrv14k3f<^}_%5Sa{bMdqnJR%$pAz%LQ3 z$ahx~j?QeeOlfQoZ`^aiiedwnYkDvV2&un}CP1>2CA9%g+GYPXLsW%+;JT~Bh{1!gx zZyXnY_1~b#dAAP5Tb}veFaL{-fTTZ(*|`j-xTJOu*~nF-$r7D#yTMBf)+k50>AK-9 zb%_?xS#}So(}zFuk@Up7-VKV-LP=sgok!^MGoODdUAuOJ5&$moV4Hja&V_M6()i-dio+s}otnXs$=59GbzE&kcljX%KzBYieq*Ts%R8Z<`B#_dh zu$f0SNM;p*QET~FI?*_gPS@W`N2>Z#N6Gb6w>5~V>?}dlI%O2b$P9LCkznxdg*w}0 zWGM>+W#tAI^uDtkB27t74~y&^+^-pQZQOWj8H4?utq_tCfINO6Jpz~Jk<&-gqi2q# zCmuSTKJvat(pP`kyV5s&^nK~ie8s!dhn{>m-NW-*`RsK-QT1wIS!U&)swc~JZaWpU z=U6akQdp0~L8IW^&=1M565PXr#biZ+M}Uc0xOMW!yhryDwB3LIDO&bVBFws)&fmgp z_;NqyyyFn`H_-DUpgVIaJ$T>obml}C?elExAh4E`p%_vsEX6ksbm|tKb$w{(5j1UX zD$_x^nvK?yqk2&y&@}|!HK<-I_}WasTMK7r*TYR*B%mnOATptOED{}C1q12EdN@p- zxZ8v90a8YIQ^Ll!!kk~^f~>)DTOpuW;YL>9=mvHk7cX6<+Yys*ElUWd8W2mZGf=VI zCz`-~jIJ#8q2%ra;XJF32as#$qwB%0w_^cmDkeDmoh>IT01BI z2H@~PGbXWtR-QDjXlx3NZ<)OZrk&D?Uj{V>?50X?r~UbxPzD*VWNn#Nb|%GFCWn9v~$ zkH=0>OYb z_WYaS>q%KyN+X$ic+nSNY0TsPE%SQffznc$!?Vr zT&Y2}Wl-Kv-88vSs(`OjIa`F=Id=Vg8o74)E%(+qG8@h|Mm2X&lsr|jwm5?q=ycix-EV82mk$7r{BAlf4@9-J^kW8`H}S8l|r=kz*n(5 z)TVd;xsTp{iHk_Lf6aINNa~|rf9cczHvO5u_M>4gxp(~Wk3cCIVGt;z43ihXjQHe0 zN{)`utkMFDwiQWw3eG}?TkGJY9Hs1Z_{gE47F@i1DSi60pG}w1`VRqTI!-wjjZBmSAhQAREj)?^g|}wvj+dp~hZRpAQ}s z-ROom+Xi>AJssK);T9{64mebGf&35{ve<5d`ubUCxgIx%^e!7jBvho&tn%*NB^JwA z1H|ug2i@9M6(#;kN}`l>u-aH=Q4QWUxHjWQFWu{2xJVRmO+0koe?vI-c@fG-CP8q_ntY7@E$yzL!IgV z6Nl3KAG2-o2IVuLG2FaoKMRH*qzcwRvjMJl!+@BtO?0yiv%Jjl(nhYaONvUBK z`HhNNB^}z>hzU$+C;*(IoI!`-Mm_VD&e!46bm8@dbmhWq8t$X~#wHU)c+mu!?x%xVudcz|VaP#m4L*UWQM$gCK+;()`3(6Ho6s@o< z?t4WmyY2hMH7xV!wrngcg%xekHA}cE2D}>pHnff${jn-8ns8O!swy1=^)@ZjPnniUWK^{wf zMmUJtUM(VK8vHq%W9*_ca1&_?gf6i2_$;>zLkxrKX~1~}8>`&lL-NC3U~|x2M-kM! 
zLo|7UMEdH5SJTZ4=dfKuQ;J1NE~;hM8Rn${q-dmTP9!|cnH3Y%2%2&drciGS^Slw7 zfSlHiKq%J;94YT1G&-}$z6J6(%^1n2R9+;}6tl0h78P>T*IU@%Ko+-_-k=`swj)Q- zR1fR_yZ+%1-~RolfAOC{(4$Ql$ozqC`KM6+a@{^3`4fNN4}E9)@csJ=KbuM)LF3~i z-}J5N9fwY*|MllSS@?Nn`upGa9VmM;pl7}@{kfw>-qLdV;?Mn^bf~T@oqGS*rq{Vi ziWO!5aF;}Jw{ze7*?)l%5M44z%4NCCFOqniEmh12STtDd&6YLUwUu?Na8sFSMK;2M zW^gqP+40)7tLgHkD=fxJ0MNz2X5@zDCg8>^bZl#^H>({?1A}xLNtR_ohtOaUytAE3 zwqMvKY%)5N&=F-kGjKO+H*NV87LdrCaL8W8$=dFTa7q`Q0s&A{#FpPRZs2*!6^p>W zLcp_py}V|b0R13LN>#_U{k4tO)Xtu!2%cIxE#nc?fy7mi#i5`G;UB)80%ID_gBAYUN$^0+QgNJ%eULW})UZZ)& zae*;w0u4BO>SXHe?%->1q;9yYx$kSr0s4t`%&0Z|!L%r3hrf~HIO1%C1Q zXVX*9e32mcQs}(wJaSnt-=v$2@oLBpR6r6&fNo@k7Ho81IAN`5HYnfu`8Q|;aovy@ z>N!Wr!LOZb2IB$Gn^D3$U}baxlt4_u6oq88vh+fWfL(bf>}zPdZ*4%8zD7f66qB)0 zxP|1w;B*BWTnneR3mO?PzElA?Ao9Qnjuy1JfzPBQzy#!IJLcq~Stiy*xQCKyFCALP z4)mtuhxez$`?}I0N^b`!%^iYc(n~q43uLl|b*!DX^xh6U>nO_|J$Qh&{}#HR5IQ87 zvD8vyIg;SJk<8v!Xmz=?6tcP#N)o%kA6cM~pvgJH2H}0+W>gY36s74P=XC7o5hN#; z)8{_-4CNSnw8jzlBLQ8)zi)*gVFvEtHOk4ap~`=b-{0uRPi<(G#-EAw2fvE}?I-%u z#aHYsqWr7Y5NORZ4FV=kWh50|?A2&x--r&^Hp;zr&3Y{qWZm_U*z9@QkRksuJmW+= zTvJeb-gh`X@(A2n#;UePG)VJZ8j|35X}!}&Nog2P)~gsE|N3wJPWpq-Jw8eO0DnX_5P#*pZ8R)VEaQp*H{Sh{1q^$=ML z;yFj|xOPoJ=0N+bUF?BG&x5^bCo5&w0kj=oMwjlj&;EWIzIr7%r%DKzb-Py25}*|pYjo7j(fK)s8E-#btv9j9 z8AQZX9YNQede+>gW~3q7>hSq%K$w)I{Vk3{tWuUf{55|${e$oRG9v!MvnOt)>(r1d zy6#CQSRk@G4jwIBAdm7R?@j;kfBJ#+o-@V330sKl|Q@ z$31@j!t?1X-uF;yBN8kg6P>*M@}v68or!uMtSO zROroShAXGb=hr~~R5Brc%X=2x>kSShHBM3LnxGS9*lf(dV5)^Kn>)Agtgs9;@eyXh0Z|B3X%%P-tLtr*tWeB2O% z5XzV_zen%9a^nxge@!302a+4>#867|YEDc@0(?!f`ZfHN$1Y z;5RjVc7rCn8B6q20ogNYx2$KG5gG;@u5!+S0JpL`U;iCPlxcl%?rV1MvCyw@{x+{o zLzJE48hAQ(pNVp*ITL&<(7P^(`rTa}C~h}L(9nyeMq6uRss;745m6WRI?IvFd*@O+ z)+t?-7B$PYojVXf?uTZsO(TKSJi@Qnuo)Tx5wHVD!M;K-!nU&xQCT(}xmVhmh$e_$ zI_DJBT0p9y|LS16bnZHe`xC@BjFA*&5YP3nj6t-JjqAeA{`B&t%NP%j1_w?r#KHcN zbp1Sjh1bT@>=^!JqQ(Tb`jt6%x8#U`L{#E6stm;}_xHR3f#h|B5GN*vX|vy5g*dzc z;f9qH7=)cr!P>^TAcES4`Rsv%h&sY$lOGivIrV?6o)K&q=Qy|eNuIv&(z)~u%_Azd zw>DF{C&*u8Qx(Om(O8`Ka&Co9ie)Ofv;0g;l_JqteYRN7d^7H+;+nSmnwiT@@@p*H z&Y&MQ4wAWyj*#%|F;bh*5jL+4IBP!x!uD$5>NGL(?6 zk0PIepVxo_b{|C72zo;$)fi17>k)kI25oQY=wZ)=2u-)6K})S3d^?U=RcD!J^xIa9v%p7I@duh z+*V2EyeTc)yM`$bP{Y@jVhvP90kUGKa#%bo4z@H z{nvvGecQLCAN`j-DT^`zsX`m+pFhDfJU>x-tosa0)=2AHXZ{fOB&h6Bm^fT>lJMR z%4gEq;WY(sNs-Gr%5TwnDl)+4?4r>KPv{`E?ap?~PFM%9p>%5pLR)q<3)nRE>+>v% zpZfhzrr-VD-%T&R@-iyOH_{DIkAcC#taw|<+@gbH7k*IgkujO%28r%w18?mrXeS5d znFD@apbJP@eQzLJKwL0N(KRZSv&BX!>08?j;wn0;RzL(=+2F9iaPw!;ETyGb&o$d* zy^*UxX)5Um%Vi!G@hHVj zK~$e}f-GB7;v5^Ao>rELEKgPtM1@kf^8f5QzK&K1B54K<&1j4?vM|b1+2Y~nT4X`d z4&uB%Xc&sl@cAWmsu*5Ri&7h#e?oE`ie+-E?E9doM9& zb?DK^HH%JB?NL-(W==_f^?uWCLn{jtf;xh{rVc(!D`SgO`~-%lBa?k31KgKLJ1lOF z@4iX}s-tF^(i8=A(T+zgEtKJQ1y1lOCgdmbG z%gjrZ$_UEGxF2WmwQQ_dOvl@{QWxlArLdQsnmm0L&9^?*EU#-2iG$lCe@}#*d%%)b zBy;-B8Ee780Z}@FNN00vM>HVFA*_HJ(9qnSx_Xa=TD}wrEkPeJj4X|upg(6`6@>>l=M~2x3nhrOJLrg7j(W1u+^k%Ku^t%w?K|Wufputq&2I9 z8#FmA?g=qUy#H-Iz3AW*-Ot^(!F?6|*o6pgqBww9rsox zQIJWWr8S#0T%W!7w2x|Us6++4B6Z<%aFBA^VID15HW|2zn0wu;xK|CDM4jYp))H*_ zxhI+9zU|8GfAB-;$A0X`($67;>E9@b`Ex(> zlj(2%sPKb&44+)V$&kNorWQ$PJv>979TuSidQ{NJV_ zR8>7{fZbbnk(9x;@vfTnUtIU<`c}f-I4bWr*ynY^Ih3Ye2f{8`{{UL3U?*|Er@G0Y@)nHO}+l@=(&M8o)d96LX zw1v`z3%F>;ZYt){J!5wopFi&bnWsdA0QiBD^g3e%bnZgDguTSFUS%M7lH|2)SR+_6 z@Dw#M6E)~`3~YbRlFl8SZo1tNkYU@p3&koC=k4qP)#m+*cB9zMPKb=68Q^)sseik@eYJ27;^_? 
zrWVbYasp~*KJ!In!Hz&SBR&1NFZAeY6ous(6C@0?jDgo+MppLDO|j6@ZPuQZ`9%RT zYM|VJ2#uu-hfd3%xFAYKShiw5yYVb*vaQ_+4G1h>Zfp^0*I_Yn}b10YAuaBOO3Qbj9C#eU{q&+a7wkza!#%P*d{L4)UG z0AUG#8glX|o2Jqe(XHtk!-w!W?Z{5Fp`=|+(9}YLb@b$MbdUB&N2{7?^?^>Qq|cNi zKy7ZqP!{9z3RA5YEa#E+^+f;>(8DSsGh1BsQwR3 zm8J7j@CO;+qrEjq>R6WbRdN|_nYHCQK~q1@2(zF~qN^fYVUHre%j?H8bMKxF7>IU? zNYX$fvEGh3wb3d#&oX2n>KZ!;ATY3oaqjxM&U(4?a2-}!V3Ed!MFJ;nZ*F?(ob)B`{GP!?a$z0x&qN%F{VG`0^tCcbk33iP$`1P|G zh(u#p1cABR+PNuWWz&#c)G$uoag6lcNEcpsI=%S$Pp7`i=b=vE14ock-zYk%u9wE) zQWtDGk)vp%j|~VFsRyI-dk!?FM~^k9cb;fW_hRf@5Bg-8U5zH!7lR->jFmqLw+Z5q zW{Bc-_zYD;-KfKUMY6ae!6Nt(;71_O{S%EhcPCwUeuOvut=xnEX9pME^1`n5V(QOy zZT!N||Eu)B{Q$T?N5A8Ll>Yl4`m55}B>Cb)BpHAQX_kM|1?)+}=c<1-Q zBH26s*hWBoo)8?%qGORs1Pogs1IEwPjiDPMY)6#)$8nnS&e-T^AVY3~B2i(7Vo8q4 zc<{i!EKLCESy8pz5_=Ug>M%DtLWq`v*>Q^JhG5p(0gOy75W$bF{rdm2MSIphXolj z))txHGPSy;P>5FuPz7r}{&@H^Q98TG2#|v~%XKlURm(~8bQu8|AV1eqy{Q~1Xl|DvEfEluTUxYys;PD8SmjM9%8G@+z=b%}sl8V?Z_87$iLfcjZ)yhZX(Dj4n{N&Y zf-w?~S<1LO54nZKCJZxSgQwd{+pM))%-Iz-yhX-rg=?TuGW`r7`dl5}U+#wh)&|xe zC3IHKQNFu60=EMoq>K%@y16SYF&`!u-D zhzF%YOW);`)tyJXPVC|jjcWpJVjffenWGDFV6Zg3b`8s;emHsr*!LYnVDkv2S~mIy zJ3k3d=jiMk!}xY?jt*3j7t2M3x#(Ids_NR~c&Jg<0FTLXu>7*YeOriuIK>ocW_< zi?oX(buQORh4|)ejQ8>G`gF3VHXUdN9U~A@Yj2n) zpD~od7ZmZ8o5&P205s^bb6WhSi-4=0NVlHyUGOHOVIbS;M2Ruz0vSbntK+RhsxR@! z|Brs%@V@j*f9p>c{$w+~_QD@h+WY17i@)-lsc$T2S(W=9O26^r-}e9LW%r)^pLYaY z5TZSs8$2@GVo9ukTE~nMd9qYj5ae?~ccB=86JwSY9&sUzrpc=CHKbjj-)2g3t+d(O zK0VI{H9R~NwfO80hhG_)mNqOJSgL7bwiewRyq_DD8;{yPJCE=ZBh#9rp1;aXBSA!h z;R;B{#vFGG!Nmp}r4X~7O1@v98LsJyPzIq?&XV0*WGHkc$uSxqA7t@@2ZFB6IybT; zPJeF~R2|x66=W@ex-e!XK=qb&)J>6xwa8*5sPfY*c(?Jq zUIMRsPo7Edd&j%dlaD@5`}}s1+v4I_YzQzoc0mmZ0=XS!)ojdlnD|!mo@DosxJ6=? zy&i0Es|08|lw0mpjeUvDKpsZemPFatPOW@xT=~_Vu}om+jObS_?dA1uG@y8HE@tKm zX}ULf?Mkq(*(fdi3d^9Fxjip@AFyJc>A=p|1bxUX>PSwnbQEXL4DT~ch z-r0E=FnK>XcMMYl+>G)L8?@bi@{80ZV_rwlZhBTXbzGw3BYZ14S1=Km7+r7|6J67!Ab<+Q3`&G>aw zLVlQrqzCUgoetnh*Fd1SK~OsdA7v6@S-Dl(e26e_h~&8G$)~Zb8g>|3GijTm{D3GN zo1_8M9U6nT$NbJXJJ+6Kz*kv=G~}J0T!FK+l3uAKo;cQ)9z*x)Fms}V z&9_!b3xeP!%4s%OLYHgSL%b< zZ$}>UOskx`4lxmfa(`h*Gi5rHR_1}P7YJn_w_=V41t!Fn#F5&N@Et)D$%H2DTx-$& zumvI+UW6p=^F*2?b7x*umKq4RgTfH`eNM2wXS|Pd*IN_SgwC$+NaS`NBmFniB-c=$ zQgbW89Stxht=_ltlA>dn>t$(N19CZzeuh9;!*7oF;{G$B9Kk?(Xi|!SL6gF@iUDVi zmPqG3==~(B;l5@K>u??9@Ms_?=>O$d($|hp6kONcRm|V`6#w;tCf@T~{^7q#*Is=l z{iFZ!doYyD5omK;SNf`N{+{%&f9tbp;M`~EAlhYa?EUD!;t1rMz$2Gmvg|-nav+qg z3uHfK0z3?SCMR%t>z@rC~o^+BBR8`KTJCkRjT(wc)8L^u!!-rY5EqRj# z2+ycxUUwlp!B{}Z?yD3F4M=?ZQBP)hiooQ{KKP;Z!4H2qYP}#zi2WTobtav9;GuNn z{)bZc>HAar@w2HBwd%H0_oRalKc0?0@xgTT$q%MOkG(e?diV(fs)sSh?F>Tk44bc< z0Xwr?RMcnLL?35pe37oW^XaLl&rwDo8zx{;6p*Ew#S4xAnp;mi^p5o5_q;zndjEsz zDDB{FyQuVs|DUK4@3k%`DS{w70EKQq0m_=w$ZBI6oWn9=9fSk)X@ek1M8RNbg-z1T zP%9A$tK&R^bHPK?QT37Hd-)dn0fH*XC#$Zalv+jUi(n~&Yeg_ap zkx8!A-KEgQMY@A1w=SRs6HN%4+Q`adGXwE4`QTw7QNuj8`>YkA!3F}NVFU#Ynk?_^ zJmgf>f~uK7xX>GbMMtJd0{6l80tU-4!oyE&fH&(AcqYQr|aNT)xobSr&PSO zUV+N>7MeBqCQ{-Z1Ie&UbcK09`xYXt^$_Iche<3pqRa^I7Uk6v5UFaOQ-Z?$#s(HE z{N8I=L>ji8djz0%MC$*RxlC$PD`M_t0XkzeU~u`zs$MJ1UE`S%Pj)+Q2%DU0_<{w^*>8f zDn`d`c=!A;s4E`vlw3uNr-ojZ-f!bI!-L ziIQM;O%QF0g)V{%OZlphtITbXlsz#eE-33^0%`CguX8_kCCXylHLtaBX$18_1>WkE zytm*uT59K9wdt{W&fukj?zUR)M}%Eim$9^A9l0?$n_j&(oz9_Pe|eO^i9~RbwP=C4 zq<+~FJkoXMr<4ObkT+;%C`VJQ8iYySl&GFaaJK0R(ud(vZb+~b{7ba&|5pu~z)RDS zdml@G^GAL>T_2EJvIQY*IDPzMKc2qvD-lDaEV6g}@s5Dpoa|_w^_iUf{u>@ga*KEv z#I&LY-i=j`q&wTSXCRwq)n&4nC5W34?c4;-JQO?2!k8*VXEkx#Lxo7)2vqO3Ht)va zLbAkSx=yyBXj>)0N!Yir!ALF?p5J*Gd$~CE(5-X~mEq2ML=j6+!lsj|bW5I(9dvAJ 
zg_^q=h}qPNEwLa}E;!+4Bw|GHv&BP3OJs=}1VheV1HtlhDMn~uB5zkx@uT->uo1Y8ImPqjhh!0bGFQRlt?!3Wbrj}!QuIGGwc;b+j<*2K6r(OuNI|3IqN zVucO193-g>Bgs-avFgx{YNK@5bM$mNbn3x$;=XsJ1INz7_vi@YTT5?~7>SNinj64I zV~p>Y*c5J2(s=#yO@gK&2!Bj2d8tD5@TS zF36T2ea+}#Qs0TcV{S#M%*|51U1aB3Ox3`@$$&{xGib=V)+i(i^K4RgjkRuqe7;am zhbtv|V#|YzdMKKUY`FPaqAW;*)I#!ZWTUb?Vs~0Ah+_l)ZgJkuiCd!+m{m`al{1f- z*GrU9(L;yx)5zzn<~&7u;Jy)bw1e!mq4m@ZpRSqmUQG#M0@r$;QphCA^V2jCh+*7$ z+rSq6AUq6@nuD=E%Hc2qe{YLa09dshTj(zYjZQ>9qG;pwsLM8jV z@5S;v?$7+&d`}l3JHEGT6Wt#+^s8el>DB^m{1uIOZK4_vr|DoDHS(itAPjb1Yh_A7kfiOXg=GuL9;a$CUJ!Drb@9XPWVAgG) zBF-yOM%JqZO3!QjD}v(iXVKsSUoX~ey9{&CxN6`nqRa2?cy`W;W(9h~Je(td5_!r& zlYySLItCVUqjFcP>i_;+q}4z$0*!dzj5?Ka9z8f76t$r4Jn;rJzM8Ncs?tfs@}YlE zgrlBS-a`m}SYsCDKob0HaIcm!XUa%Oxw)7?tH_R&P_@80fY;gp&74A?ZBN)AF%TV>bQeHJqpNCTmA7J){(xLPU+ zJP4l3D-rTza}PcN?-A1bZuTr(OOzDXC_k)+ju6CZQ3W!o5*E9#l2MpYvN5R4GM#0X z9;bPXV}d)$LN0$ONaYc5DX3Yf_4BV55T3gm<~UpfGryfj8>H8Ef}>j6*|#7rZebfz z+Jemmn_LNHs0}-XLP$@P%u*HSLg3-1>|&;{WR>;{UFZVgr_|PYm_-_Qdev`LR0e_= zg#bQcPriCh7zCHH8R}Lq^3%J&C!MAIao@v_;I;NB3gSm8=YcqIjW&S9H{_<=wUg(u zwEVgXa<@c=x5UEcLbu9e1qH=5Sc#KM4FqX$&G>ya3!L;)wP1$vL%TuHcJabZl;W?Y z!NG|re#%sw)TdUXU ztl@g>vgqZKVlI`ecs+Shb*8Ez6W7a=3Z4rbEjAlBuW7RHWlPK=L|#ovy==yzEkzR0 zpv8{XFm6UX$FW#oPR$dT&8wNC){UhPHv`8|o(}W57A}iusU4=iR#*q|y!l-;u|hGN z^YC>di2>!&pDfPD#ojJfeF@cgVE#2}{{la)jyX`u24GQ-+c)O1E+$(9liq8VUi2B8 zpfsm|Wblb3+=_a@;c2Bate3`ueFS&iG$OP@Y;QqNs9N40$-@ktcng^NTJs2tmz{?Q zkE0oySOf)`DiUPjkUR=b4IQm)Y%>JMOKf%}e6DhMfMuNMI??w$2;}4dzG>JMP2aqd z=K8Ovm9fE8g^f`?2G$jn3|AM%396@}v|C0QsFI*MN|}<9V;8inFJP9t^h$LI2r~07 zLDNq3$-w*0c~n5y<5I3lI&VqqExl>JxibxMPhF$M?ka1-`ToK5h38*P&prD>h;dFa z&a)tLN&$$b%RAEG*95{jV<}Dd15qNAiGGrZ8~DYkjlI z1ri-${8!gZ=VS0RAyvbl3m#mdEE+Tfkw0Fj*wnLTS=aKPNhNb%jV9N%2zb3`B59EI zVp+k|l%k&RjVVDG&I!wF4e-2O5IJZokcgx7vHg&zq%-@KOV`uOFTI+EAU@aBpsrin z8t9B%2bFv`+`!nxSM1*6W?Zo(h~L^)w42La<$XehRy`_LecR(12X5L8GIG#)zUItm zL0aK4k48XviLt;>vtZ7#nYiFAv(Z};j2GeWCkeDf2JSjU4p`tyDOr|*B$Sb@mVtQa z0B^7*>L!ss_+xH@Y$)ZnCQE812vPu3Li;8tLp{X#MpS-F`I#Cfp^=WSShcnRd}pby zj_Xt3)W)JlU`w{ujcMmmN`c2to=(klOqF_#3}PiZhwaGSZSbX-{8pFFf#l8e4s+P? zA@rA)=uFuH@j*>X%w#2 zimk%T8J2h!I)H`I>2&iZzo(;1C0`xR=RhC=!I6t$AVI*p#D<~8#gwj!_TJK$ZgLbf=gn?8fwmRP9k|=j$%_&!3JMP z*JX1PirXT9p&)M{kh5Gha@-sZ3PNYhjX@$g@1^#U9d;7hl_^lW$zaSZcjuwHw1%a2 zm>T35ES9Kr4=+i11B0g^q_fd&^4>Kf!*($;mX^6@FgkVa$*-$~AEF1|sS`&i4K|`p zh0q^bXk&1L=0SF5#z)flNMBl;8z=bMN=H$VKa5OBI~_xv zkJ#j{3ziwXyHd7+5q%BWlHd{A6~*RE;I>G@lFfiKUq)ZcdVI?oK*wqrn+=0d^q`i; z`%?*}xeCxJDm76e67e?RtgMQyuJ51UN?#bCO`jT{NuOoTyg(p+lYs9Uo!~FO_p`KU`fuw7!RETOmdkynX({kh7@RPID&b%?uSIja{ z7w&r#TAd8aBnyWAN4BHdbPeTEV!Nhv2LioS&U?8aTpXRTf_au5`CSd?U%5*|nE^_a z4kKvc0WUQ8tAz$6k$Xja<)RrtmC|@453s^!oq~>(LFXLEX&Kj>Eg~Vb4;soIrL~Iu zNxQeT+u>ZB2p*e|T8TY|_gN*_H^`GP9$l87Dve~^EzyMwJew0TzzUbm^1_O zX4cfTiIS5jm9{FgY=#mVCEa!-Flcv-@9zc@-EX@DA5Q!kbeQ@O_`8UJ*)4?T7TmnZ z&||Gr6;{qn=-p(K4O`${1njs|a$Ejx)cje@rr4M#$iSDd=FmsZ4ZVs*r-~p%gV0(Q z>JngH%L~(#CFiEuxX1DC8k*%nJN3vk@(V0P1Xd!-3y24<%lX-P*a--lRN?}~>s#z| z!*c_5jD?A#9o8-?N|Ty_+B#ioD>0j9s0xEtXpY)InV<*L0s?bI#{=m_SZ~7x>`q$imRRiICAjdhG)X)i87DKIvC9i%QiS|!;Sg<- zJ?2(uyIm05oL(eDXTwCreG!?3c><>CSvH2LQREoNs9C5N*`y}ftj5MvGPVN`ogX*H zAgS(xe!z9nmPY#@UEGQ6xu%#46BJZIHnn4D5=3ACU8`YAPjhhV6u-NLU1OsYv^d_4xDy3Bkj1x@x|@Z3Z_9HS!n+jsV1C!rD$A%3|_hj+Ph z5_y4o<1x=<9^G<|(7o^(tDujP6 zf2#7$TyDTdq#Y}!W~^>RmgJC?YXW|o_+0p#(B8v-h@e$B0h6@QB4Z5~A(#IpWa|)! 
z!$h-=u#8|j>y2MKACx*+^SA%R*fmFWVTID_qVa)A^B|ekjT*}MqYs98+ zY!T7PT%l1qpJA1IYo5=!z_p&^KA$ClodF%4~ib28$l-G38mW_+r{S5gIEVYT(T1*eyTV5?7Q^44UN9?M9HsK|1(>;<=o^>lPT;3R^g8H96p}T(uubZ z9#mu3J~+S#Lb7<*rHf^=xV@ z9Z$`heW_{lT58_vPg^sPVj=F%Q(D<%fmIHo0q?6$i0cOrbkRx<-)76kfz0=?j4?b0 zSsS>$9C?MZax}dpbn?3m(2fm>)^=La`8Pc!FGTZJgMr2{k!_?LV2O9flVdbl&<(mU4Jt=TluW@Ovm8l>s`c5_P`a3Un##ii@!k{23moaiLZ$~9 z3d9#_uCRT3ExZcRfMR?~*LKl6@=Sy0;?9+&O9Dj$K`voUC}o`ey&J6315@$}E%{qQ zWXDqn4c>Vw<@(g3*Hni(cIrBk=IVRXTwQNkZ9SUi8}_Gxjk+`d>boR9Nx&aL5wCAU zhF=QGZY=0xw@wbw}Dee%geZi7Ov$NS)Q9h)qi$;C~ct^R)^e1 zgQ$H$0$?{~?U?3at&=sDykA{I{kn|6PTxY4FQJp{gxUsxlCegQiFnnI*B$?SlH=`3d%drng;K z^d&y9e1Hhj6+g=!N>Q(l$(0{b@Akh)8bK16KoNCG**|0HdgGAsI zgmTHIStCOpODCnFGiOexd+$A)&d^@`;DZkY*QNumj~ztC!|Y7H)ZItsTZQ&dD8o{M zLD|^s+(NqUyJ_6vXAsex2*ha)1P%iBEy_>*2zZX6oW2{<6pNnYU0_+n z_o8rljG{`wIED?0d;(WV&pGDEIUpK9ZjN;0MwNzv3&?0}nllsxU6& znBH2VtW<0hJ*Oi`8$9%mccx=#FjeyX7Qfpv6J>ESAlu|oCi$PzsHMW4hr?EjHVW$& zB{?iaSkvk#$yHWr!>~$+%Sx);Adp@jNt?4b(!|Y6>B_lR(=9v*Eje_;**ehKnhti< zr-L|>x3b_Z%gGoSW}FmAon&EOr=-`M+B%~_trsrQg zpUz#po}PQ@LVDrV%kj9{KhAhpAq4^2LJ%fdd*@;4#*IuKl=p$;Rc|`ESaO@{xONTd z{SY)Xa{X!vrt3j#szG*CtFA!~~yq&{fcpStDw*fdSfJCl=LY1(_Sk`xV zl@TZJ!xecmw6dpFIBER4ma%Rk$k|8meY&eTJ#ef$oj%@^T00Ch_0}W_00FdU85=a8 zU!=uOwI#Ym4Q_&T4H8RXK6NpNJMd0J%aL_5>nz!=cK4x73iCpOJs;k{W|~s##MDHH z@=oC;H%HUT0{);1ica(W0vVobWs$dPu+-h%mAZSnvA1G95N((5Y1!Aczo1C3jgP^e z4J}HESdvzO-U@;BzO_}#bW2kNh*N19jk~4!F%V%&hSA2n{Rn=&>l*VoJUN4e&%(lI z$P|u3x3sXBRFsvlVGN_%0|2U1D#uQzm6FkO2 zBt@i{TCq0BnYA-K?_mbKh4TsF-CQPot79){dduN?`~A|1Bq7g$Nm5{t7v*Lsq9!U6 zK~yoA%1X{o@@j~7E;*CluyA~wh>~4hX>w^TYd7gX7yO3kyvXQ_Xw&iI$I}y!Kc1d^ z@?Ghn2kuY3@GnBgB&K%GO3cVC;q5%+Dv3HoNyN?+m2T;B(>KXm!=Oo(*&xxXoE*9> zISb8;x{Fd}S}ScJlZNa{<`^9|mJ4X8)=%jog|}!#sv=Ve~qJ3NO5z zX#%FXSyiz4Y;a^(s3pvjg-%auKLIpc_^jK`y+~Lk5RVP{q1`SeL{?lDik9 zjNxYrv`)@~Xce0%Xy5U(XVV8h`qA`dU-{AW-Vb~zz4OT@36dU94?Oyg^ym}sNl(1* zgX#4B48(n~^%Kc2EYAJgfbTC4g1h>m{F$!c!XoeNq}>4E1(B>ZTQOC9oR@ z&75bxX`gWm1@>DI*T;ikzaDl$Cr=!ve28H+r1v1UazDI$?g}QfR|9PtCFs6}dFc5o z*AX2>!Fd-wn-J7w)6n)LxFODsC>go2*ukd+f_#lyf}EzbpJ23|jijxv6tZj;n=f_} zl7KLnUAIHZjSpl;ZLLz6qZ4v& zR>=d&)|7}P#s|ZD*S$Qt*Fv{Pa&I(YfNtn^l|Xi>0%L8u{ANIXM?ixoRkDvctNME8qQtc~<_X{; zkCWx3NJ1<*#hrHdQA6O^sF`&9s_=|r;1qkPWc9)ehim|^&NQs%XKkoWAMI|5VDtWa z4kI0MEbTkkP2+_@1#1oImkC^G$0<)?AYn%xqPe;2O(YsaTJG(`pzaS)u03|>Kp>~0 z^uB%Q)$P0&V=c&?&XxqX;Cii~MYfEVm9H%Vv_gq&8qU+$C=w0u!xjnP)R?*5*>|m z*zC-LEH5sPv%bR_gp0IWFNr}?2qs71l;sd2A&dmlf3lUAB~wdu>e$sl93j347;v+OHVpSAJ3$9e*b^Ut}A?A8WvUO|#)okXr*ao+!khK~>NGzX%6#J*Y z;hVQzlTk>l|N7T|m2Mz>vm`vq1rXx3M8*7HfT+I2W+B0`iuceo)Q&5cGf9WKqnFip zrE+}26xv&Y^CIV}2}Gh%^TtrmB_Ohs0|QS58ka5Ik;N>K(MV+8pw_N;kyUlOjLPA+ z7$_|>Mk8c-fA*XI9Ioz1ww-se!^fZ~14~f@jcl~oa^%LDOLaM*@`(2B!25INl)cvS zXd6`30AWi~R$OjBasn!>A3t?Eoj!}s)6t{wX~-zuOu~~(0m9t+8&PlUJO-JXoARDf z8hYc<$8TV0Iszik$@X+Nree@k#aLF7iH4drNMm&aG72sGX#4F=wJlu;aW(fs(R)_1KmSrwWS~&o4%*W3Y?%;f~K0VEm4vn zZyw4UJn`s5QKGo>P8GBl*G#95;=c`{PfOZ9NWmm z_jNQ<=7wO7P@tdF_gC9Yu9ktG9j=z2Ew5PqHhY&#W`eP4HyRszHTO+3MC?YDoI$Nd z@W#7%y#6X8L?5lg$oAlNn2m${j56y*xq9j93xqrxNUws_vufiSDt+i ztsfG@-Bx@z^_c74nyeiJjhbM0(wTK^Uk5_4htkm_`>>_yfXLoN`N)=fERHBKHq(6J z#&7eCnn=o~G$HcqJ*M8)nvMfA$Sa`j)W~gD&vSi!g*p`mRe#IU;+KEzw{JT$gWp#! zJ{6P(IhSFbBCkmc5lg#vX@bCjrAGke+O3bHf}mSG#M0xl*xgFPLs}YAF$Rc{7%?9L zrB{|Lk7k(2C zXnz*p6eY9jP@rx^LJ%>HX6jNCVnNFDC=epc_F3ko=ouG6s(f!CB?r*IE7OgEb+D54 zLm3Vq&TpGtvSyy;+KAR-Z1e1-EJ;FNLDKicmw(4@vmky3FYil)Rd4#Id%yT^aRmPH zkNn;A+rRPapaI>t^_kpQv;>jR9ZIUAG|V@%pXjz=vRO|Tg^+xzRfT1*;>PZCEt#p! 
zWcki4L-5>ewagnpQh2m;=h9RN;<*qCh1<@`YZ^Sw054y=d^LUc)r;GP{+Sn^Nnc4x zYJ%=8{YMXUp@z+-+j`&>$Wb{qB^T2zOlZw!Y`a&&pps6jTFMqR6$B^>C$^se$k-2x zFq|&G{tV|2xtR=XKZ>;*pb5it=Z#Xn^9FBbQ_%)P0`ROJO?+Mj=*&Jv)fVY8TGY1Q z6j87S!0Y3`{qgiwAN{Ir=N)8kQ9D(1Lt20Ulr_RQvdI4``b2r3=UX)gQK3Lxh^@A) z-^&q@H$u=LsoiMNR$6`D6sHT%C<6q8q0l;V-QZdnfO!R@oMIU^Yo(~D<8`TEQ@HcE zacd&|{!_0LKxBlmSVDB+mU1iC$JZ%gY)e?6oi{3}tAV9(pJ-u;nKl{l8ky@VnYQO8 zVzz1Us_X!zMv*h&0uh1=7fQ3x=&*58mN4Kw3-I#mUH=`e>9my=yfv2}~DeIVH zT`eh{0x@~`-UrzGHq#}#xi0jNr1?s`=sFIjN`AkzI!q8b#QLz2e)s>H-o_D0e(gv9O!}%1e-%8q{V18wrayT4v*|ZK`CDo5^0^EvIp-zHd-KQvECe~RJEf@m=Z1|NFnXZBYJy-}M*LkN(uZq962+esq#h-HR>M@x#4{!FEzo*3*$W z>V4r`iiWm#lYF%XZ>JwQf(0PVV@HqE0Pp~Hy?W3Y%2~J$Ox$=a&D^}4Rta#GYndY% zTgJz6ku_yT=o(n-mp{#Cz2lgfn@Ioi7yk}rFyU(8Eg{sY<{2j;7iwAccsdV7 za)k)D&y49 zx-b2`U%L9H6JalC`lH_?Z;NMMxx@xBIuwe_!9U_tyIBVe9GO<(XN8}~jCN+TScK=7 zh=n_NanUM_n#fe_u_7FWmT)pC`qVyqgi7q*+mI;diezl)Z`_y z%Ezj;y;$-lkEI}U!x`a5G_$j%xo9uoXY#ddFV{mWcu1VLpJfr$?LAMv7xMG=4ynvj z%K9cQ3Zi0;k=(0T3m6~ED47oCmB@jp-|F6yrP%ybZcLdd9IJrH6@gUo61dh*>#W|6K$Gl=@Of7Rgvd5R6dQKK{WgbWAB9suj^KgN; zgJyyGB0ob8T(o1`H3ZtE3_v;apdjVVNwRSnSFz9$9a6Wi zU7jP}^NyB6s?DH_3fP)Bh{&kb+k36?TDLe&vKWB~}O!pi+wC%jFy!KiI$4k7wMg8Jm-+og3Ag@*4)0Rc;{B2Yi zRau5uFAG{m1=QAiOHp<5 zE)e+J(Mgd9du;lI`vD`o{TZxTjF=F9f$4S&GY; zuc6n=HUEzCuQMgqy<&MaC=EGj%TZG-n78@3d4bBGA%0s}V|*3)U;a9pp5o85!L;|T z<6^LT`k@cpHLr_5*bAE8bZGW|u{Q#HBd|9Ddn2$n0(&E{Hv(^rK%icG$KD9+jlkXr z?2W+Q2<(l(-U#fCz*~>NUeL5RL-s~sZv^&6U~dHWMqqCQ_D0~3Famo)(;s1w_MWmg z0(&E{Hv)Sjur~sGBd|9Ddz_}dIk0U6UVZA5>H7JXqIExe{p1dX;1Ba(sC$Qynue43 zCuzejB{bZ_A=yb}mS`7U5NK`Ky#V1)fj~8$svw8f>DtZfsROcrt%V(3s)tL=vx`bn zu2KZ5bO(u#-%HuE<$Hn7nLqdae|OtCkByF|-~RW%79Mz+^l5__=M|!GnNTemS#o1^ zW&~NW=&@(lmxSx=z+R+QOI%f##sAdqP~t^GZYGvy6VOm-4jY3^Sq^S%s0T|b%*19% z+RG^9e(g7XC#~hjx1IOD{`p@>S1-Q=5jm4MCGbkrRM0NoNx~-o25B+=GSgEx=%}&# zMY3oGMzyuH@Hf(4TM^bYbUH%h4RcPumUwLEqJCe|DH7!Ktd6X$iGFEb*D6TF8YKIi zC0fzuuw1D@a^buG#{abKoTu=|x_0T+EFS?OSFt<^(h|$Fxu@DVDyC=C&XA>$EDbro z7O4O`b3(0I1!vyN^O7jPLf4^$Y#R;ckcY5+*p~lg=Eo#0US1!`#cMua?HDwYN|Anu z#wj$Ss?-)@7}thZ()a(+UrRlmJ=@Ovr+?~SrfXy8Awi=@#c5XR`^C6vO)@*p+(o$2 zi(FmXV9_r_@Yl;r2@Q$XA%MvItwPsE^0j{>ab4x~aJ?_A3*YtK-<$8J!Xeps8nt%M z$#uhr@*-P&hJ0Ry&aMLKRwBMWbOX2gkt(^#`%g1ZcuzX3oV#{I+B~v>Ot`vK`wt#0 zxCH2psAr^|yA`Rk=K$kPhaaST@1J}|chR|vy##ly`p>@f9MT3C@;EAw;(e%TH|tU< z=Tm~bK!s}v53NtyTH29rXs0{363wFA$+3#c`N|qyYI@^&U9^3Q#sJqxy)G4Sqgllc zWJuJmp*9oP#T6DtlD5^g;`-`Tu*rQGnt6IaGRA-P`@VnMd9R@#clpvY=;Dpx2RIdL zaJ7DXTw8tT8n}PGZcE%lW256SpKa<5OO_y6XF{}VuJ$6cieExVZ-;bWM|bf0{0!P% z%%K)+YPfDw6VqvQ7-<2{OHHIoq&w_RYw4hKkg=b}hDe`1g-u(KU2t7Gdh#(uKIw|Q z<9Ou@pTg&GFy`?%8d4Q%Co#Tu*Q!pgD!HAT8`xNt;vnGf7m!RSrCYH^FFqs^N|1ah zSw})bf;WR-q53zAeF?v-ftc^-(-C5ZIbDs3bu<9+OeG9dpI0(L;ZM7Z6{1yPUxCz3 z#-ScB(69gczi`(aDgIzDXnND3d3*lim;dPxr_X)-7tt2Mp<1)tP@J|CDLY=rqJ(zH zawsHYi=l!v3!gXaDw-+TGtKsSH)pg=!po=CWXc}#npZmh#wu^J=!t3b=zwa+`L)-Po4H%1? 
zLV$L%8DtQIsP9v3)%P|trZNxEnWAc2I1L=r(5Fv-RNujPmhtatg=3Gdqb8QZXn&4*1AHj+RHgg^*MD1t`PX!6X- zq0i}@b64o<@csYK+h=q-Xm;UegN)8Aik{P<>J3l0@9Vzq>zc#@rlYqbJbhz-zxv3v z3*n<5d{2Dd-q-kyxPG6ZU!%PO<7GqJ1vb6S*rkSHEceb0T*YQZ^|rZSea8$4)>)`m z=^Sc7CcI!1NLowYH0FsJ|xkAqr@>O<4^mJHHs?Zk|mD?&MHCFVXc@SV6Z2 z^XIwA)xdv-;RiF}fmim0YIK^O_@f))>ctSg^{sCRzwuu_wQpR1;tzg5y!YL|9FvO` zldPwtFx@ZMd#%gDF0R{Ux6s(cfaKrJLX0wQ0V$st=ho8ZbI!Y>iD-)qs)F(di znL|O7D(41D2FG@k#UNzQRlrS9bL8kdmNaUG(UGg+sV5)DK==}C!VH_O=Wiz~fg&4m zBdXvf#0pguID771bjOAm_jLr$EAjd@O&&aPmd$=BYIo%bpkvoml4&H}p}zL!=3dUU z?_U4XyWbgp_oE+*34mt0`Z?C|xeW}6D%81a5o1j=pUlOY_};+42_zm)ge?SD3#$F2 zb+yJIT3KIU92BDT$Bx~93>Xuz=q-n}lY=P78fZFYSLT{lPbzaN@)2b}_-qE)CV|ct zx@9Jc1%AhKfBLBx_MNkSZ=e3yFNX_s{!Jm%U=Y*U-5J;8M%L*rRKC@4VfFc9Qrz&aAz3iey79ejfrsS#SyGw zRtOwP)VMG0-R)uE#OaV?5gWxuX#CbKx{GU&RXB#VNo{!5*Zh?*aO$yr=l$NF`PbnR zLFx)ZoC~FCJSYjQSyMMSmukgY3HGW`{4XyNd=o6!FrHU2;!brS3)0t4AkQ^mQB+&3 zMd*~yKGyGM65}=IUXgCglomX^RGPmvKF7JXvWMVrN-}BkR#S_Q9U5^<1ez-RH`J*I zm*?3tcT3Bm9pm#~{P~aE4Vu0(L!dwYRsY)0{_F6$-~Y|1YMe)-C$@aE5t`Ai(+Vt7 zvFyh)G@;*rze`GtaV5t=u4Jjfz+%r%R-SMSQ3FyOwVgElXcUL zas>o4H6T#7Fl&N4j~J+SAlrD<)u1P|i!a>ublKX}S@y}aERr^A?c=;Ma*9cW|<2$fYY3)Ee1{;eP zt~?*kf9i4=>K)v7-e3K;uR(L@$*8nkkABoH*V)>p#6V-1n&Wy={@$&{S)PB_IXJi{ zLP{c2>RPFrV^E^#U@hm?$fl7e>&<}h={;9VCaXr7K@$=+QC+!;^I2a;BL|JI0$-O| z=(jOE9)6%VboaG{<=MqBaRrrS7VoeB3x6^EyMOg-`_B0fKlKOU*VxQ8$6n+;W~Szm zdsr6xB$#Q5zK$X~n_6)gxD%LUv65*8;c7sCEZwjJ%7iuhL=}4G?=G47dw=tT`_9?m zWD!$n5xhh|drwcpCV0`sz2_Z8aY6f*@mqWT>Cb_|!OP~m!|Aj4ATl`^^}%erk4n=qWb&)Qp16dG&3Xh-@>T)r3lYb*e2|g%r^oQ{lWw zAfP^3t(qW;q9T;L#XYSkHz3l)jgIGLz{2-^?K0MO)Xa#msmP&1zy>%vDUei<_-H}% zN1u6O-@V>MQ|Swj|3*0f;zcAFRuDIBMTk%nklv$TDrL< z)#WgUewGHjDdvp=$JGR)2J#!+A412N&}0erv$zS=yUyUiP(#pEKybK`ag_zdUSsS` zF3iVdthF;2rAx-fXSl~aJA%g?Rz;drRAYHlPvGR7DexKEGiA`8()nSvgurMn7ykXf zdC!;JJnwRv{`hglpXl=G;NTV(N^J>t3C7$A z^srl9#pD!ZVUbNuO`Il5cL$GHM$jFd08Rn3D4MA$u%d}<2E0+2izekb*xUL4DN+gjs&8bW+=kXLVK z8~^TPVd@Vj29622MK!BRg0*H1IQq0AqyvtHL!K;3oI34Ei-SITx;tu-H=)sAkj3_TY~-)EDwS zWSaO;)v#%6)+zEQvSXH5RkMw)3gV@)jJp4@B)3foZkuzhtKA64j#Y)z#~TT*QlYms zgQ#OmXjOBE@l{9EqehX}QwwbG=2IWb$Le2|V=&T&T}wOm8FiM%5WB2phnEsugZ8x7uRyNH42L4(>rg$^DDyK^A1w?@Y(<6aF@ zQ<%Sk2)O@yzi%t+M{ODAO*_}n$Nfn&7B&bl)|L^(q>fl-piZ#aT^}I;o8GnMtwUU{k@#qQJtR~Xn>;X|7|2!^W?)fIz&_8<+5Im@?6*e!C2Fr7mFoLX;=7fZ8=uZLmQtMAsD!co`VD?&)E+YhV~5UlXiEh2J$sG}9lu);AV8zbP!1u7rj0>!GkP zi5TuO5fMtHyh6Yg8BNWSij8sZL3AZc`uW6n20P8 zsQa2_e68^wCV~bd^{kC4F0hrcq;}RazL}SvyAozb#&7@xUG3`(=T09Fk39Gw0qQyC zZYE6J^!lgKno>($eK0)V@NA6MQF|^1cb@wM<6?GY9#ly*Pk~LwABjq>v=8q;!TA;? z_gZ}AYGcyEdZMu-bZ29_9J=C~U){kaZ-{$v*?52eUpD*zFTrnum% zWUHxW^b6V(_lpHCf}sl8ljf6(`)RDI%CZ&?*8i=|?~=9TL6|yO^g45(A|NOA7`xUW z>PE>NaXv~hX2g6yLxx-pyuDReVZc-F<8O81Uf=UV(lGK+(8Pikkq#CK5q~*V*1P1M zxELA;){{lm3hSa`?Br9N&HzHVRs^Pj4Y-ADBpc=S+1RnHkn>TBoDa@FSBJ?fzfW+s ziJw|U+Y~-W!N3&toL1`o0|S%>2zClw^D2{CPKKdU#U1;I^KNfT@i$%J%<+CWEd${` z5P(yIpb+g@HZr&gEGqJ_+`x6S*NbX^py?ihhx_>bbA$bC`uM0(BI;m~vm|HWr*L7+ zt0Uh^h3`c1R^Cd1=ji5OV9|ugUJZd@nc#4ZGFk~}yJ89~??@LF_)-TC$HppU9Rnq= z&gUU-jP*Tww<#Lw^EPg2!1t`0AVQ0U@*4FoRMu2PaD=%pmxq*FY>yEY;$q1<~oL5(>! 
zU`gHQI4Q8aT<;Qo78~Q{K*7b=K@fC99NC5oIxg_*rUJ8})|L&v0Z^sMBx58RaI;V3 zsy0ZouYnQ-6rO|jC6BHv^>ble+Dwyx((h5tH{#z!bAX0l4K^fnC+FiFwMj zYZ34{R|G)j{?pvlb}QxTHX@Udvk5m-@Bv)&d=o#t&kAY4J5F}Si2=A`Uh{mrwgmH* z{ntVn60=qT*3#&`O#bd<@d<4dnN=!@iW#mpD%(X&sh=57ZcQ;bNA!U zv9W#s?}DZ~uILx{6BQM9PiL9OS-WB#W3u(eRJBesT*gu@8kYuGm0F#jgSKW;c9Jog zDQ93t155f3+0&?O8Jd&6QI%`)c^DU`rkZ7%t1i%rWInYw(9f!eUPc|b!8TXXXq9#z z6%1gaYu{f@AE$8hHS$%T5bYSW$B)@?>n8n8kGTb$YUfv2R77aBOewKd17N4DrU9#e zm?QpO(txR5GbV9eCsOJ-k(QhHr{%`IDb*dIEy_V$uJ;fjcclhiUtd$kepg1nx<>RV zO@OMVMw-30wWk)$@q9Elx2In8E60u=N%!7!G95W`7-O{kX%A@}|6WOG!BN2RK0tYAuo_-dS*}(9`8w{%moHtS zn>)_Rbgp+1F@@nrM>`=IA}^sqht|d0HqVC*s&iv(!3qz~%d&m4dd)0=*UYqz`2ks1nR#_p}*a6pl!I+H3Qn+;c`F>wX{Pc74uyH(T5XtKS zWwv>_W3Pje${LwyTO#-_jHzh+U1*G21VjO*Ob5B6MSvfgm^^N7Hrh3n7mO5!h5|^K zsm&Eq?W;4i(;?t8-5g_*zw>z+*s&in)-<55h4rW*4Ol{oG%mpQDI~y!feFBqg9^rj z;|%SN4}qwdAOLBh5qShJd~XtnnX32U*p&~-_nv7?zb-$boo|5IoDTOc%<1v01F|oG zur~(@I)+D3RCw+7eU5v9_m;qH$@E}#QQZeHmLCte#X%c&!Ad{ z)?JSFG4Z+n6eeai&)d+yM>_Q;*xq@jxC@%Tpc(Xw=S3Sd>D*kJW+GyN*re4$qHhv| zu%?t7lO9@h3k5*YEEQl=!K0kTy0T)G6w_L&5$y1@wsfBEZ&LuS2_AE>O0#KdyymX6~U6%RT3npNm$MU zcnDwxMSO<8yUfe#_?!%npe%q^cCWX!jUY?LNr1AVsMo(o+T7>VBiReAz+7%QwR!BQ z`&;6=s^fCJ6ONH3cq#B>?X$_B7Mf2xg=Xhj7%m8ur7ETn1@7g1sl!cKKqW!0Ok4&mf+U zEP|rUkVT5pDefjq?C-fTZ!Y1np{0*7i7U&*dep2hT=0hL>3;EJ4Do}P0m z_0QkFkk|bApK|vV|Hqtxx!Iw#IDLagvw$FOs7g|42_4Yv$b~d?=>;Cor0L=Fscdo9U@d$l{Ru^VdIZV?gnxnczl|c1$tG7C;4)&^%M=K0gCskD4T^-S9sTxYH61|F% zn~P5tOT)?4)JMuV|0ekGJs`3844+f)B2cndoKpgf?h+JL zWKg&7x1i4PQ(zzkiZVnig@^x+X2_(>1W1A-KS$;**qQjw{Q9WMiu7BdHzbXa1x+@& z9*60>arIi7934l94486%*+F~m`MzcLTc9UU=MSl(a8Kp|z>}HyNWB$+Rj^9I8WryGVD1FHMg3*INArE`|BL#JYYKtW zw)5=yYJsJ%tw+WH6K_Lqr4m_6k-y*HW$H`S^Moi~ss`PT4S`|vqthdEi9jVZ;{|YX zQ(GBnHj8MOGd-S|f7D6&*=mk5h%%U{;89IrP=_c;@MLrE5TbA2?_i4xAVuKfcXGJ- z^NPyY3~VfTh?Kg3Du7KPrS9uFZvl!z1195UfS9EtJ^~oU>wI5M*##(!@6P)jfGb{` z|BJuf$)9dHg0dfF`|~E4yMO=UIRnqW@JzZsG?-f3I?2D!rq%i3G&*=HUAgpZ8o2gi znwh-8481_AjvhdzH0_+en#OLtl*R{6rRkBYfTn>oHFh~oj9#SC=rvLpLriXT@@64S zYxAiL_F$cvew94H)D(-&z*xFII7WB!Xo8wn7s5UERq5Qisn=MDnjUb~`b}=eL z1PJ3WB!evcOQcn5h|sD4S5*KeJC;|H?klIfycE``f=4yqQG<4+h0fmnJss$#TF^Sx zM2cvcO2APnTP{--H8?y*1I^*|t?&8jw0C#^mi@j;V@(C{5)L8EEKUgQVGvVJ68Zmf z32}&l1vWGpj^l8e&bPVSO>LBuaxh1Q3B3TkoI#2eWI=-)(Ymk&Xo?TZKjX7o>3%U|)F6l>tE0OD_Vb0D%CM8$%;$WNe&$UFEn|0G)t^LdtAu zZYi~q`n~Va)}D&9oX<(lhY&E57{vv33&74xxC`$(RaTKAjf8``rQBRLmgxeU;Ub;W zK{Aaw4JtF0(?@;|-=z>&wkiIWAL7sq()>NA2W<{}sNE`~+ZIN~@JIjb&qCbo_=)c- zfS)2*FS1KsH-8U?p|JSIHE`h!J&nK)R{Tl>+jt%u0Q=06_Dr71VbD$0Dyv@lNpa< z`Mwt+FWxb5@~s&Qc5w(rD!d@tFIK z`MrPj>$VmOs5QDiJ{tRzQ+fa==A_V=22f((tf~tjl);9T&c0s2R7IhWY_O?XfxC&R zBA|0_yoHAgdZ^3?#rg2Kqr%O9%kVf~FgI269ThNN0(?RB+g^7(9o^TnGbYQ8xCaQe*ip{nW&7IN;Uy8(z>KvO1VMHVSFk{ zm6ey(@GW#`kLVR3L)6pU0PC|dg?+#<0`XBw_6O1o-MS}+ud-;dFr9riefwYe?kyK* zeT_wW>S=UJVj`SY;s}Gc^yLu?-MpAYyOx7(_&eb9ix9Z1RZw63seQYLO9GplaQ$)mk z<~M%Lm!>`aTPKeIO}thhR9tv+bXbUZ3W9*5*Gu-Al7hBeH`CLWU1wlbsBS9g3vzUu zC+Q-1azTwVK;*uR*nWj{Rz4@${vl6Pd7^Rgv>NBYuC1lF6I9T=3^O>#Vs7eXDOd~H zsv-bE2g3@hN0k*(kq!zSi|G|pMlOPdNYC%(cUh7R7UA%R(d#8rO6ydZ0WM70sdwQx zeZT*>HDH9|FH+_Gc1i=h6EgjCAVu?RL@1XF09;`b~7kJkvC1Wi1# z%;zq$C?4%-&kI>$^Q?XN@yF9Vpve?v5i}L8E#h7)Fr7Ew6Z?Nl3wJX;Tm<`GjaVqX zwex~HL8IDpFP=&l&Ryi1qmk_GOwDKz8_{HTHCLyDyIa$q?#9$qTR{*d*aEEb33D9F zuKvEXf7jLk=JBVWNG|}IDnpQO$}}HSF_6i>7wk{G zvq^E5;#_AC6BjGlYcD)K{x|^Qcayjq*V-wBN z+%W01bDv6s7e1B7u6;hO&Yg!@noO;AE2*^>jnM|I&yD9PvHmn3GpATc2GiQy5NVm~ z0I5qbO;=&01_4l`RLCu+I)#BSH>C)et17BfQ(Zkm@5*2#WT2La#y3`GxB=BJm88Z- zD%q$=tEKC^x%Tmqp)`EsDxZBlEsw!;ZQC-1&Y%x5EsN(Gej-Gd66_IK5Pp^za*}S0 zJbL)1Aci&VA?roN?1F8@U@f$DJtFWrBL9Y#hE(4|C0%11fT)@D8`XB~dutQbc>JxK 
zioJF~Qzft81Q@QYm$T60B{j2>My6NO#8O3Ctwc=U)Jqh6X9;{nodH(zq7ako&4nWW zJN_#?7dj^X?F3MvI5H~bGMnW4qNKjCfaXQY^juyEO{=-Es-Kx;v3}|0m(s<{mjE|T zav8kj)LmX=1tU8Z{4KNCjxVmK>6MZ+OL}PzQLMkua2{t!*-WE*nB(U%6Y&b=0u)1s zuZwKU0^hN~1#t0Nl&w%u%j-6oU{{^6NtOC&eg{WgO?2Sr9&p78Iv}L?@ zfhx}#8886n6zSx_vC(v6bT|!7j02X{T#>Sz9!sNRBLSGEF#{FI$%4tOs0A~nIWans z2ComK%h(uPK6@dJ4r7C{Z4;y{f?5^@DW;k@hjyc1VZ1Jzt^!LgbA4}$#RaC8dlz17 z3NF%lMd5e|#r3kf?ehv?)z9#um4>hP^OyK+KTBY2YSFIuk(T3HFZ1x{9VeToq8LE! zl_|DJ!{rp5DKmf1U@o%uyGHQ0`@({*9*L(iqQS!Mq38JPEF6J!J*-_ zIzE)D76wwo=0IvI8&A6%*VFOdn)KQuE$RLvZK*w};b}nnKD`y_6 zNxS+mTkWDyrM3(ZLmiAv5^1su^moOSTU0e_ZvxoSr%m0UdP2WTRxrflA)?L?L2s~B zPOo8}nXB~F-&Ii%)PyHpS8GGs)7zd7_IHp)m`f968y4`TD+hT5WsOuk$7|v_fz!>3 zyuyCQS7w0tYTI4V^s1-li|>!?XazRs$5R#hp_cOHB;f9Btxa8Rl-rZ0s)hout*OAw z5spZab{!xGA$Q;IJ!#j0eW_>9?zH#7q14l}3xmiS(rQ#_04Vyq`_t}S`&0j}J*lt1 z8-}JDY8Z_Q;A(@Pt##N2G}o|*FX6yU`eSkplSzP8d0VPz+LdZM4y4A;{i(HMYlK*w zVAcU;E3EQM0E0Qw6tikIHYuTn)!HD6+C&&mDIFS=#3 zggmr0dL^R%rq*^qQ&+0OyimYXNshh|=Ba}eTt{OqCFON6Ottt-RYe)D;AfddWfHy0 z;3UT|xtXSx>R9w!Q%PetfX)iKty}0Dq`?E!80!1G)jd`cWy)I;Dijpzmw0$fGhbO8 z4B*3}!K%A>Sl(o5a-rlle)4Y@sG^Qs0Wb^F#nUgRPkiJ<>GY{nEIP9V50uPSNcm?i zQ;zJnZ!%Ex1<bfYlY5Bssvs&t*Xr>6A@AW1XKXRyFbd4fSc*L6q>=q+)V|&tKy+SNY6M z(x&RiA}{d@ThRqM9FEm#<(J3vSFXYk4;^mtd&G^w`Ig{jn1xk-Tn#R=&A zDoF)5R+XgI`f5^Zm8q#3EgY3Vl`xs6DC)W1Rg8O$<2L~pH+^^tETWTrrP>mhjqFr^ z=R~c5I6zAP6o2N&O#l@DC<7sNO{N8NHC+Zk8C2y|8X4n4)g4S=;k8kL$D>#=XH{N) zo!`OV3IIt)X^}Sx-~>&Oqd4;eeAP5_lFcy$-&fP};EU<{OP@&t7oJaR zv)58%`9#{?S(Z*5=}vq5TS9Yl$G9$3g9ghOOhiGTuic3NqTnx;tI~2RGr*6JLQoS6 zb0N)`0Z{?``1jByv_7hetCP=k^?$F-@l&?@oD08JcM zuva(vA}gkI&?XQ9_GRmvm=HaeS@3PLu*`I$%wPmKOei3zD30;7eHi>i@SGo3*_lSP zaxODYA%cqeHq=^L69oEtuB9{8^L%qhF93t` z`kwu?g`j;!6Mt{%jrUc_%uwRr)weI5y!YYMi>9Z)uPb$RcSPBCxgJbRMvVZGJpifR z-rg`oyE-(Ph9{P3#Q+#7BSNh0CPmhRi$F^%qYCcpzUnKtoHa$Eml2mQlAm9IotRoJ zPxBaMuC7#(#wky8OJ!+>pVKfu)BK!e!e3gJY06bV0;UaE69U_KzO1|{RX22{GAik! 
zK$Xwbz|sY>mGogVhMZ1Fm4K)!#Nkynh|XEKO=C=wg1Iy_ovsY8rqNl%zw6M{P$ysi zwi9VL9#Xe&R|jwm=Xh@~tjlVYOivfZj8TObW$)NNn1UdbG>Tf$`$bWEOgPRb1?DOy zb&*nz)ch|%0u#alg6eZGKAp~ zk5OU?Vo2YB{Dhu6uR0?)D~@%To)LaoSNd5J+fEQHQNX zHGr;y>!Kge3XfGQzD$FJrhrtp~5ao-E#$=WMt9b$pt!6Z#*w)l&kavJM1$* zwH%1_oK!xac2H4dk#6ejZeyNu z>c&wmvy`jvJRYq#N?`kzrbf~*DbG1lbFGc#>5(H%X;;$<_9%1cAZHNkk7LF-1(Q9F zRm2kG942PhUM@Ly5UpJlZm{3CZJ&JNb7-1Q!_EN8WiJX~DS!swoeepQ#Q_;$lzVT5 zd8dne{C`d(28hZ6@M7g<5Xw0giPx8Te+OX5hP9!4Dgd$U;+A}Oatl26{n1m{J z{FoTX&gTNj;`#IQZ?W8=%_@+8A-)$JNhU?0QP3?FWDAkTb7~Oq@~ee`g_mc5#JSYw zAeb(%Rgu+-f5nl-dAB7^Un`v*USa$108Q)Z{r~B_V{vS*OttZo0K630; zLD160Pp8*E@aFWR|LmWq-~aev`b&TP>k|aU-R*zDGcaxVU)!FVy7s~fG^UNJR(y{7 zxUmpUR`)>G@Tl%hWp(|jvVM2ktnMe;?@aUSt!ZwxiPQ=HG#gZ#HFTz3yZ5L4h+Fp^ zJdh3@Jw~stE=(u8@U&^gWU@W=_3grZvMaUYVbk8;p4z*6086AUW)bzSRsn!u6G~xg zN}EaRpc$%fO?9NYzT!>yZ#i?5R84&h=424<(8PR28lJ5tJywa^c6k~YPdql$_0jco zZDgI_0W%ZawDT+&M9k|;EC7gkmss4E^?2IE4ZOMq#-cd@lZ!?b5mpOEQ|-;9=J1nh zq*|q#MY59TYN4nViA#M85~fJQTpOK7%S5_rf%FtB+!W>V-}siJfTqSR`+b$2n?1p!hjaLE{@EB9uyJ0WCz* ziBgaf-wR3E%N+A^v%kmXiuir1{W@0 zO_#1+N9P8^!5)@sLs3Vi5}#WNdUl>E#9Q3Xv@(WBAxB0viPI6t%ffDD$J zCc#p!*3(X44$ajY_p>0z!dlq){6D1S%p{eK?sf4mx7e5rEZmBEr`yw1#Z+9Rtt0)C zHGZMH<0+Q*)K~_?N^!{0@I;y?MYpS^63~Rcu!{b5jKh#U@iS~OsIs)hQUC6JsR`!T zKPAp9j8$3PqBmBN z74^8H<}46J0ieW%%VSOr;R25t?h$)ZZZZdUcK1>g)4}1FMV09r?>~|bZ__kQ0h+wu zJf=u<>Vp+yJA=j8GRg^H6!SscI|UKFb7Tx5uTcheS^S>&&B-GSXyvFP{g+>(u)jj9 zkHYsv+EQ<;!dFH`9UoA^eBaaZKNK{Trsp32VET{$D;x=bYYOkz-GEQRFVeAr9msw{(XDXJtyu-hmY*% zM(U0TB*fKBU@`d+%GT77_QSu%Xj?&bNoiF&dwGEGS`DUX9%g8fCc4wBxIQfbPMqwT zdeWD_Zr7HRaD(clU-;MmA-!~dI$a&9NH@l6(3#YvtK((q(r8IKH@K0`TwhHW2bR-~ zVJnyL$su)8iW%k-(fr&3>4zotEP^3izG1Z1_z5*)Hl54&>f)Z(sisHm~6bgWg#Mx%Jh(2Ohk`m zxVweHBa329#d@m8ugWICOL!$&ia*b1+Xv5zI+2J=tJJxggYaKLv>$D}n3z_0T|?DU zdicKm>FsZQT{=XnqPGViTH72|hvoX8Fi~dFv6DwR6W%t^gpI(H=dNbZVD`+3FnSqghA0>+HAI4|07w}qg~eY4l>W?%c(5r>%92L8f7bj><^?yXjwojFcQRuCMv&1;3a4> z?Yztz*7$4%;EwAhzH5nSiEj-{Q6%%@(~IJ76VWeE8nt-rEDAD83vUF&7J2& zpzPQUjh!HGSfbSACI6A7hAG*-&9O|w~%>bpTDJn0qve?~Lkq-B)B`n@JKj`9u zfuOKN$8hfB3V>{V2fl1&briRZanCl;5y1VvwSD@TCt;e-1)zxW>IjasSY|h}zEE;9 zGft+kYzke@b&hcy)*YThQ>6kE+K&K+x?!|>+q%-e-o5Gg-ec*F4?dLM_1K%!n;v>? zdib9E(?j>(mmXxEJaOD6(2AUdiJtU$e|Qh1i;X zsSHh%AVe%1h5>P`*$+#WSpPCKW7xN-E>HtCGrh?19!ZDxbZt4;(=tsQTvS*3N#S87 zJX5EX+1LO@MT*JSh^->6$3L8E;&nN=b6$uFyEr~Hm9q;G423Z=pDNoH0M}HmteYTM zM6S(@V6d1nF9fVI(LDr}ueAMlX__K>Ex7(ipPoNwTdyul_YqNb_Ozs3C+SDz-FtWY zU-Ar`y*imb@VSfWgMW5DJ#}Fwy>xv!ogG|D7ltXLo+wGzW-8L)LQNW5s>QyN4(;fK z7MYBe&oAcIDYa34%vrOR|n48m_W9$n6x6Swz}sE{x#>$o=-Oh>AObSW0FZ? z71PWTCYfC7n{R%*oi~x2uO)R+0h6;?)c}}UPfxvcb%*m-#^T$IZl;5X3EI7~kJK7b zr3@IFqN%l}G`8H3CRa$em7!^Mxios^MXD9gAilrE^P`keFW|CH z%BZ1?2o_BkKP$0;xN-5Nbn4m9rgJYmnXX+tovvR!j}O)m)rd0$&?_Mnl~Gzis0?X> zfV7G9XiMs(r&=vfL`uR1y1)qBzTY_lE`TCux0rO8{5ZTw8@arh=Ha1%Vwfw+5;5VK z1`Q=cLCc{7b+A&tm%qm&K&PL6{-1m}{qFnTpZ@TpA5Wip;;Ho9nRDr> z7tf@pozHFoiUNFVrF?Ac4G)Y zJIi>7$b_eI@nuZ$yXVE}&m+=;VH^NAiY>(q-cH*$uR-`NDb^Oqgmx*rbuVe?NgCZM z!bNmHf*7?Np+R9`Fme}|ED78-Yt$eVy8~IZ5Pb7``-3g2Yl+XzTBxj-%0{c2S6W^o6Q##nRkowDpQhV7{>M31HhZ;81J&fJGG(EP0%thSu>#7@uh>ivO zqVb*M^mrVfLA(#8wq(9{+m?-BvkuAoFGgwvErrxA$5FvJm#L_!vYiNvFLRduT`|r9 zl>+i=s>pgfI{4j)J- z4jfD!wp)_*lf4xTL>@b{FBFD+bo$gMKb6i5AkuB3ny9HM@TJ@7CqJIv_a~nW z5$B`t`L^_?-p2GtfA}BLfBnEG({nGJLu`wxp2(?aY*pU0FMa)A{r2>|fBWyIZ~LaN zPi>?&-0d4zE~FP`b`e_)A*hYr>fRAE;zL1s~o`HKmCtA z10VR@)%5Xa2htentGVS$=vNq*RdgPvGZ3Y$m|ifNP^8NQQU-HW<|G2ps6o@wR0~_N zIFPQNeI}iH?x}Qvi2e$Zy!wNS7cQn#rzsP@cqv`KKFFeTEnU8RIbFq|Zg5}#zncO6 zb}d~S9Eq~{cE();phYKR#lmWXt`AM5@BiklrP-5AhL1lxNP46{b?rVtZ@DfsPz@}? 
zYKpA3$|iqW1@v_`rTqegTB0w24$B1krP{C-@b$A$&6b!i@m;m(1NypK0Zp{cAfLRB zHewSXRmE#d(LPvVw1A#Uv%O93oejWKDFR`g%)2@&Q!f*GD{RyVDZ#fq&`Z?0)%)WT zi|&j*O)*8gNyBqH7c@7rL~GbDOj3=wxZk}@!Fx-S$nnuhD!p_= zM#lxZI=pwQ_G1Y1$*0a;3uYpBOwUnpp4@7wr2v+YrpQ_x66N+}|E*9sRa zUmY`KOlT45N5mVHQ!rKe-=>suRbl)uKvmX|2&xtV5nukm$t_Hid$Md3Qg&0edlKKJ z$De#MUATf}2B1X)$}!9}N5&@5WndFQ1Ko*9M6Cds8Pdg5JU2N+McfG23;Td+*o_I! z(VcFTO!v z{m!iood>5D)5URI_yc8SkDQ`^PH19XX!sc|A~=VpX=Z2bE}yH+sw)ts1ES#$V-a#O z4tMI%cV*8d0qQa*#(T{7L>f+3DRfTimkNMN2k|Jb5Qr2i4jFrI+P~)x-1LVqO#-73 zO&5T-DX$<2f}JR)5_8*;5aGt61(Ok=7oR;~!SAl!v^WVAFnnOtJMJCMRiw5Svu9o)H4^Zl9>P&4I>y|LK z=7+BXE*bzuU8%FJEu;z61cIi_?L1&_&DrUh^z2J7r_aCeG=brGoFC@{Mb3h@Y`1{r zz;;a&*JOBNi2K^Kpxj9T?0G&dG5|C|hVfm@8+_L$*I0kVC z`j&6w`+oa((l9sKzy0aIpZ@7D{d)S)hd!A8@DF}F{qaZtEY1cEAi$4 z2rE?E2r$|ZaPOfkx2J`hdwGn>YLH_BorXis3D z@PCQEN|ks_k(v=OD(=-B({w|0qL0ag35qJKTE45CC~OrQf^!$=((CVOr#yFSp8V2v z_JMuBNkS17Ylje}?G#N+PmXdPhSSspZs~|$ZJ5~w)gO!n->b=}6?Dr0C0mWibgj-$ z5v5YE#&xP9ou|9sGUs4cXKp}|AZlWKGTj&$#Xo8~*tR*u>sPMeHH7$i5a4Pzb0-t6 z9}e%|Y63V6Xn5-4_0Sr*kYq7s(L66#syOjRbQ;=^JP$i9sVS(6K2T1+S@azO+5%v5 z!Yr0@2Mr%nsFQdWYUU~R=G2<$c_;OFe+CfCjFopibl(2Shi01^b@Eu`oUY=$XL=K6r<1; zj4{_`Y*?r?hrM=jEEq7&8v5H{AOZWek|KQ3BFD09D~td~#r7&ubQ$7wk2xD&%JP}g zlex3X%T_2RP2CgSqXA(6^iBM#-bkOt?Njp~{K=nCMTJpq;rh7HWrUm4DluMU2>Bkr zGXQ0zp;+YkTQmFI+g6$OG!uMOjHlk(rPPJpN%aPmo>OCSFRZQ1A@)apiJokwripuG zI#PUsp6FEu>S}1~PF?$trnbIAG}rD0G}fmJSBEH0X-@m7_Pu@k^fOPS7g<>KY05gD zqL|xjR=934KJqzov_$I-n7b+fOB>w*TIfcACkCCSO47nK78oPL0COx@2#m^M%Uckm z*P)Aa6X2RQa=gBH`V1ic^Xb&(bHNyf)<)n^U|@>2B;TF1s!(7s5(PE|9S!g=mXtyG4w{8$6fEtBe;o+eGs zdu7G?Xsm4`?P`47a&=(dhkS+oe}h0;lM=AM()Rzd_aE?)<<*%teiTbN=ayRPoF_6l z&oF^849NxqPT1?rdSSf|tZjm`%{py(@jBoIgYgm!CJ!*f5I1iuJ_vE&SE?FJWSJr0n^m;(lI+f6}AHy%H@+Xh%2g(&)Ih#J#SZB zdmGeaBZ#}l{^&n{!s47$Ny%<|?|*m)JPwwt{GB4vQlP3OLduT1LRkP(zNmzN2o~U3DH<>a|7wO zdL~|AXek|&+^A(I&;(mlM%A25Pjaz%@^lHyx^^bSqqZY_37Oo|?Xk5YsUzWeMkKK}{KMe)2M`i!wTtNG}z zT?Y`^W1Xw#&~f|g~N0)P`Nayq=K#Q{#X z%?3QXqF}eHQ0U7Bo{@Vs3$RCdj-H%kbC48N4hAuAF4_;st8CgwubpMMa)2qfK`-!G z@4NB}*K4dkUhKJGy%Qs7<7m&q0z3<2HV3;DCd&7ocZ}6siTHU z;=KX|Q>6=IVVs8v8-}4BBhZ-8rp)I`OW?ndu<^k$#>^y|l?B_`L|TR+7h*$Okc&F5 z`_k|&a$afXQHJ!O94i&@6aat-#l{<{2VR@O28DKddo3&v)xbjHl@7S@OIZ1(uytf!1njvFU zoPG1j$I%I%u~~q3P^Fh;T%1vnK1~;D2_g8~*POFJ)5O%612CTt*$U7FbUjf3w*XlH zR)NsO1_L%9fm=U;0%_+k zY|adw9wsnBo`B_VUv2|8=nvUgsP|a z{mviSm%jRC`|?-6VLx~!3TR%3frRZ>nM9~sVqc8vjl}sm+|zdWoA!WdiEdL`3j@>YgAh8tYCx3|zJuyYPmL7C$ z;M&+#0YTzD=o`202r;I1b~RfEEK&t8!nzN`L zGOdZrS2bBxC0t!y71bX#xCSHOY^x_lCxGN?RnP1DLv%I^BEcBZuDafnMywQ5g*cT+ zb8IecJ>v^@vTxqbqY+a?w`O0-o_`>s%x$E>1lCTH0yIqz(Zg?Mlyd1s>mWMX)742T zLg_s@hzH=pOX&H&2u84kD9W99c`xZL^eCq)M&;~PPz_z%8tl*k#Hc)W?P#;k&IY`+ zxF@ht5|38$8EQ!-HlTf|M`T-yACW3<)*m`c*CFMEyq<0eqHjf^fdjlFK>uFf3NQ(9 zmvNuVxmU%ymSU_v<9|0svlO}&LF%v;U*r0mul8=-BEO@MUu+~fFQiqpx zJ6Yf;c1n@_iWnwuOdJ5yoDY;YtW~uhKwm9r{Cl+u$q9Q_U2tt+8Gz985;{6<;((iF zz|F$Kyd_|CBQw1go9@PuzZ*ZIKC3PY<7Av?J6SyIN?@Vp#w{{BWDz#EB)}>!O7r05 z7!7P^tsp^pcxuv0(=%3vCtIm{Ny6+F;mimAque!hqD_ zv*&H>>^VzeQLzNuyUaDjrOZ0;bSoyn2~T7Gi>9Vf^Mi-3!-ei1dly>oE^7;J|U^%9CSVR4Q^d4Gf^#Pp52t3tS zlD?Dlj~pX|aPru3I|K7GNlGjp1!Q}3?TuQjpoP=w&y&?H`g)=HqPZRfBMK;@6jV$B z#!_hbB<_v?$S0?!-NUPx^<)9CHw{S2@O$3Wiut<==2<1F#}MYqB4wD2n9R=N-j6ns z^L*jCXY55PCeNRxKPtr^nzMeVv#I`9+iC-e84L45EO4Y2KH%BYlB_OM!dZW{}VJ6m+jeM_uhS<-Ff#9*j3lw zXuA#^wAQZeR@c&iF(oM~^hWYylGQ*Z{y;@UJo&IPMSzw@N^!5g^04jfY{hSc>J!pt ztq3Y>(Fqi(GQ%6~Zmhd30U&Ir3aq*U28OX8#VR@)$A^TuvPe~kyjhBQAMK}V06T*Y zWr?V$3TnT$uF-#|MNC4$2{zS!=v_s2)itE;h)Uc%(l_Uvqb&J&qW2;+PKwe?p`}al zv|0g}jYl$es(;pwUzoMtff#9n6j~;MkX-Qb@-6hu7$A__lmjqAM|Rk+F0R+%{d?>+ 
zS6^v2-E@mxebqI9oHiH(m|xf+>8Nx#h?7P}x)x&bVub9)G!<4Kt|09cv<0YzWovE1 zs|NSp4m2QL_+{fGLDd-rzQL8=`uKd{R#JFpX0Yon2jK~$>1QolTbCm!;H@cQ`Rq3Rile>XcMmR4&ISv2VX*R+<5pk2YzOw%+s)VQw0mzm zXt!Lw)Anz#wRS3gc2QY&u(J~Igr)|TD*}@i!6ssf&tIW{r)0sZ^5?9YwYjcv&MKBC ztuT&xXGD)ND~2&H$Lpwy5(<4Sr~hOHk9x*?2Xmo|IZ?-4T?F(kz!+>mO$xeHtt1T) zV{rB7*2j|&H_b-21S?v!5{IpURqCv>8;`WxYj&5}_UZ*|pvPlf=?=6&ovaHKc%)WH zrQ*WQCWvO)#{uFh?=vSRqL9u0^4ILNE3e;W2d>^_T?h7CK~=M*uqs*1uconb(vF`z ziGcq+YeSI6TYq@!G@A|zoT{_hGc}zhpeutZ+KDB~jhA0#H(h=`0_z4^JJCB6@&6ne zwcn3`uTLrdB{3h0Qy6p}Gu zeCqShlJcLVAYyD}z=i-q6C?fZ7)ap0bJ|n(CbBEJ>!fe{a#F} zcT!+lXDAqmqJtcxhv?)Oppjsvrn=S|sf;dF6TnT5AS0M%AN}{M^oxAvIHq~hx%N}h zI6^gR6oyNer$EIw`dfiT?oqM-Tg;tg60o|Qe)eDe!e5mqveu-PS=l&)I;_g zzx*D1%l&uRn}6=N?XN!eKdk(+d+ap__Xf{d1rl$2<85{+>A)P?-cfhyt>!YI)s53? z(^GliN8f4v!0x&C zUVH6pUu*Zi?q2`?hFfm7gOu8C!=I)NV6YX}O@g9NlKDcqF=b$OuH;edv3Gd@N3N~T2n9zOKW}MwFHZnH+VjA7q zsi$7D3(udi?xSbz!qJnYe@@#_PoG7Iw97c>dVppbz&RPFI*dxb+0k(-EQjgSh-MIx zY%e|Y21mwi2(8)>y0rns&Jz@3%us|e$G@jYNllQtoajDd<2`3>cA(qlNBV5C`?N*- z&s$>XqGiShtRyjOa5h#7m`ovpA3F<+ak_hx@!=BvAmyTa=NyT~o*pV3;PW$&OJKaV zF}$wYRcCizy~A!exXn8dBoQ{Ju$(Wh>O|mu$m-jgWkBEpdu@zfd;mCEB=wZfL;Wft zpzMy<97DYhX+&-E%4XcPF!(G1Sm2^<6r=!SFFbKY^2g*vzt+}lgQ)3K? z2`Dx@9GH0U6^sy{vwQ}D>I5kQ^<^9ZsErcf#?(&7gN!Wem(AUKwV%ZKikV<*$40OT z>Aq;w$P5xld=jkH3#{2`^@K#*6qzGcI@}MCl$N&Jx_NFt>Fqgw?$#1nc0A~a@XALp z^BG`2^ZviJ|Mg_oYNPksFF?C#5U+i!jR)P`5m#_{s8@#gyawJx2v z+36HZxw`t5B&yAt`sQ+uulh;0e)&DQ0*c-ji3HP3`h^VWdohCl38K5aBkH(rNE(vz_}6xR#&^mSo4yb~|`vuU&uRb@sY@?y(=Z|MhnF zz4zJO_uU6@x|_#6_P`t8Xm5VgTkUN>^n>>Hx4+Zg_@=knjW^$Ix843ayASa6`Zv7E zUVG0Sbgkb_M2h$j9a3C6q;*G5&oUoP19GT-12n1BS`|PN`YGM0oLv%KPz;ShG(5w? zGG*gTuCr&)Kv`czbPapP&MP0OGQh+r0P5j~AF`((f6V8>EZ|9E_Lokaux>i5N0`_P zd1V$^Dz%>JG~(zPJKjHM160b5&F0%o7~leL(nKk7RVj=6CO#W!*t!^>DK>||5iM7S zD0qbhUFLN4RG(JCgw!su(i#CX4H0C9vW zw-ndtWx=vif595&0`qlgCSS_33%wS6y-O~I=e7q*@$oYOIJaN$++|Dx=97*#VAIYM|X&YMGu>O@6%6N|Fh3;a^mP`i1s!17Syzv zuS&Vfe^Crk8EkbY)o|DEYqWcBJYe_Vbil4Zyu-R0(Eus9;bzB~0}q!tCca7u506frH=^h3)7}~S)g?IrWrKI6@(E{p50=dig?DCdU9jtMUwe?#@&&{Y&dqZgTM3YyeHN9~nVeF%Wz+=qv!s8H^9&7K>_@^=bg>H%|2&9&A7 zklER_6J~jbHBix4#QE#qL~-vHQ#3X*iD-J3^)PDlurx8WQAq$*K&rnL9k|Ve0vOg0 zRvNXdl72APQ&3ty1#9Inw@ur&*^Yg?Da@gQUS*K$4t+Fliz234klIuF!FnRS)i|lR zvw)g0st=dBj?1jc9vDfxMKGlnQj#f2BUvS9eO@mLz6w9_1SM+Bm7l$t0Dp zXz=0`W-PI%X>D~0fp}~gk@7j}(U^5|Ut}X8aH^T>U07b_2tX4@dHWvwy?=hxH_0#m z-d+y z0(pSo5RWoaZ8Ak|f>An*>+fw>?zOvbx!mr$ocXWZ$#i{5ip zwY9#H$6;ooAbc4xiu%lnrCsmA*sR32X+J65yY9LRVDwtM>E@eA5A6>Gf@~)3?OhD! zourj&+;1d*5iJCL>S9xa8!m#U-AKnxU<5eLCM-W1l~8NL@>U00nCR%O0X0Ws&shZN zvw)=DzCQcv*S~5{JoN;VHASipQ%e?tK7RfJlZcEGf3N6$^!vF>pj#0h+kYKKUIh>uvJhzTzMv_O;^8izECn(%p72B zg~&6NV8dghn4KWiiSTrc#|&lOi3Cd>PVt$=GR6j#JW0$yv9w?h!vY{lG>nD@R;mT3 zco{vGvrtM?rCxlh1QRsMS3bsFjd`dDLGIx0?3Wv#1K!%*$mR$M^(W| z06qb5ifU}CG>CK0r8hjSn5*p3wa$X5T%R%jw>B%K@#HAZqAa1T0|in8@&)pqYR&=% z)oALOrc!Td>^!K|2IN||R|ZXmX#UFZ*a=~%8Y;=qriC7BR5Rf`;VE2+%;Qwh$@-$0 zY(XhZLPIktSAc|^#@pE-w|A&@5mlx7c|$d+Sc)Z@>tT0WU>#a{@H^5+O8qwBt5!=& zycTfYNcB()Y1Gz+G8jiRdsW=u0zPMIid5m06;bF>Nfp}$c*+j~ur5)tMCYCgwsQM^Yu$SV5`Y6X8(y??ePP`9VM?*MG3Mq;rI$_|bFp%Yf|O<2%j8q} zn!lO1pkj#Z79&9Bpot%pu`jjO12o!qQNdkoLn?6}?#9~;0MbMy7Cv+(0!WOP8OCrg z1r<|c!(Jt-)PE^}scX+3+XrY$lOBEgrDyH&qtDqW#;GzB728=#yCyMk-uKZ{+e*6! zRs$N$x;n4Oez*ol_8;=VA}GPiWoQ_-4&WO6E#6Tu$<-|8m8u96=<&wMh_=xqh38^! 
zVp_y?$*7eSphbVX$h}kkQ)`g=*U7#>H&R}EOB8WvEm~YqVF2Ka;BOg#vO>ji2$s5$ zG1s`Q9e+Kt0&BEO!D`|uetk*@rus+rd(eu!$L1(}8K#O_W4jdAN_G{R`pOrSNQBRQ z(W8(QVHtC;2Uy3s-Z2GL@`+{T~f2Th|Fo^^4iCUDiEw_C%tKW2aV$*=p(IP>xMTzY{eB1I}G16R)P z@$qNx`xTpjV%CvCu=p2${6}^%tJk~cK!>$AReg`&aqHzay8?BH3=ZyTv|FwPB;9tk z{m{L)*jw+t-fqA4fbA#s+FFAE93#bgHLb0qN`=LE$Bs_B`syp}x@)hpJ$rW2*Qu7Y z7ddSxV6{i7M#NcDQ;*PC)HT9LQWx^n(8dy>GIJc^;DG31M4B?YEJiTi0>$2j$BInf zsu3xd1I+5mmuzQSP_n!FkYn-Sc#KEfrs)`~1!090PhkO(seF^Lw`i!OGN*wEpG6y^ z7dA`!!%<~%_RMMeuwB6BV8YIvK5bw6@>lE|4}Qbx;YNJITG?3S?NPyJYG~;sRd&dl z+xHP!me`9IrtOJigLWFRcQnc-ij!f6_g|zUt`Jpn2@Q>x7NeB&Bi1F_Uw@oGdx|{! zQ5M)CMC=iw*O)ce@cE4@wz*!!qY_*WpdUa-CZB9JW@&ix*nqqo*G1Dr@@(MVYsu#& z->|R=a!3TOo_R^>wGca|=SD1lcEE}v{Z_UB_zcrFA~i&MJ?4GYio`t0*bvjqB%5Lk zW?{o2KOdP8N}#<;EUhSgfr`c63j_A_6VKX{k3VfkUp#K7&s?-_DwfW757?P=^6Bca z6KBrb${!H(T@)Go)4lPBQkB9xBNuDqtDHazRe@O2s&!^mZgKgTmD*Iz{`g zJJUMPLq!wbN=Bk@f&s5U5(}rMvZ`E??ux}yU@6U_?agM@&PBcG_`OQE#l{6j3LB5@ zu(h|{ccWcLuyW+`9n1%uutOQf2<7+~1-DZz+RNEs-~x&THn6}AeSup=#Ld+oJ+CK#sO9rz@1 z|5D5q8Gn}G+TFI}pmiR(&URmZt?k-<#2VZ7SX1X!*0k$dYu$H^wH>;_c3yX@?Y{9Y z+jYz9*sE@|NvpB51Bm=bqA(jUD7&wqaxZBQW0&*Xvrn>~Oi~h0#U$@jSyk)3<=psZ z-QkS1^k_=2%Bh-jtD!XgmTPXXx4-c%cI{QydIj2g?cKS}Q|Q~mdq{gKxfr}*_aKl)B4xm}!o(#akJp0VE z_WTRSVc1E%VQCSM%bZhHW2DpJE=o!%0IFmRkwMswo3{!`p%lO4%^s_6Nz+pqodPujPi zevylM#ymoUaPRHh;ta70#ME_XWG^`ups+|LXs-_xzi;*`Iv=+3Xor z*iBd5PX2pim-tOrnynxHA6)@O5(`vM%t6;P$9y*?8xWhwuDD|DCG`-vUD0P+> z#wbF#!+wQpCuD<3< zt06jV4$M&ekw!Rz-fcfkKs#ZfIxxX)fl(=E z5iDdE%jfgv3nZ(=pTEi1XWeAcOP~W&$#fdfq%ox?&-s8FHTDe=Nh`hKIw@aIg9R?f zqUZD^`spQGh3wMmJ=U`UXO61!O~FO*!Dirp{2;R;8fL>w7)>Fy3cNdHi zrQc1pRBB~NF-{NJ?D!z#rk6D5pr?(Kl)T4e&PhsnY8;OgxJu$n?3A}iXAG@1XdCVRnZ=(s=9(rlj<uo(z=M{WMeO+T)eolf?k$Yn9@?Q zipO6S4Zv%+?YD;AhplDL<<_w4GOO8fnN@UNX?bAqSR9@V&((r3ER*T%BEqK zX7j_eVL5-kkLx~3P*`JKo!dyk(+Y}I-FCpy6<1v5y3M-?Hn;C+AVn0mlh~%bgb2Bh zc@UAWn+gaRca^Y%g5+fg{o{m%5fl9YpMp= zT7BMSoVjPH<~?&3DZ;5iYyw!b5QWbHmKK;R%p$i@DHgEc-6}%pLjg`@1e`2Pq-{s6 z=K_JHEKn|5ybsLVN4wM znYqt3)0nHKGPVM?zMQpO-56A?(*UroCUfCy6mwtI#VRsVD;(E{{x2A&+^BKTRF7!u z*FW+eE>!U2$G-6QcK401A$@Yle(MVXc&TaLVL$iocS53~%x7ajo9#z}MeEYRL`fmI z${mWZbja3yCOcB4yx&aHT?rd8dhT)i;LrY|{ncl_bLlm1d*DaxN8ft$Cb7XQtTbCc z{y(|`QFhbu8L>7qb{b~j+2bee1#|&t&UeEW^sr!b+fj5q&%AilUf^+@fqN9a#p%;0 z-K=sL{|td6`7zA_nAB-}9#5cNJT|5f9I6?rfRr8rP;vZYl35Lez9y8`AA1a)9d#R}1Q2|gjTH2^T}0B{|lD(uLS zeT?nh4ven8@-?Id@pDlckVz&%`Yku#Y&YL{qg_t})t&gc6f(Jzi$xZuta%KowHKHC z;W3O6dB2cMk;r zH(4P}i_AA!c-5z^m$Ky^fPfgs6kx6#y+s3V<=b#t?rLteJ#?u**tydV@Yvb5jkF2u z3RR1ZfT$K&OaZE0=qcQuV%;Irzy-CP09?fZ7|SZHWN}u7S+VpnfC#ZoUICa*d91FM z)5`_g%0ST&z@h9l$bSA@s_jzBEz1t*OAg3+Z>@l*3SQ?`qDg7o;`o)VM8@G$Qnst9 z!Vd0kwY|GZskPNojf#&>18s`9#N=g$W!l-H4fG}p0?0)wk}wb`}wsM`lKR)cBoGN4>`3j*4KcwesMv~&TmsByM# zO$^o+4Kt}*bc*2=)zdM2{33R4h-S(ZS`5w79Gf(6S$V5g|S2#-#W3nKL}p zl7mLhRN*#rO*<*rC`E%Olh_??+wI8VgSO}Jehbx>bB@Dyycc6%0?M$;?>+6S0s!U% zX+{-E6vFNZ?!4T>BjRdS0>WS|H>J^m$W-Sx9YMNV(R)lFObYLFM5% zEd0q+wEyWDMrMFkQ7o={fwHdV!^~u00uum_2moo8fM$|HfFVlrFP=JSCs^B`!9!7L z&asg}g10Hws(EY(RBa|bpoHoKo&a!xrOv%O@w}@Bs0gSArnl>lKy)1luf?ND8qq@# za0X$vJmVA$m7}{XC&)`P_QnXVdN`kM*t!en&R9Q%3kuRlIUiXKs2WoYsT)}Mg%Z;g z1O*a{E8SWQkSf=GQtwkSbOZqs#!Iv=F?lvA`d#6sz}%7>x&PWbZ7)?mUbz5mSOM}c zM`+hVq<8&|x7n}!=5N|hQYoibohC}0M~u(}0>9(_H``4IyRz41k@gLJHkc^Gakbs< zec%mt#a>Lr$afxv((LHkVV7Tdy}j|xci>^MXiq%)4SVR37yWO4@;Cq3?(3+v2otFe z)ZW^)+g|_HpRhmt?f2Whu8r)%rXzprhyTY{;M*?@G0|r|5{d($1d0ntrFlAyXiUP= zaZ)K$Fhpt-A?=ck5*4ZW#+h{$&dqWE8M9ic7a6K~L)dIuM`;0RLY$+Jt-XAP{9jsd zF>7SO^VjD+ClK%sQ5|&V;;{YVyVg1<&msJN=mo^oF($A#XN1Bn&XA5Pz$mW@b_)GL z1HL?EdA=dYS4mNPv+FuKNIC4J6u;T(*?>CP@DEV3+=XMX%F!2CG$(0G(L0hP4G^*Z 
zadZIacHC=9hLQ*_qvOMthDphT9xmkPYJypa$5kv0Xd0m@>`TwtJKp?y+kxP4_3^1s zeacRpI7RedM(;OTK@jb$WL^EkiqNyDWLto-n1!&65}|6+3$-mo>aE+UrrQp{sRwkp zO*t3|15g(3qGD;Gc>fT93*b}R*hWy)fg^DR;!R$cK7bW~kV4w0EazkQP=o-toIZT& z#mDdTnzfCsEHVHoKuaS=x2um)?)|gp`<>0ZR9&PhG47c*T2bTWFbDG0kq=Tiz^{Uh zxtvY8ibq8a=?k@GBAhMdGif2KVyzLop^d)^Mz4(PE`~$n#65i&hidJi&Bm{f7Q>vs zm)b>8!d(HNtO6`3!n}I-4xYDGfckk-t`UT~)vzaOAK`ts#A;H(r?FayRfG3jgSFFU zo)d@bv#-`|=G=qjDV0M!xfI?dhjOLuvO@A;33qtp|F3<*nY!R#{{M1x-iNdHTd6c% zeVjdY!d`gtA=ooE=n6GYZn48us~$YG)ArIOVb`t>*bJ&qw`1dhg+wbU?J9iH1XiS5 zk>;uxFjx+7k(<2IHB4)mwZP@y{jI#>5ikqRR8?Q0+GX%Nr4F^hC%N`fRbv4#9SBMA zE3rK~uqjy2*Ia$)hBfDNU-^nVg6keMa9<9?5*MInng<0<5!_6|a!jHxn_^Cn(RXhY z0rKcL4)qA%C;9gf)qlO+{dSQ6?c#-ADx8LS&}Ip}&a~RfUgncZ;o~9=6;L!}=l72tV^l;`9W0V?ZLjNmgufTosDVp~_#c8PrU670eS2aAq$lEV*Nc(!OzuRWWHdHYWOPK4)kqOJdn8paIN5a!Kj*rzWg&T41 zZ4&RYIBb4GO``>NH7@<70%KfB7~JpaQlSdENViH4G*e;?TL{9gxa>;i&~DaU&d&vn z0eI3r)S`gk6%Lr=wMwm7dR7gLk_QI;rHOn?Szz}=QP$*4D|sQ6Ye%ICU}9F#;kzu? zy;r|Oe=#p-L5>P0uu7ZdnjB_AWOTL3ORAroEG_@wxBs2}#}EFMKli^r!e)3|lbwXw zP}yV&C7pYyR8U%U>-fKS1wQz(XYA7touLFhXkD>{Gj=)JoDTdTIvb%nxo;VCG|Ds3 zhna{>DIKCq^6H}c<;$Xq2~o~M`|eT;kkN}3!+>}cnjLbvl6gf3&W`F;HL!_p!=#@k z>Gd^@S)AMDy+9SlkA_^^r9fqz^x_-}@Nl(uhGz7%)Y!Jf)$yZEsw`i>Mm5 ziRfAen{v-Y7`Q~VtoM-}zf9QFHlTtHyg*_{kHRWBpAc;7k`1x&bdS@JmFPA`KfW>m zOcjFMeCCH3i#Zm|2qMIMbRtF2!}%4o45(;)F~e!T`QR{s(G=@7(*e_7h;x5ut_MaBC=c{ z;v%93SUO|D;t`Zet71#{q}aWPN?;pQ0T@NlUM9gX0!=lGRb;IQP4_{Ijs)XX6#;Nb z5xwcLy)$bwl0EYmFwFalFFfSpWA6x`1qXp=H3#P*jgb~E*Z;`&KrLRN;8#TBJCoxO z%iDS6Bfyq}c@c|!8Qxr~MkC}KO5u3J%m9A0;d4Pr{`6pV)&&3g2j738kA z8_*tC;AM${4$8GB*$qJ7JN=*inVu_WW7n{|=Z@bf9$7Cd{uzaRf5UyRwNL%!pRXAo zPdxdsJ^0msv=Ryd>S2ZI#SXD4m+{$TYf!9;N>v(?dI;E^A{_Kd2h)~g3N47vsTODW zdLsW4(j`HfFxV7y9%|OQAdtwt)O?lstiY)11rkc*;w+tWN>?vNH{tYFKWx&>&+`K# z_Q_9tZ3A2Orl0r;iX|o(OO+VihK!!oYvJ$> zRSBtkpb?-zt5#K{u&bposZ9g{vfd~om8U|68GM0M{rb>%z6Gf2cZ3>Z9K`@ADt3^7 zyfO>KFLuYYcz1z0yWHjz&3bOxdp0eS(|e7zTF@q}^ZB<}p% zS{v=nKXAXj`$yjC%=9U=d5=>?c<$m^SRAS`VVbA0B$*RHQqaSEQ(;D6?x}g255R}@ z;(mGMs_*45n?0@u>-wdL8rhI&d3^4lzPjd|1!Nw3@kw-g8O$qZxMxwAU3$b8mXiXf zk`JIM*+YteHO>8uabEKPkVVE*F$~Ni^FIv0i=lbbSO^;6Dt)X#H&_<}dkt0T!IZhZ zftV!~*9c_Dt(?`LYt^}Duif*;H`orcQ;Pt$q}Ep6^DQ`nU$M^bZD4#JxdO#GoP)sR zJOvp?UwF}`u*HxbY>B^>Pn1&XOQif#tjkdI;9VdM@|F zy4Fs+^6G2cQ16br?zHQ#BA2`hFzI>fza{Zh69$E=0h9lZ6OaH{4+l)XkTM|%6B^8L2F7${L^=e8C|{=$A{Z-OGo?xr9Yp%_~KtN*Fr< zK)P`LJZV2D;pY$%Qw`CF-^@H4e1`WfC?z#j)w;o->U4*WwKJ0ra(t5WU6FhJgatuobxv;LhuW z_|iL_zAy^Yq2`)wZqf;%F?i!WX>M_~t`QZYXhM;y%qL3@=l6B z_g`kaShRNRIYh$CscYBf~;k> zS;huaO4U;-nu{_XN*h%mw5@=Zs$xwjg?%al02BaT^XTD~X2VGUj$$m>@R&A7#m*#R z)-iNlLoED5qzOh9fs(?S;Qo%$)nFJKi5^%#m0qtu=XyC_D^9eUW+QOGuaa%R zgM4xdvgqA$y*4zb{9k^xmla6*;i{a8N2-CSd(GeabF$BMmTA?Q<^6~YWv1CHoB+P?2f+i~DEw&xIO^aEGf_Fel}e;TMzP1;e~ zb$s?q|78F0KR<;)`=o<$X=!{b6%)*6Ud*)+xRQ1+&N>p5&TkwaD!s2}4QpY{YfV#i zuQu8^L1PTTe1dtG&sYvge?^x7F&EP6^$uRdONf>>kx`l}kJ?4%`xt4mw1Py|HUS?= z3Wd;QP}QpyXFM$QiH zOYiPAWeeB*Gc=m5532J^TQwW z*Zk0Dj@t)+=<@I3>u5h?*Z zh*2t2CqVTqG)b}@k8huzWJ>O z?e5pEWt!gn#s};{`gE1D!8MYa`nNy#!*<)vms9a4-=m=Sl^as>e`4>vAFVy&Z`bUY%)R{HX7B6N2^zDtlLytA$3XnjnS}A*{`% zh=uRK!9#1V^|NPt?IWN2Iutn#Q~@Gl7zJ1tS_A^FvaRa&9+S-@un8v!pL(gLk+(K=_5JK0UdQ^5k2TA7s+<1^2Y zD*i`Ur4~T_>f@(=?bq!hRuZzXkl(n{O=@Z3CC`{|>hnfOMah3FXyVIQ*7-S`6rden zR-MIXbaLK0w?G7+jcJ`V$o|aZUp-wL44_PYI}&~S4wQW%HWzuq$nfwD*lv9CFaK=K zIiDXmg`oK{E1<2;u_vF!_%TiZTjnV=2ZpM4BK1}XAZ42(GqCKeWOXI!Ab??nY(Nzo zL_irT16$#JolU(J$@WbV864PO;qug0lKxeD1DPJ z<5Gf!LV+f(vBrBoV<-Xe{;7BVi2clu{n(oG{xiG?zxLRZXuTTT0-y=TR|4sY@o^iV z&!5__sFzsyvy_F^s3Au%3b{Ngv(oU;V$U!x*Vy3r)*&`MH}4M+~o zogK6%pLmh<<^|UXsc)bh;pLc3)TLFxUq_y5d1{328X(4zeB9*1l&#>?&x7F~*vbZ@ 
zncKj1Q1=0)xAU0)d#||MUiY>i1~^^r&x*QvEE+Xf+}B-ur5)~C+gYHWBAPhAb88b` zV~}|{cJ`wCzm=fBOB2ARwC*vs$LXbcLV8*5vkYA`0(gwYply>bQ~-Jkb^V-KQjjT` zM!=E<+YoCE*)TDi1?)i`9Xss4dtXoG9NIIi7ODy094I-vOob`yo_glK(($hdniBR$ zA9%05|9AeDh_ui?_&=Yw-+I$!uXv(czxcm%1wQcKAG1$?^Q=ezkoGPpE@WpZTcISr zGG&LlU=kbTfXqb00alxVc1VFHJ$%smP6Q*N4OvSAi-{ZEv1qG8PT)wDRrBPiqX@&3 z}>F-aIZj0Li zs^q#_s}U>W#{>%`x8O#a_*TdP8_*=Ge}zqY3_4qryS^E}m#R8);{}Mx>5x5wFdL1< z1ivc`Ra+YsG%aQ1-{S)o8#->$-t!p#g{d|nT~rE~$u>1ssk^{pf@o(x!)t}p*3Iio z(_&$GsN25z&2QS>cdrFa50IYw#@D|Vc#*)~zVVHB+ATL-MO6nIAd89?xir^Cn}zq< z0$3=0Qb0tQ&%#~+s4Yg5q{<+rJ@Tb*@`X*Qmq25tHCrG^A(;MC?=A;Pp7WR2P_{%r zOS?;_B!IgF3!{F0x8C*xYp(S(=X>oxefAqJ+Evd#E#?9di&?WckJPkS379$X)B6Wn z0f8uu3^fRK4Y`0FKMR_M2CmVbHgYheg-@9cfe--%z?7(SRb2Ud2y@9ZDrn`Q_foSVix52X$Q&=hX?@h?GW{ zQf;M>>H*{ewqjP3q}jsgTLeO-sY$!%mOQRxfuyeQO3yLg(B&zGl+Pv;7@$gTsDPf& z%~;F_ye}|TKKkK5+^{eSG!0WhtLvat)LSsKz3TFV2&=2OzQKFB!JwEu-oty-IcS1? z_Ozo?obiAdh(ZQJdt|{$W!WvSJ zbfg%)U~v8cSe+%k^W#7K4*P|7zkAJj|M@3AW?%f)LojkxoTrQ+sqo_uh7cQ%QzaZ{ z-71i$2j?osda>yegU%E7^`RFO@EkF_n)^q;_~4pzRtoYnUqU}d&?!S=r4-eY>d7abv8SJW1_$&h0w<&t zVyU=!1u*eB4O$eT##$E^6R zZ?VdT76(HMXvqZ7#qQmQ70toUwYtaw%vEDNM}B-681(@_)A+;?DP&TyWUyq;o6!2l zdtac<%ag~S^_D6z?%fhU_tdyrC~2N$nQqT%=ssuXn^O2*4Xm|Hsd8!dxJSB1`>>6< z^NxG)+QK7@v0Q_eGe_g(T7bZ@h5} zFn!N2@&CpP%G;ux=u}at`bWv&O>96ix$G4FJo)cpsO$;^$1+Qlh*g@JIW<+BA&o^f z!yNsIRBagh>W38=}( z0ob=KCaa9eq*F=ZBa%;rR|u`!h9f(vs}ajK;vn2quRd#ac-S;jT?DuQ`(-GhU4r8E zhSx-BG0rhUtCe_OwRO+t*<)wt?7AI9v4P`{ zS(KCdZg#kVj)&vn8Fxt%A2+pBq#aV1mG79 zOlBI3o>{)1Wiv_vGJ}P3^`Y3!MU`Af3BO&PB!f-4Ec3NMk+Xxc;A6W3uXl{sCMkfU zs5OxLyLdBOT_ZL?j7_dcKx~q`1gmv}n%+v|BHCSzdi^@{yKDnrcktDwz$p{}n zO!0RKm6Nk+ORKX)x=8IH(!3`vxx8`nQed~9jr6noh?>euG_E~G4{a-A1hk)tA&@RZ z4%KSupbgVsLoXwGA!x(lf3H0XOdEXVHMxfJaPrUZ&CWIsjx+@aql4pAE_GusFkypi zHWx@4pYNZve!d@>BrQq#_{^kCWaYzyu`0%*vT+#29570mc9=@oQCyzK#&NZ$mz@|k z>B1z!UDVtbYlh`O8D31KuyCcM1S=`St^zdF;-yrJ`D86F|1}gy)L@!fi*H#C)mfFz zb>4(~v%^_VjcQ#}HboVatT1OhJ}xE^9>#%M}AJxdD_T%{KQN^+J~QOYDBcYx}x z87jr|RiOwttE7;k1a?b$K7s8*>5(*cc>i3FHLu2Gtf>TE44X88I{f#}o(2_%0-Mf0 zd0-O+J3gP*9SSbSMx%)5BYgH0>9#Ta(O#mL+6mgAj4;rQ)EZ_xpNJ^b4v*sc;dsi6Ya4oH1k+rHD|?YO2#T? z&hgl~Bh8waVqB4)Xh(X`M6rna!03G> zE{}3w6JlG@e#N-IDfY8hJN^}!rdK_$tzZ5tU4h^J__Ox+51x0Cw^#%T?i5w$L-&`W z*SY3kz1@53A#e1mC@W0I{rU?N2{-0Hq#5NNEaOgrHfP#scXY_I@-9Wf4k}k#Y`pD1!(xoUp zcTH}8^P6oHZ#D%e#mu)`(5T*Z=bd;Ob$Z*H+%?pDie<`PSvOc#m82LPdA;?*40iL` zd+D{OW~=Hhps_4wvJ5xpI;zAf0Yb7t7)QX*$>THCM4;5_LU|ch=EjaRZITJd%Ho}G z{Xy8Kx2+i;zyDW%X%9Z}gzenE)2$n#c&?4odS#Z9cQJ9g25}FB02aKiw3uSK)C}B- z9L+H_H2Q3+I#ileW$hX>LGxefZ8Y8(5nN|)al*Qk7IT#aBtrh~(4S4y$%NI7WSP5K zi;sW#n`_Q_jy5O1`3E1R3X%3OoNp9x5vDcH*k~WUGHJw2R|l63Y2GN3$;LoRQz~H5 z_><*>bD&CTWqBG8de$8pssTQ_*i?anKA(^AY|66;&?<&6q;@N4-D;_hE=ISON0sR!DeE!zHdP$& z@7m+*n1VK~L#kGO13e61>G)S*1!4UP>;~8|?vfynCInP|U*& zhSmQ_Z&2?OdR8wI7VpKF;5GR87l8a zrWWmVU)0VFB@v)fG997%W*$K^tA8C;8atYbYwspC@%Fc*OzPTxq( zE)ImPZ+N38mVXM#E)=z97k2?tXPdH%Z7#YNXi&O9)(OE8qy`029GJMXxcZU>Ix8K4 zkz|lE2=pF7sCvmWN$=^jyqc>9X}n9_(s>Py-OZXu3Ad$TA+(q$u~m`YLY0$A#+CrP zYj)TW<*MxZ6Qx~wFKt#{kto(k(RY?X5R0IP|689-?1*$ux(LcxI~WYML%~o_(Krkf z1EGen!KNklEWlD5cPg)3NTR@X(S!hjG3aeJaudU65y6pVB>&6 zWf{6Xu{h;?UrN?B z`>d+cLN*eCaRCOk3{k4-QuXV)fgPk?1tR1IoQLQ(r3rCa zXMR>9<}M|uPz#hoXA`x$`Ht<^$O8!#yC^lMG3-QQR`cmhY?ik4k>*gzgCl3>{QUx^ zQh>k%f}q51!#V3+=KvI9OJ)Hfb8!7||&EMntv@ zfZ{oT`8+;>uXb#KrdPc@-+#YURHc-PCWwTbn)pHLfklPrX^OPlDV$rMJ$}rdJa*Kc zJ$BqqVcs~1sCt^bf0)5H1E?A&LX%;knuVrdA(BwBUTq}?iwIek_W3NvdFuZmpBT9n zQ?XJ7eO`+gv;xl^MY$;|`J}^GaPeR8SbvBMPzUZYgo3B>c6pX=@J}5(XOF+o<84l! 
zJ2hd)&(GTN?s+@eAG7oHiGq0AyC>0z%7Vmq)`j39i&cE;=)k!as zCYZsh0D&r-Oa&9UoJB#7-J+kDi851p)Xd>XJhxC}Q}H~sP)jgZROMmLmi7oQ%Ytk| zQ#UjYwQ zr{M%D9V`U2j5oA(<|;@dco9MN-=6XjE0q1JO*a5n*%U(-JY^FL8_pZZPVpqH2|2Pf zgD}lTlu^kv%X^xfnbLG(ns`TN9R2@!J~P0zd-^UU;WdvUeqO|D6!BX6P0i2d{!a79|I>4`k3ybbgaEfV`?Y{{NdX(phHI^~5*n;Vj6&Nm4eY>ws)?w* z9>?ceKvQKssVOu=?gIpfQG;qG7r=;A8)E8GfJG4-TOk{Lk(ekLsUiSX5g@4uASD5L zAv&Cpzz0C8n6#AoCCZ~sf@bMx3Q1{5obOr|AqUUYh@A@3vm0n8fdIf`hKGQ(K+_~? zvpF{Q1Ujsw0vk4ZrSa7iyT7;3jy?aPJ@k!l+UF3Jf9jK;vM+q@pX`x`zmv62m_#Ro zD>QZ?0!#G(#9A4=y3GPr!Cn<38XXbM`83f5A1Ag@g2a z-nd$+D0w#VO#NKJqu4L8ewrN4sOsWs91HX~BQGK{8#HVJs`YnDbBTeI8L%otl};-n zNR<|#C@ql2 z%^>Js1|+Qjcu=}q9+nd;&Fz-It&_)g3w2@}uzR1?P(j>y;E=T*xy;%w4<0Ri-*Did zRmr=JbaRLvsrjm41n^doCJr%oqKvOO6$bH`Bj}C*@V7wI_aFB8-~Pltg?usvX$i4D zZYaniDJpq_C~OGoy$>G^Ik=uj#MdVc1%PEvB5x-B0uwZ$zH$Jl7>h+lVn$--v^QF1 zV_^Xl>!fC~67Qy=*OdlT>yDt{D%eClyDlLJ8<7YIuPEE31+~_L43W`%bQ#@;K;57n zJ#`VV)MLla4%q3QG3yo5+0pzL?la>lJ%0yCb+b~QS7Fft+n@bnkG|hV_6U#OQwLHth93w(iIlFi? z*M#}{v~lsMV3Dt_Z$flWw{be6Czeb2`#gMJ5Ui?^E#J$_Z*qPKpaft7;Dlow@U04PQLz8K~~)uu)KoxFhx ziHeK)U5Muu@%$3ZL_<7^`Cec3ykb3<%Fqxg=t5Y`^@n_zD$(Xup{=WeQSz?|~& z%tK27Jz{b-PgKpe3|PwtJcRVR=z|IYLxq6-e85nl7+wJbz853RlfYL&AS8e!0k}V; z%i?E#?H%w3j9D-z18^W_#g#TniEMIc+<4&^pj74lVKzs#Qc?#10hbxN0Z4as`qU|V z;_=5_uk?+tecc{=^f5bi>NNVCS)VP+1QpS|x)Oj_O*&eo;sP$JZqk@hb6FLZBuV+{ zo|Pcz4e|RT))m)7W-9Wum_A{bb4Em9yV4@yIP0$oOoa6I*vkU#NnM1otP-BT7UVfo%#CxP)2j!vq`i zcqzrXry1sD8laTovB-Q~hDlm%YO@scIHm_plcNNmF&a3>TG}km&*CumF*LGaIt0w4 z&zm6-A7MU>QF%JeoJcZPi(#zFnV-c3zTW6t!5nLloaDR}-m4r1AL4lRr)>T9`{@ci zc4C+m*0i^lP*-cOR?6nFSI`HoEJxWCl-v%gm3)!)-btd6h&71xMDK@`3ZQ&hx*Pw> zLZ?MSqD-ad6cne466NWVKsYU0Jf|;F1xy6Nd5TSb5g?Js08nTLkftgvN_yqjf9(1- zw?wTLzVjl6tfb?#s;AVN1(VeU7D3k;z)s~$0L4H$zuzM_SXuyNqC9uD-(ur^Ha~pU z=10!)xL`3_R^%_tlZso$q5z)|(iN4Y9V_H9#p1Riu*Alc0LaZ#Z8y#NOyhAgg&t~# zbDa^*i@9AK-xO7|Xj73=C{6_PW0W}W+_`hjc|Y>_bEL%(T`SV}*Qc6942aSio_fkw zDv2cl;3S>K*22t2DRzXPDd8a-L-0obxan+G8ir@q5^|?8XqJ-K3?;R>fUfoZYo3&<81n zu?o<5>OmWfHY-u(SvpE%z|H8g8-g~+96-x*Qhq+3TFA0^fhH-aLXs)(+i$zsOT$+m zPtsyz5pdIpn7K}ECRl81DbKFN?6OMDR`r!dwj2+#62zjVd@Tc@l%RnqBcd+D8>mD= zX%^uUn2r+t9szK<%tO_PI{P;veU&&_55P?^Agov>zdXN~zt?N{L(hXv)z>`EYu~=z zcK@x{tvTm0%EkMqCrAsixe{a)GLOWDDDx5$5Rz4cSRY*n8KE*(1+=6UEOL;ImfOn8=1CGCV%uXGD$;5>f=Q3uy+*6gB4YxT3Kz`cZx05dbY?p zWW;Q$xiV|xGS_W|dz{Z0;zrvF_j#E|hH;)z6{oa(JmOq`{D%QvXdAgd^IX3$*VP|f zgBbTE&g(A9JXf!&0u71v1;X8UDwq$_%w2bI&ns^GTj{y)Cq(pJd{!3+Dw>xKzL*DT z0a%<|=&PAbr9%`SlrR+sY0XqE)(qSw2As;-S+!WG(Kl(qQ3am&{}IhE512sDq)F`6 zQxYe8w5lqNAktApsK`-ODzo@5MKEv6z?3Ko7ULy>`T9c>St4^SJ4N)23UNLqY|@pB z^K`|YvI^`6iX%hhyLv4#a@Hn$j@s~<=WOWA^ET0Q!osvVS!A&(Au_9G)2SsQkuzR#$UKW!6l!-N4P&M>B8y*+7+VwM zWj}x~liJ91?If62?=6~WE~pHiXiVTqW`%L8sA5EB3j$MG92HCQg1RGeVA8*n>Tu4Y z8JA#^bBRD9&Epsz<`JQbxgJq|rbi4GN@>5OvrkLtzT{8Df~zLU2$_@oTkjc{Zf(QG z4g%I(TI^+gQr0FwnvE=&a?O$ptyNMUmMfj2USHbaL-yRTw3gvD%>PguA?Vi%dVZ)O0(ux(xKb-?ZY_nfOYIYZ0-9GTKnGp zw(YHNQR%Yh63~eEvD_%P8L{7dy*G$88Ug14yF;FjPfdB`HM*L8`g}6syNt6puGmKNiC3 zm6X?DfxtD%8W_sDMCm($6p7~L1+>gKaz7~7&~4N-Y5kChK7+tnLTyzTx~5Cl#Tn3- znNr;vfyYY#B*)T7X~1wycirIzlTu+JN6)Z|KrRwqQt-j>(<$zW9Kr3Um zxMMS&o?c(Vc@-h3Do|#zT4()qoaqZvX};k4@*1SsHd{Y)^ZC2jB7&#zIp@#@o`Yq2 z{`duZ?&w8(;TU~Mk6i=|pSKsEJ8wsyJxeEo`3)~An+dU;^5E0>bUP7FOZqE;mql5E zz!s{5$fCwS&s$cSlfPS5kqeImpCiJxnd4s0at&u#`=&_Aj4$@j+-ms|+WcC3mK zpWIm4tAdVF&^*#>qI{(bNRf6^)@4a$;u>^{kcrGysSJ#Z)N@-7x7DwWXs}Em6_MWNV;ox(OHi zrlonSk7Lar9b$HXEIb z7`6rIVx{0FVX@Fh+hAnT2Et1=h_HMjuGAb64cBQNr|Jl<)LM8YFciZQl-_44@l7V8 ze&f|za<9#bqa`E`S`@gFz&%TRGIQOe!%0EE%bz96`^2EN!=xIebyA9Mo@T;xJffuV z60|80tCT?eEJi@Ro)*x3$#q;Co|r61J7=3x63WVK)YBO%2M&bNV*fTAt~CmPkyo&> 
zXlcUUa!RGE)bkF%nJQSL2I&80`VKaB zbXpUo_IfmSbnul1#8iK^bah!Py$V}*0KRy%bnUb@7{qP60Af7a_*qjYe**|=-X1)+ zg`YL@w~e?EH}SK2bQpCs9IgS#Rb!M`+m37dI$tTjhFYB^tQDIp{hz>((rKxNV;z}RKuTL6H2!ywY3EBUYuu)bnrbgn6Yh%5s1 ze0eEVa-U_w>f@5ns&tq$kh3C z-jFu8h7#nJC`Bym181c5yKKfy@8P2LjdyG{n0hs)$yHBcA_WdL{^Vj9*J zP?vKiqRLkn7t5Ilw0YobW{G}j^d=Z&A}&7tfi<`0JYwkg{r;CpjTK;6R)-hIqLnSs zmISMS2<859QUx;9Ot4_Z>AR%zdqsj3oJ$2-rBd&^B_<59Cjd(AO_tbPGVDHasyVz0 zkI4pUgHXGG^Fc@qm`Zu-OU+DieC-={@9XYebKdX! z)!(zHpFfITgN0od3~YW*@v{IZwI|lEgsQ$WrDAv?0g0kqb5%vHij))cc1ZiAtDna40FNmzJ+I|8pn#Pa%L1hS)=!^)$-Yh_*?i`Yyuk!Q6RnwaOxcqO3Lb*7;a{^= zoqB?MhDoB}dM6qq1D52#uD=OWFcU`?CF_Au){er{$kKcgm}tFpcHL)za&p1f;99;6 z&@|RFk{2*0d%pA3nsc7OX5j~a@_pDlOnGMprMYJjuqWndc8f&{qk;jT2s99PODYhA zG%p`PAHpm!VGQTTn=K5n6EE8nzEtDwtLcM04jx-4@|LWtv zXFvU8Z(8$WfA_7Y>^$CKS^`#J?_IFJi1b6Nl~^b350061EwWXR0yQ$xb@n;vJdgwP zLG5DTakb2v?Z%?n6th?0@+z`$(VFj-pp2C=RdS0FLyFSoVrf-}=S>*RQ~T!1vka?ydJ$l< zKsi69{EN$^$cPvw=~MOMC)X}`{fG|#^l!gKYO2WE(4}-V6R9q0TzI8^=bl;V&sLKP=WDDN$-6d32Xz+w}n2cP_xVt_a@T!1(xIeY*2um$+rzxx}z@h0>itBybWFMnqz&z!~OnT=d5h^~zy zINyBPu$h{=j;e+ckkbYx|EeN@ouV#ztoY{a;(f6NS-H9L_7ZAq1JPpd9xcH`5xF*T zRU3*ORC%!0Y1uyL^;PMk#Y`^8YQOQp_rGt=IUhy4@Yi2|n0uq~pb8x=u&&++-cOsb zSJ>&E>howb4l02>qE)1t-LSm{3D5PaIfbz5o`NM6E z_Tx8Pw&t8q^j@^508U=YjUSi{J-tl`Dd2qmPK|F>;w@cFhVdzlQ_wU!7{rRsy_X)( z@id!sBN{K+>#xt|!h3C;dKZB@&cJm|{3qZjEt8tLidE3978|Aa(|Ad6jsD>0e$wYp z?$GPK<(==QQghrZ#9ZH_Qg+&RqzPIUV1eXvU!ZHGus$FR+__nmW*WyqWtoaUf(>(7 z6H*%@V^RQ5Oi>vc7ulT1l0ny8=8e7y>l*8N$%;U0n3y@uRe=wHlNg1!<0JKsH~-+8 zYyFp>{CoS_cOJ%i0nn*aQwlFj(`fnxa)My2YTP-qr8Xiil$S~YGexmXfIZ#+`6%lT zY>R-KOA1_-5CjlP%?WU=_7@Y__TesI?$L)fXFYV?<#ShLpZ#xtz2=;!F%SNg5B~`! z!Why5ngqb(ST`hLyq5{!0@g+r9N9JL6*jN`Uo=hr5*|!*j2n3{=bVQO zU-dZ&27dp8@3A-iz#VJe`K!;<7G-ia$n`p?Rq!AX$@-(Lgy-rsY=beW^~A>^fnM6z z#=!c+3rz%~f+zVLRO>Cbh{(=c_r+4s&;653H%R}2_aFsI)T)QOvYK~Pnz)Sh;V167 z`4z6#7HE3K3$yi$tt+r~1-7oh))m;g0$W#L>k6#C0$ZAk4dLf$!o9Y=Nfl;v#K5W$OxTU4gADuyqBtuE5q6*t!B+plNFiY+Zq^E3kD1wywa| z71+80TUX$_xB^>3@bBUxeLp=#9zlT8HQDxy0e0sB2RXdH1(_nJcCTOj&px1Nb7gedaelaWj4TOvm z{;3_o4?ghbHMc~Kf**VAQLnmE^^lw4sa*tu(J01m;c&zq$8&~qYSb!$wrivH2?ND3 z+Sx$J?mv@ong3MnSBZr|3vDYJsmPKTb_RpTNg5Q#Xa^wg82|fhMUr00OSmP(b%Q)A z(GVx!aLb+a3!@Fds$+t-CAzK>dF$O|10bQQ+Ii%BQ*xC|-gwi`$=@M(%j~K9&s({~G=YF1jPVl5lpF>wb*G2}Xdz#ADo-@@sFTC)a3$C+O?Jh|7 z$~@KXWi19=5TC82(+lXDd-*%BsY)X+G1pk&s|5LlRO7htSN0LAT$9+;#pwv2@n2Di z_RCd8x<0}E3C3bjb)^4~Qr7*px4q3pxvP)w(EN64W{MUZwbtCe!#%iS^cPDZ*eyV4 ztFq=iO6Us^=;kaRRwM@IJc@WPRroEcaDgI%CC)iYMds}6d{7YJVtgvTurp|GZoxtU z!@q$+Jg=x2Mc`i|k0h=|n)|HkKrflkRbuf$^s~;cPP^^)Ti2ZP`1G6&j!pV|sB%x& zSo6~D8vMUNT(4%Ux;7ef@>&Y6OE8y${&>LvQ4v*8cbN58a?yTnYy@9(FA38(~I zQbm;aG*dx=+^lnRUIvk?`U_^gzr4~+b57#%?6}f zt#WIaUqT7hO4Oi}4Gd}cO*@)e;WWUWQ++wi(cSdS#{TEPUJ{s@C_qlBx{#d8vIsUTMLiacRvDE_pCM z{qNMcI9TgfA1|t5FGU%FIc_kPR$qhQ8mb5YAv=4HJo+@Q{m4E)a3x>+ms$&g&&&Vm zJT<2&V&bNIgZMJU-(Cgj0`$OwL4UC5jlWgF&HJ#PYaCs5_0?<6dG$ZGK+`K;n(xnF zOou7Ur>wbHCD~cm?|C-&IX1~C6*?+oFQCdn9xo+SsHoAU`f+K!@P*xrv4T=`xAq6D zEURq+lUh|@6>L^zcp{bZxA~YSKKa#e*n{7B#GdQxu^1Cf9jvppJiK7AV0*=tUX%Bp znWcx?>GNxD&WRJp?ZhQH_~6IjmD>IaRu<{9098RWv&7qJ#I{ciw8Z-9$wN<^9KwowP?Eebmm-NYp!U z^LHu zxx8>RnYDp%zIx3ZX!1otHV_W10&tl{c-hjY?81>sy7#uqdamR) z&RpvZ6{qX@Rosssy{=`=IY0Wyx9wMc`F++mL>Kj}56q%AEpJ1niXi@y*H!f(J?+F!|_+#CF z-~INvFMe*#IY0c|OZNGPpJWqL4@hlXLD_VXKu31bS9M$=mISgoU7}KTSyhLC6>Uhx zprm4|JvCP3p(eWlJ-q#i=2Z?4$$mm$-E9c?zB65mtN#P8&*BM_j6UQ56r#r65< z`+qv963iX~&ACdckvHJQeNHmNj#DH)!oVpa{n<-tU7 z!$#+05g?LrmwGi)R@IaYU;$Zbyet16uBXOT(6f;7Cg*Ls)|cT~;PwzgJh@Fd3#=9Z zbN9XJwf2S|ynoHL{@ADg$-eQ()7}7B6>tal?X^9-x>$Q?9W_0T7h2q=Cusyu-$#wh 
zJlcGOG=?i=WXUcVVcx(Kuv>}=yMma7Z|YNqmp0|6_epE-r9Kb0>e3Zo-DFB3jE1de>YCj z2M+A>&fcjcO>f6(6)-U59kFF%mkrqcycfJp2}YYBuqymOl63Tb!&}~I?HC)bKF-gl z=`ok_Zs4->XMf-f_?~O%WxQGdJe&$1?$|7A0hQ%?Ikezq7CboHr{~GcFnFkx++)FP ziM$pY?_=fcMAq!iJ!upr=N7(v`s`sB!Q!Gu#Rpr~GEIFrI)CvCU!V!=oR|NKX-f)V zuvm*hs)vnhZEdy=%o{t~J8Uo4ZSVfwc7RRjz`k8}kjAqI_IBFdogK7SXt(X{Er8qx zYiVn+&W?85y@O8WJGNU_*Dl+EF=RW&k4r3mv(v*|<0L?h#heeNMw6=QFL~ zuam)aF%FZMqDLC0NLE=u88YKt4RFia+bxaT_5kDSTu(Rd_4FBJVK0E8ZKoB>HMHoc z0vtT|?6Wq2)kL9!3eGdlMlV1-&p6k3ly6z!3@wlV(}bbskNoJ5tU2ckLt`|W?!N?R z+;=P2u-K#i!2C~}j67>_?7)l_s|;BT2z1M4fX#1#ze{r6l7OoOz+^@3hCokC{t)}3 zvV8^QYJ4rb1=(q+Y#d<3zDSoK{yyJQV$1x!g7?$>e$$=TtzikYZank&GqA9*yErnd ztxbY!D64^t+9k#3DaD_`iw$41I8A&bb8L)QC@@!i1H`sN_7%<2=n9pvZ03g`Ic-((@@0x4<{PEK?`@YD_WUQgF z2F9=vkPyP~HI88{mL=|J&uej?GXytcM70*ma96;soc@%7-GEvUv4-%TK37#L&pRnr z5q5EJH3u+RUm<;32Q~pH^d{DNj;{$if8FQ6c=1hlU%Td!Tw_8Qg z5^#b#HoywYG0xbKbpS+N0put%W8n<32@}zAwm3N|#*TB0*2L zz|C2hDDZJ90Js+Lyuh71__;isw3(D}jZ1ibi8dAmX5JQ15g-7GYwE7L{IG<-6Zqxn z{`w((R_=iqrgH9e7592)!`eo&gZ=&X;5WbNZscl*P*}`$;eC~2>%{T;&O2|j+i$zw zj$C=r_U`TCXZ5f;Rj@DRR#zt<8!f7&+wiwaBqK0M^SGMN;U+u@Kux(JYE^B8gRe#! zakkKSv$`A)99X+9UaPCK)pqUdwEYM7+K#Rc9GG!^t_tBEU1hDXi*4IF?8krnCwx&{ zef-@gK4znU>sHcsEwCJIY!J1W?-kJduXs9a4UB_ou4kE=s)~hU6E5SzR5H#Rxc43O zi)*JDXjN@J=gj#}&)5RsYB`hiTctE?U1iAXS?m`8_I>nMn}#td!g0KuaiEJVBgSQI zxUja)I&gi}CrvEMep;%$^PTTpJ3eT9dZu?Y+lNhtl|;uH%icaN02zg9`QaW#%qh-Y zslyfSo!@jH7ivwx23N@DSOjR`$HgnmH9E@kNqG&8D}ie{*06cN;sx&{)_~XKyce~h z=rsjW5fINt{l?p`U6)*0v1w9@^?=sK<_4=`qYV|qFqgA|VIErqC|Tm(gP;IK z3BY#3z1%`L9?z=MUzs`ZnsEltR^b)_4@# zAy@w(HRrJeX;XQeE!w`rdjL*rLDQM;J{zH_a5-y07pXh3;R^g>FidJEBoLqtx`@sM zO6F({DNye}-d7BBDeYdg;VZ4>)@9;ubjg&K=DhMXhqL|Z{CmwouB{A66(VWv6ichL zrF(7Vp5tk<0FXRz*EMU#htfz`7M0y$!eG#1h@w-K@Otl={-Y{!qjXlKrxbc4=DnqEs<5ml39wb3G&c0V|MU}v2C zN;|{n?)}W83QR`T^nLZAfJ67Xv9{dmt4ggA`9mFRuD;e1ls4#5S!T72nL2#48tCKL z%sS8v@M*5Cv?kUEJ-4x{%o=L+eYrL1-~6t{e_zS>tOqr$ztzRo$k#TU_uC108c5IA z(CoU7@mWW2NSTqU$ELf{YfSU`1@dmW@%Amyw1thnyDKomM2M*wt8Dn)z%tMWD9;^Y z;S_zXg-fovAttw?Xib1678o%$ddM@mj0L5J0bj!|S+1UQ-UdUHNf9H~_AF2@J^d8T zU(e8<1g(HvtugzryLc@AaT53?uOH*S=&KqW5A_a*FLT zyw9RroPaB}p4NBvsFy|Z_ z9kw%PPuuykG#H(l@U|n0&_s6&B$cwTghGmtX_G*Nr}SV`0}Uo&Bkq3v+pJ|BXc`LB z=7dOJCWLBGsTey@#{>&y(9l->rHFEs0)n;Bk!xj;&9nLD7`SZNt>P-A%Y#Sp^<)@o8cPLSUJN2JT)?G zkQ&wzf7{DZo){8=$HdkXsCB0+2OSTN9wB1sC)C z?t24&x0WIM!k52b8P0ikbCYc+J-97N1Ui?r{JAJ;InvSUr-nWxp&ou})gji?0Y!mT zkY;Q!05t$NGOA|Jb5=tduY(PzP2+|8qA^p;H7#R&F0$D~NZU>T3YN+M+-&S(xWpvt z{|dCx_!yP`BrwI?2NU;>b)cymCiYa%kZ-=4V@jLKCsFsu?=^obr*Kc)(ZJc8%D^TC zDP7VHYygFTlmcgOqzNMh$@>=Z&tlG{fX|q(R86*3gaZb{EM zs;{NAib}_!WxzQ?dzu_r5(A{4FmrTjBmA7158#Q7l&~zXH8VbK|y!1`uy>5_jE}<^m76mmQu!@+Uk^AIe~X20IaIC z$ZBA(Y8XG&oKG$Hs)FYhGcHO1$)3$%t*>A$u2yz~=av(g)DZME)zes8YYMMd!QWT# z_Z9qG=OB3>b*OrB^yWE!4#x$+-`>FHmAN z)H`ZPPQNUmiafQwD8Y%A|4H zSFYu}R)i+N>fOfbNwZHPR3@n0&Aqn~pRoefZ!3K6t*|B?P`OF^AovJYqUB4-|&W0Nn^;bY7 z%C$>QP1@M-pe-{Q+F-#Bz<#vzw>wL!Y*$gab#N`3xOU}AClTeViKzO*)uD9>0n!oz z1bR8|B^S8A3vg<)J51Byx4+|u*PL_r=!BgHH06s`WK0#Sg$|$yEX;2gg>APA?S26`sOA7H71^C*Y$D#sm~e(+WTA}Qzd6|0pHKP$>3jl7Z~WXOKyd;J?P zetw_i`Lt`v#I2M$R>JFqNa+;H4PI$;X|e9SY3-uP=b0IuW$i$>s0|Eeuc02M0G2|G zfol%y8?1?<2m#x2?yojUrG`9B4p?zr4;N~gbx6S6Kbg1pQ9p2k8@^lSJk**dXvq{X zyV4XD@zC?6-^nm{i$RACjW7=_q;0CiibP5M!cy)&yym=5p1o*&gCouyWdJZq*uVuE zaWgv^XKD>47DNw$fE2|q4w9J5 zG3IK5=SKlh5r9;9ZklxABn*{;qe&zZL!_PQ-$x5Ij{7FdSOsQgc&w{`iab z?5WdiBnjKGt=KN>Dzx1V1$NDze7o=J3cKMD8zyf)G)kxH8RgKmo74M^jrmvCrRNGb zmx{uO?d+(d_gt;Dv$^i2+HH4BleI#_YBHq&F-aB)0krZaXx<&9emYyMzNOmA8`yN~ z5s)^QTEn(V+eb@_*IjXyb+&A`FpJhG6Dc8G0DmtXg;FZ6`O-oqF*b{n7RgFJdnG`u 
z7$Besxs*xT#`yrD?3TOVVofa_YXL}%)+{_vy~RWbDJ|#GGUupNn-;)@@T4Wjdsx7d zG?k^-lW2RTE%kekW`q8A1xn?y7&wU1Ldo*3R89#`C>FSKSQiNDbG^s(citMrQwc#t ztO!(XG!~^q=LrYafu_Fx0YK9?iO6C>15Kq8K}-^-iye^2c^)m#N-SZt0JM4*{svyZ zhS#p(9+pOuJQl2WA>Zm?N6KPoET+&B%>cq+e3p1DuwbVF93?OkRl0u&d*dvw>O{N9 z2DY+@+JV=SmP4&he3Q_&$tfpb1n;=x2fP9F>Z9k_Q7hs6s@bT+;kX^Yc)`xhPTK_* z)NVGUJ{XmLK*RtG?l3Pj1XDK1_hVeYnVKp~SJSd1Z^35s2@rMwyskKG)qt2qHrr}g zsO@a7Eih7*5+n1d~6-Ghe?+FO2b zZEKG{K+~ySKoh?&7OSKp490rVGV_kqW@?`FAiy-AjX@QD#RAQ|eh83W!Mv(rL#$&X zsNs56GEXaDVk<}|mhijM#jq7;c%dwp#QnP;`Jn^}+1gd64S4wUubcxsUy5$fS)?%1V{EtbV+PQa5D`G18P? z*~z+xZgO@Ez2sn@O#!5n+*cL62n30hQ6N$(_JfT+&^n3%Q=pEu(4Wi-~76@?1!#@B+6$JL(Fwh%a{-WqT~#Y$z0Gc z7Aa`ml7P`w&=(0$SpuC_g4B2{M(_}w@<#$eeu{sm=?}O9NLHJkYJ!hSTGog~^D0)c z-dqP20W1(;s2`?+NHuAebi#ooEYc!>pQMOm|G@(=#BFQl;&+aow!X0`K9^(^X#7~y zV`!YB2xp_D%pgtw>!e`b*E{u;gOQ#B)QzDb z4Wm_!MgVg&!#2*n$}k>61oO*WU+KDXEuC~?UUA77N^|aEu8-EUo43+)-z8$$dYX$o zHRDQ@iW42qVy9G%SOrf2DWX+VRltNXsf*|-rMz_o6Dvd-!h2n5jnjXk0>nmT#md^Y zS6B>^bAvWJa*8Noh8#4k(%wRQ&F&(Y9a0=D9v!ev%DErA5VK=FVtRs#9ldabzFWWk zOYdECr&6h?jr1O++KKaFu`7)wt(MKIhGu{C0eM6V!VzEyiB)40 zu7#1RCr{o)9=(>$u$aYBY+VIfo0b+p6N}<4cfApfNBdgP6eZmwdk3XL0#4qU2Qfz8 zdku|Gbgb7h(?cv?(9-oSTnF`c3d({#W#N&eKvaMo0(c}ar2p3y*X_(!=49X9dy8e$ ze_o;E;5HDNi!st`Y8(i39Rx_OKDv8)?JHmYs)G}i6bCRY&|C2RRnWaSyxY5q}KIy z(opHIJWf|yOKBP37=mgfow*{{+i$yf186$>#3Pm>GFF;ph6whhp%I%>6K38k!o5wf znMGNg6YTX1eD=6fExcZudzk?=h0x{|Gj=0v2+3{D7Q!N+T$QGX!Ke7#VlR$x$nr7_kC?@G@!eBn;s) z8>I4t0{M5|efx%0b9``|%~dH#-J`691lQK@6SyG9cBpzOgeI;8;8;~v4J%dUx(|8(FtPgogLjx2F&_j9uGTkxtx-8M$gY|2!u&x~~*4*0Q4O)F( zF-+Pz_E=l{+Um;g{$VtV2~tVbKBA>_475csn!bhsShb#sRfkcDjP_duRwX(=&E`A{ z>w+#%{|;kgL8?mNN==yy^!I?ppch(@&erP`iUr|*E;62!o)hz=I8%K-C7MqHmNJZ` z3~SmFY_>qwTYliaHP>38XpSI6b6jMXbWJJ1aRl~fn)^J%y`MoVILmp3ImbBbWI8d2 z%?2svF!v}m4|4+mhLOvtp*aDBg!|5x}IW#Sa_6)P4S+Z!wCY^2y3>2WEGrb z64SnpPBRAk(F@Mgm|L`bp(@_E4r$JRiSr+$$YM0Y988gB=Ncx$qc$@!$k;+B)!5)g z8%i-cm=Caca8LM5V%=t7p9KnUx?$tE&RzV14{>Z^g{>=)y8@YPC4@F%rHyjwAgy*w z-sqKbUuhfoy@cM<&6Kja6&97SD4=-o%}e`Nj@6Ln2O;Is47<^|EzS4Xa=4ofUG_#Sz#T%F;!5$b>zTz&@H{L#6`TdZ1iHc zN+hUV3RyB#VT(kbg@}_&00Zh6F6EK*T8#+C+o7y?F%fqoGTgz!+)3WQ26jZCL%uF0 zL_SSa;wWmn;f^<0L({f318qo56C108A7_{XT~W};bID?J<2KuW5)I0{wKmlwlB}W< zO8IxCOttt1ER*hqH)rLq{HruhUV^XaJOFyG_jC{yY(m~i+&6=uvPUK5K`D8VScS^cH2o+TnQ|UfYmJ8u^3|}!MFCvk^tfy8_&>E(z@B?#@M(jsXkMa)eyR? 
z8uUD39l~sgVi{@xVnwRt60qRXDb2E<^xW&;_{KHkqi<-O3cY^U5CJ1eu`OCT5pxln zUxp2_fC@B*v8UQpDqSP?io2IaD;68*vjUpt7NQ*tO-%!uW~^|QIY^4R0H&dUjlT%M zC16#^YZbD|hG0}n*!)Y_bW4cROW0^D(Qj3vQ7Hg$E~)B|&+iRUD{2szhwP4f?pU{Y zuGra^PO@=Rg($6#>vhCN$a+MTu%szTk0%Iz1azd=lYx|2k}86aS_&u15eKWNLVC6` z=9AU~y~Z-1J;mQ^I%=$?wa#rrGiC z+jdyXwq0w^dys1s0Wg&Feqt~M#!3OV0_qN#cyFadmr0eU0Fq0v9b$rFus~v1#55(9 ze&%yWxVCewNioJtRUP-9F_fp*@f5nNu=r2sBKw3S*9f-AgAP@10UGq0Fo_vbl`8*} zI9Y4%55DOQYtC70+tK4MQQ$CQqhljBMmlV4bik$nxiizO9RMb=z6n^u64nd_C(`j0 z5u`0LhEm*TE+~(HArdpHFo=3GD+TD45@3`$b4u{ZzsmrUrD}9e`muyZ>9QVZTe*KB z7LgLZmcbsEz)}~>^9&~G(3MwO^Y*pD;$z3p+XSh2r6k1wDHvVmy;Lu!BTP~aGT>)5&ixhvphyE;@s zNZo(#dChgIa$LzBYecitH@FRBNLNDp4d8#@}Z7;@nmr=cS+3xLj`JU}|WJjAF z=xnw<+Z(O5k*W-G?GhhWK_8dG)`7G98m&rTTsnIJQr8s$(!^Z3e-s~2LT+hB66k-z zFa%09RHBNkT8^=4(m9>Q#;d5Z52fD(aK&B-KnXmlleZROCEuh8NQ*g9o?kG@y;p+t zp3<)b>Ymz>d|O3Q(y_VdU%yjtTuon=%XsC5V+#O6k=mg^-u^MP6*~y z0^EG*9uTosmsg>^A}y=^oh)ID!i8BxMJ3F` zLN@0zHr95;=eyB;H7P|)R9z$u5AU5%$_^#21@W`~-9YNJhVQGe-lzodQqsWd!vOM-pIXk}YwdaGp|j~@ z^S%IZ7`Aw927olp{G0*^BOZobjz*btbE7snMT%>JLV?*ae#YiVaH9aMy?q-Xa-SWf zqVVvc19s%HgD`RXZO`tVSX68y-P35b_@9+fQCUz75G_qvbpvb3HU$3ds-mPCt3gEs z^8I4NmAwHVM4PSvh{o^wqd*3$j5lcs(N7xldv|HVeMj!mcv}T z-xTAdMC&0zjX;-}!>P$J(n3RSDFO1zY9|9zS2v4fd~p#04hm|NVh&Pio^td#3RqHr z>R?e83i2phdkAz7^q#eW9tu;2x?M*&51SZ=g-P(3S3v^lrVQZ4%}EJF6R^CQWk67A z9iNXN3~&=B5MAV66T_0uEwtKHfz>7f_erWA3G7Ou1d8ZNOXmSqQ`mmY0P?0-FK6^U zU#AmRG|SkV1JKW-dyAl_gGH~1^L#Bj>kd^qO3be{Pi|FiC*kuuV0;3@WR?}i`yyUV zi3llBuD!I5Q>=+2Sj&tI01T==UdVvu5FOlWMgeo17>mggcj}W z%INvtJ^Q{!hwU*{v8~@(MhSXBN5j6Qn(4v%O z#&mPsg<1>{n}e!Tl&Y0}BO$s9z*JgOhn+(;013~kr9xwS$1kYZt9QWGHZ>k$qn*ct zOP)u%<*Jy&q5_eyB0wLHE~E`2bx%*aMnpQ9toyS=tH96Zl4;he>pjcnF+MqMQ_%Zi zypr6*h;b{bK8a33%$c)RzJ*DT!=}YGErr%EBW+X&Xj%pwB^3e35U7$mNXtHeYcFu) zTZ20!qkVy~%mA9JNzGO|Sm(1~czFnI$1R9rZ-t$_lPbTviNNoImAj1yc>g+!1if6I z#K4?mfi`;>pbSjWGCC%kp1@OT3T+rVBftPjK(@b8F&ktEwyRj|n8pSymd1h#y;Vyp z!&X^&1ne$BL01Ljyi!J zq_%_fnzUSAI_NB}EC}3v8{o7Ndia22m}2b$@Tz1Ud$lrbT{YmXka;7MN~N%s@|=eu z92_6DQ)kZDY3x#zzAC^pa;3Bq_O+C85wh|)Yr`zMO~BB~B)Y0e*r*YN-U9$J#QlkJ zQhcM+Bn(LLO@%EsIcxJUnDdh`n=nT)iaDY)G62O!1pAjL6?F1%+YB^mNV)Vw*pc~G z=vBL7p1Gc4)|5(wDT|E*ys6+*!GosdGC8~;1eM8h&|#t)TaV_hZPv(|jHC<3sgLyO zOO}{Ai-LsnWsa+BM2vORS2O`&#zzWZm6Qfc%%v(xRp_Kb1D)Kf99y92`wthr>L=C& zmZ&nYo0n;t=ylQ6ET-;5q-cPbkV&5G12nN$@GAl&EiF(nl|~-{v$PxqWJCc<^Sx~L z6Rbc$3O1PpqLHYoHdsKC$|E!Q6}e}^%T`0gR#OhW%j?T0M>!%HY_U+POY+^3<{<2?fyA3eZ&SN`|PSDf#3LaI(d>^tM ze7^(0)X8HT&);x}UCII`FJE??kkzrNZGk|XB5P&;0Hvj{d=*>|v0)OCh5;ON@&ZFw z6y|!zaGM_=i`Y1eb0We9vxM1Q0c~(f<$FWQH#-isVk*@bI3x#U)j9%Z$yY+hN}IDE zW*08@+RN1Q1PH3+ zB0|-MC^}ZmfGVHdz+T3q0R9zK$obeI|ZY}#y2@JWTWH#2&ac(JESe+UL(FInTXbA zo^;DNX{I3?=o_%!^F4Oq^aVSQ^Z&*37pTajB@Ne3DXMY+d3{}jwW1GfML^y}sPU$R}S=M3RS`4!!)+k7VGj6!nGVu(kvd4-HQ_>5H9{*{jO+T@>$(HEgj`o&cy?EFTgu zRT5T@k;WSyhTY?OO23zf1fim)5wHjA#QVevEaTEeao%xNCBxRts0x;JhWoO#6t{`KV>WW`aWsD? zxu>(vE{bJT;4hFRcHG_NrKR#LiqggcVF7#X#VxOL{Qoy-B8BtK&wt$B|Es@hzx8{6 zVlNGlgWWp*MX!L$EfrmALMEu}h?7?q`&3G5B_wk;A$-vT+#3=d3^F#d7z@hWc@X}2 zXx>3*la`W`(97hl+29wUg;(+sW6#9x+)1X^)S~TbgI>osB?g5%G__>ib z69A}*aV9Jqs$6k{wKXHuor6o{6cb4UWw4n01%%^@n=Ssamd>hZatK)io9? 
zP(Z38pN%J<$sU`DK*0~t&uoNfokf|7lVYOF8rJ^$(6ZGNxi+x^H^XjhBjvfhs?fR+ z*mt7u=&I&x4FIXKz;^KS9hklCt`6}KP%3m*NuupWz_6I7DqgD$z*RzCv519i<3gKalJvoosBnRj!{ZM7v-F9QDZ(^gTCvU+rDRa6vK%0ZcQm`n-9 zB>ILxRJ_79Qz>^I41vn(74f@H&fSs&(Bw=(b|?fruVfk+FiwGaV4$;Fe~fbt`|yVu z1Zd15W?o^zR+}KX(0ji>7nZ3)N+~`_ek%4B8F*o=71Ym8em|NUa>R~MMK;C;A4U(P z^YioIoG1u%P@?o-8UQjjo3L&SD!WOOOyFS?hv6!uNTH0^EQejI7@D>Eo++#EowiD% z`VH5b_bQN}nAF=W?&TA`JvKi`B@lXfnkK@Z1pJK= zO%I`q>E+szzUC1l7v-u@<4uHZuV$K-w^d$axN6Il#E@>F%|G{t?F496EM%DGAvy zAkfc)$y;LHCSiRR0GwF?cx#G4dAmKKEX&zs<`I$k+2Wy?D&`7~b5&}bB#>Fnjo2?Fp zv5Y`Z#W-5OWgsdml|=ygBCO@|JQ^rc!(M%<0&bU^XpRSQo(?h1DeN>-s#Mhc*1ZH! zuWk#p@uFwz67-Alo)_&G-gU2C^O|ey-+bUd*$9;(|LO;d`LBKPZ|o00@)7&p-~6Dx zH~=-hb^HrofnZ1}VwaJmM4QX_z2vL87^Tj681}6!%;CW^&Oq0K0F|n;bs~fUQe6dT z2nea6QK@E-Cy0Kh2!iL0lJ7n_l(O?fl<|*btgkm|7kbimcF645;G!M95V4-AWjorp zV9!%EH_8sA67#hwxs4m5DDo6LV2S~sW?^1NKa-((s?uc(Q1S};Du}vD*vN_zE*23b zEBBpGnxYW#c?f2s7{ImMQf`TsVvARx;iyBObC6Q#L#kbDyzcMpme*QS7s?R z74haOOSa+?44|sr*>2mm@30!ucZvzeSnNfEpTv`-pUqm%_$?3;RKsa8QIlNt zJByKmnYnn*7O9ewo`n(;7ni5m2ot0?DFuU75_?n#dxQpo=&KgsQ^(hOtREVQgd0ht z*7NT=u~sZN0y1Sh(6*x&l7S(qNv_huGOPe8)o^Bv$Z^Cbll?XxyNIZAaN{`(Lc4Y- z@J5kn)?0zF=qU0|5uryB6xV}#u136mHC2UsdCwhu&dW$0TnmuB0iT}Bxkv51P8DOM zjm3M99DEhCQxP_I?hMrYtZTvK4JK_TrEOAV9h7qGKi92nvX!eE zP};6VHY2qF2{x#p6kHpM7)Mn%xmBdRete^0JjrV*3Wx)Zhy+lj&N!)_zTsXQh4E46 zFVuc+RJp+FoMNtI)IJD$G0j+81bi*yA|3NqGc4d(UlfuGE<(6n0Dvk69E4^GK49K9 z?1=)0Vzp#CND3|jSc+4fwM;c=834P2jkrwae=5OeW9jSdwNvNV*sv)Wn~&lX1z4I5 z+i`S$CrG7?sV`bvlP&MqWyzg;ExmKME#O&{>fB`s08X0k=Nh*Gs%V9R?&&BD@LAa2 z9_C69flnW7OqiAy87j;c2%>=X8?<^#o$TFzxg9ukt?i})b8-}hKWH-rl1BOR{}#+N+1#{ECckAqRlYoAT7iOsjA9b z(QqBr)-s)dlS^X@uu{ziS%tFzfzLuUYg85NPzV54Kspsb3zHc}ZzE^y^@qyvr_f@} zpmPjUts0pfwUztDwTIy^Z9&51d|!&|DfE2clc8A?31!@b)eOtZIW507`$#cblGz zq4tUVOnGe&Gbv+{S_RFT7iCe0fHi+Ee&JrAxIEmRrZa>=kfdr9jx5l54h` zH{!d2uS+Hkdz2be(>Pcm%nH%h_-MpzFu$;xQK|MQvg(%>D#YegXX%1QCVdUL?piA0 z3hdd*6?=GK$v*#N*dBZ?We=ZE+hcthB9}#b5Q~AMGmG{V-b^oyq-~05QreRM`M)wL z8e_wFs;9>uqEFJdPM@*|PoK5thln(wZR@FAYC}h&_8C<~;zd!SKG?0wSiUucRS^Xs zN+59!v-!y0nEPz#$~I}WndXi*fYsWEX04JhTPBqPKgsOgg|9H&DG>n~$5rV_8P z8Ql)Qp9LIBl-@Hu#s-EUbQZ0Se2>;04w57Q7t^JQFY5p;gQPN3GvjFJN30R`aT{$! 
[... base85-encoded binary image data omitted ...]

literal 0
HcmV?d00001

diff --git a/doc/tutorials/image_classification/src/image_classification.png b/doc/tutorials/image_classification/src/image_classification.png
new file mode 100644
index 0000000000000000000000000000000000000000..14f255805081c1b4fab27eaf336fd389fa93ca19
GIT binary patch
literal 52635

[... base85-encoded binary image data omitted ...]

literal 0
HcmV?d00001

diff --git a/doc/tutorials/image_classification/src/lenet.png b/doc/tutorials/image_classification/src/lenet.png
new file mode 100644
index 0000000000000000000000000000000000000000..1e6f2b32bad797f3fccb929c72a121fc935b0cbb
GIT binary patch
literal 49835

[... base85-encoded binary image data omitted ...]
z+GEaXGtR%LGF`P=Ihc5GT>9K6N<;^EJ75eC+@-Huve=>y&&~tihYO8l6ovI)t&StU zPGwK<{Dr{mQCkqpAt28j^@vouWbdH)6L%42H3oFs(lMzlb01n#WLp8ymbr0o3(Haz;o+;u9`o(QeVef!W8IK_nH^^R)_l2)u19u)ysfhuG9W-a(ss#j5z}6e*vJz$p0yA48En4YMB`|B zfv3YjtaNPa8d==&V(bbwhMUTyeOeC=ke+EhtwMt?E-q#3C1VWeT&sWn*e!Nk2WCn< zUA4}Nx6puzp_TMYom=G*X{hA(LeeXS*2;s#q0VR_S;fB=o4qgRo6(huj>eDNW3*kj z>T>P#h?AqDY&mL#X*27WoLLy5-rh6z&8?ZRF>|SW@W+Fcuq=v*mNxtf74HwB=_Ne5 zD1>@k8!vyN)35lZeZj~+HY^2{6WG628tNoIxmds6^r}~ixkbds2Se+pVyHI#O)7wA zYr8+xUBt2a&TXZyL3BHhctHM@1Q8Ef5ym0F5RiO<%#cna5=^{G8Hx>@Lz|dZA;W^8 zy@2M2XxHN#=`XC>jTXb}-1?8k5185_sqlfw*6kjw&B&XV6CoR(#V|kORoUcBZb=S* zcR?TFtTIyk>i58%ib*K6Y~CC0;wu(dqyCj~lTDZc-h%HCiexuhW32Ke*GAf-5hrr= zZ>?_h<|32JssHB%$dS($V;YS=R-MLW*^(SQdLzRVv}u3+S5$qpbhUY;dbS^cHRzS? z)`qz<2P~n-;8H3F!ci|TpnItSumK`)BFGR<%d;j~PCD~OrA874a7ZLc@p-RVYsGP+xYxFMdh!?t_%r_?Cs<;TzeAo&Yedt#Nw5CU@ zdNvUWNkhq=v(VkH7%yer&S?#<(L3`u?Y6{YbbT(=Rp*1yJ84apot_Rd&}}|U=4^0BfB8d70(xH60@PY>%i z4vfdkUIO)ICzz-ZtJJ7FsaTC}$+ID#6z5{pexaT>B=aSD9*?8#U}oniZq2Voq;gO% zqfd%L+Y8LE0*iN?N2oqL#8Feh@IWE^TG?eVH7M%Qo;t z7UXi(XlI6S_k7~0{LQBD`RLC&d5T(pkB?o;CTtePBVLrYY3B`CLiG6pM*y_U>0yg# zvvuhNc5FLV%~UveMrNQRHf;gM)SF2T=$x_{e=4LZvgub-hl?EF0s?Qd+v}Vp+3Q?@ z#(5fT+`j!Tzr!*i@DTUz-DBtqWtrGbVmZrsUOj{Hc*XsezEGqaLw^)gG2NU=D*($6 zCy7F9l`3M=SVW%inEe>O{vO(YWIV$Nlm565bP`8 z;50x1575Xwk95Pp);>>K12;;Y0jF3-9~!FqY+t()7@@L@^OKhwRR^6^4TyYOmS3NL&M{*LdQ;mR4L&yP9R_uDAej5w zc9tGTQEV^ojEL}1USZr5NhT0U4v%z(la23eHX>}5ji7oaikbImQ!(hgeAv4B6Z%&v z@A()}_9h_~*yJ|E8wV#nzkj1ZRD^5-DspsNW8f{UO5*G3-NG~klvGqN!C3^?Q#OiJ zd@|_wwyOj1Le+dgr~hl5{UGuCuL^A@N9Mtfa;e=3Rw_qhgq`6HQSmqUZ{4Fz`OuvymJ5keT4V}eD?D6@Fg_u zK(ai{8{(exCs1Ax-e*dlVKK2}&CyaGTXqwf_&T)uys-+ljfaey(NT)-u|9<5DNf%2 zCFfmqCM>WqoI@ZV{T$Wj%>zHpQi20;}qkCez|ip6M6kQmcUGS z!++!IC00`q5wSrB0SlG1-4Soehup~6W|Vm(z5hHX(6$(tu9}R6(u&f|wS=2xUwstI z-qfcg+BtuRZ`=+eL6Jp>2JBQ=2$gv-c)NZIJ3DB|u*&VTk?;VK`Vc>-q!oF42I zq(K8^(?Bx%4Z@r62=)aRjS41)x$NHVt`th?-#93&T&4BcB?82caAMN4XP#b?f{z6K zPtow{-k^4}aVj&ktM9JI=hgml>pYrPk1C8%$OJ9`JY}X#C)Fo^8=xt{_{Si;MNBkc zi5v1CrZp7p=I{1q`>p( z9eO!pCb>5thJdbSSgh4k;bf~`{vP{|Qi02+iJ(^9O@{6wT@ZyRDJenBj&qQywsrAz zoh-EcDLNj6|QpN1;?~PrQy`1BO#Q~oNh>=2j z{ZCM}Uu2=BF}JdM+r8wgmD3E+(++mGLIOVmqqIwHz#Gue8p#Nqh>EW66(D=wzQyF~ zP1{WeGF6~Vm9;`Ir#Xq!D6hli;Zxaa`(usW0I`fHXG$y3!=U4V#lu29)zC<*uBOp< z-@*cVXxgzN654epR-XLLpSIQYcUGovr!!zRMwup}3+=fIL!E-2#%}pHz5ChT5OD5D z<*L-P?e}IYH}Pj#x5l#DVnriGwMRP%1hFu_<>OuU5;1;EN9YY3KTnK~FjYJ~MGaDt zeWjZaO$of>4ck@+1SfF*gocK;wzvB-$!$6d0U1rfZHf;G4J71eU*mx?EIelbDiqR- zF^FfN;x&MFlme#?Tmg-+E8XqeE0?%5gMZ~FVyKL$iFHuTiX5lzZ!9Hyh1pE}gh)+Q zN*0;@jHlyDti&|9?l=ehJ6N^Av*0^m87k}BszhAL_eltA_lB+C_O&}omTiP|67uuq zV99i>xPf%y(W#pjciIdhIMgC(lKBXi`I$E8F?_q*yQsK|>}yM-zi>l!NhEjK%j$fJ z6Wm$?4cwSZl^(KN+(p1OB`D#mudh!4em%hXt%A=l-J`&lef-!D>NM=s_TxUNNrNRo z3W5`~7^gWnI2Z$ke?t(-Lu;Qv*_a-)t&xJ?1lpd{upSdQg36=CctCH6{N}p976gwCcYl_&ItECaEd_F3rFXOp7a?uF61qu|bar)2 zNh~`oN=cINQ^3!|sx%9VvC(>ycwLu@e)&FIXhx5;1F{1e`&{S0;uSH#H9DPoJC>^ z+_w&^gP{{o!D^+LL4sTrYyz4P&$DAG%x~LFNv-#bWA!gj@2v~@#Ja7m#8vtyda}x? zO6oao7wr%y!oO3w zGvuAykS&F- zVEw`QZUDycHNJ4sbAE*PyhQ5`TZsfoKY4mpjk?GAJ}!@ft3=@ z@ru6J*?oWf>%?_qg6%8xc}@>cgA*FRj1s=;c8PMHtWf;&kwEgM0ImmZXzj_F=ZB=I z-8uJ+&7_zA8u!k7vnYcPEGrvw4+jB(!)u9(mjW8<_GP58l}ytj?M1pDTwyAIE3G)` z5*!i8y|I39vl@1C4Nl>_f#U{H`@HZ|B%7cev7p`zoyn zb$$(WFNR2_@92;O#NL$1VJAm-rveuTM+g6&Ezs;OqUS`Y%_egFY3N=MVL3aoZ?L4Y z@CA5>*3P;;_qm(1L=LAhbY6AdN94skud^4G;9Lgj1hIy3*^HMxQvA?%S)ObODoSY0 z*}CoFpX@vkXMx1G>wC;+2<8Nk#tel5lpeGHRs^hQZ3&fuhZ4~+*Bzy=%CSq#C&cFc zmRc&_l++5k6(B(`?R--Xx;7*G8&!JUOl)6nI`LB@4qK7)L*o#@?52eRZ{kQ+mW0o? 
z6LvxDnSaZ&ExEQ*xxxp1(*$OvyI)C~gdz|R03X<6gWCd~yZI9@6|3o4z0-dOpvh&z z1IS}0ISKfxeI@m2qC?lId|k}dP-zmPTPN{Tr~XwNLwU5Yf*nF9U?dDeY;}Y1w zUo}F@jwn0|iGpMxlbF^VL5llFpMx#nxtjli_!#gQ+iXFf9A@x0~Ls+^F19v>GkCTsuPcCi&uYiQGDg zNn`+g7%a)|#PGc+qvtEI?js&&VBerI5k-azzzPC|mOf|YR#3R9IMK_cs^R&V4{kX@ z%EX`BELydd3P^h2sQ7;q{=C?z^0yO~+e+jOFIiL)h<87;hK*PcZ z<4qkHw4<+`V0#0D|EPl*-Aa)V*cZv?8a*}Sg`&4IPL0Ebu`4F7q7Wf%>lbUDj$N{@ z)5Pnwv#6VohEEVT`kKe{=QrI%U3bktblR+s{u{q|z~`d#_yy+V#sOF^4F8UoBQ_T3 zvGv@yq{d$l=`Q!AT>|6^Iy$XESAxLdlIR7fnr}tVR+NZ_qzI3$MmURPMzQO>diARR zT_)6CY{9#A#G03Cdu5`&YG?Ez1nk@v9R+?t2Mc&zBA9~iGTvlj;^c4wvDEXpFPQ-x zAYdkU*#QFvV=uNZ{hP8rF#H%zHgsnAF!}o6a<&QYe$i$*d*CtvIzK-4!?$6fLmpdN zo?7LgORkMi*zmP$C^ZuokAY(b44ZJoY-8a~Muy@wMOq*~40ENeoO`u`6bqDkP#h+B zUz#VwZ#=kWQyT9wpI7l$5Se7TL_pK%H02RM|Iq)()Q9))aTqv$8|9KN?KRL~d@jgd zo_f?n3&$U#aoT-xO7LJ2W+%4hqm}gfcjk>duCYC6)-GBicW?378zz^g^tD z^IW*-Rg?lvgFaX|egFj`J$)>(BXjL0x?Oj z1FwacodCMeK-qvqnfdLT1W4Y zByo$h9z-y27|Kb16bo}$#*>vU03e?gTatlm1@Rx0B!bTS{`Qj;^=xEX0N@%j-U1#D z0Ey@*Xs>YqVZfXM8~{v&7`ya=sRcv|)Q>e@_tZOiLsaw)Gk4G61CZH>2^)BRc-L!k zFMj}}HVPsf23%0;l~CIaOvbo_O=zaYiO!v4=BhNI&J7ZRa1kW2-eWQO&{3T zLUstsRh`0-5CHZdQbVd~Y-~({p%AtRW1zQS)esWMred8Jni*4Se5g5yFbt$}WK`4} zU^Ai8bKY+Irj|`7N8DG`o%kSj;VBGZm96GhFcd9fVB;ImY=W!Zx2{&UST#!$?29>M4W?Q;{$>D?bv!VTtXChfq*ro1S!Gms9hgganLMl z&j|t=;vlPn)#t?h?0665o%JL`B3%_6gjb*-h#vQZgWVVDR$+<)t}DWKLy1InsT--? ziMe(6F0cdAfX37ISHazzI$+6VJ<0>>E0USP2YhsNgz`qx7DN-@gagSx6D(z@|9W12 zK9ObdCpGW?W)@;}r7rZ&&kq+ky#Cqn8WHaTv=Va&Zict#(C(3{PK5f5W#{;M1~wA5 zow~73a}sgn`vNwT3{8FH8cV`Iu!6JI8oq4hfd#b z#7Y`~m9oBmI53vAR=nIUodCoLas80mO%FCFvS;!@w_wts07+9vT9|<{pio1JkBww; z?kpvKYEwQeiy<11T@`TIo3<*;vqUWlU7YR^Y3;}Ax!#tbItlkW#Q-JbIyMNCY<2^% z+K_4y{$gkqp(>DZDpmC`m@5K7prdaR=F0|YVQs-w1-@0bj7q&cvnc9vax@f}U{G>V zoNzTzwV+p{H`>d&0!xVCG!emoJz|AOLIVU3mS)EnDHxox#uN2xr;V@`N?nse)2++J5NI z6y7>0J>JNiKBpP4_*dt<-qtoNb;TQ%O_vIehrkQ zLti2q{`I1;?awT0a+C%QfA$0ev321H3{yONC)bbdgM8lb-TK zWLlElZKn8T<$e;nN|hc*`(gumz~}GxO`b=k%lZ3^SedrqV*4Wd9`$43tzMNYJ@wVX zU*$;k=O+IAm^iQc*b>viV=2*z!Lt-53IZ?M+uIq>^Rlyp4@C)ae#8)Fdqh7B`XXP3#fAWT(CMYQS-0Jb*+U% z@B_<DP=QxuWXyWrr54?oi&c?-v&^%Z6dfCPc5p2{+CFZ}09gOM=`;KHOs zD15-V0`^pdfCS|YiY)?Y@f3yZ=W!Gh_|dWl-ofq#Q_2tCR=j(~QqwNzZOMf~@nMyl zPl{gDnZ>2Xpn*UI0}9adqh0`Y2Kc!Bl{*`F;E1GA!a}#RuLvsxS}3#x#=m_tIwHbu z#5e0O9^mWk*$QZ^Bv1cgBjS6MG@geuffq}i@d2Zw$P_R(YBNLh?Dp&l-;{e{=a~o4 z0z?6Dga-4eDdt6 zBiY%Q+P)Vkpj;UUKM+!Lakha%fi)FI7uy{tob~S8O^YExPwT!Z{M$4e!>4+C)STFk5n%PF)zktb6R z+Wq?Fix@z$8M`0e9ZAA&^`?dJ@7ftVWSkI$Whep-AX50lmVhz>Yy&dC477_6On}0W zf{MOAt%`XT@SX8|w$0Eo!`$dcOj*Usl~*o5o6QMSzz1Fpu2fJf*}Q~pQNMlT_T?l$ zp*w=k(7w_|CFnj#;W=i&*}DDf2^G8sJ!mu$man)eQzUuv_900mOqrUV&^O2_^sWcE z8>Kx%ZCgPD*7}y?zfMds;=MnP&_6G=c8OKFl}S?6XF&AKJ1{19RXUWC?0#7>q!}fl zElkuj0B3Bw5~KAvF5_tj!JZl~FGO|%|3_L6%o$MoUIHHDaXbvOaK8TYlo~wg;Qv>s zvdc=im#E^R?m{2B`1}r!*sZ(=F8(f|W#g*!O5A54AB_+bTjO~@H%P^22QMTq?}v0v z)$9Q%m%!Hm6=~{IdfJpuZhKQ@?6XDl!ZwAAl$MK@T(8B}(xW7Mh6#ICh4WbaOCnT+ z*J&yv#eKc8(Os$z-)On_+^wDfOp_R%T~)xEJ_)69dGxU6;zSfB2Ezy0gjA0gUAg2c zwQ~IR^8OJRFe?En0t_G_jF8`er5|YP1C~tyyxzg?2Fw3+$(YSUFt6vTjs@W8AeLdw z1^%D`$h`nySO-BE7^@~h2}X!F#TSTl*Y7BHBdqm+&vk|Mxw2C}Yn zfeQf@kqht)4%0x{uZ?C6SURpRAKZy~mp<;mPpwcr4{4%U3zE@x_mfX@1qwma(e#q6 z#P%Npy}iYjBw(k)mB3A@|McTp-l&){&%q`a8SS#tbji^!It$kh9&P6 z5SzeeYCFO>uQ66Ak|#lw8XkVeoO+v`-M`L2OkN(hJ$n-HE)ZTY&j!s&fJ?(|=jel9 zeGiL@Ut#?=TJbHRUs2$sFlkDcrEJjtWO5oIlW=b=!ZdMzX3teK{??N#!Z`OSuRQ9L zn|#pQ?*_~~52gBYP%ew6D{u$J`a;TB;0=RT6pixIzcP~hoA&(+&p-WHNODI_XJsVS zgXZuo|7)<}TJ-oQ)esJyGD3*676kCYYJdh0pqY-Fn*=7g9w0OSU{!}6Uf1rqMsQPo zm)|`?bhGTTXvtF;y~9J2DnjprF|=XAX$r9tf-pk;Q#XD4`?322pP{-PzXa!5`AI{D 
z1W5y3b`8m5^`8`^q|bTdHSa%-@5BZu{l*}R_>NNM?G#)TcyB-Iy<_LeTUo-8T0r}j zGeP&BOsbmTm~FJPZK`ZL>~nWu7g7L3Mv>z2W($xOYUGq9ZOxIr1;wi`7)_Y0H34IjHj$#4u21cup=TK#Xo zazSA;+aQNw#p7F$jsdGfU;*rv0vNnAu=k4NklgRl|h3Jo);6XyrG9_ zJ)Btq|9Nfzut3!Y?g7v83MRU&tZXeb@By&@q>trHqX6*yvwU6!Ne_5##QG2Pf_RPR z88SQstV6?$9ZajeS($sxOoEI@!Y%?kp0AvXa~7UbbDC!E_HL9zc3YiS#;YoqyxI9_)0IwjOjg9F(gYmmJx*&%! zo?2^eL(+q^C$o(Z<5&V&KOiU@VZ!w;2lCx+B~dg8QU zwgQqJ(BZ5`!2%u|Y{geA!F=ia^lZ)F)x|hbj8~~YbbUi>; zFgd?z#;&L}Q=%>%JeO*(1}uveN)^G4bO2+6f{E-5&L6s;vX z+$R8?eh9Kd-S{```SD{A_!6KKPrzy~F%xzFARJ||Lyd>ZKI@(Ql?${=NVsn6AITR- z6D~Fj;6b6zUALHO`YtqPL##AvU*3{X@GSzr(X>#?ysoYHv-six;>aGcy-Fh#2%V(C z^6%6&2?#3LC`{;_8XF~%u_4&ekmJEF#t(27;147TaKd53s4e)d;ZW6|??fw!h(px| zwMcTGV)8+k6Ryipo5YuH5dAT^K4i2p<%rr&M^yUb2EAThNjEmlz2!=njG(!$LwR#b?2<#RfJ-665BOI=C-MWOTIe zjR%efyURVp<*!(z!)Zc+AHWS9kRuTbU36Ogyk*YhhEYzJjPbbe%*IX~IB zWT)QNlXQS|_9z%--7h~qn{SIorkJ>37(ZZ?yz`}do2*bc?6O5N=54u00`5L61>a;B zfw$>ky8ZsKA5PWSx_{q_$(MLRg>+T8TS}12XK;kV!h$^nR52o4Kjg+CFvNlBD{eLM zlMLqIeSxupGzWayWfW(8Cu~C)lqRBl&9K|^?Z@^t?ulWCo|UhQec;NIX-wRo+j1Vh zgRU6}vL1N$A>1X<8RdxKZV?8Y6ku)X3Nr&E1MpK!%*+Kb$ZQ!TRnuqX6E=?cQLC`@ z`+^#TdB{#hhFlOIu{8Rvh>6v`0)a38-P{F?Q8du7dm~~YLjQr0fn8H2dLQ&bNBqwP zFNV-If`-g_fP)C745Y-t9H!V)v32V81+7BMKopoVvgDE~H(0)@9?Tbedtif@FwXuS5a) zFFep&p8b4?>v^-i2ejL{trTc(pfB(RD-lVhYSvpOxsGVX@&OPYv~mDuBF+R&n`LbL zfCd;2LD-538yR3q;>O>0lXR0fd)96yrJCR9YPVYUnvA0-?j|o{$SZ@9q=p%9Fv!-V z-(2C&c7HMiv1HtM8DyTK#vIT<&#v|U1r7>^j*O^RW&&stsQ*mM%AP+7 zfFlU1(1Tm|pE(p2u*$_}7O)Pq(w{q4$259FZ@Q=A&2=1>Id^zY!J) zW`5wdK5e=2P}Y13NRuJu+C4|o1slf}wBwLiqyYE%v}3-TU^Fu-iiJL(!>1(gCaMw& zH>&F!26!7$DA+?VykXu)u`K(1Xl>03s-pVH9^xl>NFYoK35GG|N!!s8hzr-@ekXUS zN#8T5f?Ho$xN2e@sF4QHZHb9io$1Y;n$#yhUB*O5CWlaPABq$x!EAU6MeZxRu{gFs zb7vodb_Tb;p&)VuMGQQY4+Ov;4!6dTi$bhO1D0U=L8cWl`hq~4syDgE70zQ33REC? zVQ^6tTt`!zwugw3-YbyTEFcotHT(1qE>El@0`^+ zbvd!TwfyaUaCEjk_xxV@tM&&6Nv|fCgu0(731a~{2tx~iq<5Wi-UfH&GDx15i^Vtr zm5qpLs7=7u^uU$E-3vO3`0t8UiJ1){Em_~!#|<#q*xFB4f5e{=^|23b%mM5wGA9aQ zfdRx0bs67e4G<^&gM&#!=T`u#!+k*JCR-L|(K*cVBtEvm;B56R*e2!!-^hTU7i9~4 zpjh`MBKk4<1#l7nkErhe>$z{={t0bqN~u)ZyF^KpmP&&rsT9!=T3T8fTGAHUdx~Tf zDH4^mB_p(lj24RMJwNXMbG*mld5+tCEB(IT&$zC0U2yeKjJDA;2cH85?9^Md4#I-@ zdT1F>cETci+2ZyaSM;wAeVaff51MH{hvWvbWaOn>$*oMFuUpLj-9KpLqN-Ju%fV3G+v>7C#drUl zI$h%8O?H92F_P`13MOTnnM9x8z8Z*ApfDmJIm@29R)^u)o!{!h(7J*W7y^}}fgYRW zZIR}-+OhMJn{ZtUbEb_fV_tQ+Z{UsaX`)a6t)=j~Q*A{R4-LNih2G)zXC0*eZ^k(V z11smyXMX}K@7$T5X*1@r17y_&!Vt;Ge7)FFigp}br`n$dHg7P=M0yL4+Fo$=6t^=y zSBHndCKBLUCT+Unw-N|_05v3P4!d8f?e7D#;nL@Z94DV&DtU|+i}gSIH)wAMX>T+| z*#9^$&=t6^eE^gM&@f)m~&_Hr9>1rk6Ue;>*5(lT#NutP#z zTj2yC?$Hsx?9NCis&PO;uT-ry4xu0ma!br4{30~*n&=~R^3S(AghKQOP=sb2kfOtT zKNLQ-)=c-_wZGf~N$-vRHoEW{FKG>}w#b{T#%rS)a(jVwzuF>_nqDMf4V1_Cb5%2i zg0g+Qa}oBy2;jOqNU%_Vv|{ieQ*S5aYjnDSWT*AqjWdtMLdA5p$)Kz-e!?KW50 z^Zr+_8ozjNL4JgUM2xeXmsFz;jSJ7|WiI>xQ%dNfYX3Cd@KMQ>s@98Z@Um&5DWYYO z-rT6KdcI6!f+4BpH@b(8^6r;Ny8#-tnnKiJ5Gjr{e5`g{PpZR!?|qiUo#^2)$Km%O zy|;AUWQ|{5^>pgaBb96L7U)&SJhorT`@R1925@^>49_GjDjp%!swIV!ep0Vz*`TA6 z?4w)N)#<86xt^b(g9!nd!Hb(2r-SM@^B9~mx8Uy%U7`MS$i$efM`ju;% zF4X<&5aJU%WePOIyi*h!r|iQ8)(1zAh&ve-cZ^?{{Oo_cGflGn z&3Ef7JAGvKg(&IZG~1%QU{cBs_va!s(4hIiqjgSwhQR$^);9j))aURkTxBbnkBdiA z3XDx{C-O8V-W2_i#jSyo`9$^AVt21U(VDFJj39LyL$E zQYBU1|0Dsp!B!!$Z47TOym=(LBm*Sp?K2xAFFxEo;kH|ezQkD)p`l$J*23yb0^1^2<@ z3S5@opt*wz2N>6z+cAtn=qeF{6JD=sKHRz{=9+0V4GLnCX$89$ZZd_w3qOv)c#YC{ z%tJs_R5WkNJe;g~L-XN?D{8Qoo{4o34+bmiX%(7H$6iAE%Olf(GeJvJC_W$jL`j$@ zIg%m$hY2L@xcQwOnJVd<$fDnMq|(f>zX-`Fa>0L>=dIlN*~DX?h9Zn02kKj5^F}rX z3kSzp9mjspB?s#Y&rdiZN%sbt_!9CPnjJl|PMr??6-z&y?h@l#cdSuw->Sm)6PnPt z0WDI>N_xwKm2x$QoF4Z*g$r|7a`nD?bq)R0T 
z;$_Vfhf}W8SNq);h+0)g%aX>)?Kzy@2sKVTULHG<2t~1M5#+H=|8);`*4 zFQ}RuoHw6an0!0}*#QA7p(g;-@e~Y9*Y(3nNIx9BcCvElxBtEyW!vwWI(lk)c|W{0 z(U$u4vp)14oCaJyF8@?5+0&${1^vE=ZnGN+sUJr_``34rf_dKydn`bl&W)lgEQ-QQ zOIcp@9?|7m0}}r2BeOGFLLKp7&QWX=L3r-lTw#MJLobK_07R z`}FiQ{`Ey5Mj*dIljW6_(W8FF%bn~gWKi-H5fVa{EBPP*{UqnmuWM^-6OF9O*-zmR zWBmH{iBHmIytVtEdtuwtxJ z#I8k!kI21@oc;X9bn#HQ;|F?d_Xq+9+K0CBFJGFqjCr7phq(g>xaU~b;CXHPEQfEE$In*p^j1eMw|FiSV> z9#aj#El5f+RX*eJ(Dn$#%3P8>0#(LQ{g04G zSb^`lVM~^!VTGm`XHe|O&re`Q&Oy*?^E9`Ij0 z2ypV*-&RTQ3MXHRYh=i#LHoxAmhSZ3FNRsFSKAdJ-aPB=orzs{DAN=~#k#@944d?x zzy{Y)Uwh%J5>MD`wg)||icY#|e;ex;tEaVBCqM0?yxxSrCu!T2^w5MNPwdu<5J=$f!Horua7aUQ z&z@CiJ|WbYfuhj9J5M_x{{F_}?AI+~adfU$Z!+-$SwjL?UEkbViys$dvZHm`-(7C? zaVPRji)r2Kt$ASWxFYf1Iw#~eEHZmw4?SHst3#wW1k}DPbtn|BHbHQKXNL&|gkHeg z`#lOAk^{9lc$tN?-s$cCU^1i37{FO1kY@4S$G;cC!^7K@qKYLdX9^Pi>DS(5S2pQ` z-om|j+m7o!(+$gW3^cV62?G2Kfx07F7cTYtIa-^$ofJ_tg#txEw+I6d;-%e&o9J-% zAbXImySuxtNqd!y`xkNff2L2wp_G(}A&Z6()nvb5%VKep@Fk=T8&U3iO@+e909Php z0dREV*TvcC$wi!-Q(;(W;;t!P+D)5zP){dWOvT}ZIg+2)tZOoWjDbBxv4pnIuIixg zIk0p%+rVk_f}21>7s(_*niaVI8~x|3<|AoY1g&7U=M@U zq#-VxV+ao+y5_s5-_wIEQVn7e)Uy6dTK`u!j&w?TwR5*@_wWWGLR7aTG-C)ERVrWp z5FQv=Br{ODW!zP86T$WSOHekQKa1+oGs8^wkTNrgqRND~jOI^Dj~ZvVdI+No1SJ0y zs0WUmuBu~uwf63u(7Q7M$vR+$zEwRi41_ojM zDvHKP+BtEq9r7=CgeZbY168syPKU&@u~94Nv04naKC{lehn=mu%ZQ;$!FS+yK3XMf#on(oGK*~j`wY3%cT_yCAV5bLoK^a;veNYlFU^IhDe z#DrS*xT7UbH>9#0Lu3*Q$8R*rjq^Q8R7X?vU;N~y!ZZ-}m>)h4WIJjJ zUtvstd-qf~<^EH@G41|`FMQY)4Qs$WDtw>Qfqw-Wdg3H9j2W_Oc) z<6ay+JioLUt8JBcTYnFSzmaOE~%) z;J`U_z!B&GB?NbHR?#g_88{{7IM0qfPlfkwHHFA!Gk5wtgS%h`RVJ|s)jOTdjHTlS zh(IdSQ-7B{YE>SLkB&llW1~=eh<>!Q|4k})9E9#49OfmNr#}AzXGu`sw1tf_&djo- zrjn`pC?zd|a~WwVN`WSB`W)yQnZOdk{DyWPk;yQW19mAYQaqj2H|6Czu9Yk%KD-9uYaL~x!3&cINSq8&`Ogt03F^UgB!OE+(fiR8+CK^Cth4Z z`nN~xfTgx$|M=SLH`yY7=sy{o{QP#HKOYOaq0MhYSsQbD=uvTn(Tei&Cs2w5~L>FGD0oqst*L*__hq0Ou{B zF!+d%D#^l{-qxt~5et#vmD(Gh|8Q#6zCzhHeyy8e2W~}TUhDGb^$u-5)odr1aY9a8 zFWg$j@%oaRa=^9T3S62p!9a(>qGxOqJM-?YG`OeND;hFs*}YnyaF+ogBB}RPc$yH{ zK;T885T!G8CR>K*DRvNilLT54g9x?f3XyS<2J%N_|4ROv1U@f(HHKES-9AqgXhi&c zh_~rzmEyjq##p-bpTzl99@w@7)beE7AP|!%*SDX%|8nBDK}V_Kjt5De(^uW9Ix3~N zM-}sDo_^7&r_XU#K*_o!TVJMPv_t|s7=eti)q?myJ}0tYZ?PmB6^}HCUh>8Hfb$vJ zX;Jp;R(CXTK?2}Kdkb`dq(9=&fL#?OGc;;wL04InrxtRdE{C~o@aJzB03-26XP(?Z zRSB0mcAyRrL9BNi8PDqclROI4uUT+M)eF817hcPCjbdibS@x{!WTT6!`xm_nN%eIF z=EfO803lF~A*%s5Cy!cC#<{f+J8IF#(Xv=>M00?F5#+{3Q<5PsCK}A>f(Uunjp@3X zt#7^q1Am4G7mKK;Jghs=a}OO-i;+-ut6&sr=mVyvt0~F>i*gVw5m6)3qK07|--Esj zw;Xbz67>rWCRT-TBAgsW6Eq+I4|abD`h5yfMf|;fL=7P^{SD^uL zbb~pUg`~yzTS%m$Ck5Dwn;ZW?07i^7A&e59CQL8f-zl&A6oK_Y&HR4t2wa5cucL&A zN#Ju#iAm|=Pk_i!#12tg3RvnNj4mEGvsX7d!Y**I<>ncIl(1ypCF{Tv)RuRUe=Y5jLpKwQK?1zPkSJ zuamgP!Z*FSMkP$^Ajif?4PV1ego6|AojAS$9O0m@x#=a4YL2F~v$la#<;KeAj!MJt z^UDYNLx;;^-Mu+_&dR*wx7Cf$mGB!06P(if&aasovKI3D#wO+?biy8JLE(2NenhxkY?aCLMo%WNb=*ORT!+oj6woSI>7oRP1nKxRIksas}8RXf`wi6ZzrOM%|5zi2&8annhYrG987$ z6sj|iR)21vJ?uASwk9v~B!U}A=)+Zm3sBa*NLN}9JER6M5BLW{VXnwNFt4(EHy@jb z0fmAW!G^|{5<(I@Ac|UdEwz#tBX@O z5%v$|Ml0_#vja)`g-%XRgb13Bx}28gDclMR^211pcc4~nZzu_N6%~a2h*bWfv6f3; zOZV@AlC`p`#a`Bp4tlx=@*3t5lCTtjDhOx7#f0yw>A7L#s;!IZHQB@+zgexLOv+H& zh9PW@Xb$jn(2G`B^=^#Ybb^=Odqn*jT1vwtizAWXbG}Ms|Es z;STJGC!aRsmAACFKLu%g6M69i$j<{@zS!HoLIx&4&@#cB0%-G6b~8?1ycGZr@~|_9 zfmY`xgzHb>cpE@+6Z^`)qYye%Oin=P2o3PZHPF-JKN8r$3Pr{rR`)%xfm}e~-(BnJ zNb*NrT)WuBm4?Tss>%DxEH|xfOlp(fv%Ns%#bt5x59{yR#nUn>&cn0?-ZiLf z@lqdw)452xPF#(UD4j!*(`C6OF7W z3&`q(wlVYp)`d52+;Ge^&)q@U)@0JC*=)Ga2?^mVmpT~qyiJCIVUZL~49PD>edku=ue4_jyTRYG7e)%0NOzsASMU<#x?kp zTgmy1aHv(33yA55xEgF#hO#>qdcqBWF5&_H!#9bMF)S5>nmN+}IA*mnCa{(W2@P>A 
zad3giy>c?RaHR6V*GA&@+{3ZoUUTHg$}dYkP>u5>uldO4`yJ>BCp-@!I{~aA-n(r5 zA{992z1XX>$kwulV~OBde_=EvACKIqwt?TxiO3f0%OXPlx*&8yElTDO;Rt}#1-#jb z>LueUjX!&LKIp(H11$Gx&YrNp2_xCaxF@#&I@esFVg{NUR36Ac@caK+ZW^R50IS9y zu*avH>@8l8VJ`fT>{n=B9k5 zpO2H11{!JZFA%7MlN8@#7FU#$EwavhK$w+oehOkR62%Kj+w03)4wzlT^b0j$IYp-r zNe%K)vlF4g;ebmvxK^^2AaDdcoKH8Tp`#8=G&n63` zKm=pETUf~}Nc^gRHjm0_`d{TJJOPWfGHz>y@B~rby+Wy4XpLrTc(}PyIM1Lh97cP% z*UIWGSC|J3PFMn%g8*^WkJ}Q9R|_OX4X>>MSeD-It<$j>Fx)HV?ra|^^x8d%g zkP{viDi6a3sYl3PS_f8$X!_A?4*mWbA!LUFn3_UZe>epxl-nTsJ_fJsq7b4F$J{~8 zm_i_BBx=iE3kft?)O?+Ys|S&c$411~7?E-jl`h)!tZr2ut=_0hmrh(`|L#g56YqSb zDv}_&pI6PoQsZ0;*-GnQY9WzT6cPvkRtdR8Rlc)F5q?F78{;vImGYRh;&A-9{nFeR ziJ`$kIZK?wk3v=vtp`ypV0;?Ml)>F+QGVy?)jt{H=YQpA+O&>-`Kg6KJKznF@9j7* zq+LFL);qP;0k=$Dj8<7p(}fz>FyH%&hVG%de-jFX!M$Ad*8DZWF2*35)-_mK>bk#$ zt+9~A|1PJ+*E>w@3NKXrIpR6sCP}U#a*!2>7-VYXkJpWWtp}6ToVNq?fAOk!~S7b z78VxC>pO}<=dkagFswLykBLa_I<2Bbu{#C8TOn5c31k9jYSPcHT(^*<>A-n5)Q{BS zR+BR%tpVUO!rf*t-De#NrzQkI#Nss1Lsf(}c{KD{Z5Jv+0F=}eIpkCs@k5{#LmckJ z$!Ae%9gVw&=u=n}F9B`)&;iX36@uRY4_F=v$D_D?YH--$EW7DBVMGQA;!J?13_1#vaQ>F7W$)VMkNkCV?8gSAD4+ah=o$Kp zpUj_lm4D__z0dBB%7t^-y=%uWb3e-VVU}g2jaa``AZ;hQ;+!RPSaWHZGX`yY4!{I( z5!m*SDv$Ahoc^w(nAY>7l%6T1k4cwT2khFE*Ie59%Q}$p`%O(^Tbzd@1~L(>C8b9| zv5+N$4*I5w5$^4McC3MGDF7CBIKwUq2{gKOMEyxA6@aO8V{;Z`AC5IV|DbIwX7H}5 z`G+wN_iZ0Q@8ox5T$RquTTq^DjwT|8N$J=;D3`E{AaFj~r|u*oOg%5CrMO-ujrtBM zMnWi{jD-BBtcz@8cuX`qyAo z2Ml;eQR{9*!Z?zfFQO9U!~-Xm9$<0d;&vk=d3_94T$B(({>D*=Q;e8MNC+`TT11lq zU_)#Kg9_!QOP>Vt0fCa|-@k8_^6S)N=KHKZpI$~c!^pNZ-toniZU?Fvgax=|$OSF? zW@SSpvz!I)jGQBQqa=3#DPZx)ni@bMco4u0UNBjYs!+Y)0s2Cz=nl{1_O_}&iT}_> zU^V({0zaALVTZ%#C_&mf)rBri$K^{54bzUE-i7@UpJV^;wWlbtQ%7;fiD<0PQvX`W zUV%`gWq8sdae%e|MMN=;4GLF0FG!9cgGYwti{vnsP!!pgwGl7b%Wfl_778Dx6F`1R zd5fL~Ej#p;gn-i1(*x#7Du&~TW3j+F;gtq75rc*b1~15M8_z7RN-O56;jW% zki%?rYyP@UG_%0)a}A771lPe7`NZAGu^&nWB#sPWa6mKaIK+yRe~F(Hlnc);e|7}@ z1|Q59)uU5l`bxIP*L5Zs2PP$@g?9~*K3@LVGdtYK#A7SxJ7ahI)VmG%5=XQ4+#m|Y z7ly4J&ID_KA5#wfGI`D zJRj|v8u^q(vlPbt+oxxJAx9(1PBV#GwkzE9+@=QMoMEB<^gp8KG*c76Pv`%a&IA+# zRRl{%IlSnSGqAUTOot+(bkA)?BYhPTRzMvFnd}h5H}3~KhM4E3TQ5d0eg=FQFrXo7 zp<;|OW5-;Vta(yHB45-!g}TjP0L0h$@SS^NYw(1W|d8vL&k zAe}mzHWJi@pO0lA7?LEf&GxD8DA}Z}

0Vjkwr$sfP-7F)3rL0#=oB=& zf^Y(3asSJr@fKZ8E>iejK0lm?Ot58*)NdBR2q3?= z@)@mf&SJFNgh4qO*m2$=tn12z%+0o@I#Zb<(yqGJ!5GKl2}qYc}5 z|1>nh5C(Lb@k2LEGWQa|IqdNvzUS#_JXnn0BBU#}KKMKGQfuji848rQ)G@_n9*vkw z7hG=|9;GGf77S zHAosGw=QYx2@QncrBDzOOg=+_zB9zXQ1+~^f+%%Cv$p9V37BNU4qPhn1V#6 z*WxnyS(oXcZT(^KdfdDQcL&=4S~Wy!UnNJ*Dhf#?S=t3b-r%=J>@zKu=OCkW_kT*)hFV)`-ViOE`;4bErLn+sIjgHN2eyfVaE} zdn?TM+LfRNwZ@@7IFm6f46?U^g77dGG%Oa^t=!LwfPP& zx2<>2*@~JBXE?Fzp$|iECkQz^8Mz1)H6P^CaxI$C3H@mjF9UL5br|(J4Ga^ARoKMV zUH9|i$7*-m*PkVp>wix7ucr<>&yhiE80eEuY*&ZgpigCfL=dU93>!$ONm@Z;@Qx8B z?J&dRF^wxq-d;rRwI!_$3;lIp>g}%5TJwXYH!wpORmzcT#u&UzIc1=W>;V<72s`^x*BzXhU%k$39i*DW+*P@e#mB@-NASPqJTDuU8ELHiv~sQdP)oG#nM(l{PyAP+=1dFZRkN=A@Ux9H&7j z&3j#VaeznlIsdVRfHku~2MMa*163&)M3wFu7zsX(e@*^85RtSYataa<6& z0F(uFO63beCa|?AXLdxH zhr&sTTI|0dAcgN_rU`Kz!9@d?Il698!YAXB0{Vb2SE0nkg2j)W1Hlo9)-X7SfaZzv z;bP|S4c9rOp|VbVS#kSu=GFczf`4mbtoDHp0T=_o)=1KSxjUz?HPEj+i@Axe_@Q#G!wx9WM5qdh*Z}#v(0SDp)S`-b>#6a*0k*>3(F5))?awk?U zoQW$W>I#e}PDVQ}1Ip`$U^RUI8RTUW8ROWzZ>d`)ADsXhI78-eAgl8!nJivweF^eV z8mzQ1Ynf)8CW{YudVhWdzJfvlckz^H(9uNTg13ja>vR1%6xewo_a3!rZ2Pt+yX0(v|kP1rrNF7Y>S0PC)H-=w!U z)`=TR4+c&d)nmzvOO5~B5y!yO{u9cR^EW;=N@RGR;d&*rSc2$V+E%fi26m_IL57QR z`7s=VV!cD(QYpBBIjOBjn9gFa}W%*QV*H-w+6^FPv;#>tEf%{3_NA)aJ}^bVAX z*JRiE+IfI}=hZ`mMcPkn&CAb6P;er&G93;NsI9AHTaN_U4o_X zlr#1cOzM}Cm7RzQhKe75EBrkn_>dpSgfrZF!)Ut3){wE1=-FT*!{M`SqV#}e?*5sR zj^_qx+cU%IH3&){SnfRfnHyqmv^)PrU~CyK2G&XfVoXeWzFAf!w#_25gJ3HY4^=<( zKtljHjUMk0-p5?Dsz9g+s%|e8WnL z()X>XAuz0P$I;_jVm@N7FI3?zRC_>_?0i^c>7BnJuJ)}%)|ZUkL&T4&+CK1>2E;(I zL(!p1DEoL$X{#p5jTn0`7Y~?&4&H=j{fABj_8>>+ixEVu%*+B*OlgLNW@Hus)8ZdI zw6R%4>iQCRbH*3-k7^JG3)aLRhwpCrS7QW&pIQV#9+ti%uy%eCp?{(cj*xsU|AGl{ z>Gj8ec|*Q$ud7yXMo#wKGaozDpj_I0V@Dna=x*BdJAM2OK5`9LTFtWHwOrPZ$_EQ% z+2SG=Sb8)#t^QIKPp9_YVwr8gEuAiQ$E7c(eBPl(i!QAF#K@);h__Gy6BucGIvRGw zXJDr?!6g*%0y?(w^5&){Km#|E%ivMAmLcvJVEMS5AUZ^|`bN2F%!6=%B^4D8eH(G& zBKB7`05%&Ol!WNR1Z;@^w`G@ohA^)YMLtR9YSx}W1QSxG_GTC(YtjAEUx=!@(X zu9Bn^Dcib@UJRX?qM*N4zRti|yx^BM2QnRJGq@mQ2m%e1*tIca|@Sd-;Zb(i*m8`3}?Zsm;IG zs=DF+z50coeX6$TFybQ~55lMDyNuI)?qH3=O$<-qJM;d2YBSDk;dr0VuOE*9E_aIv z!pgF)4aSx3R|o8aPwnS`=fLz2{80102sc#C!stNJa{yG7niaG@x?IHOo8R#n8HW(E zF9!vOFQ3Fp*eFT;f z3Ul0`IE;xx6LL|c^}+BtIQ9zYwYp4NK5AV2TN1ej7H3h!kmm=^;;NHqD*iq{zXXZd z4&NWXs;-k-3j$XSo~=KM9tTR_L99Dc?V=HYSlJ!I926S_6v2&v27m*Dv@VyY@a8S43JbbzLsKGJOy#2qoJ^7E2@hEUF2wae^G!lh5;qs*2aust4Zghf z9<2eQ$ZGPKGV;2R0H>v`ohkB1%q)UMMDAt4J|+C&bvUzeDnRZ49G;3Y{}aNy-V2lV z>g5bCWz}zQ{%n{$j;<1CAr?i)}9oJV_shh3glu^kzsbZ|bizFOPQAH>_ZPoorJH9Tiyj zyVU#IBftb`%w z^u7GDvX~Vk=w|<^HyfQ41)A;e0z`}gOqL_LUSzX!W| z26PWl?4zwo!l*Q!5WJO4khHWp+voS-NX+7&F4j#G_s)F01OhGFlHV4Z9qfK?lAi-E z_o|4{Fe}Zhp`l>eL55UkF*{2!c7SvP8RCh$2RjZx#VTD`RjEQLx`&+!)De({chFgt z%sK;rc*MAYY!Q&D|0jjStY%<$gowq3gf;x1DTM*T7yLqQGk#$bCC68kY?reNMAi>G zu-wbMWK(DY%A-op3=qlAyWqiImZy3sO`Z?%R5(-;%yX1KTyaoFRH&|Azdco`D%*;5P ze0#fYw5$yPhC(_hBoSW}-gkMhb|Nd*iRU5uJ_HeV*9V@%my$qE5RpVAP9m{T9K&WB zfWw2pD=3i+$wf%!@SuqbgBscPwq#M%c($xr*BT=ZPCa*hl23&uht|Kdun`n)r(UN7 z4rDki3D1AofTPkWdXYD5(R1yhqm_P94ll*CW0SF}Jx8{fOaZlKpL5wb!=Ap=!!^C1 z)bWJSJxR9EZW{)jfDWrc#8u&BnS69o>a*s(dLc7Ey)qit6uYy@{o&<0O84D<>Z6^U zFSk$9>zg8|Z{9e`w) z0hA;3uwT!za#O`9KNR!c9oOY$M2B+$j630yD8yk#c09l>qW%s#f({Q&D`!1I@MuUN zJ;Fd<*9Y!Y@%efY+B9nH11M3jOvpe#0y~pw3dC+eq(K;>yADl+7O3Sd%jR0QUo6d? 
zQ|#E8X$1S} zL(Ez+J-*VV1Q1fZBs#c182#t)qy2kL1YWYHQ?MeU$Q%3Lc^h+Ns9@dXupX z#xg0_7^%Zv*dxN7C6&wSD@(mGNR?ya?+d`ngEXM|3mec898u&0ym%1~_636-u;B3F zDHH(o){h}Ku4C=Gd+KoID^%Esvm>*DaajYEGhoP*UB|mgSQKa#W)dFF19nZw*K z37(C5hrBj|#NmZ*6yROLbYg2W^|SH{p&C=qaHPx}EIIIpgyi22fwbiqrkN3qFAxGO z@!CmT{m>%f|7UTE=V9BS^!5dQ?%r?X0~7IDGATM^hbM@i$L7NzgH(UKK1SwOCnY8Q z6ZL>qBXofamu@>vv;rO zwmXQ>$#6AH-yT;e-~)1p-0+CcT)_acnE}sE6!3#2G3^YQ=CWb+lB93F;ximzTg-d` z=aaLLO!gRZa|>BH*X7^D|5r)9L<4a_*9tN-BpB8P$_UwbmYN1rk~S?b45QHrHK8yVrnHB?iTn0nL2xRud-0W=--=rn&duKW0R zE&U1lz*qW)wll)*cgpLo^_k~>u`>MJXLg-MVE8A^cbYyFi~!~#)OZ4K%q)}+B$OY6 zUpEonn@F)>jRREy%#ITc7R2p2xT?8dl3-;%$K%I|S@BW=$C|ET#!POSCIE7oDrbp- z9wW)gG>C?DNKE55pZhG&$MC%&LPQay=_*bsK+@~c!6P#d*{brHLNL33%Otuhn5-bX z1$P#Y{siM_kD@fa0$~gA95(>8$cSE~!8@F3*%EJu#)6F1!LSw5XAxoqa8V6Ld86~1 zLBmP|jc2$~F=@WLaKNGl#Pn*Sc|xCZ!i|UOesOWXGw5 zYM#d(_};sm=fNX`>325+)qp7p73S9uKYdjVZ0I61umyRRdo_unGADrH?aW>7wo+;&2& z5=UqMba&xCPF&lANFr;(>!Xd9{!N)OZLL$f)PlGmy_+UZ(qp~C!8<5mI za!{~G0YI)qt2-|LVx^-+v%n5L9)248Z<;io!I`>QYLY()kpv05Mz;W8QL@EwFMf-^-xW1xrgOJfFBm)*_ z8u~c3L2Ohq7A_7Y1{QTbNGdEVGSiQe;iCK*mllQONxLeiVLFyAi*!3k<`r3sRZ;sL zYhmT|l`{KL3+B<+Kp}xbn!LG1e6z7xw3=s%OxqFLih>j?ZW6O5Hf_KJwXZ8GC^*99 zEwLNW5g>R!WNCf^2_eUzd?aEgQ|a@jr>_x)3p^!QiJJ&dyoj)WJP~MZo1swSPiu3B zTa?Jc6s|I&m;vH$zm_MFNgz+2co)%^gNLZRZ`y65#lSQiQDmsKljTzr!?vLaMK1d= z;fF@!+U!@9ikiEAhxY~0KNEL5QxUJY#bgHi)0UCutmW`I(C4;Bv?ZelD2^E&`g!nApMn0Qz|eKCuUfPzSYq} zlh`f1#&y6&@gl$Zj$d_z+VLiGQF{yGpi76zOWJlo?Xf5EQq{$1uVr9K(}M=+<*JM# zHrZvhJz{@!>&|)v6n@&d>n1h>P)5r;;&#HlqW+U~sfX}y@O*-0^wJN6v9Ik0my3;w z10O3MbHw7Zbp?t5+Co~hh_OPioy|8^fZ^~S%-697e24r8YO@83h>G#iqk=nY$H&HI z>oFmE{LD3YLBKatr^1CXGK-JF4~`l@2}CZaJb?DU5Qb8`qSAA5#@Bh@XML7Qmg3fwu#5@)&64EQFK@>2{|&U3E|nLODt? z_j1XBt+=qEvRvl#xVJ35{@KT#oV$=z!LvJr5lDOane8-pHey`89BM?xREuYVUH zw@sUSFhyeB`EeBrnP8|IgyP{JI2%+%v}nM1Kw_cIZ$=;cqP^;o_?I*8AO7sfDko#p zy}WMw=E>gUw^iPAEzcSHpz-qFlUkO!FA~bW;>&PsJSr_y-c?aDY|xV;bXYI-oJphr z2WMHA>Cs5om!Opla)yYQkffF(^_)y~z+uD)a8=y22vO*})u9$9 z8d}(lH$1jWc)t~t072J$SS4 zR(iOU0G&|CBQt)9yj~X8&$jZ6c|!k$(BeYJ>9arKyRJCe?F~ zom^GEO+h16<&qOemwQ-3>?U3B{^O-JSJ9WX9Ct1Z(4F7F62EA6+-I1O@ga@5&<_^&c&DrII{7YSttwu$o2wZ3wj z+rW5_-s?qbgu^My4A`&eFJw9ZfseHWSzbVcimpTpUIn~DG6HQQMN2{zT@s2EO5oI7 zi@LF~F=y6ZGV$eCE=jh>9Y-d25Um993=-W5^cpjG^~0#xwZzy%pB`P+bK0`LIWLfqn?EE!}g1ETXye=IO?%l@QDTAe*L(R9?2y4Qj<~u z@= zgeBbV#EzPAa&j`HcgmooMV~DMq?Np1l$Ib+ctYsf+GZ)!TvJXMh`hLfI92<)eM#XU zX-KndT4b;Xm@|aj5L1FTfTO>wuaCejpv(~1fA{Y8Dyzh-ESXVVAzrG#X{)b@tY1*Q zP~L2(Y?gCOM%t*@RM+_LYZC2MC9tSVo_*q$*?!m+iLwKlBfOiV(MWWQ ziH>U<(-2~1kz8$d#~fo?_~K(fZ^uXyU>(E1NBw@tYSYwPhqb$vsW&t7Jr7>D!12Rx zGB=V!A)~Y5;iJxxy?z9DCxr;CS+Y|nm_=6eQQ$6|dGPU%%;Tuy`kmHMPmdBwJQ@E1 zz;_Un!J3H5|J5ste>uA#jzb^kT+y1Cn1JY#TRm?L(t;ej0v!~ogFI(<@QGf{JteDl z`IpMg-hS=y~b48X`SWED#$vl;Ocqxe>6k<~;iI?6!KKc3it1~k*C%=B((2Q9$ z!><7s;fK+638_E6b!o@yV2uo%L_uW z2){Lzx<NFd%~_W7w_y6k-`~2nPG%Wx-KkKc_e?`gZtjJG&=vu@ zSk*8FJ-e24UfMO`-urBubhEA{jY`~B)Ox1Kn;hye)qFWdY7JbZgGNKhKTx(m2@T6uYjzqvEor^x9ficHEDc=aWX)?h?oi5Z4VC* zH!}MX+6bL|d4U@Nd-q1R=xTO<05*YN>elbAq4G5-r-STJ8(nA49C0XA~3s`7F6tO4W$*A9#Nkr1ilbC5L=3)a(( zemhiM(zd9dDj9sFe8K*TfZTBFR_&D}PHx>4mxS{&N^@f?BC=1q3PPxTmp+bixjAjV zCI!sc-Yj`S6{DM{zl%7OB?^)GQ%OSjT};JRd_V-NVr2rg8s;u(NrHrC)_HyI|Fi(k z?~Jl{$Nl)>4I@A7zz5v6uttO7jC~{XJ{E@{Zr!LsvtK)px8$Z$o)>7QT8f_%<|xVY zO5UAq+(b#(+X)r!iRxTDO7gs(T!LQD$480mv5c0jYvWP|W?w}7+y@LB5H2m5Hv~2p z$6VC5ldnJrjF7<{gdv48*bZ%2V^kWA?(^Q~iL2IA@C7BlvNuFjGIIZ}p&+JTo4U;BPY4{l}=PKI5ES%M9-HmK{-tt9j0l2_?5^WJ~g$l&1 z1F7;qckb!)^N`VElm*cf#6A_At3uADKxY*y&suBLpH64azy)<4Q84IU%U9zp%Ttjw zR0T&Ock-1K>=tygDr?7jzW)l?ZU3lOB19t>(XJcLSEoovs?G=r3*J};NrBBzVvdlA 
zjNTPr18j&N&MS8MXtQg7b}5}$)U4vhnL3IxLW^tYw0pEl1CnfOG@LLPA_8%zwO{(f8eyPniqlz)TcfN@e5 z*4||UlJ&>PN=)5bnfoDGhhjG(SC4GGU|)HYC0n&CDe+AnBxXi zLgT^~CFRzwhL9zFwUJ4i`IJ!q9u$wHq@+OHF;%|35KEJSjmW1UQ6_Q?Bcn@QfABnp z4ism6tvbkvl>=$5Mz*o}ruq5#2+I%na-6Yf901U9W}U&u#Z)_D{rpCTT_btchTRF6 zBS4B_U9OH(73qZ@tQDk&d%ak*~D3eY*k3DQ0N=^z8WiEgMhH%ht{!besiNN z&I`7clsma!+jurMj|kmcb)l9@r*44FSWRkfd1Q6qg@kj%t&3^r#!Ho1TTi!WZZQ@R z|M@(*BWvN)+@Q%dyRKVX&0{fdLi@~_8(p`09rgCES7+DBt586571VIV_nqFoN@ArB z5n-Kcr7wJmahw`dLY>Zt$_r=8s!yA1HKpdB%RpCviw6)A%P3-Wv@h@}yqwydWDu#k zoN`RfZ`<(#`IoDL(2s9ksNc2gp9U5R|Cx@R-*Uir1J8MKCiwSZB3M9m^q9yG$?$Sm zR@Dz5PA@5|!NAuQzY$a5O`=wQHf}zA2u@!cTyy|AV`$M zAI23|_Z6ox{%VcGYDw!Bb1M(lAijZ+Two4(U;Yio*hMs9Go&5;_A_>KAF&8v?bCD> zw~CD3pHF1n=%33J0%;C$1_BCg1VlzqPvT4`Q#XKCqgZYF#M&hqn9C9VeY!vP3;V_A zaxb37C2OvuQQrE3Lvc7c%`I72c0vDDt*Ok9Q9JXchLo*YjB*qWi#(H3oOD?Uffu}H zgj(Hetu=OQd_cIabln^FkaBc@knHA^mounm`y5UZ!i{g+ zm2j6NN+dncPwvksyO+e5No6v2A;8#py^4wo!h|rP2*0$)dG!zJ?J2)4LB=NIQRBPe z?jH&)4w0kpifbN$M-GQ3%(xKf^&$V%J3 zs)NyUZQP>66RpY!<5v%pyTz2o^#r%v=+S>Sc8hySUc4mvqT8>XdXCWHFh^^JyGGRg zm0d30WfPJmDaD1m*2<1w#W43(uSU%XsL>u=*`Cd7ULMtQmm+|Wj~Ak}A= zco|SBpzHDS_9lr+c=<#!8ooRG_EW4kMO+I1z|(V0E6VOq8{1-q^d`T2iRgA}!Ul&5 z%pEnK9YmD`R1a^!!fOk$hX zC@WX(D&yY?_pVa2CgmqQRxjmmT};{>No6xKTO)UE%-puMMpi!yUOSH*AtaLDlS@`72F+$N|DfK{Pf*lTH-CT5}Cc3x;5 z8F24ALd*o}Uykh7=kz0f7ai*kqM`3X>!;Uya4Ey`vueo$k>H?k%!r}ERT!`8H z=rI2gg|iRdN9p{FZ^D^E*7?z|@bQz?J5)3{VxOTuHlmUaF~ z1^eZsTKE29$T2j!s{Q+U+A!XywM<&VbKnN}<>liO+X-nFE;v68kkZtf#f=OOAQ^(*lAqab?zPZ}|cUWb+3E`o`) zByekX#>=p0eWXhk}Hpc4r^4;6F;kF-Kw=;xoM`VaVdtTjsr00YICR61@ zDfk%JcEn>to$B@zVZBrARTSt*K`?3Jj+vXE$DM_s9BqDQd;8og1!!~d$Vh{a^PIzP z41rkysjv@X_lpT#{DV6Pfm0|yGB6(jq&mbKP1Hd0|E~!J^Mj4&bo%r&SQx{bKc2pa zijgjJ7v;dhl*gI?1aFpx%_3Py6kgO$`!E@oclC7JeQrWqaC0dI@AwxyO znQXGuf}ru7$V}Oax|wn7|10at?)Cj$yB&RxtU5tB2gh*L(~{8w&GfD6k=>q(f9rM-uuVz_4D%b@*2mPInQ~{^L&=~ zvLMEvb)+VrcQIaMz5k6`K}(rKXS8Z+!p)R(1MTgThpk?*u6GPs);M+}X#Vhvp3uCk zIX`hOZY(!EmUHrMg6wk!5|96D5ie!9sgvyld*-NYcH;;&ovdPrgAObxdE3Lk4v zPhyM&V#NOd1Bk&F(}XrzU|o&C)`}6@2q=-e4k6$GbATx?xAHsGI8>ChklP?2!fxBu z(om{X4szblO@Z)j0CEL%@3S#$>W3qH?2*k8{RPsF#BN1A)+S#Z2@NMe9THL#y?s2u zpu-7L-pD5i&dZD@!+tVKhN047)8=3;Uv5y?N2OY7sacp>;;owFBQ8f@XVX@};6w zY_!cDw0XQA?sXp+4-DC`j=JkY_Q2HbR7lqHSv2$l{46Oc5!n-9w#?E)`R}Y1D3%fP zA+jnkGdQ>W!+F+&{NR|3=m=t)grWh8V;7LrN|#MmYk)FEB8e`v=^&s|flCyi2(%>6 zdETi~hmV&vO3+=z`smDhfh%22TgIQ38g}{q{6YU^sU0_X3Oxpiw=6FQy_S!kk8}v@ zH?Vv0DJdydy3xnrL1*ROru-#nq~{d&c%N*db!*AFp{k4c%cZau2m&PgJpkT5`90AXAt9x!aR zpn{<{o}7|`r8geNSfn$EI@6L1ousUcWCNHBkUZeDb{Zj5H+nFYBuWD)GP+jqmryo4&Wo#Dy??(}Zt@?|sZtBJb&uVx+BpGZK|w*#@g|=L4}N+zTES#@ z>ll&{Pwshs3xhiMy0woKLdY}QJsPq9Xfo<0JKrDi_J)|$dN;RQ^KHhV+TU^*twD@1 zaY})(`Q!kel!cGqtJ7>S>6A<`pD={RP7p?lv`3>l6g}vrrE{q`L#(w5GHtU=rA7aOTb=mVmBPVZ%LaSJqj#PvN8>A<*+Z#+ZV5bCc51=w}i*KOtfPxOP z4~PsB<<+MCO>o^qY>BA?SUxaxn65+_QHM^KlOY)-R$J4p>NR%QJP{S#Pcp&OM4Kgk zsqm?qB5kQ+2iTltSgTN~z-(hG7}K=M=rh_T(etbbN6vy*WI&!%dwYt%u%z=dV|&rUtRUCa34e0ME*Zc<{|?8kIU+Tl_qDEXkAIXgA6ptGD*Qpp?P zd$XkKd`6D_W^32op@y@6uW_V(y~TqiQ##Rc?F2HAi(M zP)IM;+s18&GdT&pYw3qMoqF>tPx>BT^ZK8|zo#hYy{T}BpcUR%@%uY=vL{doMZAGn zw}1ZQrkGai$7-6!!8FH#^Z&j$`-vnORh4i>{O9w@r~UH_d7JDH5ZSs>S@1Di?{$$odCMF2|-q5Q*PIe+0CRvY+g(B+i>RN=l})2l3Smz~`Q} zZoV!!4RP41Tm7}v)YPER!}Kq9P^WkO@q=*UoBFx>{?EF7C~}((ftpsCcwhgNh890mK!>*LXvf?`UG^o zj9(c)F)?xW(`A+dV4>-N^#J`3rY6GXEzMsZ_2&@G1ehHHW4j&_>tF#2oj@Fd0t+F7 z!Lbud2G zvXA$VFYfHIxnsj)f1n3Af*I}5=0*3g_O@VhzC@}dYm;j zI$P;vZQA=M&t1JXV;6i!1R3Xq^mTB%e(A16(?k@^BZz__vb&mZP&f3(44#Q&{kx1L zki1M6?r1Fp+Mv^`3=MN$W8)kOo2c|rt=mog=xjE;dq+c_1;sZ^n7qYC@8BGZY>SZU 
zfUZDyc!jw*l+IdPTOmvCgFcKidhjBmp3q+cq=Zn(?CtHrr;G*OL6DeM;+zQcG(qVm z9bg=z8#uPAz}~Ck3j$!?b?WKh3B($RN2sT8fRD!HLaX%rc@t1Q$fg64f1-T~4%)UP z4|OPtBZ9$I0Crz@stMkm%0wItgeeq$!oGLQ5=}2mg!7R{a49_A=_@L`;U<7@XON+s zeeb!}Eg|Z_@QNV%zr69V!)*7G$^BIMQ(Kr8F}6>Mv*a8*f+Vr8%O&bL~P?1@nTZeDgX0 z(j`4xBw*@JBM$IrNJvcRbu;nGs_Z_>-VD{G`=Tie?5Fm2J@2V_UN zbE@IX${^O?m#)pU@~6H|M@-!H8!w*uymuu3F}ZIrq(zOC*WSgSJar9P-ppq8hv_Vs zsS@3IqLxFD(vV0)(JlPZ5JqRV3{Di}AeSg8#j5{__wJplz(wc0q-F+uI9hG(>{jiv zv56{MOl6?=rz}acpfUgi#ZUPDWkQ$ZH-wbmHp?v*Ipvbyk^x;w3lv%O`~8(JnIH1@ zUcv#yiKSp}?dh4-%#5?dUE4jdg_EdY4#aB$PzEm_F}{B83Bue^t9I5DIc2kRr+m)2 z_Qb@*Ew~-x6oiQ^Ajl}v#G%Pz-Y8P6S067wud8Fb>-^Sw=ZTjUyz(+Kda}D) z22Y5b8Ut{`0tV~}hv_LcJ9exXY%kec5P%Inj@R70su0H4=4d+>XnuVO1`AxF9S1gg z4dL;)QaU}cm)D*-4aPY??W;vVK(Q|1L)&!SK4~9w&~wvUlB&HA>dF7V~+I=$Rw=Ema|z zxw(niwps4`Ur#8QvoHt>INJn(0vGYBi`E}ihB`7l-OKBi(CPH(Lbv=D@m3)NF6O-E zqUUYnP_^%6?r%B+GZ6om>l^DAggn2939m$ zFl5~>a+yX)cndV8yWaJb4f+N}wB~s6d8*|&NUG^n` z4~mww(i_}7vO&Mw;;&x&`t|Fmel=y~#n|_EntKgY`!ZPk=%No*=vK+}VJ-`OJ83SSJ8GAD)dG zRA$Ql7;rPH1nHO$W;p?1_P!4TmJt=rjj%O?{w>03vSLE994TX`D7HE}gj~^ysQM(P zwllKQiMJBVmVDeaG-c4^0TWElxNZs>W9X%cfqjMo`Zo8`5wh9$=k0SDWfQ}=m&L6H%1J@E|ov`Wt{d`~%obY@RFDP+aQ9%6s=MW+; z;Nk=;h8WHnz+RH&c|GeqiU!ikZ0|)1N3Iv@px1&oD0uKdajY0mZXOrVg*@n(^Mu!W zwUUMj!1uxHr8OziRs@XEVz}=vT%ZX24?W?+!!2j<)GrWA3yvTlZ0F^UMMu-^;w4{p z)0krNp227>7Inwz6ZlKgiw_IG9u6Cv44$4jH+HvYuxH$|^ayv|C=K#_Bxp|ezCNKD zHWLhLvgqWVbCm*^{e~%r1;;sn=n*;!BFnKS6IYf)?~M|)NV40mT?OepK}yE|F9BqS z#EJ?@l5+}pDg>3<*y!8-(!6dPo=55w0+nAteIarzr2O<$P>foC#>rCXA*eZ9wp_UQ z%1T}z{9Qb*;!^v-kh}?N1B4X?4<8=Hy|0*G_7kQAbpSY4$+h0gk^c)&Wte80Zp}M| zHA0#qm<>6oAE1*|IN9kOxspI2Fuf**I>gujlG%Uc9Qid?W%@mE->@O6_RKAb1oh^r|kUq|SKy8<()l9!Ge3bd2%`044~ zmbTrsi}1fu0U#y5V8f0mUlKTJafJ2OQb?Z+dV5R^&CR_z;e%f!UOg0QZSM$?-`gL| zk?c#`-&ZtIug!I6C~^Y}6B^cjQp1O<#5B=q!_z=8sK9;@bOnkKAQh2~1*K3@c?H3x znz?w|PH|(P@+Z}`?f&uzHen7Fj4YoVhOHTDyXgm^>;_M7;fL*ko*OuOu~LxRfdNp` znS?^cpV`?o(vkpI06c{=YhCXN=;N0^`JY?zg<+Fn92E3;n9gIiKnsYB905~iy&-$L z#DM65994j6jNzK{=FmKflnkC?q~b7|B<6*Lo@^SSNg-@Q6U`L%OMpHh!$J`NNW_Nr z_BhBTc7r{&*yt##ShsKAeyWoDgtc0E`2Z7T8=@)+6e+|kt~O{05w{NN>7fT{qmQu? z5cNR}2jV->)8zr;n6bxUtF1u!gsl`9MAGnGb|azL=G2YhoOAI;h6JT}QNg^*_3DM5 z6TzjSLtnmW?CFr|D}8}D2Gbb<^|eVyXEc{k@^zk*_`N0Qb1|ul#H{RXtP)Egjdz6P z?uQON8txWlFJhU$jud6IvGpbn%!KA}tGoGIPK5dnmnh9R=X4)p=#C$3JiXzjlj1)4 z60WJ9N8o(62m%xc+K6tR} zc})3RhYf$J|;6;Nvkq$<0GGdm_-J!Dk>`G#!g_9tLjC}Smk%c zGa^&Ep(tu$lo$kmyCh$4#=hQXI^v_TN2PysN@(58Q7`#*hBPj}(}8O%6TUe)WGc&O zTQDNFicy)qFOSPY!_q_g9YS)-z9}(ZT<=viXOR#==_+UawQb(H(=Dok)q}O%HN_=6 z^d+~}%Nl5Ga&=|Ty{DA6G;opZi)lYiiW2BHH!-#`j6q%@H(Sl-ixyLn2M;fFr4`wH zQyTJH!AT5*o*Qus5L6#bVl4xppc}yP<YqMDc$ONGL^_=lVv4OZJgUziiR%Z}4>MU%s)S|TAan6&RBglt<#|gPzcwT26?}M* zvHITCe^c83lga-3;@oKHUw#DIk?86@E?+JGbEB275kq~+Z(3SOT3s}zHtkCoqYp)% z*~$!H(uo;eKLhGOc#V@vtBvN*@EQJBJ7>5tG+0+gg1y`MLqM+XNzMFq0m~_TnQz)| KlC$B!ng0T^sCgj( literal 0 HcmV?d00001 diff --git a/doc/tutorials/image_classification/src/plot.png b/doc/tutorials/image_classification/src/plot.png new file mode 100644 index 0000000000000000000000000000000000000000..a31f99791c670e18bb8c62b7604ec8cb0284ffb4 GIT binary patch literal 31006 zcmb5W1zeQf*EY(dkBNW^p_Eg!Dh=}YZ^6#%C;Y58RqH9E`M-S!g!sq(! 
zRTZrFtCu%RgNU<|Qor3e&XT6gq8HDeR+tcSH#YdfOo4n`g=APxh-zp{!gQx5I#5Sj zT0c+}qaUh+m0%1EEgbhHzG?FM+Q~jgSJ!K{nrBOHo^)Sz5TX%sSSy`$Dj%d0)ERL8 z6TX;Ppvn;h>v_#mkLjj@Z-_YDn_(xfSw8=teo2>%S+D$bfl|n+;Acg}`0OmU1E*rR zzq_T4F`Mnc6)l$STqGxtr{prYdF$4*LDvK4kO}0?g?9E1FV)o}i)$V%<-CC(4yHg3h`52Prn z+gHWDCWf&WSMKMKQgKTTy6oUcsrlqzIdx}g7fnVwuJzjvCOhwJFboV9+nSC~P8v3Z za{1CHwkaF@J_Z9%+lNzbF7#u6|NgBx+S1bE8z<6|D5)@R_T_;m-jp>4xdxqSYE?q4MyyM#tHoob%*@Olo}O|b{zMfJ+IKE= zxr?)LcUJt*x7g~KT-(I~Ope~yj&zN5=?K2U>L5;yw2X{fRlk0Tv*;arDlLtE@gjPr z1IL;iecG{_+IDI_f3QPif92A#W5)^?Wh5p2%gbHA^18AzFg&iQsd1Q$uxR?azH7GD&1~LgIiR7;nFJiXSHG*{jQ9c;wV(|ZZn_V(hI@8wYH+o0Y@(D z8F@E%cSPivBCBL|`*HHkWra_lK2h>o#GOBXeyu~rv~baNe`^w-EXUOEwAv7QPu~~z ziz8=66BZ`HSt@Vf_=Y}LuSI0OC%5=#`YF0oXU>f28sGS@1w63_eRHt4*M|=v;~8Zl zQ$h8sDl7FC`U)up9ddT~JgY|bS5|$O144;q+y@$HMDB0{re5pM_ii&}e(ct^* zzs5Jc;W(eo`%kaEURp9^SIUTPH-uICfB9lo)1;}X=^q@d6v*%we5k2OAJJD9;uvxT zX(uNqZPULx&kCDb4o@0$?3O%_Jc&!P_(RM4&(9zzp4tmB}4AAgB%b>8UKo#`*m7ZDK|8y$`PIEt(eUuZ}uRk+}MLN$cs> z$8=&o{{1G3b%E@js%<6qmdF17J4v@G@-zuaKze#Qea*00lIy`frqm&qlE)?&bq(bh*cL{Ok>R{10mIc}~9*E#yi?8ApHS}x!nGt-TU{ksV8{O1h zoown1zO8l7al>eDJ>8YvLPK8Of52{(q^hba;Ql6nG?)F#*bPhZR!`2dBxJ>8u7BQ9 zaHvl03-IzP3p%VduT8hXbIx4T!LoY8ZzIh#i0(AN$8Q$*r}0`&TqF*1-ENCQM5Ta5 zlNXzUHpYJt^Q`*D6lwRZE8O><>41Q}z5VofkOs$!{oeNadb^xV&A5fyO6Pu%$^~*y zanOe2mitbTHj6^m4V}gV0`e{{8x|3k6(2}y>+0mp%*;M@UKMsJ>FeOPT?pR*kfmV9 z+`M-$r)x5T&)S#XxW6d3ujA1cB8V5aifyB0jplM)iEc9Md-ddE71Jyug`6=iE-nsB zKTb%LB;-6$K~GOl`^I5%4b)ave=Z+56P*b_yxi*__e^P0;^J-xWSQ7w6|jL>+1cxB zZ8CKRxIbxCpa~Mbva+%qTlT|tFJ8QGTRf&Ms&|a&`-|P(xpDR7RJQrv{7&5{=Y#-O z#T1Ql7xFT7unv5VzTDnQ9VegF)R%AHzIiy6x$HThWQ~lBkO{PcPh^HmD}oUc5&37k z(>Of!T^o8wHUo~0ZDR`*dcDuGDPT5K>zCKpw(0f)^HG04zfl@to4McwnJ5k{KBXmA zENC{gz~Mu34qrrL?EqNlO%@W>v>2$pJa{`U-NviNT^SEGoYw;k+LK}?8o~@`+LI}{je>VDo>ezIt7K=Kf{K6X zWb0RXkUK0_UCK2Z7GYP;dSp|qX55>HuOJgneRK3gpuUhQ)rV6!(h~t_BBB&5hgwk* z7+p{)$Mx$EPn|jyuWq?C^rNnEbbLG>T)n|a<)`s1Miv^uL)bWT0kgWA45v>N9Pm;E zTk$X+@alUiJmwVyb^Trf-yQyU&`S|k!k`K5;O5$|itxY4cj6MDgRkwcFy_c2;CPF( z;yu~{14mL{cPl1;?Al;o0%HVoM>+1uzP&-#p5 zgUDnxel=x|#Lj~*9l*ndQ9Ar4i0GBk@<CH|Cy;h`gn|r+1B( zmKKjwQ55}frX@|S*h8(@u#*G$k|YV`y)XIs`ETF8wdys_pI&r5D3*>AVn@yN6=p{| z%rK)`Kc1uW$f3I`3P!%t24<&vaNjDvN!j!v| zniLElN40{_L45G|#6${w%)`g01Gv>Xf`{8{(*|t`-rJjt>DNU++~nawP+b;06FxM}l9EM3WAH{4e6JD`5;D^o@A<^%DsyrF!ay&;1iswWmG>&C_2Ph9r+_6;fZpjwuoWViBja_*5_x%24x^XHg+lL+9<{ZaEE9-e+H zKN{z?=ZKf{jRRmoD=RBwvR-|1IoKI2PPZ;p1+P>Ho{b4rbg;i;y*BmSxF-h%QtbfN ztC---Ku>>@i%Vh9Y1JFlYl=n~;L}lNd3kwdb(Pg(@DToA**iLFSPXzaEM5LZ1z^E? 
zCo&=e_p;(c4VW!Bp2r>GT!w*+lEKkCi{%;J57n(F6nqF@fWN9iytS@f4nqZ3mrg z3+MEAHs-ehB)QB-DsK!w_ciE9O&+drH(VL3>B-UeE3%o#ydraBqVRtO?xpGomY!Xs zOf@+<8C;}lCRg=m+H?@C3aesTJHVl*yo#W5Hp^W}y?8MH*GCd<7CTD1x(v$}g5&7% zhMfzc=_-0)I=*qm3q{yULpBOe=}3X(?e*E&h2q5yrYNUxg9rPYlzdi6eAY8@dB(jd z;M&G*#2*YtBGg-Ov(J)YVfxSV=y+BnJtta8KeooA+UkrOQojGALN{M*u43CgbhVk!Xwg0K zx|vz~ryK}){PWl+(@Vn@Jw?`7ZlkW~W5g7CAkmf0&CaNU-ARk?HXzVx<$Lp{id;d7 zM~)t4KmkJ!+AT3FSt^?Oc}t2!nhBwPcY*Ph(oX|ev#pjQ;TFU~INeD5CxFob^qWUj zd|&1FGN1|z3l(z`!8`-(WfW5~n$~C2%Q-eez-@sa)6P_CPYd01_x=Y)gRA3RjDmDCD$cEHxb7)jV3M{v*aKIW116{0NzH5V$QPo!2^!Y9J=J0xqPH5%Y8#K;p%4Ub&skMb5U8_$#+wyaqoMYdc^srs1-s{j~`>xB32J z#Zt#j^rJ_Q3USKh1S*pfE>Rl7A|Rj&L225^=qQ)f)U&<8gS{w|lI1608HUw9)QAQG z!|Wq_`y zxN#%Lc2T`_XQ4RAIe5Bsb3iWNqeWy;YLq*HaARxBJNmB{w%e z!zpEoSWY(LeW>}{K>~=6zkT~QjkB+_k-mq7#XF~*bP}XgxpQZ%0EiIqK>E!MAS!fDx_UIfM#e3bj`;GU4}Q0DD+L zDay*q41iotG(|18loC_0eZd=)jzvvPP021I&;p+?skJcw|4CUm+-nNwwQSGPZ@Zxd z6UGmS0%}GV0T?2*uOKdEZBZvgWY(8KBFA!4+IXNu+1%XRHx9yyn}ULBZ3$9sQ!TOD zx$0$hy)mNhcglkT5wC%l0&dBG0vN-NMG(ca?%L+w31H4i!{myCnpmCa5P5f;q!j`X zY+M{Y!roPRls*DKcwl`2h#;5&xO&TgQu63WpuF;ul9Jk{u|_fAvP!{wvvI3;eJ;{% zqWKJLYWNv!5Tc`M!#0p}l23}rWg@R=Nixm&|DLc26i2F9^uyRjWkjp=0r;teAw3%vth*%vG&~tKfQVTgLnU-xOfDu}+jG++K zWEiL;#D^0K3IDe=Mf|{EmbOH$QFqqwt}oX`ttv=(rUDezuOr7ly_P8WUVV?aZ_(IMq`j(Q@g^ zYR=;|1Apcmh&EI_DcD=VIUpQk^Uun79XSO>LRXg(N*2u9*Uyjq^5rB*&=jRh$jB&y zzx1pRiHuBxLr~rLD{X4Z&Uvk6Ez9T;e&1%k>7aUMW@gc9Bfmnvi5eoZ zY~3a_8Xc%^H|!?cIq{GUQD+u8z4euqR?sjm{Z@LZNCD+5SFU(``qTnP&gHPCw`on` zS+zHr4`%ikl0bm4k!AR?vvhEbqww^G@ma z$QtDBKb*#0wTH=GNW*uBi}>`vq(YluPK3x4)&DYs;~qsA$z0)VVObp z{eSiW5b!_Zxa(tL*X`*<2vUk>(~1R(B2Q$(s_N_N+K5+IR@zG)Z6SB62Z5N!$B)gm z0jyj)^_PLldsFjwj6UWu=}Q{iTd?(|hn!rCAa~DE7%N#f+FbOi=KOqLp@~Wxg6OpZ#2$+y5ITt4Tzk7b0|SGajZcL;85WX5lz? zhj1i*z;&jbd3~;1j+)=5oi)8g_Ls3ZI_Moi_fhs&F>S4_7tfx3W>jLv#?Fqbbj;4m zYR@;-c>Lta45Z5o-x{AM*h~FrB>JD6vTKz_&ryYca*jEAEjK_O@OQJ6)Lrxvhw7K0 zpiYKK&R67zsu)lm&=DPdOzVeiQBhGlO<(E6)}|Aq zYCAg>A(6o59LsU2PpqF=aX=a{**a30%MdV#`-XBGV@67w1XjIhTz8jmXmJEd09BIr zW0YE(%Wh)1apPP2y5i!lY+{y+{K39PQS~95FHR$o{dWaSiCrc@y6UUc9PXBt5pzAD zfccO-ii6M&E0-v}usmKD#LC7d383m52ON@tiHQ|z5l1R`vF(KvI0BIi^Cp zbFuAPY~Y+p`ez}B`Zjhr`si@A6ukoLS+p%h>P?#b6K7}ylyI2bR-jv0oBo`kkK`YHEJNar6fR0WSq!7w)oA%^0 z&3sv-W@f>Gs!yV6g)cqU4+-&!vRT@u|3b)F#*2r^o%9gqUBCV#c8iooU~0LC0zsJ9 z`u2S^&$`*d3DgJ!SYRtBQZ1+VMF^v)Vbmo;Iwb)~ryTlf?Sf{9VG$Pw)4yS= z&QNoa*(ys+Q&ban%o`pIVeU%nl`82sG$s%Lo*P!iA@IS$1HP(kLHF}y@Z$N+tb z5y5_rAXw5B0HBT~3xcgk{(kX~Z$X}J+wlfONF53lklhq}XnLn$&K^Ai0@5Ln)xcPc zk0HKv)#845Kq)n>N>%jNDV$C5T;SV`P`?UgWCJ>y!Ob({^Xy?)eGffcHyj0~v~*mX zIVt1NOsnq4;OivR4|iV!BCG3FL0=*`(YpY?x%-ScI=ZXZC_suKFt}&VMmgfpmHmd1 zy($!950w}QXzz*Own>m1VSCI7%Fo#pt`Pq;u;C{78%a2Mcy)a#{tR>6uB(d;PS{JSc?{{J4!%kXAQaUOjQE2qSHGA+daema~^ae)`LmWy~ncmVZ5 z->Msiqy*_1!Rm3`VFrI67biHgw%b5k=*z#HEzD|VKy`Nud_;%_1rUw0>(?#R&-}wT zDwJZl?kjCgw$c;4VSQZv|4U_+rhjMHoi+su2SD=>kOeM@2Ute}R+|3BOXt6poFN=i zw_U*aNJ)tRlA8ij7w>?pYHa;=Y5Mn>1 zk_8o4&wR^?`sY?wR$nurRLIK0B7U&d#XQ z?F;sMdd=736B1aMnUmXh4tO%eecPt}>7gvKm^Hg|{KV|;blb}6I)M-dKZQ_gpwtm6 ztjb0ux4ag*v#Vq6p|suF)|UIS;y4M^bRoQgO|@$;u9BSkdY4ls*GY|VDDYN2DG3Rr zVw1W2GZS)DpC`7!Ym-BTRK>JZ0ZItl5LH7#+p&Ac>D@a9)I*`de@3oO|HU1K>k6(L z_oY{6qxd{gFr5&V1DSqjC^}?ea7bP~&vekGduLZfgdUYUM;M1ljpX!cSsfi6~2V-*dx(3m6pC32=`+DfFS50NE5jN2jfo@zcb_mbA(N zU%$qTjlDuL!6yAhzVuL>2Ty*JjV<+exH}nbN0abDs>|MXK+#63a&~*3vC3YBN#`_L zF!y#k^&22PLzJHI-oPJuCjAcGu@dR%ke;K*$!Jp1IL!2b<03xHv@99wsCs<`=1BDw zxX*%6EZ6PZ&mn)%4uy?{vQ3C(AaVQ^@~+8{Wi%MC4R9ECyM&~(YpScoWQ4ZHv^(>a z*j7kL(XZLX5@mFQma$-(-;-+P_5U}bqr_(q9=t}fuMn?EBRvBK1}Ts(Z--hsICC{5 zw=w^Nlm1a*ZAY@)WL=gui14yfhzOP*sT%)ydn{hv@5`6y_wV0-BZm4e90tN=l!MrQ 
z^4wSIZd|SXT@YO&;6Ar8l6VD%MmsRCepL);;Gut%*JtVb$P=EuMqTE}*>-#X+gI%} zM*kc&hZ|Q3ACdfy&Ehd*U_)bfD}MTG38uH@oPbrzJAO#b5kFR~ zzEyPRuq5>Qr6|@lf^VgfKQUW3d)(|kt0O^2a4%o3)Yg2dt5QM2j}(4bRym(_ z+}u5staxpq_y7q!W${JED+RB#H9ht9^&@yPe2BTIVFX{1*+dST1q%a1Hj*Jj0_`r) z0t!Yr;or{)5c=AnF(W2-38zFNN&q+bxgds%P15(`^WKLDJ4u@GqQY>ZF%`R(hck!6 z+hpc@vU=uEZxlW_{74Cl<59(0&CK7zY?9K3c|NpF{%zB~c!T6QV~~0Dk;{3lEac(W zhGy&iD~4NZ{Gh!8;WF}PMcoN!*i`qWk{D@zO1Z1$-jQ*O89Stqh9KYTnY!i}e7ZFC zczi@)6@g7AHJ+iy>{a}<+8;x?xt}DE5mCheQnyBQGI;@Oq|pDF$CIO^x^Pl0PbO2l4vRp)fGubPBH1K5?e&HsXID zSj)4E7?u{?$JYoaygSi|-awq^v%gXv!yBkri$_a*7P{2BRbcNAjwMmVY_qXh{P2re{ z##4LVcc>f)i0|{IQ`5T09?s2+<&NQ+nYbU3*V(qLaF9HQU`ceiLRd2kO25-OaKq%; zhO$o?2sTMNML`eOotZyoWiYw3sCc+wA9!fon(&Df6Q`wdVf0ALaIwl-Cywh zQsc5X#9^;zzG2)t5~M_}&<}m%EUSGKR!ophtHT`LKBV#qxq0-=qf;%1T3;)Xdm4DK zR7?vA1L_I^K2p35$T1Ab=GjHHt7!z&j;2huiOJP(d!danAcPAkU+=heK;1y^&(iQ| zGO{4(q=3XVZfkYY(P|NKCXm-kfocd8zT{$`cq2hMHS46tD33AaH@QL=ap6N1{p0DFT{CH(o=DDjAA)3*Rw%&Cw6V zrKA*~S0Q=xfIEW%X>XwiKG)x7tQr^;q_7_PL_%4=w%jbHZvu4W!T9q1oCPS3eEs^h z>Q`)RECW(PW?mhk;x_t1k0e>^Hv<#&cXWK??h&$SDaSsG8bXou$dMx+-j{qJ4;-LB z4B@ms+TR?XbQCI`osD9i6eW{| zIx|pnL&IbcmdhtN3Ytqa<-wpbMkFecKLI(8|6~(Z*QO7k#*hrzY^2KuovIWAbkwfs z*k^p&g;XaMlD(|lId(8vq9g|aA;D{mj8DLGGjo?gUL7S1nRHHmF>vD!Kx=^JPD2yR zGiYrS7?^^Bo9D4n=(VY;uGUy_P!H2_W%!p#Q@yW`gP{rA&u*ZWwtV(Db< zx{NzGIE~j5#qupkpX36Y(sKb)98%70lM%)t^)ZmK0u;>A>FT(6iJ%#9oaRw2p~1mP zkWpcQRteM6^}x8d?rBi%O#+HLh1Au4ygOkH%@a3P2W@(dAwAEW&)E^Jt%cuz8Bw-9 zbCwXE48eQ;HiryXSy(l6B|V$rf?T%v)2BD@P>_=|K=b58gPJ#ngfi)gOA)uz&McQ2 zAy1wzDW-k}h5L9?Syeh>dp*h5uSsFU22R3^JjGPcxH)<#y|(!Si5NY~r%#oArdP2z zh#gyxLwu4~inE^a#R)U_jt0*1RA*Mj8P8<7e$K}*_yJzgSQKme^rMlGkO4I_PQ&$) zGLy^%)Z)@)zGnmdyAJ3hGVWW6FVBz`GmZQau9h*C` zTwbG@sW(*{aN4Bxk;+6p^YQ@kWTU>#=&$YJk*{iq`l-S&aoOE!TGnrlrX*Gz_&gr< zPU^OV8gQiMWCGCZe-PSpkKDR@83 z%A1baaNW$xqnT{1)+M0xTSUr0Wk#lQ`HH4I$oM8q7+z-O2 zyuLJ;1krCnbhQ1)j-esKvF1b;X{vJUM^b8kcwfbAXVCq8I!>87s77ypr6bRCg>?sC zgH>1ie8rUSXINOQ^lRbvmrle}^To9}Zd@$n-urznqNh|PUot%Db&>UYU1C(={l)20 z$MsPa)9D}uEQ`@1OmYjQN6nX$d*=?v6kf61_A^(>Jh$?^4fUVh)f=5OO4(1clrHw; z!a~<(6b!BjIu5-C?pNkao^1|epWZ>~8TZ_K6#FE%&Fjh_6u`uM$L1H)kMj(szBBBk zDBSoS%9R415<+LL1-wKuzs=3ju1twWN1Fo|)G=}-?I(X#?mZe6%C}Cctvfj#qPyQ4 z+mqyIOTE1!!Anhw@~xY0e*--lNn&K1lN4Bm9mVvrv2cQ;Nh4`{-?6=JIP+D^q=T5# zNY-M77bR;;^pnxUnqV&VZj^0=eQJ2xjDn|i2`)^wa2RzaO<0WZ6b~v;T>seObn=& z{^HrC{K1)umlYRVV&0h?^bUHcYG5B8@HjvbJ;0E#6f9T5e1y z{;saPS%A6_LCE$!dntJdW7xhV8zppGT(XZHm14QK?lIG+Y}*i)IP}Y#(_UtjM(`p; zb$KNlp^m9b*HgbI-E^z0oRYl4in~(nqpz+S-WVUJI?Mz6x<%IJhDJee_C!w+*JF3m z_!`YzmSIoMc896v^Xd6YI1g%mH2}xnJ8u@1i^fR23+Jr*Tc=CHN`WedV9&lNhu= z^}X1dqyIzy$R(BI0t6nu6wWM)rT3=9qp5gkpdT)M<$gh4*`_-6&H{f#cX_#zUVwmW z1eT6RvD$U_`Spg-=|7)^Sq^@>vF@CQ6JdPW)6DncfhDv8Xq2wC>h!xFFj??%(yE5_ zTL;bU?~8}>$S5Z#FKoPi?Xdo8b93|0-)}DT=GT1C)Ldd>vUYarfLU+a72npf(%i9e z<~0s=oa@>gG;GlNJUuArFclws*_P@%tW0rlW1azZj;EhcODi}df{l}oxPdRSrfvvX z-P2Cb?VdNdtNDH=*BC<75gOBWTv(CNx_?;+uN*%GXGc$s-}eP5#ae)1-?f&c z=d~!A&YCD!8~2XmJpFOmCrEG3j@5WaP+)s6bV{d`w3O>T=r zQTqAEsW8a|;;}>J#FTwb**%a^>j?q%8E`ZIAGfnS)sk zmBW~G7ss$%hWkm45wY$0i#ME?tHxz6+{8BPG+gXSRc^7S;*y=5Zd>n9RTh^na&m;F zzV1GYIdG4PQloXbuSCuFNN=RFNr107SD+qrY&niPEMsk&XkM!u!h#ZfRGpUeNk2ZV19~;1*D2R>>y> zdk?rC>6LnIt?n4&Qyg=(s*i@=8iq!N^#~u@^nz7+(NJ!#S9((W`;OZ22?kAKvt+E)0Y#?J8^X|Ps-QItiMQn0PHcz=Oj-1 zEC{}uW;UtN($8vXsQa`s-F?O<-MWoakTE5{l@3=$B zv%q-l#16CtaI(DG|B@iZP-BMg0N2NyB5{d#b;ZhVZ<`Q@C+WaD#ugWGz)!*koNe6) zTwQewn`rwW+|Z(S42bfJ1>%r|ANsLEM%Ed_$->wSNV$3=NBs?g6TiRALP#UlQ-0vJ zvKj#mN-)RfjK=99aQLC;&OtJq$#q|uBT>$}H|M!k5Qp}Hp&&T6z&^{*7TH-WdhT$f zQzvnX-@zOf`dlFHdv>tD2t-=Z|4W8s#Rq344t(dA;9!Zo6Jv4l%)(5QDU+O@b9Z&u 
z9Nb9(xa6R%9AuWIB%i3j`q+9tWsF_L>N;J;C|8-?$f^1LCDDk1>_mY)U}Y)SMP|uU z6zaql%MbXeg~HQx8bkv8?>*TZ=*65mRR@=Y#L^8GhR*wxV!BF;(cZMK3YJ0mOLB=U zTRV%nt9zRgB@>k!^QTSvlm`C#%VVMSF^9Woxy!1z*lObbnx74uI$QEkDiSP9T+dN= z%QK#3W;gZ~+P|+p--B=4-P(iuLxSsz2a)Qof~e$V#f{koI2UV&L7_rbj!h-BL5i%e zZw~A2=hE)6C1oPX(nf=6?CE7Gln)=q@7D@j@7TBv+Qrf{Cz)$Of?}=U@lc(9n>_0c z5nroFq>?87K(ADsyVv#LTu0JyymsAjtW;PLw2zUeC@%7H8LWN#7ONqTMO&_IO`Z@4 zy-Uyd#9mpsRuX0Y8KG6EV?WYzH1Hbi;20s*4u8>tyBTk_G}}i*bgeXdxg&(FQ@Yln zcw3!sEcb_m=P8R`gkF2B`v#i^M^Kyc45sFis9n&NyJO?C5=iRK|6?mrMLPz!wk} zk~}%n*(q$;(T>Hb%r*yUq;#e#8}wx-#vL5jMIr2IB`Z51#P{1=TVg-tf(-aI--C@$O-uA=GiZNOWo0?~!1=mpO0i|zdSlH?{T({-;HwPyt`0hFV zxkExzB0t$Q$$ULV>eJ?NE(3Z`d`jES26Kv~E~+&KqD-KW78B$8O)KLw%jvG<#ZBW@ zwVI`CuI_Qbt?%w^vI;Ya<-+BYfR@3YQm4W!nJDxmQJNU}d$B}L zDA!=Rp0iwUktr(afXei@L3Rx%o7V;)*s~iVvs@!UdH5Im=&<;2Vl>LQH%C>H(CkqX z>V~Tu=oj+-((?oCD88WZ)HcV>lse)4>*_JQc0b21ld{jkb0L^7w=42n zv5mE&boutqAbD$QNMn=`PUe?SfNewQ>5>J$&0J^^hHHN_*`F>1*zXpU1@kFz)6$OX z=&($$cjC=Q9vMS>=a$?1Bx|Pr}1@-w#D+T+s!irmH3iBdZv z**fI&a;0gOy2b~ao44X9lxmfAN?pP$~dk2ovL`E?Fwn|=-c~c}DtzzcsMUqRb*eJp6Bp0Vu zDeFifHJm#s-Zoo@d8hu$-qJ*v0gsMXi9!1p`YpE)gytUcSLS;g=hj*DAWh)m?G2e8 z-=4h$2fbHhLIom=<>hkjq~4va@xq3cAIG!0*?O|Hy?Szz;o4l$wnes1k@+99ab4Zn zsRSv9rNk(D)FR2{Xds2GOoD3Mrl#vo`}S(WuRcP8I~=YbfhQbKGe7k{D zO?Kx%2eHbk+vFj;uhL9j$^{qw&hTqzQV#lUji0H)lV8WYY0JftZNo)@*iara{NASP z4foigPdlN|&+RkamaE-(6~Xey*3%SOpoBDmz5oi;l*DnXsQAY5ka9 z-T@urB)gmC_{H4LzBdZKPVe5{8;kwyZ$9>MU>+~QtFOPUzpVM#cGjlmI}7>lw$Sbe zDu2g?aFeJz<5}?e9^pUHuQHQ%?l%3Nm`5lTK2DJnu~!3Ly)XP3;zc@wI0)1nZnKE1 zOg5z>`+2Z+9X9iU#L)2161T&sH*I?yDns2lH(lKk7ge^i5UlR%jNK^;`{gb7G+U=u z$nr^xT5(ae>7P;B^`XXyZRj4X;iQkNw4TW|Or-iiQvB-i?bOG_d*<^PBv>~EWDwPG zj-Tb+2nvkre*ITUZypYFhH|WB(+e6fFZfW)dA>Q?uNA>469`7y@L5213@I^nS|MTj0!wd=40qK|eqDEHSpYlR%q+{L9&3rhSOp?|mqqO>>z zM_#U3JnPElbKAa{@^FD<2>MlD^>N;lw?p75H~7&<+to+s(MHruJ8_y+@qzIztK z3N5g?>orWG_?&51)_ASzUo3)qZ=orBmd3MalyuL&lLLS9OzIcsc742fERSgiP*s<1 zF1RKea>Nh?=j-9U<>rP;=~67@qb8kKE>i{L3#c1z9w2A+Q1dgN3-ZK|A5ex zf)ljzkK9RnvYiTd?m~@bak9RJl^vt!;!e74{^RZGUa=UpgRRfprAu$6je60;hJO*g zuzKIYLVv$Yu{D2Hd7JEP?QhVq|3Xm1?{>qTADN@F`{$y_!MMc{DE$T zuUVA>+MxHhnu>&k)Qz8bHLskb%bBjaRF^X#7W+j1Wxh#~*+RHj=cP!;u#MSt?Hx^s zUC(k}N)j=^uYAQD+}75^ox6O~vjLqp|JPtsNNfXVX&gc&gy|U#HlfEr`CuQ?P>5f` zZ048igHM?%?#>Q??AZcTN?X*5evez!Kp7KT{W-{0N$Hbr=YEGzmFGHmjNjNNB4c5-G9wmBad=7&>3r;9Hnlj-GjslxIbnK|&WsDj}P+s|} zk4PAgB505J&bp%W{-03;9*3N)rtp@X%|1HNQwO!w$PamXF++?}{`ZZ7Q`ObCM?<+h zO2z}vP@m@d=E-qwiK8XyNQJ{8vxCB)8a=*t~yx z0mx|Trx_$3#D5z;LHosS_{hY@Mot7b$pvC!mxT`RUU#zD1X8(VdHLCah>Djl*Y?*H z)qZZ<&j0)wyk5&Xx4&!oR<~iK3&+tM4o;V_2W5 zWwbYRv1D{nvhlyO!q5hp^y$GU}0q zcnK?wQc76o(6AR7o$26l8pd`BOs=Lv5XTul2YK7nS;vi`?u_e=c&w$?V43T|@F@^4 zCoc*^QeP3L5W9G5?PqQGR;Cu_v(qMZJ}J#Bkc^3CTA^?`PYUzJzkNlH$YzC5~_u2z@-UDRBuxS(0Ljgf-exOQde1x6+!MP;0g{ra0!|BLrF7iS#| z+|G)eeEx+Vcl;Opih{Pp(RJz%>Pr0l!WMSl-PUJoq-7H4*QWi6Prld@qm8-4ozV2l zd+U71t>t{ZGiW<9JZ3{3w~%r6fA z-GAdYPXE*?w*eO6>{qA8RdOE8&r8LgqkHP@Sy5`D#&Y7sEOXDus!{vLPS+jp@%i3S zd=4`s1w~R_&Dd4yjgAc^FbQ&%@jl5rYbBk41SCWlgyN4CAsZ$=MH_%Q_0!NRZ`HshvOGrrU z?(V{!S6UC}A-KW(!q8AK|7H9sJ?`OKn@L+fBI%SBb(vc9pfnc8%jj@v0Zjf9%Wo zWK8#DYG8G~pI>8fw(imVJi8$7@D;53YNZZb@LMsEIp9vh(X&^t*2Db4w_YBB`yM}k zPqjqkccyD3YF2p^b#ila%Us#0t7}d(S9j$~wH zq_jfya2@5O%Nv?#W|XT8Mk1^vvI77ouClTjdUMUzmVaRN+m2YKmPxh`4A7%wurTei zQk6J{j*gCbvs3($t744EF(SC@yVf<|P+z~axtRzB%=^5&F*n7{Ni|r;7;v_Wfn{;{ zuJ((>xuGg2$YgPbis4v=HruBRKjA_?sI*^*D&OQJVoe_3lu(9H@{Gw}Tdno=^^KI0 zj}OM|EhVqCcD0>8m^C(r8m)s*+}K#dU#z7BdB&1EZV+i{jULgq7fe`rfiKrJGJ20O zes180-dP!6o^Jl@v%~rtXv5ZyDl*{xP1Q`AN((cyYg}Bp5A4-LFgO?P=K&V+_Gujp 
zSVb)(orJ|(;{M)YRKIj|?!0@AZ1^&47z(=ea4+h#zu2QkM9<>I19UGaTo>A1syKo& zj}mg8YD;XgMVDwk}8e=|C%FXJ@*h5nUN>+m>bdUcP}zjms5v zCrwq#d<(76XV0FEW*y9Ud9<;yv3+MY(8R*x4)i7^;SKuw`e+KvN6s(>^I9@OLo>T{ z0cUD!cupWXfZt~9cmANS(<}ve1v7_na%Xq?_h&^eresK0KoNav<@;p3k`~5sedaOT z^_uF<=On(wK`$T>zB%X`1=odcLEP1U6$imbER=ekwm-Y;!%cZ!>zRv*(h(!jeK$5X zX12Yi55L5tYHw#Vs$aQuh+d!Fxu+rcVfy+ks>JT=K}T9cgyrNKwJ!+i%(|H!wpeIMVWZnX_tKtKE6=6Y9kAGU${*Nyq?6yz$@CZ zil5~b3^y=0} z8^|8pL1jwx z&*tUJSGunzFB z>GjEFMxLkBEUJ}BwqkezGXY-kz&FR^+pX-(>t;Xy{t(IU zo{MGm4CU5_R1TYQZ(w>vc-MWsv)xw?c1X43<70NUHhjjW^el2YtGlM>yv}f>s#JsD zb9TS+D0!o4di%W{8kv)nl+5&mL@SGj$ozamSowi;>{q(NENfQqyzQq+#^ z6VG;LCn$0zq~M)YVvg`vdkr`7S}F}KHl2}?(NRoq>?UXZ-B8oAnS4>g=E-r}#)#3@ z1Sw>nI(7~#miZ?2%j@GqIr_(ZmtiF|eI?0>dY;nHFW&LMdAChnwkZ+3J)u=|PI)jopmzaQx*%AVPk`BtL+!v`+J8w8pk%gEmG zJ#_)I)FLLeM(ttuuqr~Rp{c*f{9s0*Rz6vz|KYm=+jW*n*ZnI}vgd{7d*w>S$T(8f zB*rEhl+6I14XxF(wo~6b$uA?5Hs4dVU*orI)E5lp&Qk)C%^hnqZlBza0fJ=M zm@}J|*GQ42EBnQCrFoO zU;jlvSSFQE&YIMbYxrhQCqDkAjyG^&$y9Xg%eLr;;mUbl7E+-@o$0i?LF}U)F`~iBDspA^mfJ=>f3uMH zKJgk_^JM^)f3wmMN`7A&j;!)3|C6~-7ou<8yUe$iX|6t498~=EtNeW56~jxHq}ef* z9c03PKaM#_Q}bWn&Llp>|%o`RNkph*IGlZ5G8}S&|{u9Q}I|pJ~E1 zvX9eUxIpQB73H}+YCZoo@XL2qVd1&j!1!4Y^7{%n@icVZdZ}Y3o80kyOzyKQf<_?)Pt7+wU@BjHaX|ePB zoJDr_vO&*9yFY4$MaES5pRZo$;tpJ^eDr5Fs^6f$cyzN+U%^Ev=#9&5;2-cI=hT2s%-^Op9r-tRY@1tRTbWc?SJe@ zn}Q$OIPftB2GWP~SL5eQQn`clMNJ(N8OL{eEj_4 z4IbkW_;oQ!tn$%nN|ytDeB2hqo?l!D=XWo1vag??^KjaAt;HLlip-Bziw8tH(0-c| zzo7wS@qxW#Q=F-WaDt?M{Nl4pVz#>sZjM2mqv+zi1={uSRJN?v)(k7NW_&SV;B$W4 zZ+N|CdIodnMfLMHZe(n(XbufmbQKjJHFF*m0z0%iXiliC%z&YIb;7o|JUrb9$L(XIG|D57Hj z_w;mMsZ&USA5)?A5d6L$&=~)e2>#J$F+VFRr%h3H^K@ceLKgH8K~w@Y=0A7N->~m8 zm8~tHm)O!=_q{)cNX28d6RWSTF zw;-Qo$~YGw=$utmqi0=>c(OY^YxER+rOcA#=VBAdMcoY4X&%l^uz|zZp=~(tw+#p{_hxB?$BF&rq8`*!a%W0JBEo%FK#H1(k49 ziEWU?T}}eWQ!dem4@(BKCEB)Thf2zN=E9`YyYp#_C@AXU_?joM2)zX@J;9qzL*@ag z0hfGwbwsw>FvmM|ag#?jadBWTMq^{?-M=}bo10fDcq0YAbI7E~(VCF+)WvlQVnkr$ z6y8ZQ><8DoQaj1aK;)a43&9e6yvRYk>wA&Lz&_VSawJ=uOMSf>%M}9VPd}K9g|*m; ztidyLDr<)N!uv|NG;&PDC2^x%&LZYUi}TDasqOYxg)W-fEH_$+jK;>TU8m+2=57}1 zVBlAs)}&m&`?$Dp{wzJc@%+z&xa(r7^c7+s$IEs$4_I1VFGj^C6TD>^wt8qH3k*FV zhrY7EMs2b4e&Ar`B}!2O){dCRGdM2T*!Q143UqPZYfGkm@K*QTXL#NNj~>BPn3s6S zE+@;CZ$&?Nu$rLwfey@CEv0N zRi_DKNL}m_!V^wgA2r>9C;)NAGrI8U@a@~MRGpn!@l%WCiBTyF-=B1mhzPzpu)mIk z+^d16aNEKu;#^9Zo*e99Lxi-i{DRWN)1<643ov2Uzmtxcnyc|pS1u0ht8!p=JUeED zDR3Fjk+gkn^)V|a#@mC%_??q*dNiwVoI$E*C;6$irZ5qb_V*tyHC%@XK`)nry$2mJ za7{3dXNFFEjvY`&zGX_j0{`auV5%c!;XTMbHp;g z@Xx|K#{4EWUiv)LdYSo|t!ItIXLkl@(JhQjgRK7!77vPFd<7%lnuB-b@0vauz~ zjus(#g#XEb53l*>I&eDy0W>rm#0O>8OU2fVoEL}g3bDMs#{CQoMTYf%mU>Gq_n8b$ zP01J(W+FOb&C`|fMPy3yOetwyPeWFBtt>1pN+6gjc62}v5G|a*GbQE50@1*1UXsXd%oKFNKBQNr5;u7hy9QG7UCSOTW*aR4mNZ*digsZcyrs} z!gvowu>?8B@<8y4Q7!)pt5LZmeQ6{iE5fIS1-?5xwH8iOsi|Z!JB(|gWaRCskT(9( z!$C_O82GBHa_OzfM?Yz^$B;f+I!*9FEKd|_jpRBxez&q781S37Xnj4op?+&IhWiC2 z_vh$r=E@4kB-EQvggc92R%bkVKjjmk;aYxWgy_Iw57v5@LJMRa!Ui}elslWl@?v;7 z<&oLVw7zRV{M6VNjK40PTe!W_dFw-$8&o^q&aX-;iesCeZ(*!kFZ1oO!ZIm(W$~qh z(B*aO6}HVTDyX};d5NK5$RW`WsyI_4^G!74=8J)+56OV`{M?K2Pghv;^QNmgCvz?P z=p?hV^BCkjDXLC|R9r8-np@7#$i8s|G;Nxxy6zEON)o+<=ZIo~AT4E6Wa7XECidQf z&6b*4tlaF94tu^AmTj%4$Aqe)g{2R_JprsD%SL-VAPd5x$6Bh& zoLzbBo0{~}(q&C`pyHf|TSW!!>*#2&!33X^NZ;aOONG*rmA8-a3maOblf~U{m z6IyR*DObhC|F*7e8v5Gu=g%(eE$M%=L*haYhjH%C?t|HN4fWGgD={7Ou^``>;_2E_ zc*?H@&%z_+Qu$Wx+uG+{Cs>uxv1dre2Lech9 zUIslH_wcYfq|~idXQDzc(0xcAP4y;Rq1suzf>+fmXuIOM05~Xr${^dfZ!4V{eTBBy zMwNNE-oBSRbei)`yt%d08Ka!*2EakTENUe7Yqs@F9~J29#S-Y~ily;IM- z5+FH`dQtc#;#FlTX$cgj7~L#R56zvuWD_}z%aW^K%&!jH!Rk@G6dt->XkH$HfYH~` z?_o)FzKQB%xzD3Di?S@V&yMwN{7o4ZtErp@a?n)Z}op 
z7oODIJgVGJEK>=#T3L=-W~Jg4JD~aefqKc0xetXR6O+@XRSFM3v-OUub-&jmvb*>j z!`*VA@RP2&yZeG?zosspKhtS;`KM&wFWGgi-}`+h&F$?9p(YbfHlJ<>L(+(N#a(hw zDH~Bwi;I!M%kj2>ikZ48X)E*$33N1JIZ!?O1WeXH{nmGCJ)geao{K!b@iN@v8wOr( zZtm zX}p{}l$;k88n*K|npv(PR6K$#L#{EvVUN#hR65=z_-IEuS67VW4#(~Te*W*s)L$PT z9IC9K1m0PHt36U55xRwxbD17@-6bLUsF=;;m6k?6>Eiem-bhSHh^PN2xAw=6fr)f= zjcMj916OWc6*&AsrR6&AX}=S=wbjT=r_>Qh%uLMg?4TCO%-b5xzIAxW)l;;_w!ey% zVq?td9G?nR4JgKn^%z!5O8Nsihi7MzQhy*T>1*`uodGp#yOXuD!{dc`D}Hvl*3+ZL zqrXXOM#B~!O-;X{FHZ;7AkAF~)3}tCSK$u#*RMmhj;+9?B77h_YV+)X{O{NiQC60u zkB^x4WaUbM>4v+za5rrlF;X40n}GU+nftYVy+PE+-L;j*Tgv8lrzyxdX$sJUfNg!GU!0w7I^1I3 zj>y&=kA0+}NO&_h7z2;3)aeMdp1^{Kr{;2T_L@P_>K?tA@Xmstw~yHL!Q^DJ>eI!V z8^~kD+{$SVS4wmX4*toc(sPxU*Ai~2I!onGmP)Vt>+i>J_4LF?izG_TseRz;<|j66 zC=^G5L4NqkVAw8s*}r>{?dfH}hE3(y$NSkTKgI?JJ>1+p%C}pL&}`P7+yQjg>5*UvpS#=pgn;q~o&-_E9z7YMUWJo_ni^gQ`fLonz*3-t6_C3YPEsf1049 z+fA>Vv<8x!PW}BL_Tg_%IE!GIctoYnRP}Wx6^n1|`ZLXKeFT*8Ez*LqGqXX(D-_2q zO-&2!oSaiL+t<6hn;isrkq=4Auw+0~uUJvWPAi4mtfZLm`O3?{^7irHORJNv?iG$X zz~Zqce#CdxEiX@;^U(n8u-s<9!EW}JJdiSIzAuBz!jcH3C7o!b@|!@A^4}Em6q z6`=H&Q=u3zsF``~nQW-1w>R&L7Y|OKQWOyw7GAYnIBiZAJ25lMeS5O+f|w$&ZO<-Q z1u|y15f8FwN@X*dF)=sq->YWSIs)r@H&x9;diSojw)W?|Jl{jRf4ef6GFxs6GSd1swtiM;NQ-{f4_f|$^7`~Q--ZlRZ9yC zZfBmr)zugReiZU3eWj2K$}{w9WxnG-i5u|uH-fX#!8uF}%L1lV75I2Tj|~Uo8r4jV znfBYR-o8ETbTE!jK!}5ygXTWAc$cy;v9SHnZ{PNE{>=n&UP5P#dv%kgCCY?%PgWgN z+1U%*;$34dPO1b;Tm{{oPp)<4CeYH;N26n~(9+QOb#`I-Us*^rtaLdXf z<1h`J8_FZ?h^wd#xGM%TP3(PY*p15(6F<6LH}-ZUq&_-CY{&s-^}mdH*~I&A)ACf&*0;}F&wm+t^TCesjH=g z8cP!PntvfOo~BjxO}*42s3TU%eE07?fpa@vR#tj4UXHH}3N2`xQ=dFpb!;r)jRvxR zzmk;1j*GK7JSy!fx3}*z#95zstv^=6Z!|$BCh8dp&PBPrggxHrm%CziWLH`cOBn4& z_JX%kw2?cS58qesll zQxDq~;8x-t1k9-f_Ch&woNoE+d)cXmb`@9|?|ANzC)jJ89(IP7%& zoLgLsRtCmWpJ z-ml(H*Mvb71)iE#Ol-Ebbw})P;oel7+k7Py5P4Ip1;FrodVX19EhfS4L^EU|0|V+x zHaiPfWaaYHHOG&9^4{w&Zte|YzzH9IG6B9|&+Ght=Bdc!(WdutdQDMBaIL-|e}ip@%{^M* zTCGZzc3bp({I#U)?`gsPuTD4zR9DBtcOBTMn@h4Z7@dOm#+Ns+vvDXv3Q7k&4HH*buRarQ&UmJ z4h_jeDh@v-WrBC>${#7D6!5p+*3F8eM!YPt%P;gdIx33)#tq_MP5Q7%7%mu01g`#t$^lr{K{v~21w%+pj`mP=_}w*_{~pX z>ynU+%s(uQur-QhvFhTC1<<+Z*RBx|67J+k33-!fz;VpYEpy!l|Jwh#xmO}ms=y$A z{o(~WTuu)T;-X_RVg=j`wz0;)elF0kQ$Lpmji5nk|~7eQ|zfwbVm1$+LCi#*G4_;ZVS5@QM7L(Nj=$ zZN5(W<}M^^l^5sfmBREgDTNghj0PJ$A|q;i67VsQU3w*y`loGdA_g=wV3oc!xl2Uk zp_s{RGwFC3q#|vMYZ&0V5BP=bDW>!C|921Mn|i@dnY?oyukx| z&P~fIU>)MI@0iz3jEvNOFl|tFR!#1^YL4P;{j#(w&>jAT($&??*J~w$i6&a*vVwK} zEFvc6g63yOha#+7J^`b$p@D&njLFQlxW2g=@Fj$8?E$Rp>L@T78dU51i4PvcqDoz{`tSQ*$2@ zEsIk-rYM%ldw|-FA@j)BXBU%{q@@>gSNxu|(dD;9u($lW&lWazUpYoSIyT<;rMEUz zk{2`)sP!%UMV#VXa-qe13*fVpp4m_EWh(rXx1TElRtnHxlDoK7sPEt3+}eU4t*@`! 
z+1YjFXjLJ1bs2OldTlWiU$hU}q9wi%LoWSsox+MplyrZVKb!ozO?lqM)M zlX0}jSO@eu+T!@X!-hYJ8&Ei5xn_!g=S~Yqcvy{Iz%Cg2)vNdf1VUid-5;KO@$kSW zBC22NO@lt0)g5kO#OGjc)i7?aUV8ZO;p$jv7@WX{6Y0Xh+a1c+$J<$G@1@BKKDg=m zv-%1tbPUiyM?*;o7cw=0A!=rB4of=c&|G9o9=sug#ecc%mz&43b8!Qq#FUhsZkMi@ zSzFil^biON3L;oB6AlS=iDdXed;FOn_lkz(}xf8HpAOD%ov$6017(A5r*^A7Q zVJ}BU%>ce034C2AXXm`);>O|OXgCiKBq7ZoDD_E}%EY2#0o2C3_Fh$4Xu&WTFMA5Y zCz5e|(ZBRsw?|E@SV7Xn!oq?SR3R#bZ(YcO5CExwlijCnV5?uwb-i%;?C!qQm2d?- z94$4qJB%hFDQOdY+1lKsTMDixOioR;fMci)f#ebZixl9afmL&SM7$L9=6Gk2Bx)z#-kj86#U{ayw;sXNKXe4hVub+ z3T*33O?zj*=nNef>AP3dSf2q=T0x#6VtX;(b%AkXFeiRydHE-_ll-M;+hT%B{5--k zHi`vK5PhLxzpRC&rL`XL7vxq~SHpt@j{n+w(ZvY?u=Q4ic*=I|a(bw9K(l5%1on*`Zt&zzeSg0K z7A9s_MDyO61>EM!>@i2v^rippbypS{HwLor0&!^?AOa4)lHuq9qP6E|C*Uw8Dlv}9 z$jCkn|DsVVGV0c`nvsDuRV%QCqxl_e5;jQEGyv!-Wn%IOETgqJ2nhX`n{2Dm_%bT| zghxhmi8wq$N}<6nE{@M`6D@eS(JL|<#~IZ2hc(#-$Ui2 zFDY{if(W)V3j+gowxZf!+iGZKfhQ}tu~{dVu9$u2{QMkF@nm-$`1PUy_JDSE8GPCg z#msqIw@Y@iju#qIQa6T&hl8B7Mo#L(!o%x;tdy=&$saK zF989sKW&%)0xomt4KY3)So${9)LaI821yxGUK=Jb!|IEX1DVN7$en~jmjF?kMp$U* zV|Mo1p`nsz8XrG?JR;xx4C*^n3kwU-uml#_05JQV3YN9AnIqV1gGpWvTzfaL^WUyT z2a^$Vd&a?l;`4}$*&bX~zlgb_SgyR#7VTbK%vn@a1cT%Taz3oi$5ucgYX&5O)!rQM zrUirHZ(=x`j4HVB`!hKi55&jWg#}L^A1qGO30-j7pm5>^m?kGY9dXAhE>KNs27!V@ zc>kU3U@QygDu0UpiQ|&W>ud(rW@dB+w9Gk@*@{tCuV!u|0z=ys5Lz2xTEZz6#A=sE zIDxBKT?BCYuiw7CGd9+^pXaN;D7Iz@@W+ozg{lFfN%XrUs=%Q*y2FJHspZvmN|&0_AAz~R&t5a38;j9NLF-%a1YcBVdgjhpWWgaCD5>_Z4p zH@%2aeO`XJU3myxZ~33{ScHUOLW}!;-rm)XjcLiXgq_boE`f3s1+pXq6)4#hY~RFN zhx}$I`^G>op0};WX@hDL(AF#t)-^Pu3l99W0a4*k%K91999RK-;VNpbsP7RisM7MVRxU@2WRw{-ouc=Vt6~sD~ViR&m zqUt(20>6GGg#(uL;8l+nzMDdq;h?Se481d8-2KV8Lm>hI`lJ!~{fD4Ga0Cn&q?eMM z*9M*9yk9`40T*;J0$>(GXWKh~%Ot0yHvX`y8eu+!Q`Mv~B)2aZS=1{cZtpw)3{NB*kF#|FXy4*)4^NymIwm$PPB)FaQp(}>ecy7C5T}y zEwTts&d$-15$JWkQBhHBhJAD*N!{Un^gfMr;*MS(9`q1o6yFK?6>5GT& z&Y?i^?|<5x$im`Z5k~s|=SMFxEYFUu{r&yF3OJWR2K)bko_fT@R1LK2mO0?$T8MRj$5dSyl8&70u$J#ZAPY;5l!%P)sO2>F+`o*q1(dn7EH zASFRve!QGHG&(vvGgDLj>53Q#VdYK&{~e;YUsMzcF)=Z;JA4P3B9ioCoyHr$v&$_k ztcN#rb3+4Seb4;V@!Y!8tu9*$g3Qloah)E2l)BnfF{W_%;KYEu$A6QXGe&E ze(~{?a5Ld7T5$%wsYFAQ;H*CkqR996_rWTAvsBn^R)=JMXj)l40a7vtL_k>Q$rc(a zOxQ1yqqM5xawfptVq#(a>FBtjq@)DO0R%4>L|V_w6N6Amj((nPgb?6&UvW0yrnvu%7`ig#5q` zMjE+sj08bO4CcC!vN9>O!a}P3B{=vds5x0#L+`dI$j4+TQ6A%-NLI%6Fv!er9HWVoaV|Z-?5jBO-au6@+1y~{z z6Vp(sB`u)qb$9=E2aRul!k8Vz8q6Cv-h*#zi)P0FlLV4*bE)OxcK>*#GcVkgZk;!N zmTI9d2!F7`RE&J2@iP--lb(a5-g9wrY>3eB;h_$VjPxW3W4N5bv5*Q--2s)LYVZKJ zAq7QNtWCjGkjciw(syMUxK7wASC2a(rEetk@(Re>OxKsW6+NJ%Er`G!VDrgCv{(M>uZ#3pus2>18L z9hZja5)NI!OGU=ri5{{UvNSY|jG3?ECE!dX;^lT%huO7`049A4lJXP)q?Y}HY#Jm5 zQ(v7_*zcsigFtKq5>_PnqDe9U361b2VqETcEkofl0S@Zwz6^xrdH_`)@Z7uiY-@X) z!OR4*C|?jyTHgaGXbKo61Be{9DoJPdGXPn%%o4z%@<@rG-vCrokR0uPY|_m{t57&E z^TNl>*0%Ft1eQn8VSHH_-M6;AJ-b?f(mD&6s+UJchll4K*Jfsf03*%i2!|?iy;$k# zeeeVh(csiVR#ujkASj%4+TNwCZfU`mlk2@-?*d*OZW}Uo=}HUm`Y@-#cJxE`ccfuS zC7jt0c_@iSTV!M;%q&ZgC%DW?vj4ypephNXeVLn^J2@@Q@^DjI({Ziv^5Q_gzO>0$ z$sb4+(1u{&`1$kaFl!exp)uY+GxKBRpk7OOvl7V9gns8hTU29x zy$4K|v=LLyHi)1#!^5T3Cl``^I7y0`ut!S6%8H+@_;2N5z7Rq4;K9`FtT5!}Aq`u^ zuz**m=pqo}i$vV4ySA?G3mcmvO&xC#I-t?9ulNKhGI$}u-r2de-?s$=TFUs{A4B`} zX+&RN-=al1z~5rI%s&Gw6$z1OfIx$C(==$eJb-0j^}c>bjQz3>C#(S(8G+cU9FoKL z@S!I-OrVc_cbXT89W8*MNP7D=t9H$o%Zq7r*l?I5Xj}lU4~9>0>(-Zy40?#cPfW(X zLeqW-s7sk@!G(SRdsQB#e%lS^YQE!v2|T0_(|CA%WfW60F@86NPGkR#LN`2Zlx9T> z$af*nkNthy6i~vZL745rVbWvVL9@(?9vHmZuo!NO;SPg!7}PC^Ja3R=^O0?EL+T7p z4Gw8P0?Zn+Jci5#|K`nV(4N|_)8T`}fE>;T=)!VkcWpEZP6FhDY7D4?0RoN&0P%J| zot1vd`#{cUHdTEIw3*sKtp+yIzDIKy-+@=_PL$gpEnk4Tq7%q-8BLT&!@3Ku?+Y3k(HedBvpFEVFPOaArq57dbjnTLSxDEMT+#W7G&P#^%}$#9Jzo`zW`5R(K;dJLH% 
zpp^XJgF{Vi4xWFqkO#JO#RdS}BLOyM#f(Dc-X)YzX=Fb=Z9kgzaeC>>zp;ibWH=18Cu_Jolb2Fm~;sHZ_N zgk;0By1KeCuC>?61uEboDNXU^Q2l2B!6IZC!2vo28*h-)gt~S}DV6HWV>>9OQoW$3 zDW3JX$wF|O;@-cU3O5ca-~W;-ci6G}KVFIa_W%7y2Nxek{dH|l=05{s08LCp>IF*Z H?dSgm&LiL= literal 0 HcmV?d00001 diff --git a/doc/tutorials/index_cn.md b/doc/tutorials/index_cn.md new file mode 100644 index 0000000000..fddaee5b2d --- /dev/null +++ b/doc/tutorials/index_cn.md @@ -0,0 +1,23 @@ +# TUTORIALS +There are several examples and demos here. + +## Quick Start + +* [Quick Start](quick_start/index_cn.rst) + +## Image + +* TBD + +## NLP + +* [Sentiment Analysis](sentiment_analysis/index_cn.md) +* [Semantic Role Labeling](semantic_role_labeling/index_cn.rst) + +## Recommendation + +* TBD + +## Model Zoo + +* TBD diff --git a/doc/tutorials/index_en.md b/doc/tutorials/index_en.md index 97de356665..039ec4b4a4 100644 --- a/doc/tutorials/index_en.md +++ b/doc/tutorials/index_en.md @@ -1,7 +1,9 @@ # TUTORIALS -There are serveral examples and demos here. +There are several examples and demos here. -## [Quick Start](quick_start/index_en.md) +## Quick Start + +* [Quick Start](quick_start/index_en.md) ## Image diff --git a/doc_cn/demo/quick_start/index.rst b/doc/tutorials/quick_start/index_cn.rst similarity index 87% rename from doc_cn/demo/quick_start/index.rst rename to doc/tutorials/quick_start/index_cn.rst index 0536936dc4..754c2f6212 100644 --- a/doc_cn/demo/quick_start/index.rst +++ b/doc/tutorials/quick_start/index_cn.rst @@ -21,7 +21,7 @@ PaddlePaddle快速入门教程 使用PaddlePaddle, 每一个任务流程都可以被划分为如下五个步骤。 - .. image:: Pipeline.jpg + .. image:: src/Pipeline_cn.jpg :align: center :scale: 80% @@ -99,7 +99,7 @@ Python脚本读取数据 本小节我们将介绍模型网络结构。 - .. image:: PipelineNetwork.jpg + .. image:: src/PipelineNetwork_cn.jpg :align: center :scale: 80% @@ -112,7 +112,7 @@ Python脚本读取数据 具体流程如下: - .. image:: NetLR.jpg + .. image:: src/NetLR_cn.jpg :align: center :scale: 80% @@ -147,9 +147,9 @@ Python脚本读取数据 **效果总结**:我们将在后面介绍训练和预测流程的脚本。在此为方便对比不同网络结构,我们总结了各个网络的复杂度和效果。 ===================== =============================== ================= - 网络名称 参数数量 错误率 + 网络名称 参数数量 错误率 ===================== =============================== ================= - 逻辑回归 252 KB 8.652 % + 逻辑回归 252 KB 8.652 % ===================== =============================== ================= 词向量模型 @@ -176,7 +176,7 @@ embedding模型需要稍微改变提供数据的Python脚本,即 ``dataprovide 该模型依然使用逻辑回归分类网络的框架, 只是将句子用连续向量表示替换为用稀疏向量表示, 即对第三步进行替换。句子表示的计算更新为两步: -.. image:: NetContinuous.jpg +.. image:: src/NetContinuous_cn.jpg :align: center :scale: 80% @@ -197,9 +197,9 @@ embedding模型需要稍微改变提供数据的Python脚本,即 ``dataprovide **效果总结:** ===================== =============================== ================== - 网络名称 参数数量 错误率 + 网络名称 参数数量 错误率 ===================== =============================== ================== - 词向量模型 15 MB 8.484 % + 词向量模型 15 MB 8.484 % ===================== =============================== ================== 卷积模型 @@ -207,7 +207,7 @@ embedding模型需要稍微改变提供数据的Python脚本,即 ``dataprovide 卷积网络是一种特殊的从词向量表示到句子表示的方法, 也就是将词向量模型进一步演化为三个新步骤。 -.. image:: NetConv.jpg +.. image:: src/NetConv_cn.jpg :align: center :scale: 80% @@ -230,15 +230,15 @@ embedding模型需要稍微改变提供数据的Python脚本,即 ``dataprovide **效果总结:** ===================== =============================== ======================== - 网络名称 参数数量 错误率 + 网络名称 参数数量 错误率 ===================== =============================== ======================== - 卷积模型 16 MB 5.628 % + 卷积模型 16 MB 5.628 % ===================== =============================== ======================== 时序模型 ---------- -.. image:: NetRNN.jpg +.. 
image:: src/NetRNN_cn.jpg :align: center :scale: 80% @@ -260,9 +260,9 @@ embedding模型需要稍微改变提供数据的Python脚本,即 ``dataprovide 本次试验,我们采用单层LSTM模型,并使用了Dropout,**效果总结:** ===================== =============================== ========================= - 网络名称 参数数量 错误率 + 网络名称 参数数量 错误率 ===================== =============================== ========================= - 时序模型 16 MB 4.812 % + 时序模型 16 MB 4.812 % ===================== =============================== ========================= 优化算法 @@ -284,7 +284,7 @@ Momentum, RMSProp,AdaDelta,AdaGrad,ADAM,Adamax等,这里采用Adam优 在数据加载和网络配置完成之后, 我们就可以训练模型了。 -.. image:: PipelineTrain.jpg +.. image:: src/PipelineTrain_cn.jpg :align: center :scale: 80% @@ -294,7 +294,7 @@ Momentum, RMSProp,AdaDelta,AdaGrad,ADAM,Adamax等,这里采用Adam优 ./train.sh -``train.sh``中包含了训练模型的基本命令。训练时所需设置的主要参数如下: +``train.sh`` 中包含了训练模型的基本命令。训练时所需设置的主要参数如下: .. code-block:: bash @@ -312,7 +312,7 @@ Momentum, RMSProp,AdaDelta,AdaGrad,ADAM,Adamax等,这里采用Adam优 当模型训练好了之后,我们就可以进行预测了。 -.. image:: PipelineTest.jpg +.. image:: src/PipelineTest_cn.jpg :align: center :scale: 80% @@ -348,12 +348,12 @@ Momentum, RMSProp,AdaDelta,AdaGrad,ADAM,Adamax等,这里采用Adam优 对于Amazon-Elec测试集(25k), 如下表格,展示了上述网络模型的训练效果: ===================== =============================== ============= ================================== - 网络名称 参数数量 错误率 配置文件 + 网络名称 参数数量 错误率 配置文件 ===================== =============================== ============= ================================== - 逻辑回归模型 252 KB 8.652% trainer_config.lr.py - 词向量模型 15 MB 8.484% trainer_config.emb.py + 逻辑回归模型 252 KB 8.652% trainer_config.lr.py + 词向量模型 15 MB 8.484% trainer_config.emb.py 卷积模型 16 MB 5.628% trainer_config.cnn.py - 时序模型 16 MB 4.812% trainer_config.lstm.py + 时序模型 16 MB 4.812% trainer_config.lstm.py ===================== =============================== ============= ================================== @@ -384,12 +384,12 @@ Momentum, RMSProp,AdaDelta,AdaGrad,ADAM,Adamax等,这里采用Adam优 模型训练会看到类似上面这样的日志信息,详细的参数解释,请参考如下表格: =========================================== ============================================================== - 名称 解释 + 名称 解释 =========================================== ============================================================== - Batch=20 表示过了20个batch - samples=2560 表示过了2560个样本 - AvgCost 每个pass的第0个batch到当前batch所有样本的平均cost - CurrentCost 当前log_period个batch所有样本的平均cost - Eval: classification_error_evaluator 每个pass的第0个batch到当前batch所有样本的平均分类错误率 - CurrentEval: classification_error_evaluator 当前log_period个batch所有样本的平均分类错误率 + Batch=20 表示过了20个batch + samples=2560 表示过了2560个样本 + AvgCost 每个pass的第0个batch到当前batch所有样本的平均cost + CurrentCost 当前log_period个batch所有样本的平均cost + Eval: classification_error_evaluator 每个pass的第0个batch到当前batch所有样本的平均分类错误率 + CurrentEval: classification_error_evaluator 当前log_period个batch所有样本的平均分类错误率 =========================================== ============================================================== diff --git a/doc/tutorials/quick_start/index_en.md b/doc/tutorials/quick_start/index_en.md index 29637293fa..4e765b2303 100644 --- a/doc/tutorials/quick_start/index_en.md +++ b/doc/tutorials/quick_start/index_en.md @@ -32,7 +32,7 @@ The monitor breaks down two months after purchase. the classifier should output “negative“. To build your text classification system, your code will need to perform five steps: -

-![](./Pipeline_en.jpg)
+![](./src/Pipeline_en.jpg)
- Preprocess data into a standardized format. - Provide data to the learning model. @@ -160,14 +160,14 @@ You can refer to the following link for more detailed examples and data formats: ## Network Architecture You will describe four kinds of network architectures in this section. -
-![](./PipelineNetwork_en.jpg)
+![](./src/PipelineNetwork_en.jpg)
First, you will build a logistic regression model. Later, you will also get chance to build other more powerful network architectures. For more detailed documentation, you could refer to:
layer documentation. All configuration files are in `demo/quick_start` directory. ### Logistic Regression The architecture is illustrated in the following picture: -
-![](./NetLR_en.png)
+![](./src/NetLR_en.png)
- You need define the data for text features. The size of the data layer is the number of words in the dictionary. @@ -182,10 +182,10 @@ label = data_layer(name="label", size=label_dim) ``` - It uses logistic regression model to classify the vector, and it will output the classification error during training. - - Each layer has an *input* argument that specifies its input layer. Some layers can have multiple input layers. You can use a list of the input layers as input in that case. - - *size* for each layer means the number of neurons of the layer. - - *act_type* means activation function applied to the output of each neuron independently. - - Some layers can have additional special inputs. For example, `classification_cost` needs ground truth label as input to compute classification loss and error. + - Each layer has an *input* argument that specifies its input layer. Some layers can have multiple input layers. You can use a list of the input layers as input in that case. + - *size* for each layer means the number of neurons of the layer. + - *act_type* means activation function applied to the output of each neuron independently. + - Some layers can have additional special inputs. For example, `classification_cost` needs ground truth label as input to compute classification loss and error. ```python # Define a fully connected layer with logistic activation (also called softmax activation). output = fc_layer(input=word, @@ -240,7 +240,7 @@ def process(settings, file_name): ``` This model is very similar to the framework of logistic regression, but it uses word embedding vectors instead of a sparse vectors to represent words. -
-![](./NetContinuous_en.png)
+![](./src/NetContinuous_en.png)
- It can look up the dense word embedding vector in the dictionary (its words embedding vector is `word_dim`). The input is a sequence of N words, the output is N word_dim dimensional vectors. @@ -283,7 +283,7 @@ The performance is summarized in the following table: ### Convolutional Neural Network Model Convolutional neural network converts a sequence of word embeddings into a sentence representation using temporal convolutions. You will transform the fully connected layer of the word embedding model to 3 new sub-steps. -
-![](./NetConv_en.png)
+![](./src/NetConv_en.png)
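The three convolution steps are walked through below. For orientation, a complete configuration built around `sequence_conv_pool` might look roughly like the following sketch. It is illustrative rather than the demo's actual `trainer_config.cnn.py`: the sizes (`dict_dim`, `emb_dim`, `hid_dim`, two label classes) are placeholders, and the `hidden_size` argument is an assumption about the helper's full signature that should be checked against the installed PaddlePaddle version.

```python
# Illustrative sketch: a minimal convolutional text-classification config
# assembled from the layers discussed in this tutorial. All sizes are placeholders.
from paddle.trainer_config_helpers import *

dict_dim = 30000   # vocabulary size (placeholder)
emb_dim = 128      # word embedding dimension
hid_dim = 512      # size of the convolution projection (assumed argument)
label_dim = 2      # two sentiment classes assumed

settings(batch_size=128,
         learning_rate=2e-3,
         learning_method=AdamOptimizer())

word = data_layer(name="word", size=dict_dim)
emb = embedding_layer(input=word, size=emb_dim)

# Temporal convolution over the word embeddings followed by pooling,
# as in the sequence_conv_pool snippet shown below (context_len=5 means
# each word is convolved with its 2 nearest neighbors on either side).
text_conv = sequence_conv_pool(input=emb,
                               context_len=5,
                               hidden_size=hid_dim)

output = fc_layer(input=text_conv, size=label_dim, act=SoftmaxActivation())
label = data_layer(name="label", size=label_dim)
outputs(classification_cost(input=output, label=label))
```

A file along these lines is what `train.sh` would pass to `paddle train --config=...` in the training step described later.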
Text convolution has 3 steps: @@ -295,8 +295,8 @@ Text convolution has 3 steps: # context_len means convolution kernel size. # context_start means the start of the convolution. It can be negative. In that case, zero padding is applied. text_conv = sequence_conv_pool(input=emb, - context_start=k, - context_len=2 * k + 1) + context_start=k, + context_len=2 * k + 1) ``` The performance is summarized in the following table: @@ -324,7 +324,7 @@ The performance is summarized in the following table:
### Recurrent Model
-![](./NetRNN_en.png)
+![](./src/NetRNN_en.png)
You can use Recurrent neural network as our time sequence model, including simple RNN model, GRU model, and LSTM model。 @@ -378,7 +378,7 @@ settings(batch_size=128, ## Training Model After completing data preparation and network architecture specification, you will run the training script. -
-![](./PipelineTrain_en.png)
+![](./src/PipelineTrain_en.png)
Training script: our training script is in `train.sh` file. The training arguments are listed below: @@ -395,7 +395,7 @@ We do not provide examples on how to train on clusters here. If you want to trai ## Inference You can use the trained model to perform prediction on the dataset with no labels. You can also evaluate the model on dataset with labels to obtain its test accuracy. -
-![](./PipelineTest_en.png)
+![](./src/PipelineTest_en.png)
The test script is listed below. PaddlePaddle can evaluate a model on the data with labels specified in `test.list`. diff --git a/doc_cn/demo/quick_start/NetContinuous.jpg b/doc/tutorials/quick_start/src/NetContinuous_cn.jpg similarity index 100% rename from doc_cn/demo/quick_start/NetContinuous.jpg rename to doc/tutorials/quick_start/src/NetContinuous_cn.jpg diff --git a/doc/tutorials/quick_start/NetContinuous_en.png b/doc/tutorials/quick_start/src/NetContinuous_en.png similarity index 100% rename from doc/tutorials/quick_start/NetContinuous_en.png rename to doc/tutorials/quick_start/src/NetContinuous_en.png diff --git a/doc_cn/demo/quick_start/NetConv.jpg b/doc/tutorials/quick_start/src/NetConv_cn.jpg similarity index 100% rename from doc_cn/demo/quick_start/NetConv.jpg rename to doc/tutorials/quick_start/src/NetConv_cn.jpg diff --git a/doc/tutorials/quick_start/NetConv_en.png b/doc/tutorials/quick_start/src/NetConv_en.png similarity index 100% rename from doc/tutorials/quick_start/NetConv_en.png rename to doc/tutorials/quick_start/src/NetConv_en.png diff --git a/doc_cn/demo/quick_start/NetLR.jpg b/doc/tutorials/quick_start/src/NetLR_cn.jpg similarity index 100% rename from doc_cn/demo/quick_start/NetLR.jpg rename to doc/tutorials/quick_start/src/NetLR_cn.jpg diff --git a/doc/tutorials/quick_start/NetLR_en.png b/doc/tutorials/quick_start/src/NetLR_en.png similarity index 100% rename from doc/tutorials/quick_start/NetLR_en.png rename to doc/tutorials/quick_start/src/NetLR_en.png diff --git a/doc_cn/demo/quick_start/NetRNN.jpg b/doc/tutorials/quick_start/src/NetRNN_cn.jpg similarity index 100% rename from doc_cn/demo/quick_start/NetRNN.jpg rename to doc/tutorials/quick_start/src/NetRNN_cn.jpg diff --git a/doc/tutorials/quick_start/NetRNN_en.png b/doc/tutorials/quick_start/src/NetRNN_en.png similarity index 100% rename from doc/tutorials/quick_start/NetRNN_en.png rename to doc/tutorials/quick_start/src/NetRNN_en.png diff --git a/doc_cn/demo/quick_start/PipelineNetwork.jpg b/doc/tutorials/quick_start/src/PipelineNetwork_cn.jpg similarity index 100% rename from doc_cn/demo/quick_start/PipelineNetwork.jpg rename to doc/tutorials/quick_start/src/PipelineNetwork_cn.jpg diff --git a/doc/tutorials/quick_start/PipelineNetwork_en.jpg b/doc/tutorials/quick_start/src/PipelineNetwork_en.jpg similarity index 100% rename from doc/tutorials/quick_start/PipelineNetwork_en.jpg rename to doc/tutorials/quick_start/src/PipelineNetwork_en.jpg diff --git a/doc_cn/demo/quick_start/PipelineTest.jpg b/doc/tutorials/quick_start/src/PipelineTest_cn.jpg similarity index 100% rename from doc_cn/demo/quick_start/PipelineTest.jpg rename to doc/tutorials/quick_start/src/PipelineTest_cn.jpg diff --git a/doc/tutorials/quick_start/PipelineTest_en.png b/doc/tutorials/quick_start/src/PipelineTest_en.png similarity index 100% rename from doc/tutorials/quick_start/PipelineTest_en.png rename to doc/tutorials/quick_start/src/PipelineTest_en.png diff --git a/doc_cn/demo/quick_start/PipelineTrain.jpg b/doc/tutorials/quick_start/src/PipelineTrain_cn.jpg similarity index 100% rename from doc_cn/demo/quick_start/PipelineTrain.jpg rename to doc/tutorials/quick_start/src/PipelineTrain_cn.jpg diff --git a/doc/tutorials/quick_start/PipelineTrain_en.png b/doc/tutorials/quick_start/src/PipelineTrain_en.png similarity index 100% rename from doc/tutorials/quick_start/PipelineTrain_en.png rename to doc/tutorials/quick_start/src/PipelineTrain_en.png diff --git a/doc_cn/demo/quick_start/Pipeline.jpg b/doc/tutorials/quick_start/src/Pipeline_cn.jpg 
similarity index 100% rename from doc_cn/demo/quick_start/Pipeline.jpg rename to doc/tutorials/quick_start/src/Pipeline_cn.jpg diff --git a/doc/tutorials/quick_start/Pipeline_en.jpg b/doc/tutorials/quick_start/src/Pipeline_en.jpg similarity index 100% rename from doc/tutorials/quick_start/Pipeline_en.jpg rename to doc/tutorials/quick_start/src/Pipeline_en.jpg diff --git a/doc/tutorials/semantic_role_labeling/index_cn.md b/doc/tutorials/semantic_role_labeling/index_cn.md index c7e0a78f50..f6061766c0 100644 --- a/doc/tutorials/semantic_role_labeling/index_cn.md +++ b/doc/tutorials/semantic_role_labeling/index_cn.md @@ -149,7 +149,7 @@ paddle train \ 训练后,模型将保存在目录`output`中。 我们的训练曲线如下:
-![pic](./curve.jpg) +![pic](./src/curve.jpg)
### 测试 diff --git a/doc/tutorials/semantic_role_labeling/index_en.md b/doc/tutorials/semantic_role_labeling/index_en.md index f5bdf64487..62fe0a41cd 100644 --- a/doc/tutorials/semantic_role_labeling/index_en.md +++ b/doc/tutorials/semantic_role_labeling/index_en.md @@ -41,13 +41,13 @@ Unlike Bidirectional-LSTM that used in Sentiment Analysis demo, the DB-LSTM ado The following figure shows a temporal expanded 2-layer DB-LSTM network.
-![pic](./network_arch.png) +![pic](./src/network_arch.png)
### Features Two input features play an essential role in this pipeline: predicate (pred) and argument (argu). Two other features: predicate context (ctx-p) and region mark (mr) are also adopted. Because a single predicate word can not exactly describe the predicate information, especially when the same words appear more than one times in a sentence. With the predicate context, the ambiguity can be largely eliminated. Similarly, we use region mark mr = 1 to denote the argument position if it locates in the predicate context region, or mr = 0 if does not. These four simple features are all we need for our SRL system. Features of one sample with context size set to 1 is showed as following[2]:
-![pic](./feature.jpg) +![pic](./src/feature.jpg)
In this sample, the coresponding labelled sentence is: @@ -148,7 +148,7 @@ paddle train \ After training, the models will be saved in directory `output`. Our training curve is as following:
-![pic](./curve.jpg) +![pic](./src/curve.jpg)
### Run testing diff --git a/doc/tutorials/semantic_role_labeling/semantic_role_labeling_cn.md b/doc/tutorials/semantic_role_labeling/semantic_role_labeling_cn.md deleted file mode 100644 index f3c855a9fd..0000000000 --- a/doc/tutorials/semantic_role_labeling/semantic_role_labeling_cn.md +++ /dev/null @@ -1,201 +0,0 @@ -# 语义角色标注教程 # - -语义角色标注(Semantic role labeling, SRL)是浅语义解析的一种形式,其目的是在给定的输入句子中发现每个谓词的谓词参数结构。 SRL作为很多自然语言处理任务中的中间步骤是很有用的,如信息提取、文档自动分类和问答。 实例如下 [1]: - - [ A0 他 ] [ AM-MOD 将 ][ AM-NEG 不会 ] [ V 接受] [ A1 任何东西 ] 从 [A2 那些他写的东西中 ]。 - -- V: 动词 -- A0: 接受者 -- A1: 接受的东西 -- A2: 从……接受 -- A3: 属性 -- AM-MOD: 情态动词 -- AM-NEG: 否定 - -给定动词“接受”,句子中的大部分将会扮演某些语义角色。这里,标签方案来自 Penn Proposition Bank。 - -到目前为止,大多数成功的SRL系统是建立在某种形式的解析结果之上的,其中在语法结构上使用了预先定义的特征模板。 本教程将介绍使用深度双向长短期记忆(DB-LSTM)模型[2]的端到端系统来解决SRL任务,这在很大程度上优于先前的最先进的系统。 这个系统将SRL任务视为序列标记问题。 - -## 数据描述 -相关论文[2]采用 CoNLL-2005&2012 共享任务中设置的数据进行训练和测试。根据数据许可证,演示采用 CoNLL-2005 的测试数据集,可以在网站上找到。 - -用户只需执行以下命令就可以下载并处理原始数据: - -```bash -cd data -./get_data.sh -``` -`data `目录会出现如下几个新的文件: -```bash -conll05st-release:the test data set of CoNll-2005 shared task -test.wsj.words:the Wall Street Journal data sentences -test.wsj.props: the propositional arguments -feature: the extracted features from data set -``` - -## 训练 -### DB-LSTM -请参阅情绪分析的演示以了解有关长期短期记忆单元的更多信息。 - -与在 Sentiment Analysis 演示中使用的 Bidirectional-LSTM 不同,DB-LSTM 采用另一种方法来堆叠LSTM层。首先,标准LSTM以正向处理该序列。该 LSTM 层的输入和输出作为下一个 LSTM 层的输入,并被反向处理。这两个标准 LSTM 层组成一对 LSTM。然后我们堆叠一对对的 LSTM 层后得到深度 LSTM 模型。 - -下图展示了时间扩展的2层 DB-LSTM 网络。 -
-![pic](./network_arch.png) -
- -### 特征 -两个输入特性在这个管道中起着至关重要的作用:predicate(pred)和argument(arguments)。 还采用了两个其他特征:谓词上下文(ctx-p)和区域标记(mr)。 因为单个谓词不能精确地描述谓词信息,特别是当相同的词在句子中出现多于一次时。 使用谓词上下文,可以在很大程度上消除歧义。类似地,如果它位于谓词上下文区域中,则使用区域标记 mr = 1 来表示参数位置,反之则 mr = 0。这四个简单的特征是我们的SRL系统所需要的。上下文大小设置为1的一个样本的特征如下[2]所示: -
-![pic](./feature.jpg) -
- -在这个示例中,相应的标记句子是: - -[ A1 A record date ] has [ AM-NEG n't ] been [ V set ] . - -在演示中, 我们采用上面的特征模板, 包括: `argument`, `predicate`, `ctx-p (p=-1,0,1)`, `mark` 并使用 `B/I/O` 方案来标记每个参数。这些特征和标签存储在 `feature` 文件中, 用`\t`分割。 - -### 数据提供 - -`dataprovider.py` 是一个包装数据的 Python 文件。 函数 `hook()` 定义了网络的数据槽。六个特征和标签都是索引槽。 -``` -def hook(settings, word_dict, label_dict, **kwargs): - settings.word_dict = word_dict - settings.label_dict = label_dict - #all inputs are integral and sequential type - settings.slots = [ - integer_value_sequence(len(word_dict)), - integer_value_sequence(len(predicate_dict)), - integer_value_sequence(len(word_dict)), - integer_value_sequence(len(word_dict)), - integer_value_sequence(len(word_dict)), - integer_value_sequence(len(word_dict)), - integer_value_sequence(len(word_dict)), - integer_value_sequence(2), - integer_value_sequence(len(label_dict))] -``` -相应的数据迭代器如下: -``` -@provider(init_hook=hook, should_shuffle=True, calc_batch_size=get_batch_size, - can_over_batch_size=False, cache=CacheType.CACHE_PASS_IN_MEM) -def process(settings, file_name): - with open(file_name, 'r') as fdata: - for line in fdata: - sentence, predicate, ctx_n2, ctx_n1, ctx_0, ctx_p1, ctx_p2, mark, label = \ - line.strip().split('\t') - - words = sentence.split() - sen_len = len(words) - word_slot = [settings.word_dict.get(w, UNK_IDX) for w in words] - - predicate_slot = [settings.predicate_dict.get(predicate)] * sen_len - ctx_n2_slot = [settings.word_dict.get(ctx_n2, UNK_IDX)] * sen_len - ctx_n1_slot = [settings.word_dict.get(ctx_n1, UNK_IDX)] * sen_len - ctx_0_slot = [settings.word_dict.get(ctx_0, UNK_IDX)] * sen_len - ctx_p1_slot = [settings.word_dict.get(ctx_p1, UNK_IDX)] * sen_len - ctx_p2_slot = [settings.word_dict.get(ctx_p2, UNK_IDX)] * sen_len - - marks = mark.split() - mark_slot = [int(w) for w in marks] - - label_list = label.split() - label_slot = [settings.label_dict.get(w) for w in label_list] - yield word_slot, predicate_slot, ctx_n2_slot, ctx_n1_slot, \ - ctx_0_slot, ctx_p1_slot, ctx_p2_slot, mark_slot, label_slot -``` -函数 `process` 产出有8个特征和标签的9个表。 - -### 神经网络配置 - -`db_lstm.py` 是在训练过程中加载字典并定义数据提供程序模块和网络架构的神经网络配置文件。 - -九个 `data_layer` 从数据提供程序加载实例。八个特征分别转换为嵌入,并由`mixed_layer`混合。 深度双向LSTM层提取softmax层的特征。目标函数是标签的交叉熵。 - -### 训练 -训练的脚本是 `train.sh`,用户只需执行: -```bash - ./train.sh -``` -`train.sh` 中的内容: -``` -paddle train \ - --config=./db_lstm.py \ - --use_gpu=0 \ - --log_period=5000 \ - --trainer_count=1 \ - --show_parameter_stats_period=5000 \ - --save_dir=./output \ - --num_passes=10000 \ - --average_test_period=10000000 \ - --init_model_path=./data \ - --load_missing_parameter_strategy=rand \ - --test_all_data_in_one_period=1 \ -2>&1 | tee 'train.log' -``` - -- \--config=./db_lstm.py : 网络配置文件 -- \--use_gpu=false: 使用 CPU 训练(如果已安装 PaddlePaddle GPU版本并想使用 GPU 训练可以设置为true,目前 crf_layer 不支持 GPU) -- \--log_period=500: 每20批(batch)输出日志 -- \--trainer_count=1: 设置线程数(或 GPU 数) -- \--show_parameter_stats_period=5000: 每100批显示参数统计 -- \--save_dir=./output: 模型输出路径 -- \--num_passes=10000: 设置通过数,一次通过意味着PaddlePaddle训练数据集中的所有样本一次 -- \--average_test_period=10000000: 每个 average_test_period 批次对平均参数进行测试 -- \--init_model_path=./data: 参数初始化路径 -- \--load_missing_parameter_strategy=rand: 随机初始不存在的参数 -- \--test_all_data_in_one_period=1: 在一个周期内测试所有数据 - - -训练后,模型将保存在目录`output`中。 我们的训练曲线如下: -
-![pic](./curve.jpg) -
- -### 测试 -测试脚本是 `test.sh`, 执行: -```bash - ./test.sh -``` -`tesh.sh` 的主要部分: -``` -paddle train \ - --config=./db_lstm.py \ - --model_list=$model_list \ - --job=test \ - --config_args=is_test=1 \ -``` - - - \--config=./db_lstm.py: 网络配置文件 - - \--model_list=$model_list.list: 模型列表文件 - - \--job=test: 指示测试任务 - - \--config_args=is_test=1: 指示测试任务的标记 - - \--test_all_data_in_one_period=1: 在一个周期内测试所有数据 - - -### 预测 -预测脚本是 `predict.sh`,用户只需执行: -```bash - ./predict.sh - -``` -在`predict.sh`中,用户应该提供网络配置文件,模型路径,标签文件,字典文件,特征文件。 -``` -python predict.py - -c $config_file \ - -w $best_model_path \ - -l $label_file \ - -p $predicate_dict_file \ - -d $dict_file \ - -i $input_file \ - -o $output_file -``` - -`predict.py` 是主要的可执行python脚本,其中包括函数:加载模型,加载数据,数据预测。网络模型将输出标签的概率分布。 在演示中,我们使用最大概率的标签作为结果。用户还可以根据概率分布矩阵实现集束搜索或维特比解码。 - -预测后,结果保存在 `predict.res` 中。 - -## 引用 -[1] Martha Palmer, Dan Gildea, and Paul Kingsbury. The Proposition Bank: An Annotated Corpus of Semantic Roles , Computational Linguistics, 31(1), 2005. - -[2] Zhou, Jie, and Wei Xu. "End-to-end learning of semantic role labeling using recurrent neural networks." Proceedings of the Annual Meeting of the Association for Computational Linguistics. 2015. diff --git a/doc/tutorials/semantic_role_labeling/curve.jpg b/doc/tutorials/semantic_role_labeling/src/curve.jpg similarity index 100% rename from doc/tutorials/semantic_role_labeling/curve.jpg rename to doc/tutorials/semantic_role_labeling/src/curve.jpg diff --git a/doc/tutorials/semantic_role_labeling/src/feature.jpg b/doc/tutorials/semantic_role_labeling/src/feature.jpg new file mode 100644 index 0000000000000000000000000000000000000000..0e3310e4ace5613917e7779d3198ccbb3cdc5ada GIT binary patch literal 31204 zcmce-1yo(xvNpPL4^D7*cL*Nb-3jg%Ab})EaF^g7+=FXyf=dzz?(PH+Zu!^V^yxl* zZ=d_W`^Fn@cQShKIcrMQ{OYT!nYDh;{9XZJ$VNL6K$g)SUv&Hr6? 
(binary image data omitted)

diff --git a/doc_cn/demo/sentiment_analysis/sentiment_analysis.md b/doc/tutorials/sentiment_analysis/index_cn.md
similarity index 96%
rename from doc_cn/demo/sentiment_analysis/sentiment_analysis.md
rename to doc/tutorials/sentiment_analysis/index_cn.md
index ba307e97e3..1323ec1a6a 100644
--- a/doc_cn/demo/sentiment_analysis/sentiment_analysis.md
+++ b/doc/tutorials/sentiment_analysis/index_cn.md
@@ -1,325 +1,325 @@
-# 情感分析教程
-
-情感分析有许多应用场景。 一个基本的应用场景是区分给定文本的褒贬两极性,给定的文本可以是一个文档、句子、或者是一个小的文本片段。 一个简单的例子如:把用户在购物网站、旅游网站、团购网站(亚马逊、天猫、淘宝等)上发表的评论分成正面评论和负面评论两类。
-
-情感分析也常用于基于大量评论和个人博客来监控社会媒体。 例如,研究人员分析了几个关于消费者信心和政治观点的调查,结果发现它们与同时期的Twitter消息中的情绪词频率相关 [1]。 另一个例子是通过分析每日Twitter博客的文本内容来预测股票变动 [2]。
-
-另一方面,抓取产品的用户评论并分析他们的情感,有助于理解用户对不同公司,不同产品,甚至不同竞争对手产品的偏好。 - -本教程将指导您完成长期短期记忆(LSTM)网络的训练过程,以分类来自[大型电影评论数据集](http://ai.stanford.edu/~amaas/data/sentiment/)(有时称为[互联网电影数据库 (IMDB)](http://ai.stanford.edu/~amaas/papers/wvSent_acl2011.pdf))的句子的情感 。 此数据集包含电影评论及其相关联的类别标签,即正面和负面。 - -## 数椐准备 - -### IMDB 数椐介绍 - -训练模型之前, 我们需要预处理数椐并构建一个字典。 首先, 你可以使用下面的脚本下载 IMDB 数椐集和[Moses](http://www.statmt.org/moses/)工具, 这是一个基于统计的机器翻译系统. 我们提供了一个数据预处理脚本,它不仅能够处理IMDB数据,还能处理其他用户自定义的数据。 为了使用提前编写的脚本,需要将标记的训练和测试样本移动到另一个路径,这已经在`get_imdb.sh`中完成。 - -``` -cd demo/sentiment/data -./get_imdb.sh -``` -如果数椐获取成功,你将在目录```./demo/sentiment/data```中看到下面的文件: - -``` -aclImdb get_imdb.sh imdb mosesdecoder-master -``` - -* aclImdb: 从外部网站上下载的原始数椐集。 -* imdb: 仅包含训练和测试数椐集。 -* mosesdecoder-master: Moses 工具。 - -IMDB数据集包含25,000个已标注过的高极性电影评论用于训练,25,000个用于测试。负面的评论的得分小于等于4,正面的评论的得大于等于7,总评分10分。 运行完脚本 `./get_imdb.sh`后, 我们可以看到在目录 `aclImdb`中的数椐集的结构如下: - -``` -imdbEr.txt imdb.vocab README test train -``` -* train: 训练数椐集。 -* test : 测试数椐集。 -* imdb.vocab: 字典文件。 -* imdbEr.txt: 字典imdb.vocab中每个切分单词的预期评级。 -* README: 数椐说明文档。 - -测试集和训练集目录包含下面的文件: - -``` -labeledBow.feat neg pos unsup unsupBow.feat urls_neg.txt urls_pos.txt urls_unsup.txt -``` - -* pos: 正面评价样本,包含12,500个txt文件,每个文件是一个电影评论。 -* neg: 负面评价样本,包含12,500个txt文件,每个文件是一个电影评论。 -* unsup: 未标记的评价样本,包含50,000个txt文件。 -* urls_xx.txt: 每个评论的网址。 -* xxBow.feat: 用于统计词频的Bow模型特征。 - -### IMDB 数椐准备 - -在这个例子中,我们只使用已经标注过的训练集和测试集,且默认在训练集上构建字典,而不使用IMDB数椐集中的imdb.vocab做为字典。训练集已经做了随机打乱排序而测试集没有。 Moses 工具中的脚本`tokenizer.perl` 用于切分单单词和标点符号。执行下面的命令就可以预处理数椐。 - -``` -cd demo/sentiment/ -./preprocess.sh -``` -preprocess.sh: - -``` -data_dir="./data/imdb" -python preprocess.py -i data_dir -``` - -* data_dir: 输入数椐所在目录。 -* preprocess.py: 预处理脚本。 - -运行成功后目录`demo/sentiment/data/pre-imdb` 结构如下: - -``` -dict.txt labels.list test.list test_part_000 train.list train_part_000 -``` -* test\_part\_000 and train\_part\_000: 所有标记的测试集和训练集, 训练集已经随机打乱。 -* train.list and test.list: 训练集和测试集文件列表。 -* dict.txt: 利用训练集生成的字典。 -* labels.txt: neg 0, pos 1, 含义:标签0表示负面的评论,标签1表示正面的评论。 - -### 用户自定义数椐预处理 - -如果你执行其它的用情感分析来分类文本的任务,可以按如下的结构来准备数椐. 我们提供了脚本来构建字典和预处理数椐。所以你只用按下面的结构来组织数椐就行了。 - -``` -dataset -|----train -| |----class1 -| | |----text_files -| |----class2 -| | |----text_files -| | ... -|----test -| |----class1 -| | |----text_files -| |----class2 -| | |----text_files -| | ... -``` -* dataset: 一级目录。 -* train, test: 二级目录。 -* class1,class2,...: 三级目录。 -* text_files: 文本格式的实例文件。 - -所有同目录下的文本实例文件都是同级别的。 每个文本文件包含一个或者多个实例,每一行表示一个实例。 为了充分的随机打乱训练集, 在预处理含有多行数椐的文本文件时参数设置稍有不同, 执行`preprocess.sh`脚本时需要加上`-m True`参数。 tokenizer.perl 默认用来切分单记和标点符号,如果你不需要这个操作,在运行`preprocess.sh`时加上`-t False`参数即可。 - -## 训练模型 - -在这步任务中,我们使用了循环神经网络(RNN)的 LSTM 架构来训练情感分析模型。 引入LSTM模型主要是为了克服消失梯度的问题。 LSTM网络类似于具有隐藏层的标准循环神经网络, 但是隐藏层中的每个普通节点被一个记忆单元替换。 每个记忆单元包含四个主要的元素: 输入门, 具有自循环连接的神经元,忘记门和输出门。 更多的细节可以在文献中找到[4]。 LSTM架构的最大优点是它可以在长时间间隔内记忆信息,而没有短时记忆的损失。在有新的单词来临的每一个时间步骤内,存储在记忆单元区块的历史信息被更新用来迭代的学习单词以合理的序列程现。 - -

-![LSTM](../../../doc/demo/sentiment_analysis/lstm.png)
-图表 1. LSTM [3]
-
-情感分析是自然语言理解中最典型的问题之一。 它的目的是预测在一个序列中表达的情感态度。 通常, ,仅仅是一些关键词,如形容词和副词,在预测序列或段落的情感中起主要作用。然而有些评论上下文非常长,例如 IMDB的数椐集。 我们只所以使用LSTM来执行这个任务是因为其改进的设计并且具有门机制。 首先,它能够从词级到具有可变上下文长度的上下文级别来总结表示。 第二,它可以在句子级别利用可扩展的上下文, 而大多数方法只是利用n-gram级别的知识。第三,它直接学习段落表示,而不是组合上下文级别信息。
-
-在本演示中,我们提供两个网络,即双向LSTM和三层堆叠LSTM。
-
-#### 双向LSTM
-
-图2是双向LSTM网络,后面连全连接层和softmax层。
-
-![BiLSTM](../../../doc/demo/sentiment_analysis/bi_lstm.jpg)
-图 2. Bidirectional-LSTM
-
-#### Stacked-LSTM
-图3是三层LSTM结构。图的底部是word embedding(对文档处理后形成的单词向量)。 接下来,连接三个LSTM隐藏层,并且第二个是反向LSTM。然后提取隐藏LSTM层的所有时间步长的最大词向量作为整个序列的表示。 最后,使用具有softmax激活的全连接前馈层来执行分类任务。 更多内容可查看参考文献 [5]。
-
-![StackedLSTM](../../../doc/demo/sentiment_analysis/stacked_lstm.jpg)
-图 3. Stacked-LSTM for sentiment analysis
- -**配置** - -进入`demo/sentiment` 目录 , `trainer_config.py` 是一个配置文件的例子, 其中包含算法和网络配置。第一行从`sentiment_net.py`中导出预定义的网络。 - -trainer_config.py: - -```python -from sentiment_net import * - -data_dir = "./data/pre-imdb" -# whether this config is used for test -is_test = get_config_arg('is_test', bool, False) -# whether this config is used for prediction -is_predict = get_config_arg('is_predict', bool, False) -dict_dim, class_dim = sentiment_data(data_dir, is_test, is_predict) - -################## Algorithm Config ##################### - -settings( - batch_size=128, - learning_rate=2e-3, - learning_method=AdamOptimizer(), - regularization=L2Regularization(8e-4), - gradient_clipping_threshold=25 -) - -#################### Network Config ###################### -stacked_lstm_net(dict_dim, class_dim=class_dim, - stacked_num=3, is_predict=is_predict) -#bidirectional_lstm_net(dict_dim, class_dim=class_dim, is_predict=is_predict) -``` - -* **数椐定义**: - * get\_config\_arg(): 获取通过 `--config_args=xx` 设置的命令行参数。 - * 定义训练数椐和测试数椐提供者, 这里使用了PaddlePaddle的Python接口来加载数椐。想了解更多细节可以参考PyDataProvider部分的文档 - -* **算法配置**: - * 使用随机梯度下降(sgd)算法。 - * 使用 adam 优化。 - * 设置batch size大小为128。 - * 设置平均sgd窗口。 - * 设置全局学习率。 -* **网络配置**: - * dict_dim: 获取字典维度。 - * class_dim: 设置类别数,IMDB有两个标签,即正面评价标签和负面评价标签。 - * `stacked_lstm_net`: 预定义网络如图3所示,默认情况下使用此网络 - * `bidirectional_lstm_net`: 预定义网络,如图2所示。 - -**训练** - -首先安装PaddlePaddle。 然后使用下面的脚本 `train.sh` 来开启本地的训练。 - -``` -cd demo/sentiment/ -./train.sh -``` - -train.sh: - -``` -config=trainer_config.py -output=./model_output -paddle train --config=$config \ - --save_dir=$output \ - --job=train \ - --use_gpu=false \ - --trainer_count=4 \ - --num_passes=10 \ - --log_period=20 \ - --dot_period=20 \ - --show_parameter_stats_period=100 \ - --test_all_data_in_one_period=1 \ - 2>&1 | tee 'train.log' -``` - -* \--config=$config: 设置网络配置。 -* \--save\_dir=$output: 设置输出路径以保存训练完成的模型。 -* \--job=train: 设置工作模式为训练。 -* \--use\_gpu=false: 使用CPU训练,如果你安装GPU版本的PaddlePaddle,并想使用GPU来训练设置为true。 -* \--trainer\_count=4:设置线程数(或GPU个数)。 -* \--num\_passes=15: 设置pass,PaddlePaddle中的一个pass意味着对数据集中的所有样本进行一次训练。 -* \--log\_period=20: 每20个batch打印一次日志。 -* \--show\_parameter\_stats\_period=100: 每100个batch打印一次统计信息。 -* \--test\_all_data\_in\_one\_period=1: 每次测试都测试所有数据。 - -如果运行成功,输出日志保存在路径 `demo/sentiment/train.log`中,模型保存在目录`demo/sentiment/model_output/`中。 输出日志说明如下: - -``` -Batch=20 samples=2560 AvgCost=0.681644 CurrentCost=0.681644 Eval: classification_error_evaluator=0.36875 CurrentEval: classification_error_evaluator=0.36875 -... 
-Pass=0 Batch=196 samples=25000 AvgCost=0.418964 Eval: classification_error_evaluator=0.1922 -Test samples=24999 cost=0.39297 Eval: classification_error_evaluator=0.149406 -``` -- Batch=xx: 表示训练了xx个Batch。 -- samples=xx: 表示训练了xx个样本。。 -- AvgCost=xx: 从第0个batch到当前batch的平均损失。 -- CurrentCost=xx: 最新log_period个batch处理的当前损失。 -- Eval: classification\_error\_evaluator=xx: 表示第0个batch到当前batch的分类错误。 -- CurrentEval: classification\_error\_evaluator: 最新log_period个batch的分类错误。 -- Pass=0: 通过所有训练集一次称为一遍。 0表示第一次经过训练集。 - -默认情况下,我们使用`stacked_lstm_net`网络,当传递相同的样本数时,它的收敛速度比`bidirectional_lstm_net`快。如果要使用双向LSTM,只需删除最后一行中的注释并把“stacked_lstm_net”注释掉。 - -## 测试模型 - -测试模型是指使用训练出的模型评估已标记的验证集。 - -``` -cd demo/sentiment -./test.sh -``` - -test.sh: - -```bash -function get_best_pass() { - cat $1 | grep -Pzo 'Test .*\n.*pass-.*' | \ - sed -r 'N;s/Test.* error=([0-9]+\.[0-9]+).*\n.*pass-([0-9]+)/\1 \2/g' | \ - sort | head -n 1 -} - -log=train.log -LOG=`get_best_pass $log` -LOG=(${LOG}) -evaluate_pass="model_output/pass-${LOG[1]}" - -echo 'evaluating from pass '$evaluate_pass - -model_list=./model.list -touch $model_list | echo $evaluate_pass > $model_list -net_conf=trainer_config.py -paddle train --config=$net_conf \ - --model_list=$model_list \ - --job=test \ - --use_gpu=false \ - --trainer_count=4 \ - --config_args=is_test=1 \ - 2>&1 | tee 'test.log' -``` - -函数`get_best_pass`依据分类错误率获得最佳模型进行测试。 在本示例中,我们默认使用IMDB的测试数据集作为验证。 与训练不同,它需要在这里指定`--job = test`和模型路径,即`--model_list = $model_list`。如果运行成功,日志将保存在“demo / sentiment / test.log”的路径中。例如,在我们的测试中,最好的模型是`model_output / pass-00002`,分类误差是0.115645,如下: - -``` -Pass=0 samples=24999 AvgCost=0.280471 Eval: classification_error_evaluator=0.115645 -``` - -## 预测 - -`predict.py`脚本提供了一个预测接口。在使用它之前请安装PaddlePaddle的python api。 预测IMDB的未标记评论的一个实例如下: - -``` -cd demo/sentiment -./predict.sh -``` -predict.sh: - -``` -#Note the default model is pass-00002, you shold make sure the model path -#exists or change the mode path. -model=model_output/pass-00002/ -config=trainer_config.py -label=data/pre-imdb/labels.list -cat ./data/aclImdb/test/pos/10007_10.txt | python predict.py \ - --tconf=$config\ - --model=$model \ - --label=$label \ - --dict=./data/pre-imdb/dict.txt \ - --batch_size=1 -``` - -* `cat ./data/aclImdb/test/pos/10007_10.txt` : 输入预测样本。 -* `predict.py` : 预测接口脚本。 -* `--tconf=$config` : 设置网络配置。 -* `--model=$model` : 设置模型路径。 -* `--label=$label` : 设置标签类别字典,这个字典是整数标签和字符串标签的一个对应。 -* `--dict=data/pre-imdb/dict.txt` : 设置字典文件。 -* `--batch_size=1` : 设置batch size。 - -注意应该确保默认模型路径`model_output / pass-00002`存在或更改为其它模型路径。 - -本示例的预测结果: - -``` -Loading parameters from model_output/pass-00002/ -./data/aclImdb/test/pos/10014_7.txt: predicting label is pos -``` -我们真诚地感谢您的关注,并欢迎您来参与贡献。 - -## 参考文档 -[1] Brendan O'Connor, Ramnath Balasubramanyan, Bryan R. Routledge, and Noah A. Smith. 2010. [From Tweets to Polls: Linking Text Sentiment to Public Opinion Time Series](http://homes.cs.washington.edu/~nasmith/papers/oconnor+balasubramanyan+routledge+smith.icwsm10.pdf). In ICWSM-2010.
-[2] Johan Bollen, Huina Mao, Xiaojun Zeng. 2011. [Twitter mood predicts the stock market](http://arxiv.org/abs/1010.3003), Journal of Computational Science.
-[3] Alex Graves, Marcus Liwicki, Santiago Fernandez, Roman Bertolami, Horst Bunke, and Jürgen Schmidhuber. 2009. [A novel connectionist system for unconstrained handwriting recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence](http://www.cs.toronto.edu/~graves/tpami_2009.pdf), 31(5):855–868.
-[4] Zachary C. Lipton, [A Critical Review of Recurrent Neural Networks for Sequence Learning](http://arxiv.org/abs/1506.00019v1), arXiv:1506.00019.
-[5] Jie Zhou and Wei Xu; [End-to-end Learning of Semantic Role Labeling Using Recurrent Neural Networks](http://www.aclweb.org/anthology/P/P15/P15-1109.pdf); ACL-IJCNLP 2015.
+# 情感分析教程 + +情感分析有许多应用场景。 一个基本的应用场景是区分给定文本的褒贬两极性,给定的文本可以是一个文档、句子、或者是一个小的文本片段。 一个简单的例子如:把用户在购物网站、旅游网站、团购网站(亚马逊、天猫、淘宝等)上发表的评论分成正面评论和负面评论两类。 + +情感分析也常用于基于大量评论和个人博客来监控社会媒体。 例如,研究人员分析了几个关于消费者信心和政治观点的调查,结果发现它们与同时期的Twitter消息中的情绪词频率相关 [1]。 另一个例子是通过分析每日Twitter博客的文本内容来预测股票变动 [2]。 + +另一方面,抓取产品的用户评论并分析他们的情感,有助于理解用户对不同公司,不同产品,甚至不同竞争对手产品的偏好。 + +本教程将指导您完成长期短期记忆(LSTM)网络的训练过程,以分类来自[大型电影评论数据集](http://ai.stanford.edu/~amaas/data/sentiment/)(有时称为[互联网电影数据库 (IMDB)](http://ai.stanford.edu/~amaas/papers/wvSent_acl2011.pdf))的句子的情感 。 此数据集包含电影评论及其相关联的类别标签,即正面和负面。 + +## 数椐准备 + +### IMDB 数椐介绍 + +训练模型之前, 我们需要预处理数椐并构建一个字典。 首先, 你可以使用下面的脚本下载 IMDB 数椐集和[Moses](http://www.statmt.org/moses/)工具, 这是一个基于统计的机器翻译系统. 我们提供了一个数据预处理脚本,它不仅能够处理IMDB数据,还能处理其他用户自定义的数据。 为了使用提前编写的脚本,需要将标记的训练和测试样本移动到另一个路径,这已经在`get_imdb.sh`中完成。 + +``` +cd demo/sentiment/data +./get_imdb.sh +``` +如果数椐获取成功,你将在目录```./demo/sentiment/data```中看到下面的文件: + +``` +aclImdb get_imdb.sh imdb mosesdecoder-master +``` + +* aclImdb: 从外部网站上下载的原始数椐集。 +* imdb: 仅包含训练和测试数椐集。 +* mosesdecoder-master: Moses 工具。 + +IMDB数据集包含25,000个已标注过的高极性电影评论用于训练,25,000个用于测试。负面的评论的得分小于等于4,正面的评论的得大于等于7,总评分10分。 运行完脚本 `./get_imdb.sh`后, 我们可以看到在目录 `aclImdb`中的数椐集的结构如下: + +``` +imdbEr.txt imdb.vocab README test train +``` +* train: 训练数椐集。 +* test : 测试数椐集。 +* imdb.vocab: 字典文件。 +* imdbEr.txt: 字典imdb.vocab中每个切分单词的预期评级。 +* README: 数椐说明文档。 + +测试集和训练集目录包含下面的文件: + +``` +labeledBow.feat neg pos unsup unsupBow.feat urls_neg.txt urls_pos.txt urls_unsup.txt +``` + +* pos: 正面评价样本,包含12,500个txt文件,每个文件是一个电影评论。 +* neg: 负面评价样本,包含12,500个txt文件,每个文件是一个电影评论。 +* unsup: 未标记的评价样本,包含50,000个txt文件。 +* urls_xx.txt: 每个评论的网址。 +* xxBow.feat: 用于统计词频的Bow模型特征。 + +### IMDB 数椐准备 + +在这个例子中,我们只使用已经标注过的训练集和测试集,且默认在训练集上构建字典,而不使用IMDB数椐集中的imdb.vocab做为字典。训练集已经做了随机打乱排序而测试集没有。 Moses 工具中的脚本`tokenizer.perl` 用于切分单单词和标点符号。执行下面的命令就可以预处理数椐。 + +``` +cd demo/sentiment/ +./preprocess.sh +``` +preprocess.sh: + +``` +data_dir="./data/imdb" +python preprocess.py -i data_dir +``` + +* data_dir: 输入数椐所在目录。 +* preprocess.py: 预处理脚本。 + +运行成功后目录`demo/sentiment/data/pre-imdb` 结构如下: + +``` +dict.txt labels.list test.list test_part_000 train.list train_part_000 +``` +* test\_part\_000 and train\_part\_000: 所有标记的测试集和训练集, 训练集已经随机打乱。 +* train.list and test.list: 训练集和测试集文件列表。 +* dict.txt: 利用训练集生成的字典。 +* labels.txt: neg 0, pos 1, 含义:标签0表示负面的评论,标签1表示正面的评论。 + +### 用户自定义数椐预处理 + +如果你执行其它的用情感分析来分类文本的任务,可以按如下的结构来准备数椐. 我们提供了脚本来构建字典和预处理数椐。所以你只用按下面的结构来组织数椐就行了。 + +``` +dataset +|----train +| |----class1 +| | |----text_files +| |----class2 +| | |----text_files +| | ... +|----test +| |----class1 +| | |----text_files +| |----class2 +| | |----text_files +| | ... +``` +* dataset: 一级目录。 +* train, test: 二级目录。 +* class1,class2,...: 三级目录。 +* text_files: 文本格式的实例文件。 + +所有同目录下的文本实例文件都是同级别的。 每个文本文件包含一个或者多个实例,每一行表示一个实例。 为了充分的随机打乱训练集, 在预处理含有多行数椐的文本文件时参数设置稍有不同, 执行`preprocess.sh`脚本时需要加上`-m True`参数。 tokenizer.perl 默认用来切分单记和标点符号,如果你不需要这个操作,在运行`preprocess.sh`时加上`-t False`参数即可。 + +## 训练模型 + +在这步任务中,我们使用了循环神经网络(RNN)的 LSTM 架构来训练情感分析模型。 引入LSTM模型主要是为了克服消失梯度的问题。 LSTM网络类似于具有隐藏层的标准循环神经网络, 但是隐藏层中的每个普通节点被一个记忆单元替换。 每个记忆单元包含四个主要的元素: 输入门, 具有自循环连接的神经元,忘记门和输出门。 更多的细节可以在文献中找到[4]。 LSTM架构的最大优点是它可以在长时间间隔内记忆信息,而没有短时记忆的损失。在有新的单词来临的每一个时间步骤内,存储在记忆单元区块的历史信息被更新用来迭代的学习单词以合理的序列程现。 + +
+![LSTM](src/lstm.png)
+图表 1. LSTM [3]
+
+情感分析是自然语言理解中最典型的问题之一。 它的目的是预测在一个序列中表达的情感态度。 通常, ,仅仅是一些关键词,如形容词和副词,在预测序列或段落的情感中起主要作用。然而有些评论上下文非常长,例如 IMDB的数椐集。 我们只所以使用LSTM来执行这个任务是因为其改进的设计并且具有门机制。 首先,它能够从词级到具有可变上下文长度的上下文级别来总结表示。 第二,它可以在句子级别利用可扩展的上下文, 而大多数方法只是利用n-gram级别的知识。第三,它直接学习段落表示,而不是组合上下文级别信息。
+
+在本演示中,我们提供两个网络,即双向LSTM和三层堆叠LSTM。
+
+#### 双向LSTM
+
+图2是双向LSTM网络,后面连全连接层和softmax层。
+
+![BiLSTM](src/bi_lstm.jpg)
+图 2. Bidirectional-LSTM
+
+#### Stacked-LSTM
+图3是三层LSTM结构。图的底部是word embedding(对文档处理后形成的单词向量)。 接下来,连接三个LSTM隐藏层,并且第二个是反向LSTM。然后提取隐藏LSTM层的所有时间步长的最大词向量作为整个序列的表示。 最后,使用具有softmax激活的全连接前馈层来执行分类任务。 更多内容可查看参考文献 [5]。
+
+![StackedLSTM](src/stacked_lstm.jpg)
+图 3. Stacked-LSTM for sentiment analysis
+ +**配置** + +进入`demo/sentiment` 目录 , `trainer_config.py` 是一个配置文件的例子, 其中包含算法和网络配置。第一行从`sentiment_net.py`中导出预定义的网络。 + +trainer_config.py: + +```python +from sentiment_net import * + +data_dir = "./data/pre-imdb" +# whether this config is used for test +is_test = get_config_arg('is_test', bool, False) +# whether this config is used for prediction +is_predict = get_config_arg('is_predict', bool, False) +dict_dim, class_dim = sentiment_data(data_dir, is_test, is_predict) + +################## Algorithm Config ##################### + +settings( + batch_size=128, + learning_rate=2e-3, + learning_method=AdamOptimizer(), + regularization=L2Regularization(8e-4), + gradient_clipping_threshold=25 +) + +#################### Network Config ###################### +stacked_lstm_net(dict_dim, class_dim=class_dim, + stacked_num=3, is_predict=is_predict) +#bidirectional_lstm_net(dict_dim, class_dim=class_dim, is_predict=is_predict) +``` + +* **数椐定义**: + * get\_config\_arg(): 获取通过 `--config_args=xx` 设置的命令行参数。 + * 定义训练数椐和测试数椐提供者, 这里使用了PaddlePaddle的Python接口来加载数椐。想了解更多细节可以参考PyDataProvider部分的文档 + +* **算法配置**: + * 使用随机梯度下降(sgd)算法。 + * 使用 adam 优化。 + * 设置batch size大小为128。 + * 设置平均sgd窗口。 + * 设置全局学习率。 +* **网络配置**: + * dict_dim: 获取字典维度。 + * class_dim: 设置类别数,IMDB有两个标签,即正面评价标签和负面评价标签。 + * `stacked_lstm_net`: 预定义网络如图3所示,默认情况下使用此网络 + * `bidirectional_lstm_net`: 预定义网络,如图2所示。 + +**训练** + +首先安装PaddlePaddle。 然后使用下面的脚本 `train.sh` 来开启本地的训练。 + +``` +cd demo/sentiment/ +./train.sh +``` + +train.sh: + +``` +config=trainer_config.py +output=./model_output +paddle train --config=$config \ + --save_dir=$output \ + --job=train \ + --use_gpu=false \ + --trainer_count=4 \ + --num_passes=10 \ + --log_period=20 \ + --dot_period=20 \ + --show_parameter_stats_period=100 \ + --test_all_data_in_one_period=1 \ + 2>&1 | tee 'train.log' +``` + +* \--config=$config: 设置网络配置。 +* \--save\_dir=$output: 设置输出路径以保存训练完成的模型。 +* \--job=train: 设置工作模式为训练。 +* \--use\_gpu=false: 使用CPU训练,如果你安装GPU版本的PaddlePaddle,并想使用GPU来训练设置为true。 +* \--trainer\_count=4:设置线程数(或GPU个数)。 +* \--num\_passes=15: 设置pass,PaddlePaddle中的一个pass意味着对数据集中的所有样本进行一次训练。 +* \--log\_period=20: 每20个batch打印一次日志。 +* \--show\_parameter\_stats\_period=100: 每100个batch打印一次统计信息。 +* \--test\_all_data\_in\_one\_period=1: 每次测试都测试所有数据。 + +如果运行成功,输出日志保存在路径 `demo/sentiment/train.log`中,模型保存在目录`demo/sentiment/model_output/`中。 输出日志说明如下: + +``` +Batch=20 samples=2560 AvgCost=0.681644 CurrentCost=0.681644 Eval: classification_error_evaluator=0.36875 CurrentEval: classification_error_evaluator=0.36875 +... 
+Pass=0 Batch=196 samples=25000 AvgCost=0.418964 Eval: classification_error_evaluator=0.1922 +Test samples=24999 cost=0.39297 Eval: classification_error_evaluator=0.149406 +``` +- Batch=xx: 表示训练了xx个Batch。 +- samples=xx: 表示训练了xx个样本。。 +- AvgCost=xx: 从第0个batch到当前batch的平均损失。 +- CurrentCost=xx: 最新log_period个batch处理的当前损失。 +- Eval: classification\_error\_evaluator=xx: 表示第0个batch到当前batch的分类错误。 +- CurrentEval: classification\_error\_evaluator: 最新log_period个batch的分类错误。 +- Pass=0: 通过所有训练集一次称为一遍。 0表示第一次经过训练集。 + +默认情况下,我们使用`stacked_lstm_net`网络,当传递相同的样本数时,它的收敛速度比`bidirectional_lstm_net`快。如果要使用双向LSTM,只需删除最后一行中的注释并把“stacked_lstm_net”注释掉。 + +## 测试模型 + +测试模型是指使用训练出的模型评估已标记的验证集。 + +``` +cd demo/sentiment +./test.sh +``` + +test.sh: + +```bash +function get_best_pass() { + cat $1 | grep -Pzo 'Test .*\n.*pass-.*' | \ + sed -r 'N;s/Test.* error=([0-9]+\.[0-9]+).*\n.*pass-([0-9]+)/\1 \2/g' | \ + sort | head -n 1 +} + +log=train.log +LOG=`get_best_pass $log` +LOG=(${LOG}) +evaluate_pass="model_output/pass-${LOG[1]}" + +echo 'evaluating from pass '$evaluate_pass + +model_list=./model.list +touch $model_list | echo $evaluate_pass > $model_list +net_conf=trainer_config.py +paddle train --config=$net_conf \ + --model_list=$model_list \ + --job=test \ + --use_gpu=false \ + --trainer_count=4 \ + --config_args=is_test=1 \ + 2>&1 | tee 'test.log' +``` + +函数`get_best_pass`依据分类错误率获得最佳模型进行测试。 在本示例中,我们默认使用IMDB的测试数据集作为验证。 与训练不同,它需要在这里指定`--job = test`和模型路径,即`--model_list = $model_list`。如果运行成功,日志将保存在“demo / sentiment / test.log”的路径中。例如,在我们的测试中,最好的模型是`model_output / pass-00002`,分类误差是0.115645,如下: + +``` +Pass=0 samples=24999 AvgCost=0.280471 Eval: classification_error_evaluator=0.115645 +``` + +## 预测 + +`predict.py`脚本提供了一个预测接口。在使用它之前请安装PaddlePaddle的python api。 预测IMDB的未标记评论的一个实例如下: + +``` +cd demo/sentiment +./predict.sh +``` +predict.sh: + +``` +#Note the default model is pass-00002, you shold make sure the model path +#exists or change the mode path. +model=model_output/pass-00002/ +config=trainer_config.py +label=data/pre-imdb/labels.list +cat ./data/aclImdb/test/pos/10007_10.txt | python predict.py \ + --tconf=$config\ + --model=$model \ + --label=$label \ + --dict=./data/pre-imdb/dict.txt \ + --batch_size=1 +``` + +* `cat ./data/aclImdb/test/pos/10007_10.txt` : 输入预测样本。 +* `predict.py` : 预测接口脚本。 +* `--tconf=$config` : 设置网络配置。 +* `--model=$model` : 设置模型路径。 +* `--label=$label` : 设置标签类别字典,这个字典是整数标签和字符串标签的一个对应。 +* `--dict=data/pre-imdb/dict.txt` : 设置字典文件。 +* `--batch_size=1` : 设置batch size。 + +注意应该确保默认模型路径`model_output / pass-00002`存在或更改为其它模型路径。 + +本示例的预测结果: + +``` +Loading parameters from model_output/pass-00002/ +./data/aclImdb/test/pos/10014_7.txt: predicting label is pos +``` +我们真诚地感谢您的关注,并欢迎您来参与贡献。 + +## 参考文档 +[1] Brendan O'Connor, Ramnath Balasubramanyan, Bryan R. Routledge, and Noah A. Smith. 2010. [From Tweets to Polls: Linking Text Sentiment to Public Opinion Time Series](http://homes.cs.washington.edu/~nasmith/papers/oconnor+balasubramanyan+routledge+smith.icwsm10.pdf). In ICWSM-2010.
+[2] Johan Bollen, Huina Mao, Xiaojun Zeng. 2011. [Twitter mood predicts the stock market](http://arxiv.org/abs/1010.3003), Journal of Computational Science.
+[3] Alex Graves, Marcus Liwicki, Santiago Fernandez, Roman Bertolami, Horst Bunke, and Jürgen Schmidhuber. 2009. [A novel connectionist system for unconstrained handwriting recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence](http://www.cs.toronto.edu/~graves/tpami_2009.pdf), 31(5):855–868.
+[4] Zachary C. Lipton, [A Critical Review of Recurrent Neural Networks for Sequence Learning](http://arxiv.org/abs/1506.00019v1), arXiv:1506.00019.
+[5] Jie Zhou and Wei Xu; [End-to-end Learning of Semantic Role Labeling Using Recurrent Neural Networks](http://www.aclweb.org/anthology/P/P15/P15-1109.pdf); ACL-IJCNLP 2015.
diff --git a/doc/tutorials/sentiment_analysis/src/bi_lstm.jpg b/doc/tutorials/sentiment_analysis/src/bi_lstm.jpg new file mode 100644 index 0000000000000000000000000000000000000000..adec1606d64d6e35ffe7e62abfa9a09309b05c84 GIT binary patch literal 35593 zcmb@s1zc2LyDz?nF6pkJJCv3hLP|sgr3F-SKtPdZV5Fr51Ox<>ZlooLPU(^^gOnL0 zMq=ha{@!zb?|bgK_nvdl=f7ZWzO23W+Ur@*v)5A#`vtoWP~XXIar34Em#%~FQwJ|E@q3T$TpxSdb7^|GJ$vl#4*>t5 z`Oi}T?Vq*f!Y#7gjhk|E;!+a0<^Q|;De^@b(`)ei{A6GZ5qFf42Se*?+cq zKlls~(2)c94|8yLK!5Y?4yh^d=gTt#|M-X8<1p_2U)TR|!Ar#b zi}wQR#Pz3H)A#|GowujYAB_7Z{#*zFGJp!82Uq}3fEN$~!~rQl9#8_*04?A)US%4maoIvg%Zx9R=28snGgEB!M zK&7A>P&23tGz6LiEr2#aJD?-bIUYV91s**f2c7_)B%T7EI-Wk>13YUyC%or)0eInf z33zYu^6)C~8u7aDM(}3w*75f6PVfPIa(qU7ZhUck1$<3>LwrkoM|@BGVEh>TH2ggL zD*P7w0sLwFb^Lw&-vmSibOc-k;si@81c~H{bcrm8T!;dR;)t?| zs);&@CW*F)PKb$#S%^i6Rfr9V9}{~KM-pccR}dqJr-;81qe;j~z$8*6+9Z}F?j)fk zX(VMN9VAmEKS(Y}sY&@s6-fn&F=7m}NuTbJ9L`vdnF_XUq2j|opG zPc_d9FBz`_uOsgp-fmtbADGX855`x{x5!V*ufXrjpUywPerby2Twl6Dd*k+v&>Jl` zj-*7S?4+`!W~8r4YfHnWKS`rxL}eUga%8^9(#aagM#*-|UdhSJdCFDE?a1@VKazhZ zKd(Tqa8DslVels5O^uuIn{79LE6OQ)Db^|;Dv2w(DwQd1D+?++C>JPisqm;gR{5Z^ zuF9qQNcFwyI)oeY7?KCsRO3~%S1VHcp)RcMqF$l?Q{%dZr$&RunWmyveU=k+-B?DR_YkhkS;2i@+~r_jHz zpQ*ojN92y@ofZQ;gWCov2FrK(@4DY@G6W3u3{wnO?g`#|ey`Pt$mpI?meF_P8^$o> zf%|m#ZSR-eKYgI_Ao0PHiI9o6Nw+DL=|j^}(=)SMX31vj=91_g)W94?mB|XQIy{pRGSve4hFo>8bBo^aAw4`o$+N zdYm!!*;~jv!h6F<)hEjb?Q7~==SSn`?)TYW#6Q~qM}T%fVIV=EU0^qi8x{iF2vQIF z5DW@_65I{vg@?nxz0`SG5<(W@8ZsJsJv2G=ILs`pC7dJtW%yQvPDFVmWu#~1e3VjD z?kj><&{t#8($N{wm>Bz*;n?f3X|b1ac5%b;H{##MV-p+`#uDWcb6yj@c7HvW1W77M zrb!M+-byh@X?(-=Cg#m?s%`2}noL@5I$64R`uf{DZ$D-5WxUR~%yiD2&CZ)$odHN-T68vPrQP0mg0pDaF2Hs5aUZc%D!Xq9L!ZsTstYG-Iq>>%q1@4zAg z5GS3_I`_Mry0*G)yO(;*duDo#ddK?o`v&{9`+Ek|2M~iQgKa~KL(Rhq!%ZV{BaNSB zKR1lZjy8H#0JOcXo2lWbVuS!};|u_FuLa zo-UvkeHJg4UM>?Z$E;AVq^+{A=B){@RjF z_5=K*bX#h>ZAW`&V%KW-+upOii~Yz0nuB*gg?`o_svUktnj^oVJW<%AxL+*4ijQTE zyHD<&te(1^UYy0CSH-EMwiUSZpUiLjrOG#o~i_ z06ZEHJ`D)l3xIK!1R+joffvPy=`%e0)3td_uxM4h$#= zcOM|2A*AKLp-x0+^q83EIlWX!(gzY=jhZe7<4Gi+^b^leQZhy+W)@a{0YM>Q5gA!I zd4-#bnzyvHb#(P^-+y3YYKAj!?Cc#Jot&XAFTA{ceEs|b!onjWqh3YFB&WPdO-p~9 zk(rlYP*_x4Qd(A9SKrXs^r^X}yQjCWe_(KEcxrlPc5eR5!s5o}*4J;}e{ApUqKR^r5DUlvmmq$?(K;l8lj0W`iH~hqS*a`|lAJ`ah!VZ^Hgb*Ak!!;Qdwb@$m2o z@$vBqi3o9phy-Ve5D}4(k^EK2{;g2_RVe=ys=o^sHwXwf1_1#9G47X|oRpmU|G8jS za26#Cb`hY!$GO@x_%r|nxVp@X5d!{OHR-KLnhG!q%UFj{V)Y@5b#S6Q!s`u$ zWe=&LX=zQW%!rO`@#7%NLxbqTPX;~<_WrbAlo%oaF*Rm-&`BY~IvAM{JhOIu?~(-z zguCZsfl>bdO%o7!#qbOK%Kqwo8~>x0;+PIGKVqFaf4dOJHe}&Ro1=x*)8~;Xy=COB z&&uDWbXMwHgHPXe#E#dZQ`WOyH7R-|cAbY+ZGXwH(upP+{wSjJghV+8IOmB$QlWJq zE*|GX!4z1aTiyr@d{0jN&rL5?F#j)Ik8(q7RAz#OVGKxcZ~PO5$1^a)575liWuM^@ zyoc6{12W~btx5!xy+V#ph@|wsw_Cm>6UzH9D+D|7-)0XUTQ2%E{i+{h@+)ya3k&cC z!!KAb-LOEZYO$@(GTwx4e(C+o@}mpGIfN@DT*{?`3>}08NKRHmuBt0sOvm9LYz;m0 z&1T$XiFMLAOzDFXpAA>=J}hBj83lsV_l`!0B1#Pj4qf?trlm&N1vYiRaC6Tw5DX6c zlYqqh24sH=Z&1Z39z;b}U#fN}x6Rc5j*7IOztk`EOWS%_*En6}Vac{MdX^Mh$++vl zWk_@=vMN5+Z|9s5RfOZj_Ej_D$9%Uf`tGjnWlND{zwJ`IbnSSz=%=q+J(;hERK_pw zmf%I#d_=9qDf1UYZN%o3wbtXQHXoR_5S3)kejSxD_Vd~NzRTaC*R;yVq&t|RG&s^? 
zjPVL@s)heaCawLfQ2=g##T=org*bVoH)^_oVnti@`AimYE-L&#O{u`b@v%Xm4C(E(_4`_ zeG1YJW0{wV0^ZE!x{{x_y#szyE}8S>t#Fx&7T>&iAJ3k@#`}+7h|;CRQKX^z5w9DM zjF;|I(=#`f5X+xC3yKuXO)hFAKQ{ES-*<2-gD`@q2o%1MW?P|N7YysujtW&&2$7QY zC(Wfv$6**gs3QtxBdt%nCa0dB!I0h1(L;bVVE7CyU$YI@PX3MbDU)d23)NSu-K=9N zh-(W=qnV#2x6H}&HevbdzmPNu<&d|aW=tXb1& zK||*+T*C5&)A8cvGaNj2F~n-Py3M~E<{%-{1<_3rs%JUrwL3)vXX=C~k$s0xW5M!G zxPSjC3KDR)Y+M6ni{(>;FyktNumD-2O%#5ZlQxO0gv7T{fQ<0Wpx?cG`G)9{lUTob z4?a@9l?r7FW7vQWNRuL&usf(imB8~SKDAbn#k1w)$68MU}qZ5{M5y3ShVV23L2V^v}vdqv;{PC0&eZ_a@(2Wbr`)=OaU%y|Q2$mGv_=ICLbXd*Z{zCCdjwDsdnFiT zqQ>f!y(!3mc>2n1l&;wkZMu#^Rk96SKCuyhubYEfl}jTIC#hne<}dWa+4e-faU}qT z0FXhQsT0y#RYb$6_&TCfQYA@(UzI!Gj_96yV|LZVZz`0^H=1{ln!gw&cv%*dkGtT zks@am%=zOl1Tu#%vKdmN5ck>?=wnA?Z`yW%jH~R9XatuKahSzNvvxzVb^8I_Jo^vQ zr65jRc+bQZUM-Sepx<3aZ)Pga%~BvY#tm*oF&ORe;Nc@ZbR2uSh%rFUTgHjDcJJPe z1q&90(9A<88;Z*IGM%z)!8FVm*A0miCB(be@9~Bz&S^&~q687A3KZXNLUz3*^d%f!*q3^oT)t{(^GQ%N8jK#3&%=6?4!KgSmDJ zToEXt^%E({uuiApfp6TrgE1oqQea7EWJ*aM6{Iox5KX}8QI%G_aBi$wJRXjWU&iuR zj_g9Bd!2>{X6!3&1PIxO*;6`e9Ez*xXqLs0qIfwFdPL|uZZ-YboPn|j? zs24XkHzZ3&MzcsXg-)3|emGXl9fFhz;;VUHCnuc0b{p%q>?M!3KoriG9sjJJj`$Sp zVUjF!kyuNVFA-V%?BGpWZ>QelapLr49N)DZMT+EBbKjbf@F@4M%Z>a+C;xF9G9p*6 zR~gyJv!!O^-bj@)8FJ^(4qMWYKY#s>&Ho%gOY-LrB%|oo-G@-RL;><6r@hn1Taxfj ziHBxw2jag&XK>-@Mx;xgICjSk=(2|qU8|wiQl?FXteKNx@UV%bJsZR6{ z?EQNITuAFx@quM33PBl{|0H8*?Hc8=bna*jAGZKkiK|YNB?I#1%8H|BuELtUTkL2p zjxMgk-<M-cZs<)mUF`RxOVdaYBubKRqN+aHJWFbJbNX& zwQqv0OE;lV$$UsdQKJbd|4HVIX^|sM3MAo>aL$^@ytXL#@tXxPsL;JlTlL4;^EYr} z|0*(U%3vR(JOWxNa3{>(pbi?!XdWqW&B>2E#sfKnc2Fp8l)3Ni+qa`mojSifi&w5) z!;*Q^(6wb9Jb&^UtM=~1jqi`}KEYQ6r6t2#9IB^C-(g3}8xr^~5PqNhVE5b#i9W_f z;e^@ItYTFZFP;yHlaWR%Kq6E$qEVjyC=SB(0cov$M$E!|3hvQv!~3_#q#<36lgm3Y z(-#I*Le{yFe`$cAZM|cPN<0%CrQ792E*#_Y~ z9lSrYr$@QQ)lj)~5u}XgPVSPF7o4Ay(2=8bXa$i0weP4o*h_g^c5R?wA2w_nBd2*} z9ZFiP3n{;U%3QwBW-~Xg?=+nK7Rc8U$k7>~)I*cbBe8>A47D2@40r!E8*w=j z(;Q0H#c{z8qPs@UTZLyUx1qw@k7((T2=UmrT+~9-xIl_TTADm9V$*p40`b__2mj(b z_L8A8ZPRL)HMBcinRek+x7v+pwEjbVfy@aX&Z??;s?Rm8tnk|+jSIKjZm{*WLZDZWC=13G>fbi1)*|9rT&)mbAB1;D-XigixR}_u8#;9(KWlH& z_}GQcRNQi8WKPDs&z?U}u2B!%xcfN79nNI*Jh^uT@@LOt+Vs41kOWf3#OW*k#_=%= zF~ZvqrJU&%Nzh~}O3N+G(Z6rB2jrnn*U*O1p|s2UE#{I9p7%_4!^&^xZxzoKpPN1mRZ0??N&TSiv! z=crt{GWPD>hiByBP_c0(coct!1P#awf$W+pqECf;kq*GCR`}FFV(n-xs?&Vi=7|em)P=P^T==50 zI^jWMhc^_W-@nHglsa%3ogK-TNpDJln)J0%+ACU|HRXEwL3=gL{~MmvFM}m>#~=v>EnoA%)2 zXY_2=7h|tKM{(Eq;-KZM<+=d)FWs|OqR`FN{cQRE4%-@3fD?J_Y~Q#C*G_Fg8lu`{ z?i77&q@McFfcg2WH>o(qCahZXC#AqOKFuP5Tu=KM-EOX(y^1nj%ED@d9bCw*Fvy2! 
z6dg41$c(R|;1wG(kbZjl1u56L;qdGwNS%^=&Xt0}k?F(;*N8V4^5hsXW-jH2Rwo{F zu(QSd=|j<(5?JB=C)3bEnm2{3R1Ii+gI<{Q_zSXAF%b^SSLu1L_>i)u(Eg*}KBJv| z0Jfjqf_xdssP@%zY#c^-RLo-t#hZm_mI+kQnGK+sGNm`_-1r$Z7Y{Bo@*^|q9^l!% zr)WI1DQtV&!NbB)x;U*eu#Bl0mt+ z@vv6qf*JiMk?oT_kBps7(YRj)%N$J!8p@GX1PzTC+7GWZ&#ti+q;I7!I`fW|KpsGq zX*%`jgjZEw!!d;uC1%njdWw!De{(N`d5~`q+%vl4aOSI+vup!}z1y3fT~PvKIBz-~ zECG~K_TIf`VkO%s<6wnSg@W^_7)Ue`cT2qZkn!`-^Wj_Mai#;8{xhV(n7^ESP?T|S zMup=SF=^qSf}1z}W<@NjSa*d0b8nO=njdYOR>aUD14B@P2A7Av-w2~I;aCOaiUvGx z)OZ|rW-h{zBB!bJ z)1T6a*sFcWjihd|an72nyMORP{(Wh1V8#OSPbY9PW+fp8Ln?ZqqH)h(zKqoykKo+( zAiQ|(Oa&YQ5r@DtM|xl6%dSGZrsa?;7bVB?BS5EoD@GENdfVtN<}n3S&8Xi4;~#uP zVk+KjG$7~kNlct{@iFC9pIs(9#;%@%M4TGeln1X-Ai;76@uE%ZJ7zKZ4;h1OS;+ABsK1E% zlFj@iz|Y6G?`TlHJiI=?!IP)Y&2NqdoKdWLD);W%1HZgKVdp}|D?oeS8EHmJQgdeseE?>Ef`tA1Mhuc4JiI*8FiZrD-R54Tj9t2}0{e}ED zM6qO-B>C{sZX23+`4;liz3qAXdIT@e+}liw|gfZ0F58+}G@!{cvbDYhmd(Mkat;0{0RN#vnnFprZG z_doQ*m}&nIkc&`A!wHbq6GR>~7mi&(S&=41lPAS+BSzRWdmWV`Ih?*oZzT-mvYD?0 zs5)1Xg1K@3)^(BpD7Lg%1LTh%KE|DI_u!a{!sJc7zd|dg93Re^9C7l&4FpiJUQy0W z=SHd_8Eh#Pqxw!IDU^0UY)GYu&1Gvh;q-(0aB|PB0rAL*#-{y6@!x;@ zMX?EkHKHzlP#FHlbJvm1mRv&(HjpkITOlXWD=*Jnqc|>lgNf2!=`xgEo{#QJ2P=IKA>;y!8NQF5iMb1Y+n^k^Z*!1s5 z*tpaX*&CzcrDMam?^+)#|GrK&HVovm$#W)c{3V6rCwTV-@o6ol2Xb~rR3V+6BLZH$ z#XFH5QR@&?Cde}9JTx4v#6Oxn5PvG4h4@pwM{IBp{(UHqm=O0$Z<$DaJ4CdlgDujf zOo+E{)VcIyLo*nVhpt>7nSVz`;DZJYLi_gZQKLo;FWK5_Xtc7 zB;5bZp|LBXy|XPo`G3W`PhSbud0w3OBa%Of<73^n{U}+ZJxbK*fKoL(V&Sq)_{don zspx?fd3^}HAuxRM`WbFMN(mc>6zn&nvl}5RIzg0}%FZPNF5Qlgx1XqbJ!cy<1h|k! z{)xacr5ER;A(g-CrXyj^#0c`%e@+!$cSfv-6lmxFtTDU5aAe7z0!ish?M^{LLE-q7 zGoV_t8z6nb7cX3k?o`b_tX8gAJ~XLc4Grs6L;|XYbN0{ylxg7ryZD_5D9FGWIaVb^ z*8;b!Y~biM7)Li|MV>4<)HO<@(ph-I5^jesSoP0-eC#;|gYDc69?2Q0eWQ6<(>i(dldXoP?`;OriQPz)3W9&s@e;^J>Tc!nKR%)*D{+<n%= zeE9Go_+F*(gFSJ<{KK2VK<-VN=aVN-!rT|-9_5!ga^?`ZRG!y)4Y!JJ^d(e(2Tj?; z!{uTHpKV^qb|yKz{e6fV_9nK3{M&_A7IK1A?}m~l`XYPQ3{;2G7Tfn8i*Wi33iYa& zg(D?^TE3?V7075W@~K1xdwEJh1mv#1t8iq!9dc#Ps16j0N@u?Va^my<+;sr&x=+R+ z8#nU-a>_S9+`$`F{+^3645$Lgsku5~>#ieMw&ov99p4vuvS%WiHVD%ft;LV8-!Ogh zK+K-E7Tfoo#QGJJ;6z&Y58`pxZ`g?*t?J_o)q1T`zdPp59D(X3@~IC;MJtcvjP|`J z(A$75mP{H1J1!$A+wUcy1?1jHif-MzOCXQxGs9qf_wHQ*#=0byVE|*WW}w#A%DFH^ zs#K|vC{ZGj2ZL*EzIpQoO`A5wqeqXzoo2?28PT|LW3+7167y%z!%CO6NLVc~R0ha$ zsH?1mMoxBDw-a{QS$94%rB8~tAHIa^y}}ydvzH$D@Z~E3vpr_aU4_FZF8+4dxTKlX zsa6JY;@V*8&c<+z*NRIoMClAuG;(L3#W=jd1$i=MR%dWD0FbZQz8~*dBeyjVjhy$2 zND%a~^G5y6a|p;a8aYuvDxy5H6>o?EJzAnu`$kj{nqKD#to?~nS+#B_+BL6(HGgi! 
z?n7s=Yt1yZeS)4}Z?f$;ledz`7f)1e)(4AbjX=qK^p^8eqv^B2=^*YscpB`efcUIY zJ=BMXKIN_JLu&!o3yz!)z%?6o!3&AeqgPJ}@h*S zw;xx=soe<|uc_D0&5mBj3+2g%sufG1LETEoo}Ylh-T{xVokhN?kKpeznhr^>c%-`t zk#QxE;LP2>GmdR4g{)~A$PGZP5NQ2-{wkig9*!RV_VgA&3o|izDL~%e-VgP*&Oxc{ zv~~tn0_$Vvuc2wD;W)T`3372QPbEDu&zrr8NGVymax?zgei*YUC9pFU7df>5Bx;r^ zgpBD^;g!b+RBzTB3ulf*Y2Ggd$XTA6Ps~*{~C-M+6CFuQXXyMW%2S2lE#gXWbO|rzUP}*x~?!Th#t~DfBJeNOX_b( zm6Q%hn!7>dNE~nyG7P>Utx9-+m>9&xUhzWdkPS$YSY5zC@lPhr`_22$RFuMscyV&c zWboApm%UNIDRAizYf>;ieE5WOXD;LP@e5>(Cm(0cwP8Jq;+Yle4*Nlka?C*_p0OBl z3Ca7}a}T)3C*!l`yA~tX^_xW)1jrRiMc;phUI#zW*|KHBmMvQ_VZsDt&6-tc<=Rc5 zb~2>T2#2@Cv2)>IGig76_#>ge8&Z(RLS&3s+VUg|7&4|zhW%R?Vc(7==+dz@%0usa}3O;HykiNH&A$4YzCwft@bB&U&CEtXH2J zXKdi(?MW5n*Q0amT82$;OF-`C<_xdgnc?A2du6yyh-A-dzx!K(K1ftFH{x^REY_x3 zzI0fYEnOH16S!mlf0R^}iGlr=H{)NA8a78_h*F$lV1eF5Sfa7gR)?{~g(l zjO1!9`Z9JPAlmWsr!VlN`q}9-q_gPd=rVcJe@}FoZ8+CP#-QpN*A=-E<$~`WAE<1U zS64j@9+fWGm5w4`v}Pol}b5l!a#{Ri1P9#D1o$GG_5DJD>7V(-DzNSY$4!S^9z*_h!Kl&9y?)!RsxJT=^?Bw%c525~<$SK%G~_b93^ ztd97-C?Lb%bY~cUK?7_95O1kHPEA^l#0jXtgn_}4K|X*CogMm3#h+WN!7*b`%6P~J zNceuqR^(L?NZ0s28IBpFzhGd`cEmAgyebT}$qbI9Y2G5ub!m~NnB(t6ImI}PNzXJ+ zB$Phf-bC)Xn-<#MN7#Qd5fdQiKOI^!v6$SeNZLFpEJkvVfZ6erNnmMI;c zKYNAaS8kJC*q<<82pm7jzL1y#hU!!(j??5r{)%{ZUosrC?ysh-g3zi?CA=X+=*2sa z;6)xT{4t%W@T0e@68W>EP>w8A*~(uhudqli*>W%AXUEwABc?1v^@bf$qIgluU5k!0 zk5xdfqAybZlKJtd{xihMPOrfxaLW}Q5Kh*3clHwoeQSovBf1eda&ccH02R|L?yi_U zX9kAP_z6GzVdRpS4;+3k6m%`xo9jsUeYpf@|1nrSD+{@Cwj%YA!_$q>cpeGe(53T8 zwEz1Aia5E7EQEUJm1LwKH$R?gIy z$T!cW;)K;x!-@0l%&D^%3I})vQ(>C7D)RNm529i0$lfE$%<&CcW0EEhfBG@e;TDnD zO!j+H0B$~fhB>SDV(IeL6mJoG5}etLrDih08+l++2-!0L2ag_veNoDS7!Z}LBT=6; z8^xFg@+XbVaVIdhb#F4r(3hXJaU%e^D5T-*k7A|Epkgsw{CKwrFCXp4_ix__dx&SJ zIsEh??eXhjxCc%}yV~cldR~2$E=gtIz8UGTQR}(Fo@kxW9Y^;bM`^+=_MVJXOm<1+9|mp?_+cO> z?p&Ol>~bo8N1ut`(KR%tI9Qc?*>J7gOMkawpf|$v67?tD1b4FCg1o<`FItDv6>1_+ z?p!9Agz1KiRT?>q*_XZ!OPBHClNTn<+KXxxGog0Xg2d06dolS`By#mCo@SK3VZ<)+kv2HdJFn}A{juF}U$?HN& zfV_e{Db+!$!0mHdRC`o$) z$P2@MLm;Mh?S%vhiKFK#^`-(zZK{GPfj4i_s8&T>B;(uJvzIkBI*m_fg!uju<&Y^! zB6OZQ03+c8U!od>f!2^s1LEK4fE_`s*Wif*kbo;V$z)ZAlzPJb2tf zbm-hGR)RPSFZKbshDB-Hqb(MepAWxn-UxD5(<_F+2w^|uC{Bk!d|CPt3#$)8(PG5k z`I0fqTnZF0sFQUwUFP&CUN|3?tmaBUWP~#g;}rgvnCC@@Or6T55!k;urh9)D^f`Ny z7^E>S@S@2mKHJI+4N{koRDo@mTI z_5ry@Sf4+A#_Lb7k^gQ!_%8QC00AwzFAFf&2Tq877<4FRBg%Zs5lCd!lxx$s2I-<^`UH(UN%cz z<}cZxPJU&UdluzY zTkeg#Uz`e%y;kJVd&7I=-9C%_V3>hE)mzW zXHT?h)e89v=ZF0>d+Zvs13@XRV4sCj=?c2x{~!^bF;op9jQr{y;p=Q%D8B~LHE%b) zLB^elv48Rcl&MrmI?$kc_6>7Gwl_DNl!0>C zx|yg^yZ{oqI8kAas%YMR2y6(*if#y@VfXn zxRi|pCl@NW#ku~;vnAMNLW!JFzSA5bPo>&ac0us^l;<>AL(7f|-Y+61Ug*m+{6+|`%5K~7|I*EKFlUDwX8&j8~$p5o(X?g)Wr(7gPvaMedg3 zPV$m@PkC%A)$4|uHOgYr=w9T_Lpg0^GmQk8PNBRrHJbKDh3btlc+ddT%)<;vjnP1U z_3Bkru3Q-(KTsvP&6_vxm(Q0hS%RlmP9bHoWY}=|Kir7>6oFZ&;6-XjDwb>u zYkO--g2aWAgD9qhoZ-k1-Tw=5;4l4PciReueq=_2LS?C#MNPOTqFfIQfFj9~1!-~& z=9BaQ=U*=Aaa;@s1S^cU=M1II(uav&U38413 zfAY!$wVL)pSIRKit6f78@FSZMCBXYY`}u>uN8JZbLB^cLFnZKTBTm5lhlCZZ+bEkl8R1gG`XyrSpZ*DpF?M7G;Tjw z%;AHlFnh&j6r(zs&u`wtYswIqDaaP@JU&ty zS$`x-l>{#wtdOKYZe+=o3B^kmrajA~Ue7n8yuu4M(t!DU(!}$WA}z3Qd-Utw+Kg=- zQQd4azK@j$N_y9@WnZMpQV6qV(z7UA2nob!AU}QjG%8i9B=~UcHgMoT%$PCbw~xZy zc?z7Z03-i&enH#$?pmN$)j@YBu>yPBga`x;sEd6U6_E9!K4iq|nXdMa{ z%d4hW_^rv-5tF7Zz_sglv1P+t;Z`W-PKmA9tx@Dc5nOr%Z%Z05e@8)5OtaLkSq>9N z63xP?gJm^$#nJd`6|2U4GzJ&K5zXq>j!mw@_NtKbf zW~MMF6PQz064Y!n2zTy3#Hl^2ktRues5C)XI@>D*m^%>7@)orSRJwL(N;FFY!Kv%V zVKovP5}Pj3&?#ONP1}9oWTeYc3}eTPicJ#GX{|BZ{c!c_)j}BQ)2Bz;v}py1_vq0h zQXtOv4H+^BK@Q0cytYw|!! 
zOo#fF6C1E*-8PJxFrN;2;^}46p9Dx@`bjBhtJdv`!li0qRs=Mt{sy+(o?;x32NC_z zrcE2Xe*GG|cI~3FVQwN|MEghA?dsE~7gD4zfUz@I;wOKZnJ11_V9uI3vCzaz4;j!N zE9Q^Iv{|dswB-Oid-q8gqJ+9`2tcP-AYamiUy$cWSZ(r{K1i1)g*pwag3_ct`_ny# zk6#SF&9g_Rv1U$QD@m!a@Ah?bv2Dj;Ea0FawxP%tJqc)neY{9%=+tK% zij}QSE{2j5kBm@b9FTYF)CvFn_n!#*$&ev~kU{7o(2)^L=xc@!9SoIyQjD6mMC5mf zwdqW$uygIIrEu%qRx11ENS+*xFk$*q5u|}c@zn)|MSxT^!eccDujq$-#=Yo`;5!#; z2-WJ>s(=>FYO0NwN)tv=JKTBn3|DX65B~V!(^oiq`L;TTOmq(%0p>nPpDG#AEXy!; z)(XsC{x`ryv}4r)AC+&?Z|-3uHPI=pyN^QR)VV1BL$|kQJi=Niz`^EhVR9fKHg4E+5^1OIPk7 zLHaW2)MYrXKck|+amb#)S~`QKqNxXiGh^NVL0)-N3jT}_0&**~qEcvn zfVJy)V&r&9vqbsX)zB%`>UBktGBwGoY+97Nij6l~wAYG~9+v-X*RCC|UAu;V{`n_Z z6F1a`PMtafzF!|G>JleuZ~D0y)I3FZ>TaPa6kBu$cl zwD!^{M-}F?X30QG7P$@!`hECY{gQ|Tmf^T8qgP$vjw`k2pYrl z%{$z<{}gxbJ;L>y_wkj2Xsm>>SdGD)VKE!gBq>uSL7rUMP>`c7`CbNEb|(};JId7> zQiN|5&f=lof|STlw>a2);3S%N7)AlMYU+in3$0x|0S#(X21+j3rtd>n1Lhv&+g`O1 z1r60Gj|n4tQiLxZKp{AJG4ZHAaxv>ZqEr0nO{`w4ekfa|q0F5Ujor0aAZPEC4jnpx z^QjnvP6ILVpXXDI=+g#vB4{WUgSkdXBCV_jysTK8=6oltu~VtLFrz3n!B(O& zJO>(apcN;_VxSHy>M{Z>RMFd!75TaE+;bGIx{Rc{5I?bI(Reg%P{r^?q&aBlg$HUk zCz_>$CN~QgRn`Xc=o(w2@r5^wTeawv9z>_4&QciT$4NRRI(uWWKyC^;B|Jfo89fd@ z1m=Qf@nj7!7Sk-Dg~ZAy^W&V;Bt#^v)`c9T6RdPIct_`v*4)ePl2nIu=KMd2d-unX zzU|Z@^$~^RM|bGpaDGn|EZYoyd$h)YZp{%$U@pAt#9BF3fF_kgbjP#A`$g^)qElM* zNBZnVFk|{uOI(Fn&DkQoR+vuV@Ow@}Yb;Luce^%j{21~xFNofQ$(x0PhGK=EIe`%w zB0p0it*`9<$6wCSnkM;df1p&cr*GV)%!BL}$dR#1B&^n-issE}XY~Q%;!vPRn^FhI zcXLJJ_!O|iTCt%)T#U=EiM~ie5ufLetjDs|TVa*3AZ$|BE<>kwl*s)^=5+zEc z7a{1QlRkZVB3ase7&n`OhHUA}tjUY5kG{>V55|rl0?PH9_MmL74mfuHnwl3^8ec$P z))yZ>Vfe&FxOw*xCeB)k@0>l6kE$GyJ7>d_xjJOR;!M0XW z*wrcq+nPm?pg~#syb-!}9f>u|cj56%N}0gvl_^3}oVVrx=n9?Ekm!`sH8FjL1g!dukZf9yhQv}&s3@^PmG{boY9Ktl%#2Lhj1~BQYzu7Kany1PO}Db_D;EU>5`yRIG4DkHBS1I zJYx~`=G-jo&BFfF=J2YkCqas(C_f57i=q?$l>a>?5+-yf&73?mXx*<~y+f7^sgROO zv$1PnY-lVXia;$(`O?py`v+(CtwQzk#e-MAc=e^7^M4&(w8mrGww=alY}+>5n2j6T zX>8lJ?M6*wH@0&p@4f%Q&H3qklAJTMXJ+rU*Lv3T#p3gM)!TD>xQ7?H!2(xulBMP& zc!Q;FFC^;#bKEctMA;iSkBA!jCWib}u-9iAoy_B*lugeod<&qsB8IF4(SqZs!2 zAw!~NpC=GBsA(kvOe+9U#d}(o8EblI1&b8aDbj&zUCzG0=E?=W4@SG6ekHQhx(&-qt`F*-ehU7A^c;;LxJY3(tA@A&_G^rV&^R zp>7nvO;Rkk5+`N+3HSdS@Bhm7_BXB3P7>}NRYTrM@m2WtXX(}et!4Dk?|KWVb36K_ z3oO(k!i;4!P!V7~x7)kAIqpJ^K)&#g{oc8~WrCc{?7aWn0&$aPOE>~$Xy+?Dd?xK^ zwyHwmQgD$%gdQ3`5Zc|~%l`J`J5=tseW)+)5vatR4fCVPt{n+4YI5L?2DwY(Gk^XNPf14gR+JlVZ^?xcSnhIwgx&mX+swTP2>bPyFt)r) z{BV1BQT@t;&nGNsllRE;KL71}-JT2%lllS!_tbRJa!MtBuvBFJ&`1<=DgkYPidhBQ z5CyMXpj-Yq(_GA_PPPav4lh&98xFUVqj~a!x@+g(U5#}JkF0o-UJzOKSH01=BZAmH zxnR1*xe}_BrqQ9DpgJ~t3)G@vvJ=jL)=s_Nkf9w9fVJz7fli|M( zSHwC#p-}HvU}9oPr*l$dp>=6bbdb}72CCBcX`hmPCFeK9&blgawila774$!}MBqwz|quE~2;T1a`2^!c}uss5Q)2i@J)oVVQkEQNk{IaPNZpAkAYwS`a zK0M!oETO#8-g;_kDQa%F#qi)h92TGy&0mcoJBt%bK~4eMt=Ohs6i;O_j?TcdiEYiU zyWgSjK!Uo5;@_k))$NC5)ve}0!ceD-fr_3YU-7WOhu3>tzsl8MOnk$gv5qfd#!3E7 zvMTwbutl9YC_l4ArWX%qD@2hRtyR#GnhWF6X6zTSsF_Y4PGUtVtNS@Hal7AwITo!{ zw+`3WR-tmHy+uZ5IBFdhNu1)%49N-cbKnZYy_EgXqXOhBdF*x(AT1KUuC}_Qe^DCE)o9n25tT z92RN-yjtO3O!`2-*gJ~zC+@`Y_DFd!8ebEn8yRgvGTTVLzc#c_c71>sL9;H?Ehj%{yrX)~7}46<-x)~nOuR^`EH?q8+b!W4nc9D{1s zX}mR+$-_pakQoN#?M$Ao_dh$|xjVa^_Qy>?U+tITK7l?6odDem-RH^&;d-G&5xdOh zsdj7l?UVftFnFP;XH4SEYxit%Rt~2!LV?R03PZr_`^pU<&VzvqL2j$zYDP#bwEh4z z`HHGx;C(TJ`D_(!u$+OiJ>xqBVAXlwR_jd+J}KkCK{x=~vG|AJ5qh<-2-XFhS9iPD zuL2l403_#l3XL6J9nZub3m^W$BK$W0&XF{bn{s!p?LmuPG-sAd&mBlo#TpDa^nK)`glDiYB}#E!B4|3OULGva z&qg8i3doLDnryM55O7c?8S6E^x}9ph0_`OEKlnFI6*mx=_S}Y}IW(8*pGdZbySEWi zE46KHXWm{OBmubeAjd0KuMf@c&Gu})xoi@hI$Xr?WDe_SL-Vxr(Q*y3I2siKRtokL zS1c|Y6wu?21m*Z{4glc$Os!Q(Uhn(xIAS{h$oiH5cYTtuUK;vG_9kz=`9uf+Zi}1& 
zXkNwx9{>g$nMGd*@BoWrcO{bBT4%sIo64UkFYsgpXFGeD_@&6=Jr0O1^yO{|0=TKm z=4&z6*wfOp7a;OgP_3|kwR&K;g7Al&7YL=fpXlU<3wd zRx(%4T3(A1p(<+Z?F;cId$MEJfij5BqukSVKeB7=3I<`m^w_#?J7n}x+`ZJFMpeJ8 zv8Laq&U!xH(fxmsiASL99*)L~g1n3YqUys}cSS|Ts|P@&8h{K2Wnu^HAuxKY^$ior zx%v6{;`qJX%8wt_trRQo8r&w+nIHPnxxt(Jd!Pm5(wT| zMVes9tpXrbgHBL~UB{EfvOu6rXfzOl*tcw}$_!^%<8Tb;agz$5;YNT>p*RZqf{b#w z5k5d4G^doWzPH#0M(lVhzh>kM+FPx~*}+fJ#HH=&(CQ=wE?DawP?2vv1)0Gj(|up) zz?>f{TWkCkHYt6277h{L?_@@lbatR@h0C)I7ad}~wFOEqh3ROQ9UpoI@J^H3vpZhuDAo8tHAOm~D}IbZk`nysksucasZpx5S(+-*Io z8bZz_|JMp`31j3^Aefqn&%4rhX|R#qdg`%v$JMzT8}9jsqBcFhv_TWzolyAo9n1F< z3e_5+m>S9emVnG-aLyABt631>On|OCwZ;@Y=grfyeiCB5-n1Pf9bxN9RA*(gr2|a) zEWeV=^$5;MWKTeks#K}PDxAf#a{Y+#DyPi;ZAHmptv%%AM%SMivf@&hgh(-3L%H3> zIuF2Iu2<`|k$Ie9(rbj}V}5I02Oic!Vl)Ax)N13epw9bQU)`}nmDO~KWdbrN)S2Wm z8+7IwwqccL>6W;ET=Rl`!*f7{=h!;&kL^d1Y2kL@X|EY_Jb6uyyM@Dc*~;94Fsq?? z>&^M+mhbV~&1%USLY?FTTo2Nd*@7?J?1zA)3`|#EQFKZ@P>qKTx9Uvo|jH?YNxZaQ0=082I!aL7pGu>+h|bE{EaI>_+Q%zD1u-q5mFck8E;=L&y?q)Bb6l zQhJZYb^)JnHRp_78ursif+OS~$POt5S!)VktJ^>DJYMTuwOZVyysNY23$2zX!=#E; zR9yHa;S2*)$NMk19qvS8U2b}?5uF7vM?9M?Aa`MS^bd}|^R8_=Y}Kd<=7i#3x}4%x zxwUt9A8N>?V8;@Be|=;Bb5A7xhUF9AEyT!v!h-Sjkp675tmgfh9Zrb0-QOI{TEyB#Y4ti87z%krEZ~hIA-(j-Q^Rv}eY!TfE{Lz^=>yc=1_53uUE;TueQ?rtYp4T8t*{dJ zX``i;joQ3=<4iU&&IAz42j@%43<1g7^F5^>5+d!H5;_uKN@E@myXfgy7r}OC6Wd$c z&MS{Iejcws6J+0t)#+3rd^$Ko`7tLQC^xtruL~PLwhuEkpZ?^}BuH~>A5e9Sj zMl9&7)*j~i@VetWx-}9a4&^cpI&J5b*Q9U|ge#PGVKE*U5cT&K#|RfjJHtE{quAup z8Oo7Ur{2&LAk2#UP0}{ZwB@&b31jpBQp$!x*S*_G-dQK-iNTbL)yMZ*(n&>=d z6H4Ufd|cr=^AYhA8*JkH?Z`~f-re1OyM6T2S)c^O)0ZZP&y(yMIG+VTl@$B#2rV>YOwC@V`a8fj;I~x3&$=$-oeOU8iT74@KhqHA>xop zNe+$bOUcMV+m1zf)xkv){!nKJNZzb^OqmS2=t&p?B#5F56#Xnvbup> zZ3t|%dY^H+UGPK@eh>}ZdBytTRsoUXN`qyrfFDhCL@}cyGb*3{xUNf$AB7w zd=mA-b-%Mz*2C&1g^e*@fJ751+WFY`JyN;djC6|w7go?O@DAC2Mdd2Y!LM`l{$>H& zm)Qs9F}PFXNY~Paz;Y8KUMi9frK}t6`@=E(6+<>n92@*l@U7^1=W6Nv8^pF%FFRb1 z_)P{E^cbyt&hXUbbHph+>Ytj03<;t?kk(Omis@496S3Gy8Aabur>FX%1#x>7GkL=5 z&-AuZYmu)OPJK)9`L2Hp3uElvIg*?cj0QxNHN?Azufa_7bO!S$M5l`tKqLgbvSZF0 z5ntx39ZQNEzs1Y94jp?}?X)IP&%?>N8X&ju*}?4KEmAeQM-c2QaC$iRg5$GAODsU` zr5#*Q)A6|yvcS@^VUg(mh9HG^Lo#|%y51WLq#Yft{)@8-&lA$F7X1y2EC7`}bYPg4 z(zAhKQ6|8aG;#Tm9T#MbzTBnEX*A*1@b%VHnxr~IquPz2E}&)^U5()x4wnrz%=rg| zJbGInu5}v@o++=&;|#XkhD+0{VI~{dXv>cr;?OfGcc;WAU)zf~^aSt~Z{`Ex8=iyfBwY zLJ>8v0MCT(F_F)f>ns>;|HNny&0I@`w*)ML2I_Pl6(uco=Q4h2Dq68tX{WI)qai$TMeqH7QjT1z7#LwhOU61}&sO zGjrly!HFL@%3y6oK~@n=5-|kA!lGp?WEnm1Ef`F%_=JwK7Sc_gZ@X0`{OQt)Pn*A_|uL3+0-Z!+pEQZx8L zow!uQ0z9mk#HELSHpIKhekIrQZp~`Rm?fcjCVC7{LGNK>kFYq!pWQ9}2XsEqJD*Iz z%jpL0BtZcLqT^mgfG};k^U}Yw{@}Q!j9$b)&EEz~!zCRL<-zMo7lWgH@OIp3MX5$q zAfHJ`WA$KrszmEfC0R{Pe3D3F2o1aOXR;Cxjvp4_AfcNFGN`g0a=%5(X!CWA?{Q_p z+GGBRyt%p*-uA)WIzJ2&!L%{i@_!#r|4A_Ne=In>jtOaWYGu0lUnjYwF(4djyc(v= za-mm2mQs4y9=>luiHFmP$d9<|o(U*5g3siRKa(69eZw=;US}H(?9}KCkra@Ukk=y< zI>o>ER3NFseyD6x6T2SRv&X2VWr>Ob45v*+th(2=sZzVm_
Lc* z9-DjCqc42Y=8w&4mgA}AQiGW7FHG(+=>B5r(j(HY@r3p|uXJB%VlZwJ+>{`TY|op` zB2Wkngz~A<>=sdNhZ{VQsASbmvB%A8AP@wps$cXn(yEWb&9XY17D7GWk)ax&7(s6S z@_k&t=o5gLF{8AFbhV|nB7B!cZt4lKYu!h>BuYfDz~q)X*MzONyt-2| zZkR=^FG=j7Z%z@_*}PXKnoXBO-93`g z{+vH?Mob=#cwAu~m=~CWEKLh)F2}!zv^q#fSbqYjDtzJ5VFD8hXvDVSzO8T!b_`wB zEi@}nWs@c#=?6Qv>GcS)Y`1Z^w`*Y{XvFd$oXCVY%WQ1gne&n2G=kDa#_s&a>AHm; zEh%2I5$zEja(>5Z#$L3Y%qRg|EYGMlg*y`o)YOtG8TJVjI6O3VS%x(WP8t>CAU16K z?Gr}V`4XXgwtFJ(@4jE(Vmwa5|JGIZvph~ehU5h7@k+-qJlCqT`NJ!$&%uFWn0(kB z;mdtz!FIaEkZPJ-E1${gd(;L{@3{lF%bDuy=>lr%R zqUSg}ANsM96)}@D8^|25jYpPSeyZP@{-ma%xwb9UWb!UR=1U3K@fb!79&x63uIW(# z=I45D{UA3=3k+o}9{pg-rQ8}AE(c(JN#PD3d2x7;XD^Bl_v4K5AQRpWnKZ~6S>aIS z8?_8}?{%=ziYfjhjmx+3*p=(Ffl;Mtr1@z-C&q)bXq9}jn#ijJvpKr`?)wj)axRND zCFSfmP(??|^%3wnY$$BN9CpVI^nORuTRR%Nt}I;MHgu4LVZXAKPjRK z|MQDDAM$tn*+xOevrczx^=7MML8(7xyJHA7=Nu>2^+_b~vi=x}~Qh*8jaq+MTJ(w^cNPb*8uP2or#y#ae?+bG=JStMGRuf9}n zwXX->qRZ7(T>ZB-kq4P5=(qmZ;S#>dJ!cFM1E(T>d9yu`gclo6ez9vQq} zMQ5W%9xROKA3Tx4m=5gH#UhcP57MLT`=T+qh+NdI?o&u7WhgkXDjlr%q*9<))nB^V z3`gYc831)wk?+#vJ(?CO1mL^8%Q)@M=&F=Dy|Y+^GqI~5=US}hL>5O8|2{rYJv6mM z5q_!A9-g(Y*B)D`q#D>@xJtAbDD(tMjMd>(5e|!qYGUJSv+qATsbo}VYrNnWt9790 zf5{2WKicheeBY_(Yp+fdPDa+1|IUqxxv=%4Dl|4{8VPE2Smo&+;Yk{0!7r@W`?Jz)lHFpUu)!c+<`cl&XT?RT5?onzZ$>& zIWs!cH>GAvqD&8h~1ikV2a*jNeIz%I}A&5zH4)16CDJK z_KCz2clAC^koxp68*s-0+W+G-ZB>LUfX&x8S0GtuK7l2d$|7w4HW7nB4xCCze8te+ zIAqo?)%Z{81BC{w`Q9a<(|IUjuTL-iqWKJZw$j%eV>D5}^>m1vV)=4cQo*1Xd&ZtL zy9&q=Yz;@_LwRmcOqrn6;T5XF?m?ken7i@*n50>0E9?Wgw}4d$u(bfZy3(r%&w+L- zIoyQ;-HVtcmvSBng}lYe(>{}eF+-Cy-OCJZ==uLy{A2v6TU8fH8og_MxX1Iw@Q&7iD3_HkipASUPBu?Ffl5AtTry@W z$L~F!Ju<~e93e1`9d^&69`%4dTA<4d7=6Ig2l7j5hVCp1<-1IV0Or9ALBxI}wb5AT z+)A@GQD+!^&yNS#r)1~yHJtfT(L+fi5U5z)Ff?;pfR3z6qfwcvMNpCtaEBS!*Vn~F z7(|=dt>eLwB!bQgm@Q7;w=eI2W9YZV_16VvK&O8^Ul58JU^+nAtL=~g)r;T^qF#~(A zR0~>(4F!pLAg{@Z~)$B?$j9yvN?qGJsslsBlkFN4`%=cl=KQZ~iy?gGxq zkQFtUY3G0K1Yb~6K%RNDd5;w15QQ%pR2SH&dyoMcI{lVPpURdcVvi|0!Yo#VH>FhK zHiaUByeL+Im#zS7U$UAZKF?76MifM(UtGsg^TzULRW6$y^TNPj1n@ln9gHi7@G~Fq z?A~y$M)lPV6Ma&H$rR-G3ddKodionW)9Q6W%X#sLybpW3P$(Ay@SjQL(*b}FBf4H9 z^*$V8FEaD|*6PFhAazpxO=Gsuc-cnh4a}n5EEb~<`^$dX<@L1);NDhiF@0J!?mDCa}0wb;j^l1t$D5ET=#grSKjaeAmKibgR9tTQzCeW7BFr9R70ZMx!owc{Ji0+^)hG?PPC;R>>r!W@+TdxF$_EeT}#;r4J)rx?@)H0HMXGWsaNF0iD)F|xW; zwL~KoIV3hV7UUY3>(d}co2c(zpPUzK42gl~X8`DqXWSKDT411z`zIY&kH0Hxw7vqX z^wd3E>U(B1(8m?xn)2uxkgi9shjw-rxs1c@Ms)9Y6>J&56QlT#-Cy{jk=eXx5(&)U}_ z`5GAEOsDAg{$;LRBP2n!sDGn$4TN1lI$jx`L5;K)FuCYt{`G*3i}gtyl1)4|eTU(;6B zX~Q6I_ncY-u!f~rQ|?(qoEo@Wi^M-zooTpa^#(4-)@JT z6x@&XPN~i0iH1UieUNyLTMz(*V6K=<(^Hy(P((~x2WTa@br4kcLdD;D`qJkfZ_64Wy}&MRA>@$zzT)CHOt!_HcD*C6~Fhs!y5O_LvPQC`c` z;lp50e!xX$|3qG?!)=8r#G-YIS6+IajyIfs4I3~{X@)U}W(WY*{sB_EAGi4O?C_E* za9fcfb#aIRg>#ICE%0C0RY`@<^_1c*BUtf>G3c1>DFAlcO^(kHV5SRcNc7zS%_RbS z?VbQ=4E~olWv_=2U8f)#V>~*r6nE3S*i9LydorE2-5H8Z{4;7*CTyKSFnOIpdC18L z-uuS|sRdCF)M5GW(Vh%GX`qBDDdSQZoG2NoT9K0K^GMe1^NdhYs=^TA4u@-@>qZ!u za^@vD{*_YYCJ9>_HV(qfti&7PmMOq-m&R)J1ycg~2*x%hb+L){a>#uk%($Vpmy&k9 zw7nxDA2+&c|BAg*FZ1$jvnw@+lxeN8RFw41%skgy7{v%!*((&eS$N!$U7F5+NJ)5WEYH5d#-%d7D-zV&8fVzfMd zyg`7b1IqHci&>=*=Jf3B3P=nC#y!HXF`kauvM6&BK){8W<4oi{4S$o6Z~gtqC>Ecq z8Yknhugo1Di-G06{d!eI#guX{~C%JMVX+mQ1NV6<_0=DX(uC{QR_ z6{)GIWm^D!nEy5L>vgC_;NG9yGaklZodPYl?eQ&$F<@-It0XtS2Y#T0$sQN)BB_~wq`+(l zY7Xk^C@erYu`j+rq(-mp=U_NGyi%1`ok|z8X?ynFkyz}9F24_=WO_|3p$Q`ai%A?lK({gI zgrnz~2`CNn&BhYQbX)*G8nRIu_A1x&bsl!xB{`6$`On@9z_vJ&<7}fVW8+6U{MNyY z<#ZA7x)IF{>+o?8j@pY}R*ZSN5Yefs z<9ZZ`HFbBG)vsfr#{B|RXyB*P62J2(z29HoHmd?nE}*SI#ET(oDxc|>!=%rf=XsMu zt6EGk@a;PSJ}1IZWa2M0&mtBUi>Y+X;Nx2JOHDjt6omVt|kFN zCV8=NAb=TmDw0E`m(S0He#Op3}l(;`+BP*AjJ|b0=TVP 
z1gA0`&WABtej_o2;yWzn*>b3~_E3CCe$`PWlYxi1dybah-vDV3*d-&doo%@#WWRZe zCHx8%bB_D2*dV569R@NxbMrNZ&lA36Z%{WKXB4tT)EO{XhAbiK7DHK}*Fg3D{HH&}JzVcvk6+KFf=Tbb zU>hq-h-C-Nzi86V78ocsO-;?dKlIUMxB^Dy>J?PwWVFyws+6x(eIR}}2lDG3?#z?> zp&ZOaITODfizR%)R~oGowE~L(?n_E{AQsDq@c?o9L_|I}!(O~*{fIy2uNAC+`&3>kSWE1$-u}hOP1-?wJ^`%x z+w6cEVxLz7s?uCL?rW-qad)QYKS0Kk3j{#hNQ^YJgg1*UVSZ27;;%H(hy9I+_zF*P z+XKN*u1pYJrXxu4%fX(0?`)M=O9UQvjJnMjPev)shB1I}1?O*Abq)W*xg=N{X(8Rq zt2mes4OgTyU%ob$m{5bmJL>!oA9PAALl%9|K9u}Jo)zWad9l-WX1(Mj*ibVp4B71N z_F!+M7^kY;HUoxqFE}Q4?f1ATDdRb`I>!l=vX__Jz{AR=s6@W*@rH0Rsr!pgolb0! zYO_KKbKHQ{hn$ACYS2K&Amg&p{all^-lirT8ROtM`6cz|w(XM;t!6a=fI1EWj@e)J z#inJe&^iyIke^wAX2m(AOY5;}Fd+lNNp^!DKI5g;QLJI`w?~7+tdiFDYoRN@=PNUEH_KTRAHe~lf?yLGmcpElQ; z1XmF+W&}Sl*6aG2*+v_Nr$GqY>wb*|fJ0wCu|2KYBFqCr4}nj#s;+E}aE*|N@mid5s1S-Vh7PW#%GP>Lt$zO`b91_G|0P|SZcPI5 z-?6yHw(Irtooyz!BN$+Fh0td%$xZ{#+JRUP4RbSfsynsD^-|QZG|>qF+7Kgooc2&c zM2q==T_Y1ns&mODeN`vTcI4n65a4TKd0%20pb=Vt*%Stx(W{om`9L&yWLGreaK?ZO zVf{3DfF1E8`^5tHy05qU7fuX zaXm~^24bo4&8G+Ns_TPZKh1+lQySaD9{Lc~ZmRN`UkES}N4iXY#JiL%cCT98!?Lj( z1?+9w75|Oqep0-rdsi=N2>|&MO~K%vy+j>7}%1H$E;Z973}PAVK;P` zAWOe}d;N!mCZj>)g`W0q+xP^7#6kz!e$j#VZHc3eriAcL?Koj*DGwW9=dP73;Y~gf92g zYCTvOUB29dOM}0m&a8$D25=QE?;GWOQO66`Ju~=dP<|Noz5)z8UGvouIIvf1Jf(or znCGczW%dxzV6-iUn%gBYNfon>>$p22s$l~1O1mkuC;Ea&A#yNsoV;;31aBt4BY_#e zBO(s9x&%;IifzbUtTvZ~z(Y3~RWaKgb|rFh31*Kzv!5>U6QBbLy|bi&VsG@niC2EeP#bsZUJe;b=feR)6X$bP;E%~2kjLqo@FI_6abkxvgQT_vXZ^V&C1#t4?u z{sds$Cn$ZP)|HQo$P3I5o*>nv&9@S_mq~6nKUPsI$%m_#hP9II%`63=sMo9PUa*z= zoj8Ov1pKU&YW1w_skZBgJehxNPt;D+SOV4&im2@Fk;7rB0Jmfakzb-Z%@8Rn*@?z# zmh<5eNV*N%3n=@Wk8vc#-~D!mt4d>b4U@EM5*7y;U;0?LPN!f;DJAWCqf_+%NrExc z;vCW?KhRyj^7x_*q4j#!^L&K*wrf80lPJ7BJHI1Fq)#g1UdJebnlM3AWhD8x$qK2^ zA9Rb>dHCveWmPOWR9jcC_+Gtw5)_{teGhMND|k+rR#M&o}O|k{yf%uUx+>BeCHWlzB5aK z6SG-F(eMmEm9{yqy~-rhlXI!NjEP#SASbqNLPt^<2lPucUY}<)Dan)r zN*lmJ(feDcwUilqq{uf9a=a3!v{c#R+iJI^#^H(3LP?{8P3zVOuV@ zE9DYKo{ZZ-9oICsn5lLSpR430SdOwQsqW~TP@I+icoBk0pO4*!Q=C7}W!%L1eJyO> zGEevY^{LKwnMwF*6zeMhXy$V}@6fxmr|1=TtpFmx@i=R)>Isk2l{y&EM2_3H!JLb? z2g7wTn*&yL(+7&P^$>V&)zHj8e5AkQCWY4BiB1{-S4kRCwP-fS-5y&$=4zao{k}&0 zU--LUj1#zGyJDLlu2(u9oc6b+{M#;pus1S2VlCQM7)W4ATDCzdX@Bv1))onwdjwSN z#sw1j2-<4(v~K^F5cTv8IvjnZpsv--9FiH`WO_6G_1cMtr!8ksHKw3s&TP*4gqMc# zQ9&e|Y4^t-f9U3`XO7bRh(QYbKq3x-|GjR8qFFshxt_!0$klCFaitd=ug*ye$`em{c%5C5vE){Ia*V1U2edZ*S=0FljC~@O&}FP0OCE^z4-Wq zP6>N7Bi`UP8&0b)EA6W6#iVO?*R;rgQ@gjZ)W@khJuSwQq%U89hFu}xcz?S}m`7fr z)xv$rp_DN#c{&x3%7F#9dV9X>8>{7>rD~MT>4tCstcCZcvtF>>CGMX9f864}VRLY+ z05C|gm{eRgiz8c}dSd5fkP`nelA74MVkQnJQ)!);!1*Fs`1S!Mmk6>Q4-60tjUzsM z8gZq#0K5laYhfnx>Yu& zbJ&^y)iVrqGg~E4wp8LgF@p%M$UrrjIS4ESy8TmmoQv~96W>OhV5Oq@QUpOl47x>1aPS8` z^-q=i$Ldxxf`l`5#oEU2D3OUWmOI03qMd@uzp*NU(J^sZQ2|huv~OZ^8NF2c|uibVI7xD-^LBC680!gcg#K2rP@w9H7eH&gy@XPBJ1! 
zR~R!BbvBLwJcc83^GHQQQoj*78NJE&>VUPq2h(}66T>>x#)gK53hN_GSvH~01X*L< zTda0^HWX@;Vg2ePg9DaF8H^w*dstnt)2x0MI4>KAY;;pR&M6yHtk8D z76yA?v{6R;nMt?dDxD@Y7q~$6Z^f$pe@K8sJS(uhLQfR3yVA8Hg5OeWgv;vJMlgrh z1*=SRwgW0>W_DOlNk_6k1aU_>rnNSMT^>Ew26U$f-(*PfQtm(pbZMtrY!gpU?Z`HX zn%kNSR+!x>ULIRhmbo{Z+T>xhBSw>Fa>B%jjQV zssea(2_UvR1ow88LiqaBe*jaly{QaMF|Bk{z#%;&E~lZPfk#1cdQ60Zgu@AbmBu-X zYPp786NO7N&3Y#C3>IyW(YR#ND+Y+~y!`e~3n4yVWJ$o&1S3wt9EbM#VzK{B&R)F5 zHzfY|K1>NXBXW%RP9d4kS8pdUH?Ma-NKP|Fpeda=t*xwx)j@j#>(c&bXt#fwoLPp1 zro?9rBvc@5ZJ$#Ak??w!LCOdTY-6=rJB`ODf>2)o=Y?6QLB!)ZRsqOBfDw1U*qbc+ z?`VR6L79Li9hM;plW}ZoZ6yH;wXAN@PR1(#g=Q!Pl)X@jUS3|U?d`)yuH!?}z)h(q z0M4odR+)Sg`%MLmRU7z2)9dr`Cg4}FgK)sv#A5zoz^0`np+rgiTv(_s>=+F3Rq6B^ zNm*D}Qe;J#1dC+>5s=*GwpOQrLY;1n0Y3%MKZYeM%*4+rC4`j{hS8c&=A!CoF1*a|Hc%fW;D+ z?NHGuq)_x;R}Tn{jEAxZvI zS~ejBLidc6xl%^Jbaw=N{AaNi5J-o`;@-YL|6>OIL?3>_YG(^1k*~fYBO~*9Kcd_m zOriJkx}K7U%b zuO5~99aw-u2~NTGWoHPUL8lQ0NOhs~0yCKI95FEio5duGc)au8XaLYa6z-{6rXmi) zM$C)wcyl;|{g5T#^XW8%W*Q580<}UW$`-%nG+?W<9txZeR5FM)E)b{U$b$jabn2)= zH_HBE{B;FeSHV=1m%EAy$o=kp{(Baaz31{Y6n`?A0L2{tvbXd z>C$YAy`efF;tCYpwLOk3#N%-?yR^LJ%+;wk3y)}Y#r)_3a)jlCU){nsYjj%_#GwOE zfVQ4;z|uzYQ%SV-a&(F6w+YNfC%|fFG#71;l|V|e9HrcM{s_U>|R(aCSyPG$zuLV&RljJ~V*My+h`O)2* z@ncLr4gVwKr3hSTQu#QM7RyfVVGrmZ6Mz6l?w3D=Q(&YPQm&N$li9(bWE4)oXLh#z zXNKE0ba%+pAwXb&lPLm5joa`2m03iM4n$m|9bq_&VOtEQVOV&r<>O=Hvjf6k*z`zWF3MCWst~P7| zav-_~IDvfHtcR(fwn$P98vb-^Y>{$9LHv9+LoWBKloLTz(GWI(NCU%V1PJ@KkB(?~ zO@+)H%#jlp%T(c6EM+L$c~l#l4&Bby@+@bvOO*>kaurr&C2%un3EuT1u^5A(YP_eR zm5ve|JCU@J*==V0J7s%-9cZTjpyxj{q%?UaJTZkCPYy!Ry!WM0y=H!0fOy@r&8xax365 z_2q!4PGTlBdcFtj`I;1%qMJVjk-9r|9(=_rqc0G9W2g*}oEY^J+5Pkve>*gBgjavZ z^`-np0NgK_^&nPfG7t-ZOOKj6z0ULT)y0%CLmy_DNhAPF;Hx;fd^s;h)wE@`oN6$_PF37dx8JGFmI7~@2KYUW$1A~nI- zf7uRU@|DN-+#B)5h2H1N(irl{p%>+0jE~o_V3u+1?fK~mRz9=o0ca?Ft9LLUeyP9t z17TRfe=#a6Z9q$z+W`#0KPEQM&;SaQwXYPF#$?XXq{Y}vt;Ok)q)XH;T&rEboc4%uCA{BRt?yn*ja#DTU|>Xz`?-*G{GML zI|sa03v_h?09{=`5C8yTfDnfezym2902jQ`{)5H=X;y&Xw|oFF#kuqcZHvSIXZa{s z0RJClz&3wN;9J19fQz?*fAQXd*KZV<_YU{JO5;G^;r&7LgU#b{~ZhXWAE+b2M)k(u(+gyqpv+kKLzP~0e;>W`6!U4zw3TM z<6iJc?*n!aq?s>h=ih1J-^%<>U%Q|kJUku1G8cXJcJOexpxZ$D*#mz^kjA?O(vc5b z9Romm2&B31`Fps6G#aGoJsj z$3U>30HEsS9pvNc?Bd74Z!f?hE-x>~q3IZK&(Y6MX5Cw<-kM!R@{G-%=Ph8CH? 
z|49F1fj>(AXW}pK6TNufA7jUH)6vP!|DGSm#h}`I-ShJI;qdjgvv=eW`MVJRpDX^Q zTYu??u%V-qqmQE}_$pIS%3M92!RhvNaP@Qb^5k&!{7*Cdf2{VGK3tH$%QZ;Qvb#>;r z$W}Ep=CJqod2m64YvSTY0FVMy06oA0Tn4xRen13}0AvA0;2NL?Xao9yF<=4M0``Cl z-~spm0YC^44nzX6zzZN5NCUEfT%Zsr11f-8pb_{2bO7DJ4`3LW0A_(DU;{t_hrlVQ zoQZHKaOiMYa3DB5I6^oQIC3~DIBGbza13!QaPHtZ<9Oou6i ztH^0d(vae)wWXI&Rhk?&DZQ1DV* zqp+m#qj*kHNYP3$O@XGQqZFmoqI95yQ>IhaP!3T3q9UQe zgqoLHmD-Lvj5?jVj(U{(fQF7noJOCrqh~^v3DlHK$FRdD_1MO4V9NJde1v)%B zE;>~@d%7odxpeJxOZ0^FeDqrMZuGJArS!e@C`d309GRk-%9sY2j+ieoUt@M;j$uYH4>O;zaImPcxU(d()Ur&m;$YZ*BoseC{8v`O-_H#_nf_)r&su{m|S^$ z1#xAHiXLBDPm()QLQ;-WnNmNc>7;d~qoiA;&t+s~d}PXG)?~S5@5sKD z9hIY(yDb+d_f?)i{)RkUzES>IK}Nw(p+aF>QAE*Qu}E=MiBHKQ~gA)r-|p8qylU8lN=@G<7r+HAl7Bwd}PDwNTo!+M(KQI^;S= zI%zudw*+tb+^W~b)z#L0sXM91qvxqts}JaF>A%#Uy3Kd{{_O?>LWA1|=?2S&;)WrH zoknyy=U75qk`GND(?{7F}qWA=gdyuF5B+luGZbR zcTx7L_Nn$84%ZxBIjlJ}ZbAh^~xNN$rx~94AxM{j&xgELd zyBD})@0s00co2Ks@u>5p^>p`a^SbO6=+*BnJRgR@q_JyTS7QP!b9dluZO-5 z!wI_^)&l2(KZP$n(s=YCoHX1$yyx-N$4QU(pO`(Vf6D&!(bI(pjfmn%%1EEc(I~~J ztY`SoT%YwuOGdwm#>6{3(prLiO|Fk zNi<0hlNMg;zN~r0@haxkeljfiTZ&XlRw`+#f9mY(TdzO8;ePYt%~_gD+Hm@fbi`Y> zx6yBpG8{4nGOuNpWwB?)W}UoqefRUd#`~IV-t3ed!kh;=%ekhxoq2M3h54-cvH9l( zo&~cX3_r9L$`%$Du@xm0;}-iDua#Jr{3yLy`l(E$>^*`J5&IGN=>KuO{7(5ug-%6F zrCeoM6;IXMYP#x}8l0M-8dR-I?d&IuPyKb;buIOZ_0v_%e8NuwzJTsC!s%xPQcC6?YJF+Ld*gI7a*Jl`?JxdcRomCLzoX1iOFKS0*xk52mc4>~ zsr{CN+XvH!9*3t#F=!U_hhw?p&J(kfmD7MTqO;fM0_XJ@9n2)w1B=Ce255+J_;H|k zI9C8%8XPgros+uA!izkcf)7#tcN8T~mnJ~zLxxU{^oy0*Tv zySIOEc!WMax!?;tsr}B@ADsP-FB*_9Tzq^ye4-1!aBu@I7^lG};1nmMy>38ccc1Qx zL^v`1jl}omUr4wl4N(ktefmimxuxcKb}m@^jkEt9V~_t&oc+PrpL|UKssQe92@e++ zj{px3kARQ>OoYUsB|=C@OiKJ)BK=1q|1D8oB&t6W7Hk9uYy%%3p9uV=CL8iRQa&3@k_w!6C5Hl+KZxWVfLqGkX~fuZ_m zSdgoBSm2wEK1NdGOe&zm>x3D;YdDAn;zn2DJH&fp$H!F7u(N0sWQSpQ`}hc24?Rmf zNmmQA>^xNqaZ6E)5eaFi!!foPPtsZicHKYLq0>*bEvF}3w~K^&9t>m?XAdsjMo|ou z#Is6t^dm&s3JtpNm@*DLpJkL8s6uP(aM%ZNxr0z{nw{$xA2;6(zwMb5L^f7lx$^TT#Vx89fI^wzy zXUu>Wk3oKkrWcnOYh}fjp>{=FrY633cP11a75Lnls>HA!v$wm^?OD97fSl*Jy<~LK zWCf21+FwR$96{bynXO2Vwjsyw_F7p5$6tRs>U;i%%UqaMEuCfTiXdZjc`r9vhyL|6 zMDUK)Vf@EEr70Un^LI=7F0=TY+n0fnAMt4~An$EI`4& zla7gmoV}{T0t?dneVY?IIvrt3$dS!6*ax^vXoko*-eNmr&$JU3IM964%T)I8{&eH_ zjn)!RJPIWn*DoD1P3g_0qUAQHnJK#Slw|DD&lz8F=L!8vuF~Cqx%Km8Ach|iuCmIX z9RvKQ3;W~Wi!dSq>Aro;Smkwc3ONzgJ|(P!FroQWcba0>PBv}tY5Nd7tmAb)P_j`n z6O$EcLb#tDD1F}$Iay}RzmzuFHCny3`{V3>Hc}r8%%wN8V1d4benhpe-gSfB(?U`X2uC3Xpe=yi@6jL$d9%Sie6z@2`=s)dv(4G0(>Rt5SCP?YJ1pEiAMQCaJHIg)U$L@+njwCIsHM0^<(}u14FTUUoKK8Mzi2rN|64ZdrPoGiyGP9GEH5NZ_uC_3I|J2j6dtrLy|mv*SlGr=j#&-A2iml~Fyt4W+j0rnR(V>uHN=Ndjtj%O$LfykJ)e9bI3% znyP*zXvOZgfX;aDDr>1%mq+r;--dQ1IFO-~qoVpjreXWIV$fB3^1{&D!NK&bg&#h6#7y{c7aPqW&Z0KySh;ig%NbkbG7%$0y2W_Iqa5 zsXAF^4Outo$XFc2XeaMh}7HWO5_pr)P4`Qx%K=7d{G$zA7zHVP2SEz^~{iCftAiPjQ(oH)Cz7*9-xC;6F%5;#vYzc)()mj{GS<(8N3#bzbcIfT#7#FxMs{{bON7O(~>gHJVlU{)d~_ z0N@Uw00<1D>&jeuvTZbghFWAy;D}2>jZc5QI9HBf{fp}a7RGUY9YU-{Sx-st8$K7) zIPb&!G*Pnj4ykBaM(X;O?JtU$(NUUSTBhPvLOJvyaQV-V+*XUOyVOpFlkkYhyd+d6zB1 zF3~nc|BVh&OhHUcLu^xmqD*&AY^y#6pCgZZR%{);gV_f1OfUh!0za-mP8Z=3L~|OC zi1;+4FvpNN7`WTC-Dv~$>$CICvr!BuF9ZYc{-v>~uExS};tYp;p0KKu>QMkKd3DL^ zQom2EC&pe{N)e8+8$ujMMQ{|tUl$(F_vIuY(qkfbyqK)v1$1Me|0z6UB-FHlDMoI> zHZZU~sHS5$)SjHZ;%hr52~QezbI^~sa_kGHRQ<}8vc)97 z`un_pCf6(emhGW&GwaTSH#wkzU>FooAKE5qIa&0b?#Ne^lDn0_k^3q)%?kqjJNkNRnG%DYXj}x!cYlpKuGrX)Sz`P>8c_Q{18o^2!mqwZO z?)ss$VMzu5i*JdM5{%#O2Cu9Q#*8w}KY;uc2v%`R8ql)geioAT``xK@kGy&_(?OdD$63j&snKU5MeEd-tHdegosg8oH#Y~qnhE#*V!#vca|8fF zxgN;v+beqVV1Mdfy<&w-G8y(R!S}! 
zDI_3|9%hWMbst&@9j&>-FPTq4a+!qKqEMELsWHrpWgq+9zs>B{jY24{ij2*3XQkDy z*zZBEGuy3zS${8;gJIp6L!RULd!6oiVS!H2WHI#h zIw5)iFAIX5B37dY%9>OIw)U@9_0i2;E_g+3U zVorcJ7)nuv;cK{I+@5>GbJ*P9EaScx%VClUA~^@$j#61y`?TYj@Jz|fJ6C4Q>T4!@ z>w7DcHzLL>^(|CQ50xO*sjXz@hU24J!<;c!84NgTO>Qdy_-NdE=un&=bosImoEvF6 zI_V29do6$kzTdpk?171fHXigikb^G9=lEW#E85EOcyp!+ zg%HV(`iP^L$%E&ZJ0euIKWAyIuI|rVVhd1O>H1$xM$Wl;uNex}s}c)qJ~~r9clb85 zBTj0I(666FhVKtYX4&OgYS`slsX2R4x2TbVo5w!sI~=r0*S`+_8U>L>)ed7+!NwtH z%8X%c%(Czh~Z`L9_B-DA4!6y<%rT7IAwi8jyq~%Tca2Yw6qNS zH7nW7iRkMYj08*B=*_IFbXhK3-ohPKeHVE2C5%70w2IwrWOqsByuK@YPC1jYEhhAh zS|IfAFH=Tf`jL2zCbVDXVPrPz&$9fi$1JstPYPY54Han~^J4*{`@86mNeTJIzaD}I zpD&o&*&Nh-j+hN(IvKfZwE0kI7J;@wlJC!?w=iLZf7~D72^8~a&eYu;EE217RDM>f z%XC+JfsFZ$d|BenYQs#n9m} zt_1d_Z+QyQK?VLN(9frl|GFM8jv4i6aP?lBvcjBAV1b>J#eXd0#IgV17k?8vU}vHQ zLMQ#Lih^|DWP1M6gE|p^Hq8i=eqsV1-v&`ddzR(89c#Wfw+jTTU8ETrWLtJjZ7j3% zv@3jqxyD^U19ET`H-`V+#YKf3`duv|dZ@+|-!$`6cPieS$6IUGJ{LoLpBaCXA{L!* zp?TBY{Ik<6E&HxS@aU=v%g9qnHG!&Bp-HqDEq`y2eH@+Fn|mP}uxsm?bp6#0UL@Tf z9_())nASWo3B1ke9#AW*{-8rv<>v}P0-xJs_Z6?|anOQrzyi$?e#Hw-jvSwAbahP? z8{{^543^?Y`t9%fZBsXSC+^meLlBzINHc*pqU;rq0SDTWpX$B4IcH#c|6u!A)4oFv0X|Z z->W#u_ev03(^TJInp-j@6+dYct_vHzSu#?LfM=eb1V(6k8lE(k9XIy{_*bq?EY(a$ z&A#@?MoDAdA7%*Ye`0+_6PjaGY4y?m>eqHUnIn9MM}c?+ma)czAvjX0wtTOwGX#Gf z+$&uupCG277q4zRb=2S_Q?ZXF_12sex7CwAX3`0zw<-9U>G(-}Nj};5oySaY#9i^X z!KRt93mzJ+JIc-J0YtBMg(rApE=5=vaXMU=_+$p0FMT;GD@rWrQ}~FuR9h=y={nLg zqpwn2*rR8{#;NVco-~u(lbhj_hj`ZJWX-zIPjqW$f$}levx@9~QXr*rY#2{V{WGf}~DL5F2en#rHRMp9uLx zzTyXQLusnfBDVehax(!L(GHpp87loR`JX!*33>_Zc-SRr{qkjlQTu~ZW@n`Z@zoH(cmfwC_Z#%J6Mp*L-i%lX$Ec~%5=um$bV$3b9grg3)P^P zHMESASu~h?i8;cJm24J>LrjX}wWjPf=OU-2op0Q@5w0<5Y>-4kT7>(giX6x0^)36z zD}yC(rAOa*gfcXgVF8rbx0o-5cH5-7I=b?<=dheDEI=R{hP1?xRm!$t0m>Bl_P1Op zVxe6r8!K_riuZhv@JP>n(4r+-GWWd5E*FBh|HvN@iwiCJ8+l6GdI95ze z=lloL9Krj=4DfBPV||FEu9`PyDx-Oe(ef8=}G3l4m+n;;L|l zcUOhonr*U~!O3 zv-IcYWZ@;oYkCqBzTpjW%&>ZZemg{M#Z6+vEZ}Rc?{_EHfFe4fn)y~{;Mbx(s%(XL zI9XP3XE|ndq@q4x|G}_m+P(V%U9a5ue}q(o8ueQgKP+CIu?bLq-u-!+cGUFOo@VCZ zbe~$L`F6yg9-DTa!flwQ%zT$%G^h%NrRNhFpbtrU${v~08`yUMepEnX$_ zv^vmI;V3THLkUBH`Yw|?9+G6sZriFAYvih`ap)}BNuSM;F8ipMzk6Hp z83#W;>sCut zRPnrhTD?}$UfuA%UsHP7aTnesdp=S|k> z>5Wm{rl4x1zHZ}mbNqSASnB*E();zjx5LsG5NXY$?pQ$iCFX=-KXtRBTdb{L_@;tL z$)k#09MRV-?Ca=0$PEkIE8-v0mwrXho)H8t`|${RhP0HIBXF~ITq_F)j( z#LiXvV5z9p%QK$)c_-^p;4_SQn za;am1#gOH>)}(~{e&lX%pFw4o{t$;wvDJrhfxF47o!p)>6$&+~Lb%t79t*W&C<7&1 zE<>`~Mo%cC?DHq%ZBK@DJUr+n$o&0<7#D9pz{Mf^VV@Tq$TBxVSzk!jv%A|ozHyYv ztH2x;_inBChxKH4b6RLS^gr>5pg!#TZ#hM&oxfFK4f|x@^_iutYiDrT^Mx;+e{*R*kz`_T6k*ojdth zpj9?R5(JbXd&i^A;B$h%0J|Dzk^!yI;~gvj%?RG2$C-uw1)JQ1ZtAdE)zGopOa2>B}bAJP{kohhr)g5)5(-T({rfX0}JHTn7QVlV_PwtQCqBp z1@476q0h%1-oT}}jfcOCJZ_vF*t=C^-X;0q>0p@dC3*ujUv|!aOdt_<-RoSUoZ=OFfYKX^;e z#UVr0_9YewKLjxcn=oYqSP_WFL0aK`$Dv=(keDmeT$m7zZRbs~GvS#FI6`_2iiUu- z+WY_BoR88!b**49%@;&p7I~n){)nHhj?GZnui2QE`SD2&XQx^p zMeCBDPmyWVMa_C7-xgycXFi|wmW@iHeWtC7*s2lnaEV^y zQg+V>^m_ETP}@AE)P2fAw4q5MgR1S8-;_dmYeDdM@7Dg9`;T)Fq>e~O&0r#%KzkAo zw2IL6&^f0wSuF4#p@juLpTaSs$jcwf63|c3UdLEKtd0#FMaCV--aIrIY6t&&*#xu& z65RV&-3Q<(74Y;Mi|8x@&z#So;4o(Gp1y#$S+&70P|x#-e|__&h-WFnBfF_0Sybc~ zjl*-zp9%_cT@@c^A5&guqk%=6oGhSx=5?|nRW7+wYMW2IJK=UCT7%JbOw#EH=IP}r zJh$#0pC~^JVd?Pq)vgv8%FpRSBk>+pK9nDuV`1gjAbQ&Hg6ULnVI4?K|wsD_l zFP)0pnVg_dV2Ck)t)LK8@lmi#lPfd3*#|K!0VA^&Bh#ndVQcI?VEm6ZioGm;pcRd;{8w!I<~RT{vdwyfGU|y%ErI zB6h}|$urQXxDuLms<%!Cll45$^nE&2%O%%)KQYU)7x9$lsr6E9zdH(k*zu~-^3~#d zHc^Cr0v4!-I&XZ%$gI~WKp&TSAtsA7%3R197;+QOnoLL3%@?lOYWys@@j=+zn;2a52!INxhFMd%L5%JFY9}<#8hG5mPuXeNT;u*)D0h 
z(a6_ot5T=#L?IPIdu9_Y+^@uvhZNk)wsA`mvtM%f;*~jZw!HbZAh&gn?iM}pfxO_r0V<5ZLjwP|N2=&tXg=1w}XV>W~Md4 za~W@aKIYFG<^d`c!7!YOz!CS4m)AF~^2&NcvMcU-9RJ!;*qS(#?{gba!qYpeDu0`n zmnV}kUO(KOZZh?=*V@-`a@hOriEQc3;g^;D8mSs%*5}>>RTN}5o?5M@`PXF8?&|~^ z@<1K~LY7GC)l$dVy_b#U0q|MStMPqdem0Ja7xt9PdJ-cMi8J<}K*)JmEg;g=>0kaVs5 z+DRIzPKwfOo8^8d02S0+Msm9!A`J$s&TU&3bcQ{08Y>&rpSw6++xZ%IwX5s%W8!s+ zkW^;W=pn4aWIb3fc%!?U$KAi7`8+tJuW0Tv7iRFVECTBM%|pH}dpj(}bRsXd$}BZd zpsZ0Eu@0g1TnDxB*SUph6x)6-CfNB8nl2Z=C=N)8O?SWuH^y4-nA{ufvZ1!*==(x{G2l+E3u4xv&$qB*YudG!KQK-}_I| zI>sK|mEn6E!5t3v*L@u=shpE)Z^~QmDdVmKgPA6zERZLa**y+)6XEkaXsX)3g{F-; z(ZI0i&qK~I1&h#@1hgac8GP>~>jenS!X@rgfO@_i_DpbxtRi;8muq2$FMa_s|Ztg16wT~)L zMflr5Zq?OQHX|D=G91Id+jrGIpR=(@e&XY3NJuOv-$CeC8lv@XW=$c)mL~;0y=KNF zS@=jOw0MgEwvwD{7JpRO9pQ6*m3UrQd8ysCk!+Jcl{uljoxf%;CnnL-LH%k@ zC_jgOYAQ>z!d0mBXk=v6x!1rpea>slYoXPpW%-2p{ih)ADT3k7Z8O)2)}mXvYz}u8 z-X0a#LMo5ii}A1k9R=flhr&0H!{B5Pi!x~|Ks|3#(s+g@Hhy*GZE4NytB9mljuAZIJ-`nw@qE9u1=)i)qINL z4d;tg5-4ynJl~K@QIS7=M~bGDS*T};5cF>GZEvsHb{zaxxh*FdUVKMzC{rv+D>oU$ z6{O-IXd@UYxZ#+v^N@3a4w=3?{dZuWhh52JpnZaJ&=o=?fcm z(7NKeq?%V$@6M~dKE0Bc2Tsg!FNsW*k_FTdH^YoVd{D56GrR?3)XGyS^!}8}CG(#* zBhrN}zBoDYymxHFOq*1`Uco+p(=ZQ~O?v3GMp@YBla4*bb>A7ItOmNPFarVy2~X z<~$ws-0l~W6@)GgVy2YpNA#mcm?BM@hRQ}9{b?_0Zsk^ovG681PDHb4eo+@B=Kh6G zIys~iP@q#Bs5kFk++&ZSIZc!rt<@~{6%KQgi`IC(SYk)nGvI$$j=5~~FAC%(hQ(-J zLZaTLt({oW#wR{u&ZEJKsF@jZLnuBlHrY7M-$Y|zOG78WUZt;~G+hxAHI^qQtu?sq zuvD5BYEo+xDw?=_LKMGM#W(DEC1Nq<7iMH#n&Y0!Y!J(U9y}iANqBuZZ^tZ7%EnBq zsb4CPFS6frvnQ#GNyyz5zdYpBd|tzO`m=*Wt+HgwiF33P(GHg$3em1lI5y#WDWZL9 zljK7m%SUxw*Bo(chdb0HT>Rf<1-=q(Pm#?gY#z^t%7{3(*56}Ij$06rZaV*7UVBF3 z#KrY#U2znVcHh+M>05ES0+~FgxA#U;YR9Yh4`%8Xz#!z;^9xE!e8iuoq=gDfw-3EM zLS;6?9Qt8}NK`MB5WZ>)M!Pj2G0NbvD&b4k2@`VHa~pJkm_m1$Bxe2*-v;hu)H&jy zu)H`0jq|TrDaK~&UjJPFLWCAj&X|bpv-Acq%4!JSX59^6nZ&Guaah;?Uf<+22qp?T z=fIE>Cph_JJk+E-;q-ckVb`UG?$~e8$(l&BWF!GXBO~Pg4gp1RtL&3AWieVwIo9fd z%SFu=LM|l=HOuKl;dCLxs7e1eDPPAMemVT=N~1nnwr33%4O>4>ZAremJL**lc1gaf zyO%vtjc6LEIoDyO-Esz=|hHC2Te_k^2V_4!whFv zR`t|(t|{Z9vF_~2@oB_!kvzoDVt)Wa>PXpkQ=`;|ihj6la8|aR%O{QLsdX%X5P2NK zUuK{CZ=$?#k~ouOuC-Ve1MP}~}?>EF{WO@06o1(HP7?Z|v@cEHCh4(n3RV*rP5(~HwE6Eo0_>RLX z)hiQni|`V@6xU@}=h^yA&G9yo7yXD(b8VMiDO`r3imOibLX;8i55?K>E#qqWk-yr5 z4_|*iBuj9EepKOKo7p7?k&E0XXn$qtTk%-HLxiW+LGAxvo3R@ ze%0k94mD>UJyWSLzm$tNS);r$=#sM`mDR2rlCHnw5r%W%Ry_9s<0_5j1~;=OgDU)I z^1~~q>A5R!L+xIaJlV1Q^@SWy;H%POA*fORe!Lr1#RlVTSPT4dV2a7v zYZ_~=kjv;G8nN6pkN0&e3c774&MKd(k~rWrPo`t-@Z40F2 zAU{F^3p}N2OHo=?ksanV7!Yc!X^!2IxWNZKI*HF>-rsHzjY51_WfAqO<(dj_VmTTo z`k1qx2q(CSIXEs3<2VtRm=hfwds>jw6UTlre45MyrB9v?kQ(E(FuYu;Q0lOK4r*HV2@cxRE~ zGc}br6>L!`d?<`D5YMpTrGW>lkm8z_nE%~Ad!NN5pFztiL%JkA9e>`2;@H?=P21&l zlSt-{<$yay?*h0{1VPn(F)jJpg)GxfYg=jXDF! 
za~ghHq?~DYqDEGH)L!T6E_n*RZ6QU+K%|RM-J|%QMEw{1Cut_>TL@lhmYf`-_DgH z=pR;B&GABF%$#kVco~er(2nPmW^ujHs0G=h)2L51s=H;e*q!iRm-3k3pm2Io?8CCwV(54)O z_kBWVi3+ERm27@Ggr}pf{T?O3{MEzdddrQvef__AvB6D(!XHE>!s~0#k88m#;NW;r zEey=1ZV~&QJhN8Z=QZy6E zv0T!W|HcurG^QeQ>qSU+9g;d-G+T6mN1P)l^>T&)g_VBFkJ1 zLw|v4q1#DvbEMcWzQ?^nE{k>5JWOd z$&F^>C69^LEHni)b=s|HmdE)VEdPkOqqyOJKKoOrbL%g}^v zWK(MlG?OLB2~mXS>6!C4)NGf6Zne*o&e0vig=`MVNu~o+`Z*sEAcPx{lOOP-8Z@K5 zgOM0njWZDuq2?$Ad^QaXI*m75hK^N(wtp75s$G6y{2n?oEW=S%`(Y=dU?kLIt8I)V zOgV7xuzlc+Uv zy65QD@!P;@jxTfb1*X~Bhug=O`*-LTYM-jGY`j=y-O4*?ZyVY0VI7gnz^$wYbh@G* z+&=6O@9~EHeXK-6!gk0n3+-g&Bi#vlN@O3zjP3S%O{=#nl1)=T32VD5H#C)yA zhqmmf4Y9U6g{BA=-|Dg^{jY&f+Jr;N9kq?Q>*xmcqW(o}2mW;qLnIyZe)i1j!Yq=6 zr!Ci6LukHzIxFvE7}WgKOL@!1M_pa6LoXOg;A+b{<(;Pbbfp|Q=NrONwbHtWMyheW zdyLX$1zm*{Q{Q)}-rX{b`I$vEj&`=GYewbeOs}TO)UH{hW~S$d`X{mYKhTU?tH57#a!9|wzgX4xcez)=IcYoK*WHi|ePfV38^9wsTm zCv+4WZc`ys!d`(&<(`Ml-K0shEej79%_LchF+ON2T>to%vf*aqe%HZeasRm)a_bPM zfkvqP-8(-H$eM&hUw_@aY352+pDKdPSa~tVUf%9%8%QG^Dj`=|RVvYHP(S1$>~Tq_ z+5P96GTBHbqAjKy;-3Wr1mxt%aayv-7h_+J4}|UN`;}w`O9%T;aU{^J*q-E>1ecbk zH`tVAAO@ZK46D$isH78@1F_2!?kd7V#*q;B+FHijwdYqWm!cc4QDDZ1=p=xkAQ^=G1&NxPN4zSA zO6FS1mV`vU4a1Ui>^;}S2JWP*vRVaaTn)&^Ax0Ky5)a|r$84>6Jt70$5?)u_y+5~c zvB^(PCRT>IlX!bk3DqnWylt4MsXvslhGjKaXu$Xnk)cw*LP7QR&)dwu(S;%9V1T-% zO;)Uw4O7Pv=GM{pn`*{rIt5+!rp`o;pyd7>T7`N}6Z^!MRlc^IHi0mzUG3dhu7!$L zRt;|BgU#QU+VB7JRTodzUNtu0;ijfI$>I38LU$V*ZbkS(V#ZV6ILKI_pXjr` zN~B~E1;63J^MbO&A+Yt~>%F101J<){2JNeafBjC23S*RI z{gWGwsNE}w)gdDJp5Js0ZlZoC#LnqgVlGEQCj(y(E4YP>uDq*OF$ zECE-aM8WudOyi0_{eojbQ`3x1)x+xYHcQ)an7U(SvXh#kmQiBj#6hvip>6aj8QNKQ zU|?XQjmgj^(r2)xDCTqd*l8Vp5wSJix_W3~Dzv<5>5W)E=rc&7OtiVqQ<+uLFBJ=( zvD}K$i0ir?@lpe0b#gO(v#udya&R*DPwFN`QqyZ{+B3%rUW*CwtBBG|hrUm4$&EI0 zg{-RVkK42{^;4nABd2|_;D2IC^MevK;{_Fa6>fNC>#evXm`2)2cRv&xAK>U5bd}#S z2MraY(njxq{WSP1iwaa%!g%|{2qpAw;;wF3b;yOE1U5HwvJaxK*BIs*VHIAjr>|lT zJNC9{WLPfCke4RhAfyO2m^){jX&qOqNpSOJsEEKnAw{c*FTl4fy>nN0eHzZp&S{RU zxxT>1{P~Q=_x6N5yoS@I4Aeq!j_k}kTbm&sUKm=_L67H%CE~E3cB=Q61i1Oq<0pLd zSvsXF)r_4MDtT`kPC-C-O*PY5G9h?i*F^5i3#}-eYoHTRe62`Y>6_qY%EEp9IrQ9Q z<(~x9o_j}QJy1IkR=np^3`-k5_Nm;R+-MamyIXWOtlZN%*k^i1#UaGxJ-Xl&qSMsx z>DQoZ+I!-{=TI+MHYPHnb|ar8TbbwHW6iZS%lm=rob-%Gmm!0{ILi}rD>Y>0YA6Hf z4f)7dVIt9FyLyx)42Mi-5=4)oDU~5t;+Fj( zH%9HM8>*r?j2j2%2GbYsn#!4VQ|8vvGIzgMmT4d--#L{14}FjiEL1C2q6Pho=g@}5 z=6UYw2b@8t}elqjp&*ExHs4k=r9C`h}?!Ggu$!u#I%M2<40)j|Y z0qGz}Z?VzL07DHuDou!hfbwB z{(P+HFw2fK$@@jUfMbRv*$r+Nh+HfY|L(Q_HXc{*V|BmL{0jKgZeooM1Ea~o^=_i) zU`z=kHyLWQzKvJ&V6AcTs@gXDeyQD}XbG<}6 z3QLrWTZ_q#iJyN(R3D*k021#or%4eYNROM1L5Iw(aSl%Spr48n;?oY^twU|bFBO|K zAnfdG>Hg>l0%7*!z*dd&=#@`rP&u0&Pt)v#FGE<)o*ikHn}0Blql@cL&~~4f+6quvKJsvL|uhnYiZ?UGkBZ^kc2l1i!j$_X`+x64tHf z4Oi{9j|-1?+^_)}?-hm{ROi}qkl-`dS4y#OJRbUuScR^1CA#m*x$AI6&B(;fNsFlS zJC+9B4@No)x`=kccH9rX)U)=p1mzSCn@d`@dvMtt(eOO9%to-Qah1L+>wM^$r^zYv zE@xm=EyOIl5@emeoJB{pO>ob8;kT2p32z;**u3$@$BpeR%qA(FSc|Qjsh{3Z!a8o( zd1$RqB)XxNE3^THrkd=eyPvsfXxz0G&V69kra7s_+E5W$3oi=JU|lcnC%XJFQE{Wf z67DPhVAI}p*Y*a&&14G^zT_#SqCTy9o1$4TDo5rpVlp=ep)j8W^s$RsW%1K7WCe86 zG>>4c!}s*Z?(R@^3Hvoy{;UeCfRwE!)EO-#4#ZPDFHoaPnxB-S0B-w846F7T{O~%B8Z~Z@pG8k8iC^7 zO22dOeO=*dE2}_F=Ft=QZ={Wlc9a8-qoSGnZc_$VZU1+f&TDOrM2G56sY_-#HA#!M z_ufc+xHt6r>MYmnhRl#4s(UC6qN5#Z%i#=8 zNo;bjsP)PrJ1C|njy$9eCF2Fg7iQemLoy{^qYc0ytLZIJIyrcCC_ z8Y=J^1tc1h-)i0C=LaKGy_B3pQ}Cz&8`arTvWL;9;xfH$()bYAlOHH8o_;g_nI%?H za~|O&`?oWQ=!>eT$87hnQ;L`i1pIF=E~2ORh87E2soaazp2$e*HEl=OMX~A`p>x;r z&OD1zj0~pL!6|Id)rkN7+|e@Y+kk;d=s4(a_Jj5K{x(enNvJ<)=ARQ?Z7{U%%ACJX z+U%c*-4B8&NMU%(a`!mPYB9TZ<=FH%x6G>azaKEm97BrUP3oTNG-zz;1rc!G55))K 
zJt)kY-ZZ)jv%DFB1!4cRXZpq;R-CW+F^qUU_?a1>TbKz=9#04ly+`$b#y$Ec!E46` z7B==fh&dcN_71=0DZ!?ydH)(64pouGF7rUCnf_dxCEK#c?+|&vfTn&V>85putH9tK zv9Fh;ZvS#etoH0?z4j%Z%VyVij|PiHWv*2G-LHrc_e(9N^dagz6apxsd?sZBaJpePQm&(`}rYPo{ znvMwWlUKhBRLASWgSA$_oyBowZFXZ62C_u`)w%sW1ovcjB`Xb(=JGo(=1&l!&{RKP zY%Thu&p`u^;M-9$+`0st?(_IjI)|NF43Br-&J~X3g02A9Z&7iV#>}%cGxcnER2DE+ z{7cdx#lta#V~8g1r>qB?E0(dCb$_GCoe^jlRUo-hu9aFXa9~WV0By7hOW;S;6y|8> zi!?84GKQw343iG~y|x`C8*U`=vC>i3QpZyvTRL^MBdUFo8~!M|X@*n;=?cjtJzuDW z1=6pT_|F{j=VWo*~_NRwB^0qko0?Bzf9FZ*JX_hq3Q8zBIcaS`m_>J)ocw&f5_j$x-FV!{t*{H^LWFLF? zT)t@jJ*^&1j!@$Tn$N5=$=C~J=}!AYWKkTI0}NCMpciXga}p4H_9W=7|4I+vF)}2R zEY%!B$aajIOag_-II53Mn(r@?gCI&dCiiz<_V+_^o5lV5mR*0Jeg8hggO$UPt}WW1cGj56)3_?p;UksFl@l{Z!ejC-eTk8mjUwL#i$G&UnL@QEu^r z9Cw_w?@MaHkI7MJrFCkK41y`ZILnvwK{=-1h0P_%SAt`aTUG7$IRrEfddh+rB%=W34QZ4TfbrsF}LVR*?pnQ&`@_r-- zHVRk1xn**U!oz}r_7>1W(1-2`9??7ra{V(9Zy}3%W%5uRK7%;?1aAh~+`qU|5XT5R zSJ8jzr(z|Zf#~2*9KEXtyYd`!%!WEopWnUXH7?Nj_3Opk=aJbu@wd2Dog&upNM$HE z^VWV{PF$z#VD?R!t8TmR%ADG6C{?5HM_RM>$CV+s>tBJ~Tb(i}`1%oOo^gOqo}`1} zQk{kM;brD`v|o-?R$f0cO} zx?-My{9dJKOckX@tSn?kl37-RBSjGjUotL*H`7gNV_>G+$QXNz>nUtgS&wenMW;7E z^*dK-_!iw(bZ<#RZq7^Toako34sdCWf0GN4ldtn03T4!rq6ath=<2INkdga1N$I*O zS675INTRD&XFo$Emvd?NSH$GTtiwvbNcG#NmrWnM{Cb6em<$jmd&AuclS3vu-f_D{ zPrOaX5QX>_MG$F;5}H|;uf{ZxyQarZ$Obw>6&d2Lta$?_c_Dgu2w(#}{2KX|=ZZ3d z-ly9Lqu<9lDXJe%a_kx7ZFJK_kX`~=+ya*66Dc+LF+Nb*KsC$FE9BtvMSsFf4D7Lz zEFl8Z3A$Bc3y!%@6n#8$=MHU3u<)Z3S8O+*>tSGqjdPi^5av{i3B*!1f@r7L+&q&7 zX6W}fM=lTv-+~I^jjd^V)6G;?eA#QIuSr=yM=fzQ6tPRzh)h?q+JmC7=j|#oW_&Ul z)H-T%Ud~}aW)9Y)ac)&PkxI$w2>^MbC&3Pft<`hslcJPVV0}{i#$q{9?KAW9*{x@S^W)n?0w1(uNQbreUpN@qkVMECZ%6b?2 zC$(yV+$oe5oBZ%sKCK0f!LTmC0&o5Hd|iUx=GKvDzhLD<%N8igxuEg-!7T5pH-V3e zH8gJdU(VLxCSJ?oJumLbJK=-18uZ*_H1@A3t>9^#X#X0M+~2_vsJ#qLjagsau+VYZ zyg&E^BhzDJCUKAVvB_S_+x)X12PVpy+BK-k1%eKlaT0NZ zdx(N+*CtT7AE-0e$3~|jV5WLRy)LdmZM^v)UdOX{c;rs7!iN}EzaM_4q5KeZUEh3Q zJvMuMKH3!zI;={UWL<19XTMh`a4cF;wmZTQo>w#u)-A8$Nw^cwNb%vA79`(YpD;6@ z>%S8Yt_(7FmPP2kMHmWX-D^IW!(Lm+&zX`n4PeLDT|1)LlNKQ(@56cML1vQFTZd27 z7)ZI{2i<4L`LGB=PDWYq4znto=f0N zlG?HYG)Y{aoeaj6xr@2;czrwt&=R3#Bdag7d=OgIJpp&^*ghhyueW|kxtoR7f9)T9 zg7%jyOu;vyye(W(T&2Szn>+7!2Q?E5EIRXC32Tt5?OML(_>Q`Qj|{rt$Fm*jYcub? z7I&3R@tL*V4G_PKxWXkUqh~>$WSVZQa-G_e{kYf;%grm=rW+fd7xdq6wAtA2r4;mw zev`Tw_`V8w*2c3N@^D@PBuY7$)-KqK!I zy94Ix*lL%jnp2JN)nnDtzFs)ffy87^cu=x?P6O`1kXCW+Jt4FA6|^?#pl)|>FAf0r zcypIqIB#tH%CI@kCP*19Q-I-dHo+Wc6YJlziT=3^#l5N(>nLl_sY6}Iplf8L#)e!2 z43;}4R!iBil>uUtS#De&FK{Pf_1kNbVwdG^i`17uy;JitH`uo$n&sQ= z%LpEG3Ist^?B`lFo-u-+l^jT90p06LwuJ&gc#C-uyS5Fm1K0t=&n1ATa2n%!a0sNI z7SWwJa-^c(CXks#=0+X@2!b&6V-mw{aSJLCssKQKBxvf*uY3cr z$q@|}56~JO{quQ7a+n!%19wDoQcZlF5H;Bp#DITND8VeQv9aHHPczh>{=C%KE5++pHQk*pek43dOtnp z#%~AZE{%R~U+K`IsBSvYteNKN-ByiD6gTY6-d*y#8{*;Qjtozh=xR^d3g}$e)j`F@ z3jI4)tD#q$O1ZjGT8MCf3!DNKY(_NW-F<-f5Dn>5VteB}b;zRE z^CV?rQl}zvMDBI)?A2S5rq56CGd!i~b|OotOk2t+#VCA3^7ng>JKC)JaK4mc7r_X` zgqS;8I1CU$Q~G3Os-II6bvhM!L?Zy87|ta$@@|fr^6qjE!uLe#q$4S~127ZN2ocyV z-9mgH15pm*#^R2<<=Rsc1dg74*nS1@B;h%Een)N{7pLOO`wGK&!Rdyh+bE6+vpH8? 
z?Rc?Re5a<$-p-OKbi}Po3Tmfs<)d;$!x^5CE`n0F@TYn!O1%dKH>=-4@ei(Hu%f&v z*fMl@VSp&HleTHZHlf|wF6&MN;byCc@Q-ufD1G-ka0#EgEBDM+5*Jg=IKZ3R?PfEut!j_lNs2zTkB6XZ6S#uG>?qYW31N=1g&XelTaou2j)2 zi$9$lU?2bGbnIY>LbISwNy=%`5e@OOJ))hLa)z>4RQlO3Jt=BBj&YgeeTGg>>_?bN zckrl9tg*eq&)T19pP$`AS7ZN&ll7;=Tj<6kFv}|3_C7~chR`&I4-dbLQEbV!;mplA zjEmW_aK6;EUpXLNYjYkV7Ek9Nb+HzD%|OjiA!+#Rt1$gw7nj=&muVjXy$`<&q)Q#>F)6p3@>cWHRJ6Mt7+-lKL zi&0qiibvg36Ib1|Xkm@f(UjHHKr3|a!6WvM}5NecboFb)->gxPHGZ z>i%lq)ZH&K@^XEe*W|AzCY?N^8iBTu?UwG~)`AoWtHEUYYOPSN?W?Y{Gj@t{cF{k2_|hDysIr48uK>8M$G=_cJ?r!%j7*BL^x_yMKTlIoSHZg zWb#p(C$n$3WkH+Ew=~p_${u$fq&7|2WK7LxX_Cw0fmMl$`+BXF_F5tQKc|QXpa@}C z1%=&*)yEe*vCg|*>MxbX0cd@_9N}<<)WKh#-?4?bFF5__|)vF z$8AizyDn?{8$+AuaArqC^JVvi*4a^R8v_$d=fi!gwq#n~m7log>phN>?>bRO1ZL%8 zby=mx2B4g`$6&GGDuZv+hj}mG?{>j@o!mrs<0rlAfzTzr-o5KKYF9of$#={ML<<&^ zT+(tVw@JFmg_ahJJLULP&%SPzf(u?LMM>)IuWv*#C&Z=!IlkChb{YWp)U%UA17ztf zd1Wcr!L!sgQZxF-W8jef8y`O;h0M$oCF& zA~Ot%b5V@1;0&Io0@ZyK;~eX|wEMWQ{j;nJ-5thXxjZ_3e&Q{0eNNOgv41qqm~$bh zwf@vAZnKoAXKIP=iK@<*X3-RS_(>a%R4cS9=-VPWW91=9a;mG~bcH!IvaI0igCiQ# z^~u}UkeBx&t4_uGW;PL_P3b0QUCOYmj*!<4)Z(f7V0GP_o2>4%T$|P`wBOUdDZCj> zc~^0F@!lVYX8y-$G|^6uUbS0CG%t}vf6V{N+TPwb+Af18Vb^6TC#CXoMmGDY42W0D zO5IUZZlASAKYUD%4?$PURouQjX|-z~U}Q757?t0>ar)F7L_aEE_dw(^FGD@&f;C@# zzlw+RFJFA&9_Qbv4q}rpdR>BCXacq2cml54cHI)WeVJW$u;Ji3nStNa-d?s(q(9UT@Qw=x-rNW4e{?cRFcmWph z(kHNI4!PidZ5ZQr9Q_uvh#TTv!E1&2)b)3N9N@}fLI5qZQt3^$g+QUVdCQNn;_z=U z@@X4GFlHV};Rdmc7pe&Wi0B7N%V3g8qVIX$WggB2exqrS6y%Cqw~T23nCy|?AdXKa zqC_^>L3kDU9<98VphrcwrZZrT5B~W7|IMlfNK_^T!0f$1e`AW#ksxFs)}K-Iy#AN= z?VsEB{Qt>aBaPGUn35fGJNcH%;UoG=ubIYsPHOFyUN-FKD^e^EnOth<_ajpGFA=}-6FA@Qa&5HS@9cS|O z=b`IuK#=SI0u+@-%Kpc=F+TKHUh?=a_ndGBTk6^%b6PUYkd~Sup+_AA*i`@4Vnea9 zR94vX^(=t1d&$7!@q}QC#7Zs`?)v54cTyuL4EW?zuVBA(o6F)iztL}bEb*Kt(TBY* zX6|eK=tS&dwb^9h_v$MY^QV&NAEUpIuIrqjn|j-DM1zb1n%J1f(BX=2sV6Tr3BNp| z(Ypi`+??=Rr1!}?E@0{eWs9(`xb$FeW#uCDux1a2B-#%-?K^ickNH5S#p||F*hzWD zTBCtM$97dz1HTMq+L&nn!QweMsmU?9vWVUBs!S- zQtt#VYya8occ`7%(>)jJFaN~5?^6XjFnMKfu~Fbv5(%5j#L}I1AFoP37;i;xxWm?} z{2MP#o|Gwe%O``3dS((q)5IOouM&bP4!Z=gH9oh)U0M8J`UouKU(Ma~XU8aV8l0A! z6|I-+eZUy*Zuu0 zbp0WqIsPG7FVwRxsqx@tVhl~zvx~cPt(H<>;`r~$xPwM)nZHJ#J&%@{JI}z*6-)0I z_-KKLySG*nc|GBy#bHvPWOZ5!iCiVPjmnSgH%N;guCK;o^Nb!$|8S|=iZyw>zvp+- zY-S{hF&4|>2@m7pC>eal?mS;pU1NHGtf1Q0UXlGZQ_s9e9Vwb*RpoUuSETmFR=uz? 
zU)&9xVUvE+HFOiJmwzDZ-rzAFlEeqV?)47$fOgIFuXsqUV{AR(??Emg@?jv3a%XI` z45&m9vBc}M@Z)!?L)PQcJAm!vhxf>91N8@a-rtlQU-53y_rcdXfzktS7WJx&^9YE1 z0p6_Bc0`j31+eI%){g6GRCUj%XgT zgI=<3$)|q)SXldVpKau$3A@9A3WF1`!rOQZF4gx)YqzROt+&e<+0kjNAhC9mn~FI^ z4bx=}@5FeR%PMFlD5V#gehkTG1}$)A0|cS8xZjle{x}#x{`bWpsn=W}zt$1-dd-2V zHJ}|y)I2=(3~{jO_M5&WpjG$gSgTHuN*RNb_CZ9px$&(3g3F`#a(;!&mq9*L*$(qn zd<%ghU7su`y|glTx!vdbW8!-yoLFR`QA)(u)6-X?chGf%FNGfVL=!q3#^hhMJFq`8 zs7)M9@6P#3j-r3pXxJqGxJJ2vPW>Vz*#E`v>L@>J4z}R#ixkFYecR!(Gc$Z(idso*V6q{i)356r^eaTe90QPOIn4+|mz8R*#<` z_Ldc3nP8A=u*EnZPD}?mHqrX{=YsM$)XsvKuIvaez;-(Ct;mxQBVkP;IPq0;ebLI5OXm9buHq&c1 z;VTG10(eQASUT5>R>LD2J>ypGY+qt0gl4Xu(q^~wQl9iiF2Ole?vga8K@HpHY4pr-I*S5x|w)9;%1dWtsLc}3jv?#H*u34MOPnb$JQejv>lf@MUNDT5ep=Q#Hl z%BHB?K5rc`Z;zDyo*LC%*;+Zb1W9-(@S|7g(8)*+%?8HMRQe8YKe?CO*~R3r6%TwnkXMRysAjx zPhI6CEA^m$FnjU)=733YZw>b%BUi&v+yuvb2tc9L(YuA0Ezm0)CtGiC(<@SNde;_L zkfn|{qCal_(42>9lI(o%a!ykfzK)op<`K^J_?luzG!3= z(-A!Jba?OOg9oRkB$=r_lb2<51gE-j2v&LpvZi?trkiDSVZ>r=bnMKd-t0O|Xocsk*u6pri3Es(C`5lzVvO#{9B&%spjlNO<6VLNy{e);G6>S9d!fX(Bg zm6gKW^p%fcw(h8lZu!86{aC%q+jpR3aeK+=RP|X(b!dp6Wdd|*KXI$o?tH7;`JdZ8 zJop-~rpm^%u!!E9c4vL0O+(^l*$E~O6Tx+K4XG|~9PLmro-37oERex1dI!ytj4svd zR$5j~xzO56{_$F=KSp)gJ(53r%Qg8U3EK>OA;2eUM}VA6Iyq<88vw~(P?Ny;$B1Y@ zFhy@^)|JS!6`8hQkhbtGrp)$VapAYGTfRk>TwiX2mHQDU$q2G@u_VdZnmf^4OhT+_tEXQl7DqOuqMUN*%( zgY5f}GPr4*p|QvHWISX9_>kT^a_jUs?6ca*l0g}|G)`wj{hHE}V4XX31%vd8?e7W; zv3;)BhYAhOq?pAC0L0`yY!e(47jtEOV1F&X^s-L5cD`0rgAmWuH1KR{6k>u zg88+aH!19ij~ISf``$a{vqllf_VHscqzHc?$JAPQ(svNN2?0UUn|6lJV{P3V!BDNsr#4VMWxyBa9IU06~&_NIX$J%$$lZ`C47tcbqgIS z*J}n{52A|`trA5iK=y0glzZfy^|U5H;Fx-KWU)G6La4ZOK-~G+loZoll{0&ZvtDCe z&DHrYinZ=o2G3{LkcHQY2Q50&H8Dyad99v5P>NC8Bc`4HPa8+dmUAir3dO8IRvX zD1RvvG!#eczPE6GoK#3YSszK-jo_;q&=V3xM`GJzk=hZ*WigXd&L34g=+Pt1oy~A=-6vnP zXyQYR!P4x#pX;hm%+Y`PYFca4%U;8r^|Eq4ujriVmUF6Q+qjKxiym^Y#Q76UQy?R`+)$s}_2(aNldaYQH&C9vTvXYaeQt zl3FoVIiJ*;Z%CH^On~G+=@aH?v>*wfQm|hdhex%QzVIizX_i0g4RvFhkux@Wfnca3 zB}aHF3-o((o9r+3o_Tn_S*h~L;XuFvpiN%Q58xA)QGYQZHg zDEXNf6$7H|>NejBQ90_)s6Z$LRW`feXm~c_?fZ-$T~_z|1J`;3&<>pgyTannxWNrd z@Cg1Myu!0l@Il!pGy9F}ug5JclFnOp&I-AAoK9&&ykkhG-I0hvUk`Wob`Z}A?dj1+ znK+(|n1}U!ww>8(yJ`1aRaYCN*gB17X%A#WN|gShI>s5|Ayb-iLE%Jj@ytmYT5e6& zi}li^^Sit-gb6l2@GI3uTPmV`Aa7t>&EY3KZ*0JfWq8DGItb+BJ6 zd%g%;eVg0Trmhl7FgpOAz)Q1W=y3+?V3|U#8Bg@5C@onwfBp=6a9F9_=;f4mUvE_W zkUi9P=59>YbR20SCD&9f+#}bNTh>!mPPFQ(#OkS5u`ChGq!D9{9_{JN{4*K;O;pLs z9#`?zi`_CN6OOSrHk)Qu+E}w+WnIa7UGR1J`5C@QhgSg>9>l=gu4^w}a>1BS3cL*#1MW%lKtJ{7rjW3P*uJ!C?HLF8b-%GJ)r*?D~5@>{blIj_g75%@UIp4Yzl zDy14)au5~701yak06Zyr+4YbEb07#{2Y7nnmDn~cr^-2w%@!pwuZ8dJzNg+7Fecg> z=xT8n?C6eCRf$J5Zt%pQhs0c)@a9KC?TiLDwN+CC1LFE>Z?rE@EtnWDR+i%zi!y8n z^won*f<6GST$+FP2mhUYTWAGq+SiP5{!)Z~QX$1~=hIhIzR;;I?39rYE%gP^vzp{P z7N-HE;CX+Sf=8=oK37?$vT8nuFEBA4ZX8SB81n#Kx)1k2yI+B3(w6!Em48qGgTgY- k_OBYO_Jbo8Z4^z;Ni)t`_0Z7J-Uk2smiXOb@T1}X18vpS$^ZZW literal 0 HcmV?d00001 diff --git a/doc_cn/CMakeLists.txt b/doc_cn/CMakeLists.txt deleted file mode 100644 index 314b34525c..0000000000 --- a/doc_cn/CMakeLists.txt +++ /dev/null @@ -1,31 +0,0 @@ -if(NOT DEFINED SPHINX_THEME) - set(SPHINX_THEME default) -endif() - -if(NOT DEFINED SPHINX_THEME_DIR) - set(SPHINX_THEME_DIR) -endif() - -# configured documentation tools and intermediate build results -set(BINARY_BUILD_DIR "${CMAKE_CURRENT_BINARY_DIR}/_build") - -# Sphinx cache with pickled ReST documents -set(SPHINX_CACHE_DIR "${CMAKE_CURRENT_BINARY_DIR}/_doctrees") - -# HTML output directory -set(SPHINX_HTML_DIR "${CMAKE_CURRENT_BINARY_DIR}/html") - -configure_file( - "${CMAKE_CURRENT_SOURCE_DIR}/conf.py.in" - "${BINARY_BUILD_DIR}/conf.py" - 
-    @ONLY)
-
-sphinx_add_target(paddle_docs_cn
-    html
-    ${BINARY_BUILD_DIR}
-    ${SPHINX_CACHE_DIR}
-    ${CMAKE_CURRENT_SOURCE_DIR}
-    ${SPHINX_HTML_DIR})
-
-add_dependencies(paddle_docs_cn
-    gen_proto_py)
diff --git a/doc_cn/build_and_install/cmake/index.rst b/doc_cn/build_and_install/cmake/index.rst
deleted file mode 100644
index e2a12c5001..0000000000
--- a/doc_cn/build_and_install/cmake/index.rst
+++ /dev/null
@@ -1,8 +0,0 @@
-使用cmake编译PaddlePaddle
-=========================
-
-.. toctree::
-
-    install_deps.rst
-    compile_options.rst
-    make_and_install.rst
diff --git a/doc_cn/build_and_install/cmake/install_deps.rst b/doc_cn/build_and_install/cmake/install_deps.rst
deleted file mode 100644
index 7fa4665a95..0000000000
--- a/doc_cn/build_and_install/cmake/install_deps.rst
+++ /dev/null
@@ -1,4 +0,0 @@
-安装编译PaddlePaddle需要的依赖
-==============================
-
-参见 `安装编译依赖 <../../../doc/build/build_from_source.html#install-dependencies>`_
diff --git a/doc_cn/build_and_install/cmake/make_and_install.rst b/doc_cn/build_and_install/cmake/make_and_install.rst
deleted file mode 100644
index 212b9c9352..0000000000
--- a/doc_cn/build_and_install/cmake/make_and_install.rst
+++ /dev/null
@@ -1,4 +0,0 @@
-make和make install
-==================
-
-参见 `make和make install <../../../doc/build/build_from_source.html#build-and-install>`_
diff --git a/doc_cn/build_and_install/install/paddle_ssh.Dockerfile b/doc_cn/build_and_install/install/paddle_ssh.Dockerfile
deleted file mode 100644
index 7cb947bddf..0000000000
--- a/doc_cn/build_and_install/install/paddle_ssh.Dockerfile
+++ /dev/null
@@ -1,15 +0,0 @@
-FROM paddledev/paddle:cpu-latest
-
-MAINTAINER PaddlePaddle dev team
-
-RUN apt-get update
-RUN apt-get install -y openssh-server
-RUN mkdir /var/run/sshd
-RUN echo 'root:root' | chpasswd
-
-RUN sed -ri 's/^PermitRootLogin\s+.*/PermitRootLogin yes/' /etc/ssh/sshd_config
-RUN sed -ri 's/UsePAM yes/#UsePAM yes/g' /etc/ssh/sshd_config
-
-EXPOSE 22
-
-CMD ["/usr/sbin/sshd", "-D"]
diff --git a/doc_cn/build_and_install/install/paddle_version.txt b/doc_cn/build_and_install/install/paddle_version.txt
deleted file mode 100644
index a80873303f..0000000000
--- a/doc_cn/build_and_install/install/paddle_version.txt
+++ /dev/null
@@ -1,11 +0,0 @@
-PaddlePaddle 0.8.0b1, compiled with
-    with_avx: ON
-    with_gpu: OFF
-    with_double: OFF
-    with_python: ON
-    with_rdma: OFF
-    with_glog: ON
-    with_gflags: ON
-    with_metric_learning:
-    with_timer: OFF
-    with_predict_sdk:
diff --git a/doc_cn/cluster/index.rst b/doc_cn/cluster/index.rst
deleted file mode 100644
index 25313a9635..0000000000
--- a/doc_cn/cluster/index.rst
+++ /dev/null
@@ -1,11 +0,0 @@
-集群训练
-========
-
-* `集群训练 <../../doc/cluster/index.html>`_
-
-.. toctree::
-   :maxdepth: 2
-   :glob:
-
-   集群训练(对内)
-
diff --git a/doc_cn/demo/index.rst b/doc_cn/demo/index.rst
deleted file mode 100644
index e15e839f93..0000000000
--- a/doc_cn/demo/index.rst
+++ /dev/null
@@ -1,26 +0,0 @@
-使用示例
-========
-
-图像
-''''
-
-* `图像分类 <../../doc/demo/image_classification/index.html>`_
-
-自然语言处理
-''''''''''''
-
-* `情感分析 `_
-* `文本生成 <../../doc/demo/text_generation/index.html>`_
-* `词性标注 <../../doc/demo/semantic_role_labeling/index.html>`_
-
-推荐
-''''
-
-* `MovieLens数据集 <../../doc/demo/rec/ml_dataset.html>`_
-* `MovieLens评分回归 <../../doc/demo/rec/ml_regression.html>`_
-
-常用模型
-''''''''
-
-* `ImageNet: ResNet <../../doc/demo/imagenet_model/resnet_model.html>`_
-* `Embedding: Chinese Word <../../doc/demo/embedding_model/index.html>`_
diff --git a/doc_cn/demo/quick_start/index.md b/doc_cn/demo/quick_start/index.md
deleted file mode 100644
index 4a6e07ee1f..0000000000
--- a/doc_cn/demo/quick_start/index.md
+++ /dev/null
@@ -1,543 +0,0 @@
-# PaddlePaddle快速入门教程
-
-我们以文本分类问题作为背景,介绍PaddlePaddle使用流程和常用的网络基础单元的配置方法。
-
-## 安装(Install)
-
-首先请参考安装教程安装PaddlePaddle。
-
-## 使用概述(Overview)
-
-**文本分类问题**:对于给定的一条文本, 我们从提前给定的类别集合中选择其所属类
-别。比如通过用户对电子商务网站评论,评估产品的质量:
-
-- 这个显示器很棒! (好评)
-- 用了两个月之后这个显示器屏幕碎了。(差评)
-
-每一个任务流程都可以分为如下5个基础部分。
-![](./Pipeline.jpg)
-
-1. 数据格式准备
-    - 每行保存一条样本,类别Id 和文本信息用Tab间隔, 文本中的单词用空格分隔(如果不切词,则字与字之间用空格分隔),例如:```类别Id ‘\t’ 这 个 显 示 器 很 棒 !```
-2. 数据向模型传送
-    - PaddlePaddle可以读取Python写的传输数据脚本,所有字符都将转换为连续整数表示的Id传给模型
-3. 网络结构(由易到难展示4种不同的网络配置)
-    - 逻辑回归模型
-    - 词向量模型
-    - 卷积模型
-    - 时序模型
-    - 优化算法
-4. 训练模型
-5. 预测
-
-## 数据格式准备(Data Preparation)
-在本问题中,我们使用[Amazon电子产品评论数据](http://jmcauley.ucsd.edu/data/amazon/),
-将评论分为好评(正样本)和差评(负样本)两类。[源码](https://github.com/PaddlePaddle/Paddle)的`demo/quick_start`里提供了下载已经预处理数据的脚本(如果想从最原始的数据处理,可以使用脚本 `./demo/quick_start/data/proc_from_raw_data/get_data.sh`)。
-
-```bash
-cd demo/quick_start
-./data/get_data.sh
-```
-
-## 数据向模型传送(Transfer Data to Model)
-
-### Python数据加载脚本(Data Provider Script)
-
-下面dataprovider_bow.py文件给出了完整例子,主要包括两部分:
-
-* initializer: 定义文本信息、类别Id的数据类型。
-* process: yield文本信息和类别Id,和initializer里定义顺序一致。
-
-```python
-from paddle.trainer.PyDataProvider2 import *
-
-# id of the word not in dictionary
-UNK_IDX = 0
-
-# initializer is called by the framework during initialization.
-# It allows the user to describe the data types and setup the
-# necessary data structure for later use.
-# `settings` is an object. initializer needs to properly fill settings.input_types.
-# initializer can also store other data structures needed to be used at process().
-# In this example, dictionary is stored in settings.
-# `dictionary` and `kwargs` are arguments passed from trainer_config.lr.py
-def initializer(settings, dictionary, **kwargs):
-    # Put the word dictionary into settings
-    settings.word_dict = dictionary
-
-    # settings.input_types specifies what the data types the data provider
-    # generates.
-    settings.input_types = [
-        # The first input is a sparse_binary_vector,
-        # which means each dimension of the vector is either 0 or 1. It is the
-        # bag-of-words (BOW) representation of the texts.
-        sparse_binary_vector(len(dictionary)),
-        # The second input is an integer. It represents the category id of the
-        # sample. 2 means there are two labels in the dataset.
-        # (1 for positive and 0 for negative)
-        integer_value(2)]
-
-# Declaring a data provider. It has an initializer 'initializer'.
-# It will cache the generated data of the first pass in memory, so that
-# during later pass, no on-the-fly data generation will be needed.
-# `settings` is the same object used by initializer()
-# `file_name` is the name of a file listed in train_list or test_list given
-# to define_py_data_sources2(). See trainer_config.lr.py.
-@provider(init_hook=initializer, cache=CacheType.CACHE_PASS_IN_MEM)
-def process(settings, file_name):
-    # Open the input data file.
-    with open(file_name, 'r') as f:
-        # Read each line.
-        for line in f:
-            # Each line contains the label and text of the comment, separated by \t.
-            label, comment = line.strip().split('\t')
-
-            # Split the words into a list.
-            words = comment.split()
-
-            # convert the words into a list of ids by looking them up in word_dict.
-            word_vector = [settings.word_dict.get(w, UNK_IDX) for w in words]
-
-            # Return the features for the current comment. The first is a list
-            # of ids representing a 0-1 binary sparse vector of the text,
-            # the second is the integer id of the label.
-            yield word_vector, int(label)
-```
-
-### 配置中的数据加载定义(Data Provider in Configure)
-
-在模型配置中利用`define_py_data_sources2`加载数据:
-
-```python
-from paddle.trainer_config_helpers import *
-
-dict_file = "data/dict.txt"
-word_dict = dict()
-with open(dict_file, 'r') as f:
-    for i, line in enumerate(f):
-        w = line.strip().split()[0]
-        word_dict[w] = i
-# define the data sources for the model.
-# We need to use different process for training and prediction. -# For training, the input data includes both word IDs and labels. -# For prediction, the input data only includs word Ids. -define_py_data_sources2(train_list='data/train.list', - test_list='data/test.list', - module="dataprovider_bow", - obj="process", - args={"dictionary": word_dict}) -``` -* data/train.list,data/test.list: 指定训练、测试数据 -* module="dataprovider": 数据处理Python文件名 -* obj="process": 指定生成数据的函数 -* args={"dictionary": word_dict}: 额外的参数,这里指定词典 - -更详细数据格式和用例请参考 -PyDataProvider2。 - -## 网络结构(Network Architecture) -本节我们将专注于网络结构的介绍。 -
![](./PipelineNetwork.jpg)
- -我们将以基本的逻辑回归网络作为起点,并逐渐展示更加深入的功能。更详细的网络配置 -连接请参考Layer文档。 -所有配置在[源码](https://github.com/PaddlePaddle/Paddle)`demo/quick_start`目录,首先列举逻辑回归网络。 - -### 逻辑回归模型(Logistic Regression) - -流程如下: -
![](./NetLR.jpg)
- -- 获取利用one-hot vector表示的每个单词,维度是词典大小 - -```python -word = data_layer(name="word", size=word_dim) -``` - -- 获取该条样本类别Id,维度是类别个数。 - -```python -label = data_layer(name="label", size=label_dim) -``` - -- 利用逻辑回归模型对该向量进行分类,同时会计算分类准确率 - -```python -# Define a fully connected layer with logistic activation (also called softmax activation). -output = fc_layer(input=word, - size=label_dim, - act_type=SoftmaxActivation()) -# Define cross-entropy classification loss and error. -classification_cost(input=output, label=label) -``` - - - input: 除过data层,每个层都有一个或多个input,多个input以list方式输入 - - size: 该层神经元个数 - - act_type: 激活函数类型 - -效果总结:我们将在后面介绍训练和预测的流程的脚本。在此为方便对比不同网络结构, -我们随时总结了各个网络的复杂度和效果。 - - -
| 网络名称 | 参数数量 | 错误率 |
| -------- | -------- | ------ |
| 逻辑回归 | 252 KB   | 8.652% |
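
在进入词向量模型之前,下面把上述逻辑回归的各个片段组合成一个最小化的配置草稿,便于整体理解(其中 word_dim、label_dim 为假设的占位变量,激活函数等参数的准确写法请以 `demo/quick_start/trainer_config.lr.py` 为准):

```python
# 逻辑回归网络的整体结构示意(word_dim / label_dim 为假设的占位变量)
word = data_layer(name="word", size=word_dim)        # 词袋(BOW)稀疏输入
label = data_layer(name="label", size=label_dim)     # 类别标签
output = fc_layer(input=word,
                  size=label_dim,
                  act=SoftmaxActivation())            # softmax 输出各类别的概率
outputs(classification_cost(input=output, label=label))  # 以交叉熵分类代价作为网络输出
```
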
- -### 词向量模型(Word Vector) - -embedding模型需要稍微改变数据提供的脚本,即`dataprovider_emb.py`,词向量模型、 -卷积模型、时序模型均使用该脚本。其中文本输入类型定义为整数时序类型integer_value_sequence。 - -``` -def initializer(settings, dictionary, **kwargs): - settings.word_dict = dictionary - settings.input_types = [ - # Define the type of the first input as sequence of integer. - # The value of the integers range from 0 to len(dictrionary)-1 - integer_value_sequence(len(dictionary)), - # Define the second input for label id - integer_value(2)] - -@provider(init_hook=initializer) -def process(settings, file_name): - ... - # omitted, it is same as the data provider for LR model -``` - -该模型依然是使用逻辑回归分类网络的框架, 只是将句子利用连续向量表示替换稀疏 -向量表示, 即对第3步进行替换。句子表示的计算更新为2步: -
![](./NetContinuous.jpg)
- -- 利用单词Id查找对应的该单词的连续表示向量(维度为word_dim), 输入N个单词,输出为N个word_dim维度向量 - -```python -emb = embedding_layer(input=word, size=word_dim) -``` - -- 将该句话包含的所有单词向量求平均得到句子的表示 - -```python -avg = pooling_layer(input=emb, pooling_type=AvgPooling()) -``` - -其它部分和逻辑回归网络结构一致。 -效果总结: - - -
| 网络名称   | 参数数量 | 错误率 |
| ---------- | -------- | ------ |
| 词向量模型 | 15 MB    | 8.484% |
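
同样地,词向量模型中“句子表示”部分的整体流程可以用下面的草稿示意(dict_dim、word_dim、label_dim 均为假设的占位变量,实际配置请参考 `trainer_config.emb.py`):

```python
# 词向量模型示意:整数序列 -> 词向量 -> 求平均得到句子表示 -> 分类
word = data_layer(name="word", size=dict_dim)              # 整数序列形式的单词 Id
emb = embedding_layer(input=word, size=word_dim)           # 查表得到每个单词的连续向量
avg = pooling_layer(input=emb, pooling_type=AvgPooling())  # 对词向量求平均作为句子表示
output = fc_layer(input=avg, size=label_dim, act=SoftmaxActivation())
```
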
- -### 卷积模型(Convolution) -卷积网络是一种特殊的从词向量表示到句子表示的方法, 也就是将词向量模型额步 -骤3-2进行进一步演化, 变为3个新的子步骤。 -
![](./NetConv.jpg)
- -文本卷积分为三个步骤: -1. 获取每个单词左右各k个近邻, 拼接成一个新的向量表示; -2. 对该表示进行非线性变换 (例如Sigmoid变换), 成为维度为hidden_dim的新的向量; -3. 在每个维度上取出在该句话新的向量集合上该维度的最大值作为最后的句子表示向量。 这3个子步骤可配置为: - -```python -text_conv = sequence_conv_pool(input=emb, - context_start=k, - context_len=2 * k + 1) -``` - -效果总结: - - -
| 网络名称 | 参数数量 | 错误率 |
| -------- | -------- | ------ |
| 卷积模型 | 16 MB    | 5.628% |
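
为帮助理解上面第1个子步骤“获取每个单词左右各k个近邻并拼接”,这里用一段与 PaddlePaddle 无关的纯 Python 代码做一个小示意(函数名与补零的填充方式均为假设):

```python
# 纯 Python 示意:为每个单词构造左右各 k 个近邻组成的上下文窗口
def context_windows(word_ids, k, padding_id=0):
    padded = [padding_id] * k + list(word_ids) + [padding_id] * k
    return [padded[i:i + 2 * k + 1] for i in range(len(word_ids))]

# 例如单词 Id 序列为 [3, 7, 5] 且 k=1 时:
# context_windows([3, 7, 5], 1) == [[0, 3, 7], [3, 7, 5], [7, 5, 0]]
```
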
- -### 时序模型(Time Sequence) -
![](./NetRNN.jpg)
- -时序模型即为RNN模型, 包括简单的RNN模型、GRU模型、LSTM模型等。 - -- GRU模型配置: - -```python -gru = simple_gru(input=emb, size=gru_size) -``` - -- LSTM模型配置: - -```python -lstm = simple_lstm(input=emb, size=lstm_size) -``` - -针对本问题,我们采用单层LSTM模型,并使用了Dropout,效果总结: - - -
| 网络名称 | 参数数量 | 错误率 |
| -------- | -------- | ------ |
| 时序模型 | 16 MB    | 4.812% |
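
下面给出一种把上述 LSTM 接入分类网络的可能组合方式,仅作示意(emb、lstm_size、label_dim 等均为假设的占位变量,带 Dropout 的完整模型请参考 `trainer_config.lstm.py`):

```python
# 时序模型的一种组合示意:词向量 -> 单层 LSTM -> 序列最大池化 -> softmax 分类
lstm = simple_lstm(input=emb, size=lstm_size)
lstm_max = pooling_layer(input=lstm, pooling_type=MaxPooling())  # 对整个序列做最大池化
output = fc_layer(input=lstm_max, size=label_dim, act=SoftmaxActivation())
```
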
- -## 优化算法(Optimization Algorithm) -优化算法包括 -Momentum, RMSProp,AdaDelta,AdaGrad,ADAM,Adamax等,这里采用Adam优化方法,加了L2正则和梯度截断。 - -```python -settings(batch_size=128, - learning_rate=2e-3, - learning_method=AdamOptimizer(), - regularization=L2Regularization(8e-4), - gradient_clipping_threshold=25) -``` - -## 训练模型(Training Model) -在完成了数据和网络结构搭建之后, 我们进入到训练部分。 -
![](./PipelineTrain.jpg)
- -训练脚本:我们将训练的命令行保存在了 `train.sh`文件中。训练时所需设置的主要参数如下: - -```bash -paddle train \ ---config=trainer_config.py \ ---log_period=20 \ ---save_dir=./output \ ---num_passes=15 \ ---use_gpu=false -``` -这里没有介绍多机分布式训练,可以参考分布式训练的demo学习如何进行多机训练。 - -## 预测(Prediction) -可以使用训练好的模型评估带有label的验证集,也可以预测没有label的测试集。 -
![](./PipelineTest.jpg)
- -测试脚本如下,将会测试配置文件中test.list指定的数据。 - -```bash -paddle train \ ---use_gpu=false \ ---job=test \ ---init_model_path=./output/pass-0000x -``` - -可以参考Python API预测 -教程,或其他demo的Python预测过程。也可以通过如下方式预测。 - -预测脚本(`predict.sh`): - -```bash -model="output/pass-00003" -paddle train \ - --config=trainer_config.lstm.py \ - --use_gpu=false \ - --job=test \ - --init_model_path=$model \ - --config_args=is_predict=1 \ - --predict_output_dir=. \ - -mv rank-00000 result.txt -``` -这里以`output/pass-00003`为例进行预测,用户可以根据训练log选择test结果最好的模型来预测。与训练网络配置不同的是:无需label相关的层,指定outputs输出概率层(softmax输出), -指定batch_size=1,数据传输无需label数据,预测数据指定test_list的位置。 - -预测结果以文本的形式保存在`result.txt`中,一行为一个样本,格式如下: - -``` -预测ID;ID为0的概率 ID为1的概率 -预测ID;ID为0的概率 ID为1的概率 -``` - -``` -is_predict = get_config_arg('is_predict', bool, False) -trn = 'data/train.list' if not is_predict else None -tst = 'data/test.list' if not is_predict else 'data/pred.list' -obj = 'process' if not is_predict else 'process_pre' -batch_size = 128 if not is_predict else 1 -if is_predict: - maxid = maxid_layer(output) - outputs([maxid,output]) -else: - label = data_layer(name="label", size=2) - cls = classification_cost(input=output, label=label) - outputs(cls) -``` - -## 总体效果总结(Summary) -这些流程中的数据下载、网络配置、训练脚本在`/demo/quick_start`目录,我们在此总 -结上述网络结构在Amazon-Elec测试集(25k)上的效果: - -
| 网络名称     | 参数数量 | 错误率 | 配置文件               |
| ------------ | -------- | ------ | ---------------------- |
| 逻辑回归模型 | 252KB    | 8.652% | trainer_config.lr.py   |
| 词向量模型   | 15MB     | 8.484% | trainer_config.emb.py  |
| 卷积模型     | 16MB     | 5.628% | trainer_config.cnn.py  |
| 时序模型     | 16MB     | 4.812% | trainer_config.lstm.py |
- -## 附录(Appendix) -### 命令行参数(Command Line Argument) - -* \--config:网络配置 -* \--save_dir:模型存储路径 -* \--log_period:每隔多少batch打印一次日志 -* \--num_passes:训练轮次,一个pass表示过一遍所有训练样本 -* \--config_args:命令指定的参数会传入网络配置中。 -* \--init_model_path:指定初始化模型路径,可用在测试或训练时指定初始化模型。 - -默认一个pass保存一次模型,也可以通过saving_period_by_batches设置每隔多少batch保存一次模型。 -可以通过show_parameter_stats_period设置打印参数信息等。 -其他参数请参考令行参数文档。 - -### 输出日志(Log) - -``` -TrainerInternal.cpp:160] Batch=20 samples=2560 AvgCost=0.628761 CurrentCost=0.628761 Eval: classification_error_evaluator=0.304297 CurrentEval: classification_error_evaluator=0.304297 -``` -模型训练会看到这样的日志,详细的参数解释如下面表格: -
| 名称 | 解释 |
| ---- | ---- |
| Batch=20 | 表示过了20个batch |
| samples=2560 | 表示过了2560个样本 |
| AvgCost | 每个pass的第0个batch到当前batch所有样本的平均cost |
| CurrentCost | 当前log_period个batch所有样本的平均cost |
| Eval: classification_error_evaluator | 每个pass的第0个batch到当前batch所有样本的平均分类错误率 |
| CurrentEval: classification_error_evaluator | 当前log_period个batch所有样本的平均分类错误率 |
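
如果需要在脚本中跟踪这些指标,可以按上表的字段含义解析日志。下面是一段简单的纯 Python 示意(正则表达式为假设写法,实际请以训练输出的日志格式为准):

```python
# 从一行训练日志中提取 Batch、samples 与 AvgCost(示意)
import re

line = ("TrainerInternal.cpp:160] Batch=20 samples=2560 AvgCost=0.628761 "
        "CurrentCost=0.628761 Eval: classification_error_evaluator=0.304297 "
        "CurrentEval: classification_error_evaluator=0.304297")
m = re.search(r"Batch=(\d+) samples=(\d+) AvgCost=([\d.]+)", line)
if m:
    batch, samples, avg_cost = int(m.group(1)), int(m.group(2)), float(m.group(3))
    print(batch, samples, avg_cost)  # 20 2560 0.628761
```
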
diff --git a/doc_cn/demo/sentiment_analysis/index.rst b/doc_cn/demo/sentiment_analysis/index.rst deleted file mode 100644 index 9d7972b219..0000000000 --- a/doc_cn/demo/sentiment_analysis/index.rst +++ /dev/null @@ -1,8 +0,0 @@ -情感分析教程 -=========================== - -.. toctree:: - :maxdepth: 3 - :glob: - - Training Locally \ No newline at end of file diff --git a/doc_cn/howto/build_docker_image.rst b/doc_cn/howto/build_docker_image.rst deleted file mode 100644 index 46ba07d9ad..0000000000 --- a/doc_cn/howto/build_docker_image.rst +++ /dev/null @@ -1,35 +0,0 @@ -构建PaddlePaddle的Docker Image -============================== -PaddlePaddle的Docker Image构建源码放置在 ``${源码根目录}/paddle/scripts/docker/`` 目录下。该目录有三类文件: - -- Dockerfile:Docker Image的描述文件,包括构建步骤、各种参数和维护人员等。 - - - 一共维护了12个Dockerfile,Dockerfile.m4是它们的模板。 - - PaddlePaddle中所有的Image都基于ubuntu 14.04。 - -- build.sh:Docker Image的构建脚本,使用方式见下一小节。 -- generate.sh:通过Dockerfile.m4模板生成不同的Dockerfile。 - -使用脚本构建Docker Image ------------------------- - -进入源码目录,执行 ``docker build`` 命令,即可在本地编译出PaddlePaddle的镜像。简单的使用样例为 - -.. code-block:: bash - - cd ${源码根目录}/paddle/scripts/docker/ - docker build --build-arg LOWEST_DL_SPEED=50K \ - --build-arg WITH_GPU=ON \ - --tag paddle_gpu:latest . - -其中,``--build-arg`` 传入的配置参数包括: - -- LOWEST\_DL\_SPEED\: 在多线程下载过程中,设置下载线程的最低速度。 - - - 默认单位是Bytes,但可以传入10K、10M、或10G等这样的单位。 - - 如果小于这个速度,那么这个线程将会关闭。当所有的线程都关闭了,那么下载进程将会重启。 -- WITH\_GPU\: ON or OFF,是否开启GPU功能。注意, - - **编译** PaddlePaddle的GPU版本 **不一定** 要在具有GPU的机器上进行。 - - **运行** PaddlePaddle的GPU版本 **一定** 要在具有GPU的机器上运行。 - -注意:所有Image的构建在Docker 1.12版本测试通过, 低于1.12的版本并没有测试。原因是旧版本可能缺乏 ``--build-arg`` 参数,从而不能在运行编译命令的时候接受参数。 diff --git a/doc_cn/index.rst b/doc_cn/index.rst deleted file mode 100644 index 88a9f79fd2..0000000000 --- a/doc_cn/index.rst +++ /dev/null @@ -1,32 +0,0 @@ -PaddlePaddle文档 -================ - -使用指南 --------- -* `介绍 `_ -* `快速入门 `_ -* `基本使用概念 `_ -* `编译与安装 `_ -* `用户接口 `_ -* `使用示例 `_ -* `模型配置 <../doc/ui/api/trainer_config_helpers/index.html>`_ -* `集群训练 `_ - -开发指南 --------- -* `新写Layer <../doc/dev/new_layer/index.html>`_ -* `如何贡献文档 `_ -* `如何构建Docker Image `_ - -算法教程 --------- - -* `Recurrent Group教程 `_ -* `单层RNN示例 <../doc/algorithm/rnn/rnn.html>`_ -* :ref:`algo_hrnn_rnn_api_compare` -* `支持双层序列作为输入的Layer `_ - -常见问题 --------- - -* `常见问题 `_ diff --git a/doc_cn/introduction/parameters.png b/doc_cn/introduction/parameters.png deleted file mode 100644 index 2ec67480951e21f0400bce1c34b3108dcd65c18c..0000000000000000000000000000000000000000 GIT binary patch literal 0 HcmV?d00001 literal 44469 zcmeGEWmr{P_dgE9ra`)-5u_XGMp9|%25CWR)1A^F2D#}}IwUvU-67rG-TW6v&;32W z_s`46>*A8V_KG>@nsba#j7gZXq6|7JF)9oU4Eh^c$#*a?AU_xw*b-z!;7H%1sXp)@ zEcl(wYnb9e(rw@aioL8h7zPH@_~{2WUo_7OI6>P=P0LA3L0-Vb?jx(Qsoi@sR=1D# zz}YY`LT&=UuOH2vj49nd+Sq~x+=QwAIYR*W{pm0p73DvtI9Ur*X(=dEO4vD?QF60# zv9eQ%pi)v&3OSmZ3%rw*{(Cv_Ntnvg$;n=Tjm_27mDTkXtDU0-8wWo>KN~wI8z(0V za0Uz5-PXz2jl~vB{pTkC-bd05Y~pBT?__0XOZjwPRgWXF(@={Zu=}dbg3wj~8)$ry}Aqnrt zJ4x*N%z|mNjsu;kasnq7>yr9u4N>j-_j$+x$N``L*~13J*9wV;-h*ojwx8Z0f8p$b z(lyntL%>Sci-NU7(7fxr6{A7q=djd9@R5Gl>>z2%_sEh`@(M+n(#`)l9PNi9=85&^ zjDL<}8NyyDVz;>%V1+hlc*!|yY^9Q}aDkErKTQe=xtHohpUv)2T`(Hq%8YeMN1=9w{p`nT@6dg;|i$1#-@^ZBlY@m}ua-7c}GdX`L_L64!! 
zlQ#;936%aFTq+O^(Sl@NEcw7c4MA+FLl9Qtmi!wY@f-IF*aT*P3IDH#AmRh6k1<{O j{~Ea`=`_ 。 - - .. literalinclude:: paddle_version.txt - -* ``merge_model`` Start a paddle_merge_model - 用于将PaddlePaddle的模型参数文件和模型配置文件打包成一个文件,方便做部署分发。 -* ``dump_config`` Dump the trainer config as proto string - 用于将PaddlePaddle的模型配置文件以proto string的格式打印出来。 -* ``make_diagram`` - 使用graphviz对PaddlePaddle的模型配置文件进行绘制。 \ No newline at end of file diff --git a/doc_cn/ui/cmd/paddle_version.txt b/doc_cn/ui/cmd/paddle_version.txt deleted file mode 100644 index 33e2e4de7c..0000000000 --- a/doc_cn/ui/cmd/paddle_version.txt +++ /dev/null @@ -1,11 +0,0 @@ -PaddlePaddle 0.8.0b, compiled with - with_avx: ON - with_gpu: ON - with_double: OFF - with_python: ON - with_rdma: OFF - with_glog: ON - with_gflags: ON - with_metric_learning: OFF - with_timer: OFF - with_predict_sdk: OFF diff --git a/doc_cn/ui/index.rst b/doc_cn/ui/index.rst deleted file mode 100644 index ff36c9adb6..0000000000 --- a/doc_cn/ui/index.rst +++ /dev/null @@ -1,32 +0,0 @@ -######## -用户接口 -######## - -数据提供 -======== - -.. toctree:: - :maxdepth: 1 - - data_provider/dataprovider.rst - data_provider/pydataprovider2.rst - -命令及命令行参数 -================ - -.. toctree:: - :maxdepth: 1 - - cmd/index.rst - -* `参数用例 <../../doc/ui/cmd_argument/use_case.html>`_ -* `参数分类 <../../doc/ui/cmd_argument/argument_outline.html>`_ -* `参数描述 <../../doc/ui/cmd_argument/detail_introduction.html>`_ - -预测 -======= - -.. toctree:: - :maxdepth: 1 - - predict/swig_py_paddle.rst From bd5b101ca9760e276ba2b821ff264e22e4288c5d Mon Sep 17 00:00:00 2001 From: dayhaha <18800111918@163.com> Date: Tue, 13 Dec 2016 19:51:43 +0800 Subject: [PATCH 119/265] fix rst format error --- doc/tutorials/rec/ml_regression_ch.rst | 74 +++++++++++++------------- 1 file changed, 37 insertions(+), 37 deletions(-) diff --git a/doc/tutorials/rec/ml_regression_ch.rst b/doc/tutorials/rec/ml_regression_ch.rst index 19a89d270d..13548fc3a6 100644 --- a/doc/tutorials/rec/ml_regression_ch.rst +++ b/doc/tutorials/rec/ml_regression_ch.rst @@ -1,7 +1,7 @@ MovieLens数据集评分回归模型 ========================= -这里我们在MovieLens数据集描述一种**余弦相似度回归**任务。 +这里我们在MovieLens数据集描述一种 **余弦相似度回归** 任务。 该示例将展示paddle如何进行词向量嵌入,处理相似度回归,针对文本 的单词级别的卷积神经网络,以及paddle如何处理多种类型的输入。 需要注意的是,该模型网络只是用于进行demo展示paddle如何工作,而 @@ -15,7 +15,7 @@ MovieLens数据集评分回归模型 ``````` 下载并解压数据集 '''''''''''''' -这里我们使用:ref:`demo_ml_dataset_en`。 +这里我们使用 :ref:`demo_ml_dataset_en` 。 要下载和解压数据集,只需要简单的运行下面的命令即可。 .. code-block:: bash @@ -23,7 +23,7 @@ MovieLens数据集评分回归模型 cd demo/recommendation/data ./ml_data.sh -:code:`demo/recommendation/data/ml-1m`的目录结构为: +:code:`demo/recommendation/data/ml-1m` 的目录结构为: .. code-block:: text @@ -35,10 +35,10 @@ MovieLens数据集评分回归模型 字段配置文件 '''''''''' -**字段配置文件**用来具体说明数据集的字段和文件格式, -例如,说明每个特征文件具体字段是**什么**类型。 +**字段配置文件** 用来具体说明数据集的字段和文件格式, +例如,说明每个特征文件具体字段是 **什么** 类型。 -ml-1m的字段配置文件在目录:code:`demo/recommendation/data/config.json`中。 +ml-1m的字段配置文件在目录 :code:`demo/recommendation/data/config.json` 中。 其具体说明了字段类型和文件名称: 1) 用户文件中有四种类型的字段\: 编号,性别,年龄和职业; 2) 文件名称为"users.dat",文件的分隔符为"::"。 @@ -70,12 +70,12 @@ ml-1m的字段配置文件在目录:code:`demo/recommendation/data/config.json` 在movielens 1m数据集中,电影和用户有许多的特征。 评分文件的每一行仅仅提供电影或用户的编号来代表相应的电影或用户。 -我们首先处理电影或用户的特征文件,然后用pickle命令将特征(**Meta**)对象存储为文件。 +我们首先处理电影或用户的特征文件,然后用pickle命令将特征( **Meta** )对象存储为文件。 Meta配置文件 ........... 
-**Meta配置文件**用来具体描述**如何**解析数据集中的每一个字段。 +**Meta配置文件** 用来具体描述 **如何** 解析数据集中的每一个字段。 该文件可以从字段配置文件生成,或是手动编辑生成。文件的格式可以 为json或yaml格式。解析器能通过文件的扩展名自动识别文件的格式。 @@ -124,14 +124,14 @@ Meta配置文件 Meta文件 '''''''' -有了meta配置文件之后,我们可以生成**Meta文件**,该文件是python的pickle对象, +有了meta配置文件之后,我们可以生成 **Meta文件** ,该文件是python的pickle对象, 存储着电影或用户信息。可以运行下面的命令来生成。 .. code-block:: bash python meta_generator.py ml-1m meta.bin --config=meta_config.json -meta文件:code:`meta.bin`的结构如下: +meta文件 :code:`meta.bin` 的结构如下: .. code-block:: text @@ -185,17 +185,17 @@ meta文件:code:`meta.bin`的结构如下: 分割训练/测试文件 ''''''''''''''' -我们将:code:`ml-1m/ratings.dat`文件分割为训练和测试文件。分割文件的方法是:对于每位用户,我们将评分分成两部分。 +我们将 :code:`ml-1m/ratings.dat` 文件分割为训练和测试文件。分割文件的方法是:对于每位用户,我们将评分分成两部分。 这样的话每位用户在测试文件中将与训练文件含有同样的信息。 -用:code:`separate.py`来分离训练和测试文件。 +用 :code:`separate.py` 来分离训练和测试文件。 .. code-block:: bash python split.py ml-1m/ratings.dat --delimiter="::" --test_ratio=0.1 -这样就会生成两个文件::code:`ml-1m/ratings.dat.train`和:code:`ml-1m/ratings.data.test`。 -将他们移动到目录:code:`data`,然后进行随机打乱,再为paddle的训练过程提供文件列表。 +这样就会生成两个文件::code:`ml-1m/ratings.dat.train` 和 :code:`ml-1m/ratings.data.test` 。 +将他们移动到目录 :code:`data` ,然后进行随机打乱,再为paddle的训练过程提供文件列表。 .. code-block:: bash @@ -217,27 +217,27 @@ meta文件:code:`meta.bin`的结构如下: :align: center :alt: rec_regression_network -该示例的神经网络配置文件:code:`trainer_config.py`如下所示: +该示例的神经网络配置文件 :code:`trainer_config.py` 如下所示: .. literalinclude:: ../../../demo/recommendation/trainer_config.py :language: python :lines: 15- -在文件:code:`trainer_config.py`中,我们仅仅是讲每个特征种类映射到一个特征向量中,以下 +在文件 :code:`trainer_config.py` 中,我们仅仅是讲每个特征种类映射到一个特征向量中,以下 展示了如何将每个特征映射到一个向量。 -* :code:`id`\: 仅仅是简单的嵌入,然后添加一个全连接层。 -* :code:`embedding`\: +* :code:`id` \: 仅仅是简单的嵌入,然后添加一个全连接层。 +* :code:`embedding` \: - 如果是序列,则先做嵌入,然后再做一次文本卷积网络操作, 然后得到平均采样的结果 - 如果不是序列,则先做嵌入,然后添加一个全连接层。 -* :code:`one_host_dense`\: +* :code:`one_host_dense` \: - 仅仅是两个全连接层。 -然后我们利用多输入的:code:`fc_layer`全连接层将电影的每个特征结合成一个电影特征, +然后我们利用多输入的:code:`fc_layer` 全连接层将电影的每个特征结合成一个电影特征, 并且对用户的特征做同样的操作,也得到一个用户特征。然后我们求这两个特征的余弦相似度。 -在这些网络中,我们用以下的一些:ref:`api_trainer_config`中的接口。 +在这些网络中,我们用以下的一些:ref:`api_trainer_config` 中的接口。 * 数据层, :ref:`api_trainer_config_helpers_layers_data_layer` * 全连接层, :ref:`api_trainer_config_helpers_layers_fc_layer` @@ -246,7 +246,7 @@ meta文件:code:`meta.bin`的结构如下: * 采样层, :ref:`api_trainer_config_helpers_layers_pooling_layer` * 余弦相似度层, :ref:`api_trainer_config_helpers_layers_cos_sim` * 文本卷积采样层, :ref:`api_trainer_config_helpers_network_text_conv_pool` -* 声明Python数据源, :ref:`api_trainer_config_helpers_data_sources`. +* 声明Python数据源, :ref:`api_trainer_config_helpers_data_sources` . 数据提供脚本 ''''''''''' @@ -256,40 +256,40 @@ meta文件:code:`meta.bin`的结构如下: :lines: 15- 数据提供脚本仅仅是读取meta.bin和评分文件,生成训练需要的样本。 -在脚本:code:`dataprovider.py`中,我们需要设置: +在脚本 :code:`dataprovider.py` 中,我们需要设置: * obj.slots\: 特征的类型和维度。 -* use_seq\: :code:`dataprovider.py`中的数据是否为序列模式。 -* process\: 返回数据的每一条样本给:code:`paddle`. +* use_seq\: :code:`dataprovider.py` 中的数据是否为序列模式。 +* process\: 返回数据的每一条样本给 :code:`paddle` . -数据提供脚本的细节文档可以参考:ref:`api_pydataprovider`. +数据提供脚本的细节文档可以参考 :ref:`api_pydataprovider` . 训练 ```` 准备好数据,配置了网络,编写好数据提供脚本后,现在我们可以开始paddle训练了。 -代码:code:`run.sh`如下: +代码 :code:`run.sh` 如下: .. 
literalinclude:: ../../../demo/recommendation/run.sh :language: bash :lines: 16- -该脚本仅仅是开始一个paddle训练过程,将日志写入文件:code:`log.txt`,然后 +该脚本仅仅是开始一个paddle训练过程,将日志写入文件 :code:`log.txt` ,然后 打印在屏幕上。 -脚本:code:`run.sh`中的每一行命令,请参考页面:ref:`cmd_line_index_en`。 +脚本 :code:`run.sh` 中的每一行命令,请参考页面 :ref:`cmd_line_index_en` 。 这些参数的简短介绍如下: * config\: 告诉paddle哪个文件是神经网络的配置文件。 -* save_dir\: 告诉paddle将模型保存在:code:`./output`中。 +* save_dir\: 告诉paddle将模型保存在: code:`./output` 中。 * use_gpu\: 是否使用GPU,默认为不使用。 * trainer_count\: 一台机器上面的线程数量。 * test_all_data_in_one_period\: 每一个测试周期测试一次所有数据。否则, - 每个测试周期测试:code:`batch_size`批次的数据。 -* log_period\: 在训练了:code:`log_period`批次后打印日志. -* dot_period\: 在每训练:code:`dot_period`个批次后打印一个:code:`.`. -* num_passes\: 训练至多:code:`num_passes`轮. + 每个测试周期测试: code:`batch_size` 批次的数据。 +* log_period\: 在训练了: code:`log_period` 批次后打印日志. +* dot_period\: 在每训练: code:`dot_period` 个批次后打印一个 :code:`.` . +* num_passes\: 训练至多: code:`num_passes` 轮. 如果训练过程启动成功的话,输出应该类似如下: @@ -311,7 +311,7 @@ meta文件:code:`meta.bin`的结构如下: I0601 08:09:46.233438 10549 Util.cpp:209] copy trainer_config.py to ./output/model/pass-00000 I0601 08:09:46.233541 10549 ParamUtil.cpp:147] fileName trainer_config.py -模型被保存在:code:`output/`目录中。你可以在任何时候用:code:`Ctrl-C`来停止训练。 +模型被保存在 :code:`output/` 目录中。你可以在任何时候用 :code:`Ctrl-C` 来停止训练。 模型评估和预测 ```````````` @@ -322,7 +322,7 @@ meta文件:code:`meta.bin`的结构如下: ./evaluate.sh -你讲看到如下的信息: +你将看到如下的信息: .. code-block:: text @@ -344,4 +344,4 @@ meta文件:code:`meta.bin`的结构如下: Prediction Score is 2.56 Input movie_id: 8 Input user_id: 2 - Prediction Score is 3.13 \ No newline at end of file + Prediction Score is 3.13 From d391118d5096dd237e2890111c1686d5972719c5 Mon Sep 17 00:00:00 2001 From: CrossLee1 Date: Tue, 13 Dec 2016 19:58:27 +0800 Subject: [PATCH 120/265] Update resnet_model_cn.md --- doc/tutorials/imagenet_model/resnet_model_cn.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/doc/tutorials/imagenet_model/resnet_model_cn.md b/doc/tutorials/imagenet_model/resnet_model_cn.md index 03e4c6f258..82ec9d70b3 100644 --- a/doc/tutorials/imagenet_model/resnet_model_cn.md +++ b/doc/tutorials/imagenet_model/resnet_model_cn.md @@ -136,9 +136,9 @@ mean_meta_224 resnet_101 resnet_152 resnet_50

-### 参数观察 +### 参数读取 -使用者可以使用下面的python脚本来读取参数值: +使用者可以使用下面的Python脚本来读取参数值: ``` import sys @@ -209,7 +209,7 @@ cd demo/model_zoo/resnet ### Python接口 -示例`demo/model_zoo/resnet/classify.py`中展示了如何使用python来提取特征。下面的例子同样使用了`./example/test.list`中的数据。执行的命令如下: +示例`demo/model_zoo/resnet/classify.py`中展示了如何使用Python来提取特征。下面的例子同样使用了`./example/test.list`中的数据。执行的命令如下: ``` cd demo/model_zoo/resnet From 4bd53517b9323dda8942a55b2076eb7aea4d5317 Mon Sep 17 00:00:00 2001 From: CrossLee1 Date: Tue, 13 Dec 2016 19:59:30 +0800 Subject: [PATCH 121/265] Update resnet_model_en.md --- doc/tutorials/imagenet_model/resnet_model_en.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/doc/tutorials/imagenet_model/resnet_model_en.md b/doc/tutorials/imagenet_model/resnet_model_en.md index 93864b82ec..478ad06193 100644 --- a/doc/tutorials/imagenet_model/resnet_model_en.md +++ b/doc/tutorials/imagenet_model/resnet_model_en.md @@ -138,7 +138,7 @@ There are four parameters in this layer. In fact, only .w0 and .wbias are the le ### Parameter Observation -Users who want to observe the parameters can use python to read: +Users who want to observe the parameters can use Python to read: ``` import sys @@ -209,7 +209,7 @@ If successful, features are saved in `fea_output/rank-00000` as follows. And you ### Python Interface -`demo/model_zoo/resnet/classify.py` is an example to show how to use python to extract features. Following example still uses data of `./example/test.list`. Command is as follows: +`demo/model_zoo/resnet/classify.py` is an example to show how to use Python to extract features. Following example still uses data of `./example/test.list`. Command is as follows: ``` cd demo/model_zoo/resnet From 2fbcf4deabae29acaf8e650b05f8ce858291638d Mon Sep 17 00:00:00 2001 From: dayhaha <18800111918@163.com> Date: Tue, 13 Dec 2016 20:18:01 +0800 Subject: [PATCH 122/265] small details --- doc/tutorials/rec/ml_regression_ch.rst | 40 ++++++++++++++------------ 1 file changed, 21 insertions(+), 19 deletions(-) diff --git a/doc/tutorials/rec/ml_regression_ch.rst b/doc/tutorials/rec/ml_regression_ch.rst index 13548fc3a6..9d2b5071a2 100644 --- a/doc/tutorials/rec/ml_regression_ch.rst +++ b/doc/tutorials/rec/ml_regression_ch.rst @@ -40,7 +40,9 @@ MovieLens数据集评分回归模型 ml-1m的字段配置文件在目录 :code:`demo/recommendation/data/config.json` 中。 其具体说明了字段类型和文件名称: + 1) 用户文件中有四种类型的字段\: 编号,性别,年龄和职业; + 2) 文件名称为"users.dat",文件的分隔符为"::"。 .. include:: ../../../demo/recommendation/data/config.json @@ -96,22 +98,22 @@ Meta配置文件 * 在电影文件movies.dat中 * 我们仅用"::"来分隔每一行 - * pos 0 代表编号。 + * pos 0 代表编号 * pos 1 特征: - * name是电影名。 - * 利用正则表达式来解析该特征。 - * 基于字母的词嵌入特征。 - * 是序列。 + * name是电影名 + * 利用正则表达式来解析该特征 + * 基于字母的词嵌入特征 + * 是序列 * pos 2 特征: - * name是体裁。 - * type是one hot稠密向量。 - * dictionary由解析自动生成,每一个key由'|'分隔。 + * name是体裁 + * type是one hot稠密向量 + * dictionary由解析自动生成,每一个key由'|'分隔 * 在用户文件users.dat中 * 我们仅用"::"来分隔每一行 - * pos 0 代表编号。 + * pos 0 代表编号 * pos 1 特征: - * name是性别。 - * 简单的基于字母的词嵌入。 + * name是性别 + * 简单的基于字母的词嵌入 * pos 2 特征: * name是年龄 * 是整个的词嵌入 @@ -135,7 +137,7 @@ meta文件 :code:`meta.bin` 的结构如下: .. 
code-block:: text - +--+ movie + +--+ movie | +--+ __meta__ | | +--+ raw_meta # 每个特征的meta配置。列表 | | | + @@ -229,7 +231,7 @@ meta文件 :code:`meta.bin` 的结构如下: * :code:`id` \: 仅仅是简单的嵌入,然后添加一个全连接层。 * :code:`embedding` \: - 如果是序列,则先做嵌入,然后再做一次文本卷积网络操作, - 然后得到平均采样的结果 + 然后得到平均采样的结果。 - 如果不是序列,则先做嵌入,然后添加一个全连接层。 * :code:`one_host_dense` \: - 仅仅是两个全连接层。 @@ -246,7 +248,7 @@ meta文件 :code:`meta.bin` 的结构如下: * 采样层, :ref:`api_trainer_config_helpers_layers_pooling_layer` * 余弦相似度层, :ref:`api_trainer_config_helpers_layers_cos_sim` * 文本卷积采样层, :ref:`api_trainer_config_helpers_network_text_conv_pool` -* 声明Python数据源, :ref:`api_trainer_config_helpers_data_sources` . +* 声明Python数据源, :ref:`api_trainer_config_helpers_data_sources` 数据提供脚本 ''''''''''' @@ -260,9 +262,9 @@ meta文件 :code:`meta.bin` 的结构如下: * obj.slots\: 特征的类型和维度。 * use_seq\: :code:`dataprovider.py` 中的数据是否为序列模式。 -* process\: 返回数据的每一条样本给 :code:`paddle` . +* process\: 返回数据的每一条样本给 :code:`paddle` 。 -数据提供脚本的细节文档可以参考 :ref:`api_pydataprovider` . +数据提供脚本的细节文档可以参考 :ref:`api_pydataprovider` 。 训练 ```` @@ -287,9 +289,9 @@ meta文件 :code:`meta.bin` 的结构如下: * trainer_count\: 一台机器上面的线程数量。 * test_all_data_in_one_period\: 每一个测试周期测试一次所有数据。否则, 每个测试周期测试: code:`batch_size` 批次的数据。 -* log_period\: 在训练了: code:`log_period` 批次后打印日志. -* dot_period\: 在每训练: code:`dot_period` 个批次后打印一个 :code:`.` . -* num_passes\: 训练至多: code:`num_passes` 轮. +* log_period\: 在训练了: code:`log_period` 批次后打印日志。 +* dot_period\: 在每训练: code:`dot_period` 个批次后打印一个 :code:`.` 。 +* num_passes\: 训练至多: code:`num_passes` 轮。 如果训练过程启动成功的话,输出应该类似如下: From 0eac39928090c44fc3b8b4edc18604ff7b662f91 Mon Sep 17 00:00:00 2001 From: yuan Date: Tue, 13 Dec 2016 20:57:59 +0800 Subject: [PATCH 123/265] priorbox layer for ssd --- paddle/gserver/layers/PriorBox.cpp | 137 ++++++++++++++++++ proto/ModelConfig.proto | 10 ++ python/paddle/trainer/config_parser.py | 13 ++ .../paddle/trainer_config_helpers/layers.py | 36 +++++ 4 files changed, 196 insertions(+) create mode 100644 paddle/gserver/layers/PriorBox.cpp diff --git a/paddle/gserver/layers/PriorBox.cpp b/paddle/gserver/layers/PriorBox.cpp new file mode 100644 index 0000000000..b0d59cd145 --- /dev/null +++ b/paddle/gserver/layers/PriorBox.cpp @@ -0,0 +1,137 @@ +/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
*/ + +#include "Layer.h" +#include "paddle/math/Matrix.h" +#include "paddle/math/BaseMatrix.h" + +namespace paddle { + +class PriorBoxLayer : public Layer { +public: + explicit PriorBoxLayer(const LayerConfig& config) : Layer(config) {} + bool init(const LayerMap& layerMap, const ParameterMap& parameterMap); + void forward(PassType passType); + void backward(const UpdateCallback& callback) {} + int numPriors_; + std::vector minSize_; + std::vector maxSize_; + std::vector aspectRatio_; + std::vector variance_; + MatrixPtr buffer_; +}; + +bool PriorBoxLayer::init(const LayerMap& layerMap, + const ParameterMap& parameterMap) { + Layer::init(layerMap, parameterMap); + std::copy(config_.inputs(0).priorbox_conf().min_size().begin(), + config_.inputs(0).priorbox_conf().min_size().end(), + std::back_inserter(minSize_)); + std::copy(config_.inputs(0).priorbox_conf().max_size().begin(), + config_.inputs(0).priorbox_conf().max_size().end(), + std::back_inserter(maxSize_)); + std::copy(config_.inputs(0).priorbox_conf().aspect_ratio().begin(), + config_.inputs(0).priorbox_conf().aspect_ratio().end(), + std::back_inserter(aspectRatio_)); + std::copy(config_.inputs(0).priorbox_conf().variance().begin(), + config_.inputs(0).priorbox_conf().variance().end(), + std::back_inserter(variance_)); + // flip + int input_ratio_length = aspectRatio_.size(); + for (int index = 0; index < input_ratio_length; index++) + aspectRatio_.push_back(1 / aspectRatio_[index]); + aspectRatio_.push_back(1.); + numPriors_ = aspectRatio_.size(); + if (maxSize_.size() > 0) + numPriors_++; + buffer_ = Matrix::create(1, 1, false, false); + return true; +} + +void PriorBoxLayer::forward(PassType passType) { + Layer::forward(passType); + auto input = getInput(0); + int layer_width = input.getFrameWidth(); + int layer_height = input.getFrameHeight(); + + MatrixPtr inV1 = getInputValue(1); + int image_width = inV1->getElement(0, 0); + int image_height = inV1->getElement(0, 1); + float step_w = static_cast(image_width) / layer_width; + float step_h = static_cast(image_height) / layer_height; + int dim = layer_height * layer_width * numPriors_ * 4; + reserveOutput(1, dim * 2); + // use a cpu buffer to compute + Matrix::resizeOrCreate(buffer_, 1, dim * 2, false, false); + auto* tmp_ptr = buffer_->getData(); + + int idx = 0; + for (int h = 0; h < layer_height; ++h) { + for (int w = 0; w < layer_width; ++w) { + float center_x = (w + 0.5) * step_w; + float center_y = (h + 0.5) * step_h; + int min_size = 0; + for (size_t s = 0; s < minSize_.size(); s++) { + // first prior. + min_size = minSize_[s]; + int box_width = min_size; + int box_height = min_size; + // xmin, ymin, xmax, ymax. + tmp_ptr[idx++] = (center_x - box_width / 2.) / image_width; + tmp_ptr[idx++] = (center_y - box_height / 2.) / image_height; + tmp_ptr[idx++] = (center_x + box_width / 2.) / image_width; + tmp_ptr[idx++] = (center_y + box_height / 2.) / image_height; + + if (maxSize_.size() > 0) { + CHECK_EQ(minSize_.size(), maxSize_.size()); + // second prior. + for (size_t s = 0; s < maxSize_.size(); s++) { + int max_size = maxSize_[s]; + box_width = box_height = sqrt(min_size * max_size); + tmp_ptr[idx++] = (center_x - box_width / 2.) / image_width; + tmp_ptr[idx++] = (center_y - box_height / 2.) / image_height; + tmp_ptr[idx++] = (center_x + box_width / 2.) / image_width; + tmp_ptr[idx++] = (center_y + box_height / 2.) / image_height; + } + } + } + // rest of priors. + for (size_t r = 0; r < aspectRatio_.size(); r++) { + float ar = aspectRatio_[r]; + if (fabs(ar - 1.) 
< 1e-6) + continue; + float box_width = min_size * sqrt(ar); + float box_height = min_size / sqrt(ar); + tmp_ptr[idx++] = (center_x - box_width / 2.) / image_width; + tmp_ptr[idx++] = (center_y - box_height / 2.) / image_height; + tmp_ptr[idx++] = (center_x + box_width / 2.) / image_width; + tmp_ptr[idx++] = (center_y + box_height / 2.) / image_height; + } + } + } + // clip the prior's coordidate such that it is within [0, 1] + for (int d = 0; d < dim; ++d) + tmp_ptr[d] = std::min(std::max(tmp_ptr[d], (float)0.), (float)1.); + // set the variance. + for (int h = 0; h < layer_height; h++) + for (int w = 0; w < layer_width; w++) + for (int i = 0; i < numPriors_; i++) + for (int j = 0; j < 4; j++) + tmp_ptr[idx++] = variance_[j]; + MatrixPtr outV = getOutputValue(); + outV->copyFrom(buffer_->data_, dim * 2); +} +REGISTER_LAYER(priorbox, PriorBoxLayer); + +} // namespace paddle diff --git a/proto/ModelConfig.proto b/proto/ModelConfig.proto index b34e1ebded..460a39275f 100644 --- a/proto/ModelConfig.proto +++ b/proto/ModelConfig.proto @@ -248,6 +248,15 @@ message ImageConfig { required uint32 img_size_y = 9; } +message PriorBoxConfig { + repeated uint32 min_size = 1; + repeated uint32 max_size = 2; + repeated float aspect_ratio = 3; + repeated float variance = 4; + optional bool flip = 5 [default = true]; + optional bool clip = 6 [default = true]; +} + message LayerInputConfig { required string input_layer_name = 1; optional string input_parameter_name = 2; @@ -263,6 +272,7 @@ message LayerInputConfig { optional BilinearInterpConfig bilinear_interp_conf = 10; optional MaxOutConfig maxout_conf = 11; optional SppConfig spp_conf = 12; + optional PriorBoxConfig priorbox_conf = 13; } message LayerConfig { diff --git a/python/paddle/trainer/config_parser.py b/python/paddle/trainer/config_parser.py index 5b7f4d85e2..5de524e507 100644 --- a/python/paddle/trainer/config_parser.py +++ b/python/paddle/trainer/config_parser.py @@ -1577,6 +1577,19 @@ class PrintLayer(LayerBase): def __init__(self, name, inputs): super(PrintLayer, self).__init__(name, 'print', 0, inputs) +@config_layer('priorbox') +class PriorBoxLayer(LayerBase): + def __init__(self, name, inputs, size, min_size, max_size, aspect_ratio, variance): + super(PriorBoxLayer, self).__init__(name, 'priorbox', 0, inputs) + config_assert(len(inputs) == 2, 'PriorBoxLayer must have 2 input') + self.config.inputs[0].priorbox_conf.min_size.extend(min_size) + self.config.inputs[0].priorbox_conf.max_size.extend(max_size) + self.config.inputs[0].priorbox_conf.aspect_ratio.extend(aspect_ratio) + self.config.inputs[0].priorbox_conf.variance.extend(variance) + self.config.size = size + input_layer0 = self.get_input_layer(0) + input_layer1 = self.get_input_layer(1) + @config_layer('data') class DataLayer(LayerBase): diff --git a/python/paddle/trainer_config_helpers/layers.py b/python/paddle/trainer_config_helpers/layers.py index 8dd6b7b7d2..f04b5646aa 100644 --- a/python/paddle/trainer_config_helpers/layers.py +++ b/python/paddle/trainer_config_helpers/layers.py @@ -106,6 +106,7 @@ __all__ = [ 'maxout_layer', 'out_prod_layer', 'print_layer', + 'priorbox_layer', 'spp_layer', ] @@ -171,6 +172,7 @@ class LayerType(object): SPP_LAYER = "spp" PRINT_LAYER = "print" + PRIORBOX_LAYER = "priorbox" CTC_LAYER = "ctc" WARP_CTC_LAYER = "warp_ctc" @@ -933,6 +935,40 @@ def print_layer(input, name=None): inputs=[l.name for l in input], ) # this layer don't return anything, can not be input of other layer. 
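For reference, the snippet below is a minimal Python sketch of the per-cell box generation that the C++ `PriorBoxLayer::forward` in this patch performs. It is illustrative only, not part of the patch or of any PaddlePaddle API: it assumes a single `min_size` (and optional `max_size`) per layer, and assumes `aspect_ratios` already contains the flipped ratios that `init` appends.

```python
import math

def prior_boxes_for_cell(w, h, step_w, step_h, img_w, img_h,
                         min_size, aspect_ratios, max_size=None):
    """Normalized [xmin, ymin, xmax, ymax] priors for feature-map cell (w, h)."""
    cx = (w + 0.5) * step_w
    cy = (h + 0.5) * step_h
    boxes = []

    def add(bw, bh):
        boxes.append([(cx - bw / 2.) / img_w, (cy - bh / 2.) / img_h,
                      (cx + bw / 2.) / img_w, (cy + bh / 2.) / img_h])

    add(min_size, min_size)              # square prior from min_size
    if max_size is not None:             # extra prior of size sqrt(min * max)
        s = math.sqrt(min_size * max_size)
        add(s, s)
    for ar in aspect_ratios:             # one prior per aspect ratio other than 1
        if abs(ar - 1.) < 1e-6:
            continue
        add(min_size * math.sqrt(ar), min_size / math.sqrt(ar))
    # clip to [0, 1], as the layer does before appending the variances
    return [[min(max(v, 0.), 1.) for v in b] for b in boxes]
```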
+@wrap_name_default("priorbox") +def priorbox_layer(input, img_shape, aspect_ratio, variance, min_size, max_size=[], name=None): + """ + Compute the priorbox and set the variance. This layer is necessary for ssd. + + :param name: The Layer Name. + :type name: basestring + :param input: The input layer. + :type input: LayerOutput + :param img_shape: The width and height of the network input image. + :type img_shape: LayerOutput + :param aspect_ratio: The aspect ratio. + :type aspect_ratio: list + :param variance: The bounding box variance. + :type min_size: The min size of the priorbox width/height. + :param min_size: list + :type max_size: The max size of the priorbox width/height. Could be NULL. + :param max_size: list + :return: LayerOutput + """ + # plus one for ratio 1. + num_filters = (len(aspect_ratio) * 2 + 1 + len(max_size)) * 4 + size=(input.size / input.num_filters) * num_filters * 2 + Layer( + name=name, + type=LayerType.PRIORBOX_LAYER, + inputs=[input.name, img_shape.name], + size=size, + min_size=min_size, + max_size=max_size, + aspect_ratio=aspect_ratio, + variance=variance) + return LayerOutput( + name, LayerType.PRIORBOX_LAYER, parents=[input, img_shape], num_filters=num_filters, size=size) @wrap_name_default("seq_pooling") @wrap_bias_attr_default(has_bias=False) From ac2710960564c1c252612556c0ab4d626a68c60c Mon Sep 17 00:00:00 2001 From: Yi Wang Date: Tue, 13 Dec 2016 11:46:30 -0800 Subject: [PATCH 124/265] Use @ instead of //external to refer to external dependencies, so to standardize include paths. --- WORKSPACE | 9 ++ third_party/gflags_test/BUILD | 2 +- third_party/glog.BUILD | 169 ++++++++++++++++++++++++++++++++ third_party/gtest.BUILD | 2 +- third_party/protobuf_test/BUILD | 2 +- 5 files changed, 181 insertions(+), 3 deletions(-) create mode 100644 third_party/glog.BUILD diff --git a/WORKSPACE b/WORKSPACE index 8060047744..14699da905 100644 --- a/WORKSPACE +++ b/WORKSPACE @@ -23,3 +23,12 @@ git_repository( tag = "v2.2.0", remote = "https://github.com/gflags/gflags.git" ) + +# External dependency to glog. 
This method comes from +# https://github.com/reyoung/bazel_playground/blob/master/WORKSPACE +new_git_repository( + name = "glog", + remote = "https://github.com/google/glog.git", + commit = "b6a5e0524c28178985f0d228e9eaa43808dbec3c", + build_file = "third_party/glog.BUILD" +) diff --git a/third_party/gflags_test/BUILD b/third_party/gflags_test/BUILD index c3e53afb40..a018299ec4 100644 --- a/third_party/gflags_test/BUILD +++ b/third_party/gflags_test/BUILD @@ -5,7 +5,7 @@ cc_test( srcs = ["gflags_test.cc"], copts = ["-Iexternal/gtest/include"], deps = [ - "@gtest//:main", + "@gtest//:gtest", "@gflags//:gflags", ], ) diff --git a/third_party/glog.BUILD b/third_party/glog.BUILD new file mode 100644 index 0000000000..560c82d8d3 --- /dev/null +++ b/third_party/glog.BUILD @@ -0,0 +1,169 @@ +licenses(["notice"]) + +cc_library( + visibility = ["//visibility:public"], + name = "glog", + deps = [ + #"//third_party/libunwind:libunwind-k8", + "@gflags//:gflags", + ], + includes = [ + ".", + "src", + ], + copts = [ + "-D_START_GOOGLE_NAMESPACE_='namespace google {'", + "-D_END_GOOGLE_NAMESPACE_='}'", + "-DGOOGLE_NAMESPACE='google'", + "-DGOOGLE_GLOG_DLL_DECL=''", + "-DHAVE_DLADDR", + "-DHAVE_SNPRINTF", + "-DHAVE_DLFCN_H", + "-DHAVE_FCNTL", + "-DHAVE_GLOB_H", + "-DHAVE_INTTYPES_H", + "-DHAVE_LIBPTHREAD", + "-DHAVE_SYS_SYSCALL_H", + #"-DHAVE_LIBUNWIND_H", + "-DHAVE_LIB_GFLAGS", + #"-DHAVE_LIB_UNWIND", + "-DHAVE_MEMORY_H", + "-DHAVE_NAMESPACES", + "-DHAVE_PREAD", + "-DHAVE_PTHREAD", + "-DHAVE_PWD_H", + "-DHAVE_PWRITE", + "-DHAVE_RWLOCK", + "-DHAVE_SIGACTION", + "-DHAVE_SIGALTSTACK", + "-DHAVE_STDINT_H", + "-DHAVE_STRING_H", + "-DHAVE_SYS_TIME_H", + "-DHAVE_SYS_TYPES_H", + "-DHAVE_SYS_UCONTEXT_H", + "-DHAVE_SYS_UTSNAME_H", + "-DHAVE_UNISTD_H", + "-DHAVE_USING_OPERATOR", + "-DHAVE_HAVE___ATTRIBUTE___", + "-DHAVE_HAVE___BUILTIN_EXPECT", + #"-DNO_FRAME_POINTER", + "-D_GNU_SOURCE", + #"-fno-sanitize=thread", + #"-fno-sanitize=address", + "-Iexternal/glog/src", + #"-I/usr/local/include", # XXX import libunwind + ], + srcs = [ + "src/demangle.cc", + "src/logging.cc", + "src/raw_logging.cc", + "src/signalhandler.cc", + "src/symbolize.cc", + "src/utilities.cc", + "src/vlog_is_on.cc", + ":config_h", + ":logging_h", + ":raw_logging_h", + ":stl_logging_h", + ":vlog_is_on_h", + ], + hdrs = [ + "src/demangle.h", + "src/mock-log.h", + "src/stacktrace.h", + #"src/stacktrace_libunwind-inl.h", + "src/symbolize.h", + "src/utilities.h", + "src/base/commandlineflags.h", + "src/base/googleinit.h", + "src/base/mutex.h", + "src/glog/log_severity.h", + ], + linkopts = [ + #"-pthread", + #"-L/usr/local/lib -lunwind", + ], +) + +genrule( + name = "config_h", + srcs = [ + "src/config.h.cmake.in", + ], + outs = [ + "config.h", + ], + cmd = "awk '{ gsub(/^#cmakedefine/, \"//cmakedefine\"); print; }' $(<) > $(@)", +) + +genrule( + name = "logging_h", + srcs = [ + "src/glog/logging.h.in", + ], + outs = [ + "glog/logging.h", + ], + cmd = "$(location :gen_sh) < $(<) > $(@)", + tools = [":gen_sh"], +) + +genrule( + name = "raw_logging_h", + srcs = [ + "src/glog/raw_logging.h.in", + ], + outs = [ + "glog/raw_logging.h", + ], + cmd = "$(location :gen_sh) < $(<) > $(@)", + tools = [":gen_sh"], +) + +genrule( + name = "stl_logging_h", + srcs = [ + "src/glog/stl_logging.h.in", + ], + outs = [ + "glog/stl_logging.h", + ], + cmd = "$(location :gen_sh) < $(<) > $(@)", + tools = [":gen_sh"], +) + +genrule( + name = "vlog_is_on_h", + srcs = [ + "src/glog/vlog_is_on.h.in", + ], + outs = [ + "glog/vlog_is_on.h", + ], + cmd = "$(location :gen_sh) < 
$(<) > $(@)", + tools = [":gen_sh"], +) + +genrule( + name = "gen_sh", + outs = [ + "gen.sh", + ], + cmd = """ +cat > $@ <<"EOF" +#! /bin/sh +sed -e 's/@ac_cv_have_unistd_h@/1/g' \ + -e 's/@ac_cv_have_stdint_h@/1/g' \ + -e 's/@ac_cv_have_systypes_h@/1/g' \ + -e 's/@ac_cv_have_libgflags_h@/1/g' \ + -e 's/@ac_cv_have_uint16_t@/1/g' \ + -e 's/@ac_cv_have___builtin_expect@/1/g' \ + -e 's/@ac_cv_have_.*@/0/g' \ + -e 's/@ac_google_start_namespace@/namespace google {/g' \ + -e 's/@ac_google_end_namespace@/}/g' \ + -e 's/@ac_google_namespace@/google/g' \ + -e 's/@ac_cv___attribute___noinline@/__attribute__((noinline))/g' \ + -e 's/@ac_cv___attribute___noreturn@/__attribute__((noreturn))/g' \ + -e 's/@ac_cv___attribute___printf_4_5@/__attribute__((__format__ (__printf__, 4, 5)))/g' +EOF""" +) diff --git a/third_party/gtest.BUILD b/third_party/gtest.BUILD index 3e68a1d879..e9187e51ef 100644 --- a/third_party/gtest.BUILD +++ b/third_party/gtest.BUILD @@ -1,5 +1,5 @@ cc_library( - name = "main", + name = "gtest", srcs = glob( ["src/*.cc"], exclude = ["src/gtest-all.cc"] diff --git a/third_party/protobuf_test/BUILD b/third_party/protobuf_test/BUILD index 46f769da5f..29c5f344d3 100644 --- a/third_party/protobuf_test/BUILD +++ b/third_party/protobuf_test/BUILD @@ -21,7 +21,7 @@ cc_test( srcs = ["example_lib_test.cc"], copts = ["-Iexternal/gtest/include"], deps =[ - "@gtest//:main", + "@gtest//:gtest", ":example_lib", ], ) From aff8a0f12c8a98c49a599b8ebdb77cbad880fc3e Mon Sep 17 00:00:00 2001 From: xuwei06 Date: Tue, 13 Dec 2016 13:07:47 -0800 Subject: [PATCH 125/265] Fix enable_virtualenv The resource "unsigned char[]" array generated by create_source needs to have 0 ending as a valid C string to be used in PythonUtil.cpp initPython() Change-Id: I5e214606ed9102f37813ea3f07565dc10a9c015e --- cmake/util.cmake | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/cmake/util.cmake b/cmake/util.cmake index 11641f6064..eb7db7ce2e 100644 --- a/cmake/util.cmake +++ b/cmake/util.cmake @@ -119,7 +119,7 @@ function(link_paddle_exe TARGET_NAME) ${RDMA_LD_FLAGS} ${RDMA_LIBS}) endif() - + if(WITH_PYTHON) target_link_libraries(${TARGET_NAME} ${PYTHON_LIBRARIES}) @@ -136,10 +136,10 @@ function(link_paddle_exe TARGET_NAME) endif() if(WITH_GPU) - if(NOT WITH_DSO OR WITH_METRIC) + if(NOT WITH_DSO OR WITH_METRIC) target_link_libraries(${TARGET_NAME} ${CUDNN_LIBRARY} - ${CUDA_curand_LIBRARY}) + ${CUDA_curand_LIBRARY}) CUDA_ADD_CUBLAS_TO_TARGET(${TARGET_NAME}) endif() @@ -206,5 +206,5 @@ function(create_resources res_file output) # Convert hex data for C compatibility string(REGEX REPLACE "([0-9a-f][0-9a-f])" "0x\\1," filedata ${filedata}) # Append data to output file - file(APPEND ${output} "const unsigned char ${filename}[] = {${filedata}};\nconst unsigned ${filename}_size = sizeof(${filename});\n") + file(APPEND ${output} "const unsigned char ${filename}[] = {${filedata}0};\nconst unsigned ${filename}_size = sizeof(${filename});\n") endfunction() From 6a9891703c76328eb33e5754d98a1bcafc80f39d Mon Sep 17 00:00:00 2001 From: Yi Wang Date: Tue, 13 Dec 2016 15:01:42 -0800 Subject: [PATCH 126/265] Fix all building errors --- WORKSPACE | 29 ++++++++++++++--------------- third_party/gflags.BUILD | 12 ++++++++++++ third_party/glog.BUILD | 13 ------------- third_party/glog_test/BUILD | 11 +++++++++++ third_party/glog_test/glog_test.cc | 9 +++++++++ third_party/protobuf_test/BUILD | 8 ++++---- 6 files changed, 50 insertions(+), 32 deletions(-) create mode 100644 third_party/gflags.BUILD create mode 100644 
third_party/glog_test/BUILD create mode 100644 third_party/glog_test/glog_test.cc diff --git a/WORKSPACE b/WORKSPACE index f4358f0195..4581b89aaf 100644 --- a/WORKSPACE +++ b/WORKSPACE @@ -8,26 +8,25 @@ http_archive( # External dependency to gtest 1.7.0. This method comes from # https://www.bazel.io/versions/master/docs/tutorial/cpp.html. new_http_archive( - name = "gtest", - url = "https://github.com/google/googletest/archive/release-1.7.0.zip", - sha256 = "b58cb7547a28b2c718d1e38aee18a3659c9e3ff52440297e965f5edffe34b6d0", - build_file = "third_party/gtest.BUILD", - strip_prefix = "googletest-release-1.7.0", -) + name="gtest", + url="https://github.com/google/googletest/archive/release-1.7.0.zip", + sha256="b58cb7547a28b2c718d1e38aee18a3659c9e3ff52440297e965f5edffe34b6d0", + build_file="third_party/gtest.BUILD", + strip_prefix="googletest-release-1.7.0", ) # External dependency to gflags. This method comes from # https://github.com/gflags/example/blob/master/WORKSPACE. -git_repository( - name = "gflags", - tag = "v2.2.0", - remote = "https://github.com/gflags/gflags.git" +new_git_repository( + name="gflags", + tag="v2.2.0", + remote="https://github.com/gflags/gflags.git", + build_file="third_party/gflags.BUILD", ) # External dependency to glog. This method comes from # https://github.com/reyoung/bazel_playground/blob/master/WORKSPACE new_git_repository( - name = "glog", - remote = "https://github.com/google/glog.git", - commit = "b6a5e0524c28178985f0d228e9eaa43808dbec3c", - build_file = "third_party/glog.BUILD" -) + name="glog", + remote="https://github.com/google/glog.git", + commit="b6a5e0524c28178985f0d228e9eaa43808dbec3c", + build_file="third_party/glog.BUILD") diff --git a/third_party/gflags.BUILD b/third_party/gflags.BUILD new file mode 100644 index 0000000000..85e8bd0bd7 --- /dev/null +++ b/third_party/gflags.BUILD @@ -0,0 +1,12 @@ +# Bazel (http://bazel.io/) BUILD file for gflags. +# +# See INSTALL.md for instructions for adding gflags to a Bazel workspace. 
+ +licenses(["notice"]) + +exports_files(["src/gflags_complections.sh", "COPYING.txt"]) + +load(":bazel/gflags.bzl", "gflags_sources", "gflags_library") +(hdrs, srcs) = gflags_sources(namespace=["google", "gflags"]) +gflags_library(hdrs=hdrs, srcs=srcs, threads=0) +gflags_library(hdrs=hdrs, srcs=srcs, threads=1) diff --git a/third_party/glog.BUILD b/third_party/glog.BUILD index 560c82d8d3..52fe12a716 100644 --- a/third_party/glog.BUILD +++ b/third_party/glog.BUILD @@ -3,10 +3,6 @@ licenses(["notice"]) cc_library( visibility = ["//visibility:public"], name = "glog", - deps = [ - #"//third_party/libunwind:libunwind-k8", - "@gflags//:gflags", - ], includes = [ ".", "src", @@ -24,9 +20,6 @@ cc_library( "-DHAVE_INTTYPES_H", "-DHAVE_LIBPTHREAD", "-DHAVE_SYS_SYSCALL_H", - #"-DHAVE_LIBUNWIND_H", - "-DHAVE_LIB_GFLAGS", - #"-DHAVE_LIB_UNWIND", "-DHAVE_MEMORY_H", "-DHAVE_NAMESPACES", "-DHAVE_PREAD", @@ -51,7 +44,6 @@ cc_library( #"-fno-sanitize=thread", #"-fno-sanitize=address", "-Iexternal/glog/src", - #"-I/usr/local/include", # XXX import libunwind ], srcs = [ "src/demangle.cc", @@ -71,7 +63,6 @@ cc_library( "src/demangle.h", "src/mock-log.h", "src/stacktrace.h", - #"src/stacktrace_libunwind-inl.h", "src/symbolize.h", "src/utilities.h", "src/base/commandlineflags.h", @@ -79,10 +70,6 @@ cc_library( "src/base/mutex.h", "src/glog/log_severity.h", ], - linkopts = [ - #"-pthread", - #"-L/usr/local/lib -lunwind", - ], ) genrule( diff --git a/third_party/glog_test/BUILD b/third_party/glog_test/BUILD new file mode 100644 index 0000000000..b0d790f6ae --- /dev/null +++ b/third_party/glog_test/BUILD @@ -0,0 +1,11 @@ +licenses(["notice"]) # Apache 2.0 + +cc_test( + name = "glog_test", + srcs = ["glog_test.cc"], + copts = ["-Iexternal/gtest/include"], + deps =[ + "@gtest//:gtest", + "@glog//:glog", + ], +) diff --git a/third_party/glog_test/glog_test.cc b/third_party/glog_test/glog_test.cc new file mode 100644 index 0000000000..a1e3fd71e4 --- /dev/null +++ b/third_party/glog_test/glog_test.cc @@ -0,0 +1,9 @@ +#include +#include + +#include "glog/logging.h" +#include "gtest/gtest.h" + +TEST(GlogTest, Logging) { + LOG(INFO) << "Hello world"; +} diff --git a/third_party/protobuf_test/BUILD b/third_party/protobuf_test/BUILD index e972ca8b3a..67d4293c70 100644 --- a/third_party/protobuf_test/BUILD +++ b/third_party/protobuf_test/BUILD @@ -15,10 +15,10 @@ cc_library( deps=[":example_proto"], ) cc_test( - name = "example_lib_test", - srcs = ["example_lib_test.cc"], - copts = ["-Iexternal/gtest/include"], - deps =[ + name="example_lib_test", + srcs=["example_lib_test.cc"], + copts=["-Iexternal/gtest/include"], + deps=[ "@gtest//:gtest", ":example_lib", ], ) From f821b6b7503783803b5e8f014bf8bbee9aa99142 Mon Sep 17 00:00:00 2001 From: Yu Yang Date: Wed, 14 Dec 2016 10:49:26 +0800 Subject: [PATCH 127/265] Fit pre-commit for clang-format 4.x --- paddle/cuda/src/hl_cuda_device.cc | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/paddle/cuda/src/hl_cuda_device.cc b/paddle/cuda/src/hl_cuda_device.cc index b0bba73594..41787f6c0a 100644 --- a/paddle/cuda/src/hl_cuda_device.cc +++ b/paddle/cuda/src/hl_cuda_device.cc @@ -12,6 +12,9 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ +// clang-format off +// Because clang-format 4.X and clang-format 3.8+ format +// following lines in different. So disable clang-format. 
#include "hl_cuda.h" #include #include @@ -23,6 +26,7 @@ limitations under the License. */ #include "hl_dso_loader.h" #include "hl_thread.ph" #include "paddle/utils/Logging.h" +// clang-format on namespace dynload { From 8777ff3fa6ebac4587454e8ab845d29be94b1aba Mon Sep 17 00:00:00 2001 From: Yi Wang Date: Tue, 13 Dec 2016 19:12:41 -0800 Subject: [PATCH 128/265] Use yapf to auto format all BUILD and WORKSPACE files --- WORKSPACE | 7 +- paddle/cuda/src/hl_cuda_device.cc | 2 +- third_party/gflags_test/BUILD | 11 ++- third_party/gflags_test/gflags_test.cc | 9 +-- third_party/glog.BUILD | 98 +++++++++----------------- third_party/glog_test/BUILD | 11 ++- third_party/glog_test/glog_test.cc | 4 +- third_party/gtest.BUILD | 20 ++---- 8 files changed, 60 insertions(+), 102 deletions(-) diff --git a/WORKSPACE b/WORKSPACE index 4581b89aaf..f097c41da8 100644 --- a/WORKSPACE +++ b/WORKSPACE @@ -3,7 +3,7 @@ http_archive( name="protobuf", url="http://github.com/google/protobuf/archive/v3.1.0.tar.gz", sha256="0a0ae63cbffc274efb573bdde9a253e3f32e458c41261df51c5dbc5ad541e8f7", - strip_prefix="protobuf-3.1.0", ) + strip_prefix="protobuf-3.1.0") # External dependency to gtest 1.7.0. This method comes from # https://www.bazel.io/versions/master/docs/tutorial/cpp.html. @@ -12,7 +12,7 @@ new_http_archive( url="https://github.com/google/googletest/archive/release-1.7.0.zip", sha256="b58cb7547a28b2c718d1e38aee18a3659c9e3ff52440297e965f5edffe34b6d0", build_file="third_party/gtest.BUILD", - strip_prefix="googletest-release-1.7.0", ) + strip_prefix="googletest-release-1.7.0") # External dependency to gflags. This method comes from # https://github.com/gflags/example/blob/master/WORKSPACE. @@ -20,8 +20,7 @@ new_git_repository( name="gflags", tag="v2.2.0", remote="https://github.com/gflags/gflags.git", - build_file="third_party/gflags.BUILD", -) + build_file="third_party/gflags.BUILD") # External dependency to glog. This method comes from # https://github.com/reyoung/bazel_playground/blob/master/WORKSPACE diff --git a/paddle/cuda/src/hl_cuda_device.cc b/paddle/cuda/src/hl_cuda_device.cc index b0bba73594..d181448292 100644 --- a/paddle/cuda/src/hl_cuda_device.cc +++ b/paddle/cuda/src/hl_cuda_device.cc @@ -12,13 +12,13 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 
*/ -#include "hl_cuda.h" #include #include #include #include #include #include +#include "hl_cuda.h" #include "hl_cuda.ph" #include "hl_dso_loader.h" #include "hl_thread.ph" diff --git a/third_party/gflags_test/BUILD b/third_party/gflags_test/BUILD index a018299ec4..b50615203b 100644 --- a/third_party/gflags_test/BUILD +++ b/third_party/gflags_test/BUILD @@ -1,11 +1,10 @@ licenses(["notice"]) # Apache 2.0 cc_test( - name = "gflags_test", - srcs = ["gflags_test.cc"], - copts = ["-Iexternal/gtest/include"], - deps = [ + name="gflags_test", + srcs=["gflags_test.cc"], + copts=["-Iexternal/gtest/include"], + deps=[ "@gtest//:gtest", "@gflags//:gflags", - ], -) + ], ) diff --git a/third_party/gflags_test/gflags_test.cc b/third_party/gflags_test/gflags_test.cc index 5e588c2279..53286e7e5b 100644 --- a/third_party/gflags_test/gflags_test.cc +++ b/third_party/gflags_test/gflags_test.cc @@ -4,17 +4,14 @@ #include "gflags/gflags.h" #include "gtest/gtest.h" - DEFINE_bool(verbose, false, "Display program name before message"); DEFINE_string(message, "Hello world!", "Message to print"); -static bool IsNonEmptyMessage(const char *flagname, const std::string &value) -{ +static bool IsNonEmptyMessage(const char *flagname, const std::string &value) { return value[0] != '\0'; } DEFINE_validator(message, &IsNonEmptyMessage); - namespace third_party { namespace gflags_test { @@ -23,10 +20,10 @@ TEST(GflagsTest, ParseAndPrint) { gflags::SetVersionString("1.0.0"); int argc = 1; char program_name[] = "gflags_test"; - char** argv = new char*[2]; + char **argv = new char *[2]; argv[0] = program_name; argv[1] = NULL; - gflags::ParseCommandLineFlags(&argc, reinterpret_cast(&argv), true); + gflags::ParseCommandLineFlags(&argc, reinterpret_cast(&argv), true); EXPECT_EQ("gflags_test", std::string(gflags::ProgramInvocationShortName())); EXPECT_EQ("Hello world!", FLAGS_message); gflags::ShutDownCommandLineFlags(); diff --git a/third_party/glog.BUILD b/third_party/glog.BUILD index 52fe12a716..a0ff1d6b41 100644 --- a/third_party/glog.BUILD +++ b/third_party/glog.BUILD @@ -1,13 +1,13 @@ licenses(["notice"]) cc_library( - visibility = ["//visibility:public"], - name = "glog", - includes = [ + visibility=["//visibility:public"], + name="glog", + includes=[ ".", "src", ], - copts = [ + copts=[ "-D_START_GOOGLE_NAMESPACE_='namespace google {'", "-D_END_GOOGLE_NAMESPACE_='}'", "-DGOOGLE_NAMESPACE='google'", @@ -45,7 +45,7 @@ cc_library( #"-fno-sanitize=address", "-Iexternal/glog/src", ], - srcs = [ + srcs=[ "src/demangle.cc", "src/logging.cc", "src/raw_logging.cc", @@ -59,7 +59,7 @@ cc_library( ":stl_logging_h", ":vlog_is_on_h", ], - hdrs = [ + hdrs=[ "src/demangle.h", "src/mock-log.h", "src/stacktrace.h", @@ -69,74 +69,47 @@ cc_library( "src/base/googleinit.h", "src/base/mutex.h", "src/glog/log_severity.h", - ], -) + ]) genrule( - name = "config_h", - srcs = [ - "src/config.h.cmake.in", - ], - outs = [ - "config.h", - ], - cmd = "awk '{ gsub(/^#cmakedefine/, \"//cmakedefine\"); print; }' $(<) > $(@)", + name="config_h", + srcs=["src/config.h.cmake.in"], + outs=["config.h"], + cmd="awk '{ gsub(/^#cmakedefine/, \"//cmakedefine\"); print; }' $(<) > $(@)", ) genrule( - name = "logging_h", - srcs = [ - "src/glog/logging.h.in", - ], - outs = [ - "glog/logging.h", - ], - cmd = "$(location :gen_sh) < $(<) > $(@)", - tools = [":gen_sh"], -) + name="logging_h", + srcs=["src/glog/logging.h.in"], + outs=["glog/logging.h"], + cmd="$(location :gen_sh) < $(<) > $(@)", + tools=[":gen_sh"]) genrule( - name = "raw_logging_h", - srcs = [ - 
"src/glog/raw_logging.h.in", - ], - outs = [ - "glog/raw_logging.h", - ], - cmd = "$(location :gen_sh) < $(<) > $(@)", - tools = [":gen_sh"], -) + name="raw_logging_h", + srcs=["src/glog/raw_logging.h.in"], + outs=["glog/raw_logging.h"], + cmd="$(location :gen_sh) < $(<) > $(@)", + tools=[":gen_sh"]) genrule( - name = "stl_logging_h", - srcs = [ - "src/glog/stl_logging.h.in", - ], - outs = [ - "glog/stl_logging.h", - ], - cmd = "$(location :gen_sh) < $(<) > $(@)", - tools = [":gen_sh"], -) + name="stl_logging_h", + srcs=["src/glog/stl_logging.h.in"], + outs=["glog/stl_logging.h"], + cmd="$(location :gen_sh) < $(<) > $(@)", + tools=[":gen_sh"]) genrule( - name = "vlog_is_on_h", - srcs = [ - "src/glog/vlog_is_on.h.in", - ], - outs = [ - "glog/vlog_is_on.h", - ], - cmd = "$(location :gen_sh) < $(<) > $(@)", - tools = [":gen_sh"], -) + name="vlog_is_on_h", + srcs=["src/glog/vlog_is_on.h.in"], + outs=["glog/vlog_is_on.h"], + cmd="$(location :gen_sh) < $(<) > $(@)", + tools=[":gen_sh"]) genrule( - name = "gen_sh", - outs = [ - "gen.sh", - ], - cmd = """ + name="gen_sh", + outs=["gen.sh"], + cmd=""" cat > $@ <<"EOF" #! /bin/sh sed -e 's/@ac_cv_have_unistd_h@/1/g' \ @@ -152,5 +125,4 @@ sed -e 's/@ac_cv_have_unistd_h@/1/g' \ -e 's/@ac_cv___attribute___noinline@/__attribute__((noinline))/g' \ -e 's/@ac_cv___attribute___noreturn@/__attribute__((noreturn))/g' \ -e 's/@ac_cv___attribute___printf_4_5@/__attribute__((__format__ (__printf__, 4, 5)))/g' -EOF""" -) +EOF""") diff --git a/third_party/glog_test/BUILD b/third_party/glog_test/BUILD index b0d790f6ae..56d08e95f8 100644 --- a/third_party/glog_test/BUILD +++ b/third_party/glog_test/BUILD @@ -1,11 +1,10 @@ licenses(["notice"]) # Apache 2.0 cc_test( - name = "glog_test", - srcs = ["glog_test.cc"], - copts = ["-Iexternal/gtest/include"], - deps =[ + name="glog_test", + srcs=["glog_test.cc"], + copts=["-Iexternal/gtest/include"], + deps=[ "@gtest//:gtest", "@glog//:glog", - ], -) + ], ) diff --git a/third_party/glog_test/glog_test.cc b/third_party/glog_test/glog_test.cc index a1e3fd71e4..f1d737d625 100644 --- a/third_party/glog_test/glog_test.cc +++ b/third_party/glog_test/glog_test.cc @@ -4,6 +4,4 @@ #include "glog/logging.h" #include "gtest/gtest.h" -TEST(GlogTest, Logging) { - LOG(INFO) << "Hello world"; -} +TEST(GlogTest, Logging) { LOG(INFO) << "Hello world"; } diff --git a/third_party/gtest.BUILD b/third_party/gtest.BUILD index e9187e51ef..9255b51d9a 100644 --- a/third_party/gtest.BUILD +++ b/third_party/gtest.BUILD @@ -1,14 +1,8 @@ cc_library( - name = "gtest", - srcs = glob( - ["src/*.cc"], - exclude = ["src/gtest-all.cc"] - ), - hdrs = glob([ - "include/**/*.h", - "src/*.h" - ]), - copts = ["-Iexternal/gtest/include"], - linkopts = ["-pthread"], - visibility = ["//visibility:public"], -) + name="gtest", + srcs=glob( + ["src/*.cc"], exclude=["src/gtest-all.cc"]), + hdrs=glob(["include/**/*.h", "src/*.h"]), + copts=["-Iexternal/gtest/include"], + linkopts=["-pthread"], + visibility=["//visibility:public"], ) From c5d5ad3da02bd3707c5afafb18f6e53da5313578 Mon Sep 17 00:00:00 2001 From: qiaolongfei Date: Wed, 14 Dec 2016 12:37:08 +0800 Subject: [PATCH 129/265] add python api_predict for quick start --- demo/quick_start/api_predict.py | 148 ++++++++++++++++++++++++++++++++ demo/quick_start/api_predict.sh | 30 +++++++ 2 files changed, 178 insertions(+) create mode 100755 demo/quick_start/api_predict.py create mode 100644 demo/quick_start/api_predict.sh diff --git a/demo/quick_start/api_predict.py b/demo/quick_start/api_predict.py new file mode 100755 
index 0000000000..9c224e3cdb --- /dev/null +++ b/demo/quick_start/api_predict.py @@ -0,0 +1,148 @@ +# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import os, sys +import numpy as np +from optparse import OptionParser +from py_paddle import swig_paddle, DataProviderConverter +from paddle.trainer.PyDataProvider2 import sparse_binary_vector +from paddle.trainer.config_parser import parse_config + + +""" +Usage: run following command to show help message. + python api_predict.py -h +""" + +class QuickStartPrediction(): + def __init__(self, train_conf, dict_file, model_dir=None, label_file=None): + """ + train_conf: trainer configure. + dict_file: word dictionary file name. + model_dir: directory of model. + """ + self.train_conf = train_conf + self.dict_file = dict_file + self.word_dict = {} + self.dict_dim = self.load_dict() + self.model_dir = model_dir + if model_dir is None: + self.model_dir = os.path.dirname(train_conf) + + self.label = None + if label_file is not None: + self.load_label(label_file) + + conf = parse_config(train_conf, "is_predict=1") + self.network = swig_paddle.GradientMachine.createFromConfigProto( + conf.model_config) + self.network.loadParameters(self.model_dir) + input_types = [sparse_binary_vector(self.dict_dim)] + self.converter = DataProviderConverter(input_types) + + def load_dict(self): + """ + Load dictionary from self.dict_file. + """ + for line_count, line in enumerate(open(self.dict_file, 'r')): + self.word_dict[line.strip().split('\t')[0]] = line_count + return len(self.word_dict) + + def load_label(self, label_file): + """ + Load label. + """ + self.label = {} + for v in open(label_file, 'r'): + self.label[int(v.split('\t')[1])] = v.split('\t')[0] + + def get_index(self, data): + """ + transform word into integer index according to the dictionary. 
+ """ + words = data.strip().split() + word_slot = [ + self.word_dict[w] for w in words if w in self.word_dict + ] + return word_slot + + def batch_predict(self, data_batch): + input = self.converter(data_batch) + output = self.network.forwardTest(input) + prob = output[0]["id"].tolist() + print("predicting labels is:") + print prob + +def option_parser(): + usage = "python predict.py -n config -w model_dir -d dictionary -i input_file " + parser = OptionParser(usage="usage: %s [options]" % usage) + parser.add_option( + "-n", + "--tconf", + action="store", + dest="train_conf", + help="network config") + parser.add_option( + "-d", + "--dict", + action="store", + dest="dict_file", + help="dictionary file") + parser.add_option( + "-b", + "--label", + action="store", + dest="label", + default=None, + help="dictionary file") + parser.add_option( + "-c", + "--batch_size", + type="int", + action="store", + dest="batch_size", + default=1, + help="the batch size for prediction") + parser.add_option( + "-w", + "--model", + action="store", + dest="model_path", + default=None, + help="model path") + return parser.parse_args() + + +def main(): + options, args = option_parser() + train_conf = options.train_conf + batch_size = options.batch_size + dict_file = options.dict_file + model_path = options.model_path + label = options.label + swig_paddle.initPaddle("--use_gpu=0") + predict = QuickStartPrediction(train_conf, dict_file, model_path, label) + + batch = [] + labels = [] + for line in sys.stdin: + [label, text] = line.split("\t") + labels.append(int(label)) + batch.append([predict.get_index(text)]) + print("lables is:") + print labels + predict.batch_predict(batch) + +if __name__ == '__main__': + main() diff --git a/demo/quick_start/api_predict.sh b/demo/quick_start/api_predict.sh new file mode 100644 index 0000000000..c90d3b7054 --- /dev/null +++ b/demo/quick_start/api_predict.sh @@ -0,0 +1,30 @@ +#!/bin/bash +# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +set -e + +#Note the default model is pass-00002, you shold make sure the model path +#exists or change the mode path. 
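As an aside, the `QuickStartPrediction` class added above can also be driven directly from Python rather than through this shell script. The snippet below is a sketch only: the sample sentence is made up, the import assumes the script above is importable as `api_predict`, and the config, dictionary and model paths simply mirror the defaults used further down in this file.

```python
from py_paddle import swig_paddle
from api_predict import QuickStartPrediction  # assumes demo/quick_start is on PYTHONPATH

swig_paddle.initPaddle("--use_gpu=0")
predictor = QuickStartPrediction("trainer_config.lr.py",
                                 "data/dict.txt",
                                 model_dir="output/pass-00001/")
word_slot = predictor.get_index("this hotel is clean and quiet")
predictor.batch_predict([[word_slot]])  # prints the predicted label ids
```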
+#only test on trainer_config.lr.py +model=output/pass-00001/ +config=trainer_config.lr.py +label=data/labels.list +dict=data/dict.txt +batch_size=20 +head -n$batch_size data/test.txt | python api_predict.py \ + --tconf=$config\ + --model=$model \ + --label=$label \ + --dict=$dict \ + --batch_size=$batch_size From 0764bb4a9ea50480fca334eb6b81de1ac964c265 Mon Sep 17 00:00:00 2001 From: Luo Tao Date: Wed, 14 Dec 2016 13:54:26 +0800 Subject: [PATCH 130/265] add exclude_patterns for conf.py.in --- cmake/FindSphinx.cmake | 3 ++- doc/api/data_provider/pydataprovider2_en.rst | 4 ++-- doc/api/predict/swig_py_paddle_en.rst | 4 ++-- doc/conf.py.cn.in | 2 +- doc/conf.py.en.in | 2 +- doc/howto/cmd_parameter/detail_introduction_en.md | 2 +- doc/howto/cmd_parameter/index_en.md | 2 +- doc/howto/deep_model/rnn/rnn_en.rst | 4 ++-- doc/tutorials/rec/ml_dataset_en.md | 3 +-- doc/tutorials/rec/ml_regression_en.rst | 6 +++--- doc/tutorials/semantic_role_labeling/index_en.md | 2 +- python/paddle/trainer_config_helpers/data_sources.py | 2 +- 12 files changed, 18 insertions(+), 18 deletions(-) diff --git a/cmake/FindSphinx.cmake b/cmake/FindSphinx.cmake index 6702f45a16..05aa100eae 100644 --- a/cmake/FindSphinx.cmake +++ b/cmake/FindSphinx.cmake @@ -72,6 +72,7 @@ function( Sphinx_add_target target_name builder conf cache source destination ) ${source} ${destination} COMMENT "Generating sphinx documentation: ${builder}" + COMMAND ln -s ${destination}/index_*.html ${destination}/index.html ) set_property( @@ -143,4 +144,4 @@ function( Sphinx_add_targets target_base_name conf source base_destination ) add_dependencies( ${target_base_name}_linkcheck ${_dependencies} ) endif() -endfunction() \ No newline at end of file +endfunction() diff --git a/doc/api/data_provider/pydataprovider2_en.rst b/doc/api/data_provider/pydataprovider2_en.rst index 2e3cbd89ba..30357be325 100644 --- a/doc/api/data_provider/pydataprovider2_en.rst +++ b/doc/api/data_provider/pydataprovider2_en.rst @@ -1,4 +1,4 @@ -.. _api_pydataprovider2_en: +.. _api_pydataprovider2: PyDataProvider2 =============== @@ -104,7 +104,7 @@ And PaddlePadle will do all of the rest things\: Is this cool? -.. _api_pydataprovider2_en_sequential_model: +.. _api_pydataprovider2_sequential_model: DataProvider for the sequential model ------------------------------------- diff --git a/doc/api/predict/swig_py_paddle_en.rst b/doc/api/predict/swig_py_paddle_en.rst index 8dfea1d334..1c628e6971 100644 --- a/doc/api/predict/swig_py_paddle_en.rst +++ b/doc/api/predict/swig_py_paddle_en.rst @@ -23,7 +23,7 @@ python's :code:`help()` function. Let's walk through the above python script: * At the beginning, use :code:`swig_paddle.initPaddle()` to initialize PaddlePaddle with command line arguments, for more about command line arguments - see :ref:`cmd_detail_introduction_en` . + see :ref:`cmd_detail_introduction` . * Parse the configuration file that is used in training with :code:`parse_config()`. Because data to predict with always have no label, and output of prediction work normally is the output layer rather than the cost layer, so you should modify @@ -36,7 +36,7 @@ python's :code:`help()` function. Let's walk through the above python script: - Note: As swig_paddle can only accept C++ matrices, we offer a utility class DataProviderConverter that can accept the same input data with PyDataProvider2, for more information please refer to document - of :ref:`api_pydataprovider2_en` . + of :ref:`api_pydataprovider2` . 
* Do the prediction with :code:`forwardTest()`, which takes the converted input data and outputs the activations of the output layer. diff --git a/doc/conf.py.cn.in b/doc/conf.py.cn.in index 92d72f797e..418d718fbd 100644 --- a/doc/conf.py.cn.in +++ b/doc/conf.py.cn.in @@ -79,7 +79,7 @@ language = 'zh_CN' # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. -exclude_patterns = ['_build'] +exclude_patterns = ['_build', '**/*_en*', '*_en*'] # The reST default role (used for this markup: `text`) to use for all # documents. diff --git a/doc/conf.py.en.in b/doc/conf.py.en.in index f942f166fc..e96c25cb75 100644 --- a/doc/conf.py.en.in +++ b/doc/conf.py.en.in @@ -80,7 +80,7 @@ language = None # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. -exclude_patterns = ['_build'] +exclude_patterns = ['_build', '**/*_cn*', '*_cn*'] # The reST default role (used for this markup: `text`) to use for all # documents. diff --git a/doc/howto/cmd_parameter/detail_introduction_en.md b/doc/howto/cmd_parameter/detail_introduction_en.md index 82136b7d4f..27b2faf1d8 100644 --- a/doc/howto/cmd_parameter/detail_introduction_en.md +++ b/doc/howto/cmd_parameter/detail_introduction_en.md @@ -1,5 +1,5 @@ ```eval_rst -.. _cmd_detail_introduction_en: +.. _cmd_detail_introduction: ``` # Detail Description diff --git a/doc/howto/cmd_parameter/index_en.md b/doc/howto/cmd_parameter/index_en.md index fb658f2aa5..a6c236db61 100644 --- a/doc/howto/cmd_parameter/index_en.md +++ b/doc/howto/cmd_parameter/index_en.md @@ -1,5 +1,5 @@ ```eval_rst -.. _cmd_line_index_en: +.. _cmd_line_index: ``` # How to Set Command-line Parameters diff --git a/doc/howto/deep_model/rnn/rnn_en.rst b/doc/howto/deep_model/rnn/rnn_en.rst index a0740bae61..73f5d5371f 100644 --- a/doc/howto/deep_model/rnn/rnn_en.rst +++ b/doc/howto/deep_model/rnn/rnn_en.rst @@ -30,7 +30,7 @@ Then at the :code:`process` function, each :code:`yield` function will return th yield src_ids, trg_ids, trg_ids_next -For more details description of how to write a data provider, please refer to :ref:`api_pydataprovider2_en` . The full data provider file is located at :code:`demo/seqToseq/dataprovider.py`. +For more details description of how to write a data provider, please refer to :ref:`api_pydataprovider2` . The full data provider file is located at :code:`demo/seqToseq/dataprovider.py`. =============================================== Configure Recurrent Neural Network Architecture @@ -246,6 +246,6 @@ The code is listed below: outputs(beam_gen) -Notice that this generation technique is only useful for decoder like generation process. If you are working on sequence tagging tasks, please refer to :ref:`semantic_role_labeling_en` for more details. +Notice that this generation technique is only useful for decoder like generation process. If you are working on sequence tagging tasks, please refer to :ref:`semantic_role_labeling` for more details. The full configuration file is located at :code:`demo/seqToseq/seqToseq_net.py`. diff --git a/doc/tutorials/rec/ml_dataset_en.md b/doc/tutorials/rec/ml_dataset_en.md index dc11a5e060..25dea5c4af 100644 --- a/doc/tutorials/rec/ml_dataset_en.md +++ b/doc/tutorials/rec/ml_dataset_en.md @@ -1,6 +1,5 @@ ```eval_rst -.. _demo_ml_dataset_en: - +.. 
_demo_ml_dataset: ``` # MovieLens Dataset diff --git a/doc/tutorials/rec/ml_regression_en.rst b/doc/tutorials/rec/ml_regression_en.rst index 6346090a84..4bb2586e34 100644 --- a/doc/tutorials/rec/ml_regression_en.rst +++ b/doc/tutorials/rec/ml_regression_en.rst @@ -16,7 +16,7 @@ Data Preparation ```````````````` Download and extract dataset '''''''''''''''''''''''''''' -We use :ref:`demo_ml_dataset_en` here. +We use :ref:`demo_ml_dataset` here. To download and unzip the dataset, simply run the following commands. .. code-block:: bash @@ -264,7 +264,7 @@ In this :code:`dataprovider.py`, we should set\: * use_seq\: Whether this :code:`dataprovider.py` in sequence mode or not. * process\: Return each sample of data to :code:`paddle`. -The data provider details document see :ref:`api_pydataprovider2_en`. +The data provider details document see :ref:`api_pydataprovider2`. Train ````` @@ -280,7 +280,7 @@ The run.sh is shown as follow: It just start a paddle training process, write the log to `log.txt`, then print it on screen. -Each command line argument in :code:`run.sh`, please refer to the :ref:`cmd_line_index_en` page. The short description of these arguments is shown as follow. +Each command line argument in :code:`run.sh`, please refer to the :ref:`cmd_line_index` page. The short description of these arguments is shown as follow. * config\: Tell paddle which file is neural network configuration. * save_dir\: Tell paddle save model into './output' diff --git a/doc/tutorials/semantic_role_labeling/index_en.md b/doc/tutorials/semantic_role_labeling/index_en.md index 9a9f0d757b..92d7c63483 100644 --- a/doc/tutorials/semantic_role_labeling/index_en.md +++ b/doc/tutorials/semantic_role_labeling/index_en.md @@ -1,5 +1,5 @@ ```eval_rst -.. _semantic_role_labeling_en: +.. _semantic_role_labeling: ``` # Semantic Role labeling Tutorial # diff --git a/python/paddle/trainer_config_helpers/data_sources.py b/python/paddle/trainer_config_helpers/data_sources.py index c62553f54c..0fcf993d57 100644 --- a/python/paddle/trainer_config_helpers/data_sources.py +++ b/python/paddle/trainer_config_helpers/data_sources.py @@ -186,7 +186,7 @@ def define_py_data_sources2(train_list, test_list, module, obj, args=None): obj="process", args={"dictionary": dict_name}) - The related data provider can refer to :ref:`api_pydataprovider2_en_sequential_model` . + The related data provider can refer to :ref:`api_pydataprovider2_sequential_model` . :param train_list: Train list name. :type train_list: basestring From 39d689e2536c0838f01e9f87fc232f0822273557 Mon Sep 17 00:00:00 2001 From: gaoyuan Date: Wed, 14 Dec 2016 15:19:57 +0800 Subject: [PATCH 131/265] Format the priorbox code --- paddle/gserver/layers/PriorBox.cpp | 34 ++++++++++++++---------------- 1 file changed, 16 insertions(+), 18 deletions(-) diff --git a/paddle/gserver/layers/PriorBox.cpp b/paddle/gserver/layers/PriorBox.cpp index b0d59cd145..994f7c2038 100644 --- a/paddle/gserver/layers/PriorBox.cpp +++ b/paddle/gserver/layers/PriorBox.cpp @@ -1,4 +1,4 @@ -/* Copyright (c) 2016 Baidu, Inc. All Rights Reserve. +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
@@ -33,28 +33,28 @@ public: }; bool PriorBoxLayer::init(const LayerMap& layerMap, - const ParameterMap& parameterMap) { + const ParameterMap& parameterMap) { Layer::init(layerMap, parameterMap); - std::copy(config_.inputs(0).priorbox_conf().min_size().begin(), - config_.inputs(0).priorbox_conf().min_size().end(), + auto pb_conf = config_.inputs(0).priorbox_conf(); + std::copy(pb_conf.min_size().begin(), + pb_conf.min_size().end(), std::back_inserter(minSize_)); - std::copy(config_.inputs(0).priorbox_conf().max_size().begin(), - config_.inputs(0).priorbox_conf().max_size().end(), + std::copy(pb_conf.max_size().begin(), + pb_conf.max_size().end(), std::back_inserter(maxSize_)); - std::copy(config_.inputs(0).priorbox_conf().aspect_ratio().begin(), - config_.inputs(0).priorbox_conf().aspect_ratio().end(), + std::copy(pb_conf.aspect_ratio().begin(), + pb_conf.aspect_ratio().end(), std::back_inserter(aspectRatio_)); - std::copy(config_.inputs(0).priorbox_conf().variance().begin(), - config_.inputs(0).priorbox_conf().variance().end(), + std::copy(pb_conf.variance().begin(), + pb_conf.variance().end(), std::back_inserter(variance_)); // flip int input_ratio_length = aspectRatio_.size(); for (int index = 0; index < input_ratio_length; index++) - aspectRatio_.push_back(1 / aspectRatio_[index]); + aspectRatio_.push_back(1 / aspectRatio_[index]); aspectRatio_.push_back(1.); numPriors_ = aspectRatio_.size(); - if (maxSize_.size() > 0) - numPriors_++; + if (maxSize_.size() > 0) numPriors_++; buffer_ = Matrix::create(1, 1, false, false); return true; } @@ -79,7 +79,7 @@ void PriorBoxLayer::forward(PassType passType) { int idx = 0; for (int h = 0; h < layer_height; ++h) { for (int w = 0; w < layer_width; ++w) { - float center_x = (w + 0.5) * step_w; + float center_x = (w + 0.5) * step_w; float center_y = (h + 0.5) * step_h; int min_size = 0; for (size_t s = 0; s < minSize_.size(); s++) { @@ -109,8 +109,7 @@ void PriorBoxLayer::forward(PassType passType) { // rest of priors. for (size_t r = 0; r < aspectRatio_.size(); r++) { float ar = aspectRatio_[r]; - if (fabs(ar - 1.) < 1e-6) - continue; + if (fabs(ar - 1.) < 1e-6) continue; float box_width = min_size * sqrt(ar); float box_height = min_size / sqrt(ar); tmp_ptr[idx++] = (center_x - box_width / 2.) 
/ image_width; @@ -127,8 +126,7 @@ void PriorBoxLayer::forward(PassType passType) { for (int h = 0; h < layer_height; h++) for (int w = 0; w < layer_width; w++) for (int i = 0; i < numPriors_; i++) - for (int j = 0; j < 4; j++) - tmp_ptr[idx++] = variance_[j]; + for (int j = 0; j < 4; j++) tmp_ptr[idx++] = variance_[j]; MatrixPtr outV = getOutputValue(); outV->copyFrom(buffer_->data_, dim * 2); } From 9bc04baa8da64a26b43d2659c98d859c40d92884 Mon Sep 17 00:00:00 2001 From: livc Date: Wed, 14 Dec 2016 15:26:57 +0800 Subject: [PATCH 132/265] add contribute_to_paddle_cn.md --- doc/howto/contribute_to_paddle_cn.md | 119 +++++++++++++++++++++++++++ 1 file changed, 119 insertions(+) create mode 100644 doc/howto/contribute_to_paddle_cn.md diff --git a/doc/howto/contribute_to_paddle_cn.md b/doc/howto/contribute_to_paddle_cn.md new file mode 100644 index 0000000000..1e56f9549b --- /dev/null +++ b/doc/howto/contribute_to_paddle_cn.md @@ -0,0 +1,119 @@ +# 如何贡献代码 + +我们真诚地感谢您的贡献。你能使用 fork 和 pull request 的工作流来合并(merge)代码。 + +## 代码要求 +- 你的代码必须完全遵守 [doxygen](http://www.stack.nl/~dimitri/doxygen/) 的样式。 +- 确保编译器选项 WITH\_STYLE\_CHECK 已打开,并且编译器通过代码样式检查。 +- 所有代码必须具有单元测试。 +- 通过所有单元测试。 + +以下教程将指导您提交代码。 + +## [Fork](https://help.github.com/articles/fork-a-repo/) + +转到GitHub页面,然后单击“Fork”按钮。 +这就是这么简单。 + +## 克隆(Clone) + +Paddle 目前使用[git流分支模型](http://nvie.com/posts/a-successful-git-branching-model/)。 +**develop** 是主分支,其他用户分支是特征分支(feature branches)。 + +一旦你创建了一个fork,你可以使用你最喜欢的 git 客户端克隆你的仓库(repo)或只是直接在命令行输入: + +```shell +# 克隆 fork 到本地 +git clone --branch develop https://github.com/USERNAME/Paddle.git +``` +如果你的仓库不包含 **develop** 分支,你只需自己创建它。 + +```shell +git clone https://github.com/USERNAME/Paddle.git Paddle +cd Paddle +git checkout -b develop # 创建 develop 分支 +git remote add upstream https://github.com/PaddlePaddle/Paddle.git # 添加 upstream 到 baidu/Paddle +git pull upstream develop # 更新 upstream +git submodule update --init --recursive +``` + +然后你可以通过做一个本地开发分支开始开发 + +```shell +git checkout -b MY_COOL_STUFF_BRANCH +``` + +## 提交(Commit) + +提交你的代码: + +```shell +# 显示工作树状态 +git status +# 添加修改过的文件 +git add xx +env EDITOR=vim git commit # 你可以用 vim/nano/emacs 写下你的注释 +``` +提交信息的第一行是标题,其他行可以添加一些细节(如果有必要的话)。 + +## 保持 Fork 状态最新 + +在拉(pull)你的请求(request)之前,你应该从最新的 PaddlePaddle 同步代码。 +为此,你需要首先添加远程(remote): + +```shell +# 观察当前远程仓库配置 +git remote -v +# 添加上游(upstream)仓库 +git remote add upstream https://github.com/PaddlePaddle/Paddle.git +# 验证新的 upstream +git remote -v +``` + +用最新的 upstream 更新你的 fork: + +```shell +git pull --rebase upstream develop +``` +如果本地没有唯一提交,git 将简单地执行快进。但是,如果你一直在做一些改变(绝大多数情况下不应该),你可能要处理冲突。 + +现在,你的本地主分支与上游修改的一致并是最新的。 + +## 推送(Push)到 GitHub + +```shell +# 在 GitHub 上 push 你的仓库 +git push -u origin MY_COOL_STUFF_BRANCH # 创建远程分支 MY_COOL_STUFF_BRANCH 到 origin. +``` + +## 拉取请求(Pull Request) + +转到 GitHub上 你 fork 的页面,选择你的开发分支并单击 **pull request 按钮**。 + +## 使用最新版本更新你的 pull 请求 + +在代码审查(code review)期间,由于 baidu/Paddle 中新的提交导致你的 pull 请求可能会失效。如果没有冲突,GitHub允许自动更新。 你可以点击 pull request 页面中的“更新分支(Update Branch)”按钮。 但是在这种冲突情况下,你需要手动进行更新。你需要在本地仓库执行如下命令: + +```shell +git checkout MY_COOL_STUFF_BRANCH +git pull upstream develop +# 你可能需要根据git提示解决冲突 +# 创建并测试你的代码 +git push origin MY_COOL_STUFF_BRANCH +``` +现在你的 Pull Request 是最新的了。 + +## 修改你的 pull request + +当根据审阅者的意见修改 pull 请求时,请使用“git commit”而不是“git commit --amend”来提交更改,以便审阅者可以看到新的请求和旧的请求之间的区别。 + +可能的命令是 + +```shell +git checkout MY_COOL_STUFF_BRANCH +git pull upstream develop # 将本地更新到最新的代码库 +# 可能会发生一些冲突 +# 开始开发吧! 
+env EDITOR=vim git commit # 添加修改日志 +git push origin MY_COOL_STUFF_BRANCH +``` From 96009326504de2149c9fcd978b769ae9ba21843a Mon Sep 17 00:00:00 2001 From: gaoyuan Date: Wed, 14 Dec 2016 16:33:01 +0800 Subject: [PATCH 133/265] Add fake gpu support of the priorbox layer for the moment --- paddle/gserver/layers/PriorBox.cpp | 30 ++++++++++++++++++++++++++---- 1 file changed, 26 insertions(+), 4 deletions(-) diff --git a/paddle/gserver/layers/PriorBox.cpp b/paddle/gserver/layers/PriorBox.cpp index 994f7c2038..4b8573f058 100644 --- a/paddle/gserver/layers/PriorBox.cpp +++ b/paddle/gserver/layers/PriorBox.cpp @@ -24,11 +24,13 @@ public: bool init(const LayerMap& layerMap, const ParameterMap& parameterMap); void forward(PassType passType); void backward(const UpdateCallback& callback) {} + void forwardImp(const Argument& featureMap, const Argument& imageShape); int numPriors_; std::vector minSize_; std::vector maxSize_; std::vector aspectRatio_; std::vector variance_; + std::vector tmpCpuInput_; MatrixPtr buffer_; }; @@ -56,16 +58,35 @@ bool PriorBoxLayer::init(const LayerMap& layerMap, numPriors_ = aspectRatio_.size(); if (maxSize_.size() > 0) numPriors_++; buffer_ = Matrix::create(1, 1, false, false); + if (useGpu_) { + tmpCpuInput_.reserve(inputLayers_.size()); + for (size_t i = 0; i < inputLayers_.size(); i++) { + tmpCpuInput_.push_back(Argument()); + } + } return true; } void PriorBoxLayer::forward(PassType passType) { Layer::forward(passType); - auto input = getInput(0); - int layer_width = input.getFrameWidth(); - int layer_height = input.getFrameHeight(); + if (useGpu_) { + for (size_t i = 0; i < inputLayers_.size(); i++) { + tmpCpuInput_[i].resizeAndCopyFrom( + getInput(i), false, HPPL_STREAM_DEFAULT); + hl_stream_synchronize(HPPL_STREAM_DEFAULT); + forwardImp(tmpCpuInput_[0], tmpCpuInput_[1]); + } + } else { + forwardImp(getInput(0), getInput(1)); + } +} + +void PriorBoxLayer::forwardImp(const Argument& featureMap, + const Argument& imageShape) { + int layer_width = featureMap.getFrameWidth(); + int layer_height = featureMap.getFrameHeight(); - MatrixPtr inV1 = getInputValue(1); + MatrixPtr inV1 = imageShape.value; int image_width = inV1->getElement(0, 0); int image_height = inV1->getElement(0, 1); float step_w = static_cast(image_width) / layer_width; @@ -130,6 +151,7 @@ void PriorBoxLayer::forward(PassType passType) { MatrixPtr outV = getOutputValue(); outV->copyFrom(buffer_->data_, dim * 2); } + REGISTER_LAYER(priorbox, PriorBoxLayer); } // namespace paddle From c0076084e24175cfe729f085c7feaf286270dfe8 Mon Sep 17 00:00:00 2001 From: gaoyuan Date: Wed, 14 Dec 2016 16:50:38 +0800 Subject: [PATCH 134/265] Format the python file. 
--- proto/ModelConfig.proto | 2 -- python/paddle/trainer/config_parser.py | 4 +++- python/paddle/trainer_config_helpers/layers.py | 18 +++++++++++++++--- 3 files changed, 18 insertions(+), 6 deletions(-) diff --git a/proto/ModelConfig.proto b/proto/ModelConfig.proto index 460a39275f..f28f69641b 100644 --- a/proto/ModelConfig.proto +++ b/proto/ModelConfig.proto @@ -253,8 +253,6 @@ message PriorBoxConfig { repeated uint32 max_size = 2; repeated float aspect_ratio = 3; repeated float variance = 4; - optional bool flip = 5 [default = true]; - optional bool clip = 6 [default = true]; } message LayerInputConfig { diff --git a/python/paddle/trainer/config_parser.py b/python/paddle/trainer/config_parser.py index 5de524e507..8a82e5d667 100644 --- a/python/paddle/trainer/config_parser.py +++ b/python/paddle/trainer/config_parser.py @@ -1577,9 +1577,11 @@ class PrintLayer(LayerBase): def __init__(self, name, inputs): super(PrintLayer, self).__init__(name, 'print', 0, inputs) + @config_layer('priorbox') class PriorBoxLayer(LayerBase): - def __init__(self, name, inputs, size, min_size, max_size, aspect_ratio, variance): + def __init__(self, name, inputs, size, min_size, max_size, aspect_ratio, + variance): super(PriorBoxLayer, self).__init__(name, 'priorbox', 0, inputs) config_assert(len(inputs) == 2, 'PriorBoxLayer must have 2 input') self.config.inputs[0].priorbox_conf.min_size.extend(min_size) diff --git a/python/paddle/trainer_config_helpers/layers.py b/python/paddle/trainer_config_helpers/layers.py index f04b5646aa..80c421aa2e 100644 --- a/python/paddle/trainer_config_helpers/layers.py +++ b/python/paddle/trainer_config_helpers/layers.py @@ -935,8 +935,15 @@ def print_layer(input, name=None): inputs=[l.name for l in input], ) # this layer don't return anything, can not be input of other layer. + @wrap_name_default("priorbox") -def priorbox_layer(input, img_shape, aspect_ratio, variance, min_size, max_size=[], name=None): +def priorbox_layer(input, + img_shape, + aspect_ratio, + variance, + min_size, + max_size=[], + name=None): """ Compute the priorbox and set the variance. This layer is necessary for ssd. @@ -957,7 +964,7 @@ def priorbox_layer(input, img_shape, aspect_ratio, variance, min_size, max_size= """ # plus one for ratio 1. 
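    # Each configured aspect ratio yields two priors (the ratio and its flipped
    # reciprocal), plus one prior for ratio 1 and one extra prior per max_size
    # entry; every prior contributes 4 coordinates, and the layer also outputs
    # 4 variance values per prior, which is why `size` below is doubled.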
num_filters = (len(aspect_ratio) * 2 + 1 + len(max_size)) * 4 - size=(input.size / input.num_filters) * num_filters * 2 + size = (input.size / input.num_filters) * num_filters * 2 Layer( name=name, type=LayerType.PRIORBOX_LAYER, @@ -968,7 +975,12 @@ def priorbox_layer(input, img_shape, aspect_ratio, variance, min_size, max_size= aspect_ratio=aspect_ratio, variance=variance) return LayerOutput( - name, LayerType.PRIORBOX_LAYER, parents=[input, img_shape], num_filters=num_filters, size=size) + name, + LayerType.PRIORBOX_LAYER, + parents=[input, img_shape], + num_filters=num_filters, + size=size) + @wrap_name_default("seq_pooling") @wrap_bias_attr_default(has_bias=False) From f1e3ade09de259905ee901840e95514351d8c75f Mon Sep 17 00:00:00 2001 From: Luo Tao Date: Wed, 14 Dec 2016 17:39:21 +0800 Subject: [PATCH 135/265] update chinese catalog --- cmake/FindSphinx.cmake | 2 +- doc/api/index_cn.rst | 4 +-- doc/getstarted/basic_usage/index_cn.rst | 21 ++++------- doc/getstarted/basic_usage/index_en.rst | 20 +++++------ doc/getstarted/build_and_install/index_cn.rst | 4 +-- doc/getstarted/index_cn.rst | 2 +- doc/howto/concepts/nn_cn.rst | 3 -- doc/howto/concepts/program_concepts_cn.rst | 4 --- doc/howto/deep_model/index_cn.rst | 10 ------ doc/howto/deep_model/index_en.rst | 7 ---- doc/howto/deep_model/rnn/hrnn_demo_cn.rst | 7 ---- doc/howto/deep_model/rnn/index_cn.rst | 9 +++++ doc/howto/deep_model/rnn/index_en.rst | 7 ++++ .../rnn/{rnn_en.rst => rnn_config_en.rst} | 0 .../{new_layer => dev}/FullyConnected.jpg | Bin .../{ => dev}/contribute_to_paddle_en.md | 2 +- .../index_en.rst => dev/new_layer_en.rst} | 6 ++-- .../index_cn.rst => dev/write_docs_cn.rst} | 8 ++--- doc/howto/index_cn.rst | 33 +++++++++++------- doc/howto/index_en.rst | 17 ++++++--- doc/howto/optimization/index_en.rst | 4 +-- .../{ => usage}/cluster/cluster_train_en.md | 2 +- doc/howto/{ => usage}/cluster/k8s/Dockerfile | 0 doc/howto/{ => usage}/cluster/k8s/job.yaml | 0 .../cluster/k8s/k8s-paddle-arch.png | Bin .../cluster/k8s/k8s_cn.md} | 2 +- .../cluster/k8s/k8s_distributed_cn.md} | 3 +- doc/howto/{ => usage}/cluster/k8s/start.sh | 0 .../{ => usage}/cluster/k8s/start_paddle.py | 0 .../{ => usage}/cmd_parameter/arguments_en.md | 0 .../cmd_parameter/detail_introduction_en.md | 0 .../{ => usage}/cmd_parameter/index_en.md | 2 +- .../{ => usage}/cmd_parameter/use_case_en.md | 0 .../concepts/src/pserver_topology.dot | 0 .../concepts/src/trainer_config.py | 0 .../{ => usage}/concepts/use_concepts_cn.rst | 6 ++-- doc/tutorials/index_cn.md | 21 +++++------ doc/tutorials/index_en.md | 1 - doc/tutorials/quick_start/index_cn.rst | 5 +-- 39 files changed, 102 insertions(+), 110 deletions(-) delete mode 100644 doc/howto/concepts/nn_cn.rst delete mode 100644 doc/howto/concepts/program_concepts_cn.rst delete mode 100644 doc/howto/deep_model/index_cn.rst delete mode 100644 doc/howto/deep_model/index_en.rst delete mode 100644 doc/howto/deep_model/rnn/hrnn_demo_cn.rst create mode 100644 doc/howto/deep_model/rnn/index_cn.rst create mode 100644 doc/howto/deep_model/rnn/index_en.rst rename doc/howto/deep_model/rnn/{rnn_en.rst => rnn_config_en.rst} (100%) rename doc/howto/{new_layer => dev}/FullyConnected.jpg (100%) rename doc/howto/{ => dev}/contribute_to_paddle_en.md (99%) rename doc/howto/{new_layer/index_en.rst => dev/new_layer_en.rst} (99%) rename doc/howto/{write_docs/index_cn.rst => dev/write_docs_cn.rst} (90%) rename doc/howto/{ => usage}/cluster/cluster_train_en.md (99%) rename doc/howto/{ => usage}/cluster/k8s/Dockerfile (100%) rename doc/howto/{ 
=> usage}/cluster/k8s/job.yaml (100%) rename doc/howto/{ => usage}/cluster/k8s/k8s-paddle-arch.png (100%) rename doc/howto/{cluster/k8s/paddle_on_k8s_cn.md => usage/cluster/k8s/k8s_cn.md} (99%) rename doc/howto/{cluster/k8s/distributed_training_on_k8s_cn.md => usage/cluster/k8s/k8s_distributed_cn.md} (99%) rename doc/howto/{ => usage}/cluster/k8s/start.sh (100%) rename doc/howto/{ => usage}/cluster/k8s/start_paddle.py (100%) rename doc/howto/{ => usage}/cmd_parameter/arguments_en.md (100%) rename doc/howto/{ => usage}/cmd_parameter/detail_introduction_en.md (100%) rename doc/howto/{ => usage}/cmd_parameter/index_en.md (80%) rename doc/howto/{ => usage}/cmd_parameter/use_case_en.md (100%) rename doc/howto/{ => usage}/concepts/src/pserver_topology.dot (100%) rename doc/howto/{ => usage}/concepts/src/trainer_config.py (100%) rename doc/howto/{ => usage}/concepts/use_concepts_cn.rst (99%) diff --git a/cmake/FindSphinx.cmake b/cmake/FindSphinx.cmake index 05aa100eae..d319442ef1 100644 --- a/cmake/FindSphinx.cmake +++ b/cmake/FindSphinx.cmake @@ -72,7 +72,7 @@ function( Sphinx_add_target target_name builder conf cache source destination ) ${source} ${destination} COMMENT "Generating sphinx documentation: ${builder}" - COMMAND ln -s ${destination}/index_*.html ${destination}/index.html + COMMAND ln -sf ${destination}/index_*.html ${destination}/index.html ) set_property( diff --git a/doc/api/index_cn.rst b/doc/api/index_cn.rst index 2d54af84b8..3718cd73a2 100644 --- a/doc/api/index_cn.rst +++ b/doc/api/index_cn.rst @@ -1,5 +1,5 @@ -API -=== +API中文手册 +============ DataProvider API ---------------- diff --git a/doc/getstarted/basic_usage/index_cn.rst b/doc/getstarted/basic_usage/index_cn.rst index 8b84306ed7..d01cdaaeb7 100644 --- a/doc/getstarted/basic_usage/index_cn.rst +++ b/doc/getstarted/basic_usage/index_cn.rst @@ -1,16 +1,16 @@ -简介 -==== +经典的线性回归任务 +================== PaddlePaddle是源于百度的一个深度学习平台。这份简短的介绍将向你展示如何利用PaddlePaddle来解决一个经典的线性回归问题。 -1. 一个经典的任务 ------------------ +任务简介 +-------- 我们展示如何用PaddlePaddle解决 `单变量的线性回归 `_ 问题。线性回归的输入是一批点 `(x, y)` ,其中 `y = wx + b + ε`, 而 ε 是一个符合高斯分布的随机变量。线性回归的输出是从这批点估计出来的参数 `w` 和 `b` 。 一个例子是房产估值。我们假设房产的价格(y)是其大小(x)的一个线性函数,那么我们可以通过收集市场上房子的大小和价格,用来估计线性函数的参数w 和 b。 -2. 准备数据 +准备数据 ----------- 假设变量 `x` 和 `y` 的真实关系为: `y = 2x + 0.3 + ε`,这里展示如何使用观测数据来拟合这一线性关系。首先,Python代码将随机产生2000个观测点,作为线性回归的输入。下面脚本符合PaddlePaddle期待的读取数据的Python程序的模式。 @@ -28,7 +28,7 @@ PaddlePaddle是源于百度的一个深度学习平台。这份简短的介绍 x = random.random() yield [x], [2*x+0.3] -3. 训练模型 +训练模型 ----------- 为了还原 `y = 2x + 0.3`,我们先从一条随机的直线 `y' = wx + b` 开始,然后利用观测数据调整 `w` 和 `b` 使得 `y'` 和 `y` 的差距不断减小,最终趋于接近。这个过程就是模型的训练过程,而 `w` 和 `b` 就是模型的参数,即我们的训练目标。 @@ -79,7 +79,7 @@ PaddlePaddle是源于百度的一个深度学习平台。这份简短的介绍 PaddlePaddle将在观测数据集上迭代训练30轮,并将每轮的模型结果存放在 `./output` 路径下。从输出日志可以看到,随着轮数增加误差代价函数的输出在不断的减小,这意味着模型在训练数据上不断的改进,直到逼近真实解:` y = 2x + 0.3 ` -4. 模型检验 +模型检验 ----------- 训练完成后,我们希望能够检验模型的好坏。一种常用的做法是用学习的模型对另外一组测试数据进行预测,评价预测的效果。在这个例子中,由于已经知道了真实答案,我们可以直接观察模型的参数是否符合预期来进行检验。 @@ -106,10 +106,3 @@ PaddlePaddle将每个模型参数作为一个numpy数组单独存为一个文件 从图中可以看到,虽然 `w` 和 `b` 都使用随机值初始化,但在起初的几轮训练中它们都在快速逼近真实值,并且后续仍在不断改进,使得最终得到的模型几乎与真实模型一致。 这样,我们用PaddlePaddle解决了单变量线性回归问题, 包括数据输入、模型训练和最后的结果验证。 - -5. 
推荐后续阅读 ---------------- - -- `安装/编译 <../build_and_install/index.html>`_ :PaddlePaddle的安装与编译文档。 -- `快速入门 <../demo/quick_start/index.html>`_ :使用商品评论分类任务,系统性的介绍如何一步步改进,最终得到产品级的深度模型。 -- `示例 <../demo/index.html>`_ :各种实用案例,涵盖图像、文本、推荐等多个领域。 \ No newline at end of file diff --git a/doc/getstarted/basic_usage/index_en.rst b/doc/getstarted/basic_usage/index_en.rst index 4ffadc68ee..c10b897d42 100644 --- a/doc/getstarted/basic_usage/index_en.rst +++ b/doc/getstarted/basic_usage/index_en.rst @@ -1,15 +1,15 @@ -Basic Usage -============= +Simple Linear Regression +======================== PaddlePaddle is a deep learning platform open-sourced by Baidu. With PaddlePaddle, you can easily train a classic neural network within a couple lines of configuration, or you can build sophisticated models that provide state-of-the-art performance on difficult learning tasks like sentiment analysis, machine translation, image caption and so on. -1. A Classic Problem ---------------------- +Problem Background +------------------ Now, to give you a hint of what using PaddlePaddle looks like, let's start with a fundamental learning problem - `simple linear regression `_: you have observed a set of two-dimensional data points of ``X`` and ``Y``, where ``X`` is an explanatory variable and ``Y`` is corresponding dependent variable, and you want to recover the underlying correlation between ``X`` and ``Y``. Linear regression can be used in many practical scenarios. For example, ``X`` can be a variable about house size, and ``Y`` a variable about house price. You can build a model that captures relationship between them by observing real estate markets. -2. Prepare the Data --------------------- +Prepare the Data +----------------- Suppose the true relationship can be characterized as ``Y = 2X + 0.3``, let's see how to recover this pattern only from observed data. Here is a piece of python code that feeds synthetic data to PaddlePaddle. The code is pretty self-explanatory, the only extra thing you need to add for PaddlePaddle is a definition of input data types. @@ -26,8 +26,8 @@ Suppose the true relationship can be characterized as ``Y = 2X + 0.3``, let's se x = random.random() yield [x], [2*x+0.3] -3. Train a NeuralNetwork -------------------------- +Train a NeuralNetwork +---------------------- To recover this relationship between ``X`` and ``Y``, we use a neural network with one layer of linear activation units and a square error cost layer. Don't worry if you are not familiar with these terminologies, it's just saying that we are starting from a random line ``Y' = wX + b`` , then we gradually adapt ``w`` and ``b`` to minimize the difference between ``Y'`` and ``Y``. Here is what it looks like in PaddlePaddle: @@ -73,8 +73,8 @@ Now that everything is ready, you can train the network with a simple command li This means that PaddlePaddle will train this network on the synthectic dataset for 30 passes, and save all the models under path ``./output``. You will see from the messages printed out during training phase that the model cost is decreasing as time goes by, which indicates we are getting a closer guess. -4. Evaluate the Model ------------------------ +Evaluate the Model +------------------- Usually, a different dataset that left out during training phase should be used to evalute the models. However, we are lucky enough to know the real answer: ``w=2, b=0.3``, thus a better option is to check out model parameters directly. 
diff --git a/doc/getstarted/build_and_install/index_cn.rst b/doc/getstarted/build_and_install/index_cn.rst index e599aab2cb..3ffa858504 100644 --- a/doc/getstarted/build_and_install/index_cn.rst +++ b/doc/getstarted/build_and_install/index_cn.rst @@ -1,5 +1,5 @@ 编译与安装 -======================== +========== 安装 ++++ @@ -24,4 +24,4 @@ PaddlePaddle提供数个预编译的二进制来进行安装,包括Docker镜 .. toctree:: :maxdepth: 1 - cmake/build_from_source_cn.rst \ No newline at end of file + cmake/build_from_source_cn.rst diff --git a/doc/getstarted/index_cn.rst b/doc/getstarted/index_cn.rst index a0867a6e59..c6a4d3121c 100644 --- a/doc/getstarted/index_cn.rst +++ b/doc/getstarted/index_cn.rst @@ -1,4 +1,4 @@ -GET STARTED +新手入门 ============ .. toctree:: diff --git a/doc/howto/concepts/nn_cn.rst b/doc/howto/concepts/nn_cn.rst deleted file mode 100644 index f4d2cf490d..0000000000 --- a/doc/howto/concepts/nn_cn.rst +++ /dev/null @@ -1,3 +0,0 @@ -TBD - -目前正在书写中。敬请期待。 \ No newline at end of file diff --git a/doc/howto/concepts/program_concepts_cn.rst b/doc/howto/concepts/program_concepts_cn.rst deleted file mode 100644 index af5bbdac26..0000000000 --- a/doc/howto/concepts/program_concepts_cn.rst +++ /dev/null @@ -1,4 +0,0 @@ -TBD -### - -目前正在书写中。敬请期待。 \ No newline at end of file diff --git a/doc/howto/deep_model/index_cn.rst b/doc/howto/deep_model/index_cn.rst deleted file mode 100644 index 31f8c39af6..0000000000 --- a/doc/howto/deep_model/index_cn.rst +++ /dev/null @@ -1,10 +0,0 @@ -How to Configure Deep Models -============================ - -.. toctree:: - :maxdepth: 1 - - rnn/recurrent_group_cn.md - rnn/hierarchical_layer_cn.rst - rnn/hrnn_rnn_api_compare_cn.rst - rnn/hrnn_demo_cn.rst diff --git a/doc/howto/deep_model/index_en.rst b/doc/howto/deep_model/index_en.rst deleted file mode 100644 index 00a45641e6..0000000000 --- a/doc/howto/deep_model/index_en.rst +++ /dev/null @@ -1,7 +0,0 @@ -How to Configure Deep Models -============================ - -.. toctree:: - :maxdepth: 1 - - rnn/rnn_en.rst diff --git a/doc/howto/deep_model/rnn/hrnn_demo_cn.rst b/doc/howto/deep_model/rnn/hrnn_demo_cn.rst deleted file mode 100644 index 96396ff105..0000000000 --- a/doc/howto/deep_model/rnn/hrnn_demo_cn.rst +++ /dev/null @@ -1,7 +0,0 @@ -.. _algo_hrnn_demo: - -################# -双层RNN的使用示例 -################# - -TBD \ No newline at end of file diff --git a/doc/howto/deep_model/rnn/index_cn.rst b/doc/howto/deep_model/rnn/index_cn.rst new file mode 100644 index 0000000000..9e805ca851 --- /dev/null +++ b/doc/howto/deep_model/rnn/index_cn.rst @@ -0,0 +1,9 @@ +RNN相关模型 +=========== + +.. toctree:: + :maxdepth: 1 + + recurrent_group_cn.md + hierarchical_layer_cn.rst + hrnn_rnn_api_compare_cn.rst diff --git a/doc/howto/deep_model/rnn/index_en.rst b/doc/howto/deep_model/rnn/index_en.rst new file mode 100644 index 0000000000..7adc79873d --- /dev/null +++ b/doc/howto/deep_model/rnn/index_en.rst @@ -0,0 +1,7 @@ +RNN Models +========== + +.. 
toctree:: + :maxdepth: 1 + + rnn_config_en.rst diff --git a/doc/howto/deep_model/rnn/rnn_en.rst b/doc/howto/deep_model/rnn/rnn_config_en.rst similarity index 100% rename from doc/howto/deep_model/rnn/rnn_en.rst rename to doc/howto/deep_model/rnn/rnn_config_en.rst diff --git a/doc/howto/new_layer/FullyConnected.jpg b/doc/howto/dev/FullyConnected.jpg similarity index 100% rename from doc/howto/new_layer/FullyConnected.jpg rename to doc/howto/dev/FullyConnected.jpg diff --git a/doc/howto/contribute_to_paddle_en.md b/doc/howto/dev/contribute_to_paddle_en.md similarity index 99% rename from doc/howto/contribute_to_paddle_en.md rename to doc/howto/dev/contribute_to_paddle_en.md index 1decc91d62..9d59b24031 100644 --- a/doc/howto/contribute_to_paddle_en.md +++ b/doc/howto/dev/contribute_to_paddle_en.md @@ -1,4 +1,4 @@ -# How to Contribute Code +# Contribute Code We sincerely appreciate your contributions. You can use fork and pull request workflow to merge your code. diff --git a/doc/howto/new_layer/index_en.rst b/doc/howto/dev/new_layer_en.rst similarity index 99% rename from doc/howto/new_layer/index_en.rst rename to doc/howto/dev/new_layer_en.rst index 922bda5b0d..0513f068f3 100644 --- a/doc/howto/new_layer/index_en.rst +++ b/doc/howto/dev/new_layer_en.rst @@ -1,6 +1,6 @@ -======================= -How to Write New Layers -======================= +================ +Write New Layers +================ This tutorial will guide you to write customized layers in PaddlePaddle. We will utilize fully connected layer as an example to guide you through the following steps for writing a new layer. diff --git a/doc/howto/write_docs/index_cn.rst b/doc/howto/dev/write_docs_cn.rst similarity index 90% rename from doc/howto/write_docs/index_cn.rst rename to doc/howto/dev/write_docs_cn.rst index a1f983b340..5051a89230 100644 --- a/doc/howto/write_docs/index_cn.rst +++ b/doc/howto/dev/write_docs_cn.rst @@ -1,6 +1,6 @@ -############################### -如何贡献/修改PaddlePaddle的文档 -############################### +################## +如何贡献/修改文档 +################## PaddlePaddle的文档包括英文文档 ``doc`` 和中文文档 ``doc_cn`` 两个部分。文档都是通过 `cmake`_ 驱动 `sphinx`_ 编译生成,生成后的文档分别存储在编译目录的 ``doc`` 和 ``doc_cn`` 两个子目录下。 @@ -51,4 +51,4 @@ TBD .. _cmake: https://cmake.org/ -.. _sphinx: http://www.sphinx-doc.org/en/1.4.8/ \ No newline at end of file +.. _sphinx: http://www.sphinx-doc.org/en/1.4.8/ diff --git a/doc/howto/index_cn.rst b/doc/howto/index_cn.rst index 4706d9339a..805b63f044 100644 --- a/doc/howto/index_cn.rst +++ b/doc/howto/index_cn.rst @@ -1,27 +1,34 @@ -HOW TO -======= +进阶指南 +======== -Usage -------- +使用 +---- .. toctree:: :maxdepth: 1 - concepts/use_concepts_cn.rst - cluster/k8s/paddle_on_k8s_cn.md - cluster/k8s/distributed_training_on_k8s_cn.md + usage/concepts/use_concepts_cn.rst + usage/cluster/k8s/k8s_cn.md + usage/cluster/k8s/k8s_distributed_cn.md -Development ------------- +开发 +---- .. toctree:: :maxdepth: 1 - write_docs/index_cn.rst - deep_model/index_cn.rst + dev/write_docs_cn.rst -Optimization -------------- +配置 +---- + +.. toctree:: + :maxdepth: 1 + + deep_model/rnn/index_cn.rst + +优化 +---- .. toctree:: :maxdepth: 1 diff --git a/doc/howto/index_en.rst b/doc/howto/index_en.rst index bd64c5b1fb..1000d956a7 100644 --- a/doc/howto/index_en.rst +++ b/doc/howto/index_en.rst @@ -7,9 +7,8 @@ Usage .. toctree:: :maxdepth: 1 - cmd_parameter/index_en.md - deep_model/index_en.rst - cluster/cluster_train_en.md + usage/cmd_parameter/index_en.md + usage/cluster/cluster_train_en.md Development ------------ @@ -17,8 +16,16 @@ Development .. 
toctree:: :maxdepth: 1 - new_layer/index_en.rst - contribute_to_paddle_en.md + dev/new_layer_en.rst + dev/contribute_to_paddle_en.md + +Configuration +------------- + +.. toctree:: + :maxdepth: 1 + + deep_model/rnn/index_en.rst Optimization ------------- diff --git a/doc/howto/optimization/index_en.rst b/doc/howto/optimization/index_en.rst index 1e2f16b5da..84804fc9af 100644 --- a/doc/howto/optimization/index_en.rst +++ b/doc/howto/optimization/index_en.rst @@ -1,5 +1,5 @@ -How to Tune GPU Performance -=========================== +Tune GPU Performance +==================== .. toctree:: :maxdepth: 3 diff --git a/doc/howto/cluster/cluster_train_en.md b/doc/howto/usage/cluster/cluster_train_en.md similarity index 99% rename from doc/howto/cluster/cluster_train_en.md rename to doc/howto/usage/cluster/cluster_train_en.md index 1de34a6a99..2fd24e532e 100644 --- a/doc/howto/cluster/cluster_train_en.md +++ b/doc/howto/usage/cluster/cluster_train_en.md @@ -1,4 +1,4 @@ -# How to Run Distributed Training +# Run Distributed Training In this article, we explain how to run distributed Paddle training jobs on clusters. We will create the distributed version of the single-process training example, [recommendation](https://github.com/baidu/Paddle/tree/develop/demo/recommendation). diff --git a/doc/howto/cluster/k8s/Dockerfile b/doc/howto/usage/cluster/k8s/Dockerfile similarity index 100% rename from doc/howto/cluster/k8s/Dockerfile rename to doc/howto/usage/cluster/k8s/Dockerfile diff --git a/doc/howto/cluster/k8s/job.yaml b/doc/howto/usage/cluster/k8s/job.yaml similarity index 100% rename from doc/howto/cluster/k8s/job.yaml rename to doc/howto/usage/cluster/k8s/job.yaml diff --git a/doc/howto/cluster/k8s/k8s-paddle-arch.png b/doc/howto/usage/cluster/k8s/k8s-paddle-arch.png similarity index 100% rename from doc/howto/cluster/k8s/k8s-paddle-arch.png rename to doc/howto/usage/cluster/k8s/k8s-paddle-arch.png diff --git a/doc/howto/cluster/k8s/paddle_on_k8s_cn.md b/doc/howto/usage/cluster/k8s/k8s_cn.md similarity index 99% rename from doc/howto/cluster/k8s/paddle_on_k8s_cn.md rename to doc/howto/usage/cluster/k8s/k8s_cn.md index f8c9f19a9f..2575701053 100644 --- a/doc/howto/cluster/k8s/paddle_on_k8s_cn.md +++ b/doc/howto/usage/cluster/k8s/k8s_cn.md @@ -1,4 +1,4 @@ -# Paddle On Kubernetes:单机训练 +# Kubernetes 单机训练 在这篇文档里,我们介绍如何在 Kubernetes 集群上启动一个单机使用CPU的Paddle训练作业。在下一篇中,我们将介绍如何启动分布式训练作业。 diff --git a/doc/howto/cluster/k8s/distributed_training_on_k8s_cn.md b/doc/howto/usage/cluster/k8s/k8s_distributed_cn.md similarity index 99% rename from doc/howto/cluster/k8s/distributed_training_on_k8s_cn.md rename to doc/howto/usage/cluster/k8s/k8s_distributed_cn.md index 64f8fd4b43..d4d01f2759 100644 --- a/doc/howto/cluster/k8s/distributed_training_on_k8s_cn.md +++ b/doc/howto/usage/cluster/k8s/k8s_distributed_cn.md @@ -1,5 +1,4 @@ - -# PaddlePaddle on Kubernetes:分布式训练 +# Kubernetes 分布式训练 前一篇文章介绍了如何在Kubernetes集群上启动一个单机PaddlePaddle训练作业 (Job)。在这篇文章里,我们介绍如何在Kubernetes集群上进行分布式PaddlePaddle训练作业。关于PaddlePaddle的分布式训练,文章 [Cluster Training](https://github.com/baidu/Paddle/blob/develop/doc/cluster/opensource/cluster_train.md)介绍了一种通过SSH远程分发任务,进行分布式训练的方法,与此不同的是,本文将介绍在Kubernetes容器管理平台上快速构建PaddlePaddle容器集群,进行分布式训练的方案。 diff --git a/doc/howto/cluster/k8s/start.sh b/doc/howto/usage/cluster/k8s/start.sh similarity index 100% rename from doc/howto/cluster/k8s/start.sh rename to doc/howto/usage/cluster/k8s/start.sh diff --git a/doc/howto/cluster/k8s/start_paddle.py b/doc/howto/usage/cluster/k8s/start_paddle.py similarity index 100% rename from 
doc/howto/cluster/k8s/start_paddle.py rename to doc/howto/usage/cluster/k8s/start_paddle.py diff --git a/doc/howto/cmd_parameter/arguments_en.md b/doc/howto/usage/cmd_parameter/arguments_en.md similarity index 100% rename from doc/howto/cmd_parameter/arguments_en.md rename to doc/howto/usage/cmd_parameter/arguments_en.md diff --git a/doc/howto/cmd_parameter/detail_introduction_en.md b/doc/howto/usage/cmd_parameter/detail_introduction_en.md similarity index 100% rename from doc/howto/cmd_parameter/detail_introduction_en.md rename to doc/howto/usage/cmd_parameter/detail_introduction_en.md diff --git a/doc/howto/cmd_parameter/index_en.md b/doc/howto/usage/cmd_parameter/index_en.md similarity index 80% rename from doc/howto/cmd_parameter/index_en.md rename to doc/howto/usage/cmd_parameter/index_en.md index a6c236db61..2a96e7e976 100644 --- a/doc/howto/cmd_parameter/index_en.md +++ b/doc/howto/usage/cmd_parameter/index_en.md @@ -1,7 +1,7 @@ ```eval_rst .. _cmd_line_index: ``` -# How to Set Command-line Parameters +# Set Command-line Parameters * [Use Case](use_case_en.md) * [Arguments](arguments_en.md) diff --git a/doc/howto/cmd_parameter/use_case_en.md b/doc/howto/usage/cmd_parameter/use_case_en.md similarity index 100% rename from doc/howto/cmd_parameter/use_case_en.md rename to doc/howto/usage/cmd_parameter/use_case_en.md diff --git a/doc/howto/concepts/src/pserver_topology.dot b/doc/howto/usage/concepts/src/pserver_topology.dot similarity index 100% rename from doc/howto/concepts/src/pserver_topology.dot rename to doc/howto/usage/concepts/src/pserver_topology.dot diff --git a/doc/howto/concepts/src/trainer_config.py b/doc/howto/usage/concepts/src/trainer_config.py similarity index 100% rename from doc/howto/concepts/src/trainer_config.py rename to doc/howto/usage/concepts/src/trainer_config.py diff --git a/doc/howto/concepts/use_concepts_cn.rst b/doc/howto/usage/concepts/use_concepts_cn.rst similarity index 99% rename from doc/howto/concepts/use_concepts_cn.rst rename to doc/howto/usage/concepts/use_concepts_cn.rst index 6b87522088..77ba764419 100644 --- a/doc/howto/concepts/use_concepts_cn.rst +++ b/doc/howto/usage/concepts/use_concepts_cn.rst @@ -1,6 +1,6 @@ -######################### -PaddlePaddle 基本使用概念 -######################### +############ +基本使用概念 +############ PaddlePaddle是一个深度学习框架,支持单机模式和多机模式。 diff --git a/doc/tutorials/index_cn.md b/doc/tutorials/index_cn.md index fddaee5b2d..adc75978a7 100644 --- a/doc/tutorials/index_cn.md +++ b/doc/tutorials/index_cn.md @@ -1,23 +1,24 @@ -# TUTORIALS -There are several examples and demos here. +# 完整教程 -## Quick Start +## 快速入门 -* [Quick Start](quick_start/index_cn.rst) +使用商品评论分类任务,系统性的介绍如何一步步改进,最终得到产品级的深度模型。 -## Image +* [阅读教程](quick_start/index_cn.rst) + +## 图像 * TBD -## NLP +## 自然语言处理 -* [Sentiment Analysis](sentiment_analysis/index_cn.md) -* [Semantic Role Labeling](semantic_role_labeling/index_cn.rst) +* [情感分类](sentiment_analysis/index_cn.md) +* [语义角色标注](semantic_role_labeling/index_cn.md) -## Recommendation +## 个性化推荐 * TBD -## Model Zoo +## 常用模型 * TBD diff --git a/doc/tutorials/index_en.md b/doc/tutorials/index_en.md index 039ec4b4a4..63b2091c24 100644 --- a/doc/tutorials/index_en.md +++ b/doc/tutorials/index_en.md @@ -17,7 +17,6 @@ There are several examples and demos here. 
## Recommendation -* [MovieLens Dataset](rec/ml_dataset_en.md) * [MovieLens Regression](rec/ml_regression_en.rst) ## Model Zoo diff --git a/doc/tutorials/quick_start/index_cn.rst b/doc/tutorials/quick_start/index_cn.rst index 754c2f6212..936f16118a 100644 --- a/doc/tutorials/quick_start/index_cn.rst +++ b/doc/tutorials/quick_start/index_cn.rst @@ -1,5 +1,6 @@ -PaddlePaddle快速入门教程 -======================== +============= +快速入门教程 +============= 我们将以 `文本分类问题 `_ 为例, 介绍PaddlePaddle的基本使用方法。 From 0c65442c5bfdf1a7f6d82c3601a2d8fe24e5d2db Mon Sep 17 00:00:00 2001 From: Yu Yang Date: Wed, 14 Dec 2016 17:12:43 +0800 Subject: [PATCH 136/265] Upgrade protobuf to 3.1 in Travis-CI linux --- .travis.yml | 6 ++---- cmake/check_packages.cmake | 1 - paddle/scripts/travis/before_install.linux.sh | 11 +++++++++++ paddle/scripts/travis/main.sh | 5 +++++ 4 files changed, 18 insertions(+), 5 deletions(-) diff --git a/.travis.yml b/.travis.yml index 7de4ec7fc5..5b14f8e61e 100644 --- a/.travis.yml +++ b/.travis.yml @@ -29,10 +29,6 @@ addons: - python-pip - python2.7-dev - m4 - - libprotobuf-dev - - doxygen - - protobuf-compiler - - python-protobuf - python-numpy - python-wheel - libgoogle-glog-dev @@ -43,6 +39,8 @@ addons: - graphviz - swig - clang-format-3.8 + - automake + - libtool before_install: - | if [ ${JOB} == "BUILD_AND_TEST" ]; then diff --git a/cmake/check_packages.cmake b/cmake/check_packages.cmake index 0688745541..1a7c6a791b 100644 --- a/cmake/check_packages.cmake +++ b/cmake/check_packages.cmake @@ -28,7 +28,6 @@ endif() if(WITH_DOC) find_package(Sphinx REQUIRED) - find_package(Doxygen REQUIRED) find_python_module(recommonmark REQUIRED) endif() diff --git a/paddle/scripts/travis/before_install.linux.sh b/paddle/scripts/travis/before_install.linux.sh index ec2ac1f224..3351deddb9 100755 --- a/paddle/scripts/travis/before_install.linux.sh +++ b/paddle/scripts/travis/before_install.linux.sh @@ -1,5 +1,16 @@ #!/bin/bash set -e + +cd /tmp +wget https://github.com/google/protobuf/archive/v3.0.2.tar.gz -O protobuf.tar.gz +tar xf protobuf.tar.gz +cd protobuf* +./autogen.sh +./configure +make -j 2 install +cd .. +rm -rf protobuf* + pushd /usr/src/gtest cmake . make diff --git a/paddle/scripts/travis/main.sh b/paddle/scripts/travis/main.sh index 13f2552d29..1b49a12563 100755 --- a/paddle/scripts/travis/main.sh +++ b/paddle/scripts/travis/main.sh @@ -1,6 +1,11 @@ #!/bin/bash cd `dirname $0` +if [ "$TRAVIS_OS_NAME" == "linux" ]; then + # for manually installed protobuf 3.10 + export LD_LIBRARY_PATH=/usr/local/lib/:$LD_LIBRARY_PATH +fi + if [ ${JOB} == "BUILD_AND_TEST" ]; then ./build_and_test.sh elif [ ${JOB} == "DOCS" ]; then From fc886e8bccf10c1d0b167afee20fccefb7e498f6 Mon Sep 17 00:00:00 2001 From: Yu Yang Date: Wed, 14 Dec 2016 10:41:50 +0800 Subject: [PATCH 137/265] Add pre-commit hook for contribute documentation. * Also add a soft link for contribute.md to make github recognize this file. 
--- CONTRIBUTING.md | 1 + doc/howto/contribute_to_paddle_en.md | 30 +++++++++++++++++++++------- 2 files changed, 24 insertions(+), 7 deletions(-) create mode 120000 CONTRIBUTING.md diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md new file mode 120000 index 0000000000..f3eb8b4edb --- /dev/null +++ b/CONTRIBUTING.md @@ -0,0 +1 @@ +./doc/howto/contribute_to_paddle_en.md \ No newline at end of file diff --git a/doc/howto/contribute_to_paddle_en.md b/doc/howto/contribute_to_paddle_en.md index 1decc91d62..f4b67d64e7 100644 --- a/doc/howto/contribute_to_paddle_en.md +++ b/doc/howto/contribute_to_paddle_en.md @@ -1,8 +1,8 @@ # How to Contribute Code We sincerely appreciate your contributions. You can use fork and pull request -workflow to merge your code. - +workflow to merge your code. + ## Code Requirements - Your code must be fully documented by [doxygen](http://www.stack.nl/~dimitri/doxygen/) style. @@ -12,11 +12,11 @@ workflow to merge your code. - Pass all unit tests. The following tutorial guides you into submitting your contibution. - + ## [Creating a Fork](https://help.github.com/articles/fork-a-repo/) - + Just head over to the GitHub page and click the "Fork" button. -It's just that simple. +It's just that simple. ## Clone @@ -25,7 +25,7 @@ The **develop** is the main branch, and other user's branches are feature branch Once you've created a fork, you can use your favorite git client to clone your repo or just head straight to the command line: - + ```shell # Clone your fork to your local machine git clone --branch develop https://github.com/USERNAME/Paddle.git @@ -47,6 +47,22 @@ Then you can start to develop by making a local developement branch git checkout -b MY_COOL_STUFF_BRANCH ``` +## Using `pre-commit` hook + +Paddle developers use [pre-commit](http://pre-commit.com/) tool to manage git +pre-commit hooks. It can help us format source codes (cpp, python), check some +basic thing before commit (only one EOL for each file, do not add a huge file +in git). `pre-commit` tests is a part of unit tests in Travis-CI now, every +PR doesn't fit hook can not be merged into Paddle. + +To use [pre-commit](http://pre-commit.com/), you should install it by +`pip install pre-commit`, and currently, Paddle uses `clang-format` to format +c/cpp sources. Please make sure clang-format 3.8+ installed. + +Then just run `pre-commit install` in your Paddle clone directory. When you +commit your code, the pre-commit hook will check the local code if there is +anything not suitable to commit, and so on. + ## Commit Commit your changes by following command lines: @@ -83,7 +99,7 @@ git pull --rebase upstream develop If there are no unique commits locally, git will simply perform a fast-forward. However, if you have been making changes (in the vast majority of cases you -probably shouldn't be), you may have to deal with conflicts. +probably shouldn't be), you may have to deal with conflicts. Now, your local master branch is up-to-date with everything modified upstream. 
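
The patch above documents the `pre-commit` based contributor workflow. As a quick reference, the following is a minimal sketch of that workflow as a shell session, based only on the commands named in the patch (`pip install pre-commit`, `pre-commit install`, clang-format 3.8+) and the `clang-format-3.8` package already listed in the `.travis.yml` addons earlier in this series; the apt package name is an assumption and may differ per platform.

```shell
# Minimal sketch of the workflow described in the patch above; not an official script.
# Assumption: clang-format 3.8 is available as the "clang-format-3.8" package
# (as in the .travis.yml addons); adjust for your platform.

# Install the pre-commit tool and the formatter it drives.
pip install pre-commit
sudo apt-get install clang-format-3.8

# Register the hooks inside your Paddle clone; afterwards every `git commit`
# runs the formatting and basic checks before the commit is created.
cd Paddle
pre-commit install

# Optionally run the same checks over the whole tree once,
# instead of waiting for the next commit.
pre-commit run --all-files
```
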
From f53e8d7a09eacea7d6c1637d4002cf3ad670adbc Mon Sep 17 00:00:00 2001 From: wangyanfei01 Date: Wed, 14 Dec 2016 18:31:37 +0800 Subject: [PATCH 138/265] fix bug: if test and sparse_remote_update can not co-exsit, crash trainer if necessary --- paddle/trainer/Trainer.cpp | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/paddle/trainer/Trainer.cpp b/paddle/trainer/Trainer.cpp index 9c83c207ed..e10984a8d1 100644 --- a/paddle/trainer/Trainer.cpp +++ b/paddle/trainer/Trainer.cpp @@ -226,6 +226,12 @@ void Trainer::init(const std::shared_ptr& config, DataProvider::create(config_->getTestDataConfig(), *config_, gpuData)); } if (testDataProvider_) { + if (config_->getOptConfig().use_sparse_remote_updater()) { + LOG(FATAL) << "It's prohibited to set sparse_remote_update " + << "in some layers if testing will be under going " + << "in the middle of training. You can do testing " + << "within separate process."; + } createTester(); } From a1d2abc16d9c7b42af6dcb41902423ae2904ee9a Mon Sep 17 00:00:00 2001 From: hedaoyuan Date: Wed, 14 Dec 2016 18:46:40 +0800 Subject: [PATCH 139/265] add Function --- paddle/math/Function.cpp | 47 +++++++++++++ paddle/math/Function.h | 84 ++++++++++++++++++++++++ paddle/math/cross_map_normal_op.cpp | 46 +++++++++++++ paddle/math/cross_map_normal_op.h | 20 +----- paddle/math/tests/test_matrixCompare.cpp | 15 +++-- 5 files changed, 188 insertions(+), 24 deletions(-) create mode 100644 paddle/math/Function.cpp create mode 100644 paddle/math/Function.h diff --git a/paddle/math/Function.cpp b/paddle/math/Function.cpp new file mode 100644 index 0000000000..21d2719172 --- /dev/null +++ b/paddle/math/Function.cpp @@ -0,0 +1,47 @@ +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. */ + +#include "Function.h" + +namespace paddle { + +template <> +size_t FuncConfig::get(const std::string& key) const { + auto it = valueMap_.find(key); + CHECK(it != valueMap_.end()) << "Cannot find value: '" << key << "'"; + return it->second.s; +} + +template <> +real FuncConfig::get(const std::string& key) const { + auto it = valueMap_.find(key); + CHECK(it != valueMap_.end()) << "Cannot find value: '" << key << "'"; + return it->second.r; +} + +template <> +void FuncConfig::set(const std::string& key, size_t v) { + CHECK(valueMap_.count(key) == 0) << "Duplicated value: " << key; + valueMap_[key].s = v; +} + +template <> +void FuncConfig::set(const std::string& key, real v) { + CHECK(valueMap_.count(key) == 0) << "Duplicated value: " << key; + valueMap_[key].r = v; +} + +ClassRegistrar FunctionBase::funcRegistrar_; + +} // namespace paddle diff --git a/paddle/math/Function.h b/paddle/math/Function.h new file mode 100644 index 0000000000..b41ba2a13d --- /dev/null +++ b/paddle/math/Function.h @@ -0,0 +1,84 @@ +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. 
+You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. */ + +#pragma once + +#include +#include +#include "paddle/utils/ClassRegistrar.h" +#include "paddle/math/Matrix.h" + +namespace paddle { + +enum DeviceType { + DEVICE_TYPE_UNSPECIFIED = 0, + DEVICE_TYPE_CPU = 1, + DEVICE_TYPE_GPU = 2, +}; + +template +struct MatrixT; + +template <> +struct MatrixT { + using type = CpuMatrix; +}; + +template <> +struct MatrixT { + using type = GpuMatrix; +}; + +typedef std::vector Arguments; + +class FuncConfig { +public: + union value { + size_t s; + real r; + }; + + template + T get(const std::string& key) const; + + template + void set(const std::string& key, T v); + +protected: + std::map valueMap_; +}; + +class FunctionBase { +public: + virtual ~FunctionBase() {} + + virtual void init(const FuncConfig& config) {} + + virtual void calc(const Arguments& inputs, + const Arguments& outputs, + const Arguments& inouts) {} + + static ClassRegistrar funcRegistrar_; +}; + +#define FUNC_NAME(typeName, deviceName) #typeName "-" #deviceName + +#define REGISTER_TYPED_FUNC(typeName, deviceName, className) \ + static InitFunction __reg_type_##typeName([]() { \ + FunctionBase::funcRegistrar_ \ + .registerClass>( \ + FUNC_NAME(typeName, deviceName)); \ + }) + +} // namespace paddle diff --git a/paddle/math/cross_map_normal_op.cpp b/paddle/math/cross_map_normal_op.cpp index be242926af..0b72732063 100644 --- a/paddle/math/cross_map_normal_op.cpp +++ b/paddle/math/cross_map_normal_op.cpp @@ -128,4 +128,50 @@ void CrossMapNormalGrad::operator()(CpuMatrix& inputsGrad, } } +template +class CrossMapNormalFunc : public FunctionBase { +public: + void init(const FuncConfig& config) override { + size_ = config.get("size"); + scale_ = config.get("scale"); + pow_ = config.get("pow"); + } + + void calc(const Arguments& inputs, + const Arguments& outputs, + const Arguments& inouts) override { + CHECK_EQ(1, inputs.size()); + CHECK_EQ(2, outputs.size()); + CHECK_EQ(0, inouts.size()); + + auto input = dynamic_cast::type&>(inputs[0]); + auto output = + dynamic_cast::type&>(outputs[0]); + auto denom = + dynamic_cast::type&>(outputs[1]); + + CHECK(input.isContiguous()); + CHECK(output.isContiguous()); + CHECK(denom.isContiguous()); + CHECK_EQ(output.getHeight(), input.getHeight()); + CHECK_EQ(output.getWidth(), input.getWidth()); + CHECK_EQ(output.getHeight(), denom.getHeight()); + CHECK_EQ(output.getWidth(), denom.getWidth()); + + // CrossMapNormal cross; + // need: + // size_t channels, + // size_t imgSizeH, + // size_t imgSizeW, + // cross(output, denom, input, ); + } + +private: + size_t size_; + real scale_; + real pow_; +}; + +REGISTER_TYPED_FUNC(CrossMapNormal, CPU, CrossMapNormalFunc); + } // namespace paddle diff --git a/paddle/math/cross_map_normal_op.h b/paddle/math/cross_map_normal_op.h index c2bb95f6b1..86f54abde1 100644 --- a/paddle/math/cross_map_normal_op.h +++ b/paddle/math/cross_map_normal_op.h @@ -14,29 +14,11 @@ limitations under the License. 
*/ #pragma once +#include "Function.h" #include "paddle/math/Matrix.h" namespace paddle { -enum DeviceType { - DEVICE_TYPE_UNSPECIFIED = 0, - DEVICE_TYPE_CPU = 1, - DEVICE_TYPE_GPU = 2, -}; - -template -struct MatrixT; - -template <> -struct MatrixT { - using type = CpuMatrix; -}; - -template <> -struct MatrixT { - using type = GpuMatrix; -}; - template struct CrossMapNormal { void operator()(typename MatrixT::type& outputs, diff --git a/paddle/math/tests/test_matrixCompare.cpp b/paddle/math/tests/test_matrixCompare.cpp index 8d7a4fb94d..0b75785528 100644 --- a/paddle/math/tests/test_matrixCompare.cpp +++ b/paddle/math/tests/test_matrixCompare.cpp @@ -24,6 +24,7 @@ limitations under the License. */ #include "paddle/utils/Stat.h" #include "TensorCheck.h" #include "paddle/math/cross_map_normal_op.h" +#include "paddle/math/Function.h" using namespace paddle; // NOLINT using namespace std; // NOLINT @@ -1280,6 +1281,15 @@ void testCrossMapNormalFwd( inputsGpu.copyFrom(inputs); outputsGpu.copyFrom(outputs); + FuncConfig config; + config.set("size", (size_t)sizeX); + config.set("scale", scale); + config.set("pow", pow); + FunctionBase* cpu = + FunctionBase::funcRegistrar_.createByType(FUNC_NAME(CrossMapNormal, CPU)); + cpu->init(config); + // cpu->calc(); + CrossMapNormal cpuCross; cpuCross( outputs, denoms, inputs, channels, imgSizeH, imgSizeW, sizeX, scale, pow); @@ -1295,11 +1305,6 @@ void testCrossMapNormalFwd( scale, pow); -#if 0 - outputsGpu.crossMapNormalFwd( - inputsGpu, imgSizeH, imgSizeW, denomsGpu, channels, sizeX, scale, pow); -#endif - TensorCheckErr(outputs, outputsGpu); TensorCheckErr(denoms, denomsGpu); } From 3d0e73bd32c6f8de2d5751e0d8e1a38fa0883ecf Mon Sep 17 00:00:00 2001 From: liaogang Date: Wed, 14 Dec 2016 21:04:53 +0800 Subject: [PATCH 140/265] Remove custom glog-like and gflags-like macros --- CMakeLists.txt | 20 +- paddle/api/Trainer.cpp | 6 +- paddle/cuda/src/hl_cuda_cudnn.cc | 8 +- paddle/cuda/src/hl_dso_loader.cc | 30 +-- .../dataproviders/ProtoDataProvider.cpp | 6 +- paddle/gserver/evaluators/Evaluator.cpp | 2 +- .../gradientmachines/MultiGradientMachine.cpp | 8 +- .../RecurrentGradientMachine.cpp | 2 +- paddle/gserver/layers/Layer.cpp | 2 +- paddle/gserver/layers/LstmLayer.cpp | 2 +- paddle/gserver/layers/RecurrentLayer.cpp | 2 +- paddle/gserver/layers/ValidationLayer.h | 2 +- paddle/gserver/tests/LayerGradUtil.cpp | 2 +- paddle/gserver/tests/TestUtil.cpp | 2 +- paddle/gserver/tests/test_ActivationGrad.cpp | 4 +- paddle/gserver/tests/test_BatchNorm.cpp | 10 +- paddle/gserver/tests/test_ConvTrans.cpp | 10 +- paddle/gserver/tests/test_ConvUnify.cpp | 10 +- paddle/gserver/tests/test_Evaluator.cpp | 6 +- paddle/gserver/tests/test_LayerGrad.cpp | 10 +- paddle/gserver/tests/test_NetworkCompare.cpp | 12 +- paddle/gserver/tests/test_PyDataProvider2.cpp | 2 +- .../tests/test_RecurrentGradientMachine.cpp | 2 +- paddle/gserver/tests/test_RecurrentLayer.cpp | 6 +- .../gserver/tests/test_SelectiveFCLayer.cpp | 10 +- paddle/gserver/tests/test_WarpCTCLayer.cpp | 2 +- paddle/math/SparseRowMatrix.cpp | 6 +- paddle/math/SparseRowMatrix.h | 2 +- paddle/math/Storage.cpp | 6 +- paddle/math/tests/test_TrainingAlgorithm.cpp | 4 +- paddle/parameter/FirstOrderOptimizer.cpp | 2 +- paddle/parameter/Parameter.cpp | 10 +- paddle/pserver/BaseClient.cpp | 2 +- paddle/pserver/LightNetwork.cpp | 22 +- paddle/pserver/ParameterClient2.cpp | 4 +- paddle/pserver/ParameterClient2.h | 2 +- paddle/pserver/ParameterServer2.cpp | 10 +- paddle/pserver/ParameterServer2.h | 2 +- 
.../pserver/SparseParameterDistribution.cpp | 30 +-- paddle/pserver/test/SocketTest.cpp | 6 +- paddle/pserver/test/test_ParameterServer2.cpp | 6 +- paddle/pserver/test/test_ProtoServer.cpp | 8 +- paddle/trainer/MergeModel.cpp | 4 +- paddle/trainer/RemoteParameterUpdater.cpp | 4 +- paddle/trainer/ThreadParameterUpdater.cpp | 2 +- paddle/trainer/Trainer.cpp | 104 ++++----- paddle/trainer/Trainer.h | 2 +- paddle/trainer/TrainerBenchmark.cpp | 4 +- paddle/trainer/TrainerConfigHelper.cpp | 20 +- paddle/trainer/TrainerInternalConfig.cpp | 14 +- paddle/trainer/TrainerMain.cpp | 19 +- paddle/trainer/tests/test_Compare.cpp | 8 +- paddle/trainer/tests/test_CompareSparse.cpp | 32 +-- paddle/trainer/tests/test_CompareTwoNets.cpp | 26 +-- paddle/trainer/tests/test_CompareTwoOpts.cpp | 22 +- paddle/trainer/tests/test_Prediction.cpp | 10 +- paddle/trainer/tests/test_Trainer.cpp | 8 +- paddle/trainer/tests/test_TrainerOnePass.cpp | 20 +- .../test_recurrent_machine_generation.cpp | 2 +- paddle/utils/BarrierStat.cpp | 18 +- paddle/utils/CommandLineParser.cpp | 214 ------------------ paddle/utils/CommandLineParser.h | 157 ------------- paddle/utils/CustomStackTrace.cpp | 2 +- paddle/utils/Flags.cpp | 112 +++++---- paddle/utils/Flags.h | 50 ++-- paddle/utils/Logging.cpp | 171 +------------- paddle/utils/Logging.h | 168 +------------- paddle/utils/PythonUtil.cpp | 4 +- paddle/utils/ThreadLocal.cpp | 6 +- paddle/utils/Util.cpp | 8 +- paddle/utils/Version.cpp | 9 +- paddle/utils/tests/CMakeLists.txt | 2 - paddle/utils/tests/test_CommandLineParser.cpp | 114 ---------- paddle/utils/tests/test_CustomStackTrace.cpp | 2 +- paddle/utils/tests/test_Logging.cpp | 162 ------------- paddle/utils/tests/test_SpinLock.cpp | 2 +- paddle/utils/tests/test_ThreadBarrier.cpp | 2 +- 77 files changed, 408 insertions(+), 1396 deletions(-) delete mode 100644 paddle/utils/tests/test_CommandLineParser.cpp delete mode 100644 paddle/utils/tests/test_Logging.cpp diff --git a/CMakeLists.txt b/CMakeLists.txt index d82d8f633c..65fbbb481c 100644 --- a/CMakeLists.txt +++ b/CMakeLists.txt @@ -25,8 +25,8 @@ find_package(ZLIB REQUIRED) find_package(NumPy REQUIRED) find_package(Threads REQUIRED) find_package(AVX QUIET) -find_package(Glog) -find_package(Gflags QUIET) +find_package(Glog REQUIRED) +find_package(Gflags REQUIRED) find_package(GTest) find_package(Sphinx) find_package(Doxygen) @@ -40,8 +40,6 @@ option(WITH_AVX "Compile PaddlePaddle with avx intrinsics" ${AVX_FOUND}) option(WITH_PYTHON "Compile PaddlePaddle with python interpreter" ON) option(WITH_STYLE_CHECK "Style Check for PaddlePaddle" ${PYTHONINTERP_FOUND}) option(WITH_RDMA "Compile PaddlePaddle with rdma support" OFF) -option(WITH_GLOG "Compile PaddlePaddle use glog, otherwise use a log implement internally" ${LIBGLOG_FOUND}) -option(WITH_GFLAGS "Compile PaddlePaddle use gflags, otherwise use a flag implement internally" ${GFLAGS_FOUND}) option(WITH_TIMER "Compile PaddlePaddle use timer" OFF) option(WITH_PROFILER "Compile PaddlePaddle use gpu profiler" OFF) option(WITH_TESTING "Compile and run unittest for PaddlePaddle" ${GTEST_FOUND}) @@ -136,16 +134,12 @@ else(WITH_RDMA) add_definitions(-DPADDLE_DISABLE_RDMA) endif(WITH_RDMA) -if(WITH_GLOG) - add_definitions(-DPADDLE_USE_GLOG) - include_directories(${LIBGLOG_INCLUDE_DIR}) -endif() +# glog +include_directories(${LIBGLOG_INCLUDE_DIR}) -if(WITH_GFLAGS) - add_definitions(-DPADDLE_USE_GFLAGS) - add_definitions(-DGFLAGS_NS=${GFLAGS_NAMESPACE}) - include_directories(${GFLAGS_INCLUDE_DIRS}) -endif() +#gflags 
+add_definitions(-DGFLAGS_NS=${GFLAGS_NAMESPACE}) +include_directories(${GFLAGS_INCLUDE_DIRS}) if(WITH_TESTING) enable_testing() diff --git a/paddle/api/Trainer.cpp b/paddle/api/Trainer.cpp index 59b47d4b1c..d83dc380be 100644 --- a/paddle/api/Trainer.cpp +++ b/paddle/api/Trainer.cpp @@ -27,9 +27,9 @@ limitations under the License. */ using paddle::real; -P_DECLARE_string(config); -P_DECLARE_string(init_model_path); -P_DECLARE_int32(start_pass); +DECLARE_string(config); +DECLARE_string(init_model_path); +DECLARE_int32(start_pass); struct TrainerPrivate : public paddle::Trainer { bool _trainOneBatch(size_t batchSize); diff --git a/paddle/cuda/src/hl_cuda_cudnn.cc b/paddle/cuda/src/hl_cuda_cudnn.cc index 7111224d59..8cddf10d40 100644 --- a/paddle/cuda/src/hl_cuda_cudnn.cc +++ b/paddle/cuda/src/hl_cuda_cudnn.cc @@ -21,10 +21,10 @@ limitations under the License. */ #include "paddle/utils/CommandLineParser.h" #include "paddle/utils/Logging.h" -P_DEFINE_int32(cudnn_conv_workspace_limit_in_mb, - 4096, - "Specify cuDNN max workspace limit, in units MB, " - "4096MB=4GB by default."); +DEFINE_int32(cudnn_conv_workspace_limit_in_mb, + 4096, + "Specify cuDNN max workspace limit, in units MB, " + "4096MB=4GB by default."); namespace dynload { diff --git a/paddle/cuda/src/hl_dso_loader.cc b/paddle/cuda/src/hl_dso_loader.cc index f509b89243..54c7620fc0 100644 --- a/paddle/cuda/src/hl_dso_loader.cc +++ b/paddle/cuda/src/hl_dso_loader.cc @@ -16,21 +16,21 @@ limitations under the License. */ #include "paddle/utils/CommandLineParser.h" #include "paddle/utils/Logging.h" -P_DEFINE_string(cudnn_dir, - "", - "Specify path for loading libcudnn.so. For instance, " - "/usr/local/cudnn/lib. If empty [default], dlopen " - "will search cudnn from LD_LIBRARY_PATH"); - -P_DEFINE_string(cuda_dir, - "", - "Specify path for loading cuda library, such as libcublas, " - "libcurand. For instance, /usr/local/cuda/lib64. (Note: " - "libcudart can not be specified by cuda_dir, since some " - "build-in function in cudart already ran before main entry). " - "If default, dlopen will search cuda from LD_LIBRARY_PATH"); - -P_DEFINE_string(warpctc_dir, "", "Specify path for loading libwarpctc.so."); +DEFINE_string(cudnn_dir, + "", + "Specify path for loading libcudnn.so. For instance, " + "/usr/local/cudnn/lib. If empty [default], dlopen " + "will search cudnn from LD_LIBRARY_PATH"); + +DEFINE_string(cuda_dir, + "", + "Specify path for loading cuda library, such as libcublas, " + "libcurand. For instance, /usr/local/cuda/lib64. (Note: " + "libcudart can not be specified by cuda_dir, since some " + "build-in function in cudart already ran before main entry). " + "If default, dlopen will search cuda from LD_LIBRARY_PATH"); + +DEFINE_string(warpctc_dir, "", "Specify path for loading libwarpctc.so."); static inline std::string join(const std::string& part1, const std::string& part2) { diff --git a/paddle/gserver/dataproviders/ProtoDataProvider.cpp b/paddle/gserver/dataproviders/ProtoDataProvider.cpp index d16ecca2d9..c6f5cab191 100644 --- a/paddle/gserver/dataproviders/ProtoDataProvider.cpp +++ b/paddle/gserver/dataproviders/ProtoDataProvider.cpp @@ -22,9 +22,9 @@ limitations under the License. 
*/ #include "DataProviderGroup.h" #include "paddle/utils/Logging.h" -P_DEFINE_double(memory_threshold_on_load_data, - 1.0, - "stop loading data when memory is not sufficient"); +DEFINE_double(memory_threshold_on_load_data, + 1.0, + "stop loading data when memory is not sufficient"); namespace paddle { diff --git a/paddle/gserver/evaluators/Evaluator.cpp b/paddle/gserver/evaluators/Evaluator.cpp index 7556d21e01..2f99281911 100644 --- a/paddle/gserver/evaluators/Evaluator.cpp +++ b/paddle/gserver/evaluators/Evaluator.cpp @@ -17,7 +17,7 @@ limitations under the License. */ #include "paddle/gserver/gradientmachines/NeuralNetwork.h" -P_DECLARE_int32(trainer_id); +DECLARE_int32(trainer_id); namespace paddle { diff --git a/paddle/gserver/gradientmachines/MultiGradientMachine.cpp b/paddle/gserver/gradientmachines/MultiGradientMachine.cpp index a7324f5545..88c098b355 100644 --- a/paddle/gserver/gradientmachines/MultiGradientMachine.cpp +++ b/paddle/gserver/gradientmachines/MultiGradientMachine.cpp @@ -21,11 +21,11 @@ limitations under the License. */ #include "NeuralNetwork.h" #include "ParallelNeuralNetwork.h" -P_DEFINE_bool(allow_only_one_model_on_one_gpu, - true, - "If true, do not allow multiple models on one GPU device"); +DEFINE_bool(allow_only_one_model_on_one_gpu, + true, + "If true, do not allow multiple models on one GPU device"); #ifdef PADDLE_METRIC_LEARNING -P_DECLARE_bool(external); +DECLARE_bool(external); #endif namespace paddle { diff --git a/paddle/gserver/gradientmachines/RecurrentGradientMachine.cpp b/paddle/gserver/gradientmachines/RecurrentGradientMachine.cpp index ee1c92bdf5..8f68b3d66b 100644 --- a/paddle/gserver/gradientmachines/RecurrentGradientMachine.cpp +++ b/paddle/gserver/gradientmachines/RecurrentGradientMachine.cpp @@ -24,7 +24,7 @@ limitations under the License. */ #include "paddle/utils/Stat.h" #include "paddle/utils/Util.h" -P_DEFINE_string(diy_beam_search_prob_so, "", "the diy beam search cost so"); +DEFINE_string(diy_beam_search_prob_so, "", "the diy beam search cost so"); static const char* DIY_CALC_PROB_SYMBOL_NAME = "calc_prob"; static const char* DIY_START_CALC_PROB_SYMBOL_NAME = "start_calc_prob"; diff --git a/paddle/gserver/layers/Layer.cpp b/paddle/gserver/layers/Layer.cpp index c9e121047b..c47943f81c 100644 --- a/paddle/gserver/layers/Layer.cpp +++ b/paddle/gserver/layers/Layer.cpp @@ -33,7 +33,7 @@ limitations under the License. */ #include "TransLayer.h" #include "ValidationLayer.h" -P_DEFINE_bool(log_error_clipping, false, "enable log error clipping or not"); +DEFINE_bool(log_error_clipping, false, "enable log error clipping or not"); namespace paddle { diff --git a/paddle/gserver/layers/LstmLayer.cpp b/paddle/gserver/layers/LstmLayer.cpp index 452091eff4..2543d1b49a 100644 --- a/paddle/gserver/layers/LstmLayer.cpp +++ b/paddle/gserver/layers/LstmLayer.cpp @@ -17,7 +17,7 @@ limitations under the License. */ #include "paddle/math/Matrix.h" #include "paddle/utils/Stat.h" -P_DECLARE_bool(prev_batch_state); +DECLARE_bool(prev_batch_state); namespace paddle { diff --git a/paddle/gserver/layers/RecurrentLayer.cpp b/paddle/gserver/layers/RecurrentLayer.cpp index 9f3bf76a2d..85812c9d66 100644 --- a/paddle/gserver/layers/RecurrentLayer.cpp +++ b/paddle/gserver/layers/RecurrentLayer.cpp @@ -17,7 +17,7 @@ limitations under the License. 
*/ #include "paddle/utils/CommandLineParser.h" #include "paddle/utils/Stat.h" -P_DEFINE_bool(rnn_use_batch, false, "Using the batch method for calculation."); +DEFINE_bool(rnn_use_batch, false, "Using the batch method for calculation."); namespace paddle { diff --git a/paddle/gserver/layers/ValidationLayer.h b/paddle/gserver/layers/ValidationLayer.h index 471055429d..4c1de7b3b7 100644 --- a/paddle/gserver/layers/ValidationLayer.h +++ b/paddle/gserver/layers/ValidationLayer.h @@ -18,7 +18,7 @@ limitations under the License. */ #include "Layer.h" #include "paddle/gserver/evaluators/Evaluator.h" -P_DECLARE_int32(trainer_id); +DECLARE_int32(trainer_id); namespace paddle { diff --git a/paddle/gserver/tests/LayerGradUtil.cpp b/paddle/gserver/tests/LayerGradUtil.cpp index dffc24936f..1d5e7de1ba 100644 --- a/paddle/gserver/tests/LayerGradUtil.cpp +++ b/paddle/gserver/tests/LayerGradUtil.cpp @@ -14,7 +14,7 @@ limitations under the License. */ #include "LayerGradUtil.h" -P_DECLARE_bool(thread_local_rand_use_global_seed); +DECLARE_bool(thread_local_rand_use_global_seed); namespace paddle { real getCostSum(LayerPtr& testLayer, MatrixPtr weights) { diff --git a/paddle/gserver/tests/TestUtil.cpp b/paddle/gserver/tests/TestUtil.cpp index e656da5b8f..e07c60861a 100644 --- a/paddle/gserver/tests/TestUtil.cpp +++ b/paddle/gserver/tests/TestUtil.cpp @@ -17,7 +17,7 @@ limitations under the License. */ #include "paddle/math/SparseMatrix.h" #include "paddle/utils/CommandLineParser.h" -P_DEFINE_int32(fixed_seq_length, 0, "Produce some sequence of fixed length"); +DEFINE_int32(fixed_seq_length, 0, "Produce some sequence of fixed length"); namespace paddle { diff --git a/paddle/gserver/tests/test_ActivationGrad.cpp b/paddle/gserver/tests/test_ActivationGrad.cpp index 20a6126d0b..7d7e68da5c 100644 --- a/paddle/gserver/tests/test_ActivationGrad.cpp +++ b/paddle/gserver/tests/test_ActivationGrad.cpp @@ -25,8 +25,8 @@ limitations under the License. */ using namespace paddle; // NOLINT using namespace std; // NOLINT -P_DECLARE_bool(use_gpu); -P_DECLARE_bool(thread_local_rand_use_global_seed); +DECLARE_bool(use_gpu); +DECLARE_bool(thread_local_rand_use_global_seed); void testActivation(const string& act) { LOG(INFO) << "test activation: " << act; diff --git a/paddle/gserver/tests/test_BatchNorm.cpp b/paddle/gserver/tests/test_BatchNorm.cpp index 3bd4e321b7..7f5fcb670b 100644 --- a/paddle/gserver/tests/test_BatchNorm.cpp +++ b/paddle/gserver/tests/test_BatchNorm.cpp @@ -27,11 +27,11 @@ limitations under the License. */ using namespace paddle; // NOLINT using namespace std; // NOLINT -P_DECLARE_bool(use_gpu); -P_DECLARE_int32(gpu_id); -P_DECLARE_double(checkgrad_eps); -P_DECLARE_bool(thread_local_rand_use_global_seed); -P_DECLARE_bool(prev_batch_state); +DECLARE_bool(use_gpu); +DECLARE_int32(gpu_id); +DECLARE_double(checkgrad_eps); +DECLARE_bool(thread_local_rand_use_global_seed); +DECLARE_bool(prev_batch_state); // Test that the batchNormLayer can be followed by a ConvLayer TEST(Layer, batchNorm) { diff --git a/paddle/gserver/tests/test_ConvTrans.cpp b/paddle/gserver/tests/test_ConvTrans.cpp index 83100e3bec..99202c2d57 100644 --- a/paddle/gserver/tests/test_ConvTrans.cpp +++ b/paddle/gserver/tests/test_ConvTrans.cpp @@ -28,11 +28,11 @@ limitations under the License. 
*/ using namespace paddle; // NOLINT using namespace std; // NOLINT -P_DECLARE_bool(use_gpu); -P_DECLARE_int32(gpu_id); -P_DECLARE_double(checkgrad_eps); -P_DECLARE_bool(thread_local_rand_use_global_seed); -P_DECLARE_bool(prev_batch_state); +DECLARE_bool(use_gpu); +DECLARE_int32(gpu_id); +DECLARE_double(checkgrad_eps); +DECLARE_bool(thread_local_rand_use_global_seed); +DECLARE_bool(prev_batch_state); // Test that the convTrans forward is the same as conv backward TEST(Layer, convTransLayerFwd) { diff --git a/paddle/gserver/tests/test_ConvUnify.cpp b/paddle/gserver/tests/test_ConvUnify.cpp index 02763406a3..2ab18f8868 100644 --- a/paddle/gserver/tests/test_ConvUnify.cpp +++ b/paddle/gserver/tests/test_ConvUnify.cpp @@ -28,11 +28,11 @@ limitations under the License. */ using namespace paddle; // NOLINT using namespace std; // NOLINT -P_DECLARE_bool(use_gpu); -P_DECLARE_int32(gpu_id); -P_DECLARE_double(checkgrad_eps); -P_DECLARE_bool(thread_local_rand_use_global_seed); -P_DECLARE_bool(prev_batch_state); +DECLARE_bool(use_gpu); +DECLARE_int32(gpu_id); +DECLARE_double(checkgrad_eps); +DECLARE_bool(thread_local_rand_use_global_seed); +DECLARE_bool(prev_batch_state); // Do one forward pass of convTrans layer and check to see if its output // matches the given result diff --git a/paddle/gserver/tests/test_Evaluator.cpp b/paddle/gserver/tests/test_Evaluator.cpp index 7a930aebcf..e07066dad8 100644 --- a/paddle/gserver/tests/test_Evaluator.cpp +++ b/paddle/gserver/tests/test_Evaluator.cpp @@ -21,9 +21,9 @@ limitations under the License. */ using namespace paddle; // NOLINT using namespace std; // NOLINT -P_DECLARE_bool(use_gpu); -P_DECLARE_int32(gpu_id); -P_DECLARE_bool(thread_local_rand_use_global_seed); +DECLARE_bool(use_gpu); +DECLARE_int32(gpu_id); +DECLARE_bool(thread_local_rand_use_global_seed); enum InputType { INPUT_DATA, // dense vector diff --git a/paddle/gserver/tests/test_LayerGrad.cpp b/paddle/gserver/tests/test_LayerGrad.cpp index 9f8b197df5..8a8d094ed3 100644 --- a/paddle/gserver/tests/test_LayerGrad.cpp +++ b/paddle/gserver/tests/test_LayerGrad.cpp @@ -26,11 +26,11 @@ limitations under the License. */ using namespace paddle; // NOLINT using namespace std; // NOLINT -P_DECLARE_bool(use_gpu); -P_DECLARE_int32(gpu_id); -P_DECLARE_double(checkgrad_eps); -P_DECLARE_bool(thread_local_rand_use_global_seed); -P_DECLARE_bool(prev_batch_state); +DECLARE_bool(use_gpu); +DECLARE_int32(gpu_id); +DECLARE_double(checkgrad_eps); +DECLARE_bool(thread_local_rand_use_global_seed); +DECLARE_bool(prev_batch_state); TEST(Operator, dot_mul) { TestConfig config; diff --git a/paddle/gserver/tests/test_NetworkCompare.cpp b/paddle/gserver/tests/test_NetworkCompare.cpp index baa55aa025..fc60228f81 100644 --- a/paddle/gserver/tests/test_NetworkCompare.cpp +++ b/paddle/gserver/tests/test_NetworkCompare.cpp @@ -25,10 +25,10 @@ limitations under the License. 
*/ using namespace paddle; // NOLINT using namespace std; // NOLINT -P_DECLARE_int32(gpu_id); -P_DECLARE_double(checkgrad_eps); -P_DEFINE_bool(use_label, true, "input label or sequence label"); -P_DEFINE_bool(static_para, false, "static parameter"); +DECLARE_int32(gpu_id); +DECLARE_double(checkgrad_eps); +DEFINE_bool(use_label, true, "input label or sequence label"); +DEFINE_bool(static_para, false, "static parameter"); struct DataIn { std::vector inArgs; @@ -267,8 +267,8 @@ TEST(Compare, img_conv2) { } #endif -P_DEFINE_string(config_file_a, "", "config of one network to compare"); -P_DEFINE_string(config_file_b, "", "config of another network to compare"); +DEFINE_string(config_file_a, "", "config of one network to compare"); +DEFINE_string(config_file_b, "", "config of another network to compare"); TEST(Compare, network) { if (FLAGS_config_file_a != "" && FLAGS_config_file_b != "") { compareNetwork(FLAGS_config_file_a, FLAGS_config_file_b); diff --git a/paddle/gserver/tests/test_PyDataProvider2.cpp b/paddle/gserver/tests/test_PyDataProvider2.cpp index 436318d356..5f8bc5ecd0 100644 --- a/paddle/gserver/tests/test_PyDataProvider2.cpp +++ b/paddle/gserver/tests/test_PyDataProvider2.cpp @@ -19,7 +19,7 @@ limitations under the License. */ #include "paddle/utils/PythonUtil.h" #include "paddle/utils/Util.h" -P_DEFINE_string(train_list, "unittest.list", "file list for unittest"); +DEFINE_string(train_list, "unittest.list", "file list for unittest"); namespace paddle { namespace unittest { diff --git a/paddle/gserver/tests/test_RecurrentGradientMachine.cpp b/paddle/gserver/tests/test_RecurrentGradientMachine.cpp index a351667d8b..874aabf37c 100644 --- a/paddle/gserver/tests/test_RecurrentGradientMachine.cpp +++ b/paddle/gserver/tests/test_RecurrentGradientMachine.cpp @@ -20,7 +20,7 @@ limitations under the License. */ #include #include -P_DECLARE_int32(seed); +DECLARE_int32(seed); using namespace paddle; // NOLINT using namespace std; // NOLINT diff --git a/paddle/gserver/tests/test_RecurrentLayer.cpp b/paddle/gserver/tests/test_RecurrentLayer.cpp index cd96ca7c84..f91c788863 100644 --- a/paddle/gserver/tests/test_RecurrentLayer.cpp +++ b/paddle/gserver/tests/test_RecurrentLayer.cpp @@ -23,9 +23,9 @@ limitations under the License. */ using namespace paddle; // NOLINT using namespace std; // NOLINT -P_DECLARE_bool(use_gpu); -P_DECLARE_bool(rnn_use_batch); -P_DECLARE_int32(fixed_seq_length); +DECLARE_bool(use_gpu); +DECLARE_bool(rnn_use_batch); +DECLARE_int32(fixed_seq_length); void checkError(const Matrix& matrix1, const Matrix& matrix2) { CHECK(matrix1.getHeight() == matrix2.getHeight()); diff --git a/paddle/gserver/tests/test_SelectiveFCLayer.cpp b/paddle/gserver/tests/test_SelectiveFCLayer.cpp index 4f3a95a535..ab23d00a2c 100644 --- a/paddle/gserver/tests/test_SelectiveFCLayer.cpp +++ b/paddle/gserver/tests/test_SelectiveFCLayer.cpp @@ -29,11 +29,11 @@ limitations under the License. 
*/ using namespace paddle; // NOLINT using namespace std; // NOLINT -P_DECLARE_bool(use_gpu); -P_DECLARE_int32(num_passes); -P_DECLARE_string(config); -P_DECLARE_string(init_model_path); -P_DECLARE_string(config_args); +DECLARE_bool(use_gpu); +DECLARE_int32(num_passes); +DECLARE_string(config); +DECLARE_string(init_model_path); +DECLARE_string(config_args); size_t fcLayerWidth = 1024; diff --git a/paddle/gserver/tests/test_WarpCTCLayer.cpp b/paddle/gserver/tests/test_WarpCTCLayer.cpp index 700425412c..0a4a814d52 100644 --- a/paddle/gserver/tests/test_WarpCTCLayer.cpp +++ b/paddle/gserver/tests/test_WarpCTCLayer.cpp @@ -25,7 +25,7 @@ limitations under the License. */ using namespace paddle; // NOLINT using namespace std; // NOLINT -P_DECLARE_bool(use_gpu); +DECLARE_bool(use_gpu); const real* getData(const Matrix& matrix) { if (matrix.useGpu()) { diff --git a/paddle/math/SparseRowMatrix.cpp b/paddle/math/SparseRowMatrix.cpp index 3091743123..b61c6b2d49 100644 --- a/paddle/math/SparseRowMatrix.cpp +++ b/paddle/math/SparseRowMatrix.cpp @@ -24,9 +24,9 @@ limitations under the License. */ #include "paddle/utils/Thread.h" #include "paddle/utils/Util.h" -P_DEFINE_bool(allow_inefficient_sparse_update, - false, - "Whether to allow inefficient sparse update"); +DEFINE_bool(allow_inefficient_sparse_update, + false, + "Whether to allow inefficient sparse update"); namespace paddle { diff --git a/paddle/math/SparseRowMatrix.h b/paddle/math/SparseRowMatrix.h index badb4b9c1c..9364feb4a1 100644 --- a/paddle/math/SparseRowMatrix.h +++ b/paddle/math/SparseRowMatrix.h @@ -20,7 +20,7 @@ limitations under the License. */ #include "paddle/utils/CommandLineParser.h" #include "paddle/utils/Util.h" -P_DECLARE_bool(allow_inefficient_sparse_update); +DECLARE_bool(allow_inefficient_sparse_update); namespace paddle { diff --git a/paddle/math/Storage.cpp b/paddle/math/Storage.cpp index f9a2c12cd5..56e5442394 100644 --- a/paddle/math/Storage.cpp +++ b/paddle/math/Storage.cpp @@ -16,9 +16,9 @@ limitations under the License. */ #include "Allocator.h" #include "paddle/utils/Util.h" -P_DEFINE_int32(pool_limit_size, - 536870912, - "maximum memory size managed by a memory pool, default is 512M"); +DEFINE_int32(pool_limit_size, + 536870912, + "maximum memory size managed by a memory pool, default is 512M"); namespace paddle { diff --git a/paddle/math/tests/test_TrainingAlgorithm.cpp b/paddle/math/tests/test_TrainingAlgorithm.cpp index 1bf6a0cc43..2c458cba9c 100644 --- a/paddle/math/tests/test_TrainingAlgorithm.cpp +++ b/paddle/math/tests/test_TrainingAlgorithm.cpp @@ -22,9 +22,9 @@ limitations under the License. */ using namespace paddle; // NOLINT #ifndef PADDLE_TYPE_DOUBLE -P_DEFINE_double(max_diff, 1e-5, "max diff allowed"); +DEFINE_double(max_diff, 1e-5, "max diff allowed"); #else -P_DEFINE_double(max_diff, 1e-13, "max diff allowed"); +DEFINE_double(max_diff, 1e-13, "max diff allowed"); #endif class SetMaxDiff { diff --git a/paddle/parameter/FirstOrderOptimizer.cpp b/paddle/parameter/FirstOrderOptimizer.cpp index 630f15c8cf..dbb738e98b 100644 --- a/paddle/parameter/FirstOrderOptimizer.cpp +++ b/paddle/parameter/FirstOrderOptimizer.cpp @@ -19,7 +19,7 @@ limitations under the License. 
*/ #include -P_DEFINE_bool(log_clipping, false, "enable log clipping or not"); +DEFINE_bool(log_clipping, false, "enable log clipping or not"); namespace paddle { diff --git a/paddle/parameter/Parameter.cpp b/paddle/parameter/Parameter.cpp index 986ae1539b..1673fc6e53 100644 --- a/paddle/parameter/Parameter.cpp +++ b/paddle/parameter/Parameter.cpp @@ -26,11 +26,11 @@ limitations under the License. */ #include "paddle/utils/CommandLineParser.h" #include "paddle/utils/Logging.h" -P_DEFINE_int32(enable_grad_share, - (100 * 1024 * 1024), - "threshold for enable gradient parameter share for batch " - "multi-cpu training"); -P_DEFINE_int32( +DEFINE_int32(enable_grad_share, + (100 * 1024 * 1024), + "threshold for enable gradient parameter share for batch " + "multi-cpu training"); +DEFINE_int32( grad_share_block_num, 64, "block number of gradient parameter share for batch multi-cpu training"); diff --git a/paddle/pserver/BaseClient.cpp b/paddle/pserver/BaseClient.cpp index a43def98c5..b4ac7a2506 100644 --- a/paddle/pserver/BaseClient.cpp +++ b/paddle/pserver/BaseClient.cpp @@ -18,7 +18,7 @@ limitations under the License. */ #include "paddle/utils/CommandLineParser.h" #include "paddle/utils/Stat.h" -P_DECLARE_string(pservers); +DECLARE_string(pservers); namespace paddle { diff --git a/paddle/pserver/LightNetwork.cpp b/paddle/pserver/LightNetwork.cpp index 329dfb0fb3..cbc105e651 100644 --- a/paddle/pserver/LightNetwork.cpp +++ b/paddle/pserver/LightNetwork.cpp @@ -31,23 +31,23 @@ limitations under the License. */ #include "paddle/utils/Util.h" /// quick ack can reduce the latency of small message -P_DEFINE_bool(small_messages, - false, - "if message size is small, recommend set it True to enable quick " - "ack and no delay"); +DEFINE_bool(small_messages, + false, + "if message size is small, recommend set it True to enable quick " + "ack and no delay"); /// reasonable sock_send_buf_size can control the traffic injected into switch /// network. Injecting too many data into traffic could cause packets loss which /// cause long latency and degrade the efficiency of communication. -P_DEFINE_int32(sock_send_buf_size, - 1024 * 1024 * 40, - "restrict sock send buff size, can reduce network congestion if " - "set carefully"); +DEFINE_int32(sock_send_buf_size, + 1024 * 1024 * 40, + "restrict sock send buff size, can reduce network congestion if " + "set carefully"); /// reasonable size can hold bursted packets and reduce packets loss -P_DEFINE_int32(sock_recv_buf_size, - 1024 * 1024 * 40, - "restrict sock recv buff size"); +DEFINE_int32(sock_recv_buf_size, + 1024 * 1024 * 40, + "restrict sock recv buff size"); namespace paddle { diff --git a/paddle/pserver/ParameterClient2.cpp b/paddle/pserver/ParameterClient2.cpp index 86fd1c5276..a97859f83f 100644 --- a/paddle/pserver/ParameterClient2.cpp +++ b/paddle/pserver/ParameterClient2.cpp @@ -20,8 +20,8 @@ limitations under the License. 
*/ #include "paddle/utils/Stat.h" #include "paddle/utils/StringUtil.h" -P_DEFINE_string(pservers, "127.0.0.1", "Comma separated addresses of pservers"); -P_DEFINE_int32(parallel_thread_num, 1, "Thread number for parameter send"); +DEFINE_string(pservers, "127.0.0.1", "Comma separated addresses of pservers"); +DEFINE_int32(parallel_thread_num, 1, "Thread number for parameter send"); namespace paddle { diff --git a/paddle/pserver/ParameterClient2.h b/paddle/pserver/ParameterClient2.h index 5255394949..eed71ccb43 100644 --- a/paddle/pserver/ParameterClient2.h +++ b/paddle/pserver/ParameterClient2.h @@ -34,7 +34,7 @@ limitations under the License. */ #include "ProtoServer.h" #include "SparseParameterDistribution.h" -P_DECLARE_int32(parallel_thread_num); +DECLARE_int32(parallel_thread_num); namespace paddle { diff --git a/paddle/pserver/ParameterServer2.cpp b/paddle/pserver/ParameterServer2.cpp index 2cb4c93535..856fa0ad1a 100644 --- a/paddle/pserver/ParameterServer2.cpp +++ b/paddle/pserver/ParameterServer2.cpp @@ -30,11 +30,11 @@ limitations under the License. */ #include "paddle/utils/GlobalConstants.h" #include "paddle/utils/Stat.h" -P_DEFINE_int32(pserver_num_threads, 1, "number of threads for sync op exec"); -P_DEFINE_double(async_lagged_ratio_min, - 1.0, - "control config_.async_lagged_grad_discard_ratio() min value"); -P_DEFINE_double( +DEFINE_int32(pserver_num_threads, 1, "number of threads for sync op exec"); +DEFINE_double(async_lagged_ratio_min, + 1.0, + "control config_.async_lagged_grad_discard_ratio() min value"); +DEFINE_double( async_lagged_ratio_default, 1.5, "if async_lagged_grad_discard_ratio is not set in trainer_config.conf" diff --git a/paddle/pserver/ParameterServer2.h b/paddle/pserver/ParameterServer2.h index 61c139981e..b0cf22e1fb 100644 --- a/paddle/pserver/ParameterServer2.h +++ b/paddle/pserver/ParameterServer2.h @@ -38,7 +38,7 @@ limitations under the License. */ #include "ProtoServer.h" -P_DECLARE_int32(port); +DECLARE_int32(port); namespace paddle { diff --git a/paddle/pserver/SparseParameterDistribution.cpp b/paddle/pserver/SparseParameterDistribution.cpp index 0068f85b52..6dd725db30 100644 --- a/paddle/pserver/SparseParameterDistribution.cpp +++ b/paddle/pserver/SparseParameterDistribution.cpp @@ -20,26 +20,26 @@ limitations under the License. 
*/ #include "SparseParameterDistribution.h" -P_DEFINE_bool(check_sparse_distribution_in_pserver, - false, - "check whether sparse parameter exhibts balanced distribution at " - "all pservers"); -P_DEFINE_bool(show_check_sparse_distribution_log, - false, - "show logs details for sparse parameter distribution in pserver"); -P_DEFINE_int32(check_sparse_distribution_batches, - 100, - "run sparse parameter distribution check for N batches"); -P_DEFINE_double( +DEFINE_bool(check_sparse_distribution_in_pserver, + false, + "check whether sparse parameter exhibts balanced distribution at " + "all pservers"); +DEFINE_bool(show_check_sparse_distribution_log, + false, + "show logs details for sparse parameter distribution in pserver"); +DEFINE_int32(check_sparse_distribution_batches, + 100, + "run sparse parameter distribution check for N batches"); +DEFINE_double( check_sparse_distribution_ratio, 0.6, "if parameters dispatched to different pservers exhibit unbalanced " " distribution for check_sparse_distribution_ratio * " " check_sparse_distribution_batches times, crash program"); -P_DEFINE_double(check_sparse_distribution_unbalance_degree, - 2.0, - "the ratio of maximum data size and minimun data size for " - "different pserver"); +DEFINE_double(check_sparse_distribution_unbalance_degree, + 2.0, + "the ratio of maximum data size and minimun data size for " + "different pserver"); namespace paddle { diff --git a/paddle/pserver/test/SocketTest.cpp b/paddle/pserver/test/SocketTest.cpp index 6e63c4f678..066a6c0293 100644 --- a/paddle/pserver/test/SocketTest.cpp +++ b/paddle/pserver/test/SocketTest.cpp @@ -195,9 +195,9 @@ SocketClient::SocketClient(const std::string& serverAddr, int serverPort) { channel_.reset(new SocketChannel(sockfd)); } -P_DEFINE_string(server_addr, "127.0.0.1", "Server address"); -P_DEFINE_int64(dim, 10000000, "Data size"); -P_DEFINE_int32(loop_time, 100000, "test loop time"); +DEFINE_string(server_addr, "127.0.0.1", "Server address"); +DEFINE_int64(dim, 10000000, "Data size"); +DEFINE_int32(loop_time, 100000, "test loop time"); using namespace paddle; // NOLINT diff --git a/paddle/pserver/test/test_ParameterServer2.cpp b/paddle/pserver/test/test_ParameterServer2.cpp index 4257a2308d..8e7231a9e1 100644 --- a/paddle/pserver/test/test_ParameterServer2.cpp +++ b/paddle/pserver/test/test_ParameterServer2.cpp @@ -21,9 +21,9 @@ limitations under the License. */ using namespace paddle; // NOLINT using namespace std; // NOLINT -P_DECLARE_int32(num_gradient_servers); -P_DEFINE_string(server_addr, "127.0.0.1", "assign server address"); -P_DEFINE_int32(server_cpu, 0, "assign server cpu"); +DECLARE_int32(num_gradient_servers); +DEFINE_string(server_addr, "127.0.0.1", "assign server address"); +DEFINE_int32(server_cpu, 0, "assign server cpu"); class ParameterServer2Tester : public ParameterServer2 { public: diff --git a/paddle/pserver/test/test_ProtoServer.cpp b/paddle/pserver/test/test_ProtoServer.cpp index 3880dde5e3..9f86ee80f4 100644 --- a/paddle/pserver/test/test_ProtoServer.cpp +++ b/paddle/pserver/test/test_ProtoServer.cpp @@ -21,10 +21,10 @@ limitations under the License. */ #include "paddle/pserver/ProtoServer.h" #include "paddle/utils/Stat.h" -P_DEFINE_string(server_addr, "127.0.0.1", "Server address"); -P_DEFINE_int64(dim, 50000000, "Data size"); -P_DEFINE_bool(test_proto_server, true, "whether to test ProtoServer"); -P_DEFINE_bool(benchmark, false, "Do benchmark. 
Skip some tests"); +DEFINE_string(server_addr, "127.0.0.1", "Server address"); +DEFINE_int64(dim, 50000000, "Data size"); +DEFINE_bool(test_proto_server, true, "whether to test ProtoServer"); +DEFINE_bool(benchmark, false, "Do benchmark. Skip some tests"); using namespace paddle; // NOLINT diff --git a/paddle/trainer/MergeModel.cpp b/paddle/trainer/MergeModel.cpp index 1cf29a39b9..91d89b61a3 100644 --- a/paddle/trainer/MergeModel.cpp +++ b/paddle/trainer/MergeModel.cpp @@ -19,8 +19,8 @@ limitations under the License. */ #include "paddle/pserver/ParameterServer2.h" #include "paddle/utils/PythonUtil.h" -P_DEFINE_string(model_dir, "", "Directory for separated model files"); -P_DEFINE_string(model_file, "", "File for merged model file"); +DEFINE_string(model_dir, "", "Directory for separated model files"); +DEFINE_string(model_file, "", "File for merged model file"); using namespace paddle; // NOLINT using namespace std; // NOLINT diff --git a/paddle/trainer/RemoteParameterUpdater.cpp b/paddle/trainer/RemoteParameterUpdater.cpp index b7f7b93b8d..974e78fa17 100644 --- a/paddle/trainer/RemoteParameterUpdater.cpp +++ b/paddle/trainer/RemoteParameterUpdater.cpp @@ -17,8 +17,8 @@ limitations under the License. */ #include "paddle/utils/GlobalConstants.h" #include "paddle/utils/Stat.h" -P_DECLARE_int32(trainer_id); -P_DECLARE_string(save_dir); +DECLARE_int32(trainer_id); +DECLARE_string(save_dir); namespace paddle { diff --git a/paddle/trainer/ThreadParameterUpdater.cpp b/paddle/trainer/ThreadParameterUpdater.cpp index bee7f061fe..9caa92a4d7 100644 --- a/paddle/trainer/ThreadParameterUpdater.cpp +++ b/paddle/trainer/ThreadParameterUpdater.cpp @@ -19,7 +19,7 @@ limitations under the License. */ #include "paddle/math/SparseRowMatrix.h" #include "paddle/utils/Thread.h" -P_DECLARE_int32(trainer_count); +DECLARE_int32(trainer_count); namespace paddle { diff --git a/paddle/trainer/Trainer.cpp b/paddle/trainer/Trainer.cpp index 85610ec04e..1eec2c432d 100644 --- a/paddle/trainer/Trainer.cpp +++ b/paddle/trainer/Trainer.cpp @@ -38,60 +38,56 @@ limitations under the License. */ #include "paddle/gserver/gradientmachines/NeuralNetwork.h" #include "paddle/gserver/layers/ValidationLayer.h" -P_DEFINE_string(config, "", "Trainer config file"); - -P_DEFINE_int32(test_period, - 0, - "if equal 0, do test on all test data at the end of " - "each pass. While if equal non-zero, do test on all test " - "data every test_period batches"); -P_DEFINE_bool(test_all_data_in_one_period, - false, - "This option was deprecated, since we will always do " - "test on all test set "); - -P_DEFINE_bool(local, true, "Train in local mode or not"); - -P_DEFINE_int32(average_test_period, - 0, - "Do test on average parameter every so" - " many batches. MUST be devided by FLAGS_log_period." - " Default 0 means do not test average parameter"); - -P_DEFINE_int32(saving_period, 1, "Save parameteres every so many passes"); -P_DEFINE_int64(saving_period_by_batches, - 0, - "Save parameters every so many batches in one pass"); -P_DEFINE_string(save_dir, "", "Directory for saving model parameter"); -P_DEFINE_int32(start_pass, - 0, - "Start training from this pass. 
" - "Will load parameter from the previous pass"); -P_DEFINE_int32(test_pass, - -1, - "Will load parameter start from this pass to test"); -P_DEFINE_int32(test_wait, 0, "Waiting for pass parameter if not exist"); -P_DEFINE_bool(with_cost, true, "enable cost layer or not"); -P_DEFINE_bool(distribute_test, false, "test in distribute mode"); - -P_DEFINE_int32(num_passes, 100, "train for so many passes"); - -P_DEFINE_string(config_args, - "", - "arguments passed to config file." - "Format: key1=value1,key2=value2"); - -P_DEFINE_bool(save_only_one, - false, - "Save only parameters in last pass, remove previous."); - -P_DEFINE_string(feat_file, "", "File name of extracted feature."); -P_DEFINE_string(predict_output_dir, - "", - "Directory that saves the predicted results of output layers"); -P_DEFINE_string(model_list, - "", - "File that saves the model list when evaluation"); +DEFINE_string(config, "", "Trainer config file"); + +DEFINE_int32(test_period, + 0, + "if equal 0, do test on all test data at the end of " + "each pass. While if equal non-zero, do test on all test " + "data every test_period batches"); +DEFINE_bool(test_all_data_in_one_period, + false, + "This option was deprecated, since we will always do " + "test on all test set "); + +DEFINE_bool(local, true, "Train in local mode or not"); + +DEFINE_int32(average_test_period, + 0, + "Do test on average parameter every so" + " many batches. MUST be devided by FLAGS_log_period." + " Default 0 means do not test average parameter"); + +DEFINE_int32(saving_period, 1, "Save parameteres every so many passes"); +DEFINE_int64(saving_period_by_batches, + 0, + "Save parameters every so many batches in one pass"); +DEFINE_string(save_dir, "", "Directory for saving model parameter"); +DEFINE_int32(start_pass, + 0, + "Start training from this pass. " + "Will load parameter from the previous pass"); +DEFINE_int32(test_pass, -1, "Will load parameter start from this pass to test"); +DEFINE_int32(test_wait, 0, "Waiting for pass parameter if not exist"); +DEFINE_bool(with_cost, true, "enable cost layer or not"); +DEFINE_bool(distribute_test, false, "test in distribute mode"); + +DEFINE_int32(num_passes, 100, "train for so many passes"); + +DEFINE_string(config_args, + "", + "arguments passed to config file." + "Format: key1=value1,key2=value2"); + +DEFINE_bool(save_only_one, + false, + "Save only parameters in last pass, remove previous."); + +DEFINE_string(feat_file, "", "File name of extracted feature."); +DEFINE_string(predict_output_dir, + "", + "Directory that saves the predicted results of output layers"); +DEFINE_string(model_list, "", "File that saves the model list when evaluation"); namespace paddle { diff --git a/paddle/trainer/Trainer.h b/paddle/trainer/Trainer.h index cabbb4acd1..7cbf18ace7 100644 --- a/paddle/trainer/Trainer.h +++ b/paddle/trainer/Trainer.h @@ -34,7 +34,7 @@ limitations under the License. */ #include "paddle/internals/metric_learning/MetricTrainer.h" #endif -P_DECLARE_int32(num_passes); +DECLARE_int32(num_passes); namespace paddle { diff --git a/paddle/trainer/TrainerBenchmark.cpp b/paddle/trainer/TrainerBenchmark.cpp index 5c3177c808..173653c816 100644 --- a/paddle/trainer/TrainerBenchmark.cpp +++ b/paddle/trainer/TrainerBenchmark.cpp @@ -18,9 +18,9 @@ limitations under the License. 
*/ #include "paddle/utils/Stat.h" #include "paddle/utils/Util.h" -P_DECLARE_int32(test_period); +DECLARE_int32(test_period); -P_DEFINE_bool(feed_data, false, "Wether to read data from DataProvider."); +DEFINE_bool(feed_data, false, "Wether to read data from DataProvider."); namespace paddle { diff --git a/paddle/trainer/TrainerConfigHelper.cpp b/paddle/trainer/TrainerConfigHelper.cpp index 2017a08d20..60ac8459a1 100644 --- a/paddle/trainer/TrainerConfigHelper.cpp +++ b/paddle/trainer/TrainerConfigHelper.cpp @@ -18,16 +18,16 @@ limitations under the License. */ #include "paddle/utils/Flags.h" #include "paddle/utils/PythonUtil.h" -P_DECLARE_string(config); -P_DECLARE_string(init_model_path); -P_DECLARE_int32(start_pass); -P_DECLARE_string(save_dir); -P_DECLARE_int32(trainer_id); -P_DECLARE_bool(local); -P_DECLARE_bool(with_cost); -P_DECLARE_bool(with_gpu); -P_DECLARE_bool(parallel_nn); -P_DECLARE_string(config_args); +DECLARE_string(config); +DECLARE_string(init_model_path); +DECLARE_int32(start_pass); +DECLARE_string(save_dir); +DECLARE_int32(trainer_id); +DECLARE_bool(local); +DECLARE_bool(with_cost); +DECLARE_bool(with_gpu); +DECLARE_bool(parallel_nn); +DECLARE_string(config_args); const char *kConfigParserModuleName = "paddle.trainer.config_parser"; const char *kConfigParserFuncName = "parse_config_and_serialize"; diff --git a/paddle/trainer/TrainerInternalConfig.cpp b/paddle/trainer/TrainerInternalConfig.cpp index a017cdec9d..039fcdb524 100644 --- a/paddle/trainer/TrainerInternalConfig.cpp +++ b/paddle/trainer/TrainerInternalConfig.cpp @@ -14,17 +14,17 @@ limitations under the License. */ #include "TrainerInternalConfig.h" -P_DEFINE_int32(show_parameter_stats_period, - 0, - "Whether to show parameter stats during training"); +DEFINE_int32(show_parameter_stats_period, + 0, + "Whether to show parameter stats during training"); -P_DEFINE_int32(dot_period, 1, "Print '.' every so many batches"); +DEFINE_int32(dot_period, 1, "Print '.' every so many batches"); -P_DEFINE_bool(use_old_updater, false, "Use the old RemoteParameterUpdater"); +DEFINE_bool(use_old_updater, false, "Use the old RemoteParameterUpdater"); -P_DECLARE_int32(num_passes); +DECLARE_int32(num_passes); -P_DECLARE_bool(local); +DECLARE_bool(local); namespace paddle { diff --git a/paddle/trainer/TrainerMain.cpp b/paddle/trainer/TrainerMain.cpp index 0a4d56b892..947f9cadcc 100644 --- a/paddle/trainer/TrainerMain.cpp +++ b/paddle/trainer/TrainerMain.cpp @@ -22,21 +22,20 @@ limitations under the License. 
*/ #include "Trainer.h" #include "paddle/pserver/RDMANetwork.h" -P_DEFINE_bool(start_pserver, false, "Whether to start pserver"); -P_DECLARE_int32(gpu_id); -P_DEFINE_string(job, "train", "one of (train, test, checkgrad)"); -P_DECLARE_int32(start_pass); -P_DECLARE_string(config); -P_DECLARE_string(init_model_path); -P_DECLARE_string(rdma_tcp); +DEFINE_bool(start_pserver, false, "Whether to start pserver"); +DECLARE_int32(gpu_id); +DEFINE_string(job, "train", "one of (train, test, checkgrad)"); +DECLARE_int32(start_pass); +DECLARE_string(config); +DECLARE_string(init_model_path); +DECLARE_string(rdma_tcp); using namespace paddle; // NOLINT int main(int argc, char** argv) { -// write logs instantly (never buffer log messages) -#ifdef PADDLE_USE_GLOG + // write logs instantly (never buffer log messages) FLAGS_logbuflevel = -1; -#endif + initMain(argc, argv); initPython(argc, argv); diff --git a/paddle/trainer/tests/test_Compare.cpp b/paddle/trainer/tests/test_Compare.cpp index 63fa48540c..72fc76bea3 100644 --- a/paddle/trainer/tests/test_Compare.cpp +++ b/paddle/trainer/tests/test_Compare.cpp @@ -24,10 +24,10 @@ using namespace std; // NOLINT static const string& configFile = "trainer/tests/sample_trainer_config.conf"; -P_DECLARE_int32(gpu_id); -P_DECLARE_bool(use_gpu); -P_DECLARE_string(config); -P_DECLARE_string(config_args); +DECLARE_int32(gpu_id); +DECLARE_bool(use_gpu); +DECLARE_string(config); +DECLARE_string(config_args); struct comData { vector outArgs; diff --git a/paddle/trainer/tests/test_CompareSparse.cpp b/paddle/trainer/tests/test_CompareSparse.cpp index 3fea3a3c24..a7000eb77e 100644 --- a/paddle/trainer/tests/test_CompareSparse.cpp +++ b/paddle/trainer/tests/test_CompareSparse.cpp @@ -25,22 +25,22 @@ using namespace std; // NOLINT static const string& configFile1 = "trainer/tests/sample_trainer_config_qb_rnn.conf"; -P_DECLARE_bool(use_gpu); -P_DECLARE_string(config); -P_DECLARE_int32(gpu_id); -P_DECLARE_int32(seed); -P_DECLARE_int32(num_passes); -P_DECLARE_int32(saving_period); - -P_DECLARE_int32(num_gradient_servers); -P_DECLARE_int32(port); -P_DECLARE_bool(local); -P_DECLARE_bool(use_old_updater); -P_DECLARE_bool(parallel_nn); -P_DECLARE_string(config_args); -P_DEFINE_double(max_diff_ratio, - 0.0f, - "max diff ratio allowed for parameters value"); +DECLARE_bool(use_gpu); +DECLARE_string(config); +DECLARE_int32(gpu_id); +DECLARE_int32(seed); +DECLARE_int32(num_passes); +DECLARE_int32(saving_period); + +DECLARE_int32(num_gradient_servers); +DECLARE_int32(port); +DECLARE_bool(local); +DECLARE_bool(use_old_updater); +DECLARE_bool(parallel_nn); +DECLARE_string(config_args); +DEFINE_double(max_diff_ratio, + 0.0f, + "max diff ratio allowed for parameters value"); int gNumDevices = 0; diff --git a/paddle/trainer/tests/test_CompareTwoNets.cpp b/paddle/trainer/tests/test_CompareTwoNets.cpp index 8a4556721d..80c61e259e 100644 --- a/paddle/trainer/tests/test_CompareTwoNets.cpp +++ b/paddle/trainer/tests/test_CompareTwoNets.cpp @@ -22,25 +22,25 @@ limitations under the License. 
*/ using namespace paddle; // NOLINT using namespace std; // NOLINT -P_DECLARE_int32(gpu_id); +DECLARE_int32(gpu_id); -P_DECLARE_bool(local); -P_DECLARE_bool(use_gpu); +DECLARE_bool(local); +DECLARE_bool(use_gpu); -P_DECLARE_string(config); -P_DECLARE_string(nics); +DECLARE_string(config); +DECLARE_string(nics); -P_DEFINE_string(config_file_a, "", "config of one network to compare"); -P_DEFINE_string(config_file_b, "", "config of another network to compare"); -P_DEFINE_bool(need_high_accuracy, - false, - "whether need to run in double accuracy"); -P_DEFINE_double( +DEFINE_string(config_file_a, "", "config of one network to compare"); +DEFINE_string(config_file_b, "", "config of another network to compare"); +DEFINE_bool(need_high_accuracy, + false, + "whether need to run in double accuracy"); +DEFINE_double( max_diff_ratio, 0.0f, "max diff ratio allowed for outputs and parameters (value/gradient)"); -P_DECLARE_bool(thread_local_rand_use_global_seed); -P_DECLARE_int32(seed); +DECLARE_bool(thread_local_rand_use_global_seed); +DECLARE_int32(seed); struct ComData { vector outArgs; diff --git a/paddle/trainer/tests/test_CompareTwoOpts.cpp b/paddle/trainer/tests/test_CompareTwoOpts.cpp index 673ef289d8..383505f813 100644 --- a/paddle/trainer/tests/test_CompareTwoOpts.cpp +++ b/paddle/trainer/tests/test_CompareTwoOpts.cpp @@ -22,20 +22,20 @@ limitations under the License. */ using namespace paddle; // NOLINT using namespace std; // NOLINT -P_DECLARE_int32(gpu_id); +DECLARE_int32(gpu_id); -P_DECLARE_bool(local); -P_DECLARE_bool(use_gpu); +DECLARE_bool(local); +DECLARE_bool(use_gpu); -P_DECLARE_string(config); -P_DECLARE_string(nics); +DECLARE_string(config); +DECLARE_string(nics); -P_DEFINE_string(config_file_a, "", "config of one network to compare"); -P_DEFINE_string(config_file_b, "", "config of another network to compare"); -P_DEFINE_bool(need_high_accuracy, - true, - "whether need to run in double accuracy (recommended)"); -P_DEFINE_double( +DEFINE_string(config_file_a, "", "config of one network to compare"); +DEFINE_string(config_file_b, "", "config of another network to compare"); +DEFINE_bool(need_high_accuracy, + true, + "whether need to run in double accuracy (recommended)"); +DEFINE_double( max_diff_ratio, 0.0f, "max diff ratio allowed for outputs and parameters (value/gradient)"); diff --git a/paddle/trainer/tests/test_Prediction.cpp b/paddle/trainer/tests/test_Prediction.cpp index 322121a579..0c79404eee 100644 --- a/paddle/trainer/tests/test_Prediction.cpp +++ b/paddle/trainer/tests/test_Prediction.cpp @@ -18,11 +18,11 @@ limitations under the License. 
*/ #include -P_DECLARE_string(config); -P_DECLARE_string(config_args); -P_DEFINE_string(merger, - "./paddle_merge_model", - "path to paddle_merge_model binary"); +DECLARE_string(config); +DECLARE_string(config_args); +DEFINE_string(merger, + "./paddle_merge_model", + "path to paddle_merge_model binary"); using namespace paddle; // NOLINT using namespace std; // NOLINT diff --git a/paddle/trainer/tests/test_Trainer.cpp b/paddle/trainer/tests/test_Trainer.cpp index 0fede59f8d..371282dd6b 100644 --- a/paddle/trainer/tests/test_Trainer.cpp +++ b/paddle/trainer/tests/test_Trainer.cpp @@ -28,10 +28,10 @@ static const string& configFile3 = "trainer/tests/chunking.conf"; static const string& configFile4 = "trainer/tests/sample_trainer_config_parallel.conf"; -P_DECLARE_bool(use_gpu); -P_DECLARE_string(config); -P_DECLARE_int32(gpu_id); -P_DECLARE_bool(allow_only_one_model_on_one_gpu); +DECLARE_bool(use_gpu); +DECLARE_string(config); +DECLARE_int32(gpu_id); +DECLARE_bool(allow_only_one_model_on_one_gpu); void checkGradientTest(const string& configFile, bool useGpu, diff --git a/paddle/trainer/tests/test_TrainerOnePass.cpp b/paddle/trainer/tests/test_TrainerOnePass.cpp index 0b587ecce1..ee21008aec 100644 --- a/paddle/trainer/tests/test_TrainerOnePass.cpp +++ b/paddle/trainer/tests/test_TrainerOnePass.cpp @@ -27,12 +27,12 @@ static const string& configFile1 = "trainer/tests/sample_trainer_config.conf"; static const string& configFile2 = "trainer/tests/sample_trainer_config_parallel.conf"; -P_DECLARE_bool(use_gpu); -P_DECLARE_string(config); -P_DECLARE_int32(gpu_id); -P_DECLARE_int32(seed); -P_DECLARE_int32(num_passes); -P_DECLARE_int32(saving_period); +DECLARE_bool(use_gpu); +DECLARE_string(config); +DECLARE_int32(gpu_id); +DECLARE_int32(seed); +DECLARE_int32(num_passes); +DECLARE_int32(saving_period); class TrainerForTest : public paddle::Trainer { public: @@ -122,10 +122,10 @@ TEST(average_window_cpu, gpu4) { #endif // 3. test trainer + pserver. -P_DECLARE_int32(num_gradient_servers); -P_DECLARE_int32(port); -P_DECLARE_bool(local); -P_DECLARE_bool(use_old_updater); +DECLARE_int32(num_gradient_servers); +DECLARE_int32(port); +DECLARE_bool(local); +DECLARE_bool(use_old_updater); double checkRemoteParameterUpdater(TrainerForTest& trainer) { auto gradientMachine = trainer.getGradientMachine(); diff --git a/paddle/trainer/tests/test_recurrent_machine_generation.cpp b/paddle/trainer/tests/test_recurrent_machine_generation.cpp index 7d8dfd788f..03446b3b2f 100644 --- a/paddle/trainer/tests/test_recurrent_machine_generation.cpp +++ b/paddle/trainer/tests/test_recurrent_machine_generation.cpp @@ -30,7 +30,7 @@ static string modelDir = "trainer/tests/rnn_gen_test_model_dir/t1"; // NOLINT static string expectFile = // NOLINT "trainer/tests/rnn_gen_test_model_dir/r1.test"; // NOLINT -P_DECLARE_string(config_args); +DECLARE_string(config_args); vector readRetFile(const string& fname) { ifstream inFile(fname); diff --git a/paddle/utils/BarrierStat.cpp b/paddle/utils/BarrierStat.cpp index 9dde155aca..a6dbdcae3f 100644 --- a/paddle/utils/BarrierStat.cpp +++ b/paddle/utils/BarrierStat.cpp @@ -20,15 +20,15 @@ limitations under the License. 
*/ #include "paddle/utils/Flags.h" #include "paddle/utils/Stat.h" -P_DEFINE_bool(log_barrier_abstract, - true, - "if true, show abstract of barrier performance"); -P_DEFINE_int32(log_barrier_lowest_nodes, - 5, - "how many lowest node will be logged"); -P_DEFINE_bool(log_barrier_show_log, - false, // for performance tuning insight - "if true, always show barrier abstract even with little gap"); +DEFINE_bool(log_barrier_abstract, + true, + "if true, show abstract of barrier performance"); +DEFINE_int32(log_barrier_lowest_nodes, + 5, + "how many lowest node will be logged"); +DEFINE_bool(log_barrier_show_log, + false, // for performance tuning insight + "if true, always show barrier abstract even with little gap"); namespace paddle { diff --git a/paddle/utils/CommandLineParser.cpp b/paddle/utils/CommandLineParser.cpp index 51558b45a1..63f16bc54c 100644 --- a/paddle/utils/CommandLineParser.cpp +++ b/paddle/utils/CommandLineParser.cpp @@ -13,220 +13,7 @@ See the License for the specific language governing permissions and limitations under the License. */ #include "CommandLineParser.h" -#ifndef PADDLE_USE_GFLAGS -#include -#include -#include -#include -#include -#include -#include -#include -#include "paddle/utils/StringUtil.h" -namespace paddle { - -static constexpr int kStatusOK = 0; -static constexpr int kStatusInvalid = 1; -static constexpr int kStatusNotFound = 2; - -/** - * \brief: Convert a string to any type value. - * - * \note: It will specialize by type T that is supported. - */ -template -bool StringToValue(const std::string& content, T* value) { - bool ok; - *value = str::toWithStatus(content, &ok); - return ok; -} - -template <> -bool StringToValue(const std::string& content, bool* value) { - std::string tmp = content; - - std::transform(tmp.begin(), tmp.end(), tmp.begin(), [](char in) -> char { - if (in <= 'Z' && in >= 'A') { - return in - ('Z' - 'z'); - } else { - return in; - } - }); // tolower. - - if (tmp == "true" || tmp == "1") { - *value = true; - return true; - } else if (tmp == "false" || tmp == "0") { - *value = false; - return true; - } else { - return false; - } -} - -template <> -bool StringToValue(const std::string& content, - std::string* value) { - *value = content; - return true; -} - -/** - * \brief Parse argument "--blah=blah". - * - * \param argument: The command line argument string, such as "--blah=blah" - * \param [out] extraInfo: The details error message for parse argument. 
- * \return: kStatusOK, kStatusInvalid, kStatusNotFound - */ -template -int ParseArgument(const std::string& argument, std::string* extraInfo) { - for (auto& command : - flags_internal::CommandLineFlagRegistry::Instance()->commands) { - std::string& name = command.name; - T* value = command.value; - - std::string prefix = "--"; - prefix += name; - prefix += "="; - std::string content; - if (str::startsWith(argument, prefix)) { - content = argument.substr(prefix.size(), argument.size() - prefix.size()); - } else { - prefix = "-"; - prefix += name; - prefix += "="; - if (str::startsWith(argument, prefix)) { - content = - argument.substr(prefix.size(), argument.size() - prefix.size()); - } - } - - if (!content.empty()) { - if (StringToValue(content, value)) { - return kStatusOK; - } else { - *extraInfo = name; - return kStatusInvalid; - } - } - } - return kStatusNotFound; -} - -/** - * @brief ParseBoolArgumentExtra - * parse '--flag_name', '-flag_name' as true; '--noflag_name', '-noflag_name' as - * false - */ -static int ParseBoolArgumentExtra(const std::string& argument, - std::string* extraInfo) { - (void)(extraInfo); // unused extraInfo, just make api same. - - //! @warning: The order and content of prefixes is DESIGNED for parsing - //! command line. The length of prefixes are 1, 2, 3, 4. The parse logic takes - //! use of this fact. DO NOT CHANGE IT without reading how to parse command - //! below. - static const std::vector> prefixes = { - {"-", true}, {"--", true}, {"-no", false}, {"--no", false}}; - - for (flags_internal::CommandLineFlagRegistry::Command& command : - flags_internal::CommandLineFlagRegistry::Instance()->commands) { - if (argument.size() > command.name.size()) { - //! Use the length of prefix is 1, 2, 3, 4. - size_t diff = argument.size() - command.name.size() - 1UL; - if (diff < prefixes.size()) { - const std::string& prefix = std::get<0>(prefixes[diff]); - if (argument == prefix + command.name) { - *command.value = std::get<1>(prefixes[diff]); - return kStatusOK; - } - } - } - } - return kStatusNotFound; -} - -/** - * \brief: Print command line arguments' usage with type T. - */ -template -static void PrintTypeUsage() { - for (auto& command : - flags_internal::CommandLineFlagRegistry::Instance()->commands) { - std::string& name = command.name; - name = "--" + name; // Program will exit, so modify name is safe. - std::string& desc = command.text; - T& defaultValue = command.defaultValue; - std::cerr << std::setw(20) << name << ": " << desc - << "[default:" << defaultValue << "]." << std::endl; - } -} - -template -static void PrintTypeUsages() { - int unused[] = {0, (PrintTypeUsage(), 0)...}; - (void)(unused); -} -/** - * \brief: Print all usage, and exit(1) - */ -static void PrintUsageAndExit(const char* argv0) { - std::cerr << "Program " << argv0 << " Flags: " << std::endl; - PrintTypeUsages(); - exit(1); -} - -/** - * \brief: Print the error flags, usage, and exit. - */ -static void PrintParseError(const std::string& name, - const char* actualInput, - const char* arg0) { - std::cerr << "Parse command flag " << name << " error! 
User input is " - << actualInput << std::endl; - PrintUsageAndExit(arg0); -} - -void ParseCommandLineFlags(int* argc, char** argv, bool withHelp) { - int unused_argc = 1; - std::string extra; - for (int i = 1; i < *argc; ++i) { - std::string arg = argv[i]; - int s = kStatusInvalid; -#define ParseArgumentWithType(type) \ - s = ParseArgument(arg, &extra); \ - if (s == kStatusOK) { \ - continue; \ - } else if (s == kStatusInvalid) { \ - PrintParseError(extra, argv[i], argv[0]); \ - } - - ParseArgumentWithType(bool); // NOLINT - ParseArgumentWithType(int32_t); - ParseArgumentWithType(double); // NOLINT - ParseArgumentWithType(int64_t); - ParseArgumentWithType(uint64_t); - ParseArgumentWithType(std::string); - -#undef ParseArgumentWithType - s = ParseBoolArgumentExtra(arg, &extra); - if (s == kStatusOK) { - continue; - } - - if (withHelp && (arg == "--help" || arg == "-h")) { - PrintUsageAndExit(argv[0]); - } - - // NOT Found for all flags. - std::swap(argv[unused_argc++], argv[i]); - } - *argc = unused_argc; -} - -} // namespace paddle -#else namespace paddle { #ifndef GFLAGS_NS #define GFLAGS_NS google @@ -243,4 +30,3 @@ void ParseCommandLineFlags(int* argc, char** argv, bool withHelp) { } } // namespace paddle -#endif diff --git a/paddle/utils/CommandLineParser.h b/paddle/utils/CommandLineParser.h index b4449c6f09..4e89f90bb9 100644 --- a/paddle/utils/CommandLineParser.h +++ b/paddle/utils/CommandLineParser.h @@ -13,167 +13,10 @@ See the License for the specific language governing permissions and limitations under the License. */ #pragma once -#ifndef PADDLE_USE_GFLAGS -#include -#include -#include -#include "DisableCopy.h" -namespace paddle { - -namespace flags_internal { - -/** - * Command line flag registry for special type T. It will store all command - * arguments settings. such as name, default value. - */ -template -struct CommandLineFlagRegistry { - /** - * The factory method of CommandLineFlagRegistry - * - * \return: The singleton instance of CommandLineFlagRegistry. - */ - static CommandLineFlagRegistry* Instance() { - static CommandLineFlagRegistry instance_; - return &instance_; - } - - struct Command { - /// name of argument. - std::string name; - /// address of actual variable. such as FLAGS_xxx. - T* value; - /// usage text. - std::string text; - /// default value of this command. - T defaultValue; - }; - - /// the command line arguments of type T. - std::vector commands; - - DISABLE_COPY(CommandLineFlagRegistry); - -private: - inline CommandLineFlagRegistry() {} -}; - -/** - *Helper class to register command line flag. - */ -template -struct CommandLineFlagRegister { - /** - * \brief: Register a command line argument - * - * \param [in] name: The command line name. - * \param [inout] val: The command line argument instance, FLAGS_xxx. - * \param [in] desc: The command line helper message. - */ - CommandLineFlagRegister(const std::string& name, - T* val, - const std::string desc) { - CommandLineFlagRegistry::Instance()->commands.push_back( - {name, val, desc, *val}); - } -}; - -/** - * \brief: Define a command line arguments. - * - * \param type: The variable type, such as int, double, etc. - * \param name: The variable name. The command line argument is '--name', the - *variable - *is 'FLAGS_name' - * \param default_value: The default value of command line argument. - * \param text: The description in command line argument. 
- */ -#define PADDLE_DEFINE_variable(type, name, default_value, text) \ - type FLAGS_##name = default_value; \ - namespace paddle_flags_internal { \ - paddle::flags_internal::CommandLineFlagRegister \ - flags_internal_var_##name(#name, &FLAGS_##name, text); \ - } // namespace paddle_flags_internal - -/** - * Declare a variable to use. - */ -#define PADDLE_DECLARE_variable(type, name) extern type FLAGS_##name; - -// DEFINE macro for each types. -#define P_DEFINE_int32(name, default_value, text) \ - PADDLE_DEFINE_variable(int32_t, name, default_value, text) - -#define P_DEFINE_bool(name, default_value, text) \ - PADDLE_DEFINE_variable(bool, name, default_value, text) - -#define P_DEFINE_string(name, default_value, text) \ - PADDLE_DEFINE_variable(std::string, name, default_value, text) - -#define P_DEFINE_double(name, default_value, text) \ - PADDLE_DEFINE_variable(double, name, default_value, text) - -#define P_DEFINE_int64(name, default_value, text) \ - PADDLE_DEFINE_variable(int64_t, name, default_value, text) - -#define P_DEFINE_uint64(name, default_value, text) \ - PADDLE_DEFINE_variable(uint64_t, name, default_value, text) - -// Declare macro for each types. -#define P_DECLARE_int32(name) PADDLE_DECLARE_variable(int32_t, name) -#define P_DECLARE_bool(name) PADDLE_DECLARE_variable(bool, name) -#define P_DECLARE_string(name) PADDLE_DECLARE_variable(std::string, name) -#define P_DECLARE_double(name) PADDLE_DECLARE_variable(double, name) -#define P_DECLARE_int64(name) PADDLE_DECLARE_variable(int64_t, name) -#define P_DECLARE_uint64(name) PADDLE_DECLARE_variable(uint64_t, name) -} // namespace flags_internal - -/** - * \brief Parse command line flags. If parse error, just failed and exit 1. - * - * \param [inout] argc: The command argument count. This method will modify - *argc, and left unused arguments. - * \param [inout] argv: The command argument values. This method will modify - *argv, and left unused arguments. - * \param [in] withHelp: True will parse '-h' and '--help' to print usage. - * - * \note: The Command line flags format basically as follow: - * - * * If the type of flag is not bool, then the follow format of command line - * will be parsed: - * * --flag_name=value - * * -flag_name=value - * - * * If the flag is bool, then: - * * --flag_name=value, -flag_name=value will be parsed. - * * if value.tolower() == "true"| "1" will be treated as true. - * * else if value.tolower() == "false" | "0" will be treated as false. - * * --flag_name will be parsed as true. - * * --noflag_name will be parsed as false. - */ -void ParseCommandLineFlags(int* argc, char** argv, bool withHelp = true); - -} // namespace paddle - -#else // if use gflags. #include -#define P_DEFINE_int32 DEFINE_int32 -#define P_DEFINE_bool DEFINE_bool -#define P_DEFINE_string DEFINE_string -#define P_DEFINE_double DEFINE_double -#define P_DEFINE_int64 DEFINE_int64 -#define P_DEFINE_uint64 DEFINE_uint64 -#define P_DECLARE_int32 DECLARE_int32 -#define P_DECLARE_bool DECLARE_bool -#define P_DECLARE_string DECLARE_string -#define P_DECLARE_double DECLARE_double -#define P_DECLARE_int64 DECLARE_int64 -#define P_DECLARE_uint64 DECLARE_uint64 namespace paddle { void ParseCommandLineFlags(int* argc, char** argv, bool withHelp = true); } // namespace paddle - -#endif diff --git a/paddle/utils/CustomStackTrace.cpp b/paddle/utils/CustomStackTrace.cpp index 083f5c509a..66b38218a7 100644 --- a/paddle/utils/CustomStackTrace.cpp +++ b/paddle/utils/CustomStackTrace.cpp @@ -16,7 +16,7 @@ limitations under the License. 
*/ #include #include "CommandLineParser.h" -P_DEFINE_bool( +DEFINE_bool( layer_stack_error_only_current_thread, true, "Dump current thread or whole process layer stack when signal error " diff --git a/paddle/utils/Flags.cpp b/paddle/utils/Flags.cpp index 1c9e602f45..59d6cbdc51 100644 --- a/paddle/utils/Flags.cpp +++ b/paddle/utils/Flags.cpp @@ -15,65 +15,61 @@ limitations under the License. */ #include "Flags.h" #ifdef PADDLE_ONLY_CPU -P_DEFINE_bool(use_gpu, false, "Only support CPU training"); +DEFINE_bool(use_gpu, false, "Only support CPU training"); #else -P_DEFINE_bool(use_gpu, true, "Whether to use GPU for training"); +DEFINE_bool(use_gpu, true, "Whether to use GPU for training"); #endif -P_DEFINE_bool( - parallel_nn, - false, - "Whether to use multi-threads to calculate one neural network." - "If it was set false, use gpu_id specify which gpu core to use" - "(the device property in the trainer config file will be ingored)." - "If it was set true, the gpu core is specified by the trainer" - " config file(gpu_id will be ignored)."); -P_DEFINE_int32(trainer_count, 1, "Defined how many trainers to train"); -P_DEFINE_int32(gpu_id, 0, "Which gpu core to use"); -P_DEFINE_int32(port, 20134, "Listening port for pserver"); -P_DEFINE_int32(data_server_port, 21134, "Listening port for dserver"); -P_DEFINE_int32(ports_num, - 1, - "The ports number for parameter send," - " increment based on default port number"); -P_DEFINE_int32(ports_num_for_sparse, - 0, - "The ports number for parameter send," - " increment based on default (port + ports_num)"); -P_DEFINE_string(nics, "xgbe0,xgbe1", "network device name for pservers"); -P_DEFINE_string(rdma_tcp, "tcp", "use rdma or tcp rdma transport protocol"); -P_DEFINE_int32( - trainer_id, - 0, - "For distributed training, each trainer must be given an unique id" - " ranging from 0 to num_trainers-1. Trainer 0 is the master" - " trainer"); -P_DEFINE_int32(num_gradient_servers, 1, "number of gradient servers"); -P_DEFINE_string(comment, "", "A string for commenting this training task"); -P_DEFINE_string(load_missing_parameter_strategy, - "fail", - "which operation to take on load model fails. support " - "fail/rand/zero only."); -P_DEFINE_int32(log_period, 100, "Log progress every so many batches"); -P_DEFINE_int32(log_period_server, - 500, - "Log progress every so many batches at pserver end"); -P_DEFINE_double(checkgrad_eps, 1e-5, "parameter change size for checkgrad"); -P_DEFINE_int32(enable_parallel_vector, - 0, - "threshold for enable parallel vector"); -P_DEFINE_bool(loadsave_parameters_in_pserver, - false, - "load and save parameters in pserver. " - "only work while parameter set sparse_remote_update."); -P_DEFINE_int32(beam_size, - 1, - "Beam size used in generating most probable output sequences."); +DEFINE_bool(parallel_nn, + false, + "Whether to use multi-threads to calculate one neural network." + "If it was set false, use gpu_id specify which gpu core to use" + "(the device property in the trainer config file will be ingored)." 
+ "If it was set true, the gpu core is specified by the trainer" + " config file(gpu_id will be ignored)."); +DEFINE_int32(trainer_count, 1, "Defined how many trainers to train"); +DEFINE_int32(gpu_id, 0, "Which gpu core to use"); +DEFINE_int32(port, 20134, "Listening port for pserver"); +DEFINE_int32(data_server_port, 21134, "Listening port for dserver"); +DEFINE_int32(ports_num, + 1, + "The ports number for parameter send," + " increment based on default port number"); +DEFINE_int32(ports_num_for_sparse, + 0, + "The ports number for parameter send," + " increment based on default (port + ports_num)"); +DEFINE_string(nics, "xgbe0,xgbe1", "network device name for pservers"); +DEFINE_string(rdma_tcp, "tcp", "use rdma or tcp rdma transport protocol"); +DEFINE_int32(trainer_id, + 0, + "For distributed training, each trainer must be given an unique id" + " ranging from 0 to num_trainers-1. Trainer 0 is the master" + " trainer"); +DEFINE_int32(num_gradient_servers, 1, "number of gradient servers"); +DEFINE_string(comment, "", "A string for commenting this training task"); +DEFINE_string(load_missing_parameter_strategy, + "fail", + "which operation to take on load model fails. support " + "fail/rand/zero only."); +DEFINE_int32(log_period, 100, "Log progress every so many batches"); +DEFINE_int32(log_period_server, + 500, + "Log progress every so many batches at pserver end"); +DEFINE_double(checkgrad_eps, 1e-5, "parameter change size for checkgrad"); +DEFINE_int32(enable_parallel_vector, 0, "threshold for enable parallel vector"); +DEFINE_bool(loadsave_parameters_in_pserver, + false, + "load and save parameters in pserver. " + "only work while parameter set sparse_remote_update."); +DEFINE_int32(beam_size, + 1, + "Beam size used in generating most probable output sequences."); -P_DEFINE_bool(show_layer_stat, false, "show the statistics of each layer"); -P_DEFINE_string(predict_file, "", "File name for saving predict result"); -P_DEFINE_bool(prev_batch_state, false, "batch is continue with next batch"); -P_DEFINE_string(init_model_path, - "", - "Path of the initial model parameters." - "If it was set, start_pass will be ignored."); +DEFINE_bool(show_layer_stat, false, "show the statistics of each layer"); +DEFINE_string(predict_file, "", "File name for saving predict result"); +DEFINE_bool(prev_batch_state, false, "batch is continue with next batch"); +DEFINE_string(init_model_path, + "", + "Path of the initial model parameters." + "If it was set, start_pass will be ignored."); diff --git a/paddle/utils/Flags.h b/paddle/utils/Flags.h index 922533d63e..2ebbcb24eb 100644 --- a/paddle/utils/Flags.h +++ b/paddle/utils/Flags.h @@ -16,28 +16,28 @@ limitations under the License. 
*/ #include "CommandLineParser.h" -P_DECLARE_bool(parallel_nn); -P_DECLARE_int32(async_count); -P_DECLARE_int32(port); -P_DECLARE_int32(data_server_port); -P_DECLARE_bool(use_gpu); -P_DECLARE_int32(gpu_id); -P_DECLARE_int32(trainer_count); -P_DECLARE_int32(ports_num); -P_DECLARE_int32(ports_num_for_sparse); -P_DECLARE_string(nics); -P_DECLARE_string(rdma_tcp); -P_DECLARE_int32(trainer_id); -P_DECLARE_int32(num_gradient_servers); -P_DECLARE_string(comment); -P_DECLARE_string(load_missing_parameter_strategy); -P_DECLARE_int32(log_period); -P_DECLARE_int32(log_period_server); -P_DECLARE_double(checkgrad_eps); -P_DECLARE_int32(enable_parallel_vector); -P_DECLARE_bool(loadsave_parameters_in_pserver); -P_DECLARE_int32(beam_size); -P_DECLARE_bool(show_layer_stat); -P_DECLARE_string(predict_file); -P_DECLARE_bool(prev_batch_state); -P_DECLARE_string(init_model_path); +DECLARE_bool(parallel_nn); +DECLARE_int32(async_count); +DECLARE_int32(port); +DECLARE_int32(data_server_port); +DECLARE_bool(use_gpu); +DECLARE_int32(gpu_id); +DECLARE_int32(trainer_count); +DECLARE_int32(ports_num); +DECLARE_int32(ports_num_for_sparse); +DECLARE_string(nics); +DECLARE_string(rdma_tcp); +DECLARE_int32(trainer_id); +DECLARE_int32(num_gradient_servers); +DECLARE_string(comment); +DECLARE_string(load_missing_parameter_strategy); +DECLARE_int32(log_period); +DECLARE_int32(log_period_server); +DECLARE_double(checkgrad_eps); +DECLARE_int32(enable_parallel_vector); +DECLARE_bool(loadsave_parameters_in_pserver); +DECLARE_int32(beam_size); +DECLARE_bool(show_layer_stat); +DECLARE_string(predict_file); +DECLARE_bool(prev_batch_state); +DECLARE_string(init_model_path); diff --git a/paddle/utils/Logging.cpp b/paddle/utils/Logging.cpp index 20f32466a5..5a1c6ecb22 100644 --- a/paddle/utils/Logging.cpp +++ b/paddle/utils/Logging.cpp @@ -18,175 +18,9 @@ limitations under the License. */ */ #include "Logging.h" -#ifndef PADDLE_USE_GLOG -#include -#include -#include -#include -#include -#include - -#include -#include -#include -#include namespace paddle { -namespace internal { - -std::string join(const std::string& part1, const std::string& part2) { - const char sep = '/'; - if (!part2.empty() && part2.front() == sep) { - return part2; - } - std::string ret; - ret.reserve(part1.size() + part2.size() + 1); - ret = part1; - if (!ret.empty() && ret.back() != sep) { - ret += sep; - } - ret += part2; - return ret; -} - -static inline bool env2bool(const char* envName, bool defaultValue = false) { - char* envValue = getenv(envName); - if (envValue == nullptr) { - return defaultValue; - } else { - return memchr("tTyY1\0", envValue[0], 6) != nullptr; - } -} - -static inline int env2int(const char* envName, int defaultValue = 0) { - char* envValue = getenv(envName); - if (envValue == nullptr) { - return defaultValue; - } else { - int retValue = defaultValue; - try { - retValue = std::stoi(envValue); - } catch (...) 
{ - // pass - } - return retValue; - } -} - -static inline int env2index(const char* envName, - const std::vector& options, - int defaultValue) { - char* envValue = getenv(envName); - if (envValue == nullptr) { - return defaultValue; - } else { - for (size_t i = 0; i < options.size(); ++i) { - if (options[i] == envValue) { - return static_cast(i); - } - } - return defaultValue; - } -} - -static bool gLogToStderr = env2bool("PLOG_LOGTOSTDERR", true); -static const std::vector gLevelName = { - "INFO", "WARNING", "ERROR", "FATAL"}; -static int gMinLogLevel = - env2int("PLOG_MINLOGLEVEL", env2index("PLOG_MINLOGLEVEL", gLevelName, 0)); - -static std::vector> gLogFds; -static std::vector gLogFileFds; -static bool gLogInited = false; -static void freeLogFileFds() { - for (auto fd : gLogFileFds) { - close(fd); - } -} - -static void initializeLogFds(char* argv0) { - gLogFds.resize(NUM_SEVERITIES); - - for (int i = gMinLogLevel; i < NUM_SEVERITIES && gLogToStderr; - ++i) { // Add stderr - std::vector& fds = gLogFds[i]; - fds.push_back(STDERR_FILENO); - } - - char* logDir = getenv("PLOG_LOGDIR"); - - for (int i = gMinLogLevel; i < NUM_SEVERITIES && logDir != nullptr; ++i) { - std::string filename = - join(logDir, std::string(argv0) + "." + gLevelName[i]); - int fd = open(filename.c_str(), O_CREAT | O_WRONLY, 0644); - if (fd == -1) { - fprintf(stderr, "Open log file error!"); - exit(1); - } - gLogFileFds.push_back(fd); - - std::vector& curFds = gLogFds[i]; - curFds.insert(curFds.end(), gLogFileFds.begin(), gLogFileFds.end()); - } - - atexit(freeLogFileFds); - gLogInited = true; -} - -static void (*gFailureFunctionPtr)() ATTR_NORETURN = abort; - -LogMessage::LogMessage(const char* fname, int line, int severity) - : fname_(fname), line_(line), severity_(severity) {} - -LogMessage::~LogMessage() { this->generateLogMessage(); } - -void LogMessage::generateLogMessage() { - if (!gLogInited) { - fprintf(stderr, - "%c %s:%d] %s\n", - "IWEF"[severity_], - fname_, - line_, - str().c_str()); - } else { - for (auto& fd : gLogFds[this->severity_]) { - dprintf(fd, - "%c %s:%d] %s\n", - "IWEF"[severity_], - fname_, - line_, - str().c_str()); - } - } -} - -LogMessageFatal::LogMessageFatal(const char* file, int line) - : LogMessage(file, line, FATAL) {} - -LogMessageFatal::~LogMessageFatal() { - generateLogMessage(); - gFailureFunctionPtr(); -} -} // namespace internal - -void initializeLogging(int argc, char** argv) { - internal::initializeLogFds(argv[0]); -} - -namespace logging { -void setMinLogLevel(int level) { paddle::internal::gMinLogLevel = level; } - -void installFailureFunction(void (*callback)() ATTR_NORETURN) { - paddle::internal::gFailureFunctionPtr = callback; -} - -} // namespace logging - -} // namespace paddle - -#else -namespace paddle { void initializeLogging(int argc, char** argv) { (void)(argc); if (!getenv("GLOG_logtostderr")) { @@ -197,13 +31,16 @@ void initializeLogging(int argc, char** argv) { } namespace logging { + void setMinLogLevel(int level) { FLAGS_minloglevel = level; } + void installFailureFunction(void (*callback)()) { google::InstallFailureFunction(callback); } + void installFailureWriter(void (*callback)(const char*, int)) { google::InstallFailureWriter(callback); } + } // namespace logging } // namespace paddle -#endif diff --git a/paddle/utils/Logging.h b/paddle/utils/Logging.h index 4379289f6d..d9e551f089 100644 --- a/paddle/utils/Logging.h +++ b/paddle/utils/Logging.h @@ -22,175 +22,21 @@ limitations under the License. 
*/ #include #include -#ifndef PADDLE_USE_GLOG -#include "CompilerMacros.h" - -//! TODO(yuyang18): Move this utility macro into some global header. -#define PP_CAT(a, b) PP_CAT_I(a, b) -#define PP_CAT_I(a, b) PP_CAT_II(~, a##b) -#define PP_CAT_II(p, res) res - -/** - * Generate Unique Variable Name, Usefully in macro. - * @SEE - * http://stackoverflow.com/questions/1082192/how-to-generate-random-variable-names-in-c-using-macros - */ -#define UNIQUE_NAME(base) PP_CAT(base, __LINE__) - +#include namespace paddle { -//! Log levels. -const int INFO = 0; -const int WARNING = 1; -const int ERROR = 2; -const int FATAL = 3; -const int NUM_SEVERITIES = 4; - -namespace internal { - -class LogMessage : public std::basic_ostringstream { -public: - LogMessage(const char* fname, int line, int severity); - ~LogMessage(); - -protected: - /** - * @brief Print log message to stderr, files, etc. - */ - void generateLogMessage(); - -private: - const char* fname_; - int line_; - int severity_; -}; - -// LogMessageFatal ensures the process will exit in failure after -// logging this message. -class LogMessageFatal : public LogMessage { -public: - LogMessageFatal(const char* file, int line) __attribute__((cold)); - ~LogMessageFatal() __attribute__((noreturn)); -}; - -#define _P_LOG_INFO \ - ::paddle::internal::LogMessage(__FILE__, __LINE__, paddle::INFO) -#define _P_LOG_WARNING \ - ::paddle::internal::LogMessage(__FILE__, __LINE__, paddle::WARNING) -#define _P_LOG_ERROR \ - ::paddle::internal::LogMessage(__FILE__, __LINE__, paddle::ERROR) -#define _P_LOG_FATAL ::paddle::internal::LogMessageFatal(__FILE__, __LINE__) - -#define P_LOG(severity) _P_LOG_##severity - -#define P_LOG_FIRST_N(severity, n) \ - static int UNIQUE_NAME(LOG_OCCURRENCES) = 0; \ - if (UNIQUE_NAME(LOG_OCCURRENCES) <= n) ++UNIQUE_NAME(LOG_OCCURRENCES); \ - if (UNIQUE_NAME(LOG_OCCURRENCES) <= n) P_LOG(severity) - -#define P_LOG_IF_EVERY_N(severity, condition, n) \ - static int UNIQUE_NAME(LOG_OCCURRENCES) = 0; \ - if (condition && ((UNIQUE_NAME(LOG_OCCURRENCES) = \ - (UNIQUE_NAME(LOG_OCCURRENCES) + 1) % n) == (1 % n))) \ - P_LOG(severity) - -#define P_LOG_EVERY_N(severity, n) P_LOG_IF_EVERY_N(severity, true, n) - -// TODO(jeff): Define a proper implementation of VLOG_IS_ON -#define P_VLOG_IS_ON(lvl) ((lvl) <= 0) - -#define P_LOG_IF(severity, condition) \ - if (condition) P_LOG(severity) - -#define P_VLOG(lvl) P_LOG_IF(INFO, P_VLOG_IS_ON(lvl)) - -#define P_VLOG_IF(lvl, cond) P_LOG_IF(INFO, P_VLOG_IS_ON(lvl) && cond) - -#define P_VLOG_EVERY_N(lvl, n) P_LOG_IF_EVERY_N(INFO, P_VLOG_IS_ON(lvl), n) - -#define PREDICT_FALSE(x) (__builtin_expect(x, 0)) -#define PREDICT_TRUE(x) (__builtin_expect(!!(x), 1)) - -// CHECK dies with a fatal error if condition is not true. It is *not* -// controlled by NDEBUG, so the check will be executed regardless of -// compilation mode. Therefore, it is safe to do things like: -// CHECK(fp->Write(x) == 4) -#define P_CHECK(condition) \ - if (PREDICT_FALSE(!(condition))) \ - P_LOG(FATAL) << "Check failed: " #condition " " - -#define P_CHECK_EQ(val1, val2) P_CHECK((val1) == (val2)) -#define P_CHECK_NE(val1, val2) P_CHECK((val1) != (val2)) -#define P_CHECK_LE(val1, val2) P_CHECK((val1) <= (val2)) -#define P_CHECK_LT(val1, val2) P_CHECK((val1) < (val2)) -#define P_CHECK_GE(val1, val2) P_CHECK((val1) >= (val2)) -#define P_CHECK_GT(val1, val2) P_CHECK((val1) > (val2)) -#define P_CHECK_NOTNULL(val) P_CHECK((val) != NULL) - -//! GLOG compatible APIs -//! NOTE: only implement Paddle actually used APIs. 
-#define LOG(x) P_LOG(x) -#define VLOG(x) P_VLOG(x) -#define DLOG(x) P_VLOG(5) -#define CHECK(x) P_CHECK(x) -#define PCHECK(x) P_CHECK(x) -#define CHECK_EQ(val1, val2) P_CHECK((val1) == (val2)) -#define CHECK_NE(val1, val2) P_CHECK((val1) != (val2)) -#define CHECK_LE(val1, val2) P_CHECK((val1) <= (val2)) -#define CHECK_LT(val1, val2) P_CHECK((val1) < (val2)) -#define CHECK_GE(val1, val2) P_CHECK((val1) >= (val2)) -#define CHECK_GT(val1, val2) P_CHECK((val1) > (val2)) -#define CHECK_NOTNULL(val) P_CHECK((val) != NULL) -#define VLOG_IS_ON(x) P_VLOG_IS_ON(x) -#define LOG_FIRST_N(severity, n) P_LOG_FIRST_N(severity, n) -#define LOG_IF(severity, condition) P_LOG_IF(severity, condition) -#define VLOG_EVERY_N(lvl, n) P_VLOG_EVERY_N(lvl, n) -#define VLOG_IF(lvl, cond) P_VLOG_IF(lvl, cond) -#define LOG_EVERY_N(severity, n) P_LOG_EVERY_N(severity, n) -} // namespace internal - -/** - * @brief initialize logging - * @note: Current implement of logging is lack of: - * PrintCallStack when fatal. - * VLOG_IS_ON - * But it is portable to multi-platform, and simple enough to modify. - */ void initializeLogging(int argc, char** argv); -namespace logging { -/** - * @brief Set Min Log Level. if Log.level < minLogLevel, then will not print log - * to stream - * @param level. Any integer is OK, but only 0 <= x <= NUM_SEVERITIES is useful. - */ -void setMinLogLevel(int level); - -/** - * @brief Install Log(Fatal) failure function. Default is abort(); - * @param callback: The failure function. - */ -void installFailureFunction(void (*callback)() ATTR_NORETURN); -/** - * @brief installFailureWriter - * @note: not implemented currently. - */ -inline void installFailureWriter(void (*callback)(const char*, int)) { - (void)(callback); // unused callback. -} -} // namespace logging -} // namespace paddle -#else -#include -namespace paddle { -void initializeLogging(int argc, char** argv); namespace logging { + void setMinLogLevel(int level); + void installFailureFunction(void (*callback)()); + void installFailureWriter(void (*callback)(const char*, int)); -} // namespace logging -} -#endif // PADDLE_USE_GLOG + +} // namespace logging +} // namespace paddle #ifndef NDEBUG #define DEBUG_LEVEL 5 diff --git a/paddle/utils/PythonUtil.cpp b/paddle/utils/PythonUtil.cpp index 2ee4e4fb7e..7faeff55c2 100644 --- a/paddle/utils/PythonUtil.cpp +++ b/paddle/utils/PythonUtil.cpp @@ -20,8 +20,8 @@ namespace paddle { #ifdef PADDLE_NO_PYTHON -P_DEFINE_string(python_path, "", "python path"); -P_DEFINE_string(python_bin, "python2.7", "python bin"); +DEFINE_string(python_path, "", "python path"); +DEFINE_string(python_bin, "python2.7", "python bin"); constexpr int kExecuteCMDBufLength = 204800; diff --git a/paddle/utils/ThreadLocal.cpp b/paddle/utils/ThreadLocal.cpp index 8a2878fc4b..75ccbd28cf 100644 --- a/paddle/utils/ThreadLocal.cpp +++ b/paddle/utils/ThreadLocal.cpp @@ -16,9 +16,9 @@ limitations under the License. */ #include "CommandLineParser.h" #include "Util.h" -P_DEFINE_bool(thread_local_rand_use_global_seed, - false, - "Whether to use global seed in thread local rand."); +DEFINE_bool(thread_local_rand_use_global_seed, + false, + "Whether to use global seed in thread local rand."); namespace paddle { diff --git a/paddle/utils/Util.cpp b/paddle/utils/Util.cpp index 26ff385c84..7c0d66c488 100644 --- a/paddle/utils/Util.cpp +++ b/paddle/utils/Util.cpp @@ -33,7 +33,7 @@ limitations under the License. */ #include "ThreadLocal.h" #include "Version.h" -P_DEFINE_int32(seed, 1, "random number seed. 
0 for srand(time)"); +DEFINE_int32(seed, 1, "random number seed. 0 for srand(time)"); #ifdef WITH_GOOGLE_PERFTOOLS /* @@ -52,10 +52,8 @@ P_DEFINE_int32(seed, 1, "random number seed. 0 for srand(time)"); #include -P_DEFINE_int32(profile_signal, 12, "signal for switch google profiler"); -P_DEFINE_string(profile_data_file, - "gperf.prof", - "file for storing profile data"); +DEFINE_int32(profile_signal, 12, "signal for switch google profiler"); +DEFINE_string(profile_data_file, "gperf.prof", "file for storing profile data"); static void profilerSwitch(int signalNumber) { bool static started = false; diff --git a/paddle/utils/Version.cpp b/paddle/utils/Version.cpp index a9e351b69f..731c308421 100644 --- a/paddle/utils/Version.cpp +++ b/paddle/utils/Version.cpp @@ -18,13 +18,8 @@ limitations under the License. */ #include #include "Flags.h" #include "Util.h" -//! TODO(yuyang18) in gflags, version has another define. Use another flag -//! instead. -#ifndef PADDLE_USE_GFLAGS -P_DEFINE_bool(version, false, "print version"); -#else -P_DECLARE_bool(version); -#endif + +DECLARE_bool(version); namespace paddle { namespace version { diff --git a/paddle/utils/tests/CMakeLists.txt b/paddle/utils/tests/CMakeLists.txt index 298ede5cd6..26fafbd1ab 100644 --- a/paddle/utils/tests/CMakeLists.txt +++ b/paddle/utils/tests/CMakeLists.txt @@ -1,5 +1,3 @@ -add_simple_unittest(test_CommandLineParser) -add_simple_unittest(test_Logging) add_simple_unittest(test_Thread) add_simple_unittest(test_StringUtils) add_simple_unittest(test_CustomStackTrace) diff --git a/paddle/utils/tests/test_CommandLineParser.cpp b/paddle/utils/tests/test_CommandLineParser.cpp deleted file mode 100644 index ed2b3068d5..0000000000 --- a/paddle/utils/tests/test_CommandLineParser.cpp +++ /dev/null @@ -1,114 +0,0 @@ -/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. */ - -#ifndef PADDLE_USE_GFLAGS -//! Test Command Line Parser for paddle internal implement. 
- -#include -#include - -P_DEFINE_int32(i1, 1, "test int flag 1"); -P_DEFINE_int32(i2, 2, "test int flag 2"); - -P_DEFINE_string(str1, "1", "test str flag 1"); -P_DEFINE_string(str2, "2", "test str flag 2"); - -P_DEFINE_bool(b1, true, "test bool flag 1"); -P_DEFINE_bool(b2, false, "test bool flag 2"); - -P_DEFINE_double(d1, 0.1, "test double flag 1"); -P_DEFINE_double(d2, -42.3, "test double flag 2"); - -P_DEFINE_int64(l1, 1, "test int64 flag 1"); -P_DEFINE_int64(l2, 2, "test int64 flag 2"); - -P_DEFINE_uint64(ul1, 32, "test uint64 flag 1"); -P_DEFINE_uint64(ul2, 33, "test uint64 flag 2"); - -constexpr double EPSILON = 1e-5; - -#define cc(x) const_cast((x)) - -TEST(CommandLineParser, defaultValue) { - char* argv[] = {cc("test_program"), cc("--unused_flag=134")}; - int argc = sizeof(argv) / sizeof(char*); - - paddle::ParseCommandLineFlags(&argc, argv); - - // Check Default Value - ASSERT_EQ(argc, 2); - ASSERT_EQ(FLAGS_i1, 1); - ASSERT_EQ(FLAGS_i2, 2); - ASSERT_EQ(FLAGS_str1, "1"); - ASSERT_EQ(FLAGS_str2, "2"); - ASSERT_EQ(FLAGS_b1, true); - ASSERT_EQ(FLAGS_b2, false); - ASSERT_NEAR(FLAGS_d1, 0.1, EPSILON); - ASSERT_NEAR(FLAGS_d2, -42.3, EPSILON); - ASSERT_EQ(FLAGS_i1, 1); - ASSERT_EQ(FLAGS_i2, 2); - ASSERT_EQ(FLAGS_ul1, 32UL); - ASSERT_EQ(FLAGS_ul2, 33UL); -} - -TEST(CommandLineParser, normal) { - char* argv[] = {cc("test_program"), - cc("--i2=32"), - cc("--str1=abc"), - cc("--b2=1"), - cc("-b1=False"), - cc("--d2=.34"), - cc("--d1=0"), - cc("--l1=-12345678901234"), - cc("-ul2=3212")}; - int argc = sizeof(argv) / sizeof(char*); - paddle::ParseCommandLineFlags(&argc, argv); - ASSERT_EQ(argc, 1); - ASSERT_EQ(FLAGS_i2, 32); - ASSERT_EQ(FLAGS_str1, "abc"); - ASSERT_EQ(FLAGS_b2, true); - ASSERT_EQ(FLAGS_b1, false); - ASSERT_NEAR(FLAGS_d2, 0.34, EPSILON); - ASSERT_NEAR(FLAGS_d1, 0.0, EPSILON); - ASSERT_EQ(FLAGS_l1, -12345678901234); - ASSERT_EQ(FLAGS_ul2, 3212UL); -} - -TEST(CommandLineParser, printHelp) { - char* argv[] = {cc("test_program"), cc("--help")}; - int argc = sizeof(argv) / sizeof(char*); - - // Will Print Usage - ASSERT_DEATH(paddle::ParseCommandLineFlags(&argc, argv), ".*test_program.*"); -} - -TEST(CommandLineParser, parseError) { - char* argv[] = {cc("test_program"), cc("--i1=abc")}; - - int argc = sizeof(argv) / sizeof(char*); - ASSERT_DEATH( - paddle::ParseCommandLineFlags(&argc, argv), - "Parse command flag i1 error! User input is --i1=abc.*test_program.*"); -} - -int main(int argc, char** argv) { - testing::InitGoogleTest(&argc, argv); - return RUN_ALL_TESTS(); -} - -#else - -int main(int argc, char** argv) { return 0; } - -#endif diff --git a/paddle/utils/tests/test_CustomStackTrace.cpp b/paddle/utils/tests/test_CustomStackTrace.cpp index 292ed4619d..2ce1998376 100644 --- a/paddle/utils/tests/test_CustomStackTrace.cpp +++ b/paddle/utils/tests/test_CustomStackTrace.cpp @@ -20,7 +20,7 @@ limitations under the License. */ #include "paddle/utils/Locks.h" #include "paddle/utils/Util.h" -P_DEFINE_int32(test_thread_num, 10, "testing thread number"); +DEFINE_int32(test_thread_num, 10, "testing thread number"); void testNormalImpl( const std::function&, diff --git a/paddle/utils/tests/test_Logging.cpp b/paddle/utils/tests/test_Logging.cpp deleted file mode 100644 index fbfffcc65a..0000000000 --- a/paddle/utils/tests/test_Logging.cpp +++ /dev/null @@ -1,162 +0,0 @@ -/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. 
-You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. */ - -/* - * Basically from tensorflow/core/platform/default/logging.cc - * Used in embedded system where there is no glogs. - */ - -#include -#include -#include -#include -#include "paddle/utils/Logging.h" -#include "paddle/utils/Util.h" -#ifndef PADDLE_USE_GLOG -TEST(Logging, BasicalLog) { - auto pinfo = [] { - P_LOG(INFO) << "INFO"; - exit(1); - }; - ASSERT_DEATH(pinfo(), "I .*test_Logging.cpp:[0-9]+] INFO"); - - auto pwarn = [] { - P_LOG(WARNING) << "WARN"; - exit(1); - }; - ASSERT_DEATH(pwarn(), "W .*test_Logging.cpp:[0-9]+] WARN"); - - auto perr = [] { - P_LOG(ERROR) << "ERROR"; - exit(1); - }; - ASSERT_DEATH(perr(), "E .*test_Logging.cpp:[0-9]+] ERROR"); - - auto pfatal = [] { P_LOG(FATAL) << "FATAL"; }; - ASSERT_DEATH(pfatal(), "F .*test_Logging.cpp:[0-9]+] FATAL"); -} - -TEST(Logging, Check) { - int a = 1; - int b = 2; - P_CHECK(a != b); - - auto pcheckDown = [&] { P_CHECK(a == b); }; - ASSERT_DEATH(pcheckDown(), - "F .*test_Logging.cpp:[0-9]+] Check failed: a == b "); - - P_CHECK_LE(a, b); - P_CHECK_LT(a, b); - double t = 1.2; - P_CHECK_LE(a, t); - double* ptr = nullptr; - - auto pcheckDown2 = [&] { P_CHECK_NOTNULL(ptr); }; - ASSERT_DEATH(pcheckDown2(), "F"); -} - -#define cc(x) const_cast(x) - -TEST(Logging, LogToStderr) { - auto logToStderrCallback = [] { - setenv("PLOG_LOGTOSTDERR", "0", true); - char* argv[] = {cc("test")}; - paddle::initializeLogging(1, argv); - P_LOG(INFO) << "This output will not print to std error"; - exit(1); - }; - - ASSERT_DEATH(logToStderrCallback(), ""); -} - -constexpr char kLogDirName[] = "./test_log_dir"; -const std::vector kLevels = {"INFO", "WARNING", "ERROR", "FATAL"}; - -TEST(Logging, LogToDir) { - ASSERT_EQ(0, mkdir(kLogDirName, 0777)); - auto logToDirCallback = [] { - setenv("PLOG_LOGTOSTDERR", "0", true); - setenv("PLOG_LOGDIR", kLogDirName, true); - char* argv[] = {cc("test")}; - paddle::initializeLogging(1, argv); - - P_LOG(INFO) << "INFO"; - P_LOG(WARNING) << "WARNING"; - P_LOG(ERROR) << "ERROR"; - P_LOG(FATAL) << "FATAL"; - }; - ASSERT_DEATH(logToDirCallback(), ""); - - // There 4 file in logdir - auto dir = opendir(kLogDirName); - size_t fileCount = 0; - std::vector filenames; - for (auto dirContent = readdir(dir); dirContent != nullptr; - dirContent = readdir(dir)) { - std::string filename(dirContent->d_name); - if (filename == "." || filename == "..") { - continue; - } else { - ++fileCount; - for (size_t i = 0; i < kLevels.size(); ++i) { - const std::string& curLevel = kLevels[i]; - if (filename.size() > curLevel.length()) { - size_t diff = filename.size() - curLevel.length(); - size_t j = 0; - for (; j < curLevel.length(); ++j) { - if (filename[j + diff] != curLevel[j]) { - // File Suffix Not Same, then break. - break; - } - } - if (j == curLevel.length()) { // Same suffix. - std::ifstream fin; - auto fn = paddle::path::join(kLogDirName, filename); - fin.open(fn); - filenames.push_back(fn); - ASSERT_TRUE(fin.is_open()); - size_t lineCounter = 0; - for (std::string line; std::getline(fin, line); ++lineCounter) { - // Do Nothing, Just calc lineCounter. - } - - // For example. 
- // The info channel will have all log which level >= INFO - // So the info file's lineCounter should == 4. - ASSERT_EQ(kLevels.size() - i, lineCounter); - fin.close(); - } - } - } - } - } - closedir(dir); - ASSERT_EQ(4UL, fileCount); // 4 levels. - // Clean Unittest. - for (std::string& fn : filenames) { - ASSERT_EQ(remove(fn.c_str()), 0); - } - ASSERT_EQ(rmdir(kLogDirName), 0); -} - -int main(int argc, char** argv) { - testing::InitGoogleTest(&argc, argv); - return RUN_ALL_TESTS(); -} - -#else - -int main(int, char**) { return 0; } - -#endif diff --git a/paddle/utils/tests/test_SpinLock.cpp b/paddle/utils/tests/test_SpinLock.cpp index 22f8584ef5..8351e7e3ac 100644 --- a/paddle/utils/tests/test_SpinLock.cpp +++ b/paddle/utils/tests/test_SpinLock.cpp @@ -19,7 +19,7 @@ limitations under the License. */ #include "paddle/utils/Logging.h" #include "paddle/utils/Util.h" -P_DEFINE_int32(test_thread_num, 100, "testing thread number"); +DEFINE_int32(test_thread_num, 100, "testing thread number"); void testNormalImpl( size_t thread_num, diff --git a/paddle/utils/tests/test_ThreadBarrier.cpp b/paddle/utils/tests/test_ThreadBarrier.cpp index 4a8af5b97e..60c2214ffd 100644 --- a/paddle/utils/tests/test_ThreadBarrier.cpp +++ b/paddle/utils/tests/test_ThreadBarrier.cpp @@ -20,7 +20,7 @@ limitations under the License. */ #include "paddle/utils/Logging.h" #include "paddle/utils/Util.h" -P_DEFINE_int32(test_thread_num, 100, "testing thread number"); +DEFINE_int32(test_thread_num, 100, "testing thread number"); void testNormalImpl( size_t thread_num, From 9a243d7a23d4c520ae0174232ac874879e2e4e9a Mon Sep 17 00:00:00 2001 From: Zhizhong Su Date: Wed, 14 Dec 2016 13:06:38 +0000 Subject: [PATCH 141/265] fix a few bugs related to frame size. --- paddle/gserver/layers/DataLayer.cpp | 2 +- paddle/parameter/Argument.cpp | 2 ++ proto/ModelConfig.proto | 2 +- 3 files changed, 4 insertions(+), 2 deletions(-) diff --git a/paddle/gserver/layers/DataLayer.cpp b/paddle/gserver/layers/DataLayer.cpp index 66f0606a38..3551df4e17 100644 --- a/paddle/gserver/layers/DataLayer.cpp +++ b/paddle/gserver/layers/DataLayer.cpp @@ -54,7 +54,7 @@ void DataLayer::copyDataToOutput(Argument& output) { output.setFrameWidth(config_.width()); } else { output.setFrameHeight(data_.getFrameHeight()); - output.setFrameHeight(data_.getFrameHeight()); + output.setFrameWidth(data_.getFrameWidth()); } output.cpuSequenceDims = data_.cpuSequenceDims; output.sequenceStartPositions = data_.sequenceStartPositions; diff --git a/paddle/parameter/Argument.cpp b/paddle/parameter/Argument.cpp index e91daa3717..65d01a1571 100644 --- a/paddle/parameter/Argument.cpp +++ b/paddle/parameter/Argument.cpp @@ -245,6 +245,8 @@ int32_t Argument::resizeAndCopyFrom(const Argument& src, bool useGpu, hl_stream_t stream) { dataId = src.dataId; + frameWidth = src.frameWidth; + frameHeight = src.frameHeight; if (!src.sequenceStartPositions) { // non-sequence input, copy samples directly diff --git a/proto/ModelConfig.proto b/proto/ModelConfig.proto index b34e1ebded..552af71e76 100644 --- a/proto/ModelConfig.proto +++ b/proto/ModelConfig.proto @@ -245,7 +245,7 @@ message ImageConfig { // The size of input feature map. 
required uint32 img_size = 8; - required uint32 img_size_y = 9; + optional uint32 img_size_y = 9; } message LayerInputConfig { From ae174b33c0095aa20c058e47a5b3b826fae8bf37 Mon Sep 17 00:00:00 2001 From: liaogang Date: Wed, 14 Dec 2016 21:45:51 +0800 Subject: [PATCH 142/265] Remove WITH_GLOG and WITH_GFLAGS in cmake --- cmake/check_packages.cmake | 8 ++----- cmake/util.cmake | 14 +++-------- .../build_and_install/build_from_source_en.md | 4 +--- .../cmake/compile_options.csv | 2 -- .../build_and_install/ubuntu_install_cn.rst | 2 -- paddle/api/CMakeLists.txt | 24 ++++++++----------- paddle/api/paddle_api_config.py.in | 2 -- paddle/api/paddle_ld_flags.py | 8 ++----- paddle/scripts/submit_local.sh.in | 2 -- 9 files changed, 18 insertions(+), 48 deletions(-) diff --git a/cmake/check_packages.cmake b/cmake/check_packages.cmake index 0688745541..4b7cadfc85 100644 --- a/cmake/check_packages.cmake +++ b/cmake/check_packages.cmake @@ -14,13 +14,9 @@ if(WITH_STYLE_CHECK) find_package(PythonInterp REQUIRED) endif() -if(WITH_GLOG) - find_package(Glog REQUIRED) -endif() +find_package(Glog REQUIRED) -if(WITH_GFLAGS) - find_package(Gflags REQUIRED) -endif() +find_package(Gflags REQUIRED) if(WITH_TESTING) find_package(GTest REQUIRED) diff --git a/cmake/util.cmake b/cmake/util.cmake index eb7db7ce2e..38366373c6 100644 --- a/cmake/util.cmake +++ b/cmake/util.cmake @@ -65,7 +65,7 @@ endmacro() # link_paddle_exe # add paddle library for a paddle executable, such as trainer, pserver. # -# It will handle WITH_PYTHON/WITH_GLOG etc. +# It will handle WITH_PYTHON etc. function(link_paddle_exe TARGET_NAME) if(WITH_RDMA) generate_rdma_links() @@ -108,6 +108,8 @@ function(link_paddle_exe TARGET_NAME) paddle_cuda ${METRIC_LIBS} ${PROTOBUF_LIBRARY} + ${LIBGLOG_LIBRARY} + ${GFLAGS_LIBRARIES} ${CMAKE_THREAD_LIBS_INIT} ${CBLAS_LIBS} ${ZLIB_LIBRARIES} @@ -125,16 +127,6 @@ function(link_paddle_exe TARGET_NAME) ${PYTHON_LIBRARIES}) endif() - if(WITH_GLOG) - target_link_libraries(${TARGET_NAME} - ${LIBGLOG_LIBRARY}) - endif() - - if(WITH_GFLAGS) - target_link_libraries(${TARGET_NAME} - ${GFLAGS_LIBRARIES}) - endif() - if(WITH_GPU) if(NOT WITH_DSO OR WITH_METRIC) target_link_libraries(${TARGET_NAME} diff --git a/doc/getstarted/build_and_install/build_from_source_en.md b/doc/getstarted/build_and_install/build_from_source_en.md index 5db871d59a..aaa07d49d3 100644 --- a/doc/getstarted/build_and_install/build_from_source_en.md +++ b/doc/getstarted/build_and_install/build_from_source_en.md @@ -49,10 +49,8 @@ PaddlePaddle supports some build options. To enable it, first you need to instal WITH_GPUCompile with GPU mode. WITH_DOUBLECompile with double precision floating-point, default: single precision. -WITH_GLOGCompile with glog. If not found, default: an internal log implementation. -WITH_GFLAGSCompile with gflags. If not found, default: an internal flag implementation. WITH_TESTINGCompile with gtest for PaddlePaddle's unit testing. -WITH_DOC Compile to generate PaddlePaddle's docs, default: disabled (OFF). +WITH_DOC Compile to generate PaddlePaddle's docs, default: disabled (OFF). WITH_SWIG_PYCompile with python predict API, default: disabled (OFF). WITH_STYLE_CHECKCompile with code style check, default: enabled (ON). 
diff --git a/doc/getstarted/build_and_install/cmake/compile_options.csv b/doc/getstarted/build_and_install/cmake/compile_options.csv index 171d8fba71..463b825470 100644 --- a/doc/getstarted/build_and_install/cmake/compile_options.csv +++ b/doc/getstarted/build_and_install/cmake/compile_options.csv @@ -6,8 +6,6 @@ WITH_AVX,是否编译含有AVX指令集的PaddlePaddle二进制文件,是 WITH_PYTHON,是否内嵌PYTHON解释器。方便今后的嵌入式移植工作。,是 WITH_STYLE_CHECK,是否编译时进行代码风格检查,是 WITH_RDMA,是否开启RDMA,否 -WITH_GLOG,是否开启GLOG。如果不开启,则会使用一个简化版的日志,同时方便今后的嵌入式移植工作。,取决于是否寻找到GLOG -WITH_GFLAGS,是否使用GFLAGS。如果不开启,则会使用一个简化版的命令行参数解析器,同时方便今后的嵌入式移植工作。,取决于是否寻找到GFLAGS WITH_TIMER,是否开启计时功能。如果开启会导致运行略慢,打印的日志变多,但是方便调试和测Benchmark,否 WITH_TESTING,是否开启单元测试,取决于是否寻找到GTEST WITH_DOC,是否编译中英文文档,否 diff --git a/doc/getstarted/build_and_install/ubuntu_install_cn.rst b/doc/getstarted/build_and_install/ubuntu_install_cn.rst index f923a1917c..d02d9c63bb 100644 --- a/doc/getstarted/build_and_install/ubuntu_install_cn.rst +++ b/doc/getstarted/build_and_install/ubuntu_install_cn.rst @@ -46,8 +46,6 @@ PaddlePaddle提供了ubuntu 14.04 deb安装包。 with_double: OFF with_python: ON with_rdma: OFF - with_glog: ON - with_gflags: ON with_metric_learning: with_timer: OFF with_predict_sdk: diff --git a/paddle/api/CMakeLists.txt b/paddle/api/CMakeLists.txt index 9b2d122a09..6ad1d79e59 100644 --- a/paddle/api/CMakeLists.txt +++ b/paddle/api/CMakeLists.txt @@ -17,22 +17,18 @@ add_library(paddle_api STATIC ${API_SOURCES}) add_dependencies(paddle_api gen_proto_cpp) +list(LENGTH "${GFLAGS_LIBRARIES}" GFLAGS_LIBRARIES_LENGTH) -if(WITH_GFLAGS) - list(LENGTH "${GFLAGS_LIBRARIES}" GFLAGS_LIBRARIES_LENGTH) - - if(${GFLAGS_LIBRARIES_LENGTH} EQUAL 0 AND TARGET "${GFLAGS_LIBRARIES}") - # Because gflags compiled by cmake, so it is imported by cmake target, - # not a real library path. Get the real library path here. - message(STATUS "GFLAGS Libraries is ${GFLAGS_LIBRARIES}") - get_target_property(GFLAGS_LOCATION ${GFLAGS_LIBRARIES} LOCATION) - message(STATUS "GFLAGS Target location is ${GFLAGS_LOCATION}") - else() - set(GFLAGS_LOCATION ${GFLAGS_LIBRARIES}) - endif() +if(${GFLAGS_LIBRARIES_LENGTH} EQUAL 0 AND TARGET "${GFLAGS_LIBRARIES}") +# Because gflags compiled by cmake, so it is imported by cmake target, +# not a real library path. Get the real library path here. 
+message(STATUS "GFLAGS Libraries is ${GFLAGS_LIBRARIES}") +get_target_property(GFLAGS_LOCATION ${GFLAGS_LIBRARIES} LOCATION) +message(STATUS "GFLAGS Target location is ${GFLAGS_LOCATION}") +else() +set(GFLAGS_LOCATION ${GFLAGS_LIBRARIES}) endif() - configure_file( paddle_api_config.py.in ${PROJ_ROOT}/paddle/api/paddle_api_config.py @@ -57,7 +53,7 @@ add_custom_command(OUTPUT ${PROJ_ROOT}/paddle/dist/.timestamp paddle_trainer paddle_api paddle_cuda - ${PY_PADDLE_PYTHON_FILES} + ${PY_PADDLE_PYTHON_FILES} ) install(DIRECTORY ${PROJ_ROOT}/paddle/dist/ diff --git a/paddle/api/paddle_api_config.py.in b/paddle/api/paddle_api_config.py.in index a2352250c3..23542b952b 100644 --- a/paddle/api/paddle_api_config.py.in +++ b/paddle/api/paddle_api_config.py.in @@ -8,9 +8,7 @@ CMAKE_DL_LIBS="@CMAKE_DL_LIBS@" WITH_PYTHON="@WITH_PYTHON@" PYTHON_LIBRARIES="@PYTHON_LIBRARIES@" -WITH_GLOG="@WITH_GLOG@" LIBGLOG_LIBRARY="@LIBGLOG_LIBRARY@" -WITH_GFLAGS="@WITH_GFLAGS@" GFLAGS_LIBRARIES="@GFLAGS_LIBRARIES@" GFLAGS_LOCATION="@GFLAGS_LOCATION@" CBLAS_LIBRARIES="@CBLAS_LIBS@" diff --git a/paddle/api/paddle_ld_flags.py b/paddle/api/paddle_ld_flags.py index 85cc54700f..51d7dfee58 100644 --- a/paddle/api/paddle_ld_flags.py +++ b/paddle/api/paddle_ld_flags.py @@ -47,10 +47,8 @@ try: self.with_python = PaddleLDFlag.cmake_bool(WITH_PYTHON) self.python_libs = PYTHON_LIBRARIES - self.with_glog = PaddleLDFlag.cmake_bool(WITH_GLOG) self.glog_libs = LIBGLOG_LIBRARY - self.with_gflags = PaddleLDFlag.cmake_bool(WITH_GFLAGS) self.with_coverage = PaddleLDFlag.cmake_bool(WITH_COVERALLS) self.gflags_libs = GFLAGS_LIBRARIES self.gflags_location = GFLAGS_LOCATION @@ -88,6 +86,8 @@ try: "-lpaddle_cuda", "-lpaddle_api", self.normalize_flag(self.protolib), + self.normalize_flag(self.glog_libs), + self.normalize_flag(self.gflags_libs), self.normalize_flag(self.zlib), self.normalize_flag(self.thread), self.normalize_flag(self.dl_libs), @@ -96,10 +96,6 @@ try: if self.with_python: libs.append(self.normalize_flag(self.python_libs)) - if self.with_glog: - libs.append(self.normalize_flag(self.glog_libs)) - if self.with_gflags: - libs.append(self.normalize_flag(self.gflags_libs)) if self.with_gpu: libs.append(self.normalize_flag(self.curt)) if self.with_coverage: diff --git a/paddle/scripts/submit_local.sh.in b/paddle/scripts/submit_local.sh.in index ace2c0dee9..283fd34a6d 100644 --- a/paddle/scripts/submit_local.sh.in +++ b/paddle/scripts/submit_local.sh.in @@ -21,8 +21,6 @@ function version(){ echo " with_double: @WITH_DOUBLE@" echo " with_python: @WITH_PYTHON@" echo " with_rdma: @WITH_RDMA@" - echo " with_glog: @WITH_GLOG@" - echo " with_gflags: @WITH_GFLAGS@" echo " with_metric_learning: @WITH_METRIC@" echo " with_timer: @WITH_TIMER@" echo " with_predict_sdk: @WITH_PREDICT_SDK@" From a10622d7dd3e6a11d66c4440c63570ac97dcd0dc Mon Sep 17 00:00:00 2001 From: zhangjinchao01 Date: Wed, 14 Dec 2016 22:15:58 +0800 Subject: [PATCH 143/265] fix index bug --- demo/semantic_role_labeling/data/extract_dict_feature.py | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/demo/semantic_role_labeling/data/extract_dict_feature.py b/demo/semantic_role_labeling/data/extract_dict_feature.py index a02a49a86e..da44111976 100644 --- a/demo/semantic_role_labeling/data/extract_dict_feature.py +++ b/demo/semantic_role_labeling/data/extract_dict_feature.py @@ -43,13 +43,13 @@ def extract_dict_features(pair_file, feature_file): mark[verb_index] = 1 ctx_0 = sentence_list[verb_index] - if verb_index < len(labels_list) - 2: + if verb_index < 
len(labels_list) - 1: mark[verb_index + 1] = 1 ctx_p1 = sentence_list[verb_index + 1] else: ctx_p1 = 'eos' - if verb_index < len(labels_list) - 3: + if verb_index < len(labels_list) - 2: mark[verb_index + 2] = 1 ctx_p2 = sentence_list[verb_index + 2] else: From 9136d2e64e0c09336c633c9ed8d9f0e5e1250eb2 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E4=B8=80=E4=B8=AA=E9=99=8C=E7=94=9F=E4=BA=BA?= <546777653@qq.com> Date: Thu, 15 Dec 2016 00:00:06 +0800 Subject: [PATCH 144/265] Create index_cn.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 添加汉化文档 --- doc/tutorials/text_generation/index_cn.md | 339 ++++++++++++++++++++++ 1 file changed, 339 insertions(+) create mode 100644 doc/tutorials/text_generation/index_cn.md diff --git a/doc/tutorials/text_generation/index_cn.md b/doc/tutorials/text_generation/index_cn.md new file mode 100644 index 0000000000..5cfdf2304c --- /dev/null +++ b/doc/tutorials/text_generation/index_cn.md @@ -0,0 +1,339 @@ +# 文本生成教程 # + +在语言生成领域中,“序列到序列”(sequence to sequence)的方法已被证明是一种强大的模型。它可以被应用于进行机器翻译(machine translation)、请求改写(query rewriting)、图像字幕(image captioning)等等。 + +本篇教程将会指导你通过训练一个“序列到序列”的神经网络机器翻译(NMT)模型来将法语翻译成英语。 + +我们遵循 [Neural Machine Translation by Jointly Learning to Align and Translate](http://arxiv.org/abs/1409.0473) 这篇文章,其中详细说明了模型架构,以及在WMT-14数据集上得到良好表现的训练过程。本篇教程在PaddlePaddle中重现了这一良好的训练结果。 + +我们感谢@caoying的pull request,其中定义了模型架构和solver配置。 + +## 数据准备 ## +### 下载与解压缩 ### +从该链接 [http://www-lium.univ-lemans.fr/~schwenk/cslm\_joint\_paper/](http://www-lium.univ-lemans.fr/~schwenk/cslm_joint_paper/) 下载WMT-14数据集,然后解压,并将Develop和Test数据分别放入不同的文件夹。 + +- **Train data**: [bitexts (选择过后的)](http://www-lium.univ-lemans.fr/~schwenk/cslm_joint_paper/data/bitexts.tgz) +- **Develop and Test data**: [dev 与 test 数据](http://www-lium.univ-lemans.fr/~schwenk/cslm_joint_paper/data/dev+test.tgz) + +在Linux下,只需要简单地运行以下命令。否则你需要自己下载、解压、拆分到不同文件夹、并且分别重命名文件后缀。 + +```bash +cd demo/seqToseq/data +./wmt14_data.sh +``` + +我们会发现数据集 `wmt14` 中包含如下表所示的3个文件夹。 + ++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| folder name | French-English parallel corpora file | number of total file | size  |
| ----------- | ------------------------------------ | -------------------- | ----- |
| train_data  | ccb2_pc30.src, ccb2_pc30.trg, etc    | 12                   | 3.55G |
| test_data   | ntst1213.src, ntst1213.trg           | 2                    | 1636k |
| gen_data    | ntst14.src, ntst14.trg               | 2                    | 864k  |
+
+ +- 每个文件夹都包含法语到英语的平行语料库 +- **XXX.src** 是原始法语文件;**XXX.trg** 是目标英语文件 +- **XXX.src** 和 **XXX.trg** 的行数应该一致 +- 每行都是一个法语或者英语的句子 +- **XXX.src** 和 **XXX.trg** 中任意第i行的句子之间都有着一一对应的关系 + +### 用户自定义数据集 ### + +如果你想进行诸如语义转述(Paraphrasing)等其他“序列到序列”的任务,你只需要按照如下方式组织数据,并将它们放在`demo/seqToseq/data`目录下: + + dataset + train + file1.src file1.trg + file2.src file2.trg + ...... + test + file1.src file1.trg + file2.src file2.trg + ...... + gen + file1.src file1.trg + file2.src file2.trg + ...... + +- 一级目录:数据集文件夹名称 +- 二级目录:train、test和gen这三个文件夹是固定的 +- 三级目录:源语言到目标语言的平行语料库文件 + - **XXX.src** 是源语言的文件,**XXX.trg** 时目标语言的文件 + - 文件中的每行都必须是一个句子 + - **XXX.src** 和 **XXX.trg** 中任意第i行的句子之间都必须有着一一对应的关系 + +## 数据预处理 ## +### 预处理工作流程 ### +- 将每个源语言到目标语言的平行语料库文件合并为一个文件: + - 合并每个 **XXX.src** 和 **XXX.trg** 文件为 **XXX** + - **XXX** 中的第i行 = **XXX.src** 中的第i行 + '\t' + **XXX.trg**中的第i行 +- 创建训练数据的“源字典”和“目标字典”,每个字典都有DICTSIZE个单词: + - 频率最高的单词(DICTSIZE - 3 个) + - 3个特殊符号 + - ``:序列的开始 + - ``:序列的结束 + - ``:未包含在字典中的单词 + +### 预处理命令和结果 +对数据集进行预处理的基本命令是: + +```python +cd demo/seqToseq/ +python preprocess.py -i INPUT [-d DICTSIZE] [-m] +``` + +- `-i INPUT`:输入的原始数据集路径 +- `-d DICTSIZE`:指定的字典单词数,如果没有设置,字典会包含输入数据集中的所有单词 +- `-m --mergeDict`:合并 “源字典”和“目标字典”,使得两个字典有相同的上下文 + +你将会看到如下消息: + + concat parallel corpora for dataset + build source dictionary for train data + build target dictionary for train data + dictionary size is XXX + +然后你只需要运行以下命令: + +```python +python preprocess.py -i data/wmt14 -d 30000 +``` + +这将花费数分钟的时间,并且将预处理好的数据集存放在`demo/seqToseq/data/pre-wmt14`目录下。字典具有以下结构。 + + train test gen train.list test.list gen.list src.dict trg.dict# Text generation Tutorial # + +- **train, test, gen**:分别包含了法语到英语的平行语料库的训练数据、测试数据和生成数据。文件夹中的每个文件的每一行包含两部分,首先是法语序列,然后是对应的英语序列。 +- **train.list, test.list, gen.list**:分别为train,test,gen文件夹中的文件列表 +- **src.dict, trg.dict**:源(法语)/目标(英语)字典,每个字典包含总共30000个单词:29997个最高频单词和3个特殊符号 + +## 模型训练 ## +### 简介### + +神经网络机器翻译(NMT)旨在建立一个可以被协同调至最优翻译效果的单神经元网络。近期提出的NMT模型通常都属于编解码模型(encoder–decoder models)的一种。编解码模型将一个源语句编码为一个定长的向量,然后解码器通过这个向量生成一个目标语句。 + +在这个任务中,我们使用了一个编解码模型的扩展,它联合地学习了排列与翻译。每当模型在翻译过程中生成了一个单词,它就会在源语句中搜索出最相关信息的位置的集合。解码器根据上下文向量预测出一个目标单词,这个向量与源中搜索出的位置和所有之前生成的目标单词有关。如想了解更多详细的解释,可以参考 [Neural Machine Translation by Jointly Learning to Align and Translate](http://arxiv.org/abs/1409.0473)。 + +这个模型对于编解码模型来说,最不同的特色是它并没有将输入语句编码为一个单独的定长向量。相反,它将输入语句编码为向量的序列,其中每个向量对应输入语句中的一个元素。然后在解码被翻译的语句时,会自适应地从这些向量中选择一个子集出来。这使得NMT模型得以解放出来,不必再将任意长度源语句中的所有信息压缩至一个定长的向量中。该模型在长语句翻译的场景下效果提升更加明显,在任意长度语句翻译的场景下都可以观察到其效果的提升。 +
![](./encoder-decoder-attention-model.png)
+
Figure 1. Encoder-Decoder-Attention-Model
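下面用公式简要示意上述的注意力机制(摘自前文引用的论文 *Neural Machine Translation by Jointly Learning to Align and Translate*;式中的 h_j、s_i、c_i 等符号沿用该论文的记号,并非本教程配置文件中的变量名,仅供理解参考):

```latex
% 注意力机制公式示意(Bahdanau et al., 2014)
% h_j:编码器在源语句第 j 个位置的注释向量;s_i:解码器第 i 步的状态;y_i:第 i 个目标单词
\begin{align}
e_{ij}      &= a(s_{i-1},\, h_j)                          \\ % 对齐打分
\alpha_{ij} &= \frac{\exp(e_{ij})}{\sum_{k}\exp(e_{ik})}  \\ % 归一化的注意力权重
c_i         &= \sum_{j} \alpha_{ij}\, h_j                 \\ % 上下文向量:编码向量的加权和
s_i         &= f(s_{i-1},\, y_{i-1},\, c_i)                  % 新的解码器状态,用于预测 y_i
\end{align}
```

也就是说,解码器在生成每个目标单词时,都会根据当前状态对源语句的所有位置重新加权,这正是上文所说“自适应地从向量序列中选择一个子集”的含义。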
+ +### 使用PaddlePaddle训练模型 ### +我们在训练之前需要常见一个模型配置文件,这里是一个例子`demo/seqToseq/translation/train.conf`。前三行import了定义network,job_mode和attention_mode的python函数。 + +```python +from seqToseq_net import * +is_generating = False + +### Data Definiation +train_conf = seq_to_seq_data(data_dir = "./data/pre-wmt14", + is_generating = is_generating) + +### Algorithm Configuration +settings( + learning_method = AdamOptimizer(), + batch_size = 50, + learning_rate = 5e-4) + +### Network Architecture +gru_encoder_decoder(train_conf, is_generating) +``` + +1. **Data Definiation**:在示例中我们定义了一个序列到序列的训练和测试数据。它返回train_conf作为配置,其输入参数如下: + - data_dir:训练数据和测试数据的目录 + - is_generating:这个配置是否用来生成,这里设置为False +2. **Algorithm Configuration**:在示例中我们使用SGD训练算法(默认),和ADAM学习方法,指定batch_size为50,learning_rate为5e-4 +3. **Network Architecture**:在示例中我们使用attention版本的GRU编解码网络。它包括了一个双向的GRU作为编码器和解码器,它模拟了解码翻译过程中在源语句中的搜索。 + +### 训练模型的命令与结果### +写完模型配置之后,我们可以通过以下命令来训练模型: + +```bash +cd demo/seqToseq/translation +./train.sh +``` + +`train.sh` 的内容如下所示: + +```bash +paddle train \ +--config='translation/train.conf' \ +--save_dir='translation/model' \ +--use_gpu=false \ +--num_passes=16 \ +--show_parameter_stats_period=100 \ +--trainer_count=4 \ +--log_period=10 \ +--dot_period=5 \ +2>&1 | tee 'translation/train.log' +``` +- config: 设置神经网络的配置文件 +- save_dir: 设置保存模型的输出路径 +- use_gpu: 是否使用GPU训练,这里设置为使用CPU +- num_passes: 设置passes的数量。paddle中的一条pass表示训练数据集中所有的样本一次 +- show_parameter_stats_period: 这里每隔100个batch显示一次参数统计信息 +- trainer_count: 设置CPU线程数或者GPU设备数 +- log_period: 这里每隔10个batch打印一次日志 +- dot_period: 这里每个5个batch打印一个点"." + +训练的损失函数默认每隔10个batch打印一次,你将会看到如下消息: + + I0719 19:16:45.952062 15563 TrainerInternal.cpp:160] Batch=10 samples=500 AvgCost=198.475 CurrentCost=198.475 Eval: classification_error_evaluator=0.737155 CurrentEval: classification_error_evaluator=0.737155 + I0719 19:17:56.707319 15563 TrainerInternal.cpp:160] Batch=20 samples=1000 AvgCost=157.479 CurrentCost=116.483 Eval: classification_error_evaluator=0.698392 CurrentEval: classification_error_evaluator=0.659065 + ..... 
+- AvgCost:从第0个batch到当前batch的平均花销 +- CurrentCost::当前batch的花销 +- classification\_error\_evaluator(Eval):从第0个评估到当前评估中,每个单词的失败预测率 +- classification\_error\_evaluator(CurrentEval):当前评估中,每个单词的失败预测率 + +当classification\_error\_evaluator的值低于0.35时,模型就训练成功了。 + +## 文本生成 ## +### 简介### + +一般而言,NMT模型受制于源语句的编码,并且通过给出当前目标单词来预测下一个目标单词。在训练过程中,当前单词在相比之下总是被当作真值(ground truth)。在生成过程中,当前单词是解码器最后一步的输出,这来自于PaddlePaddle的内存中。 + +而且,我们使用集束搜索(Beam Search)来生成序列。集束搜索使用广度优先搜索来构建搜索树。对于树的每一层,生成当前层的所有后继状态,并将它们按照启发成本(heuristic cost)升序排列。但是这种方法在每层只保存预设数量的最优状态(这个数量称为beam size)。 + +### 预训练的模型 ### +我们在拥有50个节点的集群中训练模型,每个节点有两个6核CPU。我们在5天里训练了16条pass,其中每条pass花费了7个小时。model_dir中有16个子目录,每个里面都包含202MB的全部的模型参数。然后我们发现pass-00012的模型有着最高的BLEU值27.77(参考文献[BLEU: a Method for Automatic Evaluation of Machine Translation](http://www.aclweb.org/anthology/P02-1040.pdf))。要下载解压这个模型,只需在linux下运行如下命令: + +```bash +cd demo/seqToseq/data +./wmt14_model.sh +``` + +### 使用PaddlePaddle生成模型 ### +在翻译法语句子之前,我们需要创建模型配置文件。这里是一个例子`demo/seqToseq/translation/gen.conf`。前三行import了定义network,job_mode和attention_mode的python函数。 + +```python +from seqToseq_net import * +is_generating = True + +################## Data Definiation ##################### +gen_conf = seq_to_seq_data(data_dir = "./data/pre-wmt14", + is_generating = is_generating, + gen_result = "./translation/gen_result") + +############## Algorithm Configuration ################## +settings( + learning_method = AdamOptimizer(), + batch_size = 1, + learning_rate = 0) + +################# Network configure ##################### +gru_encoder_decoder(gen_conf, is_generating) +``` + +1. **Data Definiation**:在示例中我们定义了一个序列到序列的生成数据。它返回gen_conf作为配置,其输入参数如下: + - data_dir:生成数据的目录 + - is_generating:这个配置是否用来生成,这里设置为False + - gen_result:保存生成结果的文件 +2. **Algorithm Configuration**:在生成过程中我们使用SGD训练算法,并指定batch_size为1(每次生成1个序列),learning_rate为0 +3. **Network Architecture**:本质上与训练模型一样 + +### 生成模型的命令与结果 ### +写完模型配置之后,我们可以通过以下命令来进行从法语到英语的文本翻译: + +```bash +cd demo/seqToseq/translation +./gen.sh +``` + + `gen.sh` 的内容如下所示。与训练模型不同的是,这里有一些不同的参数需要指定: + +```bash +paddle train \ +--job=test \ +--config='translation/gen.conf' \ +--save_dir='data/wmt14_model' \ +--use_gpu=true \ +--num_passes=13 \ +--test_pass=12 \ +--trainer_count=1 \ +2>&1 | tee 'translation/gen.log' +``` +- job:设置任务的模式为测试 +- save_dir:存储模型的路径 +- num_passes and test_pass:从test_pass到(num_passes - 1)加载模型参数,这里只加载 `data/wmt14_model/pass-00012` + +你将会看到这样的消息: + + I0706 14:48:31.178915 31441 GradientMachine.cpp:143] Loading parameters from data/wmt14_model/pass-00012 + I0706 14:48:40.012039 31441 Tester.cpp:125] Batch=100 samples=100 AvgCost=0 + I0706 14:48:48.898632 31441 Tester.cpp:125] Batch=200 samples=200 AvgCost=0 + ... + +然后在`demo/seqToseq/translation/gen_result`中的生成结果如下所示: + + 0 + 0 -11.1314 The about the width of the seats while large controls are at stake + 1 -11.1519 The on the width of the seats while large controls are at stake + 2 -11.5988 The about the width of the seats while large controls are at stake . + + 1 + 0 -24.4149 The dispute is between the major aircraft manufacturers about the width of the tourist seats on the flights , paving the way for a confrontation during the month of the Dubai . + 1 -26.9524 The dispute is between the major aircraft manufacturers about the width of the tourist seats on the flights , paving the way for a confrontation during the month of Dubai ' s . 
+ 2 -27.9574 The dispute is between the major aircraft manufacturers about the width of the tourist seats on the flights , paving the way for a confrontation during the month of Dubai ' s Dubai . + ... + +- 这是集束搜索的结果,其中beam size是3 +- 第一行的“0”和第6行的“1”表示生成数据的序列id +- 其他六行列出了集束搜索的结果 + - 第二列是集束搜索的得分(从大到小) + - 第三列是生成的英语序列 +- 有两个特殊标识: + - ``:序列的结尾 + - ``:不包含在字典中的单词 + +### BLEU评估 ### +对机器翻译的人工评估工作很广泛但也很昂贵。一篇论文 [BLEU: a Method for Automatic Evaluation of Machine Translation](http://www.aclweb.org/anthology/P02-1040.pdf) 展示了一种方法,当需要快速或者频繁的评估时,使用自动的替补来替代经验丰富的人工评判。[Moses](http://www.statmt.org/moses/) 是一个统计学的机器翻译系统,我们使用其中的 [multi-bleu.perl](https://github.com/moses-smt/mosesdecoder/blob/master/scripts/generic/multi-bleu.perl) 来做BLEU评估。运行以下命令来下载这个脚本: + +```bash +cd demo/seqToseq/translation +./moses_bleu.sh +``` + +由于标准的翻译结果已经下载到这里`data/wmt14/gen/ntst14.trg`,我们可以运行以下命令来做BLEU评估。 + +```bash +cd demo/seqToseq/translation +./eval_bleu.sh FILE BEAMSIZE +``` + +- FILE:生成的结果文件 +- BEAMSIZE:扩展集束搜索的广度 From ff4e046378244eacb025c16eb10dbe20b86037c3 Mon Sep 17 00:00:00 2001 From: wangyang59 Date: Fri, 2 Dec 2016 11:20:09 -0800 Subject: [PATCH 145/265] improve demo/mnist dataProvider speed --- demo/mnist/mnist_provider.py | 21 +++++++++++---------- 1 file changed, 11 insertions(+), 10 deletions(-) diff --git a/demo/mnist/mnist_provider.py b/demo/mnist/mnist_provider.py index 6df4676da3..c435e1681d 100644 --- a/demo/mnist/mnist_provider.py +++ b/demo/mnist/mnist_provider.py @@ -1,10 +1,11 @@ from paddle.trainer.PyDataProvider2 import * - +import numpy # Define a py data provider @provider( input_types={'pixel': dense_vector(28 * 28), - 'label': integer_value(10)}) + 'label': integer_value(10)}, + cache=CacheType.CACHE_PASS_IN_MEM) def process(settings, filename): # settings is not used currently. imgf = filename + "-images-idx3-ubyte" labelf = filename + "-labels-idx1-ubyte" @@ -19,13 +20,13 @@ def process(settings, filename): # settings is not used currently. n = 60000 else: n = 10000 - - for i in range(n): - label = ord(l.read(1)) - pixels = [] - for j in range(28 * 28): - pixels.append(float(ord(f.read(1))) / 255.0) - yield {"pixel": pixels, 'label': label} - + + images = numpy.fromfile(f, 'ubyte', count=n*28*28).reshape((n, 28*28)).astype('float32') + images = images / 255.0 * 2.0 - 1.0 + labels = numpy.fromfile(l, 'ubyte', count=n).astype("int") + + for i in xrange(n): + yield {"pixel": images[i, :], 'label': labels[i]} + f.close() l.close() From bcd280d6bd8d14d8f7e94b5b39458280f24a82f5 Mon Sep 17 00:00:00 2001 From: xuwei06 Date: Wed, 14 Dec 2016 13:36:18 -0800 Subject: [PATCH 146/265] Check the input size for convolution. Make sure the input size is correct. 
Change-Id: I2559075e32a098c5ed51942ff8351d964a6d6c75 --- paddle/gserver/layers/ConvProjection.cpp | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/paddle/gserver/layers/ConvProjection.cpp b/paddle/gserver/layers/ConvProjection.cpp index aa634b3287..57d2d83590 100644 --- a/paddle/gserver/layers/ConvProjection.cpp +++ b/paddle/gserver/layers/ConvProjection.cpp @@ -130,6 +130,12 @@ void ConvProjection::reshapeTensorDesc(int batchSize) { void ConvProjection::reshape(int batchSize) { size_t width = calOutputSize(); CHECK_EQ(width, out_->value->getWidth()); + CHECK_EQ(channels_ * imageH_ * imageW_, in_->value->getWidth()) + << "Wrong input size for convolution" + << " channels=" << channels_ + << " imageH=" << imageH_ + << " imageW=" << imageW_ + << " inputSize=" << in_->value->getWidth(); isSelectAlgo_ = (batchSize == batchNum_); batchNum_ = batchSize; From 828303bd5f6b580b0e1adef845bb2c76feb55c4f Mon Sep 17 00:00:00 2001 From: wangyang59 Date: Wed, 14 Dec 2016 17:59:07 -0800 Subject: [PATCH 147/265] after clang-format --- demo/mnist/mnist_provider.py | 12 +++++++----- 1 file changed, 7 insertions(+), 5 deletions(-) diff --git a/demo/mnist/mnist_provider.py b/demo/mnist/mnist_provider.py index c435e1681d..4635833d36 100644 --- a/demo/mnist/mnist_provider.py +++ b/demo/mnist/mnist_provider.py @@ -1,6 +1,7 @@ from paddle.trainer.PyDataProvider2 import * import numpy + # Define a py data provider @provider( input_types={'pixel': dense_vector(28 * 28), @@ -20,13 +21,14 @@ def process(settings, filename): # settings is not used currently. n = 60000 else: n = 10000 - - images = numpy.fromfile(f, 'ubyte', count=n*28*28).reshape((n, 28*28)).astype('float32') - images = images / 255.0 * 2.0 - 1.0 + + images = numpy.fromfile( + f, 'ubyte', count=n * 28 * 28).reshape((n, 28 * 28)).astype('float32') + images = images / 255.0 * 2.0 - 1.0 labels = numpy.fromfile(l, 'ubyte', count=n).astype("int") - + for i in xrange(n): yield {"pixel": images[i, :], 'label': labels[i]} - + f.close() l.close() From ce1d98e083017afadac9fcd9f94f5c59aceaf6c0 Mon Sep 17 00:00:00 2001 From: hedaoyuan Date: Thu, 15 Dec 2016 10:31:45 +0800 Subject: [PATCH 148/265] Add a Tensor to use as a Function argument --- paddle/math/Function.h | 12 +++++++- paddle/math/cross_map_normal_op.cpp | 37 +++++++++++------------- paddle/math/tests/test_matrixCompare.cpp | 9 ++++-- 3 files changed, 35 insertions(+), 23 deletions(-) diff --git a/paddle/math/Function.h b/paddle/math/Function.h index b41ba2a13d..539759782b 100644 --- a/paddle/math/Function.h +++ b/paddle/math/Function.h @@ -40,7 +40,17 @@ struct MatrixT { using type = GpuMatrix; }; -typedef std::vector Arguments; +typedef std::vector Dims; + +class Tensor { +public: + Tensor(real* data, const Dims& dim) : buf_(data), dims_(dim) {} + + real* buf_; + Dims dims_; +}; + +typedef std::vector Arguments; class FuncConfig { public: diff --git a/paddle/math/cross_map_normal_op.cpp b/paddle/math/cross_map_normal_op.cpp index 0b72732063..d55bd78c62 100644 --- a/paddle/math/cross_map_normal_op.cpp +++ b/paddle/math/cross_map_normal_op.cpp @@ -144,26 +144,23 @@ public: CHECK_EQ(2, outputs.size()); CHECK_EQ(0, inouts.size()); - auto input = dynamic_cast::type&>(inputs[0]); - auto output = - dynamic_cast::type&>(outputs[0]); - auto denom = - dynamic_cast::type&>(outputs[1]); - - CHECK(input.isContiguous()); - CHECK(output.isContiguous()); - CHECK(denom.isContiguous()); - CHECK_EQ(output.getHeight(), input.getHeight()); - CHECK_EQ(output.getWidth(), input.getWidth()); - 
CHECK_EQ(output.getHeight(), denom.getHeight()); - CHECK_EQ(output.getWidth(), denom.getWidth()); - - // CrossMapNormal cross; - // need: - // size_t channels, - // size_t imgSizeH, - // size_t imgSizeW, - // cross(output, denom, input, ); + CHECK_EQ(inputs[0].dims_.size(), 4); + for (size_t i = 0; i < inputs[0].dims_.size(); i++) { + CHECK_EQ(inputs[0].dims_[i], outputs[0].dims_[i]); + CHECK_EQ(inputs[0].dims_[i], outputs[1].dims_[i]); + } + + size_t samples = inputs[0].dims_[0]; + size_t channels = inputs[0].dims_[1]; + size_t height = inputs[0].dims_[2]; + size_t width = inputs[0].dims_[3]; + size_t imageSize = channels * height * width; + CpuMatrix input(inputs[0].buf_, samples, imageSize); + CpuMatrix output(outputs[0].buf_, samples, imageSize); + CpuMatrix denom(outputs[1].buf_, samples, imageSize); + + CrossMapNormal cross; + cross(output, denom, input, channels, height, width, size_, scale_, pow_); } private: diff --git a/paddle/math/tests/test_matrixCompare.cpp b/paddle/math/tests/test_matrixCompare.cpp index 0b75785528..cd34ea18a7 100644 --- a/paddle/math/tests/test_matrixCompare.cpp +++ b/paddle/math/tests/test_matrixCompare.cpp @@ -1288,12 +1288,17 @@ void testCrossMapNormalFwd( FunctionBase* cpu = FunctionBase::funcRegistrar_.createByType(FUNC_NAME(CrossMapNormal, CPU)); cpu->init(config); - // cpu->calc(); + Dims dims{ + (size_t)numSamples, (size_t)channels, (size_t)imgSizeH, (size_t)imgSizeW}; + cpu->calc({Tensor(inputs.getData(), dims)}, + {Tensor(outputs.getData(), dims), Tensor(denoms.getData(), dims)}, + {}); +#if 0 CrossMapNormal cpuCross; cpuCross( outputs, denoms, inputs, channels, imgSizeH, imgSizeW, sizeX, scale, pow); - +#endif CrossMapNormal gpuCross; gpuCross(outputsGpu, denomsGpu, From 8407f4385ffd21219b437b2289bd99422d54e085 Mon Sep 17 00:00:00 2001 From: jiangfeng <103531948@qq.com> Date: Thu, 15 Dec 2016 10:47:54 +0800 Subject: [PATCH 149/265] add doc/index_cn.md --- doc/about/index_cn.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc/about/index_cn.md b/doc/about/index_cn.md index 393ba07b58..3bf030004d 100644 --- a/doc/about/index_cn.md +++ b/doc/about/index_cn.md @@ -8,4 +8,4 @@ PaddlePaddle目前已经开放源码, 但是远未完善,我们希望能在这 致谢 -------- -在此,特别感谢PaddlePaddle的所有贡献者。 +在此,特别感谢PaddlePaddle的[所有贡献者](https://github.com/PaddlePaddle/Paddle/graphs/contributors)。 From 4ebb3eb759903bf95968b578eec99b1364d3bd10 Mon Sep 17 00:00:00 2001 From: hedaoyuan Date: Thu, 15 Dec 2016 11:55:35 +0800 Subject: [PATCH 150/265] imporve Function --- paddle/gserver/layers/NormProjectionLayer.cpp | 60 +++++++++++---- paddle/gserver/layers/NormProjectionLayer.h | 4 + paddle/math/Function.cpp | 6 +- paddle/math/Function.h | 14 ++-- paddle/math/cross_map_normal_op.cpp | 75 ++++++++++--------- paddle/math/cross_map_normal_op.h | 13 ++++ paddle/math/cross_map_normal_op_gpu.cu | 46 ++++-------- paddle/math/tests/test_matrixCompare.cpp | 21 +++++- 8 files changed, 147 insertions(+), 92 deletions(-) diff --git a/paddle/gserver/layers/NormProjectionLayer.cpp b/paddle/gserver/layers/NormProjectionLayer.cpp index ea301292e0..5dda7ee205 100644 --- a/paddle/gserver/layers/NormProjectionLayer.cpp +++ b/paddle/gserver/layers/NormProjectionLayer.cpp @@ -14,6 +14,7 @@ limitations under the License. 
*/ #include "paddle/utils/Logging.h" #include "paddle/utils/Stat.h" +#include "paddle/math/cross_map_normal_op.h" #include "NormProjectionLayer.h" namespace paddle { @@ -45,6 +46,16 @@ bool CMRProjectionNormLayer::init(const LayerMap& layerMap, /* the size of inputs for norm-layer is 1 */ CHECK_EQ(config_.inputs_size(), 1); + if (useGpu_) { + normal_ = FunctionBase::funcRegistrar_.createByType( + FUNC_NAME(CrossMapNormal, GPU)); + } else { + normal_ = FunctionBase::funcRegistrar_.createByType( + FUNC_NAME(CrossMapNormal, CPU)); + } + normal_->init( + FuncConfig().set("size", size_).set("scale", scale_).set("pow", pow_)); + return true; } @@ -62,10 +73,14 @@ void CMRProjectionNormLayer::forward(PassType passType) { Matrix::resizeOrCreate(denoms_, batchSize, size, /* trans */ false, useGpu_); - denoms_->zeroMem(); - - outV->crossMapNormalFwd( - *input, imgSizeH_, imgSizeW_, *denoms_, channels_, size_, scale_, pow_); + Dims dims{(size_t)batchSize, + (size_t)channels_, + (size_t)imgSizeH_, + (size_t)imgSizeW_}; + normal_->calc( + {Tensor(input->getData(), dims)}, + {Tensor(outV->getData(), dims), Tensor(denoms_->getData(), dims)}, + {}); } void CMRProjectionNormLayer::backward(const UpdateCallback& callback) { @@ -80,15 +95,32 @@ void CMRProjectionNormLayer::backward(const UpdateCallback& callback) { MatrixPtr localOutV = getOutputValue(); MatrixPtr preOutV = inputLayers_[0]->getOutputValue(); - preOutGrad->crossMapNormalBwd(*localGrad, - *denoms_, - *preOutV, - *localOutV, - channels_, - imgSizeH_, - imgSizeW_, - size_, - scale_, - pow_); + if (useGpu_) { + CrossMapNormalGrad crossGrad; + crossGrad(dynamic_cast(*preOutGrad), + dynamic_cast(*preOutV), + dynamic_cast(*localGrad), + dynamic_cast(*localOutV), + dynamic_cast(*denoms_), + channels_, + imgSizeH_, + imgSizeW_, + size_, + scale_, + pow_); + } else { + CrossMapNormalGrad crossGrad; + crossGrad(dynamic_cast(*preOutGrad), + dynamic_cast(*preOutV), + dynamic_cast(*localGrad), + dynamic_cast(*localOutV), + dynamic_cast(*denoms_), + channels_, + imgSizeH_, + imgSizeW_, + size_, + scale_, + pow_); + } } } // namespace paddle diff --git a/paddle/gserver/layers/NormProjectionLayer.h b/paddle/gserver/layers/NormProjectionLayer.h index 0db8e2551f..ea44669be3 100644 --- a/paddle/gserver/layers/NormProjectionLayer.h +++ b/paddle/gserver/layers/NormProjectionLayer.h @@ -16,6 +16,7 @@ limitations under the License. 
*/ #include "NormLayer.h" #include "paddle/math/Matrix.h" +#include "paddle/math/Function.h" #include namespace paddle { @@ -39,5 +40,8 @@ public: bool init(const LayerMap& layerMap, const ParameterMap& parameterMap); void forward(PassType passType); void backward(const UpdateCallback& callback = nullptr); + +protected: + FunctionBase* normal_; }; } // namespace paddle diff --git a/paddle/math/Function.cpp b/paddle/math/Function.cpp index 21d2719172..02880e5ea1 100644 --- a/paddle/math/Function.cpp +++ b/paddle/math/Function.cpp @@ -31,15 +31,17 @@ real FuncConfig::get(const std::string& key) const { } template <> -void FuncConfig::set(const std::string& key, size_t v) { +FuncConfig& FuncConfig::set(const std::string& key, size_t v) { CHECK(valueMap_.count(key) == 0) << "Duplicated value: " << key; valueMap_[key].s = v; + return *this; } template <> -void FuncConfig::set(const std::string& key, real v) { +FuncConfig& FuncConfig::set(const std::string& key, real v) { CHECK(valueMap_.count(key) == 0) << "Duplicated value: " << key; valueMap_[key].r = v; + return *this; } ClassRegistrar FunctionBase::funcRegistrar_; diff --git a/paddle/math/Function.h b/paddle/math/Function.h index 539759782b..f8fab972a6 100644 --- a/paddle/math/Function.h +++ b/paddle/math/Function.h @@ -46,6 +46,8 @@ class Tensor { public: Tensor(real* data, const Dims& dim) : buf_(data), dims_(dim) {} + real* getData() const { return buf_; } + real* buf_; Dims dims_; }; @@ -63,7 +65,7 @@ public: T get(const std::string& key) const; template - void set(const std::string& key, T v); + FuncConfig& set(const std::string& key, T v); protected: std::map valueMap_; @@ -84,11 +86,11 @@ public: #define FUNC_NAME(typeName, deviceName) #typeName "-" #deviceName -#define REGISTER_TYPED_FUNC(typeName, deviceName, className) \ - static InitFunction __reg_type_##typeName([]() { \ - FunctionBase::funcRegistrar_ \ - .registerClass>( \ - FUNC_NAME(typeName, deviceName)); \ +#define REGISTER_TYPED_FUNC(typeName, deviceName, className) \ + static InitFunction __reg_type_##typeName##deviceName([]() { \ + FunctionBase::funcRegistrar_ \ + .registerClass>( \ + FUNC_NAME(typeName, deviceName)); \ }) } // namespace paddle diff --git a/paddle/math/cross_map_normal_op.cpp b/paddle/math/cross_map_normal_op.cpp index d55bd78c62..e520351d2e 100644 --- a/paddle/math/cross_map_normal_op.cpp +++ b/paddle/math/cross_map_normal_op.cpp @@ -18,45 +18,41 @@ namespace paddle { // NCHW template <> -void CrossMapNormal::operator()(CpuMatrix& outputs, - CpuMatrix& denoms, - CpuMatrix& inputs, - size_t channels, - size_t imgSizeH, - size_t imgSizeW, - size_t sizeX, - real scale, - real pow) { - CHECK(outputs.isContiguous()); - CHECK(inputs.isContiguous()); - CHECK(denoms.isContiguous()); - CHECK_EQ(outputs.getHeight(), inputs.getHeight()); - CHECK_EQ(outputs.getWidth(), inputs.getWidth()); - CHECK_EQ(outputs.getHeight(), denoms.getHeight()); - CHECK_EQ(outputs.getWidth(), denoms.getWidth()); - - size_t numSample = inputs.getHeight(); - size_t numCols = inputs.getWidth(); - size_t imageSize = imgSizeH * imgSizeW; - CHECK(imageSize * channels == numCols); - - denoms = denoms.constant(1.0); - const int start = -((int)sizeX - 1) / 2; - const int end = (int)sizeX + start; - for (size_t i = 0; i < numSample; i++) { - real* denomsData = denoms.getData() + i * numCols; - real* inputData = inputs.getData() + i * numCols; +void CrossMapNormal(real* outputs, + real* denoms, + real* inputs, + size_t numSamples, + size_t channels, + size_t height, + size_t width, + size_t size, 
+ real scale, + real pow) { + size_t oneImage = height * width; + size_t oneSample = channels * oneImage; + + CpuVector outputsV(numSamples * oneSample, outputs); + CpuVector inputsV(numSamples * oneSample, inputs); + CpuVector denomsV(numSamples * oneSample, denoms); + + denomsV = denomsV.constant(1.0); + const int start = -((int)size - 1) / 2; + const int end = (int)size + start; + for (size_t i = 0; i < numSamples; i++) { + real* oneDenom = denoms + i * oneSample; + real* oneInput = inputs + i * oneSample; for (int c = 0; c < (int)channels; c++) { - CpuVector denom(imageSize, denomsData + c * imageSize); + CpuVector denom(oneImage, oneDenom + c * oneImage); for (int s = start; s < end; s++) { if (c + s >= 0 && c + s < (int)channels) { - CpuVector input(imageSize, inputData + (c + s) * imageSize); + CpuVector input(oneImage, oneInput + (c + s) * oneImage); denom += input.square() * scale; } } } } - outputs = inputs * denoms.pow(-pow); + + outputsV = inputsV * denomsV.pow(-pow); } template <> @@ -154,13 +150,17 @@ public: size_t channels = inputs[0].dims_[1]; size_t height = inputs[0].dims_[2]; size_t width = inputs[0].dims_[3]; - size_t imageSize = channels * height * width; - CpuMatrix input(inputs[0].buf_, samples, imageSize); - CpuMatrix output(outputs[0].buf_, samples, imageSize); - CpuMatrix denom(outputs[1].buf_, samples, imageSize); - CrossMapNormal cross; - cross(output, denom, input, channels, height, width, size_, scale_, pow_); + CrossMapNormal(outputs[0].getData(), + outputs[1].getData(), + inputs[0].getData(), + samples, + channels, + height, + width, + size_, + scale_, + pow_); } private: @@ -170,5 +170,6 @@ private: }; REGISTER_TYPED_FUNC(CrossMapNormal, CPU, CrossMapNormalFunc); +REGISTER_TYPED_FUNC(CrossMapNormal, GPU, CrossMapNormalFunc); } // namespace paddle diff --git a/paddle/math/cross_map_normal_op.h b/paddle/math/cross_map_normal_op.h index 86f54abde1..ef9533485e 100644 --- a/paddle/math/cross_map_normal_op.h +++ b/paddle/math/cross_map_normal_op.h @@ -19,6 +19,18 @@ limitations under the License. 
*/ namespace paddle { +template +void CrossMapNormal(real* outputs, + real* denoms, + real* inputs, + size_t numSamples, + size_t channels, + size_t height, + size_t width, + size_t size, + real scale, + real pow); +#if 0 template struct CrossMapNormal { void operator()(typename MatrixT::type& outputs, @@ -31,6 +43,7 @@ struct CrossMapNormal { real scale, real pow); }; +#endif template struct CrossMapNormalGrad { diff --git a/paddle/math/cross_map_normal_op_gpu.cu b/paddle/math/cross_map_normal_op_gpu.cu index 0a154d97ac..9b92974344 100644 --- a/paddle/math/cross_map_normal_op_gpu.cu +++ b/paddle/math/cross_map_normal_op_gpu.cu @@ -61,45 +61,29 @@ __global__ void KeCMRNormOutput(size_t inputSize, const real* in, } template <> -void CrossMapNormal::operator()(GpuMatrix& outputs, - GpuMatrix& denoms, - GpuMatrix& inputs, - size_t channels, - size_t imgSizeH, - size_t imgSizeW, - size_t sizeX, - real scale, - real pow) { - CHECK(outputs.isContiguous()); - CHECK(inputs.isContiguous()); - CHECK(denoms.isContiguous()); - CHECK_EQ(outputs.getHeight(), inputs.getHeight()); - CHECK_EQ(outputs.getWidth(), inputs.getWidth()); - CHECK_EQ(outputs.getHeight(), denoms.getHeight()); - CHECK_EQ(outputs.getWidth(), denoms.getWidth()); - - size_t numSample = inputs.getHeight(); - size_t numCols = inputs.getWidth(); - CHECK(imgSizeH * imgSizeW * channels == numCols); - - real* inputsData = inputs.getData(); - real* denomsData = denoms.getData(); - real* outputsData = outputs.getData(); - - size_t imageSize = numSample * imgSizeH * imgSizeW; +void CrossMapNormal(real* outputs, + real* denoms, + real* inputs, + size_t numSamples, + size_t channels, + size_t height, + size_t width, + size_t size, + real scale, + real pow) { + size_t imageSize = numSamples * height * width; int blockSize = 1024; int gridSize = (imageSize + 1024 - 1) / 1024; KeCMRNormFillScale<<>> - (imageSize, inputsData, denomsData, - channels, imgSizeH, imgSizeW, sizeX, scale); + (imageSize, inputs, denoms, channels, height, width, size, scale); - size_t inputSize = numSample * imgSizeH * imgSizeW *channels; + size_t inputSize = numSamples * height * width *channels; blockSize = 1024; gridSize = (inputSize + 1024 - 1) / 1024; KeCMRNormOutput<<>> - (inputSize, inputsData, denomsData, -pow, outputsData); + (inputSize, inputs, denoms, -pow, outputs); - CHECK_SYNC("CrossMapNormalFwd"); + CHECK_SYNC("CrossMapNormal"); } __global__ void KeCMRNormDiff(size_t imageSize, const real* bottom_data, diff --git a/paddle/math/tests/test_matrixCompare.cpp b/paddle/math/tests/test_matrixCompare.cpp index cd34ea18a7..aac3f75799 100644 --- a/paddle/math/tests/test_matrixCompare.cpp +++ b/paddle/math/tests/test_matrixCompare.cpp @@ -1281,24 +1281,40 @@ void testCrossMapNormalFwd( inputsGpu.copyFrom(inputs); outputsGpu.copyFrom(outputs); +#if 0 FuncConfig config; config.set("size", (size_t)sizeX); config.set("scale", scale); config.set("pow", pow); +#endif FunctionBase* cpu = FunctionBase::funcRegistrar_.createByType(FUNC_NAME(CrossMapNormal, CPU)); - cpu->init(config); + FunctionBase* gpu = + FunctionBase::funcRegistrar_.createByType(FUNC_NAME(CrossMapNormal, GPU)); + cpu->init(FuncConfig() + .set("size", (size_t)sizeX) + .set("scale", scale) + .set("pow", pow)); + gpu->init(FuncConfig() + .set("size", (size_t)sizeX) + .set("scale", scale) + .set("pow", pow)); Dims dims{ (size_t)numSamples, (size_t)channels, (size_t)imgSizeH, (size_t)imgSizeW}; cpu->calc({Tensor(inputs.getData(), dims)}, {Tensor(outputs.getData(), dims), Tensor(denoms.getData(), dims)}, {}); + + 
gpu->calc( + {Tensor(inputsGpu.getData(), dims)}, + {Tensor(outputsGpu.getData(), dims), Tensor(denomsGpu.getData(), dims)}, + {}); #if 0 CrossMapNormal cpuCross; cpuCross( outputs, denoms, inputs, channels, imgSizeH, imgSizeW, sizeX, scale, pow); -#endif + CrossMapNormal gpuCross; gpuCross(outputsGpu, denomsGpu, @@ -1309,6 +1325,7 @@ void testCrossMapNormalFwd( sizeX, scale, pow); +#endif TensorCheckErr(outputs, outputsGpu); TensorCheckErr(denoms, denomsGpu); From 45bd0a5e167ae7910ddfa36aa764b4738949c4c2 Mon Sep 17 00:00:00 2001 From: wangyanfei01 Date: Thu, 15 Dec 2016 11:58:12 +0800 Subject: [PATCH 151/265] more accurate to early stop train sparse model --- paddle/trainer/Tester.cpp | 6 ++++++ paddle/trainer/Trainer.cpp | 6 ------ 2 files changed, 6 insertions(+), 6 deletions(-) diff --git a/paddle/trainer/Tester.cpp b/paddle/trainer/Tester.cpp index 97d1b53934..24f7271734 100644 --- a/paddle/trainer/Tester.cpp +++ b/paddle/trainer/Tester.cpp @@ -46,6 +46,12 @@ Tester::Tester(const std::shared_ptr& config, gradientMachine_(gradientMachine), parameterUpdater_(parameterUpdater), testDataProvider_(testDataProvider) { + if (config_->getOptConfig().use_sparse_remote_updater()) { + LOG(FATAL) << "It's prohibited to set sparse_remote_update " + << "in some layers if testing will be under going " + << "in the middle of training. You can do testing " + << "within separate process."; + } testEvaluator_.reset(gradientMachine_->makeEvaluator()); if (intconfig_->distributeTest) { testParameterClient_.reset(new ParameterClient2(true)); diff --git a/paddle/trainer/Trainer.cpp b/paddle/trainer/Trainer.cpp index 1380e46440..85610ec04e 100644 --- a/paddle/trainer/Trainer.cpp +++ b/paddle/trainer/Trainer.cpp @@ -226,12 +226,6 @@ void Trainer::init(const std::shared_ptr& config, DataProvider::create(config_->getTestDataConfig(), *config_, gpuData)); } if (testDataProvider_) { - if (config_->getOptConfig().use_sparse_remote_updater()) { - LOG(FATAL) << "It's prohibited to set sparse_remote_update " - << "in some layers if testing will be under going " - << "in the middle of training. You can do testing " - << "within separate process."; - } createTester(); } From bcf6f8283fd123d50bb6b64e24dd06434abc815d Mon Sep 17 00:00:00 2001 From: zhouyingfeng Date: Thu, 15 Dec 2016 12:27:56 +0800 Subject: [PATCH 152/265] add chinese doc for gpu-profiling, and fix the out-dated lineNos in en doc referenced to a code file. resolve #834 --- doc/howto/optimization/gpu_profiling_cn.rst | 239 ++++++++++++++++++++ doc/howto/optimization/gpu_profiling_en.rst | 120 +++++----- 2 files changed, 299 insertions(+), 60 deletions(-) create mode 100644 doc/howto/optimization/gpu_profiling_cn.rst diff --git a/doc/howto/optimization/gpu_profiling_cn.rst b/doc/howto/optimization/gpu_profiling_cn.rst new file mode 100644 index 0000000000..3132d3eaa1 --- /dev/null +++ b/doc/howto/optimization/gpu_profiling_cn.rst @@ -0,0 +1,239 @@ +PaddlePaddle 中的性能分析 +===================================== + +此教程将向您分步介绍如何使用内置的定时工具、 **nvprof** 或 **nvvp** 来运行性能分析和调优。 + +- 什么是性能分析? +- 为什么需要性能分析? +- 如何进行性能分析? +- 性能分析工具介绍 +- 详细教程 +- 性能分析小技巧 + +什么是性能分析? +================ +在软件工程的范畴里,性能分析(Profiling)是一个动态程序分析的术语,它可以指测量一个程序的空间(内存)复杂度或时间复杂度, +也可以说是某些特定指令的使用情况,或者是函数调用的频率和耗时等。通常情况下,分析得到的信息用于协助进行程序的优化。 + +简单来说,性能分析工具是用于给应用程序的性能做定量分析的。如果想很好的理解程序的行为,那程序分析工具是必不可少的利器。简单的性能分析,可以告诉您某个操作到底花了多长时间?而更深入的分析,甚至能解释为什么某个操作花了很长时间? + +为什么需要性能分析? +============================ +训练好一个深层神经网络通常要耗费非常长的时间,所以性能也就逐步变成了深度学习领域最重要的指标。 +而优化性能的首要任务,是需要了解哪些步骤拖慢了整体。 +如果某一块根本就不怎么耗时,那也就不需要急着优化性能啦! 
+ +如何进行性能分析? +======================== +为了达到性能最优,您可以采用下面五个步骤: + +- 对代码进行性能分析 +- 找到运行慢的部分 +- 找到运行慢的原因 +- 修改成更快的版本 +- 再次对代码进行性能分析 + +Usually, processor has two key performance limits include float point throughput and +memory throughput. For GPU, it also need more parallelism to fulfill its potential. +This is why they can be so fast. + +通常情况下,处理器有两个关键性能限制:一个是浮点计算量,另一个是内存操作量。 +GPU则还需要高并行性,才能发挥其全部能力。这正是它们速度快的原因。 + +性能分析工具介绍 +====================== +就通常的GPU性能分析来说,市面上已经有NVIDIA或第三方提供的众多工具。 + +**nvprof** 是Nvidia性能分析工具, **nvvp** 则是带GUI的Nvidia可视化性能分析工具。 +在这个教程中,我们主要会介绍nvprof和nvvp。 + +:code:`test_GpuProfiler` from :code:`paddle/math/tests` directory will be used to evaluate +above profilers. + +:code:`paddle/math/test` 目录中的 :code:`test_GpuProfiler` 就是用于展示上述分析工具的用法。 + +.. literalinclude:: ../../../paddle/math/tests/test_GpuProfiler.cpp + :language: c++ + :lines: 137-151 + :linenos: + +上述的代码片段包含了两种方法,您可以任意使用一个或两个来对感兴趣的代码段做性能分析。 + +1. :code:`REGISTER_TIMER_INFO` 是一个内置的定时器封装,可以用来计算CPU函数或cuda内核的时间消耗。 + +2. :code:`REGISTER_GPU_PROFILER` is a general purpose wrapper object of :code:`cudaProfilerStart` and :code:`cudaProfilerStop` to avoid +program crashes when CPU version of PaddlePaddle invokes them. + +3. :code:`REGISTER_GPU_PROFILER` 是一个封装对象,封装了 :code:`cudaProfilerStart` 和 :code:`cudaProfileStop` 两个操作;同时其内部实现可以避免纯CPU版本PaddlePaddle在执行本语句时发生崩溃。 + +您会在接下来的部分中获得更多的细节介绍。 + +详细教程 +============ + +内置定时器 +------------ + +如果想要启用PaddlePaddle的内置定时器,您首先需要在相关代码段中加入 :code:`REGISTER_TIMER_INFO`。 +接下来就可以使用 :code:`printStatus` 或者 :code:`printAllStatus` 函数来将信息输出到界面中。 +下面举个简单的例子: + +1. 加入 :code:`REGISTER_TIMER_INFO` 和 :code:`printAllStatus` 函数(如高亮部分)。 + + .. literalinclude:: ../../../paddle/math/tests/test_GpuProfiler.cpp + :language: c++ + :lines: 137-151 + :emphasize-lines: 8-12,14 + :linenos: + +2. cmake配置中将 **WITH_TIMER** 打开,重新编译PaddlePaddle。 + + .. code-block:: bash + + cmake .. -DWITH_TIMER=ON + make + +3. 执行您的代码,并观察结果(如高亮部分)。 + + .. code-block:: bash + :emphasize-lines: 1,12-15 + + > ./paddle/math/tests/test_GpuProfiler + I1117 11:13:42.313065 2522362816 Util.cpp:155] commandline: ./paddle/math/tests/test_GpuProfiler + I1117 11:13:42.845065 2522362816 Util.cpp:130] Calling runInitFunctions + I1117 11:13:42.845208 2522362816 Util.cpp:143] Call runInitFunctions done. + [==========] Running 1 test from 1 test case. + [----------] Global test environment set-up. + [----------] 1 test from Profiler + [ RUN ] Profiler.BilinearFwdBwd + I1117 11:13:42.845310 2522362816 test_GpuProfiler.cpp:114] Enable GPU Profiler Stat: [testBilinearFwdBwd] "numSamples = 10, channels = 16, im + gSizeX = 64, imgSizeY = 64" + I1117 11:13:42.850154 2522362816 ThreadLocal.cpp:37] thread use undeterministic rand seed:20659751 + I1117 11:13:42.981501 2522362816 Stat.cpp:130] ======= StatSet: [GlobalStatInfo] status ====== + I1117 11:13:42.981539 2522362816 Stat.cpp:133] Stat=testBilinearFwdBwd total=136.141 avg=136.141 max=136.141 min=136.141 count=1 + I1117 11:13:42.981572 2522362816 Stat.cpp:141] ======= BarrierStatSet status ====== + I1117 11:13:42.981575 2522362816 Stat.cpp:154] -------------------------------------------------- + [ OK ] Profiler.BilinearFwdBwd (136 ms) + [----------] 1 test from Profiler (136 ms total) + + [----------] Global test environment tear-down + [==========] 1 test from 1 test case ran. (136 ms total) + [ PASSED ] 1 test. + +nvprof 工具 +---------------- + +要使用命令行分析工具 **nvprof**,您按如下步骤操作即可: + +1. 将 :code:`REGISTER_GPU_PROFILER` 函数加到代码中(参考强调部分)。 + + .. 
literalinclude:: ../../../paddle/math/tests/test_GpuProfiler.cpp + :language: c++ + :lines: 137-151 + :emphasize-lines: 6-7 + :linenos: + +2. cmake中将 **WITH_PROFILER** 配置打开,重新编译PaddlePaddle。 + + .. code-block:: bash + + cmake .. -DWITH_PROFILER=ON + make + +3. 使用 **nvprof** 来分析执行文件。 + + .. code-block:: bash + + nvprof ./paddle/math/tests/test_GpuProfiler + +然后,您就能获得如下的分析结果: + +.. code-block:: bash + + ==78544== Profiling application: ./paddle/math/tests/test_GpuProfiler + ==78544== Profiling result: + Time(%) Time Calls Avg Min Max Name + 27.60% 9.6305ms 5 1.9261ms 3.4560us 6.4035ms [CUDA memcpy HtoD] + 26.07% 9.0957ms 1 9.0957ms 9.0957ms 9.0957ms KeBilinearInterpBw + 23.78% 8.2977ms 1 8.2977ms 8.2977ms 8.2977ms KeBilinearInterpFw + 22.55% 7.8661ms 2 3.9330ms 1.5798ms 6.2863ms [CUDA memcpy DtoH] + + ==78544== API calls: + Time(%) Time Calls Avg Min Max Name + 46.85% 682.28ms 8 85.285ms 12.639us 682.03ms cudaStreamCreateWithFlags + 39.83% 580.00ms 4 145.00ms 302ns 550.27ms cudaFree + 9.82% 143.03ms 9 15.892ms 8.7090us 142.78ms cudaStreamCreate + 1.23% 17.983ms 7 2.5690ms 23.210us 6.4563ms cudaMemcpy + 1.23% 17.849ms 2 8.9247ms 8.4726ms 9.3768ms cudaStreamSynchronize + 0.66% 9.5969ms 7 1.3710ms 288.43us 2.4279ms cudaHostAlloc + 0.13% 1.9530ms 11 177.54us 7.6810us 591.06us cudaMalloc + 0.07% 1.0424ms 8 130.30us 1.6970us 453.72us cudaGetDevice + 0.04% 527.90us 40 13.197us 525ns 253.99us cudaEventCreateWithFlags + 0.03% 435.73us 348 1.2520us 124ns 42.704us cuDeviceGetAttribute + 0.03% 419.36us 1 419.36us 419.36us 419.36us cudaGetDeviceCount + 0.02% 260.75us 2 130.38us 129.32us 131.43us cudaGetDeviceProperties + 0.02% 222.32us 2 111.16us 106.94us 115.39us cudaLaunch + 0.01% 214.06us 4 53.514us 28.586us 77.655us cuDeviceGetName + 0.01% 115.45us 4 28.861us 9.8250us 44.526us cuDeviceTotalMem + 0.01% 83.988us 4 20.997us 578ns 77.760us cudaSetDevice + 0.00% 38.918us 1 38.918us 38.918us 38.918us cudaEventCreate + 0.00% 34.573us 31 1.1150us 279ns 12.784us cudaDeviceGetAttribute + 0.00% 17.767us 1 17.767us 17.767us 17.767us cudaProfilerStart + 0.00% 15.228us 2 7.6140us 3.5460us 11.682us cudaConfigureCall + 0.00% 14.536us 2 7.2680us 1.1490us 13.387us cudaGetLastError + 0.00% 8.6080us 26 331ns 173ns 783ns cudaSetupArgument + 0.00% 5.5470us 6 924ns 215ns 2.6780us cuDeviceGet + 0.00% 5.4090us 6 901ns 328ns 3.3320us cuDeviceGetCount + 0.00% 4.1770us 3 1.3920us 1.0630us 1.8300us cuDriverGetVersion + 0.00% 3.4650us 3 1.1550us 1.0810us 1.2680us cuInit + 0.00% 830ns 1 830ns 830ns 830ns cudaRuntimeGetVersion + + +nvvp 工具 +-------------- + +如果想使用可视化的分析器 **nvvp**,您可以导入 :code:`nvprof -o ...` 的输出,或者从工具的界面里运行您的应用。 + +**备注: nvvp 也支持CPU的性能分析** (需在nvvp界面中选上才能开启) + +.. image:: nvvp1.png + :align: center + :scale: 33% + +从内核函数的角度, **nvvp** 可以精确说明一个长耗时操作的具体原因。 +同时,如下图所示, **nvvp** 的内核block使用情况、register使用情况和共享内存使用情况能让我们对GPU的整体使用有更好的理解。 + + +.. image:: nvvp2.png + :align: center + :scale: 33% + +而从应用的角度, **nvvp** 可以帮您提供一些定位性能瓶颈的建议。 +例如,下图中就展示了一些关于data movement和compute utilization的建议,为您做性能调优提供了方向。 + +.. image:: nvvp3.png + :align: center + :scale: 33% + +.. image:: nvvp4.png + :align: center + :scale: 33% + +性能分析小技巧 +================== + +- 开始阶段,从 **nvprof** 和 **nvvp** 的输出信息入手是个不错的选择。 +- 接下来可以考虑下时间线的分析。 +- 如果真想挖掘内核深处的某个秘密,您最好先确认:这一块的耗时比例真的太高,值得深入分析。 +- 可能的情况下,试着让输出的分析数据和理论值对应。 + + 1) 例如,如果我知道内核花了10ms来移动1GB数据,那我会期望分析工具统计到速度是100GB/s。 + 2) 若有不一致之处,很有可能实际应用就是没有按照您的预期情况运行。 +- 了解您的硬件:如果您的GPU理论可以达到6 TFLOPs(6万亿次浮点运算每秒),而当前已经有5.5 TFLOPs了,那估计这里的潜力就没啥好挖的了…… + +性能分析是性能优化的关键一步。有的时候简简单单的改变就能在性能上产生明显的优化效果! 
+当然,具体情况因人而异。 + +参考资料 +=========== +Jeremy Appleyard, `GPU Profiling for Deep Learning `_, 2015 diff --git a/doc/howto/optimization/gpu_profiling_en.rst b/doc/howto/optimization/gpu_profiling_en.rst index 40ba698f4e..0e3e6f9342 100644 --- a/doc/howto/optimization/gpu_profiling_en.rst +++ b/doc/howto/optimization/gpu_profiling_en.rst @@ -49,7 +49,7 @@ For general GPU profiling, a bunch of tools are provided from both NVIDIA and th In this tutorial, we will focus on nvprof and nvvp. :code:`test_GpuProfiler` from :code:`paddle/math/tests` directory will be used to evaluate -above profilers. +above profilers. .. literalinclude:: ../../../paddle/math/tests/test_GpuProfiler.cpp :language: c++ @@ -79,8 +79,8 @@ As a simple example, consider the following: .. literalinclude:: ../../../paddle/math/tests/test_GpuProfiler.cpp :language: c++ - :lines: 111-124 - :emphasize-lines: 8-10,13 + :lines: 137-151 + :emphasize-lines: 8-12,14 :linenos: 2. Configure cmake with **WITH_TIMER** and recompile PaddlePaddle. @@ -90,31 +90,31 @@ As a simple example, consider the following: cmake .. -DWITH_TIMER=ON make -3. Execute your code and observe the results (see the emphasize-lines). +3. Execute your code and observe the results (see the emphasize-lines). .. code-block:: bash :emphasize-lines: 1,12-15 - > ./paddle/math/tests/test_GpuProfiler - I1117 11:13:42.313065 2522362816 Util.cpp:155] commandline: ./paddle/math/tests/test_GpuProfiler - I1117 11:13:42.845065 2522362816 Util.cpp:130] Calling runInitFunctions - I1117 11:13:42.845208 2522362816 Util.cpp:143] Call runInitFunctions done. - [==========] Running 1 test from 1 test case. - [----------] Global test environment set-up. - [----------] 1 test from Profiler - [ RUN ] Profiler.BilinearFwdBwd + > ./paddle/math/tests/test_GpuProfiler + I1117 11:13:42.313065 2522362816 Util.cpp:155] commandline: ./paddle/math/tests/test_GpuProfiler + I1117 11:13:42.845065 2522362816 Util.cpp:130] Calling runInitFunctions + I1117 11:13:42.845208 2522362816 Util.cpp:143] Call runInitFunctions done. + [==========] Running 1 test from 1 test case. + [----------] Global test environment set-up. + [----------] 1 test from Profiler + [ RUN ] Profiler.BilinearFwdBwd I1117 11:13:42.845310 2522362816 test_GpuProfiler.cpp:114] Enable GPU Profiler Stat: [testBilinearFwdBwd] "numSamples = 10, channels = 16, im - gSizeX = 64, imgSizeY = 64" - I1117 11:13:42.850154 2522362816 ThreadLocal.cpp:37] thread use undeterministic rand seed:20659751 - I1117 11:13:42.981501 2522362816 Stat.cpp:130] ======= StatSet: [GlobalStatInfo] status ====== - I1117 11:13:42.981539 2522362816 Stat.cpp:133] Stat=testBilinearFwdBwd total=136.141 avg=136.141 max=136.141 min=136.141 count=1 - I1117 11:13:42.981572 2522362816 Stat.cpp:141] ======= BarrierStatSet status ====== - I1117 11:13:42.981575 2522362816 Stat.cpp:154] -------------------------------------------------- - [ OK ] Profiler.BilinearFwdBwd (136 ms) - [----------] 1 test from Profiler (136 ms total) - - [----------] Global test environment tear-down - [==========] 1 test from 1 test case ran. 
(136 ms total) + gSizeX = 64, imgSizeY = 64" + I1117 11:13:42.850154 2522362816 ThreadLocal.cpp:37] thread use undeterministic rand seed:20659751 + I1117 11:13:42.981501 2522362816 Stat.cpp:130] ======= StatSet: [GlobalStatInfo] status ====== + I1117 11:13:42.981539 2522362816 Stat.cpp:133] Stat=testBilinearFwdBwd total=136.141 avg=136.141 max=136.141 min=136.141 count=1 + I1117 11:13:42.981572 2522362816 Stat.cpp:141] ======= BarrierStatSet status ====== + I1117 11:13:42.981575 2522362816 Stat.cpp:154] -------------------------------------------------- + [ OK ] Profiler.BilinearFwdBwd (136 ms) + [----------] 1 test from Profiler (136 ms total) + + [----------] Global test environment tear-down + [==========] 1 test from 1 test case ran. (136 ms total) [ PASSED ] 1 test. nvprof profiler @@ -126,7 +126,7 @@ To use this command line profiler **nvprof**, you can simply issue the following .. literalinclude:: ../../../paddle/math/tests/test_GpuProfiler.cpp :language: c++ - :lines: 111-124 + :lines: 137-151 :emphasize-lines: 6-7 :linenos: @@ -147,42 +147,42 @@ Then, you can get the following profiling result: .. code-block:: bash - ==78544== Profiling application: ./paddle/math/tests/test_GpuProfiler - ==78544== Profiling result: - Time(%) Time Calls Avg Min Max Name - 27.60% 9.6305ms 5 1.9261ms 3.4560us 6.4035ms [CUDA memcpy HtoD] - 26.07% 9.0957ms 1 9.0957ms 9.0957ms 9.0957ms KeBilinearInterpBw - 23.78% 8.2977ms 1 8.2977ms 8.2977ms 8.2977ms KeBilinearInterpFw - 22.55% 7.8661ms 2 3.9330ms 1.5798ms 6.2863ms [CUDA memcpy DtoH] - - ==78544== API calls: - Time(%) Time Calls Avg Min Max Name - 46.85% 682.28ms 8 85.285ms 12.639us 682.03ms cudaStreamCreateWithFlags - 39.83% 580.00ms 4 145.00ms 302ns 550.27ms cudaFree - 9.82% 143.03ms 9 15.892ms 8.7090us 142.78ms cudaStreamCreate - 1.23% 17.983ms 7 2.5690ms 23.210us 6.4563ms cudaMemcpy - 1.23% 17.849ms 2 8.9247ms 8.4726ms 9.3768ms cudaStreamSynchronize - 0.66% 9.5969ms 7 1.3710ms 288.43us 2.4279ms cudaHostAlloc - 0.13% 1.9530ms 11 177.54us 7.6810us 591.06us cudaMalloc - 0.07% 1.0424ms 8 130.30us 1.6970us 453.72us cudaGetDevice - 0.04% 527.90us 40 13.197us 525ns 253.99us cudaEventCreateWithFlags - 0.03% 435.73us 348 1.2520us 124ns 42.704us cuDeviceGetAttribute - 0.03% 419.36us 1 419.36us 419.36us 419.36us cudaGetDeviceCount - 0.02% 260.75us 2 130.38us 129.32us 131.43us cudaGetDeviceProperties - 0.02% 222.32us 2 111.16us 106.94us 115.39us cudaLaunch - 0.01% 214.06us 4 53.514us 28.586us 77.655us cuDeviceGetName - 0.01% 115.45us 4 28.861us 9.8250us 44.526us cuDeviceTotalMem - 0.01% 83.988us 4 20.997us 578ns 77.760us cudaSetDevice - 0.00% 38.918us 1 38.918us 38.918us 38.918us cudaEventCreate - 0.00% 34.573us 31 1.1150us 279ns 12.784us cudaDeviceGetAttribute - 0.00% 17.767us 1 17.767us 17.767us 17.767us cudaProfilerStart - 0.00% 15.228us 2 7.6140us 3.5460us 11.682us cudaConfigureCall - 0.00% 14.536us 2 7.2680us 1.1490us 13.387us cudaGetLastError - 0.00% 8.6080us 26 331ns 173ns 783ns cudaSetupArgument - 0.00% 5.5470us 6 924ns 215ns 2.6780us cuDeviceGet - 0.00% 5.4090us 6 901ns 328ns 3.3320us cuDeviceGetCount - 0.00% 4.1770us 3 1.3920us 1.0630us 1.8300us cuDriverGetVersion - 0.00% 3.4650us 3 1.1550us 1.0810us 1.2680us cuInit + ==78544== Profiling application: ./paddle/math/tests/test_GpuProfiler + ==78544== Profiling result: + Time(%) Time Calls Avg Min Max Name + 27.60% 9.6305ms 5 1.9261ms 3.4560us 6.4035ms [CUDA memcpy HtoD] + 26.07% 9.0957ms 1 9.0957ms 9.0957ms 9.0957ms KeBilinearInterpBw + 23.78% 8.2977ms 1 8.2977ms 8.2977ms 8.2977ms KeBilinearInterpFw + 
22.55% 7.8661ms 2 3.9330ms 1.5798ms 6.2863ms [CUDA memcpy DtoH] + + ==78544== API calls: + Time(%) Time Calls Avg Min Max Name + 46.85% 682.28ms 8 85.285ms 12.639us 682.03ms cudaStreamCreateWithFlags + 39.83% 580.00ms 4 145.00ms 302ns 550.27ms cudaFree + 9.82% 143.03ms 9 15.892ms 8.7090us 142.78ms cudaStreamCreate + 1.23% 17.983ms 7 2.5690ms 23.210us 6.4563ms cudaMemcpy + 1.23% 17.849ms 2 8.9247ms 8.4726ms 9.3768ms cudaStreamSynchronize + 0.66% 9.5969ms 7 1.3710ms 288.43us 2.4279ms cudaHostAlloc + 0.13% 1.9530ms 11 177.54us 7.6810us 591.06us cudaMalloc + 0.07% 1.0424ms 8 130.30us 1.6970us 453.72us cudaGetDevice + 0.04% 527.90us 40 13.197us 525ns 253.99us cudaEventCreateWithFlags + 0.03% 435.73us 348 1.2520us 124ns 42.704us cuDeviceGetAttribute + 0.03% 419.36us 1 419.36us 419.36us 419.36us cudaGetDeviceCount + 0.02% 260.75us 2 130.38us 129.32us 131.43us cudaGetDeviceProperties + 0.02% 222.32us 2 111.16us 106.94us 115.39us cudaLaunch + 0.01% 214.06us 4 53.514us 28.586us 77.655us cuDeviceGetName + 0.01% 115.45us 4 28.861us 9.8250us 44.526us cuDeviceTotalMem + 0.01% 83.988us 4 20.997us 578ns 77.760us cudaSetDevice + 0.00% 38.918us 1 38.918us 38.918us 38.918us cudaEventCreate + 0.00% 34.573us 31 1.1150us 279ns 12.784us cudaDeviceGetAttribute + 0.00% 17.767us 1 17.767us 17.767us 17.767us cudaProfilerStart + 0.00% 15.228us 2 7.6140us 3.5460us 11.682us cudaConfigureCall + 0.00% 14.536us 2 7.2680us 1.1490us 13.387us cudaGetLastError + 0.00% 8.6080us 26 331ns 173ns 783ns cudaSetupArgument + 0.00% 5.5470us 6 924ns 215ns 2.6780us cuDeviceGet + 0.00% 5.4090us 6 901ns 328ns 3.3320us cuDeviceGetCount + 0.00% 4.1770us 3 1.3920us 1.0630us 1.8300us cuDriverGetVersion + 0.00% 3.4650us 3 1.1550us 1.0810us 1.2680us cuInit 0.00% 830ns 1 830ns 830ns 830ns cudaRuntimeGetVersion From 3d560c517983f519bd2d85fdd5142cae8a35fb60 Mon Sep 17 00:00:00 2001 From: Luo Tao Date: Thu, 15 Dec 2016 12:59:40 +0800 Subject: [PATCH 153/265] fix conflict, move doc/conf.py.in to doc/templates --- doc/CMakeLists.txt | 4 ++-- doc/about/index_en.rst | 2 +- doc/howto/index_cn.rst | 16 ++++++++-------- doc/{ => templates}/conf.py.cn.in | 0 doc/{ => templates}/conf.py.en.in | 0 5 files changed, 11 insertions(+), 11 deletions(-) rename doc/{ => templates}/conf.py.cn.in (100%) rename doc/{ => templates}/conf.py.en.in (100%) diff --git a/doc/CMakeLists.txt b/doc/CMakeLists.txt index 1b0fbadeb3..6fa42fd0c7 100644 --- a/doc/CMakeLists.txt +++ b/doc/CMakeLists.txt @@ -16,7 +16,7 @@ set(SPHINX_CACHE_DIR_EN "${CMAKE_CURRENT_BINARY_DIR}/en/_doctrees") set(SPHINX_HTML_DIR_EN "${CMAKE_CURRENT_BINARY_DIR}/en/html") configure_file( - "${CMAKE_CURRENT_SOURCE_DIR}/conf.py.en.in" + "${CMAKE_CURRENT_SOURCE_DIR}/templates/conf.py.en.in" "${BINARY_BUILD_DIR_EN}/conf.py" @ONLY) @@ -41,7 +41,7 @@ set(SPHINX_CACHE_DIR_CN "${CMAKE_CURRENT_BINARY_DIR}/cn/_doctrees") set(SPHINX_HTML_DIR_CN "${CMAKE_CURRENT_BINARY_DIR}/cn/html") configure_file( - "${CMAKE_CURRENT_SOURCE_DIR}/conf.py.cn.in" + "${CMAKE_CURRENT_SOURCE_DIR}/templates/conf.py.cn.in" "${BINARY_BUILD_DIR_CN}/conf.py" @ONLY) diff --git a/doc/about/index_en.rst b/doc/about/index_en.rst index 8a372d2bc2..065c430cde 100644 --- a/doc/about/index_en.rst +++ b/doc/about/index_en.rst @@ -11,4 +11,4 @@ We hope to build an active open source community both by providing feedback and Credits -------- -We owe many thanks to `all contributors and developers `_ of PaddlePaddle! +We owe many thanks to `all contributors and developers `_ of PaddlePaddle! 
diff --git a/doc/howto/index_cn.rst b/doc/howto/index_cn.rst index 805b63f044..618f0c6e72 100644 --- a/doc/howto/index_cn.rst +++ b/doc/howto/index_cn.rst @@ -1,8 +1,8 @@ 进阶指南 ======== -使用 ----- +使用说明 +-------- .. toctree:: :maxdepth: 1 @@ -11,24 +11,24 @@ usage/cluster/k8s/k8s_cn.md usage/cluster/k8s/k8s_distributed_cn.md -开发 ----- +开发标准 +-------- .. toctree:: :maxdepth: 1 dev/write_docs_cn.rst -配置 ----- +模型配置 +-------- .. toctree:: :maxdepth: 1 deep_model/rnn/index_cn.rst -优化 ----- +性能优化 +-------- .. toctree:: :maxdepth: 1 diff --git a/doc/conf.py.cn.in b/doc/templates/conf.py.cn.in similarity index 100% rename from doc/conf.py.cn.in rename to doc/templates/conf.py.cn.in diff --git a/doc/conf.py.en.in b/doc/templates/conf.py.en.in similarity index 100% rename from doc/conf.py.en.in rename to doc/templates/conf.py.en.in From 707a9c9bbd67e936efeea134cc6eaf2f5fffe464 Mon Sep 17 00:00:00 2001 From: gaoyuan Date: Thu, 15 Dec 2016 13:33:36 +0800 Subject: [PATCH 154/265] Fix variable name and add the annotation --- paddle/gserver/layers/PriorBox.cpp | 130 ++++++++---------- python/paddle/trainer/config_parser.py | 2 - .../paddle/trainer_config_helpers/layers.py | 10 +- 3 files changed, 63 insertions(+), 79 deletions(-) diff --git a/paddle/gserver/layers/PriorBox.cpp b/paddle/gserver/layers/PriorBox.cpp index 4b8573f058..c9194235fd 100644 --- a/paddle/gserver/layers/PriorBox.cpp +++ b/paddle/gserver/layers/PriorBox.cpp @@ -17,6 +17,15 @@ limitations under the License. */ #include "paddle/math/BaseMatrix.h" namespace paddle { +/** + * @brief A layer for generate prior box locations and variances. + * - Input: Two and only two input layer are accepted. The input layer must be + * be a data output layer and a convolution output layer. + * - Output: The prior box locations and variances of the input data. + * Reference: + * Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, + * Cheng-Yang Fu, Alexander C. Berg. 
SSD: Single Shot MultiBox Detector + */ class PriorBoxLayer : public Layer { public: @@ -24,106 +33,84 @@ public: bool init(const LayerMap& layerMap, const ParameterMap& parameterMap); void forward(PassType passType); void backward(const UpdateCallback& callback) {} - void forwardImp(const Argument& featureMap, const Argument& imageShape); int numPriors_; std::vector minSize_; std::vector maxSize_; std::vector aspectRatio_; std::vector variance_; - std::vector tmpCpuInput_; MatrixPtr buffer_; }; bool PriorBoxLayer::init(const LayerMap& layerMap, const ParameterMap& parameterMap) { Layer::init(layerMap, parameterMap); - auto pb_conf = config_.inputs(0).priorbox_conf(); - std::copy(pb_conf.min_size().begin(), - pb_conf.min_size().end(), + auto pbConf = config_.inputs(0).priorbox_conf(); + std::copy(pbConf.min_size().begin(), + pbConf.min_size().end(), std::back_inserter(minSize_)); - std::copy(pb_conf.max_size().begin(), - pb_conf.max_size().end(), + std::copy(pbConf.max_size().begin(), + pbConf.max_size().end(), std::back_inserter(maxSize_)); - std::copy(pb_conf.aspect_ratio().begin(), - pb_conf.aspect_ratio().end(), + std::copy(pbConf.aspect_ratio().begin(), + pbConf.aspect_ratio().end(), std::back_inserter(aspectRatio_)); - std::copy(pb_conf.variance().begin(), - pb_conf.variance().end(), + std::copy(pbConf.variance().begin(), + pbConf.variance().end(), std::back_inserter(variance_)); // flip - int input_ratio_length = aspectRatio_.size(); - for (int index = 0; index < input_ratio_length; index++) + int inputRatioLength = aspectRatio_.size(); + for (int index = 0; index < inputRatioLength; index++) aspectRatio_.push_back(1 / aspectRatio_[index]); aspectRatio_.push_back(1.); numPriors_ = aspectRatio_.size(); if (maxSize_.size() > 0) numPriors_++; - buffer_ = Matrix::create(1, 1, false, false); - if (useGpu_) { - tmpCpuInput_.reserve(inputLayers_.size()); - for (size_t i = 0; i < inputLayers_.size(); i++) { - tmpCpuInput_.push_back(Argument()); - } - } return true; } void PriorBoxLayer::forward(PassType passType) { Layer::forward(passType); - if (useGpu_) { - for (size_t i = 0; i < inputLayers_.size(); i++) { - tmpCpuInput_[i].resizeAndCopyFrom( - getInput(i), false, HPPL_STREAM_DEFAULT); - hl_stream_synchronize(HPPL_STREAM_DEFAULT); - forwardImp(tmpCpuInput_[0], tmpCpuInput_[1]); - } - } else { - forwardImp(getInput(0), getInput(1)); - } -} - -void PriorBoxLayer::forwardImp(const Argument& featureMap, - const Argument& imageShape) { - int layer_width = featureMap.getFrameWidth(); - int layer_height = featureMap.getFrameHeight(); + auto input = getInput(0); + int layerWidth = input.getFrameWidth(); + int layerHeight = input.getFrameHeight(); - MatrixPtr inV1 = imageShape.value; - int image_width = inV1->getElement(0, 0); - int image_height = inV1->getElement(0, 1); - float step_w = static_cast(image_width) / layer_width; - float step_h = static_cast(image_height) / layer_height; - int dim = layer_height * layer_width * numPriors_ * 4; + auto image = getInput(1); + int imageWidth = image.getFrameWidth(); + int imageHeight = image.getFrameHeight(); + float stepW = static_cast(imageWidth) / layerWidth; + float stepH = static_cast(imageHeight) / layerHeight; + int dim = layerHeight * layerWidth * numPriors_ * 4; reserveOutput(1, dim * 2); // use a cpu buffer to compute Matrix::resizeOrCreate(buffer_, 1, dim * 2, false, false); - auto* tmp_ptr = buffer_->getData(); + auto* tmpPtr = buffer_->getData(); int idx = 0; - for (int h = 0; h < layer_height; ++h) { - for (int w = 0; w < layer_width; ++w) 
{ - float center_x = (w + 0.5) * step_w; - float center_y = (h + 0.5) * step_h; - int min_size = 0; + for (int h = 0; h < layerHeight; ++h) { + for (int w = 0; w < layerWidth; ++w) { + float centerX = (w + 0.5) * stepW; + float centerY = (h + 0.5) * stepH; + int minSize = 0; for (size_t s = 0; s < minSize_.size(); s++) { // first prior. - min_size = minSize_[s]; - int box_width = min_size; - int box_height = min_size; + minSize = minSize_[s]; + int boxWidth = minSize; + int boxHeight = minSize; // xmin, ymin, xmax, ymax. - tmp_ptr[idx++] = (center_x - box_width / 2.) / image_width; - tmp_ptr[idx++] = (center_y - box_height / 2.) / image_height; - tmp_ptr[idx++] = (center_x + box_width / 2.) / image_width; - tmp_ptr[idx++] = (center_y + box_height / 2.) / image_height; + tmpPtr[idx++] = (centerX - boxWidth / 2.) / imageWidth; + tmpPtr[idx++] = (centerY - boxHeight / 2.) / imageHeight; + tmpPtr[idx++] = (centerX + boxWidth / 2.) / imageWidth; + tmpPtr[idx++] = (centerY + boxHeight / 2.) / imageHeight; if (maxSize_.size() > 0) { CHECK_EQ(minSize_.size(), maxSize_.size()); // second prior. for (size_t s = 0; s < maxSize_.size(); s++) { - int max_size = maxSize_[s]; - box_width = box_height = sqrt(min_size * max_size); - tmp_ptr[idx++] = (center_x - box_width / 2.) / image_width; - tmp_ptr[idx++] = (center_y - box_height / 2.) / image_height; - tmp_ptr[idx++] = (center_x + box_width / 2.) / image_width; - tmp_ptr[idx++] = (center_y + box_height / 2.) / image_height; + int maxSize = maxSize_[s]; + boxWidth = boxHeight = sqrt(minSize * maxSize); + tmpPtr[idx++] = (centerX - boxWidth / 2.) / imageWidth; + tmpPtr[idx++] = (centerY - boxHeight / 2.) / imageHeight; + tmpPtr[idx++] = (centerX + boxWidth / 2.) / imageWidth; + tmpPtr[idx++] = (centerY + boxHeight / 2.) / imageHeight; } } } @@ -131,27 +118,26 @@ void PriorBoxLayer::forwardImp(const Argument& featureMap, for (size_t r = 0; r < aspectRatio_.size(); r++) { float ar = aspectRatio_[r]; if (fabs(ar - 1.) < 1e-6) continue; - float box_width = min_size * sqrt(ar); - float box_height = min_size / sqrt(ar); - tmp_ptr[idx++] = (center_x - box_width / 2.) / image_width; - tmp_ptr[idx++] = (center_y - box_height / 2.) / image_height; - tmp_ptr[idx++] = (center_x + box_width / 2.) / image_width; - tmp_ptr[idx++] = (center_y + box_height / 2.) / image_height; + float boxWidth = minSize * sqrt(ar); + float boxHeight = minSize / sqrt(ar); + tmpPtr[idx++] = (centerX - boxWidth / 2.) / imageWidth; + tmpPtr[idx++] = (centerY - boxHeight / 2.) / imageHeight; + tmpPtr[idx++] = (centerX + boxWidth / 2.) / imageWidth; + tmpPtr[idx++] = (centerY + boxHeight / 2.) / imageHeight; } } } // clip the prior's coordidate such that it is within [0, 1] for (int d = 0; d < dim; ++d) - tmp_ptr[d] = std::min(std::max(tmp_ptr[d], (float)0.), (float)1.); + tmpPtr[d] = std::min(std::max(tmpPtr[d], (float)0.), (float)1.); // set the variance. 
- for (int h = 0; h < layer_height; h++) - for (int w = 0; w < layer_width; w++) + for (int h = 0; h < layerHeight; h++) + for (int w = 0; w < layerWidth; w++) for (int i = 0; i < numPriors_; i++) - for (int j = 0; j < 4; j++) tmp_ptr[idx++] = variance_[j]; + for (int j = 0; j < 4; j++) tmpPtr[idx++] = variance_[j]; MatrixPtr outV = getOutputValue(); outV->copyFrom(buffer_->data_, dim * 2); } - REGISTER_LAYER(priorbox, PriorBoxLayer); } // namespace paddle diff --git a/python/paddle/trainer/config_parser.py b/python/paddle/trainer/config_parser.py index 8a82e5d667..0f7c601fe0 100644 --- a/python/paddle/trainer/config_parser.py +++ b/python/paddle/trainer/config_parser.py @@ -1589,8 +1589,6 @@ class PriorBoxLayer(LayerBase): self.config.inputs[0].priorbox_conf.aspect_ratio.extend(aspect_ratio) self.config.inputs[0].priorbox_conf.variance.extend(variance) self.config.size = size - input_layer0 = self.get_input_layer(0) - input_layer1 = self.get_input_layer(1) @config_layer('data') diff --git a/python/paddle/trainer_config_helpers/layers.py b/python/paddle/trainer_config_helpers/layers.py index 80c421aa2e..4bcdb9f35e 100644 --- a/python/paddle/trainer_config_helpers/layers.py +++ b/python/paddle/trainer_config_helpers/layers.py @@ -938,7 +938,7 @@ def print_layer(input, name=None): @wrap_name_default("priorbox") def priorbox_layer(input, - img_shape, + image, aspect_ratio, variance, min_size, @@ -951,8 +951,8 @@ def priorbox_layer(input, :type name: basestring :param input: The input layer. :type input: LayerOutput - :param img_shape: The width and height of the network input image. - :type img_shape: LayerOutput + :param image: The network input image. + :type image: LayerOutput :param aspect_ratio: The aspect ratio. :type aspect_ratio: list :param variance: The bounding box variance. @@ -968,7 +968,7 @@ def priorbox_layer(input, Layer( name=name, type=LayerType.PRIORBOX_LAYER, - inputs=[input.name, img_shape.name], + inputs=[input.name, image.name], size=size, min_size=min_size, max_size=max_size, @@ -977,7 +977,7 @@ def priorbox_layer(input, return LayerOutput( name, LayerType.PRIORBOX_LAYER, - parents=[input, img_shape], + parents=[input, image], num_filters=num_filters, size=size) From 8455071a51cb1413a6420c0e0e84154873b7ce98 Mon Sep 17 00:00:00 2001 From: Peng Li Date: Thu, 15 Dec 2016 13:47:13 +0800 Subject: [PATCH 155/265] Remove do_average_in_cpu, max_average_window, average_window from optimizers.py These three parameters have already been moved to model_average. 
Leaving them here will cause duplicate keys in kwargs --- python/paddle/trainer_config_helpers/optimizers.py | 4 ---- 1 file changed, 4 deletions(-) diff --git a/python/paddle/trainer_config_helpers/optimizers.py b/python/paddle/trainer_config_helpers/optimizers.py index d95b2cfe46..819d40feb3 100644 --- a/python/paddle/trainer_config_helpers/optimizers.py +++ b/python/paddle/trainer_config_helpers/optimizers.py @@ -361,9 +361,6 @@ def settings(batch_size, learning_rate_decay_b=0., learning_rate_schedule='poly', learning_rate_args='', - average_window=0, - do_average_in_cpu=False, - max_average_window=None, learning_method=None, regularization=None, is_async=False, @@ -412,7 +409,6 @@ def settings(batch_size, args = [ 'batch_size', 'learning_rate', 'learning_rate_decay_a', 'learning_rate_decay_b', 'learning_rate_schedule', 'learning_rate_args', - 'average_window', 'do_average_in_cpu', 'max_average_window' ] kwargs = dict() kwargs['algorithm'] = algorithm From 89a638b8b3008db7626e3d4af38b0d63afd46a5b Mon Sep 17 00:00:00 2001 From: wangyanfei01 Date: Thu, 15 Dec 2016 14:07:21 +0800 Subject: [PATCH 156/265] do pre-commit --- python/paddle/trainer_config_helpers/data_sources.py | 1 + 1 file changed, 1 insertion(+) diff --git a/python/paddle/trainer_config_helpers/data_sources.py b/python/paddle/trainer_config_helpers/data_sources.py index fc72014c91..3741bfe795 100644 --- a/python/paddle/trainer_config_helpers/data_sources.py +++ b/python/paddle/trainer_config_helpers/data_sources.py @@ -192,6 +192,7 @@ def define_py_data_sources2(train_list, test_list, module, obj, args=None): :return: None :rtype: None """ + def py_data2(files, load_data_module, load_data_object, load_data_args, **kwargs): data = DataBase() From e10e2d94a1dee4f24bdf033e2f1d9ab34eb733dc Mon Sep 17 00:00:00 2001 From: Luo Tao Date: Thu, 15 Dec 2016 13:42:14 +0800 Subject: [PATCH 157/265] add linkto CONTRIBUTING.md --- CONTRIBUTING.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) mode change 120000 => 100644 CONTRIBUTING.md diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md deleted file mode 120000 index f3eb8b4edb..0000000000 --- a/CONTRIBUTING.md +++ /dev/null @@ -1 +0,0 @@ -./doc/howto/contribute_to_paddle_en.md \ No newline at end of file diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md new file mode 100644 index 0000000000..0d4bb973ae --- /dev/null +++ b/CONTRIBUTING.md @@ -0,0 +1 @@ +./doc/howto/dev/contribute_to_paddle_en.md From c287b6b216aa74bbbea92a2897df907525c2d766 Mon Sep 17 00:00:00 2001 From: Peng Li Date: Thu, 15 Dec 2016 14:36:50 +0800 Subject: [PATCH 158/265] Add an extra parameter label to chunk_evaluator In order to keep consistent with other evaluators, an extra parameter label is add to chunk_evaluator. Document and demos are modified accordingly. 
--- demo/sequence_tagging/linear_crf.py | 3 ++- demo/sequence_tagging/rnn_crf.py | 3 ++- .../trainer_config_helpers/evaluators.py | 18 +++++++++++------- 3 files changed, 15 insertions(+), 9 deletions(-) diff --git a/demo/sequence_tagging/linear_crf.py b/demo/sequence_tagging/linear_crf.py index 736b580bb8..0624b17787 100644 --- a/demo/sequence_tagging/linear_crf.py +++ b/demo/sequence_tagging/linear_crf.py @@ -74,7 +74,8 @@ sum_evaluator( chunk_evaluator( name="chunk_f1", - input=[crf_decoding, chunk], + input=crf_decoding, + label=chunk, chunk_scheme="IOB", num_chunk_types=11, ) diff --git a/demo/sequence_tagging/rnn_crf.py b/demo/sequence_tagging/rnn_crf.py index ad1e7b68e7..b9b41b2433 100644 --- a/demo/sequence_tagging/rnn_crf.py +++ b/demo/sequence_tagging/rnn_crf.py @@ -112,7 +112,8 @@ sum_evaluator( chunk_evaluator( name="chunk_f1", - input=[crf_decoding, chunk], + input=crf_decoding, + label=chunk, chunk_scheme="IOB", num_chunk_types=11, ) diff --git a/python/paddle/trainer_config_helpers/evaluators.py b/python/paddle/trainer_config_helpers/evaluators.py index 0ee116d8c4..15b573b48e 100644 --- a/python/paddle/trainer_config_helpers/evaluators.py +++ b/python/paddle/trainer_config_helpers/evaluators.py @@ -327,9 +327,11 @@ def ctc_error_evaluator( @wrap_name_default() def chunk_evaluator( input, + label, + chunk_scheme, + num_chunk_types, name=None, - chunk_scheme=None, - num_chunk_types=None, ): + ): """ Chunk evaluator is used to evaluate segment labelling accuracy for a sequence. It calculates the chunk detection F1 score. @@ -363,22 +365,24 @@ def chunk_evaluator( .. code-block:: python - eval = chunk_evaluator(input) + eval = chunk_evaluator(input, label, chunk_scheme, num_chunk_types) :param input: The input layers. :type input: LayerOutput - :param name: The Evaluator name, it is not necessary. - :type name: basename|None + :param label: An input layer containing the ground truth label. + :type label: LayerOutput :param chunk_scheme: The labelling schemes support 4 types. It is one of - "IOB", "IOE", "IOBES", "plain".This Evaluator must - contain this chunk_scheme. + "IOB", "IOE", "IOBES", "plain". It is required. :type chunk_scheme: basestring :param num_chunk_types: number of chunk types other than "other" + :param name: The Evaluator name, it is optional. 
+ :type name: basename|None """ evaluator_base( name=name, type="chunk", input=input, + label=label, chunk_scheme=chunk_scheme, num_chunk_types=num_chunk_types) From ad6e3915d1030795c45496025e74d8be5414cded Mon Sep 17 00:00:00 2001 From: livc Date: Thu, 15 Dec 2016 14:54:58 +0800 Subject: [PATCH 159/265] modify details and add 'pre-commit' --- doc/howto/contribute_to_paddle_cn.md | 22 +++++++++++++++++----- 1 file changed, 17 insertions(+), 5 deletions(-) diff --git a/doc/howto/contribute_to_paddle_cn.md b/doc/howto/contribute_to_paddle_cn.md index 1e56f9549b..2e3e76836b 100644 --- a/doc/howto/contribute_to_paddle_cn.md +++ b/doc/howto/contribute_to_paddle_cn.md @@ -1,10 +1,10 @@ # 如何贡献代码 -我们真诚地感谢您的贡献。你能使用 fork 和 pull request 的工作流来合并(merge)代码。 +我们真诚地感谢您的贡献,欢迎通过 GitHub 的 fork 和 pull request 流程来提交代码。。 ## 代码要求 - 你的代码必须完全遵守 [doxygen](http://www.stack.nl/~dimitri/doxygen/) 的样式。 -- 确保编译器选项 WITH\_STYLE\_CHECK 已打开,并且编译器通过代码样式检查。 +- 确保编译器选项 WITH\_STYLE\_CHECK 已打开,并且编译能通过代码样式检查。 - 所有代码必须具有单元测试。 - 通过所有单元测试。 @@ -12,12 +12,11 @@ ## [Fork](https://help.github.com/articles/fork-a-repo/) -转到GitHub页面,然后单击“Fork”按钮。 -这就是这么简单。 +跳转到[PaddlePaddle](https://github.com/PaddlePaddle/Paddle) GitHub首页,然后单击 `Fork` 按钮。 ## 克隆(Clone) -Paddle 目前使用[git流分支模型](http://nvie.com/posts/a-successful-git-branching-model/)。 +Paddle 目前使用[git流分支模型](http://nvie.com/posts/a-successful-git-branching-model/)进行开发,测试,发行和维护。 **develop** 是主分支,其他用户分支是特征分支(feature branches)。 一旦你创建了一个fork,你可以使用你最喜欢的 git 客户端克隆你的仓库(repo)或只是直接在命令行输入: @@ -43,6 +42,19 @@ git submodule update --init --recursive git checkout -b MY_COOL_STUFF_BRANCH ``` +## 使用 `pre-commit` 钩子 + +Paddle 开发人员使用 [pre-commit](http://pre-commit.com/) 工具来管理git预提交钩子。 它可以帮助我们格式化源代码(cpp,python),在提交前检查一些基本事宜(每个文件只有一个 EOL +,git 中不要添加大文件)。 `pre-commit`测试是 Travis-CI 中单元测试的一部分,不满足钩子 +的 PR 不能提交代码到 Paddle。 + +你可以通过 `pip install pre-commit` 安装 [pre-commit](http://pre-commit.com/), +目前 Paddle 使用 `clang-format` 来格式化 c/cpp 资源。请确保 clang-format 版本在3.8以上。 + +然后只需在 Paddle clone 目录中运行 `pre-commit install` 。当你 +提交你的代码时,pre-commit 钩子会检查本地代码是否存在 +不适合提交的东西,等等。 + ## 提交(Commit) 提交你的代码: From 4213235a82c76f0b30ac23da1a3c21bc31ab4635 Mon Sep 17 00:00:00 2001 From: zhouyingfeng Date: Thu, 15 Dec 2016 15:01:11 +0800 Subject: [PATCH 160/265] update the chinese doc for gpu-profiling also, fix another code reference error in en doc. resolve #834 --- doc/howto/optimization/gpu_profiling_cn.rst | 6 +++--- doc/howto/optimization/gpu_profiling_en.rst | 2 +- 2 files changed, 4 insertions(+), 4 deletions(-) diff --git a/doc/howto/optimization/gpu_profiling_cn.rst b/doc/howto/optimization/gpu_profiling_cn.rst index 3132d3eaa1..d3def92c4b 100644 --- a/doc/howto/optimization/gpu_profiling_cn.rst +++ b/doc/howto/optimization/gpu_profiling_cn.rst @@ -1,4 +1,4 @@ -PaddlePaddle 中的性能分析 +PaddlePaddle 性能分析与调优 ===================================== 此教程将向您分步介绍如何使用内置的定时工具、 **nvprof** 或 **nvvp** 来运行性能分析和调优。 @@ -201,7 +201,7 @@ nvvp 工具 :scale: 33% 从内核函数的角度, **nvvp** 可以精确说明一个长耗时操作的具体原因。 -同时,如下图所示, **nvvp** 的内核block使用情况、register使用情况和共享内存使用情况能让我们对GPU的整体使用有更好的理解。 +同时,如下图所示, **nvvp** 的内核block使用情况、寄存器使用情况和共享内存使用情况能让我们对GPU的整体使用有更好的理解。 .. image:: nvvp2.png @@ -209,7 +209,7 @@ nvvp 工具 :scale: 33% 而从应用的角度, **nvvp** 可以帮您提供一些定位性能瓶颈的建议。 -例如,下图中就展示了一些关于data movement和compute utilization的建议,为您做性能调优提供了方向。 +例如,下图中就展示了一些关于内存数据迁徙和计算资源利用率的建议,为您做性能调优提供了方向。 .. 
image:: nvvp3.png :align: center diff --git a/doc/howto/optimization/gpu_profiling_en.rst b/doc/howto/optimization/gpu_profiling_en.rst index 0e3e6f9342..a54db6a3c2 100644 --- a/doc/howto/optimization/gpu_profiling_en.rst +++ b/doc/howto/optimization/gpu_profiling_en.rst @@ -53,7 +53,7 @@ above profilers. .. literalinclude:: ../../../paddle/math/tests/test_GpuProfiler.cpp :language: c++ - :lines: 111-124 + :lines: 137-151 :linenos: The above code snippet includes two methods, you can use any of them to profile the regions of interest. From 6516fd9294dd37ecfa38b3c9ed35417cc457a5d3 Mon Sep 17 00:00:00 2001 From: Peng Li Date: Thu, 15 Dec 2016 15:11:57 +0800 Subject: [PATCH 161/265] Fix formatting error. --- python/paddle/trainer_config_helpers/optimizers.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/python/paddle/trainer_config_helpers/optimizers.py b/python/paddle/trainer_config_helpers/optimizers.py index 819d40feb3..a53ebe160b 100644 --- a/python/paddle/trainer_config_helpers/optimizers.py +++ b/python/paddle/trainer_config_helpers/optimizers.py @@ -408,7 +408,7 @@ def settings(batch_size, args = [ 'batch_size', 'learning_rate', 'learning_rate_decay_a', - 'learning_rate_decay_b', 'learning_rate_schedule', 'learning_rate_args', + 'learning_rate_decay_b', 'learning_rate_schedule', 'learning_rate_args' ] kwargs = dict() kwargs['algorithm'] = algorithm From 08917e08c29c46d7799d1eb8f3bda114e08fa20e Mon Sep 17 00:00:00 2001 From: Yu Yang Date: Thu, 15 Dec 2016 15:13:21 +0800 Subject: [PATCH 162/265] Make Travis-CI pre-commit check more readable. --- paddle/scripts/travis/precommit.sh | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/paddle/scripts/travis/precommit.sh b/paddle/scripts/travis/precommit.sh index 5ad84f1821..7a59b1131d 100755 --- a/paddle/scripts/travis/precommit.sh +++ b/paddle/scripts/travis/precommit.sh @@ -12,6 +12,9 @@ cd .. export PATH=/usr/bin:$PATH pre-commit install clang-format --version -pre-commit run -a + +if ! 
pre-commit run -a ; then + git diff --exit-code +fi trap : 0 From f12dfdd224fb1186d418430df6ea1236683e7a6d Mon Sep 17 00:00:00 2001 From: livc Date: Thu, 15 Dec 2016 15:21:09 +0800 Subject: [PATCH 163/265] update details --- doc/howto/contribute_to_paddle_cn.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/doc/howto/contribute_to_paddle_cn.md b/doc/howto/contribute_to_paddle_cn.md index 2e3e76836b..e0a63f5a14 100644 --- a/doc/howto/contribute_to_paddle_cn.md +++ b/doc/howto/contribute_to_paddle_cn.md @@ -1,6 +1,6 @@ # 如何贡献代码 -我们真诚地感谢您的贡献,欢迎通过 GitHub 的 fork 和 pull request 流程来提交代码。。 +我们真诚地感谢您的贡献,欢迎通过 GitHub 的 fork 和 pull request 流程来提交代码。 ## 代码要求 - 你的代码必须完全遵守 [doxygen](http://www.stack.nl/~dimitri/doxygen/) 的样式。 @@ -49,7 +49,7 @@ Paddle 开发人员使用 [pre-commit](http://pre-commit.com/) 工具来管理gi 的 PR 不能提交代码到 Paddle。 你可以通过 `pip install pre-commit` 安装 [pre-commit](http://pre-commit.com/), -目前 Paddle 使用 `clang-format` 来格式化 c/cpp 资源。请确保 clang-format 版本在3.8以上。 +目前 Paddle 使用 `clang-format` 来调整C/C++源代码格式。请确保 clang-format 版本在3.8以上。 然后只需在 Paddle clone 目录中运行 `pre-commit install` 。当你 提交你的代码时,pre-commit 钩子会检查本地代码是否存在 @@ -87,7 +87,7 @@ git remote -v ```shell git pull --rebase upstream develop ``` -如果本地没有唯一提交,git 将简单地执行快进。但是,如果你一直在做一些改变(绝大多数情况下不应该),你可能要处理冲突。 +如果本地没有提交,git 将简单地执行快进。但是,如果你一直在做一些改变(绝大多数情况下不应该),你可能要处理冲突。 现在,你的本地主分支与上游修改的一致并是最新的。 @@ -104,7 +104,7 @@ git push -u origin MY_COOL_STUFF_BRANCH # 创建远程分支 MY_COOL_STUFF_BRAN ## 使用最新版本更新你的 pull 请求 -在代码审查(code review)期间,由于 baidu/Paddle 中新的提交导致你的 pull 请求可能会失效。如果没有冲突,GitHub允许自动更新。 你可以点击 pull request 页面中的“更新分支(Update Branch)”按钮。 但是在这种冲突情况下,你需要手动进行更新。你需要在本地仓库执行如下命令: +在代码审查(code review)期间,由于 baidu/Paddle 中新的提交导致你的 pull 请求可能会失效。如果没有冲突,GitHub允许自动更新。 你可以点击 pull request 页面中的“更新分支(Update Branch)”按钮。 但是如果存在代码冲突,你需要手动进行更新。你需要在本地仓库执行如下命令: ```shell git checkout MY_COOL_STUFF_BRANCH From 77130c31c2ef95d018b08581995baa253e2e0883 Mon Sep 17 00:00:00 2001 From: qiaolongfei Date: Thu, 15 Dec 2016 15:28:28 +0800 Subject: [PATCH 164/265] add python api_predict for quick start --- demo/quick_start/api_predict.py | 9 ++++----- 1 file changed, 4 insertions(+), 5 deletions(-) diff --git a/demo/quick_start/api_predict.py b/demo/quick_start/api_predict.py index 9c224e3cdb..a1a9ef7bca 100755 --- a/demo/quick_start/api_predict.py +++ b/demo/quick_start/api_predict.py @@ -18,13 +18,12 @@ from optparse import OptionParser from py_paddle import swig_paddle, DataProviderConverter from paddle.trainer.PyDataProvider2 import sparse_binary_vector from paddle.trainer.config_parser import parse_config - - """ Usage: run following command to show help message. python api_predict.py -h """ + class QuickStartPrediction(): def __init__(self, train_conf, dict_file, model_dir=None, label_file=None): """ @@ -72,9 +71,7 @@ class QuickStartPrediction(): transform word into integer index according to the dictionary. 
""" words = data.strip().split() - word_slot = [ - self.word_dict[w] for w in words if w in self.word_dict - ] + word_slot = [self.word_dict[w] for w in words if w in self.word_dict] return word_slot def batch_predict(self, data_batch): @@ -84,6 +81,7 @@ class QuickStartPrediction(): print("predicting labels is:") print prob + def option_parser(): usage = "python predict.py -n config -w model_dir -d dictionary -i input_file " parser = OptionParser(usage="usage: %s [options]" % usage) @@ -144,5 +142,6 @@ def main(): print labels predict.batch_predict(batch) + if __name__ == '__main__': main() From 520342ed9179a727cbe05ccfaa80cc491acd9eef Mon Sep 17 00:00:00 2001 From: gaoyuan Date: Thu, 15 Dec 2016 15:35:44 +0800 Subject: [PATCH 165/265] Fix code format --- paddle/gserver/layers/PriorBox.cpp | 18 ++++++++++-------- 1 file changed, 10 insertions(+), 8 deletions(-) diff --git a/paddle/gserver/layers/PriorBox.cpp b/paddle/gserver/layers/PriorBox.cpp index c9194235fd..dd52f61c30 100644 --- a/paddle/gserver/layers/PriorBox.cpp +++ b/paddle/gserver/layers/PriorBox.cpp @@ -13,8 +13,8 @@ See the License for the specific language governing permissions and limitations under the License. */ #include "Layer.h" -#include "paddle/math/Matrix.h" #include "paddle/math/BaseMatrix.h" +#include "paddle/math/Matrix.h" namespace paddle { /** @@ -100,6 +100,8 @@ void PriorBoxLayer::forward(PassType passType) { tmpPtr[idx++] = (centerY - boxHeight / 2.) / imageHeight; tmpPtr[idx++] = (centerX + boxWidth / 2.) / imageWidth; tmpPtr[idx++] = (centerY + boxHeight / 2.) / imageHeight; + // set the variance. + for (int t = 0; t < 4; t++) tmpPtr[idx++] = variance_[t]; if (maxSize_.size() > 0) { CHECK_EQ(minSize_.size(), maxSize_.size()); @@ -111,6 +113,8 @@ void PriorBoxLayer::forward(PassType passType) { tmpPtr[idx++] = (centerY - boxHeight / 2.) / imageHeight; tmpPtr[idx++] = (centerX + boxWidth / 2.) / imageWidth; tmpPtr[idx++] = (centerY + boxHeight / 2.) / imageHeight; + // set the variance. + for (int t = 0; t < 4; t++) tmpPtr[idx++] = variance_[t]; } } } @@ -124,17 +128,15 @@ void PriorBoxLayer::forward(PassType passType) { tmpPtr[idx++] = (centerY - boxHeight / 2.) / imageHeight; tmpPtr[idx++] = (centerX + boxWidth / 2.) / imageWidth; tmpPtr[idx++] = (centerY + boxHeight / 2.) / imageHeight; + // set the variance. + for (int t = 0; t < 4; t++) tmpPtr[idx++] = variance_[t]; } } } // clip the prior's coordidate such that it is within [0, 1] - for (int d = 0; d < dim; ++d) - tmpPtr[d] = std::min(std::max(tmpPtr[d], (float)0.), (float)1.); - // set the variance. 
- for (int h = 0; h < layerHeight; h++) - for (int w = 0; w < layerWidth; w++) - for (int i = 0; i < numPriors_; i++) - for (int j = 0; j < 4; j++) tmpPtr[idx++] = variance_[j]; + for (int d = 0; d < dim * 2; ++d) + if ((d % 8) < 4) + tmpPtr[d] = std::min(std::max(tmpPtr[d], (float)0.), (float)1.); MatrixPtr outV = getOutputValue(); outV->copyFrom(buffer_->data_, dim * 2); } From 0b37c2f67a2ddb31cae40060502a44ac5af9fffc Mon Sep 17 00:00:00 2001 From: Luo Tao Date: Thu, 15 Dec 2016 16:02:09 +0800 Subject: [PATCH 166/265] fix conflict --- doc/howto/{ => dev}/contribute_to_paddle_cn.md | 0 doc/howto/index_cn.rst | 3 +++ doc/howto/index_en.rst | 2 +- doc/howto/optimization/gpu_profiling_cn.rst | 7 +++++-- doc/howto/optimization/gpu_profiling_en.rst | 7 +++++-- doc/howto/optimization/index_en.rst | 7 ------- 6 files changed, 14 insertions(+), 12 deletions(-) rename doc/howto/{ => dev}/contribute_to_paddle_cn.md (100%) delete mode 100644 doc/howto/optimization/index_en.rst diff --git a/doc/howto/contribute_to_paddle_cn.md b/doc/howto/dev/contribute_to_paddle_cn.md similarity index 100% rename from doc/howto/contribute_to_paddle_cn.md rename to doc/howto/dev/contribute_to_paddle_cn.md diff --git a/doc/howto/index_cn.rst b/doc/howto/index_cn.rst index 618f0c6e72..e03138723e 100644 --- a/doc/howto/index_cn.rst +++ b/doc/howto/index_cn.rst @@ -18,6 +18,7 @@ :maxdepth: 1 dev/write_docs_cn.rst + dev/contribute_to_paddle_cn.md 模型配置 -------- @@ -32,3 +33,5 @@ .. toctree:: :maxdepth: 1 + + optimization/gpu_profiling_cn.rst diff --git a/doc/howto/index_en.rst b/doc/howto/index_en.rst index 1000d956a7..983dc743eb 100644 --- a/doc/howto/index_en.rst +++ b/doc/howto/index_en.rst @@ -33,4 +33,4 @@ Optimization .. toctree:: :maxdepth: 1 - optimization/index_en.rst + optimization/gpu_profiling_en.rst diff --git a/doc/howto/optimization/gpu_profiling_cn.rst b/doc/howto/optimization/gpu_profiling_cn.rst index d3def92c4b..e2b0b0396e 100644 --- a/doc/howto/optimization/gpu_profiling_cn.rst +++ b/doc/howto/optimization/gpu_profiling_cn.rst @@ -1,5 +1,8 @@ -PaddlePaddle 性能分析与调优 -===================================== +================== +GPU性能分析与调优 +================== + +.. contents:: 此教程将向您分步介绍如何使用内置的定时工具、 **nvprof** 或 **nvvp** 来运行性能分析和调优。 diff --git a/doc/howto/optimization/gpu_profiling_en.rst b/doc/howto/optimization/gpu_profiling_en.rst index a54db6a3c2..ed208ceaf7 100644 --- a/doc/howto/optimization/gpu_profiling_en.rst +++ b/doc/howto/optimization/gpu_profiling_en.rst @@ -1,5 +1,8 @@ -Profiling on PaddlePaddle -========================= +==================== +Tune GPU Performance +==================== + +.. contents:: This tutorial will guide you step-by-step through how to conduct profiling and performance tuning using built-in timer, **nvprof** and **nvvp**. diff --git a/doc/howto/optimization/index_en.rst b/doc/howto/optimization/index_en.rst deleted file mode 100644 index 84804fc9af..0000000000 --- a/doc/howto/optimization/index_en.rst +++ /dev/null @@ -1,7 +0,0 @@ -Tune GPU Performance -==================== - -.. toctree:: - :maxdepth: 3 - - gpu_profiling_en.rst From 59ae6612e1439e447c1950c52a88b3fbb9fed90c Mon Sep 17 00:00:00 2001 From: Peng Li Date: Thu, 15 Dec 2016 16:11:22 +0800 Subject: [PATCH 167/265] Fix formatting error. 
--- python/paddle/trainer_config_helpers/evaluators.py | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/python/paddle/trainer_config_helpers/evaluators.py b/python/paddle/trainer_config_helpers/evaluators.py index 15b573b48e..3e0e88972c 100644 --- a/python/paddle/trainer_config_helpers/evaluators.py +++ b/python/paddle/trainer_config_helpers/evaluators.py @@ -330,8 +330,7 @@ def chunk_evaluator( label, chunk_scheme, num_chunk_types, - name=None, - ): + name=None, ): """ Chunk evaluator is used to evaluate segment labelling accuracy for a sequence. It calculates the chunk detection F1 score. From d2d0010609b6ba621360973b6c6972b836607de3 Mon Sep 17 00:00:00 2001 From: hedaoyuan Date: Thu, 15 Dec 2016 16:19:10 +0800 Subject: [PATCH 168/265] add CrossMapNormalGradFunc --- paddle/gserver/layers/NormProjectionLayer.cpp | 41 +++-- paddle/gserver/layers/NormProjectionLayer.h | 7 +- paddle/math/Function.h | 2 +- paddle/math/cross_map_normal_op.cpp | 145 ++++++++++++------ paddle/math/cross_map_normal_op.h | 40 ++--- paddle/math/cross_map_normal_op_gpu.cu | 54 ++----- paddle/math/tests/test_matrixCompare.cpp | 57 ++++--- 7 files changed, 190 insertions(+), 156 deletions(-) diff --git a/paddle/gserver/layers/NormProjectionLayer.cpp b/paddle/gserver/layers/NormProjectionLayer.cpp index 03c6952c30..d6923c2192 100644 --- a/paddle/gserver/layers/NormProjectionLayer.cpp +++ b/paddle/gserver/layers/NormProjectionLayer.cpp @@ -13,10 +13,9 @@ See the License for the specific language governing permissions and limitations under the License. */ #include "NormProjectionLayer.h" +#include "paddle/math/cross_map_normal_op.h" #include "paddle/utils/Logging.h" #include "paddle/utils/Stat.h" -#include "paddle/math/cross_map_normal_op.h" -#include "NormProjectionLayer.h" namespace paddle { size_t CMRProjectionNormLayer::getSize() { @@ -48,13 +47,23 @@ bool CMRProjectionNormLayer::init(const LayerMap& layerMap, CHECK_EQ(config_.inputs_size(), 1); if (useGpu_) { - normal_ = FunctionBase::funcRegistrar_.createByType( + forward_ = FunctionBase::funcRegistrar_.createByType( FUNC_NAME(CrossMapNormal, GPU)); } else { - normal_ = FunctionBase::funcRegistrar_.createByType( + forward_ = FunctionBase::funcRegistrar_.createByType( FUNC_NAME(CrossMapNormal, CPU)); } - normal_->init( + forward_->init( + FuncConfig().set("size", size_).set("scale", scale_).set("pow", pow_)); + + if (useGpu_) { + backward_ = FunctionBase::funcRegistrar_.createByType( + FUNC_NAME(CrossMapNormalGrad, GPU)); + } else { + backward_ = FunctionBase::funcRegistrar_.createByType( + FUNC_NAME(CrossMapNormalGrad, CPU)); + } + backward_->init( FuncConfig().set("size", size_).set("scale", scale_).set("pow", pow_)); return true; @@ -74,13 +83,13 @@ void CMRProjectionNormLayer::forward(PassType passType) { Matrix::resizeOrCreate(denoms_, batchSize, size, /* trans */ false, useGpu_); - Dims dims{(size_t)batchSize, - (size_t)channels_, - (size_t)imgSizeH_, - (size_t)imgSizeW_}; - normal_->calc( - {Tensor(input->getData(), dims)}, - {Tensor(outV->getData(), dims), Tensor(denoms_->getData(), dims)}, + dims_ = {(size_t)batchSize, + (size_t)channels_, + (size_t)imgSizeH_, + (size_t)imgSizeW_}; + forward_->calc( + {Tensor(input->getData(), dims_)}, + {Tensor(outV->getData(), dims_), Tensor(denoms_->getData(), dims_)}, {}); } @@ -96,6 +105,13 @@ void CMRProjectionNormLayer::backward(const UpdateCallback& callback) { MatrixPtr localOutV = getOutputValue(); MatrixPtr preOutV = inputLayers_[0]->getOutputValue(); + 
backward_->calc({Tensor(preOutV->getData(), dims_), + Tensor(localOutV->getData(), dims_), + Tensor(localGrad->getData(), dims_), + Tensor(denoms_->getData(), dims_)}, + {Tensor(preOutGrad->getData(), dims_)}, + {}); +#if 0 if (useGpu_) { CrossMapNormalGrad crossGrad; crossGrad(dynamic_cast(*preOutGrad), @@ -123,5 +139,6 @@ void CMRProjectionNormLayer::backward(const UpdateCallback& callback) { scale_, pow_); } +#endif } } // namespace paddle diff --git a/paddle/gserver/layers/NormProjectionLayer.h b/paddle/gserver/layers/NormProjectionLayer.h index 1dc3921283..82aa427f8d 100644 --- a/paddle/gserver/layers/NormProjectionLayer.h +++ b/paddle/gserver/layers/NormProjectionLayer.h @@ -16,9 +16,8 @@ limitations under the License. */ #include #include "NormLayer.h" -#include "paddle/math/Matrix.h" #include "paddle/math/Function.h" -#include +#include "paddle/math/Matrix.h" namespace paddle { @@ -43,6 +42,8 @@ public: void backward(const UpdateCallback& callback = nullptr); protected: - FunctionBase* normal_; + Dims dims_; + FunctionBase* forward_; + FunctionBase* backward_; }; } // namespace paddle diff --git a/paddle/math/Function.h b/paddle/math/Function.h index f8fab972a6..095584c0b1 100644 --- a/paddle/math/Function.h +++ b/paddle/math/Function.h @@ -16,8 +16,8 @@ limitations under the License. */ #include #include -#include "paddle/utils/ClassRegistrar.h" #include "paddle/math/Matrix.h" +#include "paddle/utils/ClassRegistrar.h" namespace paddle { diff --git a/paddle/math/cross_map_normal_op.cpp b/paddle/math/cross_map_normal_op.cpp index e520351d2e..8547978c99 100644 --- a/paddle/math/cross_map_normal_op.cpp +++ b/paddle/math/cross_map_normal_op.cpp @@ -13,6 +13,7 @@ See the License for the specific language governing permissions and limitations under the License. 
*/ #include "cross_map_normal_op.h" +#include "paddle/math/Vector.h" namespace paddle { @@ -56,66 +57,49 @@ void CrossMapNormal(real* outputs, } template <> -void CrossMapNormalGrad::operator()(CpuMatrix& inputsGrad, - CpuMatrix& inputsValue, - CpuMatrix& outputsGrad, - CpuMatrix& outputsValue, - CpuMatrix& denoms, - size_t channels, - size_t imgSizeH, - size_t imgSizeW, - size_t sizeX, - real scale, - real pow) { - CHECK(inputsGrad.isContiguous()); - CHECK(outputsGrad.isContiguous()); - CHECK(denoms.isContiguous()); - CHECK(inputsValue.isContiguous()); - CHECK(outputsValue.isContiguous()); - CHECK_EQ(inputsGrad.getHeight(), outputsGrad.getHeight()); - CHECK_EQ(inputsGrad.getWidth(), outputsGrad.getWidth()); - CHECK_EQ(inputsGrad.getHeight(), denoms.getHeight()); - CHECK_EQ(inputsGrad.getWidth(), denoms.getWidth()); - CHECK_EQ(inputsGrad.getHeight(), inputsValue.getHeight()); - CHECK_EQ(inputsGrad.getWidth(), inputsValue.getWidth()); - CHECK_EQ(inputsGrad.getHeight(), outputsValue.getHeight()); - CHECK_EQ(inputsGrad.getWidth(), outputsValue.getWidth()); - - size_t numSample = inputsGrad.getHeight(); - size_t numCols = inputsGrad.getWidth(); - size_t imageSize = imgSizeH * imgSizeW; - CHECK(imageSize * channels == numCols); - +void CrossMapNormalGrad(real* inputsGrad, + real* inputsValue, + real* outputsValue, + real* outputsGrad, + real* denoms, + size_t numSamples, + size_t channels, + size_t height, + size_t width, + size_t size, + real scale, + real pow) { + size_t oneSample = channels * height * width; std::function oneImage = [=](real* data, size_t offset) { - return CpuVector(imageSize, data + offset); + return CpuVector(height * width, data + offset); }; - const int start = -((int)sizeX) / 2; - const int end = (int)sizeX + start; + const int start = -((int)size) / 2; + const int end = (int)size + start; const real ratio = -(real)2 * scale * pow; - for (size_t i = 0; i < numSample; i++) { - size_t sOffset = i * numCols; - real* inputGradData = inputsGrad.getData() + sOffset; - real* inputData = inputsValue.getData() + sOffset; - real* denomData = denoms.getData() + sOffset; - real* outputGradData = outputsGrad.getData() + sOffset; - real* outputData = outputsValue.getData() + sOffset; + for (size_t i = 0; i < numSamples; i++) { + size_t sOffset = i * oneSample; + real* oneInputGrad = inputsGrad + sOffset; + real* oneInputValue = inputsValue + sOffset; + real* oneDenom = denoms + sOffset; + real* oneOutputGrad = outputsGrad + sOffset; + real* oneOutputValue = outputsValue + sOffset; for (int c = 0; c < (int)channels; c++) { - size_t cOffset = c * imageSize; - CpuVector inputGrad = oneImage(inputGradData, cOffset); - CpuVector inputValue = oneImage(inputData, cOffset); - CpuVector denom = oneImage(denomData, cOffset); - CpuVector outputGrad = oneImage(outputGradData, cOffset); + size_t cOffset = c * height * width; + CpuVector inputGrad = oneImage(oneInputGrad, cOffset); + CpuVector inputValue = oneImage(oneInputValue, cOffset); + CpuVector denom = oneImage(oneDenom, cOffset); + CpuVector outputGrad = oneImage(oneOutputGrad, cOffset); inputGrad = inputGrad + denom.pow(-pow) * outputGrad; for (int s = start; s < end; s++) { if (c + s >= 0 && c + s < (int)channels) { - size_t offset = (c + s) * imageSize; - CpuVector output = oneImage(outputData, offset); - CpuVector outputGrad = oneImage(outputGradData, offset); - CpuVector denom = oneImage(denomData, offset); + size_t offset = (c + s) * height * width; + CpuVector output = oneImage(oneOutputValue, offset); + CpuVector outputGrad = 
oneImage(oneOutputGrad, offset); + CpuVector denom = oneImage(oneDenom, offset); inputGrad += ((outputGrad * output * ratio) / denom) * inputValue; } @@ -124,6 +108,11 @@ void CrossMapNormalGrad::operator()(CpuMatrix& inputsGrad, } } +/** + * \param inputs[0] input value. + * \param outputs[0] output value. + * \param outputs[1] denoms. + */ template class CrossMapNormalFunc : public FunctionBase { public: @@ -169,7 +158,65 @@ private: real pow_; }; +/** + * \param inputs[0] input value. + * \param inputs[1] output value. + * \param inputs[2] output grad. + * \param inputs[3] denoms. + * \param outputs[0] input grad. + */ +template +class CrossMapNormalGradFunc : public FunctionBase { +public: + void init(const FuncConfig& config) override { + size_ = config.get("size"); + scale_ = config.get("scale"); + pow_ = config.get("pow"); + } + + void calc(const Arguments& inputs, + const Arguments& outputs, + const Arguments& inouts) override { + CHECK_EQ(4, inputs.size()); + CHECK_EQ(1, outputs.size()); + CHECK_EQ(0, inouts.size()); + + CHECK_EQ(inputs[0].dims_.size(), 4); + for (size_t i = 0; i < inputs[0].dims_.size(); i++) { + CHECK_EQ(inputs[0].dims_[i], inputs[1].dims_[i]); + CHECK_EQ(inputs[0].dims_[i], inputs[2].dims_[i]); + CHECK_EQ(inputs[0].dims_[i], inputs[3].dims_[i]); + CHECK_EQ(inputs[0].dims_[i], outputs[0].dims_[i]); + } + + size_t samples = inputs[0].dims_[0]; + size_t channels = inputs[0].dims_[1]; + size_t height = inputs[0].dims_[2]; + size_t width = inputs[0].dims_[3]; + + CrossMapNormalGrad(outputs[0].getData(), + inputs[0].getData(), + inputs[1].getData(), + inputs[2].getData(), + inputs[3].getData(), + samples, + channels, + height, + width, + size_, + scale_, + pow_); + } + +private: + size_t size_; + real scale_; + real pow_; +}; + REGISTER_TYPED_FUNC(CrossMapNormal, CPU, CrossMapNormalFunc); REGISTER_TYPED_FUNC(CrossMapNormal, GPU, CrossMapNormalFunc); +REGISTER_TYPED_FUNC(CrossMapNormalGrad, CPU, CrossMapNormalGradFunc); +REGISTER_TYPED_FUNC(CrossMapNormalGrad, GPU, CrossMapNormalGradFunc); } // namespace paddle diff --git a/paddle/math/cross_map_normal_op.h b/paddle/math/cross_map_normal_op.h index ef9533485e..f065208084 100644 --- a/paddle/math/cross_map_normal_op.h +++ b/paddle/math/cross_map_normal_op.h @@ -15,7 +15,6 @@ limitations under the License. 
*/ #pragma once #include "Function.h" -#include "paddle/math/Matrix.h" namespace paddle { @@ -30,34 +29,19 @@ void CrossMapNormal(real* outputs, size_t size, real scale, real pow); -#if 0 -template -struct CrossMapNormal { - void operator()(typename MatrixT::type& outputs, - typename MatrixT::type& denoms, - typename MatrixT::type& inputs, - size_t channels, - size_t imgSizeH, - size_t imgSizeW, - size_t sizeX, - real scale, - real pow); -}; -#endif template -struct CrossMapNormalGrad { - void operator()(typename MatrixT::type& inputsGrad, - typename MatrixT::type& inputsValue, - typename MatrixT::type& outputsGrad, - typename MatrixT::type& outputsValue, - typename MatrixT::type& denoms, - size_t channels, - size_t imgSizeH, - size_t imgSizeW, - size_t sizeX, - real scale, - real pow); -}; +void CrossMapNormalGrad(real* inputsGrad, + real* inputsValue, + real* outputsValue, + real* outputsGrad, + real* denoms, + size_t numSamples, + size_t channels, + size_t height, + size_t width, + size_t size, + real scale, + real pow); } // namespace paddle diff --git a/paddle/math/cross_map_normal_op_gpu.cu b/paddle/math/cross_map_normal_op_gpu.cu index 9b92974344..6339c04194 100644 --- a/paddle/math/cross_map_normal_op_gpu.cu +++ b/paddle/math/cross_map_normal_op_gpu.cu @@ -131,48 +131,26 @@ __global__ void KeCMRNormDiff(size_t imageSize, const real* bottom_data, } template <> -void CrossMapNormalGrad::operator()(GpuMatrix& inputsGrad, - GpuMatrix& inputsValue, - GpuMatrix& outputsGrad, - GpuMatrix& outputsValue, - GpuMatrix& denoms, - size_t channels, - size_t imgSizeH, - size_t imgSizeW, - size_t sizeX, - real scale, - real pow) { - CHECK(inputsGrad.isContiguous()); - CHECK(outputsGrad.isContiguous()); - CHECK(denoms.isContiguous()); - CHECK(inputsValue.isContiguous()); - CHECK(outputsValue.isContiguous()); - CHECK_EQ(inputsGrad.getHeight(), outputsGrad.getHeight()); - CHECK_EQ(inputsGrad.getWidth(), outputsGrad.getWidth()); - CHECK_EQ(inputsGrad.getHeight(), denoms.getHeight()); - CHECK_EQ(inputsGrad.getWidth(), denoms.getWidth()); - CHECK_EQ(inputsGrad.getHeight(), inputsValue.getHeight()); - CHECK_EQ(inputsGrad.getWidth(), inputsValue.getWidth()); - CHECK_EQ(inputsGrad.getHeight(), outputsValue.getHeight()); - CHECK_EQ(inputsGrad.getWidth(), outputsValue.getWidth()); - - size_t numSample = inputsGrad.getHeight(); - size_t numCols = inputsGrad.getWidth(); - CHECK(imgSizeH * imgSizeW * channels == numCols); - - size_t imageSize = numSample * imgSizeH * imgSizeW; - real* inputsGradData = inputsGrad.getData(); - real* inputsData = inputsValue.getData(); - real* denomsData = denoms.getData(); - real* outputsGradData = outputsGrad.getData(); - real* outputsData = outputsValue.getData(); +void CrossMapNormalGrad(real* inputsGrad, + real* inputsValue, + real* outputsValue, + real* outputsGrad, + real* denoms, + size_t numSamples, + size_t channels, + size_t height, + size_t width, + size_t size, + real scale, + real pow) { + size_t imageSize = numSamples * height * width; int blockSize = 1024; int gridSize = (imageSize + 1024 - 1) / 1024; KeCMRNormDiff <<>> - (imageSize, inputsData, outputsData, denomsData, outputsGradData, channels, - imgSizeH, imgSizeW, sizeX, -pow, 2.0f * pow * scale, inputsGradData); - CHECK_SYNC("KeCMRNormDiff"); + (imageSize, inputsValue, outputsValue, denoms, outputsGrad, channels, + height, width, size, -pow, 2.0f * pow * scale, inputsGrad); + CHECK_SYNC("CrossMapNormalGrad"); } } // namespace paddle diff --git a/paddle/math/tests/test_matrixCompare.cpp 
b/paddle/math/tests/test_matrixCompare.cpp index 0341d757f3..bc14651457 100644 --- a/paddle/math/tests/test_matrixCompare.cpp +++ b/paddle/math/tests/test_matrixCompare.cpp @@ -19,12 +19,11 @@ limitations under the License. */ #include #include "TensorCheck.h" #include "paddle/gserver/tests/TestUtil.h" +#include "paddle/math/Function.h" #include "paddle/math/Matrix.h" #include "paddle/math/SparseMatrix.h" -#include "paddle/utils/Stat.h" -#include "TensorCheck.h" #include "paddle/math/cross_map_normal_op.h" -#include "paddle/math/Function.h" +#include "paddle/utils/Stat.h" #include "paddle/utils/Util.h" using namespace paddle; // NOLINT @@ -1282,12 +1281,6 @@ void testCrossMapNormalFwd( inputsGpu.copyFrom(inputs); outputsGpu.copyFrom(outputs); -#if 0 - FuncConfig config; - config.set("size", (size_t)sizeX); - config.set("scale", scale); - config.set("pow", pow); -#endif FunctionBase* cpu = FunctionBase::funcRegistrar_.createByType(FUNC_NAME(CrossMapNormal, CPU)); FunctionBase* gpu = @@ -1311,22 +1304,6 @@ void testCrossMapNormalFwd( {Tensor(inputsGpu.getData(), dims)}, {Tensor(outputsGpu.getData(), dims), Tensor(denomsGpu.getData(), dims)}, {}); -#if 0 - CrossMapNormal cpuCross; - cpuCross( - outputs, denoms, inputs, channels, imgSizeH, imgSizeW, sizeX, scale, pow); - - CrossMapNormal gpuCross; - gpuCross(outputsGpu, - denomsGpu, - inputsGpu, - channels, - imgSizeH, - imgSizeW, - sizeX, - scale, - pow); -#endif TensorCheckErr(outputs, outputsGpu); TensorCheckErr(denoms, denomsGpu); @@ -1381,6 +1358,35 @@ void testCrossMapNormalBwd( outputsValueGpu.copyFrom(outputsValue); inputsGradGpu.copyFrom(inputsGrad); + FunctionBase* cpu = FunctionBase::funcRegistrar_.createByType( + FUNC_NAME(CrossMapNormalGrad, CPU)); + FunctionBase* gpu = FunctionBase::funcRegistrar_.createByType( + FUNC_NAME(CrossMapNormalGrad, GPU)); + cpu->init(FuncConfig() + .set("size", (size_t)sizeX) + .set("scale", scale) + .set("pow", pow)); + gpu->init(FuncConfig() + .set("size", (size_t)sizeX) + .set("scale", scale) + .set("pow", pow)); + + Dims dims{ + (size_t)numSamples, (size_t)channels, (size_t)imgSizeH, (size_t)imgSizeW}; + cpu->calc({Tensor(inputsValue.getData(), dims), + Tensor(outputsValue.getData(), dims), + Tensor(outputsGrad.getData(), dims), + Tensor(denoms.getData(), dims)}, + {Tensor(inputsGrad.getData(), dims)}, + {}); + + gpu->calc({Tensor(inputsValueGpu.getData(), dims), + Tensor(outputsValueGpu.getData(), dims), + Tensor(outputsGradGpu.getData(), dims), + Tensor(denomsGpu.getData(), dims)}, + {Tensor(inputsGradGpu.getData(), dims)}, + {}); +#if 0 CrossMapNormalGrad cpuCross; cpuCross(inputsGrad, inputsValue, @@ -1406,6 +1412,7 @@ void testCrossMapNormalBwd( sizeX, scale, pow); +#endif TensorCheckErr(inputsGrad, inputsGradGpu); } From 7439025753b7af36ee9e7b424ab38f2c7c7c8310 Mon Sep 17 00:00:00 2001 From: Yu Yang Date: Thu, 15 Dec 2016 16:26:55 +0800 Subject: [PATCH 169/265] Refine code --- paddle/scripts/travis/before_install.linux.sh | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/paddle/scripts/travis/before_install.linux.sh b/paddle/scripts/travis/before_install.linux.sh index 3351deddb9..c23acfb60e 100755 --- a/paddle/scripts/travis/before_install.linux.sh +++ b/paddle/scripts/travis/before_install.linux.sh @@ -1,6 +1,6 @@ #!/bin/bash set -e - +pip install protobuf cd /tmp wget https://github.com/google/protobuf/archive/v3.0.2.tar.gz -O protobuf.tar.gz tar xf protobuf.tar.gz From 22a5e478f3b6ecc0e43d31abce39a686b6331165 Mon Sep 17 00:00:00 2001 From: hedaoyuan Date: Thu, 15 Dec 2016 
16:36:51 +0800 Subject: [PATCH 170/265] move Function to function dir --- paddle/{math => function}/Function.cpp | 0 paddle/{math => function}/Function.h | 0 paddle/{math => function}/cross_map_normal_op.cpp | 0 paddle/{math => function}/cross_map_normal_op.h | 0 paddle/{math => function}/cross_map_normal_op_gpu.cu | 0 paddle/gserver/layers/NormProjectionLayer.cpp | 1 - paddle/gserver/layers/NormProjectionLayer.h | 2 +- paddle/math/tests/test_matrixCompare.cpp | 3 +-- 8 files changed, 2 insertions(+), 4 deletions(-) rename paddle/{math => function}/Function.cpp (100%) rename paddle/{math => function}/Function.h (100%) rename paddle/{math => function}/cross_map_normal_op.cpp (100%) rename paddle/{math => function}/cross_map_normal_op.h (100%) rename paddle/{math => function}/cross_map_normal_op_gpu.cu (100%) diff --git a/paddle/math/Function.cpp b/paddle/function/Function.cpp similarity index 100% rename from paddle/math/Function.cpp rename to paddle/function/Function.cpp diff --git a/paddle/math/Function.h b/paddle/function/Function.h similarity index 100% rename from paddle/math/Function.h rename to paddle/function/Function.h diff --git a/paddle/math/cross_map_normal_op.cpp b/paddle/function/cross_map_normal_op.cpp similarity index 100% rename from paddle/math/cross_map_normal_op.cpp rename to paddle/function/cross_map_normal_op.cpp diff --git a/paddle/math/cross_map_normal_op.h b/paddle/function/cross_map_normal_op.h similarity index 100% rename from paddle/math/cross_map_normal_op.h rename to paddle/function/cross_map_normal_op.h diff --git a/paddle/math/cross_map_normal_op_gpu.cu b/paddle/function/cross_map_normal_op_gpu.cu similarity index 100% rename from paddle/math/cross_map_normal_op_gpu.cu rename to paddle/function/cross_map_normal_op_gpu.cu diff --git a/paddle/gserver/layers/NormProjectionLayer.cpp b/paddle/gserver/layers/NormProjectionLayer.cpp index d6923c2192..e69c406993 100644 --- a/paddle/gserver/layers/NormProjectionLayer.cpp +++ b/paddle/gserver/layers/NormProjectionLayer.cpp @@ -13,7 +13,6 @@ See the License for the specific language governing permissions and limitations under the License. */ #include "NormProjectionLayer.h" -#include "paddle/math/cross_map_normal_op.h" #include "paddle/utils/Logging.h" #include "paddle/utils/Stat.h" diff --git a/paddle/gserver/layers/NormProjectionLayer.h b/paddle/gserver/layers/NormProjectionLayer.h index 82aa427f8d..3c4876ece6 100644 --- a/paddle/gserver/layers/NormProjectionLayer.h +++ b/paddle/gserver/layers/NormProjectionLayer.h @@ -16,7 +16,7 @@ limitations under the License. */ #include #include "NormLayer.h" -#include "paddle/math/Function.h" +#include "paddle/function/Function.h" #include "paddle/math/Matrix.h" namespace paddle { diff --git a/paddle/math/tests/test_matrixCompare.cpp b/paddle/math/tests/test_matrixCompare.cpp index bc14651457..da7a585484 100644 --- a/paddle/math/tests/test_matrixCompare.cpp +++ b/paddle/math/tests/test_matrixCompare.cpp @@ -18,11 +18,10 @@ limitations under the License. 
*/ #include #include "TensorCheck.h" +#include "paddle/function/Function.h" #include "paddle/gserver/tests/TestUtil.h" -#include "paddle/math/Function.h" #include "paddle/math/Matrix.h" #include "paddle/math/SparseMatrix.h" -#include "paddle/math/cross_map_normal_op.h" #include "paddle/utils/Stat.h" #include "paddle/utils/Util.h" From 7b08a98a8f920f21703dc83b1828f12a6f9fa674 Mon Sep 17 00:00:00 2001 From: wangyanfei01 Date: Thu, 15 Dec 2016 16:44:15 +0800 Subject: [PATCH 171/265] do pre-commit --- python/paddle/trainer/PyDataProvider2.py | 1 + 1 file changed, 1 insertion(+) diff --git a/python/paddle/trainer/PyDataProvider2.py b/python/paddle/trainer/PyDataProvider2.py index 0f0b0a1175..de266bb5d3 100644 --- a/python/paddle/trainer/PyDataProvider2.py +++ b/python/paddle/trainer/PyDataProvider2.py @@ -106,6 +106,7 @@ def integer_value_sequence(dim): def integer_value_sub_sequence(dim): return integer_value(dim, seq_type=SequenceType.SUB_SEQUENCE) + integer_sequence = integer_value_sequence From 7c7430470d79f7fccdf0cfc1cefb12b018d9c573 Mon Sep 17 00:00:00 2001 From: wangyanfei01 Date: Thu, 15 Dec 2016 16:55:31 +0800 Subject: [PATCH 172/265] follow comments: more readable LOG --- paddle/trainer/Tester.cpp | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/paddle/trainer/Tester.cpp b/paddle/trainer/Tester.cpp index 24f7271734..24fac3e5a8 100644 --- a/paddle/trainer/Tester.cpp +++ b/paddle/trainer/Tester.cpp @@ -48,9 +48,9 @@ Tester::Tester(const std::shared_ptr& config, testDataProvider_(testDataProvider) { if (config_->getOptConfig().use_sparse_remote_updater()) { LOG(FATAL) << "It's prohibited to set sparse_remote_update " - << "in some layers if testing will be under going " - << "in the middle of training. You can do testing " - << "within separate process."; + << "when doing train and test jobs in the same " + << "process. You could run paddle --job=test in " + << "a separate process."; } testEvaluator_.reset(gradientMachine_->makeEvaluator()); if (intconfig_->distributeTest) { From 558e86927caa2bbe0bc97b287f9d1abe73cfaaa3 Mon Sep 17 00:00:00 2001 From: hedaoyuan Date: Thu, 15 Dec 2016 17:12:22 +0800 Subject: [PATCH 173/265] add CMakeLists --- cmake/util.cmake | 1 + paddle/CMakeLists.txt | 1 + paddle/function/CMakeLists.txt | 12 ++++++++++++ paddle/function/cross_map_normal_op.cpp | 4 +++- paddle/gserver/CMakeLists.txt | 8 ++------ 5 files changed, 19 insertions(+), 7 deletions(-) create mode 100644 paddle/function/CMakeLists.txt diff --git a/cmake/util.cmake b/cmake/util.cmake index 38366373c6..03734e7839 100644 --- a/cmake/util.cmake +++ b/cmake/util.cmake @@ -96,6 +96,7 @@ function(link_paddle_exe TARGET_NAME) target_circle_link_libraries(${TARGET_NAME} ARCHIVE_START paddle_gserver + paddle_function ${METRIC_LIBS} ARCHIVE_END paddle_pserver diff --git a/paddle/CMakeLists.txt b/paddle/CMakeLists.txt index fb3af8ea92..2daea052b0 100644 --- a/paddle/CMakeLists.txt +++ b/paddle/CMakeLists.txt @@ -1,4 +1,5 @@ add_subdirectory(cuda) +add_subdirectory(function) add_subdirectory(utils) add_subdirectory(math) add_subdirectory(parameter) diff --git a/paddle/function/CMakeLists.txt b/paddle/function/CMakeLists.txt new file mode 100644 index 0000000000..8fad0e3ebd --- /dev/null +++ b/paddle/function/CMakeLists.txt @@ -0,0 +1,12 @@ +file(GLOB FUNCTION_HEADERS . *.h) + +if(NOT WITH_GPU) + file(GLOB FUNCTION_SOURCES . *.cpp) + add_library(paddle_function STATIC ${FUNCTION_SOURCES}) +else() + file(GLOB FUNCTION_SOURCES . 
*.cpp *.cu) + cuda_add_library(paddle_function ${FUNCTION_SOURCES}) +endif() + +add_style_check_target(paddle_function ${FUNCTION_SOURCES}) +add_style_check_target(paddle_function ${FUNCTION_HEADERS}) diff --git a/paddle/function/cross_map_normal_op.cpp b/paddle/function/cross_map_normal_op.cpp index 8547978c99..0391a58d89 100644 --- a/paddle/function/cross_map_normal_op.cpp +++ b/paddle/function/cross_map_normal_op.cpp @@ -215,8 +215,10 @@ private: }; REGISTER_TYPED_FUNC(CrossMapNormal, CPU, CrossMapNormalFunc); -REGISTER_TYPED_FUNC(CrossMapNormal, GPU, CrossMapNormalFunc); REGISTER_TYPED_FUNC(CrossMapNormalGrad, CPU, CrossMapNormalGradFunc); +#ifndef PADDLE_ONLY_CPU +REGISTER_TYPED_FUNC(CrossMapNormal, GPU, CrossMapNormalFunc); REGISTER_TYPED_FUNC(CrossMapNormalGrad, GPU, CrossMapNormalGradFunc); +#endif } // namespace paddle diff --git a/paddle/gserver/CMakeLists.txt b/paddle/gserver/CMakeLists.txt index a066f80c22..4f92150ec8 100644 --- a/paddle/gserver/CMakeLists.txt +++ b/paddle/gserver/CMakeLists.txt @@ -27,16 +27,12 @@ if(NOT WITH_GPU) list(REMOVE_ITEM GSERVER_HEADER layers/CudnnConvLayer.h layers/CudnnPoolLayer.h - layers/CudnnBatchNormLayer.h - layers/NormProjectionLayer.h - layers/NormLayer.h) + layers/CudnnBatchNormLayer.h) list(REMOVE_ITEM GSERVER_SOURCES layers/CudnnConvLayer.cpp layers/CudnnPoolLayer.cpp - layers/CudnnBatchNormLayer.cpp - layers/NormProjectionLayer.cpp - layers/NormLayer.cpp) + layers/CudnnBatchNormLayer.cpp) compile_cu_as_cpp(layers/LstmCompute.cu) compile_cu_as_cpp(layers/GruCompute.cu) endif() From d11e2b401348c147b20507863a43b8952f17d6a1 Mon Sep 17 00:00:00 2001 From: hedaoyuan Date: Thu, 15 Dec 2016 17:33:01 +0800 Subject: [PATCH 174/265] Remove some useless code --- paddle/cuda/include/hl_cnn.h | 56 ------ paddle/cuda/include/stub/hl_cnn_stub.h | 24 --- paddle/cuda/src/hl_cuda_cnn.cu | 120 ------------ paddle/gserver/layers/NormProjectionLayer.cpp | 29 --- paddle/math/Matrix.cpp | 176 ------------------ paddle/math/Matrix.h | 65 ------- paddle/math/tests/test_matrixCompare.cpp | 27 --- 7 files changed, 497 deletions(-) diff --git a/paddle/cuda/include/hl_cnn.h b/paddle/cuda/include/hl_cnn.h index 06ee3b3654..c5787630ab 100644 --- a/paddle/cuda/include/hl_cnn.h +++ b/paddle/cuda/include/hl_cnn.h @@ -240,62 +240,6 @@ extern void hl_avgpool_backward(const int frameCnt, real* backGrad, const int outStride); -/** - * @brief Cross-map-respose normalize forward. - * - * @param[in] frameCnt batch size of input image. - * @param[in] in input data. - * @param[in] scale buffer. - * @param[out] out output data. - * @param[in] channels number of channel. - * @param[in] height image height. - * @param[in] width image width. - * @param[in] sizeX size. - * @param[in] alpha scale. - * @param[in] beta scale. - * - */ -extern void hl_CMRNorm_forward(size_t frameCnt, - const real* in, - real* scale, - real* out, - size_t channels, - size_t height, - size_t width, - size_t sizeX, - real alpha, - real beta); - -/** - * @brief Cross-map-respose normalize backward. - * - * @param[in] frameCnt batch size of input image. - * @param[in] inV input data. - * @param[in] scale buffer. - * @param[out] outV output value. - * @param[out] outDiff output grad. - * @param[out] inDiff input grad. - * @param[in] channels number of channel. - * @param[in] height image height. - * @param[in] width image width. - * @param[in] sizeX size. - * @param[in] alpha scale. - * @param[in] beta scale. 
- * - */ -extern void hl_CMRNorm_backward(size_t frameCnt, - const real* inV, - const real* scale, - const real* outV, - const real* outDiff, - real* inDiff, - size_t channels, - size_t height, - size_t width, - size_t sizeX, - real alpha, - real beta); - /** * @brief Bilinear interpolation forward. * diff --git a/paddle/cuda/include/stub/hl_cnn_stub.h b/paddle/cuda/include/stub/hl_cnn_stub.h index 52c9787352..039551c6cc 100644 --- a/paddle/cuda/include/stub/hl_cnn_stub.h +++ b/paddle/cuda/include/stub/hl_cnn_stub.h @@ -117,30 +117,6 @@ inline void hl_avgpool_backward(const int frameCnt, real* backGrad, const int outStride) {} -inline void hl_CMRNorm_forward(size_t frameCnt, - const real* in, - real* scale, - real* out, - size_t channels, - size_t height, - size_t width, - size_t sizeX, - real alpha, - real beta) {} - -inline void hl_CMRNorm_backward(size_t frameCnt, - const real* inV, - const real* scale, - const real* outV, - const real* outDiff, - real* inDiff, - size_t channels, - size_t height, - size_t width, - size_t sizeX, - real alpha, - real beta) {} - inline void hl_bilinear_forward(const real* inData, const size_t inImgH, const size_t inImgW, diff --git a/paddle/cuda/src/hl_cuda_cnn.cu b/paddle/cuda/src/hl_cuda_cnn.cu index 1516accaae..b94f4d8fe4 100644 --- a/paddle/cuda/src/hl_cuda_cnn.cu +++ b/paddle/cuda/src/hl_cuda_cnn.cu @@ -381,126 +381,6 @@ void hl_avgpool_backward(const int frameCnt, const real* outGrad, CHECK_SYNC("hl_avgpool_backward failed"); } -__global__ void KeCMRNormFillScale(size_t imageSize, const real* in, - real* scale, size_t channels, - size_t height, size_t width, size_t size, - real alpha) { - const int idx = threadIdx.x + blockIdx.x * blockDim.x; - if (idx < imageSize) { - const int w = idx % width; - const int h = (idx / width) % height; - const int n = idx / width / height; - const int offset = (n * channels * height + h) * width + w; - - in += offset; - scale += offset; - const int step = height * width; - const int pre_pad = (size - 1) / 2; - const int post_pad = size - pre_pad - 1; - - real accum = 0; - int index = 0; - while (index < channels + post_pad) { - if (index < channels) { - accum += in[index * step] * in[index * step]; - } - if (index >= size) { - accum -= in[(index - size) * step] * in[(index - size) * step]; - } - if (index >= post_pad) { - scale[(index - post_pad) * step] = 1. 
+ accum * alpha; - } - ++index; - } - } -} - -__global__ void KeCMRNormOutput(size_t inputSize, const real* in, - const real* scale, real negative_beta, - real* out) { - const int index = threadIdx.x + blockIdx.x * blockDim.x; - if (index < inputSize) { - out[index] = in[index] * pow(scale[index], negative_beta); - } -} - -void hl_CMRNorm_forward(size_t frameCnt, const real* in, real* scale, - real* out, size_t channels, - size_t height, size_t width, size_t sizeX, - real alpha, real beta) { - size_t imageSize = frameCnt * height * width; - int blockSize = 1024; - int gridSize = (imageSize + 1024 - 1) / 1024; - KeCMRNormFillScale<<>> - (imageSize, in, scale, channels, height, width, sizeX, alpha); - - size_t inputSize = frameCnt * height * width *channels; - blockSize = 1024; - gridSize = (inputSize + 1024 - 1) / 1024; - KeCMRNormOutput<<>> - (inputSize, in, scale, beta, out); - CHECK_SYNC("hl_CMRNorm_forward"); -} - -__global__ void KeCMRNormDiff(size_t imageSize, const real* bottom_data, - const real* top_data, const real* scale, - const real* top_diff, size_t channels, - size_t height, size_t width, size_t size, - real negative_beta, real cache_ratio, - real* bottom_diff ) { - const int idx = threadIdx.x + blockIdx.x * blockDim.x; - if (idx < imageSize) { - const int w = idx % width; - const int h = (idx / width) % height; - const int n = idx / width / height; - const int offset = (n * channels * height + h) * width + w; - bottom_data += offset; - top_data += offset; - scale += offset; - top_diff += offset; - bottom_diff += offset; - - const int step = height * width; - const int pre_pad = size - (size + 1) / 2; - const int post_pad = size - pre_pad - 1; - - int index = 0; - real accum = 0; - while (index < channels + post_pad) { - if (index < channels) { - accum += top_diff[index * step] * top_data[index * step] / - scale[index * step]; - } - if (index >= size) { - accum -= top_diff[(index - size) * step] * - top_data[(index - size) * step] / scale[(index - size) * step]; - } - if (index >= post_pad) { - bottom_diff[(index - post_pad) * step] += - top_diff[(index - post_pad) * step] * - pow(scale[(index - post_pad) * step], negative_beta) - cache_ratio * - bottom_data[(index - post_pad) * step] * accum; - } - ++index; - } - } -} - -void hl_CMRNorm_backward(size_t frameCnt, const real* inV, - const real* scale, - const real* outV, const real* outDiff, - real *inDiff, size_t channels, - size_t height, size_t width, size_t sizeX, - real alpha, real beta) { - size_t imageSize = frameCnt * height * width; - int blockSize = 1024; - int gridSize = (imageSize + 1024 - 1) / 1024; - KeCMRNormDiff <<>> - (imageSize, inV, outV, scale, outDiff, channels, - height, width, sizeX, alpha, beta, inDiff); - CHECK_SYNC("hl_CMRNorm_backward"); -} - __global__ void KeBilinearInterpFw(const real* in, const size_t inImgH, const size_t inImgW, diff --git a/paddle/gserver/layers/NormProjectionLayer.cpp b/paddle/gserver/layers/NormProjectionLayer.cpp index e69c406993..4ff3b805fb 100644 --- a/paddle/gserver/layers/NormProjectionLayer.cpp +++ b/paddle/gserver/layers/NormProjectionLayer.cpp @@ -110,34 +110,5 @@ void CMRProjectionNormLayer::backward(const UpdateCallback& callback) { Tensor(denoms_->getData(), dims_)}, {Tensor(preOutGrad->getData(), dims_)}, {}); -#if 0 - if (useGpu_) { - CrossMapNormalGrad crossGrad; - crossGrad(dynamic_cast(*preOutGrad), - dynamic_cast(*preOutV), - dynamic_cast(*localGrad), - dynamic_cast(*localOutV), - dynamic_cast(*denoms_), - channels_, - imgSizeH_, - imgSizeW_, - size_, - 
scale_, - pow_); - } else { - CrossMapNormalGrad crossGrad; - crossGrad(dynamic_cast(*preOutGrad), - dynamic_cast(*preOutV), - dynamic_cast(*localGrad), - dynamic_cast(*localOutV), - dynamic_cast(*denoms_), - channels_, - imgSizeH_, - imgSizeW_, - size_, - scale_, - pow_); - } -#endif } } // namespace paddle diff --git a/paddle/math/Matrix.cpp b/paddle/math/Matrix.cpp index 2cde11dd47..a36c31d32b 100644 --- a/paddle/math/Matrix.cpp +++ b/paddle/math/Matrix.cpp @@ -1265,69 +1265,6 @@ void GpuMatrix::avgPoolBackward(Matrix& outGrad, outGrad.getStride()); } -void GpuMatrix::crossMapNormalFwd(Matrix& input, - size_t imgSizeH, - size_t imgSizeW, - Matrix& denoms, - size_t channels, - size_t sizeX, - float scale, - float pow) { - size_t num = input.getHeight(); - size_t height = imgSizeH; - size_t width = imgSizeW; - - CHECK(height * width * channels == input.getWidth()); - CHECK(denoms.getHeight() == input.getHeight() && - denoms.getWidth() == input.getWidth() && input.getHeight() == height_ && - input.getWidth() == width_); - hl_CMRNorm_forward(num, - input.getData(), - denoms.getData(), - data_, - channels, - height, - width, - sizeX, - scale, - -pow); -} - -void GpuMatrix::crossMapNormalBwd(Matrix& localGrad, - Matrix& denoms, - Matrix& preOutV, - Matrix& localOutV, - size_t channels, - size_t imgSizeH, - size_t imgSizeW, - size_t sizeX, - float scale, - float pow) { - size_t num = preOutV.getHeight(); - size_t height = imgSizeH; - size_t width = imgSizeW; - - CHECK(width * height * channels == preOutV.getWidth()); - CHECK(denoms.getHeight() == preOutV.getHeight() && - denoms.getWidth() == preOutV.getWidth() && - preOutV.getHeight() == height_ && preOutV.getWidth() == width_); - CHECK(denoms.getHeight() == localGrad.getHeight() && - denoms.getWidth() == localGrad.getWidth()); - - hl_CMRNorm_backward(num, - preOutV.getData(), - denoms.getData(), - localOutV.getData(), - localGrad.getData(), - data_, - channels, - height, - width, - sizeX, - -pow, - 2.0f * pow * scale); -} - void GpuMatrix::maxSequenceForward(Matrix& input, const IVector& sequence, IVector& index) { @@ -2219,119 +2156,6 @@ void CpuMatrix::avgPoolBackward(Matrix& input, } } -void CpuMatrix::crossMapNormalFwd(Matrix& input, - size_t imgSizeH, - size_t imgSizeW, - Matrix& denoms, - size_t channels, - size_t sizeX, - float scale, - float pow) { - CHECK(isContiguous()); - CHECK(input.isContiguous()); - CHECK(denoms.isContiguous()); - CHECK_EQ(getHeight(), input.getHeight()); - CHECK_EQ(getWidth(), input.getWidth()); - CHECK_EQ(getHeight(), denoms.getHeight()); - CHECK_EQ(getWidth(), denoms.getWidth()); - - size_t numSample = input.getHeight(); - size_t numCols = input.getWidth(); - size_t height = imgSizeH; - size_t width = imgSizeW; - CHECK(height * width * channels == numCols); - - // TODO(hedaoyuan) After commit TensorExpress code, - // Reconstruction this code to remove the temporary memory. 
- CpuMatrix tmp(channels, height * width); - CpuMatrix tmp2(tmp.getData(), 1, channels * height * width); - denoms.zero(); - const int start = -((int)sizeX - 1) / 2; - const int end = (int)sizeX + start; - for (size_t i = 0; i < numSample; i++) { - input.subMatrix(i, 1)->square2(tmp2); - CpuMatrix subDen( - denoms.subMatrix(i, 1)->getData(), channels, height * width); - for (int c = 0; c < (int)channels; c++) { - for (int s = start; s < end; s++) { - if (c + s >= 0 && c + s < (int)channels) { - subDen.subMatrix(c, 1)->add(*tmp.subMatrix(c + s, 1)); - } - } - } - } - - denoms.add(scale, (real)1); - this->pow2(denoms, -pow); - this->dotMul(input); -} - -void CpuMatrix::crossMapNormalBwd(Matrix& localGrad, - Matrix& denoms, - Matrix& preOutV, - Matrix& localOutV, - size_t channels, - size_t imgSizeH, - size_t imgSizeW, - size_t sizeX, - float scale, - float pow) { - CHECK(isContiguous()); - CHECK(localGrad.isContiguous()); - CHECK(denoms.isContiguous()); - CHECK(preOutV.isContiguous()); - CHECK(localOutV.isContiguous()); - CHECK_EQ(getHeight(), localGrad.getHeight()); - CHECK_EQ(getWidth(), localGrad.getWidth()); - CHECK_EQ(getHeight(), denoms.getHeight()); - CHECK_EQ(getWidth(), denoms.getWidth()); - CHECK_EQ(getHeight(), preOutV.getHeight()); - CHECK_EQ(getWidth(), preOutV.getWidth()); - CHECK_EQ(getHeight(), localOutV.getHeight()); - CHECK_EQ(getWidth(), localOutV.getWidth()); - - size_t numSample = getHeight(); - size_t numCols = getWidth(); - size_t height = imgSizeH; - size_t width = imgSizeW; - CHECK(height * width * channels == numCols); - - // TODO(hedaoyuan) After commit TensorExpress code, - // Reconstruction this code to remove the temporary memory. - CpuMatrix tmp(1, height * width); - - const int start = -((int)sizeX) / 2; - const int end = (int)sizeX + start; - const real ratio = -(real)2 * scale * pow; - for (size_t i = 0; i < numSample; i++) { - CpuMatrix inputDiff( - this->subMatrix(i, 1)->getData(), channels, height * width); - CpuMatrix outDiff( - localGrad.subMatrix(i, 1)->getData(), channels, height * width); - CpuMatrix input( - preOutV.subMatrix(i, 1)->getData(), channels, height * width); - CpuMatrix output( - localOutV.subMatrix(i, 1)->getData(), channels, height * width); - CpuMatrix subDen( - denoms.subMatrix(i, 1)->getData(), channels, height * width); - - for (int c = 0; c < (int)channels; c++) { - tmp.pow2(*subDen.subMatrix(c, 1), -pow); - inputDiff.subMatrix(c, 1) - ->addDotMul(tmp, *outDiff.subMatrix(c, 1), (real)1, (real)1); - for (int s = start; s < end; s++) { - if (c + s >= 0 && c + s < (int)channels) { - tmp.dotMul(*outDiff.subMatrix(c + s, 1), *output.subMatrix(c + s, 1)); - tmp.mulScalar(ratio); - tmp.dotDiv(tmp, *subDen.subMatrix(c + s, 1)); - tmp.dotMul(*input.subMatrix(c, 1)); - inputDiff.subMatrix(c, 1)->add(tmp); - } - } - } - } -} - /** * Input: one or more sequences. Each sequence contains some instances. * Output: output size is the number of input sequences (NOT input instances). diff --git a/paddle/math/Matrix.h b/paddle/math/Matrix.h index 5685cb7bcb..62bc1b16fc 100644 --- a/paddle/math/Matrix.h +++ b/paddle/math/Matrix.h @@ -952,31 +952,6 @@ public: LOG(FATAL) << "Not implemeted"; } - /// normalize-operation. 
- virtual void crossMapNormalFwd(Matrix& input, - size_t imgSizeH, - size_t imgSizeW, - Matrix& denoms, - size_t channels, - size_t sizeX, - float scale, - float pow) { - LOG(FATAL) << "Not implemeted"; - } - - virtual void crossMapNormalBwd(Matrix& localGrad, - Matrix& denoms, - Matrix& preOutV, - Matrix& localOutV, - size_t channels, - size_t imgSizeH, - size_t imgSizeW, - size_t size, - float scale, - float pow) { - LOG(FATAL) << "Not implemeted"; - } - /** * Input: one or more sequences. Each sequence contains some instances. * @@ -1459,26 +1434,6 @@ public: size_t paddingH, size_t paddingW); - void crossMapNormalFwd(Matrix& input, - size_t imgSizeH, - size_t imgSizeW, - Matrix& denoms, - size_t channels, - size_t sizeX, - float scale, - float pow); - - void crossMapNormalBwd(Matrix& localGrad, - Matrix& denoms, - Matrix& preOutV, - Matrix& localOutV, - size_t channels, - size_t imgSizeH, - size_t imgSizeW, - size_t sizeX, - float scale, - float pow); - void maxSequenceForward(Matrix& input, const IVector& sequence, IVector& index); @@ -1685,26 +1640,6 @@ public: size_t paddingH, size_t paddingW); - void crossMapNormalFwd(Matrix& input, - size_t imgSizeH, - size_t imgSizeW, - Matrix& denoms, - size_t channels, - size_t sizeX, - float scale, - float pow); - - void crossMapNormalBwd(Matrix& localGrad, - Matrix& denoms, - Matrix& preOutV, - Matrix& localOutV, - size_t channels, - size_t imgSizeH, - size_t imgSizeW, - size_t sizeX, - float scale, - float pow); - void maxSequenceForward(Matrix& input, const IVector& sequence, IVector& index); diff --git a/paddle/math/tests/test_matrixCompare.cpp b/paddle/math/tests/test_matrixCompare.cpp index da7a585484..c89b7ff490 100644 --- a/paddle/math/tests/test_matrixCompare.cpp +++ b/paddle/math/tests/test_matrixCompare.cpp @@ -1385,33 +1385,6 @@ void testCrossMapNormalBwd( Tensor(denomsGpu.getData(), dims)}, {Tensor(inputsGradGpu.getData(), dims)}, {}); -#if 0 - CrossMapNormalGrad cpuCross; - cpuCross(inputsGrad, - inputsValue, - outputsGrad, - outputsValue, - denoms, - channels, - imgSizeH, - imgSizeW, - sizeX, - scale, - pow); - - CrossMapNormalGrad gpuCross; - gpuCross(inputsGradGpu, - inputsValueGpu, - outputsGradGpu, - outputsValueGpu, - denomsGpu, - channels, - imgSizeH, - imgSizeW, - sizeX, - scale, - pow); -#endif TensorCheckErr(inputsGrad, inputsGradGpu); } From 6a461812c648780c7f7af5043abb9b1c2344d5c7 Mon Sep 17 00:00:00 2001 From: Yu Yang Date: Thu, 15 Dec 2016 17:36:57 +0800 Subject: [PATCH 175/265] Install protobuf3 to /usr --- paddle/scripts/travis/before_install.linux.sh | 2 +- paddle/scripts/travis/main.sh | 5 ----- 2 files changed, 1 insertion(+), 6 deletions(-) diff --git a/paddle/scripts/travis/before_install.linux.sh b/paddle/scripts/travis/before_install.linux.sh index c23acfb60e..9620bff6bc 100755 --- a/paddle/scripts/travis/before_install.linux.sh +++ b/paddle/scripts/travis/before_install.linux.sh @@ -6,7 +6,7 @@ wget https://github.com/google/protobuf/archive/v3.0.2.tar.gz -O protobuf.tar.gz tar xf protobuf.tar.gz cd protobuf* ./autogen.sh -./configure +./configure --prefix=/usr/ make -j 2 install cd .. 
rm -rf protobuf* diff --git a/paddle/scripts/travis/main.sh b/paddle/scripts/travis/main.sh index 1b49a12563..13f2552d29 100755 --- a/paddle/scripts/travis/main.sh +++ b/paddle/scripts/travis/main.sh @@ -1,11 +1,6 @@ #!/bin/bash cd `dirname $0` -if [ "$TRAVIS_OS_NAME" == "linux" ]; then - # for manually installed protobuf 3.10 - export LD_LIBRARY_PATH=/usr/local/lib/:$LD_LIBRARY_PATH -fi - if [ ${JOB} == "BUILD_AND_TEST" ]; then ./build_and_test.sh elif [ ${JOB} == "DOCS" ]; then From 94a568bd0817d7dfc5ee828db49f94de851c776e Mon Sep 17 00:00:00 2001 From: dayhaha <18800111918@163.com> Date: Thu, 15 Dec 2016 17:39:32 +0800 Subject: [PATCH 176/265] delete _cn --- doc/tutorials/rec/ml_dataset_cn.md | 2 +- doc/tutorials/rec/ml_regression_ch.rst | 6 +++--- 2 files changed, 4 insertions(+), 4 deletions(-) diff --git a/doc/tutorials/rec/ml_dataset_cn.md b/doc/tutorials/rec/ml_dataset_cn.md index d500294e7d..2207a776f0 100644 --- a/doc/tutorials/rec/ml_dataset_cn.md +++ b/doc/tutorials/rec/ml_dataset_cn.md @@ -1,5 +1,5 @@ ```eval_rst -.. _demo_ml_dataset_en: +.. _demo_ml_dataset: ``` diff --git a/doc/tutorials/rec/ml_regression_ch.rst b/doc/tutorials/rec/ml_regression_ch.rst index 9d2b5071a2..a084e4790c 100644 --- a/doc/tutorials/rec/ml_regression_ch.rst +++ b/doc/tutorials/rec/ml_regression_ch.rst @@ -15,7 +15,7 @@ MovieLens数据集评分回归模型 ``````` 下载并解压数据集 '''''''''''''' -这里我们使用 :ref:`demo_ml_dataset_en` 。 +这里我们使用 :ref:`demo_ml_dataset` 。 要下载和解压数据集,只需要简单的运行下面的命令即可。 .. code-block:: bash @@ -225,7 +225,7 @@ meta文件 :code:`meta.bin` 的结构如下: :language: python :lines: 15- -在文件 :code:`trainer_config.py` 中,我们仅仅是讲每个特征种类映射到一个特征向量中,以下 +在文件 :code:`trainer_config.py` 中,我们仅仅是将每个特征种类映射到一个特征向量中,以下 展示了如何将每个特征映射到一个向量。 * :code:`id` \: 仅仅是简单的嵌入,然后添加一个全连接层。 @@ -280,7 +280,7 @@ meta文件 :code:`meta.bin` 的结构如下: 该脚本仅仅是开始一个paddle训练过程,将日志写入文件 :code:`log.txt` ,然后 打印在屏幕上。 -脚本 :code:`run.sh` 中的每一行命令,请参考页面 :ref:`cmd_line_index_en` 。 +脚本 :code:`run.sh` 中的每一行命令,请参考页面 :ref:`cmd_line_index` 。 这些参数的简短介绍如下: * config\: 告诉paddle哪个文件是神经网络的配置文件。 From 0c7d553c0ff31ba774be0074cb95f087d4e1d3ed Mon Sep 17 00:00:00 2001 From: qiaolongfei Date: Thu, 15 Dec 2016 17:56:15 +0800 Subject: [PATCH 177/265] add python api_predict for quick start --- demo/quick_start/api_predict.py | 2 +- demo/quick_start/api_predict.sh | 0 2 files changed, 1 insertion(+), 1 deletion(-) mode change 100644 => 100755 demo/quick_start/api_predict.sh diff --git a/demo/quick_start/api_predict.py b/demo/quick_start/api_predict.py index a1a9ef7bca..9bdffe1006 100755 --- a/demo/quick_start/api_predict.py +++ b/demo/quick_start/api_predict.py @@ -138,7 +138,7 @@ def main(): [label, text] = line.split("\t") labels.append(int(label)) batch.append([predict.get_index(text)]) - print("lables is:") + print("labels is:") print labels predict.batch_predict(batch) diff --git a/demo/quick_start/api_predict.sh b/demo/quick_start/api_predict.sh old mode 100644 new mode 100755 From 7395d2dadbd68618a5c126c5492b04136ee34a5a Mon Sep 17 00:00:00 2001 From: Yu Yang Date: Thu, 15 Dec 2016 18:09:58 +0800 Subject: [PATCH 178/265] Using protobuf3 in Travis-CI MacOS. 
--- paddle/scripts/travis/before_install.osx.sh | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/paddle/scripts/travis/before_install.osx.sh b/paddle/scripts/travis/before_install.osx.sh index f438e69b82..bd88ed3913 100755 --- a/paddle/scripts/travis/before_install.osx.sh +++ b/paddle/scripts/travis/before_install.osx.sh @@ -2,9 +2,8 @@ brew update brew tap homebrew/science brew install python -sudo pip install --upgrade protobuf==2.6.0 -brew install homebrew/versions/protobuf260 --without-python -brew install cmake python glog gflags openblas wget md5sha1sum +sudo pip install --upgrade protobuf +brew install cmake python glog gflags openblas wget md5sha1sum protobuf wget https://github.com/google/googletest/archive/release-1.8.0.tar.gz -O gtest.tar.gz tar xf gtest.tar.gz From 78b455ac87d11e16379c1ead2ad6df990810c968 Mon Sep 17 00:00:00 2001 From: dayhaha <18800111918@163.com> Date: Thu, 15 Dec 2016 19:04:10 +0800 Subject: [PATCH 179/265] rename ml_regression_ch.rst to ml_regression_cn.rst --- doc/tutorials/rec/{ml_regression_ch.rst => ml_regression_cn.rst} | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename doc/tutorials/rec/{ml_regression_ch.rst => ml_regression_cn.rst} (100%) diff --git a/doc/tutorials/rec/ml_regression_ch.rst b/doc/tutorials/rec/ml_regression_cn.rst similarity index 100% rename from doc/tutorials/rec/ml_regression_ch.rst rename to doc/tutorials/rec/ml_regression_cn.rst From f13aeb52e9fc666ac1e24acf5315cbdccf108402 Mon Sep 17 00:00:00 2001 From: hedaoyuan Date: Thu, 15 Dec 2016 20:12:53 +0800 Subject: [PATCH 180/265] fix swig_api --- paddle/api/CMakeLists.txt | 1 + paddle/api/paddle_ld_flags.py | 5 +++-- 2 files changed, 4 insertions(+), 2 deletions(-) diff --git a/paddle/api/CMakeLists.txt b/paddle/api/CMakeLists.txt index 6ad1d79e59..ed69bd764f 100644 --- a/paddle/api/CMakeLists.txt +++ b/paddle/api/CMakeLists.txt @@ -46,6 +46,7 @@ add_custom_command(OUTPUT ${PROJ_ROOT}/paddle/dist/.timestamp WORKING_DIRECTORY ${PROJ_ROOT}/paddle DEPENDS python_swig_sources paddle_parameter + paddle_function paddle_math paddle_utils paddle_gserver diff --git a/paddle/api/paddle_ld_flags.py b/paddle/api/paddle_ld_flags.py index 51d7dfee58..7c8206e3fe 100644 --- a/paddle/api/paddle_ld_flags.py +++ b/paddle/api/paddle_ld_flags.py @@ -30,8 +30,8 @@ try: whole_end = "" LIB_DIRS = [ - "math", 'utils', 'parameter', "gserver", "api", "cuda", "pserver", - "trainer" + "math", 'function', 'utils', 'parameter', "gserver", "api", "cuda", + "pserver", "trainer" ] PARENT_LIB_DIRS = ['proto'] @@ -75,6 +75,7 @@ try: libs = [ whole_start, "-lpaddle_gserver", + "-lpaddle_function", whole_end, "-lpaddle_pserver", "-lpaddle_trainer_lib", From 1048aee0f7b32b27538f175112e42c9632642648 Mon Sep 17 00:00:00 2001 From: gaoyuan Date: Thu, 15 Dec 2016 20:25:31 +0800 Subject: [PATCH 181/265] Add input layer check --- python/paddle/trainer/config_parser.py | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/python/paddle/trainer/config_parser.py b/python/paddle/trainer/config_parser.py index 0f7c601fe0..83fda9f709 100644 --- a/python/paddle/trainer/config_parser.py +++ b/python/paddle/trainer/config_parser.py @@ -1584,6 +1584,13 @@ class PriorBoxLayer(LayerBase): variance): super(PriorBoxLayer, self).__init__(name, 'priorbox', 0, inputs) config_assert(len(inputs) == 2, 'PriorBoxLayer must have 2 input') + input_layer = self.get_input_layer(1) + config_assert( + input_layer.type == 'data', + 'Expecting the second input layer of an priorbox layer to be ' + 'a data layer') + 
config_assert(input_layer.width > 0, 'The data layer must set width') + config_assert(input_layer.height > 0, 'The data layer must set height') self.config.inputs[0].priorbox_conf.min_size.extend(min_size) self.config.inputs[0].priorbox_conf.max_size.extend(max_size) self.config.inputs[0].priorbox_conf.aspect_ratio.extend(aspect_ratio) From 22b9b6662b215b663ce2cebdf7624ea1212bb9c1 Mon Sep 17 00:00:00 2001 From: Yu Yang Date: Thu, 15 Dec 2016 21:01:47 +0800 Subject: [PATCH 182/265] Add unittest to coverage SgdThreadUpdater's enableBufType --- paddle/trainer/ThreadParameterUpdater.cpp | 3 +++ paddle/trainer/tests/fake_file_list.list | 1 + .../tests/simple_sparse_neural_network.py | 23 +++++++++++++++++++ .../tests/simple_sparse_neural_network_dp.py | 21 +++++++++++++++++ paddle/trainer/tests/test_TrainerOnePass.cpp | 9 +++++++- 5 files changed, 56 insertions(+), 1 deletion(-) create mode 100644 paddle/trainer/tests/fake_file_list.list create mode 100644 paddle/trainer/tests/simple_sparse_neural_network.py create mode 100644 paddle/trainer/tests/simple_sparse_neural_network_dp.py diff --git a/paddle/trainer/ThreadParameterUpdater.cpp b/paddle/trainer/ThreadParameterUpdater.cpp index 9caa92a4d7..049022b1f1 100644 --- a/paddle/trainer/ThreadParameterUpdater.cpp +++ b/paddle/trainer/ThreadParameterUpdater.cpp @@ -55,6 +55,9 @@ void SgdThreadUpdater::init(std::vector& parameters) { // not create parameter buf for PARAMETER_GRADIENT for sparse update in // Parameter::enableType(). But gradient parameter buf is still used // in SgdThreadUpdater. We need to explicitly create it. + // + // The AverageOptimizer::restore/apply method will use PARAMETER_GRADIENT + // as a temp buffer. para->enableBufType(PARAMETER_GRADIENT); } } diff --git a/paddle/trainer/tests/fake_file_list.list b/paddle/trainer/tests/fake_file_list.list new file mode 100644 index 0000000000..f27ceed277 --- /dev/null +++ b/paddle/trainer/tests/fake_file_list.list @@ -0,0 +1 @@ +do_not_matter.txt diff --git a/paddle/trainer/tests/simple_sparse_neural_network.py b/paddle/trainer/tests/simple_sparse_neural_network.py new file mode 100644 index 0000000000..9604e1b9b4 --- /dev/null +++ b/paddle/trainer/tests/simple_sparse_neural_network.py @@ -0,0 +1,23 @@ +from paddle.trainer_config_helpers import * + +settings(batch_size=128, learning_method=AdaGradOptimizer(), learning_rate=1e-4) + +file_list = 'trainer/tests/fake_file_list.list' + +define_py_data_sources2( + train_list=file_list, + test_list=file_list, + module="simple_sparse_neural_network_dp", + obj="process") + +embedding = embedding_layer( + input=data_layer( + name="word_ids", size=65536), + size=128, + param_attr=ParamAttr(sparse_update=True)) +prediction = fc_layer(input=embedding, size=10, act=SoftmaxActivation()) + +outputs( + classification_cost( + input=prediction, label=data_layer( + name='label', size=10))) diff --git a/paddle/trainer/tests/simple_sparse_neural_network_dp.py b/paddle/trainer/tests/simple_sparse_neural_network_dp.py new file mode 100644 index 0000000000..8bfd1f37e7 --- /dev/null +++ b/paddle/trainer/tests/simple_sparse_neural_network_dp.py @@ -0,0 +1,21 @@ +from paddle.trainer.PyDataProvider2 import provider, integer_sequence, integer_value +import random + + +def init_hook(settings, is_train, **kwargs): + settings.is_train = is_train + + +@provider( + input_types={'word_ids': integer_value(65536), + 'label': integer_value(10)}, + min_pool_size=0, + init_hook=init_hook) +def process(settings, filename): + if settings.is_train: + data_size = 2**20 + else: + 
data_size = 2**10 + + for _ in xrange(data_size): + yield random.randint(0, 65535), random.randint(0, 9) diff --git a/paddle/trainer/tests/test_TrainerOnePass.cpp b/paddle/trainer/tests/test_TrainerOnePass.cpp index ee21008aec..4d0174f784 100644 --- a/paddle/trainer/tests/test_TrainerOnePass.cpp +++ b/paddle/trainer/tests/test_TrainerOnePass.cpp @@ -27,6 +27,9 @@ static const string& configFile1 = "trainer/tests/sample_trainer_config.conf"; static const string& configFile2 = "trainer/tests/sample_trainer_config_parallel.conf"; +static const string& configFileSimpleSparse = + "trainer/tests/simple_sparse_neural_network.py"; + DECLARE_bool(use_gpu); DECLARE_string(config); DECLARE_int32(gpu_id); @@ -298,11 +301,15 @@ TEST(checkRemoteUpdater, cpuDeltaTrainerOldUpdater) { checkRemoteParameterUpdaterTest(configFile1, false, false, 1, true, 10); } +TEST(SgdThreadUpdater, simpleSparseNN) { + trainerOnePassTest(configFileSimpleSparse, false, false, 1, 0.5, true); +} + int main(int argc, char** argv) { + testing::InitGoogleTest(&argc, argv); initMain(argc, argv); initPython(argc, argv); gNumDevices = hl_get_device_count(); - testing::InitGoogleTest(&argc, argv); FLAGS_num_passes = 1; // train one pass FLAGS_saving_period = 100000; // do not save parameteres From cee934680467c50d4084dbaf7273a39a40cc832d Mon Sep 17 00:00:00 2001 From: hedaoyuan Date: Thu, 15 Dec 2016 21:23:05 +0800 Subject: [PATCH 183/265] add some comments --- paddle/function/cross_map_normal_op.cpp | 5 ++- paddle/function/cross_map_normal_op.h | 34 +++++++++++++++++++ paddle/gserver/layers/Layer.h | 6 ++++ paddle/gserver/layers/NormProjectionLayer.cpp | 18 ++++------ paddle/gserver/layers/NormProjectionLayer.h | 3 -- 5 files changed, 50 insertions(+), 16 deletions(-) diff --git a/paddle/function/cross_map_normal_op.cpp b/paddle/function/cross_map_normal_op.cpp index 0391a58d89..a18c0bb750 100644 --- a/paddle/function/cross_map_normal_op.cpp +++ b/paddle/function/cross_map_normal_op.cpp @@ -17,7 +17,6 @@ limitations under the License. */ namespace paddle { -// NCHW template <> void CrossMapNormal(real* outputs, real* denoms, @@ -36,6 +35,10 @@ void CrossMapNormal(real* outputs, CpuVector inputsV(numSamples * oneSample, inputs); CpuVector denomsV(numSamples * oneSample, denoms); + // f(x) = x * ( 1 + scale * SUM((x)^2) )^(-pow) + // x represents inputs + // f(x) represents outputs + // denoms save the intermediate result for backward denomsV = denomsV.constant(1.0); const int start = -((int)size - 1) / 2; const int end = (int)size + start; diff --git a/paddle/function/cross_map_normal_op.h b/paddle/function/cross_map_normal_op.h index f065208084..e935b26e12 100644 --- a/paddle/function/cross_map_normal_op.h +++ b/paddle/function/cross_map_normal_op.h @@ -18,6 +18,22 @@ limitations under the License. */ namespace paddle { +/** + * \brief Cross map respose normalize forward. + * The data structure of image data is NCHW. + * + * \param[out] outputs output data. + * \param[in] denoms denoms buffer. + * \param[in] inputs input data. + * \param[in] numSamples batch size of input image. + * \param[in] channels number of channel. + * \param[in] height image height. + * \param[in] width image width. + * \param[in] size size. + * \param[in] scale scale. + * \param[in] pow scale. + * + */ template void CrossMapNormal(real* outputs, real* denoms, @@ -30,6 +46,24 @@ void CrossMapNormal(real* outputs, real scale, real pow); +/** + * \brief Cross map respose normalize backward. + * The data structure of image data is NCHW. 
+ * + * \param[out] inputsGrad input grad. + * \param[in] inputsValue input value. + * \param[out] outputsValue output value. + * \param[out] outputsGrad output grad. + * \param[in] denoms denoms buffer. + * \param[in] numSamples batch size of input image. + * \param[in] channels number of channel. + * \param[in] height image height. + * \param[in] width image width. + * \param[in] size size. + * \param[in] scale scale. + * \param[in] pow scale. + * + */ template void CrossMapNormalGrad(real* inputsGrad, real* inputsValue, diff --git a/paddle/gserver/layers/Layer.h b/paddle/gserver/layers/Layer.h index 172e558b82..16f66a2205 100644 --- a/paddle/gserver/layers/Layer.h +++ b/paddle/gserver/layers/Layer.h @@ -18,6 +18,7 @@ limitations under the License. */ #include #include #include "ModelConfig.pb.h" +#include "paddle/function/Function.h" #include "paddle/math/CpuSparseMatrix.h" #include "paddle/parameter/Parameter.h" #include "paddle/utils/ClassRegistrar.h" @@ -100,6 +101,11 @@ protected: /// Mark input grad in(true) or out(false) of backward function. std::vector markInBackward_; + /// Layer forward function + FunctionBase* forward_; + /// Layer backward function + FunctionBase* backward_; + public: /** * Wait until all input value ready. diff --git a/paddle/gserver/layers/NormProjectionLayer.cpp b/paddle/gserver/layers/NormProjectionLayer.cpp index 4ff3b805fb..0f6f9b91d0 100644 --- a/paddle/gserver/layers/NormProjectionLayer.cpp +++ b/paddle/gserver/layers/NormProjectionLayer.cpp @@ -48,20 +48,17 @@ bool CMRProjectionNormLayer::init(const LayerMap& layerMap, if (useGpu_) { forward_ = FunctionBase::funcRegistrar_.createByType( FUNC_NAME(CrossMapNormal, GPU)); + backward_ = FunctionBase::funcRegistrar_.createByType( + FUNC_NAME(CrossMapNormalGrad, GPU)); } else { forward_ = FunctionBase::funcRegistrar_.createByType( FUNC_NAME(CrossMapNormal, CPU)); + backward_ = FunctionBase::funcRegistrar_.createByType( + FUNC_NAME(CrossMapNormalGrad, CPU)); } forward_->init( FuncConfig().set("size", size_).set("scale", scale_).set("pow", pow_)); - if (useGpu_) { - backward_ = FunctionBase::funcRegistrar_.createByType( - FUNC_NAME(CrossMapNormalGrad, GPU)); - } else { - backward_ = FunctionBase::funcRegistrar_.createByType( - FUNC_NAME(CrossMapNormalGrad, CPU)); - } backward_->init( FuncConfig().set("size", size_).set("scale", scale_).set("pow", pow_)); @@ -74,7 +71,7 @@ void CMRProjectionNormLayer::forward(PassType passType) { /* malloc memory for the output_ if necessary */ /* note: one sample correspond to one row */ MatrixPtr input = inputLayers_[0]->getOutputValue(); - int batchSize = input->getHeight(); + size_t batchSize = input->getHeight(); int size = getSize(); resetOutput(batchSize, size); @@ -82,10 +79,7 @@ void CMRProjectionNormLayer::forward(PassType passType) { Matrix::resizeOrCreate(denoms_, batchSize, size, /* trans */ false, useGpu_); - dims_ = {(size_t)batchSize, - (size_t)channels_, - (size_t)imgSizeH_, - (size_t)imgSizeW_}; + dims_ = {batchSize, channels_, imgSizeH_, imgSizeW_}; forward_->calc( {Tensor(input->getData(), dims_)}, {Tensor(outV->getData(), dims_), Tensor(denoms_->getData(), dims_)}, diff --git a/paddle/gserver/layers/NormProjectionLayer.h b/paddle/gserver/layers/NormProjectionLayer.h index 3c4876ece6..6b2c5dde0d 100644 --- a/paddle/gserver/layers/NormProjectionLayer.h +++ b/paddle/gserver/layers/NormProjectionLayer.h @@ -16,7 +16,6 @@ limitations under the License. 
*/ #include #include "NormLayer.h" -#include "paddle/function/Function.h" #include "paddle/math/Matrix.h" namespace paddle { @@ -43,7 +42,5 @@ public: protected: Dims dims_; - FunctionBase* forward_; - FunctionBase* backward_; }; } // namespace paddle From 454a1a291279818d315a88ef7a91e019a99c73bf Mon Sep 17 00:00:00 2001 From: wangyang59 Date: Wed, 14 Dec 2016 17:51:47 -0800 Subject: [PATCH 184/265] fixed a bug for demo/gan caused by batchNormLayer --- demo/gan/gan_conf_image.py | 4 ++-- python/paddle/trainer/config_parser.py | 10 ++++++++-- 2 files changed, 10 insertions(+), 4 deletions(-) diff --git a/demo/gan/gan_conf_image.py b/demo/gan/gan_conf_image.py index f89a4e706c..c469227994 100644 --- a/demo/gan/gan_conf_image.py +++ b/demo/gan/gan_conf_image.py @@ -87,9 +87,9 @@ def conv_bn(input, print(imgSize, output_x, stride, filter_size, padding) if trans: - nameApx = "_conv" - else: nameApx = "_convt" + else: + nameApx = "_conv" if bn: conv = img_conv_layer( diff --git a/python/paddle/trainer/config_parser.py b/python/paddle/trainer/config_parser.py index 5b7f4d85e2..ea3e4308fe 100644 --- a/python/paddle/trainer/config_parser.py +++ b/python/paddle/trainer/config_parser.py @@ -1871,8 +1871,14 @@ class BatchNormLayer(LayerBase): input_layer = self.get_input_layer(0) image_conf = self.config.inputs[0].image_conf parse_image(self.inputs[0].image, input_layer.name, image_conf) - self.set_cnn_layer(name, image_conf.img_size_y, image_conf.img_size, - image_conf.channels, False) + + # Only pass the width and height of input to batch_norm layer + # when either of it is non-zero. + if input_layer.width != 0 or input_layer.height != 0: + self.set_cnn_layer(name, image_conf.img_size_y, image_conf.img_size, + image_conf.channels, True) + else: + self.set_layer_size(input_layer.size) psize = self.calc_parameter_size(image_conf) dims = [1, psize] From a50caba8814605850485cbb8ac94d375bd3cb3d4 Mon Sep 17 00:00:00 2001 From: wangyang59 Date: Thu, 15 Dec 2016 10:39:48 -0800 Subject: [PATCH 185/265] modified img_trans_layers.protostr to refect the change in batchNorm layer behavior --- .../tests/configs/protostr/img_trans_layers.protostr | 2 -- 1 file changed, 2 deletions(-) diff --git a/python/paddle/trainer_config_helpers/tests/configs/protostr/img_trans_layers.protostr b/python/paddle/trainer_config_helpers/tests/configs/protostr/img_trans_layers.protostr index cd310bd13b..6934fd0da6 100644 --- a/python/paddle/trainer_config_helpers/tests/configs/protostr/img_trans_layers.protostr +++ b/python/paddle/trainer_config_helpers/tests/configs/protostr/img_trans_layers.protostr @@ -58,8 +58,6 @@ layers { } bias_parameter_name: "___batch_norm_0__.wbias" moving_average_fraction: 0.9 - height: 256 - width: 256 } layers { name: "__crmnorm_0__" From 7462692d4c2a99558aa0784035f69b986749647b Mon Sep 17 00:00:00 2001 From: xuwei06 Date: Thu, 15 Dec 2016 14:39:59 -0800 Subject: [PATCH 186/265] Formatted by pre-commit Change-Id: I2b58c8d854aa31096a6b6e49c1c120f7acec622b --- paddle/gserver/layers/ConvProjection.cpp | 8 +++----- 1 file changed, 3 insertions(+), 5 deletions(-) diff --git a/paddle/gserver/layers/ConvProjection.cpp b/paddle/gserver/layers/ConvProjection.cpp index 57d2d83590..e1c4b91ace 100644 --- a/paddle/gserver/layers/ConvProjection.cpp +++ b/paddle/gserver/layers/ConvProjection.cpp @@ -131,11 +131,9 @@ void ConvProjection::reshape(int batchSize) { size_t width = calOutputSize(); CHECK_EQ(width, out_->value->getWidth()); CHECK_EQ(channels_ * imageH_ * imageW_, in_->value->getWidth()) - << "Wrong input 
size for convolution" - << " channels=" << channels_ - << " imageH=" << imageH_ - << " imageW=" << imageW_ - << " inputSize=" << in_->value->getWidth(); + << "Wrong input size for convolution" + << " channels=" << channels_ << " imageH=" << imageH_ + << " imageW=" << imageW_ << " inputSize=" << in_->value->getWidth(); isSelectAlgo_ = (batchSize == batchNum_); batchNum_ = batchSize; From 96eab5046a14ff901b8685d6d7adf3d9ea4a8c5c Mon Sep 17 00:00:00 2001 From: Yu Yang Date: Fri, 16 Dec 2016 10:15:36 +0800 Subject: [PATCH 187/265] Complete unittest setting in CMake --- paddle/trainer/tests/CMakeLists.txt | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/paddle/trainer/tests/CMakeLists.txt b/paddle/trainer/tests/CMakeLists.txt index 60c129f4e2..28c3d6f263 100644 --- a/paddle/trainer/tests/CMakeLists.txt +++ b/paddle/trainer/tests/CMakeLists.txt @@ -27,7 +27,8 @@ add_test(NAME test_Trainer add_unittest_without_exec(test_TrainerOnePass test_TrainerOnePass.cpp) add_test(NAME test_TrainerOnePass - COMMAND ${PROJ_ROOT}/paddle/.set_python_path.sh -d ${PROJ_ROOT}/python/ + COMMAND ${PROJ_ROOT}/paddle/.set_python_path.sh -d + ${PROJ_ROOT}/python/:${PROJ_ROOT}/paddle/trainer/tests ${PROJ_ROOT}/paddle/.set_port.sh -p port ${CMAKE_CURRENT_BINARY_DIR}/test_TrainerOnePass WORKING_DIRECTORY ${PROJ_ROOT}/paddle/) From 0a8af50e69a193e1dc12d1ce18ec782f7c452313 Mon Sep 17 00:00:00 2001 From: dayhaha <18800111918@163.com> Date: Fri, 16 Dec 2016 10:55:26 +0800 Subject: [PATCH 188/265] put Development Using Docker to the head and add a line of notification --- .../build_and_install/docker_install_en.rst | 152 +++++++++--------- 1 file changed, 77 insertions(+), 75 deletions(-) diff --git a/doc/getstarted/build_and_install/docker_install_en.rst b/doc/getstarted/build_and_install/docker_install_en.rst index 4708890e48..a429ade9f2 100644 --- a/doc/getstarted/build_and_install/docker_install_en.rst +++ b/doc/getstarted/build_and_install/docker_install_en.rst @@ -9,6 +9,83 @@ Please be aware that you will need to change `Dockers settings of your hardware resource on Mac OS X and Windows. +Development Using Docker +------------------------ + +Develpers can work on PaddlePaddle using Docker. This allows +developers to work on different platforms -- Linux, Mac OS X, and +Windows -- in a consistent way. + +The general development workflow with Docker and Bazel is as follows: + +1. Get the source code of Paddle: + + .. code-block:: bash + + git clone --recursive https://github.com/paddlepaddle/paddle + + + Here **git clone --recursive is required** as we have a submodule `warp-ctc `_. + +2. Build a development Docker image :code:`paddle:dev` from the source + code. This image contains all the development tools and + dependencies of PaddlePaddle. + + + .. code-block:: bash + + cd paddle + docker build -t paddle:dev -f paddle/scripts/docker/Dockerfile . + + +3. Run the image as a container and mounting local source code + directory into the container. This allows us to change the code on + the host and build it within the container. + + .. 
code-block:: bash + + docker run \ + -d \ + --name paddle \ + -p 2022:22 \ + -v $PWD:/paddle \ + -v $HOME/.cache/bazel:/root/.cache/bazel \ + paddle:dev + + where :code:`-d` makes the container running in background, + :code:`--name paddle` allows us to run a nginx container to serve + documents in this container, :code:`-p 2022:22` allows us to SSH + into this container, :code:`-v $PWD:/paddle` shares the source code + on the host with the container, :code:`-v + $HOME/.cache/bazel:/root/.cache/bazel` shares Bazel cache on the + host with the container. + +4. SSH into the container: + + .. code-block:: bash + + ssh root@localhost -p 2022 + +5. We can edit the source code in the container or on this host. Then + we can build using cmake + + .. code-block:: bash + + cd /paddle # where paddle source code has been mounted into the container + mkdir -p build + cd build + cmake -DWITH_TESTING=ON .. + make -j `nproc` + CTEST_OUTPUT_ON_FAILURE=1 ctest + + or Bazel in the container: + + .. code-block:: bash + + cd /paddle + bazel test ... + + CPU-only and GPU Images ----------------------- @@ -104,78 +181,3 @@ container: Then we can direct our Web browser to the HTML version of source code at http://localhost:8088/paddle/ - - -Development Using Docker ------------------------- - -Develpers can work on PaddlePaddle using Docker. This allows -developers to work on different platforms -- Linux, Mac OS X, and -Windows -- in a consistent way. - -The general development workflow with Docker and Bazel is as follows: - -1. Get the source code of Paddle: - - .. code-block:: bash - - git clone --recursive https://github.com/paddlepaddle/paddle - - -2. Build a development Docker image :code:`paddle:dev` from the source - code. This image contains all the development tools and - dependencies of PaddlePaddle. - - - .. code-block:: bash - - cd paddle - docker build -t paddle:dev -f paddle/scripts/docker/Dockerfile . - - -3. Run the image as a container and mounting local source code - directory into the container. This allows us to change the code on - the host and build it within the container. - - .. code-block:: bash - - docker run \ - -d \ - --name paddle \ - -p 2022:22 \ - -v $PWD:/paddle \ - -v $HOME/.cache/bazel:/root/.cache/bazel \ - paddle:dev - - where :code:`-d` makes the container running in background, - :code:`--name paddle` allows us to run a nginx container to serve - documents in this container, :code:`-p 2022:22` allows us to SSH - into this container, :code:`-v $PWD:/paddle` shares the source code - on the host with the container, :code:`-v - $HOME/.cache/bazel:/root/.cache/bazel` shares Bazel cache on the - host with the container. - -4. SSH into the container: - - .. code-block:: bash - - ssh root@localhost -p 2022 - -5. We can edit the source code in the container or on this host. Then - we can build using cmake - - .. code-block:: bash - - cd /paddle # where paddle source code has been mounted into the container - mkdir -p build - cd build - cmake -DWITH_TESTING=ON .. - make -j `nproc` - CTEST_OUTPUT_ON_FAILURE=1 ctest - - or Bazel in the container: - - .. code-block:: bash - - cd /paddle - bazel test ... From cea628c350dfc5164f921200486b1834c5350451 Mon Sep 17 00:00:00 2001 From: dayhaha <18800111918@163.com> Date: Fri, 16 Dec 2016 12:47:24 +0800 Subject: [PATCH 189/265] check grammar etc. 
--- .../build_and_install/docker_install_en.rst | 16 ++++++++++++---- 1 file changed, 12 insertions(+), 4 deletions(-) diff --git a/doc/getstarted/build_and_install/docker_install_en.rst b/doc/getstarted/build_and_install/docker_install_en.rst index a429ade9f2..7633bf4d57 100644 --- a/doc/getstarted/build_and_install/docker_install_en.rst +++ b/doc/getstarted/build_and_install/docker_install_en.rst @@ -12,7 +12,7 @@ of your hardware resource on Mac OS X and Windows. Development Using Docker ------------------------ -Develpers can work on PaddlePaddle using Docker. This allows +Developers can work on PaddlePaddle using Docker. This allows developers to work on different platforms -- Linux, Mac OS X, and Windows -- in a consistent way. @@ -22,11 +22,19 @@ The general development workflow with Docker and Bazel is as follows: .. code-block:: bash - git clone --recursive https://github.com/paddlepaddle/paddle + git clone --recursive https://github.com/PaddlePaddle/Paddle.git Here **git clone --recursive is required** as we have a submodule `warp-ctc `_. + If you have used :code:`git clone https://github.com/PaddlePaddle/Paddle` and find that the directory :code:`warp-ctc` is + empty, please use the following command to get the submodule. + + .. code-block:: bash + + git submodule update --init --recursive + + 2. Build a development Docker image :code:`paddle:dev` from the source code. This image contains all the development tools and dependencies of PaddlePaddle. @@ -154,7 +162,7 @@ source code: .. code-block:: bash cd ~ - git clone github.com/PaddlePaddle/Paddle + git clone https://github.com/PaddlePaddle/Paddle.git cd Paddle git submodule update --init --recursive docker build --build-arg WITH_AVX=OFF -t paddle:cpu-noavx -f paddle/scripts/docker/Dockerfile . @@ -170,7 +178,7 @@ generated using `woboq code browser for users to browse and understand the C++ source code. As long as we give the Paddle Docker container a name, we can run an -additional nginx Docker container to serve the volume from the Paddle +additional Nginx Docker container to serve the volume from the Paddle container: .. code-block:: bash From c91b7906d1f1563db36f903145bf03e79bace50b Mon Sep 17 00:00:00 2001 From: qiaolongfei Date: Wed, 14 Dec 2016 12:37:08 +0800 Subject: [PATCH 190/265] add python api_predict for quick start --- demo/quick_start/api_predict.py | 148 ++++++++++++++++++++++++++++++++ demo/quick_start/api_predict.sh | 30 +++++++ 2 files changed, 178 insertions(+) create mode 100755 demo/quick_start/api_predict.py create mode 100644 demo/quick_start/api_predict.sh diff --git a/demo/quick_start/api_predict.py b/demo/quick_start/api_predict.py new file mode 100755 index 0000000000..9c224e3cdb --- /dev/null +++ b/demo/quick_start/api_predict.py @@ -0,0 +1,148 @@ +# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +import os, sys +import numpy as np +from optparse import OptionParser +from py_paddle import swig_paddle, DataProviderConverter +from paddle.trainer.PyDataProvider2 import sparse_binary_vector +from paddle.trainer.config_parser import parse_config + + +""" +Usage: run following command to show help message. + python api_predict.py -h +""" + +class QuickStartPrediction(): + def __init__(self, train_conf, dict_file, model_dir=None, label_file=None): + """ + train_conf: trainer configure. + dict_file: word dictionary file name. + model_dir: directory of model. + """ + self.train_conf = train_conf + self.dict_file = dict_file + self.word_dict = {} + self.dict_dim = self.load_dict() + self.model_dir = model_dir + if model_dir is None: + self.model_dir = os.path.dirname(train_conf) + + self.label = None + if label_file is not None: + self.load_label(label_file) + + conf = parse_config(train_conf, "is_predict=1") + self.network = swig_paddle.GradientMachine.createFromConfigProto( + conf.model_config) + self.network.loadParameters(self.model_dir) + input_types = [sparse_binary_vector(self.dict_dim)] + self.converter = DataProviderConverter(input_types) + + def load_dict(self): + """ + Load dictionary from self.dict_file. + """ + for line_count, line in enumerate(open(self.dict_file, 'r')): + self.word_dict[line.strip().split('\t')[0]] = line_count + return len(self.word_dict) + + def load_label(self, label_file): + """ + Load label. + """ + self.label = {} + for v in open(label_file, 'r'): + self.label[int(v.split('\t')[1])] = v.split('\t')[0] + + def get_index(self, data): + """ + transform word into integer index according to the dictionary. + """ + words = data.strip().split() + word_slot = [ + self.word_dict[w] for w in words if w in self.word_dict + ] + return word_slot + + def batch_predict(self, data_batch): + input = self.converter(data_batch) + output = self.network.forwardTest(input) + prob = output[0]["id"].tolist() + print("predicting labels is:") + print prob + +def option_parser(): + usage = "python predict.py -n config -w model_dir -d dictionary -i input_file " + parser = OptionParser(usage="usage: %s [options]" % usage) + parser.add_option( + "-n", + "--tconf", + action="store", + dest="train_conf", + help="network config") + parser.add_option( + "-d", + "--dict", + action="store", + dest="dict_file", + help="dictionary file") + parser.add_option( + "-b", + "--label", + action="store", + dest="label", + default=None, + help="dictionary file") + parser.add_option( + "-c", + "--batch_size", + type="int", + action="store", + dest="batch_size", + default=1, + help="the batch size for prediction") + parser.add_option( + "-w", + "--model", + action="store", + dest="model_path", + default=None, + help="model path") + return parser.parse_args() + + +def main(): + options, args = option_parser() + train_conf = options.train_conf + batch_size = options.batch_size + dict_file = options.dict_file + model_path = options.model_path + label = options.label + swig_paddle.initPaddle("--use_gpu=0") + predict = QuickStartPrediction(train_conf, dict_file, model_path, label) + + batch = [] + labels = [] + for line in sys.stdin: + [label, text] = line.split("\t") + labels.append(int(label)) + batch.append([predict.get_index(text)]) + print("lables is:") + print labels + predict.batch_predict(batch) + +if __name__ == '__main__': + main() diff --git a/demo/quick_start/api_predict.sh b/demo/quick_start/api_predict.sh new file mode 100644 index 0000000000..c90d3b7054 --- /dev/null +++ 
b/demo/quick_start/api_predict.sh @@ -0,0 +1,30 @@ +#!/bin/bash +# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +set -e + +#Note the default model is pass-00002, you shold make sure the model path +#exists or change the mode path. +#only test on trainer_config.lr.py +model=output/pass-00001/ +config=trainer_config.lr.py +label=data/labels.list +dict=data/dict.txt +batch_size=20 +head -n$batch_size data/test.txt | python api_predict.py \ + --tconf=$config\ + --model=$model \ + --label=$label \ + --dict=$dict \ + --batch_size=$batch_size From 8c7cc72b515c6e53edd82cfc9e68cbb7edd480ef Mon Sep 17 00:00:00 2001 From: qiaolongfei Date: Thu, 15 Dec 2016 15:28:28 +0800 Subject: [PATCH 191/265] add python api_predict for quick start --- demo/quick_start/api_predict.py | 9 ++++----- 1 file changed, 4 insertions(+), 5 deletions(-) diff --git a/demo/quick_start/api_predict.py b/demo/quick_start/api_predict.py index 9c224e3cdb..a1a9ef7bca 100755 --- a/demo/quick_start/api_predict.py +++ b/demo/quick_start/api_predict.py @@ -18,13 +18,12 @@ from optparse import OptionParser from py_paddle import swig_paddle, DataProviderConverter from paddle.trainer.PyDataProvider2 import sparse_binary_vector from paddle.trainer.config_parser import parse_config - - """ Usage: run following command to show help message. python api_predict.py -h """ + class QuickStartPrediction(): def __init__(self, train_conf, dict_file, model_dir=None, label_file=None): """ @@ -72,9 +71,7 @@ class QuickStartPrediction(): transform word into integer index according to the dictionary. 
""" words = data.strip().split() - word_slot = [ - self.word_dict[w] for w in words if w in self.word_dict - ] + word_slot = [self.word_dict[w] for w in words if w in self.word_dict] return word_slot def batch_predict(self, data_batch): @@ -84,6 +81,7 @@ class QuickStartPrediction(): print("predicting labels is:") print prob + def option_parser(): usage = "python predict.py -n config -w model_dir -d dictionary -i input_file " parser = OptionParser(usage="usage: %s [options]" % usage) @@ -144,5 +142,6 @@ def main(): print labels predict.batch_predict(batch) + if __name__ == '__main__': main() From b7157d4485cadff8292a0d736b654d6923e9a6b7 Mon Sep 17 00:00:00 2001 From: qiaolongfei Date: Thu, 15 Dec 2016 17:56:15 +0800 Subject: [PATCH 192/265] add python api_predict for quick start --- demo/quick_start/api_predict.py | 2 +- demo/quick_start/api_predict.sh | 0 2 files changed, 1 insertion(+), 1 deletion(-) mode change 100644 => 100755 demo/quick_start/api_predict.sh diff --git a/demo/quick_start/api_predict.py b/demo/quick_start/api_predict.py index a1a9ef7bca..9bdffe1006 100755 --- a/demo/quick_start/api_predict.py +++ b/demo/quick_start/api_predict.py @@ -138,7 +138,7 @@ def main(): [label, text] = line.split("\t") labels.append(int(label)) batch.append([predict.get_index(text)]) - print("lables is:") + print("labels is:") print labels predict.batch_predict(batch) diff --git a/demo/quick_start/api_predict.sh b/demo/quick_start/api_predict.sh old mode 100644 new mode 100755 From fa33262d2e43adc1f4fe559caeba706c5f0df4bc Mon Sep 17 00:00:00 2001 From: xuwei06 Date: Wed, 14 Dec 2016 13:36:18 -0800 Subject: [PATCH 193/265] Check the input size for convolution. Make sure the input size is correct. Change-Id: I2559075e32a098c5ed51942ff8351d964a6d6c75 --- paddle/gserver/layers/ConvProjection.cpp | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/paddle/gserver/layers/ConvProjection.cpp b/paddle/gserver/layers/ConvProjection.cpp index aa634b3287..57d2d83590 100644 --- a/paddle/gserver/layers/ConvProjection.cpp +++ b/paddle/gserver/layers/ConvProjection.cpp @@ -130,6 +130,12 @@ void ConvProjection::reshapeTensorDesc(int batchSize) { void ConvProjection::reshape(int batchSize) { size_t width = calOutputSize(); CHECK_EQ(width, out_->value->getWidth()); + CHECK_EQ(channels_ * imageH_ * imageW_, in_->value->getWidth()) + << "Wrong input size for convolution" + << " channels=" << channels_ + << " imageH=" << imageH_ + << " imageW=" << imageW_ + << " inputSize=" << in_->value->getWidth(); isSelectAlgo_ = (batchSize == batchNum_); batchNum_ = batchSize; From 573cdb60f4fff7e2098a0ae3fb1052ba0c3fa2e0 Mon Sep 17 00:00:00 2001 From: xuwei06 Date: Thu, 15 Dec 2016 14:39:59 -0800 Subject: [PATCH 194/265] Formatted by pre-commit Change-Id: I2b58c8d854aa31096a6b6e49c1c120f7acec622b --- paddle/gserver/layers/ConvProjection.cpp | 8 +++----- 1 file changed, 3 insertions(+), 5 deletions(-) diff --git a/paddle/gserver/layers/ConvProjection.cpp b/paddle/gserver/layers/ConvProjection.cpp index 57d2d83590..e1c4b91ace 100644 --- a/paddle/gserver/layers/ConvProjection.cpp +++ b/paddle/gserver/layers/ConvProjection.cpp @@ -131,11 +131,9 @@ void ConvProjection::reshape(int batchSize) { size_t width = calOutputSize(); CHECK_EQ(width, out_->value->getWidth()); CHECK_EQ(channels_ * imageH_ * imageW_, in_->value->getWidth()) - << "Wrong input size for convolution" - << " channels=" << channels_ - << " imageH=" << imageH_ - << " imageW=" << imageW_ - << " inputSize=" << in_->value->getWidth(); + << "Wrong input size for 
convolution" + << " channels=" << channels_ << " imageH=" << imageH_ + << " imageW=" << imageW_ << " inputSize=" << in_->value->getWidth(); isSelectAlgo_ = (batchSize == batchNum_); batchNum_ = batchSize; From b67d78e7ae57c635c7247f1eb36d507a30fa8fdf Mon Sep 17 00:00:00 2001 From: wangyanfei01 Date: Wed, 14 Dec 2016 18:31:37 +0800 Subject: [PATCH 195/265] fix bug: if test and sparse_remote_update can not co-exsit, crash trainer if necessary --- paddle/trainer/Trainer.cpp | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/paddle/trainer/Trainer.cpp b/paddle/trainer/Trainer.cpp index 1eec2c432d..3082b279df 100644 --- a/paddle/trainer/Trainer.cpp +++ b/paddle/trainer/Trainer.cpp @@ -222,6 +222,12 @@ void Trainer::init(const std::shared_ptr& config, DataProvider::create(config_->getTestDataConfig(), *config_, gpuData)); } if (testDataProvider_) { + if (config_->getOptConfig().use_sparse_remote_updater()) { + LOG(FATAL) << "It's prohibited to set sparse_remote_update " + << "in some layers if testing will be under going " + << "in the middle of training. You can do testing " + << "within separate process."; + } createTester(); } From 76acf3f8b9f5eea467cc5244df3389569da2105f Mon Sep 17 00:00:00 2001 From: wangyanfei01 Date: Thu, 15 Dec 2016 11:58:12 +0800 Subject: [PATCH 196/265] more accurate to early stop train sparse model --- paddle/trainer/Tester.cpp | 6 ++++++ paddle/trainer/Trainer.cpp | 6 ------ 2 files changed, 6 insertions(+), 6 deletions(-) diff --git a/paddle/trainer/Tester.cpp b/paddle/trainer/Tester.cpp index 97d1b53934..24f7271734 100644 --- a/paddle/trainer/Tester.cpp +++ b/paddle/trainer/Tester.cpp @@ -46,6 +46,12 @@ Tester::Tester(const std::shared_ptr& config, gradientMachine_(gradientMachine), parameterUpdater_(parameterUpdater), testDataProvider_(testDataProvider) { + if (config_->getOptConfig().use_sparse_remote_updater()) { + LOG(FATAL) << "It's prohibited to set sparse_remote_update " + << "in some layers if testing will be under going " + << "in the middle of training. You can do testing " + << "within separate process."; + } testEvaluator_.reset(gradientMachine_->makeEvaluator()); if (intconfig_->distributeTest) { testParameterClient_.reset(new ParameterClient2(true)); diff --git a/paddle/trainer/Trainer.cpp b/paddle/trainer/Trainer.cpp index 3082b279df..1eec2c432d 100644 --- a/paddle/trainer/Trainer.cpp +++ b/paddle/trainer/Trainer.cpp @@ -222,12 +222,6 @@ void Trainer::init(const std::shared_ptr& config, DataProvider::create(config_->getTestDataConfig(), *config_, gpuData)); } if (testDataProvider_) { - if (config_->getOptConfig().use_sparse_remote_updater()) { - LOG(FATAL) << "It's prohibited to set sparse_remote_update " - << "in some layers if testing will be under going " - << "in the middle of training. 
You can do testing " - << "within separate process."; - } createTester(); } From e83f17a38b2b525be8f06bc097ddf08798095b59 Mon Sep 17 00:00:00 2001 From: wangyanfei01 Date: Thu, 15 Dec 2016 16:55:31 +0800 Subject: [PATCH 197/265] follow comments: more readable LOG --- paddle/trainer/Tester.cpp | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/paddle/trainer/Tester.cpp b/paddle/trainer/Tester.cpp index 24f7271734..24fac3e5a8 100644 --- a/paddle/trainer/Tester.cpp +++ b/paddle/trainer/Tester.cpp @@ -48,9 +48,9 @@ Tester::Tester(const std::shared_ptr& config, testDataProvider_(testDataProvider) { if (config_->getOptConfig().use_sparse_remote_updater()) { LOG(FATAL) << "It's prohibited to set sparse_remote_update " - << "in some layers if testing will be under going " - << "in the middle of training. You can do testing " - << "within separate process."; + << "when doing train and test jobs in the same " + << "process. You could run paddle --job=test in " + << "a separate process."; } testEvaluator_.reset(gradientMachine_->makeEvaluator()); if (intconfig_->distributeTest) { From aede8aa04f44bd92b539a29032f99d0ee4bacb70 Mon Sep 17 00:00:00 2001 From: wangyang59 Date: Fri, 2 Dec 2016 11:20:09 -0800 Subject: [PATCH 198/265] improve demo/mnist dataProvider speed --- demo/mnist/mnist_provider.py | 21 +++++++++++---------- 1 file changed, 11 insertions(+), 10 deletions(-) diff --git a/demo/mnist/mnist_provider.py b/demo/mnist/mnist_provider.py index 6df4676da3..c435e1681d 100644 --- a/demo/mnist/mnist_provider.py +++ b/demo/mnist/mnist_provider.py @@ -1,10 +1,11 @@ from paddle.trainer.PyDataProvider2 import * - +import numpy # Define a py data provider @provider( input_types={'pixel': dense_vector(28 * 28), - 'label': integer_value(10)}) + 'label': integer_value(10)}, + cache=CacheType.CACHE_PASS_IN_MEM) def process(settings, filename): # settings is not used currently. imgf = filename + "-images-idx3-ubyte" labelf = filename + "-labels-idx1-ubyte" @@ -19,13 +20,13 @@ def process(settings, filename): # settings is not used currently. n = 60000 else: n = 10000 - - for i in range(n): - label = ord(l.read(1)) - pixels = [] - for j in range(28 * 28): - pixels.append(float(ord(f.read(1))) / 255.0) - yield {"pixel": pixels, 'label': label} - + + images = numpy.fromfile(f, 'ubyte', count=n*28*28).reshape((n, 28*28)).astype('float32') + images = images / 255.0 * 2.0 - 1.0 + labels = numpy.fromfile(l, 'ubyte', count=n).astype("int") + + for i in xrange(n): + yield {"pixel": images[i, :], 'label': labels[i]} + f.close() l.close() From 6f970c1baa4484307e002cc8d2e4ba95dfe4e5f9 Mon Sep 17 00:00:00 2001 From: wangyang59 Date: Wed, 14 Dec 2016 17:59:07 -0800 Subject: [PATCH 199/265] after clang-format --- demo/mnist/mnist_provider.py | 12 +++++++----- 1 file changed, 7 insertions(+), 5 deletions(-) diff --git a/demo/mnist/mnist_provider.py b/demo/mnist/mnist_provider.py index c435e1681d..4635833d36 100644 --- a/demo/mnist/mnist_provider.py +++ b/demo/mnist/mnist_provider.py @@ -1,6 +1,7 @@ from paddle.trainer.PyDataProvider2 import * import numpy + # Define a py data provider @provider( input_types={'pixel': dense_vector(28 * 28), @@ -20,13 +21,14 @@ def process(settings, filename): # settings is not used currently. 
n = 60000 else: n = 10000 - - images = numpy.fromfile(f, 'ubyte', count=n*28*28).reshape((n, 28*28)).astype('float32') - images = images / 255.0 * 2.0 - 1.0 + + images = numpy.fromfile( + f, 'ubyte', count=n * 28 * 28).reshape((n, 28 * 28)).astype('float32') + images = images / 255.0 * 2.0 - 1.0 labels = numpy.fromfile(l, 'ubyte', count=n).astype("int") - + for i in xrange(n): yield {"pixel": images[i, :], 'label': labels[i]} - + f.close() l.close() From 6923dc8dcabbd14207635947fb829a5069d75e43 Mon Sep 17 00:00:00 2001 From: dayhaha <18800111918@163.com> Date: Tue, 13 Dec 2016 18:17:11 +0800 Subject: [PATCH 200/265] add chinese doc and modify ml_regression_en.rst --- doc/tutorials/rec/ml_dataset_cn.md | 105 ++++++++ doc/tutorials/rec/ml_regression_ch.rst | 347 +++++++++++++++++++++++++ doc/tutorials/rec/ml_regression_en.rst | 14 +- 3 files changed, 459 insertions(+), 7 deletions(-) create mode 100644 doc/tutorials/rec/ml_dataset_cn.md create mode 100644 doc/tutorials/rec/ml_regression_ch.rst diff --git a/doc/tutorials/rec/ml_dataset_cn.md b/doc/tutorials/rec/ml_dataset_cn.md new file mode 100644 index 0000000000..d500294e7d --- /dev/null +++ b/doc/tutorials/rec/ml_dataset_cn.md @@ -0,0 +1,105 @@ +```eval_rst +.. _demo_ml_dataset_en: + +``` + +# MovieLens数据集 + +[MovieLens 数据集](http://grouplens.org/datasets/movielens/)由GroupLens Research实验室搜集整理。 +该数据集包含一些用户信息、电影信息以及电影评分\[1-5\]。根据数据量规模,该数据及有很多不同的版本。 +我们用[MovieLens 百万数据集](http://files.grouplens.org/datasets/movielens/ml-1m.zip)作为示例数据 +集,其中包含6,000位用户对4,000部电影的1,000,000条评价。该数据集于2003年2月发布。 + +## 数据集特征 + +在[ml-1m 数据集](http://files.grouplens.org/datasets/movielens/ml-1m.zip)中有许多的特征。在[ml-1m 数据集] +(http://files.grouplens.org/datasets/movielens/ml-1m.zip)中的这些数据文件(含有".dat"的后缀)实际上是CSV文件, +分隔符为"::"。以下我们翻译数据集网站中README文件的描述: + +### 评分文件描述(ratings.dat) + + +所有的评分数据都包含在"ratings.dat"文件中,遵循如下的格式: + +用户ID::电影ID::评分::时间戳 + +- 用户ID范围从1到6040 +- 电影ID范围从1到3952 +- 评分被调整为5星的规模(只允许整数的星级) +- 时间戳表示为从1970-01-01(UTC)来的秒数,与time(2)的返回值一致 +- 每位用户至少有20条评分 + +### 用户文件描述(users.dat) + +所有的用户信息都包含在"users.dat"文件中,遵循如下的格式: + +用户ID::性别::年龄::职业::邮编 + +所有的人口统计学信息由用户自愿提供,没有进行正确性的检查。只有含有人 +口统计学信息的用户才被包含在数据集中。 + +- 性别,用"M"表示男性,"F"表示女性 +- 年龄从下列列表范围中选取: + + * 1: "18岁以下" + * 18: "18-24岁" + * 25: "25-34岁" + * 35: "35-44岁" + * 45: "45-49岁" + * 50: "50-55岁" + * 56: "56+" + +- 职业从下面所列中选择: + + * 0: "其他"或不确定 + * 1: "学术/教育工作者" + * 2: "艺术家" + * 3: "文书工作/管理员" + * 4: "大学生/研究生" + * 5: "客户服务" + * 6: "医生/医疗保健" + * 7: "行政工作/管理人员" + * 8: "农民" + * 9: "操持家务者" + * 10: "高中毕业生" + * 11: "律师" + * 12: "程序员" + * 13: "退休人员" + * 14: "销售/市场" + * 15: "科学家" + * 16: "自由职业者" + * 17: "技术员/工程师" + * 18: "推销员/手工艺者" + * 19: "无业人士" + * 20: "作家" + +### 电影文件描述(movies.dat) + +所有的电影信息都包含在"movies.dat"文件中,遵循如下的格式: + +电影ID::电影名称::电影类型 + +- 电影名称(包括发行时间)与IMDB网站提供的一致 +- 电影类型如符合多种用管道符号|分割,选自下列类型: + + * 动作片 + * 冒险片 + * 动画片 + * 儿童片 + * 喜剧片 + * 犯罪片 + * 纪录片 + * 戏剧 + * 奇幻片 + * 黑色电影 + * 恐怖片 + * 音乐剧 + * 悬疑片 + * 浪漫片 + * 科幻片 + * 惊险电影 + * 战争片 + * 西部片 + +- 由于意外的副本记录和测试记录,有些电影ID可能与实际电影不相符合 +- 电影大部分是手工输入数据,因此可能会有一些错误和不一致发生 diff --git a/doc/tutorials/rec/ml_regression_ch.rst b/doc/tutorials/rec/ml_regression_ch.rst new file mode 100644 index 0000000000..19a89d270d --- /dev/null +++ b/doc/tutorials/rec/ml_regression_ch.rst @@ -0,0 +1,347 @@ +MovieLens数据集评分回归模型 +========================= + +这里我们在MovieLens数据集描述一种**余弦相似度回归**任务。 +该示例将展示paddle如何进行词向量嵌入,处理相似度回归,针对文本 +的单词级别的卷积神经网络,以及paddle如何处理多种类型的输入。 +需要注意的是,该模型网络只是用于进行demo展示paddle如何工作,而 +没有进行结构的微调。 + + +**我们非常欢迎您用PADDLEPADDLE构建更好的示例,如果您有好的建议来 +让这个示例变得更好,希望能让我们知晓。** + +数据准备 +``````` +下载并解压数据集 +'''''''''''''' 
+这里我们使用:ref:`demo_ml_dataset_en`。 +要下载和解压数据集,只需要简单的运行下面的命令即可。 + +.. code-block:: bash + + cd demo/recommendation/data + ./ml_data.sh + +:code:`demo/recommendation/data/ml-1m`的目录结构为: + +.. code-block:: text + + +--ml-1m + +--- movies.dat # 电影特征 + +--- ratings.dat # 评分 + +--- users.dat # 用户特征 + +--- README # 数据集描述 + +字段配置文件 +'''''''''' +**字段配置文件**用来具体说明数据集的字段和文件格式, +例如,说明每个特征文件具体字段是**什么**类型。 + +ml-1m的字段配置文件在目录:code:`demo/recommendation/data/config.json`中。 +其具体说明了字段类型和文件名称: +1) 用户文件中有四种类型的字段\: 编号,性别,年龄和职业; +2) 文件名称为"users.dat",文件的分隔符为"::"。 + +.. include:: ../../../demo/recommendation/data/config.json + :code: json + :literal: + +准备数据 +``````` +你需要安装python的第三方库。 +**强烈推荐使用VIRTUALENV来创造一个干净的python环境。** + +.. code-block:: bash + + pip install -r requirements.txt + +预处理数据一般的命令为: + +.. code-block:: bash + + cd demo/recommendation + ./preprocess.sh + +下面介绍预处理过程具体的步骤。 + +提取电影或用户的特征并生成python对象 +'''''''''''''''''''''''''''''''' + +在movielens 1m数据集中,电影和用户有许多的特征。 +评分文件的每一行仅仅提供电影或用户的编号来代表相应的电影或用户。 +我们首先处理电影或用户的特征文件,然后用pickle命令将特征(**Meta**)对象存储为文件。 + +Meta配置文件 +........... + +**Meta配置文件**用来具体描述**如何**解析数据集中的每一个字段。 +该文件可以从字段配置文件生成,或是手动编辑生成。文件的格式可以 +为json或yaml格式。解析器能通过文件的扩展名自动识别文件的格式。 + +要将字段配置文件转化为meta配置文件,只需要运行: + +.. code-block:: bash + + cd demo/recommendation/data + python config_generator.py config.json > meta_config.json + +生成的meta配置文件如下所示: + +.. include:: ../../../demo/recommendation/data/meta_config.json + :code: json + :literal: + +在meta文件中有两种特征\: 电影和用户。 + +* 在电影文件movies.dat中 + * 我们仅用"::"来分隔每一行 + * pos 0 代表编号。 + * pos 1 特征: + * name是电影名。 + * 利用正则表达式来解析该特征。 + * 基于字母的词嵌入特征。 + * 是序列。 + * pos 2 特征: + * name是体裁。 + * type是one hot稠密向量。 + * dictionary由解析自动生成,每一个key由'|'分隔。 +* 在用户文件users.dat中 + * 我们仅用"::"来分隔每一行 + * pos 0 代表编号。 + * pos 1 特征: + * name是性别。 + * 简单的基于字母的词嵌入。 + * pos 2 特征: + * name是年龄 + * 是整个的词嵌入 + * 嵌入编号会根据单词排序 + * pos 3 特征: + * name是职业 + * 简单的整个词嵌入 + + +Meta文件 +'''''''' + +有了meta配置文件之后,我们可以生成**Meta文件**,该文件是python的pickle对象, +存储着电影或用户信息。可以运行下面的命令来生成。 + +.. code-block:: bash + + python meta_generator.py ml-1m meta.bin --config=meta_config.json + +meta文件:code:`meta.bin`的结构如下: + +.. code-block:: text + + +--+ movie + | +--+ __meta__ + | | +--+ raw_meta # 每个特征的meta配置。列表 + | | | + + | | | | # 编号字段,我们用编号作为key + | | | +--+ {'count': 3883, 'max': 3952, 'is_key': True, 'type': 'id', 'min': 1} + | | | | + | | | | # 电影名字段,嵌入特征字典 + | | | +--+ {'dict': [ ... ], 'type': 'embedding', 'name': 'title', 'seq': 'sequence'} + | | | | + | | | | # 体裁字段,体裁字典 + | | | +--+ {'dict': [ ... ], 'type': 'one_hot_dense', 'name': 'genres'} + | | | + | | +--+ feature_map [1, 2] # a list for raw_meta index for feature field. + | | # it means there are 2 features for each key. + | | # * 0 offset of feature is raw_meta[1], Title. + | | # * 1 offset of feature is raw_meta[2], Genres. + | | + | +--+ 1 # 电影1的特征 + | | + + | | +---+ [[...], [...]] # title ids, genres dense vector + | | + | +--+ 2 + | | + | +--+ ... + | + +--- user + +--+ __meta__ + | + + | +--+ raw_meta + | | + + | | +--+ id field as user + | | | + | | +--+ {'dict': ['F', 'M'], 'type': 'embedding', 'name': 'gender', 'seq': 'no_sequence'} + | | | + | | +--+ {'dict': ['1', '18', '25', '35', '45', '50', '56'], 'type': 'embedding', 'name': 'age', 'seq': 'no_sequence'} + | | | + | | +--+ {'dict': [...], 'type': 'embedding', 'name': 'occupation', 'seq': 'no_sequence'} + | | + | +--+ feature_map [1, 2, 3] + | + +--+ 1 # 用户1的特征 + | + +--+ 2 + +--+ ... 
+ + +分割训练/测试文件 +''''''''''''''' + +我们将:code:`ml-1m/ratings.dat`文件分割为训练和测试文件。分割文件的方法是:对于每位用户,我们将评分分成两部分。 +这样的话每位用户在测试文件中将与训练文件含有同样的信息。 + +用:code:`separate.py`来分离训练和测试文件。 + +.. code-block:: bash + + python split.py ml-1m/ratings.dat --delimiter="::" --test_ratio=0.1 + +这样就会生成两个文件::code:`ml-1m/ratings.dat.train`和:code:`ml-1m/ratings.data.test`。 +将他们移动到目录:code:`data`,然后进行随机打乱,再为paddle的训练过程提供文件列表。 + +.. code-block:: bash + + shuf ml-1m/ratings.dat.train > ratings.dat.train + cp ml-1m/ratings.dat.test . + echo "./data/ratings.dat.train" > train.list + echo "./data/ratings.dat.test" > test.list + + +神经网络结构配置 +`````````````` + +训练器配置文件 +'''''''''''' + +网络结构如下图所示: + +.. image:: rec_regression_network.png + :align: center + :alt: rec_regression_network + +该示例的神经网络配置文件:code:`trainer_config.py`如下所示: + +.. literalinclude:: ../../../demo/recommendation/trainer_config.py + :language: python + :lines: 15- + +在文件:code:`trainer_config.py`中,我们仅仅是讲每个特征种类映射到一个特征向量中,以下 +展示了如何将每个特征映射到一个向量。 + +* :code:`id`\: 仅仅是简单的嵌入,然后添加一个全连接层。 +* :code:`embedding`\: + - 如果是序列,则先做嵌入,然后再做一次文本卷积网络操作, + 然后得到平均采样的结果 + - 如果不是序列,则先做嵌入,然后添加一个全连接层。 +* :code:`one_host_dense`\: + - 仅仅是两个全连接层。 + +然后我们利用多输入的:code:`fc_layer`全连接层将电影的每个特征结合成一个电影特征, +并且对用户的特征做同样的操作,也得到一个用户特征。然后我们求这两个特征的余弦相似度。 + +在这些网络中,我们用以下的一些:ref:`api_trainer_config`中的接口。 + +* 数据层, :ref:`api_trainer_config_helpers_layers_data_layer` +* 全连接层, :ref:`api_trainer_config_helpers_layers_fc_layer` +* 嵌入层, :ref:`api_trainer_config_helpers_layers_embedding_layer` +* 文本投影层, :ref:`api_trainer_config_helpers_layers_context_projection` +* 采样层, :ref:`api_trainer_config_helpers_layers_pooling_layer` +* 余弦相似度层, :ref:`api_trainer_config_helpers_layers_cos_sim` +* 文本卷积采样层, :ref:`api_trainer_config_helpers_network_text_conv_pool` +* 声明Python数据源, :ref:`api_trainer_config_helpers_data_sources`. + +数据提供脚本 +''''''''''' + +.. literalinclude:: ../../../demo/recommendation/dataprovider.py + :language: python + :lines: 15- + +数据提供脚本仅仅是读取meta.bin和评分文件,生成训练需要的样本。 +在脚本:code:`dataprovider.py`中,我们需要设置: + +* obj.slots\: 特征的类型和维度。 +* use_seq\: :code:`dataprovider.py`中的数据是否为序列模式。 +* process\: 返回数据的每一条样本给:code:`paddle`. + +数据提供脚本的细节文档可以参考:ref:`api_pydataprovider`. + +训练 +```` + +准备好数据,配置了网络,编写好数据提供脚本后,现在我们可以开始paddle训练了。 + +代码:code:`run.sh`如下: + +.. literalinclude:: ../../../demo/recommendation/run.sh + :language: bash + :lines: 16- + +该脚本仅仅是开始一个paddle训练过程,将日志写入文件:code:`log.txt`,然后 +打印在屏幕上。 + +脚本:code:`run.sh`中的每一行命令,请参考页面:ref:`cmd_line_index_en`。 +这些参数的简短介绍如下: + +* config\: 告诉paddle哪个文件是神经网络的配置文件。 +* save_dir\: 告诉paddle将模型保存在:code:`./output`中。 +* use_gpu\: 是否使用GPU,默认为不使用。 +* trainer_count\: 一台机器上面的线程数量。 +* test_all_data_in_one_period\: 每一个测试周期测试一次所有数据。否则, + 每个测试周期测试:code:`batch_size`批次的数据。 +* log_period\: 在训练了:code:`log_period`批次后打印日志. +* dot_period\: 在每训练:code:`dot_period`个批次后打印一个:code:`.`. +* num_passes\: 训练至多:code:`num_passes`轮. + +如果训练过程启动成功的话,输出应该类似如下: + +.. 
code-block:: text + + I0601 08:07:22.832059 10549 TrainerInternal.cpp:157] Batch=100 samples=160000 AvgCost=4.13494 CurrentCost=4.13494 Eval: CurrentEval: + + I0601 08:07:50.672627 10549 TrainerInternal.cpp:157] Batch=200 samples=320000 AvgCost=3.80957 CurrentCost=3.48421 Eval: CurrentEval: + + I0601 08:08:18.877369 10549 TrainerInternal.cpp:157] Batch=300 samples=480000 AvgCost=3.68145 CurrentCost=3.42519 Eval: CurrentEval: + + I0601 08:08:46.863963 10549 TrainerInternal.cpp:157] Batch=400 samples=640000 AvgCost=3.6007 CurrentCost=3.35847 Eval: CurrentEval: + + I0601 08:09:15.413025 10549 TrainerInternal.cpp:157] Batch=500 samples=800000 AvgCost=3.54811 CurrentCost=3.33773 Eval: CurrentEval: + I0601 08:09:36.058670 10549 TrainerInternal.cpp:181] Pass=0 Batch=565 samples=902826 AvgCost=3.52368 Eval: + I0601 08:09:46.215489 10549 Tester.cpp:101] Test samples=97383 cost=3.32155 Eval: + I0601 08:09:46.215966 10549 GradientMachine.cpp:132] Saving parameters to ./output/model/pass-00000 + I0601 08:09:46.233397 10549 ParamUtil.cpp:99] save dir ./output/model/pass-00000 + I0601 08:09:46.233438 10549 Util.cpp:209] copy trainer_config.py to ./output/model/pass-00000 + I0601 08:09:46.233541 10549 ParamUtil.cpp:147] fileName trainer_config.py + +模型被保存在:code:`output/`目录中。你可以在任何时候用:code:`Ctrl-C`来停止训练。 + +模型评估和预测 +```````````` + +在训练了几个轮次以后,你可以对模型进行评估,得到最好轮次下的模型。运行下面命令即可: + +.. code-block:: bash + + ./evaluate.sh + +你讲看到如下的信息: + +.. code-block:: text + + Best pass is 00009, error is 3.06949, which means predict get error as 0.875998002281 + evaluating from pass output/pass-00009 + +然后,你可以预测任何用户对于任何一部电影的评价,运行下面命令即可: + +.. code-block:: bash + + python prediction.py 'output/pass-00009/' + +预测程序将读取用户的输入,然后输出预测分数。用户预测的命令行界面如下: + +.. code-block:: text + + Input movie_id: 9 + Input user_id: 4 + Prediction Score is 2.56 + Input movie_id: 8 + Input user_id: 2 + Prediction Score is 3.13 \ No newline at end of file diff --git a/doc/tutorials/rec/ml_regression_en.rst b/doc/tutorials/rec/ml_regression_en.rst index 4bb2586e34..993b9a516f 100644 --- a/doc/tutorials/rec/ml_regression_en.rst +++ b/doc/tutorials/rec/ml_regression_en.rst @@ -36,7 +36,7 @@ And the directory structure of :code:`demo/recommendation/data/ml-1m` is: Field config file ''''''''''''''''' -**Field config file** is used to specific the fields dataset and file format, +**Field config file** is used to specify the fields of the dataset and the file format, i.e, specific **WHAT** type it is in each feature file. The field config file of ml-1m shows in :code:`demo/recommendation/data/config.json`. @@ -188,7 +188,7 @@ Split Training/Testing files We split :code:`ml-1m/ratings.dat` into a training and testing file. The way to split file is for each user, we split the rating by two parts. So each user in testing file will have some rating information in training file. -Use separate.py to separate the training and testing file. +Use :code:`separate.py` to separate the training and testing file. .. code-block:: bash @@ -217,7 +217,7 @@ The network structure shows below. :align: center :alt: rec_regression_network -The demo's neural network config file "trainer_config.py" show as below. +The demo's neural network config file :code:`trainer_config.py` show as below. .. literalinclude:: ../../../demo/recommendation/trainer_config.py :language: python @@ -239,7 +239,7 @@ Then we combine each features of movie into one movie feature by a get one user feature. Then we calculate the cosine similarity of these two features. 
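As a concrete illustration of the combine-and-compare step described above, here is a minimal sketch using the same trainer_config_helpers API; the layer names and sizes are illustrative assumptions and are not the actual layers defined in :code:`trainer_config.py`.

.. code-block:: python

    from paddle.trainer_config_helpers import *

    settings(batch_size=400, learning_rate=1e-4)

    # Stand-ins for the movie and user feature vectors that the real config
    # assembles from the embedded id/title/genres and id/gender/age/occupation
    # inputs; only the "combine then compare" step is the point here.
    movie_vec = fc_layer(input=data_layer(name="movie_features", size=256), size=200)
    user_vec = fc_layer(input=data_layer(name="user_features", size=256), size=200)

    # One cosine similarity score per movie/user pair, regressed against the
    # observed rating.
    similarity = cos_sim(a=movie_vec, b=user_vec)
    outputs(regression_cost(input=similarity, label=data_layer(name="rating", size=1)))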
-In these network, we use several api in :ref:`api_trainer_config` . There are +In these networks, we use several APIs in :ref:`api_trainer_config` . There are * Data Layer, :ref:`api_trainer_config_helpers_layers_data_layer` * Fully Connected Layer, :ref:`api_trainer_config_helpers_layers_fc_layer` @@ -271,19 +271,19 @@ Train After prepare data, config network, writting data provider, now we can run paddle training. -The run.sh is shown as follow: +The :code:`run.sh` is shown as follow: .. literalinclude:: ../../../demo/recommendation/run.sh :language: bash :lines: 16- -It just start a paddle training process, write the log to `log.txt`, +It just start a paddle training process, write the log to :code:`log.txt`, then print it on screen. Each command line argument in :code:`run.sh`, please refer to the :ref:`cmd_line_index` page. The short description of these arguments is shown as follow. * config\: Tell paddle which file is neural network configuration. -* save_dir\: Tell paddle save model into './output' +* save_dir\: Tell paddle save model into :code:`./output`. * use_gpu\: Use gpu or not. Default is false. * trainer_count\: The compute thread in one machine. * test_all_data_in_one_period\: Test All Data during one test period. Otherwise, From cbd5d27445e5621a418fba92bf355e9a4371b66e Mon Sep 17 00:00:00 2001 From: dayhaha <18800111918@163.com> Date: Tue, 13 Dec 2016 19:51:43 +0800 Subject: [PATCH 201/265] fix rst format error --- doc/tutorials/rec/ml_regression_ch.rst | 74 +++++++++++++------------- 1 file changed, 37 insertions(+), 37 deletions(-) diff --git a/doc/tutorials/rec/ml_regression_ch.rst b/doc/tutorials/rec/ml_regression_ch.rst index 19a89d270d..13548fc3a6 100644 --- a/doc/tutorials/rec/ml_regression_ch.rst +++ b/doc/tutorials/rec/ml_regression_ch.rst @@ -1,7 +1,7 @@ MovieLens数据集评分回归模型 ========================= -这里我们在MovieLens数据集描述一种**余弦相似度回归**任务。 +这里我们在MovieLens数据集描述一种 **余弦相似度回归** 任务。 该示例将展示paddle如何进行词向量嵌入,处理相似度回归,针对文本 的单词级别的卷积神经网络,以及paddle如何处理多种类型的输入。 需要注意的是,该模型网络只是用于进行demo展示paddle如何工作,而 @@ -15,7 +15,7 @@ MovieLens数据集评分回归模型 ``````` 下载并解压数据集 '''''''''''''' -这里我们使用:ref:`demo_ml_dataset_en`。 +这里我们使用 :ref:`demo_ml_dataset_en` 。 要下载和解压数据集,只需要简单的运行下面的命令即可。 .. code-block:: bash @@ -23,7 +23,7 @@ MovieLens数据集评分回归模型 cd demo/recommendation/data ./ml_data.sh -:code:`demo/recommendation/data/ml-1m`的目录结构为: +:code:`demo/recommendation/data/ml-1m` 的目录结构为: .. code-block:: text @@ -35,10 +35,10 @@ MovieLens数据集评分回归模型 字段配置文件 '''''''''' -**字段配置文件**用来具体说明数据集的字段和文件格式, -例如,说明每个特征文件具体字段是**什么**类型。 +**字段配置文件** 用来具体说明数据集的字段和文件格式, +例如,说明每个特征文件具体字段是 **什么** 类型。 -ml-1m的字段配置文件在目录:code:`demo/recommendation/data/config.json`中。 +ml-1m的字段配置文件在目录 :code:`demo/recommendation/data/config.json` 中。 其具体说明了字段类型和文件名称: 1) 用户文件中有四种类型的字段\: 编号,性别,年龄和职业; 2) 文件名称为"users.dat",文件的分隔符为"::"。 @@ -70,12 +70,12 @@ ml-1m的字段配置文件在目录:code:`demo/recommendation/data/config.json` 在movielens 1m数据集中,电影和用户有许多的特征。 评分文件的每一行仅仅提供电影或用户的编号来代表相应的电影或用户。 -我们首先处理电影或用户的特征文件,然后用pickle命令将特征(**Meta**)对象存储为文件。 +我们首先处理电影或用户的特征文件,然后用pickle命令将特征( **Meta** )对象存储为文件。 Meta配置文件 ........... -**Meta配置文件**用来具体描述**如何**解析数据集中的每一个字段。 +**Meta配置文件** 用来具体描述 **如何** 解析数据集中的每一个字段。 该文件可以从字段配置文件生成,或是手动编辑生成。文件的格式可以 为json或yaml格式。解析器能通过文件的扩展名自动识别文件的格式。 @@ -124,14 +124,14 @@ Meta配置文件 Meta文件 '''''''' -有了meta配置文件之后,我们可以生成**Meta文件**,该文件是python的pickle对象, +有了meta配置文件之后,我们可以生成 **Meta文件** ,该文件是python的pickle对象, 存储着电影或用户信息。可以运行下面的命令来生成。 .. code-block:: bash python meta_generator.py ml-1m meta.bin --config=meta_config.json -meta文件:code:`meta.bin`的结构如下: +meta文件 :code:`meta.bin` 的结构如下: .. 
code-block:: text @@ -185,17 +185,17 @@ meta文件:code:`meta.bin`的结构如下: 分割训练/测试文件 ''''''''''''''' -我们将:code:`ml-1m/ratings.dat`文件分割为训练和测试文件。分割文件的方法是:对于每位用户,我们将评分分成两部分。 +我们将 :code:`ml-1m/ratings.dat` 文件分割为训练和测试文件。分割文件的方法是:对于每位用户,我们将评分分成两部分。 这样的话每位用户在测试文件中将与训练文件含有同样的信息。 -用:code:`separate.py`来分离训练和测试文件。 +用 :code:`separate.py` 来分离训练和测试文件。 .. code-block:: bash python split.py ml-1m/ratings.dat --delimiter="::" --test_ratio=0.1 -这样就会生成两个文件::code:`ml-1m/ratings.dat.train`和:code:`ml-1m/ratings.data.test`。 -将他们移动到目录:code:`data`,然后进行随机打乱,再为paddle的训练过程提供文件列表。 +这样就会生成两个文件::code:`ml-1m/ratings.dat.train` 和 :code:`ml-1m/ratings.data.test` 。 +将他们移动到目录 :code:`data` ,然后进行随机打乱,再为paddle的训练过程提供文件列表。 .. code-block:: bash @@ -217,27 +217,27 @@ meta文件:code:`meta.bin`的结构如下: :align: center :alt: rec_regression_network -该示例的神经网络配置文件:code:`trainer_config.py`如下所示: +该示例的神经网络配置文件 :code:`trainer_config.py` 如下所示: .. literalinclude:: ../../../demo/recommendation/trainer_config.py :language: python :lines: 15- -在文件:code:`trainer_config.py`中,我们仅仅是讲每个特征种类映射到一个特征向量中,以下 +在文件 :code:`trainer_config.py` 中,我们仅仅是讲每个特征种类映射到一个特征向量中,以下 展示了如何将每个特征映射到一个向量。 -* :code:`id`\: 仅仅是简单的嵌入,然后添加一个全连接层。 -* :code:`embedding`\: +* :code:`id` \: 仅仅是简单的嵌入,然后添加一个全连接层。 +* :code:`embedding` \: - 如果是序列,则先做嵌入,然后再做一次文本卷积网络操作, 然后得到平均采样的结果 - 如果不是序列,则先做嵌入,然后添加一个全连接层。 -* :code:`one_host_dense`\: +* :code:`one_host_dense` \: - 仅仅是两个全连接层。 -然后我们利用多输入的:code:`fc_layer`全连接层将电影的每个特征结合成一个电影特征, +然后我们利用多输入的:code:`fc_layer` 全连接层将电影的每个特征结合成一个电影特征, 并且对用户的特征做同样的操作,也得到一个用户特征。然后我们求这两个特征的余弦相似度。 -在这些网络中,我们用以下的一些:ref:`api_trainer_config`中的接口。 +在这些网络中,我们用以下的一些:ref:`api_trainer_config` 中的接口。 * 数据层, :ref:`api_trainer_config_helpers_layers_data_layer` * 全连接层, :ref:`api_trainer_config_helpers_layers_fc_layer` @@ -246,7 +246,7 @@ meta文件:code:`meta.bin`的结构如下: * 采样层, :ref:`api_trainer_config_helpers_layers_pooling_layer` * 余弦相似度层, :ref:`api_trainer_config_helpers_layers_cos_sim` * 文本卷积采样层, :ref:`api_trainer_config_helpers_network_text_conv_pool` -* 声明Python数据源, :ref:`api_trainer_config_helpers_data_sources`. +* 声明Python数据源, :ref:`api_trainer_config_helpers_data_sources` . 数据提供脚本 ''''''''''' @@ -256,40 +256,40 @@ meta文件:code:`meta.bin`的结构如下: :lines: 15- 数据提供脚本仅仅是读取meta.bin和评分文件,生成训练需要的样本。 -在脚本:code:`dataprovider.py`中,我们需要设置: +在脚本 :code:`dataprovider.py` 中,我们需要设置: * obj.slots\: 特征的类型和维度。 -* use_seq\: :code:`dataprovider.py`中的数据是否为序列模式。 -* process\: 返回数据的每一条样本给:code:`paddle`. +* use_seq\: :code:`dataprovider.py` 中的数据是否为序列模式。 +* process\: 返回数据的每一条样本给 :code:`paddle` . -数据提供脚本的细节文档可以参考:ref:`api_pydataprovider`. +数据提供脚本的细节文档可以参考 :ref:`api_pydataprovider` . 训练 ```` 准备好数据,配置了网络,编写好数据提供脚本后,现在我们可以开始paddle训练了。 -代码:code:`run.sh`如下: +代码 :code:`run.sh` 如下: .. literalinclude:: ../../../demo/recommendation/run.sh :language: bash :lines: 16- -该脚本仅仅是开始一个paddle训练过程,将日志写入文件:code:`log.txt`,然后 +该脚本仅仅是开始一个paddle训练过程,将日志写入文件 :code:`log.txt` ,然后 打印在屏幕上。 -脚本:code:`run.sh`中的每一行命令,请参考页面:ref:`cmd_line_index_en`。 +脚本 :code:`run.sh` 中的每一行命令,请参考页面 :ref:`cmd_line_index_en` 。 这些参数的简短介绍如下: * config\: 告诉paddle哪个文件是神经网络的配置文件。 -* save_dir\: 告诉paddle将模型保存在:code:`./output`中。 +* save_dir\: 告诉paddle将模型保存在: code:`./output` 中。 * use_gpu\: 是否使用GPU,默认为不使用。 * trainer_count\: 一台机器上面的线程数量。 * test_all_data_in_one_period\: 每一个测试周期测试一次所有数据。否则, - 每个测试周期测试:code:`batch_size`批次的数据。 -* log_period\: 在训练了:code:`log_period`批次后打印日志. -* dot_period\: 在每训练:code:`dot_period`个批次后打印一个:code:`.`. -* num_passes\: 训练至多:code:`num_passes`轮. + 每个测试周期测试: code:`batch_size` 批次的数据。 +* log_period\: 在训练了: code:`log_period` 批次后打印日志. 
+* dot_period\: 在每训练: code:`dot_period` 个批次后打印一个 :code:`.` . +* num_passes\: 训练至多: code:`num_passes` 轮. 如果训练过程启动成功的话,输出应该类似如下: @@ -311,7 +311,7 @@ meta文件:code:`meta.bin`的结构如下: I0601 08:09:46.233438 10549 Util.cpp:209] copy trainer_config.py to ./output/model/pass-00000 I0601 08:09:46.233541 10549 ParamUtil.cpp:147] fileName trainer_config.py -模型被保存在:code:`output/`目录中。你可以在任何时候用:code:`Ctrl-C`来停止训练。 +模型被保存在 :code:`output/` 目录中。你可以在任何时候用 :code:`Ctrl-C` 来停止训练。 模型评估和预测 ```````````` @@ -322,7 +322,7 @@ meta文件:code:`meta.bin`的结构如下: ./evaluate.sh -你讲看到如下的信息: +你将看到如下的信息: .. code-block:: text @@ -344,4 +344,4 @@ meta文件:code:`meta.bin`的结构如下: Prediction Score is 2.56 Input movie_id: 8 Input user_id: 2 - Prediction Score is 3.13 \ No newline at end of file + Prediction Score is 3.13 From 42624cb4c9a747456b8cb6dac3d16025bf4ee305 Mon Sep 17 00:00:00 2001 From: dayhaha <18800111918@163.com> Date: Tue, 13 Dec 2016 20:18:01 +0800 Subject: [PATCH 202/265] small details --- doc/tutorials/rec/ml_regression_ch.rst | 40 ++++++++++++++------------ 1 file changed, 21 insertions(+), 19 deletions(-) diff --git a/doc/tutorials/rec/ml_regression_ch.rst b/doc/tutorials/rec/ml_regression_ch.rst index 13548fc3a6..9d2b5071a2 100644 --- a/doc/tutorials/rec/ml_regression_ch.rst +++ b/doc/tutorials/rec/ml_regression_ch.rst @@ -40,7 +40,9 @@ MovieLens数据集评分回归模型 ml-1m的字段配置文件在目录 :code:`demo/recommendation/data/config.json` 中。 其具体说明了字段类型和文件名称: + 1) 用户文件中有四种类型的字段\: 编号,性别,年龄和职业; + 2) 文件名称为"users.dat",文件的分隔符为"::"。 .. include:: ../../../demo/recommendation/data/config.json @@ -96,22 +98,22 @@ Meta配置文件 * 在电影文件movies.dat中 * 我们仅用"::"来分隔每一行 - * pos 0 代表编号。 + * pos 0 代表编号 * pos 1 特征: - * name是电影名。 - * 利用正则表达式来解析该特征。 - * 基于字母的词嵌入特征。 - * 是序列。 + * name是电影名 + * 利用正则表达式来解析该特征 + * 基于字母的词嵌入特征 + * 是序列 * pos 2 特征: - * name是体裁。 - * type是one hot稠密向量。 - * dictionary由解析自动生成,每一个key由'|'分隔。 + * name是体裁 + * type是one hot稠密向量 + * dictionary由解析自动生成,每一个key由'|'分隔 * 在用户文件users.dat中 * 我们仅用"::"来分隔每一行 - * pos 0 代表编号。 + * pos 0 代表编号 * pos 1 特征: - * name是性别。 - * 简单的基于字母的词嵌入。 + * name是性别 + * 简单的基于字母的词嵌入 * pos 2 特征: * name是年龄 * 是整个的词嵌入 @@ -135,7 +137,7 @@ meta文件 :code:`meta.bin` 的结构如下: .. code-block:: text - +--+ movie + +--+ movie | +--+ __meta__ | | +--+ raw_meta # 每个特征的meta配置。列表 | | | + @@ -229,7 +231,7 @@ meta文件 :code:`meta.bin` 的结构如下: * :code:`id` \: 仅仅是简单的嵌入,然后添加一个全连接层。 * :code:`embedding` \: - 如果是序列,则先做嵌入,然后再做一次文本卷积网络操作, - 然后得到平均采样的结果 + 然后得到平均采样的结果。 - 如果不是序列,则先做嵌入,然后添加一个全连接层。 * :code:`one_host_dense` \: - 仅仅是两个全连接层。 @@ -246,7 +248,7 @@ meta文件 :code:`meta.bin` 的结构如下: * 采样层, :ref:`api_trainer_config_helpers_layers_pooling_layer` * 余弦相似度层, :ref:`api_trainer_config_helpers_layers_cos_sim` * 文本卷积采样层, :ref:`api_trainer_config_helpers_network_text_conv_pool` -* 声明Python数据源, :ref:`api_trainer_config_helpers_data_sources` . +* 声明Python数据源, :ref:`api_trainer_config_helpers_data_sources` 数据提供脚本 ''''''''''' @@ -260,9 +262,9 @@ meta文件 :code:`meta.bin` 的结构如下: * obj.slots\: 特征的类型和维度。 * use_seq\: :code:`dataprovider.py` 中的数据是否为序列模式。 -* process\: 返回数据的每一条样本给 :code:`paddle` . +* process\: 返回数据的每一条样本给 :code:`paddle` 。 -数据提供脚本的细节文档可以参考 :ref:`api_pydataprovider` . +数据提供脚本的细节文档可以参考 :ref:`api_pydataprovider` 。 训练 ```` @@ -287,9 +289,9 @@ meta文件 :code:`meta.bin` 的结构如下: * trainer_count\: 一台机器上面的线程数量。 * test_all_data_in_one_period\: 每一个测试周期测试一次所有数据。否则, 每个测试周期测试: code:`batch_size` 批次的数据。 -* log_period\: 在训练了: code:`log_period` 批次后打印日志. -* dot_period\: 在每训练: code:`dot_period` 个批次后打印一个 :code:`.` . -* num_passes\: 训练至多: code:`num_passes` 轮. 
+* log_period\: 在训练了: code:`log_period` 批次后打印日志。 +* dot_period\: 在每训练: code:`dot_period` 个批次后打印一个 :code:`.` 。 +* num_passes\: 训练至多: code:`num_passes` 轮。 如果训练过程启动成功的话,输出应该类似如下: From 9c758caf356e499d2e824260e19e3cb5909521ea Mon Sep 17 00:00:00 2001 From: dayhaha <18800111918@163.com> Date: Thu, 15 Dec 2016 17:39:32 +0800 Subject: [PATCH 203/265] delete _cn --- doc/tutorials/rec/ml_dataset_cn.md | 2 +- doc/tutorials/rec/ml_regression_ch.rst | 6 +++--- 2 files changed, 4 insertions(+), 4 deletions(-) diff --git a/doc/tutorials/rec/ml_dataset_cn.md b/doc/tutorials/rec/ml_dataset_cn.md index d500294e7d..2207a776f0 100644 --- a/doc/tutorials/rec/ml_dataset_cn.md +++ b/doc/tutorials/rec/ml_dataset_cn.md @@ -1,5 +1,5 @@ ```eval_rst -.. _demo_ml_dataset_en: +.. _demo_ml_dataset: ``` diff --git a/doc/tutorials/rec/ml_regression_ch.rst b/doc/tutorials/rec/ml_regression_ch.rst index 9d2b5071a2..a084e4790c 100644 --- a/doc/tutorials/rec/ml_regression_ch.rst +++ b/doc/tutorials/rec/ml_regression_ch.rst @@ -15,7 +15,7 @@ MovieLens数据集评分回归模型 ``````` 下载并解压数据集 '''''''''''''' -这里我们使用 :ref:`demo_ml_dataset_en` 。 +这里我们使用 :ref:`demo_ml_dataset` 。 要下载和解压数据集,只需要简单的运行下面的命令即可。 .. code-block:: bash @@ -225,7 +225,7 @@ meta文件 :code:`meta.bin` 的结构如下: :language: python :lines: 15- -在文件 :code:`trainer_config.py` 中,我们仅仅是讲每个特征种类映射到一个特征向量中,以下 +在文件 :code:`trainer_config.py` 中,我们仅仅是将每个特征种类映射到一个特征向量中,以下 展示了如何将每个特征映射到一个向量。 * :code:`id` \: 仅仅是简单的嵌入,然后添加一个全连接层。 @@ -280,7 +280,7 @@ meta文件 :code:`meta.bin` 的结构如下: 该脚本仅仅是开始一个paddle训练过程,将日志写入文件 :code:`log.txt` ,然后 打印在屏幕上。 -脚本 :code:`run.sh` 中的每一行命令,请参考页面 :ref:`cmd_line_index_en` 。 +脚本 :code:`run.sh` 中的每一行命令,请参考页面 :ref:`cmd_line_index` 。 这些参数的简短介绍如下: * config\: 告诉paddle哪个文件是神经网络的配置文件。 From 332f8d648dbe5885f8b80043c7bf7ee32b02e0b1 Mon Sep 17 00:00:00 2001 From: dayhaha <18800111918@163.com> Date: Thu, 15 Dec 2016 19:04:10 +0800 Subject: [PATCH 204/265] rename ml_regression_ch.rst to ml_regression_cn.rst --- doc/tutorials/rec/{ml_regression_ch.rst => ml_regression_cn.rst} | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename doc/tutorials/rec/{ml_regression_ch.rst => ml_regression_cn.rst} (100%) diff --git a/doc/tutorials/rec/ml_regression_ch.rst b/doc/tutorials/rec/ml_regression_cn.rst similarity index 100% rename from doc/tutorials/rec/ml_regression_ch.rst rename to doc/tutorials/rec/ml_regression_cn.rst From ee2c14229eb3874c690bbd556e993fb5d08c92a0 Mon Sep 17 00:00:00 2001 From: Yu Yang Date: Fri, 16 Dec 2016 10:40:16 +0800 Subject: [PATCH 205/265] Partially fix demo data provider. 
* Partially fix #622 --- demo/gan/data/download_cifar.sh | 1 + demo/gan/data/get_mnist_data.sh | 0 .../data/download_cifar.sh | 1 + demo/image_classification/image_provider.py | 15 ++++---- demo/introduction/.gitignore | 5 +++ demo/introduction/dataprovider.py | 6 ++-- demo/introduction/trainer_config.py | 5 +-- demo/quick_start/.gitignore | 2 ++ demo/quick_start/dataprovider_bow.py | 14 ++++---- demo/quick_start/dataprovider_emb.py | 17 ++++----- demo/recommendation/common_utils.py | 7 ++-- demo/recommendation/dataprovider.py | 36 ++++++++++++------- demo/recommendation/prediction.py | 4 +-- demo/recommendation/preprocess.sh | 2 +- demo/semantic_role_labeling/.gitignore | 4 +++ demo/semantic_role_labeling/data/get_data.sh | 0 .../trainer_config_helpers/data_sources.py | 2 +- .../paddle/trainer_config_helpers/layers.py | 2 +- 18 files changed, 74 insertions(+), 49 deletions(-) mode change 100644 => 100755 demo/gan/data/get_mnist_data.sh create mode 100644 demo/introduction/.gitignore mode change 100644 => 100755 demo/semantic_role_labeling/data/get_data.sh diff --git a/demo/gan/data/download_cifar.sh b/demo/gan/data/download_cifar.sh index ae24ef2b7f..bbadc7c10c 100755 --- a/demo/gan/data/download_cifar.sh +++ b/demo/gan/data/download_cifar.sh @@ -1,3 +1,4 @@ +#!/bin/bash # Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/demo/gan/data/get_mnist_data.sh b/demo/gan/data/get_mnist_data.sh old mode 100644 new mode 100755 diff --git a/demo/image_classification/data/download_cifar.sh b/demo/image_classification/data/download_cifar.sh index 52e82d0d98..532178d627 100755 --- a/demo/image_classification/data/download_cifar.sh +++ b/demo/image_classification/data/download_cifar.sh @@ -1,3 +1,4 @@ +#!/bin/bash # Copyright (c) 2016 PaddlePaddle Authors. 
All Rights Reserved # # Licensed under the Apache License, Version 2.0 (the "License"); diff --git a/demo/image_classification/image_provider.py b/demo/image_classification/image_provider.py index 87eed5eebd..6a315ff094 100644 --- a/demo/image_classification/image_provider.py +++ b/demo/image_classification/image_provider.py @@ -21,7 +21,7 @@ from paddle.trainer.PyDataProvider2 import * # # {'img_size': 32, -# 'settings': , +# 'settings': a global object, # 'color': True, # 'mean_img_size': 32, # 'meta': './data/cifar-out/batches/batches.meta', @@ -50,10 +50,10 @@ def hook(settings, img_size, mean_img_size, num_classes, color, meta, use_jpeg, settings.logger.info('Image size: %s', settings.img_size) settings.logger.info('Meta path: %s', settings.meta_path) - settings.input_types = [ - dense_vector(settings.img_raw_size), # image feature - integer_value(settings.num_classes) - ] # labels + settings.input_types = { + 'image': dense_vector(settings.img_raw_size), + 'label': integer_value(settings.num_classes) + } settings.logger.info('DataProvider Initialization finished') @@ -83,4 +83,7 @@ def processData(settings, file_list): img, settings.img_mean, settings.img_size, settings.is_train, settings.color) label = data['labels'][i] - yield img_feat.astype('float32'), int(label) + yield { + 'image': img_feat.astype('float32'), + 'label': int(label) + } diff --git a/demo/introduction/.gitignore b/demo/introduction/.gitignore new file mode 100644 index 0000000000..c54f3f9480 --- /dev/null +++ b/demo/introduction/.gitignore @@ -0,0 +1,5 @@ +dataprovider.pyc +empty.list +train.log +output +train.list diff --git a/demo/introduction/dataprovider.py b/demo/introduction/dataprovider.py index 03c920cc34..5b48aad040 100644 --- a/demo/introduction/dataprovider.py +++ b/demo/introduction/dataprovider.py @@ -17,8 +17,10 @@ import random # define data types of input: 2 real numbers -@provider(input_types=[dense_vector(1), dense_vector(1)], use_seq=False) +@provider( + input_types={'x': dense_vector(1), + 'y': dense_vector(1)}, use_seq=False) def process(settings, input_file): for i in xrange(2000): x = random.random() - yield [x], [2 * x + 0.3] + yield {'x': [x], 'y': [2 * x + 0.3]} diff --git a/demo/introduction/trainer_config.py b/demo/introduction/trainer_config.py index 41cebcf6e1..ecafe955f9 100644 --- a/demo/introduction/trainer_config.py +++ b/demo/introduction/trainer_config.py @@ -15,11 +15,8 @@ from paddle.trainer_config_helpers import * # 1. read data. Suppose you saved above python code as dataprovider.py -data_file = 'empty.list' -with open(data_file, 'w') as f: - f.writelines(' ') define_py_data_sources2( - train_list=data_file, + train_list=['no_matter.txt'], test_list=None, module='dataprovider', obj='process', diff --git a/demo/quick_start/.gitignore b/demo/quick_start/.gitignore index d6bc73105b..f71662563f 100644 --- a/demo/quick_start/.gitignore +++ b/demo/quick_start/.gitignore @@ -8,6 +8,8 @@ data/test.list data/test.txt data/train.list data/train.txt +data/pred.list +data/pred.txt dataprovider_copy_1.py train.log output diff --git a/demo/quick_start/dataprovider_bow.py b/demo/quick_start/dataprovider_bow.py index 8e651d77bf..2745495586 100644 --- a/demo/quick_start/dataprovider_bow.py +++ b/demo/quick_start/dataprovider_bow.py @@ -31,16 +31,16 @@ def initializer(settings, dictionary, **kwargs): # setting.input_types specifies what the data types the data provider # generates. 
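    # Clarifying note on the convention adopted below: when input_types is a
    # dict, its keys name the inputs; each key must match the name of a
    # data_layer in the network configuration, and process() must yield one
    # dict per sample using exactly the same keys, so the positional order of
    # the slots no longer matters.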
- settings.input_types = [ + settings.input_types = { # The first input is a sparse_binary_vector, # which means each dimension of the vector is either 0 or 1. It is the # bag-of-words (BOW) representation of the texts. - sparse_binary_vector(len(dictionary)), + 'word': sparse_binary_vector(len(dictionary)), # The second input is an integer. It represents the category id of the # sample. 2 means there are two labels in the dataset. # (1 for positive and 0 for negative) - integer_value(2) - ] + 'label': integer_value(2) + } # Delaring a data provider. It has an initializer 'data_initialzer'. @@ -67,12 +67,12 @@ def process(settings, file_name): # Return the features for the current comment. The first is a list # of ids representing a 0-1 binary sparse vector of the text, # the second is the integer id of the label. - yield word_vector, int(label) + yield {'word': word_vector, 'label': int(label)} def predict_initializer(settings, dictionary, **kwargs): settings.word_dict = dictionary - settings.input_types = [sparse_binary_vector(len(dictionary))] + settings.input_types = {'word': sparse_binary_vector(len(dictionary))} # Declaring a data provider for prediction. The difference with process @@ -83,4 +83,4 @@ def process_predict(settings, file_name): for line in f: comment = line.strip().split() word_vector = [settings.word_dict.get(w, UNK_IDX) for w in comment] - yield word_vector + yield {'word': word_vector} diff --git a/demo/quick_start/dataprovider_emb.py b/demo/quick_start/dataprovider_emb.py index b010253a8a..ddfa3ce9b7 100755 --- a/demo/quick_start/dataprovider_emb.py +++ b/demo/quick_start/dataprovider_emb.py @@ -19,13 +19,13 @@ UNK_IDX = 0 def initializer(settings, dictionary, **kwargs): settings.word_dict = dictionary - settings.input_types = [ + settings.input_types = { # Define the type of the first input as sequence of integer. 
# The value of the integers range from 0 to len(dictrionary)-1 - integer_value_sequence(len(dictionary)), + 'word': integer_value_sequence(len(dictionary)), # Define the second input for label id - integer_value(2) - ] + 'label': integer_value(2) + } @provider(init_hook=initializer, cache=CacheType.CACHE_PASS_IN_MEM) @@ -35,15 +35,12 @@ def process(settings, file_name): label, comment = line.strip().split('\t') words = comment.split() word_slot = [settings.word_dict.get(w, UNK_IDX) for w in words] - yield word_slot, int(label) + yield {'word': word_slot, 'label': int(label)} def predict_initializer(settings, dictionary, **kwargs): settings.word_dict = dictionary - settings.input_types = [ - integer_value( - len(dictionary), seq_type=SequenceType.SEQUENCE) - ] + settings.input_types = {'word': integer_value_sequence(len(dictionary))} @provider(init_hook=predict_initializer, should_shuffle=False) @@ -52,4 +49,4 @@ def process_predict(settings, file_name): for line in f: comment = line.strip().split() word_slot = [settings.word_dict.get(w, UNK_IDX) for w in comment] - yield word_slot + yield {'word': word_slot} diff --git a/demo/recommendation/common_utils.py b/demo/recommendation/common_utils.py index d4fbdad1d7..c20c652866 100755 --- a/demo/recommendation/common_utils.py +++ b/demo/recommendation/common_utils.py @@ -17,13 +17,14 @@ from paddle.trainer.PyDataProvider2 import * def meta_to_header(meta, name): metas = meta[name]['__meta__']['raw_meta'] for each_meta in metas: + slot_name = each_meta.get('name', '%s_id' % name) if each_meta['type'] == 'id': - yield integer_value(each_meta['max']) + yield slot_name, integer_value(each_meta['max']) elif each_meta['type'] == 'embedding': is_seq = each_meta['seq'] == 'sequence' - yield integer_value( + yield slot_name, integer_value( len(each_meta['dict']), seq_type=SequenceType.SEQUENCE if is_seq else SequenceType.NO_SEQUENCE) elif each_meta['type'] == 'one_hot_dense': - yield dense_vector(len(each_meta['dict'])) + yield slot_name, dense_vector(len(each_meta['dict'])) diff --git a/demo/recommendation/dataprovider.py b/demo/recommendation/dataprovider.py index 80c62d7561..c4ff96d80e 100755 --- a/demo/recommendation/dataprovider.py +++ b/demo/recommendation/dataprovider.py @@ -16,6 +16,14 @@ from paddle.trainer.PyDataProvider2 import * import common_utils # parse +def __list_to_map__(lst): + ret_val = dict() + for each in lst: + k, v = each + ret_val[k] = v + return ret_val + + def hook(settings, meta, **kwargs): """ Init hook is invoked before process data. It will set obj.slots and store @@ -34,12 +42,16 @@ def hook(settings, meta, **kwargs): # second part is user features. # final part is rating score. # header is a list of [USE_SEQ_OR_NOT?, SlotType] - headers = list(common_utils.meta_to_header(meta, 'movie')) - headers.extend(list(common_utils.meta_to_header(meta, 'user'))) - headers.append(dense_vector(1)) # Score + movie_headers = list(common_utils.meta_to_header(meta, 'movie')) + settings.movie_names = [h[0] for h in movie_headers] + headers = movie_headers + user_headers = list(common_utils.meta_to_header(meta, 'user')) + settings.user_names = [h[0] for h in user_headers] + headers.extend(user_headers) + headers.append(("rating", dense_vector(1))) # Score # slot types. 
- settings.input_types = headers + settings.input_types = __list_to_map__(headers) settings.meta = meta @@ -57,20 +69,20 @@ def process(settings, filename): movie_meta = settings.meta['movie'][movie_id] user_meta = settings.meta['user'][user_id] - outputs = [movie_id - 1] + outputs = [('movie_id', movie_id - 1)] # Then add movie features - for each_meta in movie_meta: - outputs.append(each_meta) + for i, each_meta in enumerate(movie_meta): + outputs.append((settings.movie_names[i + 1], each_meta)) # Then add user id. - outputs.append(user_id - 1) + outputs.append(('user_id', user_id - 1)) # Then add user features. - for each_meta in user_meta: - outputs.append(each_meta) + for i, each_meta in enumerate(user_meta): + outputs.append((settings.user_names[i + 1], each_meta)) # Finally, add score - outputs.append([score]) + outputs.append(('rating', [score])) # Return data to paddle - yield outputs + yield __list_to_map__(outputs) diff --git a/demo/recommendation/prediction.py b/demo/recommendation/prediction.py index 191120188e..8ad993eab3 100755 --- a/demo/recommendation/prediction.py +++ b/demo/recommendation/prediction.py @@ -34,8 +34,8 @@ if __name__ == '__main__': network.loadParameters(model_path) with open('./data/meta.bin', 'rb') as f: meta = pickle.load(f) - headers = list(meta_to_header(meta, 'movie')) - headers.extend(list(meta_to_header(meta, 'user'))) + headers = [h[1] for h in meta_to_header(meta, 'movie')] + headers.extend([h[1] for h in meta_to_header(meta, 'user')]) cvt = DataProviderConverter(headers) while True: movie_id = int(raw_input("Input movie_id: ")) diff --git a/demo/recommendation/preprocess.sh b/demo/recommendation/preprocess.sh index e121e47019..dc6b2cdfc1 100755 --- a/demo/recommendation/preprocess.sh +++ b/demo/recommendation/preprocess.sh @@ -25,7 +25,7 @@ python meta_generator.py $dir meta.bin --config=meta_config.json echo 'split train/test file' python split.py $dir/ratings.dat --delimiter=${delimiter} --test_ratio=0.1 echo 'shuffle train file' -shuf $dir/ratings.dat.train > ratings.dat.train +gshuf $dir/ratings.dat.train > ratings.dat.train cp $dir/ratings.dat.test . 
echo "./data/ratings.dat.train" > train.list echo "./data/ratings.dat.test" > test.list diff --git a/demo/semantic_role_labeling/.gitignore b/demo/semantic_role_labeling/.gitignore index cd90ca7bbe..65c9b674c7 100644 --- a/demo/semantic_role_labeling/.gitignore +++ b/demo/semantic_role_labeling/.gitignore @@ -8,3 +8,7 @@ data/test.wsj.seq_pair data/test.wsj.words data/tgt.dict output +data/emb +data/targetDict.txt +data/verbDict.txt +data/wordDict.txt diff --git a/demo/semantic_role_labeling/data/get_data.sh b/demo/semantic_role_labeling/data/get_data.sh old mode 100644 new mode 100755 diff --git a/python/paddle/trainer_config_helpers/data_sources.py b/python/paddle/trainer_config_helpers/data_sources.py index 0fcf993d57..d7cb95c477 100644 --- a/python/paddle/trainer_config_helpers/data_sources.py +++ b/python/paddle/trainer_config_helpers/data_sources.py @@ -69,7 +69,7 @@ def define_py_data_source(file_list, """ if isinstance(file_list, list): file_list_name = 'train.list' - if isinstance(cls, TestData): + if cls == TestData: file_list_name = 'test.list' with open(file_list_name, 'w') as f: f.writelines(file_list) diff --git a/python/paddle/trainer_config_helpers/layers.py b/python/paddle/trainer_config_helpers/layers.py index 8dd6b7b7d2..c10fa671bd 100644 --- a/python/paddle/trainer_config_helpers/layers.py +++ b/python/paddle/trainer_config_helpers/layers.py @@ -970,7 +970,7 @@ def pooling_layer(input, :param layer_attr: The Extra Attributes for layer, such as dropout. :type layer_attr: ExtraLayerAttribute|None :return: LayerOutput object. - :rtype: LayerType + :rtype: LayerOutput """ extra_dict = dict() # noinspection PyUnresolvedReferences From 54cb6bdb379ef5304bffbce85b6e980c66cca978 Mon Sep 17 00:00:00 2001 From: Yu Yang Date: Fri, 16 Dec 2016 13:38:05 +0800 Subject: [PATCH 206/265] Fix bad import name in trainer_config_helpers. 
* math => layer_math * Fix #903 --- python/paddle/trainer_config_helpers/__init__.py | 2 +- .../tests/configs/math_ops.py | 13 ++++++------- 2 files changed, 7 insertions(+), 8 deletions(-) diff --git a/python/paddle/trainer_config_helpers/__init__.py b/python/paddle/trainer_config_helpers/__init__.py index 3ac1454934..a2335768b9 100644 --- a/python/paddle/trainer_config_helpers/__init__.py +++ b/python/paddle/trainer_config_helpers/__init__.py @@ -22,4 +22,4 @@ from optimizers import * from attrs import * # This will enable operator overload for LayerOutput -import math +import math as layer_math diff --git a/python/paddle/trainer_config_helpers/tests/configs/math_ops.py b/python/paddle/trainer_config_helpers/tests/configs/math_ops.py index c4c6d4020f..3331c10d64 100644 --- a/python/paddle/trainer_config_helpers/tests/configs/math_ops.py +++ b/python/paddle/trainer_config_helpers/tests/configs/math_ops.py @@ -1,15 +1,14 @@ from paddle.trainer_config_helpers import * -from paddle.trainer_config_helpers import math settings(batch_size=1000, learning_rate=1e-5) x = data_layer(name='data', size=100) -x = math.exp(x) -x = math.log(x) -x = math.abs(x) -x = math.sigmoid(x) -x = math.square(x) -x = math.square(x) +x = layer_math.exp(x) +x = layer_math.log(x) +x = layer_math.abs(x) +x = layer_math.sigmoid(x) +x = layer_math.square(x) +x = layer_math.square(x) y = 1 + x y = y + 1 y = x + y From 9ea33c59bfd12f14390961f636d4dcd4a3a7f2a5 Mon Sep 17 00:00:00 2001 From: Yu Yang Date: Fri, 16 Dec 2016 13:50:51 +0800 Subject: [PATCH 207/265] Refine FAQ for git submodule --- doc/faq/index_cn.rst | 33 ++++++++++++++++++++++++++------- 1 file changed, 26 insertions(+), 7 deletions(-) diff --git a/doc/faq/index_cn.rst b/doc/faq/index_cn.rst index abdb5c7cf9..d611eb8250 100644 --- a/doc/faq/index_cn.rst +++ b/doc/faq/index_cn.rst @@ -113,7 +113,7 @@ PaddlePaddle支持Sparse的训练,sparse训练需要训练特征是 :code:`spa * 具体的多机训练方法参考 `多机训练文档 <../ui/data_provider/pydataprovider2.html#provider>`_ 。 -3. 遇到“非法指令”或者是“illegal instruction” +3. 遇到“非法指令”或者是“illegal instruction” -------------------------------------------- PaddlePaddle使用avx SIMD指令提高cpu执行效率,因此错误的使用二进制发行版可能会导致这种错误,请选择正确的版本。 @@ -140,7 +140,7 @@ PaddlePaddle使用avx SIMD指令提高cpu执行效率,因此错误的使用二 .. code-block:: python - hidden = fc_layer(input=ipt, param_attr=ParamAttr(initial_max=1.0, initial_min=-1.0), + hidden = fc_layer(input=ipt, param_attr=ParamAttr(initial_max=1.0, initial_min=-1.0), bias_attr=ParamAttr(initial_mean=1.0, initial_std=0.0)) 上述代码将bias全部初始化为1.0, 同时将参数初始化为 :code:`[1.0, -1.0]` 的均匀分布。 @@ -190,14 +190,14 @@ PaddlePaddle的参数使用名字 :code:`name` 作为参数的ID,相同名字 41 - test_config_parser (Failed) 42 - test_swig_api (Failed) 43 - layers_test (Failed) - + 并且查询PaddlePaddle单元测试的日志,提示: .. code-block:: bash - + paddle package is already in your PYTHONPATH. But unittest need a clean environment. Please uninstall paddle package before start unittest. Try to 'pip uninstall paddle'. - + 解决办法是: * 卸载PaddlePaddle包 :code:`pip uninstall paddle`, 清理掉老旧的PaddlePaddle安装包,使得单元测试有一个干净的环境。如果PaddlePaddle包已经在python的site-packages里面,单元测试会引用site-packages里面的python包,而不是源码目录里 :code:`/python` 目录下的python包。同时,即便设置 :code:`PYTHONPATH` 到 :code:`/python` 也没用,因为python的搜索路径是优先已经安装的python包。 @@ -225,7 +225,7 @@ PaddlePaddle的参数使用名字 :code:`name` 作为参数的ID,相同名字 用户强制指定特定的Python版本,具体操作如下: .. code-block:: bash - + cmake .. -DPYTHON_EXECUTABLE= -DPYTHON_LIBRARY= -DPYTHON_INCLUDE_DIR= 用户需要指定本机上Python的路径:````, ````, ```` @@ -238,7 +238,7 @@ PaddlePaddle的参数使用名字 :code:`name` 作为参数的ID,相同名字 .. 
code-block:: bash [libprotobuf ERROR google/protobuf/io/coded_stream.cc:171] A protocol message was rejected because it was too big (more than 67108864 bytes). To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h. - F1205 14:59:50.295174 14703 TrainerConfigHelper.cpp:59] Check failed: m->conf.ParseFromString(configProtoStr) + F1205 14:59:50.295174 14703 TrainerConfigHelper.cpp:59] Check failed: m->conf.ParseFromString(configProtoStr) 可能的原因是:传给dataprovider的某一个args过大,一般是由于直接传递大字典导致的。错误的define_py_data_sources2类似: @@ -284,3 +284,22 @@ PaddlePaddle的参数使用名字 :code:`name` 作为参数的ID,相同名字 .. code-block:: bash paddle train --use_gpu=true --trainer_count=2 --gpu_id=2 + +12. 编译源码提示warp-ctc/include/ctc.h 找不到的情况 +--------------------------------------------------- + +目前Paddle使用\ :code:`git submodule`\ 来引用一些第三方模块。简单的\ +:code:`git clone`\ 命令不能完全得到Paddle和第三方模块的代码。需要使用\: + +.. code-block:: bash + + git clone --recursive https://github.com/PaddlePaddle/Paddle.git + +来获取所有源码。对于已经clone的git版本库,可以在Paddle的源码目录中执行\: + +.. code-block:: bash + + git submodule init + git submodule update + +来获得所有第三方模块。 \ No newline at end of file From 796a40daa5783263e29086975433464003111d26 Mon Sep 17 00:00:00 2001 From: Yu Yang Date: Fri, 16 Dec 2016 14:06:23 +0800 Subject: [PATCH 208/265] Follow comments --- doc/faq/index_cn.rst | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc/faq/index_cn.rst b/doc/faq/index_cn.rst index d611eb8250..0a825c5c7d 100644 --- a/doc/faq/index_cn.rst +++ b/doc/faq/index_cn.rst @@ -289,7 +289,7 @@ PaddlePaddle的参数使用名字 :code:`name` 作为参数的ID,相同名字 --------------------------------------------------- 目前Paddle使用\ :code:`git submodule`\ 来引用一些第三方模块。简单的\ -:code:`git clone`\ 命令不能完全得到Paddle和第三方模块的代码。需要使用\: +:code:`git clone`\ 命令不能得到第三方模块的代码。需要使用\: .. code-block:: bash From 651d323619f77b461e643c7c2b77fe25b4bb360c Mon Sep 17 00:00:00 2001 From: Yu Yang Date: Fri, 16 Dec 2016 14:30:54 +0800 Subject: [PATCH 209/265] Follow comments --- doc/faq/index_cn.rst | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/doc/faq/index_cn.rst b/doc/faq/index_cn.rst index 0a825c5c7d..f2f114065c 100644 --- a/doc/faq/index_cn.rst +++ b/doc/faq/index_cn.rst @@ -156,8 +156,8 @@ PaddlePaddle的参数使用名字 :code:`name` 作为参数的ID,相同名字 这里 :code:`hidden_a` 和 :code:`hidden_b` 使用了同样的parameter和bias。并且softmax层的两个输入也使用了同样的参数 :code:`softmax_param`。 -7. *-cp27mu-linux_x86_64.whl is not a supported wheel on this platform. ---------------------------------------------------------------------------- +7. \*-cp27mu-linux_x86_64.whl is not a supported wheel on this platform. 
+------------------------------------------------------------------------ 出现这个问题的主要原因是,系统编译wheel包的时候,使用的 :code:`wheel` 包是最新的, 而系统中的 :code:`pip` 包比较老。具体的解决方法是,更新 :code:`pip` 包并重新编译PaddlePaddle。 From fc80f65889fa4df9e71c4decb70e45681f9bdc3d Mon Sep 17 00:00:00 2001 From: Yu Yang Date: Fri, 16 Dec 2016 14:54:59 +0800 Subject: [PATCH 210/265] Temp disable errored unittest --- paddle/gserver/tests/test_RecurrentGradientMachine.cpp | 9 +++++---- 1 file changed, 5 insertions(+), 4 deletions(-) diff --git a/paddle/gserver/tests/test_RecurrentGradientMachine.cpp b/paddle/gserver/tests/test_RecurrentGradientMachine.cpp index 874aabf37c..b47279b77a 100644 --- a/paddle/gserver/tests/test_RecurrentGradientMachine.cpp +++ b/paddle/gserver/tests/test_RecurrentGradientMachine.cpp @@ -127,7 +127,7 @@ TEST(RecurrentGradientMachine, HasSubSequence) { } } -TEST(RecurrentGradientMachine, rnn) { +TEST(RecurrentGradientMachine, DISABLED_rnn) { for (bool useGpu : {false, true}) { test("gserver/tests/sequence_rnn.conf", "gserver/tests/sequence_nest_rnn.conf", @@ -136,7 +136,7 @@ TEST(RecurrentGradientMachine, rnn) { } } -TEST(RecurrentGradientMachine, rnn_multi_input) { +TEST(RecurrentGradientMachine, DISABLED_rnn_multi_input) { for (bool useGpu : {false, true}) { test("gserver/tests/sequence_rnn_multi_input.conf", "gserver/tests/sequence_nest_rnn_multi_input.conf", @@ -145,7 +145,7 @@ TEST(RecurrentGradientMachine, rnn_multi_input) { } } -TEST(RecurrentGradientMachine, rnn_multi_unequalength_input) { +TEST(RecurrentGradientMachine, DISABLED_rnn_multi_unequalength_input) { for (bool useGpu : {false, true}) { test("gserver/tests/sequence_rnn_multi_unequalength_inputs.py", "gserver/tests/sequence_nest_rnn_multi_unequalength_inputs.py", @@ -155,13 +155,14 @@ TEST(RecurrentGradientMachine, rnn_multi_unequalength_input) { } int main(int argc, char** argv) { + testing::InitGoogleTest(&argc, argv); + if (paddle::version::isWithPyDataProvider()) { if (!paddle::version::isWithGpu()) { FLAGS_use_gpu = false; } initMain(argc, argv); initPython(argc, argv); - testing::InitGoogleTest(&argc, argv); return RUN_ALL_TESTS(); } else { return 0; From 5222b586e2db3a4dc46cacf884afae9e4d6e51f2 Mon Sep 17 00:00:00 2001 From: yangwenbo02 Date: Fri, 16 Dec 2016 15:43:40 +0800 Subject: [PATCH 211/265] support UBUNTU MIRROR and modify doc --- .../build_and_install/docker_install_en.rst | 16 ++++++++++++++++ paddle/scripts/docker/Dockerfile | 2 ++ paddle/scripts/docker/Dockerfile.gpu | 2 ++ 3 files changed, 20 insertions(+) diff --git a/doc/getstarted/build_and_install/docker_install_en.rst b/doc/getstarted/build_and_install/docker_install_en.rst index 7633bf4d57..1252ff3974 100644 --- a/doc/getstarted/build_and_install/docker_install_en.rst +++ b/doc/getstarted/build_and_install/docker_install_en.rst @@ -142,6 +142,22 @@ to install CUDA driver and let Docker knows about it: export DEVICES=$(\ls /dev/nvidia* | xargs -I{} echo '--device {}:{}') docker run ${CUDA_SO} ${DEVICES} -it paddledev/paddle:gpu-latest + +UBUNTU MIRROR +------------- + +Building Paddle Docker image hits some wrong with apt-get update, you +can use other UBUNTU MIRROR instead of the default + +.. code-block:: bash + + cd ~ + git clone https://github.com/PaddlePaddle/Paddle.git + cd Paddle + git submodule update --init --recursive + docker build --build-arg UBUNTU_MIRROR="http://mirrors.163.com" -t paddle:cpu-avx -f paddle/scripts/docker/Dockerfile . 
+ docker build --build-arg UBUNTU_MIRROR="http://mirrors.163.com" -t paddle:gpu-avx -f paddle/scripts/docker/Dockerfile.gpu . + Non-AVX Images -------------- diff --git a/paddle/scripts/docker/Dockerfile b/paddle/scripts/docker/Dockerfile index 207f97c4a6..f26055d0d4 100644 --- a/paddle/scripts/docker/Dockerfile +++ b/paddle/scripts/docker/Dockerfile @@ -2,6 +2,8 @@ FROM ubuntu:14.04 MAINTAINER PaddlePaddle Authors ARG DEBIAN_FRONTEND=noninteractive +ARG UBUNTU_MIRROR +RUN /bin/bash -c 'if [[ -n ${UBUNTU_MIRROR} ]]; then sed -i 's#http://archive.ubuntu.com#${UBUNTU_MIRROR}#g' /etc/apt/sources.list; fi' RUN apt-get update \ && apt-get install -y cmake libprotobuf-dev protobuf-compiler git \ libgoogle-glog-dev libgflags-dev libgtest-dev \ diff --git a/paddle/scripts/docker/Dockerfile.gpu b/paddle/scripts/docker/Dockerfile.gpu index 33f6adfea2..d13b977147 100644 --- a/paddle/scripts/docker/Dockerfile.gpu +++ b/paddle/scripts/docker/Dockerfile.gpu @@ -2,6 +2,8 @@ FROM nvidia/cuda:7.5-cudnn5-devel-ubuntu14.04 MAINTAINER PaddlePaddle Authors ARG DEBIAN_FRONTEND=noninteractive +ARG UBUNTU_MIRROR +RUN /bin/bash -c 'if [[ -n ${UBUNTU_MIRROR} ]]; then sed -i 's#http://archive.ubuntu.com#${UBUNTU_MIRROR}#g' /etc/apt/sources.list; fi' RUN apt-get update \ && apt-get install -y cmake libprotobuf-dev protobuf-compiler git \ libgoogle-glog-dev libgflags-dev libgtest-dev \ From a8a4a84c1ff294573dfbacdcf9b354d8f4323d0f Mon Sep 17 00:00:00 2001 From: Yunfeng Wang Date: Fri, 16 Dec 2016 16:55:55 +0800 Subject: [PATCH 212/265] fix typo: you -> we Hi paddle developers, In this tutorial, I guess it should be `We` rather than `You` according to context. Please check it, thanks. --- doc/tutorials/quick_start/index_en.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc/tutorials/quick_start/index_en.md b/doc/tutorials/quick_start/index_en.md index 4e765b2303..eed0a6239a 100644 --- a/doc/tutorials/quick_start/index_en.md +++ b/doc/tutorials/quick_start/index_en.md @@ -159,7 +159,7 @@ define_py_data_sources2(train_list='data/train.list', You can refer to the following link for more detailed examples and data formats: PyDataProvider2. ## Network Architecture -You will describe four kinds of network architectures in this section. +We will describe four kinds of network architectures in this section.
![](./src/PipelineNetwork_en.jpg)
First, you will build a logistic regression model. Later, you will also get chance to build other more powerful network architectures. From 5b746fb183572bc04a0697f3ef9d043849506862 Mon Sep 17 00:00:00 2001 From: yangwenbo02 Date: Fri, 16 Dec 2016 17:23:24 +0800 Subject: [PATCH 213/265] modify doc doc/getstarted/build_and_install/docker_install_en.rst --- .../build_and_install/docker_install_en.rst | 24 ++++++------------- 1 file changed, 7 insertions(+), 17 deletions(-) diff --git a/doc/getstarted/build_and_install/docker_install_en.rst b/doc/getstarted/build_and_install/docker_install_en.rst index 1252ff3974..ffda796470 100644 --- a/doc/getstarted/build_and_install/docker_install_en.rst +++ b/doc/getstarted/build_and_install/docker_install_en.rst @@ -39,12 +39,18 @@ The general development workflow with Docker and Bazel is as follows: code. This image contains all the development tools and dependencies of PaddlePaddle. - .. code-block:: bash cd paddle docker build -t paddle:dev -f paddle/scripts/docker/Dockerfile . + Apt-get source errors may occur when building paddle docker image. + **You can specify the UBUNTU MIRROR with :code:`--build-arg UBUNTU_MIRROR` like the example below.** + + .. code-block:: bash + + docker build --build-arg UBUNTU_MIRROR="http://mirrors.163.com" -t paddle:dev -f paddle/scripts/docker/Dockerfile . + 3. Run the image as a container and mounting local source code directory into the container. This allows us to change the code on @@ -142,22 +148,6 @@ to install CUDA driver and let Docker knows about it: export DEVICES=$(\ls /dev/nvidia* | xargs -I{} echo '--device {}:{}') docker run ${CUDA_SO} ${DEVICES} -it paddledev/paddle:gpu-latest - -UBUNTU MIRROR -------------- - -Building Paddle Docker image hits some wrong with apt-get update, you -can use other UBUNTU MIRROR instead of the default - -.. code-block:: bash - - cd ~ - git clone https://github.com/PaddlePaddle/Paddle.git - cd Paddle - git submodule update --init --recursive - docker build --build-arg UBUNTU_MIRROR="http://mirrors.163.com" -t paddle:cpu-avx -f paddle/scripts/docker/Dockerfile . - docker build --build-arg UBUNTU_MIRROR="http://mirrors.163.com" -t paddle:gpu-avx -f paddle/scripts/docker/Dockerfile.gpu . - Non-AVX Images -------------- From 36af605a2d13f7be0a8d326144b88d7d2ed5d242 Mon Sep 17 00:00:00 2001 From: yangwenbo02 Date: Fri, 16 Dec 2016 17:33:14 +0800 Subject: [PATCH 214/265] modify doc --- doc/getstarted/build_and_install/docker_install_en.rst | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git a/doc/getstarted/build_and_install/docker_install_en.rst b/doc/getstarted/build_and_install/docker_install_en.rst index ffda796470..1cc23ac3aa 100644 --- a/doc/getstarted/build_and_install/docker_install_en.rst +++ b/doc/getstarted/build_and_install/docker_install_en.rst @@ -45,11 +45,14 @@ The general development workflow with Docker and Bazel is as follows: docker build -t paddle:dev -f paddle/scripts/docker/Dockerfile . Apt-get source errors may occur when building paddle docker image. - **You can specify the UBUNTU MIRROR with :code:`--build-arg UBUNTU_MIRROR` like the example below.** + **You can specify the UBUNTU MIRROR with** :code:`--build-arg UBUNTU_MIRROR` **like the example below.** .. code-block:: bash - docker build --build-arg UBUNTU_MIRROR="http://mirrors.163.com" -t paddle:dev -f paddle/scripts/docker/Dockerfile . + docker build \ + --build-arg UBUNTU_MIRROR="http://mirrors.163.com" \ + -t paddle:dev \ + -f paddle/scripts/docker/Dockerfile . 3. 
Run the image as a container and mounting local source code From 9f990d9059dcf1b3536c3060670121a2fe67ce66 Mon Sep 17 00:00:00 2001 From: gaoyuan Date: Fri, 16 Dec 2016 19:15:03 +0800 Subject: [PATCH 215/265] Add unittest of the priorbox layer --- paddle/gserver/layers/PriorBox.cpp | 1 + paddle/gserver/tests/CMakeLists.txt | 8 ++ paddle/gserver/tests/test_PriorBox.cpp | 160 +++++++++++++++++++++++++ python/paddle/trainer/config_parser.py | 3 +- 4 files changed, 171 insertions(+), 1 deletion(-) create mode 100644 paddle/gserver/tests/test_PriorBox.cpp diff --git a/paddle/gserver/layers/PriorBox.cpp b/paddle/gserver/layers/PriorBox.cpp index dd52f61c30..ca61dfec5f 100644 --- a/paddle/gserver/layers/PriorBox.cpp +++ b/paddle/gserver/layers/PriorBox.cpp @@ -76,6 +76,7 @@ void PriorBoxLayer::forward(PassType passType) { auto image = getInput(1); int imageWidth = image.getFrameWidth(); int imageHeight = image.getFrameHeight(); + float stepW = static_cast(imageWidth) / layerWidth; float stepH = static_cast(imageHeight) / layerHeight; int dim = layerHeight * layerWidth * numPriors_ * 4; diff --git a/paddle/gserver/tests/CMakeLists.txt b/paddle/gserver/tests/CMakeLists.txt index 34dc375f21..c26a2a7f06 100644 --- a/paddle/gserver/tests/CMakeLists.txt +++ b/paddle/gserver/tests/CMakeLists.txt @@ -34,6 +34,14 @@ add_unittest_without_exec(test_ConvTrans add_test(NAME test_ConvTrans COMMAND test_ConvTrans) +################# test_PriorBox ####################### +add_unittest_without_exec(test_PriorBox + test_PriorBox.cpp + LayerGradUtil.cpp + TestUtil.cpp) + +add_test(NAME test_PriorBox + COMMAND test_PriorBox) ################# test_ConvUnify ####################### add_unittest_without_exec(test_ConvUnify test_ConvUnify.cpp diff --git a/paddle/gserver/tests/test_PriorBox.cpp b/paddle/gserver/tests/test_PriorBox.cpp new file mode 100644 index 0000000000..fd63be2f8e --- /dev/null +++ b/paddle/gserver/tests/test_PriorBox.cpp @@ -0,0 +1,160 @@ +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
*/ + +#include +#include + +#include "LayerGradUtil.h" +#include "TestUtil.h" + +using namespace paddle; // NOLINT +using namespace std; // NOLINT + +P_DECLARE_bool(use_gpu); +P_DECLARE_int32(gpu_id); +P_DECLARE_bool(thread_local_rand_use_global_seed); + +// Do one forward pass of priorBox layer and check to see if its output +// matches the given result +void doOnePriorBoxTest(size_t featureMapWidth, + size_t featureMapHeight, + size_t imageWidth, + size_t imageHeight, + vector minSize, + vector maxSize, + vector aspectRatio, + vector variance, + MatrixPtr& result) { + // Setting up the priorbox layer + TestConfig configt; + configt.layerConfig.set_type("priorbox"); + + configt.inputDefs.push_back({INPUT_DATA, "featureMap", 1, 0}); + LayerInputConfig* input = configt.layerConfig.add_inputs(); + configt.inputDefs.push_back({INPUT_DATA, "image", 1, 0}); + configt.layerConfig.add_inputs(); + PriorBoxConfig* pb = input->mutable_priorbox_conf(); + for (size_t i = 0; i < minSize.size(); i++) pb->add_min_size(minSize[i]); + for (size_t i = 0; i < maxSize.size(); i++) pb->add_max_size(maxSize[i]); + for (size_t i = 0; i < aspectRatio.size(); i++) + pb->add_aspect_ratio(aspectRatio[i]); + for (size_t i = 0; i < variance.size(); i++) pb->add_variance(variance[i]); + + // data layer initialize + std::vector dataLayers; + LayerMap layerMap; + vector datas; + initDataLayer( + configt, &dataLayers, &datas, &layerMap, "priorbox", 1, false, true); + dataLayers[0]->getOutput().setFrameHeight(featureMapHeight); + dataLayers[0]->getOutput().setFrameWidth(featureMapWidth); + dataLayers[1]->getOutput().setFrameHeight(imageHeight); + dataLayers[1]->getOutput().setFrameWidth(imageWidth); + + // test layer initialize + std::vector parameters; + LayerPtr priorboxLayer; + initTestLayer(configt, &layerMap, ¶meters, &priorboxLayer); + + priorboxLayer->forward(PASS_GC); + checkMatrixEqual(priorboxLayer->getOutputValue(), result); +} + +TEST(Layer, priorBoxLayerFwd) { + vector minSize; + vector maxSize; + vector aspectRatio; + vector variance; + + minSize.push_back(276); + maxSize.push_back(330); + variance.push_back(0.1); + variance.push_back(0.1); + variance.push_back(0.2); + variance.push_back(0.2); + + MatrixPtr result; + result = Matrix::create(1, 2 * 8, false, false); + + float resultData[] = {0.04, + 0.04, + 0.96, + 0.96, + 0.1, + 0.1, + 0.2, + 0.2, + 0, + 0, + 1, + 1, + 0.1, + 0.1, + 0.2, + 0.2}; + result->setData(resultData); + doOnePriorBoxTest(/* featureMapWidth */ 1, + /* featureMapHeight */ 1, + /* imageWidth */ 300, + /* imageHeight */ 300, + minSize, + maxSize, + aspectRatio, + variance, + result); + + variance[1] = 0.2; + variance[3] = 0.1; + maxSize.pop_back(); + Matrix::resizeOrCreate(result, 1, 4 * 8, false, false); + float resultData2[] = {0, 0, 0.595, 0.595, 0.1, 0.2, 0.2, 0.1, + 0.405, 0, 1, 0.595, 0.1, 0.2, 0.2, 0.1, + 0, 0.405, 0.595, 1, 0.1, 0.2, 0.2, 0.1, + 0.405, 0.405, 1, 1, 0.1, 0.2, 0.2, 0.1}; + result->setData(resultData2); + doOnePriorBoxTest(/* featureMapWidth */ 2, + /* featureMapHeight */ 2, + /* imageWidth */ 400, + /* imageHeight */ 400, + minSize, + maxSize, + aspectRatio, + variance, + result); + + aspectRatio.push_back(2); + Matrix::resizeOrCreate(result, 1, 3 * 8, false, false); + float resultData3[] = {0.04, 0.04, 0.96, 0.96, 0.1, 0.2, + 0.2, 0.1, 0, 0.17473088, 1, 0.825269, + 0.1, 0.2, 0.2, 0.1, 0.17473088, 0, + 0.825269, 1, 0.1, 0.2, 0.2, 0.1}; + result->setData(resultData3); + doOnePriorBoxTest(/* featureMapWidth */ 1, + /* featureMapHeight */ 1, + /* imageWidth */ 300, + 
/* imageHeight */ 300, + minSize, + maxSize, + aspectRatio, + variance, + result); +} + +int main(int argc, char** argv) { + testing::InitGoogleTest(&argc, argv); + initMain(argc, argv); + FLAGS_thread_local_rand_use_global_seed = true; + srand(1); + return RUN_ALL_TESTS(); +} diff --git a/python/paddle/trainer/config_parser.py b/python/paddle/trainer/config_parser.py index 83fda9f709..4056ecdf79 100644 --- a/python/paddle/trainer/config_parser.py +++ b/python/paddle/trainer/config_parser.py @@ -1583,7 +1583,7 @@ class PriorBoxLayer(LayerBase): def __init__(self, name, inputs, size, min_size, max_size, aspect_ratio, variance): super(PriorBoxLayer, self).__init__(name, 'priorbox', 0, inputs) - config_assert(len(inputs) == 2, 'PriorBoxLayer must have 2 input') + config_assert(len(inputs) == 2, 'PriorBoxLayer must have 2 inputs') input_layer = self.get_input_layer(1) config_assert( input_layer.type == 'data', @@ -1591,6 +1591,7 @@ class PriorBoxLayer(LayerBase): 'a data layer') config_assert(input_layer.width > 0, 'The data layer must set width') config_assert(input_layer.height > 0, 'The data layer must set height') + config_assert(len(variance) == 4, 'The variance must have 4 inputs') self.config.inputs[0].priorbox_conf.min_size.extend(min_size) self.config.inputs[0].priorbox_conf.max_size.extend(max_size) self.config.inputs[0].priorbox_conf.aspect_ratio.extend(aspect_ratio) From 8d9f67591022655fb62401c470825b319573920c Mon Sep 17 00:00:00 2001 From: gaoyuan Date: Fri, 16 Dec 2016 19:54:09 +0800 Subject: [PATCH 216/265] Add header files --- paddle/gserver/tests/test_PriorBox.cpp | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/paddle/gserver/tests/test_PriorBox.cpp b/paddle/gserver/tests/test_PriorBox.cpp index fd63be2f8e..8aabb1ef97 100644 --- a/paddle/gserver/tests/test_PriorBox.cpp +++ b/paddle/gserver/tests/test_PriorBox.cpp @@ -12,8 +12,14 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ +#include #include #include +#include "ModelConfig.pb.h" +#include "paddle/gserver/layers/DataLayer.h" +#include "paddle/math/MathUtils.h" +#include "paddle/trainer/Trainer.h" +#include "paddle/utils/GlobalConstants.h" #include "LayerGradUtil.h" #include "TestUtil.h" From 03d14ac4b1dc4a64aa58f512c424889624fdb6d6 Mon Sep 17 00:00:00 2001 From: livc Date: Fri, 16 Dec 2016 20:16:56 +0800 Subject: [PATCH 217/265] add cluster_train_cn.md --- doc/howto/usage/cluster/cluster_train_cn.md | 156 ++++++++++++++++++++ 1 file changed, 156 insertions(+) create mode 100644 doc/howto/usage/cluster/cluster_train_cn.md diff --git a/doc/howto/usage/cluster/cluster_train_cn.md b/doc/howto/usage/cluster/cluster_train_cn.md new file mode 100644 index 0000000000..67e724d2fc --- /dev/null +++ b/doc/howto/usage/cluster/cluster_train_cn.md @@ -0,0 +1,156 @@ +# 运行分布式训练 + +在本文中,我们将阐释如何在集群上运行分布式 Paddle 训练作业。我们将创建分布式的单进程训练示例,[推荐](https://github.com/baidu/Paddle/tree/develop/demo/recommendation)。 + +在本文中使用的[脚本](https://github.com/baidu/Paddle/tree/develop/paddle/scripts/cluster_train)通过 SSH 运行分布式作业。 它们还可以供那些运行更复杂的集群管理系统(如 MPI 和 Kubernetes )的用户参考。 + +## 前提条件 + +1. 上述脚本使用 Python 库 [fabric](http://www.fabfile.org/) 来运行 SSH 命令。 我们使用 `pip` 来安装 fabric: + + ```bash + pip install fabric + ``` + +2. 我们需要在集群的所有节点上安装 PaddlePaddle。 如果要启用GPU,需要在 `/usr/local/cuda` 中安装 CUDA; 否则 Paddle 将在运行时报错。 + +3. 
在所有节点上的[`cluster_train/conf.py`]中设置 `ROOT_DIR` 变量。 为了方便起见,我们通常在所有节点上创建一个 Unix 用户 `paddle`,并设置 `ROOT_DIR=/home/paddle`。这样,我们可以将 SSH 公钥写入 `/home/paddle/.ssh/authorized_keys`,以便用户 `paddle` 可以 SSH 到所有节点而不用密码。 + +## 准备工作空间 + +我们将放置依赖库、配置等文件的目录视为 *工作空间(workspace)*。 + +这些 ```train/test``` 数据应该在启动集群作业之前准备好。 为了满足训练/测试数据放置在工作空间中不同目录的要求,PADDLE 根据在模型配置文件中使用的名为 ```train.list/test.list``` 的索引文件引用训练/测试数据。所以训练/测试数据也包含 train.list/test.list 两个列表文件。所有本地训练 demo 已经提供了脚本来帮助您创建这两个文件,并且集群作业中的所有节点将在正常情况下处理具有相同逻辑代码的文件。 + +通常,你可以使用本地训练中的相同模型文件进行集群训练。 你应该知道,在模型文件的 ```setting``` 函数中设置的 ```batch_size``` 表示在集群作业**每个**节点中的 batch 大小,而不是使用同步 SGD 的总 batch 大小。 + +以下步骤基于 demo 目录中的 demo/recommendation。 + +你只需完成 demo/recommendation 教程文档到 ```Train``` 的部分,之后你会得到训练/测试数据和模型配置文件。最后,只需使用 demo/recommendation 作为集群训练的工作空间。 + +最后,你的工作空间应如下所示: +``` +. +|-- common_utils.py +|-- data +| |-- config.json +| |-- config_generator.py +| |-- meta.bin +| |-- meta_config.json +| |-- meta_generator.py +| |-- ml-1m +| |-- ml_data.sh +| |-- ratings.dat.test +| |-- ratings.dat.train +| |-- split.py +| |-- test.list +| `-- train.list +|-- dataprovider.py +|-- evaluate.sh +|-- prediction.py +|-- preprocess.sh +|-- requirements.txt +|-- run.sh +`-- trainer_config.py +``` +虽然这些文件并非都需要集群训练,但是也没有必要删除无用的文件。 + +```trainer_config.py``` +表示模型配置文件。 + +```train.list``` 和 ```test.list``` +文件索引。它存储当前节点所有训练/测试数据的所有相对或绝对文件路径。 + +```dataprovider.py``` +用于读取训练/测试样本。这与本地训练相同。 + +```data``` +数据目录中的所有文件被 train.list/test.list 引用。 + + +## 准备集群作业配置 + +以下选项必须在 cluster_train/conf.py 中认真设置 + +```HOSTS``` 所有节点运行集群作业的主机名或 IP 。你还可以将用户和 ssh 端口附加到主机名上,例如 root@192.168.100.17:9090。 + +```ROOT_DIR``` 用于放置 JOB 工作空间目录的工作空间 ROOT 目录 + +```PADDLE_NIC``` 集群通信通道的 NIC(Network Interface Card, 网络接口卡) 接口名称,例如以太网的 eth0,infiniband 的 ib0。 + +```PADDLE_PORT``` 集群通信通道的端口号 + +```PADDLE_PORTS_NUM``` 用于集群通信通道的端口数。 如果集群节点数量少(少于5〜6个节点),建议将其设置为较大,如2〜8,以获得更好的网络性能。 + +```PADDLE_PORTS_NUM_FOR_SPARSE``` 用于稀疏更新器集群通信信道的端口数。如果使用稀疏远程更新,则可以像 ```PADDLE_PORTS_NUM``` 一样设置。 + +```LD_LIBRARY_PATH``` 为集群作业设置额外的 LD_LIBRARY_PATH。你可以使用它来设置 CUDA 库的路径。 + +默认配置如下: + +```python +HOSTS = [ + "root@192.168.100.17", + "root@192.168.100.18", + ] + +''' +工作空间配置 +''' + +#工作空间根目录 +ROOT_DIR = "/home/paddle" + +''' +网络配置 +''' +#pserver NIC +PADDLE_NIC = "eth0" +#pserver 端口 +PADDLE_PORT = 7164 +#pserver 端口数 +PADDLE_PORTS_NUM = 2 +#pserver sparse ports num +PADDLE_PORTS_NUM_FOR_SPARSE = 2 + +#集群作业中所有进程的环境设置 +LD_LIBRARY_PATH="/usr/local/cuda/lib64:/usr/lib64" +``` + +### 启动集群作业 +```paddle.py``` 提供了自动化脚本来启动不同节点中的所有 PaddlePaddle 集群进程。默认情况下,所有命令行选项可以设置为```paddle.py``` 命令选项并且 ```paddle.py``` 将透明、自动地将这些选项应用到 PaddlePaddle 低级进程。 + +```paddle.py``` 为方便作业启动提供了两个独特的命令选项。 + +```job_dispatch_package``` 设为本地 ```workspace``` 目录,它将被分发到 conf.py 中设置的所有节点。 这有助于频繁的修改、访问工作区文件,否则频繁的多节点工作空间部署可能会很麻烦。 +```job_workspace``` 设为已部署的工作空间目录,```paddle.py``` 将跳过分发阶段直接启动所有节点的集群作业。它可以帮助减少分发延迟。 + +```cluster_train/run.sh``` 提供了命令样例来运行 ```demo/recommendation``` 集群工作,只需用你定义的目录修改 ```job_dispatch_package``` 和 ```job_workspace``` ,然后: +``` +sh run.sh +``` + +集群作业将会在几秒后启动。 + +### 终止集群作业 +```paddle.py``` 能获取 ```Ctrl + C``` SIGINT 信号来自动终止它启动的所有进程。只需中断 ```paddle.py``` 任务来终止集群作业。如果程序崩溃你也可以手动终止。 + +### 检查集群训练结果 +详细信息请检查 $workspace/log 里的日志,每一个节点都有相同的日志结构。 + +```paddle_trainer.INFO``` +提供几乎所有训练的内部输出日志,与本地训练相同。这里检验运行时间模型的收敛。 + +```paddle_pserver2.INFO``` +提供 pserver 运行日志,有助于诊断分布式错误。 + +```server.log``` +提供 pserver 进程的 stderr 和 stdout。训练失败时可以检查错误日志。 + +```train.log``` +提供训练过程的 stderr 和 stdout。训练失败时可以检查错误日志。 + +### 检查模型输出 +运行完成后,模型文件将被写入节点 0 的 ```output``` 目录中。 +工作空间中的 ```nodefile``` 
表示当前集群作业的节点 ID。 + From 202b2e75592a3daff261cfffca723c3de3470652 Mon Sep 17 00:00:00 2001 From: livc Date: Fri, 16 Dec 2016 20:22:32 +0800 Subject: [PATCH 218/265] modify markdown format --- doc/howto/usage/cluster/cluster_train_cn.md | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/doc/howto/usage/cluster/cluster_train_cn.md b/doc/howto/usage/cluster/cluster_train_cn.md index 67e724d2fc..028758fb05 100644 --- a/doc/howto/usage/cluster/cluster_train_cn.md +++ b/doc/howto/usage/cluster/cluster_train_cn.md @@ -55,16 +55,16 @@ ``` 虽然这些文件并非都需要集群训练,但是也没有必要删除无用的文件。 -```trainer_config.py``` +`trainer_config.py` 表示模型配置文件。 -```train.list``` 和 ```test.list``` +`train.list` 和 `test.list` 文件索引。它存储当前节点所有训练/测试数据的所有相对或绝对文件路径。 -```dataprovider.py``` +`dataprovider.py` 用于读取训练/测试样本。这与本地训练相同。 -```data``` +`data` 数据目录中的所有文件被 train.list/test.list 引用。 @@ -138,16 +138,16 @@ sh run.sh ### 检查集群训练结果 详细信息请检查 $workspace/log 里的日志,每一个节点都有相同的日志结构。 -```paddle_trainer.INFO``` +`paddle_trainer.INFO` 提供几乎所有训练的内部输出日志,与本地训练相同。这里检验运行时间模型的收敛。 -```paddle_pserver2.INFO``` +`paddle_pserver2.INFO` 提供 pserver 运行日志,有助于诊断分布式错误。 -```server.log``` +`server.log` 提供 pserver 进程的 stderr 和 stdout。训练失败时可以检查错误日志。 -```train.log``` +`train.log` 提供训练过程的 stderr 和 stdout。训练失败时可以检查错误日志。 ### 检查模型输出 From d5d0f7e856e0889590a96c851f7df2630fe70960 Mon Sep 17 00:00:00 2001 From: livc Date: Fri, 16 Dec 2016 20:25:17 +0800 Subject: [PATCH 219/265] modify markdown format --- doc/howto/usage/cluster/cluster_train_en.md | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/doc/howto/usage/cluster/cluster_train_en.md b/doc/howto/usage/cluster/cluster_train_en.md index 2fd24e532e..a9a3194f09 100644 --- a/doc/howto/usage/cluster/cluster_train_en.md +++ b/doc/howto/usage/cluster/cluster_train_en.md @@ -55,16 +55,16 @@ At last your workspace should look like as follow: ``` Not all of these files are needed for cluster training, but it's not necessary to remove useless files. -```trainer_config.py``` +`trainer_config.py` Indicates the model config file. -```train.list``` and ```test.list``` +`train.list` and `test.list` File index. It stores all relative or absolute file paths of all train/test data at current node. -```dataprovider.py``` +`dataprovider.py` used to read train/test samples. It's same as local training. -```data``` +`data` all files in data directory are refered by train.list/test.list which are refered by data provider. @@ -139,16 +139,16 @@ The cluster Job will start in several seconds. ### Check Cluster Training Result Check log in $workspace/log for details, each node owns same log structure. -```paddle_trainer.INFO``` +`paddle_trainer.INFO` It provides almost all interal output log for training, same as local training. Check runtime model convergence here. -```paddle_pserver2.INFO``` +`paddle_pserver2.INFO` It provides pserver running log, which could help to diagnose distributed error. -```server.log``` +`server.log` It provides stderr and stdout of pserver process. Check error log if training crashs. -```train.log``` +`train.log` It provides stderr and stdout of trainer process. Check error log if training crashs. 
### Check Model Output From cad325f09ae8c1e2272b79a0c0b30298e891350e Mon Sep 17 00:00:00 2001 From: gaoyuan Date: Fri, 16 Dec 2016 21:03:05 +0800 Subject: [PATCH 220/265] Add header file --- paddle/gserver/tests/test_PriorBox.cpp | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/paddle/gserver/tests/test_PriorBox.cpp b/paddle/gserver/tests/test_PriorBox.cpp index 8aabb1ef97..d37c0bb702 100644 --- a/paddle/gserver/tests/test_PriorBox.cpp +++ b/paddle/gserver/tests/test_PriorBox.cpp @@ -15,8 +15,10 @@ limitations under the License. */ #include #include #include +#include "./paddle/utils/CommandLineParser.h" #include "ModelConfig.pb.h" #include "paddle/gserver/layers/DataLayer.h" +#include "paddle/gserver/layers/ExpandConvTransLayer.h" #include "paddle/math/MathUtils.h" #include "paddle/trainer/Trainer.h" #include "paddle/utils/GlobalConstants.h" @@ -29,7 +31,9 @@ using namespace std; // NOLINT P_DECLARE_bool(use_gpu); P_DECLARE_int32(gpu_id); +P_DECLARE_double(checkgrad_eps); P_DECLARE_bool(thread_local_rand_use_global_seed); +P_DECLARE_bool(prev_batch_state); // Do one forward pass of priorBox layer and check to see if its output // matches the given result From c40b069bb7faa7e14406502f0b6cc0dbcb668c23 Mon Sep 17 00:00:00 2001 From: liaogang Date: Sat, 17 Dec 2016 11:59:14 +0800 Subject: [PATCH 221/265] remove redundant code --- paddle/utils/CompilerMacros.h | 17 ----------------- 1 file changed, 17 deletions(-) delete mode 100644 paddle/utils/CompilerMacros.h diff --git a/paddle/utils/CompilerMacros.h b/paddle/utils/CompilerMacros.h deleted file mode 100644 index e50093f7fc..0000000000 --- a/paddle/utils/CompilerMacros.h +++ /dev/null @@ -1,17 +0,0 @@ -/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. */ - -#pragma once - -#define ATTR_NORETURN __attribute__((noreturn)) From a980b83a0cb974f0622051bea957f8305831b4bb Mon Sep 17 00:00:00 2001 From: Yu Yang Date: Sun, 18 Dec 2016 13:53:11 +0800 Subject: [PATCH 222/265] Fix RNN unittest bugs. * The DataProvider should be INCREF every time. --- .../gserver/dataproviders/PyDataProvider2.cpp | 18 +++--------------- .../tests/test_RecurrentGradientMachine.cpp | 6 +++--- 2 files changed, 6 insertions(+), 18 deletions(-) diff --git a/paddle/gserver/dataproviders/PyDataProvider2.cpp b/paddle/gserver/dataproviders/PyDataProvider2.cpp index 460efc5adc..c26e242534 100644 --- a/paddle/gserver/dataproviders/PyDataProvider2.cpp +++ b/paddle/gserver/dataproviders/PyDataProvider2.cpp @@ -252,19 +252,9 @@ private: // only for instance will make python reference-count error. // // So here, we increase reference count manually. 
- if (gModuleClsPtrs_.find((uintptr_t)module.get()) != - gModuleClsPtrs_.end()) { - // Multi instance use same module - Py_XINCREF(module.get()); - Py_XINCREF(moduleDict.get()); - } else { - gModuleClsPtrs_.insert((uintptr_t)module.get()); - } - if (gModuleClsPtrs_.find((uintptr_t)cls.get()) != gModuleClsPtrs_.end()) { - Py_XINCREF(cls.get()); - } else { - gModuleClsPtrs_.insert((uintptr_t)cls.get()); - } + Py_XINCREF(module.get()); + Py_XINCREF(moduleDict.get()); + Py_XINCREF(cls.get()); PyObjectPtr fileListInPy = loadPyFileLists(fileListName); PyDict_SetItemString(kwargs.get(), "file_list", fileListInPy.get()); @@ -471,7 +461,6 @@ private: std::vector fileLists_; std::vector headers_; static PyObjectPtr zeroTuple_; - static std::unordered_set gModuleClsPtrs_; class PositionRandom { public: @@ -671,7 +660,6 @@ public: } }; -std::unordered_set PyDataProvider2::gModuleClsPtrs_; PyObjectPtr PyDataProvider2::zeroTuple_(PyTuple_New(0)); REGISTER_DATA_PROVIDER_EX(py2, PyDataProvider2); diff --git a/paddle/gserver/tests/test_RecurrentGradientMachine.cpp b/paddle/gserver/tests/test_RecurrentGradientMachine.cpp index b47279b77a..e19cf35cd5 100644 --- a/paddle/gserver/tests/test_RecurrentGradientMachine.cpp +++ b/paddle/gserver/tests/test_RecurrentGradientMachine.cpp @@ -127,7 +127,7 @@ TEST(RecurrentGradientMachine, HasSubSequence) { } } -TEST(RecurrentGradientMachine, DISABLED_rnn) { +TEST(RecurrentGradientMachine, rnn) { for (bool useGpu : {false, true}) { test("gserver/tests/sequence_rnn.conf", "gserver/tests/sequence_nest_rnn.conf", @@ -136,7 +136,7 @@ TEST(RecurrentGradientMachine, DISABLED_rnn) { } } -TEST(RecurrentGradientMachine, DISABLED_rnn_multi_input) { +TEST(RecurrentGradientMachine, rnn_multi_input) { for (bool useGpu : {false, true}) { test("gserver/tests/sequence_rnn_multi_input.conf", "gserver/tests/sequence_nest_rnn_multi_input.conf", @@ -145,7 +145,7 @@ TEST(RecurrentGradientMachine, DISABLED_rnn_multi_input) { } } -TEST(RecurrentGradientMachine, DISABLED_rnn_multi_unequalength_input) { +TEST(RecurrentGradientMachine, rnn_multi_unequalength_input) { for (bool useGpu : {false, true}) { test("gserver/tests/sequence_rnn_multi_unequalength_inputs.py", "gserver/tests/sequence_nest_rnn_multi_unequalength_inputs.py", From d4932962e0d80ae6bb2589a797e0f7bc98ce80c6 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E4=B8=80=E4=B8=AA=E9=99=8C=E7=94=9F=E4=BA=BA?= <546777653@qq.com> Date: Sun, 18 Dec 2016 15:06:16 +0800 Subject: [PATCH 223/265] Update index_cn.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 根据review结果修正一些翻译 --- doc/tutorials/text_generation/index_cn.md | 28 +++++++++++------------ 1 file changed, 14 insertions(+), 14 deletions(-) diff --git a/doc/tutorials/text_generation/index_cn.md b/doc/tutorials/text_generation/index_cn.md index 5cfdf2304c..41a87b926d 100644 --- a/doc/tutorials/text_generation/index_cn.md +++ b/doc/tutorials/text_generation/index_cn.md @@ -1,6 +1,6 @@ # 文本生成教程 # -在语言生成领域中,“序列到序列”(sequence to sequence)的方法已被证明是一种强大的模型。它可以被应用于进行机器翻译(machine translation)、请求改写(query rewriting)、图像字幕(image captioning)等等。 +在语言生成领域中,“序列到序列”(sequence to sequence)的方法已被证明是一种强大的模型。它可以被应用于进行机器翻译(machine translation)、query改写(query rewriting)、图像描述(image captioning)等等。 本篇教程将会指导你通过训练一个“序列到序列”的神经网络机器翻译(NMT)模型来将法语翻译成英语。 @@ -101,8 +101,8 @@ cd demo/seqToseq/data - 将每个源语言到目标语言的平行语料库文件合并为一个文件: - 合并每个 **XXX.src** 和 **XXX.trg** 文件为 **XXX** - **XXX** 中的第i行 = **XXX.src** 中的第i行 + '\t' + **XXX.trg**中的第i行 -- 创建训练数据的“源字典”和“目标字典”,每个字典都有DICTSIZE个单词: - - 
频率最高的单词(DICTSIZE - 3 个) +- 创建训练数据的“源字典”和“目标字典”,每个字典都有DICTSIZE个单词,包括: + - 词频最高的(DICTSIZE - 3)个单词 - 3个特殊符号 - ``:序列的开始 - ``:序列的结束 @@ -133,7 +133,7 @@ python preprocess.py -i INPUT [-d DICTSIZE] [-m] python preprocess.py -i data/wmt14 -d 30000 ``` -这将花费数分钟的时间,并且将预处理好的数据集存放在`demo/seqToseq/data/pre-wmt14`目录下。字典具有以下结构。 +这将花费数分钟的时间,并且将预处理好的数据集存放在`demo/seqToseq/data/pre-wmt14`目录下。目录结构如下: train test gen train.list test.list gen.list src.dict trg.dict# Text generation Tutorial # @@ -146,7 +146,7 @@ python preprocess.py -i data/wmt14 -d 30000 神经网络机器翻译(NMT)旨在建立一个可以被协同调至最优翻译效果的单神经元网络。近期提出的NMT模型通常都属于编解码模型(encoder–decoder models)的一种。编解码模型将一个源语句编码为一个定长的向量,然后解码器通过这个向量生成一个目标语句。 -在这个任务中,我们使用了一个编解码模型的扩展,它联合地学习了排列与翻译。每当模型在翻译过程中生成了一个单词,它就会在源语句中搜索出最相关信息的位置的集合。解码器根据上下文向量预测出一个目标单词,这个向量与源中搜索出的位置和所有之前生成的目标单词有关。如想了解更多详细的解释,可以参考 [Neural Machine Translation by Jointly Learning to Align and Translate](http://arxiv.org/abs/1409.0473)。 +在这个任务中,我们使用了一个编解码模型的扩展,它同时学习排列(align)与翻译。每当模型在翻译过程中生成了一个单词,它就会在源语句中搜索出最相关信息的位置的集合。解码器根据上下文向量预测出一个目标单词,这个向量与源中搜索出的位置和所有之前生成的目标单词有关。如想了解更多详细的解释,可以参考 [Neural Machine Translation by Jointly Learning to Align and Translate](http://arxiv.org/abs/1409.0473)。 这个模型对于编解码模型来说,最不同的特色是它并没有将输入语句编码为一个单独的定长向量。相反,它将输入语句编码为向量的序列,其中每个向量对应输入语句中的一个元素。然后在解码被翻译的语句时,会自适应地从这些向量中选择一个子集出来。这使得NMT模型得以解放出来,不必再将任意长度源语句中的所有信息压缩至一个定长的向量中。该模型在长语句翻译的场景下效果提升更加明显,在任意长度语句翻译的场景下都可以观察到其效果的提升。
![](./encoder-decoder-attention-model.png)
@@ -215,10 +215,10 @@ paddle train \ I0719 19:16:45.952062 15563 TrainerInternal.cpp:160] Batch=10 samples=500 AvgCost=198.475 CurrentCost=198.475 Eval: classification_error_evaluator=0.737155 CurrentEval: classification_error_evaluator=0.737155 I0719 19:17:56.707319 15563 TrainerInternal.cpp:160] Batch=20 samples=1000 AvgCost=157.479 CurrentCost=116.483 Eval: classification_error_evaluator=0.698392 CurrentEval: classification_error_evaluator=0.659065 ..... -- AvgCost:从第0个batch到当前batch的平均花销 -- CurrentCost::当前batch的花销 -- classification\_error\_evaluator(Eval):从第0个评估到当前评估中,每个单词的失败预测率 -- classification\_error\_evaluator(CurrentEval):当前评估中,每个单词的失败预测率 +- AvgCost:从第0个batch到当前batch的平均cost +- CurrentCost::当前batch的cost +- classification\_error\_evaluator(Eval):从第0个评估到当前评估中,每个单词的预测错误率 +- classification\_error\_evaluator(CurrentEval):当前评估中,每个单词的预测错误率 当classification\_error\_evaluator的值低于0.35时,模型就训练成功了。 @@ -227,10 +227,10 @@ paddle train \ 一般而言,NMT模型受制于源语句的编码,并且通过给出当前目标单词来预测下一个目标单词。在训练过程中,当前单词在相比之下总是被当作真值(ground truth)。在生成过程中,当前单词是解码器最后一步的输出,这来自于PaddlePaddle的内存中。 -而且,我们使用集束搜索(Beam Search)来生成序列。集束搜索使用广度优先搜索来构建搜索树。对于树的每一层,生成当前层的所有后继状态,并将它们按照启发成本(heuristic cost)升序排列。但是这种方法在每层只保存预设数量的最优状态(这个数量称为beam size)。 +而且,我们使用集束搜索(Beam Search)来生成序列。集束搜索使用广度优先搜索来构建搜索树。对于树的每一层,生成当前层的所有后继状态,并将它们按照启发代价(heuristic cost)升序排列。但是这种方法在每层只保存预设数量的最优状态(这个数量称为beam size)。 ### 预训练的模型 ### -我们在拥有50个节点的集群中训练模型,每个节点有两个6核CPU。我们在5天里训练了16条pass,其中每条pass花费了7个小时。model_dir中有16个子目录,每个里面都包含202MB的全部的模型参数。然后我们发现pass-00012的模型有着最高的BLEU值27.77(参考文献[BLEU: a Method for Automatic Evaluation of Machine Translation](http://www.aclweb.org/anthology/P02-1040.pdf))。要下载解压这个模型,只需在linux下运行如下命令: +我们在拥有50个节点的集群中训练模型,每个节点有两个6核CPU。我们在5天里训练了16个pass,其中每条pass花费了7个小时。model_dir中有16个子目录,每个里面都包含202MB的全部的模型参数。然后我们发现pass-00012的模型有着最高的BLEU值27.77(参考文献[BLEU: a Method for Automatic Evaluation of Machine Translation](http://www.aclweb.org/anthology/P02-1040.pdf))。要下载解压这个模型,只需在linux下运行如下命令: ```bash cd demo/seqToseq/data @@ -261,8 +261,8 @@ gru_encoder_decoder(gen_conf, is_generating) 1. **Data Definiation**:在示例中我们定义了一个序列到序列的生成数据。它返回gen_conf作为配置,其输入参数如下: - data_dir:生成数据的目录 - - is_generating:这个配置是否用来生成,这里设置为False - - gen_result:保存生成结果的文件 +  - is_generating:这个配置是否用来生成,这里设置为True +  - gen_result:保存生成结果的文件 2. **Algorithm Configuration**:在生成过程中我们使用SGD训练算法,并指定batch_size为1(每次生成1个序列),learning_rate为0 3. **Network Architecture**:本质上与训练模型一样 @@ -336,4 +336,4 @@ cd demo/seqToseq/translation ``` - FILE:生成的结果文件 -- BEAMSIZE:扩展集束搜索的广度 +- BEAMSIZE:集束搜索中的扩展广度 From 713a383291e0771c054f6b4454767f7e59948244 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E4=B8=80=E4=B8=AA=E9=99=8C=E7=94=9F=E4=BA=BA?= <546777653@qq.com> Date: Sun, 18 Dec 2016 15:13:25 +0800 Subject: [PATCH 224/265] fix typo MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 修正tutorial原文中的笔误 --- doc/tutorials/text_generation/index_en.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/doc/tutorials/text_generation/index_en.md b/doc/tutorials/text_generation/index_en.md index d63f5cb607..5d8e667c20 100644 --- a/doc/tutorials/text_generation/index_en.md +++ b/doc/tutorials/text_generation/index_en.md @@ -260,8 +260,8 @@ gru_encoder_decoder(gen_conf, is_generating) 1. **Data Definiation**: We defines an SeqToSeq gen data in our example. 
It returns gen_conf as the configuration, following is its input arguments: - data\_dir: directory of gen data - - is\_generating: whether this config is used for generating, here is false - - gen\_result: file to store the generation result +   - is\_generating: whether this config is used for generating, here is true +   - gen\_result: file to store the generation result 2. **Algorithm Configuration**: We use SGD traing algorithm in generation, and specify batch_size as 1 (each time generate one sequence), and learning rate as 0. 3. **Network Architecture**: Essentially the same as the training model. From b05e1896f0408aaea3247c0082bf5af35d0e0e56 Mon Sep 17 00:00:00 2001 From: livc Date: Sun, 18 Dec 2016 20:51:40 +0800 Subject: [PATCH 225/265] fix bug of 'shuf' and 'gshuf' in demo/recommendation/preprocess.sh --- demo/recommendation/preprocess.sh | 11 ++++++++++- 1 file changed, 10 insertions(+), 1 deletion(-) diff --git a/demo/recommendation/preprocess.sh b/demo/recommendation/preprocess.sh index dc6b2cdfc1..eeb81ce3cb 100755 --- a/demo/recommendation/preprocess.sh +++ b/demo/recommendation/preprocess.sh @@ -14,6 +14,15 @@ # limitations under the License. set -e +UNAME_STR=`uname` + +if [[ ${UNAME_STR} == 'Linux' ]]; then + SHUF_PROG='shuf' +else + SHUF_PROG='gshuf' +fi + + cd "$(dirname "$0")" delimiter='::' dir=ml-1m @@ -25,7 +34,7 @@ python meta_generator.py $dir meta.bin --config=meta_config.json echo 'split train/test file' python split.py $dir/ratings.dat --delimiter=${delimiter} --test_ratio=0.1 echo 'shuffle train file' -gshuf $dir/ratings.dat.train > ratings.dat.train +${SHUF_PROG} $dir/ratings.dat.train > ratings.dat.train cp $dir/ratings.dat.test . echo "./data/ratings.dat.train" > train.list echo "./data/ratings.dat.test" > test.list From 38723e778dbdec32d98b6a191da0e2ea94f0f3c5 Mon Sep 17 00:00:00 2001 From: gaoyuan Date: Mon, 19 Dec 2016 10:56:22 +0800 Subject: [PATCH 226/265] remove random flag --- paddle/gserver/tests/test_PriorBox.cpp | 15 --------------- 1 file changed, 15 deletions(-) diff --git a/paddle/gserver/tests/test_PriorBox.cpp b/paddle/gserver/tests/test_PriorBox.cpp index d37c0bb702..1a7217ab94 100644 --- a/paddle/gserver/tests/test_PriorBox.cpp +++ b/paddle/gserver/tests/test_PriorBox.cpp @@ -15,13 +15,6 @@ limitations under the License. */ #include #include #include -#include "./paddle/utils/CommandLineParser.h" -#include "ModelConfig.pb.h" -#include "paddle/gserver/layers/DataLayer.h" -#include "paddle/gserver/layers/ExpandConvTransLayer.h" -#include "paddle/math/MathUtils.h" -#include "paddle/trainer/Trainer.h" -#include "paddle/utils/GlobalConstants.h" #include "LayerGradUtil.h" #include "TestUtil.h" @@ -29,12 +22,6 @@ limitations under the License. 
*/ using namespace paddle; // NOLINT using namespace std; // NOLINT -P_DECLARE_bool(use_gpu); -P_DECLARE_int32(gpu_id); -P_DECLARE_double(checkgrad_eps); -P_DECLARE_bool(thread_local_rand_use_global_seed); -P_DECLARE_bool(prev_batch_state); - // Do one forward pass of priorBox layer and check to see if its output // matches the given result void doOnePriorBoxTest(size_t featureMapWidth, @@ -164,7 +151,5 @@ TEST(Layer, priorBoxLayerFwd) { int main(int argc, char** argv) { testing::InitGoogleTest(&argc, argv); initMain(argc, argv); - FLAGS_thread_local_rand_use_global_seed = true; - srand(1); return RUN_ALL_TESTS(); } From fad72d74e0bd5138ffca9f94c1f05432ba06e8ab Mon Sep 17 00:00:00 2001 From: livc Date: Mon, 19 Dec 2016 11:04:04 +0800 Subject: [PATCH 227/265] modify format of cluster_train_cn.md --- doc/howto/usage/cluster/cluster_train_cn.md | 37 ++++++++++----------- 1 file changed, 18 insertions(+), 19 deletions(-) diff --git a/doc/howto/usage/cluster/cluster_train_cn.md b/doc/howto/usage/cluster/cluster_train_cn.md index 028758fb05..158c518ac4 100644 --- a/doc/howto/usage/cluster/cluster_train_cn.md +++ b/doc/howto/usage/cluster/cluster_train_cn.md @@ -20,13 +20,13 @@ 我们将放置依赖库、配置等文件的目录视为 *工作空间(workspace)*。 -这些 ```train/test``` 数据应该在启动集群作业之前准备好。 为了满足训练/测试数据放置在工作空间中不同目录的要求,PADDLE 根据在模型配置文件中使用的名为 ```train.list/test.list``` 的索引文件引用训练/测试数据。所以训练/测试数据也包含 train.list/test.list 两个列表文件。所有本地训练 demo 已经提供了脚本来帮助您创建这两个文件,并且集群作业中的所有节点将在正常情况下处理具有相同逻辑代码的文件。 +这些 `train/test` 数据应该在启动集群作业之前准备好。 为了满足训练/测试数据放置在工作空间中不同目录的要求,PADDLE 根据在模型配置文件中使用的名为 `train.list/test.list` 的索引文件引用训练/测试数据。所以训练/测试数据也包含 train.list/test.list 两个列表文件。所有本地训练 demo 已经提供了脚本来帮助您创建这两个文件,并且集群作业中的所有节点将在正常情况下处理具有相同逻辑代码的文件。 -通常,你可以使用本地训练中的相同模型文件进行集群训练。 你应该知道,在模型文件的 ```setting``` 函数中设置的 ```batch_size``` 表示在集群作业**每个**节点中的 batch 大小,而不是使用同步 SGD 的总 batch 大小。 +通常,你可以使用本地训练中的相同模型文件进行集群训练。 你应该知道,在模型文件的 `setting`函数中设置的 `batch_size` 表示在集群作业**每个**节点中的 batch 大小,而不是使用同步 SGD 的总 batch 大小。 以下步骤基于 demo 目录中的 demo/recommendation。 -你只需完成 demo/recommendation 教程文档到 ```Train``` 的部分,之后你会得到训练/测试数据和模型配置文件。最后,只需使用 demo/recommendation 作为集群训练的工作空间。 +你只需完成 demo/recommendation 教程文档到 `Train` 的部分,之后你会得到训练/测试数据和模型配置文件。最后,只需使用 demo/recommendation 作为集群训练的工作空间。 最后,你的工作空间应如下所示: ``` @@ -72,19 +72,19 @@ 以下选项必须在 cluster_train/conf.py 中认真设置 -```HOSTS``` 所有节点运行集群作业的主机名或 IP 。你还可以将用户和 ssh 端口附加到主机名上,例如 root@192.168.100.17:9090。 +`HOSTS` 所有节点运行集群作业的主机名或 IP 。你还可以将用户和 ssh 端口附加到主机名上,例如 root@192.168.100.17:9090。 -```ROOT_DIR``` 用于放置 JOB 工作空间目录的工作空间 ROOT 目录 +`ROOT_DIR` 用于放置 JOB 工作空间目录的工作空间 ROOT 目录 -```PADDLE_NIC``` 集群通信通道的 NIC(Network Interface Card, 网络接口卡) 接口名称,例如以太网的 eth0,infiniband 的 ib0。 +`PADDLE_NIC` 集群通信通道的 NIC(Network Interface Card, 网络接口卡) 接口名称,例如以太网的 eth0,infiniband 的 ib0。 -```PADDLE_PORT``` 集群通信通道的端口号 +`PADDLE_PORT` 集群通信通道的端口号 -```PADDLE_PORTS_NUM``` 用于集群通信通道的端口数。 如果集群节点数量少(少于5〜6个节点),建议将其设置为较大,如2〜8,以获得更好的网络性能。 +`PADDLE_PORTS_NUM` 用于集群通信通道的端口数。 如果集群节点数量少(少于5〜6个节点),建议将其设置为较大,如2〜8,以获得更好的网络性能。 -```PADDLE_PORTS_NUM_FOR_SPARSE``` 用于稀疏更新器集群通信信道的端口数。如果使用稀疏远程更新,则可以像 ```PADDLE_PORTS_NUM``` 一样设置。 +`PADDLE_PORTS_NUM_FOR_SPARSE` 用于稀疏更新器集群通信信道的端口数。如果使用稀疏远程更新,则可以像 ```PADDLE_PORTS_NUM``` 一样设置。 -```LD_LIBRARY_PATH``` 为集群作业设置额外的 LD_LIBRARY_PATH。你可以使用它来设置 CUDA 库的路径。 +`LD_LIBRARY_PATH` 为集群作业设置额外的 LD_LIBRARY_PATH。你可以使用它来设置 CUDA 库的路径。 默认配置如下: @@ -118,14 +118,14 @@ LD_LIBRARY_PATH="/usr/local/cuda/lib64:/usr/lib64" ``` ### 启动集群作业 -```paddle.py``` 提供了自动化脚本来启动不同节点中的所有 PaddlePaddle 集群进程。默认情况下,所有命令行选项可以设置为```paddle.py``` 命令选项并且 ```paddle.py``` 将透明、自动地将这些选项应用到 PaddlePaddle 低级进程。 +`paddle.py` 提供了自动化脚本来启动不同节点中的所有 
PaddlePaddle 集群进程。默认情况下,所有命令行选项可以设置为```paddle.py``` 命令选项并且 `paddle.py` 将透明、自动地将这些选项应用到 PaddlePaddle 低级进程。 -```paddle.py``` 为方便作业启动提供了两个独特的命令选项。 +`paddle.py` 为方便作业启动提供了两个独特的命令选项。 -```job_dispatch_package``` 设为本地 ```workspace``` 目录,它将被分发到 conf.py 中设置的所有节点。 这有助于频繁的修改、访问工作区文件,否则频繁的多节点工作空间部署可能会很麻烦。 -```job_workspace``` 设为已部署的工作空间目录,```paddle.py``` 将跳过分发阶段直接启动所有节点的集群作业。它可以帮助减少分发延迟。 +`job_dispatch_package` 设为本地 `workspace` 目录,它将被分发到 conf.py 中设置的所有节点。 这有助于频繁的修改、访问工作区文件,否则频繁的多节点工作空间部署可能会很麻烦。 +`job_workspace` 设为已部署的工作空间目录,`paddle.py` 将跳过分发阶段直接启动所有节点的集群作业。它可以帮助减少分发延迟。 -```cluster_train/run.sh``` 提供了命令样例来运行 ```demo/recommendation``` 集群工作,只需用你定义的目录修改 ```job_dispatch_package``` 和 ```job_workspace``` ,然后: +`cluster_train/run.sh` 提供了命令样例来运行 `demo/recommendation` 集群工作,只需用你定义的目录修改 `job_dispatch_package` 和 `job_workspace`,然后: ``` sh run.sh ``` @@ -133,7 +133,7 @@ sh run.sh 集群作业将会在几秒后启动。 ### 终止集群作业 -```paddle.py``` 能获取 ```Ctrl + C``` SIGINT 信号来自动终止它启动的所有进程。只需中断 ```paddle.py``` 任务来终止集群作业。如果程序崩溃你也可以手动终止。 +`paddle.py`能获取`Ctrl + C` SIGINT 信号来自动终止它启动的所有进程。只需中断 `paddle.py` 任务来终止集群作业。如果程序崩溃你也可以手动终止。 ### 检查集群训练结果 详细信息请检查 $workspace/log 里的日志,每一个节点都有相同的日志结构。 @@ -151,6 +151,5 @@ sh run.sh 提供训练过程的 stderr 和 stdout。训练失败时可以检查错误日志。 ### 检查模型输出 -运行完成后,模型文件将被写入节点 0 的 ```output``` 目录中。 -工作空间中的 ```nodefile``` 表示当前集群作业的节点 ID。 - +运行完成后,模型文件将被写入节点 0 的 `output` 目录中。 +工作空间中的 `nodefile` 表示当前集群作业的节点 ID。 From 43b9ce3b49f4dd3cf8178a42d992d362c130dc17 Mon Sep 17 00:00:00 2001 From: livc Date: Mon, 19 Dec 2016 11:08:03 +0800 Subject: [PATCH 228/265] add link of demo/recommendation --- doc/howto/usage/cluster/cluster_train_cn.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc/howto/usage/cluster/cluster_train_cn.md b/doc/howto/usage/cluster/cluster_train_cn.md index 158c518ac4..7ce17e20f8 100644 --- a/doc/howto/usage/cluster/cluster_train_cn.md +++ b/doc/howto/usage/cluster/cluster_train_cn.md @@ -24,7 +24,7 @@ 通常,你可以使用本地训练中的相同模型文件进行集群训练。 你应该知道,在模型文件的 `setting`函数中设置的 `batch_size` 表示在集群作业**每个**节点中的 batch 大小,而不是使用同步 SGD 的总 batch 大小。 -以下步骤基于 demo 目录中的 demo/recommendation。 +以下步骤基于 demo 目录中的 [demo/recommendation](https://github.com/PaddlePaddle/Paddle/tree/develop/demo/recommendation)。 你只需完成 demo/recommendation 教程文档到 `Train` 的部分,之后你会得到训练/测试数据和模型配置文件。最后,只需使用 demo/recommendation 作为集群训练的工作空间。 From 2b4ddb38089e11bbd1da44c94a52605043479e2b Mon Sep 17 00:00:00 2001 From: Luo Tao Date: Mon, 19 Dec 2016 11:11:41 +0800 Subject: [PATCH 229/265] fix dead links and refine tutorials/index --- doc/api/data_provider/dataprovider_cn.rst | 2 +- doc/api/data_provider/pydataprovider2_cn.rst | 2 ++ doc/api/trainer_config_helpers/evaluators.rst | 2 ++ doc/api/trainer_config_helpers/layers.rst | 10 ++++++ doc/api/trainer_config_helpers/networks.rst | 2 ++ doc/api/trainer_config_helpers/optimizers.rst | 4 +++ doc/faq/index_cn.rst | 10 +++--- .../deep_model/rnn/hierarchical_layer_cn.rst | 12 ++----- .../deep_model/rnn/recurrent_group_cn.md | 2 +- .../usage/cluster/k8s/k8s_distributed_cn.md | 2 +- doc/howto/usage/concepts/use_concepts_cn.rst | 36 ++++++------------- doc/tutorials/index_cn.md | 23 +++--------- doc/tutorials/index_en.md | 15 ++------ doc/tutorials/quick_start/index_en.md | 4 +-- .../paddle/trainer_config_helpers/layers.py | 8 ++--- 15 files changed, 54 insertions(+), 80 deletions(-) diff --git a/doc/api/data_provider/dataprovider_cn.rst b/doc/api/data_provider/dataprovider_cn.rst index 6861ecece8..8c83ea57ca 100644 --- a/doc/api/data_provider/dataprovider_cn.rst +++ b/doc/api/data_provider/dataprovider_cn.rst @@ -1,7 +1,7 @@ 
DataProvider的介绍 ================== -DataProvider是PaddlePaddle负责提供数据的模块。其作用是将数据传入内存或显存,让神经网络可以进行训练或预测。用户可以通过简单使用Python接口 `PyDataProvider2 `_ ,来自定义传数据的过程。如果有更复杂的使用,或者需要更高的效率,用户也可以在C++端自定义一个 ``DataProvider`` 。 +DataProvider是PaddlePaddle负责提供数据的模块。其作用是将数据传入内存或显存,让神经网络可以进行训练或预测。用户可以通过简单使用Python接口 :ref:`api_pydataprovider2` ,来自定义传数据的过程。如果有更复杂的使用,或者需要更高的效率,用户也可以在C++端自定义一个 ``DataProvider`` 。 PaddlePaddle需要用户在网络配置(trainer_config.py)中定义使用哪种DataProvider,并且在DataProvider中实现如何访问训练文件列表(train.list)或测试文件列表(test.list)。 diff --git a/doc/api/data_provider/pydataprovider2_cn.rst b/doc/api/data_provider/pydataprovider2_cn.rst index f243ea775a..8f9db31cfb 100644 --- a/doc/api/data_provider/pydataprovider2_cn.rst +++ b/doc/api/data_provider/pydataprovider2_cn.rst @@ -1,3 +1,5 @@ +.. _api_pydataprovider2: + PyDataProvider2的使用 ===================== diff --git a/doc/api/trainer_config_helpers/evaluators.rst b/doc/api/trainer_config_helpers/evaluators.rst index d6a79c13e2..11dc735164 100644 --- a/doc/api/trainer_config_helpers/evaluators.rst +++ b/doc/api/trainer_config_helpers/evaluators.rst @@ -1,3 +1,5 @@ +.. _api_trainer_config_helpers_evaluators: + ========== Evaluators ========== diff --git a/doc/api/trainer_config_helpers/layers.rst b/doc/api/trainer_config_helpers/layers.rst index 52a6cfb120..4e429650e5 100644 --- a/doc/api/trainer_config_helpers/layers.rst +++ b/doc/api/trainer_config_helpers/layers.rst @@ -187,6 +187,8 @@ get_output_layer Mixed Layer =========== +.. _api_trainer_config_helpers_layers_mixed_layer: + mixed_layer ----------- .. automodule:: paddle.trainer_config_helpers.layers @@ -255,12 +257,16 @@ pooling_layer :members: pooling_layer :noindex: +.. _api_trainer_config_helpers_layers_last_seq: + last_seq -------- .. automodule:: paddle.trainer_config_helpers.layers :members: last_seq :noindex: +.. _api_trainer_config_helpers_layers_first_seq: + first_seq --------- .. automodule:: paddle.trainer_config_helpers.layers @@ -282,6 +288,8 @@ block_expand_layer :members: block_expand_layer :noindex: +.. _api_trainer_config_helpers_layers_expand_layer: + expand_layer ------------ .. automodule:: paddle.trainer_config_helpers.layers @@ -374,6 +382,8 @@ sampling_id_layer :members: sampling_id_layer :noindex: +.. _api_trainer_config_helpers_layers_cost_layers: + Cost Layers =========== diff --git a/doc/api/trainer_config_helpers/networks.rst b/doc/api/trainer_config_helpers/networks.rst index e13c368051..edb53acbf0 100644 --- a/doc/api/trainer_config_helpers/networks.rst +++ b/doc/api/trainer_config_helpers/networks.rst @@ -36,6 +36,8 @@ img_conv_group :members: img_conv_group :noindex: +.. _api_trainer_config_helpers_network_simple_img_conv_pool: + simple_img_conv_pool -------------------- .. automodule:: paddle.trainer_config_helpers.networks diff --git a/doc/api/trainer_config_helpers/optimizers.rst b/doc/api/trainer_config_helpers/optimizers.rst index 7ca4e34156..d2f4958c92 100644 --- a/doc/api/trainer_config_helpers/optimizers.rst +++ b/doc/api/trainer_config_helpers/optimizers.rst @@ -1,3 +1,5 @@ +.. _api_trainer_config_helpers_optimizers: + ========== Optimizers ========== @@ -50,6 +52,8 @@ RMSPropOptimizer :members: RMSPropOptimizer :noindex: +.. _api_trainer_config_helpers_optimizers_settings: + settings ======== .. automodule:: paddle.trainer_config_helpers.optimizers diff --git a/doc/faq/index_cn.rst b/doc/faq/index_cn.rst index f2f114065c..d792f410bc 100644 --- a/doc/faq/index_cn.rst +++ b/doc/faq/index_cn.rst @@ -35,7 +35,7 @@ PyDataProvider使用的是异步加载,同时在内存里直接随即选取数 .. 
literalinclude:: src/reduce_min_pool_size.py -这样做可以极大的减少内存占用,并且可能会加速训练过程,详细文档参考 `这里 <../ui/data_provider/pydataprovider2.html#provider>`_ 。 +这样做可以极大的减少内存占用,并且可能会加速训练过程,详细文档参考 :ref:`api_pydataprovider2` 。 神经元激活内存 ++++++++++++++ @@ -95,7 +95,6 @@ PaddlePaddle支持Sparse的训练,sparse训练需要训练特征是 :code:`spa .. literalinclude:: src/word2vec_config.py -更多关于sparse训练的内容请参考 `sparse训练的文档 `_ 利用更多的计算资源 ++++++++++++++++++ @@ -103,14 +102,15 @@ PaddlePaddle支持Sparse的训练,sparse训练需要训练特征是 :code:`spa 利用更多的计算资源可以分为一下几个方式来进行\: * 单机CPU训练 + * 使用多线程训练。设置命令行参数 :code:`trainer_count`。 * 单机GPU训练 + * 使用显卡训练。设置命令行参数 :code:`use_gpu`。 * 使用多块显卡训练。设置命令行参数 :code:`use_gpu` 和 :code:`trainer_count` 。 -* 多机训练 - * 具体的多机训练方法参考 `多机训练文档 <../ui/data_provider/pydataprovider2.html#provider>`_ 。 +* 多机训练(文档待补充) 3. 遇到“非法指令”或者是“illegal instruction” @@ -302,4 +302,4 @@ PaddlePaddle的参数使用名字 :code:`name` 作为参数的ID,相同名字 git submodule init git submodule update -来获得所有第三方模块。 \ No newline at end of file +来获得所有第三方模块。 diff --git a/doc/howto/deep_model/rnn/hierarchical_layer_cn.rst b/doc/howto/deep_model/rnn/hierarchical_layer_cn.rst index a9906b8b9c..943b1d4bb8 100644 --- a/doc/howto/deep_model/rnn/hierarchical_layer_cn.rst +++ b/doc/howto/deep_model/rnn/hierarchical_layer_cn.rst @@ -22,7 +22,7 @@ pooling_layer ============== -pooling_layer 的使用示例如下,详细见 `pooling_layer`_ 配置API。 +pooling_layer 的使用示例如下,详细见 :ref:`api_trainer_config_helpers_layers_pooling_layer` 配置API。 .. code-block:: bash @@ -47,7 +47,7 @@ pooling_layer 的使用示例如下,详细见 `pooling_layer`_ 配置API。 last_seq 和 first_seq ===================== -last_seq 的使用示例如下( `first_seq`_ 类似),详细见 `last_seq`_ 配置API。 +last_seq 的使用示例如下( :ref:`api_trainer_config_helpers_layers_first_seq` 类似),详细见 :ref:`api_trainer_config_helpers_layers_last_seq` 配置API。 .. code-block:: bash @@ -68,7 +68,7 @@ last_seq 的使用示例如下( `first_seq`_ 类似),详细见 `last_seq`_ expand_layer ============ -expand_layer 的使用示例如下,详细见 `expand_layer`_ 配置API。 +expand_layer 的使用示例如下,详细见 :ref:`api_trainer_config_helpers_layers_expand_layer` 配置API。 .. code-block:: bash @@ -87,9 +87,3 @@ expand_layer 的使用示例如下,详细见 `expand_layer`_ 配置API。 - 作用:一个单层序列经过运算扩展成一个双层序列 - 输入:layer1必须是一个单层序列,是待扩展的数据;layer2 必须是一个双层序列,提供扩展的长度信息 - 输出:一个双层序列,序列中含有元素的数目同 layer2 一致。要求单层序列含有元素的数目(0层序列)和双层序列含有subseq 的数目一致。单层序列第i个元素(0层序列),被扩展为一个单层序列,构成了输出双层序列的第i个 subseq 。 - - -.. _pooling_layer: ../../../doc/ui/api/trainer_config_helpers/layers.html#pooling-layer -.. _last_seq: ../../../doc/ui/api/trainer_config_helpers/layers.html#last-seq -.. _first_seq: ../../../doc/ui/api/trainer_config_helpers/layers.html#first-seq -.. 
_expand_layer: ../../../doc/ui/api/trainer_config_helpers/layers.html#expand-layer diff --git a/doc/howto/deep_model/rnn/recurrent_group_cn.md b/doc/howto/deep_model/rnn/recurrent_group_cn.md index 984fdcc505..06dc9e089a 100644 --- a/doc/howto/deep_model/rnn/recurrent_group_cn.md +++ b/doc/howto/deep_model/rnn/recurrent_group_cn.md @@ -12,7 +12,7 @@ 更进一步,`recurrent_group`同样可以扩展到双层序列的处理上。通过两个嵌套的`recurrent_group`分别定义子句级别和词语级别上需要完成的运算,最终实现一个层次化的复杂RNN。 -目前,在PaddlePaddle中,能够对双向序列进行处理的有`recurrent_group`和部分Layer,具体可参考文档:支持双层序列作为输入的Layer。 +目前,在PaddlePaddle中,能够对双向序列进行处理的有`recurrent_group`和部分Layer,具体可参考文档:支持双层序列作为输入的Layer。 ## 相关概念 diff --git a/doc/howto/usage/cluster/k8s/k8s_distributed_cn.md b/doc/howto/usage/cluster/k8s/k8s_distributed_cn.md index d4d01f2759..53d0b4676c 100644 --- a/doc/howto/usage/cluster/k8s/k8s_distributed_cn.md +++ b/doc/howto/usage/cluster/k8s/k8s_distributed_cn.md @@ -82,7 +82,7 @@ COPY start_paddle.py /root/ CMD ["bash"," -c","/root/start.sh"] ``` -[`start.sh`](start.sh)文件拷贝训练文件到容器内,然后执行[`start_paddle.py`](start_paddle.py)脚本启动训练,前文提到的获取其他节点IP地址,分配`trainer_id`等都在`start_paddle.py`脚本中完成。 +[start.sh](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/howto/usage/cluster/k8s/start.sh)文件拷贝训练文件到容器内,然后执行[start_paddle.py](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/howto/usage/cluster/k8s/start_paddle.py)脚本启动训练,前文提到的获取其他节点IP地址,分配`trainer_id`等都在`start_paddle.py`脚本中完成。 `start_paddle.py`脚本开始时,会先进行参数的初始化与解析。 diff --git a/doc/howto/usage/concepts/use_concepts_cn.rst b/doc/howto/usage/concepts/use_concepts_cn.rst index 77ba764419..fa334bcbb9 100644 --- a/doc/howto/usage/concepts/use_concepts_cn.rst +++ b/doc/howto/usage/concepts/use_concepts_cn.rst @@ -37,7 +37,7 @@ PaddlePaddle是一个深度学习框架,支持单机模式和多机模式。 DataProvider是PaddlePaddle系统的数据提供器,将用户的原始数据转换成系统可以识别的数据类型。每当系统需要新的数据训练时, trainer进程会调用DataProvider函数返回数据。当所有数据读取完一轮后,DataProvider返回空数据,通知系统一轮数据读取结束,并且系统每一轮训练开始时会重置DataProvider。需要注意的是,DataProvider是被系统调用,而不是新数据驱动系统,一些随机化噪声添加都应该在DataProvider中完成。 -在不同的应用里,训练数据的格式往往各不相同。因此,为了用户能够灵活的处理数据,我们提供了Python处理数据的接口,称为 `PyDataProvider`_ 。在 ``PyDataProvider`` 中,系统C++模块接管了shuffle、处理batch、GPU和CPU通信、双缓冲、异步读取等问题,一些情况下(如:``min_pool_size=0``)需要Python接口里处理shuffle,可以参考 `PyDataProvider`_ 的相关文档继续深入了解。 +在不同的应用里,训练数据的格式往往各不相同。因此,为了用户能够灵活的处理数据,我们提供了Python处理数据的接口,称为 ``PyDataProvider`` 。在 ``PyDataProvider`` 中,系统C++模块接管了shuffle、处理batch、GPU和CPU通信、双缓冲、异步读取等问题,一些情况下(如:``min_pool_size=0``)需要Python接口里处理shuffle,可以参考 :ref:`api_pydataprovider2` 继续深入了解。 训练配置文件 @@ -50,21 +50,21 @@ DataProvider是PaddlePaddle系统的数据提供器,将用户的原始数据 .. 
literalinclude:: src/trainer_config.py :linenos: -文件开头 ``from paddle.trainer_config_helpers import *`` ,是因为PaddlePaddle配置文件与C++模块通信的最基础协议是protobuf,为了避免用户直接写复杂的protobuf string,我们为用户定以Python接口来配置网络,该Python代码可以生成protobuf包,这就是`trainer_config_helpers`_的作用。因此,在文件的开始,需要import这些函数。 这个包里面包含了模型配置需要的各个模块。 +文件开头 ``from paddle.trainer_config_helpers import *`` ,是因为PaddlePaddle配置文件与C++模块通信的最基础协议是protobuf,为了避免用户直接写复杂的protobuf string,我们为用户定以Python接口来配置网络,该Python代码可以生成protobuf包,这就是 :ref:`api_trainer_config` 的作用。因此,在文件的开始,需要import这些函数。 这个包里面包含了模型配置需要的各个模块。 下面分别介绍数据源配置、优化算法配置、网络结构配置这三部分该概念。 数据源配置 ---------- -使用 `PyDataProvider`_ 的函数 ``define_py_data_sources2`` 配置数据源。``define_py_data_sources2`` 里通过train_list和test_list指定是训练文件列表和测试文件列表。 如果传入字符串的话,是指一个数据列表文件。这个数据列表文件中包含的是每一个训练或者测试文件的路径。如果传入一个list的话,则会默认生成一个list文件,再传入给train.list或者test.list。 +使用 ``PyDataProvider2`` 的函数 ``define_py_data_sources2`` 配置数据源。``define_py_data_sources2`` 里通过train_list和test_list指定是训练文件列表和测试文件列表。 如果传入字符串的话,是指一个数据列表文件。这个数据列表文件中包含的是每一个训练或者测试文件的路径。如果传入一个list的话,则会默认生成一个list文件,再传入给train.list或者test.list。 -``module`` 和 ``obj`` 指定了DataProvider的文件名和返回数据的函数名。更详细的使用,请参考 `PyDataProvider`_ 。 +``module`` 和 ``obj`` 指定了DataProvider的文件名和返回数据的函数名。更详细的使用,请参考 :ref:`api_pydataprovider2` 。 优化算法配置 ------------ -通过 `settings`_ 接口设置神经网络所使用的训练参数和 `优化算法`_ ,包括学习率、batch_size、优化算法、正则方法等,具体的使用方法请参考 `settings`_ 文档。 +通过 :ref:`api_trainer_config_helpers_optimizers_settings` 接口设置神经网络所使用的训练参数和 :ref:`api_trainer_config_helpers_optimizers` ,包括学习率、batch_size、优化算法、正则方法等,具体的使用方法请参考 :ref:`api_trainer_config_helpers_optimizers_settings` 文档。 网络结构配置 ------------ @@ -82,14 +82,13 @@ DataProvider是PaddlePaddle系统的数据提供器,将用户的原始数据 这个配置文件网络由 ``data_layer`` 、 ``simple_img_conv_pool`` 、 ``fc_layer`` 组成。 - - `data_layer`_ : 通常每个配置文件都会包括 ``data_layer`` ,定义输入数据大小。 - - `simple_img_conv_pool`_ :是一个组合层,包括了图像的卷积 (convolution)和池化(pooling)。 - - `fc_layer`_ :全连接层,激活函数为Softmax,这里也可叫分类层。 + - :ref:`api_trainer_config_helpers_layers_data_layer` : 通常每个配置文件都会包括 ``data_layer`` ,定义输入数据大小。 + - :ref:`api_trainer_config_helpers_network_simple_img_conv_pool` :是一个组合层,包括了图像的卷积 (convolution)和池化(pooling)。 + - :ref:`api_trainer_config_helpers_layers_fc_layer` :全连接层,激活函数为Softmax,这里也可叫分类层。 - - 损失函数和评估器:损失函数即为网络的优化目标,评估器可以评价模型结果。 - PaddlePaddle包括很多损失函数和评估起,详细可以参考 `损失函数层`_ 和 `评估器`_ 。这里 ``classification_cost`` 默认使用多类交叉熵损失函数和分类错误率统计评估器。 + PaddlePaddle包括很多损失函数和评估起,详细可以参考 :ref:`api_trainer_config_helpers_layers_cost_layers` 和 :ref:`api_trainer_config_helpers_evaluators` 。这里 ``classification_cost`` 默认使用多类交叉熵损失函数和分类错误率统计评估器。 - ``outputs``: 标记网络输出的函数为 ``outputs`` 。 @@ -106,7 +105,7 @@ DataProvider是PaddlePaddle系统的数据提供器,将用户的原始数据 with mixed_layer(size=200) as out: out += full_matrix_projection(input=data) -PaddlePaddle 可以使用 ``mixed layer`` 配置出非常复杂的网络,甚至可以直接配置一个完整的LSTM。用户可以参考 `mixed_layer`_ 的相关文档进行配置。 +PaddlePaddle 可以使用 ``mixed layer`` 配置出非常复杂的网络,甚至可以直接配置一个完整的LSTM。用户可以参考 :ref:`api_trainer_config_helpers_layers_mixed_layer` 的相关文档进行配置。 分布式训练 @@ -138,18 +137,3 @@ PaddlePaddle多机采用经典的 Parameter Server 架构对多个节点的 trai * --ports_num_for_sparse\: 一个pserver进程共绑定多少端口用来做稀疏更新,默认是0。 使用手工指定端口数量,是因为Paddle的网络通信中,使用了 int32 作为消息长度,比较容易在大模型下溢出。所以,在 pserver 进程中可以启动多个子线程去接受 trainer 的数据,这样单个子线程的长度就不会溢出了。但是这个值不可以调的过大,因为增加这个值,对性能尤其是内存占用有一定的开销,另外稀疏更新的端口如果太大的话,很容易导致某一个参数服务器没有分配到任何参数。 - -详细的说明可以参考,使用 `集群训练Paddle`_ 。 - - -.. _PyDataProvider: ../ui/data_provider/pydataprovider2.html -.. _settings: ../../doc/ui/api/trainer_config_helpers/optimizers.html#settings -.. _优化算法: ../../doc/ui/api/trainer_config_helpers/optimizers.html#optimizers -.. 
_trainer_config_helper: ../../doc/ui/api/trainer_config_helpers/index.html -.. _data_layer: ../../doc/ui/api/trainer_config_helpers/layers.html#data-layer -.. _simple_img_conv_pool: ../../doc/ui/api/trainer_config_helpers/networks.html#simple-img-conv-pool -.. _fc_layer: ../../doc/ui/api/trainer_config_helpers/layers.html#fc-layer -.. _损失函数层: ../../doc/ui/api/trainer_config_helpers/layers.html#cost-layers -.. _评估器: ../../doc/ui/api/trainer_config_helpers/evaluators.html -.. _mixed_layer: ../../doc/ui/api/trainer_config_helpers/layers.html#mixed-layer -.. _集群训练Paddle: ../cluster/index.html diff --git a/doc/tutorials/index_cn.md b/doc/tutorials/index_cn.md index adc75978a7..97014d5376 100644 --- a/doc/tutorials/index_cn.md +++ b/doc/tutorials/index_cn.md @@ -1,24 +1,11 @@ # 完整教程 -## 快速入门 - -使用商品评论分类任务,系统性的介绍如何一步步改进,最终得到产品级的深度模型。 - -* [阅读教程](quick_start/index_cn.rst) - -## 图像 - -* TBD - -## 自然语言处理 - -* [情感分类](sentiment_analysis/index_cn.md) +* [快速入门](quick_start/index_cn.rst) +* [个性化推荐](rec/ml_regression_cn.rst) +* [情感分析](sentiment_analysis/index_cn.md) * [语义角色标注](semantic_role_labeling/index_cn.md) - -## 个性化推荐 - -* TBD +* [机器翻译](text_generation/index_cn.md) ## 常用模型 -* TBD +* [ResNet模型](imagenet_model/resnet_model_cn.md) diff --git a/doc/tutorials/index_en.md b/doc/tutorials/index_en.md index 63b2091c24..cce9d3a176 100644 --- a/doc/tutorials/index_en.md +++ b/doc/tutorials/index_en.md @@ -1,23 +1,12 @@ # TUTORIALS There are several examples and demos here. -## Quick Start - * [Quick Start](quick_start/index_en.md) - -## Image - +* [MovieLens Regression](rec/ml_regression_en.rst) * [Image Classification](image_classification/index_en.md) - -## NLP - * [Sentiment Analysis](sentiment_analysis/index_en.md) -* [Text Generation](text_generation/index_en.md) * [Semantic Role Labeling](semantic_role_labeling/index_en.md) - -## Recommendation - -* [MovieLens Regression](rec/ml_regression_en.rst) +* [Text Generation](text_generation/index_en.md) ## Model Zoo * [ImageNet: ResNet](imagenet_model/resnet_model_en.md) diff --git a/doc/tutorials/quick_start/index_en.md b/doc/tutorials/quick_start/index_en.md index 4e765b2303..a30944f18f 100644 --- a/doc/tutorials/quick_start/index_en.md +++ b/doc/tutorials/quick_start/index_en.md @@ -391,7 +391,7 @@ paddle train \ --use_gpu=false ``` -We do not provide examples on how to train on clusters here. If you want to train on clusters, please follow the distributed training documentation or other demos for more details. +We do not provide examples on how to train on clusters here. If you want to train on clusters, please follow the distributed training documentation or other demos for more details. ## Inference You can use the trained model to perform prediction on the dataset with no labels. You can also evaluate the model on dataset with labels to obtain its test accuracy. @@ -509,7 +509,7 @@ The scripts of data downloading, network configurations, and training scrips are * \--config_args:Other configuration arguments. * \--init_model_path:The path of the initial model parameter. -By default, the trainer will save model every pass. You can also specify `saving_period_by_batches` to set the frequency of batch saving. You can use `show_parameter_stats_period` to print the statistics of the parameters, which are very useful for tuning parameters. Other command line arguments can be found in command line argument documentation。 +By default, the trainer will save model every pass. You can also specify `saving_period_by_batches` to set the frequency of batch saving. 
You can use `show_parameter_stats_period` to print the statistics of the parameters, which are very useful for tuning parameters. Other command line arguments can be found in command line argument documentation。 ### Log diff --git a/python/paddle/trainer_config_helpers/layers.py b/python/paddle/trainer_config_helpers/layers.py index c10fa671bd..da951390c9 100644 --- a/python/paddle/trainer_config_helpers/layers.py +++ b/python/paddle/trainer_config_helpers/layers.py @@ -1776,15 +1776,15 @@ def img_conv_layer(input, trans=False, layer_type=None): """ - Convolution layer for image. Paddle only support square input currently and - thus input image's width equals height. + Convolution layer for image. Paddle can support both square and non-square + input currently. The details of convolution layer, please refer UFLDL's `convolution `_ . - Convolution Transpose (deconv) layer for image. Paddle only support square - input currently and thus input image's width equals height. + Convolution Transpose (deconv) layer for image. Paddle can support both square + and non-square input currently. The details of convolution transpose layer, please refer to the following explanation and references therein From f392ddf73309fd38fff7dbf0eacc317f2e4bfa8b Mon Sep 17 00:00:00 2001 From: livc Date: Mon, 19 Dec 2016 11:13:52 +0800 Subject: [PATCH 230/265] modify markdown format of cluster_train_en.md and add link of demo/recommendation --- doc/howto/usage/cluster/cluster_train_en.md | 38 ++++++++++----------- 1 file changed, 19 insertions(+), 19 deletions(-) diff --git a/doc/howto/usage/cluster/cluster_train_en.md b/doc/howto/usage/cluster/cluster_train_en.md index a9a3194f09..68715b6645 100644 --- a/doc/howto/usage/cluster/cluster_train_en.md +++ b/doc/howto/usage/cluster/cluster_train_en.md @@ -20,13 +20,13 @@ In this article, we explain how to run distributed Paddle training jobs on clust We refer to the directory where we put dependent libraries, config files, etc., as *workspace*. -These ```train/test``` data should be prepared before launching cluster job. To satisfy the requirement that train/test data are placed in different directory from workspace, PADDLE refers train/test data according to index file named as ```train.list/test.list``` which are used in model config file. So the train/test data also contains train.list/test.list two list file. All local training demo already provides scripts to help you create these two files, and all nodes in cluster job will handle files with same logical code in normal condition. +These `train/test` data should be prepared before launching cluster job. To satisfy the requirement that train/test data are placed in different directory from workspace, PADDLE refers train/test data according to index file named as `train.list/test.list` which are used in model config file. So the train/test data also contains train.list/test.list two list file. All local training demo already provides scripts to help you create these two files, and all nodes in cluster job will handle files with same logical code in normal condition. -Generally, you can use same model file from local training for cluster training. What you should have in mind that, the ```batch_size``` set in ```setting``` function in model file means batch size in ```each``` node of cluster job instead of total batch size if synchronization SGD was used. +Generally, you can use same model file from local training for cluster training. 
What you should have in mind that, the `batch_size` set in `setting` function in model file means batch size in `each` node of cluster job instead of total batch size if synchronization SGD was used. -Following steps are based on demo/recommendation demo in demo directory. +Following steps are based on [demo/recommendation](https://github.com/PaddlePaddle/Paddle/tree/develop/demo/recommendation) demo in demo directory. -You just go through demo/recommendation tutorial doc until ```Train``` section, and at last you will get train/test data and model configuration file. Finaly, just use demo/recommendation as workspace for cluster training. +You just go through demo/recommendation tutorial doc until `Train` section, and at last you will get train/test data and model configuration file. Finaly, just use demo/recommendation as workspace for cluster training. At last your workspace should look like as follow: ``` @@ -72,19 +72,19 @@ all files in data directory are refered by train.list/test.list which are refere The options below must be carefully set in cluster_train/conf.py -```HOSTS``` all nodes hostname or ip that will run cluster job. You can also append user and ssh port with hostname, such as root@192.168.100.17:9090. +`HOSTS` all nodes hostname or ip that will run cluster job. You can also append user and ssh port with hostname, such as root@192.168.100.17:9090. -```ROOT_DIR``` workspace ROOT directory for placing JOB workspace directory +`ROOT_DIR` workspace ROOT directory for placing JOB workspace directory -```PADDLE_NIC``` the NIC(Network Interface Card) interface name for cluster communication channel, such as eth0 for ethternet, ib0 for infiniband. +`PADDLE_NIC` the NIC(Network Interface Card) interface name for cluster communication channel, such as eth0 for ethternet, ib0 for infiniband. -```PADDLE_PORT``` port number for cluster commnunication channel +`PADDLE_PORT` port number for cluster commnunication channel -```PADDLE_PORTS_NUM``` the number of port used for cluster communication channle. if the number of cluster nodes is small(less than 5~6nodes), recommend you set it to larger, such as 2 ~ 8, for better network performance. +`PADDLE_PORTS_NUM` the number of port used for cluster communication channle. if the number of cluster nodes is small(less than 5~6nodes), recommend you set it to larger, such as 2 ~ 8, for better network performance. -```PADDLE_PORTS_NUM_FOR_SPARSE``` the number of port used for sparse updater cluster commnunication channel. if sparse remote update is used, set it like ```PADDLE_PORTS_NUM``` +`PADDLE_PORTS_NUM_FOR_SPARSE` the number of port used for sparse updater cluster commnunication channel. if sparse remote update is used, set it like `PADDLE_PORTS_NUM` -```LD_LIBRARY_PATH``` set addtional LD_LIBRARY_PATH for cluster job. You can use it to set CUDA libraries path. +`LD_LIBRARY_PATH` set addtional LD_LIBRARY_PATH for cluster job. You can use it to set CUDA libraries path. Default Configuration as follow: @@ -118,15 +118,15 @@ LD_LIBRARY_PATH="/usr/local/cuda/lib64:/usr/lib64" ``` ### Launching Cluster Job -```paddle.py``` provides automatical scripts to start all PaddlePaddle cluster processes in different nodes. By default, all command line options can set as ```paddle.py``` command options and ```paddle.py``` will transparently and automatically set these options to PaddlePaddle lower level processes. +`paddle.py` provides automatical scripts to start all PaddlePaddle cluster processes in different nodes. 
By default, all command line options can set as `paddle.py` command options and `paddle.py` will transparently and automatically set these options to PaddlePaddle lower level processes. -```paddle.py```provides two distinguished command option for easy job launching. +`paddle.py`provides two distinguished command option for easy job launching. -```job_dispatch_package``` set it with local ```workspace```directory, it will be dispatched to all nodes set in conf.py. It could be helpful for frequent hacking workspace files, otherwise frequent mulit-nodes workspace deployment could make your crazy. -```job_workspace``` set it with already deployed workspace directory, ```paddle.py``` will skip dispatch stage to directly launch cluster job with all nodes. It could help to reduce heavy +`job_dispatch_package` set it with local `workspace`directory, it will be dispatched to all nodes set in conf.py. It could be helpful for frequent hacking workspace files, otherwise frequent mulit-nodes workspace deployment could make your crazy. +`job_workspace` set it with already deployed workspace directory, `paddle.py` will skip dispatch stage to directly launch cluster job with all nodes. It could help to reduce heavy dispatch latency. -```cluster_train/run.sh``` provides command line sample to run ```demo/recommendation``` cluster job, just modify ```job_dispatch_package``` and ```job_workspace``` with your defined directory, then: +`cluster_train/run.sh` provides command line sample to run `demo/recommendation` cluster job, just modify `job_dispatch_package` and `job_workspace` with your defined directory, then: ``` sh run.sh ``` @@ -134,7 +134,7 @@ sh run.sh The cluster Job will start in several seconds. ### Kill Cluster Job -```paddle.py``` can capture ```Ctrl + C``` SIGINT signal to automatically kill all processes launched by it. So just stop ```paddle.py``` to kill cluster job. You should mannally kill job if program crashed. +`paddle.py` can capture `Ctrl + C` SIGINT signal to automatically kill all processes launched by it. So just stop `paddle.py` to kill cluster job. You should mannally kill job if program crashed. ### Check Cluster Training Result Check log in $workspace/log for details, each node owns same log structure. @@ -152,5 +152,5 @@ It provides stderr and stdout of pserver process. Check error log if training cr It provides stderr and stdout of trainer process. Check error log if training crashs. ### Check Model Output -After one pass finished, model files will be writed in ```output``` directory in node 0. -```nodefile``` in workspace indicates the node id of current cluster job. +After one pass finished, model files will be writed in `output` directory in node 0. +`nodefile` in workspace indicates the node id of current cluster job. 
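The cluster-training patches above describe `job_dispatch_package` and `job_workspace` only in prose; the sample `cluster_train/run.sh` itself is not reproduced in this series. A minimal sketch of such a launcher is given below, assuming `paddle.py` accepts the documented options as `--key=value` flags together with the usual trainer flags (`--config`, `--use_gpu`, `--trainer_count`, `--num_passes`). The workspace path is a placeholder; the authoritative version is the `run.sh` shipped under `paddle/scripts/cluster_train`.

```bash
#!/bin/bash
# Hypothetical cluster_train/run.sh, assembled from the options documented in
# the patches above. Flag spellings are assumptions; check `python paddle.py --help`
# and the run.sh shipped with Paddle before relying on them.

# Dispatch the local workspace to every node listed in conf.py and launch the
# pserver and trainer processes on each of them.
python paddle.py \
  --job_dispatch_package="$HOME/workspace/demo/recommendation" \
  --config=./trainer_config.py \
  --use_gpu=false \
  --trainer_count=4 \
  --num_passes=10 \
  --save_dir=./output
```

To reuse an already-deployed workspace instead, replace `--job_dispatch_package` with `--job_workspace=<dir>` pointing at the directory that already exists on every node, which skips the dispatch stage as described above.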
From 7dfe3bdf7a2e7d39ee76c356c1894e72e84bc464 Mon Sep 17 00:00:00 2001 From: gaoyuan Date: Mon, 19 Dec 2016 11:46:57 +0800 Subject: [PATCH 231/265] remove gpu memory alloc --- paddle/gserver/tests/test_PriorBox.cpp | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/paddle/gserver/tests/test_PriorBox.cpp b/paddle/gserver/tests/test_PriorBox.cpp index 1a7217ab94..1dab21218e 100644 --- a/paddle/gserver/tests/test_PriorBox.cpp +++ b/paddle/gserver/tests/test_PriorBox.cpp @@ -53,7 +53,7 @@ void doOnePriorBoxTest(size_t featureMapWidth, LayerMap layerMap; vector datas; initDataLayer( - configt, &dataLayers, &datas, &layerMap, "priorbox", 1, false, true); + configt, &dataLayers, &datas, &layerMap, "priorbox", 1, false, false); dataLayers[0]->getOutput().setFrameHeight(featureMapHeight); dataLayers[0]->getOutput().setFrameWidth(featureMapWidth); dataLayers[1]->getOutput().setFrameHeight(imageHeight); From 0740e4dbfd6f9e17eec94371f956385af4fa3a7c Mon Sep 17 00:00:00 2001 From: livc Date: Mon, 19 Dec 2016 13:05:29 +0800 Subject: [PATCH 232/265] modify details --- doc/howto/usage/cluster/cluster_train_cn.md | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/doc/howto/usage/cluster/cluster_train_cn.md b/doc/howto/usage/cluster/cluster_train_cn.md index 7ce17e20f8..bb5b22102f 100644 --- a/doc/howto/usage/cluster/cluster_train_cn.md +++ b/doc/howto/usage/cluster/cluster_train_cn.md @@ -1,6 +1,6 @@ # 运行分布式训练 -在本文中,我们将阐释如何在集群上运行分布式 Paddle 训练作业。我们将创建分布式的单进程训练示例,[推荐](https://github.com/baidu/Paddle/tree/develop/demo/recommendation)。 +在本文中,我们将阐释如何在集群上运行分布式 Paddle 训练作业。我们将以[推荐系统](https://github.com/baidu/Paddle/tree/develop/demo/recommendation)为例创建分布式的单进程训练。 在本文中使用的[脚本](https://github.com/baidu/Paddle/tree/develop/paddle/scripts/cluster_train)通过 SSH 运行分布式作业。 它们还可以供那些运行更复杂的集群管理系统(如 MPI 和 Kubernetes )的用户参考。 @@ -14,15 +14,15 @@ 2. 我们需要在集群的所有节点上安装 PaddlePaddle。 如果要启用GPU,需要在 `/usr/local/cuda` 中安装 CUDA; 否则 Paddle 将在运行时报错。 -3. 在所有节点上的[`cluster_train/conf.py`]中设置 `ROOT_DIR` 变量。 为了方便起见,我们通常在所有节点上创建一个 Unix 用户 `paddle`,并设置 `ROOT_DIR=/home/paddle`。这样,我们可以将 SSH 公钥写入 `/home/paddle/.ssh/authorized_keys`,以便用户 `paddle` 可以 SSH 到所有节点而不用密码。 +3. 
在 [`cluster_train/conf.py`] 中设置 `ROOT_DIR`, 该 ROOT_DIR 要在所有节点上存在。为了方便起见,我们通常在所有节点上创建一个 Unix 用户 `paddle`,并设置 `ROOT_DIR=/home/paddle`。这样,我们可以将 SSH 公钥写入 `/home/paddle/.ssh/authorized_keys`,以便用户 `paddle` 可以 SSH 到所有节点而不用密码。 ## 准备工作空间 我们将放置依赖库、配置等文件的目录视为 *工作空间(workspace)*。 -这些 `train/test` 数据应该在启动集群作业之前准备好。 为了满足训练/测试数据放置在工作空间中不同目录的要求,PADDLE 根据在模型配置文件中使用的名为 `train.list/test.list` 的索引文件引用训练/测试数据。所以训练/测试数据也包含 train.list/test.list 两个列表文件。所有本地训练 demo 已经提供了脚本来帮助您创建这两个文件,并且集群作业中的所有节点将在正常情况下处理具有相同逻辑代码的文件。 +这些 `train/test` 数据应该在启动集群作业之前准备好。 为了满足训练/测试数据放置在工作空间中不同目录的要求,PADDLE 根据在模型配置文件中使用的名为 `train.list/test.list` 的索引文件引用训练/测试数据,所以训练/测试数据也包含 train.list/test.list 两个列表文件。所有本地训练 demo 已经提供了脚本来帮助您创建这两个文件,并且集群作业中的所有节点将在正常情况下处理具有相同逻辑代码的文件。 -通常,你可以使用本地训练中的相同模型文件进行集群训练。 你应该知道,在模型文件的 `setting`函数中设置的 `batch_size` 表示在集群作业**每个**节点中的 batch 大小,而不是使用同步 SGD 的总 batch 大小。 +通常,你可以使用本地训练中的相同模型文件进行集群训练。请记住,在模型文件的 `setting`函数中设置的 `batch_size` 表示在集群作业**每个**节点中的 batch 大小,而不是使用同步 SGD 的总 batch 大小。 以下步骤基于 demo 目录中的 [demo/recommendation](https://github.com/PaddlePaddle/Paddle/tree/develop/demo/recommendation)。 @@ -82,7 +82,7 @@ `PADDLE_PORTS_NUM` 用于集群通信通道的端口数。 如果集群节点数量少(少于5〜6个节点),建议将其设置为较大,如2〜8,以获得更好的网络性能。 -`PADDLE_PORTS_NUM_FOR_SPARSE` 用于稀疏更新器集群通信信道的端口数。如果使用稀疏远程更新,则可以像 ```PADDLE_PORTS_NUM``` 一样设置。 +`PADDLE_PORTS_NUM_FOR_SPARSE` 用于 sparse remote updater 集群通信信道的端口数。如果使用 sparse remote update,则可以像 `PADDLE_PORTS_NUM` 一样设置。 `LD_LIBRARY_PATH` 为集群作业设置额外的 LD_LIBRARY_PATH。你可以使用它来设置 CUDA 库的路径。 @@ -118,11 +118,11 @@ LD_LIBRARY_PATH="/usr/local/cuda/lib64:/usr/lib64" ``` ### 启动集群作业 -`paddle.py` 提供了自动化脚本来启动不同节点中的所有 PaddlePaddle 集群进程。默认情况下,所有命令行选项可以设置为```paddle.py``` 命令选项并且 `paddle.py` 将透明、自动地将这些选项应用到 PaddlePaddle 低级进程。 +`paddle.py` 提供了自动化脚本来启动不同节点中的所有 PaddlePaddle 集群进程。默认情况下,所有命令行选项可以设置为```paddle.py``` 命令选项并且 `paddle.py` 将透明、自动地将这些选项应用到 PaddlePaddle 底层进程。 `paddle.py` 为方便作业启动提供了两个独特的命令选项。 -`job_dispatch_package` 设为本地 `workspace` 目录,它将被分发到 conf.py 中设置的所有节点。 这有助于频繁的修改、访问工作区文件,否则频繁的多节点工作空间部署可能会很麻烦。 +`job_dispatch_package` 设为本地 `workspace` 目录,它将被分发到 conf.py 中设置的所有节点。 它有助于帮助频繁修改和访问工作区文件的用户减少负担,否则频繁的多节点工作空间部署可能会很麻烦。 `job_workspace` 设为已部署的工作空间目录,`paddle.py` 将跳过分发阶段直接启动所有节点的集群作业。它可以帮助减少分发延迟。 `cluster_train/run.sh` 提供了命令样例来运行 `demo/recommendation` 集群工作,只需用你定义的目录修改 `job_dispatch_package` 和 `job_workspace`,然后: From a091b9718246e2ea202d9923b9257a62bc25fb9b Mon Sep 17 00:00:00 2001 From: livc Date: Mon, 19 Dec 2016 14:10:19 +0800 Subject: [PATCH 233/265] add refer link of Kubernetes in cluster_train_cn.md --- doc/howto/usage/cluster/cluster_train_cn.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc/howto/usage/cluster/cluster_train_cn.md b/doc/howto/usage/cluster/cluster_train_cn.md index bb5b22102f..f70f4214af 100644 --- a/doc/howto/usage/cluster/cluster_train_cn.md +++ b/doc/howto/usage/cluster/cluster_train_cn.md @@ -2,7 +2,7 @@ 在本文中,我们将阐释如何在集群上运行分布式 Paddle 训练作业。我们将以[推荐系统](https://github.com/baidu/Paddle/tree/develop/demo/recommendation)为例创建分布式的单进程训练。 -在本文中使用的[脚本](https://github.com/baidu/Paddle/tree/develop/paddle/scripts/cluster_train)通过 SSH 运行分布式作业。 它们还可以供那些运行更复杂的集群管理系统(如 MPI 和 Kubernetes )的用户参考。 +在本文中使用的[脚本](https://github.com/baidu/Paddle/tree/develop/paddle/scripts/cluster_train)通过 SSH 运行分布式作业。 它们还可以供那些运行更复杂的集群管理系统(如 MPI 和 [Kubernetes](https://github.com/PaddlePaddle/Paddle/tree/develop/doc/howto/usage/cluster/k8s) )的用户参考。 ## 前提条件 From de850e6d8022d54d83c73a53ce9f2e690cedff26 Mon Sep 17 00:00:00 2001 From: livc Date: Mon, 19 Dec 2016 14:12:20 +0800 Subject: [PATCH 234/265] add refer link of Kubernetes in cluster_train_en.md --- 
doc/howto/usage/cluster/cluster_train_en.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc/howto/usage/cluster/cluster_train_en.md b/doc/howto/usage/cluster/cluster_train_en.md index 68715b6645..30963dcd92 100644 --- a/doc/howto/usage/cluster/cluster_train_en.md +++ b/doc/howto/usage/cluster/cluster_train_en.md @@ -2,7 +2,7 @@ In this article, we explain how to run distributed Paddle training jobs on clusters. We will create the distributed version of the single-process training example, [recommendation](https://github.com/baidu/Paddle/tree/develop/demo/recommendation). -[Scripts](https://github.com/baidu/Paddle/tree/develop/paddle/scripts/cluster_train) used in this article launch distributed jobs via SSH. They also work as a reference for users running more sophisticated cluster management systems like MPI and Kubernetes. +[Scripts](https://github.com/baidu/Paddle/tree/develop/paddle/scripts/cluster_train) used in this article launch distributed jobs via SSH. They also work as a reference for users running more sophisticated cluster management systems like MPI and [Kubernetes](https://github.com/PaddlePaddle/Paddle/tree/develop/doc/howto/usage/cluster/k8s). ## Prerequisite From af5d954bdf70b553186539621baf8badcb9940c8 Mon Sep 17 00:00:00 2001 From: Yu Yang Date: Mon, 19 Dec 2016 15:03:13 +0800 Subject: [PATCH 235/265] Clean BatchNorm Code. --- .../layers/BatchNormalizationLayer.cpp | 26 ++++++------------- paddle/math/Matrix.h | 4 +-- 2 files changed, 10 insertions(+), 20 deletions(-) diff --git a/paddle/gserver/layers/BatchNormalizationLayer.cpp b/paddle/gserver/layers/BatchNormalizationLayer.cpp index e6a0624636..412762d384 100644 --- a/paddle/gserver/layers/BatchNormalizationLayer.cpp +++ b/paddle/gserver/layers/BatchNormalizationLayer.cpp @@ -59,24 +59,14 @@ void BatchNormalizationLayer::calMeanAndStd(const MatrixPtr& mat) { void BatchNormalizationLayer::calMovingMeanAndVar() { // calculating and saving moving mean and variance - MatrixPtr movingMean = movingMean_->getW(); - MatrixPtr movingVar = movingVar_->getW(); - - if (!useGpu_ && FLAGS_trainer_count > 1) { - auto mvMean = std::dynamic_pointer_cast(movingMean); - auto mvVar = std::dynamic_pointer_cast(movingVar); - CHECK(mvMean && mvVar); - - mvMean->add(*savedMean_, movingAvgFraction_, 1.0 - movingAvgFraction_); - mvVar->add(*savedInvVar_, movingAvgFraction_, 1.0 - movingAvgFraction_); - } else { - // movingMean = movingMean * movingAvgFraction_ - // + savedMean_ * (1 - movingAvgFraction_) - movingMean->add(*savedMean_, movingAvgFraction_, 1.0 - movingAvgFraction_); - // movingVar = movingVar * movingAvgFraction_ - // + savedInvVar_ * (1 - movingAvgFraction_) - movingVar->add(*savedInvVar_, movingAvgFraction_, 1.0 - movingAvgFraction_); - } + auto& movingMean = movingMean_->getW(); + auto& movingVar = movingVar_->getW(); + // movingMean = movingMean * movingAvgFraction_ + // + savedMean_ * (1 - movingAvgFraction_) + movingMean->add(*savedMean_, movingAvgFraction_, 1.0 - movingAvgFraction_); + // movingVar = movingVar * movingAvgFraction_ + // + savedInvVar_ * (1 - movingAvgFraction_) + movingVar->add(*savedInvVar_, movingAvgFraction_, 1.0 - movingAvgFraction_); } void BatchNormalizationLayer::setMeanAndStd() { diff --git a/paddle/math/Matrix.h b/paddle/math/Matrix.h index 5685cb7bcb..1cfb90a9db 100644 --- a/paddle/math/Matrix.h +++ b/paddle/math/Matrix.h @@ -1973,8 +1973,8 @@ public: public: virtual void mul(CpuSparseMatrix* a, CpuMatrix* b, real scaleAB, real scaleT); - void add(Matrix& b, real p1, real 
p2); - void add(real p1, real p2); + virtual void add(Matrix& b, real p1, real p2); + virtual void add(real p1, real p2); private: using Matrix::mul; From 148bd4d0b3240d31c1c96ddac89ffd4935f71b03 Mon Sep 17 00:00:00 2001 From: hedaoyuan Date: Mon, 19 Dec 2016 15:04:48 +0800 Subject: [PATCH 236/265] add Layer::createFunction --- paddle/gserver/layers/Layer.h | 24 +++++++++++-- paddle/gserver/layers/NormProjectionLayer.cpp | 34 +++++++------------ 2 files changed, 35 insertions(+), 23 deletions(-) diff --git a/paddle/gserver/layers/Layer.h b/paddle/gserver/layers/Layer.h index 16f66a2205..6dfd48fb96 100644 --- a/paddle/gserver/layers/Layer.h +++ b/paddle/gserver/layers/Layer.h @@ -102,9 +102,9 @@ protected: std::vector markInBackward_; /// Layer forward function - FunctionBase* forward_; + std::vector> forward_; /// Layer backward function - FunctionBase* backward_; + std::vector> backward_; public: /** @@ -132,6 +132,26 @@ public: virtual void markAllInputGrad(); protected: + /** + * Create layer function. Function is called in forward or backward. + * \param function, Layer::forward_ or Layer::backward_ + * \param name, function name + * \param config, initialization configuration for the function + */ + void createFunction(std::vector>& function, + const std::string& name, + const FuncConfig& config) { + if (useGpu_) { + function.emplace_back( + FunctionBase::funcRegistrar_.createByType(name + "-GPU")); + } else { + function.emplace_back( + FunctionBase::funcRegistrar_.createByType(name + "-CPU")); + } + auto& func = function.back(); + func->init(config); + } + /** * Notify specified layer the output grad ready. * Called in the backward function. diff --git a/paddle/gserver/layers/NormProjectionLayer.cpp b/paddle/gserver/layers/NormProjectionLayer.cpp index 0f6f9b91d0..262d757c67 100644 --- a/paddle/gserver/layers/NormProjectionLayer.cpp +++ b/paddle/gserver/layers/NormProjectionLayer.cpp @@ -45,21 +45,13 @@ bool CMRProjectionNormLayer::init(const LayerMap& layerMap, /* the size of inputs for norm-layer is 1 */ CHECK_EQ(config_.inputs_size(), 1); - if (useGpu_) { - forward_ = FunctionBase::funcRegistrar_.createByType( - FUNC_NAME(CrossMapNormal, GPU)); - backward_ = FunctionBase::funcRegistrar_.createByType( - FUNC_NAME(CrossMapNormalGrad, GPU)); - } else { - forward_ = FunctionBase::funcRegistrar_.createByType( - FUNC_NAME(CrossMapNormal, CPU)); - backward_ = FunctionBase::funcRegistrar_.createByType( - FUNC_NAME(CrossMapNormalGrad, CPU)); - } - forward_->init( + createFunction( + forward_, + "CrossMapNormal", FuncConfig().set("size", size_).set("scale", scale_).set("pow", pow_)); - - backward_->init( + createFunction( + backward_, + "CrossMapNormalGrad", FuncConfig().set("size", size_).set("scale", scale_).set("pow", pow_)); return true; @@ -80,7 +72,7 @@ void CMRProjectionNormLayer::forward(PassType passType) { Matrix::resizeOrCreate(denoms_, batchSize, size, /* trans */ false, useGpu_); dims_ = {batchSize, channels_, imgSizeH_, imgSizeW_}; - forward_->calc( + forward_[0]->calc( {Tensor(input->getData(), dims_)}, {Tensor(outV->getData(), dims_), Tensor(denoms_->getData(), dims_)}, {}); @@ -98,11 +90,11 @@ void CMRProjectionNormLayer::backward(const UpdateCallback& callback) { MatrixPtr localOutV = getOutputValue(); MatrixPtr preOutV = inputLayers_[0]->getOutputValue(); - backward_->calc({Tensor(preOutV->getData(), dims_), - Tensor(localOutV->getData(), dims_), - Tensor(localGrad->getData(), dims_), - Tensor(denoms_->getData(), dims_)}, - {Tensor(preOutGrad->getData(), dims_)}, - {}); + 
backward_[0]->calc({Tensor(preOutV->getData(), dims_), + Tensor(localOutV->getData(), dims_), + Tensor(localGrad->getData(), dims_), + Tensor(denoms_->getData(), dims_)}, + {Tensor(preOutGrad->getData(), dims_)}, + {}); } } // namespace paddle From 843d08d5f89c7cd4007458e09e80cdab5134eae8 Mon Sep 17 00:00:00 2001 From: Luo Tao Date: Mon, 19 Dec 2016 16:08:38 +0800 Subject: [PATCH 237/265] fix dead links on quick_start --- doc/api/data_provider/dataprovider_cn.rst | 2 ++ doc/api/predict/swig_py_paddle_cn.rst | 2 ++ doc/faq/index_cn.rst | 4 ++- doc/getstarted/build_and_install/index_cn.rst | 14 +++++----- doc/howto/index_cn.rst | 1 + doc/howto/usage/cluster/cluster_train_cn.md | 4 +++ doc/tutorials/quick_start/index_cn.rst | 15 ++++++----- doc/tutorials/rec/ml_regression_cn.rst | 26 +++++++++---------- 8 files changed, 41 insertions(+), 27 deletions(-) diff --git a/doc/api/data_provider/dataprovider_cn.rst b/doc/api/data_provider/dataprovider_cn.rst index 8c83ea57ca..d08c6b3efa 100644 --- a/doc/api/data_provider/dataprovider_cn.rst +++ b/doc/api/data_provider/dataprovider_cn.rst @@ -1,3 +1,5 @@ +.. _api_dataprovider: + DataProvider的介绍 ================== diff --git a/doc/api/predict/swig_py_paddle_cn.rst b/doc/api/predict/swig_py_paddle_cn.rst index 15e35353bb..42f333dba2 100644 --- a/doc/api/predict/swig_py_paddle_cn.rst +++ b/doc/api/predict/swig_py_paddle_cn.rst @@ -1,3 +1,5 @@ +.. _api_swig_py_paddle: + 基于Python的预测 ================ diff --git a/doc/faq/index_cn.rst b/doc/faq/index_cn.rst index d792f410bc..ea0ef25f00 100644 --- a/doc/faq/index_cn.rst +++ b/doc/faq/index_cn.rst @@ -110,7 +110,9 @@ PaddlePaddle支持Sparse的训练,sparse训练需要训练特征是 :code:`spa * 使用显卡训练。设置命令行参数 :code:`use_gpu`。 * 使用多块显卡训练。设置命令行参数 :code:`use_gpu` 和 :code:`trainer_count` 。 -* 多机训练(文档待补充) +* 多机训练 + + * 请参考 :ref:`cluster_train` 。 3. 遇到“非法指令”或者是“illegal instruction” diff --git a/doc/getstarted/build_and_install/index_cn.rst b/doc/getstarted/build_and_install/index_cn.rst index 3ffa858504..a24df6c518 100644 --- a/doc/getstarted/build_and_install/index_cn.rst +++ b/doc/getstarted/build_and_install/index_cn.rst @@ -1,8 +1,10 @@ -编译与安装 +安装与编译 ========== -安装 -++++ +.. _install_steps: + +安装流程 +++++++++ PaddlePaddle提供数个预编译的二进制来进行安装,包括Docker镜像,ubuntu的deb安装包等。我们推荐使用Docker镜像来部署环境,同时欢迎贡献更多的安装包。 @@ -14,12 +16,12 @@ PaddlePaddle提供数个预编译的二进制来进行安装,包括Docker镜 -编译 -++++ +编译流程 +++++++++ .. warning:: - 编译选项主要推荐高级用户查看,普通用户请走安装流程。 + 编译流程主要推荐高级用户查看,普通用户请走安装流程。 .. toctree:: :maxdepth: 1 diff --git a/doc/howto/index_cn.rst b/doc/howto/index_cn.rst index e03138723e..6a14ce8ae7 100644 --- a/doc/howto/index_cn.rst +++ b/doc/howto/index_cn.rst @@ -8,6 +8,7 @@ :maxdepth: 1 usage/concepts/use_concepts_cn.rst + usage/cluster/cluster_train_cn.md usage/cluster/k8s/k8s_cn.md usage/cluster/k8s/k8s_distributed_cn.md diff --git a/doc/howto/usage/cluster/cluster_train_cn.md b/doc/howto/usage/cluster/cluster_train_cn.md index f70f4214af..acdcfa1c00 100644 --- a/doc/howto/usage/cluster/cluster_train_cn.md +++ b/doc/howto/usage/cluster/cluster_train_cn.md @@ -1,3 +1,7 @@ +```eval_rst +.. 
_cluster_train: +``` + # 运行分布式训练 在本文中,我们将阐释如何在集群上运行分布式 Paddle 训练作业。我们将以[推荐系统](https://github.com/baidu/Paddle/tree/develop/demo/recommendation)为例创建分布式的单进程训练。 diff --git a/doc/tutorials/quick_start/index_cn.rst b/doc/tutorials/quick_start/index_cn.rst index 936f16118a..d565fcf95e 100644 --- a/doc/tutorials/quick_start/index_cn.rst +++ b/doc/tutorials/quick_start/index_cn.rst @@ -8,7 +8,7 @@ 安装 ==== -请参考 `安装教程 <../../build_and_install/index.html>`_ 安装PaddlePaddle。 +请参考 :ref:`install_steps` 安装PaddlePaddle。 使用概述 ======== @@ -60,7 +60,7 @@ Python脚本读取数据 ------------------ -`DataProvider <../../ui/data_provider/index.html>`_ 是PaddlePaddle负责提供数据的模块。``DataProvider`` 主要职责在于将训练数据传入内存或者显存,让模型能够得到训练更新,其包括两个函数: +`DataProvider` 是PaddlePaddle负责提供数据的模块,主要职责在于将训练数据传入内存或者显存,让模型能够得到训练更新,其包括两个函数: * initializer:PaddlePaddle会在调用读取数据的Python脚本之前,先调用initializer函数。在下面例子里,我们在initialzier函数里初始化词表,并且在随后的读取数据过程中填充词表。 * process:PaddlePaddle调用process函数来读取数据。每次读取一条数据后,process函数会用yield语句输出这条数据,从而能够被PaddlePaddle 捕获 (harvest)。 @@ -73,6 +73,7 @@ Python脚本读取数据 :linenos: :emphasize-lines: 8,33 +详细内容请参见 :ref:`api_dataprovider` 。 配置中的数据加载定义 -------------------- @@ -93,7 +94,7 @@ Python脚本读取数据 - obj="process": 指定生成数据的函数 - args={"dictionary": word_dict}: 额外的参数,这里指定词典 -更详细数据格式和用例请参考 `PyDataProvider2 <../../ui/data_provider/pydataprovider2.html>`_ 。 +更详细数据格式和用例请参考 :ref:`api_pydataprovider2` 。 模型网络结构 ============ @@ -105,7 +106,7 @@ Python脚本读取数据 :scale: 80% -我们将以最基本的逻辑回归网络作为起点,并逐渐展示更加深入的功能。更详细的网络配置连接请参考 `Layer文档 <../../../doc/layer.html>`_ 。 +我们将以最基本的逻辑回归网络作为起点,并逐渐展示更加深入的功能。更详细的网络配置连接请参考 :ref:`api_trainer_config_helpers_layers` 。 所有配置都能在 `源代码 `_ 的 ``demo/quick_start`` 目录下找到。 逻辑回归模型 @@ -306,7 +307,7 @@ Momentum, RMSProp,AdaDelta,AdaGrad,ADAM,Adamax等,这里采用Adam优 --num_passes=15 \ --use_gpu=false -这里只简单介绍了单机训练,如何进行分布式训练,可以参考教程 `分布式训练 <../../cluster/index.html>`_ 。 +这里只简单介绍了单机训练,如何进行分布式训练,请参考 :ref:`cluster_train` 。 预测 ===== @@ -318,7 +319,7 @@ Momentum, RMSProp,AdaDelta,AdaGrad,ADAM,Adamax等,这里采用Adam优 :scale: 80% 之前配置文件中 ``test.list`` 指定的数据将会被测试,这里直接通过预测脚本 ``predict.sh`` 进行预测, -更详细的说明,可以参考 `Python API预测 <../../ui/predict/swig_py_paddle.html>`_ 教程。 +更详细的说明,请参考 :ref:`api_swig_py_paddle` 。 .. 
code-block:: bash @@ -373,7 +374,7 @@ Momentum, RMSProp,AdaDelta,AdaGrad,ADAM,Adamax等,这里采用Adam优 默认一个pass保存一次模型,也可以通过saving_period_by_batches设置每隔多少batch保存一次模型。 可以通过show_parameter_stats_period设置打印参数信息等。 -其他参数请参考 `命令行参数文档 <../../ui/index.html#command-line-argument>`_ 。 +其他参数请参考 命令行参数文档(链接待补充)。 输出日志 --------- diff --git a/doc/tutorials/rec/ml_regression_cn.rst b/doc/tutorials/rec/ml_regression_cn.rst index a084e4790c..9278c9f603 100644 --- a/doc/tutorials/rec/ml_regression_cn.rst +++ b/doc/tutorials/rec/ml_regression_cn.rst @@ -1,5 +1,5 @@ MovieLens数据集评分回归模型 -========================= +=========================== 这里我们在MovieLens数据集描述一种 **余弦相似度回归** 任务。 该示例将展示paddle如何进行词向量嵌入,处理相似度回归,针对文本 @@ -12,9 +12,9 @@ MovieLens数据集评分回归模型 让这个示例变得更好,希望能让我们知晓。** 数据准备 -``````` +````````` 下载并解压数据集 -'''''''''''''' +''''''''''''''''' 这里我们使用 :ref:`demo_ml_dataset` 。 要下载和解压数据集,只需要简单的运行下面的命令即可。 @@ -34,7 +34,7 @@ MovieLens数据集评分回归模型 +--- README # 数据集描述 字段配置文件 -'''''''''' +''''''''''''' **字段配置文件** 用来具体说明数据集的字段和文件格式, 例如,说明每个特征文件具体字段是 **什么** 类型。 @@ -50,7 +50,7 @@ ml-1m的字段配置文件在目录 :code:`demo/recommendation/data/config.json` :literal: 准备数据 -``````` +````````` 你需要安装python的第三方库。 **强烈推荐使用VIRTUALENV来创造一个干净的python环境。** @@ -68,14 +68,14 @@ ml-1m的字段配置文件在目录 :code:`demo/recommendation/data/config.json` 下面介绍预处理过程具体的步骤。 提取电影或用户的特征并生成python对象 -'''''''''''''''''''''''''''''''' +''''''''''''''''''''''''''''''''''''' 在movielens 1m数据集中,电影和用户有许多的特征。 评分文件的每一行仅仅提供电影或用户的编号来代表相应的电影或用户。 我们首先处理电影或用户的特征文件,然后用pickle命令将特征( **Meta** )对象存储为文件。 Meta配置文件 -........... +............. **Meta配置文件** 用来具体描述 **如何** 解析数据集中的每一个字段。 该文件可以从字段配置文件生成,或是手动编辑生成。文件的格式可以 @@ -185,7 +185,7 @@ meta文件 :code:`meta.bin` 的结构如下: 分割训练/测试文件 -''''''''''''''' +'''''''''''''''''' 我们将 :code:`ml-1m/ratings.dat` 文件分割为训练和测试文件。分割文件的方法是:对于每位用户,我们将评分分成两部分。 这样的话每位用户在测试文件中将与训练文件含有同样的信息。 @@ -208,10 +208,10 @@ meta文件 :code:`meta.bin` 的结构如下: 神经网络结构配置 -`````````````` +````````````````` 训练器配置文件 -'''''''''''' +''''''''''''''' 网络结构如下图所示: @@ -251,7 +251,7 @@ meta文件 :code:`meta.bin` 的结构如下: * 声明Python数据源, :ref:`api_trainer_config_helpers_data_sources` 数据提供脚本 -''''''''''' +''''''''''''' .. 
literalinclude:: ../../../demo/recommendation/dataprovider.py :language: python @@ -264,7 +264,7 @@ meta文件 :code:`meta.bin` 的结构如下: * use_seq\: :code:`dataprovider.py` 中的数据是否为序列模式。 * process\: 返回数据的每一条样本给 :code:`paddle` 。 -数据提供脚本的细节文档可以参考 :ref:`api_pydataprovider` 。 +数据提供脚本的细节文档可以参考 :ref:`api_pydataprovider2` 。 训练 ```` @@ -316,7 +316,7 @@ meta文件 :code:`meta.bin` 的结构如下: 模型被保存在 :code:`output/` 目录中。你可以在任何时候用 :code:`Ctrl-C` 来停止训练。 模型评估和预测 -```````````` +``````````````` 在训练了几个轮次以后,你可以对模型进行评估,得到最好轮次下的模型。运行下面命令即可: From 1a0669753e5fe2af475905d084149b5d928c9b6a Mon Sep 17 00:00:00 2001 From: Luo Tao Date: Mon, 19 Dec 2016 13:11:48 +0800 Subject: [PATCH 238/265] travis for check broken links --- .travis.yml | 2 +- paddle/scripts/travis/docs.sh | 17 +++++++++++++++-- 2 files changed, 16 insertions(+), 3 deletions(-) diff --git a/.travis.yml b/.travis.yml index 5b14f8e61e..047ca6ffe7 100644 --- a/.travis.yml +++ b/.travis.yml @@ -56,7 +56,7 @@ before_install: - if [[ "$TRAVIS_OS_NAME" == "linux" ]]; then sudo paddle/scripts/travis/before_install.linux.sh; fi - if [[ "$TRAVIS_OS_NAME" == "osx" ]]; then paddle/scripts/travis/before_install.osx.sh; fi - if [[ "$JOB" == "PRE_COMMIT" ]]; then sudo ln -s /usr/bin/clang-format-3.8 /usr/bin/clang-format; fi - - pip install wheel protobuf sphinx recommonmark virtualenv numpy sphinx_rtd_theme pre-commit + - pip install wheel protobuf sphinx recommonmark virtualenv numpy sphinx_rtd_theme pre-commit requests==2.9.2 LinkChecker script: - paddle/scripts/travis/main.sh notifications: diff --git a/paddle/scripts/travis/docs.sh b/paddle/scripts/travis/docs.sh index 0bbb76a8a3..4ab1746b5a 100755 --- a/paddle/scripts/travis/docs.sh +++ b/paddle/scripts/travis/docs.sh @@ -7,6 +7,19 @@ source ./common.sh cmake .. -DCMAKE_BUILD_TYPE=Debug -DWITH_GPU=OFF -DWITH_DOC=ON make paddle_docs paddle_docs_cn +# check websites for broken links +set +e +linkchecker doc/cn/html/index.html > doc_cn.out +linkchecker doc/en/html/index.html > doc_en.out +for i in doc_cn.out doc_en.out; do + echo $i + grep " 0 errors found" $i + if [ $? -ne 0 ]; then + cat $i + exit 1 + fi +done + # Parse Github URL REPO=`git config remote.origin.url` SSH_REPO=${REPO/https:\/\/github.com\//git@github.com:} @@ -35,8 +48,8 @@ git checkout $TARGET_BRANCH || git checkout --orphan $TARGET_BRANCH # remove old docs. mv new docs. rm -rf doc doc_cn -mv ../doc_cn/html doc_cn -mv ../doc/html doc +mv ../doc/cn/html doc_cn +mv ../doc/en/html doc # Check is there anything changed. 
set +e From 18ebeec2accb31381e80554e0a0f60931cf0ad0f Mon Sep 17 00:00:00 2001 From: chenchaoxiu Date: Mon, 19 Dec 2016 16:26:15 +0800 Subject: [PATCH 239/265] Added support for cudnn v6 and cuda 8.0 --- paddle/cuda/src/hl_cuda_cudnn.cc | 44 ++++++++++++++++++++++++++++++-- 1 file changed, 42 insertions(+), 2 deletions(-) diff --git a/paddle/cuda/src/hl_cuda_cudnn.cc b/paddle/cuda/src/hl_cuda_cudnn.cc index 8cddf10d40..c0c8b0e60d 100644 --- a/paddle/cuda/src/hl_cuda_cudnn.cc +++ b/paddle/cuda/src/hl_cuda_cudnn.cc @@ -175,11 +175,15 @@ void hl_cudnn_init(cudnnHandle_t* cudnn_handle, cudaStream_t stream) { << "PaddlePaddle Requirement: " << "(header v[2-3] with libcudnn v[2-3]) Or " << "(header v4 with libcudnn v4) Or " - << "(header v5 with libcudnn v5)."; + << "(header v5 with libcudnn v5) Or" + << "(header v6 with libcudnn v6)."; - CHECK(!(CUDNN_VERSION >= 5000 && CUDA_VERSION < 7050)) + CHECK(!(CUDNN_VERSION < 6000 && CUDNN_VERSION >= 5000 && CUDA_VERSION < 7050)) << "cudnn v5 requires cuda version >= 7.5"; + CHECK(!(CUDNN_VERSION >= 6000 && CUDA_VERSION < 8000)) + << "cudnn v6 requires cuda version >= 8.0"; + CHECK_CUDNN(dynload::cudnnCreate(cudnn_handle)); CHECK_CUDNN(dynload::cudnnSetStream(*cudnn_handle, stream)); @@ -610,6 +614,23 @@ void hl_create_convolution_descriptor(hl_convolution_descriptor* conv, CHECK_CUDNN(dynload::cudnnCreateConvolutionDescriptor(&hl_conv->desc)); cudnnConvolutionMode_t mode = CUDNN_CROSS_CORRELATION; + +#if CUDNN_VERSION >= 6000 +#ifndef PADDLE_TYPE_DOUBLE + cudnnDataType_t data_type = CUDNN_DATA_FLOAT; +#else + cudnnDataType_t data_type = CUDNN_DATA_DOUBLE; +#endif + CHECK_CUDNN(dynload::cudnnSetConvolution2dDescriptor(hl_conv->desc, + padding_height, + padding_width, + stride_height, + stride_width, + 1, + 1, + mode, + data_type)); +#else CHECK_CUDNN(dynload::cudnnSetConvolution2dDescriptor(hl_conv->desc, padding_height, padding_width, @@ -618,6 +639,7 @@ void hl_create_convolution_descriptor(hl_convolution_descriptor* conv, 1, 1, mode)); +#endif hl_conv->input_image = image; hl_conv->filter = filter; @@ -645,6 +667,23 @@ void hl_reset_convolution_descriptor(hl_convolution_descriptor conv, cudnnConvolutionDescriptor_t conv_desc = GET_CONVOLUTION_DESCRIPTOR(conv); cudnnConvolutionMode_t mode = CUDNN_CROSS_CORRELATION; + +#if CUDNN_VERSION >= 6000 +#ifndef PADDLE_TYPE_DOUBLE + cudnnDataType_t data_type = CUDNN_DATA_FLOAT; +#else + cudnnDataType_t data_type = CUDNN_DATA_DOUBLE; +#endif + CHECK_CUDNN(dynload::cudnnSetConvolution2dDescriptor(conv_desc, + padding_height, + padding_width, + stride_height, + stride_width, + 1, + 1, + mode, + data_type)); +#else CHECK_CUDNN(dynload::cudnnSetConvolution2dDescriptor(conv_desc, padding_height, padding_width, @@ -653,6 +692,7 @@ void hl_reset_convolution_descriptor(hl_convolution_descriptor conv, 1, 1, mode)); +#endif cudnn_convolution_descriptor hl_conv = (cudnn_convolution_descriptor)conv; hl_conv->input_image = image; From 706c572424b6f273fd948d60675c25c378e7021a Mon Sep 17 00:00:00 2001 From: xutianbing Date: Fri, 16 Dec 2016 15:14:02 -0800 Subject: [PATCH 240/265] Matrix API refactor, when passing parameters, convert shared_ptr (MatrixPtr) to reference or raw matrix (Matrix & or Matrix *) contextProjectionForward contextProjectionBackward contextProjectionBackwardData contextProjectionBackwardWeight classificationError The mul functions would be updated later. 
--- paddle/gserver/evaluators/Evaluator.cpp | 2 +- paddle/gserver/layers/ContextProjection.cpp | 12 +- paddle/math/Matrix.cpp | 171 ++++++++------------ paddle/math/Matrix.h | 34 ++-- paddle/math/tests/test_matrixCompare.cpp | 20 +-- 5 files changed, 103 insertions(+), 136 deletions(-) diff --git a/paddle/gserver/evaluators/Evaluator.cpp b/paddle/gserver/evaluators/Evaluator.cpp index 2f99281911..ae7508e2bb 100644 --- a/paddle/gserver/evaluators/Evaluator.cpp +++ b/paddle/gserver/evaluators/Evaluator.cpp @@ -78,7 +78,7 @@ public: useGpu(arguments[0].deviceId)); errorMat->zeroMem(); if (label != nullptr) { - errorMat->classificationError(output, label); + errorMat->classificationError(*output, *label); } else if (dynamic_cast(multiBinaryLabel.get()) || dynamic_cast(multiBinaryLabel.get())) { errorMat->classificationErrorMulti( diff --git a/paddle/gserver/layers/ContextProjection.cpp b/paddle/gserver/layers/ContextProjection.cpp index 7ac56e3a2a..51c0ae5cc9 100644 --- a/paddle/gserver/layers/ContextProjection.cpp +++ b/paddle/gserver/layers/ContextProjection.cpp @@ -90,8 +90,8 @@ void ContextProjection::forward() { REGISTER_TIMER_INFO("ContextProjectionForward", getName().c_str()); bool isPadding = config_.trainable_padding(); out_->value->contextProjectionForward( - in_->value, - state_ ? state_ : isPadding ? weight_->getW() : nullptr, + *(in_->value), + state_ ? state_.get() : isPadding ? weight_->getW().get() : nullptr, *startPositions, config_.context_length(), config_.context_start(), @@ -128,8 +128,8 @@ void ContextProjection::backward(const UpdateCallback& callback) { bool isPadding = config_.trainable_padding(); if (!out_->grad->useGpu()) { out_->grad->contextProjectionBackward( - in_->grad, - isPadding ? weight_->getWGrad() : nullptr, + in_->grad.get(), + isPadding ? 
weight_->getWGrad().get() : nullptr, *startPositions, config_.context_length(), config_.context_start(), @@ -137,7 +137,7 @@ void ContextProjection::backward(const UpdateCallback& callback) { isPadding); } else { if (in_->grad) { - out_->grad->contextProjectionBackwardData(in_->grad, + out_->grad->contextProjectionBackwardData(*(in_->grad), *startPositions, config_.context_length(), config_.context_start()); @@ -145,7 +145,7 @@ void ContextProjection::backward(const UpdateCallback& callback) { if (isPadding && weight_->getWGrad()) { out_->grad->contextProjectionBackwardWeight( - weight_->getWGrad(), + *(weight_->getWGrad()), *startPositions, config_.context_length(), config_.context_start(), diff --git a/paddle/math/Matrix.cpp b/paddle/math/Matrix.cpp index c69e074a76..3b3c1d7d48 100644 --- a/paddle/math/Matrix.cpp +++ b/paddle/math/Matrix.cpp @@ -766,20 +766,19 @@ void GpuMatrix::maxoutBackward(Matrix& a, } /*calulate the error of classification */ -void GpuMatrix::classificationError(MatrixPtr output, IVectorPtr label) { - GpuMatrixPtr output_ptr = std::dynamic_pointer_cast(output); - GpuIVectorPtr label_ptr = std::dynamic_pointer_cast(label); - +void GpuMatrix::classificationError(Matrix& output, IVector& label) { + auto output_ptr = dynamic_cast(&output); + auto label_ptr = dynamic_cast(&label); CHECK(output_ptr && label_ptr) << "Invalid argument pointer"; CHECK(height_ == output_ptr->height_ && width_ == 1) << "Matrix dimensions are not equal"; - real* output_d = output_ptr->data_; - real* recResult_d = data_; - int* label_d = label_ptr->getData(); - hl_matrix_classification_error( - output_d, label_d, recResult_d, height_, output_ptr->width_); + hl_matrix_classification_error((real*)output_ptr->data_, + (int*)label_ptr->getData(), + data_, + height_, + output_ptr->width_); } /* copy -log(output[i * width + label]) to this->data[i] */ @@ -1370,86 +1369,62 @@ void GpuMatrix::maxSequenceBackward(Matrix& outputGrad, hl_max_sequence_backward(outGrad, maxIndex, inputGrad, numSequences, dim); } -void GpuMatrix::contextProjectionForward(MatrixPtr input, - MatrixPtr weight, +void GpuMatrix::contextProjectionForward(Matrix& input, + Matrix* weight, const IVector& sequence, int contextLength, int contextStart, size_t beginPad, bool isPadding) { - CHECK(dynamic_cast(input.get())); + CHECK(dynamic_cast(&input)); CHECK(dynamic_cast(&sequence)); - if (weight) CHECK(dynamic_cast(weight.get())); - - size_t numSequences = sequence.getSize() - 1; - int64_t inputDim = input->getWidth(); - int64_t dim = getWidth(); - CHECK_EQ(dim, inputDim * contextLength); - - real* outData = getData(); - real* inputData = input->getData(); - const int* starts = sequence.getData(); + if (weight) CHECK(dynamic_cast(weight)); + CHECK_EQ(getWidth(), input.getWidth() * contextLength); - hl_context_projection_forward(inputData, - starts, + hl_context_projection_forward(input.getData(), + sequence.getData(), isPadding ? 
weight->getData() : NULL, - outData, - numSequences, - inputDim, + getData(), + sequence.getSize() - 1, + input.getWidth(), contextLength, contextStart, beginPad, isPadding); } -void GpuMatrix::contextProjectionBackwardData(MatrixPtr inputGrad, +void GpuMatrix::contextProjectionBackwardData(Matrix& inputGrad, const IVector& sequence, int contextLength, int contextStart) { - CHECK(dynamic_cast(inputGrad.get())); + CHECK(dynamic_cast(&inputGrad)); CHECK(dynamic_cast(&sequence)); + CHECK_EQ(getWidth(), inputGrad.getWidth() * contextLength); - size_t numSequences = sequence.getSize() - 1; - int64_t inputDim = inputGrad->getWidth(); - int64_t dim = getWidth(); - CHECK_EQ(dim, inputDim * contextLength); - - real* outGrad = getData(); - real* inGrad = inputGrad->getData(); - const int* starts = sequence.getData(); - - hl_context_projection_backward_data(outGrad, - starts, - inGrad, - numSequences, - inputDim, + hl_context_projection_backward_data(getData(), + sequence.getData(), + inputGrad.getData(), + sequence.getSize() - 1, + inputGrad.getWidth(), contextLength, contextStart); } -void GpuMatrix::contextProjectionBackwardWeight(MatrixPtr weightGrad, +void GpuMatrix::contextProjectionBackwardWeight(Matrix& weightGrad, const IVector& sequence, int contextLength, int contextStart, int totalPad, size_t beginPad) { - CHECK(dynamic_cast(weightGrad.get())); + CHECK(dynamic_cast(&weightGrad)); CHECK(dynamic_cast(&sequence)); + CHECK_EQ(getWidth(), weightGrad.getWidth() * contextLength); - size_t numSequences = sequence.getSize() - 1; - int64_t weightDim = weightGrad->getWidth(); - int64_t dim = getWidth(); - CHECK_EQ(dim, weightDim * contextLength); - - real* outGrad = getData(); - real* wtGrad = weightGrad->getData(); - const int* starts = sequence.getData(); - - hl_context_projection_backward_weight(outGrad, - starts, - wtGrad, - numSequences, - weightDim, + hl_context_projection_backward_weight(getData(), + sequence.getData(), + weightGrad.getData(), + sequence.getSize() - 1, + weightGrad.getWidth(), totalPad, contextLength, contextStart, @@ -2371,23 +2346,21 @@ void CpuMatrix::maxSequenceBackward(Matrix& outputGrad, } } -void CpuMatrix::contextProjectionForward(MatrixPtr input, - MatrixPtr weight, +void CpuMatrix::contextProjectionForward(Matrix& input, + Matrix* weight, const IVector& sequence, int contextLength, int contextStart, size_t beginPad, bool isPadding) { - CHECK(dynamic_cast(input.get())); - CHECK(dynamic_cast(&sequence)); - if (weight) CHECK(dynamic_cast(weight.get())); - - size_t numSequences = sequence.getSize() - 1; - int64_t inputDim = input->getWidth(); - int64_t dim = getWidth(); - CHECK_EQ(dim, inputDim * contextLength); - const int* starts = sequence.getData(); - + auto input_ptr = dynamic_cast(&input); + auto seq_ptr = dynamic_cast(&sequence); + CHECK(input_ptr && seq_ptr); + if (weight) CHECK(dynamic_cast(weight)); + CHECK_EQ(getWidth(), input_ptr->getWidth() * contextLength); + + const int* starts = seq_ptr->getData(); + size_t numSequences = seq_ptr->getSize() - 1; for (size_t i = 0; i < numSequences; ++i) { for (int j = 0; j < contextLength; ++j) { int begin = starts[i] + contextStart + j; @@ -2400,7 +2373,7 @@ void CpuMatrix::contextProjectionForward(MatrixPtr input, MatrixPtr mat = this->subMatrix(starts[i], padSize); if (isPadding) { MatrixPtr sub = weight->subMatrix(j, padSize); - mat->addAtOffset(*sub, j * inputDim); + mat->addAtOffset(*sub, j * input_ptr->getWidth()); } dstBegin = starts[i] + padSize; begin = starts[i]; @@ -2412,41 +2385,36 @@ void 
CpuMatrix::contextProjectionForward(MatrixPtr input, if (isPadding) { MatrixPtr sub = weight->subMatrix(beginPad + contextStart + j - padSize, padSize); - mat->addAtOffset(*sub, j * inputDim); + mat->addAtOffset(*sub, j * input_ptr->getWidth()); } dstEnd = starts[i + 1] - padSize; end = starts[i + 1]; } if (end <= begin) continue; - MatrixPtr src = input->subMatrix(begin, end - begin); + MatrixPtr src = input_ptr->subMatrix(begin, end - begin); MatrixPtr dst = this->subMatrix(dstBegin, dstEnd - dstBegin); - dst->addAtOffset(*src, j * inputDim); + dst->addAtOffset(*src, j * input_ptr->getWidth()); } } } -void CpuMatrix::contextProjectionBackward(MatrixPtr inputGrad, - MatrixPtr weightGrad, +void CpuMatrix::contextProjectionBackward(Matrix* inputGrad, + Matrix* weightGrad, const IVector& sequence, int contextLength, int contextStart, size_t beginPad, bool isPadding) { - if (inputGrad) CHECK(dynamic_cast(inputGrad.get())); - if (weightGrad) CHECK(dynamic_cast(weightGrad.get())); + if (inputGrad) CHECK(dynamic_cast(inputGrad)); + if (weightGrad) CHECK(dynamic_cast(weightGrad)); CHECK(dynamic_cast(&sequence)); - int64_t inputDim = 0; - int64_t dim = getWidth(); - size_t numSequences = sequence.getSize() - 1; - const int* starts = sequence.getData(); - if (inputGrad) { - inputDim = inputGrad->getWidth(); - } else { - inputDim = weightGrad->getWidth(); - } - CHECK_EQ(dim, inputDim * contextLength); + int64_t inputDim = inputGrad ? inputGrad->getWidth() + : weightGrad ? weightGrad->getWidth() : 0; + CHECK_EQ(getWidth(), inputDim * contextLength); + const int* starts = sequence.getData(); + size_t numSequences = sequence.getSize() - 1; for (size_t i = 0; i < numSequences; ++i) { for (int j = 0; j < contextLength; ++j) { int begin = starts[i] + contextStart + j; @@ -3544,21 +3512,20 @@ void CpuMatrix::rowNormalizeL1(Matrix& out) { } /* calulate classification error */ -void CpuMatrix::classificationError(MatrixPtr output, IVectorPtr label) { - CHECK(dynamic_cast(output.get())); - CHECK(dynamic_cast(label.get())); +void CpuMatrix::classificationError(Matrix& output, IVector& label) { + CHECK(dynamic_cast(&output)); + CHECK(dynamic_cast(&label)); - size_t numSamples = getHeight(); - size_t dim = output->getWidth(); - CHECK_EQ(label->getSize(), numSamples); - CHECK_EQ(output->getHeight(), numSamples); CHECK_EQ(getWidth(), (size_t)1); + size_t numSamples = getHeight(); + CHECK_EQ(label.getSize(), numSamples); + CHECK_EQ(output.getHeight(), numSamples); - real* out = output->getData(); - real* result = getData(); - int* lbl = label->getData(); - real maxData; - int maxIndex; + size_t dim = output.getWidth(); + real* out = output.getData(); + int* lbl = label.getData(); + real maxData = 0.0; + int maxIndex = -1; for (size_t i = 0; i < numSamples; ++i) { CHECK_GE(lbl[i], 0); CHECK_LT((size_t)lbl[i], dim); @@ -3570,7 +3537,7 @@ void CpuMatrix::classificationError(MatrixPtr output, IVectorPtr label) { maxData = out[i * dim + j]; } } - result[i] = (maxIndex != lbl[i]); + getData()[i] = (maxIndex != lbl[i]); } } diff --git a/paddle/math/Matrix.h b/paddle/math/Matrix.h index 1cfb90a9db..b8c7adf948 100644 --- a/paddle/math/Matrix.h +++ b/paddle/math/Matrix.h @@ -835,7 +835,7 @@ public: * * output[i] = 0 if row i is correct. 
*/ - virtual void classificationError(MatrixPtr output, IVectorPtr label) { + virtual void classificationError(Matrix& output, IVector& label) { LOG(FATAL) << "Not implemented"; } @@ -997,8 +997,8 @@ public: LOG(FATAL) << "Not implemeted"; } - virtual void contextProjectionForward(MatrixPtr input, - MatrixPtr weight, + virtual void contextProjectionForward(Matrix& input, + Matrix* weight, const IVector& sequence, int contextLength, int contextStart, @@ -1007,8 +1007,8 @@ public: LOG(FATAL) << "Not implemeted"; } - virtual void contextProjectionBackward(MatrixPtr inputGrad, - MatrixPtr weightGrad, + virtual void contextProjectionBackward(Matrix* inputGrad, + Matrix* weightGrad, const IVector& sequence, int contextLength, int contextStart, @@ -1017,14 +1017,14 @@ public: LOG(FATAL) << "Not implemeted"; } - virtual void contextProjectionBackwardData(MatrixPtr inputGrad, + virtual void contextProjectionBackwardData(Matrix& inputGrad, const IVector& sequence, int contextLength, int contextStart) { LOG(FATAL) << "Not implemeted"; } - virtual void contextProjectionBackwardWeight(MatrixPtr weightGrad, + virtual void contextProjectionBackwardWeight(Matrix& weightGrad, const IVector& sequence, int contextLength, int contextStart, @@ -1373,7 +1373,7 @@ public: void check(std::ostream& os, Matrix& refMat, bool printDiff = true); void randomizeUniform(); - void classificationError(MatrixPtr output, IVectorPtr label); + void classificationError(Matrix& output, IVector& label); void convExpand(Matrix& feature, int feaImgHeight, @@ -1487,20 +1487,20 @@ public: const IVector& sequence, IVector& index); - void contextProjectionForward(MatrixPtr input, - MatrixPtr weight, + void contextProjectionForward(Matrix& input, + Matrix* weight, const IVector& sequence, int contextLength, int contextStart, size_t beginPad, bool isPadding); - void contextProjectionBackwardData(MatrixPtr inputGrad, + void contextProjectionBackwardData(Matrix& inputGrad, const IVector& sequence, int contextLength, int contextStart); - void contextProjectionBackwardWeight(MatrixPtr weightGrad, + void contextProjectionBackwardWeight(Matrix& weightGrad, const IVector& sequence, int contextLength, int contextStart, @@ -1713,16 +1713,16 @@ public: const IVector& sequence, IVector& index); - void contextProjectionForward(MatrixPtr input, - MatrixPtr weight, + void contextProjectionForward(Matrix& input, + Matrix* weight, const IVector& sequence, int contextLength, int contextStart, size_t beginPad, bool isPadding); - void contextProjectionBackward(MatrixPtr inputGrad, - MatrixPtr weightGrad, + void contextProjectionBackward(Matrix* inputGrad, + Matrix* weightGrad, const IVector& sequence, int contextLength, int contextStart, @@ -1881,7 +1881,7 @@ public: void randomizeUniform(); - void classificationError(MatrixPtr output, IVectorPtr label); + void classificationError(Matrix& output, IVector& label); void addByBitCode(size_t numClasses, const IVector& codes, const Matrix& vec); diff --git a/paddle/math/tests/test_matrixCompare.cpp b/paddle/math/tests/test_matrixCompare.cpp index 62de5b25e4..10289940a4 100644 --- a/paddle/math/tests/test_matrixCompare.cpp +++ b/paddle/math/tests/test_matrixCompare.cpp @@ -65,16 +65,16 @@ void testMatrixProjectionForward(int contextStart, // calculate int beginPad = std::max(0, -contextStart); - cpuOutput->contextProjectionForward(cpuInput, - cpuWeight, + cpuOutput->contextProjectionForward(*cpuInput, + cpuWeight.get(), *cpuSequence, contextLength, contextStart, beginPad, padding); - 
gpuOutput->contextProjectionForward(gpuInput, - gpuWeight, + gpuOutput->contextProjectionForward(*gpuInput, + gpuWeight.get(), *gpuSequence, contextLength, contextStart, @@ -120,17 +120,17 @@ void testMatrixProjectionBackward(int contextStart, // calculate int beginPad = std::max(0, -contextStart); - cpuOutputGrad->contextProjectionBackward(cpuInputGrad, - cpuWeightGrad, + cpuOutputGrad->contextProjectionBackward(cpuInputGrad.get(), + cpuWeightGrad.get(), *cpuSequence, contextLength, contextStart, beginPad, padding); gpuOutputGrad->contextProjectionBackwardData( - gpuInputGrad, *gpuSequence, contextLength, contextStart); + *gpuInputGrad, *gpuSequence, contextLength, contextStart); if (padding) { - gpuOutputGrad->contextProjectionBackwardWeight(gpuWeightGrad, + gpuOutputGrad->contextProjectionBackwardWeight(*gpuWeightGrad, *gpuSequence, contextLength, contextStart, @@ -939,8 +939,8 @@ void testClassificationError(int numSamples, int dim) { gpuOutput->copyFrom(*cpuOutput); gpuLabel->copyFrom(*cpuLabel); - cpuError->classificationError(cpuOutput, cpuLabel); - gpuError->classificationError(gpuOutput, gpuLabel); + cpuError->classificationError(*cpuOutput, *cpuLabel); + gpuError->classificationError(*gpuOutput, *gpuLabel); TensorCheckEqual(*cpuError, *gpuError); } From 4fbf94993b0699bb06c5347612a7b97d692a2625 Mon Sep 17 00:00:00 2001 From: xutianbing Date: Mon, 19 Dec 2016 17:21:06 -0800 Subject: [PATCH 241/265] Refactor MUL functions, pass object reference instead of shared_ptr. --- .../gserver/layers/ConvexCombinationLayer.cpp | 6 +- paddle/gserver/layers/ExpandConvBaseLayer.cpp | 6 +- .../gserver/layers/FullMatrixProjection.cpp | 7 ++- paddle/gserver/layers/FullyConnectedLayer.cpp | 8 +-- paddle/gserver/layers/LinearChainCRF.cpp | 2 +- paddle/gserver/layers/LstmLayer.cpp | 26 ++++----- paddle/gserver/layers/MDLstmLayer.cpp | 8 +-- paddle/gserver/layers/OuterProdLayer.cpp | 6 +- paddle/gserver/layers/RecurrentLayer.cpp | 32 +++++------ .../layers/SelectiveFullyConnectedLayer.cpp | 10 ++-- paddle/gserver/layers/TensorLayer.cpp | 8 +-- .../layers/TransposedFullMatrixProjection.cpp | 7 ++- paddle/math/CpuSparseMatrix.cpp | 15 ++--- paddle/math/CpuSparseMatrix.h | 2 +- paddle/math/Matrix.cpp | 49 +++++++---------- paddle/math/Matrix.h | 14 ++--- paddle/math/SparseMatrix.cpp | 55 +++++++++---------- paddle/math/SparseMatrix.h | 7 +-- paddle/math/tests/test_SparseMatrix.cpp | 14 ++--- paddle/math/tests/test_matrixCompare.cpp | 12 ++-- .../math/tests/test_sparseMatrixCompare.cpp | 4 +- 21 files changed, 144 insertions(+), 154 deletions(-) diff --git a/paddle/gserver/layers/ConvexCombinationLayer.cpp b/paddle/gserver/layers/ConvexCombinationLayer.cpp index 3f4d77a2fe..ed57f2af3c 100644 --- a/paddle/gserver/layers/ConvexCombinationLayer.cpp +++ b/paddle/gserver/layers/ConvexCombinationLayer.cpp @@ -113,7 +113,7 @@ void ConvexCombinationLayer::forward(PassType passType) { tmpRow0->setData(inV0->getData() + i * weightDim); tmpRow1->setData(outV->getData() + i * dataDim); - tmpRow1->mul(tmpRow0, tmpMtx0, 1, 0); + tmpRow1->mul(*tmpRow0, *tmpMtx0, 1, 0); } } @@ -136,7 +136,7 @@ void ConvexCombinationLayer::backward(const UpdateCallback& callback) { tmpRow1->setData(outG->getData() + i * dataDim); tmpMtx0->setData(inV1->getData() + i * weightDim * dataDim); - tmpRow0->mul(tmpRow1, tmpMtx0->getTranspose(), 1, 1); + tmpRow0->mul(*tmpRow1, *(tmpMtx0->getTranspose()), 1, 1); } } @@ -146,7 +146,7 @@ void ConvexCombinationLayer::backward(const UpdateCallback& callback) { tmpRow1->setData(outG->getData() + i * 
dataDim); tmpMtx0->setData(inG1->getData() + i * weightDim * dataDim); - tmpMtx0->mul(tmpRow0->getTranspose(), tmpRow1, 1, 1); + tmpMtx0->mul(*(tmpRow0->getTranspose()), *tmpRow1, 1, 1); } } } diff --git a/paddle/gserver/layers/ExpandConvBaseLayer.cpp b/paddle/gserver/layers/ExpandConvBaseLayer.cpp index 25948747fe..9ddccc2027 100644 --- a/paddle/gserver/layers/ExpandConvBaseLayer.cpp +++ b/paddle/gserver/layers/ExpandConvBaseLayer.cpp @@ -150,7 +150,7 @@ void ExpandConvBaseLayer::expandFwdOnce(MatrixPtr image, Matrix::create(wgtData, subM, subK, false, useGpu_); // mark transpose MatrixPtr B = Matrix::create(expInData, subK, subN, false, useGpu_); MatrixPtr C = Matrix::create(outData, subM, subN, false, useGpu_); - C->mul(A, B, 1, 1); + C->mul(*A, *B, 1, 1); A->clear(); B->clear(); @@ -185,7 +185,7 @@ void ExpandConvBaseLayer::bpropActs(MatrixPtr out, MatrixPtr C = Matrix::create(expandInData, subK, subN, false, useGpu_); MatrixPtr B = Matrix::create(localGradData, subM, subN, false, useGpu_); MatrixPtr A = Matrix::create(wgtData, subM, subK, true, useGpu_); - C->mul(A, B); // mul + C->mul(*A, *B); // mul // clear the temporary matrix A->clear(); @@ -252,7 +252,7 @@ void ExpandConvBaseLayer::bpropWeights(MatrixPtr image, MatrixPtr A = Matrix::create(expandInData, subK, subN, true, useGpu_); MatrixPtr B = Matrix::create(gradData, subM, subN, false, useGpu_); MatrixPtr C = Matrix::create(wGradData, subM, subK, false, useGpu_); - C->mul(B, A, 1, 1); + C->mul(*B, *A, 1, 1); A->clear(); B->clear(); diff --git a/paddle/gserver/layers/FullMatrixProjection.cpp b/paddle/gserver/layers/FullMatrixProjection.cpp index 9e72a33a3c..b8b6f403d6 100644 --- a/paddle/gserver/layers/FullMatrixProjection.cpp +++ b/paddle/gserver/layers/FullMatrixProjection.cpp @@ -28,7 +28,7 @@ FullMatrixProjection::FullMatrixProjection(const ProjectionConfig& config, void FullMatrixProjection::forward() { REGISTER_TIMER_INFO("FwMulTimer", getName().c_str()); - out_->value->mul(in_->value, weight_->getW(), 1, 1); + out_->value->mul(*(in_->value), *(weight_->getW()), 1, 1); } void FullMatrixProjection::backward(const UpdateCallback& callback) { @@ -37,7 +37,8 @@ void FullMatrixProjection::backward(const UpdateCallback& callback) { /* Calculate the W-gradient for the current layer */ if (weight_->getWGrad()) { REGISTER_TIMER_INFO("GradMulTimer", getName().c_str()); - weight_->getWGrad()->mul(in_->value->getTranspose(), out_->grad, 1, 1); + weight_->getWGrad()->mul( + *(in_->value->getTranspose()), *(out_->grad), 1, 1); } // If callback does not change value, backward propagation error @@ -47,7 +48,7 @@ void FullMatrixProjection::backward(const UpdateCallback& callback) { /* Calculate the input layers error */ if (in_->grad) { REGISTER_TIMER_INFO("BpMulTimer", getName().c_str()); - in_->grad->mul(out_->grad, weight_->getW()->getTranspose(), 1, 1); + in_->grad->mul(*(out_->grad), *(weight_->getW()->getTranspose()), 1, 1); } hl_set_sync_flag(syncFlag); diff --git a/paddle/gserver/layers/FullyConnectedLayer.cpp b/paddle/gserver/layers/FullyConnectedLayer.cpp index 89afe33c36..d8a667ff8d 100644 --- a/paddle/gserver/layers/FullyConnectedLayer.cpp +++ b/paddle/gserver/layers/FullyConnectedLayer.cpp @@ -84,8 +84,8 @@ void FullyConnectedLayer::forward(PassType passType) { auto input = getInput(i); CHECK(input.value) << "The input of 'fc' layer must be matrix"; REGISTER_TIMER_INFO("FwMulTimer", getName().c_str()); - i == 0 ? 
outV->mul(input.value, weights_[i]->getW(), 1, 0) - : outV->mul(input.value, weights_[i]->getW(), 1, 1); + i == 0 ? outV->mul(*input.value, *weights_[i]->getW(), 1, 0) + : outV->mul(*input.value, *weights_[i]->getW(), 1, 1); } /* add the bias-vector */ @@ -123,7 +123,7 @@ void FullyConnectedLayer::backward(const UpdateCallback& callback) { MatrixPtr oGrad = getOutputGrad(); { REGISTER_TIMER_INFO("GradMulTimer", getName().c_str()); - weights_[i]->getWGrad()->mul(input_T, oGrad, 1, 1); + weights_[i]->getWGrad()->mul(*input_T, *oGrad, 1, 1); } } @@ -136,7 +136,7 @@ void FullyConnectedLayer::backward(const UpdateCallback& callback) { if (NULL != preGrad) { MatrixPtr weights_T = weights_[i]->getW()->getTranspose(); REGISTER_TIMER_INFO("BpMulTimer", getName().c_str()); - preGrad->mul(getOutputGrad(), weights_T, 1, 1); + preGrad->mul(*getOutputGrad(), *weights_T, 1, 1); } hl_set_sync_flag(syncFlag); diff --git a/paddle/gserver/layers/LinearChainCRF.cpp b/paddle/gserver/layers/LinearChainCRF.cpp index af550c7a01..b7f748f3bb 100644 --- a/paddle/gserver/layers/LinearChainCRF.cpp +++ b/paddle/gserver/layers/LinearChainCRF.cpp @@ -59,7 +59,7 @@ real LinearChainCRF::forward(real* x, int* s, int length) { matX->rowMax(*maxX_); expX_->assign(*matX); // subtract max to avoid overflow or underflow - expX_->mul(maxX_, ones_, (real)-1, (real)1); + expX_->mul(*maxX_, *ones_, (real)-1, (real)1); expX_->exp2(); real* a = a_->getData(); diff --git a/paddle/gserver/layers/LstmLayer.cpp b/paddle/gserver/layers/LstmLayer.cpp index 2543d1b49a..01cc5fec8b 100644 --- a/paddle/gserver/layers/LstmLayer.cpp +++ b/paddle/gserver/layers/LstmLayer.cpp @@ -316,7 +316,7 @@ void LstmLayer::forwardSequence(int batchSize, } if (prevOutput_) { frameGate->setData(lstmValue.gateValue); - frameGate->mul(prevOutput_, weight_->getW(), 1, 1); + frameGate->mul(*prevOutput_, *weight_->getW(), 1, 1); } } AsyncGpuBlock asyncGpuBlock; @@ -338,7 +338,7 @@ void LstmLayer::forwardSequence(int batchSize, frameOutput->setData(lstmValue.outputValue); nextFrame(reversed_, getSize()); frameGate->setData(lstmValue.gateValue); - frameGate->mul(frameOutput, weight_->getW(), 1, 1); + frameGate->mul(*frameOutput, *weight_->getW(), 1, 1); } } if (n != numSequences - 1) { @@ -348,7 +348,7 @@ void LstmLayer::forwardSequence(int batchSize, if (!reversed_) { if (!prevState_) lstmValue.prevStateValue = nullptr; if (prevOutput_) { - frameGate->mul(frameOutput, weight_->getW(), 1, 1); + frameGate->mul(*frameOutput, *weight_->getW(), 1, 1); } } else { lstmValue.prevStateValue = nullptr; @@ -470,7 +470,7 @@ void LstmLayer::backwardSequence(int batchSize, frameGate->setData(lstmGrad.gateGrad); nextFrame(reversed_, getSize()); frameOutput->setData(lstmGrad.outputGrad); - frameOutput->mul(frameGate, weightT, 1, 1); + frameOutput->mul(*frameGate, *weightT, 1, 1); } else { nextFrame(reversed_, getSize()); } @@ -479,14 +479,14 @@ void LstmLayer::backwardSequence(int batchSize, if (weight_->getWGrad()) { if (!reversed_) { weight_->getWGrad()->mul( - output_.value->subMatrix(start, length - 1)->getTranspose(), - gate_.grad->subMatrix(start + 1, length - 1), + *output_.value->subMatrix(start, length - 1)->getTranspose(), + *gate_.grad->subMatrix(start + 1, length - 1), 1, 1); } else { weight_->getWGrad()->mul( - output_.value->subMatrix(start + 1, length - 1)->getTranspose(), - gate_.grad->subMatrix(start, length - 1), + *output_.value->subMatrix(start + 1, length - 1)->getTranspose(), + *gate_.grad->subMatrix(start, length - 1), 1, 1); } @@ -541,7 +541,7 @@ void 
LstmLayer::forwardBatch(int batchSize, if (n != 0) { MatrixPtr batch1 = batchValue_->getBatchValue(n - 1, batchSize); - gateValue->mul(batch1, weight_->getW(), 1, 1); + gateValue->mul(*batch1, *weight_->getW(), 1, 1); } else if (prevOutput_) { Matrix::resizeOrCreate(prevBatchOutput2_, gateValue->getHeight(), @@ -549,7 +549,7 @@ void LstmLayer::forwardBatch(int batchSize, false, useGpu_); batchValue_->prevOutput2Batch(*prevOutput_, *prevBatchOutput2_); - gateValue->mul(prevBatchOutput2_, weight_->getW(), 1, 1); + gateValue->mul(*prevBatchOutput2_, *weight_->getW(), 1, 1); batchValue_->prevOutput2Batch(*prevState_, *totalState_->subMatrix(0, numSequences)); @@ -672,16 +672,16 @@ void LstmLayer::backwardBatch(int batchSize, if (n != 0) { MatrixPtr tmp = batchGrad_->getBatchValue(n - 1, batchSize); - tmp->mul(gateGrad, weightT, 1, 1); + tmp->mul(*gateGrad, *weightT, 1, 1); } if (n != 0 && weight_->getWGrad()) { /* backward weight */ MatrixPtr outputValue = batchValue_->getBatchValue(n - 1, batchSize); - weight_->getWGrad()->mul(outputValue->getTranspose(), gateGrad, 1, 1); + weight_->getWGrad()->mul(*outputValue->getTranspose(), *gateGrad, 1, 1); } else if (prevOutput_ && weight_->getWGrad()) { weight_->getWGrad()->mul( - prevBatchOutput2_->getTranspose(), gateGrad, 1, 1); + *prevBatchOutput2_->getTranspose(), *gateGrad, 1, 1); } } } diff --git a/paddle/gserver/layers/MDLstmLayer.cpp b/paddle/gserver/layers/MDLstmLayer.cpp index 1243c12889..fb41af5631 100644 --- a/paddle/gserver/layers/MDLstmLayer.cpp +++ b/paddle/gserver/layers/MDLstmLayer.cpp @@ -547,7 +547,7 @@ void MDLstmLayer::forwardOneSequence(int start, CoordIterator& coordIter) { if (coordIter.getPrePos(delays_, i, prePos)) { int preOffset = coordIter.offset(prePos); frameGate_[start + offset].value->mul( - frameOutput_[start + preOffset].value, weight_->getW(), 1.0, 1.0); + *frameOutput_[start + preOffset].value, *weight_->getW(), 1.0, 1.0); } } forwardGate2OutputSequence(start, coordIter); @@ -747,11 +747,11 @@ void MDLstmLayer::backwardOneSequence(int start, CoordIterator& coordIter) { if (coordIter.getPrePos(delays_, i, prePos)) { int preOffset = coordIter.offset(prePos); frameOutput_[start + preOffset].grad->mul( - frameGate_[start + offset].grad, weightT, 1.0, 1.0); + *frameGate_[start + offset].grad, *weightT, 1.0, 1.0); if (weight_->getWGrad()) { weight_->getWGrad()->mul( - frameOutput_[start + preOffset].value->getTranspose(), - frameGate_[start + offset].grad, + *frameOutput_[start + preOffset].value->getTranspose(), + *frameGate_[start + offset].grad, 1.0, 1.0); } diff --git a/paddle/gserver/layers/OuterProdLayer.cpp b/paddle/gserver/layers/OuterProdLayer.cpp index cf9a008318..b606e44365 100644 --- a/paddle/gserver/layers/OuterProdLayer.cpp +++ b/paddle/gserver/layers/OuterProdLayer.cpp @@ -96,7 +96,7 @@ void OuterProdLayer::forward(PassType passType) { tmpRow0->setData(inV0->getData() + i * dim0); tmpRow1->setData(inV1->getData() + i * dim1); - tmpMtx0->mul(tmpRow0->getTranspose(), tmpRow1); + tmpMtx0->mul(*tmpRow0->getTranspose(), *tmpRow1); } } } @@ -121,7 +121,7 @@ void OuterProdLayer::backward(const UpdateCallback& callback) { tmpRow0->setData(inG0->getData() + i * dim0); tmpRow1->setData(inV1->getData() + i * dim1); - tmpRow0->mul(tmpRow1, tmpMtx0->getTranspose(), 1, 1); + tmpRow0->mul(*tmpRow1, *tmpMtx0->getTranspose(), 1, 1); } } @@ -131,7 +131,7 @@ void OuterProdLayer::backward(const UpdateCallback& callback) { tmpRow0->setData(inV0->getData() + i * dim0); tmpRow1->setData(inG1->getData() + i * dim1); - 
tmpRow1->mul(tmpRow0, tmpMtx0, 1, 1); + tmpRow1->mul(*tmpRow0, *tmpMtx0, 1, 1); } } } diff --git a/paddle/gserver/layers/RecurrentLayer.cpp b/paddle/gserver/layers/RecurrentLayer.cpp index 85812c9d66..94b16996a8 100644 --- a/paddle/gserver/layers/RecurrentLayer.cpp +++ b/paddle/gserver/layers/RecurrentLayer.cpp @@ -215,12 +215,12 @@ void RecurrentLayer::forwardSequence(int batchSize, void RecurrentLayer::forwardOneSequence(int start, int length) { if (!reversed_) { if (prevOutput_) { - frameOutput_[start].value->mul(prevOutput_, weight_->getW(), 1, 1); + frameOutput_[start].value->mul(*prevOutput_, *weight_->getW(), 1, 1); } activation_->forward(frameOutput_[start]); for (int i = 1; i < length; ++i) { frameOutput_[start + i].value->mul( - frameOutput_[start + i - 1].value, weight_->getW(), 1, 1); + *frameOutput_[start + i - 1].value, *weight_->getW(), 1, 1); activation_->forward(frameOutput_[start + i]); } if (prevOutput_) { @@ -230,7 +230,7 @@ void RecurrentLayer::forwardOneSequence(int start, int length) { activation_->forward(frameOutput_[start + length - 1]); for (int i = length - 2; i >= 0; --i) { frameOutput_[start + i].value->mul( - frameOutput_[start + i + 1].value, weight_->getW(), 1, 1); + *frameOutput_[start + i + 1].value, *weight_->getW(), 1, 1); activation_->forward(frameOutput_[start + i]); } } @@ -282,13 +282,13 @@ void RecurrentLayer::backwardOneSequence(int start, int length) { for (int i = length - 1; i > 0; --i) { activation_->backward(frameOutput_[start + i]); frameOutput_[start + i - 1].grad->mul( - frameOutput_[start + i].grad, weightT, 1, 1); + *frameOutput_[start + i].grad, *weightT, 1, 1); } activation_->backward(frameOutput_[start]); if (weight_->getWGrad()) { weight_->getWGrad()->mul( - output_.value->subMatrix(start, length - 1)->getTranspose(), - output_.grad->subMatrix(start + 1, length - 1), + *output_.value->subMatrix(start, length - 1)->getTranspose(), + *output_.grad->subMatrix(start + 1, length - 1), 1, 1); } @@ -296,13 +296,13 @@ void RecurrentLayer::backwardOneSequence(int start, int length) { for (int i = 0; i < length - 1; ++i) { activation_->backward(frameOutput_[start + i]); frameOutput_[start + i + 1].grad->mul( - frameOutput_[start + i].grad, weightT, 1, 1); + *frameOutput_[start + i].grad, *weightT, 1, 1); } activation_->backward(frameOutput_[start + length - 1]); if (weight_->getWGrad()) { weight_->getWGrad()->mul( - output_.value->subMatrix(start + 1, length - 1)->getTranspose(), - output_.grad->subMatrix(start, length - 1), + *output_.value->subMatrix(start + 1, length - 1)->getTranspose(), + *output_.grad->subMatrix(start, length - 1), 1, 1); } @@ -329,7 +329,7 @@ void RecurrentLayer::forwardBatch(int batchSize, if (n != 0) { MatrixPtr batch1 = batchValue_->getBatchValue(n - 1, batch2->getHeight()); - batch2->mul(batch1, weight_->getW(), 1, 1); + batch2->mul(*batch1, *weight_->getW(), 1, 1); } Argument arg; arg.value = batch2; @@ -367,14 +367,14 @@ void RecurrentLayer::backwardBatch(int batchSize, if (n != 0) { batch1 = batchGrad_->getBatchValue(n - 1, batch2->getHeight()); - batch1->mul(batch2, weightT, 1, 1); + batch1->mul(*batch2, *weightT, 1, 1); } if (backwardByBatch && weight_->getWGrad()) { if (n != 0) { /* backward weight */ batch1 = batchValue_->getBatchValue(n - 1, batch2->getHeight()); - weight_->getWGrad()->mul(batch1->getTranspose(), batch2, 1, 1); + weight_->getWGrad()->mul(*batch1->getTranspose(), *batch2, 1, 1); } } } @@ -389,14 +389,14 @@ void RecurrentLayer::backwardBatch(int batchSize, int len = starts[seq + 1] - 
starts[seq]; if (!reversed_) { weight_->getWGrad()->mul( - output_.value->subMatrix(starts[seq], len - 1)->getTranspose(), - output_.grad->subMatrix(starts[seq] + 1, len - 1), + *output_.value->subMatrix(starts[seq], len - 1)->getTranspose(), + *output_.grad->subMatrix(starts[seq] + 1, len - 1), 1, 1); } else { weight_->getWGrad()->mul( - output_.value->subMatrix(starts[seq] + 1, len - 1)->getTranspose(), - output_.grad->subMatrix(starts[seq], len - 1), + *output_.value->subMatrix(starts[seq] + 1, len - 1)->getTranspose(), + *output_.grad->subMatrix(starts[seq], len - 1), 1, 1); } diff --git a/paddle/gserver/layers/SelectiveFullyConnectedLayer.cpp b/paddle/gserver/layers/SelectiveFullyConnectedLayer.cpp index 9200a01eee..5eacff6b71 100644 --- a/paddle/gserver/layers/SelectiveFullyConnectedLayer.cpp +++ b/paddle/gserver/layers/SelectiveFullyConnectedLayer.cpp @@ -155,20 +155,20 @@ void SelectiveFullyConnectedLayer::forward(PassType passType) { // manully compute the multiplication of // the input vector and the selected rows. REGISTER_TIMER("selective.plain"); - interOutput_->mul(input, weight->getTranspose(), 1, scaleT); + interOutput_->mul(*input, *weight->getTranspose(), 1, scaleT); } else { // if the indecies is not sparse enough, // use full mul instead REGISTER_TIMER("selective.mul"); if (fullOutput_) { - interOutput_->mul(input, weight->getTranspose(), 1, scaleT); + interOutput_->mul(*input, *weight->getTranspose(), 1, scaleT); } else { Matrix::resizeOrCreate(mmat_, hsize, wsize, /*trans=*/false, /*useGpu=*/useGpu_); - mmat_->mul(input, weight->getTranspose()); + mmat_->mul(*input, *weight->getTranspose()); interOutput_->add3(mmat_); } } @@ -242,14 +242,14 @@ void SelectiveFullyConnectedLayer::backward(const UpdateCallback& callback) { MatrixPtr preGrad = getInputGrad(i); if (preGrad) { REGISTER_TIMER_INFO("BpMulTimer", getName().c_str()); - preGrad->mul(interOutGrad_, weights_[i]->getW(), 1, 1); + preGrad->mul(*interOutGrad_, *weights_[i]->getW(), 1, 1); } MatrixPtr wGrad = weights_[i]->getWGrad(); if (wGrad) { REGISTER_TIMER_INFO("GradMulTimer", getName().c_str()); MatrixPtr input = getInputValue(i); - wGrad->mul(interOutGrad_->getTranspose(), input, 1, 1); + wGrad->mul(*interOutGrad_->getTranspose(), *input, 1, 1); } { diff --git a/paddle/gserver/layers/TensorLayer.cpp b/paddle/gserver/layers/TensorLayer.cpp index 642eb1bdd3..5be88d7c05 100644 --- a/paddle/gserver/layers/TensorLayer.cpp +++ b/paddle/gserver/layers/TensorLayer.cpp @@ -77,7 +77,7 @@ void TensorLayer::forward(PassType passType) { REGISTER_TIMER_INFO("TensorFwMulTimer", getName().c_str()); for (size_t i = 0; i < getSize(); ++i) { MatrixPtr weights = weights_[i]->getW(); - tmpMat->mul(input1, weights, 1, 0); + tmpMat->mul(*input1, *weights, 1, 0); outV->rowDotMul(i, *tmpMat, *input2); } } @@ -112,7 +112,7 @@ void TensorLayer::backward(const UpdateCallback& callback) { if (weights_[i]->getWGrad()) { tmpMat->rowScale(i, *input1, *oGrad); MatrixPtr input1_T = tmpMat->getTranspose(); - weights_[i]->getWGrad()->mul(input1_T, input2, 1, 1); + weights_[i]->getWGrad()->mul(*input1_T, *input2, 1, 1); } } } @@ -130,11 +130,11 @@ void TensorLayer::backward(const UpdateCallback& callback) { if (NULL != preGrad1) { /* (grad * e2) * trans(W) */ tmpMat->rowScale(i, *input2, *oGrad); MatrixPtr weights_T = weights->getTranspose(); - preGrad1->mul(tmpMat, weights_T, 1, 1); + preGrad1->mul(*tmpMat, *weights_T, 1, 1); } if (NULL != preGrad2) { /* (grad * e1) * W */ tmpMat->rowScale(i, *input1, *oGrad); - preGrad2->mul(tmpMat, weights, 1, 
1); + preGrad2->mul(*tmpMat, *weights, 1, 1); } } } diff --git a/paddle/gserver/layers/TransposedFullMatrixProjection.cpp b/paddle/gserver/layers/TransposedFullMatrixProjection.cpp index 3f7ff04882..2a12499e5b 100644 --- a/paddle/gserver/layers/TransposedFullMatrixProjection.cpp +++ b/paddle/gserver/layers/TransposedFullMatrixProjection.cpp @@ -46,7 +46,7 @@ TransposedFullMatrixProjection::TransposedFullMatrixProjection( void TransposedFullMatrixProjection::forward() { REGISTER_TIMER_INFO("FwMulTimer", getName().c_str()); - out_->value->mul(in_->value, weight_->getW()->getTranspose(), 1, 1); + out_->value->mul(*(in_->value), *(weight_->getW()->getTranspose()), 1, 1); } void TransposedFullMatrixProjection::backward(const UpdateCallback& callback) { @@ -55,7 +55,8 @@ void TransposedFullMatrixProjection::backward(const UpdateCallback& callback) { /* Calculate the W-gradient for the current layer */ if (weight_->getWGrad()) { REGISTER_TIMER_INFO("GradMulTimer", getName().c_str()); - weight_->getWGrad()->mul(out_->grad->getTranspose(), in_->value, 1, 1); + weight_->getWGrad()->mul( + *(out_->grad->getTranspose()), *(in_->value), 1, 1); } // If callback does not change value, backprop error asynchronously so that @@ -69,7 +70,7 @@ void TransposedFullMatrixProjection::backward(const UpdateCallback& callback) { /* Calculate the input layers error */ if (in_->grad) { REGISTER_TIMER_INFO("BpMulTimer", getName().c_str()); - in_->grad->mul(out_->grad, weight_->getW(), 1, 1); + in_->grad->mul(*(out_->grad), *(weight_->getW()), 1, 1); } hl_set_sync_flag(syncFlag); diff --git a/paddle/math/CpuSparseMatrix.cpp b/paddle/math/CpuSparseMatrix.cpp index b5d5b6ef61..82a482f701 100644 --- a/paddle/math/CpuSparseMatrix.cpp +++ b/paddle/math/CpuSparseMatrix.cpp @@ -163,15 +163,16 @@ MatrixPtr CpuSparseMatrix::getTranspose() { SparseValueType CpuSparseMatrix::getValueType() { return valueType_; } -void CpuSparseMatrix::mul(MatrixPtr a, MatrixPtr b, real scaleAB, real scaleT) { +void CpuSparseMatrix::mul(const Matrix& a, + const Matrix& b, + real scaleAB, + real scaleT) { CHECK(!isTransposed()) << "Not supported"; + const auto a_ptr = dynamic_cast(&a); + const auto b_ptr = dynamic_cast(&b); - if (dynamic_cast(a.get()) && dynamic_cast(b.get())) { - CpuMatrix::mul(dynamic_cast(a.get()), - dynamic_cast(b.get()), - this, - scaleAB, - scaleT); + if (a_ptr && b_ptr) { + CpuMatrix::mul((CpuMatrix*)a_ptr, (CpuMatrix*)b_ptr, this, scaleAB, scaleT); } else { LOG(FATAL) << "not supported"; } diff --git a/paddle/math/CpuSparseMatrix.h b/paddle/math/CpuSparseMatrix.h index 9676f8864f..d3e8871cb5 100644 --- a/paddle/math/CpuSparseMatrix.h +++ b/paddle/math/CpuSparseMatrix.h @@ -203,7 +203,7 @@ public: /// mem MUST be alloced outside (memAlloc=false) void transpose(MatrixPtr matTrans, bool memAlloc); - void mul(MatrixPtr A, MatrixPtr B, real alpha, real beta); + void mul(const Matrix& A, const Matrix& B, real alpha, real beta); /** * @brief sparseMatrix += denseMatrix diff --git a/paddle/math/Matrix.cpp b/paddle/math/Matrix.cpp index 3b3c1d7d48..0193f2f997 100644 --- a/paddle/math/Matrix.cpp +++ b/paddle/math/Matrix.cpp @@ -582,18 +582,16 @@ void GpuMatrix::mul(const GpuMatrix& a, } /* this = a*b */ -void GpuMatrix::mul(const MatrixPtr a, const MatrixPtr b) { - mul(a, b, 1.0, 0.0); -} +void GpuMatrix::mul(const Matrix& a, const Matrix& b) { mul(a, b, 1.0, 0.0); } -void GpuMatrix::mul(const MatrixPtr a, - const MatrixPtr b, +void GpuMatrix::mul(const Matrix& a, + const Matrix& b, real scaleAB, real scaleT) { - GpuMatrixPtr a_ptr = 
std::dynamic_pointer_cast(a); - GpuMatrixPtr b_ptr = std::dynamic_pointer_cast(b); - GpuSparseMatrixPtr a_ptr_s = std::dynamic_pointer_cast(a); - GpuSparseMatrixPtr b_ptr_s = std::dynamic_pointer_cast(b); + const auto a_ptr = dynamic_cast(&a); + const auto b_ptr = dynamic_cast(&b); + const auto a_ptr_s = dynamic_cast(&a); + const auto b_ptr_s = dynamic_cast(&b); if (a_ptr && b_ptr) { mul(*a_ptr, *b_ptr, scaleAB, scaleT); @@ -2598,29 +2596,22 @@ void CpuMatrix::sequenceAvgForward(Matrix& a, } /* this = scaleAB*(a*b) + scaleT*this*/ -void CpuMatrix::mul(const MatrixPtr a, - const MatrixPtr b, +void CpuMatrix::mul(const Matrix& a, + const Matrix& b, real scaleAB, real scaleT) { CHECK(!isTransposed()) << "Not supported"; + const auto a_ptr = dynamic_cast(&a); + const auto b_ptr = dynamic_cast(&b); + const auto a_ptr_s = dynamic_cast(&a); + const auto b_ptr_s = dynamic_cast(&b); - if (dynamic_cast(a.get()) && dynamic_cast(b.get())) { - mul(dynamic_cast(a.get()), - dynamic_cast(b.get()), - scaleAB, - scaleT); - } else if (dynamic_cast(a.get()) && - dynamic_cast(b.get())) { - mul(dynamic_cast(a.get()), - dynamic_cast(b.get()), - scaleAB, - scaleT); - } else if (dynamic_cast(a.get()) && - dynamic_cast(b.get())) { - mul(dynamic_cast(a.get()), - dynamic_cast(b.get()), - scaleAB, - scaleT); + if (a_ptr && b_ptr) { + mul((CpuMatrix*)a_ptr, (CpuMatrix*)b_ptr, scaleAB, scaleT); + } else if (a_ptr_s && b_ptr) { + mul((CpuSparseMatrix*)a_ptr_s, (CpuMatrix*)b_ptr, scaleAB, scaleT); + } else if (a_ptr && b_ptr_s) { + mul((CpuMatrix*)a_ptr, (CpuSparseMatrix*)b_ptr_s, scaleAB, scaleT); } else { LOG(FATAL) << "Not supported"; } @@ -3289,7 +3280,7 @@ void CpuMatrix::addColumnVector(const Matrix& b) { } /* this = a*b */ -void CpuMatrix::mul(const MatrixPtr a, const MatrixPtr b) { +void CpuMatrix::mul(const Matrix& a, const Matrix& b) { return mul(a, b, 1.0, 0.0); } diff --git a/paddle/math/Matrix.h b/paddle/math/Matrix.h index b8c7adf948..dfcb0853df 100644 --- a/paddle/math/Matrix.h +++ b/paddle/math/Matrix.h @@ -444,8 +444,8 @@ public: * this = scaleAB*(a*b) + scaleT*this * @endcode */ - virtual void mul(const MatrixPtr a, - const MatrixPtr b, + virtual void mul(const Matrix& a, + const Matrix& b, real scaleAB, real scaleT) { LOG(FATAL) << "Not implemented"; @@ -643,7 +643,7 @@ public: * this = a*b * @endcode */ - virtual void mul(const MatrixPtr a, const MatrixPtr b) { + virtual void mul(const Matrix& a, const Matrix& b) { LOG(FATAL) << "Not implemented"; } @@ -1272,14 +1272,14 @@ public: * this = scaleAB*(a*b) + scaleT*this * @endcode */ - void mul(const MatrixPtr a, const MatrixPtr b, real scaleAB, real scaleT); + void mul(const Matrix& a, const Matrix& b, real scaleAB, real scaleT); /** * @code * this = a*b * @endcode */ - void mul(const MatrixPtr a, const MatrixPtr b); + void mul(const Matrix& a, const Matrix& b); void mul(const GpuMatrix& a, const GpuMatrix& b, real scaleAB, real scaleT); @@ -1784,7 +1784,7 @@ public: void addColumnVector(const Matrix& b); - void mul(const MatrixPtr a, const MatrixPtr b, real scaleAB, real scaleT); + void mul(const Matrix& a, const Matrix& b, real scaleAB, real scaleT); void mul(CpuMatrix* a, CpuMatrix* b, real scaleAB, real scaleT); void mul(CpuMatrix* a, CpuSparseMatrix* b, real scaleAB, real scaleT); @@ -1807,7 +1807,7 @@ public: virtual void mul(CpuSparseMatrix* a, CpuMatrix* b, real scaleAB, real scaleT); - void mul(const MatrixPtr a, const MatrixPtr b); + void mul(const Matrix& a, const Matrix& b); void rightMul(Matrix& b, real scaleAB, real scaleT); void 
rightMul(Matrix& b); diff --git a/paddle/math/SparseMatrix.cpp b/paddle/math/SparseMatrix.cpp index 9154503c21..720a035ecb 100644 --- a/paddle/math/SparseMatrix.cpp +++ b/paddle/math/SparseMatrix.cpp @@ -571,49 +571,48 @@ void GpuSparseMatrix::transpose(MatrixPtr matTrans, bool memAlloc) { hl_stream_synchronize(stream); } -void GpuSparseMatrix::mul(const GpuMatrixPtr a, - const GpuMatrixPtr b, +void GpuSparseMatrix::mul(const GpuMatrix& a, + const GpuMatrix& b, real scaleAB, real scaleT) { - CHECK(a->useGpu_ && b->useGpu_) << "type not match"; + CHECK(a.useGpu_ && b.useGpu_) << "type not match"; CHECK(!trans_) << "trans not supported"; - real* A_d = a->getData(); - real* B_d = b->getData(); + real* A_d = (real*)a.getData(); + real* B_d = (real*)b.getData(); hl_sparse_matrix_s C_d = sMatrix_.get(); - hl_trans_op_t a_trans = a->trans_ ? HPPL_OP_T : HPPL_OP_N; - hl_trans_op_t b_trans = b->trans_ ? HPPL_OP_T : HPPL_OP_N; - - if (!a->trans_ && !b->trans_) { - CHECK(height_ == a->getHeight()); - CHECK(width_ == b->getWidth()); - CHECK(a->getWidth() == b->getHeight()); - } else if (a->trans_ && !b->trans_) { - CHECK(height_ == a->getWidth()); - CHECK(width_ == b->getWidth()); - CHECK(a->getHeight() == b->getHeight()); - } else if (!a->trans_ && b->trans_) { - CHECK(height_ == a->getHeight()); - CHECK(width_ == b->getHeight()); - CHECK(a->getWidth() == b->getWidth()); + hl_trans_op_t a_trans = a.trans_ ? HPPL_OP_T : HPPL_OP_N; + hl_trans_op_t b_trans = b.trans_ ? HPPL_OP_T : HPPL_OP_N; + + if (!a.trans_ && !b.trans_) { + CHECK(height_ == a.getHeight()); + CHECK(width_ == b.getWidth()); + CHECK(a.getWidth() == b.getHeight()); + } else if (a.trans_ && !b.trans_) { + CHECK(height_ == a.getWidth()); + CHECK(width_ == b.getWidth()); + CHECK(a.getHeight() == b.getHeight()); + } else if (!a.trans_ && b.trans_) { + CHECK(height_ == a.getHeight()); + CHECK(width_ == b.getHeight()); + CHECK(a.getWidth() == b.getWidth()); } else { LOG(INFO) << "Not support"; } int dimM = height_; int dimN = width_; - int dimK = !b->trans_ ? b->getHeight() : b->getWidth(); + int dimK = !b.trans_ ? 
b.getHeight() : b.getWidth(); hl_sparse_matrix_mul( A_d, a_trans, B_d, b_trans, C_d, dimM, dimN, dimK, scaleAB, scaleT); } -void GpuSparseMatrix::mul(const MatrixPtr a, - const MatrixPtr b, +void GpuSparseMatrix::mul(const Matrix& a, + const Matrix& b, real scaleAB, real scaleT) { - if (std::dynamic_pointer_cast(a) && - std::dynamic_pointer_cast(b)) { - GpuMatrixPtr a_ptr = std::dynamic_pointer_cast(a); - GpuMatrixPtr b_ptr = std::dynamic_pointer_cast(b); - mul(a_ptr, b_ptr, scaleAB, scaleT); + const auto a_ptr = dynamic_cast(&a); + const auto b_ptr = dynamic_cast(&b); + if (a_ptr && b_ptr) { + mul(*a_ptr, *b_ptr, scaleAB, scaleT); } else { LOG(FATAL) << "not supported"; } diff --git a/paddle/math/SparseMatrix.h b/paddle/math/SparseMatrix.h index bd96a3301d..1d3801548e 100644 --- a/paddle/math/SparseMatrix.h +++ b/paddle/math/SparseMatrix.h @@ -104,10 +104,7 @@ public: size_t newNnz, SparseValueType valueType); - void mul(const GpuMatrixPtr a, - const GpuMatrixPtr b, - real scaleAB, - real scaleT); + void mul(const GpuMatrix& a, const GpuMatrix& b, real scaleAB, real scaleT); /// B = A , B.trans = !A.trans MatrixPtr getTranspose(); @@ -218,7 +215,7 @@ protected: void copyRow(int offsets, size_t colNum, const sparse_float_value_t* row); public: - void mul(const MatrixPtr a, const MatrixPtr b, real scaleAB, real scaleT); + void mul(const Matrix& a, const Matrix& b, real scaleAB, real scaleT); void copyFrom(CpuSparseMatrix& src, hl_stream_t stream); void copyFrom(GpuSparseMatrix& src, hl_stream_t stream); diff --git a/paddle/math/tests/test_SparseMatrix.cpp b/paddle/math/tests/test_SparseMatrix.cpp index 88b75b6d83..0949ab7ffb 100644 --- a/paddle/math/tests/test_SparseMatrix.cpp +++ b/paddle/math/tests/test_SparseMatrix.cpp @@ -33,8 +33,8 @@ TEST(Matrix, CopyCpuMatrixToSparseMatrix) { ret2(new CpuMatrix(HEIGHT, WIDTH_TEST)); ret1->zeroMem(); ret2->zeroMem(); - ret1->mul(testMatrix, mulCpuMatrix, 1.0, 1.0); - ret2->mul(testCpuMatrix, mulCpuMatrix, 1.0, 1.0); + ret1->mul(*testMatrix, *mulCpuMatrix, 1.0, 1.0); + ret2->mul(*testCpuMatrix, *mulCpuMatrix, 1.0, 1.0); checkMatrixEqual(ret1, ret2); } @@ -147,9 +147,9 @@ void test_sparse_matrix_mul(MatrixPara paraA, hl_stream_synchronize(stream); /*matrix mul*/ - cpuMatrixC->mul(cpuMatrixA, cpuMatrixB, 1.0, 1.0); - gpuMatrixC->mul(gpuMatrixA, gpuMatrixB, 1.0, 1.0); - cpuDenseC->mul(cpuDenseA, cpuDenseB, 1.0, 1.0); + cpuMatrixC->mul(*cpuMatrixA, *cpuMatrixB, 1.0, 1.0); + gpuMatrixC->mul(*gpuMatrixA, *gpuMatrixB, 1.0, 1.0); + cpuDenseC->mul(*cpuDenseA, *cpuDenseB, 1.0, 1.0); gpuMatrixC_d2h->copyFrom(*gpuMatrixC, stream); hl_stream_synchronize(stream); @@ -224,8 +224,8 @@ TEST(Matrix, CopySparseMatrixToGpuSparseMatrix) { MatrixPtr ret2(new GpuMatrix(HEIGHT, WIDTH_TEST)); ret1->zeroMem(); ret2->zeroMem(); - ret1->mul(testMatrix, mulCpuMatrix, 1.0, 1.0); - ret2->mul(testGpuMatrix, mulGpuMatrix, 1.0, 1.0); + ret1->mul(*testMatrix, *mulCpuMatrix, 1.0, 1.0); + ret2->mul(*testGpuMatrix, *mulGpuMatrix, 1.0, 1.0); checkMatrixEqual(ret1, ret2); } diff --git a/paddle/math/tests/test_matrixCompare.cpp b/paddle/math/tests/test_matrixCompare.cpp index 10289940a4..c6fc849ba0 100644 --- a/paddle/math/tests/test_matrixCompare.cpp +++ b/paddle/math/tests/test_matrixCompare.cpp @@ -318,7 +318,7 @@ void testMatrixInverse(int height) { cpu->randomizeUniform(); MatrixPtr cpuT = cpu->getTranspose(); MatrixPtr outputCheck = std::make_shared(height, height); - outputCheck->mul(cpu, cpuT); + outputCheck->mul(*cpu, *cpuT); cpu->setDiag(1.0); cpu->add(*outputCheck); @@ -328,7 
+328,7 @@ void testMatrixInverse(int height) { TensorCheckErr(*cpuI, *gpuI); - outputCheck->mul(cpu, cpuI); + outputCheck->mul(*cpu, *cpuI); cpu->setDiag(1.0); TensorCheckErr(*cpu, *outputCheck); } @@ -509,8 +509,8 @@ void testMatrixMul(bool transa, bool transb, int dimM, int dimN, int dimK) { gpuB->copyFrom(*cpuB); gpuC->copyFrom(*cpuC); - cpuC->mul(cpuA, cpuB, alpha, beta); - gpuC->mul(gpuA, gpuB, alpha, beta); + cpuC->mul(*cpuA, *cpuB, alpha, beta); + gpuC->mul(*gpuA, *gpuB, alpha, beta); TensorCheckErr(*cpuC, *gpuC); } @@ -581,8 +581,8 @@ void testSubMatrixMul(bool transa, bool transb, int dimM, int dimN, int dimK) { MatrixPtr subCpuC = cpuC->subMatrix(startM, endM, startN, endN); MatrixPtr subGpuC = gpuC->subMatrix(startM, endM, startN, endN); - subCpuC->mul(subCpuA, subCpuB, alpha, beta); - subGpuC->mul(subGpuA, subGpuB, alpha, beta); + subCpuC->mul(*subCpuA, *subCpuB, alpha, beta); + subGpuC->mul(*subGpuA, *subGpuB, alpha, beta); TensorCheckErr(*cpuC, *gpuC); } diff --git a/paddle/math/tests/test_sparseMatrixCompare.cpp b/paddle/math/tests/test_sparseMatrixCompare.cpp index 6f6de238ba..dcdbccffc3 100644 --- a/paddle/math/tests/test_sparseMatrixCompare.cpp +++ b/paddle/math/tests/test_sparseMatrixCompare.cpp @@ -102,8 +102,8 @@ void testSpMatrixMul(int M, int N, int K, real rate) { gpuC->copyFrom(*cpuC, stream); hl_stream_synchronize(stream); - cpuC->mul(cpuA, cpuB->getTranspose(), 1, 1); - gpuC->mul(gpuA, gpuB->getTranspose(), 1, 1); + cpuC->mul(*cpuA, *cpuB->getTranspose(), 1, 1); + gpuC->mul(*gpuA, *gpuB->getTranspose(), 1, 1); MatrixPtr outputCheck(new CpuSparseMatrix(M, N, nnz)); outputCheck->copyFrom(*gpuC, stream); From 204152c76ea90100f286067f9ff298a8c79d33a3 Mon Sep 17 00:00:00 2001 From: Luo Tao Date: Tue, 20 Dec 2016 12:54:19 +0800 Subject: [PATCH 242/265] set -e for docs.sh --- paddle/scripts/travis/docs.sh | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/paddle/scripts/travis/docs.sh b/paddle/scripts/travis/docs.sh index 4ab1746b5a..cd331522a9 100755 --- a/paddle/scripts/travis/docs.sh +++ b/paddle/scripts/travis/docs.sh @@ -12,13 +12,13 @@ set +e linkchecker doc/cn/html/index.html > doc_cn.out linkchecker doc/en/html/index.html > doc_en.out for i in doc_cn.out doc_en.out; do - echo $i grep " 0 errors found" $i if [ $? -ne 0 ]; then cat $i exit 1 fi done +set -e # Parse Github URL REPO=`git config remote.origin.url` From bf26679c3214f2c0c24f02218d3c15e720557a38 Mon Sep 17 00:00:00 2001 From: yangwenbo02 Date: Tue, 20 Dec 2016 13:51:56 +0800 Subject: [PATCH 243/265] update docker_install_en.rst --- doc/getstarted/build_and_install/docker_install_en.rst | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/doc/getstarted/build_and_install/docker_install_en.rst b/doc/getstarted/build_and_install/docker_install_en.rst index 1cc23ac3aa..57725c0d85 100644 --- a/doc/getstarted/build_and_install/docker_install_en.rst +++ b/doc/getstarted/build_and_install/docker_install_en.rst @@ -44,8 +44,7 @@ The general development workflow with Docker and Bazel is as follows: cd paddle docker build -t paddle:dev -f paddle/scripts/docker/Dockerfile . - Apt-get source errors may occur when building paddle docker image. - **You can specify the UBUNTU MIRROR with** :code:`--build-arg UBUNTU_MIRROR` **like the example below.** + Sometimes docker build might suffer from a slow network connection to the official Ubuntu apt-source servers. In such case, we can specify an apt-source mirror server that is geologically nearer to us. 
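(For illustration only, a sketch of such a build command; the mirror URL below is a placeholder chosen by the editor, not one prescribed by this patch:)

```bash
# Pass a nearby apt mirror into the image build via the UBUNTU_MIRROR build argument,
# reusing the same tag and Dockerfile as the plain "docker build" command shown earlier.
docker build \
    --build-arg UBUNTU_MIRROR="http://<nearby-mirror>/ubuntu/" \
    -t paddle:dev \
    -f paddle/scripts/docker/Dockerfile .
```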
In the following example, we specified an apt-source server that responds fast in China.You can specify the UBUNTU MIRROR with :code:`--build-arg UBUNTU_MIRROR` like the example below. .. code-block:: bash From 6f8f468fdbfafafda1661b8002cfda76263cf9af Mon Sep 17 00:00:00 2001 From: gaoyuan Date: Tue, 20 Dec 2016 16:28:41 +0800 Subject: [PATCH 244/265] Add priorbox layer gpu unit test. --- paddle/gserver/tests/test_PriorBox.cpp | 127 ++++++++++++++++++------- 1 file changed, 92 insertions(+), 35 deletions(-) diff --git a/paddle/gserver/tests/test_PriorBox.cpp b/paddle/gserver/tests/test_PriorBox.cpp index 1dab21218e..19dfd0f065 100644 --- a/paddle/gserver/tests/test_PriorBox.cpp +++ b/paddle/gserver/tests/test_PriorBox.cpp @@ -24,14 +24,15 @@ using namespace std; // NOLINT // Do one forward pass of priorBox layer and check to see if its output // matches the given result -void doOnePriorBoxTest(size_t featureMapWidth, - size_t featureMapHeight, - size_t imageWidth, - size_t imageHeight, - vector minSize, - vector maxSize, - vector aspectRatio, +void doOnePriorBoxTest(size_t feature_map_width, + size_t feature_map_height, + size_t image_width, + size_t image_height, + vector min_size, + vector max_size, + vector aspect_ratio, vector variance, + bool use_gpu, MatrixPtr& result) { // Setting up the priorbox layer TestConfig configt; @@ -42,28 +43,27 @@ void doOnePriorBoxTest(size_t featureMapWidth, configt.inputDefs.push_back({INPUT_DATA, "image", 1, 0}); configt.layerConfig.add_inputs(); PriorBoxConfig* pb = input->mutable_priorbox_conf(); - for (size_t i = 0; i < minSize.size(); i++) pb->add_min_size(minSize[i]); - for (size_t i = 0; i < maxSize.size(); i++) pb->add_max_size(maxSize[i]); - for (size_t i = 0; i < aspectRatio.size(); i++) - pb->add_aspect_ratio(aspectRatio[i]); + for (size_t i = 0; i < min_size.size(); i++) pb->add_min_size(min_size[i]); + for (size_t i = 0; i < max_size.size(); i++) pb->add_max_size(max_size[i]); for (size_t i = 0; i < variance.size(); i++) pb->add_variance(variance[i]); + for (size_t i = 0; i < aspect_ratio.size(); i++) + pb->add_aspect_ratio(aspect_ratio[i]); // data layer initialize std::vector dataLayers; LayerMap layerMap; vector datas; initDataLayer( - configt, &dataLayers, &datas, &layerMap, "priorbox", 1, false, false); - dataLayers[0]->getOutput().setFrameHeight(featureMapHeight); - dataLayers[0]->getOutput().setFrameWidth(featureMapWidth); - dataLayers[1]->getOutput().setFrameHeight(imageHeight); - dataLayers[1]->getOutput().setFrameWidth(imageWidth); + configt, &dataLayers, &datas, &layerMap, "priorbox", 1, false, use_gpu); + dataLayers[0]->getOutput().setFrameHeight(feature_map_height); + dataLayers[0]->getOutput().setFrameWidth(feature_map_width); + dataLayers[1]->getOutput().setFrameHeight(image_height); + dataLayers[1]->getOutput().setFrameWidth(image_width); // test layer initialize std::vector parameters; LayerPtr priorboxLayer; initTestLayer(configt, &layerMap, ¶meters, &priorboxLayer); - priorboxLayer->forward(PASS_GC); checkMatrixEqual(priorboxLayer->getOutputValue(), result); } @@ -73,6 +73,7 @@ TEST(Layer, priorBoxLayerFwd) { vector maxSize; vector aspectRatio; vector variance; + bool useGpu = false; minSize.push_back(276); maxSize.push_back(330); @@ -81,9 +82,8 @@ TEST(Layer, priorBoxLayerFwd) { variance.push_back(0.2); variance.push_back(0.2); + // CPU case 1. 
MatrixPtr result; - result = Matrix::create(1, 2 * 8, false, false); - float resultData[] = {0.04, 0.04, 0.96, @@ -100,52 +100,109 @@ TEST(Layer, priorBoxLayerFwd) { 0.1, 0.2, 0.2}; + result = Matrix::create(1, 2 * 8, false, useGpu); result->setData(resultData); - doOnePriorBoxTest(/* featureMapWidth */ 1, - /* featureMapHeight */ 1, - /* imageWidth */ 300, - /* imageHeight */ 300, + doOnePriorBoxTest(/* feature_map_width */ 1, + /* feature_map_height */ 1, + /* image_width */ 300, + /* image_height */ 300, minSize, maxSize, aspectRatio, variance, + useGpu, result); - + // CPU case 2. variance[1] = 0.2; variance[3] = 0.1; maxSize.pop_back(); - Matrix::resizeOrCreate(result, 1, 4 * 8, false, false); float resultData2[] = {0, 0, 0.595, 0.595, 0.1, 0.2, 0.2, 0.1, 0.405, 0, 1, 0.595, 0.1, 0.2, 0.2, 0.1, 0, 0.405, 0.595, 1, 0.1, 0.2, 0.2, 0.1, 0.405, 0.405, 1, 1, 0.1, 0.2, 0.2, 0.1}; + Matrix::resizeOrCreate(result, 1, 4 * 8, false, useGpu); result->setData(resultData2); - doOnePriorBoxTest(/* featureMapWidth */ 2, - /* featureMapHeight */ 2, - /* imageWidth */ 400, - /* imageHeight */ 400, + doOnePriorBoxTest(/* feature_map_width */ 2, + /* feature_map_height */ 2, + /* image_width */ 400, + /* image_height */ 400, minSize, maxSize, aspectRatio, variance, + useGpu, result); - + // CPU case 3. aspectRatio.push_back(2); - Matrix::resizeOrCreate(result, 1, 3 * 8, false, false); float resultData3[] = {0.04, 0.04, 0.96, 0.96, 0.1, 0.2, 0.2, 0.1, 0, 0.17473088, 1, 0.825269, 0.1, 0.2, 0.2, 0.1, 0.17473088, 0, 0.825269, 1, 0.1, 0.2, 0.2, 0.1}; + Matrix::resizeOrCreate(result, 1, 3 * 8, false, useGpu); result->setData(resultData3); - doOnePriorBoxTest(/* featureMapWidth */ 1, - /* featureMapHeight */ 1, - /* imageWidth */ 300, - /* imageHeight */ 300, + doOnePriorBoxTest(/* feature_map_width */ 1, + /* feature_map_height */ 1, + /* image_width */ 300, + /* image_height */ 300, minSize, maxSize, aspectRatio, variance, + useGpu, result); + +#ifndef PADDLE_ONLY_CPU + // reset the input parameters + variance[1] = 0.1; + variance[3] = 0.2; + maxSize.push_back(330); + aspectRatio.pop_back(); + MatrixPtr resultGpu; + useGpu = true; + // GPU case 1. + resultGpu = Matrix::create(1, 2 * 8, false, useGpu); + resultGpu->copyFrom(resultData, 2 * 8); + doOnePriorBoxTest(/* feature_map_width */ 1, + /* feature_map_height */ 1, + /* image_width */ 300, + /* image_height */ 300, + minSize, + maxSize, + aspectRatio, + variance, + useGpu, + resultGpu); + // GPU case 2. + variance[1] = 0.2; + variance[3] = 0.1; + maxSize.pop_back(); + Matrix::resizeOrCreate(resultGpu, 1, 4 * 8, false, useGpu); + resultGpu->copyFrom(resultData2, 4 * 8); + doOnePriorBoxTest(/* feature_map_width */ 2, + /* feature_map_height */ 2, + /* image_width */ 400, + /* image_height */ 400, + minSize, + maxSize, + aspectRatio, + variance, + useGpu, + resultGpu); + // GPU case 3. + aspectRatio.push_back(2); + Matrix::resizeOrCreate(resultGpu, 1, 3 * 8, false, useGpu); + resultGpu->copyFrom(resultData3, 3 * 8); + doOnePriorBoxTest(/* feature_map_width */ 1, + /* feature_map_height */ 1, + /* image_width */ 300, + /* image_height */ 300, + minSize, + maxSize, + aspectRatio, + variance, + useGpu, + resultGpu); +#endif } int main(int argc, char** argv) { From 8fe3a3aa73be9b7d1f748e3809dff9f5323be719 Mon Sep 17 00:00:00 2001 From: Peng Li Date: Tue, 20 Dec 2016 16:46:42 +0800 Subject: [PATCH 245/265] Add excluded_chunk_types to ChunkEvaluator The chunks of types in excluded_chunk_types will not be counted in ChunkEvaluator. 
This is useful for tasks such as SRL, in which chunks of type V (verb) will not be taken into account in evaluation. --- paddle/gserver/evaluators/ChunkEvaluator.cpp | 17 ++++++-- proto/ModelConfig.proto | 10 ++++- python/paddle/trainer/config_parser.py | 6 ++- .../trainer_config_helpers/evaluators.py | 39 +++++++++++-------- 4 files changed, 50 insertions(+), 22 deletions(-) diff --git a/paddle/gserver/evaluators/ChunkEvaluator.cpp b/paddle/gserver/evaluators/ChunkEvaluator.cpp index 3d8af5bcd4..15e0e95206 100644 --- a/paddle/gserver/evaluators/ChunkEvaluator.cpp +++ b/paddle/gserver/evaluators/ChunkEvaluator.cpp @@ -12,6 +12,7 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ +#include #include #include "paddle/math/Vector.h" @@ -72,6 +73,7 @@ class ChunkEvaluator : public Evaluator { std::vector labelSegments_; std::vector outputSegments_; + std::set excludedChunkTypes_; public: virtual void init(const EvaluatorConfig& config) { @@ -105,6 +107,10 @@ public: } CHECK(config.has_num_chunk_types()) << "Missing num_chunk_types in config"; otherChunkType_ = numChunkTypes_ = config.num_chunk_types(); + + // the chunks of types in excludedChunkTypes_ will not be counted + auto& tmp = config.excluded_chunk_types(); + excludedChunkTypes_.insert(tmp.begin(), tmp.end()); } virtual void start() { @@ -157,7 +163,8 @@ public: size_t i = 0, j = 0; while (i < outputSegments_.size() && j < labelSegments_.size()) { if (outputSegments_[i] == labelSegments_[j]) { - ++numCorrect_; + if (excludedChunkTypes_.count(outputSegments_[i].type) != 1) + ++numCorrect_; } if (outputSegments_[i].end < labelSegments_[j].end) { ++i; @@ -168,8 +175,12 @@ public: ++j; } } - numLabelSegments_ += labelSegments_.size(); - numOutputSegments_ += outputSegments_.size(); + for (auto& segment : labelSegments_) { + if (excludedChunkTypes_.count(segment.type) != 1) ++numLabelSegments_; + } + for (auto& segment : outputSegments_) { + if (excludedChunkTypes_.count(segment.type) != 1) ++numOutputSegments_; + } } void getSegments(int* label, int length, std::vector& segments) { diff --git a/proto/ModelConfig.proto b/proto/ModelConfig.proto index 552af71e76..e24ed21fbb 100644 --- a/proto/ModelConfig.proto +++ b/proto/ModelConfig.proto @@ -433,8 +433,12 @@ message EvaluatorConfig { repeated string input_layers = 3; // Used by ChunkEvaluator - optional string chunk_scheme = 4; // one of "IOB", "IOE", "IOBES" - optional int32 num_chunk_types = 5; // number of chunk types other than "other" + // one of "IOB", "IOE", "IOBES" + optional string chunk_scheme = 4; + // number of chunk types other than "other" + optional int32 num_chunk_types = 5; + // chunk of these types are not counted + repeated int32 excluded_chunk_types = 12; // Used by PrecisionRecallEvaluator and ClassificationErrorEvaluator // For multi binary labels: true if output > classification_threshold @@ -453,6 +457,8 @@ message EvaluatorConfig { // whether to delimit the sequence in the seq_text_printer optional bool delimited = 11 [default = true]; + + // NOTE: 12 has been occupied by excluded_chunk_types } message LinkConfig { diff --git a/python/paddle/trainer/config_parser.py b/python/paddle/trainer/config_parser.py index ea3e4308fe..39892d0533 100644 --- a/python/paddle/trainer/config_parser.py +++ b/python/paddle/trainer/config_parser.py @@ -1240,7 +1240,8 @@ def Evaluator( dict_file=None, result_file=None, num_results=None, - delimited=None, ): + 
delimited=None, + excluded_chunk_types=None, ): evaluator = g_config.model_config.evaluators.add() evaluator.type = type evaluator.name = MakeLayerNameInSubmodel(name) @@ -1269,6 +1270,9 @@ def Evaluator( if delimited is not None: evaluator.delimited = delimited + if excluded_chunk_types: + evaluator.excluded_chunk_types.extend(excluded_chunk_types) + class LayerBase(object): def __init__( diff --git a/python/paddle/trainer_config_helpers/evaluators.py b/python/paddle/trainer_config_helpers/evaluators.py index 3e0e88972c..731e30d367 100644 --- a/python/paddle/trainer_config_helpers/evaluators.py +++ b/python/paddle/trainer_config_helpers/evaluators.py @@ -57,19 +57,21 @@ def evaluator(*attrs): return impl -def evaluator_base(input, - type, - label=None, - weight=None, - name=None, - chunk_scheme=None, - num_chunk_types=None, - classification_threshold=None, - positive_label=None, - dict_file=None, - result_file=None, - num_results=None, - delimited=None): +def evaluator_base( + input, + type, + label=None, + weight=None, + name=None, + chunk_scheme=None, + num_chunk_types=None, + classification_threshold=None, + positive_label=None, + dict_file=None, + result_file=None, + num_results=None, + delimited=None, + excluded_chunk_types=None, ): """ Evaluator will evaluate the network status while training/testing. @@ -127,7 +129,8 @@ def evaluator_base(input, positive_label=positive_label, dict_file=dict_file, result_file=result_file, - delimited=delimited) + delimited=delimited, + excluded_chunk_types=excluded_chunk_types, ) @evaluator(EvaluatorAttribute.FOR_CLASSIFICATION) @@ -330,7 +333,8 @@ def chunk_evaluator( label, chunk_scheme, num_chunk_types, - name=None, ): + name=None, + excluded_chunk_types=None, ): """ Chunk evaluator is used to evaluate segment labelling accuracy for a sequence. It calculates the chunk detection F1 score. @@ -376,6 +380,8 @@ def chunk_evaluator( :param num_chunk_types: number of chunk types other than "other" :param name: The Evaluator name, it is optional. 
:type name: basename|None + :param excluded_chunk_types: chunks of these types are not considered + :type excluded_chunk_types: list of integer|[] """ evaluator_base( name=name, @@ -383,7 +389,8 @@ def chunk_evaluator( input=input, label=label, chunk_scheme=chunk_scheme, - num_chunk_types=num_chunk_types) + num_chunk_types=num_chunk_types, + excluded_chunk_types=excluded_chunk_types, ) @evaluator(EvaluatorAttribute.FOR_UTILS) From 5fddd99e18f3920ff0d8158fd4a9800d5566943e Mon Sep 17 00:00:00 2001 From: hedaoyuan Date: Tue, 20 Dec 2016 17:20:22 +0800 Subject: [PATCH 246/265] move TEST from test_matrixCompare.cpp to cross_map_normal_op_test.cpp --- cmake/util.cmake | 1 + paddle/function/CMakeLists.txt | 35 +++-- paddle/function/FunctionTest.h | 102 +++++++++++++ paddle/function/TestMain.cpp | 22 +++ paddle/function/cross_map_normal_op_test.cpp | 71 +++++++++ paddle/math/tests/test_matrixCompare.cpp | 144 ------------------- 6 files changed, 221 insertions(+), 154 deletions(-) create mode 100644 paddle/function/FunctionTest.h create mode 100644 paddle/function/TestMain.cpp create mode 100644 paddle/function/cross_map_normal_op_test.cpp diff --git a/cmake/util.cmake b/cmake/util.cmake index 03734e7839..8a71b23c62 100644 --- a/cmake/util.cmake +++ b/cmake/util.cmake @@ -107,6 +107,7 @@ function(link_paddle_exe TARGET_NAME) paddle_parameter paddle_proto paddle_cuda + paddle_test_main ${METRIC_LIBS} ${PROTOBUF_LIBRARY} ${LIBGLOG_LIBRARY} diff --git a/paddle/function/CMakeLists.txt b/paddle/function/CMakeLists.txt index 8fad0e3ebd..0697842bbe 100644 --- a/paddle/function/CMakeLists.txt +++ b/paddle/function/CMakeLists.txt @@ -1,12 +1,27 @@ -file(GLOB FUNCTION_HEADERS . *.h) - -if(NOT WITH_GPU) - file(GLOB FUNCTION_SOURCES . *.cpp) - add_library(paddle_function STATIC ${FUNCTION_SOURCES}) -else() - file(GLOB FUNCTION_SOURCES . *.cpp *.cu) - cuda_add_library(paddle_function ${FUNCTION_SOURCES}) +file(GLOB h_files . *_op.h) +file(GLOB cpp_files . *_op.cpp) + +list(APPEND h_files Function.h) +list(APPEND cpp_files Function.cpp) + +if(WITH_GPU) + file(GLOB cu_files . *_op_gpu.cu) + cuda_compile(cu_objs ${cu_files}) endif() -add_style_check_target(paddle_function ${FUNCTION_SOURCES}) -add_style_check_target(paddle_function ${FUNCTION_HEADERS}) +add_library(paddle_function STATIC ${cpp_files} ${cu_objs}) + +add_library(paddle_test_main STATIC TestMain.cpp) + +if(WITH_GPU) + # TODO: + # file(GLOB test_files . *_op_test.cpp) + # add_executable(${test_bin} EXCLUDE_FROM_ALL ${test_files}) + add_simple_unittest(cross_map_normal_op_test) +endif() + +add_style_check_target(paddle_function ${h_files}) +add_style_check_target(paddle_function ${cpp_files}) +if(WITH_GPU) + add_style_check_target(paddle_function ${cu_files}) +endif() diff --git a/paddle/function/FunctionTest.h b/paddle/function/FunctionTest.h new file mode 100644 index 0000000000..a8c5e412bd --- /dev/null +++ b/paddle/function/FunctionTest.h @@ -0,0 +1,102 @@ +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
*/ + +#include "Function.h" +#include "paddle/math/Vector.h" +#include "paddle/math/tests/TensorCheck.h" + +namespace paddle { + +class FunctionCompare { +public: + FunctionCompare(const std::string& name, const FuncConfig& config) + : cpu(FunctionBase::funcRegistrar_.createByType(name + "-CPU")), + gpu(FunctionBase::funcRegistrar_.createByType(name + "-GPU")) { + cpu->init(config); + gpu->init(config); + } + + void cmpWithArg(const Arguments& inputs, + const Arguments& outputs, + const Arguments& inouts) { + // init cpu and gpu arguments + auto initArgs = [=]( + Arguments& cpuArgs, Arguments& gpuArgs, const Arguments& inArgs) { + for (auto arg : inArgs) { + size_t size = sizeof(real); + for (auto dim : arg.dims_) { + size *= dim; + } + cpuMemory.emplace_back(std::make_shared(size)); + gpuMemory.emplace_back(std::make_shared(size)); + cpuArgs.emplace_back( + Tensor((real*)cpuMemory.back()->getBuf(), arg.dims_)); + gpuArgs.emplace_back( + Tensor((real*)gpuMemory.back()->getBuf(), arg.dims_)); + + // will use an api to refactor this code. + CpuVector cpuVector(size / sizeof(real), + (real*)cpuArgs.back().getData()); + GpuVector gpuVector(size / sizeof(real), + (real*)gpuArgs.back().getData()); + cpuVector.uniform(0.001, 1); + gpuVector.copyFrom(cpuVector); + } + }; + initArgs(cpuInputs, gpuInputs, inputs); + initArgs(cpuOutputs, gpuOutputs, outputs); + initArgs(cpuInouts, gpuInouts, inouts); + + // function calculate + cpu->calc(cpuInputs, cpuOutputs, cpuInouts); + gpu->calc(gpuInputs, gpuOutputs, gpuInouts); + + // check outputs and inouts + auto checkArgs = [=](const Arguments& cpuArgs, const Arguments& gpuArgs) { + for (size_t i = 0; i < cpuArgs.size(); i++) { + auto cpu = cpuArgs[i]; + auto gpu = gpuArgs[i]; + size_t size = 1; + for (auto dim : cpu.dims_) { + size *= dim; + } + CpuVector cpuVector(size, (real*)cpu.getData()); + GpuVector gpuVector(size, (real*)gpu.getData()); + + autotest::TensorCheckErr(cpuVector, gpuVector); + } + }; + checkArgs(cpuOutputs, gpuOutputs); + checkArgs(cpuInouts, gpuInouts); + } + +protected: + std::shared_ptr cpu; + std::shared_ptr gpu; + std::vector cpuMemory; + std::vector gpuMemory; + Arguments cpuInputs; + Arguments cpuOutputs; + Arguments cpuInouts; + Arguments gpuInputs; + Arguments gpuOutputs; + Arguments gpuInouts; +}; + +} // namespace paddle + +using paddle::FunctionCompare; +using paddle::FuncConfig; +using paddle::Dims; +using paddle::Tensor; diff --git a/paddle/function/TestMain.cpp b/paddle/function/TestMain.cpp new file mode 100644 index 0000000000..3e14532d18 --- /dev/null +++ b/paddle/function/TestMain.cpp @@ -0,0 +1,22 @@ +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
*/ + +#include +#include "paddle/utils/Util.h" + +int main(int argc, char** argv) { + testing::InitGoogleTest(&argc, argv); + paddle::initMain(argc, argv); + return RUN_ALL_TESTS(); +} diff --git a/paddle/function/cross_map_normal_op_test.cpp b/paddle/function/cross_map_normal_op_test.cpp new file mode 100644 index 0000000000..22692691bd --- /dev/null +++ b/paddle/function/cross_map_normal_op_test.cpp @@ -0,0 +1,71 @@ +/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. */ + +#include +#include "FunctionTest.h" + +TEST(CrossMapNormal, real) { + for (size_t numSamples : {5, 32}) { + for (size_t channels : {1, 5, 32}) { + for (size_t imgSizeH : {5, 33, 100}) { + for (size_t imgSizeW : {5, 32, 96}) { + for (size_t size : {1, 2, 3, 5, 7}) { + VLOG(3) << " numSamples=" << numSamples << " channels=" << channels + << " imgSizeH=" << imgSizeH << " imgSizeW=" << imgSizeW + << " size=" << size; + + FunctionCompare compare("CrossMapNormal", + FuncConfig() + .set("size", size) + .set("scale", (real)1.5) + .set("pow", (real)0.5)); + Dims dims{numSamples, channels, imgSizeH, imgSizeW}; + compare.cmpWithArg({Tensor(nullptr, dims)}, + {Tensor(nullptr, dims), Tensor(nullptr, dims)}, + {}); + } + } + } + } + } +} + +TEST(CrossMapNormalGrad, real) { + for (size_t numSamples : {5, 32}) { + for (size_t channels : {1, 5, 32}) { + for (size_t imgSizeH : {5, 33, 100}) { + for (size_t imgSizeW : {5, 32, 96}) { + for (size_t size : {1, 2, 3, 5, 7}) { + VLOG(3) << " numSamples=" << numSamples << " channels=" << channels + << " imgSizeH=" << imgSizeH << " imgSizeW=" << imgSizeW + << " size=" << size; + + FunctionCompare compare("CrossMapNormalGrad", + FuncConfig() + .set("size", size) + .set("scale", (real)1.5) + .set("pow", (real)0.5)); + Dims dims{numSamples, channels, imgSizeH, imgSizeW}; + compare.cmpWithArg({Tensor(nullptr, dims), + Tensor(nullptr, dims), + Tensor(nullptr, dims), + Tensor(nullptr, dims)}, + {Tensor(nullptr, dims)}, + {}); + } + } + } + } + } +} diff --git a/paddle/math/tests/test_matrixCompare.cpp b/paddle/math/tests/test_matrixCompare.cpp index c89b7ff490..440534e722 100644 --- a/paddle/math/tests/test_matrixCompare.cpp +++ b/paddle/math/tests/test_matrixCompare.cpp @@ -1263,150 +1263,6 @@ TEST(Matrix, MaxOutFwdBwd) { } } -void testCrossMapNormalFwd( - int numSamples, int channels, int imgSizeH, int imgSizeW, int sizeX) { - float scale = 1.5; - float pow = 0.5; - int width = imgSizeH * imgSizeW * channels; - CpuMatrix inputs(numSamples, width); - CpuMatrix denoms(numSamples, width); - CpuMatrix outputs(numSamples, width); - GpuMatrix inputsGpu(numSamples, width); - GpuMatrix denomsGpu(numSamples, width); - GpuMatrix outputsGpu(numSamples, width); - - inputs.randomizeUniform(); - outputs.randomizeUniform(); - inputsGpu.copyFrom(inputs); - outputsGpu.copyFrom(outputs); - - FunctionBase* cpu = - FunctionBase::funcRegistrar_.createByType(FUNC_NAME(CrossMapNormal, CPU)); - FunctionBase* gpu = - FunctionBase::funcRegistrar_.createByType(FUNC_NAME(CrossMapNormal, 
GPU)); - cpu->init(FuncConfig() - .set("size", (size_t)sizeX) - .set("scale", scale) - .set("pow", pow)); - gpu->init(FuncConfig() - .set("size", (size_t)sizeX) - .set("scale", scale) - .set("pow", pow)); - - Dims dims{ - (size_t)numSamples, (size_t)channels, (size_t)imgSizeH, (size_t)imgSizeW}; - cpu->calc({Tensor(inputs.getData(), dims)}, - {Tensor(outputs.getData(), dims), Tensor(denoms.getData(), dims)}, - {}); - - gpu->calc( - {Tensor(inputsGpu.getData(), dims)}, - {Tensor(outputsGpu.getData(), dims), Tensor(denomsGpu.getData(), dims)}, - {}); - - TensorCheckErr(outputs, outputsGpu); - TensorCheckErr(denoms, denomsGpu); -} - -TEST(Matrix, crossMapNormalFwd) { - for (auto numSamples : {5, 32}) { - for (auto channels : {1, 5, 32}) { - for (auto imgSizeH : {5, 33, 100}) { - for (auto imgSizeW : {5, 32, 96}) { - for (auto sizeX : {1, 2, 3, 5, 7}) { - VLOG(3) << " numSamples=" << numSamples << " channels=" << channels - << " imgSizeH=" << imgSizeH << " imgSizeW=" << imgSizeW - << " sizeX=" << sizeX; - testCrossMapNormalFwd( - numSamples, channels, imgSizeH, imgSizeW, sizeX); - } - } - } - } - } -} - -void testCrossMapNormalBwd( - int numSamples, int channels, int imgSizeH, int imgSizeW, int sizeX) { - float scale = 1.5; - float pow = 0.5; - size_t width = imgSizeH * imgSizeW * channels; - - CpuMatrix inputsGrad(numSamples, width); - CpuMatrix inputsValue(numSamples, width); - CpuMatrix outputsGrad(numSamples, width); - CpuMatrix outputsValue(numSamples, width); - CpuMatrix denoms(numSamples, width); - - outputsGrad.randomizeUniform(); - denoms.randomizeUniform(); - inputsValue.randomizeUniform(); - outputsValue.randomizeUniform(); - inputsGrad.randomizeUniform(); - denoms.add(0.01); - - GpuMatrix inputsGradGpu(numSamples, width); - GpuMatrix inputsValueGpu(numSamples, width); - GpuMatrix outputsGradGpu(numSamples, width); - GpuMatrix outputsValueGpu(numSamples, width); - GpuMatrix denomsGpu(numSamples, width); - - outputsGradGpu.copyFrom(outputsGrad); - denomsGpu.copyFrom(denoms); - inputsValueGpu.copyFrom(inputsValue); - outputsValueGpu.copyFrom(outputsValue); - inputsGradGpu.copyFrom(inputsGrad); - - FunctionBase* cpu = FunctionBase::funcRegistrar_.createByType( - FUNC_NAME(CrossMapNormalGrad, CPU)); - FunctionBase* gpu = FunctionBase::funcRegistrar_.createByType( - FUNC_NAME(CrossMapNormalGrad, GPU)); - cpu->init(FuncConfig() - .set("size", (size_t)sizeX) - .set("scale", scale) - .set("pow", pow)); - gpu->init(FuncConfig() - .set("size", (size_t)sizeX) - .set("scale", scale) - .set("pow", pow)); - - Dims dims{ - (size_t)numSamples, (size_t)channels, (size_t)imgSizeH, (size_t)imgSizeW}; - cpu->calc({Tensor(inputsValue.getData(), dims), - Tensor(outputsValue.getData(), dims), - Tensor(outputsGrad.getData(), dims), - Tensor(denoms.getData(), dims)}, - {Tensor(inputsGrad.getData(), dims)}, - {}); - - gpu->calc({Tensor(inputsValueGpu.getData(), dims), - Tensor(outputsValueGpu.getData(), dims), - Tensor(outputsGradGpu.getData(), dims), - Tensor(denomsGpu.getData(), dims)}, - {Tensor(inputsGradGpu.getData(), dims)}, - {}); - - TensorCheckErr(inputsGrad, inputsGradGpu); -} - -TEST(Matrix, crossMapNormalBwd) { - for (auto numSamples : {5, 32}) { - for (auto channels : {1, 5, 32}) { - for (auto imgSizeH : {5, 33, 100}) { - for (auto imgSizeW : {5, 32, 96}) { - for (auto sizeX : {1, 2, 3, 5, 7}) { - VLOG(3) << " numSamples=" << numSamples << " channels=" << channels - << " imgSizeH=" << imgSizeH << " imgSizeW=" << imgSizeW - << " sizeX=" << sizeX; - testCrossMapNormalBwd( - numSamples, channels, 
imgSizeH, imgSizeW, sizeX); - } - } - } - } - } -} - int main(int argc, char** argv) { testing::InitGoogleTest(&argc, argv); initMain(argc, argv); From 6e405a10c54fd8f5695832663668a51f4ed19c2b Mon Sep 17 00:00:00 2001 From: Peng Li Date: Tue, 20 Dec 2016 17:35:37 +0800 Subject: [PATCH 247/265] fix style issues --- paddle/gserver/evaluators/ChunkEvaluator.cpp | 6 +++--- proto/ModelConfig.proto | 6 +++--- python/paddle/trainer_config_helpers/evaluators.py | 2 +- 3 files changed, 7 insertions(+), 7 deletions(-) diff --git a/paddle/gserver/evaluators/ChunkEvaluator.cpp b/paddle/gserver/evaluators/ChunkEvaluator.cpp index 15e0e95206..13f02e51fe 100644 --- a/paddle/gserver/evaluators/ChunkEvaluator.cpp +++ b/paddle/gserver/evaluators/ChunkEvaluator.cpp @@ -162,9 +162,9 @@ public: getSegments(label, length, labelSegments_); size_t i = 0, j = 0; while (i < outputSegments_.size() && j < labelSegments_.size()) { - if (outputSegments_[i] == labelSegments_[j]) { - if (excludedChunkTypes_.count(outputSegments_[i].type) != 1) - ++numCorrect_; + if (outputSegments_[i] == labelSegments_[j] && + excludedChunkTypes_.count(outputSegments_[i].type) != 1) { + ++numCorrect_; } if (outputSegments_[i].end < labelSegments_[j].end) { ++i; diff --git a/proto/ModelConfig.proto b/proto/ModelConfig.proto index e24ed21fbb..be4d0041f9 100644 --- a/proto/ModelConfig.proto +++ b/proto/ModelConfig.proto @@ -437,8 +437,6 @@ message EvaluatorConfig { optional string chunk_scheme = 4; // number of chunk types other than "other" optional int32 num_chunk_types = 5; - // chunk of these types are not counted - repeated int32 excluded_chunk_types = 12; // Used by PrecisionRecallEvaluator and ClassificationErrorEvaluator // For multi binary labels: true if output > classification_threshold @@ -458,7 +456,9 @@ message EvaluatorConfig { // whether to delimit the sequence in the seq_text_printer optional bool delimited = 11 [default = true]; - // NOTE: 12 has been occupied by excluded_chunk_types + // Used by ChunkEvaluator + // chunk of these types are not counted + repeated int32 excluded_chunk_types = 12; } message LinkConfig { diff --git a/python/paddle/trainer_config_helpers/evaluators.py b/python/paddle/trainer_config_helpers/evaluators.py index 731e30d367..bd247ea9af 100644 --- a/python/paddle/trainer_config_helpers/evaluators.py +++ b/python/paddle/trainer_config_helpers/evaluators.py @@ -381,7 +381,7 @@ def chunk_evaluator( :param name: The Evaluator name, it is optional. 
:type name: basename|None :param excluded_chunk_types: chunks of these types are not considered - :type excluded_chunk_types: list of integer|[] + :type excluded_chunk_types: list of integer|None """ evaluator_base( name=name, From 0d1703d91ff02bb1ba51d164db8f29a4b9ed161c Mon Sep 17 00:00:00 2001 From: Yu Yang Date: Tue, 20 Dec 2016 17:41:41 +0800 Subject: [PATCH 248/265] Add const in ParameterUpdater init --- paddle/parameter/ParameterUpdaterBase.cpp | 2 +- paddle/parameter/ParameterUpdaterBase.h | 4 ++-- paddle/trainer/ParameterUpdater.cpp | 3 ++- paddle/trainer/ParameterUpdater.h | 4 ++-- paddle/trainer/RemoteParameterUpdater.cpp | 7 ++++--- paddle/trainer/RemoteParameterUpdater.h | 6 +++--- paddle/trainer/ThreadParameterUpdater.cpp | 2 +- paddle/trainer/ThreadParameterUpdater.h | 2 +- 8 files changed, 16 insertions(+), 14 deletions(-) diff --git a/paddle/parameter/ParameterUpdaterBase.cpp b/paddle/parameter/ParameterUpdaterBase.cpp index 49e2ae2b39..458cae886a 100644 --- a/paddle/parameter/ParameterUpdaterBase.cpp +++ b/paddle/parameter/ParameterUpdaterBase.cpp @@ -19,7 +19,7 @@ limitations under the License. */ namespace paddle { -void ParameterUpdater::init(std::vector& parameters) { +void ParameterUpdater::init(const std::vector& parameters) { parameters_ = parameters; for (ParameterType type : getParameterTypes()) { for (auto& para : parameters) { diff --git a/paddle/parameter/ParameterUpdaterBase.h b/paddle/parameter/ParameterUpdaterBase.h index 5401046f67..88148d9b76 100644 --- a/paddle/parameter/ParameterUpdaterBase.h +++ b/paddle/parameter/ParameterUpdaterBase.h @@ -32,7 +32,7 @@ public: parameterTypes_.push_back(type); } - virtual void init(std::vector& parameters); + virtual void init(const std::vector& parameters); // called by Trainer when starting a new pass virtual void startPass() {} @@ -105,7 +105,7 @@ public: ParameterUpdaterComposite() {} virtual ~ParameterUpdaterComposite() {} - virtual void init(std::vector& parameters) = 0; + virtual void init(const std::vector& parameters) = 0; virtual void startPass() { syncThreadPool_->execPlusOwner( diff --git a/paddle/trainer/ParameterUpdater.cpp b/paddle/trainer/ParameterUpdater.cpp index 8b5b95da5b..4e9e890c85 100644 --- a/paddle/trainer/ParameterUpdater.cpp +++ b/paddle/trainer/ParameterUpdater.cpp @@ -34,7 +34,8 @@ SgdUpdaterWithCpuAverager::SgdUpdaterWithCpuAverager( updateWorker_.addJob([]() { hl_set_device(FLAGS_gpu_id); }); } -void SgdUpdaterWithCpuAverager::init(std::vector& parameters) { +void SgdUpdaterWithCpuAverager::init( + const std::vector& parameters) { SgdLocalUpdater::init(parameters); averager_->init(parameters_.size(), nullptr); copyEvents_.resize(parameters_.size()); diff --git a/paddle/trainer/ParameterUpdater.h b/paddle/trainer/ParameterUpdater.h index e52b5cd318..4dae77567f 100644 --- a/paddle/trainer/ParameterUpdater.h +++ b/paddle/trainer/ParameterUpdater.h @@ -64,7 +64,7 @@ public: * be initialized. * @param parameters The parameter need to be initialized. */ - virtual void init(std::vector& parameters) { + virtual void init(const std::vector& parameters) { ParameterUpdater::init(parameters); optimizer_->init(parameters_.size(), nullptr); // check no L1 decay in parameter configs @@ -208,7 +208,7 @@ public: * @brief init. Initialize cpu parameters, model average optimizer. 
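 * (Editor's note, not part of the original patch.) After this commit init()
 * receives the parameter list by const reference; the base class
 * ParameterUpdater::init() shown above simply copies the vector into its own
 * parameters_ member, so the updater can no longer modify the caller's vector.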
* @param parameters */ - virtual void init(std::vector& parameters); + virtual void init(const std::vector& parameters); virtual PassType startBatch(int64_t batchSize) { averager_->startBatch(-1UL); diff --git a/paddle/trainer/RemoteParameterUpdater.cpp b/paddle/trainer/RemoteParameterUpdater.cpp index 974e78fa17..630f55d998 100644 --- a/paddle/trainer/RemoteParameterUpdater.cpp +++ b/paddle/trainer/RemoteParameterUpdater.cpp @@ -44,7 +44,7 @@ RemoteParameterUpdater::RemoteParameterUpdater( addParameterType(PARAMETER_MOMENTUM); } -void RemoteParameterUpdater::init(std::vector& parameters) { +void RemoteParameterUpdater::init(const std::vector& parameters) { ParameterUpdater::init(parameters); if (localUpdater_) { @@ -595,7 +595,8 @@ SparseRemoteParameterUpdater::SparseRemoteParameterUpdater( testing_(testing), useApplyInPserver_(false) {} -void SparseRemoteParameterUpdater::init(std::vector& parameters) { +void SparseRemoteParameterUpdater::init( + const std::vector& parameters) { ParameterUpdater::init(parameters); parameterClient_.reset(new ParameterClient2( @@ -809,7 +810,7 @@ void SparseRemoteParameterUpdater::saveParametersRemote( } void SparseRemoteParameterUpdaterComposite::init( - std::vector& parameters) { + const std::vector& parameters) { parameters_ = parameters; std::vector parametersArray[NUMBER_UPDATERS]; diff --git a/paddle/trainer/RemoteParameterUpdater.h b/paddle/trainer/RemoteParameterUpdater.h index 66055c778e..ec6ed443d3 100644 --- a/paddle/trainer/RemoteParameterUpdater.h +++ b/paddle/trainer/RemoteParameterUpdater.h @@ -67,7 +67,7 @@ public: /** * initialize the internal parameter client and itself. */ - virtual void init(std::vector& parameters); + virtual void init(const std::vector& parameters); /** * @brief start batch * @@ -274,7 +274,7 @@ public: } /// initialization - virtual void init(std::vector& parameters); + virtual void init(const std::vector& parameters); /// stateful batch control virtual PassType startBatch(int64_t batchSize); @@ -360,7 +360,7 @@ public: } /// initialization of dense and sparse updaters - virtual void init(std::vector& parameters); + virtual void init(const std::vector& parameters); }; class ParameterUpdaterCreators { diff --git a/paddle/trainer/ThreadParameterUpdater.cpp b/paddle/trainer/ThreadParameterUpdater.cpp index 049022b1f1..2a76d5723c 100644 --- a/paddle/trainer/ThreadParameterUpdater.cpp +++ b/paddle/trainer/ThreadParameterUpdater.cpp @@ -32,7 +32,7 @@ SgdThreadUpdater::SgdThreadUpdater(const OptimizationConfig& optConfig) } } -void SgdThreadUpdater::init(std::vector& parameters) { +void SgdThreadUpdater::init(const std::vector& parameters) { ParameterUpdater::init(parameters); // calc max parameter id diff --git a/paddle/trainer/ThreadParameterUpdater.h b/paddle/trainer/ThreadParameterUpdater.h index d01ac689f9..198435c0f3 100644 --- a/paddle/trainer/ThreadParameterUpdater.h +++ b/paddle/trainer/ThreadParameterUpdater.h @@ -49,7 +49,7 @@ public: // Use the finishPass() function of the base optimizer. virtual bool finishPass(real cost); - virtual void init(std::vector& parameters); + virtual void init(const std::vector& parameters); virtual PassType startBatch(int64_t batchSize); // Call finishBatch for each optimizer. 
virtual void finishBatch(real cost); From f1a94e3ff7fce800f6c846da2ae6ad4312c4acfc Mon Sep 17 00:00:00 2001 From: hedaoyuan Date: Tue, 20 Dec 2016 20:30:13 +0800 Subject: [PATCH 249/265] follow comments --- paddle/function/cross_map_normal_op.cpp | 22 +++++++++++----------- paddle/function/cross_map_normal_op.h | 10 +++++----- paddle/function/cross_map_normal_op_gpu.cu | 10 +++++----- paddle/math/tests/test_matrixCompare.cpp | 1 - 4 files changed, 21 insertions(+), 22 deletions(-) diff --git a/paddle/function/cross_map_normal_op.cpp b/paddle/function/cross_map_normal_op.cpp index a18c0bb750..a9c7693830 100644 --- a/paddle/function/cross_map_normal_op.cpp +++ b/paddle/function/cross_map_normal_op.cpp @@ -20,7 +20,7 @@ namespace paddle { template <> void CrossMapNormal(real* outputs, real* denoms, - real* inputs, + const real* inputs, size_t numSamples, size_t channels, size_t height, @@ -32,7 +32,7 @@ void CrossMapNormal(real* outputs, size_t oneSample = channels * oneImage; CpuVector outputsV(numSamples * oneSample, outputs); - CpuVector inputsV(numSamples * oneSample, inputs); + CpuVector inputsV(numSamples * oneSample, const_cast(inputs)); CpuVector denomsV(numSamples * oneSample, denoms); // f(x) = x * ( 1 + scale * SUM((x)^2) )^(-pow) @@ -44,7 +44,7 @@ void CrossMapNormal(real* outputs, const int end = (int)size + start; for (size_t i = 0; i < numSamples; i++) { real* oneDenom = denoms + i * oneSample; - real* oneInput = inputs + i * oneSample; + real* oneInput = const_cast(inputs) + i * oneSample; for (int c = 0; c < (int)channels; c++) { CpuVector denom(oneImage, oneDenom + c * oneImage); for (int s = start; s < end; s++) { @@ -61,10 +61,10 @@ void CrossMapNormal(real* outputs, template <> void CrossMapNormalGrad(real* inputsGrad, - real* inputsValue, - real* outputsValue, - real* outputsGrad, - real* denoms, + const real* inputsValue, + const real* outputsValue, + const real* outputsGrad, + const real* denoms, size_t numSamples, size_t channels, size_t height, @@ -84,10 +84,10 @@ void CrossMapNormalGrad(real* inputsGrad, for (size_t i = 0; i < numSamples; i++) { size_t sOffset = i * oneSample; real* oneInputGrad = inputsGrad + sOffset; - real* oneInputValue = inputsValue + sOffset; - real* oneDenom = denoms + sOffset; - real* oneOutputGrad = outputsGrad + sOffset; - real* oneOutputValue = outputsValue + sOffset; + real* oneInputValue = const_cast(inputsValue) + sOffset; + real* oneDenom = const_cast(denoms) + sOffset; + real* oneOutputGrad = const_cast(outputsGrad) + sOffset; + real* oneOutputValue = const_cast(outputsValue) + sOffset; for (int c = 0; c < (int)channels; c++) { size_t cOffset = c * height * width; diff --git a/paddle/function/cross_map_normal_op.h b/paddle/function/cross_map_normal_op.h index e935b26e12..b1e401ad0a 100644 --- a/paddle/function/cross_map_normal_op.h +++ b/paddle/function/cross_map_normal_op.h @@ -37,7 +37,7 @@ namespace paddle { template void CrossMapNormal(real* outputs, real* denoms, - real* inputs, + const real* inputs, size_t numSamples, size_t channels, size_t height, @@ -66,10 +66,10 @@ void CrossMapNormal(real* outputs, */ template void CrossMapNormalGrad(real* inputsGrad, - real* inputsValue, - real* outputsValue, - real* outputsGrad, - real* denoms, + const real* inputsValue, + const real* outputsValue, + const real* outputsGrad, + const real* denoms, size_t numSamples, size_t channels, size_t height, diff --git a/paddle/function/cross_map_normal_op_gpu.cu b/paddle/function/cross_map_normal_op_gpu.cu index 6339c04194..aae4f461b6 100644 --- 
a/paddle/function/cross_map_normal_op_gpu.cu +++ b/paddle/function/cross_map_normal_op_gpu.cu @@ -63,7 +63,7 @@ __global__ void KeCMRNormOutput(size_t inputSize, const real* in, template <> void CrossMapNormal(real* outputs, real* denoms, - real* inputs, + const real* inputs, size_t numSamples, size_t channels, size_t height, @@ -132,10 +132,10 @@ __global__ void KeCMRNormDiff(size_t imageSize, const real* bottom_data, template <> void CrossMapNormalGrad(real* inputsGrad, - real* inputsValue, - real* outputsValue, - real* outputsGrad, - real* denoms, + const real* inputsValue, + const real* outputsValue, + const real* outputsGrad, + const real* denoms, size_t numSamples, size_t channels, size_t height, diff --git a/paddle/math/tests/test_matrixCompare.cpp b/paddle/math/tests/test_matrixCompare.cpp index 440534e722..62de5b25e4 100644 --- a/paddle/math/tests/test_matrixCompare.cpp +++ b/paddle/math/tests/test_matrixCompare.cpp @@ -18,7 +18,6 @@ limitations under the License. */ #include #include "TensorCheck.h" -#include "paddle/function/Function.h" #include "paddle/gserver/tests/TestUtil.h" #include "paddle/math/Matrix.h" #include "paddle/math/SparseMatrix.h" From f4f0f2daeb3bd0bffd8302a4388098e0ab1ffed6 Mon Sep 17 00:00:00 2001 From: dangqingqing Date: Tue, 20 Dec 2016 20:30:37 +0800 Subject: [PATCH 250/265] Fix bug in config_parse.py when batch_norm layer is used in RecurrentLayerGroup. --- python/paddle/trainer/config_parser.py | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/python/paddle/trainer/config_parser.py b/python/paddle/trainer/config_parser.py index 39892d0533..0308d9df94 100644 --- a/python/paddle/trainer/config_parser.py +++ b/python/paddle/trainer/config_parser.py @@ -498,9 +498,12 @@ class Input(Cfg): is_static=None, is_shared=None, update_hooks=None, - input_layer_argument=None, ): + input_layer_argument=None, + not_make_layer_name_in_submodel=None, ): self.add_keys(locals()) self.input_layer_name = MakeLayerNameInSubmodel(input_layer_name) + if not_make_layer_name_in_submodel: + self.input_layer_name = input_layer_name # Define a projection for iexed layer @@ -1848,7 +1851,8 @@ class BatchNormLayer(LayerBase): initial_std=0.0, initial_mean=0.0, is_static=True, - is_shared=is_shared, )) + is_shared=is_shared, + not_make_layer_name_in_submodel=True, )) parallel_nn = bool(int(g_command_config_args.get("parallel_nn", 0))) cudnn_version = int(g_command_config_args.get("cudnn_version", 0)) From 35bbb4fb01a2172c867cd7c27a3c805e87f1ea69 Mon Sep 17 00:00:00 2001 From: Peng Li Date: Tue, 20 Dec 2016 21:04:53 +0800 Subject: [PATCH 251/265] change float to real in two test Change float in test_ConvTrans and test_ConvUnify to real. 
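[Editor's note, not part of patch 251.] The intent of this patch is that test fixtures use the library's configured precision type `real` instead of a hard-coded `float`, so the same test compiles in both single- and double-precision builds. A minimal stand-alone sketch of the idea follows; the `PADDLE_TYPE_DOUBLE`-style switch and the typedef location are illustrative assumptions, not quoted from the patch:

#include <iostream>

// Illustrative stand-in for the library-provided precision typedef.
#ifdef PADDLE_TYPE_DOUBLE
typedef double real;
#else
typedef float real;
#endif

int main() {
  // Test data declared with `real` follows whichever precision was configured.
  real resultData[] = {1, 2, 2, 2, 1};
  std::cout << "element size: " << sizeof(resultData[0]) << " bytes\n";
  return 0;
}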
--- paddle/gserver/tests/test_ConvTrans.cpp | 12 +++---- paddle/gserver/tests/test_ConvUnify.cpp | 46 ++++++++++++------------- 2 files changed, 29 insertions(+), 29 deletions(-) diff --git a/paddle/gserver/tests/test_ConvTrans.cpp b/paddle/gserver/tests/test_ConvTrans.cpp index 99202c2d57..dd3378304b 100644 --- a/paddle/gserver/tests/test_ConvTrans.cpp +++ b/paddle/gserver/tests/test_ConvTrans.cpp @@ -206,8 +206,8 @@ TEST(Layer, convTransLayerFwd2) { /* filter_size */ 5, result); - float resultData[] = {1, 2, 2, 2, 1, 2, 4, 4, 4, 2, 2, 4, 4, - 4, 2, 2, 4, 4, 4, 2, 1, 2, 2, 2, 1}; + real resultData[] = {1, 2, 2, 2, 1, 2, 4, 4, 4, 2, 2, 4, 4, + 4, 2, 2, 4, 4, 4, 2, 1, 2, 2, 2, 1}; result->setData(resultData); doOneConvtTest(/* imgSize */ 5, /* output_x */ 2, @@ -216,8 +216,8 @@ TEST(Layer, convTransLayerFwd2) { /* filter_size */ 4, result); - float resultData2[] = {1, 2, 2, 2, 1, 2, 4, 4, 4, 2, 2, 4, 4, - 4, 2, 2, 4, 4, 4, 2, 1, 2, 2, 2, 1}; + real resultData2[] = {1, 2, 2, 2, 1, 2, 4, 4, 4, 2, 2, 4, 4, + 4, 2, 2, 4, 4, 4, 2, 1, 2, 2, 2, 1}; result->setData(resultData2); doOneConvtTest(/* imgSize */ 5, /* output_x */ 2, @@ -226,8 +226,8 @@ TEST(Layer, convTransLayerFwd2) { /* filter_size */ 5, result); - float resultData3[] = {1, 1, 2, 1, 1, 1, 1, 2, 1, 1, 2, 2, 4, - 2, 2, 1, 1, 2, 1, 1, 1, 1, 2, 1, 1}; + real resultData3[] = {1, 1, 2, 1, 1, 1, 1, 2, 1, 1, 2, 2, 4, + 2, 2, 1, 1, 2, 1, 1, 1, 1, 2, 1, 1}; result->setData(resultData3); doOneConvtTest(/* imgSize */ 5, /* output_x */ 2, diff --git a/paddle/gserver/tests/test_ConvUnify.cpp b/paddle/gserver/tests/test_ConvUnify.cpp index 2ab18f8868..072a886a19 100644 --- a/paddle/gserver/tests/test_ConvUnify.cpp +++ b/paddle/gserver/tests/test_ConvUnify.cpp @@ -106,8 +106,8 @@ TEST(Layer, convParaUnified) { #ifndef PADDLE_ONLY_CPU MatrixPtr input, resultCpu, resultGpu; input = Matrix::create(1, 4 * 4, false, false); - float inputData[] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16}; - float param[] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 9, 8, 7, 6, 5, 4, 3, 2, 1}; + real inputData[] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16}; + real param[] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 9, 8, 7, 6, 5, 4, 3, 2, 1}; input->setData(inputData); @@ -137,26 +137,26 @@ TEST(Layer, convParaUnified) { checkMatrixEqual(resultCpu, resultGpu); input = Matrix::create(1, 3 * 3 * 2, false, false); - float inputData2[] = {1, - 2, - 3, - 4, - 5, - 6, - 7, - 8, - 9, - - 10, - 11, - 12, - 13, - 14, - 15, - 16, - 17, - 18}; - float param2[] = {1, 2, 3, 4, 5, 6, 7, 8, 8, 7, 6, 5, 4, 3, 2, 1}; + real inputData2[] = {1, + 2, + 3, + 4, + 5, + 6, + 7, + 8, + 9, + + 10, + 11, + 12, + 13, + 14, + 15, + 16, + 17, + 18}; + real param2[] = {1, 2, 3, 4, 5, 6, 7, 8, 8, 7, 6, 5, 4, 3, 2, 1}; input->setData(inputData2); @@ -185,7 +185,7 @@ TEST(Layer, convParaUnified) { true); checkMatrixEqual(resultCpu, resultGpu); - float param3[] = {1, 2, 3, 4, 4, 3, 2, 1}; + real param3[] = {1, 2, 3, 4, 4, 3, 2, 1}; resultCpu = doOneConvTest(/* imgSize */ 3, /* output_x */ 2, From 56f29658ba935e539ac8977f44a5c942cb09bc29 Mon Sep 17 00:00:00 2001 From: Yu Yang Date: Tue, 20 Dec 2016 22:12:56 +0800 Subject: [PATCH 252/265] Remove not used params in GradientMachine::start --- paddle/gserver/gradientmachines/GradientMachine.h | 6 +----- paddle/gserver/gradientmachines/MultiGradientMachine.cpp | 2 +- paddle/gserver/gradientmachines/MultiNetwork.cpp | 5 ++--- paddle/gserver/gradientmachines/MultiNetwork.h | 2 +- paddle/gserver/gradientmachines/ParallelNeuralNetwork.cpp | 6 +----- 
paddle/gserver/gradientmachines/ParallelNeuralNetwork.h | 2 +- paddle/gserver/tests/test_NetworkCompare.cpp | 2 +- paddle/gserver/tests/test_RecurrentGradientMachine.cpp | 2 +- paddle/trainer/Tester.cpp | 2 +- paddle/trainer/Trainer.cpp | 4 ++-- paddle/trainer/tests/test_Compare.cpp | 2 +- paddle/trainer/tests/test_CompareTwoNets.cpp | 2 +- 12 files changed, 14 insertions(+), 23 deletions(-) diff --git a/paddle/gserver/gradientmachines/GradientMachine.h b/paddle/gserver/gradientmachines/GradientMachine.h index 579eca71d4..ad82869aec 100644 --- a/paddle/gserver/gradientmachines/GradientMachine.h +++ b/paddle/gserver/gradientmachines/GradientMachine.h @@ -212,11 +212,7 @@ public: * @note This function will only been implemented and used in a * multithreaded environment. */ - virtual void start(const TrainerConfig& config, - DataProviderPtr dataProvider) { - (void)config; - (void)dataProvider; - } + virtual void start() {} /** * @brief check each work-thread whether is failed/error/finish, diff --git a/paddle/gserver/gradientmachines/MultiGradientMachine.cpp b/paddle/gserver/gradientmachines/MultiGradientMachine.cpp index 88c098b355..95a4c0e16a 100644 --- a/paddle/gserver/gradientmachines/MultiGradientMachine.cpp +++ b/paddle/gserver/gradientmachines/MultiGradientMachine.cpp @@ -441,7 +441,7 @@ TrainerThread::TrainerThread(const ModelConfig& config, TrainerThread::~TrainerThread() { stop(); } void TrainerThread::start() { - gradientMachine_->start(*(TrainerConfig*)nullptr, (DataProviderPtr) nullptr); + gradientMachine_->start(); computeThread_.reset(new std::thread([this]() { computeThread(); })); diff --git a/paddle/gserver/gradientmachines/MultiNetwork.cpp b/paddle/gserver/gradientmachines/MultiNetwork.cpp index 6eb3d8db96..f1308f3721 100644 --- a/paddle/gserver/gradientmachines/MultiNetwork.cpp +++ b/paddle/gserver/gradientmachines/MultiNetwork.cpp @@ -109,10 +109,9 @@ void MultiNetwork::onPassEnd() { } } -void MultiNetwork::start(const TrainerConfig& config, - DataProviderPtr dataProvider) { +void MultiNetwork::start() { for (auto& subNetwork : subNetworks_) { - subNetwork->start(config, dataProvider); + subNetwork->start(); } } diff --git a/paddle/gserver/gradientmachines/MultiNetwork.h b/paddle/gserver/gradientmachines/MultiNetwork.h index 89fbf32b4f..f04406b983 100644 --- a/paddle/gserver/gradientmachines/MultiNetwork.h +++ b/paddle/gserver/gradientmachines/MultiNetwork.h @@ -54,7 +54,7 @@ public: return subNetworks_; } - virtual void start(const TrainerConfig& config, DataProviderPtr dataProvider); + virtual void start(); virtual void finish(); diff --git a/paddle/gserver/gradientmachines/ParallelNeuralNetwork.cpp b/paddle/gserver/gradientmachines/ParallelNeuralNetwork.cpp index 980a5851a2..c6e3a3b321 100644 --- a/paddle/gserver/gradientmachines/ParallelNeuralNetwork.cpp +++ b/paddle/gserver/gradientmachines/ParallelNeuralNetwork.cpp @@ -131,11 +131,7 @@ void ParallelNeuralNetwork::forwardBackward(const std::vector& inArgs, backward(callback); } -void ParallelNeuralNetwork::start(const TrainerConfig& config, - DataProviderPtr dataProvider) { - (void)config; - (void)dataProvider; - +void ParallelNeuralNetwork::start() { for (auto& thread : threads_) { thread->start(); } diff --git a/paddle/gserver/gradientmachines/ParallelNeuralNetwork.h b/paddle/gserver/gradientmachines/ParallelNeuralNetwork.h index 8f445b1ded..39f5682a58 100644 --- a/paddle/gserver/gradientmachines/ParallelNeuralNetwork.h +++ b/paddle/gserver/gradientmachines/ParallelNeuralNetwork.h @@ -56,7 +56,7 @@ public: PassType 
passType, const UpdateCallback &callback = NULL); - virtual void start(const TrainerConfig &config, DataProviderPtr dataProvider); + virtual void start(); void addComputeThread(int deviceId); diff --git a/paddle/gserver/tests/test_NetworkCompare.cpp b/paddle/gserver/tests/test_NetworkCompare.cpp index fc60228f81..0d26105955 100644 --- a/paddle/gserver/tests/test_NetworkCompare.cpp +++ b/paddle/gserver/tests/test_NetworkCompare.cpp @@ -114,7 +114,7 @@ void calcGradient(DataIn& in, DataOut& out, const std::string& configPath) { parameters[i]->getBuf(PARAMETER_VALUE)->copyFrom(*in.paraValues[i]); } } - gradientMachine->start(trainer.getConfig(), nullptr); + gradientMachine->start(); gradientMachine->forward(in.inArgs, &outArgs, PASS_TRAIN); for (size_t i = 0; i < in.outGrads.size(); i++) { // If the all the layers in the config have no parameters, also diff --git a/paddle/gserver/tests/test_RecurrentGradientMachine.cpp b/paddle/gserver/tests/test_RecurrentGradientMachine.cpp index e19cf35cd5..150850da4d 100644 --- a/paddle/gserver/tests/test_RecurrentGradientMachine.cpp +++ b/paddle/gserver/tests/test_RecurrentGradientMachine.cpp @@ -28,7 +28,7 @@ class TrainerForTest : public paddle::Trainer { public: void startTrain() { GradientMachine& gm = *this->trainerInternal_.getGradientMachine(); - gm.start(this->getConfig(), dataProvider_); + gm.start(); } void finishTrain() { diff --git a/paddle/trainer/Tester.cpp b/paddle/trainer/Tester.cpp index 24fac3e5a8..13aa28ae5d 100644 --- a/paddle/trainer/Tester.cpp +++ b/paddle/trainer/Tester.cpp @@ -257,7 +257,7 @@ void Tester::test() { CHECK(testDataProvider_) << "TestData is not specified"; testDataProvider_->setSkipShuffle(); testDataProvider_->reset(); - gradientMachine_->start(*config_, testDataProvider_); + gradientMachine_->start(); // For evaluation std::vector modelList; diff --git a/paddle/trainer/Trainer.cpp b/paddle/trainer/Trainer.cpp index 1eec2c432d..6c57467cca 100644 --- a/paddle/trainer/Trainer.cpp +++ b/paddle/trainer/Trainer.cpp @@ -308,7 +308,7 @@ static double genPerturbation(real* d, real* grad, size_t dim) { } real Trainer::checkGradient() { - trainerInternal_.getGradientMachine()->start(*config_, dataProvider_); + trainerInternal_.getGradientMachine()->start(); std::vector& parameters = trainerInternal_.getGradientMachine()->getNonStaticParameters(); DataBatch dataBatch; @@ -390,7 +390,7 @@ void Trainer::startTrain() { dataProvider_->reset(); } - trainerInternal_.getGradientMachine()->start(*config_, dataProvider_); + trainerInternal_.getGradientMachine()->start(); } void Trainer::finishTrain() { trainerInternal_.getGradientMachine()->finish(); } diff --git a/paddle/trainer/tests/test_Compare.cpp b/paddle/trainer/tests/test_Compare.cpp index 72fc76bea3..e855a8fe2e 100644 --- a/paddle/trainer/tests/test_Compare.cpp +++ b/paddle/trainer/tests/test_Compare.cpp @@ -50,7 +50,7 @@ void calcGradient(bool useGpu, comData& Data) { trainer.getDataProvider()->getNextBatch(batchSize, &dataBatch); CHECK(dataBatch.getSize()) << "No data from data provider"; vector& inArgs = dataBatch.getStreams(); - trainer.getGradientMachine()->start(trainer.getConfig(), nullptr); + trainer.getGradientMachine()->start(); for (int i = 0; i < 2; ++i) { trainer.getGradientMachine()->forwardBackward( inArgs, &Data.outArgs, PASS_TRAIN); diff --git a/paddle/trainer/tests/test_CompareTwoNets.cpp b/paddle/trainer/tests/test_CompareTwoNets.cpp index 80c61e259e..94f65e545d 100644 --- a/paddle/trainer/tests/test_CompareTwoNets.cpp +++ 
b/paddle/trainer/tests/test_CompareTwoNets.cpp @@ -72,7 +72,7 @@ void calcGradient(ComData& data, const string configFile) { CHECK(dataBatch.getSize()) << "No data from data provider"; vector& inArgs = dataBatch.getStreams(); - trainer.getGradientMachine()->start(trainer.getConfig(), nullptr); + trainer.getGradientMachine()->start(); trainer.getGradientMachine()->forwardBackward( inArgs, &data.outArgs, PASS_TRAIN); From 71a316ea1f79fe0fef451d98ee7e89e6abcdca7c Mon Sep 17 00:00:00 2001 From: Yu Yang Date: Tue, 20 Dec 2016 23:12:22 +0800 Subject: [PATCH 253/265] Remove unused cost parameter in ParameterUpdater --- paddle/parameter/ParameterUpdaterBase.h | 6 +++--- paddle/trainer/ParameterUpdater.h | 8 ++++---- paddle/trainer/RemoteParameterUpdater.cpp | 4 ++-- paddle/trainer/RemoteParameterUpdater.h | 4 ++-- paddle/trainer/ThreadParameterUpdater.cpp | 2 +- paddle/trainer/ThreadParameterUpdater.h | 2 +- paddle/trainer/Trainer.cpp | 2 +- 7 files changed, 14 insertions(+), 14 deletions(-) diff --git a/paddle/parameter/ParameterUpdaterBase.h b/paddle/parameter/ParameterUpdaterBase.h index 5401046f67..d13dbe33cf 100644 --- a/paddle/parameter/ParameterUpdaterBase.h +++ b/paddle/parameter/ParameterUpdaterBase.h @@ -38,7 +38,7 @@ public: virtual void startPass() {} // called by Trainer then finishing a pass, ruturn true if pass accepted - virtual bool finishPass(real cost = 0) { return true; } + virtual bool finishPass() { return true; } // called by Trainer before backward() of a batch // Return the type of pass it needs. This pass type will be passed @@ -112,9 +112,9 @@ public: [&](int tid, size_t numThreads) { updaters_[tid]->startPass(); }); } - virtual bool finishPass(real cost = 0) { + virtual bool finishPass() { syncThreadPool_->execPlusOwner( - [&](int tid, size_t numThreads) { updaters_[tid]->finishPass(cost); }); + [&](int tid, size_t numThreads) { updaters_[tid]->finishPass(); }); return true; } diff --git a/paddle/trainer/ParameterUpdater.h b/paddle/trainer/ParameterUpdater.h index e52b5cd318..9e62580ccb 100644 --- a/paddle/trainer/ParameterUpdater.h +++ b/paddle/trainer/ParameterUpdater.h @@ -102,9 +102,9 @@ public: * @param cost sum cost during one pass. * @return true if accept (used for owlqn). 
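 * (Editor's note, not part of the original patch.) "owlqn" here refers to the
 * OWL-QN (Orthant-Wise Limited-memory Quasi-Newton) optimizer, which may
 * reject a pass; plain SGD updaters fall through to the base implementation,
 * which always returns true.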
*/ - virtual bool finishPass(real cost) { + virtual bool finishPass() { optimizer_->finishPass(); - return ParameterUpdater::finishPass(cost); + return ParameterUpdater::finishPass(); } /** @@ -220,9 +220,9 @@ public: averager_->startPass(); SgdLocalUpdater::startPass(); } - virtual bool finishPass(real cost) { + virtual bool finishPass() { averager_->finishPass(); - return SgdLocalUpdater::finishPass(cost); + return SgdLocalUpdater::finishPass(); } /// apply the averaged parameter to PARAMETER_VALUE diff --git a/paddle/trainer/RemoteParameterUpdater.cpp b/paddle/trainer/RemoteParameterUpdater.cpp index 974e78fa17..6ee2ed9158 100644 --- a/paddle/trainer/RemoteParameterUpdater.cpp +++ b/paddle/trainer/RemoteParameterUpdater.cpp @@ -309,7 +309,7 @@ void RemoteParameterUpdater::startPass() { } } -bool RemoteParameterUpdater::finishPass(real cost) { +bool RemoteParameterUpdater::finishPass() { if (localUpdater_) { localUpdater_->finishPass(); } @@ -711,7 +711,7 @@ void SparseRemoteParameterUpdater::startPass() { } } -bool SparseRemoteParameterUpdater::finishPass(real cost) { +bool SparseRemoteParameterUpdater::finishPass() { if (config_.algorithm() == TrainAlgorithm::SGD) { parameterClient_->waitPassFinish(); } else { diff --git a/paddle/trainer/RemoteParameterUpdater.h b/paddle/trainer/RemoteParameterUpdater.h index 66055c778e..8c5d5bb66b 100644 --- a/paddle/trainer/RemoteParameterUpdater.h +++ b/paddle/trainer/RemoteParameterUpdater.h @@ -90,7 +90,7 @@ public: */ virtual void finishBatch(real cost); virtual void startPass(); - virtual bool finishPass(real cost); + virtual bool finishPass(); #ifndef PADDLE_DISABLE_TIMER virtual void setForwardbackwardTime(uint64_t delta) { @@ -281,7 +281,7 @@ public: /// send all sparse related parameters to all pservers virtual void finishBatch(real cost); virtual void startPass(); - virtual bool finishPass(real cost); + virtual bool finishPass(); virtual void apply(); virtual void restore(); diff --git a/paddle/trainer/ThreadParameterUpdater.cpp b/paddle/trainer/ThreadParameterUpdater.cpp index 049022b1f1..36d42ed7e9 100644 --- a/paddle/trainer/ThreadParameterUpdater.cpp +++ b/paddle/trainer/ThreadParameterUpdater.cpp @@ -70,7 +70,7 @@ void SgdThreadUpdater::startPass() { } } -bool SgdThreadUpdater::finishPass(real cost) { +bool SgdThreadUpdater::finishPass() { catchUpWith(); for (auto& para : parameters_) { diff --git a/paddle/trainer/ThreadParameterUpdater.h b/paddle/trainer/ThreadParameterUpdater.h index d01ac689f9..61f337ecb3 100644 --- a/paddle/trainer/ThreadParameterUpdater.h +++ b/paddle/trainer/ThreadParameterUpdater.h @@ -47,7 +47,7 @@ public: virtual void startPass(); // Use the finishPass() function of the base optimizer. 
- virtual bool finishPass(real cost); + virtual bool finishPass(); virtual void init(std::vector& parameters); virtual PassType startBatch(int64_t batchSize); diff --git a/paddle/trainer/Trainer.cpp b/paddle/trainer/Trainer.cpp index 1eec2c432d..031e3b7cf1 100644 --- a/paddle/trainer/Trainer.cpp +++ b/paddle/trainer/Trainer.cpp @@ -537,7 +537,7 @@ void Trainer::trainOnePassBatch(int passId) { trainerInternal_.getGradientMachine()->onPassEnd(); - bool accepted = trainerInternal_.getParameterUpdater()->finishPass(cost); + bool accepted = trainerInternal_.getParameterUpdater()->finishPass(); globalStat.setThreadInfo(true); globalStat.printAllStatus(); From 84ad724f99164eba9c45dfc10e280f8f8104689a Mon Sep 17 00:00:00 2001 From: xuwei06 Date: Tue, 20 Dec 2016 16:31:11 -0800 Subject: [PATCH 254/265] Adding namespace in timing macros Sometime those macros are used under different namespaces. We need to use namespace ::paddle to make it compile correctly. Change-Id: I57a6d6ec8cd0d680b584aab62d72a35c226a24a4 --- paddle/utils/Stat.cpp | 3 +++ paddle/utils/Stat.h | 49 +++++++++++++++++++++++++++---------------- 2 files changed, 34 insertions(+), 18 deletions(-) diff --git a/paddle/utils/Stat.cpp b/paddle/utils/Stat.cpp index 44acee2495..c7194d3bf1 100644 --- a/paddle/utils/Stat.cpp +++ b/paddle/utils/Stat.cpp @@ -137,6 +137,9 @@ void StatSet::printSegTimerStatus() { void StatSet::printBarrierTimerStatus() { ReadLockGuard guard(lock_); + if (barrierStatSet_.empty()) { + return; + } // control barrierAbstact in runtime, so enable compliation LOG(INFO) << std::setiosflags(std::ios::left) << std::setfill(' ') << "======= BarrierStatSet status ======" << std::endl; diff --git a/paddle/utils/Stat.h b/paddle/utils/Stat.h index 9be79e8859..d9cc6e413a 100644 --- a/paddle/utils/Stat.h +++ b/paddle/utils/Stat.h @@ -258,28 +258,41 @@ inline StatSet& registerTimerArg2(uint64_t threshold = -1, // The default arguments are shown in the following line: // REGISTER_TIMER(statName, threshold = -1, statSet = globalStat) // TODO(yuyang18,wangyanfei01): if UNIQUE_NAME is needed -#define REGISTER_TIMER(statName, ...) \ - static StatPtr __stat = registerTimerArg2(__VA_ARGS__).getStat(statName); \ - TimerOnce __timerOnce(__stat.get(), "", registerTimerArg1(__VA_ARGS__)); +#define REGISTER_TIMER(statName, ...) \ + static ::paddle::StatPtr __stat = \ + ::paddle::registerTimerArg2(__VA_ARGS__).getStat(statName); \ + ::paddle::TimerOnce __timerOnce( \ + __stat.get(), "", ::paddle::registerTimerArg1(__VA_ARGS__)); #define REGISTER_TIMER_SET(statName, start, ...) \ - static StatPtr __stat = registerTimerArg2(__VA_ARGS__).getStat(statName); \ - TimerOnce __timerOnce( \ - __stat.get(), "", registerTimerArg1(__VA_ARGS__), false, start); + static ::paddle::StatPtr __stat = \ + ::paddle::registerTimerArg2(__VA_ARGS__).getStat(statName); \ + ::paddle::TimerOnce __timerOnce(__stat.get(), \ + "", \ + ::paddle::registerTimerArg1(__VA_ARGS__), \ + false, \ + start); // dynmaic timer, support to discriminate runtime entity, used in pserver -#define REGISTER_TIMER_DYNAMIC(statName, ...) \ - StatPtr __stat = registerTimerArg2(__VA_ARGS__).getStat(statName); \ - TimerOnce __timerOnce(__stat.get(), "", registerTimerArg1(__VA_ARGS__)); - -#define REGISTER_TIMER_DYNAMIC_SET(statName, start, ...) 
\ - StatPtr __stat = registerTimerArg2(__VA_ARGS__).getStat(statName); \ - TimerOnce __timerOnce( \ - __stat.get(), "", registerTimerArg1(__VA_ARGS__), false, start); - -#define REGISTER_TIMER_INFO(statName, info) \ - static StatPtr __stat = globalStat.getStat(statName); \ - TimerOnce __timerOnce(__stat.get(), info, 10 * 1000000LU /*threshold*/); +#define REGISTER_TIMER_DYNAMIC(statName, ...) \ + ::paddle::StatPtr __stat = \ + ::paddle::registerTimerArg2(__VA_ARGS__).getStat(statName); \ + ::paddle::TimerOnce __timerOnce( \ + __stat.get(), "", ::paddle::registerTimerArg1(__VA_ARGS__)); + +#define REGISTER_TIMER_DYNAMIC_SET(statName, start, ...) \ + ::paddle::StatPtr __stat = \ + ::paddle::registerTimerArg2(__VA_ARGS__).getStat(statName); \ + ::paddle::TimerOnce __timerOnce(__stat.get(), \ + "", \ + ::paddle::registerTimerArg1(__VA_ARGS__), \ + false, \ + start); + +#define REGISTER_TIMER_INFO(statName, info) \ + static ::paddle::StatPtr __stat = ::paddle::globalStat.getStat(statName); \ + ::paddle::TimerOnce __timerOnce( \ + __stat.get(), info, 10 * 1000000LU /*threshold*/); #endif // DISABLE_TIMER From 5bb29ece7fd5352b93100a20b4bf904c5b5bc2f0 Mon Sep 17 00:00:00 2001 From: dangqingqing Date: Wed, 21 Dec 2016 09:55:09 +0800 Subject: [PATCH 255/265] close log info in BN. --- python/paddle/trainer/config_parser.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/python/paddle/trainer/config_parser.py b/python/paddle/trainer/config_parser.py index 0308d9df94..8389476e6a 100644 --- a/python/paddle/trainer/config_parser.py +++ b/python/paddle/trainer/config_parser.py @@ -1884,7 +1884,7 @@ class BatchNormLayer(LayerBase): # when either of it is non-zero. if input_layer.width != 0 or input_layer.height != 0: self.set_cnn_layer(name, image_conf.img_size_y, image_conf.img_size, - image_conf.channels, True) + image_conf.channels, False) else: self.set_layer_size(input_layer.size) From 67fcd898c5f17bf7a61237351e9087257a8a34f2 Mon Sep 17 00:00:00 2001 From: Peng Li Date: Wed, 21 Dec 2016 10:14:10 +0800 Subject: [PATCH 256/265] fix array style problem --- paddle/gserver/tests/test_ConvUnify.cpp | 21 ++------------------- 1 file changed, 2 insertions(+), 19 deletions(-) diff --git a/paddle/gserver/tests/test_ConvUnify.cpp b/paddle/gserver/tests/test_ConvUnify.cpp index 072a886a19..ad99b50245 100644 --- a/paddle/gserver/tests/test_ConvUnify.cpp +++ b/paddle/gserver/tests/test_ConvUnify.cpp @@ -137,25 +137,8 @@ TEST(Layer, convParaUnified) { checkMatrixEqual(resultCpu, resultGpu); input = Matrix::create(1, 3 * 3 * 2, false, false); - real inputData2[] = {1, - 2, - 3, - 4, - 5, - 6, - 7, - 8, - 9, - - 10, - 11, - 12, - 13, - 14, - 15, - 16, - 17, - 18}; + real inputData2[] = { + 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18}; real param2[] = {1, 2, 3, 4, 5, 6, 7, 8, 8, 7, 6, 5, 4, 3, 2, 1}; input->setData(inputData2); From de8927ebe10a2ce8f7eb05b45e07794998040270 Mon Sep 17 00:00:00 2001 From: Luo Tao Date: Wed, 21 Dec 2016 10:30:58 +0800 Subject: [PATCH 257/265] refine docs.sh --- paddle/scripts/travis/docs.sh | 7 ++----- 1 file changed, 2 insertions(+), 5 deletions(-) diff --git a/paddle/scripts/travis/docs.sh b/paddle/scripts/travis/docs.sh index cd331522a9..1b05dce29a 100755 --- a/paddle/scripts/travis/docs.sh +++ b/paddle/scripts/travis/docs.sh @@ -9,12 +9,9 @@ make paddle_docs paddle_docs_cn # check websites for broken links set +e -linkchecker doc/cn/html/index.html > doc_cn.out -linkchecker doc/en/html/index.html > doc_en.out -for i in doc_cn.out doc_en.out; do - 
grep " 0 errors found" $i +for i in cn en; do + linkchecker doc/$i/html/index.html if [ $? -ne 0 ]; then - cat $i exit 1 fi done From f2029298a7d44d396e4e87bef07c55d10a06e498 Mon Sep 17 00:00:00 2001 From: gaoyuan Date: Wed, 21 Dec 2016 10:35:43 +0800 Subject: [PATCH 258/265] Change type float to real. --- paddle/gserver/layers/PriorBox.cpp | 20 ++++----- paddle/gserver/tests/test_PriorBox.cpp | 56 +++++++++++++------------- 2 files changed, 38 insertions(+), 38 deletions(-) diff --git a/paddle/gserver/layers/PriorBox.cpp b/paddle/gserver/layers/PriorBox.cpp index ca61dfec5f..abaeaf3c1c 100644 --- a/paddle/gserver/layers/PriorBox.cpp +++ b/paddle/gserver/layers/PriorBox.cpp @@ -36,8 +36,8 @@ public: int numPriors_; std::vector minSize_; std::vector maxSize_; - std::vector aspectRatio_; - std::vector variance_; + std::vector aspectRatio_; + std::vector variance_; MatrixPtr buffer_; }; @@ -77,8 +77,8 @@ void PriorBoxLayer::forward(PassType passType) { int imageWidth = image.getFrameWidth(); int imageHeight = image.getFrameHeight(); - float stepW = static_cast(imageWidth) / layerWidth; - float stepH = static_cast(imageHeight) / layerHeight; + real stepW = static_cast(imageWidth) / layerWidth; + real stepH = static_cast(imageHeight) / layerHeight; int dim = layerHeight * layerWidth * numPriors_ * 4; reserveOutput(1, dim * 2); // use a cpu buffer to compute @@ -88,8 +88,8 @@ void PriorBoxLayer::forward(PassType passType) { int idx = 0; for (int h = 0; h < layerHeight; ++h) { for (int w = 0; w < layerWidth; ++w) { - float centerX = (w + 0.5) * stepW; - float centerY = (h + 0.5) * stepH; + real centerX = (w + 0.5) * stepW; + real centerY = (h + 0.5) * stepH; int minSize = 0; for (size_t s = 0; s < minSize_.size(); s++) { // first prior. @@ -121,10 +121,10 @@ void PriorBoxLayer::forward(PassType passType) { } // rest of priors. for (size_t r = 0; r < aspectRatio_.size(); r++) { - float ar = aspectRatio_[r]; + real ar = aspectRatio_[r]; if (fabs(ar - 1.) < 1e-6) continue; - float boxWidth = minSize * sqrt(ar); - float boxHeight = minSize / sqrt(ar); + real boxWidth = minSize * sqrt(ar); + real boxHeight = minSize / sqrt(ar); tmpPtr[idx++] = (centerX - boxWidth / 2.) / imageWidth; tmpPtr[idx++] = (centerY - boxHeight / 2.) / imageHeight; tmpPtr[idx++] = (centerX + boxWidth / 2.) / imageWidth; @@ -137,7 +137,7 @@ void PriorBoxLayer::forward(PassType passType) { // clip the prior's coordidate such that it is within [0, 1] for (int d = 0; d < dim * 2; ++d) if ((d % 8) < 4) - tmpPtr[d] = std::min(std::max(tmpPtr[d], (float)0.), (float)1.); + tmpPtr[d] = std::min(std::max(tmpPtr[d], (real)0.), (real)1.); MatrixPtr outV = getOutputValue(); outV->copyFrom(buffer_->data_, dim * 2); } diff --git a/paddle/gserver/tests/test_PriorBox.cpp b/paddle/gserver/tests/test_PriorBox.cpp index 19dfd0f065..a6d6a24269 100644 --- a/paddle/gserver/tests/test_PriorBox.cpp +++ b/paddle/gserver/tests/test_PriorBox.cpp @@ -30,8 +30,8 @@ void doOnePriorBoxTest(size_t feature_map_width, size_t image_height, vector min_size, vector max_size, - vector aspect_ratio, - vector variance, + vector aspect_ratio, + vector variance, bool use_gpu, MatrixPtr& result) { // Setting up the priorbox layer @@ -71,8 +71,8 @@ void doOnePriorBoxTest(size_t feature_map_width, TEST(Layer, priorBoxLayerFwd) { vector minSize; vector maxSize; - vector aspectRatio; - vector variance; + vector aspectRatio; + vector variance; bool useGpu = false; minSize.push_back(276); @@ -84,22 +84,22 @@ TEST(Layer, priorBoxLayerFwd) { // CPU case 1. 
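  // (Editor's note, not part of the original patch.) Where the expected
  // numbers below come from: with a 1x1 feature map the box centre sits at
  // the middle of the input image, and the first prior spans minSize = 276
  // pixels, so on the 300x300 image implied by these values
  //   xmin = (150 - 276 / 2) / 300 = 0.04 and xmax = (150 + 276 / 2) / 300 = 0.96,
  // followed by the four variances (0.1, 0.1, 0.2, 0.2). The second prior,
  // derived from maxSize, is larger than the image, so after the [0, 1]
  // clipping step shown above its corners become (0, 0, 1, 1).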
MatrixPtr result; - float resultData[] = {0.04, - 0.04, - 0.96, - 0.96, - 0.1, - 0.1, - 0.2, - 0.2, - 0, - 0, - 1, - 1, - 0.1, - 0.1, - 0.2, - 0.2}; + real resultData[] = {0.04, + 0.04, + 0.96, + 0.96, + 0.1, + 0.1, + 0.2, + 0.2, + 0, + 0, + 1, + 1, + 0.1, + 0.1, + 0.2, + 0.2}; result = Matrix::create(1, 2 * 8, false, useGpu); result->setData(resultData); doOnePriorBoxTest(/* feature_map_width */ 1, @@ -116,10 +116,10 @@ TEST(Layer, priorBoxLayerFwd) { variance[1] = 0.2; variance[3] = 0.1; maxSize.pop_back(); - float resultData2[] = {0, 0, 0.595, 0.595, 0.1, 0.2, 0.2, 0.1, - 0.405, 0, 1, 0.595, 0.1, 0.2, 0.2, 0.1, - 0, 0.405, 0.595, 1, 0.1, 0.2, 0.2, 0.1, - 0.405, 0.405, 1, 1, 0.1, 0.2, 0.2, 0.1}; + real resultData2[] = {0, 0, 0.595, 0.595, 0.1, 0.2, 0.2, 0.1, + 0.405, 0, 1, 0.595, 0.1, 0.2, 0.2, 0.1, + 0, 0.405, 0.595, 1, 0.1, 0.2, 0.2, 0.1, + 0.405, 0.405, 1, 1, 0.1, 0.2, 0.2, 0.1}; Matrix::resizeOrCreate(result, 1, 4 * 8, false, useGpu); result->setData(resultData2); doOnePriorBoxTest(/* feature_map_width */ 2, @@ -134,10 +134,10 @@ TEST(Layer, priorBoxLayerFwd) { result); // CPU case 3. aspectRatio.push_back(2); - float resultData3[] = {0.04, 0.04, 0.96, 0.96, 0.1, 0.2, - 0.2, 0.1, 0, 0.17473088, 1, 0.825269, - 0.1, 0.2, 0.2, 0.1, 0.17473088, 0, - 0.825269, 1, 0.1, 0.2, 0.2, 0.1}; + real resultData3[] = {0.04, 0.04, 0.96, 0.96, 0.1, 0.2, + 0.2, 0.1, 0, 0.17473088, 1, 0.825269, + 0.1, 0.2, 0.2, 0.1, 0.17473088, 0, + 0.825269, 1, 0.1, 0.2, 0.2, 0.1}; Matrix::resizeOrCreate(result, 1, 3 * 8, false, useGpu); result->setData(resultData3); doOnePriorBoxTest(/* feature_map_width */ 1, From 1b8e151fa2b1d79b3e145600f136e6d3d556fe70 Mon Sep 17 00:00:00 2001 From: Peng Li Date: Wed, 21 Dec 2016 10:49:04 +0800 Subject: [PATCH 259/265] Support user specified label input in tests --- paddle/gserver/tests/LayerGradUtil.cpp | 36 ++++++++++++++++++++++---- paddle/gserver/tests/LayerGradUtil.h | 19 ++++++++++++++ 2 files changed, 50 insertions(+), 5 deletions(-) diff --git a/paddle/gserver/tests/LayerGradUtil.cpp b/paddle/gserver/tests/LayerGradUtil.cpp index 1d5e7de1ba..57c176810f 100644 --- a/paddle/gserver/tests/LayerGradUtil.cpp +++ b/paddle/gserver/tests/LayerGradUtil.cpp @@ -303,13 +303,31 @@ void initDataLayer(TestConfig testConf, ICpuGpuVectorPtr sequenceStartPositions; ICpuGpuVectorPtr subSequenceStartPositions; IVectorPtr cpuSequenceDims; - for (size_t i = 0; i < testConf.inputDefs.size(); i++) { + for (size_t i = 0; i < testConf.inputDefs.size(); ++i) { + if (testConf.inputDefs[i].inputType != INPUT_SEQUENCE_LABEL) continue; + + const std::vector& labelSeqStartPositions = + testConf.inputDefs[i].labelSeqStartPositions; + if (labelSeqStartPositions.size() != 0) { + CHECK(!sequenceStartPositions); + CHECK_GE(labelSeqStartPositions.size(), 2); + + sequenceStartPositions = + ICpuGpuVector::create(labelSeqStartPositions.size(), useGpu); + sequenceStartPositions->copyFrom( + labelSeqStartPositions.data(), labelSeqStartPositions.size(), useGpu); + } + } + + for (size_t i = 0; i < testConf.inputDefs.size(); ++i) { LayerConfig config; config.set_name(testConf.inputDefs[i].name); config.set_type("data"); config.set_size(testConf.inputDefs[i].dim); LayerPtr layer = LayerPtr(new DataLayer(config)); - size_t numSequence = batchSize / 10 + 1; + size_t numSequence = sequenceStartPositions + ? 
sequenceStartPositions->getSize() - 1 + : batchSize / 10 + 1; Argument data; auto fillData = [&](bool trans, int height, int width) { @@ -336,9 +354,17 @@ void initDataLayer(TestConfig testConf, break; case INPUT_LABEL: case INPUT_SEQUENCE_LABEL: - data.ids = VectorT::create(batchSize, useGpu); - // now rand number can be 0 to inputDefs[i].dim - data.ids->rand(testConf.inputDefs[i].dim); + if (testConf.inputDefs[i].labelInitValue.size() != 0) { + const std::vector& labelInitValue = + testConf.inputDefs[i].labelInitValue; + CHECK_EQ(labelInitValue.size(), batchSize); + data.ids = VectorT::create(batchSize, useGpu); + data.ids->copyFrom(labelInitValue.data(), batchSize); + } else { + data.ids = VectorT::create(batchSize, useGpu); + // now rand number can be 0 to inputDefs[i].dim + data.ids->rand(testConf.inputDefs[i].dim); + } break; case INPUT_SPARSE_NON_VALUE_DATA: data.value = makeRandomSparseMatrix( diff --git a/paddle/gserver/tests/LayerGradUtil.h b/paddle/gserver/tests/LayerGradUtil.h index 62ac2d160f..46cfcd29e0 100644 --- a/paddle/gserver/tests/LayerGradUtil.h +++ b/paddle/gserver/tests/LayerGradUtil.h @@ -64,6 +64,8 @@ struct InputDef { size_t paraSize; ParaSparse sparse; bool isStatic; + std::vector labelInitValue; + std::vector labelSeqStartPositions; InputDef(InputType type, string nameIn, size_t dimIn, size_t sizeIn) { inputType = type; name = nameIn; @@ -72,6 +74,23 @@ struct InputDef { sparse = {""}; isStatic = false; } + + InputDef(InputType type, + string nameIn, + size_t dimIn, + size_t sizeIn, + std::vector labelInitValue, + std::vector labelSeqStartPositions) + : labelInitValue(labelInitValue), + labelSeqStartPositions(labelSeqStartPositions) { + inputType = type; + name = nameIn; + dim = dimIn; + paraSize = sizeIn; + sparse = {""}; + isStatic = false; + } + InputDef(InputType type, string nameIn, size_t dimIn, From 39a547741cb953bc92095ff74b3962336acab3f8 Mon Sep 17 00:00:00 2001 From: Luo Tao Date: Wed, 21 Dec 2016 10:59:37 +0800 Subject: [PATCH 260/265] refine docs.sh --- paddle/scripts/travis/docs.sh | 10 ++-------- 1 file changed, 2 insertions(+), 8 deletions(-) diff --git a/paddle/scripts/travis/docs.sh b/paddle/scripts/travis/docs.sh index 1b05dce29a..8690fe1d40 100755 --- a/paddle/scripts/travis/docs.sh +++ b/paddle/scripts/travis/docs.sh @@ -8,14 +8,8 @@ cmake .. -DCMAKE_BUILD_TYPE=Debug -DWITH_GPU=OFF -DWITH_DOC=ON make paddle_docs paddle_docs_cn # check websites for broken links -set +e -for i in cn en; do - linkchecker doc/$i/html/index.html - if [ $? -ne 0 ]; then - exit 1 - fi -done -set -e +linkchecker doc/en/html/index.html +linkchecker doc/cn/html/index.html # Parse Github URL REPO=`git config remote.origin.url` From e4c492d3b8d6dc7b700aca16db7c410cf1961f23 Mon Sep 17 00:00:00 2001 From: dangqingqing Date: Wed, 21 Dec 2016 11:21:45 +0800 Subject: [PATCH 261/265] change type to bool. --- python/paddle/trainer/config_parser.py | 14 +++++++++----- 1 file changed, 9 insertions(+), 5 deletions(-) diff --git a/python/paddle/trainer/config_parser.py b/python/paddle/trainer/config_parser.py index 8389476e6a..29704391f2 100644 --- a/python/paddle/trainer/config_parser.py +++ b/python/paddle/trainer/config_parser.py @@ -499,11 +499,15 @@ class Input(Cfg): is_shared=None, update_hooks=None, input_layer_argument=None, - not_make_layer_name_in_submodel=None, ): + make_layer_name_in_submodel=True, ): + """ + @param make_layer_name_in_submodel True by defalut, you might need to + set it carefully when adding Input in config_parser.py. 
+ """ self.add_keys(locals()) - self.input_layer_name = MakeLayerNameInSubmodel(input_layer_name) - if not_make_layer_name_in_submodel: - self.input_layer_name = input_layer_name + self.input_layer_name = MakeLayerNameInSubmodel( + input_layer_name + ) if make_layer_name_in_submodel else input_layer_name # Define a projection for iexed layer @@ -1852,7 +1856,7 @@ class BatchNormLayer(LayerBase): initial_mean=0.0, is_static=True, is_shared=is_shared, - not_make_layer_name_in_submodel=True, )) + make_layer_name_in_submodel=False, )) parallel_nn = bool(int(g_command_config_args.get("parallel_nn", 0))) cudnn_version = int(g_command_config_args.get("cudnn_version", 0)) From d09564b73f77e248748f5f07738b07e708275194 Mon Sep 17 00:00:00 2001 From: Peng Li Date: Wed, 21 Dec 2016 15:10:42 +0800 Subject: [PATCH 262/265] change std::vector to const reference --- paddle/gserver/tests/LayerGradUtil.h | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/paddle/gserver/tests/LayerGradUtil.h b/paddle/gserver/tests/LayerGradUtil.h index 46cfcd29e0..4e88ac0e81 100644 --- a/paddle/gserver/tests/LayerGradUtil.h +++ b/paddle/gserver/tests/LayerGradUtil.h @@ -66,6 +66,7 @@ struct InputDef { bool isStatic; std::vector labelInitValue; std::vector labelSeqStartPositions; + InputDef(InputType type, string nameIn, size_t dimIn, size_t sizeIn) { inputType = type; name = nameIn; @@ -79,8 +80,8 @@ struct InputDef { string nameIn, size_t dimIn, size_t sizeIn, - std::vector labelInitValue, - std::vector labelSeqStartPositions) + const std::vector& labelInitValue, + const std::vector& labelSeqStartPositions) : labelInitValue(labelInitValue), labelSeqStartPositions(labelSeqStartPositions) { inputType = type; From 22aacbfd6c19cf8145134910d8c89497f84b15df Mon Sep 17 00:00:00 2001 From: Yu Yang Date: Wed, 21 Dec 2016 16:33:01 +0800 Subject: [PATCH 263/265] Add const to GradientMachine::eval --- paddle/gserver/gradientmachines/GradientMachine.h | 2 +- paddle/gserver/gradientmachines/MultiGradientMachine.cpp | 2 +- paddle/gserver/gradientmachines/MultiGradientMachine.h | 2 +- paddle/gserver/gradientmachines/MultiNetwork.cpp | 2 +- paddle/gserver/gradientmachines/MultiNetwork.h | 2 +- paddle/gserver/gradientmachines/NeuralNetwork.cpp | 2 +- paddle/gserver/gradientmachines/NeuralNetwork.h | 2 +- paddle/gserver/gradientmachines/RecurrentGradientMachine.cpp | 2 +- paddle/gserver/gradientmachines/RecurrentGradientMachine.h | 2 +- 9 files changed, 9 insertions(+), 9 deletions(-) diff --git a/paddle/gserver/gradientmachines/GradientMachine.h b/paddle/gserver/gradientmachines/GradientMachine.h index 579eca71d4..873b89e3fc 100644 --- a/paddle/gserver/gradientmachines/GradientMachine.h +++ b/paddle/gserver/gradientmachines/GradientMachine.h @@ -186,7 +186,7 @@ public: /** * evaluate using the given evaluator */ - virtual void eval(Evaluator* evaluator) = 0; + virtual void eval(Evaluator* evaluator) const = 0; std::vector& getParameters() { return parameters_; } diff --git a/paddle/gserver/gradientmachines/MultiGradientMachine.cpp b/paddle/gserver/gradientmachines/MultiGradientMachine.cpp index 88c098b355..a34316c57a 100644 --- a/paddle/gserver/gradientmachines/MultiGradientMachine.cpp +++ b/paddle/gserver/gradientmachines/MultiGradientMachine.cpp @@ -331,7 +331,7 @@ Evaluator* MultiGradientMachine::makeEvaluator() { return threads_[0]->getGradientMachine()->makeEvaluator(); } -void MultiGradientMachine::eval(Evaluator* evaluator) { +void MultiGradientMachine::eval(Evaluator* evaluator) const { for (auto& thread : 
threads_) { SetDevice device(thread->getDeviceId()); thread->getGradientMachine()->eval(evaluator); diff --git a/paddle/gserver/gradientmachines/MultiGradientMachine.h b/paddle/gserver/gradientmachines/MultiGradientMachine.h index 5f9855c4be..f2b074e393 100644 --- a/paddle/gserver/gradientmachines/MultiGradientMachine.h +++ b/paddle/gserver/gradientmachines/MultiGradientMachine.h @@ -195,7 +195,7 @@ public: virtual Evaluator* makeEvaluator(); - virtual void eval(Evaluator* evaluator); + virtual void eval(Evaluator* evaluator) const; bool useGpu() const { return useGpu_; } diff --git a/paddle/gserver/gradientmachines/MultiNetwork.cpp b/paddle/gserver/gradientmachines/MultiNetwork.cpp index 6eb3d8db96..33100c842b 100644 --- a/paddle/gserver/gradientmachines/MultiNetwork.cpp +++ b/paddle/gserver/gradientmachines/MultiNetwork.cpp @@ -181,6 +181,6 @@ Evaluator* MultiNetwork::makeEvaluator() { return multiCombinedEvaluator; } -void MultiNetwork::eval(Evaluator* evaluator) { evaluator->eval(*this); } +void MultiNetwork::eval(Evaluator* evaluator) const { evaluator->eval(*this); } } // namespace paddle diff --git a/paddle/gserver/gradientmachines/MultiNetwork.h b/paddle/gserver/gradientmachines/MultiNetwork.h index 89fbf32b4f..93b8b5d2ba 100644 --- a/paddle/gserver/gradientmachines/MultiNetwork.h +++ b/paddle/gserver/gradientmachines/MultiNetwork.h @@ -48,7 +48,7 @@ public: virtual Evaluator* makeEvaluator(); - virtual void eval(Evaluator* evaluator); + virtual void eval(Evaluator* evaluator) const; const std::vector>& getSubNetworks() const { return subNetworks_; diff --git a/paddle/gserver/gradientmachines/NeuralNetwork.cpp b/paddle/gserver/gradientmachines/NeuralNetwork.cpp index ee36a87b9d..98d0bcac79 100644 --- a/paddle/gserver/gradientmachines/NeuralNetwork.cpp +++ b/paddle/gserver/gradientmachines/NeuralNetwork.cpp @@ -383,7 +383,7 @@ Evaluator* NeuralNetwork::makeEvaluator() { return combinedEvaluator; } -void NeuralNetwork::eval(Evaluator* evaluator) { evaluator->eval(*this); } +void NeuralNetwork::eval(Evaluator* evaluator) const { evaluator->eval(*this); } void NeuralNetwork::setOutputGrad(const std::vector& args) { CHECK_GE(outputLayers_.size(), args.size()); diff --git a/paddle/gserver/gradientmachines/NeuralNetwork.h b/paddle/gserver/gradientmachines/NeuralNetwork.h index 384ca88f47..3a07e0bc9f 100644 --- a/paddle/gserver/gradientmachines/NeuralNetwork.h +++ b/paddle/gserver/gradientmachines/NeuralNetwork.h @@ -98,7 +98,7 @@ public: virtual Evaluator* makeEvaluator(); - virtual void eval(Evaluator* evaluator); + virtual void eval(Evaluator* evaluator) const; virtual void resetState(); virtual void setOutputGrad(const std::vector& args); diff --git a/paddle/gserver/gradientmachines/RecurrentGradientMachine.cpp b/paddle/gserver/gradientmachines/RecurrentGradientMachine.cpp index 8f68b3d66b..a9a9f4f903 100644 --- a/paddle/gserver/gradientmachines/RecurrentGradientMachine.cpp +++ b/paddle/gserver/gradientmachines/RecurrentGradientMachine.cpp @@ -593,7 +593,7 @@ void RecurrentGradientMachine::forwardBackward( LOG(FATAL) << "should not use this function"; } -void RecurrentGradientMachine::eval(Evaluator* evaluator) { +void RecurrentGradientMachine::eval(Evaluator* evaluator) const { // call printers frame by frame for (int i = 0; i < maxSequenceLength_; ++i) { LOG(INFO) << "Recurrent Layer Group eval frame " << i << " begin"; diff --git a/paddle/gserver/gradientmachines/RecurrentGradientMachine.h b/paddle/gserver/gradientmachines/RecurrentGradientMachine.h index db7d8aff6d..910ca4376b 
100644 --- a/paddle/gserver/gradientmachines/RecurrentGradientMachine.h +++ b/paddle/gserver/gradientmachines/RecurrentGradientMachine.h @@ -63,7 +63,7 @@ public: const UpdateCallback& callback); virtual void resetState() {} - virtual void eval(Evaluator* evaluator); + virtual void eval(Evaluator* evaluator) const; const std::vector& getParameterIds() { return parameterIds_; } From 4d5a0b0a0348932760933b31e37c0c27ac4678af Mon Sep 17 00:00:00 2001 From: Yu Yang Date: Wed, 21 Dec 2016 17:07:00 +0800 Subject: [PATCH 264/265] Also add const to makeEvaluator --- paddle/gserver/gradientmachines/GradientMachine.h | 2 +- paddle/gserver/gradientmachines/MultiGradientMachine.cpp | 2 +- paddle/gserver/gradientmachines/MultiGradientMachine.h | 2 +- paddle/gserver/gradientmachines/MultiNetwork.cpp | 2 +- paddle/gserver/gradientmachines/MultiNetwork.h | 2 +- paddle/gserver/gradientmachines/NeuralNetwork.cpp | 2 +- paddle/gserver/gradientmachines/NeuralNetwork.h | 2 +- 7 files changed, 7 insertions(+), 7 deletions(-) diff --git a/paddle/gserver/gradientmachines/GradientMachine.h b/paddle/gserver/gradientmachines/GradientMachine.h index 873b89e3fc..26ce340aa6 100644 --- a/paddle/gserver/gradientmachines/GradientMachine.h +++ b/paddle/gserver/gradientmachines/GradientMachine.h @@ -181,7 +181,7 @@ public: /** * Create an evaluator which can be used for eval() */ - virtual Evaluator* makeEvaluator() = 0; + virtual Evaluator* makeEvaluator() const = 0; /** * evaluate using the given evaluator diff --git a/paddle/gserver/gradientmachines/MultiGradientMachine.cpp b/paddle/gserver/gradientmachines/MultiGradientMachine.cpp index a34316c57a..bd51507a29 100644 --- a/paddle/gserver/gradientmachines/MultiGradientMachine.cpp +++ b/paddle/gserver/gradientmachines/MultiGradientMachine.cpp @@ -327,7 +327,7 @@ void MultiGradientMachine::finish() { } } -Evaluator* MultiGradientMachine::makeEvaluator() { +Evaluator* MultiGradientMachine::makeEvaluator() const { return threads_[0]->getGradientMachine()->makeEvaluator(); } diff --git a/paddle/gserver/gradientmachines/MultiGradientMachine.h b/paddle/gserver/gradientmachines/MultiGradientMachine.h index f2b074e393..9be15ef4bc 100644 --- a/paddle/gserver/gradientmachines/MultiGradientMachine.h +++ b/paddle/gserver/gradientmachines/MultiGradientMachine.h @@ -193,7 +193,7 @@ public: virtual void finish(); - virtual Evaluator* makeEvaluator(); + virtual Evaluator* makeEvaluator() const; virtual void eval(Evaluator* evaluator) const; diff --git a/paddle/gserver/gradientmachines/MultiNetwork.cpp b/paddle/gserver/gradientmachines/MultiNetwork.cpp index 33100c842b..933144b5bd 100644 --- a/paddle/gserver/gradientmachines/MultiNetwork.cpp +++ b/paddle/gserver/gradientmachines/MultiNetwork.cpp @@ -172,7 +172,7 @@ protected: std::vector> evaluators_; }; -Evaluator* MultiNetwork::makeEvaluator() { +Evaluator* MultiNetwork::makeEvaluator() const { MultiCombinedEvaluator* multiCombinedEvaluator = new MultiCombinedEvaluator(); for (size_t i = 0; i < subNetworks_.size(); i++) { std::unique_ptr evaluator(subNetworks_[i]->makeEvaluator()); diff --git a/paddle/gserver/gradientmachines/MultiNetwork.h b/paddle/gserver/gradientmachines/MultiNetwork.h index 93b8b5d2ba..ce024659ec 100644 --- a/paddle/gserver/gradientmachines/MultiNetwork.h +++ b/paddle/gserver/gradientmachines/MultiNetwork.h @@ -46,7 +46,7 @@ public: virtual void onPassEnd(); - virtual Evaluator* makeEvaluator(); + virtual Evaluator* makeEvaluator() const; virtual void eval(Evaluator* evaluator) const; diff --git 
a/paddle/gserver/gradientmachines/NeuralNetwork.cpp b/paddle/gserver/gradientmachines/NeuralNetwork.cpp
index 98d0bcac79..22051e07ee 100644
--- a/paddle/gserver/gradientmachines/NeuralNetwork.cpp
+++ b/paddle/gserver/gradientmachines/NeuralNetwork.cpp
@@ -348,7 +348,7 @@ protected:
   std::vector<std::unique_ptr<Evaluator>> evaluators_;
 };
 
-Evaluator* NeuralNetwork::makeEvaluator() {
+Evaluator* NeuralNetwork::makeEvaluator() const {
   CombinedEvaluator* combinedEvaluator = new CombinedEvaluator();
   auto subModelConfig = std::find_if(config_.sub_models().begin(),
                                      config_.sub_models().end(),
diff --git a/paddle/gserver/gradientmachines/NeuralNetwork.h b/paddle/gserver/gradientmachines/NeuralNetwork.h
index 3a07e0bc9f..25af4abcf8 100644
--- a/paddle/gserver/gradientmachines/NeuralNetwork.h
+++ b/paddle/gserver/gradientmachines/NeuralNetwork.h
@@ -96,7 +96,7 @@ public:
 
   virtual void onPassEnd();
 
-  virtual Evaluator* makeEvaluator();
+  virtual Evaluator* makeEvaluator() const;
 
   virtual void eval(Evaluator* evaluator) const;
   virtual void resetState();

From 8d24931588ff2152d90bb4eff2c14bcbfc7733c6 Mon Sep 17 00:00:00 2001
From: gaoyuan
Date: Wed, 21 Dec 2016 20:44:06 +0800
Subject: [PATCH 265/265] Change member variables from public to protected

---
 paddle/gserver/layers/PriorBox.cpp | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/paddle/gserver/layers/PriorBox.cpp b/paddle/gserver/layers/PriorBox.cpp
index abaeaf3c1c..36ace7597c 100644
--- a/paddle/gserver/layers/PriorBox.cpp
+++ b/paddle/gserver/layers/PriorBox.cpp
@@ -18,10 +18,10 @@ limitations under the License. */
 
 namespace paddle {
 /**
- * @brief A layer for generate prior box locations and variances.
+ * @brief A layer for generating priorbox locations and variances.
  * - Input: Two and only two input layer are accepted. The input layer must be
  *          be a data output layer and a convolution output layer.
- * - Output: The prior box locations and variances of the input data.
+ * - Output: The priorbox locations and variances of the input data.
  * Reference:
  *    Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed,
  *    Cheng-Yang Fu, Alexander C. Berg. SSD: Single Shot MultiBox Detector
@@ -31,8 +31,11 @@ class PriorBoxLayer : public Layer {
 public:
   explicit PriorBoxLayer(const LayerConfig& config) : Layer(config) {}
   bool init(const LayerMap& layerMap, const ParameterMap& parameterMap);
+
   void forward(PassType passType);
   void backward(const UpdateCallback& callback) {}
+
+protected:
   int numPriors_;
   std::vector<int> minSize_;
   std::vector<int> maxSize_;
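
Editor's note: to make the intent of PATCH 259 (user specified label input in tests) concrete, here is a minimal sketch of how the new six-argument InputDef overload might be used from a layer gradient test. It is illustrative only and not part of the patch series; the function name testFixedLabelInput and the concrete labels, dimensions, and sequence boundaries are invented for the example.

```cpp
// Illustrative sketch only -- not part of the patches above.
// Assumes LayerGradUtil.h as modified by PATCH 259/262; the function name
// and the concrete sizes/labels are made up for demonstration.
#include <vector>
#include "LayerGradUtil.h"

void testFixedLabelInput(bool useGpu) {
  TestConfig config;

  // Six labels over a dictionary of 4 classes, grouped into two
  // sequences: [0, 4) and [4, 6).
  std::vector<int> labelInitValue = {0, 1, 3, 2, 1, 0};
  std::vector<int> labelSeqStartPositions = {0, 4, 6};

  // New six-argument InputDef constructor: instead of random ids,
  // initDataLayer() copies labelInitValue into data.ids and derives
  // numSequence from labelSeqStartPositions.
  config.inputDefs.push_back({INPUT_SEQUENCE_LABEL,
                              "label",
                              /* dim */ 4,
                              /* paraSize */ 0,
                              labelInitValue,
                              labelSeqStartPositions});
  config.layerConfig.add_inputs();

  // The batch size passed to initDataLayer()/testLayerGrad() must equal
  // labelInitValue.size(), because of CHECK_EQ(labelInitValue.size(), batchSize),
  // and labelSeqStartPositions needs at least two entries.
  // ... add the layer under test, then call testLayerGrad(config, ..., 6, false, useGpu);
}
```

Deterministic labels and explicit sequence boundaries give a test reproducible inputs; without this overload, initDataLayer() always fills data.ids with rand() over [0, dim).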