When define_py_data_sources2 has both train_list and test_list, the trainer for job=test
creates both dataProvider_ and testDataProvider_, even though dataProvider_ is never used.
This causes a SIGSEGV in finishAsync() because asyncLoader_ was never created (a minimal analogue sketch follows this entry).
Change-Id: If579f715f80a70ebc795094792c3436bfa0f5746
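The crash and its fix live in the C++ trainer; the snippet below is only a minimal Python analogue, with hypothetical class and attribute names, of guarding the async teardown when the loader was never started.

```python
import threading


class AsyncProviderSketch:
    """Hypothetical stand-in for a provider that may own an async loader."""

    def __init__(self):
        # Analogue of asyncLoader_: stays None for a provider that is created
        # but never used, e.g. the train-side provider under job=test.
        self.loader = None

    def start(self):
        # Only a provider that is actually used ever starts its loader thread.
        self.loader = threading.Thread(target=lambda: None)
        self.loader.start()

    def finish_async(self):
        # Guard against the unused provider: nothing was started, so there is
        # nothing to join, and we must not touch a loader that does not exist.
        if self.loader is None:
            return
        self.loader.join()
```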
* Fix SRL hang on exit.
* An error occurred when async load was enabled in TestDataProvider.
* The cause: one thread calls getNextBatchInternal on the DataProvider while another thread destructs it.
* Add a wait routine to DataProvider destruction (a Python analogue sketch follows this entry).
* Also fix another bug: destructing a TestDataProvider that has not read any test data.
Fix #286
* Follow the review comments; using a mutex is cool!
* Fix a bug in the DataProvider create function arguments.
Change-Id: I9e3a1c535c805bf30204a14aea8d5143ff534784
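The actual wait routine is implemented in the C++ DataProvider, so the following is only a small Python threading analogue (all names here are hypothetical) of the idea: teardown blocks on a condition variable until any in-flight getNextBatchInternal-style call has finished, and it returns immediately when no data was ever read.

```python
import threading


class ProviderTeardownSketch:
    """Hypothetical analogue: one thread reads batches, another tears the provider down."""

    def __init__(self):
        self.cond = threading.Condition()
        self.reading = False  # True while a batch is being produced

    def get_next_batch(self):
        with self.cond:
            self.reading = True
        try:
            return []  # ... produce a batch here ...
        finally:
            with self.cond:
                self.reading = False
                self.cond.notify_all()  # wake a teardown waiting on us

    def close(self):
        # Wait until any in-flight read is done; if the provider never read
        # any data, `reading` is still False and this returns immediately.
        with self.cond:
            self.cond.wait_for(lambda: not self.reading)
```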
* Remove PserverForPython.h, which is not used.
Change-Id: I2b27f1f3c11a42766a92fc689f0f5f1f73ee1d70
* Add internal documentation script.
Change-Id: Ia0fec79456caea0b271f9903cc13e8a3d32e0774
This bug occasionally causes a deadlock in test_RecurrentGradientMachine.
In general, condition_variable::notify should be used together with a mutex when changing the condition (see the sketch below).
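The fix itself uses std::condition_variable in C++; the Python sketch below is only an analogue of the rule stated above: the shared flag is changed while holding the lock associated with the condition variable, and the waiter re-checks a predicate, so a notification cannot slip past the waiter and leave it blocked forever.

```python
import threading

cond = threading.Condition()
data_ready = False


def waiter():
    with cond:
        # Re-checking the predicate under the lock means a notify that happened
        # before we started waiting is not lost: we see the updated flag instead.
        cond.wait_for(lambda: data_ready)


def notifier():
    global data_ready
    with cond:
        data_ready = True   # change the condition while holding the lock
        cond.notify_all()   # then notify, so waiters cannot miss the change
```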
* min_pool_size is infinite by default (see the provider sketch after this list).
* Add unittest for min_pool_size.
* Fix a bug in can_over_batch_size.
* Add unittest for can_over_batch_size.
* Add DEFINE_PROVIDER_EX
* Add default value of should_shuffle
* When training, the default value of should_shuffle is True.
* When testing, the default value of should_shuffle is False.
* Users can set whether a provider should shuffle by passing should_shuffle to `@provider`.
* should_shuffle can handle a list of values, not just a boolean.
* Add input-order mapping by name.
* Add unittest
* Add a check of the input data format.
* It is disabled by default for speed.
* On a check error, the user can either stop training or continue training without that sample.
* Use deque instead of vector for the generator pool, which makes erasing a generator faster.
* Add Chinese/English documentation.
* Set should_shuffle = false in the unittest.
* Add Python files to the dependencies.
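For reference, the sketch below shows how the provider options listed above might be combined in a PyDataProvider2-style configuration. The import path, `input_types`, and `check_fail_continue` are assumptions about the API of this era; the other parameter names come straight from the entries above, so treat this as an illustrative sketch rather than the exact interface.

```python
# Illustrative sketch only: the import path, input_types, and check_fail_continue
# are assumed; should_shuffle, min_pool_size, can_over_batch_size, and check are
# the options described in the changelog entries above.
from paddle.trainer.PyDataProvider2 import provider, dense_vector, integer_value


@provider(
    # Name-based input mapping: the dict keys are matched to data layer names.
    input_types={'pixel': dense_vector(784), 'label': integer_value(10)},
    should_shuffle=True,        # default: True for training, False for testing
    min_pool_size=1000,         # default is infinite (pool everything before shuffling)
    can_over_batch_size=True,   # allow a pooled batch to exceed batch_size
    check=False,                # input-format checking is off by default for speed
    check_fail_continue=False,  # on a check error, stop instead of skipping the sample
)
def process(settings, filename):
    # Each yielded dict is one sample, keyed by the same names as input_types.
    with open(filename) as f:
        for line in f:
            fields = line.rstrip('\n').split(',')
            yield {'pixel': [float(x) for x in fields[:-1]],
                   'label': int(fields[-1])}
```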