Commit Graph

26 Commits (cedc04775c85d044c6e2705701232278196d28f7)

Author        SHA1        Date         Message
xujiaqi01     cedc04775c  6 years ago  support change shuffle and train thread num (#19841)
xujiaqi01     6bf298bf09  6 years ago  support preload thread, optimize hdfs log, fix master+patch bug (#19695)
jiaqi         b104ea0684  6 years ago  add get_last_save_xbox_base/get_last_save_xbox (#19122)
yaoxuefeng    9150cf50fc  6 years ago  add save cache model api in fleet & add slots shuffle in dataset module & add metric op to calculate ctr related metrics (#18871)
Thunderbrook  52c1431eee  6 years ago  add clear_model interface in fleetwrapper (#18815)
fuyinno4      c167a4b4dd  6 years ago  Fix shrink-dense and add scale-datanorm (#18746)
Thunderbrook  d8396281ef  6 years ago  add slot to sparse table (#18686)
jiaqi         d18aabb472  6 years ago  support patch data, add load_one_table, fix bug (#18509)
jiaqi         66d51206b1  6 years ago  add save/load model, shrink table, cvm, config file & fix pull dense bug (#17118)
xjqbest       782ab2e2bd  6 years ago  add some doc
xjqbest       a99c8d0c29  6 years ago  fix client to client communication bug
dongdaxiang   98dda08a85  6 years ago  fix pull sparse slow problem
dongdaxiang   d4514949bf  6 years ago  remove local random engine in fleet with rand_r()
dongdaxiang   a0b59773af  6 years ago  fix code style
xjqbest       a34fe6248f  6 years ago  add some doc
xujiaqi01     f5c6a14b54  6 years ago  fix runtime error
xujiaqi01     a5b1a0e12b  6 years ago  support multi dataset && add init model && fix bug
dongdaxiang   317eb0aad3  6 years ago  add incubate for unified API
xujiaqi01     39449ba0b9  6 years ago  fix bug && add DestroyReaders in trainer
xujiaqi01     ecfc7df913  6 years ago  add dataset factory && fix style
xujiaqi01     3cea00bd52  6 years ago  store memory data in Dataset && fix bug
dongdaxiang   be757096da  6 years ago  add pybind for fleet
dongdaxiang   f2bde9c241  6 years ago  fix destructor problem
dongdaxiang   378037c535  6 years ago  make s_instance_ private to ensure singleton
dongdaxiang   c165012031  6 years ago  refine device_worker and trainer code
dongdaxiang   855bf579d2  6 years ago  add dist_multi_trainer for distributed training, add trainer_factory and device_worker_factory so that we can easily extend new training mode, add pull dense worker which is a singleton for parameter fetching
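
Several of these commits (e.g. "add dataset factory && fix style", "store memory data in Dataset", "support change shuffle and train thread num (#19841)") built up the Fluid Dataset training path that this history tracks. Below is a minimal sketch of how that API was typically driven in the PaddlePaddle Fluid 1.x era; the slot names, file path, batch size, and thread count are hypothetical placeholders, and exact method availability depends on the Paddle version.

import paddle.fluid as fluid

# Declare the input slots the dataset will feed (placeholder names/shapes).
slots = [fluid.layers.data(name=n, shape=[1], dtype="int64", lod_level=1)
         for n in ("click", "slot1")]

# A trivial consumer op so the main program is non-empty for training.
loss = fluid.layers.reduce_mean(fluid.layers.cast(slots[0], "float32"))

# "add dataset factory": datasets are created through a factory by type name.
dataset = fluid.DatasetFactory().create_dataset("InMemoryDataset")
dataset.set_use_var(slots)
dataset.set_batch_size(32)                    # placeholder batch size
dataset.set_thread(4)                         # the "train thread num" knob
dataset.set_filelist(["train_data/part-0"])   # placeholder file path

# "store memory data in Dataset": load into memory, then shuffle in place.
dataset.load_into_memory()
dataset.local_shuffle()

exe = fluid.Executor(fluid.CPUPlace())
exe.run(fluid.default_startup_program())
# Train directly from the dataset instead of a Python feed loop.
exe.train_from_dataset(program=fluid.default_main_program(), dataset=dataset)

In a distributed run, the same dataset object would instead be globally shuffled across trainers via the fleet module (the client-to-client communication and shuffle commits above), but the single-machine flow shown here is the core of the API.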