## Background

The RecordIO file format is a container for records. This package is a C++ implementation of https://github.com/paddlepaddle/recordio, which originates from https://github.com/wangkuiyi/recordio.

## Fault-tolerant Writing

RecordIO was originally designed at Google for logging. To tolerate failures, it groups records into chunks, and each chunk's header contains an MD5 hash of the chunk. A process that writes logs calls the Writer interface to add records. Once the writer has accumulated a handful of them, it packs them into a chunk, puts the MD5 into the chunk header, and appends the chunk to the file. If the process crashes unexpectedly, the last chunk in the RecordIO file could be incomplete or corrupt. When the process restarts, the RecordIO reader recovers from these errors by identifying incomplete chunks and skipping over them.
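
The sketch below illustrates this writer-side logic; it is not the actual API declared in `writer.h`. Records are buffered and then flushed as a chunk whose header carries a record count, payload size, and checksum. The `ChunkWriter` name and header layout are hypothetical, and `std::hash` stands in for the MD5 hash used by the real format, only to keep the example dependency-free.

```cpp
// Illustrative sketch only: buffer records, flush them as one chunk whose
// header carries a checksum so a reader can detect a partially written or
// corrupted trailing chunk after a crash.
#include <cstdint>
#include <fstream>
#include <functional>
#include <string>
#include <vector>

class ChunkWriter {  // hypothetical name, not the Paddle Writer class
 public:
  ChunkWriter(const std::string& path, size_t records_per_chunk = 1000)
      : out_(path, std::ios::binary), max_records_(records_per_chunk) {}

  void Write(const std::string& record) {
    buffer_.push_back(record);
    if (buffer_.size() >= max_records_) Flush();
  }

  // Serializes the buffered records as one chunk: a header (record count,
  // payload size, checksum) followed by the payload.  If the process dies
  // mid-append, only this trailing chunk is lost or corrupted.
  void Flush() {
    if (buffer_.empty()) return;
    std::string payload;
    for (const auto& r : buffer_) {
      uint32_t len = static_cast<uint32_t>(r.size());
      payload.append(reinterpret_cast<const char*>(&len), sizeof(len));
      payload.append(r);
    }
    uint32_t num_records = static_cast<uint32_t>(buffer_.size());
    uint64_t payload_size = payload.size();
    uint64_t checksum = std::hash<std::string>{}(payload);  // stand-in for MD5
    out_.write(reinterpret_cast<const char*>(&num_records), sizeof(num_records));
    out_.write(reinterpret_cast<const char*>(&payload_size), sizeof(payload_size));
    out_.write(reinterpret_cast<const char*>(&checksum), sizeof(checksum));
    out_.write(payload.data(), payload.size());
    out_.flush();
    buffer_.clear();
  }

  ~ChunkWriter() { Flush(); }

 private:
  std::ofstream out_;
  size_t max_records_;
  std::vector<std::string> buffer_;
};
```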

## Reading Ranges

A side effect of chunking is that it makes it easy to index records while reading, which allows us to read a range of successive records. This is useful for distributed log processing, where each MapReduce task handles only part of the records in a big RecordIO file.

The procedure that creates the index starts by reading the header of the first chunk. It records the offset (0) and the size of the chunk, then skips to the header of the next chunk by calling the fseek API. Note that most distributed filesystems and all POSIX-compatible local filesystems provide fseek and make sure that fseek runs much faster than fread. This procedure generates a map from chunks to their offsets, which allows the reader to locate and read a range of records.
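
Below is a minimal sketch of this indexing pass, assuming the hypothetical chunk layout from the earlier writer example (record count, payload size, checksum, then payload); the actual chunk format lives in `chunk.h`. Only headers are read, and `fseek` skips over each payload, so the pass stays cheap even for large files.

```cpp
// Illustrative indexing pass: read each chunk header, remember its offset
// and record count, and fseek past the payload to reach the next header.
#include <cstdint>
#include <cstdio>
#include <vector>

struct ChunkIndexEntry {
  long offset;           // byte offset of the chunk header in the file
  uint32_t num_records;  // number of records stored in this chunk
};

std::vector<ChunkIndexEntry> BuildIndex(const char* path) {
  std::vector<ChunkIndexEntry> index;
  std::FILE* f = std::fopen(path, "rb");
  if (f == nullptr) return index;
  while (true) {
    long offset = std::ftell(f);
    uint32_t num_records = 0;
    uint64_t payload_size = 0, checksum = 0;
    if (std::fread(&num_records, sizeof(num_records), 1, f) != 1) break;
    if (std::fread(&payload_size, sizeof(payload_size), 1, f) != 1) break;
    if (std::fread(&checksum, sizeof(checksum), 1, f) != 1) break;
    index.push_back({offset, num_records});
    // Skip the payload; it is not needed to build the index.
    if (std::fseek(f, static_cast<long>(payload_size), SEEK_CUR) != 0) break;
  }
  std::fclose(f);
  return index;
}
```

With such an index in hand, a reader that needs records in a given range only has to seek to the chunks covering that range and decode those, rather than scanning the whole file.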