Spectrum Challenge 2 Dataset and Frame Error Prediction Code

Deep Learning for Frame Error Prediction using a DARPA Spectrum Collaboration Challenge (SC2) Dataset.

Code

The code is provided in separate files, one for each scrimmage. Each file contains five different neural network architectures, with two separate approaches for creating the train-validation-test set. Please refer to the paper for a detailed description.
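
As a minimal illustration of the kind of model involved (an assumed sketch, not one of the five architectures from the paper), a small fully connected network for binary frame-error prediction on the 20-feature per-frame input described in the Dataset section below could look like this; the layer sizes, optimizer, and training settings are placeholder assumptions:

```python
# Minimal sketch of a binary frame-error classifier on 20 input features.
# Layer sizes, optimizer, and epochs are illustrative assumptions, not the
# configurations used in the paper.
import numpy as np
import tensorflow as tf

def build_model(num_features=20):
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(num_features,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # P(frame received successfully)
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    # Random placeholder data with the same per-frame shape as the dataset.
    X = np.random.rand(1000, 20).astype("float32")
    y = np.random.randint(0, 2, size=(1000,)).astype("float32")
    model = build_model()
    model.fit(X, y, validation_split=0.2, epochs=5, batch_size=128)
```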

Dataset

The dataset is approved for public release, distribution unlimited.

The dataset is contained in two files: scrimmage4_link_dataset.pickle and scrimmage5_link_dataset.pickle.

The pickle files are stored as a list of tuples, each tuple corresponding to a single link and containing two elements. Each element has a length equal to the number of frames in that link, which varies from link to link. The first element contains the parameters:

  1. Signal to Noise Ratio ('snr') - 1 element
  2. The Modulation and Coding Scheme ('mcs') - 1 element
  3. The center frequency of the link ('centerFreq') - 1 element
  4. The bandwidth of the link ('bandwidth') - 1 element
  5. The Power Spectral Density ('psd') - 16 elements

Thus the total width per frame of the first element is 20.

The second element contains the success of transmission ('rxSuccess'): if it is 1, there is no frame error; if it is 0, there is a frame error.
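
A short sketch of how one of these pickle files can be loaded and inspected, assuming only the layout described above (the filename is one of the files listed below):

```python
# Sketch: load a link dataset pickle and inspect its structure.
# Assumes the layout described above: a list of per-link tuples,
# each tuple holding (features, rxSuccess) sequences of equal length.
import pickle

with open("scrimmage4_link_dataset.pickle", "rb") as f:
    links = pickle.load(f)

print("number of links:", len(links))

features, rx_success = links[0]                      # first link
print("frames in this link:", len(features))
print("feature width per frame:", len(features[0]))  # expected: 20
# Feature order per frame: snr (1), mcs (1), centerFreq (1), bandwidth (1), psd (16)
print("frame errors in this link:", sum(1 for s in rx_success if s == 0))
```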

Here are the links to the dataset files mentioned in the code (one pickle file for each scrimmage):

Scrimmage 4 (547.5 MB) Mirror

Scrimmage 5 (979.7 MB) Mirror

A larger dataset containing complete information about each match is also available. Please refer to SC2_Dataset_Documentation.pdf for more details regarding the structure of the full dataset. SC2_Dataset_Technical_Design_Report.pdf contains more information about the dataset acquisition process.

Here is the link to the full dataset (separate sqlite files for each match):

Full Dataset (135.517 GB) Mirror (Needs Access Request)
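
Each match in the full dataset is a standalone SQLite file. A generic first look at one of these files could be taken as follows; the filename is a hypothetical placeholder, and the actual schema and table names should be taken from SC2_Dataset_Documentation.pdf:

```python
# Sketch: list the tables in one match's SQLite file from the full dataset.
# The filename is a hypothetical placeholder; refer to the documentation PDF
# for the real schema and table names.
import sqlite3

conn = sqlite3.connect("match_0001.db")  # hypothetical filename
cur = conn.cursor()
cur.execute("SELECT name FROM sqlite_master WHERE type='table'")
for (table_name,) in cur.fetchall():
    print(table_name)
conn.close()
```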

Edge Dataset

Based on the channel allocation strategy, the dataset is divided into two files: scrimmage4_nodes_dataset.pickle and scrimmage5_nodes_dataset.pickle. In Scrimmage 4, the channel allocation decision parameters are randomized. There are a total of 25 matches in Scrimmage 4 and 53 matches in Scrimmage 5. Only nodes with more than 10,000 transmitted frames are considered, resulting in 154 and 408 nodes for Scrimmages 4 and 5, respectively. We also include the SMOTE-generated synthetic training dataset used to train the teacher network in the cloud. The SMOTE dataset is produced using only the training data (50%) from the original dataset nodes, and hence contains the same number of nodes.
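
As a rough illustration of this step (a sketch of the general SMOTE oversampling technique using imbalanced-learn and placeholder data, not the exact pipeline used to generate the released SMOTE files):

```python
# Sketch: oversample the minority class (frame errors) in a node's training
# split with SMOTE. Placeholder data; not the exact released pipeline.
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.model_selection import train_test_split

# Placeholder arrays standing in for one node's frames: 20 features + labels.
X = np.random.rand(20000, 20)
y = np.random.choice([0, 1], size=20000, p=[0.1, 0.9])  # 0 = frame error

# 50% of the node's data is used for training, as described above.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.5, stratify=y, random_state=0)

X_smote, y_smote = SMOTE(random_state=0).fit_resample(X_train, y_train)
print("class counts before:", np.bincount(y_train),
      "after:", np.bincount(y_smote))
```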

Here are the links to the dataset files mentioned in the code at https://github.com/amahdeej/sc2-edge-learning (one pickle file for each scrimmage):

Scrimmage 4 Edge Node Dataset (573.7 MB) Mirror

Scrimmage 5 Edge Node Dataset (982 MB) Mirror

Scrimmage 4 Edge SMOTE Training Dataset (547.7 MB) Mirror

Scrimmage 5 Edge SMOTE Training Dataset (937.4 MB) Mirror