
NELoRa-Bench

This folder provides the neural-based LoRa demodulation code for our ICLR 2023 workshop paper, "NELoRa-Bench: A Benchmark for Neural-enhanced LoRa Demodulation". The dataset and checkpoints can be accessed at this Google Drive link.

This code reproduces the experiments in the SenSys '21 paper "NELoRa: Towards Ultra-low SNR LoRa Communication with Neural-enhanced Demodulation".

Differences from the original code provided by NELoRa:

  1. Neither the NELoRa nor the baseline train/test pipeline needs a separate data-generation stage (adding artificial noise) anymore; noise is added on the fly. This reduces overfitting, removes the need for additional hard-disk space, and speeds up the process drastically.
  2. Added data balancing.
  3. Removed clutter.
  4. Parameters are partially hardcoded.
  5. Added a double check on the dataset for wrong codes.
  6. Added a comparison with baseline methods, using LoRaPhy from "From Demodulation to Decoding: Toward Complete LoRa PHY Understanding and Implementation".
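The on-the-fly noise injection in item 1 can be sketched as follows. This is a minimal illustration, not the repository's exact code; the function name `add_awgn` is hypothetical:

```python
import numpy as np

def add_awgn(signal, snr_db, rng=None):
    """Corrupt a complex baseband signal with white Gaussian noise at a target SNR (dB)."""
    rng = np.random.default_rng() if rng is None else rng
    sig_power = np.mean(np.abs(signal) ** 2)         # empirical signal power
    noise_power = sig_power / (10 ** (snr_db / 10))  # SNR = P_sig / P_noise
    noise = np.sqrt(noise_power / 2) * (
        rng.standard_normal(signal.shape) + 1j * rng.standard_normal(signal.shape)
    )
    return signal + noise
```

During training, the SNR would be drawn from `snr_range` for each sample, so no noisy copies of the dataset ever need to be written to disk.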

Usage:

  1. Download the dataset from Google Drive and unzip it.
  2. Download checkpoints from the same Google Drive. Note: these checkpoints are only trained for a limited amount of time and can be improved.
  3. Adjust the parameters in main.py:
# parameters
parser = argparse.ArgumentParser() 
parser.add_argument('--sf', type=int, help='The spreading factor.') 
parser.add_argument('--batch_size', type=int, default=16, help='The batch size.') 
opts = parser.parse_args()
sf = opts.sf  # spreading factor
batch_size = opts.batch_size  # batch size (the larger, the better, depending on GPU memory)

bw = 125e3  # bandwidth
fs = 1e6  # sampling frequency
data_dir = f'/path/to/NeLoRa_Dataset/{sf}/'  # directory for training dataset
mask_CNN_load_path = f'checkpoint/sf{sf}/100000_maskCNN.pkl'  # path for loading mask_CNN model weights
C_XtoY_load_path = f'checkpoint/sf{sf}/100000_C_XtoY.pkl'  # path for loading C_XtoY model weights
save_ckpt_dir = 'ckpt'  # directory for saving trained weight checkpoints
normalization = True  # whether to perform normalization on data
snr_range = list(range(-30, 1))  # range of SNR for training
test_snr = -17  # SNR for testing
scaling_for_imaging_loss = 128  # scaling of losses between mask_CNN and C_XtoY
ckpt_per_iter = 1000  # save a checkpoint every ckpt_per_iter iterations
train_epochs = 100  # number of training epochs (the larger, the better; the network will not overfit)
  4. For training, call train(). For testing, call test().
if __name__ == '__main__':
    train()
    # test()    
  5. Run main.py with appropriate parameters, e.g.:
python main.py --sf 7 --batch_size 128
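For context, the classical dechirp demodulation used by the LoRaPhy baseline can be sketched as below. This is a simplified sketch assuming baseband sampling at fs == bw (one sample per chip); the dataset itself uses fs = 1 MHz, which requires downsampling or a longer FFT:

```python
import numpy as np

def base_upchirp(sf):
    # Unit-amplitude LoRa upchirp with one sample per chip (fs == bw).
    n = np.arange(2 ** sf)
    return np.exp(1j * np.pi * n ** 2 / 2 ** sf)

def make_symbol(code, sf=7):
    # A LoRa symbol encoding `code` is a cyclic shift of the base upchirp.
    return np.roll(base_upchirp(sf), -code)

def dechirp_demod(symbol, sf=7):
    # Multiply by the conjugate upchirp and locate the FFT peak:
    # the peak bin index equals the transmitted code.
    spectrum = np.fft.fft(symbol * np.conj(base_upchirp(sf)))
    return int(np.argmax(np.abs(spectrum)))
```

NELoRa replaces this FFT-peak decision with the mask_CNN and C_XtoY networks operating on the symbol's spectrogram, which is what allows demodulation at lower SNRs than the dechirp baseline.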
  6. Please consider citing our paper if you use the code or data in your research project.
  @inproceedings{nelora2021sensys,
  	title={{NELoRa: Towards Ultra-low SNR LoRa Communication with Neural-enhanced Demodulation}},
  	author={Li, Chenning and Guo, Hanqing and Tong, Shuai and Zeng, Xiao and Cao, Zhichao and Zhang, Mi and Yan, Qiben and Xiao, Li and Wang, Jiliang and Liu, Yunhao},
  	booktitle={Proceedings of ACM SenSys},
  	year={2021}
  }