Unable to download the dataset.
PalgunaGopireddy opened this issue · 6 comments
I went to https://search.asf.alaska.edu/ and searched using the scene names, but I cannot download them. Could you tell me another way to find the data?
The raw SAR data can be downloaded from ASF. Just select the "List Search" type at the top left, then input the scene name and search. You can also download the dataset constructed by us (see the Baidu Pan link on the dataset page).
Thank you.
I can't use the Baidu Pan link as I do not understand Chinese.
I am able to use the ASF website and can search for and identify the scenes. But I am confused, because each scene has several different files listed in the right column of the search window, namely Level 1.0, Level 1.5, Hi-Res Terrain Corrected, Low-Res Terrain Corrected, GoogleEarth KMZ, etc. Which one should I download? Or do the .jpg files already given in the dataset folder suffice?
I added new links (MEGA) for the dataset; please see the README file in the Datasets folder.
Level 1.0 is the RAW echo; Level 1.5 is the SLC image. If you use a Level 1.0 file, you need to apply a SAR imaging algorithm such as the range-Doppler algorithm (RDA) to get an SLC image.
Hi. I just want some clarification about downloading data from the ASF website, for future purposes.
I downloaded 'ALPSRP020160970' from MEGA. Its size is 3.96 GB. You told me the 'Level 1.5: SLC image' is the image we process for autofocus. However, the Level 1.5 SLC file of ALPSRP020160970 on the ASF website is shown as about 198.78 MB (see image). So the sizes in the MEGA link and on the ASF website do not match, which confuses me a bit. The images given in the dataset folder and on the ASF website also do not match. Why the difference in size? Or did I misunderstand, and the Level 1.5 SLC image is not what we take as input?
I have just seen the README file in mega_link 2. It says each file has 8000 images, each of size 256×256. Do we need all of these images for training, or does one image per scene (which I can download directly from the ASF website) suffice for the algorithm?
Yes, they have different sizes.
- The shared file in MEGA is the dataset proposed in our paper (https://www.mdpi.com/2072-4292/13/14/2683). To make this dataset, we first downloaded nine raw data files (Level 1.0; if you visualize one, it may look like white noise). These raw files were then imaged with RDA to get SLC images, and 8000 image patches (256×256) were selected from each SLC image. We defocused the SLC images by applying inaccurate velocities in RDA. Please see our paper for details.
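The patch-selection step described above can be sketched roughly as follows. This is only an illustration with a toy random array standing in for a real ALOS SLC image; the function name, random sampling strategy, and patch counts are my assumptions, not the exact pipeline from the paper:

```python
import numpy as np

def extract_patches(slc, patch_size=256, n_patches=16, seed=0):
    """Randomly select patch_size x patch_size patches from a complex SLC image.

    Note: a sketch only; the paper's actual selection procedure may differ.
    """
    rng = np.random.default_rng(seed)
    h, w = slc.shape
    patches = []
    for _ in range(n_patches):
        # Pick a random top-left corner so the patch fits inside the image
        r = rng.integers(0, h - patch_size + 1)
        c = rng.integers(0, w - patch_size + 1)
        patches.append(slc[r:r + patch_size, c:c + patch_size])
    return np.stack(patches)

# Toy example: a small random complex "SLC" image instead of real ALOS data
slc = (np.random.randn(1024, 1024)
       + 1j * np.random.randn(1024, 1024)).astype(np.complex64)
patches = extract_patches(slc, patch_size=256, n_patches=16)
print(patches.shape)  # (16, 256, 256)
```

For the real dataset, 8000 such patches were selected per scene.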
- The SLC file (level 1.5) in ASF may not be imaged with the RDA algorithm and the image in it is well-focused. If you want to use these SLC images as input, you need to add phase errors to defocus it.
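One common way to add a phase error to a focused SLC patch is to multiply its azimuth spectrum by a phase-error function, e.g. a quadratic one. This is a generic sketch of that idea, not the paper's exact procedure (which defocuses by perturbing the velocity inside RDA); the coefficient and axis conventions are assumptions:

```python
import numpy as np

def add_azimuth_phase_error(slc_patch, coeff=20.0):
    """Defocus a complex SLC patch by applying a quadratic phase error
    along the azimuth (row) axis in the frequency domain.

    Sketch only: coeff and the quadratic error model are illustrative choices.
    """
    n = slc_patch.shape[0]
    # Normalized azimuth frequency axis, centered at zero
    f = np.fft.fftshift(np.fft.fftfreq(n))
    # Unit-magnitude quadratic phase-error function
    phase_error = np.exp(1j * coeff * np.pi * f ** 2)
    # To azimuth frequency domain, apply the error, back to image domain
    spectrum = np.fft.fftshift(np.fft.fft(slc_patch, axis=0), axes=0)
    defocused = np.fft.ifft(
        np.fft.ifftshift(spectrum * phase_error[:, None], axes=0), axis=0)
    return defocused

# Toy random patch standing in for a focused 256x256 SLC patch
patch = (np.random.randn(256, 256)
         + 1j * np.random.randn(256, 256)).astype(np.complex64)
defocused = add_azimuth_phase_error(patch)
print(defocused.shape)  # (256, 256)
```

Because the phase-error function has unit magnitude, the total signal energy is preserved; only the focus is degraded.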
- As mentioned in our paper, we selected 20000, 8000, and 8000 image patches across the nine scenes for training, validation, and testing, respectively. We use different scenes for training, validation, and testing to verify the robustness of the algorithm. You can also add more patches in the training process.
Thank you