eMapR/LT-GEE

Scale of data display on the LT change mapper

JodiNorris opened this issue · 8 comments

The features on both the change mapper and the time series pages are amazing, and have been useful to a resource manager who is interested in tracking change over the last 30 years. He's asked me what the scale is for these datasets. On the time series page, the pixel size appears to be 30 meters, but on the change mapper site, the Year of Detection layer appears to have better-than-1-foot precision (actually all 3 layers have this effect). I think this is some kind of artifact but don't know what's going on. Could that explanation be added to the "click here for information" link that is already on the right-hand side?
[Screenshot: ExampleOfUltraHighResolution]

Hi, I noticed the same type of artifacts in the output images from LandTrendr, for instance when converting the results of ltgee.getSegmentData into images. Did you happen to find an explanation for this? What is the resolution of the LandTrendr output?

Hi @JodiNorris and @agataelia

@JodiNorris
As for the change mapper screenshot... I think what you are seeing are artifacts that result from the combination of the bicubic resampling we apply during image preprocessing and being zoomed in so far that Earth Engine is performing calculations for cells smaller than 30 m (the native Landsat resolution). In the change app we let the map's zoom level determine what scale to perform analyses at, so that it can run very quickly when you are zoomed way out looking at a very large region. If you zoom out, I would expect some of that "noise" to go away. Also note that LandTrendr is best suited to regions with more vegetation than what is shown in your example. Areas with a lot of exposed soil tend to be noisy, which, on top of the bicubic resampling and high zoom, may be compounding the noise.

@agataelia
The same issue with Earth Engine map zoom and bicubic resampling may be happening in your case too. If you are running ltgee.getSegmentData and then exploring the outputs in the Code Editor's map, all of the analyses are being performed at the scale determined by the map zoom.

For best results, export the image using one of the Export.image methods and be sure to set the scale parameter to 30 (30 meters). You can also use ee.Image.reproject to force the map outputs to be calculated at a specified scale when exploring in the Code Editor map. However, if you are zoomed out, the processing will take a while and may not complete before the computation time limit is reached for interactive Code Editor requests. Again, it is best to export the data.
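
For reference, a minimal sketch of such an export (the image, region, and asset path names here are placeholders, not something from this thread):

// Hypothetical example: export an LT-GEE result at native Landsat resolution.
// segInfo and aoi are placeholder names for your image and export region.
Export.image.toAsset({
  image: segInfo,
  description: 'lt_segment_data_30m',
  assetId: 'users/your_username/lt_segment_data_30m', // placeholder asset path
  region: aoi,
  scale: 30,         // force 30 m analysis/output resolution
  maxPixels: 1e13
});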

One thing that I believe would help remove artifacts in these cases is to not use bicubic resampling. I'll look at updating the code so that it does not use it, or at least make it an option to not use it.

Hi @jdbcode,

Thank you for the prompt reply! I looked into reprojecting the LT output, and indeed a solution like the one below provides a map output at a consistent resolution.

// Define a function that rescales an image to a given resolution (in meters)
var rescale = function(image, meter){
  var rescaled = image.reproject({
    crs: 'EPSG:4326',
    scale: meter
  });
  return rescaled;
};

// Get segment information
var segInfo = ltgee.getSegmentData(lt, index, 'loss');

// Reproject the LT output to 30 meter resolution
var segInfo30m = rescale(segInfo, 30);

I am still having GEE memory troubles when running my code over very large areas, but running an export would probably solve that.

Thanks!

Hi @jdbcode,

I am back to ask for some advice on this. I am having trouble exporting the final output of my GEE code, which is based on the LT functions, as an asset. The area I am working with is definitely very large (the whole USA, masked to polygons of interest), so my first question is whether it is advisable to run such an algorithm over areas this large.

My code performs the following tasks:

  1. Run LT over the desired AOI (e.g. the USA) and get the segment information array with ltgee.getSegmentData, which is then rescaled to 30 meter resolution and masked to the polygons of interest;

  2. Generate an image collection from the segment information array of all the identified disturbance (loss) segments: the segment information array is sliced vertically so that each identified segment over the AOI becomes an image stack of segment properties (see the sketch after this list);

  3. Mask this image collection year by year to retrieve only the desired segment per pixel, based on a comparison with a secondary raster (comparing the segment dates with the year of the secondary raster), and reduce the collection to a single raster with as many bands as segment properties, where each pixel holds the segment selected by the rule above. The final raster is masked to the specific polygons of interest.
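
For context, a rough sketch of the slicing described in step 2, assuming the standard LT-GEE segment array layout (rows = segment attributes, columns = segments); the attribute names and segment count here are illustrative:

// Hypothetical sketch: turn each column (segment) of the getSegmentData array
// image into a multiband image and collect the results in an ImageCollection.
var attrNames = ['startYear', 'endYear', 'startVal', 'endVal', 'mag', 'dur', 'rate', 'dsnr'];
var maxSegments = 6; // illustrative; should match the LT run parameters

var segImages = [];
for (var i = 0; i < maxSegments; i++) {
  var segImg = segInfo
    .arraySlice(1, i, i + 1)   // axis 1 = segments: take one column
    .arrayProject([0])         // drop the now-singleton segment axis
    .arrayFlatten([attrNames]); // one band per segment attribute
  segImages.push(segImg);
}

var segCollection = ee.ImageCollection.fromImages(segImages);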

My desired export is the final raster (approximately 8 bands) with the extent of the USA, masked to the polygons of interest (still an extensive area). Exporting this data as an asset takes a long time (> 4 days) and usually crashes with a user memory limit error. Do you have any suggestions on the following:

  • First of all, whether this is an advisable process, or whether one should avoid working on such large areas;
  • How to lighten an output of this kind (I have already tried unmask(), clip(), and short() on the output);
  • How to reduce the footprint of the NoData areas.

Any recommendation would be very much appreciated, thank you!

Hi @agataelia
I recommend breaking up the export. In projects where my lab group ran LT on CONUS as exports, we had, I think, 15 zones. This was helpful for 1) getting the exports to succeed, 2) rerunning only a failing zone rather than everything, and 3) keeping the local file sizes manageable (we used a GDAL virtual raster as the zone "mosaic" method). A sketch of a zoned export loop follows the list below.

  • yes, using short() will give a smaller local file size
  • ensure that the exported extent is as small as possible, i.e. don't add a large buffer to the export region
  • using zones can help eliminate a lot of NoData regions that are just ocean
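
A minimal sketch of what a zoned export loop might look like (the zone FeatureCollection, its zone_id property, the image name, and the asset path are all placeholders):

// Hypothetical example: export the final raster one zone at a time.
// finalImage stands in for the ~8-band output; zones is a FeatureCollection
// of export-zone polygons that you define yourself.
var zoneIds = [1, 2, 3]; // illustrative; e.g. ~15 zones for CONUS

zoneIds.forEach(function(id) {
  var zone = zones.filter(ee.Filter.eq('zone_id', id)).geometry();
  Export.image.toAsset({
    image: finalImage.short(),   // short() keeps the stored size down
    description: 'lt_output_zone_' + id,
    assetId: 'users/your_username/lt_output_zone_' + id, // placeholder path
    region: zone,                // keep the export extent tight to the zone
    scale: 30,
    maxPixels: 1e13
  });
});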

Hi @jdbcode,

Great, thank you!

Note on allowing the user to choose the resampling method (don't resample in the change mapping app):

buildSRcollection has an optional parameter that is a dictionary; we could add a key for the resampling method. In getSRcollection, pass the option on to harmonizationRoy, and a function would need to be added so that non-OLI images also accept the option.
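
A rough sketch of how that option could be applied during preprocessing, purely illustrative (the option key name and the helper function are assumptions, not the library's actual internals):

// Hypothetical helper: only resample when the caller asks for it.
// options.resample is an assumed dictionary key, e.g. 'bicubic' or 'bilinear';
// when absent, the image is left at the default nearest-neighbor behavior.
var applyResample = function(img, options) {
  if (options && options.resample) {
    return img.resample(options.resample);
  }
  return img;
};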