
Integrate neighborhood_loss_final.csv into production


Fred provided a file that is equivalent to the DOGAMI neighborhood loss tables, except that it is adjusted for our census-based neighborhood maps.
For reference, the equivalent tables from DOGAMI are

  • loss_neighborhood_unit_csz_m9p0_dry
  • loss_neighborhood_unit_csz_m9p0_wet

The input file is:
neighborhood_loss_final.csv

This change request is to load that data into the production database.

I'm going to try to normalize this table a bit.
I will fold wet/dry into a single field so we can query by wet or dry scenario.
I will move the non-wet/dry attributes into the parent table, "neighborhood".
These are mostly static, non-disaster-related attributes (a schema sketch follows the list):

  • acres
  • buildings
  • permanent residents
  • total population
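
For concreteness, a minimal sketch of the normalized shape I have in mind. The single `loss` column is a placeholder; the actual loss columns come from neighborhood_loss_final.csv.

```sql
-- The parent table absorbs the static attributes listed above.
ALTER TABLE public.neighborhood
  ADD COLUMN acres numeric,
  ADD COLUMN buildings integer,
  ADD COLUMN permanent_residents integer,
  ADD COLUMN total_population integer;

-- Wet/dry folded into a scenario column instead of two parallel tables.
CREATE TABLE public.neighborhood_loss (
  id serial PRIMARY KEY,
  neighborhood_id integer NOT NULL REFERENCES public.neighborhood (id),
  scenario text NOT NULL CHECK (scenario IN ('wet', 'dry')),
  loss numeric  -- placeholder; the real loss columns come from the csv
);
```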

In addition, while working with these tables I noticed that the neighborhood IDs are not consistent across files. I will need to re-sort/reconcile them so the IDs match (a sketch for spotting the mismatch follows the list).

  • neighborhood_census_response_rate.csv - 101 records
  • DisasterNeighborhoodsFinal_DogamiRaster_stats.csv - 101 records
  • neighborhood_loss_final.csv - 100 records
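
Since one file has 100 records instead of 101, it's worth finding the odd one out before re-sorting. A minimal sketch, assuming each csv has been staged into a table with a name column; the staging table names here are hypothetical.

```sql
-- Names present in the census-response staging table but missing from the
-- loss staging table; the IDs can't be trusted yet, so match on name.
SELECT r.name
FROM neighborhood_census_response_rate_stage r
LEFT JOIN neighborhood_loss_flat f ON lower(f.name) = lower(r.name)
WHERE f.name IS NULL;
```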

To do (a load-path sketch in SQL follows the list):

  0. Download neighborhood data to the local db
    • download disaster_neighborhoods to the local db
  1. The source data
    • create a table neighborhood_loss_flat
    • load neighborhood_loss_flat from the csv
  2. Update the neighborhood table
    • alter table public.neighborhood to add the columns specific to neighborhoods
  3. Load the data from neighborhood_loss_flat into neighborhood
  4. Add the normalized neighborhood loss table
    • create a table neighborhood_loss
  5. Add neighborhood_loss to the disaster_neighborhood_view
  6. Add the new columns from neighborhood to disaster_neighborhood_view
  7. Rename neighborhood_loss to loss_neighborhood_census?
  8. Apply this to the production database
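
A sketch of steps 1–5, assuming the ALTER TABLE and neighborhood_loss definitions sketched earlier. The wet/dry loss column names in the csv (loss_wet, loss_dry) are guesses.

```sql
-- 1) Staging table mirroring the csv: one row per neighborhood,
--    with separate wet and dry loss columns (names assumed).
CREATE TABLE public.neighborhood_loss_flat (
  neighborhood_id integer,
  name text,
  acres numeric,
  buildings integer,
  permanent_residents integer,
  total_population integer,
  loss_dry numeric,  -- assumed csv column name
  loss_wet numeric   -- assumed csv column name
);

-- Load the csv client-side with psql:
-- \copy public.neighborhood_loss_flat FROM 'neighborhood_loss_final.csv' CSV HEADER

-- 3) Move the static attributes up into the parent table.
UPDATE public.neighborhood n
SET acres = f.acres,
    buildings = f.buildings,
    permanent_residents = f.permanent_residents,
    total_population = f.total_population
FROM public.neighborhood_loss_flat f
WHERE n.id = f.neighborhood_id;

-- 4) Unpivot wet/dry into the normalized table: one row per scenario.
INSERT INTO public.neighborhood_loss (neighborhood_id, scenario, loss)
SELECT f.neighborhood_id,
       s.scenario,
       CASE s.scenario WHEN 'wet' THEN f.loss_wet ELSE f.loss_dry END
FROM public.neighborhood_loss_flat f
CROSS JOIN (VALUES ('wet'), ('dry')) AS s (scenario);
```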

This is done.
Some comments:

  1. I renamed neighborhood_loss to loss_neighborhood_census. This is more consistent with the naming convention for the DOGAMI loss_ tables.
  2. I added the 'wet' scenario to the view. We're only using the wet scenario for demo day. I imagine that later the 'wet'/'dry' scenario could be passed to the endpoint as a filter.
  3. I found some inconsistencies in the extracted data:
    a) neighborhood and neighborhood_loss both had ids and neighborhood names that mostly matched. The only issue was that "portland unclaimed #2" was not included in the neighborhood_loss data, so I had to reconcile that.
    b) disaster_neighborhoods was in a completely different order. I think maybe Fred was using the DOGAMI IDs for the neighborhoods here, so I had to join disaster_neighborhoods to neighborhoods by name (a join sketch follows these comments).
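
A minimal sketch of that name-based join, assuming disaster_neighborhoods also carries a name column:

```sql
-- Map each disaster_neighborhoods row to the local neighborhood id by name,
-- since the two tables disagree on ID ordering.
SELECT dn.id AS dogami_neighborhood_id,
       n.id  AS neighborhood_id,
       n.name
FROM disaster_neighborhoods dn
JOIN neighborhood n ON lower(n.name) = lower(dn.name);
```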

The changes have been applied to the production database, and the endpoint changes are in GitHub.

Possible followups (a view sketch follows the list):

  1. Rename dn.id to DOGAMI_neighborhood_id in the disaster_neighborhood_view
  2. Expose the local neighborhood ID in the view
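
A minimal sketch of what both followups could look like in the view definition; the joins and column list are illustrative, not the actual view body.

```sql
-- Postgres rejects CREATE OR REPLACE when output column names change,
-- so the existing view would need a DROP VIEW first.
CREATE OR REPLACE VIEW disaster_neighborhood_view AS
SELECT dn.id AS dogami_neighborhood_id,  -- followup 1: rename dn.id
       n.id  AS neighborhood_id,         -- followup 2: expose the local id
       n.name,
       l.scenario,
       l.loss
       -- ... remaining view columns elided ...
FROM disaster_neighborhoods dn
JOIN neighborhood n ON lower(n.name) = lower(dn.name)
JOIN loss_neighborhood_census l ON l.neighborhood_id = n.id;
```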