StanfordAHA/garnet

PD: svdb directories too big

steveri opened this issue · 3 comments

Svdb directories, produced in the LVS step, collectively take up over 60GB of a full-chip run, bringing the total space needed to almost 100G instead of 33G and cutting the number of reference runs we can store to roughly one third of what it would otherwise be (see below).

The svdb directories are rarely needed, and can easily be reproduced on demand by rerunning LVS. I recommend using an LVS flag that suppresses creation of the svdb directories, and plan to file a pull request soon to that effect. This issue will serve as a reference point, as well as a place for others to weigh in.

Experiment: svdb dirs take up 61/94 G (94 before, 33 after)
GOLD=/build/gold.233/full_chip/

------------------------------------------------------------------------
BEFORE=94G

% du -hx --max-depth=0 /build/gold.233
94G     /build/gold.233

% df .
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb1       322G  304G   19G  95% /build

% find /build/gold.233 -name svdb -exec du -hx --max-depth=0 {} \;
33G     $GOLD/37-mentor-calibre-lvs/svdb
28G     $GOLD/19-tile_array/36-mentor-calibre-lvs/svdb
397M    $GOLD/16-glb_top/9-glb_tile/22-mentor-calibre-lvs/svdb
102M    $GOLD/19-tile_array/16-Tile_MemCore/27-mentor-calibre-lvs/svdb
40M     $GOLD/19-tile_array/17-Tile_PE/27-mentor-calibre-lvs/svdb
21M     $GOLD/17-global_controller/19-mentor-calibre-lvs/svdb
0       $GOLD/16-glb_top/24-mentor-calibre-lvs/svdb
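The per-directory sizes above have to be summed by eye; `du -cs` can report the total directly. A sketch, demonstrated on a scratch tree (for real use, point it at /build/gold.233 instead of the temp dir):

```shell
#!/bin/sh
# Demo tree standing in for a build directory (paths hypothetical).
ROOT=$(mktemp -d)
mkdir -p "$ROOT/x/svdb" "$ROOT/y/svdb"
dd if=/dev/zero of="$ROOT/x/svdb/a" bs=1024 count=4 2>/dev/null
dd if=/dev/zero of="$ROOT/y/svdb/b" bs=1024 count=4 2>/dev/null

# -prune keeps find from descending into each svdb dir once matched;
# du -csk prints one line per dir plus a final "total" line in KB.
find "$ROOT" -name svdb -type d -prune -exec du -csk {} + | tail -1

rm -rf "$ROOT"
```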


------------------------------------------------------------------------
DELETE ALL SVDB

# Dry run: "echo" prints each rm command instead of executing it
find /build/gold.233 -name svdb -exec echo /bin/rm -rf {} \;

/bin/rm -rf $GOLD/19-tile_array/16-Tile_MemCore/27-mentor-calibre-lvs/svdb
/bin/rm -rf $GOLD/19-tile_array/17-Tile_PE/27-mentor-calibre-lvs/svdb
/bin/rm -rf $GOLD/19-tile_array/36-mentor-calibre-lvs/svdb
/bin/rm -rf $GOLD/16-glb_top/9-glb_tile/22-mentor-calibre-lvs/svdb
/bin/rm -rf $GOLD/16-glb_top/24-mentor-calibre-lvs/svdb
/bin/rm -rf $GOLD/17-global_controller/19-mentor-calibre-lvs/svdb
/bin/rm -rf $GOLD/37-mentor-calibre-lvs/svdb
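To actually delete, drop the echo; adding -prune also stops find from trying to descend into a directory it has just removed. A self-contained sketch of the pattern on a scratch tree (paths hypothetical):

```shell
#!/bin/sh
set -e
ROOT=$(mktemp -d)
mkdir -p "$ROOT/19-tile_array/36-mentor-calibre-lvs/svdb"
mkdir -p "$ROOT/37-mentor-calibre-lvs/svdb"
touch "$ROOT/37-mentor-calibre-lvs/svdb/file.db"

# Dry run first, as in the experiment above:
find "$ROOT" -name svdb -type d -prune -exec echo /bin/rm -rf {} \;

# Real delete: same command without the echo.
find "$ROOT" -name svdb -type d -prune -exec /bin/rm -rf {} \;

find "$ROOT" -name svdb   # prints nothing: all svdb dirs are gone
rm -rf "$ROOT"
```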


------------------------------------------------------------------------
AFTER=33G

% du -hx --max-depth=0 /build/gold.233
33G     /build/gold.233

% df .
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb1       322G  239G   84G  74% /build

Haha well I tried using the runset directive *lvsCreateSVDB: 0 but it still created the full svdb directory.

It looks like there are more entanglements, including these ruleset commands that apparently "overrule" the runset directive:

MASK SVDB DIRECTORY "svdb" QUERY
MASK SVDB DIRECTORY "svdb" XRC SI
MASK SVDB DIRECTORY "svdb" XRC

So I plan to take the easy way out and just have Buildkite delete the svdb directory immediately after the LVS step completes. The upside is that we get the 60G back immediately; the downside is that there still needs to be 60G of free space before the LVS step can start.
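The Buildkite-side cleanup could live in a post-command hook along these lines. A sketch only: the hook path, label match, and build directory are assumptions, not the real garnet pipeline (BUILDKITE_LABEL is a standard agent environment variable):

```shell
#!/bin/sh
# Hypothetical .buildkite/hooks/post-command: reclaim svdb space
# right after any LVS step finishes.
case "${BUILDKITE_LABEL:-}" in
  *lvs*)
    find "${BUILD_DIR:-/build}" -name svdb -type d -prune \
      -exec /bin/rm -rf {} \;
    ;;
esac
```

When the step label does not contain "lvs" the hook is a no-op, so it is safe to run after every step.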

Filed pull request #782 to solve this problem.

Fixed-ish; see pull request #782.