This repository provides the evaluation setups for the MICRO22 artifact evaluation of the paper Sparseloop: An Analytical Modeling Approach to Sparse Tensor Accelerators. We provide a Docker environment and a Jupyter notebook for the artifact evaluation. Please contact timeloop-accelergy@mit.edu with any questions.
- Please install the Docker app.
- Clone the repository together with its submodules:

  ```
  git clone --recurse-submodules git@github.com:Accelergy-Project/micro22-sparseloop-artifact.git
  cd <cloned repo>
  ls docker/
  ```

  You should see the subdirectories in docker/ populated with actual source code instead of submodule pointers.
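  For reference, based on the build steps later in this README, the ls docker/ output should include at least the two submodule directories below (a hedged sketch; additional entries may also be present):

  ```
  accelergy-timeloop-infrastructure/
  timeloop-accelergy-pytorch/
  ```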
- Create a docker compose file by making a copy of the docker compose template file:

  ```
  cp docker-compose.yaml.template docker-compose.yaml
  ```
- Examine the instructions in docker-compose.yaml to set up the docker correctly, i.e., set the correct UID and GID (see the sketch below).
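  As a hedged sketch of this step (the exact variable names depend on the instructions inside docker-compose.yaml; USER_UID and USER_GID are assumptions here, not necessarily the file's actual keys), you can look up your numeric IDs and export them before starting the container:

  ```
  # Print your numeric user and group IDs
  id -u
  id -g

  # Hypothetical variable names: export them only if docker-compose.yaml
  # reads the UID/GID from the environment under these names
  export USER_UID=$(id -u)
  export USER_GID=$(id -g)
  ```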
We provide two options for obtaining the docker image. Please choose one of the methods listed below.
- We provide a pre-built image on Docker Hub; to use it, pull with:

  ```
  docker-compose pull
  ```
- Alternatively, build the images yourself. First, build the timeloop-accelergy infrastructure docker:

  ```
  cd docker/accelergy-timeloop-infrastructure
  make
  ```
- Then build the pytorch docker (which uses the timeloop-accelergy infrastructure as a basis):

  ```
  cd ../timeloop-accelergy-pytorch
  make
  cd ../../   # go back to root directory of the repo
  ```
To check whether the image was obtained successfully, run:

```
docker image ls
```

You should see mitdlh/timeloop-accelergy-pytorch with the tag micro22-artifact listed.
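A successful pull or build should produce a line like the following in the output (the image ID, creation time, and size below are placeholders, not actual values):

```
REPOSITORY                          TAG                IMAGE ID     CREATED     SIZE
mitdlh/timeloop-accelergy-pytorch   micro22-artifact   <image id>   <created>   <size>
```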
- Run:

  ```
  docker-compose up
  ```

  You should see the docker being set up.
- This docker uses Jupyter notebooks, and you will see a URL once the docker is up. Please copy and paste the 127.0.0.1 URL into a web browser of your choice to access the workspace (an example of the URL's shape follows below).
- If you experience any issues when bringing the page up, please try the troubleshooting notes in docker-compose.yaml.
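The printed URL typically has the shape below; the port and token here are illustrative assumptions, so copy the exact URL from your own terminal output:

```
http://127.0.0.1:8888/?token=<your token>
```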
We provide a Jupyter notebook for the experiments. Please navigate to workspace/2022.micro.artifact/notebook to run the experiments. Each cell in the notebook provides the background, instructions, and commands to run each evaluation with the provided scripts.
For each experiment, we give a very conservative estimate of how long the sweep will take. The input specifications and related scripts can be found in workspace/2022.micro.artifact/evaluation_setups. The easiest way to validate the outputs is to compare the generated figure/table to the corresponding figure/table in the paper, but we also provide a ref_outputs folder for each evaluation for a more detailed comparison of results if necessary (see the sketch below).
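As a hedged sketch (the per-evaluation directory layout is an assumption here; substitute the actual evaluation folder and its generated output directory), a recursive diff against the reference outputs gives a detailed comparison:

```
# Hypothetical paths: replace <evaluation> and outputs/ with the actual names
diff -r workspace/2022.micro.artifact/evaluation_setups/<evaluation>/outputs \
        workspace/2022.micro.artifact/evaluation_setups/<evaluation>/ref_outputs
```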