A question about the npy files used in spark_env
VioletLi opened this issue · 3 comments
Question:
Your code includes some npy files, such as "task_duration" and "stage_id_to_node_idx_map". Could you please tell me how these files are generated? I'm trying to reuse your code, but I don't know what they mean.
Thank you!
We load the TPCH jobs here: https://github.com/hongzimao/decima-sim/blob/master/spark_env/job_generator.py#L12-L15. We measured the task duration and job structure information by running those TPCH jobs on a real Spark cluster. For example, in task_duration_*.npy we record the task duration for each "wave" of task execution (e.g., the first wave, which does more IO, typically has longer task durations). Those files are just the job information loaded into the simulator. Hope this helps!
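If it helps, here's a minimal sketch of how one of those files can be inspected. The filename is illustrative, and the key layout (wave name -> {executor count -> list of measured durations}) is an assumption based on the discussion in this thread:

```python
import numpy as np

# Illustrative filename; the key layout is an assumption:
# wave name -> {num_executors -> list of measured durations}.
task_duration = np.load('task_duration_2g_0_stage_0.npy',
                        allow_pickle=True).item()  # pickled Python dict

for wave in ('first_wave', 'rest_wave', 'fresh_durations'):
    for num_executors, durations in sorted(task_duration[wave].items()):
        print(wave, num_executors, np.mean(durations))
```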
So does "first wave" mean time spent on IO, or something else? And what does "fresh duration" mean? Does it mean the whole time a task needs?
I also notice that the first wave has keys such as "0" and "40"; what do these mean? In addition, did you use the "stage_id_to_node_idx_map" files? I didn't find them in your code.
So does "first wave" mean time spent on IO, or something else? --- Yes.
And what does "fresh duration" mean? Does it mean the whole time a task needs? --- This is used here: lines 103 to 116 in c010dd7.
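Roughly, the idea is that an executor newly moved to a job has no warm state yet, so its first task draws from the "fresh" duration samples. A hedged sketch (not the actual code at that permalink; all names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task_duration(task_duration, num_executors, executor_is_fresh):
    # Illustrative only: a fresh executor (just assigned to this job)
    # samples from 'fresh_durations'; a warm one samples from the
    # steady-state 'rest_wave' measurements.
    wave = 'fresh_durations' if executor_is_fresh else 'rest_wave'
    return rng.choice(task_duration[wave][num_executors])
```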
I also notice that the first wave has keys such as "0" and "40"; what do these mean? --- IIRC, these are the task durations under different numbers of executors assigned to the node. When we simulate the runtime of a task, we find (or interpolate) the task duration based on the corresponding degree of parallelism.
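A minimal sketch of that lookup/interpolation, assuming the wave dict maps integer executor counts (the "0", "40" keys above) to lists of measured durations:

```python
import numpy as np

def lookup_duration(wave_dict, num_executors):
    """Find or interpolate the mean task duration for an executor count.

    Illustrative sketch only: assumes wave_dict maps integer executor
    counts to lists of measured task durations.
    """
    if num_executors in wave_dict:
        return float(np.mean(wave_dict[num_executors]))
    # Linearly interpolate the mean duration between the nearest
    # measured executor counts (np.interp clamps at the ends).
    keys = sorted(wave_dict)
    means = [np.mean(wave_dict[k]) for k in keys]
    return float(np.interp(num_executors, keys, means))
```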
In addition, did you use the "stage_id_to_node_idx_map" files? I didn't find them in your code. --- I used them for other bookkeeping, like generating the demo. It's just an index mapping for the nodes in the graph, I think. The simulator doesn't need this info if you can run the code without it.