Logging with SEM
akshitpatel01 opened this issue · 14 comments
Hi,
I was using SEM to run some ns-3 scripts and noticed that I wasn’t able to use the logging commands inside the ns-3 script. I looked into how ns-3 normally uses logging and found that it uses 'getenv' to extract the NS_LOG environment variable. I was wondering if something similar is available in SEM, or whether SEM has a different approach for enabling logging?
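For reference, this is roughly what I do outside SEM today to enable logging (a minimal sketch; the program name and log component below are just placeholders):

```python
import os
import subprocess

# Export NS_LOG before launching the simulation, so that ns-3 picks it up
# via getenv("NS_LOG"). The target and component names are placeholders.
env = dict(os.environ,
           NS_LOG="UdpEchoClientApplication=level_all|prefix_all")
subprocess.run(["./waf", "--run", "scratch/my-script"], env=env)
```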
Thanks.
As of now, logging is not integrated in the module, but it can be done relatively easily: we should make sure ns-3 is built using the debug profile, and we can specify what log components should be enabled by changing the code in the runner that actually runs the ns-3 simulations. Before we do this, though, I'd like to ask you some questions to better guide the design of this new feature.
- Can you expand on what your use case is for this? Do you plan to use logging to visualize and better understand what is going on in the simulation, or to correct bugs? How many components are you planning to enable for each simulation?
- Are you planning to use logging to debug single simulations (that is, only when issues come up for a certain parameter combination) or would you enable it to run whole simulation campaigns and always have logs available?
> - Can you expand on what your use case is for this? Do you plan to use logging to visualize and better understand what is going on in the simulation, or to correct bugs? How many components are you planning to enable for each simulation?
For my particular use case, I was planning to use logging to understand and visualize the underlying functions being called. As for the components to enable, it would be great if the new SEM feature supported the same mechanism as plain ns-3 (that is, the user provides the names of the components to enable, or '*' to enable all of them).
> - Are you planning to use logging to debug single simulations (that is, only when issues come up for a certain parameter combination) or would you enable it to run whole simulation campaigns and always have logs available?
Initially, I thought the logs could be enabled for the entire simulation campaign, but having the ability to enable them only for a certain parameter combination would give the user better control.
@DvdMgr I would like to contribute to adding this new feature to SEM. Can you please elaborate on the design of this new feature?
Thanks.
Sorry, I've been a bit busy these past few weeks. I should already have some code lying around that does something similar to this. I think a good starting point would be for me to publish it to a separate branch so you can take a look at it and play around a little bit - how does that sound?
I also had a few exams this last week; that's why I wasn't active. Anyway, this sounds great! Having reference code to build upon will surely help.
Ok, I pushed the code I already have on the logging branch fdd02ea.
In the end, I must have gone down this rabbit hole more than I remember doing 😅
You can see the main changes from the git diff - in a nutshell, I added a run_and_log command to the CampaignManager, which takes as input a set of parameters to run the simulation with and a list of log components to enable. This function will output the log file produced by the simulation, including the timestamp and the context. You can see how this is used in the examples/logging-example.py file.
The last line of Python code in this file does something extra: it parses the log file and runs a dash application to visualize the contents of the log, allowing you to check which functions you are interested in logging. It will print the address you should open in your browser in its log message; it should be something along the lines of http://127.0.0.1:8050/. Once you are done selecting the desired functions and contexts from the provided list (the first few entries), hit the update plot button, and it will visualize the selected output for you in the plot below.
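For reference, a call to the new function looks roughly like the sketch below (the parameter values and variable names are placeholders; the example file is the authoritative reference):

```python
# Rough usage sketch: run one simulation with the given parameter combination
# and the listed log components enabled, and get back the produced log file.
params = {'nWifi': 3, 'verbose': False}
components = ['UdpEchoClientApplication', 'UdpEchoServerApplication']
log_file = campaign.run_and_log(params, components)  # campaign is a CampaignManager
```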
The code at the moment is quite messy, but should work. I think the next big question we have to address now is whether we should keep this as a standalone feature, or allow the user to integrate it more with their campaigns (potentially running logging for multiple parameter combinations simultaneously).
I went through the commits b7cf560 and fdd02ea in the logging branch.
> The code at the moment is quite messy, but should work. I think the next big question we have to address now is whether we should keep this as a standalone feature, or allow the user to integrate it more with their campaigns (potentially running logging for multiple parameter combinations simultaneously).
From my perspective, either of these approaches can be justified. Users who aim to debug their scripts will benefit from a standalone logging feature, whereas users who wish to visualize their scripts at runtime will benefit if we allow them to integrate logging with their campaigns.
If I am not wrong, the code implemented in the aforementioned commits works as a standalone feature. I had a few observations (possible starting points to work on) regarding this:
- The user can specify which log components he wants to enable but cannot specify the severity levels/prefixes of the logs.
- Currently, the logs are stored in a temporary file. Also, the runner creates two files 'stdout' and 'stderr' in the campaign directory for storing the output of the scripts. Can we have some sort of 'stdlog' file that will store the logs?
I still haven't looked into the dash application (I'll do this soon), but I think we can add the logging feature first and then move on to parsing the log files.
I think it would be awesome if we could keep both the features that you discussed. What are your thoughts on this? If you think it would be better to integrate logging with the campaign directly, I could start working on it next.
> The code at the moment is quite messy, but should work. I think the next big question we have to address now is whether we should keep this as a standalone feature, or allow the user to integrate it more with their campaigns (potentially running logging for multiple parameter combinations simultaneously).
If we choose the second option and allow the users to integrate logging with their campaign, this could be done by passing two additional variables, logging (boolean; default = False) and log_components (a dictionary of the form 'LogComponent': 'severityLevel|prefixLevel'), to the CampaignManager's new function. Using a dictionary for specifying log components will allow the user to set log levels easily. These new variables can then be passed to the SimulatorRunner constructor (from the create_runner function in manager.py), and the NS_LOG environment variable can then be set in the run_simulation function itself (if logging=True). Setting the environment variable in the run_simulation function will also allow us to write the logs to a separate 'stdlog' file.
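Something along these lines (just a sketch of the idea; the helper name and the exact place where it would live are assumptions, not existing SEM code):

```python
import os

# Build the NS_LOG value from a {'LogComponent': 'severityLevel|prefixLevel'}
# dictionary; ns-3 separates entries in NS_LOG with ':'.
def build_ns_log(log_components):
    return ":".join("%s=%s" % (component, levels)
                    for component, levels in log_components.items())

# Inside run_simulation, something like this could set up the environment:
env = dict(os.environ)
env["NS_LOG"] = build_ns_log({
    "UdpEchoClientApplication": "info|prefix_all",
    "UdpEchoServerApplication": "level_all|prefix_all",
})
```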
What are your thoughts on this approach?
Hey @DvdMgr,
Does either of the two approaches above seem good to you?
Also, I was playing around with the logging code and noticed that the parse_log function assumes that the contents in the log file must start with a '+timestamp'. This is not the case for all log files, is it? I added a plain NS_LOG_INFO in first.cc and the code failed to parse the log file.
Thanks.
> Setting the environment variable in the run_simulation function will also allow us to write the logs to a separate 'stdlog' file.
Aren't logs always directed to the stderr stream by default? Do you know if there is a way to separate logs from errors in ns-3? If we integrate logging as a main feature, I would be ok to have the logs stay in the stderr file in the results folder, but obviously it would be even better if we could redirect them to a separate file.
> the parse_log function assumes that the contents in the log file must start with a '+timestamp'. This is not the case for all log files, is it?
Correct, it's only the case if prefix_time is enabled. You mention elsewhere that we could let the users specify the severity and prefix level. While I agree that the prefix should be customizable, I would rather we always enable all prefixes, since this would guarantee a coherent format of the logs and make them significantly easier to parse! It would be a bit of a nightmare to provide visualization functions for logs without timestamps, for instance...
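To give an idea of why this helps: with all prefixes enabled, every line could be parsed with a single regular expression, something like the sketch below (the exact prefix layout is my assumption of the usual ns-3 format, so the pattern would need to be checked against real output):

```python
import re

# Assumed line shape:
#   +1.000000000s 2 UdpEchoClientApplication:Send(): [INFO ] message
LOG_LINE = re.compile(
    r"^\+(?P<time>\d+\.\d+)s\s+"                          # time prefix
    r"(?P<context>\S+)\s+"                                # node/context prefix
    r"(?P<component>[^:]+):(?P<function>[^(]*)\(\):\s*"   # function prefix
    r"(?:\[(?P<level>[^\]]+)\]\s*)?"                      # optional severity prefix
    r"(?P<message>.*)$")

def parse_line(line):
    match = LOG_LINE.match(line)
    return match.groupdict() if match else None
```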
Another thought on the issue of integrating logging into campaigns: I think we must also consider how the feature will be used. If logging is used mostly for debugging, users will probably want to re-run the same simulation over and over again, overwriting the previously saved result. Furthermore, if users want to change the amount of logging after they run an initial set of simulations, we might have a campaign folder that contains a mixture of different logging levels and classes, which could make things harder to manage...
In other words, I think we should consider the following question: is logging a part of the simulation campaign running and result plotting process? In case we decide it is, it might be worth it to integrate it into the CampaignManager. If, instead, we decide that logging is a component that has more to do with inspecting single runs, or with understanding the influence of a parameter on simulation behavior in a small subset of parameters, it might be better to keep it as a "standalone" feature only (of course, in this case, work is still needed to make the feature as usable as possible). As of now, I would lean toward the second option, but also consider the following paragraph as a valid option.
A third possibility: we could also consider specializing the CampaignManager class to be either an OptimizedCampaignManager or a DebugCampaignManager. In optimized mode, ns-3 would be compiled using the optimized profile, we would check the status of the repo before running simulations, and we would only be interested in results. If a CampaignManager is configured in debug mode, instead, we could enable logging, and make overwriting existing simulation runs the default.
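In code, the specialization could look something like this (purely illustrative; none of these attributes exist in SEM today):

```python
from sem import CampaignManager

# Purely illustrative sketch of the split; the attribute names are made up.
class OptimizedCampaignManager(CampaignManager):
    build_profile = "optimized"   # fast builds, repo status checked before runs
    overwrite_by_default = False  # keep existing results

class DebugCampaignManager(CampaignManager):
    build_profile = "debug"       # required for NS_LOG to take effect
    overwrite_by_default = True   # re-running the same parameter point is the norm
    logging_enabled = True
```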
> Aren't logs always directed to the stderr stream by default? Do you know if there is a way to separate logs from errors in ns-3? If we integrate logging as a main feature, I would be ok to have the logs stay in the stderr file in the results folder, but obviously it would be even better if we could redirect them to a separate file.
From the little research I did, it seems that errors are directed to the std::cerr stream and logs are directed to the std::clog stream, and by using the rdbuf function I was able to separate logs from errors in ns-3 scripts. I redirected std::clog to a clog_file and std::cerr to a cerr_file at the beginning of the ns-3 script, which resulted in all the logs getting dumped to clog_file and the errors to cerr_file separately. Though I am not very sure how to implement this in Python or how to integrate it with SEM.
> Correct, it's only the case if prefix_time is enabled. You mention elsewhere that we could let the users specify the severity and prefix level. While I agree that the prefix should be customizable, I would rather we always enable all prefixes, since this would guarantee a coherent format of the logs and make them significantly easier to parse! It would be a bit of a nightmare to provide visualization functions for logs without timestamps, for instance...
Totally agreed, parsing would be very difficult if prefix_time (or prefix in general) is left for the users to set. So it seems we can allow users to set severity_levels (as it will not change the format of the logs) and keep the prefix_level to all, can't we?
> Another thought on the issue of integrating logging into campaigns: I think we must also consider how the feature will be used. If logging is used mostly for debugging, users will probably want to re-run the same simulation over and over again, overwriting the previously saved result. Furthermore, if users want to change the amount of logging after they run an initial set of simulations, we might have a campaign folder that contains a mixture of different logging levels and classes, which could make things harder to manage...
If I understand this correctly, you think the users might be able to generate a campaign folder with different logging levels. I don't really understand how this is possible. Can you elaborate on this a little?
> In other words, I think we should consider the following question: is logging a part of the simulation campaign running and result plotting process? In case we decide it is, it might be worth it to integrate it into the CampaignManager. If, instead, we decide that logging is a component that has more to do with inspecting single runs, or with understanding the influence of a parameter on simulation behavior in a small subset of parameters, it might be better to keep it as a "standalone" feature only (of course, in this case, work is still needed to make the feature as usable as possible). As of now, I would lean toward the second option, but also consider the following paragraph as a valid option.
Personally, both the second and third options seem good to me. But I think we should go with your call regarding whether to integrate logging with the CampaignManager or keep it as a standalone feature, as you would have a better understanding of what ns-3 users might want to do with this feature.
> I am not very sure how to implement this in Python or how to integrate it with SEM.
Yeah, this sounds like a change that needs to happen in ns-3 before we can use it in SEM. If we could tell the ns-3 executable to dump logs in a file through a command line argument or a flag, for instance, it would be enough to pass that flag when we execute simulations with SEM. This way, errors would still go through the stderr pipe and logs would be written to file.
> So it seems we can allow users to set severity_levels (as it will not change the format of the logs) and keep the prefix_level to all, can't we?
This sounds like the better solution to me. By the way, this will generate heavier logs, which brings up another opportunity - it would be interesting if we could compress the written log file. Since they have a lot of repeating structure (always timestamp, followed by class and function names, followed by very similar messages), I suspect we can get very good reductions in file size if we decide to compress. Anyway, this is just another idea to throw in the bucket :)
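For what it's worth, the standard library already makes the compression part cheap; a minimal sketch (the file name handling is a placeholder):

```python
import gzip
import shutil

# Compress a finished log file, producing e.g. stdlog.gz next to the original.
def compress_log(path):
    with open(path, "rb") as src, gzip.open(path + ".gz", "wb") as dst:
        shutil.copyfileobj(src, dst)
```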
> If I understand this correctly, you think the users might be able to generate a campaign folder with different logging levels. I don't really understand how this is possible. Can you elaborate on this a little?
Sure! I think the solution you describe (i.e., fully specifying the logging spec when the campaign object is first created) is fine if you want to run all the simulations you are interested in with the same set of classes and functions enabled for logging. However, say you look at results and decide that you want to log additional data for a subset of simulations. How would you do this? If you allow users to change the logging parameters in the campaign and keep their existing simulations, you would get a mixture of logging depth across your simulations. If you instead want to enforce all simulations to have been performed with the same logging parameters, you would need to delete all previously run simulations, and only keep the ones with the additional logging components enabled - and this might surprise the user.
> Yeah, this sounds like a change that needs to happen in ns-3 before we can use it in SEM. If we could tell the ns-3 executable to dump logs in a file through a command line argument or a flag, for instance, it would be enough to pass that flag when we execute simulations with SEM. This way, errors would still go through the stderr pipe and logs would be written to file.
True. I tried to find a way to redirect these streams through a wrapper function (that is, without changing the actual script at all), but didn't find anything useful. For now, I guess we don't have any option (because telling users to add some code to their scripts just for logging is not feasible).
> This sounds like the better solution to me. By the way, this will generate heavier logs, which brings up another opportunity - it would be interesting if we could compress the written log file. Since they have a lot of repeating structure (always timestamp, followed by class and function names, followed by very similar messages), I suspect we can get very good reductions in file size if we decide to compress. Anyway, this is just another idea to throw in the bucket :)
Then this is sorted. We will allow the user to set the logging levels but keep the prefix_level set to all. Also, the idea of compressing these files sounds great.
> Sure! I think the solution you describe (i.e., fully specifying the logging spec when the campaign object is first created) is fine if you want to run all the simulations you are interested in with the same set of classes and functions enabled for logging. However, say you look at results and decide that you want to log additional data for a subset of simulations. How would you do this? If you allow users to change the logging parameters in the campaign and keep their existing simulations, you would get a mixture of logging depth across your simulations. If you instead want to enforce all simulations to have been performed with the same logging parameters, you would need to delete all previously run simulations, and only keep the ones with the additional logging components enabled - and this might surprise the user.
Got it. So if we choose to do this, we will have to rerun all the simulations associated with that campaign (not just the subset the user would hope for) with either the additional logs or the previous logs plus the additional logs (I guess this is the third option from your previous message), or find a way to efficiently organize/manage simulations with different log levels. So, should we move forward with the second or the third option? What do you think?