Simulations limited by PID_MAX_LIMIT
#3323
Comments
You're probably already using it (and it's included in simulations generated by tornettools), but for reference, just mentioning the existence of the …
Right, I think tornettools does this (or I did and forgot), but yes, tor.common.torrc has …
I think that was us + Ian :) in Table 2 from our USENIX paper. We don't typically run a tor+tgen for those 792k users; we instead use tornettools. I wonder why they chose a hard upper limit for …
On a more serious note, many people have access to smaller machines, while not many have access to giant near-supercomputers. So I think designing for the general case is the correct strategy for Shadow. Thus, multi-machine simulation support would be the feature I would support on the Shadow side, and it would have other benefits as well: it may allow people to utilize many small, cheaper machines more effectively. For those of us wanting to run a crazy number of simulations on one machine, the hypervisor approach could work. I never played around with the type of configuration we want, but it might be worth documenting if we figure out how to do it.
For the multiple-simulation use-case, I wonder if this limit is actually global or per PID namespace? https://www.man7.org/linux/man-pages/man7/pid_namespaces.7.html If the latter, then putting each sim in its own PID namespace would at least be a somewhat lighter-weight solution than putting each in a full VM.
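One quick way to probe the per-namespace question is to read `pid_max` from inside a fresh PID namespace. This is a sketch, not a definitive test: it assumes util-linux's `unshare(1)` is installed, and the unprivileged form only works where the kernel permits user namespaces (otherwise run the inner command under `sudo unshare --pid --fork --mount-proc`).

```shell
#!/bin/sh
# Sketch: compare pid_max as seen from inside a new PID namespace.
# --mount-proc remounts /proc so the namespace sees its own view.
if unshare --user --map-root-user --pid --fork --mount-proc true 2>/dev/null; then
    unshare --user --map-root-user --pid --fork --mount-proc \
        sh -c 'echo "pid_max in new namespace: $(cat /proc/sys/kernel/pid_max)"'
else
    echo "unprivileged PID namespaces unavailable on this system"
fi
```

One caveat worth noting: per pid_namespaces(7), a process has one PID in each namespace from its own up to the root, so every task still consumes a PID in the root namespace. That suggests namespaces change where `pid_max` is read, but not the global `PID_MAX_LIMIT` ceiling on total tasks.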
Agreed, more commonly available setups should definitely be the priority. I was just discussing it with Ian, and he wanted me to make sure this limitation is documented somewhere, since it did ultimately limit the size of experiments we could run in a feasible amount of time. :)
That's a good idea. I suspect there will still be some kernel data structure somewhere that won't allow it, but I'll try to test that and see what happens.
From the `proc` man page:

> /proc/sys/kernel/pid_max (since Linux 2.5.68)
> This file specifies the value at which PIDs wrap around (i.e., the value in this file is one greater than the maximum PID). PIDs greater than this value are not allocated; thus, the value in this file also acts as a system-wide limit on the total number of processes and threads. [...] On 64-bit systems, pid_max can be set to any value up to 2^22 (`PID_MAX_LIMIT`, approximately 4 million).

This is a hard limit that can't be modified the way many of the other limits in Linux can. It essentially bounds the size of

`experiment threads * number of parallel experiments`

for one machine, where "experiment threads" is the total number of threads across all processes in the experiment (less a little headroom for the rest of the OS). I ran into this by running arguably too many experiments at once on one machine (though the machine had plenty of RAM to spare), but I suspect this might even prevent running large single experiments. When's the last time someone tried to run a 100% Tor network?

This seems unlikely to have a quick fix. Some possible solutions are: …

I suppose running pre-phantom Shadow is another workaround, but that sounds like a bad and increasingly difficult idea.

In the meantime, it's probably a good idea to document this limit.
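The bound described above can be sketched numerically. The thread count below is a made-up illustrative figure, not a number from this issue; only the `PID_MAX_LIMIT` value of 2^22 comes from proc(5).

```shell
#!/bin/sh
# PID_MAX_LIMIT on 64-bit Linux is 2^22 (proc(5)).
PID_MAX_LIMIT=$((1 << 22))      # 4194304
# Assumed example: each experiment runs ~40k threads total across all
# managed processes (tor, tgen, ...); purely illustrative.
THREADS_PER_EXPERIMENT=40000
HEADROOM=4000                   # rough allowance for the rest of the OS
echo "parallel experiments <= $(( (PID_MAX_LIMIT - HEADROOM) / THREADS_PER_EXPERIMENT ))"
# prints "parallel experiments <= 104"
```

With these assumed numbers, the ceiling is about a hundred concurrent experiments per machine, regardless of how much RAM or CPU is left over, which matches the experience described above.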