CPU usage time, simulation time and determinism #1675
-
As far as I understand, Shadow can simulate the OS kernel's CPU usage and capture processing time. For example, consider an application process that creates 100 signatures (which consumes some time) and then sends a message to other processes. This amount of time can be captured by Shadow's simulated OS kernel, e.g., by pushing back causally dependent events in the event queue by exactly this amount (correct me if I am wrong, but that was my impression of the influence of CPU usage on simulation time). I was wondering how this time can be captured without disturbing the determinism of the simulation, given that time measurements are usually a source of randomness?
-
To the best of my knowledge, Shadow doesn't currently support CPU time simulation, so all computations in the simulation essentially take 0 sim time. If you need to run an experiment that requires taking CPU time into account, you have two options:

1. measure the real CPU time the operation takes at runtime, and advance simulated time by that amount (e.g., by sleeping for the measured duration), or
2. benchmark the operation offline, and replace it in the simulation with a fixed, pre-measured delay.

The first option has the advantage of modeling the actual, dynamic usage under deployment; the second option has the advantage of being deterministic (and of running on cheaper hardware).
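For the second option, here is a minimal sketch in C. The constant `SIGN_COST_USEC` and the function `simulated_sign` are hypothetical illustrations, not Shadow APIs; the only Shadow-specific fact relied on is that `usleep()` inside a simulated process advances the simulation clock deterministically.

```c
/* Sketch of option 2: replace the real signing work with a fixed,
 * pre-measured delay. Under Shadow, usleep() is intercepted and
 * deterministically advances simulated time. */
#include <unistd.h>

#define SIGN_COST_USEC 1200 /* hypothetical cost measured offline: ~1.2 ms per signature */

static void simulated_sign(void) {
    /* Advance simulated time by the pre-measured cost instead of
     * actually computing the signature. */
    usleep(SIGN_COST_USEC);
}

int main(void) {
    for (int i = 0; i < 100; i++) {
        simulated_sign(); /* 100 signatures -> ~120 ms of sim time */
    }
    /* ... then send the message to the other processes ... */
    return 0;
}
```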
-
The Shadow v1.x design has a preliminary CPU modeling feature that attempted to take into account the CPU usage of the plugins running in Shadow. It works roughly as follows: Shadow measures the real time that elapses while plugin code runs, converts it into a virtual CPU delay, and pushes back the plugin's subsequent events once the accumulated delay crosses a configured threshold (see the sketch after this comment).

This CPU model was disabled by default, and could be enabled by setting a configuration option. The problem with this CPU modeling approach is that the measured times depend on the host machine and its current load, so they are a source of non-determinism and the results are not reproducible across machines.

So we disable the algorithm by default in Shadow v1.x. Also, we have temporarily removed the feature altogether in newer Shadow versions.
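For illustration, here is a minimal sketch of that kind of delay accounting. This is not Shadow's actual implementation; all type, field, and function names are hypothetical. Note that the real-time measurement in `run_event` is exactly the source of non-determinism discussed above.

```c
/* Hypothetical sketch of CPU-delay accounting, not Shadow's code. */
#include <stdint.h>
#include <time.h>

typedef struct {
    double   host_hz;      /* measured host CPU frequency */
    double   virtual_hz;   /* configured virtual CPU frequency */
    uint64_t delay_ns;     /* accumulated virtual CPU delay */
    uint64_t threshold_ns; /* delay above which events are pushed back */
} CpuModel;

static uint64_t now_ns(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
}

/* Run one plugin event and account for its real execution time. */
static void run_event(CpuModel *cpu, void (*plugin_event)(void)) {
    uint64_t start = now_ns();
    plugin_event();                      /* execute plugin code */
    uint64_t elapsed = now_ns() - start; /* real elapsed time: depends on
                                            the host, hence non-deterministic */
    /* Scale real time to the virtual CPU's speed and accumulate. */
    cpu->delay_ns += (uint64_t)(elapsed * (cpu->host_hz / cpu->virtual_hz));
}

/* When scheduling the plugin's next event, push it back by the
 * accumulated delay once the threshold is crossed. */
static uint64_t next_event_time(CpuModel *cpu, uint64_t sim_time_ns) {
    if (cpu->delay_ns >= cpu->threshold_ns) {
        sim_time_ns += cpu->delay_ns;
        cpu->delay_ns = 0;
    }
    return sim_time_ns;
}
```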
-
Can see