
Recommendations on (tracing) approaches to help create vm2 mocks/profiles? #465

Open

zxti opened this issue Aug 28, 2022 · 0 comments

zxti commented Aug 28, 2022

I am interested in using vm2 to sandbox a large, complex application: a generic Node web service that evaluates untrusted user code, where I want to isolate user requests from each other and from the "host." I imagine this is a common use case. Unfortunately, the surface area is fairly large, so it will take some effort to define the right set of mocks, and that set will likely be large.

One strategy is to keep running the application on representative workloads, manually adding a mock for each standard library function/method that fails. Ideally these mocks permit only the specific invocations expected, and likewise mock all returned values, so that methods and property getters cannot expose unexpected surface area.
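For concreteness, here is roughly the shape of that setup, using NodeVM's `require.mock` option (a minimal sketch; the mocked module, path, and return values are just placeholders):

```js
const { NodeVM } = require('vm2');

const vm = new NodeVM({
  console: 'inherit',
  require: {
    external: false,   // no external modules
    builtin: ['fs'],   // only builtins that have a mock below
    mock: {
      // Each failure observed on a representative workload gets a
      // hand-written mock here, permitting only the expected calls.
      fs: {
        readFileSync(path, encoding) {
          if (path !== '/app/config.json') {
            throw new Error(`unmocked fs.readFileSync(${path})`);
          }
          return '{}';
        },
      },
    },
  },
});

vm.run(`
  const fs = require('fs');
  console.log(fs.readFileSync('/app/config.json', 'utf8'));
`, 'sandbox.js');
```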

The hard part of this approach is that when a mock returns an object, every method and property getter on that object must be mocked as well. This is easy to miss, since some of these object trees are large. (It is also tedious.)
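One mitigation I have been considering is a deny-by-default wrapper: every object a mock returns gets wrapped in a Proxy that throws on any property I have not explicitly provided, so a missed getter fails loudly instead of silently leaking host surface. A sketch (the `denyByDefault` name is mine):

```js
// Wrap a mock so that any property access not explicitly provided throws,
// recursively, so missed getters on nested objects fail loudly.
function denyByDefault(obj, path = 'mock') {
  return new Proxy(obj, {
    get(target, prop, receiver) {
      // Let engine-internal symbol lookups (e.g. Symbol.toStringTag) pass.
      if (typeof prop === 'symbol') return Reflect.get(target, prop, receiver);
      if (!(prop in target)) {
        throw new Error(`unmocked access: ${path}.${String(prop)}`);
      }
      const value = Reflect.get(target, prop, receiver);
      return value !== null && typeof value === 'object'
        ? denyByDefault(value, `${path}.${String(prop)}`)
        : value;
    },
  });
}
```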

Are there any generally suggested approaches to building the right sandboxed environment? Or is there a better way to think about this?

Is there perhaps a tool that can "strace" calls into the sandbox's environment, to surface the remaining entry points that should be mocked?
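Absent an existing tool, the closest thing I can think of is a logging Proxy over the mocks themselves: run the representative workloads with every mock wrapped in a tracer, and the log becomes the list of entry points that still need attention. Another sketch (`trace` is again a name I made up):

```js
// "strace"-style logging: record every property access and call on a
// mock tree, so representative workloads reveal what still needs mocking.
function trace(obj, path = 'mock', log = console.error) {
  return new Proxy(obj, {
    get(target, prop, receiver) {
      const value = Reflect.get(target, prop, receiver);
      if (typeof prop === 'symbol') return value; // skip engine-internal lookups
      log(`get  ${path}.${String(prop)}`);
      if (typeof value === 'function') {
        return (...args) => {
          log(`call ${path}.${String(prop)}(${args.length} args)`);
          return value.apply(target, args);
        };
      }
      return value !== null && typeof value === 'object'
        ? trace(value, `${path}.${String(prop)}`, log)
        : value;
    },
  });
}

// Usage: pass traced mocks into the sandbox, e.g. (fsMock being whatever
// mock object you have built up so far):
// new NodeVM({ require: { builtin: ['fs'], mock: { fs: trace(fsMock, 'fs') } } });
```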
