
Running tests remotely (e.g. in docker) #13

Closed
cspotcode opened this issue Sep 22, 2018 · 15 comments


@cspotcode

My team needs to run mocha tests in a docker container. The containerized environment is configured with special CLI tools and networked services that our code must access. I've created a VSCode launch.json configuration that runs mocha via docker exec, attaches the debugger, maps sourcemaps correctly, and lets us set breakpoints. It works! But it's incompatible with your great sidebar, so we can't e.g. debug a single test case.

Here's what I've tried so far:

Implementing a shim script for "execPath" that emulates "node" but actually invokes it inside our docker container (docker exec $containerId node). Unfortunately:

  • docker exec doesn't forward the extra file descriptor used by child_process.fork(), so IPC is broken. I'm trying to create a multiplexer / demultiplexer so that multiple file descriptors can be tunneled over stdin/stdout into the docker container. Usage looks something like this: multiplex -- docker exec $containerId demultiplex -- node ./node_modules/bin/_mocha

  • My shim script needs to map host paths into container paths. This is a hack and requires JSON.parse()-ing the argv array. I also need to download a copy of vscode-mocha-test-adapter into my project's source tree so that "node" within the container can run "./worker/{runTests,loadTests}.js". It would be great if this happened automatically via my debug configuration's "localRoot" and "remoteRoot."

  • Docker containers cannot forward additional ports after they've started. So I'm stuck using the same port or range of ports for node's debugger protocol. I need a way to set the port instead of getting one at random.

  • The debugger configuration needs the appropriate sourcemap options so that breakpoints and line numbers are correct. Our configuration looks something like this:

    "version": "0.2.0",
    "configurations": [
        {
            "name": "Run all tests in Docker",
            "type": "node",
            "request": "launch",
            "runtimeExecutable": "docker-compose",
            "runtimeArgs": [
                "exec", "-T", "test-container", "./runInWorkspace.sh",
                "node", "--inspect-brk=0.0.0.0:43671", "./node_modules/mocha/bin/_mocha"
            ],
            "protocol": "inspector",
            "port": 43671,
            "timeout": 60000,
            
            // Mapping between host paths and docker container paths
            "localRoot": "${workspaceFolder}",
            "remoteRoot": "/home/circleci/workspace/",
            "sourceMapPathOverrides": {
                "/home/circleci/workspace/*": "${workspaceRoot}/*"
            },
            // Map between .ts and .js
            "sourceMaps": true
        }
    ]
}

Ideas to support this use-case

Support a user-specified "template" launch configuration for debugging. For example, set "mochaExplorer.launchTemplateId": "docker-debug" and it'll inherit the right "localRoot", "remoteRoot", "port", etc options. This solves most of the issues.

Add the ability to install ./worker/ scripts via npm so they can be run from within the docker container. Alternatively, maybe there's an easy way to pipe them into node via a mix of "node -e" and IPC? That way, the worker process doesn't even need filesystem access to the worker scripts.

Support IPC over stdin / stdout, so that it's compatible with docker exec. If this is too complex, I'll keep using my multiplexer / demultiplexer script.


I realize this is a ton to ask. If we add a few, relatively simple features to vscode-mocha-test-adapter, I can finish my "execPath" shim scripts, and the combination of the two will do the trick. I am happy to help implement these features, too. What do you think?

@hbenl
Owner

hbenl commented Sep 22, 2018

First of all: Wow! It would be fascinating to make this work.
Some initial thoughts:

  • a user-specified launch config should be easy
  • instead of node's IPC channel we could use IPC over a TCP connection - I experimented with this once as a possible workaround for nodejs/node#21671 (crashes in IPC between Node <= 10.5.0 and Node >= 10.6.0); it's quite simple really
  • to "install" worker scripts: maybe your shim script could send the worker script via stdin to the process running in docker and that process would then use node -e or eval() to execute it

So... if IPC could be run over TCP and you had the ability to specify your own launch config for debugging, do you think you'd have everything you need?

@cspotcode
Author

Thanks for the quick reply. I think I'd only need a couple other things:

  • When you send file paths to the worker, convert them using the launch config's "localRoot" and "remoteRoot." Alternatively, spec the worker API so I can wrap it in my own script that does the right mapping.
  • Let me specify a fixed TCP port to use for IPC, since docker doesn't allow me to open random ports after the container has booted.
  • Ensure the worker scripts are self-contained and don't require() any external files, so I can send them via stdin. It should be straightforward to bundle them with rollup or webpack.

As a first step, I can submit a PR to bundle the worker scripts. I've used webpack extensively so it shouldn't be too difficult.

@hbenl
Owner

hbenl commented Sep 24, 2018

Let me specify a fixed TCP port to use for IPC

Yes, that's the plan: make the port fixed and configurable.

As for the path conversion and the bundling: I don't want to add any more complexity to this adapter
than is absolutely necessary to support this (very interesting but also unusual) scenario. You should
be able to create those bundles yourself and also do the path conversion (e.g. in your shim script).

@hbenl
Owner

hbenl commented Sep 29, 2018

I've created a branch called remote-worker that contains 2 new configuration options:

  • mochaExplorer.ipcPort lets you set the port used for IPC over TCP
  • mochaExplorer.debuggerConfig lets you set the name of a launch.json configuration for debugging

The IPC connection is currently established by the worker, i.e. the adapter acts as a TCP server and the worker as a client. So you don't have to open a port for this in your docker config, but you have to somehow forward the 127.0.0.1:<ipcPort> port from docker to the corresponding host port. If that isn't possible, I could add a mochaExplorer.ipcHost option that would tell the worker to connect to <ipcHost>:<ipcPort> instead of 127.0.0.1:<ipcPort>. Or I could reverse the roles, i.e. the worker would be the server and the adapter the client of the IPC connection.

This is the default configuration used for debugging - use this as a template for your own debug config:

{
	name: 'Debug Mocha Tests',
	type: 'node',
	request: 'attach',
	port: 9229,
	protocol: 'inspector',
	timeout: 30000,
	stopOnEntry: false
}

(if you change the port, you'll also have to set mochaExplorer.debuggerPort to the same value - the adapter currently isn't clever enough to fetch the port from the launch config)

I like the idea of bundling the worker script with its dependencies, but there is a problem: the worker script contains a dynamic require for mocha, because you can configure which mocha version to use - this means that the bundler can't know which version of mocha to include in the bundle. To work around that, maybe I could move the dynamic require into a separate file and the bundler config could tell it to replace that file with one that does a normal (not dynamic) import of mocha. I think this should be possible using e.g. webpack's resolve.alias option, but I'm not sure - what do you think?
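To illustrate the resolve.alias idea (all file and module names here are hypothetical): the worker would always import a 'mocha-provider' module containing the dynamic require, and the bundle build would alias it to a file that does a plain static require('mocha'):

```javascript
// webpack.config.js - sketch only; entry points and provider file names are made up
const path = require('path');

module.exports = {
  target: 'node', // keep Node built-ins (fs, net, ...) as plain requires
  entry: './src/worker/loadTests.js',
  output: {
    path: path.resolve(__dirname, 'out'),
    filename: 'loadTests.bundle.js',
  },
  resolve: {
    alias: {
      // In the bundle, the module that does the dynamic require of a
      // configurable mocha path is swapped for one with a static require.
      'mocha-provider$': path.resolve(__dirname, 'src/worker/mocha-provider.static.js'),
    },
  },
};
```

webpack's NormalModuleReplacementPlugin would be another way to do the same swap.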

Finally, there is still the issue of converting the paths - this has to be done for the arguments to the worker script, but also for the events that the worker sends back to the adapter. I could write some utility methods that help with that.

@cspotcode
Author

Thanks! I'm taking a look at the code.

you have to somehow forward the 127.0.0.1:&lt;ipcPort&gt; port from docker to the corresponding host port.

Hmm, my shim script can probably establish a reverse-tunnel using socat or some sort of pure-node equivalent. I'm playing around with https://npmjs.im/netcat.

I like the idea of bundling the worker script with its dependencies

I prototyped a worker bundler using rollup; I'll share it to see what you think. As much as I love webpack, rollup's config in this case was simpler.

the worker script contains a dynamic require for mocha

For require()ing mocha, I think it makes sense to simply not bundle it; declare it as an "external." If a user wants to run the worker remotely, they can configure the path to mocha on the remote. Mocha's always going to be installed locally into your project's node_modules, so the docker container should always have a copy of mocha. The only thing it won't have is a copy of the worker scripts, so they're the only thing that needs to be sent over the wire.
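A sketch of that "external" approach with webpack (file names hypothetical): mocha stays out of the bundle and is resolved from the remote side's own node_modules at runtime:

```javascript
// webpack.config.js - illustrative only
module.exports = {
  target: 'node',
  entry: './src/worker/runTests.js',
  output: { filename: 'runTests.bundle.js' },
  externals: {
    // Emit a plain `require('mocha')` in the bundle output instead of
    // inlining mocha, so the container supplies its own installed copy.
    mocha: 'commonjs mocha',
  },
};
```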

there is still the issue of converting the paths

If you want to keep your code as simple as possible, you can expose a "path conversion plugin" API. Then I can implement a localRoot to remoteRoot converter plugin. VSCode's debugger already uses localRoot and remoteRoot to handle this; it's a shame it doesn't expose a path conversion API that we can use.

arguments to the worker script

Right now the arguments are being passed as positional arguments. You want to keep your worker as simple as possible (understandable) so my shim script will need to parse, convert, and re-serialize these args.

The problem is, the index of this.log.enabled changed from 6 to 7 because the IPC port was inserted. If I write the shim script today, it might break if/when you add more options in the future. What if all options are passed as a single JSON string? For example:

const childProc = fork(
    require.resolve('./worker/loadTests.js'),
    [
        JSON.stringify({ testFiles, mochaPath, mochaOpts, monkeyPatch, ipcPort: ipcPort || null, logEnabled: this.log.enabled })
    ],

This makes it simpler for shim scripts to parse, mutate, and re-serialize the args. Unknown options can be passed through verbatim. This doesn't eliminate the possibility of breakage but it reduces it. And I think it keeps your code just as simple as it is today.
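With JSON-encoded args, the shim's path mapping reduces to a small pure function. The field names below mirror the snippet above; the localRoot/remoteRoot prefix mapping is illustrative:

```javascript
// Rewrite host paths in the JSON-encoded worker arguments before
// forwarding them into the container (sketch; assumes simple prefix mapping).
function convertWorkerArgs(argvJson, localRoot, remoteRoot) {
  const args = JSON.parse(argvJson);
  const mapPath = (p) =>
    p.startsWith(localRoot) ? remoteRoot + p.slice(localRoot.length) : p;
  if (Array.isArray(args.testFiles)) {
    args.testFiles = args.testFiles.map(mapPath);
  }
  // Unknown options pass through verbatim, so new fields don't break the shim.
  return JSON.stringify(args);
}
```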

@hbenl
Owner

hbenl commented Sep 30, 2018

Right now the arguments are being passed as positional arguments.

Well, it was the straightforward thing to do when I started this adapter, but I agree that I should move this to a single JSON string now.

I'm planning to write (and maintain) a little collection of utility functions for this scenario. One of those functions would apply the path conversion to the JSON-encoded worker arguments, another one could set up a little TCP proxy for the worker events that applies the path conversion to the events. So the only thing that your shim script would have to do for path conversion is call these functions and pass them your path conversion plugin.

@hbenl
Owner

hbenl commented Oct 3, 2018

I've started implementing the utility functions, but they're completely untested so far because I couldn't yet figure out how to bundle the worker scripts:

  • with webpack I can bundle the scripts, but they fail at the dynamic require of mocha - I think the webpack-bundled script tries to resolve all dependencies from within the bundle, including the dynamic require. I've tried to find a way to tell webpack to treat the dynamic require as an external dependency, but the configuration options for externals only seem to work for static requires.
  • with rollup I couldn't figure out how to bundle the scripts at all - I get "Error: Cannot call a namespace ('RegExpEscape')", which makes sense (I think) because the export of escape-string-regexp is not ES6-compliant (AFAIK ES6 modules need to export a namespace, but this module exports a function)

Any ideas?

@cspotcode
Author

I implemented rollup bundling in #19. I'm using two common rollup plugins to handle node's module resolution and CommonJS imports & exports. I also added a hack to give us access to node's require API without rollup transforming it, so we can still dynamically require() mocha and spec files.

@cspotcode
Author

I pushed a lot more changes, experimental at the moment.

  • Switched back to webpack, addressing the dynamic require() problem you were facing. Configuration for that is pretty simple, and rollup felt unnecessarily limited.
  • Added an option so that the worker can be the IPC server instead of the client. Generalized the IPC interface so that each end is a bidirectional "endpoint".
  • Sends the bundled worker script to the worker via stdin. Simple implementation, and it conveniently supports sending the worker script into a docker container. Should work over SSH, etc.
  • Added a first pass at a worker "plugin" interface. If specified, the worker loads a node module and delegates path conversions to it. This is untested; probably broken.

@hbenl
Owner

hbenl commented Nov 4, 2018

I have implemented a new approach in my remote-worker branch. I haven't merged your PR but picked a lot of ideas from it.

The new approach is to let you replace the worker script with an adapter script in your project and pass the path to the directory containing the worker bundle to your script so that it can start the worker. With this approach, this extension remains simple and generic while giving you maximum flexibility.

I have also created an example project that runs its tests in a Docker container. Check it out! Debugging isn't working yet, but everything else works (only tested on Linux so far).

Some notes:

  • I modified your approach for enabling dynamic require() in a webpack bundle. The code now works both bundled and unbundled; however, the bundled version somehow only works when passed to node via stdin, not when called directly (but that's OK for now)
  • the example works both with the worker as IPC server and IPC client
    • when the worker is the server, the adapter script needs to keep trying to connect to ipcPort until the worker has started up and accepts the connection. The problem here is that the adapter script can successfully connect to ipcPort even before that - it will get a socket from docker-proxy, which then closes the socket immediately. So I've built a workaround into my IPC functions that will retry connecting if the socket is closed within 10 milliseconds (which is configurable). I suspect that this may also be the reason why debugging isn't working yet (VS Code connects to debuggerPort before the worker has started up, it gets a socket from docker-proxy which is immediately closed, so VS Code thinks the debugger session has ended)
    • when the worker is the client, it needs to connect to <Host IP>:<ipcPort> instead of localhost:<ipcPort>. There is an option ipcHost in WorkerArgs to pass the Host IP to the worker.

@hbenl
Owner

hbenl commented Nov 5, 2018

I got debugging to work as well - kind of. Breakpoints don't work initially so you have to use a debugger; statement.

@hbenl
Owner

hbenl commented Nov 5, 2018

I think the issue with breakpoints not working initially (they work after hitting a debugger; statement) is the same that is described here. See also microsoft/vscode#60187

@connorshea

How do the new VS Code remote extensions affect this issue?

@hbenl
Copy link
Owner

hbenl commented May 9, 2019

When you use a remote workspace, the tests will automatically be run remotely, so that's a possible solution.
At the same time, I started working on this again and will publish an update soon. Then you will be able to run tests remotely without having to use a remote workspace.

@hbenl
Owner

hbenl commented Jun 9, 2019

This is now (finally!) supported by Mocha Test Explorer: you'll have to write a launcher script that runs the tests remotely. There are example projects for running tests in a docker container and via ssh that contain well-documented launcher scripts, and a project containing further documentation and utility functions to use in the launcher script.

@hbenl hbenl closed this as completed Jun 9, 2019