Use a single "message" event listener to dispatch received messages #653

Open
wants to merge 2 commits into base: main

Conversation

@achim-k commented Jan 18, 2024

From #649:

We make heavy use of Comlink for our WebWorker communication. In some recent profiling traces I noticed a hot path on the "addEventListener" line.

[Profiling trace screenshot showing the addEventListener hot path]

While it's hard to say exactly how the event listeners are stored, most implementations I've seen for such interfaces store them in an array, and removing a listener likewise requires adjusting that array. The existing logic also creates two closures: one for the Promise (which is unavoidable as far as I can tell) and another for the event listener. As suggested in #647, there is potential for improvement by using a single "message" event listener and dispatching manually by looking up the resolve functions in a Map.
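
For illustration, a minimal sketch of that idea (not the exact diff in this PR; the MinimalEndpoint type and helper names below are simplified stand-ins for Comlink's internals):

// Sketch: one shared "message" listener per endpoint dispatches replies to
// pending resolvers via a Map keyed by message id.
interface MinimalEndpoint {
  postMessage(message: any, transfer?: Transferable[]): void;
  addEventListener(type: "message", listener: (ev: MessageEvent) => void): void;
}

function createRequestResponse(ep: MinimalEndpoint) {
  const pendingListeners = new Map<string, (value: any) => void>();

  ep.addEventListener("message", (ev: MessageEvent) => {
    const data = ev.data;
    if (!data || !data.id) {
      return;
    }
    const resolver = pendingListeners.get(data.id);
    if (!resolver) {
      return;
    }
    pendingListeners.delete(data.id);
    resolver(data);
  });

  return function requestResponseMessage(
    msg: object,
    transfers: Transferable[] = []
  ): Promise<any> {
    return new Promise((resolve) => {
      const id = crypto.randomUUID();
      // Register the resolver instead of adding (and later removing) a
      // dedicated listener for every request.
      pendingListeners.set(id, resolve);
      ep.postMessage({ id, ...msg }, transfers);
    });
  };
}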

This definitely reduces the runtime cost of the requestResponseMessage call itself, though there is still the cost of new Promise and the closure. The cost shifts to the ID lookup in the Map, but in my profiling that did not show up as a hot-path item.

Conceptually, using a Map for the lookup and a single handler seems like it should perform better, but if folks have ideas on how to benchmark this PR more robustly, that would be helpful.
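
As a starting point, one rough approach is to time a batch of parallel proxy calls before and after the change (a sketch only; the worker script and its add method are hypothetical):

import * as Comlink from "comlink";

// Hypothetical worker script "worker.js" that runs
// Comlink.expose({ add: (a: number, b: number) => a + b }).
async function bench(n: number): Promise<void> {
  const worker = new Worker("worker.js");
  const api = Comlink.wrap<{ add(a: number, b: number): number }>(worker);
  const start = performance.now();
  await Promise.all(Array.from({ length: n }, (_, i) => api.add(i, i)));
  console.log(`${n} parallel calls: ${(performance.now() - start).toFixed(1)} ms`);
  worker.terminate();
}

bench(5000);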

This PR is similar to #649 and #651 but avoids keeping references to the resolve functions or the endpoint in memory.

The performance impact of this PR gets more noticeable the more parallel requests are made:
[Box plot: benchmark comparison before vs. after this change]

See also #651 (comment)

Co-authored-by: Roman Shtylman <roman@foxglove.dev>

google-cla bot commented Jan 18, 2024

Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

View this failed invocation of the CLA check for more information.

For the most up to date status, view the checks section at the bottom of the pull request.

src/comlink.ts Outdated
return;
}
const resolver = pendingListeners.get(data.id);
if (resolver) {

nit: flip this around so that the early return is the "no more work to do" case:

if (!resolver) {
  return;
}

// actual work
...

}
});

return createProxy<T>(ep, pendingListeners, [], target) as any;

Why does this cast as any? I know this was also true of the earlier code, but in our code (foxglove) we always avoid any. Are we not able to get the correct type here?

Author (achim-k) replied:

Not sure. But it seems like it's not that easy; quoting the README:

Comlink does provide TypeScript types. When you expose() something of type T, the corresponding wrap() call will return something of type Comlink.Remote<T>. While this type has been battle-tested over some time now, it is implemented on a best-effort basis. There are some nuances that are incredibly hard if not impossible to encode correctly in TypeScript’s type system. It may sometimes be necessary to force a certain type using as unknown as <type>.
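
For illustration, the escape hatch the README mentions looks roughly like this (the Api interface and worker script are hypothetical):

import * as Comlink from "comlink";

interface Api {
  add(a: number, b: number): number;
}

const worker = new Worker("worker.js");
// wrap<Api>(worker) would return Comlink.Remote<Api>; where the types cannot
// be expressed precisely, the README suggests forcing them explicitly.
const api = Comlink.wrap(worker) as unknown as Comlink.Remote<Api>;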

-      ep.removeEventListener("message", l as any);
-      resolve(ev.data);
-    } as any);
+    pendingListeners.set(id, resolve);
     if (ep.start) {
       ep.start();
     }
     ep.postMessage({ id, ...msg }, transfers);

Random thought for the future, but it would be nice to avoid this spread operator. I don't see any reason for it when we could instead have id, payload or similar. While not a massive perf issue, removing it means fewer work cycles for the runtime, with no downside to the interface or logic since this is an internal structure.
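
For illustration, the two envelope shapes being compared would look roughly like this (the Payload fields are made up):

// Hypothetical sketch of the two message envelope shapes.
type Payload = { type: string; path: string[] };

function postWithSpread(ep: MessagePort, id: string, msg: Payload) {
  // Current shape: the payload's fields are spread into the envelope.
  ep.postMessage({ id, ...msg });
}

function postNested(ep: MessagePort, id: string, msg: Payload) {
  // Alternative shape: nest the payload; the receiving side would read
  // data.msg instead of the top-level fields.
  ep.postMessage({ id, msg });
}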

Author (achim-k) replied:

Could be something for a separate PR?

@MikalDev commented Mar 2, 2024

This change had a huge impact on my work with a physics library. I am calling a function on the main thread to add physics bodies in the worker thread. Originally it was a huge hot path, because I was stress-testing by creating 5000 bodies in one 16 ms tick.

I thought I would instead have to batch up the commands and parameters and send them over in one message (which might still be a good idea); instead, I can now continue to use the Comlink proxy for the function. Nice work.

I am now using the foxglove fork for my project to pick up the other nice-to-haves, like faster ID generation.

@lvivski commented Mar 26, 2024

@achim-k Thank you for this change! I think the performance improvements are great. Do you still need to follow up on something, or are you at this point just waiting for the final PR approval?

@achim-k (Author) commented Mar 26, 2024

> @achim-k Thank you for this change! I think the performance improvements are great. Do you still need to follow up on something, or are you at this point just waiting for the final PR approval?

This is just waiting for the final approval. We have already been using this in production for a couple of months, without issues.

@lvivski commented Mar 26, 2024

@surma is this PR still interesting from the maintainer's perspective? I understand that you might have different opinions on how this should be implemented. I think this is a useful change for high-throughput applications.
