
Electron: Fatal error in V8: v8_ArrayBuffer_NewBackingStore #514

Closed
uxmanz opened this issue Oct 30, 2022 · 10 comments · Fixed by #532

@uxmanz

uxmanz commented Oct 30, 2022

The bug
As soon as data is received, the app crashes with the following error:

[24124:1030/235710.237:ERROR:node_bindings.cc(146)] Fatal error in V8: v8_ArrayBuffer_NewBackingStore When the V8 Sandbox is enabled, ArrayBuffer backing stores must be allocated inside the sandbox address space. Please use an appropriate ArrayBuffer::Allocator to allocate these buffers.

Reproducing
Added the following code in main.js:

const zmq = require("zeromq")

async function run() {
  const sock = new zmq.Subscriber()

  sock.connect("tcp://127.0.0.1:1737")
  sock.subscribe("testDATA")
  console.log("Subscriber connected to port 1737")

  for await (const [msg] of sock) {
    console.log("containing message:", msg)
  }
}

run()
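
For completeness, here is a minimal publisher counterpart (not part of the original report) that can be paired with the subscriber above; it assumes the zeromq v6 Publisher API and reuses the same endpoint and topic:

// Hypothetical publisher counterpart (not from the original report),
// bound to the same endpoint and topic as the subscriber above.
const zmq = require("zeromq")

async function publish() {
  const sock = new zmq.Publisher()
  await sock.bind("tcp://127.0.0.1:1737")

  // Send a multipart message once per second: topic frame + payload frame.
  setInterval(() => {
    sock.send(["testDATA", "hello"]).catch(console.error)
  }, 1000)
}

publish()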

Expected behavior
Data should be received and logged to the console.

Tested on

  • OS: Windows 10
  • ZeroMQ.js version: 6.0.0-beta.6
  • Electron version: v21.1.1
  • Node version: v18.12.0
uxmanz added the bug label Oct 30, 2022
@Bartel-C8
Contributor

https://www.electronjs.org/blog/v8-memory-cage

Electron 21 and later will have the V8 Memory Cage enabled, with implications for some native modules.

To track ongoing discussion about native module usage in Electron 21+, see electron/electron#35801.

@Bartel-C8
Contributor

I think it would be a fairly trivial change to "solve" this problem, at a performance cost though:

Remove this if-block: https://github.com/zeromq/zeromq.js/blob/master/src/incoming_msg.cc#L29 where an external buffer is used for performance optimisation reasons.

The change where this avoid-buffer-copy optimisation was introduced initially: rolftimmermans/zeromq-ng@01773f2#diff-72251d28770bda8994e43d8152076673c8dc28e1d363f787d33c1dc0216ec98b

We should still consider whether we can make this change optional somehow, for Electron versions >= 21?

@aminya
Member

aminya commented Nov 15, 2022

@Bartel-C8 Let's try this. Interested in making a pull request?
Might be related to #466

@Bartel-C8
Contributor

Bartel-C8 commented Nov 15, 2022

Yes sure.

But I would like to have a working (test/benchmark) build to validate my changes/performance.

As it is now, it seems the test build (on master) is failing? The same on my macOS setup...

I can run npm run build but npm run test and npm run bench both fail...

  TOUCH Debug/obj.target/libzmq.stamp
  CXX(target) Debug/obj.target/zeromq/src/context.o
  CXX(target) Debug/obj.target/zeromq/src/incoming_msg.o
  CXX(target) Debug/obj.target/zeromq/src/module.o
  CXX(target) Debug/obj.target/zeromq/src/observer.o
  CXX(target) Debug/obj.target/zeromq/src/outgoing_msg.o
  CXX(target) Debug/obj.target/zeromq/src/proxy.o
  CXX(target) Debug/obj.target/zeromq/src/socket.o
  SOLINK_MODULE(target) Debug/zeromq.node
gyp info ok 
choma: to re-use this ordering, run tests with CHOMA_SEED=Ou4oXUbGkA
dyld[34109]: missing symbol called
sh: line 1: 34107 Abort trap: 6           mocha

So running mocha fails somehow...

@aminya
Member

aminya commented Nov 15, 2022

Yeah, the test suite fails. Related to #466. I could not find the issue in the code.

@Bartel-C8
Contributor

Bartel-C8 commented Nov 16, 2022

Thanks for the build fixes! I can now run the benchmark properly!

Did a clean benchmark:

Running benchmarks...
queue msgsize=1 n=5000 zmq=ng x 9.10 ops/sec ±1.54% (28 runs sampled)
queue msgsize=16 n=5000 zmq=ng x 8.98 ops/sec ±1.19% (28 runs sampled)
queue msgsize=256 n=5000 zmq=ng x 8.72 ops/sec ±0.53% (27 runs sampled)
queue msgsize=4096 n=5000 zmq=ng x 8.34 ops/sec ±2.87% (27 runs sampled)
queue msgsize=65536 n=5000 zmq=ng x 5.74 ops/sec ±1.70% (23 runs sampled)
queue msgsize=1048576 n=5000 zmq=ng x 2.25 ops/sec ±1.27% (14 runs sampled)
deliver proto=tcp msgsize=1 n=5000 zmq=ng x 3.61 ops/sec ±7.61% (17 runs sampled)
deliver proto=tcp msgsize=16 n=5000 zmq=ng x 3.84 ops/sec ±2.86% (18 runs sampled)
deliver proto=tcp msgsize=256 n=5000 zmq=ng x 3.61 ops/sec ±7.08% (18 runs sampled)
deliver proto=tcp msgsize=4096 n=5000 zmq=ng x 3.61 ops/sec ±1.46% (18 runs sampled)
deliver proto=tcp msgsize=65536 n=5000 zmq=ng x 1.75 ops/sec ±0.60% (12 runs sampled)
deliver proto=tcp msgsize=1048576 n=5000 zmq=ng x 0.18 ops/sec ±1.13% (5 runs sampled)
deliver proto=inproc msgsize=1 n=5000 zmq=ng x 4.44 ops/sec ±1.40% (20 runs sampled)
deliver proto=inproc msgsize=16 n=5000 zmq=ng x 4.31 ops/sec ±1.74% (19 runs sampled)
deliver proto=inproc msgsize=256 n=5000 zmq=ng x 4.01 ops/sec ±5.34% (19 runs sampled)
deliver proto=inproc msgsize=4096 n=5000 zmq=ng x 4.11 ops/sec ±1.35% (19 runs sampled)
deliver proto=inproc msgsize=65536 n=5000 zmq=ng x 3.26 ops/sec ±1.28% (17 runs sampled)
deliver proto=inproc msgsize=1048576 n=5000 zmq=ng x 1.65 ops/sec ±22.08% (12 runs sampled)
deliver multipart proto=tcp msgsize=1 n=5000 zmq=ng x 3.44 ops/sec ±2.61% (17 runs sampled)
deliver multipart proto=tcp msgsize=16 n=5000 zmq=ng x 3.58 ops/sec ±1.91% (18 runs sampled)
deliver multipart proto=tcp msgsize=256 n=5000 zmq=ng x 3.30 ops/sec ±1.83% (17 runs sampled)
deliver multipart proto=tcp msgsize=4096 n=5000 zmq=ng x 2.95 ops/sec ±1.80% (16 runs sampled)
deliver multipart proto=tcp msgsize=65536 n=5000 zmq=ng x 0.59 ops/sec ±2.14% (7 runs sampled)
deliver multipart proto=tcp msgsize=1048576 n=5000 zmq=ng x 0.05 ops/sec ±8.13% (5 runs sampled)
deliver multipart proto=inproc msgsize=1 n=5000 zmq=ng x 3.89 ops/sec ±2.43% (18 runs sampled)
deliver multipart proto=inproc msgsize=16 n=5000 zmq=ng x 4.09 ops/sec ±2.14% (19 runs sampled)
deliver multipart proto=inproc msgsize=256 n=5000 zmq=ng x 3.55 ops/sec ±4.72% (18 runs sampled)
deliver multipart proto=inproc msgsize=4096 n=5000 zmq=ng x 3.38 ops/sec ±1.58% (17 runs sampled)
deliver multipart proto=inproc msgsize=65536 n=5000 zmq=ng x 2.05 ops/sec ±5.21% (13 runs sampled)
deliver multipart proto=inproc msgsize=1048576 n=5000 zmq=ng x 0.40 ops/sec ±5.21% (6 runs sampled)
deliver async iterator proto=tcp msgsize=1 n=5000 zmq=ng x 3.92 ops/sec ±2.76% (18 runs sampled)
deliver async iterator proto=tcp msgsize=16 n=5000 zmq=ng x 3.86 ops/sec ±1.88% (18 runs sampled)
deliver async iterator proto=tcp msgsize=256 n=5000 zmq=ng x 3.64 ops/sec ±0.82% (18 runs sampled)
deliver async iterator proto=tcp msgsize=4096 n=5000 zmq=ng x 3.43 ops/sec ±2.73% (17 runs sampled)
deliver async iterator proto=tcp msgsize=65536 n=5000 zmq=ng x 1.66 ops/sec ±3.62% (12 runs sampled)
deliver async iterator proto=tcp msgsize=1048576 n=5000 zmq=ng x 0.18 ops/sec ±3.48% (5 runs sampled)
deliver async iterator proto=inproc msgsize=1 n=5000 zmq=ng x 4.41 ops/sec ±1.88% (20 runs sampled)
deliver async iterator proto=inproc msgsize=16 n=5000 zmq=ng x 4.55 ops/sec ±2.32% (20 runs sampled)
deliver async iterator proto=inproc msgsize=256 n=5000 zmq=ng x 4.07 ops/sec ±5.62% (19 runs sampled)
deliver async iterator proto=inproc msgsize=4096 n=5000 zmq=ng x 3.92 ops/sec ±5.73% (19 runs sampled)
deliver async iterator proto=inproc msgsize=65536 n=5000 zmq=ng x 3.20 ops/sec ±1.08% (17 runs sampled)
deliver async iterator proto=inproc msgsize=1048576 n=5000 zmq=ng x 1.22 ops/sec ±24.59% (9 runs sampled)
Completed.

And one run where I commented out the if-block as suggested above, i.e. always copy:

Running benchmarks...
queue msgsize=1 n=5000 zmq=ng x 8.97 ops/sec ±1.05% (28 runs sampled)
queue msgsize=16 n=5000 zmq=ng x 9.05 ops/sec ±2.14% (28 runs sampled)
queue msgsize=256 n=5000 zmq=ng x 8.47 ops/sec ±8.60% (27 runs sampled)
queue msgsize=4096 n=5000 zmq=ng x 7.58 ops/sec ±8.55% (26 runs sampled)
queue msgsize=65536 n=5000 zmq=ng x 5.81 ops/sec ±3.73% (23 runs sampled)
queue msgsize=1048576 n=5000 zmq=ng x 2.29 ops/sec ±1.58% (14 runs sampled)
deliver proto=tcp msgsize=1 n=5000 zmq=ng x 3.96 ops/sec ±2.42% (19 runs sampled)
deliver proto=tcp msgsize=16 n=5000 zmq=ng x 4.03 ops/sec ±2.17% (19 runs sampled)
deliver proto=tcp msgsize=256 n=5000 zmq=ng x 3.78 ops/sec ±3.35% (18 runs sampled)
deliver proto=tcp msgsize=4096 n=5000 zmq=ng x 3.61 ops/sec ±0.84% (18 runs sampled)
deliver proto=tcp msgsize=65536 n=5000 zmq=ng x 1.90 ops/sec ±3.64% (12 runs sampled)
deliver proto=tcp msgsize=1048576 n=5000 zmq=ng x 0.21 ops/sec ±1.07% (6 runs sampled)
deliver proto=inproc msgsize=1 n=5000 zmq=ng x 4.36 ops/sec ±1.49% (19 runs sampled)
deliver proto=inproc msgsize=16 n=5000 zmq=ng x 4.46 ops/sec ±2.61% (20 runs sampled)
deliver proto=inproc msgsize=256 n=5000 zmq=ng x 4.50 ops/sec ±3.03% (20 runs sampled)
deliver proto=inproc msgsize=4096 n=5000 zmq=ng x 3.98 ops/sec ±5.88% (18 runs sampled)
deliver proto=inproc msgsize=65536 n=5000 zmq=ng x 1.43 ops/sec ±4.53% (11 runs sampled)
deliver proto=inproc msgsize=1048576 n=5000 zmq=ng x 0.19 ops/sec ±0.42% (5 runs sampled)
deliver multipart proto=tcp msgsize=1 n=5000 zmq=ng x 3.41 ops/sec ±7.21% (17 runs sampled)
deliver multipart proto=tcp msgsize=16 n=5000 zmq=ng x 3.47 ops/sec ±3.91% (17 runs sampled)
deliver multipart proto=tcp msgsize=256 n=5000 zmq=ng x 3.35 ops/sec ±2.11% (17 runs sampled)
deliver multipart proto=tcp msgsize=4096 n=5000 zmq=ng x 3.08 ops/sec ±1.41% (16 runs sampled)
deliver multipart proto=tcp msgsize=65536 n=5000 zmq=ng x 0.60 ops/sec ±0.80% (7 runs sampled)
deliver multipart proto=tcp msgsize=1048576 n=5000 zmq=ng x 0.06 ops/sec ±3.06% (5 runs sampled)
deliver multipart proto=inproc msgsize=1 n=5000 zmq=ng x 3.93 ops/sec ±2.73% (18 runs sampled)
deliver multipart proto=inproc msgsize=16 n=5000 zmq=ng x 3.73 ops/sec ±7.76% (18 runs sampled)
deliver multipart proto=inproc msgsize=256 n=5000 zmq=ng x 3.72 ops/sec ±1.53% (18 runs sampled)
deliver multipart proto=inproc msgsize=4096 n=5000 zmq=ng x 3.36 ops/sec ±3.17% (17 runs sampled)
deliver multipart proto=inproc msgsize=65536 n=5000 zmq=ng x 0.67 ops/sec ±0.64% (8 runs sampled)
deliver multipart proto=inproc msgsize=1048576 n=5000 zmq=ng x 0.06 ops/sec ±0.40% (5 runs sampled)
deliver async iterator proto=tcp msgsize=1 n=5000 zmq=ng x 3.55 ops/sec ±6.03% (18 runs sampled)
deliver async iterator proto=tcp msgsize=16 n=5000 zmq=ng x 3.86 ops/sec ±3.06% (18 runs sampled)
deliver async iterator proto=tcp msgsize=256 n=5000 zmq=ng x 3.81 ops/sec ±1.82% (18 runs sampled)
deliver async iterator proto=tcp msgsize=4096 n=5000 zmq=ng x 3.65 ops/sec ±3.49% (18 runs sampled)
deliver async iterator proto=tcp msgsize=65536 n=5000 zmq=ng x 1.89 ops/sec ±1.02% (12 runs sampled)
deliver async iterator proto=tcp msgsize=1048576 n=5000 zmq=ng x 0.21 ops/sec ±1.56% (6 runs sampled)
deliver async iterator proto=inproc msgsize=1 n=5000 zmq=ng x 4.32 ops/sec ±1.25% (19 runs sampled)
deliver async iterator proto=inproc msgsize=16 n=5000 zmq=ng x 4.35 ops/sec ±0.98% (20 runs sampled)
deliver async iterator proto=inproc msgsize=256 n=5000 zmq=ng x 4.25 ops/sec ±1.95% (19 runs sampled)
deliver async iterator proto=inproc msgsize=4096 n=5000 zmq=ng x 4.08 ops/sec ±1.36% (19 runs sampled)
deliver async iterator proto=inproc msgsize=65536 n=5000 zmq=ng x 1.44 ops/sec ±0.98% (11 runs sampled)
deliver async iterator proto=inproc msgsize=1048576 n=5000 zmq=ng x 0.18 ops/sec ±5.90% (5 runs sampled)
Completed.

Results for small msgsize (<128) should be the same in both cases, since the copy-buffer method was already used there, and that is roughly what we see.
For bigger msgsizes there is a trend:
For proto=tcp, always-copy even seems to be a bit better (the higher the ops/sec, the better) or similar (given the variance).
But for proto=inproc, from msgsize=65536 onwards, performance is noticeably worse.

I think I found a way (electron/electron#29893 (comment)) to add an Electron(-version) compile switch that detects the environment: plain Node.js versus Electron (from v21 on ...).
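
As a rough illustration of that environment-detection idea at the JavaScript level (the change discussed above is a compile-time switch in the native addon; the variable names below are hypothetical):

// Rough JavaScript-level sketch; the actual change referenced above is a
// compile-time switch in the native addon. "needsBufferCopy" is a
// hypothetical name used only for illustration.
const electronMajor = process.versions.electron
  ? Number(process.versions.electron.split(".")[0])
  : 0 // process.versions.electron is undefined in plain Node.js

// Electron 21+ enables the V8 memory cage, so external (zero-copy)
// ArrayBuffer backing stores are not allowed; incoming messages must be
// copied into buffers allocated on the V8 heap instead.
const needsBufferCopy = electronMajor >= 21

console.log(
  needsBufferCopy
    ? "Electron >= 21: copy incoming messages into the V8 heap"
    : "Plain Node.js / older Electron: zero-copy external buffers are fine"
)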

aminya changed the title from "zeromqjs in electron app" to "Electron: Fatal error in V8: v8_ArrayBuffer_NewBackingStore" on Nov 16, 2022
aminya pushed a commit that referenced this issue Nov 27, 2022
aminya added a commit that referenced this issue Nov 27, 2022
@aminya
Member

aminya commented Nov 27, 2022

But for proto=inproc, from msgsize=65536 on, performance is noticeably worse.

Does this code also cause this error on Electron when inproc is used?

#534

@Bartel-C8
Contributor

But for proto=inproc, from msgsize=65536 on, performance is noticeably worse.

Does this code also cause this error on Electron when inproc is used?

Must be. It is in the incoming_msg code, which is generic for all protocols.
The new V8 memory sandbox/cage prohibits using memory allocated outside of the JS heap.

This has a limitation as well:

The main downside of enabling pointer compression is that the V8 heap is limited to a maximum size of 4GB.

But I don't think (many) implementations are affected by this?

@Bartel-C8
Contributor

For Electron v21+, now with beta 14, I need a manual step to rebuild native modules: running

electron-rebuild rebuilds zeromq successfully.
But before, I did not need to do this step (using electron-builder).

Somehow my electron-builder does not completely rebuild zeromq, because the prebuilt binary seems to match? (How is it getting matched? Only by architecture?)

Although I see

  • rebuilding native dependencies  dependencies=zeromq@6.0.0-beta.14 platform=darwin arch=x64
  • rebuilding native dependency  name=zeromq version=6.0.0-beta.14

It still gives the crash. First running electron-rebuild solves the issue.

Will try to update electron-builder though... (Now using: "electron-builder": "^23.0.3")

@Bartel-C8
Contributor

Bartel-C8 commented Nov 29, 2022

OK, updating electron-builder to @next (24.0.0-alpha.4) works, when adding "postinstall": "electron-builder install-app-deps" to scripts in package.json, as proposed by electron-builder:

'electron-rebuild is already incorporated into electron-builder, please consider to remove excess dependency from devDependencies
To ensure your native dependencies are always matched electron version, simply add script `"postinstall": "electron-builder install-app-deps"` to your `package.json`'

This triggers a complete rebuild (from source) for zeromq.
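
For reference, the resulting scripts entry in package.json is just the line electron-builder suggests above:

{
  "scripts": {
    "postinstall": "electron-builder install-app-deps"
  }
}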
