
Build linux libextism on manylinux/musllinux containers #591

Open

G4Vi opened this issue Nov 16, 2023 · 8 comments

G4Vi (Contributor) commented Nov 16, 2023

We can ensure our libraries are compatible with a broader set of linux systems by linking against old libcs/compiler libs. The manylinux/musllinux containers may be used to do so.

It looks like only a few lines may need to change:
https://kobzol.github.io/rust/ci/2021/05/07/building-rust-binaries-in-ci-that-work-with-older-glibc.html

I was thinking following Rust's compatibility https://doc.rust-lang.org/nightly/rustc/platform-support.html https://blog.rust-lang.org/2023/05/09/Updating-musl-targets.html

to use the manylinux2014/manylinux_2_17 and musllinux_1_2 containers. manylinux2014 is an alias for manylinux_2_17.

Related reading on manylinux/musllinux:
https://peps.python.org/pep-0513/
https://peps.python.org/pep-0600/
https://peps.python.org/pep-0656/
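As a concrete sketch of what the CI change might look like, building inside the official PyPA manylinux2014 container keeps the resulting shared object linked against glibc 2.17-era symbols at newest. This is an illustrative fragment, not taken from extism's actual release.yml; job names and steps are assumptions (checkout is pinned to v3 because node20-based actions require a newer glibc than manylinux2014 ships):

```yaml
# Hypothetical GitHub Actions job -- names and steps are illustrative.
build-linux-manylinux:
  runs-on: ubuntu-latest
  container: quay.io/pypa/manylinux2014_x86_64
  steps:
    - uses: actions/checkout@v3
    - name: Install Rust
      run: curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
    - name: Build libextism
      run: |
        . "$HOME/.cargo/env"
        cargo build --release
```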

G4Vi self-assigned this Nov 16, 2023
chrisdickinson (Contributor) commented:

Ooh, yes – we already have to do this for the Python extism_sys/extism-maturin builds. I wonder if we could skip the maturin-action if we had manylinux builds of libextism?

chrisdickinson (Contributor) commented:

(Notably, we have to be on manylinux 2_28 or later because of our dep on ring; more info here)

G4Vi (Contributor, author) commented Nov 17, 2023

It would be nice if the Python build could share the regular native build. Maybe only the aarch64 builds need to be on 2_28, since cross-compiling wouldn't be necessary for x86_64, and manylinux2014 should include GCC 10?
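Whether a given build actually meets a glibc target can be checked mechanically: inspect which versioned glibc symbols the shared object references (e.g. in `objdump -T` output) and take the maximum. The helper below is an illustrative sketch, not part of extism; the sample lines mimic the shape of real `objdump -T` output:

```python
import re

def max_glibc_version(objdump_output: str) -> tuple[int, int]:
    """Return the highest GLIBC_x.y symbol version referenced in
    `objdump -T` output, or (0, 0) if none is found."""
    versions = [
        (int(m.group(1)), int(m.group(2)))
        for m in re.finditer(r"GLIBC_(\d+)\.(\d+)", objdump_output)
    ]
    return max(versions, default=(0, 0))

# Sample lines shaped like `objdump -T libextism.so` output:
sample = """
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.17  memcpy
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.28  fcntl64
"""
print(max_glibc_version(sample))  # (2, 28) -> needs manylinux_2_28 or later
```

A binary built in a manylinux2014 container should report nothing newer than (2, 17) here.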

neuronicnobody added the "question" (Further information is requested) label Nov 27, 2023
chrisdickinson (Contributor) commented:

It just clicked for me that this would help with extism/ruby-sdk#6 as well – we could probably bundle all of the manylinux builds with the ruby gem to create that fat gem release.

neuronicnobody removed the "question" (Further information is requested) label Nov 27, 2023
chrisdickinson (Contributor) commented:

I just peeked at the .so bundled with one of the maturin wheels yesterday using wheel unpack [1], and it looks like the generated shared objects export what we'd be bundling up with libextism anyway:

000000000009c9dc T _extism_current_plugin_memory
000000000009c9e8 T _extism_current_plugin_memory_alloc
000000000009caa4 T _extism_current_plugin_memory_free
000000000009ca58 T _extism_current_plugin_memory_length
00000000000a03ec T _extism_error
000000000009d07c T _extism_function_free
000000000009caf8 T _extism_function_new
000000000009d0d8 T _extism_function_set_namespace
00000000000a15c4 T _extism_log_custom
00000000000a1968 T _extism_log_drain
00000000000a1080 T _extism_log_file
000000000009fdc8 T _extism_plugin_call
000000000009e26c T _extism_plugin_cancel
000000000009de50 T _extism_plugin_cancel_handle
000000000009e6a4 T _extism_plugin_config
00000000000a03f0 T _extism_plugin_error
000000000009da10 T _extism_plugin_free
000000000009f78c T _extism_plugin_function_exists
000000000009c9cc T _extism_plugin_id
000000000009d440 T _extism_plugin_new
000000000009d9e8 T _extism_plugin_new_error_free
00000000000a0af0 T _extism_plugin_output_data
00000000000a0964 T _extism_plugin_output_length
00000000000a1ed8 T _extism_version
00000000006df9b0 T _resolve_vmctx_memory_15_0_0
00000000006df9dc T _resolve_vmctx_memory_ptr_15_0_0
0000000000b951a8 D _ring_core_0_17_5_OPENSSL_armcap_P
00000000006dfa40 T _set_vmctx_memory_15_0_0

So we may just need to rejigger the release.yml to remove our direct cargo build, unpack the wheels, and add those .so files to the release artifacts instead.
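Since a wheel is just a zip archive (per PEP 427), pulling the .so files out in a release step could also be done without the wheel CLI at all. A minimal sketch (the archive contents below are made up for the demo and don't reflect a real extism_sys wheel):

```python
import io
import zipfile

def extract_shared_objects(wheel_bytes: bytes) -> dict[str, bytes]:
    """Return {archive path: contents} for every shared library
    inside a wheel. Wheels are plain zip archives (PEP 427)."""
    out = {}
    with zipfile.ZipFile(io.BytesIO(wheel_bytes)) as zf:
        for name in zf.namelist():
            if name.endswith((".so", ".dylib")):
                out[name] = zf.read(name)
    return out

# Build a tiny stand-in wheel in memory to demonstrate:
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("extism_sys/_extism_sys.so", b"\x7fELF...")
    zf.writestr("extism_sys-1.0.0.dist-info/METADATA", "Name: extism_sys")

print(sorted(extract_shared_objects(buf.getvalue())))
# ['extism_sys/_extism_sys.so']
```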

Footnotes

  1. I'm a little rusty at dealing with Python wheels, but I was able to unpack one by running the following commands:

    $ python3 -m venv tmp
    $ . tmp/bin/activate
    $ pip install wheel
    $ curl -sLO https://github.com/extism/extism/releases/download/v1.0.0-rc5/extism_sys-1.0.0rc5-py3-none-macosx_11_0_arm64.whl
    $ wheel unpack extism_sys*whl
    $ nm -gU extism_sys*/extism_sys/*.so
    

G4Vi (Contributor, author) commented Jan 3, 2024

Unpacking the wheels could work. Something I haven't considered yet is how to handle the static builds. I see three possible ways forward with wheel unpacking:

  • A: Replace the shared objects in the releases with the manylinux/musllinux shared objects
  • B: Create more releases, the manylinux/musllinux releases wouldn't have static objects
  • C: Remove the static objects from actions. You should build libextism from source if you wish to link it statically.

I'd lean toward B, but would want to encourage using the manylinux/musllinux releases unless you need static objects. A sounds chaotic, with different system support mixed into the same release. @chrisdickinson, @zshipko: thoughts?

chrisdickinson (Contributor) commented:

Hm, so in option B, would we have two releases per tag? (Like, v9.9.9 creates a v9.9.9 release and a v9.9.9-manylinux release?)

Re option A, are you thinking that the number of artifacts attached to the release might be chaotic, or that the workflow file might become overcomplicated?

I tend to lean towards keeping all release artifacts unified under the tag that produced them, to match user expectations on where to find things (& because github requires that tags and releases are 1:1), but I can definitely see where the number of artifacts attached to the release is becoming unwieldy. (Maybe we could solve this through some explanatory text in the release notes, readme, and/or website?)

G4Vi (Contributor, author) commented Jan 3, 2024

> Hm, so in option B, would we have two releases per tag? (Like, v9.9.9 creates a v9.9.9 release and a v9.9.9-manylinux release?)

Yes, exactly. The number of assets in a release would increase, as many targets would also have a manylinux variant.

> Re option A, are you thinking that the number of artifacts attached to the release might be chaotic, or that the workflow file might become overcomplicated?

In A I'm mostly worried about mixing support levels in the same release: if the static object was built on a different system than the manylinux one, it might fail to link even though the dynamic version works, for example if the static object references a newer glibc than the dynamic version does. The number of artifacts would stay the same (with the shared object coming from the wheels instead).

> I tend to lean towards keeping all release artifacts unified under the tag that produced them, to match user expectations on where to find things (& because github requires that tags and releases are 1:1), but I can definitely see where the number of artifacts attached to the release is becoming unwieldy. (Maybe we could solve this through some explanatory text in the release notes, readme, and/or website?)

I definitely want to keep all release artifacts under the same tag. The number of artifacts attached to a release ballooning is my main concern with B.

Instead of taking the shared object from the wheel, is it viable to build the wheels from prebuilt shared objects? If we could do the normal cargo build on a manylinux container, I'd think the shared object would be just as compatible, and we'd have a matching static object built alongside it.
