[FEA] rmm should stop prescribing cudart linking #1320

Open
vyasr opened this issue Aug 4, 2023 · 7 comments
Labels: ? - Needs Triage, feature request

Comments

@vyasr (Contributor) commented Aug 4, 2023

Is your feature request related to a problem? Please describe.
Currently, when librmm is built, it accepts a CMake option, CUDA_STATIC_RUNTIME, that determines whether to link to cudart statically or dynamically. However, librmm is header-only, so what this option really controls is the CMake link interface that dependent libraries inherit when they consume librmm. The flag is sufficient if a user is building their entire stack locally. But if they are building a library that depends on librmm and want to use a prepackaged librmm, the current setup constrains them to librmm's choice unless they manually patch the rmm target in their own CMake.
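To make the problem concrete, here is a minimal sketch of the pattern being described (illustrative only, not rmm's actual build code), followed by the kind of manual patch a consumer of a prepackaged librmm needs today to override the propagated choice:

```cmake
# Illustrative: a header-only INTERFACE target whose cudart choice is
# baked into its link interface at configure/export time.
if(CUDA_STATIC_RUNTIME)
  target_link_libraries(rmm INTERFACE CUDA::cudart_static)
else()
  target_link_libraries(rmm INTERFACE CUDA::cudart)
endif()

# Consumer-side workaround: strip the propagated cudart entry from the
# imported target's interface before linking against it.
get_target_property(_rmm_iface rmm::rmm INTERFACE_LINK_LIBRARIES)
list(REMOVE_ITEM _rmm_iface CUDA::cudart_static)
set_target_properties(rmm::rmm PROPERTIES INTERFACE_LINK_LIBRARIES "${_rmm_iface}")
```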

Describe the solution you'd like
Since librmm is header-only, there is no reason for it to define a linking behavior here. That logic should be removed entirely from librmm and be left up to consumers.
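With that logic removed, a consumer would simply state its own choice when linking (a sketch, assuming the standard FindCUDAToolkit targets):

```cmake
find_package(rmm REQUIRED)
find_package(CUDAToolkit REQUIRED)

add_executable(app main.cu)
# The consumer, not librmm, decides the runtime flavor:
target_link_libraries(app PRIVATE rmm::rmm CUDA::cudart) # or CUDA::cudart_static
```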

Describe alternatives you've considered
None

Additional context
This change will break consumers who were previously not specifying any cudart linking behavior at all and were relying on librmm's settings. Prior to making this change, we should audit potential consumers for that issue. Some likely candidates for needing this change are xgboost and ucx.

@vyasr added the feature request and ? - Needs Triage labels on Aug 4, 2023
@jakirkham (Member) commented

cc @robertmaynard @hcho3 (as we discussed this previously)

@trivialfis (Member) commented Aug 5, 2023

From a packaging standpoint, cudart is an input that RMM propagates to downstream projects. Since cudart is one of RMM's dependencies, RMM should specify it as such, in my opinion. For instance, numpy is a dependency of XGBoost; we won't remove it from the pyproject.toml specification even if some users want a customized numpy that conflicts with the declared one. Python scripts can be considered "header-only", since the source is distributed and can be recompiled to bytecode. IMHO, this is more an issue that should be resolved by the dependency manager, which decides which dependency to use or prioritize by default in a software distribution.

That said, I'm fine with whatever decision improves RMM's usability. It's just a small conceptual issue that I wanted to raise before the change is merged.

@robertmaynard (Contributor) commented

I have been looking into whether we could propagate the cudart requirement only if the consuming library has not specified which version it wants by linking to CUDA::cudart or CUDA::cudart_static. It looks possible!

So that would allow us to effectively state a default (I vote for static) while also allowing consumers to override it. Would we want to go with this approach instead?

@vyasr (Contributor, author) commented Aug 7, 2023

Since cudart is one of RMM's dependencies, RMM should specify it as such, in my opinion.

I mostly agree with this. However:

IMHO, this is more an issue that should be resolved by the dependency manager, which decides what dependency to use/prioritize as default in the software distribution.

The problem we're trying to solve is that something like this is not viable. If rmm specifies static linking, that propagates unconditionally to any consuming library. To allow the dependency manager to resolve this, you would need to build two separate rmm packages so that consuming libraries could request that the correct one be downloaded by a package manager during dependency resolution before the CMake build starts (at which point it's already too late).

Another option would be to instead provide multiple targets like rmm::rmm_cuda_static and rmm::rmm_cuda_dynamic so that rmm still encapsulates the necessary information but passes the choice onto the user. If we want to default to static, we could have rmm::rmm be static and then only need to add one extra target rmm::rmm_cuda_dynamic. Thoughts?
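One way the multi-target idea could be sketched (target names like rmm_cuda_dynamic are hypothetical, taken from the suggestion above; this is not a proposed patch):

```cmake
# Shared header interface used by both variants.
add_library(rmm_headers INTERFACE)
target_include_directories(rmm_headers INTERFACE include)

# rmm::rmm keeps the static default.
add_library(rmm INTERFACE)
target_link_libraries(rmm INTERFACE rmm_headers CUDA::cudart_static)

# Opt-in shared-runtime variant, exported as rmm::rmm_cuda_dynamic.
add_library(rmm_cuda_dynamic INTERFACE)
target_link_libraries(rmm_cuda_dynamic INTERFACE rmm_headers CUDA::cudart)
```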

I have been looking into whether we could propagate the cudart requirement only if the consuming library has not specified which version it wants by linking to CUDA::cudart or CUDA::cudart_static. It looks possible!

That sounds even better. Is there any ordering imposed there though? i.e. would it matter whether find_package(rmm) was executed before or after the consuming library specified target_link_libraries(CUDA::cudart)?

@jakirkham (Member) commented

So that would allow us to effectively state a default (I vote for static) while also allowing consumers to override it. Would we want to go with this approach instead?

How would this look? What do consumers need to do to override? Also which consumers (other CMake users, package builders, etc.)?

@trivialfis (Member) commented Aug 7, 2023

To allow the dependency manager to resolve this, you would need to build two separate rmm packages so that consuming libraries could request that the correct one be downloaded by a package manager during dependency resolution before the CMake build starts (at which point it's already too late).

Thank you for the reply. Actually, this seems reasonable to me. It's not uncommon for package distributions to have multiple compile-time variants. You can optionally enable components in LLVM during compilation, including whether to build MLIR, whether to enable runtime type information, etc. A distribution (like Debian or Nix) usually chooses a default configuration for all other dependencies within that distribution. Emacs has a ton of variants.

For the case of RMM, I think it can keep the compile-time CMake option CUDA_STATIC_RUNTIME for specifying which cudart to use; conda packagers then choose a default (like static) for all downstream projects within the conda-forge and rapidsai channels. A variant with shared cudart can be built and uploaded if necessary, but as a leaf package, not used by any other default conda packages (like cuDF in the rapidsai channel).

@robertmaynard (Contributor) commented

That sounds even better. Is there any ordering imposed there though? i.e. would it matter whether find_package(rmm) was executed before or after the consuming library specified target_link_libraries(CUDA::cudart)?

No ordering issues; the evaluation occurs entirely in CMake generator expressions. The one issue is that we can only scan for direct usages of CUDA::cudart (or CUDA::cudart_static).

How would this look? What do consumers need to do to override? Also which consumers (other CMake users, package builders, etc.)?

Any consumer that uses CMake would get this behavior, since the logic would be encoded in the exported rmm::rmm target.
Basically, we can tell CMake that if a consuming library uses rmm::rmm and doesn't also use CUDA::cudart or CUDA::cudart_static, it should get the 'default' cudart version that rmm::rmm was built with.

Approach

  find_package(rmm REQUIRED)
  find_package(CUDAToolkit REQUIRED) # order doesn't matter

  add_executable(uses_cudart_static memory_test.cu)
  target_link_libraries(uses_cudart_static PRIVATE rmm::rmm CUDA::cudart_static) # order doesn't matter

  add_executable(uses_cudart_shared memory_test.cu)
  target_link_libraries(uses_cudart_shared PRIVATE rmm::rmm CUDA::cudart) # CUDA::cudart is the shared runtime

Since the executables link to both rmm::rmm and a CUDA::cudart* target, the generator expression embedded in rmm::rmm detects this and ensures it doesn't add its own CUDA::cudart (or CUDA::cudart_static) dependency.
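A hedged sketch of how such a generator expression might be written (my reading of the mechanism, not rmm's actual code): `$<TARGET_PROPERTY:LINK_LIBRARIES>` with no target argument is evaluated in the context of the consuming target, so an INTERFACE entry can inspect what the consumer links directly:

```cmake
# If the consumer's own LINK_LIBRARIES mention neither cudart variant,
# inject the default (static here); otherwise contribute nothing.
set(_consumer_has_no_cudart
  "$<AND:$<NOT:$<IN_LIST:CUDA::cudart,$<TARGET_PROPERTY:LINK_LIBRARIES>>>,$<NOT:$<IN_LIST:CUDA::cudart_static,$<TARGET_PROPERTY:LINK_LIBRARIES>>>>")
target_link_libraries(rmm INTERFACE "$<${_consumer_has_no_cudart}:CUDA::cudart_static>")
```

Because this inspects only the consumer's direct LINK_LIBRARIES property, cudart entries reached transitively through another target are invisible to it.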

Issue with this approach

The issue with this approach is that it only works with direct usages of CUDA::cudart or CUDA::cudart_static. By that I mean the following use case won't work: rmm::rmm would think no linking to CUDA::cudart or CUDA::cudart_static is occurring and would inject its own.

  add_library(cudart_via_proxy INTERFACE)
  target_link_libraries(cudart_via_proxy INTERFACE CUDA::cudart)

  add_executable(uses_cudart memory_test.cu)
  target_link_libraries(uses_cudart PRIVATE rmm::rmm cudart_via_proxy)
