Add dlpack support #1306

Draft · SunDoge wants to merge 3 commits into master
Conversation

@SunDoge commented Jul 14, 2023

DLPack is an open in-memory tensor structure for sharing tensors among frameworks. DLPack enables:

  • easier sharing of operators between deep learning frameworks;
  • easier wrapping of vendor-level operator implementations, allowing collaboration when introducing new devices/ops;
  • quick swapping of backend implementations, like different versions of BLAS;
  • for end users, more operators and the possibility of mixing usage between frameworks.

Supporting dlpack will make it easier to share tensors with dfdx, tch-rs, and even numpy and torch in Python (with the pyo3 feature).
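
For context, what actually crosses the FFI boundary is a single C struct, DLManagedTensor, carrying the data pointer plus shape, strides, dtype, device, and a deleter callback. The definitions below are only an illustrative Rust rendering of the layout described by the DLPack spec; the dlpark crate ships the real bindings in dlpark::ffi.

    // Illustrative Rust rendering of the DLPack C structs (layout per the
    // DLPack spec); dlpark::ffi provides the actual bindings used by this PR.
    use std::ffi::c_void;

    #[repr(C)]
    pub struct DLDataType {
        pub code: u8,   // kind: int, uint, float, bfloat, ...
        pub bits: u8,   // e.g. 32 for f32
        pub lanes: u16, // 1 unless the data is vectorized
    }

    #[repr(C)]
    pub struct DLDevice {
        pub device_type: i32, // a C enum in the spec: kDLCPU, kDLCUDA, ...
        pub device_id: i32,
    }

    #[repr(C)]
    pub struct DLTensor {
        pub data: *mut c_void,
        pub device: DLDevice,
        pub ndim: i32,
        pub dtype: DLDataType,
        pub shape: *mut i64,
        pub strides: *mut i64, // in elements; may be null => compact row-major
        pub byte_offset: u64,
    }

    #[repr(C)]
    pub struct DLManagedTensor {
        pub dl_tensor: DLTensor,
        pub manager_ctx: *mut c_void,
        // The producer's destructor; the consumer must call it exactly once.
        pub deleter: Option<unsafe extern "C" fn(*mut DLManagedTensor)>,
    }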

@SunDoge marked this pull request as draft July 14, 2023 06:26

@nilgoyette (Collaborator) commented

I don't know much about dlpack, but I'm quite sure this won't be merged if

  • it's not hidden behind a feature gate (a sketch of such a gate follows below)
  • it doesn't use a real crates.io version (not a git dependency)
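
For reference, a minimal sketch of the requested gate in ndarray's Cargo.toml (the dlpark version is an assumption, anticipating the v0.3.0 release mentioned below):

    # Sketch only: optional crates.io dependency behind a feature gate.
    [dependencies]
    dlpark = { version = "0.3", optional = true }

    [features]
    dlpack = ["dep:dlpark"]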

@SunDoge (Author) commented Jul 17, 2023

This is still a draft. The API of dlpark is still evolving, and I'll release v0.3.0 this week. I want to use the ownership system to make sure a DLPack tensor can only be consumed once; otherwise there will be a double-free error. And thanks for your advice, I will add the feature gate.
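
A minimal sketch of that consume-once idea (OwnedCapsule and the pared-down struct below are hypothetical stand-ins, not dlpark's API): the wrapper owns the capsule, the conversion takes self by value so it can happen at most once, and Drop runs the deleter if the capsule was never consumed.

    use std::ffi::c_void;
    use std::ptr::NonNull;

    #[repr(C)]
    pub struct DLManagedTensor {
        // Pared down for the sketch; the real struct also carries a DLTensor
        // (data, shape, strides, dtype, device) as its first field.
        pub manager_ctx: *mut c_void,
        pub deleter: Option<unsafe extern "C" fn(*mut DLManagedTensor)>,
    }

    /// Hypothetical owning wrapper around an imported DLPack capsule.
    pub struct OwnedCapsule {
        ptr: NonNull<DLManagedTensor>,
    }

    impl OwnedCapsule {
        /// Safety: `ptr` must point to a valid, live capsule that nobody
        /// else will delete.
        pub unsafe fn from_raw(ptr: NonNull<DLManagedTensor>) -> Self {
            Self { ptr }
        }

        /// Consuming conversion: `self` is moved in, so the capsule can be
        /// handed on at most once; the compiler rejects any reuse.
        pub fn into_raw(self) -> NonNull<DLManagedTensor> {
            let ptr = self.ptr;
            std::mem::forget(self); // ownership handed onward; skip our Drop
            ptr
        }
    }

    impl Drop for OwnedCapsule {
        fn drop(&mut self) {
            // If the capsule was never consumed, run the producer's deleter
            // exactly once so its memory is still released.
            unsafe {
                if let Some(del) = self.ptr.as_ref().deleter {
                    del(self.ptr.as_ptr());
                }
            }
        }
    }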

@SunDoge marked this pull request as ready for review July 18, 2023 09:27
@SunDoge changed the title from "Draft: add dlpack support" to "Add dlpack support" on Jul 18, 2023

unsafe impl<A> Send for ManagedRepr<A> where A: Send {}

impl<A> FromDLPack for ManagedArray<A, IxDyn> {
    fn from_dlpack(dlpack: NonNull<dlpark::ffi::DLManagedTensor>) -> Self {

@bluss (Member) commented Mar 9, 2024

This function takes a raw pointer (wrapped in NonNull), so it must be an unsafe function; otherwise we can trivially violate memory safety, unfortunately.

The only way to remove this requirement - the requirement of using unsafe - would be if you have a "magical" function that can take an arbitrary pointer and say whether it's a valid, live, non-mutably aliased pointer to a tensor.

Here's how to create a dangling, bad pointer: NonNull::new(1 as *mut u8 as *mut dlpark::ffi::DLManagedTensor). Does this code crash if we run it with this pointer? I think it would.
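
A sketch of what that change could look like, reusing the FromDLPack signature from the diff and adding the unsafe marker plus a spelled-out safety contract (the doc wording is illustrative):

    use std::ptr::NonNull;
    use dlpark::ffi::DLManagedTensor;

    pub trait FromDLPack: Sized {
        /// # Safety
        ///
        /// `dlpack` must point to a valid, live `DLManagedTensor` that is not
        /// mutably aliased and whose deleter has not already been run; the
        /// caller hands ownership of the capsule to the returned value.
        unsafe fn from_dlpack(dlpack: NonNull<DLManagedTensor>) -> Self;
    }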

@SunDoge (Author) replied

I agree with you. from_dlpack should be unsafe, and users should use it at their own risk.

@bluss (Member) commented Mar 10, 2024

I would say we normally don't commit to public dependencies that are not stable (yes, not a very fair policy, since ndarray itself is not so stable), and dlpark is a public dependency here because it becomes part of our API. That could mean a long time between version bumps.

@SunDoge (Author) replied

Maybe we don't need to include dlpark as a dependency at all. We can create an ArrayView from a ManagedTensor using ArrayView::from_shape_ptr, and I can implement ToTensor for ArrayD in dlpark behind a new ndarray feature. I'll do some quick experiments.
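
A rough sketch of that direction, assuming CPU memory, an f32 dtype, and shape/strides already read out of the DLManagedTensor as element counts (the helper and its parameters are placeholders, not dlpark's API):

    use ndarray::{ArrayViewD, IxDyn, ShapeBuilder};

    /// Build an ArrayViewD over DLPack-owned memory without ndarray taking a
    /// dependency on dlpark.
    ///
    /// # Safety
    /// `data` must be valid for reads under the given shape/strides for 'a,
    /// and must not be mutated or freed while the view is alive.
    unsafe fn view_from_dlpack_parts<'a>(
        data: *const f32,  // dl_tensor.data offset by byte_offset, cast to f32
        dims: &[usize],    // dl_tensor.shape converted to usize
        strides: &[usize], // dl_tensor.strides, in elements (not bytes)
    ) -> ArrayViewD<'a, f32> {
        unsafe { ArrayViewD::from_shape_ptr(IxDyn(dims).strides(IxDyn(strides)), data) }
    }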

@bluss marked this pull request as draft March 9, 2024 14:36

let strides: Vec<usize> = match (managed_tensor.strides(), managed_tensor.is_contiguous()) {
    (Some(s), _) => s.into_iter().map(|&x| x as _).collect(),
    (None, true) => managed_tensor

Review comment (Member)

Later work: check the compatibility of dlpack and ndarray strides, how they work, their domains, etc.
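
One concrete difference to check: DLPack strides are i64 element counts and the strides pointer may be null, which the spec defines as a compact row-major layout, while ndarray reports isize strides that can be negative for reversed views. A small sketch of the null-strides case (an assumed helper, not the PR's code):

    // Default C-order strides for a DLPack tensor whose `strides` pointer is
    // null, i.e. a compact, row-major tensor per the DLPack spec.
    fn row_major_strides(shape: &[usize]) -> Vec<usize> {
        let mut strides = vec![1usize; shape.len()];
        // Walk the axes from last to first, accumulating the element count.
        for i in (0..shape.len().saturating_sub(1)).rev() {
            strides[i] = strides[i + 1] * shape[i + 1];
        }
        strides
    }

    // e.g. shape [2, 3, 4] => strides [12, 4, 1]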
