
GPU Support via OpenCL #1377

Open · wants to merge 23 commits into master
Conversation

@Pencilcaseman (Author)

This is very much a work-in-progress, but I wanted to know how this approach fits with the rest of the codebase. I realise it's not the most "rusty" implementation, but I think it could be abstracted away quite nicely.

There is a fair amount of unnecessary/unclean code, but that can be cleaned up pretty quickly. The changes made here only allow for binary operations (+ - * /) on contiguous ArrayBase references (see the example below). With more work, it could be expanded to support almost everything the CPU "backend" supports.

To enable OpenCL support, you need to enable the opencl feature.

Here is an example:

// Unfortunately, OpenCL requires some initialisation before it can be used.
// There are currently no checks on this, but they can be easily added.
ndarray::configure();

// Note that the result of `move_to_device` is a `Result<_, OpenCLErrorCode>`, so errors
// can be handled correctly
let x = ndarray::Array2::<f32>::from_shape_fn((3, 4), |(r, c)| (c + r * 4) as f32)
    .move_to_device(ndarray::Device::OpenCL)
    .unwrap_or_else(|_| panic!("Something went wrong"));

let y = ndarray::Array2::<f32>::from_shape_fn((3, 4), |(r, c)| 12.0 - (c + r * 4) as f32)
    .move_to_device(ndarray::Device::OpenCL)
    .unwrap_or_else(|_| panic!("Something went wrong"));

// Only this form of binary operation is supported currently.
// i.e. reference <op> reference
// This operation takes place in a JIT-compiled kernel on the GPU
let z = &x + &y;

// You can only print something if it's in Host memory. This could be changed
// to automatically copy/move to host if necessary
println!(
    "Result:\n{:>2}",
    z.move_to_device(ndarray::Device::Host).unwrap()
);

// [[12, 12, 12, 12],
//  [12, 12, 12, 12],
//  [12, 12, 12, 12]]
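
For a sense of what that JIT-compiled kernel involves, here is a sketch of the kind of OpenCL C source an elementwise binary op could generate. The binary_kernel_source helper and the exact source text are illustrative, not the PR's actual codegen:

fn binary_kernel_source(typename: &str, op: char) -> String {
    // `typename` would come from something like rust_type_to_c_name::<A>(),
    // e.g. "float" for f32. One work-item handles one element, which is
    // enough here because only contiguous arrays are supported.
    format!(
        r#"__kernel void binary_op(__global const {t} *lhs,
                        __global const {t} *rhs,
                        __global {t} *out)
{{
    size_t i = get_global_id(0);
    out[i] = lhs[i] {op} rhs[i];
}}"#,
        t = typename,
        op = op
    )
}

For `&x + &y` above, this would be instantiated with typename "float" and op '+', then built and enqueued with one work-item per element.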

A very similar approach can be taken to get CUDA support working. It might even be possible to merge them into a single GPU backend trait, for example, which would simplify the implementations quite a bit. It'd require a few substantial changes internally, though (I think. Maybe not?).

Anyway, let me know what you think!

@bluss (Member) commented Apr 1, 2024

That is interesting. Thanks for the neat proof of concept.

Unfortunately I don't have the bandwidth to look at this in depth right now; as you can see from the repo, we're working on version 0.16, which is only three years late or so. :) So this is brief and off the cuff. I'll ask some questions; that doesn't mean I'm making requirements or demands, they're just things to think about.

Why not have a separate owned type for arrays that live on a non-CPU device? Users probably don't want a situation where they can't tell whether adding two arrays together is going to cause a crash.

How do array views work in this model? Most interesting functionality in ndarray happens through array views.

I do agree, if you think so, that too much type information is not good either; it just gets in the way. The hard part is finding a balance.

let typename = match crate::opencl::rust_type_to_c_name::<A>() {
    Some(x) => x,
    None => panic!("The Rust type {} is not supported by the \
                    OpenCL backend", std::any::type_name::<A>())
bluss (Member):

Enforce at type level?
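
One way to do that, sketched under the assumption that nothing else constrains the element type `A`; the trait and impls below are hypothetical, not part of the PR:

// Hypothetical sketch: move the "is this type supported?" check from a
// runtime panic to a trait bound, so unsupported element types fail to
// compile instead.
trait OpenClType {
    // The C type name to splice into generated kernel source.
    const C_NAME: &'static str;
}

impl OpenClType for f32 {
    const C_NAME: &'static str = "float";
}

impl OpenClType for f64 {
    const C_NAME: &'static str = "double";
}

// An operation bounded on `A: OpenClType` can never hit the panic above;
// the match and its `None` arm disappear entirely.
fn kernel_type_name<A: OpenClType>() -> &'static str {
    A::C_NAME
}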

unsafe {
    let elements = self.len();
    let self_ptr = self.as_ptr() as *mut std::ffi::c_void;
    let other_ptr = rhs.as_ptr() as *mut std::ffi::c_void;
bluss (Member):

I'm assuming this operation can be invalid here because they are only known to be on the same device by a debug assertion, not a hard assertion?
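
A minimal sketch of the hardening being suggested, assuming some way to query an operand's device; the `Device` enum mirrors the PR's `ndarray::Device`, and the free function is illustrative:

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum Device {
    Host,
    OpenCL,
}

// Unconditional check (assert!, not debug_assert!) so the unsafe FFI path
// can rely on it in release builds as well.
fn assert_same_device(lhs: Device, rhs: Device) {
    assert_eq!(lhs, rhs, "binary ops require both operands on the same device");
}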

let dim = self.dim;
let strides = self.strides;
let data = self.data.move_to_device(device)?;
let ptr = std::ptr::NonNull::new(data.as_ptr() as *mut A).unwrap();
bluss (Member):

The new ptr is invalid if the move is Host to Host and the ptr before the move did not point to the start of the allocation. This is probably just a feature not yet implemented here.
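
A sketch of the missing piece, assuming `self.ptr` may point anywhere inside the allocation owned by `self.data`; field names follow the snippet above, and this fragment is illustrative, not tested against the PR:

// Record how far into the old allocation the view pointer sits, and
// re-apply that offset to the new allocation after the move.
let offset = unsafe { self.ptr.as_ptr().offset_from(self.data.as_ptr()) };
let data = self.data.move_to_device(device)?;
let ptr = unsafe {
    std::ptr::NonNull::new((data.as_ptr() as *mut A).offset(offset))
        .expect("move_to_device returned a null allocation")
};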

@Pencilcaseman (Author)

Thanks for the review @bluss. I didn't check the validity of stuff particularly rigorously, so I'm sure there are a few potential issues.

I've had a bit of a think, and I reckon it'd be possible to kill a few birds with one (admittedly quite large) stone.

I've read through most of the open issues, and it seems like a lot of them relate to the maintainability of the library (I'd be keen to help out with this if you're still looking for someone to maintain things) and cleaning up or refactoring the code.

If there were a Backend trait which exposed things like OwnedRepr creation (each backend will have its own OwnedRepr type), contiguous operations, strided operations, linear algebra operations, etc., for a given device, then it would be possible to abstract the calculations/operations away from the ArrayBase struct and into those backends.

The backends could be written with reasonably generic code (Rust generics rather than the current heavy macro expansions), making the code easier to maintain. The Array and View types would need to take the backend as a generic type parameter, but that should give any backend full View functionality, since it would all be handled by the backend provider.
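
To make that concrete, here is a rough sketch of the shape such a trait might take; the names and the method set are purely illustrative (using generic associated types for the per-backend storage):

// Illustrative only: each backend supplies its own owned storage type and
// the primitive operations that ArrayBase would dispatch to.
trait Backend {
    // Backend-specific owned storage: a host Vec, an OpenCL buffer, ...
    type OwnedRepr<A>;

    // Allocate storage for `len` elements on this backend's device.
    fn alloc<A: Clone + Default>(len: usize) -> Self::OwnedRepr<A>;

    // Elementwise binary op over contiguous storage.
    fn binary_op<A>(
        lhs: &Self::OwnedRepr<A>,
        rhs: &Self::OwnedRepr<A>,
        op: BinaryOp,
    ) -> Self::OwnedRepr<A>;

    // Host/device copies, strided ops, reductions, linear algebra, etc.
    // would live here too.
}

// The contiguous binary operations the PR currently supports.
enum BinaryOp {
    Add,
    Sub,
    Mul,
    Div,
}

Array and view types would then be generic over some B: Backend, so an expression like `&x + &y` dispatches to B::binary_op regardless of where the data lives.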

Obviously, things like custom mapping functions and hand-written iterators won't work on anything other than the CPU since the data isn't available in host memory, but everything else should work fine.

That has the added benefit of allowing anyone to write their own backend for ndarray for whatever use case they have instead of weaving the functionality into the existing codebase.

Anyway, maybe I'm getting a bit ahead of myself, since that'd be a fairly substantial refactor of the codebase. If that's something you think would be beneficial, then I'd be happy to start looking into it. As I mentioned earlier, I'd also be happy to help maintain the project if you need another maintainer :)

Thanks again for the review!
