forum.alglib.net
http://forum.alglib.net/

Alglib Functions in Cuda Kernels
http://forum.alglib.net/viewtopic.php?f=2&t=4769

Author:  Count_Calculus [ Mon Nov 10, 2025 11:06 pm ]
Post subject:  Alglib Functions in Cuda Kernels

I am currently writing some simulation software that uses Alglib's interpolation functions. I am preparing to draft a version that can be GPU-accelerated with CUDA, and I was wondering whether Alglib is compatible with CUDA kernels. Here is the general schematic of the relevant process:

- A functor object is constructed and, during construction, creates and stores a collection of spline1d/2d objects in a vector (either std::vector for the CPU or thrust::device_vector for the GPU). This functor would then be copied to the GPU with cudaMalloc/cudaMemcpy.
- During the simulation, the functor's function call operator is passed a collection of values and, based on these values, selects and evaluates the appropriate interpolant. This call would be made from within a CUDA kernel (a sketch of the intended pattern follows below this list).
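To make the intended pattern concrete, here is a minimal sketch. The names TableFunctor and evaluate_kernel are placeholders of mine, and a simple piecewise-linear lookup table stands in for the Alglib spline objects, since those are exactly the part I am unsure about:

Code:
#include <cuda_runtime.h>
#include <cstdio>

// Hypothetical stand-in for the real functor: it holds device pointers to a
// lookup table and does plain linear interpolation, where the real version
// would hold Alglib spline1d/2d objects.
struct TableFunctor
{
    const double* xs;   // device pointer: sorted sample abscissas
    const double* ys;   // device pointer: sample ordinates
    int           n;    // number of samples

    // The call operator must be __device__ (or __host__ __device__) to be
    // callable from inside a kernel.
    __device__ double operator()(double x) const
    {
        if (x <= xs[0])     return ys[0];
        if (x >= xs[n - 1]) return ys[n - 1];
        int i = 0;
        while (x > xs[i + 1]) ++i;           // find the bracketing interval
        double t = (x - xs[i]) / (xs[i + 1] - xs[i]);
        return ys[i] + t * (ys[i + 1] - ys[i]);
    }
};

__global__ void evaluate_kernel(TableFunctor f, const double* in, double* out, int m)
{
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    if (tid < m)
        out[tid] = f(in[tid]);               // functor evaluated on the device
}

int main()
{
    const int n = 4, m = 3;
    double h_xs[n] = {0.0, 1.0, 2.0, 3.0};
    double h_ys[n] = {0.0, 1.0, 4.0, 9.0};
    double h_in[m] = {0.5, 1.5, 2.5};

    double *d_xs, *d_ys, *d_in, *d_out;
    cudaMalloc(&d_xs, n * sizeof(double));
    cudaMalloc(&d_ys, n * sizeof(double));
    cudaMalloc(&d_in, m * sizeof(double));
    cudaMalloc(&d_out, m * sizeof(double));
    cudaMemcpy(d_xs, h_xs, n * sizeof(double), cudaMemcpyHostToDevice);
    cudaMemcpy(d_ys, h_ys, n * sizeof(double), cudaMemcpyHostToDevice);
    cudaMemcpy(d_in, h_in, m * sizeof(double), cudaMemcpyHostToDevice);

    TableFunctor f{d_xs, d_ys, n};           // functor itself is passed by value
    evaluate_kernel<<<1, 32>>>(f, d_in, d_out, m);

    double h_out[m];
    cudaMemcpy(h_out, d_out, m * sizeof(double), cudaMemcpyDeviceToHost);
    for (int i = 0; i < m; ++i)
        std::printf("f(%g) = %g\n", h_in[i], h_out[i]);

    cudaFree(d_xs); cudaFree(d_ys); cudaFree(d_in); cudaFree(d_out);
    return 0;
}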

I know that, when building classes for GPU use, the __device__ qualifier must be applied to member functions (or, in this case, the function call operator) so that they can be called from device code. I've looked through the Alglib source code (free version) and don't see any such qualifiers. I suspect this will prevent me from calling the interpolation evaluations from the GPU, but I wanted to see if anyone with more experience can confirm this or suggest a workaround. The kind of restriction I mean is illustrated below.
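For reference, here is a toy illustration with made-up struct names (not Alglib's actual code): nvcc refuses to call an unqualified member function from device code.

Code:
// Toy illustration of the qualifier issue: device code may only call
// functions that are compiled for the device.
struct HostOnlySpline
{
    double eval(double x) const { return x; }   // no qualifier: host-only
};

struct DeviceReadySpline
{
    __host__ __device__ double eval(double x) const { return x; }
};

__global__ void qualifier_demo(DeviceReadySpline s, double* out)
{
    out[0] = s.eval(2.0);   // fine: eval is __device__-qualified
    // HostOnlySpline h; out[0] = h.eval(2.0);   // would fail to compile:
    // "calling a __host__ function from a __global__ function is not allowed"
}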

Thank you.
