
Using the EMAN2 CUDA API

EMAN2 includes support for CUDA processing. To use CUDA in EMAN2 you must set the flag ENABLE_EMAN2_CUDA using ccmake, then recompile. This step defines the identifier ENABLE_EMAN2_CUDA so the preprocessor compiles the CUDA code. Any new CUDA code should be enclosed by #ifdef/#endif directives so it is only compiled when CUDA is desired. Compiling with CUDA exposes additional methods and members of the class EMData. Below is a list of additional EMData methods with Python bindings.

  • bool EMData::copy_to_cuda() const, this copies EMData data from the CPU to the GPU global memory

  • bool EMData::copy_to_cudaro() const, this copies EMData data from the CPU to the GPU texture memory

  • bool EMData::copy_rw_to_ro() const, this copies EMData data from global memory to texture memory

  • void EMData::switchoncuda(), this tells EMAN2 to use CUDA. You almost never want to call this function directly anymore; use cuda_initialize instead.

  • void EMData::switchoffcuda(), this tells EMAN2 to stop using CUDA.

  • bool EMData::cuda_initialize(), this tells EMAN2 to initialize CUDA and start using CUDA.

  • void EMData::cuda_cleanup(), this cleans up the CUDA cache and is called by an event handler in the EMAN2 module. You should never call this function unless you intend to shut down an EMAN2 program.

  • const char* EMData::getcudalock(), this returns a CUDA lock file. CUDA lock files are created so the system can keep track of which process is using which device. Insanely, this functionality is not built into the CUDA API. CUDA lock files are stored in /tmp (yes, this will not work on Windows, but neither will CUDA).

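A typical round trip through the Python bindings above might look like the sketch below. It is an assumption-laden illustration, not canonical EMAN2 usage: it assumes cuda_initialize and copy_to_cuda are callable on an image instance as listed, and the fallback string values are purely illustrative:

```python
def gpu_roundtrip(img):
    """Try to move an image to the GPU; fall back to the CPU if CUDA
    cannot be initialized. Returns which path was taken."""
    if not img.cuda_initialize():   # initialize CUDA and start using it
        return "cpu"                # no usable CUDA device: stay on the CPU
    img.copy_to_cuda()              # CPU -> GPU global memory
    # ... run GPU-enabled processors on img here ...
    return "gpu"

try:
    from EMAN2 import EMData        # only available in an EMAN2 install
    img = EMData(64, 64)            # hypothetical small test image
    print(gpu_roundtrip(img))
except ImportError:
    print("EMAN2 not installed; skipping the live demo")
```

In a build without ENABLE_EMAN2_CUDA, or without EMAN2 installed at all, the CPU fallback or the ImportError branch is taken.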
If you are writing new C++ code, you will have access to, and want to use, additional EMData CUDA methods. These are:

  • float* getcudarwdata() const, returns a pointer to data in the global GPU memory. If nothing is there, 0 is returned

  • float* getcudarodata() const, returns a pointer to data in the texture GPU memory. If nothing is there, 0 is returned

  • bool EMData::isrodataongpu() const, returns True if data is in the GPU texture memory. Also returns True if data is in global memory AND it was successfully copied from global to texture memory. Otherwise False is returned

  • bool EMData::usecuda, this member acts as a flag to signal when CUDA is being used. You should enclose all CUDA code in the braces: if(EMData::usecuda){......}

Eman2UsingCudaFromC++ (last edited 2012-05-01 22:00:50 by JohnFlanagan)