Machine Learning Team Lead at White Hat, Founder/Director at Farset Labs
The subtitle of the book is “A Hands-on Approach”, and I didn’t appreciate until a third of the way through that that’s exactly what it is. The pairing of Kirk — an NVIDIA Fellow, outgoing NVIDIA Chief Scientist, generally world-weary technologist and all-round ‘hardware guru’ — with Hwu, a well-heeled educator and researcher at the University of Illinois, provides a practical but in-depth look at massively parallel processing that goes beyond the pure ‘programming’. The book assumes the reader can work out, for instance, how to do matrix multiplication the ‘basic’ way from the NVIDIA CUDA APIs, and instead looks at how to take advantage of the hardware to achieve sometimes incredible speed increases.
I’m still working through it, reading a few chapters ahead and then going back and doing all the exercises in emulation mode (access to a remote machine has been problematic, so if anyone wants to lend me shell access to a CUDA machine, I’d appreciate it).
For anyone looking at this field, this is the bible!