A Gentle Introduction to CUDA PTX
philipfabianek.com
59 points by ashvardanian 5 days ago
Very interesting. It sounds like tuning at the PTX level can increase workload efficiency, as the DeepSeek folks note: "Specifically, we employ customized PTX (Parallel Thread Execution) instructions" https://arxiv.org/abs/2412.19437.
Agreed! The gulf between pure-C++ CUDA and PTX is getting wider with these optimizations. My understanding is that DeepSeek used PTX instructions that either had no corresponding C++ counterpart (like `wgmma`, mentioned in the article) or used uncommon combinations of modifiers (`LD.Global.NC.L1::no_allocate.L2::256b`).
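For readers who haven't seen how this is done in practice: the usual way to reach such instructions from CUDA C++ is inline PTX assembly via `asm volatile`. The sketch below is illustrative, not DeepSeek's actual code; the `.L1::no_allocate` eviction-priority qualifier requires a recent PTX ISA and sm_80 or newer.

```cuda
// Hypothetical sketch: a non-coherent global load with an L1 "no allocate"
// cache hint, issued via inline PTX because no C++ intrinsic exposes it.
__device__ float load_nc_no_allocate(const float *ptr) {
    float v;
    asm volatile("ld.global.nc.L1::no_allocate.f32 %0, [%1];"
                 : "=f"(v)          // output: loaded float
                 : "l"(ptr));       // input: 64-bit address
    return v;
}
```

The compiler passes such inline PTX through to the generated `.ptx` mostly verbatim, which is why modifier combinations with no C++ spelling are still reachable this way.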
It’s really not true anymore that PTX is forward compatible. There’s a subset that is, but the new interesting interfaces that have been added are not forward compatible and change with each microarchitectural revision. Using those is most of the reason you’d drop down to PTX anyway; otherwise compilers are fairly good these days, and it’s rare to look at PTX unless you’re profiling.
Is this analogy valid: writing PTX is like writing assembly instead of a higher-level language (C, C++, Rust, etc.) for CPU code? I.e., normally the higher-level code compiles down to it, but you can optimize by going lower?
For context, as the opening paragraph of the article mentions, I generate PTX code regularly but have no idea what the actual code in the PTX file means!
I'm curious about the forward compatibility the article goes into. I've only experienced that up to a point: code compiled on CUDA 12 does not seem to work on machines with CUDA 13.
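To make the analogy concrete, here is a trivial kernel and, in the trailing comment, an abridged version of the kind of PTX that `nvcc --ptx` emits for it. The listing is illustrative; exact register names and instruction selection vary by compiler version and target architecture.

```cuda
// Compiling this with `nvcc --ptx add.cu` produces PTX resembling the
// abridged listing in the comment below.
__global__ void add(float *out, const float *a, const float *b) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    out[i] = a[i] + b[i];
}
// Abridged, illustrative PTX:
//   mov.u32       %r1, %ctaid.x;      // blockIdx.x
//   mov.u32       %r2, %ntid.x;       // blockDim.x
//   mov.u32       %r3, %tid.x;        // threadIdx.x
//   mad.lo.s32    %r4, %r1, %r2, %r3; // i = blockIdx.x * blockDim.x + threadIdx.x
//   ld.global.f32 %f1, [%rd7];        // a[i]
//   ld.global.f32 %f2, [%rd8];        // b[i]
//   add.f32       %f3, %f1, %f2;
//   st.global.f32 [%rd9], %f3;        // out[i]
```

So yes, reading PTX is much like reading CPU assembly: virtual registers, explicit loads/stores, and one operation per line.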
Indeed, this is one way to think about it. However, PTX is an instruction set for a virtual machine, not the actual hardware. The true hardware-specific assembly is called SASS (Streaming Assembly), and the PTX code is translated into SASS by the GPU driver (using ptxas) in a final compilation step. Unlike SASS, PTX is (mostly) forward compatible.
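The two-stage pipeline described above can be made explicit with the standard toolchain (a sketch, assuming the CUDA toolkit is installed; architecture numbers are examples):

```
nvcc -arch=compute_80 --ptx kernel.cu -o kernel.ptx   # C++ -> PTX (virtual ISA)
ptxas -arch=sm_80 kernel.ptx -o kernel.cubin          # PTX -> SASS (hardware ISA)
cuobjdump -sass kernel.cubin                          # inspect the SASS
```

When a binary ships only PTX (no SASS for the GPU it runs on), the driver performs that `ptxas` step itself at load time, which is what gives PTX its (mostly) forward-compatible behavior.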
I don't know the details of your CUDA 12 vs. 13 issue, but I think it is not about hardware compatibility so much as the software stack: an application linked against CUDA 12 libraries might not work with CUDA 13 libraries.
That's not much different from a modern CPU with an OS on top, where the OS does some of the scheduling, then the CPU splits the instructions into microinstructions and schedules them again at a finer grain (hyperthreading and such). It seems to me there must be a C-level syntax and compiler so you're not manually splitting up individual adds, while still being able to optimize the math effectively. But if that were true, we wouldn't have AAA game studios going to NVIDIA to optimize their engines for each individual game.