In my parallel programming class we used and compared several techniques to speed up matrix multiplication. I vaguely remember using OpenMP and CUDA. I need to look through my backups to see if I still have that code. Especially the CUDA one — I wonder how similar it is to tensors.
I agree with the author that programming GEMM on newer GPUs is a very different experience, though I'm wondering if "newer GPUs are [actually strictly] better"? It seems like there should still be some highly cost-effective use cases for T4 GPUs — aren't there?
>> Our first contribution is Optimised Attention, which performs similarly to standard attention, but has 3/4 as many parameters and one matrix multiplication fewer per head. Next, we introduce Efficient Attention, which performs on par with standard attention with only 1/2 as many parameters and two matrix multiplications fewer per head, and is up to twice as fast as standard attention. Lastly, we introduce Super Attention, which surpasses standard attention by a significant margin in both vision and natural language processing tasks while having fewer parameters and matrix multiplications.
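The "matrix multiplications per head" bookkeeping is easiest to see against the baseline the abstract is comparing to. The paper's variants aren't reproduced here, but a minimal NumPy sketch of one head of standard scaled dot-product attention (all shapes are toy values I chose) shows where the five matmuls come from:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def standard_attention_head(X, Wq, Wk, Wv):
    """One head of standard scaled dot-product attention.

    Matmul count per head: three input projections, plus Q @ K^T,
    plus the final weighting by V -- five in total. The quoted
    paper's variants claim to remove one or two of these.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv          # matmuls 1-3
    scores = (Q @ K.T) / np.sqrt(K.shape[1])  # matmul 4
    return softmax(scores) @ V                # matmul 5

# Toy shapes: sequence length 4, model/head dimension 8.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = standard_attention_head(X, Wq, Wk, Wv)
```

Dropping a projection (e.g. sharing or removing Wv) is the kind of change that saves both parameters and one of the five matmuls, which matches the parameter/matmul trade-offs the abstract describes.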
>>> Convolution is in fact multiplication in Fourier space: the convolution theorem [1] states that the Fourier transform converts convolutions into pointwise products.
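The theorem is easy to check numerically with NumPy's FFT: zero-padding both signals to the full linear-convolution length makes the pointwise product of their DFTs agree with the direct time-domain convolution.

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.normal(size=16)
b = rng.normal(size=16)

# Direct (time-domain) linear convolution.
direct = np.convolve(a, b)

# Same thing via the convolution theorem: pointwise product in
# Fourier space. Zero-padding to the full output length makes the
# DFT's circular convolution coincide with the linear one.
n = len(a) + len(b) - 1
via_fft = np.fft.ifft(np.fft.fft(a, n) * np.fft.fft(b, n)).real

assert np.allclose(direct, via_fft)
```

For long signals this is also why FFT-based convolution is faster: the transform route costs O(n log n) versus O(n^2) for the direct sum.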