muragekibicho 4 days ago

Introduction : Finite Field Assembly is a programming language that lets you emulate GPUs on CPUs

It's a CUDA alternative that uses finite field theory to convert GPU kernels to prime number fields.

Finite Field is the primary data structure : FF-asm is a CUDA alternative designed for computations over finite fields.

Recursive computing support : not cache-aware vectorization, not parallelization, but performing a calculation inside a calculation inside another calculation.

Extension of C89 - runs everywhere gcc is available. Context : I'm getting my math PhD and I built this language around my area of expertise, Number Theory and Finite Fields.

  • zeroq 3 hours ago

    I've read this and I've seen the site, and I still have no idea what it is, what the application is, or why I should be interested.

    Additionally I've tried earlier chapters and they are behind a paywall.

    You need a better introduction.

    • pizza 3 hours ago

      This is phrased in a kind of demanding way to an author who has been kind enough to share their novel work with us. Are you sure you spent enough time trying to understand?

      • Conscat 2 hours ago

        It seems that pretty much everybody here is confused by this article. One user even accused it of LLM plagiarism, which is pretty telling in my opinion.

        I for one have no clue what anything I read in there is supposed to mean. Emulating a GPU's semantics on a CPU is a topic which I thought I had a decent grasp on, but everything from the stated goals at the top of this article to the example code makes no sense to me.

        • pizza 2 hours ago

          It just seems like residue number system (RNS) computation, which I'm already working with.

  • almostgotcaught 2 hours ago

    > I'm getting my math PhD and I built this language around my area of expertise, Number Theory and Finite Fields.

    Your LinkedIn says you're an undergrad that took a gap year 10 months ago (before completing your senior year) to do sales for a real estate company.

    • pizza 2 hours ago

      Why bother doing a witch hunt and leaving out that they did Stats at Yale?

      • almostgotcaught 2 hours ago

        Because why does it matter? Are you suggesting undergrad stats at Yale is comparable to a PhD in number theory?

        • pizza 44 minutes ago

          I guess it's not clear to me why it's even interesting to talk about their LinkedIn or their PhD in the first place. Having or not having a PhD doesn't make the work any more or less true. Wouldn't it be more interesting to discuss the merits of the post? There's really little point in arguing that their LinkedIn says something different from the comment, and that therefore the submission is invalid.

          But suppose I did actually hold that belief for some reason; then it would seem fairly intellectually dishonest to withhold relevant info in my pointed inquisition, in which I characterize them as someone lacking any mathematical experience at all, let alone experience from a world-class university. But maybe that's just me!

    • saghm 2 hours ago

      Depending on what properties they sold, they certainly could have gotten valuable real-world expertise with finite fields. It's certainly easier to sell them than infinite ones!

    • saagarjha 2 hours ago

      Are you sure that’s their LinkedIn?

      • almostgotcaught 2 hours ago

        Why wouldn't it be? All of the pics, names and details line up between GitHub, here, Reddit, and substack.

adamvenis 3 hours ago

I think I get it. You're using the ring isomorphism from the Chinese Remainder Theorem to do "parallel computation". This is the same principle as how boolean algebra on binary strings computes the pairwise results of each bit in parallel. Unfortunately, there's no free lunch - if you want to perform K operations on N-bit integers in parallel, you still need to work with (K * N)-bit-wide vectors, which is essentially what SIMD does anyway.
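
For anyone who wants to see the trick concretely, here's a toy sketch in plain C of the CRT packing I mean (my own illustration; the function names and moduli are made up and have nothing to do with FF-asm's actual API): three pairwise-coprime moduli act as three "lanes", one addition on the packed residue updates every lane at once, and each lane is recovered by reducing mod its modulus.

    #include <stdio.h>

    /* Pairwise-coprime "lane" moduli; product M = 5*7*9 = 315. */
    static const long m[3] = {5, 7, 9};
    static const long M = 315;

    /* Modular inverse by brute force (fine for tiny moduli). */
    static long inv_mod(long a, long n)
    {
        long t;
        for (t = 1; t < n; t++)
            if ((a * t) % n == 1)
                return t;
        return -1; /* no inverse: a and n share a factor */
    }

    /* CRT-encode three lane values into a single integer 0 <= x < M. */
    static long crt_pack(const long lane[3])
    {
        long x = 0;
        int i;
        for (i = 0; i < 3; i++) {
            long Mi = M / m[i];                 /* product of the other moduli */
            long yi = inv_mod(Mi % m[i], m[i]); /* (M/m_i)^(-1) mod m_i */
            x = (x + (lane[i] % m[i]) * Mi * yi) % M;
        }
        return x;
    }

    int main(void)
    {
        long a[3] = {3, 2, 4};
        long b[3] = {1, 4, 3};
        long xa = crt_pack(a);
        long xb = crt_pack(b);
        long sum = (xa + xb) % M; /* ONE addition...               */
        int i;
        for (i = 0; i < 3; i++)   /* ...updates every lane at once */
            printf("lane %d: %ld (expected %ld)\n",
                   i, sum % m[i], (a[i] + b[i]) % m[i]);
        return 0;
    }

Note the packed value lives in 0..314, i.e. it needs about as many bits as the three lanes put together, which is exactly the no-free-lunch point.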

  • markisus an hour ago

    I’m also unsure where finite fields are coming into play. Finite fields have orders that are prime powers, and the author is talking about a “finite field” of order 7x9x11. But if we aren’t dealing with fields, why is the author mentioning plans for implementing division? It definitely needs more explanation but I’m not sure if the idea is coherent.
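
    To make the division worry concrete, here's a tiny sketch of my own (nothing from the article): in Z/693, where 693 = 7*9*11, an element only has a multiplicative inverse when it is coprime to 693, so "division" is at best partial. In an actual finite field GF(p^k), every nonzero element is invertible.

        #include <stdio.h>

        /* a is invertible mod n iff gcd(a, n) == 1 */
        static long gcd(long a, long b)
        {
            while (b != 0) {
                long t = a % b;
                a = b;
                b = t;
            }
            return a;
        }

        int main(void)
        {
            const long n = 7 * 9 * 11; /* 693: not a prime power */
            long a;
            for (a = 1; a <= 12; a++)
                printf("%2ld is %sinvertible mod %ld\n",
                       a, gcd(a, n) == 1 ? "" : "NOT ", n);
            return 0;
        }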

  • almostgotcaught 3 hours ago

    Yup that's exactly what this is and thus, notably, it is not actually about finite fields.

vimarsh6739 2 hours ago

One of the more subtle aspects of retargeting GPU code to run on the CPU is the presence of fine-grained (read: block-level and warp-level) explicit synchronization mechanisms on the GPU. However, the same primitives are not available in CPU land, so additional care has to be taken to handle them. One example of work that tries this is https://arxiv.org/pdf/2207.00257 .

Interestingly, in the same work, contrary to what you’d expect, transpiling GPU code to run on the CPU gives ~76% speedups in HPC workloads compared to a hand-optimized multi-core CPU implementation on Fugaku (a CPU-only supercomputer), after accounting for these differences in synchronization.

  • petermcneeley an hour ago

    A single CPU thread should be treated as basically a warp executing 4 SIMD vectors in parallel. The naïve implementation of __syncthreads() would be an atomic mechanism shared across all threads that contribute to what is a GPU workgroup.

    Looks like this entire paper is just about how to move/remove these barriers.
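
    For concreteness, the naive barrier might look something like this rough sketch (my own toy code, not the paper's scheme; names like workgroup_barrier are invented, it needs gcc -pthread, and it assumes C11 <stdatomic.h>): every thread in the emulated workgroup bumps a shared arrival counter and spins until the whole group has arrived. The paper is then about when such barriers can be moved or removed instead of being spun on like this.

        #include <pthread.h>
        #include <stdatomic.h>
        #include <stdio.h>

        #define WORKGROUP_SIZE 4

        /* Naive reusable barrier: an arrival counter plus a generation
           number shared by all threads emulating one GPU workgroup. */
        static atomic_int arrived;
        static atomic_int generation;

        static void workgroup_barrier(void) /* stand-in for __syncthreads() */
        {
            int my_gen = atomic_load(&generation);
            if (atomic_fetch_add(&arrived, 1) == WORKGROUP_SIZE - 1) {
                /* last thread to arrive: reset the counter, release the rest */
                atomic_store(&arrived, 0);
                atomic_fetch_add(&generation, 1);
            } else {
                while (atomic_load(&generation) == my_gen)
                    ; /* spin until the last thread bumps the generation */
            }
        }

        static void *worker(void *arg)
        {
            int tid = *(int *)arg;
            printf("thread %d: before barrier\n", tid);
            workgroup_barrier();
            printf("thread %d: after barrier\n", tid);
            return NULL;
        }

        int main(void)
        {
            pthread_t t[WORKGROUP_SIZE];
            int ids[WORKGROUP_SIZE];
            int i;
            for (i = 0; i < WORKGROUP_SIZE; i++) {
                ids[i] = i;
                pthread_create(&t[i], NULL, worker, &ids[i]);
            }
            for (i = 0; i < WORKGROUP_SIZE; i++)
                pthread_join(t[i], NULL);
            return 0;
        }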

pwdisswordfishz 2 hours ago

> Field order (the number of elements your field can hold). i.e you can store 8 * 9 * 11 elements in the field

I thought a finite field's order has to be a prime power.

hashxyz 3 hours ago

Pretty sure this is just vectorization. You can pack some 8-bit ints into a machine-length 32-bit int and add them together; that is vectorization.

  • Conscat 3 hours ago

    I don't think that's true when the add overflows. You wouldn't want a lane's overflow to carry into an adjacent lane.
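
    To make that concrete, here's a rough sketch of my own (nothing to do with the submission): with four 8-bit lanes packed into a uint32_t, a plain add lets one lane's overflow spill into its neighbour, while SWAR-style code masks the carries so each lane wraps mod 256 on its own.

        #include <stdint.h>
        #include <stdio.h>

        /* SWAR add of four packed 8-bit lanes: add the low 7 bits of each
           lane (no carry can cross a lane boundary), then fix up bit 7 of
           each lane with XOR. */
        static uint32_t swar_add_u8x4(uint32_t a, uint32_t b)
        {
            uint32_t low = (a & 0x7f7f7f7fu) + (b & 0x7f7f7f7fu);
            return low ^ ((a ^ b) & 0x80808080u);
        }

        int main(void)
        {
            /* lanes, most significant byte first: {1, 2, 250, 10} and {1, 2, 10, 10} */
            uint32_t a = 0x0102fa0au;
            uint32_t b = 0x01020a0au;

            uint32_t plain = a + b;               /* 250+10 overflows its lane; the carry bleeds into the neighbour */
            uint32_t swar  = swar_add_u8x4(a, b); /* each lane wraps mod 256 independently */

            printf("plain add: 0x%08x\n", (unsigned)plain); /* 0x02050414: the neighbour byte became 0x05 instead of 0x04 */
            printf("swar  add: 0x%08x\n", (unsigned)swar);  /* 0x02040414: 260 wraps to 4, neighbours untouched */
            return 0;
        }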

foota 3 hours ago

It's a bit hard for me to tell the intention here. Is the idea that finite fields can take better advantage of CPU architecture than something like SIMD for parallel computation? Or is this just for experimentation?

Edit: this tickles my brain about some similar-seeming programming language experiment, where they were also trying to express concurrency (not inherently the same as parallelism) using some fancy math. I can't remember what it was, though.

catapart 3 hours ago

If matrix multiplication does get added to this, I imagine that there is some utility for game development. At that point, I'd be curious what the comparison would be from CPU to GPU. Like, given a clock speed of x, what would a comparable GPU (or set of GPU features) look like?

I know that's pretty abstract, but without that kind of "apples to apples" comparison, I have trouble contextualizing what kind of output is being targeted with this kind of work.

tooltechgeek 3 hours ago

What are some problems where this approach has advantages?

imbusy111 3 hours ago

I suspect this is just AI slop.

Retr0id 25 minutes ago

This is nonsense.

almostgotcaught 3 hours ago

It's hilarious how gullible HN is. All you gotta do is put GPU and math buzzwords in your README and you'll automatically be upvoted.

This was discussed on Reddit - this is not actually finite field arithmetic.

Also you can go to this dude's GitHub and see exactly how serious this project is.

https://github.com/LeetArxiv/Finite-Field-Assembly

Lol