I haven't seen such an information gap in a long time. While everyone writes about AI, the SubQ news passed by in almost total silence. Here's what they're cooking and why it matters for the future of AI coding.
Hi, my name is Tom Smykowski, I'm a staff full-stack engineer. I build and scale SaaS platforms to millions of users, working end-to-end from system architecture to frontend to mobile. On this blog I explore how AI is reshaping software development and what it means for engineers.
The AI news cycle is relentless. Every day brings another announcement, another benchmark, another company claiming to revolutionize everything. But somehow, SubQ managed to slip through almost unnoticed.
One outlet picked up the story, mostly in a "this is what it's supposed to do, but we can't verify it" tone. Which surprised me, because most AI claims these days circle the internet without any verification at all.
So I looked into what Subquadratic actually offers. And it's interesting.
The Problem With Current LLMs
The problem with current LLMs is the sheer amount of computation inference requires: multiplying inputs against weights and summing the results across many stacked layers, including the attention layers. And self-attention in particular scales quadratically with context length, since every token has to attend to every other token.
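To make that scaling concrete, here's a minimal NumPy sketch of standard self-attention (sizes are illustrative, not from any real model). The key point is the n-by-n score matrix: doubling the context length quadruples its size, which is the quadratic wall that "subquadratic" approaches aim to beat.

```python
import numpy as np

def attention(Q, K, V):
    """Plain scaled dot-product attention for a single head."""
    d = Q.shape[-1]
    # The scores matrix is (n, n): every token attends to every other token.
    # This is where compute and memory grow quadratically with sequence length n.
    scores = Q @ K.T / np.sqrt(d)
    # Numerically stable softmax over each row.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # (n, d)

n, d = 1024, 64  # illustrative sequence length and head dimension
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
out = attention(Q, K, V)
print(out.shape)  # (1024, 64) -- but the hidden (n, n) scores cost n*n work
```

Going from a 1,024-token context to 2,048 tokens means the scores matrix grows from roughly one million entries to four million, which is why long-context inference gets expensive so quickly.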
And let's be honest, it's not like the latest models are in any way better than the previous ones. In fact, they tend to become lazier with each generation.
This is where SubQ enters the picture with a promise that could change everything...
