The 5-Minute Proof
It took five minutes. That’s how long the internal beta of Grok 4.20 needed to solve a problem that mathematicians have been picking at for years. We aren’t just talking about generating boilerplate code or summarizing an email—we’re talking about discovering a new Bellman function for a problem in harmonic analysis.
The discovery, shared by a researcher with early access, provides a “sharp” lower bound for the dyadic square function, improving upon known theoretical limits. While this won’t change the price of bread tomorrow, it’s a terrifyingly impressive signal of where AI reasoning is heading in 2026.
The Problem: Dyadic Square Functions of Indicator Functions

To appreciate what Grok did, we have to look at the math. The core problem sits at the intersection of probability and harmonic analysis: finding lower bounds for dyadic square functions of indicator functions.
In simple terms, we take a subset $A$ of the interval $[0,1]$ (represented by its indicator function $1_A$) and measure how much its dyadic martingale, the sequence of averages of $1_A$ over dyadic intervals at successively finer scales, fluctuates. The square function aggregates those fluctuations across all scales.
Specifically, researchers Alpay and Ivanisvili were studying two variations (made concrete in the sketch after this list):
1. The $S_2$ Norm (Quadratic Variation): $\lVert S_2(1_A) \rVert_1$
2. The $S_1$ Norm (Total Variation): $\lVert S_1(1_A) \rVert_1$
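To pin these objects down, here is a minimal numerical sketch (our own illustration, not from the paper). It assumes the standard Haar-martingale definitions $S_1(f) = \sum_k |d_k|$ and $S_2(f) = (\sum_k |d_k|^2)^{1/2}$, where $d_k = f_k - f_{k-1}$ are the dyadic martingale differences, and the helper name `dyadic_square_norms` is hypothetical:

```python
import numpy as np

def dyadic_square_norms(f, n):
    """Return (||S_1(f)||_1, ||S_2(f)||_1) for f sampled on the 2**n dyadic
    cells of [0,1], treating the samples as the finest martingale level."""
    N = 2 ** n
    # levels[k] holds the averages of f over the 2**k dyadic intervals
    # of length 2**(-k).
    levels = [np.asarray(f, dtype=float)]
    for _ in range(n):
        levels.append(levels[-1].reshape(-1, 2).mean(axis=1))
    levels.reverse()  # now levels[k] has 2**k entries
    S1, S2 = np.zeros(N), np.zeros(N)
    for k in range(1, n + 1):
        # Martingale difference d_k = f_k - f_{k-1}, expanded to the fine grid.
        d = (np.repeat(levels[k], N // 2 ** k)
             - np.repeat(levels[k - 1], N // 2 ** (k - 1)))
        S1 += np.abs(d)
        S2 += d ** 2
    # Each fine cell has measure 2**(-n), so the L1 norm is just the mean.
    return S1.mean(), np.sqrt(S2).mean()

# Example: A = [0, 1/8], so |A| = 1/8.
n = 12
f = np.zeros(2 ** n)
f[: 2 ** n // 8] = 1.0
print(dyadic_square_norms(f, n))
```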
In their paper Lower Bounds for Dyadic Square Functions of Indicator Functions of Sets, they proved a new baseline for the quadratic variation:
$$\lVert S_2(1_A) \rVert_1 \ge I(|A|)$$
where $I(x)$ is the Gaussian isoperimetric profile. This was a big deal because it improved upon the classical Burkholder-Davis-Gundy bound, which is roughly proportional to $|A|(1-|A|)$. The new bound adds a factor of $\sqrt{\log(1/(|A|(1-|A|)))}$, making it much tighter (sharper) as the set $A$ gets smaller.
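For intuition about the new baseline: the profile $I(p) = \varphi(\Phi^{-1}(p))$ (the Gaussian density evaluated at the Gaussian $p$-quantile) is easy to compute. The snippet below (our own illustration) compares it with the old $|A|(1-|A|)$ factor and with its small-$p$ asymptotic $p\sqrt{2\log(1/p)}$:

```python
import numpy as np
from scipy.stats import norm

def isoperimetric_profile(p):
    """Gaussian isoperimetric profile I(p) = phi(Phi^{-1}(p))."""
    return norm.pdf(norm.ppf(p))

for p in [1e-1, 1e-2, 1e-4, 1e-8]:
    print(f"p={p:.0e}  I(p)={isoperimetric_profile(p):.3e}  "
          f"p(1-p)={p * (1 - p):.3e}  "
          f"p*sqrt(2log(1/p))={p * np.sqrt(2 * np.log(1 / p)):.3e}")
```

Already at $p = 10^{-4}$ the profile is several times larger than $p(1-p)$, which is exactly the "tighter as $A$ gets smaller" effect.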
Grok’s Solution: The Missing “Smooth” Piece

Here is where the AI steps in. The researchers had established the bound involving the Gaussian profile $I(p)$. But to fully determine the Bellman function $U(p,q)$ for this problem, they needed an explicit formula satisfying complex differential constraints.
After minutes of compute, Grok 4.20 produced this explicit formula:
$$U(p,q) = E \sqrt{q^2 + \tau}$$
Here, $\tau$ is the exit time of Brownian motion from $(0,1)$ started at $p$. Setting $q = 0$ yields the asymptotic behavior:
$$U(p,0) = E\sqrt{\tau} \sim p \log(1/p)$$
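The $p\log(1/p)$ scaling can be sanity-checked with a crude Monte Carlo experiment (our own sketch, not from the paper), using an Euler discretization of Brownian motion. The step size `dt` trades accuracy for speed, and no attempt is made to match constants:

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_sqrt_exit_time(p, n_paths=50_000, dt=1e-4, t_max=2.0):
    """Monte Carlo estimate of E[sqrt(tau)], where tau is the first exit
    time from (0,1) of Brownian motion started at p (Euler scheme)."""
    x = np.full(n_paths, float(p))
    tau = np.full(n_paths, t_max)  # paths still alive at t_max get truncated
    alive = np.ones(n_paths, dtype=bool)
    t = 0.0
    while t < t_max and alive.any():
        t += dt
        x[alive] += np.sqrt(dt) * rng.standard_normal(alive.sum())
        exited = alive & ((x <= 0.0) | (x >= 1.0))
        tau[exited] = t
        alive &= ~exited
    return np.sqrt(tau).mean()

for p in [0.2, 0.1, 0.05]:
    print(f"p={p}: E[sqrt(tau)] ~ {mean_sqrt_exit_time(p):.4f}, "
          f"p*log(1/p) = {p * np.log(1 / p):.4f}")
```

The two columns agree up to an absolute constant, which is all that the $\sim$ in the formula asserts.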
Why Is This “Sharp”?
The significance lies in the smoothness.
* The $S_1$ Problem: The paper showed that the minimal bound for the $S_1$ norm ($B_{1,1}$) has a fractal-like structure. It satisfies the recursive equation $B_{1,1}(x) + x = 2B_{1,1}(x/2)$, creating a jagged, non-differentiable profile at dyadic points $x = 2^{-k}$ (a quick consequence of this recursion is worked out after this list).
* The $S_2$ Problem (Grok’s contribution): Grok found that for the $S_2$ norm, the function is smooth. It’s related to the Gaussian profile but distinct. It essentially bridges the gap between the rough fractal geometry of the Hamming cube (discrete gradients) and smooth continuous probability.
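To see where both the jaggedness and the logarithm come from, iterate the recursion downward from a hypothetical seed value $c = B_{1,1}(1)$. Rearranging gives $B_{1,1}(x/2) = (B_{1,1}(x) + x)/2$, and induction on $k$ yields

$$B_{1,1}(2^{-k}) = 2^{-k}(c + k),$$

so at dyadic points $x = 2^{-k}$ we get $B_{1,1}(x) \approx x\log_2(1/x)$: the same $p\log(1/p)$ growth that appears in Grok's smooth $S_2$ formula, but assembled from jagged dyadic pieces.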
Grok identified that the sharp lower bound on the $L^1$ norm of the dyadic square function is given by exactly this expectation of the square root of the Brownian exit time, $E\sqrt{\tau}$, providing a square-root improvement in the logarithmic factor over previous bounds.
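As a rough check of that claim, we can reuse the hypothetical `dyadic_square_norms` sketch from earlier on the sets $A = [0, p]$ and watch $\lVert S_2(1_A) \rVert_1$ track $p\log(1/p)$ up to an absolute constant:

```python
import numpy as np

# Assumes dyadic_square_norms from the earlier sketch is in scope.
n = 16
for m in [2, 4, 6, 8]:
    p = 2.0 ** -m
    f = np.zeros(2 ** n)
    f[: 2 ** (n - m)] = 1.0  # indicator of A = [0, 2**-m]
    _, s2 = dyadic_square_norms(f, n)
    print(f"p=2^-{m}: ||S_2(1_A)||_1 = {s2:.4f}, p*log2(1/p) = {p * m:.4f}")
```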
What This Means: Optimizing the Hypercube
You might be asking: Why do I care about dyadic square functions?
It’s about optimization on the hypercube. This math relates directly to edge-isoperimetric inequalities: basic questions about how shapes can be cut in high-dimensional discrete spaces, such as the Boolean cube that underlies functions in computer science.
- Sharpening the Edge: Theorem 1.6 of the paper shows that these martingale bounds sharpen classical edge-isoperimetric inequalities (a toy check follows this list).
- Speed of Insight: What took humans months to derive and conjecture, Grok confirmed and solved in minutes by finding the exact extremal formula $E\sqrt{\tau}$.
- Co-Authorship Era: We are officially in the era where AI finds the “sharp” bounds that human intuition might miss because we struggle to visualize infinite-dimensional martingale limits.
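To ground the hypercube talk, here is a toy brute-force check (our own illustration) of the classical edge-isoperimetric inequality on $\{0,1\}^n$, namely $|\partial_E A| \ge |A|(n - \log_2|A|)$, which is exactly tight for subcubes:

```python
import math

def edge_boundary(A, n):
    """Count hypercube edges with exactly one endpoint in A.
    Vertices of {0,1}^n are encoded as integers via their bit patterns."""
    return sum(1 for v in A for i in range(n) if v ^ (1 << i) not in A)

n = 6
subcube = set(range(2 ** 3))             # 3-dimensional subcube: top 3 bits zero
ball = {0} | {1 << i for i in range(n)}  # Hamming ball of radius 1 around 0

for name, A in [("subcube", subcube), ("ball", ball)]:
    bound = len(A) * (n - math.log2(len(A)))  # |A| * (n - log2 |A|)
    print(f"{name}: |A|={len(A)}, edge boundary={edge_boundary(A, n)}, "
          f"lower bound={bound:.2f}")
```

The subcube meets the bound with equality (24 boundary edges against a bound of 24); Theorem 1.6's martingale machinery is about sharpening inequalities of exactly this flavor.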
The Bottom Line
Grok 4.20’s discovery is a “small step” in harmonic analysis, but a giant leap for AI reasoning. It demonstrates that the next generation of models can navigate abstract mathematical spaces with the same ease they navigate Python code. The gap between human conjecture and AI proof is closing—fast.
FAQ
What is a Bellman function?
In this context, it’s a tool used in harmonic analysis to prove sharp inequalities. You look for a function satisfying certain differential or convexity conditions (often related to Brownian motion), and its existence delivers the bound, for example for square functions or singular integrals.
Is Grok 4.20 available to the public?
Not yet. The result came from a user with early access to the internal beta version.
What is the “square root improvement”?
The previous bound scaled like $p\sqrt{\log(1/p)}$. Grok’s improved bound scales like $p\log(1/p)$, with no square root on the logarithm. Since these are lower bounds, the faster-growing expression is the sharper one as $p \to 0$, providing a tighter mathematical constraint.
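Concretely (our own illustration, constants omitted):

```python
import numpy as np

for p in [1e-2, 1e-4, 1e-8, 1e-16]:
    old = p * np.sqrt(np.log(1 / p))  # previous factor: p * sqrt(log(1/p))
    new = p * np.log(1 / p)           # Grok's factor:   p * log(1/p)
    print(f"p={p:.0e}  old={old:.2e}  new={new:.2e}  gain={new / old:.1f}x")
```

The gain factor is $\sqrt{\log(1/p)}$ itself, so the gap keeps widening as $p$ shrinks.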
