
Part 0.0: full correlation matrix in cascade ratios; partial closure of OQ2#91

Draft
dickie81 wants to merge 1 commit into main from claude/gram-full-correlation-matrix

Conversation

@dickie81
Owner

Summary

The next forced implication of the Gram unification (PR #90): the Beta–Gamma reduction extends to general layer pairs, giving the entire correlation matrix in cascade slicing ratios and partially closing Supplement Open Question 2.

What's added

Theorem 14.X+1 (Full correlation matrix in cascade slicing ratios), thm:gram-matrix:
$$C_{ij} = \frac{R(d_i+d_j+1)}{\sqrt{R(2d_i+1)\,R(2d_j+1)}}$$
for any cascade layer pair $(d_i, d_j)$. Equivalently, $\log C^2_{ij} = -\Delta^2_{(k)}\log R|_{m}$ with step $k = |d_i - d_j|$ and centered doubled argument $m = d_i + d_j + 1$. Theorem thm:gram-R is the special case $k=1$.
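The closed form and its second-difference restatement can be checked numerically. A minimal sketch, with one loud caveat: the slicing ratio $R$ is not defined in this excerpt, so the Gamma-ratio `R(m) = Γ(m/2)/Γ((m+1)/2)` below is an assumption chosen for concreteness; the algebraic identity between the two forms holds for any positive $R$ with $G_{ij} \propto R(d_i+d_j+1)$.

```python
# Sketch of Theorem 14.X+1's two equivalent forms. ASSUMPTION: the
# cascade slicing ratio R is taken as a Gamma ratio for illustration;
# the PR text does not define R.
import math

def R(m):
    # assumed slicing ratio: Gamma(m/2) / Gamma((m+1)/2), via lgamma
    return math.exp(math.lgamma(m / 2) - math.lgamma((m + 1) / 2))

def C(di, dj):
    # closed form: C_ij = R(d_i+d_j+1) / sqrt(R(2d_i+1) R(2d_j+1))
    return R(di + dj + 1) / math.sqrt(R(2 * di + 1) * R(2 * dj + 1))

def second_diff_log_R(m, k):
    # centered second difference of log R with step k, evaluated at m
    return math.log(R(m + k)) - 2 * math.log(R(m)) + math.log(R(m - k))

# log C^2_ij = -Delta^2_(k) log R |_m  with  k = |d_i-d_j|, m = d_i+d_j+1
for di, dj in [(5, 7), (5, 12), (10, 20), (14, 21)]:
    k, m = abs(di - dj), di + dj + 1
    assert abs(math.log(C(di, dj) ** 2) + second_diff_log_R(m, k)) < 1e-12
```

Note that $2d_i+1 = m-k$ and $2d_j+1 = m+k$ (for $d_j \ge d_i$), which is why the two forms agree term by term.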

Corollary (Eigenvalue deficit in cascade primitives), cor:eigenvalue-cascade:
The eigenvalue deficit $\epsilon(n, d_0)$ of Theorem 14.3 is the dominant-eigenvalue deficit of the explicit cascade-native $n \times n$ matrix. The first-order perturbation expression of Theorem 14.5 becomes a sum of cascade $R$-ratios.
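A V5-style agreement check can be sketched as follows. Both the assumed ratio `R(m) = Γ(m/2)/Γ((m+1)/2)` and the deficit normalisation `eps = 1 - lam1/n` are illustrative guesses, not taken from the PR; the point is only that the cascade-native matrix and the directly Beta-built matrix yield the same dominant eigenvalue.

```python
# Sketch: dominant eigenvalue of the cascade-native correlation matrix
# vs. the matrix built from direct Beta-function values.
# ASSUMPTIONS: R(m) = Gamma(m/2)/Gamma((m+1)/2) and eps = 1 - lam1/n
# are illustrative; neither is defined in the PR text.
import math
import numpy as np

def R(m):
    return math.exp(math.lgamma(m / 2) - math.lgamma((m + 1) / 2))

def cascade_matrix(d0, n):
    d = range(d0, d0 + n)
    return np.array([[R(di + dj + 1) / math.sqrt(R(2*di+1) * R(2*dj+1))
                      for dj in d] for di in d])

def beta_matrix(d0, n):
    # direct Beta values B((d_i+d_j+1)/2, 1/2), then correlation-normalised
    def B(x, y):
        return math.exp(math.lgamma(x) + math.lgamma(y) - math.lgamma(x + y))
    d = range(d0, d0 + n)
    G = np.array([[B((di + dj + 1) / 2, 0.5) for dj in d] for di in d])
    s = np.sqrt(np.diag(G))
    return G / np.outer(s, s)

d0, n = 5, 8  # canonical path d = 5..12
lam1_cascade = np.linalg.eigvalsh(cascade_matrix(d0, n))[-1]
lam1_beta = np.linalg.eigvalsh(beta_matrix(d0, n))[-1]
assert abs(lam1_cascade - lam1_beta) / lam1_beta < 1e-12
eps = 1 - lam1_cascade / n  # assumed deficit normalisation
```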

Remark (Structural sign), rem:gram-sign-from-convexity:
The positivity of the Gram correction (cited as Cauchy–Schwarz in Part IVb) reduces to convexity of $\log\alpha$ on the cascade tower. Asymptotically $\alpha \sim 1/(2d)$, so $\log\alpha \sim -\log(2d)$ is convex and $\Delta^2 \log\alpha > 0$ structurally.
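The sign argument is easy to probe numerically. The sketch below uses only the asymptotic form $\alpha(d) \sim 1/(2d)$ stated in the remark ($\alpha$ itself is not defined in this excerpt); for that form the second difference is $-\log(1 - 1/d^2) > 0$ exactly.

```python
# Numeric check of the remark's sign rule: for log alpha ~ -log(2d)
# (the stated asymptotic form; alpha itself is not defined here),
# the centered second difference of log alpha is strictly positive.
import math

def log_alpha(d):
    return -math.log(2 * d)  # asymptotic form from the remark

for d in range(2, 200):
    d2 = log_alpha(d + 1) - 2 * log_alpha(d) + log_alpha(d - 1)
    # Delta^2 log alpha = -log(1 - 1/d^2) > 0  =>  positive Gram correction
    assert d2 > 0
    assert abs(d2 + math.log(1 - 1 / d**2)) < 1e-12
```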

Why this is the "next forced implication"

The original Theorem 14.X (Gram correlation in cascade slicing ratios) used the Beta–Gamma reduction $G_{d,d+1} = \sqrt{\pi}\,R(2d+2)$ for adjacent layers. The same reduction $G_{ij} = \sqrt{\pi}\,R(d_i+d_j+1)$ holds for any pair $(d_i, d_j)$ — this is just the Beta function's argument structure, not a new mathematical fact. So the closed form generalises automatically.
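The "same reduction for any pair" claim can be illustrated directly. Caveat: the PR does not define $R$, so the Gamma-ratio form below is an assumption; under it, $\sqrt{\pi}\,R(d_i+d_j+1)$ coincides with the Beta value $B((d_i+d_j+1)/2, 1/2)$ for every pair, adjacent or not, by $B(x, 1/2) = \Gamma(x)\Gamma(1/2)/\Gamma(x+1/2)$.

```python
# Sketch of the general Beta-Gamma reduction. ASSUMPTION: we take
# R(m) = Gamma(m/2)/Gamma((m+1)/2), which is not stated in the PR.
import math

def R(m):
    return math.exp(math.lgamma(m / 2) - math.lgamma((m + 1) / 2))

def B(x, y):
    # Beta function via log-Gamma: B(x,y) = Gamma(x)Gamma(y)/Gamma(x+y)
    return math.exp(math.lgamma(x) + math.lgamma(y) - math.lgamma(x + y))

# adjacent and non-adjacent pairs reduce identically
for di, dj in [(5, 6), (5, 7), (10, 20), (14, 21)]:
    G = math.sqrt(math.pi) * R(di + dj + 1)
    assert math.isclose(G, B((di + dj + 1) / 2, 0.5), rel_tol=1e-12)
```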

What's gained:

  • Full correlation matrix in cascade primitives (every entry an $R$-ratio).
  • Eigenvalue deficit has a manifestly cascade-internal expression.
  • OQ2 partially closes: the input matrix is no longer "a matrix of Beta function values" but "a matrix of cascade slicing ratios at doubled arguments."
  • Sign rule derives from convexity of $\log\alpha$ rather than from $L^2$ Cauchy–Schwarz on integrands — a cleaner structural foundation.

Verification

tools/verifiers/gram_compliance_laplacian.py extended with:

  • V4: full correlation matrix at non-adjacent pairs $(5,7), (5,12), (10,20), (14,21), (5,217)$. All match direct Beta-function computation to machine precision.
  • V5: eigenvalue deficit $\epsilon$ from the cascade-native matrix matches direct Beta-function eigendecomposition for canonical paths $d=5..12, 6..13, 14..21$ at relative error $< 10^{-13}$.

All five verifications pass.

What remains open

OQ2 is partially closed, not fully:

  • Closed: the matrix is in cascade primitives (not Beta values).
  • Open: a closed-form expression for $\lambda_1$ in terms of $n$ and $d_0$ that eliminates numerical eigendecomposition.

Any closed-form $\lambda_1$ would derive from properties of $R$ at consecutive doubled arguments — a tractable problem in cascade primitives, no longer dependent on Beta function asymptotics.

Test plan

  • LaTeX builds clean (no overfull hbox, no undefined references).
  • Verifier passes all five checks (python tools/verifiers/gram_compliance_laplacian.py).
  • Confirm the new Theorem and Corollary read coherently after Theorem thm:gram-R and Corollary cor:gram-laplacian.
  • Confirm "What this section proves" item 2 reflects the full-matrix closed form.
  • Confirm OQ2's "partially closed" framing is honest (the matrix is cascade-native; the eigenvalue closed form remains open).

Generated by Claude Code

The Gram-Laplacian unification (PR #90) extends naturally to all layer
pairs, not just adjacent ones, by the same Beta-Gamma reduction. This
PR lands the generalisation as Theorem 14.X+1 plus a corollary that
partially closes Supplement OQ2.

Three additions:

1. Theorem (Full correlation matrix in cascade slicing ratios)
   thm:gram-matrix:
       C_{ij} = R(d_i + d_j + 1) / sqrt(R(2 d_i + 1) R(2 d_j + 1))

   for any cascade layer pair (d_i, d_j). Equivalently:

       log C^2_{ij} = -Delta^2_{(k)} log R |_{m}

   with k = |d_i - d_j| (step) and m = d_i + d_j + 1 (centered
   doubled argument). Theorem thm:gram-R is the special case k=1.

   Proof: Beta-Gamma reduction G_{ij} = sqrt(pi) R(d_i + d_j + 1)
   for general (i,j); rest is direct substitution.

2. Corollary (Eigenvalue deficit in cascade primitives)
   cor:eigenvalue-cascade:
   The eigenvalue deficit epsilon(n, d_0) of Theorem 14.3 is the
   dominant-eigenvalue deficit of the explicit n x n cascade-native
   matrix above. The first-order perturbation expression
   (Theorem 14.5) becomes a sum of cascade R-ratios with no Beta
   function evaluations beyond what cascade primitives supply.

   This partially closes Supplement Open Question 2 (analytic formula
   for epsilon): the input matrix is now manifestly cascade-internal,
   though the eigenvalue itself still requires numerical computation.
   What remains: a closed-form lambda_1 expression eliminating the
   numerical eigendecomposition.

3. Remark (Structural sign of the Gram correction)
   rem:gram-sign-from-convexity:
   The positivity of the Gram correction (cited as 'Cauchy-Schwarz
   on integrands' in Paper IVb) reduces to convexity of log alpha
   on the cascade tower. Since alpha(d) ~ 1/(2d) asymptotically,
   log alpha ~ -log(2d) is convex, so Delta^2 log alpha > 0 and
   the per-step Gram deficit is positive structurally.

   This re-derives the sign rule from cascade-action curvature
   rather than from L^2 inner-product positivity, providing a more
   direct structural foundation.

Updated:
- "What this section proves" item 2: includes the full-matrix
  closed form and the eigenvalue cascade-native form.
- Open Question 2: marked "partially closed", with the cascade-native
  matrix structure made explicit.
- Numerical verification remark: now references the new theorem too.

Verifier extended (tools/verifiers/gram_compliance_laplacian.py):
- Verification 4: full correlation matrix closed form at non-adjacent
  pairs (5,7), (5,12), (10,20), (14,21), (5,217). All match to
  machine precision.
- Verification 5: eigenvalue deficit epsilon from cascade-native
  matrix matches direct Beta-function eigendecomposition for paths
  d=5..12, d=6..13, d=14..21. Agreement at 1e-13 relative or better.

All five verifications pass:
- V1 closed form (adjacent): rel diff < 1e-12
- V2 Laplacian identity (adjacent): rel diff < 1e-7
- V3 path-sum agreement (linearisation): rel diff < 1e-2
- V4 full correlation matrix (non-adjacent): rel diff < 1e-12
- V5 epsilon agreement (cascade-native vs direct): rel diff < 1e-13

This is the next forced implication of the Gram unification: the
Beta-Gamma reduction extends to general layer pairs, the entire
correlation matrix becomes cascade-native, and OQ2 partially closes.
