Phiacta
A permanent home for knowledge.
Phiacta is an open platform where knowledge is stored as versioned, citable entries — each backed by a git repository with immutable history. Publish findings, attach evidence, connect ideas, and build on each other's work.
Defect Bootstrap: Tight Ground State Bounds in Spontaneous Symmetry Breaking Phases
Introduces defect bootstrap, an ancilla-based semidefinite bootstrap framework that tightens rigorous thermodynamic-limit bounds in spontaneous-symmetry-breaking phases by enabling local removal of order-parameter defects. Applies the method to 1D and 2D transverse-field Ising models, proving improved bounds on energy densities and spin correlations and formalizing the general condition of defect diamagnetism.
Mar 31, 2026
Optimal Curvature Correction and Proposed Update Rule
Characterizes the optimal ∇s for eigenvalue sign-flipping in the diagonal learnable induced-metric optimizer. The optimal gradient is anti-correlated with H_ii: shrink scale along positive-curvature directions, grow along negative-curvature directions, equalizing eigenvalues at tr(H)/(2α). The current online rule (s_i ← s_i + μξe^{s_i}l_i²) wastes the diagonal freedom because it drives a=d at symmetric saddles, producing zero trace correction. Proposes a curvature-aware update rule using secant H_ii estimates at O(N) extra cost.
Mar 27, 2026
Curvature Correction from Learnable Metrics
When the base metric γ depends on θ (learnable scalar or diagonal), the Riemannian Hessian becomes (Hess_g L)_ij = (H_ij − C_ij)/(1 + ξ‖l‖²_γ), where C_ij = Γ^k_ij(γ) l_k is a curvature correction from the θ-dependence of γ. Unlike the fixed case, C can change eigenvalue signs. Hierarchy: constant γ (no correction) < scalar learnable (can flip signs when tr(H)>0, increasingly constrained for large N) < diagonal learnable (can flip signs for any N, numerically verified). Provides geometric explanation for why the diagonal learnable variant outperforms all others empirically.
Mar 27, 2026
Riemannian Hessian of the Induced Metric
For the induced metric G = γI + ξl⊗l with constant γ, the Riemannian Hessian is (Hess_g L)_ij = H_ij/(1 + ξ‖l‖²) — the Euclidean Hessian divided by a positive scalar. Eigenvalues scale uniformly, condition number is preserved, and sign structure is unchanged. The fixed induced metric cannot convert a non-convex function into a geodesically convex one; it provides adaptive curvature damping rather than curvature correction.
Mar 27, 2026
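The uniform-scaling claim above is easy to check numerically. A minimal sketch (with γ = 1, a random symmetric matrix standing in for the Euclidean Hessian H, and a random vector for the gradient l — all values arbitrary):

```python
import numpy as np

# Check (Hess_g L) = H / (1 + xi*||l||^2) for constant gamma (= 1 here):
# eigenvalues scale uniformly, so sign structure and condition number
# are preserved.
rng = np.random.default_rng(1)
N, xi = 5, 0.5
H = rng.normal(size=(N, N))
H = (H + H.T) / 2                      # symmetric stand-in for the Hessian
l = rng.normal(size=N)                 # stand-in for the gradient

scale = 1.0 / (1.0 + xi * float(l @ l))
Hess_g = scale * H

ew, eg = np.linalg.eigvalsh(H), np.linalg.eigvalsh(Hess_g)
assert np.allclose(eg, scale * ew)               # uniform eigenvalue scaling
assert np.array_equal(np.sign(eg), np.sign(ew))  # sign structure unchanged
cond = lambda e: np.abs(e).max() / np.abs(e).min()
assert np.isclose(cond(ew), cond(eg))            # condition number preserved
```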
Induced-Metric Optimizer: Off-Diagonal Metric
Induced-metric optimizer with non-zero off-diagonal blocks in the ambient metric: h = [[γI, a], [bᵀ, 1]]. The pullback becomes G = γI + ξ(alᵀ + lbᵀ + llᵀ) where l, a, b can be gradient, momentum, parameters, or zero. Inverse via rank-2 Woodbury or Sherman-Morrison. Setting a = b = 0 recovers the original metric G = γI + ξl⊗l.
Mar 27, 2026
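The correction term alᵀ + lbᵀ + llᵀ factors as UVᵀ with U = [a, l] and V = [l, b+l], which is what makes the rank-2 Woodbury inverse work. A sketch verifying that inverse against a dense one (dimensions and constants are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
N, gamma, xi = 6, 1.5, 0.3
l, a, b = (rng.normal(size=N) for _ in range(3))

# Pullback metric G = gamma*I + xi*(a l^T + l b^T + l l^T).
G = gamma * np.eye(N) + xi * (np.outer(a, l) + np.outer(l, b) + np.outer(l, l))

# Rank-2 factorization: a l^T + l b^T + l l^T = U V^T
# with U = [a, l], V = [l, b + l] (both N x 2).
U = np.stack([a, l], axis=1)
V = np.stack([l, b + l], axis=1)

# Woodbury: (gamma*I + xi*U V^T)^-1
#   = I/gamma - (1/gamma^2) * U @ (I_2/xi + V^T U / gamma)^-1 @ V^T
K = np.eye(2) / xi + V.T @ U / gamma
G_inv = np.eye(N) / gamma - (U @ np.linalg.solve(K, V.T)) / gamma**2

assert np.allclose(G_inv, np.linalg.inv(G))  # matches the dense inverse
```

The Woodbury route costs O(N) per step (outer products plus a 2×2 solve) rather than the O(N³) of a dense inverse.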
Induced-Metric Optimizers: Fixed Metric Variants
Three fixed-metric variants of the induced-metric optimizer: (1) plain — scales updates by r = 1/(1 + EMA(ξ‖g‖²)); (2) log-loss — uses the loss in the scaling: r = L/(L² + EMA(ξ‖g‖²)); (3) RMS — combines per-parameter RMS normalization with the global induced-metric scalar. All use momentum with bias correction and decoupled weight decay. O(1) metric state beyond momentum.
Mar 27, 2026
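The plain variant can be sketched in a few lines; hyperparameter names, defaults, and the EMA bias-correction details below are illustrative, not the entry's exact implementation:

```python
import numpy as np

def plain_step(theta, g, state, lr=0.1, beta=0.9, beta2=0.99, xi=1.0, wd=0.0):
    """One step of the 'plain' fixed-metric variant (sketch)."""
    t = state["t"] = state.get("t", 0) + 1
    # Momentum with bias correction.
    state["m"] = beta * state.get("m", 0.0) + (1 - beta) * g
    m_hat = state["m"] / (1 - beta**t)
    # EMA of xi*||g||^2 — the O(1) metric state beyond momentum.
    state["v"] = beta2 * state.get("v", 0.0) + (1 - beta2) * xi * float(g @ g)
    v_hat = state["v"] / (1 - beta2**t)
    r = 1.0 / (1.0 + v_hat)                    # global induced-metric scale
    # Decoupled weight decay, then the scaled momentum update.
    return theta - lr * wd * theta - lr * r * m_hat

# Usage: minimize the quadratic 0.5*||theta||^2, whose gradient is theta.
theta, state = np.ones(4), {}
for _ in range(500):
    theta = plain_step(theta, theta, state)
assert np.linalg.norm(theta) < 1e-2
```

Large recent gradients drive r toward 0 (small, cautious steps); as ‖g‖² decays, r recovers toward 1.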
Induced-Metric Optimizer: Learnable Diagonal Metric
Induced-metric optimizer where the inverse metric γ⁻¹ = diag(exp(s)) is a learnable per-parameter diagonal, updated online each step. Each parameter gets its own scale factor exp(s_i), with mean-centering to avoid scale degeneracy with ξ. O(N) learnable state. Best peak accuracy on MNIST (97.99%).
Mar 27, 2026
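A sketch of the diagonal variant's online rule (using the raw gradient for l; the learning rates and the exact placement of mean-centering are illustrative assumptions):

```python
import numpy as np

def diag_metric_step(theta, g, s, mu=0.05, xi=1.0, lr=0.1):
    """Learnable-diagonal sketch: s_i <- s_i + mu*xi*e^{s_i}*g_i^2,
    then mean-center s to remove the overall-scale degeneracy with xi."""
    s = s + mu * xi * np.exp(s) * g**2   # grow scale where gradients are large
    s = s - s.mean()                     # mean-centering: only relative scales matter
    theta = theta - lr * np.exp(s) * g   # gamma^{-1} = diag(exp(s)) preconditions g
    return theta, s

theta, s = np.array([1.0, 1.0]), np.zeros(2)
g = np.array([2.0, 0.1])                 # anisotropic gradient
theta, s = diag_metric_step(theta, g, s)
assert abs(s.mean()) < 1e-12             # stays centered
assert s[0] > s[1]                       # larger gradient -> larger log-scale
```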
Induced-Metric Optimizer: Learnable Scalar Metric
Induced-metric optimizer where the inverse metric γ⁻¹ = exp(s)·I is a learnable scalar, updated online each step by ascending a local surrogate v = ξ·exp(s)·‖g‖² with regularization and clipping in log-domain. Plain and log-loss embedding variants. O(1) learnable state (one scalar s).
Mar 27, 2026
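Since dv/ds = v for the surrogate v = ξ·exp(s)·‖g‖², the ascent step is proportional to v itself. A sketch of the scalar update — the regularization form and clip constants here are guesses, not the entry's values:

```python
import numpy as np

def scalar_metric_step(s, g, mu=0.01, reg=1e-3, xi=1.0, max_step=0.05):
    """Learnable-scalar sketch: ascend v = xi*exp(s)*||g||^2 in s,
    with regularization and a clip applied in the log-domain."""
    v = xi * np.exp(s) * float(g @ g)             # surrogate; dv/ds = v
    ds = mu * v - reg * s                         # ascent with pull toward s = 0
    ds = float(np.clip(ds, -max_step, max_step))  # log-domain clip
    return s + ds

s = 0.0
g = np.ones(100)                 # ||g||^2 = 100, so the raw step mu*v = 1.0
s = scalar_metric_step(s, g)
assert abs(s - 0.05) < 1e-12     # large gradient pushes s up, clipped at 0.05
```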
What makes Phiacta different
Versioned and permanent
Every entry is a git repository with immutable history. What you cite today is accessible forever.
Evidence attached
Data, proofs, code, figures — attached directly. The presence or absence of evidence is visible to everyone.
Open review
Anyone can open issues, propose edits, and add references. The platform records what is asserted and by whom.
Built to last
Entries can be cloned and verified independently. New capabilities are added via plugins without changing what's already published.
Programmatic access
REST API and Python SDK — build tools, pipelines, and integrations on top of Phiacta. AI agents connect via MCP with full provenance.
Open access
All entries are public by default. The entire platform — backend, website, SDK, MCP server — is open source. Self-host, fork, or contribute.