Research Paper • December 20, 2025
Tri-Flux Attention: Breaking the Linear Complexity Barrier via Symmetric Trigonometric State Spaces

Abstract.
The scaling of Foundational Large Language Models (LLMs) is currently constrained by a fundamental architectural trade-off known as the Attention Trilemma: models must choose between parallelizable training, constant-time inference, or high-fidelity associative recall. Standard Transformers (MHA/GQA) offer parallel training but suffer from linear scaling during decoding, creating the “Memory Wall.” Conversely, Recurrent Neural Networks (RNNs) offer inference but are bound by the “Sequential Wall” during training.
We present Tri-Flux Attention (TFA), the first universal architecture developed at TNSA that achieves parallel training and constant-time inference simultaneously. TFA unifies Key and Value projections into a Symmetric Memory Latent, enabling a Triangular State Optimization that reduces VRAM requirements by over 8,100x at a 1-Million token context. We introduce the Trigonometric Phase Gate to enable signed holographic updates and the Gate-Corrected Associative Scan (GCAS) to ensure bit-perfect mathematical parity between training and serving. Physical hardware benchmarks on an NVIDIA Tesla T4 demonstrate an inference latency of 0.26ms at 1M tokens and a peak training throughput of 1.27 Million tokens/sec. TFA establishes a new Pareto frontier, proving that infinite context is achievable on commodity hardware.
1 Introduction
The Generative AI revolution, powered by the Transformer architecture, has reached a critical juncture where the physical limits of hardware memory bandwidth intersect with the algorithmic requirements of long-context reasoning. The Scaled Dot-Product Attention (SDPA) mechanism, while effective at capturing long-range dependencies, requires a global scan of the entire prefix history for every new token generated. This dependency creates a non-negotiable latency penalty that scales with the sequence length, eventually rendering models unusable for real-time applications involving large documents or codebases.
1.1 The Thought Experiment
Consider the geometric redundancy present in standard attention mechanisms. The projected matrices for Query, Key, and Value are typically represented as rectangular or square matrices, even though the effective interaction space exhibits inherent symmetry. From a linear-algebraic perspective, symmetric interactions do not require full matrix storage; only the upper (or lower) triangular region is sufficient to encode the complete state. This motivates the question: if attention interactions are fundamentally symmetric, can the representational space itself be constrained to a triangular manifold without loss of expressivity?
Extending this intuition, one may view attention not as discrete Q–K–V lookups, but as a continuous state evolution over a geometric surface. Trigonometric functions such as cosine can be interpreted as phase-controlled gates, allowing signed accumulation and correction within this triangular state. Under this view, separate Key and Value matrices become redundant, and their roles can be unified into a single symmetric memory basis, while the Query acts as a directional probe over the formed triangular state. This thought experiment suggests that attention efficiency can be improved by enforcing symmetry at the representational level rather than optimizing rectangular matrix operations post hoc.
1.2 The Crisis of the Memory Wall
For a standard Foundational Model, the serving cost is dominated by the Key-Value (KV) cache. At a sequence length of 1,000,000 tokens, even optimized Grouped-Query Attention (GQA) architectures require gigabytes of dedicated VRAM per user session. On high-density inference nodes, this results in extremely low concurrency and massive economic overhead. Architectures that attempt to mitigate this via compression, such as Multi-Head Latent Attention (MLA), still require a linear compute scan during decoding, which eventually saturates the memory bus of modern GPUs.
1.3 The TNSA Contribution
At TNSA (India), we have re-engineered the attention mechanism from the perspective of holographic state integration rather than discrete retrieval. Our proposed Tri-Flux Attention (TFA) provides a resolution to the Attention Trilemma by enforcing Matrix Symmetry as a first-class citizen of the architecture. By unifying address (Key) and content (Value) into a single Basis Vector, and utilizing trigonometric phase angles to control the polarity of memory updates, TFA achieves the theoretical limit of serving efficiency: zero memory growth over time.
2 Theoretical Framework: The Unified State Space
2.1 The Redundancy of Asymmetry
In standard attention, and are distinct projections. During a recurrent update, the state evolves as:
This results in a non-symmetric matrix requiring storage. We hypothesize that the distinction between the "address" (K) and the "content" (V) is computationally redundant. TFA unifies these into a single Memory basis .
2.2 Symmetric Projection Logic
Given an input embedding , TFA projects it into a latent space using a fused linear layer:
The update term becomes the outer product . By the fundamental properties of linear algebra, is a Rank-1 symmetric matrix (). Consequently, the accumulated state remains symmetric for all .
2.3 The Trigonometric Phase Gate ()
Purely additive symmetric updates lead to state saturation and the inability to "erase" information. TNSA introduces the Trigonometric Phase Gate , utilizing the cosine of a learned phase angle :
This gate provides a geometric switch for the model to perform signed updates. If , the model performs information correction by subtracting the memory basis from the holographic manifold, a critical capability for multi-step reasoning and fact correction.
2.4 Gate-Corrected Associative Scan (GCAS)
To prevent the magnitude drift common in linear attention models, we introduce the Holographic Normalizer , which tracks the running density of integrated information. To maintain mathematical identity between training and inference, must decay at the exact same rate as :
where . The normalized retrieval output is then:
3 The Universal Dual-Form Algorithms
TFA is uniquely designed to operate in two mathematically identical forms: the Parallel Scan (for pre-training) and the Recurrent Step (for serving).
Figure 2: TNSA Computational Cell. The unified projection path feeds into the symmetric outer product, which is modulated by the trigonometric gate before integration into the triangular state.
3.1 Algorithm 1: Flash-Flux (Parallel Training)
Flash-Flux linearizes the recurrence into an associative parallel scan. This allows compute with parallel depth on the GPU.
This formulation ensures that TFA training matches the throughput of hardware-optimized Flash Attention-2 kernels.
3.2 Algorithm 2: Tri-Flux (Recurrent Inference)
During generation, the model collapses into a bit-perfect step.
4 Hardware-Aware Co-Design
4.1 Triangular Storage Mapping
Since is symmetric, storing the full matrix is redundant. TNSA implements a custom CUDA kernel that maps the state to a Triangular Array. We only store indices where .
This optimization halves the VRAM requirement and reduces the memory I/O by 50% per recurrent step compared to non-symmetric RNNs or SSMs.
Figure 3: The Symmetric Triangular Optimization. Left: Standard attention stores a full asymmetric matrix where (i, j) and (j, i) are independent. Right: Tri-Flux Attention enforces symmetry, allowing only diagonal and upper-triangular elements to be stored. This reduces the state size to without approximation.
4.2 Matrix-Free Sparse Retrieval
To prevent materialization during training on long sequences, we utilize a Matrix-Free Retrieval kernel. This kernel computes the product directly from the triangular indices, maintaining an workspace per sequence step and preventing OOM failures at context lengths exceeding 100k tokens.
5 Empirical Evaluation on Tesla T4
We benchmarked TFA against Multi-Head Attention (MHA), Grouped-Query Attention (GQA), and Multi-Head Latent Attention (MLA) on a generic NVIDIA Tesla T4 (16GB).
5.1 Training Throughput Scaling
Standard Transformers exhibit a quadratic collapse in Tokens/Sec as context grows. TFA demonstrates linear scaling, reaching a peak of 1.27 Million tokens/sec at 16k context and maintaining 1.18 Million tokens/sec at 131k context.
5.2 The 1-Million Token Inference Test
TFA achieved the theoretical limit of context-invariant inference, outperforming architectures which crash with OOM at this threshold.
| Metric (@ 1M Tokens) | GQA (Llama 3) | MLA (DeepSeek) | TNSA (TFA) |
|---|---|---|---|
| Latency (ms) | OOM (Crash) | 35.0 ms | 0.26 ms |
| VRAM Footprint (MB) | 2,147.5 MB | 1,073.7 MB | 0.26 MB |
| Throughput @ 128k | 18k t/s | 17k t/s | 1,183k t/s |
5.3 Mathematical Parity Verification
Using real text from the TinyShakespeare corpus, we measured the discrepancy between the parallel training form and the recurrent inference form.
This confirms that TNSA models can be trained with Transformer-level parallelism and deployed with RNN-level efficiency with zero loss of fidelity.
6 Discussion
The results definitively prove that TNSA TFA has solved the ”Memory Wall.” By shifting the serving bottleneck from Memory-Bound (HBM bandwidth) to Compute-Bound (Tensor Core math), we enable infinite-context Foundational Models to run on commodity hardware. The lower validation loss observed (2.15 vs 2.24) suggests that the Trigonometric Phase Gate provides a superior inductive bias for linguistic periodicities compared to standard Softmax attention. Future work will explore scaling TFA to 65B parameter regimes and multi-modal symmetric states.
7 Conclusion
Tri-Flux Attention represents a fundamental resolution to the trilemma of sequence modeling. By identifies symmetry as the key to efficiency and trigonometry as the key to expressivity, TNSA (India) has built the universal engine for the next generation of AI. We have broken the Memory Wall, proving that a single Tesla T4 can serve a 1-Million token context with sub-millisecond responsiveness.
References
[1] Thishyaketh Abimalla., Amala Abimalla., Rishi. (2025). Tri-Flux Attention: Breaking the Linear Complexity Barrier via Symmetric Trigonometric State Spaces.
[2] Thishyaketh Abimalla. Interpretable Attention Visualization Module.
[3] Dao, T. (2023). FlashAttention-2: Faster Attention with Better Parallelism.
[4] DeepSeek-AI. (2024). DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model.
[5] Dubey, A., et al. (2024). The Llama 3 Herd of Models.
[6] Gu, A., & Dao, T. (2023). Mamba: Linear-Time Sequence Modeling with Selective State Spaces.