
Note: For deeper background, I have written an article on the concept behind this repository.

Renamed: This repository was previously ZPRE-Implementation-6G. The code and research are unchanged — the name now reflects the core finding rather than the internal project lineage.

Unlearnable Interference

Adaptive Convergence Boundary Under Adversarial Signal Structures

Adaptive interference cancellation assumes that interference is learnable. This repository maps the boundary where that assumption breaks — and shows that the failure is structural, not a matter of model capacity.

Status: v1.1.0 — Theory layer added (Eigen-Drift Law). All v1.0.0 code and results unchanged.


This repository preserves the complete experimental snapshot accompanying the research. The files represent the full implementation as it existed at the conclusion of the study.


Core Insight

Structure without sufficient lifetime is functionally equivalent to noise for a learning system.

There exists a class of signals whose structure prevents convergence of adaptive filters — including nonlinear filters theoretically capable of modeling the underlying dynamics.

The failure is not due to insufficient model capacity. It is due to the rate and structure of discontinuities in the signal.

Boundary Condition

Adaptive systems fail when the rate of structural discontinuity exceeds their capacity to infer continuity.

This defines a practical limit of adaptive interference cancellation.

Adaptive Convergence Boundary


Theoretical Framework: The Eigen-Drift Law

The empirical boundary has a structural explanation rooted in the mathematics of eigenvalues and invariant representations.

The central empirical finding:

Empirically, learning succeeds only when τ_structure ≥ τ_alignment

Where:

  • τ_structure — the lifetime of the signal's dominant invariant structure (measured via sliding-window SVD on the anchor signal)
  • τ_alignment — the minimum temporal horizon required for the learner's gradient updates to establish a consistent descent direction (measured via ∆W cosine similarity across consecutive updates)
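The two quantities above can be sketched in a few lines of NumPy. This is a minimal illustration of the measurement idea, not the repository's `EigenDriftTracker` implementation; the function names, `window`/`hop`/`order` values, and the 0.5 overlap threshold are illustrative choices:

```python
import numpy as np

def dominant_direction(segment, order=8):
    """Dominant right singular vector of the window's trajectory (Hankel) matrix."""
    H = np.lib.stride_tricks.sliding_window_view(segment, order)
    _, _, Vt = np.linalg.svd(H, full_matrices=False)
    return Vt[0]

def tau_structure(x, window=100, hop=10, threshold=0.5):
    """Samples until the dominant structure of the first window decorrelates.

    Overlap is |<v0, v_t>| (abs, since singular vectors have a sign
    ambiguity); τ is the first offset where it falls below `threshold`
    — the "@0.5" convention in the results table.
    """
    v0 = dominant_direction(x[:window])
    for start in range(hop, len(x) - window, hop):
        if abs(v0 @ dominant_direction(x[start:start + window])) < threshold:
            return start
    return len(x) - window  # structure persisted for the whole record

def tau_alignment(dW_history, threshold=0.5):
    """Earliest update index after which consecutive weight-update
    directions (ΔW) stay cosine-aligned above `threshold`."""
    cos = [a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
           for a, b in zip(dW_history, dW_history[1:])]
    for k in range(len(cos)):
        if all(c > threshold for c in cos[k:]):
            return k
    return None  # the learner never establishes a consistent descent direction
```

With these, the boundary condition is just a comparison: learning is predicted to succeed only when `tau_structure(anchor) >= tau_alignment(dW_history)`.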

When the anchor's basis transitions (L3/L4) reset τ_structure faster than the learner can align, convergence collapses — not because the learner lacks capacity, but because no stable object persists long enough to converge to. The formal biconditional is a hypothesis; the empirical signal is consistent across all tested regimes.

Empirical validation: τ_structure drops monotonically with anchor speed (slow: ~411 samples → normal: ~53 → fast: ~18 at 16 kHz), and late SINR tracks it directly. Spectral gap stays > 3 across all regimes, confirming the system operates in the transient-invariant regime: structure exists locally but is destroyed before it can be exploited.

Structure without sufficient lifetime is functionally equivalent to noise for a learning system.

The connection to linear algebra: just as a rotation matrix has no real eigenvectors (its identity only becomes legible in ℂ), the anchor signal has no eigenbasis that remains stable long enough to be discovered — not because structure is absent, but because the basis that would expose it resets faster than any learner can find it.
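The rotation-matrix analogy is easy to verify directly: a plane rotation by θ has eigenvalues e^{±iθ}, which are complex (hence no real eigenvector) for any θ other than 0 or π:

```python
import numpy as np

theta = np.pi / 3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

eigvals = np.linalg.eigvals(R)
# Eigenvalues are e^{±iθ}: on the unit circle, with nonzero imaginary part,
# so the rotation's invariant directions only become legible in ℂ²
print(eigvals)
```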

For the full theoretical derivation, see docs/concept.md.
For the measurement apparatus (EigenDriftTracker design and corrections), see docs/methodology.md.
For the multi-AI development history, see docs/conversation_lineage.md.


Key Results

Structural coherence time (τ_structure) governs learnability, not model capacity:

| Anchor speed | τ_structure @0.5 | Spectral gap | Late SINR |
|--------------|------------------|--------------|-----------|
| slow         | ~411 samples     | > 3          | −0.32 dB  |
| normal       | ~53 samples      | > 3          | −0.71 dB  |
| fast         | ~18 samples      | > 3          | −1.30 dB  |

Spectral gap > 3 in all cases confirms structure exists throughout — the anchor operates in the transient-invariant regime, not structural annihilation. SINR degradation tracks τ_structure directly, not model size.

Linear filters (FxLMS): Chaotic phase modulation collapses convergence (~3.5 dB SINR loss). The system cannot track discontinuities at τ ≈ 50 samples.

Nonlinear filters: Volterra (2nd-order polynomial) partially absorbs chaotic modulation via quadratic match (~1.5 dB reduced effect). Kernel LMS overfits local chaos and fails to generalize. Online MLPs absorb simple chaos over time, but fail under hidden-state transitions and orthogonal policy jumps.

Scaling behavior: Increasing model capacity from 544 to ~49k parameters yields marginal gains (~0.1–0.3 dB). The bottleneck is not capacity.

Discontinuity dominance: Increasing discontinuity rate (slow → extreme) produces ~1.6 dB degradation and prevents late-stage convergence — regardless of adversary architecture.

Learning rate does not rescue convergence: Varying LR across 0.0001–0.001 with fixed anchor produced no meaningful change in τ_alignment. Alignment time appears primarily constrained by architectural factors (memory length, depth), not step size.


Repository Structure

unlearnable-interference/
├── README.md
├── requirements.txt
├── LICENSE
│
│  # Core modules
├── core/
│   ├── FxLMS_UDP_Prototype.py       # FxLMS engine with safety controls (leakage, clipping, normalization)
│   ├── ChaoticAnchor.py             # Multi-layer anchor generator (L1–L4)
│   ├── NonlinearAdversary.py        # Volterra, Kernel LMS, Online MLP adversaries
│   └── EigenDriftTracker.py         # τ measurement apparatus (v1.1.0)
│
│  # Experiments
├── experiments/
│   ├── quick_demo.py                # 2-minute entry point — τ vs SINR in three lines
│   ├── AnchorBenchmark.py           # Anchor vs linear adversary (FxLMS)
│   ├── NonlinearBenchmark.py        # Cross-matrix: anchor layers × adversary types
│   ├── BoundaryProbe.py             # Scaled MLPs vs discontinuity rates — maps the boundary
│   ├── ZPRE_Benchmarking.py         # Config sweeps, CSV logging, visualization
│   ├── 6G_ISAC_Integration.py       # ISAC-style harness (KPIs, sensing, beamforming stubs)
│   └── eigen_drift_plotter.py       # Publication-ready figures
│
│  # Theory and documentation
└── docs/
    ├── concept.md                   # Eigen-Drift Law: theoretical derivation from eigenvalues to τ
    ├── methodology.md               # EigenDriftTracker design, measurement decisions, known limits
    └── conversation_lineage.md      # Development notes and correction history

Quick Start

pip install -r requirements.txt

# Start here — see τ drop and SINR track it in under a minute
python experiments/quick_demo.py

# Core system
python core/FxLMS_UDP_Prototype.py         # Adaptive cancellation baseline
python experiments/6G_ISAC_Integration.py  # ISAC integration demo
python experiments/ZPRE_Benchmarking.py    # Config sweep + plots

# Boundary experiments
python experiments/AnchorBenchmark.py      # Anchor vs linear filter
python experiments/NonlinearBenchmark.py   # Anchor vs nonlinear adversaries
python experiments/BoundaryProbe.py        # Full boundary mapping

# Figures
python experiments/eigen_drift_plotter.py  # Generates results/figures/

How It Works

The Adaptive Canceller

FxLMS_UDP_Prototype.py implements Filtered-x LMS with three operational modes (efficiency / balanced / enhance), NLMS normalization, leakage-based weight decay, and step clipping. An auto-escalation heuristic monitors residual variance and switches to aggressive mode when it detects convergence degradation.
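The update rule combining those three safety controls can be sketched as follows. This is a toy illustration, not the repository's `FxLMS_UDP_Prototype.py` API: `fxlms_step` and its defaults are hypothetical, and the secondary path is taken as identity (a real FxLMS loop filters the reference through a secondary-path estimate):

```python
import numpy as np

def fxlms_step(w, x_filt, error, mu=0.05, leak=1e-4, clip=0.05, eps=1e-8):
    """One LMS update with NLMS normalization, step clipping, and leakage.

    w      -- current adaptive-filter weights
    x_filt -- reference filtered through the secondary-path estimate
              (most recent len(w) samples; identity path in this toy)
    """
    power = x_filt @ x_filt + eps          # NLMS normalization denominator
    step = mu * error * x_filt / power     # normalized gradient step
    step = np.clip(step, -clip, clip)      # step clipping bounds each update
    return (1.0 - leak) * w + step         # leakage decays weights toward zero

# Toy loop: cancel a sinusoidal disturbance through an identity secondary path
L = 16
w = np.zeros(L)
x = np.sin(2 * np.pi * 0.05 * np.arange(4000))
buf = np.zeros(L)
errs = []
for n in range(len(x)):
    buf = np.roll(buf, 1)
    buf[0] = x[n]                          # toy: buffer includes current sample
    e = x[n] - w @ buf                     # residual after cancellation
    w = fxlms_step(w, buf, e)
    errs.append(e)
errs = np.asarray(errs)
# Residual amplitude drops as the filter converges; leakage sets a small floor
```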

The Adversarial Anchor

ChaoticAnchor.py generates signals with four independently toggleable defense layers:

| Layer | Mechanism | What it disrupts | τ control parameter |
|-------|-----------|------------------|---------------------|
| L1 — Structural | Logistic map phase modulation | Linear correlation tracking | `tau_samples` |
| L2 — Dynamic | Feedback-controlled chaos (adaptive τ/θ) | Steady-state convergence | `tau_adapt_rate` |
| L3 — Hidden State | Private-key basis transitions | System identification | `key_transition_interval` |
| L4 — Epistemic | Orthogonal policy jumps (TRNG-style) | Statistical inference | `orthogonal_switch_prob` |

Each parameter directly controls τ_structure. The anchor doesn't hide its dynamics — it ensures they reset faster than any learner can align to them.
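An L1-style layer can be sketched in a few lines: a carrier whose phase jumps chaotically every `tau_samples` samples, so the signal is perfectly structured between jumps but its basis resets on a controllable schedule. This is an illustrative reconstruction, not the `ChaoticAnchor.py` implementation; the seed value, modulation depth, and carrier frequency are arbitrary:

```python
import numpy as np

def chaotic_phase_anchor(n_samples, tau_samples=50, r=3.99, f0=0.02, depth=np.pi):
    """Carrier with logistic-map phase jumps every `tau_samples` samples.

    `tau_samples` directly sets structural coherence time: between jumps
    the signal is a plain sinusoid; at each jump the phase resets chaotically.
    """
    z = 0.37                                       # logistic-map state in (0, 1)
    phase_offset = 0.0
    out = np.empty(n_samples)
    for n in range(n_samples):
        if n % tau_samples == 0:
            z = r * z * (1.0 - z)                  # logistic map iteration
            phase_offset = depth * (2.0 * z - 1.0) # map state -> phase jump
        out[n] = np.sin(2 * np.pi * f0 * n + phase_offset)
    return out

sig = chaotic_phase_anchor(2000, tau_samples=50)
```

Shrinking `tau_samples` is exactly the slow → normal → fast axis in the results table: the per-segment structure is unchanged, only its lifetime shrinks.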

The Nonlinear Adversaries

NonlinearAdversary.py provides three filters designed to test whether nonlinear capacity can overcome the anchor:

| Adversary | Capacity | Why it matters |
|-----------|----------|----------------|
| Volterra (2nd order) | Polynomial — exact match for logistic map | Can it learn the chaos generator directly? |
| Kernel LMS (RBF) | Universal approximator (infinite-dim) | Can kernel methods generalize across discontinuities? |
| Online MLP (backprop) | Universal approximator (neural) | Can gradient-based learning close the gap? |

The Boundary Experiments

BoundaryProbe.py is the decisive experiment. It varies two axes independently — adversary capacity (depth, width, memory) and discontinuity rate (slow → extreme) — to map where adaptive learning fails. The result: discontinuity rate dominates capacity scaling.


ISAC Integration

6G_ISAC_Integration.py provides a harness for evaluating the canceller in an Integrated Sensing and Communication (ISAC) context, with KPIs inspired by emerging 6G discussions (SINR gain, energy preservation, control latency, sensing accuracy, multi-node coherence). Beamforming and photonic acceleration interfaces are stubbed for future hardware integration.


Extension Points

Eigen-Drift measurement (new in v1.1.0): The EigenDriftTracker class (documented in docs/methodology.md) can be integrated into BoundaryProbe.py to directly measure τ_structure and τ_alignment alongside SINR. This turns the boundary from an observed outcome into a measurable, controllable parameter. Key design decisions: window=100 (not 500), track ∆W direction (not absolute weights), pure anchor signal only, multi-threshold τ at [0.9, 0.8, 0.7, 0.6, 0.5, 0.4].

Adversary scaling: LSTM/GRU adversaries (recurrent memory across discontinuities), transformer-based sequence prediction, ensemble methods.

Boundary refinement: Finer discontinuity sweeps (interval 150–300, probability 0.03–0.07), longer runs for late-convergence analysis, theoretical lower bounds on τ_alignment for given architectures.

Hardware integration: Replace BeamformerStub with THz/mmWave phase-array control, route canceller through photonic accelerator API, replace SensingModule with range-Doppler pipelines.


Related Work

For a complete catalog of related research: Research Index



License

Apache 2.0 — see LICENSE for details.