Independent AI Systems Researcher focused on advanced AI systems, alternative architectures, verification protocols, and the boundary conditions of machine intelligence.
I work at the edge where AI systems stop behaving like clean demos and start exposing their real structure: brittle abstractions, hidden shortcuts, fake generalization, benchmark leakage, weak representation bridges, and failure residue that most projects throw away.
My research does not begin from the assumption that larger Transformers solve intelligence. It begins from a harder question:
What kind of system can detect its own failure boundary, mutate the instruments used to observe that failure, and turn the residue into a new search direction?
I build CPU-runnable experimental systems around recursive learning loops, attention-free sequence models, evaluator evolution, operator-program synthesis, structural memory, topological-symbolic engines, and bounded closed-loop generalization.
I treat AI research like a demolition job. I design systems, push them far past their comfort zone, and wait to see what snaps first. When something breaks, I do not rename the bug as a "feature." I write it down and move on. My GitHub is not a victory parade; it is a crime scene archive of experiments that did not survive contact with reality.
What is novel about my approach is that I do not optimize for applause; I optimize for stress. I assume an idea is wrong until it proves otherwise under real constraints: limited compute and zero excuses. If a system only works when everyone is optimistic and polite, I assume it does not actually work.
My research is less product-oriented and more boundary-oriented. I am interested in systems that adapt, mutate, reorganize, and expose their own limits under pressure. When it is time to stop theorizing, stop pitching, and build the thing to see what collapses under load, that is usually when I show up—with a screwdriver and low expectations.
The project investigates how a learning system can:
- infer task context from support examples
- retrieve non-leaking episodic memory
- select and mutate adaptation operators
- synthesize bounded operator programs
- reject brittle candidates through validation-only evaluation
- compress failures into reusable rewrite rules
- record manifests and anti-cheat traces
- connect interaction residue to evaluator evolution
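The validation-only rejection and manifest/anti-cheat bullets above can be sketched in a few lines. This is a minimal illustration under my own assumptions, not code from the repository; `select_candidates`, the acceptance threshold, and the trace format are all hypothetical:

```python
import hashlib
import json

def evaluate(candidate, examples):
    # Fraction of (input, target) pairs the candidate gets right.
    return sum(candidate(x) == y for x, y in examples) / len(examples)

def select_candidates(candidates, train, val, threshold=0.8):
    """Validation-only gate: candidates may be fit or ranked on `train`,
    but the accept/reject decision uses only held-out `val`, so training
    fit cannot leak into the gate."""
    accepted, manifest = [], []
    for name, fn in candidates:
        entry = {
            "candidate": name,
            "train": evaluate(fn, train),  # recorded, never used to gate
            "val": evaluate(fn, val),      # the only gating signal
        }
        entry["accepted"] = entry["val"] >= threshold
        # Anti-cheat trace: hash the decision record so a run is auditable.
        entry["trace"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()[:12]
        manifest.append(entry)
        if entry["accepted"]:
            accepted.append((name, fn))
    return accepted, manifest
```

A brittle candidate that memorizes `train` but fails `val` is rejected, and the manifest keeps a hashed record of why.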
Repository: DeepNeural-AutoExploration
Automatic neural architecture search without attention mechanisms.
The project documents manual failures, mechanism discovery, automated search over thousands of candidate architectures, and attention-free mechanisms for long-range information transfer.
It investigates:
- non-attention routing
- hierarchical pooling
- spectral propagation
- dynamic gating
- field-based sequence computation
- O(L)-oriented alternatives to quadratic attention
- Seeker Field Networks and Adaptive Field Networks
This line of work directly challenges the assumption that long-range sequence modeling must be organized around attention-style token interaction.
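As a toy illustration of the O(L) direction, here is a causal token mixer with no pairwise attention matrix: each position receives an exponentially decayed summary of all earlier positions, plus a data-dependent gate. This is a sketch under my own simplifications, not the Seeker or Adaptive Field Network mechanism; the function names, the EMA recurrence, and the logistic gate are all assumptions:

```python
import math

def ema_mix(sequence, decay=0.5):
    """O(L) causal mixing: one recurrent state carries long-range
    context forward, instead of an O(L^2) attention matrix."""
    state, mixed = 0.0, []
    for x in sequence:
        state = decay * state + (1.0 - decay) * x
        mixed.append(state)
    return mixed

def gated_mix(sequence, decay=0.5):
    """Dynamic-gating sketch: blend each token with its EMA context
    using a gate computed from the token itself, still O(L)."""
    context = ema_mix(sequence, decay)
    out = []
    for x, c in zip(sequence, context):
        gate = 1.0 / (1.0 + math.exp(-x))  # logistic gate on the token
        out.append(gate * c + (1.0 - gate) * x)
    return out
```

An impulse at position 0 decays geometrically through the rest of the sequence, which is the simplest possible form of attention-free long-range information transfer.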
Repository: RSI-NAS-Attention-Free
I am a structure-breaking boundary explorer.
My default move is not to polish the surface of an existing paradigm. It is to cut into the premise, inspect the hidden machinery, and find the structural point where the claimed intelligence, adaptation, or integration breaks.
I look for:
- the hidden assumption
- the shortcut path
- the fake abstraction
- the benchmark leak
- the untested bridge
- the inactive code path
- the failure residue that can be converted into a new mechanism
My current research directions:
- Recursive and self-evolving learning systems
- Residue-driven mechanism discovery
- Neural architecture search beyond attention
- Attention-free sequence modeling
- Topological and hyperdimensional symbolic computation
- Operator-program synthesis and mutation
- Evaluator evolution and anti-cheat instrumentation
- Failure grammar, structural memory, and boundary-condition testing
- Constrained-compute experimental systems
Email: sunghunkwag@gmail.com

