### About Me · Junxi Fan
I have a background in systems engineering, with a degree in civil engineering from a well-known university in China. My academic training focused on structural modeling, logical reasoning, and system decomposition.
Although I’m not an AI engineer, I’ve spent the past year in long-form, high-consistency interaction with large language models (LLMs). Rather than treating these models as simple output generators, I’ve been exploring how **user language structure, sustained over time, induces model adaptation**, particularly in style, rhythm, and semantic framing.
This phenomenon is commonly described in the literature as **linguistic entrainment**, and more recently studied as **style mimicry** or **user-aligned output shaping** in LLMs. In my observations, LLM systems progressively align their output not only with my vocabulary, but also with my reasoning cadence and discourse-level structure.
---
### Research Interests
I focus on questions like:
- How does long-term user input structure influence model behavior?
- To what extent do LLMs adapt to users in the absence of explicit instructions?
- Can stylistic adaptation occur without memory modules (i.e., via residual patterns in long-horizon context)?
- Is it possible to establish stable feedback pathways purely through structural language interaction?
I consider these questions part of the **user-alignment problem space**, complementary to architectural and training-based alignment research.
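As one example of how such questions might be operationalized, below is a minimal sketch of a crude lexical-entrainment score over a transcript of (user, model) turn pairs. The stopword list, the Jaccard-style overlap, and the toy transcript are my own illustrative assumptions, not an established measurement protocol.

```python
# Minimal sketch: a crude lexical-entrainment score over a chat transcript.
# The stopword list, overlap metric, and toy transcript are hypothetical
# illustrations, not an established protocol.
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "it", "that", "i"}

def content_words(text: str) -> Counter:
    """Lowercase, strip punctuation, drop stopwords; return word counts."""
    cleaned = "".join(c if c.isalnum() or c.isspace() else " " for c in text.lower())
    return Counter(w for w in cleaned.split() if w not in STOPWORDS)

def overlap(a: Counter, b: Counter) -> float:
    """Jaccard-style overlap between two bags of content words."""
    if not a or not b:
        return 0.0
    return sum((a & b).values()) / sum((a | b).values())

def entrainment_series(turns: list[tuple[str, str]]) -> list[float]:
    """Score each model reply against the cumulative user vocabulary so far.
    A rising series is consistent with, but does not prove, entrainment."""
    user_vocab: Counter = Counter()
    series = []
    for user_text, model_text in turns:
        user_vocab += content_words(user_text)
        series.append(overlap(user_vocab, content_words(model_text)))
    return series

if __name__ == "__main__":
    # Toy two-turn transcript; a real analysis would use long-horizon logs.
    transcript = [
        ("Decompose the system into load-bearing modules.",
         "Sure, here is an outline of the approach."),
        ("Again: decompose by structural modules, then verify each one.",
         "Decomposing by structural modules, then verifying each one: ..."),
    ]
    print(entrainment_series(transcript))
```

A rising series across a long session would be suggestive, though confounds such as topic drift and shared task vocabulary would have to be controlled before calling it adaptation.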
---
### What I Aim to Contribute
My goal is to bring a new class of data points into alignment discourse: not from API experimentation or system tuning, but from structured user-side interaction. Specifically, I hope to offer:
- Long-horizon interaction logs with visible behavioral shifts
- Structural breakdowns of how LLM output changes with input regularity (see the sketch below)
- Evidence of user-side protocol construction via natural language patterns
Though I don’t fine-tune models or build prompts in a conventional sense, I observe and shape model responses through linguistic structure alone.
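To make the idea of a structural breakdown concrete, here is a hedged sketch of per-turn feature extraction. The three features (mean sentence length as a cadence proxy, bullet-line count, and discourse-connective count) are illustrative choices of mine, not a standard feature set.

```python
# Hedged sketch: simple per-turn structural features, so that shifts in
# model output can be tabulated against the regularity of user input.
# The feature choices are illustrative assumptions, not a standard set.
import re
from dataclasses import dataclass

CONNECTIVES = re.compile(r"\b(however|therefore|thus|moreover|first|second)\b",
                         re.IGNORECASE)

@dataclass
class TurnFeatures:
    mean_sentence_len: float  # rough proxy for cadence
    bullet_lines: int         # list-style formatting signal
    connectives: int          # discourse-level markers

def extract(text: str) -> TurnFeatures:
    """Compute the three structural features for one turn of text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    mean_len = sum(len(s.split()) for s in sentences) / max(len(sentences), 1)
    bullets = sum(1 for line in text.splitlines()
                  if line.lstrip().startswith(("-", "*")))
    return TurnFeatures(mean_len, bullets, len(CONNECTIVES.findall(text)))

if __name__ == "__main__":
    turn = "First, decompose the system. Then verify each module.\n- step one\n- step two"
    print(extract(turn))
```

Tracking such features for user and model turns side by side over a long session is one plausible way to document the behavioral shifts described above.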
---
### A Note on Position
If you're studying:
- Behavioral shifts in LLMs under long-form user interaction
- Style adaptation and user-aligned output
- Prompt-free conditioning phenomena
I’d be glad to discuss, compare notes, or contribute observational data.
These are personal observations of structural pattern behavior in LLMs; I welcome criticism, clarification, and attempts at verification.