🤖 AI Summary
In the era of large language models (LLMs), traditional TeX systems struggle to support efficient scientific writing due to slow compilation, limited semantic expressiveness, difficulty in error localization, and incompatibility with modern tooling ecosystems; moreover, their high information entropy substantially increases LLM training costs. This work systematically identifies the structural bottlenecks of TeX and proposes Mogan STEM, a WYSIWYG structured editor that introduces the low-entropy .tmu document format, coupled with efficient data structures, rapid rendering, and on-demand plugin loading. Experimental results demonstrate that Mogan significantly outperforms TeX in both compilation and rendering speed, while the .tmu format markedly improves LLM fine-tuning efficiency, offering a new direction for scientific writing tools and document representation paradigms.
📄 Abstract
As large language models (LLMs) increasingly assist scientific writing, TeX's limitations and significant token cost become increasingly visible. This paper analyzes TeX's fundamental defects in compilation and user experience design, illustrating its limitations in compilation efficiency, semantic expressiveness, error localization, and tool ecosystem in the era of LLMs. As an alternative, we introduce Mogan STEM, a WYSIWYG structured editor. Mogan outperforms TeX in the above aspects through its efficient data structures, fast rendering, and on-demand plugin loading. Extensive experiments verify the benefits in compilation/rendering time and performance on LLM tasks. Moreover, we show that owing to its lower information entropy, fine-tuning LLMs with .tmu (Mogan's document format) is more efficient than with TeX. We therefore call for larger-scale experiments on LLM training with the .tmu format.