🤖 AI Summary
Traditional 3D scene editing relies heavily on manual object repositioning, expert modeling, and large annotated datasets. To address this, we propose a natural-language-driven, zero-shot 3D editing framework. Our method formalizes spatial semantics using conformal geometric algebra (CGA), integrated as an interpretable, verifiable semantic mapping language within the reasoning chain of a large language model (LLM). The framework combines CGA-based geometric representation, zero-shot LLM instruction parsing, real-time 3D simulation, and standard graphics pipeline interfaces, requiring no domain-specific fine-tuning or manual modeling intervention. Experiments show that our approach reduces system response latency by 16% and improves task success rate by 9.6% over Euclidean-space baselines; notably, it achieves a 100% success rate on typical practical queries.
📝 Abstract
This paper introduces a novel integration of Large Language Models (LLMs) with Conformal Geometric Algebra (CGA) for controllable 3D scene editing, particularly object repositioning, a task that traditionally requires intricate manual processes and specialized expertise. Conventional methods typically depend on large training datasets or lack a formal language for precise edits. Using CGA as a robust formal language, our system, shenlong, precisely models the spatial transformations needed for accurate object repositioning. Leveraging the zero-shot capabilities of pre-trained LLMs, shenlong translates natural language instructions into CGA operations that are then applied to the scene, enabling exact spatial transformations without specialized pre-training. Implemented in a realistic simulation environment, shenlong remains compatible with existing graphics pipelines. To isolate the impact of CGA, we benchmark against strong Euclidean-space baselines, evaluating both latency and accuracy. Comparative evaluations show that shenlong reduces LLM response times by 16% and raises success rates by 9.6% on average over these baselines. Notably, shenlong achieves a 100% success rate on common practical queries, a benchmark where other systems fall short. These advances underscore shenlong's potential to democratize 3D scene editing, enhancing accessibility and fostering innovation across sectors such as education, digital entertainment, and virtual reality.
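To make the CGA side of the pipeline concrete, the sketch below shows what a "translate this object" edit looks like as a CGA operation: a point is embedded into the conformal model of Cl(4,1), and a translator versor T = 1 − ½ t n∞ is applied as a sandwich product T X T̃. This is an illustrative, self-contained implementation of the standard conformal model, not shenlong's actual code; the scene object, the translation vector, and all function names are assumptions for the example.

```python
# Hypothetical sketch of one CGA edit, not shenlong's implementation.
# Multivectors are dicts {basis-blade bitmask: coefficient} over Cl(4,1):
# e1..e4 square to +1, e5 squares to -1.
METRIC = [1, 1, 1, 1, -1]

def blade_product(a, b):
    """Product of two basis blades: returns (result bitmask, sign)."""
    swaps, s = 0, a >> 1
    while s:  # count swaps to reach canonical order
        swaps += bin(s & b).count("1")
        s >>= 1
    sign = -1 if swaps % 2 else 1
    for i in range(5):  # contract repeated basis vectors via the metric
        if a & b & (1 << i):
            sign *= METRIC[i]
    return a ^ b, sign

def gp(A, B):
    """Geometric product of two multivectors."""
    out = {}
    for ba, ca in A.items():
        for bb, cb in B.items():
            blade, sign = blade_product(ba, bb)
            out[blade] = out.get(blade, 0.0) + sign * ca * cb
    return {k: v for k, v in out.items() if abs(v) > 1e-12}

def add(*Ms):
    out = {}
    for M in Ms:
        for k, v in M.items():
            out[k] = out.get(k, 0.0) + v
    return {k: v for k, v in out.items() if abs(v) > 1e-12}

def scale(M, s):
    return {k: s * v for k, v in M.items()}

def reverse(M):
    """Reversion: a grade-g blade picks up (-1)^(g(g-1)/2)."""
    return {k: v * (-1) ** (bin(k).count("1") * (bin(k).count("1") - 1) // 2)
            for k, v in M.items()}

def vec(x, y, z):
    """Euclidean vector in the e1, e2, e3 subspace."""
    return {k: c for k, c in zip((1, 2, 4), (x, y, z)) if c}

# Null basis: n_inf = e- + e+, n_o = (e- - e+)/2, so n_inf . n_o = -1.
E_PLUS, E_MINUS = {8: 1.0}, {16: 1.0}
N_INF = add(E_MINUS, E_PLUS)
N_O = scale(add(E_MINUS, scale(E_PLUS, -1.0)), 0.5)

def up(x, y, z):
    """Conformal embedding: X = x + 0.5|x|^2 n_inf + n_o."""
    return add(vec(x, y, z), scale(N_INF, 0.5 * (x*x + y*y + z*z)), N_O)

def translator(t):
    """Translator versor T = 1 - 0.5 * t * n_inf."""
    return add({0: 1.0}, scale(gp(vec(*t), N_INF), -0.5))

# "Move the chair 2 units along x": sandwich its conformal point with T.
chair = up(1.0, 0.0, 3.0)
T = translator((2.0, 0.0, 0.0))
moved = gp(gp(T, chair), reverse(T))
# moved equals up(3.0, 0.0, 3.0): the same point, shifted by t.
```

Because translators (and rotors for rotation) compose by multiplication and always act through the same sandwich pattern, an instruction parser only needs to emit a versor per clause, which is what makes CGA attractive as a compact target language for LLM output.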