SWE-Refactor: A Repository-Level Benchmark for Real-World LLM-Based Code Refactoring
Abstract
Large Language Models (LLMs) have recently attracted broad interest for tackling software engineering tasks. In contrast to code generation, refactoring demands precise, semantics-preserving edits that improve program structure, which also makes automated evaluation challenging. Existing refactoring benchmarks, however, commonly suffer from three shortcomings: limited coverage of refactoring scenarios, the inclusion of instances that mix refactoring with unrelated changes, and insufficient repository-level context for realistic assessment. To address these issues, we introduce SWE-Refactor, a new benchmark for LLM-based code refactoring. SWE-Refactor comprises 1,099 developer-written, behavior-preserving refactorings mined from 18 Java projects, including 922 atomic and 177 compound instances. Each instance is validated via compilation, test execution, and automated refactoring detection tools to ensure correctness. We evaluate nine widely used LLMs on SWE-Refactor, including GPT-4o-mini, DeepSeek-V3, and CodeLlama, to provide representative reference results. Our results show that complex and compound refactorings remain the primary source of failures; notably, an OpenAI Codex agent achieves only 39.4% success on compound instances. We release SWE-Refactor and all evaluation results to facilitate future research on LLM-based code refactoring.
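To make the validation criteria concrete, the Python sketch below mirrors the three gates the abstract names: compilation, test execution, and automated refactoring detection. The Maven goals are standard, but `validate_instance`, `detect_refactorings.sh`, and the expected-type comparison are illustrative assumptions rather than the authors' actual tooling.

```python
import subprocess

def run_ok(cmd: list[str], cwd: str) -> bool:
    """Run a command in the repository checkout; True iff it exits with 0."""
    return subprocess.run(cmd, cwd=cwd, capture_output=True).returncode == 0

def validate_instance(repo_dir: str, expected_types: set[str]) -> bool:
    """Hypothetical harness for the three validation gates in the abstract.

    An instance is kept only if the refactored revision (1) compiles,
    (2) passes the project's test suite (a proxy for behavior preservation),
    and (3) is confirmed by an automated detector to contain exactly the
    expected refactoring type(s) and nothing else.
    """
    # Gate 1: the post-refactoring revision must compile (standard Maven goal).
    if not run_ok(["mvn", "-q", "compile"], cwd=repo_dir):
        return False

    # Gate 2: the project's existing test suite must still pass.
    if not run_ok(["mvn", "-q", "test"], cwd=repo_dir):
        return False

    # Gate 3: a refactoring detector (e.g., a RefactoringMiner-style tool)
    # must report only the expected types, filtering out commits that mix
    # refactoring with unrelated changes.
    # NOTE: detect_refactorings.sh is a hypothetical wrapper, not a real CLI.
    out = subprocess.run(
        ["./detect_refactorings.sh", repo_dir],
        capture_output=True, text=True,
    ).stdout
    detected = set(out.split())
    return detected == expected_types
```

Gate 3 is the step that separates pure refactorings from commits bundling refactoring with behavioral changes, the second shortcoming the abstract identifies in prior benchmarks.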