arXiv:2605.12326v1

Abstract: Model merging has emerged as a cost-effective alternative to training large language models (LLMs) from scratch, enabling researchers to combine pre-trained models into more capable systems without full retraining. Evolutionary approaches to model merging have shown particular promise, automatically searching for optimal merging configurations across both parameter space (PS) and data flow space (DFS). However, the optimization challenges underlying these approaches -- particularly in DFS merging -- remain poorly understood and formally underspecified in the literature. This paper makes two contributions. First, we provide a structured survey of evolutionary model merging techniques, organizing them into three categories: parameter-space merging, data flow space merging, and hybrid approaches. Second, we formally characterize the DFS merging problem as a black-box optimization problem involving mixed binary-continuous variables, high-dime...
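To make the "black-box optimization over mixed binary-continuous variables" framing concrete, here is a minimal illustrative sketch, not the paper's actual method: a (1+1) evolution strategy searching over a binary layer-inclusion mask and continuous per-layer scaling coefficients. The fitness function, mutation rates, and layer count are all hypothetical stand-ins; in a real DFS merging setup, the fitness would come from evaluating the merged model on a benchmark.

```python
import random

def evaluate(mask, scales):
    """Stand-in black-box fitness (hypothetical).

    A real DFS-merging objective would score a merged model on held-out
    tasks; here we reward a fixed target mask and scales near 0.5.
    """
    target_mask = [i % 2 == 0 for i in range(len(mask))]
    fit = sum(m == t for m, t in zip(mask, target_mask))
    fit -= sum((s - 0.5) ** 2 for s in scales)  # penalty on scale deviation
    return fit

def mutate(mask, scales, rng, p_flip=0.2, sigma=0.1):
    """Mutate both variable types: flip bits, perturb continuous scales."""
    new_mask = [(not m) if rng.random() < p_flip else m for m in mask]
    new_scales = [min(1.0, max(0.0, s + rng.gauss(0.0, sigma))) for s in scales]
    return new_mask, new_scales

def one_plus_one_es(n_layers=8, iters=500, seed=0):
    """Elitist (1+1)-ES over the mixed binary-continuous search space."""
    rng = random.Random(seed)
    mask = [rng.random() < 0.5 for _ in range(n_layers)]
    scales = [rng.random() for _ in range(n_layers)]
    best = evaluate(mask, scales)
    for _ in range(iters):
        cand_mask, cand_scales = mutate(mask, scales, rng)
        f = evaluate(cand_mask, cand_scales)
        if f >= best:  # accept if no worse (elitist selection)
            mask, scales, best = cand_mask, cand_scales, f
    return mask, scales, best

if __name__ == "__main__":
    mask, scales, best = one_plus_one_es()
    print("best fitness:", round(best, 3))
```

Even this toy version shows why the mixed search space is awkward for off-the-shelf continuous optimizers: the binary mask needs discrete mutation, while the scales need small Gaussian perturbations, motivating the evolutionary treatment the abstract describes.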