Asteroseismology is a powerful tool that can precisely characterize the mass, radius, and other properties of field stars. However, our inability to properly model the near-surface layers of stars creates a frequency-dependent offset between the observed and the modeled frequencies, usually referred to as the "surface term". This surface term can introduce significant errors into the derived stellar properties unless it is removed properly. In this paper we simulate surface terms across a significant portion of the HR diagram, covering four masses ($M = 0.8$, $1.0$, $1.2$, and $1.5\,M_{\odot}$) at five metallicities ($[\mathrm{Fe/H}] = 0.5$, $0.0$, $-0.5$, $-1.0$, and $-1.5$) from the main sequence to red giants for stars with $T_{\mathrm{eff}} < 6500$ K, and we test how well the most common ways of fitting and removing the surface term actually perform. We find that the two-term model proposed by Ball & Gizon (2014) performs much better than the other models across a large portion of the HR diagram, including for red giants, leading us to recommend its use in future asteroseismic analyses.
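For context, the Ball & Gizon (2014) two-term correction models the surface term as the sum of an inverse and a cubic frequency dependence, scaled by the mode inertia. The sketch below states its standard form; the notation ($\nu_{\mathrm{ac}}$ for the acoustic cutoff frequency, $\mathcal{I}$ for the normalized mode inertia) is assumed here rather than defined in this abstract:
\[
  \delta\nu(\nu) \;=\; \frac{a_{-1}\,(\nu/\nu_{\mathrm{ac}})^{-1} \;+\; a_{3}\,(\nu/\nu_{\mathrm{ac}})^{3}}{\mathcal{I}(\nu)},
\]
where $\delta\nu$ is the model-minus-observed frequency difference and the coefficients $a_{-1}$ and $a_{3}$ are fit to the data. The inertia scaling suppresses the correction for strongly mixed modes, which is part of why the two-term form remains flexible from the main sequence through the red-giant branch.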