We discuss the dissipation of turbulent kinetic energy E_k in the global interstellar medium (ISM) by means of two-dimensional, non-isothermal MHD simulations that include model radiative heating and cooling. We argue that dissipation in two dimensions is representative of that in three dimensions as long as it is dominated by shocks rather than by a turbulent cascade. Contrary to previous treatments of dissipation in the ISM, in this work we consider realistic, stellar-like forcing: energy is injected at a few isolated sites in space, over relatively small scales, and over short time periods. This leads to the coexistence of forced and decaying regimes in the same flow, to a net propagation of turbulent kinetic energy from the injection sites to the decaying regions, and to different characteristic dissipation rates and times at the forcing sites and in the global flow. We find that the ISM-like flow dissipates its turbulent energy rapidly. In simulations with forcing, the input parameters are the radius l_f of the forcing regions, the total kinetic energy e_k each source deposits into the flow, and the rate of formation of those regions, \dot{\Sigma}_{OB}. The global dissipation time t_d depends mainly on l_f. We find that for most of our simulations t_d is well described by a combination of parameters of the forcing and global parameters of the flow: t_d \approx u_{rms}^2 / (\dot{\epsilon}_k f), where u_{rms} is the rms velocity dispersion, \dot{\epsilon}_k is the specific power of each forcing region, and f is the filling factor of all these regions. In terms of measurable properties of the ISM, t_d \gtrsim \langle \Sigma_g \rangle u_{rms}^2 / (e_k \dot{\Sigma}_{OB}), where \langle \Sigma_g \rangle is the average gas surface density; for the solar neighborhood, t_d \gtrsim 1.5 \times 10^7 yr. The global dissipation time is consistently smaller than the crossing time of the largest energy-containing scales, suggesting that the local dissipation time near the sources must be significantly smaller than what would be estimated from large-scale quantities alone. In decaying simulations, we find that the kinetic energy decreases with time as E_k(t) \propto t^{-\alpha}, with \alpha \approx 0.8–0.9. When applied to the mixed forced-plus-decaying case, this temporal decay translates into a decay with distance \ell from the sources, E_k \propto \ell^{-2\alpha/(2-\alpha)} at large distances. Our results, if applicable in the direction perpendicular to galactic disks, support models of galaxy evolution in which stellar energy injection provides significant support for the thickness of the gas disk, but they do not support models in which this energy injection is assumed to reheat an intra-halo medium at distances of up to 10–20 times the optical size of the galaxy, since the dissipation occurs over distances comparable to the disk height. However, this conclusion is not definitive until the effects of stratification on our results are tested.
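A minimal sketch of how the temporal decay law can be mapped onto the spatial one, assuming (as one plausible reading; the paper's own argument may differ in detail) that the decaying turbulence recedes from its source at roughly its own rms speed, u \propto E_k^{1/2}:
\begin{align*}
E_k(t) &\propto t^{-\alpha}, &
u(t) &\propto E_k^{1/2} \propto t^{-\alpha/2}, \\
\ell(t) &\propto \int u \, dt \propto t^{1-\alpha/2}
&&\Longrightarrow\quad t \propto \ell^{2/(2-\alpha)}, \\
E_k &\propto \ell^{-2\alpha/(2-\alpha)}
&&\text{(e.g., } \alpha \approx 0.85 \Rightarrow E_k \propto \ell^{-1.5}\text{)}.
\end{align*}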