From a purely economic viewpoint, the rational golf course superintendent spends only the minimum amount of money required to control disease on his or her golf course. Each additional dollar spent does not increase the level of control, so those additional dollars are wasted, and superintendents who spend above the minimum required for control are (economically) irrational. Before you get defensive about how rational you are, note that nearly EVERY golf course superintendent is irrational with regard to fungicide applications. This is because of the uncertainty regarding WHEN disease will occur, and the desire of most superintendents to reduce that uncertainty by making more fungicide applications than are absolutely necessary. To increase the economic efficiency of turfgrass disease control, one needs an accurate prediction of when disease will occur so that fungicides can be applied only when necessary.
For decades, turf pathologists have used predictive modeling to try to shed light on when disease will appear. Predictive models have been developed, with varying levels of success, for anthracnose, brown patch, and Pythium blight. As the primary bentgrass disease for much of the world and one that develops over a wide temperature range, dollar spot is seemingly ideal for predictive modeling. In the past, however, most dollar spot models have been ineffective at accurately predicting disease outbreaks.
Recently, a new dollar spot model has been developed that, at least in early trials, has predicted dollar spot outbreaks more accurately. Developed by Dr. Damon Smith (formerly at Oklahoma State University, now field crops pathologist at Wisconsin) and Dr. Jim Kerns (formerly at Wisconsin, now at North Carolina State), the Smith-Kerns model uses a complex algorithm that takes various environmental factors into account to estimate the probability that dollar spot will occur on a given day. The model is relatively new and we’re still field-testing it, but so far it has accurately predicted the dollar spot outbreaks of 2014 at our research station in Madison. Following further field-testing this summer, we hope to expand the use of the model to more areas of the country in 2015.
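The article doesn’t give the model’s actual equation or parameters, but a model of this kind (environmental inputs in, a daily outbreak probability out) is commonly structured as logistic regression. The sketch below shows that general shape only; the coefficient values and variable choices are placeholders I made up for illustration, not the Smith-Kerns parameters.

```python
import math

# PLACEHOLDER coefficients for illustration only -- the real
# Smith-Kerns parameters are not given in this article.
B0, B_TEMP, B_RH = -11.4, 0.19, 0.09

def dollar_spot_probability(avg_temp_c, avg_rh):
    """Sketch of a logistic-style daily dollar spot probability from
    moving averages of air temperature (deg C) and relative humidity (%).
    Illustrative form only; not the published model."""
    logit = B0 + B_TEMP * avg_temp_c + B_RH * avg_rh
    return 1.0 / (1.0 + math.exp(-logit))  # maps logit to a 0-1 probability
```

Whatever the true coefficients are, the logistic transform guarantees the output is a probability between 0 and 1, which matches the model’s reported behavior of assigning each day a probability of dollar spot occurrence.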
While the entire algorithm is complex, one interesting thing Dr. Smith and Dr. Kerns realized is that in the northern U.S., the primary factor driving dollar spot development was relative humidity. When the 5-day moving average of relative humidity was 70% or higher, the risk for dollar spot development was elevated, and when it was 75% or higher, the risk was high. If you’re a superintendent in the northern half of the country looking to experiment with the model at your facility, consider spraying for dollar spot only once the 5-day moving average of relative humidity reaches 70% or greater. This simplification of the model won’t work in the transition zone or the southern U.S., where warmer temperatures suppress dollar spot development despite high humidity, so a humidity-only rule would over-predict disease there.
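That simplified northern-U.S. rule (5-day moving average of relative humidity, with 70% and 75% thresholds) is easy to compute from daily readings. Here is a minimal sketch; the function name and the "low/elevated/high" labels are my own choices, and the thresholds come straight from the rule above.

```python
from collections import deque

def dollar_spot_risk(daily_rh, window=5):
    """Classify daily dollar spot risk from a sequence of daily average
    relative humidity readings (%), using the simplified northern-U.S.
    rule: 5-day moving average RH >= 70% = elevated risk, >= 75% = high.
    Returns one label per day; the first window-1 days lack enough data.
    """
    recent = deque(maxlen=window)  # keeps only the last `window` readings
    labels = []
    for rh in daily_rh:
        recent.append(rh)
        if len(recent) < window:
            labels.append("insufficient data")
            continue
        avg = sum(recent) / window
        if avg >= 75:
            labels.append("high")
        elif avg >= 70:
            labels.append("elevated")
        else:
            labels.append("low")
    return labels
```

For example, a week of daily humidity readings climbing from 60% to 90% would cross into "elevated" on the first day the 5-day average hits 70%, and into "high" once it reaches 75%. Remember this shortcut applies only in the northern U.S.; farther south, temperature must be considered as well.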
Please note that this model is still being field-tested, and alterations will likely be made to further increase its accuracy. But for now the model appears to be a promising tool for any superintendent looking to make more timely fungicide applications targeting dollar spot. In the future, I even envision the day when we wake up, turn on the morning news, and hear ‘High temperatures today should hit 75°F with a 20% chance for showers and a 40% chance of dollar spot.’ At least the dollar spot part would be accurate.