Hurricane intensity and structure forecasting has barely improved, if at all, over recent decades. This stands in stark contrast to the great success of track forecasting and naturally raises the question: why? I suggest the answer lies in the relative complexity of the two problems. The first-order influences on track, advection and propagation, are readily predicted with current models and computing capacity. Even second-order influences, such as vertical shear, are well within current capacity, and a relatively simple bogus vortex can be used to initialize the forecast. By comparison, intensity and structure changes arise from a range of nonlinear interactions, including: air-sea exchanges of heat, moisture, and momentum; interactions with the near and far environment; internal dynamics; and internal interactions with clouds, from convective heating to physical processes. Sophisticated data assimilation is required, since initializing from a bogus vortex, or even from a larger-scale model, is problematic at best. Further, ensembles provide statistical information that has proven to be a highly valuable forecasting tool, but at the cost of coarser resolution and the loss of fine-scale definition that may be critical to intensity forecasting. The precise influence of many of the internal processes is not well known, and current operational capacity is insufficient to model the relevant details, to run sophisticated assimilation, or to generate ensembles at the required high resolution.
As a contribution to the ongoing debate on this topic under the HFIP, I will discuss the pros and cons of where to direct effort in hurricane modeling, given limited computer resources. That is, what are the tradeoffs among ensemble size, resolution, physics, data assimilation, and ocean-model complexity, and how can these best be prioritized?
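The resolution-versus-ensemble tradeoff can be made concrete with a back-of-the-envelope scaling argument (this sketch and its numbers are illustrative assumptions, not from the abstract): halving the horizontal grid spacing quadruples the number of grid points and, for a stable time step, roughly halves the time step, so the cost per ensemble member scales approximately as the cube of the refinement factor. Under a fixed compute budget, each halving of grid spacing therefore divides the affordable ensemble size by about eight.

```python
# Back-of-the-envelope sketch of the ensemble-vs-resolution tradeoff.
# Assumption: cost per member ~ (dx_ref / dx)**3, i.e. horizontal
# refinement only, with the time step shortened proportionally.
# The 9 km reference spacing and 64-member budget are hypothetical.

def affordable_members(budget_units, dx_km, dx_ref_km=9.0):
    """Number of ensemble members affordable at grid spacing dx_km,
    with the budget expressed in units of one member at dx_ref_km."""
    cost_per_member = (dx_ref_km / dx_km) ** 3
    return int(budget_units // cost_per_member)

print(affordable_members(64, 9.0))   # 64 members at 9 km
print(affordable_members(64, 4.5))   # 8 members at 4.5 km
print(affordable_members(64, 2.25))  # 1 member at 2.25 km
```

The cubic scaling is why the choice is so stark: moving from a coarse ensemble to a single convection-permitting run consumes the entire budget, which is precisely the prioritization question posed above.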