James Halloway

Radiative Shocks: A story of uncertainty in modeling and measuring

The Center for Radiative Shock Hydrodynamics (CRASH) developed the capability to model and to measure shocks launched at ~100 km/s in argon-filled tubes, resulting in shock waves of sufficient compression to heat the gas to a point where radiative energy losses from the shocked gas are significant.  Such radiative shocks are difficult to model because the transport of energy through radiation is effectively non-local and can be highly directional.  In the system that we study, radiation from the primary shock propagates ahead of the shock and creates a secondary shock by ablating the wall of the tube containing the argon gas.  This wall shock then interacts with the main shock.

This system can be measured in the field, primarily through x-ray radiographs that allow us to see the location of the primary shock in the argon-filled tube.  The system can also be modeled with the CRASH code system, which simulates the hydrodynamics and radiation transport in the system.  The code system contains phenomenological parameters that can (or should) be calibrated to fixed values using field data, as well as parameters that are uncertain in the field and not identically reproducible from experiment to experiment.  And of course the physics in the code is not perfect, so there is an unknown model discrepancy; field measurements can also inform us about this discrepancy.  During the course of this project we developed various statistical models, based on the ideas of Kennedy and O'Hagan, for performing this calibration and discrepancy estimation, with the ultimate aim of making predictions, with quantified uncertainties, for experiments that had not yet been performed.  A key aim was in fact to perform an extrapolation using the physics code: calibration was performed using shocks in circular tubes, and the calibrated model was then used to predict the shock location in an elliptical tube.
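For readers unfamiliar with the Kennedy and O'Hagan framework, its central relation can be sketched in generic form (this is the textbook statement of the approach, not the specific statistical model used in the project):

```latex
% Generic Kennedy--O'Hagan calibration model (illustrative form):
%   y(x_i)     : field measurement at experimental inputs x_i
%   \eta       : simulator output
%   \theta     : calibration parameters to be inferred
%   \delta(x)  : model discrepancy, typically a Gaussian process
%   \epsilon_i : observation error
y(x_i) = \eta(x_i, \theta) + \delta(x_i) + \epsilon_i,
\qquad \epsilon_i \sim \mathcal{N}(0, \sigma^2)
```

Calibration infers the parameters theta (and the hyperparameters of the discrepancy term) jointly from field data and simulator runs, so that a prediction at a new input x carries uncertainty contributed by all three terms.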

The actual process involved in this work was less idealized than might typically be imagined.  The computer runs require extensive and scarce resources, are performed for multiple purposes, and use a code that was actively developed, and hence changing, throughout the life of the project.  The field experiments are also resource constrained and quite limited in number (O(10)), and are performed with systems that themselves undergo active development during the project.  This leads to interesting questions of how to practically use the available information as the project progresses, and these will be discussed.  Different notions of "prediction" are also in play, and will be described.

In the end the impact of the work is as much cultural as predictive, with the experimental team and the computational team becoming increasingly glued together by the statistical analysis; it is this change in the culture of these teams that is perhaps the largest impact of such a project.