Computational Fluid Dynamics
Hypersonic Flow Around the Halis Orbiter
Roy Williams
California Institute of Technology
Jochem Hauser and Ralf Winkelmann
Center of Logistics and Expert Systems -- Germany
The Halis model spacecraft has been investigated extensively in several U.S. and European wind tunnels,
where pressure, skin friction, and heat flux can be measured both for cold, perfect-gas flow and for hot,
chemically reacting flow. Comparison with CFD for this freely available geometry is therefore of great interest.
Figure 1 depicts a Mach 9.8 Navier-Stokes flow, where the freestream pressure (65 Pa)
corresponds to reentry in the outer atmosphere. Half of the Halis is shown in blue, with the Mach contours
shown on the symmetry plane. Note that the bow shock is well resolved, even when it is very close at the
nose, and the shock-shock interaction behind the body flap is also well represented.
Figure 1. A Mach 9.8 Navier-Stokes flow over the Halis configuration,
where the freestream pressure (65 Pa) corresponds to re-entry in the outer atmosphere. Half of the Halis is
shown in blue, with the Mach contours shown on the symmetry plane.
This flow is computationally difficult, not only because of the thin viscous boundary layer, but also
because there is a "temperature boundary layer": the skin of the craft is assumed to be held at a constant
temperature (300 K). The boundary layer therefore carries not only a large momentum flux
but also a large heat flux. The most complex part of the flow is at the rear of the craft, where the body flap
projects downward. This body flap is exposed to extreme mechanical and thermal stress, so it is
essential to model this region well. Experiments indicate a small recirculation zone at the hinge between
the fuselage and the body flap, and resolving it was one of the objectives of the work described
here.
The ParNSS (Parallel Navier-Stokes Solver) code was used to compute this flow. ParNSS uses a
multiblock grid with second-order GMRES-implicit time steps and van Leer's flux-vector splitting in a
MUSCL formulation. The grid, which has 980,000 points with 32 points resolving the boundary layer,
was made with GridPro software. Figure 2 illustrates part of the surface grid near the body flap,
showing a modified topology that enhances the local resolution. This topological flexibility allows a
substantial increase in local resolution without over-refining the far field.
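Van Leer's flux-vector splitting, named above, divides the flux into forward- and backward-moving parts according to the local Mach number. The following is a minimal, illustrative Python sketch for the 1D Euler equations, not the ParNSS implementation; the function and variable names are our own:

```python
import math

GAMMA = 1.4  # ratio of specific heats for a perfect gas

def euler_flux(rho, u, p):
    """Exact 1D Euler flux vector (mass, momentum, energy)."""
    E = p / (GAMMA - 1.0) + 0.5 * rho * u * u  # total energy per unit volume
    return (rho * u, rho * u * u + p, u * (E + p))

def van_leer_split(rho, u, p):
    """Van Leer flux-vector splitting: returns (F_plus, F_minus).

    For |M| >= 1 the whole flux travels in one direction; for subsonic
    flow the split polynomials in Mach number are used.  By construction
    F_plus + F_minus equals the exact flux.
    """
    a = math.sqrt(GAMMA * p / rho)  # speed of sound
    M = u / a                       # local Mach number
    if M >= 1.0:
        return euler_flux(rho, u, p), (0.0, 0.0, 0.0)
    if M <= -1.0:
        return (0.0, 0.0, 0.0), euler_flux(rho, u, p)

    def half(sign):
        # subsonic split polynomials (van Leer, 1982)
        f_mass = sign * 0.25 * rho * a * (M + sign) ** 2
        f_mom = f_mass * ((GAMMA - 1.0) * u + sign * 2.0 * a) / GAMMA
        f_en = f_mass * ((GAMMA - 1.0) * u + sign * 2.0 * a) ** 2 \
               / (2.0 * (GAMMA ** 2 - 1.0))
        return (f_mass, f_mom, f_en)

    return half(+1.0), half(-1.0)
```

A useful sanity check is the consistency property F+ + F- = F, which the subsonic split polynomials satisfy by construction; in a MUSCL scheme the split fluxes are evaluated from the reconstructed left and right interface states.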
Figure 2. Part of the surface grid near the body flap showing a modified
topology to enhance the local resolution. This "3d Clamp" technique allows a substantial increase in local
resolution without over-refining the far field.
Numerous runs were performed on the Intel Paragon at Caltech to find an optimal strategy for convergence to
steady state. We investigated the sequencing of explicit and block-implicit steps, the choice of preconditioners,
and how to choose the (spatially non-uniform) time steps. Communication overhead was not a problem
because, with block-implicit stepping, each processor solves a set of large, sparse linear systems per
time step and communicates only after each of these large tasks. Up to 135 processors were used for
the 192-block grid, but the variation in block size imposes an upper limit on parallel efficiency: the smallest block has
462 points and the largest 2,560, so it becomes difficult to distribute the blocks evenly over larger
numbers of processors.
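The block-distribution difficulty can be illustrated with a simple greedy ("longest-processing-time") assignment. This is not the mapping ParNSS used; the block sizes below are synthetic, with only the extremes (462 and 2,560 points) taken from the text:

```python
import random

def assign_blocks(block_sizes, n_procs):
    """Greedy LPT assignment: largest blocks first, each to the
    currently least-loaded processor.  Returns per-processor loads."""
    loads = [0] * n_procs
    for size in sorted(block_sizes, reverse=True):
        i = loads.index(min(loads))  # least-loaded processor so far
        loads[i] += size
    return loads

def parallel_efficiency(loads):
    """Ratio of mean load to maximum load: 1.0 means perfect balance."""
    return sum(loads) / (len(loads) * max(loads))

# synthetic 192-block grid spanning the extremes quoted in the text
rng = random.Random(0)
blocks = [rng.randint(462, 2560) for _ in range(192)]

for p in (32, 64, 135):
    eff = parallel_efficiency(assign_blocks(blocks, p))
    print(f"{p:4d} processors: balance efficiency {eff:.2f}")
```

With many more blocks than processors the greedy loads even out, but as the processor count approaches the block count the largest block alone bounds the achievable balance, which matches the efficiency ceiling described above.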
Figure 3 shows the recirculation zone at the body-flap hinge. The red and green arrows show the
fast, supersonic flow that has passed through the bow shock, while the adjacent small, blue arrows point in
the opposite direction, showing recirculation.
Figure 3. Recirculation zone at the body-flap hinge. The red and green
arrows show the fast, supersonic flow that has passed through the bow shock. The adjacent small, blue
arrows point in the opposite direction, showing recirculation.
References
- [1] R. D. Williams, J. Hauser, and R. Winkelmann, "Efficient Convergence Acceleration for a
Parallel CFD Code." In: Parallel Computational Fluid Mechanics 1996, A. Ecer et al., Eds.,
Elsevier North-Holland, to be published.
- [2] J. Hauser, R. D. Williams, H.-G. Paap, M. Spel, J. Muylaert, and R. Winkelmann, "A Newton-
GMRES Method for the Parallel Navier-Stokes Equations." In: Parallel Computational Fluid
Mechanics 1995, A. Ecer et al., Eds., Elsevier North-Holland, 1995.
- [3] J. Hauser, M. Spel, J. Muylaert, and R. Williams, "ParNSS: An Efficient Parallel Navier-Stokes
Solver for Complex Geometries," AIAA paper 94-2263.
- [4] J. Hauser and R. D. Williams, "Strategies for Parallelizing a Navier-Stokes Code on the Intel
Touchstone Machines," Int. J. Numerical Methods in Fluids, 15(51), 1992.
- [5] P. R. Eiseman et al., GridPro/sb3020 software, Program Development Corporation, White
Plains, NY.
- [6] J. Hauser, J. Muylaert, and Y. Xia, "Grid Generation for the Halis Configuration." In:
Numerical Grid Generation for Computational Fluid Dynamics, B. Soni et al., Eds., Engineering
Research Center Press, Starkville, Mississippi.
Last Modified January 9, 1996
Available through: Concurrent Supercomputing Consortium,
California Institute of Technology, USA,
http://www.cacr.caltech.edu/publications/annreps/annrep96/cfd1.html