Wind turbine design, weather forecasting, and climate change modelling are three examples of fields that depend on understanding turbulence. Turbulence occurs in air, water, and other gases and liquids, resulting in chaotic changes in pressure and flow velocity. With assistance from European high-performance computing (HPC) resources, a team at the National Research Council (NRC) in Lecce, Italy, is fine-tuning turbulence theory developed in 1941 by Soviet physicist Andrey Kolmogorov.
“When running global circulation simulations to describe the ocean and the atmosphere, whether it be for short-term weather forecasting or for predicting long-term climate change, you can never fully simulate all motions at all scales. You have lower limits, and beyond them you must use models. All the models used in such cases are based on Kolmogorov’s laws, so if we can correct these somehow, or provide some pointers as to when and where these laws might start to break down, then we can help improve the overall predictive powers of these types of simulation,” explains Alessandra Sabina Lanotte, Senior Researcher at NRC and head of the team.
Through the Partnership for Advanced Computing in Europe (PRACE), the project has been able to apply extensive HPC resources to challenge Kolmogorov’s theory.
Turbulence is caused by an uneven distribution of kinetic energy in a flow. Above a certain threshold – different for different fluids and conditions – the localized kinetic energy overcomes the damping effect of the fluid’s viscosity. Eddies and whorls appear, initiating motions that are at once coherent and unpredictable, long eluding any unified theory to describe them.
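That threshold is commonly expressed through the Reynolds number, which compares inertial to viscous forces. The sketch below is illustrative only; the viscosity value and the rough transition threshold are textbook figures, not taken from the project itself.

```python
def reynolds_number(velocity_m_s: float, length_m: float,
                    kinematic_viscosity_m2_s: float) -> float:
    """Re = U * L / nu: ratio of inertial to viscous effects in a flow."""
    return velocity_m_s * length_m / kinematic_viscosity_m2_s

# Air at ~20 °C (nu ≈ 1.5e-5 m^2/s) flowing at 10 m/s past a 1 m blade chord:
re_air = reynolds_number(10.0, 1.0, 1.5e-5)
print(f"Re ≈ {re_air:.0f}")  # far above the few-thousand range where flows
                             # typically become turbulent
```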
With no theory derived from first principles available, researchers came to rely on phenomenological models. In 1941, Andrey Kolmogorov introduced the first such model, which remains the reference theory for describing turbulent flows to this day.
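Kolmogorov’s 1941 theory (K41) makes concrete, testable predictions: in the inertial range, the energy spectrum falls off as E(k) ∝ k^(−5/3), and the n-th order velocity structure function scales with exponent n/3. The following sketch simply encodes these textbook formulas; the Kolmogorov constant and dissipation rate used are assumed example values.

```python
def k41_energy_spectrum(k: float, eps: float, C: float = 1.5) -> float:
    """K41 inertial-range spectrum E(k) = C * eps^(2/3) * k^(-5/3).

    k: wavenumber; eps: mean energy dissipation rate; C: Kolmogorov constant.
    """
    return C * eps ** (2.0 / 3.0) * k ** (-5.0 / 3.0)

def k41_zeta(n: int) -> float:
    """K41 scaling exponent of the n-th order structure function."""
    return n / 3.0

# The third-order exponent is exactly 1 (the celebrated "4/5 law" order):
print(k41_zeta(3))  # 1.0
```

Deviations of the measured exponents from n/3 at high orders are precisely the kind of breakdown of K41 that simulations like this project’s can probe.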
To challenge Kolmogorov’s predictions, two routes are possible. The first is to carry out field experiments, for example taking real measurements of turbulence in the atmosphere or ocean. The project follows the other route, which involves using HPC to run extremely large direct numerical simulations:
“The disorder and chaos of turbulence means that it is still impossible to know for certain the characteristics of a flow at any specific moment in time or point in space. Instead, what we do is try to work out the average properties of the flow, as well as what happens at the extremities of the energy distribution – the rare events. These rare events are important because they are often associated with failure in engineering, for example when wind turbines break in extreme conditions,” explains Alessandra Sabina Lanotte.
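One standard diagnostic for such rare events is the flatness (kurtosis) of velocity fluctuations: it equals about 3 for a Gaussian signal, while real turbulence shows much larger values at small scales, signalling intermittent extreme events. The sketch below uses synthetic Gaussian data purely for illustration; it is not the project’s analysis code.

```python
import random

def flatness(samples: list[float]) -> float:
    """Fourth moment over squared second moment; ≈ 3 for Gaussian data."""
    n = len(samples)
    mean = sum(samples) / n
    m2 = sum((x - mean) ** 2 for x in samples) / n
    m4 = sum((x - mean) ** 4 for x in samples) / n
    return m4 / m2 ** 2

random.seed(0)
gaussian = [random.gauss(0.0, 1.0) for _ in range(100_000)]
print(f"flatness ≈ {flatness(gaussian):.2f}")  # close to 3; intermittent
                                               # turbulent signals exceed this
```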
Looking forward, one of the main goals of the team is to refine the models of small-scale turbulence used in meteorology and oceanography to provide better overall predictive powers. Through PRACE, the project was granted 56,000,000 core hours on the KNL partition of the Marconi supercomputer, hosted by CINECA, Italy. GARR, the national research and education network (NREN) of Italy, has provided connectivity.
Alessandra Sabina Lanotte believes that the importance of numerical simulations has yet to be fully appreciated by scientists and the wider public:
“I work in an institute where most of the people are experimentalists. When I talk to them about numerical simulations, I often get the impression that they think these take a couple of minutes on my desktop computer! It is important for the next generation of scientists to understand that carrying out numerical simulations on Tier-0 resources like those provided by PRACE is a huge undertaking, as challenging as the largest lab experiments. In the future, I think students need to be introduced to this kind of work at the undergraduate level.”
The text is inspired by the article “Refining the Models of Small-Scale Turbulence” on the PRACE website.