Ecosystem analysis by means of complexity theory
BITÖK-S11
From 01/1995 to 12/1997
Principal Investigator: Michael Hauhs
Staff: Holger Lange, Frank Wolf
Grant: 0339476 B "Vorhersage und Erklärung des Verhaltens und der Belastbarkeit von Ökosystemen unter veränderten Umweltbedingungen" (Prediction and explanation of the behaviour and resilience of ecosystems under changed environmental conditions)
Energy and matter fluxes have long been used to study forested ecosystems. However, measurement resolutions and the types of models used to evaluate observations vary widely, even for similar overall goals. Here, the information fluxes accompanying the energy and matter fluxes are used for a more systematic comparison of models and field data from various catchments. Complexity theory supplies methods for quantifying (i) randomness, unpredictability and information content, and (ii) the complexity of the data structure or of its representation (by a model). To our knowledge, this approach has not been applied to ecosystem data sets before.
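One of the simplest measures of the first kind is the Shannon block entropy of a symbolized time series. The sketch below is illustrative only; the function names (`symbolize`, `block_entropy`) are ours and not part of any tool mentioned in this report.

```python
from collections import Counter
import math

def symbolize(series, threshold):
    """Map a real-valued series onto a binary symbol sequence:
    0 below the threshold, 1 at or above it."""
    return [1 if x >= threshold else 0 for x in series]

def block_entropy(symbols, block_len):
    """Shannon entropy (in bits) of the distribution of length-block_len
    symbol blocks; higher values indicate more randomness/information."""
    blocks = [tuple(symbols[i:i + block_len])
              for i in range(len(symbols) - block_len + 1)]
    n = len(blocks)
    counts = Counter(blocks)
    return sum(-c / n * math.log2(c / n) for c in counts.values())

# Constant input: a single symbol, zero information.
print(block_entropy(symbolize([0.2] * 100, 0.5), 1))       # 0.0
# Both symbols equally frequent: 1 bit per symbol.
print(block_entropy(symbolize([0.0, 1.0] * 50, 0.5), 1))   # 1.0
```

Longer blocks probe temporal structure: for the alternating series above, the entropy of longer blocks grows much more slowly than the block length, which exposes its predictability.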
The automatic calculation of 14 different complexity measures has been implemented in a computer program called SYMDYN (SYMbolic DYNamics). The methods were tested on the well-known logistic map and compared to published results; the stability of the methods was investigated by parameter variation.
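SYMDYN itself is not reproduced here, but the kind of test described can be sketched: iterate the logistic map, symbolize it with the standard partition at x = 0.5, and compare an entropy estimate against the known results (1 bit per symbol in the fully chaotic regime r = 4, zero for a periodic orbit). All names below are our own illustrative choices.

```python
from collections import Counter
import math

def logistic_series(r, x0=0.3, n=4000, discard=500):
    """Iterate the logistic map x -> r*x*(1-x), discarding the transient."""
    xs, x = [], x0
    for i in range(n + discard):
        if i >= discard:
            xs.append(x)
        x = r * x * (1 - x)
    return xs

def entropy_per_symbol(series, block_len=6):
    """Block entropy divided by block length: a finite-size estimate
    of the entropy rate (bits per symbol) of the symbolized series."""
    symbols = [1 if x >= 0.5 else 0 for x in series]
    blocks = [tuple(symbols[i:i + block_len])
              for i in range(len(symbols) - block_len + 1)]
    n = len(blocks)
    counts = Counter(blocks)
    h = sum(-c / n * math.log2(c / n) for c in counts.values())
    return h / block_len

# Fully chaotic regime: the known entropy rate is 1 bit per symbol.
print(entropy_per_symbol(logistic_series(4.0)))   # close to 1.0
# Period-2 regime: the orbit is fully predictable.
print(entropy_per_symbol(logistic_series(3.2)))   # 0.0
```

Varying r, x0 or block_len in such a test is one way to check the stability of a measure by parameter variation.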
The hypothesis that forested catchments filter the information contained in the input water fluxes on the way to the output was confirmed for the precipitation and runoff fluxes of four forested catchments, one of which allowed the inspection of two different measurement resolutions.
The parallel assessment of information and complexity for precipitation and runoff data aggregated from a one-hour resolution up to 23 days revealed an optimal measurement resolution. Hourly data yield low information and complexity values, which indicates redundancy. A maximum in complexity identifies an optimal aggregation, at which the observations carry maximum information about the process. Aggregating further yields more information but also more randomness, which makes the data easier to represent and therefore less complex. Thus we found that runoff needs to be measured only every 2 to 3 days, whereas precipitation needs a resolution of 2 to 3 hours. These values are below the autocorrelation lengths of 4 days (precipitation) and 3.5 months (runoff).
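The scan over aggregation levels can be sketched as follows, using a synthetic strongly autocorrelated series (not the catchment data) and two simple stand-ins for the report's 14 measures: the entropy-rate estimate H(2) - H(1) as "information" and the mutual information 2*H(1) - H(2) between successive symbols as "complexity". All names and data here are illustrative assumptions.

```python
import math
import random
from collections import Counter

def aggregate(series, k):
    """Sum non-overlapping blocks of k samples (hourly -> k-hour totals)."""
    return [sum(series[i:i + k]) for i in range(0, len(series) - k + 1, k)]

def block_entropy(symbols, L):
    """Shannon entropy (bits) of length-L symbol blocks."""
    blocks = [tuple(symbols[i:i + L]) for i in range(len(symbols) - L + 1)]
    n = len(blocks)
    return sum(-c / n * math.log2(c / n) for c in Counter(blocks).values())

def info_and_complexity(series):
    """Entropy-rate estimate h = H(2) - H(1) ('information'/randomness) and
    mutual information I = 2*H(1) - H(2) between successive symbols
    (a simple stand-in for a complexity measure)."""
    m = sorted(series)[len(series) // 2]      # median-based binary partition
    s = [1 if x >= m else 0 for x in series]
    h1, h2 = block_entropy(s, 1), block_entropy(s, 2)
    return h2 - h1, 2 * h1 - h2

random.seed(0)
# Synthetic redundant 'hourly' input: an AR(1) process with a
# correlation length of roughly 50 samples (purely illustrative).
x, hourly = 0.0, []
for _ in range(20000):
    x = 0.98 * x + random.gauss(0.0, 1.0)
    hourly.append(x)

for k in (1, 6, 24, 72):
    h, c = info_and_complexity(aggregate(hourly, k))
    print(f"k={k:3d}  information h={h:.3f}  complexity I={c:.3f}")
```

Fine resolutions are redundant (low h, high I between neighbours); aggregating beyond the correlation length raises the information per value while the correlation structure, and hence this complexity stand-in, decays, mirroring the trade-off described above.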
Comparison with established sampling routines for catchments, in which the resolution has usually been chosen on heuristic grounds, demonstrated the power of this approach. In combination with black-box models such as neural nets, it also provides a powerful new method for independently assessing model performance and the appropriate degree of model sophistication. (final report 1998)
List of publications of this project