Energy systems modelling: models and the real world

By Dr. Chris Dent, School of Mathematics, University of Edinburgh

Introduction by Dr Mark Winskel, University of Edinburgh

Systems modelling has played an important role in the development of the Scottish Government’s draft Climate Change Plan and Energy Strategy, and is seen by policymakers in Scotland as a key enabler of its vision for policy integration. The kind of modelling adopted in Scotland has been used by many national governments, advisory bodies and international agencies in formulating responses to energy policy challenges, especially climate change. Despite its widespread use, however, there are some basic analytic and pragmatic concerns, as Dr Chris Dent explores.


This article will discuss key methodological issues in energy systems modelling, with a little help from some well-known experts…

George Box: “All models are wrong, but some are useful”

This is probably the most famous quotation about statistics (which might demonstrate that there is no such thing as a famous quotation about statistics). Box meant by this simply that no model represents the world perfectly, or provides outputs which are the whole truth about the world – one therefore needs to identify carefully the purposes for which a given model is useful and those for which it is not.

My overall thesis is thus that it is necessary when carrying out an applied modelling study to provide a clear statement about what the model is claimed to tell us regarding the real world, along with a logical argument to support that statement.

Ben Hobbs[1]: “Simple models for insight, complex models for quantification”

This statement, from a prominent researcher on the boundary of energy systems and operational research, summarises well a key issue in quantitative modelling. When a decision must be taken about a complex real-world system, the model must sometimes be correspondingly complex for the modelling study to have credibility in decision support – for instance, power generation operation or investment decisions depend on the detail of the network as well as on the generation options themselves. However, it can then be hard to understand how inputs and modelling assumptions feed through into model outputs.
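As a toy illustration of why network detail can matter – this is a minimal sketch of my own, not drawn from any study discussed here, and the two-bus system, costs and limits are all hypothetical – consider dispatching two generators against a single load, with and without a transmission limit between them:

```python
from scipy.optimize import linprog

# Two generators: g1 is cheap but remote from the load, g2 is expensive but local.
costs = [20.0, 50.0]    # generation costs in £/MWh (hypothetical)
caps  = [80.0, 100.0]   # generator capacities in MW (hypothetical)
load  = 100.0           # demand in MW, located at g2's bus

def dispatch(line_limit=None):
    """Least-cost dispatch; in this two-bus toy, the flow on the single
    line equals g1's output, so a line limit acts as a bound on g1."""
    g1_max = caps[0] if line_limit is None else min(caps[0], line_limit)
    res = linprog(c=costs,
                  A_eq=[[1.0, 1.0]], b_eq=[load],        # supply meets demand
                  bounds=[(0.0, g1_max), (0.0, caps[1])])
    return res.x, res.fun

print(dispatch())                # copper-plate model: g1 = 80 MW, cost 2600
print(dispatch(line_limit=60))   # with the network: g1 = 60 MW, cost 3200
```

Omitting the network constraint changes not just the cost but the dispatch itself, so insights from the simpler “copper-plate” model need not carry over to the real system.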

On the other hand, while it is clearly easier to understand what features of data and assumptions are driving the results of simpler models, one must ensure that insights gained are relevant to the real world rather than only to the model. This brings to mind the quote attributed (possibly incorrectly[2]) to Einstein that “Everything should be made as simple as possible, but not simpler”.

It remains the case, however, that for any model (simple or complex) a key part of making it useful (in Box’s terms) is an honest assessment of its capabilities.

Niels Bohr: “Prediction is difficult, especially about the future”

This, to me, leads to several key issues.

  1. The nature of applied modelling. I would argue that any applied modelling study must be aiming to predict some feature of the real world – otherwise how can the study be called applied, or be of any use in supporting decision making? The word ‘predict’ is used here in a very general sense – for instance it might be about understanding at a qualitative level what the key issues are in determining a given real world outcome, or about reconstructing historic events at times or places where observations are not available.
  2. It is also important to recognise the nature of scientific prediction. It is not, in general, sufficient to make point predictions without quantifying uncertainty. There is a very great difference between making a point prediction (i.e. the most likely value) of 10 for a future observation while being confident that it will lie between 9 and 11, and making the same point prediction of 10 while judging that the outcome could be anywhere between 1 and 100. Decisions based on that central prediction could be very different depending on which of these quantifications of uncertainty is made; a toy sketch of this appears after this list.
  3. Issues in quantifying and managing uncertainty are very different on operational and planning timescales. For instance, when operating an engineering system one accrues relevant data, for model calibration or for constructing statistics of forecast error, relatively rapidly. On longer planning timescales (where one might be looking years or decades ahead) many more uncertainties enter the picture, including imperfect knowledge of the system background and of technology development, and relevant historic data accrue much more slowly. Indeed, for many purposes one might not have directly relevant historic data at all, and a substantial degree of expert judgment is then inevitably required to determine model inputs and quantify uncertainty in real-world predictions.
  4. As a consequence of the uncertain relationship between model outputs and the world, conventional ideas of single optimal solutions do not apply when thinking about the real world on planning timescales – the equivalent is being confident that a policy or decision will perform well in the real world, along with a systematic argument to support this.
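To make point 2 concrete, here is a minimal sketch – all distributions and costs are hypothetical, chosen only for illustration – of how the same point prediction of 10 can lead to very different decisions once uncertainty is quantified. A simple capacity choice is made by minimising build cost plus expected shortfall cost over simulated demand:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 50_000

# Two predictive distributions with the same point (modal) prediction of 10,
# but very different uncertainty (both hypothetical, for illustration only).
tight = rng.triangular(9.0, 10.0, 11.0, size=N)    # confident: between 9 and 11
wide  = rng.triangular(1.0, 10.0, 100.0, size=N)   # could be anywhere from 1 to 100

build_cost, shortfall_cost = 1.0, 20.0             # hypothetical unit costs

def best_capacity(demand):
    """Capacity minimising build cost plus expected cost of unserved demand."""
    candidates = np.linspace(0.0, 100.0, 501)
    expected_cost = [build_cost * b
                     + shortfall_cost * np.maximum(demand - b, 0.0).mean()
                     for b in candidates]
    return candidates[int(np.argmin(expected_cost))]

print(best_capacity(tight))   # close to 11: little headroom is needed
print(best_capacity(wide))    # far above 10: the wide uncertainty drives the decision
```

The point prediction is identical in both cases; only the quantification of uncertainty differs, and it is the latter that determines the decision.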

Darth Vader: “Don’t be too proud of this technological terror you’ve constructed”

This is a good reminder that as modellers we should take a realistic and honest view of the strengths and weaknesses of our work, and certainly not take an unrealistic salesperson’s perspective. In the original Star Wars film, Tarkin and Motti did not recognise the weaknesses of their Death Star, and the consequences for them were not good.

Specific issues to guard against in presenting one’s own work include justifying an approach as the best available or an industry standard (this does not necessarily mean that it is fit for the purpose at hand), and choosing performance metrics which unduly favour the chosen approach over others.

Juvenal: “Quis custodiet ipsos custodes?” (“Who will watch the watchmen?”)

The Roman poet Juvenal was probably not thinking of peer review processes when he wrote this line in his Satires – but the quotation is nevertheless very apt. The reliability of any peer review process is strongly dependent on how well the reviewers are selected and managed, and on the way in which the authors of a document act upon their comments. In addition, even the most appropriate reviewers will occasionally miss something or otherwise make a mistake.

There are specific causes for concern in circumstances where the authors of a document or study (or their organisation) select the reviewers themselves (and sometimes pay them) – great integrity is then required on the part of all concerned, including willingness on the part of the organisation commissioning the review to regard a correctly negative review as being to their benefit. Another circumstance in which peer review can become unreliable is where a community is quite closed, and does not seek relevant expertise from other communities which have relevant knowledge.

As an individual looking to understand others’ modelling studies, the safe course of action is to exercise one’s own judgment, rather than trusting the authors or reviewers entirely – of course some authors or forms of publication will be more worthy of trust than others. One must also understand that knowledge develops over time, so what was deemed best practice some years ago might not remain so.

In order for people to exercise their own judgment, a sufficient level of detail must be supplied to them by a study’s authors. This can be a serious issue in public policy debates, where sometimes quite limited detail of study methodology is placed in the public domain – it is then very hard for the modelling study to play a proper role in the debate, as no one external to the organisation designing the study can critique it, and use of its conclusions by anyone external relies on a very high degree of trust.

Brian Clough[3]: “false confidence” and “true confidence”

This great football manager talked about taking away his players’ false confidence and giving them true confidence based on a proper understanding of their abilities. Unfortunately, many studies at some point implicitly rely on an unsupported assertion that their results can say something meaningful about the world, in which case confidence in their use may be false. True confidence can be gained by having logical arguments about what the study can say regarding the real world, as discussed above.

Conclusion

This article advocates an approach to energy systems modelling which is quite different from much current practice. The key, as noted above, is that where models are used in decision support it is necessary to have logical arguments regarding what the modelling says about the real world.

A key part of this is deciding an appropriate level of detail in a given modelling study. Additional detail is typically added with an aim of learning more about the world, and there may well be a minimal level of detail below which the model cannot say anything about the real system. On the other hand, as the complexity of a model increases, it can become harder to assess what it is saying about the world – for instance quantifying uncertainty in predictions typically requires multiple model runs, with greater complexity implying a need for more runs but also making each run take longer.
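As a minimal sketch of what such multiple-run uncertainty quantification looks like – the model, inputs and distributions below are all hypothetical stand-ins of my own – each Monte Carlo sample of the judged input uncertainty requires one run of the system model:

```python
import numpy as np

rng = np.random.default_rng(1)

def system_model(fuel_price, demand_growth):
    """Stand-in for one run of an expensive energy system model
    (a hypothetical toy relationship, for illustration only)."""
    return 50.0 + 3.0 * fuel_price + 400.0 * demand_growth

# Judgment-based uncertainty on model inputs (hypothetical distributions).
n_runs = 1000
fuel_price    = rng.normal(25.0, 5.0, size=n_runs)     # £/MWh
demand_growth = rng.uniform(0.0, 0.02, size=n_runs)    # annual fraction

outputs = system_model(fuel_price, demand_growth)   # one model run per sample
print(np.percentile(outputs, [5, 50, 95]))          # a band, not a single number
```

When each real run takes hours rather than microseconds, the number of affordable runs limits how well output uncertainty can be characterised – which is precisely the tension between complexity and assessability described above.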

This comprehensive analysis of the relationship between model and real world requires significant additional resource, and may also require quite different expertise from that needed to build a system model. In addition, it is necessary to confront explicitly the subjective judgments involved in determining model structure and model inputs. All this should be considered right from the start of a modelling study, with the system modelling and uncertainty quantification designed together, in order to maximise learning about the real world. This analysis is not easy, but carrying it out means that decisions can be based on the state of knowledge about the world rather than about the model – and hence that better decisions are made on the basis of modelling evidence.


The author thanks colleagues (H. Du, B. Hobbs, A. Wilson and M. Winskel) who have commented on drafts of this article, and acknowledges discussions with many further colleagues which have helped shape the thinking which it contains. The views expressed are the author’s own, and should not be assumed to represent those of any other person or organisation (including the Galactic Empire).

[1] This is not a direct quotation, but rather a combination of ideas expressed by Ben Hobbs.

[2] See http://quoteinvestigator.com/2011/05/13/einstein-simple/ for a quite scarily exhaustive investigation of this point.

[3] As with the Ben Hobbs quotation, this is not a direct quotation.

 


Responses to Energy systems modelling: models and the real world


  1. Thanks for an interesting article, Chris – you make very salient points. The earlier version of MARKAL seemed to rely upon linear optimisation, which seems a misreading of economic thought, especially the emerging insights of evolutionary and behavioural economics. Optimisation fits better with the neo-classical view of economic processes, which we’ve now come to see as operating within a rather narrow set of conditions. Neo-classical economics fails to explain why customers don’t switch energy suppliers when there are cheaper tariffs available, but behavioural economics is more useful here. When it comes to technological change, evolutionary economics turns out to be much more useful than the neo-classical paradigm. The SG has been using the TIMES model in its energy and climate strategy, but it’s not one I’ve looked at, so I don’t know whether it is in the same vein as MARKAL or has successfully responded to the critiques of the latter. Have you applied your thinking above to the TIMES model, and could you say anything about that at this stage? I’d also be interested to know whether you’ve been conducting uncertainty analysis of the TIMES model?

    • Chris Dent says:

      A brief reply on this – thanks for your kind words on my article! My understanding is that TIMES is an evolution of MARKAL (it stands for “The Integrated MARKAL-EFOM System”.) However my point is rather broader – when thinking on planning timescales there are many aspects of the real world (including those you name) which cannot be considered in tractable optimization problems, or where accurate data are not available. The question is how, given this inconvenient truth, we can best use modelling to help make decisions which we are confident will be good ones _in the real world_.

  2. Nice article! Along these lines, I recently published a paper with several colleagues from UCL titled “Formalizing best practice for energy system optimization modelling”: http://www.sciencedirect.com/science/article/pii/S0306261917302192

    We tried to cover this topic in some detail.
