EME 807
Technologies for Sustainability Systems

3.1. Purpose of metrics and how they are selected



One of the challenges in the sustainability assessment of technologies and other elements of anthropogenic systems is designing meaningful and quantifiable metrics. Because sustainability frameworks are built on very diverse sets of environmental, social, and economic values, it is difficult to bring them to common terms within a unified model.

When we ask the question "Is this process sustainable?", we often do not get a "yes/no" answer. Some parts of the system may benefit sustainable development, some parts may be in conflict with it, and some parts may be rather neutral or flexible. But how can we tell where we are on the scale of sustainability assessment? Further, if we do alter certain parts of the system, how much shift do we create towards sustainability goals? Can we measure the impact of those changes?

All these questions prompted attempts to develop quantifiable metrics or indicators, which would allow researchers and policy makers to make more accurate comparisons between different paths of system development and make better-justified decisions.

The existing methods for evaluating products and technologies in terms of environmental impact and sustainable development are numerous, and the scope of their applications and purposes is extremely wide. Metrics have been and continue to be developed by different agencies, companies, and researchers to address a variety of issues, often within different frameworks. As a result, these methods of assessment may use widely diverging rationales, terminologies, and approaches, and often come up with contrasting results. Consistency and compatibility are among the most difficult issues.

In this lesson, it would hardly be possible to learn all of these methods in detail, especially because most models and evaluation approaches are subject-specific and are better learned in the context of a particular technology. Instead, we will take a tour of several evaluation platforms and try to distill the most important questions to address as we proceed to the characterization of particular technological areas in further lessons of this course.

The numerous technologies existing in the world differ in many aspects (e.g., profitability, social popularity, efficiency, scale, local need, resource consumption, etc.). However, our main goal will be to distinguish and characterize technologies in terms of sustainable development. In the long run, the main question we aim to answer is: Is the technology project sustainable or not? What are the criteria for "good" and "evil" here?

How are metrics developed?

What exactly is a metric? A metric is a system of measurement that includes:

  • the item being measured (what we measure),
  • the method of measurement (how we measure), and
  • the inherent value associated with the metric (why we measure, or what we intend to achieve by this measurement).
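As a hypothetical illustration (the metric, data keys, and numbers below are invented for this sketch, not taken from the lesson), the three components of a metric can be captured in a small data structure:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Metric:
    """A metric bundles what is measured, how, and why."""
    item: str                         # what we measure
    unit: str                         # how we measure (units of measurement)
    purpose: str                      # why we measure (the value behind it)
    measure: Callable[[dict], float]  # maps raw system data to a number

# Hypothetical example: CO2 intensity of a power plant
co2_intensity = Metric(
    item="CO2 emissions per unit energy",
    unit="kg CO2 / kWh",
    purpose="compare the climate impact of generation options",
    measure=lambda d: d["co2_kg"] / d["energy_kwh"],
)

plant_data = {"co2_kg": 450_000.0, "energy_kwh": 1_000_000.0}
print(co2_intensity.measure(plant_data))  # 0.45
```

Writing the "why" down next to the "what" and "how" keeps the decision focus explicit, which is exactly the point made below: without a purpose, metrics are simply data.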

According to Werner and Souder [Werner and Souder, Research-Technology Management, 40(2), 1997, 34-42], the choice of an appropriate measurement metric depends on the user’s needs and purpose, the area of study, and the available data. Setting the purpose of evaluation is the key. Without it, the metrics are simply data. There should be a decision focus.

Metrics should help answer a question, and the answer in turn would justify the recommendation for future action.

Metrics are commonly categorized as quantitative (objective or subjective), qualitative, or integrated. The type is often determined by the availability and accuracy of raw data. Data must be accessible and affordable; otherwise, assumptions and surrogate information will inevitably undermine the adequacy and validity of the assessment.

Standardization and coherence in the rules for constructing technology evaluation metrics are yet to be achieved. In the broad area of science and technology, the practice of creating and using metrics takes the form of a "menu": evaluators select a combination of measures, data sources, and instruments that will address their specific objectives and needs.

The following issues make metric design less than straightforward:

  • A metric may be composed of a single quantity or of a more complex set of measures (for example, indexes and "macro" metrics).
  • Metrics in science and technology evaluation are designed to measure a variety of activities, events, and phenomena, some simple and short-lived, others highly complex and enduring over an extended time frame, and therefore use different units and scales.
  • The absence of a single, universal building block increases the role of subjective judgment in the construction and selection of metrics, which sometimes calls for better formalization.
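To make the single-quantity versus "macro" distinction concrete, here is a minimal sketch of a composite index built as a weighted average of normalized sub-metrics. The sub-metrics, bounds, and weights are invented for illustration; in a real assessment each of these choices would have to be justified.

```python
def composite_index(values, bounds, weights):
    """Combine several sub-metrics into a single index in [0, 1].

    values:  raw measurements, one per sub-metric
    bounds:  (worst, best) pair per sub-metric, used for normalization
    weights: relative importance; normalized internally to sum to 1
    """
    total_w = sum(weights)
    index = 0.0
    for v, (worst, best), w in zip(values, bounds, weights):
        score = (v - worst) / (best - worst)   # 0 = worst, 1 = best
        score = min(max(score, 0.0), 1.0)      # clip out-of-range data
        index += (w / total_w) * score
    return index

# Hypothetical technology scored on cost ($/kWh), emissions (kg/kWh), jobs
idx = composite_index(
    values=[0.12, 0.45, 30],
    bounds=[(0.30, 0.05), (1.0, 0.0), (0, 100)],  # lower cost/emissions = better
    weights=[1, 2, 1],
)
print(idx)  # ~0.53
```

Note how every subjective choice (the weights, the worst/best bounds) is visible in the code; this transparency is one argument for formalizing metric construction.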

Designing comprehensive universal metrics that would work as a "magic crystal" for decision makers is difficult, if possible at all, because different stakeholders care about different impacts. Most analytical approaches separate contexts and instead develop multi-metric frameworks for assessment. The table below lists some examples of metrics used for technology evaluation in various contexts. This list is not exhaustive, by any means, and in each assessment project the metrics must be justified and modified for the specific research purpose.

Some Examples of Informative Metrics

Social:
  • Health impacts
  • New technology acceptance
  • Education opportunities and needs
  • Employment
  • GDP
  • Gender impacts
  • Rural development
  • Energy access
  • Safety and security
  • Energy security
  • Food security
  • Cultural preservation

Environmental:
  • Greenhouse gas emissions
  • Primary energy use
  • Biodiversity
  • Water use and impact
  • Air quality
  • Land use impacts
  • Soil health

Economic:
  • Cost of energy
  • Employment
  • Industry expansion
  • Trade impacts
  • Energy imports and security
  • Market demand
  • Climate resilience
  • Social license to operate
  • Infrastructure "lock-in"
  • Technology innovation "lock-out"

[Source: National Renewable Energy Laboratory]

While working on this course, you should feel free to modify existing metrics or create new ones for the specific needs of your assessment. There are no "mandatory" criteria for evaluation - it all depends on the purpose and the message you are trying to deliver.

Lagging versus leading metrics

  • Lagging metrics are those that indicate what has already happened (past). For example:
    • amount of soil eroded
    • electricity cost per kWh
    • average annual temperature
    • measured battery efficiency
  • Leading metrics are those that indicate what may happen (future). For example:
    • deforestation rate
    • input / output ratio
    • wind direction and speed
  • Lagging metrics are mainly actual measures; leading metrics are usually indicators marking rates or trends.
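As a small illustration of the distinction (the forest-cover numbers are made up), a lagging metric reports an accumulated quantity from past observations, while a leading metric can be derived from the trend in those observations and used to project forward:

```python
# Hypothetical yearly forest cover observations, in hectares
years = [2018, 2019, 2020, 2021]
cover = [1000.0, 970.0, 945.0, 915.0]

# Lagging metric: total forest already lost over the record (past)
total_lost = cover[0] - cover[-1]                   # 85.0 ha

# Leading metric: average deforestation rate, used to anticipate the future
rate = total_lost / (years[-1] - years[0])          # ha per year
projected_2026 = cover[-1] - rate * (2026 - years[-1])

print(total_lost)       # what has happened
print(rate)             # the trend
print(projected_2026)   # what may happen if the trend holds
```

The projection is only as good as the assumption that the trend continues, which is why leading metrics are indicators rather than measurements.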

Complexity levels of metrics

(In parentheses, some examples of metrics are given for the case of a wastewater treatment facility.)

  • Level 1 – Measure a technology’s level of compliance with regulations or its conformance with industrial performance standards (e.g., output concentration of hazardous chemicals versus EPA tolerance standards).
  • Level 2 – Measure the inputs, outputs, and performance of a technology during system operation (e.g., cost of purified water per gallon, amount of solid waste generated per year, treatment efficiency).
  • Level 3 – Evaluate the potential impact of the technology and associated operation on facility personnel, the surrounding environment, and communities (e.g., eutrophication, local air quality trends, carbon emission cost of facility operation).
  • Level 4 – Evaluate the lifecycle effects of the technology. This will involve Level 1-3 metrics as necessary to assess the impact of the technology’s manufacturing, operation, and disassembly stages (e.g., amounts of raw materials consumed for building the facility, % carbon emissions related to transportation and infrastructure maintenance, water filter life and disposal).
  • Level 5 – Assess how the technology-related activities affect the sustainability balance at the scale of society, region, or planet (if the technology is scaled-up) (e.g., % renewable materials or energy used at different stages of operation, community quality-of-life indicators).
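A Level 1 metric can be as simple as a pass/fail comparison of measured outputs against regulatory limits. Here is a minimal sketch for the wastewater example; the chemicals and limit values are invented placeholders, not actual EPA tolerance standards.

```python
# Hypothetical discharge limits, mg/L (NOT actual EPA values)
limits = {"lead": 0.015, "nitrate": 10.0, "benzene": 0.005}

def compliance(measured, limits):
    """Return a per-chemical pass/fail verdict versus the tolerance standard."""
    return {chem: measured[chem] <= limit for chem, limit in limits.items()}

# One sample of treated output, mg/L (hypothetical)
sample = {"lead": 0.010, "nitrate": 12.5, "benzene": 0.001}

report = compliance(sample, limits)
print(report)                 # nitrate exceeds its limit
print(all(report.values()))   # False: the facility is out of compliance
```

Higher-level metrics (Levels 2-5) would build on such measurements but aggregate them over costs, lifecycles, and community-scale impacts.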

In the above hierarchy, Levels 1 and 2 may be sufficient for understanding the promise of technical performance. These levels would be the primary guides in relationships between the research and development sector and industry, which look for reliable and efficient systems. An ecological perspective would involve Level 3 in order to understand and track environmental impacts. Furthermore, sustainability analysis would have to involve Level 4 at the community scale and Level 5 at the economy scale, since only via thorough lifecycle assessment and system analysis is it possible to identify the correct targets for metric design.

One way to understand whether the choice of metric is adequate to the purpose is sensitivity analysis: vary the impact factors and see how the metrics respond. Simulating a series of "what-if" and "what-if-not" scenarios will lead you to a proper metric model and well-defined boundaries.
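A one-at-a-time sensitivity sketch of this idea (the cost model and factor values are hypothetical): perturb each input by a fixed fraction and record how much the metric swings. Factors that barely move the metric may not deserve a place in it, while dominant factors define the boundaries of the model.

```python
def energy_cost_metric(params):
    """Toy levelized-cost-style metric, $ per kWh (hypothetical model)."""
    return (params["capital"] * params["rate"] + params["fuel"]) / params["output"]

# Hypothetical baseline: capital $, annual rate, fuel $/yr, output kWh/yr
base = {"capital": 1_000_000.0, "rate": 0.08, "fuel": 50_000.0, "output": 2_000_000.0}

def sensitivity(metric, base, delta=0.10):
    """Vary each factor by +/- delta and report the metric's relative swing."""
    m0 = metric(base)
    swings = {}
    for key in base:
        up = dict(base, **{key: base[key] * (1 + delta)})
        down = dict(base, **{key: base[key] * (1 - delta)})
        swings[key] = (metric(up) - metric(down)) / m0
    return swings

for factor, swing in sensitivity(energy_cost_metric, base).items():
    print(f"{factor}: {swing:+.1%}")
```

Running the "what-if" loop over every factor at once is exactly the scenario simulation described above, just automated.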

Probing Question:

Which of the following metrics would be suitable for comparing firewood and coal as heating fuels in sustainability analysis? (Check all that apply.)

(A) Heat of combustion
(B) CO2 emission per unit energy released
(C) Cost of transportation per year
(D) Local air quality measures
(E) Consumer's price per unit weight of fuel

Click for answer.

All metrics (A) through (E) would need to be included in a sustainability analysis. A is a characteristic of technical performance; B is a characteristic of environmental impact; C and E are parts of the economic analysis; and D is a factor affecting societal well-being.

Supplemental Reading:

Web article: Geisler, E., The metrics of technology evaluation: where we stand and where we should go from here, 24th Annual Technology Transfer Society Meeting, July 15-17, 1999.

This reading is optional. The article provides more discussion on challenges and purpose of technology evaluation. Also, it presents a wide pool of metrics that are relevant to different categories of assessment.

Sustainability Indicators

The framework of sustainability indicators deals with a much wider context than just technology assessment. It was developed for assessing the socio-ecological development of communities and their associated resources and services, and technology can certainly be part of that context. Applying this framework to technology, we can see how technical performance metrics (such as efficiency and useful output) are connected to the economic, social, and environmental specifics of the locale, thus allowing us to estimate the promise of this technology in a local setting.

Systems of sustainability indicators are typically customized to a particular case study. One of the rationales is to take a more detailed approach to assessing sustainability and move beyond the traditional three-pillar approach, which conventionally classifies factors within social, economic, and environmental domains. Indicators can cut across those boundaries and address the specific needs of an assessment.

Good indicators must be:

  1.  relevant to the problem or system considered in assessment;
  2.  understandable and easily interpretable by stakeholders and societies that may use the assessment;
  3.  reliable; i.e., based on trustworthy information and also sensitive to data variation;
  4.  built on accessible information (not something that is hidden or will become available in the future).

Collecting the data and information needed to calculate sustainability indicators can be a big task. Much of the data may be available in local, state, and federal reports, many of them online these days; in other cases, you may need to contact the respective agencies and offices to request missing information, depending on your specific interest.

One of the criticisms of sustainability metrics and indicators is that they attempt to encapsulate an array of diverse processes and interactions in a few simple measures. Is that simplification fair? And what is the risk of using those simplified measures for making rational decisions?

In fact, designing metrics is a natural approach to dealing with the complex world in manageable bits. It is common for scientists to approach a complex system by breaking it down and studying its components separately before studying how they work together. For example, in the natural sciences, scientists have dealt with complex ecosystems for years, and specific indicators have long been used as tools for gauging ecosystem health and development (Bell and Morse, 2008).

According to Slobodkin (1994):

"Any simplification limits our capacity to draw conclusions, but this is by no means unique to ecology. Essentially, all science is the study of either very small bits of reality or simplified surrogates for complex whole systems. How we simplify can be critical. Careless simplification leads to misleading simplistic conclusions."

To learn more about sustainability indicators, please turn to the following readings:

Supplemental Reading:

Book: Bell, S. and Morse, S., Sustainability Indicators: Measuring the Immeasurable? 2nd Ed. London; Sterling, VA, 2008.

This book offers a comprehensive discussion of the nature and purpose of sustainability indicators and presents a number of great examples of their application.

UN Report: Indicators of Sustainable Development: Guidelines and Methodologies, 2007. Third Edition.

EPA Report: Fiksel, J., Eason, T., Frederickson, H., A Framework for Sustainability Indicators at EPA, National Risk Management Laboratory, EPA 2012