Charity Delmus Alupo

PhD Thesis Title: Explainable Machine Learning For Residential Electricity Demand Insights

Supervisor: Associate Professor Paula Carroll

External Examiner: Dr Patrick Healy, University of Limerick






Abstract

Residential electricity usage is flexible, stochastic, and varies by geographic location. With smart meter data, policymakers, utility providers, and even consumers can gain insights into residential electricity demand using machine learning. Our systematic literature review shows that machine learning is commonly applied to several residential application areas, including load forecasting, demand response, and load profile analysis. However, the review also revealed a research gap: only a few journal papers use explainable machine learning to elaborate on the rules by which a model arrives at a particular decision. Although deep learning techniques are increasingly used because of their high performance, they are not explainable to stakeholders such as electricity customers, utility providers, and policymakers, who need to interpret the results easily before adopting the models. Explainable or interpretable models boost an application’s persuasiveness, customer comprehension, and confidence in the service.

In this work, we aim to study the electricity demand of households to understand different types of users and their characteristics. We ask two research questions: 1) What are the main machine learning techniques used in residential load management? 2) Which explainable machine learning techniques can be used to understand the characteristics of the electricity usage of groups of electricity consumers? We begin with a systematic literature review that has two goals: first, to identify, classify, and review recent academic journal papers that apply machine learning to residential load management; second, to present research gaps and uncover opportunities for future work in residential load management. Various techniques have been used to study household electricity usage, including the now-popular neural networks; despite their high performance, neural networks are not explainable. Explainable machine learning offers ample opportunities to provide recommendations for residential demand-side management. We use decision trees in combination with clustering techniques such as K-Means and K-Medoids to provide insights that are useful for consumers, policymakers, and other stakeholders. We also strive for an end-to-end explainable pipeline, from data and features through to modelling and evaluation, delivering insights that are easy to visualise and interpret, following the Cross Industry Standard Process for Data Mining (CRISP-DM).
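
For illustration, a minimal sketch of such a pipeline is shown below, written under assumed data shapes and feature choices rather than the thesis's exact feature set: daily smart meter load profiles are clustered with K-Means, and a shallow decision tree is then fitted to explain cluster membership in terms of a few interpretable features.

# Minimal sketch, assuming half-hourly smart meter readings have already been
# aggregated into one average daily load profile per meter (48 values).
# The random data, feature names, and index ranges are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
profiles = rng.random((3669, 48))           # placeholder: one daily profile per meter

# Step 1: group meters into load-profile clusters.
kmeans = KMeans(n_clusters=4, random_state=0, n_init=10)
cluster_labels = kmeans.fit_predict(profiles)

# Step 2: derive simple, human-readable features per meter.
features = np.column_stack([
    profiles.mean(axis=1),                  # average consumption
    profiles.max(axis=1),                   # peak consumption
    profiles[:, 34:44].mean(axis=1),        # evening usage (approx. 17:00-22:00)
])
feature_names = ["mean_load", "peak_load", "evening_load"]

# Step 3: fit a shallow decision tree whose rules explain cluster membership
# in terms of the interpretable features, and print those rules.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(features, cluster_labels)
print(export_text(tree, feature_names=feature_names))

The printed rules (for example, splits on evening or peak load) are the kind of transparent, visualisable output that supports the explanations discussed below.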

This study explores the use of explainable machine learning models, particularly decision trees, in the context of residential electricity demand. By analysing a sample of 3669 meters from an Irish smart meter dataset, we demonstrate the effectiveness of decision trees in exploring and characterising residential electricity usage, with the potential to support demand response programs by identifying target groups.

We find that load data alone could suffice for the clustering and profiling of customers, even without survey data, indicating the potential for simpler models that rely on readily available data.

Our work highlights the potential of decision trees to derive explanations for customer groups and to understand how residential customers consume electricity, which has implications for the development of more effective demand response programs and the provision of electricity usage recommendations. By providing clear and transparent explanations for how models arrive at their predictions, explainable machine learning can improve trust, accountability, and ethical considerations in AI applications across a variety of domains, including healthcare, finance, law enforcement, and, in this case, residential electricity. Ultimately, the development and adoption of explainable machine learning can unlock the full potential of AI while mitigating the risks associated with black-box decision-making.