Explainable Bayesian networks : taxonomy, properties and approximation methods

dc.contributor.advisor: De Waal, Alta
dc.contributor.email: inekederks1@gmail.com
dc.contributor.postgraduate: Derks, Iena Petronella
dc.date.accessioned: 2024-07-30T13:11:52Z
dc.date.available: 2024-07-30T13:11:52Z
dc.date.created: 2024-09-03
dc.date.issued: 2024-07-22
dc.description: Thesis (PhD (Mathematical Statistics))--University of Pretoria, 2024.
dc.description.abstract: Technological advances have integrated artificial intelligence (AI) into various scientific fields, necessitating an understanding of AI-derived decisions. The field of explainable artificial intelligence (XAI) has emerged to address transparency concerns, offering both transparent models and post-hoc explanation techniques. Recent research emphasises the importance of developing transparent models, with a focus on enhancing their interpretability. Bayesian networks are an example of a transparent model that would benefit from enhanced post-hoc explainability. This research investigates the current state of explainability in Bayesian networks. The literature distinguishes three categories of explanation: explanation of the model, of the reasoning, and of the evidence. Drawing upon these categories, we formulate a taxonomy of explainable Bayesian networks. Following this, we extend the taxonomy to include explanation of decisions, an area recognised as neglected within the broader XAI research field. This includes using the same-decision probability, a threshold-based confidence measure, as a stopping and selection criterion for decision-making. Additionally, acknowledging computational efficiency as a concern in XAI, we introduce an approximate forward-gLasso algorithm for efficiently computing the most relevant explanation. We compare the proposed algorithm with a local, exhaustive forward search. The forward-gLasso algorithm demonstrates accuracy comparable to the forward search while reducing the average neighbourhood size, leading to computationally efficient explanations. All coding was done in R, building on existing packages for Bayesian networks. As a result, we develop an open-source R package capable of generating explanations of evidence for Bayesian networks. Lastly, we demonstrate the practical insights gained from applying post-hoc explanations to real-world data, such as the South African Victims of Crime Survey 2016–2017.
dc.description.availability: Unrestricted
dc.description.degree: PhD (Mathematical Statistics)
dc.description.department: Statistics
dc.description.faculty: Faculty of Economic and Management Sciences
dc.identifier.citation: *
dc.identifier.doi: 10.25403/UPresearchdata.26403883
dc.identifier.other: S2024
dc.identifier.uri: http://hdl.handle.net/2263/97333
dc.identifier.uri: https://doi.org/10.25403/UPresearchdata.26403883.v1
dc.language.iso: en
dc.publisher: University of Pretoria
dc.rights: © 2023 University of Pretoria. All rights reserved. The copyright in this work vests in the University of Pretoria. No part of this work may be reproduced or transmitted in any form or by any means, without the prior written permission of the University of Pretoria.
dc.subject: UCTD
dc.subject: Sustainable Development Goals (SDGs)
dc.subject: Explainable artificial intelligence
dc.subject: Bayesian networks
dc.subject: Post-hoc explanation
dc.subject: Same-decision probability
dc.subject: Most relevant explanation
dc.title: Explainable Bayesian networks : taxonomy, properties and approximation methods
dc.type: Thesis
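
Two quantities named in the abstract, the same-decision probability and the most relevant explanation, are not defined in this record; the definitions below follow the standard formulations in the literature and are not quoted from the thesis. For a decision taken by testing whether \Pr(d \mid e) \ge T while the variables in \mathbf{H} remain unobserved, the same-decision probability is usually written as

SDP(d, e, \mathbf{H}, T) = \sum_{\mathbf{h}} \mathbb{1}\big[\Pr(d \mid e, \mathbf{h}) \ge T\big] \, \Pr(\mathbf{h} \mid e),

i.e. the probability that observing the hidden variables would leave the decision unchanged. The most relevant explanation selects the partial instantiation x of the target variables that maximises the generalised Bayes factor

GBF(x; e) = \frac{\Pr(e \mid x)}{\Pr(e \mid \bar{x})}.

The abstract also notes that the accompanying open-source R package builds on existing Bayesian network packages. That package is not named in this record, so the sketch below is only a minimal, hypothetical illustration of the kind of evidence query on which explanations of evidence are built, using the existing bnlearn and gRain packages and the learning.test data set shipped with bnlearn.

# Minimal sketch (not the thesis's package): fit a discrete Bayesian network
# with bnlearn, then query posteriors given evidence via gRain.
library(bnlearn)   # structure and parameter learning
library(gRain)     # exact inference on a junction tree

data(learning.test)                       # small discrete example data set from bnlearn
dag <- hc(learning.test)                  # learn a structure by hill climbing
fit <- bn.fit(dag, learning.test)         # estimate the conditional probability tables

jt <- compile(as.grain(fit))              # convert to a gRain network and build the junction tree
prior     <- querygrain(jt, nodes = "B")  # marginal of B before any evidence
with_ev   <- setEvidence(jt, nodes = "A", states = "a")   # enter the finding A = a
posterior <- querygrain(with_ev, nodes = "B")             # marginal of B after the evidence

# Comparing prior$B with posterior$B shows how the finding shifts belief in B,
# the raw ingredient that explanation-of-evidence methods reason about.
prior$B
posterior$B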

Files

Original bundle

Name: Derks_Explainable_2024.pdf
Size: 3.31 MB
Format: Adobe Portable Document Format
Description: Thesis

License bundle

Name: license.txt
Size: 1.71 KB
Format: Item-specific license agreed upon to submission