Item metadata

dc.contributor.advisor: Nederhof, Mark-Jan
dc.contributor.advisor: Cazzanti, Luca
dc.contributor.advisor: Penzotti, Julie Elizabeth
dc.contributor.advisor: Linda, Ondrej
dc.contributor.author: Danielson, Matthew
dc.coverage.spatial: xviii, 198 p. [en_US]
dc.date.accessioned: 2021-05-11T16:04:53Z
dc.date.available: 2021-05-11T16:04:53Z
dc.date.issued: 2021-06-30
dc.identifier.uri: https://hdl.handle.net/10023/23160
dc.description.abstract: Hidden Markov Models (HMMs) are a well-known type of model for many varieties of sequential data. There exist several algorithms for learning HMMs: a variant of the expectation-maximization (EM) algorithm known as the Baum-Welch method, Markov Chain Monte Carlo (MCMC), and Variational Inference (VI). This third method is less frequently used, yet it has interesting properties with regard to convergence, sparsity, and interpretation that are worth further exploration. HMMs are used as explanatory models in the field of marketing science, where one of the goals is to interpret the model structure to understand customer behavior. This thesis explores the use of HMMs trained with VI to build an interpretable classification model for customer churn on a dataset consisting of call data records from a mobile telecommunications company. We first provide an introduction to VI for HMMs and then derive a mixture of HMMs (mHMMs) using VI. A mixture of HMMs is then shown to be quite capable of performing unsupervised clustering of sequential data. Next, we present the design and interface of a new open source library for training HMMs and mHMMs with VI and EM. We show that this library achieves excellent performance while still providing an intuitive interface in the Python programming language. We then examine the performance of classifiers using HMMs trained with VI and EM on several classification datasets. The results from these experiments are then used to build and test several simple classification models to predict churn for the provided dataset. As these models are shown to have poor performance, we train a more traditional machine learning model based on gradient boosted trees and evaluate the interpretability, stability, and performance of this model over a subsequent 18 months of data. [en_US]
dc.language.iso: en [en_US]
dc.publisher: University of St Andrews
dc.rights: Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject: Hidden Markov models [en_US]
dc.subject: Variational inference [en_US]
dc.subject: Churn [en_US]
dc.subject: Mixtures of hidden Markov models [en_US]
dc.subject: Machine learning [en_US]
dc.subject: Marketing science [en_US]
dc.title: Hidden Markov models with variational inference in marketing science [en_US]
dc.type: Thesis [en_US]
dc.contributor.sponsor: Amplero, Inc. [en_US]
dc.contributor.sponsor: Zillow Group, Inc. [en_US]
dc.type.qualificationlevel: Doctoral [en_US]
dc.type.qualificationname: DEng Doctor of Engineering [en_US]
dc.publisher.institution: The University of St Andrews [en_US]
dc.identifier.doi: https://doi.org/10.17630/sta/63
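
The abstract above refers to training HMMs with EM (Baum-Welch) and VI, and to scoring sequences for classification and clustering of customer histories. As a rough illustration of that kind of workflow only, the sketch below uses the third-party hmmlearn package as a stand-in, since the thesis's own open-source library is not named in this record; the toy data, the 3-state choice, and all class names and parameters shown are assumptions of this sketch, not the thesis's API or dataset.

import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)

# Toy sequences standing in for per-customer event histories: one numeric
# feature per time step. hmmlearn expects all sequences concatenated into a
# single array plus a list of per-sequence lengths.
sequences = [rng.normal(loc=i % 2, scale=0.5, size=(20, 1)) for i in range(10)]
X = np.concatenate(sequences)
lengths = [len(s) for s in sequences]

# Baum-Welch (EM) training of a 3-state Gaussian-emission HMM.
model = hmm.GaussianHMM(n_components=3, covariance_type="diag",
                        n_iter=100, random_state=0)
model.fit(X, lengths)

# Per-sequence log-likelihoods. With one HMM trained per class (e.g. churned
# vs. retained customers), a sequence can be assigned to the class whose model
# scores it higher, which is one simple HMM-based classifier.
for seq in sequences[:3]:
    print(model.score(seq))

The VI-based training and the mixtures of HMMs described in the abstract go beyond what this EM-only sketch shows; it is included only to make the general sequence-modelling workflow concrete.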


Except where otherwise noted within the work, this item's licence for re-use is described as Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International.