Staff Working Paper No.
816
Machine learning explainability in finance: an application to default risk analysis
Philippe Bracke,(1) Anupam Datta,(2) Carsten Jung(3) and Shayak Sen(4)
August 2019

Staff Working Papers describe research in progress by the author(s) and are published to elicit comments and to further debate. Any views expressed are solely those of the author(s) and so cannot be taken to represent those of the Bank of England or to state Bank of England policy. This paper should therefore not be reported as representing the views of the Bank of England or members of the Monetary Policy Committee, Financial Policy Committee or Prudential Regulation Committee.

Abstract

We propose a framework for addressing the black box problem present in some Machine Learning (ML) applications. We implement our approach by using the Quantitative Input Influence (QII) method of Datta et al (2016) in a real-world example: an ML model to predict mortgage defaults. This method investigates the inputs and outputs of the model, but not its inner workings. It measures feature influences by intervening on inputs and estimating their Shapley values, which represent each feature's average marginal contribution over all possible feature combinations. This method estimates key drivers of mortgage defaults such as the loan-to-value ratio and current interest rate, which are in line with the findings of the economics and finance literature. However, given the non-linearity of the ML model, explanations vary significantly for different groups of loans. We use clustering methods to arrive at groups of explanations for different areas of the input space. Finally, we conduct simulations on data that the model has not been trained or tested on.
Our main contribution is to develop a systematic analytical framework that could be used for approaching explainability questions in real-world financial applications. We conclude, though, that notable model uncertainties do remain, which stakeholders ought to be aware of.

Key words: Machine learning, explainability, mortgage defaults.
JEL classification: C55, G21.

(1) UK Financial Conduct Authority. Email: philippe.bracke@fca.uk
(2) Carnegie Mellon University. Email: danupam@cmu.edu
(3) Bank of England. Email: carsten.jung@bankofengland.co.uk
(4) Carnegie Mellon University. Email: shayakslondon.edu

The views expressed here are not those of the Financial Conduct Authority or the Bank of England. We thank seminar participants at the Bank of England, the MIT Interpretable Machine-Learning Models and Financial Applications workshop, the UCL Data for Policy Conference, Louise Eggett, Tom Mutton and other colleagues at the Bank of England and Financial Conduct Authority for very useful comments. Datta and Sen's work was partially supported by the US National Science Foundation under grant CNS-1704845.

The Bank's working paper series can be found at bankofengland.co.uk/working-paper/staff-working-papers
Bank of England, Threadneedle Street, London, EC2R 8AH
Email: publications@bankofengland.co.uk
© Bank of England 2019
ISSN 1749-9135 (online)

1 Introduction

Machine learning (ML) based predictive techniques are seeing increased adoption in a number of domains, including finance. However, due to their complexity, their predictions are often difficult to explain and validate. This is sometimes referred to as machine learning's black box problem. It is important to note that even if ML models are available for inspection, their size and complexity make it difficult to explain their operation to humans. For example, an ML model used to predict mortgage defaults may consist of hundreds of large decision trees deployed in parallel, making it difficult to summarise how the model works intuitively.
Recently a debate has emerged around techniques for making machine learning models more explainable. Explanations can answer different kinds of questions about a model's operation, depending on the stakeholder they are addressed to. In the financial context, there are at least six different types of stakeholders: (i) developers, i.e. those developing or implementing an ML application; (ii) 1st line model checkers, i.e. those directly responsible for making sure model development is of sufficient quality; (iii) management responsible for the application; (iv) 2nd line model checkers, i.e. staff that, as part of a firm's control functions, independently check the quality of model development and deployment; (v) conduct regulators that take an interest in deployed models being in line with conduct rules; and (vi) prudential regulators that take an interest in deployed models being in line with prudential requirements.

Table 1 outlines the different types of meaningful explanations one could expect for a machine learning model. A developer may be interested in individual predictions, for instance when they get customer queries but also to better understand outliers. Similarly, conduct regulators may occasionally be interested in individual predictions. For instance, if there were complaints about decisions made, there may be an interest in determining what factors drove that particular decision. Other stakeholders may be less interested in individual predictions. For instance, first line model checkers likely would seek a more general understanding of how the model works and what its key drivers are, across predictions. Similarly, second line model checkers, management and prudential regulators likely will tend to take a higher-level view still.

Table 1: Different types of explanations
Note: lighter green means these questions are only partially answered through our approach.
                                                         1st line             2nd line
                                                         model     Manage-    model     Conduct    Prudential
  Stakeholder interest                        Developer  checking  ment       checking  regulator  regulator
  1) Which features mattered in
     individual predictions?                      X                                        X
  2) What drove the actual predictions
     more generally?                                         X        X          X                     X
  3) What are the differences between
     the ML model and a linear one?               X          X
  4) How does the ML model work?                  X          X        X          X         X           X
  5) How will the model perform under
     new states of the world (that
     aren't captured in the training
     data)?                                       X          X        X          X         X           X

Especially in cases where a model is of high importance for the business, these stakeholders will want to make sure the right steps for model quality assurance have been taken and, depending on the application, they may seek assurance on what the key drivers are. While regulators expect good model development and governance practices across the board, the detail and stringency of standards on models vary by application. One area where standards around model due diligence are most thorough is models used to calculate minimum capital requirements. Another example is governance requirements around trading and models for stress testing.(1)

In this paper, we use one approach to ML explainability, the Quantitative Input Influence (QII) method of [1], which builds on the game-theoretic concept of Shapley values. The QII method is used in a situation where we observe the inputs of the machine learning model as well as its outputs, but it would be impractical to examine the internal workings of the model itself. By changing the inputs in a predetermined way and observing the corresponding changes in outputs, we can learn about the influence of specific features of the model. By doing so for several inputs and a large sample of instances, we can draw a useful picture of the model's functioning. We also demonstrate that input influences can be effectively summarised by using clustering methods [2].

(1) See for instance bankofengland.co.uk/-/media/boe/files/prudential-regulation/supervisory-statement/2018/ss518.
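To make the mechanics concrete, the following is a minimal sketch (not the paper's implementation) of exact Shapley-value computation via QII-style interventions: a feature's influence is its average marginal effect on the model output when switched from a baseline value to its actual value, weighted over all coalitions of the remaining features. The scoring function, feature values and baseline below are hypothetical; the exhaustive enumeration is only feasible for toy-sized inputs.

```python
from itertools import combinations
from math import factorial

def shapley_values(model, x, baseline):
    """Exact Shapley values for one instance of a black-box model.

    For each feature i, average the change in model output when i is
    switched from baseline[i] to x[i], over all subsets S of the other
    features; features outside S are held at baseline (the intervention).
    Cost is exponential in the number of features.
    """
    n = len(x)
    features = list(range(n))

    def eval_with(subset):
        # Features in `subset` take their actual values; all others are
        # replaced by their baseline values.
        z = [x[i] if i in subset else baseline[i] for i in features]
        return model(z)

    phi = [0.0] * n
    for i in features:
        others = [j for j in features if j != i]
        for k in range(len(others) + 1):
            # Shapley weight for coalitions of size k
            w = factorial(k) * factorial(n - k - 1) / factorial(n)
            for S in combinations(others, k):
                phi[i] += w * (eval_with(set(S) | {i}) - eval_with(set(S)))
    return phi

# Hypothetical additive "default score": for an additive model each
# feature's Shapley value equals its own contribution over the baseline.
model = lambda z: 2.0 * z[0] + 0.5 * z[1] - 1.0 * z[2]
x = [1.0, 4.0, 2.0]
baseline = [0.0, 0.0, 0.0]
phi = shapley_values(model, x, baseline)
print([round(v, 6) for v in phi])  # → [2.0, 2.0, -2.0]
```

A useful sanity check is the efficiency property: the influences sum to the difference between the model's output at the instance and at the baseline.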
Hence our approach provides a useful framework for tackling the five questions outlined in Table 1.

We use this approach in an applied setting: predicting mortgage defaults. For many consumers, mortgages are the most important source of finance, and the estimation of mortgage default risk has a significant impact on the pricing and availability of mortgages. Recently, technological innovations, one of which is the application of ML techniques to the estimation of mortgage default probabilities, have improved the availability of mortgage credit [3]. We hence use mortgage default predictions as our applied use case. But our explainability approach can be equally valuable in many other financial applications of machine learning.

We use data on a snapshot of all mortgages outstanding in the United Kingdom and check their default rates over the subsequent two and a half years. In contrast with some of the most recent economics literature [4], we are interested in predicting rather than finding the causes of mortgage defaults. Thus we do not employ techniques or research designs to establish causality claims as understood in applied economics. Such claims would be necessar
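The clustering step mentioned above can be illustrated with a toy sketch: treat each loan's vector of per-feature influences as a point, and cluster those points so that each centroid summarises one "type" of explanation. The attribution vectors, feature interpretation and plain k-means recipe below are illustrative assumptions, not the paper's actual data or algorithm configuration.

```python
def kmeans(points, k, iters=20):
    """Plain k-means (Lloyd's algorithm) on attribution vectors.

    Initialises centroids from the first k points, which is deterministic
    but naive; a real application would use k-means++ or similar.
    """
    centroids = [list(p) for p in points[:k]]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest centroid (squared Euclidean).
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[c])))
            clusters[j].append(p)
        for c in range(k):
            if clusters[c]:  # recompute centroid as the cluster mean
                centroids[c] = [sum(col) / len(clusters[c])
                                for col in zip(*clusters[c])]
    return centroids, clusters

# Hypothetical influence vectors (loan-to-value, interest rate) for six
# loans: two distinct explanation groups by construction.
attributions = [
    [0.9, 0.1], [0.8, 0.2], [0.85, 0.15],  # driven mainly by LTV
    [0.1, 0.9], [0.2, 0.8], [0.15, 0.85],  # driven mainly by interest rate
]
centroids, clusters = kmeans(attributions, k=2)
```

On this toy input the two centroids recover the LTV-driven and rate-driven groups, each of three loans, which is the kind of grouped explanation the paper arrives at for different areas of the input space.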