Machine learning methods have long been applied across a wide range of fields, including computer vision and natural language processing. Although they significantly improve predictive performance over traditional methods, their black-box structure makes the results difficult for researchers to interpret. In highly regulated sectors such as finance, transparency, explainability, conceptual soundness, fairness, and robustness are at least as important as accuracy, and even highly accurate machine learning methods are unlikely to be accepted if regulatory requirements are not met. To address transparency and interpretability, we propose sparse neural network architectures that eliminate nonessential statistical interactions. To ensure conceptual soundness and fairness, we impose individual and pairwise monotonicity constraints on the model, the latter of which has been largely neglected in the literature to date. For robustness, a two-stage modeling framework based on ensemble methods and constrained optimization provides upper bounds for unconfident samples. Empirically, we demonstrate that the resulting models are transparent, explainable, conceptually sound, fair, and robust.
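The abstract does not spell out how the monotonicity constraints are implemented; as a purely illustrative sketch, one common way to enforce *individual* monotonicity in a feed-forward network is to square the raw weights so that every effective weight is non-negative, which, combined with monotone activations, makes the output non-decreasing in each input feature. The architecture and parameter shapes below are hypothetical, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def monotone_mlp(x, params):
    """Forward pass; squaring raw weights gives non-negative
    effective weights, so with the monotone tanh activation the
    network is non-decreasing in every input feature."""
    h = x
    for w, b in params:
        h = np.tanh(h @ (w ** 2) + b)
    return h

# Hypothetical 2-layer network on a single feature.
params = [
    (rng.normal(size=(1, 8)), rng.normal(size=8)),
    (rng.normal(size=(8, 1)), rng.normal(size=1)),
]

xs = np.linspace(-3.0, 3.0, 50).reshape(-1, 1)
ys = monotone_mlp(xs, params).ravel()
# Numerical check: output is non-decreasing along the sorted inputs.
assert np.all(np.diff(ys) >= -1e-12)
```

Pairwise monotonicity (constraining how the effect of one feature varies with another) requires constraints on interaction terms and is not captured by this simple reparameterization.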