Bibliography – Interpretability and Explainability

Selected DH research and resources bearing on, or utilized by, the WE1S project.

Acknowledgements: Fabian Offert (Media Arts & Technology Program, UCSB) contributed references for this bibliography section.

AI Forensics. “Home Page,” 2023.
Zhang, Yu, Peter Tiňo, Aleš Leonardis, and Ke Tang. “A Survey on Neural Network Interpretability.” IEEE Transactions on Emerging Topics in Computational Intelligence 5, no. 5 (2021): 726–42.
Dickson, Ben. “A New Technique Called ‘Concept Whitening’ Promises to Provide Neural Network Interpretability.” VentureBeat (blog), 2021.
Smith, Gary, and Jay Cordes. The Phantom Pattern Problem: The Mirage of Big Data. First edition. Oxford; New York, NY: Oxford University Press, 2020.
Liu, Alan. “Humans in the Loop: Humanities Hermeneutics and Machine Learning.” Presented at DHd2020 (7th Annual Conference of the German Society for Digital Humanities), University of Paderborn, 2020.
Dickson, Ben. “The Advantages of Self-Explainable AI over Interpretable AI.” The Next Web, 2020.
Rogers, Anna, Olga Kovaleva, and Anna Rumshisky. “A Primer in BERTology: What We Know about How BERT Works.” ArXiv:2002.12327 [Cs], 2020.
Munro, Robert. Human-in-the-Loop Machine Learning. Shelter Island, New York: Manning, 2020.
Rudin, Cynthia. “Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead.” Nature Machine Intelligence 1, no. 5 (2019): 206–15.
Molnar, Christoph. Interpretable Machine Learning. Christoph Molnar, 2019.
Lim, Brian Y., Qian Yang, Ashraf Abdul, and Danding Wang. “Why These Explanations? Selecting Intelligibility Types for Explanation Goals.” In IUI Workshops 2019. Los Angeles: ACM, 2019.
Yang, Yiwei, Eser Kandogan, Yunyao Li, Prithviraj Sen, and Walter S. Lasecki. “A Study on Interaction in Human-in-the-Loop Machine Learning for Text Analytics.” In IUI Workshops 2019. Los Angeles: ACM, 2019.
Gebru, Timnit, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumé III, and Kate Crawford. “Datasheets for Datasets.” ArXiv:1803.09010 [Cs], 2019.
Mitchell, Margaret, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru. “Model Cards for Model Reporting.” Proceedings of the Conference on Fairness, Accountability, and Transparency - FAT* ’19, 2019, 220–29.
Tahmasebi, Nina, Niclas Hagen, Daniel Brodén, and Mats Malm. “A Convergence of Methodologies: Notes on Data-Intensive Humanities Research.” In Digital Humanities in the Nordic Countries 4th Conference. Helsinki, 2019.
Pandey, Parul. Interpretable Machine Learning, 2019.
Murdoch, W. James, Chandan Singh, Karl Kumbier, Reza Abbasi-Asl, and Bin Yu. “Interpretable Machine Learning: Definitions, Methods, and Applications.” ArXiv:1901.04592 [Cs, Stat], 2019.
Carassai, Mauro. “Preliminary Notes on Conceptual Issues Affecting Interpretation of Topic Models.” WE1S (blog), 2018.
Rule, Adam, Aurélien Tabard, and James D. Hollan. “Exploration and Explanation in Computational Notebooks.” In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems - CHI ’18, 1–12. Montreal QC, Canada: ACM Press, 2018.
Narayanan, Menaka, Emily Chen, Jeffrey He, Been Kim, Sam Gershman, and Finale Doshi-Velez. “How Do Humans Understand Explanations from Machine Learning Systems? An Evaluation of the Human-Interpretability of Explanation.” ArXiv:1802.00682 [Cs], 2018.
Selbst, Andrew D., and Solon Barocas. “The Intuitive Appeal of Explainable Machines.” SSRN Electronic Journal, 2018.
Sawhney, Ravi. “Human in the Loop: Why We Will Be Needed to Complement Artificial Intelligence.” LSE Business Review (blog), 2018.
Kleymann, Rabea, and Jan-Erik Stange. “Towards Hermeneutic Visualization in Digital Literary Studies,” 2018.
Holland, Sarah, Ahmed Hosny, Sarah Newman, Joshua Joseph, and Kasia Chmielinski. “The Dataset Nutrition Label: A Framework To Drive Higher Data Quality Standards.” ArXiv:1805.03677 [Cs], 2018.
Bender, Emily M., and Batya Friedman. “Data Statements for Natural Language Processing: Toward Mitigating System Bias and Enabling Better Science.” Transactions of the Association for Computational Linguistics 6 (2018): 587–604.
Hind, Michael, Dennis Wei, Murray Campbell, Noel C. F. Codella, Amit Dhurandhar, Aleksandra Mojsilović, Karthikeyan Natesan Ramamurthy, and Kush R. Varshney. “TED: Teaching AI to Explain Its Decisions.” ArXiv:1811.04896 [Cs], 2018.
Alvarez-Melis, David, and Tommi Jaakkola. “Towards Robust Interpretability with Self-Explaining Neural Networks.” In Advances in Neural Information Processing Systems 31, edited by S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, 7775–84. Curran Associates, Inc., 2018.
Guldi, Jo. “Critical Search: A Procedure for Guided Reading in Large-Scale Textual Corpora.” Journal of Cultural Analytics, 2018.
Gall, Richard. Machine Learning Explainability vs Interpretability: Two Concepts That Could Help Restore Trust in AI, 2018.
Gilpin, Leilani H., David Bau, Ben Z. Yuan, Ayesha Bajwa, Michael Specter, and Lalana Kagal. “Explaining Explanations: An Overview of Interpretability of Machine Learning.” ArXiv:1806.00069 [Cs, Stat], 2018.
Spencer, Ann. Make Machine Learning Interpretability More Rigorous, 2018.
Hall, Patrick, and Navdeep Gill. Introduction to Machine Learning Interpretability. O’Reilly Media, Inc., 2018.
Goodman, Bryce, and Seth Flaxman. “European Union Regulations on Algorithmic Decision-Making and a ‘Right to Explanation.’” AI Magazine 38, no. 3 (2017): 50–57.
Lipton, Zachary C. “The Mythos of Model Interpretability.” ArXiv:1606.03490 [Cs, Stat], 2017.
Edwards, Lilian, and Michael Veale. “Slave to the Algorithm? Why a ‘Right to an Explanation’ Is Probably Not the Remedy You Are Looking For.” SSRN Scholarly Paper. Rochester, NY: Social Science Research Network, 2017.
Samek, Wojciech, Thomas Wiegand, and Klaus-Robert Müller. “Explainable Artificial Intelligence.” International Telecommunication Union Journal, no. 1 (2017): 1–10.
Doshi-Velez, Finale, and Been Kim. “Towards A Rigorous Science of Interpretable Machine Learning.” ArXiv:1702.08608 [Cs, Stat], 2017.
Paul, Michael J. “Interpretable Machine Learning: Lessons from Topic Modeling.” In CHI Workshop on Human-Centered Machine Learning, 2016.
Alexander, Eric, and Michael Gleicher. “Task-Driven Comparison of Topic Models.” IEEE Transactions on Visualization and Computer Graphics 22, no. 1 (2016): 320–29.
Collins, Gary S., Johannes B. Reitsma, Douglas G. Altman, and Karel G.M. Moons. “Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis Or Diagnosis (TRIPOD): The TRIPOD Statement.” Annals of Internal Medicine 162, no. 1 (2015): 55.
Smith, Alison, Jason Chuang, Yuening Hu, Jordan Boyd-Graber, and Leah Findlater. “Concurrent Visualization of Relationships between Words and Topics in Topic Models,” 2014.
Freitas, Alex A. “Comprehensible Classification Models: A Position Paper.” ACM SIGKDD Explorations Newsletter 15, no. 1 (2014): 1–10.
Liu, Alan. “The Meaning of the Digital Humanities.” PMLA 128, no. 2 (2013): 409–23.
Grimmer, Justin, and Gary King. “General Purpose Computer-Assisted Clustering and Conceptualization.” Proceedings of the National Academy of Sciences 108, no. 7 (2011): 2643–50.
Sculley, D., and B. M. Pasanek. “Meaning and Mining: The Impact of Implicit Assumptions in Data Mining for the Humanities.” Literary and Linguistic Computing 23, no. 4 (2008): 409–24.
Tickle, A.B., R. Andrews, M. Golea, and J. Diederich. “The Truth Will Come to Light: Directions and Challenges in Extracting the Knowledge Embedded within Trained Artificial Neural Networks.” IEEE Transactions on Neural Networks 9, no. 6 (1998): 1057–68.