Bibliography – Artificial Intelligence

Selected DH research and resources bearing on, or utilized by, the WE1S project.

AI Forensics. “Home Page,” 2023.
Dickson, Ben. “A New Technique Called ‘Concept Whitening’ Promises to Provide Neural Network Interpretability.” VentureBeat (blog), 2021.
Heaven, Will Douglas. “AI Is Wrestling with a Replication Crisis.” MIT Technology Review, 2020.
Dickson, Ben. “The Advantages of Self-Explainable AI over Interpretable AI.” The Next Web, 2020.
Rogers, Anna, Olga Kovaleva, and Anna Rumshisky. “A Primer in BERTology: What We Know about How BERT Works.” ArXiv:2002.12327 [Cs], 2020.
Rudin, Cynthia. “Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead.” Nature Machine Intelligence 1, no. 5 (2019): 206–15.
Lim, Brian Y., Qian Yang, Ashraf Abdul, and Danding Wang. “Why These Explanations? Selecting Intelligibility Types for Explanation Goals.” In IUI Workshops 2019. Los Angeles: ACM, 2019.
Murdoch, W. James, Chandan Singh, Karl Kumbier, Reza Abbasi-Asl, and Bin Yu. “Interpretable Machine Learning: Definitions, Methods, and Applications.” ArXiv:1901.04592 [Cs, Stat], 2019.
Sawhney, Ravi. “Human in the Loop: Why We Will Be Needed to Complement Artificial Intelligence.” LSE Business Review (blog), 2018.
Hind, Michael, Dennis Wei, Murray Campbell, Noel C. F. Codella, Amit Dhurandhar, Aleksandra Mojsilović, Karthikeyan Natesan Ramamurthy, and Kush R. Varshney. “TED: Teaching AI to Explain Its Decisions.” ArXiv:1811.04896 [Cs], 2018.
Alvarez-Melis, David, and Tommi Jaakkola. “Towards Robust Interpretability with Self-Explaining Neural Networks.” In Advances in Neural Information Processing Systems 31, edited by S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, 7775–84. Curran Associates, Inc., 2018.
Gall, Richard. Machine Learning Explainability vs Interpretability: Two Concepts That Could Help Restore Trust in AI, 2018.
Gilpin, Leilani H., David Bau, Ben Z. Yuan, Ayesha Bajwa, Michael Specter, and Lalana Kagal. “Explaining Explanations: An Overview of Interpretability of Machine Learning.” ArXiv:1806.00069 [Cs, Stat], 2018.
Samek, Wojciech, Thomas Wiegand, and Klaus-Robert Müller. “Explainable Artificial Intelligence.” International Telecommunication Union Journal, no. 1 (2017): 1–10.
Ruchansky, Natali, Sungyong Seo, and Yan Liu. “CSI: A Hybrid Deep Model for Fake News Detection.” In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, 797–806. CIKM ’17. Singapore: Association for Computing Machinery, 2017.
Wang, William Yang. “‘Liar, Liar Pants on Fire’: A New Benchmark Dataset for Fake News Detection.” ArXiv:1705.00648 [Cs], 2017.
Tickle, A.B., R. Andrews, M. Golea, and J. Diederich. “The Truth Will Come to Light: Directions and Challenges in Extracting the Knowledge Embedded within Trained Artificial Neural Networks.” IEEE Transactions on Neural Networks 9, no. 6 (1998): 1057–68.