Bibliography – Artificial Intelligence

Selected DH research and resources bearing on, or utilized by, the WE1S project.
AI Forensics. “Home Page,” 2023. https://ai-forensics.github.io/.
Dickson, Ben. “A New Technique Called ‘Concept Whitening’ Promises to Provide Neural Network Interpretability.” VentureBeat (blog), 2021. https://venturebeat.com/2021/01/12/a-new-technique-called-concept-whitening-promises-to-provide-neural-network-interpretability/.
Heaven, Will Douglas. “AI Is Wrestling with a Replication Crisis.” MIT Technology Review, 2020. https://www.technologyreview.com/2020/11/12/1011944/artificial-intelligence-replication-crisis-science-big-tech-google-deepmind-facebook-openai/.
Dickson, Ben. “The Advantages of Self-Explainable AI over Interpretable AI.” The Next Web, 2020. https://thenextweb.com/neural/2020/06/19/the-advantages-of-self-explainable-ai-over-interpretable-ai/.
Rogers, Anna, Olga Kovaleva, and Anna Rumshisky. “A Primer in BERTology: What We Know about How BERT Works.” ArXiv:2002.12327 [Cs], 2020. http://arxiv.org/abs/2002.12327.
Rudin, Cynthia. “Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead.” Nature Machine Intelligence 1, no. 5 (2019): 206–15. https://doi.org/10.1038/s42256-019-0048-x.
Lim, Brian Y., Qian Yang, Ashraf Abdul, and Danding Wang. “Why These Explanations? Selecting Intelligibility Types for Explanation Goals.” In IUI Workshops 2019. Los Angeles: ACM, 2019. https://www.semanticscholar.org/paper/A-Study-on-Interaction-in-Human-in-the-Loop-Machine-Yang-Kandogan/03a4544caed21760df30f0e4f417bbe361c29c9e.
Murdoch, W. James, Chandan Singh, Karl Kumbier, Reza Abbasi-Asl, and Bin Yu. “Interpretable Machine Learning: Definitions, Methods, and Applications.” ArXiv:1901.04592 [Cs, Stat], 2019. http://arxiv.org/abs/1901.04592.
Sawhney, Ravi. “Human in the Loop: Why We Will Be Needed to Complement Artificial Intelligence.” LSE Business Review (blog), 2018. https://blogs.lse.ac.uk/businessreview/2018/10/24/human-in-the-loop-why-we-will-be-needed-to-complement-artificial-intelligence/.
Hind, Michael, Dennis Wei, Murray Campbell, Noel C. F. Codella, Amit Dhurandhar, Aleksandra Mojsilović, Karthikeyan Natesan Ramamurthy, and Kush R. Varshney. “TED: Teaching AI to Explain Its Decisions.” ArXiv:1811.04896 [Cs], 2018. http://arxiv.org/abs/1811.04896.
Alvarez-Melis, David, and Tommi Jaakkola. “Towards Robust Interpretability with Self-Explaining Neural Networks.” In Advances in Neural Information Processing Systems 31, edited by S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, 7775–84. Curran Associates, Inc., 2018. http://papers.nips.cc/paper/8003-towards-robust-interpretability-with-self-explaining-neural-networks.pdf.
Gall, Richard. “Machine Learning Explainability vs Interpretability: Two Concepts That Could Help Restore Trust in AI.” KDnuggets, 2018. https://www.kdnuggets.com/2018/12/machine-learning-explainability-interpretability-ai.html.
Gilpin, Leilani H., David Bau, Ben Z. Yuan, Ayesha Bajwa, Michael Specter, and Lalana Kagal. “Explaining Explanations: An Overview of Interpretability of Machine Learning.” ArXiv:1806.00069 [Cs, Stat], 2018. http://arxiv.org/abs/1806.00069.
Samek, Wojciech, Thomas Wiegand, and Klaus-Robert Müller. “Explainable Artificial Intelligence.” International Telecommunication Union Journal, no. 1 (2017): 1–10. https://www.itu.int/en/journal/001/Pages/05.aspx.
Ruchansky, Natali, Sungyong Seo, and Yan Liu. “CSI: A Hybrid Deep Model for Fake News Detection.” In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, 797–806. CIKM ’17. Singapore, Singapore: Association for Computing Machinery, 2017. https://doi.org/10.1145/3132847.3132877.
Wang, William Yang. “‘Liar, Liar Pants on Fire’: A New Benchmark Dataset for Fake News Detection.” ArXiv:1705.00648 [Cs], 2017. http://arxiv.org/abs/1705.00648.
Tickle, A. B., R. Andrews, M. Golea, and J. Diederich. “The Truth Will Come to Light: Directions and Challenges in Extracting the Knowledge Embedded within Trained Artificial Neural Networks.” IEEE Transactions on Neural Networks 9, no. 6 (1998): 1057–68. https://doi.org/10.1109/72.728352.