Bibliography – Machine learning

Selected DH research and resources bearing on, or utilized by, the WE1S project.


Kwak, Haewoon, Jisun An, and Yong-Yeol Ahn. “A Systematic Media Frame Analysis of 1.5 Million New York Times Articles from 2000 to 2017.” arXiv:2005.01803 [cs], 2020. http://arxiv.org/abs/2005.01803.
Dickson, Ben. “The Advantages of Self-Explainable AI over Interpretable AI.” The Next Web, 2020. https://thenextweb.com/neural/2020/06/19/the-advantages-of-self-explainable-ai-over-interpretable-ai/.
Rogers, Anna, Olga Kovaleva, and Anna Rumshisky. “A Primer in BERTology: What We Know about How BERT Works.” arXiv:2002.12327 [cs], 2020. http://arxiv.org/abs/2002.12327.
Munro, Robert. Human-in-the-Loop Machine Learning. Shelter Island, NY: Manning, 2020. https://www.manning.com/books/human-in-the-loop-machine-learning.
Selbst, Andrew D., Danah Boyd, Sorelle A. Friedler, Suresh Venkatasubramanian, and Janet Vertesi. “Fairness and Abstraction in Sociotechnical Systems.” In Proceedings of the Conference on Fairness, Accountability, and Transparency, 59–68. FAT* ’19. Atlanta, GA, USA: Association for Computing Machinery, 2019. https://doi.org/10.1145/3287560.3287598.
Rudin, Cynthia. “Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead.” Nature Machine Intelligence 1, no. 5 (2019): 206–15. https://doi.org/10.1038/s42256-019-0048-x.
Molnar, Christoph. Interpretable Machine Learning. Christoph Molnar, 2019. https://christophm.github.io/interpretable-ml-book/.
Murdoch, W. James, Chandan Singh, Karl Kumbier, Reza Abbasi-Asl, and Bin Yu. “Interpretable Machine Learning: Definitions, Methods, and Applications.” arXiv:1901.04592 [cs, stat], 2019. http://arxiv.org/abs/1901.04592.
Shu, Kai, Suhang Wang, and Huan Liu. “Beyond News Contents: The Role of Social Context for Fake News Detection.” In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, 312–320. WSDM ’19. Melbourne, VIC, Australia: Association for Computing Machinery, 2019. https://doi.org/10.1145/3289600.3290994.
Engineering and Technology History Wiki. “Milestones: DIALOG Online Search System, 1966.” 2019. https://ethw.org/Milestones:DIALOG_Online_Search_System,_1966.
Narayanan, Menaka, Emily Chen, Jeffrey He, Been Kim, Sam Gershman, and Finale Doshi-Velez. “How Do Humans Understand Explanations from Machine Learning Systems? An Evaluation of the Human-Interpretability of Explanation.” arXiv:1802.00682 [cs], 2018. http://arxiv.org/abs/1802.00682.
Selbst, Andrew D., and Solon Barocas. “The Intuitive Appeal of Explainable Machines.” SSRN Electronic Journal, 2018. https://doi.org/10.2139/ssrn.3126971.
Hind, Michael, Dennis Wei, Murray Campbell, Noel C. F. Codella, Amit Dhurandhar, Aleksandra Mojsilović, Karthikeyan Natesan Ramamurthy, and Kush R. Varshney. “TED: Teaching AI to Explain Its Decisions.” arXiv:1811.04896 [cs], 2018. http://arxiv.org/abs/1811.04896.
Alvarez-Melis, David, and Tommi Jaakkola. “Towards Robust Interpretability with Self-Explaining Neural Networks.” In Advances in Neural Information Processing Systems 31, edited by S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, 7775–7784. Curran Associates, Inc., 2018. http://papers.nips.cc/paper/8003-towards-robust-interpretability-with-self-explaining-neural-networks.pdf.
Gall, Richard. “Machine Learning Explainability vs Interpretability: Two Concepts That Could Help Restore Trust in AI.” KDnuggets, 2018. https://www.kdnuggets.com/2018/12/machine-learning-explainability-interpretability-ai.html.
Gilpin, Leilani H., David Bau, Ben Z. Yuan, Ayesha Bajwa, Michael Specter, and Lalana Kagal. “Explaining Explanations: An Overview of Interpretability of Machine Learning.” arXiv:1806.00069 [cs, stat], 2018. http://arxiv.org/abs/1806.00069.
Spencer, Ann. “Make Machine Learning Interpretability More Rigorous.” Domino Data Lab, 2018. https://blog.dominodatalab.com/make-machine-learning-interpretability-rigorous/.
Hall, Patrick, and Navdeep Gill. Introduction to Machine Learning Interpretability. Sebastopol, CA: O’Reilly Media, Inc., 2018. https://proquest.safaribooksonline.com/9781492033158.
Parikh, Shivam B., and Pradeep K. Atrey. “Media-Rich Fake News Detection: A Survey.” In 2018 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR), 436–41, 2018. https://doi.org/10.1109/MIPR.2018.00093.
Wu, Liang, and Huan Liu. “Tracing Fake-News Footprints: Characterizing Social Media Messages by How They Propagate.” In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, 637–645. WSDM ’18. Marina Del Rey, CA, USA: Association for Computing Machinery, 2018. https://doi.org/10.1145/3159652.3159677.
Lipton, Zachary C., and Jacob Steinhardt. “Troubling Trends in Machine Learning Scholarship.” 2018. https://www.dropbox.com/s/ao7c090p8bg1hk3/Lipton%20and%20Steinhardt%20-%20Troubling%20Trends%20in%20Machine%20Learning%20Scholarship.pdf?dl=0.
Randles, Bernadette M., Irene V. Pasquetto, Milena S. Golshan, and Christine L. Borgman. “Using the Jupyter Notebook as a Tool for Open Science: An Empirical Study.” In 2017 ACM/IEEE Joint Conference on Digital Libraries (JCDL), 1–2, 2017. https://doi.org/10.1109/JCDL.2017.7991618.
Lipton, Zachary C. “The Mythos of Model Interpretability.” arXiv:1606.03490 [cs, stat], 2017. http://arxiv.org/abs/1606.03490.
Edwards, Lilian, and Michael Veale. “Slave to the Algorithm? Why a ‘Right to an Explanation’ Is Probably Not the Remedy You Are Looking For.” SSRN Scholarly Paper. Rochester, NY: Social Science Research Network, 2017. https://papers.ssrn.com/abstract=2972855.
Samek, Wojciech, Thomas Wiegand, and Klaus-Robert Müller. “Explainable Artificial Intelligence.” International Telecommunication Union Journal, no. 1 (2017): 1–10. https://www.itu.int/en/journal/001/Pages/05.aspx.
Doshi-Velez, Finale, and Been Kim. “Towards a Rigorous Science of Interpretable Machine Learning.” arXiv:1702.08608 [cs, stat], 2017. http://arxiv.org/abs/1702.08608.
Shu, Kai, Suhang Wang, and Huan Liu. “Exploiting Tri-Relationship for Fake News Detection.” 2017. https://www.semanticscholar.org/paper/Exploiting-Tri-Relationship-for-Fake-News-Detection-Shu-Wang/8fd1d13e18c5ef8b57296adab6543cb810c36d81.
Granik, Mykhailo, and Volodymyr Mesyura. “Fake News Detection Using Naive Bayes Classifier.” In 2017 IEEE First Ukraine Conference on Electrical and Computer Engineering (UKRCON), 900–903, 2017. https://doi.org/10.1109/UKRCON.2017.8100379.
Tacchini, Eugenio, Gabriele Ballarin, Marco L. Della Vedova, Stefano Moret, and Luca de Alfaro. “Some Like It Hoax: Automated Fake News Detection in Social Networks.” arXiv:1704.07506 [cs], 2017. http://arxiv.org/abs/1704.07506.
Ahmed, Hadeer, Issa Traore, and Sherif Saad. “Detection of Online Fake News Using N-Gram Analysis and Machine Learning Techniques.” In Intelligent, Secure, and Dependable Systems in Distributed and Cloud Environments, edited by Issa Traore, Isaac Woungang, and Ahmed Awad, 127–38. Lecture Notes in Computer Science. Cham: Springer International Publishing, 2017. https://doi.org/10.1007/978-3-319-69155-8_9.
Potthast, Martin, Johannes Kiesel, Kevin Reinartz, Janek Bevendorff, and Benno Stein. “A Stylometric Inquiry into Hyperpartisan and Fake News.” arXiv:1702.05638 [cs], 2017. http://arxiv.org/abs/1702.05638.
Pérez-Rosas, Verónica, Bennett Kleinberg, Alexandra Lefevre, and Rada Mihalcea. “Automatic Detection of Fake News.” arXiv:1708.07104 [cs], 2017. http://arxiv.org/abs/1708.07104.
Ruchansky, Natali, Sungyong Seo, and Yan Liu. “CSI: A Hybrid Deep Model for Fake News Detection.” In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, 797–806. CIKM ’17. Singapore: Association for Computing Machinery, 2017. https://doi.org/10.1145/3132847.3132877.
Wang, William Yang. “‘Liar, Liar Pants on Fire’: A New Benchmark Dataset for Fake News Detection.” arXiv:1705.00648 [cs], 2017. http://arxiv.org/abs/1705.00648.
Paul, Michael J. “Interpretable Machine Learning: Lessons from Topic Modeling.” In CHI Workshop on Human-Centered Machine Learning, 2016. https://cmci.colorado.edu/~mpaul/files/chi16hcml_interpretable.pdf.
Conroy, Niall J., Victoria L. Rubin, and Yimin Chen. “Automatic Deception Detection: Methods for Finding Fake News.” Proceedings of the Association for Information Science and Technology 52, no. 1 (2015): 1–4. https://doi.org/10.1002/pra2.2015.145052010082.
Dobson, James E. “Can an Algorithm Be Disturbed? Machine Learning, Intrinsic Criticism, and the Digital Humanities.” College Literature 42, no. 4 (2015): 543–564. https://muse.jhu.edu/article/595031.
Burscher, Björn, Daan Odijk, Rens Vliegenthart, Maarten de Rijke, and Claes H. de Vreese. “Teaching the Computer to Code Frames in News: Comparing Two Supervised Machine Learning Approaches to Frame Analysis.” Communication Methods and Measures 8, no. 3 (2014): 190–206. https://doi.org/10.1080/19312458.2014.937527.
Freitas, Alex A. “Comprehensible Classification Models: A Position Paper.” ACM SIGKDD Explorations Newsletter 15, no. 1 (2014): 1–10. https://doi.org/10.1145/2594473.2594475.
Grimmer, Justin, and Gary King. “General Purpose Computer-Assisted Clustering and Conceptualization.” Proceedings of the National Academy of Sciences 108, no. 7 (2011): 2643–50. https://doi.org/10.1073/pnas.1018067108.
Sculley, D., and B. M. Pasanek. “Meaning and Mining: The Impact of Implicit Assumptions in Data Mining for the Humanities.” Literary and Linguistic Computing 23, no. 4 (2008): 409–424. https://doi.org/10.1093/llc/fqn019.
Sebastiani, Fabrizio. “Machine Learning in Automated Text Categorization.” ACM Computing Surveys 34, no. 1 (2002): 1–47. https://doi.org/10.1145/505282.505283.