Selected DH research and resources bearing on, or utilized by, the WE1S project.
Searchable version of bibliography on Zotero site
For WE1S developers: Biblio style guide | Biblio collection form (suggest additions) | WE1S Bibliography Ontology Outline
Acknowledgements: Fabian Offert (Media Arts & Technology Program, UCSB) contributed references for this bibliography section.
Interpretability and explainability
AI Forensics. "Home Page," 2023. https://ai-forensics.github.io/.

Zhang, Yu, Peter Tiňo, Aleš Leonardis, and Ke Tang. "A Survey on Neural Network Interpretability." IEEE Transactions on Emerging Topics in Computational Intelligence 5, no. 5 (2021): 726–42. https://doi.org/10.1109/TETCI.2021.3100641.

Dickson, Ben. "A New Technique Called 'Concept Whitening' Promises to Provide Neural Network Interpretability." VentureBeat (blog), 2021. https://venturebeat.com/2021/01/12/a-new-technique-called-concept-whitening-promises-to-provide-neural-network-interpretability/.

Smith, Gary, and Jay Cordes. The Phantom Pattern Problem: The Mirage of Big Data. First edition. Oxford; New York, NY: Oxford University Press, 2020.

Liu, Alan. "Humans in the Loop: Humanities Hermeneutics and Machine Learning." Presented at DHd2020 (7th Annual Conference of the German Society for Digital Humanities), University of Paderborn, 2020. https://youtu.be/lnfeOUBCi3s.

Dickson, Ben. "The Advantages of Self-Explainable AI over Interpretable AI." The Next Web, 2020. https://thenextweb.com/neural/2020/06/19/the-advantages-of-self-explainable-ai-over-interpretable-ai/.

Rogers, Anna, Olga Kovaleva, and Anna Rumshisky. "A Primer in BERTology: What We Know about How BERT Works." arXiv:2002.12327 [cs], 2020. http://arxiv.org/abs/2002.12327.

Munro, Robert. Human-in-the-Loop Machine Learning. Shelter Island, New York: Manning, 2020. https://www.manning.com/books/human-in-the-loop-machine-learning.

Rudin, Cynthia. "Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead." Nature Machine Intelligence 1, no. 5 (2019): 206–15. https://doi.org/10.1038/s42256-019-0048-x.

Molnar, Christoph. Interpretable Machine Learning. Self-published, 2019. https://christophm.github.io/interpretable-ml-book/.

Lim, Brian Y., Qian Yang, Ashraf Abdul, and Danding Wang. "Why These Explanations? Selecting Intelligibility Types for Explanation Goals." In IUI Workshops 2019. Los Angeles: ACM, 2019.
eatorType%22%3A%22author%22%2C%22firstName%22%3A%22Brian%20Y.%22%2C%22lastName%22%3A%22Lim%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Qian%22%2C%22lastName%22%3A%22Yang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Ashraf%22%2C%22lastName%22%3A%22Abdul%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Danding%22%2C%22lastName%22%3A%22Wang%22%7D%5D%2C%22abstractNote%22%3A%22The%20increasing%20ubiquity%20of%20artificial%20intelligence%20%28AI%29%20has%20spurred%20the%20development%20of%20explainable%20AI%20%28XAI%29%20to%20make%20AI%20more%20understandable.%20Even%20as%20novel%20algorithms%20for%20explanation%20are%20being%20developed%2C%20researchers%20have%20called%20for%20more%20human%20interpretability.%20While%20empirical%20user%20studies%20can%20be%20conducted%20to%20evaluate%20explanation%20effectiveness%2C%20it%20remains%20unclear%20why%20specific%20explanations%20are%20helpful%20for%20understanding.%20We%20leverage%20a%20recently%20developed%20conceptual%20framework%20for%20user-centric%20reasoned%20XAI%20that%20draws%20from%20foundational%20concepts%20in%20philosophy%2C%20cognitive%20psychology%2C%20and%20AI%20to%20identify%20pathways%20for%20how%20user%20reasoning%20drives%20XAI%20needs.%20We%20identified%20targeted%20strategies%20for%20applying%20XAI%20facilities%20to%20improve%20understanding%2C%20trust%20and%20decision%20performance.%20We%20discuss%20how%20our%20framework%20can%20be%20extended%20and%20applied%20to%20other%20domains%20that%20need%20usercentric%20XAI.%20This%20position%20paper%20seeks%20to%20promote%20the%20design%20of%20XAI%20features%20based%20on%20human%20reasoning%20needs%22%2C%22date%22%3A%222019%22%2C%22proceedingsTitle%22%3A%22IUI%20Workshops%202019%22%2C%22conferenceName%22%3A%22IUI%20Workshops%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%22%22%2C%22ISBN%22%3A%22%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.semanticscholar.org%5C%2Fpaper%5C%2FA-Study-on-Interaction-in-Hu
man-in-the-Loop-Machine-Yang-Kandogan%5C%2F03a4544caed21760df30f0e4f417bbe361c29c9e%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222020-02-08T23%3A06%3A56Z%22%2C%22tags%22%3A%5B%7B%22tag%22%3A%22Artificial%20intelligence%22%7D%2C%7B%22tag%22%3A%22Interpretability%20and%20explainability%22%7D%5D%7D%7D%2C%7B%22key%22%3A%22HHTJ2S2M%22%2C%22library%22%3A%7B%22id%22%3A2133649%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Yang%20et%20al.%22%2C%22parsedDate%22%3A%222019%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%3Cdiv%20class%3D%5C%22csl-bib-body%5C%22%20style%3D%5C%22line-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%5C%22%3E%5Cn%20%20%3Cdiv%20class%3D%5C%22csl-entry%5C%22%3EYang%2C%20Yiwei%2C%20Eser%20Kandogan%2C%20Yunyao%20Li%2C%20Prithviraj%20Sen%2C%20and%20Walter%20S.%20Lasecki.%20%26%23x201C%3BA%20Study%20on%20Interaction%20in%20Human-in-the-Loop%20Machine%20Learning%20for%20Text%20Analytics.%26%23x201D%3B%20In%20%3Ci%3EIUI%20Workshops%202019%3C%5C%2Fi%3E.%20Los%20Angeles%3A%20ACM%2C%202019.%20%3Ca%20class%3D%27zp-ItemURL%27%20href%3D%27https%3A%5C%2F%5C%2Fwww.semanticscholar.org%5C%2Fpaper%5C%2FA-Study-on-Interaction-in-Human-in-the-Loop-Machine-Yang-Kandogan%5C%2F03a4544caed21760df30f0e4f417bbe361c29c9e%27%3Ehttps%3A%5C%2F%5C%2Fwww.semanticscholar.org%5C%2Fpaper%5C%2FA-Study-on-Interaction-in-Human-in-the-Loop-Machine-Yang-Kandogan%5C%2F03a4544caed21760df30f0e4f417bbe361c29c9e%3C%5C%2Fa%3E.%20%3Ca%20title%3D%27Cite%20in%20RIS%20Format%27%20class%3D%27zp-CiteRIS%27%20href%3D%27https%3A%5C%2F%5C%2Fwe1s.ucsb.edu%5C%2Fwp-content%5C%2Fplugins%5C%2Fzotpress%5C%2Flib%5C%2Frequest%5C%2Frequest.cite.php%3Fapi_user_id%3D2133649%26amp%3Bitem_key%3DHHTJ2S2M%27%3ECite%3C%5C%2Fa%3E%20%3C%5C%2Fdiv%3E%5Cn%3C%5C%2Fdiv%3E%22%2C%22data%22%3A%7B%22itemType%22%3A%22conferencePaper%22%2C%22title%22%3A%22A%20Study%20on%20Interaction%20in%20Human-in-the-Loop%20Machine%20Learning%20for%20Text%20Analytics%22%2C%22creators%22%3A%5B%7B%22creatorType%2
2%3A%22author%22%2C%22firstName%22%3A%22Yiwei%22%2C%22lastName%22%3A%22Yang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Eser%22%2C%22lastName%22%3A%22Kandogan%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yunyao%22%2C%22lastName%22%3A%22Li%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Prithviraj%22%2C%22lastName%22%3A%22Sen%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Walter%20S.%22%2C%22lastName%22%3A%22Lasecki%22%7D%5D%2C%22abstractNote%22%3A%22Machine%20learning%20%28ML%29%20models%20are%20often%20considered%20%5Cu201cblackboxes%5Cu201d%20as%20their%20internal%20representations%20fail%20to%20align%20with%20human%20understanding.%20While%20recent%20work%20attempted%20to%20expose%20the%20inner%20workings%20of%20ML%20models%20they%20do%20not%20allow%20users%20to%20interact%20directly%20with%20the%20model.%20This%20is%20especially%20problematic%20in%20domains%20where%20labeled%20data%20is%20limited%20as%20such%20the%20generalizability%20of%20ML%20models%20becomes%20questionable.%20We%20argue%20that%20the%20fundamental%20problem%20of%20generalizibility%20could%20be%20addressed%20by%20making%20ML%20models%20explainable%20in%20abstractions%20and%20expressions%20that%20make%20sense%20to%20users%20and%20by%20allowing%20them%20to%20interact%20with%20the%20model%20to%20assess%2C%20select%2C%20and%20build%20on.%20By%20involving%20humans%20in%20the%20process%20this%20way%2C%20we%20argue%20that%20the%20cocreated%20models%20will%20be%20more%20generalizable%20as%20they%20extrapolate%20what%20ML%20learns%20from%20few%20data%20when%20expressed%20in%20higher%20level%20abstractions%20that%20humans%20can%20verify%2C%20update%2C%20and%20expand%20based%20on%20their%20domain%20expertise.%20In%20this%20paper%2C%20we%20introduce%20RulesLearner%20that%20expresses%20MLmodel%20as%20rules%20on%20top%20of%20semantic%20linguistic%20structures%20in%20disjunctive%20normal%20form.%20RulesLearner%20allows%20users
%20to%20interact%20with%20the%20patterns%20learned%20by%20the%20ML%20model%2C%20e.g.%20add%20and%20remove%20predicates%2C%20examine%20precision%20and%20recall%2C%20and%20construct%20a%20trusted%20set%20of%20rules.We%20conducted%20a%20preliminary%20user%20study%20which%20suggests%20that%20%281%29%20rules%20learned%20by%20ML%20are%20explainable%20and%20%282%29%20co-created%20model%20is%20more%20generalizable%20%283%29%20providing%20rules%20to%20experts%20improves%20overall%20productivity%2C%20with%20fewer%20people%20involved%2C%20with%20less%20expertise.%20Our%20findings%20link%20explainability%20and%20interactivity%20to%20generalizability%2C%20as%20such%20suggest%20that%20hybrid%20intelligence%20%28human-AI%29%20methods%20offer%20great%20potential.%22%2C%22date%22%3A%222019%22%2C%22proceedingsTitle%22%3A%22IUI%20Workshops%202019%22%2C%22conferenceName%22%3A%22IUI%20Workshops%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%22%22%2C%22ISBN%22%3A%22%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.semanticscholar.org%5C%2Fpaper%5C%2FA-Study-on-Interaction-in-Human-in-the-Loop-Machine-Yang-Kandogan%5C%2F03a4544caed21760df30f0e4f417bbe361c29c9e%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222020-02-08T22%3A57%3A15Z%22%2C%22tags%22%3A%5B%7B%22tag%22%3A%22Interpretability%20and%20explainability%22%7D%2C%7B%22tag%22%3A%22Text%20Analysis%22%7D%5D%7D%7D%2C%7B%22key%22%3A%22YT7FHHAY%22%2C%22library%22%3A%7B%22id%22%3A2133649%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Gebru%20et%20al.%22%2C%22parsedDate%22%3A%222019%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%3Cdiv%20class%3D%5C%22csl-bib-body%5C%22%20style%3D%5C%22line-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%5C%22%3E%5Cn%20%20%3Cdiv%20class%3D%5C%22csl-entry%5C%22%3EGebru%2C%20Timnit%2C%20Jamie%20Morgenstern%2C%20Briana%20Vecchione%2C%20Jennifer%20Wortman%20Vaughan%2C%20Hanna%20Wallach%2C%20Hal%20Daume%26%23xE9%3B%20III%2C%20and%20Kate%20Crawford.%20%26%23x201C%3BDatasheets%20for%20Dat
asets.%26%23x201D%3B%20%3Ci%3EArXiv%3A1803.09010%20%5BCs%5D%3C%5C%2Fi%3E%2C%202019.%20%3Ca%20class%3D%27zp-ItemURL%27%20href%3D%27http%3A%5C%2F%5C%2Farxiv.org%5C%2Fabs%5C%2F1803.09010%27%3Ehttp%3A%5C%2F%5C%2Farxiv.org%5C%2Fabs%5C%2F1803.09010%3C%5C%2Fa%3E.%20%3Ca%20title%3D%27Cite%20in%20RIS%20Format%27%20class%3D%27zp-CiteRIS%27%20href%3D%27https%3A%5C%2F%5C%2Fwe1s.ucsb.edu%5C%2Fwp-content%5C%2Fplugins%5C%2Fzotpress%5C%2Flib%5C%2Frequest%5C%2Frequest.cite.php%3Fapi_user_id%3D2133649%26amp%3Bitem_key%3DYT7FHHAY%27%3ECite%3C%5C%2Fa%3E%20%3C%5C%2Fdiv%3E%5Cn%3C%5C%2Fdiv%3E%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Datasheets%20for%20Datasets%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Timnit%22%2C%22lastName%22%3A%22Gebru%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jamie%22%2C%22lastName%22%3A%22Morgenstern%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Briana%22%2C%22lastName%22%3A%22Vecchione%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jennifer%20Wortman%22%2C%22lastName%22%3A%22Vaughan%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Hanna%22%2C%22lastName%22%3A%22Wallach%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Hal%22%2C%22lastName%22%3A%22Daume%5Cu00e9%20III%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Kate%22%2C%22lastName%22%3A%22Crawford%22%7D%5D%2C%22abstractNote%22%3A%22Currently%20there%20is%20no%20standard%20way%20to%20identify%20how%20a%20dataset%20was%20created%2C%20and%20what%20characteristics%2C%20motivations%2C%20and%20potential%20skews%20it%20represents.%20To%20begin%20to%20address%20this%20issue%2C%20we%20propose%20the%20concept%20of%20a%20datasheet%20for%20datasets%2C%20a%20short%20document%20to%20accompany%20public%20datasets%2C%20commercial%20APIs%2C%20and%20pretrained%20models.%20The%20goal%20of%20this%20proposal%20is%20to%20enable%20better%2
0communication%20between%20dataset%20creators%20and%20users%2C%20and%20help%20the%20AI%20community%20move%20toward%20greater%20transparency%20and%20accountability.%20By%20analogy%2C%20in%20computer%20hardware%2C%20it%20has%20become%20industry%20standard%20to%20accompany%20everything%20from%20the%20simplest%20components%20%28e.g.%2C%20resistors%29%2C%20to%20the%20most%20complex%20microprocessor%20chips%2C%20with%20datasheets%20detailing%20standard%20operating%20characteristics%2C%20test%20results%2C%20recommended%20usage%2C%20and%20other%20information.%20We%20outline%20some%20of%20the%20questions%20a%20datasheet%20for%20datasets%20should%20answer.%20These%20questions%20focus%20on%20when%2C%20where%2C%20and%20how%20the%20training%20data%20was%20gathered%2C%20its%20recommended%20use%20cases%2C%20and%2C%20in%20the%20case%20of%20human-centric%20datasets%2C%20information%20regarding%20the%20subjects%27%20demographics%20and%20consent%20as%20applicable.%20We%20develop%20prototypes%20of%20datasheets%20for%20two%20well-known%20datasets%3A%20Labeled%20Faces%20in%20The%20Wild%20and%20the%20Pang%20%5C%5C%26%20Lee%20Polarity%20Dataset.%22%2C%22date%22%3A%222019%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%22%22%2C%22ISSN%22%3A%22%22%2C%22url%22%3A%22http%3A%5C%2F%5C%2Farxiv.org%5C%2Fabs%5C%2F1803.09010%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222019-12-05T07%3A02%3A23Z%22%2C%22tags%22%3A%5B%7B%22tag%22%3A%22Interpretability%20and%20explainability%22%7D%2C%7B%22tag%22%3A%22Reporting%20and%20documentation%20methods%22%7D%5D%7D%7D%2C%7B%22key%22%3A%22QJ5ZHJWR%22%2C%22library%22%3A%7B%22id%22%3A2133649%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Mitchell%20et%20al.%22%2C%22parsedDate%22%3A%222019%22%2C%22numChildren%22%3A1%7D%2C%22bib%22%3A%22%3Cdiv%20class%3D%5C%22csl-bib-body%5C%22%20style%3D%5C%22line-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%5C%22%3E%5Cn%20%20%3Cdiv%20class%3D%5C%22csl-entry%5C%22%3EMitchell%2C%20Margaret%2C%20Simon
e%20Wu%2C%20Andrew%20Zaldivar%2C%20Parker%20Barnes%2C%20Lucy%20Vasserman%2C%20Ben%20Hutchinson%2C%20Elena%20Spitzer%2C%20Inioluwa%20Deborah%20Raji%2C%20and%20Timnit%20Gebru.%20%26%23x201C%3BModel%20Cards%20for%20Model%20Reporting.%26%23x201D%3B%20%3Ci%3EProceedings%20of%20the%20Conference%20on%20Fairness%2C%20Accountability%2C%20and%20Transparency%20-%20FAT%2A%20%26%23x2019%3B19%3C%5C%2Fi%3E%2C%202019%2C%20220%26%23x2013%3B29.%20%3Ca%20class%3D%27zp-DOIURL%27%20href%3D%27https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1145%5C%2F3287560.3287596%27%3Ehttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1145%5C%2F3287560.3287596%3C%5C%2Fa%3E.%20%3Ca%20title%3D%27Cite%20in%20RIS%20Format%27%20class%3D%27zp-CiteRIS%27%20href%3D%27https%3A%5C%2F%5C%2Fwe1s.ucsb.edu%5C%2Fwp-content%5C%2Fplugins%5C%2Fzotpress%5C%2Flib%5C%2Frequest%5C%2Frequest.cite.php%3Fapi_user_id%3D2133649%26amp%3Bitem_key%3DQJ5ZHJWR%27%3ECite%3C%5C%2Fa%3E%20%3C%5C%2Fdiv%3E%5Cn%3C%5C%2Fdiv%3E%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Model%20Cards%20for%20Model%20Reporting%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Margaret%22%2C%22lastName%22%3A%22Mitchell%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Simone%22%2C%22lastName%22%3A%22Wu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Andrew%22%2C%22lastName%22%3A%22Zaldivar%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Parker%22%2C%22lastName%22%3A%22Barnes%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Lucy%22%2C%22lastName%22%3A%22Vasserman%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Ben%22%2C%22lastName%22%3A%22Hutchinson%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Elena%22%2C%22lastName%22%3A%22Spitzer%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Inioluwa%20Deborah%22%2C%22lastName%22%3A%22Raji%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A
%22Timnit%22%2C%22lastName%22%3A%22Gebru%22%7D%5D%2C%22abstractNote%22%3A%22Trained%20machine%20learning%20models%20are%20increasingly%20used%20to%20perform%20high-impact%20tasks%20in%20areas%20such%20as%20law%20enforcement%2C%20medicine%2C%20education%2C%20and%20employment.%20In%20order%20to%20clarify%20the%20intended%20use%20cases%20of%20machine%20learning%20models%20and%20minimize%20their%20usage%20in%20contexts%20for%20which%20they%20are%20not%20well%20suited%2C%20we%20recommend%20that%20released%20models%20be%20accompanied%20by%20documentation%20detailing%20their%20performance%20characteristics.%20In%20this%20paper%2C%20we%20propose%20a%20framework%20that%20we%20call%20model%20cards%2C%20to%20encourage%20such%20transparent%20model%20reporting.%20Model%20cards%20are%20short%20documents%20accompanying%20trained%20machine%20learning%20models%20that%20provide%20benchmarked%20evaluation%20in%20a%20variety%20of%20conditions%2C%20such%20as%20across%20different%20cultural%2C%20demographic%2C%20or%20phenotypic%20groups%20%28e.g.%2C%20race%2C%20geographic%20location%2C%20sex%2C%20Fitzpatrick%20skin%20type%29%20and%20intersectional%20groups%20%28e.g.%2C%20age%20and%20race%2C%20or%20sex%20and%20Fitzpatrick%20skin%20type%29%20that%20are%20relevant%20to%20the%20intended%20application%20domains.%20Model%20cards%20also%20disclose%20the%20context%20in%20which%20models%20are%20intended%20to%20be%20used%2C%20details%20of%20the%20performance%20evaluation%20procedures%2C%20and%20other%20relevant%20information.%20While%20we%20focus%20primarily%20on%20human-centered%20machine%20learning%20models%20in%20the%20application%20fields%20of%20computer%20vision%20and%20natural%20language%20processing%2C%20this%20framework%20can%20be%20used%20to%20document%20any%20trained%20machine%20learning%20model.%20To%20solidify%20the%20concept%2C%20we%20provide%20cards%20for%20two%20supervised%20models%3A%20One%20trained%20to%20detect%20smiling%20faces%20in%20images%2C%20and%20one%20trained%20to%20detec
t%20toxic%20comments%20in%20text.%20We%20propose%20model%20cards%20as%20a%20step%20towards%20the%20responsible%20democratization%20of%20machine%20learning%20and%20related%20AI%20technology%2C%20increasing%20transparency%20into%20how%20well%20AI%20technology%20works.%20We%20hope%20this%20work%20encourages%20those%20releasing%20trained%20machine%20learning%20models%20to%20accompany%20model%20releases%20with%20similar%20detailed%20evaluation%20numbers%20and%20other%20relevant%20documentation.%22%2C%22date%22%3A%222019%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.1145%5C%2F3287560.3287596%22%2C%22ISSN%22%3A%22%22%2C%22url%22%3A%22http%3A%5C%2F%5C%2Farxiv.org%5C%2Fabs%5C%2F1810.03993%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222019-12-05T06%3A51%3A13Z%22%2C%22tags%22%3A%5B%7B%22tag%22%3A%22Interpretability%20and%20explainability%22%7D%2C%7B%22tag%22%3A%22Reporting%20and%20documentation%20methods%22%7D%5D%7D%7D%2C%7B%22key%22%3A%2233NYWHBB%22%2C%22library%22%3A%7B%22id%22%3A2133649%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Tahmasebi%20et%20al.%22%2C%22parsedDate%22%3A%222019%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%3Cdiv%20class%3D%5C%22csl-bib-body%5C%22%20style%3D%5C%22line-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%5C%22%3E%5Cn%20%20%3Cdiv%20class%3D%5C%22csl-entry%5C%22%3ETahmasebi%2C%20Nina%2C%20Niclas%20Hagen%2C%20Daniel%20Brod%26%23xE9%3Bn%2C%20and%20Mats%20Malm.%20%26%23x201C%3BA%20Convergence%20of%20Methodologies%3A%20Notes%20on%20Data-Intensive%20Humanities%20Research.%26%23x201D%3B%20In%20%3Ci%3EDigital%20Humanities%20in%20the%20Nordic%20Countries%204th%20Conference%3C%5C%2Fi%3E.%20Helsinki%3A%20Nina%20Tahmasebi%2C%202019.%20%3Ca%20class%3D%27zp-ItemURL%27%20href%3D%27%5C%2Fpublication%5C%2F2019-aconvergenceofmethods%5C%2F%27%3E%5C%2Fpublication%5C%2F2019-aconvergenceofmethods%5C%2F%3C%5C%2Fa%3E.%20%3Ca%20title%3D%27Cite%20in%20RIS%20Format%27%20class%3D%27zp-CiteRIS%27%20href%3D%27https%3A%5C%2F%5C%2F
we1s.ucsb.edu%5C%2Fwp-content%5C%2Fplugins%5C%2Fzotpress%5C%2Flib%5C%2Frequest%5C%2Frequest.cite.php%3Fapi_user_id%3D2133649%26amp%3Bitem_key%3D33NYWHBB%27%3ECite%3C%5C%2Fa%3E%20%3C%5C%2Fdiv%3E%5Cn%3C%5C%2Fdiv%3E%22%2C%22data%22%3A%7B%22itemType%22%3A%22conferencePaper%22%2C%22title%22%3A%22A%20Convergence%20of%20Methodologies%3A%20Notes%20on%20Data-Intensive%20Humanities%20Research%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Nina%22%2C%22lastName%22%3A%22Tahmasebi%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Niclas%22%2C%22lastName%22%3A%22Hagen%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Daniel%22%2C%22lastName%22%3A%22Brod%5Cu00e9n%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Mats%22%2C%22lastName%22%3A%22Malm%22%7D%5D%2C%22abstractNote%22%3A%22In%20this%20paper%2C%20we%20discuss%20a%20data-intensive%20research%20methodology%20for%20the%20digital%20humanities.%20We%20highlight%20the%20differences%20and%20commonalities%20between%20quantitative%20and%20qualitative%20research%20methodologies%20in%20relation%20to%20a%20data-intensive%20research%20process.%20We%20argue%20that%20issues%20of%20representativeness%20and%20reduction%20must%20be%20in%20focus%20for%20all%20phases%20of%20the%20process%3B%20from%20the%20status%20of%20texts%20as%20such%2C%20over%20their%20digitization%20to%20pre-processing%20and%20methodological%20exploration.%22%2C%22date%22%3A%222019%22%2C%22proceedingsTitle%22%3A%22Digital%20Humanities%20in%20the%20Nordic%20Countries%204th%20Conference%22%2C%22conferenceName%22%3A%22DHN2019%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%22%22%2C%22ISBN%22%3A%22%22%2C%22url%22%3A%22%5C%2Fpublication%5C%2F2019-aconvergenceofmethods%5C%2F%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222019-08-16T22%3A23%3A09Z%22%2C%22tags%22%3A%5B%7B%22tag%22%3A%22Corpus%20representativeness%22%7D%2C%7B%22tag%22%3A%22DH%20Digital%20humanities%22%7D%2C%7B%22tag%22
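The reporting genres proposed by Gebru et al. (datasheets) and Mitchell et al. (model cards) above are structured documents rather than algorithms. A minimal sketch of the kind of record a model card keeps might look as follows; the field names and the example model are illustrative, not the papers' exact schema:

```python
# Minimal model-card sketch (illustrative fields only; see Mitchell et al.
# 2019 for the full set of proposed sections).
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    model_name: str
    intended_use: str
    out_of_scope_uses: list = field(default_factory=list)
    evaluation_groups: list = field(default_factory=list)  # e.g. demographic slices
    metrics: dict = field(default_factory=dict)            # metric name -> per-group scores

# Hypothetical model, for illustration only.
card = ModelCard(
    model_name="toxicity-classifier-v1",
    intended_use="Flag possibly toxic comments for human review",
    out_of_scope_uses=["fully automated moderation without review"],
    evaluation_groups=["overall", "group_a", "group_b"],
    metrics={"accuracy": {"overall": 0.91, "group_a": 0.89, "group_b": 0.93}},
)
print(card.metrics["accuracy"]["overall"])  # → 0.91
```

The point of the genre is that disaggregated, per-group evaluation travels with the released model, rather than a single aggregate score.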
Pandey, Parul. "Interpretable Machine Learning." Towards Data Science, 2019. https://towardsdatascience.com/interpretable-machine-learning-1dec0f2f3e6b.
Extracting human-understandable insights from any machine learning model.

Murdoch, W. James, Chandan Singh, Karl Kumbier, Reza Abbasi-Asl, and Bin Yu. "Interpretable Machine Learning: Definitions, Methods, and Applications." ArXiv:1901.04592 [Cs, Stat], 2019. http://arxiv.org/abs/1901.04592.
The authors address concerns surrounding machine learning models by defining interpretability in the context of machine learning and introducing the Predictive, Descriptive, Relevant (PDR) framework for discussing interpretations. The PDR framework provides three overarching desiderata for evaluation: predictive accuracy, descriptive accuracy, and relevancy, with relevancy judged relative to a human audience. To help manage the deluge of interpretation methods, they categorize existing techniques into model-based and post-hoc approaches, with sub-groups including sparsity, modularity, and simulatability. Numerous real-world examples demonstrate how practitioners can use the PDR framework, highlighting the often under-appreciated role played by human audiences in discussions of interpretability. Finally, the authors discuss limitations of existing methods and directions for future work.

Carassai, Mauro. "Preliminary Notes on Conceptual Issues Affecting Interpretation of Topic Models." WE1S (blog), 2018. https://we1s.ucsb.edu/research_post/preliminary-notes-on-conceptual-issues-affecting-interpretation-of-topic-models/.
[Beginning:] Any process of interpretation of textual data relates, to some extent, to the fundamental interplay between observable features belonging to the object of our inquiry and the specific "perspective" that we use in observing those features, i.e., the specific point of view that makes us see what we see during such an observational process. If we agree to take every aspect of this fundamental interplay into consideration while developing both our "topic" and "topic model" interpretation methodology, then we need to start by analyzing the problem of what we see.

Rule, Adam, Aurélien Tabard, and James D. Hollan. "Exploration and Explanation in Computational Notebooks." In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems – CHI '18, 1–12. Montreal, QC: ACM Press, 2018. https://doi.org/10.1145/3173574.3173606.
Computational notebooks combine code, visualizations, and text in a single document, and researchers, data analysts, and even journalists are rapidly adopting the medium. The authors present three studies of how notebooks are used to document and share exploratory data analyses. In the first, they analyzed over one million computational notebooks on GitHub, finding that one in four had no explanatory text but consisted entirely of visualizations or code. In a second study, they examined over 200 academic computational notebooks, finding that although the vast majority described methods, only a minority discussed reasoning or results. In a third study, they interviewed fifteen academic data analysts, finding that most considered computational notebooks personal, exploratory, and messy.
ically%20used%20other%20media%20to%20share%20analyses.%20These%20studies%20demonstrate%20a%20tension%20between%20exploration%20and%20explanation%20in%20constructing%20and%20sharing%20computational%20notebooks.%20We%20conclude%20with%20opportunities%20to%20encourage%20explanation%20in%20computational%20media%20without%20hindering%20exploration.%22%2C%22date%22%3A%222018%22%2C%22proceedingsTitle%22%3A%22Proceedings%20of%20the%202018%20CHI%20Conference%20on%20Human%20Factors%20in%20Computing%20Systems%20%20-%20CHI%20%2718%22%2C%22conferenceName%22%3A%22the%202018%20CHI%20Conference%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.1145%5C%2F3173574.3173606%22%2C%22ISBN%22%3A%22978-1-4503-5620-6%22%2C%22url%22%3A%22http%3A%5C%2F%5C%2Fdl.acm.org%5C%2Fcitation.cfm%3Fdoid%3D3173574.3173606%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222020-09-03T05%3A52%3A15Z%22%2C%22tags%22%3A%5B%7B%22tag%22%3A%22Data%20notebooks%22%7D%2C%7B%22tag%22%3A%22Interpretability%20and%20explainability%22%7D%5D%7D%7D%2C%7B%22key%22%3A%22Z6VGKZCS%22%2C%22library%22%3A%7B%22id%22%3A2133649%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Narayanan%20et%20al.%22%2C%22parsedDate%22%3A%222018%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%3Cdiv%20class%3D%5C%22csl-bib-body%5C%22%20style%3D%5C%22line-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%5C%22%3E%5Cn%20%20%3Cdiv%20class%3D%5C%22csl-entry%5C%22%3ENarayanan%2C%20Menaka%2C%20Emily%20Chen%2C%20Jeffrey%20He%2C%20Been%20Kim%2C%20Sam%20Gershman%2C%20and%20Finale%20Doshi-Velez.%20%26%23x201C%3BHow%20Do%20Humans%20Understand%20Explanations%20from%20Machine%20Learning%20Systems%3F%20An%20Evaluation%20of%20the%20Human-Interpretability%20of%20Explanation.%26%23x201D%3B%20%3Ci%3EArXiv%3A1802.00682%20%5BCs%5D%3C%5C%2Fi%3E%2C%202018.%20%3Ca%20class%3D%27zp-ItemURL%27%20href%3D%27http%3A%5C%2F%5C%2Farxiv.org%5C%2Fabs%5C%2F1802.00682%27%3Ehttp%3A%5C%2F%5C%2Farxiv.org%5C%2Fabs%5C%2F1802.00682%3C%5C%2Fa%3E.%20%3Ca%20title%3D%2
7Cite%20in%20RIS%20Format%27%20class%3D%27zp-CiteRIS%27%20href%3D%27https%3A%5C%2F%5C%2Fwe1s.ucsb.edu%5C%2Fwp-content%5C%2Fplugins%5C%2Fzotpress%5C%2Flib%5C%2Frequest%5C%2Frequest.cite.php%3Fapi_user_id%3D2133649%26amp%3Bitem_key%3DZ6VGKZCS%27%3ECite%3C%5C%2Fa%3E%20%3C%5C%2Fdiv%3E%5Cn%3C%5C%2Fdiv%3E%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22How%20do%20Humans%20Understand%20Explanations%20from%20Machine%20Learning%20Systems%3F%20An%20Evaluation%20of%20the%20Human-Interpretability%20of%20Explanation%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Menaka%22%2C%22lastName%22%3A%22Narayanan%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Emily%22%2C%22lastName%22%3A%22Chen%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jeffrey%22%2C%22lastName%22%3A%22He%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Been%22%2C%22lastName%22%3A%22Kim%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Sam%22%2C%22lastName%22%3A%22Gershman%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Finale%22%2C%22lastName%22%3A%22Doshi-Velez%22%7D%5D%2C%22abstractNote%22%3A%22Recent%20years%20have%20seen%20a%20boom%20in%20interest%20in%20machine%20learning%20systems%20that%20can%20provide%20a%20human-understandable%20rationale%20for%20their%20predictions%20or%20decisions.%20However%2C%20exactly%20what%20kinds%20of%20explanation%20are%20truly%20human-interpretable%20remains%20poorly%20understood.%20This%20work%20advances%20our%20understanding%20of%20what%20makes%20explanations%20interpretable%20in%20the%20specific%20context%20of%20verification.%20Suppose%20we%20have%20a%20machine%20learning%20system%20that%20predicts%20X%2C%20and%20we%20provide%20rationale%20for%20this%20prediction%20X.%20Given%20an%20input%2C%20an%20explanation%2C%20and%20an%20output%2C%20is%20the%20output%20consistent%20with%20the%20input%20and%20the%20supposed%20rationa
le%3F%20Via%20a%20series%20of%20user-studies%2C%20we%20identify%20what%20kinds%20of%20increases%20in%20complexity%20have%20the%20greatest%20effect%20on%20the%20time%20it%20takes%20for%20humans%20to%20verify%20the%20rationale%2C%20and%20which%20seem%20relatively%20insensitive.%22%2C%22date%22%3A%222018%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%22%22%2C%22ISSN%22%3A%22%22%2C%22url%22%3A%22http%3A%5C%2F%5C%2Farxiv.org%5C%2Fabs%5C%2F1802.00682%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222020-02-27T08%3A57%3A54Z%22%2C%22tags%22%3A%5B%7B%22tag%22%3A%22Interpretability%20and%20explainability%22%7D%2C%7B%22tag%22%3A%22Machine%20learning%22%7D%5D%7D%7D%2C%7B%22key%22%3A%22KAZPSZE3%22%2C%22library%22%3A%7B%22id%22%3A2133649%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Selbst%20and%20Barocas%22%2C%22parsedDate%22%3A%222018%22%2C%22numChildren%22%3A1%7D%2C%22bib%22%3A%22%3Cdiv%20class%3D%5C%22csl-bib-body%5C%22%20style%3D%5C%22line-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%5C%22%3E%5Cn%20%20%3Cdiv%20class%3D%5C%22csl-entry%5C%22%3ESelbst%2C%20Andrew%20D.%2C%20and%20Solon%20Barocas.%20%26%23x201C%3BThe%20Intuitive%20Appeal%20of%20Explainable%20Machines.%26%23x201D%3B%20%3Ci%3ESSRN%20Electronic%20Journal%3C%5C%2Fi%3E%2C%202018.%20%3Ca%20class%3D%27zp-DOIURL%27%20href%3D%27https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.2139%5C%2Fssrn.3126971%27%3Ehttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.2139%5C%2Fssrn.3126971%3C%5C%2Fa%3E.%20%3Ca%20title%3D%27Cite%20in%20RIS%20Format%27%20class%3D%27zp-CiteRIS%27%20href%3D%27https%3A%5C%2F%5C%2Fwe1s.ucsb.edu%5C%2Fwp-content%5C%2Fplugins%5C%2Fzotpress%5C%2Flib%5C%2Frequest%5C%2Frequest.cite.php%3Fapi_user_id%3D2133649%26amp%3Bitem_key%3DKAZPSZE3%27%3ECite%3C%5C%2Fa%3E%20%3C%5C%2Fdiv%3E%5Cn%3C%5C%2Fdiv%3E%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22The%20Intuitive%20Appeal%20of%20Explainable%20Machines%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%
22Andrew%20D.%22%2C%22lastName%22%3A%22Selbst%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Solon%22%2C%22lastName%22%3A%22Barocas%22%7D%5D%2C%22abstractNote%22%3A%22Algorithmic%20decision-making%20has%20become%20synonymous%20with%20inexplicable%20decision-making%2C%20but%20what%20makes%20algorithms%20so%20difficult%20to%20explain%3F%20This%20Article%20examines%20what%20sets%20machine%20learning%20apart%20from%20other%20ways%20of%20developing%20rules%20for%20decision-making%20and%20the%20problem%20these%20properties%20pose%20for%20explanation.%20We%20show%20that%20machine%20learning%20models%20can%20be%20both%20inscrutable%20and%20nonintuitive%20and%20that%20these%20are%20related%2C%20but%20distinct%2C%20properties.%5Cn%5CnCalls%20for%20explanation%20have%20treated%20these%20problems%20as%20one%20and%20the%20same%2C%20but%20disentangling%20the%20two%20reveals%20that%20they%20demand%20very%20different%20responses.%20Dealing%20with%20inscrutability%20requires%20providing%20a%20sensible%20description%20of%20the%20rules%3B%20addressing%20nonintuitiveness%20requires%20providing%20a%20satisfying%20explanation%20for%20why%20the%20rules%20are%20what%20they%20are.%20Existing%20laws%20like%20the%20Fair%20Credit%20Reporting%20Act%20%28FCRA%29%2C%20the%20Equal%20Credit%20Opportunity%20Act%20%28ECOA%29%2C%20and%20the%20General%20Data%20Protection%20Regulation%20%28GDPR%29%2C%20as%20well%20as%20techniques%20within%20machine%20learning%2C%20are%20focused%20almost%20entirely%20on%20the%20problem%20of%20inscrutability.%20While%20such%20techniques%20could%20allow%20a%20machine%20learning%20system%20to%20comply%20with%20existing%20law%2C%20doing%20so%20may%20not%20help%20if%20the%20goal%20is%20to%20assess%20whether%20the%20basis%20for%20decision-making%20is%20normatively%20defensible.%5Cn%5CnIn%20most%20cases%2C%20intuition%20serves%20as%20the%20unacknowledged%20bridge%20between%20a%20descriptive%20account%20and%20a%20normative%20evaluation.%20But%20because%20mach
ine%20learning%20is%20often%20valued%20for%20its%20ability%20to%20uncover%20statistical%20relationships%20that%20defy%20intuition%2C%20relying%20on%20intuition%20is%20not%20a%20satisfying%20approach.%20This%20Article%20thus%20argues%20for%20other%20mechanisms%20for%20normative%20evaluation.%20To%20know%20why%20the%20rules%20are%20what%20they%20are%2C%20one%20must%20seek%20explanations%20of%20the%20process%20behind%20a%20model%5Cu2019s%20development%2C%20not%20just%20explanations%20of%20the%20model%20itself.%22%2C%22date%22%3A%222018%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.2139%5C%2Fssrn.3126971%22%2C%22ISSN%22%3A%221556-5068%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.ssrn.com%5C%2Fabstract%3D3126971%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222020-02-27T08%3A56%3A34Z%22%2C%22tags%22%3A%5B%7B%22tag%22%3A%22Interpretability%20and%20explainability%22%7D%2C%7B%22tag%22%3A%22Machine%20learning%22%7D%5D%7D%7D%2C%7B%22key%22%3A%22MRZBRAN6%22%2C%22library%22%3A%7B%22id%22%3A2133649%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Sawhney%22%2C%22parsedDate%22%3A%222018%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%3Cdiv%20class%3D%5C%22csl-bib-body%5C%22%20style%3D%5C%22line-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%5C%22%3E%5Cn%20%20%3Cdiv%20class%3D%5C%22csl-entry%5C%22%3ESawhney%2C%20Ravi.%20%26%23x201C%3BHuman%20in%20the%20Loop%3A%20Why%20We%20Will%20Be%20Needed%20to%20Complement%20Artificial%20Intelligence.%26%23x201D%3B%20%3Ci%3ELSE%20Business%20Review%3C%5C%2Fi%3E%20%28blog%29%2C%202018.%20%3Ca%20class%3D%27zp-ItemURL%27%20href%3D%27https%3A%5C%2F%5C%2Fblogs.lse.ac.uk%5C%2Fbusinessreview%5C%2F2018%5C%2F10%5C%2F24%5C%2Fhuman-in-the-loop-why-we-will-be-needed-to-complement-artificial-intelligence%5C%2F%27%3Ehttps%3A%5C%2F%5C%2Fblogs.lse.ac.uk%5C%2Fbusinessreview%5C%2F2018%5C%2F10%5C%2F24%5C%2Fhuman-in-the-loop-why-we-will-be-needed-to-complement-artificial-intelligence%5C%2F%3C%5C%2Fa%3E.%20%3Ca%20title%3D%27Cite%2
0in%20RIS%20Format%27%20class%3D%27zp-CiteRIS%27%20href%3D%27https%3A%5C%2F%5C%2Fwe1s.ucsb.edu%5C%2Fwp-content%5C%2Fplugins%5C%2Fzotpress%5C%2Flib%5C%2Frequest%5C%2Frequest.cite.php%3Fapi_user_id%3D2133649%26amp%3Bitem_key%3DMRZBRAN6%27%3ECite%3C%5C%2Fa%3E%20%3C%5C%2Fdiv%3E%5Cn%3C%5C%2Fdiv%3E%22%2C%22data%22%3A%7B%22itemType%22%3A%22blogPost%22%2C%22title%22%3A%22Human%20in%20the%20loop%3A%20why%20we%20will%20be%20needed%20to%20complement%20artificial%20intelligence%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Ravi%22%2C%22lastName%22%3A%22Sawhney%22%7D%5D%2C%22abstractNote%22%3A%22Along%20with%20artificial%20intelligence%20%28AI%29%2C%20it%20is%20likely%20most%20readers%20will%20have%20observed%20the%20increased%20press%20coverage%20around%20automation.%20More%20recently%20these%20two%20terms%20are%20being%20used%20jointly%20to%20present%5Cu2026%22%2C%22blogTitle%22%3A%22LSE%20Business%20Review%22%2C%22date%22%3A%222018%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fblogs.lse.ac.uk%5C%2Fbusinessreview%5C%2F2018%5C%2F10%5C%2F24%5C%2Fhuman-in-the-loop-why-we-will-be-needed-to-complement-artificial-intelligence%5C%2F%22%2C%22language%22%3A%22en%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222020-02-08T21%3A45%3A39Z%22%2C%22tags%22%3A%5B%7B%22tag%22%3A%22Artificial%20intelligence%22%7D%2C%7B%22tag%22%3A%22Interpretability%20and%20explainability%22%7D%5D%7D%7D%2C%7B%22key%22%3A%22AMRL28R2%22%2C%22library%22%3A%7B%22id%22%3A2133649%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Kleymann%20and%20Stange%22%2C%22parsedDate%22%3A%222018%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%3Cdiv%20class%3D%5C%22csl-bib-body%5C%22%20style%3D%5C%22line-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%5C%22%3E%5Cn%20%20%3Cdiv%20class%3D%5C%22csl-entry%5C%22%3EKleymann%2C%20Rabea%2C%20and%20Jan-Erik%20Stange.%20%26%23x201C%3BTowards%20Hermeneutic%20Visualization%20in%20Digital%20Literary%20Studies%2C%26%23x201D%3B%202018
.%20%3Ca%20class%3D%27zp-ItemURL%27%20href%3D%27http%3A%5C%2F%5C%2Fwww.stereoscope.threedh.net%5C%2FHermeneuticVisualization.pdf%27%3Ehttp%3A%5C%2F%5C%2Fwww.stereoscope.threedh.net%5C%2FHermeneuticVisualization.pdf%3C%5C%2Fa%3E.%20%3Ca%20title%3D%27Cite%20in%20RIS%20Format%27%20class%3D%27zp-CiteRIS%27%20href%3D%27https%3A%5C%2F%5C%2Fwe1s.ucsb.edu%5C%2Fwp-content%5C%2Fplugins%5C%2Fzotpress%5C%2Flib%5C%2Frequest%5C%2Frequest.cite.php%3Fapi_user_id%3D2133649%26amp%3Bitem_key%3DAMRL28R2%27%3ECite%3C%5C%2Fa%3E%20%3C%5C%2Fdiv%3E%5Cn%3C%5C%2Fdiv%3E%22%2C%22data%22%3A%7B%22itemType%22%3A%22manuscript%22%2C%22title%22%3A%22Towards%20Hermeneutic%20Visualization%20in%20Digital%20Literary%20Studies%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Rabea%22%2C%22lastName%22%3A%22Kleymann%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jan-Erik%22%2C%22lastName%22%3A%22Stange%22%7D%5D%2C%22abstractNote%22%3A%22%5BManuscript%20of%20article%20under%20submission%3B%20posted%20online%5D%20Hermeneutic%20approaches%20in%20the%20digital%20humanities%20have%20been%20agnostic%20about%20the%20epistemological%20premises%20of%20hermeneutic%20theory.%20These%20can%20be%20summarized%20as%20%281%29%20differentiation%20author%5C%2Ftext%2C%20%282%29%20hermeneutic%20circle%20and%20%283%29%20dependency%20text%5C%2Frecipient.%20In%20this%20article%20we%20present%20the%20concept%20of%20hermeneutic%20visualization%20as%20a%20means%20of%20bridging%20the%20gap%20between%20classic%20hermeneutic%20theory%20and%20the%20emerging%20practice%20of%20digital%20hermeneutics.%20Since%20data%20visualization%20is%20based%20on%20epistemological%20premises%20stemming%20from%20the%20sciences%2C%20it%20is%20not%20well-equipped%20to%20meet%20hermeneutic%20demands.%20We%20discuss%20four%20postulates%20that%20can%20be%20used%20as%20guidelines%20and%20help%20transform%20traditional%20data%20visualization%20into%20hermeneutic%20visualization%2C%20while%20respecting%20the
%20epistemological%20foundations%20of%20hermeneutic%20theory.%20We%20demonstrate%20the%20usefulness%20of%20the%20postulates%20with%20an%20interactive%20prototype%20%5Cu201cStereoscope%5Cu201d%20designed%20to%20support%20them.%22%2C%22manuscriptType%22%3A%22%22%2C%22date%22%3A%222018%22%2C%22language%22%3A%22en%22%2C%22url%22%3A%22http%3A%5C%2F%5C%2Fwww.stereoscope.threedh.net%5C%2FHermeneuticVisualization.pdf%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222020-02-08T20%3A59%3A37Z%22%2C%22tags%22%3A%5B%7B%22tag%22%3A%22DH%20Digital%20humanities%22%7D%2C%7B%22tag%22%3A%22Data%20visualization%22%7D%2C%7B%22tag%22%3A%22Interpretability%20and%20explainability%22%7D%5D%7D%7D%2C%7B%22key%22%3A%22NTL6YC8M%22%2C%22library%22%3A%7B%22id%22%3A2133649%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Holland%20et%20al.%22%2C%22parsedDate%22%3A%222018%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%3Cdiv%20class%3D%5C%22csl-bib-body%5C%22%20style%3D%5C%22line-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%5C%22%3E%5Cn%20%20%3Cdiv%20class%3D%5C%22csl-entry%5C%22%3EHolland%2C%20Sarah%2C%20Ahmed%20Hosny%2C%20Sarah%20Newman%2C%20Joshua%20Joseph%2C%20and%20Kasia%20Chmielinski.%20%26%23x201C%3BThe%20Dataset%20Nutrition%20Label%3A%20A%20Framework%20To%20Drive%20Higher%20Data%20Quality%20Standards.%26%23x201D%3B%20%3Ci%3EArXiv%3A1805.03677%20%5BCs%5D%3C%5C%2Fi%3E%2C%202018.%20%3Ca%20class%3D%27zp-ItemURL%27%20href%3D%27http%3A%5C%2F%5C%2Farxiv.org%5C%2Fabs%5C%2F1805.03677%27%3Ehttp%3A%5C%2F%5C%2Farxiv.org%5C%2Fabs%5C%2F1805.03677%3C%5C%2Fa%3E.%20%3Ca%20title%3D%27Cite%20in%20RIS%20Format%27%20class%3D%27zp-CiteRIS%27%20href%3D%27https%3A%5C%2F%5C%2Fwe1s.ucsb.edu%5C%2Fwp-content%5C%2Fplugins%5C%2Fzotpress%5C%2Flib%5C%2Frequest%5C%2Frequest.cite.php%3Fapi_user_id%3D2133649%26amp%3Bitem_key%3DNTL6YC8M%27%3ECite%3C%5C%2Fa%3E%20%3C%5C%2Fdiv%3E%5Cn%3C%5C%2Fdiv%3E%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22The%20Dataset%20Nutri
tion%20Label%3A%20A%20Framework%20To%20Drive%20Higher%20Data%20Quality%20Standards%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Sarah%22%2C%22lastName%22%3A%22Holland%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Ahmed%22%2C%22lastName%22%3A%22Hosny%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Sarah%22%2C%22lastName%22%3A%22Newman%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Joshua%22%2C%22lastName%22%3A%22Joseph%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Kasia%22%2C%22lastName%22%3A%22Chmielinski%22%7D%5D%2C%22abstractNote%22%3A%22Artificial%20intelligence%20%28AI%29%20systems%20built%20on%20incomplete%20or%20biased%20data%20will%20often%20exhibit%20problematic%20outcomes.%20Current%20methods%20of%20data%20analysis%2C%20particularly%20before%20model%20development%2C%20are%20costly%20and%20not%20standardized.%20The%20Dataset%20Nutrition%20Label%20%28the%20Label%29%20is%20a%20diagnostic%20framework%20that%20lowers%20the%20barrier%20to%20standardized%20data%20analysis%20by%20providing%20a%20distilled%20yet%20comprehensive%20overview%20of%20dataset%20%5C%22ingredients%5C%22%20before%20AI%20model%20development.%20Building%20a%20Label%20that%20can%20be%20applied%20across%20domains%20and%20data%20types%20requires%20that%20the%20framework%20itself%20be%20flexible%20and%20adaptable%3B%20as%20such%2C%20the%20Label%20is%20comprised%20of%20diverse%20qualitative%20and%20quantitative%20modules%20generated%20through%20multiple%20statistical%20and%20probabilistic%20modelling%20backends%2C%20but%20displayed%20in%20a%20standardized%20format.%20To%20demonstrate%20and%20advance%20this%20concept%2C%20we%20generated%20and%20published%20an%20open%20source%20prototype%20with%20seven%20sample%20modules%20on%20the%20ProPublica%20Dollars%20for%20Docs%20dataset.%20The%20benefits%20of%20the%20Label%20are%20manyfold.%20For%20data%20specialists%2C%20the%20Label%20will%2
0drive%20more%20robust%20data%20analysis%20practices%2C%20provide%20an%20efficient%20way%20to%20select%20the%20best%20dataset%20for%20their%20purposes%2C%20and%20increase%20the%20overall%20quality%20of%20AI%20models%20as%20a%20result%20of%20more%20robust%20training%20datasets%20and%20the%20ability%20to%20check%20for%20issues%20at%20the%20time%20of%20model%20development.%20For%20those%20building%20and%20publishing%20datasets%2C%20the%20Label%20creates%20an%20expectation%20of%20explanation%2C%20which%20will%20drive%20better%20data%20collection%20practices.%20We%20also%20explore%20the%20limitations%20of%20the%20Label%2C%20including%20the%20challenges%20of%20generalizing%20across%20diverse%20datasets%2C%20and%20the%20risk%20of%20using%20%5C%22ground%20truth%5C%22%20data%20as%20a%20comparison%20dataset.%20We%20discuss%20ways%20to%20move%20forward%20given%20the%20limitations%20identified.%20Lastly%2C%20we%20lay%20out%20future%20directions%20for%20the%20Dataset%20Nutrition%20Label%20project%2C%20including%20research%20and%20public%20policy%20agendas%20to%20further%20advance%20consideration%20of%20the%20concept.%22%2C%22date%22%3A%222018%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%22%22%2C%22ISSN%22%3A%22%22%2C%22url%22%3A%22http%3A%5C%2F%5C%2Farxiv.org%5C%2Fabs%5C%2F1805.03677%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222019-12-05T06%3A55%3A35Z%22%2C%22tags%22%3A%5B%7B%22tag%22%3A%22Interpretability%20and%20explainability%22%7D%2C%7B%22tag%22%3A%22Reporting%20and%20documentation%20methods%22%7D%5D%7D%7D%2C%7B%22key%22%3A%226SEBDCE8%22%2C%22library%22%3A%7B%22id%22%3A2133649%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Bender%20and%20Friedman%22%2C%22parsedDate%22%3A%222018%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%3Cdiv%20class%3D%5C%22csl-bib-body%5C%22%20style%3D%5C%22line-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%5C%22%3E%5Cn%20%20%3Cdiv%20class%3D%5C%22csl-entry%5C%22%3EBender%2C%20Emily%20M.%2C%20and%20Batya%20Fried
man.%20%26%23x201C%3BData%20Statements%20for%20Natural%20Language%20Processing%3A%20Toward%20Mitigating%20System%20Bias%20and%20Enabling%20Better%20Science.%26%23x201D%3B%20%3Ci%3ETransactions%20of%20the%20Association%20for%20Computational%20Linguistics%3C%5C%2Fi%3E%206%20%282018%29%3A%20587%26%23x2013%3B604.%20%3Ca%20class%3D%27zp-DOIURL%27%20href%3D%27https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1162%5C%2Ftacl_a_00041%27%3Ehttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1162%5C%2Ftacl_a_00041%3C%5C%2Fa%3E.%20%3Ca%20title%3D%27Cite%20in%20RIS%20Format%27%20class%3D%27zp-CiteRIS%27%20href%3D%27https%3A%5C%2F%5C%2Fwe1s.ucsb.edu%5C%2Fwp-content%5C%2Fplugins%5C%2Fzotpress%5C%2Flib%5C%2Frequest%5C%2Frequest.cite.php%3Fapi_user_id%3D2133649%26amp%3Bitem_key%3D6SEBDCE8%27%3ECite%3C%5C%2Fa%3E%20%3C%5C%2Fdiv%3E%5Cn%3C%5C%2Fdiv%3E%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Data%20Statements%20for%20Natural%20Language%20Processing%3A%20Toward%20Mitigating%20System%20Bias%20and%20Enabling%20Better%20Science%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Emily%20M.%22%2C%22lastName%22%3A%22Bender%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Batya%22%2C%22lastName%22%3A%22Friedman%22%7D%5D%2C%22abstractNote%22%3A%22In%20this%20paper%2C%20we%20propose%20data%20statements%20as%20a%20design%20solution%20and%20professional%20practice%20for%20natural%20language%20processing%20technologists%2C%20in%20both%20research%20and%20development.%20Through%20the%20adoption%20and%20widespread%20use%20of%20data%20statements%2C%20the%20field%20can%20begin%20to%20address%20critical%20scientific%20and%20ethical%20issues%20that%20result%20from%20the%20use%20of%20data%20from%20certain%20populations%20in%20the%20development%20of%20technology%20for%20other%20populations.%20We%20present%20a%20form%20that%20data%20statements%20can%20take%20and%20explore%20the%20implications%20of%20adopting%20them%20as%20part%20of%20regular%20practice
.%20We%20argue%20that%20data%20statements%20will%20help%20alleviate%20issues%20related%20to%20exclusion%20and%20bias%20in%20language%20technology%2C%20lead%20to%20better%20precision%20in%20claims%20about%20how%20natural%20language%20processing%20research%20can%20generalize%20and%20thus%20better%20engineering%20results%2C%20protect%20companies%20from%20public%20embarrassment%2C%20and%20ultimately%20lead%20to%20language%20technology%20that%20meets%20its%20users%20in%20their%20own%20preferred%20linguistic%20style%20and%20furthermore%20does%20not%20misrepresent%20them%20to%20others.%22%2C%22date%22%3A%222018%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.1162%5C%2Ftacl_a_00041%22%2C%22ISSN%22%3A%222307-387X%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.mitpressjournals.org%5C%2Fdoi%5C%2Fabs%5C%2F10.1162%5C%2Ftacl_a_00041%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222019-12-05T06%3A54%3A28Z%22%2C%22tags%22%3A%5B%7B%22tag%22%3A%22Interpretability%20and%20explainability%22%7D%2C%7B%22tag%22%3A%22Reporting%20and%20documentation%20methods%22%7D%5D%7D%7D%2C%7B%22key%22%3A%228UEU8HL4%22%2C%22library%22%3A%7B%22id%22%3A2133649%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Hind%20et%20al.%22%2C%22parsedDate%22%3A%222018%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%3Cdiv%20class%3D%5C%22csl-bib-body%5C%22%20style%3D%5C%22line-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%5C%22%3E%5Cn%20%20%3Cdiv%20class%3D%5C%22csl-entry%5C%22%3EHind%2C%20Michael%2C%20Dennis%20Wei%2C%20Murray%20Campbell%2C%20Noel%20C.%20F.%20Codella%2C%20Amit%20Dhurandhar%2C%20Aleksandra%20Mojsilovi%26%23x107%3B%2C%20Karthikeyan%20Natesan%20Ramamurthy%2C%20and%20Kush%20R.%20Varshney.%20%26%23x201C%3BTED%3A%20Teaching%20AI%20to%20Explain%20Its%20Decisions.%26%23x201D%3B%20%3Ci%3EArXiv%3A1811.04896%20%5BCs%5D%3C%5C%2Fi%3E%2C%202018.%20%3Ca%20class%3D%27zp-ItemURL%27%20href%3D%27http%3A%5C%2F%5C%2Farxiv.org%5C%2Fabs%5C%2F1811.04896%27%3Ehttp%3A%5C%2F%5C%2Farxiv.org%5C%2Fab
s%5C%2F1811.04896%3C%5C%2Fa%3E.%20%3Ca%20title%3D%27Cite%20in%20RIS%20Format%27%20class%3D%27zp-CiteRIS%27%20href%3D%27https%3A%5C%2F%5C%2Fwe1s.ucsb.edu%5C%2Fwp-content%5C%2Fplugins%5C%2Fzotpress%5C%2Flib%5C%2Frequest%5C%2Frequest.cite.php%3Fapi_user_id%3D2133649%26amp%3Bitem_key%3D8UEU8HL4%27%3ECite%3C%5C%2Fa%3E%20%3C%5C%2Fdiv%3E%5Cn%3C%5C%2Fdiv%3E%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22TED%3A%20Teaching%20AI%20to%20Explain%20its%20Decisions%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Michael%22%2C%22lastName%22%3A%22Hind%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Dennis%22%2C%22lastName%22%3A%22Wei%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Murray%22%2C%22lastName%22%3A%22Campbell%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Noel%20C.%20F.%22%2C%22lastName%22%3A%22Codella%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Amit%22%2C%22lastName%22%3A%22Dhurandhar%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Aleksandra%22%2C%22lastName%22%3A%22Mojsilovi%5Cu0107%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Karthikeyan%20Natesan%22%2C%22lastName%22%3A%22Ramamurthy%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Kush%20R.%22%2C%22lastName%22%3A%22Varshney%22%7D%5D%2C%22abstractNote%22%3A%22Artificial%20intelligence%20systems%20are%20being%20increasingly%20deployed%20due%20to%20their%20potential%20to%20increase%20the%20efficiency%2C%20scale%2C%20consistency%2C%20fairness%2C%20and%20accuracy%20of%20decisions.%20However%2C%20as%20many%20of%20these%20systems%20are%20opaque%20in%20their%20operation%2C%20there%20is%20a%20growing%20demand%20for%20such%20systems%20to%20provide%20explanations%20for%20their%20decisions.%20Conventional%20approaches%20to%20this%20problem%20attempt%20to%20expose%20or%20discover%20the%20inner%20workings%20of%20a%20machine%20l
earning%20model%20with%20the%20hope%20that%20the%20resulting%20explanations%20will%20be%20meaningful%20to%20the%20consumer.%20In%20contrast%2C%20this%20paper%20suggests%20a%20new%20approach%20to%20this%20problem.%20It%20introduces%20a%20simple%2C%20practical%20framework%2C%20called%20Teaching%20Explanations%20for%20Decisions%20%28TED%29%2C%20that%20provides%20meaningful%20explanations%20that%20match%20the%20mental%20model%20of%20the%20consumer.%20We%20illustrate%20the%20generality%20and%20effectiveness%20of%20this%20approach%20with%20two%20different%20examples%2C%20resulting%20in%20highly%20accurate%20explanations%20with%20no%20loss%20of%20prediction%20accuracy%20for%20these%20two%20examples.%22%2C%22date%22%3A%222018%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%22%22%2C%22ISSN%22%3A%22%22%2C%22url%22%3A%22http%3A%5C%2F%5C%2Farxiv.org%5C%2Fabs%5C%2F1811.04896%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222019-08-09T19%3A01%3A21Z%22%2C%22tags%22%3A%5B%7B%22tag%22%3A%22Artificial%20intelligence%22%7D%2C%7B%22tag%22%3A%22Interpretability%20and%20explainability%22%7D%2C%7B%22tag%22%3A%22Machine%20learning%22%7D%5D%7D%7D%2C%7B%22key%22%3A%22HI7SBGVB%22%2C%22library%22%3A%7B%22id%22%3A2133649%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Alvarez-Melis%20and%20Jaakkola%22%2C%22parsedDate%22%3A%222018%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%3Cdiv%20class%3D%5C%22csl-bib-body%5C%22%20style%3D%5C%22line-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%5C%22%3E%5Cn%20%20%3Cdiv%20class%3D%5C%22csl-entry%5C%22%3EAlvarez-Melis%2C%20David%2C%20and%20Tommi%20Jaakkola.%20%26%23x201C%3BTowards%20Robust%20Interpretability%20with%20Self-Explaining%20Neural%20Networks.%26%23x201D%3B%20In%20%3Ci%3EAdvances%20in%20Neural%20Information%20Processing%20Systems%2031%3C%5C%2Fi%3E%2C%20edited%20by%20S.%20Bengio%2C%20H.%20Wallach%2C%20H.%20Larochelle%2C%20K.%20Grauman%2C%20N.%20Cesa-Bianchi%2C%20and%20R.%20Garnett%2C%207775%26%23x2013%3B84.%20Curran%20
Alvarez-Melis, David, and Tommi Jaakkola. "Towards Robust Interpretability with Self-Explaining Neural Networks." In Advances in Neural Information Processing Systems 31, edited by S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett. Curran Associates, Inc., 2018. http://papers.nips.cc/paper/8003-towards-robust-interpretability-with-self-explaining-neural-networks.pdf.

Guldi, Jo. "Critical Search: A Procedure for Guided Reading in Large-Scale Textual Corpora." Journal of Cultural Analytics, 2018. https://doi.org/10.22148/16.030.

Gall, Richard. "Machine Learning Explainability vs Interpretability: Two Concepts That Could Help Restore Trust in AI." KDnuggets, 2018. https://www.kdnuggets.com/2018/12/machine-learning-explainability-interpretability-ai.html.

Gilpin, Leilani H., David Bau, Ben Z. Yuan, Ayesha Bajwa, Michael Specter, and Lalana Kagal. "Explaining Explanations: An Overview of Interpretability of Machine Learning." arXiv:1806.00069 [cs, stat], 2018. http://arxiv.org/abs/1806.00069.

Spencer, Ann. "Make Machine Learning Interpretability More Rigorous." Domino Data Lab, 2018. https://blog.dominodatalab.com/make-machine-learning-interpretability-rigorous/.

Hall, Patrick, and Navdeep Gill. Introduction to Machine Learning Interpretability. O'Reilly Media, Inc., 2018. https://proquest.safaribooksonline.com/9781492033158.

Goodman, Bryce, and Seth Flaxman. "European Union Regulations on Algorithmic Decision-Making and a 'Right to Explanation.'" AI Magazine 38, no. 3 (2017): 50–57. https://doi.org/10.1609/aimag.v38i3.2741.

Lipton, Zachary C. "The Mythos of Model Interpretability." arXiv:1606.03490 [cs, stat], 2017. http://arxiv.org/abs/1606.03490.

Edwards, Lilian, and Michael Veale. "Slave to the Algorithm? Why a 'Right to an Explanation' Is Probably Not the Remedy You Are Looking For." SSRN Scholarly Paper ID 2972855. Rochester, NY: Social Science Research Network, 2017. https://papers.ssrn.com/abstract=2972855.

Samek, Wojciech, Thomas Wiegand, and Klaus-Robert Müller. "Explainable Artificial Intelligence." International Telecommunication Union Journal, no. 1 (2017): 1–10. https://www.itu.int/en/journal/001/Pages/05.aspx.

Doshi-Velez, Finale, and Been Kim. "Towards A Rigorous Science of Interpretable Machine Learning." arXiv:1702.08608 [cs, stat], 2017. http://arxiv.org/abs/1702.08608.

Paul, Michael J. "Interpretable Machine Learning: Lessons from Topic Modeling." In CHI Workshop on Human-Centered Machine Learning, 2016. https://cmci.colorado.edu/~mpaul/files/chi16hcml_interpretable.pdf.
community%20has%20characterized%20interpretability%2C%20and%20discusses%20how%20ideas%20used%20in%20topic%20modeling%20could%20be%20used%20to%20make%20other%20types%20of%20machine%20learning%20more%20interpretable.%20Interpretability%20is%20discussed%20both%20from%20the%20perspective%20of%20evaluation%20%28%5Cu201chow%20interpretable%20is%20this%20model%3F%5Cu201d%29%20and%20training%20%28%5Cu201chow%20can%20we%20make%20this%20model%20more%20interpretable%3F%5Cu201d%29%20in%20machine%20learning.%22%2C%22date%22%3A%222016%22%2C%22proceedingsTitle%22%3A%22CHI%20Workshop%20on%20Human-Centered%20Machine%20Learning%22%2C%22conferenceName%22%3A%22%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%22%22%2C%22ISBN%22%3A%22%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fcmci.colorado.edu%5C%2F~mpaul%5C%2Ffiles%5C%2Fchi16hcml_interpretable.pdf%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222019-08-18T18%3A25%3A20Z%22%2C%22tags%22%3A%5B%7B%22tag%22%3A%22Interpretability%20and%20explainability%22%7D%2C%7B%22tag%22%3A%22Machine%20learning%22%7D%2C%7B%22tag%22%3A%22Topic%20model%20interpretation%22%7D%2C%7B%22tag%22%3A%22Topic%20modeling%22%7D%5D%7D%7D%2C%7B%22key%22%3A%223RRGTBRT%22%2C%22library%22%3A%7B%22id%22%3A2133649%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Alexander%20and%20Gleicher%22%2C%22parsedDate%22%3A%222016%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%3Cdiv%20class%3D%5C%22csl-bib-body%5C%22%20style%3D%5C%22line-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%5C%22%3E%5Cn%20%20%3Cdiv%20class%3D%5C%22csl-entry%5C%22%3EAlexander%2C%20Eric%2C%20and%20Michael%20Gleicher.%20%26%23x201C%3BTask-Driven%20Comparison%20of%20Topic%20Models.%26%23x201D%3B%20%3Ci%3EIEEE%20Transactions%20on%20Visualization%20and%20Computer%20Graphics%3C%5C%2Fi%3E%2022%2C%20no.%201%20%282016%29%3A%20320%26%23x2013%3B29.%20%3Ca%20class%3D%27zp-DOIURL%27%20href%3D%27https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1109%5C%2FTVCG.2015.2467618%27%3Ehttps%3A%5C%2F%5C%2Fdoi.org%5C%2F1
0.1109%5C%2FTVCG.2015.2467618%3C%5C%2Fa%3E.%20%3Ca%20title%3D%27Cite%20in%20RIS%20Format%27%20class%3D%27zp-CiteRIS%27%20href%3D%27https%3A%5C%2F%5C%2Fwe1s.ucsb.edu%5C%2Fwp-content%5C%2Fplugins%5C%2Fzotpress%5C%2Flib%5C%2Frequest%5C%2Frequest.cite.php%3Fapi_user_id%3D2133649%26amp%3Bitem_key%3D3RRGTBRT%27%3ECite%3C%5C%2Fa%3E%20%3C%5C%2Fdiv%3E%5Cn%3C%5C%2Fdiv%3E%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Task-Driven%20Comparison%20of%20Topic%20Models%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Eric%22%2C%22lastName%22%3A%22Alexander%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Michael%22%2C%22lastName%22%3A%22Gleicher%22%7D%5D%2C%22abstractNote%22%3A%22Topic%20modeling%2C%20a%20method%20of%20statistically%20extracting%20thematic%20content%20from%20a%20large%20collection%20of%20texts%2C%20is%20used%20for%20a%20wide%20variety%20of%20tasks%20within%20text%20analysis.%20Though%20there%20are%20a%20growing%20number%20of%20tools%20and%20techniques%20for%20exploring%20single%20models%2C%20comparisons%20between%20models%20are%20generally%20reduced%20to%20a%20small%20set%20of%20numerical%20metrics.%20These%20metrics%20may%20or%20may%20not%20reflect%20a%20model%27s%20performance%20on%20the%20analyst%27s%20intended%20task%2C%20and%20can%20therefore%20be%20insufficient%20to%20diagnose%20what%20causes%20differences%20between%20models.%20In%20this%20paper%2C%20we%20explore%20task-centric%20topic%20model%20comparison%2C%20considering%20how%20we%20can%20both%20provide%20detail%20for%20a%20more%20nuanced%20understanding%20of%20differences%20and%20address%20the%20wealth%20of%20tasks%20for%20which%20topic%20models%20are%20used.%20We%20derive%20comparison%20tasks%20from%20single-model%20uses%20of%20topic%20models%2C%20which%20predominantly%20fall%20into%20the%20categories%20of%20understanding%20topics%2C%20understanding%20similarity%2C%20and%20understanding%20change.%20Finally%2C%20we%20provi
de%20several%20visualization%20techniques%20that%20facilitate%20these%20tasks%2C%20including%20buddy%20plots%2C%20which%20combine%20color%20and%20position%20encodings%20to%20allow%20analysts%20to%20readily%20view%20changes%20in%20document%20similarity.%22%2C%22date%22%3A%222016%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.1109%5C%2FTVCG.2015.2467618%22%2C%22ISSN%22%3A%221077-2626%22%2C%22url%22%3A%22http%3A%5C%2F%5C%2Fieeexplore.ieee.org%5C%2Fdocument%5C%2F7194832%5C%2F%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222020-01-03T19%3A57%3A08Z%22%2C%22tags%22%3A%5B%7B%22tag%22%3A%22Interpretability%20and%20explainability%22%7D%2C%7B%22tag%22%3A%22Topic%20model%20interpretation%22%7D%2C%7B%22tag%22%3A%22Topic%20model%20optimization%22%7D%5D%7D%7D%2C%7B%22key%22%3A%223I264DEX%22%2C%22library%22%3A%7B%22id%22%3A2133649%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Collins%20et%20al.%22%2C%22parsedDate%22%3A%222015%22%2C%22numChildren%22%3A1%7D%2C%22bib%22%3A%22%3Cdiv%20class%3D%5C%22csl-bib-body%5C%22%20style%3D%5C%22line-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%5C%22%3E%5Cn%20%20%3Cdiv%20class%3D%5C%22csl-entry%5C%22%3ECollins%2C%20Gary%20S.%2C%20Johannes%20B.%20Reitsma%2C%20Douglas%20G.%20Altman%2C%20and%20Karel%20G.M.%20Moons.%20%26%23x201C%3BTransparent%20Reporting%20of%20a%20Multivariable%20Prediction%20Model%20for%20Individual%20Prognosis%20Or%20Diagnosis%20%28TRIPOD%29%3A%20The%20TRIPOD%20Statement.%26%23x201D%3B%20%3Ci%3EAnnals%20of%20Internal%20Medicine%3C%5C%2Fi%3E%20162%2C%20no.%201%20%282015%29%3A%2055.%20%3Ca%20class%3D%27zp-DOIURL%27%20href%3D%27https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.7326%5C%2FM14-0697%27%3Ehttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.7326%5C%2FM14-0697%3C%5C%2Fa%3E.%20%3Ca%20title%3D%27Cite%20in%20RIS%20Format%27%20class%3D%27zp-CiteRIS%27%20href%3D%27https%3A%5C%2F%5C%2Fwe1s.ucsb.edu%5C%2Fwp-content%5C%2Fplugins%5C%2Fzotpress%5C%2Flib%5C%2Frequest%5C%2Frequest.cite.php%3Fapi_user_id%3D2133649%26amp%3Bite
m_key%3D3I264DEX%27%3ECite%3C%5C%2Fa%3E%20%3C%5C%2Fdiv%3E%5Cn%3C%5C%2Fdiv%3E%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Transparent%20Reporting%20of%20a%20multivariable%20prediction%20model%20for%20Individual%20Prognosis%20Or%20Diagnosis%20%28TRIPOD%29%3A%20The%20TRIPOD%20Statement%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Gary%20S.%22%2C%22lastName%22%3A%22Collins%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Johannes%20B.%22%2C%22lastName%22%3A%22Reitsma%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Douglas%20G.%22%2C%22lastName%22%3A%22Altman%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Karel%20G.M.%22%2C%22lastName%22%3A%22Moons%22%7D%5D%2C%22abstractNote%22%3A%22The%20Transparent%20Reporting%20of%20a%20multivariable%20prediction%20model%20for%20Individual%20Prognosis%20Or%20Diagnosis%20%28TRIPOD%29%20Initiative%20developed%20a%20set%20of%20recommendations%20for%20the%20reporting%20of%20studies%20developing%2C%20validating%2C%20or%20updating%20a%20prediction%20model%2C%20whether%20for%20diagnostic%20or%20prognostic%20purposes....The%20resulting%20TRIPOD%20Statement%20is%20a%20checklist%20of%2022%20items%2C%20deemed%20essential%20for%20transparent%20reporting%20of%20a%20prediction%20model%20study.%22%2C%22date%22%3A%222015%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.7326%5C%2FM14-0697%22%2C%22ISSN%22%3A%220003-4819%22%2C%22url%22%3A%22http%3A%5C%2F%5C%2Fannals.org%5C%2Farticle.aspx%3Fdoi%3D10.7326%5C%2FM14-0697%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222019-12-05T06%3A59%3A52Z%22%2C%22tags%22%3A%5B%7B%22tag%22%3A%22Interpretability%20and%20explainability%22%7D%2C%7B%22tag%22%3A%22Reporting%20and%20documentation%20methods%22%7D%5D%7D%7D%2C%7B%22key%22%3A%229ETYSREH%22%2C%22library%22%3A%7B%22id%22%3A2133649%7D%2C%22meta%22%3A%7B%22lastModifiedByUser%22%3A%7B%22id%22%3A22837%2C%22username%22%3A%22ayliu%2
2%2C%22name%22%3A%22Alan%20Liu%22%2C%22links%22%3A%7B%22alternate%22%3A%7B%22href%22%3A%22https%3A%5C%2F%5C%2Fwww.zotero.org%5C%2Fayliu%22%2C%22type%22%3A%22text%5C%2Fhtml%22%7D%7D%7D%2C%22creatorSummary%22%3A%22Findlater%20et%20al.%22%2C%22parsedDate%22%3A%222014%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%3Cdiv%20class%3D%5C%22csl-bib-body%5C%22%20style%3D%5C%22line-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%5C%22%3E%5Cn%20%20%3Cdiv%20class%3D%5C%22csl-entry%5C%22%3EFindlater%2C%20Leah%2C%20Jordan%20L.%20Boyd-Graber%2C%20Yuening%20Hu%2C%20Jason%20Chuang%2C%20and%20Alison%20Smith.%20%3Ci%3EConcurrent%20Visualization%20of%20Relationships%20between%20Words%20and%20Topics%20in%20Topic%20Models%3C%5C%2Fi%3E%2C%202014.%20%3Ca%20class%3D%27zp-ItemURL%27%20href%3D%27%5C%2Fpaper%5C%2FConcurrent-Visualization-of-Relationships-between-Smith-Chuang%5C%2F096ed34cd5d56b5daea50336f891dc26a32b981d%27%3E%5C%2Fpaper%5C%2FConcurrent-Visualization-of-Relationships-between-Smith-Chuang%5C%2F096ed34cd5d56b5daea50336f891dc26a32b981d%3C%5C%2Fa%3E.%20%3Ca%20title%3D%27Cite%20in%20RIS%20Format%27%20class%3D%27zp-CiteRIS%27%20href%3D%27https%3A%5C%2F%5C%2Fwe1s.ucsb.edu%5C%2Fwp-content%5C%2Fplugins%5C%2Fzotpress%5C%2Flib%5C%2Frequest%5C%2Frequest.cite.php%3Fapi_user_id%3D2133649%26amp%3Bitem_key%3D9ETYSREH%27%3ECite%3C%5C%2Fa%3E%20%3C%5C%2Fdiv%3E%5Cn%3C%5C%2Fdiv%3E%22%2C%22data%22%3A%7B%22itemType%22%3A%22book%22%2C%22title%22%3A%22Concurrent%20Visualization%20of%20Relationships%20between%20Words%20and%20Topics%20in%20Topic%20Models%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Leah%22%2C%22lastName%22%3A%22Findlater%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jordan%20L.%22%2C%22lastName%22%3A%22Boyd-Graber%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yuening%22%2C%22lastName%22%3A%22Hu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jason%22%2C%22lastName%22
%3A%22Chuang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Alison%22%2C%22lastName%22%3A%22Smith%22%7D%5D%2C%22abstractNote%22%3A%22Analysis%20tools%20based%20on%20topic%20models%20are%20often%20used%20as%20a%20means%20to%20explore%20large%20amounts%20of%20unstructured%20data.%20Users%20often%20reason%20about%20the%20correctness%20of%20a%20model%20using%20relationships%20between%20words%20within%20the%20topics%20or%20topics%20within%20the%20model.%20This%20useful%20contextual%20information%20is%20computed%20as%20term%20co-occurrence%20and%20topic%20covariance%20and%20overlay%20it%20on%20top%20of%20standard%20topic%20model%20output%20via%20an%20intuitive%20interactive%20visualization.%20This%20is%20a%20work%20in%20progress%20with%20the%20end%20goal%20to%20combine%20the%20visual%20representation%20with%20interactions%20and%20online%20learning%2C%20so%20the%20users%20can%20directly%20explore%20%28a%29%20why%20a%20model%20may%20not%20align%20with%20their%20intuition%20and%20%28b%29%20modify%20the%20model%20as%20needed.%22%2C%22date%22%3A%222014%22%2C%22language%22%3A%22en%22%2C%22ISBN%22%3A%22%22%2C%22url%22%3A%22%5C%2Fpaper%5C%2FConcurrent-Visualization-of-Relationships-between-Smith-Chuang%5C%2F096ed34cd5d56b5daea50336f891dc26a32b981d%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222019-07-27T21%3A33%3A39Z%22%2C%22tags%22%3A%5B%7B%22tag%22%3A%22Interpretability%20and%20explainability%22%7D%2C%7B%22tag%22%3A%22Topic%20model%20interpretation%22%7D%2C%7B%22tag%22%3A%22Topic%20model%20visualization%22%7D%2C%7B%22tag%22%3A%22Topic%20modeling%22%7D%5D%7D%7D%2C%7B%22key%22%3A%229C54BR32%22%2C%22library%22%3A%7B%22id%22%3A2133649%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Freitas%22%2C%22parsedDate%22%3A%222014%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%3Cdiv%20class%3D%5C%22csl-bib-body%5C%22%20style%3D%5C%22line-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%5C%22%3E%5Cn%20%20%3Cdiv%20class%3D%5C%22csl-entry%5C
%22%3EFreitas%2C%20Alex%20A.%20%26%23x201C%3BComprehensible%20Classification%20Models%3A%20A%20Position%20Paper.%26%23x201D%3B%20In%20%3Ci%3EACM%20SIGKDD%20Explorations%3C%5C%2Fi%3E%2C%2015.1%3A1%26%23x2013%3B10.%20Association%20for%20Computing%20Machinery%2C%202014.%20%3Ca%20class%3D%27zp-ItemURL%27%20href%3D%27https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1145%5C%2F2594473.2594475%27%3Ehttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1145%5C%2F2594473.2594475%3C%5C%2Fa%3E.%20%3Ca%20title%3D%27Cite%20in%20RIS%20Format%27%20class%3D%27zp-CiteRIS%27%20href%3D%27https%3A%5C%2F%5C%2Fwe1s.ucsb.edu%5C%2Fwp-content%5C%2Fplugins%5C%2Fzotpress%5C%2Flib%5C%2Frequest%5C%2Frequest.cite.php%3Fapi_user_id%3D2133649%26amp%3Bitem_key%3D9C54BR32%27%3ECite%3C%5C%2Fa%3E%20%3C%5C%2Fdiv%3E%5Cn%3C%5C%2Fdiv%3E%22%2C%22data%22%3A%7B%22itemType%22%3A%22conferencePaper%22%2C%22title%22%3A%22Comprehensible%20classification%20models%3A%20a%20position%20paper%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Alex%20A.%22%2C%22lastName%22%3A%22Freitas%22%7D%5D%2C%22abstractNote%22%3A%22The%20vast%20majority%20of%20the%20literature%20evaluates%20the%20performance%20of%20classification%20models%20using%20only%20the%20criterion%20of%20predictive%20accuracy.%20This%20paper%20reviews%20the%20case%20for%20considering%20also%20the%20comprehensibility%20%28interpretability%29%20of%20classification%20models%2C%20and%20discusses%20the%20interpretability%20of%20five%20types%20of%20classification%20models%2C%20namely%20decision%20trees%2C%20classification%20rules%2C%20decision%20tables%2C%20nearest%20neighbors%20and%20Bayesian%20network%20classifiers.%20We%20discuss%20both%20interpretability%20issues%20which%20are%20specific%20to%20each%20of%20those%20model%20types%20and%20more%20generic%20interpretability%20issues%2C%20namely%20the%20drawbacks%20of%20using%20model%20size%20as%20the%20only%20criterion%20to%20evaluate%20the%20comprehensibility%20of%20a%20model%2C%20and%20the%20use%20of%20monotoni
city%20constraints%20to%20improve%20the%20comprehensibility%20and%20acceptance%20of%20classification%20models%20by%20users.%22%2C%22date%22%3A%222014%22%2C%22proceedingsTitle%22%3A%22ACM%20SIGKDD%20Explorations%22%2C%22conferenceName%22%3A%22%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%22%22%2C%22ISBN%22%3A%22%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1145%5C%2F2594473.2594475%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222020-08-12T19%3A14%3A01Z%22%2C%22tags%22%3A%5B%7B%22tag%22%3A%22Interpretability%20and%20explainability%22%7D%2C%7B%22tag%22%3A%22Machine%20learning%22%7D%2C%7B%22tag%22%3A%22Text%20classification%22%7D%5D%7D%7D%2C%7B%22key%22%3A%228DUDF9VX%22%2C%22library%22%3A%7B%22id%22%3A2133649%7D%2C%22meta%22%3A%7B%22lastModifiedByUser%22%3A%7B%22id%22%3A22837%2C%22username%22%3A%22ayliu%22%2C%22name%22%3A%22Alan%20Liu%22%2C%22links%22%3A%7B%22alternate%22%3A%7B%22href%22%3A%22https%3A%5C%2F%5C%2Fwww.zotero.org%5C%2Fayliu%22%2C%22type%22%3A%22text%5C%2Fhtml%22%7D%7D%7D%2C%22creatorSummary%22%3A%22Liu%22%2C%22parsedDate%22%3A%222013%22%2C%22numChildren%22%3A2%7D%2C%22bib%22%3A%22%3Cdiv%20class%3D%5C%22csl-bib-body%5C%22%20style%3D%5C%22line-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%5C%22%3E%5Cn%20%20%3Cdiv%20class%3D%5C%22csl-entry%5C%22%3ELiu%2C%20Alan.%20%26%23x201C%3BThe%20Meaning%20of%20the%20Digital%20Humanities.%26%23x201D%3B%20%3Ci%3EPMLA%3C%5C%2Fi%3E%20128%2C%20no.%202%20%282013%29%3A%20409%26%23x2013%3B23.%20%3Ca%20class%3D%27zp-DOIURL%27%20href%3D%27https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1632%5C%2Fpmla.2013.128.2.409%27%3Ehttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1632%5C%2Fpmla.2013.128.2.409%3C%5C%2Fa%3E.%20%3Ca%20title%3D%27Cite%20in%20RIS%20Format%27%20class%3D%27zp-CiteRIS%27%20href%3D%27https%3A%5C%2F%5C%2Fwe1s.ucsb.edu%5C%2Fwp-content%5C%2Fplugins%5C%2Fzotpress%5C%2Flib%5C%2Frequest%5C%2Frequest.cite.php%3Fapi_user_id%3D2133649%26amp%3Bitem_key%3D8DUDF9VX%27%3ECite%3C%5C%2Fa%3E%20%3C%5C%2Fdiv%3E%5
Cn%3C%5C%2Fdiv%3E%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22The%20Meaning%20of%20the%20Digital%20Humanities%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Alan%22%2C%22lastName%22%3A%22Liu%22%7D%5D%2C%22abstractNote%22%3A%22Meaning%20is%20clearly%20a%20key%20meta-value%2C%20and%20therefore%20also%20meta-problem%2C%20for%20digital%20humanities.%20To%20explicate%20the%20meaning%20problem%2C%20Liu%20will%20spotlight%20a%20recent%20work%20of%20digital%20literary%20scholarship%20by%20two%20younger%20scholars%20that%20is%20both%20state-of-the-art%20and%20representative%20of%20major%20trends%20in%20digital%20humanities%5Cu2013a%20tactic%20that%20has%20the%20additional%20advantage%20of%20providing%20outsiders%20to%20the%20field%20with%20a%20close%2C%20end-to-end%20look%20at%20a%20single%20example%20of%20DH%20research.%22%2C%22date%22%3A%222013%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.1632%5C%2Fpmla.2013.128.2.409%22%2C%22ISSN%22%3A%220030-8129%22%2C%22url%22%3A%22http%3A%5C%2F%5C%2Fwww.mlajournals.org%5C%2Fdoi%5C%2Fabs%5C%2F10.1632%5C%2Fpmla.2013.128.2.409%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222019-07-27T21%3A46%3A27Z%22%2C%22tags%22%3A%5B%7B%22tag%22%3A%22DH%20Digital%20humanities%22%7D%2C%7B%22tag%22%3A%22Interpretability%20and%20explainability%22%7D%5D%7D%7D%2C%7B%22key%22%3A%22BNGLJNZU%22%2C%22library%22%3A%7B%22id%22%3A2133649%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Grimmer%20and%20King%22%2C%22parsedDate%22%3A%222011%22%2C%22numChildren%22%3A1%7D%2C%22bib%22%3A%22%3Cdiv%20class%3D%5C%22csl-bib-body%5C%22%20style%3D%5C%22line-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%5C%22%3E%5Cn%20%20%3Cdiv%20class%3D%5C%22csl-entry%5C%22%3EGrimmer%2C%20Justin%2C%20and%20Gary%20King.%20%26%23x201C%3BGeneral%20Purpose%20Computer-Assisted%20Clustering%20and%20Conceptualization.%26%23x201D%3B%20%3Ci%3EProceedings%20of%20the%20National%20Academy%20of%2
0Sciences%3C%5C%2Fi%3E%20108%2C%20no.%207%20%282011%29%3A%202643%26%23x2013%3B50.%20%3Ca%20class%3D%27zp-DOIURL%27%20href%3D%27https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1073%5C%2Fpnas.1018067108%27%3Ehttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1073%5C%2Fpnas.1018067108%3C%5C%2Fa%3E.%20%3Ca%20title%3D%27Cite%20in%20RIS%20Format%27%20class%3D%27zp-CiteRIS%27%20href%3D%27https%3A%5C%2F%5C%2Fwe1s.ucsb.edu%5C%2Fwp-content%5C%2Fplugins%5C%2Fzotpress%5C%2Flib%5C%2Frequest%5C%2Frequest.cite.php%3Fapi_user_id%3D2133649%26amp%3Bitem_key%3DBNGLJNZU%27%3ECite%3C%5C%2Fa%3E%20%3C%5C%2Fdiv%3E%5Cn%3C%5C%2Fdiv%3E%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22General%20purpose%20computer-assisted%20clustering%20and%20conceptualization%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Justin%22%2C%22lastName%22%3A%22Grimmer%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Gary%22%2C%22lastName%22%3A%22King%22%7D%5D%2C%22abstractNote%22%3A%22%5BFirst%20paragraph%20of%20abstract%5D%3A%20We%20develop%20a%20computer-assisted%20method%20for%20the%20discovery%20of%20insightful%20conceptualizations%2C%20in%20the%20form%20of%20clusterings%20%28i.e.%2C%20partitions%29%20of%20input%20objects.%20Each%20of%20the%20numerous%20fully%20automated%20methods%20of%20cluster%20analysis%20proposed%20in%20statistics%2C%20computer%20science%2C%20and%20biology%20optimize%20a%20different%20objective%20function.%20Almost%20all%20are%20well%20defined%2C%20but%20how%20to%20determine%20before%20the%20fact%20which%20one%2C%20if%20any%2C%20will%20partition%20a%20given%20set%20of%20objects%20in%20an%20%5Cu201cinsightful%5Cu201d%20or%20%5Cu201cuseful%5Cu201d%20way%20for%20a%20given%20user%20is%20unknown%20and%20difficult%2C%20if%20not%20logically%20impossible.%20We%20develop%20a%20metric%20space%20of%20partitions%20from%20all%20existing%20cluster%20analysis%20methods%20applied%20to%20a%20given%20dataset%20%28along%20with%20millions%20of%20other%20s
olutions%20we%20add%20based%20on%20combinations%20of%20existing%20clusterings%29%20and%20enable%20a%20user%20to%20explore%20and%20interact%20with%20it%20and%20quickly%20reveal%20or%20prompt%20useful%20or%20insightful%20conceptualizations.%20In%20addition%2C%20although%20it%20is%20uncommon%20to%20do%20so%20in%20unsupervised%20learning%20problems%2C%20we%20offer%20and%20implement%20evaluation%20designs%20that%20make%20our%20computer-assisted%20approach%20vulnerable%20to%20being%20proven%20suboptimal%20in%20specific%20data%20types.%20We%20demonstrate%20that%20our%20approach%20facilitates%20more%20efficient%20and%20insightful%20discovery%20of%20useful%20information%20than%20expert%20human%20coders%20or%20many%20existing%20fully%20automated%20methods.%22%2C%22date%22%3A%222011%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.1073%5C%2Fpnas.1018067108%22%2C%22ISSN%22%3A%220027-8424%2C%201091-6490%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.pnas.org%5C%2Fcontent%5C%2F108%5C%2F7%5C%2F2643%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222019-08-18T20%3A27%3A18Z%22%2C%22tags%22%3A%5B%7B%22tag%22%3A%22Interpretability%20and%20explainability%22%7D%2C%7B%22tag%22%3A%22Machine%20learning%22%7D%2C%7B%22tag%22%3A%22Topic%20clusters%22%7D%5D%7D%7D%2C%7B%22key%22%3A%22FQPMGJA6%22%2C%22library%22%3A%7B%22id%22%3A2133649%7D%2C%22meta%22%3A%7B%22lastModifiedByUser%22%3A%7B%22id%22%3A22837%2C%22username%22%3A%22ayliu%22%2C%22name%22%3A%22Alan%20Liu%22%2C%22links%22%3A%7B%22alternate%22%3A%7B%22href%22%3A%22https%3A%5C%2F%5C%2Fwww.zotero.org%5C%2Fayliu%22%2C%22type%22%3A%22text%5C%2Fhtml%22%7D%7D%7D%2C%22creatorSummary%22%3A%22Sculley%20and%20Pasanek%22%2C%22parsedDate%22%3A%222008%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%3Cdiv%20class%3D%5C%22csl-bib-body%5C%22%20style%3D%5C%22line-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%5C%22%3E%5Cn%20%20%3Cdiv%20class%3D%5C%22csl-entry%5C%22%3ESculley%2C%20D.%2C%20and%20B.%20M.%20Pasanek.%20%26%23x201C%
3BMeaning%20and%20Mining%3A%20The%20Impact%20of%20Implicit%20Assumptions%20in%20Data%20Mining%20for%20the%20Humanities.%26%23x201D%3B%20%3Ci%3ELiterary%20and%20Linguistic%20Computing%3C%5C%2Fi%3E%2023%2C%20no.%204%20%282008%29%3A%20409%26%23x2013%3B24.%20%3Ca%20class%3D%27zp-DOIURL%27%20href%3D%27https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1093%5C%2Fllc%5C%2Ffqn019%27%3Ehttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1093%5C%2Fllc%5C%2Ffqn019%3C%5C%2Fa%3E.%20%3Ca%20title%3D%27Cite%20in%20RIS%20Format%27%20class%3D%27zp-CiteRIS%27%20href%3D%27https%3A%5C%2F%5C%2Fwe1s.ucsb.edu%5C%2Fwp-content%5C%2Fplugins%5C%2Fzotpress%5C%2Flib%5C%2Frequest%5C%2Frequest.cite.php%3Fapi_user_id%3D2133649%26amp%3Bitem_key%3DFQPMGJA6%27%3ECite%3C%5C%2Fa%3E%20%3C%5C%2Fdiv%3E%5Cn%3C%5C%2Fdiv%3E%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Meaning%20and%20mining%3A%20the%20impact%20of%20implicit%20assumptions%20in%20data%20mining%20for%20the%20humanities%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22D.%22%2C%22lastName%22%3A%22Sculley%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22B.%20M.%22%2C%22lastName%22%3A%22Pasanek%22%7D%5D%2C%22abstractNote%22%3A%22This%20article%20makes%20explicit%20some%20of%20the%20foundational%20assumptions%20of%20machine%20learning%20methods%2C%20and%20presents%20a%20series%20of%20experiments%20as%20a%20case%20study%20and%20object%20lesson%20in%20the%20potential%20pitfalls%20in%20the%20use%20of%20data%20mining%20methods%20for%20hypothesis%20testing%20in%20literary%20scholarship.%20The%20worst%20dangers%20may%20lie%20in%20the%20humanist%27s%20ability%20to%20interpret%20nearly%20any%20result%2C%20projecting%20his%20or%20her%20own%20biases%20into%20the%20outcome%20of%20an%20experiment%5Cu2014perhaps%20all%20the%20more%20unwittingly%20due%20to%20the%20superficial%20objectivity%20of%20computational%20methods.%20The%20authors%20argue%20that%20in%20the%20digital%20humanities%2C%20the%20standards%20fo
r%20the%20initial%20production%20of%20evidence%20should%20be%20even%20more%20rigorous%20than%20in%20the%20empirical%20sciences%20because%20of%20the%20subjective%20nature%20of%20the%20work%20that%20follows.%20Thus%2C%20they%20conclude%20with%20a%20discussion%20of%20recommended%20best%20practices%20for%20making%20results%20from%20data%20mining%20in%20the%20humanities%20domain%20as%20meaningful%20as%20possible.%20These%20include%20methods%20for%20keeping%20the%20boundary%20between%20computational%20results%20and%20subsequent%20interpretation%20as%20clearly%20delineated%20as%20possible.%22%2C%22date%22%3A%222008%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.1093%5C%2Fllc%5C%2Ffqn019%22%2C%22ISSN%22%3A%220268-1145%2C%201477-4615%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Facademic.oup.com%5C%2Fdsh%5C%2Farticle-lookup%5C%2Fdoi%5C%2F10.1093%5C%2Fllc%5C%2Ffqn019%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222019-07-27T21%3A43%3A42Z%22%2C%22tags%22%3A%5B%7B%22tag%22%3A%22DH%20Digital%20humanities%22%7D%2C%7B%22tag%22%3A%22Data%20mining%22%7D%2C%7B%22tag%22%3A%22Interpretability%20and%20explainability%22%7D%2C%7B%22tag%22%3A%22Machine%20learning%22%7D%5D%7D%7D%2C%7B%22key%22%3A%22E5HMW56U%22%2C%22library%22%3A%7B%22id%22%3A2133649%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Tickle%20et%20al.%22%2C%22parsedDate%22%3A%221998%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%3Cdiv%20class%3D%5C%22csl-bib-body%5C%22%20style%3D%5C%22line-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%5C%22%3E%5Cn%20%20%3Cdiv%20class%3D%5C%22csl-entry%5C%22%3ETickle%2C%20A.B.%2C%20R.%20Andrews%2C%20M.%20Golea%2C%20and%20J.%20Diederich.%20%26%23x201C%3BThe%20Truth%20Will%20Come%20to%20Light%3A%20Directions%20and%20Challenges%20in%20Extracting%20the%20Knowledge%20Embedded%20within%20Trained%20Artificial%20Neural%20Networks.%26%23x201D%3B%20%3Ci%3EIEEE%20Transactions%20on%20Neural%20Networks%3C%5C%2Fi%3E%209%2C%20no.%206%20%281998%29%3A%201057%26%23x2013%3B68.%20%
3Ca%20class%3D%27zp-DOIURL%27%20href%3D%27https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1109%5C%2F72.728352%27%3Ehttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1109%5C%2F72.728352%3C%5C%2Fa%3E.%20%3Ca%20title%3D%27Cite%20in%20RIS%20Format%27%20class%3D%27zp-CiteRIS%27%20href%3D%27https%3A%5C%2F%5C%2Fwe1s.ucsb.edu%5C%2Fwp-content%5C%2Fplugins%5C%2Fzotpress%5C%2Flib%5C%2Frequest%5C%2Frequest.cite.php%3Fapi_user_id%3D2133649%26amp%3Bitem_key%3DE5HMW56U%27%3ECite%3C%5C%2Fa%3E%20%3C%5C%2Fdiv%3E%5Cn%3C%5C%2Fdiv%3E%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22The%20truth%20will%20come%20to%20light%3A%20directions%20and%20challenges%20in%20extracting%20the%20knowledge%20embedded%20within%20trained%20artificial%20neural%20networks%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22A.B.%22%2C%22lastName%22%3A%22Tickle%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22R.%22%2C%22lastName%22%3A%22Andrews%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22M.%22%2C%22lastName%22%3A%22Golea%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22J.%22%2C%22lastName%22%3A%22Diederich%22%7D%5D%2C%22abstractNote%22%3A%22To%20date%2C%20the%20preponderance%20of%20techniques%20for%20eliciting%20the%20knowledge%20embedded%20in%20trained%20artificial%20neural%20networks%20%28ANN%27s%29%20has%20focused%20primarily%20on%20extracting%20rule-based%20explanations%20from%20feedforward%20ANN%27s.%20The%20ADT%20taxonomy%20for%20categorizing%20such%20techniques%20was%20proposed%20in%201995%20to%20provide%20a%20basis%20for%20the%20systematic%20comparison%20of%20the%20different%20approaches.%20This%20paper%20shows%20that%20not%20only%20is%20this%20taxonomy%20applicable%20to%20a%20cross%20section%20of%20current%20techniques%20for%20extracting%20rules%20from%20trained%20feedforward%20ANN%27s%20but%20also%20how%20the%20taxonomy%20can%20be%20adapted%20and%20extended%20to%20embrace%20a%20broader%20range%20of%20ANN%20
types%20%28e%2Cg.%2C%20recurrent%20neural%20networks%29%20and%20explanation%20structures.%20In%20addition%20we%20identify%20some%20of%20the%20key%20research%20questions%20in%20extracting%20the%20knowledge%20embedded%20within%20ANN%27s%20including%20the%20need%20for%20the%20formulation%20of%20a%20consistent%20theoretical%20basis%20for%20what%20has%20been%2C%20until%20recently%2C%20a%20disparate%20collection%20of%20empirical%20results.%22%2C%22date%22%3A%221998%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.1109%5C%2F72.728352%22%2C%22ISSN%22%3A%2210459227%22%2C%22url%22%3A%22http%3A%5C%2F%5C%2Fieeexplore.ieee.org%5C%2Fdocument%5C%2F728352%5C%2F%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222020-02-08T20%3A17%3A53Z%22%2C%22tags%22%3A%5B%7B%22tag%22%3A%22Artificial%20intelligence%22%7D%2C%7B%22tag%22%3A%22Interpretability%20and%20explainability%22%7D%5D%7D%7D%5D%7D
AI Forensics. “Home Page,” 2023. https://ai-forensics.github.io/.
Zhang, Yu, Peter Tiňo, Aleš Leonardis, and Ke Tang. “A Survey on Neural Network Interpretability.” IEEE Transactions on Emerging Topics in Computational Intelligence 5, no. 5 (2021): 726–42. https://doi.org/10.1109/TETCI.2021.3100641.
Dickson, Ben. “A New Technique Called ‘Concept Whitening’ Promises to Provide Neural Network Interpretability.” VentureBeat (blog), 2021. https://venturebeat.com/2021/01/12/a-new-technique-called-concept-whitening-promises-to-provide-neural-network-interpretability/.
Smith, Gary, and Jay Cordes. The Phantom Pattern Problem: The Mirage of Big Data. First edition. Oxford ; New York, NY: Oxford University Press, 2020.
Liu, Alan. “Humans in the Loop: Humanities Hermeneutics and Machine Learning.” Presented at the DHd2020 (7th Annual Conference of the German Society for Digital Humanities), University of Paderborn, 2020. https://youtu.be/lnfeOUBCi3s.
Dickson, Ben. “The Advantages of Self-Explainable AI over Interpretable AI.” The Next Web, 2020. https://thenextweb.com/neural/2020/06/19/the-advantages-of-self-explainable-ai-over-interpretable-ai/.
Rogers, Anna, Olga Kovaleva, and Anna Rumshisky. “A Primer in BERTology: What We Know about How BERT Works.” ArXiv:2002.12327 [Cs], 2020. http://arxiv.org/abs/2002.12327.
Munro, Robert. Human-in-the-Loop Machine Learning. Shelter Island, New York: Manning, 2020. https://www.manning.com/books/human-in-the-loop-machine-learning.
Rudin, Cynthia. “Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead.” Nature Machine Intelligence 1, no. 5 (2019): 206–15. https://doi.org/10.1038/s42256-019-0048-x.
Molnar, Christoph. Interpretable Machine Learning. Christoph Molnar, 2019. https://christophm.github.io/interpretable-ml-book/.
Lim, Brian Y., Qian Yang, Ashraf Abdul, and Danding Wang. “Why These Explanations? Selecting Intelligibility Types for Explanation Goals.” In IUI Workshops 2019. Los Angeles: ACM, 2019.
Yang, Yiwei, Eser Kandogan, Yunyao Li, Prithviraj Sen, and Walter S. Lasecki. “A Study on Interaction in Human-in-the-Loop Machine Learning for Text Analytics.” In IUI Workshops 2019. Los Angeles: ACM, 2019. https://www.semanticscholar.org/paper/A-Study-on-Interaction-in-Human-in-the-Loop-Machine-Yang-Kandogan/03a4544caed21760df30f0e4f417bbe361c29c9e.
Gebru, Timnit, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumé III, and Kate Crawford. “Datasheets for Datasets.” ArXiv:1803.09010 [Cs], 2019. http://arxiv.org/abs/1803.09010.
Mitchell, Margaret, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru. “Model Cards for Model Reporting.” Proceedings of the Conference on Fairness, Accountability, and Transparency - FAT* ’19, 2019, 220–29. https://doi.org/10.1145/3287560.3287596.
Tahmasebi, Nina, Niclas Hagen, Daniel Brodén, and Mats Malm. “A Convergence of Methodologies: Notes on Data-Intensive Humanities Research.” In Digital Humanities in the Nordic Countries 4th Conference. Helsinki: Nina Tahmasebi, 2019. /publication/2019-aconvergenceofmethods/.
Pandey, Parul. “Interpretable Machine Learning.” Towards Data Science (blog), 2019. https://towardsdatascience.com/interpretable-machine-learning-1dec0f2f3e6b.
Murdoch, W. James, Chandan Singh, Karl Kumbier, Reza Abbasi-Asl, and Bin Yu. “Interpretable Machine Learning: Definitions, Methods, and Applications.” ArXiv:1901.04592 [Cs, Stat], 2019. http://arxiv.org/abs/1901.04592.
Carassai, Mauro. “Preliminary Notes on Conceptual Issues Affecting Interpretation of Topic Models.” WE1S (blog), 2018. https://we1s.ucsb.edu/research_post/preliminary-notes-on-conceptual-issues-affecting-interpretation-of-topic-models/.
Rule, Adam, Aurélien Tabard, and James D. Hollan. “Exploration and Explanation in Computational Notebooks.” In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems - CHI ’18, 1–12. Montreal QC, Canada: ACM Press, 2018. https://doi.org/10.1145/3173574.3173606.
Narayanan, Menaka, Emily Chen, Jeffrey He, Been Kim, Sam Gershman, and Finale Doshi-Velez. “How Do Humans Understand Explanations from Machine Learning Systems? An Evaluation of the Human-Interpretability of Explanation.” ArXiv:1802.00682 [Cs], 2018. http://arxiv.org/abs/1802.00682.
Selbst, Andrew D., and Solon Barocas. “The Intuitive Appeal of Explainable Machines.” SSRN Electronic Journal, 2018. https://doi.org/10.2139/ssrn.3126971.
Sawhney, Ravi. “Human in the Loop: Why We Will Be Needed to Complement Artificial Intelligence.” LSE Business Review (blog), 2018. https://blogs.lse.ac.uk/businessreview/2018/10/24/human-in-the-loop-why-we-will-be-needed-to-complement-artificial-intelligence/.
Kleymann, Rabea, and Jan-Erik Stange. “Towards Hermeneutic Visualization in Digital Literary Studies,” 2018. http://www.stereoscope.threedh.net/HermeneuticVisualization.pdf.
Holland, Sarah, Ahmed Hosny, Sarah Newman, Joshua Joseph, and Kasia Chmielinski. “The Dataset Nutrition Label: A Framework To Drive Higher Data Quality Standards.” ArXiv:1805.03677 [Cs], 2018. http://arxiv.org/abs/1805.03677.
Bender, Emily M., and Batya Friedman. “Data Statements for Natural Language Processing: Toward Mitigating System Bias and Enabling Better Science.” Transactions of the Association for Computational Linguistics 6 (2018): 587–604. https://doi.org/10.1162/tacl_a_00041.
Hind, Michael, Dennis Wei, Murray Campbell, Noel C. F. Codella, Amit Dhurandhar, Aleksandra Mojsilović, Karthikeyan Natesan Ramamurthy, and Kush R. Varshney. “TED: Teaching AI to Explain Its Decisions.” ArXiv:1811.04896 [Cs], 2018. http://arxiv.org/abs/1811.04896.
Alvarez-Melis, David, and Tommi Jaakkola. “Towards Robust Interpretability with Self-Explaining Neural Networks.” In Advances in Neural Information Processing Systems 31, edited by S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, 7775–84. Curran Associates, Inc., 2018. http://papers.nips.cc/paper/8003-towards-robust-interpretability-with-self-explaining-neural-networks.pdf.
Guldi, Jo. “Critical Search: A Procedure for Guided Reading in Large-Scale Textual Corpora.” Journal of Cultural Analytics, 2018. https://doi.org/10.22148/16.030.
Gall, Richard. “Machine Learning Explainability vs Interpretability: Two Concepts That Could Help Restore Trust in AI.” KDnuggets, 2018. https://www.kdnuggets.com/2018/12/machine-learning-explainability-interpretability-ai.html.
Gilpin, Leilani H., David Bau, Ben Z. Yuan, Ayesha Bajwa, Michael Specter, and Lalana Kagal. “Explaining Explanations: An Overview of Interpretability of Machine Learning.” ArXiv:1806.00069 [Cs, Stat], 2018. http://arxiv.org/abs/1806.00069.
Spencer, Ann. “Make Machine Learning Interpretability More Rigorous.” Domino Data Lab (blog), 2018. https://blog.dominodatalab.com/make-machine-learning-interpretability-rigorous/.
Hall, Patrick, and Navdeep Gill. Introduction to Machine Learning Interpretability. S.l.: O’Reilly Media, Inc., 2018. https://proquest.safaribooksonline.com/9781492033158.
Goodman, Bryce, and Seth Flaxman. “European Union Regulations on Algorithmic Decision-Making and a ‘Right to Explanation.’” AI Magazine 38, no. 3 (2017): 50–57. https://doi.org/10.1609/aimag.v38i3.2741.
Lipton, Zachary C. “The Mythos of Model Interpretability.” ArXiv:1606.03490 [Cs, Stat], 2017. http://arxiv.org/abs/1606.03490.
Edwards, Lilian, and Michael Veale. “Slave to the Algorithm? Why a ‘Right to an Explanation’ Is Probably Not the Remedy You Are Looking For.” SSRN Scholarly Paper. Rochester, NY: Social Science Research Network, 2017. https://papers.ssrn.com/abstract=2972855.
Samek, Wojciech, Thomas Wiegand, and Klaus-Robert Müller. “Explainable Artificial Intelligence.” International Telecommunication Union Journal, no. 1 (2017): 1–10. https://www.itu.int/en/journal/001/Pages/05.aspx.
Doshi-Velez, Finale, and Been Kim. “Towards A Rigorous Science of Interpretable Machine Learning.” ArXiv:1702.08608 [Cs, Stat], 2017. http://arxiv.org/abs/1702.08608.
Paul, Michael J. “Interpretable Machine Learning: Lessons from Topic Modeling.” In CHI Workshop on Human-Centered Machine Learning, 2016. https://cmci.colorado.edu/~mpaul/files/chi16hcml_interpretable.pdf.
Alexander, Eric, and Michael Gleicher. “Task-Driven Comparison of Topic Models.” IEEE Transactions on Visualization and Computer Graphics 22, no. 1 (2016): 320–29. https://doi.org/10.1109/TVCG.2015.2467618.
Collins, Gary S., Johannes B. Reitsma, Douglas G. Altman, and Karel G.M. Moons. “Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis Or Diagnosis (TRIPOD): The TRIPOD Statement.” Annals of Internal Medicine 162, no. 1 (2015): 55. https://doi.org/10.7326/M14-0697.
Findlater, Leah, Jordan L. Boyd-Graber, Yuening Hu, Jason Chuang, and Alison Smith. “Concurrent Visualization of Relationships between Words and Topics in Topic Models,” 2014. /paper/Concurrent-Visualization-of-Relationships-between-Smith-Chuang/096ed34cd5d56b5daea50336f891dc26a32b981d.
Freitas, Alex A. “Comprehensible Classification Models: A Position Paper.” In ACM SIGKDD Explorations, 15.1:1–10. Association for Computing Machinery, 2014. https://doi.org/10.1145/2594473.2594475.
Liu, Alan. “The Meaning of the Digital Humanities.” PMLA 128, no. 2 (2013): 409–23. https://doi.org/10.1632/pmla.2013.128.2.409.
Grimmer, Justin, and Gary King. “General Purpose Computer-Assisted Clustering and Conceptualization.” Proceedings of the National Academy of Sciences 108, no. 7 (2011): 2643–50. https://doi.org/10.1073/pnas.1018067108.
Sculley, D., and B. M. Pasanek. “Meaning and Mining: The Impact of Implicit Assumptions in Data Mining for the Humanities.” Literary and Linguistic Computing 23, no. 4 (2008): 409–24. https://doi.org/10.1093/llc/fqn019.
Tickle, A.B., R. Andrews, M. Golea, and J. Diederich. “The Truth Will Come to Light: Directions and Challenges in Extracting the Knowledge Embedded within Trained Artificial Neural Networks.” IEEE Transactions on Neural Networks 9, no. 6 (1998): 1057–68. https://doi.org/10.1109/72.728352.