Global Humanities | History of Humanities | Liberal Arts | Humanities and Higher Education | Humanities as Research Activity | Humanities Teaching & Curricula | Humanities and the Sciences | Medical Humanities | Public Humanities | Humanities Advocacy | Humanities and Social Groups | Value of Humanities | Humanities and Economic Value | Humanities Funding | Humanities Statistics | Humanities Surveys | "Crisis" of the Humanities
Humanities Organizations: Humanities Councils (U.S.) | Government Agencies | Foundations | Scholarly Associations
Humanities in: Africa | Asia (East) | Asia (South) | Australasia | Europe | Latin America | Middle East | North America: Canada - Mexico - United States | Scandinavia | United Kingdom
Lists of News Sources | Databases with News Archives | History of Journalism | Journalism Studies | Journalism Statistics | Journalism Organizations | Student Journalism | Data Journalism | Media Frames (analyzing & changing media narratives using "frame theory") | Media Bias | Fake News | Journalism and Minorities | Journalism and Women | Press Freedom | News & Social Media
Corpus Representativeness
Comparison paradigms for idea of a corpus: Archives as Paradigm | Canons as Paradigm | Editions as Paradigm | Corpus Linguistics as Paradigm
Artificial Intelligence | Big Data | Data Mining | Data Notebooks (Jupyter Notebooks) | Data Visualization (see also Topic Model Visualizations) | Hierarchical Clustering | Interpretability & Explainability (see also Topic Model Interpretation) | Mapping | Natural Language Processing | Network Analysis | Open Science | Reporting & Documentation Methods | Reproducibility | Sentiment Analysis | Social Media Analysis | Statistical Methods | Text Analysis (see also Topic Modeling) | Text Classification | Wikification | Word Embedding & Vector Semantics
Topic Modeling
Selected DH research and resources bearing on, or utilized by, the WE1S project.
Distant Reading | Cultural Analytics | Sociocultural Approaches | Topic Modeling in DH | Non-consumptive Use
Searchable version of bibliography on Zotero site
For WE1S developers: Biblio style guide | Biblio collection form (suggest additions) | WE1S Bibliography Ontology Outline
Artificial Intelligence
%7B%22status%22%3A%22success%22%2C%22updateneeded%22%3Afalse%2C%22instance%22%3Afalse%2C%22meta%22%3A%7B%22request_last%22%3A0%2C%22request_next%22%3A0%2C%22used_cache%22%3Atrue%7D%2C%22data%22%3A%5B%7B%22key%22%3A%227VE8ZFEZ%22%2C%22library%22%3A%7B%22id%22%3A2133649%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22AI%20Forensics%22%2C%22parsedDate%22%3A%222023%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%3Cdiv%20class%3D%5C%22csl-bib-body%5C%22%20style%3D%5C%22line-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%5C%22%3E%5Cn%20%20%3Cdiv%20class%3D%5C%22csl-entry%5C%22%3EAI%20Forensics.%20%26%23x201C%3BHome%20Page%2C%26%23x201D%3B%202023.%20%3Ca%20class%3D%27zp-ItemURL%27%20href%3D%27https%3A%5C%2F%5C%2Fai-forensics.github.io%5C%2F%27%3Ehttps%3A%5C%2F%5C%2Fai-forensics.github.io%5C%2F%3C%5C%2Fa%3E.%20%3Ca%20title%3D%27Cite%20in%20RIS%20Format%27%20class%3D%27zp-CiteRIS%27%20href%3D%27https%3A%5C%2F%5C%2Fwe1s.ucsb.edu%5C%2Fwp-content%5C%2Fplugins%5C%2Fzotpress%5C%2Flib%5C%2Frequest%5C%2Frequest.cite.php%3Fapi_user_id%3D2133649%26amp%3Bitem_key%3D7VE8ZFEZ%27%3ECite%3C%5C%2Fa%3E%20%3C%5C%2Fdiv%3E%5Cn%3C%5C%2Fdiv%3E%22%2C%22data%22%3A%7B%22itemType%22%3A%22webpage%22%2C%22title%22%3A%22Home%20page%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22name%22%3A%22AI%20Forensics%22%7D%5D%2C%22abstractNote%22%3A%22An%20interdisciplinary%20research%20project%20critically%20investigating%20interpretability%20and%20accountability%20of%20visual%20AI%20systems%20from%20the%20perspective%20of%20their%20social%20implications%2C%20its%20team%20is%20spread%20across%20an%20international%20consortium%20composed%20of%3A%5Cn%5CnHochschule%20f%5Cu00fcr%20Gestaltung%20Karlsruhe%2C%20K%5Cu00fcnstliche%20Intelligenz%20und%20Medienphilosophie%5CnUniversit%5Cu00e4t%20Kassel%2C%20Gender%5C%2FDiversity%20in%20Informatics%20Systems%5CnCambridge%20University%2C%20Cambridge%20Digital%20Humanities%5CnDurham%20University%5CnIncluding%20the%20NVIDIA%20CUDA%20Research%20Centre%20as%20technical%20partner%5CnUniversity%20of%20California%2C%20Santa%20Barbara%22%2C%22date%22%3A%222023%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fai-forensics.github.io%5C%2F%22%2C%22language%22%3A%22en%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222023-06-13T22%3A09%3A42Z%22%2C%22tags%22%3A%5B%7B%22tag%22%3A%22Artificial%20intelligence%22%7D%2C%7B%22tag%22%3A%22Interpretability%20and%20explainability%22%7D%5D%7D%7D%2C%7B%22key%22%3A%22ZV69TQ9D%22%2C%22library%22%3A%7B%22id%22%3A2133649%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Dickson%22%2C%22parsedDate%22%3A%222021%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%3Cdiv%20class%3D%5C%22csl-bib-body%5C%22%20style%3D%5C%22line-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%5C%22%3E%5Cn%20%20%3Cdiv%20class%3D%5C%22csl-entry%5C%22%3EDickson%2C%20Ben.%20%26%23x201C%3BA%20New%20Technique%20Called%20%26%23x2018%3BConcept%20Whitening%26%23x2019%3B%20Promises%20to%20Provide%20Neural%20Network%20Interpretability.%26%23x201D%3B%20%3Ci%3EVentureBeat%3C%5C%2Fi%3E%20%28blog%29%2C%202021.%20%3Ca%20class%3D%27zp-ItemURL%27%20href%3D%27https%3A%5C%2F%5C%2Fventurebeat.com%5C%2F2021%5C%2F01%5C%2F12%5C%2Fa-new-technique-called-concept-whitening-promises-to-provide-neural-network-interpretability%5C%2F%27%3Ehttps%3A%5C%2F%5C%2Fventurebeat.com%5C%2F2021%5C%2F01%5C%2F12%5C%2Fa-new-technique-called-concept-whitening-promises-to-provide-neural-network-interpretability%5C%2F%3C%5C%2Fa%3E.%20%3Ca%20title%3D%27Cite%20in%20RIS%20Format%27%20class%3D%27z
p-CiteRIS%27%20href%3D%27https%3A%5C%2F%5C%2Fwe1s.ucsb.edu%5C%2Fwp-content%5C%2Fplugins%5C%2Fzotpress%5C%2Flib%5C%2Frequest%5C%2Frequest.cite.php%3Fapi_user_id%3D2133649%26amp%3Bitem_key%3DZV69TQ9D%27%3ECite%3C%5C%2Fa%3E%20%3C%5C%2Fdiv%3E%5Cn%3C%5C%2Fdiv%3E%22%2C%22data%22%3A%7B%22itemType%22%3A%22blogPost%22%2C%22title%22%3A%22A%20new%20technique%20called%20%5Cu2018concept%20whitening%5Cu2019%20promises%20to%20provide%20neural%20network%20interpretability%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Ben%22%2C%22lastName%22%3A%22Dickson%22%7D%5D%2C%22abstractNote%22%3A%22%5C%22Concept%20whitening%5C%22%20can%20help%20steer%20neural%20networks%20toward%20learning%20specific%20concepts%20without%20sacrificing%20performance.%22%2C%22blogTitle%22%3A%22VentureBeat%22%2C%22date%22%3A%222021%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fventurebeat.com%5C%2F2021%5C%2F01%5C%2F12%5C%2Fa-new-technique-called-concept-whitening-promises-to-provide-neural-network-interpretability%5C%2F%22%2C%22language%22%3A%22en%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222021-01-15T20%3A17%3A48Z%22%2C%22tags%22%3A%5B%7B%22tag%22%3A%22Artificial%20intelligence%22%7D%2C%7B%22tag%22%3A%22Interpretability%20and%20explainability%22%7D%5D%7D%7D%2C%7B%22key%22%3A%224IVIKSUK%22%2C%22library%22%3A%7B%22id%22%3A2133649%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Heaven%22%2C%22parsedDate%22%3A%222020%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%3Cdiv%20class%3D%5C%22csl-bib-body%5C%22%20style%3D%5C%22line-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%5C%22%3E%5Cn%20%20%3Cdiv%20class%3D%5C%22csl-entry%5C%22%3EHeaven%2C%20Will%20Douglass.%20%26%23x201C%3BAI%20Is%20Wrestling%20with%20a%20Replication%20Crisis.%26%23x201D%3B%20%3Ci%3EMIT%20Technology%20Review%3C%5C%2Fi%3E%2C%202020.%20%3Ca%20class%3D%27zp-ItemURL%27%20href%3D%27https%3A%5C%2F%5C%2Fwww.technologyreview.com%5C%2F2020%5C%2F11%5C%2F12%5C%2F1011944%5C%2Fartificial-intelligence-replication-crisis-science-big-tech-google-deepmind-facebook-openai%5C%2F%27%3Ehttps%3A%5C%2F%5C%2Fwww.technologyreview.com%5C%2F2020%5C%2F11%5C%2F12%5C%2F1011944%5C%2Fartificial-intelligence-replication-crisis-science-big-tech-google-deepmind-facebook-openai%5C%2F%3C%5C%2Fa%3E.%20%3Ca%20title%3D%27Cite%20in%20RIS%20Format%27%20class%3D%27zp-CiteRIS%27%20href%3D%27https%3A%5C%2F%5C%2Fwe1s.ucsb.edu%5C%2Fwp-content%5C%2Fplugins%5C%2Fzotpress%5C%2Flib%5C%2Frequest%5C%2Frequest.cite.php%3Fapi_user_id%3D2133649%26amp%3Bitem_key%3D4IVIKSUK%27%3ECite%3C%5C%2Fa%3E%20%3C%5C%2Fdiv%3E%5Cn%3C%5C%2Fdiv%3E%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22AI%20is%20wrestling%20with%20a%20replication%20crisis%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Will%20Douglass%22%2C%22lastName%22%3A%22Heaven%22%7D%5D%2C%22abstractNote%22%3A%22%5BBeginning%20of%20article%3A%5D%20Last%20month%20Nature%20published%20a%20damning%20response%20written%20by%2031%20scientists%20to%20a%20study%20from%20Google%20Health%20that%20had%20appeared%20in%20the%20journal%20earlier%20this%20year.%20Google%20was%20describing%20successful%20trials%20of%20an%20AI%20that%20looked%20for%20signs%20of%20breast%20cancer%20in%20medical%20images.%20But%20according%20to%20its%20critics%2C%20the%20Google%20team%20provided%20so%20little%20information%20about%20its%20code%20and%20how%20it%20was%20tested%20that%20the%20study%20amounted%20to%20nothing%20more%20than%20a%20promotion%20of%20proprietary%20tech.%5Cn%5
Cn%5Cu201cWe%20couldn%5Cu2019t%20take%20it%20anymore%2C%5Cu201d%20says%20Benjamin%20Haibe-Kains%2C%20the%20lead%20author%20of%20the%20response%2C%20who%20studies%20computational%20genomics%20at%20the%20University%20of%20Toronto.%20%5Cu201cIt%5Cu2019s%20not%20about%20this%20study%20in%20particular%5Cu2014it%5Cu2019s%20a%20trend%20we%5Cu2019ve%20been%20witnessing%20for%20multiple%20years%20now%20that%20has%20started%20to%20really%20bother%20us.%5Cu201d%22%2C%22date%22%3A%222020%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%22%22%2C%22ISSN%22%3A%22%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.technologyreview.com%5C%2F2020%5C%2F11%5C%2F12%5C%2F1011944%5C%2Fartificial-intelligence-replication-crisis-science-big-tech-google-deepmind-facebook-openai%5C%2F%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222020-11-14T06%3A21%3A30Z%22%2C%22tags%22%3A%5B%7B%22tag%22%3A%22Artificial%20intelligence%22%7D%2C%7B%22tag%22%3A%22Machine%20learning%22%7D%2C%7B%22tag%22%3A%22Reproducibility%22%7D%5D%7D%7D%2C%7B%22key%22%3A%22WHBTVD5K%22%2C%22library%22%3A%7B%22id%22%3A2133649%7D%2C%22meta%22%3A%7B%22lastModifiedByUser%22%3A%7B%22id%22%3A1550555%2C%22username%22%3A%22nazkey%22%2C%22name%22%3A%22Naz%20Keynejad%22%2C%22links%22%3A%7B%22alternate%22%3A%7B%22href%22%3A%22https%3A%5C%2F%5C%2Fwww.zotero.org%5C%2Fnazkey%22%2C%22type%22%3A%22text%5C%2Fhtml%22%7D%7D%7D%2C%22creatorSummary%22%3A%22Dickson%22%2C%22parsedDate%22%3A%222020%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%3Cdiv%20class%3D%5C%22csl-bib-body%5C%22%20style%3D%5C%22line-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%5C%22%3E%5Cn%20%20%3Cdiv%20class%3D%5C%22csl-entry%5C%22%3EDickson%2C%20Ben.%20%26%23x201C%3BThe%20Advantages%20of%20Self-Explainable%20AI%20over%20Interpretable%20AI.%26%23x201D%3B%20The%20Next%20Web%2C%202020.%20%3Ca%20class%3D%27zp-ItemURL%27%20href%3D%27https%3A%5C%2F%5C%2Fthenextweb.com%5C%2Fneural%5C%2F2020%5C%2F06%5C%2F19%5C%2Fthe-advantages-of-self-explainable-ai-over-interpretable-ai%5C%2F%27%3Ehttps%3A%5C%2F%5C%2Fthenextweb.com%5C%2Fneural%5C%2F2020%5C%2F06%5C%2F19%5C%2Fthe-advantages-of-self-explainable-ai-over-interpretable-ai%5C%2F%3C%5C%2Fa%3E.%20%3Ca%20title%3D%27Cite%20in%20RIS%20Format%27%20class%3D%27zp-CiteRIS%27%20href%3D%27https%3A%5C%2F%5C%2Fwe1s.ucsb.edu%5C%2Fwp-content%5C%2Fplugins%5C%2Fzotpress%5C%2Flib%5C%2Frequest%5C%2Frequest.cite.php%3Fapi_user_id%3D2133649%26amp%3Bitem_key%3DWHBTVD5K%27%3ECite%3C%5C%2Fa%3E%20%3C%5C%2Fdiv%3E%5Cn%3C%5C%2Fdiv%3E%22%2C%22data%22%3A%7B%22itemType%22%3A%22webpage%22%2C%22title%22%3A%22The%20advantages%20of%20self-explainable%20AI%20over%20interpretable%20AI%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Ben%22%2C%22lastName%22%3A%22Dickson%22%7D%5D%2C%22abstractNote%22%3A%22%5BBeginning%20of%20article%3A%5D%20Would%20you%20trust%20an%20artificial%20intelligence%20algorithm%20that%20works%20eerily%20well%2C%20making%20accurate%20decisions%2099.9%25%20of%20the%20time%2C%20but%20is%20a%20mysterious%20black%20box%3F%20Every%20system%20fails%20every%20now%20and%20then%2C%20and%20when%20it%20does%2C%20we%20want%20explanations%2C%20especially%20when%20human%20lives%20are%20at%20stake.%20And%20a%20system%20that%20can%5Cu2019t%20be%20explained%20can%5Cu2019t%20be%20trusted.%20That%20is%20one%20of%20the%20problems%20the%20AI%20community%20faces%20as%20their%20creations%20become%20smarter%20and%20more%20capable%20of%20tackling%20complicated%20and%20critical%20tasks.%22%2C%22date%22%3A%222020%22%2C%22url%22%3A%22https%3A%
5C%2F%5C%2Fthenextweb.com%5C%2Fneural%5C%2F2020%5C%2F06%5C%2F19%5C%2Fthe-advantages-of-self-explainable-ai-over-interpretable-ai%5C%2F%22%2C%22language%22%3A%22en%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222020-07-13T18%3A04%3A24Z%22%2C%22tags%22%3A%5B%7B%22tag%22%3A%22Artificial%20intelligence%22%7D%2C%7B%22tag%22%3A%22Interpretability%20and%20explainability%22%7D%2C%7B%22tag%22%3A%22Machine%20learning%22%7D%5D%7D%7D%2C%7B%22key%22%3A%22MS4U5EAW%22%2C%22library%22%3A%7B%22id%22%3A2133649%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Rogers%20et%20al.%22%2C%22parsedDate%22%3A%222020%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%3Cdiv%20class%3D%5C%22csl-bib-body%5C%22%20style%3D%5C%22line-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%5C%22%3E%5Cn%20%20%3Cdiv%20class%3D%5C%22csl-entry%5C%22%3ERogers%2C%20Anna%2C%20Olga%20Kovaleva%2C%20and%20Anna%20Rumshisky.%20%26%23x201C%3BA%20Primer%20in%20BERTology%3A%20What%20We%20Know%20about%20How%20BERT%20Works.%26%23x201D%3B%20%3Ci%3EArXiv%3A2002.12327%20%5BCs%5D%3C%5C%2Fi%3E%2C%202020.%20%3Ca%20class%3D%27zp-ItemURL%27%20href%3D%27http%3A%5C%2F%5C%2Farxiv.org%5C%2Fabs%5C%2F2002.12327%27%3Ehttp%3A%5C%2F%5C%2Farxiv.org%5C%2Fabs%5C%2F2002.12327%3C%5C%2Fa%3E.%20%3Ca%20title%3D%27Cite%20in%20RIS%20Format%27%20class%3D%27zp-CiteRIS%27%20href%3D%27https%3A%5C%2F%5C%2Fwe1s.ucsb.edu%5C%2Fwp-content%5C%2Fplugins%5C%2Fzotpress%5C%2Flib%5C%2Frequest%5C%2Frequest.cite.php%3Fapi_user_id%3D2133649%26amp%3Bitem_key%3DMS4U5EAW%27%3ECite%3C%5C%2Fa%3E%20%3C%5C%2Fdiv%3E%5Cn%3C%5C%2Fdiv%3E%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22A%20Primer%20in%20BERTology%3A%20What%20we%20know%20about%20how%20BERT%20works%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Anna%22%2C%22lastName%22%3A%22Rogers%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Olga%22%2C%22lastName%22%3A%22Kovaleva%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Anna%22%2C%22lastName%22%3A%22Rumshisky%22%7D%5D%2C%22abstractNote%22%3A%22Transformer-based%20models%20are%20now%20widely%20used%20in%20NLP%2C%20but%20we%20still%20do%20not%20understand%20a%20lot%20about%20their%20inner%20workings.%20This%20paper%20describes%20what%20is%20known%20to%20date%20about%20the%20famous%20BERT%20model%20%28Devlin%20et%20al.%202019%29%2C%20synthesizing%20over%2040%20analysis%20studies.%20We%20also%20provide%20an%20overview%20of%20the%20proposed%20modifications%20to%20the%20model%20and%20its%20training%20regime.%20We%20then%20outline%20the%20directions%20for%20further%20research.%22%2C%22date%22%3A%222020%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%22%22%2C%22ISSN%22%3A%22%22%2C%22url%22%3A%22http%3A%5C%2F%5C%2Farxiv.org%5C%2Fabs%5C%2F2002.12327%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222020-02-29T06%3A39%3A16Z%22%2C%22tags%22%3A%5B%7B%22tag%22%3A%22Artificial%20intelligence%22%7D%2C%7B%22tag%22%3A%22Interpretability%20and%20explainability%22%7D%2C%7B%22tag%22%3A%22Machine%20learning%22%7D%2C%7B%22tag%22%3A%22Natural%20language%20processing%22%7D%2C%7B%22tag%22%3A%22Text%20Analysis%22%7D%5D%7D%7D%2C%7B%22key%22%3A%22DSXGKU6A%22%2C%22library%22%3A%7B%22id%22%3A2133649%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Rudin%22%2C%22parsedDate%22%3A%222019%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%3Cdiv%20class%3D%5C%22csl-bib-body%5C%22%20style%3D%5C%22line-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%5C%22%3E%5Cn%20%20%3Cdiv%20cla
ss%3D%5C%22csl-entry%5C%22%3ERudin%2C%20Cynthia.%20%26%23x201C%3BStop%20Explaining%20Black%20Box%20Machine%20Learning%20Models%20for%20High%20Stakes%20Decisions%20and%20Use%20Interpretable%20Models%20Instead.%26%23x201D%3B%20%3Ci%3ENature%20Machine%20Intelligence%3C%5C%2Fi%3E%201%2C%20no.%205%20%282019%29%3A%20206%26%23x2013%3B15.%20%3Ca%20class%3D%27zp-DOIURL%27%20href%3D%27https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1038%5C%2Fs42256-019-0048-x%27%3Ehttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1038%5C%2Fs42256-019-0048-x%3C%5C%2Fa%3E.%20%3Ca%20title%3D%27Cite%20in%20RIS%20Format%27%20class%3D%27zp-CiteRIS%27%20href%3D%27https%3A%5C%2F%5C%2Fwe1s.ucsb.edu%5C%2Fwp-content%5C%2Fplugins%5C%2Fzotpress%5C%2Flib%5C%2Frequest%5C%2Frequest.cite.php%3Fapi_user_id%3D2133649%26amp%3Bitem_key%3DDSXGKU6A%27%3ECite%3C%5C%2Fa%3E%20%3C%5C%2Fdiv%3E%5Cn%3C%5C%2Fdiv%3E%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Stop%20explaining%20black%20box%20machine%20learning%20models%20for%20high%20stakes%20decisions%20and%20use%20interpretable%20models%20instead%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Cynthia%22%2C%22lastName%22%3A%22Rudin%22%7D%5D%2C%22abstractNote%22%3A%22Black%20box%20machine%20learning%20models%20are%20currently%20being%20used%20for%20high-stakes%20decision%20making%20throughout%20society%2C%20causing%20problems%20in%20healthcare%2C%20criminal%20justice%20and%20other%20domains.%20Some%20people%20hope%20that%20creating%20methods%20for%20explaining%20these%20black%20box%20models%20will%20alleviate%20some%20of%20the%20problems%2C%20but%20trying%20to%20explain%20black%20box%20models%2C%20rather%20than%20creating%20models%20that%20are%20interpretable%20in%20the%20first%20place%2C%20is%20likely%20to%20perpetuate%20bad%20practice%20and%20can%20potentially%20cause%20great%20harm%20to%20society.%20The%20way%20forward%20is%20to%20design%20models%20that%20are%20inherently%20interpretable.%20This%20Perspective%20clarifies%20the%20chasm%20between%20explaining%20black%20boxes%20and%20using%20inherently%20interpretable%20models%2C%20outlines%20several%20key%20reasons%20why%20explainable%20black%20boxes%20should%20be%20avoided%20in%20high-stakes%20decisions%2C%20identifies%20challenges%20to%20interpretable%20machine%20learning%2C%20and%20provides%20several%20example%20applications%20where%20interpretable%20models%20could%20potentially%20replace%20black%20box%20models%20in%20criminal%20justice%2C%20healthcare%20and%20computer%20vision.%22%2C%22date%22%3A%222019%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.1038%5C%2Fs42256-019-0048-x%22%2C%22ISSN%22%3A%222522-5839%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.nature.com%5C%2Farticles%5C%2Fs42256-019-0048-x%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222020-08-12T19%3A06%3A26Z%22%2C%22tags%22%3A%5B%7B%22tag%22%3A%22Artificial%20intelligence%22%7D%2C%7B%22tag%22%3A%22Interpretability%20and%20explainability%22%7D%2C%7B%22tag%22%3A%22Machine%20learning%22%7D%5D%7D%7D%2C%7B%22key%22%3A%22PJ6HYXG5%22%2C%22library%22%3A%7B%22id%22%3A2133649%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Lim%20et%20al.%22%2C%22parsedDate%22%3A%222019%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%3Cdiv%20class%3D%5C%22csl-bib-body%5C%22%20style%3D%5C%22line-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%5C%22%3E%5Cn%20%20%3Cdiv%20class%3D%5C%22csl-entry%5C%22%3ELim%2C%20Brian%20Y.%2C%20Qian%20Yang%2C%20Ashraf%20Abdul%2C%20and%20Danding%20Wang.%20%26%23x201C%3BWhy%20These%20Explanations%3F%20
Selecting%20Intelligibility%20Types%20for%20Explanation%20Goals.%26%23x201D%3B%20In%20%3Ci%3EIUI%20Workshops%202019%3C%5C%2Fi%3E.%20Los%20Angeles%3A%20ACM%2C%202019.%20%3Ca%20class%3D%27zp-ItemURL%27%20href%3D%27https%3A%5C%2F%5C%2Fwww.semanticscholar.org%5C%2Fpaper%5C%2FA-Study-on-Interaction-in-Human-in-the-Loop-Machine-Yang-Kandogan%5C%2F03a4544caed21760df30f0e4f417bbe361c29c9e%27%3Ehttps%3A%5C%2F%5C%2Fwww.semanticscholar.org%5C%2Fpaper%5C%2FA-Study-on-Interaction-in-Human-in-the-Loop-Machine-Yang-Kandogan%5C%2F03a4544caed21760df30f0e4f417bbe361c29c9e%3C%5C%2Fa%3E.%20%3Ca%20title%3D%27Cite%20in%20RIS%20Format%27%20class%3D%27zp-CiteRIS%27%20href%3D%27https%3A%5C%2F%5C%2Fwe1s.ucsb.edu%5C%2Fwp-content%5C%2Fplugins%5C%2Fzotpress%5C%2Flib%5C%2Frequest%5C%2Frequest.cite.php%3Fapi_user_id%3D2133649%26amp%3Bitem_key%3DPJ6HYXG5%27%3ECite%3C%5C%2Fa%3E%20%3C%5C%2Fdiv%3E%5Cn%3C%5C%2Fdiv%3E%22%2C%22data%22%3A%7B%22itemType%22%3A%22conferencePaper%22%2C%22title%22%3A%22Why%20these%20Explanations%3F%20Selecting%20Intelligibility%20Types%20for%20Explanation%20Goals%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Brian%20Y.%22%2C%22lastName%22%3A%22Lim%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Qian%22%2C%22lastName%22%3A%22Yang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Ashraf%22%2C%22lastName%22%3A%22Abdul%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Danding%22%2C%22lastName%22%3A%22Wang%22%7D%5D%2C%22abstractNote%22%3A%22The%20increasing%20ubiquity%20of%20artificial%20intelligence%20%28AI%29%20has%20spurred%20the%20development%20of%20explainable%20AI%20%28XAI%29%20to%20make%20AI%20more%20understandable.%20Even%20as%20novel%20algorithms%20for%20explanation%20are%20being%20developed%2C%20researchers%20have%20called%20for%20more%20human%20interpretability.%20While%20empirical%20user%20studies%20can%20be%20conducted%20to%20evaluate%20explanation%20effectiveness%2C%20it%20remains%20unclear%20why%20specific%20explanations%20are%20helpful%20for%20understanding.%20We%20leverage%20a%20recently%20developed%20conceptual%20framework%20for%20user-centric%20reasoned%20XAI%20that%20draws%20from%20foundational%20concepts%20in%20philosophy%2C%20cognitive%20psychology%2C%20and%20AI%20to%20identify%20pathways%20for%20how%20user%20reasoning%20drives%20XAI%20needs.%20We%20identified%20targeted%20strategies%20for%20applying%20XAI%20facilities%20to%20improve%20understanding%2C%20trust%20and%20decision%20performance.%20We%20discuss%20how%20our%20framework%20can%20be%20extended%20and%20applied%20to%20other%20domains%20that%20need%20usercentric%20XAI.%20This%20position%20paper%20seeks%20to%20promote%20the%20design%20of%20XAI%20features%20based%20on%20human%20reasoning%20needs%22%2C%22date%22%3A%222019%22%2C%22proceedingsTitle%22%3A%22IUI%20Workshops%202019%22%2C%22conferenceName%22%3A%22IUI%20Workshops%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%22%22%2C%22ISBN%22%3A%22%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.semanticscholar.org%5C%2Fpaper%5C%2FA-Study-on-Interaction-in-Human-in-the-Loop-Machine-Yang-Kandogan%5C%2F03a4544caed21760df30f0e4f417bbe361c29c9e%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222020-02-08T23%3A06%3A56Z%22%2C%22tags%22%3A%5B%7B%22tag%22%3A%22Artificial%20intelligence%22%7D%2C%7B%22tag%22%3A%22Interpretability%20and%20explainability%22%7D%5D%7D%7D%2C%7B%22key%22%3A%22RIYZUJJ9%22%2C%22library%22%3A%7B%22id%22%3A2133649%7D%2C%22meta%22%3A%7B%22lastModifiedByUser%22%3A%7B%22id%22%3A22837
%2C%22username%22%3A%22ayliu%22%2C%22name%22%3A%22Alan%20Liu%22%2C%22links%22%3A%7B%22alternate%22%3A%7B%22href%22%3A%22https%3A%5C%2F%5C%2Fwww.zotero.org%5C%2Fayliu%22%2C%22type%22%3A%22text%5C%2Fhtml%22%7D%7D%7D%2C%22creatorSummary%22%3A%22Murdoch%20et%20al.%22%2C%22parsedDate%22%3A%222019%22%2C%22numChildren%22%3A1%7D%2C%22bib%22%3A%22%3Cdiv%20class%3D%5C%22csl-bib-body%5C%22%20style%3D%5C%22line-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%5C%22%3E%5Cn%20%20%3Cdiv%20class%3D%5C%22csl-entry%5C%22%3EMurdoch%2C%20W.%20James%2C%20Chandan%20Singh%2C%20Karl%20Kumbier%2C%20Reza%20Abbasi-Asl%2C%20and%20Bin%20Yu.%20%26%23x201C%3BInterpretable%20Machine%20Learning%3A%20Definitions%2C%20Methods%2C%20and%20Applications.%26%23x201D%3B%20%3Ci%3EArXiv%3A1901.04592%20%5BCs%2C%20Stat%5D%3C%5C%2Fi%3E%2C%202019.%20%3Ca%20class%3D%27zp-ItemURL%27%20href%3D%27http%3A%5C%2F%5C%2Farxiv.org%5C%2Fabs%5C%2F1901.04592%27%3Ehttp%3A%5C%2F%5C%2Farxiv.org%5C%2Fabs%5C%2F1901.04592%3C%5C%2Fa%3E.%20%3Ca%20title%3D%27Cite%20in%20RIS%20Format%27%20class%3D%27zp-CiteRIS%27%20href%3D%27https%3A%5C%2F%5C%2Fwe1s.ucsb.edu%5C%2Fwp-content%5C%2Fplugins%5C%2Fzotpress%5C%2Flib%5C%2Frequest%5C%2Frequest.cite.php%3Fapi_user_id%3D2133649%26amp%3Bitem_key%3DRIYZUJJ9%27%3ECite%3C%5C%2Fa%3E%20%3C%5C%2Fdiv%3E%5Cn%3C%5C%2Fdiv%3E%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Interpretable%20machine%20learning%3A%20definitions%2C%20methods%2C%20and%20applications%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22W.%20James%22%2C%22lastName%22%3A%22Murdoch%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Chandan%22%2C%22lastName%22%3A%22Singh%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Karl%22%2C%22lastName%22%3A%22Kumbier%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Reza%22%2C%22lastName%22%3A%22Abbasi-Asl%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Bin%22%2C%22lastName%22%3A%22Yu%22%7D%5D%2C%22abstractNote%22%3A%22The%20authors%20aim%20to%20address%20concerns%20surrounding%20machine-learning%20models%20by%20defining%20interpretability%20in%20the%20context%20of%20machine%20learning%20and%20introducing%20the%20Predictive%2C%20Descriptive%2C%20Relevant%20%28PDR%29%20framework%20for%20discussing%20interpretations.%20The%20PDR%20framework%20provides%20three%20overarching%20desiderata%20for%20evaluation%3A%20predictive%20accuracy%2C%20descriptive%20accuracy%20and%20relevancy%2C%20with%20relevancy%20judged%20relative%20to%20a%20human%20audience.%20Moreover%2C%20to%20help%20manage%20the%20deluge%20of%20interpretation%20methods%2C%20they%20introduce%20a%20categorization%20of%20existing%20techniques%20into%20model-based%20and%20post-hoc%20categories%2C%20with%20sub-groups%20including%20sparsity%2C%20modularity%20and%20simulatability.%20To%20demonstrate%20how%20practitioners%20can%20use%20the%20PDR%20framework%20to%20evaluate%20and%20understand%20interpretations%2C%20the%20authors%20provide%20numerous%20real-world%20examples%20that%20highlight%20the%20often%20under-appreciated%20role%20played%20by%20human%20audiences%20in%20discussions%20of%20interpretability.%20Finally%2C%20based%20on%20their%20framework%2C%20the%20authors%20discuss%20limitations%20of%20existing%20methods%20and%20directions%20for%20future%20work.%22%2C%22date%22%3A%222019%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%22%22%2C%22ISSN%22%3A%22%22%2C%22url%22%3A%22http%3A%5C%2F%5C%2Farxiv.org%5C%2Fabs%5C%2F1901
.04592%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222019-07-27T21%3A42%3A37Z%22%2C%22tags%22%3A%5B%7B%22tag%22%3A%22Artificial%20intelligence%22%7D%2C%7B%22tag%22%3A%22Interpretability%20and%20explainability%22%7D%2C%7B%22tag%22%3A%22Machine%20learning%22%7D%5D%7D%7D%2C%7B%22key%22%3A%22MRZBRAN6%22%2C%22library%22%3A%7B%22id%22%3A2133649%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Sawhney%22%2C%22parsedDate%22%3A%222018%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%3Cdiv%20class%3D%5C%22csl-bib-body%5C%22%20style%3D%5C%22line-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%5C%22%3E%5Cn%20%20%3Cdiv%20class%3D%5C%22csl-entry%5C%22%3ESawhney%2C%20Ravi.%20%26%23x201C%3BHuman%20in%20the%20Loop%3A%20Why%20We%20Will%20Be%20Needed%20to%20Complement%20Artificial%20Intelligence.%26%23x201D%3B%20%3Ci%3ELSE%20Business%20Review%3C%5C%2Fi%3E%20%28blog%29%2C%202018.%20%3Ca%20class%3D%27zp-ItemURL%27%20href%3D%27https%3A%5C%2F%5C%2Fblogs.lse.ac.uk%5C%2Fbusinessreview%5C%2F2018%5C%2F10%5C%2F24%5C%2Fhuman-in-the-loop-why-we-will-be-needed-to-complement-artificial-intelligence%5C%2F%27%3Ehttps%3A%5C%2F%5C%2Fblogs.lse.ac.uk%5C%2Fbusinessreview%5C%2F2018%5C%2F10%5C%2F24%5C%2Fhuman-in-the-loop-why-we-will-be-needed-to-complement-artificial-intelligence%5C%2F%3C%5C%2Fa%3E.%20%3Ca%20title%3D%27Cite%20in%20RIS%20Format%27%20class%3D%27zp-CiteRIS%27%20href%3D%27https%3A%5C%2F%5C%2Fwe1s.ucsb.edu%5C%2Fwp-content%5C%2Fplugins%5C%2Fzotpress%5C%2Flib%5C%2Frequest%5C%2Frequest.cite.php%3Fapi_user_id%3D2133649%26amp%3Bitem_key%3DMRZBRAN6%27%3ECite%3C%5C%2Fa%3E%20%3C%5C%2Fdiv%3E%5Cn%3C%5C%2Fdiv%3E%22%2C%22data%22%3A%7B%22itemType%22%3A%22blogPost%22%2C%22title%22%3A%22Human%20in%20the%20loop%3A%20why%20we%20will%20be%20needed%20to%20complement%20artificial%20intelligence%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Ravi%22%2C%22lastName%22%3A%22Sawhney%22%7D%5D%2C%22abstractNote%22%3A%22Along%20with%20artificial%20intelligence%20%28AI%29%2C%20it%20is%20likely%20most%20readers%20will%20have%20observed%20the%20increased%20press%20coverage%20around%20automation.%20More%20recently%20these%20two%20terms%20are%20being%20used%20jointly%20to%20present%5Cu2026%22%2C%22blogTitle%22%3A%22LSE%20Business%20Review%22%2C%22date%22%3A%222018%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fblogs.lse.ac.uk%5C%2Fbusinessreview%5C%2F2018%5C%2F10%5C%2F24%5C%2Fhuman-in-the-loop-why-we-will-be-needed-to-complement-artificial-intelligence%5C%2F%22%2C%22language%22%3A%22en%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222020-02-08T21%3A45%3A39Z%22%2C%22tags%22%3A%5B%7B%22tag%22%3A%22Artificial%20intelligence%22%7D%2C%7B%22tag%22%3A%22Interpretability%20and%20explainability%22%7D%5D%7D%7D%2C%7B%22key%22%3A%228UEU8HL4%22%2C%22library%22%3A%7B%22id%22%3A2133649%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Hind%20et%20al.%22%2C%22parsedDate%22%3A%222018%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%3Cdiv%20class%3D%5C%22csl-bib-body%5C%22%20style%3D%5C%22line-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%5C%22%3E%5Cn%20%20%3Cdiv%20class%3D%5C%22csl-entry%5C%22%3EHind%2C%20Michael%2C%20Dennis%20Wei%2C%20Murray%20Campbell%2C%20Noel%20C.%20F.%20Codella%2C%20Amit%20Dhurandhar%2C%20Aleksandra%20Mojsilovi%26%23x107%3B%2C%20Karthikeyan%20Natesan%20Ramamurthy%2C%20and%20Kush%20R.%20Varshney.%20%26%23x201C%3BTED%3A%20Teaching%20AI%20to%20Explain%20Its%20Decisions.%26%23x201D%3B%20%3Ci%3EArXiv%3A1811.04896%20%5BCs%5D%3C%5C%2Fi%3E%2C%202018.%20%3Ca%20cla
ss%3D%27zp-ItemURL%27%20href%3D%27http%3A%5C%2F%5C%2Farxiv.org%5C%2Fabs%5C%2F1811.04896%27%3Ehttp%3A%5C%2F%5C%2Farxiv.org%5C%2Fabs%5C%2F1811.04896%3C%5C%2Fa%3E.%20%3Ca%20title%3D%27Cite%20in%20RIS%20Format%27%20class%3D%27zp-CiteRIS%27%20href%3D%27https%3A%5C%2F%5C%2Fwe1s.ucsb.edu%5C%2Fwp-content%5C%2Fplugins%5C%2Fzotpress%5C%2Flib%5C%2Frequest%5C%2Frequest.cite.php%3Fapi_user_id%3D2133649%26amp%3Bitem_key%3D8UEU8HL4%27%3ECite%3C%5C%2Fa%3E%20%3C%5C%2Fdiv%3E%5Cn%3C%5C%2Fdiv%3E%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22TED%3A%20Teaching%20AI%20to%20Explain%20its%20Decisions%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Michael%22%2C%22lastName%22%3A%22Hind%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Dennis%22%2C%22lastName%22%3A%22Wei%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Murray%22%2C%22lastName%22%3A%22Campbell%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Noel%20C.%20F.%22%2C%22lastName%22%3A%22Codella%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Amit%22%2C%22lastName%22%3A%22Dhurandhar%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Aleksandra%22%2C%22lastName%22%3A%22Mojsilovi%5Cu0107%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Karthikeyan%20Natesan%22%2C%22lastName%22%3A%22Ramamurthy%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Kush%20R.%22%2C%22lastName%22%3A%22Varshney%22%7D%5D%2C%22abstractNote%22%3A%22Artificial%20intelligence%20systems%20are%20being%20increasingly%20deployed%20due%20to%20their%20potential%20to%20increase%20the%20efficiency%2C%20scale%2C%20consistency%2C%20fairness%2C%20and%20accuracy%20of%20decisions.%20However%2C%20as%20many%20of%20these%20systems%20are%20opaque%20in%20their%20operation%2C%20there%20is%20a%20growing%20demand%20for%20such%20systems%20to%20provide%20explanations%20for%20their%20decisions.%20Conventional%20approaches%20to%20this%20problem%20attempt%20to%20expose%20or%20discover%20the%20inner%20workings%20of%20a%20machine%20learning%20model%20with%20the%20hope%20that%20the%20resulting%20explanations%20will%20be%20meaningful%20to%20the%20consumer.%20In%20contrast%2C%20this%20paper%20suggests%20a%20new%20approach%20to%20this%20problem.%20It%20introduces%20a%20simple%2C%20practical%20framework%2C%20called%20Teaching%20Explanations%20for%20Decisions%20%28TED%29%2C%20that%20provides%20meaningful%20explanations%20that%20match%20the%20mental%20model%20of%20the%20consumer.%20We%20illustrate%20the%20generality%20and%20effectiveness%20of%20this%20approach%20with%20two%20different%20examples%2C%20resulting%20in%20highly%20accurate%20explanations%20with%20no%20loss%20of%20prediction%20accuracy%20for%20these%20two%20examples.%22%2C%22date%22%3A%222018%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%22%22%2C%22ISSN%22%3A%22%22%2C%22url%22%3A%22http%3A%5C%2F%5C%2Farxiv.org%5C%2Fabs%5C%2F1811.04896%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222019-08-09T19%3A01%3A21Z%22%2C%22tags%22%3A%5B%7B%22tag%22%3A%22Artificial%20intelligence%22%7D%2C%7B%22tag%22%3A%22Interpretability%20and%20explainability%22%7D%2C%7B%22tag%22%3A%22Machine%20learning%22%7D%5D%7D%7D%2C%7B%22key%22%3A%22HI7SBGVB%22%2C%22library%22%3A%7B%22id%22%3A2133649%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Alvarez-Melis%20and%20Jaakkola%22%2C%22parsedDate%22%3A%222018%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%3Cdiv%20class%3D%5C%22csl-bib-body%5C%22%20style%
3D%5C%22line-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%5C%22%3E%5Cn%20%20%3Cdiv%20class%3D%5C%22csl-entry%5C%22%3EAlvarez-Melis%2C%20David%2C%20and%20Tommi%20Jaakkola.%20%26%23x201C%3BTowards%20Robust%20Interpretability%20with%20Self-Explaining%20Neural%20Networks.%26%23x201D%3B%20In%20%3Ci%3EAdvances%20in%20Neural%20Information%20Processing%20Systems%2031%3C%5C%2Fi%3E%2C%20edited%20by%20S.%20Bengio%2C%20H.%20Wallach%2C%20H.%20Larochelle%2C%20K.%20Grauman%2C%20N.%20Cesa-Bianchi%2C%20and%20R.%20Garnett%2C%207775%26%23x2013%3B84.%20Curran%20Associates%2C%20Inc.%2C%202018.%20%3Ca%20class%3D%27zp-ItemURL%27%20href%3D%27http%3A%5C%2F%5C%2Fpapers.nips.cc%5C%2Fpaper%5C%2F8003-towards-robust-interpretability-with-self-explaining-neural-networks.pdf%27%3Ehttp%3A%5C%2F%5C%2Fpapers.nips.cc%5C%2Fpaper%5C%2F8003-towards-robust-interpretability-with-self-explaining-neural-networks.pdf%3C%5C%2Fa%3E.%20%3Ca%20title%3D%27Cite%20in%20RIS%20Format%27%20class%3D%27zp-CiteRIS%27%20href%3D%27https%3A%5C%2F%5C%2Fwe1s.ucsb.edu%5C%2Fwp-content%5C%2Fplugins%5C%2Fzotpress%5C%2Flib%5C%2Frequest%5C%2Frequest.cite.php%3Fapi_user_id%3D2133649%26amp%3Bitem_key%3DHI7SBGVB%27%3ECite%3C%5C%2Fa%3E%20%3C%5C%2Fdiv%3E%5Cn%3C%5C%2Fdiv%3E%22%2C%22data%22%3A%7B%22itemType%22%3A%22conferencePaper%22%2C%22title%22%3A%22Towards%20Robust%20Interpretability%20with%20Self-Explaining%20Neural%20Networks%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22David%22%2C%22lastName%22%3A%22Alvarez-Melis%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Tommi%22%2C%22lastName%22%3A%22Jaakkola%22%7D%2C%7B%22creatorType%22%3A%22editor%22%2C%22firstName%22%3A%22S.%22%2C%22lastName%22%3A%22Bengio%22%7D%2C%7B%22creatorType%22%3A%22editor%22%2C%22firstName%22%3A%22H.%22%2C%22lastName%22%3A%22Wallach%22%7D%2C%7B%22creatorType%22%3A%22editor%22%2C%22firstName%22%3A%22H.%22%2C%22lastName%22%3A%22Larochelle%22%7D%2C%7B%22creatorType%22%3A%22editor%22%2C%22firstName%22%3A%22K.%22%2C%22lastName%22%3A%22Grauman%22%7D%2C%7B%22creatorType%22%3A%22editor%22%2C%22firstName%22%3A%22N.%22%2C%22lastName%22%3A%22Cesa-Bianchi%22%7D%2C%7B%22creatorType%22%3A%22editor%22%2C%22firstName%22%3A%22R.%22%2C%22lastName%22%3A%22Garnett%22%7D%5D%2C%22abstractNote%22%3A%22Most%20recent%20work%20on%20interpretability%20of%20complex%20machine%20learning%20models%20has%20focused%20on%20estimating%20a-posteriori%20explanations%20for%20previously%20trained%20models%20around%20specific%20predictions.%20Self-explaining%20models%20where%20interpretability%20plays%20a%20key%20role%20already%20during%20learning%20have%20received%20much%20less%20attention.%20We%20propose%20three%20desiderata%20for%20explanations%20in%20general%20--%20explicitness%2C%20faithfulness%2C%20and%20stability%20--%20and%20show%20that%20existing%20methods%20do%20not%20satisfy%20them.%20In%20response%2C%20we%20design%20self-explaining%20models%20in%20stages%2C%20progressively%20generalizing%20linear%20classifiers%20to%20complex%20yet%20architecturally%20explicit%20models.%20Faithfulness%20and%20stability%20are%20enforced%20via%20regularization%20specifically%20tailored%20to%20such%20models.%20Experimental%20results%20across%20various%20benchmark%20datasets%20show%20that%20our%20framework%20offers%20a%20promising%20direction%20for%20reconciling%20model%20complexity%20and%20interpretability.%22%2C%22date%22%3A%222018%22%2C%22proceedingsTitle%22%3A%22Advances%20in%20Neural%20Information%20Processing%20Systems%2031%22%2C%22conferenceName%22%
3A%22%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%22%22%2C%22ISBN%22%3A%22%22%2C%22url%22%3A%22http%3A%5C%2F%5C%2Fpapers.nips.cc%5C%2Fpaper%5C%2F8003-towards-robust-interpretability-with-self-explaining-neural-networks.pdf%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222019-08-09T19%3A12%3A44Z%22%2C%22tags%22%3A%5B%7B%22tag%22%3A%22Artificial%20intelligence%22%7D%2C%7B%22tag%22%3A%22Interpretability%20and%20explainability%22%7D%2C%7B%22tag%22%3A%22Machine%20learning%22%7D%5D%7D%7D%2C%7B%22key%22%3A%22PTNAWWJA%22%2C%22library%22%3A%7B%22id%22%3A2133649%7D%2C%22meta%22%3A%7B%22lastModifiedByUser%22%3A%7B%22id%22%3A22837%2C%22username%22%3A%22ayliu%22%2C%22name%22%3A%22Alan%20Liu%22%2C%22links%22%3A%7B%22alternate%22%3A%7B%22href%22%3A%22https%3A%5C%2F%5C%2Fwww.zotero.org%5C%2Fayliu%22%2C%22type%22%3A%22text%5C%2Fhtml%22%7D%7D%7D%2C%22creatorSummary%22%3A%22Gall%22%2C%22parsedDate%22%3A%222018%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%3Cdiv%20class%3D%5C%22csl-bib-body%5C%22%20style%3D%5C%22line-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%5C%22%3E%5Cn%20%20%3Cdiv%20class%3D%5C%22csl-entry%5C%22%3EGall%2C%20Richard.%20%3Ci%3EMachine%20Learning%20Explainability%20vs%20Interpretability%3A%20Two%20Concepts%20That%20Could%20Help%20Restore%20Trust%20in%20AI%3C%5C%2Fi%3E%2C%202018.%20%3Ca%20class%3D%27zp-ItemURL%27%20href%3D%27https%3A%5C%2F%5C%2Fwww.kdnuggets.com%5C%2F2018%5C%2F12%5C%2Fmachine-learning-explainability-interpretability-ai.html%27%3Ehttps%3A%5C%2F%5C%2Fwww.kdnuggets.com%5C%2F2018%5C%2F12%5C%2Fmachine-learning-explainability-interpretability-ai.html%3C%5C%2Fa%3E.%20%3Ca%20title%3D%27Cite%20in%20RIS%20Format%27%20class%3D%27zp-CiteRIS%27%20href%3D%27https%3A%5C%2F%5C%2Fwe1s.ucsb.edu%5C%2Fwp-content%5C%2Fplugins%5C%2Fzotpress%5C%2Flib%5C%2Frequest%5C%2Frequest.cite.php%3Fapi_user_id%3D2133649%26amp%3Bitem_key%3DPTNAWWJA%27%3ECite%3C%5C%2Fa%3E%20%3C%5C%2Fdiv%3E%5Cn%3C%5C%2Fdiv%3E%22%2C%22data%22%3A%7B%22itemType%22%3A%22book%22%2C%22title%22%3A%22Machine%20Learning%20Explainability%20vs%20Interpretability%3A%20Two%20concepts%20that%20could%20help%20restore%20trust%20in%20AI%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Richard%22%2C%22lastName%22%3A%22Gall%22%7D%5D%2C%22abstractNote%22%3A%22This%20blog%20post%20explains%20the%20key%20differences%20between%20explainability%20and%20interpretability%20and%20why%20they%27re%20so%20important%20for%20machine%20learning%20and%20AI%2C%20before%20taking%20a%20look%20at%20several%20techniques%20and%20methods%20for%20improving%20machine%20learning%20interpretability.%22%2C%22date%22%3A%222018%22%2C%22language%22%3A%22en%22%2C%22ISBN%22%3A%22%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.kdnuggets.com%5C%2F2018%5C%2F12%5C%2Fmachine-learning-explainability-interpretability-ai.html%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222019-07-27T21%3A40%3A36Z%22%2C%22tags%22%3A%5B%7B%22tag%22%3A%22Artificial%20intelligence%22%7D%2C%7B%22tag%22%3A%22Interpretability%20and%20explainability%22%7D%2C%7B%22tag%22%3A%22Machine%20learning%22%7D%5D%7D%7D%2C%7B%22key%22%3A%22LMNYM8LH%22%2C%22library%22%3A%7B%22id%22%3A2133649%7D%2C%22meta%22%3A%7B%22lastModifiedByUser%22%3A%7B%22id%22%3A22837%2C%22username%22%3A%22ayliu%22%2C%22name%22%3A%22Alan%20Liu%22%2C%22links%22%3A%7B%22alternate%22%3A%7B%22href%22%3A%22https%3A%5C%2F%5C%2Fwww.zotero.org%5C%2Fayliu%22%2C%22type%22%3A%22text%5C%2Fhtml%22%7D%7D%7D%2C%22creatorSummary%22%3A%22Gilpin%20et%20al.%22%2C%22parsedDate%22%3A%222018
%22%2C%22numChildren%22%3A1%7D%2C%22bib%22%3A%22%3Cdiv%20class%3D%5C%22csl-bib-body%5C%22%20style%3D%5C%22line-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%5C%22%3E%5Cn%20%20%3Cdiv%20class%3D%5C%22csl-entry%5C%22%3EGilpin%2C%20Leilani%20H.%2C%20David%20Bau%2C%20Ben%20Z.%20Yuan%2C%20Ayesha%20Bajwa%2C%20Michael%20Specter%2C%20and%20Lalana%20Kagal.%20%26%23x201C%3BExplaining%20Explanations%3A%20An%20Overview%20of%20Interpretability%20of%20Machine%20Learning.%26%23x201D%3B%20%3Ci%3EArXiv%3A1806.00069%20%5BCs%2C%20Stat%5D%3C%5C%2Fi%3E%2C%202018.%20%3Ca%20class%3D%27zp-ItemURL%27%20href%3D%27http%3A%5C%2F%5C%2Farxiv.org%5C%2Fabs%5C%2F1806.00069%27%3Ehttp%3A%5C%2F%5C%2Farxiv.org%5C%2Fabs%5C%2F1806.00069%3C%5C%2Fa%3E.%20%3Ca%20title%3D%27Cite%20in%20RIS%20Format%27%20class%3D%27zp-CiteRIS%27%20href%3D%27https%3A%5C%2F%5C%2Fwe1s.ucsb.edu%5C%2Fwp-content%5C%2Fplugins%5C%2Fzotpress%5C%2Flib%5C%2Frequest%5C%2Frequest.cite.php%3Fapi_user_id%3D2133649%26amp%3Bitem_key%3DLMNYM8LH%27%3ECite%3C%5C%2Fa%3E%20%3C%5C%2Fdiv%3E%5Cn%3C%5C%2Fdiv%3E%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Explaining%20Explanations%3A%20An%20Overview%20of%20Interpretability%20of%20Machine%20Learning%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Leilani%20H.%22%2C%22lastName%22%3A%22Gilpin%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22David%22%2C%22lastName%22%3A%22Bau%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Ben%20Z.%22%2C%22lastName%22%3A%22Yuan%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Ayesha%22%2C%22lastName%22%3A%22Bajwa%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Michael%22%2C%22lastName%22%3A%22Specter%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Lalana%22%2C%22lastName%22%3A%22Kagal%22%7D%5D%2C%22abstractNote%22%3A%22There%20has%20recently%20been%20a%20surge%20of%20work%20in%20explanatory%20artificial%20intelligence%20%28XAI%29.%20This%20research%20area%20tackles%20the%20important%20problem%20that%20complex%20machines%20and%20algorithms%20often%20cannot%20provide%20insights%20into%20their%20behavior%20and%20thought%20processes.%20In%20an%20effort%20to%20create%20best%20practices%20and%20identify%20open%20challenges%2C%20this%20paper%20provides%20a%20definition%20of%20explainability%20and%20shows%20how%20it%20can%20be%20used%20to%20classify%20existing%20literature.%20It%20discuss%20why%20current%20approaches%20to%20explanatory%20methods%20especially%20for%20deep%20neural%20networks%20are%20insufficient%22%2C%22date%22%3A%222018%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%22%22%2C%22ISSN%22%3A%22%22%2C%22url%22%3A%22http%3A%5C%2F%5C%2Farxiv.org%5C%2Fabs%5C%2F1806.00069%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222019-07-27T21%3A33%3A40Z%22%2C%22tags%22%3A%5B%7B%22tag%22%3A%22Artificial%20intelligence%22%7D%2C%7B%22tag%22%3A%22Interpretability%20and%20explainability%22%7D%2C%7B%22tag%22%3A%22Machine%20learning%22%7D%5D%7D%7D%2C%7B%22key%22%3A%22MDHKGZNJ%22%2C%22library%22%3A%7B%22id%22%3A2133649%7D%2C%22meta%22%3A%7B%22lastModifiedByUser%22%3A%7B%22id%22%3A22837%2C%22username%22%3A%22ayliu%22%2C%22name%22%3A%22Alan%20Liu%22%2C%22links%22%3A%7B%22alternate%22%3A%7B%22href%22%3A%22https%3A%5C%2F%5C%2Fwww.zotero.org%5C%2Fayliu%22%2C%22type%22%3A%22text%5C%2Fhtml%22%7D%7D%7D%2C%22creatorSummary%22%3A%22Samek%20et%20al.%22%2C%22parsedDate%22%3A%222017%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%3Cdiv
%20class%3D%5C%22csl-bib-body%5C%22%20style%3D%5C%22line-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%5C%22%3E%5Cn%20%20%3Cdiv%20class%3D%5C%22csl-entry%5C%22%3ESamek%2C%20Wojciech%2C%20Thomas%20Wiegand%2C%20and%20Klaus-Robert%20M%26%23xFC%3Bller.%20%26%23x201C%3BExplainable%20Artificial%20Intelligence.%26%23x201D%3B%20%3Ci%3EInternational%20Telecommunication%20Union%20Journal%3C%5C%2Fi%3E%2C%20no.%201%20%282017%29%3A%201%26%23x2013%3B10.%20%3Ca%20class%3D%27zp-ItemURL%27%20href%3D%27https%3A%5C%2F%5C%2Fwww.itu.int%5C%2Fen%5C%2Fjournal%5C%2F001%5C%2FPages%5C%2F05.aspx%27%3Ehttps%3A%5C%2F%5C%2Fwww.itu.int%5C%2Fen%5C%2Fjournal%5C%2F001%5C%2FPages%5C%2F05.aspx%3C%5C%2Fa%3E.%20%3Ca%20title%3D%27Cite%20in%20RIS%20Format%27%20class%3D%27zp-CiteRIS%27%20href%3D%27https%3A%5C%2F%5C%2Fwe1s.ucsb.edu%5C%2Fwp-content%5C%2Fplugins%5C%2Fzotpress%5C%2Flib%5C%2Frequest%5C%2Frequest.cite.php%3Fapi_user_id%3D2133649%26amp%3Bitem_key%3DMDHKGZNJ%27%3ECite%3C%5C%2Fa%3E%20%3C%5C%2Fdiv%3E%5Cn%3C%5C%2Fdiv%3E%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Explainable%20Artificial%20Intelligence%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Wojciech%22%2C%22lastName%22%3A%22Samek%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Thomas%22%2C%22lastName%22%3A%22Wiegand%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Klaus-Robert%22%2C%22lastName%22%3A%22M%5Cu00fcller%22%7D%5D%2C%22abstractNote%22%3A%22With%20the%20availability%20of%20large%20databases%20and%20recent%20improvements%20in%20deep%20learning%20methodology%2C%20the%20performance%20of%20AI%20systems%20is%20reaching%2C%20or%20even%20exceeding%2C%20the%20human%20level%20on%20an%20increasing%20number%20of%20complex%20tasks.%20Impressive%20examples%20of%20this%20development%20can%20be%20found%20in%20domains%20such%20as%20image%20classification%2C%20sentiment%20analysis%2C%20speech%20understanding%20or%20strategic%20game%20playing.%20However%2C%20because%20of%20their%20nested%20non-linear%20structure%2C%20these%20highly%20successful%20machine%20learning%20and%20artificial%20intelligence%20models%20are%20usually%20applied%20in%20a%20black-box%20manner%2C%20i.e.%20no%20information%20is%20provided%20about%20what%20exactly%20makes%20them%20arrive%20at%20their%20predictions.%20Since%20this%20lack%20of%20transparency%20can%20be%20a%20major%20drawback%2C%20e.g.%20in%20medical%20applications%2C%20the%20development%20of%20methods%20for%20visualizing%2C%20explaining%20and%20interpreting%20deep%20learning%20models%20has%20recently%20attracted%20increasing%20attention.%20This%20paper%20summarizes%20recent%20developments%20in%20this%20field%20and%20makes%20a%20plea%20for%20more%20interpretability%20in%20artificial%20intelligence.%20Furthermore%2C%20it%20presents%20two%20approaches%20to%20explaining%20predictions%20of%20deep%20learning%20models%2C%20one%20method%20which%20computes%20the%20sensitivity%20of%20the%20prediction%20with%20respect%20to%20changes%20in%20the%20input%20and%20one%20approach%20which%20meaningfully%20decomposes%20the%20decision%20in%20terms%20of%20the%20input%20variables.%20These%20methods%20are%20evaluated%20on%20three%20classification%20tasks.%22%2C%22date%22%3A%222017%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%22%22%2C%22ISSN%22%3A%22%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.itu.int%5C%2Fen%5C%2Fjournal%5C%2F001%5C%2FPages%5C%2F05.aspx%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222019-07-27T21%3A33%3A48Z%
22%2C%22tags%22%3A%5B%7B%22tag%22%3A%22Artificial%20intelligence%22%7D%2C%7B%22tag%22%3A%22Interpretability%20and%20explainability%22%7D%2C%7B%22tag%22%3A%22Machine%20learning%22%7D%5D%7D%7D%2C%7B%22key%22%3A%22TN35EMBP%22%2C%22library%22%3A%7B%22id%22%3A2133649%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Ruchansky%20et%20al.%22%2C%22parsedDate%22%3A%222017%22%2C%22numChildren%22%3A1%7D%2C%22bib%22%3A%22%3Cdiv%20class%3D%5C%22csl-bib-body%5C%22%20style%3D%5C%22line-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%5C%22%3E%5Cn%20%20%3Cdiv%20class%3D%5C%22csl-entry%5C%22%3ERuchansky%2C%20Natali%2C%20Sungyong%20Seo%2C%20and%20Yan%20Liu.%20%26%23x201C%3BCSI%3A%20A%20Hybrid%20Deep%20Model%20for%20Fake%20News%20Detection.%26%23x201D%3B%20In%20%3Ci%3EProceedings%20of%20the%202017%20ACM%20on%20Conference%20on%20Information%20and%20Knowledge%20Management%3C%5C%2Fi%3E%2C%20797%26%23x2013%3B806.%20CIKM%20%26%23x2019%3B17.%20Singapore%2C%20Singapore%3A%20Association%20for%20Computing%20Machinery%2C%202017.%20%3Ca%20class%3D%27zp-ItemURL%27%20href%3D%27https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1145%5C%2F3132847.3132877%27%3Ehttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1145%5C%2F3132847.3132877%3C%5C%2Fa%3E.%20%3Ca%20title%3D%27Cite%20in%20RIS%20Format%27%20class%3D%27zp-CiteRIS%27%20href%3D%27https%3A%5C%2F%5C%2Fwe1s.ucsb.edu%5C%2Fwp-content%5C%2Fplugins%5C%2Fzotpress%5C%2Flib%5C%2Frequest%5C%2Frequest.cite.php%3Fapi_user_id%3D2133649%26amp%3Bitem_key%3DTN35EMBP%27%3ECite%3C%5C%2Fa%3E%20%3C%5C%2Fdiv%3E%5Cn%3C%5C%2Fdiv%3E%22%2C%22data%22%3A%7B%22itemType%22%3A%22conferencePaper%22%2C%22title%22%3A%22CSI%3A%20A%20Hybrid%20Deep%20Model%20for%20Fake%20News%20Detection%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Natali%22%2C%22lastName%22%3A%22Ruchansky%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Sungyong%22%2C%22lastName%22%3A%22Seo%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yan%22%2C%22lastName%22%3A%22Liu%22%7D%5D%2C%22abstractNote%22%3A%22The%20topic%20of%20fake%20news%20has%20drawn%20attention%20both%20from%20the%20public%20and%20the%20academic%20communities.%20Such%20misinformation%20has%20the%20potential%20of%20affecting%20public%20opinion%2C%20providing%20an%20opportunity%20for%20malicious%20parties%20to%20manipulate%20the%20outcomes%20of%20public%20events%20such%20as%20elections.%20Because%20such%20high%20stakes%20are%20at%20play%2C%20automatically%20detecting%20fake%20news%20is%20an%20important%2C%20yet%20challenging%20problem%20that%20is%20not%20yet%20well%20understood.%20Nevertheless%2C%20there%20are%20three%20generally%20agreed%20upon%20characteristics%20of%20fake%20news%3A%20the%20text%20of%20an%20article%2C%20the%20user%20response%20it%20receives%2C%20and%20the%20source%20users%20promoting%20it.%20Existing%20work%20has%20largely%20focused%20on%20tailoring%20solutions%20to%20one%20particular%20characteristic%20which%20has%20limited%20their%20success%20and%20generality.%20In%20this%20work%2C%20we%20propose%20a%20model%20that%20combines%20all%20three%20characteristics%20for%20a%20more%20accurate%20and%20automated%20prediction.%20Specifically%2C%20we%20incorporate%20the%20behavior%20of%20both%20parties%2C%20users%20and%20articles%2C%20and%20the%20group%20behavior%20of%20users%20who%20propagate%20fake%20news.%20Motivated%20by%20the%20three%20characteristics%2C%20we%20propose%20a%20model%20called%20CSI%20which%20is%20composed%20of%20three%20modules%3A%20Capture%2C%20Score%2C%20and%20Integrate.%20The%20fi
rst%20module%20is%20based%20on%20the%20response%20and%20text%3B%20it%20uses%20a%20Recurrent%20Neural%20Network%20to%20capture%20the%20temporal%20pattern%20of%20user%20activity%20on%20a%20given%20article.%20The%20second%20module%20learns%20the%20source%20characteristic%20based%20on%20the%20behavior%20of%20users%2C%20and%20the%20two%20are%20integrated%20with%20the%20third%20module%20to%20classify%20an%20article%20as%20fake%20or%20not.%20Experimental%20analysis%20on%20real-world%20data%20demonstrates%20that%20CSI%20achieves%20higher%20accuracy%20than%20existing%20models%2C%20and%20extracts%20meaningful%20latent%20representations%20of%20both%20users%20and%20articles.%22%2C%22date%22%3A%222017%22%2C%22proceedingsTitle%22%3A%22Proceedings%20of%20the%202017%20ACM%20on%20Conference%20on%20Information%20and%20Knowledge%20Management%22%2C%22conferenceName%22%3A%22%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.1145%5C%2F3132847.3132877%22%2C%22ISBN%22%3A%22978-1-4503-4918-5%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1145%5C%2F3132847.3132877%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222020-04-01T07%3A08%3A45Z%22%2C%22tags%22%3A%5B%7B%22tag%22%3A%22Artificial%20intelligence%22%7D%2C%7B%22tag%22%3A%22Data%20science%22%7D%2C%7B%22tag%22%3A%22Fake%20news%22%7D%2C%7B%22tag%22%3A%22Journalism%22%7D%2C%7B%22tag%22%3A%22Machine%20learning%22%7D%5D%7D%7D%2C%7B%22key%22%3A%22P9D2I8BZ%22%2C%22library%22%3A%7B%22id%22%3A2133649%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Wang%22%2C%22parsedDate%22%3A%222017%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%3Cdiv%20class%3D%5C%22csl-bib-body%5C%22%20style%3D%5C%22line-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%5C%22%3E%5Cn%20%20%3Cdiv%20class%3D%5C%22csl-entry%5C%22%3EWang%2C%20William%20Yang.%20%26%23x201C%3B%26%23x2018%3BLiar%2C%20Liar%20Pants%20on%20Fire%26%23x2019%3B%3A%20A%20New%20Benchmark%20Dataset%20for%20Fake%20News%20Detection.%26%23x201D%3B%20%3Ci%3EArXiv%3A1705.00648%20%5BCs%5D%3C%5C%2Fi%3E%2C%202017.%20%3Ca%20class%3D%27zp-ItemURL%27%20href%3D%27http%3A%5C%2F%5C%2Farxiv.org%5C%2Fabs%5C%2F1705.00648%27%3Ehttp%3A%5C%2F%5C%2Farxiv.org%5C%2Fabs%5C%2F1705.00648%3C%5C%2Fa%3E.%20%3Ca%20title%3D%27Cite%20in%20RIS%20Format%27%20class%3D%27zp-CiteRIS%27%20href%3D%27https%3A%5C%2F%5C%2Fwe1s.ucsb.edu%5C%2Fwp-content%5C%2Fplugins%5C%2Fzotpress%5C%2Flib%5C%2Frequest%5C%2Frequest.cite.php%3Fapi_user_id%3D2133649%26amp%3Bitem_key%3DP9D2I8BZ%27%3ECite%3C%5C%2Fa%3E%20%3C%5C%2Fdiv%3E%5Cn%3C%5C%2Fdiv%3E%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22%5C%22Liar%2C%20Liar%20Pants%20on%20Fire%5C%22%3A%20A%20New%20Benchmark%20Dataset%20for%20Fake%20News%20Detection%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22William%20Yang%22%2C%22lastName%22%3A%22Wang%22%7D%5D%2C%22abstractNote%22%3A%22Automatic%20fake%20news%20detection%20is%20a%20challenging%20problem%20in%20deception%20detection%2C%20and%20it%20has%20tremendous%20real-world%20political%20and%20social%20impacts.%20However%2C%20statistical%20approaches%20to%20combating%20fake%20news%20has%20been%20dramatically%20limited%20by%20the%20lack%20of%20labeled%20benchmark%20datasets.%20In%20this%20paper%2C%20we%20present%20liar%3A%20a%20new%2C%20publicly%20available%20dataset%20for%20fake%20news%20detection.%20We%20collected%20a%20decade-long%2C%2012.8K%20manually%20labeled%20short%20statements%20in%20various%20contexts%20from%20PolitiFact.com%2C%20which%20provides%20detailed%20analysis%20report%20and
AI Forensics. “Home Page,” 2023. https://ai-forensics.github.io/.
Dickson, Ben. “A New Technique Called ‘Concept Whitening’ Promises to Provide Neural Network Interpretability.” VentureBeat (blog), 2021. https://venturebeat.com/2021/01/12/a-new-technique-called-concept-whitening-promises-to-provide-neural-network-interpretability/.
Heaven, Will Douglas. “AI Is Wrestling with a Replication Crisis.” MIT Technology Review, 2020. https://www.technologyreview.com/2020/11/12/1011944/artificial-intelligence-replication-crisis-science-big-tech-google-deepmind-facebook-openai/.
Dickson, Ben. “The Advantages of Self-Explainable AI over Interpretable AI.” The Next Web, 2020. https://thenextweb.com/neural/2020/06/19/the-advantages-of-self-explainable-ai-over-interpretable-ai/.
Rogers, Anna, Olga Kovaleva, and Anna Rumshisky. “A Primer in BERTology: What We Know about How BERT Works.” arXiv:2002.12327 [cs], 2020. http://arxiv.org/abs/2002.12327.
Rudin, Cynthia. “Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead.” Nature Machine Intelligence 1, no. 5 (2019): 206–15. https://doi.org/10.1038/s42256-019-0048-x.
Lim, Brian Y., Qian Yang, Ashraf Abdul, and Danding Wang. “Why These Explanations? Selecting Intelligibility Types for Explanation Goals.” In IUI Workshops 2019. Los Angeles: ACM, 2019. https://www.semanticscholar.org/paper/A-Study-on-Interaction-in-Human-in-the-Loop-Machine-Yang-Kandogan/03a4544caed21760df30f0e4f417bbe361c29c9e.
Murdoch, W. James, Chandan Singh, Karl Kumbier, Reza Abbasi-Asl, and Bin Yu. “Interpretable Machine Learning: Definitions, Methods, and Applications.” arXiv:1901.04592 [cs, stat], 2019. http://arxiv.org/abs/1901.04592.
Sawhney, Ravi. “Human in the Loop: Why We Will Be Needed to Complement Artificial Intelligence.” LSE Business Review (blog), 2018. https://blogs.lse.ac.uk/businessreview/2018/10/24/human-in-the-loop-why-we-will-be-needed-to-complement-artificial-intelligence/.
Hind, Michael, Dennis Wei, Murray Campbell, Noel C. F. Codella, Amit Dhurandhar, Aleksandra Mojsilović, Karthikeyan Natesan Ramamurthy, and Kush R. Varshney. “TED: Teaching AI to Explain Its Decisions.” arXiv:1811.04896 [cs], 2018. http://arxiv.org/abs/1811.04896.
Alvarez-Melis, David, and Tommi Jaakkola. “Towards Robust Interpretability with Self-Explaining Neural Networks.” In Advances in Neural Information Processing Systems 31, edited by S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, 7775–84. Curran Associates, Inc., 2018. http://papers.nips.cc/paper/8003-towards-robust-interpretability-with-self-explaining-neural-networks.pdf.
Gall, Richard. “Machine Learning Explainability vs Interpretability: Two Concepts That Could Help Restore Trust in AI.” KDnuggets, 2018. https://www.kdnuggets.com/2018/12/machine-learning-explainability-interpretability-ai.html.
Gilpin, Leilani H., David Bau, Ben Z. Yuan, Ayesha Bajwa, Michael Specter, and Lalana Kagal. “Explaining Explanations: An Overview of Interpretability of Machine Learning.” arXiv:1806.00069 [cs, stat], 2018. http://arxiv.org/abs/1806.00069.
Samek, Wojciech, Thomas Wiegand, and Klaus-Robert Müller. “Explainable Artificial Intelligence.” International Telecommunication Union Journal, no. 1 (2017): 1–10. https://www.itu.int/en/journal/001/Pages/05.aspx.
Ruchansky, Natali, Sungyong Seo, and Yan Liu. “CSI: A Hybrid Deep Model for Fake News Detection.” In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, 797–806. CIKM ’17. Singapore: Association for Computing Machinery, 2017. https://doi.org/10.1145/3132847.3132877.
Wang, William Yang. “‘Liar, Liar Pants on Fire’: A New Benchmark Dataset for Fake News Detection.” arXiv:1705.00648 [cs], 2017. http://arxiv.org/abs/1705.00648.
Tickle, A.B., R. Andrews, M. Golea, and J. Diederich. “The Truth Will Come to Light: Directions and Challenges in Extracting the Knowledge Embedded within Trained Artificial Neural Networks.” IEEE Transactions on Neural Networks 9, no. 6 (1998): 1057–68. https://doi.org/10.1109/72.728352.