Chih-Kuan Yeh

Research Scientist at Google Brain

Biography

I am a research scientist at Google Brain. During my PhD at CMU, my research focused on understanding and interpreting machine learning models through more objective explanations, grounded in functional evaluations or theoretical properties. More recently, I have been interested in building better large-scale models with less (but more efficiently used) data, and in improving models through the understanding gained from model explanations. Feel free to contact me if you are interested in working with me.

Interests
  • Artificial Intelligence
  • Machine Learning
  • Explainable AI
  • Algorithmic Game Theory
Education
  • PhD in Machine Learning, 2017-2022 (expected)

    Carnegie Mellon University

  • BSc in Electrical Engineering, 2016

    National Taiwan University

Experience

PhD Student
Carnegie Mellon University, Machine Learning Department
Sep 2017 – Present, Pennsylvania
Focused on understanding and explaining machine learning methods, working with Professor Pradeep Ravikumar.

Research Intern
Google
Jun 2019 – Oct 2019, California
Worked on formalizing the "completeness" concept for concept-based explanations with Been Kim, Sercan Arik, Chun-Liang Li, and Tomas Pfister. Paper published at NeurIPS 2020.

Research Intern
Google
Jun 2021 – Nov 2021, California
Worked on scalable data influence methods for NLP models with Ankur Taly, Frederick Liu, and Mukund Sundararajan. Paper submitted.

Recent Publications

(2022). First is Better than Last for Language Data Influence. To appear in NeurIPS 2022.

PDF Cite

(2022). Threading the needle for off-manifold and on-manifold value functions for Shapley Value Explanations. In AISTATS 2022.

PDF Cite Code

(2022). Faith-Shap: The Faithful Shapley Interaction Index. In submission.

PDF Cite

(2021). Evaluations and Methods for Explanation through Robustness Analysis. In ICLR 2021.

PDF Cite

(2021). Human-Centered Concept Explanations for Neural Networks. Book chapter in Neuro-Symbolic Artificial Intelligence: The State of the Art.

PDF Cite

(2021). Objective criteria for explanations of machine learning models. In Applied AI Letters, 2021.

PDF Cite

(2020). On Completeness-Aware Concept-Based Explanations in Deep Neural Networks. In NeurIPS 2020.

PDF Cite Code

(2019). On the (In)fidelity and Sensitivity of Explanations. In NeurIPS 2019.

PDF Cite Code External packages

(2019). Unsupervised Speech Recognition via Segmental Empirical Output Distribution Matching. In ICLR 2019.

PDF Cite

(2018). Representer Point Selection for Explaining Deep Neural Networks. In NeurIPS 2018.

PDF Cite Code Blog

(2018). Automatic Bridge Bidding Using Deep Reinforcement Learning. In IEEE Transactions on Games, 2018 (shorter version in ECAI 2016).

PDF Cite

(2018). Multi-Label Zero-Shot Learning With Structured Knowledge Graphs. In CVPR 2018.

PDF Cite

(2017). Learning Deep Latent Space for Multi-Label Classification. In AAAI 2017.

PDF Cite Code

Contact