A paper, Adaptive and Explainable Deployment of Navigation Skills via Hierarchical Deep Reinforcement Learning, written by Kyowoon Lee, Seongun Kim and Jaesik Choi, is accepted at ICRA-2023.
Prof. Choi presented the work at the IPAM workshop, Explainable AI for the Sciences: Towards Novel Insights, held at UCLA.
Prof. Choi’s article, South Korea’s Response to Surging AI Use in the US and China, is published in Global Asia, an international relations magazine published by the East Asia Foundation.
Two papers are accepted at NeurIPS-2022:
Anh Tong, Thanh Nguyen-Tang, Toan Tran and Jaesik Choi, Learning White Noises in Neural Stochastic Differential Equations
Giyoung Jeon, Haedong Jeong and Jaesik Choi, Distilled Gradient Aggregation: Purify Features for Input Attribution in the…
We have open positions for short-term and long-term AI researchers and SW engineers. If you are interested, please send your CV to jaesik.choi@kaist.ac.kr.
A collaborative work on explainable artificial intelligence received recognition from Samsung.
Haedong Jeong successfully defended his PhD thesis, Example-based Methods to Explain the Internal Generative Mechanism of Deep Generative Neural Networks. Congratulations, Dr. Jeong!
Our paper, Can We Find Neurons that Cause Unrealistic Images in Deep Generative Networks?, written by Hwanil Choi, Wonjoon Chang and Jaesik Choi, is accepted at IJCAI-2022.
The KAIST Explainable Artificial Intelligence Center is selected by the Ministry of Science and ICT to host a new XAI project, Development of Plug and Play Explainable Artificial Intelligence Platform (13 billion KRW over five years), for the next 57 months.
Our paper, An Unsupervised Way to Understand Artifact Generating Internal Units in Generative Neural Networks, written by Haedong Jeong, Jiyeon Han and Jaesik Choi, is accepted at AAAI-2022.