Hazel Kim

DPhil Student, Oxford

{firstname}.kimh [AT] gmail

I am a DPhil student in computer science at the University of Oxford.

I conduct research in Natural Language Processing and Machine Learning at Oxford Applied and Theoretical Machine Learning Group (OATML). I am grateful to be advised by Yarin Gal and Sebastian Farquhar from OATML Group. I am also fortunate to be advised by Janet Pierrehumbert from OxNLP and Yonatan Belinkov from Technion as a student of the European Laboratory for Learning and Intelligent Systems (ELLIS Society).

Language Acquisition from Limited Sources
Deep learning shapes the world with the data it is given. It works well with abundant, high-quality data but produces biased perspectives when data is scarce or noisy. However, quality data is often expensive to collect or private to access. This challenge has inspired me to study how to acquire language proficiency from limited data.
Causal Inference, not Correlation
Deep learning understands the world through correlations between input data and output predictions. Because of this heavy reliance on patterns in the observed data, it lacks the ability to reason about cause-and-effect relations that do not explicitly appear in the text. High correlations do not always uncover the causality of events. Causal inference is therefore an important topic for controlling language models trained on biased data.
Interpretable and Controllable Language Models
Predictable behavior, grounded in solid interpretation and trustworthiness, would make models more transparent and applicable to the real world. Controllable language models would allow the learning procedure to de-bias models with minimal human supervision and lower computational cost.

My Wonderful Collaborators

I was fortunate to start researching with consistent support from Yosub Han. Seong Joon Oh gave me great insights into what to research and how to research it. I was delighted to work with Sangdoo Yun, whose advice was humble yet astute, and lucky to have inspiring discussions with Kangmin Yoo. I was happy to mentor the enthusiastic JB. Kim as he wrote his first paper, and I enjoyed working on my own first paper alongside Daecheol Woo's sincerity and positive mindset. I have thankfully met many great people while conducting research! I appreciate all of my collaborators for supporting me in so many different ways :)

Publications

ATHENA: Mathematical Reasoning with Thought Expansion PDF

JB. Kim, Hazel Kim, Joonghyuk Hahn, Yo-Sub Han

EMNLP 2023

ALP: Data Augmentation Using Lexicalized PCFGs for Few-Shot Text Classification PDF

Hazel Kim, Daecheol Woo, Seong Joon Oh, Jeong-Won Cha, Yo-Sub Han

AAAI 2022

LST: Lexicon-Guided Self-Training for Few-Shot Text Classification PDF

{Hazel Kim, Jaeman Son}*, Yo-Sub Han

arXiv


Resume

Acknowledgement

This website uses the website design and template by Martin Saveski