I'm a second-year PhD student in the Computer Science Department at Stanford University. My advisors are Tatsunori Hashimoto and Tengyu Ma. My work is supported by the NSF GRFP and Quad Fellowship.
My long-term goal is to build frontier AI systems that safely and legibly advance scientific knowledge and our collective well-being.
My current focus is on harnessing the internal knowledge of large models to make them more reliable and interpretable.
An alignment objective for training language models to express calibrated verbal statements of confidence about their long-form natural language generations.
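One generic way to reward calibrated confidence statements is with a proper scoring rule such as the Brier score, which is maximized in expectation by reporting one's true probability of being correct. This is a minimal illustration of that principle only, not the actual training objective:

```python
def brier_reward(stated_confidence, correct):
    """Negative Brier score: a proper scoring rule, maximized in
    expectation by stating one's true probability of being correct."""
    return -(stated_confidence - float(correct)) ** 2

def expected_reward(p_stated, p_true=0.7):
    """Expected reward for stating p_stated on a claim that is
    actually correct with probability p_true."""
    return (p_true * brier_reward(p_stated, True)
            + (1 - p_true) * brier_reward(p_stated, False))

# A model that is right 70% of the time earns more expected reward by
# stating 0.7 than by overclaiming certainty with 1.0:
print(expected_reward(0.7) > expected_reward(1.0))
```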
Three industry-scale tasks for evaluating robustness and uncertainty quality under distribution shift, including the largest vehicle motion prediction dataset to date.
A novel deep learning architecture that takes the entire dataset as input and learns to reason about relationships between datapoints using self-attention.
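The core idea of attending between datapoints can be sketched with a single attention head applied across the dataset axis rather than within a sequence. The weight matrices and single-head form below are illustrative assumptions, not the actual architecture:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def datapoint_self_attention(X, Wq, Wk, Wv):
    """One attention head over the dataset axis: each row (datapoint)
    attends to every other row, so the representation of one datapoint
    can depend on the rest of the dataset."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # (n, n) datapoint-to-datapoint
    A = softmax(scores, axis=-1)             # each row is a distribution
    return A @ V

rng = np.random.default_rng(0)
n, d = 8, 4                                  # 8 datapoints, 4 features
X = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = datapoint_self_attention(X, Wq, Wk, Wv)
print(out.shape)  # one updated representation per datapoint
```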
An optimization framework for distributed deep learning that jointly considers memory usage and computation time when searching for a parallelization strategy.
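The joint memory/time trade-off can be illustrated with a toy search over data- and tensor-parallel degrees that minimizes an estimated step time subject to a per-device memory budget. All cost and memory formulas here are invented stand-ins, not the framework's actual cost model:

```python
def step_time(dp, tp, params=1e9, tokens=4096):
    """Made-up step-time estimate: compute splits across devices,
    but each parallelism dimension adds communication overhead."""
    compute = tokens * params / (dp * tp)
    comm = 1e6 * (dp - 1) + 5e6 * (tp - 1)
    return compute + comm

def memory_per_device(dp, tp, params=1e9):
    """Made-up memory estimate: weights and optimizer state
    (18 bytes/param) sharded only by the tensor-parallel degree."""
    return params * 18 / tp

def search(n_devices=8, mem_budget=4e9):
    """Exhaustively try every (dp, tp) factorization of n_devices,
    discard configurations that exceed the memory budget, and return
    (est_time, dp, tp) for the fastest feasible one."""
    best = None
    for dp in range(1, n_devices + 1):
        if n_devices % dp:
            continue
        tp = n_devices // dp
        if memory_per_device(dp, tp) > mem_budget:
            continue  # infeasible: would not fit on a device
        t = step_time(dp, tp)
        if best is None or t < best[0]:
            best = (t, dp, tp)
    return best

print(search())
```

Real strategy spaces also include pipeline stages, activation recomputation, and sharding choices, which is what makes the joint search nontrivial.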