Rethinking LLM Human Simulation: When a Graph is What You Need

Large language models (LLMs) are increasingly used to simulate humans, with applications ranging from survey prediction to decision-making. However, are LLMs strictly necessary, or can smaller, domain-grounded models suffice?
Joseph Suh

I'm a third-year Ph.D. student at BAIR, UC Berkeley, advised by Serina Chang and John Canny. Most of my recent research treats language models as models of human behavior, spanning opinions, actions, preferences, and temporal changes.
We identify a large class of simulation problems in which individuals make choices among discrete options, where a graph neural network (GNN) can match or surpass strong LLM baselines despite being three orders of magnitude smaller.
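To make the discrete-choice framing concrete, here is a minimal sketch of scoring options with message passing on a person-option graph. This is an illustrative toy in NumPy, not the paper's actual architecture; the bipartite graph, mean aggregation, and all weights are assumptions for exposition.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (hypothetical, not the paper's model): a bipartite graph
# linking 4 individuals to 3 discrete options, each with feature vectors.
n_people, n_options, d = 4, 3, 8
people = rng.normal(size=(n_people, d))    # individual features
options = rng.normal(size=(n_options, d))  # option features
# Adjacency: person i is connected to option j if eligible to choose it.
adj = np.ones((n_people, n_options))

# One round of mean-aggregation message passing (GraphSAGE-style sketch):
# each person embedding combines its own features with the mean of its
# neighboring option features, through randomly initialized weights.
W_self = rng.normal(size=(d, d))
W_nbr = rng.normal(size=(d, d))
nbr_mean = (adj @ options) / adj.sum(axis=1, keepdims=True)
people_h = np.tanh(people @ W_self + nbr_mean @ W_nbr)

# Score each option by dot product, then softmax over the option set,
# yielding one categorical choice distribution per individual.
logits = people_h @ options.T
probs = np.exp(logits - logits.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)

predicted_choice = probs.argmax(axis=1)  # one discrete choice per person
```

In a trained model the weights would be learned from observed choices, but the structure is the same: graph message passing produces person embeddings, and a softmax over the eligible options turns them into a choice distribution, with far fewer parameters than an LLM.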