Hello, I'm Shenzhe (Cho) Zhu 🤔

I am a researcher interested in Trustworthy AI, LLM safety, LLM interpretability, and multimodal LLMs. Currently, I am a third-year Computer Science student at the University of Toronto. It is a pleasure to collaborate with the PRADA Lab at King Abdullah University of Science and Technology (KAUST).

email: cho.zhu at mail.utoronto.ca


News 📢

Publications/Pre-prints

Fraud-R1 : A Multi-Round Benchmark for Assessing the Robustness of LLM Against Augmented Fraud and Phishing Inducements

arXiv, 2024

Fraud-R1 benchmarks LLMs' fraud resistance with 8,564 cases across five fraud types, using multi-round evaluation. Testing 15 LLMs in two settings reveals key weaknesses, especially in role-play and fake job scams, as well as a Chinese-English gap in fraud detection.

AutoTrust: Benchmarking Trustworthiness in Large Vision Language Models for Autonomous Driving

arXiv, 2024

AutoTrust is a comprehensive benchmark for evaluating the trustworthiness of DriveVLMs in autonomous driving. Using a large visual question-answering dataset and testing six diverse VLMs, it uncovers vulnerabilities in safety, robustness, privacy, and fairness.

Exploring the Personality Traits of LLMs through Latent Features Steering

NeurIPS 2024 LanGame Workshop

Large language models (LLMs) display human-like personality traits, but how these traits form is unclear. We explore how long-term factors and short-term pressures shape these traits, and how they impact model safety.

A short Survey: Exploring knowledge graph-based neural-symbolic system from application perspective

Shenzhe Zhu, Shengxiang Sun
arXiv, 2024

This survey paper explores KG-based neural-symbolic integration, covering approaches that enhance neural reasoning, improve symbolic accuracy, and combine both paradigms.

Experience

  • Texas A&M University
Research Intern, advised by Prof. Zhengzhong Tu
    July 2024 - Current
  • PRADA LAB - King Abdullah University of Science and Technology
    Research Intern, advised by Prof. Di Wang
    June 2024 - Current
  • Social AI Lab - University of Toronto
    Research Assistant, advised by Prof. William Cunningham
    May 2024 - Current
  • Urban Data Research Center - University of Toronto
    NLP Analyst
    May 2024 - Aug 2024
  • University of Toronto Scarborough
    Data Analyst Intern
    Jan 2024 - May 2024

Education

  • University of Toronto
    B.S. in Computer Science.
    Fall 2022 - Current