Jiangrui Zheng
Ph.D. Student in Computer Science
Stevens Institute of Technology

Email:
jzheng36@stevens.edu

Affiliation:
Computer Science Department
Stevens Institute of Technology
Hoboken, NJ, USA

Links:
Google Scholar
LinkedIn
GitHub

About Me

I am a Computer Science Ph.D. student at Stevens Institute of Technology, advised by Prof. Xueqing (Susan) Liu, specializing in large language models for security and software engineering. My research focuses on building practical AI systems that improve vulnerability analysis, automate security workflows, and evaluate the reliability of state-of-the-art LLMs.

My work includes LLM-based agents for verifying security vulnerability reports, automated red-teaming pipelines for testing hate-speech defenses, and test-case generators for model management on the HuggingFace Hub. I have also worked on NER-driven automation for extracting software versions from vulnerability reports, as well as empirical studies of AI-assisted code review and patch retrieval.

I aim to build trustworthy intelligent systems that strengthen software security at scale. My research has been published at NAACL and IEEE BigData workshops, and I have contributed to projects on patch tracing, explainable retrieval, and analyzing risks in AI-generated code.

Publications

Workshop & Conference Papers

  • HateModerate: Testing Hate Speech Detectors against Content Moderation Policies
    Jiangrui Zheng, Xueqing Liu, Guanqun Yang, et al.
    Findings of the Association for Computational Linguistics: NAACL 2024, Mexico City, Mexico, June 2024.
  • From Reviewers' Lens: Understanding Bug Bounty Report Invalid Reasons with LLMs
    Jiangrui Zheng, Yingming Zhou, Ali Abdullah Ahmad, Hanqing Yao, Xueqing Liu.
    Workshop on Secure and Safe AI Agents for Big Data Infrastructure (S2AI @ IEEE BigData 2025).

Template adapted from Plain Academic