Portfolio item number 1
Short description of portfolio item number 1
Portfolio item number 2
Short description of portfolio item number 2
Published in ICML 2024, 2024
This paper studies the vulnerability of GNN explanations.
Recommended citation: Jiate Li, Meng Pang, Yun Dong, Jinyuan Jia, Binghui Wang. (2024). "Graph Neural Network Explanations are Fragile." ICML 2024.
Download Paper
Published in AAAI 2025, 2024
This paper proposes a practical and effective black-box evasion attack on link prediction in dynamic graphs (LPDG).
Recommended citation: Jiate Li, Meng Pang, Binghui Wang. (2025). "Practicable Black-box Evasion Attacks on Link Prediction in Dynamic Graphs--A Graph Sequential Embedding Method." AAAI 2025.
Download Paper
Published in USENIX Security 2025, 2025
This paper proposes an effective, certifiably robust GNN method against arbitrary perturbations.
Recommended citation: Jiate Li, Binghui Wang. (2025). "AGNNCert: Defending Graph Neural Networks against Arbitrary Perturbations with Deterministic Certification." USENIX Security 2025.
Download Paper
Published in ICLR 2025, 2025
This paper proposes a provably robust framework for GNN explainers.
Recommended citation: Jiate Li, Meng Pang, Yun Dong, Jinyuan Jia, Binghui Wang. (2025). "Provably Robust Explainable Graph Neural Networks against Graph Perturbation Attacks." ICLR 2025.
Download Paper
Published in CVPR 2025, 2025
This paper extends AGNNCert to defend against graph poisoning attacks.
Recommended citation: Jiate Li, Meng Pang, Yun Dong, Binghui Wang. (2025). "Deterministic Certification of Graph Neural Networks against Graph Poisoning Attacks with Arbitrary Perturbations." CVPR 2025.
Download Paper