Ziqi Wang
MAVEN: A massive general domain event detection dataset
X Wang, Z Wang, X Han, W Jiang, R Han, Z Liu, J Li, P Li, Y Lin, J Zhou
EMNLP 2020, 2020
CLEVE: contrastive pre-training for event extraction
Z Wang*, X Wang*, X Han, Y Lin, L Hou, Z Liu, P Li, J Li, J Zhou
ACL-IJCNLP 2021, 2021
HMEAE: Hierarchical modular event argument extraction
X Wang*, Z Wang*, X Han, Z Liu, J Li, P Li, M Sun, J Zhou, X Ren
EMNLP-IJCNLP 2019, 2019
NERO: A neural rule grounding framework for label-efficient relation extraction
W Zhou, H Lin, BY Lin, Z Wang, J Du, L Neves, X Ren
The Web Conference 2020, 2020
Learning from explanations with neural execution tree
Z Wang*, Y Qin*, W Zhou, J Yan, Q Ye, L Neves, Z Liu, X Ren
ICLR 2020, 2019
RESIN-11: Schema-guided event prediction for 11 newsworthy scenarios
X Du, Z Zhang, S Li, P Yu, H Wang, T Lai, X Lin, Z Wang, I Liu, B Zhou, ...
NAACL 2022 (System Demonstrations), 2022
In-context learning of large language models explained as kernel regression
C Han, Z Wang, H Zhao, H Ji
arXiv preprint arXiv:2305.12766, 2023
RCoT: Detecting and rectifying factual inconsistency in reasoning by reversing chain-of-thought
T Xue, Z Wang, Z Wang, C Han, P Yu, H Ji
arXiv preprint arXiv:2305.11499, 2023
Recognizing object by components with human prior knowledge enhances adversarial robustness of deep neural networks
X Li*, Z Wang*, B Zhang, F Sun, X Hu
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023
FALCON: fast visual concept learning by integrating images, linguistic descriptions, and conceptual relations
L Mei, J Mao, Z Wang, C Gan, JB Tenenbaum
ICLR 2022, 2022
COVID-19 Claim Radar: A structured claim extraction and tracking system
M Li, RG Reddy, Z Wang, YS Chiang, T Lai, P Yu, Z Zhang, H Ji
ACL 2022 (System Demonstrations), 135-144, 2022
Gibbs Sampling from Human Feedback: A Provable KL-constrained Framework for RLHF
W Xiong, H Dong, C Ye, Z Wang, H Zhong, H Ji, N Jiang, T Zhang
arXiv preprint arXiv:2312.11456, 2024
SmartBook: AI-assisted situation report generation
RG Reddy, YR Fung, Q Zeng, M Li, Z Wang, P Sullivan, H Ji
arXiv preprint arXiv:2303.14337, 2023
Enabling Language Models to Implicitly Learn Self-Improvement
Z Wang, L Hou, T Lu, Y Wu, Y Li, H Yu, H Ji
ICLR 2024, 2024
Augmentation with projection: Towards an effective and efficient data augmentation paradigm for distillation
Z Wang, Y Wu, F Liu, D Liu, L Hou, H Yu, J Li, H Ji
ICLR 2023, 2022
Parameter-Efficient Tuning Helps Language Model Alignment
T Xue, Z Wang, H Ji
arXiv preprint arXiv:2310.00819, 2023
Understanding the effect of data augmentation on knowledge distillation
Z Wang, C Han, W Bao, H Ji
arXiv preprint arXiv:2305.12565, 2023