Woojeong Jin
Jointly learning explainable rules for recommendation with knowledge graph
W Ma, M Zhang, Y Cao, W Jin, C Wang, Y Liu, S Ma, X Ren
The World Wide Web Conference, 1210-1221, 2019
Recurrent event network: Autoregressive structure inference over temporal knowledge graphs
W Jin, M Qu, X Jin, X Ren
arXiv preprint arXiv:1904.05530, 2019
Personalized ranking in signed networks using signed random walk with restart
J Jung, W Jin, L Sael, U Kang
2016 IEEE 16th International Conference on Data Mining (ICDM), 973-978, 2016
Collaborative policy learning for open knowledge graph reasoning
C Fu, T Chen, M Qu, W Jin, X Ren
arXiv preprint arXiv:1909.00230, 2019
Fast and accurate random walk with restart on dynamic graphs with guarantees
M Yoon, W Jin, U Kang
Proceedings of the 2018 World Wide Web Conference, 409-418, 2018
Supervised and extended restart in random walks for ranking and link prediction in networks
W Jin, J Jung, U Kang
PLoS ONE 14 (3), e0213857, 2019
Random walk-based ranking in signed social networks: model and algorithms
J Jung, W Jin, U Kang
Knowledge and Information Systems 62 (2), 571-610, 2020
A good prompt is worth millions of parameters? Low-resource prompt-based learning for vision-language models
W Jin, Y Cheng, Y Shen, W Chen, X Ren
arXiv preprint arXiv:2110.08484, 2021
ForecastQA: A question answering challenge for event forecasting with temporal text data
W Jin, R Khanna, S Kim, DH Lee, F Morstatter, A Galstyan, X Ren
arXiv preprint arXiv:2005.00792, 2020
Temporal attribute prediction via joint modeling of multi-relational structure evolution
S Garg, N Sharma, W Jin, X Ren
arXiv preprint arXiv:2003.03919, 2020
MSD: Saliency-aware Knowledge Distillation for Multimodal Understanding
W Jin, M Sanjabi, S Nie, L Tan, X Ren, H Firooz
Findings of the Association for Computational Linguistics: EMNLP 2021, 3557-3569, 2021
Accurate relational reasoning in edge-labeled graphs by multi-labeled random walk with restart
J Jung, W Jin, H Park, U Kang
World Wide Web 24 (4), 1369-1393, 2021
Leveraging Visual Knowledge in Language Tasks: An Empirical Study on Intermediate Pre-training for Cross-modal Knowledge Transfer
W Jin, DH Lee, C Zhu, J Pujara, X Ren
arXiv preprint arXiv:2203.07519, 2022