My research interests include Natural Language Processing, Graph Neural Networks, Data Mining, and Multimodal Processing. Currently, I am working with Mengyu Zhou on table understanding, especially numerical reasoning with pretrained models. With the ambition of contributing to the NLP community, I aim to build robust and efficient NLP systems that tackle issues of bias and unfairness in our society.
BSc in Data Science and Technology (DSCT), 2017
The Hong Kong University of Science and Technology
TensorFlow, PyTorch, pandas, spaCy, etc.
C++, Java, SQL, MATLAB, etc.
AWS, GCP
Supervised by senior researcher Mengyu Zhou
We will submit a paper to VLDB in November.
Under the supervision of Professor Raymond Wong, we propose HetTransformer, a novel Transformer-based model for fake news detection on social networks. It utilizes a structure-aware Transformer and temporal embeddings to capture news propagation patterns in social media. Experiments on three real-world datasets demonstrate that our model outperforms state-of-the-art baselines in fake news detection.
We will submit a paper to WWW in October.
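To illustrate the two ingredients mentioned above, here is a minimal NumPy sketch: sinusoidal embeddings of post timestamps are added to node features, and self-attention is masked to the edges of the propagation graph. All function names and details are hypothetical simplifications for illustration, not the actual HetTransformer implementation.

```python
import numpy as np

def temporal_embedding(timestamps, dim):
    """Sinusoidal embedding of post timestamps (a hypothetical, simplified
    stand-in for the temporal embedding described above)."""
    pos = np.asarray(timestamps, dtype=float)[:, None]   # (N, 1)
    i = np.arange(dim // 2)[None, :]                     # (1, dim/2)
    freq = 1.0 / (10000 ** (2 * i / dim))
    angles = pos * freq
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=1)  # (N, dim)

def structure_masked_attention(x, adj):
    """Single-head self-attention restricted to edges of the propagation
    graph -- one simple way to make attention 'structure-aware'."""
    d = x.shape[1]
    scores = x @ x.T / np.sqrt(d)
    scores = np.where(adj > 0, scores, -1e9)  # attend only along graph edges
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ x

# Toy propagation tree: a source post and two reposts.
feats = np.random.default_rng(0).normal(size=(3, 8))
times = [0.0, 2.5, 7.0]  # hours since the source post
adj = np.array([[1, 1, 1],
                [1, 1, 0],
                [1, 0, 1]])
h = structure_masked_attention(feats + temporal_embedding(times, 8), adj)
```

The mask keeps each post attending only to its neighbors in the propagation graph, so the attention pattern follows how the news actually spread rather than treating posts as a flat sequence.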
We aim to generate adversarial text examples that attack a pretrained BERT model in a black-box setting under a query-budget constraint (querying the target model far fewer times while achieving the same success rate and perturbation rate). We use all intermediate failed and successful queries to learn word-salience rankings both globally and locally. Responsibilities include:
This work has been submitted to EMNLP in May.
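The idea of learning word salience from query history can be sketched as follows: each query's words receive credit when the query succeeds and a penalty when it fails, accumulated as a running average per word. This is a hypothetical simplification of the global/local ranking scheme described above, not the submitted method.

```python
from collections import defaultdict

def update_salience(salience, counts, query_words, success):
    """Update a global word-salience score from one black-box attack query.
    Words in successful adversarial queries gain credit; words in failed
    queries lose it (simplified stand-in for the real ranking scheme)."""
    reward = 1.0 if success else -1.0
    for w in set(query_words):
        counts[w] += 1
        # incremental average of per-word rewards
        salience[w] += (reward - salience[w]) / counts[w]

salience, counts = defaultdict(float), defaultdict(int)
# (query words, whether the perturbed text fooled the target model)
history = [
    (["movie", "was", "terrible"], True),
    (["movie", "was", "fine"], False),
    (["plot", "was", "terrible"], True),
]
for words, ok in history:
    update_salience(salience, counts, words, ok)

# Words ranked from most to least promising to perturb next.
ranked = sorted(salience, key=salience.get, reverse=True)
```

Reusing every intermediate query this way is what keeps the attack within its budget: failures are not wasted but refine which words to perturb next.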