Zhaopeng Tu

Principal Researcher
Tencent AI Lab
tuzhaopeng@gmail.com

Bio

Zhaopeng Tu is a Principal Researcher at Tencent AI Lab. His research focuses on deep learning for natural language processing (NLP), with current work on neural machine translation (NMT) and large language models (LLMs). He has published over 100 papers in leading NLP/AI journals and conferences such as ICML, ICLR, ACL, EMNLP, and TACL. He has served as an Associate Editor of Neurocomputing, and as an Area Chair or Senior PC Member for ACL, EMNLP, NAACL, AAAI, and IJCAI.

Publications

Google Scholar Citations

(* denotes corresponding author.)

    Preprint

  1. Wenxiang Jiao, Wenxuan Wang, Jen-tse Huang, Xing Wang, and Zhaopeng Tu. Is ChatGPT A Good Translator? Yes With GPT-4 As The Engine. Preprint.
  2. Zhiwei He, Xing Wang, Wenxiang Jiao, Zhuosheng Zhang, Rui Wang, Shuming Shi, and Zhaopeng Tu. Improving Machine Translation with Human Feedback: An Exploration of Quality Estimation as a Reward Model. Preprint.
  3. Jianhui Pang, Fanghua Ye, Longyue Wang, Dian Yu, Derek F. Wong, Shuming Shi, and Zhaopeng Tu. Salute the Classic: Revisiting Challenges of Machine Translation in the Age of Large Language Models. Preprint.
  4. Wenxuan Wang, Zhaopeng Tu, Chang Chen, Youliang Yuan, Jen-tse Huang, Wenxiang Jiao, and Michael R Lyu. All Languages Matter: On the Multilingual Safety of Large Language Models. Preprint.
  5. Wenxuan Wang, Wenxiang Jiao, Jingyuan Huang, Ruyi Dai, Jen-tse Huang, Zhaopeng Tu*, and Michael R Lyu. Not All Countries Celebrate Thanksgiving: On the Cultural Dominance in Large Language Models. Preprint.
  6. Wenxuan Wang, Juluan Shi, Zhaopeng Tu, Youliang Yuan, Jen-tse Huang, Wenxiang Jiao, and Michael R. Lyu. The Earth is Flat? Unveiling Factual Errors in Large Language Models. Preprint.
  7. Fanghua Ye, Mingming Yang, Jianhui Pang, Longyue Wang, Derek F. Wong, Emine Yilmaz, Shuming Shi, and Zhaopeng Tu. Benchmarking LLMs via Uncertainty Quantification. Preprint.
  8. Tian Liang, Zhiwei He, Wenxiang Jiao, Xing Wang, Yan Wang, Rui Wang, Yujiu Yang, Zhaopeng Tu, and Shuming Shi. Encouraging Divergent Thinking in Large Language Models through Multi-Agent Debate. Preprint.
  9. Tian Liang, Zhiwei He, Jen-tse Huang, Wenxuan Wang, Wenxiang Jiao, Rui Wang, Yujiu Yang, Zhaopeng Tu, Shuming Shi, and Xing Wang. Leveraging Word Guessing Games to Assess the Intelligence of Large Language Models. Preprint.
  10. Bingshuai Liu, Longyue Wang, Chenyang Lyu, Yong Zhang, Jinsong Su, Shuming Shi, and Zhaopeng Tu*. On the Cultural Gap in Text-to-Image Generation. Preprint.
  11. Chenyang Lyu, Minghao Wu, Longyue Wang, Xinting Huang, Bingshuai Liu, Zefeng Du, Shuming Shi, and Zhaopeng Tu. Macaw-LLM: Multi-Modal Language Modeling with Image, Audio, Video, and Text Integration. Preprint.
  12. Zhanyu Wang, Longyue Wang, Zhen Zhao, Minghao Wu, Chenyang Lyu, Huayang Li, Deng Cai, Luping Zhou, Shuming Shi, and Zhaopeng Tu. GPT4Video: A Unified Multimodal Large Language Model for Instruction-Followed Understanding and Safety-Aware Generation. Preprint.
    2024

  14. Youliang Yuan, Wenxiang Jiao, Wenxuan Wang, Jen-tse Huang, Pinjia He, Shuming Shi, and Zhaopeng Tu. GPT-4 is too Smart to be Safe: Stealthy Chat with LLMs via Cipher. ICLR 2024.
  15. Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho LAM, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, and Michael Lyu. On the Humanity of Conversational AI: Evaluating the Psychological Portrayal of LLMs. ICLR 2024 (Oral, 1.2%).
  16. Zhiwei He, Tian Liang, Wenxiang Jiao, Zhuosheng Zhang, Yujiu Yang, Rui Wang, Zhaopeng Tu*, Shuming Shi, and Xing Wang. Exploring Human-Like Translation Strategy with Large Language Models. TACL 2024.
  17. Cunxiao Du, Hao Zhou, Zhaopeng Tu, and Jing Jiang. Revisiting the Markov Property for Machine Translation. EACL 2024 (Short, Findings).
    2023

  19. Longyue Wang, Siyou Liu, Mingzhou Xu, Linfeng Song, Shuming Shi, and Zhaopeng Tu. A Survey on Zero Pronoun Translation. ACL 2023.
  20. Zhihao Wang, Longyue Wang, Jinsong Su, Junfeng Yao, and Zhaopeng Tu. Revisiting Non-Autoregressive Translation at Scale. ACL 2023 (Findings).
  21. Longyue Wang, Chenyang Lyu, Tianbo Ji, Zhirui Zhang, Dian Yu, Shuming Shi, and Zhaopeng Tu. Document-Level Machine Translation with Large Language Models. EMNLP 2023.
  22. Wenxiang Jiao, Jen-tse Huang, Wenxuan Wang, Zhiwei He, Tian Liang, Xing Wang, Shuming Shi, and Zhaopeng Tu. ParroT: Translating During Chat Using Large Language Models. EMNLP 2023 (Findings).
  23. Jinhui Ye, Wenxiang Jiao, Xing Wang, Zhaopeng Tu, and Hui Xiong. Cross-modality Data Augmentation for End-to-End Sign Language Translation. EMNLP 2023 (Findings).
  24. Kangjie Zheng, Longyue Wang, Zhihao Wang, Binqi Chen, Ming Zhang, and Zhaopeng Tu*. Towards A Unified Training for Levenshtein Transformer. ICASSP 2023.
  25. Jinhui Ye, Wenxiang Jiao, Xing Wang, and Zhaopeng Tu. Scaling Back-Translation with Domain Text Generation for Sign Language Gloss Translation. EACL 2023.
  26. Ante Wang, Qi Liu, Haitao Mi, Longyue Wang, Zhaopeng Tu, Jinsong Su, Dong Yu, and Linfeng Song. Search-Engine-augmented Dialogue Response Generation with Cheaply Supervised Query Production. Journal of Artificial Intelligence, 2023.
  27. Mingzhou Xu, Longyue Wang, Siyou Liu, Derek F Wong, Shuming Shi, and Zhaopeng Tu. A Benchmark Dataset and Evaluation Methodology for Chinese Zero Pronoun Translation. Journal of Language Resources and Evaluation, 2023.
    2022

  29. Liang Ding, Longyue Wang, Shuming Shi, Dacheng Tao, and Zhaopeng Tu*. Redistributing Low-Frequency Words: Making the Most of Monolingual Data in Non-Autoregressive Translation. ACL 2022.
  30. Zhiwei He, Xing Wang, Rui Wang, Shuming Shi, and Zhaopeng Tu. Bridging the Data Gap between Training and Inference for Unsupervised Neural Machine Translation. ACL 2022.
  31. Wenxuan Wang, Wenxiang Jiao, Yongchang Hao, Xing Wang, Shuming Shi, Zhaopeng Tu*, and Michael Lyu. Understanding and Improving Sequence-to-Sequence Pretraining for Neural Machine Translation. ACL 2022.
  32. Mingzhou Xu, Longyue Wang, Derek F. Wong, Hongye Liu, Linfeng Song, Lidia S. Chao, Shuming Shi, and Zhaopeng Tu. GuoFeng: A Benchmark for Zero Pronoun Recovery and Translation. EMNLP 2022.
  33. Shuo Wang, Peng Li, Zhixing Tan, Zhaopeng Tu, Maosong Sun, and Yang Liu. A Template-based Method for Constrained Neural Machine Translation. EMNLP 2022.
  34. Yifan Hou, Wenxiang Jiao, Meizhen Liu, Carl Allen, Zhaopeng Tu, and Mrinmaya Sachan. Adapters for Enhanced Modeling of Multilingual Knowledge and Text. EMNLP 2022 (Findings). [Best Paper of MRL Workshop]
  35. Cunxiao Du, Zhaopeng Tu*, Longyue Wang, and Jing Jiang. ngram-OAXE: Phrase-Based Order-Agnostic Cross Entropy for Non-Autoregressive Machine Translation. COLING 2022.
  36. Wenxiang Jiao, Xing Wang, Shilin He, Zhaopeng Tu, Irwin King, and Michael Lyu. Exploiting Inactive Examples for Natural Language Generation with Data Rejuvenation. Journal of TASLP, 2022.
  37. Xinwei Geng, Longyue Wang, Xing Wang, Mingtao Yang, Xiaocheng Feng, Bing Qin, and Zhaopeng Tu. Learning to Refine Source Representations for Neural Machine Translation. International Journal of Machine Learning and Cybernetics, 2022.
    2021

  39. Cunxiao Du, Zhaopeng Tu*, and Jing Jiang. Order-Agnostic Cross Entropy for Non-Autoregressive Machine Translation. ICML 2021 (Long Presentation, 3%).
  40. Liang Ding, Longyue Wang, Xuebo Liu, Derek F. Wong, Dacheng Tao, and Zhaopeng Tu. Rejuvenating Low-Frequency Words: Making the Most of Parallel Data in Non-Autoregressive Translation. ACL 2021.
  41. Wenxiang Jiao, Xing Wang, Zhaopeng Tu, Shuming Shi, Michael R. Lyu, and Irwin King. Self-Training Sampling with Monolingual Data Uncertainty for Neural Machine Translation. ACL 2021.
  42. Shuo Wang, Zhaopeng Tu, Zhixing Tan, Shuming Shi, Maosong Sun, and Yang Liu. On the Language Coverage Bias for Neural Machine Translation. ACL 2021 (Findings).
  43. Xuebo Liu, Longyue Wang, Derek F. Wong, Liang Ding, Lidia S. Chao, Shuming Shi, and Zhaopeng Tu. On the Copying Behaviors of Pre-Training for Neural Machine Translation. ACL 2021 (Findings).
  44. Liang Ding, Longyue Wang, Xuebo Liu, Derek F. Wong, Dacheng Tao, and Zhaopeng Tu. Progressive Multi-Granularity Training for Non-Autoregressive Translation. ACL 2021 (Findings, Short).
  45. Liang Ding, Longyue Wang, Xuebo Liu, Derek F. Wong, Dacheng Tao, and Zhaopeng Tu. Understanding and Improving Lexical Choice in Non-Autoregressive Translation. ICLR 2021.
  46. Xuebo Liu, Longyue Wang, Derek F. Wong, Liang Ding, Lidia S. Chao, and Zhaopeng Tu. Understanding and Improving Encoder Layer Fusion in Sequence-to-Sequence Learning. ICLR 2021.
  47. Jie Hao, Linfeng Song, Liwei Wang, Kun Xu, Zhaopeng Tu, and Dong Yu. RAST: Domain-Robust Dialogue Rewriting as Sequence Tagging. EMNLP 2021.
  48. Xuebo Liu, Longyue Wang, Derek F Wong, Liang Ding, Lidia S Chao, Shuming Shi, and Zhaopeng Tu. On the Complementarity between Pre-Training and Back-Translation for Neural Machine Translation. EMNLP 2021 (Findings, Short).
  49. Yongchang Hao, Shilin He, Wenxiang Jiao, Zhaopeng Tu, Michael Lyu, and Xing Wang. Multi-Task Learning with Shared Encoder for Non-Autoregressive Machine Translation. NAACL 2021 (Short).
  50. Jian Li, Xing Wang, Zhaopeng Tu*, and Michael R Lyu. On the Diversity of Multi-Head Attention. Journal of Neurocomputing, 2021.
  51. Baosong Yang, Longyue Wang, Derek F Wong, Shuming Shi, and Zhaopeng Tu*. Context-Aware Self-Attention Networks for Natural Language Processing. Journal of Neurocomputing, 2021.
  52. Xintong Li, Lemao Liu, Zhaopeng Tu, Guanlin Li, Shuming Shi, and Max Q-H Meng. Attending From Foresight: A Novel Attention Mechanism for Neural Machine Translation. Journal of TASLP, 2021.
  53. Shuo Wang, Zhaopeng Tu, Zhixing Tan, Wenxuan Wang, Maosong Sun, and Yang Liu. Language Models are Good Translators. arXiv 2021.
    2020

  55. Shuo Wang, Zhaopeng Tu, Shuming Shi, and Yang Liu. On the Inference Calibration of Neural Machine Translation. ACL 2020.
  56. Xinwei Geng, Longyue Wang, Xing Wang, Bing Qin, Ting Liu, and Zhaopeng Tu. How Does Selective Mechanism Improve Self-Attention Networks? ACL 2020.
  57. Wenxiang Jiao, Xing Wang, Shilin He, Irwin King, Michael Lyu, and Zhaopeng Tu. Data Rejuvenation: Exploiting Inactive Training Examples for Neural Machine Translation. EMNLP 2020.
  58. Yilin Yang, Longyue Wang, Shuming Shi, Prasad Tadepalli, Stefan Lee, and Zhaopeng Tu. On the Sub-Layer Functionalities of Transformer Decoder. EMNLP 2020 (Findings).
  59. Yong Wang, Longyue Wang, Victor O.K. Li, and Zhaopeng Tu. On the Sparsity of Neural Machine Translation Models. EMNLP 2020 (Short).
  60. Wenxuan Wang and Zhaopeng Tu*. Rethinking the Value of Transformer Components. COLING 2020.
  61. Deyu Zhou, Shuangzhi Wu, Qing Wang, Jun Xie, Zhaopeng Tu, and Mu Li. Emotion Classification by Jointly Learning to Lexiconize and Classify. COLING 2020.
  62. Qintong Li, Hongshen Chen, Zhaochun Ren, Pengjie Ren, Zhaopeng Tu, and Zhumin Chen. EmpDG: Multi-resolution Interactive Empathetic Dialogue Generation. COLING 2020.
  63. Liang Ding, Longyue Wang, Di Wu, Dacheng Tao, and Zhaopeng Tu. Context-Aware Cross-Attention for Non-Autoregressive Translation. COLING 2020 (Short).
  64. Jian Li, Xing Wang, Baosong Yang, Shuming Shi, Michael R. Lyu, and Zhaopeng Tu*. Neuron Interaction Based Representation Composition for Neural Machine Translation. AAAI 2020.
  65. Yong Wang, Longyue Wang, Shuming Shi, Victor O.K. Li, and Zhaopeng Tu. Go From the General to the Particular: Multi-Domain Translation with Domain Transformation Networks. AAAI 2020.
  66. Tianxiang Zhao, Lemao Liu, Guoping Huang, Zhaopeng Tu, Huayang Li, Yingling Liu, Guiquan Liu, and Shuming Shi. Balancing Quality and Human Involvement: An Effective Approach to Interactive Neural Machine Translation. AAAI 2020.
  67. Yongquan He, Zhihan Wang, Peng Zhang, Zhaopeng Tu, and Zhaochun Ren. VN Network: Embedding Newly Emerging Entities with Virtual Neighbors. CIKM 2020.
  68. Jinhuan Liu, Xuemeng Song, Zhaochun Ren, Liqiang Nie, Zhaopeng Tu, and Jun Ma. Auxiliary Template-Enhanced Generative Compatibility Modeling. IJCAI 2020.
  69. Chuan Meng, Pengjie Ren, Zhumin Chen, Weiwei Sun, Zhaochun Ren, Zhaopeng Tu, and Maarten de Rijke. DukeNet: A Dual Knowledge Interaction Network for Knowledge-Grounded Conversation. SIGIR 2020.
    2019

  71. Baosong Yang, Longyue Wang, Derek F. Wong, Lidia S. Chao, and Zhaopeng Tu*. Assessing the Ability of Self-Attention Networks to Learn Word Order. ACL 2019.
  72. Xing Wang, Zhaopeng Tu, Longyue Wang, and Shuming Shi. Exploiting Sentential Context for Neural Machine Translation. ACL 2019 (Short).
  73. Shilin He, Zhaopeng Tu*, Xing Wang, Longyue Wang, Michael R. Lyu, and Shuming Shi. Towards Understanding Neural Machine Translation with Word Importance. EMNLP 2019.
  74. Longyue Wang, Zhaopeng Tu, Xing Wang, and Shuming Shi. One Model to Learn Both: Zero Pronoun Prediction and Translation. EMNLP 2019.
  75. Jie Hao, Xing Wang, Shuming Shi, Jinfeng Zhang, and Zhaopeng Tu. Multi-Granularity Self-Attention for Neural Machine Translation. EMNLP 2019.
  76. Zaixiang Zheng, Shujian Huang, Zhaopeng Tu, Xinyu Dai, and Jiajun Chen. Dynamic Past and Future for Neural Machine Translation. EMNLP 2019.
  77. Deng Cai, Yan Wang, Wei Bi, Zhaopeng Tu, Xiaojiang Liu, and Shuming Shi. Retrieval-guided Dialogue Response Generation via a Matching-to-Generation Framework. EMNLP 2019.
  78. Xing Wang, Zhaopeng Tu, Longyue Wang, and Shuming Shi. Self-Attention Networks with Structural Position Encoding. EMNLP 2019 (Short).
  79. Jie Hao, Xing Wang, Shuming Shi, Jinfeng Zhang, and Zhaopeng Tu. Towards Better Modeling Hierarchical Structure for Self-Attention with Ordered Neurons. EMNLP 2019 (Short).
  80. Jian Li, Baosong Yang, Zi-Yi Dou, Xing Wang, Michael R. Lyu, and Zhaopeng Tu*. Information Aggregation for Multi-Head Attention with Routing-by-Agreement. NAACL 2019.
  81. Jie Hao, Xing Wang, Baosong Yang, Longyue Wang, Jinfeng Zhang, and Zhaopeng Tu*. Modeling Recurrence for Transformer. NAACL 2019.
  82. Deng Cai, Yan Wang, Wei Bi, Zhaopeng Tu, Xiaojiang Liu, Wai Lam, and Shuming Shi. Skeleton-to-Response: Dialogue Generation Guided by Retrieval Memory. NAACL 2019.
  83. Baosong Yang, Longyue Wang, Derek F. Wong, Lidia S. Chao, and Zhaopeng Tu*. Convolutional Self-Attention Networks. NAACL 2019 (Short).
  84. Xiang Kong, Zhaopeng Tu*, Shuming Shi, Eduard Hovy, and Tong Zhang. Neural Machine Translation with Adequacy-Oriented Learning. AAAI 2019.
  85. Zi-Yi Dou, Zhaopeng Tu*, Xing Wang, Longyue Wang, Shuming Shi, and Tong Zhang. Dynamic Layer Aggregation for Neural Machine Translation with Routing-by-Agreement. AAAI 2019.
  86. Baosong Yang, Jian Li, Derek F. Wong, Lidia S. Chao, Xing Wang, and Zhaopeng Tu*. Context-Aware Self-Attention Networks. AAAI 2019.
  87. Zi-Yi Dou, Xing Wang, Shuming Shi, and Zhaopeng Tu*. Exploiting Deep Representations for Natural Language Processing. Journal of Neurocomputing, 2019.
    2018

  89. Yong Cheng, Zhaopeng Tu, Fandong Meng, Junjie Zhai, and Yang Liu. Towards Robust Neural Machine Translation. ACL 2018.
  90. Zi-Yi Dou, Zhaopeng Tu*, Xing Wang, Shuming Shi, and Tong Zhang. Exploiting Deep Representations for Neural Machine Translation. EMNLP 2018.
  91. Baosong Yang, Zhaopeng Tu*, Derek F. Wong, Fandong Meng, Lidia S. Chao, and Tong Zhang. Modeling Localness for Self-Attention Networks. EMNLP 2018.
  92. Jian Li, Zhaopeng Tu*, Baosong Yang, Michael R. Lyu, and Tong Zhang. Multi-Head Attention with Disagreement Regularization. EMNLP 2018 (Short).
  93. Longyue Wang, Zhaopeng Tu*, Andy Way, and Qun Liu. Learning to Jointly Translate and Predict Dropped Pronouns with a Shared Reconstruction Mechanism. EMNLP 2018 (Short).
  94. Fandong Meng, Zhaopeng Tu, Yong Cheng, Haiyang Wu, Junjie Zhai, Yuekui Yang, and Di Wang. Neural Machine Translation with Key-Value Memory-Augmented Attention. IJCAI 2018.
  95. Longyue Wang, Zhaopeng Tu*, Shuming Shi, Tong Zhang, Yvette Graham, and Qun Liu. Translating Pro-Drop Languages with Reconstruction Models. AAAI 2018.
  96. Xintong Li, Lemao Liu, Zhaopeng Tu, Shuming Shi, and Max Meng. Target Foresight Based Attention for Neural Machine Translation. NAACL 2018.
  97. Zhaopeng Tu, Yang Liu, Shuming Shi, and Tong Zhang. Learning to Remember Translation History with a Continuous Cache. Transactions of the Association for Computational Linguistics (TACL), 2018.
  98. Zaixiang Zheng, Hao Zhou, Shujian Huang, Lili Mou, Xinyu Dai, Jiajun Chen, and Zhaopeng Tu. Modeling Past and Future for Neural Machine Translation. Transactions of the Association for Computational Linguistics (TACL), 2018.
  99. Xing Wang, Zhaopeng Tu, and Min Zhang. Incorporating Statistical Machine Translation Word Knowledge into Neural Machine Translation. IEEE/ACM Transactions on Audio, Speech and Language Processing (TASLP), 2018.
  100. Zhaopeng Tu, Yong Jiang, Xiaojiang Liu, Lei Shu, and Shuming Shi. Generative Stock Question Answering. arXiv:1804.07942. [data]
    2017

  102. Junhui Li, Deyi Xiong, Zhaopeng Tu, Muhua Zhu, and Guodong Zhou. Modeling Source Syntax for Neural Machine Translation. ACL 2017.
  103. Hao Zhou, Zhaopeng Tu, Shujian Huang, Xiaohua Liu, Hang Li, and Jiajun Chen. Chunk-Based Bi-Scale Decoder for Neural Machine Translation. ACL 2017 (Short).
  104. Xing Wang, Zhaopeng Tu, Deyi Xiong, and Min Zhang. Translating Phrases in Neural Machine Translation. EMNLP 2017.
  105. Longyue Wang, Zhaopeng Tu*, Andy Way, and Qun Liu. Exploiting Cross-Sentence Context for Neural Machine Translation. EMNLP 2017 (Short).
  106. Zhaopeng Tu, Yang Liu, Lifeng Shang, Xiaohua Liu, and Hang Li. Neural Machine Translation with Reconstruction. AAAI 2017. [code]
  107. Xing Wang, Zhengdong Lu, Zhaopeng Tu, Hang Li, Deyi Xiong, and Min Zhang. Neural Machine Translation Advised by Statistical Machine Translation. AAAI 2017.
  108. Zhaopeng Tu, Yang Liu, Zhengdong Lu, Xiaohua Liu, and Hang Li. Context Gates for Neural Machine Translation. Transactions of the Association for Computational Linguistics (TACL), 2017. [code]
    2016

  110. Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. Modeling Coverage for Neural Machine Translation. ACL 2016. [code]
  111. Longyue Wang, Zhaopeng Tu, Xiaojun Zhang, Hang Li, Andy Way, and Qun Liu. A Novel Approach for Dropped Pronoun Translation. NAACL 2016.
  112. Longyue Wang, Xiaojun Zhang, Zhaopeng Tu, Hang Li, and Qun Liu. Dropped Pronoun Generation for Dialogue Machine Translation. ICASSP 2016.
  113. Baishakhi Ray, Vincent Hellendoorn, Saheel Godhane, Zhaopeng Tu, Alberto Bacchelli, and Premkumar Devanbu. On the "Naturalness" of Buggy Code. ICSE 2016.
  114. Anh Nguyen, Zhaopeng Tu, and Tien Nguyen. Do Contexts Help in Phrase-based, Statistical Source Code Migration? ICSME 2016.
  115. Longyue Wang, Xiaojun Zhang, Zhaopeng Tu, Andy Way, and Qun Liu. The Automatic Construction of Discourse Corpus for Dialogue Translation. LREC 2016.
  116. Longyue Wang, Zhaopeng Tu, Xiaojun Zhang, Siyou Liu, Hang Li, Andy Way, and Qun Liu. A Novel and Robust Approach for Pro-Drop Language Translation. Journal of Machine Translation, 2016.
    2010 – 2015

  118. Zhaopeng Tu, Baotian Hu, Zhengdong Lu, and Hang Li. Context-Dependent Translation Selection Using Convolutional Neural Network. arXiv 2015. (A short version was accepted at ACL 2015 (Short).)
  119. Christine Franks, Zhaopeng Tu, Prem Devanbu, and Vincent Hellendoorn. CACHECA: A Cache Language Model Based Code Suggestion Tool. ICSE 2015 - Demonstration Track. (Demo of the "localness" work)
  120. Fandong Meng, Zhengdong Lu, Zhaopeng Tu, Hang Li, and Qun Liu. A Deep Memory-based Architecture for Sequence-to-Sequence Learning. arXiv 2015.
  121. Zhaopeng Tu, Zhendong Su, and Prem Devanbu. On the Localness of Software. FSE 2014. [code] [data]
  122. Qun Liu, Zhaopeng Tu, and Shouxun Lin. A Novel Graph-based Compact Representation of Word Alignment. ACL 2013 (Short).
  123. Zhaopeng Tu, Yifan He, Jennifer Foster, Josef van Genabith, Qun Liu, and Shouxun Lin. Identifying High-Impact Sub-Structures for Convolution Kernels in Document-level Sentiment Classification. ACL 2012 (Short).
  124. Junhui Li, Zhaopeng Tu, Guodong Zhou, and Josef van Genabith. Head-Driven Hierarchical Phrase-based Translation. ACL 2012 (Short).
  125. Zhaopeng Tu, Yang Liu, Yifan He, Josef van Genabith, Qun Liu, and Shouxun Lin. Combining Multiple Alignments to Improve Machine Translation. COLING 2012. [data]
  126. Junhui Li, Zhaopeng Tu, Guodong Zhou, and Josef van Genabith. Using Syntactic Head Information in Hierarchical Phrase-based Translation. WMT 2012.
  127. Zhaopeng Tu, Wenbin Jiang, Qun Liu, and Shouxun Lin. Dependency Forest for Sentiment Analysis. NLP&CC 2012.
  128. Zhaopeng Tu, Yang Liu, Qun Liu, and Shouxun Lin. Extracting Hierarchical Rules from a Weighted Alignment Matrix. IJCNLP 2011.
  129. Zhaopeng Tu, Yang Liu, Young-Sook Hwang, Qun Liu, and Shouxun Lin. Dependency Forest for Statistical Machine Translation. COLING 2010.