Phone: 010-62510042
Homepage: www.jin-qin.com
Email: qjin@ruc.edu.cn
1999.08 - 2007.01, Carnegie Mellon University, USA, Ph.D.
1991.09 - 1999.01, Tsinghua University, B.S. and M.S.
2013.01 - present, Renmin University of China, Professor
2012.04 - 2012.12, IBM China Research Lab, Researcher
2008.06 - 2012.03, Carnegie Mellon University, USA, Research Faculty
2007.02 - 2008.05, Carnegie Mellon University, USA, Postdoctoral Researcher
Intelligent multimedia computing, human-computer interaction
Multimedia technologies
Spoken language processing
National Natural Science Foundation of China (NSFC) (2021.01-2024.12)
National Natural Science Foundation of China (NSFC) (2018.01-2021.12)
Beijing Natural Science Foundation (2019.01-2021.12)
Beijing Municipal Reserve Project Fund (2018-2020)
National Key R&D Program of China (2016.07-2020.07)
Beijing Natural Science Foundation (2014.01-2016.12)
Renmin University of China Research Fund (2014.01-2016.12)
Ministry of Education Scientific Research Startup Fund for Returned Overseas Scholars (2015.01-2016.12)
- Shizhe Chen, Qin Jin, Peng Wang, Qi Wu. Say as you wish: Fine-grained control of image caption generation with abstract scene graphs. CVPR, 2020.
- Shizhe Chen, Yida Zhao, Qin Jin, Qi Wu. Fine-grained video-text retrieval with hierarchical graph reasoning. CVPR, 2020.
- Jia Chen, Qin Jin. Better captioning with sequence-level exploration. CVPR, 2020.
- Sipeng Zheng, Shizhe Chen, Qin Jin. Skeleton-based interactive graph network for human object interaction detection. ICME, 2020.
- Shizhe Chen, Qin Jin, Alexander Hauptmann. Unsupervised bilingual lexicon induction from mono-lingual multimodal data. AAAI, 2019.
- Jingjun Liang, Shizhe Chen, Jinming Zhao, Qin Jin, Haibo Liu, Li Lu. Cross-culture multimodal emotion recognition with adversarial learning. ICASSP, 2019.
- Shizhe Chen, Yuqing Song, Yida Zhao, Qin Jin, Zhaoyang Zeng, Bei Liu, Jianlong Fu, Alexander Hauptmann. ActivityNet 2019 Task 3: Exploring contexts for dense captioning events in video. CVPR 2019, ActivityNet Large Scale Activity Recognition Challenge.
- Shizhe Chen, Qin Jin, Jianlong Fu. From words to sentences: A progressive learning approach for zero-resource machine translation with visual pivots. IJCAI, 2019.
- Shizhe Chen, Qin Jin, Jia Chen, Alexander G. Hauptmann. Generating video descriptions with latent topic guidance. IEEE Transactions on Multimedia, vol. 21, no. 9, September 2019.
- Jinming Zhao, Shizhe Chen, Jingjun Liang, Qin Jin. Speech emotion recognition in dyadic dialogues. Interspeech, 2019.
- Yuqing Song, Shizhe Chen, Qin Jin. Unpaired cross-lingual image caption generation with self-supervised rewards. ACM Multimedia, 2019.
- Sipeng Zheng, Shizhe Chen, Qin Jin. Visual relation detection with multi-level attention. ACM Multimedia, 2019.
- Shizhe Chen, Bei Liu, Jianlong Fu, Ruihua Song, Qin Jin, Pingping Lin, Xiaoyu Qi, Chunting Wang, Jin Zhou. Neural storyboard artist: Visualizing stories with coherent image sequences. ACM Multimedia, 2019.
- Sipeng Zheng, Xiangyu Chen, Shizhe Chen, Qin Jin. Relation understanding in videos. ACM Multimedia, Grand Challenge: Relation Understanding in Videos, 2019.
- Jinming Zhao, Ruichen Li, Jingjun Liang, Qin Jin. Adversarial domain adaption for multi-cultural dimensional emotion recognition in dyadic interactions. AVEC, 2019.
- Shizhe Chen, Yida Zhao, Yuqing Song, Qin Jin, Qi Wu. Integrating temporal and spatial attentions for VATEX video captioning challenge 2019. ICCV, VATEX Video Captioning Challenge 2019.
- Weiying Wang, Yongcheng Wang, Shizhe Chen, Qin Jin. YouMakeup: A large-scale domain-specific multimodal dataset for fine-grained semantic comprehension. EMNLP, 2019.
- Yuqing Song, Yida Zhao, Shizhe Chen, Qin Jin. RUC_AIM3 at TRECVID 2019: Video to text. NIST TRECVID, 2019.
- Jingjun Liang, Shizhe Chen, Qin Jin. Semi-supervised multimodal emotion recognition with improved Wasserstein GANs. APSIPA ASC, 2019.
- Shizhe Chen, Yuqing Song, Yida Zhao, Qin Jin, Alexander Hauptmann. RUC CMU: System report for dense captioning events in videos. CVPR ActivityNet Large Scale Activity Recognition Challenge, 2018.
- Shizhe Chen, Jia Chen, Qin Jin, Alexander Hauptmann. Class-aware self-attention for audio event recognition. ACM International Conference on Multimedia Retrieval (ICMR), 2018. (Best Paper Runner-up)
- Jinming Zhao, Shizhe Chen, Qin Jin. Multimodal dimensional and continuous emotion recognition in dyadic video interactions. Pacific-Rim Conference on Multimedia (PCM), 2018.
- Xiaozhu Lin, Qin Jin, Shizhe Chen, Yuqing Song, Yida Zhao. iMakeup: Makeup instructional video dataset for fine-grained dense video captioning. Pacific-Rim Conference on Multimedia (PCM), 2018.
- Jinming Zhao, Ruichen Li, Shizhe Chen, Qin Jin. Multi-modal multi-cultural dimensional continuous emotion recognition in dyadic interactions. ACM Multimedia Audio-Visual Emotion Challenge (AVEC) Workshop, 2018.
- Shizhe Chen, Jia Chen, Qin Jin, Alexander Hauptmann. Video captioning with guidance of multimodal latent topics. ACM Multimedia, 2017.
- Qin Jin, Shizhe Chen, Jia Chen, Alexander Hauptmann. Knowing yourself: Improving video caption via in-depth recap. ACM Multimedia, 2017.
- Shizhe Chen, Qin Jin, Jinming Zhao, Shuai Wang. Multimodal multi-task learning for dimensional and continuous emotion recognition. ACM Multimedia Audio-Visual Emotion Challenge (AVEC) Workshop, 2017.
- Shizhe Chen, Jia Chen, Qin Jin. Generating video descriptions with topic guidance. International Conference on Multimedia Retrieval (ICMR), 2017.
- Jia Chen, Qin Jin, Shiwan Zhao, Shenghua Bao, Li Zhang, Zhong Su, Yong Yu. Boosting recommendation in unexplored categories by user price preference. ACM Transactions on Information Systems (TOIS), Volume 35, Issue 2, October 2016.
- Qin Jin, Jia Chen, Shizhe Chen, Yifan Xiong. Describing videos using multi-modal fusion. ACM Multimedia, 2016.
- Jia Chen, Qin Jin, Yifan Xiong. Semantic image profiling for historic events: Linking images to phrases. ACM Multimedia, 2016.
- Shizhe Chen, Qin Jin. Multi-modal conditional attention fusion for dimensional emotion prediction. ACM Multimedia, 2016.
- Yifan Xiong, Jia Chen, Qin Jin, Chao Zhang. History rhyme: Searching historic events by multimedia knowledge. ACM Multimedia, 2016.
- Xirong Li, Yujia Huo, Qin Jin, Jieping Xu. Detecting violence in video using subclasses. ACM Multimedia, October 2016.
- Shizhe Chen, Xinrui Li, Qin Jin, Shilei Zhang, Yong Qin. Video emotion recognition in the wild based on fusion of multimodal features. International Conference on Multimodal Interaction (ICMI), 2016.
- Guankun Mu, Haibing Cao, Qin Jin. Violent scene detection using convolutional neural networks and deep audio features. Chinese Conference on Pattern Recognition (CCPR), 2016.
- Shizhe Chen, Yujie Dian, Xinrui Li, Xiaozhu Lin, Qin Jin(*), Haibo Liu, Li Lu. Emotion recognition in videos via fusing multimodal features. Chinese Conference on Pattern Recognition (CCPR), 2016.
- Xirong Li, Qin Jin. Improving image captioning by concept-based sentence reranking. Pacific-Rim Conference on Multimedia (PCM), September 2016.
- Qin Jin, Junwei Liang, Xiaozhu Lin. Generating natural video descriptions via multimodal processing. Interspeech, 2016.
- Qin Jin, Junwei Liang. Video description generation using audio and visual cues. International Conference on Multimedia Retrieval (ICMR), 2016.
- Jia Chen, Qin Jin, Shenghua Bao, Junfeng Ye, Zhong Su, Shimin Chen, Yong Yu. Exploitation and exploration balanced hierarchical summary for landmark images. IEEE Transactions on Multimedia (TMM), Volume 17, Issue 10, 2015.
- Jia Chen, Qin Jin, Yong Yu, Alexander G. Hauptmann. Image profiling for history events on the fly. ACM Multimedia, 2015.
- Shizhe Chen, Qin Jin. Multi-modal dimensional emotion recognition using recurrent neural networks. ACM Multimedia Audio/Visual Emotion Challenge and Workshop, 2015.
- Shimin Chen, Qin Jin. Persistent B+-trees in non-volatile main memory. VLDB, Hawaii, USA, 2015.
- Qin Jin, Xirong Li, Haibing Cao, Yujia Huo, Shuai Liao, Gang Yang, Jieping Xu. RUCMM at MediaEval 2015 Affective Impact of Movies Task: Fusion of audio and visual cues. MediaEval Workshop, Wurzen, Germany, 2015.
- Xirong Li, Qin Jin, Shuai Liao, Junwei Liang, Xixi He, Yujia Huo, Weiyu Lan, Bin Xiao, Yanxiong Lu, Jieping Xu. RUC-Tencent at ImageCLEF 2015: Concept detection, localization and sentence generation. CLEF Working Notes, 2015.
- Jia Chen, Min Li, Qin Jin, Yongzhe Zhang, Shenghua Bao, Zhong Su, Yong Yu. Lead curve detection in drawings with complex cross-points. Neurocomputing, 2015.
- Qin Jin, Junwei Liang, Xixi He, Gang Yang, Jieping Xu, Xirong Li. Semantic concept annotation for user generated videos using soundtracks. International Conference on Multimedia Retrieval (ICMR), 2015.
- Qin Jin, Chengxin Li, Shizhe Chen, Huimin Wu. Speech emotion recognition with acoustic and lexical features. In Proc. of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brisbane, Australia, 2015.
- Junwei Liang, Qin Jin, Xixi He, Gang Yang, Jieping Xu, Xirong Li. Detecting semantic concepts in consumer videos using audio. In Proc. of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brisbane, Australia, 2015.
- Jia Chen, Qin Jin, Shiwan Zhao, Shenghua Bao, Li Zhang, Zhong Su, Yong Yu. Does product recommendation meet its Waterloo in unexplored categories? No, price comes to help. SIGIR, 2014.
- Junwei Liang, Qin Jin, Xixi He, Xirong Li, Gang Yang, Jieping Xu. Semantic concept annotation of consumer videos at frame-level using audio. Pacific-Rim Conference on Multimedia (PCM), 2014.
- Shizhe Chen, Qin Jin, Xirong Li, Gang Yang, Jieping Xu. Speech emotion classification using acoustic features. International Symposium on Chinese Spoken Language Processing (ISCSLP), 2014.
- Jia Chen, Qin Jin, Weipeng Zhang, Shenghua Bao, Zhong Su, Yong Yu. Tell me what happened here in history. ACM International Conference on Multimedia, 2013.
Associate Editor, TOMM
Area Chair of ACM Multimedia 2018, 2020
Special Session Chair of APSIPA ASC 2016, 2019
Member of CCF, ACM, IEEE, ISCA
CVPR 2020 ActivityNet Large Scale Activity Recognition Challenge (ANet), Dense Captioning Events in Videos task (winner)
The End-of-End-to-End: A Video Understanding Pentathlon @ CVPR 2020 (rank 2nd)
Outstanding Method Award in VATEX Video Captioning Challenge @ ICCV 2019
2019 Zhijiang Cup Global Artificial Intelligence Competition, Video Content Description Generation track (1st place)
CVPR 2019 ActivityNet Large Scale Activity Recognition Challenge (ANet), Temporal Captioning task (winner)
2019 TRECVID Video to Text Description Grand Challenge (rank 1st)
2019 Audio-Visual Emotion Challenge @ ACM Multimedia 2019 (winner)
CVPR 2018 ActivityNet Large Scale Activity Recognition Challenge (ANet), Temporal Captioning task (winner)
2018 TRECVID Video to Text Description Grand Challenge (rank 1st)
2018 Audio-Visual Emotion Challenge @ ACM Multimedia 2018 (winner)
2017 TRECVID Video to Text Description Grand Challenge (rank 1st)
Best Grand Challenge Paper Award at ACM Multimedia 2017
2017 ACM Multimedia Video to Language Grand Challenge (rank 1st)
2017 Audio-Visual Emotion Challenge @ ACM Multimedia 2017 (winner)
2016 IBM SUR Award
2016 ACM Multimedia Video to Language Grand Challenge (rank 1st)
2016 Audio-Visual Emotion Challenge (AVEC) (rank 2nd)
2016 MediaEval Emotional Impact of Movies challenge (rank 1st)
2016 Chinese Multimodal Emotion Challenge (MEC) (rank 2nd)
2016 NLPCC Chinese Weibo Stance Detection (rank 1st)
2015 ImageCLEF Image Sentence Generation evaluation (rank 1st)
2015 Outstanding Bachelor Thesis Advisor, Renmin University of China