Using the deep learning middleware RaNNC, we are currently building even larger, higher-quality language models, and we intend to put such large language models to use in question-answering technologies such as WISDOM X.

OH Jong-Hoon
Research Manager, Data-driven Intelligent System Research Center, Universal Communication Research Institute
Ph.D. (Engineering)
Natural language processing, question answering, language models, information retrieval
[Awards] 2015: Excellence Award (Advanced Technology Category), DOCOMO Mobile Science Awards, Mobile Communication Fund; 2014: Twitter Data Grants recipient

KLOETZER Julien
Senior Researcher, Data-driven Intelligent System Research Center, Universal Communication Research Institute
Ph.D. (Informatics)
Natural language processing, question answering, spoken dialogue systems
[Awards] 2015: Excellence Award (Advanced Technology Category), DOCOMO Mobile Science Awards, Mobile Communication Fund; 2014: Twitter Data Grants recipient

Journal of the National Institute of Information and Communications Technology, Vol.68 No.2 (2022) — 3 Social Knowledge Communication Technology