Hakaze Cho

@Beijing Inst. Tech. 2023

2nd-year Ph.D. Student @ Graduate School of Information Science, Japan Advanced Institute of Science and Technology
Research Assistant & Mentor @ RebelsNLU, PI: Assoc. Prof. Naoya Inoue

Alias: Yufeng Zhao; both names derive from the Chinese characters “趙 羽風”
Born: Beijing, 1999

E-mail: yfzhao [at] jaist.ac.jp
Phone: +81-70-8591-1495
Links: Twitter    GitHub    Google Scholar    ORCID    Blog   
Physical Address: Laboratory I-52, Information Science Building I, 1-1 Asahidai, Nomi, Ishikawa, Japan

I graduated from Beijing Institute of Technology, a top-ranking university in China, with a Master’s degree in Software Engineering in 2023 and a Bachelor’s degree in Chemistry in 2021. I am now pursuing a Ph.D. at JAIST, with an expected early graduation in March 2026. My research explores the internal mechanisms of artificial neural networks, particularly Transformer-based language models, during both training and inference, using mathematical and representation-learning methods, and leverages this deeper understanding to improve their performance robustly. Since 2023, I have published over 20 papers in this area, some of which have appeared at top-tier international conferences such as ICLR and NAACL.

I am actively seeking productive research collaborations in the areas mentioned above. If you are interested in working together, please do not hesitate to contact me. I welcome collaborations with both experts and motivated beginners; being a novice is not a drawback if you are eager and quick to learn. I am also open to exploring collaborations in other areas.

Research Interests

Keywords: Representation Learning, Mechanistic Interpretability, In-context Learning

Publications

International Conferences

  1. Revisiting In-context Learning Inference Circuit in Large Language Models
    Hakaze Cho, Mariko Kato, Yoshihiro Sakai, Naoya Inoue
    The Thirteenth International Conference on Learning Representations (ICLR), 2025. [h5=304, IF=48.9]
    [PDF] [Code] [Poster]
  2. Token-based Decision Criteria Are Suboptimal in In-context Learning
    Hakaze Cho, Yoshihiro Sakai, Mariko Kato, Kenshiro Tanaka, Akira Ishii, Naoya Inoue
In Proceedings of the 2025 Annual Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics (NAACL, main). 2025. [h5=132, IF=16.5]
    [PDF] [Code]
  3. Understanding Token Probability Encoding in Output Embeddings
    Hakaze Cho, Yoshihiro Sakai, Kenshiro Tanaka, Mariko Kato, Naoya Inoue
In Proceedings of the 2025 International Conference on Computational Linguistics (COLING). 2025. [h5=65, IF=7.7]
    [PDF] [Poster]
  4. Find-the-Common: A Benchmark for Explaining Visual Patterns from Images
    Yuting Shi, Naoya Inoue, Houjing Wei, Yufeng Zhao, Tao Jin
    In Proceedings of the 2024 Conference on Language Resources and Evaluation (LREC). 2024. [h5=59, IF≈5]
    [PDF]
  5. Methods to Enhance BERT in Aspect-Based Sentiment Classification
    Yufeng Zhao, Evelyn Soerjodjojo, et al.
In Proceedings of the IEEE Euro-Asia Conference on Frontiers of Computer Science and Information Technology. 2022. Outstanding Oral Presentation Award
    [PDF]

Pre-prints

  1. Measuring Intrinsic Dimension of Token Embeddings
    Takuya Kataiwa, Hakaze Cho, Tetsushi Ohki
    Pre-print. 2025.
    [PDF]
  2. Affinity and Diversity: A Unified Metric for Demonstration Selection via Internal Representations
    Mariko Kato, Hakaze Cho, Yoshihiro Sakai, Naoya Inoue
    Pre-print. 2025.
    [PDF]
  3. StaICC: Standardized Evaluation for Classification Task in In-context Learning
    Hakaze Cho, Naoya Inoue.
    Pre-print. 2025.
    [PDF] [Code] [Package]
  4. NoisyICL: A Little Noise in Model Parameters Calibrates In-context Learning
    Yufeng Zhao, Yoshihiro Sakai, Naoya Inoue.
    Pre-print. 2024.
    [PDF] [Code]
  5. SkIn: Skimming-Intensive Long-Text Classification Using BERT for Medical Corpus
    Yufeng Zhao, et al.
    Pre-print. 2022.
    [PDF]

Domestic Conferences / Miscellaneous
(† = Japan-domestic secondary publication of an international conference paper; Default: Non-refereed, ▲ = Refereed)

  1. Measuring the Intrinsic Dimension of Token Embeddings
    片岩拓也, 趙羽風, 大木哲史
    The 39th Annual Conference of the Japanese Society for Artificial Intelligence (JSAI). 2025.
  2. Analysis of Internal Representations of Knowledge Accompanied by Linguistic Expressions Indicating Knownness
    田中健史朗, 坂井吉弘, 趙羽風, 井之上直也, 佐藤魁, 高橋良允, Benjamin Heinzerling, 乾健太郎
    The 39th Annual Conference of the Japanese Society for Artificial Intelligence (JSAI). 2025.
  3. Internal Representations of Knownness Judgments about Knowledge in Language Models
    佐藤魁, 高橋良允, Benjamin Heinzerling, 田中健史朗, 趙羽風, 坂井吉弘, 井之上直也, 乾健太郎
    The 39th Annual Conference of the Japanese Society for Artificial Intelligence (JSAI). 2025.
  4. The Inference Circuit of In-context Learning in Large Language Models
    趙羽風, 加藤万理子, 坂井吉弘, 井之上直也
    The 31st Annual Meeting of the Association for Natural Language Processing (NLP). 2025. (Oral) Outstanding Paper Award
    [PDF] [Slides]
  5. Beyond the Induction Circuit: A Mechanistic Prototype for Out-of-domain In-context Learning
    趙羽風, 井之上直也
    The 31st Annual Meeting of the Association for Natural Language Processing (NLP). 2025.
    [PDF] [Poster]
  6. Measuring the Intrinsic Dimension of Embedding Representations
    片岩拓也, 趙羽風, 大木哲史
    The 31st Annual Meeting of the Association for Natural Language Processing (NLP). 2025.
    [PDF]
  7. Proposing Affinity and Diversity of Demonstrations in In-context Learning
    加藤万理子, 趙羽風, 坂井吉弘, 井之上直也
    The 31st Annual Meeting of the Association for Natural Language Processing (NLP). 2025.
    [PDF]
  8. StaICC: A Standard Benchmark for Classification Tasks in In-context Learning
    趙羽風, 坂井吉弘, 加藤万理子, 井之上直也
    The 19th YANS Symposium of the Association for Natural Language Processing. 2024. (Poster Only)
    [Poster]
  9. Image Feature Vectors Are Informative Tokens in Language Models with Frozen Weights
    加藤万理子, 趙羽風, 閻真竺, 石钰婷, 井之上直也
    The 19th YANS Symposium of the Association for Natural Language Processing. 2024. (Poster Only)
  10. The Decision Boundaries Used by Token-based Calibration Methods in In-context Learning Are Suboptimal
    趙羽風, 坂井吉弘, 加藤万理子, 田中健史朗, 石井晶, 井之上直也
    The 260th Meeting of the IPSJ Special Interest Group on Natural Language Processing (SIG-NL). 2024. (Oral) Young Researcher Encouragement Award
    [PDF] [Slides]
  11. NoisyICL: A Little Noise in Model Parameters Can Calibrate In-context Learning
    趙羽風, 坂井吉弘, 井之上直也
    The 30th Annual Meeting of the Association for Natural Language Processing (NLP). 2024. (Oral)
    [PDF] [Slides]
  12. Can LLMs Learn Formats in In-context Learning?
    坂井吉弘, 趙羽風, 井之上直也
    The 30th Annual Meeting of the Association for Natural Language Processing (NLP). 2024. Sponsor Award
    [PDF]
  13. Find-the-Common: Benchmarking Inductive Reasoning Ability on Vision-Language Models
    Yuting Shi, Houjing Wei, Jin Tao, Yufeng Zhao, Naoya Inoue
    The 30th Annual Meeting of the Association for Natural Language Processing (NLP). 2024.
    [PDF]

Thesis

  1. Fine-tuning with Randomly Initialized Downstream Network: Finding a Stable Convex-loss Region in Parameter Space
    Yufeng Zhao
    Master’s Thesis - Rank A @ Beijing Institute of Technology. 2023.
  2. Synthesis and Self-Assembly of Amphiphilic Aggregation-Enhanced Emission Compounds
    Yufeng Zhao
    Bachelor’s Thesis @ Beijing Institute of Technology. 2021.

Resume

Awards



Copyright © 2025 Hakaze Cho / Yufeng Zhao. All rights reserved. Icon generated by Stable Diffusion.
Updated on 2025-04-02 15:03:17 +0900.