CJK width token filter

Normalizes width differences in CJK (Chinese, Japanese, and Korean) characters as follows:

  • Folds full-width ASCII character variants into the equivalent basic Latin characters
  • Folds half-width Katakana character variants into the equivalent Kana characters

This filter is included in Elasticsearch's built-in CJK language analyzer. It uses Lucene's CJKWidthFilter.

Note

This token filter can be viewed as a subset of NFKC/NFKD Unicode normalization. See the analysis-icu plugin for full normalization support.
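
For fuller normalization, a custom analyzer can use that plugin's icu_normalizer token filter instead. As a minimal sketch, assuming the analysis-icu plugin is installed, the filter can be defined inline in an analyze request:

GET /_analyze
{
  "tokenizer" : "standard",
  "filter" : [ { "type": "icu_normalizer", "name": "nfkc" } ],
  "text" : "ｼｰｻｲﾄﾞﾗｲﾅｰ"
}

NFKC normalization folds the same width variants as cjk_width, along with many other compatibility characters.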

The following analyze API request uses the filter to fold the half-width text ｼｰｻｲﾄﾞﾗｲﾅｰ into its full-width form:

GET /_analyze
{
  "tokenizer" : "standard",
  "filter" : ["cjk_width"],
  "text" : "ｼｰｻｲﾄﾞﾗｲﾅｰ"
}

The filter produces the following token:

シーサイドライナー 
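The first folding rule works the same way for full-width ASCII variants. For example, this request (the sample text here is just an illustration) folds the full-width characters ＡＢＣ１２３ into basic Latin:

GET /_analyze
{
  "tokenizer" : "standard",
  "filter" : ["cjk_width"],
  "text" : "ＡＢＣ１２３"
}

The filter produces the token ABC123.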

The following create index API request uses the CJK width token filter to configure a new custom analyzer.

PUT /cjk_width_example
{
  "settings": {
    "analysis": {
      "analyzer": {
        "standard_cjk_width": {
          "tokenizer": "standard",
          "filter": [ "cjk_width" ]
        }
      }
    }
  }
}
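
To verify the analyzer, you can run the analyze API against the new index, reusing the example text from above:

GET /cjk_width_example/_analyze
{
  "analyzer" : "standard_cjk_width",
  "text" : "ｼｰｻｲﾄﾞﾗｲﾅｰ"
}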