
Xiwen 析文

A tool to scan HTML for Chinese characters.

Overview

Use Xiwen to scan websites for Chinese characters — hanzi — and:

  • analyse the content by HSK grade
  • identify character variants
  • export character sets for further use

The analysis breaks the text down by HSK grade (see below), and character lists can be exported for any combination of those grades, or for less common hanzi beyond the HSK.

Data exports provide hanzi by HSK grade in traditional and simplified Chinese, their pinyin, count within the text, and character frequency.

Who this is for

Mandarin learners can use Xiwen to determine the expected difficulty of an article or book relative to their current reading level, and create character lists for further study.

Instructors can use it to assess the suitability of reading materials for their students, and produce vocabulary lists.

HSK

HSK — Hanyu Shuiping Kaoshi 汉语水平考试 — is a series of examinations designed to test Chinese language proficiency in simplified Chinese.

In its latest form the HSK consists of nine levels, and covers 3,000 simplified hanzi and 11,092 vocabulary items. The advanced levels — seven to nine — share 1,200 hanzi that are tested together.

To approximate a traditional hanzi version of the HSK, Xiwen maps the HSK hanzi to their traditional Chinese equivalents. In most cases this is a one-to-one conversion, but in some cases two or more traditional hanzi reflect distinct meanings of a single simplified character.

For example:

  • "发": ["發", "髮"]
  • "了": ["了", "瞭"]
  • "面": ["面", "麵"]

Or even:

  • "只": ["只", "衹", "隻"]
  • "台": ["台", "檯", "臺", "颱"]

A list of these "polymaps" — not all of which relate to hanzi in the HSK — can be found in the Wikipedia article Ambiguous character mappings.

This approach isn't perfect: obscure definitions implied by a distinct traditional hanzi may be far less frequent than the common conversion of a simplified hanzi.
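
These one-to-many mappings lend themselves to a simple lookup table. The sketch below is a minimal illustration, assuming a plain dict keyed by simplified hanzi; it is not Xiwen's internal data structure.

POLYMAP: dict[str, list[str]] = {
    "发": ["發", "髮"],
    "了": ["了", "瞭"],
    "面": ["面", "麵"],
    "只": ["只", "衹", "隻"],
    "台": ["台", "檯", "臺", "颱"],
}

def traditional_candidates(simplified: str) -> list[str]:
    # Hanzi absent from the polymap convert one-to-one
    return POLYMAP.get(simplified, [simplified])

print(traditional_candidates("发"))  # ['發', '髮']
print(traditional_candidates("好"))  # ['好']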

The table below lists the number of simplified hanzi per grade, and the number of mappings to traditional equivalents.

HSK Grade | Simp. Hanzi | Running Total | Trad. Hanzi Equivalents | Running Total
1         | 300         | 300           | 313                     | 313
2         | 300         | 600           | 314                     | 627
3         | 300         | 900           | 312                     | 939
4         | 300         | 1,200         | 316                     | 1,255
5         | 300         | 1,500         | 310                     | 1,565
6         | 300         | 1,800         | 310                     | 1,875
7-9       | 1,200       | 3,000         | 1,214                   | 3,089

Installation

GitHub repo

Clone the xiwen repo from GitHub for the full code, the files used to generate the character lists, and a test suite.

$ git clone git@github.com:essteer/xiwen

Change into the xiwen directory, create and activate a virtual environment, then install the dev dependencies. The examples below use Astral's uv; substitute pip or another package manager as needed.

macOS / Linux:

$ uv venv
$ source .venv/bin/activate
$ uv pip install -r requirements.txt

Windows:

$ uv venv
$ .venv\Scripts\activate
$ uv pip install -r requirements.txt

Operation

Run xiwen/interface.py from the src directory to launch the CLI program.
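
For example, from the repo root (assuming a Python 3 interpreter on the PATH; the path follows the description above):

$ cd src
$ python xiwen/interface.py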

resources/main.py makes use of datasets in resources/assets to pair simplified and traditional character sets with their pinyin, HSK grades, and character frequencies as identified in the MTSU dataset.
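
As a rough illustration of that pairing step, the sketch below joins the two source CSVs named under Sources and writes the result to Parquet. It assumes pandas with a Parquet engine installed; the column names, merge key, and output file name are placeholders, not the actual schema used by resources/main.py.

import pandas as pd

# Load the source datasets (file names per the Sources section)
chars = pd.read_csv("resources/assets/hsk30-chars.csv")
freqs = pd.read_csv("resources/assets/CharFreq-Modern.csv")

# Placeholder merge key: a shared simplified-hanzi column
merged = chars.merge(freqs, on="simplified", how="left")

# Placeholder output name: write the combined table for the app to load
merged.to_parquet("xiwen/assets/hsk_data.parquet", index=False)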

Xiwen relies on a Parquet file produced from those resources and located at xiwen/assets. This file can be replaced as needed — update the file name loaded in xiwen/app.py to make use of a custom file.
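
Any Parquet reader will do for inspecting a replacement file; for instance, with pandas (the file name below is a placeholder; check xiwen/app.py for the name actually loaded):

import pandas as pd

# Placeholder file name, not the actual asset
df = pd.read_parquet("xiwen/assets/custom_data.parquet")
print(df.head())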

xiwen/interface.py is the interactive component of the CLI version of Xiwen. It receives user input and calls functions from modules in xiwen/utils. Those modules form the program's ETL pipeline, which includes functions to:

  • break down text into individual hanzi (filter_text())
  • sort hanzi as HSK-level simplified or traditional hanzi, or outliers (partition_hanzi())
  • determine the overall character variant of the text as simplified or traditional, or a mix (identify_variant())
  • compute the grade-based and cumulative numbers of unique hanzi and total hanzi in the text (analyse())

Character sets can then be exported to .csv.
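
The sketch below walks through those stages in miniature. It is a self-contained illustration rather than Xiwen's implementation: the HSK sets are toy placeholders, and the real functions in xiwen/utils have their own signatures.

import re
from collections import Counter

# Toy stand-ins for the real HSK datasets (illustration only)
SIMP_HSK = {"发", "了", "面", "只", "台"}
TRAD_HSK = {"發", "髮", "了", "瞭", "麵", "隻", "臺"}

def filter_text(text: str) -> list[str]:
    # Keep only CJK unified ideographs, one hanzi per entry
    return re.findall(r"[\u4e00-\u9fff]", text)

def partition_hanzi(hanzi: list[str]) -> tuple[list[str], list[str], list[str]]:
    # Split into HSK simplified, HSK traditional, and outliers
    simp = [h for h in hanzi if h in SIMP_HSK]
    trad = [h for h in hanzi if h in TRAD_HSK]
    outliers = [h for h in hanzi if h not in SIMP_HSK | TRAD_HSK]
    return simp, trad, outliers

def identify_variant(simp: list[str], trad: list[str]) -> str:
    # Judge the text's overall character variant
    if len(simp) > len(trad):
        return "simplified"
    if len(trad) > len(simp):
        return "traditional"
    return "mixed"

text = "他發了一張照片"  # sample input
hanzi = filter_text(text)
simp, trad, outliers = partition_hanzi(hanzi)
print(identify_variant(simp, trad))     # traditional
print(Counter(hanzi).most_common())     # per-hanzi counts

Here a single traditional-only hanzi (發) tips the verdict to "traditional"; the real analyse() step goes further, tallying unique and total hanzi per HSK grade.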

Sources

This repo makes use of public-domain datasets of HSK vocabulary and character frequency lists, as indicated below; credit goes to those involved in their creation and distribution.

  • Hanyu Shuiping Kaoshi (HSK) 3.0 character list: "hsk30-chars.csv", hsk30, ivankra, GitHub

  • Character frequency list: "CharFreq-Modern.csv", Jun Da, 2004, Chinese Text Computing, Middle Tennessee State University

  • Multiple character mappings: "Ambiguous character mappings", Wikipedia

  • Simplified character set demo: "Folding Beijing" 《北京折叠》, Hao Jingfang 郝景芳, 2012

  • Traditional character set demo: "Tao Te Ching" 《道德經》, Lao Tzu 老子, c. 400 BC