emersonbottero/vitepress-plugin-search

I can't search for Chinese

yyrc opened this issue · 43 comments

yyrc commented

I hope to be able to search for Chinese

Do you have a repo with an example?

yyrc commented

I can't search Chinese, for example "的".

It works if you place a space after it, which isn't great... I'm going to take a look.
I should also handle the frontmatter portion of the docs.

@yyrc can you share your repo with me?
there seems to be other problems but I can't reproduce.

Will this be fixed? I notice that when I search for English words in Chinese articles, the results are not correct, and Chinese words are not found at all.

I tried to clone your repo but then couldn't find it..
can you share the repo again?

with more data the better..

You can clone my repo: https://github.com/Charles7c/charles7c.github.io.git

(screenshot: local search)

However, you need to enable vitepress-plugin-search in docs/vite.config.ts.

Must add language support and make it available in the plugin options: https://github.com/MihaiValentin/lunr-languages

var lunr = require('./lib/lunr.js');
require('./lunr.stemmer.support.js')(lunr);
require('./lunr.ru.js')(lunr);
require('./lunr.multi.js')(lunr);

var idx = lunr(function () {
  // the reason "en" does not appear above is that "en" is built into lunr.js
  this.use(lunr.multiLanguage('en', 'ru'));
  // then, the normal lunr index initialization
  // ...
});

How does this configuration take effect in the plugin? @emersonbottero
Can you provide an example? Thanks. :)

import { defineConfig } from 'vite'
import { SearchPlugin } from 'vitepress-plugin-search'

export default defineConfig({
  plugins: [
    SearchPlugin({
      // Add a wildcard at the end of the search
      wildcard: false,
      // The length of the search result preview
      previewLength: 62,
    })
  ]
})

I have to change my plugin based on my last comment.

Thanks a lot. 👍

Looking forward to it.

Just for reference: vitejs/vite#10486

Just an update..
The initial idea of using the above link doesn't work, since I use lunr and those are for elasticlunr.

Due to the lack of maintenance in the lunr project, I decided to switch the index library to flexsearch.
I managed to create the library and it works great, but it fails on vitepress build due to a problem in the library itself; see my comment there.

Once this is fixed it should be possible to pass all of the library's index options to the plugin.
The simplest way to set it to Chinese is to specify it here, or just add the cjk default language.

But we can, and should, improve that with an actual Chinese language!
You could help add the Chinese language to the flexsearch library..
For now it should be:

  • saving the default as Chinese and adding the stop words
  • adding a stemmer, if it makes sense..
    • a stemmer is what lets drive, driving, and driven be used as aliases in the search

Is there an easy solution for now? To be honest, I'm not familiar with any of the libraries mentioned above...

I just noticed we can download the flexsearch files.
I'll try to bundle it all together with my plugin.
If it works we'll be able to configure it as mentioned above.

I did it.. 😁
Please try adding the options as suggested above with flexsearch.

Could someone tell me if it works?
@yyrc @Charles7c @li-zheng-hao @jonsam-ng

import { defineConfig } from 'vite'
import { SearchPlugin } from 'vitepress-plugin-search'

export default defineConfig({
  plugins: [
    SearchPlugin({
      lang: 'zh',
      encode: str => str.replace(/[\x00-\x7F]/g, "").split("")
    })
  ]
})

I upgraded to version 1.0.4-alpha.15, then looked at the links below, but didn't quite understand how to configure it, and in the end it didn't work.

  1. https://github.com/nextapps-de/flexsearch#cjk-word-break-chinese-japanese-korean
  2. Chinese and English at the same time? nextapps-de/flexsearch#207
  3. Does anyone know how to switch the search to Chinese? alex-shpak/hugo-book#327

I tried the same config; it only works when searching a single word.

It should be

{ encode: str => str.replace(/[\x00-\x7F]/g, "").split("") }

and you will only find whole words..
"Sto", for example, will return 0 results... but "stop" should work. Please try that.

To search for partials there is another setting option:
tokenize: "full"
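For intuition, here is a toy sketch of what a "full" tokenizer does (an illustration of the idea, not flexsearch's actual code): it indexes every substring of each term, which is why partial queries match but the index grows quickly.

```javascript
// Toy "full" tokenization: emit every substring of a term so that
// partial queries like "测" or "元测" still match. This mirrors the
// idea behind flexsearch's tokenize: "full" option, not its
// actual implementation.
function fullTokenize(term) {
  const tokens = new Set();
  for (let start = 0; start < term.length; start++) {
    for (let end = start + 1; end <= term.length; end++) {
      tokens.add(term.slice(start, end));
    }
  }
  return [...tokens];
}

// "单元测试" yields 单, 单元, 单元测, 单元测试, 元, 元测, ... 试
console.log(fullTokenize("单元测试"));
```

A term of length n produces up to n·(n+1)/2 tokens, which is why "full" indexes can get so large.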

No.. it doesn't work for me...

My config:

import { SearchPlugin } from "vitepress-plugin-search";
import { defineConfig } from "vite";

export default defineConfig({
  plugins: [SearchPlugin({
    encode: str => str.replace(/[\x00-\x7F]/g, "").split("")
  })],
});

Try both settings together.

import { SearchPlugin } from "vitepress-plugin-search";
import { defineConfig } from "vite";

export default defineConfig({
  plugins: [SearchPlugin({
    encode: str => str.replace(/[\x00-\x7F]/g, "").split(""),
    tokenize: "full"
  })],
});

Still doesn't work...

I'll take a look..
If I can't manage it, I'll ask one of the vitepress devs who knows Chinese to help me.

Please try

 SearchPlugin({
      encode: false,
      tokenize: function (str) {
        return str.replace(/[\x00-\x7F]/g, "").split("");
      },
      filter:
        "的 一 不 在 人 有 是 为 以 于 上 他 而 后 之 来 及 了 因 下 可 到 由 这 与 也 此 但 并 个 其 已 无 小 我 们 起 最 再 今 去 好 只 又 或 很 亦 某 把 那 你 乃 它 吧 被 比 别 趁 当 从 到 得 打 凡 儿 尔 该 各 给 跟 和 何 还 即 几 既 看 据 距 靠 啦 了 另 么 每 们 嘛 拿 哪 那 您 凭 且 却 让 仍 啥 如 若 使 谁 虽 随 同 所 她 哇 嗡 往 哪 些 向 沿 哟 用 于 咱 则 怎 曾 至 致 着 诸 自".split(
          " "
        ),
    }),

If the filter does not make sense you can remove it:

export default defineConfig({
  plugins: [SearchPlugin({
    encode: false,
    tokenize: function (str) {
      return str.replace(/[\x00-\x7F]/g, "").split("");
    },
    // filter:
    //   "的 一 不 在 人 有 是 为 以 于 上 他 而 后 之 来 及 了 因 下 可 到 由 这 与 也 此 但 并 个 其 已 无 小 我 们 起 最 再 今 去 好 只 又 或 很 亦 某 把 那 你 乃 它 吧 被 比 别 趁 当 从 到 得 打 凡 儿 尔 该 各 给 跟 和 何 还 即 几 既 看 据 距 靠 啦 了 另 么 每 们 嘛 拿 哪 那 您 凭 且 却 让 仍 啥 如 若 使 谁 虽 随 同 所 她 哇 嗡 往 哪 些 向 沿 哟 用 于 咱 则 怎 曾 至 致 着 诸 自".split(
    //     " "
    //   ),
  })],
});

I tried this; the filter doesn't work, and I can only search whole words. A single word returns nothing.

Just add it in and it should be fine. [doge]

Surely I can't be expected to manually add every word I write, hahaha.

All that this does
return str.replace(/[\x00-\x7F]/g, "").split("");
is remove non-Chinese characters..
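Concretely, that encode function drops every ASCII character (code points \x00-\x7F) and splits whatever remains into single-character tokens:

```javascript
// The encode function from the configs above: strip all ASCII
// characters, then split the remaining CJK text into
// single-character tokens.
const encode = (str) => str.replace(/[\x00-\x7F]/g, "").split("");

console.log(encode("unit test 单元测试")); // → ["单", "元", "测", "试"]
```

Note that this also throws away any English words in the page, which is why English queries stop matching under this setting.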

tokenize: "full" should return a lot of results.
@li-zheng-hao could you try with only that setting?

It's really hard for me to debug because I don't know Chinese.
@Charles7c
Could you list the terms you are searching for and what results you expect?
Don't paste only images..
I need to be able to copy the words to test.. 😁

wow! it works!!!!

export default defineConfig({
  plugins: [SearchPlugin({
    encode: false,
    tokenize: "full"
  })],
});

Entering 单元测试 will list 单元测试 (which means "unit test" 😁)

Thanks a lot, @emersonbottero. Configured the same as @li-zheng-hao's test, it worked. 😁

// vite.config.ts
import { defineConfig } from 'vite'
import { SearchPlugin } from 'vitepress-plugin-search'

export default defineConfig({
  plugins: [
    SearchPlugin({
      encode: false,
      tokenize: 'full'
    })
  ]
})

Uhuuuu 🎉

It works for me now. Thank you @emersonbottero @Charles7c @li-zheng-hao

Strange, I don't know why I didn't find this thread at first. Thanks.

With tokenize: 'full' the index file is really huge. I now have an 80 MB+ index file (82.2 MB virtual_search-data.d06d4ff8.js).

You can try "forward".
It should reduce the size a lot, since you read Chinese from left to right.

Thanks for your reply.

If I change tokenize to "forward", that reduces the number of results: I can only find results where the search word is at the start of the whole sentence.

I think that's because CJK words are not divided by spaces but by semantics.
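To illustrate why a dictionary-based segmenter helps here, a toy sketch of forward maximum matching (the dictionary and word list are made up for the example; real segmenters like node-segment use large dictionaries and smarter disambiguation):

```javascript
// Toy forward-maximum-matching segmenter: at each position, take the
// longest dictionary word that matches; fall back to one character.
const dictionary = new Set(["单元", "测试", "单元测试", "中文", "分词"]);
const maxWordLength = 4;

function segmentText(text) {
  const words = [];
  let i = 0;
  while (i < text.length) {
    let match = text[i]; // fall back to a single character
    for (let len = Math.min(maxWordLength, text.length - i); len > 1; len--) {
      const candidate = text.slice(i, i + len);
      if (dictionary.has(candidate)) {
        match = candidate;
        break;
      }
    }
    words.push(match);
    i += match.length;
  }
  return words;
}

console.log(segmentText("中文单元测试")); // → ["中文", "单元测试"]
```

Feeding word tokens like these to the index (instead of every substring) is what makes the index so much smaller than with tokenize: "full".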

Finally, I think I got the solution.

I found a word splitter for Chinese text: https://github.com/leizongmin/node-segment

I installed it:
yarn add segment -D

However, I have to split the keywords with spaces manually in the search box in the nav bar; otherwise I get nothing if two words in the search box are not separated by a space. (Can this be automatic?)

Now the size of the index file is reduced to 1,662 KB.

83 MB+ -> 1.6 MB. Really great progress.

If I change the tokenizer to "full", it will be about 2,581 KB.

// docs/vite.config.ts

import { SearchPlugin } from "vitepress-plugin-search";
import { defineConfig } from "vite";

// Word segmenter:
// https://wenjiangs.com/article/segment.html
// https://github.com/leizongmin/node-segment
// Install:
// yarn add segment -D
// Example below

// Load the module
var Segment = require('segment');
// Create an instance
var segment = new Segment();
// Use the default recognition modules and dictionary; loading the
// dictionary takes about 1 second and only runs once, at initialization
segment.useDefault();
// Start segmenting
// console.log(segment.doSegment('这是一个基于Node.js的中文分词模块。'));

var options = {

  // Optimize using the word segmenter
  encode: function (str) {
    return segment.doSegment(str, { simple: true });
  },
  tokenize: "forward", // Fixes Chinese character search. Source: https://github.com/emersonbottero/vitepress-plugin-search/issues/11

  // The settings below return perfect results, but memory and disk
  // consumption are huge; the index file reaches 80 MB+
  // encode: false,
  // tokenize: "full",

};

export default defineConfig({
  plugins: [SearchPlugin(options)],
});

When vitepress has a base setting (here /developer-guide/, as in https://beierzhijin.github.io/developer-guide/), after deploying to GitHub Pages, pressing Enter on a search result loses the base and leads to a 404. This doesn't happen when running locally.

zkrisj commented

Can you only search for the heading names contained in an article, and not the article name itself?