sharkdp/fd

[BUG] search strings containing umlauts fail to find any results

j-lakeman opened this issue · 11 comments

Checks

  • I have read the troubleshooting section and still think this is a bug.

Describe the bug you encountered:

[I] ➜ ~ fd gung.html finds

Bestätigung.html
Downloads/Bestätigung.html

as expected.

But [I] ➜ ~ fd bestätigung doesn't find anything, even if run with --unrestricted.

This does not seem to be a Unicode issue, as files and folders containing emoji are found properly.

Describe what you expected to happen:

Same output as first command

What version of fd are you using?

fd 9.0.0

Which operating system / distribution are you on?

Darwin 20.6.0 x86_64

Great application nevertheless! Love it!

This is likely a duplicate of #638

Is the search using U+0075 and U+0308 (a "u" followed by a combining diaeresis character), while the filename uses U+00FC (a single precomposed ü character), or vice versa?

@tmccombs Yeah it would be vice versa. macOS stores filenames in normalization form NFD (D for decomposed), so the actual filenames will have combining characters while most everything else uses the precomposed characters.
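To make the difference concrete, here is a minimal illustrative sketch (not fd code; it assumes the unicode-normalization crate) that prints the bytes of both normalization forms:

use unicode_normalization::UnicodeNormalization;

fn main() {
    // ä stays U+00E4 in NFC, but becomes U+0061 U+0308 in NFD.
    let nfc: String = "Bestätigung".nfc().collect();
    let nfd: String = "Bestätigung".nfd().collect();

    // In UTF-8, NFC encodes ä as c3 a4; NFD encodes it as 61 cc 88.
    println!("NFC: {:02x?}", nfc.as_bytes());
    println!("NFD: {:02x?}", nfd.as_bytes());

    // The two strings render identically but compare unequal byte-for-byte.
    assert_ne!(nfc, nfd);
}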

Oh I guess my info is out of date. That's true for HFS+, but APFS is normalization-insensitive rather than actually normalizing. So file paths will use whatever normalization you used to create the file, but you can access it by other normalizations too (kinda like how touch foo; cat Foo would work on a case-insensitive FS).

Finder still uses NFD though.

@tavianator I'm on a case-sensitive APFS
@tmccombs [I] ➜ ~ printf %x\n "'ä'" outputs e4, and [I] ➜ ~ printf \ue4\n outputs ä again.

However, it seems I can't pipe to fd to test the individual characters (#1346).

What does printf "%x\n" $(ls) give in the folder that contains the bestätigung file?

Just copy-pasting from the OP shows what's happening:

[I] ➜ ~ fd gung.html finds

Bestätigung.html
Downloads/Bestätigung.html

as expected.

tavianator@graphene $ echo 'Bestätigung' | xxd
00000000: 4265 7374 61cc 8874 6967 756e 670a       Besta..tigung.

But [I] ➜ ~ fd bestätigung doesn't find anything, even if run with --unrestricted.

tavianator@graphene $ echo 'bestätigung' | xxd
00000000: 6265 7374 c3a4 7469 6775 6e67 0a         best..tigung.

The difference (apart from the case of the B) is that fd outputs 61 cc 88 for ä, which is UTF-8 for U+0061 U+0308, while the OP typed c3 a4 for ä, AKA U+00E4.

I suspect if you manually search for the decomposed form, something like

$ fd $'besta\xcc\x88tigung'

it will find it.

Yep, that's right! Cheers!
What makes fd outstanding, apart from its efficiency, is its ease of use IMHO. Still, this is quite a cumbersome workaround, don't you think? Characters like that occur in many European languages.

I agree that this is not a good user experience.

Unfortunately, it is also a very difficult problem to solve.

The library we use for regex doesn't support normalization, and probably won't anytime soon. See rust-lang/regex#404 (comment). The workaround suggested there, normalizing both the regex and the input, is much easier said than done: normalizing all the filenames significantly hurts performance, and normalizing the regex isn't as straightforward as normalizing the regex's source string.

For example "ä?" Would need to be converted to "(a\u0308)?".

Perhaps the best path would be to have an option to transform the regex to accept either equivalent form. So for example ä would be transformed into "(ä|a\u0308)".

I'm not familiar enough with Unicode to know how feasible that would be in general, or how to create those transformation tables.
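To sketch what that option might look like for a plain literal (purely hypothetical, again assuming the unicode-normalization and regex crates; this is not fd's implementation), the literal could be expanded into an alternation of its NFC and NFD forms:

use unicode_normalization::UnicodeNormalization;

// Hypothetical helper: build a pattern that matches either normalization
// form of a plain literal, e.g. "ä" -> "(?:ä|a\u{0308})".
fn normalization_insensitive(literal: &str) -> String {
    let nfc: String = literal.nfc().collect();
    let nfd: String = literal.nfd().collect();
    if nfc == nfd {
        return regex::escape(&nfc);
    }
    format!("(?:{}|{})", regex::escape(&nfc), regex::escape(&nfd))
}

fn main() {
    let re = regex::Regex::new(&normalization_insensitive("bestätigung")).unwrap();
    assert!(re.is_match("best\u{00e4}tigung"));   // precomposed
    assert!(re.is_match("besta\u{0308}tigung"));  // decomposed
}

This only covers whole literals; handling metacharacters, quantifiers, and character classes inside the pattern is exactly the hard part.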

Perhaps the best path would be to have an option to transform the regex to accept either equivalent form. So for example ä would be transformed into "(ä|a\u0308)".

I think the worst case here is character classes like [ä-ë]. We'd have to iterate over every code point in the range, apply NFD, and construct a new alternation. It could blow up the regex gigantically.
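For a rough sense of the blow-up, here is a small hypothetical enumeration (assuming the unicode-normalization crate) of which code points in that range decompose under NFD:

use unicode_normalization::UnicodeNormalization;

fn main() {
    // [ä-ë] is U+00E4 ..= U+00EB: ä å æ ç è é ê ë
    for c in '\u{00e4}'..='\u{00eb}' {
        let nfd: String = std::iter::once(c).nfd().collect();
        if nfd.chars().count() > 1 {
            // Each of these needs its own branch in the rewritten alternation.
            println!("{} decomposes to {:?}", c, nfd);
        }
    }
}

Most code points in even this tiny range decompose, so every decomposable member of a class adds another alternation branch, and large ranges get expensive fast.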

Perhaps we could just do the replacement on literals, and not worry about ranges?