When it comes to overall speed, BulkSearch outperforms every searching library out there and also provides flexible search capabilities like multi-word matching, phonetic transformations and partial matching. It is essentially based on how a HDD manages files on a filesystem. Adding, updating or removing items is as fast as searching for them, but requires some additional memory. When your index does not need to be updated continuously, FlexSearch may be a better choice. BulkSearch also provides an asynchronous processing model to perform queries in the background.
Benchmark:
- Comparison: https://jsperf.com/compare-search-libraries
- Detailed: https://jsperf.com/bulksearch
Supported Platforms:
- Browser
- Node.js
Supported Module Definitions:
- AMD (RequireJS)
- CommonJS (Node.js)
- Closure (Xone)
- Global (Browser)
All Features:
- Partial Words
- Multiple Words
- Flexible Word Order
- Phonetic Search
- Limit Results
- Pagination
- Caching
- Asynchronous Mode
- Custom Matchers
- Custom Encoders
<html>
<head>
<script src="js/bulksearch.min.js"></script>
</head>
...
Note: Use bulksearch.min.js for production and bulksearch.js for development.
Use latest from CDN:
<script src="https://cdn.rawgit.com/nextapps-de/bulksearch/master/bulksearch.min.js"></script>
npm install bulksearch
In your code include as follows:
var BulkSearch = require("bulksearch");
Or pass in options when requiring:
var index = require("bulksearch").create({/* options */});
AMD
var BulkSearch = require("./bulksearch.js");
Description | BulkSearch | FlexSearch |
---|---|---|
Access | Read-Write optimized index | Read-Memory optimized index |
Memory | Large (~ 90 bytes per word) | Tiny (~ 2 bytes per word) |
Limit Results | Yes | Yes |
Pagination | Yes | No |
Global methods:
- BulkSearch.create(<options>)
- BulkSearch.addMatcher({KEY: VALUE})
- BulkSearch.register(name, encoder)
- BulkSearch.encode(name, string)
Index methods:
- Index.add(id, string)
- Index.search(string, <limit>, <callback>)
- Index.search(string, <page>, <callback>)
- Index.search(options, <callback>)
- Index.update(id, string)
- Index.remove(id)
- Index.reset()
- Index.destroy()
- Index.init(<options>)
- Index.optimize()
- Index.info()
- Index.addMatcher({KEY: VALUE})
- Index.encode(string)
BulkSearch.create(<options>)
var index = new BulkSearch();
Alternatively you can also use:
var index = BulkSearch.create();
var index = new BulkSearch({
// default values:
type: "integer",
encode: "icase",
boolean: "and",
size: 4000,
multi: false,
strict: false,
ordered: false,
paging: false,
async: false,
cache: false
});
Read more: Phonetic Search, Phonetic Comparison, Improve Memory Usage
Index.add(id, string)
index.add(10025, "John Doe");
Index.search(string|options, <limit|page>, <callback>)
index.search("John");
Limit the result:
index.search("John", 10);
Perform queries asynchronously:
index.search("John", function(result){
// array of results
});
index.search({
query: "John",
page: '1:1234',
limit: 10,
callback: function(result){
// async
}
});
Index.update(id, string)
index.update(10025, "Road Runner");
Index.remove(id)
index.remove(10025);
index.reset();
index.destroy();
Index.init(<options>)
Note: Re-initialization will also destroy the old index!
Initialize (with same options):
index.init();
Initialize with new options:
index.init({
/* options */
});
BulkSearch.addMatcher({REGEX: REPLACE})
Add global matchers for all instances:
BulkSearch.addMatcher({
'ä': 'a', // replaces all 'ä' to 'a'
'ó': 'o',
'[ûúù]': 'u' // replaces multiple
});
Add private matchers for a specific instance:
index.addMatcher({
'ä': 'a', // replaces all 'ä' to 'a'
'ó': 'o',
'[ûúù]': 'u' // replaces multiple
});
Define a private custom encoder during creation/initialization:
var index = new BulkSearch({
encode: function(str){
// do something with str ...
return str;
}
});
BulkSearch.register(name, encoder)
BulkSearch.register('whitespace', function(str){
return str.replace(/ /g, '');
});
Use global encoders:
var index = new BulkSearch({ encode: 'whitespace' });
Private encoder:
var encoded = index.encode("sample text");
Global encoder:
var encoded = BulkSearch.encode("whitespace", "sample text");
BulkSearch.register('mixed', function(str){
str = this.encode("icase", str); // built-in
str = this.encode("whitespace", str); // custom
return str;
});
BulkSearch.register('extended', function(str){
str = this.encode("custom", str);
// do something additional with str ...
return str;
});
index.info();
Returns information about the index, e.g.:
{
"bytes": 103600,
"chunks": 9,
"fragmentation": 0,
"fragments": 0,
"id": 0,
"length": 7798,
"matchers": 0,
"size": 10000,
"status": false
}
Note: When the fragmentation value is about 50% or higher, you should consider running index.optimize().
Optimizing an index frees all fragmented memory and also rebuilds the index by scoring.
index.optimize();
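For example, a minimal sketch (assuming an existing index instance and that fragmentation is reported as a percentage, as in the note above) which only optimizes when fragmentation gets high:
var stats = index.info();

if(stats.fragmentation >= 50){

    // frees fragmented memory and rebuilds the index by scoring
    index.optimize();
}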
Note: Pagination can reduce query time by a factor of 100.
Enable pagination on initialization:
var index = BulkSearch.create({ paging: true });
Perform query and pass a limit (items per page):
index.search("John", 10);
The response will include a pagination object like this:
{
"current": "0:0",
"prev": null,
"next": "1:16322",
"results": []
}
Explanation:
"current" | Includes the pointer to the current page. |
"prev" | Includes the pointer to the previous page. Whenever this field has the value null there are no more previous pages available. |
"next" | Includes the pointer to the next page. Whenever this field has the value null there are no more pages left. |
"results" | Array of matched items. |
Perform query and pass a pointer to a specific page:
index.search("John", {
page: "1:16322", // pointer
limit: 10
});
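As a minimal sketch, the "next" pointer can be fed back into subsequent queries to walk through all pages (this assumes an index created with paging: true and a synchronous search; field names as in the pagination object above):
// collect all matches page by page
var all = [];
var response = index.search("John", 10); // first page

all = all.concat(response.results);

// follow the "next" pointer until no pages are left
while(response.next !== null){

    response = index.search("John", {

        page: response.next,
        limit: 10
    });

    all = all.concat(response.results);
}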
Option | Values | Description |
---|---|---|
type | "byte", "short", "integer", "float", "string" | The data type of passed IDs has to be specified on creation. It is recommended to use the lowest possible data range here, e.g. use "short" when IDs are not higher than 65,535. |
encode | false, "icase", "simple", "advanced", "extra", function(string):string | The encoding type. Choose one of the built-ins or pass a custom encoding function. |
boolean | "and", "or" | The boolean model applied when comparing multiple words. Note: When using "or", the first word is still combined with "and"; e.g. a query with 3 words returns results that match either words 1 & 2 or words 1 & 3. |
size | 2500 - 10000 | The size of chunks. Which value fits best depends on the content length. Short content (e.g. user names) is faster with a chunk size of 2,500; longer text runs faster with a chunk size of 10,000. Note: It is recommended to use a chunk size of at least the maximum content length to be indexed, to prevent fragmentation. |
multi | true, false | Enable multi-word processing. |
ordered | true, false | Multiple words have to be in the same order as in the matched entry. |
strict | true, false | Matches have to start exactly with the query. |
cache | true, false | Enable caching. |
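For illustration, a possible configuration combining several of these options (the concrete values are chosen as an example, not as recommended defaults):
var index = new BulkSearch({

    type: "short",      // IDs stay below 65,535
    encode: "advanced",  // phonetic normalizations + literal transformations
    boolean: "or",
    multi: true,         // enable multi-word processing
    strict: false,
    size: 2500           // short content, e.g. user names
});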
Encoder | Description | False Positives | Compression Level |
---|---|---|---|
false | Turn off encoding | no | no |
"icase" | Case in-sensitive encoding | no | no |
"simple" | Phonetic normalizations | no | ~ 3% |
"advanced" | Phonetic normalizations + Literal transformations | no | ~ 25% |
"extra" | Phonetic normalizations + Soundex transformations | yes | ~ 50% |
Reference String: "Björn-Phillipp Mayer"
Query | ElasticSearch | BulkSearch (iCase) | BulkSearch (Simple) | BulkSearch (Adv.) | BulkSearch (Extra) |
---|---|---|---|---|---|
björn | yes | yes | yes | yes | yes |
björ | no | yes | yes | yes | yes |
bjorn | no | no | yes | yes | yes |
bjoern | no | no | no | yes | yes |
philipp | no | no | no | yes | yes |
filip | no | no | no | yes | yes |
björnphillip | no | no | yes | yes | yes |
meier | no | no | no | yes | yes |
björn meier | no | no | no | yes | yes |
meier fhilip | no | no | no | yes | yes |
byorn mair | no | no | no | no | yes |
(false positives) | yes | no | no | no | yes |
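A minimal sketch of the scenario behind this table, assuming the reference string is indexed with the "advanced" encoder and multi-word processing enabled (expected matches follow the table above):
var index = new BulkSearch({

    encode: "advanced",
    multi: true
});

index.add(1, "Björn-Phillipp Mayer");

index.search("björn meier"); // expected to match id 1 (see table)
index.search("filip");       // expected to match id 1 (see table)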
Note: The data type of passed IDs has to be specified on creation. It is recommended to use the lowest possible data range here, e.g. use "short" when IDs are not higher than 65,535.
ID Type | Range of Values | Memory usage per ~100,000 indexed words |
---|---|---|
Byte | 0 - 255 | 4.5 Mb |
Short | 0 - 65,535 | 5.3 Mb |
Integer | 0 - 4,294,967,295 | 6.8 Mb |
Float | 0 - * (16 digits) | 10 Mb |
String | * (unlimited) | 28.2 Mb |
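For example, when IDs never exceed 65,535, the index can be created with the smaller "short" type to save memory (a sketch based on the table above):
// IDs up to 65,535 fit into "short" (~5.3 Mb per ~100,000 indexed words)
var index = new BulkSearch({ type: "short" });

index.add(10025, "John Doe"); // the id fits into the "short" range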
Author BulkSearch: Thomas Wilkerling
License: Apache 2.0 License