Class AnalyzeTokenizersSelector
Inheritance
Object → SelectorBase → AnalyzeTokenizersSelector
Assembly: OpenSearch.Client.dll
Declaration
public class AnalyzeTokenizersSelector : SelectorBase, ISelector
Methods
CharGroup(Func<CharGroupTokenizerDescriptor, ICharGroupTokenizer>)
Tokenizes text on a configurable set of characters. Whenever a character from this set is encountered, a
new token is started. Accepts either single characters, e.g. -, or character groups: whitespace, letter, digit,
punctuation, symbol.
Declaration
public ITokenizer CharGroup(Func<CharGroupTokenizerDescriptor, ICharGroupTokenizer> selector)
Parameters
Returns
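Examples
A minimal usage sketch, not part of the generated reference: it assumes this selector is consumed through the analyze API's inline tokenizer overload and that a cluster is reachable on the default endpoint; the text and characters are illustrative.
using OpenSearch.Client;

// Connects to http://localhost:9200 by default.
var client = new OpenSearchClient();
var response = client.Indices.Analyze(a => a
    .Text("one-two three")
    .Tokenizer(t => t.CharGroup(cg => cg
        .TokenizeOnCharacters("-", "whitespace")))); // expected tokens: one, two, three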
EdgeNGram(Func<EdgeNGramTokenizerDescriptor, IEdgeNGramTokenizer>)
A tokenizer of type edgeNGram that emits n-grams anchored to the start of each token.
Declaration
public ITokenizer EdgeNGram(Func<EdgeNGramTokenizerDescriptor, IEdgeNGramTokenizer> selector)
Parameters
Returns
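Examples
A sketch reusing the client from the CharGroup example; the gram sizes are illustrative.
var response = client.Indices.Analyze(a => a
    .Text("search")
    .Tokenizer(t => t.EdgeNGram(e => e
        .MinGram(2)
        .MaxGram(4)))); // expected tokens: se, sea, sear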
Icu(Func<IcuTokenizerDescriptor, IIcuTokenizer>)
Tokenizes text into words on word boundaries, as defined in UAX #29: Unicode Text Segmentation. It behaves much
like the standard tokenizer, but adds better support for some Asian languages by using a dictionary-based approach
to identify words in Thai, Lao, Chinese, Japanese, and Korean, and using custom rules to break Myanmar and Khmer
text into syllables.
Part of the analysis-icu plugin.
Declaration
public ITokenizer Icu(Func<IcuTokenizerDescriptor, IIcuTokenizer> selector)
Parameters
Returns
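Examples
A sketch assuming the analysis-icu plugin is installed on the cluster (the call fails server-side without it), reusing the client from the CharGroup example.
var response = client.Indices.Analyze(a => a
    .Text("สวัสดีครับ")
    .Tokenizer(t => t.Icu(i => i))); // dictionary-based segmentation of Thai text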
Keyword(Func<KeywordTokenizerDescriptor, IKeywordTokenizer>)
A tokenizer of type keyword that emits the entire input as a single token.
Declaration
public ITokenizer Keyword(Func<KeywordTokenizerDescriptor, IKeywordTokenizer> selector)
Parameters
Returns
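Examples
A sketch reusing the client from the CharGroup example.
var response = client.Indices.Analyze(a => a
    .Text("New York")
    .Tokenizer(t => t.Keyword(k => k))); // expected output: the single token "New York"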
Kuromoji(Func<KuromojiTokenizerDescriptor, IKuromojiTokenizer>)
A tokenizer of type kuromoji_tokenizer that performs morphological analysis of Japanese text.
Part of the analysis-kuromoji plugin.
Declaration
public ITokenizer Kuromoji(Func<KuromojiTokenizerDescriptor, IKuromojiTokenizer> selector)
Parameters
Returns
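Examples
A sketch assuming the analysis-kuromoji plugin is installed, reusing the client from the CharGroup example; the tokenization mode is illustrative.
var response = client.Indices.Analyze(a => a
    .Text("東京スカイツリー")
    .Tokenizer(t => t.Kuromoji(k => k
        .Mode(KuromojiTokenizationMode.Search))));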
Letter(Func<LetterTokenizerDescriptor, ILetterTokenizer>)
A tokenizer of type letter that divides text at non-letters; that is, it defines tokens as maximal strings of adjacent letters.
Note that this does a decent job for most European languages, but a terrible job for some Asian languages, where words
are not separated by spaces.
Declaration
public ITokenizer Letter(Func<LetterTokenizerDescriptor, ILetterTokenizer> selector)
Parameters
Returns
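Examples
A sketch reusing the client from the CharGroup example.
var response = client.Indices.Analyze(a => a
    .Text("don't stop")
    .Tokenizer(t => t.Letter(l => l))); // expected tokens: don, t, stop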
Lowercase(Func<LowercaseTokenizerDescriptor, ILowercaseTokenizer>)
A tokenizer of type lowercase that performs the function of Letter Tokenizer and Lower Case Token Filter together.
It divides text at non-letters and converts the resulting tokens to lower case.
While it is functionally equivalent to the combination of Letter Tokenizer and Lower Case Token Filter,
there is a performance advantage to doing the two tasks at once, hence this (redundant) implementation.
Declaration
public ITokenizer Lowercase(Func<LowercaseTokenizerDescriptor, ILowercaseTokenizer> selector)
Parameters
Returns
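Examples
A sketch reusing the client from the CharGroup example.
var response = client.Indices.Analyze(a => a
    .Text("Quick-Brown")
    .Tokenizer(t => t.Lowercase(l => l))); // expected tokens: quick, brown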
NGram(Func<NGramTokenizerDescriptor, INGramTokenizer>)
A tokenizer of type nGram that emits n-grams of configurable lengths from the input text.
Declaration
public ITokenizer NGram(Func<NGramTokenizerDescriptor, INGramTokenizer> selector)
Parameters
Returns
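Examples
A sketch reusing the client from the CharGroup example; the gram sizes are illustrative.
var response = client.Indices.Analyze(a => a
    .Text("fox")
    .Tokenizer(t => t.NGram(n => n
        .MinGram(2)
        .MaxGram(3)))); // expected tokens: fo, fox, ox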
Nori(Func<NoriTokenizerDescriptor, INoriTokenizer>)
A tokenizer of type nori_tokenizer that performs morphological analysis of Korean text. Part of the analysis-nori plugin.
Declaration
public ITokenizer Nori(Func<NoriTokenizerDescriptor, INoriTokenizer> selector)
Parameters
Returns
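Examples
A sketch assuming the analysis-nori plugin is installed, reusing the client from the CharGroup example; the decompound mode is illustrative.
var response = client.Indices.Analyze(a => a
    .Text("가곡역")
    .Tokenizer(t => t.Nori(n => n
        .DecompoundMode(NoriDecompoundMode.Mixed))));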
PathHierarchy(Func<PathHierarchyTokenizerDescriptor, IPathHierarchyTokenizer>)
The path_hierarchy tokenizer takes a hierarchical value such as:
/something/something/else
and produces the tokens:
/something
/something/something
/something/something/else
Declaration
public ITokenizer PathHierarchy(Func<PathHierarchyTokenizerDescriptor, IPathHierarchyTokenizer> selector)
Parameters
Returns
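Examples
A sketch reusing the client from the CharGroup example, reproducing the token output described above.
var response = client.Indices.Analyze(a => a
    .Text("/something/something/else")
    .Tokenizer(t => t.PathHierarchy(p => p
        .Delimiter('/'))));
// expected tokens: /something, /something/something, /something/something/else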
Pattern(Func<PatternTokenizerDescriptor, IPatternTokenizer>)
A tokenizer of type pattern that can flexibly separate text into terms via a regular expression.
Declaration
public ITokenizer Pattern(Func<PatternTokenizerDescriptor, IPatternTokenizer> selector)
Parameters
Returns
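Examples
A sketch reusing the client from the CharGroup example; the pattern is illustrative.
var response = client.Indices.Analyze(a => a
    .Text("comma,separated,values")
    .Tokenizer(t => t.Pattern(p => p
        .Pattern(",")))); // expected tokens: comma, separated, values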
Standard(Func<StandardTokenizerDescriptor, IStandardTokenizer>)
A tokenizer of type standard, providing a grammar-based tokenizer that works well for most European-language documents.
It implements the Unicode Text Segmentation algorithm, as specified in Unicode Standard Annex #29.
Declaration
public ITokenizer Standard(Func<StandardTokenizerDescriptor, IStandardTokenizer> selector = null)
Parameters
Returns
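Examples
A sketch reusing the client from the CharGroup example; the selector is optional, so the defaults are used here.
var response = client.Indices.Analyze(a => a
    .Text("The 2 QUICK brown-foxes.")
    .Tokenizer(t => t.Standard())); // expected tokens: The, 2, QUICK, brown, foxes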
UaxEmailUrl(Func<UaxEmailUrlTokenizerDescriptor, IUaxEmailUrlTokenizer>)
A tokenizer of type uax_url_email that works like the standard tokenizer, but tokenizes URLs and email addresses as single tokens.
Declaration
public ITokenizer UaxEmailUrl(Func<UaxEmailUrlTokenizerDescriptor, IUaxEmailUrlTokenizer> selector)
Parameters
Returns
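Examples
A sketch reusing the client from the CharGroup example.
var response = client.Indices.Analyze(a => a
    .Text("email me at john@example.org")
    .Tokenizer(t => t.UaxEmailUrl(u => u))); // john@example.org is kept as a single token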
Whitespace(Func<WhitespaceTokenizerDescriptor, IWhitespaceTokenizer>)
A tokenizer of type whitespace that divides text at whitespace.
Declaration
public ITokenizer Whitespace(Func<WhitespaceTokenizerDescriptor, IWhitespaceTokenizer> selector = null)
Parameters
Returns
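Examples
A sketch reusing the client from the CharGroup example; the selector is optional.
var response = client.Indices.Analyze(a => a
    .Text("quick brown fox")
    .Tokenizer(t => t.Whitespace())); // expected tokens: quick, brown, fox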
Implements
ISelector