Package | Description |
---|---|
`com.azure.search.documents.models` | Package containing the data models for SearchServiceRestClient. |
Modifier and Type | Class and Description |
---|---|
`class` | `ClassicTokenizer`: Grammar-based tokenizer that is suitable for processing most European-language documents. |
`class` | `EdgeNGramTokenizer`: Tokenizes the input from an edge into n-grams of the given size(s). |
`class` | `KeywordTokenizer`: Emits the entire input as a single token. |
`class` | `KeywordTokenizerV2`: Emits the entire input as a single token. |
`class` | `MicrosoftLanguageStemmingTokenizer`: Divides text using language-specific rules and reduces words to their base forms. |
`class` | `MicrosoftLanguageTokenizer`: Divides text using language-specific rules. |
`class` | `NGramTokenizer`: Tokenizes the input into n-grams of the given size(s). |
`class` | `PathHierarchyTokenizer`: Tokenizer for path-like hierarchies. |
`class` | `PathHierarchyTokenizerV2`: Tokenizer for path-like hierarchies. |
`class` | `PatternTokenizer`: Tokenizer that uses regex pattern matching to construct distinct tokens. |
`class` | `StandardTokenizer`: Breaks text following the Unicode Text Segmentation rules. |
`class` | `StandardTokenizerV2`: Breaks text following the Unicode Text Segmentation rules. |
`class` | `UaxUrlEmailTokenizer`: Tokenizes URLs and emails as one token. |
Modifier and Type | Method and Description |
---|---|
`Tokenizer` | `Tokenizer.setName(String name)`: Set the name property: the name of the tokenizer. |
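The `setName` method above returns the `Tokenizer` itself, the fluent-setter pattern these model classes follow. A minimal self-contained sketch of that pattern, using a stand-in `Tokenizer` class rather than the actual SDK type:

```java
// Stand-in sketch of the fluent setter pattern used by the SDK's model
// classes; this is NOT the real com.azure.search.documents.models.Tokenizer.
public class Tokenizer {
    private String name;

    // Set the name property (the name of the tokenizer) and return
    // this Tokenizer object itself, so calls can be chained.
    public Tokenizer setName(String name) {
        this.name = name;
        return this;
    }

    // Get the name property.
    public String getName() {
        return this.name;
    }

    public static void main(String[] args) {
        // Fluent style: construct and configure in one expression.
        Tokenizer tokenizer = new Tokenizer().setName("my-custom-tokenizer");
        System.out.println(tokenizer.getName());
    }
}
```

Because each setter returns the receiver, several properties can be configured in a single chained expression when building an index definition.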
Copyright © 2020 Microsoft Corporation. All rights reserved.