Package | Description |
---|---|
`com.azure.search.documents` | Package containing the classes for SearchIndexRestClient. |
`com.azure.search.documents.models` | Package containing the data models for SearchServiceRestClient. |
Modifier and Type | Method and Description |
---|---|
`PagedIterable<TokenInfo>` | `SearchServiceClient.analyzeText(String indexName, AnalyzeRequest analyzeRequest)`<br>Shows how an analyzer breaks text into tokens. |
`PagedFlux<TokenInfo>` | `SearchServiceAsyncClient.analyzeText(String indexName, AnalyzeRequest analyzeRequest)`<br>Shows how an analyzer breaks text into tokens. |
`PagedFlux<TokenInfo>` | `SearchServiceAsyncClient.analyzeText(String indexName, AnalyzeRequest analyzeRequest, RequestOptions requestOptions)`<br>Shows how an analyzer breaks text into tokens. |
`PagedIterable<TokenInfo>` | `SearchServiceClient.analyzeText(String indexName, AnalyzeRequest analyzeRequest, RequestOptions requestOptions, Context context)`<br>Shows how an analyzer breaks text into tokens. |
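A minimal sketch of calling the synchronous `analyzeText` overload. The builder class (`SearchServiceClientBuilder`), the `AnalyzerName` constant, the endpoint, the admin key, and the index name `"hotels"` are all assumptions for illustration; substitute values from your own service. Running this requires a live Azure Cognitive Search service.

```java
import com.azure.core.credential.AzureKeyCredential;
import com.azure.search.documents.SearchServiceClient;
import com.azure.search.documents.SearchServiceClientBuilder;
import com.azure.search.documents.models.AnalyzeRequest;
import com.azure.search.documents.models.AnalyzerName;
import com.azure.search.documents.models.TokenInfo;

public class AnalyzeTextSample {
    public static void main(String[] args) {
        // Hypothetical endpoint and admin key -- replace with your own.
        SearchServiceClient client = new SearchServiceClientBuilder()
            .endpoint("https://<your-service>.search.windows.net")
            .credential(new AzureKeyCredential("<admin-key>"))
            .buildClient();

        // Ask the service how the standard Lucene analyzer tokenizes this text.
        AnalyzeRequest request = new AnalyzeRequest()
            .setText("The quick brown fox")
            .setAnalyzer(AnalyzerName.STANDARD_LUCENE); // constant name assumed

        // analyzeText returns a PagedIterable<TokenInfo>; each TokenInfo
        // describes one token the analyzer produced.
        for (TokenInfo token : client.analyzeText("hotels", request)) {
            System.out.printf("%s [%d..%d]%n",
                token.getToken(), token.getStartOffset(), token.getEndOffset());
        }
    }
}
```

The `SearchServiceAsyncClient` overloads behave the same way but return a `PagedFlux<TokenInfo>` to be consumed reactively instead of iterated.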
Modifier and Type | Method and Description |
---|---|
`AnalyzeRequest` | `AnalyzeRequest.setAnalyzer(AnalyzerName analyzer)`<br>Set the analyzer property: The name of the analyzer to use to break the given text. |
`AnalyzeRequest` | `AnalyzeRequest.setCharFilters(List<CharFilterName> charFilters)`<br>Set the charFilters property: An optional list of character filters to use when breaking the given text. |
`AnalyzeRequest` | `AnalyzeRequest.setText(String text)`<br>Set the text property: The text to break into tokens. |
`AnalyzeRequest` | `AnalyzeRequest.setTokenFilters(List<TokenFilterName> tokenFilters)`<br>Set the tokenFilters property: An optional list of token filters to use when breaking the given text. |
`AnalyzeRequest` | `AnalyzeRequest.setTokenizer(TokenizerName tokenizer)`<br>Set the tokenizer property: The name of the tokenizer to use to break the given text. |
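Because every setter returns the `AnalyzeRequest` itself, a request can be built fluently. A sketch of the tokenizer route (a request names either an analyzer or a tokenizer plus optional filters, not both); the specific enum constants used here are assumptions:

```java
import java.util.Arrays;
import com.azure.search.documents.models.AnalyzeRequest;
import com.azure.search.documents.models.CharFilterName;
import com.azure.search.documents.models.TokenFilterName;
import com.azure.search.documents.models.TokenizerName;

public class AnalyzeRequestSample {
    public static void main(String[] args) {
        // Chained setters: each returns the same AnalyzeRequest instance.
        // Constant names (WHITESPACE, HTML_STRIP, LOWERCASE, ASCII_FOLDING)
        // are assumed for illustration.
        AnalyzeRequest request = new AnalyzeRequest()
            .setText("Café au lait")
            .setTokenizer(TokenizerName.WHITESPACE)
            .setCharFilters(Arrays.asList(CharFilterName.HTML_STRIP))
            .setTokenFilters(Arrays.asList(
                TokenFilterName.LOWERCASE,
                TokenFilterName.ASCII_FOLDING));
    }
}
```

The resulting request would then be passed to one of the `analyzeText` overloads listed above.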
Copyright © 2020 Microsoft Corporation. All rights reserved.