Struct aws_sdk_textract::client::Client
pub struct Client<C = DynConnector, M = DefaultMiddleware, R = Standard> { /* private fields */ }
Client for Amazon Textract
Client for invoking operations on Amazon Textract. Each operation on Amazon Textract is a method on this struct. .send() must be invoked on the generated operations to dispatch the request to the service.
Examples
Constructing a client and invoking an operation
// Create a shared configuration. This can be used and shared between multiple service clients.
let shared_config = aws_config::load_from_env().await;
let client = aws_sdk_textract::Client::new(&shared_config);
// invoke an operation
/* let rsp = client
    .<operation_name>()
    .<param>("some value")
    .send()
    .await; */
Constructing a client with custom configuration
use aws_config::RetryConfig;
let shared_config = aws_config::load_from_env().await;
let config = aws_sdk_textract::config::Builder::from(&shared_config)
.retry_config(RetryConfig::disabled())
.build();
let client = aws_sdk_textract::Client::from_conf(config);
Implementations
impl<C, M, R> Client<C, M, R> where
C: SmithyConnector,
M: SmithyMiddleware<C>,
R: NewRequestPolicy,
Constructs a fluent builder for the AnalyzeDocument
operation.
- The fluent builder is configurable:
document(Document) / set_document(Option<Document>):
The input document as base64-encoded bytes or an Amazon S3 object. If you use the AWS CLI to call Amazon Textract operations, you can’t pass image bytes. The document must be an image in JPEG or PNG format.
If you’re using an AWS SDK to call Amazon Textract, you might not need to base64-encode image bytes that are passed using the Bytes field.
feature_types(Vec<FeatureType>) / set_feature_types(Option<Vec<FeatureType>>):
A list of the types of analysis to perform. Add TABLES to the list to return information about the tables that are detected in the input document. Add FORMS to return detected form data. To perform both types of analysis, add TABLES and FORMS to FeatureTypes. All lines and words detected in the document are included in the response (including text that isn’t related to the value of FeatureTypes).
human_loop_config(HumanLoopConfig) / set_human_loop_config(Option<HumanLoopConfig>):
Sets the configuration for the human-in-the-loop workflow for analyzing documents.
- On success, responds with AnalyzeDocumentOutput with field(s):
document_metadata(Option<DocumentMetadata>):
Metadata about the analyzed document. An example is the number of pages.
blocks(Option<Vec<Block>>):
The items that are detected and analyzed by AnalyzeDocument.
human_loop_activation_output(Option<HumanLoopActivationOutput>):
Shows the results of the human-in-the-loop evaluation.
analyze_document_model_version(Option<String>):
The version of the model used to analyze the document.
- On failure, responds with
SdkError<AnalyzeDocumentError>
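For example, the AnalyzeDocument fluent builder can be driven like this. This is a minimal sketch: the bucket and object names are hypothetical, and it assumes the SDK generation documented here, where the model types live under aws_sdk_textract::model.

```rust
use aws_sdk_textract::model::{Document, FeatureType, S3Object};

// Hypothetical S3 location; the bucket must be in the same Region as
// the Textract endpoint you call.
let doc = Document::builder()
    .s3_object(
        S3Object::builder()
            .bucket("my-bucket")
            .name("form.png")
            .build(),
    )
    .build();

let rsp = client
    .analyze_document()
    .document(doc)
    .feature_types(FeatureType::Tables) // each call appends one entry to FeatureTypes
    .feature_types(FeatureType::Forms)
    .send()
    .await?;

// Blocks cover every detected line and word, plus table and form structure.
for block in rsp.blocks().unwrap_or_default() {
    println!("{:?}: {:?}", block.block_type(), block.text());
}
```

Note that the fluent `feature_types` setter appends a single item per call, while `set_feature_types` replaces the whole list at once.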
Constructs a fluent builder for the AnalyzeExpense
operation.
- The fluent builder is configurable:
document(Document) / set_document(Option<Document>):
The input document, either as bytes or as an S3 object.
You pass image bytes to an Amazon Textract API operation by using the Bytes property. For example, you would use the Bytes property to pass a document loaded from a local file system. Image bytes passed by using the Bytes property must be base64 encoded. Your code might not need to encode document file bytes if you’re using an AWS SDK to call Amazon Textract API operations.
You pass images stored in an S3 bucket to an Amazon Textract API operation by using the S3Object property. Documents stored in an S3 bucket don’t need to be base64 encoded.
The AWS Region for the S3 bucket that contains the S3 object must match the AWS Region that you use for Amazon Textract operations.
If you use the AWS CLI to call Amazon Textract operations, passing image bytes using the Bytes property isn’t supported. You must first upload the document to an Amazon S3 bucket, and then call the operation using the S3Object property.
For Amazon Textract to process an S3 object, the user must have permission to access the S3 object.
- On success, responds with AnalyzeExpenseOutput with field(s):
document_metadata(Option<DocumentMetadata>):
Information about the input document.
expense_documents(Option<Vec<ExpenseDocument>>):
The expenses detected by Amazon Textract.
- On failure, responds with
SdkError<AnalyzeExpenseError>
Constructs a fluent builder for the AnalyzeID
operation.
- The fluent builder is configurable:
document_pages(Vec<Document>) / set_document_pages(Option<Vec<Document>>):
The document being passed to AnalyzeID.
- On success, responds with AnalyzeIdOutput with field(s):
identity_documents(Option<Vec<IdentityDocument>>):
The list of documents processed by AnalyzeID. Includes a number denoting their place in the list and the response structure for the document.
document_metadata(Option<DocumentMetadata>):
Information about the input document.
analyze_id_model_version(Option<String>):
The version of the AnalyzeIdentity API being used to process documents.
- On failure, responds with
SdkError<AnalyzeIDError>
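A minimal AnalyzeID sketch, with a hypothetical bucket and object key. DocumentPages carries the pages of a single identity document (for example, front and back images), and the fluent `document_pages` setter appends one Document per call.

```rust
use aws_sdk_textract::model::{Document, S3Object};

// Hypothetical S3 location for the front of an identity document.
let front = Document::builder()
    .s3_object(
        S3Object::builder()
            .bucket("my-bucket")
            .name("id-front.jpg")
            .build(),
    )
    .build();

let rsp = client
    .analyze_id()
    .document_pages(front) // appends one Document to DocumentPages
    .send()
    .await?;

// Each entry records its place in the input list via document_index.
for doc in rsp.identity_documents().unwrap_or_default() {
    println!("processed document index: {:?}", doc.document_index());
}
```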
Constructs a fluent builder for the DetectDocumentText
operation.
- The fluent builder is configurable:
document(Document) / set_document(Option<Document>):
The input document as base64-encoded bytes or an Amazon S3 object. If you use the AWS CLI to call Amazon Textract operations, you can’t pass image bytes. The document must be an image in JPEG or PNG format.
If you’re using an AWS SDK to call Amazon Textract, you might not need to base64-encode image bytes that are passed using the Bytes field.
- On success, responds with DetectDocumentTextOutput with field(s):
document_metadata(Option<DocumentMetadata>):
Metadata about the document. It contains the number of pages that are detected in the document.
blocks(Option<Vec<Block>>):
An array of Block objects that contain the text that’s detected in the document.
detect_document_text_model_version(Option<String>):
- On failure, responds with
SdkError<DetectDocumentTextError>
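A sketch of DetectDocumentText with image bytes from a local file. The file name is hypothetical, and Blob is assumed to be re-exported at aws_sdk_textract::types::Blob in the SDK generation this page documents; the SDK handles base64 encoding of the bytes for you.

```rust
use aws_sdk_textract::model::{BlockType, Document};
use aws_sdk_textract::types::Blob;

// Read a local image; raw bytes are passed directly, the SDK encodes them.
let bytes = std::fs::read("page.png")?;

let rsp = client
    .detect_document_text()
    .document(Document::builder().bytes(Blob::new(bytes)).build())
    .send()
    .await?;

// Print only LINE blocks to reconstruct the page text.
for block in rsp.blocks().unwrap_or_default() {
    if block.block_type() == Some(&BlockType::Line) {
        println!("{}", block.text().unwrap_or_default());
    }
}
```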
Constructs a fluent builder for the GetDocumentAnalysis
operation.
- The fluent builder is configurable:
job_id(impl Into<String>) / set_job_id(Option<String>):
A unique identifier for the text-detection job. The JobId is returned from StartDocumentAnalysis. A JobId value is only valid for 7 days.
max_results(i32) / set_max_results(Option<i32>):
The maximum number of results to return per paginated call. The largest value that you can specify is 1,000. If you specify a value greater than 1,000, a maximum of 1,000 results is returned. The default value is 1,000.
next_token(impl Into<String>) / set_next_token(Option<String>):
If the previous response was incomplete (because there are more blocks to retrieve), Amazon Textract returns a pagination token in the response. You can use this pagination token to retrieve the next set of blocks.
- On success, responds with GetDocumentAnalysisOutput with field(s):
document_metadata(Option<DocumentMetadata>):
Information about a document that Amazon Textract processed. DocumentMetadata is returned in every page of paginated responses from an Amazon Textract operation.
job_status(Option<JobStatus>):
The current status of the text detection job.
next_token(Option<String>):
If the response is truncated, Amazon Textract returns this token. You can use this token in the subsequent request to retrieve the next set of text detection results.
blocks(Option<Vec<Block>>):
The results of the text-analysis operation.
warnings(Option<Vec<Warning>>):
A list of warnings that occurred during the document-analysis operation.
status_message(Option<String>):
Returned if the detection job could not be completed. Contains an explanation for what error occurred.
analyze_document_model_version(Option<String>):
- On failure, responds with
SdkError<GetDocumentAnalysisError>
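The NextToken/MaxResults pair above supports a standard pagination loop. A minimal sketch, assuming `job_id` holds a String from an earlier StartDocumentAnalysis call:

```rust
// Collect all blocks from a completed analysis job.
let mut blocks = Vec::new();
let mut next_token: Option<String> = None;
loop {
    let rsp = client
        .get_document_analysis()
        .job_id(job_id.clone())
        .max_results(1000)
        .set_next_token(next_token.take()) // None on the first call
        .send()
        .await?;
    blocks.extend_from_slice(rsp.blocks().unwrap_or_default());
    // A present NextToken means more pages remain.
    next_token = rsp.next_token().map(String::from);
    if next_token.is_none() {
        break;
    }
}
```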
Constructs a fluent builder for the GetDocumentTextDetection
operation.
- The fluent builder is configurable:
job_id(impl Into<String>) / set_job_id(Option<String>):
A unique identifier for the text detection job. The JobId is returned from StartDocumentTextDetection. A JobId value is only valid for 7 days.
max_results(i32) / set_max_results(Option<i32>):
The maximum number of results to return per paginated call. The largest value you can specify is 1,000. If you specify a value greater than 1,000, a maximum of 1,000 results is returned. The default value is 1,000.
next_token(impl Into<String>) / set_next_token(Option<String>):
If the previous response was incomplete (because there are more blocks to retrieve), Amazon Textract returns a pagination token in the response. You can use this pagination token to retrieve the next set of blocks.
- On success, responds with GetDocumentTextDetectionOutput with field(s):
document_metadata(Option<DocumentMetadata>):
Information about a document that Amazon Textract processed. DocumentMetadata is returned in every page of paginated responses from an Amazon Textract operation.
job_status(Option<JobStatus>):
The current status of the text detection job.
next_token(Option<String>):
If the response is truncated, Amazon Textract returns this token. You can use this token in the subsequent request to retrieve the next set of text-detection results.
blocks(Option<Vec<Block>>):
The results of the text-detection operation.
warnings(Option<Vec<Warning>>):
A list of warnings that occurred during the text-detection operation for the document.
status_message(Option<String>):
Returned if the detection job could not be completed. Contains an explanation for what error occurred.
detect_document_text_model_version(Option<String>):
- On failure, responds with
SdkError<GetDocumentTextDetectionError>
Constructs a fluent builder for the GetExpenseAnalysis
operation.
- The fluent builder is configurable:
job_id(impl Into<String>) / set_job_id(Option<String>):
A unique identifier for the text detection job. The JobId is returned from StartExpenseAnalysis. A JobId value is only valid for 7 days.
max_results(i32) / set_max_results(Option<i32>):
The maximum number of results to return per paginated call. The largest value you can specify is 20. If you specify a value greater than 20, a maximum of 20 results is returned. The default value is 20.
next_token(impl Into<String>) / set_next_token(Option<String>):
If the previous response was incomplete (because there are more blocks to retrieve), Amazon Textract returns a pagination token in the response. You can use this pagination token to retrieve the next set of blocks.
- On success, responds with GetExpenseAnalysisOutput with field(s):
document_metadata(Option<DocumentMetadata>):
Information about a document that Amazon Textract processed. DocumentMetadata is returned in every page of paginated responses from an Amazon Textract operation.
job_status(Option<JobStatus>):
The current status of the text detection job.
next_token(Option<String>):
If the response is truncated, Amazon Textract returns this token. You can use this token in the subsequent request to retrieve the next set of text-detection results.
expense_documents(Option<Vec<ExpenseDocument>>):
The expenses detected by Amazon Textract.
warnings(Option<Vec<Warning>>):
A list of warnings that occurred during the text-detection operation for the document.
status_message(Option<String>):
Returned if the detection job could not be completed. Contains an explanation for what error occurred.
analyze_expense_model_version(Option<String>):
The current model version of AnalyzeExpense.
- On failure, responds with
SdkError<GetExpenseAnalysisError>
Constructs a fluent builder for the StartDocumentAnalysis
operation.
- The fluent builder is configurable:
document_location(DocumentLocation) / set_document_location(Option<DocumentLocation>):
The location of the document to be processed.
feature_types(Vec<FeatureType>) / set_feature_types(Option<Vec<FeatureType>>):
A list of the types of analysis to perform. Add TABLES to the list to return information about the tables that are detected in the input document. Add FORMS to return detected form data. To perform both types of analysis, add TABLES and FORMS to FeatureTypes. All lines and words detected in the document are included in the response (including text that isn’t related to the value of FeatureTypes).
client_request_token(impl Into<String>) / set_client_request_token(Option<String>):
The idempotent token that you use to identify the start request. If you use the same token with multiple StartDocumentAnalysis requests, the same JobId is returned. Use ClientRequestToken to prevent the same job from being accidentally started more than once. For more information, see Calling Amazon Textract Asynchronous Operations.
job_tag(impl Into<String>) / set_job_tag(Option<String>):
An identifier that you specify that’s included in the completion notification published to the Amazon SNS topic. For example, you can use JobTag to identify the type of document that the completion notification corresponds to (such as a tax form or a receipt).
notification_channel(NotificationChannel) / set_notification_channel(Option<NotificationChannel>):
The Amazon SNS topic ARN that you want Amazon Textract to publish the completion status of the operation to.
output_config(OutputConfig) / set_output_config(Option<OutputConfig>):
Sets whether the output will go to a customer-defined bucket. By default, Amazon Textract saves the results internally to be accessed by the GetDocumentAnalysis operation.
kms_key_id(impl Into<String>) / set_kms_key_id(Option<String>):
The KMS key used to encrypt the inference results. This can be in either Key ID or Key Alias format. When a KMS key is provided, the KMS key will be used for server-side encryption of the objects in the customer bucket. When this parameter is not enabled, the result will be encrypted server side, using SSE-S3.
- On success, responds with StartDocumentAnalysisOutput with field(s):
job_id(Option<String>):
The identifier for the document text detection job. Use JobId to identify the job in a subsequent call to GetDocumentAnalysis. A JobId value is only valid for 7 days.
- On failure, responds with
SdkError<StartDocumentAnalysisError>
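The asynchronous flow can be sketched as: start the job, then poll GetDocumentAnalysis with the returned JobId. The bucket, key, and request token below are hypothetical, and the sleep assumes a tokio runtime; in production, the SNS notification channel avoids polling entirely.

```rust
use aws_sdk_textract::model::{DocumentLocation, FeatureType, JobStatus, S3Object};
use std::time::Duration;

// Hypothetical S3 location; multipage PDFs must be supplied via S3.
let start = client
    .start_document_analysis()
    .document_location(
        DocumentLocation::builder()
            .s3_object(
                S3Object::builder()
                    .bucket("my-bucket")
                    .name("report.pdf")
                    .build(),
            )
            .build(),
    )
    .feature_types(FeatureType::Tables)
    .client_request_token("my-request-token-1") // guards against duplicate jobs
    .send()
    .await?;

let job_id = start.job_id().unwrap_or_default().to_string();

// Poll until the job leaves the IN_PROGRESS state.
loop {
    let status = client
        .get_document_analysis()
        .job_id(job_id.clone())
        .send()
        .await?;
    match status.job_status() {
        Some(JobStatus::InProgress) => tokio::time::sleep(Duration::from_secs(5)).await,
        _ => break, // SUCCEEDED, FAILED, or PARTIAL_SUCCESS
    }
}
```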
Constructs a fluent builder for the StartDocumentTextDetection
operation.
- The fluent builder is configurable:
document_location(DocumentLocation) / set_document_location(Option<DocumentLocation>):
The location of the document to be processed.
client_request_token(impl Into<String>) / set_client_request_token(Option<String>):
The idempotent token that’s used to identify the start request. If you use the same token with multiple StartDocumentTextDetection requests, the same JobId is returned. Use ClientRequestToken to prevent the same job from being accidentally started more than once. For more information, see Calling Amazon Textract Asynchronous Operations.
job_tag(impl Into<String>) / set_job_tag(Option<String>):
An identifier that you specify that’s included in the completion notification published to the Amazon SNS topic. For example, you can use JobTag to identify the type of document that the completion notification corresponds to (such as a tax form or a receipt).
notification_channel(NotificationChannel) / set_notification_channel(Option<NotificationChannel>):
The Amazon SNS topic ARN that you want Amazon Textract to publish the completion status of the operation to.
output_config(OutputConfig) / set_output_config(Option<OutputConfig>):
Sets whether the output will go to a customer-defined bucket. By default, Amazon Textract saves the results internally to be accessed with the GetDocumentTextDetection operation.
kms_key_id(impl Into<String>) / set_kms_key_id(Option<String>):
The KMS key used to encrypt the inference results. This can be in either Key ID or Key Alias format. When a KMS key is provided, the KMS key will be used for server-side encryption of the objects in the customer bucket. When this parameter is not enabled, the result will be encrypted server side, using SSE-S3.
- On success, responds with StartDocumentTextDetectionOutput with field(s):
job_id(Option<String>):
The identifier of the text detection job for the document. Use JobId to identify the job in a subsequent call to GetDocumentTextDetection. A JobId value is only valid for 7 days.
- On failure, responds with
SdkError<StartDocumentTextDetectionError>
Constructs a fluent builder for the StartExpenseAnalysis
operation.
- The fluent builder is configurable:
document_location(DocumentLocation) / set_document_location(Option<DocumentLocation>):
The location of the document to be processed.
client_request_token(impl Into<String>) / set_client_request_token(Option<String>):
The idempotent token that’s used to identify the start request. If you use the same token with multiple StartExpenseAnalysis requests, the same JobId is returned. Use ClientRequestToken to prevent the same job from being accidentally started more than once. For more information, see Calling Amazon Textract Asynchronous Operations.
job_tag(impl Into<String>) / set_job_tag(Option<String>):
An identifier you specify that’s included in the completion notification published to the Amazon SNS topic. For example, you can use JobTag to identify the type of document that the completion notification corresponds to (such as a tax form or a receipt).
notification_channel(NotificationChannel) / set_notification_channel(Option<NotificationChannel>):
The Amazon SNS topic ARN that you want Amazon Textract to publish the completion status of the operation to.
output_config(OutputConfig) / set_output_config(Option<OutputConfig>):
Sets whether the output will go to a customer-defined bucket. By default, Amazon Textract saves the results internally to be accessed by the GetExpenseAnalysis operation.
kms_key_id(impl Into<String>) / set_kms_key_id(Option<String>):
The KMS key used to encrypt the inference results. This can be in either Key ID or Key Alias format. When a KMS key is provided, the KMS key will be used for server-side encryption of the objects in the customer bucket. When this parameter is not enabled, the result will be encrypted server side, using SSE-S3.
- On success, responds with StartExpenseAnalysisOutput with field(s):
job_id(Option<String>):
A unique identifier for the text detection job. The JobId is returned from StartExpenseAnalysis. A JobId value is only valid for 7 days.
- On failure, responds with
SdkError<StartExpenseAnalysisError>
Creates a client with the given service config and connector override.
Trait Implementations
Auto Trait Implementations
impl<C = DynConnector, M = DefaultMiddleware, R = Standard> !RefUnwindSafe for Client<C, M, R>
impl<C = DynConnector, M = DefaultMiddleware, R = Standard> !UnwindSafe for Client<C, M, R>
Blanket Implementations
Mutably borrows from an owned value.
Attaches the provided Subscriber to this type, returning a WithDispatch wrapper.
Attaches the current default Subscriber to this type, returning a WithDispatch wrapper.