aws_sdk_bedrock/client/create_automated_reasoning_policy_test_case.rs

// Code generated by software.amazon.smithy.rust.codegen.smithy-rs. DO NOT EDIT.
impl super::Client {
    /// Constructs a fluent builder for the [`CreateAutomatedReasoningPolicyTestCase`](crate::operation::create_automated_reasoning_policy_test_case::builders::CreateAutomatedReasoningPolicyTestCaseFluentBuilder) operation.
    ///
    /// - The fluent builder is configurable:
    ///   - [`policy_arn(impl Into<String>)`](crate::operation::create_automated_reasoning_policy_test_case::builders::CreateAutomatedReasoningPolicyTestCaseFluentBuilder::policy_arn) / [`set_policy_arn(Option<String>)`](crate::operation::create_automated_reasoning_policy_test_case::builders::CreateAutomatedReasoningPolicyTestCaseFluentBuilder::set_policy_arn):<br>required: **true**<br><p>The Amazon Resource Name (ARN) of the Automated Reasoning policy for which to create the test.</p><br>
    ///   - [`guard_content(impl Into<String>)`](crate::operation::create_automated_reasoning_policy_test_case::builders::CreateAutomatedReasoningPolicyTestCaseFluentBuilder::guard_content) / [`set_guard_content(Option<String>)`](crate::operation::create_automated_reasoning_policy_test_case::builders::CreateAutomatedReasoningPolicyTestCaseFluentBuilder::set_guard_content):<br>required: **true**<br><p>The output content that's validated by the Automated Reasoning policy. This represents the foundation model response that will be checked for accuracy.</p><br>
    ///   - [`query_content(impl Into<String>)`](crate::operation::create_automated_reasoning_policy_test_case::builders::CreateAutomatedReasoningPolicyTestCaseFluentBuilder::query_content) / [`set_query_content(Option<String>)`](crate::operation::create_automated_reasoning_policy_test_case::builders::CreateAutomatedReasoningPolicyTestCaseFluentBuilder::set_query_content):<br>required: **false**<br><p>The input query or prompt that generated the content. This provides context for the validation.</p><br>
    ///   - [`expected_aggregated_findings_result(AutomatedReasoningCheckResult)`](crate::operation::create_automated_reasoning_policy_test_case::builders::CreateAutomatedReasoningPolicyTestCaseFluentBuilder::expected_aggregated_findings_result) / [`set_expected_aggregated_findings_result(Option<AutomatedReasoningCheckResult>)`](crate::operation::create_automated_reasoning_policy_test_case::builders::CreateAutomatedReasoningPolicyTestCaseFluentBuilder::set_expected_aggregated_findings_result):<br>required: **true**<br><p>The expected result of the Automated Reasoning check. Valid values include <code>VALID</code>, <code>INVALID</code>, <code>SATISFIABLE</code>, <code>IMPOSSIBLE</code>, <code>TRANSLATION_AMBIGUOUS</code>, <code>TOO_COMPLEX</code>, and <code>NO_TRANSLATIONS</code>.</p> <ul>  <li>   <p><code>VALID</code> - The claims are true. The claims are implied by the premises and the Automated Reasoning policy. Given the Automated Reasoning policy and premises, it is not possible for these claims to be false. In other words, there are no alternative answers that are true that contradict the claims.</p></li>  <li>   <p><code>INVALID</code> - The claims are false. The claims are not implied by the premises and the Automated Reasoning policy. Furthermore, there exist different claims that are consistent with the premises and the Automated Reasoning policy.</p></li>  <li>   <p><code>SATISFIABLE</code> - The claims can be true or false. It depends on what assumptions are made for the claims to be implied by the premises and the Automated Reasoning policy rules. In this situation, different assumptions can make the input claims false and alternative claims true.</p></li>  <li>   <p><code>IMPOSSIBLE</code> - Automated Reasoning can’t make a statement about the claims. This can happen if the premises are logically incorrect, or if there is a conflict within the Automated Reasoning policy itself.</p></li>  <li>   <p><code>TRANSLATION_AMBIGUOUS</code> - An ambiguity was detected in the translation, so it would be unsound to continue with validity checking. Additional context or follow-up questions might be needed to get the translation to succeed.</p></li>  <li>   <p><code>TOO_COMPLEX</code> - The input contains too much information for Automated Reasoning to process within its latency limits.</p></li>  <li>   <p><code>NO_TRANSLATIONS</code> - Indicates that some or all of the input prompt wasn't translated into logic. This can happen if the input isn't relevant to the Automated Reasoning policy, or if the policy doesn't have variables to model relevant input. If Automated Reasoning can't translate anything, you get a single <code>NO_TRANSLATIONS</code> finding. You might also see a <code>NO_TRANSLATIONS</code> finding (along with other findings) if some part of the validation isn't translated.</p></li> </ul><br>
    ///   - [`client_request_token(impl Into<String>)`](crate::operation::create_automated_reasoning_policy_test_case::builders::CreateAutomatedReasoningPolicyTestCaseFluentBuilder::client_request_token) / [`set_client_request_token(Option<String>)`](crate::operation::create_automated_reasoning_policy_test_case::builders::CreateAutomatedReasoningPolicyTestCaseFluentBuilder::set_client_request_token):<br>required: **false**<br><p>A unique, case-sensitive identifier to ensure that the operation completes no more than one time. If this token matches a previous request, Amazon Bedrock ignores the request, but does not return an error.</p><br>
    ///   - [`confidence_threshold(f64)`](crate::operation::create_automated_reasoning_policy_test_case::builders::CreateAutomatedReasoningPolicyTestCaseFluentBuilder::confidence_threshold) / [`set_confidence_threshold(Option<f64>)`](crate::operation::create_automated_reasoning_policy_test_case::builders::CreateAutomatedReasoningPolicyTestCaseFluentBuilder::set_confidence_threshold):<br>required: **false**<br><p>The minimum confidence level for logic validation. Content that meets the threshold is considered a high-confidence finding that can be validated.</p><br>
    /// - On success, responds with [`CreateAutomatedReasoningPolicyTestCaseOutput`](crate::operation::create_automated_reasoning_policy_test_case::CreateAutomatedReasoningPolicyTestCaseOutput) with field(s):
    ///   - [`policy_arn(String)`](crate::operation::create_automated_reasoning_policy_test_case::CreateAutomatedReasoningPolicyTestCaseOutput::policy_arn): <p>The Amazon Resource Name (ARN) of the policy for which the test was created.</p>
    ///   - [`test_case_id(String)`](crate::operation::create_automated_reasoning_policy_test_case::CreateAutomatedReasoningPolicyTestCaseOutput::test_case_id): <p>The unique identifier of the created test.</p>
    /// - On failure, responds with [`SdkError<CreateAutomatedReasoningPolicyTestCaseError>`](crate::operation::create_automated_reasoning_policy_test_case::CreateAutomatedReasoningPolicyTestCaseError)
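    ///
    /// # Example
    ///
    /// A minimal usage sketch of the fluent builder, assuming an already configured
    /// `aws_sdk_bedrock::Client`. The ARN, content strings, and the
    /// `AutomatedReasoningCheckResult::Valid` variant below are illustrative placeholders,
    /// not values confirmed by this crate's documentation.
    ///
    /// ```no_run
    /// # async fn example(client: &aws_sdk_bedrock::Client) -> Result<(), aws_sdk_bedrock::Error> {
    /// use aws_sdk_bedrock::types::AutomatedReasoningCheckResult;
    ///
    /// let output = client
    ///     .create_automated_reasoning_policy_test_case()
    ///     // Hypothetical policy ARN; replace with a real Automated Reasoning policy ARN.
    ///     .policy_arn("arn:aws:bedrock:us-east-1:111122223333:automated-reasoning-policy/EXAMPLE")
    ///     // The foundation model response to validate against the policy (required).
    ///     .guard_content("The refund was approved because the purchase was made within 30 days.")
    ///     // The prompt that produced the response, providing context for validation (optional).
    ///     .query_content("Is this refund request eligible under our policy?")
    ///     // The expected aggregated finding for this test case (required).
    ///     .expected_aggregated_findings_result(AutomatedReasoningCheckResult::Valid)
    ///     .send()
    ///     .await?;
    ///
    /// println!("created test case: {:?}", output);
    /// # Ok(())
    /// # }
    /// ```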
    pub fn create_automated_reasoning_policy_test_case(
        &self,
    ) -> crate::operation::create_automated_reasoning_policy_test_case::builders::CreateAutomatedReasoningPolicyTestCaseFluentBuilder {
        crate::operation::create_automated_reasoning_policy_test_case::builders::CreateAutomatedReasoningPolicyTestCaseFluentBuilder::new(
            self.handle.clone(),
        )
    }
}