pub struct CreateImageEditRequest {
pub image: ImageEditInput,
pub prompt: String,
pub mask: Option<ImageInput>,
pub background: Option<ImageBackground>,
pub model: Option<ImageModel>,
pub n: Option<u8>,
pub size: Option<ImageSize>,
pub response_format: Option<ImageResponseFormat>,
pub output_format: Option<ImageOutputFormat>,
pub output_compression: Option<u8>,
pub user: Option<String>,
pub input_fidelity: Option<InputFidelity>,
pub stream: Option<bool>,
pub partial_images: Option<u8>,
pub quality: Option<ImageQuality>,
}

Fields
image: ImageEditInput
The image(s) to edit. Must be a supported image file or an array of images.
For gpt-image-1, each image should be a png, webp, or jpg file less
than 50MB. You can provide up to 16 images.
For dall-e-2, you can only provide one image, and it should be a square
png file less than 4MB.
prompt: String
A text description of the desired image(s). The maximum length is 1000 characters
for dall-e-2, and 32000 characters for gpt-image-1.
mask: Option<ImageInput>
An additional image whose fully transparent areas (e.g. where alpha is zero) indicate where
the image should be edited. If multiple images are provided, the mask will be applied to the
first image. Must be a valid PNG file, less than 4MB, and have the same dimensions as image.
background: Option<ImageBackground>
Allows setting transparency for the background of the generated image(s).
This parameter is only supported for gpt-image-1. Must be one of
transparent, opaque or auto (default value). When auto is used, the
model will automatically determine the best background for the image.
If transparent, the output format needs to support transparency, so it
should be set to either png (default value) or webp.
model: Option<ImageModel>
The model to use for image generation. Only dall-e-2 and gpt-image-1 are supported.
Defaults to dall-e-2 unless a parameter specific to gpt-image-1 is used.
n: Option<u8>
The number of images to generate. Must be between 1 and 10.
size: Option<ImageSize>
The size of the generated images. Must be one of 1024x1024, 1536x1024 (landscape),
1024x1536 (portrait), or auto (default value) for gpt-image-1, and one of 256x256,
512x512, or 1024x1024 for dall-e-2.
response_format: Option<ImageResponseFormat>
The format in which the generated images are returned. Must be one of url or b64_json. URLs
are only valid for 60 minutes after the image has been generated. This parameter is only supported
for dall-e-2, as gpt-image-1 will always return base64-encoded images.
output_format: Option<ImageOutputFormat>
The format in which the generated images are returned. This parameter is
only supported for gpt-image-1. Must be one of png, jpeg, or webp.
The default value is png.
output_compression: Option<u8>
The compression level (0-100%) for the generated images. This parameter
is only supported for gpt-image-1 with the webp or jpeg output
formats, and defaults to 100.
user: Option<String>
A unique identifier representing your end-user, which can help OpenAI monitor and detect abuse.
input_fidelity: Option<InputFidelity>
Control how much effort the model will exert to match the style and features, especially facial
features, of input images. This parameter is only supported for gpt-image-1. Unsupported for
gpt-image-1-mini. Supports high and low. Defaults to low.
stream: Option<bool>
Edit the image in streaming mode. Defaults to false. See the
Image generation guide for more
information.
partial_images: Option<u8>
The number of partial images to generate. This parameter is used for streaming responses that
return partial images. Value must be between 0 and 3. When set to 0, the response will be a single
image sent in one streaming event. Note that the final image may be sent before the full number of
partial images are generated if the full image is generated more quickly.
quality: Option<ImageQuality>
The quality of the image that will be generated. high, medium and low are only supported for
gpt-image-1. dall-e-2 only supports standard quality. Defaults to auto.
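Taken together, the fields above map directly onto the multipart parameters of the image edit endpoint. The sketch below fills in a request for a gpt-image-1 edit. It is a minimal illustration: the enum variant names used here (ImageModel::GptImage1, ImageBackground::Transparent, ImageSize::S1024x1024, ImageOutputFormat::Png) are assumptions about this crate's definitions, so check the actual variants in your version, and the ImageEditInput value is taken as a parameter to avoid guessing at its constructors.

fn build_edit_request(image: ImageEditInput) -> CreateImageEditRequest {
    CreateImageEditRequest {
        image,                       // png/webp/jpg under 50MB; up to 16 images for gpt-image-1
        prompt: "Add a small potted plant on the windowsill".to_string(),
        mask: None,                  // optional PNG mask; applied to the first image
        background: Some(ImageBackground::Transparent),  // assumed variant name
        model: Some(ImageModel::GptImage1),               // assumed variant name
        n: Some(1),                  // between 1 and 10
        size: Some(ImageSize::S1024x1024),                // assumed variant name
        response_format: None,       // gpt-image-1 always returns base64
        output_format: Some(ImageOutputFormat::Png),      // assumed variant name; png keeps transparency
        output_compression: None,    // only applies to jpeg/webp output
        user: Some("end-user-1234".to_string()),
        input_fidelity: None,        // high or low; defaults to low
        stream: Some(false),
        partial_images: None,        // 0-3; only meaningful when streaming
        quality: None,               // defaults to auto
    }
}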
Trait Implementations
impl AsyncTryFrom<CreateImageEditRequest> for Form
    type Error = OpenAIError

impl Clone for CreateImageEditRequest
    fn clone(&self) -> CreateImageEditRequest
    fn clone_from(&mut self, source: &Self)
        Performs copy-assignment from source.
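The AsyncTryFrom<CreateImageEditRequest> for Form implementation is what serializes the request into the multipart form when a client submits it, so you normally never invoke it yourself. The sketch below submits the request through an async-openai style client; the method name images().create_edit(...) and the response field data are assumptions about the surrounding crate rather than something this page documents.

use async_openai::{error::OpenAIError, Client};

async fn submit_edit(request: CreateImageEditRequest) -> Result<(), OpenAIError> {
    let client = Client::new(); // reads OPENAI_API_KEY from the environment
    let response = client.images().create_edit(request).await?; // assumed method name
    // gpt-image-1 returns base64-encoded images; dall-e-2 with response_format = url
    // returns URLs that expire 60 minutes after generation.
    println!("received {} image(s)", response.data.len());
    Ok(())
}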